Dataset schema:
content: string (85 to 101k characters)
title: string (0 to 150 characters)
question: string (15 to 48k characters)
answers: sequence
answers_scores: sequence
non_answers: sequence
non_answers_scores: sequence
tags: sequence
name: string (35 to 137 characters)
Q: Converting value from Pyspark Row datetime.date to yyyy-mm-dd I am trying to fetch data from a table that returns a list of Row datetime.date objects. I would like to have them as a list of Varchar/String values. query = "select device_date from device where device is not null" res = spark.sql(query).collect() if len(res) != 0: return res[:20] The returned value seems to be of format [Row(device_date =datetime.date(2019, 9, 25)), Row(device_date =datetime.date(2019, 9, 17)), Row(device_date =datetime.date(2020, 1, 8))] I would like to have the following output returned instead: ['2019-09-25','2019-09-17','2020-01-08'] Please advise. A: Are you sure you want to collect your data and then have to process them using python ? With df = spark.sql(query), depending on the answer : YES (python solution) out = df.collect() list(map(lambda x: datetime.datetime.strftime(x.device_date, "%Y-%m-%d"), out)) ['2019-09-25', '2019-09-17', '2020-01-08'] # OR list(map(str, (x.device_date for x in out))) ['2019-09-25', '2019-09-17', '2020-01-08'] NO (Spark solution) from pyspark.sql import functions as F df.select(F.date_format("device_date", "yyyy-MM-dd").alias("device_date")).collect() [Row(device_date='2019-09-25'), Row(device_date='2019-09-17'), Row(device_date='2020-01-08')] The spark version can also be done directly in SQL : query = "select date_format(device_date, 'yyyy-MM-dd') as date_format from device" spark.sql(query).collect() [Row(date_format='2019-09-25'), Row(date_format='2019-09-17'), Row(date_format='2020-01-08')] A: I suggest you use the date_format function beforehand. Here is the documentation, but basically: >>> from pyspark.sql.functions import date_format >>> df.select(date_format('device_date', 'YYYY-mm-dd').alias('date')).collect() [Row(date='2015-04-08')] Also please be careful, it seems like your column name: "device_date " has a space at the end. That could be making your life harder.
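A compact, runnable sketch of the Spark-side approach from the answer above; the table and column names follow the question and are otherwise assumptions, so adjust them to your own schema.

```python
from pyspark.sql import SparkSession, functions as F

# Table and column names mirror the question; adjust them to your schema.
spark = SparkSession.builder.getOrCreate()

rows = (
    spark.sql("select device_date from device where device_date is not null")
         .select(F.date_format("device_date", "yyyy-MM-dd").alias("device_date"))
         .limit(20)
         .collect()
)

# Flatten the Row objects into plain strings: ['2019-09-25', '2019-09-17', ...]
dates = [r.device_date for r in rows]
print(dates)
```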
Converting value from Pyspark Row datetime.date to yyyy-mm-dd
I am trying to fetch data from a table that returns a list of Row datetime.date objects. I would like to have them as a list of Varchar/String values. query = "select device_date from device where device is not null" res = spark.sql(query).collect() if len(res) != 0: return res[:20] The returned value seems to be of format [Row(device_date =datetime.date(2019, 9, 25)), Row(device_date =datetime.date(2019, 9, 17)), Row(device_date =datetime.date(2020, 1, 8))] I would like to have the following output returned instead: ['2019-09-25','2019-09-17','2020-01-08'] Please advise.
[ "Are you sure you want to collect your data and then have to process them using python ?\nWith df = spark.sql(query), depending on the answer :\nYES (python solution)\nout = df.collect()\n\nlist(map(lambda x: datetime.datetime.strftime(x.device_date, \"%Y-%m-%d\"), out))\n\n['2019-09-25', '2019-09-17', '2020-01-08']\n\n# OR\n\nlist(map(str, (x.device_date for x in out)))\n['2019-09-25', '2019-09-17', '2020-01-08']\n\nNO (Spark solution)\nfrom pyspark.sql import functions as F\n\ndf.select(F.date_format(\"device_date\", \"yyyy-MM-dd\").alias(\"device_date\")).collect()\n \n[Row(device_date='2019-09-25'),\n Row(device_date='2019-09-17'),\n Row(device_date='2020-01-08')]\n\n\nThe spark version can also be done directly in SQL :\nquery = \"select date_format(device_date, 'yyyy-MM-dd') as date_format from device\"\n\nspark.sql(query).collect()\n\n[Row(date_format='2019-09-25'),\n Row(date_format='2019-09-17'),\n Row(date_format='2020-01-08')]\n\n", "I suggest you use the date_format function beforehand.\nHere is the documentation, but basically:\n>>> from pyspark.sql.functions import date_format\n>>> df.select(date_format('device_date', 'YYYY-mm-dd').alias('date')).collect()\n[Row(date='2015-04-08')]\n\nAlso please be careful, it seems like your column name: \"device_date \" has a space at the end. That could be making your life harder.\n" ]
[ 2, 1 ]
[]
[]
[ "apache_spark", "pyspark", "python" ]
stackoverflow_0074645031_apache_spark_pyspark_python.txt
Q: Building Tree Structure from a list of string paths I have a list of paths as in paths = ["x1/x2", "x1/x2/x3", "x1/x4", "x1/x5/x6", ...] where the actual length of the list if roughly 20,000. I want to construct a tree structure that can be printed. The tree structure would look something like this: x1 ├── x2 │ └── x3 ├── x4 └── x5 └── x6 I also want to have some data associated to each node in the Node Object that can be currently accessed through a dictionary where each node is a key e.g. d = {"x1": [[1,2], [3,4]], "x2": [[5,6], [7,8]], ...} Every tree node should inherit the data from its parent. Such that the data at the "x2" node would be [[1,2], [3,4], [5,6], [7,8]]. I have tried the module anytree but it requires that you define each node of the tree as a variable. Any ideas? Thanks in advance! A: If I understand your question correctly, one possible solution might be like this. The tree nodes store their parents in order to construct the messy ├─── and └─── before directory/file names. Output: x1 ├── x2 │ └── x3 ├── x4 └── x5 └── x6 Code: class TreeNode: def __init__(self, name, parent): self.parent = parent self.name = name self.children = [] def add_child(self, node): self.children.append(node) return node def print(self, is_root): pre_0 = " " pre_1 = "│ " pre_2 = "├── " pre_3 = "└── " tree = self prefix = pre_2 if tree.parent and id(tree) != id(tree.parent.children[-1]) else pre_3 while tree.parent and tree.parent.parent: if tree.parent.parent and id(tree.parent) != id(tree.parent.parent.children[-1]): prefix = pre_1 + prefix else: prefix = pre_0 + prefix tree = tree.parent if is_root: print(self.name) else: print(prefix + self.name) for child in self.children: child.print(False) def find_and_insert(parent, edges): # Terminate if there is no edge if not edges: return # Find a child with the name edges[0] in the current node match = [tree for tree in parent.children if tree.name == edges[0]] # If there is already a node with the name edges[0] in the children, set "pointer" tree to this node. If there is no such node, add a node in the current tree node then set "pointer" tree to it tree = match[0] if match else parent.add_child(TreeNode(edges[0], parent)) # Recursively process the following edges[1:] find_and_insert(tree, edges[1:]) paths = ["x1/x2", "x1/x2/x3", "x1/x4", "x1/x5/x6"] root = TreeNode("x1", None) for path in paths: find_and_insert(root, path.split("/")[1:]) root.print(True) A: bigtree is a Python tree implementation that integrates with Python lists, dictionaries, and pandas DataFrame. For this scenario, there are three parts to this, Define a new Node class that does the inheritance of data from parent nodes Construct tree using the path list and Node class we defined earlier (1 line of code!) Add in the data dictionary that maps the node name to data (1 line of code!) 
from bigtree import Node, list_to_tree, print_tree, add_dict_to_tree_by_name # Define new Node class class NodeInherit(Node): @property def data(self): if self.is_root: return self._data return self.parent.data + self._data # Construct tree using the path list paths = ["x1/x2", "x1/x2/x3", "x1/x4", "x1/x5/x6"] root = list_to_tree(paths, node_type=NodeInherit) # Add in the data dictionary d = {"x1": [[1,2], [3,4]], "x2": [[5,6], [7,8]], "x3": [[9]], "x4": [[5,6]], "x5": [[5]], "x6": [[6]]} d2 = {k: {"_data": v} for k, v in d.items()} # minor input format change root = add_dict_to_tree_by_name(root, d2) # Check tree structure with data print_tree(root, attr_list=["data"]) This results in output, x1 [data=[[1, 2], [3, 4]]] ├── x2 [data=[[1, 2], [3, 4], [5, 6], [7, 8]]] │ └── x3 [data=[[1, 2], [3, 4], [5, 6], [7, 8], [9]]] ├── x4 [data=[[1, 2], [3, 4], [5, 6]]] └── x5 [data=[[1, 2], [3, 4], [5]]] └── x6 [data=[[1, 2], [3, 4], [5], [6]]] You can also export the data out to dictionary or pandas DataFrame format besides printing it out to console. Source/Disclaimer: I'm the creator of bigtree ;)
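For readers who prefer not to pull in anytree or bigtree, here is a minimal library-free sketch of the same idea: nested dictionaries built from the paths, where each node prints the data accumulated from its ancestors. The data dictionary reuses the sample values shown in the bigtree answer.

```python
# Sample data taken from the question and the bigtree answer above.
paths = ["x1/x2", "x1/x2/x3", "x1/x4", "x1/x5/x6"]
d = {"x1": [[1, 2], [3, 4]], "x2": [[5, 6], [7, 8]],
     "x3": [[9]], "x4": [[5, 6]], "x5": [[5]], "x6": [[6]]}

# Build nested dicts from the paths.
tree = {}
for path in paths:
    node = tree
    for part in path.split("/"):
        node = node.setdefault(part, {})

def show(node, inherited=(), indent=0):
    # Print each node together with the data inherited from all of its ancestors.
    for name, children in node.items():
        data = list(inherited) + d.get(name, [])
        print("    " * indent + f"{name} {data}")
        show(children, data, indent + 1)

show(tree)
```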
Building Tree Structure from a list of string paths
I have a list of paths as in paths = ["x1/x2", "x1/x2/x3", "x1/x4", "x1/x5/x6", ...] where the actual length of the list if roughly 20,000. I want to construct a tree structure that can be printed. The tree structure would look something like this: x1 ├── x2 │ └── x3 ├── x4 └── x5 └── x6 I also want to have some data associated to each node in the Node Object that can be currently accessed through a dictionary where each node is a key e.g. d = {"x1": [[1,2], [3,4]], "x2": [[5,6], [7,8]], ...} Every tree node should inherit the data from its parent. Such that the data at the "x2" node would be [[1,2], [3,4], [5,6], [7,8]]. I have tried the module anytree but it requires that you define each node of the tree as a variable. Any ideas? Thanks in advance!
[ "If I understand your question correctly, one possible solution might be like this.\nThe tree nodes store their parents in order to construct the messy ├───\nand └─── before directory/file names.\nOutput:\nx1\n├── x2\n│ └── x3\n├── x4\n└── x5\n └── x6\n\nCode:\nclass TreeNode:\n def __init__(self, name, parent):\n self.parent = parent\n self.name = name\n self.children = []\n\n def add_child(self, node):\n self.children.append(node)\n return node\n\n def print(self, is_root):\n pre_0 = \" \"\n pre_1 = \"│ \"\n pre_2 = \"├── \"\n pre_3 = \"└── \"\n\n tree = self\n prefix = pre_2 if tree.parent and id(tree) != id(tree.parent.children[-1]) else pre_3\n\n while tree.parent and tree.parent.parent:\n if tree.parent.parent and id(tree.parent) != id(tree.parent.parent.children[-1]):\n prefix = pre_1 + prefix\n else:\n prefix = pre_0 + prefix\n\n tree = tree.parent\n\n if is_root:\n print(self.name)\n else:\n print(prefix + self.name)\n\n for child in self.children:\n child.print(False)\n\ndef find_and_insert(parent, edges):\n # Terminate if there is no edge\n if not edges:\n return\n \n # Find a child with the name edges[0] in the current node\n match = [tree for tree in parent.children if tree.name == edges[0]]\n \n # If there is already a node with the name edges[0] in the children, set \"pointer\" tree to this node. If there is no such node, add a node in the current tree node then set \"pointer\" tree to it\n tree = match[0] if match else parent.add_child(TreeNode(edges[0], parent))\n \n # Recursively process the following edges[1:]\n find_and_insert(tree, edges[1:])\n\npaths = [\"x1/x2\", \"x1/x2/x3\", \"x1/x4\", \"x1/x5/x6\"]\n\nroot = TreeNode(\"x1\", None)\n\nfor path in paths:\n find_and_insert(root, path.split(\"/\")[1:])\n\nroot.print(True)\n\n", "bigtree is a Python tree implementation that integrates with Python lists, dictionaries, and pandas DataFrame.\nFor this scenario, there are three parts to this,\n\nDefine a new Node class that does the inheritance of data from parent nodes\nConstruct tree using the path list and Node class we defined earlier (1 line of code!)\nAdd in the data dictionary that maps the node name to data (1 line of code!)\n\nfrom bigtree import Node, list_to_tree, print_tree, add_dict_to_tree_by_name\n\n# Define new Node class\nclass NodeInherit(Node):\n @property\n def data(self):\n if self.is_root:\n return self._data\n return self.parent.data + self._data\n\n# Construct tree using the path list\npaths = [\"x1/x2\", \"x1/x2/x3\", \"x1/x4\", \"x1/x5/x6\"]\nroot = list_to_tree(paths, node_type=NodeInherit)\n\n# Add in the data dictionary\nd = {\"x1\": [[1,2], [3,4]], \"x2\": [[5,6], [7,8]], \"x3\": [[9]], \"x4\": [[5,6]], \"x5\": [[5]], \"x6\": [[6]]}\nd2 = {k: {\"_data\": v} for k, v in d.items()} # minor input format change\nroot = add_dict_to_tree_by_name(root, d2)\n\n# Check tree structure with data\nprint_tree(root, attr_list=[\"data\"])\n\nThis results in output,\nx1 [data=[[1, 2], [3, 4]]]\n├── x2 [data=[[1, 2], [3, 4], [5, 6], [7, 8]]]\n│ └── x3 [data=[[1, 2], [3, 4], [5, 6], [7, 8], [9]]]\n├── x4 [data=[[1, 2], [3, 4], [5, 6]]]\n└── x5 [data=[[1, 2], [3, 4], [5]]]\n └── x6 [data=[[1, 2], [3, 4], [5], [6]]]\n\nYou can also export the data out to dictionary or pandas DataFrame format besides printing it out to console.\nSource/Disclaimer: I'm the creator of bigtree ;)\n" ]
[ 1, 0 ]
[]
[]
[ "python", "tree" ]
stackoverflow_0066994282_python_tree.txt
Q: Divide quantities in order to get one quantity per row - python I have a dataframe with quantities and prices. I would like to get an dataframe with same prices vs quantity but only one quantity per line. Dataframe: Name | qty | Price Apple | 3 | 3.50 Avocado | 2 | 1.50 Expected Output: Name | qty| Price Apple | 1 | 3.50 Apple | 1 | 3.50 Apple | 1 | 3.50 Avocado | 1 | 1.50 Avocado | 1 | 1.50 honestly don't know how to code this in a pythonic way. A: We can use df.index.repeat, then set the qty to 1 for all rows. df = df.loc[df.index.repeat(df['qty'])].reset_index(drop=True) df['qty'] = 1 Output: Name qty Price 0 Apple 1 3.5 1 Apple 1 3.5 2 Apple 1 3.5 3 Avocado 1 1.5 4 Avocado 1 1.5
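A self-contained sketch of the accepted answer, using a small frame built from the question's sample data.

```python
import pandas as pd

df = pd.DataFrame({"Name": ["Apple", "Avocado"],
                   "qty": [3, 2],
                   "Price": [3.50, 1.50]})

# Repeat each row qty times, then set every qty to 1.
out = df.loc[df.index.repeat(df["qty"])].reset_index(drop=True)
out["qty"] = 1
print(out)
```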
Divide quantities in order to get one quantity per row - python
I have a dataframe with quantities and prices. I would like to get an dataframe with same prices vs quantity but only one quantity per line. Dataframe: Name | qty | Price Apple | 3 | 3.50 Avocado | 2 | 1.50 Expected Output: Name | qty| Price Apple | 1 | 3.50 Apple | 1 | 3.50 Apple | 1 | 3.50 Avocado | 1 | 1.50 Avocado | 1 | 1.50 honestly don't know how to code this in a pythonic way.
[ "We can use df.index.repeat, then set the qty to 1 for all rows.\ndf = df.loc[df.index.repeat(df['qty'])].reset_index(drop=True)\ndf['qty'] = 1\n\nOutput:\n Name qty Price\n0 Apple 1 3.5\n1 Apple 1 3.5\n2 Apple 1 3.5\n3 Avocado 1 1.5\n4 Avocado 1 1.5\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "division", "python" ]
stackoverflow_0074645144_dataframe_division_python.txt
Q: How do I do CRUD operations on a MySQL database using python via a class implementation? I have a MySQL database running on local machine, and I am able to read and write using MySQLConnector library. I have standalone read and write methods which are working just fine, I just wanted to clean things up a bit and wanted to have all my CRUD operations be part of some class DatabaseOperations. I keep on getting a variety of errors however which I haven't found answers to, so I was hoping I could find further information here on what I need to do to make my class based CRUD operations work. Below is a method to write to some table, I have a similar db_read method that is exactly the same thing except with a different SQL command def db_write(): local_ip = '192.168.120.191' # LOCAL MACHINE SERVER RUNNING ON s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # SOCK_DGRAM refers to UDP s.connect((local_ip, 80)) client_ip = str(s.getsockname()[0]) print(client_ip) db_connection = mysql.connector.connect( host="localhost", user="localuser", password="localpass", database="Testing" ) db_cursor = db_connection.cursor() db_cursor.execute( f"INSERT INTO receiving (client_ip, client_message) VALUES( '{client_ip}', '{globalResultsList[0]}');" ) db_connection.commit() db_cursor.close() And here is an attempt at a class based implementation class DatabaseOperations: def __init__(self): self.local_ip = '192.168.120.191' # LOCAL MACHINE SERVER RUNNING ON self.s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # SOCK_DGRAM refers to UDP self.s.connect((self.local_ip, 80)) self.db_connection = mysql.connector.connect( host="localhost", user="localuser", password="localpass", database="Testing" ) def readFromDatabase(self): db_cursor = self.db_connection db_cursor.execute( f"select * from receiving;" ) db_result = db_cursor.fetchall() print(db_result) Further down below in that class I have the writeToDatabase method which just has different SQL command. The error I have been getting the most is: AttributeError: 'MySQLConnection' object has no attribute 'execute' Thanks! A: I do see you are defining db_cursor = self.db_connection inside the readFromDatabase() class method, so you are attempting to use a connection object as a cursor object (you are getting the error because of it when running db_cursor.execute(...)). Based on your code the right definition would be: # reference: https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlconnection-cursor.html db_cursor = self.db_connection.cursor() Eventually, you need to close the cursor, I suggest having a look at the mysql-connector-python documentation to learn how you can open/close a cursor, create different cursor types, and learn more about the API in general.
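A condensed sketch of the corrected class from the answer; the credentials and table name are placeholders, and it assumes a reachable MySQL server.

```python
import mysql.connector

class DatabaseOperations:
    def __init__(self):
        # Placeholder credentials; a running MySQL server is assumed.
        self.db_connection = mysql.connector.connect(
            host="localhost", user="localuser",
            password="localpass", database="Testing",
        )

    def read_from_database(self):
        # Create a cursor from the connection instead of using the connection itself.
        cursor = self.db_connection.cursor()
        cursor.execute("SELECT * FROM receiving;")
        rows = cursor.fetchall()
        cursor.close()   # release the cursor when done
        return rows
```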
How do I do CRUD operations on a MySQL database using python via a class implementation?
I have a MySQL database running on local machine, and I am able to read and write using MySQLConnector library. I have standalone read and write methods which are working just fine, I just wanted to clean things up a bit and wanted to have all my CRUD operations be part of some class DatabaseOperations. I keep on getting a variety of errors however which I haven't found answers to, so I was hoping I could find further information here on what I need to do to make my class based CRUD operations work. Below is a method to write to some table, I have a similar db_read method that is exactly the same thing except with a different SQL command def db_write(): local_ip = '192.168.120.191' # LOCAL MACHINE SERVER RUNNING ON s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # SOCK_DGRAM refers to UDP s.connect((local_ip, 80)) client_ip = str(s.getsockname()[0]) print(client_ip) db_connection = mysql.connector.connect( host="localhost", user="localuser", password="localpass", database="Testing" ) db_cursor = db_connection.cursor() db_cursor.execute( f"INSERT INTO receiving (client_ip, client_message) VALUES( '{client_ip}', '{globalResultsList[0]}');" ) db_connection.commit() db_cursor.close() And here is an attempt at a class based implementation class DatabaseOperations: def __init__(self): self.local_ip = '192.168.120.191' # LOCAL MACHINE SERVER RUNNING ON self.s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # SOCK_DGRAM refers to UDP self.s.connect((self.local_ip, 80)) self.db_connection = mysql.connector.connect( host="localhost", user="localuser", password="localpass", database="Testing" ) def readFromDatabase(self): db_cursor = self.db_connection db_cursor.execute( f"select * from receiving;" ) db_result = db_cursor.fetchall() print(db_result) Further down below in that class I have the writeToDatabase method which just has different SQL command. The error I have been getting the most is: AttributeError: 'MySQLConnection' object has no attribute 'execute' Thanks!
[ "I do see you are defining db_cursor = self.db_connection inside the readFromDatabase() class method, so you are attempting to use a connection object as a cursor object (you are getting the error because of it when running db_cursor.execute(...)).\nBased on your code the right definition would be:\n# reference: https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlconnection-cursor.html\ndb_cursor = self.db_connection.cursor()\n\nEventually, you need to close the cursor, I suggest having a look at the mysql-connector-python documentation to learn how you can open/close a cursor, create different cursor types, and learn more about the API in general.\n" ]
[ 2 ]
[]
[]
[ "attributeerror", "mysql", "mysql_connector_python", "python" ]
stackoverflow_0074643379_attributeerror_mysql_mysql_connector_python_python.txt
Q: Is there a way to stop detecting part of a word in a different word? Sorry if the title is a little confusing, I'll do my best to explain further here! I'm setting up a Discord bot and ran into an interesting issue. Our bot is called Ed and we always call his name when wanting something from him, however, I realised that since the word need has ed in it, we can accidentally call some of his functions. I was wondering how I coud simply make it clear to only consider 'ed' by itself and nothing more. @bot.listen() async def on_message(positive): if positive.author == bot.user: return if 'positivity' in positive.content.lower() and 'ed' in positive.content.lower(): await positive.channel.send(random.choice(positivity)) Afterwards, I attempted to change the code like this to only consider whether there was a space before 'Ed' but this would cause issues if his name was the first thing you typed. @bot.listen() async def on_message(positive): if positive.author == bot.user: return if 'positivity' in positive.content.lower() and ' ed' in positive.content.lower(): await positive.channel.send(random.choice(positivity)) I'm new to programming and I'm sure the solution is super simple, any help would be appreciated! A: I suppose positive.content.lower() is a sentence or a paragraph. So why don't we split it for every space with: word_list = (positive.content.lower() + " ").split(" ") if "ed" in word_list and "positivity" in word_list: await positive.channel.send(random.choice(positivity)) The added space at the end there is to ensure we have at least one space, so that we don't get a list of letters instead of a list of words.
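One alternative to the split-based answer, assuming punctuation next to the name should also count as a boundary: a regex word boundary keeps "need" from matching while still catching "ed" on its own. The helper name is made up for illustration.

```python
import re

def mentions_ed(text: str) -> bool:
    # \b is a word boundary, so "need" no longer matches while "ed" alone does.
    return re.search(r"\bed\b", text.lower()) is not None

print(mentions_ed("Ed, send some positivity"))   # True
print(mentions_ed("I need some positivity"))     # False
```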
Is there a way to stop detecting part of a word in a different word?
Sorry if the title is a little confusing, I'll do my best to explain further here! I'm setting up a Discord bot and ran into an interesting issue. Our bot is called Ed and we always call his name when wanting something from him, however, I realised that since the word need has ed in it, we can accidentally call some of his functions. I was wondering how I coud simply make it clear to only consider 'ed' by itself and nothing more. @bot.listen() async def on_message(positive): if positive.author == bot.user: return if 'positivity' in positive.content.lower() and 'ed' in positive.content.lower(): await positive.channel.send(random.choice(positivity)) Afterwards, I attempted to change the code like this to only consider whether there was a space before 'Ed' but this would cause issues if his name was the first thing you typed. @bot.listen() async def on_message(positive): if positive.author == bot.user: return if 'positivity' in positive.content.lower() and ' ed' in positive.content.lower(): await positive.channel.send(random.choice(positivity)) I'm new to programming and I'm sure the solution is super simple, any help would be appreciated!
[ "I suppose positive.content.lower() is a sentence or a paragraph.\nSo why don't we split it for every space with:\nword_list = (positive.content.lower() + \" \").split(\" \")\nif \"ed\" in word_list and \"positivity\" in word_list:\n await positive.channel.send(random.choice(positivity))\n\nThe added space at the end there is to ensure we have at least one space, so that we don't get a list of letters instead of a list of words.\n" ]
[ 0 ]
[]
[]
[ "detection", "discord", "list", "python", "string" ]
stackoverflow_0074643554_detection_discord_list_python_string.txt
Q: How to stock value from tkinter in an efficient way? I have some problems with tkinter. I want to ask for some values and stock those values. I found some code about get() method but here are my questions: Here is code: ` from tkinter import* window= Tk() window.geometry("300x300") #1) def getEntry(): result= a.get() print(result) a = Spinbox(window, from_=0, to=10, command=getEntry) a.pack() #2) def getEntry2(): result=s.get() return result s = Spinbox(window, from_=0, to=10, command=getEntry2) print(s) s.pack() #3) def getEntry3(): result=Trees.get() print(result) Trees= Listbox(window, command=getEntry3) Trees.insert(1, "Tree1") Trees.insert(2, "Tree2") Trees.pack() bouton_close= Button(window, text="Fermer", command=window.quit) bouton_close.pack() window.mainloop() ` I dont understand why I have to create a function each time I want a result. It doesn't look efficient to have many function doing the same thing just with different names. like if my all code was using only function like getEntry() function? WAnd how could compact the code? hy is it not possible to do something like that? : def getEntry(name): result=name.get() print(result) a = Spinbox(window, from_=0, to=10, command=getEntry(a)) Is it possible to stock the value ? because I can print the value (getEntry) but when I try to return the value, there is no error but it doesn't print anything (getEntry2) Why with Listbox (getEntry3) "command" is not reconnized ? "unknown optin "-command" Tanks you for your help !! A: 1-There is a way to call same function for multiple widgets. That is by using lambda you pass widget itself to function. 2-As you call it from button call, there is no where to return the value. you can use global variables which is not adviced, or you can use classes. 3-Listboxes like treeviews, has some major differences from other widgets. So if you wanna use it like that you should bind events. from tkinter import * window= Tk() window.geometry("300x300") def getEntry(entry): result= entry.get() print(result) def getTree(et): tree = et.widget index = int(tree.curselection()[0]) value = tree.get(index) print(value) a = Spinbox(window, from_=0, to=10) a.configure(command=lambda:getEntry(a)) a.pack() b = Spinbox(window, from_=0, to=10) b.configure(command=lambda:getEntry(b)) b.pack() c = Spinbox(window, from_=0, to=10) c.configure(command=lambda:getEntry(c)) c.pack() Trees= Listbox(window) Trees.insert(1, "Tree1") Trees.insert(2, "Tree2") Trees.pack() Trees.bind('<<ListboxSelect>>',getTree) bouton_close= Button(window, text="Fermer", command=window.quit) bouton_close.pack() window.mainloop()
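A small variation on the answer above, assuming the goal is to keep the values rather than print them: a single callback writes every widget's current value into a shared dictionary. The widget names a and b are illustrative.

```python
import tkinter as tk

window = tk.Tk()
values = {}   # every widget's latest value ends up here

def store(name, widget):
    values[name] = widget.get()
    print(values)

a = tk.Spinbox(window, from_=0, to=10)
a.configure(command=lambda: store("a", a))
a.pack()

b = tk.Spinbox(window, from_=0, to=10)
b.configure(command=lambda: store("b", b))
b.pack()

window.mainloop()
```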
How to store a value from tkinter in an efficient way?
I have some problems with tkinter. I want to ask for some values and stock those values. I found some code about get() method but here are my questions: Here is code: ` from tkinter import* window= Tk() window.geometry("300x300") #1) def getEntry(): result= a.get() print(result) a = Spinbox(window, from_=0, to=10, command=getEntry) a.pack() #2) def getEntry2(): result=s.get() return result s = Spinbox(window, from_=0, to=10, command=getEntry2) print(s) s.pack() #3) def getEntry3(): result=Trees.get() print(result) Trees= Listbox(window, command=getEntry3) Trees.insert(1, "Tree1") Trees.insert(2, "Tree2") Trees.pack() bouton_close= Button(window, text="Fermer", command=window.quit) bouton_close.pack() window.mainloop() ` I dont understand why I have to create a function each time I want a result. It doesn't look efficient to have many function doing the same thing just with different names. like if my all code was using only function like getEntry() function? WAnd how could compact the code? hy is it not possible to do something like that? : def getEntry(name): result=name.get() print(result) a = Spinbox(window, from_=0, to=10, command=getEntry(a)) Is it possible to stock the value ? because I can print the value (getEntry) but when I try to return the value, there is no error but it doesn't print anything (getEntry2) Why with Listbox (getEntry3) "command" is not reconnized ? "unknown optin "-command" Tanks you for your help !!
[ "1-There is a way to call same function for multiple widgets. That is by using lambda you pass widget itself to function.\n2-As you call it from button call, there is no where to return the value. you can use global variables which is not adviced, or you can use classes.\n3-Listboxes like treeviews, has some major differences from other widgets. So if you wanna use it like that you should bind events.\nfrom tkinter import *\n\nwindow= Tk()\nwindow.geometry(\"300x300\")\n\ndef getEntry(entry):\n result= entry.get()\n print(result)\n\ndef getTree(et):\n tree = et.widget\n index = int(tree.curselection()[0])\n value = tree.get(index)\n print(value)\n\na = Spinbox(window, from_=0, to=10) \na.configure(command=lambda:getEntry(a))\na.pack()\n\nb = Spinbox(window, from_=0, to=10) \nb.configure(command=lambda:getEntry(b))\nb.pack()\n\nc = Spinbox(window, from_=0, to=10) \nc.configure(command=lambda:getEntry(c))\nc.pack()\n\n\nTrees= Listbox(window)\nTrees.insert(1, \"Tree1\")\nTrees.insert(2, \"Tree2\")\nTrees.pack()\nTrees.bind('<<ListboxSelect>>',getTree)\n\n\nbouton_close= Button(window, text=\"Fermer\", command=window.quit) \nbouton_close.pack()\n\nwindow.mainloop()\n\n" ]
[ 0 ]
[]
[]
[ "get", "python", "tkinter" ]
stackoverflow_0074642247_get_python_tkinter.txt
Q: is there a way to pass parameter of flask api to my python code hello am trying to create flask api where user input the vaue and I will use that value to process my python code here is the code ` app = Flask(__name__) api = Api(app) class Users(Resource): @app.route('/users/<string:name>/') def hello(name): namesource = request.args.get('name') return "Hello {}!".format(name) print(namesource) # here am trying to get the sitring/value in name source but i can't because it no variable defines api.add_resource(Users, name='users') # For Running our Api on Localhost if __name__ == '__main__': app.run(debug=True) ` am trying to expect that i get that value/String outside of the function of api flask A: You are trying to access the variable namesource outside of the function hello. You can't do that. You can access the variable name outside of the function hello because it is a parameter of the function. You can fix it by making a global variable. app = Flask(__name__) api = Api(app) namesource = None class Users(Resource): @app.route('/users/&lt;string:name&gt;/') def hello(name): global namesource namesource = request.args.get('name') return "Hello {}!".format(name) print(namesource) api.add_resource(Users, name='users') # For Running our Api on Localhost if __name__ == '__main__': app.run(debug=True)
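An alternative sketch that avoids the global variable, assuming the value only needs to be used while handling the request: pass the URL parameter straight into whatever Python function does the processing. process_name is a made-up stand-in for the real code.

```python
from flask import Flask

app = Flask(__name__)

def process_name(name: str) -> str:
    # Made-up stand-in for whatever the real processing code does.
    return name.upper()

@app.route("/users/<string:name>/")
def hello(name):
    # Use the value while handling the request instead of at import time.
    result = process_name(name)
    return f"Hello {result}!"

if __name__ == "__main__":
    app.run(debug=True)
```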
is there a way to pass parameter of flask api to my python code
hello am trying to create flask api where user input the vaue and I will use that value to process my python code here is the code ` app = Flask(__name__) api = Api(app) class Users(Resource): @app.route('/users/<string:name>/') def hello(name): namesource = request.args.get('name') return "Hello {}!".format(name) print(namesource) # here am trying to get the sitring/value in name source but i can't because it no variable defines api.add_resource(Users, name='users') # For Running our Api on Localhost if __name__ == '__main__': app.run(debug=True) ` am trying to expect that i get that value/String outside of the function of api flask
[ "You are trying to access the variable namesource outside of the function hello.\nYou can't do that.\nYou can access the variable name outside of the function hello because it is a parameter of the function. You can fix it by making a global variable.\napp = Flask(__name__)\napi = Api(app)\n\nnamesource = None\n\nclass Users(Resource):\n @app.route('/users/&lt;string:name&gt;/')\n def hello(name):\n global namesource\n namesource = request.args.get('name')\n return \"Hello {}!\".format(name)\n\nprint(namesource)\n\napi.add_resource(Users, name='users')\n# For Running our Api on Localhost\nif __name__ == '__main__':\n app.run(debug=True)\n\n" ]
[ 1 ]
[]
[]
[ "api", "flask", "python" ]
stackoverflow_0074645094_api_flask_python.txt
Q: overwriting dataframes in pandas I have a given dataframe new_df : ID summary text_len 1 xxx 45 2 aaa 34 I am performing some df manipulation by concatenating keywords from different df, like that: keywords = df["keyword"].to_list() for key in keywords: new_df[key] = new_df["summary"].str.lower().str.count(key) new_df from here I need two separate dataframes to perform few actions (to each of them add some columns, do some calculations etc). I need a dataframe with occurrences as per given piece of code and a binary dataframe. WHAT I DID: assign dataframe for occurrences: df_freq = new_df (because it is already calculated an done) I created another dataframe - binary one - on the top of new_df: #select only numeric columns to change them to binary numeric_cols = new_df.select_dtypes("number", exclude='float64').columns.tolist() new_df_binary = new_df new_df_binary['text_length'] = new_df_binary['text_length'].astype(int) new_df_binary[numeric_cols] = (new_df_binary[numeric_cols] > 0).astype(int) Everything works fine - I perform the math I need, but when I want to come back to df_freq - it is no longer dataframe with occurrences.. looks like it changed along with binary code I need separate tables and perform separate math on them. Do you know how I can avoid this hmm overwriting issue? A: You may use pandas' copy method with the deep argument set to True: df_freq = new_df.copy(deep=True) Setting deep=True (which is the default parameter) ensures that modifications to the data or indices of the copy do not impact the original dataframe.
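A tiny demonstration of the difference between plain assignment (an alias) and .copy(deep=True), using made-up data.

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})

alias = df                      # same object: changes show up in both names
snapshot = df.copy(deep=True)   # independent copy

alias["a"] = 0
print(df["a"].tolist())         # [0, 0, 0]  - changed through the alias
print(snapshot["a"].tolist())   # [1, 2, 3]  - unaffected
```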
overwriting dataframes in pandas
I have a given dataframe new_df : ID summary text_len 1 xxx 45 2 aaa 34 I am performing some df manipulation by concatenating keywords from different df, like that: keywords = df["keyword"].to_list() for key in keywords: new_df[key] = new_df["summary"].str.lower().str.count(key) new_df from here I need two separate dataframes to perform few actions (to each of them add some columns, do some calculations etc). I need a dataframe with occurrences as per given piece of code and a binary dataframe. WHAT I DID: assign dataframe for occurrences: df_freq = new_df (because it is already calculated an done) I created another dataframe - binary one - on the top of new_df: #select only numeric columns to change them to binary numeric_cols = new_df.select_dtypes("number", exclude='float64').columns.tolist() new_df_binary = new_df new_df_binary['text_length'] = new_df_binary['text_length'].astype(int) new_df_binary[numeric_cols] = (new_df_binary[numeric_cols] > 0).astype(int) Everything works fine - I perform the math I need, but when I want to come back to df_freq - it is no longer dataframe with occurrences.. looks like it changed along with binary code I need separate tables and perform separate math on them. Do you know how I can avoid this hmm overwriting issue?
[ "You may use pandas' copy method with the deep argument set to True:\ndf_freq = new_df.copy(deep=True)\n\nSetting deep=True (which is the default parameter) ensures that modifications to the data or indices of the copy do not impact the original dataframe.\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074645187_dataframe_pandas_python.txt
Q: Why is my test suite not recognising my test case? I am practicing creating my own testing framework on Pycharm in Python with Selenium. However, for some reason the suite is failing to initiate pytest and recognise my test case, I am not sure where I have gone wrong, I usually dont have this problem, and I have marked the test case with test_. import pytest from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.wait import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.keys import Keys import time from myPageObjects.sauce_login_page import sauce_login_page from myPageObjects.sauce_items_page import sauce_item_page from myPageObjects.sauce_details_page import sauce_details_page from myPageObjects.sauce_confirm_page import sauce_confirm_page from myPageObjects.sauce_success_page import sauce_success_page from ownUtilities.Sauce_Base_Class import sauce_base_class class sauce_test_one(sauce_base_class): def test_saucee2e(self): sauceLogin = sauce_login_page(self.driver) # Type in your username in the username box sauceLogin.getUsername().send_keys("standard_user") # Type in your password in your password box sauceLogin.getPassword().send_keys("secret_sauce") # Click the login button sauceLogin.getLogin().click() sauceItems = sauce_item_page(self.driver) # Click the Backpack to add to the cart sauceItems.getBackpack().click() # Click the Bikelight to add to the cart sauceItems.getBikelight().click() # Click on the T shirt to add to the cart sauceItems.getTshirt().click() # Click on the Jacket and add to the cart sauceItems.getFleecejacket().click() # Click on the shopping cart badge at the top of the page sauceItems.getShoppingcartbadge().click() # Click the checkout button sauceItems.getCheckoutbutton().click() sauceDetails = sauce_details_page(self.driver) # Make sure the next page has loaded self.wait4connection() # Type in the first name in the first name box sauceDetails.getFirstname().send_keys("Miserable") # Type in the second name in the second name box sauceDetails.getSecondname().send_keys("Teen") # Type in your postcode in the postcode sauceDetails.getPostcode().send_keys("se15 4qu") # Click continue SauceDetailsPage.getContinuebutton().click() sauceConfirm = sauce_confirm_page(self.driver) # Make sure the price of the items are correct correctprice = sauceConfirm.getCheckouttotal().text assert "$144.44" in correctprice print(correctprice) sauceSuccess = sauce_success_page(self.driver) # Click the finish button sauceSuccess.getFinishbutton().click() # Check that the successful order message has appeared passedtest = sauceSuccess.getSuccessmessage().text assert "THANK YOU FOR YOUR ORDER" in passedtest print(passedtest) Pytest result Picture Test Case Picture Any input or advice would really be appreciated I expected pytest to run the test case and hopefully pass, however, it is not even recognising the test case and telling me its an Empty suite A: By default, PyTest expects test classes to be named like SomethingTest and modules like test_something. You can fine-tune the test discovery process as described here: Changing standard (Python) test discovery A: There are various naming conventions to follow File name: Should start with test_ Class name: Should start with Test (your class name is sauce_test_one over here which would not be detected) Class module name: Should start with test_ (your class module name is test_saucee2e which is good)
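A minimal file illustrating the naming conventions both answers point to; the assertion is a placeholder so the example can run on its own.

```python
# test_sauce.py  (file name starts with test_ so pytest collects it)
class TestSauceOne:              # class name starts with Test and has no __init__
    def test_saucee2e(self):     # method name starts with test_
        assert "THANK YOU" in "THANK YOU FOR YOUR ORDER"
```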
Why is my test suite not recognising my test case?
I am practicing creating my own testing framework on Pycharm in Python with Selenium. However, for some reason the suite is failing to initiate pytest and recognise my test case, I am not sure where I have gone wrong, I usually dont have this problem, and I have marked the test case with test_. import pytest from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.wait import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.keys import Keys import time from myPageObjects.sauce_login_page import sauce_login_page from myPageObjects.sauce_items_page import sauce_item_page from myPageObjects.sauce_details_page import sauce_details_page from myPageObjects.sauce_confirm_page import sauce_confirm_page from myPageObjects.sauce_success_page import sauce_success_page from ownUtilities.Sauce_Base_Class import sauce_base_class class sauce_test_one(sauce_base_class): def test_saucee2e(self): sauceLogin = sauce_login_page(self.driver) # Type in your username in the username box sauceLogin.getUsername().send_keys("standard_user") # Type in your password in your password box sauceLogin.getPassword().send_keys("secret_sauce") # Click the login button sauceLogin.getLogin().click() sauceItems = sauce_item_page(self.driver) # Click the Backpack to add to the cart sauceItems.getBackpack().click() # Click the Bikelight to add to the cart sauceItems.getBikelight().click() # Click on the T shirt to add to the cart sauceItems.getTshirt().click() # Click on the Jacket and add to the cart sauceItems.getFleecejacket().click() # Click on the shopping cart badge at the top of the page sauceItems.getShoppingcartbadge().click() # Click the checkout button sauceItems.getCheckoutbutton().click() sauceDetails = sauce_details_page(self.driver) # Make sure the next page has loaded self.wait4connection() # Type in the first name in the first name box sauceDetails.getFirstname().send_keys("Miserable") # Type in the second name in the second name box sauceDetails.getSecondname().send_keys("Teen") # Type in your postcode in the postcode sauceDetails.getPostcode().send_keys("se15 4qu") # Click continue SauceDetailsPage.getContinuebutton().click() sauceConfirm = sauce_confirm_page(self.driver) # Make sure the price of the items are correct correctprice = sauceConfirm.getCheckouttotal().text assert "$144.44" in correctprice print(correctprice) sauceSuccess = sauce_success_page(self.driver) # Click the finish button sauceSuccess.getFinishbutton().click() # Check that the successful order message has appeared passedtest = sauceSuccess.getSuccessmessage().text assert "THANK YOU FOR YOUR ORDER" in passedtest print(passedtest) Pytest result Picture Test Case Picture Any input or advice would really be appreciated I expected pytest to run the test case and hopefully pass, however, it is not even recognising the test case and telling me its an Empty suite
[ "By default, PyTest expects test classes to be named like SomethingTest and modules like test_something. You can fine-tune the test discovery process as described here: Changing standard (Python) test discovery\n", "There are various naming conventions to follow\n\nFile name: Should start with test_\nClass name: Should start with Test (your class name is sauce_test_one over here which would not be detected)\nClass module name: Should start with test_ (your class module name is test_saucee2e which is good)\n\n" ]
[ 0, 0 ]
[]
[]
[ "pycharm", "pytest", "python", "selenium_webdriver" ]
stackoverflow_0074643456_pycharm_pytest_python_selenium_webdriver.txt
Q: How can I reset/revert a string back to its orginal form after mutating it Alright, so I want to reset a word after I have to change/mutated it without the reset method taking any parameters. Reset() should revert the text after the use() method is used. Is there any way of doing this? Class words def __init__(self, text): self.text = text def use(self): # sets the text as an empty string self.text = "" def reset(self): # revert empty string back to the original text Here is the unit test for reset() import unittest from word import word def test_reset(self): string = word("Sunshine") string.use() string.reset() self.assertEqual("Sunshine", string.text) if __name__ == "__main__": unittest.main() A: I may be way out of line but... why don't you just copy it? Class words def __init__(self, text): self.text = text self.original = text def use(self): # sets the text as an empty string self.text = "" def reset(self): # revert empty string back to the original text self.text = self.original
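A runnable version of the answer's idea, with the class renamed Word purely for illustration: keep a private copy of the text at construction time and restore it in reset().

```python
class Word:
    def __init__(self, text):
        self.text = text
        self._original = text   # private copy kept for reset()

    def use(self):
        self.text = ""

    def reset(self):
        self.text = self._original

w = Word("Sunshine")
w.use()
w.reset()
assert w.text == "Sunshine"
```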
How can I reset/revert a string back to its original form after mutating it
Alright, so I want to reset a word after I have to change/mutated it without the reset method taking any parameters. Reset() should revert the text after the use() method is used. Is there any way of doing this? Class words def __init__(self, text): self.text = text def use(self): # sets the text as an empty string self.text = "" def reset(self): # revert empty string back to the original text Here is the unit test for reset() import unittest from word import word def test_reset(self): string = word("Sunshine") string.use() string.reset() self.assertEqual("Sunshine", string.text) if __name__ == "__main__": unittest.main()
[ "I may be way out of line but... why don't you just copy it?\nClass words\n\ndef __init__(self, text):\n self.text = text\n self.original = text\n \n def use(self): # sets the text as an empty string\n self.text = \"\"\n\n def reset(self): # revert empty string back to the original text\n self.text = self.original\n\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074645238_python.txt
Q: FashionMNIST Dataset not transforming to Tensor Trying to calculate the mean and standard deviation of the dataset to normalise it afterwards. Current Code: train_dataset = datasets.FashionMNIST('data', train=True, download = True, transform=[transforms.ToTensor()]) test_dataset = datasets.FashionMNIST('data', train=False, download = True, transform=[transforms.ToTensor()]) def calc_torch_mean_std(tens): mean = torch.mean(tens, dim=1) std = torch.sqrt(torch.mean((tens - mean[:, None]) ** 2, dim=1)) return(std, mean) train_mean, train_std = calc_torch_mean_std(train_dataset) test_mean, test_std = calc_torch_mean_std(test_dataset) However, i'm getting the error: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /var/folders/16/crymx03s6pzfspm_3qfrlkx00000gn/T/ipykernel_72423/605045038.py in <module> 8 return(std, mean) 9 ---> 10 train_mean, train_std = calc_torch_mean_std(train_dataset) 11 12 test_mean, test_std = calc_torch_mean_std(test_dataset) /var/folders/16/crymx03s6pzfspm_3qfrlkx00000gn/T/ipykernel_72423/605045038.py in calc_torch_mean_std(tens) 4 5 def calc_torch_mean_std(tens): ----> 6 mean = torch.mean(tens, dim=1) 7 std = torch.sqrt(torch.mean((tens - mean[:, None]) ** 2, dim=1)) 8 return(std, mean) TypeError: mean() received an invalid combination of arguments - got (FashionMNIST, dim=int), but expected one of: * (Tensor input, *, torch.dtype dtype) * (Tensor input, tuple of ints dim, bool keepdim, *, torch.dtype dtype, Tensor out) * (Tensor input, tuple of names dim, bool keepdim, *, torch.dtype dtype, Tensor out) It should be getting a tensor as i transform the data as it comes in using transforms.ToTensor(). Checked import of transforms and it is okay. Checked parameters for the datasets.FashionMNIST() and transform is correctly used (should work both with and without [ ]). Expecting no error, and to get the mean and std for both datasets. A: datasets.FashionMNIST returns (image, target) where target is index of the target class. So if you want to take the mean you need to extract just the image. images = torch.vstack([pair[0] for pair in train_dataset]) images should now be of shape (N, H, W) and you can do whatever you want from there. Another solution as noted by OP is to use train_dataset.data to directly access the data.
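A short sketch of the .data route mentioned at the end of the answer: read the raw uint8 images, scale them to [0, 1], and take the mean and standard deviation directly. It downloads the dataset on first run.

```python
from torchvision import datasets, transforms

train_dataset = datasets.FashionMNIST("data", train=True, download=True,
                                      transform=transforms.ToTensor())

# .data holds the raw uint8 images with shape (60000, 28, 28).
images = train_dataset.data.float() / 255.0
print(images.mean().item(), images.std().item())
```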
FashionMNIST Dataset not transforming to Tensor
Trying to calculate the mean and standard deviation of the dataset to normalise it afterwards. Current Code: train_dataset = datasets.FashionMNIST('data', train=True, download = True, transform=[transforms.ToTensor()]) test_dataset = datasets.FashionMNIST('data', train=False, download = True, transform=[transforms.ToTensor()]) def calc_torch_mean_std(tens): mean = torch.mean(tens, dim=1) std = torch.sqrt(torch.mean((tens - mean[:, None]) ** 2, dim=1)) return(std, mean) train_mean, train_std = calc_torch_mean_std(train_dataset) test_mean, test_std = calc_torch_mean_std(test_dataset) However, i'm getting the error: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /var/folders/16/crymx03s6pzfspm_3qfrlkx00000gn/T/ipykernel_72423/605045038.py in <module> 8 return(std, mean) 9 ---> 10 train_mean, train_std = calc_torch_mean_std(train_dataset) 11 12 test_mean, test_std = calc_torch_mean_std(test_dataset) /var/folders/16/crymx03s6pzfspm_3qfrlkx00000gn/T/ipykernel_72423/605045038.py in calc_torch_mean_std(tens) 4 5 def calc_torch_mean_std(tens): ----> 6 mean = torch.mean(tens, dim=1) 7 std = torch.sqrt(torch.mean((tens - mean[:, None]) ** 2, dim=1)) 8 return(std, mean) TypeError: mean() received an invalid combination of arguments - got (FashionMNIST, dim=int), but expected one of: * (Tensor input, *, torch.dtype dtype) * (Tensor input, tuple of ints dim, bool keepdim, *, torch.dtype dtype, Tensor out) * (Tensor input, tuple of names dim, bool keepdim, *, torch.dtype dtype, Tensor out) It should be getting a tensor as i transform the data as it comes in using transforms.ToTensor(). Checked import of transforms and it is okay. Checked parameters for the datasets.FashionMNIST() and transform is correctly used (should work both with and without [ ]). Expecting no error, and to get the mean and std for both datasets.
[ "datasets.FashionMNIST returns (image, target) where target is index of the target class. So if you want to take the mean you need to extract just the image.\nimages = torch.vstack([pair[0] for pair in train_dataset])\n\nimages should now be of shape (N, H, W) and you can do whatever you want from there.\nAnother solution as noted by OP is to use train_dataset.data to directly access the data.\n" ]
[ 1 ]
[]
[]
[ "mnist", "python", "pytorch", "tensor", "torchvision" ]
stackoverflow_0074644993_mnist_python_pytorch_tensor_torchvision.txt
Q: Rearrange dataframe values Let's say I have the following dataframe: ID stop x y z 0 202 9 20 27 4 1 202 2 23 24 13 2 1756 5 5 41 73 3 1756 3 7 42 72 4 1756 4 3 50 73 5 2153 14 121 12 6 6 2153 3 122.5 2 6 7 3276 1 54 33 -12 8 5609 9 -2 44 -32 9 5609 2 8 44 -32 10 5609 5 102 -23 16 I would like to change the ID values in order to have the smallest being 1, the second smallest being 2 etc.. So for my example, I would get this: ID stop x y z 0 1 9 20 27 4 1 1 2 23 24 13 2 2 5 5 41 73 3 2 3 7 42 72 4 2 4 3 50 73 5 3 14 121 12 6 6 3 3 122.5 2 6 7 4 1 54 33 -12 8 5 9 -2 44 -32 9 5 2 8 44 -32 10 5 5 102 -23 16 Any idea please? Thanks in advance! A: You can use pd.Series.rank with method='dense' df['ID'] = df['ID'].rank(method='dense').astype(int)
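A self-contained sketch of the dense-rank answer on a slice of the question's data.

```python
import pandas as pd

df = pd.DataFrame({"ID": [202, 202, 1756, 1756, 2153, 3276, 5609],
                   "stop": [9, 2, 5, 3, 14, 1, 9]})

# dense rank: 202 -> 1, 1756 -> 2, 2153 -> 3, 3276 -> 4, 5609 -> 5
df["ID"] = df["ID"].rank(method="dense").astype(int)
print(df)
```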
Rearrange dataframe values
Let's say I have the following dataframe: ID stop x y z 0 202 9 20 27 4 1 202 2 23 24 13 2 1756 5 5 41 73 3 1756 3 7 42 72 4 1756 4 3 50 73 5 2153 14 121 12 6 6 2153 3 122.5 2 6 7 3276 1 54 33 -12 8 5609 9 -2 44 -32 9 5609 2 8 44 -32 10 5609 5 102 -23 16 I would like to change the ID values in order to have the smallest being 1, the second smallest being 2 etc.. So for my example, I would get this: ID stop x y z 0 1 9 20 27 4 1 1 2 23 24 13 2 2 5 5 41 73 3 2 3 7 42 72 4 2 4 3 50 73 5 3 14 121 12 6 6 3 3 122.5 2 6 7 4 1 54 33 -12 8 5 9 -2 44 -32 9 5 2 8 44 -32 10 5 5 102 -23 16 Any idea please? Thanks in advance!
[ "You can use pd.Series.rank with method='dense'\ndf['ID'] = df['ID'].rank(method='dense').astype(int)\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074645184_dataframe_pandas_python.txt
Q: Read entry in a string array into a dictionary object I have a string stored in: reviewers_list I want to iterate through it and create a new list of dictionaries called reviewer_dicts reviewers_dicts = {} for i in reviewers_list: reviewers_dicts.append(i) print(reviewers_dict) I have tried this so far A: Many things to talk about. First you are making a dictionary, here, not a list of dictionaries. Secondly, a dictionary is made up of key-value pairs, not single values. So for your example, if we wanted the keys to be i and the values to be empty strings we'd do like so: reviewers_dicts = {} for i in reviewers_list: reviewers_dicts[i]="" print(reviewers_dict) And if you actually wanted a list of dicts, then you would do this: list_of_dicts = [] for i in reviewers_list: list_of_dicts.append({i:""}) print(list_of_dicts )
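Both variants from the answer can also be written as comprehensions; the sample reviewer names here are made up.

```python
reviewers_list = ["alice", "bob", "carol"]   # made-up sample input

# One dictionary keyed by reviewer name with empty-string values...
reviewers_dict = {name: "" for name in reviewers_list}

# ...or a list of single-entry dictionaries, as in the second half of the answer.
list_of_dicts = [{name: ""} for name in reviewers_list]

print(reviewers_dict)
print(list_of_dicts)
```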
Read entry in a string array into a dictionary object
I have a string stored in: reviewers_list I want to iterate through it and create a new list of dictionaries called reviewer_dicts reviewers_dicts = {} for i in reviewers_list: reviewers_dicts.append(i) print(reviewers_dict) I have tried this so far
[ "Many things to talk about.\nFirst you are making a dictionary, here, not a list of dictionaries.\nSecondly, a dictionary is made up of key-value pairs, not single values.\nSo for your example, if we wanted the keys to be i and the values to be empty strings we'd do like so:\nreviewers_dicts = {}\nfor i in reviewers_list:\n reviewers_dicts[i]=\"\"\nprint(reviewers_dict)\n\nAnd if you actually wanted a list of dicts, then you would do this:\nlist_of_dicts = []\nfor i in reviewers_list:\n list_of_dicts.append({i:\"\"})\nprint(list_of_dicts )\n\n" ]
[ 0 ]
[]
[]
[ "dictionary", "iteration", "python" ]
stackoverflow_0074643501_dictionary_iteration_python.txt
Q: How to save model output/predictions I have trained a model. Now I want to export it's output which is type (str). How I can I save it's output results in a dataframe or any other form that I can use for future purpose. gf = df['findings'].astype(str) preprocess_text = gf.str.strip().replace("\n","") t5_prepared_Text = "summarize: "+preprocess_text print ("original text preprocessed: \n", preprocess_text) tokenized_text = tokenizer.encode(str(t5_prepared_Text, return_tensors="pt").to(device) # summmarize summary_ids = model.generate(tokenized_text, num_beams=4, no_repeat_ngram_size=2, min_length=30, max_length=100, early_stopping=True) output = tokenizer.decode(summary_ids[0], skip_special_tokens=True) print ("\n\nSummarized text: \n" Output of the model 0 summarize: There is XXXX increased opacity wit... 1 summarize: There is XXXX increased opacity wit... 2 summarize: There is XXXX increased opacity wit... 3 summarize: Interstitial markings are diffusely... 4 summarize: Interstitial markings are diffusely... 5 summarize: nan 6 summarize: nan Name: findings, dtype: object: So far I have tried like this prediction = pd.DataFrame([text]).to_csv('prediction.csv') But it saves all these rows in just one cell of the csv (first cell) and all in half form like below. 0 summarize: There is XXXX increased opacity wit... 1 summarize: There is XXXX increased opacity wit... 2 summarize: There is XXXX increased opacity wit... 3 summarize: Interstitial markings are diffusely... 4 summarize: Interstitial markings are diffusely... 5 summarize: nan 6 summarize: nan Name: findings, dtype: object: A: Just replace this prediction = pd.DataFrame([text]).to_csv('prediction.csv') With this prediction = pd.DataFrame([text]).to_csv('prediction.csv', sep=";")
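An alternative sketch, assuming you want one summary per CSV row rather than everything in one cell: collect the decoded strings into a list first and build the DataFrame from that. The sample strings stand in for the real model outputs.

```python
import pandas as pd

# Stand-ins for the strings returned by tokenizer.decode(...) for each input row.
summaries = ["summary of report 1", "summary of report 2"]

pd.DataFrame({"summary": summaries}).to_csv("prediction.csv", index=False)

# Reload later for further processing.
restored = pd.read_csv("prediction.csv")
print(restored)
```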
How to save model output/predictions
I have trained a model. Now I want to export it's output which is type (str). How I can I save it's output results in a dataframe or any other form that I can use for future purpose. gf = df['findings'].astype(str) preprocess_text = gf.str.strip().replace("\n","") t5_prepared_Text = "summarize: "+preprocess_text print ("original text preprocessed: \n", preprocess_text) tokenized_text = tokenizer.encode(str(t5_prepared_Text, return_tensors="pt").to(device) # summmarize summary_ids = model.generate(tokenized_text, num_beams=4, no_repeat_ngram_size=2, min_length=30, max_length=100, early_stopping=True) output = tokenizer.decode(summary_ids[0], skip_special_tokens=True) print ("\n\nSummarized text: \n" Output of the model 0 summarize: There is XXXX increased opacity wit... 1 summarize: There is XXXX increased opacity wit... 2 summarize: There is XXXX increased opacity wit... 3 summarize: Interstitial markings are diffusely... 4 summarize: Interstitial markings are diffusely... 5 summarize: nan 6 summarize: nan Name: findings, dtype: object: So far I have tried like this prediction = pd.DataFrame([text]).to_csv('prediction.csv') But it saves all these rows in just one cell of the csv (first cell) and all in half form like below. 0 summarize: There is XXXX increased opacity wit... 1 summarize: There is XXXX increased opacity wit... 2 summarize: There is XXXX increased opacity wit... 3 summarize: Interstitial markings are diffusely... 4 summarize: Interstitial markings are diffusely... 5 summarize: nan 6 summarize: nan Name: findings, dtype: object:
[ "Just replace this\nprediction = pd.DataFrame([text]).to_csv('prediction.csv')\n\nWith this\nprediction = pd.DataFrame([text]).to_csv('prediction.csv', sep=\";\")\n\n" ]
[ 3 ]
[]
[]
[ "keras", "python", "tensorflow" ]
stackoverflow_0074623438_keras_python_tensorflow.txt
Q: How to write str (byte) from Cloudmersive API response to PDF file without file corruption I'm currently working to convert several different file formats (.csv, .xlsx, .docx, .one) to .pdf output using the Cloudmersive API (https://api.cloudmersive.com/docs/convert.asp). Their documentation does not detail the type of encoding from the API_response during the conversion. I've tried several different approaches to write the api_response (output: str (byte)). It appears to successfully write to a .pdf file, but when I go to open it, Adobe says that the file is corrupted. I've tried detecting the type of encoding but chardet found no encoding. configuration = cloudmersive_convert_api_client.Configuration() configuration.api_key['Apikey'] = 'PUT YOUR KEY HERE' #individual user-id linked to the account # create an instance of the API class api_instance = cloudmersive_convert_api_client.ConvertDocumentApi(cloudmersive_convert_api_client.ApiClient(configuration)) # Convert Document to PDF if (os.stat(input_file).st_size != 0): #api does not work on empty files try: api_response = api_instance.convert_document_ppt_to_pdf(input_file) #ONLY DIFFERENCE os.remove(input_file) output_file=os.path.splitext(input_file)[0]+".pdf" with open(output_file, 'wb') as binary_file: binary_file.write(bytearray(str(api_response),encoding='utf-8')) print(input_file, 'was processed by ConvertDocumentPptToPdf.') except ApiException as e: print(input_file, 'was not processed.') I've also tried but this does not work either: with open(output_file, 'wb') as binary_file: binary_file.write(bytearray(api_response)) Here is some sample output from the API response (api_response): b'b\'%PDF-1.5\\n%\\xc3\\xa4\\xc3\\xbc\\xc3\\xb6\\xc3\\x9f\\n2 0 obj\\n<</Length 3 0 R/Filter/FlateDecode>>\\nstream\\nx\\x9c\\x85TM\\x8b\\xdc0\\x0c\\xbd\\xe7W\\xf8\\xbc\\x10\\xaf$ Also, when I've tried to detect the encoding, it says the following: detection = chardet.detect(test.encode()) print(detection) {'encoding': None, 'confidence': 0.0, 'language': None} A: The following code worked as suggested in the comments: import ast # create an instance of the API class api_instance = cloudmersive_convert_api_client.ConvertDocumentApi(cloudmersive_convert_api_client.ApiClient(configuration)) # Convert Document to PDF if (os.stat(input_file).st_size != 0): #api does not work on empty files try: api_response = api_instance.convert_document_ppt_to_pdf(input_file) #ONLY DIFFERENCE os.remove(input_file) output_file=os.path.splitext(input_file)[0]+".pdf" data = ast.literal_eval(api_response) with open(output_file, 'wb') as binary_file: binary_file.write(data) print(input_file, 'was processed by ConvertDocumentPptToPdf.') except ApiException as e: print(input_file, 'was not processed.')
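A stripped-down sketch of the fix from the answer, with a made-up response string standing in for the real API output (so the written file only shows the mechanics, it is not a valid PDF): ast.literal_eval turns the printed bytes representation back into real bytes before writing.

```python
import ast

# Made-up stand-in for the API response: bytes printed as a string.
api_response = "b'%PDF-1.5\\n%not-a-real-pdf'"

data = ast.literal_eval(api_response)   # back to real bytes
with open("output.pdf", "wb") as f:
    f.write(data)
```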
How to write str (byte) from Cloudmersive API response to PDF file without file corruption
I'm currently working to convert several different file formats (.csv, .xlsx, .docx, .one) to .pdf output using the Cloudmersive API (https://api.cloudmersive.com/docs/convert.asp). Their documentation does not detail the type of encoding from the API_response during the conversion. I've tried several different approaches to write the api_response (output: str (byte)). It appears to successfully write to a .pdf file, but when I go to open it, Adobe says that the file is corrupted. I've tried detecting the type of encoding but chardet found no encoding. configuration = cloudmersive_convert_api_client.Configuration() configuration.api_key['Apikey'] = 'PUT YOUR KEY HERE' #individual user-id linked to the account # create an instance of the API class api_instance = cloudmersive_convert_api_client.ConvertDocumentApi(cloudmersive_convert_api_client.ApiClient(configuration)) # Convert Document to PDF if (os.stat(input_file).st_size != 0): #api does not work on empty files try: api_response = api_instance.convert_document_ppt_to_pdf(input_file) #ONLY DIFFERENCE os.remove(input_file) output_file=os.path.splitext(input_file)[0]+".pdf" with open(output_file, 'wb') as binary_file: binary_file.write(bytearray(str(api_response),encoding='utf-8')) print(input_file, 'was processed by ConvertDocumentPptToPdf.') except ApiException as e: print(input_file, 'was not processed.') I've also tried but this does not work either: with open(output_file, 'wb') as binary_file: binary_file.write(bytearray(api_response)) Here is some sample output from the API response (api_response): b'b\'%PDF-1.5\\n%\\xc3\\xa4\\xc3\\xbc\\xc3\\xb6\\xc3\\x9f\\n2 0 obj\\n<</Length 3 0 R/Filter/FlateDecode>>\\nstream\\nx\\x9c\\x85TM\\x8b\\xdc0\\x0c\\xbd\\xe7W\\xf8\\xbc\\x10\\xaf$ Also, when I've tried to detect the encoding, it says the following: detection = chardet.detect(test.encode()) print(detection) {'encoding': None, 'confidence': 0.0, 'language': None}
[ "The following code worked as suggested in the comments:\nimport ast\n\n# create an instance of the API class\napi_instance = cloudmersive_convert_api_client.ConvertDocumentApi(cloudmersive_convert_api_client.ApiClient(configuration))\n\n# Convert Document to PDF\nif (os.stat(input_file).st_size != 0): #api does not work on empty files\n \n try:\n api_response = api_instance.convert_document_ppt_to_pdf(input_file) #ONLY DIFFERENCE\n \n os.remove(input_file) \n output_file=os.path.splitext(input_file)[0]+\".pdf\"\n \n data = ast.literal_eval(api_response)\n \n with open(output_file, 'wb') as binary_file:\n binary_file.write(data)\n \n print(input_file, 'was processed by ConvertDocumentPptToPdf.')\n \n except ApiException as e:\n print(input_file, 'was not processed.')\n\n" ]
[ 0 ]
[]
[]
[ "api", "arrays", "character_encoding", "pdf_generation", "python" ]
stackoverflow_0074633253_api_arrays_character_encoding_pdf_generation_python.txt
Q: Extract a substring from a path I would like to extract two parts of the string (path). In particular, I would like to have the part a = "fds89gsa8asdfas0sgfsaajajgsf6shjksa6" and the part b = "arc-D41234". path = "//users/ftac/tref/arc-D41234/fds89gsa8asdfas0sgfsaajajgsf6shjksa6" a = path[-36:] b = path[-47:-37] I tried with slicing and was fine, the problem is that I have to repeat for various paths (in a for loop) and the part "fds89gsa8asdfas0sgfsaajajgsf6shjksa6" and also the part "//users/ftac/tref/" is not always with the same str length and with the same subfolder numbers. The only thing is that I want to take the name of the last two subfolders. Can someone help me, how can I solve this? I think that the algorithm should be: Take the str a from the last character until the first (from the end) forward slash (/) Take the str b from the first (from the end) forward slash (/) until the second (from the end) forward slash (/) A: You need to split the path like: a = path.split('/')[-1] b = path.split('/')[-2]
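An alternative sketch of the same idea using pathlib from the standard library, which avoids counting characters or slashes by hand (it assumes forward-slash paths exactly as in the question):
from pathlib import PurePosixPath

path = "//users/ftac/tref/arc-D41234/fds89gsa8asdfas0sgfsaajajgsf6shjksa6"
p = PurePosixPath(path)
a = p.name         # 'fds89gsa8asdfas0sgfsaajajgsf6shjksa6' (last folder)
b = p.parent.name  # 'arc-D41234' (second-to-last folder)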
Extract a substring from a path
I would like to extract two parts of the string (path). In particular, I would like to have the part a = "fds89gsa8asdfas0sgfsaajajgsf6shjksa6" and the part b = "arc-D41234". path = "//users/ftac/tref/arc-D41234/fds89gsa8asdfas0sgfsaajajgsf6shjksa6" a = path[-36:] b = path[-47:-37] I tried with slicing and was fine, the problem is that I have to repeat for various paths (in a for loop) and the part "fds89gsa8asdfas0sgfsaajajgsf6shjksa6" and also the part "//users/ftac/tref/" is not always with the same str length and with the same subfolder numbers. The only thing is that I want to take the name of the last two subfolders. Can someone help me, how can I solve this? I think that the algorithm should be: Take the str a from the last character until the first (from the end) forward slash (/) Take the str b from the first (from the end) forward slash (/) until the second (from the end) forward slash (/)
[ "You need to split the path like:\na = path.split('/')[-1]\nb = path.split('/')[-2]\n\n" ]
[ 0 ]
[]
[]
[ "python", "string" ]
stackoverflow_0074645396_python_string.txt
Q: Regular expression to extract text in python I am new using the re library and I would like to know if somebody knows how to extract the following text: Initial '[p]I am a test paragraph[/p]' Output I am a test paragraph I tried to use the following line : text = '[p]I am a test paragraph[/p]' param = re.findall("[p](.*?)[/p]]", text) but the output was : >>[']I am a test paragraph[/'] I tried to used the BBCode library but it doesn't work with this kind of text. A: # regex to extract text between [p] and [/p] tags regex = r'\[p\](.*?)\[/p\]' test_text = '[p]I am a test paragraph[/p]' # extract text between [p] and [/p] tags list_of_results = re.findall(regex, test_text) A: import re text = '[p]I am a test paragraph[/p]' parm = re.findall(r'\[p](.*?)\[/p]', text)[0] print(parm) Gives # I am a test paragraph or simply uisng rfind text = '[p]I am a test paragraph[/p]' start = '[p]' end = '[/p]' print (text[text.find(start)+len(start):text.rfind(end)]) Also Gives # I am a test paragraph
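One detail worth adding to the regex shown above: . does not match newlines by default, so if a paragraph can span several lines, a hedged variant passes re.DOTALL (the sample text here is made up for illustration):
import re

text = "[p]first paragraph[/p] [p]second\nparagraph[/p]"
matches = re.findall(r"\[p\](.*?)\[/p\]", text, flags=re.DOTALL)
# ['first paragraph', 'second\nparagraph']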
Regular expression to extract text in python
I am new using the re library and I would like to know if somebody knows how to extract the following text: Initial '[p]I am a test paragraph[/p]' Output I am a test paragraph I tried to use the following line : text = '[p]I am a test paragraph[/p]' param = re.findall("[p](.*?)[/p]]", text) but the output was : >>[']I am a test paragraph[/'] I tried to used the BBCode library but it doesn't work with this kind of text.
[ "# regex to extract text between [p] and [/p] tags\nregex = r'\\[p\\](.*?)\\[/p\\]'\ntest_text = '[p]I am a test paragraph[/p]'\n\n# extract text between [p] and [/p] tags\nlist_of_results = re.findall(regex, test_text)\n\n", "import re\ntext = '[p]I am a test paragraph[/p]'\nparm = re.findall(r'\\[p](.*?)\\[/p]', text)[0] \nprint(parm)\n\nGives #\nI am a test paragraph\n\nor simply uisng rfind\ntext = '[p]I am a test paragraph[/p]'\n\nstart = '[p]'\nend = '[/p]'\n\nprint (text[text.find(start)+len(start):text.rfind(end)])\n\nAlso Gives #\nI am a test paragraph\n\n" ]
[ 0, 0 ]
[]
[]
[ "bbcode", "python", "regex" ]
stackoverflow_0074645253_bbcode_python_regex.txt
Q: a code to input three user-provided statements and solve the questions A code to input three user-provided statements. Is there any information in the title case? Total the letters to determine the number. Count up all of the words. What number of words begin with "e"? How many words have "er" at the end? The number of vowels in each of these sentences. Do the statements include any digits? Invert the second assertion. Reverse the third statement's words one at a time. please solve via python
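A compact sketch of one possible solution; the reading of the assignment is hedged (for example, it treats the title-case question as str.istitle() per statement), so adjust the details to the exact requirements:
statements = [input(f"Statement {i + 1}: ") for i in range(3)]

total_letters = sum(ch.isalpha() for s in statements for ch in s)
words = [w for s in statements for w in s.split()]
starts_with_e = sum(w.lower().startswith("e") for w in words)
ends_with_er = sum(w.lower().rstrip(".,!?").endswith("er") for w in words)
vowels_per_statement = [sum(ch in "aeiouAEIOU" for ch in s) for s in statements]
digits_per_statement = [any(ch.isdigit() for ch in s) for s in statements]
title_case_per_statement = [s.istitle() for s in statements]

print("Total letters:", total_letters)
print("Total words:", len(words))
print("Words starting with 'e':", starts_with_e)
print("Words ending with 'er':", ends_with_er)
print("Vowels per statement:", vowels_per_statement)
print("Digits present per statement:", digits_per_statement)
print("Title case per statement:", title_case_per_statement)
print("Second statement reversed:", statements[1][::-1])
print("Third statement, each word reversed:", " ".join(w[::-1] for w in statements[2].split()))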
a code to input three user-provided statements and solve the questions
A code to input three user-provided statements. Is there any information in the title case? Total the letters to determine the number. Count up all of the words. What number of words begin with "e"? How many words have "er" at the end? The number of vowels in each of these sentences. Do the statements include any digits? Invert the second assertion. Reverse the third statement's words one at a time. please solve via python
[]
[]
[ "a = input(\"First statement : \")\nb = input(\"Second statement : \")\nc = input(\"Third statement : \")\nal = a.split()\nbl = b.split()\ncl = c.split()\ncounte = 0\ncounter = 0\nletter = 0\nbll = \"\"\nvowels = 0\nans = 0\nansb = 0\nansc = 0\nprint(f\"The words in first statement is {len(al)}\")\nprint(f\"The words in second statement is {len(bl)}\")\nprint(f\"The words in third statement is {len(cl)}\")\nprint(f\"Total words {len(al)+len(bl)+len(cl)}\")\nfor i in al:\n g = len(i)\n letter += g\n for vow in range(0, len(i)):\n if i[vow] == 'a' or i[vow] == 'i' or i[vow] == 'e' or i[vow] == 'o' or i[vow] == 'u':\n vowels += 1\n\n if i[0] == 'e':\n counte += 1\n if i[g-1]+i[g-2] == \"re\":\n counter += 1\n\nfor j in bl:\n h = len(j)\n letter += h\n for vow in range(0, len(j)):\n if j[vow] == 'a' or j[vow] == 'i' or j[vow] == 'e' or j[vow] == 'o' or j[vow] == 'u':\n vowels += 1\n if j[0] == 'e':\n counte += 1\n if j[h-1]+j[h-2] == \"re\":\n counter += 1\nfor k in al:\n u = len(k)\n letter += u\n for vow in range(0, len(k)):\n if k[vow] == 'a' or k[vow] == 'i' or k[vow] == 'e' or k[vow] == 'o' or k[vow] == 'u':\n vowels += 1\n if k[0] == 'e':\n counte += 1\n if k[u-1]+k[u-2] == \"re\":\n counter += 1\nprint(f\"The total letters is {letter}\")\nprint(f\"The words starting with e is {counte}\")\nprint(f\"The words ending with er is {counter}\")\n\n# Reversing the second statements\nrevb = b[::-1]\nprint(f\"The second statement reversed : {revb}\")\n# Third statement reverse word by word\nfor words in bl:\n words = words[::-1]\n bll += words+\" \"\nprint(f\"The third statement reversed word by word : {bll}\")\nprint(f\"The total number of vowels is {vowels}\")\nli = ['0', '1', '3', '2', '4', '5', '6', '7', '8', '9']\nfor gh in al:\n for ff in range(len(gh)):\n if gh[ff] in li:\n ans = 967\n print(\"Numbers are there in first statement \")\n break\nfor ghk in bl:\n for ffd in range(len(ghk)):\n if ghk[ffd] in li:\n ansb = 9673\n print(\"Numbers are there in second statement \")\n break\nfor ghks in cl:\n for fffd in range(len(ghks)):\n if ghks[fffd] in li:\n ansc = 96733\n print(\"Numbers are there in third statement \")\n break\nif ans == 0:\n print(\" No Numbers are there in first statement \")\nif ansb == 0:\n print(\" No Numbers are there in second statement \")\nif ansc == 0:\n print(\" No Numbers are there in third statement \")\n\n#Code is Completed\n\n" ]
[ -1 ]
[ "python" ]
stackoverflow_0074645170_python.txt
Q: Nested Dictionary (JSON): Merge multiple keys stored in a list to access its value from the dict I have a JSON with an unknown number of keys & values, I need to store the user's selection in a list & then access the selected key's value; (it'll be guaranteed that the keys in the list are always stored in the correct sequence). Example I need to access the value_key1-2. mydict = { 'key1': { 'key1-1': { 'key1-2': 'value_key1-2' }, }, 'key2': 'value_key2' } I can see the keys & they're limited so I can manually use: >>> print(mydict['key1']['key1-1']['key1-2']) >>> 'value_key1-2' Now after storing the user's selections in a list, we have the following list: Uselection = ['key1', 'key1-1', 'key1-2'] How can I convert those list elements into the similar code we used earlier? How can I automate it using Python? A: You have to loop the list of keys and update the "current value" on each step. val = mydict try: for key in Uselection: val = val[key] except KeyError: handle non-existing keys here Another, more 'posh' way to do the same (not generally recommended): from functools import reduce val = reduce(dict.get, Uselection, mydict)
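A small helper in the spirit of the loop answer, shown on the data from the question; the function name is only illustrative:
mydict = {
    'key1': {'key1-1': {'key1-2': 'value_key1-2'}},
    'key2': 'value_key2',
}
Uselection = ['key1', 'key1-1', 'key1-2']

def get_nested(d, keys, default=None):
    # walk the dict one key at a time and stop early if a key is missing
    current = d
    for key in keys:
        if not isinstance(current, dict) or key not in current:
            return default
        current = current[key]
    return current

print(get_nested(mydict, Uselection))  # value_key1-2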
Nested Dictionary (JSON): Merge multiple keys stored in a list to access its value from the dict
I have a JSON with an unknown number of keys & values, I need to store the user's selection in a list & then access the selected key's value; (it'll be guaranteed that the keys in the list are always stored in the correct sequence). Example I need to access the value_key1-2. mydict = { 'key1': { 'key1-1': { 'key1-2': 'value_key1-2' }, }, 'key2': 'value_key2' } I can see the keys & they're limited so I can manually use: >>> print(mydict['key1']['key1-1']['key1-2']) >>> 'value_key1-2' Now after storing the user's selections in a list, we have the following list: Uselection = ['key1', 'key1-1', 'key1-2'] How can I convert those list elements into the similar code we used earlier? How can I automate it using Python?
[ "You have to loop the list of keys and update the \"current value\" on each step.\nval = mydict\n\ntry:\n for key in Uselection:\n val = val[key]\nexcept KeyError:\n handle non-existing keys here\n\nAnother, more 'posh' way to do the same (not generally recommended):\nfrom functools import reduce\n\nval = reduce(dict.get, Uselection, mydict)\n\n" ]
[ 3 ]
[]
[]
[ "dictionary", "json", "python" ]
stackoverflow_0074645351_dictionary_json_python.txt
Q: Could not find a version that satisfies the requirement torch>=1.0.0? Could not find a version that satisfies the requirement torch>=1.0.0 No matching distribution found for torch>=1.0.0 (from stanfordnlp) A: This can also happen if your Python version is too new. Pytorch currently does not support past 3.7.9. Figured out from: https://stackoverflow.com/a/58902298/5090928 A: This is the latest command for pytorch. pip install torch===1.4.0 torchvision===0.5.0 -f https://download.pytorch.org/whl/torch_stable.html A: I had some difficulties with this as well. The steps I had to do was: Install the latest version of PyTorch: pip3 install torch===1.3.1 torchvision===0.4.2 -f https://download.pytorch.org/whl/torch_stable.html Make sure you are installing with 64bit python version; otherwise, it won't work A: I finally managed to solve this problem thanks to John Red' comment and serg06 answer. Here's what I've done : Install Python 3.7.9 and not newer. BUT make sure to install 64bits python Every other combination failed for me. A: I had this same issue while installing standfordnlp in my windows 10 system. Installing torch before installing stanfordnlp worked out for me. I have installed torch from pytorch official website. A: If you have python 3.7 already installed along with newer versions then can use below command to install torch using python 3.7 py -3.7 -m pip install torch But also note that you have to execute the python program using py -3.7 py -3.7 program_name.py A: For people visiting this questions with slightly newer versions of python and pytorch, I had Python 3.8.3 32-bit and even though the pytorch page states that: Currently, PyTorch on Windows only supports Python 3.7-3.9; Installing Python 3.9.13 64-bit instead of Python 3.8.3 32-bit solved it for me. After that, I used the install script generator and ran python -m pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113 ... and it started downloading. A: torch and torchvision need python 3.8.x ... so in your CLI run python --version to get the python version. make sure that your environment has python 3.8.x, otherwise, create another virtual environment with anaconda conda create -n myenv python==3.8 anaconda conda activate myenv Then install torch and torchvision by this command pip install torch===1.5.0 torchvision===0.6.0 -f https://download.pytorch.org/whl/torch_stable.html A: I went heaven and earth for this problem and here is what it turned out to be: 1- i had python 3.10 2- the python.exe in my virtual environemnt linked to python310 i uninstalled the python3.10 , then went to delete the paths in system environemnts variables (go to windows search, type this thing, you get a window, click on environment variables, and find a word called paths, click edit) ....\python310\ (i named as such when initially installed, you probs have another name) and also this ...python310\Scripts\ delete them go to https://www.python.org/downloads/release/python-3711/ , istall pythion 3.7 , after that go back to system env. variables thingy, .. add the paths that ends with ...\python37\ , and ...\python37\Scripts\ (make you sure you end the paths with "") then go to new command prompt, type python , you should get Python 3.7.0 ... cd to your virtual environment path script (mine looked like this C:\Users...\python_ver\python_projects\root_environment\Scripts>) , activate to the name of whatever you called, for me i typed: activate tf. 
type python again, if you have python 3.7 as a result, you good to go ... if you still seeing python 3.10 ... then you probs get some error saying no python in ....\python310\python.exe ... so: go the folder that you saved python310 (the path shown in the last step), make sure all folders of your python 3.7 go in there. type python in cmd in the same virtual envrioment path by the cursor ... to check you running pythong 3.7 ... once python 3.7 is your default .. run the blood Clot for pytorch https://pytorch.org/ to install pytorch thanks to all the guys on this A: Use 64-Bit Python. PyTorch doesn't work with 32-Bit Python. I had the same issue. A: I tried every possible command for Windows, but nothing worked. I also tried using Pycharm package installation, everything throws the same error. Finally installed Pytorch using Anaconda. A: I want to pip install " torch>=1.4.0, torchvision>=0.5.0 ", but in a conda env with python=3.0, this is not right. I tried create a new conda env with python=3.7, and pip install " torch>=1.4.0, torchvision>=0.5.0 " again, it is ok. A: For previous versions please use the snippets from the PyTorch website; https://pytorch.org/get-started/previous-versions/ As an example, this will turn into an error since the cudatoolkit versions are not listed in pip; !pip install torch==1.10.0+cu111 ERROR: Could not find a version that satisfies the requirement torch==1.10.0+cu111 (from versions: 1.0.0, 1.0.1, 1.0.1.post2, 1.1.0, 1.2.0, 1.3.0, 1.3.1, 1.4.0, 1.5.0, 1.5.1, 1.6.0, 1.7.0, 1.7.1, 1.8.0, 1.8.1, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.10.2, 1.11.0) ERROR: No matching distribution found for torch==1.10.0+cu111 For the same Torch version and cudatoolkit, you can use the following code instead; !pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 torchaudio==0.10.0 -f https://download.pytorch.org/whl/torch_stable.html import torch TORCH_VERSION = ".".join(torch.__version__.split(".")[:2]) CUDA_VERSION = torch.__version__.split("+")[-1] print("torch: ", TORCH_VERSION, "; cuda: ", CUDA_VERSION) Result: torch: 1.10 ; cuda: cu111 A: This is for Deep Reinforcement Learning: Chapter-6: 02_dnq_pong.py: I did not verify the above solutions. I found it tedious to change the existing library version. So, as mentioned by someone in GIT, you just have to change two variables in your loss function. https://github.com/PacktPublishing/Deep-Reinforcement-Learning-Hands-On/issues/90 Change the variables to below: actions_v = torch.tensor(actions).to(device, dtype=torch.int64) done_mask = torch.tensor(dones).to(device, dtype=torch.bool) You could make similar changes to the variables which cause the error, in your respective programs. A: I had the same issue with Python 3.8.2 which is currently supported by PyTorch. I updated pip from version 19.2.3 to 22.3.1 and my issue was solved: pip install --upgrade pip
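Since several answers in this thread come down to the Python version and whether the interpreter is 32- or 64-bit, a quick check that can be run before installing torch (the printed values are machine dependent):
import platform
import struct
import sys

print(sys.version)                            # interpreter version
print(struct.calcsize("P") * 8, "bit build")  # 32 vs 64 bit
print(platform.machine())                     # e.g. AMD64, arm64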
Could not find a version that satisfies the requirement torch>=1.0.0?
Could not find a version that satisfies the requirement torch>=1.0.0 No matching distribution found for torch>=1.0.0 (from stanfordnlp)
[ "This can also happen if your Python version is too new. Pytorch currently does not support past 3.7.9.\nFigured out from: https://stackoverflow.com/a/58902298/5090928\n", "This is the latest command for pytorch.\npip install torch===1.4.0 torchvision===0.5.0 -f https://download.pytorch.org/whl/torch_stable.html\n\n", "I had some difficulties with this as well. The steps I had to do was:\nInstall the latest version of PyTorch:\n pip3 install torch===1.3.1 torchvision===0.4.2 -f \n https://download.pytorch.org/whl/torch_stable.html\n\nMake sure you are installing with 64bit python version; otherwise, it won't work\n", "I finally managed to solve this problem thanks to John Red' comment and serg06 answer. Here's what I've done :\n\nInstall Python 3.7.9 and not newer.\nBUT make sure to install 64bits python\n\nEvery other combination failed for me.\n", "I had this same issue while installing standfordnlp in my windows 10 system.\nInstalling torch before installing stanfordnlp worked out for me.\nI have installed torch from pytorch official website.\n", "If you have python 3.7 already installed along with newer versions then can use below command to install torch using python 3.7\npy -3.7 -m pip install torch\n\nBut also note that you have to execute the python program using py -3.7\npy -3.7 program_name.py\n\n", "For people visiting this questions with slightly newer versions of python and pytorch, I had Python 3.8.3 32-bit and even though the pytorch page states that:\n\nCurrently, PyTorch on Windows only supports Python 3.7-3.9;\n\nInstalling Python 3.9.13 64-bit instead of Python 3.8.3 32-bit solved it for me.\nAfter that, I used the install script generator and ran\npython -m pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113\n... and it started downloading.\n", "torch and torchvision need python 3.8.x ... so in your CLI run\npython --version\n\nto get the python version.\nmake sure that your environment has python 3.8.x, otherwise, create another virtual environment with anaconda\nconda create -n myenv python==3.8 anaconda\nconda activate myenv\n\nThen install torch and torchvision by this command\npip install torch===1.5.0 torchvision===0.6.0 -f https://download.pytorch.org/whl/torch_stable.html\n\n", "I went heaven and earth for this problem and here is what it turned out to be:\n1- i had python 3.10\n2- the python.exe in my virtual environemnt linked to python310\ni uninstalled the python3.10 , then went to delete the paths in system environemnts variables (go to windows search, type this thing, you get a window, click on environment variables, and find a word called paths, click edit) ....\\python310\\ (i named as such when initially installed, you probs have another name) and also this ...python310\\Scripts\\\ndelete them\ngo to https://www.python.org/downloads/release/python-3711/ , istall pythion 3.7 , after that go back to system env. variables thingy, .. add the paths that ends with ...\\python37\\ , and ...\\python37\\Scripts\\ (make you sure you end the paths with \"\")\nthen go to new command prompt, type python , you should get Python 3.7.0 ...\ncd to your virtual environment path script (mine looked like this C:\\Users...\\python_ver\\python_projects\\root_environment\\Scripts>) , activate to the name of whatever you called, for me i typed: activate tf.\ntype python again, if you have python 3.7 as a result, you good to go ... if you still seeing python 3.10 ... 
then you probs get some error saying no python in ....\\python310\\python.exe ... so:\ngo the folder that you saved python310 (the path shown in the last step), make sure all folders of your python 3.7 go in there.\ntype python in cmd in the same virtual envrioment path by the cursor ... to check you running pythong 3.7 ...\nonce python 3.7 is your default .. run the blood Clot for pytorch https://pytorch.org/ to install pytorch\nthanks to all the guys on this\n", "Use 64-Bit Python. PyTorch doesn't work with 32-Bit Python. I had the same issue.\n", "I tried every possible command for Windows, but nothing worked. I also tried using Pycharm package installation, everything throws the same error.\nFinally installed Pytorch using Anaconda.\n", "I want to pip install \" torch>=1.4.0, torchvision>=0.5.0 \", but in a conda env with python=3.0, this is not right.\nI tried create a new conda env with python=3.7, and pip install \" torch>=1.4.0, torchvision>=0.5.0 \" again, it is ok.\n", "For previous versions please use the snippets from the PyTorch website; https://pytorch.org/get-started/previous-versions/\nAs an example, this will turn into an error since the cudatoolkit versions are not listed in pip;\n!pip install torch==1.10.0+cu111\n\n\nERROR: Could not find a version that satisfies the requirement torch==1.10.0+cu111 (from versions: 1.0.0, 1.0.1, 1.0.1.post2, 1.1.0, 1.2.0, 1.3.0, 1.3.1, 1.4.0, 1.5.0, 1.5.1, 1.6.0, 1.7.0, 1.7.1, 1.8.0, 1.8.1, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.10.2, 1.11.0) ERROR: No matching distribution found for torch==1.10.0+cu111\n\nFor the same Torch version and cudatoolkit, you can use the following code instead;\n!pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 torchaudio==0.10.0 -f https://download.pytorch.org/whl/torch_stable.html\n\n\nimport torch\nTORCH_VERSION = \".\".join(torch.__version__.split(\".\")[:2])\nCUDA_VERSION = torch.__version__.split(\"+\")[-1]\nprint(\"torch: \", TORCH_VERSION, \"; cuda: \", CUDA_VERSION)\n\n\nResult:\n\ntorch: 1.10 ; cuda: cu111\n\n", "This is for Deep Reinforcement Learning: Chapter-6: 02_dnq_pong.py:\nI did not verify the above solutions. I found it tedious to change the existing library version.\nSo, as mentioned by someone in GIT, you just have to change two variables in your loss function.\nhttps://github.com/PacktPublishing/Deep-Reinforcement-Learning-Hands-On/issues/90\nChange the variables to below:\nactions_v = torch.tensor(actions).to(device, dtype=torch.int64)\ndone_mask = torch.tensor(dones).to(device, dtype=torch.bool)\nYou could make similar changes to the variables which cause the error, in your respective programs.\n", "I had the same issue with Python 3.8.2 which is currently supported by PyTorch. I updated pip from version 19.2.3 to 22.3.1 and my issue was solved:\npip install --upgrade pip\n" ]
[ 78, 44, 13, 10, 5, 2, 2, 1, 1, 1, 0, 0, 0, 0, 0 ]
[ "follow the link: https://pytorch.org/\nand set your system requirement in QUICK START LOCALLY SECTION\n\n" ]
[ -4 ]
[ "python" ]
stackoverflow_0056239310_python.txt
Q: Problem to connect the django with mongodb using djongo How i use DATABASES = { 'default': { 'ENGINE': 'djongo', 'NAME': 'rede_social', 'HOST': 'mongodb+srv://blackwolf449:[email protected]/?retryWrites=true&w=majority', 'USER': 'blackwolf449', 'PASSWORD': '3CErLxvGLPM4rLsK' } } Error django.core.exceptions.ImproperlyConfigured: 'djongo' isn't an available database backend or couldn't be imported. Check the above exception. To use one of the built-in backends, use 'django.db.backends.XXX', where XXX is one of: 'mysql', 'oracle', 'postgresql', 'sqlite3' A: downgrade your django version to 3.0.5 A: I had the same problem, I fixed it by installing Pytz. Do : pip install pytz
Problem to connect the django with mongodb using djongo
How i use DATABASES = { 'default': { 'ENGINE': 'djongo', 'NAME': 'rede_social', 'HOST': 'mongodb+srv://blackwolf449:[email protected]/?retryWrites=true&w=majority', 'USER': 'blackwolf449', 'PASSWORD': '3CErLxvGLPM4rLsK' } } Error django.core.exceptions.ImproperlyConfigured: 'djongo' isn't an available database backend or couldn't be imported. Check the above exception. To use one of the built-in backends, use 'django.db.backends.XXX', where XXX is one of: 'mysql', 'oracle', 'postgresql', 'sqlite3'
[ "downgrade your django version to 3.0.5\n", "I had the same problem, I fixed it by installing Pytz.\nDo :\n\npip install pytz\n\n" ]
[ 0, 0 ]
[]
[]
[ "django", "djongo", "python" ]
stackoverflow_0073016836_django_djongo_python.txt
Q: Flask testing routes - not registered I've a very basic Flask application: #main.py from flask import Flask app = Flask(__name__) @app.route('/sth/') def hi(): return 'HI\n' and I try to test the existence of the url, however, to me it seems the routes are not registered: #tests/test_view.py from flask import Flask class TestSthView: def test_sth_returns_ok(self): app = Flask(__name__) c = app.test_client() resp = c.get('/sth/') assert resp.request.path == '/sth/' assert resp.status_code == 200 . Could anybody point me out how can I test the existence of the /sth/ url? Why do I get 404 instead of 200 ? I went through on many pages about testing, but I still unable to find the mistake. * | \---main.py | \---tests/ | \--------test_view.py Thanks. A: #main.py from flask import Flask app = Flask(__name__) @app.route('/sth/') def hi(): return 'HI\n' if __name__ == "__main__": app.run() In another terminal you do an easy request e.g. import requests r = requests.get('http://127.0.0.1:5000/sth/') assert r.status_code == 200 Or you do it the Flask native way: test_example.py @pytest.fixture() def app(): app = create_app() app.config.update({ "TESTING": True, }) # other setup can go here yield app # clean up / reset resources here @pytest.fixture() def client(app): return app.test_client() def yourtest(client): response = client.get("/sth/") assert response.request.path == "/index" assert response.status_code == 200 using this command in your CLI pytest test_example.py::yourtest I didn't test it yet. My sources: https://docs.pytest.org/en/7.1.x/how-to/usage.html https://flask.palletsprojects.com/en/2.2.x/testing/ A: I believe your issue is with how you're creating a new app in your unit test when you call app = Flask(__name__). If you import your app variable into your unit test (I can't say how to do this exactly without seeing your project layout) but the below code should work if they're in the same directory. from main import app class TestSthView: def test_sth_returns_ok(self): c = app.test_client() resp = c.get('/sth/') assert resp.request.path == '/sth/' assert resp.status_code == 200
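A hedged combination of the two answers: import the real app object from main.py (so the /sth/ route is actually registered) and wrap it in a pytest fixture; the layout is assumed to let the tests import main:
# tests/test_view.py
import pytest
from main import app  # the app that has /sth/ registered

@pytest.fixture()
def client():
    app.config.update(TESTING=True)
    return app.test_client()

def test_sth_returns_ok(client):
    resp = client.get('/sth/')
    assert resp.request.path == '/sth/'
    assert resp.status_code == 200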
Flask testing routes - not registered
I've a very basic Flask application: #main.py from flask import Flask app = Flask(__name__) @app.route('/sth/') def hi(): return 'HI\n' and I try to test the existence of the url, however, to me it seems the routes are not registered: #tests/test_view.py from flask import Flask class TestSthView: def test_sth_returns_ok(self): app = Flask(__name__) c = app.test_client() resp = c.get('/sth/') assert resp.request.path == '/sth/' assert resp.status_code == 200 . Could anybody point me out how can I test the existence of the /sth/ url? Why do I get 404 instead of 200 ? I went through on many pages about testing, but I still unable to find the mistake. * | \---main.py | \---tests/ | \--------test_view.py Thanks.
[ "#main.py\nfrom flask import Flask\n\napp = Flask(__name__)\n\[email protected]('/sth/')\ndef hi():\n return 'HI\\n'\n\nif __name__ == \"__main__\":\n app.run()\n\nIn another terminal you do an easy request e.g.\nimport requests\n\nr = requests.get('http://127.0.0.1:5000/sth/')\n\nassert r.status_code == 200\n\nOr you do it the Flask native way:\ntest_example.py\[email protected]()\ndef app():\n app = create_app()\n app.config.update({\n \"TESTING\": True,\n })\n\n # other setup can go here\n\n yield app\n\n # clean up / reset resources here\n\n\[email protected]()\ndef client(app):\n return app.test_client()\n\n\n\ndef yourtest(client):\n response = client.get(\"/sth/\")\n assert response.request.path == \"/index\"\n assert response.status_code == 200\n\nusing this command in your CLI\npytest test_example.py::yourtest\nI didn't test it yet.\nMy sources:\nhttps://docs.pytest.org/en/7.1.x/how-to/usage.html\nhttps://flask.palletsprojects.com/en/2.2.x/testing/\n", "I believe your issue is with how you're creating a new app in your unit test when you call app = Flask(__name__).\nIf you import your app variable into your unit test (I can't say how to do this exactly without seeing your project layout) but the below code should work if they're in the same directory.\nfrom main import app\n\nclass TestSthView:\n def test_sth_returns_ok(self):\n c = app.test_client()\n resp = c.get('/sth/')\n\n assert resp.request.path == '/sth/'\n assert resp.status_code == 200\n\n" ]
[ 1, 0 ]
[]
[]
[ "flask", "python" ]
stackoverflow_0074645353_flask_python.txt
Q: I'm trying to make a for loop but it did not work (A little bit complex) So I'm trying to make a 'for' loop In my discord bot. But It giving me errors in whatever I try. The loop is for the embed.add_field function, Here's what I tried: for i in netres: embed.add_field(name=f"Email: {netres[0][i[0]]}", value=f"Password: {netres[1][i]}", inline=False) if it can Help, netres = a c.fetchall from a db. Here's netres: netres = c.fetchall() The full traceback is: Ignoring exception in command buy: Traceback (most recent call last): File "C:\Users\sidal\AppData\Local\Programs\Python\Python311\Lib\site-packages\discord\commands\core.py", line 124, in wrapped ret = await coro(arg) ^^^^^^^^^^^^^^^ File "C:\Users\sidal\AppData\Local\Programs\Python\Python311\Lib\site-packages\discord\commands\core.py", line 980, in _invoke await self.callback(ctx, **kwargs) File "C:\Users\sidal\Desktop\sidtho\main.py", line 323, in buy embed.add_field(name=f"Email: {netres[0][i[0]]}", value=f"Password: {netres[1][i]}", inline=False) ~~~~~~~~~^^^^^^ TypeError: tuple indices must be integers or slices, not str The above exception was the direct cause of the following exception: Traceback (most recent call last): File "C:\Users\sidal\AppData\Local\Programs\Python\Python311\Lib\site-packages\discord\bot.py", line 1114, in invoke_application_command await ctx.command.invoke(ctx) File "C:\Users\sidal\AppData\Local\Programs\Python\Python311\Lib\site-packages\discord\commands\core.py", line 375, in invoke await injected(ctx) File "C:\Users\sidal\AppData\Local\Programs\Python\Python311\Lib\site-packages\discord\commands\core.py", line 132, in wrapped raise ApplicationCommandInvokeError(exc) from exc discord.errors.ApplicationCommandInvokeError: Application Command raised an exception: TypeError: tuple indices must be integers or slices, not str``` A: You just need to make the elements of tuples either an integer or slices. (if your requirement is string)
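The traceback says the tuples returned by fetchall() are being indexed with something other than integers. A hedged sketch of the loop, assuming each row is an (email, password) pair as the question implies:
# netres is a list of tuples, e.g. [('mail_1', 'password_1'), ('mail_2', 'password_2')]
for email, password in netres:
    embed.add_field(name=f"Email: {email}",
                    value=f"Password: {password}",
                    inline=False)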
I'm trying to make a for loop but it did not work (A little bit complex)
So I'm trying to make a 'for' loop In my discord bot. But It giving me errors in whatever I try. The loop is for the embed.add_field function, Here's what I tried: for i in netres: embed.add_field(name=f"Email: {netres[0][i[0]]}", value=f"Password: {netres[1][i]}", inline=False) if it can Help, netres = a c.fetchall from a db. Here's netres: netres = c.fetchall() The full traceback is: Ignoring exception in command buy: Traceback (most recent call last): File "C:\Users\sidal\AppData\Local\Programs\Python\Python311\Lib\site-packages\discord\commands\core.py", line 124, in wrapped ret = await coro(arg) ^^^^^^^^^^^^^^^ File "C:\Users\sidal\AppData\Local\Programs\Python\Python311\Lib\site-packages\discord\commands\core.py", line 980, in _invoke await self.callback(ctx, **kwargs) File "C:\Users\sidal\Desktop\sidtho\main.py", line 323, in buy embed.add_field(name=f"Email: {netres[0][i[0]]}", value=f"Password: {netres[1][i]}", inline=False) ~~~~~~~~~^^^^^^ TypeError: tuple indices must be integers or slices, not str The above exception was the direct cause of the following exception: Traceback (most recent call last): File "C:\Users\sidal\AppData\Local\Programs\Python\Python311\Lib\site-packages\discord\bot.py", line 1114, in invoke_application_command await ctx.command.invoke(ctx) File "C:\Users\sidal\AppData\Local\Programs\Python\Python311\Lib\site-packages\discord\commands\core.py", line 375, in invoke await injected(ctx) File "C:\Users\sidal\AppData\Local\Programs\Python\Python311\Lib\site-packages\discord\commands\core.py", line 132, in wrapped raise ApplicationCommandInvokeError(exc) from exc discord.errors.ApplicationCommandInvokeError: Application Command raised an exception: TypeError: tuple indices must be integers or slices, not str```
[ "You just need to make the elements of tuples either an integer or slices. (if your requirement is string)\n" ]
[ 0 ]
[]
[]
[ "discord.py", "pycord", "python", "python_3.x", "sqlite" ]
stackoverflow_0074645325_discord.py_pycord_python_python_3.x_sqlite.txt
Q: Cannot import Tensorflow - Apple Macbook M1 I'm trying to use tensorflow, but I can't import it. I followed the steps on the Apple website to download and install Tensorflow, and everything appears to be ok, but when I try to import tensorflow, I get some errors. What should I do? Errors: TypeError: Unable to convert function return value to a Python type! The signature was () -> handle RuntimeError: module compiled against API version 0x10 but this version of numpy is 0xf RuntimeError: module compiled against API version 0x10 but this version of numpy is 0xf ImportError: numpy.core._multiarray_umath failed to import ImportError: numpy.core.umath failed to import What I did: conda create --name myenv python conda activate myenv conda install -c apple tensorflow-deps python -m pip install tensorflow-macos python -m pip install tensorflow-metal conda list: absl-py 1.3.0 pypi_0 pypi astunparse 1.6.3 pypi_0 pypi blas 1.0 openblas bzip2 1.0.8 h620ffc9_4 c-ares 1.18.1 h1a28f6b_0 ca-certificates 2022.10.11 hca03da5_0 cachetools 5.2.0 pypi_0 pypi certifi 2022.9.24 py310hca03da5_0 charset-normalizer 2.1.1 pypi_0 pypi flatbuffers 22.10.26 pypi_0 pypi gast 0.4.0 pypi_0 pypi google-auth 2.14.0 pypi_0 pypi google-auth-oauthlib 0.4.6 pypi_0 pypi google-pasta 0.2.0 pypi_0 pypi grpcio 1.42.0 py310h95c9599_0 h5py 3.6.0 py310h181c318_0 hdf5 1.12.1 h160e8cb_2 idna 3.4 pypi_0 pypi keras 2.10.0 pypi_0 pypi keras-preprocessing 1.1.2 pypi_0 pypi krb5 1.19.2 h3b8d789_0 libclang 14.0.6 pypi_0 pypi libcurl 7.85.0 hc6d1d07_0 libcxx 14.0.6 h848a8c0_0 libedit 3.1.20210910 h1a28f6b_0 libev 4.33 h1a28f6b_1 libffi 3.4.2 hc377ac9_4 libgfortran 5.0.0 11_3_0_hca03da5_28 libgfortran5 11.3.0 h009349e_28 libnghttp2 1.46.0 h95c9599_0 libopenblas 0.3.21 h269037a_0 libssh2 1.10.0 hf27765b_0 llvm-openmp 14.0.6 hc6e5704_0 markdown 3.4.1 pypi_0 pypi markupsafe 2.1.1 pypi_0 pypi ncurses 6.3 h1a28f6b_3 numpy 1.22.3 py310hdb36b11_0 numpy-base 1.22.3 py310h5e3e9f0_0 oauthlib 3.2.2 pypi_0 pypi openssl 1.1.1q h1a28f6b_0 opt-einsum 3.3.0 pypi_0 pypi packaging 21.3 pypi_0 pypi pip 22.2.2 py310hca03da5_0 protobuf 3.19.6 pypi_0 pypi pyasn1 0.4.8 pypi_0 pypi pyasn1-modules 0.2.8 pypi_0 pypi pyparsing 3.0.9 pypi_0 pypi python 3.10.6 hbdb9e5c_1 readline 8.2 h1a28f6b_0 requests 2.28.1 pypi_0 pypi requests-oauthlib 1.3.1 pypi_0 pypi rsa 4.9 pypi_0 pypi setuptools 65.5.0 py310hca03da5_0 six 1.16.0 pyhd3eb1b0_1 sqlite 3.39.3 h1058600_0 tensorboard 2.10.1 pypi_0 pypi tensorboard-data-server 0.6.1 pypi_0 pypi tensorboard-plugin-wit 1.8.1 pypi_0 pypi tensorflow-deps 2.9.0 0 apple tensorflow-estimator 2.10.0 pypi_0 pypi tensorflow-macos 2.10.0 pypi_0 pypi tensorflow-metal 0.6.0 pypi_0 pypi termcolor 2.1.0 pypi_0 pypi tk 8.6.12 hb8d0fd4_0 typing-extensions 4.4.0 pypi_0 pypi tzdata 2022f h04d1e81_0 urllib3 1.26.12 pypi_0 pypi werkzeug 2.2.2 pypi_0 pypi wheel 0.37.1 pyhd3eb1b0_0 wrapt 1.14.1 pypi_0 pypi xz 5.2.6 h1a28f6b_0 zlib 1.2.13 h5a0b063_0 Thanks, Ed A: python -m pip install tensorflow-macos==2.9.0 Apple has not updated tensorflow-deps to 2.10 now.
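After installing the versions from the answer, a quick sanity check that the import works and that the Metal plugin exposes a GPU device:
import tensorflow as tf

print(tf.__version__)
# on a working tensorflow-metal setup this should list at least one GPU device
print(tf.config.list_physical_devices('GPU'))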
Cannot import Tensorflow - Apple Macbook M1
I'm trying to use tensorflow, but I can't import it. I followed the steps on the Apple website to download and install Tensorflow, and everything appears to be ok, but when I try to import tensorflow, I get some errors. What should I do? Errors: TypeError: Unable to convert function return value to a Python type! The signature was () -> handle RuntimeError: module compiled against API version 0x10 but this version of numpy is 0xf RuntimeError: module compiled against API version 0x10 but this version of numpy is 0xf ImportError: numpy.core._multiarray_umath failed to import ImportError: numpy.core.umath failed to import What I did: conda create --name myenv python conda activate myenv conda install -c apple tensorflow-deps python -m pip install tensorflow-macos python -m pip install tensorflow-metal conda list: absl-py 1.3.0 pypi_0 pypi astunparse 1.6.3 pypi_0 pypi blas 1.0 openblas bzip2 1.0.8 h620ffc9_4 c-ares 1.18.1 h1a28f6b_0 ca-certificates 2022.10.11 hca03da5_0 cachetools 5.2.0 pypi_0 pypi certifi 2022.9.24 py310hca03da5_0 charset-normalizer 2.1.1 pypi_0 pypi flatbuffers 22.10.26 pypi_0 pypi gast 0.4.0 pypi_0 pypi google-auth 2.14.0 pypi_0 pypi google-auth-oauthlib 0.4.6 pypi_0 pypi google-pasta 0.2.0 pypi_0 pypi grpcio 1.42.0 py310h95c9599_0 h5py 3.6.0 py310h181c318_0 hdf5 1.12.1 h160e8cb_2 idna 3.4 pypi_0 pypi keras 2.10.0 pypi_0 pypi keras-preprocessing 1.1.2 pypi_0 pypi krb5 1.19.2 h3b8d789_0 libclang 14.0.6 pypi_0 pypi libcurl 7.85.0 hc6d1d07_0 libcxx 14.0.6 h848a8c0_0 libedit 3.1.20210910 h1a28f6b_0 libev 4.33 h1a28f6b_1 libffi 3.4.2 hc377ac9_4 libgfortran 5.0.0 11_3_0_hca03da5_28 libgfortran5 11.3.0 h009349e_28 libnghttp2 1.46.0 h95c9599_0 libopenblas 0.3.21 h269037a_0 libssh2 1.10.0 hf27765b_0 llvm-openmp 14.0.6 hc6e5704_0 markdown 3.4.1 pypi_0 pypi markupsafe 2.1.1 pypi_0 pypi ncurses 6.3 h1a28f6b_3 numpy 1.22.3 py310hdb36b11_0 numpy-base 1.22.3 py310h5e3e9f0_0 oauthlib 3.2.2 pypi_0 pypi openssl 1.1.1q h1a28f6b_0 opt-einsum 3.3.0 pypi_0 pypi packaging 21.3 pypi_0 pypi pip 22.2.2 py310hca03da5_0 protobuf 3.19.6 pypi_0 pypi pyasn1 0.4.8 pypi_0 pypi pyasn1-modules 0.2.8 pypi_0 pypi pyparsing 3.0.9 pypi_0 pypi python 3.10.6 hbdb9e5c_1 readline 8.2 h1a28f6b_0 requests 2.28.1 pypi_0 pypi requests-oauthlib 1.3.1 pypi_0 pypi rsa 4.9 pypi_0 pypi setuptools 65.5.0 py310hca03da5_0 six 1.16.0 pyhd3eb1b0_1 sqlite 3.39.3 h1058600_0 tensorboard 2.10.1 pypi_0 pypi tensorboard-data-server 0.6.1 pypi_0 pypi tensorboard-plugin-wit 1.8.1 pypi_0 pypi tensorflow-deps 2.9.0 0 apple tensorflow-estimator 2.10.0 pypi_0 pypi tensorflow-macos 2.10.0 pypi_0 pypi tensorflow-metal 0.6.0 pypi_0 pypi termcolor 2.1.0 pypi_0 pypi tk 8.6.12 hb8d0fd4_0 typing-extensions 4.4.0 pypi_0 pypi tzdata 2022f h04d1e81_0 urllib3 1.26.12 pypi_0 pypi werkzeug 2.2.2 pypi_0 pypi wheel 0.37.1 pyhd3eb1b0_0 wrapt 1.14.1 pypi_0 pypi xz 5.2.6 h1a28f6b_0 zlib 1.2.13 h5a0b063_0 Thanks, Ed
[ "python -m pip install tensorflow-macos==2.9.0\nApple has not updated tensorflow-deps to 2.10 now.\n" ]
[ 0 ]
[]
[]
[ "apple_m1", "macos", "miniconda", "python", "tensorflow" ]
stackoverflow_0074309572_apple_m1_macos_miniconda_python_tensorflow.txt
Q: Replace value in column based on multiple conditions in Pandas I have this dataframe df = pd.DataFrame.from_dict( { 'Name': ['Jane', 'Melissa', 'John', 'Matt'], 'Age': [23, 45, 35, 64], 'Birth City': ['London', 'Paris', 'Toronto', 'Atlanta'], 'Gender': ['F', 'F', 'M', 'M'] } ) and I want to replace the Gender to X, when the name is Melissa or John. How would I do this? A: Here is my solution: df.loc[((df['Name'] == 'Melissa') | (df['Name'] == 'John')), 'Gender'] = 'X' And output Name Age Birth City Gender 0 Jane 23 London F 1 Melissa 45 Paris X 2 John 35 Toronto X 3 Matt 64 Atlanta M A: A possible solution: df['Gender'] = df['Gender'].mask(df['Name'].isin(['Melissa', 'John']), other='X') Output: Name Age Birth City Gender 0 Jane 23 London F 1 Melissa 45 Paris X 2 John 35 Toronto X 3 Matt 64 Atlanta M
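For completeness, the same replacement can also be written with .loc plus isin, which stays readable if the list of names grows; this is equivalent to the answers below:
df.loc[df['Name'].isin(['Melissa', 'John']), 'Gender'] = 'X'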
Replace value in column based on multiple conditions in Pandas
I have this dataframe df = pd.DataFrame.from_dict( { 'Name': ['Jane', 'Melissa', 'John', 'Matt'], 'Age': [23, 45, 35, 64], 'Birth City': ['London', 'Paris', 'Toronto', 'Atlanta'], 'Gender': ['F', 'F', 'M', 'M'] } ) and I want to replace the Gender to X, when the name is Melissa or John. How would I do this?
[ "Here is my solution:\ndf.loc[((df['Name'] == 'Melissa') | (df['Name'] == 'John')), 'Gender'] = 'X'\n\nAnd output\n Name Age Birth City Gender\n0 Jane 23 London F\n1 Melissa 45 Paris X\n2 John 35 Toronto X\n3 Matt 64 Atlanta M\n\n", "A possible solution:\ndf['Gender'] = df['Gender'].mask(df['Name'].isin(['Melissa', 'John']), other='X')\n\nOutput:\n Name Age Birth City Gender\n0 Jane 23 London F\n1 Melissa 45 Paris X\n2 John 35 Toronto X\n3 Matt 64 Atlanta M\n\n" ]
[ 3, 2 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074645373_pandas_python.txt
Q: Splitting a large CSV file and converting into multiple Parquet files - Safe? I learnt, the parquet file format stores a bunch of metadata and uses various compressions to store data in an efficient way, when it comes to size and query-speed. And it possibly generates multiple files out of, let's say: one input, like from a Panda dataframe. Now, I have a large CSV file and I want to convert it into a parquet file format. Naively, I would remove the header (store elsewhere for later) and chunk the file up in blocks with n lines. Then turn each chunk into parquet (here Python): table = pyarrow.csv.read_csv(fileName) pyarrow.parquet.write_table(table, fileName.replace('csv', 'parquet')) I guess the method doesn't much matter. From what I see, at least with a small test data set and no extra context, I get one parquet file per csv file (1:1). For now that is all I need, as I am not doing queries on "the whole", logical data set. I use the raw files, as input to a further cleaning step that is nifty to do with the csv format. And I haven't yet tried reading the files... Do I have to readd the header to each CSV chunk at the least? Is this as straight-forward as I think, or am I missing something? A: When creating a parquet dataset with Mutiple files, All the files should have matching schema. In your case, when you split the csv file into Mutiple parquet files, you will have to include the csv headers in each chunk to create a valid parquet file. Note that parquet is a compressed format (with a high compression ratio). Parquet data will be much smaller than the csv data. On top of that, applications that read parquet file usually prefer fewer large parquet file and not many small parquet files. A: An easy way to write a partitioned parquet file is with dask.dataframe. You could even read in the data with dask.dataframe.read_csv and then you don't have to do any conversion: import dask.dataframe # here, the block size will determine the partition boundaries, which will # be preserved in the parquet file. So if you have a 5 GB file, this would # write 50 partitions: df = dask.dataframe.read_csv(fileName, blocksize="100MB") df.to_parquet(fileName.replace(".csv", ".parquet"))
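A pyarrow-only sketch that streams the CSV in blocks and appends them to a single parquet file, so the header/schema is handled once rather than per chunk (file names and the block size are illustrative):
import pyarrow as pa
import pyarrow.csv as pv
import pyarrow.parquet as pq

# read the CSV in ~64 MB blocks instead of loading it all at once
reader = pv.open_csv("input.csv", read_options=pv.ReadOptions(block_size=64 * 1024 * 1024))

writer = None
for batch in reader:
    table = pa.Table.from_batches([batch])
    if writer is None:
        # the schema (column names from the header) is fixed on the first block
        writer = pq.ParquetWriter("input.parquet", table.schema)
    writer.write_table(table)
if writer is not None:
    writer.close()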
Splitting a large CSV file and converting into multiple Parquet files - Safe?
I learnt, the parquet file format stores a bunch of metadata and uses various compressions to store data in an efficient way, when it comes to size and query-speed. And it possibly generates multiple files out of, let's say: one input, like from a Panda dataframe. Now, I have a large CSV file and I want to convert it into a parquet file format. Naively, I would remove the header (store elsewhere for later) and chunk the file up in blocks with n lines. Then turn each chunk into parquet (here Python): table = pyarrow.csv.read_csv(fileName) pyarrow.parquet.write_table(table, fileName.replace('csv', 'parquet')) I guess the method doesn't much matter. From what I see, at least with a small test data set and no extra context, I get one parquet file per csv file (1:1). For now that is all I need, as I am not doing queries on "the whole", logical data set. I use the raw files, as input to a further cleaning step that is nifty to do with the csv format. And I haven't yet tried reading the files... Do I have to readd the header to each CSV chunk at the least? Is this as straight-forward as I think, or am I missing something?
[ "When creating a parquet dataset with Mutiple files, All the files should have matching schema. In your case, when you split the csv file into Mutiple parquet files, you will have to include the csv headers in each chunk to create a valid parquet file.\nNote that parquet is a compressed format (with a high compression ratio). Parquet data will be much smaller than the csv data. On top of that, applications that read parquet file usually prefer fewer large parquet file and not many small parquet files.\n", "An easy way to write a partitioned parquet file is with dask.dataframe. You could even read in the data with dask.dataframe.read_csv and then you don't have to do any conversion:\nimport dask.dataframe\n\n# here, the block size will determine the partition boundaries, which will\n# be preserved in the parquet file. So if you have a 5 GB file, this would\n# write 50 partitions:\ndf = dask.dataframe.read_csv(fileName, blocksize=\"100MB\")\ndf.to_parquet(fileName.replace(\".csv\", \".parquet\"))\n\n" ]
[ 2, 1 ]
[]
[]
[ "csv", "parquet", "python" ]
stackoverflow_0074618182_csv_parquet_python.txt
Q: open-cv installation in docker image does not work on raspberry pi I have created python project with some dependencies, among them open-cv. Now I want to deploy my project in a docker image. For this, I created the following build-file on my local machine (running Ubuntu 22.04): # syntax=docker/dockerfile:1 FROM python:3.8-slim-buster WORKDIR /app COPY requirements.txt . COPY main.py . RUN apt-get update RUN apt-get install ffmpeg libsm6 libxext6 -y RUN pip3 install -r requirements.txt CMD python3 main.py Those are all of my requirements: numpy==1.23.4 opencv-python==4.6.0.66 matplotlib==3.6.1 Pillow==9.3.0 XlsxWriter==3.0.3 keyboard==0.13.5 When building the image on my machine using docker build --rm -t dockerfile:latest . everything works fine. The image is built and I can use it how I intend to. Now I wanted to build the image on a raspberry pi (running Raspbian GNU/Linux 11 (bullseye)). I have also tried this withFROM arm32v7/python:3.8-slim-buster, yielding the same results. The build fails with a very long error message: Installing build dependencies: started Installing build dependencies: finished with status 'error' error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> [377 lines of output] Ignoring numpy: markers 'python_version == "3.6" and platform_machine != "aarch64" and platform_machine != "arm64"' don't match your environment Ignoring numpy: markers 'python_version == "3.7" and platform_machine != "aarch64" and platform_machine != "arm64"' don't match your environment Ignoring numpy: markers 'python_version <= "3.9" and sys_platform == "linux" and platform_machine == "aarch64"' don't match your environment Ignoring numpy: markers 'python_version <= "3.9" and sys_platform == "darwin" and platform_machine == "arm64"' don't match your environment Ignoring numpy: markers 'python_version == "3.9" and platform_machine != "aarch64" and platform_machine != "arm64"' don't match your environment Ignoring numpy: markers 'python_version >= "3.10"' don't match your environment Collecting setuptools==59.2.0 Using cached setuptools-59.2.0-py3-none-any.whl (952 kB) Collecting wheel==0.37.0 Using cached wheel-0.37.0-py2.py3-none-any.whl (35 kB) Collecting cmake>=3.1 Downloading cmake-3.25.0.tar.gz (33 kB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting pip Downloading pip-22.3.1-py3-none-any.whl (2.1 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 2.2 MB/s eta 0:00:00 Collecting scikit-build>=0.13.2 Using cached scikit_build-0.16.2-py3-none-any.whl (78 kB) Collecting numpy==1.17.3 Downloading numpy-1.17.3.zip (6.4 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.4/6.4 MB 2.6 MB/s eta 0:00:00 Preparing metadata (setup.py): started Preparing metadata (setup.py): finished with status 'done' Collecting distro Using cached distro-1.8.0-py3-none-any.whl (20 kB) Collecting packaging Using cached packaging-21.3-py3-none-any.whl (40 kB) Collecting pyparsing!=3.0.5,>=2.0.2 Using cached pyparsing-3.0.9-py3-none-any.whl (98 kB) Building wheels for collected packages: numpy, cmake Building wheel for numpy (setup.py): started Building wheel for numpy (setup.py): finished with status 'error' error: subprocess-exited-with-error × python setup.py 
bdist_wheel did not run successfully. │ exit code: 1 ╰─> [263 lines of output] Running from numpy source directory. blas_opt_info: blas_mkl_info: customize UnixCCompiler libraries mkl_rt not found in ['/usr/local/lib', '/usr/lib'] NOT AVAILABLE blis_info: customize UnixCCompiler libraries blis not found in ['/usr/local/lib', '/usr/lib'] NOT AVAILABLE openblas_info: customize UnixCCompiler customize UnixCCompiler libraries openblas not found in ['/usr/local/lib', '/usr/lib'] NOT AVAILABLE atlas_3_10_blas_threads_info: Setting PTATLAS=ATLAS customize UnixCCompiler libraries tatlas not found in ['/usr/local/lib', '/usr/lib'] NOT AVAILABLE atlas_3_10_blas_info: customize UnixCCompiler libraries satlas not found in ['/usr/local/lib', '/usr/lib'] NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS customize UnixCCompiler libraries ptf77blas,ptcblas,atlas not found in ['/usr/local/lib', '/usr/lib'] NOT AVAILABLE atlas_blas_info: customize UnixCCompiler libraries f77blas,cblas,atlas not found in ['/usr/local/lib', '/usr/lib'] NOT AVAILABLE accelerate_info: NOT AVAILABLE /tmp/pip-install-b1zdl1d3/numpy_c51059096ab144ca9ad2b38cd023e512/numpy/distutils/system_info.py:690: UserWarning: Optimized (vendor) Blas libraries are not found. Falls back to netlib Blas library which has worse performance. A better performance should be easily gained by switching Blas library. self.calc_info() blas_info: customize UnixCCompiler libraries blas not found in ['/usr/local/lib', '/usr/lib'] NOT AVAILABLE /tmp/pip-install-b1zdl1d3/numpy_c51059096ab144ca9ad2b38cd023e512/numpy/distutils/system_info.py:690: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. self.calc_info() blas_src_info: NOT AVAILABLE /tmp/pip-install-b1zdl1d3/numpy_c51059096ab144ca9ad2b38cd023e512/numpy/distutils/system_info.py:690: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. 
self.calc_info() NOT AVAILABLE /bin/sh: 1: svnversion: not found non-existing path in 'numpy/distutils': 'site.cfg' lapack_opt_info: lapack_mkl_info: customize UnixCCompiler libraries mkl_rt not found in ['/usr/local/lib', '/usr/lib'] NOT AVAILABLE openblas_lapack_info: customize UnixCCompiler customize UnixCCompiler libraries openblas not found in ['/usr/local/lib', '/usr/lib'] NOT AVAILABLE openblas_clapack_info: customize UnixCCompiler customize UnixCCompiler libraries openblas,lapack not found in ['/usr/local/lib', '/usr/lib'] NOT AVAILABLE flame_info: customize UnixCCompiler libraries flame not found in ['/usr/local/lib', '/usr/lib'] NOT AVAILABLE atlas_3_10_threads_info: Setting PTATLAS=ATLAS customize UnixCCompiler libraries lapack_atlas not found in /usr/local/lib customize UnixCCompiler libraries tatlas,tatlas not found in /usr/local/lib customize UnixCCompiler libraries lapack_atlas not found in /usr/lib customize UnixCCompiler libraries tatlas,tatlas not found in /usr/lib <class 'numpy.distutils.system_info.atlas_3_10_threads_info'> NOT AVAILABLE atlas_3_10_info: customize UnixCCompiler libraries lapack_atlas not found in /usr/local/lib customize UnixCCompiler libraries satlas,satlas not found in /usr/local/lib customize UnixCCompiler libraries lapack_atlas not found in /usr/lib customize UnixCCompiler libraries satlas,satlas not found in /usr/lib <class 'numpy.distutils.system_info.atlas_3_10_info'> NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS customize UnixCCompiler libraries lapack_atlas not found in /usr/local/lib customize UnixCCompiler libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib customize UnixCCompiler libraries lapack_atlas not found in /usr/lib customize UnixCCompiler libraries ptf77blas,ptcblas,atlas not found in /usr/lib <class 'numpy.distutils.system_info.atlas_threads_info'> NOT AVAILABLE atlas_info: customize UnixCCompiler libraries lapack_atlas not found in /usr/local/lib customize UnixCCompiler libraries f77blas,cblas,atlas not found in /usr/local/lib customize UnixCCompiler libraries lapack_atlas not found in /usr/lib customize UnixCCompiler libraries f77blas,cblas,atlas not found in /usr/lib <class 'numpy.distutils.system_info.atlas_info'> NOT AVAILABLE lapack_info: customize UnixCCompiler libraries lapack not found in ['/usr/local/lib', '/usr/lib'] NOT AVAILABLE /tmp/pip-install-b1zdl1d3/numpy_c51059096ab144ca9ad2b38cd023e512/numpy/distutils/system_info.py:1712: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. if getattr(self, '_calc_info_{}'.format(lapack))(): lapack_src_info: NOT AVAILABLE /tmp/pip-install-b1zdl1d3/numpy_c51059096ab144ca9ad2b38cd023e512/numpy/distutils/system_info.py:1712: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. 
if getattr(self, '_calc_info_{}'.format(lapack))(): NOT AVAILABLE /usr/local/lib/python3.8/distutils/dist.py:274: UserWarning: Unknown distribution option: 'define_macros' warnings.warn(msg) running bdist_wheel running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src build_src building py_modules sources creating build creating build/src.linux-armv7l-3.8 creating build/src.linux-armv7l-3.8/numpy creating build/src.linux-armv7l-3.8/numpy/distutils building library "npymath" sources get_default_fcompiler: matching types: '['gnu95', 'intel', 'lahey', 'pg', 'absoft', 'nag', 'vast', 'compaq', 'intele', 'intelem', 'gnu', 'g95', 'pathf95', 'nagfor']' customize Gnu95FCompiler Could not locate executable gfortran Could not locate executable f95 customize IntelFCompiler Could not locate executable ifort Could not locate executable ifc customize LaheyFCompiler Could not locate executable lf95 customize PGroupFCompiler Could not locate executable pgfortran customize AbsoftFCompiler Could not locate executable f90 Could not locate executable f77 customize NAGFCompiler customize VastFCompiler customize CompaqFCompiler Could not locate executable fort customize IntelItaniumFCompiler Could not locate executable efort Could not locate executable efc customize IntelEM64TFCompiler customize GnuFCompiler Could not locate executable g77 customize G95FCompiler Could not locate executable g95 customize PathScaleFCompiler Could not locate executable pathf95 customize NAGFORCompiler Could not locate executable nagfor don't know how to compile Fortran code on platform 'posix' C compiler: gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include/python3.8 -c' gcc: _configtest.c failure. 
removing: _configtest.c _configtest.o Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 34, in <module> File "/tmp/pip-install-b1zdl1d3/numpy_c51059096ab144ca9ad2b38cd023e512/setup.py", line 443, in <module> setup_package() File "/tmp/pip-install-b1zdl1d3/numpy_c51059096ab144ca9ad2b38cd023e512/setup.py", line 435, in setup_package setup(**metadata) File "/tmp/pip-install-b1zdl1d3/numpy_c51059096ab144ca9ad2b38cd023e512/numpy/distutils/core.py", line 171, in setup return old_setup(**new_attr) File "/usr/local/lib/python3.8/site-packages/setuptools/__init__.py", line 153, in setup return distutils.core.setup(**attrs) File "/usr/local/lib/python3.8/distutils/core.py", line 148, in setup dist.run_commands() File "/usr/local/lib/python3.8/distutils/dist.py", line 966, in run_commands self.run_command(cmd) File "/usr/local/lib/python3.8/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/usr/local/lib/python3.8/site-packages/wheel/bdist_wheel.py", line 325, in run self.run_command("build") File "/usr/local/lib/python3.8/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/local/lib/python3.8/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/tmp/pip-install-b1zdl1d3/numpy_c51059096ab144ca9ad2b38cd023e512/numpy/distutils/command/build.py", line 47, in run old_build.run(self) File "/usr/local/lib/python3.8/distutils/command/build.py", line 135, in run self.run_command(cmd_name) File "/usr/local/lib/python3.8/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/local/lib/python3.8/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/tmp/pip-install-b1zdl1d3/numpy_c51059096ab144ca9ad2b38cd023e512/numpy/distutils/command/build_src.py", line 142, in run self.build_sources() File "/tmp/pip-install-b1zdl1d3/numpy_c51059096ab144ca9ad2b38cd023e512/numpy/distutils/command/build_src.py", line 153, in build_sources self.build_library_sources(*libname_info) File "/tmp/pip-install-b1zdl1d3/numpy_c51059096ab144ca9ad2b38cd023e512/numpy/distutils/command/build_src.py", line 286, in build_library_sources sources = self.generate_sources(sources, (lib_name, build_info)) File "/tmp/pip-install-b1zdl1d3/numpy_c51059096ab144ca9ad2b38cd023e512/numpy/distutils/command/build_src.py", line 369, in generate_sources source = func(extension, build_dir) File "numpy/core/setup.py", line 669, in get_mathlib_info raise RuntimeError("Broken toolchain: cannot link a simple C program") RuntimeError: Broken toolchain: cannot link a simple C program [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for numpy Running setup.py clean for numpy error: subprocess-exited-with-error × python setup.py clean did not run successfully. │ exit code: 1 ╰─> [10 lines of output] Running from numpy source directory. `setup.py clean` is not supported, use one of the following instead: - `git clean -xdf` (cleans all files) - `git clean -Xdf` (cleans all versioned files, doesn't touch files that aren't checked into the git repo) Add `--force` to your command to use it anyway if you must (unsupported). [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. 
ERROR: Failed cleaning build dir for numpy Building wheel for cmake (pyproject.toml): started Building wheel for cmake (pyproject.toml): finished with status 'error' error: subprocess-exited-with-error × Building wheel for cmake (pyproject.toml) did not run successfully. │ exit code: 1 ╰─> [33 lines of output] Traceback (most recent call last): File "/tmp/pip-build-env-q0a3b44q/overlay/lib/python3.8/site-packages/skbuild/setuptools_wrap.py", line 612, in setup cmkr = cmaker.CMaker(cmake_executable) File "/tmp/pip-build-env-q0a3b44q/overlay/lib/python3.8/site-packages/skbuild/cmaker.py", line 148, in __init__ self.cmake_version = get_cmake_version(self.cmake_executable) File "/tmp/pip-build-env-q0a3b44q/overlay/lib/python3.8/site-packages/skbuild/cmaker.py", line 103, in get_cmake_version raise SKBuildError( =============================DEBUG ASSISTANCE============================= If you are seeing a compilation error please try the following steps to successfully install cmake: 1) Upgrade to the latest pip and try again. This will fix errors for most users. See: https://pip.pypa.io/en/stable/installing/#upgrading-pip 2) If on Linux, with glibc < 2.12, you can set PIP_ONLY_BINARY=cmake in order to retrieve the last manylinux1 compatible wheel. 3) If on Linux, with glibc < 2.12, you can cap "cmake<3.23" in your requirements in order to retrieve the last manylinux1 compatible wheel. 4) Open an issue with the debug information that follows at https://github.com/scikit-build/cmake-python-distributions/issues Python: 3.8.15 platform: Linux-5.15.32-v7l+-armv7l-with-glibc2.4 glibc: glibc 2.28 machine: armv7l bits: 32 pip: n/a setuptools: 65.6.3 scikit-build: 0.16.2 PEP517_BUILD_BACKEND=setuptools.build_meta =============================DEBUG ASSISTANCE============================= Problem with the CMake installation, aborting build. CMake executable is cmake [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for cmake Failed to build numpy cmake ERROR: Could not build wheels for cmake, which is required to install pyproject.toml-based projects WARNING: You are using pip version 22.0.4; however, version 22.3.1 is available. You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. WARNING: You are using pip version 22.0.4; however, version 22.3.1 is available. You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command. I have tried to base my image on other images: FROM ubuntu:22:04 (additionally installing python and pip) FROM python:3.9 FROM python:3.8-bullseye I have also tried other python versions. Those do not give an error, but when it comes to installing open-cv, it says Installing build dependencies: started Installing build dependencies: still running... while repeating the latter (endlessly it seems, I let it run for about 15min without any changes). I have also tried adding the following (after researching stackoverflow): RUN apt-get install lbhdf5-dev libhdf5-serial-dev libatlas-base-dev -y without any results. 
I also tried building the image on my local machine and loading it onto the Raspberry Pi, yielding the following error message when running the container: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm/v7) and no specific platform was requested exec /bin/sh: exec format error Thanks in advance for any help. A: I solved this by updating the Raspberry Pi to a 64-Bit installation.
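A minimal sketch of what the 64-bit route can look like, assuming a 64-bit Raspberry Pi OS host and that prebuilt aarch64 wheels are available on PyPI for the pinned numpy/opencv versions; the base image tag below is an assumption for illustration, not taken from the original Dockerfile:

# syntax=docker/dockerfile:1
# Hypothetical 64-bit variant: on an aarch64 host, docker pulls the arm64 build of this
# multi-arch image, and pip can usually install prebuilt aarch64 wheels for numpy and
# opencv-python instead of compiling them from source.
FROM python:3.8-slim-bullseye
WORKDIR /app
COPY requirements.txt .
COPY main.py .
RUN apt-get update && apt-get install -y ffmpeg libsm6 libxext6
RUN pip3 install --upgrade pip && pip3 install -r requirements.txt
CMD python3 main.py

On a 32-bit (armv7l) host the same Dockerfile would still fall back to source builds, which is why switching the OS to 64-bit resolved the problem in the accepted answer.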
open-cv installation in docker image does not work on raspberry pi
I have created python project with some dependencies, among them open-cv. Now I want to deploy my project in a docker image. For this, I created the following build-file on my local machine (running Ubuntu 22.04): # syntax=docker/dockerfile:1 FROM python:3.8-slim-buster WORKDIR /app COPY requirements.txt . COPY main.py . RUN apt-get update RUN apt-get install ffmpeg libsm6 libxext6 -y RUN pip3 install -r requirements.txt CMD python3 main.py Those are all of my requirements: numpy==1.23.4 opencv-python==4.6.0.66 matplotlib==3.6.1 Pillow==9.3.0 XlsxWriter==3.0.3 keyboard==0.13.5 When building the image on my machine using docker build --rm -t dockerfile:latest . everything works fine. The image is built and I can use it how I intend to. Now I wanted to build the image on a raspberry pi (running Raspbian GNU/Linux 11 (bullseye)). I have also tried this withFROM arm32v7/python:3.8-slim-buster, yielding the same results. The build fails with a very long error message: Installing build dependencies: started Installing build dependencies: finished with status 'error' error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> [377 lines of output] Ignoring numpy: markers 'python_version == "3.6" and platform_machine != "aarch64" and platform_machine != "arm64"' don't match your environment Ignoring numpy: markers 'python_version == "3.7" and platform_machine != "aarch64" and platform_machine != "arm64"' don't match your environment Ignoring numpy: markers 'python_version <= "3.9" and sys_platform == "linux" and platform_machine == "aarch64"' don't match your environment Ignoring numpy: markers 'python_version <= "3.9" and sys_platform == "darwin" and platform_machine == "arm64"' don't match your environment Ignoring numpy: markers 'python_version == "3.9" and platform_machine != "aarch64" and platform_machine != "arm64"' don't match your environment Ignoring numpy: markers 'python_version >= "3.10"' don't match your environment Collecting setuptools==59.2.0 Using cached setuptools-59.2.0-py3-none-any.whl (952 kB) Collecting wheel==0.37.0 Using cached wheel-0.37.0-py2.py3-none-any.whl (35 kB) Collecting cmake>=3.1 Downloading cmake-3.25.0.tar.gz (33 kB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting pip Downloading pip-22.3.1-py3-none-any.whl (2.1 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 2.2 MB/s eta 0:00:00 Collecting scikit-build>=0.13.2 Using cached scikit_build-0.16.2-py3-none-any.whl (78 kB) Collecting numpy==1.17.3 Downloading numpy-1.17.3.zip (6.4 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.4/6.4 MB 2.6 MB/s eta 0:00:00 Preparing metadata (setup.py): started Preparing metadata (setup.py): finished with status 'done' Collecting distro Using cached distro-1.8.0-py3-none-any.whl (20 kB) Collecting packaging Using cached packaging-21.3-py3-none-any.whl (40 kB) Collecting pyparsing!=3.0.5,>=2.0.2 Using cached pyparsing-3.0.9-py3-none-any.whl (98 kB) Building wheels for collected packages: numpy, cmake Building wheel for numpy (setup.py): started Building wheel for numpy (setup.py): finished with status 'error' error: subprocess-exited-with-error × python setup.py bdist_wheel did not run successfully. 
│ exit code: 1 ╰─> [263 lines of output] Running from numpy source directory. blas_opt_info: blas_mkl_info: customize UnixCCompiler libraries mkl_rt not found in ['/usr/local/lib', '/usr/lib'] NOT AVAILABLE blis_info: customize UnixCCompiler libraries blis not found in ['/usr/local/lib', '/usr/lib'] NOT AVAILABLE openblas_info: customize UnixCCompiler customize UnixCCompiler libraries openblas not found in ['/usr/local/lib', '/usr/lib'] NOT AVAILABLE atlas_3_10_blas_threads_info: Setting PTATLAS=ATLAS customize UnixCCompiler libraries tatlas not found in ['/usr/local/lib', '/usr/lib'] NOT AVAILABLE atlas_3_10_blas_info: customize UnixCCompiler libraries satlas not found in ['/usr/local/lib', '/usr/lib'] NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS customize UnixCCompiler libraries ptf77blas,ptcblas,atlas not found in ['/usr/local/lib', '/usr/lib'] NOT AVAILABLE atlas_blas_info: customize UnixCCompiler libraries f77blas,cblas,atlas not found in ['/usr/local/lib', '/usr/lib'] NOT AVAILABLE accelerate_info: NOT AVAILABLE /tmp/pip-install-b1zdl1d3/numpy_c51059096ab144ca9ad2b38cd023e512/numpy/distutils/system_info.py:690: UserWarning: Optimized (vendor) Blas libraries are not found. Falls back to netlib Blas library which has worse performance. A better performance should be easily gained by switching Blas library. self.calc_info() blas_info: customize UnixCCompiler libraries blas not found in ['/usr/local/lib', '/usr/lib'] NOT AVAILABLE /tmp/pip-install-b1zdl1d3/numpy_c51059096ab144ca9ad2b38cd023e512/numpy/distutils/system_info.py:690: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. self.calc_info() blas_src_info: NOT AVAILABLE /tmp/pip-install-b1zdl1d3/numpy_c51059096ab144ca9ad2b38cd023e512/numpy/distutils/system_info.py:690: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. 
self.calc_info() NOT AVAILABLE /bin/sh: 1: svnversion: not found non-existing path in 'numpy/distutils': 'site.cfg' lapack_opt_info: lapack_mkl_info: customize UnixCCompiler libraries mkl_rt not found in ['/usr/local/lib', '/usr/lib'] NOT AVAILABLE openblas_lapack_info: customize UnixCCompiler customize UnixCCompiler libraries openblas not found in ['/usr/local/lib', '/usr/lib'] NOT AVAILABLE openblas_clapack_info: customize UnixCCompiler customize UnixCCompiler libraries openblas,lapack not found in ['/usr/local/lib', '/usr/lib'] NOT AVAILABLE flame_info: customize UnixCCompiler libraries flame not found in ['/usr/local/lib', '/usr/lib'] NOT AVAILABLE atlas_3_10_threads_info: Setting PTATLAS=ATLAS customize UnixCCompiler libraries lapack_atlas not found in /usr/local/lib customize UnixCCompiler libraries tatlas,tatlas not found in /usr/local/lib customize UnixCCompiler libraries lapack_atlas not found in /usr/lib customize UnixCCompiler libraries tatlas,tatlas not found in /usr/lib <class 'numpy.distutils.system_info.atlas_3_10_threads_info'> NOT AVAILABLE atlas_3_10_info: customize UnixCCompiler libraries lapack_atlas not found in /usr/local/lib customize UnixCCompiler libraries satlas,satlas not found in /usr/local/lib customize UnixCCompiler libraries lapack_atlas not found in /usr/lib customize UnixCCompiler libraries satlas,satlas not found in /usr/lib <class 'numpy.distutils.system_info.atlas_3_10_info'> NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS customize UnixCCompiler libraries lapack_atlas not found in /usr/local/lib customize UnixCCompiler libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib customize UnixCCompiler libraries lapack_atlas not found in /usr/lib customize UnixCCompiler libraries ptf77blas,ptcblas,atlas not found in /usr/lib <class 'numpy.distutils.system_info.atlas_threads_info'> NOT AVAILABLE atlas_info: customize UnixCCompiler libraries lapack_atlas not found in /usr/local/lib customize UnixCCompiler libraries f77blas,cblas,atlas not found in /usr/local/lib customize UnixCCompiler libraries lapack_atlas not found in /usr/lib customize UnixCCompiler libraries f77blas,cblas,atlas not found in /usr/lib <class 'numpy.distutils.system_info.atlas_info'> NOT AVAILABLE lapack_info: customize UnixCCompiler libraries lapack not found in ['/usr/local/lib', '/usr/lib'] NOT AVAILABLE /tmp/pip-install-b1zdl1d3/numpy_c51059096ab144ca9ad2b38cd023e512/numpy/distutils/system_info.py:1712: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. if getattr(self, '_calc_info_{}'.format(lapack))(): lapack_src_info: NOT AVAILABLE /tmp/pip-install-b1zdl1d3/numpy_c51059096ab144ca9ad2b38cd023e512/numpy/distutils/system_info.py:1712: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. 
if getattr(self, '_calc_info_{}'.format(lapack))(): NOT AVAILABLE /usr/local/lib/python3.8/distutils/dist.py:274: UserWarning: Unknown distribution option: 'define_macros' warnings.warn(msg) running bdist_wheel running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src build_src building py_modules sources creating build creating build/src.linux-armv7l-3.8 creating build/src.linux-armv7l-3.8/numpy creating build/src.linux-armv7l-3.8/numpy/distutils building library "npymath" sources get_default_fcompiler: matching types: '['gnu95', 'intel', 'lahey', 'pg', 'absoft', 'nag', 'vast', 'compaq', 'intele', 'intelem', 'gnu', 'g95', 'pathf95', 'nagfor']' customize Gnu95FCompiler Could not locate executable gfortran Could not locate executable f95 customize IntelFCompiler Could not locate executable ifort Could not locate executable ifc customize LaheyFCompiler Could not locate executable lf95 customize PGroupFCompiler Could not locate executable pgfortran customize AbsoftFCompiler Could not locate executable f90 Could not locate executable f77 customize NAGFCompiler customize VastFCompiler customize CompaqFCompiler Could not locate executable fort customize IntelItaniumFCompiler Could not locate executable efort Could not locate executable efc customize IntelEM64TFCompiler customize GnuFCompiler Could not locate executable g77 customize G95FCompiler Could not locate executable g95 customize PathScaleFCompiler Could not locate executable pathf95 customize NAGFORCompiler Could not locate executable nagfor don't know how to compile Fortran code on platform 'posix' C compiler: gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include/python3.8 -c' gcc: _configtest.c failure. 
removing: _configtest.c _configtest.o Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 34, in <module> File "/tmp/pip-install-b1zdl1d3/numpy_c51059096ab144ca9ad2b38cd023e512/setup.py", line 443, in <module> setup_package() File "/tmp/pip-install-b1zdl1d3/numpy_c51059096ab144ca9ad2b38cd023e512/setup.py", line 435, in setup_package setup(**metadata) File "/tmp/pip-install-b1zdl1d3/numpy_c51059096ab144ca9ad2b38cd023e512/numpy/distutils/core.py", line 171, in setup return old_setup(**new_attr) File "/usr/local/lib/python3.8/site-packages/setuptools/__init__.py", line 153, in setup return distutils.core.setup(**attrs) File "/usr/local/lib/python3.8/distutils/core.py", line 148, in setup dist.run_commands() File "/usr/local/lib/python3.8/distutils/dist.py", line 966, in run_commands self.run_command(cmd) File "/usr/local/lib/python3.8/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/usr/local/lib/python3.8/site-packages/wheel/bdist_wheel.py", line 325, in run self.run_command("build") File "/usr/local/lib/python3.8/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/local/lib/python3.8/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/tmp/pip-install-b1zdl1d3/numpy_c51059096ab144ca9ad2b38cd023e512/numpy/distutils/command/build.py", line 47, in run old_build.run(self) File "/usr/local/lib/python3.8/distutils/command/build.py", line 135, in run self.run_command(cmd_name) File "/usr/local/lib/python3.8/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/local/lib/python3.8/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/tmp/pip-install-b1zdl1d3/numpy_c51059096ab144ca9ad2b38cd023e512/numpy/distutils/command/build_src.py", line 142, in run self.build_sources() File "/tmp/pip-install-b1zdl1d3/numpy_c51059096ab144ca9ad2b38cd023e512/numpy/distutils/command/build_src.py", line 153, in build_sources self.build_library_sources(*libname_info) File "/tmp/pip-install-b1zdl1d3/numpy_c51059096ab144ca9ad2b38cd023e512/numpy/distutils/command/build_src.py", line 286, in build_library_sources sources = self.generate_sources(sources, (lib_name, build_info)) File "/tmp/pip-install-b1zdl1d3/numpy_c51059096ab144ca9ad2b38cd023e512/numpy/distutils/command/build_src.py", line 369, in generate_sources source = func(extension, build_dir) File "numpy/core/setup.py", line 669, in get_mathlib_info raise RuntimeError("Broken toolchain: cannot link a simple C program") RuntimeError: Broken toolchain: cannot link a simple C program [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for numpy Running setup.py clean for numpy error: subprocess-exited-with-error × python setup.py clean did not run successfully. │ exit code: 1 ╰─> [10 lines of output] Running from numpy source directory. `setup.py clean` is not supported, use one of the following instead: - `git clean -xdf` (cleans all files) - `git clean -Xdf` (cleans all versioned files, doesn't touch files that aren't checked into the git repo) Add `--force` to your command to use it anyway if you must (unsupported). [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. 
ERROR: Failed cleaning build dir for numpy Building wheel for cmake (pyproject.toml): started Building wheel for cmake (pyproject.toml): finished with status 'error' error: subprocess-exited-with-error × Building wheel for cmake (pyproject.toml) did not run successfully. │ exit code: 1 ╰─> [33 lines of output] Traceback (most recent call last): File "/tmp/pip-build-env-q0a3b44q/overlay/lib/python3.8/site-packages/skbuild/setuptools_wrap.py", line 612, in setup cmkr = cmaker.CMaker(cmake_executable) File "/tmp/pip-build-env-q0a3b44q/overlay/lib/python3.8/site-packages/skbuild/cmaker.py", line 148, in __init__ self.cmake_version = get_cmake_version(self.cmake_executable) File "/tmp/pip-build-env-q0a3b44q/overlay/lib/python3.8/site-packages/skbuild/cmaker.py", line 103, in get_cmake_version raise SKBuildError( =============================DEBUG ASSISTANCE============================= If you are seeing a compilation error please try the following steps to successfully install cmake: 1) Upgrade to the latest pip and try again. This will fix errors for most users. See: https://pip.pypa.io/en/stable/installing/#upgrading-pip 2) If on Linux, with glibc < 2.12, you can set PIP_ONLY_BINARY=cmake in order to retrieve the last manylinux1 compatible wheel. 3) If on Linux, with glibc < 2.12, you can cap "cmake<3.23" in your requirements in order to retrieve the last manylinux1 compatible wheel. 4) Open an issue with the debug information that follows at https://github.com/scikit-build/cmake-python-distributions/issues Python: 3.8.15 platform: Linux-5.15.32-v7l+-armv7l-with-glibc2.4 glibc: glibc 2.28 machine: armv7l bits: 32 pip: n/a setuptools: 65.6.3 scikit-build: 0.16.2 PEP517_BUILD_BACKEND=setuptools.build_meta =============================DEBUG ASSISTANCE============================= Problem with the CMake installation, aborting build. CMake executable is cmake [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for cmake Failed to build numpy cmake ERROR: Could not build wheels for cmake, which is required to install pyproject.toml-based projects WARNING: You are using pip version 22.0.4; however, version 22.3.1 is available. You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. WARNING: You are using pip version 22.0.4; however, version 22.3.1 is available. You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command. I have tried to base my image on other images: FROM ubuntu:22:04 (additionally installing python and pip) FROM python:3.9 FROM python:3.8-bullseye I have also tried other python versions. Those do not give an error, but when it comes to installing open-cv, it says Installing build dependencies: started Installing build dependencies: still running... while repeating the latter (endlessly it seems, I let it run for about 15min without any changes). I have also tried adding the following (after researching stackoverflow): RUN apt-get install lbhdf5-dev libhdf5-serial-dev libatlas-base-dev -y without any results. 
I also tried building the image on my local machine and loading it onto the Raspberry Pi, yielding the following error message when running the container: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm/v7) and no specific platform was requested exec /bin/sh: exec format error Thanks in advance for any help.
[ "I solved this by updating the Raspberry Pi to a 64-Bit installation.\n" ]
[ 0 ]
[]
[]
[ "docker", "opencv", "python", "raspberry_pi" ]
stackoverflow_0074601849_docker_opencv_python_raspberry_pi.txt
Q: How to monitor an AWS Python-based application using an APM tool? I want to know if there's any way I can monitor my application using one of the open-source application performance monitoring (APM) tools. I don't have much knowledge about them, and got pretty confused when I searched for this, so I am asking here. I tried SigNoz, but it is not available for Windows, and I work on Windows. I am looking for a tool that supports macOS/Windows/Linux. I am pretty much starting from scratch, so any direction on how exactly the setup is done, or any other kind of help, would be appreciated. Thank you. A: Sure thing! We have docs available for the Elastic APM Python Agent. Just add elastic-apm to your requirements.txt and then follow the onboarding instructions for whichever framework you're using. Our agent and the Elastic Stack are both fully open source. You can also start a no-credit-card-required trial with Elastic Cloud here if you don't want to run the Elastic Stack yourself.
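As a concrete illustration of the answer's suggestion, here is a minimal sketch of wiring the Elastic APM Python agent into a web app. The question does not name a framework, so Flask is only an assumption here, and the service name and server URL are placeholders:

from flask import Flask
from elasticapm.contrib.flask import ElasticAPM

app = Flask(__name__)

# Placeholder configuration; point SERVER_URL at your own APM Server or Elastic Cloud endpoint
app.config['ELASTIC_APM'] = {
    'SERVICE_NAME': 'my-aws-service',
    'SERVER_URL': 'http://localhost:8200',
    'SECRET_TOKEN': '',
    'ENVIRONMENT': 'dev',
}

# Instruments requests, unhandled errors, and metrics for this app
apm = ElasticAPM(app)

@app.route('/')
def index():
    return 'hello'

This assumes elastic-apm (and Flask) are listed in requirements.txt, as the answer describes.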
How to monitor an AWS Python-based application using an APM tool?
I want to know if there's any way I can monitor my application using one of the open-source application performance monitoring (APM) tools. I don't have much knowledge about them, and got pretty confused when I searched for this, so I am asking here. I tried SigNoz, but it is not available for Windows, and I work on Windows. I am looking for a tool that supports macOS/Windows/Linux. I am pretty much starting from scratch, so any direction on how exactly the setup is done, or any other kind of help, would be appreciated. Thank you.
[ "Sure thing! We have docs available for the Elastic APM Python Agent. Just add elastic-apm to your requirements.txt and then follow the onboarding instructions for whichever framework you're using. Our agent and the Elastic Stack are both fully open source.\nYou can also start a no-credit-card-required trial with Elastic Cloud here if you don't want to run the Elastic Stack yourself.\n" ]
[ 0 ]
[]
[]
[ "amazon_web_services", "elastic_apm", "monitor", "performance", "python" ]
stackoverflow_0074636397_amazon_web_services_elastic_apm_monitor_performance_python.txt
Q: VSCode displays "Module numpy could not be resolved" I have just installed VS Code and am trying to run Python code. In the past, I had already created some virtual environments (which I am able to see in VS Code), but for the moment I am using the base one. I am trying to import some standard libraries which are present in the base environment (I also tried the conda environment), but VS Code reports that the modules could not be resolved (Pylance). I have noticed that when I open VS Code, the terminal tries to activate conda with "conda activate PATH_TO_MY_FOLDER/.conda", so it's trying to activate an environment in my folder, but it displays this error: conda : The term 'conda' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. I have also noticed that VS Code warns me about the PATH environment variable, which contains some """ characters: EDIT: I am also adding the Path environment variable relative to my user (not the system-wide one). Do you have any suggestions on how to proceed? Thanks. A: Try to add path to environment variable. C:\Users\username\AppData\Local\Programs\Microsoft VS Code\bin check the path of your VS Code bin folder
VSCode displays "Module numpy could not be resolved"
I have just installed VS Code and am trying to run Python code. In the past, I had already created some virtual environments (which I am able to see in VS Code), but for the moment I am using the base one. I am trying to import some standard libraries which are present in the base environment (I also tried the conda environment), but VS Code reports that the modules could not be resolved (Pylance). I have noticed that when I open VS Code, the terminal tries to activate conda with "conda activate PATH_TO_MY_FOLDER/.conda", so it's trying to activate an environment in my folder, but it displays this error: conda : The term 'conda' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. I have also noticed that VS Code warns me about the PATH environment variable, which contains some """ characters: EDIT: I am also adding the Path environment variable relative to my user (not the system-wide one). Do you have any suggestions on how to proceed? Thanks.
[ "Try to add path to environment variable.\n\nC:\\Users\\username\\AppData\\Local\\Programs\\Microsoft VS Code\\bin\n\ncheck the path of your VS Code bin folder\n" ]
[ 0 ]
[]
[]
[ "conda", "path", "python", "visual_studio_code" ]
stackoverflow_0074645405_conda_path_python_visual_studio_code.txt
Q: Python: How to prevent a randomly generated number from appearing twice import random import time import sys x = input("Put a number between 1 and 100: ") z = int(x) if z < (0): sys.exit("Number too small") if z > (100): sys.exit("Number too big") y = random.randint(1, 100) while y != z: print("trying again, number was", y) time.sleep(0.2) y = random.randint(1, 100) print("Got it the number was", y) Trying to make a randomly generated number not appear twice. Unsure how to make a number not appear twice I'm trying to keep this as flexible as possible A: Try using random.sample() which samples without replacement: >>> import random >>> random.sample(range(1, 101), k=20) [98, 47, 29, 50, 19, 5, 97, 12, 35, 81, 13, 89, 16, 20, 71, 11, 24, 78, 56, 85] >>> random.sample(range(1, 101), k=20) [36, 41, 47, 69, 57, 98, 73, 54, 89, 86, 8, 79, 38, 17, 90, 65, 78, 30, 77, 23] >>> random.sample(range(1, 101), k=20) [89, 71, 38, 58, 3, 64, 66, 88, 51, 30, 80, 43, 33, 44, 26, 73, 37, 98, 19, 22]
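If the goal is to keep the question's guessing loop but never draw the same number twice, one way (a sketch, not from the original answer) is to shuffle the candidate range once and pop values from it, which preserves the structure of the original code:

import random
import time
import sys

x = input("Put a number between 1 and 100: ")
z = int(x)
if z < 1:
    sys.exit("Number too small")
if z > 100:
    sys.exit("Number too big")

candidates = list(range(1, 101))
random.shuffle(candidates)      # each value can be drawn exactly once

y = candidates.pop()
while y != z:
    print("trying again, number was", y)
    time.sleep(0.2)
    y = candidates.pop()
print("Got it the number was", y)

Because the shuffled list covers every number from 1 to 100, the loop is guaranteed to terminate after at most 100 draws.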
Python: How to prevent a randomly generated number from appearing twice
import random import time import sys x = input("Put a number between 1 and 100: ") z = int(x) if z < (0): sys.exit("Number too small") if z > (100): sys.exit("Number too big") y = random.randint(1, 100) while y != z: print("trying again, number was", y) time.sleep(0.2) y = random.randint(1, 100) print("Got it the number was", y) Trying to make a randomly generated number not appear twice. Unsure how to make a number not appear twice I'm trying to keep this as flexible as possible
[ "Try using random.sample() which samples without replacement:\n>>> import random\n>>> random.sample(range(1, 101), k=20)\n[98, 47, 29, 50, 19, 5, 97, 12, 35, 81, 13, 89, 16, 20, 71, 11, 24, 78, 56, 85]\n>>> random.sample(range(1, 101), k=20)\n[36, 41, 47, 69, 57, 98, 73, 54, 89, 86, 8, 79, 38, 17, 90, 65, 78, 30, 77, 23]\n>>> random.sample(range(1, 101), k=20)\n[89, 71, 38, 58, 3, 64, 66, 88, 51, 30, 80, 43, 33, 44, 26, 73, 37, 98, 19, 22]\n\n" ]
[ 0 ]
[]
[]
[ "list", "numbers", "python", "random" ]
stackoverflow_0074645603_list_numbers_python_random.txt
Q: You are trying to merge on datetime64[ns, UTC] and datetime64[ns] columns. If you wish to proceed you should use pd.concat I'm running into an interesting issue. I've recreated the issue as best I can and am reproducing the same error. Essentially I have a script that is running through a database and collecting information on different assets that have datetime64[ns, UTC] dtypes. This script can return a dataframe with data (df1_utc) or an empty dataframe (empty_df2_utc). I also have empty dataframe's (result_table & result_table2) that I want to merge with the dataframe's from my script (df1 to result_table & empty_df2 to result_table2). When the script's dataframe is empty it throws me the error below, but when it is populated it executes the script perfectly and without errors. Can someone help me to figure out why this is happening, and a possible solution? FYI most of the assets have populated dataframe's, there a few that are not populated. Essentially I just need a solution that I can apply to my code so when it loops through the entire database it can successfully merge the two. I have tried pd.concat and it works, but I'm trying to understand the root cause of the issue and handle it there rather than applying a band-aid downstream. Thank you! Code: import pandas as pd from datetime import date, datetime import numpy as np data1 = [['2022-06-20 12:05:00+00:00', 13.6]] df1_utc = pd.DataFrame(data1, columns=['timestamp', 'VoltageDCDC12V']) df1_utc['timestamp'] = pd.to_datetime(df1_utc['timestamp'], utc=True) df1_utc['VoltageDCDC12V'] = df1_utc['VoltageDCDC12V'].astype(object) empty_df2_utc = pd.DataFrame(columns=['timestamp', 'VoltageDCDC12V']) empty_df2_utc['timestamp'] = pd.to_datetime(empty_df2_utc['timestamp'], utc=True) empty_df2_utc['VoltageDCDC12V'] = empty_df2_utc['VoltageDCDC12V'].astype(object) result_table = pd.DataFrame(columns=['timestamp']) result_table['timestamp'] = pd.to_datetime(result_table['timestamp']) result_table2 = pd.DataFrame(columns=['timestamp']) result_table2['timestamp'] = pd.to_datetime(result_table['timestamp']) print('') print('df1_utc') print(df1_utc) print(df1_utc.dtypes) print('') print('empty_df2_utc') print(empty_df2_utc) print(empty_df2_utc.dtypes) print('') print('result_table') print(result_table) print(result_table.dtypes) print('') print('result_table2') print(result_table2) print(result_table2.dtypes) print('') try: result_table = result_table.merge(df1_utc, on="timestamp", how='outer') except Exception as e: print('Result_table to df1_utc merge error:') print(e) try: result_table2 = result_table2.merge(empty_df2_utc, on="timestamp", how='outer') except Exception as e: print('Result_table2 to empty_df2_utc merge error:') print(e) Output: df1_utc timestamp VoltageDCDC12V 0 2022-06-20 12:05:00+00:00 13.6 timestamp datetime64[ns, UTC] VoltageDCDC12V object dtype: object empty_df2_utc Empty DataFrame Columns: [timestamp, VoltageDCDC12V] Index: [] timestamp datetime64[ns, UTC] VoltageDCDC12V object dtype: object result_table Empty DataFrame Columns: [timestamp] Index: [] timestamp datetime64[ns] dtype: object result_table2 Empty DataFrame Columns: [timestamp] Index: [] timestamp datetime64[ns] dtype: object Result_table2 to empty_df2_utc merge error: You are trying to merge on datetime64[ns] and datetime64[ns, UTC] columns. If you wish to proceed you should use pd.concat A: Your result_tables are not timezone-aware, therefore it causes an error on merge. 
In case you expect all of your data to be in the UTC timezone, you can change the code to this, and it will not cause an error:
result_table = pd.DataFrame(columns=['timestamp'])
result_table['timestamp'] = pd.to_datetime(result_table['timestamp'], utc=True)

result_table2 = pd.DataFrame(columns=['timestamp'])
result_table2['timestamp'] = pd.to_datetime(result_table2['timestamp'], utc=True)
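An alternative sketch, continuing with the question's variable names: if the naive columns already exist, you can make them timezone-aware in place with tz_localize rather than rebuilding the frames (this is standard pandas, not part of the original answer):

# Localize the naive datetime64[ns] columns to UTC so the dtypes match before merging
result_table['timestamp'] = result_table['timestamp'].dt.tz_localize('UTC')
result_table2['timestamp'] = result_table2['timestamp'].dt.tz_localize('UTC')

result_table = result_table.merge(df1_utc, on="timestamp", how='outer')
result_table2 = result_table2.merge(empty_df2_utc, on="timestamp", how='outer')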
You are trying to merge on datetime64[ns, UTC] and datetime64[ns] columns. If you wish to proceed you should use pd.concat
I'm running into an interesting issue. I've recreated the issue as best I can and am reproducing the same error. Essentially I have a script that is running through a database and collecting information on different assets that have datetime64[ns, UTC] dtypes. This script can return a dataframe with data (df1_utc) or an empty dataframe (empty_df2_utc). I also have empty dataframe's (result_table & result_table2) that I want to merge with the dataframe's from my script (df1 to result_table & empty_df2 to result_table2). When the script's dataframe is empty it throws me the error below, but when it is populated it executes the script perfectly and without errors. Can someone help me to figure out why this is happening, and a possible solution? FYI most of the assets have populated dataframe's, there a few that are not populated. Essentially I just need a solution that I can apply to my code so when it loops through the entire database it can successfully merge the two. I have tried pd.concat and it works, but I'm trying to understand the root cause of the issue and handle it there rather than applying a band-aid downstream. Thank you! Code: import pandas as pd from datetime import date, datetime import numpy as np data1 = [['2022-06-20 12:05:00+00:00', 13.6]] df1_utc = pd.DataFrame(data1, columns=['timestamp', 'VoltageDCDC12V']) df1_utc['timestamp'] = pd.to_datetime(df1_utc['timestamp'], utc=True) df1_utc['VoltageDCDC12V'] = df1_utc['VoltageDCDC12V'].astype(object) empty_df2_utc = pd.DataFrame(columns=['timestamp', 'VoltageDCDC12V']) empty_df2_utc['timestamp'] = pd.to_datetime(empty_df2_utc['timestamp'], utc=True) empty_df2_utc['VoltageDCDC12V'] = empty_df2_utc['VoltageDCDC12V'].astype(object) result_table = pd.DataFrame(columns=['timestamp']) result_table['timestamp'] = pd.to_datetime(result_table['timestamp']) result_table2 = pd.DataFrame(columns=['timestamp']) result_table2['timestamp'] = pd.to_datetime(result_table['timestamp']) print('') print('df1_utc') print(df1_utc) print(df1_utc.dtypes) print('') print('empty_df2_utc') print(empty_df2_utc) print(empty_df2_utc.dtypes) print('') print('result_table') print(result_table) print(result_table.dtypes) print('') print('result_table2') print(result_table2) print(result_table2.dtypes) print('') try: result_table = result_table.merge(df1_utc, on="timestamp", how='outer') except Exception as e: print('Result_table to df1_utc merge error:') print(e) try: result_table2 = result_table2.merge(empty_df2_utc, on="timestamp", how='outer') except Exception as e: print('Result_table2 to empty_df2_utc merge error:') print(e) Output: df1_utc timestamp VoltageDCDC12V 0 2022-06-20 12:05:00+00:00 13.6 timestamp datetime64[ns, UTC] VoltageDCDC12V object dtype: object empty_df2_utc Empty DataFrame Columns: [timestamp, VoltageDCDC12V] Index: [] timestamp datetime64[ns, UTC] VoltageDCDC12V object dtype: object result_table Empty DataFrame Columns: [timestamp] Index: [] timestamp datetime64[ns] dtype: object result_table2 Empty DataFrame Columns: [timestamp] Index: [] timestamp datetime64[ns] dtype: object Result_table2 to empty_df2_utc merge error: You are trying to merge on datetime64[ns] and datetime64[ns, UTC] columns. If you wish to proceed you should use pd.concat
[ "Your result_tables are not timezone-aware, therefore it causes an error on merge.\nIn case you except all of your data in UTC timezone, you can change the code to this, and it will not cause an error:\n result_table = pd.DataFrame(columns=['timestamp'])\n result_table['timestamp'] = pd.to_datetime(result_table['timestamp'], utc=True)\n\n result_table2 = pd.DataFrame(columns=['timestamp'])\n result_table2['timestamp'] = pd.to_datetime(result_table['timestamp'], utc=True)\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "merge", "pandas", "python", "python_datetime" ]
stackoverflow_0073964894_dataframe_merge_pandas_python_python_datetime.txt
Q: TypeError : unsupported operand type(s) for +: 'NoneType' and 'int' import pandas as pd import numpy as np import openpyxl time = pd.ExcelFile('Block_time_JP1.xlsx') time.head(3) output : Date_time Station Pending 0 28-11_15:30 DTK2 36 1 28-11_15:30 DTK2 36 2 28-11_15:30 DTK2 36 Then -- d = [0] b=[0] for i in time.index: b[0] = time['Pending'][i] j = d k= d + b for j in k: df['Date_time'][j] = time['Date_time'][j] df['Station'][j] = time['Station'][j] df['Pending'][j] = time['Pending'][j] d[0] = j+1 I am getting error in k = d + b line. Does anyone has an idea of solving this problem? Please help me out. Thanks in advance! Sample example --- input dataset (2 columns) a 2 b 3 output dataset (2 columns) a 2 a 2 b 3 b 3 b 3 I am just trying to execute this thing using the logic inside for loop. A: Let's mentally step through your code, b and d start as [0], one element lists. for i in time.index: b[0] = time['Pending'][i] # I'm guessing `b` is now `[36]` j = d # j is d, not a copy k= d + b # k is [0,36] - list addition is join for j in k: # this use of j overrides the previous assignment # j is going to be 0, and then 36; 0 may be a valid row index; 36? df['Date_time'][j] = time['Date_time'][j] df['Station'][j] = time['Station'][j] df['Pending'][j] = time['Pending'][j] d[0] = j+1 # I expect d to be [37] I won't continue, but that should give you an idea of what you need to track when doing iterations like this. When you get errors like this, make sure you know what the variable are. The error indicates that some how d became None, and b a number, not a list. It isn't obvious from the code how that happened.
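For the transformation the question's sample example actually describes (repeat each row as many times as its Pending count), the loop isn't needed at all. A sketch using index.repeat, with a small made-up frame standing in for the Excel data from the question:

import pandas as pd

# Stand-in for the data read from the Excel file in the question
time = pd.DataFrame({"Station": ["a", "b"], "Pending": [2, 3]})

# Repeat each row Pending times, then reset the index
df = time.loc[time.index.repeat(time["Pending"])].reset_index(drop=True)
print(df)
#   Station  Pending
# 0       a        2
# 1       a        2
# 2       b        3
# 3       b        3
# 4       b        3

The same line works on the real Date_time/Station/Pending columns; only the column used inside repeat() matters.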
TypeError : unsupported operand type(s) for +: 'NoneType' and 'int'
import pandas as pd import numpy as np import openpyxl time = pd.ExcelFile('Block_time_JP1.xlsx') time.head(3) output : Date_time Station Pending 0 28-11_15:30 DTK2 36 1 28-11_15:30 DTK2 36 2 28-11_15:30 DTK2 36 Then -- d = [0] b=[0] for i in time.index: b[0] = time['Pending'][i] j = d k= d + b for j in k: df['Date_time'][j] = time['Date_time'][j] df['Station'][j] = time['Station'][j] df['Pending'][j] = time['Pending'][j] d[0] = j+1 I am getting error in k = d + b line. Does anyone has an idea of solving this problem? Please help me out. Thanks in advance! Sample example --- input dataset (2 columns) a 2 b 3 output dataset (2 columns) a 2 a 2 b 3 b 3 b 3 I am just trying to execute this thing using the logic inside for loop.
[ "Let's mentally step through your code,\nb and d start as [0], one element lists.\nfor i in time.index:\n b[0] = time['Pending'][i] # I'm guessing `b` is now `[36]`\n j = d # j is d, not a copy\n k= d + b # k is [0,36] - list addition is join\n for j in k: # this use of j overrides the previous assignment\n # j is going to be 0, and then 36; 0 may be a valid row index; 36?\n df['Date_time'][j] = time['Date_time'][j]\n df['Station'][j] = time['Station'][j]\n df['Pending'][j] = time['Pending'][j]\n d[0] = j+1 # I expect d to be [37]\n\nI won't continue, but that should give you an idea of what you need to track when doing iterations like this. When you get errors like this, make sure you know what the variable are. The error indicates that some how d became None, and b a number, not a list. It isn't obvious from the code how that happened.\n" ]
[ 0 ]
[]
[]
[ "list", "loops", "numpy", "pandas", "python" ]
stackoverflow_0074637934_list_loops_numpy_pandas_python.txt
Q: How to add second x-axis at the bottom of the first one in matplotlib.? I am refering to the question already asked here. In this example the users have solved the second axis problem by adding it to the upper part of the graph where it coincide with the title. Question: Is it possible to add the second x-axis at the bottom of the first one? Code: import numpy as np import matplotlib.pyplot as plt fig = plt.figure() ax1 = fig.add_subplot(111) ax2 = ax1.twiny() X = np.linspace(0,1,1000) Y = np.cos(X*20) ax1.plot(X,Y) ax1.set_xlabel(r"Original x-axis: $X$") new_tick_locations = np.array([.2, .5, .9]) def tick_function(X): V = 1/(1+X) return ["%.3f" % z for z in V] ax2.set_xticks(new_tick_locations) ax2.set_xticklabels(tick_function(new_tick_locations)) ax2.set_xlabel(r"Modified x-axis: $1/(1+X)$") plt.show() A: As an alternative to the answer from @DizietAsahi, you can use spines in a similar way to the matplotlib example posted here. import numpy as np import matplotlib.pyplot as plt fig = plt.figure() ax1 = fig.add_subplot(111) ax2 = ax1.twiny() # Add some extra space for the second axis at the bottom fig.subplots_adjust(bottom=0.2) X = np.linspace(0,1,1000) Y = np.cos(X*20) ax1.plot(X,Y) ax1.set_xlabel(r"Original x-axis: $X$") new_tick_locations = np.array([.2, .5, .9]) def tick_function(X): V = 1/(1+X) return ["%.3f" % z for z in V] # Move twinned axis ticks and label from top to bottom ax2.xaxis.set_ticks_position("bottom") ax2.xaxis.set_label_position("bottom") # Offset the twin axis below the host ax2.spines["bottom"].set_position(("axes", -0.15)) # Turn on the frame for the twin axis, but then hide all # but the bottom spine ax2.set_frame_on(True) ax2.patch.set_visible(False) # as @ali14 pointed out, for python3, use this # for sp in ax2.spines.values(): # and for python2, use this for sp in ax2.spines.itervalues(): sp.set_visible(False) ax2.spines["bottom"].set_visible(True) ax2.set_xticks(new_tick_locations) ax2.set_xticklabels(tick_function(new_tick_locations)) ax2.set_xlabel(r"Modified x-axis: $1/(1+X)$") plt.show() A: I think you have to create a second Axes with 0 height (and hide the yaxis) to have a second xaxis that you can place wherever you like. for example: import numpy as np import matplotlib.pyplot as plt fig = plt.figure() ax1 = fig.add_axes((0.1,0.3,0.8,0.6)) # create an Axes with some room below X = np.linspace(0,1,1000) Y = np.cos(X*20) ax1.plot(X,Y) ax1.set_xlabel(r"Original x-axis: $X$") # create second Axes. Note the 0.0 height ax2 = fig.add_axes((0.1,0.1,0.8,0.0)) ax2.yaxis.set_visible(False) # hide the yaxis new_tick_locations = np.array([.2, .5, .9]) def tick_function(X): V = 1/(1+X) return ["%.3f" % z for z in V] ax2.set_xticks(new_tick_locations) ax2.set_xticklabels(tick_function(new_tick_locations)) ax2.set_xlabel(r"Modified x-axis: $1/(1+X)$") plt.show() A: Not really an answer to the question, but it took me quite long time until I figured out how to do the same with logscale. There are a bunch of strange behaviours in that case. Here's my code to apply some simple scaling to the original y axis: def set_scaled_y_axis(ax, label1, label2, scale): #define the minor and major ticks #might give an error for too small or large exponents (e.g. 
1e-20 or 1e+20) log_ticks_major=[] log_ticks_minor=[] tick_labels=[] for k in range(-15,16,1): log_ticks_major.append(10**k) tick_labels.append("10$^{"+f"{k}"+"}$") for kk in range(2,10): log_ticks_minor.append(kk*10**k) log_ticks_major=np.array(log_ticks_major) log_ticks_minor=np.array(log_ticks_minor) #update the original label ax.set_ylabel(label2) # make a twin axis and set the position # to make the same with x axis you need "ax.twiny()" instead ax22 = ax.twinx() ax22.yaxis.set_ticks_position("left") ax22.yaxis.set_label_position("left") ax22.spines["left"].set_position(("axes", -0.15)) # draw only the left y axis ax22.xaxis.set_visible(False) # set the log scale for the 2nd axis ax22.set_yscale("log") ax22.set_yticks(log_ticks_minor/scale, minor=True) # set minor ticks ax22.set_yticks(log_ticks_major/scale) # set normal(/major?) ticks ax22.set_yticklabels(tick_labels) #must be after "ax22.set_yticks(log_ticks_major/scale)" ax22.tick_params('y', which="minor", labelleft=False) #some "random" minor tick labels would appear # set the 2nd y axis label ax22.set_ylabel(label1) # set the limits of the 2nd y axis to be the same as the 1st one ax22.set_ylim(ax.get_ylim())
How to add second x-axis at the bottom of the first one in matplotlib.?
I am refering to the question already asked here. In this example the users have solved the second axis problem by adding it to the upper part of the graph where it coincide with the title. Question: Is it possible to add the second x-axis at the bottom of the first one? Code: import numpy as np import matplotlib.pyplot as plt fig = plt.figure() ax1 = fig.add_subplot(111) ax2 = ax1.twiny() X = np.linspace(0,1,1000) Y = np.cos(X*20) ax1.plot(X,Y) ax1.set_xlabel(r"Original x-axis: $X$") new_tick_locations = np.array([.2, .5, .9]) def tick_function(X): V = 1/(1+X) return ["%.3f" % z for z in V] ax2.set_xticks(new_tick_locations) ax2.set_xticklabels(tick_function(new_tick_locations)) ax2.set_xlabel(r"Modified x-axis: $1/(1+X)$") plt.show()
[ "As an alternative to the answer from @DizietAsahi, you can use spines in a similar way to the matplotlib example posted here.\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfig = plt.figure()\nax1 = fig.add_subplot(111)\nax2 = ax1.twiny()\n\n# Add some extra space for the second axis at the bottom\nfig.subplots_adjust(bottom=0.2)\n\nX = np.linspace(0,1,1000)\nY = np.cos(X*20)\n\nax1.plot(X,Y)\nax1.set_xlabel(r\"Original x-axis: $X$\")\n\nnew_tick_locations = np.array([.2, .5, .9])\n\ndef tick_function(X):\n V = 1/(1+X)\n return [\"%.3f\" % z for z in V]\n\n# Move twinned axis ticks and label from top to bottom\nax2.xaxis.set_ticks_position(\"bottom\")\nax2.xaxis.set_label_position(\"bottom\")\n\n# Offset the twin axis below the host\nax2.spines[\"bottom\"].set_position((\"axes\", -0.15))\n\n# Turn on the frame for the twin axis, but then hide all \n# but the bottom spine\nax2.set_frame_on(True)\nax2.patch.set_visible(False)\n\n# as @ali14 pointed out, for python3, use this\n# for sp in ax2.spines.values():\n# and for python2, use this\nfor sp in ax2.spines.itervalues():\n sp.set_visible(False)\nax2.spines[\"bottom\"].set_visible(True)\n\nax2.set_xticks(new_tick_locations)\nax2.set_xticklabels(tick_function(new_tick_locations))\nax2.set_xlabel(r\"Modified x-axis: $1/(1+X)$\")\nplt.show()\n\n\n", "I think you have to create a second Axes with 0 height (and hide the yaxis) to have a second xaxis that you can place wherever you like.\nfor example:\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfig = plt.figure()\nax1 = fig.add_axes((0.1,0.3,0.8,0.6)) # create an Axes with some room below\n\nX = np.linspace(0,1,1000)\nY = np.cos(X*20)\n\nax1.plot(X,Y)\nax1.set_xlabel(r\"Original x-axis: $X$\")\n\n\n# create second Axes. Note the 0.0 height\nax2 = fig.add_axes((0.1,0.1,0.8,0.0))\nax2.yaxis.set_visible(False) # hide the yaxis\n\nnew_tick_locations = np.array([.2, .5, .9])\n\ndef tick_function(X):\n V = 1/(1+X)\n return [\"%.3f\" % z for z in V]\n\nax2.set_xticks(new_tick_locations)\nax2.set_xticklabels(tick_function(new_tick_locations))\nax2.set_xlabel(r\"Modified x-axis: $1/(1+X)$\")\nplt.show()\n\n\n", "Not really an answer to the question, but it took me quite long time until I figured out how to do the same with logscale. There are a bunch of strange behaviours in that case. Here's my code to apply some simple scaling to the original y axis:\ndef set_scaled_y_axis(ax, label1, label2, scale):\n #define the minor and major ticks\n #might give an error for too small or large exponents (e.g. 1e-20 or 1e+20)\n log_ticks_major=[]\n log_ticks_minor=[]\n tick_labels=[]\n for k in range(-15,16,1):\n log_ticks_major.append(10**k)\n tick_labels.append(\"10$^{\"+f\"{k}\"+\"}$\")\n for kk in range(2,10):\n log_ticks_minor.append(kk*10**k)\n\n log_ticks_major=np.array(log_ticks_major)\n log_ticks_minor=np.array(log_ticks_minor)\n\n #update the original label\n ax.set_ylabel(label2)\n\n # make a twin axis and set the position\n # to make the same with x axis you need \"ax.twiny()\" instead\n ax22 = ax.twinx()\n ax22.yaxis.set_ticks_position(\"left\")\n ax22.yaxis.set_label_position(\"left\")\n ax22.spines[\"left\"].set_position((\"axes\", -0.15))\n \n # draw only the left y axis\n ax22.xaxis.set_visible(False)\n\n # set the log scale for the 2nd axis\n ax22.set_yscale(\"log\")\n ax22.set_yticks(log_ticks_minor/scale, minor=True) # set minor ticks\n ax22.set_yticks(log_ticks_major/scale) # set normal(/major?) 
ticks\n ax22.set_yticklabels(tick_labels) #must be after \"ax22.set_yticks(log_ticks_major/scale)\"\n ax22.tick_params('y', which=\"minor\", labelleft=False) #some \"random\" minor tick labels would appear\n \n # set the 2nd y axis label\n ax22.set_ylabel(label1)\n\n # set the limits of the 2nd y axis to be the same as the 1st one\n ax22.set_ylim(ax.get_ylim())\n\n" ]
[ 25, 6, 0 ]
[]
[]
[ "matplotlib", "python" ]
stackoverflow_0031803817_matplotlib_python.txt
Q: How can I switch the focus between toplevels of a tkinter program? I am using key-accelerators for menu entries in a multi window program. But when an accelerator-key is pressed, then always the same window reacts to the key. As you can see in my example code, I tried to change the focus by binding the event "FocusIn" to the toplevel and to the canvas. At the event I tried focus_force() and focus_set(). But the accelerator key event is always recognized in the last opened window. This is my example code: import tkinter as tk from tkinter import ttk class win: number = 0 def __init__(self): win.number += 1 self.number = win.number self.top = tk.Toplevel() self.top.protocol("WM_DELETE_WINDOW", lambda: self.close_root()) self.top.bind_all("<Control-o>", lambda event : self.menu()) self.file_menu_button = ttk.Menubutton(self.top, text="File menu of top" + str(self.number)) self.file_menu_button.grid() self.file_menu = tk.Menu(self.file_menu_button) self.file_menu.add_command(label="Open", accelerator="Ctrl+o", command=self.menu) self.file_menu_button.configure(menu=self.file_menu) self.canvas = tk.Canvas(self.top, width=400, height=200) self.canvas.grid() self.top.bind("<FocusIn>", lambda event: self.top.focus_force()) #self.top.bind("<FocusIn>", lambda event: self.top.focus_set()) self.canvas.bind("<FocusIn>", lambda event: self.top.focus_force()) #self.canvas.bind("<FocusIn>", lambda event: self.top.focus_set()) def close_root(self) : root.quit() def menu(self): print("menu" + str(self.number) + " clicked") root = tk.Tk() root.withdraw() win1 = win() win2 = win() root.mainloop() A: I found the problem is caused by this line from my example code above: self.top.bind_all("<Control-o>", lambda event : self.menu()) This line always gives the last new created toplevel the binding. All previous created toplevel windows loose the binding in this moment. So this line must be removed. Instead the binding must be created new at any time when a toplevel window gets the focus. This can be done in this way: self.top.bind("<FocusIn>", lambda event: self.top.bind_all("<Control-o>", lambda event : self.menu()))
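A leaner alternative sketch (not the approach from the original answer): bind the accelerator on each Toplevel instead of using bind_all, so every window keeps its own binding and no rebinding on focus changes is needed. In Tk, a binding on a toplevel fires for key events in any of its child widgets, because each widget's bindtags include its toplevel:

# In win.__init__, instead of self.top.bind_all(...), bind on the Toplevel itself:
self.top.bind("<Control-o>", lambda event: self.menu())
# Each window now reacts to Ctrl+o only while it (or one of its children) has keyboard focus.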
How can I switch the focus between toplevels of a tkinter program?
I am using key-accelerators for menu entries in a multi window program. But when an accelerator-key is pressed, then always the same window reacts to the key. As you can see in my example code, I tried to change the focus by binding the event "FocusIn" to the toplevel and to the canvas. At the event I tried focus_force() and focus_set(). But the accelerator key event is always recognized in the last opened window. This is my example code: import tkinter as tk from tkinter import ttk class win: number = 0 def __init__(self): win.number += 1 self.number = win.number self.top = tk.Toplevel() self.top.protocol("WM_DELETE_WINDOW", lambda: self.close_root()) self.top.bind_all("<Control-o>", lambda event : self.menu()) self.file_menu_button = ttk.Menubutton(self.top, text="File menu of top" + str(self.number)) self.file_menu_button.grid() self.file_menu = tk.Menu(self.file_menu_button) self.file_menu.add_command(label="Open", accelerator="Ctrl+o", command=self.menu) self.file_menu_button.configure(menu=self.file_menu) self.canvas = tk.Canvas(self.top, width=400, height=200) self.canvas.grid() self.top.bind("<FocusIn>", lambda event: self.top.focus_force()) #self.top.bind("<FocusIn>", lambda event: self.top.focus_set()) self.canvas.bind("<FocusIn>", lambda event: self.top.focus_force()) #self.canvas.bind("<FocusIn>", lambda event: self.top.focus_set()) def close_root(self) : root.quit() def menu(self): print("menu" + str(self.number) + " clicked") root = tk.Tk() root.withdraw() win1 = win() win2 = win() root.mainloop()
[ "I found the problem is caused by this line from my example code above:\nself.top.bind_all(\"<Control-o>\", lambda event : self.menu())\n\nThis line always gives the last new created toplevel the binding. All previous created toplevel windows loose the binding in this moment. So this line must be removed. Instead the binding must be created new at any time when a toplevel window gets the focus. This can be done in this way:\nself.top.bind(\"<FocusIn>\",\n lambda event:\n self.top.bind_all(\"<Control-o>\", lambda event : self.menu()))\n\n" ]
[ 0 ]
[]
[]
[ "focus", "python", "tkinter" ]
stackoverflow_0074576267_focus_python_tkinter.txt
Q: How many integers can fully represent a float? The problem and my current solution: For example, take two columns of latitude and longitude values: lat lon 30.1239871239 -80.1239871239 30.1239991239 -80.1439871239 I want to create integer columns that represent the floats. This is what I currently have: lat lat_dec lat_sign 30 1239871239 1 30 1239991239 1 lon lon_dec lon_sign 80 1239871239 -1 80 1239871239 -1 By doing the following, where col is lat or lon df[f'{col}_dec'] = df[col].apply(lambda x: int(str(x).split('.')[-1])) df[f'{col}_sign'] = np.sign(df[col]) df[col] = abs(df[col].astype(int)) I then run a process that minimizes the data type of each column, resulting in uint8, uint32, and int8, respectively for the first, second, and third integer column. Each float column took 8 bytes, while all three integer columns take 6 bytes; that is a 25% size reduction. EDIT Asking in a different way Is there a way to efficiently compress float data types as at least one integer up to a precision of 10 decimal spaces? The code above is what I tried. The focus is not on coordinates as shown in the example, but on float data types in general. Thank you. A: Can you make it better? Only you know what better means. can you reduce the size even more? Yes. The way you're storing coordinates now you have a resolution of ~10μm (1e-5m). That seems excessively precise. If you can live with around a 1 meter resolution you can break up the coordinate values into increments of 63356 values and use a single uint16 or int16 to store each (4 bytes per coordinate pair). import numpy as np INCREMENTS_PER_DEGREE_LONG = np.iinfo(np.uint16).max / 360.0 INCREMENTS_PER_DEGREE_LAT = (np.iinfo(np.int16).max - np.iinfo(np.int16).min) / 180.0 def longitude_to_increments(longitude_deg): assert longitude_deg >= 0 and longitude_deg < 360 return np.uint16(longitude_deg * INCREMENTS_PER_DEGREE_LONG) def increments_to_longitude(increments: np.uint16): return increments / INCREMENTS_PER_DEGREE_LONG # similar for latitude but use a np.int16 for [-90, 90] degree range
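As a quick sanity check of the quantisation idea in the answer above, the sketch below packs one longitude into two bytes and shows that the round-trip error stays below one increment of 360/65536 degrees. The helper definitions are repeated so it runs on its own, and the sample longitude is only illustrative.
import numpy as np

INCREMENTS_PER_DEGREE_LONG = np.iinfo(np.uint16).max / 360.0

def longitude_to_increments(longitude_deg):
    assert 0 <= longitude_deg < 360
    return np.uint16(longitude_deg * INCREMENTS_PER_DEGREE_LONG)

def increments_to_longitude(increments):
    return increments / INCREMENTS_PER_DEGREE_LONG

lon = 279.8760128761                       # -80.1239871239 shifted into [0, 360)
packed = longitude_to_increments(lon)      # a single uint16, 2 bytes instead of 8
restored = increments_to_longitude(packed)
print(packed, restored, abs(restored - lon))   # error is below 360/65536, roughly 0.0055 degrees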
How many integers can fully represent a float?
The problem and my current solution: For example, take two columns of latitude and longitude values: lat lon 30.1239871239 -80.1239871239 30.1239991239 -80.1439871239 I want to create integer columns that represent the floats. This is what I currently have: lat lat_dec lat_sign 30 1239871239 1 30 1239991239 1 lon lon_dec lon_sign 80 1239871239 -1 80 1239871239 -1 By doing the following, where col is lat or lon df[f'{col}_dec'] = df[col].apply(lambda x: int(str(x).split('.')[-1])) df[f'{col}_sign'] = np.sign(df[col]) df[col] = abs(df[col].astype(int)) I then run a process that minimizes the data type of each column, resulting in uint8, uint32, and int8, respectively for the first, second, and third integer column. Each float column took 8 bytes, while all three integer columns take 6 bytes; that is a 25% size reduction. EDIT Asking in a different way Is there a way to efficiently compress float data types as at least one integer up to a precision of 10 decimal spaces? The code above is what I tried. The focus is not on coordinates as shown in the example, but on float data types in general. Thank you.
[ "\nCan you make it better?\n\nOnly you know what better means.\n\ncan you reduce the size even more?\n\nYes. The way you're storing coordinates now you have a resolution of ~10μm (1e-5m). That seems excessively precise. If you can live with around a 1 meter resolution you can break up the coordinate values into increments of 63356 values and use a single uint16 or int16 to store each (4 bytes per coordinate pair).\nimport numpy as np\n\nINCREMENTS_PER_DEGREE_LONG = np.iinfo(np.uint16).max / 360.0\nINCREMENTS_PER_DEGREE_LAT = (np.iinfo(np.int16).max -\n np.iinfo(np.int16).min) / 180.0\n\ndef longitude_to_increments(longitude_deg):\n assert longitude_deg >= 0 and longitude_deg < 360\n return np.uint16(longitude_deg * INCREMENTS_PER_DEGREE_LONG)\n\ndef increments_to_longitude(increments: np.uint16):\n return increments / INCREMENTS_PER_DEGREE_LONG\n\n# similar for latitude but use a np.int16 for [-90, 90] degree range\n\n" ]
[ 0 ]
[]
[]
[ "memory", "numpy", "pandas", "python" ]
stackoverflow_0074645383_memory_numpy_pandas_python.txt
Q: Finding duplicates based on values of specific keys from a list of dict I have the following list of dict records, from which I need to extract all the duplicates (based on the label) and leave one per label in the original records. Also, when the items get removed by label, always remove the one with the headings value True over one with headings value False. Input: records = [ {"label": "x", "headings": False, "key": 300}, {"label": "x", "headings": True, "key": 301}, {"label": "x", "headings": False, "key": 302}, {"label": "x", "headings": False, "key": 303}, {"label": "y", "headings": False, "key": 304}, {"label": "y", "headings": True, "key": 305}, {"label": "z", "headings": True, "key": 306}, {"label": "z", "headings": True, "key": 307}, ] Output: (duplicate items) [ {"label": "x", "headings": False, "key": 300}, {"label": "x", "headings": True, "key": 301}, {"label": "x", "headings": False, "key": 302}, {"label": "y", "headings": True, "key": 305}, {"label": "z", "headings": True, "key": 306}, ] A: You left a few questions unanswered (see comments). You also did not provide your own code and any unexpected output/error you got with it, so we have nothing to work with/fix. This is bad form. But I found this to be a fun exercise, so here is what I came up with: from typing import TypedDict class Record(TypedDict): label: str headings: bool key: int def remove_duplicates(records: list[Record]) -> list[Record]: # First, decide which records (by index) _not_ to remove. # Map labels to 2-tuples of (index, headings boolean): keep: dict[str, tuple[int, bool]] = {} for idx, record in enumerate(records): label, headings = record["label"], record["headings"] # We keep it, if this is the first time we see that label OR # we did encounter it, but this record's `headings` value is `False`, # whereas the previous one was `True`: if label not in keep or (not headings and keep[label][1]): keep[label] = (idx, headings) # Combine all indices we want to keep into one set for easy lookup: keep_indices = {idx for idx, _ in keep.values()} # Iterate over all record indices in reverse order # and pop the corresponding records if necessary: removed = [] for idx in reversed(range(len(records))): if idx not in keep_indices: removed.append(records.pop(idx)) return removed The original list is mutated in-place, but a new list is created and returned from the removed dictionaries/duplicates. The algorithm creates a few helper-datastructures sacrificing a bit of memory, but should be fairly efficient in terms of time, i.e. approximately O(n) with n being the number of records. To test it: ... 
def main() -> None: from pprint import pprint records = [ {"label": "x", "headings": False, "key": 300}, {"label": "x", "headings": True, "key": 301}, {"label": "x", "headings": False, "key": 302}, {"label": "x", "headings": False, "key": 303}, {"label": "y", "headings": False, "key": 304}, {"label": "y", "headings": True, "key": 305}, {"label": "z", "headings": True, "key": 306}, {"label": "z", "headings": True, "key": 307}, ] removed = remove_duplicates(records) # type: ignore[arg-type] print("remaining:") pprint(records) removed.reverse() print("removed:") pprint(removed) if __name__ == "__main__": main() Output: remaining: [{'headings': False, 'key': 300, 'label': 'x'}, {'headings': False, 'key': 304, 'label': 'y'}, {'headings': True, 'key': 306, 'label': 'z'}] removed: [{'headings': True, 'key': 301, 'label': 'x'}, {'headings': False, 'key': 302, 'label': 'x'}, {'headings': False, 'key': 303, 'label': 'x'}, {'headings': True, 'key': 305, 'label': 'y'}, {'headings': True, 'key': 307, 'label': 'z'}]
Finding duplicates based on values of specific keys from a list of dict
I have the following list of dict records, from which I need to extract all the duplicates (based on the label) and leave one per label in the original records. Also, when the items get removed by label, always remove the one with the headings value True over one with headings value False. Input: records = [ {"label": "x", "headings": False, "key": 300}, {"label": "x", "headings": True, "key": 301}, {"label": "x", "headings": False, "key": 302}, {"label": "x", "headings": False, "key": 303}, {"label": "y", "headings": False, "key": 304}, {"label": "y", "headings": True, "key": 305}, {"label": "z", "headings": True, "key": 306}, {"label": "z", "headings": True, "key": 307}, ] Output: (duplicate items) [ {"label": "x", "headings": False, "key": 300}, {"label": "x", "headings": True, "key": 301}, {"label": "x", "headings": False, "key": 302}, {"label": "y", "headings": True, "key": 305}, {"label": "z", "headings": True, "key": 306}, ]
[ "You left a few questions unanswered (see comments). You also did not provide your own code and any unexpected output/error you got with it, so we have nothing to work with/fix. This is bad form.\nBut I found this to be a fun exercise, so here is what I came up with:\nfrom typing import TypedDict\n\n\nclass Record(TypedDict):\n label: str\n headings: bool\n key: int\n\n\ndef remove_duplicates(records: list[Record]) -> list[Record]:\n # First, decide which records (by index) _not_ to remove.\n # Map labels to 2-tuples of (index, headings boolean):\n keep: dict[str, tuple[int, bool]] = {}\n for idx, record in enumerate(records):\n label, headings = record[\"label\"], record[\"headings\"]\n # We keep it, if this is the first time we see that label OR\n # we did encounter it, but this record's `headings` value is `False`,\n # whereas the previous one was `True`:\n if label not in keep or (not headings and keep[label][1]):\n keep[label] = (idx, headings)\n # Combine all indices we want to keep into one set for easy lookup:\n keep_indices = {idx for idx, _ in keep.values()}\n # Iterate over all record indices in reverse order\n # and pop the corresponding records if necessary:\n removed = []\n for idx in reversed(range(len(records))):\n if idx not in keep_indices:\n removed.append(records.pop(idx))\n return removed\n\nThe original list is mutated in-place, but a new list is created and returned from the removed dictionaries/duplicates. The algorithm creates a few helper-datastructures sacrificing a bit of memory, but should be fairly efficient in terms of time, i.e. approximately O(n) with n being the number of records.\nTo test it:\n...\n\ndef main() -> None:\n from pprint import pprint\n records = [\n {\"label\": \"x\", \"headings\": False, \"key\": 300},\n {\"label\": \"x\", \"headings\": True, \"key\": 301},\n {\"label\": \"x\", \"headings\": False, \"key\": 302},\n {\"label\": \"x\", \"headings\": False, \"key\": 303},\n {\"label\": \"y\", \"headings\": False, \"key\": 304},\n {\"label\": \"y\", \"headings\": True, \"key\": 305},\n {\"label\": \"z\", \"headings\": True, \"key\": 306},\n {\"label\": \"z\", \"headings\": True, \"key\": 307},\n ]\n removed = remove_duplicates(records) # type: ignore[arg-type]\n print(\"remaining:\")\n pprint(records)\n removed.reverse()\n print(\"removed:\")\n pprint(removed)\n\n\nif __name__ == \"__main__\":\n main()\n\nOutput:\n\nremaining:\n[{'headings': False, 'key': 300, 'label': 'x'},\n {'headings': False, 'key': 304, 'label': 'y'},\n {'headings': True, 'key': 306, 'label': 'z'}]\nremoved:\n[{'headings': True, 'key': 301, 'label': 'x'},\n {'headings': False, 'key': 302, 'label': 'x'},\n {'headings': False, 'key': 303, 'label': 'x'},\n {'headings': True, 'key': 305, 'label': 'y'},\n {'headings': True, 'key': 307, 'label': 'z'}]\n\n" ]
[ 0 ]
[]
[]
[ "dictionary", "duplicates", "key", "list", "python" ]
stackoverflow_0074644893_dictionary_duplicates_key_list_python.txt
Q: How to install pyspark.pandas in Apache Spark? I downloaded Apache Spark 3.3.0 bundle which contains pyspark $ pyspark Welcome to ____ __ / __/__ ___ _____/ /__ _\ \/ _ \/ _ `/ __/ '_/ /__ / .__/\_,_/_/ /_/\_\ version 3.3.0 /_/ Using Python version 3.7.10 (default, Jun 3 2021 00:02:01) Spark context Web UI available at http://XXX-XXX-XXX-XXXX.compute.internal:4041 Spark context available as 'sc' (master = local[*], app id = local-1669908157343). SparkSession available as 'spark'. **>>> import pyspark.pandas as ps** Traceback (most recent call last): File "/home/ec2-user/bin/spark/latest/python/pyspark/sql/pandas/utils.py", line 27, in require_minimum_pandas_version import pandas ModuleNotFoundError: No module named 'pandas' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/ec2-user/bin/spark/latest/python/pyspark/pandas/__init__.py", line 31, in <module> require_minimum_pandas_version() File "/home/ec2-user/bin/spark/latest/python/pyspark/sql/pandas/utils.py", line 36, in require_minimum_pandas_version ) from raised_error ImportError: Pandas >= 1.0.5 must be installed; however, it was not found. How do I import python packages inside Apache-Spark in custom directory like /home/ec2-user/bin/spark/latest/python/pyspark? I also tried: $ pip install pandas -bash: pip: command not found If I try to install pip, how can ensure the libraries are compatible with the Python version 3.7.20 in Spark? A: Have you tried installing Pandas in the following way: pip install pyspark[pandas_on_spark] If the pip is not discoverable by bash, maybe try to active your Python environment first (whether virtualenv, conda or anything else).
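Before (or after) installing anything, it can help to confirm, from inside the pyspark shell itself, exactly which interpreter is running and whether a usable pandas (and pyarrow, which the pandas API on Spark typically also wants) can be imported. The snippet below is a minimal check; the install commands in the comments are the generic python -m pip pattern rather than anything Spark-specific, and the path shown is a placeholder.
import sys

print(sys.executable)   # the exact interpreter pyspark is using
print(sys.version)

try:
    import pandas
    print("pandas", pandas.__version__)
except ImportError as exc:
    print("pandas is missing:", exc)
    # Install against this exact interpreter so versions stay compatible, e.g.
    #   /path/printed/above -m ensurepip --upgrade
    #   /path/printed/above -m pip install "pandas>=1.0.5" pyarrow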
How to install pyspark.pandas in Apache Spark?
I downloaded Apache Spark 3.3.0 bundle which contains pyspark $ pyspark Welcome to ____ __ / __/__ ___ _____/ /__ _\ \/ _ \/ _ `/ __/ '_/ /__ / .__/\_,_/_/ /_/\_\ version 3.3.0 /_/ Using Python version 3.7.10 (default, Jun 3 2021 00:02:01) Spark context Web UI available at http://XXX-XXX-XXX-XXXX.compute.internal:4041 Spark context available as 'sc' (master = local[*], app id = local-1669908157343). SparkSession available as 'spark'. **>>> import pyspark.pandas as ps** Traceback (most recent call last): File "/home/ec2-user/bin/spark/latest/python/pyspark/sql/pandas/utils.py", line 27, in require_minimum_pandas_version import pandas ModuleNotFoundError: No module named 'pandas' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/ec2-user/bin/spark/latest/python/pyspark/pandas/__init__.py", line 31, in <module> require_minimum_pandas_version() File "/home/ec2-user/bin/spark/latest/python/pyspark/sql/pandas/utils.py", line 36, in require_minimum_pandas_version ) from raised_error ImportError: Pandas >= 1.0.5 must be installed; however, it was not found. How do I import python packages inside Apache-Spark in custom directory like /home/ec2-user/bin/spark/latest/python/pyspark? I also tried: $ pip install pandas -bash: pip: command not found If I try to install pip, how can ensure the libraries are compatible with the Python version 3.7.20 in Spark?
[ "Have you tried installing Pandas in the following way:\npip install pyspark[pandas_on_spark]\n\nIf the pip is not discoverable by bash, maybe try to active your Python environment first (whether virtualenv, conda or anything else).\n" ]
[ 0 ]
[]
[]
[ "apache_spark", "pandas", "pyspark", "python" ]
stackoverflow_0074644628_apache_spark_pandas_pyspark_python.txt
Q: Python does not let me run files without calling the interpreter I'm running Python 3.7.9 on Windows 10. I usually run files as file.py, but now they do not run unless I call them as python file.py. I have Python included in PATH, but it still doesn't work. I've tried everything: reinstalling Python, making a new file, changing the path, using other Python versions, but nothing works. A: Yup, it was a file association problem, thanks for the answers.
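For anyone hitting the same symptom, a hedged way to inspect the broken association from Python is shown below. assoc and ftype are standard Windows cmd built-ins, Python.File is the ProgID normally registered by the CPython installer, and the python.exe path in the comment is only a placeholder for your own install.
import subprocess

# Show what Windows currently associates with .py files.
print(subprocess.run("assoc .py", shell=True, capture_output=True, text=True).stdout)
print(subprocess.run("ftype Python.File", shell=True, capture_output=True, text=True).stdout)

# A broken association can usually be repaired from an administrator cmd prompt:
#   assoc .py=Python.File
#   ftype Python.File="C:\Path\To\python.exe" "%1" %*
# or by re-running the installer and re-ticking the file association option.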
Python does not let me run files without calling the interpreter
I'm running Python 3.7.9 on Windows 10. I usually run files as file.py, but now they do not run unless I call them as python file.py. I have Python included in PATH, but it still doesn't work. I've tried everything: reinstalling Python, making a new file, changing the path, using other Python versions, but nothing works.
[ "Yup, it was a file association problem, thanks for the answers.\n" ]
[ 0 ]
[]
[]
[ "python", "windows" ]
stackoverflow_0074644987_python_windows.txt
Q: Running a Tkinter window and PysTray Icon together I'm building a tkinter gui project and i'm looking for ways to run a tray icon with the tkinter window. I found Pystray library that does it, But now i'm trying to figure it out how to use this library (tray Icon) together with tkinter window, I set it up when the user exit winodw it's only will withdraw window: self.protocol('WM_DELETE_WINDOW', self.withdraw) I want to bring it back with the tray icon.. anyone know how to do it? EDIT:untill now I just wrote this code so far (they're not running together but it's also fine): from pystray import MenuItem as item import pystray from PIL import Image import tkinter as tk def quit_window(icon, item): icon.stop() #window.destroy() def show_window(icon, item): icon.stop() #window.deiconify() def withdraw_window(window): window.withdraw() image = Image.open("image.ico") menu = (item('Quit', quit_window), item('Show', show_window)) icon = pystray.Icon("name", image, "title", menu) icon.run() def main(): window = tk.Tk() window.title("Welcome") window.protocol('WM_DELETE_WINDOW', lambda: withdraw_window(window)) window.mainloop() main() A: Finally I figure it out, Now I just need to combine this with my main code, I hope this code will help to other people too... from pystray import MenuItem as item import pystray from PIL import Image import tkinter as tk window = tk.Tk() window.title("Welcome") def quit_window(icon, item): icon.stop() window.destroy() def show_window(icon, item): icon.stop() window.after(0,window.deiconify) def withdraw_window(): window.withdraw() image = Image.open("image.ico") menu = (item('Quit', quit_window), item('Show', show_window)) icon = pystray.Icon("name", image, "title", menu) icon.run() window.protocol('WM_DELETE_WINDOW', withdraw_window) window.mainloop() A: Thanks to Oshers solution, I adapted it into my own project. One issue I fixed was that you could only hide the main window once, then the loop would crash. With this solution, it has no limit. import tkinter as tk from PIL import Image import pystray class Gui(): def __init__(self): self.window = tk.Tk() self.image = Image.open("./assets/icons/ready.png") self.menu = ( pystray.MenuItem('Show', self.show_window), pystray.MenuItem('Quit', self.quit_window) ) self.window.protocol('WM_DELETE_WINDOW', self.withdraw_window) self.window.mainloop() def quit_window(self): self.icon.stop() self.window.destroy() def show_window(self): self.icon.stop() self.window.protocol('WM_DELETE_WINDOW', self.withdraw_window) self.window.after(0, self.window.deiconify) def withdraw_window(self): self.window.withdraw() self.icon = pystray.Icon("name", self.image, "title", self.menu) self.icon.run() if __name__ in '__main__': Gui()
Running a Tkinter window and PysTray Icon together
I'm building a tkinter gui project and i'm looking for ways to run a tray icon with the tkinter window. I found Pystray library that does it, But now i'm trying to figure it out how to use this library (tray Icon) together with tkinter window, I set it up when the user exit winodw it's only will withdraw window: self.protocol('WM_DELETE_WINDOW', self.withdraw) I want to bring it back with the tray icon.. anyone know how to do it? EDIT:untill now I just wrote this code so far (they're not running together but it's also fine): from pystray import MenuItem as item import pystray from PIL import Image import tkinter as tk def quit_window(icon, item): icon.stop() #window.destroy() def show_window(icon, item): icon.stop() #window.deiconify() def withdraw_window(window): window.withdraw() image = Image.open("image.ico") menu = (item('Quit', quit_window), item('Show', show_window)) icon = pystray.Icon("name", image, "title", menu) icon.run() def main(): window = tk.Tk() window.title("Welcome") window.protocol('WM_DELETE_WINDOW', lambda: withdraw_window(window)) window.mainloop() main()
[ "Finally I figure it out, \nNow I just need to combine this with my main code, I hope this code will help to other people too... \nfrom pystray import MenuItem as item\nimport pystray\nfrom PIL import Image\nimport tkinter as tk\n\nwindow = tk.Tk()\nwindow.title(\"Welcome\")\n\ndef quit_window(icon, item):\n icon.stop()\n window.destroy()\n\ndef show_window(icon, item):\n icon.stop()\n window.after(0,window.deiconify)\n\ndef withdraw_window(): \n window.withdraw()\n image = Image.open(\"image.ico\")\n menu = (item('Quit', quit_window), item('Show', show_window))\n icon = pystray.Icon(\"name\", image, \"title\", menu)\n icon.run()\n\nwindow.protocol('WM_DELETE_WINDOW', withdraw_window)\nwindow.mainloop()\n\n", "Thanks to Oshers solution, I adapted it into my own project.\nOne issue I fixed was that you could only hide the main window once, then the loop would crash. With this solution, it has no limit.\nimport tkinter as tk\nfrom PIL import Image\n\nimport pystray\n\n\nclass Gui():\n\n def __init__(self):\n self.window = tk.Tk()\n self.image = Image.open(\"./assets/icons/ready.png\")\n self.menu = (\n pystray.MenuItem('Show', self.show_window),\n pystray.MenuItem('Quit', self.quit_window)\n )\n self.window.protocol('WM_DELETE_WINDOW', self.withdraw_window)\n self.window.mainloop()\n\n\n def quit_window(self):\n self.icon.stop()\n self.window.destroy()\n\n\n def show_window(self):\n self.icon.stop()\n self.window.protocol('WM_DELETE_WINDOW', self.withdraw_window)\n self.window.after(0, self.window.deiconify)\n\n\n def withdraw_window(self):\n self.window.withdraw()\n self.icon = pystray.Icon(\"name\", self.image, \"title\", self.menu)\n self.icon.run()\n\n\nif __name__ in '__main__':\n Gui()\n\n" ]
[ 25, 1 ]
[]
[]
[ "python", "systray", "tkinter" ]
stackoverflow_0054835399_python_systray_tkinter.txt
Q: Asyncio file reading json I am trying to read a json file in an async function. I managed to find this code that works, but is rather clunky in the sense that it requires three extra parts for the file read: import aiofiles read the file convert file to dict import aiofiles import asyncio import json async def main(): # Read the contents of the json file. async with aiofiles.open('rhydon.json', mode='r') as f: contents = await f.read() # Load it into a dictionary and create a list of moves. pokemon = json.loads(contents) name = pokemon['name'] moves = [move['move']['name'] for move in pokemon['moves']] # Open a new file to write the list of moves into. async with aiofiles.open(f'{name}_moves.txt', mode='w') as f: await f.write('\n'.join(moves)) asyncio.run(main()) Ideally, i would like to use just the asyncio module alone, so was wondering if this is achievable in that module or if it is necessary to use aiofiles or if i have missed a better method altogether ? A: asyncio does not support asynchronous file operations, you can see the asyncio wiki for further explanation aiofiles allows you to read files asynchronously by delegating their operations to a separate thread pool
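If the goal is to stay on the standard library only, one option (a sketch, not the only way) is to keep the file I/O as ordinary blocking functions and hand them to asyncio's default thread pool with asyncio.to_thread, which exists from Python 3.9; on older versions loop.run_in_executor(None, ...) plays the same role. The file names below mirror the question.
import asyncio
import json

def read_json(path):
    with open(path) as f:
        return json.load(f)

def write_text(path, text):
    with open(path, "w") as f:
        f.write(text)

async def main():
    # Blocking reads/writes run in a worker thread, so the event loop stays free.
    pokemon = await asyncio.to_thread(read_json, "rhydon.json")
    moves = [move["move"]["name"] for move in pokemon["moves"]]
    await asyncio.to_thread(write_text, f"{pokemon['name']}_moves.txt", "\n".join(moves))

asyncio.run(main())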
Asyncio file reading json
I am trying to read a json file in an async function. I managed to find this code that works, but is rather clunky in the sense that it requires three extra parts for the file read: import aiofiles read the file convert file to dict import aiofiles import asyncio import json async def main(): # Read the contents of the json file. async with aiofiles.open('rhydon.json', mode='r') as f: contents = await f.read() # Load it into a dictionary and create a list of moves. pokemon = json.loads(contents) name = pokemon['name'] moves = [move['move']['name'] for move in pokemon['moves']] # Open a new file to write the list of moves into. async with aiofiles.open(f'{name}_moves.txt', mode='w') as f: await f.write('\n'.join(moves)) asyncio.run(main()) Ideally, i would like to use just the asyncio module alone, so was wondering if this is achievable in that module or if it is necessary to use aiofiles or if i have missed a better method altogether ?
[ "asyncio does not support asynchronous file operations, you can see the asyncio wiki for further explanation\naiofiles allows you to read files asynchronously by delegating their operations to a separate thread pool\n" ]
[ 0 ]
[]
[]
[ "json", "python", "python_asyncio" ]
stackoverflow_0074645594_json_python_python_asyncio.txt
Q: how to treat var in second multiprocessing.Pool when the var is from first Pool How to treat var in second multiprocessing.Pool when the var is from first Pool? For the example code from multiprocessing import Pool import pandas as pd lst = [1, 2, 3] def csv(code): df = pd.DataFrame({code: [code, code**2, code**3]}, index=lst) return {code: df} def mp1(): with Pool(8) as pool: rs = pool.map(csv, lst) dfs = dict((key, val) for k in rs for key, val in k.items()) return dfs def dosomthing(code): dfs[code] = dfs[code] * code return {code: dfs[code]} def mp_dosomething(): with Pool(8) as pool: rs = pool.map(dosomthing, lst) dfc = dict((key, val) for k in rs for key, val in k.items()) return dfc if __name__ == '__main__': dfs = mp1() dfc = mp_dosomething() I can easily get dfs after if __name__ == '__main__': from fuction mp1. But when I want to do something with dfs using second Pool. It will get errer: multiprocessing.pool.RemoteTraceback: """ Traceback (most recent call last): File "C:\Users\NeNe\AppData\Local\Programs\Python\Python310\lib\multiprocessing\pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "C:\Users\NeNe\AppData\Local\Programs\Python\Python310\lib\multiprocessing\pool.py", line 48, in mapstar return list(map(*args)) File "c:\Users\NeNe\OneDrive\Python\test.py", line 17, in dosomthing dfs[code] = dfs[code] * code NameError: name 'dfs' is not defined """ The above exception was the direct cause of the following exception: Traceback (most recent call last): File "c:\Users\NeNe\OneDrive\Python\test.py", line 28, in <module> dfc = mp_dosomething() File "c:\Users\NeNe\OneDrive\Python\test.py", line 22, in mp_dosomething rs = pool.map(dosomthing, lst) File "C:\Users\NeNe\AppData\Local\Programs\Python\Python310\lib\multiprocessing\pool.py", line 367, in map return self._map_async(func, iterable, mapstar, chunksize).get() File "C:\Users\NeNe\AppData\Local\Programs\Python\Python310\lib\multiprocessing\pool.py", line 774, in get raise self._value NameError: name 'dfs' is not defined How can I get the dfc? A: At least on Windows, global variables of the main process are not available in the worker processes. The Pool class supports to call an initializer function on each worker to receive such variables (if their data can be pickled) and set them. Here this can be done like: def initializer(ext_dfs): global dfs dfs = ext_dfs def mp_dosomething(): with Pool(8, initializer, (dfs,)) as pool: # Do work
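Putting the answer's initializer idea back into the question's structure gives something like the runnable sketch below. Names follow the question; the second pool receives dfs through initargs, so it must be picklable, and the example only prints one frame at the end to show that it worked.
from multiprocessing import Pool

import pandas as pd

lst = [1, 2, 3]

def csv(code):
    return {code: pd.DataFrame({code: [code, code**2, code**3]}, index=lst)}

def initializer(ext_dfs):
    global dfs
    dfs = ext_dfs            # make the first pool's result visible inside each worker

def dosomething(code):
    return {code: dfs[code] * code}

if __name__ == '__main__':
    with Pool(8) as pool:
        dfs = {k: v for r in pool.map(csv, lst) for k, v in r.items()}
    with Pool(8, initializer, (dfs,)) as pool:
        dfc = {k: v for r in pool.map(dosomething, lst) for k, v in r.items()}
    print(dfc[2])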
how to treat var in second multiprocessing.Pool when the var is from first Pool
How to treat var in second multiprocessing.Pool when the var is from first Pool? For the example code from multiprocessing import Pool import pandas as pd lst = [1, 2, 3] def csv(code): df = pd.DataFrame({code: [code, code**2, code**3]}, index=lst) return {code: df} def mp1(): with Pool(8) as pool: rs = pool.map(csv, lst) dfs = dict((key, val) for k in rs for key, val in k.items()) return dfs def dosomthing(code): dfs[code] = dfs[code] * code return {code: dfs[code]} def mp_dosomething(): with Pool(8) as pool: rs = pool.map(dosomthing, lst) dfc = dict((key, val) for k in rs for key, val in k.items()) return dfc if __name__ == '__main__': dfs = mp1() dfc = mp_dosomething() I can easily get dfs after if __name__ == '__main__': from fuction mp1. But when I want to do something with dfs using second Pool. It will get errer: multiprocessing.pool.RemoteTraceback: """ Traceback (most recent call last): File "C:\Users\NeNe\AppData\Local\Programs\Python\Python310\lib\multiprocessing\pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "C:\Users\NeNe\AppData\Local\Programs\Python\Python310\lib\multiprocessing\pool.py", line 48, in mapstar return list(map(*args)) File "c:\Users\NeNe\OneDrive\Python\test.py", line 17, in dosomthing dfs[code] = dfs[code] * code NameError: name 'dfs' is not defined """ The above exception was the direct cause of the following exception: Traceback (most recent call last): File "c:\Users\NeNe\OneDrive\Python\test.py", line 28, in <module> dfc = mp_dosomething() File "c:\Users\NeNe\OneDrive\Python\test.py", line 22, in mp_dosomething rs = pool.map(dosomthing, lst) File "C:\Users\NeNe\AppData\Local\Programs\Python\Python310\lib\multiprocessing\pool.py", line 367, in map return self._map_async(func, iterable, mapstar, chunksize).get() File "C:\Users\NeNe\AppData\Local\Programs\Python\Python310\lib\multiprocessing\pool.py", line 774, in get raise self._value NameError: name 'dfs' is not defined How can I get the dfc?
[ "At least on Windows, global variables of the main process are not available in the worker processes. The Pool class supports to call an initializer function on each worker to receive such variables (if their data can be pickled) and set them.\nHere this can be done like:\ndef initializer(ext_dfs):\n global dfs\n dfs = ext_dfs\n\n\ndef mp_dosomething():\n with Pool(8, initializer, (dfs,)) as pool:\n # Do work\n\n" ]
[ 0 ]
[]
[]
[ "multiprocessing", "python" ]
stackoverflow_0074638071_multiprocessing_python.txt
Q: Why is all of my data stored as the key in JSON? When I receive a POST request from AJAX, all of my data is stored as the key with an empty value. Client side: var csrftoken = $('meta[name=csrf-token]').attr('content') $.ajaxSetup({ beforeSend: function(xhr, settings) { if (!/^(GET|HEAD|OPTIONS|TRACE)$/i.test(settings.type)) { xhr.setRequestHeader("X-CSRFToken", csrftoken) } } }) function post_order_items() { my_data = {id:{{ order.order_id }}, name:"test"}; $.ajax({ type: "POST", url: "{{ url_for('update_order_items') }}", data: JSON.stringify(my_data), dataType: "json", success: function(data, textStatus) { if (data.redirect) { // data.redirect contains the string URL to redirect to window.location.href = data.redirect; } else { console.log('no redirect') } } }); } Server side: @app.route("/order_items", methods = ['GET','POST']) def update_order_items(): try: jsdata = request.form print(jsdata) except Exception as e: print(e) flash('Order Updated', category='success') return jsonify({'redirect': url_for('order', order_id=3)}) The output looks like this (order.order_id was 3 in this case) ImmutableMultiDict([('{"id":3,"name":"test"}', '')]) For some reason, '{"id":3,"name":"test"}' is a key and the value is an empty string. Why is that? and how do I fix it? I'd like it to end up as a standard python dictionary, for example: { 'id':3, 'name':'test' } A: I was able to access the value using request.form['id'] after removing the JSON.stringify() function. function post_order_items() { my_data = {id:{{ order.order_id }}, name:"test"}; $.ajax({ type: "POST", url: "{{ url_for('update_order_items') }}", data: my_data, dataType: "json", success: function(data, textStatus) { if (data.redirect) { // data.redirect contains the string URL to redirect to window.location.href = data.redirect; } else { console.log('no redirect') } } }); }
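An alternative to dropping JSON.stringify() is to send the body as real JSON (add contentType: "application/json" to the $.ajax options) and read it on the Flask side with request.get_json(). The handler below is a trimmed-down sketch rather than the original route; the redirect logic is left out.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/order_items", methods=["POST"])
def update_order_items():
    payload = request.get_json()   # e.g. {'id': 3, 'name': 'test'}
    order_id = payload["id"]
    return jsonify({"ok": True, "order_id": order_id})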
Why is all of my data stored as the key in JSON?
When I receive a POST request from AJAX, all of my data is stored as the key with an empty value. Client side: var csrftoken = $('meta[name=csrf-token]').attr('content') $.ajaxSetup({ beforeSend: function(xhr, settings) { if (!/^(GET|HEAD|OPTIONS|TRACE)$/i.test(settings.type)) { xhr.setRequestHeader("X-CSRFToken", csrftoken) } } }) function post_order_items() { my_data = {id:{{ order.order_id }}, name:"test"}; $.ajax({ type: "POST", url: "{{ url_for('update_order_items') }}", data: JSON.stringify(my_data), dataType: "json", success: function(data, textStatus) { if (data.redirect) { // data.redirect contains the string URL to redirect to window.location.href = data.redirect; } else { console.log('no redirect') } } }); } Server side: @app.route("/order_items", methods = ['GET','POST']) def update_order_items(): try: jsdata = request.form print(jsdata) except Exception as e: print(e) flash('Order Updated', category='success') return jsonify({'redirect': url_for('order', order_id=3)}) The output looks like this (order.order_id was 3 in this case) ImmutableMultiDict([('{"id":3,"name":"test"}', '')]) For some reason, '{"id":3,"name":"test"}' is a key and the value is an empty string. Why is that? and how do I fix it? I'd like it to end up as a standard python dictionary, for example: { 'id':3, 'name':'test' }
[ "I was able to access the value using request.form['id'] after removing the JSON.stringify() function.\nfunction post_order_items() {\n my_data = {id:{{ order.order_id }}, name:\"test\"};\n $.ajax({\n type: \"POST\",\n url: \"{{ url_for('update_order_items') }}\",\n data: my_data,\n dataType: \"json\",\n success: function(data, textStatus) {\n if (data.redirect) {\n // data.redirect contains the string URL to redirect to\n window.location.href = data.redirect;\n } else {\n console.log('no redirect')\n } \n }\n });\n}\n\n\n" ]
[ 0 ]
[]
[]
[ "ajax", "flask", "json", "python" ]
stackoverflow_0074620534_ajax_flask_json_python.txt
Q: Swap Elements of Array in Python in One Line [Failed Memory Reference] Given the following Array: A = [11, 0, 9, 2, 7], I want to swap A[0] and A[3]. Expected result: A = [2, 0, 9, 11, 7]. Can someone explain why the first and the second method failed? I am suspecting this has to do with memory reference. Any thoughts? First Approach (FAILED) A = [11, 0, 9, 2, 7] print("Original:",A) temp = A[0] temp, A[3] = A[3], temp print("First: ",A) Second Approach (FAILED) A = [11, 0, 9, 2, 7] temp = A[0] A[3], temp = temp, A[3] print("Second: ",A) Third Approach (Worked) A = [11, 0, 9, 2, 7] A[0], A[3] = A[3], A[0] print("Third: ",A) Fourth Approach (Worked) A = [11, 0, 9, 2, 7] temp = A[0] # p = 11 A[0] = A[3] A[3] = temp print("Fourth: ",A) RESULTS Original: [11, 0, 9, 2, 7] First (Failed): [11, 0, 9, 11, 7] Second (Failed): [11, 0, 9, 11, 7] Third (Success): [2, 0, 9, 11, 7] Fourth (Success): [2, 0, 9, 11, 7] A: Because in 1st and 2nd you are swapping values between temp and A[3] But not doing it for A[0] just do A[0] = temp As you have don in 3rd "temp" is a variable outside the list. be carefull
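For completeness, here is the first approach with the one missing assignment added back, which is the point the answer above makes: after the tuple assignment, temp holds the old A[3], so it still has to be written into A[0].
A = [11, 0, 9, 2, 7]
temp = A[0]
temp, A[3] = A[3], temp   # temp now holds the old A[3] (2), A[3] holds 11
A[0] = temp               # the step the first approach was missing
print(A)                  # [2, 0, 9, 11, 7]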
Swap Elements of Array in Python in One Line [Failed Memory Reference]
Given the following Array: A = [11, 0, 9, 2, 7], I want to swap A[0] and A[3]. Expected result: A = [2, 0, 9, 11, 7]. Can someone explain why the first and the second method failed? I am suspecting this has to do with memory reference. Any thoughts? First Approach (FAILED) A = [11, 0, 9, 2, 7] print("Original:",A) temp = A[0] temp, A[3] = A[3], temp print("First: ",A) Second Approach (FAILED) A = [11, 0, 9, 2, 7] temp = A[0] A[3], temp = temp, A[3] print("Second: ",A) Third Approach (Worked) A = [11, 0, 9, 2, 7] A[0], A[3] = A[3], A[0] print("Third: ",A) Fourth Approach (Worked) A = [11, 0, 9, 2, 7] temp = A[0] # p = 11 A[0] = A[3] A[3] = temp print("Fourth: ",A) RESULTS Original: [11, 0, 9, 2, 7] First (Failed): [11, 0, 9, 11, 7] Second (Failed): [11, 0, 9, 11, 7] Third (Success): [2, 0, 9, 11, 7] Fourth (Success): [2, 0, 9, 11, 7]
[ "Because in 1st and 2nd you are swapping values between temp and A[3]\nBut not doing it for A[0]\njust do\nA[0] = temp\nAs you have don in 3rd\n\"temp\" is a variable outside the list.\nbe carefull\n" ]
[ 0 ]
[]
[]
[ "partitioning", "python", "reference", "sharedpreferences", "swap" ]
stackoverflow_0074645644_partitioning_python_reference_sharedpreferences_swap.txt
Q: How can I implement a tree in Python? I am trying to construct a General tree. Are there any built-in data structures in Python to implement it? A: I recommend anytree (I am the author). Example: from anytree import Node, RenderTree udo = Node("Udo") marc = Node("Marc", parent=udo) lian = Node("Lian", parent=marc) dan = Node("Dan", parent=udo) jet = Node("Jet", parent=dan) jan = Node("Jan", parent=dan) joe = Node("Joe", parent=dan) print(udo) Node('/Udo') print(joe) Node('/Udo/Dan/Joe') for pre, fill, node in RenderTree(udo): print("%s%s" % (pre, node.name)) Udo ├── Marc │ └── Lian └── Dan ├── Jet ├── Jan └── Joe print(dan.children) (Node('/Udo/Dan/Jet'), Node('/Udo/Dan/Jan'), Node('/Udo/Dan/Joe')) anytree has also a powerful API with: simple tree creation simple tree modification pre-order tree iteration post-order tree iteration resolve relative and absolute node paths walking from one node to an other. tree rendering (see example above) node attach/detach hookups A: Python doesn't have the quite the extensive range of "built-in" data structures as Java does. However, because Python is dynamic, a general tree is easy to create. For example, a binary tree might be: class Tree: def __init__(self): self.left = None self.right = None self.data = None You can use it like this: root = Tree() root.data = "root" root.left = Tree() root.left.data = "left" root.right = Tree() root.right.data = "right" If you need an arbitrary number of children per node, then use a list of children: class Tree: def __init__(self, data): self.children = [] self.data = data left = Tree("left") middle = Tree("middle") right = Tree("right") root = Tree("root") root.children = [left, middle, right] A: A generic tree is a node with zero or more children, each one a proper (tree) node. It isn't the same as a binary tree, they're different data structures, although both shares some terminology. There isn't any builtin data structure for generic trees in Python, but it's easily implemented with classes. class Tree(object): "Generic tree node." def __init__(self, name='root', children=None): self.name = name self.children = [] if children is not None: for child in children: self.add_child(child) def __repr__(self): return self.name def add_child(self, node): assert isinstance(node, Tree) self.children.append(node) # * # /|\ # 1 2 + # / \ # 3 4 t = Tree('*', [Tree('1'), Tree('2'), Tree('+', [Tree('3'), Tree('4')])]) A: You can try: from collections import defaultdict def tree(): return defaultdict(tree) users = tree() users['harold']['username'] = 'hrldcpr' users['handler']['username'] = 'matthandlersux' As suggested here: https://gist.github.com/2012250 A: class Node: """ Class Node """ def __init__(self, value): self.left = None self.data = value self.right = None class Tree: """ Class tree will provide a tree as well as utility functions. """ def createNode(self, data): """ Utility function to create a node. """ return Node(data) def insert(self, node , data): """ Insert function will insert a node into tree. Duplicate keys are not allowed. """ #if tree is empty , return a root node if node is None: return self.createNode(data) # if data is smaller than parent , insert it into left side if data < node.data: node.left = self.insert(node.left, data) elif data > node.data: node.right = self.insert(node.right, data) return node def search(self, node, data): """ Search function will search a node into tree. """ # if root is None or root is the search data. 
if node is None or node.data == data: return node if node.data < data: return self.search(node.right, data) else: return self.search(node.left, data) def deleteNode(self,node,data): """ Delete function will delete a node into tree. Not complete , may need some more scenarion that we can handle Now it is handling only leaf. """ # Check if tree is empty. if node is None: return None # searching key into BST. if data < node.data: node.left = self.deleteNode(node.left, data) elif data > node.data: node.right = self.deleteNode(node.right, data) else: # reach to the node that need to delete from BST. if node.left is None and node.right is None: del node if node.left == None: temp = node.right del node return temp elif node.right == None: temp = node.left del node return temp return node def traverseInorder(self, root): """ traverse function will print all the node in the tree. """ if root is not None: self.traverseInorder(root.left) print(root.data) self.traverseInorder(root.right) def traversePreorder(self, root): """ traverse function will print all the node in the tree. """ if root is not None: print(root.data) self.traversePreorder(root.left) self.traversePreorder(root.right) def traversePostorder(self, root): """ traverse function will print all the node in the tree. """ if root is not None: self.traversePostorder(root.left) self.traversePostorder(root.right) print(root.data) def main(): root = None tree = Tree() root = tree.insert(root, 10) print(root) tree.insert(root, 20) tree.insert(root, 30) tree.insert(root, 40) tree.insert(root, 70) tree.insert(root, 60) tree.insert(root, 80) print("Traverse Inorder") tree.traverseInorder(root) print("Traverse Preorder") tree.traversePreorder(root) print("Traverse Postorder") tree.traversePostorder(root) if __name__ == "__main__": main() A: There aren't trees built in, but you can easily construct one by subclassing a Node type from List and writing the traversal methods. If you do this, I've found bisect useful. There are also many implementations on PyPi that you can browse. If I remember correctly, the Python standard lib doesn't include tree data structures for the same reason that the .NET base class library doesn't: locality of memory is reduced, resulting in more cache misses. On modern processors it's usually faster to just bring a large chunk of memory into the cache, and "pointer rich" data structures negate the benefit. A: I implemented a rooted tree as a dictionary {child:parent}. So for instance with the root node 0, a tree might look like that: tree={1:0, 2:0, 3:1, 4:2, 5:3} This structure made it quite easy to go upward along a path from any node to the root, which was relevant for the problem I was working on. 
A: Greg Hewgill's answer is great but if you need more nodes per level you can use a list|dictionary to create them: And then use method to access them either by name or order (like id) class node(object): def __init__(self): self.name=None self.node=[] self.otherInfo = None self.prev=None def nex(self,child): "Gets a node by number" return self.node[child] def prev(self): return self.prev def goto(self,data): "Gets the node by name" for child in range(0,len(self.node)): if(self.node[child].name==data): return self.node[child] def add(self): node1=node() self.node.append(node1) node1.prev=self return node1 Now just create a root and build it up: ex: tree=node() #create a node tree.name="root" #name it root tree.otherInfo="blue" #or what ever tree=tree.add() #add a node to the root tree.name="node1" #name it root / child1 tree=tree.add() tree.name="grandchild1" root / child1 / grandchild1 tree=tree.prev() tree=tree.add() tree.name="gchild2" root / child1 / \ grandchild1 gchild2 tree=tree.prev() tree=tree.prev() tree=tree.add() tree=tree.name="child2" root / \ child1 child2 / \ grandchild1 gchild2 tree=tree.prev() tree=tree.goto("child1") or tree=tree.nex(0) tree.name="changed" root / \ changed child2 / \ grandchild1 gchild2 That should be enough for you to start figuring out how to make this work A: class Tree(dict): """A tree implementation using python's autovivification feature.""" def __missing__(self, key): value = self[key] = type(self)() return value #cast a (nested) dict to a (nested) Tree class def __init__(self, data={}): for k, data in data.items(): if isinstance(data, dict): self[k] = type(self)(data) else: self[k] = data works as a dictionary, but provides as many nested dicts you want. Try the following: your_tree = Tree() your_tree['a']['1']['x'] = '@' your_tree['a']['1']['y'] = '#' your_tree['a']['2']['x'] = '$' your_tree['a']['3'] = '%' your_tree['b'] = '*' will deliver a nested dict ... which works as a tree indeed. {'a': {'1': {'x': '@', 'y': '#'}, '2': {'x': '$'}, '3': '%'}, 'b': '*'} ... If you have already a dict, it will cast each level to a tree: d = {'foo': {'amy': {'what': 'runs'} } } tree = Tree(d) print(d['foo']['amy']['what']) # returns 'runs' d['foo']['amy']['when'] = 'now' # add new branch In this way, you can keep edit/add/remove each dict level as you wish. All the dict methods for traversal etc, still apply. A: If someone needs a simpler way to do it, a tree is only a recursively nested list (since set is not hashable) : [root, [child_1, [[child_11, []], [child_12, []]], [child_2, []]]] Where each branch is a pair: [ object, [children] ] and each leaf is a pair: [ object, [] ] But if you need a class with methods, you can use anytree. A: I've implemented trees using nested dicts. It is quite easy to do, and it has worked for me with pretty large data sets. I've posted a sample below, and you can see more at Google code def addBallotToTree(self, tree, ballotIndex, ballot=""): """Add one ballot to the tree. The root of the tree is a dictionary that has as keys the indicies of all continuing and winning candidates. For each candidate, the value is also a dictionary, and the keys of that dictionary include "n" and "bi". tree[c]["n"] is the number of ballots that rank candidate c first. tree[c]["bi"] is a list of ballot indices where the ballots rank c first. If candidate c is a winning candidate, then that portion of the tree is expanded to indicate the breakdown of the subsequently ranked candidates. 
In this situation, additional keys are added to the tree[c] dictionary corresponding to subsequently ranked candidates. tree[c]["n"] is the number of ballots that rank candidate c first. tree[c]["bi"] is a list of ballot indices where the ballots rank c first. tree[c][d]["n"] is the number of ballots that rank c first and d second. tree[c][d]["bi"] is a list of the corresponding ballot indices. Where the second ranked candidates is also a winner, then the tree is expanded to the next level. Losing candidates are ignored and treated as if they do not appear on the ballots. For example, tree[c][d]["n"] is the total number of ballots where candidate c is the first non-losing candidate, c is a winner, and d is the next non-losing candidate. This will include the following ballots, where x represents a losing candidate: [c d] [x c d] [c x d] [x c x x d] During the count, the tree is dynamically updated as candidates change their status. The parameter "tree" to this method may be the root of the tree or may be a sub-tree. """ if ballot == "": # Add the complete ballot to the tree weight, ballot = self.b.getWeightedBallot(ballotIndex) else: # When ballot is not "", we are adding a truncated ballot to the tree, # because a higher-ranked candidate is a winner. weight = self.b.getWeight(ballotIndex) # Get the top choice among candidates still in the running # Note that we can't use Ballots.getTopChoiceFromWeightedBallot since # we are looking for the top choice over a truncated ballot. for c in ballot: if c in self.continuing | self.winners: break # c is the top choice so stop else: c = None # no candidates left on this ballot if c is None: # This will happen if the ballot contains only winning and losing # candidates. The ballot index will not need to be transferred # again so it can be thrown away. return # Create space if necessary. if not tree.has_key(c): tree[c] = {} tree[c]["n"] = 0 tree[c]["bi"] = [] tree[c]["n"] += weight if c in self.winners: # Because candidate is a winner, a portion of the ballot goes to # the next candidate. Pass on a truncated ballot so that the same # candidate doesn't get counted twice. i = ballot.index(c) ballot2 = ballot[i+1:] self.addBallotToTree(tree[c], ballotIndex, ballot2) else: # Candidate is in continuing so we stop here. tree[c]["bi"].append(ballotIndex) A: If you are already using the networkx library, then you can implement a tree using that. NetworkX is a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks. As 'tree' is another term for a (normally rooted) connected acyclic graph, and these are called 'arborescences' in NetworkX. You may want to implement a plane tree (aka ordered tree) where each sibling has a unique rank and this is normally done via labelling the nodes. However, graph language looks different from tree language, and the means of 'rooting' an arborescence is normally done by using a directed graph so, while there are some really cool functions and corresponding visualisations available, it would probably not be an ideal choice if you are not already using networkx. An example of building a tree: import networkx as nx G = nx.Graph() G.add_edge('A', 'B') G.add_edge('B', 'C') G.add_edge('B', 'D') G.add_edge('A', 'E') G.add_edge('E', 'F') The library enables each node to be any hashable object, and there is no constraint on the number of children each node has. 
A: I've published a Python 3 tree implementation on my site: https://web.archive.org/web/20120723175438/www.quesucede.com/page/show/id/python_3_tree_implementation Here's the code: import uuid def sanitize_id(id): return id.strip().replace(" ", "") (_ADD, _DELETE, _INSERT) = range(3) (_ROOT, _DEPTH, _WIDTH) = range(3) class Node: def __init__(self, name, identifier=None, expanded=True): self.__identifier = (str(uuid.uuid1()) if identifier is None else sanitize_id(str(identifier))) self.name = name self.expanded = expanded self.__bpointer = None self.__fpointer = [] @property def identifier(self): return self.__identifier @property def bpointer(self): return self.__bpointer @bpointer.setter def bpointer(self, value): if value is not None: self.__bpointer = sanitize_id(value) @property def fpointer(self): return self.__fpointer def update_fpointer(self, identifier, mode=_ADD): if mode is _ADD: self.__fpointer.append(sanitize_id(identifier)) elif mode is _DELETE: self.__fpointer.remove(sanitize_id(identifier)) elif mode is _INSERT: self.__fpointer = [sanitize_id(identifier)] class Tree: def __init__(self): self.nodes = [] def get_index(self, position): for index, node in enumerate(self.nodes): if node.identifier == position: break return index def create_node(self, name, identifier=None, parent=None): node = Node(name, identifier) self.nodes.append(node) self.__update_fpointer(parent, node.identifier, _ADD) node.bpointer = parent return node def show(self, position, level=_ROOT): queue = self[position].fpointer if level == _ROOT: print("{0} [{1}]".format(self[position].name, self[position].identifier)) else: print("\t"*level, "{0} [{1}]".format(self[position].name, self[position].identifier)) if self[position].expanded: level += 1 for element in queue: self.show(element, level) # recursive call def expand_tree(self, position, mode=_DEPTH): # Python generator. Loosly based on an algorithm from 'Essential LISP' by # John R. Anderson, Albert T. Corbett, and Brian J. 
Reiser, page 239-241 yield position queue = self[position].fpointer while queue: yield queue[0] expansion = self[queue[0]].fpointer if mode is _DEPTH: queue = expansion + queue[1:] # depth-first elif mode is _WIDTH: queue = queue[1:] + expansion # width-first def is_branch(self, position): return self[position].fpointer def __update_fpointer(self, position, identifier, mode): if position is None: return else: self[position].update_fpointer(identifier, mode) def __update_bpointer(self, position, identifier): self[position].bpointer = identifier def __getitem__(self, key): return self.nodes[self.get_index(key)] def __setitem__(self, key, item): self.nodes[self.get_index(key)] = item def __len__(self): return len(self.nodes) def __contains__(self, identifier): return [node.identifier for node in self.nodes if node.identifier is identifier] if __name__ == "__main__": tree = Tree() tree.create_node("Harry", "harry") # root node tree.create_node("Jane", "jane", parent = "harry") tree.create_node("Bill", "bill", parent = "harry") tree.create_node("Joe", "joe", parent = "jane") tree.create_node("Diane", "diane", parent = "jane") tree.create_node("George", "george", parent = "diane") tree.create_node("Mary", "mary", parent = "diane") tree.create_node("Jill", "jill", parent = "george") tree.create_node("Carol", "carol", parent = "jill") tree.create_node("Grace", "grace", parent = "bill") tree.create_node("Mark", "mark", parent = "jane") print("="*80) tree.show("harry") print("="*80) for node in tree.expand_tree("harry", mode=_WIDTH): print(node) print("="*80) A: Hi you may give itertree a try (I'm the author). The package goes in the direction of anytree package but with a bit different focus. The performance on huge trees (>100000 items) is much better and it deals with iterators to have effective filter mechanism. 
>>>from itertree import * >>>root=iTree('root') >>># add some children: >>>root.append(iTree('Africa',data={'surface':30200000,'inhabitants':1257000000})) >>>root.append(iTree('Asia', data={'surface': 44600000, 'inhabitants': 4000000000})) >>>root.append(iTree('America', data={'surface': 42549000, 'inhabitants': 1009000000})) >>>root.append(iTree('Australia&Oceania', data={'surface': 8600000, 'inhabitants': 36000000})) >>>root.append(iTree('Europe', data={'surface': 10523000 , 'inhabitants': 746000000})) >>># you might use __iadd__ operator for adding too: >>>root+=iTree('Antarktika', data={'surface': 14000000, 'inhabitants': 1100}) >>># for building next level we select per index: >>>root[0]+=iTree('Ghana',data={'surface':238537,'inhabitants':30950000}) >>>root[0]+=iTree('Niger', data={'surface': 1267000, 'inhabitants': 23300000}) >>>root[1]+=iTree('China', data={'surface': 9596961, 'inhabitants': 1411780000}) >>>root[1]+=iTree('India', data={'surface': 3287263, 'inhabitants': 1380004000}) >>>root[2]+=iTree('Canada', data={'type': 'country', 'surface': 9984670, 'inhabitants': 38008005}) >>>root[2]+=iTree('Mexico', data={'surface': 1972550, 'inhabitants': 127600000 }) >>># extend multiple items: >>>root[3].extend([iTree('Australia', data={'surface': 7688287, 'inhabitants': 25700000 }), iTree('New Zealand', data={'surface': 269652, 'inhabitants': 4900000 })]) >>>root[4]+=iTree('France', data={'surface': 632733, 'inhabitants': 67400000 })) >>># select parent per TagIdx - remember in itertree you might put items with same tag multiple times: >>>root[TagIdx('Europe'0)]+=iTree('Finland', data={'surface': 338465, 'inhabitants': 5536146 }) The created tree can be rendered: >>>root.render() iTree('root') └──iTree('Africa', data=iTData({'surface': 30200000, 'inhabitants': 1257000000})) └──iTree('Ghana', data=iTData({'surface': 238537, 'inhabitants': 30950000})) └──iTree('Niger', data=iTData({'surface': 1267000, 'inhabitants': 23300000})) └──iTree('Asia', data=iTData({'surface': 44600000, 'inhabitants': 4000000000})) └──iTree('China', data=iTData({'surface': 9596961, 'inhabitants': 1411780000})) └──iTree('India', data=iTData({'surface': 3287263, 'inhabitants': 1380004000})) └──iTree('America', data=iTData({'surface': 42549000, 'inhabitants': 1009000000})) └──iTree('Canada', data=iTData({'surface': 9984670, 'inhabitants': 38008005})) └──iTree('Mexico', data=iTData({'surface': 1972550, 'inhabitants': 127600000})) └──iTree('Australia&Oceania', data=iTData({'surface': 8600000, 'inhabitants': 36000000})) └──iTree('Australia', data=iTData({'surface': 7688287, 'inhabitants': 25700000})) └──iTree('New Zealand', data=iTData({'surface': 269652, 'inhabitants': 4900000})) └──iTree('Europe', data=iTData({'surface': 10523000, 'inhabitants': 746000000})) └──iTree('France', data=iTData({'surface': 632733, 'inhabitants': 67400000})) └──iTree('Finland', data=iTData({'surface': 338465, 'inhabitants': 5536146})) └──iTree('Antarktika', data=iTData({'surface': 14000000, 'inhabitants': 1100})) E.g. 
Filtering can be done like this: >>>item_filter = Filter.iTFilterData(data_key='inhabitants', data_value=iTInterval(0, 20000000)) >>>iterator=root.iter_all(item_filter=item_filter) >>>for i in iterator: >>> print(i) iTree("'New Zealand'", data=iTData({'surface': 269652, 'inhabitants': 4900000}), subtree=[]) iTree("'Finland'", data=iTData({'surface': 338465, 'inhabitants': 5536146}), subtree=[]) iTree("'Antarktika'", data=iTData({'surface': 14000000, 'inhabitants': 1100}), subtree=[]) A: Another tree implementation loosely based off of Bruno's answer: class Node: def __init__(self): self.name: str = '' self.children: List[Node] = [] self.parent: Node = self def __getitem__(self, i: int) -> 'Node': return self.children[i] def add_child(self): child = Node() self.children.append(child) child.parent = self return child def __str__(self) -> str: def _get_character(x, left, right) -> str: if x < left: return '/' elif x >= right: return '\\' else: return '|' if len(self.children): children_lines: Sequence[List[str]] = list(map(lambda child: str(child).split('\n'), self.children)) widths: Sequence[int] = list(map(lambda child_lines: len(child_lines[0]), children_lines)) max_height: int = max(map(len, children_lines)) total_width: int = sum(widths) + len(widths) - 1 left: int = (total_width - len(self.name) + 1) // 2 right: int = left + len(self.name) return '\n'.join(( self.name.center(total_width), ' '.join(map(lambda width, position: _get_character(position - width // 2, left, right).center(width), widths, accumulate(widths, add))), *map( lambda row: ' '.join(map( lambda child_lines: child_lines[row] if row < len(child_lines) else ' ' * len(child_lines[0]), children_lines)), range(max_height)))) else: return self.name And an example of how to use it: tree = Node() tree.name = 'Root node' tree.add_child() tree[0].name = 'Child node 0' tree.add_child() tree[1].name = 'Child node 1' tree.add_child() tree[2].name = 'Child node 2' tree[1].add_child() tree[1][0].name = 'Grandchild 1.0' tree[2].add_child() tree[2][0].name = 'Grandchild 2.0' tree[2].add_child() tree[2][1].name = 'Grandchild 2.1' print(tree) Which should output: Root node / / \ Child node 0 Child node 1 Child node 2 | / \ Grandchild 1.0 Grandchild 2.0 Grandchild 2.1 A: Treelib is convenient for the task as well. Documentation can be found treelib. from treelib import Node, Tree tree = Tree() # creating an object tree.create_node("Harry", "harry") # root node tree.create_node("Jane", "jane", parent="harry") #adding nodes tree.create_node("Bill", "bill", parent="harry") tree.create_node("Diane", "diane", parent="jane") tree.create_node("Mary", "mary", parent="diane") tree.create_node("Mark", "mark", parent="jane") tree.show() Harry ├── Bill └── Jane ├── Diane │ └── Mary └── Mark A: bigtree is a Python tree implementation that integrates with Python lists, dictionaries, and pandas DataFrame. It is pythonic, making it easy to learn and extendable to many types of workflows. There are various components to bigtree, namely Constructing Trees from list, dictionary, and pandas DataFrame Traversing Tree Modifying Tree (shift/copy nodes) Search Tree Helper methods (clone tree, prune tree, get the difference between two treess) Export tree (print to console, export tree to dictionary, pandas DataFrame, image etc.) Other tree structures: Binary Trees! Other graph structured: Directed Acyclic Graphs (DAGs)! What more can I say... ah yes it is also well-documented. 
Some examples: from bigtree import list_to_tree, tree_to_dict, tree_to_dot # Create tree from list, print tree root = list_to_tree(["a/b/d", "a/c"]) print_tree(root) # a # ├── b # │ └── d # └── c # Query tree root.children # (Node(/a/b, ), Node(/a/c, )) # Export tree to dictionary / image tree_to_dict(root) # { # '/a': {'name': 'a'}, # '/a/b': {'name': 'b'}, # '/a/b/d': {'name': 'd'}, # '/a/c': {'name': 'c'} # } graph = tree_to_dot(root, node_colour="gold") graph.write_png("tree.png") Source/Disclaimer: I'm the creator of bigtree ;)
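Several answers above represent a tree simply as a node holding a list of children, and the expand_tree generator earlier shows how reordering a work queue switches between depth-first and width-first output. As a minimal, self-contained sketch of that idea (the Node class and names below are illustrative assumptions, not taken from any particular answer):
from collections import deque

class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

def traverse(root, depth_first=True):
    # Prepending children gives depth-first order, appending gives breadth-first.
    queue = deque([root])
    while queue:
        node = queue.popleft()
        yield node.name
        if depth_first:
            queue.extendleft(reversed(node.children))  # keep left-to-right child order
        else:
            queue.extend(node.children)

root = Node("root", [Node("a", [Node("a1"), Node("a2")]), Node("b")])
print(list(traverse(root, depth_first=True)))   # ['root', 'a', 'a1', 'a2', 'b']
print(list(traverse(root, depth_first=False)))  # ['root', 'a', 'b', 'a1', 'a2']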
How can I implement a tree in Python?
I am trying to construct a General tree. Are there any built-in data structures in Python to implement it?
[ "I recommend anytree (I am the author).\nExample:\nfrom anytree import Node, RenderTree\n\nudo = Node(\"Udo\")\nmarc = Node(\"Marc\", parent=udo)\nlian = Node(\"Lian\", parent=marc)\ndan = Node(\"Dan\", parent=udo)\njet = Node(\"Jet\", parent=dan)\njan = Node(\"Jan\", parent=dan)\njoe = Node(\"Joe\", parent=dan)\n\nprint(udo)\nNode('/Udo')\nprint(joe)\nNode('/Udo/Dan/Joe')\n\nfor pre, fill, node in RenderTree(udo):\n print(\"%s%s\" % (pre, node.name))\nUdo\n├── Marc\n│ └── Lian\n└── Dan\n ├── Jet\n ├── Jan\n └── Joe\n\nprint(dan.children)\n(Node('/Udo/Dan/Jet'), Node('/Udo/Dan/Jan'), Node('/Udo/Dan/Joe'))\n\nanytree has also a powerful API with:\n\nsimple tree creation\nsimple tree modification\npre-order tree iteration\npost-order tree iteration\nresolve relative and absolute node paths\nwalking from one node to an other.\ntree rendering (see example above)\nnode attach/detach hookups\n\n", "Python doesn't have the quite the extensive range of \"built-in\" data structures as Java does. However, because Python is dynamic, a general tree is easy to create. For example, a binary tree might be:\nclass Tree:\n def __init__(self):\n self.left = None\n self.right = None\n self.data = None\n\nYou can use it like this:\nroot = Tree()\nroot.data = \"root\"\nroot.left = Tree()\nroot.left.data = \"left\"\nroot.right = Tree()\nroot.right.data = \"right\"\n\nIf you need an arbitrary number of children per node, then use a list of children:\nclass Tree:\n def __init__(self, data):\n self.children = []\n self.data = data\n\nleft = Tree(\"left\")\nmiddle = Tree(\"middle\")\nright = Tree(\"right\")\nroot = Tree(\"root\")\nroot.children = [left, middle, right]\n\n", "A generic tree is a node with zero or more children, each one a proper (tree) node. It isn't the same as a binary tree, they're different data structures, although both shares some terminology.\nThere isn't any builtin data structure for generic trees in Python, but it's easily implemented with classes.\nclass Tree(object):\n \"Generic tree node.\"\n def __init__(self, name='root', children=None):\n self.name = name\n self.children = []\n if children is not None:\n for child in children:\n self.add_child(child)\n def __repr__(self):\n return self.name\n def add_child(self, node):\n assert isinstance(node, Tree)\n self.children.append(node)\n# *\n# /|\\\n# 1 2 +\n# / \\\n# 3 4\nt = Tree('*', [Tree('1'),\n Tree('2'),\n Tree('+', [Tree('3'),\n Tree('4')])])\n\n", "You can try:\nfrom collections import defaultdict\ndef tree(): return defaultdict(tree)\nusers = tree()\nusers['harold']['username'] = 'hrldcpr'\nusers['handler']['username'] = 'matthandlersux'\n\nAs suggested here: https://gist.github.com/2012250\n", "class Node:\n \"\"\"\n Class Node\n \"\"\"\n def __init__(self, value):\n self.left = None\n self.data = value\n self.right = None\n\nclass Tree:\n \"\"\"\n Class tree will provide a tree as well as utility functions.\n \"\"\"\n\n def createNode(self, data):\n \"\"\"\n Utility function to create a node.\n \"\"\"\n return Node(data)\n\n def insert(self, node , data):\n \"\"\"\n Insert function will insert a node into tree.\n Duplicate keys are not allowed.\n \"\"\"\n #if tree is empty , return a root node\n if node is None:\n return self.createNode(data)\n # if data is smaller than parent , insert it into left side\n if data < node.data:\n node.left = self.insert(node.left, data)\n elif data > node.data:\n node.right = self.insert(node.right, data)\n\n return node\n\n\n def search(self, node, data):\n \"\"\"\n Search function will search 
a node into tree.\n \"\"\"\n # if root is None or root is the search data.\n if node is None or node.data == data:\n return node\n\n if node.data < data:\n return self.search(node.right, data)\n else:\n return self.search(node.left, data)\n\n\n\n def deleteNode(self,node,data):\n \"\"\"\n Delete function will delete a node into tree.\n Not complete , may need some more scenarion that we can handle\n Now it is handling only leaf.\n \"\"\"\n\n # Check if tree is empty.\n if node is None:\n return None\n\n # searching key into BST.\n if data < node.data:\n node.left = self.deleteNode(node.left, data)\n elif data > node.data:\n node.right = self.deleteNode(node.right, data)\n else: # reach to the node that need to delete from BST.\n if node.left is None and node.right is None:\n del node\n if node.left == None:\n temp = node.right\n del node\n return temp\n elif node.right == None:\n temp = node.left\n del node\n return temp\n\n return node\n\n def traverseInorder(self, root):\n \"\"\"\n traverse function will print all the node in the tree.\n \"\"\"\n if root is not None:\n self.traverseInorder(root.left)\n print(root.data)\n self.traverseInorder(root.right)\n\n def traversePreorder(self, root):\n \"\"\"\n traverse function will print all the node in the tree.\n \"\"\"\n if root is not None:\n print(root.data)\n self.traversePreorder(root.left)\n self.traversePreorder(root.right)\n\n def traversePostorder(self, root):\n \"\"\"\n traverse function will print all the node in the tree.\n \"\"\"\n if root is not None:\n self.traversePostorder(root.left)\n self.traversePostorder(root.right)\n print(root.data)\n\n\ndef main():\n root = None\n tree = Tree()\n root = tree.insert(root, 10)\n print(root)\n tree.insert(root, 20)\n tree.insert(root, 30)\n tree.insert(root, 40)\n tree.insert(root, 70)\n tree.insert(root, 60)\n tree.insert(root, 80)\n\n print(\"Traverse Inorder\")\n tree.traverseInorder(root)\n\n print(\"Traverse Preorder\")\n tree.traversePreorder(root)\n\n print(\"Traverse Postorder\")\n tree.traversePostorder(root)\n\n\nif __name__ == \"__main__\":\n main()\n\n", "There aren't trees built in, but you can easily construct one by subclassing a Node type from List and writing the traversal methods. If you do this, I've found bisect useful. \nThere are also many implementations on PyPi that you can browse. \nIf I remember correctly, the Python standard lib doesn't include tree data structures for the same reason that the .NET base class library doesn't: locality of memory is reduced, resulting in more cache misses. On modern processors it's usually faster to just bring a large chunk of memory into the cache, and \"pointer rich\" data structures negate the benefit. \n", "I implemented a rooted tree as a dictionary {child:parent}. 
So for instance with the root node 0, a tree might look like that:\ntree={1:0, 2:0, 3:1, 4:2, 5:3}\n\nThis structure made it quite easy to go upward along a path from any node to the root, which was relevant for the problem I was working on.\n", "Greg Hewgill's answer is great but if you need more nodes per level you can use a list|dictionary to create them: And then use method to access them either by name or order (like id)\nclass node(object):\n def __init__(self):\n self.name=None\n self.node=[]\n self.otherInfo = None\n self.prev=None\n def nex(self,child):\n \"Gets a node by number\"\n return self.node[child]\n def prev(self):\n return self.prev\n def goto(self,data):\n \"Gets the node by name\"\n for child in range(0,len(self.node)):\n if(self.node[child].name==data):\n return self.node[child]\n def add(self):\n node1=node()\n self.node.append(node1)\n node1.prev=self\n return node1\n\nNow just create a root and build it up:\nex:\ntree=node() #create a node\ntree.name=\"root\" #name it root\ntree.otherInfo=\"blue\" #or what ever \ntree=tree.add() #add a node to the root\ntree.name=\"node1\" #name it\n\n root\n /\nchild1\n\ntree=tree.add()\ntree.name=\"grandchild1\"\n\n root\n /\n child1\n /\ngrandchild1\n\ntree=tree.prev()\ntree=tree.add()\ntree.name=\"gchild2\"\n\n root\n /\n child1\n / \\\ngrandchild1 gchild2\n\ntree=tree.prev()\ntree=tree.prev()\ntree=tree.add()\ntree=tree.name=\"child2\"\n\n root\n / \\\n child1 child2\n / \\\ngrandchild1 gchild2\n\n\ntree=tree.prev()\ntree=tree.goto(\"child1\") or tree=tree.nex(0)\ntree.name=\"changed\"\n\n root\n / \\\n changed child2\n / \\\n grandchild1 gchild2\n\nThat should be enough for you to start figuring out how to make this work\n", "class Tree(dict):\n \"\"\"A tree implementation using python's autovivification feature.\"\"\"\n def __missing__(self, key):\n value = self[key] = type(self)()\n return value\n\n #cast a (nested) dict to a (nested) Tree class\n def __init__(self, data={}):\n for k, data in data.items():\n if isinstance(data, dict):\n self[k] = type(self)(data)\n else:\n self[k] = data\n\nworks as a dictionary, but provides as many nested dicts you want.\nTry the following:\nyour_tree = Tree()\n\nyour_tree['a']['1']['x'] = '@'\nyour_tree['a']['1']['y'] = '#'\nyour_tree['a']['2']['x'] = '$'\nyour_tree['a']['3'] = '%'\nyour_tree['b'] = '*'\n\nwill deliver a nested dict ... which works as a tree indeed.\n{'a': {'1': {'x': '@', 'y': '#'}, '2': {'x': '$'}, '3': '%'}, 'b': '*'}\n\n... If you have already a dict, it will cast each level to a tree:\nd = {'foo': {'amy': {'what': 'runs'} } }\ntree = Tree(d)\n\nprint(d['foo']['amy']['what']) # returns 'runs'\nd['foo']['amy']['when'] = 'now' # add new branch\n\nIn this way, you can keep edit/add/remove each dict level as you wish.\nAll the dict methods for traversal etc, still apply.\n", "If someone needs a simpler way to do it, a tree is only a recursively nested list (since set is not hashable) :\n[root, [child_1, [[child_11, []], [child_12, []]], [child_2, []]]]\n\nWhere each branch is a pair: [ object, [children] ]\nand each leaf is a pair: [ object, [] ]\nBut if you need a class with methods, you can use anytree.\n", "I've implemented trees using nested dicts. It is quite easy to do, and it has worked for me with pretty large data sets. 
I've posted a sample below, and you can see more at Google code\n def addBallotToTree(self, tree, ballotIndex, ballot=\"\"):\n \"\"\"Add one ballot to the tree.\n\n The root of the tree is a dictionary that has as keys the indicies of all \n continuing and winning candidates. For each candidate, the value is also\n a dictionary, and the keys of that dictionary include \"n\" and \"bi\".\n tree[c][\"n\"] is the number of ballots that rank candidate c first.\n tree[c][\"bi\"] is a list of ballot indices where the ballots rank c first.\n\n If candidate c is a winning candidate, then that portion of the tree is\n expanded to indicate the breakdown of the subsequently ranked candidates.\n In this situation, additional keys are added to the tree[c] dictionary\n corresponding to subsequently ranked candidates.\n tree[c][\"n\"] is the number of ballots that rank candidate c first.\n tree[c][\"bi\"] is a list of ballot indices where the ballots rank c first.\n tree[c][d][\"n\"] is the number of ballots that rank c first and d second.\n tree[c][d][\"bi\"] is a list of the corresponding ballot indices.\n\n Where the second ranked candidates is also a winner, then the tree is \n expanded to the next level. \n\n Losing candidates are ignored and treated as if they do not appear on the \n ballots. For example, tree[c][d][\"n\"] is the total number of ballots\n where candidate c is the first non-losing candidate, c is a winner, and\n d is the next non-losing candidate. This will include the following\n ballots, where x represents a losing candidate:\n [c d]\n [x c d]\n [c x d]\n [x c x x d]\n\n During the count, the tree is dynamically updated as candidates change\n their status. The parameter \"tree\" to this method may be the root of the\n tree or may be a sub-tree.\n \"\"\"\n\n if ballot == \"\":\n # Add the complete ballot to the tree\n weight, ballot = self.b.getWeightedBallot(ballotIndex)\n else:\n # When ballot is not \"\", we are adding a truncated ballot to the tree,\n # because a higher-ranked candidate is a winner.\n weight = self.b.getWeight(ballotIndex)\n\n # Get the top choice among candidates still in the running\n # Note that we can't use Ballots.getTopChoiceFromWeightedBallot since\n # we are looking for the top choice over a truncated ballot.\n for c in ballot:\n if c in self.continuing | self.winners:\n break # c is the top choice so stop\n else:\n c = None # no candidates left on this ballot\n\n if c is None:\n # This will happen if the ballot contains only winning and losing\n # candidates. The ballot index will not need to be transferred\n # again so it can be thrown away.\n return\n\n # Create space if necessary.\n if not tree.has_key(c):\n tree[c] = {}\n tree[c][\"n\"] = 0\n tree[c][\"bi\"] = []\n\n tree[c][\"n\"] += weight\n\n if c in self.winners:\n # Because candidate is a winner, a portion of the ballot goes to\n # the next candidate. 
Pass on a truncated ballot so that the same\n # candidate doesn't get counted twice.\n i = ballot.index(c)\n ballot2 = ballot[i+1:]\n self.addBallotToTree(tree[c], ballotIndex, ballot2)\n else:\n # Candidate is in continuing so we stop here.\n tree[c][\"bi\"].append(ballotIndex)\n\n", "If you are already using the networkx library, then you can implement a tree using that.\n\nNetworkX is a Python package for the creation, manipulation, and study\nof the structure, dynamics, and functions of complex networks.\n\nAs 'tree' is another term for a (normally rooted) connected acyclic graph, and these are called 'arborescences' in NetworkX.\nYou may want to implement a plane tree (aka ordered tree) where each sibling has a unique rank and this is normally done via labelling the nodes.\nHowever, graph language looks different from tree language, and the means of 'rooting' an arborescence is normally done by using a directed graph so, while there are some really cool functions and corresponding visualisations available, it would probably not be an ideal choice if you are not already using networkx.\nAn example of building a tree:\nimport networkx as nx\nG = nx.Graph()\nG.add_edge('A', 'B')\nG.add_edge('B', 'C')\nG.add_edge('B', 'D')\nG.add_edge('A', 'E')\nG.add_edge('E', 'F')\n\nThe library enables each node to be any hashable object, and there is no constraint on the number of children each node has.\n", "I've published a Python 3 tree implementation on my site: https://web.archive.org/web/20120723175438/www.quesucede.com/page/show/id/python_3_tree_implementation\nHere's the code:\nimport uuid\n\ndef sanitize_id(id):\n return id.strip().replace(\" \", \"\")\n\n(_ADD, _DELETE, _INSERT) = range(3)\n(_ROOT, _DEPTH, _WIDTH) = range(3)\n\nclass Node:\n\n def __init__(self, name, identifier=None, expanded=True):\n self.__identifier = (str(uuid.uuid1()) if identifier is None else\n sanitize_id(str(identifier)))\n self.name = name\n self.expanded = expanded\n self.__bpointer = None\n self.__fpointer = []\n\n @property\n def identifier(self):\n return self.__identifier\n\n @property\n def bpointer(self):\n return self.__bpointer\n\n @bpointer.setter\n def bpointer(self, value):\n if value is not None:\n self.__bpointer = sanitize_id(value)\n\n @property\n def fpointer(self):\n return self.__fpointer\n\n def update_fpointer(self, identifier, mode=_ADD):\n if mode is _ADD:\n self.__fpointer.append(sanitize_id(identifier))\n elif mode is _DELETE:\n self.__fpointer.remove(sanitize_id(identifier))\n elif mode is _INSERT:\n self.__fpointer = [sanitize_id(identifier)]\n\nclass Tree:\n\n def __init__(self):\n self.nodes = []\n\n def get_index(self, position):\n for index, node in enumerate(self.nodes):\n if node.identifier == position:\n break\n return index\n\n def create_node(self, name, identifier=None, parent=None):\n\n node = Node(name, identifier)\n self.nodes.append(node)\n self.__update_fpointer(parent, node.identifier, _ADD)\n node.bpointer = parent\n return node\n\n def show(self, position, level=_ROOT):\n queue = self[position].fpointer\n if level == _ROOT:\n print(\"{0} [{1}]\".format(self[position].name,\n self[position].identifier))\n else:\n print(\"\\t\"*level, \"{0} [{1}]\".format(self[position].name,\n self[position].identifier))\n if self[position].expanded:\n level += 1\n for element in queue:\n self.show(element, level) # recursive call\n\n def expand_tree(self, position, mode=_DEPTH):\n # Python generator. Loosly based on an algorithm from 'Essential LISP' by\n # John R. Anderson, Albert T. 
Corbett, and Brian J. Reiser, page 239-241\n yield position\n queue = self[position].fpointer\n while queue:\n yield queue[0]\n expansion = self[queue[0]].fpointer\n if mode is _DEPTH:\n queue = expansion + queue[1:] # depth-first\n elif mode is _WIDTH:\n queue = queue[1:] + expansion # width-first\n\n def is_branch(self, position):\n return self[position].fpointer\n\n def __update_fpointer(self, position, identifier, mode):\n if position is None:\n return\n else:\n self[position].update_fpointer(identifier, mode)\n\n def __update_bpointer(self, position, identifier):\n self[position].bpointer = identifier\n\n def __getitem__(self, key):\n return self.nodes[self.get_index(key)]\n\n def __setitem__(self, key, item):\n self.nodes[self.get_index(key)] = item\n\n def __len__(self):\n return len(self.nodes)\n\n def __contains__(self, identifier):\n return [node.identifier for node in self.nodes\n if node.identifier is identifier]\n\nif __name__ == \"__main__\":\n\n tree = Tree()\n tree.create_node(\"Harry\", \"harry\") # root node\n tree.create_node(\"Jane\", \"jane\", parent = \"harry\")\n tree.create_node(\"Bill\", \"bill\", parent = \"harry\")\n tree.create_node(\"Joe\", \"joe\", parent = \"jane\")\n tree.create_node(\"Diane\", \"diane\", parent = \"jane\")\n tree.create_node(\"George\", \"george\", parent = \"diane\")\n tree.create_node(\"Mary\", \"mary\", parent = \"diane\")\n tree.create_node(\"Jill\", \"jill\", parent = \"george\")\n tree.create_node(\"Carol\", \"carol\", parent = \"jill\")\n tree.create_node(\"Grace\", \"grace\", parent = \"bill\")\n tree.create_node(\"Mark\", \"mark\", parent = \"jane\")\n\n print(\"=\"*80)\n tree.show(\"harry\")\n print(\"=\"*80)\n for node in tree.expand_tree(\"harry\", mode=_WIDTH):\n print(node)\n print(\"=\"*80)\n\n", "Hi you may give itertree a try (I'm the author).\nThe package goes in the direction of anytree package but with a bit different focus. 
The performance on huge trees (>100000 items) is much better and it deals with iterators to have effective filter mechanism.\n>>>from itertree import *\n>>>root=iTree('root')\n\n>>># add some children:\n>>>root.append(iTree('Africa',data={'surface':30200000,'inhabitants':1257000000}))\n>>>root.append(iTree('Asia', data={'surface': 44600000, 'inhabitants': 4000000000}))\n>>>root.append(iTree('America', data={'surface': 42549000, 'inhabitants': 1009000000}))\n>>>root.append(iTree('Australia&Oceania', data={'surface': 8600000, 'inhabitants': 36000000}))\n>>>root.append(iTree('Europe', data={'surface': 10523000 , 'inhabitants': 746000000}))\n>>># you might use __iadd__ operator for adding too:\n>>>root+=iTree('Antarktika', data={'surface': 14000000, 'inhabitants': 1100})\n\n>>># for building next level we select per index:\n>>>root[0]+=iTree('Ghana',data={'surface':238537,'inhabitants':30950000})\n>>>root[0]+=iTree('Niger', data={'surface': 1267000, 'inhabitants': 23300000})\n>>>root[1]+=iTree('China', data={'surface': 9596961, 'inhabitants': 1411780000})\n>>>root[1]+=iTree('India', data={'surface': 3287263, 'inhabitants': 1380004000})\n>>>root[2]+=iTree('Canada', data={'type': 'country', 'surface': 9984670, 'inhabitants': 38008005}) \n>>>root[2]+=iTree('Mexico', data={'surface': 1972550, 'inhabitants': 127600000 })\n>>># extend multiple items:\n>>>root[3].extend([iTree('Australia', data={'surface': 7688287, 'inhabitants': 25700000 }), iTree('New Zealand', data={'surface': 269652, 'inhabitants': 4900000 })])\n>>>root[4]+=iTree('France', data={'surface': 632733, 'inhabitants': 67400000 }))\n>>># select parent per TagIdx - remember in itertree you might put items with same tag multiple times:\n>>>root[TagIdx('Europe'0)]+=iTree('Finland', data={'surface': 338465, 'inhabitants': 5536146 })\n\nThe created tree can be rendered:\n>>>root.render()\niTree('root')\n └──iTree('Africa', data=iTData({'surface': 30200000, 'inhabitants': 1257000000}))\n └──iTree('Ghana', data=iTData({'surface': 238537, 'inhabitants': 30950000}))\n └──iTree('Niger', data=iTData({'surface': 1267000, 'inhabitants': 23300000}))\n └──iTree('Asia', data=iTData({'surface': 44600000, 'inhabitants': 4000000000}))\n └──iTree('China', data=iTData({'surface': 9596961, 'inhabitants': 1411780000}))\n └──iTree('India', data=iTData({'surface': 3287263, 'inhabitants': 1380004000}))\n └──iTree('America', data=iTData({'surface': 42549000, 'inhabitants': 1009000000}))\n └──iTree('Canada', data=iTData({'surface': 9984670, 'inhabitants': 38008005}))\n └──iTree('Mexico', data=iTData({'surface': 1972550, 'inhabitants': 127600000}))\n └──iTree('Australia&Oceania', data=iTData({'surface': 8600000, 'inhabitants': 36000000}))\n └──iTree('Australia', data=iTData({'surface': 7688287, 'inhabitants': 25700000}))\n └──iTree('New Zealand', data=iTData({'surface': 269652, 'inhabitants': 4900000}))\n └──iTree('Europe', data=iTData({'surface': 10523000, 'inhabitants': 746000000}))\n └──iTree('France', data=iTData({'surface': 632733, 'inhabitants': 67400000}))\n └──iTree('Finland', data=iTData({'surface': 338465, 'inhabitants': 5536146}))\n └──iTree('Antarktika', data=iTData({'surface': 14000000, 'inhabitants': 1100}))\n\nE.g. 
Filtering can be done like this:\n>>>item_filter = Filter.iTFilterData(data_key='inhabitants', data_value=iTInterval(0, 20000000))\n>>>iterator=root.iter_all(item_filter=item_filter)\n>>>for i in iterator:\n>>> print(i)\niTree(\"'New Zealand'\", data=iTData({'surface': 269652, 'inhabitants': 4900000}), subtree=[])\niTree(\"'Finland'\", data=iTData({'surface': 338465, 'inhabitants': 5536146}), subtree=[])\niTree(\"'Antarktika'\", data=iTData({'surface': 14000000, 'inhabitants': 1100}), subtree=[])\n\n", "Another tree implementation loosely based off of Bruno's answer:\nclass Node:\n def __init__(self):\n self.name: str = ''\n self.children: List[Node] = []\n self.parent: Node = self\n\n def __getitem__(self, i: int) -> 'Node':\n return self.children[i]\n\n def add_child(self):\n child = Node()\n self.children.append(child)\n child.parent = self\n return child\n\n def __str__(self) -> str:\n def _get_character(x, left, right) -> str:\n if x < left:\n return '/'\n elif x >= right:\n return '\\\\'\n else:\n return '|'\n\n if len(self.children):\n children_lines: Sequence[List[str]] = list(map(lambda child: str(child).split('\\n'), self.children))\n widths: Sequence[int] = list(map(lambda child_lines: len(child_lines[0]), children_lines))\n max_height: int = max(map(len, children_lines))\n total_width: int = sum(widths) + len(widths) - 1\n left: int = (total_width - len(self.name) + 1) // 2\n right: int = left + len(self.name)\n\n return '\\n'.join((\n self.name.center(total_width),\n ' '.join(map(lambda width, position: _get_character(position - width // 2, left, right).center(width),\n widths, accumulate(widths, add))),\n *map(\n lambda row: ' '.join(map(\n lambda child_lines: child_lines[row] if row < len(child_lines) else ' ' * len(child_lines[0]),\n children_lines)),\n range(max_height))))\n else:\n return self.name\n\nAnd an example of how to use it:\ntree = Node()\ntree.name = 'Root node'\ntree.add_child()\ntree[0].name = 'Child node 0'\ntree.add_child()\ntree[1].name = 'Child node 1'\ntree.add_child()\ntree[2].name = 'Child node 2'\ntree[1].add_child()\ntree[1][0].name = 'Grandchild 1.0'\ntree[2].add_child()\ntree[2][0].name = 'Grandchild 2.0'\ntree[2].add_child()\ntree[2][1].name = 'Grandchild 2.1'\nprint(tree)\n\nWhich should output:\n\n Root node \n / / \\ \nChild node 0 Child node 1 Child node 2 \n | / \\ \n Grandchild 1.0 Grandchild 2.0 Grandchild 2.1\n\n", "Treelib is convenient for the task as well. Documentation can be found treelib.\nfrom treelib import Node, Tree\ntree = Tree() # creating an object\ntree.create_node(\"Harry\", \"harry\") # root node \ntree.create_node(\"Jane\", \"jane\", parent=\"harry\") #adding nodes\ntree.create_node(\"Bill\", \"bill\", parent=\"harry\")\ntree.create_node(\"Diane\", \"diane\", parent=\"jane\")\ntree.create_node(\"Mary\", \"mary\", parent=\"diane\")\ntree.create_node(\"Mark\", \"mark\", parent=\"jane\")\ntree.show()\n\nHarry\n├── Bill\n└── Jane\n ├── Diane\n │ └── Mary\n └── Mark\n\n", "bigtree is a Python tree implementation that integrates with Python lists, dictionaries, and pandas DataFrame. 
It is pythonic, making it easy to learn and extendable to many types of workflows.\nThere are various components to bigtree, namely\n\nConstructing Trees from list, dictionary, and pandas DataFrame\nTraversing Tree\nModifying Tree (shift/copy nodes)\nSearch Tree\nHelper methods (clone tree, prune tree, get the difference between two treess)\nExport tree (print to console, export tree to dictionary, pandas DataFrame, image etc.)\nOther tree structures: Binary Trees!\nOther graph structured: Directed Acyclic Graphs (DAGs)!\n\nWhat more can I say... ah yes it is also well-documented.\nSome examples:\nfrom bigtree import list_to_tree, tree_to_dict, tree_to_dot\n\n# Create tree from list, print tree\nroot = list_to_tree([\"a/b/d\", \"a/c\"])\nprint_tree(root)\n# a\n# ├── b\n# │ └── d\n# └── c\n\n# Query tree\nroot.children\n# (Node(/a/b, ), Node(/a/c, ))\n\n# Export tree to dictionary / image\ntree_to_dict(root)\n# {\n# '/a': {'name': 'a'},\n# '/a/b': {'name': 'b'},\n# '/a/b/d': {'name': 'd'},\n# '/a/c': {'name': 'c'}\n# }\n\ngraph = tree_to_dot(root, node_colour=\"gold\")\ngraph.write_png(\"tree.png\")\n\n\nSource/Disclaimer: I'm the creator of bigtree ;)\n" ]
[ 367, 146, 68, 45, 39, 17, 16, 10, 9, 9, 7, 7, 6, 2, 1, 1, 0 ]
[ "If you want to create a tree data structure then first you have to create the treeElement object. If you create the treeElement object, then you can decide how your tree behaves. \nTo do this following is the TreeElement class:\nclass TreeElement (object):\n\ndef __init__(self):\n self.elementName = None\n self.element = []\n self.previous = None\n self.elementScore = None\n self.elementParent = None\n self.elementPath = []\n self.treeLevel = 0\n\ndef goto(self, data):\n for child in range(0, len(self.element)):\n if (self.element[child].elementName == data):\n return self.element[child]\n\ndef add(self):\n\n single_element = TreeElement()\n single_element.elementName = self.elementName\n single_element.previous = self.elementParent\n single_element.elementScore = self.elementScore\n single_element.elementPath = self.elementPath\n single_element.treeLevel = self.treeLevel\n\n self.element.append(single_element)\n\n return single_element\n\nNow, we have to use this element to create the tree, I am using A* tree in this example.\nclass AStarAgent(Agent):\n# Initialization Function: Called one time when the game starts\ndef registerInitialState(self, state):\n return;\n\n# GetAction Function: Called with every frame\ndef getAction(self, state):\n\n # Sorting function for the queue\n def sortByHeuristic(each_element):\n\n if each_element.elementScore:\n individual_score = each_element.elementScore[0][0] + each_element.treeLevel\n else:\n individual_score = admissibleHeuristic(each_element)\n\n return individual_score\n\n # check the game is over or not\n if state.isWin():\n print('Job is done')\n return Directions.STOP\n elif state.isLose():\n print('you lost')\n return Directions.STOP\n\n # Create empty list for the next states\n astar_queue = []\n astar_leaf_queue = []\n astar_tree_level = 0\n parent_tree_level = 0\n\n # Create Tree from the give node element\n astar_tree = TreeElement()\n astar_tree.elementName = state\n astar_tree.treeLevel = astar_tree_level\n astar_tree = astar_tree.add()\n\n # Add first element into the queue\n astar_queue.append(astar_tree)\n\n # Traverse all the elements of the queue\n while astar_queue:\n\n # Sort the element from the queue\n if len(astar_queue) > 1:\n astar_queue.sort(key=lambda x: sortByHeuristic(x))\n\n # Get the first node from the queue\n astar_child_object = astar_queue.pop(0)\n astar_child_state = astar_child_object.elementName\n\n # get all legal actions for the current node\n current_actions = astar_child_state.getLegalPacmanActions()\n\n if current_actions:\n\n # get all the successor state for these actions\n for action in current_actions:\n\n # Get the successor of the current node\n next_state = astar_child_state.generatePacmanSuccessor(action)\n\n if next_state:\n\n # evaluate the successor states using scoreEvaluation heuristic\n element_scored = [(admissibleHeuristic(next_state), action)]\n\n # Increase the level for the child\n parent_tree_level = astar_tree.goto(astar_child_state)\n if parent_tree_level:\n astar_tree_level = parent_tree_level.treeLevel + 1\n else:\n astar_tree_level += 1\n\n # create tree for the finding the data\n astar_tree.elementName = next_state\n astar_tree.elementParent = astar_child_state\n astar_tree.elementScore = element_scored\n astar_tree.elementPath.append(astar_child_state)\n astar_tree.treeLevel = astar_tree_level\n astar_object = astar_tree.add()\n\n # If the state exists then add that to the queue\n astar_queue.append(astar_object)\n\n else:\n # Update the value leaf into the queue\n 
astar_leaf_state = astar_tree.goto(astar_child_state)\n astar_leaf_queue.append(astar_leaf_state)\n\nYou can add/remove any elements from the object, but make the structure intect. \n" ]
[ -3 ]
[ "data_structures", "python", "tree" ]
stackoverflow_0002358045_data_structures_python_tree.txt
Q: How to extract the information from XML beautiful soup? I have a list of XML beautifulsoap tag elements as: [ <Entry> <EffectiveDate> <DateFormattedForTHForm>07/01/2022</DateFormattedForTHForm> </EffectiveDate> <ExpirationDate> <DateFormattedForTHForm>07/01/2023</DateFormattedForTHForm> </ExpirationDate> <FormDescription>Notification Of Settlement</FormDescription> <FormNumber>WC 99 06 04</FormNumber> </Entry>, <Entry> <AccountContactRole> <AccountContact> <Contact> <DisplayName>Mallesham Yamulla</DisplayName> <FEINOrSSN>123-45-6789</FEINOrSSN> <formsMaskedSSN_and_NoMaskFEIN>**-***-8834</formsMaskedSSN_and_NoMaskFEIN> <PrimaryAddress> <AddressLine1>A</AddressLine1> <AddressLine123>B</AddressLine123> <CityStateZip>ENID, OK 73703</CityStateZip> <Country>IND</Country> <AddressLine2 xsi:nil="true"/> <AddressLine3 xsi:nil="true"/> </PrimaryAddress> </Contact> </AccountContact> </AccountContactRole> </Entry> ] Here I would like to loop through the list of entry xml elements, get a tag name and its contained information's, if any of tag is empty and its information is also empty it should be ignored. From first entry the below tag information is required to be extracted as they hold on information. [<DateFormattedForTHForm>07/01/2022</DateFormattedForTHForm>, <DateFormattedForTHForm>07/01/2023</DateFormattedForTHForm>, <FormDescription>Notification Of Settlement</FormDescription>, <FormNumber>WC 99 06 04</FormNumber>] From second entry: <DisplayName>Mallesham Yamulla</DisplayName> <FEINOrSSN>123-45-6789</FEINOrSSN> <formsMaskedSSN_and_NoMaskFEIN>**-***-6789</formsMaskedSSN_and_NoMaskFEIN> <PrimaryAddress> <AddressLine1>A</AddressLine1> <AddressLine123>B</AddressLine123> <CityStateZip>ENID, OK 73703</CityStateZip> <Country>IND</Country> A: (With "list of XML beautifulsoup tag elements" in variable xTagList,) you could try something like this bsParser = 'html.parser' # 'xml' # # xTagList = [BeautifulSoup(str(x), bsParser) for x in xTagList] # should fix some formatting wCont_xstrs = ['\n'.join([ str(d) for d in x.descendants if hasattr(d, 'find_all') and not d.find_all() and d.get_text().strip() ]) for x in xTagList] to get html/xml string. with bsParser = 'xml', wCont_xstrs looks like [ <DateFormattedForTHForm>07/01/2022</DateFormattedForTHForm> <DateFormattedForTHForm>07/01/2023</DateFormattedForTHForm> <FormDescription>Notification Of Settlement</FormDescription> <FormNumber>WC 99 06 04</FormNumber> , <DisplayName>Mallesham Yamulla</DisplayName> <FEINOrSSN>123-45-6789</FEINOrSSN> <formsMaskedSSN_and_NoMaskFEIN>**-***-8834</formsMaskedSSN_and_NoMaskFEIN> <AddressLine1>A</AddressLine1> <AddressLine123>B</AddressLine123> <CityStateZip>ENID, OK 73703</CityStateZip> <Country>IND</Country> ] [btw, if your xml had namespaces (as well formed xmls usually do), they would be lost after using xml parser. Using html parser will preserve namespaces, but there will be another issue as you will see below.] 
with bsParser = 'html.parser' (and probably any other parser other than xml), wCont_xstrs looks like [ <dateformattedforthform>07/01/2022</dateformattedforthform> <dateformattedforthform>07/01/2023</dateformattedforthform> <formdescription>Notification Of Settlement</formdescription> <formnumber>WC 99 06 04</formnumber> , <displayname>Mallesham Yamulla</displayname> <feinorssn>123-45-6789</feinorssn> <formsmaskedssn_and_nomaskfein>**-***-8834</formsmaskedssn_and_nomaskfein> <addressline1>A</addressline1> <addressline123>B</addressline123> <citystatezip>ENID, OK 73703</citystatezip> <country>IND</country> ] (notice how capitalization has been lost from tag names) If you want a list bs4 objects, you can do something like wCont_xtags = [BeautifulSoup(x, bsParser) for x in wCont_xstrs] UNLESS you're using bsParser = 'xml', because then you need to wrap them in some tag first like wCont_xtags = [BeautifulSoup(f'<Entry>{x}</Entry>', bsParser).Entry for x in wCont_xstrs]
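One possible follow-up, if the goal is to work with the extracted values rather than the tag markup: collect each entry's non-empty leaf tags into a plain dictionary. This is only a sketch built on the wCont_xtags list from above; the variable and key names are assumptions, and with the html.parser build the keys come out lowercased (e.g. 'formnumber'):
entries_as_dicts = []
for entry in wCont_xtags:
    d = {}
    for tag in entry.find_all(True):                 # True matches every tag in the entry
        d.setdefault(tag.name, []).append(tag.get_text(strip=True))
    entries_as_dicts.append(d)

# e.g. entries_as_dicts[0].get('formnumber') might give ['WC 99 06 04']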
How to extract the information from XML with BeautifulSoup?
I have a list of XML beautifulsoap tag elements as: [ <Entry> <EffectiveDate> <DateFormattedForTHForm>07/01/2022</DateFormattedForTHForm> </EffectiveDate> <ExpirationDate> <DateFormattedForTHForm>07/01/2023</DateFormattedForTHForm> </ExpirationDate> <FormDescription>Notification Of Settlement</FormDescription> <FormNumber>WC 99 06 04</FormNumber> </Entry>, <Entry> <AccountContactRole> <AccountContact> <Contact> <DisplayName>Mallesham Yamulla</DisplayName> <FEINOrSSN>123-45-6789</FEINOrSSN> <formsMaskedSSN_and_NoMaskFEIN>**-***-8834</formsMaskedSSN_and_NoMaskFEIN> <PrimaryAddress> <AddressLine1>A</AddressLine1> <AddressLine123>B</AddressLine123> <CityStateZip>ENID, OK 73703</CityStateZip> <Country>IND</Country> <AddressLine2 xsi:nil="true"/> <AddressLine3 xsi:nil="true"/> </PrimaryAddress> </Contact> </AccountContact> </AccountContactRole> </Entry> ] Here I would like to loop through the list of entry xml elements, get a tag name and its contained information's, if any of tag is empty and its information is also empty it should be ignored. From first entry the below tag information is required to be extracted as they hold on information. [<DateFormattedForTHForm>07/01/2022</DateFormattedForTHForm>, <DateFormattedForTHForm>07/01/2023</DateFormattedForTHForm>, <FormDescription>Notification Of Settlement</FormDescription>, <FormNumber>WC 99 06 04</FormNumber>] From second entry: <DisplayName>Mallesham Yamulla</DisplayName> <FEINOrSSN>123-45-6789</FEINOrSSN> <formsMaskedSSN_and_NoMaskFEIN>**-***-6789</formsMaskedSSN_and_NoMaskFEIN> <PrimaryAddress> <AddressLine1>A</AddressLine1> <AddressLine123>B</AddressLine123> <CityStateZip>ENID, OK 73703</CityStateZip> <Country>IND</Country>
[ "(With \"list of XML beautifulsoup tag elements\" in variable xTagList,) you could try something like this\nbsParser = 'html.parser' # 'xml' # \n# xTagList = [BeautifulSoup(str(x), bsParser) for x in xTagList] # should fix some formatting\nwCont_xstrs = ['\\n'.join([\n str(d) for d in x.descendants if hasattr(d, 'find_all') \n and not d.find_all() and d.get_text().strip()\n]) for x in xTagList]\n\nto get html/xml string.\n\nwith bsParser = 'xml', wCont_xstrs looks like\n[\n<DateFormattedForTHForm>07/01/2022</DateFormattedForTHForm>\n<DateFormattedForTHForm>07/01/2023</DateFormattedForTHForm>\n<FormDescription>Notification Of Settlement</FormDescription>\n<FormNumber>WC 99 06 04</FormNumber>\n,\n<DisplayName>Mallesham Yamulla</DisplayName>\n<FEINOrSSN>123-45-6789</FEINOrSSN>\n<formsMaskedSSN_and_NoMaskFEIN>**-***-8834</formsMaskedSSN_and_NoMaskFEIN>\n<AddressLine1>A</AddressLine1>\n<AddressLine123>B</AddressLine123>\n<CityStateZip>ENID, OK 73703</CityStateZip>\n<Country>IND</Country>\n]\n\n[btw, if your xml had namespaces (as well formed xmls usually do), they would be lost after using xml parser. Using html parser will preserve namespaces, but there will be another issue as you will see below.]\n\nwith bsParser = 'html.parser' (and probably any other parser other than xml), wCont_xstrs looks like\n[\n<dateformattedforthform>07/01/2022</dateformattedforthform>\n<dateformattedforthform>07/01/2023</dateformattedforthform>\n<formdescription>Notification Of Settlement</formdescription>\n<formnumber>WC 99 06 04</formnumber>\n,\n<displayname>Mallesham Yamulla</displayname>\n<feinorssn>123-45-6789</feinorssn>\n<formsmaskedssn_and_nomaskfein>**-***-8834</formsmaskedssn_and_nomaskfein>\n<addressline1>A</addressline1>\n<addressline123>B</addressline123>\n<citystatezip>ENID, OK 73703</citystatezip>\n<country>IND</country>\n]\n\n(notice how capitalization has been lost from tag names)\n\nIf you want a list bs4 objects, you can do something like\nwCont_xtags = [BeautifulSoup(x, bsParser) for x in wCont_xstrs]\n\nUNLESS you're using bsParser = 'xml', because then you need to wrap them in some tag first like\nwCont_xtags = [BeautifulSoup(f'<Entry>{x}</Entry>', bsParser).Entry for x in wCont_xstrs]\n\n" ]
[ 1 ]
[]
[]
[ "beautifulsoup", "python" ]
stackoverflow_0074642521_beautifulsoup_python.txt
Q: Retrieving values of a CSR matrix Question I have a CSR matrix, and I want to be able to retrieve the column indices and the values stored. Data For different reasons I'm not allowed to share my data, but here's a look (the numpy library is imported as np): print(type(data) == type(ind) == list) # data and ind are lists # OUT: True print(len(data) == len(ind) == 134464) # data and ind have a size of 134,464 # OUT: True print(np.alltrue([type(subarray) == np.ndarray for subarray in data])) # data (and ind) contains ndarray # OUT: True print(np.alltrue([len(data[i]) == len(ind[i]) for i in range(len(data))])) # each ndarray of data have the same length than the corresponding ndarray of ind # OUT: True print(min([len(data[i]) for i in range(len(data))]) >= 1) # each subarray of data (and of ind) has at least a length of 1 # OUT: True print(np.alltrue([subarray.dtype == np.float64 for subarray in data])) # each subarray of data (and of ind) contains floats # OUT: True Code Here is how I create the matrix (using csr_matrix from scipy.sparse): indptr = np.empty(nbr_of_rows + 1) # nbr_of_rows = 134,464 = len(data) indptr[0] = 0 for i in range(1, len(indptr)): indptr[i] = indptr[i-1] + len(data[i-1]) data = np.concatenate(data) # now I have type(data) = np.darray, data.dtype = np.float64 and len(data) = 2,821,574 ind = np.concatenante(ind) # same than above X = csr_matrix((data, ind, indptr), shape=(nbr_of_rows, nbr_of_columns)) # nbr_of_columns = 3,991 = max(ind) + 1 (since min(ind) = 0) print(f"The matrix has a shape of {X.shape} and a sparsity of {(1 - (X.nnz / (X.shape[0] * X.shape[1]))): .2%}.") # OUT: The matrix has a shape of (134464, 3991) and a sparsity of 99.47%. So far so good (at least I think so). But now, even though I manage to retrieve the column indices, I can’t successfully retrieve the values: print(np.alltrue(ind == X.nonzero()[1])) # Retrieving the columns indices # OUT: True print(np.alltrue(data == X[X.nonzero()])) # Trying to retrieve the values # OUT: False print(np.alltrue(np.sort(data) == np.sort(X[X.nonzero()]))) # Seeing if the values are at least the same # OUT: False print(np.sum(data) == np.sum(X[X.nonzero()])) # Seeing if the values add up to the same total # OUT: False When I look deeper, I find that I get almost all the values (only a small amount of mistakes): print(len(data) == len(X[X.nonzero()].tolist()[0])) # OUT: True print(len(np.argwhere((data != X[X.nonzero()])))) # OUT: 2184 So I get "only" 2,184 wrong values out of 2,821,574 total values. Can someone please help me in getting all the correct values from my CSR matrix? EDIT I know now thanks to @hpaulj that I can use the class attributes X.indices and X.data to retrieve the CSR format index array and the CSR format data array of the matrix. However, I still would like to know why, in my case, I don't have np.altrue(X[X.nonzero()] == X.data). A: Without your data I can't replicate your problem, and probably wouldn't want to do so even with such a large array. But I'll try to illustrate what I expect to happen when constructing a matrix this way. 
From another question I have a small matrix in a Ipython session: In [60]: Mx Out[60]: <1x3 sparse matrix of type '<class 'numpy.intc'>' with 2 stored elements in Compressed Sparse Row format> In [61]: Mx.A Out[61]: array([[0, 1, 2]], dtype=int32) nonzero returns the coo format indices, row, col In [62]: Mx.nonzero() Out[62]: (array([0, 0], dtype=int32), array([1, 2], dtype=int32)) The csr attributes are: In [63]: Mx.data,Mx.indices,Mx.indptr Out[63]: (array([1, 2], dtype=int32), array([1, 2], dtype=int32), array([0, 2], dtype=int32)) Now lets make a new matrix, using the attributes of Mx. Assuming you constructed your indptr, indices, and data correctly this should imitate what you've done: In [64]: newM = sparse.csr_matrix((Mx.data, Mx.indices, Mx.indptr)) In [65]: newM.A Out[65]: array([[0, 1, 2]], dtype=int32) data matches between the two matrices: In [68]: Mx.data==newM.data Out[68]: array([ True, True]) id of the data don't match, but their bases do. See my recent answer to see why this is relevant https://stackoverflow.com/a/74543855/901925 In [75]: id(Mx.data.base), id(newM.data.base) Out[75]: (2255407394864, 2255407394864) That means changes to newA will appear in Mx: In [77]: newM[0,1] = 100 In [78]: newM.A Out[78]: array([[ 0, 100, 2]], dtype=int32) In [79]: Mx.A Out[79]: array([[ 0, 100, 2]], dtype=int32) fuller test Let's try a small scale test of your code: In [92]: data = np.array([[1.23,2],[3],[]],object); ind = np.array([[1,2],[3],[]],object) ...: indptr = np.empty(4) ...: indptr[0] = 0 ...: for i in range(1, 4): ...: indptr[i] = indptr[i-1] + len(data[i-1]) ...: data = np.concatenate(data).ravel() ...: ind = np.concatenate(ind).ravel() # same than above In [93]: data,ind,indptr Out[93]: (array([1.23, 2. , 3. ]), array([1., 2., 3.]), array([0., 2., 3., 3.])) And the sparse matrix: In [94]: X = sparse.csr_matrix((data, ind, indptr), shape=(3,3)) In [95]: X Out[95]: <3x3 sparse matrix of type '<class 'numpy.float64'>' with 3 stored elements in Compressed Sparse Row format> data matches: In [96]: X.data Out[96]: array([1.23, 2. , 3. ]) In [97]: data == X.data Out[97]: array([ True, True, True]) and is infact a view: In [98]: data[1]+=.23; data Out[98]: array([1.23, 2.23, 3. ]) In [99]: X.A Out[99]: array([[0. , 1.23, 2.23], [0. , 0. , 0. ], [3. , 0. , 0. ]]) oops I made an error in specifying the X shape: In [110]: X = sparse.csr_matrix((data, ind, indptr), shape=(3,4)) In [111]: X.A Out[111]: array([[0. , 1.23, 2.23, 0. ], [0. , 0. , 0. , 3. ], [0. , 0. , 0. , 0. ]]) In [112]: X.data Out[112]: array([1.23, 2.23, 3. ]) In [113]: X.nonzero() Out[113]: (array([0, 0, 1], dtype=int32), array([1, 2, 3], dtype=int32)) In [114]: X[X.nonzero()] Out[114]: matrix([[1.23, 2.23, 3. ]]) In [115]: data Out[115]: array([1.23, 2.23, 3. ]) In [116]: data == X[X.nonzero()] Out[116]: matrix([[ True, True, True]]) A: Depending on the type of the values you store in the matrix, numpy.float64 or numpy.int64, perhaps, the following post might answer your question: https://github.com/scipy/scipy/issues/13329#issuecomment-753541268 In particular, the comment "Apparently I don't get an error when data is a numpy array rather than a list." suggests that having data as numpy.array rather than a list could solve your problem. Hopefully, this at least sets you on the right track.
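A further detail that may explain the EDIT's remaining question of why np.alltrue(X[X.nonzero()] == X.data) can fail: if the ind arrays repeat a column index within a row, or store explicit zeros, then X.data keeps every stored entry as-is, while nonzero() drops explicit zeros and element indexing typically returns the sum of duplicate entries. A toy sketch with made-up numbers, not the real data (calling sum_duplicates() canonicalizes the matrix so the two views agree):
import numpy as np
from scipy import sparse

data    = np.array([1.0, 2.0, 3.0])
indices = np.array([0, 0, 2])        # column 0 appears twice in row 0
indptr  = np.array([0, 3])
X = sparse.csr_matrix((data, indices, indptr), shape=(1, 3))

print(X.data)            # [1. 2. 3.]   raw stored entries
print(X[X.nonzero()])    # [[3. 3. 3.]] indexing sums the duplicated entry
X.sum_duplicates()       # merge duplicates in place
print(X.data)            # [3. 3.]
print(np.array_equal(X.data, np.asarray(X[X.nonzero()]).ravel()))  # True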
Retrieving values of a CSR matrix
Question I have a CSR matrix, and I want to be able to retrieve the column indices and the values stored. Data For different reasons I'm not allowed to share my data, but here's a look (the numpy library is imported as np): print(type(data) == type(ind) == list) # data and ind are lists # OUT: True print(len(data) == len(ind) == 134464) # data and ind have a size of 134,464 # OUT: True print(np.alltrue([type(subarray) == np.ndarray for subarray in data])) # data (and ind) contains ndarray # OUT: True print(np.alltrue([len(data[i]) == len(ind[i]) for i in range(len(data))])) # each ndarray of data have the same length than the corresponding ndarray of ind # OUT: True print(min([len(data[i]) for i in range(len(data))]) >= 1) # each subarray of data (and of ind) has at least a length of 1 # OUT: True print(np.alltrue([subarray.dtype == np.float64 for subarray in data])) # each subarray of data (and of ind) contains floats # OUT: True Code Here is how I create the matrix (using csr_matrix from scipy.sparse): indptr = np.empty(nbr_of_rows + 1) # nbr_of_rows = 134,464 = len(data) indptr[0] = 0 for i in range(1, len(indptr)): indptr[i] = indptr[i-1] + len(data[i-1]) data = np.concatenate(data) # now I have type(data) = np.darray, data.dtype = np.float64 and len(data) = 2,821,574 ind = np.concatenante(ind) # same than above X = csr_matrix((data, ind, indptr), shape=(nbr_of_rows, nbr_of_columns)) # nbr_of_columns = 3,991 = max(ind) + 1 (since min(ind) = 0) print(f"The matrix has a shape of {X.shape} and a sparsity of {(1 - (X.nnz / (X.shape[0] * X.shape[1]))): .2%}.") # OUT: The matrix has a shape of (134464, 3991) and a sparsity of 99.47%. So far so good (at least I think so). But now, even though I manage to retrieve the column indices, I can’t successfully retrieve the values: print(np.alltrue(ind == X.nonzero()[1])) # Retrieving the columns indices # OUT: True print(np.alltrue(data == X[X.nonzero()])) # Trying to retrieve the values # OUT: False print(np.alltrue(np.sort(data) == np.sort(X[X.nonzero()]))) # Seeing if the values are at least the same # OUT: False print(np.sum(data) == np.sum(X[X.nonzero()])) # Seeing if the values add up to the same total # OUT: False When I look deeper, I find that I get almost all the values (only a small amount of mistakes): print(len(data) == len(X[X.nonzero()].tolist()[0])) # OUT: True print(len(np.argwhere((data != X[X.nonzero()])))) # OUT: 2184 So I get "only" 2,184 wrong values out of 2,821,574 total values. Can someone please help me in getting all the correct values from my CSR matrix? EDIT I know now thanks to @hpaulj that I can use the class attributes X.indices and X.data to retrieve the CSR format index array and the CSR format data array of the matrix. However, I still would like to know why, in my case, I don't have np.altrue(X[X.nonzero()] == X.data).
[ "Without your data I can't replicate your problem, and probably wouldn't want to do so even with such a large array.\nBut I'll try to illustrate what I expect to happen when constructing a matrix this way. From another question I have a small matrix in a Ipython session:\nIn [60]: Mx\nOut[60]: \n<1x3 sparse matrix of type '<class 'numpy.intc'>'\n with 2 stored elements in Compressed Sparse Row format>\nIn [61]: Mx.A\nOut[61]: array([[0, 1, 2]], dtype=int32)\n\nnonzero returns the coo format indices, row, col\nIn [62]: Mx.nonzero()\nOut[62]: (array([0, 0], dtype=int32), array([1, 2], dtype=int32))\n\nThe csr attributes are:\nIn [63]: Mx.data,Mx.indices,Mx.indptr\nOut[63]: \n(array([1, 2], dtype=int32),\n array([1, 2], dtype=int32),\n array([0, 2], dtype=int32))\n\nNow lets make a new matrix, using the attributes of Mx. Assuming you constructed your indptr, indices, and data correctly this should imitate what you've done:\nIn [64]: newM = sparse.csr_matrix((Mx.data, Mx.indices, Mx.indptr)) \nIn [65]: newM.A\nOut[65]: array([[0, 1, 2]], dtype=int32)\n\ndata matches between the two matrices:\nIn [68]: Mx.data==newM.data\nOut[68]: array([ True, True])\n\nid of the data don't match, but their bases do. See my recent answer to see why this is relevant\nhttps://stackoverflow.com/a/74543855/901925\nIn [75]: id(Mx.data.base), id(newM.data.base)\nOut[75]: (2255407394864, 2255407394864)\n\nThat means changes to newA will appear in Mx:\nIn [77]: newM[0,1] = 100\nIn [78]: newM.A\nOut[78]: array([[ 0, 100, 2]], dtype=int32)\nIn [79]: Mx.A\nOut[79]: array([[ 0, 100, 2]], dtype=int32)\n\nfuller test\nLet's try a small scale test of your code:\nIn [92]: data = np.array([[1.23,2],[3],[]],object); ind = np.array([[1,2],[3],[]],object)\n ...: indptr = np.empty(4) \n ...: indptr[0] = 0\n ...: for i in range(1, 4):\n ...: indptr[i] = indptr[i-1] + len(data[i-1])\n ...: data = np.concatenate(data).ravel() \n ...: ind = np.concatenate(ind).ravel() # same than above\n\nIn [93]: data,ind,indptr\nOut[93]: (array([1.23, 2. , 3. ]), array([1., 2., 3.]), array([0., 2., 3., 3.]))\n\nAnd the sparse matrix:\nIn [94]: X = sparse.csr_matrix((data, ind, indptr), shape=(3,3)) \nIn [95]: X\nOut[95]: \n<3x3 sparse matrix of type '<class 'numpy.float64'>'\n with 3 stored elements in Compressed Sparse Row format>\n\ndata matches:\nIn [96]: X.data\nOut[96]: array([1.23, 2. , 3. ])\n\nIn [97]: data == X.data\nOut[97]: array([ True, True, True])\n\nand is infact a view:\nIn [98]: data[1]+=.23; data\nOut[98]: array([1.23, 2.23, 3. ]) \nIn [99]: X.A\nOut[99]: \narray([[0. , 1.23, 2.23],\n [0. , 0. , 0. ],\n [3. , 0. , 0. ]])\n\noops\nI made an error in specifying the X shape:\nIn [110]: X = sparse.csr_matrix((data, ind, indptr), shape=(3,4))\n\nIn [111]: X.A\nOut[111]: \narray([[0. , 1.23, 2.23, 0. ],\n [0. , 0. , 0. , 3. ],\n [0. , 0. , 0. , 0. ]])\n\nIn [112]: X.data\nOut[112]: array([1.23, 2.23, 3. ])\n\nIn [113]: X.nonzero()\nOut[113]: (array([0, 0, 1], dtype=int32), array([1, 2, 3], dtype=int32))\n\nIn [114]: X[X.nonzero()]\nOut[114]: matrix([[1.23, 2.23, 3. ]])\n\nIn [115]: data\nOut[115]: array([1.23, 2.23, 3. 
])\n\nIn [116]: data == X[X.nonzero()]\nOut[116]: matrix([[ True, True, True]])\n\n", "Depending on the type of the values you store in the matrix, numpy.float64 or numpy.int64, perhaps, the following post might answer your question: https://github.com/scipy/scipy/issues/13329#issuecomment-753541268\nIn particular, the comment \"Apparently I don't get an error when data is a numpy array rather than a list.\" suggests that having data as numpy.array rather than a list could solve your problem.\nHopefully, this at least sets you on the right track.\n" ]
[ 1, 0 ]
[]
[]
[ "matrix", "numpy", "python", "scipy", "sparse_matrix" ]
stackoverflow_0074614497_matrix_numpy_python_scipy_sparse_matrix.txt
Q: how to draw a pixel in ipycanvas I cannot figure out how to draw a pixel in ipycanvas. I am drawing rectangles instead of pixels and this makes drawing very slow. Drawing a rectangle using: canvas.fill_rect Code to display image in ipycanvas : import pandas as pd import numpy as np import matplotlib.pyplot as plt from PIL import Image import ipycanvas from ipycanvas import Canvas import requests from io import BytesIO url = r"https://wallpapercave.com/dwp1x/wp1816238.jpg" response = requests.get(url) img = Image.open(BytesIO(response.content)) array = img.tobytes() canvas = Canvas(width=img.width, height=img.height) with ipycanvas.hold_canvas(): for i in range(int(len(array)/3)): r = array[i * 3 + 0] # red g = array[i * 3 + 1] # green b = array[i * 3 + 2] # blue canvas.fill_style = f"#{r:02x}{g:02x}{b:02x}" # setting color canvas.fill_rect(i%img.width, int(i/img.width), 1, 1) # drawing rectangle canvas Output: I am drawing image pixel by pixel because I want to apply filters in images. How to draw pixels in ipycanvas? A: Not sure if this will help but given you're talking about filtering I'd assume you mean things like convolutions. Numpy and Scipy help a lot and provide various ways of applying these and work well with images from Pillow. For example: import requests from io import BytesIO from PIL import Image import numpy as np from scipy import signal image_req = requests.get("https://wallpapercave.com/dwp1x/wp1816238.jpg") image_req.raise_for_status() image = Image.open(BytesIO(image_req.content)) # create gaussian glur of a given standard deviation sd = 3 filt = np.outer(*2*[signal.windows.gaussian(int(sd*5)|1, sd)]) filt /= filt.sum() # interpret image as 3d array arr = np.array(image) # apply it to each channel independently, this loop runs in ~0.1 seconds for chan in range(3): arr[:,:,chan] = signal.oaconvolve(arr[:,:,chan], filt, mode='same') # array back into image for display in notebook Image.fromarray(arr) This produces an image like:
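If the per-pixel loop stays too slow even with the filtering done in NumPy/SciPy, ipycanvas can also draw a whole pixel array in one call via put_image_data, which avoids one fill_rect per pixel entirely. A sketch reusing the img and Canvas setup from the question (the crude invert filter here is just a stand-in for whatever filter you actually want to apply):
import numpy as np
from ipycanvas import Canvas, hold_canvas

pixels = np.array(img)                      # (height, width, 3) uint8 array from the PIL image
pixels = 255 - pixels                       # apply the filter on the array, not pixel by pixel

canvas = Canvas(width=img.width, height=img.height)
with hold_canvas():
    canvas.put_image_data(pixels, 0, 0)     # push all pixels at once
canvas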
how to draw a pixel in ipycanvas
I cannot figure out how to draw a pixel in ipycanvas. I am drawing rectangles instead of pixels and this makes drawing very slow. Drawing a rectangle using: canvas.fill_rect Code to display image in ipycanvas : import pandas as pd import numpy as np import matplotlib.pyplot as plt from PIL import Image import ipycanvas from ipycanvas import Canvas import requests from io import BytesIO url = r"https://wallpapercave.com/dwp1x/wp1816238.jpg" response = requests.get(url) img = Image.open(BytesIO(response.content)) array = img.tobytes() canvas = Canvas(width=img.width, height=img.height) with ipycanvas.hold_canvas(): for i in range(int(len(array)/3)): r = array[i * 3 + 0] # red g = array[i * 3 + 1] # green b = array[i * 3 + 2] # blue canvas.fill_style = f"#{r:02x}{g:02x}{b:02x}" # setting color canvas.fill_rect(i%img.width, int(i/img.width), 1, 1) # drawing rectangle canvas Output: I am drawing image pixel by pixel because I want to apply filters in images. How to draw pixels in ipycanvas?
[ "Not sure if this will help but given you're talking about filtering I'd assume you mean things like convolutions. Numpy and Scipy help a lot and provide various ways of applying these and work well with images from Pillow.\nFor example:\nimport requests\nfrom io import BytesIO\nfrom PIL import Image\n\nimport numpy as np\nfrom scipy import signal\n\nimage_req = requests.get(\"https://wallpapercave.com/dwp1x/wp1816238.jpg\")\nimage_req.raise_for_status()\n\nimage = Image.open(BytesIO(image_req.content))\n\n# create gaussian glur of a given standard deviation\nsd = 3\nfilt = np.outer(*2*[signal.windows.gaussian(int(sd*5)|1, sd)])\nfilt /= filt.sum()\n\n# interpret image as 3d array\narr = np.array(image)\n\n# apply it to each channel independently, this loop runs in ~0.1 seconds\nfor chan in range(3):\n arr[:,:,chan] = signal.oaconvolve(arr[:,:,chan], filt, mode='same')\n\n# array back into image for display in notebook\nImage.fromarray(arr)\n\nThis produces an image like:\n\n" ]
[ 1 ]
[]
[]
[ "ipycanvas", "jupyter_notebook", "pixel", "python", "python_imaging_library" ]
stackoverflow_0074626615_ipycanvas_jupyter_notebook_pixel_python_python_imaging_library.txt
Q: Turn a string into a valid filename? I have a string that I want to use as a filename, so I want to remove all characters that wouldn't be allowed in filenames, using Python. I'd rather be strict than otherwise, so let's say I want to retain only letters, digits, and a small set of other characters like "_-.() ". What's the most elegant solution? The filename needs to be valid on multiple operating systems (Windows, Linux and Mac OS) - it's an MP3 file in my library with the song title as the filename, and is shared and backed up between 3 machines. A: You can look at the Django framework for how they create a "slug" from arbitrary text. A slug is URL- and filename- friendly. The Django text utils define a function, slugify(), that's probably the gold standard for this kind of thing. Essentially, their code is the following. import unicodedata import re def slugify(value, allow_unicode=False): """ Taken from https://github.com/django/django/blob/master/django/utils/text.py Convert to ASCII if 'allow_unicode' is False. Convert spaces or repeated dashes to single dashes. Remove characters that aren't alphanumerics, underscores, or hyphens. Convert to lowercase. Also strip leading and trailing whitespace, dashes, and underscores. """ value = str(value) if allow_unicode: value = unicodedata.normalize('NFKC', value) else: value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore').decode('ascii') value = re.sub(r'[^\w\s-]', '', value.lower()) return re.sub(r'[-\s]+', '-', value).strip('-_') And the older version: def slugify(value): """ Normalizes string, converts to lowercase, removes non-alpha characters, and converts spaces to hyphens. """ import unicodedata value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore') value = unicode(re.sub('[^\w\s-]', '', value).strip().lower()) value = unicode(re.sub('[-\s]+', '-', value)) # ... return value There's more, but I left it out, since it doesn't address slugification, but escaping. A: You can use list comprehension together with the string methods. >>> s 'foo-bar#baz?qux@127/\\9]' >>> "".join(x for x in s if x.isalnum()) 'foobarbazqux1279' A: This whitelist approach (ie, allowing only the chars present in valid_chars) will work if there aren't limits on the formatting of the files or combination of valid chars that are illegal (like ".."), for example, what you say would allow a filename named " . txt" which I think is not valid on Windows. As this is the most simple approach I'd try to remove whitespace from the valid_chars and prepend a known valid string in case of error, any other approach will have to know about what is allowed where to cope with Windows file naming limitations and thus be a lot more complex. >>> import string >>> valid_chars = "-_.() %s%s" % (string.ascii_letters, string.digits) >>> valid_chars '-_.() abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789' >>> filename = "This Is a (valid) - filename%$&$ .txt" >>> ''.join(c for c in filename if c in valid_chars) 'This Is a (valid) - filename .txt' A: What is the reason to use the strings as file names? If human readability is not a factor I would go with base64 module which can produce file system safe strings. It won't be readable but you won't have to deal with collisions and it is reversible. import base64 file_name_string = base64.urlsafe_b64encode(your_string) Update: Changed based on Matthew comment. 
A: There is a nice project on Github called python-slugify: Install: pip install python-slugify Then use: >>> from slugify import slugify >>> txt = "This\ is/ a%#$ test ---" >>> slugify(txt) 'this-is-a-test' A: Just to further complicate things, you are not guaranteed to get a valid filename just by removing invalid characters. Since allowed characters differ on different filenames, a conservative approach could end up turning a valid name into an invalid one. You may want to add special handling for the cases where: The string is all invalid characters (leaving you with an empty string) You end up with a string with a special meaning, eg "." or ".." On windows, certain device names are reserved. For instance, you can't create a file named "nul", "nul.txt" (or nul.anything in fact) The reserved names are: CON, PRN, AUX, NUL, COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, and LPT9 You can probably work around these issues by prepending some string to the filenames that can never result in one of these cases, and stripping invalid characters. A: Just like S.Lott answered, you can look at the Django Framework for how they convert a string to a valid filename. The most recent and updated version is found in utils/text.py, and defines "get_valid_filename", which is as follows: def get_valid_filename(s): s = str(s).strip().replace(' ', '_') return re.sub(r'(?u)[^-\w.]', '', s) ( See https://github.com/django/django/blob/master/django/utils/text.py ) A: This is the solution I ultimately used: import unicodedata validFilenameChars = "-_.() %s%s" % (string.ascii_letters, string.digits) def removeDisallowedFilenameChars(filename): cleanedFilename = unicodedata.normalize('NFKD', filename).encode('ASCII', 'ignore') return ''.join(c for c in cleanedFilename if c in validFilenameChars) The unicodedata.normalize call replaces accented characters with the unaccented equivalent, which is better than simply stripping them out. After that all disallowed characters are removed. My solution doesn't prepend a known string to avoid possible disallowed filenames, because I know they can't occur given my particular filename format. A more general solution would need to do so. A: In one line: valid_file_name = re.sub('[^\w_.)( -]', '', any_string) you can also put '_' character to make it more readable (in case of replacing slashs, for example) A: Keep in mind, there are actually no restrictions on filenames on Unix systems other than It may not contain \0 It may not contain / Everything else is fair game. $ touch " > even multiline > haha > ^[[31m red ^[[0m > evil" $ ls -la -rw-r--r-- 0 Nov 17 23:39 ?even multiline?haha??[31m red ?[0m?evil $ ls -lab -rw-r--r-- 0 Nov 17 23:39 \neven\ multiline\nhaha\n\033[31m\ red\ \033[0m\nevil $ perl -e 'for my $i ( glob(q{./*even*}) ){ print $i; } ' ./ even multiline haha red evil Yes, i just stored ANSI Colour Codes in a file name and had them take effect. For entertainment, put a BEL character in a directory name and watch the fun that ensues when you CD into it ;) A: You could use the re.sub() method to replace anything not "filelike". But in effect, every character could be valid; so there are no prebuilt functions (I believe), to get it done. import re str = "File!name?.txt" f = open(os.path.join("/tmp", re.sub('[^-a-zA-Z0-9_.() ]+', '', str)) Would result in a filehandle to /tmp/filename.txt. 
A: >>> import string >>> safechars = bytearray(('_-.()' + string.digits + string.ascii_letters).encode()) >>> allchars = bytearray(range(0x100)) >>> deletechars = bytearray(set(allchars) - set(safechars)) >>> filename = u'#ab\xa0c.$%.txt' >>> safe_filename = filename.encode('ascii', 'ignore').translate(None, deletechars).decode() >>> safe_filename 'abc..txt' It doesn't handle empty strings, special filenames ('nul', 'con', etc). A: Why not just wrap the "osopen" with a try/except and let the underlying OS sort out whether the file is valid? This seems like much less work and is valid no matter which OS you use. A: Another issue that the other comments haven't addressed yet is the empty string, which is obviously not a valid filename. You can also end up with an empty string from stripping too many characters. What with the Windows reserved filenames and issues with dots, the safest answer to the question “how do I normalise a valid filename from arbitrary user input?” is “don't even bother try”: if you can find any other way to avoid it (eg. using integer primary keys from a database as filenames), do that. If you must, and you really need to allow spaces and ‘.’ for file extensions as part of the name, try something like: import re badchars= re.compile(r'[^A-Za-z0-9_. ]+|^\.|\.$|^ | $|^$') badnames= re.compile(r'(aux|com[1-9]|con|lpt[1-9]|prn)(\.|$)') def makeName(s): name= badchars.sub('_', s) if badnames.match(name): name= '_'+name return name Even this can't be guaranteed right especially on unexpected OSs — for example RISC OS hates spaces and uses ‘.’ as a directory separator. A: Though you have to be careful. It is not clearly said in your intro, if you are looking only at latine language. Some words can become meaningless or another meaning if you sanitize them with ascii characters only. imagine you have "forêt poésie" (forest poetry), your sanitization might give "fort-posie" (strong + something meaningless) Worse if you have to deal with chinese characters. "下北沢" your system might end up doing "---" which is doomed to fail after a while and not very helpful. So if you deal with only files I would encourage to either call them a generic chain that you control or to keep the characters as it is. For URIs, about the same. A: I realise there are many answers but they mostly rely on regular expressions or external modules, so I'd like to throw in my own answer. A pure python function, no external module needed, no regular expression used. My approach is not to clean invalid chars, but to only allow valid ones. def normalizefilename(fn): validchars = "-_.() " out = "" for c in fn: if str.isalpha(c) or str.isdigit(c) or (c in validchars): out += c else: out += "_" return out if you like, you can add your own valid chars to the validchars variable at the beginning, such as your national letters that don't exist in English alphabet. This is something you may or may not want: some file systems that don't run on UTF-8 might still have problems with non-ASCII chars. This function is to test for a single file name validity, so it will replace path separators with _ considering them invalid chars. If you want to add that, it is trivial to modify the if to include os path separator. 
A: If you don't mind installing a package, this should be useful: https://pypi.org/project/pathvalidate/ From https://pypi.org/project/pathvalidate/#sanitize-a-filename: from pathvalidate import sanitize_filename fname = "fi:l*e/p\"a?t>h|.t<xt" print(f"{fname} -> {sanitize_filename(fname)}\n") fname = "\0_a*b:c<d>e%f/(g)h+i_0.txt" print(f"{fname} -> {sanitize_filename(fname)}\n") Output fi:l*e/p"a?t>h|.t<xt -> filepath.txt _a*b:c<d>e%f/(g)h+i_0.txt -> _abcde%f(g)h+i_0.txt A: I liked the python-slugify approach here but it was stripping dots also away which was not desired. So I optimized it for uploading a clean filename to s3 this way: pip install python-slugify Example code: s = 'Very / Unsafe / file\nname hähä \n\r .txt' clean_basename = slugify(os.path.splitext(s)[0]) clean_extension = slugify(os.path.splitext(s)[1][1:]) if clean_extension: clean_filename = '{}.{}'.format(clean_basename, clean_extension) elif clean_basename: clean_filename = clean_basename else: clean_filename = 'none' # only unclean characters Output: >>> clean_filename 'very-unsafe-file-name-haha.txt' This is so failsafe, it works with filenames without extension and it even works for only unsafe characters file names (result is none here). A: Answer modified for python 3.6 import string import unicodedata validFilenameChars = "-_.() %s%s" % (string.ascii_letters, string.digits) def removeDisallowedFilenameChars(filename): cleanedFilename = unicodedata.normalize('NFKD', filename).encode('ASCII', 'ignore') return ''.join(chr(c) for c in cleanedFilename if chr(c) in validFilenameChars) A: Not exactly what OP was asking for but this is what I use because I need unique and reversible conversions: # p3 code def safePath (url): return ''.join(map(lambda ch: chr(ch) if ch in safePath.chars else '%%%02x' % ch, url.encode('utf-8'))) safePath.chars = set(map(lambda x: ord(x), '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz+-_ .')) Result is "somewhat" readable, at least from a sysadmin point of view. A: When confronted with the same problem I used python-slugify. Usage was also suggested by Shoham but, as therealmarv pointed out, by default python-slugify also converts dots. This behaviour can be overruled by including dots into the regex_pattern argument. > filename = "This is a väryì' Strange File-Nömé.jpeg" > pattern = re.compile(r'[^-a-zA-Z0-9.]+') > slugify(filename,regex_pattern=pattern) 'this-is-a-varyi-strange-file-nome.jpeg' Note that the regex pattern was copied from the ALLOWED_CHARS_PATTERN_WITH_UPPERCASE global variable within the slugify.py file of the python-slugify package and extended with "." Keep in mind that special characters like .() must be escaped with \. If you want to preserve uppercase letters use the lowercase=False argument. > filename = "This is a väryì' Strange File-Nömé.jpeg" > pattern = re.compile(r'[^-a-zA-Z0-9.]+') > slugify(filename,regex_pattern=pattern, lowercase=False) 'This-is-a-varyi-Strange-File-Nome.jpeg' This worked using Python 3.8.4 and python-slugify 4.0.1 A: Most of these solutions don't work. '/hello/world' -> 'helloworld' '/helloworld'/ -> 'helloworld' This isn't what you want generally, say you are saving the html for each link, you're going to overwrite the html for a different webpage. I pickle a dict such as: {'helloworld': ( {'/hello/world': 'helloworld', '/helloworld/': 'helloworld1'}, 2) } 2 represents the number that should be appended to the next filename. I look up the filename each time from the dict. 
If it's not there, I create a new one, appending the max number if needed. A: Yet another answer for Windows specific paths, using simple replacement and no funky modules: import re def check_for_illegal_char(input_str): # remove illegal characters for Windows file names/paths # (illegal filenames are a superset (41) of the illegal path names (36)) # this is according to windows blacklist obtained with Powershell # from: https://stackoverflow.com/questions/1976007/what-characters-are-forbidden-in-windows-and-linux-directory-names/44750843#44750843 # # PS> $enc = [system.Text.Encoding]::UTF8 # PS> $FileNameInvalidChars = [System.IO.Path]::GetInvalidFileNameChars() # PS> $FileNameInvalidChars | foreach { $enc.GetBytes($_) } | Out-File -FilePath InvalidFileCharCodes.txt illegal = '\u0022\u003c\u003e\u007c\u0000\u0001\u0002\u0003\u0004\u0005\u0006\u0007\u0008' + \ '\u0009\u000a\u000b\u000c\u000d\u000e\u000f\u0010\u0011\u0012\u0013\u0014\u0015' + \ '\u0016\u0017\u0018\u0019\u001a\u001b\u001c\u001d\u001e\u001f\u003a\u002a\u003f\u005c\u002f' output_str, _ = re.subn('['+illegal+']','_', input_str) output_str = output_str.replace('\\','_') # backslash cannot be handled by regex output_str = output_str.replace('..','_') # double dots are illegal too, or at least a bad idea output_str = output_str[:-1] if output_str[-1] == '.' else output_str # can't have end of line '.' if output_str != input_str: print(f"The name '{input_str}' had invalid characters, " f"name was modified to '{output_str}'") return output_str When tested with check_for_illegal_char('fas\u0003\u0004good\\..asd.'), I get: The name 'fas♥♦good\..asd.' had invalid characters, name was modified to 'fas__good__asd' A: UPDATE All links broken beyond repair in this 6 year old answer. Also, I also wouldn't do it this way anymore, just base64 encode or drop unsafe chars. Python 3 example: import re t = re.compile("[a-zA-Z0-9.,_-]") unsafe = "abc∂éåß®∆˚˙©¬ñ√ƒµ©∆∫ø" safe = [ch for ch in unsafe if t.match(ch)] # => 'abc' With base64 you can encode and decode, so you can retrieve the original filename again. But depending on the use case you might be better off generating a random filename and storing the metadata in separate file or DB. from random import choice from string import ascii_lowercase, ascii_uppercase, digits allowed_chr = ascii_lowercase + ascii_uppercase + digits safe = ''.join([choice(allowed_chr) for _ in range(16)]) # => 'CYQ4JDKE9JfcRzAZ' ORIGINAL LINKROTTEN ANSWER: The bobcat project contains a python module that does just this. It's not completely robust, see this post and this reply. So, as noted: base64 encoding is probably a better idea if readability doesn't matter. Docs https://svn.origo.ethz.ch/bobcat/src-doc/safefilename-module.html Source https://svn.origo.ethz.ch/bobcat/trunk/src/bobcatlib/safefilename.py A: I'm sure this isn't a great answer, since it modifies the string it's looping over, but it seems to work alright: import string for chr in your_string: if chr == ' ': your_string = your_string.replace(' ', '_') elif chr not in string.ascii_letters or chr not in string.digits: your_string = your_string.replace(chr, '') A: Here, this should cover all the bases. It handles all types of issues for you, including (but not limited too) character substitution. Works in Windows, *nix, and almost every other file system. Allows printable characters only. def txt2filename(txt, chr_set='normal'): """Converts txt to a valid Windows/*nix filename with printable characters only. args: txt: The str to convert. 
chr_set: 'normal', 'universal', or 'inclusive'. 'universal': ' -.0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz' 'normal': Every printable character exept those disallowed on Windows/*nix. 'extended': All 'normal' characters plus the extended character ASCII codes 128-255 """ FILLER = '-' # Step 1: Remove excluded characters. if chr_set == 'universal': # Lookups in a set are O(n) vs O(n * x) for a str. printables = set(' -.0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz') else: if chr_set == 'normal': max_chr = 127 elif chr_set == 'extended': max_chr = 256 else: raise ValueError(f'The chr_set argument may be normal, extended or universal; not {chr_set=}') EXCLUDED_CHRS = set(r'<>:"/\|?*') # Illegal characters in Windows filenames. EXCLUDED_CHRS.update(chr(127)) # DEL (non-printable). printables = set(chr(x) for x in range(32, max_chr) if chr(x) not in EXCLUDED_CHRS) result = ''.join(x if x in printables else FILLER # Allow printable characters only. for x in txt) # Step 2: Device names, '.', and '..' are invalid filenames in Windows. DEVICE_NAMES = 'CON,PRN,AUX,NUL,COM1,COM2,COM3,COM4,' \ 'COM5,COM6,COM7,COM8,COM9,LPT1,LPT2,' \ 'LPT3,LPT4,LPT5,LPT6,LPT7,LPT8,LPT9,' \ 'CONIN$,CONOUT$,..,.'.split() # This list is an O(n) operation. if result in DEVICE_NAMES: result = f'-{result}-' # Step 3: Maximum length of filename is 255 bytes in Windows and Linux (other *nix flavors may allow longer names). result = result[:255] # Step 4: Windows does not allow filenames to end with '.' or ' ' or begin with ' '. result = re.sub(r'^[. ]', FILLER, result) result = re.sub(r' $', FILLER, result) return result This solution needs no external libraries. It substitutes non-printable filenames too because they are not always simple to deal with. A: Still haven't found a good library to generate a valid filename. Note that in languages like German, Norwegian or French special characters in filenames are very common and totally OK. So I ended up with my own library: # util/files.py CHAR_MAX_LEN = 31 CHAR_REPLACE = '_' ILLEGAL_CHARS = [ '#', # pound '%', # percent '&', # ampersand '{', # left curly bracket '}', # right curly bracket '\\', # back slash '<', # left angle bracket '>', # right angle bracket '*', # asterisk '?', # question mark '/', # forward slash ' ', # blank spaces '$', # dollar sign '!', # exclamation point "'", # single quotes '"', # double quotes ':', # colon '@', # at sign '+', # plus sign '`', # backtick '|', # pipe '=', # equal sign ] def generate_filename( name, char_replace=CHAR_REPLACE, length=CHAR_MAX_LEN, illegal=ILLEGAL_CHARS, replace_dot=False): ''' return clean filename ''' # init _elem = name.split('.') extension = _elem[-1].strip() _length = length - len(extension) - 1 label = '.'.join(_elem[:-1]).strip()[:_length] filename = '' # replace '.' ? if replace_dot: label = label.replace('.', char_replace) # clean for char in label + '.' + extension: if char in illegal: char = char_replace filename += char return filename generate_filename('nucgae zutaäer..0.1.docx', replace_dot=False) nucgae_zutaäer..0.1.docx generate_filename('nucgae zutaäer..0.1.docx', replace_dot=True) nucgae_zutaäer__0_1.docx
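One of the answers above suggests simply wrapping os.open in try/except and letting the OS decide whether the name is valid, but gives no code; a minimal sketch of that idea (the helper name and fallback value are illustrative, not from the original answers):

import os

def try_create(directory, candidate_name, fallback="untitled"):
    """Try the candidate name first, then a known-safe fallback if the OS rejects it."""
    for name in (candidate_name, fallback):
        path = os.path.join(directory, name)
        try:
            # O_EXCL also guards against silently overwriting an existing file.
            fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            return os.fdopen(fd, "w"), name
        except OSError:
            continue
    raise OSError("could not create a file for %r" % candidate_name)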
Turn a string into a valid filename?
I have a string that I want to use as a filename, so I want to remove all characters that wouldn't be allowed in filenames, using Python. I'd rather be strict than otherwise, so let's say I want to retain only letters, digits, and a small set of other characters like "_-.() ". What's the most elegant solution? The filename needs to be valid on multiple operating systems (Windows, Linux and Mac OS) - it's an MP3 file in my library with the song title as the filename, and is shared and backed up between 3 machines.
[ "You can look at the Django framework for how they create a \"slug\" from arbitrary text. A slug is URL- and filename- friendly.\nThe Django text utils define a function, slugify(), that's probably the gold standard for this kind of thing. Essentially, their code is the following.\nimport unicodedata\nimport re\n\ndef slugify(value, allow_unicode=False):\n \"\"\"\n Taken from https://github.com/django/django/blob/master/django/utils/text.py\n Convert to ASCII if 'allow_unicode' is False. Convert spaces or repeated\n dashes to single dashes. Remove characters that aren't alphanumerics,\n underscores, or hyphens. Convert to lowercase. Also strip leading and\n trailing whitespace, dashes, and underscores.\n \"\"\"\n value = str(value)\n if allow_unicode:\n value = unicodedata.normalize('NFKC', value)\n else:\n value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore').decode('ascii')\n value = re.sub(r'[^\\w\\s-]', '', value.lower())\n return re.sub(r'[-\\s]+', '-', value).strip('-_')\n\nAnd the older version:\ndef slugify(value):\n \"\"\"\n Normalizes string, converts to lowercase, removes non-alpha characters,\n and converts spaces to hyphens.\n \"\"\"\n import unicodedata\n value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore')\n value = unicode(re.sub('[^\\w\\s-]', '', value).strip().lower())\n value = unicode(re.sub('[-\\s]+', '-', value))\n # ...\n return value\n\nThere's more, but I left it out, since it doesn't address slugification, but escaping.\n", "You can use list comprehension together with the string methods.\n>>> s\n'foo-bar#baz?qux@127/\\\\9]'\n>>> \"\".join(x for x in s if x.isalnum())\n'foobarbazqux1279'\n\n", "This whitelist approach (ie, allowing only the chars present in valid_chars) will work if there aren't limits on the formatting of the files or combination of valid chars that are illegal (like \"..\"), for example, what you say would allow a filename named \" . txt\" which I think is not valid on Windows. As this is the most simple approach I'd try to remove whitespace from the valid_chars and prepend a known valid string in case of error, any other approach will have to know about what is allowed where to cope with Windows file naming limitations and thus be a lot more complex. \n>>> import string\n>>> valid_chars = \"-_.() %s%s\" % (string.ascii_letters, string.digits)\n>>> valid_chars\n'-_.() abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'\n>>> filename = \"This Is a (valid) - filename%$&$ .txt\"\n>>> ''.join(c for c in filename if c in valid_chars)\n'This Is a (valid) - filename .txt'\n\n", "What is the reason to use the strings as file names? If human readability is not a factor I would go with base64 module which can produce file system safe strings. It won't be readable but you won't have to deal with collisions and it is reversible.\nimport base64\nfile_name_string = base64.urlsafe_b64encode(your_string)\n\nUpdate: Changed based on Matthew comment.\n", "There is a nice project on Github called python-slugify: \nInstall:\npip install python-slugify\n\nThen use:\n>>> from slugify import slugify\n>>> txt = \"This\\ is/ a%#$ test ---\"\n>>> slugify(txt)\n'this-is-a-test'\n\n", "Just to further complicate things, you are not guaranteed to get a valid filename just by removing invalid characters. Since allowed characters differ on different filenames, a conservative approach could end up turning a valid name into an invalid one. 
You may want to add special handling for the cases where:\n\nThe string is all invalid characters (leaving you with an empty string)\nYou end up with a string with a special meaning, eg \".\" or \"..\"\nOn windows, certain device names are reserved. For instance, you can't create a file named \"nul\", \"nul.txt\" (or nul.anything in fact) The reserved names are:\nCON, PRN, AUX, NUL, COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, and LPT9\n\nYou can probably work around these issues by prepending some string to the filenames that can never result in one of these cases, and stripping invalid characters.\n", "Just like S.Lott answered, you can look at the Django Framework for how they convert a string to a valid filename. \nThe most recent and updated version is found in utils/text.py, and defines \"get_valid_filename\", which is as follows:\ndef get_valid_filename(s):\n s = str(s).strip().replace(' ', '_')\n return re.sub(r'(?u)[^-\\w.]', '', s)\n\n( See https://github.com/django/django/blob/master/django/utils/text.py )\n", "This is the solution I ultimately used:\nimport unicodedata\n\nvalidFilenameChars = \"-_.() %s%s\" % (string.ascii_letters, string.digits)\n\ndef removeDisallowedFilenameChars(filename):\n cleanedFilename = unicodedata.normalize('NFKD', filename).encode('ASCII', 'ignore')\n return ''.join(c for c in cleanedFilename if c in validFilenameChars)\n\nThe unicodedata.normalize call replaces accented characters with the unaccented equivalent, which is better than simply stripping them out. After that all disallowed characters are removed.\nMy solution doesn't prepend a known string to avoid possible disallowed filenames, because I know they can't occur given my particular filename format. A more general solution would need to do so.\n", "In one line:\nvalid_file_name = re.sub('[^\\w_.)( -]', '', any_string)\n\nyou can also put '_' character to make it more readable (in case of replacing slashs, for example)\n", "Keep in mind, there are actually no restrictions on filenames on Unix systems other than \n\nIt may not contain \\0 \nIt may not contain /\n\nEverything else is fair game. \n\n$ touch \"\n> even multiline\n> haha\n> ^[[31m red ^[[0m\n> evil\"\n$ ls -la \n-rw-r--r-- 0 Nov 17 23:39 ?even multiline?haha??[31m red ?[0m?evil\n$ ls -lab\n-rw-r--r-- 0 Nov 17 23:39 \\neven\\ multiline\\nhaha\\n\\033[31m\\ red\\ \\033[0m\\nevil\n$ perl -e 'for my $i ( glob(q{./*even*}) ){ print $i; } '\n./\neven multiline\nhaha\n red \nevil\n\nYes, i just stored ANSI Colour Codes in a file name and had them take effect. \nFor entertainment, put a BEL character in a directory name and watch the fun that ensues when you CD into it ;) \n", "You could use the re.sub() method to replace anything not \"filelike\". 
But in effect, every character could be valid; so there are no prebuilt functions (I believe), to get it done.\nimport re\n\nstr = \"File!name?.txt\"\nf = open(os.path.join(\"/tmp\", re.sub('[^-a-zA-Z0-9_.() ]+', '', str))\n\nWould result in a filehandle to /tmp/filename.txt.\n", ">>> import string\n>>> safechars = bytearray(('_-.()' + string.digits + string.ascii_letters).encode())\n>>> allchars = bytearray(range(0x100))\n>>> deletechars = bytearray(set(allchars) - set(safechars))\n>>> filename = u'#ab\\xa0c.$%.txt'\n>>> safe_filename = filename.encode('ascii', 'ignore').translate(None, deletechars).decode()\n>>> safe_filename\n'abc..txt'\n\nIt doesn't handle empty strings, special filenames ('nul', 'con', etc).\n", "Why not just wrap the \"osopen\" with a try/except and let the underlying OS sort out whether the file is valid?\nThis seems like much less work and is valid no matter which OS you use.\n", "Another issue that the other comments haven't addressed yet is the empty string, which is obviously not a valid filename. You can also end up with an empty string from stripping too many characters.\nWhat with the Windows reserved filenames and issues with dots, the safest answer to the question “how do I normalise a valid filename from arbitrary user input?” is “don't even bother try”: if you can find any other way to avoid it (eg. using integer primary keys from a database as filenames), do that.\nIf you must, and you really need to allow spaces and ‘.’ for file extensions as part of the name, try something like:\nimport re\nbadchars= re.compile(r'[^A-Za-z0-9_. ]+|^\\.|\\.$|^ | $|^$')\nbadnames= re.compile(r'(aux|com[1-9]|con|lpt[1-9]|prn)(\\.|$)')\n\ndef makeName(s):\n name= badchars.sub('_', s)\n if badnames.match(name):\n name= '_'+name\n return name\n\nEven this can't be guaranteed right especially on unexpected OSs — for example RISC OS hates spaces and uses ‘.’ as a directory separator.\n", "Though you have to be careful. It is not clearly said in your intro, if you are looking only at latine language. Some words can become meaningless or another meaning if you sanitize them with ascii characters only.\nimagine you have \"forêt poésie\" (forest poetry), your sanitization might give \"fort-posie\" (strong + something meaningless)\nWorse if you have to deal with chinese characters.\n\"下北沢\" your system might end up doing \"---\" which is doomed to fail after a while and not very helpful. So if you deal with only files I would encourage to either call them a generic chain that you control or to keep the characters as it is. For URIs, about the same.\n", "I realise there are many answers but they mostly rely on regular expressions or external modules, so I'd like to throw in my own answer. A pure python function, no external module needed, no regular expression used. My approach is not to clean invalid chars, but to only allow valid ones.\ndef normalizefilename(fn):\n validchars = \"-_.() \"\n out = \"\"\n for c in fn:\n if str.isalpha(c) or str.isdigit(c) or (c in validchars):\n out += c\n else:\n out += \"_\"\n return out \n\nif you like, you can add your own valid chars to the validchars variable at the beginning, such as your national letters that don't exist in English alphabet. This is something you may or may not want: some file systems that don't run on UTF-8 might still have problems with non-ASCII chars.\nThis function is to test for a single file name validity, so it will replace path separators with _ considering them invalid chars. 
If you want to add that, it is trivial to modify the if to include os path separator.\n", "If you don't mind installing a package, this should be useful:\nhttps://pypi.org/project/pathvalidate/\nFrom https://pypi.org/project/pathvalidate/#sanitize-a-filename:\n\nfrom pathvalidate import sanitize_filename\n\nfname = \"fi:l*e/p\\\"a?t>h|.t<xt\"\nprint(f\"{fname} -> {sanitize_filename(fname)}\\n\")\nfname = \"\\0_a*b:c<d>e%f/(g)h+i_0.txt\"\nprint(f\"{fname} -> {sanitize_filename(fname)}\\n\")\n\nOutput\nfi:l*e/p\"a?t>h|.t<xt -> filepath.txt\n_a*b:c<d>e%f/(g)h+i_0.txt -> _abcde%f(g)h+i_0.txt\n\n\n", "I liked the python-slugify approach here but it was stripping dots also away which was not desired. So I optimized it for uploading a clean filename to s3 this way:\npip install python-slugify\n\nExample code:\ns = 'Very / Unsafe / file\\nname hähä \\n\\r .txt'\nclean_basename = slugify(os.path.splitext(s)[0])\nclean_extension = slugify(os.path.splitext(s)[1][1:])\nif clean_extension:\n clean_filename = '{}.{}'.format(clean_basename, clean_extension)\nelif clean_basename:\n clean_filename = clean_basename\nelse:\n clean_filename = 'none' # only unclean characters\n\nOutput:\n>>> clean_filename\n'very-unsafe-file-name-haha.txt'\n\nThis is so failsafe, it works with filenames without extension and it even works for only unsafe characters file names (result is none here).\n", "Answer modified for python 3.6\nimport string\nimport unicodedata\n\nvalidFilenameChars = \"-_.() %s%s\" % (string.ascii_letters, string.digits)\ndef removeDisallowedFilenameChars(filename):\n cleanedFilename = unicodedata.normalize('NFKD', filename).encode('ASCII', 'ignore')\n return ''.join(chr(c) for c in cleanedFilename if chr(c) in validFilenameChars)\n\n", "Not exactly what OP was asking for but this is what I use because I need unique and reversible conversions:\n# p3 code\ndef safePath (url):\n return ''.join(map(lambda ch: chr(ch) if ch in safePath.chars else '%%%02x' % ch, url.encode('utf-8')))\nsafePath.chars = set(map(lambda x: ord(x), '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz+-_ .'))\n\nResult is \"somewhat\" readable, at least from a sysadmin point of view.\n", "When confronted with the same problem I used python-slugify.\nUsage was also suggested by Shoham but, as therealmarv pointed out, by default python-slugify also converts dots.\nThis behaviour can be overruled by including dots into the regex_pattern argument.\n> filename = \"This is a väryì' Strange File-Nömé.jpeg\"\n> pattern = re.compile(r'[^-a-zA-Z0-9.]+')\n> slugify(filename,regex_pattern=pattern) \n'this-is-a-varyi-strange-file-nome.jpeg'\n\nNote that the regex pattern was copied from the\nALLOWED_CHARS_PATTERN_WITH_UPPERCASE\nglobal variable within the slugify.py file of the python-slugify package and extended with \".\"\nKeep in mind that special characters like .() must be escaped with \\.\nIf you want to preserve uppercase letters use the lowercase=False argument.\n> filename = \"This is a väryì' Strange File-Nömé.jpeg\"\n> pattern = re.compile(r'[^-a-zA-Z0-9.]+')\n> slugify(filename,regex_pattern=pattern, lowercase=False) \n'This-is-a-varyi-Strange-File-Nome.jpeg'\n\nThis worked using Python 3.8.4 and python-slugify 4.0.1\n", "Most of these solutions don't work.\n'/hello/world' -> 'helloworld'\n'/helloworld'/ -> 'helloworld'\nThis isn't what you want generally, say you are saving the html for each link, you're going to overwrite the html for a different webpage.\nI pickle a dict such as:\n{'helloworld': \n (\n 
{'/hello/world': 'helloworld', '/helloworld/': 'helloworld1'},\n 2)\n }\n\n2 represents the number that should be appended to the next filename.\nI look up the filename each time from the dict. If it's not there, I create a new one, appending the max number if needed.\n", "Yet another answer for Windows specific paths, using simple replacement and no funky modules:\nimport re\n\ndef check_for_illegal_char(input_str):\n # remove illegal characters for Windows file names/paths \n # (illegal filenames are a superset (41) of the illegal path names (36))\n # this is according to windows blacklist obtained with Powershell\n # from: https://stackoverflow.com/questions/1976007/what-characters-are-forbidden-in-windows-and-linux-directory-names/44750843#44750843\n #\n # PS> $enc = [system.Text.Encoding]::UTF8\n # PS> $FileNameInvalidChars = [System.IO.Path]::GetInvalidFileNameChars()\n # PS> $FileNameInvalidChars | foreach { $enc.GetBytes($_) } | Out-File -FilePath InvalidFileCharCodes.txt\n\n illegal = '\\u0022\\u003c\\u003e\\u007c\\u0000\\u0001\\u0002\\u0003\\u0004\\u0005\\u0006\\u0007\\u0008' + \\\n '\\u0009\\u000a\\u000b\\u000c\\u000d\\u000e\\u000f\\u0010\\u0011\\u0012\\u0013\\u0014\\u0015' + \\\n '\\u0016\\u0017\\u0018\\u0019\\u001a\\u001b\\u001c\\u001d\\u001e\\u001f\\u003a\\u002a\\u003f\\u005c\\u002f' \n\n output_str, _ = re.subn('['+illegal+']','_', input_str)\n output_str = output_str.replace('\\\\','_') # backslash cannot be handled by regex\n output_str = output_str.replace('..','_') # double dots are illegal too, or at least a bad idea \n output_str = output_str[:-1] if output_str[-1] == '.' else output_str # can't have end of line '.'\n\n if output_str != input_str:\n print(f\"The name '{input_str}' had invalid characters, \"\n f\"name was modified to '{output_str}'\")\n\n return output_str\n\nWhen tested with check_for_illegal_char('fas\\u0003\\u0004good\\\\..asd.'), I get:\nThe name 'fas♥♦good\\..asd.' had invalid characters, name was modified to 'fas__good__asd'\n\n", "UPDATE\nAll links broken beyond repair in this 6 year old answer.\nAlso, I also wouldn't do it this way anymore, just base64 encode or drop unsafe chars. Python 3 example:\nimport re\nt = re.compile(\"[a-zA-Z0-9.,_-]\")\nunsafe = \"abc∂éåß®∆˚˙©¬ñ√ƒµ©∆∫ø\"\nsafe = [ch for ch in unsafe if t.match(ch)]\n# => 'abc'\n\nWith base64 you can encode and decode, so you can retrieve the original filename again.\nBut depending on the use case you might be better off generating a random filename and storing the metadata in separate file or DB.\nfrom random import choice\nfrom string import ascii_lowercase, ascii_uppercase, digits\nallowed_chr = ascii_lowercase + ascii_uppercase + digits\n\nsafe = ''.join([choice(allowed_chr) for _ in range(16)])\n# => 'CYQ4JDKE9JfcRzAZ'\n\nORIGINAL LINKROTTEN ANSWER:\nThe bobcat project contains a python module that does just this.\nIt's not completely robust, see this post and this reply.\nSo, as noted: base64 encoding is probably a better idea if readability doesn't matter.\n\nDocs https://svn.origo.ethz.ch/bobcat/src-doc/safefilename-module.html\nSource https://svn.origo.ethz.ch/bobcat/trunk/src/bobcatlib/safefilename.py\n\n", "I'm sure this isn't a great answer, since it modifies the string it's looping over, but it seems to work alright:\nimport string\nfor chr in your_string:\n if chr == ' ':\n your_string = your_string.replace(' ', '_')\n elif chr not in string.ascii_letters or chr not in string.digits:\n your_string = your_string.replace(chr, '')\n\n", "Here, this should cover all the bases. 
It handles all types of issues for you, including (but not limited too) character substitution.\nWorks in Windows, *nix, and almost every other file system. Allows printable characters only.\ndef txt2filename(txt, chr_set='normal'):\n \"\"\"Converts txt to a valid Windows/*nix filename with printable characters only.\n\n args:\n txt: The str to convert.\n chr_set: 'normal', 'universal', or 'inclusive'.\n 'universal': ' -.0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'\n 'normal': Every printable character exept those disallowed on Windows/*nix.\n 'extended': All 'normal' characters plus the extended character ASCII codes 128-255\n \"\"\"\n\n FILLER = '-'\n\n # Step 1: Remove excluded characters.\n if chr_set == 'universal':\n # Lookups in a set are O(n) vs O(n * x) for a str.\n printables = set(' -.0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz')\n else:\n if chr_set == 'normal':\n max_chr = 127\n elif chr_set == 'extended':\n max_chr = 256\n else:\n raise ValueError(f'The chr_set argument may be normal, extended or universal; not {chr_set=}')\n EXCLUDED_CHRS = set(r'<>:\"/\\|?*') # Illegal characters in Windows filenames.\n EXCLUDED_CHRS.update(chr(127)) # DEL (non-printable).\n printables = set(chr(x)\n for x in range(32, max_chr)\n if chr(x) not in EXCLUDED_CHRS)\n result = ''.join(x if x in printables else FILLER # Allow printable characters only.\n for x in txt)\n\n # Step 2: Device names, '.', and '..' are invalid filenames in Windows.\n DEVICE_NAMES = 'CON,PRN,AUX,NUL,COM1,COM2,COM3,COM4,' \\\n 'COM5,COM6,COM7,COM8,COM9,LPT1,LPT2,' \\\n 'LPT3,LPT4,LPT5,LPT6,LPT7,LPT8,LPT9,' \\\n 'CONIN$,CONOUT$,..,.'.split() # This list is an O(n) operation.\n if result in DEVICE_NAMES:\n result = f'-{result}-'\n\n # Step 3: Maximum length of filename is 255 bytes in Windows and Linux (other *nix flavors may allow longer names).\n result = result[:255]\n\n # Step 4: Windows does not allow filenames to end with '.' or ' ' or begin with ' '.\n result = re.sub(r'^[. ]', FILLER, result)\n result = re.sub(r' $', FILLER, result)\n\n return result\n\nThis solution needs no external libraries. It substitutes non-printable filenames too because they are not always simple to deal with.\n", "Still haven't found a good library to generate a valid filename. Note that in languages like German, Norwegian or French special characters in filenames are very common and totally OK. So I ended up with my own library:\n# util/files.py\n\nCHAR_MAX_LEN = 31\nCHAR_REPLACE = '_'\n\nILLEGAL_CHARS = [\n '#', # pound\n '%', # percent\n '&', # ampersand\n '{', # left curly bracket\n '}', # right curly bracket\n '\\\\', # back slash\n '<', # left angle bracket\n '>', # right angle bracket\n '*', # asterisk\n '?', # question mark\n '/', # forward slash\n ' ', # blank spaces\n '$', # dollar sign\n '!', # exclamation point\n \"'\", # single quotes\n '\"', # double quotes\n ':', # colon\n '@', # at sign\n '+', # plus sign\n '`', # backtick\n '|', # pipe\n '=', # equal sign\n]\n\n\ndef generate_filename(\n name, char_replace=CHAR_REPLACE, length=CHAR_MAX_LEN, \n illegal=ILLEGAL_CHARS, replace_dot=False):\n ''' return clean filename '''\n # init\n _elem = name.split('.')\n extension = _elem[-1].strip()\n _length = length - len(extension) - 1\n label = '.'.join(_elem[:-1]).strip()[:_length]\n filename = ''\n \n # replace '.' ?\n if replace_dot:\n label = label.replace('.', char_replace)\n \n # clean\n for char in label + '.' 
+ extension:\n if char in illegal:\n char = char_replace\n filename += char \n \n return filename\n\n\ngenerate_filename('nucgae zutaäer..0.1.docx', replace_dot=False)\nnucgae_zutaäer..0.1.docx\ngenerate_filename('nucgae zutaäer..0.1.docx', replace_dot=True)\nnucgae_zutaäer__0_1.docx\n" ]
[ 243, 157, 110, 108, 47, 46, 42, 20, 19, 15, 7, 6, 6, 6, 6, 6, 6, 5, 4, 2, 2, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "filenames", "python", "sanitize", "slug" ]
stackoverflow_0000295135_filenames_python_sanitize_slug.txt
Q: Dearpygui - build another window within a callback Hello my fellow programmers! In my endless search for a suitable GUI I found this wonderful module called dearpygui. After I started to learn more about how to build a GUI with it, I came to the point where I asked myself: "How can I build a window inside of a callback, is this even possible?" Maybe you awesome people can help me with my little question! Thanks in advance Freddie This is my code so far... Maybe someone gets it and can help me out! import dearpygui.dearpygui as dpg item_table = [] #builds the context window dpg.create_context() dpg.create_viewport(title="invengo", width=600, height=600) #outputs the data from the inputs and callbacks def reg(sender): print(dpg.get_value(sender)) item_table.append(dpg.get_value(sender)) def lel(sender): with dpg.window(tag="PW"): #builds the data inputs with dpg.window(tag="PW"): item_name = dpg.add_input_text(label="Gegenstand", hint="Hier den Namen des Gegenstandes eintragen...",callback=reg, on_enter=True) item_amount = dpg.add_combo(label="Menge", default_value=1, items=(1,2,3,"Mehrere"), callback=reg) check_button = dpg.add_button(label="CLICK ME", callback=lel) dpg.set_item_callback(item_name, reg) #debugging print(dpg.get_value(item_name)) #start the module dpg.setup_dearpygui() dpg.show_viewport() dpg.set_primary_window("PW", True) dpg.start_dearpygui() dpg.destroy_context() #debugging print(item_table) A: I was able to run your code just by changing the callback function lel to: def lel(sender): with dpg.window(): pass and could generate many child windows. You see that I removed the tag. Tags must be unique, but you used the same one as in your main window, which is not allowed. Also note the need of the pass dummy statement at the end of an empty with block.
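A small follow-up sketch, assuming dearpygui's generate_uuid helper: if each click should open its own child window, giving every window a fresh unique tag avoids the duplicate-tag problem described above. The label and text below are placeholders.

import dearpygui.dearpygui as dpg

def open_child(sender):
    # A fresh tag per call, so repeated clicks never collide with the "PW" primary window.
    with dpg.window(tag=dpg.generate_uuid(), label="New window", width=300, height=150):
        dpg.add_text("Created from a callback")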
Dearpygui - build another window within a callback
Hello my fellow programmers! In my endless search for a suitable GUI I found this wonderful module called dearpygui. After I started to learn more about how to build a GUI with it, I came to the point where I asked myself: "How can I build a window inside of a callback, is this even possible?" Maybe you awesome people can help me with my little question! Thanks in advance Freddie This is my code so far... Maybe someone gets it and can help me out! import dearpygui.dearpygui as dpg item_table = [] #builds the context window dpg.create_context() dpg.create_viewport(title="invengo", width=600, height=600) #outputs the data from the inputs and callbacks def reg(sender): print(dpg.get_value(sender)) item_table.append(dpg.get_value(sender)) def lel(sender): with dpg.window(tag="PW"): #builds the data inputs with dpg.window(tag="PW"): item_name = dpg.add_input_text(label="Gegenstand", hint="Hier den Namen des Gegenstandes eintragen...",callback=reg, on_enter=True) item_amount = dpg.add_combo(label="Menge", default_value=1, items=(1,2,3,"Mehrere"), callback=reg) check_button = dpg.add_button(label="CLICK ME", callback=lel) dpg.set_item_callback(item_name, reg) #debugging print(dpg.get_value(item_name)) #start the module dpg.setup_dearpygui() dpg.show_viewport() dpg.set_primary_window("PW", True) dpg.start_dearpygui() dpg.destroy_context() #debugging print(item_table)
[ "I was able to run your code just by changing the callback function lel to:\ndef lel(sender):\n with dpg.window():\n pass\n\nand could generate many child windows. You see that I removed the tag. Tags must be unique, but you used the same one as in your main window, which is not allowed. Also note the need of the pass dummy statement at the end of an empty with block.\n" ]
[ 0 ]
[]
[]
[ "dearpygui", "python" ]
stackoverflow_0074610549_dearpygui_python.txt
Q: Load S3 Data into AWS SageMaker Notebook I've just started to experiment with AWS SageMaker and would like to load data from an S3 bucket into a pandas dataframe in my SageMaker python jupyter notebook for analysis. I could use boto to grab the data from S3, but I'm wondering whether there is a more elegant method as part of the SageMaker framework to do this in my python code? Thanks in advance for any advice. A: import boto3 import pandas as pd from sagemaker import get_execution_role role = get_execution_role() bucket='my-bucket' data_key = 'train.csv' data_location = 's3://{}/{}'.format(bucket, data_key) pd.read_csv(data_location) A: In the simplest case you don't need boto3, because you just read resources. Then it's even simpler: import pandas as pd bucket='my-bucket' data_key = 'train.csv' data_location = 's3://{}/{}'.format(bucket, data_key) pd.read_csv(data_location) But as Prateek stated make sure to configure your SageMaker notebook instance to have access to s3. This is done at configuration step in Permissions > IAM role A: If you have a look here it seems you can specify this in the InputDataConfig. Search for "S3DataSource" (ref) in the document. The first hit is even in Python, on page 25/26. A: You could also access your bucket as your file system using s3fs import s3fs fs = s3fs.S3FileSystem() # To List 5 files in your accessible bucket fs.ls('s3://bucket-name/data/')[:5] # open it directly with fs.open(f's3://bucket-name/data/image.png') as f: display(Image.open(f)) A: Do make sure the Amazon SageMaker role has policy attached to it to have access to S3. It can be done in IAM. A: You can also use AWS Data Wrangler https://github.com/awslabs/aws-data-wrangler: import awswrangler as wr df = wr.s3.read_csv(path="s3://...") A: A similar answer with the f-string. import pandas as pd bucket = 'your-bucket-name' file = 'file.csv' df = pd.read_csv(f"s3://{bucket}/{file}") len(df) # print row counts A: This code sample to import csv file from S3, tested at SageMaker notebook. Use pip or conda to install s3fs. !pip install s3fs import pandas as pd my_bucket = '' #declare bucket name my_file = 'aa/bb.csv' #declare file path import boto3 # AWS Python SDK from sagemaker import get_execution_role role = get_execution_role() data_location = 's3://{}/{}'.format(my_bucket,my_file) data=pd.read_csv(data_location) data.head(2) A: There are multiple ways to read data into Sagemaker. To make the response more comprehensive i am adding details to read the data into Sagemaker Studio Notebook in memory as well as S3 mounting options. Though Notebooks are not recommend for data intensive modeling and are more used for prototyping based on my experience, there are multiple ways the data can be read into it. In Memory Based Options Boto3 S3FS Both Boto3 and S3FS can also be used in conjunction with python libraries like Pandas to read the data in memory as well as can also be used to copy the data to local instance EFS. Mount Options S3FS-Fuse (https://github.com/s3fs-fuse/s3fs-fuse) Goofy (https://github.com/kahing/goofys) These two options provide a mount like behaviour where the data appears to be in as if the local directory for higher IO operations. Both of these options have their pros and cons.
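To make the boto3 option above concrete, a short sketch; the bucket name, key and local path are placeholders:

import boto3
import pandas as pd

s3 = boto3.client("s3")
# Copy the object to the notebook instance's local disk, then read it with pandas.
s3.download_file("my-bucket", "data/train.csv", "/tmp/train.csv")
df = pd.read_csv("/tmp/train.csv")
print(df.shape)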
Load S3 Data into AWS SageMaker Notebook
I've just started to experiment with AWS SageMaker and would like to load data from an S3 bucket into a pandas dataframe in my SageMaker python jupyter notebook for analysis. I could use boto to grab the data from S3, but I'm wondering whether there is a more elegant method as part of the SageMaker framework to do this in my python code? Thanks in advance for any advice.
[ "import boto3\nimport pandas as pd\nfrom sagemaker import get_execution_role\n\nrole = get_execution_role()\nbucket='my-bucket'\ndata_key = 'train.csv'\ndata_location = 's3://{}/{}'.format(bucket, data_key)\n\npd.read_csv(data_location)\n\n", "In the simplest case you don't need boto3, because you just read resources.\nThen it's even simpler:\nimport pandas as pd\n\nbucket='my-bucket'\ndata_key = 'train.csv'\ndata_location = 's3://{}/{}'.format(bucket, data_key)\n\npd.read_csv(data_location)\n\nBut as Prateek stated make sure to configure your SageMaker notebook instance to have access to s3. This is done at configuration step in Permissions > IAM role\n", "If you have a look here it seems you can specify this in the InputDataConfig. Search for \"S3DataSource\" (ref) in the document. The first hit is even in Python, on page 25/26.\n", "You could also access your bucket as your file system using s3fs \nimport s3fs\nfs = s3fs.S3FileSystem()\n\n# To List 5 files in your accessible bucket\nfs.ls('s3://bucket-name/data/')[:5]\n\n# open it directly\nwith fs.open(f's3://bucket-name/data/image.png') as f:\n display(Image.open(f))\n\n", "Do make sure the Amazon SageMaker role has policy attached to it to have access to S3. It can be done in IAM.\n", "You can also use AWS Data Wrangler https://github.com/awslabs/aws-data-wrangler:\nimport awswrangler as wr\n\ndf = wr.s3.read_csv(path=\"s3://...\")\n\n", "A similar answer with the f-string.\nimport pandas as pd\nbucket = 'your-bucket-name'\nfile = 'file.csv'\ndf = pd.read_csv(f\"s3://{bucket}/{file}\")\nlen(df) # print row counts\n\n", "This code sample to import csv file from S3, tested at SageMaker notebook.\nUse pip or conda to install s3fs. !pip install s3fs\nimport pandas as pd\n\nmy_bucket = '' #declare bucket name\nmy_file = 'aa/bb.csv' #declare file path\n\nimport boto3 # AWS Python SDK\nfrom sagemaker import get_execution_role\nrole = get_execution_role()\n\ndata_location = 's3://{}/{}'.format(my_bucket,my_file)\ndata=pd.read_csv(data_location)\ndata.head(2)\n\n", "There are multiple ways to read data into Sagemaker. To make the response more comprehensive i am adding details to read the data into Sagemaker Studio Notebook in memory as well as S3 mounting options.\nThough Notebooks are not recommend for data intensive modeling and are more used for prototyping based on my experience, there are multiple ways the data can be read into it.\nIn Memory Based Options\n\nBoto3\nS3FS\n\nBoth Boto3 and S3FS can also be used in conjunction with python libraries like Pandas to read the data in memory as well as can also be used to\ncopy the data to local instance EFS.\nMount Options\n\nS3FS-Fuse (https://github.com/s3fs-fuse/s3fs-fuse)\nGoofy (https://github.com/kahing/goofys)\n\nThese two options provide a mount like behaviour where the data appears to be in as if the local directory for higher IO operations. Both of these options have their pros and cons.\n" ]
[ 57, 42, 11, 10, 5, 4, 2, 0, 0 ]
[]
[]
[ "amazon_s3", "amazon_sagemaker", "amazon_web_services", "machine_learning", "python" ]
stackoverflow_0048264656_amazon_s3_amazon_sagemaker_amazon_web_services_machine_learning_python.txt
Q: How to rearrange matrix elements vertically on python I'm trying to build a basic game-like program where I need to rearrange a given matrix but vertically. In this case, I only have 0s and 1s. 0 being lighter objects and 1 being heavier. When the function runs, all the 1s should fall down vertically and the zeros go up vertically as well. It needs to have the exact number of 0s and 1s as the original matrix. Example: -If I give the following matrix: [1,0,1,1,0,1,0], [0,0,0,1,0,0,0], [1,0,1,1,1,1,1], [0,1,1,0,1,1,0], [1,1,0,1,0,0,1] It should rearrange it to: [0,0,0,0,0,0,0], [0,0,0,1,0,0,0], [1,0,1,1,0,1,0], [1,1,1,1,1,1,1], [1,1,1,1,1,1,1] Any help or suggestions will be highly appreciated. A: Consider using numpy for your matrices. You can then use np.sort to do what you want: np.sort(matrix, axis=0) A: If you didn't want to use numpy (though you should), you could do: from collections import Counter test = [[1,0,1,1,0,1,0], [0,0,0,1,0,0,0], [1,0,1,1,1,1,1], [0,1,1,0,1,1,0], [1,1,0,1,0,0,1] ] new_version = [[] for _ in test] # create an empty list to append data to for count, item in enumerate(test[0]): # go through the length of one of the list of lists for their length # assuming that all lists are of equal length frequency = Counter([x[count] for x in test]) # get frequency count for the column for count_inside, item_inside in enumerate(test): # to add the values depending on their frequency distribution in the column value = 0 if 0 in frequency and count_inside < frequency[0] else 1 new_version[count_inside].append(value) print(new_version) A: Not as readable as the numpy approach, but if you want to use the list-approach you could Transpose the matrix by using the zip(*matrix) approach. Sort the resulting rows (which are columns of the original matrix) Transpose back. You can do it in one line: [row for row in zip(*[sorted(column) for column in zip(*matrix)])]
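A quick check of the numpy answer against the example matrix from the question, assuming the matrix is first converted to a numpy array:

import numpy as np

matrix = np.array([
    [1, 0, 1, 1, 0, 1, 0],
    [0, 0, 0, 1, 0, 0, 0],
    [1, 0, 1, 1, 1, 1, 1],
    [0, 1, 1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
])

# Sorting each column independently makes the 1s "sink" to the bottom rows.
print(np.sort(matrix, axis=0))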
How to rearrange matrix elements vertically on python
I'm trying to build a basic game-like program where I need to rearrange a given matrix but vertically. In this case, I only have 0s and 1s. 0 being lighter objects and 1 being heavier. When the function runs, all the 1s should fall down vertically and the zeros go up vertically as well. It needs to have the exact number of 0s and 1s as the original matrix. Example: -If I give the following matrix: [1,0,1,1,0,1,0], [0,0,0,1,0,0,0], [1,0,1,1,1,1,1], [0,1,1,0,1,1,0], [1,1,0,1,0,0,1] It should rearrange it to: [0,0,0,0,0,0,0], [0,0,0,1,0,0,0], [1,0,1,1,0,1,0], [1,1,1,1,1,1,1], [1,1,1,1,1,1,1] Any help or suggestions will be highly appreciated.
[ "Consider using numpy for your matrices. You can then use np.sort to do what you want:\nnp.sort(matrix, axis=0)\n\n", "If you didn't want to use numpy (though you should), you could do:\nfrom collections import Counter\n\ntest = [[1,0,1,1,0,1,0],\n[0,0,0,1,0,0,0],\n[1,0,1,1,1,1,1],\n[0,1,1,0,1,1,0],\n[1,1,0,1,0,0,1] ]\n\nnew_version = [[] for _ in test] # create an empty list to append data to\nfor count, item in enumerate(test[0]): # go through the length of one of the list of lists for their length # assuming that all lists are of equal length\n frequency = Counter([x[count] for x in test]) # get frequency count for the column\n for count_inside, item_inside in enumerate(test): \n # to add the values depending on their frequency distribution in the column\n value = 0 if 0 in frequency and count_inside < frequency[0] else 1\n new_version[count_inside].append(value)\n \nprint(new_version)\n \n\n", "Not as readable as the numpy approach, but if you want to use the list-approach you could\n\nTranspose the matrix by using the zip(*matrix) approach.\nSort the resulting rows (which are columns of the original matrix)\nTranspose back.\n\nYou can do it in one line:\n[row for row in zip(*[sorted(column) for column in zip(*matrix)])]\n\n" ]
[ 2, 1, 1 ]
[]
[]
[ "matrix", "python" ]
stackoverflow_0074645690_matrix_python.txt
Q: Fill in values in other columns based on missing dates in another columns - Pandas I am currently having a similar need to the question in this thread, but it looks like it cannot fill in the dates if the min and max dates of the given date column does not fall into the first and last day of a given month and year. In particular, assume this dataframe df = pd.DataFrame({'user': ['a','a','b','b','c','c','c'], 'dt': ['2016-01-05','2016-01-08', '2016-01-10','2016-01-15','2016-01-16', '2016-01-22', '2016-01-19'], 'val': [1,33,2,1,5,5,6], 'price': [1,2,1,1,2,5.5,4.2]}) user dt val price 0 a 2016-01-05 1 1.0 1 a 2016-01-08 33 2.0 2 b 2016-01-10 2 1.0 3 b 2016-01-15 1 1.0 4 c 2016-01-16 5 2.0 5 c 2016-01-22 5 5.5 6 c 2016-01-19 6 4.2 Using the code in the first answer of the above thread, the resulting dataframe can only fill in 0 values for all dates between 2016-01-05 and 2016-01-22. It could not do the same thing on dates between 2016-01-01 and 2016-01-04, OR from 2016-01-23 to 2016-01-31. I wonder if anyone could help address this point, as I currently have a need to accomplish the fill-in for every missing dates within a given month and year? Expected Output user dt val price 0 a 2016-01-01 0 0.0 1 a 2016-01-02 0 0.0 2 a 2016-01-03 0 0.0 3 a 2016-01-04 0 0.0 4 a 2016-01-05 1 1.0 5 a 2016-01-06 0 0.0 6 a 2016-01-07 0 0.0 7 a 2016-01-08 33 2.0 8 a 2016-01-09 0 0.0 9 a 2016-01-10 0 0.0 10 a 2016-01-11 0 0.0 11 a 2016-01-12 0 0.0 12 a 2016-01-13 0 0.0 13 a 2016-01-14 0 0.0 14 a 2016-01-15 0 0.0 15 a 2016-01-16 0 0.0 16 a 2016-01-17 0 0.0 17 a 2016-01-18 0 0.0 18 a 2016-01-19 0 0.0 19 a 2016-01-20 0 0.0 20 a 2016-01-21 0 0.0 21 a 2016-01-22 0 0.0 22 a 2016-01-23 0 0.0 23 a 2016-01-24 0 0.0 24 a 2016-01-25 0 0.0 25 a 2016-01-26 0 0.0 26 a 2016-01-27 0 0.0 27 a 2016-01-28 0 0.0 28 a 2016-01-29 0 0.0 29 a 2016-01-30 0 0.0 30 a 2016-01-31 0 0.0 31 b 2016-01-01 0 0.0 32 b 2016-01-02 0 0.0 33 b 2016-01-03 0 0.0 34 b 2016-01-04 0 0.0 35 b 2016-01-05 0 0.0 36 b 2016-01-06 0 0.0 37 b 2016-01-07 0 0.0 38 b 2016-01-08 0 0.0 39 b 2016-01-09 0 0.0 40 b 2016-01-10 2 1.0 41 b 2016-01-11 0 0.0 42 b 2016-01-12 0 0.0 43 b 2016-01-13 0 0.0 44 b 2016-01-14 0 0.0 45 b 2016-01-15 1 1.0 46 b 2016-01-16 0 0.0 47 b 2016-01-17 0 0.0 48 b 2016-01-18 0 0.0 49 b 2016-01-19 0 0.0 50 b 2016-01-20 0 0.0 51 b 2016-01-21 0 0.0 52 b 2016-01-22 0 0.0 53 b 2016-01-23 0 0.0 54 b 2016-01-24 0 0.0 55 b 2016-01-25 0 0.0 56 b 2016-01-26 0 0.0 57 b 2016-01-27 0 0.0 58 b 2016-01-28 0 0.0 59 b 2016-01-29 0 0.0 60 b 2016-01-30 0 0.0 61 b 2016-01-31 0 0.0 62 c 2016-01-01 0 0.0 63 c 2016-01-02 0 0.0 64 c 2016-01-03 0 0.0 65 c 2016-01-04 0 0.0 66 c 2016-01-05 0 0.0 67 c 2016-01-06 0 0.0 68 c 2016-01-07 0 0.0 69 c 2016-01-08 0 0.0 70 c 2016-01-09 0 0.0 71 c 2016-01-10 2 1.0 72 c 2016-01-11 0 0.0 73 c 2016-01-12 0 0.0 74 c 2016-01-13 0 0.0 75 c 2016-01-14 0 0.0 76 c 2016-01-15 1 1.0 77 c 2016-01-16 5 2.0 78 c 2016-01-17 0 0.0 79 c 2016-01-18 0 0.0 80 c 2016-01-19 6 4.2 81 c 2016-01-20 0 0.0 82 c 2016-01-21 0 0.0 83 c 2016-01-22 5 5.5 84 c 2016-01-23 0 0.0 85 c 2016-01-24 0 0.0 86 c 2016-01-25 0 0.0 87 c 2016-01-26 0 0.0 88 c 2016-01-27 0 0.0 89 c 2016-01-28 0 0.0 90 c 2016-01-29 0 0.0 91 c 2016-01-30 0 0.0 92 c 2016-01-31 0 0.0 A: Ok, so you just need to define your own pd.date_range, then build a new MultiIndex to get the daily data for each user and use pd.DataFrame.reindex. 
df["dt"] = pd.to_datetime(df["dt"]) df = df.set_index(["user", "dt"]) daily_idx = pd.date_range(start="2016-01-01", end="2016-01-31", freq="D") new_idx = pd.MultiIndex.from_product( [df.index.get_level_values("user").unique(), daily_idx], names=["user", "daily"] ) out = df.reindex(new_idx, fill_value=0).reset_index() print(out) user daily val price 0 a 2016-01-01 0 0.0 1 a 2016-01-02 0 0.0 2 a 2016-01-03 0 0.0 3 a 2016-01-04 0 0.0 4 a 2016-01-05 1 1.0 5 a 2016-01-06 0 0.0 6 a 2016-01-07 0 0.0 7 a 2016-01-08 33 2.0 8 a 2016-01-09 0 0.0 9 a 2016-01-10 0 0.0 10 a 2016-01-11 0 0.0 11 a 2016-01-12 0 0.0 12 a 2016-01-13 0 0.0 13 a 2016-01-14 0 0.0 14 a 2016-01-15 0 0.0 15 a 2016-01-16 0 0.0 16 a 2016-01-17 0 0.0 17 a 2016-01-18 0 0.0 18 a 2016-01-19 0 0.0 19 a 2016-01-20 0 0.0 20 a 2016-01-21 0 0.0 21 a 2016-01-22 0 0.0 22 a 2016-01-23 0 0.0 23 a 2016-01-24 0 0.0 24 a 2016-01-25 0 0.0 25 a 2016-01-26 0 0.0 26 a 2016-01-27 0 0.0 27 a 2016-01-28 0 0.0 28 a 2016-01-29 0 0.0 29 a 2016-01-30 0 0.0 30 a 2016-01-31 0 0.0 31 b 2016-01-01 0 0.0 32 b 2016-01-02 0 0.0 33 b 2016-01-03 0 0.0 34 b 2016-01-04 0 0.0 35 b 2016-01-05 0 0.0 36 b 2016-01-06 0 0.0 37 b 2016-01-07 0 0.0 38 b 2016-01-08 0 0.0 39 b 2016-01-09 0 0.0 40 b 2016-01-10 2 1.0 41 b 2016-01-11 0 0.0 42 b 2016-01-12 0 0.0 43 b 2016-01-13 0 0.0 44 b 2016-01-14 0 0.0 45 b 2016-01-15 1 1.0 46 b 2016-01-16 0 0.0 47 b 2016-01-17 0 0.0 48 b 2016-01-18 0 0.0 49 b 2016-01-19 0 0.0 50 b 2016-01-20 0 0.0 51 b 2016-01-21 0 0.0 52 b 2016-01-22 0 0.0 53 b 2016-01-23 0 0.0 54 b 2016-01-24 0 0.0 55 b 2016-01-25 0 0.0 56 b 2016-01-26 0 0.0 57 b 2016-01-27 0 0.0 58 b 2016-01-28 0 0.0 59 b 2016-01-29 0 0.0 60 b 2016-01-30 0 0.0 61 b 2016-01-31 0 0.0 62 c 2016-01-01 0 0.0 63 c 2016-01-02 0 0.0 64 c 2016-01-03 0 0.0 65 c 2016-01-04 0 0.0 66 c 2016-01-05 0 0.0 67 c 2016-01-06 0 0.0 68 c 2016-01-07 0 0.0 69 c 2016-01-08 0 0.0 70 c 2016-01-09 0 0.0 71 c 2016-01-10 0 0.0 72 c 2016-01-11 0 0.0 73 c 2016-01-12 0 0.0 74 c 2016-01-13 0 0.0 75 c 2016-01-14 0 0.0 76 c 2016-01-15 0 0.0 77 c 2016-01-16 5 2.0 78 c 2016-01-17 0 0.0 79 c 2016-01-18 0 0.0 80 c 2016-01-19 6 4.2 81 c 2016-01-20 0 0.0 82 c 2016-01-21 0 0.0 83 c 2016-01-22 5 5.5 84 c 2016-01-23 0 0.0 85 c 2016-01-24 0 0.0 86 c 2016-01-25 0 0.0 87 c 2016-01-26 0 0.0 88 c 2016-01-27 0 0.0 89 c 2016-01-28 0 0.0 90 c 2016-01-29 0 0.0 91 c 2016-01-30 0 0.0 92 c 2016-01-31 0 0.0 A: You can use: df['dt'] = pd.to_datetime(df['dt']) (df.set_index('dt') .groupby('user', as_index=False) .apply(lambda d: d.reindex(pd.date_range(d.index.min(), d.index.max()), fill_value=0 )) .reset_index(-1) ) If you want to round to month start/end: (df.set_index('dt') .groupby('user', as_index=False) .apply(lambda d: d.reindex(pd.date_range(d.index.min()-pd.offsets.MonthBegin(1), d.index.max()+pd.offsets.MonthEnd(1) ).rename('id'), fill_value=0) ) .reset_index('id') ) Output: id user val price 0 2016-01-01 0 0 0.0 0 2016-01-02 0 0 0.0 0 2016-01-03 0 0 0.0 0 2016-01-04 0 0 0.0 0 2016-01-05 a 1 1.0 .. ... ... ... ... 2 2016-01-27 0 0 0.0 2 2016-01-28 0 0 0.0 2 2016-01-29 0 0 0.0 2 2016-01-30 0 0 0.0 2 2016-01-31 0 0 0.0 [93 rows x 4 columns]
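As a hedged, reusable variant of the reindex idea above, the helper below builds the full (user, day) grid for one month and fills the gaps with 0; the function name and the hard-coded month boundaries are assumptions taken from the example.

import pandas as pd

def fill_missing_days(df, start="2016-01-01", end="2016-01-31"):
    days = pd.date_range(start=start, end=end, freq="D")
    full_idx = pd.MultiIndex.from_product([df["user"].unique(), days],
                                          names=["user", "dt"])
    return (df.assign(dt=pd.to_datetime(df["dt"]))
              .set_index(["user", "dt"])
              .reindex(full_idx, fill_value=0)
              .reset_index())

# filled = fill_missing_days(df)   # 93 rows: 3 users x 31 days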
Fill in values in other columns based on missing dates in another columns - Pandas
I am currently having a similar need to the question in this thread, but it looks like it cannot fill in the dates if the min and max dates of the given date column does not fall into the first and last day of a given month and year. In particular, assume this dataframe df = pd.DataFrame({'user': ['a','a','b','b','c','c','c'], 'dt': ['2016-01-05','2016-01-08', '2016-01-10','2016-01-15','2016-01-16', '2016-01-22', '2016-01-19'], 'val': [1,33,2,1,5,5,6], 'price': [1,2,1,1,2,5.5,4.2]}) user dt val price 0 a 2016-01-05 1 1.0 1 a 2016-01-08 33 2.0 2 b 2016-01-10 2 1.0 3 b 2016-01-15 1 1.0 4 c 2016-01-16 5 2.0 5 c 2016-01-22 5 5.5 6 c 2016-01-19 6 4.2 Using the code in the first answer of the above thread, the resulting dataframe can only fill in 0 values for all dates between 2016-01-05 and 2016-01-22. It could not do the same thing on dates between 2016-01-01 and 2016-01-04, OR from 2016-01-23 to 2016-01-31. I wonder if anyone could help address this point, as I currently have a need to accomplish the fill-in for every missing dates within a given month and year? Expected Output user dt val price 0 a 2016-01-01 0 0.0 1 a 2016-01-02 0 0.0 2 a 2016-01-03 0 0.0 3 a 2016-01-04 0 0.0 4 a 2016-01-05 1 1.0 5 a 2016-01-06 0 0.0 6 a 2016-01-07 0 0.0 7 a 2016-01-08 33 2.0 8 a 2016-01-09 0 0.0 9 a 2016-01-10 0 0.0 10 a 2016-01-11 0 0.0 11 a 2016-01-12 0 0.0 12 a 2016-01-13 0 0.0 13 a 2016-01-14 0 0.0 14 a 2016-01-15 0 0.0 15 a 2016-01-16 0 0.0 16 a 2016-01-17 0 0.0 17 a 2016-01-18 0 0.0 18 a 2016-01-19 0 0.0 19 a 2016-01-20 0 0.0 20 a 2016-01-21 0 0.0 21 a 2016-01-22 0 0.0 22 a 2016-01-23 0 0.0 23 a 2016-01-24 0 0.0 24 a 2016-01-25 0 0.0 25 a 2016-01-26 0 0.0 26 a 2016-01-27 0 0.0 27 a 2016-01-28 0 0.0 28 a 2016-01-29 0 0.0 29 a 2016-01-30 0 0.0 30 a 2016-01-31 0 0.0 31 b 2016-01-01 0 0.0 32 b 2016-01-02 0 0.0 33 b 2016-01-03 0 0.0 34 b 2016-01-04 0 0.0 35 b 2016-01-05 0 0.0 36 b 2016-01-06 0 0.0 37 b 2016-01-07 0 0.0 38 b 2016-01-08 0 0.0 39 b 2016-01-09 0 0.0 40 b 2016-01-10 2 1.0 41 b 2016-01-11 0 0.0 42 b 2016-01-12 0 0.0 43 b 2016-01-13 0 0.0 44 b 2016-01-14 0 0.0 45 b 2016-01-15 1 1.0 46 b 2016-01-16 0 0.0 47 b 2016-01-17 0 0.0 48 b 2016-01-18 0 0.0 49 b 2016-01-19 0 0.0 50 b 2016-01-20 0 0.0 51 b 2016-01-21 0 0.0 52 b 2016-01-22 0 0.0 53 b 2016-01-23 0 0.0 54 b 2016-01-24 0 0.0 55 b 2016-01-25 0 0.0 56 b 2016-01-26 0 0.0 57 b 2016-01-27 0 0.0 58 b 2016-01-28 0 0.0 59 b 2016-01-29 0 0.0 60 b 2016-01-30 0 0.0 61 b 2016-01-31 0 0.0 62 c 2016-01-01 0 0.0 63 c 2016-01-02 0 0.0 64 c 2016-01-03 0 0.0 65 c 2016-01-04 0 0.0 66 c 2016-01-05 0 0.0 67 c 2016-01-06 0 0.0 68 c 2016-01-07 0 0.0 69 c 2016-01-08 0 0.0 70 c 2016-01-09 0 0.0 71 c 2016-01-10 2 1.0 72 c 2016-01-11 0 0.0 73 c 2016-01-12 0 0.0 74 c 2016-01-13 0 0.0 75 c 2016-01-14 0 0.0 76 c 2016-01-15 1 1.0 77 c 2016-01-16 5 2.0 78 c 2016-01-17 0 0.0 79 c 2016-01-18 0 0.0 80 c 2016-01-19 6 4.2 81 c 2016-01-20 0 0.0 82 c 2016-01-21 0 0.0 83 c 2016-01-22 5 5.5 84 c 2016-01-23 0 0.0 85 c 2016-01-24 0 0.0 86 c 2016-01-25 0 0.0 87 c 2016-01-26 0 0.0 88 c 2016-01-27 0 0.0 89 c 2016-01-28 0 0.0 90 c 2016-01-29 0 0.0 91 c 2016-01-30 0 0.0 92 c 2016-01-31 0 0.0
[ "Ok, so you just need to define your own pd.date_range, then build a new MultiIndex to get the daily data for each user and use pd.DataFrame.reindex.\ndf[\"dt\"] = pd.to_datetime(df[\"dt\"])\ndf = df.set_index([\"user\", \"dt\"])\n\ndaily_idx = pd.date_range(start=\"2016-01-01\", end=\"2016-01-31\", freq=\"D\")\n\nnew_idx = pd.MultiIndex.from_product(\n [df.index.get_level_values(\"user\").unique(), daily_idx], names=[\"user\", \"daily\"]\n)\nout = df.reindex(new_idx, fill_value=0).reset_index()\nprint(out)\n\n user daily val price\n0 a 2016-01-01 0 0.0\n1 a 2016-01-02 0 0.0\n2 a 2016-01-03 0 0.0\n3 a 2016-01-04 0 0.0\n4 a 2016-01-05 1 1.0\n5 a 2016-01-06 0 0.0\n6 a 2016-01-07 0 0.0\n7 a 2016-01-08 33 2.0\n8 a 2016-01-09 0 0.0\n9 a 2016-01-10 0 0.0\n10 a 2016-01-11 0 0.0\n11 a 2016-01-12 0 0.0\n12 a 2016-01-13 0 0.0\n13 a 2016-01-14 0 0.0\n14 a 2016-01-15 0 0.0\n15 a 2016-01-16 0 0.0\n16 a 2016-01-17 0 0.0\n17 a 2016-01-18 0 0.0\n18 a 2016-01-19 0 0.0\n19 a 2016-01-20 0 0.0\n20 a 2016-01-21 0 0.0\n21 a 2016-01-22 0 0.0\n22 a 2016-01-23 0 0.0\n23 a 2016-01-24 0 0.0\n24 a 2016-01-25 0 0.0\n25 a 2016-01-26 0 0.0\n26 a 2016-01-27 0 0.0\n27 a 2016-01-28 0 0.0\n28 a 2016-01-29 0 0.0\n29 a 2016-01-30 0 0.0\n30 a 2016-01-31 0 0.0\n31 b 2016-01-01 0 0.0\n32 b 2016-01-02 0 0.0\n33 b 2016-01-03 0 0.0\n34 b 2016-01-04 0 0.0\n35 b 2016-01-05 0 0.0\n36 b 2016-01-06 0 0.0\n37 b 2016-01-07 0 0.0\n38 b 2016-01-08 0 0.0\n39 b 2016-01-09 0 0.0\n40 b 2016-01-10 2 1.0\n41 b 2016-01-11 0 0.0\n42 b 2016-01-12 0 0.0\n43 b 2016-01-13 0 0.0\n44 b 2016-01-14 0 0.0\n45 b 2016-01-15 1 1.0\n46 b 2016-01-16 0 0.0\n47 b 2016-01-17 0 0.0\n48 b 2016-01-18 0 0.0\n49 b 2016-01-19 0 0.0\n50 b 2016-01-20 0 0.0\n51 b 2016-01-21 0 0.0\n52 b 2016-01-22 0 0.0\n53 b 2016-01-23 0 0.0\n54 b 2016-01-24 0 0.0\n55 b 2016-01-25 0 0.0\n56 b 2016-01-26 0 0.0\n57 b 2016-01-27 0 0.0\n58 b 2016-01-28 0 0.0\n59 b 2016-01-29 0 0.0\n60 b 2016-01-30 0 0.0\n61 b 2016-01-31 0 0.0\n62 c 2016-01-01 0 0.0\n63 c 2016-01-02 0 0.0\n64 c 2016-01-03 0 0.0\n65 c 2016-01-04 0 0.0\n66 c 2016-01-05 0 0.0\n67 c 2016-01-06 0 0.0\n68 c 2016-01-07 0 0.0\n69 c 2016-01-08 0 0.0\n70 c 2016-01-09 0 0.0\n71 c 2016-01-10 0 0.0\n72 c 2016-01-11 0 0.0\n73 c 2016-01-12 0 0.0\n74 c 2016-01-13 0 0.0\n75 c 2016-01-14 0 0.0\n76 c 2016-01-15 0 0.0\n77 c 2016-01-16 5 2.0\n78 c 2016-01-17 0 0.0\n79 c 2016-01-18 0 0.0\n80 c 2016-01-19 6 4.2\n81 c 2016-01-20 0 0.0\n82 c 2016-01-21 0 0.0\n83 c 2016-01-22 5 5.5\n84 c 2016-01-23 0 0.0\n85 c 2016-01-24 0 0.0\n86 c 2016-01-25 0 0.0\n87 c 2016-01-26 0 0.0\n88 c 2016-01-27 0 0.0\n89 c 2016-01-28 0 0.0\n90 c 2016-01-29 0 0.0\n91 c 2016-01-30 0 0.0\n92 c 2016-01-31 0 0.0\n\n", "You can use:\ndf['dt'] = pd.to_datetime(df['dt'])\n\n(df.set_index('dt')\n .groupby('user', as_index=False)\n .apply(lambda d: d.reindex(pd.date_range(d.index.min(), d.index.max()),\n fill_value=0\n ))\n .reset_index(-1)\n)\n\nIf you want to round to month start/end:\n(df.set_index('dt')\n .groupby('user', as_index=False)\n .apply(lambda d: d.reindex(pd.date_range(d.index.min()-pd.offsets.MonthBegin(1),\n d.index.max()+pd.offsets.MonthEnd(1)\n ).rename('id'),\n fill_value=0)\n )\n .reset_index('id')\n)\n\nOutput:\n id user val price\n0 2016-01-01 0 0 0.0\n0 2016-01-02 0 0 0.0\n0 2016-01-03 0 0 0.0\n0 2016-01-04 0 0 0.0\n0 2016-01-05 a 1 1.0\n.. ... ... ... ...\n2 2016-01-27 0 0 0.0\n2 2016-01-28 0 0 0.0\n2 2016-01-29 0 0 0.0\n2 2016-01-30 0 0 0.0\n2 2016-01-31 0 0 0.0\n\n[93 rows x 4 columns]\n\n" ]
[ 2, 2 ]
[]
[]
[ "dataframe", "datetime", "pandas", "python" ]
stackoverflow_0074645693_dataframe_datetime_pandas_python.txt
Q: discord.py command() got an unexpected keyword argument 'options' So i am trying to make slash ban command in discord.py and i get this error: command() got an unexpected keyword argument 'options' My code: @tree.command(name="ban", description="Ban someone", options = [app_commands.choices(name="user", description="select user to ban", option_type=6, required=True)]) async def _ban(ctx, user: discord.Member): await user.ban(reason="NO reason") await ctx.send(f'Banned {user}! Reason: {reason}') A: @tree.command(name="ban", description="Ban someone") async def _ban(ctx, user: discord.Member): await user.ban(reason="NO reason") await ctx.send(f'Banned {user}! Reason: {reason}') If you are only using discord.py, which i assume so, user: discord.Member is enough. options = [app_commands.choices( isn't a thing. Read the docs on @discord.app_commands.CommandTree.command The real use of app_commands.choices looks like this: @tree.command(name='command') @app_commands.choices(option=[ app_commands.Choice(name='option1', value='1'), app_commands.Choice(name='option2', value='2'), ]) async def command(interaction: discord.Interaction, option: app_commands.Choice[str]): if option.value == '1': #chosen option was option1 elif option.value == '2': #chosen option was option2
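A hedged sketch of the same slash command written against discord.py 2.x app commands; it assumes bot and tree = bot.tree are defined elsewhere in the file, and unlike the snippet above it defines reason before using it in the reply.

import discord
from discord import app_commands

@tree.command(name="ban", description="Ban someone")
@app_commands.describe(user="Member to ban", reason="Optional reason for the ban")
async def ban(interaction: discord.Interaction, user: discord.Member,
              reason: str = "No reason given"):
    # the bot needs the Ban Members permission for this call to succeed
    await user.ban(reason=reason)
    await interaction.response.send_message(f"Banned {user}! Reason: {reason}")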
discord.py command() got an unexpected keyword argument 'options'
So i am trying to make slash ban command in discord.py and i get this error: command() got an unexpected keyword argument 'options' My code: @tree.command(name="ban", description="Ban someone", options = [app_commands.choices(name="user", description="select user to ban", option_type=6, required=True)]) async def _ban(ctx, user: discord.Member): await user.ban(reason="NO reason") await ctx.send(f'Banned {user}! Reason: {reason}')
[ "@tree.command(name=\"ban\", description=\"Ban someone\")\nasync def _ban(ctx, user: discord.Member):\n await user.ban(reason=\"NO reason\")\n await ctx.send(f'Banned {user}! Reason: {reason}') \n\nIf you are only using discord.py, which i assume so, user: discord.Member is enough. options = [app_commands.choices( isn't a thing.\nRead the docs on @discord.app_commands.CommandTree.command\nThe real use of app_commands.choices looks like this:\[email protected](name='command')\n@app_commands.choices(option=[\n app_commands.Choice(name='option1', value='1'),\n app_commands.Choice(name='option2', value='2'),\n])\nasync def command(interaction: discord.Interaction, option: app_commands.Choice[str]):\n if option.value == '1':\n #chosen option was option1\n elif option.value == '2':\n #chosen option was option2\n\n" ]
[ 1 ]
[]
[]
[ "discord.py", "python" ]
stackoverflow_0074645575_discord.py_python.txt
Q: How to set Font size to fit a specific width of the frame in tkinter actually I am making a project with the help of tkinter in python so basically I want to shrink my font size of a label according to the width of the frame that it will be put in. I want that just i am giving a string and it will automatically adjust the size of the font according to the width. For an Example:- Let I am Giving a String that is " Hotel Raj Kundra and Family Resturant", let the width of the Frame/label is 500. so how it will automatically adjust in this size of window without wrapped the text. just fit in the window size A: with .config you can adjust Attributes after you placed it. Now you can adjust the window size with the len() of your string. I hope it helps. import tkinter as tk def adjust_Window(canvas, text): canvas.config(width=len(text)*50) def main(): root = tk.Tk() canvas = tk.Canvas(root, width=500, height=500) canvas.pack() adjust_Window(canvas, "i hope it helps <3") root.mainloop() if __name__ == '__main__': main() A: Easier way to do this: from tkinter import * from tkinter import ttk # Create an instance of tkinter frame or window win=Tk() # Set the size of the window win.geometry("700x350") def update_width(): l.configure(text='Hotel Raj Kundra and Family Resturant', background='blue', foreground='white', font=("Calibri,8,itlaic")) # Create a frame frame=Frame(win, background="skyblue3", width=700, height=250) frame.pack() # Add a button in the main window ttk.Button(win, text="Update", command=update_width).pack() l = ttk.Label(win, background='red', text="so how it will automatically adjust in this size of window without wrapped the text. just fit in the window siz" , font=("Calibri,32,Bold")) l.pack() win.mainloop() Result widest: Result to shrink:
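One common way to do this, sketched under the assumption that a pixel width (here 500) is the target: measure the string with tkinter.font.Font.measure and shrink the size until it fits. The function name and the size bounds are illustrative.

import tkinter as tk
import tkinter.font as tkfont

def fit_font_size(text, max_width=500, family="Calibri", max_size=32, min_size=6):
    size = max_size
    while size > min_size:
        # measure() returns the rendered width of the string in pixels
        if tkfont.Font(family=family, size=size).measure(text) <= max_width:
            break
        size -= 1
    return size

root = tk.Tk()
text = "Hotel Raj Kundra and Family Resturant"
size = fit_font_size(text)
tk.Label(root, text=text, font=("Calibri", size)).pack()
root.mainloop()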
How to set Font size to fit a specific width of the frame in tkinter
actually I am making a project with the help of tkinter in python so basically I want to shrink my font size of a label according to the width of the frame that it will be put in. I want that just i am giving a string and it will automatically adjust the size of the font according to the width. For an Example:- Let I am Giving a String that is " Hotel Raj Kundra and Family Resturant", let the width of the Frame/label is 500. so how it will automatically adjust in this size of window without wrapped the text. just fit in the window size
[ "with .config you can adjust Attributes after you placed it.\nNow you can adjust the window size with the len() of your string.\nI hope it helps.\nimport tkinter as tk\n\ndef adjust_Window(canvas, text):\n\n canvas.config(width=len(text)*50)\n\ndef main():\n root = tk.Tk()\n\n canvas = tk.Canvas(root, width=500, height=500)\n canvas.pack()\n\n adjust_Window(canvas, \"i hope it helps <3\")\n\n root.mainloop()\n\n\nif __name__ == '__main__':\n main()\n\n", "Easier way to do this:\nfrom tkinter import *\nfrom tkinter import ttk\n\n# Create an instance of tkinter frame or window\nwin=Tk()\n\n# Set the size of the window\nwin.geometry(\"700x350\")\n\ndef update_width():\n l.configure(text='Hotel Raj Kundra and Family Resturant', background='blue', foreground='white', font=(\"Calibri,8,itlaic\"))\n \n\n# Create a frame\nframe=Frame(win, background=\"skyblue3\", width=700, height=250)\nframe.pack()\n\n# Add a button in the main window\nttk.Button(win, text=\"Update\", command=update_width).pack()\nl = ttk.Label(win, background='red', text=\"so how it will automatically adjust in this size of window without wrapped the text. just fit in the window siz\" , font=(\"Calibri,32,Bold\"))\nl.pack()\n \nwin.mainloop()\n\nResult widest:\n\nResult to shrink:\n\n" ]
[ 0, 0 ]
[]
[]
[ "font_size", "python", "tkinter" ]
stackoverflow_0071744565_font_size_python_tkinter.txt
Q: Grouped Dataframe to a nested tree in python I have a dataframe as grpdata = {'Group1':['A', 'A', 'A', 'B','B'], 'Group2':['A2','B2','B2','A2','B2'], 'Group3':['A3', 'A3', 'B3','A3', 'A3'], 'Count':['10', '12', '14', '20']} # Convert the dictionary into DataFrame groupdf = pd.DataFrame(grpdata) I want to convert this dataframe to a tree, wherein each row is a path from root node to a leaf node. I have tried using the approach shown in Read data from a pandas dataframe and create a dataframe using anytree in python def add_nodes(nodes, parent, child): if parent not in nodes: nodes[parent] = Node(parent) if child not in nodes: nodes[child] = Node(child) nodes[child].parent = nodes[parent] nodes = {} for parent, child in zip(groupdf["Group1"],groupdf["Group2"]): add_nodes(nodes, parent, child) However I am not able to figure out how to add the Group3 as a child to Group2 as parent node in the same node structure defined above. Also roots = list(groupdf[~groupdf["Group1"].isin(groupdf["Group2"])]["Group1"].unique()) for root in roots: for pre, _, node in RenderTree(nodes[root]): print("%s%s" % (pre, node.name)) How to add the subsequent columns "Group3" and "Count to this tree structure? A: bigtree is a Python tree implementation that integrates with Python lists, dictionaries, and pandas DataFrame. For this scenario, there is a built-in dataframe_to_tree method which does this for you. import pandas as pd from bigtree import dataframe_to_tree, print_tree # I changed the dataframe to path column and count column # I also added a `root` over here since trees must start from same root path_data = pd.DataFrame([ ["root/A/A2/A3", 10], ["root/A/B2/A3", 12], ["root/A/B2/B3", 14], ["root/B/A2/A3", 20], ["root/B/B2/A3", 25], ], columns=["Path", "Count"] ) root = dataframe_to_tree(path_data) print_tree(root, attr_list=["Count"]) This results in output, root ├── A │ ├── A2 │ │ └── A3 [Count=10] │ └── B2 │ ├── A3 [Count=12] │ └── B3 [Count=14] └── B ├── A2 │ └── A3 [Count=20] └── B2 └── A3 [Count=25] Source/Disclaimer: I'm the creator of bigtree ;)
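For completeness, a hedged anytree-based sketch of what the question asks for directly (the accepted answer above switches to bigtree): chain Group1 -> Group2 -> Group3 per row and attach Count to the leaf node. Note the example dict has one fewer Count value than rows, so equal-length columns are assumed; path-based keys keep, say, A2 under A separate from A2 under B.

from anytree import Node, RenderTree

nodes = {}
for g1, g2, g3, cnt in zip(groupdf["Group1"], groupdf["Group2"],
                           groupdf["Group3"], groupdf["Count"]):
    parent, path = None, ""
    for name in (g1, g2, g3):
        path = f"{path}/{name}"
        if path not in nodes:
            nodes[path] = Node(name, parent=parent)
        parent = nodes[path]
    parent.count = cnt                      # attach Count to the Group3 leaf

for root_key in (k for k in nodes if k.count("/") == 1):
    for pre, _, node in RenderTree(nodes[root_key]):
        print(f"{pre}{node.name}", getattr(node, "count", ""))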
Grouped Dataframe to a nested tree in python
I have a dataframe as grpdata = {'Group1':['A', 'A', 'A', 'B','B'], 'Group2':['A2','B2','B2','A2','B2'], 'Group3':['A3', 'A3', 'B3','A3', 'A3'], 'Count':['10', '12', '14', '20']} # Convert the dictionary into DataFrame groupdf = pd.DataFrame(grpdata) I want to convert this dataframe to a tree, wherein each row is a path from root node to a leaf node. I have tried using the approach shown in Read data from a pandas dataframe and create a dataframe using anytree in python def add_nodes(nodes, parent, child): if parent not in nodes: nodes[parent] = Node(parent) if child not in nodes: nodes[child] = Node(child) nodes[child].parent = nodes[parent] nodes = {} for parent, child in zip(groupdf["Group1"],groupdf["Group2"]): add_nodes(nodes, parent, child) However I am not able to figure out how to add the Group3 as a child to Group2 as parent node in the same node structure defined above. Also roots = list(groupdf[~groupdf["Group1"].isin(groupdf["Group2"])]["Group1"].unique()) for root in roots: for pre, _, node in RenderTree(nodes[root]): print("%s%s" % (pre, node.name)) How to add the subsequent columns "Group3" and "Count to this tree structure?
[ "bigtree is a Python tree implementation that integrates with Python lists, dictionaries, and pandas DataFrame.\nFor this scenario, there is a built-in dataframe_to_tree method which does this for you.\nimport pandas as pd\nfrom bigtree import dataframe_to_tree, print_tree\n\n# I changed the dataframe to path column and count column\n# I also added a `root` over here since trees must start from same root\npath_data = pd.DataFrame([\n [\"root/A/A2/A3\", 10],\n [\"root/A/B2/A3\", 12],\n [\"root/A/B2/B3\", 14],\n [\"root/B/A2/A3\", 20],\n [\"root/B/B2/A3\", 25],\n],\n columns=[\"Path\", \"Count\"]\n)\nroot = dataframe_to_tree(path_data)\nprint_tree(root, attr_list=[\"Count\"])\n\nThis results in output,\nroot\n├── A\n│ ├── A2\n│ │ └── A3 [Count=10]\n│ └── B2\n│ ├── A3 [Count=12]\n│ └── B3 [Count=14]\n└── B\n ├── A2\n │ └── A3 [Count=20]\n └── B2\n └── A3 [Count=25]\n\nSource/Disclaimer: I'm the creator of bigtree ;)\n" ]
[ 0 ]
[]
[]
[ "dataframe", "python", "tree" ]
stackoverflow_0073778680_dataframe_python_tree.txt
Q: how can fix this Python Code is not defined surface=self.surface NameError: name 'self' is not defined how can fix this Python Code is not defined the code class Rectangle: def __init__(self, longueur=30, largeur=15): self.lon = longueur self.lar = largeur self.nom = "rectangle" def surface(self): return self.lon * self.lar surface=self.surface def affichage(self): print("rectangle=" + self.nom, "longueur=" + self.lon, "largeur=" + self.lar, "surface=" + self.surface) class Carre(Rectangle): def __init__(self, cote=10): Rectangle.__init__(self, cote, cote) self.nom = "carre" r = Rectangle() print(r) c = Carre() print(c) A: At this line surface=self.surface you're trying to access a variable that does not exist in this scope. self has only been defined within the context of the various functions of your classes, and python doesn't know about it outside of those functions. If you have an instance of Rectangle called for example rect, you can refer to its member function surface as rect.surface, or you can evaluate that function's value by calling rect.surface(). The key to understanding this is to know that objects can have many names. By convention, within the object we refer to the instance by the name self. Outside of the object, this would be confusing so we use names that tell us what object we're referring to. (Just as you might refer to yourself as "me", but you'd be confused if others used that same word to refer to you!)
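A corrected sketch of the class for reference: self only exists inside methods, so the surface is exposed as a method and called on an instance; the stray class-level assignment is removed and affichage formats the numbers instead of concatenating them to strings.

class Rectangle:
    def __init__(self, longueur=30, largeur=15):
        self.lon = longueur
        self.lar = largeur
        self.nom = "rectangle"

    def surface(self):
        return self.lon * self.lar

    def affichage(self):
        print(f"nom={self.nom}", f"longueur={self.lon}",
              f"largeur={self.lar}", f"surface={self.surface()}")

class Carre(Rectangle):
    def __init__(self, cote=10):
        Rectangle.__init__(self, cote, cote)
        self.nom = "carre"

Rectangle().affichage()   # nom=rectangle longueur=30 largeur=15 surface=450
Carre().affichage()       # nom=carre longueur=10 largeur=10 surface=100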
How can I fix this Python code: name 'self' is not defined
surface=self.surface NameError: name 'self' is not defined how can fix this Python Code is not defined the code class Rectangle: def __init__(self, longueur=30, largeur=15): self.lon = longueur self.lar = largeur self.nom = "rectangle" def surface(self): return self.lon * self.lar surface=self.surface def affichage(self): print("rectangle=" + self.nom, "longueur=" + self.lon, "largeur=" + self.lar, "surface=" + self.surface) class Carre(Rectangle): def __init__(self, cote=10): Rectangle.__init__(self, cote, cote) self.nom = "carre" r = Rectangle() print(r) c = Carre() print(c)
[ "At this line surface=self.surface you're trying to access a variable that does not exist in this scope. self has only been defined within the context of the various functions of your classes, and python doesn't know about it outside of those functions.\nIf you have an instance of Rectangle called for example rect, you can refer to its member function surface as rect.surface, or you can evaluate that function's value by calling rect.surface().\nThe key to understanding this is to know that objects can have many names. By convention, within the object we refer to the instance by the name self. Outside of the object, this would be confusing so we use names that tell us what object we're referring to. (Just as you might refer to yourself as \"me\", but you'd be confused if others used that same word to refer to you!)\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074645893_python.txt
Q: How to install module of python in VS code Please share the process of how to install external modules; I can't access modules (screenshot of my VS Code screen attached). A: Unless you want to manually install packages/modules like pandas you should first install a package manager like PIP, or Anaconda if you have not done so already. Once you install follow instructions of either to setup the package manager. Using PIP you should be entering the following command once installed correctly: $ pip install package_name $ pip3 install package_name If you are using Anaconda, Miniconda, or any other package manager follow the instructions here NOTE: Before asking a question, check if there are any similar questions that have already been answered
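A quick check worth adding, assuming the usual cause is a mismatch between the interpreter VS Code runs and the one pip installed into: run python -m pip install <package> in the integrated terminal, then confirm which interpreter actually executes your script.

import sys

# Compare this path with the interpreter shown in VS Code's status bar;
# installs done with `python -m pip install <package>` land in this environment.
print(sys.executable)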
How to install module of python in VS code
Please share the process of how to install external modules; I can't access modules (screenshot of my VS Code screen attached).
[ "Unless you want to manually install packages/modules like pandas you should first install a package manager like PIP, or Anaconda if you have not done so already.\nOnce you install follow instructions of either to setup the package manager.\nUsing PIP you should be entering the following command once installed correctly:\n$ pip install package_name\n$ pip3 install package_name\n\nIf you are using Anaconda, Miniconda, or any other package manager follow the instructions here\nNOTE: Before asking a question, check if there are any similar questions that have already been answered\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x", "python_module" ]
stackoverflow_0074645554_python_python_3.x_python_module.txt
Q: How to get sqlalchemy length of a string column Consider this simple table definition (using SQLAlchemy-0.5.6) from sqlalchemy import * db = create_engine('sqlite:///tutorial.db') db.echo = False # Try changing this to True and see what happens metadata = MetaData(db) user = Table('user', metadata, Column('user_id', Integer, primary_key=True), Column('name', String(40)), Column('age', Integer), Column('password', String), ) from sqlalchemy.ext.declarative import declarative_base class User(declarative_base()): __tablename__ = 'user' user_id = Column('user_id', Integer, primary_key=True) name = Column('name', String(40)) I want to know what is the max length of column name e.g. from user table and from User (declarative class) print user.name.length print User.name.length I have tried (User.name.type.length) but it throws exception Traceback (most recent call last): File "del.py", line 25, in <module> print User.name.type.length File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.6-py2.5.egg/sqlalchemy/orm/attributes.py", line 135, in __getattr__ key) AttributeError: Neither 'InstrumentedAttribute' object nor 'Comparator' object has an attribute 'type' A: User.name.property.columns[0].type.length Note, that SQLAlchemy supports composite properties, that's why columns is a list. It has single item for simple column properties. A: This should work (tested on my machine) : print user.columns.name.type.length A: I was getting errors when fields were too big so I wrote a generic function to trim any string down and account for words with spaces. This will leave words intact and trim a string down to insert for you. I included my orm model for reference. class ProductIdentifierTypes(Base): __tablename__ = 'prod_id_type' id = Column(Integer, primary_key=True, autoincrement=True) name = Column(String(length=20)) description = Column(String(length=100)) def trim_for_insert(field_obj, in_str) -> str: max_len = field_obj.property.columns[0].type.length if len(in_str) <= max_len: return in_str logger.debug(f'Trimming {field_obj} to {max_len} max length.') trim_str = in_str[:(max_len-1)] if ' ' in trim_str[:int(max_len*0.9)]: return(str.join(' ', trim_str.split(' ')[:-1])) return trim_str def foo_bar(): from models.deals import ProductIdentifierTypes, ProductName _str = "Foo is a 42 year old big brown dog that all the kids call bar." print(_str) print(trim_for_insert(ProductIdentifierTypes.name, _str)) _str = "Full circle from the tomb of the womb to the womb of the tomb we come, an ambiguous, enigmatical incursion into a world of solid matter that is soon to melt from us like the substance of a dream." print(_str) print(trim_for_insert(ProductIdentifierTypes.description, _str))``` A: If you have access to the class: TableClass.column_name.type.length If you have access to an instance, you access the Class using the __class__ dunder method. table_instance.__class__.column_name.type.length So in your case: # Via Instance user.__class__.name.type.length # Via Class User.name.type.length My use case is similar to @Gregg Williamson However, I implemented it differently: def __setattr__(self, attr, value): column = self.__class__.type if length := getattr(column, "length", 0): value = value[:length] super().__setattr__(name, value)
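A short recap sketch of where the length lives on each object with the definitions from the question; all three lines below should print 40.

print(user.c.name.type.length)                      # via the Table object
print(User.__table__.columns["name"].type.length)   # via the declarative class
print(User.name.property.columns[0].type.length)    # via the mapped attribute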
How to get sqlalchemy length of a string column
Consider this simple table definition (using SQLAlchemy-0.5.6) from sqlalchemy import * db = create_engine('sqlite:///tutorial.db') db.echo = False # Try changing this to True and see what happens metadata = MetaData(db) user = Table('user', metadata, Column('user_id', Integer, primary_key=True), Column('name', String(40)), Column('age', Integer), Column('password', String), ) from sqlalchemy.ext.declarative import declarative_base class User(declarative_base()): __tablename__ = 'user' user_id = Column('user_id', Integer, primary_key=True) name = Column('name', String(40)) I want to know what is the max length of column name e.g. from user table and from User (declarative class) print user.name.length print User.name.length I have tried (User.name.type.length) but it throws exception Traceback (most recent call last): File "del.py", line 25, in <module> print User.name.type.length File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.6-py2.5.egg/sqlalchemy/orm/attributes.py", line 135, in __getattr__ key) AttributeError: Neither 'InstrumentedAttribute' object nor 'Comparator' object has an attribute 'type'
[ "User.name.property.columns[0].type.length\n\nNote, that SQLAlchemy supports composite properties, that's why columns is a list. It has single item for simple column properties.\n", "This should work (tested on my machine) :\nprint user.columns.name.type.length\n\n", "I was getting errors when fields were too big so I wrote a generic function to trim any string down and account for words with spaces. This will leave words intact and trim a string down to insert for you. I included my orm model for reference.\nclass ProductIdentifierTypes(Base):\n __tablename__ = 'prod_id_type'\n id = Column(Integer, primary_key=True, autoincrement=True)\n name = Column(String(length=20))\n description = Column(String(length=100))\n\ndef trim_for_insert(field_obj, in_str) -> str:\n\n max_len = field_obj.property.columns[0].type.length\n if len(in_str) <= max_len:\n return in_str\n \n logger.debug(f'Trimming {field_obj} to {max_len} max length.')\n \n trim_str = in_str[:(max_len-1)]\n \n if ' ' in trim_str[:int(max_len*0.9)]:\n return(str.join(' ', trim_str.split(' ')[:-1]))\n \n return trim_str\n\ndef foo_bar():\n from models.deals import ProductIdentifierTypes, ProductName\n \n _str = \"Foo is a 42 year old big brown dog that all the kids call bar.\"\n \n print(_str)\n \n print(trim_for_insert(ProductIdentifierTypes.name, _str))\n \n _str = \"Full circle from the tomb of the womb to the womb of the tomb we come, an ambiguous, enigmatical incursion into a world of solid matter that is soon to melt from us like the substance of a dream.\"\n \n print(_str)\n \n print(trim_for_insert(ProductIdentifierTypes.description, _str))```\n\n", "If you have access to the class:\nTableClass.column_name.type.length\n\nIf you have access to an instance, you access the Class using the __class__ dunder method.\ntable_instance.__class__.column_name.type.length\n\nSo in your case:\n# Via Instance\nuser.__class__.name.type.length\n# Via Class\nUser.name.type.length\n\nMy use case is similar to @Gregg Williamson\nHowever, I implemented it differently:\ndef __setattr__(self, attr, value):\n column = self.__class__.type\n if length := getattr(column, \"length\", 0):\n value = value[:length]\n super().__setattr__(name, value)\n\n" ]
[ 22, 3, 0, 0 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0001777814_python_sqlalchemy.txt
Q: Putting print statements on the same line I am tring to learn python and want to know if i can do this, and how. I am trying to make binary looking code come up digit by digit, with delay. In maybe there is 15 numbers, and each repeat i would like to make it do a set of 5, with a space after. if answer == 'MAYBE': deleteall() print("GIVE ME AN ANSWER!!!") time.sleep(1) deletelastline() for x in maybe: print(random.choice("1" "0")) time.sleep(0.1) print(random.choice("1" "0")) time.sleep(0.1) print(random.choice("1" "0")) time.sleep(0.1) print(random.choice("1" "0")) time.sleep(0.1) print(random.choice("1" "0")) time.sleep(0.1) print(" ") However, it outputs this: 0 1 1 0 0 1 0 0 0 1 1 ext. How do i get them on one line?!? Thx
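The usual fix, sketched on its own so it runs standalone: print() ends with a newline by default, so pass end=" " to stay on one line (flush makes each digit appear despite the delay).

import random
import time

for _ in range(3):                      # three groups of five digits, as an example
    for _ in range(5):
        print(random.choice("10"), end=" ", flush=True)
        time.sleep(0.1)
    print()                             # newline after each group of five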
Putting print statements on the same line
I am tring to learn python and want to know if i can do this, and how. I am trying to make binary looking code come up digit by digit, with delay. In maybe there is 15 numbers, and each repeat i would like to make it do a set of 5, with a space after. if answer == 'MAYBE': deleteall() print("GIVE ME AN ANSWER!!!") time.sleep(1) deletelastline() for x in maybe: print(random.choice("1" "0")) time.sleep(0.1) print(random.choice("1" "0")) time.sleep(0.1) print(random.choice("1" "0")) time.sleep(0.1) print(random.choice("1" "0")) time.sleep(0.1) print(random.choice("1" "0")) time.sleep(0.1) print(" ") However, it outputs this: 0 1 1 0 0 1 0 0 0 1 1 ext. How do i get them on one line?!? Thx
[]
[]
[ "by setting end parameter you can set whatever will be after it printed. it is next line command by default \"\\n\" so everytime it prints it gets to the next line\nimport time\nimport random\nmaybe = range(5)\nprint(\"GIVE ME AN ANSWER!!!\")\ntime.sleep(1)\nfor x in maybe:\n print(random.choice(\"1\" \"0\"),end=\" \")\n time.sleep(0.1)\n print(random.choice(\"1\" \"0\"),end=\" \")\n time.sleep(0.1)\n print(random.choice(\"1\" \"0\"),end=\" \")\n time.sleep(0.1)\n print(random.choice(\"1\" \"0\"),end=\" \")\n time.sleep(0.1)\n print(random.choice(\"1\" \"0\"),end=\" \")\n time.sleep(0.1)\n print(\" \")\n\noutput is:\n1 1 0 1 1 \n0 1 1 1 0 \n0 1 0 1 0 \n1 0 0 1 0 \n1 0 0 0 1 \n\n", "import random\nimport time\n\nfor x in range(10):\n print(random.randint(0,1),end=' ')\n time.sleep(0.1)\nprint()\n\n" ]
[ -1, -1 ]
[ "python", "replit" ]
stackoverflow_0074645899_python_replit.txt
Q: How can I get rows that compouse up to 90% of a sum? I have two different dataframes, one containing the Net Revenue by SKU and Supplier and another one containing the stock of SKUs in each store. I need to get an average by Supplier of the stores that contains the SKUs that compouse up to 90% the net revenue of the supplier. It's a bit complicated but I will exemplify, and I hope it can make it clear. Please, note that if 3 SKUs compose 89% of the revenue, we need to consider another one. Example: Dataframe 1 - Net Revenue Supplier SKU Net Revenue UNILEVER 1111 10000 UNILEVER 2222 50000 UNILEVER 3333 500 PEPSICO 1313 680 PEPSICO 2424 10000 PEPSICO 2323 450 Dataframe 2 - Stock Store SKU Stock 1 1111 1 1 2222 2 1 3333 1 2 1111 1 2 2222 0 2 3333 1 In this case, for UNILEVER, we need to discard SKU 3333 because its net revenue is not relevant (as 1111 and 2222 already compouse more than 90% of the total net revenue of UNILEVER). Coverage in this case will be 1.5 (we have 1111 in 2 stores and 2222 in one store: (1+2)/2). Result is something like this: Supplier Coverage UNILEVER 1.5 PEPSICO ... Please, note that the real dataset has a different number of SKUs by supplier and a huge number of suppliers (around 150), so performance doesn't need to be PRIORITY but it has to be considered. Thanks in advance, guys. A: Calculate the cumulative sum grouping by Suppler and divide by the Supplier Total Revenue. Then find each Supplier Revenue Threshold by getting the minimum Cumulative Revenue Percentage under 90%. Then you can get the list of SKUs by Supplier and calculate the coverage. import pandas as pd df = pd.DataFrame([ ['UNILEVER', '1111', 10000], ['UNILEVER', '2222', 50000], ['UNILEVER', '3333', 500], ['PEPSICO', '1313', 680], ['PEPSICO', '2424', 10000], ['PEPSICO', '2323', 450], ], columns=['Supplier', 'SKU', 'Net Revenue']) total_revenue_by_supplier = df.groupby(df['Supplier']).sum().reset_index() total_revenue_by_supplier.columns = ['Supplier', 'Total Revenue'] df = df.sort_values(['Supplier', 'Net Revenue'], ascending=[True, False]) df['cumsum'] = df.groupby(df['Supplier'])['Net Revenue'].transform(pd.Series.cumsum) df = df.merge(total_revenue_by_supplier, on='Supplier') df['cumpercentage'] = df['cumsum'] / df['Total Revenue'] min_before_threshold = df[df['cumpercentage'] >= 0.9][['Supplier', 'cumpercentage']].groupby('Supplier').min().reset_index() min_before_threshold.columns = ['Supplier', 'Revenue Threshold'] df = df.merge(min_before_threshold, on='Supplier') df = df[df['cumpercentage'] <= df['Revenue Threshold']][['Supplier', 'SKU', 'Net Revenue']] df
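A hedged sketch of the remaining step on top of the cumulative-share idea: per supplier, keep rows until 90% of revenue is covered (including the SKU that crosses the line), then average the number of stocked stores over the kept SKUs. df1/df2 stand for the question's two tables; the exact threshold rule is an assumption.

import pandas as pd

rev = df1.sort_values(["Supplier", "Net Revenue"], ascending=[True, False]).copy()
total = rev.groupby("Supplier")["Net Revenue"].transform("sum")
cum_before = rev.groupby("Supplier")["Net Revenue"].cumsum() - rev["Net Revenue"]
top = rev[cum_before / total < 0.9]          # SKUs needed to reach 90% per supplier

stores_per_sku = df2[df2["Stock"] > 0].groupby("SKU")["Store"].nunique()
coverage = (top.assign(stores=top["SKU"].map(stores_per_sku).fillna(0))
               .groupby("Supplier")["stores"].mean()
               .rename("Coverage"))
print(coverage)    # UNILEVER -> 1.5 with the sample data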
How can I get rows that compose up to 90% of a sum?
I have two different dataframes, one containing the Net Revenue by SKU and Supplier and another one containing the stock of SKUs in each store. I need to get an average by Supplier of the stores that contains the SKUs that compouse up to 90% the net revenue of the supplier. It's a bit complicated but I will exemplify, and I hope it can make it clear. Please, note that if 3 SKUs compose 89% of the revenue, we need to consider another one. Example: Dataframe 1 - Net Revenue Supplier SKU Net Revenue UNILEVER 1111 10000 UNILEVER 2222 50000 UNILEVER 3333 500 PEPSICO 1313 680 PEPSICO 2424 10000 PEPSICO 2323 450 Dataframe 2 - Stock Store SKU Stock 1 1111 1 1 2222 2 1 3333 1 2 1111 1 2 2222 0 2 3333 1 In this case, for UNILEVER, we need to discard SKU 3333 because its net revenue is not relevant (as 1111 and 2222 already compouse more than 90% of the total net revenue of UNILEVER). Coverage in this case will be 1.5 (we have 1111 in 2 stores and 2222 in one store: (1+2)/2). Result is something like this: Supplier Coverage UNILEVER 1.5 PEPSICO ... Please, note that the real dataset has a different number of SKUs by supplier and a huge number of suppliers (around 150), so performance doesn't need to be PRIORITY but it has to be considered. Thanks in advance, guys.
[ "Calculate the cumulative sum grouping by Suppler and divide by the Supplier Total Revenue.\nThen find each Supplier Revenue Threshold by getting the minimum Cumulative Revenue Percentage under 90%.\nThen you can get the list of SKUs by Supplier and calculate the coverage.\nimport pandas as pd\n\ndf = pd.DataFrame([\n ['UNILEVER', '1111', 10000], \n ['UNILEVER', '2222', 50000], \n ['UNILEVER', '3333', 500], \n ['PEPSICO', '1313', 680], \n ['PEPSICO', '2424', 10000], \n ['PEPSICO', '2323', 450], \n], columns=['Supplier', 'SKU', 'Net Revenue'])\n\ntotal_revenue_by_supplier = df.groupby(df['Supplier']).sum().reset_index()\ntotal_revenue_by_supplier.columns = ['Supplier', 'Total Revenue']\n\ndf = df.sort_values(['Supplier', 'Net Revenue'], ascending=[True, False])\n\ndf['cumsum'] = df.groupby(df['Supplier'])['Net Revenue'].transform(pd.Series.cumsum)\n\ndf = df.merge(total_revenue_by_supplier, on='Supplier')\n\ndf['cumpercentage'] = df['cumsum'] / df['Total Revenue']\n\nmin_before_threshold = df[df['cumpercentage'] >= 0.9][['Supplier', 'cumpercentage']].groupby('Supplier').min().reset_index()\nmin_before_threshold.columns = ['Supplier', 'Revenue Threshold']\n\ndf = df.merge(min_before_threshold, on='Supplier')\n\ndf = df[df['cumpercentage'] <= df['Revenue Threshold']][['Supplier', 'SKU', 'Net Revenue']]\n\ndf\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "group_by", "pandas", "python" ]
stackoverflow_0074645417_dataframe_group_by_pandas_python.txt
Q: How do I create image from binary data BSQ? I've got a problem. I'm trying create image from binary data which I got from hyperspectral camera. The file which I have is in BSQ uint16 format. From the documentation I found out that images contained in the file (.dat) have a resolution of 1024x1024 and there are 24 images in total. The whole thing is to form a kind of "cube" which I want use in the future to creat multi-layered orthomosaic. I would also like to add that I am completely new in python but I try to be up to date with everything I need. I hope that everything what I have written is clear and uderstandable. At first I tried to use Numpy liblary to creating 3D array but ended up with an arrangement of random pixels. from PIL import Image import numpy as np file=open('Sequence 1_000021.dat','rb') myarray=np.fromfile(file,dtype=np.uint16) print('Size of new array',":", len(myarray)) con_array=np.reshape(myarray,(24,1024,1024),'C') naPIL=Image.fromarray(con_array[1,:,:]) naPIL.save('naPIL.tiff') The result: enter image description here Example of image which I want to achieve (thumbnail): enter image description here A: As suspected it's just byte order, I get a sensible looking image when running the following code in a Jupyter notebook: import numpy as np from PIL import Image # open as big-endian, convert to native order, then reshape as appropriate raw = np.fromfile( './Sequence 1_000021.dat', dtype='>u2' ).astype('uint16').reshape((24, 1024, 1024)) # display inline Image.fromarray(raw[1,:,:])
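A hedged follow-up to the byte-order fix above: 16-bit bands usually look almost black if saved directly, so for a quick visual check it helps to stretch one band to 8-bit first. The band index and output filename are arbitrary choices.

import numpy as np
from PIL import Image

cube = np.fromfile("Sequence 1_000021.dat", dtype=">u2").reshape(24, 1024, 1024)
band = cube[1].astype(np.float64)
lo, hi = band.min(), band.max()
preview = ((band - lo) / max(hi - lo, 1) * 255).astype(np.uint8)
Image.fromarray(preview).save("band_1_preview.png")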
How do I create image from binary data BSQ?
I've got a problem. I'm trying create image from binary data which I got from hyperspectral camera. The file which I have is in BSQ uint16 format. From the documentation I found out that images contained in the file (.dat) have a resolution of 1024x1024 and there are 24 images in total. The whole thing is to form a kind of "cube" which I want use in the future to creat multi-layered orthomosaic. I would also like to add that I am completely new in python but I try to be up to date with everything I need. I hope that everything what I have written is clear and uderstandable. At first I tried to use Numpy liblary to creating 3D array but ended up with an arrangement of random pixels. from PIL import Image import numpy as np file=open('Sequence 1_000021.dat','rb') myarray=np.fromfile(file,dtype=np.uint16) print('Size of new array',":", len(myarray)) con_array=np.reshape(myarray,(24,1024,1024),'C') naPIL=Image.fromarray(con_array[1,:,:]) naPIL.save('naPIL.tiff') The result: enter image description here Example of image which I want to achieve (thumbnail): enter image description here
[ "As suspected it's just byte order, I get a sensible looking image when running the following code in a Jupyter notebook:\nimport numpy as np\nfrom PIL import Image\n\n# open as big-endian, convert to native order, then reshape as appropriate\nraw = np.fromfile(\n './Sequence 1_000021.dat', dtype='>u2'\n).astype('uint16').reshape((24, 1024, 1024))\n\n# display inline\nImage.fromarray(raw[1,:,:])\n\n" ]
[ 1 ]
[]
[]
[ "geospatial", "image", "numpy", "python", "python_imaging_library" ]
stackoverflow_0074642656_geospatial_image_numpy_python_python_imaging_library.txt
Q: Pandas compute features diff within group I have a dataframe with n rows for each group ID, where only one label is 1 and all the others are 0s. Example: ID, Feature_1; Feature_2; Feature_3; label 1, 10, 3, 4, 1 1, 9, 1, 2, 0 ... 2, 100, 30, 40, 1 2, 90, 10, 20, 0 I want to group by ID and for each ID group transform the features for each label=0 as a diff(Feature_1_i - Feature_1_j) where i is the row with lablel=1 within the group and j to n are the other rows in the group with label=1. Expected output ID, Feature_1; Feature_2; Feature_3; label 1, 10, 3, 4, 1 1, 10 - 9, 3- 1, 4- 2, 0 ... 2, 100, 30, 40, 1 2, 100-90, 30-10, 40-20, 0 How can I achieve this in Pandas? A: You can sort your dataframe using sort_values based on 'ID' and 'label in ascending and descending order respectively. Then you can calculate a grouped difference using diff on your columns, which would calculate the difference between the last and first row of each group (last - new) and populate the last row, leaving the first row with NaN. The last thing to do is to fill those resulted NaN (which are the first rows of each group): df_sorted = df.sort_values(by=['ID','label'],ascending=[True,False]) df_sorted.groupby('ID').diff().assign(label=np.nan).fillna(df_sorted).astype(int) prints: Feature_1; Feature_2; Feature_3; label 0 10 3 4 1 1 -1 -2 -2 0 2 100 30 40 1 3 -10 -20 -20 0
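Note that the accepted groupby().diff() approach yields row minus reference (hence -1, -2, -2 in its printed output), while the question's expected output shows reference minus row. A hedged alternative sketch that matches the expected signs and does not depend on row order: subtract each ID's label==0 rows from its single label==1 row.

import pandas as pd

features = [c for c in df.columns if c.startswith("Feature_")]   # assumed naming
ref = df.loc[df["label"] == 1].set_index("ID")[features]         # one row per ID

out = df.copy()
mask = out["label"] == 0
out.loc[mask, features] = (ref.loc[out.loc[mask, "ID"]].to_numpy()
                           - out.loc[mask, features].to_numpy())
print(out)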
Pandas compute features diff within group
I have a dataframe with n rows for each group ID, where only one label is 1 and all the others are 0s. Example: ID, Feature_1; Feature_2; Feature_3; label 1, 10, 3, 4, 1 1, 9, 1, 2, 0 ... 2, 100, 30, 40, 1 2, 90, 10, 20, 0 I want to group by ID and for each ID group transform the features for each label=0 as a diff(Feature_1_i - Feature_1_j) where i is the row with label=1 within the group and j to n are the other rows in the group with label=0. Expected output ID, Feature_1; Feature_2; Feature_3; label 1, 10, 3, 4, 1 1, 10 - 9, 3 - 1, 4 - 2, 0 ... 2, 100, 30, 40, 1 2, 100-90, 30-10, 40-20, 0 How can I achieve this in Pandas?
[ "You can sort your dataframe using sort_values based on 'ID' and 'label in ascending and descending order respectively.\nThen you can calculate a grouped difference using diff on your columns, which would calculate the difference between the last and first row of each group (last - new) and populate the last row, leaving the first row with NaN.\nThe last thing to do is to fill those resulted NaN (which are the first rows of each group):\ndf_sorted = df.sort_values(by=['ID','label'],ascending=[True,False])\ndf_sorted.groupby('ID').diff().assign(label=np.nan).fillna(df_sorted).astype(int)\n\nprints:\n Feature_1; Feature_2; Feature_3; label\n0 10 3 4 1\n1 -1 -2 -2 0\n2 100 30 40 1\n3 -10 -20 -20 0\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074631343_pandas_python.txt
Q: Scan and find the keywords in the database from the csv file, then calculate the occurrence rate of other words I need to find the presence rate/prevalence of words in a csv file separated by comma, for words next to a certain keyword on the line. import pandas as pd from elasticsearch import Elasticsearch es = Elasticsearch("http://localhost:9200") searchDB = pd.read_csv('') searchDB = searchDB["AllKeywords"].str.split(', ') searchDB = searchDB.explode() df = pd.read_csv('') // keywords to look for for i in range(len(df)): keywordToSearch = df.loc[i, "H"] res = es.search(index=searchDB["AllKeywords"], body={"from":0, "size":0, "query":{"match":{"sentence": df.loc[i, "H"]}}}) I am getting an error on the last lines I'm using Elasticsearch. Can you help me? Traceback (most recent call last): File "/Users//PycharmProjects/DataImp/venv/lib/python3.9/site-packages/pandas/core/indexes/base.py", line 3629, in get_loc return self._engine.get_loc(casted_key) File "pandas/_libs/index.pyx", line 136, in pandas._libs.index.IndexEngine.get_loc File "pandas/_libs/index.pyx", line 144, in pandas._libs.index.IndexEngine.get_loc File "pandas/_libs/index_class_helper.pxi", line 41, in pandas._libs.index.Int64Engine._check_type KeyError: 'AllKeywords' A: Seems like your error is at index=searchDB["AllKeywords"] Clean up your variables import pandas as pd from elasticsearch import Elasticsearch es = Elasticsearch("http://localhost:9200") df = pd.read_csv('') keywords = df["AllKeywords"].str.split(', ') exploded_keywords = searchDB.explode() for i in range(len(df)): keywordToSearch = df.loc[i, "H"] res = es.search(index=df["AllKeywords"], body={"from":0, "size":0, "query":{"match":{"sentence": keywordToSearch}}})
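Two hedged observations: the KeyError most likely comes from searchDB having already been reassigned to a Series by the time searchDB["AllKeywords"] is evaluated, and es.search expects an index name string rather than a column of data. If the goal is only the rate at which other words appear on the same row as a given keyword, a plain pandas/Counter sketch (file and column names assumed) avoids Elasticsearch entirely:

import pandas as pd
from collections import Counter

rows = pd.read_csv("data.csv")["AllKeywords"].str.split(", ")

def co_occurrence_rate(keyword):
    # rows containing the keyword, as sets so repeated words count once per row
    matching = [set(words) for words in rows if keyword in words]
    if not matching:
        return {}
    counts = Counter(w for words in matching for w in words if w != keyword)
    return {word: n / len(matching) for word, n in counts.items()}

# rates = co_occurrence_rate("some keyword")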
Scan and find the keywords in the database from the csv file, then calculate the occurrence rate of other words
I need to find the presence rate/prevalence of words in a csv file separated by comma, for words next to a certain keyword on the line. import pandas as pd from elasticsearch import Elasticsearch es = Elasticsearch("http://localhost:9200") searchDB = pd.read_csv('') searchDB = searchDB["AllKeywords"].str.split(', ') searchDB = searchDB.explode() df = pd.read_csv('') // keywords to look for for i in range(len(df)): keywordToSearch = df.loc[i, "H"] res = es.search(index=searchDB["AllKeywords"], body={"from":0, "size":0, "query":{"match":{"sentence": df.loc[i, "H"]}}}) I am getting an error on the last lines I'm using Elasticsearch. Can you help me? Traceback (most recent call last): File "/Users//PycharmProjects/DataImp/venv/lib/python3.9/site-packages/pandas/core/indexes/base.py", line 3629, in get_loc return self._engine.get_loc(casted_key) File "pandas/_libs/index.pyx", line 136, in pandas._libs.index.IndexEngine.get_loc File "pandas/_libs/index.pyx", line 144, in pandas._libs.index.IndexEngine.get_loc File "pandas/_libs/index_class_helper.pxi", line 41, in pandas._libs.index.Int64Engine._check_type KeyError: 'AllKeywords'
[ "Seems like your error is at index=searchDB[\"AllKeywords\"]\nClean up your variables\nimport pandas as pd\nfrom elasticsearch import Elasticsearch\n\nes = Elasticsearch(\"http://localhost:9200\")\n\ndf = pd.read_csv('')\nkeywords = df[\"AllKeywords\"].str.split(', ')\nexploded_keywords = searchDB.explode()\n\nfor i in range(len(df)):\n keywordToSearch = df.loc[i, \"H\"]\n res = es.search(index=df[\"AllKeywords\"], body={\"from\":0, \"size\":0, \"query\":{\"match\":{\"sentence\": keywordToSearch}}})\n\n" ]
[ 0 ]
[]
[]
[ "elasticsearch", "pandas", "python" ]
stackoverflow_0074645918_elasticsearch_pandas_python.txt
Q: How can I use thresholding to improve image quality after rotating an image with skimage.transform? I have the following image: Initial Image I am using the following code the rotate the image: from skimage.transform import rotate image = cv2.imread('122.png') rotated = rotate(image,34,cval=1,resize = True) Once I execute this code, I receive the following image: Rotated Image To eliminate the blur on the image, I use the following code to set a threshold. Anything that is not white is turned to black (so the gray spots turn black). The code for that is as follows: ret, thresh_hold = cv2.threshold(rotated, 0, 100, cv2.THRESH_BINARY) plt.imshow(thresh_hold) Instead of getting a nice clear picture, I receive the following: Choppy Image Does anyone know what I can do to improve the image quality, or adjust the threshold to create a clearer image? I attempted to adjust the threshold to different values, but this changed the image to all black or all white. A: To improve the image quality after rotating the image, you can try using different interpolation methods when rotating the image. The rotate function in skimage.transform has a mode parameter that allows you to specify the interpolation method to use. The default value is constant, which means that it uses a constant value (specified by the cval parameter) for pixels outside the boundaries of the input image. You can try using a different interpolation method, such as bilinear or bicubic, which can provide better results in some cases. For example, the following code uses the bilinear interpolation method: from skimage.transform import rotate image = cv2.imread('122.png') rotated = rotate(image, 34, cval=1, resize=True, mode='bilinear') In addition to using a different interpolation method, you can also try using a different threshold value when applying thresholding to the rotated image. The cv2.threshold function has a threshold parameter that specifies the threshold value to use. By default, it is set to 0, which means that all pixels with a value less than or equal to 0 will be set to 0 (black) and all other pixels will be set to the maximum value specified by the maxval parameter (100 in your case). ret, thresh_hold = cv2.threshold(rotated, 127, 255, cv2.THRESH_BINARY) plt.imshow(thresh_hold) Alternatively, you can use the cv2.THRESH_BINARY_INV flag instead of cv2.THRESH_BINARY to invert the thresholding result, so that pixels with a value less than or equal to 127 are set to 255 (white) and all other pixels are set to 0 (black). The following code shows how to do this: ret, thresh_hold = cv2.threshold(rotated, 127, 255, cv2.THRESH_BINARY_INV) plt.imshow(thresh_hold) A: One way to approach that is to simply antialias the image in Python/OpenCV. To do that one simply converts to grayscale. Then blurs the image, then applies a stretch of the image. Adjust the blur sigma to change the antialiasing. Input: import cv2 import numpy as np import skimage.exposure # load image img = cv2.imread('122.png') # convert to gray gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # blur threshold image blur = cv2.GaussianBlur(gray, (0,0), sigmaX=2, sigmaY=2, borderType = cv2.BORDER_DEFAULT) # stretch so that 255 -> 255 and 127.5 -> 0 result = skimage.exposure.rescale_intensity(blur, in_range=(127.5,255), out_range=(0,255)).astype(np.uint8) # save output cv2.imwrite('122_antialiased.png', result) # Display various images to see the steps cv2.imshow('result', result) cv2.waitKey(0) cv2.destroyAllWindows() Result:
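A hedged alternative worth noting: in skimage.transform.rotate the interpolation is selected by the order parameter (mode controls how pixels outside the image are filled), so for a purely black-and-white input, rotating with nearest-neighbour interpolation (order=0) avoids creating grey pixels in the first place and no threshold tuning is needed afterwards.

import cv2
import numpy as np
from skimage.transform import rotate

image = cv2.imread("122.png", cv2.IMREAD_GRAYSCALE)
# order=0 -> nearest neighbour, cval=255 fills the new corners with white
rotated = rotate(image, 34, resize=True, order=0, cval=255, preserve_range=True)
cv2.imwrite("122_rotated.png", rotated.astype(np.uint8))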
How can I use thresholding to improve image quality after rotating an image with skimage.transform?
I have the following image: Initial Image I am using the following code the rotate the image: from skimage.transform import rotate image = cv2.imread('122.png') rotated = rotate(image,34,cval=1,resize = True) Once I execute this code, I receive the following image: Rotated Image To eliminate the blur on the image, I use the following code to set a threshold. Anything that is not white is turned to black (so the gray spots turn black). The code for that is as follows: ret, thresh_hold = cv2.threshold(rotated, 0, 100, cv2.THRESH_BINARY) plt.imshow(thresh_hold) Instead of getting a nice clear picture, I receive the following: Choppy Image Does anyone know what I can do to improve the image quality, or adjust the threshold to create a clearer image? I attempted to adjust the threshold to different values, but this changed the image to all black or all white.
[ "To improve the image quality after rotating the image, you can try using different interpolation methods when rotating the image. The rotate function in skimage.transform has a mode parameter that allows you to specify the interpolation method to use. The default value is constant, which means that it uses a constant value (specified by the cval parameter) for pixels outside the boundaries of the input image.\nYou can try using a different interpolation method, such as bilinear or bicubic, which can provide better results in some cases. For example, the following code uses the bilinear interpolation method:\nfrom skimage.transform import rotate\nimage = cv2.imread('122.png')\nrotated = rotate(image, 34, cval=1, resize=True, mode='bilinear')\n\n\nIn addition to using a different interpolation method, you can also try using a different threshold value when applying thresholding to the rotated image. The cv2.threshold function has a threshold parameter that specifies the threshold value to use. By default, it is set to 0, which means that all pixels with a value less than or equal to 0 will be set to 0 (black) and all other pixels will be set to the maximum value specified by the maxval parameter (100 in your case).\nret, thresh_hold = cv2.threshold(rotated, 127, 255, cv2.THRESH_BINARY)\nplt.imshow(thresh_hold)\n\nAlternatively, you can use the cv2.THRESH_BINARY_INV flag instead of cv2.THRESH_BINARY to invert the thresholding result, so that pixels with a value less than or equal to 127 are set to 255 (white) and all other pixels are set to 0 (black). The following code shows how to do this:\nret, thresh_hold = cv2.threshold(rotated, 127, 255, cv2.THRESH_BINARY_INV)\nplt.imshow(thresh_hold)\n\n", "One way to approach that is to simply antialias the image in Python/OpenCV.\nTo do that one simply converts to grayscale. Then blurs the image, then applies a stretch of the image.\nAdjust the blur sigma to change the antialiasing.\nInput:\n\nimport cv2\nimport numpy as np\nimport skimage.exposure\n\n# load image\nimg = cv2.imread('122.png')\n\n# convert to gray\ngray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n\n# blur threshold image\nblur = cv2.GaussianBlur(gray, (0,0), sigmaX=2, sigmaY=2, borderType = cv2.BORDER_DEFAULT)\n\n# stretch so that 255 -> 255 and 127.5 -> 0\nresult = skimage.exposure.rescale_intensity(blur, in_range=(127.5,255), out_range=(0,255)).astype(np.uint8)\n\n# save output\ncv2.imwrite('122_antialiased.png', result)\n\n# Display various images to see the steps\ncv2.imshow('result', result)\n\ncv2.waitKey(0)\ncv2.destroyAllWindows()\n\nResult:\n\n" ]
[ 0, 0 ]
[]
[]
[ "mnist", "opencv", "python" ]
stackoverflow_0074645772_mnist_opencv_python.txt
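A small addendum to the rotate/threshold question above: skimage.transform.rotate converts the image to floats in the 0-1 range unless preserve_range=True is passed, which is one common reason a fixed integer threshold applied afterwards turns everything black or white. The sketch below is one hedged way to combine the two steps; the file name comes from the question, interpolation is controlled by the order parameter (1 is bi-linear), and the 127 midpoint threshold is an assumption rather than the only workable value.

import cv2
import numpy as np
from skimage.transform import rotate

# read as single-channel so cv2.threshold gets a grayscale image
img = cv2.imread('122.png', cv2.IMREAD_GRAYSCALE)

# preserve_range=True keeps the 0-255 scale; order=1 is bi-linear interpolation,
# cval=255 fills the corners exposed by resize=True with white
rot = rotate(img, 34, resize=True, order=1, mode='constant', cval=255, preserve_range=True)
rot = rot.astype(np.uint8)

# midpoint threshold: pixels above 127 stay white, the rest go to black
_, clean = cv2.threshold(rot, 127, 255, cv2.THRESH_BINARY)
cv2.imwrite('122_clean.png', clean)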
Q: named parameter passing multiple times I was wondering if there is a best practice or a convention for this kind of "chained" named parameter. I am trying to pass the first variable d to bar through foo. It is kind of awkward to do it this way and I believe there should be a smarter way but after looking through tons of documents today still no clue. def bar(a=0, b=0, c=0, d=0): print(a,b,c,d) def foo(b=0,d=0): bar(d=d) foo(d=1) #(0,0,0,1) A: I don't think there is a better way of doing this, but if the b parameter is not going to be used in foo() then I wouldn't have it in there. def bar(a=0, b=0, c=0, d=0): print(a,b,c,d) def foo(d=0): bar(d=d) foo(1) #(0,0,0,1) A: Any answer here is subjective but I'm not going to submit to close the question because I think this is valuable for anyone learning Python and professional practice. With that said, I think the way you have it is correct, if b in foo is assumed to be used in a line of code within the function after bar is called. If b isn't used, you should remove it from the parameter list. But let me make another case while we're at subjectivity: specify arguments upon a function call by use of keyword arguments as much as possible. Why? Because when you're looking at complex code in a professional environment, it becomes much more clear what's being passed into a function and why. And in Python, explicit is preferred over implicit. You will write your code once. You might correct it a couple of times. Someone will read your code to handle a production issue a thousand times. Take for example the below code (which is a financial model): def black_scholes(spot, strike, volatility, risk_free_rate, expiry=.25): # do calculation and return result # .... return result option_value = black_scholes(44, 48, .08, .54, .75) Put your hand over the parameter list in the function definition. Can you tell what each of those numbers represents? Sure, you can find the definition in the same file and compare the positional arguments, but what if black_scholes() is in another module? Now it's a tiny bit tougher. What if we expanded on this and added a higher-leveled wrapper around black_scholes() and that's what we had to debug to get to this? Now, let me show you the function call with keyword arguments: result = black_scholes(spot=44, strike=48, volatility=.08, risk_free_rate=.54, expiry=.75) This now becomes much more clear, and we can anticipate what the expected result should be. We also just saved a lot of time on reading the code and have a result to compare with (given the expected) from stepping over the function in the debugger instead of having to go into it and reading it line by line. A: It's not specific to Python, but what is asked looks much like a classical code smell, which are mostly language agnostic. If you have the same fields appearing frequently in function signatures, you should probably make them an object. https://refactoring.guru/fr/smells/data-clumps
named parameter passing multiple times
I was wondering if there is a best practice or a convention for this kind of "chained" named parameter. I am trying to pass the first variable d to bar through foo. It is kind of awkward to do it this way and I believe there should be a smarter way, but after looking through tons of documentation today I still have no clue. def bar(a=0, b=0, c=0, d=0): print(a,b,c,d) def foo(b=0,d=0): bar(d=d) foo(d=1) #(0,0,0,1)
[ "I don't think there is a better way of doing this, but if the b parameter is not going to be used in foo() then I wouldn't have it in there.\ndef bar(a=0, b=0, c=0, d=0):\n print(a,b,c,d)\n\ndef foo(d=0):\n bar(d=d)\n\nfoo(1)\n#(0,0,0,1)\n\n", "Any answer here is subjective but I'm not going to submit to close the question because I think this is valuable for anyone learning Python and professional practice.\nWith that said, I think the way you have it is correct, if b in foo is assumed to be used in a line of code within the function after bar is called. If b isn't used, you should remove it from the parameter list.\nBut let me make another case while we're at subjectivity: specify arguments upon a function call by use of keyword arguments as much as possible.\nWhy? \nBecause when you're looking at complex code in a professional environment, it becomes much more clear what's being passed into a function and why. And in Python, explicit is preferred over implicit.\nYou will write your code once. You might correct it a couple of times. Someone will read your code to handle a production issue a thousand times.\nTake for example the below code (which is a financial model):\ndef black_scholes(spot, strike, volatility, risk_free_rate, expiry=.25):\n # do calculation and return result\n # ....\n return result\n\noption_value = black_scholes(44, 48, .08, .54, .75)\n\nPut your hand over the parameter list in the function definition. Can you tell what each of those numbers represents? \nSure, you can find the definition in the same file and compare the positional arguments, but what if black_scholes() is in another module? Now it's a tiny bit tougher. What if we expanded on this and added a higher-leveled wrapper around black_scholes() and that's what we had to debug to get to this?\nNow, let me show you the function call with keyword arguments:\nresult = black_scholes(spot=44, strike=48, volatility=.08, risk_free_rate=.54, expiry=.75)\n\nThis now becomes much more clear, and we can anticipate what the expected result should be. We also just saved a lot of time on reading the code and have a result to compare with (given the expected) from stepping over the function in the debugger instead of having to go into it and reading it line by line.\n", "It's not specific to Python, but what is asked looks much like a classical code smell, which are mostly language agnostic.\nIf you have the same fields appearing frequently in function signatures, you should probably make them an object.\nhttps://refactoring.guru/fr/smells/data-clumps\n" ]
[ 0, 0, 0 ]
[]
[]
[ "named_parameters", "python" ]
stackoverflow_0042404746_named_parameters_python.txt
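One more option for the keyword-forwarding question above, sketched with the same bar/foo names from the post: if foo only exists to relay keyword arguments, it can accept **kwargs and pass them straight through, so bar's defaults are not repeated. The trade-off is that foo's signature no longer documents which keywords it accepts.

def bar(a=0, b=0, c=0, d=0):
    print(a, b, c, d)

# relay whatever keyword arguments the caller supplies
def foo(**kwargs):
    bar(**kwargs)

foo(d=1)        # 0 0 0 1
foo(b=2, d=3)   # 0 2 0 3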
Q: How can I use PyPDF2 to update variable text to a form field? PyPDF2 update page form field values function working fine with hardcoded strings but nothing shows if using variable text. I have tried using string variables like this writer.update_page_form_field_values( #writer.pages[0], {"Piece Weight": variableString} doesn't work writer.pages[0], {"Piece Weight": "hardcoded string"}#works ) as well as like this writer.update_page_form_field_values( #writer.pages[0], {"Piece Weight": f"{variableString}"} doesn't work writer.pages[0], {"Piece Weight": "hardcoded string"}#works ) I am expecting the final output file to show the text I store into a string variable within the field named "piece weight" but what actually happens is absolutely no data is displayed on in the field when a variable is applied to it. UPDATE- found that my issue was not that it refuses to show variable data, rather it was a matter of, my variable data not being updated after it is initialized. I am creating it at one point variablestring = "" and then later in the code i am attempting to change it within a function def onStart(): variablestring = variableEntry.get() This is an issue of scope as the variablestring within the function and outside the function are seen as separate memory spaces. there in lies an issue however, I can not pass this function parameters as it needs to be automatically called by a tkinter.Button(form, text="start", command=onStart) A: Final answer to my issue was to use the global tag to call the variable outside of the function into it. variable = "te" def onStart(): global variable variable = variableEntry.get() when reading about globals for python i misunderstood it as a declarative global and not a call global
How can I use PyPDF2 to update variable text to a form field?
The PyPDF2 update_page_form_field_values function works fine with hardcoded strings, but nothing shows if using variable text. I have tried using string variables like this writer.update_page_form_field_values( #writer.pages[0], {"Piece Weight": variableString} doesn't work writer.pages[0], {"Piece Weight": "hardcoded string"}#works ) as well as like this writer.update_page_form_field_values( #writer.pages[0], {"Piece Weight": f"{variableString}"} doesn't work writer.pages[0], {"Piece Weight": "hardcoded string"}#works ) I am expecting the final output file to show the text I store into a string variable within the field named "piece weight", but what actually happens is that absolutely no data is displayed in the field when a variable is applied to it. UPDATE: I found that my issue was not that it refuses to show variable data; rather, it was a matter of my variable data not being updated after it is initialized. I am creating it at one point variablestring = "" and then later in the code I am attempting to change it within a function def onStart(): variablestring = variableEntry.get() This is an issue of scope, as the variablestring within the function and outside the function are seen as separate memory spaces. Therein lies an issue, however: I cannot pass this function parameters, as it needs to be automatically called by a tkinter.Button(form, text="start", command=onStart)
[ "Final answer to my issue was to use the global tag to call the variable outside of the function into it.\nvariable = \"te\"\ndef onStart():\n global variable\n variable = variableEntry.get()\n\nwhen reading about globals for python i misunderstood it as a declarative global and not a call global\n" ]
[ 0 ]
[]
[]
[ "pypdf2", "python" ]
stackoverflow_0074645364_pypdf2_python.txt
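As a follow-up to the Tkinter scope issue above: besides declaring the variable global, the widget can be bound into the callback with a lambda, so the button still calls a zero-argument command while the function receives the Entry it should read. The widget and function names below are illustrative, not taken from the original code.

import tkinter as tk

def on_start(entry):
    value = entry.get()   # read the field at click time
    print(value)

root = tk.Tk()
variable_entry = tk.Entry(root)
variable_entry.pack()

# the lambda closes over variable_entry, so no module-level global is needed
tk.Button(root, text="start", command=lambda: on_start(variable_entry)).pack()

root.mainloop()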
Q: Getting a count of objects in a queryset in Django How can I add a field for the count of objects in a database. I have the following models: class Item(models.Model): name = models.CharField() class Contest(models.Model); name = models.CharField() class Votes(models.Model): user = models.ForeignKey(User) item = models.ForeignKey(Item) contest = models.ForeignKey(Contest) comment = models.TextField() To find the votes for contestA I am using the following query in my view current_vote = Item.objects.filter(votes__contest=contestA) This returns a queryset with all of the votes individually but I want to get the count votes for each item, anyone know how I can do that? thanks A: To get the number of votes for a specific item, you would use: vote_count = Item.objects.filter(votes__contest=contestA).count() If you wanted a break down of the distribution of votes in a particular contest, I would do something like the following: contest = Contest.objects.get(pk=contest_id) votes = contest.votes_set.select_related() vote_counts = {} for vote in votes: if not vote_counts.has_key(vote.item.id): vote_counts[vote.item.id] = { 'item': vote.item, 'count': 0 } vote_counts[vote.item.id]['count'] += 1 This will create dictionary that maps items to number of votes. Not the only way to do this, but it's pretty light on database hits, so will run pretty quickly. A: Another way of doing this would be using Aggregation. You should be able to achieve a similar result using a single query. Such as this: from django.db.models import Count Item.objects.values("contest").annotate(Count("id")) I did not test this specific query, but this should output a count of the items for each value in contests as a dictionary. A: Use related name to count votes for a specific contest class Item(models.Model): name = models.CharField() class Contest(models.Model); name = models.CharField() class Votes(models.Model): user = models.ForeignKey(User) item = models.ForeignKey(Item) contest = models.ForeignKey(Contest, related_name="contest_votes") comment = models.TextField() >>> comments = Contest.objects.get(id=contest_id).contest_votes.count() A: You can use len() to get the count of contestA's votes: current_vote = len(Item.objects.filter(votes__contest=contestA))
Getting a count of objects in a queryset in Django
How can I add a field for the count of objects in a database. I have the following models: class Item(models.Model): name = models.CharField() class Contest(models.Model); name = models.CharField() class Votes(models.Model): user = models.ForeignKey(User) item = models.ForeignKey(Item) contest = models.ForeignKey(Contest) comment = models.TextField() To find the votes for contestA I am using the following query in my view current_vote = Item.objects.filter(votes__contest=contestA) This returns a queryset with all of the votes individually but I want to get the count votes for each item, anyone know how I can do that? thanks
[ "To get the number of votes for a specific item, you would use:\nvote_count = Item.objects.filter(votes__contest=contestA).count()\n\nIf you wanted a break down of the distribution of votes in a particular contest, I would do something like the following:\ncontest = Contest.objects.get(pk=contest_id)\nvotes = contest.votes_set.select_related()\n\nvote_counts = {}\n\nfor vote in votes:\n if not vote_counts.has_key(vote.item.id):\n vote_counts[vote.item.id] = {\n 'item': vote.item,\n 'count': 0\n }\n\n vote_counts[vote.item.id]['count'] += 1\n\nThis will create dictionary that maps items to number of votes. Not the only way to do this, but it's pretty light on database hits, so will run pretty quickly.\n", "Another way of doing this would be using Aggregation. You should be able to achieve a similar result using a single query. Such as this:\nfrom django.db.models import Count\n\nItem.objects.values(\"contest\").annotate(Count(\"id\"))\n\nI did not test this specific query, but this should output a count of the items for each value in contests as a dictionary.\n", "Use related name to count votes for a specific contest\nclass Item(models.Model):\n name = models.CharField()\n\nclass Contest(models.Model);\n name = models.CharField()\n\nclass Votes(models.Model):\n user = models.ForeignKey(User)\n item = models.ForeignKey(Item)\n contest = models.ForeignKey(Contest, related_name=\"contest_votes\")\n comment = models.TextField()\n\n>>> comments = Contest.objects.get(id=contest_id).contest_votes.count()\n\n", "You can use len() to get the count of contestA's votes:\ncurrent_vote = len(Item.objects.filter(votes__contest=contestA))\n\n" ]
[ 181, 23, 1, 0 ]
[]
[]
[ "count", "django", "django_queryset", "python", "python_3.x" ]
stackoverflow_0005439901_count_django_django_queryset_python_python_3.x.txt
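A compact sketch of the aggregation route for the vote-count question above, assuming the Item/Votes/Contest models and the contestA object from the post; because the filter() comes before the annotate(), the Count only covers votes belonging to that contest. This has to run inside a configured Django project, so no imports for the models themselves are shown.

from django.db.models import Count

# per-item vote counts for a single contest, in one query
items = (
    Item.objects
    .filter(votes__contest=contestA)
    .annotate(vote_count=Count("votes"))
)

for item in items:
    print(item.name, item.vote_count)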
Q: Testing django mail and attachment return empty I'm trying to test mails with attachment, I'm attaching the files something like this: # snippet of send_pdf_mail mail = EmailMessage( subject=subject, body=message, from_email=from_email, to=recipient_list, ) dynamic_template_data.update({'subject': subject}) mail.content_subtype = 'html' mail.dynamic_template_data = dynamic_template_data mail.template_id = dynamic_template_id if attachment: attachment.open() mail.attach(basename(attachment.name), attachment.read(), guess_type(attachment.name)[0]) attachment.close() return mail.send(fail_silently=False) then my test is something like this: f = open('tests/test.pdf', 'rb') user.pdf.save('test.pdf', File(f)) f.close() send_pdf_mail(user) self.assertEqual(len(mail.outbox), 1) self.assertEqual(mail.outbox[0].to[0], user.email) But when I try to check if there are attachment via: print(mail.outbox[0].attachments) It returns an empty list so I'm not sure why but I tested the code and I can confirm that this indeed includes an attachment when sending an e-mail. A: I recommend to use EmailMultiAlternatives instead of EmailMessage In this case you can attack file without any problems email_message = EmailMultiAlternatives( subject="subject", to="to_email", ) html_email: str = loader.render_to_string(template_name, context) email_message.attach_alternative(html_email, 'text/html') if file_path: email_message.attach_file(file_path, mimetype="file mimetype") email_message.send() So you need to create EmailMultiAlternatives instance and via attach_file() method add a path to file. Also need to provide mimetype of the file. For example for pdf file will be mimetype="application/pdf"
Testing django mail and attachment return empty
I'm trying to test mails with attachment, I'm attaching the files something like this: # snippet of send_pdf_mail mail = EmailMessage( subject=subject, body=message, from_email=from_email, to=recipient_list, ) dynamic_template_data.update({'subject': subject}) mail.content_subtype = 'html' mail.dynamic_template_data = dynamic_template_data mail.template_id = dynamic_template_id if attachment: attachment.open() mail.attach(basename(attachment.name), attachment.read(), guess_type(attachment.name)[0]) attachment.close() return mail.send(fail_silently=False) then my test is something like this: f = open('tests/test.pdf', 'rb') user.pdf.save('test.pdf', File(f)) f.close() send_pdf_mail(user) self.assertEqual(len(mail.outbox), 1) self.assertEqual(mail.outbox[0].to[0], user.email) But when I try to check if there are attachment via: print(mail.outbox[0].attachments) It returns an empty list so I'm not sure why but I tested the code and I can confirm that this indeed includes an attachment when sending an e-mail.
[ "I recommend to use EmailMultiAlternatives instead of EmailMessage\nIn this case you can attack file without any problems\nemail_message = EmailMultiAlternatives(\n subject=\"subject\",\n to=\"to_email\",\n)\nhtml_email: str = loader.render_to_string(template_name, context)\nemail_message.attach_alternative(html_email, 'text/html')\nif file_path:\n email_message.attach_file(file_path, mimetype=\"file mimetype\")\nemail_message.send()\n\nSo you need to create EmailMultiAlternatives instance and via attach_file() method add a path to file.\nAlso need to provide mimetype of the file. For example for pdf file will be mimetype=\"application/pdf\"\n" ]
[ 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0065038261_django_python.txt
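For the outbox question above, a minimal self-contained test can help separate the attach() mechanics from the SendGrid-specific fields (dynamic_template_data, template_id): under Django's test runner the locmem backend records each sent message, and every attachment added with attach() shows up as a (filename, content, mimetype) tuple. The PDF bytes below are fake and used only for illustration.

from django.core import mail
from django.core.mail import EmailMessage
from django.test import TestCase

class AttachmentMailTest(TestCase):
    def test_pdf_is_attached(self):
        msg = EmailMessage(subject="s", body="b",
                           from_email="from@example.com", to=["to@example.com"])
        msg.attach("test.pdf", b"%PDF-1.4 fake bytes", "application/pdf")
        msg.send()

        self.assertEqual(len(mail.outbox), 1)
        # attachments are stored as (filename, content, mimetype) tuples
        name, content, mimetype = mail.outbox[0].attachments[0]
        self.assertEqual(name, "test.pdf")
        self.assertEqual(mimetype, "application/pdf")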
Q: Mac M1 python packages error after upgrade to python3 I am writing a project in NativeScript and I received the following error the last few days when I tried the commands: ns run ios or ns doctor. Couldn't retrieve installed python packages. The Python 'six' package not found. I tried python and pip upgrade and also the command pip install six. Nothing of them fixed the problem. I believe that is not a NativeScript issue, is about the configuration of the python packages in my machine. I mention that I am using a MacBook with M1 chip and it is running the 12.5 OS version. I will appreciate any suggestions on this situation. A: Lastly, I found the solution. It was about the python folder into the path /usr/local/bin/python You could check it by the following command: where python In my case this folder is missing, perhaps I deleted it after the upgrade of the python3. That was a mistake both folders should exist on this path! If you type: where python you should receive: /usr/local/bin/python If you type: where python3 you should receive: /usr/local/bin/python3 In order to fix the error, I installed python again by using the brew install pyenv this suggestion helps me to install it properly. In the end, in order to eliminate all errors I installed the Python six package by using the command: pip install --ignore-installed six A: Try doing it with the python virtual environment. Following are the steps. Create a virtual environment. Activate the virtual environment. Run the pip install command with the virtual environment active. Implement as follows: use correct version of Python when creating VENV python3 -m venv venv activate on Unix or MacOS source venv/bin/activate activate on Windows (cmd.exe) venv\Scripts\activate.bat activate on Windows (PowerShell) 'venv\Scripts\Activate.ps1' install the required package in the virtual environment python3 -m pip install --upgrade pip python3 -m pip install six Note: This works only when you are in the virtual environment.
Mac M1 python packages error after upgrade to python3
I am writing a project in NativeScript and I received the following error the last few days when I tried the commands: ns run ios or ns doctor. Couldn't retrieve installed python packages. The Python 'six' package not found. I tried python and pip upgrade and also the command pip install six. None of them fixed the problem. I believe this is not a NativeScript issue; it is about the configuration of the Python packages on my machine. I should mention that I am using a MacBook with an M1 chip running the 12.5 OS version. I would appreciate any suggestions on this situation.
[ "Lastly, I found the solution. It was about the python folder into the path /usr/local/bin/python\nYou could check it by the following command: where python\nIn my case this folder is missing, perhaps I deleted it after the upgrade of the python3.\nThat was a mistake both folders should exist on this path!\nIf you type: where python you should receive: /usr/local/bin/python\nIf you type: where python3 you should receive: /usr/local/bin/python3\nIn order to fix the error, I installed python again by using the brew install pyenv\nthis suggestion helps me to install it properly.\nIn the end, in order to eliminate all errors I installed the Python six package by using the command:\npip install --ignore-installed six\n", "Try doing it with the python virtual environment. Following are the steps.\n\nCreate a virtual environment.\nActivate the virtual environment.\nRun the pip install command with the virtual environment active.\n\nImplement as follows:\n\nuse correct version of Python when creating VENV\n\npython3 -m venv venv\n\nactivate on Unix or MacOS\n\nsource venv/bin/activate\n\nactivate on Windows (cmd.exe)\n\nvenv\\Scripts\\activate.bat\n\nactivate on Windows (PowerShell)\n\n'venv\\Scripts\\Activate.ps1'\n\ninstall the required package in the virtual environment\n\npython3 -m pip install --upgrade pip\npython3 -m pip install six\nNote: This works only when you are in the virtual environment.\n" ]
[ 0, 0 ]
[]
[]
[ "apple_m1", "homebrew", "nativescript", "python" ]
stackoverflow_0073959594_apple_m1_homebrew_nativescript_python.txt
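One quick diagnostic for "six not found" reports like the one above: the package is often installed for a different interpreter than the one the failing tool actually invokes. Running the snippet below with the same python or python3 command the tool uses shows which binary is active and whether six is importable for it; nothing here is NativeScript-specific.

import importlib.util
import sys

print("interpreter:", sys.executable)
print("version:", sys.version.split()[0])

# locate six for this specific interpreter
spec = importlib.util.find_spec("six")
print("six:", spec.origin if spec else "not installed for this interpreter")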
Q: Nginx is throwing an 403 Forbidden on Static Files I have a django app, python 2.7 with gunicorn and nginx. Nginx is throwing a 403 Forbidden Error, if I try to view anything in my static folder @: /home/ubuntu/virtualenv/myapp/myapp/homelaunch/static nginx config(/etc/nginx/sites-enabled/myapp) contains: server { listen 80; server_name *.myapp.com; access_log /home/ubuntu/virtualenv/myapp/error/access.log; error_log /home/ubuntu/virtualenv/myapp/error/error.log warn; connection_pool_size 2048; fastcgi_buffer_size 4K; fastcgi_buffers 64 4k; root /home/ubuntu/virtualenv/myapp/myapp/homelaunch/; location /static/ { alias /home/ubuntu/virtualenv/myapp/myapp/homelaunch/static/; } location / { proxy_pass http://127.0.0.1:8001; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"'; } } error.log contains: 2013/11/24 23:00:16 [error] 18243#0: *277 open() "/home/ubuntu/virtualenv/myapp/myapp/homelaunch/static/img/templated/home/img.png" failed (13: Permission denied), client: xx.xx.xxx.xxx, server: *.myapp.com, request: "GET /static/img/templated/home/img2.png HTTP/1.1", host: "myapp.com", referrer: "http://myapp.com/" access.log contains xx.xx.xx.xxx - - [24/Nov/2013:23:02:02 +0000] "GET /static/img/templated/base/animg.png HTTP/1.1" 403 141 "http://myapp.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:25.0) Gecko/20100101 Firefox/25.0" xx.xx.xx.xxx - - [24/Nov/2013:23:02:07 +0000] "-" 400 0 "-" "-" I tried just viewing say a .css file in /static/ and it throws an error like this in source: <html> <head><title>403 Forbidden</title></head> <body bgcolor="white"> <center><h1>403 Forbidden</h1></center> <hr><center>nginx/1.1.19</center> </body> </html> A: It appears the user nginx is running as (nginx?) is missing privileges to read the local file /home/ubuntu/virtualenv/myapp/myapp/homelaunch/static/img/templated/home/img.png. You probably wanna check file permissions as well as permissions on the directories in the hierarchy. A: MacOs El Capitan: At the top of nginx.conf write user username group_name My user name is Kamil so i write: user Kamil staff; (word 'staff' is very important in macOS). This do the trick. After that you don't need to change any permission in your project folder and files. A: The minimum fix that worked for me is: sudo chmod -R 664 /home/ubuntu/virtualenv/myapp/myapp/homelaunch/static/ sudo chmod -R a+X /home/ubuntu/virtualenv/myapp/myapp/homelaunch/static/ (BTW, in my case the static folder is called collected_static) A: It seems the web server user doesn't have read permissions to the static files. You can solve this in 2 ways: (easiest, safer) run the nginx as you app user instead of default nginx user. To do this, add the following in nginx.conf user your_app_user Replace your_app_user with appropriate unix username for your app. In this case the your_app_user already has necessary permissions to the static content. Another way would be to to grant permissions for the web server user to the static dir. A: Try specifying a user at the top of your nginx.conf, above the server section. user www-data; A: The best solution in that case would be to add www-data to username group: gpasswd -a www-data username For your changes to work, restart nginx nginx -s reload A: I had the same issue no long ago. It might be a combination of factors. I found how to fix 403 access denied by replacing the user in the nginx.conf file. 
I deployed my website on an ubuntu server using Digital Ocean. I created a new user on my new ubuntu server and give admin priviliges adduser newuser usermod -aG sudo newuser I updated my new server and installed few packages sudo apt update sudo apt install python3-pip python3-dev libpq-dev postgresql postgresql-contrib nginx curl I followed all this beautiful instruction on how to deploy your site on Digital Ocean Since I changed the user and I ssh into my new server using this new user, I need to replace the user on the nginx.conf. By default nginx.conf user is www-data: user www-data; worker_processes auto; pid /run/nginx.pid; Then I replaced with my sudo user and solved my problem. user newuser; worker_processes auto; pid /run/nginx.pid; Then I restart nginx, gunicorn and postgresql(even if the last one it is not really necessary) sudo systemctl restart nginx sudo systemctl restart gunicorn sudo systemctl restart postgresql And tada.. :) no more issue. A: Fix 403 error with Django static files on Ubuntu server. Run this -> gpasswd -a www-data your_proj_username Reload nginx -> nginx -s reload Check chmod for your dirs: /home, /home/proj_dir, /home/proj_dir/static Run this - stat --format '%a' /home . Result must be 755 Run this - stat --format '%a' /home/your_proj_dir/static . Result must be 755 Run this - stat --format '%a' /home/your_proj_dir . Result must be 750 If you have different values you can try to change this: sudo chmod 755 /home sudo chmod 755 /home/your_proj_dir/static sudo chmod 750 /home/your_proj_dir Reload you project-server. This solve all permission errors
Nginx is throwing an 403 Forbidden on Static Files
I have a django app, python 2.7 with gunicorn and nginx. Nginx is throwing a 403 Forbidden Error, if I try to view anything in my static folder @: /home/ubuntu/virtualenv/myapp/myapp/homelaunch/static nginx config(/etc/nginx/sites-enabled/myapp) contains: server { listen 80; server_name *.myapp.com; access_log /home/ubuntu/virtualenv/myapp/error/access.log; error_log /home/ubuntu/virtualenv/myapp/error/error.log warn; connection_pool_size 2048; fastcgi_buffer_size 4K; fastcgi_buffers 64 4k; root /home/ubuntu/virtualenv/myapp/myapp/homelaunch/; location /static/ { alias /home/ubuntu/virtualenv/myapp/myapp/homelaunch/static/; } location / { proxy_pass http://127.0.0.1:8001; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"'; } } error.log contains: 2013/11/24 23:00:16 [error] 18243#0: *277 open() "/home/ubuntu/virtualenv/myapp/myapp/homelaunch/static/img/templated/home/img.png" failed (13: Permission denied), client: xx.xx.xxx.xxx, server: *.myapp.com, request: "GET /static/img/templated/home/img2.png HTTP/1.1", host: "myapp.com", referrer: "http://myapp.com/" access.log contains xx.xx.xx.xxx - - [24/Nov/2013:23:02:02 +0000] "GET /static/img/templated/base/animg.png HTTP/1.1" 403 141 "http://myapp.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:25.0) Gecko/20100101 Firefox/25.0" xx.xx.xx.xxx - - [24/Nov/2013:23:02:07 +0000] "-" 400 0 "-" "-" I tried just viewing say a .css file in /static/ and it throws an error like this in source: <html> <head><title>403 Forbidden</title></head> <body bgcolor="white"> <center><h1>403 Forbidden</h1></center> <hr><center>nginx/1.1.19</center> </body> </html>
[ "It appears the user nginx is running as (nginx?) is missing privileges to read the local file /home/ubuntu/virtualenv/myapp/myapp/homelaunch/static/img/templated/home/img.png. You probably wanna check file permissions as well as permissions on the directories in the hierarchy.\n", "MacOs El Capitan: At the top of nginx.conf write user username group_name\nMy user name is Kamil so i write:\nuser Kamil staff;\n\n(word 'staff' is very important in macOS). This do the trick. After that you don't need to change any permission in your project folder and files.\n", "The minimum fix that worked for me is:\nsudo chmod -R 664 /home/ubuntu/virtualenv/myapp/myapp/homelaunch/static/\nsudo chmod -R a+X /home/ubuntu/virtualenv/myapp/myapp/homelaunch/static/\n\n(BTW, in my case the static folder is called collected_static)\n", "It seems the web server user doesn't have read permissions to the static files.\nYou can solve this in 2 ways:\n\n(easiest, safer) run the nginx as you app user instead of default nginx user. To do this, add the following in nginx.conf\nuser your_app_user\n\nReplace your_app_user with appropriate unix username for your app. In this case the your_app_user already has necessary permissions to the static content.\nAnother way would be to to grant permissions for the web server user to the static dir.\n\n", "Try specifying a user at the top of your nginx.conf, above the server section. \nuser www-data;\n\n", "The best solution in that case would be to add www-data to username group:\ngpasswd -a www-data username\nFor your changes to work, restart nginx\nnginx -s reload \n", "I had the same issue no long ago. It might be a combination of factors. I found how to fix 403 access denied by replacing the user in the nginx.conf file.\n\nI deployed my website on an ubuntu server using Digital Ocean.\nI created a new user on my new ubuntu server and give admin priviliges\n\n adduser newuser\n\n usermod -aG sudo newuser \n\n\nI updated my new server and installed few packages\n\n sudo apt update\n\n sudo apt install python3-pip python3-dev libpq-dev postgresql postgresql-contrib nginx curl \n\n\nI followed all this beautiful instruction on how to deploy your site on Digital Ocean\nSince I changed the user and I ssh into my new server using this new user, I need to replace the user on the nginx.conf. By default nginx.conf user is www-data:\n\n user www-data;\n\n worker_processes auto;\n\n pid /run/nginx.pid;\n\nThen I replaced with my sudo user and solved my problem. \n user newuser;\n\n worker_processes auto;\n\n pid /run/nginx.pid;\n\n\nThen I restart nginx, gunicorn and postgresql(even if the last one it is not really necessary)\n\n sudo systemctl restart nginx \n\n sudo systemctl restart gunicorn\n\n sudo systemctl restart postgresql\n\nAnd tada.. :) no more issue.\n", "Fix 403 error with Django static files on Ubuntu server.\n\nRun this -> gpasswd -a www-data your_proj_username\n\nReload nginx -> nginx -s reload\n\nCheck chmod for your dirs: /home, /home/proj_dir, /home/proj_dir/static\n\n\n\nRun this - stat --format '%a' /home . Result must be 755\nRun this - stat --format '%a' /home/your_proj_dir/static . Result must be 755\nRun this - stat --format '%a' /home/your_proj_dir . Result must be 750\n\n\nIf you have different values you can try to change this:\n\n\nsudo chmod 755 /home\nsudo chmod 755 /home/your_proj_dir/static\nsudo chmod 750 /home/your_proj_dir\n\n\nReload you project-server. This solve all permission errors\n\n" ]
[ 27, 27, 9, 9, 8, 4, 1, 0 ]
[ "After hours upon hours following so many articles, I ran across :\nhttp://nicholasorr.com/blog/2008/07/22/nginx-engine-x-what-a-pain-in-the-bum/\nwhich had a comment to chmod the whole django app dir, so I did:\nsudo chmod -R myapp\n\nThis fixed it. Unbelievable!\nThanks to those who offered solutions to fix this. \n" ]
[ -6 ]
[ "configuration", "django", "nginx", "python" ]
stackoverflow_0020182329_configuration_django_nginx_python.txt
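A small diagnostic sketch for the 403 question above: it walks the directory chain from / down to the static root (path taken from the question) and flags missing permissions. Checking only the "others" bits is a simplification; if the nginx worker user owns the files or shares their group, the owner/group bits matter instead, so treat the output as a hint rather than a verdict.

import os
import stat

static_root = "/home/ubuntu/virtualenv/myapp/myapp/homelaunch/static"

# nginx's worker needs execute (search) permission on every parent directory
parts = static_root.strip("/").split("/")
for i in range(1, len(parts) + 1):
    p = "/" + "/".join(parts[:i])
    mode = os.stat(p).st_mode
    note = "" if mode & stat.S_IXOTH else "   <- no o+x, traversal blocked"
    print(f"{oct(mode & 0o777)}  {p}{note}")

# and read permission on the served files themselves
for root, dirs, files in os.walk(static_root):
    for name in files:
        full = os.path.join(root, name)
        if not os.stat(full).st_mode & stat.S_IROTH:
            print("not world-readable:", full)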
Q: Messed up Python install, how do I get it back? I downgraded to Python 3.7.15 using the tarball on their website. I downgraded because I needed to use an application that was only compatible with Python 3.7.*, well, now I can't uninstall it and it's for some reason set as the default installation. There is no rule for make uninstall so I need help figuring out what to do. I've tried manually deleting files to no avail, I have no idea where the rest of the program files are located, I could only find them in /usr/bin and /bin. I should add that the installation I had before still exists on my system, I just cant figure out how to use it or get rid of the downgraded installation. OS: Arch Linux Architecture: x86_64 I tried make uninstall I tried manually deleting files I tried just using the install for Python 3.10.8 I was expecting to be able to use Python 3.10.8 as well as python 3.7.15.
Messed up Python install, how do I get it back?
I downgraded to Python 3.7.15 using the tarball on their website. I downgraded because I needed to use an application that was only compatible with Python 3.7.*. Well, now I can't uninstall it, and it's for some reason set as the default installation. There is no rule for make uninstall, so I need help figuring out what to do. I've tried manually deleting files to no avail; I have no idea where the rest of the program files are located, as I could only find them in /usr/bin and /bin. I should add that the installation I had before still exists on my system, I just can't figure out how to use it or get rid of the downgraded installation. OS: Arch Linux Architecture: x86_64 I tried make uninstall I tried manually deleting files I tried just using the install for Python 3.10.8 I was expecting to be able to use Python 3.10.8 as well as Python 3.7.15.
[]
[]
[ "Try to using anaconda\nthere you can specify version of python while creating virtual environment\nNo need to uninstall base python\nconda create -n envname python=x.x anaconda\n\n" ]
[ -2 ]
[ "archlinux", "binaries", "executable", "python" ]
stackoverflow_0074645963_archlinux_binaries_executable_python.txt
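For the mixed-up interpreters above, it can help to see exactly what each command name resolves to before deleting anything by hand: a source build installed with make install scatters files under its configure prefix (bin/, lib/pythonX.Y/, include/), and on Arch the packaged interpreter can normally be restored with pacman. The snippet below only inspects, it does not remove anything, and the version names listed are assumptions.

import shutil
import sys
import sysconfig

print("running:", sys.version.split()[0], "from", sys.executable)

# which binary each common name resolves to on PATH
for name in ("python", "python3", "python3.7", "python3.10"):
    print(f"{name:10} ->", shutil.which(name))

# the prefix this interpreter was installed under (where its bin/ and lib/ live)
print("install prefix:", sysconfig.get_config_var("prefix"))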
Q: Get value from an array that stores a tree of keys in python I have an array that stores a key tree from a dictionary. For example person_dict = [{"person": {"first_name": "John", "age_of_children": [1, 8, 13]}}, ...] Becomes key_tree = [0, "person", "first_name"] OR key_tree = [0, "person", "age_of_children"] This array count contain one item or many items. I'd like to get the value from the person_dict, "John" in this case, by using the key_tree array dynamically. I would then like to set a different value for it. A: You can try the following: def get_value(d, key_list): for key in key_list: d = d[key] return d def set_value(d, key_list, value): res = d *keys, last_key = key_list for key in keys: d = d[key] d[last_key] = value return res person_dict = [{"person": {"first_name": "John", "age_of_children": [1, 8, 13]}}] key_tree = [0, "person", "first_name"] print(get_value(person_dict, key_tree)) print(set_value(person_dict, key_tree, "John2")) output: John [{'person': {'first_name': 'John2', 'age_of_children': [1, 8, 13]}}] For getting values just use get_value, it's pretty simple. In set_value you need to iterate until before the last key, so that you can assign new value to the last object. After the for-loop, d is your last container(dict or list or whatever object who can is subscriptable) object, you can update the last_key value with the value of value. res = d line is needed because you need to have a reference to the most outer container otherwise after the for-loop you have only the last inner container. A: person_dict['person'].update({'first_name':'Ahmad'}) or person_dict[key_tree[0]].update({key_tree[1]: 'Ahmad'})
Get value from an array that stores a tree of keys in python
I have an array that stores a key tree from a dictionary. For example person_dict = [{"person": {"first_name": "John", "age_of_children": [1, 8, 13]}}, ...] Becomes key_tree = [0, "person", "first_name"] OR key_tree = [0, "person", "age_of_children"] This array can contain one item or many items. I'd like to get the value from the person_dict, "John" in this case, by using the key_tree array dynamically. I would then like to set a different value for it.
[ "You can try the following:\ndef get_value(d, key_list):\n for key in key_list:\n d = d[key]\n return d\n\n\ndef set_value(d, key_list, value):\n res = d\n *keys, last_key = key_list\n\n for key in keys:\n d = d[key]\n\n d[last_key] = value\n return res\n\n\nperson_dict = [{\"person\": {\"first_name\": \"John\", \"age_of_children\": [1, 8, 13]}}]\nkey_tree = [0, \"person\", \"first_name\"]\n\nprint(get_value(person_dict, key_tree))\nprint(set_value(person_dict, key_tree, \"John2\"))\n\noutput:\nJohn\n[{'person': {'first_name': 'John2', 'age_of_children': [1, 8, 13]}}]\n\nFor getting values just use get_value, it's pretty simple. In set_value you need to iterate until before the last key, so that you can assign new value to the last object. After the for-loop, d is your last container(dict or list or whatever object who can is subscriptable) object, you can update the last_key value with the value of value. res = d line is needed because you need to have a reference to the most outer container otherwise after the for-loop you have only the last inner container.\n", "person_dict['person'].update({'first_name':'Ahmad'})\n\nor\nperson_dict[key_tree[0]].update({key_tree[1]: 'Ahmad'})\n\n" ]
[ 1, 0 ]
[]
[]
[ "dictionary", "list", "loops", "python" ]
stackoverflow_0074646092_dictionary_list_loops_python.txt
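An equivalent, more compact variant of the helpers in the answer above, using functools.reduce to walk the key path; it works with any mix of lists and dicts because both support indexing. The sample data is the one from the question.

from functools import reduce
from operator import getitem

def get_by_path(container, key_path):
    # container[k0][k1]...[kn], applied left to right
    return reduce(getitem, key_path, container)

def set_by_path(container, key_path, value):
    parent = reduce(getitem, key_path[:-1], container)
    parent[key_path[-1]] = value

person_dict = [{"person": {"first_name": "John", "age_of_children": [1, 8, 13]}}]

print(get_by_path(person_dict, [0, "person", "first_name"]))    # John
set_by_path(person_dict, [0, "person", "first_name"], "Jane")
print(person_dict[0]["person"]["first_name"])                    # Jane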
Q: From text file to JSON file with python Suppose I have a txt file that looks like this (indentation is 4 spaces): key1=value1 key2 key2_1=value2_1 key2_2 key2_2_1=value2_2_1 key2_3=value2_3_1,value2_3_2,value2_3_3 key3=value3_1,value3_2,value3_3 I want to convert it into any VALID json, like this one: { 'key1':'value1', 'key2': { 'key2_1':'value2_1', 'key2_2':{ 'key2_2_1':'value2_2_1' }, 'key2_3':['value2_3_1','value2_3_2','value2_3_3'] }, 'key3':['value3_1','value3_2','value3_3'] } I have tried this (which I got from another post): # helper method to convert equals sign to indentation for easier parsing def convertIndentation(inputString): indentCount = 0 indentVal = " " for position, eachLine in enumerate(inputString): if "=" not in eachLine: continue else: strSplit = eachLine.split("=", 1) #get previous indentation prevIndent = inputString[position].count(indentVal) newVal = (indentVal * (prevIndent + 1)) + strSplit[1] inputString[position] = strSplit[0] + '\n' inputString.insert(position+1, newVal) flatList = "".join(inputString) return flatList # helper class for node usage class Node: def __init__(self, indented_line): self.children = [] self.level = len(indented_line) - len(indented_line.lstrip()) self.text = indented_line.strip() def add_children(self, nodes): childlevel = nodes[0].level while nodes: node = nodes.pop(0) if node.level == childlevel: # add node as a child self.children.append(node) elif node.level > childlevel: # add nodes as grandchildren of the last child nodes.insert(0,node) self.children[-1].add_children(nodes) elif node.level <= self.level: # this node is a sibling, no more children nodes.insert(0,node) return def as_dict(self): if len(self.children) > 1: return {self.text: [node.as_dict() for node in self.children]} elif len(self.children) == 1: return {self.text: self.children[0].as_dict()} else: return self.text # process our file here with open(filename, 'r') as fh: fileContent = fh.readlines() fileParse = convertIndentation(fileContent) # convert equals signs to indentation root = Node('root') root.add_children([Node(line) for line in fileParse.splitlines() if line.strip()]) d = root.as_dict()['root'] # this variable is storing the json output jsonOutput = json.dumps(d, indent = 4, sort_keys = False) print(jsonOutput) which yields the following: [ { "key1": "value1" }, { "key2": [ { "key2_1": "value2_1" }, { "key2_2": { "key2_2_1": "value2_2_1" } }, { "key2_3": "value2_3_1,value2_3_2,value2_3_3" }, ] }, { "key3": "value3_1,value3_2,value3_3" } ] Yet this is still not a valid JSON file. When I try to open the output file using 'json' module, I get this predictable message: "JSONDecodeError: Expecting property name enclosed in double quotes: line 10 column 5 (char 165)". with open(r'C:\Users\nigel\OneDrive\Documents\LAB\lean\sample_01.02_R00.json', 'r', encoding='utf-8') as read_file: data = json.load(read_file) output: JSONDecodeError Traceback (most recent call last) Input In [2], in <cell line: 1>() 1 with open(r'C:\Users\nigel\OneDrive\Documents\LAB\lean\sample_01.02_R00.json', 'r', encoding='utf-8') as read_file: ----> 2 data = json.load(read_file) File ~\Anaconda3\lib\json\__init__.py:293, in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 274 def load(fp, *, cls=None, object_hook=None, parse_float=None, 275 parse_int=None, parse_constant=None, object_pairs_hook=None, **kw): 276 """Deserialize ``fp`` (a ``.read()``-supporting file-like object containing 277 a JSON document) to a Python object. 278 (...) 
291 kwarg; otherwise ``JSONDecoder`` is used. 292 """ --> 293 return loads(fp.read(), 294 cls=cls, object_hook=object_hook, 295 parse_float=parse_float, parse_int=parse_int, 296 parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw) File ~\Anaconda3\lib\json\__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 341 s = s.decode(detect_encoding(s), 'surrogatepass') 343 if (cls is None and object_hook is None and 344 parse_int is None and parse_float is None and 345 parse_constant is None and object_pairs_hook is None and not kw): --> 346 return _default_decoder.decode(s) 347 if cls is None: 348 cls = JSONDecoder File ~\Anaconda3\lib\json\decoder.py:337, in JSONDecoder.decode(self, s, _w) 332 def decode(self, s, _w=WHITESPACE.match): 333 """Return the Python representation of ``s`` (a ``str`` instance 334 containing a JSON document). 335 336 """ --> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end()) 338 end = _w(s, end).end() 339 if end != len(s): File ~\Anaconda3\lib\json\decoder.py:353, in JSONDecoder.raw_decode(self, s, idx) 344 """Decode a JSON document from ``s`` (a ``str`` beginning with 345 a JSON document) and return a 2-tuple of the Python 346 representation and the index in ``s`` where the document ended. (...) 350 351 """ 352 try: --> 353 obj, end = self.scan_once(s, idx) 354 except StopIteration as err: 355 raise JSONDecodeError("Expecting value", s, err.value) from None JSONDecodeError: Expecting property name enclosed in double quotes: line 10 column 5 (char 165) The reason is that JSON expects to find keys (strings enclosed in double quotes) when it actually finds json objects (nested dictionaries) in their places. That is it! I truly appreciate any comments. Best, Nigel A: An aside for users that land on this page: I could not reproduce the error that the OP posted. json.dumps() would be very highly unlikely to output "bad json". This was merely an attempt to help out the poster. Splitting The Strings Into Lists I am assuming per your comment that you mean that you want to take your strings, for example, this line key2_3=value2_3_1,value2_3_2,value2_3_3 and break these values up into "key2_3": ["value2_3_1", "value2_3_2", "value2_3_3"]. To do so, you'd have to make the following adjustment to the code provided to you: def as_dict(self): if len(self.children) > 1: return {self.text: [node.as_dict() for node in self.children]} elif len(self.children) == 1: return {self.text: self.children[0].as_dict()} else: return self.text.split(",") # was self.text Dictionaries of Dictionaries Instead of Lists To make the output dictionary a dictionary of dictionaries with node base values of lists, ie {k1: {k2: [1, 2, 3]}}, and of the like, we have to make 2 changes. Update the as_dict method to use {} instead of []. Include a function to compress keys. When I was doing this, I had a hard time outputting the correct data structure... it'd look basically like this, {k1: {k1: {k2: {k2: value}}}}. This becomes obvious when you don't run the d = compress(root.as_dict()['root']) (d = root.as_dict()['root']) function in the code. 
So the code went from def as_dict(self): if len(self.children) > 1: return {self.text: [node.as_dict() for node in self.children]} elif len(self.children) == 1: return {self.text: self.children[0].as_dict()} else: return self.text.split(",") if "," in self.text else self.text to def as_dict(self): if len(self.children) > 1: return {self.text: {node.text: node.as_dict() for node in self.children}} elif len(self.children) == 1: return {self.text: self.children[0].as_dict()} else: return self.text.split(",") if "," in self.text else self.text , then I included the compress function # for merging like sub keys and values def compress(dictionary): if isinstance(dictionary, dict): for k, v in dictionary.items(): if isinstance(v, dict): if k in v.keys(): dictionary[k] = dictionary[k].pop(k) compress(dictionary[k]) compress(k) return dictionary Full Code If you put the below in a file and run it from the command line, it should work 100%. Otherwise its probably a problem with anaconda or version of python (though that doesn't really seem likely). from io import StringIO import json # for merging like sub keys and values def compress(dictionary): if isinstance(dictionary, dict): for k, v in dictionary.items(): if isinstance(v, dict): if k in v.keys(): dictionary[k] = dictionary[k].pop(k) compress(dictionary[k]) compress(k) return dictionary # helper method to convert equals sign to indentation for easier parsing def convertIndentation(inputString): indentCount = 0 indentVal = " " for position, eachLine in enumerate(inputString): if "=" not in eachLine: continue else: strSplit = eachLine.split("=", 1) #get previous indentation prevIndent = inputString[position].count(indentVal) newVal = (indentVal * (prevIndent + 1)) + strSplit[1] inputString[position] = strSplit[0] + '\n' inputString.insert(position+1, newVal) flatList = "".join(inputString) return flatList # helper class for node usage class Node: def __init__(self, indented_line): self.children = [] self.level = len(indented_line) - len(indented_line.lstrip()) self.text = indented_line.strip() def add_children(self, nodes): childlevel = nodes[0].level while nodes: node = nodes.pop(0) if node.level == childlevel: # add node as a child self.children.append(node) elif node.level > childlevel: # add nodes as grandchildren of the last child nodes.insert(0,node) self.children[-1].add_children(nodes) elif node.level <= self.level: # this node is a sibling, no more children nodes.insert(0,node) return def as_dict(self): if len(self.children) > 1: return {self.text: {node.text: node.as_dict() for node in self.children}} elif len(self.children) == 1: return {self.text: self.children[0].as_dict()} else: return self.text.split(",") if "," in self.text else self.text if __name__ == "__main__": s = """ key1=value1 key2 key2_1=value2_1 key2_2 key2_2_1 key2_2_1_1=value2_2_1_1 key2_3=value2_3_1,value2_3_2,value2_3_3 key3=value3_1,value3_2,value3_3 """ fh = StringIO(s) fileContent = fh.readlines() fileParse = convertIndentation(fileContent) # convert equals signs to indentation root = Node('root') root.add_children([Node(line) for line in fileParse.splitlines() if line.strip()]) d = compress(root.as_dict()['root']) # this variable is storing the json output jsonOutput = json.dumps(d, indent=4, sort_keys=False) f = StringIO(jsonOutput) # load the "file" loaded = json.load(f) print(s) print(jsonOutput) print(loaded)
From text file to JSON file with python
Suppose I have a txt file that looks like this (indentation is 4 spaces): key1=value1 key2 key2_1=value2_1 key2_2 key2_2_1=value2_2_1 key2_3=value2_3_1,value2_3_2,value2_3_3 key3=value3_1,value3_2,value3_3 I want to convert it into any VALID json, like this one: { 'key1':'value1', 'key2': { 'key2_1':'value2_1', 'key2_2':{ 'key2_2_1':'value2_2_1' }, 'key2_3':['value2_3_1','value2_3_2','value2_3_3'] }, 'key3':['value3_1','value3_2','value3_3'] } I have tried this (which I got from another post): # helper method to convert equals sign to indentation for easier parsing def convertIndentation(inputString): indentCount = 0 indentVal = " " for position, eachLine in enumerate(inputString): if "=" not in eachLine: continue else: strSplit = eachLine.split("=", 1) #get previous indentation prevIndent = inputString[position].count(indentVal) newVal = (indentVal * (prevIndent + 1)) + strSplit[1] inputString[position] = strSplit[0] + '\n' inputString.insert(position+1, newVal) flatList = "".join(inputString) return flatList # helper class for node usage class Node: def __init__(self, indented_line): self.children = [] self.level = len(indented_line) - len(indented_line.lstrip()) self.text = indented_line.strip() def add_children(self, nodes): childlevel = nodes[0].level while nodes: node = nodes.pop(0) if node.level == childlevel: # add node as a child self.children.append(node) elif node.level > childlevel: # add nodes as grandchildren of the last child nodes.insert(0,node) self.children[-1].add_children(nodes) elif node.level <= self.level: # this node is a sibling, no more children nodes.insert(0,node) return def as_dict(self): if len(self.children) > 1: return {self.text: [node.as_dict() for node in self.children]} elif len(self.children) == 1: return {self.text: self.children[0].as_dict()} else: return self.text # process our file here with open(filename, 'r') as fh: fileContent = fh.readlines() fileParse = convertIndentation(fileContent) # convert equals signs to indentation root = Node('root') root.add_children([Node(line) for line in fileParse.splitlines() if line.strip()]) d = root.as_dict()['root'] # this variable is storing the json output jsonOutput = json.dumps(d, indent = 4, sort_keys = False) print(jsonOutput) which yields the following: [ { "key1": "value1" }, { "key2": [ { "key2_1": "value2_1" }, { "key2_2": { "key2_2_1": "value2_2_1" } }, { "key2_3": "value2_3_1,value2_3_2,value2_3_3" }, ] }, { "key3": "value3_1,value3_2,value3_3" } ] Yet this is still not a valid JSON file. When I try to open the output file using 'json' module, I get this predictable message: "JSONDecodeError: Expecting property name enclosed in double quotes: line 10 column 5 (char 165)". with open(r'C:\Users\nigel\OneDrive\Documents\LAB\lean\sample_01.02_R00.json', 'r', encoding='utf-8') as read_file: data = json.load(read_file) output: JSONDecodeError Traceback (most recent call last) Input In [2], in <cell line: 1>() 1 with open(r'C:\Users\nigel\OneDrive\Documents\LAB\lean\sample_01.02_R00.json', 'r', encoding='utf-8') as read_file: ----> 2 data = json.load(read_file) File ~\Anaconda3\lib\json\__init__.py:293, in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 274 def load(fp, *, cls=None, object_hook=None, parse_float=None, 275 parse_int=None, parse_constant=None, object_pairs_hook=None, **kw): 276 """Deserialize ``fp`` (a ``.read()``-supporting file-like object containing 277 a JSON document) to a Python object. 278 (...) 291 kwarg; otherwise ``JSONDecoder`` is used. 
292 """ --> 293 return loads(fp.read(), 294 cls=cls, object_hook=object_hook, 295 parse_float=parse_float, parse_int=parse_int, 296 parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw) File ~\Anaconda3\lib\json\__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 341 s = s.decode(detect_encoding(s), 'surrogatepass') 343 if (cls is None and object_hook is None and 344 parse_int is None and parse_float is None and 345 parse_constant is None and object_pairs_hook is None and not kw): --> 346 return _default_decoder.decode(s) 347 if cls is None: 348 cls = JSONDecoder File ~\Anaconda3\lib\json\decoder.py:337, in JSONDecoder.decode(self, s, _w) 332 def decode(self, s, _w=WHITESPACE.match): 333 """Return the Python representation of ``s`` (a ``str`` instance 334 containing a JSON document). 335 336 """ --> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end()) 338 end = _w(s, end).end() 339 if end != len(s): File ~\Anaconda3\lib\json\decoder.py:353, in JSONDecoder.raw_decode(self, s, idx) 344 """Decode a JSON document from ``s`` (a ``str`` beginning with 345 a JSON document) and return a 2-tuple of the Python 346 representation and the index in ``s`` where the document ended. (...) 350 351 """ 352 try: --> 353 obj, end = self.scan_once(s, idx) 354 except StopIteration as err: 355 raise JSONDecodeError("Expecting value", s, err.value) from None JSONDecodeError: Expecting property name enclosed in double quotes: line 10 column 5 (char 165) The reason is that JSON expects to find keys (strings enclosed in double quotes) when it actually finds json objects (nested dictionaries) in their places. That is it! I truly appreciate any comments. Best, Nigel
[ "An aside for users that land on this page: I could not reproduce the error that the OP posted. json.dumps() would be very highly unlikely to output \"bad json\". This was merely an attempt to help out the poster.\nSplitting The Strings Into Lists\nI am assuming per your comment that you mean that you want to take your strings, for example, this line\nkey2_3=value2_3_1,value2_3_2,value2_3_3\nand break these values up into \"key2_3\": [\"value2_3_1\", \"value2_3_2\", \"value2_3_3\"].\nTo do so, you'd have to make the following adjustment to the code provided to you:\ndef as_dict(self):\n if len(self.children) > 1:\n return {self.text: [node.as_dict() for node in self.children]}\n elif len(self.children) == 1:\n return {self.text: self.children[0].as_dict()}\n else:\n return self.text.split(\",\") # was self.text\n\n\nDictionaries of Dictionaries Instead of Lists\nTo make the output dictionary a dictionary of dictionaries with node base values of lists, ie {k1: {k2: [1, 2, 3]}}, and of the like, we have to make 2 changes.\n\nUpdate the as_dict method to use {}\ninstead of [].\nInclude a function to compress keys.\n\nWhen I was doing this, I had a hard time outputting the correct data structure... it'd look basically like this, {k1: {k1: {k2: {k2: value}}}}. This becomes obvious when you don't run the d = compress(root.as_dict()['root']) (d = root.as_dict()['root']) function in the code. So the code went from\ndef as_dict(self):\n if len(self.children) > 1:\n return {self.text: [node.as_dict() for node in self.children]}\n elif len(self.children) == 1:\n return {self.text: self.children[0].as_dict()}\n else:\n return self.text.split(\",\") if \",\" in self.text else self.text\n\nto\ndef as_dict(self):\n if len(self.children) > 1:\n return {self.text: {node.text: node.as_dict() for node in self.children}}\n elif len(self.children) == 1:\n return {self.text: self.children[0].as_dict()}\n else:\n return self.text.split(\",\") if \",\" in self.text else self.text\n\n, then I included the compress function\n# for merging like sub keys and values\ndef compress(dictionary):\n if isinstance(dictionary, dict):\n for k, v in dictionary.items():\n if isinstance(v, dict):\n if k in v.keys():\n dictionary[k] = dictionary[k].pop(k)\n compress(dictionary[k])\n compress(k)\n return dictionary\n\n\nFull Code\nIf you put the below in a file and run it from the command line, it should work 100%. 
Otherwise its probably a problem with anaconda or version of python (though that doesn't really seem likely).\nfrom io import StringIO\nimport json\n\n# for merging like sub keys and values\ndef compress(dictionary):\n if isinstance(dictionary, dict):\n for k, v in dictionary.items():\n if isinstance(v, dict):\n if k in v.keys():\n dictionary[k] = dictionary[k].pop(k)\n compress(dictionary[k])\n compress(k)\n return dictionary\n\n# helper method to convert equals sign to indentation for easier parsing\ndef convertIndentation(inputString):\n indentCount = 0\n indentVal = \" \"\n for position, eachLine in enumerate(inputString):\n if \"=\" not in eachLine:\n continue\n else:\n strSplit = eachLine.split(\"=\", 1)\n #get previous indentation\n prevIndent = inputString[position].count(indentVal)\n newVal = (indentVal * (prevIndent + 1)) + strSplit[1]\n inputString[position] = strSplit[0] + '\\n'\n inputString.insert(position+1, newVal)\n flatList = \"\".join(inputString)\n return flatList\n\n\n\n# helper class for node usage\nclass Node:\n def __init__(self, indented_line):\n self.children = []\n self.level = len(indented_line) - len(indented_line.lstrip())\n self.text = indented_line.strip()\n def add_children(self, nodes):\n childlevel = nodes[0].level\n while nodes:\n node = nodes.pop(0)\n if node.level == childlevel: # add node as a child\n self.children.append(node)\n elif node.level > childlevel: # add nodes as grandchildren of the last child\n nodes.insert(0,node)\n self.children[-1].add_children(nodes)\n elif node.level <= self.level: # this node is a sibling, no more children\n nodes.insert(0,node)\n return\n def as_dict(self):\n if len(self.children) > 1:\n return {self.text: {node.text: node.as_dict() for node in self.children}}\n elif len(self.children) == 1:\n return {self.text: self.children[0].as_dict()}\n else:\n return self.text.split(\",\") if \",\" in self.text else self.text\n\nif __name__ == \"__main__\":\n\n s = \"\"\"\n key1=value1\n key2\n key2_1=value2_1\n key2_2\n key2_2_1\n key2_2_1_1=value2_2_1_1\n key2_3=value2_3_1,value2_3_2,value2_3_3\n key3=value3_1,value3_2,value3_3\n \"\"\"\n\n fh = StringIO(s)\n fileContent = fh.readlines()\n fileParse = convertIndentation(fileContent)\n # convert equals signs to indentation\n root = Node('root')\n root.add_children([Node(line) for line in fileParse.splitlines() if line.strip()])\n d = compress(root.as_dict()['root'])\n # this variable is storing the json output\n jsonOutput = json.dumps(d, indent=4, sort_keys=False)\n f = StringIO(jsonOutput)\n\n # load the \"file\"\n loaded = json.load(f)\n\n print(s)\n print(jsonOutput)\n print(loaded)\n\n" ]
[ 1 ]
[]
[]
[ "arrays", "dictionary", "json", "python", "txt" ]
stackoverflow_0074642972_arrays_dictionary_json_python_txt.txt
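A minimal, self-contained sketch of the decoder error discussed in the entry above (the snippet and its dict literal are purely illustrative, not part of the original script): json.loads() only accepts JSON with double-quoted property names, so feeding it the repr() of a Python dict raises exactly the "Expecting property name enclosed in double quotes" error, while text produced by json.dumps() round-trips cleanly.

import json

# Python-style dict text (single quotes) is not valid JSON.
python_repr = "{'key1': 'value1', 'key2': {'key2_1': 'value2_1'}}"

try:
    json.loads(python_repr)
except json.JSONDecodeError as exc:
    print("JSONDecodeError:", exc)  # Expecting property name enclosed in double quotes

# Serialising with json.dumps() first yields real JSON, which loads back cleanly.
valid = json.dumps({"key1": "value1", "key2": {"key2_1": "value2_1"}})
print(json.loads(valid))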
Q: Comparing Two Functions for Recursive Digit Sum I'm stumped as to why my solution for the Recursive Digit Sum question on HackerRank is being rejected. Background The question: For an input of string n and integer k, the number h is created by concatenating n "k" times. Find the "super digit" of h by recursively summing the integers until one is left. For example: n = '9875', k = 2, so h = 98759875 sum(98759875)= 58 sum(58)= 13 sum(13) = 4 Submissions My Solution def superDigit(n, k): h=n*k while len(h)>1: h=str(sum([int(i) for i in h])) return int(h) Solution I've Found def superDigit(n, k): return 1 + (k * sum(int(x) for x in n) - 1) % 9 My Inquiry to the Community What am I missing in my solution? Yes it's not as simple as the supplied solution involving the digital root function (which I don't fully understand, I just found it online) but I don't see how my function is supplying incorrect answers. It passes most of the test cases but is rejecting for 1/3 of them. A: Here is the result of my research on your case: You don't supply the typing, so I had to case check to find out you use one str and one int. How do I know this? Well if you used 2 strs the multiplication would fail: >>> "10"*"2" Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: can't multiply sequence by non-int of type 'str' And if you used 2 ints, h would also be an int, and your sum would fail: >>> str(sum([int(i) for i in 100])) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'int' object is not iterable So as a result you must have one int and one str. But that also doesn't work since the int*str multiplication uses concatenation instead of the "mathematical" addition: >>> 10 * "2" '2222222222' My solution suggestion is simply to: Use typing for clarity Use ints for the multiplication and strs for splitting the digits This can be done simply by editing only the first two lines: def superDigit(n: int, k:int) -> int: h=str(n*k) while len(h)>1: h=str(sum([int(i) for i in h])) return int(h) Let me know if this helps. A: Thanks to some helpful comments from @Tim Roberts, @Paul Hankin, and @yagod I was able to solve the issue. It was in fact due to time-out! All I had to do was update one line: h=str(sum([int(i) for i in n])*(k%9))
Comparing Two Functions for Recursive Digit Sum
I'm stumped as to why my solution for the Recursive Digit Sum question on HackerRank is being rejected. Background The question: For an input of string n and integer k, the number h is created by concatenating n "k" times. Find the "super digit" of h by recursively summing the integers until one is left. For example: n = '9875', k = 2, so h = 98759875 sum(98759875)= 58 sum(58)= 13 sum(13) = 4 Submissions My Solution def superDigit(n, k): h=n*k while len(h)>1: h=str(sum([int(i) for i in h])) return int(h) Solution I've Found def superDigit(n, k): return 1 + (k * sum(int(x) for x in n) - 1) % 9 My Inquiry to the Community What am I missing in my solution? Yes it's not as simple as the supplied solution involving the digital root function (which I don't fully understand, I just found it online) but I don't see how my function is supplying incorrect answers. It passes most of the test cases but is rejecting for 1/3 of them.
[ "Here is the result of my research on your case:\nYou don't supply the typing, so I had to case check to find out you use one str and one int. How do I know this?\nWell if you used 2 strs the multiplication would fail:\n>>> \"10\"*\"2\"\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: can't multiply sequence by non-int of type 'str'\n\nAnd if you used 2 ints, h would also be an int, and your sum would fail:\n>>> str(sum([int(i) for i in 100]))\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: 'int' object is not iterable\n\nSo as a result you must have one int and one str. But that also doesn't work since the int*str multiplication uses concatenation instead of the \"mathematical\" addition:\n>>> 10 * \"2\"\n'2222222222'\n\nMy solution suggestion is simply to:\n\nUse typing for clarity\nUse ints for the multiplication and strs for splitting the digits\n\nThis can be done simply by editing only the first two lines:\ndef superDigit(n: int, k:int) -> int:\n h=str(n*k)\n while len(h)>1:\n h=str(sum([int(i) for i in h]))\n return int(h)\n\nLet me know if this helps.\n", "Thanks to some helpful comments from @Tim Roberts, @Paul Hankin, and @yagod I was able to solve the issue. It was in fact due to time-out! All I had to do was update one line: h=str(sum([int(i) for i in n])*(k%9))\n" ]
[ 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074636865_python.txt
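A short verification sketch (inputs chosen arbitrarily) for the digital-root formula the poster found: summing the digits of n once and multiplying by k gives the same digital root as summing the full concatenation, and the identity dr(x) = 1 + (x - 1) % 9 for x > 0 collapses the repeated summing into one step, which is also why it avoids the time-out mentioned in the second answer.

def super_digit_loop(n: str, k: int) -> int:
    # Sum the digits of n once, multiply by k, then keep summing digits.
    total = sum(int(d) for d in n) * k
    while total >= 10:
        total = sum(int(d) for d in str(total))
    return total

def super_digit_formula(n: str, k: int) -> int:
    # Digital root identity: dr(x) = 1 + (x - 1) % 9 for positive x.
    return 1 + (k * sum(int(d) for d in n) - 1) % 9

for n, k in [("9875", 2), ("148", 3), ("1", 1)]:
    assert super_digit_loop(n, k) == super_digit_formula(n, k)
    print(n, k, "->", super_digit_formula(n, k))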
Q: ModuleNotFoundError: No module named 'caffe._caffe' on Windows 10 I wanted to make a deepdream video using this script: https://github.com/graphific/DeepDreamVideo. I had to make few changes to it but now I'm receiving this error: Traceback (most recent call last): File "C:\Users\Daniel\Desktop\deepdream-master\2_dreaming_time.py", line 20, in <module> import caffe File "C:\Users\Daniel\Desktop\deepdream-master\caffe\python\caffe\__init__.py", line 1, in <module> from .pycaffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, RMSPropSolver, AdaDeltaSolver, AdamSolver, NCCL, Timer File "C:\Users\Daniel\Desktop\deepdream-master\caffe\python\caffe\pycaffe.py", line 13, in <module> from ._caffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, \ ModuleNotFoundError: No module named 'caffe._caffe' I've installed caffe by pip install caffe-ssd-x86 but it doesn't resolve this problem. I'm using windows 10 and Python 3.8 The code I'm using now: #!/usr/bin/python __author__ = 'graphific' import argparse import os, os.path import errno import sys import time import subprocess from random import randint from io import StringIO import numpy as np import scipy.ndimage as nd import PIL.Image from google.protobuf import text_format sys.path.insert(0, r'C:\Users\Daniel\Desktop\deepdream-master\caffe\python') import caffe caffe.set_mode_gpu() def extractVideo(inputdir, outputdir): print(subprocess.Popen('ffmpeg -i ' + inputdir + ' -f image2 ' + outputdir + '/%08d.png', shell=True, stdout=subprocess.PIPE).stdout.read()) def showarray(a, fmt='jpeg'): a = np.uint8(np.clip(a, 0, 255)) f = StringIO() PIL.Image.fromarray(a).save(f, fmt) display(Image(data=f.getvalue())) def showarrayHQ(a, fmt='png'): a = np.uint8(np.clip(a, 0, 255)) f = StringIO() PIL.Image.fromarray(a).save(f, fmt) display(Image(data=f.getvalue())) # a couple of utility functions for converting to and from Caffe's input image layout def preprocess(net, img): #print np.float32(img).shape return np.float32(np.rollaxis(img, 2)[::-1]) - net.transformer.mean['data'] def deprocess(net, img): return np.dstack((img + net.transformer.mean['data'])[::-1]) def objective_L2(dst): dst.diff[:] = dst.data #objective for guided dreaming def objective_guide(dst,guide_features): x = dst.data[0].copy() y = guide_features ch = x.shape[0] x = x.reshape(ch,-1) y = y.reshape(ch,-1) A = x.T.dot(y) # compute the matrix of dot-products with guide features dst.diff[0].reshape(ch,-1)[:] = y[:,A.argmax(1)] # select ones that match best #from https://github.com/jrosebr1/bat-country/blob/master/batcountry/batcountry.py def prepare_guide(net, image, end="inception_4c/output", maxW=224, maxH=224): # grab dimensions of input image (w, h) = image.size # GoogLeNet was trained on images with maximum width and heights # of 224 pixels -- if either dimension is larger than 224 pixels, # then we'll need to do some resizing if h > maxH or w > maxW: # resize based on width if w > h: r = maxW / float(w) # resize based on height else: r = maxH / float(h) # resize the image (nW, nH) = (int(r * w), int(r * h)) image = np.float32(image.resize((nW, nH), PIL.Image.BILINEAR)) (src, dst) = (net.blobs["data"], net.blobs[end]) src.reshape(1, 3, nH, nW) src.data[0] = preprocess(net, image) net.forward(end=end) guide_features = dst.data[0].copy() return guide_features # ------- # Make dreams # ------- def make_step(net, step_size=1.5, end='inception_4c/output', jitter=32, clip=True): '''Basic gradient ascent step.''' src = net.blobs['data'] # input image is stored in Net's 'data' blob dst 
= net.blobs[end] ox, oy = np.random.randint(-jitter, jitter + 1, 2) src.data[0] = np.roll(np.roll(src.data[0], ox, -1), oy, -2) # apply jitter shift net.forward(end=end) dst.diff[:] = dst.data # specify the optimization objective net.backward(start=end) g = src.diff[0] # apply normalized ascent step to the input image src.data[:] += step_size / np.abs(g).mean() * g src.data[0] = np.roll(np.roll(src.data[0], -ox, -1), -oy, -2) # unshift image if clip: bias = net.transformer.mean['data'] src.data[:] = np.clip(src.data, -bias, 255-bias) def deepdream(net, base_img, image_type, iter_n=10, octave_n=4, octave_scale=1.4, end='inception_4c/output', verbose = 1, clip=True, **step_params): # prepare base images for all octaves octaves = [preprocess(net, base_img)] for i in range(octave_n - 1): octaves.append(nd.zoom(octaves[-1], (1, 1.0 / octave_scale, 1.0 / octave_scale), order=1)) src = net.blobs['data'] detail = np.zeros_like(octaves[-1]) # allocate image for network-produced details for octave, octave_base in enumerate(octaves[::-1]): h, w = octave_base.shape[-2:] if octave > 0: # upscale details from the previous octave h1, w1 = detail.shape[-2:] detail = nd.zoom(detail, (1, 1.0 * h / h1, 1.0 * w / w1), order=1) src.reshape(1,3,h,w) # resize the network's input image size src.data[0] = octave_base+detail for i in range(iter_n): make_step(net, end=end, clip=clip, **step_params) # visualization vis = deprocess(net, src.data[0]) if not clip: # adjust image contrast if clipping is disabled vis = vis * (255.0 / np.percentile(vis, 99.98)) if verbose == 3: if image_type == "png": showarrayHQ(vis) elif image_type == "jpg": showarray(vis) print (octave, i, end, vis.shape) clear_output(wait=True) elif verbose == 2: print (octave, i, end, vis.shape) # extract details produced on the current octave detail = src.data[0]-octave_base # returning the resulting image return deprocess(net, src.data[0]) # -------------- # Guided Dreaming # -------------- def make_step_guided(net, step_size=1.5, end='inception_4c/output', jitter=32, clip=True, objective_fn=objective_guide, **objective_params): '''Basic gradient ascent step.''' #if objective_fn is None: # objective_fn = objective_L2 src = net.blobs['data'] # input image is stored in Net's 'data' blob dst = net.blobs[end] ox, oy = np.random.randint(-jitter, jitter+1, 2) src.data[0] = np.roll(np.roll(src.data[0], ox, -1), oy, -2) # apply jitter shift net.forward(end=end) objective_fn(dst, **objective_params) # specify the optimization objective net.backward(start=end) g = src.diff[0] # apply normalized ascent step to the input image src.data[:] += step_size/np.abs(g).mean() * g src.data[0] = np.roll(np.roll(src.data[0], -ox, -1), -oy, -2) # unshift image if clip: bias = net.transformer.mean['data'] src.data[:] = np.clip(src.data, -bias, 255-bias) def deepdream_guided(net, base_img, image_type, iter_n=10, octave_n=4, octave_scale=1.4, end='inception_4c/output', clip=True, verbose=1, objective_fn=objective_guide, **step_params): #if objective_fn is None: # objective_fn = objective_L2 # prepare base images for all octaves octaves = [preprocess(net, base_img)] for i in range(octave_n-1): octaves.append(nd.zoom(octaves[-1], (1, 1.0/octave_scale,1.0/octave_scale), order=1)) src = net.blobs['data'] detail = np.zeros_like(octaves[-1]) # allocate image for network-produced details for octave, octave_base in enumerate(octaves[::-1]): h, w = octave_base.shape[-2:] if octave > 0: # upscale details from the previous octave h1, w1 = detail.shape[-2:] detail = nd.zoom(detail, (1, 
1.0*h/h1,1.0*w/w1), order=1) src.reshape(1,3,h,w) # resize the network's input image size src.data[0] = octave_base+detail for i in range(iter_n): make_step_guided(net, end=end, clip=clip, objective_fn=objective_fn, **step_params) # visualization vis = deprocess(net, src.data[0]) if not clip: # adjust image contrast if clipping is disabled vis = vis*(255.0/np.percentile(vis, 99.98)) if verbose == 3: if image_type == "png": showarrayHQ(vis) elif image_type == "jpg": showarray(vis) print(octave, i, end, vis.shape) clear_output(wait=True) elif verbose == 2: print(octave, i, end, vis.shape) # extract details produced on the current octave detail = src.data[0]-octave_base # returning the resulting image return deprocess(net, src.data[0]) def resizePicture(image,width): img = PIL.Image.open(image) basewidth = width wpercent = (basewidth/float(img.size[0])) hsize = int((float(img.size[1])*float(wpercent))) return img.resize((basewidth,hsize), PIL.Image.ANTIALIAS) def morphPicture(filename1,filename2,blend,width): img1 = PIL.Image.open(filename1) img2 = PIL.Image.open(filename2) if width != 0: img2 = resizePicture(filename2,width) return PIL.Image.blend(img1, img2, blend) def make_sure_path_exists(path): ''' make sure input and output directory exist, if not create them. If another error (permission denied) throw an error. ''' try: os.makedirs(path) except OSError as exception: if exception.errno != errno.EEXIST: raise layersloop = ['inception_4c/output', 'inception_4d/output', 'inception_4e/output', 'inception_5a/output', 'inception_5b/output', 'inception_5a/output', 'inception_4e/output', 'inception_4d/output', 'inception_4c/output'] def main(input, output, image_type, gpu, model_path, model_name, preview, octaves, octave_scale, iterations, jitter, zoom, stepsize, blend, layers, guide_image, start_frame, end_frame, verbose): make_sure_path_exists(input) make_sure_path_exists(output) # let max nr of frames nrframes =len([name for name in os.listdir(input) if os.path.isfile(os.path.join(input, name))]) if nrframes == 0: print("no frames to process found") sys.exit(0) if preview is None: preview = 0 if octaves is None: octaves = 4 if octave_scale is None: octave_scale = 1.5 if iterations is None: iterations = 5 if jitter is None: jitter = 32 if zoom is None: zoom = 1 if stepsize is None: stepsize = 1.5 if blend is None: blend = 0.5 #can be nr (constant), random, or loop if verbose is None: verbose = 1 if layers is None: layers = 'customloop' #['inception_4c/output'] if start_frame is None: frame_i = 1 else: frame_i = int(start_frame) if not end_frame is None: nrframes = int(end_frame)+1 else: nrframes = nrframes+1 #Load DNN net_fn = model_path + 'deploy.prototxt' param_fn = model_path + model_name #'bvlc_googlenet.caffemodel' if gpu is None: print("SHITTTTTTTTTTTTTT You're running CPU man =D") else: caffe.set_mode_gpu() caffe.set_device(int(args.gpu)) print(("GPU mode [device id: %s]" % args.gpu)) print("using GPU, but you'd still better make a cup of coffee") # Patching model to be able to compute gradients. # Note that you can also manually add "force_backward: true" line to "deploy.prototxt". 
model = caffe.io.caffe_pb2.NetParameter() text_format.Merge(open(net_fn).read(), model) model.force_backward = True open('tmp.prototxt', 'w').write(str(model)) net = caffe.Classifier('tmp.prototxt', param_fn, mean = np.float32([104.0, 116.0, 122.0]), # ImageNet mean, training set dependent channel_swap = (2,1,0)) # the reference model has channels in BGR order instead of RGB if verbose == 3: from IPython.display import clear_output, Image, display print("display turned on") frame = np.float32(PIL.Image.open(input + '/%08d.%s' % (frame_i, image_type) )) if preview != 0: frame = np.float32(resizePicture(input + '/%08d.%s' % (frame_i, image_type), preview)) now = time.time() totaltime = 0 if blend == 'loop': blend_forward = True blend_at = 0.4 blend_step = 0.1 for i in range(frame_i, nrframes): print(('Processing frame #{}').format(frame_i)) #Choosing Layer if layers == 'customloop': #loop over layers as set in layersloop array endparam = layersloop[frame_i % len(layersloop)] else: #loop through layers one at a time until this specific layer endparam = layers[frame_i % len(layers)] #Choosing between normal dreaming, and guided dreaming if guide_image is None: frame = deepdream(net, frame, image_type=image_type, verbose=verbose, iter_n = iterations, step_size = stepsize, octave_n = octaves, octave_scale = octave_scale, jitter=jitter, end = endparam) else: guide = np.float32(PIL.Image.open(guide_image)) print('Setting up Guide with selected image') guide_features = prepare_guide(net,PIL.Image.open(guide_image), end=endparam) frame = deepdream_guided(net, frame, image_type=image_type, verbose=verbose, iter_n = iterations, step_size = stepsize, octave_n = octaves, octave_scale = octave_scale, jitter=jitter, end = endparam, objective_fn=objective_guide, guide_features=guide_features,) saveframe = output + "/%08d.%s" % (frame_i, image_type) later = time.time() difference = int(later - now) totaltime += difference avgtime = (totaltime / i) # Stats (stolen + adapted from Samim: https://github.com/samim23/DeepDreamAnim/blob/master/dreamer.py) print('***************************************') print('Saving Image As: ' + saveframe) print('Frame ' + str(i) + ' of ' + str(nrframes-1)) print('Frame Time: ' + str(difference) + 's') timeleft = avgtime * ((nrframes-1) - frame_i) m, s = divmod(timeleft, 60) h, m = divmod(m, 60) print('Estimated Total Time Remaining: ' + str(timeleft) + 's (' + "%d:%02d:%02d" % (h, m, s) + ')') print('***************************************') PIL.Image.fromarray(np.uint8(frame)).save(saveframe) newframe = input + "/%08d.%s" % (frame_i,image_type) if blend == 0: newimg = PIL.Image.open(newframe) if preview != 0: newimg = resizePicture(newframe,preview) frame = newimg else: if blend == 'random': blendval=randint(5,10)/10. 
elif blend == 'loop': if blend_at > 1 - blend_step: blend_forward = False elif blend_at <= 0.5: blend_forward = True if blend_forward: blend_at += blend_step else: blend_at -= blend_step blendval = blend_at else: blendval = float(blend) frame = morphPicture(saveframe,newframe,blendval,preview) frame = np.float32(frame) now = time.time() frame_i += 1 if __name__ == "__main__": parser = argparse.ArgumentParser(description='Dreaming in videos.') parser.add_argument( '-i','--input', help='Input directory where extracted frames are stored', required=True) parser.add_argument( '-o','--output', help='Output directory where processed frames are to be stored', required=True) parser.add_argument( '-it','--image_type', help='Specify whether jpg or png ', required=True) parser.add_argument( "--gpu", default= None, help="Switch for gpu computation." ) #int can chose index of gpu, if there are multiple gpu's to chose from parser.add_argument( '-t', '--model_path', dest='model_path', default='../caffe/models/bvlc_googlenet/', help='Model directory to use') parser.add_argument( '-m', '--model_name', dest='model_name', default='bvlc_googlenet.caffemodel', help='Caffe Model name to use') parser.add_argument( '-p','--preview', type=int, required=False, help='Preview image width. Default: 0') parser.add_argument( '-oct','--octaves', type=int, required=False, help='Octaves. Default: 4') parser.add_argument( '-octs','--octavescale', type=float, required=False, help='Octave Scale. Default: 1.4',) parser.add_argument( '-itr','--iterations', type=int, required=False, help='Iterations. Default: 10') parser.add_argument( '-j','--jitter', type=int, required=False, help='Jitter. Default: 32') parser.add_argument( '-z','--zoom', type=int, required=False, help='Zoom in Amount. Default: 1') parser.add_argument( '-s','--stepsize', type=float, required=False, help='Step Size. Default: 1.5') parser.add_argument( '-b','--blend', type=str, required=False, help='Blend Amount. Default: "0.5" (constant), or "loop" (0.5-1.0), or "random"') parser.add_argument( '-l','--layers', nargs="+", type=str, required=False, help='Array of Layers to loop through. 
Default: [customloop] \ - or choose ie [inception_4c/output] for that single layer') parser.add_argument( '-v', '--verbose', type=int, required=False, help="verbosity [0-3]") parser.add_argument( '-gi', '--guide_image', required=False, help="path to guide image") parser.add_argument( '-sf', '--start_frame', type=int, required=False, help="starting frame nr") parser.add_argument( '-ef', '--end_frame', type=int, required=False, help="end frame nr") parser.add_argument( '-e', '--extract', type=int, required=False, help="Extract frames from video") args = parser.parse_args() if not args.model_path[-1] == '/': args.model_path = args.model_path + '/' if not os.path.exists(args.model_path): print("Model directory not found") print("Please set the model_path to a correct caffe model directory") sys.exit(0) model = os.path.join(args.model_path, args.model_name) if not os.path.exists(model): print("Model not found") print("Please set the model_name to a correct caffe model") print("or download one with ./caffe_dir/scripts/download_model_binary.py caffe_dir/models/bvlc_googlenet") sys.exit(0) if args.extract == 1: extractVideo(args.input, args.output) else: main(args.input, args.output, args.image_type, args.gpu, args.model_path, args.model_name, args.preview, args.octaves, args.octavescale, args.iterations, args.jitter, args.zoom, args.stepsize, args.blend, args.layers, args.guide_image, args.start_frame, args.end_frame, args.verbose) Do you have any solutions on this problem? I was searching on Google for answers but couldn't find any, thanks in advance for help! A: Install caffe from source then it will work. A: I install from the source but get this same problem. Still no idea what happen. A: Faced the same issue, while importing Caffe after installing Caffe in windows with GPU, could fix it by copying <CAFFE installation>/caffe/python/caffe/ to <Python Directory>/Lib/site-packages Hope this will help, all the best..!
ModuleNotFoundError: No module named 'caffe._caffe' on Windows 10
I wanted to make a deepdream video using this script: https://github.com/graphific/DeepDreamVideo. I had to make few changes to it but now I'm receiving this error: Traceback (most recent call last): File "C:\Users\Daniel\Desktop\deepdream-master\2_dreaming_time.py", line 20, in <module> import caffe File "C:\Users\Daniel\Desktop\deepdream-master\caffe\python\caffe\__init__.py", line 1, in <module> from .pycaffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, RMSPropSolver, AdaDeltaSolver, AdamSolver, NCCL, Timer File "C:\Users\Daniel\Desktop\deepdream-master\caffe\python\caffe\pycaffe.py", line 13, in <module> from ._caffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, \ ModuleNotFoundError: No module named 'caffe._caffe' I've installed caffe by pip install caffe-ssd-x86 but it doesn't resolve this problem. I'm using windows 10 and Python 3.8 The code I'm using now: #!/usr/bin/python __author__ = 'graphific' import argparse import os, os.path import errno import sys import time import subprocess from random import randint from io import StringIO import numpy as np import scipy.ndimage as nd import PIL.Image from google.protobuf import text_format sys.path.insert(0, r'C:\Users\Daniel\Desktop\deepdream-master\caffe\python') import caffe caffe.set_mode_gpu() def extractVideo(inputdir, outputdir): print(subprocess.Popen('ffmpeg -i ' + inputdir + ' -f image2 ' + outputdir + '/%08d.png', shell=True, stdout=subprocess.PIPE).stdout.read()) def showarray(a, fmt='jpeg'): a = np.uint8(np.clip(a, 0, 255)) f = StringIO() PIL.Image.fromarray(a).save(f, fmt) display(Image(data=f.getvalue())) def showarrayHQ(a, fmt='png'): a = np.uint8(np.clip(a, 0, 255)) f = StringIO() PIL.Image.fromarray(a).save(f, fmt) display(Image(data=f.getvalue())) # a couple of utility functions for converting to and from Caffe's input image layout def preprocess(net, img): #print np.float32(img).shape return np.float32(np.rollaxis(img, 2)[::-1]) - net.transformer.mean['data'] def deprocess(net, img): return np.dstack((img + net.transformer.mean['data'])[::-1]) def objective_L2(dst): dst.diff[:] = dst.data #objective for guided dreaming def objective_guide(dst,guide_features): x = dst.data[0].copy() y = guide_features ch = x.shape[0] x = x.reshape(ch,-1) y = y.reshape(ch,-1) A = x.T.dot(y) # compute the matrix of dot-products with guide features dst.diff[0].reshape(ch,-1)[:] = y[:,A.argmax(1)] # select ones that match best #from https://github.com/jrosebr1/bat-country/blob/master/batcountry/batcountry.py def prepare_guide(net, image, end="inception_4c/output", maxW=224, maxH=224): # grab dimensions of input image (w, h) = image.size # GoogLeNet was trained on images with maximum width and heights # of 224 pixels -- if either dimension is larger than 224 pixels, # then we'll need to do some resizing if h > maxH or w > maxW: # resize based on width if w > h: r = maxW / float(w) # resize based on height else: r = maxH / float(h) # resize the image (nW, nH) = (int(r * w), int(r * h)) image = np.float32(image.resize((nW, nH), PIL.Image.BILINEAR)) (src, dst) = (net.blobs["data"], net.blobs[end]) src.reshape(1, 3, nH, nW) src.data[0] = preprocess(net, image) net.forward(end=end) guide_features = dst.data[0].copy() return guide_features # ------- # Make dreams # ------- def make_step(net, step_size=1.5, end='inception_4c/output', jitter=32, clip=True): '''Basic gradient ascent step.''' src = net.blobs['data'] # input image is stored in Net's 'data' blob dst = net.blobs[end] ox, oy = np.random.randint(-jitter, jitter + 1, 2) 
src.data[0] = np.roll(np.roll(src.data[0], ox, -1), oy, -2) # apply jitter shift net.forward(end=end) dst.diff[:] = dst.data # specify the optimization objective net.backward(start=end) g = src.diff[0] # apply normalized ascent step to the input image src.data[:] += step_size / np.abs(g).mean() * g src.data[0] = np.roll(np.roll(src.data[0], -ox, -1), -oy, -2) # unshift image if clip: bias = net.transformer.mean['data'] src.data[:] = np.clip(src.data, -bias, 255-bias) def deepdream(net, base_img, image_type, iter_n=10, octave_n=4, octave_scale=1.4, end='inception_4c/output', verbose = 1, clip=True, **step_params): # prepare base images for all octaves octaves = [preprocess(net, base_img)] for i in range(octave_n - 1): octaves.append(nd.zoom(octaves[-1], (1, 1.0 / octave_scale, 1.0 / octave_scale), order=1)) src = net.blobs['data'] detail = np.zeros_like(octaves[-1]) # allocate image for network-produced details for octave, octave_base in enumerate(octaves[::-1]): h, w = octave_base.shape[-2:] if octave > 0: # upscale details from the previous octave h1, w1 = detail.shape[-2:] detail = nd.zoom(detail, (1, 1.0 * h / h1, 1.0 * w / w1), order=1) src.reshape(1,3,h,w) # resize the network's input image size src.data[0] = octave_base+detail for i in range(iter_n): make_step(net, end=end, clip=clip, **step_params) # visualization vis = deprocess(net, src.data[0]) if not clip: # adjust image contrast if clipping is disabled vis = vis * (255.0 / np.percentile(vis, 99.98)) if verbose == 3: if image_type == "png": showarrayHQ(vis) elif image_type == "jpg": showarray(vis) print (octave, i, end, vis.shape) clear_output(wait=True) elif verbose == 2: print (octave, i, end, vis.shape) # extract details produced on the current octave detail = src.data[0]-octave_base # returning the resulting image return deprocess(net, src.data[0]) # -------------- # Guided Dreaming # -------------- def make_step_guided(net, step_size=1.5, end='inception_4c/output', jitter=32, clip=True, objective_fn=objective_guide, **objective_params): '''Basic gradient ascent step.''' #if objective_fn is None: # objective_fn = objective_L2 src = net.blobs['data'] # input image is stored in Net's 'data' blob dst = net.blobs[end] ox, oy = np.random.randint(-jitter, jitter+1, 2) src.data[0] = np.roll(np.roll(src.data[0], ox, -1), oy, -2) # apply jitter shift net.forward(end=end) objective_fn(dst, **objective_params) # specify the optimization objective net.backward(start=end) g = src.diff[0] # apply normalized ascent step to the input image src.data[:] += step_size/np.abs(g).mean() * g src.data[0] = np.roll(np.roll(src.data[0], -ox, -1), -oy, -2) # unshift image if clip: bias = net.transformer.mean['data'] src.data[:] = np.clip(src.data, -bias, 255-bias) def deepdream_guided(net, base_img, image_type, iter_n=10, octave_n=4, octave_scale=1.4, end='inception_4c/output', clip=True, verbose=1, objective_fn=objective_guide, **step_params): #if objective_fn is None: # objective_fn = objective_L2 # prepare base images for all octaves octaves = [preprocess(net, base_img)] for i in range(octave_n-1): octaves.append(nd.zoom(octaves[-1], (1, 1.0/octave_scale,1.0/octave_scale), order=1)) src = net.blobs['data'] detail = np.zeros_like(octaves[-1]) # allocate image for network-produced details for octave, octave_base in enumerate(octaves[::-1]): h, w = octave_base.shape[-2:] if octave > 0: # upscale details from the previous octave h1, w1 = detail.shape[-2:] detail = nd.zoom(detail, (1, 1.0*h/h1,1.0*w/w1), order=1) src.reshape(1,3,h,w) # resize the 
network's input image size src.data[0] = octave_base+detail for i in range(iter_n): make_step_guided(net, end=end, clip=clip, objective_fn=objective_fn, **step_params) # visualization vis = deprocess(net, src.data[0]) if not clip: # adjust image contrast if clipping is disabled vis = vis*(255.0/np.percentile(vis, 99.98)) if verbose == 3: if image_type == "png": showarrayHQ(vis) elif image_type == "jpg": showarray(vis) print(octave, i, end, vis.shape) clear_output(wait=True) elif verbose == 2: print(octave, i, end, vis.shape) # extract details produced on the current octave detail = src.data[0]-octave_base # returning the resulting image return deprocess(net, src.data[0]) def resizePicture(image,width): img = PIL.Image.open(image) basewidth = width wpercent = (basewidth/float(img.size[0])) hsize = int((float(img.size[1])*float(wpercent))) return img.resize((basewidth,hsize), PIL.Image.ANTIALIAS) def morphPicture(filename1,filename2,blend,width): img1 = PIL.Image.open(filename1) img2 = PIL.Image.open(filename2) if width != 0: img2 = resizePicture(filename2,width) return PIL.Image.blend(img1, img2, blend) def make_sure_path_exists(path): ''' make sure input and output directory exist, if not create them. If another error (permission denied) throw an error. ''' try: os.makedirs(path) except OSError as exception: if exception.errno != errno.EEXIST: raise layersloop = ['inception_4c/output', 'inception_4d/output', 'inception_4e/output', 'inception_5a/output', 'inception_5b/output', 'inception_5a/output', 'inception_4e/output', 'inception_4d/output', 'inception_4c/output'] def main(input, output, image_type, gpu, model_path, model_name, preview, octaves, octave_scale, iterations, jitter, zoom, stepsize, blend, layers, guide_image, start_frame, end_frame, verbose): make_sure_path_exists(input) make_sure_path_exists(output) # let max nr of frames nrframes =len([name for name in os.listdir(input) if os.path.isfile(os.path.join(input, name))]) if nrframes == 0: print("no frames to process found") sys.exit(0) if preview is None: preview = 0 if octaves is None: octaves = 4 if octave_scale is None: octave_scale = 1.5 if iterations is None: iterations = 5 if jitter is None: jitter = 32 if zoom is None: zoom = 1 if stepsize is None: stepsize = 1.5 if blend is None: blend = 0.5 #can be nr (constant), random, or loop if verbose is None: verbose = 1 if layers is None: layers = 'customloop' #['inception_4c/output'] if start_frame is None: frame_i = 1 else: frame_i = int(start_frame) if not end_frame is None: nrframes = int(end_frame)+1 else: nrframes = nrframes+1 #Load DNN net_fn = model_path + 'deploy.prototxt' param_fn = model_path + model_name #'bvlc_googlenet.caffemodel' if gpu is None: print("SHITTTTTTTTTTTTTT You're running CPU man =D") else: caffe.set_mode_gpu() caffe.set_device(int(args.gpu)) print(("GPU mode [device id: %s]" % args.gpu)) print("using GPU, but you'd still better make a cup of coffee") # Patching model to be able to compute gradients. # Note that you can also manually add "force_backward: true" line to "deploy.prototxt". 
model = caffe.io.caffe_pb2.NetParameter() text_format.Merge(open(net_fn).read(), model) model.force_backward = True open('tmp.prototxt', 'w').write(str(model)) net = caffe.Classifier('tmp.prototxt', param_fn, mean = np.float32([104.0, 116.0, 122.0]), # ImageNet mean, training set dependent channel_swap = (2,1,0)) # the reference model has channels in BGR order instead of RGB if verbose == 3: from IPython.display import clear_output, Image, display print("display turned on") frame = np.float32(PIL.Image.open(input + '/%08d.%s' % (frame_i, image_type) )) if preview != 0: frame = np.float32(resizePicture(input + '/%08d.%s' % (frame_i, image_type), preview)) now = time.time() totaltime = 0 if blend == 'loop': blend_forward = True blend_at = 0.4 blend_step = 0.1 for i in range(frame_i, nrframes): print(('Processing frame #{}').format(frame_i)) #Choosing Layer if layers == 'customloop': #loop over layers as set in layersloop array endparam = layersloop[frame_i % len(layersloop)] else: #loop through layers one at a time until this specific layer endparam = layers[frame_i % len(layers)] #Choosing between normal dreaming, and guided dreaming if guide_image is None: frame = deepdream(net, frame, image_type=image_type, verbose=verbose, iter_n = iterations, step_size = stepsize, octave_n = octaves, octave_scale = octave_scale, jitter=jitter, end = endparam) else: guide = np.float32(PIL.Image.open(guide_image)) print('Setting up Guide with selected image') guide_features = prepare_guide(net,PIL.Image.open(guide_image), end=endparam) frame = deepdream_guided(net, frame, image_type=image_type, verbose=verbose, iter_n = iterations, step_size = stepsize, octave_n = octaves, octave_scale = octave_scale, jitter=jitter, end = endparam, objective_fn=objective_guide, guide_features=guide_features,) saveframe = output + "/%08d.%s" % (frame_i, image_type) later = time.time() difference = int(later - now) totaltime += difference avgtime = (totaltime / i) # Stats (stolen + adapted from Samim: https://github.com/samim23/DeepDreamAnim/blob/master/dreamer.py) print('***************************************') print('Saving Image As: ' + saveframe) print('Frame ' + str(i) + ' of ' + str(nrframes-1)) print('Frame Time: ' + str(difference) + 's') timeleft = avgtime * ((nrframes-1) - frame_i) m, s = divmod(timeleft, 60) h, m = divmod(m, 60) print('Estimated Total Time Remaining: ' + str(timeleft) + 's (' + "%d:%02d:%02d" % (h, m, s) + ')') print('***************************************') PIL.Image.fromarray(np.uint8(frame)).save(saveframe) newframe = input + "/%08d.%s" % (frame_i,image_type) if blend == 0: newimg = PIL.Image.open(newframe) if preview != 0: newimg = resizePicture(newframe,preview) frame = newimg else: if blend == 'random': blendval=randint(5,10)/10. 
elif blend == 'loop': if blend_at > 1 - blend_step: blend_forward = False elif blend_at <= 0.5: blend_forward = True if blend_forward: blend_at += blend_step else: blend_at -= blend_step blendval = blend_at else: blendval = float(blend) frame = morphPicture(saveframe,newframe,blendval,preview) frame = np.float32(frame) now = time.time() frame_i += 1 if __name__ == "__main__": parser = argparse.ArgumentParser(description='Dreaming in videos.') parser.add_argument( '-i','--input', help='Input directory where extracted frames are stored', required=True) parser.add_argument( '-o','--output', help='Output directory where processed frames are to be stored', required=True) parser.add_argument( '-it','--image_type', help='Specify whether jpg or png ', required=True) parser.add_argument( "--gpu", default= None, help="Switch for gpu computation." ) #int can chose index of gpu, if there are multiple gpu's to chose from parser.add_argument( '-t', '--model_path', dest='model_path', default='../caffe/models/bvlc_googlenet/', help='Model directory to use') parser.add_argument( '-m', '--model_name', dest='model_name', default='bvlc_googlenet.caffemodel', help='Caffe Model name to use') parser.add_argument( '-p','--preview', type=int, required=False, help='Preview image width. Default: 0') parser.add_argument( '-oct','--octaves', type=int, required=False, help='Octaves. Default: 4') parser.add_argument( '-octs','--octavescale', type=float, required=False, help='Octave Scale. Default: 1.4',) parser.add_argument( '-itr','--iterations', type=int, required=False, help='Iterations. Default: 10') parser.add_argument( '-j','--jitter', type=int, required=False, help='Jitter. Default: 32') parser.add_argument( '-z','--zoom', type=int, required=False, help='Zoom in Amount. Default: 1') parser.add_argument( '-s','--stepsize', type=float, required=False, help='Step Size. Default: 1.5') parser.add_argument( '-b','--blend', type=str, required=False, help='Blend Amount. Default: "0.5" (constant), or "loop" (0.5-1.0), or "random"') parser.add_argument( '-l','--layers', nargs="+", type=str, required=False, help='Array of Layers to loop through. 
Default: [customloop] \ - or choose ie [inception_4c/output] for that single layer') parser.add_argument( '-v', '--verbose', type=int, required=False, help="verbosity [0-3]") parser.add_argument( '-gi', '--guide_image', required=False, help="path to guide image") parser.add_argument( '-sf', '--start_frame', type=int, required=False, help="starting frame nr") parser.add_argument( '-ef', '--end_frame', type=int, required=False, help="end frame nr") parser.add_argument( '-e', '--extract', type=int, required=False, help="Extract frames from video") args = parser.parse_args() if not args.model_path[-1] == '/': args.model_path = args.model_path + '/' if not os.path.exists(args.model_path): print("Model directory not found") print("Please set the model_path to a correct caffe model directory") sys.exit(0) model = os.path.join(args.model_path, args.model_name) if not os.path.exists(model): print("Model not found") print("Please set the model_name to a correct caffe model") print("or download one with ./caffe_dir/scripts/download_model_binary.py caffe_dir/models/bvlc_googlenet") sys.exit(0) if args.extract == 1: extractVideo(args.input, args.output) else: main(args.input, args.output, args.image_type, args.gpu, args.model_path, args.model_name, args.preview, args.octaves, args.octavescale, args.iterations, args.jitter, args.zoom, args.stepsize, args.blend, args.layers, args.guide_image, args.start_frame, args.end_frame, args.verbose) Do you have any solutions on this problem? I was searching on Google for answers but couldn't find any, thanks in advance for help!
[ "Install caffe from source then it will work.\n", "I install from the source but get this same problem.\nStill no idea what happen.\n", "Faced the same issue, while importing Caffe after installing Caffe in windows with GPU, could fix it by copying <CAFFE installation>/caffe/python/caffe/ to <Python Directory>/Lib/site-packages\nHope this will help, all the best..!\n" ]
[ 1, 0, 0 ]
[]
[]
[ "caffe", "deep_dream", "pycaffe", "python", "python_3.x" ]
stackoverflow_0064472948_caffe_deep_dream_pycaffe_python_python_3.x.txt
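A small diagnostic sketch for the error in this entry (the file names are indicative only, since the exact extension name depends on the build): the traceback means the pure-Python caffe package was found, but the compiled _caffe extension that pycaffe builds is not sitting next to it, which is what the answers' suggestions of building pycaffe from source and copying the built caffe/python/caffe folder into site-packages address.

import glob
import importlib.util
import os

spec = importlib.util.find_spec("caffe")
if spec is None:
    print("caffe package not importable at all")
else:
    pkg_dir = os.path.dirname(spec.origin)
    print("caffe package found at:", pkg_dir)
    # On Windows the compiled module is usually a _caffe*.pyd file,
    # on Linux a _caffe*.so file.
    built = glob.glob(os.path.join(pkg_dir, "_caffe*"))
    if built:
        print("compiled extension present:", built)
    else:
        print("no _caffe extension here -> build pycaffe and copy the "
              "generated _caffe module into this directory")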
Q: Merge all excel files into one file with multiple sheets i would like some help. I have multiple excel files, each file only has one sheet. I would like to combine all excel files into just one file but with multiple sheets one sheet per excel file keeping the same sheet names. this is what i have so far: import pandas as pd from glob import glob import os excelWriter = pd.ExcelWriter("multiple_sheets.xlsx",engine='xlsxwriter') for file in glob('*.xlsx'): df = pd.read_excel(file) df.to_excel(excelWriter,sheet_name=file,index=False) excelWriter.save() All the excel files looks like this: https://iili.io/HfiJRHl.png sorry i cannot upload images here, dont know why but i pasted the link But all the excel files have the exact same columns and rows and just one sheet, the only difference is the sheet name Thanks in advance A: import pandas as pd import os output_excel = r'/home/bera/Desktop/all_excels.xlsx' #List all excel files in folder excel_folder= r'/home/bera/Desktop/GIStest/excelfiles/' excel_files = [os.path.join(root, file) for root, folder, files in os.walk(excel_folder) for file in files if file.endswith(".xlsx")] with pd.ExcelWriter(output_excel) as writer: for excel in excel_files: #For each excel sheet_name = pd.ExcelFile(excel).sheet_names[0] #Find the sheet name df = pd.read_excel(excel) #Create a dataframe df.to_excel(writer, sheet_name=sheet_name, index=False) #Write it to a sheet in the output excel
Merge all excel files into one file with multiple sheets
I would like some help. I have multiple excel files, each file only has one sheet. I would like to combine all excel files into just one file but with multiple sheets, one sheet per excel file, keeping the same sheet names. This is what I have so far: import pandas as pd from glob import glob import os excelWriter = pd.ExcelWriter("multiple_sheets.xlsx",engine='xlsxwriter') for file in glob('*.xlsx'): df = pd.read_excel(file) df.to_excel(excelWriter,sheet_name=file,index=False) excelWriter.save() All the excel files look like this: https://iili.io/HfiJRHl.png Sorry, I cannot upload images here, don't know why, but I pasted the link. But all the excel files have the exact same columns and rows and just one sheet; the only difference is the sheet name. Thanks in advance
[ "import pandas as pd\nimport os\n\noutput_excel = r'/home/bera/Desktop/all_excels.xlsx'\n\n#List all excel files in folder\nexcel_folder= r'/home/bera/Desktop/GIStest/excelfiles/'\nexcel_files = [os.path.join(root, file) for root, folder, files in os.walk(excel_folder) for file in files if file.endswith(\".xlsx\")]\n\nwith pd.ExcelWriter(output_excel) as writer:\n for excel in excel_files: #For each excel\n sheet_name = pd.ExcelFile(excel).sheet_names[0] #Find the sheet name\n df = pd.read_excel(excel) #Create a dataframe\n df.to_excel(writer, sheet_name=sheet_name, index=False) #Write it to a sheet in the output excel\n\n\n" ]
[ 1 ]
[]
[]
[ "excel", "pandas", "python" ]
stackoverflow_0074646115_excel_pandas_python.txt
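A variant sketch of the accepted answer for the case where sheets are named after the source files (as in the poster's first attempt). The folder layout is assumed rather than taken from the question; the extra care is needed because Excel limits sheet names to 31 characters and requires them to be unique.

import os
from glob import glob

import pandas as pd

output_excel = "multiple_sheets.xlsx"
seen = set()

with pd.ExcelWriter(output_excel) as writer:
    for path in glob("*.xlsx"):
        if os.path.basename(path) == output_excel:
            continue  # skip the output workbook if it lives in the same folder
        base = os.path.splitext(os.path.basename(path))[0][:31]  # 31-char Excel limit
        name, i = base, 1
        while name in seen:  # make duplicate names unique: "data", "data_1", ...
            suffix = f"_{i}"
            name = base[: 31 - len(suffix)] + suffix
            i += 1
        seen.add(name)
        pd.read_excel(path).to_excel(writer, sheet_name=name, index=False)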
Q: Smallest Square Function Consider a positive integer n. What will be the smallest number k such that if we concatenate the digits of n with those of k we get a perfect square? For example, for n=1 the smallest k is 6 since 16 is a perfect square. For n=4, k has to be 9 because 49 is a perfect square. For n=35, k is 344, since 35344=1882 is the smallest perfect square starting with the digits 35. Define the smallestSquare function that takes a positive integer n and returns the smallest integer k whose concatenation of the digits of n,k results in a perfect square. For now all I have is this, which checks wether the given number is a perfect square or not. I would like to solve this using recursion but I'm not even sure where to start. from math import sqrt def isSquare(n): return n == int(sqrt(n) + 0.5) ** 2 def smallestSquare(n): A: No recursion is necessary: def smallestSquare(n): x = 1 while isSquare(int(str(n)+str(x))) == False: x += 1 return int(str(n)+str(x)) A: If, given some number 'n', you are looking for the smallest perfect suqare that begins with 'n', the following is an approach that should work: import math def find_smallest_perfect_square(start: int) -> int: while True: if int(math.sqrt(start)) != math.sqrt(start): start += 1 else: return start def find_concatenation(n: int) -> int: str_n = str(n) while True: val = find_smallest_perfect_square(n) if str(val).startswith(str_n): return val else: n = val + 1 Testing: for i in range(10): print (f'The smallest perfect square that begins with {i} is {find_concatenation(i)}') # Result: # The smallest perfect square that begins with 0 is 0 # The smallest perfect square that begins with 1 is 1 # The smallest perfect square that begins with 2 is 25 # The smallest perfect square that begins with 3 is 36 # The smallest perfect square that begins with 4 is 4 # The smallest perfect square that begins with 5 is 529 # The smallest perfect square that begins with 6 is 64 # The smallest perfect square that begins with 7 is 729 # The smallest perfect square that begins with 8 is 81 # The smallest perfect square that begins with 9 is 9 If what you're looking for is "the smallest value of k" which, when concatenated with n, yields a perfect square - the above approach is not guaranteed to give you what you want. If that's what you require, please specify.
Smallest Square Function
Consider a positive integer n. What will be the smallest number k such that if we concatenate the digits of n with those of k we get a perfect square? For example, for n=1 the smallest k is 6 since 16 is a perfect square. For n=4, k has to be 9 because 49 is a perfect square. For n=35, k is 344, since 35344 = 188^2 is the smallest perfect square starting with the digits 35. Define the smallestSquare function that takes a positive integer n and returns the smallest integer k whose concatenation of the digits of n,k results in a perfect square. For now all I have is this, which checks whether the given number is a perfect square or not. I would like to solve this using recursion but I'm not even sure where to start. from math import sqrt def isSquare(n): return n == int(sqrt(n) + 0.5) ** 2 def smallestSquare(n):
[ "No recursion is necessary:\ndef smallestSquare(n):\n x = 1\n while isSquare(int(str(n)+str(x))) == False:\n x += 1\n return int(str(n)+str(x))\n\n", "If, given some number 'n', you are looking for the smallest perfect suqare that begins with 'n', the following is an approach that should work:\nimport math\n\ndef find_smallest_perfect_square(start: int) -> int:\n while True:\n if int(math.sqrt(start)) != math.sqrt(start):\n start += 1\n else:\n return start\n \ndef find_concatenation(n: int) -> int:\n str_n = str(n)\n while True:\n val = find_smallest_perfect_square(n)\n if str(val).startswith(str_n):\n return val\n else:\n n = val + 1\n\nTesting:\nfor i in range(10):\n print (f'The smallest perfect square that begins with {i} is {find_concatenation(i)}')\n\n# Result:\n # The smallest perfect square that begins with 0 is 0\n # The smallest perfect square that begins with 1 is 1\n # The smallest perfect square that begins with 2 is 25\n # The smallest perfect square that begins with 3 is 36\n # The smallest perfect square that begins with 4 is 4\n # The smallest perfect square that begins with 5 is 529\n # The smallest perfect square that begins with 6 is 64\n # The smallest perfect square that begins with 7 is 729\n # The smallest perfect square that begins with 8 is 81\n # The smallest perfect square that begins with 9 is 9\n\nIf what you're looking for is \"the smallest value of k\" which, when concatenated with n, yields a perfect square - the above approach is not guaranteed to give you what you want. If that's what you require, please specify.\n" ]
[ 1, 0 ]
[]
[]
[ "function", "python", "recursion" ]
stackoverflow_0074645753_function_python_recursion.txt
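A sketch of the same brute-force search using math.isqrt (Python 3.8+), which tests for perfect squares exactly and sidesteps the floating-point rounding that int(sqrt(n) + 0.5) can hit for very large numbers; unlike the first answer it returns k itself, as the problem statement asks.

from math import isqrt

def is_square(n: int) -> bool:
    r = isqrt(n)          # integer square root, exact for arbitrarily large n
    return r * r == n

def smallest_square(n: int) -> int:
    k = 1
    while not is_square(int(f"{n}{k}")):  # concatenate the digits of n and k
        k += 1
    return k

for n in (1, 4, 35):
    print(n, "->", smallest_square(n))  # expected: 6, 9, 344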
Q: python nested list sort based on 2nd value of the list is not working properly when it has value 10 here is my code for hackerrank nested list problem in python problem link:https://www.hackerrank.com/challenges/nested-list/problem?isFullScreen=true code: def sort(sub_li): return(sorted(sub_li, key = lambda x: x[1])) if __name__ == '__main__': x=int(input ()) stu=[] record=[] for i in range(0,x): stu.append(input()) stu.append(input()) record.append(stu) stu = [] namelist = [] sortedrecord = sort(record) print(sortedrecord) value = 0 for i,j in sortedrecord: if j>sortedrecord[0][1]: value = j break for i,j in sortedrecord: if j==value: namelist.append(i) namelist.sort() for i in namelist: print(i) problem is that the sort fuction is not sorting properly when it has a score of 10 sample input: 4 Shadab 8 Varun 8.9 Sarvesh 9.5 Harsh 10 output: [['Harsh', '10'], ['Shadab', '8'], ['Varun', '8.9'], ['Sarvesh', '9.5']] Shadab note: i have tried alternative sorting ways ,but the condition remains the same. A: In the lexicographic order, 10 comes befor 8, because 1 is before 8 You need to convert to float to get it work like you expect at input stu.append(input()) stu.append(float(input())) or at use def sort(sub_li): return sorted(sub_li, key=lambda x: float(x[1]))
python nested list sort based on 2nd value of the list is not working properly when it has value 10
here is my code for hackerrank nested list problem in python problem link:https://www.hackerrank.com/challenges/nested-list/problem?isFullScreen=true code: def sort(sub_li): return(sorted(sub_li, key = lambda x: x[1])) if __name__ == '__main__': x=int(input ()) stu=[] record=[] for i in range(0,x): stu.append(input()) stu.append(input()) record.append(stu) stu = [] namelist = [] sortedrecord = sort(record) print(sortedrecord) value = 0 for i,j in sortedrecord: if j>sortedrecord[0][1]: value = j break for i,j in sortedrecord: if j==value: namelist.append(i) namelist.sort() for i in namelist: print(i) problem is that the sort fuction is not sorting properly when it has a score of 10 sample input: 4 Shadab 8 Varun 8.9 Sarvesh 9.5 Harsh 10 output: [['Harsh', '10'], ['Shadab', '8'], ['Varun', '8.9'], ['Sarvesh', '9.5']] Shadab note: i have tried alternative sorting ways ,but the condition remains the same.
[ "In the lexicographic order, 10 comes befor 8, because 1 is before 8\nYou need to convert to float to get it work like you expect\n\nat input\n stu.append(input())\n stu.append(float(input()))\n\n\nor at use\ndef sort(sub_li):\n return sorted(sub_li, key=lambda x: float(x[1]))\n\n\n\n" ]
[ 0 ]
[]
[]
[ "nested_lists", "python", "secondary_indexes", "sorting" ]
stackoverflow_0074646219_nested_lists_python_secondary_indexes_sorting.txt
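A two-line demonstration of the point made in the answer above, using the sample input from the question: comparing the scores as strings puts "10" first because "1" sorts before "8", while a float key sorts them numerically.

scores = [["Shadab", "8"], ["Varun", "8.9"], ["Sarvesh", "9.5"], ["Harsh", "10"]]

print(sorted(scores, key=lambda x: x[1]))         # string compare: '10' comes first
print(sorted(scores, key=lambda x: float(x[1])))  # numeric compare: 8 < 8.9 < 9.5 < 10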
Q: Python Selenium Take a Screenshot Of An Whole Page Without Using Headless Mode I need to take a screenshot of an element which is very long and not fit on the screen, I can use headless mode to do this but site doesn't allow me to do even with user-agent and other stuff. But I can access the site with undetectedChromeDriver, so there's a extension to do this stuff called 'HTML Elements Screenshot'. That extension will allow you to select element and take the screenshot of an whole element for you. I automated that process with pyautogui and cv2 but I want to do it without this libraries. Is there any Javascript code to do it ? I did my research but can't find any useful. Thanks in advance The codes I tried: def save_screenshot(driver, path: str = 'screenshot.png') -> None: input('Let it go when u ready.') driver.switch_to.window(driver.window_handles[-1]) original_size = driver.get_window_size() required_width = driver.execute_script('return document.body.parentNode.scrollWidth') required_height = driver.execute_script('return document.body.parentNode.scrollHeight') driver.set_window_size(required_width, required_height) driver.save_screenshot(path) # has scrollbar #driver.find_element_by_tag_name('body').screenshot(path) # avoids scrollbar driver.set_window_size(original_size['width'], original_size['height']) def saveScreenshot(driver,path: str="screenshot.png"): input('Let it go when u ready.') driver.switch_to.window(driver.window_handles[-1]) el = driver.find_element(By.TAG_NAME,"body") el.screenshot(path) driver.quit() A: For python you can use pyppeteer. For javascript you can use puppeteer You can find the documentation here
Python Selenium Take a Screenshot Of An Whole Page Without Using Headless Mode
I need to take a screenshot of an element which is very long and not fit on the screen, I can use headless mode to do this but site doesn't allow me to do even with user-agent and other stuff. But I can access the site with undetectedChromeDriver, so there's a extension to do this stuff called 'HTML Elements Screenshot'. That extension will allow you to select element and take the screenshot of an whole element for you. I automated that process with pyautogui and cv2 but I want to do it without this libraries. Is there any Javascript code to do it ? I did my research but can't find any useful. Thanks in advance The codes I tried: def save_screenshot(driver, path: str = 'screenshot.png') -> None: input('Let it go when u ready.') driver.switch_to.window(driver.window_handles[-1]) original_size = driver.get_window_size() required_width = driver.execute_script('return document.body.parentNode.scrollWidth') required_height = driver.execute_script('return document.body.parentNode.scrollHeight') driver.set_window_size(required_width, required_height) driver.save_screenshot(path) # has scrollbar #driver.find_element_by_tag_name('body').screenshot(path) # avoids scrollbar driver.set_window_size(original_size['width'], original_size['height']) def saveScreenshot(driver,path: str="screenshot.png"): input('Let it go when u ready.') driver.switch_to.window(driver.window_handles[-1]) el = driver.find_element(By.TAG_NAME,"body") el.screenshot(path) driver.quit()
[ "For python you can use pyppeteer.\nFor javascript you can use puppeteer\nYou can find the documentation here\n" ]
[ 0 ]
[]
[]
[ "javascript", "python", "screenshot", "selenium", "undetected_chromedriver" ]
stackoverflow_0074645486_javascript_python_screenshot_selenium_undetected_chromedriver.txt
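A minimal pyppeteer sketch along the lines of the answer above (the URL and output path are placeholders); fullPage=True captures the whole scrollable page rather than just the viewport. Whether the target site tolerates a headless browser is a separate issue, as the poster notes.

import asyncio

from pyppeteer import launch

async def full_page_screenshot(url: str, path: str) -> None:
    browser = await launch(headless=True)
    page = await browser.newPage()
    await page.goto(url)
    await page.screenshot({"path": path, "fullPage": True})
    await browser.close()

asyncio.get_event_loop().run_until_complete(
    full_page_screenshot("https://example.com", "screenshot.png")
)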
Q: Is there any way to create a user generator? I need your help, I'm trying to create a program that can generate usernames by entering its first and lastname and apply some rules specifically, but I don't know how to store a list of elements into a list on Python. print('Welcome to your program!') print("How many users do you want to create: ") firstName = input('What is your firstname: \n').lower() lastName = input('What is your lastname: \n').lower() def username_gen(firstName, lastName): all_letters = firstName first_letters = lastName[0:3] username = '{}{}'.format(all_letters, first_letters) print(username +'company.com') username_gen(firstName, lastName) I can only create one user, and I would like to create more than 10 users. Can anybody help me? I tried using lists but it did not work, not sure If I did it correctly. A: Put a loop around the process, and collect into a list usernames = [] nb = int(input("How many users do you want to create ?")) for i in range(nb): print(f"Person n°{i + 1}") firstName = input('What is your firstname: ').lower() lastName = input('What is your lastname: ').lower() username = username_gen(firstName, lastName) print(">>", username) usernames.append(username) print(usernames) Using f-string that is easier to build strings with variable def username_gen(firstName, lastName): all_letters = firstName first_letters = lastName[0:3] return f'{all_letters}{first_letters}company.com'
Is there any way to create a user generator?
I need your help, I'm trying to create a program that can generate usernames by entering its first and lastname and apply some rules specifically, but I don't know how to store a list of elements into a list on Python. print('Welcome to your program!') print("How many users do you want to create: ") firstName = input('What is your firstname: \n').lower() lastName = input('What is your lastname: \n').lower() def username_gen(firstName, lastName): all_letters = firstName first_letters = lastName[0:3] username = '{}{}'.format(all_letters, first_letters) print(username +'company.com') username_gen(firstName, lastName) I can only create one user, and I would like to create more than 10 users. Can anybody help me? I tried using lists but it did not work, not sure If I did it correctly.
[ "Put a loop around the process, and collect into a list\nusernames = []\nnb = int(input(\"How many users do you want to create ?\"))\nfor i in range(nb):\n print(f\"Person n°{i + 1}\")\n firstName = input('What is your firstname: ').lower()\n lastName = input('What is your lastname: ').lower()\n username = username_gen(firstName, lastName)\n print(\">>\", username)\n usernames.append(username)\nprint(usernames)\n\nUsing f-string that is easier to build strings with variable\ndef username_gen(firstName, lastName):\n all_letters = firstName\n first_letters = lastName[0:3]\n return f'{all_letters}{first_letters}company.com'\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "list", "methods", "python", "tuples" ]
stackoverflow_0074646325_arrays_list_methods_python_tuples.txt
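A small follow-up sketch, not part of the answer above, showing the same username rule wrapped in a Python generator for the case where the names come from a pre-collected list instead of interactive input; the name pairs are placeholders, and in practice you would likely put a separator such as "@" before the domain.

def username_gen(first_name, last_name):
    return f"{first_name.lower()}{last_name.lower()[:3]}company.com"

def generate_usernames(name_pairs):
    for first, last in name_pairs:               # yields one username per pair
        yield username_gen(first, last)

pairs = [("Ada", "Lovelace"), ("Alan", "Turing")]
print(list(generate_usernames(pairs)))
# ['adalovcompany.com', 'alanturcompany.com']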
Q: how to pass multiple flags in argparse python I am trying to pass multiple flags, basically two flags. This is what my code looks like parser.add_argument('--naruto', action='store_true') parser.add_argument('--transformers', action='store_true') parser.add_argument('--goku', action='store_true') parser.add_argument('--anime', action='store_true') I know action='store_true' makes it a flag. basically in the command line I will pass the arguments like => nameOfTheScript.py --goku --anime and based on this later on I will be checking if "anime" was sent as an argument do x else do y. how can I achieve something like this? A: If I understand it correctly you ask how you can parse those args. Here is an example of argument parsing: parser.add_argument('--naruto', action='store_true') parser.add_argument('--transformers', action='store_true') parser.add_argument('--goku', action='store_true') parser.add_argument('--anime', action='store_true') args = parser.parse_args() # You can access your args with dot notation if args.goku: print("goku arg was passed in") if args.anime: print("anime arg was passed in")
how to pass multiple flags in argparse python
I am trying to pass multiple flags, basically two flags. This is what my code looks like parser.add_argument('--naruto', action='store_true') parser.add_argument('--transformers', action='store_true') parser.add_argument('--goku', action='store_true') parser.add_argument('--anime', action='store_true') I know action='store_true' makes it a flag. basically in the command line I will pass the arguments like => nameOfTheScript.py --goku --anime and based on this later on I will be checking if "anime" was sent as an argument do x else do y. how can I achieve something like this?
[ "If I understand it correctly you ask how you can parse those args.\nHere is an example of argument parsing:\nparser.add_argument('--naruto', action='store_true')\nparser.add_argument('--transformers', action='store_true')\nparser.add_argument('--goku', action='store_true')\nparser.add_argument('--anime', action='store_true')\n\nargs = parser.parse_args()\n\n# You can access your args with dot notation\nif args.goku:\n print(\"goku arg was passed in\")\n\nif args.anime:\n print(\"anime arg was passed in\")\n\n" ]
[ -1 ]
[]
[]
[ "argparse", "command_line", "command_line_arguments", "python", "python_3.x" ]
stackoverflow_0074646162_argparse_command_line_command_line_arguments_python_python_3.x.txt
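A self-contained version of the flag handling described above, runnable as python nameOfTheScript.py --goku --anime; the two print statements stand in for the real "do x / do y" logic.

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--naruto', action='store_true')
parser.add_argument('--transformers', action='store_true')
parser.add_argument('--goku', action='store_true')
parser.add_argument('--anime', action='store_true')
args = parser.parse_args()

if args.anime:
    print("anime flag set: do x")
else:
    print("anime flag not set: do y")

if args.goku:
    print("goku flag set")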
Q: How to stop randomised drawings in python from overlapping So, I have written a code that creates snowflakes using turtle. Essentially it asks the user how many snowflakes to generate. It then opens a turtle window and draws the snowflakes in a random place, size and colour. The random place is important for this question. Essentially, when it draws the snowflakes, is there a way to stop the snowflakes from being drawn in the (approx.) same area so that they don't overlap? Normally yes, this would be simple but due to its random nature, I have no clue how to do this. Here is the code: import time import sys import turtle import random restart = True print("This program creates snowflakes. Enjoy!") while restart == True: n = int(input("How many snowflakes do you want?: ")) screen = turtle.Screen() screen.bgcolor("black") speedy = turtle.Turtle() speedy.speed(0) sfcolor = ["yellow","gold","orange","red","violet","magenta","purple","navy","blue","skyblue","cyan","turquoise","lightgreen","green","darkgreen","white","BlueViolet","DeepSkyBlue","snow2","ForestGreen", "gainsboro", "GhostWhite", "goldenrod"] def snowflake(size): speedy.penup() speedy.forward(10 * size) speedy.left(45) speedy.pendown() speedy.color(random.choice(sfcolor)) for i in range(8): branch(size) speedy.left(45) def branch(size): for i in range(3): for i in range(3): speedy.forward(10.0 * size / 3) speedy.back(10.0 * size / 3) speedy.right(45) speedy.left(90) speedy.back(10.0 * size / 3) speedy.left(45) speedy.right(90) speedy.forward(10.0 * size) for i in range(n): x = random.randint(-375, 375) y = random.randint(-375, 375) sfsize = random.randint(1, 4) speedy.penup() speedy.goto(x, y) speedy.pendown() snowflake(sfsize) print(i+1," Snowflake(s)") restart = False print("Thanks for using the program! You will have the option to resart it shortly.") time.sleep(3) restart = input("Do you want to run the program again? Yes or No: ") restart = restart.upper() if restart == "YES": turtle.Screen().bye() restart = True print("Restarting...") elif restart == "NO": restart = False print("Thank you for using the program. Goodbye!") time.sleep(3) turtle.Screen().bye() sys.exit() else: print("\nError. Program Resetting...") turtle.Screen().bye() print("This program creates snowflakes. Enjoy!") restart = True A: Similar to @mx0's suggestion (+1), rather than a square, we define a circle that encompasses the snowflake and for each successful placement, keep a list of existing positions and radii. 
We also use the radius to avoid drawing partial snowflakes near the edge of our window: from turtle import Screen, Turtle from random import randint, choice WIDTH, HEIGHT = 480, 320 # small for testing SF_COLORS = [ 'yellow', 'gold', 'orange', 'red', 'violet', 'magenta', 'purple', 'navy', 'blue', 'skyblue', 'cyan', 'turquoise', 'lightgreen', 'green', 'darkgreen', 'white', 'BlueViolet', 'DeepSkyBlue', 'snow2', 'ForestGreen', 'gainsboro', 'GhostWhite', 'goldenrod', ] def snowflake(size): radius = 15 * size # circle roughly encompassing snowflake position = randint(radius - WIDTH/2, WIDTH/2 - radius), randint(radius - HEIGHT/2, HEIGHT/2 - radius) speedy.goto(position) trys = 0 while any(speedy.distance(other_position) < (radius + other_radius) for other_position, other_radius in snowflakes): position = randint(radius - WIDTH/2, WIDTH/2 - radius), randint(radius - HEIGHT/2, HEIGHT/2 - radius) speedy.goto(position) trys += 1 if trys > 100: return False # can't fit this snowflake, signal caller to try a different `size` snowflakes.append((position, radius)) speedy.color(choice(SF_COLORS)) speedy.penup() speedy.forward(10 * size) speedy.left(45) speedy.pendown() for _ in range(8): branch(size) speedy.left(45) speedy.penup() return True def branch(size): length = 10.0 * size / 3 for _ in range(3): for _ in range(3): speedy.forward(length) speedy.backward(length) speedy.right(45) speedy.left(90) speedy.backward(length) speedy.left(45) speedy.right(90) speedy.forward(length * 3) print("This program creates snowflakes. Enjoy!") n = int(input("How many snowflakes do you want?: ")) screen = Screen() screen.setup(WIDTH, HEIGHT) screen.bgcolor('black') speedy = Turtle() speedy.speed('fastest') snowflakes = [] flakes = 0 while flakes < n: sfsize = randint(1, 4) if snowflake(sfsize): flakes += 1 speedy.hideturtle() screen.exitonclick() However, fitting snowflakes like this creates an issue. The user might request more snowflakes than can fit in a given size window. The code above partially addresses this by returning failure and letting the caller figure out what to do. Here, we simply try another snowflake size. Smarter code would reduce the random size range based on failure, and quit trying altogether when a size 1 snowflake fails! I've removed the restart logic to simplify my example and because I'm not convinced it works.
How to stop randomised drawings in python from overlapping
So, I have written a code that creates snowflakes using turtle. Essentially it asks the user how many snowflakes to generate. It then opens a turtle window and draws the snowflakes in a random place, size and colour. The random place is important for this question. Essentially, when it draws the snowflakes, is there a way to stop the snowflakes from being drawn in the (approx.) same area so that they don't overlap? Normally yes, this would be simple but due to its random nature, I have no clue how to do this. Here is the code: import time import sys import turtle import random restart = True print("This program creates snowflakes. Enjoy!") while restart == True: n = int(input("How many snowflakes do you want?: ")) screen = turtle.Screen() screen.bgcolor("black") speedy = turtle.Turtle() speedy.speed(0) sfcolor = ["yellow","gold","orange","red","violet","magenta","purple","navy","blue","skyblue","cyan","turquoise","lightgreen","green","darkgreen","white","BlueViolet","DeepSkyBlue","snow2","ForestGreen", "gainsboro", "GhostWhite", "goldenrod"] def snowflake(size): speedy.penup() speedy.forward(10 * size) speedy.left(45) speedy.pendown() speedy.color(random.choice(sfcolor)) for i in range(8): branch(size) speedy.left(45) def branch(size): for i in range(3): for i in range(3): speedy.forward(10.0 * size / 3) speedy.back(10.0 * size / 3) speedy.right(45) speedy.left(90) speedy.back(10.0 * size / 3) speedy.left(45) speedy.right(90) speedy.forward(10.0 * size) for i in range(n): x = random.randint(-375, 375) y = random.randint(-375, 375) sfsize = random.randint(1, 4) speedy.penup() speedy.goto(x, y) speedy.pendown() snowflake(sfsize) print(i+1," Snowflake(s)") restart = False print("Thanks for using the program! You will have the option to resart it shortly.") time.sleep(3) restart = input("Do you want to run the program again? Yes or No: ") restart = restart.upper() if restart == "YES": turtle.Screen().bye() restart = True print("Restarting...") elif restart == "NO": restart = False print("Thank you for using the program. Goodbye!") time.sleep(3) turtle.Screen().bye() sys.exit() else: print("\nError. Program Resetting...") turtle.Screen().bye() print("This program creates snowflakes. Enjoy!") restart = True
[ "Similar to @mx0's suggestion (+1), rather than a square, we define a circle that encompasses the snowflake and for each successful placement, keep a list of existing positions and radii. We also use the radius to avoid drawing partial snowflakes near the edge of our window:\nfrom turtle import Screen, Turtle\nfrom random import randint, choice\n\nWIDTH, HEIGHT = 480, 320 # small for testing\n\nSF_COLORS = [\n 'yellow', 'gold', 'orange', 'red', 'violet',\n 'magenta', 'purple', 'navy', 'blue', 'skyblue',\n 'cyan', 'turquoise', 'lightgreen', 'green', 'darkgreen',\n 'white', 'BlueViolet', 'DeepSkyBlue', 'snow2', 'ForestGreen',\n 'gainsboro', 'GhostWhite', 'goldenrod',\n]\n\ndef snowflake(size):\n radius = 15 * size # circle roughly encompassing snowflake\n\n position = randint(radius - WIDTH/2, WIDTH/2 - radius), randint(radius - HEIGHT/2, HEIGHT/2 - radius)\n speedy.goto(position)\n\n trys = 0\n\n while any(speedy.distance(other_position) < (radius + other_radius) for other_position, other_radius in snowflakes):\n position = randint(radius - WIDTH/2, WIDTH/2 - radius), randint(radius - HEIGHT/2, HEIGHT/2 - radius)\n speedy.goto(position)\n\n trys += 1\n\n if trys > 100:\n return False # can't fit this snowflake, signal caller to try a different `size`\n\n snowflakes.append((position, radius))\n speedy.color(choice(SF_COLORS))\n\n speedy.penup()\n speedy.forward(10 * size)\n speedy.left(45)\n speedy.pendown()\n\n for _ in range(8):\n branch(size)\n speedy.left(45)\n\n speedy.penup()\n\n return True\n\ndef branch(size):\n length = 10.0 * size / 3\n\n for _ in range(3):\n for _ in range(3):\n speedy.forward(length)\n speedy.backward(length)\n speedy.right(45)\n\n speedy.left(90)\n speedy.backward(length)\n speedy.left(45)\n\n speedy.right(90)\n speedy.forward(length * 3)\n\nprint(\"This program creates snowflakes. Enjoy!\")\n\nn = int(input(\"How many snowflakes do you want?: \"))\n\nscreen = Screen()\nscreen.setup(WIDTH, HEIGHT)\nscreen.bgcolor('black')\n\nspeedy = Turtle()\nspeedy.speed('fastest')\n\nsnowflakes = []\n\nflakes = 0\n\nwhile flakes < n:\n sfsize = randint(1, 4)\n\n if snowflake(sfsize):\n flakes += 1\n\nspeedy.hideturtle()\nscreen.exitonclick()\n\nHowever, fitting snowflakes like this creates an issue. The user might request more snowflakes than can fit in a given size window. The code above partially addresses this by returning failure and letting the caller figure out what to do. Here, we simply try another snowflake size. Smarter code would reduce the random size range based on failure, and quit trying altogether when a size 1 snowflake fails!\n\nI've removed the restart logic to simplify my example and because I'm not convinced it works.\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x", "python_turtle", "turtle_graphics" ]
stackoverflow_0074639715_python_python_3.x_python_turtle_turtle_graphics.txt
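The core of the answer above is the circle-overlap test; pulled out as a standalone helper (a sketch assuming Python 3.8+ for math.dist) it reads:

import math

def overlaps(center_a, radius_a, center_b, radius_b):
    # two circles overlap when the distance between their centres is
    # smaller than the sum of their radii
    return math.dist(center_a, center_b) < radius_a + radius_b

print(overlaps((0, 0), 30, (40, 0), 15))   # True: centres 40 apart, radii sum to 45

Each placed snowflake keeps its (centre, radius) pair, and a new random position is rejected while it overlaps any of them.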
Q: grpc about 'a client-to-server stream RPC' but with an error of 'Exception iterating requests' I made a demo about 'a client-to-server stream RPC', but when I run client, it appears an error as below: Traceback (most recent call last): File "C:/Users/Administrator/Desktop/back_test_v2/gRPC/client/order_client.py", line 57, in <module> run_client() File "C:/Users/Administrator/Desktop/back_test_v2/gRPC/client/order_client.py", line 41, in run_client response = stub.TransOrder(OrderRequest(orders = [SingleOrder(contract='asd', File "C:\ProgramData\Anaconda3\lib\site-packages\grpc\_channel.py", line 1108, in __call__ return _end_unary_response_blocking(state, call, False, None) File "C:\ProgramData\Anaconda3\lib\site-packages\grpc\_channel.py", line 826, in _end_unary_response_blocking raise _InactiveRpcError(state) grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: status = StatusCode.UNKNOWN details = "Exception iterating requests!" debug_error_string = "None" > and codes of server class PlaceOrder(PlaceOrderServicer): def TransOrder(self, request_iterator, context): for request in request_iterator: print(f'contract is {request.contract}, begin_pos is {request.begin_pos}, end_pos is {request.end_pos}') return OrderReply() def serve(): server = grpc.server(futures.ThreadPoolExecutor()) add_PlaceOrderServicer_to_server(PlaceOrder(), server) server.add_insecure_port('[::]:50062') server.start() codes of client: def data(): for i in range(1, 5): request = OrderRequest(orders=[SingleOrder(contract='asd', begin_pos=i, end_pos=i)]) print("Visiting OrderRequest %s" % request.contract) yield request def run_client(): with grpc.insecure_channel('localhost:50062') as channel: stub = PlaceOrderStub(channel) data1 = data() response = stub.TransOrder(data1) protos service PlaceOrder { rpc TransOrder (stream OrderRequest) returns (OrderReply) {} } //报单 message SingleOrder { string contract = 1; int32 begin_pos = 2; int32 end_pos = 3; } //输出参数 message OrderRequest { repeated SingleOrder orders = 1; } //输出参数 // 不需要返回消息 message OrderReply { } I make it as easy as possible, but I still can't figure out what is the problem. Could anyone help? A: The Exception iterating requests! error message means there is an Exception raised in the request iterator. I would recommend to add a try-catch clause in the client-side def data() function. A: I Believe this post might be bit late, But at least for those who might wonder over her looking for a resolution, hope below solves your issue, for me it did!! The non blocking stub method is looking for an iterator, but the yield command or the data() method returns a Generator object which might be reason your getting that error, try passing the generator's iterator attribute - that solved the issue for me. The code snippet- def data(): for i in range(1, 5): request = OrderRequest(orders=[SingleOrder(contract='asd', begin_pos=i, end_pos=i)]) print("Visiting OrderRequest %s" % request.contract) yield request def run_client(): with grpc.insecure_channel('localhost:50062') as channel: stub = PlaceOrderStub(channel) data1 = data() #Below change should resolve the issue response = stub.TransOrder(data1.__iter()__)
grpc about 'a client-to-server stream RPC' but with an error of 'Exception iterating requests'
I made a demo about 'a client-to-server stream RPC', but when I run client, it appears an error as below: Traceback (most recent call last): File "C:/Users/Administrator/Desktop/back_test_v2/gRPC/client/order_client.py", line 57, in <module> run_client() File "C:/Users/Administrator/Desktop/back_test_v2/gRPC/client/order_client.py", line 41, in run_client response = stub.TransOrder(OrderRequest(orders = [SingleOrder(contract='asd', File "C:\ProgramData\Anaconda3\lib\site-packages\grpc\_channel.py", line 1108, in __call__ return _end_unary_response_blocking(state, call, False, None) File "C:\ProgramData\Anaconda3\lib\site-packages\grpc\_channel.py", line 826, in _end_unary_response_blocking raise _InactiveRpcError(state) grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: status = StatusCode.UNKNOWN details = "Exception iterating requests!" debug_error_string = "None" > and codes of server class PlaceOrder(PlaceOrderServicer): def TransOrder(self, request_iterator, context): for request in request_iterator: print(f'contract is {request.contract}, begin_pos is {request.begin_pos}, end_pos is {request.end_pos}') return OrderReply() def serve(): server = grpc.server(futures.ThreadPoolExecutor()) add_PlaceOrderServicer_to_server(PlaceOrder(), server) server.add_insecure_port('[::]:50062') server.start() codes of client: def data(): for i in range(1, 5): request = OrderRequest(orders=[SingleOrder(contract='asd', begin_pos=i, end_pos=i)]) print("Visiting OrderRequest %s" % request.contract) yield request def run_client(): with grpc.insecure_channel('localhost:50062') as channel: stub = PlaceOrderStub(channel) data1 = data() response = stub.TransOrder(data1) protos service PlaceOrder { rpc TransOrder (stream OrderRequest) returns (OrderReply) {} } //报单 message SingleOrder { string contract = 1; int32 begin_pos = 2; int32 end_pos = 3; } //输出参数 message OrderRequest { repeated SingleOrder orders = 1; } //输出参数 // 不需要返回消息 message OrderReply { } I make it as easy as possible, but I still can't figure out what is the problem. Could anyone help?
[ "The Exception iterating requests! error message means there is an Exception raised in the request iterator. I would recommend to add a try-catch clause in the client-side def data() function.\n", "I Believe this post might be bit late, But at least for those who might wonder over her looking for a resolution, hope below solves your issue, for me it did!!\nThe non blocking stub method is looking for an iterator, but the yield command or the data() method returns a Generator object which might be reason your getting that error, try passing the generator's iterator attribute - that solved the issue for me.\nThe code snippet-\ndef data():\n for i in range(1, 5):\n request = OrderRequest(orders=[SingleOrder(contract='asd',\n begin_pos=i,\n end_pos=i)])\n print(\"Visiting OrderRequest %s\" % request.contract)\n yield request\n\n\ndef run_client():\n with grpc.insecure_channel('localhost:50062') as channel:\n stub = PlaceOrderStub(channel)\n\n data1 = data()\n #Below change should resolve the issue\n response = stub.TransOrder(data1.__iter()__)\n\n" ]
[ 1, 0 ]
[]
[]
[ "grpc", "python" ]
stackoverflow_0070417991_grpc_python.txt
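A sketch of the first answer's suggestion, assuming the generated OrderRequest and SingleOrder classes are imported exactly as in the question. Wrapping the generator body in try/except surfaces the real client-side error instead of the opaque "Exception iterating requests!" status; note that OrderRequest only has the repeated orders field, so printing request.contract inside the generator is a likely source of the original exception.

def data():
    try:
        for i in range(1, 5):
            request = OrderRequest(
                orders=[SingleOrder(contract='asd', begin_pos=i, end_pos=i)])
            # OrderRequest has no top-level 'contract'; read it from the order
            print("Visiting OrderRequest %s" % request.orders[0].contract)
            yield request
    except Exception as exc:
        # without this, grpc only reports "Exception iterating requests!"
        print("request generator failed:", exc)
        raise

A generator is already an iterator, so passing data() straight to stub.TransOrder() works once the generator body itself no longer raises.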
Q: How can I run pygame and PyQt5 together but seperately? I have a question about using pyqt5 and pygame. I have already made a pygame script and a pyqt5 script. The problem is that when I want to make pygame excute the game, it shows a ranking board in pyqt5 and plays game by a pygame script. This is my pyqt UI code: from PyQt5 import QtCore, QtGui, QtWidgets from PyQt5.QtGui import QMovie import rankingUI import sys import main import pickle import subprocess class Ui_MainWindow(object): def setupUi(self, MainWindow): MainWindow.setObjectName("MainWindow") MainWindow.setFixedSize(800,400) self.centralwidget = QtWidgets.QWidget(MainWindow) self.centralwidget.setObjectName("centralwidget") # create label self.label = QtWidgets.QLabel(self.centralwidget) self.label.move(0,0) # start button self.button = QtWidgets.QPushButton(self.centralwidget) self.button.setGeometry(320,300,150,100) self.button.setStyleSheet("border-image:url(./assets/ui/start_btn.png); border:0px;") self.button.clicked.connect(self.game_start) # title self.title = QtWidgets.QLabel(self.centralwidget) self.title.setGeometry(250, 10, 300, 100) self.title.setStyleSheet("border-image:url(./assets/ui/dinotitle.png); border:0px;") # input nick self.nick_inp = QtWidgets.QLineEdit("ENTER YOUR NICK",self.centralwidget) self.nick_inp.setAlignment(QtCore.Qt.AlignCenter) self.nick_inp.setGeometry(320,290, 150 ,20) #ranking self.ranking_btn = QtWidgets.QPushButton(self.centralwidget) self.ranking_btn.setStyleSheet("border-image:url(./assets/ui/rank_btn.png); border:0px;") self.ranking_btn.setGeometry(730, 325, 50, 50) self.ranking_btn.clicked.connect(self.popup_ranking) # add popup self.add_dia = QtWidgets.QDialog() self.rank_dia = QtWidgets.QDialog() # add label to main window MainWindow.setCentralWidget(self.centralwidget) # set qmovie as label self.movie = QMovie("assets/ui/dinogif.gif") self.label.setMovie(self.movie) self.movie.start() def game_start(self): player_nick = self.nick_inp.text() if(len(player_nick)==0): self.nick_inp.setText("ENTER YOUR NICK") return main.game_start(player_nick) def popup_ranking(self): # ui init self.rank_dia.setWindowModality(QtCore.Qt.ApplicationModal) self.rank_dia.setWindowTitle("RANKING") self.rank_dia.setFixedSize(500,330) rank_label = QtWidgets.QLabel("RANKKING") rank_label.setAlignment(QtCore.Qt.AlignCenter) rank_label.setFont(QtGui.QFont('Arial', 30)) output = QtWidgets.QTextEdit() output.setFont(QtGui.QFont('Ubuntu',15)) window = QtWidgets.QVBoxLayout() window.addWidget(rank_label) window.addWidget(output) # read data rank_list = [] ranking_dat = open("ranking.dat", 'rb') try: rank_list = pickle.load(ranking_dat) except: pass ranking_dat.close() # write data strFormat = '%-18s%-18s%-18s\n' strOut = strFormat % ('RANK', 'SCORE', 'NICK') rank_num = 1 strFormat = '%-20s%-20s%-20s\n' for x in sorted(rank_list, key=lambda s: s["Score"], reverse=True): tmp = [] tmp.append(str(rank_num)) rank_num += 1 for y in x: tmp.append(str(x[y])) strOut += strFormat % (tmp[0], tmp[2], tmp[1]) if rank_num == 10: break output.setText(strOut) self.rank_dia.setLayout(window) self.rank_dia.show() # def score_reg(self): # # popup UI setting # self.add_dia.setWindowTitle("score registration") # self.add_dia.setWindowModality(QtCore.Qt.ApplicationModal) # self.add_dia.setFixedSize(300,70) # # # add widget # nick_label = QtWidgets.QLabel("Insert Nickname :") # self.nick_input = QtWidgets.QLineEdit() # score_label = QtWidgets.QLabel("Your Score : ") # self.score_input = QtWidgets.QLabel("333") # reg_btn = 
QtWidgets.QPushButton("register") # reg_btn.clicked.connect(self.register) # # h_box1 = QtWidgets.QHBoxLayout() # h_box1.addWidget(nick_label) # h_box1.addWidget(self.nick_input) # # h_box2 = QtWidgets.QHBoxLayout() # h_box2.addWidget(score_label) # h_box2.addWidget(self.score_input) # h_box2.addStretch() # h_box2.addWidget(reg_btn) # # v_box = QtWidgets.QVBoxLayout() # v_box.addLayout(h_box1) # v_box.addLayout(h_box2) # # self.add_dia.setLayout(v_box) # self.add_dia.show() # # def register(self): # print(self.nick_input.text()) # print(self.score_input.text()) if __name__ == "__main__": app = QtWidgets.QApplication(sys.argv) window = QtWidgets.QMainWindow() ui = Ui_MainWindow() ui.setupUi(window) window.show() sys.exit(app.exec_()) This is the pygame code: import pygame as pyg import os import random import sys import pickle import pygame.time import ui nickname = ... # 화면 크기 SCREEN_HEIGHT = 600 SCREEN_WIDTH = 1600 SCREEN = ... # 달리는 모션 (running1, running2) RUNNING_MOTIONS = [pyg.image.load(os.path.join("assets/Dino", "DinoRun1.png")), pyg.image.load(os.path.join("assets/Dino", "DinoRun2.png"))] # 뛰는 모션, 숙이는 모션 (stooping1, stooping2) JUMPING_MOTION = pyg.image.load(os.path.join("assets/Dino", "DinoJump.png")) STOOPING_MOTIONS = [pyg.image.load(os.path.join("assets/Dino", "DinoStoop1.png")), pyg.image.load(os.path.join("assets/Dino", "DinoStoop2.png"))] # 선인장 SMALL_CACTUS_IMG = [pyg.image.load(os.path.join("assets/Cactus", "SmallCactus1.png")), pyg.image.load(os.path.join("assets/Cactus", "SmallCactus2.png")), pyg.image.load(os.path.join("assets/Cactus", "SmallCactus3.png"))] LARGE_CACTUS_IMG = [pyg.image.load(os.path.join("assets/Cactus", "LargeCactus1.png")), pyg.image.load(os.path.join("assets/Cactus", "LargeCactus2.png")), pyg.image.load(os.path.join("assets/Cactus", "LargeCactus3.png"))] # 새 모션 BIRD_MOTIONS = [pyg.image.load(os.path.join("assets/Bird", "Bird1.png")), pyg.image.load(os.path.join("assets/Bird", "Bird2.png"))] # 기타 (구름, 바닥) -> 하트 추가 예정 CLOUD = pyg.image.load(os.path.join("assets/Other", "Cloud.png")) GROUND = pyg.image.load(os.path.join("assets/Other", "Track.png")) global points class Dinosaur(): X_Dino = 80 Y_Dino = 310 Y_DinoStoop = 340 Jump_height = 8.5 hitScale = 0.5 def __init__(self): self.stoop_img = STOOPING_MOTIONS self.run_img = RUNNING_MOTIONS self.jump_img = JUMPING_MOTION self.dino_stoop = False self.dino_run = True self.dino_jump = False self.step_index = 0 # 움직임 인덱스 self.jump_height = self.Jump_height self.image = self.run_img[0] # 0, 1 인덱스 반복하여 애니메이션 구현 self.dino_hitbox = self.image.get_rect() # 공룡 히트박스 설정 self.dino_hitbox.x = self.X_Dino * self.hitScale self.dino_hitbox.y = self.Y_Dino * self.hitScale def update(self, Input): if self.dino_stoop: self.stoop() if self.dino_run: self.run() if self.dino_jump: self.jump() if self.step_index >= 10: self.step_index = 0 # 공룡 동작 # 점프 if Input[pyg.K_UP] and not self.dino_jump: self.dino_stoop = False self.dino_run = False self.dino_jump = True # 숙이기 elif Input[pyg.K_DOWN] and not self.dino_jump: self.dino_stoop = True self.dino_run = False self.dino_jump = False # 달리기 elif not (self.dino_jump or Input[pyg.K_DOWN]): self.dino_stoop = False self.dino_run = True self.dino_jump = False def stoop(self): self.image = self.stoop_img[self.step_index // 5] self.dino_hitbox = self.image.get_rect() self.dino_hitbox.x = self.X_Dino self.dino_hitbox.y = self.Y_DinoStoop self.step_index += 1 def run(self): self.image = self.run_img[self.step_index // 5] # 5로 해야 속도 맞음 self.dino_hitbox = self.image.get_rect() self.dino_hitbox.x 
= self.X_Dino self.dino_hitbox.y = self.Y_Dino self.step_index += 1 def jump(self): self.image = self.jump_img if self.dino_jump: self.dino_hitbox.y -= self.jump_height * 4 self.jump_height -= 0.8 if self.jump_height < - self.Jump_height: self.dino_jump = False self.jump_height = self.Jump_height def draw(self, SCREEN): SCREEN.blit(self.image, (self.dino_hitbox.x, self.dino_hitbox.y)) class Cloud(): def __init__(self): self.x = SCREEN_WIDTH + random.randint(800, 1000) self.y = random.randint(50, 100) self.image = CLOUD self.width = self.image.get_width() def update(self): self.x -= game_speed if self.x < - self.width: self.x = SCREEN_WIDTH + random.randint(2600, 3000) self.y = random.randint(50, 100) def draw(self, SCREEN): SCREEN.blit(self.image, (self.x, self.y)) class Obstacle(): def __init__(self, image, type): self.image = image self.type = type self.rect = self.image[self.type].get_rect() self.rect.x = SCREEN_WIDTH def update(self): self.rect.x -= game_speed if self.rect.x < - self.rect.width: obstacles.pop() def draw(self, SCREEN): SCREEN.blit(self.image[self.type], self.rect) class SmallCactus(Obstacle): def __init__(self, image): self.type = random.randint(0, 2) super().__init__(image, self.type) self.rect.y = 325 class LargeCactus(Obstacle): def __init__(self, image): self.type = random.randint(0, 2) super().__init__(image, self.type) self.rect.y = 300 class Bird(Obstacle): def __init__(self, image): self.type = 0 super().__init__(image, self.type) self.rect.y = 250 self.index = 0 def draw(self, SCREEN): if self.index >= 9: self.index = 0 SCREEN.blit(self.image[self.index // 5], self.rect) self.index += 1 def main(): global game_speed, x_ground, y_ground, points, obstacles run = True clock = pyg.time.Clock() cloud = Cloud() player = Dinosaur() game_speed = 14 x_ground = 0 y_ground = 380 points = 0 font = pyg.font.Font('freesansbold.ttf', 20) obstacles = [] death_cnt = 0 def score(): global points, game_speed points += 1 if points % 100 == 0: game_speed += 1 text = font.render("points: " + str(points), True, (0,0,0)) text_rect = text.get_rect() text_rect.center = (1000, 40) SCREEN.blit(text, text_rect) def ground(): global x_ground, y_ground image_width = GROUND.get_width() SCREEN.blit(GROUND, (x_ground, y_ground)) SCREEN.blit(GROUND, (image_width + x_ground, y_ground)) if x_ground <= - image_width: SCREEN.blit(GROUND, (image_width + x_ground, y_ground)) x_ground = 0 x_ground -= game_speed while run: for pyEvent in pyg.event.get(): if pyEvent.type == pyg.QUIT: sys.exit() SCREEN.fill((255,255,255)) userInput = pyg.key.get_pressed() player.draw(SCREEN) player.update(userInput) if len(obstacles) == 0: if random.randint(0, 2) == 0: obstacles.append(SmallCactus(SMALL_CACTUS_IMG)) elif random.randint(0, 2) == 1: obstacles.append(LargeCactus(LARGE_CACTUS_IMG)) elif random.randint(0, 2) == 2: obstacles.append(Bird(BIRD_MOTIONS)) for ob in obstacles: ob.draw(SCREEN) ob.update() if player.dino_hitbox.colliderect(ob.rect): pyg.time.delay(500) death_cnt += 1 menu(death_cnt) ground() cloud.draw(SCREEN) cloud.update() score() clock.tick(30) pyg.display.update() def menu(death_cnt): global points run = True if death_cnt == 0: points = 0 while run: update(points) SCREEN.fill((255,255,255)) font = pyg.font.Font('freesansbold.ttf', 30) # 폰트 적용 오류...... 
if death_cnt == 0: text = font.render("Press any key to Start", True, (0,0,0)) # 한글 "시작하기" 로 변경 예정 text1 = font.render("DinoSaurGame", True, (0,0,0)) # "공룡게임"으로 변경 예정 elif death_cnt > 0: text = font.render("Press any key to Restart", True, (0,0,0)) score = font.render("Your Score : " + str(points), True, (0,0,0)) scoreRect = score.get_rect() scoreRect.center = (SCREEN_WIDTH // 2, SCREEN_HEIGHT // 2 + 50) SCREEN.blit(score, scoreRect) textRect = text.get_rect() textRect.center = (SCREEN_WIDTH // 2, SCREEN_HEIGHT // 2) SCREEN.blit(text, textRect) SCREEN.blit(RUNNING_MOTIONS[0], (SCREEN_WIDTH // 2 - 20, SCREEN_HEIGHT // 2 - 140)) pyg.display.update() for pyEvent in pyg.event.get(): if pyEvent.type == pyg.QUIT: sys.exit() if pyEvent.type == pyg.KEYDOWN: main() def update(score): pass def game_start(nick): pyg.init() global SCREEN global nickname nickname = nick SCREEN = pyg.display.set_mode((SCREEN_WIDTH, SCREEN_HEIGHT)) menu(death_cnt=0) When I quit the pygame window, pyqt5 quits too. How can I only quit the pygame window? A: Do not mix frameworks, mixing frameworks always means some kind of undefined behavior. The frameworks may interact poorly or completely conflict with one another. Getting it to work on your system doesn't mean it will work on another system or with a different version of any of the frameworks. If you use Qt, then I suggest to develop the game with Qt as well (see Qt Based Games).
How can I run pygame and PyQt5 together but seperately?
I have a question about using pyqt5 and pygame. I have already made a pygame script and a pyqt5 script. The problem is that when I want to make pygame excute the game, it shows a ranking board in pyqt5 and plays game by a pygame script. This is my pyqt UI code: from PyQt5 import QtCore, QtGui, QtWidgets from PyQt5.QtGui import QMovie import rankingUI import sys import main import pickle import subprocess class Ui_MainWindow(object): def setupUi(self, MainWindow): MainWindow.setObjectName("MainWindow") MainWindow.setFixedSize(800,400) self.centralwidget = QtWidgets.QWidget(MainWindow) self.centralwidget.setObjectName("centralwidget") # create label self.label = QtWidgets.QLabel(self.centralwidget) self.label.move(0,0) # start button self.button = QtWidgets.QPushButton(self.centralwidget) self.button.setGeometry(320,300,150,100) self.button.setStyleSheet("border-image:url(./assets/ui/start_btn.png); border:0px;") self.button.clicked.connect(self.game_start) # title self.title = QtWidgets.QLabel(self.centralwidget) self.title.setGeometry(250, 10, 300, 100) self.title.setStyleSheet("border-image:url(./assets/ui/dinotitle.png); border:0px;") # input nick self.nick_inp = QtWidgets.QLineEdit("ENTER YOUR NICK",self.centralwidget) self.nick_inp.setAlignment(QtCore.Qt.AlignCenter) self.nick_inp.setGeometry(320,290, 150 ,20) #ranking self.ranking_btn = QtWidgets.QPushButton(self.centralwidget) self.ranking_btn.setStyleSheet("border-image:url(./assets/ui/rank_btn.png); border:0px;") self.ranking_btn.setGeometry(730, 325, 50, 50) self.ranking_btn.clicked.connect(self.popup_ranking) # add popup self.add_dia = QtWidgets.QDialog() self.rank_dia = QtWidgets.QDialog() # add label to main window MainWindow.setCentralWidget(self.centralwidget) # set qmovie as label self.movie = QMovie("assets/ui/dinogif.gif") self.label.setMovie(self.movie) self.movie.start() def game_start(self): player_nick = self.nick_inp.text() if(len(player_nick)==0): self.nick_inp.setText("ENTER YOUR NICK") return main.game_start(player_nick) def popup_ranking(self): # ui init self.rank_dia.setWindowModality(QtCore.Qt.ApplicationModal) self.rank_dia.setWindowTitle("RANKING") self.rank_dia.setFixedSize(500,330) rank_label = QtWidgets.QLabel("RANKKING") rank_label.setAlignment(QtCore.Qt.AlignCenter) rank_label.setFont(QtGui.QFont('Arial', 30)) output = QtWidgets.QTextEdit() output.setFont(QtGui.QFont('Ubuntu',15)) window = QtWidgets.QVBoxLayout() window.addWidget(rank_label) window.addWidget(output) # read data rank_list = [] ranking_dat = open("ranking.dat", 'rb') try: rank_list = pickle.load(ranking_dat) except: pass ranking_dat.close() # write data strFormat = '%-18s%-18s%-18s\n' strOut = strFormat % ('RANK', 'SCORE', 'NICK') rank_num = 1 strFormat = '%-20s%-20s%-20s\n' for x in sorted(rank_list, key=lambda s: s["Score"], reverse=True): tmp = [] tmp.append(str(rank_num)) rank_num += 1 for y in x: tmp.append(str(x[y])) strOut += strFormat % (tmp[0], tmp[2], tmp[1]) if rank_num == 10: break output.setText(strOut) self.rank_dia.setLayout(window) self.rank_dia.show() # def score_reg(self): # # popup UI setting # self.add_dia.setWindowTitle("score registration") # self.add_dia.setWindowModality(QtCore.Qt.ApplicationModal) # self.add_dia.setFixedSize(300,70) # # # add widget # nick_label = QtWidgets.QLabel("Insert Nickname :") # self.nick_input = QtWidgets.QLineEdit() # score_label = QtWidgets.QLabel("Your Score : ") # self.score_input = QtWidgets.QLabel("333") # reg_btn = QtWidgets.QPushButton("register") # 
reg_btn.clicked.connect(self.register) # # h_box1 = QtWidgets.QHBoxLayout() # h_box1.addWidget(nick_label) # h_box1.addWidget(self.nick_input) # # h_box2 = QtWidgets.QHBoxLayout() # h_box2.addWidget(score_label) # h_box2.addWidget(self.score_input) # h_box2.addStretch() # h_box2.addWidget(reg_btn) # # v_box = QtWidgets.QVBoxLayout() # v_box.addLayout(h_box1) # v_box.addLayout(h_box2) # # self.add_dia.setLayout(v_box) # self.add_dia.show() # # def register(self): # print(self.nick_input.text()) # print(self.score_input.text()) if __name__ == "__main__": app = QtWidgets.QApplication(sys.argv) window = QtWidgets.QMainWindow() ui = Ui_MainWindow() ui.setupUi(window) window.show() sys.exit(app.exec_()) This is the pygame code: import pygame as pyg import os import random import sys import pickle import pygame.time import ui nickname = ... # 화면 크기 SCREEN_HEIGHT = 600 SCREEN_WIDTH = 1600 SCREEN = ... # 달리는 모션 (running1, running2) RUNNING_MOTIONS = [pyg.image.load(os.path.join("assets/Dino", "DinoRun1.png")), pyg.image.load(os.path.join("assets/Dino", "DinoRun2.png"))] # 뛰는 모션, 숙이는 모션 (stooping1, stooping2) JUMPING_MOTION = pyg.image.load(os.path.join("assets/Dino", "DinoJump.png")) STOOPING_MOTIONS = [pyg.image.load(os.path.join("assets/Dino", "DinoStoop1.png")), pyg.image.load(os.path.join("assets/Dino", "DinoStoop2.png"))] # 선인장 SMALL_CACTUS_IMG = [pyg.image.load(os.path.join("assets/Cactus", "SmallCactus1.png")), pyg.image.load(os.path.join("assets/Cactus", "SmallCactus2.png")), pyg.image.load(os.path.join("assets/Cactus", "SmallCactus3.png"))] LARGE_CACTUS_IMG = [pyg.image.load(os.path.join("assets/Cactus", "LargeCactus1.png")), pyg.image.load(os.path.join("assets/Cactus", "LargeCactus2.png")), pyg.image.load(os.path.join("assets/Cactus", "LargeCactus3.png"))] # 새 모션 BIRD_MOTIONS = [pyg.image.load(os.path.join("assets/Bird", "Bird1.png")), pyg.image.load(os.path.join("assets/Bird", "Bird2.png"))] # 기타 (구름, 바닥) -> 하트 추가 예정 CLOUD = pyg.image.load(os.path.join("assets/Other", "Cloud.png")) GROUND = pyg.image.load(os.path.join("assets/Other", "Track.png")) global points class Dinosaur(): X_Dino = 80 Y_Dino = 310 Y_DinoStoop = 340 Jump_height = 8.5 hitScale = 0.5 def __init__(self): self.stoop_img = STOOPING_MOTIONS self.run_img = RUNNING_MOTIONS self.jump_img = JUMPING_MOTION self.dino_stoop = False self.dino_run = True self.dino_jump = False self.step_index = 0 # 움직임 인덱스 self.jump_height = self.Jump_height self.image = self.run_img[0] # 0, 1 인덱스 반복하여 애니메이션 구현 self.dino_hitbox = self.image.get_rect() # 공룡 히트박스 설정 self.dino_hitbox.x = self.X_Dino * self.hitScale self.dino_hitbox.y = self.Y_Dino * self.hitScale def update(self, Input): if self.dino_stoop: self.stoop() if self.dino_run: self.run() if self.dino_jump: self.jump() if self.step_index >= 10: self.step_index = 0 # 공룡 동작 # 점프 if Input[pyg.K_UP] and not self.dino_jump: self.dino_stoop = False self.dino_run = False self.dino_jump = True # 숙이기 elif Input[pyg.K_DOWN] and not self.dino_jump: self.dino_stoop = True self.dino_run = False self.dino_jump = False # 달리기 elif not (self.dino_jump or Input[pyg.K_DOWN]): self.dino_stoop = False self.dino_run = True self.dino_jump = False def stoop(self): self.image = self.stoop_img[self.step_index // 5] self.dino_hitbox = self.image.get_rect() self.dino_hitbox.x = self.X_Dino self.dino_hitbox.y = self.Y_DinoStoop self.step_index += 1 def run(self): self.image = self.run_img[self.step_index // 5] # 5로 해야 속도 맞음 self.dino_hitbox = self.image.get_rect() self.dino_hitbox.x = self.X_Dino self.dino_hitbox.y = 
self.Y_Dino self.step_index += 1 def jump(self): self.image = self.jump_img if self.dino_jump: self.dino_hitbox.y -= self.jump_height * 4 self.jump_height -= 0.8 if self.jump_height < - self.Jump_height: self.dino_jump = False self.jump_height = self.Jump_height def draw(self, SCREEN): SCREEN.blit(self.image, (self.dino_hitbox.x, self.dino_hitbox.y)) class Cloud(): def __init__(self): self.x = SCREEN_WIDTH + random.randint(800, 1000) self.y = random.randint(50, 100) self.image = CLOUD self.width = self.image.get_width() def update(self): self.x -= game_speed if self.x < - self.width: self.x = SCREEN_WIDTH + random.randint(2600, 3000) self.y = random.randint(50, 100) def draw(self, SCREEN): SCREEN.blit(self.image, (self.x, self.y)) class Obstacle(): def __init__(self, image, type): self.image = image self.type = type self.rect = self.image[self.type].get_rect() self.rect.x = SCREEN_WIDTH def update(self): self.rect.x -= game_speed if self.rect.x < - self.rect.width: obstacles.pop() def draw(self, SCREEN): SCREEN.blit(self.image[self.type], self.rect) class SmallCactus(Obstacle): def __init__(self, image): self.type = random.randint(0, 2) super().__init__(image, self.type) self.rect.y = 325 class LargeCactus(Obstacle): def __init__(self, image): self.type = random.randint(0, 2) super().__init__(image, self.type) self.rect.y = 300 class Bird(Obstacle): def __init__(self, image): self.type = 0 super().__init__(image, self.type) self.rect.y = 250 self.index = 0 def draw(self, SCREEN): if self.index >= 9: self.index = 0 SCREEN.blit(self.image[self.index // 5], self.rect) self.index += 1 def main(): global game_speed, x_ground, y_ground, points, obstacles run = True clock = pyg.time.Clock() cloud = Cloud() player = Dinosaur() game_speed = 14 x_ground = 0 y_ground = 380 points = 0 font = pyg.font.Font('freesansbold.ttf', 20) obstacles = [] death_cnt = 0 def score(): global points, game_speed points += 1 if points % 100 == 0: game_speed += 1 text = font.render("points: " + str(points), True, (0,0,0)) text_rect = text.get_rect() text_rect.center = (1000, 40) SCREEN.blit(text, text_rect) def ground(): global x_ground, y_ground image_width = GROUND.get_width() SCREEN.blit(GROUND, (x_ground, y_ground)) SCREEN.blit(GROUND, (image_width + x_ground, y_ground)) if x_ground <= - image_width: SCREEN.blit(GROUND, (image_width + x_ground, y_ground)) x_ground = 0 x_ground -= game_speed while run: for pyEvent in pyg.event.get(): if pyEvent.type == pyg.QUIT: sys.exit() SCREEN.fill((255,255,255)) userInput = pyg.key.get_pressed() player.draw(SCREEN) player.update(userInput) if len(obstacles) == 0: if random.randint(0, 2) == 0: obstacles.append(SmallCactus(SMALL_CACTUS_IMG)) elif random.randint(0, 2) == 1: obstacles.append(LargeCactus(LARGE_CACTUS_IMG)) elif random.randint(0, 2) == 2: obstacles.append(Bird(BIRD_MOTIONS)) for ob in obstacles: ob.draw(SCREEN) ob.update() if player.dino_hitbox.colliderect(ob.rect): pyg.time.delay(500) death_cnt += 1 menu(death_cnt) ground() cloud.draw(SCREEN) cloud.update() score() clock.tick(30) pyg.display.update() def menu(death_cnt): global points run = True if death_cnt == 0: points = 0 while run: update(points) SCREEN.fill((255,255,255)) font = pyg.font.Font('freesansbold.ttf', 30) # 폰트 적용 오류...... 
if death_cnt == 0: text = font.render("Press any key to Start", True, (0,0,0)) # 한글 "시작하기" 로 변경 예정 text1 = font.render("DinoSaurGame", True, (0,0,0)) # "공룡게임"으로 변경 예정 elif death_cnt > 0: text = font.render("Press any key to Restart", True, (0,0,0)) score = font.render("Your Score : " + str(points), True, (0,0,0)) scoreRect = score.get_rect() scoreRect.center = (SCREEN_WIDTH // 2, SCREEN_HEIGHT // 2 + 50) SCREEN.blit(score, scoreRect) textRect = text.get_rect() textRect.center = (SCREEN_WIDTH // 2, SCREEN_HEIGHT // 2) SCREEN.blit(text, textRect) SCREEN.blit(RUNNING_MOTIONS[0], (SCREEN_WIDTH // 2 - 20, SCREEN_HEIGHT // 2 - 140)) pyg.display.update() for pyEvent in pyg.event.get(): if pyEvent.type == pyg.QUIT: sys.exit() if pyEvent.type == pyg.KEYDOWN: main() def update(score): pass def game_start(nick): pyg.init() global SCREEN global nickname nickname = nick SCREEN = pyg.display.set_mode((SCREEN_WIDTH, SCREEN_HEIGHT)) menu(death_cnt=0) When I quit the pygame window, pyqt5 quits too. How can I only quit the pygame window?
[ "Do not mix frameworks, mixing frameworks always means some kind of undefined behavior. The frameworks may interact poorly or completely conflict with one another. Getting it to work on your system doesn't mean it will work on another system or with a different version of any of the frameworks.\nIf you use Qt, then I suggest to develop the game with Qt as well (see Qt Based Games).\n" ]
[ 1 ]
[]
[]
[ "pygame", "pyqt5", "python" ]
stackoverflow_0074642105_pygame_pyqt5_python.txt
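Besides rewriting the game in Qt, one common workaround (an assumption, not part of the answer above) is to launch the pygame script as a separate OS process, so closing the game window cannot take the Qt event loop down with it; "dino_game.py" is a hypothetical standalone entry point that would read the nickname from sys.argv and call pygame.quit() rather than sys.exit() when its window closes.

import subprocess
import sys

def game_start(self):
    player_nick = self.nick_inp.text()
    if len(player_nick) == 0:
        self.nick_inp.setText("ENTER YOUR NICK")
        return
    # run the game in its own interpreter; the PyQt window keeps running
    subprocess.Popen([sys.executable, "dino_game.py", player_nick])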
Q: connect to Redshift from lambda and fetch some record using python is any one able successfully connect to Redshift from lambda. I want to fetch some records from Redshift table and feed to my bot (aws lex) Please suggest - this code is working outside lambda how to make it work inside lambda. import psycopg2 con=psycopg2.connect(dbname= 'qa', host='name', port= '5439', user= 'dwuser', password= '1234567') cur = con.cursor() cur.execute("SELECT * FROM pk.fact limit 4;") for result in cur: print (result) cur.close() con.close() A: Here is the node lambda that works to connecting to Redshift and pulling data from it. exports.handler = function(event, context, callback) { var response = { status: "SUCCESS", errors: [], response: {}, verbose: {} }; var client = new pg.Client(connectionString); client.connect(function(err) { if (err) { callback('Could not connect to RedShift ' + JSON.stringify(err)); } else { client.query(sql.Sql, function(err, result) { client.end(); if (err) { callback('Error Cleaning up Redshift' + err); } else { callback(null, ' Good ' + JSON.stringify(result)); } }); } }); }; Hope it helps. A: You need to fetch the records first. results = cur.fetchall() for result in results: ...
connect to Redshift from lambda and fetch some record using python
is any one able successfully connect to Redshift from lambda. I want to fetch some records from Redshift table and feed to my bot (aws lex) Please suggest - this code is working outside lambda how to make it work inside lambda. import psycopg2 con=psycopg2.connect(dbname= 'qa', host='name', port= '5439', user= 'dwuser', password= '1234567') cur = con.cursor() cur.execute("SELECT * FROM pk.fact limit 4;") for result in cur: print (result) cur.close() con.close()
[ "Here is the node lambda that works to connecting to Redshift and pulling data from it.\nexports.handler = function(event, context, callback) {\n var response = {\n status: \"SUCCESS\",\n errors: [],\n response: {},\n verbose: {}\n };\n\n var client = new pg.Client(connectionString);\n client.connect(function(err) {\n if (err) {\n callback('Could not connect to RedShift ' + JSON.stringify(err));\n } else {\n client.query(sql.Sql, function(err, result) {\n client.end();\n if (err) {\n callback('Error Cleaning up Redshift' + err);\n } else {\n callback(null, ' Good ' + JSON.stringify(result));\n }\n });\n }\n });\n};\n\nHope it helps.\n", "You need to fetch the records first.\nresults = cur.fetchall()\nfor result in results:\n ...\n\n" ]
[ 2, 0 ]
[]
[]
[ "amazon_s3", "amazon_web_services", "aws_lambda", "lambda", "python" ]
stackoverflow_0048308584_amazon_s3_amazon_web_services_aws_lambda_lambda_python.txt
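A hedged Python sketch of the same idea inside a Lambda handler: the connection parameters are the question's placeholders, psycopg2 has to be bundled with the deployment package (built for Amazon Linux, or supplied as a layer), and the function must run in a VPC/subnet that can actually reach the Redshift cluster.

import psycopg2

def lambda_handler(event, context):
    con = psycopg2.connect(dbname='qa', host='name', port='5439',
                           user='dwuser', password='1234567')
    try:
        with con.cursor() as cur:
            cur.execute("SELECT * FROM pk.fact LIMIT 4;")
            rows = cur.fetchall()            # materialise results before closing
    finally:
        con.close()
    # str() keeps the payload JSON-serialisable whatever the column types are
    return {"rows": [[str(v) for v in row] for row in rows]}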
Q: regular expressions parentheses python stuck with regular expressions. There is an example text: '[1 | Hi {name} | Hello {name} | Good morning {name}] other text {1 |{name}| 3| 4} OTHER {5 |{name}| 6| 7}' It is necessary to extract from it the constructions [1 | Hi {name} | hello {name} | Good morning {name}] and {1|{name}| 3| 4} and {5 |{name}| 6| 7} re.findall(r'\s*(\{[^(/{name})].+\})\s*', message) but I can't write a regular expression that matches the requirements expression {name} must be ignored A: This is tricky with regular expressions, but quite trivial with "parsing": def top_level_parens(s): stack = [] for n, c in enumerate(s): if c in '({[': stack.append(n) elif c in ')}]': m = stack.pop() if not stack: yield s[m:n+1] result = list(top_level_parens(your_string)) Assuming parens are properly balanced, if this is not always the case, add additional checks to the "parser". A: so far the solution for me is re.findall(r'(\{[^n].*?[^e]\})|(\[.*?\])', message)
regular expressions parentheses python
stuck with regular expressions. There is an example text: '[1 | Hi {name} | Hello {name} | Good morning {name}] other text {1 |{name}| 3| 4} OTHER {5 |{name}| 6| 7}' It is necessary to extract from it the constructions [1 | Hi {name} | hello {name} | Good morning {name}] and {1|{name}| 3| 4} and {5 |{name}| 6| 7} re.findall(r'\s*(\{[^(/{name})].+\})\s*', message) but I can't write a regular expression that matches the requirements expression {name} must be ignored
[ "This is tricky with regular expressions, but quite trivial with \"parsing\":\ndef top_level_parens(s):\n stack = []\n\n for n, c in enumerate(s):\n if c in '({[':\n stack.append(n)\n elif c in ')}]':\n m = stack.pop()\n if not stack:\n yield s[m:n+1]\n\n\nresult = list(top_level_parens(your_string))\n\nAssuming parens are properly balanced, if this is not always the case, add additional checks to the \"parser\".\n", "so far the solution for me is\nre.findall(r'(\\{[^n].*?[^e]\\})|(\\[.*?\\])', message)\n\n" ]
[ 1, 0 ]
[]
[]
[ "python", "python_re" ]
stackoverflow_0074645122_python_python_re.txt
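Usage sketch for the self-answer above: with two capture groups, re.findall returns 2-tuples, so the non-empty element of each tuple is the actual match. Note the [^n] and [^e] classes only skip candidates that start like {name} or whose closing brace is preceded by an "e" (as {name} is), so the pattern is tied to this particular placeholder.

import re

message = ('[1 | Hi {name} | Hello {name} | Good morning {name}] other text '
           '{1 |{name}| 3| 4} OTHER {5 |{name}| 6| 7}')

pairs = re.findall(r'(\{[^n].*?[^e]\})|(\[.*?\])', message)
matches = [brace or bracket for brace, bracket in pairs]
print(matches)
# ['[1 | Hi {name} | Hello {name} | Good morning {name}]',
#  '{1 |{name}| 3| 4}', '{5 |{name}| 6| 7}']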
Q: OverflowError: cannot convert float infinity to integer, after doing so import pandas as pd from sklearn.cluster import KMeans import matplotlib.pyplot as plt import numpy as np import quantstats as qs data = pd.read_csv('worldometer_data.csv') X = data.drop(columns=['Country/Region', 'Continent', 'Population', 'WHO Region']) # replace NaN values with 0 for i in X: X[i] = X[i].fillna(0) # getting rid of float infinity X = X.replace([np.inf, -np.inf, -0], 0) wcss = [] # getting Kmeans for i in range(0, 51): kmeans = KMeans(n_clusters=i, init='k-means++', max_iter=300, n_init=10, random_state=0) kmeans.fit(X) wcss.append(kmeans.inertia_) # visualizing the kmeans graph plt.plot(range(0, 51), wcss) plt.title('Elbow method') plt.xlabel('Number of clusters') plt.ylabel('WCSS') plt.show() I have checked the X array after getting rid of float infinity and it does that successfully . But when it gets to kmeans.fit(X) it fails and returns the OverflowError. Error: OverflowError Traceback (most recent call last) ~\AppData\Local\Temp\ipykernel_21416\3473011824.py in <module> 16 for i in range(0, 51): 17 kmeans = KMeans(n_clusters=i, init='k-means++', max_iter=300, n_init=10, random_state=0) ---> 18 kmeans.fit(X) 19 wcss.append(kmeans.inertia_) 20 c:\Users\usr\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py in fit(self, X, y, sample_weight) 1177 for i in range(self._n_init): 1178 # Initialize centers -> 1179 centers_init = self._init_centroids( 1180 X, x_squared_norms=x_squared_norms, init=init, random_state=random_state 1181 ) c:\Users\usr\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py in _init_centroids(self, X, x_squared_norms, init, random_state, init_size) 1088 1089 if isinstance(init, str) and init == "k-means++": -> 1090 centers, _ = _kmeans_plusplus( 1091 X, 1092 n_clusters, c:\Users\usr\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py in _kmeans_plusplus(X, n_clusters, x_squared_norms, random_state, n_local_trials) 189 # specific results for other than mentioning in the conclusion ... --> 191 n_local_trials = 2 + int(np.log(n_clusters)) 192 193 # Pick first center randomly and track index of point OverflowError: cannot convert float infinity to integer How can I fix this and is there something else I did wrong? Dataset used: https://www.kaggle.com/datasets/imdevskp/corona-virus-report (the worldometer_data.csv) A: The problem is here: for i in range(0, 51): kmeans = KMeans(n_clusters=i, init='k-means++', max_iter=300, n_init=10, random_state=0) You cannot set n_clusers to 0. It must be larger than 0.
OverflowError: cannot convert float infinity to integer, after doing so
import pandas as pd from sklearn.cluster import KMeans import matplotlib.pyplot as plt import numpy as np import quantstats as qs data = pd.read_csv('worldometer_data.csv') X = data.drop(columns=['Country/Region', 'Continent', 'Population', 'WHO Region']) # replace NaN values with 0 for i in X: X[i] = X[i].fillna(0) # getting rid of float infinity X = X.replace([np.inf, -np.inf, -0], 0) wcss = [] # getting Kmeans for i in range(0, 51): kmeans = KMeans(n_clusters=i, init='k-means++', max_iter=300, n_init=10, random_state=0) kmeans.fit(X) wcss.append(kmeans.inertia_) # visualizing the kmeans graph plt.plot(range(0, 51), wcss) plt.title('Elbow method') plt.xlabel('Number of clusters') plt.ylabel('WCSS') plt.show() I have checked the X array after getting rid of float infinity and it does that successfully . But when it gets to kmeans.fit(X) it fails and returns the OverflowError. Error: OverflowError Traceback (most recent call last) ~\AppData\Local\Temp\ipykernel_21416\3473011824.py in <module> 16 for i in range(0, 51): 17 kmeans = KMeans(n_clusters=i, init='k-means++', max_iter=300, n_init=10, random_state=0) ---> 18 kmeans.fit(X) 19 wcss.append(kmeans.inertia_) 20 c:\Users\usr\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py in fit(self, X, y, sample_weight) 1177 for i in range(self._n_init): 1178 # Initialize centers -> 1179 centers_init = self._init_centroids( 1180 X, x_squared_norms=x_squared_norms, init=init, random_state=random_state 1181 ) c:\Users\usr\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py in _init_centroids(self, X, x_squared_norms, init, random_state, init_size) 1088 1089 if isinstance(init, str) and init == "k-means++": -> 1090 centers, _ = _kmeans_plusplus( 1091 X, 1092 n_clusters, c:\Users\usr\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py in _kmeans_plusplus(X, n_clusters, x_squared_norms, random_state, n_local_trials) 189 # specific results for other than mentioning in the conclusion ... --> 191 n_local_trials = 2 + int(np.log(n_clusters)) 192 193 # Pick first center randomly and track index of point OverflowError: cannot convert float infinity to integer How can I fix this and is there something else I did wrong? Dataset used: https://www.kaggle.com/datasets/imdevskp/corona-virus-report (the worldometer_data.csv)
[ "The problem is here:\nfor i in range(0, 51):\n kmeans = KMeans(n_clusters=i, init='k-means++', max_iter=300, n_init=10, random_state=0)\n\nYou cannot set n_clusers to 0. It must be larger than 0.\n" ]
[ 1 ]
[]
[]
[ "jupyter_notebook", "pandas", "python" ]
stackoverflow_0074646376_jupyter_notebook_pandas_python.txt
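A corrected loop sketch following the answer above, meant as a drop-in for the question's script: the overflow comes from int(np.log(0)) inside scikit-learn's k-means++ seeding, so the elbow search has to start at one cluster (and the plotting range is adjusted to match).

wcss = []
for i in range(1, 51):                      # n_clusters must be >= 1
    kmeans = KMeans(n_clusters=i, init='k-means++', max_iter=300,
                    n_init=10, random_state=0)
    kmeans.fit(X)
    wcss.append(kmeans.inertia_)

plt.plot(range(1, 51), wcss)
plt.title('Elbow method')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS')
plt.show()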
Q: Python Logging for custom module I am looking to implement custom logging for my module. The issues i am facing are the setLevel(0) does not disable the logging, and basicConfig(level=0) duplicates the error with default formatting. My aim is to disable my modules logging by default without affecting the user and allow the user to import logging and my module and just enable the desired log level logging.getLogger('rapidTk').setLevel(99) rapidTk/__init__.py from .rTkLogging import rTkLogger import logging logging.setLoggerClass(rTkLogger) rtklog = logging.getLogger('rapidTk') rtklog.setLevel(0) rapidTk/rTkLogger.py import logging RTKLOG = 1 class rTkLogger(logging.Logger): logging.addLevelName(RTKLOG, 'rTk_Log') def __init__(self, name): super(rTkLogger, self).__init__(name) handler = logging.StreamHandler() fmat = logging.Formatter('%(asctime)s %(levelname)s %(filename)s(%(lineno)d) - %(message)s') handler.setFormatter(fmat) hndlr = self.addHandler(handler) self.setLevel(0) def rtklog(self, msg, *args, **kwargs): print(self.getEffectiveLevel(), 'is the effective level') if self.getEffectiveLevel() >= RTKLOG and self.isEnabledFor(self.getEffectiveLevel()): super()._log(RTKLOG, msg, args, **kwargs) rapidTk/rTkUtils.py from functools import wraps from time import perf_counter import logging def time_it(func): def wrapper(*args, **kwargs): start = perf_counter() fn = func rs = fn(*args, **kwargs) t = perf_counter()-start logging.getLogger('rapidTk').rtkdebug(f'{fn.__name__} finished in {t}') return rs return wrapper projects/mypythonscript.py import logging #logging.basicConfig(level=10) ## duplicates the log if level is enabled. import time from rapidTk import * from rapidTk.rTkUtils import time_it #rtklog = logging.getLogger('rapidTk') #rtklog.setLevel(0) ##makes no changes @time_it def runner(): print("hello") time.sleep(1) print("World") if __name__ == "__main__": runner() print("done") Here are the outputs for each case: output basicConfig(level=10)>>> >>> 10 is the effective level >>> 2022-12-01 17:14:57,161 rTk_Debug rTkUtils.py(17) - tester finished in 1.0115269999987504 >>> rTk_Debug:rapidTk:tester finished in 1.0115269999987504 output setLevel(0)>>> >>> 30 is the effective level >>>2022-12-01 17:16:52,528 rTk_Debug rTkUtils.py(17) - tester finished in 0.9971981999988202 A: If your purpose is to disable logging by default and let users implement their own logging levels, it is similar to how Python packages implement logging - there is a need to use NullHandler. import logging logging.getLogger('foo').addHandler(logging.NullHandler()) Source: The logging Documentation on how to configure logging for a Python library For your other issues: setLevel(0) does not disable the logging: I suspect there is overriding happening such that the level is overridden by some other configuration basicConfig(level=0) duplicates the error: There could be inheritance happening resulting in duplicated logs, you need to set propagate to False Source: Common issues faced in logging
Python Logging for custom module
I am looking to implement custom logging for my module. The issues i am facing are the setLevel(0) does not disable the logging, and basicConfig(level=0) duplicates the error with default formatting. My aim is to disable my modules logging by default without affecting the user and allow the user to import logging and my module and just enable the desired log level logging.getLogger('rapidTk').setLevel(99) rapidTk/__init__.py from .rTkLogging import rTkLogger import logging logging.setLoggerClass(rTkLogger) rtklog = logging.getLogger('rapidTk') rtklog.setLevel(0) rapidTk/rTkLogger.py import logging RTKLOG = 1 class rTkLogger(logging.Logger): logging.addLevelName(RTKLOG, 'rTk_Log') def __init__(self, name): super(rTkLogger, self).__init__(name) handler = logging.StreamHandler() fmat = logging.Formatter('%(asctime)s %(levelname)s %(filename)s(%(lineno)d) - %(message)s') handler.setFormatter(fmat) hndlr = self.addHandler(handler) self.setLevel(0) def rtklog(self, msg, *args, **kwargs): print(self.getEffectiveLevel(), 'is the effective level') if self.getEffectiveLevel() >= RTKLOG and self.isEnabledFor(self.getEffectiveLevel()): super()._log(RTKLOG, msg, args, **kwargs) rapidTk/rTkUtils.py from functools import wraps from time import perf_counter import logging def time_it(func): def wrapper(*args, **kwargs): start = perf_counter() fn = func rs = fn(*args, **kwargs) t = perf_counter()-start logging.getLogger('rapidTk').rtkdebug(f'{fn.__name__} finished in {t}') return rs return wrapper projects/mypythonscript.py import logging #logging.basicConfig(level=10) ## duplicates the log if level is enabled. import time from rapidTk import * from rapidTk.rTkUtils import time_it #rtklog = logging.getLogger('rapidTk') #rtklog.setLevel(0) ##makes no changes @time_it def runner(): print("hello") time.sleep(1) print("World") if __name__ == "__main__": runner() print("done") Here are the outputs for each case: output basicConfig(level=10)>>> >>> 10 is the effective level >>> 2022-12-01 17:14:57,161 rTk_Debug rTkUtils.py(17) - tester finished in 1.0115269999987504 >>> rTk_Debug:rapidTk:tester finished in 1.0115269999987504 output setLevel(0)>>> >>> 30 is the effective level >>>2022-12-01 17:16:52,528 rTk_Debug rTkUtils.py(17) - tester finished in 0.9971981999988202
[ "If your purpose is to disable logging by default and let users implement their own logging levels, it is similar to how Python packages implement logging - there is a need to use NullHandler.\nimport logging\nlogging.getLogger('foo').addHandler(logging.NullHandler())\n\nSource: The logging Documentation on how to configure logging for a Python library\nFor your other issues:\n\nsetLevel(0) does not disable the logging: I suspect there is overriding happening such that the level is overridden by some other configuration\nbasicConfig(level=0) duplicates the error: There could be inheritance happening resulting in duplicated logs, you need to set propagate to False\n\nSource: Common issues faced in logging\n" ]
[ 0 ]
[]
[]
[ "python", "python_logging", "python_module" ]
stackoverflow_0074645804_python_python_logging_python_module.txt
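A hedged sketch of the NullHandler pattern described in the answer, applied to the rapidTk package from the question; only the 'rapidTk' logger name comes from the post, the rest is illustrative. If the library also attaches its own StreamHandler (as rTkLogger does) while records still propagate to the root logger, every record is emitted twice, which matches the duplication seen with basicConfig(level=10); keeping only the NullHandler on the library side, or setting propagate to False on the 'rapidTk' logger, removes the duplicate.

# rapidTk/__init__.py (library side): attach only a NullHandler, emit nothing by default
import logging

logging.getLogger('rapidTk').addHandler(logging.NullHandler())

# user side: opt in to the library's output with the usual logging configuration
import logging
import rapidTk

logging.basicConfig(level=logging.DEBUG)               # root handler prints the records
logging.getLogger('rapidTk').setLevel(logging.DEBUG)   # raise this level to silence rapidTk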
Q: How to get feature names of shap_values from TreeExplainer? I am doing a shap tutorial, and attempting to get the shap values for each person in a dataset from sklearn.model_selection import train_test_split import xgboost import shap import numpy as np import pandas as pd import matplotlib.pylab as pl X,y = shap.datasets.adult() X_display,y_display = shap.datasets.adult(display=True) # create a train/test split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=7) d_train = xgboost.DMatrix(X_train, label=y_train) d_test = xgboost.DMatrix(X_test, label=y_test) params = { "eta": 0.01, "objective": "binary:logistic", "subsample": 0.5, "base_score": np.mean(y_train), "eval_metric": "logloss" } #model = xgboost.train(params, d_train, 5000, evals = [(d_test, "test")], verbose_eval=100, early_stopping_rounds=20) xg_clf = xgboost.XGBClassifier() xg_clf.fit(X_train, y_train) explainer = shap.TreeExplainer(xg_clf, X_train) #shap_values = explainer(X) shap_values = explainer.shap_values(X) going through the Python3 interpreter, shap_values is a massive array of 32,561 persons, each with a shap value for 12 features. For example, the first individual has the following SHAP values: >>> shap_values[0] array([ 0.76437867, -0.11881508, 0.57451954, -0.41974955, -0.20982443, -0.38079952, -0.00986504, 0.32272505, -3.04392116, 0.00411322, -0.26587735, 0.02700199]) However, which value applies to which feature is a complete mystery to me. the documentation says: For models with a single output this returns a matrix of SHAP values (# samples x # features). Each row sums to the difference between the model output for that sample and the expected value of the model output (which is stored in the expected_value attribute of the explainer when it is constant). For models with vector outputs this returns a list of such matrices, one for each output When I go to explainer which produced shap_values I see that I can get feature names: explainer.data_feature_names ['Age', 'Workclass', 'Education-Num', 'Marital Status', 'Occupation', 'Relationship', 'Race', 'Sex', 'Capital Gain', 'Capital Loss', 'Hours per week', 'Country'] but I cannot see how to get feature names within shap_values at the Python interpreter, if they're even there: >>> shap_values. 
shap_values.all( shap_values.compress( shap_values.dump( shap_values.max( shap_values.ravel( shap_values.sort( shap_values.tostring( shap_values.any( shap_values.conj( shap_values.dumps( shap_values.mean( shap_values.real shap_values.squeeze( shap_values.trace( shap_values.argmax( shap_values.conjugate( shap_values.fill( shap_values.min( shap_values.repeat( shap_values.std( shap_values.transpose( shap_values.argmin( shap_values.copy( shap_values.flags shap_values.nbytes shap_values.reshape( shap_values.strides shap_values.var( shap_values.argpartition( shap_values.ctypes shap_values.flat shap_values.ndim shap_values.resize( shap_values.sum( shap_values.view( shap_values.argsort( shap_values.cumprod( shap_values.flatten( shap_values.newbyteorder( shap_values.round( shap_values.swapaxes( shap_values.astype( shap_values.cumsum( shap_values.getfield( shap_values.nonzero( shap_values.searchsorted( shap_values.T shap_values.base shap_values.data shap_values.imag shap_values.partition( shap_values.setfield( shap_values.take( shap_values.byteswap( shap_values.diagonal( shap_values.item( shap_values.prod( shap_values.setflags( shap_values.tobytes( shap_values.choose( shap_values.dot( shap_values.itemset( shap_values.ptp( shap_values.shape shap_values.tofile( shap_values.clip( shap_values.dtype shap_values.itemsize shap_values.put( shap_values.size shap_values.tolist( My primary question: How can I figure out which feature in ['Age', 'Workclass', 'Education-Num', 'Marital Status', 'Occupation', 'Relationship', 'Race', 'Sex', 'Capital Gain', 'Capital Loss', 'Hours per week', 'Country'] applies to which number in each row of shap_values? >>> shap_values[0] array([ 0.76437867, -0.11881508, 0.57451954, -0.41974955, -0.20982443, -0.38079952, -0.00986504, 0.32272505, -3.04392116, 0.00411322, -0.26587735, 0.02700199]) I would assume that the features are in the same order, but I have no evidence for that. My secondary question: how can I find the feature names in shap_values? A: The features are indeed in the same order, as you assume; see how to extract the most important feature names? and how to get feature names from explainer issues in Github. To find the feature name, you simply need to access the element with the same index of the array with the names For example: shap_values = np.array([ 0.76437867, -0.11881508, 0.57451954, -0.41974955, -0.20982443, -0.38079952, -0.00986504, 0.32272505, -3.04392116, 0.00411322, -0.26587735, 0.02700199]) features_names = ['Age', 'Workclass', 'Education-Num', 'Marital Status', 'Occupation', 'Relationship', 'Race', 'Sex', 'Capital Gain', 'Capital Loss', 'Hours per week', 'Country'] features_names[shap_values.argmin()] # the index 8 -> Capital Gain features_names[shap_values.argmax()] # the index 0 -> Age A: If you find this answer useful, upvote @lucas's answer and GitHub user ba1mn's post. I'm just adding it here in case the link breaks. The following function will return the feature names along with their corresponding importance in a DataFrame. def global_shap_importance(model, X): """ Return a dataframe containing the features sorted by Shap importance Parameters ---------- model : The tree-based model X : pd.Dataframe training set/test set/the whole dataset ... 
(without the label) Returns ------- pd.Dataframe A dataframe containing the features sorted by Shap importance """ explainer = shap.Explainer(model) shap_values = explainer(X) cohorts = {"": shap_values} cohort_labels = list(cohorts.keys()) cohort_exps = list(cohorts.values()) for i in range(len(cohort_exps)): if len(cohort_exps[i].shape) == 2: cohort_exps[i] = cohort_exps[i].abs.mean(0) features = cohort_exps[0].data feature_names = cohort_exps[0].feature_names values = np.array([cohort_exps[i].values for i in range(len(cohort_exps))]) feature_importance = pd.DataFrame( list(zip(feature_names, sum(values))), columns=['features', 'importance']) feature_importance.sort_values( by=['importance'], ascending=False, inplace=True) return feature_importance
How to get feature names of shap_values from TreeExplainer?
I am doing a shap tutorial, and attempting to get the shap values for each person in a dataset from sklearn.model_selection import train_test_split import xgboost import shap import numpy as np import pandas as pd import matplotlib.pylab as pl X,y = shap.datasets.adult() X_display,y_display = shap.datasets.adult(display=True) # create a train/test split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=7) d_train = xgboost.DMatrix(X_train, label=y_train) d_test = xgboost.DMatrix(X_test, label=y_test) params = { "eta": 0.01, "objective": "binary:logistic", "subsample": 0.5, "base_score": np.mean(y_train), "eval_metric": "logloss" } #model = xgboost.train(params, d_train, 5000, evals = [(d_test, "test")], verbose_eval=100, early_stopping_rounds=20) xg_clf = xgboost.XGBClassifier() xg_clf.fit(X_train, y_train) explainer = shap.TreeExplainer(xg_clf, X_train) #shap_values = explainer(X) shap_values = explainer.shap_values(X) going through the Python3 interpreter, shap_values is a massive array of 32,561 persons, each with a shap value for 12 features. For example, the first individual has the following SHAP values: >>> shap_values[0] array([ 0.76437867, -0.11881508, 0.57451954, -0.41974955, -0.20982443, -0.38079952, -0.00986504, 0.32272505, -3.04392116, 0.00411322, -0.26587735, 0.02700199]) However, which value applies to which feature is a complete mystery to me. the documentation says: For models with a single output this returns a matrix of SHAP values (# samples x # features). Each row sums to the difference between the model output for that sample and the expected value of the model output (which is stored in the expected_value attribute of the explainer when it is constant). For models with vector outputs this returns a list of such matrices, one for each output When I go to explainer which produced shap_values I see that I can get feature names: explainer.data_feature_names ['Age', 'Workclass', 'Education-Num', 'Marital Status', 'Occupation', 'Relationship', 'Race', 'Sex', 'Capital Gain', 'Capital Loss', 'Hours per week', 'Country'] but I cannot see how to get feature names within shap_values at the Python interpreter, if they're even there: >>> shap_values. 
shap_values.all( shap_values.compress( shap_values.dump( shap_values.max( shap_values.ravel( shap_values.sort( shap_values.tostring( shap_values.any( shap_values.conj( shap_values.dumps( shap_values.mean( shap_values.real shap_values.squeeze( shap_values.trace( shap_values.argmax( shap_values.conjugate( shap_values.fill( shap_values.min( shap_values.repeat( shap_values.std( shap_values.transpose( shap_values.argmin( shap_values.copy( shap_values.flags shap_values.nbytes shap_values.reshape( shap_values.strides shap_values.var( shap_values.argpartition( shap_values.ctypes shap_values.flat shap_values.ndim shap_values.resize( shap_values.sum( shap_values.view( shap_values.argsort( shap_values.cumprod( shap_values.flatten( shap_values.newbyteorder( shap_values.round( shap_values.swapaxes( shap_values.astype( shap_values.cumsum( shap_values.getfield( shap_values.nonzero( shap_values.searchsorted( shap_values.T shap_values.base shap_values.data shap_values.imag shap_values.partition( shap_values.setfield( shap_values.take( shap_values.byteswap( shap_values.diagonal( shap_values.item( shap_values.prod( shap_values.setflags( shap_values.tobytes( shap_values.choose( shap_values.dot( shap_values.itemset( shap_values.ptp( shap_values.shape shap_values.tofile( shap_values.clip( shap_values.dtype shap_values.itemsize shap_values.put( shap_values.size shap_values.tolist( My primary question: How can I figure out which feature in ['Age', 'Workclass', 'Education-Num', 'Marital Status', 'Occupation', 'Relationship', 'Race', 'Sex', 'Capital Gain', 'Capital Loss', 'Hours per week', 'Country'] applies to which number in each row of shap_values? >>> shap_values[0] array([ 0.76437867, -0.11881508, 0.57451954, -0.41974955, -0.20982443, -0.38079952, -0.00986504, 0.32272505, -3.04392116, 0.00411322, -0.26587735, 0.02700199]) I would assume that the features are in the same order, but I have no evidence for that. My secondary question: how can I find the feature names in shap_values?
[ "The features are indeed in the same order, as you assume; see how to extract the most important feature names? and how to get feature names from explainer issues in Github.\nTo find the feature name, you simply need to access the element with the same index of the array with the names\nFor example:\nshap_values = np.array([\n 0.76437867, -0.11881508, 0.57451954, -0.41974955, -0.20982443,\n -0.38079952, -0.00986504, 0.32272505, -3.04392116, 0.00411322,\n -0.26587735, 0.02700199])\nfeatures_names = ['Age', 'Workclass', 'Education-Num', 'Marital Status', 'Occupation',\n 'Relationship', 'Race', 'Sex', 'Capital Gain', 'Capital Loss',\n 'Hours per week', 'Country']\n\nfeatures_names[shap_values.argmin()] # the index 8 -> Capital Gain\nfeatures_names[shap_values.argmax()] # the index 0 -> Age\n\n", "If you find this answer useful, upvote @lucas's answer and GitHub user ba1mn's post. I'm just adding it here in case the link breaks.\nThe following function will return the feature names along with their corresponding importance in a DataFrame.\ndef global_shap_importance(model, X):\n \"\"\" Return a dataframe containing the features sorted by Shap importance\n Parameters\n ----------\n model : The tree-based model \n X : pd.Dataframe\n training set/test set/the whole dataset ... (without the label)\n Returns\n -------\n pd.Dataframe\n A dataframe containing the features sorted by Shap importance\n \"\"\"\n explainer = shap.Explainer(model)\n shap_values = explainer(X)\n cohorts = {\"\": shap_values}\n cohort_labels = list(cohorts.keys())\n cohort_exps = list(cohorts.values())\n for i in range(len(cohort_exps)):\n if len(cohort_exps[i].shape) == 2:\n cohort_exps[i] = cohort_exps[i].abs.mean(0)\n features = cohort_exps[0].data\n feature_names = cohort_exps[0].feature_names\n values = np.array([cohort_exps[i].values for i in range(len(cohort_exps))])\n feature_importance = pd.DataFrame(\n list(zip(feature_names, sum(values))), columns=['features', 'importance'])\n feature_importance.sort_values(\n by=['importance'], ascending=False, inplace=True)\n return feature_importance\n\n" ]
[ 3, 0 ]
[]
[]
[ "machine_learning", "python", "python_3.x", "shap" ]
stackoverflow_0067443411_machine_learning_python_python_3.x_shap.txt
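A small sketch of the index-alignment point made in the first answer: wrapping the raw array in a DataFrame keyed by the training columns makes the feature-to-value mapping explicit. This assumes X is the pandas DataFrame passed to explainer.shap_values, so its column order matches the SHAP columns.

import pandas as pd

shap_df = pd.DataFrame(shap_values, columns=X.columns)  # one column per feature

# per-person lookup, e.g. for the first row
print(shap_df.iloc[0].sort_values())

# mean absolute SHAP value per feature, a quick global importance ranking
print(shap_df.abs().mean().sort_values(ascending=False))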
Q: Trying to merge dictionaries together to create new df but dictionaries values arent showing up in df image of jupter notebook issue For my quarters instead of values for examples 1,0,0,0 showing up I get NaN. How do I fix the code below so I return values in my dataframe qrt_1 = {'q1':[1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0]} qrt_2 = {'q2':[0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0]} qrt_3 = {'q3':[0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0]} qrt_4 = {'q4':[0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1]} year = {'year': [1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4,5,5,5,5,6,6,6,6,7,7,7,7,8,8,8,8,9,9,9,9]} value = data_1['Sales'] data = [year, qrt_1, qrt_2, qrt_3, qrt_4] dataframes = [] for x in data: dataframes.append(pd.DataFrame(x)) df = pd.concat(dataframes) I am expecting a dataframe that returns the qrt_1, qrt_2 etc with their corresponding column names A: Try to use axis=1 in pd.concat: df = pd.concat(dataframes, axis=1) print(df) Prints: year q1 q2 q3 q4 0 1 1 0 0 0 1 1 0 1 0 0 2 1 0 0 1 0 3 1 0 0 0 1 4 2 1 0 0 0 5 2 0 1 0 0 6 2 0 0 1 0 7 2 0 0 0 1 8 3 1 0 0 0 9 3 0 1 0 0 10 3 0 0 1 0 11 3 0 0 0 1 12 4 1 0 0 0 13 4 0 1 0 0 14 4 0 0 1 0 15 4 0 0 0 1 16 5 1 0 0 0 17 5 0 1 0 0 18 5 0 0 1 0 19 5 0 0 0 1 20 6 1 0 0 0 21 6 0 1 0 0 22 6 0 0 1 0 23 6 0 0 0 1 24 7 1 0 0 0 25 7 0 1 0 0 26 7 0 0 1 0 27 7 0 0 0 1 28 8 1 0 0 0 29 8 0 1 0 0 30 8 0 0 1 0 31 8 0 0 0 1 32 9 1 0 0 0 33 9 0 1 0 0 34 9 0 0 1 0 35 9 0 0 0 1
Trying to merge dictionaries together to create new df but dictionaries values arent showing up in df
image of jupter notebook issue For my quarters instead of values for examples 1,0,0,0 showing up I get NaN. How do I fix the code below so I return values in my dataframe qrt_1 = {'q1':[1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0]} qrt_2 = {'q2':[0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0]} qrt_3 = {'q3':[0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0]} qrt_4 = {'q4':[0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1]} year = {'year': [1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4,5,5,5,5,6,6,6,6,7,7,7,7,8,8,8,8,9,9,9,9]} value = data_1['Sales'] data = [year, qrt_1, qrt_2, qrt_3, qrt_4] dataframes = [] for x in data: dataframes.append(pd.DataFrame(x)) df = pd.concat(dataframes) I am expecting a dataframe that returns the qrt_1, qrt_2 etc with their corresponding column names
[ "Try to use axis=1 in pd.concat:\ndf = pd.concat(dataframes, axis=1)\nprint(df)\n\nPrints:\n year q1 q2 q3 q4\n0 1 1 0 0 0\n1 1 0 1 0 0\n2 1 0 0 1 0\n3 1 0 0 0 1\n4 2 1 0 0 0\n5 2 0 1 0 0\n6 2 0 0 1 0\n7 2 0 0 0 1\n8 3 1 0 0 0\n9 3 0 1 0 0\n10 3 0 0 1 0\n11 3 0 0 0 1\n12 4 1 0 0 0\n13 4 0 1 0 0\n14 4 0 0 1 0\n15 4 0 0 0 1\n16 5 1 0 0 0\n17 5 0 1 0 0\n18 5 0 0 1 0\n19 5 0 0 0 1\n20 6 1 0 0 0\n21 6 0 1 0 0\n22 6 0 0 1 0\n23 6 0 0 0 1\n24 7 1 0 0 0\n25 7 0 1 0 0\n26 7 0 0 1 0\n27 7 0 0 0 1\n28 8 1 0 0 0\n29 8 0 1 0 0\n30 8 0 0 1 0\n31 8 0 0 0 1\n32 9 1 0 0 0\n33 9 0 1 0 0\n34 9 0 0 1 0\n35 9 0 0 0 1\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "dictionary", "pandas", "python" ]
stackoverflow_0074646374_dataframe_dictionary_pandas_python.txt
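Since the title asks about merging the dictionaries themselves, a hedged alternative is to merge them into a single dict before building the frame; with the dicts from the question this produces the same table as pd.concat with axis=1.

import pandas as pd

merged = {**year, **qrt_1, **qrt_2, **qrt_3, **qrt_4}  # dicts defined in the question
df = pd.DataFrame(merged)
print(df.head())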
Q: Get all value and items from drop down list using selenium I am trying to extract values from dropdown using python selenium. I am getting the text but not getting the values with xpath. Code I used is from selenium.common.exceptions import WebDriverException from selenium import webdriver headers = { "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.3" } options = webdriver.ChromeOptions() options.add_argument("--headless") options.add_argument('--no-sandbox') options.add_argument('--disable-dev-shm-usage') URL = ['https://www.classicalmusicartists.com/cma/artists.aspx'] for url in URL: try: driver = webdriver.Chrome(executable_path = '/home/ubuntu/selenium_drivers/chromedriver', options = options) driver.get(url) driver.implicitly_wait(2) datas = driver.find_element("xpath",'//select[@id="ctl00_cphMainContent_lstCategory"]') d= Select(datas) for opt in d.options: print(opt.text) driver.quit() except WebDriverException: driver.quit() A: So what you should do is: ds = [d.text for d in datas.find_elements('tag name','option')] you were using the improper tag locator. Options are tags not names, the 'name=' attribute (similar to class names) inside a tag element. Secondly you were looking for a singular item and then iterating over that element (as it was a single item there isn't any support for iteration over it to retrieve the text element) so had to make it .find_elements() to find all such matches. And to get the values you could do: dv = [d.get_attribute('value') for d in datas.find_elements('tag name','option')]
Get all value and items from drop down list using selenium
I am trying to extract values from dropdown using python selenium. I am getting the text but not getting the values with xpath. Code I used is from selenium.common.exceptions import WebDriverException from selenium import webdriver headers = { "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.3" } options = webdriver.ChromeOptions() options.add_argument("--headless") options.add_argument('--no-sandbox') options.add_argument('--disable-dev-shm-usage') URL = ['https://www.classicalmusicartists.com/cma/artists.aspx'] for url in URL: try: driver = webdriver.Chrome(executable_path = '/home/ubuntu/selenium_drivers/chromedriver', options = options) driver.get(url) driver.implicitly_wait(2) datas = driver.find_element("xpath",'//select[@id="ctl00_cphMainContent_lstCategory"]') d= Select(datas) for opt in d.options: print(opt.text) driver.quit() except WebDriverException: driver.quit()
[ "So what you should do is:\nds = [d.text for d in datas.find_elements('tag name','option')]\n\nThe individual entries of the dropdown are <option> child tags of the <select> element, so locate them with the tag name locator ('tag name', 'option'). Use .find_elements() (plural) so you get the whole list of matching options back, and then iterate over that list to read each option's text.\nAnd to get the values (the value attribute rather than the visible text) you could do:\ndv = [d.get_attribute('value') for d in datas.find_elements('tag name','option')]\n" ]
[ 2 ]
[]
[]
[ "python", "selenium" ]
stackoverflow_0074646351_python_selenium.txt
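A hedged variant that keeps the Select wrapper already used in the question and reads both the visible text and the value attribute of each option; the XPath locator is the one from the post, and the page structure may of course change.

from selenium.webdriver.support.ui import Select

select_el = driver.find_element("xpath", '//select[@id="ctl00_cphMainContent_lstCategory"]')
dropdown = Select(select_el)

for opt in dropdown.options:
    print(opt.text, opt.get_attribute("value"))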
Q: Django NOT NULL constraint error on imaginary field I have been getting the following error. django.db.utils.IntegrityError: NOT NULL constraint failed: doctor_owner.doc_name This error primarily arises on when I save the owner information using .save() and the error it gives is on doc_name, which is not present in the model definition of the class Owner. I am clueless why it is giving such an error. My model is attached below: . This is my model description: from django.db import models # Create your models here. from base.models import BaseModel class Owner(BaseModel): owner_id = models.CharField(max_length=50) owner_name = models.CharField(max_length=250) class Pet(BaseModel): owner = models.ForeignKey(Owner, on_delete=models.CASCADE) pet_name = models.CharField(max_length=100) pet_age = models.DecimalField(max_length=3, decimal_places=2, max_digits=50) pet_specie = models.CharField(max_length=250) pet_gender = models.CharField(max_length=1) class Medicine(BaseModel): medicine_name = models.CharField(max_length=250) frequency = models.CharField(max_length=100) duration = models.CharField(max_length=100) class Prescription(BaseModel): pet = models.ForeignKey(Pet, on_delete=models.CASCADE) medicine = models.ForeignKey(Medicine, on_delete=models.CASCADE) class Treatment(BaseModel): pet = models.ForeignKey(Pet, on_delete=models.CASCADE) owner = models.ForeignKey(Owner, on_delete=models.CASCADE) doc_name = models.CharField(max_length=250) prescription = models.ForeignKey(Prescription, on_delete=models.CASCADE) A: In your treatment table you have a reference as a foreign key to owner; try putting an equivalent to 'nullable=True' or give it a default value A: why not put doc_name = models.CharField(max_length=250, null = True) to try is working A: The problem was the corruption in db.sqlite3 file of the Django project. Deleting this file and redoing migrations fixed the problem.
Django NOT NULL constraint error on imaginary field
I have been getting the following error. django.db.utils.IntegrityError: NOT NULL constraint failed: doctor_owner.doc_name This error primarily arises on when I save the owner information using .save() and the error it gives is on doc_name, which is not present in the model definition of the class Owner. I am clueless why it is giving such an error. My model is attached below: . This is my model description: from django.db import models # Create your models here. from base.models import BaseModel class Owner(BaseModel): owner_id = models.CharField(max_length=50) owner_name = models.CharField(max_length=250) class Pet(BaseModel): owner = models.ForeignKey(Owner, on_delete=models.CASCADE) pet_name = models.CharField(max_length=100) pet_age = models.DecimalField(max_length=3, decimal_places=2, max_digits=50) pet_specie = models.CharField(max_length=250) pet_gender = models.CharField(max_length=1) class Medicine(BaseModel): medicine_name = models.CharField(max_length=250) frequency = models.CharField(max_length=100) duration = models.CharField(max_length=100) class Prescription(BaseModel): pet = models.ForeignKey(Pet, on_delete=models.CASCADE) medicine = models.ForeignKey(Medicine, on_delete=models.CASCADE) class Treatment(BaseModel): pet = models.ForeignKey(Pet, on_delete=models.CASCADE) owner = models.ForeignKey(Owner, on_delete=models.CASCADE) doc_name = models.CharField(max_length=250) prescription = models.ForeignKey(Prescription, on_delete=models.CASCADE)
[ "In your treatment table you have a reference as a foreign key to owner; try putting an equivalent to 'nullable=True' or give it a default value\n", "why not put\ndoc_name = models.CharField(max_length=250, null = True) to try is working\n", "The problem was the corruption in db.sqlite3 file of the Django project. Deleting this file and redoing migrations fixed the problem.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074630413_django_python.txt
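For completeness, a sketch of the development-only reset described in the last answer; this throws away every row in the SQLite database, so it is only reasonable on a disposable development database.

# from the project root, with the virtual environment active
rm db.sqlite3
python manage.py makemigrations
python manage.py migrate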
Q: django.core.exceptions.ImproperlyConfigured: Requested setting USE_I18N, but settings are not configured I want to connect MySQL database to my django project, but it is throwing an error : "django.core.exceptions.ImproperlyConfigured: Requested setting USE_I18N, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings." Trace: (myenv) LIBINGLADWINs-MacBook-Air:libinrenold$ django-admin dbshell Traceback (most recent call last): File "/Users/libinrenold/Desktop/djangoworks/myenv/bin/django-admin", line 11, in <module> sys.exit(execute_from_command_line()) File "/Users/libinrenold/Desktop/djangoworks/myenv/lib/python3.6/site-packages/django/core/management/__init__.py", line 364, in execute_from_command_line utility.execute() File "/Users/libinrenold/Desktop/djangoworks/myenv/lib/python3.6/site-packages/django/core/management/__init__.py", line 356, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/Users/libinrenold/Desktop/djangoworks/myenv/lib/python3.6/site-packages/django/core/management/base.py", line 283, in run_from_argv self.execute(*args, **cmd_options) File "/Users/libinrenold/Desktop/djangoworks/myenv/lib/python3.6/site-packages/django/core/management/base.py", line 322, in execute saved_locale = translation.get_language() File "/Users/libinrenold/Desktop/djangoworks/myenv/lib/python3.6/site-packages/django/utils/translation/__init__.py", line 195, in get_language return _trans.get_language() File "/Users/libinrenold/Desktop/djangoworks/myenv/lib/python3.6/site-packages/django/utils/translation/__init__.py", line 59, in __getattr__ if settings.USE_I18N: File "/Users/libinrenold/Desktop/djangoworks/myenv/lib/python3.6/site-packages/django/conf/__init__.py", line 56, in __getattr__ self._setup(name) File "/Users/libinrenold/Desktop/djangoworks/myenv/lib/python3.6/site-packages/django/conf/__init__.py", line 39, in _setup % (desc, ENVIRONMENT_VARIABLE)) django.core.exceptions.ImproperlyConfigured: Requested setting USE_I18N, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings. settings.py. DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', 'NAME': 'test', 'USER': 'user', 'PASSWORD': 'root', 'HOST':'', 'PORT': '', } } A: You must define the relevant variable to show where your settings.py file lives: export DJANGO_SETTINGS_MODULE=mysite.settings This is the relevant docs entry: When you use Django, you have to tell it which settings you’re using. Do this by using an environment variable, DJANGO_SETTINGS_MODULE. The value of DJANGO_SETTINGS_MODULE should be in Python path syntax, e.g. mysite.settings. Note that the settings module should be on the Python import search path. If you are using a virtualenv (which is the best practice), you can paste the relevant export command in the file <path-to-virtualenv>/bin/activate A: It happens to me too, I just now found unintentional. I changed "configuration" in top of PyCharm IDE so pycharm confused when try to run django code. Exactly the problem popup when I try to run server with shortcut but when I used terminal to run server, I found django code not having any problem and pycharm can't found setting. So please check "configuration" in top pycharm's ide maybe you do same mistake :) I'm new in django so be calm if my answer was silly. 
A: I had this problem because I tried to import a django script after just typing python instead of python manage.py shell Duh! A: Problem I had the same error with Django==3.2 and djangorestframework==3.12.4 (2021/04) when I ran unittest. In my case, the python manage.py test works properly but I cannot directly run or debug the test module which was in a specific app, and I get this error: django.core.exceptions.ImproperlyConfigured: Requested setting INSTALLED_APPS, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings. Solution First I added the django project root to the sys.path. Then I added DJANGO_SETTINGS_MODULE to the environment variable to address the project setting root and called the Django setup function. This is my code: from os import path, environ from sys import path as sys_path from django import setup sys_path.append(<path to django setting.py>) environ.setdefault('DJANGO_SETTINGS_MODULE', 'django_project.settings') setup() A: put this in top of settings.py this will configure django for you import os os.environ.setdefault("DJANGO_SETTINGS_MODULE", __file__) import django django.setup() A: I had the same error with Django 3.0.8 (july 2020) when trying to run the server, and in my case no need for the DJANGO_SETTINGS_MODULE environment variable. The solution is to use another form to run the server: cd <directory containing manage.py> python3 manage.py runserver It works to run Django shell too : python3 manage.py shell A: Like raratiru answered, you need DJANGO_SETTINGS_MODULE environment variable defined with the relative pythonic path to your setting file. OR use your django-admin command with the settings parameter: django-admin --settings=mysite.settings dbshell A: Below code in terminal to manually inform which settings I am using worked for me: set DJANGO_SETTINGS_MODULE=mysite.settings A: Dear all I faced the same problem and i scrapped the web to find a solution. None of the one above solved anything. The issue in my case was related to a wrong configuration of pycharm. As for someone above the app didnt have any issue when launched from the command line/ shell. The issue is here: For some reason the env variable disappeared. i added back and everything started to work again without any issue.
django.core.exceptions.ImproperlyConfigured: Requested setting USE_I18N, but settings are not configured
I want to connect MySQL database to my django project, but it is throwing an error : "django.core.exceptions.ImproperlyConfigured: Requested setting USE_I18N, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings." Trace: (myenv) LIBINGLADWINs-MacBook-Air:libinrenold$ django-admin dbshell Traceback (most recent call last): File "/Users/libinrenold/Desktop/djangoworks/myenv/bin/django-admin", line 11, in <module> sys.exit(execute_from_command_line()) File "/Users/libinrenold/Desktop/djangoworks/myenv/lib/python3.6/site-packages/django/core/management/__init__.py", line 364, in execute_from_command_line utility.execute() File "/Users/libinrenold/Desktop/djangoworks/myenv/lib/python3.6/site-packages/django/core/management/__init__.py", line 356, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/Users/libinrenold/Desktop/djangoworks/myenv/lib/python3.6/site-packages/django/core/management/base.py", line 283, in run_from_argv self.execute(*args, **cmd_options) File "/Users/libinrenold/Desktop/djangoworks/myenv/lib/python3.6/site-packages/django/core/management/base.py", line 322, in execute saved_locale = translation.get_language() File "/Users/libinrenold/Desktop/djangoworks/myenv/lib/python3.6/site-packages/django/utils/translation/__init__.py", line 195, in get_language return _trans.get_language() File "/Users/libinrenold/Desktop/djangoworks/myenv/lib/python3.6/site-packages/django/utils/translation/__init__.py", line 59, in __getattr__ if settings.USE_I18N: File "/Users/libinrenold/Desktop/djangoworks/myenv/lib/python3.6/site-packages/django/conf/__init__.py", line 56, in __getattr__ self._setup(name) File "/Users/libinrenold/Desktop/djangoworks/myenv/lib/python3.6/site-packages/django/conf/__init__.py", line 39, in _setup % (desc, ENVIRONMENT_VARIABLE)) django.core.exceptions.ImproperlyConfigured: Requested setting USE_I18N, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings. settings.py. DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', 'NAME': 'test', 'USER': 'user', 'PASSWORD': 'root', 'HOST':'', 'PORT': '', } }
[ "You must define the relevant variable to show where your settings.py file lives:\nexport DJANGO_SETTINGS_MODULE=mysite.settings\n\nThis is the relevant docs entry:\n\nWhen you use Django, you have to tell it which settings you’re using.\nDo this by using an environment variable, DJANGO_SETTINGS_MODULE.\nThe value of DJANGO_SETTINGS_MODULE should be in Python path syntax,\ne.g. mysite.settings. Note that the settings module should be on the\nPython import search path.\n\nIf you are using a virtualenv (which is the best practice), you can paste the relevant export command in the file <path-to-virtualenv>/bin/activate\n", "It happens to me too, I just now found unintentional.\nI changed \"configuration\" in top of PyCharm IDE so pycharm confused when try to run django code. Exactly the problem popup when I try to run server with shortcut but when I used terminal to run server, I found django code not having any problem and pycharm can't found setting.\nSo please check \"configuration\" in top pycharm's ide maybe you do same mistake :)\nI'm new in django so be calm if my answer was silly.\n\n", "I had this problem because I tried to import a django script after just typing\npython\n\ninstead of\npython manage.py shell\n\nDuh!\n", "Problem\nI had the same error with Django==3.2 and djangorestframework==3.12.4 (2021/04) when I ran unittest. In my case, the python manage.py test works properly but I cannot directly run or debug the test module which was in a specific app, and I get this error:\ndjango.core.exceptions.ImproperlyConfigured: Requested setting INSTALLED_APPS, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.\n\nSolution\nFirst I added the django project root to the sys.path. Then I added DJANGO_SETTINGS_MODULE to the environment variable to address the project setting root and called the Django setup function. This is my code:\nfrom os import path, environ\nfrom sys import path as sys_path\nfrom django import setup\n\nsys_path.append(<path to django setting.py>) \nenviron.setdefault('DJANGO_SETTINGS_MODULE', 'django_project.settings')\nsetup()\n\n", "put this in top of settings.py \nthis will configure django for you \nimport os\nos.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", __file__)\nimport django\ndjango.setup()\n\n", "I had the same error with Django 3.0.8 (july 2020) when trying to run the server, and in my case no need for the DJANGO_SETTINGS_MODULE environment variable. The solution is to use another form to run the server:\ncd <directory containing manage.py>\npython3 manage.py runserver\n\nIt works to run Django shell too :\npython3 manage.py shell\n\n", "Like raratiru answered, you need DJANGO_SETTINGS_MODULE environment variable defined with the relative pythonic path to your setting file.\nOR use your django-admin command with the settings parameter:\ndjango-admin --settings=mysite.settings dbshell\n\n", "Below code in terminal to manually inform which settings I am using worked for me:\nset DJANGO_SETTINGS_MODULE=mysite.settings\n\n", "Dear all I faced the same problem and i scrapped the web to find a solution. None of the one above solved anything.\nThe issue in my case was related to a wrong configuration of pycharm. As for someone above the app didnt have any issue when launched from the command line/ shell.\nThe issue is here:\n\nFor some reason the env variable disappeared. i added back and everything started to work again without any issue.\n" ]
[ 40, 17, 12, 5, 4, 4, 3, 0, 0 ]
[]
[]
[ "django", "mysql_python", "python" ]
stackoverflow_0047700347_django_mysql_python_python.txt
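A hedged sketch of the standalone-script pattern the answers point at: set DJANGO_SETTINGS_MODULE and call django.setup() before importing anything that touches settings or models. The 'mysite.settings' path is a placeholder for the project's real settings module.

import os
import django

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")  # placeholder path
django.setup()

from django.conf import settings
print(settings.USE_I18N)  # settings are configured, so this no longer raises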
Q: How to "join" right and left eye videos in Python or bash to for stereoscopic 3D VR video I have a code which generates 360 video frames from a "camera" placed in a 3D dataset. I can run this code twice with an offset of the camera position to get "right and left eye" videos. These should be able to be combined into a single file which can be viewed as a 3D stereoscopic video with a VR headset. How can I combine the two video files for the eyes, via Python or ffmpeg, or anything. Really I'm just asking about the necessary file format. Searching the web for this has turned up mostly tutorials for specific softwares which are able to do this sort of thing. I'd like to just do it manually. What file formats are typical for these sorts of videos? Are there Python libraries or bash tools which provide an interface for manipulating files of this type? A: Stereoscopic footage for VR headsets is normally delivered as a single ordinary video (for example H.264 in an MP4 container) whose frame holds both eye views, either side-by-side (left half / right half) or top-bottom (over/under); the player is then told, via its settings or via stereo metadata in the file, which layout is used. You can build such a file with ffmpeg by stacking the two per-eye videos: ffmpeg -i left_eye.mp4 -i right_eye.mp4 -filter_complex hstack=inputs=2 -c:v libx264 output_sbs.mp4 This creates a single side-by-side file, output_sbs.mp4, containing both eye views; use vstack=inputs=2 instead of hstack for a top-bottom layout. The same thing can be scripted from Python by invoking ffmpeg through subprocess: import subprocess subprocess.run([ 'ffmpeg', '-i', 'left_eye.mp4', '-i', 'right_eye.mp4', '-filter_complex', 'hstack=inputs=2', '-c:v', 'libx264', 'output_sbs.mp4' ], check=True) Many 360 players and upload pipelines additionally expect spherical and stereo metadata to be injected into the file (for example with Google's Spatial Media Metadata Injector) so that they recognise it as stereoscopic 360 video.
How to "join" right and left eye videos in Python or bash to for stereoscopic 3D VR video
I have a code which generates 360 video frames from a "camera" placed in a 3D dataset. I can run this code twice with an offset of the camera position to get "right and left eye" videos. These should be able to be combined into a single file which can be viewed as a 3D stereoscopic video with a VR headset. How can I combine the two video files for the eyes, via Python or ffmpeg, or anything. Really I'm just asking about the necessary file format. Searching the web for this has turned up mostly tutorials for specific softwares which are able to do this sort of thing. I'd like to just do it manually. What file formats are typical for these sorts of videos? Are there Python libraries or bash tools which provide an interface for manipulating files of this type?
[ "Stereoscopic footage for VR headsets is normally delivered as a single ordinary video (for example H.264 in an MP4 container) whose frame holds both eye views, either side-by-side (left half / right half) or top-bottom (over/under); the player is then told, via its settings or via stereo metadata in the file, which layout is used.\nYou can build such a file with ffmpeg by stacking the two per-eye videos:\nffmpeg -i left_eye.mp4 -i right_eye.mp4 -filter_complex hstack=inputs=2 -c:v libx264 output_sbs.mp4\n\nThis creates a single side-by-side file, output_sbs.mp4, containing both eye views; use vstack=inputs=2 instead of hstack for a top-bottom layout.\nThe same thing can be scripted from Python by invoking ffmpeg through subprocess:\nimport subprocess\n\nsubprocess.run([\n    'ffmpeg', '-i', 'left_eye.mp4', '-i', 'right_eye.mp4',\n    '-filter_complex', 'hstack=inputs=2', '-c:v', 'libx264',\n    'output_sbs.mp4'\n], check=True)\n\nMany 360 players and upload pipelines additionally expect spherical and stereo metadata to be injected into the file (for example with Google's Spatial Media Metadata Injector) so that they recognise it as stereoscopic 360 video.\n" ]
[ 1 ]
[]
[]
[ "bash", "python", "video", "virtual_reality" ]
stackoverflow_0074646501_bash_python_video_virtual_reality.txt
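Since the question starts from rendered per-eye frame sequences, a hedged end-to-end sketch in bash: encode each eye from its numbered frames, then stack the two into one top-bottom file. The frame pattern, frame rate and file names are all placeholders, not taken from the post.

ffmpeg -framerate 30 -i left/frame_%04d.png  -c:v libx264 -pix_fmt yuv420p left_eye.mp4
ffmpeg -framerate 30 -i right/frame_%04d.png -c:v libx264 -pix_fmt yuv420p right_eye.mp4
ffmpeg -i left_eye.mp4 -i right_eye.mp4 -filter_complex vstack=inputs=2 -c:v libx264 output_tb.mp4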
Q: Find similarities in Python I have a list of Customer Names and Supplier Names and since of 2 different countries in the same company. Unfortunately same Customer and Suppliers have different ids in different countries. In the other hand the names in the most of the cases are the same or at least very similar. The goal is to have a PBI report with unified customers/suppliers for both countries. So I will try to match them by name. But I want in someway to check for possible same names which have a very small difference like a letter for example i found the Greek name "HARALAMPOS" in the one Country and "CHARALAMPOS" in the other. I am pretty sure that I can do this in python but I have no idea how. What I want is to extract a list with the Names of the countries (actually it is not only 2 are 8). and find the most similar cases like the example I gave above. Can anyone navigate me of which libraries I need to use and which packages from each library in order to achieve that ? I have these 3 columns in different schemas in SQL tables and if you noticed there are some differences for example in Germany the L.T.D. contains dots or for the Customer Name "Hermanos" in Germany is "LOS HERMANOS". This is because of the users that they added the data but is the only possible way to match them for me. GREECE DATA GERMANY DATA: US DATA: Expected Result A: IIUC, you can use rapidfuzz and pandas.DataFrame.merge. To give you the general logic, let's compare for example two dataframes (df_ger) with (df_us) : import pandas as pd from rapidfuzz import process ​ out = ( df_ger .assign(message_adapted = (df_ger['CUSTOMER_SUPPLIER_NAME'] .map(lambda x: process.extractOne(x, df_us['CUSTOMER_SUPPLIER_NAME']))).str[0]) .merge(df_us.add_suffix(" (US)"), left_on="message_adapted", right_on="CUSTOMER_SUPPLIER_NAME (US)", how="left", indicator="CHECK") .rename(columns= {"CUSTOMER_SUPPLIER_NAME": "CUSTOMER_SUPPLIER_NAME (GE)"}) .drop(columns="message_adapted") ) # Output : print(out) CUSTOMER_SUPPLIER_NAME (GE) CUSTOMER_SUPPLIER_NAME (US) CHECK 0 LEUTERIS MAVROPOULOS LEFTERIS MAVROPOULOS both 1 HARALAMPOS GEORGIOU CHARALAMPOS GEORGIOU both 2 ATHLETIC COMPANY L.T.D. ATHLETIC COMAPANY LTD both 3 LTK LTD LTK L.T.D. both 4 GEORGE ANDREW GEORGE ANDREW GEORGE both 5 BLACK POWER BLACK POWER both 6 LOS HERMANOS HERMANOS both 7 HILLS BROTHERS HILLS BROTHERS both 8 AFOI KLIRU AFOI KLIROU both 9 BOOKER HALIFA BOOKER HALIFA both 10 MARCOS COMPANY MARCO'S COMPANY both # Edit : Assuming, you have a single dataframe df holding two columns and you want to calculate the score of similarity between those, you can use difflib.SequenceMatcher : from difflib import SequenceMatcher​ ​ df['SIMILARITY'] = ( df.apply(lambda x: str(round(SequenceMatcher(None, x[0].lower(), x[1].lower()).ratio(),2)*100) + "%", axis=1) ) print(df) ​ CUSTOMER_SUPPLIER_NAME (GE) CUSTOMER_SUPPLIER_NAME (US) SIMILARITY 0 LEUTERIS MAVROPOULOS LEFTERIS MAVROPOULOS 95.0% 1 HARALAMPOS GEORGIOU CHARALAMPOS GEORGIOU 97.0% 2 ATHLETIC COMPANY L.T.D. ATHLETIC COMAPANY LTD 91.0% 3 LTK LTD LTK L.T.D. 82.0% 4 GEORGE ANDREW GEORGE ANDREW GEORGE 79.0% 5 BLACK POWER BLACK POWER 100.0% 6 LOS HERMANOS HERMANOS 80.0% 7 HILLS BROTHERS HILLS BROTHERS 100.0% 8 AFOI KLIRU AFOI KLIROU 95.0% 9 BOOKER HALIFA BOOKER HALIFA 100.0% 10 MARCOS COMPANY MARCO'S COMPANY 97.0%
Find similarities in Python
I have a list of Customer Names and Supplier Names and since of 2 different countries in the same company. Unfortunately same Customer and Suppliers have different ids in different countries. In the other hand the names in the most of the cases are the same or at least very similar. The goal is to have a PBI report with unified customers/suppliers for both countries. So I will try to match them by name. But I want in someway to check for possible same names which have a very small difference like a letter for example i found the Greek name "HARALAMPOS" in the one Country and "CHARALAMPOS" in the other. I am pretty sure that I can do this in python but I have no idea how. What I want is to extract a list with the Names of the countries (actually it is not only 2 are 8). and find the most similar cases like the example I gave above. Can anyone navigate me of which libraries I need to use and which packages from each library in order to achieve that ? I have these 3 columns in different schemas in SQL tables and if you noticed there are some differences for example in Germany the L.T.D. contains dots or for the Customer Name "Hermanos" in Germany is "LOS HERMANOS". This is because of the users that they added the data but is the only possible way to match them for me. GREECE DATA GERMANY DATA: US DATA: Expected Result
[ "IIUC, you can use rapidfuzz and pandas.DataFrame.merge.\nTo give you the general logic, let's compare for example two dataframes (df_ger) with (df_us) :\nimport pandas as pd\nfrom rapidfuzz import process\n​\nout = (\n df_ger\n .assign(message_adapted = (df_ger['CUSTOMER_SUPPLIER_NAME']\n .map(lambda x: process.extractOne(x, df_us['CUSTOMER_SUPPLIER_NAME']))).str[0])\n .merge(df_us.add_suffix(\" (US)\"),\n left_on=\"message_adapted\", right_on=\"CUSTOMER_SUPPLIER_NAME (US)\", how=\"left\", indicator=\"CHECK\")\n .rename(columns= {\"CUSTOMER_SUPPLIER_NAME\": \"CUSTOMER_SUPPLIER_NAME (GE)\"})\n .drop(columns=\"message_adapted\")\n )\n\n# Output :\nprint(out)\n\n CUSTOMER_SUPPLIER_NAME (GE) CUSTOMER_SUPPLIER_NAME (US) CHECK\n0 LEUTERIS MAVROPOULOS LEFTERIS MAVROPOULOS both\n1 HARALAMPOS GEORGIOU CHARALAMPOS GEORGIOU both\n2 ATHLETIC COMPANY L.T.D. ATHLETIC COMAPANY LTD both\n3 LTK LTD LTK L.T.D. both\n4 GEORGE ANDREW GEORGE ANDREW GEORGE both\n5 BLACK POWER BLACK POWER both\n6 LOS HERMANOS HERMANOS both\n7 HILLS BROTHERS HILLS BROTHERS both\n8 AFOI KLIRU AFOI KLIROU both\n9 BOOKER HALIFA BOOKER HALIFA both\n10 MARCOS COMPANY MARCO'S COMPANY both\n\n# Edit :\nAssuming, you have a single dataframe df holding two columns and you want to calculate the score of similarity between those, you can use difflib.SequenceMatcher :\nfrom difflib import SequenceMatcher​\n​\ndf['SIMILARITY'] = (\n df.apply(lambda x: str(round(SequenceMatcher(None,\n x[0].lower(),\n x[1].lower()).ratio(),2)*100) + \"%\",\n axis=1)\n )\nprint(df)\n​\n CUSTOMER_SUPPLIER_NAME (GE) CUSTOMER_SUPPLIER_NAME (US) SIMILARITY\n0 LEUTERIS MAVROPOULOS LEFTERIS MAVROPOULOS 95.0%\n1 HARALAMPOS GEORGIOU CHARALAMPOS GEORGIOU 97.0%\n2 ATHLETIC COMPANY L.T.D. ATHLETIC COMAPANY LTD 91.0%\n3 LTK LTD LTK L.T.D. 82.0%\n4 GEORGE ANDREW GEORGE ANDREW GEORGE 79.0%\n5 BLACK POWER BLACK POWER 100.0%\n6 LOS HERMANOS HERMANOS 80.0%\n7 HILLS BROTHERS HILLS BROTHERS 100.0%\n8 AFOI KLIRU AFOI KLIROU 95.0%\n9 BOOKER HALIFA BOOKER HALIFA 100.0%\n10 MARCOS COMPANY MARCO'S COMPANY 97.0%\n\n" ]
[ 0 ]
[]
[]
[ "machine_learning", "pandas", "python" ]
stackoverflow_0074644638_machine_learning_pandas_python.txt
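A hedged refinement of the fuzzy-matching idea: process.extractOne also returns a similarity score, so a cutoff can keep only confident matches and leave the rest for manual review; the column names follow the answer, and the threshold of 85 is an arbitrary illustration.

from rapidfuzz import process, fuzz

def best_match(name, candidates, threshold=85):
    result = process.extractOne(name, candidates, scorer=fuzz.WRatio, score_cutoff=threshold)
    return result[0] if result else None  # None marks a pair that needs manual review

df_ger['US_MATCH'] = df_ger['CUSTOMER_SUPPLIER_NAME'].map(
    lambda n: best_match(n, df_us['CUSTOMER_SUPPLIER_NAME']))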
Q: How to select Xpath element I am playing around with connect automation in Linkedin and trying to send custom connection message to search list. The way I do it, first I find all buttons. Then I find XPath of the names and index it. Then I fill all_names list. Then this names are inserted into greeting message. The problem I face is that search result list does not only contain Connect buttons but also sometimes Follow button and that messes up the index, which results in wrong name being added to the greeting message. Here is the code part: driver.get( "https://www.linkedin.com/search/results/people/?network=%5B%22S%22%5D&origin=FACETED_SEARCH&page=7") time.sleep(2) # ---------------------------------------------------------------- all_connect_buttons = driver.find_elements(By.TAG_NAME, 'button') connect_buttons = [ btn for btn in all_connect_buttons if btn.text == "Connect"] all_names = [] all_span = driver.find_elements( By.XPATH, "//a[contains(@class,'app-aware-link ')]/span[@dir='ltr']/span[@aria-hidden='true']") idx = [*range(1, 11)] for j in range(len(idx)): #get only first name name = all_span[j].text.split(" ")[0] all_names.append(name) # print(name) So basically I have types of buttons: <span class="artdeco-button__text"> Connect </span> and <span class="artdeco-button__text"> Follow </span> Is it somehow possible to filter only Connect button names so that Follow names are not added to all_span list? Can it be done with some XPath of Python expression? A: Instead of collecting all button elements you can make more precise locating. This Xpath will give you "Connect" buttons only //button[contains(@aria-label,'Invite')]. So, instead of using this all_connect_buttons = driver.find_elements(By.TAG_NAME, 'button') you can use this: all_connect_buttons = driver.find_elements(By.XPATH, "//button[contains(@aria-label,'Invite')]") This can also be done with CSS Selectors all_connect_buttons = driver.find_elements(By.CSS_SELECTOR, "button[aria-label*='Invite']")
How to select Xpath element
I am playing around with connect automation in Linkedin and trying to send custom connection message to search list. The way I do it, first I find all buttons. Then I find XPath of the names and index it. Then I fill all_names list. Then this names are inserted into greeting message. The problem I face is that search result list does not only contain Connect buttons but also sometimes Follow button and that messes up the index, which results in wrong name being added to the greeting message. Here is the code part: driver.get( "https://www.linkedin.com/search/results/people/?network=%5B%22S%22%5D&origin=FACETED_SEARCH&page=7") time.sleep(2) # ---------------------------------------------------------------- all_connect_buttons = driver.find_elements(By.TAG_NAME, 'button') connect_buttons = [ btn for btn in all_connect_buttons if btn.text == "Connect"] all_names = [] all_span = driver.find_elements( By.XPATH, "//a[contains(@class,'app-aware-link ')]/span[@dir='ltr']/span[@aria-hidden='true']") idx = [*range(1, 11)] for j in range(len(idx)): #get only first name name = all_span[j].text.split(" ")[0] all_names.append(name) # print(name) So basically I have types of buttons: <span class="artdeco-button__text"> Connect </span> and <span class="artdeco-button__text"> Follow </span> Is it somehow possible to filter only Connect button names so that Follow names are not added to all_span list? Can it be done with some XPath of Python expression?
[ "Instead of collecting all button elements you can make more precise locating.\nThis Xpath will give you \"Connect\" buttons only //button[contains(@aria-label,'Invite')].\nSo, instead of using this all_connect_buttons = driver.find_elements(By.TAG_NAME, 'button') you can use this:\nall_connect_buttons = driver.find_elements(By.XPATH, \"//button[contains(@aria-label,'Invite')]\")\n\nThis can also be done with CSS Selectors\nall_connect_buttons = driver.find_elements(By.CSS_SELECTOR, \"button[aria-label*='Invite']\")\n\n" ]
[ 3 ]
[]
[]
[ "css_selectors", "python", "selenium", "selenium_webdriver", "xpath" ]
stackoverflow_0074646450_css_selectors_python_selenium_selenium_webdriver_xpath.txt
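To remove the index mismatch altogether, a hedged sketch that walks each Connect button and reads the name from the same search-result card instead of keeping two parallel lists; the ancestor::li step assumes every result sits in its own list item, which may break whenever LinkedIn changes its markup.

connect_buttons = driver.find_elements(
    "xpath", "//button[contains(@aria-label,'Invite')]")

for btn in connect_buttons:
    name_el = btn.find_element(
        "xpath", "./ancestor::li//span[@dir='ltr']/span[@aria-hidden='true']")
    first_name = name_el.text.split(" ")[0]
    print(first_name)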
Q: Make an executable file from python project I want to make a .exe file from my python project, I have made a GUI in tkinter. This project has multiple files and uses a variety of libraries. I tried to use auto-py-to-exe but it gave a variety of errors concerning the use of tkinter, saying it can not find tkinter. I do not understand this error since tkinter is automatically installed with python? Are there better ways to use auto-py-to-exe or better programs to convert a hole project to .exe? I also tried pyinstaller, but when opening the program it immediately closes again. The program does run properly in pycharm. The error is I\output\main_init_.py", line 1, in <module> import tkinter ModuleNotFoundError: No module named 'tkinter' A: I personally use CX-Freeze to compile my executables. I have probably used it over 100 or so updates of my tools and typically the problem I run into is either related to missing file that need to be identified in the setup.py file or the fact that when it compiles the Tkinter folder it uses a capital T instead of a lower case t so after I compiling an app I have to manually update the folder to be lowercase T. Here is an example of the setup file. As you can see below when compiling tkinter you need to ID the TK and TCL library folders in order for it to compile the listed DLL files properly. from cx_Freeze import setup, Executable import os base = "Win32GUI" os.environ['TCL_LIBRARY'] = r'C:\Users\user\Desktop\Python381\tcl\tcl8.6' os.environ['TK_LIBRARY'] = r'C:\Users\user\Desktop\Python381\tcl\tk8.6' build_exe_options = {'packages': ['os', 'json', 'http', 'email', 'pyodbc', 'openpyxl', 'calendar', 'threading', 'datetime', 'tkinter', 'tkinter.ttk', 'tkinter.messagebox'], 'excludes': ['PyQt5', 'PIL', 'numpy', 'pandas'], # 'urllib', # 'encodings', # 'numpy' 'include_files': [r'excel_temp.xlsx', r'opt_3_excel_temp.xlsx', r'tcoms_excel_temp.xlsx', r'main_config.json', r"C:\Users\user\Desktop\Python381\DLLs\tcl86t.dll", r"C:\Users\user\Desktop\Python381\DLLs\tk86t.dll"]} setup( name='<GIT>', options={'build_exe': build_exe_options}, version='0.57', description='<GIT - Global Inventory Tool!>', executables=[Executable(r'C:\Users\user\PycharmProjects\Py381_GIT\MAIN.py', base=base)] ) After you run the compiler you will often get an error that looks like this. The error NoduleNotFoundError: No module named 'tkinter' is due to the odd behavior of the compiler giving the tkinter folder a Capital T like the below image in the lib folder. In this case you would update the library to be a lowercase t. Let me know if you have any questions.
Make an executable file from python project
I want to make a .exe file from my python project, I have made a GUI in tkinter. This project has multiple files and uses a variety of libraries. I tried to use auto-py-to-exe but it gave a variety of errors concerning the use of tkinter, saying it can not find tkinter. I do not understand this error since tkinter is automatically installed with python? Are there better ways to use auto-py-to-exe or better programs to convert a hole project to .exe? I also tried pyinstaller, but when opening the program it immediately closes again. The program does run properly in pycharm. The error is I\output\main_init_.py", line 1, in <module> import tkinter ModuleNotFoundError: No module named 'tkinter'
[ "I personally use CX-Freeze to compile my executables. I have probably used it over 100 or so updates of my tools and typically the problem I run into is either related to missing file that need to be identified in the setup.py file or the fact that when it compiles the Tkinter folder it uses a capital T instead of a lower case t so after I compiling an app I have to manually update the folder to be lowercase T.\nHere is an example of the setup file.\nAs you can see below when compiling tkinter you need to ID the TK and TCL library folders in order for it to compile the listed DLL files properly.\nfrom cx_Freeze import setup, Executable\nimport os\n\n\nbase = \"Win32GUI\"\n\nos.environ['TCL_LIBRARY'] = r'C:\\Users\\user\\Desktop\\Python381\\tcl\\tcl8.6'\nos.environ['TK_LIBRARY'] = r'C:\\Users\\user\\Desktop\\Python381\\tcl\\tk8.6'\n\n\nbuild_exe_options = {'packages': ['os',\n 'json',\n 'http',\n 'email',\n 'pyodbc',\n 'openpyxl',\n 'calendar',\n 'threading',\n 'datetime',\n\n 'tkinter',\n 'tkinter.ttk',\n 'tkinter.messagebox'],\n 'excludes': ['PyQt5',\n 'PIL',\n 'numpy',\n 'pandas'], # 'urllib', # 'encodings', # 'numpy'\n\n 'include_files': [r'excel_temp.xlsx',\n r'opt_3_excel_temp.xlsx',\n r'tcoms_excel_temp.xlsx',\n r'main_config.json',\n r\"C:\\Users\\user\\Desktop\\Python381\\DLLs\\tcl86t.dll\",\n r\"C:\\Users\\user\\Desktop\\Python381\\DLLs\\tk86t.dll\"]}\n \nsetup(\n name='<GIT>',\n options={'build_exe': build_exe_options},\n version='0.57',\n description='<GIT - Global Inventory Tool!>',\n executables=[Executable(r'C:\\Users\\user\\PycharmProjects\\Py381_GIT\\MAIN.py', base=base)]\n)\n\nAfter you run the compiler you will often get an error that looks like this.\n\nThe error NoduleNotFoundError: No module named 'tkinter' is due to the odd behavior of the compiler giving the tkinter folder a Capital T like the below image in the lib folder.\n\n\nIn this case you would update the library to be a lowercase t.\n\nLet me know if you have any questions.\n" ]
[ 1 ]
[]
[]
[ "exe", "python", "tkinter" ]
stackoverflow_0074645760_exe_python_tkinter.txt
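As a complement to the answer above, here is a stripped-down cx_Freeze setup script reduced to just the tkinter-related pieces. The Python install path (C:\Python38), the entry script name (main.py), the application name, and the version string are placeholders rather than values from the original post; build with: python setup.py build.

import os
from cx_Freeze import setup, Executable

# Point cx_Freeze at the Tcl/Tk runtime shipped with the Python install (path is an assumption).
python_dir = r"C:\Python38"
os.environ["TCL_LIBRARY"] = os.path.join(python_dir, "tcl", "tcl8.6")
os.environ["TK_LIBRARY"] = os.path.join(python_dir, "tcl", "tk8.6")

build_exe_options = {
    "packages": ["tkinter", "tkinter.ttk", "tkinter.messagebox"],
    # Bundle the Tcl/Tk DLLs next to the executable so the frozen app can find them.
    "include_files": [
        os.path.join(python_dir, "DLLs", "tcl86t.dll"),
        os.path.join(python_dir, "DLLs", "tk86t.dll"),
    ],
}

setup(
    name="MyTkApp",  # placeholder application name
    version="0.1",
    description="tkinter GUI example",
    options={"build_exe": build_exe_options},
    # base="Win32GUI" hides the console window for GUI apps on Windows.
    executables=[Executable("main.py", base="Win32GUI")],
)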
Q: How to run the doctest for a single function in python3? How do I run the doctest for only a single function in Python using the command line? I can run python3 -m doctest -v main.py, but this will run all the doctests in main.py. How do I specify one function to call the doctest on? A: That depends on the code in main.py that runs the doctests. You can change that code to test a specific function by calling doctest.run_docstring_examples(). When that code runs doctest.testmod() however, you cannot limit testing to a single function from the command line. A: You can accomplish this using my doctestfn package: pip install doctestfn It can then be used to run tests for a single function in a module as follows: doctestfn myfile.py myfunction
How to run the doctest for a single function in python3?
How do I run the doctest for only a single function in Python using the command line? I can run python3 -m doctest -v main.py, but this will run all the doctests in main.py. How do I specify one function to call the doctest on?
[ "That depends on the code in main.py that runs the doctests. You can change that code to test a specific function by calling doctest.run_docstring_examples().\nWhen that code runs doctest.testmod() however, you cannot limit testing to a single function from the command line.\n", "You can accomplish this using my doctestfn package:\npip install doctestfn\n\nIt can then be used to run tests for a single function in a module as follows:\ndoctestfn myfile.py myfunction\n\n" ]
[ 1, 0 ]
[]
[]
[ "doctest", "python" ]
stackoverflow_0068407088_doctest_python.txt
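To make the first answer above concrete, here is a small hypothetical sketch of testing a single function with doctest.run_docstring_examples(); the module name mymodule and the function my_func are placeholders rather than names from the original question.

# run_one_doctest.py -- run the doctest examples of one specific function only.
import doctest
import mymodule  # placeholder: the module that contains the function under test

# run_docstring_examples() executes just the examples in the given object's docstring,
# instead of scanning the whole module the way `python -m doctest` or testmod() does.
doctest.run_docstring_examples(
    mymodule.my_func,                   # the single function to test
    {"my_func": mymodule.my_func},      # globals made available to the examples
    verbose=True,
    name="my_func",
)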
Q: Error while trying to add up purchases between two inputted dates The instructions are the following: "2 dates are entered and the total purchases between those dates is shown, including the purchases made on those dates." I have the following list: list_purchases=[{'id':'123','name':'Luis', 'surname':'Henderson', 'price':16000, 'date': "(2022, 3, 12)"},{''id':'123','name':'Luis', 'surname':'Henderson', 'price':4000, 'date': "(2022, 12, 1)"}] And I've tried the following code but I don't know how to arrange it in order to make a for loop which goes through those specific dates and adds up the purchases made. def dates_purchases(): year=int(input("Enter year of first date: ")) month = int(input("Enter month of first date: ")) day = int(input("Enter day of first date: ")) first_date=date(year,month,day) year2=int(input("Enter year of second date: ")) month2 = int(input("Enter month of second date: ")) day2 = int(input("Enter day of second date: ")) second_date=date(year2,month2,day2) for purchases in range(first_date,second_date): sum=+purchases['price'] print('The total cost of the purchases made between those dates is:',sum) A: You can convert it to a pandas Dataframe, perform filtering operations, then sum up the price. import datetime import pandas as pd list_purchases = [{'id':'123','name':'Luis', 'surname':'Henderson', 'price':16000, 'date': "(2022, 3, 12)"},{'id':'123','name':'Luis', 'surname':'Henderson', 'price':4000, 'date': "(2022, 12, 1)"}] # Convert to DataFrame and datetime format purchase_data = pd.DataFrame(list_purchases) purchase_data["date"] = pd.to_datetime(purchase_data["date"], format="(%Y, %m, %d)") # Query for first and last date (you can set as input, let me pre-define it here) first_date = datetime.datetime(2022, 3, 12) last_date = datetime.datetime(2022, 12, 1) # Filter for date between first_date and last_date inclusive purchase_data_filter = purchase_data[((first_date <= purchase_data["date"])) & (purchase_data["date"] <= last_date)] result = purchase_data_filter.price.sum()
Error while trying to add up purchases between two inputted dates
The instructions are the following: "2 dates are entered and the total purchases between those dates is shown, including the purchases made on those dates." I have the following list: list_purchases=[{'id':'123','name':'Luis', 'surname':'Henderson', 'price':16000, 'date': "(2022, 3, 12)"},{''id':'123','name':'Luis', 'surname':'Henderson', 'price':4000, 'date': "(2022, 12, 1)"}] And I've tried the following code but I don't know how to arrange it in order to make a for loop which goes through those specific dates and adds up the purchases made. def dates_purchases(): year=int(input("Enter year of first date: ")) month = int(input("Enter month of first date: ")) day = int(input("Enter day of first date: ")) first_date=date(year,month,day) year2=int(input("Enter year of second date: ")) month2 = int(input("Enter month of second date: ")) day2 = int(input("Enter day of second date: ")) second_date=date(year2,month2,day2) for purchases in range(first_date,second_date): sum=+purchases['price'] print('The total cost of the purchases made between those dates is:',sum)
[ "You can convert it to a pandas Dataframe, perform filtering operations, then sum up the price.\nimport datetime\nimport pandas as pd\n\nlist_purchases = [{'id':'123','name':'Luis', 'surname':'Henderson', 'price':16000, 'date': \"(2022, 3, 12)\"},{'id':'123','name':'Luis', 'surname':'Henderson', 'price':4000, 'date': \"(2022, 12, 1)\"}]\n\n# Convert to DataFrame and datetime format\npurchase_data = pd.DataFrame(list_purchases)\npurchase_data[\"date\"] = pd.to_datetime(purchase_data[\"date\"], format=\"(%Y, %m, %d)\")\n\n# Query for first and last date (you can set as input, let me pre-define it here)\nfirst_date = datetime.datetime(2022, 3, 12)\nlast_date = datetime.datetime(2022, 12, 1)\n\n# Filter for date between first_date and last_date inclusive\npurchase_data_filter = purchase_data[((first_date <= purchase_data[\"date\"])) & (purchase_data[\"date\"] <= last_date)]\n\nresult = purchase_data_filter.price.sum()\n\n\n" ]
[ 0 ]
[]
[]
[ "date", "python", "python_3.x" ]
stackoverflow_0074646558_date_python_python_3.x.txt
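For contrast with the pandas answer above, here is a plain-Python sketch of the same inclusive date filter. It assumes the purchase dates are stored as datetime.date objects rather than the string form shown in the question, and the two hard-coded dates stand in for the interactive input() calls.

from datetime import date

# Sample data in the same shape as the question, but with real date objects (an assumption).
list_purchases = [
    {"id": "123", "name": "Luis", "surname": "Henderson", "price": 16000, "date": date(2022, 3, 12)},
    {"id": "123", "name": "Luis", "surname": "Henderson", "price": 4000, "date": date(2022, 12, 1)},
]

first_date = date(2022, 3, 12)    # stands in for the values read with input()
second_date = date(2022, 12, 1)

# Sum the price of every purchase whose date falls inside the inclusive range.
total = sum(p["price"] for p in list_purchases if first_date <= p["date"] <= second_date)
print("The total cost of the purchases made between those dates is:", total)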
Q: How to create a heatmap in Python with 3 columns - the x and y coordinates and the heat I have a dataframe with 3 columns, x-points, y-points and the heat. Like this: X, Y, Z -2, 0, 1 -2, 1, 2 -2, 2, 5 -1, 0, 3 -1, 1, 5 -1, 2, 8 .., .., .. 2, 1, 4 2, 2, 1 I want to plot a heatmap of this data with X and Y being the coords, and Z being the heat. I have tried lots of ways to do this and constantly run into different errors. A: Use pivot and seaborn.heatmap: import seaborn as sns sns.heatmap(df.pivot(index='Y', columns='X', values='Z')) Output: IF you want to handle missing coordinates: df2 = (df .pivot(index='Y', columns='X', values='Z') .pipe(lambda d: d.reindex(index=range(d.index.min(), d.index.max()+1), columns=range(d.columns.min(), d.columns.max()+1), ) ) ) sns.heatmap(df2) Output: A: https://seaborn.pydata.org/generated/seaborn.heatmap.html However I notice your data are all numeric. You may be looking for a Z dimensional colored scatterplot rather than a true 'heatmap'. You can use the plt.scatter function of matplotlib. (X_col=x,Y_col=y,color=Z). https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.scatter.html
How to create a heatmap in Python with 3 columns - the x and y coordinates and the heat
I have a dataframe with 3 columns, x-points, y-points and the heat. Like this: X, Y, Z -2, 0, 1 -2, 1, 2 -2, 2, 5 -1, 0, 3 -1, 1, 5 -1, 2, 8 .., .., .. 2, 1, 4 2, 2, 1 I want to plot a heatmap of this data with X and Y being the coords, and Z being the heat. I have tried lots of ways to do this and constantly run into different errors.
[ "Use pivot and seaborn.heatmap:\nimport seaborn as sns\n\nsns.heatmap(df.pivot(index='Y', columns='X', values='Z'))\n\nOutput:\n\nIF you want to handle missing coordinates:\ndf2 = (df\n .pivot(index='Y', columns='X', values='Z')\n .pipe(lambda d: d.reindex(index=range(d.index.min(), d.index.max()+1),\n columns=range(d.columns.min(), d.columns.max()+1),\n )\n )\n)\n\nsns.heatmap(df2)\n\nOutput:\n\n", "https://seaborn.pydata.org/generated/seaborn.heatmap.html\nHowever I notice your data are all numeric. You may be looking for a Z dimensional colored scatterplot rather than a true 'heatmap'. You can use the plt.scatter function of matplotlib. (X_col=x,Y_col=y,color=Z). https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.scatter.html\n" ]
[ 1, 0 ]
[]
[]
[ "graph", "heatmap", "matplotlib", "python", "seaborn" ]
stackoverflow_0074646588_graph_heatmap_matplotlib_python_seaborn.txt
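A self-contained variant of the first answer above, using a small made-up DataFrame so the snippet runs as-is; the sample values, the viridis colour map, and the annotation and axis tweaks are illustrative choices, not part of the original answer.

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Small made-up grid of (X, Y, Z) points in the same long format as the question's table.
df = pd.DataFrame({
    "X": [-2, -2, -2, -1, -1, -1, 0, 0, 0],
    "Y": [ 0,  1,  2,  0,  1,  2, 0, 1, 2],
    "Z": [ 1,  2,  5,  3,  5,  8, 2, 6, 4],
})

# Reshape the long table into a grid (rows = Y, columns = X) and draw it as a heatmap.
grid = df.pivot(index="Y", columns="X", values="Z")
ax = sns.heatmap(grid, annot=True, cmap="viridis")
ax.invert_yaxis()  # put Y = 0 at the bottom, as on a conventional axis
plt.show()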