Dataset columns:
content: string (85 to 101k characters)
title: string (0 to 150 characters)
question: string (15 to 48k characters)
answers: sequence
answers_scores: sequence
non_answers: sequence
non_answers_scores: sequence
tags: sequence
name: string (35 to 137 characters)
Q: How to make a constraint based on an entry value in an Association Object in SQLAlchemy? Given the following minimal example: class Association(Base): __tablename__ = "association_table" left_id = Column(ForeignKey("left_table.id"), primary_key=True) right_id = Column(ForeignKey("right_table.id"), primary_key=True) first_child = Column(Boolean, nullable=False) child = relationship("Child", back_populates="parents") parent = relationship("Parent", back_populates="children") class Parent(Base): __tablename__ = "left_table" id = Column(Integer, primary_key=True) children = relationship("Association", back_populates="parent") class Child(Base): __tablename__ = "right_table" id = Column(Integer, primary_key=True) parents = relationship("Association", back_populates="child") How to make a constraint that only one child could be the first child of a parent? (Disregarding the 'couple first child' logic) A: The simplest solution is to use a partial index, which is supported by both PostgreSQL (Partial Indexes) and SQLite (Partial Indexes). The code below should work for both: class Association(Base): __tablename__ = "association_table" left_id = Column(ForeignKey("left_table.id"), primary_key=True) right_id = Column(ForeignKey("right_table.id"), primary_key=True) first_child = Column(Boolean, nullable=False) child = relationship("Child", back_populates="parents") parent = relationship("Parent", back_populates="children") __table_args__ = ( Index( "uci_first_child", left_id, unique=True, postgresql_where=first_child==True, # will be used for postgresql sqlite_where=first_child==True, # will be used for sqlite ), )
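A rough, self-contained sketch of the partial unique index above in action (illustrative only: it assumes SQLAlchemy 1.4 with an in-memory SQLite engine and omits the relationships to stay short; all other names mirror the question):

from sqlalchemy import Boolean, Column, ForeignKey, Index, Integer, create_engine
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Association(Base):
    __tablename__ = "association_table"
    left_id = Column(ForeignKey("left_table.id"), primary_key=True)
    right_id = Column(ForeignKey("right_table.id"), primary_key=True)
    first_child = Column(Boolean, nullable=False)
    __table_args__ = (
        # partial unique index: left_id must be unique only among rows where first_child is true
        Index("uci_first_child", left_id, unique=True, sqlite_where=first_child == True),
    )

class Parent(Base):
    __tablename__ = "left_table"
    id = Column(Integer, primary_key=True)

class Child(Base):
    __tablename__ = "right_table"
    id = Column(Integer, primary_key=True)

engine = create_engine("sqlite://")  # in-memory database, just for the demo
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([Parent(id=1), Child(id=1), Child(id=2)])
    session.add(Association(left_id=1, right_id=1, first_child=True))
    session.commit()
    try:
        session.add(Association(left_id=1, right_id=2, first_child=True))
        session.commit()  # violates the partial unique index
    except IntegrityError:
        session.rollback()
        print("second 'first child' for the same parent was rejected")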
How to make a constraint based on an entry value in an Association Object in SQLAlchemy?
Given the following minimal example: class Association(Base): __tablename__ = "association_table" left_id = Column(ForeignKey("left_table.id"), primary_key=True) right_id = Column(ForeignKey("right_table.id"), primary_key=True) first_child = Column(Boolean, nullable=False) child = relationship("Child", back_populates="parents") parent = relationship("Parent", back_populates="children") class Parent(Base): __tablename__ = "left_table" id = Column(Integer, primary_key=True) children = relationship("Association", back_populates="parent") class Child(Base): __tablename__ = "right_table" id = Column(Integer, primary_key=True) parents = relationship("Association", back_populates="child") How to make a constraint that only one child could be the first child of a parent? (Disregarding the 'couple first child' logic)
[ "The simpliest solution is to use the Partial Index, which are supported for PostgreSQL, Partial Indexes and SQLite, Partial Indexes.\nThe code below should work for both:\nclass Association(Base):\n __tablename__ = \"association_table\"\n left_id = Column(ForeignKey(\"left_table.id\"), primary_key=True)\n right_id = Column(ForeignKey(\"right_table.id\"), primary_key=True)\n first_child = Column(Boolean, nullable=False)\n child = relationship(\"Child\", back_populates=\"parents\")\n parent = relationship(\"Parent\", back_populates=\"children\")\n\n __table_args__ = (\n Index(\n \"uci_first_child\",\n left_id,\n unique=True,\n postgres_where=first_child==True, # will be used for postgresql\n sqlite_where=first_child==True, # will be used for sqlite\n ),\n )\n\n" ]
[ 3 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0074581422_python_sqlalchemy.txt
Q: Error while converting String elements of List of Lists to float This is in reference to my previous question related to extracting data from .asc file and separating them while having multiple delimiters. I want to perform mathematical operations on the float elements of the list of lists generated from the above question. The separation of individual data from the string has been achieved however, since the list of lists has also generated individual elements in form of strings i am unable to perform mathematical operations on them. I would like to be able to access each element in the list of lists, convert them to float type and then perform mathematical operations on them. Here is my code where in the .asc file strings have been separated into individual elements and stored as list of lists. This is the image of a specific set of datas i got from the bigger list of lists. I access the specific set of data from the lists and then when i try to convert them to float, i get this error ValueError: could not convert string to float: '.' This is the code i have been working with import numpy as np import pandas as pd import re Output_list = [] Final = [] count = 0 with open(r"myfile.asc","r") as file_in: for line in map(str.strip, file_in): if "LoggingString :=" in line: first_quote = line.index('"') # returns the column number where '"' first appears in the # whole string last_quote = line.index('"', first_quote + 1) #returns the column value where " appears last #in the # whole string ( end of line ) Output_list.append( line[:first_quote].split(maxsplit=1) + line[first_quote + 1: last_quote].split(","), ) Final.append(Output_list[count][8:25]) Data = list(map(float, Output_list[count][8])) #converting column 8th element of every lists #in Output_list to float count += 1 df = pd.DataFrame(Output_list) df.to_csv("Triall_2.csv", sep=';') df_1 = pd.DataFrame(Final) df_1.to_csv("Test.csv", sep=";") I alternatively tried using np.array(Final).astype(float).tolist() method as well but it didn't change the strings to float as i wanted. A: Declare Data as empty list outside for-loop and use .append to insert a float value into it: Data = [] with open(r"myfile.asc", "r") as file_in: for line in map(str.strip, file_in): if "LoggingString :=" in line: # ... Data.append(float(Output_list[count][8])) count += 1 print(Data) A: The problem occurs when trying to map an individual string '1.06' into a float object. It wil treat the string as an array and try to turn each individual element of the array into a float object, but the second element of this example is the dot character . which cannot be turned into a float object. >>> my_array = ['0','1.06','23.345'] >>> list(map(float, my_array[1])) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: could not convert string to float: '.' Instead it is more convenient to turn all elements of the array into float objects: >>> my_array = ['0', '1.06', '23.345'] >>> list(map(float,my_array)) [0.0, 1.06, 23.345] For more information you can look at the map documentation.
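A short illustrative sketch of the same point (the rows variable is a placeholder standing in for the Final list of lists): float() should be applied to each element of every row, while map(float, "1.06") iterates over the characters of that single string and fails on '.'.

rows = [["0", "1.06", "23.345"], ["4", "5.5", "6.75"]]  # stand-in for the Final list of lists

numeric_rows = [[float(value) for value in row] for row in rows]
print(numeric_rows)  # [[0.0, 1.06, 23.345], [4.0, 5.5, 6.75]]

# the same conversion with numpy, provided every row has the same length
import numpy as np
numeric_array = np.asarray(rows, dtype=float)
print(numeric_array.sum(axis=0))  # columnwise math now works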
Error while converting String elements of List of Lists to float
This is in reference to my previous question related to extracting data from .asc file and separating them while having multiple delimiters. I want to perform mathematical operations on the float elements of the list of lists generated from the above question. The separation of individual data from the string has been achieved however, since the list of lists has also generated individual elements in form of strings i am unable to perform mathematical operations on them. I would like to be able to access each element in the list of lists, convert them to float type and then perform mathematical operations on them. Here is my code where in the .asc file strings have been separated into individual elements and stored as list of lists. This is the image of a specific set of datas i got from the bigger list of lists. I access the specific set of data from the lists and then when i try to convert them to float, i get this error ValueError: could not convert string to float: '.' This is the code i have been working with import numpy as np import pandas as pd import re Output_list = [] Final = [] count = 0 with open(r"myfile.asc","r") as file_in: for line in map(str.strip, file_in): if "LoggingString :=" in line: first_quote = line.index('"') # returns the column number where '"' first appears in the # whole string last_quote = line.index('"', first_quote + 1) #returns the column value where " appears last #in the # whole string ( end of line ) Output_list.append( line[:first_quote].split(maxsplit=1) + line[first_quote + 1: last_quote].split(","), ) Final.append(Output_list[count][8:25]) Data = list(map(float, Output_list[count][8])) #converting column 8th element of every lists #in Output_list to float count += 1 df = pd.DataFrame(Output_list) df.to_csv("Triall_2.csv", sep=';') df_1 = pd.DataFrame(Final) df_1.to_csv("Test.csv", sep=";") I alternatively tried using np.array(Final).astype(float).tolist() method as well but it didn't change the strings to float as i wanted.
[ "Declare Data as empty list outside for-loop and use .append to insert a float value into it:\nData = []\nwith open(r\"myfile.asc\", \"r\") as file_in:\n for line in map(str.strip, file_in):\n if \"LoggingString :=\" in line:\n # ...\n Data.append(float(Output_list[count][8]))\n count += 1\n\nprint(Data)\n\n", "The problem occurs when trying to map an individual string '1.06' into a float object. It wil treat the string as an array and try to turn each individual element of the array into a float object, but the second element of this example is the dot character . which cannot be turned into a float object.\n>>> my_array = ['0','1.06','23.345']\n>>> list(map(float, my_array[1]))\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nValueError: could not convert string to float: '.'\n\nInstead it is more convenient to turn all elements of the array into float objects:\n>>> my_array = ['0', '1.06', '23.345']\n>>> list(map(float,my_array))\n[0.0, 1.06, 23.345]\n\nFor more information you can look at the map documentation.\n" ]
[ 1, 0 ]
[]
[]
[ "file", "numpy", "pandas", "python", "type_conversion" ]
stackoverflow_0074614779_file_numpy_pandas_python_type_conversion.txt
Q: Find the point at which a curve touches the X axis I have the following plot made with some data points,. What is the best Pythonic way to find the point through which the curve intersects the X-axis? Thanks for any help. -2.0 -2.22537043 -1.9 -2.22609532 -1.8 -2.22075396 -1.7 -2.22729678 -1.6 -2.22353721 -1.5 -2.22341588 -1.4 -2.2180032 -1.3 -2.22850037 -1.2 -2.22553919 -1.1 -2.22866368 -1.0 -2.22400234 -0.9 -2.22865694 -0.8 -2.22058969 -0.7 -2.22399086 -0.6 -2.20372207 -0.5 -2.22639477 -0.4 -2.10633351 -0.3 -2.03573848 -0.2 -1.52582935 -0.1 -0.344812049 0.0 1.61330696 0.1 2.21013059 0.2 2.22698993 0.3 2.22698993 0.4 2.22698993 0.5 2.22698993 0.6 2.22698993 0.7 2.21522144 0.8 2.22699297 0.9 2.22361681 1.0 2.22055266 1.1 2.22299154 1.2 2.21155482 1.3 2.22212628 1.4 2.22437687 1.5 2.22365865 1.6 2.21749658 1.7 2.22603657 1.8 2.22736 1.9 2.22471317 2.0 2.22724296 Update: Here is the data point. How I'm finding it now? I take my mouse to the plot window and find the point manually, why it is not working? It is slow. A: There is really not enough information given in this question to solve the problem outright. That said, if I understand correctly, you perhaps are looking to see where any two functions (line or curve) are intersecting. There are a few approaches. The most simple I'd say would be to use robust curve intersection approach such as this implementation by sukhbinder: intersection, which is itself a python port of an existing Matlab File Exchange entry. For example, given a sigmoid that looks somewhat like your figure above, and an overlapping sine wave, find the intersections: from intersect import intersection import matplotlib.pyplot as plt import numpy as np x1 = np.linspace(-1, 1, 100) y1 = 1 / (1 + np.exp(-x1 * 25)) x2 = np.linspace(-1, 1, 100) y2 = np.sin(x2 * 2.25) + 0.5 x, y = intersection(x1, y1, x2, y2) plt.plot(x1, y1, c="r") plt.plot(x2, y2, c="g") plt.plot(x, y, "*k") plt.show() Edit (#2 to solve for x at y=0) I do not have the reputation to comment on the original post, but will mention this sounds instead like a root finding problem. For completeness, here's a rework of the same approach using the data supplied by OP. The intersecting line in this case is a line spanning the min/max of x1 where . Essentially, this is now a graphical look at finding the roots. 
from intersect import intersection import matplotlib.pyplot as plt import numpy as np data = np.array( [ [-2.0, -2.22537043], [-1.9, -2.22609532], [-1.8, -2.22075396], [-1.7, -2.22729678], [-1.6, -2.22353721], [-1.5, -2.22341588], [-1.4, -2.2180032], [-1.3, -2.22850037], [-1.2, -2.22553919], [-1.1, -2.22866368], [-1.0, -2.22400234], [-0.9, -2.22865694], [-0.8, -2.22058969], [-0.7, -2.22399086], [-0.6, -2.20372207], [-0.5, -2.22639477], [-0.4, -2.10633351], [-0.3, -2.03573848], [-0.2, -1.52582935], [-0.1, -0.344812049], [0.0, 1.61330696], [0.1, 2.21013059], [0.2, 2.22698993], [0.3, 2.22698993], [0.4, 2.22698993], [0.5, 2.22698993], [0.6, 2.22698993], [0.7, 2.21522144], [0.8, 2.22699297], [0.9, 2.22361681], [1.0, 2.22055266], [1.1, 2.22299154], [1.2, 2.21155482], [1.3, 2.22212628], [1.4, 2.22437687], [1.5, 2.22365865], [1.6, 2.21749658], [1.7, 2.22603657], [1.8, 2.22736], [1.9, 2.22471317], [2.0, 2.227242961] ] ) x1, y1 = data[:, 0], data[:, 1] x2 = [np.min(x1), np.max(x1)] y2 = [0, 0] x, y = intersection(x1, y1, x2, y2) plt.plot(x1, y1, c="r") plt.plot(x2, y2, c="g") plt.plot(x, y, "*k") plt.show() A: I don't know if there is anything standard in matplotlib, but the below algorithm in a loop Find xi where y[xi] > 0. Call that location i. If y[xi - 1] == 0, don't continue - its an x intercept add it to the array, else Compute the slope -> y2 - y1 / x2 -x1. Plugin an (x0, y0) pair to solve for the intercept. Solve for the x, instead of the y. can compute those values. import numpy as np values = np.loadtxt(r"test.txt", delimiter=" ") x_intercepts = [] for i, v in enumerate(values): if i > 0: # if y > 0 if v[1] > 0: previous_v = values[i - 1] # if the previous y == 0 if previous_v[1] == 0: x_intercepts.append(previous_v[0]) elif previous_v[1] < 0: # slope = y2 - y1 / x2 - x1 # formula of linear equation -> y = mx + b # intercept -> b = y - m(x) slope = (v[1] - previous_v[1]) / (v[0] - previous_v[0]) # if the slope is changing and not a constant if slope != 0: intercept = v[1] - (slope * v[0]) # equation -> y = (slope * x) + intercept # equation -> y - intercept = (slope * x) # equation -> (y - intercept) / slope = x x_intercept = (0 - intercept) / slope x_intercepts.append(x_intercept) else: # if the if doesn't make it, we still need to check if the previous value was 0 if values[i - 1][1] == 0: x_intercepts.append(values[i - 1][0]) print(x_intercepts) Give this a whirl and let me know how it works for you. Explanation The for loop gives us the index (i) of the current value v. We want to be able to use the current value as the pair (x2, y2) and the previous value as (x1, y1), so I check to make sure we aren't on the first iteration with if i > 0: If y2 < 0 (the only else statement), check if y1 is == to 0 -> if so add it as an x intercept. If y2 > 0, check if y1 is == to 0 -> if so add it as an x intercept. If its not, check if it is less than 0. Compute the slope and check if the slope is not 0 -> we want to avoid a division by 0 error. Compute the y intercept. Solve for the x intercept and add it to the list of x intercepts.
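A dependency-free sketch along the same lines (an addition for illustration, not from the original answers; it uses only NumPy and a shortened copy of the question's data): locate the sign change in y and interpolate linearly to estimate where the curve crosses y = 0.

import numpy as np

x = np.array([-0.3, -0.2, -0.1, 0.0, 0.1])  # shortened stand-in for the full data
y = np.array([-2.0357, -1.5258, -0.3448, 1.6133, 2.2101])

crossings = np.where(np.diff(np.sign(y)) != 0)[0]  # indices i where y[i] and y[i+1] straddle zero
for i in crossings:
    # linear interpolation between (x[i], y[i]) and (x[i+1], y[i+1])
    x0 = x[i] - y[i] * (x[i + 1] - x[i]) / (y[i + 1] - y[i])
    print("x-intercept near", x0)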
Find the point at which a curve touches the X axis
I have the following plot made with some data points,. What is the best Pythonic way to find the point through which the curve intersects the X-axis? Thanks for any help. -2.0 -2.22537043 -1.9 -2.22609532 -1.8 -2.22075396 -1.7 -2.22729678 -1.6 -2.22353721 -1.5 -2.22341588 -1.4 -2.2180032 -1.3 -2.22850037 -1.2 -2.22553919 -1.1 -2.22866368 -1.0 -2.22400234 -0.9 -2.22865694 -0.8 -2.22058969 -0.7 -2.22399086 -0.6 -2.20372207 -0.5 -2.22639477 -0.4 -2.10633351 -0.3 -2.03573848 -0.2 -1.52582935 -0.1 -0.344812049 0.0 1.61330696 0.1 2.21013059 0.2 2.22698993 0.3 2.22698993 0.4 2.22698993 0.5 2.22698993 0.6 2.22698993 0.7 2.21522144 0.8 2.22699297 0.9 2.22361681 1.0 2.22055266 1.1 2.22299154 1.2 2.21155482 1.3 2.22212628 1.4 2.22437687 1.5 2.22365865 1.6 2.21749658 1.7 2.22603657 1.8 2.22736 1.9 2.22471317 2.0 2.22724296 Update: Here is the data point. How I'm finding it now? I take my mouse to the plot window and find the point manually, why it is not working? It is slow.
[ "There is really not enough information given in this question to solve the problem outright. That said, if I understand correctly, you perhaps are looking to see where any two functions (line or curve) are intersecting.\nThere are a few approaches. The most simple I'd say would be to use robust curve intersection approach such as this implementation by sukhbinder: intersection, which is itself a python port of an existing Matlab File Exchange entry.\nFor example, given a sigmoid that looks somewhat like your figure above, and an overlapping sine wave, find the intersections:\nfrom intersect import intersection\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx1 = np.linspace(-1, 1, 100)\ny1 = 1 / (1 + np.exp(-x1 * 25))\n\nx2 = np.linspace(-1, 1, 100)\ny2 = np.sin(x2 * 2.25) + 0.5\n\nx, y = intersection(x1, y1, x2, y2)\n\nplt.plot(x1, y1, c=\"r\")\nplt.plot(x2, y2, c=\"g\")\nplt.plot(x, y, \"*k\")\nplt.show()\n\n\nEdit (#2 to solve for x at y=0)\nI do not have the reputation to comment on the original post, but will mention this sounds instead like a root finding problem.\nFor completeness, here's a rework of the same approach using the data supplied by OP. The intersecting line in this case is a line spanning the min/max of x1 where . Essentially, this is now a graphical look at finding the roots.\nfrom intersect import intersection\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndata = np.array(\n [\n [-2.0, -2.22537043],\n [-1.9, -2.22609532],\n [-1.8, -2.22075396],\n [-1.7, -2.22729678],\n [-1.6, -2.22353721],\n [-1.5, -2.22341588],\n [-1.4, -2.2180032],\n [-1.3, -2.22850037],\n [-1.2, -2.22553919],\n [-1.1, -2.22866368],\n [-1.0, -2.22400234],\n [-0.9, -2.22865694],\n [-0.8, -2.22058969],\n [-0.7, -2.22399086],\n [-0.6, -2.20372207],\n [-0.5, -2.22639477],\n [-0.4, -2.10633351],\n [-0.3, -2.03573848],\n [-0.2, -1.52582935],\n [-0.1, -0.344812049],\n [0.0, 1.61330696],\n [0.1, 2.21013059],\n [0.2, 2.22698993],\n [0.3, 2.22698993],\n [0.4, 2.22698993],\n [0.5, 2.22698993],\n [0.6, 2.22698993],\n [0.7, 2.21522144],\n [0.8, 2.22699297],\n [0.9, 2.22361681],\n [1.0, 2.22055266],\n [1.1, 2.22299154],\n [1.2, 2.21155482],\n [1.3, 2.22212628],\n [1.4, 2.22437687],\n [1.5, 2.22365865],\n [1.6, 2.21749658],\n [1.7, 2.22603657],\n [1.8, 2.22736],\n [1.9, 2.22471317],\n [2.0, 2.227242961]\n ]\n)\nx1, y1 = data[:, 0], data[:, 1]\n\nx2 = [np.min(x1), np.max(x1)]\ny2 = [0, 0]\n\nx, y = intersection(x1, y1, x2, y2)\n\nplt.plot(x1, y1, c=\"r\")\nplt.plot(x2, y2, c=\"g\")\nplt.plot(x, y, \"*k\")\nplt.show()\n\n\n", "I don't know if there is anything standard in matplotlib, but the below algorithm in a loop\n\nFind xi where y[xi] > 0. 
Call that location i.\nIf y[xi - 1] == 0, don't continue - its an x intercept add it to the array, else\nCompute the slope -> y2 - y1 / x2 -x1.\nPlugin an (x0, y0) pair to solve for the intercept.\nSolve for the x, instead of the y.\n\ncan compute those values.\nimport numpy as np\nvalues = np.loadtxt(r\"test.txt\", delimiter=\" \")\nx_intercepts = []\nfor i, v in enumerate(values):\n if i > 0:\n # if y > 0\n if v[1] > 0:\n previous_v = values[i - 1]\n # if the previous y == 0\n if previous_v[1] == 0:\n x_intercepts.append(previous_v[0])\n elif previous_v[1] < 0:\n # slope = y2 - y1 / x2 - x1\n # formula of linear equation -> y = mx + b\n # intercept -> b = y - m(x)\n slope = (v[1] - previous_v[1]) / (v[0] - previous_v[0])\n # if the slope is changing and not a constant\n if slope != 0:\n intercept = v[1] - (slope * v[0])\n # equation -> y = (slope * x) + intercept\n # equation -> y - intercept = (slope * x)\n # equation -> (y - intercept) / slope = x\n x_intercept = (0 - intercept) / slope\n x_intercepts.append(x_intercept)\n else:\n # if the if doesn't make it, we still need to check if the previous value was 0\n if values[i - 1][1] == 0:\n x_intercepts.append(values[i - 1][0])\nprint(x_intercepts)\n\nGive this a whirl and let me know how it works for you.\n\nExplanation\n\nThe for loop gives us the index (i) of the current value v.\nWe want to be able to use the current value as the pair (x2, y2) and the previous value as (x1, y1), so I check to make sure we aren't on the first iteration with if i > 0:\nIf y2 < 0 (the only else statement), check if y1 is == to 0 -> if so add it as an x intercept.\nIf y2 > 0, check if y1 is == to 0 -> if so add it as an x intercept.\nIf its not, check if it is less than 0.\nCompute the slope and check if the slope is not 0 -> we want to avoid a division by 0 error.\nCompute the y intercept.\nSolve for the x intercept and add it to the list of x intercepts.\n\n" ]
[ 5, 0 ]
[]
[]
[ "matplotlib", "python" ]
stackoverflow_0074616911_matplotlib_python.txt
Q: new line issue with f.write I'm a beginner python programmer so I'll cut right to the chase. I'm trying to use the f.write method, I want each thing I write to be in a new line so I did this:f.write('',message_variable_from_previous_input,'\n') However, after I ran this it threw back an error saying the following: Traceback (most recent call last): File "c:\Users\User1\OneDrive\Desktop\coding\folder_namr\file_name.py", line 5, in <module> f.write('',msg,'\n') TypeError: TextIOWrapper.write() takes exactly one argument (3 given) Does anybody know how to fix this? Any help is appreciated A: write method takes exactly one argument so you should write like this: f.write(f"{message_variable_from_previous_input}\n") or : f.write(str(message_variable_from_previous_input) + "\n")
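A small related sketch (illustrative only; the filename and message are placeholders): print() accepts a file argument and appends the newline itself, which avoids building the string by hand.

msg = "hello"
with open("output.txt", "a") as f:
    print(msg, file=f)       # writes "hello\n"
    f.write(f"{msg}\n")      # equivalent, using the pattern from the answer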
new line issue with f.write
I'm a beginner python programmer so I'll cut right to the chase. I'm trying to use the f.write method, I want each thing I write to be in a new line so I did this:f.write('',message_variable_from_previous_input,'\n') However, after I ran this it threw back an error saying the following: Traceback (most recent call last): File "c:\Users\User1\OneDrive\Desktop\coding\folder_namr\file_name.py", line 5, in <module> f.write('',msg,'\n') TypeError: TextIOWrapper.write() takes exactly one argument (3 given) Does anybody know how to fix this? Any help is appreciated
[ "write method takes exactly one argument\nso you should write like this:\nf.write(f\"{message_variable_from_previous_input}\\n\")\nor :\nf.write(str(message_variable_from_previous_input) + \"\\n\")\n" ]
[ 3 ]
[]
[]
[ "python", "txt" ]
stackoverflow_0074617845_python_txt.txt
Q: How are pytest fixure scopes intended to work? I want to use pytest fixtures to prepare an object I want to use across a set of tests. I follow the documentation and create a fixture in something_fixture.py with its scope set to session like this: import pytest @pytest.fixture(scope="session") def something(): return 'something' Then in test_something.py I try to use the fixture like this: def test_something(something): assert something == 'something' Which does not work, but if I import the fixture like this: from tests.something_fixture import something def test_something(something): assert something == 'something' the test passes... Is this import necessary? Because to me this is not clear according to the documentation. A: This session-scoped fixture should be defined in a conftest.py module, see conftest.py: sharing fixtures across multiple files in the docs. The conftest.py file serves as a means of providing fixtures for an entire directory. Fixtures defined in a conftest.py can be used by any test in that package without needing to import them (pytest will automatically discover them). By writing the fixture in something_fixture.py it was defined somewhere that went "unnoticed" because there was no reason for Python to import this module. The default test collection phase considers filenames matching these glob patterns: - test_*.py - *_test.py Since it's a session-scoped feature, define it instead in a conftest.py file, so it will be created at test collection time and available to all tests. You can remove the import statement from tests.something_fixture import something. In fact the "tests" subdirectory generally doesn't need to be importable at all.
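A sketch of the layout the answer describes (filenames are taken from the question; the directory structure itself is an assumption):

# tests/conftest.py  -- discovered automatically by pytest, no import needed
import pytest

@pytest.fixture(scope="session")
def something():
    return "something"

# tests/test_something.py
def test_something(something):
    assert something == "something"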
How are pytest fixture scopes intended to work?
I want to use pytest fixtures to prepare an object I want to use across a set of tests. I follow the documentation and create a fixture in something_fixture.py with its scope set to session like this: import pytest @pytest.fixture(scope="session") def something(): return 'something' Then in test_something.py I try to use the fixture like this: def test_something(something): assert something == 'something' Which does not work, but if I import the fixture like this: from tests.something_fixture import something def test_something(something): assert something == 'something' the test passes... Is this import necessary? Because to me this is not clear according to the documentation.
[ "This session-scoped fixture should be defined in a conftest.py module, see conftest.py: sharing fixtures across multiple files in the docs.\n\nThe conftest.py file serves as a means of providing fixtures for an entire directory. Fixtures defined in a conftest.py can be used by any test in that package without needing to import them (pytest will automatically discover them).\n\nBy writing the fixture in something_fixture.py it was defined somewhere that went \"unnoticed\" because there was no reason for Python to import this module. The default test collection phase considers filenames matching these glob patterns:\n- test_*.py\n- *_test.py\n\nSince it's a session-scoped feature, define it instead in a conftest.py file, so it will be created at test collection time and available to all tests.\nYou can remove the import statement from tests.something_fixture import something. In fact the \"tests\" subdirectory generally doesn't need to be importable at all.\n" ]
[ 1 ]
[]
[]
[ "fixtures", "pytest", "python" ]
stackoverflow_0074617822_fixtures_pytest_python.txt
Q: How to activate API Bigquery in GCP with python? I am developing a small application in gcp and I must activate the bigquery api to interact with it, I do it through the console, but, Is it possible to do it with the python google api client? I've been looking in the documentation but it's still not clear to me. A: To enable BigQuery API from console: Go to console.google.com From the menu, click on APIs & Services ->Enable APIs & Service Click on Enable APIs and Service Search for BigQuery API and click on enable To enable through gcloud sdk: gcloud services enable bigquery.googleapis.com You may need to enable other BigQuery related APIs as well: bigquery.googleapis.com BigQuery API bigqueryconnection.googleapis.com BigQuery Connection API bigquerydatatransfer.googleapis.com BigQuery Data Transfer API bigquerymigration.googleapis.com BigQuery Migration API bigquerystorage.googleapis.com BigQuery Storage API To enable a service through python REST refer to this documentation To interact/work with BigQuery using Python client library: Install the Python Client library for BigQuery: pip install google-cloud-bigquery Query stack overflow public dataset: from google.cloud import bigquery client = bigquery.Client() # Perform a query. QUERY = ("SELECT title FROM `bigquery-public-data.stackoverflow.posts_questions` LIMIT 10") query_job = client.query(QUERY) # API request rows = query_job.result() # Waits for query to finish for row in rows: print(row.title) Reference: BigQuery QuickStart using client libraries
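Since the question asks specifically about enabling the API from Python, here is a hedged sketch using the Service Usage API via google-api-python-client (the project id "my-project" is a placeholder, and application default credentials with permission to enable services are assumed):

from googleapiclient import discovery

service_usage = discovery.build("serviceusage", "v1")
operation = service_usage.services().enable(
    name="projects/my-project/services/bigquery.googleapis.com"
).execute()  # returns a long-running operation description
print(operation)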
How to activate API Bigquery in GCP with python?
I am developing a small application in gcp and I must activate the bigquery api to interact with it, I do it through the console, but, Is it possible to do it with the python google api client? I've been looking in the documentation but it's still not clear to me.
[ "To enable BigQuery API from console:\n\nGo to console.google.com\nFrom the menu, click on APIs & Services ->Enable APIs & Service\nClick on Enable APIs and Service\nSearch for BigQuery API and click on enable\n\nTo enable through gcloud sdk:\ngcloud services enable bigquery.googleapis.com\n\nYou may need to enable other BigQuery related APIs as well:\nbigquery.googleapis.com BigQuery API\nbigqueryconnection.googleapis.com BigQuery Connection API\nbigquerydatatransfer.googleapis.com BigQuery Data Transfer API\nbigquerymigration.googleapis.com BigQuery Migration API\nbigquerystorage.googleapis.com BigQuery Storage API\n\nTo enable a service through python REST refer this documentation\nTo interact/work with BigQuery using Python client library:\nInstall the Python Client library for BigQuery:\ninstall google-cloud-bigquery\n\nQuery stack overflow public dataset:\nfrom google.cloud import bigquery\n\nclient = bigquery.Client()\n\n# Perform a query.\nQUERY = (\"SELECT title FROM `bigquery-public-data.stackoverflow.posts_questions` LIMIT 10\")\nquery_job = client.query(QUERY) # API request\nrows = query_job.result() # Waits for query to finish\n\nfor row in rows:\n print(row.name)\n\nReference: BigQuery QuickStart using client libraries\n" ]
[ 0 ]
[]
[]
[ "google_api_client", "google_bigquery", "google_cloud_platform", "python" ]
stackoverflow_0074616287_google_api_client_google_bigquery_google_cloud_platform_python.txt
Q: How to override printing in python multiprocessing I have a class AlternativePrinter that overrides the python print function (in this example it appends "[write to terminal]" before outputs) but for some reason anything printed from within a multiprocessing process doesn't go through this print function. How can I make all printing including from processes go through my new printer? import multiprocessing as mp import sys import time class AlternativePrinter: def __init__(self): self.old_stdout = sys.stdout sys.stdout = self # executed when the user does a `print` def write(self, text): self.old_stdout.write("\n[write to terminal]"+text) def __enter__(self): return self def __exit__(self, type, value, traceback): sys.stdout = self.old_stdout def flush(self): self.old_stdout.flush() def wait_and_print(seconds): time.sleep(seconds) print("I waited for", seconds, "seconds") return seconds if __name__ == "__main__": with AlternativePrinter(): print('Initiating simultaneous processes:') pool = mp.Pool(4) input = [(4,),(2,)] results = pool.starmap(wait_and_print, input) pool.close() pool.join() for result in results: print(result) output: [write to terminal]Initiating simultaneous processes: [write to terminal] I waited for 2 seconds I waited for 4 seconds [write to terminal]4 [write to terminal] [write to terminal]2 [write to terminal] end A: Add with AlternativePrinter(): in wait_and_print. The lines below if __name__ == "__main__": are not executed in the child process and hence AlternativePrinter() is never used.
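One possible way to make the redirect apply inside the workers as well (a sketch only, assuming the AlternativePrinter class from the question is defined in the same module) is to install it once per worker with Pool's initializer:

import multiprocessing as mp
import time

def init_worker():
    AlternativePrinter()  # redirects sys.stdout in this worker process

def wait_and_print(seconds):
    time.sleep(seconds)
    print("I waited for", seconds, "seconds")
    return seconds

if __name__ == "__main__":
    with AlternativePrinter():
        with mp.Pool(2, initializer=init_worker) as pool:
            results = pool.starmap(wait_and_print, [(2,), (1,)])
        print(results)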
How to override printing in python multiprocessing
I have a class AlternativePrinter that overrides the python print function (in this example it appends "[write to terminal]" before outputs) but for some reason anything printed from within a multiprocessing process doesn't go through this print function. How can I make all printing including from processes go through my new printer? import multiprocessing as mp import sys import time class AlternativePrinter: def __init__(self): self.old_stdout = sys.stdout sys.stdout = self # executed when the user does a `print` def write(self, text): self.old_stdout.write("\n[write to terminal]"+text) def __enter__(self): return self def __exit__(self, type, value, traceback): sys.stdout = self.old_stdout def flush(self): self.old_stdout.flush() def wait_and_print(seconds): time.sleep(seconds) print("I waited for", seconds, "seconds") return seconds if __name__ == "__main__": with AlternativePrinter(): print('Initiating simultaneous processes:') pool = mp.Pool(4) input = [(4,),(2,)] results = pool.starmap(wait_and_print, input) pool.close() pool.join() for result in results: print(result) output: [write to terminal]Initiating simultaneous processes: [write to terminal] I waited for 2 seconds I waited for 4 seconds [write to terminal]4 [write to terminal] [write to terminal]2 [write to terminal] end
[ "Add with AlternativePrinter(): in wait_and_print. The lines below if __name__ == \"__main__\": are not executed in the child process and hence AlternativePrinter() is never used.\n" ]
[ 1 ]
[]
[]
[ "io", "output", "python", "python_multiprocessing", "python_multithreading" ]
stackoverflow_0074617341_io_output_python_python_multiprocessing_python_multithreading.txt
Q: Functions not being called correctly This is a code that prompts the user for the amount of months they want to budget analyze, prompts for the budget the user has, prompts for how much the user spent that month, and then calculates if the user is over or under their budget. When code is run, it prompts user one twice, and then creates errors: Traceback (most recent call last): File "C:\Users\\Desktop\", line 53, in <module> main() File "C:\Users\\Desktop\", line 51, in main AnalyzeBudget(months) File "C:\Users\\Desktop\", line 46, in AnalyzeBudget MoBudget,MoSpent = GetMonthBudgetandSpent(month) File "C:\Users\\Desktop\", line 40, in GetMonthBudgetandSpent return int(Mobudget, MoSpent) TypeError: 'str' object cannot be interpreted as an integer any help is appreciated. def DescribeProgram(): print("""\ This program uses a for loop to monitor your budget. The program will prompt you to enter your budget, and amount spent for a certain month and calculate if your were under or over budget. You will have the option of choosing how many months you would like to monitor.\n""") def GetMonths(): Months = input("Enter the number of months you want to analyze") return int(Months) def GetMonthBudgetandSpent(month): Mobudget = input("Enter the budget you have for the month") MoSpent = input("Enter the amount you spent this month") return int(Mobudget, MoSpent) def AnalyzeBudget(months): for month in range(1,months+1): print("\nMonth",month,":") print("=======") MoBudget,MoSpent = GetMonthBudgetandSpent(month) def main(): DescribeProgram() months = GetMonths() AnalyzeBudget(months) main() A: You can change the return type of GetMonths() : def GetMonths(): Months = input("Enter the number of months you want to analyze") return int(Months) Or You can cast the type to int in range() of AnalyzeBudget() def AnalyzeBudget(months): for month in range(1,int(months)+1): print("\nMonth",month,":") print("=======") MoBudget,MoSpent = GetMonthBudgetandSpent(month) A: As @quamrana said you need to def GetMonths(): Months = input("Enter the number of months you want to analyze") return int(Months) The error says TypeError: can only concatenate str (not "int") to str because when you ask for Months in Months = input("Enter the number of months you want to analyze") Python read the input as a string, then in months+1 you are adding a string with an int so you get the error. I recommend to factorize your code in an class where each function is a method, I can help you with this if you need to. Edit(This is not related to the original post). You have an error in GetMonthBudgetandSpent, you can do the following. def GetMonthBudgetandSpent(month): Mobudget = input("Enter the budget you have for the month") MoSpent = input("Enter the amount you spent this month") MoSpent = int(MoSpent) Mobudget = int(Mobudget) return Mobudget, MoSpent As @Pranav Hosangadi said int function int docs takes in the first input of the thing you want to convert, so you can only pass one string. For the if statement you asked for in the comments. def AnalyzeBudget(months): for month in range(1,months+1): print("\nMonth",month,":") print("=======") MoBudget,MoSpent = GetMonthBudgetandSpent(month) if MoBudget >= MoSpent: print("Safe zone") else : print("Danger zone") Here if MoBudget >= MoSpent checks if the budget is more than spent in that month, if so then print "safe zone", else (MoSpent > Mobudget) then print "Danger zone"
Functions not being called correctly
This is a code that prompts the user for the amount of months they want to budget analyze, prompts for the budget the user has, prompts for how much the user spent that month, and then calculates if the user is over or under their budget. When code is run, it prompts user one twice, and then creates errors: Traceback (most recent call last): File "C:\Users\\Desktop\", line 53, in <module> main() File "C:\Users\\Desktop\", line 51, in main AnalyzeBudget(months) File "C:\Users\\Desktop\", line 46, in AnalyzeBudget MoBudget,MoSpent = GetMonthBudgetandSpent(month) File "C:\Users\\Desktop\", line 40, in GetMonthBudgetandSpent return int(Mobudget, MoSpent) TypeError: 'str' object cannot be interpreted as an integer any help is appreciated. def DescribeProgram(): print("""\ This program uses a for loop to monitor your budget. The program will prompt you to enter your budget, and amount spent for a certain month and calculate if your were under or over budget. You will have the option of choosing how many months you would like to monitor.\n""") def GetMonths(): Months = input("Enter the number of months you want to analyze") return int(Months) def GetMonthBudgetandSpent(month): Mobudget = input("Enter the budget you have for the month") MoSpent = input("Enter the amount you spent this month") return int(Mobudget, MoSpent) def AnalyzeBudget(months): for month in range(1,months+1): print("\nMonth",month,":") print("=======") MoBudget,MoSpent = GetMonthBudgetandSpent(month) def main(): DescribeProgram() months = GetMonths() AnalyzeBudget(months) main()
[ "You can change the return type of GetMonths() :\ndef GetMonths():\n Months = input(\"Enter the number of months you want to analyze\")\n return int(Months)\n\n\nOr You can cast the type to int in range() of AnalyzeBudget()\ndef AnalyzeBudget(months):\n for month in range(1,int(months)+1):\n print(\"\\nMonth\",month,\":\")\n print(\"=======\")\n MoBudget,MoSpent = GetMonthBudgetandSpent(month)\n\n", "As @quamrana said you need to\ndef GetMonths():\n Months = input(\"Enter the number of months you want to analyze\")\n return int(Months)\n\nThe error says TypeError: can only concatenate str (not \"int\") to str because when you ask for Months in\nMonths = input(\"Enter the number of months you want to analyze\")\n\nPython read the input as a string, then in months+1 you are adding a string with an int so you get the error.\nI recommend to factorize your code in an class where each function is a method, I can help you with this if you need to.\nEdit(This is not related to the original post).\nYou have an error in GetMonthBudgetandSpent, you can do the following.\ndef GetMonthBudgetandSpent(month):\n Mobudget = input(\"Enter the budget you have for the month\")\n MoSpent = input(\"Enter the amount you spent this month\")\n MoSpent = int(MoSpent)\n Mobudget = int(Mobudget)\n return Mobudget, MoSpent\n\nAs @Pranav Hosangadi said int function int docs takes in the first input of the thing you want to convert, so you can only pass one string.\nFor the if statement you asked for in the comments.\ndef AnalyzeBudget(months):\nfor month in range(1,months+1):\n print(\"\\nMonth\",month,\":\")\n print(\"=======\")\n MoBudget,MoSpent = GetMonthBudgetandSpent(month)\n if MoBudget >= MoSpent:\n print(\"Safe zone\")\n else :\n print(\"Danger zone\") \n\nHere if MoBudget >= MoSpent checks if the budget is more than spent in that month, if so then print \"safe zone\", else (MoSpent > Mobudget) then print \"Danger zone\"\n" ]
[ 0, 0 ]
[]
[]
[ "function", "loops", "python" ]
stackoverflow_0074617819_function_loops_python.txt
Q: How to get the line index in a row from CSV imported into Python? Please see this image Basically, I want to be able to get the vertical index position of the maximum number from an imported CSV. I have been able to grab the maximum number from the CSV which is 188 and represented by 'maxTemp'. I need the vertical position of the number from the CSV, I know how you get what column it is in but how do I get the vertical index position of it? I hope this makes sense. `with open('CSV_load.csv') as csvfile: readCSV = csv.reader(csvfile, delimiter=',') names = [] surnames = [] marks = [] maxTemp=[] for row in readCSV: maxTemp.append(int(row[2])) name = row[0] surname = row[1] mark = row[2] print(name, surname, mark) print("\n") print("The highest mark is:", (max(maxTemp))) print("\n")` I cannot figure out how to get the vertical index position from the CSV. A: You can use enumerate() + max() to get row with highest value. For example: import csv with open("CSV_load.csv", "r") as csvfile: readCSV = csv.reader(csvfile, delimiter=",") line, row_with_highest_temp = max( enumerate(readCSV, 1), key=lambda row: int(row[1][2]) ) print("Highest Temp is", row_with_highest_temp[2], "on line", line)
How to get the line index in a row from CSV imported into Python?
Please see this image Basically, I want to be able to get the vertical index position of the maximum number from an imported CSV. I have been able to grab the maximum number from the CSV which is 188 and represented by 'maxTemp'. I need the vertical position of the number from the CSV, I know how you get what column it is in but how do I get the vertical index position of it? I hope this makes sense. `with open('CSV_load.csv') as csvfile: readCSV = csv.reader(csvfile, delimiter=',') names = [] surnames = [] marks = [] maxTemp=[] for row in readCSV: maxTemp.append(int(row[2])) name = row[0] surname = row[1] mark = row[2] print(name, surname, mark) print("\n") print("The highest mark is:", (max(maxTemp))) print("\n")` I cannot figure out how to get the vertical index position from the CSV.
[ "You can use enumerate() + max() to get row with highest value. For example:\nimport csv\n\nwith open(\"CSV_load.csv\", \"r\") as csvfile:\n readCSV = csv.reader(csvfile, delimiter=\",\")\n line, row_with_highest_temp = max(\n enumerate(readCSV, 1), key=lambda row: int(row[1][2])\n )\n print(\"Highest Temp is\", row_with_highest_temp[2], \"on line\", line)\n\n" ]
[ 0 ]
[]
[]
[ "csv", "python" ]
stackoverflow_0074617264_csv_python.txt
Q: Azure flask deploy If you are the application administrator, you can access the diagnostic resources When I enter to the domain I got this :( Application Error If you are the application administrator, you can access the diagnostic resources. This is when I enter to diagnostic: Distributing your web app across multiple instances The webapp is currently configured to run on only one instance. Since you have only one instance you can expect downtime because when the App Service platform is upgraded, the instance on which your web app is running will be upgraded. Therefore, your web app process will be restarted and will experience downtime. I tried to make autoscaling but still the problem app = Flask(__name__) I put this if __name__ == "__main__": app.run() and this if __name__ == "__main__": app.run(debug=True ,port=8080,use_reloader=False) A: For a big applications it's better to work on virtual machine not on ready services
Azure flask deploy If you are the application administrator, you can access the diagnostic resources
When I enter to the domain I got this :( Application Error If you are the application administrator, you can access the diagnostic resources. This is when I enter to diagnostic: Distributing your web app across multiple instances The webapp is currently configured to run on only one instance. Since you have only one instance you can expect downtime because when the App Service platform is upgraded, the instance on which your web app is running will be upgraded. Therefore, your web app process will be restarted and will experience downtime. I tried to make autoscaling but still the problem app = Flask(__name__) I put this if __name__ == "__main__": app.run() and this if __name__ == "__main__": app.run(debug=True ,port=8080,use_reloader=False)
[ "For a big applications it's better to work on virtual machine not on ready services\n" ]
[ 0 ]
[]
[]
[ "azure", "azure_devops", "flask", "python" ]
stackoverflow_0073776416_azure_azure_devops_flask_python.txt
Q: Need help implementing an example in Google Colab I'm trying to run this example in Colab https://github.com/tensorflow/examples/tree/master/lite/examples/image_segmentation/raspberry_pi but I can't make it work because the example uses a webcam. Does anyone have a different version of this example which uses an image, video or GIF? Or can anyone help in making one? Thanks A: You can update the example yourself. Remove the lines that try to open the camera, and update this line, which tries to capture an image from the camera video, to use your image instead https://github.com/tensorflow/examples/blob/master/lite/examples/image_segmentation/raspberry_pi/segment.py#L76
Need help implementing an example in Google Colab
I'm trying to run this example in Colab https://github.com/tensorflow/examples/tree/master/lite/examples/image_segmentation/raspberry_pi but I can't make it work because the example uses a webcam. Does anyone have a different version of this example which uses an image, video or GIF? Or can anyone help in making one? Thanks
[ "You can update the example yourself.\nRemove the lines that tries to open the camera, and update this line which tries to capture image from the camera video with your image instead\nhttps://github.com/tensorflow/examples/blob/master/lite/examples/image_segmentation/raspberry_pi/segment.py#L76\n" ]
[ 0 ]
[]
[]
[ "google_colaboratory", "python", "tensorflow_lite" ]
stackoverflow_0074616333_google_colaboratory_python_tensorflow_lite.txt
Q: VS Code cursor bug in terminal Cursor repeating and remaining in the Integrated Terminal in VS Code I encountered this bug in my terminal while doing Python tutorial so downloaded and reinstalled the same version (latest version of VS Code) but the problem persists. I looked about for some answers but only found this tutorial which is not related. Anyway, I reinstalled the software only to find the bug was still present. The code runs but the cursor is an obstruction. From time to time I may type in the wrong execution so it's a bit of a bother. A: Turned off GPU acceleration in the Terminal of VS Code. That has seemed to resolve the matter; No longer cursor trails. Settings > Type 'Render' > Go to Terminal › Integrated: Gpu Acceleration. Setting controls whether the terminal will leverage the GPU to do its rendering. Switch 'off' in dropdown menu
VS Code cursor bug in terminal
Cursor repeating and remaining in the Integrated Terminal in VS Code I encountered this bug in my terminal while doing Python tutorial so downloaded and reinstalled the same version (latest version of VS Code) but the problem persists. I looked about for some answers but only found this tutorial which is not related. Anyway, I reinstalled the software only to find the bug was still present. The code runs but the cursor is an obstruction. From time to time I may type in the wrong execution so it's a bit of a bother.
[ "Turned off GPU acceleration in the Terminal of VS Code. That has seemed to resolve the matter; No longer cursor trails.\nSettings > Type 'Render' > Go to\nTerminal › Integrated: Gpu Acceleration.\nSetting controls whether the terminal will leverage the GPU to do its rendering.\nSwitch 'off' in dropdown menu\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x", "terminal", "visual_studio_code" ]
stackoverflow_0074607032_python_python_3.x_terminal_visual_studio_code.txt
Q: How do I make Mypy recognize non-nullable ORM attributes? Mypy infers ORM non-nullable instance attributes as optionals. Filename: test.py from sqlalchemy.orm import decl_api, registry from sqlalchemy import BigInteger, Column, String mapper_registry = registry() class Base(metaclass=decl_api.DeclarativeMeta): __abstract__ = True registry = mapper_registry metadata = mapper_registry.metadata __init__ = mapper_registry.constructor class Person(Base): __tablename__ = "persons" id = Column(BigInteger, primary_key=True, autoincrement=True) name = Column(String(40), nullable=False) def main(person: Person): person_id = person.id person_name = person.name reveal_locals() Running mypy test.py yields: test.py:27: note: Revealed local types are: test.py:27: note: person: test.Person test.py:27: note: person_id: Union[builtins.int, None] test.py:27: note: person_name: Union[builtins.str, None] As far as my understanding goes, person_id and person_name should have been int and str respectively since they are set as non-nullable. What am I missing here? Relevant libraries: SQLAlchemy 1.4.25 sqlalchemy2-stubs 0.0.2a15 mypy 0.910 mypy-extensions 0.4.3 A: I had the same question when evaluating a nullable=False column with mypy. One of my teammates found the answer in the SqlAlchemy docs: https://docs.sqlalchemy.org/en/14/orm/extensions/mypy.html#introspection-of-columns-based-on-typeengine The types are by default always considered to be Optional, even for the primary key and non-nullable column. The reason is because while the database columns “id” and “name” can’t be NULL, the Python attributes id and name most certainly can be None without an explicit constructor: There are also mitigations they list in that article with declaring the columns to be mapped types explicitly.
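A sketch of the mitigation mentioned in the answer (it assumes the sqlalchemy.ext.mypy plugin / sqlalchemy2-stubs setup from the question is in place; the explicit annotations tell the plugin the attributes are not Optional):

from sqlalchemy import BigInteger, Column, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Person(Base):
    __tablename__ = "persons"

    id: int = Column(BigInteger, primary_key=True, autoincrement=True)
    name: str = Column(String(40), nullable=False)

def main(person: Person) -> None:
    person_id: int = person.id      # no longer Union[int, None] for mypy
    person_name: str = person.name  # no longer Union[str, None]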
How do I make Mypy recognize non-nullable ORM attributes?
Mypy infers ORM non-nullable instance attributes as optionals. Filename: test.py from sqlalchemy.orm import decl_api, registry from sqlalchemy import BigInteger, Column, String mapper_registry = registry() class Base(metaclass=decl_api.DeclarativeMeta): __abstract__ = True registry = mapper_registry metadata = mapper_registry.metadata __init__ = mapper_registry.constructor class Person(Base): __tablename__ = "persons" id = Column(BigInteger, primary_key=True, autoincrement=True) name = Column(String(40), nullable=False) def main(person: Person): person_id = person.id person_name = person.name reveal_locals() Running mypy test.py yields: test.py:27: note: Revealed local types are: test.py:27: note: person: test.Person test.py:27: note: person_id: Union[builtins.int, None] test.py:27: note: person_name: Union[builtins.str, None] As far as my understanding goes, person_id and person_name should have been int and str respectively since they are set as non-nullable. What am I missing here? Relevant libraries: SQLAlchemy 1.4.25 sqlalchemy2-stubs 0.0.2a15 mypy 0.910 mypy-extensions 0.4.3
[ "I had the same question when evaluating a nullable=False column with mypy. One of my teammates found the answer in the SqlAlchemy docs:\nhttps://docs.sqlalchemy.org/en/14/orm/extensions/mypy.html#introspection-of-columns-based-on-typeengine\n\nThe types are by default always considered to be Optional, even for\nthe primary key and non-nullable column. The reason is because while\nthe database columns “id” and “name” can’t be NULL, the Python\nattributes id and name most certainly can be None without an explicit\nconstructor:\n\nThere are also mitigations they list in that article with declaring the columns to be mapped types explicitly.\n" ]
[ 1 ]
[]
[]
[ "mypy", "python", "sqlalchemy" ]
stackoverflow_0071674202_mypy_python_sqlalchemy.txt
Q: Scrapy encoding nested dict params I want to send a request which has params in nested dict. params = { 'apiKey': 'XXXXXXXXXXXXXXXXXXX', 'facetInclusion': 'All', 'filter': '{"facetFilter": {"andClauses": [{"value": "WEBCAT_1_2_1", "type": "CategoryCode", "negate": false}], "orClauses": []}, "numericalFilter": [], "filteringFacetFilter": {"andClauses": []}}', 'pageNumber': 0, 'pageSize': 48, 'productRepresentation': 'ExplicitRepresentation'....} I want to send it via Scrapy request but I get 422 and error code that something is wrong with params yield scrapy.Request(url=self.url, cb_kwargs=params, callback=self.parse, headers=self.headers) However when I try to send the same request with requests it goes okay response = requests.get(url=self.url, headers=self.headers, params=params) I tried all different forms of encoding and dumping the url + params but I always get 422 form Scrapy. Any idea where can be the problem? Many thanks A: cb_kwargs is a dictionary that will be passed to the request’s callback. body is the request's body. import json yield scrapy.Request(url=self.url, body=json.dumps(params), callback=self.parse, headers=self.headers) EDIT: I misunderstood your question. This is what you want: import urllib.parse url_params = '?' + urllib.parse.urlencode(params) yield scrapy.Request(url=self.url+url_params, callback=self.parse, headers=self.headers) Where self.url is something like www.url.com/products/search.
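For completeness, a minimal spider-shaped sketch of the corrected approach (the URL and the trimmed params are placeholders): Scrapy's Request has no params keyword like requests.get, so the query string has to be encoded into the URL.

import urllib.parse
import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"

    def start_requests(self):
        params = {"apiKey": "XXXX", "pageNumber": 0, "pageSize": 48}
        url = "https://example.com/search?" + urllib.parse.urlencode(params)
        yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        self.logger.info("got status %s", response.status)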
Scrapy encoding nested dict params
I want to send a request which has params in nested dict. params = { 'apiKey': 'XXXXXXXXXXXXXXXXXXX', 'facetInclusion': 'All', 'filter': '{"facetFilter": {"andClauses": [{"value": "WEBCAT_1_2_1", "type": "CategoryCode", "negate": false}], "orClauses": []}, "numericalFilter": [], "filteringFacetFilter": {"andClauses": []}}', 'pageNumber': 0, 'pageSize': 48, 'productRepresentation': 'ExplicitRepresentation'....} I want to send it via Scrapy request but I get 422 and error code that something is wrong with params yield scrapy.Request(url=self.url, cb_kwargs=params, callback=self.parse, headers=self.headers) However when I try to send the same request with requests it goes okay response = requests.get(url=self.url, headers=self.headers, params=params) I tried all different forms of encoding and dumping the url + params but I always get 422 form Scrapy. Any idea where can be the problem? Many thanks
[ "cb_kwargs is a dictionary that will be passed the request’s callback.\nbody is the request's body.\nimport json\n\nyield scrapy.Request(url=self.url, body=json.dumps(params), callback=self.parse, headers=self.headers)\n\nEDIT:\nI misunderstood your question. This is what you want:\nimport urllib.parse\nurl_params = '?' + urllib.parse.urlencode(params)\n\nyield scrapy.Request(url=self.url+urlparams, callback=self.parse, headers=self.headers)\n\nWhere url is something like www.url.com/products/search.\n" ]
[ 1 ]
[]
[]
[ "python", "python_requests", "scrapy", "web_scraping" ]
stackoverflow_0074617259_python_python_requests_scrapy_web_scraping.txt
Q: can't get discord to recognize my role id? import discord import os from keep_alive import keep_alive import random import glob import os.path import asyncio client = discord.Client() @client.event async def on_ready(): print('We have logged in as {0.user}'.format(client)) @client.event async def on_message(ctx): if ctx.content.startswith( "/roles") and ctx.author.id == 251419567954460673: await roles(ctx) async def roles(ctx): channel = client.get_channel(1047136010456285214) await channel.purge() text = "everyone choose a role\nKomt = ☑️\nSlapen = ️" message = await channel.send(text) await message.add_reaction("☑️") await message.add_reaction("️") def check(r: discord.Reaction, u: discord.User): return u.id == ctx.author.id and r.message.channel.id == ctx.channel.id and str( r.emoji) in ["☑️", "️"] try: reaction, user = await client.wait_for('reaction_add', check=check, timeout=None) except asyncio.TimeoutError: await ctx.send( f"**{ctx.author}**, you didnt react with a ☑️ or ️ in time.") return else: if str(reaction.emoji) == "☑️": print("komt role selected") role_id = 1047136385036980224 role = discord.utils.get(ctx.guild.roles, id=role_id) await user.add_roles(role) elif str(reaction.emoji) == "️": print("slapen role selected") role_id = 1047136293173342258 role = discord.utils.get(ctx.guild.roles, id=role_id) await user.add_roles(role) my_secret = os.environ['DISCORDKEY'] client.run(my_secret) keep_alive() So I created this code (mostly of the help of looking up and such) and I am trying to create a bot that adds roles to a user that chooses on of the reaction I got almost everything working fine. The only problem that still remains is that discord doesn't recognize these roles as actual role id's which they are since I copied them from my server above is all the code I have. A: For other people: The problem might be your role id. Try grabbing the role id from the roles page in server settings instead of from a message. What the page looks like in the unlikely case you don't know Role ids are usually 18 digits long but newly created roles may be 19. The error in this case was that he lacked intents in client = discord.Client() intents = discord.Intents().all() client = commands.Bot(command_prefix=',', intents=intents) Intents must both be enabled in your code and in the developer portal.
can't get discord to recognize my role id?
import discord import os from keep_alive import keep_alive import random import glob import os.path import asyncio client = discord.Client() @client.event async def on_ready(): print('We have logged in as {0.user}'.format(client)) @client.event async def on_message(ctx): if ctx.content.startswith( "/roles") and ctx.author.id == 251419567954460673: await roles(ctx) async def roles(ctx): channel = client.get_channel(1047136010456285214) await channel.purge() text = "everyone choose a role\nKomt = ☑️\nSlapen = ️" message = await channel.send(text) await message.add_reaction("☑️") await message.add_reaction("️") def check(r: discord.Reaction, u: discord.User): return u.id == ctx.author.id and r.message.channel.id == ctx.channel.id and str( r.emoji) in ["☑️", "️"] try: reaction, user = await client.wait_for('reaction_add', check=check, timeout=None) except asyncio.TimeoutError: await ctx.send( f"**{ctx.author}**, you didnt react with a ☑️ or ️ in time.") return else: if str(reaction.emoji) == "☑️": print("komt role selected") role_id = 1047136385036980224 role = discord.utils.get(ctx.guild.roles, id=role_id) await user.add_roles(role) elif str(reaction.emoji) == "️": print("slapen role selected") role_id = 1047136293173342258 role = discord.utils.get(ctx.guild.roles, id=role_id) await user.add_roles(role) my_secret = os.environ['DISCORDKEY'] client.run(my_secret) keep_alive() So I created this code (mostly of the help of looking up and such) and I am trying to create a bot that adds roles to a user that chooses on of the reaction I got almost everything working fine. The only problem that still remains is that discord doesn't recognize these roles as actual role id's which they are since I copied them from my server above is all the code I have.
[ "For other people: The problem might be your role id. Try grabbing the role id from the roles page in server settings instead of from a message. What the page looks like in the unlikely case you don't know\nRole ids are usually 18 digits long but newly created roles may be 19.\nThe error in this case was that he lacked intents in client = discord.Client()\nintents = discord.Intents().all()\nclient = commands.Bot(command_prefix=',', intents=intents)\n\nIntents must both be enabled in your code and in the developer portal.\n" ]
[ 0 ]
[]
[]
[ "discord.py", "python", "roles" ]
stackoverflow_0074617829_discord.py_python_roles.txt
Q: Generating points within a Menger Sponge (fractal shape) I am trying to generate a lattice of points in the shape of a Menger sponge or Sierpinski sponge. https://en.wikipedia.org/wiki/Menger_sponge This link details how the shape is mathematically constructed. I wanted to find a way where I could make this shape using recursion to remove the necessary cubes. I looked online but I could only find code which generated 3d renderings of the shape and not a lattice of points. It is worth mentioning that I am not familiar with OO programming which seemed to be the general method the examples I found used. I then tried to make a 2D version to see if I could implement it, but the only version I got to work was by manually subtracting the areas needed. This is what I've gotten to work, only removing the first square from the centre: ` import numpy as np import matplotlib.pyplot as plt size = 12 x = [] y = [] for index_x in np.arange(size): for index_y in np.arange(size): x = np.append(x, index_x) y = np.append(y, index_y) # step 1: remove central box x_box = [] y_box = [] for index_1 in np.arange(144): if (x[index_1] < size/3 or x[index_1] >= 2/3 * size or y[index_1] < size/3 or y[index_1] >= 2/3 * size): x_box = np.append(x_box, x[index_1]) y_box = np.append(y_box, y[index_1]) # step 2: remove central square in each surrounding square # Do the same steps as above but for the other smaller squares fig = plt.figure() ax = fig.add_subplot(111) ax.scatter(x_box, y_box) ax.set_title('Menger Sponge') ax.set_xlabel('x') ax.set_ylabel('y') plt.show() ` This is what my code produces. Is there an easier / better way of implementing this? A: You need to add a recursive element to your code. I would also suggest thinking in terms of 2D (and eventually 3D) matricies instead of 1D arrays and explore numpy's abilities in depth: import numpy as np def menger(matrix, size): quotient, remainder = divmod(size, 3) if remainder == 0: for x in np.arange(0, size, quotient): for y in np.arange(0, size, quotient): view = matrix[x:x + quotient, y:y + quotient] if (x // quotient) % 3 == 1 and (y // quotient) % 3 == 1: view *= 0 menger(view, quotient) if __name__ == "__main__": import matplotlib.pyplot as plt SIZE = 27 matrix = np.ones((SIZE, SIZE)) menger(matrix, SIZE) plt.matshow(matrix) plt.colorbar() plt.show()
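The same recursive idea extends to 3D. A sketch, assuming the usual Menger rule (drop every sub-cube whose position is the middle third on at least two of the three axes), which ends with the lattice of surviving (x, y, z) points the question asks for:

import numpy as np

def menger_3d(matrix, size):
    quotient, remainder = divmod(size, 3)
    if remainder != 0 or quotient == 0:
        return
    for x in np.arange(0, size, quotient):
        for y in np.arange(0, size, quotient):
            for z in np.arange(0, size, quotient):
                view = matrix[x:x + quotient, y:y + quotient, z:z + quotient]
                # how many of the three coordinates fall in the middle third?
                middles = sum((c // quotient) % 3 == 1 for c in (x, y, z))
                if middles >= 2:
                    view *= 0            # hollow out this sub-cube entirely
                else:
                    menger_3d(view, quotient)

if __name__ == "__main__":
    SIZE = 27
    lattice = np.ones((SIZE, SIZE, SIZE))
    menger_3d(lattice, SIZE)
    points = np.argwhere(lattice == 1)   # coordinates of the remaining voxels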
Generating points within a Menger Sponge (fractal shape)
I am trying to generate a lattice of points in the shape of a Menger sponge or Sierpinski sponge. https://en.wikipedia.org/wiki/Menger_sponge This link details how the shape is mathematically constructed. I wanted to find a way where I could make this shape using recursion to remove the necessary cubes. I looked online but I could only find code which generated 3d renderings of the shape and not a lattice of points. It is worth mentioning that I am not familiar with OO programming which seemed to be the general method the examples I found used. I then tried to make a 2D version to see if I could implement it, but the only version I got to work was by manually subtracting the areas needed. This is what I've gotten to work, only removing the first square from the centre: ` import numpy as np import matplotlib.pyplot as plt size = 12 x = [] y = [] for index_x in np.arange(size): for index_y in np.arange(size): x = np.append(x, index_x) y = np.append(y, index_y) # step 1: remove central box x_box = [] y_box = [] for index_1 in np.arange(144): if (x[index_1] < size/3 or x[index_1] >= 2/3 * size or y[index_1] < size/3 or y[index_1] >= 2/3 * size): x_box = np.append(x_box, x[index_1]) y_box = np.append(y_box, y[index_1]) # step 2: remove central square in each surrounding square # Do the same steps as above but for the other smaller squares fig = plt.figure() ax = fig.add_subplot(111) ax.scatter(x_box, y_box) ax.set_title('Menger Sponge') ax.set_xlabel('x') ax.set_ylabel('y') plt.show() ` This is what my code produces. Is there an easier / better way of implementing this?
[ "You need to add a recursive element to your code. I would also suggest thinking in terms of 2D (and eventually 3D) matricies instead of 1D arrays and explore numpy's abilities in depth:\nimport numpy as np\n\ndef menger(matrix, size):\n quotient, remainder = divmod(size, 3)\n\n if remainder == 0:\n for x in np.arange(0, size, quotient):\n for y in np.arange(0, size, quotient):\n view = matrix[x:x + quotient, y:y + quotient]\n\n if (x // quotient) % 3 == 1 and (y // quotient) % 3 == 1:\n view *= 0\n\n menger(view, quotient)\n\nif __name__ == \"__main__\":\n import matplotlib.pyplot as plt\n\n SIZE = 27\n\n matrix = np.ones((SIZE, SIZE))\n\n menger(matrix, SIZE)\n\n plt.matshow(matrix)\n plt.colorbar()\n plt.show()\n\n\n" ]
[ 1 ]
[]
[]
[ "fractals", "python", "recursion" ]
stackoverflow_0074616982_fractals_python_recursion.txt
Q: I try to map values to a column in pandas but i get nan values instead This is my code: mapping = {"ISTJ":1, "ISTP":2, "ISFJ":3, "ISFP":4, "INFP":6, "INTJ":7, "INTP":8, "ESTP":9, "ESTJ":10, "ESFP":11, "ESFJ":12, "ENFP":13, "ENFJ":14, "ENTP":15, "ENTJ":16, "NaN": 17} q20 = castaway_details["personality_type"] q20["personality_type"] = q20["personality_type"].map(mapping) the data frame is like this personality_type 0 INTP 1 INFP 2 INTJ 3 ISTJ 4 NAN 5 ESFP I want the output like this: personality_type 0 8 1 6 2 7 3 1 4 17 5 11 however, what I get from my code is all NANs A: Try to pandas.Series.str.strip before the pandas.Series.map : q20["personality_type"]= q20["personality_type"].str.strip().map(mapping) # Output : print(q20) personality_type 0 8 1 6 2 7 3 1 4 17 5 11 A: The key NaN in your mapping dictionary and NaN value in your data frame do not match. I have modified the one in your dictionary. df.apply(lambda x: x.fillna('NAN').map(mapping)) personality_type 0 8 1 6 2 7 3 1 4 17 5 11
I try to map values to a column in pandas but i get nan values instead
This is my code: mapping = {"ISTJ":1, "ISTP":2, "ISFJ":3, "ISFP":4, "INFP":6, "INTJ":7, "INTP":8, "ESTP":9, "ESTJ":10, "ESFP":11, "ESFJ":12, "ENFP":13, "ENFJ":14, "ENTP":15, "ENTJ":16, "NaN": 17} q20 = castaway_details["personality_type"] q20["personality_type"] = q20["personality_type"].map(mapping) the data frame is like this personality_type 0 INTP 1 INFP 2 INTJ 3 ISTJ 4 NAN 5 ESFP I want the output like this: personality_type 0 8 1 6 2 7 3 1 4 17 5 11 however, what I get from my code is all NANs
[ "Try to pandas.Series.str.strip before the pandas.Series.map :\nq20[\"personality_type\"]= q20[\"personality_type\"].str.strip().map(mapping)\n\n# Output :\nprint(q20)\n\n personality_type\n0 8\n1 6\n2 7\n3 1\n4 17\n5 11\n\n", "The key NaN in your mapping dictionary and NaN value in your data frame do not match. I have modified the one in your dictionary.\ndf.apply(lambda x: x.fillna('NAN').map(mapping))\n\n personality_type\n0 8\n1 6\n2 7\n3 1\n4 17\n5 11\n\n" ]
[ 1, 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074617982_dataframe_pandas_python.txt
Q: How do you produce a random 0 or 1 with random.rand I'm trying to produce a 0 or 1 with numpy's random.rand. np.random.rand() produces a random float between 0 and 1 but not just a 0 or a 1. Thank you. A: You can use np.random.choice with a list of [0,1], or use np.random.radint with a range of 0,2 In [1]: import numpy as np In [2]: np.random.choice([0,1]) Out[2]: 0 In [5]: np.random.choice([0,1]) Out[5]: 1 In [8]: np.random.randint(2) Out[8]: 0 In [9]: np.random.randint(2) Out[9]: 1 You can also use the random module for the equivalent of these functions A: You can use numpy.random.random_integers random_int= np.random.random_integers(0,1) print (random_int) A: You can use np.random.randint(low, high=None, size=None). >>> np.random.randint(0,2,10) array([0, 1, 1, 0, 1, 1, 0, 0, 1, 0]) >>> np.random.randint(2) 0 >>> np.random.randint(2) 1 Fore more details, you can refer to https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.randint.html A: You should consider using np.random.randint() This function takes a range as an input. For example, >>> np.random.randint(2) This will give you an output of either 0 or 1 A: To add to the other answers, it is possible to simply do p_True = 0.5 # 50% probability that you get 1 your_bool = p_True >= np.random.rand() # >= because rand returns a float between 0 and 1, excluding 1. You can have a biased sample by changing p_true. A: Is there a reason to specifically use np.random.rand? This function outputs a float as noted in the question and previous answers, and you would need thresholding to obtain an int. scipy.stats.bernoulli(p) directly outputs a 1 with probability p and 0 with probability 1-p. A: You can use np.random.choice with a list of [0,1] and a size to get a random choice matrix like this: In [1]: import numpy as np In [2]: np.random.choice([0,1], size=(3,4)) Out[2]: array([[1, 0, 0, 0], [0, 1, 1, 0], [1, 1, 1, 1]])
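If you are on a recent NumPy, the Generator API is worth knowing too; np.random.random_integers shown above has been deprecated for a while. A sketch:

import numpy as np

rng = np.random.default_rng()                 # Generator API, NumPy >= 1.17
one_value = rng.integers(0, 2)                # a single 0 or 1 (upper bound exclusive)
ten_values = rng.integers(0, 2, size=10)      # array of ten 0/1 values
biased = (rng.random(10) < 0.3).astype(int)   # 1 with probability 0.3, else 0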
How do you produce a random 0 or 1 with random.rand
I'm trying to produce a 0 or 1 with numpy's random.rand. np.random.rand() produces a random float between 0 and 1 but not just a 0 or a 1. Thank you.
[ "You can use np.random.choice with a list of [0,1], or use np.random.radint with a range of 0,2\nIn [1]: import numpy as np \n\nIn [2]: np.random.choice([0,1]) \nOut[2]: 0\n\nIn [5]: np.random.choice([0,1]) \nOut[5]: 1\n\nIn [8]: np.random.randint(2) \nOut[8]: 0\n\nIn [9]: np.random.randint(2) \nOut[9]: 1\n\n\nYou can also use the random module for the equivalent of these functions\n", "You can use numpy.random.random_integers\nrandom_int= np.random.random_integers(0,1)\nprint (random_int)\n\n", "You can use np.random.randint(low, high=None, size=None). \n>>> np.random.randint(0,2,10)\narray([0, 1, 1, 0, 1, 1, 0, 0, 1, 0])\n>>> np.random.randint(2)\n0\n>>> np.random.randint(2)\n1\n\nFore more details, you can refer to https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.randint.html\n", "You should consider using np.random.randint() This function takes a range as an input. \nFor example,\n>>> np.random.randint(2)\n\nThis will give you an output of either 0 or 1\n", "To add to the other answers, it is possible to simply do\np_True = 0.5 # 50% probability that you get 1\nyour_bool = p_True >= np.random.rand() # >= because rand returns a float between 0 and 1, excluding 1.\n\nYou can have a biased sample by changing p_true.\n", "Is there a reason to specifically use np.random.rand? This function outputs a float as noted in the question and previous answers, and you would need thresholding to obtain an int.\nscipy.stats.bernoulli(p) directly outputs a 1 with probability p and 0 with probability 1-p.\n", "You can use np.random.choice with a list of [0,1] and a size to get a random choice matrix like this:\nIn [1]: import numpy as np\n\nIn [2]: np.random.choice([0,1], size=(3,4))\nOut[2]: array([[1, 0, 0, 0],\n [0, 1, 1, 0],\n [1, 1, 1, 1]])\n\n" ]
[ 8, 1, 1, 1, 1, 0, 0 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0056372240_numpy_python.txt
Q: What is the efficient way to find missing rows of a dataframe and put NaN for columns? Consider I have dataframe which the first column is the datetime, and the other columns are data in the specified datetime (Data is collected hourly, so first column of every row is one hour after the previous row). In this dateframe data for some datetimes are missed. I want to make a new dataframe in which missing rows are replaced with the related datetime and NaNs for other columns. I tried to read the dataframe from a csv as first DF, and created an empty DF in a loop to create datetime for every hour chronologically, then I take the data from first DF and put it in the second DF and if there is no data from first DF for the specified datetime I put NaN in the row. This works for me, but it's very slow and takes 3 days to run for 70000 rows and I guess there is an efficient and pythonic way to do this. I guess there is a better way like this one but I need it for datetime. I'm looking for an answer like Replacing one data frame value from another based on timestamp Criterion but just with datetime. A: I think you could create a df where you have the timestamp as your index. You can then use pd.date_range to create a full datetime range for every hour from min to max. You can then run the Index.difference to efficiently find any timestamps that are missing from your original dataframe --> this will be the index of a new df with missing values. Then just fill in missing columns with NaN import pandas as pd import numpy as np # name of your datetime column datetime_col = 'datetime' # mock up some data data = { datetime_col: [ '2021-01-18 00:00:00', '2021-01-18 01:00:00', '2021-01-18 03:00:00', '2021-01-18 06:00:00'], 'extra_col1': ['b', 'c', 'd', 'e'], 'extra_col2': ['g', 'h', 'i', 'j'], } df = pd.DataFrame(data) # Setting the Date values as index df = df.set_index(datetime_col) # to_datetime() method converts string # format to a DateTime object df.index = pd.to_datetime(df.index) # create df of missing dates from the sequence # starting from min dateitme, to max, with hourly intervals new_df = pd.DataFrame( pd.date_range( start=df.index.min(), end=df.index.max(), freq='H' ).difference(df.index) ) # you will need to add these columns to your df missing_columns = [col for col in df.columns if col!=datetime_col] # add null data new_df[missing_columns] = np.nan # fix column names new_df.columns = [datetime_col] + missing_columns new_df A: I am not sure I follow exactly what you need, i.e what is the frequency of the datetimes you are trying to complete, but assuming it is hourly, then you could try something along those lines: Use the pd.date_range(start_date, end_date, freq='H') function from pandas to create a pandas DataFrame with all the missing hourly times you need (one column and same name than the first column in your initial DataFrame). See documentation here: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.date_range.html Use the pd.merge(initial_df, complete_df, how='outer') function to perform an outer merge between the two dataframes. If I am not mistaken all columns of cases where you had no date in the initial DataFrame should be filled with NAs by default. 
Reproducible example below using Matt's example: import pandas as pd import numpy as np # mock up some data data = { 'date': [ '2021-01-18 00:00:00', '2021-01-18 01:00:00', '2021-01-18 03:00:00', '2021-01-18 06:00:00'], 'extra_col1': ['b', 'c', 'd', 'e'], 'extra_col2': ['g', 'h', 'i', 'j'], } df = pd.DataFrame(data) # Use to_datetime() method to convert string # format to a DateTime object df['date'] = pd.to_datetime(df['date']) # Create df with missing dates from the sequence # starting from min dateitme, to max, with hourly intervals new_df = pd.DataFrame( {'date': pd.date_range( start=df['date'].min(), end=df['date'].max(), freq='H' )} ) # Use the merge function to perform an outer merge # and reorder the date column result_df = pd.merge(df,new_df,how='outer') result_df.sort_values(by='date',ascending=True, inplace=True)
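A more compact route, assuming the datetime column is called 'date' as in the example above: set it as the index and reindex against the full hourly range, which inserts a NaN row for every missing hour:

import pandas as pd

df['date'] = pd.to_datetime(df['date'])
full_range = pd.date_range(df['date'].min(), df['date'].max(), freq='H')

result_df = (
    df.set_index('date')
      .reindex(full_range)    # rows for missing hours are filled with NaN
      .rename_axis('date')
      .reset_index()
)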
What is the efficient way to find missing rows of a dataframe and put NaN for columns?
Consider I have dataframe which the first column is the datetime, and the other columns are data in the specified datetime (Data is collected hourly, so first column of every row is one hour after the previous row). In this dateframe data for some datetimes are missed. I want to make a new dataframe in which missing rows are replaced with the related datetime and NaNs for other columns. I tried to read the dataframe from a csv as first DF, and created an empty DF in a loop to create datetime for every hour chronologically, then I take the data from first DF and put it in the second DF and if there is no data from first DF for the specified datetime I put NaN in the row. This works for me, but it's very slow and takes 3 days to run for 70000 rows and I guess there is an efficient and pythonic way to do this. I guess there is a better way like this one but I need it for datetime. I'm looking for an answer like Replacing one data frame value from another based on timestamp Criterion but just with datetime.
[ "I think you could create a df where you have the timestamp as your index.\nYou can then use pd.date_range to create a full datetime range for every hour from min to max.\nYou can then run the Index.difference to efficiently find any timestamps that are missing from your original dataframe --> this will be the index of a new df with missing values.\nThen just fill in missing columns with NaN\nimport pandas as pd\nimport numpy as np\n\n# name of your datetime column\ndatetime_col = 'datetime'\n \n# mock up some data\ndata = {\n datetime_col: [\n '2021-01-18 00:00:00', '2021-01-18 01:00:00',\n '2021-01-18 03:00:00', '2021-01-18 06:00:00'],\n 'extra_col1': ['b', 'c', 'd', 'e'],\n 'extra_col2': ['g', 'h', 'i', 'j'],\n}\n\ndf = pd.DataFrame(data)\n \n# Setting the Date values as index\ndf = df.set_index(datetime_col)\n \n# to_datetime() method converts string\n# format to a DateTime object\ndf.index = pd.to_datetime(df.index)\n \n# create df of missing dates from the sequence\n# starting from min dateitme, to max, with hourly intervals\nnew_df = pd.DataFrame(\n pd.date_range(\n start=df.index.min(), \n end=df.index.max(),\n freq='H'\n ).difference(df.index)\n)\n\n# you will need to add these columns to your df\nmissing_columns = [col for col in df.columns if col!=datetime_col]\n\n# add null data\nnew_df[missing_columns] = np.nan\n\n# fix column names\nnew_df.columns = [datetime_col] + missing_columns\n\nnew_df\n\n", "I am not sure I follow exactly what you need, i.e what is the frequency of the datetimes you are trying to complete, but assuming it is hourly, then you could try something along those lines:\n\nUse the pd.date_range(start_date, end_date, freq='H') function from pandas to create a pandas DataFrame with all the missing hourly times you need (one column and same name than the first column in your initial DataFrame).\nSee documentation here: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.date_range.html\nUse the pd.merge(initial_df, complete_df, how='outer') function to perform an outer merge between the two dataframes. If I am not mistaken all columns of cases where you had no date in the initial DataFrame should be filled with NAs by default.\n\nReproducible example below using Matt's example:\nimport pandas as pd\nimport numpy as np\n \n# mock up some data\ndata = {\n 'date': [\n '2021-01-18 00:00:00', '2021-01-18 01:00:00',\n '2021-01-18 03:00:00', '2021-01-18 06:00:00'],\n 'extra_col1': ['b', 'c', 'd', 'e'],\n 'extra_col2': ['g', 'h', 'i', 'j'],\n}\n\ndf = pd.DataFrame(data)\n \n# Use to_datetime() method to convert string\n# format to a DateTime object\ndf['date'] = pd.to_datetime(df['date'])\n \n# Create df with missing dates from the sequence\n# starting from min dateitme, to max, with hourly intervals\nnew_df = pd.DataFrame(\n {'date': pd.date_range(\n start=df['date'].min(), \n end=df['date'].max(),\n freq='H'\n )}\n)\n\n# Use the merge function to perform an outer merge\n# and reorder the date column\nresult_df = pd.merge(df,new_df,how='outer')\nresult_df.sort_values(by='date',ascending=True, inplace=True)\n\n" ]
[ 2, 1 ]
[]
[]
[ "dataframe", "datetime", "pandas", "python", "python_3.x" ]
stackoverflow_0074617766_dataframe_datetime_pandas_python_python_3.x.txt
Q: FastAPI: How to know if a parameter is really null? i have a ressource and want to have a post api endpoint to modify it. My problem is if i set all propertys Optional[...] how did i know if i want to "delete" one property or set it to null? If i set it in the request to null: I get NoneType. But if i don't set it in the request i also get NoneType. Is there a solution to differ between this cases? Here is an example program: from typing import Optional from fastapi import FastAPI import uvicorn from pydantic import BaseModel class TestEntity(BaseModel): first: Optional[str] second: Optional[str] third: Optional[str] app = FastAPI() @app.post("/test") def test(entity: TestEntity): return entity if __name__ == "__main__": uvicorn.run(app, host="0.0.0.0", port=5000) I want to set first to null and don't do anything with the other propertys, I do: { "first":null } via POST request. As response I get: { "first": null, "second": null, "third": null } As you can see you cannot know which property is set null and which propertys should remain the same. A: You can find your answer here : Pydantic: Detect if a field value is missing or given as null @app.post("/test") def test(entity: TestEntity): return entity.dict(exclude_unset=True)
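A sketch of how that looks inside the endpoint (pydantic v1 style): dict(exclude_unset=True) drops every field the client never sent, so a field that is present with value None was an explicit null, while an absent field was simply omitted:

from typing import Optional
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class TestEntity(BaseModel):
    first: Optional[str] = None
    second: Optional[str] = None
    third: Optional[str] = None

@app.post("/test")
def test(entity: TestEntity):
    sent = entity.dict(exclude_unset=True)
    explicit_null = [name for name, value in sent.items() if value is None]
    omitted = [name for name in TestEntity.__fields__ if name not in sent]
    return {"explicit_null": explicit_null, "omitted": omitted}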
FastAPI: How to know if a parameter is really null?
i have a ressource and want to have a post api endpoint to modify it. My problem is if i set all propertys Optional[...] how did i know if i want to "delete" one property or set it to null? If i set it in the request to null: I get NoneType. But if i don't set it in the request i also get NoneType. Is there a solution to differ between this cases? Here is an example program: from typing import Optional from fastapi import FastAPI import uvicorn from pydantic import BaseModel class TestEntity(BaseModel): first: Optional[str] second: Optional[str] third: Optional[str] app = FastAPI() @app.post("/test") def test(entity: TestEntity): return entity if __name__ == "__main__": uvicorn.run(app, host="0.0.0.0", port=5000) I want to set first to null and don't do anything with the other propertys, I do: { "first":null } via POST request. As response I get: { "first": null, "second": null, "third": null } As you can see you cannot know which property is set null and which propertys should remain the same.
[ "You can find your answer here : Pydantic: Detect if a field value is missing or given as null\[email protected](\"/test\")\ndef test(entity: TestEntity):\n return entity.dict(exclude_unset=True)\n\n" ]
[ 0 ]
[]
[]
[ "fastapi", "pydantic", "python" ]
stackoverflow_0069645547_fastapi_pydantic_python.txt
Q: Multiprocessing Pool vs Process I'm reviewing some code and noticed some possibly redundant code: def tasker(val): do stuff def multiprocessor (func, vals): chunks = np.array_split(vals, os.cpu_count()) with multiprocessing.Pool() as pool: pool.map(partial(func,vals), chunksize=chunks) if __name__ == '__main__': values = foobar p = multiprocessing.Process(target=multiprocessor(tasker,values)) p.start() p.close() p.join() Just for a sanity check - Is running multiprocessing.Process on the multiprocessing.Pool function not redundant? No need to functionalize the multiprocessing.Pool to begin with, correct? Is there any advantage of running it like this? A: As it happens, the Process call never actually does anything useful; target=multiprocessor(tasker,values) is running multiprocessor in the main process, then passing its return value (None, since it has no explicit return) as the target for the Process. So yes, definitionally, this is completely pointless; you make the Pool in the parent process, run it to completion, then create a no-op Process, launch it, it does nothing, then when the useless Process exits, the main process continues. Unless there is some benefit to creating such a no-op process, the code would do the same thing if the guarded block were just: if __name__ == '__main__': values = foobar multiprocessor(tasker, values) If the Process had been created correctly, with: p = multiprocessing.Process(target=multiprocessor, args=(tasker, values)) and the code was more complex, there might be some benefit to this, if the Process needed to be killable (you could kill it easily for whatever reason, e.g. because some deadline had passed), or it would allocate huge amounts of memory that must be completely returned to the OS (not merely released to the user-mode free pool for reuse), or you were trying to avoid any mutations of the main process's globals (if the Process's target mutated them, the changes would only be seen in that child process and any processes forked after the change, the parent would not see them changed). As written, none of these conditions seem to apply (aside from maybe memory growth issues, especially due to the use of partial, which has issues when used as the mapper function with Pool's various map-like methods), but without knowing the contents of tasker (more specifically, what it returns, which Pool.map will collect and dispose of, consuming memory that isn't strictly needed only to free it in bulk at the end), I can't be sure. An aside: I'll note your code as written makes no sense: def multiprocessor (func, vals): chunks = np.array_split(vals, os.cpu_count()) with multiprocessing.Pool() as pool: pool.map(partial(func,vals), chunksize=chunks) doesn't provide an iterable to pool.map, and passed chunks (a list of numpy sub-arrays) as the chunksize, which should be an int. The additional comments below assume it was actually implemented as: def multiprocessor (func, vals): chunks = np.array_split(vals, os.cpu_count()) with multiprocessing.Pool() as pool: pool.map(func, chunks, chunksize=1) or: def multiprocessor (func, vals): chunk_size = -(-len(vals) // os.cpu_count()) # Trick to get ceiling division out of floor division operator with multiprocessing.Pool() as pool: pool.map(func, vals, chunksize=chunk_size) Having said that, the possible memory issue from Pool.map storing all the results when they're clearly discarded can be ameliorated by using Pool.imap_unordered instead, and just forcing the resulting iterator to run to completion efficiently. 
For example, you could replace pool.map(func, chunks, chunksize=1) with consume(pool.imap_unordered(func, chunks)) and pool.map(func, vals, chunksize=chunk_size) with consume(pool.imap_unordered(func, vals, chunksize=chunk_size)) (where consume is the itertools recipe of the same name). In both cases, rather than allocating a list for all the results, storing each result in it as the workers complete tasks (allocating more and more stuff you don't need), imap_unordered produces each result as it's returned, and consume immediately grabs each result and throws it away (memory must be allocated for each result, but it's immediately released, so the peak memory consumption for the process, and therefore the size the heap grows to, is kept minimal).
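For reference, consume mentioned above is the standard itertools recipe; a copy of it so the imap_unordered variants are self-contained:

import collections
from itertools import islice

def consume(iterator, n=None):
    "Advance the iterator n steps ahead; if n is None, consume it entirely."
    if n is None:
        collections.deque(iterator, maxlen=0)   # exhaust the iterator without storing results
    else:
        next(islice(iterator, n, n), None)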
Multiprocessing Pool vs Process
I'm reviewing some code and noticed some possibly redundant code: def tasker(val): do stuff def multiprocessor (func, vals): chunks = np.array_split(vals, os.cpu_count()) with multiprocessing.Pool() as pool: pool.map(partial(func,vals), chunksize=chunks) if __name__ == '__main__': values = foobar p = multiprocessing.Process(target=multiprocessor(tasker,values)) p.start() p.close() p.join() Just for a sanity check - Is running multiprocessing.Process on the multiprocessing.Pool function not redundant? No need to functionalize the multiprocessing.Pool to begin with, correct? Is there any advantage of running it like this?
[ "As it happens, the Process call never actually does anything useful; target=multiprocessor(tasker,values) is running multiprocessor in the main process, then passing its return value (None, since it has no explicit return) as the target for the Process.\nSo yes, definitionally, this is completely pointless; you make the Pool in the parent process, run it to completion, then create a no-op Process, launch it, it does nothing, then when the useless Process exits, the main process continues. Unless there is some benefit to creating such a no-op process, the code would do the same thing if the guarded block were just:\nif __name__ == '__main__':\n values = foobar\n multiprocessor(tasker, values)\n\nIf the Process had been created correctly, with:\np = multiprocessing.Process(target=multiprocessor, args=(tasker, values))\n\nand the code was more complex, there might be some benefit to this, if the Process needed to be killable (you could kill it easily for whatever reason, e.g. because some deadline had passed), or it would allocate huge amounts of memory that must be completely returned to the OS (not merely released to the user-mode free pool for reuse), or you were trying to avoid any mutations of the main process's globals (if the Process's target mutated them, the changes would only be seen in that child process and any processes forked after the change, the parent would not see them changed).\nAs written, none of these conditions seem to apply (aside from maybe memory growth issues, especially due to the use of partial, which has issues when used as the mapper function with Pool's various map-like methods), but without knowing the contents of tasker (more specifically, what it returns, which Pool.map will collect and dispose of, consuming memory that isn't strictly needed only to free it in bulk at the end), I can't be sure.\n\nAn aside:\nI'll note your code as written makes no sense:\ndef multiprocessor (func, vals):\n chunks = np.array_split(vals, os.cpu_count())\n with multiprocessing.Pool() as pool:\n pool.map(partial(func,vals), chunksize=chunks)\n\ndoesn't provide an iterable to pool.map, and passed chunks (a list of numpy sub-arrays) as the chunksize, which should be an int.\nThe additional comments below assume it was actually implemented as:\ndef multiprocessor (func, vals):\n chunks = np.array_split(vals, os.cpu_count())\n with multiprocessing.Pool() as pool:\n pool.map(func, chunks, chunksize=1)\n\nor:\ndef multiprocessor (func, vals):\n chunk_size = -(-len(vals) // os.cpu_count()) # Trick to get ceiling division out of floor division operator\n with multiprocessing.Pool() as pool:\n pool.map(func, vals, chunksize=chunk_size)\n\nHaving said that, the possible memory issue from Pool.map storing all the results when they're clearly discarded can be ameliorated by using Pool.imap_unordered instead, and just forcing the resulting iterator to run to completion efficiently. 
For example, you could replace pool.map(func, chunks, chunksize=1) with consume(pool.imap_unordered(func, chunks)) and pool.map(func, vals, chunksize=chunk_size) with consume(pool.imap_unordered(func, vals, chunksize=chunk_size)) (where consume is the itertools recipe of the same name).\nIn both cases, rather than allocating a list for all the results, storing each result in it as the workers complete tasks (allocating more and more stuff you don't need), imap_unordered produces each result as it's returned, and consume immediately grabs each result and throws it away (memory must be allocated for each result, but it's immediately released, so the peak memory consumption for the process, and therefore the size the heap grows to, is kept minimal).\n" ]
[ 1 ]
[]
[]
[ "multiprocessing", "python" ]
stackoverflow_0074618056_multiprocessing_python.txt
Q: Modulo Arithmetic function, Python code conversion here is my error Suppose we want to do 954^893 mod 1457. Now let this big number be ((954^50 mod 1457)(954^50 mod 1457)(954^50 mod 1457)(954^50 mod 1457)........(954^43 mod 1457)) mod 1457 break which I want to find the answer. So how can it be done in Python code. I want hard code RSA algorithm where plain message = 83 , prime number p = 47 , q = 31 , e = 17, and encrypted c = 954 A: What you want can be computed with the builtin pow function as follows: ans = pow(954, 893, mod = 1457) This result can also be computed "naively" using ans = (954 ** 893) % 1457. However, this forces python to compute the value 954**893 (i.e. 954^893), a 2660-digit number. For these numbers, Python has no issues computing the solution, but the second approach becomes much worse for larger exponents. For example, the computation 954^893000 mod 1457 takes approximately 0.1 ms the first way and 1 whole second the second way (on my device).
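A sketch of the full hard-coded RSA round trip from the question, built on three-argument pow; pow(e, -1, phi) (Python 3.8+) gives the modular inverse, and with p = 47, q = 31, e = 17 the private exponent comes out as 893, which is exactly the power in 954^893 above:

p, q, e, m = 47, 31, 17, 83

n = p * q                  # 1457
phi = (p - 1) * (q - 1)    # 1380
d = pow(e, -1, phi)        # modular inverse of e, gives 893 (needs Python 3.8+)

c = pow(m, e, n)           # encrypt: 83**17 mod 1457 -> 954
recovered = pow(c, d, n)   # decrypt: 954**893 mod 1457 -> 83

print(d, c, recovered)     # 893 954 83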
Modulo Arithmetic function, Python code conversion
here is my error Suppose we want to do 954^893 mod 1457. Now let this big number be ((954^50 mod 1457)(954^50 mod 1457)(954^50 mod 1457)(954^50 mod 1457)........(954^43 mod 1457)) mod 1457 break which I want to find the answer. So how can it be done in Python code. I want hard code RSA algorithm where plain message = 83 , prime number p = 47 , q = 31 , e = 17, and encrypted c = 954
[ "What you want can be computed with the builtin pow function as follows:\nans = pow(954, 893, mod = 1457)\n\nThis result can also be computed \"naively\" using ans = (954 ** 893) % 1457. However, this forces python to compute the value 954**893 (i.e. 954893), a 2660-digit number.\nFor these numbers, Python has no issues computing the solution, but the second approach becomes much worse for larger exponents. For example, the computation 954893000 mod 1457 takes approximately 0.1 ms the first way and 1 whole second the second way (on my device).\n" ]
[ 0 ]
[]
[]
[ "list", "loops", "python", "python_3.x" ]
stackoverflow_0074618081_list_loops_python_python_3.x.txt
Q: How to keep no return (zero rows) in a concatenation loop? I am using a code for a query, sometimes the input goes and there is no return (basically it does not find anything so the return is an empty row) so it is empty. However, when I use pd.concat, those empty rows disappear. Is there a way to keep these no return rows in the loop as well so that when I use that I can have empty rows on the final output.csv? import numpy as np import pandas as pd from dl import authClient as ac, queryClient as qc from dl.helpers.utils import convert import openpyxl as xl wb = xl.load_workbook('/Users/somethingfile.xlsx') sheet = wb['Sheet 1'] df = pd.DataFrame([],columns = ['col1','col2',...,'coln']) for row in range(3, sheet.max_row + 1): a0, b0, r = sheet.cell(row,1).value, sheet.cell(row,2).value, 0.001 query = """ SELECT a,b,c,d,e FROM smthng WHERE q3c_radial_query(a,b,{:f},{:f},{:f}) LIMIT 1 """.format(a0,b0,r) response = qc.query(sql=query,format='csv') temp_df = convert(response,'pandas') df = pd.concat([df,temp_df]) df.to_csv('output.csv') A: So, as far as I understand, the problem is that when "temp_df" is empty you want to add a blank row. You should be able to do that using the .append() method, appending an empty Series(). if len(temp_df) == 0: temp_df=temp_df.append(pd.Series(), ignore_index=True) #Then concat... A: For your specific question, it works if you check if temp_df is empty or not in each step and make it a DataFrame of NaNs if it is empty. Another note on the implementation is that, concatenating in each iteration will be a very expensive operation. It is much faster if you store the temp_dfs in each iteration in a list and concatenate once after the loop is over. lst = [] # <-- empty list to fill later for row in range(3, sheet.max_row + 1): a0, b0, r = sheet.cell(row,1).value, sheet.cell(row,2).value, 0.001 query = """ SELECT a,b,c,d,e FROM smthng WHERE q3c_radial_query(a,b,{:f},{:f},{:f}) LIMIT 1 """.format(a0,b0,r) response = qc.query(sql=query,format='csv') temp_df = convert(response,'pandas') if temp_df.empty: temp_df = pd.DataFrame([np.nan]) lst.append(temp_df) df = pd.concat(lst) # <-- concat once df.to_csv('output.csv', index=False)
How to keep no return (zero rows) in a concatenation loop?
I am using a code for a query, sometimes the input goes and there is no return (basically it does not find anything so the return is an empty row) so it is empty. However, when I use pd.concat, those empty rows disappear. Is there a way to keep these no return rows in the loop as well so that when I use that I can have empty rows on the final output.csv? import numpy as np import pandas as pd from dl import authClient as ac, queryClient as qc from dl.helpers.utils import convert import openpyxl as xl wb = xl.load_workbook('/Users/somethingfile.xlsx') sheet = wb['Sheet 1'] df = pd.DataFrame([],columns = ['col1','col2',...,'coln']) for row in range(3, sheet.max_row + 1): a0, b0, r = sheet.cell(row,1).value, sheet.cell(row,2).value, 0.001 query = """ SELECT a,b,c,d,e FROM smthng WHERE q3c_radial_query(a,b,{:f},{:f},{:f}) LIMIT 1 """.format(a0,b0,r) response = qc.query(sql=query,format='csv') temp_df = convert(response,'pandas') df = pd.concat([df,temp_df]) df.to_csv('output.csv')
[ "So, as far as I understand, the problem is that when \"temp_df\" is empty you want to add a blank row. You should be able to do that using the .append() method, appending an empty Series().\nif len(temp_df) == 0:\n temp_df=temp_df.append(pd.Series(), ignore_index=True)\n#Then concat...\n\n", "For your specific question, it works if you check if temp_df is empty or not in each step and make it a DataFrame of NaNs if it is empty.\nAnother note on the implementation is that, concatenating in each iteration will be a very expensive operation. It is much faster if you store the temp_dfs in each iteration in a list and concatenate once after the loop is over.\nlst = [] # <-- empty list to fill later\nfor row in range(3, sheet.max_row + 1): \n a0, b0, r = sheet.cell(row,1).value, sheet.cell(row,2).value, 0.001\n query = \"\"\"\n SELECT a,b,c,d,e FROM smthng\n WHERE q3c_radial_query(a,b,{:f},{:f},{:f}) LIMIT 1\n \"\"\".format(a0,b0,r)\n response = qc.query(sql=query,format='csv')\n temp_df = convert(response,'pandas')\n\n if temp_df.empty:\n temp_df = pd.DataFrame([np.nan])\n lst.append(temp_df)\n \ndf = pd.concat(lst) # <-- concat once\ndf.to_csv('output.csv', index=False)\n\n" ]
[ 1, 1 ]
[]
[]
[ "concatenation", "pandas", "python", "sql" ]
stackoverflow_0074614393_concatenation_pandas_python_sql.txt
Q: Compare a dictionary with list of dictionaries and return index from the list which has higher value than the separate dictionary I have a list of dictionaries and a separate dictionary having the same keys and only the values are different. For example the list of dictionaries look like this: [{'A': 0.102, 'B': 0.568, 'C': 0.33}, {'A': 0.026, 'B': 0.590, 'C': 0.382}, {'A': 0.005, 'B': 0.857, 'C': 0.137}, {'A': 0.0, 'B': 0.962, 'C': 0.036}, {'A': 0.0, 'B': 0.991, 'C': 0.008}] and the separate dictionary looks like this: {'A': 0.005, 'B': 0.956, 'C': 0.038} I want to compare the separate dictionary with the list of dictionaries and return the index from the list which has higher value than the separate dictionary. In this example, the indices would be 3, 4 as the dictionary in indices 3 and 4 has a higher value for key 'B' since 'B' has the highest value in the separate dictionary. Any ideas on how I should I proceed the problem? A: are you sure that it should be only index 4? dict_list = [{'A': 0.102, 'B': 0.568, 'C': 0.33}, {'A': 0.026, 'B': 0.590, 'C': 0.382}, {'A': 0.005, 'B': 0.857, 'C': 0.137}, {'A': 0.0, 'B': 0.962, 'C': 0.036}, {'A': 0.0, 'B': 0.991, 'C': 0.008}] d = {'A': 0.005, 'B': 0.956, 'C': 0.038} max_val = max(d.values()) idxmax = [i for i,j in enumerate(dict_list) if max(j.values()) > max_val] print(idxmax) # [3, 4] A: You can use enumerate for finding index of max value: org = [ {'A': 0.102, 'B': 0.568, 'C': 0.33}, {'A': 0.026, 'B': 0.590, 'C': 0.382}, {'A': 0.005, 'B': 0.857, 'C': 0.137}, {'A': 0.0, 'B': 0.962, 'C': 0.036}, {'A': 0.0, 'B': 0.991, 'C': 0.008} ] com = {'A': 0.005, 'B': 0.956, 'C': 0.038} def fnd_index(org, com): key_max, val_max = max(com.items(), key=lambda x: x[1]) print('key_max:', key_max) print('val_max:', val_max) res = [] for idx, dct in enumerate(org): if dct[key_max] > val_max: res.append(idx) return res res = fnd_index(org, com) print('result:', res) Output: key_max: B val_max: 0.956 result: [3, 4]
Compare a dictionary with list of dictionaries and return index from the list which has higher value than the separate dictionary
I have a list of dictionaries and a separate dictionary having the same keys and only the values are different. For example the list of dictionaries look like this: [{'A': 0.102, 'B': 0.568, 'C': 0.33}, {'A': 0.026, 'B': 0.590, 'C': 0.382}, {'A': 0.005, 'B': 0.857, 'C': 0.137}, {'A': 0.0, 'B': 0.962, 'C': 0.036}, {'A': 0.0, 'B': 0.991, 'C': 0.008}] and the separate dictionary looks like this: {'A': 0.005, 'B': 0.956, 'C': 0.038} I want to compare the separate dictionary with the list of dictionaries and return the index from the list which has higher value than the separate dictionary. In this example, the indices would be 3, 4 as the dictionary in indices 3 and 4 has a higher value for key 'B' since 'B' has the highest value in the separate dictionary. Any ideas on how I should I proceed the problem?
[ "are you sure that it should be only index 4?\ndict_list = [{'A': 0.102, 'B': 0.568, 'C': 0.33}, \n {'A': 0.026, 'B': 0.590, 'C': 0.382}, \n {'A': 0.005, 'B': 0.857, 'C': 0.137}, \n {'A': 0.0, 'B': 0.962, 'C': 0.036}, \n {'A': 0.0, 'B': 0.991, 'C': 0.008}] \n\nd = {'A': 0.005, 'B': 0.956, 'C': 0.038}\n\nmax_val = max(d.values())\nidxmax = [i for i,j in enumerate(dict_list) if max(j.values()) > max_val]\n\nprint(idxmax) # [3, 4]\n\n", "You can use enumerate for finding index of max value:\norg = [\n {'A': 0.102, 'B': 0.568, 'C': 0.33}, \n {'A': 0.026, 'B': 0.590, 'C': 0.382}, \n {'A': 0.005, 'B': 0.857, 'C': 0.137}, \n {'A': 0.0, 'B': 0.962, 'C': 0.036}, \n {'A': 0.0, 'B': 0.991, 'C': 0.008}\n] \n\ncom = {'A': 0.005, 'B': 0.956, 'C': 0.038}\n\ndef fnd_index(org, com):\n key_max, val_max = max(com.items(), key=lambda x: x[1])\n print('key_max:', key_max)\n print('val_max:', val_max)\n res = []\n for idx, dct in enumerate(org):\n if dct[key_max] > val_max:\n res.append(idx)\n return res\n\n\nres = fnd_index(org, com)\nprint('result:', res)\n\nOutput:\nkey_max: B\nval_max: 0.956\nresult: [3, 4]\n\n" ]
[ 1, 1 ]
[]
[]
[ "dictionary", "list", "python", "python_3.x" ]
stackoverflow_0074617717_dictionary_list_python_python_3.x.txt
Q: How to find the coordinate of the center of a sphere made in 3D by voxels given all the coordinate of the voxels that made up the sphere I have coordinate of all the voxels that make up the sphere C = [[x1,y1,z1],[x2,y2,z2],...,[xn,yn,zn]] (around 4000 coordinates) Please teach me how I can get a coordinate that is the center of this sphere by Python code. Because this is a sphere, so when I plot it and try to identify the center coordinate, I couldn't see where the center is. This would be easier in 2D (circle). I still have no clue where I can start from. I expect the result to be Ccenter = [x_center,y_center,z_center] I am new to this community and to coding, you can give me feedback regarding the way I ask this question. Thank you very much in advance. A: So if you have an array of shape (n, 3), every point satisfies np.sum((Ccenter - C)**2, axis=1) - R**2 == 0; expanding that equation gives a linear relation whose unknowns are the coefficients of x**2, y**2, z**2, x, y, z, so you can solve for them with least squares. a,_,_,_ = np.linalg.lstsq(np.hstack([C**2, C]), np.ones(n), rcond=None) Ccenter = -a[3:6] / (2*a[0]) A test example import numpy as np n = 4000; # a cloud of 4000 points x = np.random.randn(n, 3) # make them a standard sphere x = x / np.sqrt(np.sum(x**2, 1, keepdims=True)); # Placement of the sphere r = 7 center = [4,5,6] C = r * x + center # Discover the placement from the points a,_,_,_ = np.linalg.lstsq(np.hstack([C**2, C]), np.ones(n), rcond=None) print(-a[3:6] / (2*a[0]))
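If the voxels fill the sphere (or cover its surface evenly all around), there is an even simpler estimate than the least-squares fit above: the centroid of the coordinates. A sketch, assuming C is the list of voxel coordinates from the question:

import numpy as np

C = np.asarray(C)            # shape (n, 3)
Ccenter = C.mean(axis=0)     # centroid; reliable only if the voxels cover the sphere symmetrically
print(Ccenter)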
How to find the coordinate of the center of a sphere made in 3D by voxels given all the coordinate of the voxels that made up the sphere
I have coordinate of all the voxels that make up the sphere C = [[x1,y1,z1],[x2,y2,z2],...,[xn,yn,zn]] (around 4000 coordinates) Please teach me how I can get a coordinate that is the center of this sphere by Python code. Because this is a sphere, so when I plot it and try to identify the center coordinate, I couldn't see where the center is. This would be easier in 2D (circle). I still have no clue where I can start from. I expect the result to be Ccenter = [x_center,y_center,z_center] I am new to this community and to coding, you can give me feedback regarding the way I ask this question. Thank you very much in advance.
[ "So if you have an array of shape (n, 3), np.sum((Ccenter - C)**2, axis=1)-R**2 so you could get the coefficients of x**2, y**2, z**2, x, y, z.\na,_,_,_ = np.linalg.lstsq(np.hstack([C**2, C]), np.ones(n), rcond=None)\nCcenter = -a[3:6] / (2*a[0])\n\nA test example\nimport numpy as np\nn = 4000; # a cloud of 4000 points\nx = np.random.randn(n, 3)\n# make them a standard sphere\nx = x / np.sqrt(np.sum(x**2, 1, keepdims=True));\n\n# Placement of the sphere\nr = 7\ncenter = [4,5,6]\nC = r * x + center\n\n# Discover the placement from the points\na,_,_,_ = np.linalg.lstsq(np.hstack([C**2, C]), np.ones(n), rcond=None)\nprint(-a[3:6] / (2*a[0]))\n\n" ]
[ 0 ]
[]
[]
[ "3d", "graph", "linear_algebra", "python", "voxel" ]
stackoverflow_0074597891_3d_graph_linear_algebra_python_voxel.txt
Q: save dataframe as csv correctly Initially, I have this dataframe: I save this as a csv file by using: df.to_csv('Frequency.csv') The problem lies with when I try to read the csv file again with: pd.read_csv("Frequency.csv") The dataframe then looks like this: Why is there an extra column added and why did the index change? I suppose it has something the do with the way how you should save the dataframe as a csv file, but I am not sure. A: Use these to save and read: #if you don't want to save the index column in the first place df.to_csv('Frequency.csv', index=False) # drop the extra column if any while reading pd.read_csv("Frequency.csv",index_col=0) Example : import pandas as pd data = { "calories": [420, 380, 390], "duration": [50, 40, 45] } df1 = pd.DataFrame(data) df1.to_csv('calories.csv', index=False) pd.read_csv("calories.csv",index_col=0) I've used the combination of these 2 given below because my jupyter notebook adds index on it's own while reading even if I use index=False while saving. So I find the combo of these 2 as a full proof method. df1.to_csv('calories.csv', index=False) pd.read_csv("calories.csv",index_col=0) A: It's writing the index into the csv, so then when you load it, it's an unnamed column. You can get around it by writing the csv like this. df.to_csv('Frequency.csv', index=False)
save dataframe as csv correctly
Initially, I have this dataframe: I save this as a csv file by using: df.to_csv('Frequency.csv') The problem lies with when I try to read the csv file again with: pd.read_csv("Frequency.csv") The dataframe then looks like this: Why is there an extra column added and why did the index change? I suppose it has something the do with the way how you should save the dataframe as a csv file, but I am not sure.
[ "Use these to save and read:\n#if you don't want to save the index column in the first place\ndf.to_csv('Frequency.csv', index=False) \n# drop the extra column if any while reading\npd.read_csv(\"Frequency.csv\",index_col=0)\n\nExample :\nimport pandas as pd\n\ndata = {\n \"calories\": [420, 380, 390],\n \"duration\": [50, 40, 45]\n}\ndf1 = pd.DataFrame(data)\n\ndf1.to_csv('calories.csv', index=False)\npd.read_csv(\"calories.csv\",index_col=0)\n\nI've used the combination of these 2 given below because my jupyter notebook adds index on it's own while reading even if I use index=False while saving. So I find the combo of these 2 as a full proof method.\ndf1.to_csv('calories.csv', index=False)\npd.read_csv(\"calories.csv\",index_col=0)\n\n", "It's writing the index into the csv, so then when you load it, it's an unnamed column. You can get around it by writing the csv like this.\ndf.to_csv('Frequency.csv', index=False)\n\n" ]
[ 1, 0 ]
[]
[]
[ "csv", "dataframe", "pandas", "python" ]
stackoverflow_0074618024_csv_dataframe_pandas_python.txt
Q: Why `torch.cuda.is_available()` returns False even after installing pytorch with cuda? On a Windows 10 PC with an NVidia GeForce 820M I installed CUDA 9.2 and cudnn 7.1 successfully, and then installed PyTorch using the instructions at pytorch.org: pip install torch==1.4.0+cu92 torchvision==0.5.0+cu92 -f https://download.pytorch.org/whl/torch_stable.html But I get: >>> import torch >>> torch.cuda.is_available() False A: Your graphics card does not support CUDA 9.0. Since I've seen a lot of questions that refer to issues like this I'm writing a broad answer on how to check if your system is compatible with CUDA, specifically targeted at using PyTorch with CUDA support. Various circumstance-dependent options for resolving issues are described in the last section of this answer. The system requirements to use PyTorch with CUDA are as follows: Your graphics card must support the required version of CUDA Your graphics card driver must support the required version of CUDA The PyTorch binaries must be built with support for the compute capability of your graphics card Note: If you install pre-built binaries (using either pip or conda) then you do not need to install the CUDA toolkit or runtime on your system before installing PyTorch with CUDA support. This is because PyTorch, unless compiled from source, is always delivered with a copy of the CUDA library. 1. How to check if your GPU/graphics card supports a particular CUDA version First, identify the model of your graphics card. Before moving forward ensure that you've got an NVIDIA graphics card. AMD and Intel graphics cards do not support CUDA. NVIDIA doesn't do a great job of providing CUDA compatibility information in a single location. The best resource is probably this section on the CUDA Wikipedia page. To determine which versions of CUDA are supported Locate your graphics card model in the big table and take note of the compute capability version. For example, the GeForce 820M compute capability is 2.1. In the bullet list preceding the table check to see if the required CUDA version is supported by the compute capability of your graphics card. For example, CUDA 9.2 is not supported for compute compatibility 2.1. If your card doesn't support the required CUDA version then see the options in section 4 of this answer. Note: Compute capability refers to the computational features supported by your graphics card. Newer versions of the CUDA library rely on newer hardware features, which is why we need to determine the compute capability in order to determine the supported versions of CUDA. 2. How to check if your GPU/graphics driver supports a particular CUDA version The graphics driver is the software that allows your operating system to communicate with your graphics card. Since CUDA relies on low-level communication with the graphics card you need to have an up-to-date driver in order use the latest versions of CUDA. First, make sure you have an NVIDIA graphics driver installed on your system. You can acquire the newest driver for your system from NVIDIA's website. If you've installed the latest driver version then your graphics driver probably supports every CUDA version compatible with your graphics card (see section 1). To verify, you can check Table 3 in the CUDA release notes. In rare cases I've heard of the latest recommended graphics drivers not supporting the latest CUDA releases. 
You should be able to get around this by installing the CUDA toolkit for the required CUDA version and selecting the option to install compatible drivers, though this usually isn't required. If you can't, or don't want to upgrade the graphics driver then you can check to see if your current driver supports the specific CUDA version as follows: On Windows Determine your current graphics driver version (Source https://www.nvidia.com/en-gb/drivers/drivers-faq/) Right-click on your desktop and select NVIDIA Control Panel. From the NVIDIA Control Panel menu, select Help > System Information. The driver version is listed at the top of the Details window. For more advanced users, you can also get the driver version number from the Windows Device Manager. Right-click on your graphics device under display adapters and then select Properties. Select the Driver tab and read the Driver version. The last 5 digits are the NVIDIA driver version number. Visit the CUDA release notes and scroll down to Table 3. Use this table to verify your graphics driver is new enough to support the required version of CUDA. On Linux/OS X Run the following command in a terminal window nvidia-smi This should result in something like the following Sat Apr 4 15:31:57 2020 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 435.21 Driver Version: 435.21 CUDA Version: 10.1 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 GeForce RTX 206... Off | 00000000:01:00.0 On | N/A | | 0% 35C P8 16W / 175W | 502MiB / 7974MiB | 1% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 1138 G /usr/lib/xorg/Xorg 300MiB | | 0 2550 G /usr/bin/compiz 189MiB | | 0 5735 G /usr/lib/firefox/firefox 5MiB | | 0 7073 G /usr/lib/firefox/firefox 5MiB | +-----------------------------------------------------------------------------+ Driver Version: ###.## is your graphic driver version. In the example above the driver version is 435.21. CUDA Version: ##.# is the latest version of CUDA supported by your graphics driver. In the example above the graphics driver supports CUDA 10.1 as well as all compatible CUDA versions before 10.1. Note: The CUDA Version displayed in this table does not indicate that the CUDA toolkit or runtime are actually installed on your system. This just indicates the latest version of CUDA your graphics driver is compatible with. To be extra sure that your driver supports the desired CUDA version you can visit Table 3 on the CUDA release notes page. 3. How to check if a particular version of PyTorch is compatible with your GPU/graphics card compute capability Even if your graphics card supports the required version of CUDA then it's possible that the pre-compiled PyTorch binaries were not compiled with support for your compute capability. For example, in PyTorch 0.3.1 support for compute capability <= 5.0 was dropped. 
First, verify that your graphics card and driver both support the required CUDA version (see Sections 1 and 2 above), the information in this section assumes that this is the case. The easiest way to check if PyTorch supports your compute capability is to install the desired version of PyTorch with CUDA support and run the following from a python interpreter >>> import torch >>> torch.zeros(1).cuda() If you get an error message that reads Found GPU0 XXXXX which is of cuda capability #.#. PyTorch no longer supports this GPU because it is too old. then that means PyTorch was not compiled with support for your compute capability. If this runs without issue then you should be good to go. Update If you're installing an old version of PyTorch on a system with a newer GPU then it's possible that the old PyTorch release wasn't compiled with support for your compute capability. Assuming your GPU supports the version of CUDA used by PyTorch, then you should be able to rebuild PyTorch from source with the desired CUDA version or upgrade to a more recent version of PyTorch that was compiled with support for the newer compute capabilities. 4. Conclusion If your graphics card and driver support the required version of CUDA (section 1 and 2) but the PyTorch binaries don't support your compute capability (section 3) then your options are Compile PyTorch from source with support for your compute capability (see here) Install PyTorch without CUDA support (CPU-only) Install an older version of the PyTorch binaries that support your compute capability (not recommended as PyTorch 0.3.1 is very outdated at this point). AFAIK compute capability older than 3.X has never been supported in the pre-built binaries Upgrade your graphics card If your graphics card doesn't support the required version of CUDA (section 1) then your options are Install PyTorch without CUDA support (CPU-only) Install an older version of PyTorch that supports a CUDA version supported by your graphics card (still may require compiling from source if the binaries don't support your compute capability) Upgrade your graphics card A: To solve this issue, the following method answered for me: 1- First you have to update Anaconda. 2- In your notebook, select the following based on your system. https://pytorch.org/ example for Windows:(This may take some time. Be patient) conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch 3- Find and install the latest graphics card for your system through the following site: https://www.nvidia.com/Download/index.aspx 4- Supported CUDA level of GPU and card. see this A: The same error can appear when the version of your Pytorch supports different CUDA. For example, my Pytorch version was with CUDA 8.0 support, but I had CUDA 9.0 installed. To fix that I had to upgrade my Pytorch to cu90 like this: pip install torch_nightly -f https://download.pytorch.org/whl/nightly/cu90/torch_nightly.html Reference: here A: I want to share also my experience, especially in the WSL2 environment. See my post here. Despite I had installed the correct and latest drivers following the guide provided by NVidia here, my WSL was not able to detect any GPU both in PyTorch and in the whole environment. My GPU is Nvidia GeForce RTX 1650 Ti, which is not listed in the Wiki link above but is actually shown in the NVidia page. Downgrading to an older driver version found at this NVidia link, namely Driver Version: 472.39 helped me out. 
Now PyTorch can correctly detect the driver, and I can run containers that require GPU access, since the GPU is correctly found and used. Hoping this will help someone in my situation.
A: Just faced the same with my GPU (latest available driver installed); none of the above helped, and searching for hours over Google was also no luck. Here's what worked out for me:
Delete all environments that were created in Anaconda.
Uninstall Anaconda and delete all related folders in the "user" folder
Install Anaconda
Add conda-forge to channels https://conda-forge.org/docs/user/introduction.html
Run through the installation guide from NVIDIA https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html
Choose the proper configuration and run the conda installation https://pytorch.org/get-started/locally/
IF the installation failed with the well-known "failed with initial frozen solve. Retrying with flexible solve." go with pip3 install instead of conda
Enjoy your GPU in Jupyter Notebook:
import torch
torch.cuda.is_available()
True
A: OK, here's my experience. My system is Ubuntu 20.04 with an NVIDIA GTX 1060. In the 'NVIDIA X Server Settings' application, under PRIME Profiles, I found that 'NVIDIA On-Demand' or 'Intel (Power Saving Mode)' was selected, which made torch.cuda.is_available() return False. I changed the GPU mode to 'NVIDIA (Performance Mode)' and then got True. (NVIDIA X Server Settings GUI)
A: Step 1.) Check your CUDA and GPU driver version using nvidia-smi. This will be helpful in downloading the correct version of PyTorch for this hardware.
Step 2.) Check if you have installed the GPU version of PyTorch by using conda list pytorch. If you get a "cpu_" version of PyTorch then you need to uninstall PyTorch and reinstall it with the commands below:
conda uninstall pytorch
conda install pytorch torchvision cudatoolkit=11.3 -c pytorch -c conda-forge
A: I had a similar issue with a GPU with MIG mode. I had to disable the MIG mode:
>> nvidia-smi -mig 0
As pointed out by ptrblck, it was enabled by default but I didn't create any MIG devices.
You can try to create them (user guide).
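A quick way to gather most of the information from sections 1-3 in one place is to query it from Python itself. The following is a small editorial sketch (not taken from any of the answers above); it assumes PyTorch is already installed and only uses standard torch.cuda calls:
import torch

print("CUDA available:", torch.cuda.is_available())
print("PyTorch built for CUDA:", torch.version.cuda)

if torch.cuda.is_available():
    idx = torch.cuda.current_device()
    major, minor = torch.cuda.get_device_capability(idx)
    print("Device:", torch.cuda.get_device_name(idx))
    print("Compute capability:", f"{major}.{minor}")
    # End-to-end test: raises if the installed binaries were not built
    # with support for this compute capability.
    torch.zeros(1).cuda()
If torch.cuda.is_available() prints False, work through sections 1 and 2 above; if the last line raises the "cuda capability" error, section 3 applies.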
Why `torch.cuda.is_available()` returns False even after installing pytorch with cuda?
On a Windows 10 PC with an NVidia GeForce 820M I installed CUDA 9.2 and cudnn 7.1 successfully, and then installed PyTorch using the instructions at pytorch.org: pip install torch==1.4.0+cu92 torchvision==0.5.0+cu92 -f https://download.pytorch.org/whl/torch_stable.html But I get: >>> import torch >>> torch.cuda.is_available() False
[ "Your graphics card does not support CUDA 9.0.\nSince I've seen a lot of questions that refer to issues like this I'm writing a broad answer on how to check if your system is compatible with CUDA, specifically targeted at using PyTorch with CUDA support. Various circumstance-dependent options for resolving issues are described in the last section of this answer.\n\nThe system requirements to use PyTorch with CUDA are as follows:\n\nYour graphics card must support the required version of CUDA\nYour graphics card driver must support the required version of CUDA\nThe PyTorch binaries must be built with support for the compute capability of your graphics card\n\nNote: If you install pre-built binaries (using either pip or conda) then you do not need to install the CUDA toolkit or runtime on your system before installing PyTorch with CUDA support. This is because PyTorch, unless compiled from source, is always delivered with a copy of the CUDA library.\n\n1. How to check if your GPU/graphics card supports a particular CUDA version\nFirst, identify the model of your graphics card.\nBefore moving forward ensure that you've got an NVIDIA graphics card. AMD and Intel graphics cards do not support CUDA.\nNVIDIA doesn't do a great job of providing CUDA compatibility information in a single location. The best resource is probably this section on the CUDA Wikipedia page. To determine which versions of CUDA are supported\n\nLocate your graphics card model in the big table and take note of the compute capability version. For example, the GeForce 820M compute capability is 2.1.\nIn the bullet list preceding the table check to see if the required CUDA version is supported by the compute capability of your graphics card. For example, CUDA 9.2 is not supported for compute compatibility 2.1.\n\nIf your card doesn't support the required CUDA version then see the options in section 4 of this answer.\nNote: Compute capability refers to the computational features supported by your graphics card. Newer versions of the CUDA library rely on newer hardware features, which is why we need to determine the compute capability in order to determine the supported versions of CUDA.\n\n2. How to check if your GPU/graphics driver supports a particular CUDA version\nThe graphics driver is the software that allows your operating system to communicate with your graphics card. Since CUDA relies on low-level communication with the graphics card you need to have an up-to-date driver in order use the latest versions of CUDA.\nFirst, make sure you have an NVIDIA graphics driver installed on your system. You can acquire the newest driver for your system from NVIDIA's website.\nIf you've installed the latest driver version then your graphics driver probably supports every CUDA version compatible with your graphics card (see section 1). To verify, you can check Table 3 in the CUDA release notes. In rare cases I've heard of the latest recommended graphics drivers not supporting the latest CUDA releases. You should be able to get around this by installing the CUDA toolkit for the required CUDA version and selecting the option to install compatible drivers, though this usually isn't required.\nIf you can't, or don't want to upgrade the graphics driver then you can check to see if your current driver supports the specific CUDA version as follows:\nOn Windows\n\nDetermine your current graphics driver version (Source https://www.nvidia.com/en-gb/drivers/drivers-faq/)\n\n\nRight-click on your desktop and select NVIDIA Control Panel. 
From the\nNVIDIA Control Panel menu, select Help > System Information. The\ndriver version is listed at the top of the Details window. For more\nadvanced users, you can also get the driver version number from the\nWindows Device Manager. Right-click on your graphics device under\ndisplay adapters and then select Properties. Select the Driver tab and\nread the Driver version. The last 5 digits are the NVIDIA driver\nversion number.\n\n\nVisit the CUDA release notes and scroll down to Table 3. Use this table to verify your graphics driver is new enough to support the required version of CUDA.\n\nOn Linux/OS X\nRun the following command in a terminal window\nnvidia-smi\n\nThis should result in something like the following\nSat Apr 4 15:31:57 2020 \n+-----------------------------------------------------------------------------+\n| NVIDIA-SMI 435.21 Driver Version: 435.21 CUDA Version: 10.1 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n|===============================+======================+======================|\n| 0 GeForce RTX 206... Off | 00000000:01:00.0 On | N/A |\n| 0% 35C P8 16W / 175W | 502MiB / 7974MiB | 1% Default |\n+-------------------------------+----------------------+----------------------+\n \n+-----------------------------------------------------------------------------+\n| Processes: GPU Memory |\n| GPU PID Type Process name Usage |\n|=============================================================================|\n| 0 1138 G /usr/lib/xorg/Xorg 300MiB |\n| 0 2550 G /usr/bin/compiz 189MiB |\n| 0 5735 G /usr/lib/firefox/firefox 5MiB |\n| 0 7073 G /usr/lib/firefox/firefox 5MiB |\n+-----------------------------------------------------------------------------+\n\nDriver Version: ###.## is your graphic driver version. In the example above the driver version is 435.21.\nCUDA Version: ##.# is the latest version of CUDA supported by your graphics driver. In the example above the graphics driver supports CUDA 10.1 as well as all compatible CUDA versions before 10.1.\nNote: The CUDA Version displayed in this table does not indicate that the CUDA toolkit or runtime are actually installed on your system. This just indicates the latest version of CUDA your graphics driver is compatible with.\nTo be extra sure that your driver supports the desired CUDA version you can visit Table 3 on the CUDA release notes page.\n\n3. How to check if a particular version of PyTorch is compatible with your GPU/graphics card compute capability\nEven if your graphics card supports the required version of CUDA then it's possible that the pre-compiled PyTorch binaries were not compiled with support for your compute capability. For example, in PyTorch 0.3.1 support for compute capability <= 5.0 was dropped.\nFirst, verify that your graphics card and driver both support the required CUDA version (see Sections 1 and 2 above), the information in this section assumes that this is the case.\nThe easiest way to check if PyTorch supports your compute capability is to install the desired version of PyTorch with CUDA support and run the following from a python interpreter\n>>> import torch\n>>> torch.zeros(1).cuda()\n\nIf you get an error message that reads\nFound GPU0 XXXXX which is of cuda capability #.#.\nPyTorch no longer supports this GPU because it is too old.\n\nthen that means PyTorch was not compiled with support for your compute capability. 
If this runs without issue then you should be good to go.\nUpdate If you're installing an old version of PyTorch on a system with a newer GPU then it's possible that the old PyTorch release wasn't compiled with support for your compute capability. Assuming your GPU supports the version of CUDA used by PyTorch, then you should be able to rebuild PyTorch from source with the desired CUDA version or upgrade to a more recent version of PyTorch that was compiled with support for the newer compute capabilities.\n\n4. Conclusion\nIf your graphics card and driver support the required version of CUDA (section 1 and 2) but the PyTorch binaries don't support your compute capability (section 3) then your options are\n\nCompile PyTorch from source with support for your compute capability (see here)\nInstall PyTorch without CUDA support (CPU-only)\nInstall an older version of the PyTorch binaries that support your compute capability (not recommended as PyTorch 0.3.1 is very outdated at this point). AFAIK compute capability older than 3.X has never been supported in the pre-built binaries\nUpgrade your graphics card\n\nIf your graphics card doesn't support the required version of CUDA (section 1) then your options are\n\nInstall PyTorch without CUDA support (CPU-only)\nInstall an older version of PyTorch that supports a CUDA version supported by your graphics card (still may require compiling from source if the binaries don't support your compute capability)\nUpgrade your graphics card\n\n", "To solve this issue, the following method answered for me:\n1- First you have to update Anaconda.\n2- In your notebook, select the following based on your system.\nhttps://pytorch.org/\n\n\nexample for Windows:(This may take some time. Be patient)\nconda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch\n\n3- Find and install the latest graphics card for your system through the following site:\nhttps://www.nvidia.com/Download/index.aspx\n\n\n4- Supported CUDA level of GPU and card. see this\n", "The same error can appear when the version of your Pytorch supports different CUDA. For example, my Pytorch version was with CUDA 8.0 support, but I had CUDA 9.0 installed. To fix that I had to upgrade my Pytorch to cu90 like this:\npip install torch_nightly -f https://download.pytorch.org/whl/nightly/cu90/torch_nightly.html\n\nReference: here\n", "I want to share also my experience, especially in the WSL2 environment. See my post here.\nDespite I had installed the correct and latest drivers following the guide provided by NVidia here, my WSL was not able to detect any GPU both in PyTorch and in the whole environment.\nMy GPU is Nvidia GeForce RTX 1650 Ti, which is not listed in the Wiki link above but is actually shown in the NVidia page.\nDowngrading to an older driver version found at this NVidia link, namely Driver Version: 472.39 helped me out. Now PyTorch can correctly detect the driver, as well as I can run containers that require GPU access since it is correctly found and used.\nHoping this will help someone in my situation.\n", "Just faced the same with my GPU (last available driver installed), none of above helped, searching for hours over Google also no luck. Here's what worked out for me:\n\nDelete all environments that were created in Anaconda. 
Uninstall Anaconda and delete all related folders in \"user\" folder\n\nInstall Anaconda\n\nAdd conda-forge to channels https://conda-forge.org/docs/user/introduction.html\n\nRun through installation Guide from NVIDIA https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html\n\nChoose proper configuration and run conda installation https://pytorch.org/get-started/locally/\nIF installation failed with well-known failed with initial frozen solve. Retrying with flexible solve. go with pip3 install instead of conda\n\nEnjoy your GPU in Jupyter Notebook:\nimport torch\ntorch.cuda.is_available()\nTrue\n\n\n", "ok here's my experience\nmy system is ubuntu 20.4, gpu - nvidi gtx 1060\nwhen i go and change run the 'Nvidia X Server Settings' application i found under the PRIME Profiles\nNvidia On-Demand or Inter(power saving mode) is selected\ngiving torch.cuda.is_available() to False\ni changed the GPU Mode to 'NVIDIA(Performance Mode) then i got True\nNVIDIA X Server Setting-GUI\n", "Step 1.) Check your cuda and GPU DRIVER version using\n nvidia-smi .\nThis will be helpful in downloading the correct version of pytorch with this hardware\nStep 2.) Check if you have installed gpu version of pytorch by using\nconda list pytorch\nIf you get \"cpu_\" version of pytorch then you need to uninstall pytorch and reinstall it by below command\n ```` conda uninstall pytorch \n conda install pytorch torchvision cudatoolkit=11.3 -c pytorch -c conda-forge ````\n\n", "I had a similar issue with a GPU with MIG mode. I had to disable the MIG mode:\n>> nvidia-smi -mig 0\n\nAs pointed out by ptrblck, it was enable by default but I didn't create any MIG devices.\nYou can try to create them (user guide).\n" ]
[ 125, 5, 2, 1, 1, 0, 0, 0 ]
[]
[]
[ "python", "pytorch" ]
stackoverflow_0060987997_python_pytorch.txt
Q: Switch order of differential and real operator in expression in Python Let's say I want to simplify the terms where u and v are (sympy) complex variables. u and w are independent from each other and the above differentials should thereby be evaluated to zero. As my code currently stands, it will not set the above differentials to zero since it does not know how to evaluate re(w) and im(w) (see reason below). Is there a way to tell Python/SymPy to reverse the order of operation between the differential and re/im operator, i.e. to evaluate them as: Since then Python can evaluate the differentials, and since they both are zero to begin with, it can set re(0) and im(0) to zero automatically. I am basically looking for a solution to this where I don't have to decompose u and w into with u_1, u_2, w_1, w_2 real Initial attempt: I noticed that one can use sympy.subs to switch the re operator to the im operator by [expression].subs({re: im}). Maybe one could do something similar with the differential and re/im operator to switch the order, but I do not know how to write the differential operator inside of subs.
A: Maybe just use doit:
In [3]: v, w = symbols('v, w')
In [4]: diff(re(w), v)
Out[4]: 0
In [5]: Derivative(re(w), v)
Out[5]:
d
──(re(w))
dv
In [6]: Derivative(re(w), v).doit()
Out[6]: 0
A: The doit is good since, otherwise, an unevaluated Derivative(x, y) will not evaluate to 0. But...if you want to do as requested, you can use a replace command to look for a derivative whose argument is re or im like this, using the args and func attributes of the expressions:
>>> eq = Derivative(re(x),y)
>>> eq.replace(
... lambda x: isinstance(x, Derivative) and isinstance(x.args[0], (re,im)),
... lambda x: x.args[0].func(x.func(x.args[0].args[0], *x.args[1:])))
re(Derivative(x, y))
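Putting the two answers together, here is a self-contained editorial sketch (it assumes v and w are plain, independent Symbols, which is a simplification of the original setup with complex variables):
from sympy import symbols, Derivative, re, im

v, w = symbols('v w')

# Approach 1: let doit() evaluate the derivative directly.
print(Derivative(re(w), v).doit())        # 0

# Approach 2: swap the order first, turning d/dv re(w) into re(d/dv w).
expr = Derivative(re(w), v)
swapped = expr.replace(
    lambda e: isinstance(e, Derivative) and isinstance(e.args[0], (re, im)),
    lambda e: e.args[0].func(e.func(e.args[0].args[0], *e.args[1:])))
print(swapped)                            # re(Derivative(w, v))
print(swapped.doit())                     # 0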
Switch order of differential and real operator in expression in Python
Let's say I want to simplify the terms where u and v are (sympy) complex variables. u and w are independent from each other and the above differentials should thereby be evaluated to zero. As my code currently stands, it will not set the above differentials to zero since it does not know how to evaluate re(w) and im(w) (see reason below). Is there a way to tell Python/SymPy to reverse the order of operation between the differential and re/im operator, i.e. to evaluate them as: Since then Python can evaluate the differentials, and since they both are zero to begin with, it can set re(0) and im(0) to zero automatically. I am basically looking for a solution to this where I don't have to decompose u and w into with u_1, u_2, w_1, w_2 real Initial attempt: I noticed that one can use sympy.subs to switch the re operator to the im operator by [expression].subs({re: im}). Maybe one could do something similar with the differential and re/im operator to switch the order, but I do not know how to write the differential operator inside of subs.
[ "Maybe just use doit:\nIn [3]: v, w = symbols('v, w')\n\nIn [4]: diff(re(w), v)\nOut[4]: 0\n\nIn [5]: Derivative(re(w), v)\nOut[5]: \nd \n──(re(w))\ndv \n\nIn [6]: Derivative(re(w), v).doit()\nOut[6]: 0\n\n", "The doit is good since, otherwise, an unevaluated Derivative(x, y) will not evaluate to 0. But...if you want to do as requested, you can use a replace command to look for a derivative whose argument is re or im like this, using the args and func attributes of the expressions:\n>>> eq = Derivative(re(x),y)\n>>> eq.replace(\n... lambda x: isinstance(x, Derivative) and isinstance(x.args[0], (re,im)),\n... lambda x: x.args[0].func(x.func(x.args[0].args[0], *x.args[1:])))\nre(Derivative(x, y))\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "sympy" ]
stackoverflow_0074610786_python_sympy.txt
Q: Can I import default Python modules in a Python Docker image? I am new to Docker, and as a learning exercise, I want to make a custom Python package available through a Docker image. The package is called hashtable-nicolerg and includes a HashTable class that can be imported with from hashtable_nicolerg.hashtable import HashTable. It is straightforward to create an image with additional Python packages installed: Write a Dockerfile # Dockerfile FROM python:3 RUN pip install --no-cache-dir hashtable-nicolerg Build the image docker build -t python-hashtable . However, the goal, which I realize is hardly an abundant use-case for Docker images, is for the user to be able to create HashTable instances as soon as the container's Python prompt starts. Specifically, this is the current behavior: $ docker run -it python-hashtable Python 3.11.0 (main, Nov 15 2022, 19:58:01) [GCC 10.2.1 20210110] on linux Type "help", "copyright", "credits" or "license" for more information. >>> hash_table = HashTable(capacity=100) Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'HashTable' is not defined >>> from hashtable_nicolerg.hashtable import HashTable >>> hash_table = HashTable(capacity=100) And this is the desired behavior: $ docker run -it python-hashtable Python 3.11.0 (main, Nov 15 2022, 19:58:01) [GCC 10.2.1 20210110] on linux Type "help", "copyright", "credits" or "license" for more information. >>> hash_table = HashTable(capacity=100) I don't want my imaginary users to have to type from hashtable_nicolerg.hashtable import HashTable every time they run a container from this image. So, is it possible for me to effectively run from hashtable_nicolerg.hashtable import HashTable within my Docker image so that users don't have to manually import this module? Again, I realize this is not the most popular use-case for a Docker image. I'm using this as an exercise to learn more about Python and Docker. I'd appreciate any help! A: If the PYTHONSTARTUP environment variable exists and is the name of a valid Python file, then that file will be executed when any new Python shell starts up. So whether you want to do this in a Docker container or on your local machine, it works the same way. Define PYTHONSTARTUP (in .bashrc, for instance) and then write a startup file. # .bashrc export PYTHONSTARTUP=~/.python_startup   # .python_startup from hashtable_nicolerg.hashtable import HashTable A: Thanks to Silvio Mayolo for the answer! Here is a Docker alternative to exporting an environment variable from ~/.bashrc: # Dockerfile FROM python:3 RUN pip install --no-cache-dir hashtable-nicolerg WORKDIR / COPY .python_startup . ENV PYTHONSTARTUP=./.python_startup Then after building the image with docker build -t python-hashtable ., I get the desired behavior: $ docker run -it python-hashtable Python 3.11.0 (main, Nov 15 2022, 19:58:01) [GCC 10.2.1 20210110] on linux Type "help", "copyright", "credits" or "license" for more information. >>> hash_table = HashTable(capacity=100)
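A slightly more defensive version of the startup file (an editorial sketch, not from the answers above) keeps the interactive prompt usable even if the package is missing from the image:
# .python_startup
try:
    from hashtable_nicolerg.hashtable import HashTable
    print("HashTable is available in this session")
except ImportError as exc:
    print(f"Could not import hashtable_nicolerg: {exc}")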
Can I import default Python modules in a Python Docker image?
I am new to Docker, and as a learning exercise, I want to make a custom Python package available through a Docker image. The package is called hashtable-nicolerg and includes a HashTable class that can be imported with from hashtable_nicolerg.hashtable import HashTable. It is straightforward to create an image with additional Python packages installed: Write a Dockerfile # Dockerfile FROM python:3 RUN pip install --no-cache-dir hashtable-nicolerg Build the image docker build -t python-hashtable . However, the goal, which I realize is hardly an abundant use-case for Docker images, is for the user to be able to create HashTable instances as soon as the container's Python prompt starts. Specifically, this is the current behavior: $ docker run -it python-hashtable Python 3.11.0 (main, Nov 15 2022, 19:58:01) [GCC 10.2.1 20210110] on linux Type "help", "copyright", "credits" or "license" for more information. >>> hash_table = HashTable(capacity=100) Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'HashTable' is not defined >>> from hashtable_nicolerg.hashtable import HashTable >>> hash_table = HashTable(capacity=100) And this is the desired behavior: $ docker run -it python-hashtable Python 3.11.0 (main, Nov 15 2022, 19:58:01) [GCC 10.2.1 20210110] on linux Type "help", "copyright", "credits" or "license" for more information. >>> hash_table = HashTable(capacity=100) I don't want my imaginary users to have to type from hashtable_nicolerg.hashtable import HashTable every time they run a container from this image. So, is it possible for me to effectively run from hashtable_nicolerg.hashtable import HashTable within my Docker image so that users don't have to manually import this module? Again, I realize this is not the most popular use-case for a Docker image. I'm using this as an exercise to learn more about Python and Docker. I'd appreciate any help!
[ "If the PYTHONSTARTUP environment variable exists and is the name of a valid Python file, then that file will be executed when any new Python shell starts up.\nSo whether you want to do this in a Docker container or on your local machine, it works the same way. Define PYTHONSTARTUP (in .bashrc, for instance) and then write a startup file.\n# .bashrc\nexport PYTHONSTARTUP=~/.python_startup\n\n \n# .python_startup\nfrom hashtable_nicolerg.hashtable import HashTable\n\n", "Thanks to Silvio Mayolo for the answer! Here is a Docker alternative to exporting an environment variable from ~/.bashrc:\n# Dockerfile\nFROM python:3\nRUN pip install --no-cache-dir hashtable-nicolerg\n\nWORKDIR /\nCOPY .python_startup .\nENV PYTHONSTARTUP=./.python_startup\n\nThen after building the image with docker build -t python-hashtable ., I get the desired behavior:\n$ docker run -it python-hashtable\nPython 3.11.0 (main, Nov 15 2022, 19:58:01) [GCC 10.2.1 20210110] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> hash_table = HashTable(capacity=100)\n\n" ]
[ 3, 1 ]
[]
[]
[ "docker", "python", "python_import" ]
stackoverflow_0074609139_docker_python_python_import.txt
Q: Is there a way to use try. except to limit a list to numbers only so if anything else other then numbers is used such as special char or char it loops var=input("Enter a list of Number to Find its Minimum, Maximum, Average and Total:") try: a=eval(var) b=min(a) c=max(a) d=sum(a)/len(a) e=sum(a) print('min -',b,'max -',c,'avg -',d,'total -',e) break except NameError: txt=var.isalpha() if (txt==True): print("Please enter a valid input") else: print("Please enter a valid input") Errors : Enter a list of Number to Find its Minimum, Maximum, Average and Total:[1,2,3ab,4] File "<string>", line 1 [1,2,3ab,4] ^ SyntaxError: invalid syntax Enter a list of Number to Find its Minimum, Maximum, Average and Total:[@,+,_] File "<string>", line 1 [@,+,_] ^ SyntaxError: invalid syntax Was Expecting it to print an error message like " Please enter a valid input " and ask me to type the list again. A: Your code has a few quirks that you may want to tweak: There is a break statement inside the try, but no for loop. Both if and else print the same text. You can't know which one printed the output. Just by changing the captured exception type from NameError to SyntaxError should be enough: var = input("Enter a list of Number to Find its Minimum, Maximum, Average and Total:") try: a = eval(var) b = min(a) c = max(a) d = sum(a)/len(a) e = sum(a) print('min -', b, 'max -', c, 'avg -', d, 'total -', e) except SyntaxError: print("Please enter a valid input") Nonetheless, if you want to print different messages for [1,2,3ab,4] and [@,+,_] you should know that str.isalpha() only returns True if all characters are letters, which is not true just by adding commas or brackets. I would suggest using regular expressions (regex): import re var = input("Enter a list of Number to Find its Minimum, Maximum, Average and Total:") try: a = eval(var) b = min(a) c = max(a) d = sum(a)/len(a) e = sum(a) print('min -', b, 'max -', c, 'avg -', d, 'total -', e) except SyntaxError: if re.match(r'[a-zA-Z]', var): print("Text contains letters. Please enter a valid input") elif re.match(r'[\@\+\_]', var): print("Text contains special characters. Please enter a valid input") else: print("Please enter a valid input") You can play with regex on regexr.com. If you would rather not use the re library, you can do it iterating over the string: var = input("Enter a list of Number to Find its Minimum, Maximum, Average and Total:") try: a = eval(var) b = min(a) c = max(a) d = sum(a)/len(a) e = sum(a) print('min -', b, 'max -', c, 'avg -', d, 'total -', e) except SyntaxError: for l in var: if l.isnumeric() or l in [',', '[', ']']: continue if l.isalpha(): print("Text contains letters. Please enter a valid input") elif l in ['@', '+', '_']: print("Text contains special characters. Please enter a valid input") else: print("Please enter a valid input") break
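Since the question also asks for the prompt to repeat until a valid list is entered, here is a hedged editorial sketch of such a loop (it uses ast.literal_eval instead of eval because it only accepts Python literals):
import ast

while True:
    var = input("Enter a list of Number to Find its Minimum, Maximum, Average and Total:")
    try:
        a = ast.literal_eval(var)
        # accept only a non-empty list/tuple of numbers
        if not isinstance(a, (list, tuple)) or len(a) == 0 \
                or not all(isinstance(x, (int, float)) for x in a):
            raise ValueError
    except (ValueError, SyntaxError):
        print("Please enter a valid input")
        continue
    print('min -', min(a), 'max -', max(a), 'avg -', sum(a) / len(a), 'total -', sum(a))
    break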
Is there a way to use try. except to limit a list to numbers only so if anything else other then numbers is used such as special char or char it loops
var=input("Enter a list of Number to Find its Minimum, Maximum, Average and Total:") try: a=eval(var) b=min(a) c=max(a) d=sum(a)/len(a) e=sum(a) print('min -',b,'max -',c,'avg -',d,'total -',e) break except NameError: txt=var.isalpha() if (txt==True): print("Please enter a valid input") else: print("Please enter a valid input") Errors : Enter a list of Number to Find its Minimum, Maximum, Average and Total:[1,2,3ab,4] File "<string>", line 1 [1,2,3ab,4] ^ SyntaxError: invalid syntax Enter a list of Number to Find its Minimum, Maximum, Average and Total:[@,+,_] File "<string>", line 1 [@,+,_] ^ SyntaxError: invalid syntax Was Expecting it to print an error message like " Please enter a valid input " and ask me to type the list again.
[ "Your code has a few quirks that you may want to tweak:\n\nThere is a break statement inside the try, but no for loop.\nBoth if and else print the same text. You can't know which one printed the output.\n\nJust by changing the captured exception type from NameError to SyntaxError should be enough:\nvar = input(\"Enter a list of Number to Find its Minimum, Maximum, Average and Total:\")\ntry:\n a = eval(var)\n b = min(a)\n c = max(a)\n d = sum(a)/len(a)\n e = sum(a)\n print('min -', b, 'max -', c, 'avg -', d, 'total -', e)\nexcept SyntaxError:\n print(\"Please enter a valid input\")\n\nNonetheless, if you want to print different messages for [1,2,3ab,4] and [@,+,_] you should know that str.isalpha() only returns True if all characters are letters, which is not true just by adding commas or brackets. I would suggest using regular expressions (regex):\nimport re\n\nvar = input(\"Enter a list of Number to Find its Minimum, Maximum, Average and Total:\")\ntry:\n a = eval(var)\n b = min(a)\n c = max(a)\n d = sum(a)/len(a)\n e = sum(a)\n print('min -', b, 'max -', c, 'avg -', d, 'total -', e)\nexcept SyntaxError:\n if re.match(r'[a-zA-Z]', var):\n print(\"Text contains letters. Please enter a valid input\")\n elif re.match(r'[\\@\\+\\_]', var):\n print(\"Text contains special characters. Please enter a valid input\")\n else:\n print(\"Please enter a valid input\")\n\nYou can play with regex on regexr.com.\nIf you would rather not use the re library, you can do it iterating over the string:\nvar = input(\"Enter a list of Number to Find its Minimum, Maximum, Average and Total:\")\ntry:\n a = eval(var)\n b = min(a)\n c = max(a)\n d = sum(a)/len(a)\n e = sum(a)\n print('min -', b, 'max -', c, 'avg -', d, 'total -', e)\nexcept SyntaxError:\n for l in var:\n if l.isnumeric() or l in [',', '[', ']']:\n continue\n if l.isalpha():\n print(\"Text contains letters. Please enter a valid input\")\n elif l in ['@', '+', '_']:\n print(\"Text contains special characters. Please enter a valid input\")\n else:\n print(\"Please enter a valid input\")\n break\n\n" ]
[ 0 ]
[]
[]
[ "python", "syntax_error" ]
stackoverflow_0074617795_python_syntax_error.txt
Q: Overlay Graphs at same point I want to overlay some graphs out of CSV data (two datasets). The graph I got from my dataset is shown down below. Is there any way to plot those datasets over specific points? I would like to overlay these plots by using the anchor of the "big drop" to compare them in a better way. The code used: import pandas as pd import matplotlib.pyplot as plt # Read the data data1 = pd.read_csv('data1.csv', delimiter=";", decimal=",") data2 = pd.read_csv('data2.csv', delimiter=";", decimal=",") data3 = pd.read_csv('data3.csv', delimiter=";", decimal=",") data4 = pd.read_csv('data4.csv', delimiter=";", decimal=",") # Plot the data plt.plot(data1['Zeit'], data1['Kanal A']) plt.plot(data2['Zeit'], data2['Kanal A']) plt.plot(data3['Zeit'], data3['Kanal A']) plt.plot(data4['Zeit'], data4['Kanal A']) plt.show() plt.close() I would like to share you some data here: Link to data A: Part 1: Anchor times A simple way is to find the times of interest (lowest point) in each frame, then plot each series with x=t - t_peak instead of x=t. Two ways come to mind to find the desired anchor points: Simply using the global minimum (in your plots, that would work fine), or Using the most prominent local minimum, either from first principles, or using scipy's find_peaks(). But first of all, let us attempt to build a reproducible example: import matplotlib.pyplot as plt import numpy as np import pandas as pd def make_sample(t_peak, tmax_approx=17.5, n=100): # uneven times t = np.random.uniform(0, 2*tmax_approx/n, n).cumsum() y = -1 / (0.1 + 2 * np.abs(t - t_peak)) trend = 4 * np.random.uniform(-1, 1) / n level = np.random.uniform(10, 12) y += np.random.normal(trend, 1/n, n).cumsum() + level return pd.DataFrame({'t': t, 'y': y}) poi = [2, 2.48, 2.6, 2.1] np.random.seed(0) frames = [make_sample(t_peak) for t_peak in poi] plt.rcParams['figure.figsize'] = (6,2) fig, ax = plt.subplots() for df in frames: ax.plot(*df.values.T) In this case, we made the problem maximally inconvenient by giving each time series its own, independent, unevenly distributed time sampling. Now, retrieving the "maximum drop" by global minimum: peaks = [df.loc[df['y'].idxmin(), 't'] for df in frames] >>> peaks [2.0209774600118764, 2.4932468358014157, 2.5835972003585472, 2.12438578790615] fig, ax = plt.subplots() for t_peak, df in zip(peaks, frames): ax.plot(df['t'] - t_peak, df['y']) But imagine a case where the global minimum is not suitable. For example, add a large sine wave to each series: frames = [df.assign(y=df['y'] + 5 * np.sin(df['t'])) for df in frames] # just plotting the first series df = frames[0] plt.plot(*df.values.T) Clearly, there are several local minima, and the one we want ("sharpest drop") is not the global one. A simple way to find the desired sharpest drop time is by looking at the difference from each point to its two neighbors: def arg_steepest_min(v): # simply find the minimum that is most separated from the surrounding points diff = np.diff(v) i = np.argmin(diff[:-1] - diff[1:]) + 1 return i peaks = [df['t'].iloc[arg_steepest_min(df['y'])] for df in frames] >>> peaks [2.0209774600118764, 2.4932468358014157, 2.5835972003585472, 2.12438578790615] # just plotting the first curve and the peak found df = frames[0] plt.plot(*df.values.T) plt.plot(*df.iloc[arg_steepest_min(df['y'])].T, 'x') There are more complex cases where you want to bring the full power of find_peaks(). 
Here is an example that uses the most prominent minimum, using a certain number of samples for neighborhood: from scipy.signal import find_peaks, peak_prominences def arg_most_prominent_min(v, prominence=1, wlen=10): peaks, details = find_peaks(-v, prominence=prominence, wlen=wlen) i = peaks[np.argmax(details['prominences'])] return i peaks = [df['t'].iloc[arg_most_prominent_min(df['y'])] for df in frames] >>> peaks [2.0209774600118764, 2.4932468358014157, 2.5835972003585472, 2.12438578790615] In this case, the peaks found by both methods are the same. Aligning the curves gives: fig, ax = plt.subplots() for t_peak, df in zip(peaks, frames): ax.plot(df['t'] - t_peak, df['y']) Part 2: aligning the time series for numeric operations Having found the anchor times and plotted the time series by shifting the x-axis accordingly, suppose now that we want to align all the time series, for example to somehow compare them to one another (e.g.: differences, correlation, etc.). In this example we made up, the time samples are not equidistant and all series have their own sampling. We can use resample() to achieve our goal. Let us convert the frames into actual time series, transforming the column t (supposed in seconds) into a DateTimeIndex, after shifting the time using the previously found t_peak and using an arbitrary "0" date: frames = [ pd.Series( df['y'].values, index=pd.Timestamp(0) + (df['t'] - t_peak) * pd.Timedelta(1, 's') ) for t_peak, df in zip(peaks, frames)] >>> frames[0] t 1969-12-31 23:59:58.171107267 11.244308 1969-12-31 23:59:58.421423545 12.387291 1969-12-31 23:59:58.632390727 13.268186 1969-12-31 23:59:58.823099841 13.942224 1969-12-31 23:59:58.971379021 14.359900 ... 1970-01-01 00:00:14.022717327 10.422229 1970-01-01 00:00:14.227996854 9.504693 1970-01-01 00:00:14.235034496 9.489011 1970-01-01 00:00:14.525163506 8.388377 1970-01-01 00:00:14.526806922 8.383366 Length: 100, dtype: float64 At this point, the sampling is still uneven, so we use resample to get a fixed frequency. One strategy is to oversample and interpolate: frames = [df.resample('100ms').mean().interpolate() for df in frames] for df in frames: df.plot() At this point, we can compare the Series. Here are the pairwise differences and correlations: fig, axes = plt.subplots(nrows=len(frames), ncols=len(frames), figsize=(10, 5)) for axrow, a in zip(axes, frames): for ax, b in zip(axrow, frames): (b-a).plot(ax=ax) ax.set_title(fr'$\rho = {b.corr(a):.3f}$') ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) plt.tight_layout()
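Tying this back to the CSV files in the question (editorial sketch; the column names 'Zeit' and 'Kanal A' and the read_csv options are taken from the question, and the anchor here is simply the global minimum of each curve):
import pandas as pd
import matplotlib.pyplot as plt

files = ['data1.csv', 'data2.csv', 'data3.csv', 'data4.csv']
fig, ax = plt.subplots()
for f in files:
    df = pd.read_csv(f, delimiter=";", decimal=",")
    t_peak = df.loc[df['Kanal A'].idxmin(), 'Zeit']   # time of the "big drop"
    ax.plot(df['Zeit'] - t_peak, df['Kanal A'], label=f)
ax.legend()
plt.show()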
Overlay Graphs at same point
I want to overlay some graphs out of CSV data (two datasets). The graph I got from my dataset is shown down below. Is there any way to plot those datasets over specific points? I would like to overlay these plots by using the anchor of the "big drop" to compare them in a better way. The code used: import pandas as pd import matplotlib.pyplot as plt # Read the data data1 = pd.read_csv('data1.csv', delimiter=";", decimal=",") data2 = pd.read_csv('data2.csv', delimiter=";", decimal=",") data3 = pd.read_csv('data3.csv', delimiter=";", decimal=",") data4 = pd.read_csv('data4.csv', delimiter=";", decimal=",") # Plot the data plt.plot(data1['Zeit'], data1['Kanal A']) plt.plot(data2['Zeit'], data2['Kanal A']) plt.plot(data3['Zeit'], data3['Kanal A']) plt.plot(data4['Zeit'], data4['Kanal A']) plt.show() plt.close() I would like to share you some data here: Link to data
[ "Part 1: Anchor times\nA simple way is to find the times of interest (lowest point) in each frame, then plot each series with x=t - t_peak instead of x=t. Two ways come to mind to find the desired anchor points:\n\nSimply using the global minimum (in your plots, that would work fine), or\nUsing the most prominent local minimum, either from first principles, or using scipy's find_peaks().\n\nBut first of all, let us attempt to build a reproducible example:\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\ndef make_sample(t_peak, tmax_approx=17.5, n=100):\n # uneven times\n t = np.random.uniform(0, 2*tmax_approx/n, n).cumsum()\n y = -1 / (0.1 + 2 * np.abs(t - t_peak))\n trend = 4 * np.random.uniform(-1, 1) / n\n level = np.random.uniform(10, 12)\n y += np.random.normal(trend, 1/n, n).cumsum() + level\n return pd.DataFrame({'t': t, 'y': y})\n\npoi = [2, 2.48, 2.6, 2.1]\n\nnp.random.seed(0)\nframes = [make_sample(t_peak) for t_peak in poi]\n\nplt.rcParams['figure.figsize'] = (6,2)\nfig, ax = plt.subplots()\nfor df in frames:\n ax.plot(*df.values.T)\n\n\nIn this case, we made the problem maximally inconvenient by giving each time series its own, independent, unevenly distributed time sampling.\nNow, retrieving the \"maximum drop\" by global minimum:\npeaks = [df.loc[df['y'].idxmin(), 't'] for df in frames]\n>>> peaks\n[2.0209774600118764, 2.4932468358014157, 2.5835972003585472, 2.12438578790615]\n\nfig, ax = plt.subplots()\nfor t_peak, df in zip(peaks, frames):\n ax.plot(df['t'] - t_peak, df['y'])\n\n\nBut imagine a case where the global minimum is not suitable. For example, add a large sine wave to each series:\nframes = [df.assign(y=df['y'] + 5 * np.sin(df['t'])) for df in frames]\n\n# just plotting the first series\ndf = frames[0]\nplt.plot(*df.values.T)\n\n\nClearly, there are several local minima, and the one we want (\"sharpest drop\") is not the global one.\nA simple way to find the desired sharpest drop time is by looking at the difference from each point to its two neighbors:\ndef arg_steepest_min(v):\n # simply find the minimum that is most separated from the surrounding points\n diff = np.diff(v)\n i = np.argmin(diff[:-1] - diff[1:]) + 1\n return i\n\npeaks = [df['t'].iloc[arg_steepest_min(df['y'])] for df in frames]\n>>> peaks\n[2.0209774600118764, 2.4932468358014157, 2.5835972003585472, 2.12438578790615]\n\n# just plotting the first curve and the peak found\ndf = frames[0]\nplt.plot(*df.values.T)\nplt.plot(*df.iloc[arg_steepest_min(df['y'])].T, 'x')\n\n\nThere are more complex cases where you want to bring the full power of find_peaks(). Here is an example that uses the most prominent minimum, using a certain number of samples for neighborhood:\nfrom scipy.signal import find_peaks, peak_prominences\n\ndef arg_most_prominent_min(v, prominence=1, wlen=10):\n peaks, details = find_peaks(-v, prominence=prominence, wlen=wlen)\n i = peaks[np.argmax(details['prominences'])]\n return i\n\npeaks = [df['t'].iloc[arg_most_prominent_min(df['y'])] for df in frames]\n>>> peaks\n[2.0209774600118764, 2.4932468358014157, 2.5835972003585472, 2.12438578790615]\n\nIn this case, the peaks found by both methods are the same. 
Aligning the curves gives:\nfig, ax = plt.subplots()\nfor t_peak, df in zip(peaks, frames):\n ax.plot(df['t'] - t_peak, df['y'])\n\n\nPart 2: aligning the time series for numeric operations\nHaving found the anchor times and plotted the time series by shifting the x-axis accordingly, suppose now that we want to align all the time series, for example to somehow compare them to one another (e.g.: differences, correlation, etc.). In this example we made up, the time samples are not equidistant and all series have their own sampling.\nWe can use resample() to achieve our goal. Let us convert the frames into actual time series, transforming the column t (supposed in seconds) into a DateTimeIndex, after shifting the time using the previously found t_peak and using an arbitrary \"0\" date:\nframes = [\n pd.Series(\n df['y'].values,\n index=pd.Timestamp(0) + (df['t'] - t_peak) * pd.Timedelta(1, 's')\n ) for t_peak, df in zip(peaks, frames)]\n>>> frames[0]\nt\n1969-12-31 23:59:58.171107267 11.244308\n1969-12-31 23:59:58.421423545 12.387291\n1969-12-31 23:59:58.632390727 13.268186\n1969-12-31 23:59:58.823099841 13.942224\n1969-12-31 23:59:58.971379021 14.359900\n ... \n1970-01-01 00:00:14.022717327 10.422229\n1970-01-01 00:00:14.227996854 9.504693\n1970-01-01 00:00:14.235034496 9.489011\n1970-01-01 00:00:14.525163506 8.388377\n1970-01-01 00:00:14.526806922 8.383366\nLength: 100, dtype: float64\n\nAt this point, the sampling is still uneven, so we use resample to get a fixed frequency. One strategy is to oversample and interpolate:\nframes = [df.resample('100ms').mean().interpolate() for df in frames]\n\nfor df in frames:\n df.plot()\n\n\nAt this point, we can compare the Series. Here are the pairwise differences and correlations:\nfig, axes = plt.subplots(nrows=len(frames), ncols=len(frames), figsize=(10, 5))\nfor axrow, a in zip(axes, frames):\n for ax, b in zip(axrow, frames):\n (b-a).plot(ax=ax)\n ax.set_title(fr'$\\rho = {b.corr(a):.3f}$')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\nplt.tight_layout()\n\n\n" ]
[ 2 ]
[]
[]
[ "matplotlib", "pandas", "python" ]
stackoverflow_0074612930_matplotlib_pandas_python.txt
Q: What do the scipy.stats.binom and scipy.stats.hypergeom functions actually do? I am trying to work with some hypergeometric and binomial random variables, and so I am looking at the scipy.stats functionality. But I'm confused about what the scipy.stats.binom() and scipy.stats.hypergeom() functions actually do. Do they implicitly create a PMF with the given parameters, which we then access with the stats.pmf() function, or do they define a function from the sample space to the numerical quantities we define? The latter is what a random variable actually does, but I haven't passed a sample space to the binom or hypergeom functions, so I'm confused about what they are actually doing. The reference manual doesn't clear things up. Thank you for any help.
A: According to the documentation:
A binomial discrete random variable.
As an instance of the rv_discrete class, binom object inherits from it a collection of generic methods (see below for the full list), and completes them with details specific for this particular distribution.
Some of these methods are pmf(k, n, p, loc=0), median(n, p, loc=0), and std(n, p, loc=0).
Alternatively, the distribution object can be called (as a function) to fix the shape and location. This returns a “frozen” RV object holding the given parameters fixed.
So that
from scipy.stats import binom

n,p = 5, 0.4
rv = binom(n, p)
rv.rvs(size=1000)

binom.rvs(n, p, size=1000)
do the same thing, because you froze the parameters at n,p when you called the constructor function binom.
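To make the same point concrete for both distributions from the question, here is a short editorial sketch (the parameter values are made up for illustration):
from scipy.stats import binom, hypergeom

n, p = 10, 0.3
print(binom.pmf(4, n, p))             # P(X = 4), parameters passed on each call
rv = binom(n, p)                      # "frozen" object with n and p fixed
print(rv.pmf(4), rv.mean(), rv.std())

# hypergeom(M, n, N): population size M, successes in the population n,
# sample size N
rv_h = hypergeom(50, 7, 12)
print(rv_h.pmf(3), rv_h.mean())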
What do the scipy.stats.binom and scipy.stats.hypergeom functions actually do?
I am trying to work with some hypergeometric and binomial random variables, and so I am looking at the scipy.stats functionality. But I'm confused about what the scipy.stats.binom() and scipy.stats.hypergeom() functions actually do. Do they implicitly create a PMF with the given parameters, which we then access with the stats.pmf() function, or do they define a function from the sample space to the numerical quantities we define? The latter is what a random variable actually does, but I haven't passed a sample space to the binom or hypergeom functions, so I'm confused about what they are actually doing. The reference manual doesn't clear things up. Thank you for any help.
[ "According to the documentation:\n\nA binomial discrete random variable.\nAs an instance of the rv_discrete class, binom object inherits from it\na collection of generic methods (see below for the full list), and\ncompletes them with details specific for this particular distribution.\n\nSome of these methods are pmf(k, n, p, loc=0), median(n, p, loc=0), and std(n, p, loc=0).\n\nAlternatively, the distribution object can be called (as a function) to fix the shape and location. This returns a “frozen” RV object holding the given parameters fixed.\n\nSo that\nfrom scipy.stats import binom\n\nn,p = 5, 0.4\nrv = binom(n, p)\nrv.rvs(size=1000)\n\nbinom.rvs(n, p, size=1000)\n\ndo the same thing, because you froze the parameters at n,p when you called the constructor function binom.\n" ]
[ 0 ]
[]
[]
[ "probability_distribution", "python", "scipy", "scipy.stats", "statistics" ]
stackoverflow_0074613606_probability_distribution_python_scipy_scipy.stats_statistics.txt
Q: xml to srt conversion not working after installing pytube I have installed pytube to extract captions from some youtube videos. Both the following code give me the xml captions. from pytube import YouTube yt = YouTube('https://www.youtube.com/watch?v=4ZQQofkz9eE') caption = yt.captions['a.en'] print(caption.xml_captions) and also as mentioned in the docs yt = YouTube('http://youtube.com/watch?v=2lAe1cqCOXo') caption = yt.captions.get_by_language_code('en') caption.xml_captions But in both cases, I get the xml output and when use print(caption.generate_srt_captions()) I get an error like the following. Can you help on how to extract the srt format? KeyError ~/anaconda3/envs/myenv/lib/python3.6/site-packages/pytube/captions.py in generate_srt_captions(self) 49 recompiles them into the "SubRip Subtitle" format. 50 """ 51 return self.xml_caption_to_srt(self.xml_captions) 52 53 @staticmethod ~/anaconda3/envs/myenv/lib/python3.6/site-packages/pytube/captions.py in xml_caption_to_srt(self, xml_captions) 81 except KeyError: 82 duration = 0.0 83 start = float(child.attrib["start"]) 84 end = start + duration 85 sequence_number = i + 1 # convert from 0-indexed to 1. KeyError: 'start' A: This is a bug in the library itself. Everything below is done in pytube 11.01. In the captions.py file on line 76 replace: for i, child in enumerate(list(root)): to: for i, child in enumerate(list(root.findall('body/p'))): Then on line 83, replace: duration = float(child.attrib["dur"]) to: duration = float(child.attrib["d"]) Then on line 86, replace: start = float(child.attrib["start"]) to: start = float(child.attrib["t"]) If only the number of lines and time will be displayed but no subtitle text, replace line 77: text = child.text or "" to: text = ''.join(child.itertext()).strip() if not text: continue It worked for me, python 3.9, pytube 11.01. Good luck! A: I did some work on the source code of the file captions.py. 
Just replace the entire code of this file with this code: import math import os import time import xml.etree.ElementTree as ElementTree from html import unescape from typing import Dict, Optional from pytube import request from pytube.helpers import safe_filename, target_directory class Caption: def __init__(self, caption_track: Dict): self.url = caption_track.get("baseUrl") name_dict = caption_track['name'] if 'simpleText' in name_dict: self.name = name_dict['simpleText'] else: for el in name_dict['runs']: if 'text' in el: self.name = el['text'] self.code = caption_track["vssId"] self.code = self.code.strip('.') @property def xml_captions(self) -> str: return request.get(self.url) def generate_srt_captions(self) -> str: return self.xml_caption_to_srt(self.xml_captions) @staticmethod def float_to_srt_time_format(d: float) -> str: fraction, whole = math.modf(d/1000) time_fmt = time.strftime("%H:%M:%S,", time.gmtime(whole)) ms = f"{fraction:.3f}".replace("0.", "") return time_fmt + ms def xml_caption_to_srt(self, xml_captions: str) -> str: segments = [] root = ElementTree.fromstring(xml_captions) count_line = 0 for i, child in enumerate(list(root.findall('body/p'))): text = ''.join(child.itertext()).strip() if not text: continue count_line += 1 caption = unescape(text.replace("\n", " ").replace(" ", " "),) try: duration = float(child.attrib["d"]) except KeyError: duration = 0.0 start = float(child.attrib["t"]) end = start + duration try: end2 = float(root.findall('body/p')[i+2].attrib['t']) except: end2 = float(root.findall('body/p')[i].attrib['t']) + duration sequence_number = i + 1 # convert from 0-indexed to 1. line = "{seq}\n{start} --> {end}\n{text}\n".format( seq=count_line, start=self.float_to_srt_time_format(start), end=self.float_to_srt_time_format(end2), text=caption, ) segments.append(line) return "\n".join(segments).strip() def download( self, title: str, srt: bool = True, output_path: Optional[str] = None, filename_prefix: Optional[str] = None, ) -> str: if title.endswith(".srt") or title.endswith(".xml"): filename = ".".join(title.split(".")[:-1]) else: filename = title if filename_prefix: filename = f"{safe_filename(filename_prefix)}{filename}" filename = safe_filename(filename) filename += f" ({self.code})" if srt: filename += ".srt" else: filename += ".xml" file_path = os.path.join(target_directory(output_path), filename) with open(file_path, "w", encoding="utf-8") as file_handle: if srt: file_handle.write(self.generate_srt_captions()) else: file_handle.write(self.xml_captions) return file_path def __repr__(self): return '<Caption lang="{s.name}" code="{s.code}">'.format(s=self) A: # Apparently YouTube changed their captions format. # Here's my version of the function for the function in captions.py I found this good def xml_caption_to_srt(self, xml_captions: str) -> str: """Convert xml caption tracks to "SubRip Subtitle (srt)". :param str xml_captions: XML formatted caption tracks. """ segments = [] root = ElementTree.fromstring(xml_captions) i=0 for child in list(root.iter("body"))[0]: if child.tag == 'p': caption = '' if len(list(child))==0: # instead of 'continue' caption = child.text for s in list(child): if s.tag == 's': caption += ' ' + s.text caption = unescape(caption.replace("\n", " ").replace(" ", " "),) try: duration = float(child.attrib["d"])/1000.0 except KeyError: duration = 0.0 start = float(child.attrib["t"])/1000.0 end = start + duration sequence_number = i + 1 # convert from 0-indexed to 1. 
line = "{seq}\n{start} --> {end}\n{text}\n".format( seq=sequence_number, start=self.float_to_srt_time_format(start), end=self.float_to_srt_time_format(end), text=caption, ) segments.append(line) i += 1 return "\n".join(segments).strip()
xml to srt conversion not working after installing pytube
I have installed pytube to extract captions from some youtube videos. Both the following code give me the xml captions. from pytube import YouTube yt = YouTube('https://www.youtube.com/watch?v=4ZQQofkz9eE') caption = yt.captions['a.en'] print(caption.xml_captions) and also as mentioned in the docs yt = YouTube('http://youtube.com/watch?v=2lAe1cqCOXo') caption = yt.captions.get_by_language_code('en') caption.xml_captions But in both cases, I get the xml output and when use print(caption.generate_srt_captions()) I get an error like the following. Can you help on how to extract the srt format? KeyError ~/anaconda3/envs/myenv/lib/python3.6/site-packages/pytube/captions.py in generate_srt_captions(self) 49 recompiles them into the "SubRip Subtitle" format. 50 """ 51 return self.xml_caption_to_srt(self.xml_captions) 52 53 @staticmethod ~/anaconda3/envs/myenv/lib/python3.6/site-packages/pytube/captions.py in xml_caption_to_srt(self, xml_captions) 81 except KeyError: 82 duration = 0.0 83 start = float(child.attrib["start"]) 84 end = start + duration 85 sequence_number = i + 1 # convert from 0-indexed to 1. KeyError: 'start'
[ "This is a bug in the library itself. Everything below is done in pytube 11.01.\nIn the captions.py file on line 76 replace:\nfor i, child in enumerate(list(root)):\n\nto:\nfor i, child in enumerate(list(root.findall('body/p'))):\n\nThen on line 83, replace:\nduration = float(child.attrib[\"dur\"])\n\nto:\nduration = float(child.attrib[\"d\"])\n\nThen on line 86, replace:\nstart = float(child.attrib[\"start\"])\n\nto:\nstart = float(child.attrib[\"t\"])\n\nIf only the number of lines and time will be displayed but no subtitle text, replace line 77:\ntext = child.text or \"\"\n\nto:\ntext = ''.join(child.itertext()).strip()\nif not text:\n continue\n\nIt worked for me, python 3.9, pytube 11.01. Good luck!\n", "I did some work on the source code of the file captions.py.\nJust replace the entire code of this file with this code:\nimport math\nimport os\nimport time\nimport xml.etree.ElementTree as ElementTree\nfrom html import unescape\nfrom typing import Dict, Optional\n\nfrom pytube import request\nfrom pytube.helpers import safe_filename, target_directory\n\n\nclass Caption:\n\n def __init__(self, caption_track: Dict):\n\n self.url = caption_track.get(\"baseUrl\")\n\n name_dict = caption_track['name']\n if 'simpleText' in name_dict:\n self.name = name_dict['simpleText']\n else:\n for el in name_dict['runs']:\n if 'text' in el:\n self.name = el['text']\n\n self.code = caption_track[\"vssId\"]\n\n self.code = self.code.strip('.')\n\n @property\n def xml_captions(self) -> str:\n\n return request.get(self.url)\n\n def generate_srt_captions(self) -> str:\n\n return self.xml_caption_to_srt(self.xml_captions)\n\n @staticmethod\n def float_to_srt_time_format(d: float) -> str:\n\n fraction, whole = math.modf(d/1000)\n time_fmt = time.strftime(\"%H:%M:%S,\", time.gmtime(whole))\n ms = f\"{fraction:.3f}\".replace(\"0.\", \"\")\n return time_fmt + ms\n\n def xml_caption_to_srt(self, xml_captions: str) -> str:\n\n segments = []\n root = ElementTree.fromstring(xml_captions)\n count_line = 0\n for i, child in enumerate(list(root.findall('body/p'))):\n \n text = ''.join(child.itertext()).strip()\n if not text:\n continue\n count_line += 1\n caption = unescape(text.replace(\"\\n\", \" \").replace(\" \", \" \"),)\n try:\n duration = float(child.attrib[\"d\"])\n except KeyError:\n duration = 0.0\n start = float(child.attrib[\"t\"])\n end = start + duration\n try:\n end2 = float(root.findall('body/p')[i+2].attrib['t'])\n except:\n end2 = float(root.findall('body/p')[i].attrib['t']) + duration\n sequence_number = i + 1 # convert from 0-indexed to 1.\n line = \"{seq}\\n{start} --> {end}\\n{text}\\n\".format(\n seq=count_line,\n start=self.float_to_srt_time_format(start),\n end=self.float_to_srt_time_format(end2),\n text=caption,\n )\n segments.append(line)\n\n return \"\\n\".join(segments).strip()\n\n def download(\n self,\n title: str,\n srt: bool = True,\n output_path: Optional[str] = None,\n filename_prefix: Optional[str] = None,\n ) -> str:\n\n if title.endswith(\".srt\") or title.endswith(\".xml\"):\n filename = \".\".join(title.split(\".\")[:-1])\n else:\n filename = title\n\n if filename_prefix:\n filename = f\"{safe_filename(filename_prefix)}{filename}\"\n\n filename = safe_filename(filename)\n\n filename += f\" ({self.code})\"\n\n if srt:\n filename += \".srt\"\n else:\n filename += \".xml\"\n\n file_path = os.path.join(target_directory(output_path), filename)\n\n with open(file_path, \"w\", encoding=\"utf-8\") as file_handle:\n if srt:\n file_handle.write(self.generate_srt_captions())\n else:\n 
file_handle.write(self.xml_captions)\n\n return file_path\n\n def __repr__(self):\n return '<Caption lang=\"{s.name}\" code=\"{s.code}\">'.format(s=self)\n\n", "# Apparently YouTube changed their captions format.\n\n# Here's my version of the function for the function in captions.py\n\nI found this good\n \n\n def xml_caption_to_srt(self, xml_captions: str) -> str:\n \"\"\"Convert xml caption tracks to \"SubRip Subtitle (srt)\".\n \n :param str xml_captions:\n XML formatted caption tracks.\n \"\"\"\n segments = []\n root = ElementTree.fromstring(xml_captions)\n i=0\n for child in list(root.iter(\"body\"))[0]:\n if child.tag == 'p':\n caption = ''\n if len(list(child))==0:\n # instead of 'continue'\n caption = child.text\n for s in list(child):\n if s.tag == 's':\n caption += ' ' + s.text\n caption = unescape(caption.replace(\"\\n\", \" \").replace(\" \", \" \"),)\n try:\n duration = float(child.attrib[\"d\"])/1000.0\n except KeyError:\n duration = 0.0\n start = float(child.attrib[\"t\"])/1000.0\n end = start + duration\n sequence_number = i + 1 # convert from 0-indexed to 1.\n line = \"{seq}\\n{start} --> {end}\\n{text}\\n\".format(\n seq=sequence_number,\n start=self.float_to_srt_time_format(start),\n end=self.float_to_srt_time_format(end),\n text=caption,\n )\n segments.append(line)\n i += 1\n return \"\\n\".join(segments).strip()\n \n \n\n" ]
[ 4, 3, 0 ]
[]
[]
[ "python", "pytube", "xml" ]
stackoverflow_0068780808_python_pytube_xml.txt
Q: How do i make my discord bot reply to a certain user always with the same string? made my own discord bot using py and trying to solve the issue in the title. This is my current code for responses def get_response(message: str) -> str: p_message = message.lower() if p_message == 'hello': return '' if p_message == 'alex': return '' if p_message == 'benji': return '' if message == 'roll': return str(random.randint(1, 6)) if p_message == '!help': return '`This is a help message that you can modify.`' if p_message == '-': return '' if p_message == 'aja': return '' if p_message == 'christian': return '' This is where i call my responses. And execute the code from above import discord import responses intents = discord.Intents.default() intents.message_content = True client = discord.Client(intents=intents) async def send_message(message, user_message, is_private): try: response = responses.get_response(user_message) await message.author.send(response) if is_private else await message.channel.send(response) except Exception as e: print(e) # @client.event # async def on_message(message): # if message.author.id == 195251214307426305: # await message.channel.send("") # @client.event # async def on_message(message): # if message.author.id == 305280287519145984: # # Sending a message to the channel that the user is in. # await message.channel.send("") def run_discord_bot(): intents = discord.Intents.default() intents.message_content = True client = discord.Client(intents=intents) @client.event async def on_ready(): print(f'{client.user} is now running!') @client.event async def on_message(message): if message.author == client.user: return username = str(message.author) user_message = str(message.content) channel = str(message.channel) print(f'{username} said: "{user_message}" ({channel})') if user_message[0] == '?': user_message = user_message[1:] await send_message(message, user_message, is_private=True) else: await send_message(message, user_message, is_private=False) TOKEN="" with open("token.txt") as file: TOKEN = file.read() client.run(TOKEN) Tried using msg.author.id and its unknown now. More then that i was sadly stunned by this issue and found no solutions online A: I assume you use the discord libary, so this could be a solution to your problem import discord #dependencies client = discord.Client() #create a client object @client.event #bind the function async def on_message(message): if message.author.id == 398238482382: #check for id await message.channel.send("Hey") #send 'hey' to channel where the message was written client.run(yout_token) #starts the bot
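An editorial sketch of how the per-user reply from the answer can be folded into the questioner's intents-based setup (discord.py 2.x); the user ID, reply string, and token are placeholders.

import discord

intents = discord.Intents.default()
intents.message_content = True            # required to read message text in discord.py 2.x
client = discord.Client(intents=intents)

TARGET_USER_ID = 123456789012345678       # placeholder: the user who always gets the same reply

@client.event
async def on_message(message):
    if message.author == client.user:
        return                            # ignore the bot's own messages
    if message.author.id == TARGET_USER_ID:
        await message.channel.send("Hello again!")   # the fixed reply for this user
        return
    # ...otherwise fall through to the normal get_response() handling...

client.run("YOUR_TOKEN")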
How do i make my discord bot reply to a certain user always with the same string?
made my own discord bot using py and trying to solve the issue in the title. This is my current code for responses def get_response(message: str) -> str: p_message = message.lower() if p_message == 'hello': return '' if p_message == 'alex': return '' if p_message == 'benji': return '' if message == 'roll': return str(random.randint(1, 6)) if p_message == '!help': return '`This is a help message that you can modify.`' if p_message == '-': return '' if p_message == 'aja': return '' if p_message == 'christian': return '' This is where i call my responses. And execute the code from above import discord import responses intents = discord.Intents.default() intents.message_content = True client = discord.Client(intents=intents) async def send_message(message, user_message, is_private): try: response = responses.get_response(user_message) await message.author.send(response) if is_private else await message.channel.send(response) except Exception as e: print(e) # @client.event # async def on_message(message): # if message.author.id == 195251214307426305: # await message.channel.send("") # @client.event # async def on_message(message): # if message.author.id == 305280287519145984: # # Sending a message to the channel that the user is in. # await message.channel.send("") def run_discord_bot(): intents = discord.Intents.default() intents.message_content = True client = discord.Client(intents=intents) @client.event async def on_ready(): print(f'{client.user} is now running!') @client.event async def on_message(message): if message.author == client.user: return username = str(message.author) user_message = str(message.content) channel = str(message.channel) print(f'{username} said: "{user_message}" ({channel})') if user_message[0] == '?': user_message = user_message[1:] await send_message(message, user_message, is_private=True) else: await send_message(message, user_message, is_private=False) TOKEN="" with open("token.txt") as file: TOKEN = file.read() client.run(TOKEN) Tried using msg.author.id and its unknown now. More then that i was sadly stunned by this issue and found no solutions online
[ "I assume you use the discord libary, so this could be a solution to your problem\nimport discord #dependencies\n\nclient = discord.Client() #create a client object\n\[email protected] #bind the function\nasync def on_message(message):\n if message.author.id == 398238482382: #check for id\n await message.channel.send(\"Hey\") #send 'hey' to channel where the message was written\n\nclient.run(yout_token) #starts the bot\n\n" ]
[ 0 ]
[]
[]
[ "bots", "discord", "python" ]
stackoverflow_0074618155_bots_discord_python.txt
Q: How to loop over 2 things in asyncio using aiohttp I am very new to asyncio and REST APIs in general. I had a rudimentary code working using requests but it was very slow (20-30 minutes), and it seems like asyncio and aiohttp are the tools I need to use to improve this situation. What my code does is send a get request to an endpoint which returns a list of jobids and a next_url to continue searching the endpoint for more jobids. I have a quick filter to see if I want to search this job or not, and if I do, I use its jobid to get a list of fileids. I then have a test to see if I want those files, and if I do I download them. When doing this synchronously I do something like this: import json import os from collections import namedtuple from typing import Generator import requests Job = namedtuple("Job", "jobid name owner") File = namedtuple("File", "fileid name path") OWNER_FILTER = 'Drphoton' FILENAME_FILTER = ['filename_a.txt', '.png'] BATCH_SIZE = 100 JOBS_ENDPOINT = 'https://website.com/jobs/' API_KEY = os.environ.get('MYAPIKEY') def main(): for job in get_all_jobs(JOBS_ENDPOINT): if not keep_job(job): continue for file in get_all_files(job): if keep_file(file): download(file) def get_all_jobs(job_url: str) -> Generator[Job, None, None]: """Gets basic info about all jobs. Lazily returns a Job object. """ while job_url is not None: result = requests.get(job_url, params={"page_size": BATCH_SIZE}, headers={"Authorization": f"Token {API_KEY}"}) if not (result.status_code == 200 and result.content): continue json_data = json.loads(result.content) results = json_data['results'] for job_result in results: yield Job(jobid=job_result['id'], status=job_result['status'], owner=job_result['owner'], name=job_result['name']) job_url = json_data['next'] def get_all_files(job: str, job_url: str) -> Generator[File, None, None]: """Gets basic info about all files within a job. Lazily returns a File object. """ file_url = f"{job_url}{job.jobid}/files/" while file_url is not None: result = requests.get(job_url, params={"page_size": BATCH_SIZE}, headers={"Authorization": f"Token {API_KEY}"}) if not (result.status_code == 200 and result.content): continue json_data = json.loads(result.content) results = json_data['results'] for file_result in results: yield Job(jobid=job_result['id'], owner=job_result['owner'], name=job_result['name']) file_url = json_data['next'] def keep_job(job: Job) -> bool: """Tests if we should keep the job based on certain filters, and returns a boolean""" if OWNER_FILTER not in job.owner: return False return True def keep_file(file: File) -> bool: """Tests if we should keep the job based on certain filters, and returns a boolean""" if any(NAME in file.name for NAME in FILENAME_FILTER): return True else: return False This works, but is crazy slow for a relatively small number of jobs and files. I tried to adapt this to async code using the tools mentioned above, but it doesn't seem to actually run asynchronously. I'm a bit at a loss on how to improve it. I feel like what's tripping me up is that part of the json payload is a link to the "next page", so I can't get ALL of the job names and search them all at once --- I need to somehow asynchronously look up the next page of jobs while the files-for-loop is looking up file names, but I can't figure out how to do that cleanly. In fact, it seems I'm not even looking up the filenames for multiple jobs simultaneously either! Here is my best attempt at an async version of the code above. It runs but isn't much faster. 
I'm mainly highlighting the differences to above to show you how I've implemented it: import aiohttp async def main(): async with aiohttp.ClientSession() as session: async for job in get_all_jobs(job_url=JOBS_ENDPOINT, session=session): if keep_job(job): continue async for file in get_all_files(job, job_url=JOBS_ENDPOINT, session=session): if keep_file(file): download(file) async def get_all_jobs(job_url: str, session: aiohttp.ClientSession) -> Generator[list[Job], None, None]: while job_url is not None: async with session.get(job_url, params={"page_size": BATCH_SIZE}, headers={"Authorization": f"Token {API_KEY}"}) as resp: if not (resp.status == 200 and resp.content): continue json_data = await resp.json() for job in json_data['results']: yield Job(jobid=job['id'], owner=job['owner'], name=job['name']) job_url = json_data['next'] and something similar for get_all_files(). If I had to pinpoint my misunderstanding, it feels like async for doesn't really work asynchronously the way I would expect. My understanding is that anything called using await or async for is added to the event loop, and python is smart enough to keep track of all of these items and jump to the first that completes, carrying on from there. Instead, it really seems like it loops through each job 1 at a time instead of simultaneously, going through each of their filenames 1 at a time. I expect my number of jobs and files to grow a lot in the coming year, so this problem is only going to get worse. Thanks in Advance! A: One solution might be creating few tasks which wait for data (job_url, job_id) from asyncio.Queue and perform the get_all_files, downloading, ... tasks independent of each other. A pseudocode: import asyncio import aiohttp queue = asyncio.Queue() async def get_all_files_task(): async with aiohttp.ClientSession() as session: while True: job_url, job_id = await queue.get() # get all files using aiohttp: # async with session.get(...) # perform keep file, download file... await asyncio.sleep(0.5) # task is done, wait for another queue.task_done() print("Task done.") async def main(): tasks = [asyncio.create_task(get_all_files_task()) for _ in range(10)] async with aiohttp.ClientSession() as session: # here you will put into a queue job_url, job_id from get_all_jobs # async for job in get_all_jobs(job_url=JOBS_ENDPOINT, session=session): for i in range(20): queue.put_nowait((100, 200)) # wait for queue to be empty await queue.join() # cancel the get_all_files_tasks for t in tasks: t.cancel() try: await t except asyncio.CancelledError: pass asyncio.run(main())
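One more option beyond the queue-based answer: keep the questioner's async generators but give each kept job its own task, so file listings and downloads for different jobs overlap. This is a rough sketch that reuses the names defined in the question (get_all_jobs, get_all_files, keep_job, keep_file, download, JOBS_ENDPOINT).

import asyncio
import aiohttp

async def process_job(job, session):
    # each kept job walks its own file pages independently of the others
    async for file in get_all_files(job, job_url=JOBS_ENDPOINT, session=session):
        if keep_file(file):
            download(file)

async def main():
    async with aiohttp.ClientSession() as session:
        tasks = []
        async for job in get_all_jobs(job_url=JOBS_ENDPOINT, session=session):
            if keep_job(job):
                tasks.append(asyncio.create_task(process_job(job, session)))
        await asyncio.gather(*tasks)   # jobs now run concurrently instead of one at a time

asyncio.run(main())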
How to loop over 2 things in asyncio using aiohttp
I am very new to asyncio and REST APIs in general. I had a rudimentary code working using requests but it was very slow (20-30 minutes), and it seems like asyncio and aiohttp are the tools I need to use to improve this situation. What my code does is send a get request to an endpoint which returns a list of jobids and a next_url to continue searching the endpoint for more jobids. I have a quick filter to see if I want to search this job or not, and if I do, I use its jobid to get a list of fileids. I then have a test to see if I want those files, and if I do I download them. When doing this synchronously I do something like this: import json import os from collections import namedtuple from typing import Generator import requests Job = namedtuple("Job", "jobid name owner") File = namedtuple("File", "fileid name path") OWNER_FILTER = 'Drphoton' FILENAME_FILTER = ['filename_a.txt', '.png'] BATCH_SIZE = 100 JOBS_ENDPOINT = 'https://website.com/jobs/' API_KEY = os.environ.get('MYAPIKEY') def main(): for job in get_all_jobs(JOBS_ENDPOINT): if not keep_job(job): continue for file in get_all_files(job): if keep_file(file): download(file) def get_all_jobs(job_url: str) -> Generator[Job, None, None]: """Gets basic info about all jobs. Lazily returns a Job object. """ while job_url is not None: result = requests.get(job_url, params={"page_size": BATCH_SIZE}, headers={"Authorization": f"Token {API_KEY}"}) if not (result.status_code == 200 and result.content): continue json_data = json.loads(result.content) results = json_data['results'] for job_result in results: yield Job(jobid=job_result['id'], status=job_result['status'], owner=job_result['owner'], name=job_result['name']) job_url = json_data['next'] def get_all_files(job: str, job_url: str) -> Generator[File, None, None]: """Gets basic info about all files within a job. Lazily returns a File object. """ file_url = f"{job_url}{job.jobid}/files/" while file_url is not None: result = requests.get(job_url, params={"page_size": BATCH_SIZE}, headers={"Authorization": f"Token {API_KEY}"}) if not (result.status_code == 200 and result.content): continue json_data = json.loads(result.content) results = json_data['results'] for file_result in results: yield Job(jobid=job_result['id'], owner=job_result['owner'], name=job_result['name']) file_url = json_data['next'] def keep_job(job: Job) -> bool: """Tests if we should keep the job based on certain filters, and returns a boolean""" if OWNER_FILTER not in job.owner: return False return True def keep_file(file: File) -> bool: """Tests if we should keep the job based on certain filters, and returns a boolean""" if any(NAME in file.name for NAME in FILENAME_FILTER): return True else: return False This works, but is crazy slow for a relatively small number of jobs and files. I tried to adapt this to async code using the tools mentioned above, but it doesn't seem to actually run asynchronously. I'm a bit at a loss on how to improve it. I feel like what's tripping me up is that part of the json payload is a link to the "next page", so I can't get ALL of the job names and search them all at once --- I need to somehow asynchronously look up the next page of jobs while the files-for-loop is looking up file names, but I can't figure out how to do that cleanly. In fact, it seems I'm not even looking up the filenames for multiple jobs simultaneously either! Here is my best attempt at an async version of the code above. It runs but isn't much faster. 
I'm mainly highlighting the differences to above to show you how I've implemented it: import aiohttp async def main(): async with aiohttp.ClientSession() as session: async for job in get_all_jobs(job_url=JOBS_ENDPOINT, session=session): if keep_job(job): continue async for file in get_all_files(job, job_url=JOBS_ENDPOINT, session=session): if keep_file(file): download(file) async def get_all_jobs(job_url: str, session: aiohttp.ClientSession) -> Generator[list[Job], None, None]: while job_url is not None: async with session.get(job_url, params={"page_size": BATCH_SIZE}, headers={"Authorization": f"Token {API_KEY}"}) as resp: if not (resp.status == 200 and resp.content): continue json_data = await resp.json() for job in json_data['results']: yield Job(jobid=job['id'], owner=job['owner'], name=job['name']) job_url = json_data['next'] and something similar for get_all_files(). If I had to pinpoint my misunderstanding, it feels like async for doesn't really work asynchronously the way I would expect. My understanding is that anything called using await or async for is added to the event loop, and python is smart enough to keep track of all of these items and jump to the first that completes, carrying on from there. Instead, it really seems like it loops through each job 1 at a time instead of simultaneously, going through each of their filenames 1 at a time. I expect my number of jobs and files to grow a lot in the coming year, so this problem is only going to get worse. Thanks in Advance!
[ "One solution might be creating few tasks which wait for data (job_url, job_id) from asyncio.Queue and perform the get_all_files, downloading, ... tasks independent of each other.\nA pseudocode:\nimport asyncio\nimport aiohttp\n\nqueue = asyncio.Queue()\n\n\nasync def get_all_files_task():\n async with aiohttp.ClientSession() as session:\n while True:\n job_url, job_id = await queue.get()\n\n # get all files using aiohttp:\n # async with session.get(...)\n\n # perform keep file, download file...\n\n await asyncio.sleep(0.5)\n\n # task is done, wait for another\n queue.task_done()\n print(\"Task done.\")\n\n\nasync def main():\n\n tasks = [asyncio.create_task(get_all_files_task()) for _ in range(10)]\n\n async with aiohttp.ClientSession() as session:\n\n # here you will put into a queue job_url, job_id from get_all_jobs\n # async for job in get_all_jobs(job_url=JOBS_ENDPOINT, session=session):\n for i in range(20):\n queue.put_nowait((100, 200))\n\n # wait for queue to be empty\n await queue.join()\n\n # cancel the get_all_files_tasks\n for t in tasks:\n t.cancel()\n try:\n await t\n except asyncio.CancelledError:\n pass\n\n\nasyncio.run(main())\n\n" ]
[ 0 ]
[]
[]
[ "aiohttp", "python", "python_asyncio" ]
stackoverflow_0074617467_aiohttp_python_python_asyncio.txt
Q: Is there a way for two People to work on one Jupyter Notebook A friend of mine and me are doing some field research for our Physics degree. And we are using jupyter notebook to analyse the data we get. We usually sit together working at two different copies of the same file that in the end will be drag and dropped together using jupyter lab. This is obviously not ideal, so i thought is there any way for just two people to work on one document in Jupyter, sadly Google Colab has been Deprecated and CoCalc is expensive. So i thought id ask here if there is a way to make one person run a Jupyter notebook and the other one just being able to access it over peer to peer aswell so we could write in the same file at the same time. Do you guys know something that makes me do this maybe a workaround that i can do. Thanks for answers in advance A: CoCalc is expensive. Fortunately, we also provide a complete free easy to install open source version of CoCalc, which you can run on any computer that supports Docker. For example, here's how to run it on Google cloud. (I have put too many years of my life into making realtiime collaboration work for Jupyter via CoCalc... In any case, the open source code has been battle tested in production for a while now and is working well finally. I hope it can solve your problem...) A: You can upload your notebook to Deepnote. It provides a hosted environment, where you and your colleague can connect at the same time and work on the same notebook in real-time (the same way you'd do in Google Docs). Colab is also good, but writing at the same time will result in conflicts. A: Notebook itself doesn't support to collaborate simultaneously, but you can use GitHub to manage your python script and upload it into Colab separately. This way Github can help manage the file history and solve the conflicts. A: JupyterLab 3.1.0a7 introduced real time collaboration. There is a screencast showing it in action. Key thing to note is the new top-level menu item called Share, to the right of Settings & Help. You can click on launch binder here or here to try it now. "Once you see the JupyterLab interface, there's a new top-level menu item called "Share"; click that, grab and share that URL, and you're done!"-SOURCE: Step #5 here There's a gist here that seems to be updated regularly with how to activate the feature. There's a detailed walk-through here if you want to add the ability into your own repositories that can launch via MyBinder.org. Although if that repo falls behind the gist, you'll probably want to consult the gist for the current best practices once you have the idea from the detailed walk-through. Closely related question with an answer by @krassowski, is here. You may want to look there for some additional details. A: While you can use github for this it can get messy, many people clear output cells when committing to git to avoid conflict issues. Which would defeat the object of your review work. You should try Curvenote (which we're building for that reason) it doesn't offer compute as its a collaborative writing tool, works on top of Jupyter via a chrome extenson and gives you real time versioning, commenting and diffs. A: Google Colab has been Deprecated and CoCalc is expensive Noteable.io is 100% free for all users including storage, compute, RAM. 
For your purposes, it will be ideal as you will get Google Drive like collaboration (commenting, @mentioning, Annotating data points), versioning, sharing, interactive visualizations, choice of using Python and SQL in the same notebook and a ton of other features. Here are good example notebooks on Noteable: Climate Change: An analysis of Dew Point for the city of Toronto Healthcare Sector Employee Attrition Exploratory Data Analysis Exploratory Data Analysis Using SQL and Python - Online Retailer Orders
Is there a way for two People to work on one Jupyter Notebook
A friend of mine and I are doing some field research for our Physics degree, and we are using Jupyter Notebook to analyse the data we get. We usually sit together working on two different copies of the same file, which in the end get dragged and dropped together using JupyterLab. This is obviously not ideal, so I wondered: is there any way for just two people to work on one document in Jupyter? Sadly Google Colab has been deprecated and CoCalc is expensive. So I thought I'd ask here whether there is a way for one person to run a Jupyter notebook and the other to access it peer to peer as well, so we could write in the same file at the same time. Do you know of anything that lets me do this, or maybe a workaround? Thanks for any answers in advance
[ "\nCoCalc is expensive.\n\nFortunately, we also provide a complete free easy to install open source version of CoCalc, which you can run on any computer that supports Docker. For example, here's how to run it on Google cloud.\n(I have put too many years of my life into making realtiime collaboration work for Jupyter via CoCalc... In any case, the open source code has been battle tested in production for a while now and is working well finally. I hope it can solve your problem...)\n", "You can upload your notebook to Deepnote. It provides a hosted environment, where you and your colleague can connect at the same time and work on the same notebook in real-time (the same way you'd do in Google Docs). \nColab is also good, but writing at the same time will result in conflicts. \n", "Notebook itself doesn't support to collaborate simultaneously, but you can use GitHub to manage your python script and upload it into Colab separately. This way Github can help manage the file history and solve the conflicts.\n", "JupyterLab 3.1.0a7 introduced real time collaboration.\nThere is a screencast showing it in action.\nKey thing to note is the new top-level menu item called Share, to the right of Settings & Help.\n\nYou can click on launch binder here or here to try it now.\n\n\"Once you see the JupyterLab interface, there's a new top-level menu item called \"Share\"; click that, grab and share that URL, and you're done!\"-SOURCE: Step #5 here\n\nThere's a gist here that seems to be updated regularly with how to activate the feature.\nThere's a detailed walk-through here if you want to add the ability into your own repositories that can launch via MyBinder.org. Although if that repo falls behind the gist, you'll probably want to consult the gist for the current best practices once you have the idea from the detailed walk-through.\n\n\nClosely related question with an answer by @krassowski, is here. You may want to look there for some additional details.\n", "While you can use github for this it can get messy, many people clear output cells when committing to git to avoid conflict issues. Which would defeat the object of your review work.\nYou should try Curvenote (which we're building for that reason) it doesn't offer compute as its a collaborative writing tool, works on top of Jupyter via a chrome extenson and gives you real time versioning, commenting and diffs.\n", "\nGoogle Colab has been Deprecated and CoCalc is expensive\n\nNoteable.io is 100% free for all users including storage, compute, RAM. For your purposes, it will be ideal as you will get Google Drive like collaboration (commenting, @mentioning, Annotating data points), versioning, sharing, interactive visualizations, choice of using Python and SQL in the same notebook and a ton of other features.\nHere are good example notebooks on Noteable:\n\nClimate Change: An analysis of Dew Point for the city of Toronto\nHealthcare Sector Employee Attrition Exploratory Data Analysis\nExploratory Data Analysis Using SQL and Python - Online Retailer Orders\n\n" ]
[ 9, 5, 2, 2, 1, 0 ]
[]
[]
[ "collaboration", "jupyter_notebook", "python" ]
stackoverflow_0058903507_collaboration_jupyter_notebook_python.txt
Q: pyenv shell errors after uninstalling pyenv I've recently been diagnosing why my pyenv installation is giving me errors in the shell. After trying everything I could find, I thought I would remove it from my system. However, after uninstalling, I am still seeing pyenv shell when loading a new terminal and zsh: command not found: pyenv Steps I took: rm -rf $(pyenv root) brew uninstall pyenv I have checked my .zshrc,.zprofile, .zlogin and there are no references to pyenv in any of them I am using homebrew on Mac with oh-my-zsh(no pyenv plugin) At a loss for where to look to remove this. Thanks A: Turns out it was a VSCode thing. Opening a new terminal outside of VSCode didn't give the errors. It's due to the Python extension that was installed trying to activate an environment: "python.terminal.activateEnvironment": true
pyenv shell errors after uninstalling pyenv
I've recently been diagnosing why my pyenv installation is giving me errors in the shell. After trying everything I could find, I thought I would remove it from my system. However, after uninstalling, I am still seeing pyenv shell when loading a new terminal and zsh: command not found: pyenv Steps I took: rm -rf $(pyenv root) brew uninstall pyenv I have checked my .zshrc,.zprofile, .zlogin and there are no references to pyenv in any of them I am using homebrew on Mac with oh-my-zsh(no pyenv plugin) At a loss for where to look to remove this. Thanks
[ "Turns out it was a VSCode thing. Opening a new terminal outside of VSCode didn't give the errors. It's due to the Python extension that was installed trying to activate an environment:\n\"python.terminal.activateEnvironment\": true\n\n" ]
[ 1 ]
[]
[]
[ "homebrew", "pyenv", "python", "shell", "zsh" ]
stackoverflow_0074618249_homebrew_pyenv_python_shell_zsh.txt
Q: using if-elif-else statments for adding two integers I have just starting learning python and as I creating this program, which asks user to input two numbers, which then adds them to together using a simple if-elif-else statement, however the else part of the code just seems to not work if, an user types out the six, for example, in words instead of the number. num_1 = int(input("Enter the first number: ")) num_2 = int(input("Enter the second number: ")) Total = num_1 + num_2 print("The total is: ",Total) if num_1 > num_2: print("num_1 is greater then num_2") elif num_2 > num_1: print("num_2 is greater then num_1") elif num_1 == num_2: print("Equal") else: if num_1 == str: if num_2 == str: print("invalid") A: This should be what you are looking for: try: num_1 = int(input("Enter the first number: ")) num_2 = int(input("Enter the second number: ")) except ValueError: print("invalid") exit() Total = num_1 + num_2 print("The total is: ", Total) if num_1 > num_2: print("num_1 is greater then num_2") elif num_2 > num_1: print("num_2 is greater then num_1") else: print("Equal") A: In your first two lines you’re calling int() on a string in the situation you’re describing. This won’t work, and your code will stop running here. What you want is probably something call a try-catch statement.
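Building on the try/except answer, here is a small sketch that keeps re-prompting until both inputs really are integers. The read_int helper is made up for illustration; it is one common way to handle invalid input, not the only one.

def read_int(prompt):
    # loop until int() succeeds instead of crashing or exiting on the first bad input
    while True:
        try:
            return int(input(prompt))
        except ValueError:
            print("invalid, please type a whole number")

num_1 = read_int("Enter the first number: ")
num_2 = read_int("Enter the second number: ")
print("The total is:", num_1 + num_2)

if num_1 > num_2:
    print("num_1 is greater than num_2")
elif num_2 > num_1:
    print("num_2 is greater than num_1")
else:
    print("Equal")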
using if-elif-else statements for adding two integers
I have just started learning Python and am creating this program, which asks the user to input two numbers and then adds them together using a simple if-elif-else statement. However, the else part of the code just does not seem to work if a user types out six, for example, in words instead of as a number. num_1 = int(input("Enter the first number: ")) num_2 = int(input("Enter the second number: ")) Total = num_1 + num_2 print("The total is: ",Total) if num_1 > num_2: print("num_1 is greater then num_2") elif num_2 > num_1: print("num_2 is greater then num_1") elif num_1 == num_2: print("Equal") else: if num_1 == str: if num_2 == str: print("invalid")
[ "This should be what you are looking for:\ntry:\n num_1 = int(input(\"Enter the first number: \"))\n num_2 = int(input(\"Enter the second number: \"))\nexcept ValueError:\n print(\"invalid\")\n exit()\nTotal = num_1 + num_2\nprint(\"The total is: \", Total)\n\nif num_1 > num_2:\n print(\"num_1 is greater then num_2\")\nelif num_2 > num_1:\n print(\"num_2 is greater then num_1\")\nelse:\n print(\"Equal\")\n\n", "In your first two lines you’re calling int() on a string in the situation you’re describing. This won’t work, and your code will stop running here. What you want is probably something call a try-catch statement.\n" ]
[ 1, 0 ]
[]
[]
[ "if_statement", "python" ]
stackoverflow_0074618168_if_statement_python.txt
Q: File not found error when launching a subprocess containing piped commands I need to run the command date | grep -o -w '"+tz+"'' | wc -w using Python on my localhost. I am using subprocess module for the same and using the check_output method as I need to capture the output for the same. However it is throwing me an error : Traceback (most recent call last): File "test.py", line 47, in <module> check_timezone() File "test.py", line 40, in check_timezone count = subprocess.check_output(command) File "/usr/lib/python2.7/subprocess.py", line 537, in check_output process = Popen(stdout=PIPE, *popenargs, **kwargs) File "/usr/lib/python2.7/subprocess.py", line 679, in __init__ errread, errwrite) File "/usr/lib/python2.7/subprocess.py", line 1249, in _execute_child raise child_exception- OSError: [Errno 2] No such file or directory A: You have to add shell=True to execute a shell command. check_output is trying to find an executable called: date | grep -o -w '"+tz+"'' | wc -w and he cannot find it. (no idea why you removed the essential information from the error message). See the difference between: >>> subprocess.check_output('date | grep 1') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python3.4/subprocess.py", line 603, in check_output with Popen(*popenargs, stdout=PIPE, **kwargs) as process: File "/usr/lib/python3.4/subprocess.py", line 848, in __init__ restore_signals, start_new_session) File "/usr/lib/python3.4/subprocess.py", line 1446, in _execute_child raise child_exception_type(errno_num, err_msg) FileNotFoundError: [Errno 2] No such file or directory: 'date | grep 1' And: >>> subprocess.check_output('date | grep 1', shell=True) b'gio 19 giu 2014, 14.15.35, CEST\n' Read the documentation about the Frequently Used Arguments for more information about the shell argument and how it changes the interpretation of the other arguments. Note that you should try to avoid using shell=True since spawning a shell can be a security hazard (even if you do not execute untrusted input attacks like Shellshock can still be performed!). The documentation for the subprocess module has a little section about replacing the shell pipeline. 
You can do so by spawning the two processes in python and use subprocess.PIPE: date_proc = subprocess.Popen(['date'], stdout=subprocess.PIPE) grep_proc = subprocess.check_output(['grep', '1'], stdin=date_proc.stdout, stdout=subprocess.PIPE) date_proc.stdout.close() output = grep_proc.communicate()[0] You can write some simple wrapper function to easily define pipelines: import subprocess from shlex import split from collections import namedtuple from functools import reduce proc_output = namedtuple('proc_output', 'stdout stderr') def pipeline(starter_command, *commands): if not commands: try: starter_command, *commands = starter_command.split('|') except AttributeError: pass starter_command = _parse(starter_command) starter = subprocess.Popen(starter_command, stdout=subprocess.PIPE) last_proc = reduce(_create_pipe, map(_parse, commands), starter) return proc_output(*last_proc.communicate()) def _create_pipe(previous, command): proc = subprocess.Popen(command, stdin=previous.stdout, stdout=subprocess.PIPE) previous.stdout.close() return proc def _parse(cmd): try: return split(cmd) except Exception: return cmd With this in place you can write pipeline('date | grep 1') or pipeline('date', 'grep 1') or pipeline(['date'], ['grep', '1']) A: The most common cause of FileNotFound with subprocess, in my experience, is the use of spaces in your command. If you have just a single command (not a pipeline, and no redirection, wildcards, etc), use a list instead. # Wrong, even with a valid command string subprocess.run(['grep -o -w "+tz+"']) # Fixed; notice also subprocess.run(["grep", "-o", "-w", '"+tz+"']) This change results in no more FileNotFound errors, and is a nice solution if you got here searching for that exception with a simpler command. If you need a pipeline or other shell features, the simple fix is to add shell=True: subprocess.run( '''date | grep -o -w '"+tz+"'' | wc -w''', shell=True) However, if you are using python 3.5 or greater, try using this approach: import subprocess a = subprocess.run(["date"], stdout=subprocess.PIPE) print(a.stdout.decode('utf-8')) b = subprocess.run(["grep", "-o", "-w", '"+tz+"'], input=a.stdout, stdout=subprocess.PIPE) print(b.stdout.decode('utf-8')) c = subprocess.run(["wc", "-w"], input=b.stdout, stdout=subprocess.PIPE) print(c.stdout.decode('utf-8')) You should see how one command's output becomes another's input just like using a shell pipe, but you can easily debug each step of the process in python. Using subprocess.run is recommended for python > 3.5, but not available in prior versions. A: The FileNotFoundError happens because - in the absence of shell=True - Python tries to find an executable whose file name is the entire string you are passing in. You need to add shell=True to get the shell to parse and execute the string, or figure out how to rearticulate this command line to avoid requiring a shell. As an aside, the shell programming here is decidedly weird. On any normal system, date will absolutely never output "+tz+" and so the rest of the processing is moot. Further, using wc -w to count the number of output words from grep is unusual. The much more common use case (if you can't simply use grep -c to count the number of matching lines) would be to use wc -l to count lines of output from grep. Anyway, if you can, you want to avoid shell=True; if the intent here is to test the date command, you should probably replace the rest of the shell script with native Python code. 
Pros: The person trying to understand the program only needs to understand Python, not shell script. The script will have fewer external dependencies (here, date) rather than require a Unix-like platform. Cons: Reimplementing standard Unix tools in Python is tiresome and sometimes rather verbose. With that out of the way, if the intent is simply to count how wany times "+tz+" occurs in the output from date, try p = subprocess.run(['date'], capture_output=True, text=True, check=True) result = len(p.stdout.split('"+tz+"'))-1 The keyword argument text=True requires Python 3.7; for compatibility back to earlier Python versions, try the (misnomer) legacy synonym universal_newlines=True. For really old Python versions, maybe fall back to subprocess.check_output(). If you really need the semantics of the -w option of grep, you need to check if the characters adjacent to the match are not alphabetic, and exclude those which are. I'm leaving that as an exercise, and in fact would assume that the original shell script implementation here was not actually correct. (Maybe try re.split(r'(?<=^|\W)"\+tz\+"(?=\W|$)', p.stdout).) In more trivial cases (single command, no pipes, wildcards, redirection, shell builtins, etc) you can use Python's shlex.split() to parse a command into a correctly quoted list of arguments. For example, >>> import shlex >>> shlex.split(r'''one "two three" four\ five 'six seven' eight"'"nine'"'ten''') ['one', 'two three', 'four five', 'six seven', 'eight\'nine"ten'] Notice how the regular string split() is completely unsuitable here; it simply splits on every whitespace character, and doesn't support any sort of quoting or escaping. (But notice also how it boneheadedly just returns a list of tokens from the original input: >>> shlex.split('''date | grep -o -w '"+tz+"' | wc -w''') ['date', '|', 'grep', '-o', '-w', '"+tz+"', '|', 'wc', '-w'] (Even more parenthetically, this isn't exactly the original input, which had a superfluous extra single quote after '"+tz+"'). This is in fact passing | and grep etc as arguments to date, not implementing a shell pipeline! You still have to understand what you are doing.) A: The question already has an answer above but just in case the solutions are not working for you; Please check the path itself and if all the environment variables are set for the process to locate the path. A: what worked for me on python 3.8.10 (inspired by @mightypile solution here: https://stackoverflow.com/a/49986004/12361522), was removed splits of parametres and i had to enable shell, too: this: c = subprocess.run(["wc -w"], input=b.stdout, stdout=subprocess.PIPE, shell=True) instead of: c = subprocess.run(["wc", "-w"], input=b.stdout, stdout=subprocess.PIPE) if anyone wanted to try my solution (at least for v3.8.10), here is mine: i have directory with multiple files of at least 2 file-types (.jpg and others). 
i needed to count specific file type (.jpg) and not all files in the directory, via 1 pipe: ls *.jpg | wc -l so eventually i got it working like here: import subprocess proc1 = subprocess.run(["ls *.jpg"], stdout=subprocess.PIPE, shell=True) proc2 = subprocess.run(['wc -l'], input=proc1.stdout, stdout=subprocess.PIPE, shell=True) print(proc2.stdout.decode()) it would not work with splits: ["ls", "*.jpg"] that would make ls to ignore contraint *.jpg ['wc', '-l'] that would return correct count, but will all 3 outputs and not just one i was after all that would not work without enabled shell shell=True A: I had this error too and what worked for me was setting the line endings of the .sh file - that I was calling with subprocess - to Unix (LF) instead of Windows CRLF.
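As a quick editorial recap of the answers above, the two working patterns for the original pipeline look roughly like this (Python 3.7+ for capture_output and text):

import subprocess

# 1) Let a shell run the whole pipeline (simple, but avoid with untrusted input)
out = subprocess.run('''date | grep -o -w '"+tz+"' | wc -w''',
                     shell=True, capture_output=True, text=True)
print(out.stdout.strip())

# 2) Run only `date` and do the counting in Python, no shell needed
p = subprocess.run(["date"], capture_output=True, text=True, check=True)
print(p.stdout.split().count('"+tz+"'))   # rough stand-in for grep -o -w | wc -w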
File not found error when launching a subprocess containing piped commands
I need to run the command date | grep -o -w '"+tz+"'' | wc -w using Python on my localhost. I am using subprocess module for the same and using the check_output method as I need to capture the output for the same. However it is throwing me an error : Traceback (most recent call last): File "test.py", line 47, in <module> check_timezone() File "test.py", line 40, in check_timezone count = subprocess.check_output(command) File "/usr/lib/python2.7/subprocess.py", line 537, in check_output process = Popen(stdout=PIPE, *popenargs, **kwargs) File "/usr/lib/python2.7/subprocess.py", line 679, in __init__ errread, errwrite) File "/usr/lib/python2.7/subprocess.py", line 1249, in _execute_child raise child_exception- OSError: [Errno 2] No such file or directory
[ "You have to add shell=True to execute a shell command. check_output is trying to find an executable called: date | grep -o -w '\"+tz+\"'' | wc -w and he cannot find it. (no idea why you removed the essential information from the error message).\nSee the difference between:\n>>> subprocess.check_output('date | grep 1')\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/usr/lib/python3.4/subprocess.py\", line 603, in check_output\n with Popen(*popenargs, stdout=PIPE, **kwargs) as process:\n File \"/usr/lib/python3.4/subprocess.py\", line 848, in __init__\n restore_signals, start_new_session)\n File \"/usr/lib/python3.4/subprocess.py\", line 1446, in _execute_child\n raise child_exception_type(errno_num, err_msg)\nFileNotFoundError: [Errno 2] No such file or directory: 'date | grep 1'\n\nAnd: \n>>> subprocess.check_output('date | grep 1', shell=True)\nb'gio 19 giu 2014, 14.15.35, CEST\\n'\n\nRead the documentation about the Frequently Used Arguments for more information about the shell argument and how it changes the interpretation of the other arguments.\n\nNote that you should try to avoid using shell=True since spawning a shell can be a security hazard (even if you do not execute untrusted input attacks like Shellshock can still be performed!). \nThe documentation for the subprocess module has a little section about replacing the shell pipeline.\nYou can do so by spawning the two processes in python and use subprocess.PIPE:\ndate_proc = subprocess.Popen(['date'], stdout=subprocess.PIPE)\ngrep_proc = subprocess.check_output(['grep', '1'], stdin=date_proc.stdout, stdout=subprocess.PIPE)\ndate_proc.stdout.close()\noutput = grep_proc.communicate()[0]\n\nYou can write some simple wrapper function to easily define pipelines:\nimport subprocess\nfrom shlex import split\nfrom collections import namedtuple\nfrom functools import reduce\n\nproc_output = namedtuple('proc_output', 'stdout stderr')\n\n\ndef pipeline(starter_command, *commands):\n if not commands:\n try:\n starter_command, *commands = starter_command.split('|')\n except AttributeError:\n pass\n starter_command = _parse(starter_command)\n starter = subprocess.Popen(starter_command, stdout=subprocess.PIPE)\n last_proc = reduce(_create_pipe, map(_parse, commands), starter)\n return proc_output(*last_proc.communicate())\n\ndef _create_pipe(previous, command):\n proc = subprocess.Popen(command, stdin=previous.stdout, stdout=subprocess.PIPE)\n previous.stdout.close()\n return proc\n\ndef _parse(cmd):\n try:\n return split(cmd)\n except Exception:\n return cmd\n\nWith this in place you can write pipeline('date | grep 1') or pipeline('date', 'grep 1') or pipeline(['date'], ['grep', '1'])\n", "The most common cause of FileNotFound with subprocess, in my experience, is the use of spaces in your command. 
If you have just a single command (not a pipeline, and no redirection, wildcards, etc), use a list instead.\n# Wrong, even with a valid command string\nsubprocess.run(['grep -o -w \"+tz+\"'])\n\n# Fixed; notice also \nsubprocess.run([\"grep\", \"-o\", \"-w\", '\"+tz+\"'])\n\nThis change results in no more FileNotFound errors, and is a nice solution if you got here searching for that exception with a simpler command.\nIf you need a pipeline or other shell features, the simple fix is to add shell=True:\nsubprocess.run(\n '''date | grep -o -w '\"+tz+\"'' | wc -w''',\n shell=True)\n\nHowever, if you are using python 3.5 or greater, try using this approach:\nimport subprocess\n\na = subprocess.run([\"date\"], stdout=subprocess.PIPE)\nprint(a.stdout.decode('utf-8'))\n\nb = subprocess.run([\"grep\", \"-o\", \"-w\", '\"+tz+\"'],\n input=a.stdout, stdout=subprocess.PIPE)\nprint(b.stdout.decode('utf-8'))\n\nc = subprocess.run([\"wc\", \"-w\"],\n input=b.stdout, stdout=subprocess.PIPE)\nprint(c.stdout.decode('utf-8'))\n\nYou should see how one command's output becomes another's input just like using a shell pipe, but you can easily debug each step of the process in python. Using subprocess.run is recommended for python > 3.5, but not available in prior versions.\n", "The FileNotFoundError happens because - in the absence of shell=True - Python tries to find an executable whose file name is the entire string you are passing in. You need to add shell=True to get the shell to parse and execute the string, or figure out how to rearticulate this command line to avoid requiring a shell.\nAs an aside, the shell programming here is decidedly weird. On any normal system, date will absolutely never output \"+tz+\" and so the rest of the processing is moot.\nFurther, using wc -w to count the number of output words from grep is unusual. The much more common use case (if you can't simply use grep -c to count the number of matching lines) would be to use wc -l to count lines of output from grep.\nAnyway, if you can, you want to avoid shell=True; if the intent here is to test the date command, you should probably replace the rest of the shell script with native Python code.\nPros:\n\nThe person trying to understand the program only needs to understand Python, not shell script.\nThe script will have fewer external dependencies (here, date) rather than require a Unix-like platform.\n\nCons:\n\nReimplementing standard Unix tools in Python is tiresome and sometimes rather verbose.\n\nWith that out of the way, if the intent is simply to count how wany times \"+tz+\" occurs in the output from date, try\np = subprocess.run(['date'],\n capture_output=True, text=True,\n check=True)\nresult = len(p.stdout.split('\"+tz+\"'))-1\n\nThe keyword argument text=True requires Python 3.7; for compatibility back to earlier Python versions, try the (misnomer) legacy synonym universal_newlines=True. For really old Python versions, maybe fall back to subprocess.check_output().\nIf you really need the semantics of the -w option of grep, you need to check if the characters adjacent to the match are not alphabetic, and exclude those which are. I'm leaving that as an exercise, and in fact would assume that the original shell script implementation here was not actually correct. (Maybe try re.split(r'(?<=^|\\W)\"\\+tz\\+\"(?=\\W|$)', p.stdout).)\nIn more trivial cases (single command, no pipes, wildcards, redirection, shell builtins, etc) you can use Python's shlex.split() to parse a command into a correctly quoted list of arguments. 
For example,\n>>> import shlex\n>>> shlex.split(r'''one \"two three\" four\\ five 'six seven' eight\"'\"nine'\"'ten''')\n['one', 'two three', 'four five', 'six seven', 'eight\\'nine\"ten']\n\nNotice how the regular string split() is completely unsuitable here; it simply splits on every whitespace character, and doesn't support any sort of quoting or escaping. (But notice also how it boneheadedly just returns a list of tokens from the original input:\n>>> shlex.split('''date | grep -o -w '\"+tz+\"' | wc -w''')\n['date', '|', 'grep', '-o', '-w', '\"+tz+\"', '|', 'wc', '-w']\n\n(Even more parenthetically, this isn't exactly the original input, which had a superfluous extra single quote after '\"+tz+\"').\nThis is in fact passing | and grep etc as arguments to date, not implementing a shell pipeline! You still have to understand what you are doing.)\n", "The question already has an answer above but just in case the solutions are not working for you; Please check the path itself and if all the environment variables are set for the process to locate the path.\n", "what worked for me on python 3.8.10 (inspired by @mightypile solution here: https://stackoverflow.com/a/49986004/12361522), was removed splits of parametres and i had to enable shell, too:\nthis:\nc = subprocess.run([\"wc -w\"], input=b.stdout, stdout=subprocess.PIPE, shell=True)\n\ninstead of:\nc = subprocess.run([\"wc\", \"-w\"], input=b.stdout, stdout=subprocess.PIPE)\n\n\n\nif anyone wanted to try my solution (at least for v3.8.10), here is mine:\ni have directory with multiple files of at least 2 file-types (.jpg and others). i needed to count specific file type (.jpg) and not all files in the directory, via 1 pipe:\nls *.jpg | wc -l\n\nso eventually i got it working like here:\nimport subprocess\nproc1 = subprocess.run([\"ls *.jpg\"], stdout=subprocess.PIPE, shell=True)\nproc2 = subprocess.run(['wc -l'], input=proc1.stdout, stdout=subprocess.PIPE, shell=True)\nprint(proc2.stdout.decode())\n\nit would not work with splits:\n[\"ls\", \"*.jpg\"] that would make ls to ignore contraint *.jpg\n['wc', '-l'] that would return correct count, but will all 3 outputs and not just one i was after\nall that would not work without enabled shell shell=True\n", "I had this error too and what worked for me was setting the line endings of the .sh file - that I was calling with subprocess - to Unix (LF) instead of Windows CRLF.\n" ]
[ 139, 20, 6, 0, 0, 0 ]
[]
[]
[ "pipe", "python", "shell", "subprocess" ]
stackoverflow_0024306205_pipe_python_shell_subprocess.txt
Q: How can I find the respective P-values for a multiple linear regression using the linear model from sklearn? So, I'm trying to develop a ml model for multiple linear regression that predicts the Y given n number of X variables. So far, my model can read in a data set and give the predicted value with a coefficient of determination as well as the respective coefficients for a 1-unit increase in X. The only issues are: I can't get the p-value for the life of me, it says most of the time the data isn't shaped right due to it being 5 columns and 1329 rows. When I do get an output, they're just incorrect, I know because I did the regression in analysis toolpak in excel. Is there a way to make the model recursive so that it recognizes the highest pvalue above .05 and calls itself again without said value until it hits the base case. Which would be something like While dependent_v[pvalue] > .05: Also what would be the best visualization method to show my data? Thank you for any and all that help, I'm just starting to delve into machine learning on my own and want to hone my skills before an upcoming data science internship in the summer. import matplotlib.pyplot as plt import pandas as pd from sklearn import linear_model def multipleReg(): dfreg = pd.read_csv("dfreg.csv") #Setting dependent variables dependent_v = ['Large_size', 'Mid_Level', 'Senior_Level', 'Exec_Level', 'Company_Location'] #Setting independent variable independent_v = 'Salary_In_USD' X = dfreg[dependent_v] #Drawing dependent variables from dataframe y = dfreg[independent_v] #Drawing independent variable from dataframe reg = linear_model.LinearRegression() #Initializing regression model reg.fit(X.values,y) #Fitting appropriate values predicted_sal = reg.predict([[1,0,1,0,0]]) #Prediction using 2 dimensions in array percent_rscore = (reg.score(X.values,y)*100) #Model coefficient of determination print('\n') print("The predicted salary is:", predicted_sal) print("The Coefficient of deterimination is:", "{:,.2f}%".format(percent_rscore)) #Printing coefficents of dependent variables(How much Y increases due to 1 #unit increase in X) print("The corresponding coefficients for the dependent variables are:", reg.coef_) A: As far as i know sklearn doesn't return p values, is better using the statsmodels library. But if you need to use sklearn anyway, you can find various solutions here: Find p-value (significance) in scikit-learn LinearRegression
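To make the statsmodels suggestion in the answer concrete, here is roughly what it looks like with the column names from the question. The CSV itself is not shown, so treat this as a sketch rather than a tested script.

import pandas as pd
import statsmodels.api as sm

dfreg = pd.read_csv("dfreg.csv")
X = dfreg[['Large_size', 'Mid_Level', 'Senior_Level', 'Exec_Level', 'Company_Location']]
y = dfreg['Salary_In_USD']

X = sm.add_constant(X)            # adds the intercept, matching Excel's regression output
model = sm.OLS(y, X).fit()
print(model.summary())            # coefficients, R^2 and per-predictor p-values
print(model.pvalues)              # the p-values alone, as a pandas Series

# crude backward elimination: drop the worst predictor while any p-value exceeds 0.05
while model.pvalues.drop('const').max() > 0.05:
    worst = model.pvalues.drop('const').idxmax()
    X = X.drop(columns=worst)
    model = sm.OLS(y, X).fit()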
How can I find the respective P-values for a multiple linear regression using the linear model from sklearn?
So, I'm trying to develop a ml model for multiple linear regression that predicts the Y given n number of X variables. So far, my model can read in a data set and give the predicted value with a coefficient of determination as well as the respective coefficients for a 1-unit increase in X. The only issues are: I can't get the p-value for the life of me, it says most of the time the data isn't shaped right due to it being 5 columns and 1329 rows. When I do get an output, they're just incorrect, I know because I did the regression in analysis toolpak in excel. Is there a way to make the model recursive so that it recognizes the highest pvalue above .05 and calls itself again without said value until it hits the base case. Which would be something like While dependent_v[pvalue] > .05: Also what would be the best visualization method to show my data? Thank you for any and all that help, I'm just starting to delve into machine learning on my own and want to hone my skills before an upcoming data science internship in the summer. import matplotlib.pyplot as plt import pandas as pd from sklearn import linear_model def multipleReg(): dfreg = pd.read_csv("dfreg.csv") #Setting dependent variables dependent_v = ['Large_size', 'Mid_Level', 'Senior_Level', 'Exec_Level', 'Company_Location'] #Setting independent variable independent_v = 'Salary_In_USD' X = dfreg[dependent_v] #Drawing dependent variables from dataframe y = dfreg[independent_v] #Drawing independent variable from dataframe reg = linear_model.LinearRegression() #Initializing regression model reg.fit(X.values,y) #Fitting appropriate values predicted_sal = reg.predict([[1,0,1,0,0]]) #Prediction using 2 dimensions in array percent_rscore = (reg.score(X.values,y)*100) #Model coefficient of determination print('\n') print("The predicted salary is:", predicted_sal) print("The Coefficient of deterimination is:", "{:,.2f}%".format(percent_rscore)) #Printing coefficents of dependent variables(How much Y increases due to 1 #unit increase in X) print("The corresponding coefficients for the dependent variables are:", reg.coef_)
[ "As far as i know sklearn doesn't return p values, is better using the statsmodels library.\nBut if you need to use sklearn anyway, you can find various solutions here:\nFind p-value (significance) in scikit-learn LinearRegression\n" ]
[ 1 ]
[]
[]
[ "data_science", "machine_learning", "python", "regression", "statistics" ]
stackoverflow_0074618290_data_science_machine_learning_python_regression_statistics.txt
Q: What is the best way to write a dictionary of column connection with column2? How to create dictionary of connecting column index selection: data: No_1 C_N 1 1 1 2 1 7 2 13 2 6 desired output should be like {1:[1,2,7],2:[13,6]} I have tried this, but it doesn't seem to work. import pandas as pd df = pd.read_csv('connection.csv' ,sep=',') for i ,j in zip(range(166),df['No_1']): if i==j: print(df['C_N']) A: Use df.groupby('No_1')['C_N'].apply(list).to_dict(). This gives you {1: [1, 2, 7], 2: [13, 6]}.
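A self-contained check of the suggested one-liner, built from the sample data given in the question:

import pandas as pd

df = pd.DataFrame({'No_1': [1, 1, 1, 2, 2],
                   'C_N':  [1, 2, 7, 13, 6]})

result = df.groupby('No_1')['C_N'].apply(list).to_dict()
print(result)   # {1: [1, 2, 7], 2: [13, 6]}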
What is the best way to write a dictionary of column connection with column2?
How can I create a dictionary that maps each value in one column to the list of values paired with it in a second column? data: No_1 C_N 1 1 1 2 1 7 2 13 2 6 The desired output should look like {1:[1,2,7],2:[13,6]} I have tried this, but it doesn't seem to work. import pandas as pd df = pd.read_csv('connection.csv', sep=',') for i, j in zip(range(166), df['No_1']): if i == j: print(df['C_N'])
[ "Use df.groupby('No_1')['C_N'].apply(list).to_dict().\nThis gives you {1: [1, 2, 7], 2: [13, 6]}.\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074618398_python.txt
Q: How to match version substring within overall string I am trying to match a version substring with regex in the form of v###.##.### or version #.##.###. The number of version numbers doesn't matter and there may or may not be a space after the v or version. This is what I was trying so far but it's not matching in some cases: \bv\s?[\d.]*\b|\bversion\s?[\d.]*\b For example, it matches "version 6.2.11" but not c2000_v6.2.11. I'm relatively new to regex and not sure what I'm doing wrong here. I'm pretty sure there's a better way to do the "or" part as well, so any help would be appreciated thank you! A: First, your pattern can be shortened considerably by implementing an optional non-capturing group so that v or version could be matched without the need of an alternation. Next, the first \b requires a word boundary but the version information starts after _ in the second expected match, and _ is a word char. You can use (?<![^\W_])v(?:ersion)?\s?[\d.]*\b See the regex demo. Details: (?<![^\W_]) - immediately on the left, there can be no letter or digit v - a v char (?:ersion)? - an optional ersion string \s? - an optional whitespace [\d.]* - zero or more digits or dots \b - a word boundary.
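A quick test of the answer's pattern against strings like the questioner's examples; the surrounding test strings are made up for illustration.

import re

pattern = re.compile(r'(?<![^\W_])v(?:ersion)?\s?[\d.]*\b')

for text in ['release notes for version 6.2.11', 'firmware c2000_v6.2.11', 'v 1.2.3']:
    m = pattern.search(text)
    print(text, '->', m.group() if m else 'no match')
# expected matches: 'version 6.2.11', 'v6.2.11', 'v 1.2.3'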
How to match version substring within overall string
I am trying to match a version substring with regex in the form of v###.##.### or version #.##.###. The number of version numbers doesn't matter and there may or may not be a space after the v or version. This is what I was trying so far but it's not matching in some cases: \bv\s?[\d.]*\b|\bversion\s?[\d.]*\b For example, it matches "version 6.2.11" but not c2000_v6.2.11. I'm relatively new to regex and not sure what I'm doing wrong here. I'm pretty sure there's a better way to do the "or" part as well, so any help would be appreciated thank you!
[ "First, your pattern can be shortened considerably by implementing an optional non-capturing group so that v or version could be matched without the need of an alternation.\nNext, the first \\b requires a word boundary but the version information starts after _ in the second expected match, and _ is a word char.\nYou can use\n(?<![^\\W_])v(?:ersion)?\\s?[\\d.]*\\b\n\nSee the regex demo.\nDetails:\n\n(?<![^\\W_]) - immediately on the left, there can be no letter or digit\nv - a v char\n(?:ersion)? - an optional ersion string\n\\s? - an optional whitespace\n[\\d.]* - zero or more digits or dots\n\\b - a word boundary.\n\n" ]
[ 1 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0074618484_python_regex.txt
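To make the answer above easy to verify, this is a small test of the suggested pattern against both strings mentioned in the question; the pattern is copied verbatim from the answer and the two test strings are only examples.

import re

pattern = r"(?<![^\W_])v(?:ersion)?\s?[\d.]*\b"
for text in ["version 6.2.11", "c2000_v6.2.11"]:
    match = re.search(pattern, text)
    print(text, "->", match.group() if match else None)
# prints "version 6.2.11" for the first string and "v6.2.11" for the second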
Q: Is there an easy way to construct a pandas DataFrame from an Iterable of attrs objects? One can do that with dataclasses like so: from dataclasses import dataclass import pandas as pd @dataclass class MyDataClass: i: int s: str df = pd.DataFrame([MyDataClass("a", 1), MyDataClass("b", 2)]) that makes the DataFrame df with columns i and s as one would expect. Is there an easy way to do that with an attrs class? I can do it by iterating over the the object's properties and constructing an object of a type like dict[str, list] ({"i": [1, 2], "s": ["a", "b"]} in this case) and constructing the DataFrame from that but it would be nice to have support for attrs objects directly. A: You can access the dictionary at the heart of a dataclass like so a = MyDataClass("a", 1) a.__dict__ this outputs: {'i': 'a', 's': 1} Knowing this, if you have an iterable arr of type MyDataClass, you can access the __dict__ attribute and construct a dataframe arr = [MyDataClass("a", 1), MyDataClass("b", 2)] df = pd.DataFrame([x.__dict__ for x in arr]) df outputs: i s 0 a 1 1 b 2 The limitation with this approach that if the slots option is used, then this will not work. Alternatively, it is possible to convert the data from a dataclass to a tuple or dictionary using dataclasses.astuple and dataclasses.asdict respectively. The data frame can be also constructed using either of the following: # using astuple df = pd.DataFrame( [dataclasses.astuple(x) for x in arr], columns=[f.name for f in dataclasses.fields(MyDataClass)] ) # using asdict df = pd.DataFrame([dataclasses.asdict(x) for x in arr])
Is there an easy way to construct a pandas DataFrame from an Iterable of attrs objects?
One can do that with dataclasses like so: from dataclasses import dataclass import pandas as pd @dataclass class MyDataClass: i: int s: str df = pd.DataFrame([MyDataClass("a", 1), MyDataClass("b", 2)]) That makes the DataFrame df with columns i and s as one would expect. Is there an easy way to do that with an attrs class? I can do it by iterating over the object's properties and constructing an object of a type like dict[str, list] ({"i": [1, 2], "s": ["a", "b"]} in this case) and constructing the DataFrame from that, but it would be nice to have support for attrs objects directly.
[ "You can access the dictionary at the heart of a dataclass like so\na = MyDataClass(\"a\", 1)\na.__dict__\n\nthis outputs:\n{'i': 'a', 's': 1}\n\nKnowing this, if you have an iterable arr of type MyDataClass, you can access the __dict__ attribute and construct a dataframe\narr = [MyDataClass(\"a\", 1), MyDataClass(\"b\", 2)]\ndf = pd.DataFrame([x.__dict__ for x in arr])\n\ndf outputs:\n i s\n0 a 1\n1 b 2\n\nThe limitation with this approach that if the slots option is used, then this will not work.\nAlternatively, it is possible to convert the data from a dataclass to a tuple or dictionary using dataclasses.astuple and dataclasses.asdict respectively.\nThe data frame can be also constructed using either of the following:\n# using astuple\ndf = pd.DataFrame(\n [dataclasses.astuple(x) for x in arr], \n columns=[f.name for f in dataclasses.fields(MyDataClass)]\n)\n\n# using asdict\ndf = pd.DataFrame([dataclasses.asdict(x) for x in arr])\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python", "python_attrs" ]
stackoverflow_0074618499_pandas_python_python_attrs.txt
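The accepted answer covers dataclasses only; for the attrs classes the question actually asks about, recent versions of the library expose an equivalent helper, attrs.asdict (attr.asdict in older releases). A minimal sketch, assuming an @attrs.define class mirroring the fields from the question's example:

import attrs
import pandas as pd

@attrs.define
class MyAttrsClass:
    i: int
    s: str

arr = [MyAttrsClass(1, "a"), MyAttrsClass(2, "b")]
df = pd.DataFrame([attrs.asdict(x) for x in arr])
print(df)
#    i  s
# 0  1  a
# 1  2  b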
Q: Having inputs add onto each other in python I am a beginner, and I was working on a simple credit program. I want it to work so every time I add an input of a number it gets stored in a variable that shows my total balance. The problem right now is that the program is only a one use program so the input i enter does not get saved into a variable so that when I enter another value it gets added onto a previous input. Code is below: Purchase = int(input("How much was your purchase? ")) credit_balance = 0 credit_limit = 2000 Total = credit_balance + Purchase print("Your account value right now: ", Total) if Total == credit_limit: print("You have reached your credit limit!", Total) A: You'll need to introduce a while loop to keep it going. Try this: credit_limit = 2000 credit_balance = 0 while True: print('Welcome to the Credit Card Company') Purchase = int(input("How much was your purchase? ")) Total = credit_balance + Purchase print("Your account value right now: ", Total) if Total >= credit_limit: print("You have reached your credit limit!", Total) Note that this will keep it going indefinitely. You'll need to add logic for the user to input a command to exit. You can use something like: print('Welcome to the Credit Card Company') Purchase = int(input("How much was your purchase? Or type Exit to exit.")) Then: if Purchase == 'Exit': exit() Edit: Here's a version that retains the balance each time. The key difference is that a variable can equal its previous value plus a change. I rewrote a few things for clarity. credit_limit = 2000 current_balance = 0 while True: print('Welcome to the Credit Card Company') Purchase = int(input("How much was your purchase? ")) current_balance = current_balance + Purchase print("Your account value right now: ", current_balance) if current_balance == credit_limit: print("You have reached your credit limit!", current_balance) A: If you don't want your code to exit you can use the while loop. credit_balance = 0 credit_limit = 2000 while True: purchase = int(input("How much was your purchase? ")) Total = credit_balance + purchase print("Your account value right now: ", Total) if Total == credit_limit: print("You have reached your credit limit!", Total) Please notice I also changed the variable Purchase to purchase. This is because in Python the convention is lower case letters for variables. You can read more about conventions here: Python Conventions Also if you want to read more about loops, you can have a look here: Python Loops Good luck and welcome to python :) A: You can get user input infinitely if you use a while loop: credit_balance = 0 credit_limit = 2000 while True: purchase = int(input("How much was your purchase? ")) credit_balance += purchase # add purchase to credit_balance print("Your account value right now: ", credit_balance) if credit_balance >= credit_limit: print("You have reached/exceeded your credit limit!", Total) A good exercise would be to add some logic to ensure purchases don't exceed the credit limit.
Having inputs add onto each other in python
I am a beginner, and I was working on a simple credit program. I want it to work so that every time I enter a number, it gets added to a variable that shows my total balance. The problem right now is that the program is a single-use program: the input I enter does not get saved into a variable, so when I enter another value it is not added onto the previous input. Code is below: Purchase = int(input("How much was your purchase? ")) credit_balance = 0 credit_limit = 2000 Total = credit_balance + Purchase print("Your account value right now: ", Total) if Total == credit_limit: print("You have reached your credit limit!", Total)
[ "You'll need to introduce a while loop to keep it going. Try this:\ncredit_limit = 2000\ncredit_balance = 0\n\nwhile True:\n\n print('Welcome to the Credit Card Company')\n Purchase = int(input(\"How much was your purchase? \"))\n Total = credit_balance + Purchase\n\n print(\"Your account value right now: \", Total)\n\n if Total >= credit_limit:\n print(\"You have reached your credit limit!\", Total)\n\nNote that this will keep it going indefinitely. You'll need to add logic for the user to input a command to exit. You can use something like:\n print('Welcome to the Credit Card Company')\n Purchase = int(input(\"How much was your purchase? Or type Exit to exit.\"))\n\nThen:\nif Purchase == 'Exit':\n exit()\n\nEdit:\nHere's a version that retains the balance each time. The key difference is that a variable can equal its previous value plus a change. I rewrote a few things for clarity.\ncredit_limit = 2000\ncurrent_balance = 0\n\nwhile True:\n\n print('Welcome to the Credit Card Company')\n Purchase = int(input(\"How much was your purchase? \"))\n current_balance = current_balance + Purchase\n\n print(\"Your account value right now: \", current_balance)\n\n if current_balance == credit_limit:\n print(\"You have reached your credit limit!\", current_balance)\n\n\n", "If you don't want your code to exit you can use the while loop.\ncredit_balance = 0\ncredit_limit = 2000\n\nwhile True:\n purchase = int(input(\"How much was your purchase? \"))\n Total = credit_balance + purchase\n print(\"Your account value right now: \", Total)\n if Total == credit_limit:\n print(\"You have reached your credit limit!\", Total)\n\nPlease notice I also changed the variable Purchase to purchase.\nThis is because in Python the convention is lower case letters for variables.\nYou can read more about conventions here:\nPython Conventions\nAlso if you want to read more about loops, you can have a look here:\nPython Loops\nGood luck and welcome to python :)\n", "You can get user input infinitely if you use a while loop:\ncredit_balance = 0\ncredit_limit = 2000\n\nwhile True:\n purchase = int(input(\"How much was your purchase? \"))\n credit_balance += purchase # add purchase to credit_balance\n \n print(\"Your account value right now: \", credit_balance)\n \n if credit_balance >= credit_limit:\n print(\"You have reached/exceeded your credit limit!\", Total)\n\nA good exercise would be to add some logic to ensure purchases don't exceed the credit limit.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074618424_python.txt
Q: Solving generalized eigenvalue system with a semidefinite positive B in python I am trying to use Normalized Cut algorithm (Shi and Malik, 2000) to cut a matrix into two matrices. In this regard, I need to find the second smallest eigenvector in a generalized eigenvalue system (Ax = lambda.B.x). In my input, B is a semidefinite positive matrix. However, scipy.linalg.eigh requires B to be definite positive and raises an error when I use it. I need to know if I can have a solution with this input, and how can I find it. I tried eigvals, eigvecs = eigh(A, B, eigvals_only=False, subset_by_index=[0, 1]) But I got: numpy.linalg.LinAlgError: The leading minor of order 2 of B is not positive definite. The factorization of B could not be completed and no eigenvalues or eigenvectors were computed. A: If B is semidefinite, it means it has at least one eigenvector associated with an eigenvalue 0, you still could have solutions if the nullspace of B is also a null space of A, i.e. if B @ x = 0, A @ x = 0, but in that case the generalized eigenvalue associated with x is undetermined.
Solving generalized eigenvalue system with a semidefinite positive B in python
I am trying to use Normalized Cut algorithm (Shi and Malik, 2000) to cut a matrix into two matrices. In this regard, I need to find the second smallest eigenvector in a generalized eigenvalue system (Ax = lambda.B.x). In my input, B is a semidefinite positive matrix. However, scipy.linalg.eigh requires B to be definite positive and raises an error when I use it. I need to know if I can have a solution with this input, and how can I find it. I tried eigvals, eigvecs = eigh(A, B, eigvals_only=False, subset_by_index=[0, 1]) But I got: numpy.linalg.LinAlgError: The leading minor of order 2 of B is not positive definite. The factorization of B could not be completed and no eigenvalues or eigenvectors were computed.
[ "If B is semidefinite, it means it has at least one eigenvector associated with an eigenvalue 0, you still could have solutions if the nullspace of B is also a null space of A, i.e. if B @ x = 0, A @ x = 0, but in that case the generalized eigenvalue associated with x is undetermined.\n" ]
[ 0 ]
[]
[]
[ "eigenvalue", "eigenvector", "linear_algebra", "python", "scipy" ]
stackoverflow_0074618427_eigenvalue_eigenvector_linear_algebra_python_scipy.txt
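A practical note to go with the answer above: scipy.linalg.eigh requires a positive definite B, but the QZ-based scipy.linalg.eig accepts a singular (semidefinite) B and reports the undetermined eigenvalues as inf; another common trick in normalized-cut code is to shift B slightly so it becomes definite. A rough sketch of both ideas — the tiny 2x2 matrices here are placeholders, not the asker's data, and the shift size is an arbitrary choice:

import numpy as np
from scipy.linalg import eig, eigh

A = np.array([[2.0, -1.0], [-1.0, 2.0]])
B = np.array([[1.0, 0.0], [0.0, 0.0]])   # semidefinite: one zero eigenvalue

# Option 1: the general (QZ) solver tolerates a singular B;
# the undetermined pairs show up as inf in the eigenvalue array
vals, vecs = eig(A, B)
print(vals)

# Option 2: shift B to make it positive definite, then use eigh
# (eigenvalues come back sorted ascending, so column 1 is the
# eigenvector of the second smallest eigenvalue, the Shi-Malik vector)
vals2, vecs2 = eigh(A, B + 1e-10 * np.eye(B.shape[0]))
second_smallest_vec = vecs2[:, 1]
print(vals2, second_smallest_vec)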
Q: Sometimes pip install is very slow I am sure it is not network issue. Some of my machine install packages using pip is very fast while some other machine is pretty slow, from the logs, I suspect the slow is due to it will compile the package, I am wondering how can I avoid this compilation to make the pip installation fast. Here's the logs from the slow pip installation. Collecting numpy==1.10.4 (from -r requirements.txt (line 1)) Downloading numpy-1.10.4.tar.gz (4.1MB) 100% |████████████████████████████████| 4.1MB 95kB/s Requirement already satisfied (use --upgrade to upgrade): wheel==0.26.0 in ./lib/python2.7/site-packages (from -r requirements.txt (line 2)) Building wheels for collected packages: numpy Running setup.py bdist_wheel for numpy ... - done Stored in directory: /root/.cache/pip/wheels/66/f5/d7/f6ddd78b61037fcb51a3e32c9cd276e292343cdd62d5384efd Successfully built numpy A: The slowness is due to compilation indeed. But there is now the manylinux tag. Which allows the installation of pre-compiled distributions. See for example the PyPI page of numpy to see if a manylinux package is provided for your Python version. Update (2021-06) Since this answer received some attention lately, there are now many manylinux tags for precompiled packages (no pun intended). A: For me, I was running into this issue with pip 22.0.4, Ubuntu 20.04.4 LTS. I was installing tensorflow-gpu, which already takes too much time, but the pip was unusually very slow. The above solutions didn't make any sense to me, so I did the following: Do not run pip commands with sudo. apt-update && apt-upgrade Reboot the server/computer I know it doesn't seem to be a permanent solution, but it fixed the issue for me. A: In case anyone was having the network issue and landed on this page like me: I noticed slowness on my machine because pip install would get stuck in network calls while trying to create socket connections (sock.connect()). As discussed here, this can happen when the host supports IPv6 but your network doesnt. As instructed here, I checked if this was true by disabling IPv6 on my Ubuntu machine as follows : sysctl net.ipv6.conf.all.disable_ipv6=1 I was no longer hanging in network calls after this change. However, I am not sure if this is a sustainable solution since we are all slowly moving to IPv6.
Sometimes pip install is very slow
I am sure it is not network issue. Some of my machine install packages using pip is very fast while some other machine is pretty slow, from the logs, I suspect the slow is due to it will compile the package, I am wondering how can I avoid this compilation to make the pip installation fast. Here's the logs from the slow pip installation. Collecting numpy==1.10.4 (from -r requirements.txt (line 1)) Downloading numpy-1.10.4.tar.gz (4.1MB) 100% |████████████████████████████████| 4.1MB 95kB/s Requirement already satisfied (use --upgrade to upgrade): wheel==0.26.0 in ./lib/python2.7/site-packages (from -r requirements.txt (line 2)) Building wheels for collected packages: numpy Running setup.py bdist_wheel for numpy ... - done Stored in directory: /root/.cache/pip/wheels/66/f5/d7/f6ddd78b61037fcb51a3e32c9cd276e292343cdd62d5384efd Successfully built numpy
[ "The slowness is due to compilation indeed. But there is now the manylinux tag. Which allows the installation of pre-compiled distributions. See for example the PyPI page of numpy to see if a manylinux package is provided for your Python version.\nUpdate (2021-06)\nSince this answer received some attention lately, there are now many manylinux tags for precompiled packages (no pun intended).\n", "For me, I was running into this issue with pip 22.0.4, Ubuntu 20.04.4 LTS.\nI was installing tensorflow-gpu, which already takes too much time, but the pip was unusually very slow.\nThe above solutions didn't make any sense to me, so I did the following:\n\nDo not run pip commands with sudo.\napt-update && apt-upgrade\nReboot the server/computer\n\nI know it doesn't seem to be a permanent solution, but it fixed the issue for me.\n", "In case anyone was having the network issue and landed on this page like me:\nI noticed slowness on my machine because pip install would get stuck in network calls while trying to create socket connections (sock.connect()). As discussed here, this can happen when the host supports IPv6 but your network doesnt. As instructed here, I checked if this was true by disabling IPv6 on my Ubuntu machine as follows :\nsysctl net.ipv6.conf.all.disable_ipv6=1\n\nI was no longer hanging in network calls after this change.\nHowever, I am not sure if this is a sustainable solution since we are all slowly moving to IPv6.\n" ]
[ 24, 1, 0 ]
[ "Command\nInstead of (too slow to complete)\npython -m pip install numpy\n\nThis worked (fast as supposed to be)\npip install numpy\n\nCheck\npython\n\nimport numpy as np \n\n(it shouldn't give any errors)\n" ]
[ -34 ]
[ "pip", "python" ]
stackoverflow_0035144103_pip_python.txt
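Two commands that make the manylinux advice above actionable on reasonably recent pip versions; this is a shell sketch of standard pip options, not a guaranteed fix for every slow install:

# refuse source builds: use a pre-built wheel or fail loudly instead of compiling
pip install --only-binary=:all: numpy

# list the wheel tags (manylinux etc.) this interpreter can actually use
pip debug --verbose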
Q: Starting a python script from another before it crashes I'm trying to make some project code I have written, more resilient to crashes, except the circumstances of my previous crashes have all been different. So that I do not have to try and account for every single one, I thought I'd try to get my code to either restart, or execute a copy of itself in place of it and then close itself down gracefully, meaning its replacement, because it's coded identically, would in essence be the same as restarting from the beginning again. The desired result for me would be that while the error resulting circumstances are present, my code would be in a program swap out, or restart loop until such time as it can execute its code normally again....until the next time it faces a similar situation. To experiment with, I've written two programs. I'm hoping from these examples someone will understand what I am trying to achieve. I want the first script to execute, then start the execute process for the second (in a new terminal) before closing itself down gracefully. Is this even possible? Thanks in advance. first.py #!/usr/bin/env python #!/bin/bash #first.py import time import os import sys from subprocess import run import subprocess thisfile = "first" #thisfile = "second" time.sleep(3) while thisfile == "second": print("this is the second file") time.sleep(1) #os.system("first.py") #exec(open("first.py").read()) #run("python "+"first.py", check=False) #import first #os.system('python first.py') #subprocess.call(" python first.py 1", shell=True) os.execv("first.py", sys.argv) print("I'm leaving second now") break while thisfile == "first": print("this is the first file") time.sleep(1) #os.system("second.py") #exec(open("second.py").read()) #run("python "+"second.py", check=False) #import second #os.system('python second.py') #subprocess.call(" python second.py 1", shell=True) os.execv("second.py", sys.argv) print("I'm leaving first now") break time.sleep(1) sys.exit("Quitting") second.py (basically a copy of first.py) #!/usr/bin/env python #!/bin/bash #second.py import time import os import sys from subprocess import run import subprocess #thisfile = "first" thisfile = "second" time.sleep(3) while thisfile == "second": print("this is the second file") time.sleep(1) #os.system("first.py") #exec(open("first.py").read()) #run("python "+"first.py", check=False) #import first #os.system('python first.py') #subprocess.call(" python first.py 1", shell=True) os.execv("first.py", sys.argv) print("I'm leaving second now") break while thisfile == "first": print("this is the first file") time.sleep(1) #os.system("second.py") #exec(open("second.py").read()) #run("python "+"second.py", check=False) #import second #os.system('python second.py') #subprocess.call(" python second.py 1", shell=True) os.execv("second.py", sys.argv) print("I'm leaving first now") break time.sleep(1) sys.exit("Quitting") I have tried quite a few solutions as can be seen with my hashed out lines of code. Nothing so far though has given me the result I am after unfortunately. EDIT: This is the part of the actual code i think i am having problems with. This is the part where I am attempting to publish to my MQTT broker. 
try: client.connect(broker, port, 10) #connect to broker time.sleep(1) except: print("Cannot connect") sys.exit("Quitting") Instead of exiting with the "quitting" part, will it keep my code alive if i route it to stay within a repeat loop until such time as it successfully connects to the broker again and then continue back with the rest of the script? Or is this wishful thinking? A: You can do this in many ways. Your subprocess.call() option would work - but it depends on the details of implementation. Perhaps the easiest is to use multiprocessing to run the program in a subprocess while the parent simply restarts it as necessary. import multiprocessing as mp import time def do_the_things(arg1, arg2): print("doing the things") time.sleep(2) # for test raise RuntimeError("Virgin Media dun me wrong") def launch_and_monitor(): while True: print("start the things") proc = mp.Process(target=do_the_things, args=(0, 1)) proc.start() proc.wait() print("things went awry") time.sleep(2) # a moment before restart hoping badness resolves if __name__ == "__main__": launch_and_monitor() Note: The child process uses the same terminal as the parent. Running separate terminals is quite a bit more difficult. It would depend, for instance, on how you've setup to have a terminal attach to the pi. If you want to catch and process errors in the parent process, you could write some extra code to catch the error, pickle it, and have a queue to pass it back to the parent. Multiprocessing pools already do that, so you could just have a pool with 1 process and and a single iterable to consume. with multiprocessing.Pool(1) as pool: while True: try: result = pool.map(do_the_things, [(0,1)]) except Exception as e: print("caught", e) A: Ok, I got it! For anyone else interested in trying to do what my original question was: To close down a script on the occurrence of an error and then open either a new script, or a copy of the original one (for the purpose of having the same functionality as the first) in a new terminal window, this is the answer using my original code samples as an example (first.py and second.py where both scripts run the exact same code - other than defining them as different names and this name allocation defined within for which alternate file to open in its place) first.py import time import subprocess thisfile = "first" #thisfile = "second" if thisfile == "second": restartcommand = 'python3 /home/mypi/myprograms/first.py' else: restartcommand = 'python3 /home/mypi/myprograms/second.py' time.sleep(3) while thisfile == "second": print("this is the second file") time.sleep(1) subprocess.run('lxterminal -e ' + restartcommand, shell=True) print("I'm leaving second now") break while thisfile == "first": print("this is the first file") time.sleep(1) subprocess.run('lxterminal -e ' + restartcommand, shell=True) print("I'm leaving first now") break time.sleep(1) quit() second.py import time import subprocess #thisfile = "first" thisfile = "second" if thisfile == "second": restartcommand = 'python3 /home/mypi/myprograms/first.py' else: restartcommand = 'python3 /home/mypi/myprograms/second.py' time.sleep(3) while thisfile == "second": print("this is the second file") time.sleep(1) subprocess.run('lxterminal -e ' + restartcommand, shell=True) print("I'm leaving second now") break while thisfile == "first": print("this is the first file") time.sleep(1) subprocess.run('lxterminal -e ' + restartcommand, shell=True) print("I'm leaving first now") break time.sleep(1) quit() The result of running either one of 
these will be that the program runs, then opens the other file and starts running that before closing down itself and this operation will continue back and forth, back and forth until you close down the running file before it gets a chance to open the other file. Try it! it's fun!
Starting a python script from another before it crashes
I'm trying to make some project code I have written, more resilient to crashes, except the circumstances of my previous crashes have all been different. So that I do not have to try and account for every single one, I thought I'd try to get my code to either restart, or execute a copy of itself in place of it and then close itself down gracefully, meaning its replacement, because it's coded identically, would in essence be the same as restarting from the beginning again. The desired result for me would be that while the error resulting circumstances are present, my code would be in a program swap out, or restart loop until such time as it can execute its code normally again....until the next time it faces a similar situation. To experiment with, I've written two programs. I'm hoping from these examples someone will understand what I am trying to achieve. I want the first script to execute, then start the execute process for the second (in a new terminal) before closing itself down gracefully. Is this even possible? Thanks in advance. first.py #!/usr/bin/env python #!/bin/bash #first.py import time import os import sys from subprocess import run import subprocess thisfile = "first" #thisfile = "second" time.sleep(3) while thisfile == "second": print("this is the second file") time.sleep(1) #os.system("first.py") #exec(open("first.py").read()) #run("python "+"first.py", check=False) #import first #os.system('python first.py') #subprocess.call(" python first.py 1", shell=True) os.execv("first.py", sys.argv) print("I'm leaving second now") break while thisfile == "first": print("this is the first file") time.sleep(1) #os.system("second.py") #exec(open("second.py").read()) #run("python "+"second.py", check=False) #import second #os.system('python second.py') #subprocess.call(" python second.py 1", shell=True) os.execv("second.py", sys.argv) print("I'm leaving first now") break time.sleep(1) sys.exit("Quitting") second.py (basically a copy of first.py) #!/usr/bin/env python #!/bin/bash #second.py import time import os import sys from subprocess import run import subprocess #thisfile = "first" thisfile = "second" time.sleep(3) while thisfile == "second": print("this is the second file") time.sleep(1) #os.system("first.py") #exec(open("first.py").read()) #run("python "+"first.py", check=False) #import first #os.system('python first.py') #subprocess.call(" python first.py 1", shell=True) os.execv("first.py", sys.argv) print("I'm leaving second now") break while thisfile == "first": print("this is the first file") time.sleep(1) #os.system("second.py") #exec(open("second.py").read()) #run("python "+"second.py", check=False) #import second #os.system('python second.py') #subprocess.call(" python second.py 1", shell=True) os.execv("second.py", sys.argv) print("I'm leaving first now") break time.sleep(1) sys.exit("Quitting") I have tried quite a few solutions as can be seen with my hashed out lines of code. Nothing so far though has given me the result I am after unfortunately. EDIT: This is the part of the actual code i think i am having problems with. This is the part where I am attempting to publish to my MQTT broker. try: client.connect(broker, port, 10) #connect to broker time.sleep(1) except: print("Cannot connect") sys.exit("Quitting") Instead of exiting with the "quitting" part, will it keep my code alive if i route it to stay within a repeat loop until such time as it successfully connects to the broker again and then continue back with the rest of the script? Or is this wishful thinking?
[ "You can do this in many ways. Your subprocess.call() option would work - but it depends on the details of implementation. Perhaps the easiest is to use multiprocessing to run the program in a subprocess while the parent simply restarts it as necessary.\nimport multiprocessing as mp\nimport time\n\ndef do_the_things(arg1, arg2):\n print(\"doing the things\")\n time.sleep(2) # for test\n raise RuntimeError(\"Virgin Media dun me wrong\")\n\ndef launch_and_monitor():\n while True:\n print(\"start the things\")\n proc = mp.Process(target=do_the_things, args=(0, 1))\n proc.start()\n proc.wait()\n print(\"things went awry\")\n time.sleep(2) # a moment before restart hoping badness resolves\n\nif __name__ == \"__main__\":\n launch_and_monitor()\n\nNote: The child process uses the same terminal as the parent. Running separate terminals is quite a bit more difficult. It would depend, for instance, on how you've setup to have a terminal attach to the pi.\nIf you want to catch and process errors in the parent process, you could write some extra code to catch the error, pickle it, and have a queue to pass it back to the parent. Multiprocessing pools already do that, so you could just have a pool with 1 process and and a single iterable to consume.\nwith multiprocessing.Pool(1) as pool:\n while True:\n try:\n result = pool.map(do_the_things, [(0,1)])\n except Exception as e:\n print(\"caught\", e)\n\n", "Ok, I got it!\nFor anyone else interested in trying to do what my original question was:\nTo close down a script on the occurrence of an error and then open either a new script, or a copy of the original one (for the purpose of having the same functionality as the first) in a new terminal window, this is the answer using my original code samples as an example (first.py and second.py where both scripts run the exact same code - other than defining them as different names and this name allocation defined within for which alternate file to open in its place)\nfirst.py\nimport time\nimport subprocess\n\nthisfile = \"first\"\n#thisfile = \"second\"\n\nif thisfile == \"second\":\n restartcommand = 'python3 /home/mypi/myprograms/first.py'\nelse:\n restartcommand = 'python3 /home/mypi/myprograms/second.py'\n\ntime.sleep(3)\nwhile thisfile == \"second\":\n print(\"this is the second file\")\n time.sleep(1)\n subprocess.run('lxterminal -e ' + restartcommand, shell=True)\n print(\"I'm leaving second now\")\n break\nwhile thisfile == \"first\":\n print(\"this is the first file\")\n time.sleep(1)\n subprocess.run('lxterminal -e ' + restartcommand, shell=True)\n print(\"I'm leaving first now\")\n break\ntime.sleep(1)\nquit()\n\nsecond.py\nimport time\nimport subprocess\n\n#thisfile = \"first\"\nthisfile = \"second\"\n\nif thisfile == \"second\":\n restartcommand = 'python3 /home/mypi/myprograms/first.py'\nelse:\n restartcommand = 'python3 /home/mypi/myprograms/second.py'\n\ntime.sleep(3)\nwhile thisfile == \"second\":\n print(\"this is the second file\")\n time.sleep(1)\n subprocess.run('lxterminal -e ' + restartcommand, shell=True)\n print(\"I'm leaving second now\")\n break\nwhile thisfile == \"first\":\n print(\"this is the first file\")\n time.sleep(1)\n subprocess.run('lxterminal -e ' + restartcommand, shell=True)\n print(\"I'm leaving first now\")\n break\ntime.sleep(1)\nquit()\n\nThe result of running either one of these will be that the program runs, then opens the other file and starts running that before closing down itself and this operation will continue back and forth, back and forth until you close 
down the running file before it gets a chance to open the other file.\nTry it! it's fun!\n" ]
[ 1, 0 ]
[]
[]
[ "python", "restart" ]
stackoverflow_0074593058_python_restart.txt
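On the question's EDIT about the MQTT connect: yes, swapping sys.exit for a retry loop keeps the script alive until the broker is reachable again. A rough fragment meant to drop into the asker's existing script — client, broker and port are the objects already defined there, and the 30-second back-off is an arbitrary choice:

import time

while True:
    try:
        client.connect(broker, port, 10)   # same call as in the question
        break                              # connected, carry on with the script
    except Exception as exc:
        print("Cannot connect, retrying in 30 s:", exc)
        time.sleep(30)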
Q: matplotlib plot monthly count in order How do I plot a monthly count of events with the right order in the x-axis? I have several dataframes like the below (this is an example): df = pd.DataFrame({'Month': [5, 6, 8, 9, 1, 2, 3, 4, 7, 10, 11, 12], 'Count': [3, 1, 6, 1, 0, 0, 0, 0, 0, 0, 0, 0]}) where I have counts of events per month, not in order. My aim is to line-plot a monthly count, and when I do fig, ax = plt.subplots(1,1) ax.grid(color='gray', linestyle='-', linewidth=0.1) plt.setp(ax, xticks=np.arange(1, 13, step=1)) ax.plot(df.Month, df.Count, marker='o') it plots in the order of the df.Month. This is not what I want: What I want in magenta (ignore the markers...) How do I get this plot? A: Using sort_values: df = df.sort_values('Month') or directly plotting from pandas: import matplotlib.ticker as mticker fig, ax = plt.subplots(1,1) df.sort_values('Month').plot( x='Month', y='Count', marker='o', legend=False, ax=ax ) ax.grid(color='gray', linestyle='-', linewidth=0.1) ax.xaxis.set_major_locator(mticker.MultipleLocator()) Output:
matplotlib plot monthly count in order
How do I plot a monthly count of events with the right order in the x-axis? I have several dataframes like the below (this is an example): df = pd.DataFrame({'Month': [5, 6, 8, 9, 1, 2, 3, 4, 7, 10, 11, 12], 'Count': [3, 1, 6, 1, 0, 0, 0, 0, 0, 0, 0, 0]}) where I have counts of events per month, not in order. My aim is to line-plot a monthly count, and when I do fig, ax = plt.subplots(1,1) ax.grid(color='gray', linestyle='-', linewidth=0.1) plt.setp(ax, xticks=np.arange(1, 13, step=1)) ax.plot(df.Month, df.Count, marker='o') it plots in the order of the df.Month. This is not what I want: What I want in magenta (ignore the markers...) How do I get this plot?
[ "Using sort_values:\ndf = df.sort_values('Month')\n\nor directly plotting from pandas:\nimport matplotlib.ticker as mticker\n\nfig, ax = plt.subplots(1,1)\ndf.sort_values('Month').plot(\n x='Month', y='Count', \n marker='o', legend=False, ax=ax\n)\nax.grid(color='gray', linestyle='-', linewidth=0.1)\nax.xaxis.set_major_locator(mticker.MultipleLocator())\n\nOutput:\n\n" ]
[ 0 ]
[]
[]
[ "matplotlib", "pandas", "python" ]
stackoverflow_0074618685_matplotlib_pandas_python.txt
Q: Insert a row after a date in a dataframe I have a Dataframe like : date col1 col2 0 2022-10-07 04:00:00 x x1 1 2022-10-08 04:00:00 y x2 I need to update a row (as dictionary) in a specific date if exist, and if it does not exist, insert the row next to the closest date. For this new given date 2022-10-07 05:00:00 (one hour later) and dic {col1:z} I would like to get : date col1 col2 0 2022-10-07 04:00:00 x x1 1 2022-10-07 05:00:00 z x1 2 2022-10-08 04:00:00 y x2 Currently I am doing this: def write(date,dic): m = df['date'] == date if m.any(): df.loc[df['date'] == date, list(dic.keys())] = list(dic.values()) else: df.loc[len(df)] = {**dic, **{'date':date}} Which means that if I can't find the date, I just add the row to the end of the df, but I want to insert it right after the previous date. (Please also see that since when I insert/update I only have col1, so col2 value will be copied from previous row somehow with ffill) A: You can set the date as index and update like: df1 = df.set_index('date') df1.loc[new_date, dic.keys()] = dic.values() df = df1.sort_index().reset_index().ffill() It will insert new date if it doesn't exist. If it exists it will update the record at that index.
Insert a row after a date in a dataframe
I have a DataFrame like: date col1 col2 0 2022-10-07 04:00:00 x x1 1 2022-10-08 04:00:00 y x2 I need to update a row (given as a dictionary) at a specific date if it exists, and if it does not exist, insert the row next to the closest date. For this new given date 2022-10-07 05:00:00 (one hour later) and dic {col1:z} I would like to get: date col1 col2 0 2022-10-07 04:00:00 x x1 1 2022-10-07 05:00:00 z x1 2 2022-10-08 04:00:00 y x2 Currently I am doing this: def write(date,dic): m = df['date'] == date if m.any(): df.loc[df['date'] == date, list(dic.keys())] = list(dic.values()) else: df.loc[len(df)] = {**dic, **{'date':date}} This means that if I can't find the date, I just add the row to the end of the df, but I want to insert it right after the previous date. (Please also note that when I insert/update I only have col1, so the col2 value should be copied from the previous row somehow, e.g. with ffill)
[ "You can set the date as index and update like:\ndf1 = df.set_index('date')\ndf1.loc[new_date, dic.keys()] = dic.values()\ndf = df1.sort_index().reset_index().ffill()\n\nIt will insert new date if it doesn't exist. If it exists it will update the record at that index.\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074618436_pandas_python.txt
Q: Keep a datetime.date in 'yyyy-mm-dd' format when using Flask's jsonify For some reason, the jsonify function is converting my datetime.date to what appears to be an HTTP date. How can I keep the date in yyyy-mm-dd format when using jsonify? test_date = datetime.date(2017, 4, 27) print(test_date) # 2017-04-27 test_date_jsonify = jsonify(test_date) print(test_date_jsonify.get_data(as_text=True)) # Thu, 27 Apr 2017 00:00:00 GMT As suggested in the comments, using jsonify(str(test_date)) returns the desired format. However, consider the following case: test_dict = {"name": "name1", "date":datetime.date(2017, 4, 27)} print(test_dict) # {"name": "name1", "date":datetime.date(2017, 4, 27)} test_dict_jsonify = jsonify(test_dict) print(test_dict_jsonify.get_data(as_text=True)) # {"date": "Thu, 27 Apr 2017 00:00:00 GMT", "name": "name1"} test_dict_jsonify = jsonify(str(test_dict)) print(test_dict_jsonify.get_data(as_text=True)) # "{"date": datetime.date(2017, 4, 27), "name": "name1"}" In this case, the str() solution does not work. A: Following this snippet you can do this: from flask.json import JSONEncoder from datetime import date class CustomJSONEncoder(JSONEncoder): def default(self, obj): try: if isinstance(obj, date): return obj.isoformat() iterable = iter(obj) except TypeError: pass else: return list(iterable) return JSONEncoder.default(self, obj) app = Flask(__name__) app.json_encoder = CustomJSONEncoder Route: import datetime as dt @app.route('/', methods=['GET']) def index(): now = dt.datetime.now() return jsonify({'now': now}) A: datetime.date is not a JSON type, so it's not serializable by default. Instead, Flask adds a hook to dump the date to a string in RFC 1123 format, which is consistent with dates in other parts of HTTP requests and responses. Use a custom JSON encoder if you want to change the format. Subclass JSONEncoder and set Flask.json_encoder to it. from flask import Flask from flask.json import JSONEncoder class MyJSONEncoder(JSONEncoder): def default(self, o): if isinstance(o, date): return o.isoformat() return super().default(o) class MyFlask(Flask): json_encoder = MyJSONEncoder app = MyFlask(__name__) It is a good idea to use ISO 8601 to transmit and store the value. It can be parsed unambiguously by JavaScript Date.parse (and other parsers). Choose the output format when you output, not when you store. A string representing an RFC 2822 or ISO 8601 date (other formats may be used, but results may be unexpected). When you load the data, there's no way to know the value was meant to be a date instead of a string (since date is not a JSON type), so you don't get a datetime.date back, you get a string. (And if you did get a date, how would it know to return date instead of datetime?) A: You can change your app's .json_encoder attribute, implementing a variant of JSONEncoder that formats dates as you see fit. A: Flask 2.2 shows a deprecation warning 'JSONEncoder' is deprecated and will be removed in Flask 2.3. Use 'Flask.json' to provide an alternate JSON implementation instead. An update is needed to remove it and/or have it work in Flask 2.3+. Another example from the Flask repository here from flask import Flask from flask.json.provider import DefaultJSONProvider class UpdatedJSONProvider(DefaultJSONProvider): def default(self, o): if isinstance(o, date) or isinstance(o, datetime): return o.isoformat() return super().default(o) app = Flask(__name__) app.json = UpdatedJSONProvider(app)
Keep a datetime.date in 'yyyy-mm-dd' format when using Flask's jsonify
For some reason, the jsonify function is converting my datetime.date to what appears to be an HTTP date. How can I keep the date in yyyy-mm-dd format when using jsonify? test_date = datetime.date(2017, 4, 27) print(test_date) # 2017-04-27 test_date_jsonify = jsonify(test_date) print(test_date_jsonify.get_data(as_text=True)) # Thu, 27 Apr 2017 00:00:00 GMT As suggested in the comments, using jsonify(str(test_date)) returns the desired format. However, consider the following case: test_dict = {"name": "name1", "date":datetime.date(2017, 4, 27)} print(test_dict) # {"name": "name1", "date":datetime.date(2017, 4, 27)} test_dict_jsonify = jsonify(test_dict) print(test_dict_jsonify.get_data(as_text=True)) # {"date": "Thu, 27 Apr 2017 00:00:00 GMT", "name": "name1"} test_dict_jsonify = jsonify(str(test_dict)) print(test_dict_jsonify.get_data(as_text=True)) # "{"date": datetime.date(2017, 4, 27), "name": "name1"}" In this case, the str() solution does not work.
[ "Following this snippet you can do this:\nfrom flask.json import JSONEncoder\nfrom datetime import date\n\n\nclass CustomJSONEncoder(JSONEncoder):\n def default(self, obj):\n try:\n if isinstance(obj, date):\n return obj.isoformat()\n iterable = iter(obj)\n except TypeError:\n pass\n else:\n return list(iterable)\n return JSONEncoder.default(self, obj)\n\napp = Flask(__name__)\napp.json_encoder = CustomJSONEncoder\n\nRoute:\nimport datetime as dt\n\[email protected]('/', methods=['GET'])\ndef index():\n now = dt.datetime.now()\n return jsonify({'now': now})\n\n", "datetime.date is not a JSON type, so it's not serializable by default. Instead, Flask adds a hook to dump the date to a string in RFC 1123 format, which is consistent with dates in other parts of HTTP requests and responses.\nUse a custom JSON encoder if you want to change the format. Subclass JSONEncoder and set Flask.json_encoder to it.\nfrom flask import Flask\nfrom flask.json import JSONEncoder\n\nclass MyJSONEncoder(JSONEncoder):\n def default(self, o):\n if isinstance(o, date):\n return o.isoformat()\n\n return super().default(o)\n\nclass MyFlask(Flask):\n json_encoder = MyJSONEncoder\n\napp = MyFlask(__name__)\n\nIt is a good idea to use ISO 8601 to transmit and store the value. It can be parsed unambiguously by JavaScript Date.parse (and other parsers). Choose the output format when you output, not when you store.\n\nA string representing an RFC 2822 or ISO 8601 date (other formats may be used, but results may be unexpected).\n\nWhen you load the data, there's no way to know the value was meant to be a date instead of a string (since date is not a JSON type), so you don't get a datetime.date back, you get a string. (And if you did get a date, how would it know to return date instead of datetime?)\n", "You can change your app's .json_encoder attribute, implementing a variant of JSONEncoder that formats dates as you see fit.\n", "Flask 2.2 shows a deprecation warning\n'JSONEncoder' is deprecated and will be removed in Flask 2.3. Use 'Flask.json' to provide an alternate JSON implementation instead.\nAn update is needed to remove it and/or have it work in Flask 2.3+. Another example from the Flask repository here\nfrom flask import Flask\nfrom flask.json.provider import DefaultJSONProvider\n\nclass UpdatedJSONProvider(DefaultJSONProvider):\n def default(self, o):\n if isinstance(o, date) or isinstance(o, datetime):\n return o.isoformat()\n return super().default(o)\n\napp = Flask(__name__)\napp.json = UpdatedJSONProvider(app)\n\n" ]
[ 59, 23, 1, 0 ]
[]
[]
[ "date", "datetime", "flask", "json", "python" ]
stackoverflow_0043663552_date_datetime_flask_json_python.txt
Q: How to store django objects as session variables ( object is not JSON serializable)? I have a simple view def foo(request): card = Card.objects.latest(datetime) request.session['card']=card For the above code I get the error "<Card: Card object> is not JSON serializable" Django version 1.6.2. What am I doing wrong ? A: In a session, I'd just store the object primary key: request.session['card'] = card.id and when loading the card from the session, obtain the card again with: try: card = Card.objects.get(id=request.session['card']) except (KeyError, Card.DoesNotExist): card = None which will set card to None if there isn't a card entry in the session or the specific card doesn't exist. By default, session data is serialised to JSON. You could also provide your own serializer, which knows how to store the card.id value or some other representation and, on deserialization, produce your Card instance again. A: Unfortunately the suggested answer does not work if the object is not a database object but some other kind of object - say, datetime or an object class Foo(object): pass that isn't a database model object. Sure, if the object happen to have some id field you can store the id field in the database and look up the value from there but in general it may not have such a simple value and the only way is to convert the data to string in such a way that you can read that string and reconstruct the object based on the information in the string. In the case of a datetime object this is made more complicated by the fact that while a naive datetime object can print out format %Z by simply not printing anything, the strptime object cannot read format %Z if there is nothing, it will choke unless there is a valid timezone specification there - so if you have a datetime object that may or may not contain a tzinfo field you really have to do strptime twice once with %Z and then if it chokes without the %Z. This is silly. It is made even sillier by the fact that datetime objects have a fromtimestamp function but no totimestamp function that uniformly produces a timestamp that fromtimestamp will read. If there is a format code that produces timestamp number I haven't found one and again, strftime/strptime suffer from the fact that they are not symmetric as described above. A: There are two simple ways to do this. If each object belongs to a single session at the same time, store session id as a model field, and update models. If an object can belong to multiple sessions at the same time, store object.id as a session variable. A: @Martijn is or might be the correct way to save the object in session variables. But the issue was solved by moving back to Django 1.5. So this issue is specifically for django 1.6.2. Hope this helps. A: --> DON'T EVER FEEL LIKE DOING SOMETHING LIKE THIS! <-- Django using session and ctypes: Plz read Martijn Pieters explanation comment below. class MyObject: def stuff(self): return "stuff..." my_obj = MyObject() request.session['id_my_obj'] = id(my_obj) ... id_my_obj = request.session.get('id_my_obj') import ctypes obj = ctypes.cast(id_my_obj, ctypes.py_object).value print(obj.stuff()) # returns "stuff..." A: Objects cannot be stored in session from Django 1.6 or above. If you don't want to change the behavior of the cookie value(of a object), you can add a dictionary there. This can be used for non database objects like class object etc. 
from django.core.serializers.json import DjangoJSONEncoder import json card_dict = card.__dict__ card_dict .pop('_state', None) #Pop which are not json serialize card_dict = json.dumps(card_dict , cls=DjangoJSONEncoder) request.session['card'] = card_dict Hope it will help you! A: In my case, as the object is not serializable (selenium webdriver), I had to use global variables, which is working great.
How to store django objects as session variables ( object is not JSON serializable)?
I have a simple view def foo(request): card = Card.objects.latest(datetime) request.session['card']=card For the above code I get the error "<Card: Card object> is not JSON serializable" Django version 1.6.2. What am I doing wrong ?
[ "In a session, I'd just store the object primary key:\nrequest.session['card'] = card.id\n\nand when loading the card from the session, obtain the card again with:\ntry:\n card = Card.objects.get(id=request.session['card'])\nexcept (KeyError, Card.DoesNotExist):\n card = None\n\nwhich will set card to None if there isn't a card entry in the session or the specific card doesn't exist.\nBy default, session data is serialised to JSON. You could also provide your own serializer, which knows how to store the card.id value or some other representation and, on deserialization, produce your Card instance again.\n", "Unfortunately the suggested answer does not work if the object is not a database object but some other kind of object - say, datetime or an object class Foo(object): pass that isn't a database model object.\nSure, if the object happen to have some id field you can store the id field in the database and look up the value from there but in general it may not have such a simple value and the only way is to convert the data to string in such a way that you can read that string and reconstruct the object based on the information in the string.\nIn the case of a datetime object this is made more complicated by the fact that while a naive datetime object can print out format %Z by simply not printing anything, the strptime object cannot read format %Z if there is nothing, it will choke unless there is a valid timezone specification there - so if you have a datetime object that may or may not contain a tzinfo field you really have to do strptime twice once with %Z and then if it chokes without the %Z. This is silly. It is made even sillier by the fact that datetime objects have a fromtimestamp function but no totimestamp function that uniformly produces a timestamp that fromtimestamp will read. If there is a format code that produces timestamp number I haven't found one and again, strftime/strptime suffer from the fact that they are not symmetric as described above.\n", "There are two simple ways to do this.\n\nIf each object belongs to a single session at the same time, store session id as a model field, and update models.\nIf an object can belong to multiple sessions at the same time, store object.id as a session variable.\n\n", "@Martijn is or might be the correct way to save the object in session variables. \nBut the issue was solved by moving back to Django 1.5. So this issue is specifically for django 1.6.2. \nHope this helps. \n", "--> DON'T EVER FEEL LIKE DOING SOMETHING LIKE THIS! <--\nDjango using session and ctypes: Plz read Martijn Pieters explanation comment below.\nclass MyObject:\n def stuff(self):\n return \"stuff...\"\n\nmy_obj = MyObject()\nrequest.session['id_my_obj'] = id(my_obj)\n\n...\n\nid_my_obj = request.session.get('id_my_obj')\n\nimport ctypes\nobj = ctypes.cast(id_my_obj, ctypes.py_object).value\n\nprint(obj.stuff()) \n# returns \"stuff...\"\n\n", "Objects cannot be stored in session from Django 1.6 or above.\nIf you don't want to change the behavior of the cookie value(of a object), you can add a dictionary there. 
This can also be used for non-database objects such as plain class instances.\nfrom django.core.serializers.json import DjangoJSONEncoder\nimport json\ncard_dict = card.__dict__\ncard_dict.pop('_state', None)  # pop fields that are not JSON serializable\ncard_dict = json.dumps(card_dict, cls=DjangoJSONEncoder)\nrequest.session['card'] = card_dict\n\nHope it will help you!\n", "In my case, as the object is not serializable (selenium webdriver), I had to use global variables, which is working great.\n" ]
[ 23, 4, 3, 0, 0, 0, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0022294788_django_python.txt
Q: How do I insert rows based on a column of start dates and end dates with python? I have a data frame of product numbers that are on sales promotions. The columns include the product number, start date, end date, promotion type and promotion description. The dates could span up to 4 months. I need to add rows to account for the months between the start and end dates. Here is an example of the data currently: import pandas as pd sales_dict = {} sales_dict['item'] = ['100179K', '100086K'] sales_dict['start_date'] = [201703, 201801] sales_dict['end_date'] = [201707, 201802] sales_dict['promotin_type'] = [1,0] sales_dict['promotion_desc'] = [0,1] df = pd.DataFrame.from_dict(sales_dict) I tried to create a data frame of dates in year_month format from the beginning of the time frame through the end then join the two datasets. But some data seemed to fall out. I also looked at Creating a single column of dates from a column of start dates and a column of end dates - python but not sure now to fill all the other columns correctly. This is what I wanted to happen. sales_dict = {} sales_dict['item'] = ['100179K','100179K','100179K','100179K','100179K','100086K','100086K'] sales_dict['start_date'] = [201703, 201704, 201705, 201706, 201707, 201801, 201802] sales_dict['promotin_type'] = [1,1,1,1,1, 0,0] sales_dict['promotion_desc'] = [0,0,0,0,0,1,1] df = pd.DataFrame.from_dict(sales_dict) A: I figured this out although it may not be elegant: df2 = pd.DataFrame() for item in df['item'].unique(): df_ = df[df['item'] == item] df_ = pd.concat([df_,df_.apply(lambda dt: pd.date_range(dt['start_date'], dt['end_date'], freq="MS"), axis = 'columns').explode(ignore_index=True)], axis=1) df_.drop(['start_date', 'end_date'], axis=1, inplace = True) # df_.iloc[:, -1:].rename({'0':'dates'},inplace=True) df_.ffill(inplace=True) df_.bfill(inplace=True) df2 = df2.append(df_) df2.rename(columns={0:'dates'},inplace=True)
How do I insert rows based on a column of start dates and end dates with python?
I have a data frame of product numbers that are on sales promotions. The columns include the product number, start date, end date, promotion type and promotion description. The dates could span up to 4 months. I need to add rows to account for the months between the start and end dates. Here is an example of the data currently: import pandas as pd sales_dict = {} sales_dict['item'] = ['100179K', '100086K'] sales_dict['start_date'] = [201703, 201801] sales_dict['end_date'] = [201707, 201802] sales_dict['promotin_type'] = [1,0] sales_dict['promotion_desc'] = [0,1] df = pd.DataFrame.from_dict(sales_dict) I tried to create a data frame of dates in year_month format from the beginning of the time frame through the end then join the two datasets. But some data seemed to fall out. I also looked at Creating a single column of dates from a column of start dates and a column of end dates - python but not sure now to fill all the other columns correctly. This is what I wanted to happen. sales_dict = {} sales_dict['item'] = ['100179K','100179K','100179K','100179K','100179K','100086K','100086K'] sales_dict['start_date'] = [201703, 201704, 201705, 201706, 201707, 201801, 201802] sales_dict['promotin_type'] = [1,1,1,1,1, 0,0] sales_dict['promotion_desc'] = [0,0,0,0,0,1,1] df = pd.DataFrame.from_dict(sales_dict)
[ "I figured this out although it may not be elegant:\ndf2 = pd.DataFrame()\nfor item in df['item'].unique():\n df_ = df[df['item'] == item]\n df_ = pd.concat([df_,df_.apply(lambda dt: pd.date_range(dt['start_date'], dt['end_date'], freq=\"MS\"), axis = 'columns').explode(ignore_index=True)], axis=1)\n df_.drop(['start_date', 'end_date'], axis=1, inplace = True)\n # df_.iloc[:, -1:].rename({'0':'dates'},inplace=True)\n df_.ffill(inplace=True)\n df_.bfill(inplace=True)\n df2 = df2.append(df_)\n\ndf2.rename(columns={0:'dates'},inplace=True)\n\n" ]
[ 1 ]
[]
[]
[ "data_wrangling", "pandas", "python" ]
stackoverflow_0074616848_data_wrangling_pandas_python.txt
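The self-answer above loops over items and relies on DataFrame.append, which has been removed in recent pandas. A loop-free sketch on the sample data from the question, using pd.period_range plus explode; the column names come from the question, and keeping the monthly values as yyyymm integers in start_date is an assumption made to match the desired output:

import pandas as pd

df = pd.DataFrame({
    'item': ['100179K', '100086K'],
    'start_date': [201703, 201801],
    'end_date': [201707, 201802],
    'promotin_type': [1, 0],
    'promotion_desc': [0, 1],
})

start = pd.to_datetime(df['start_date'].astype(str), format='%Y%m')
end = pd.to_datetime(df['end_date'].astype(str), format='%Y%m')

# one list of yyyymm values per row, then explode to one row per month
df['start_date'] = [[int(p.strftime('%Y%m')) for p in pd.period_range(s, e, freq='M')]
                    for s, e in zip(start, end)]
out = df.drop(columns=['end_date']).explode('start_date').reset_index(drop=True)
print(out)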
Q: Can't store a Selenium web driver object to recover it through Django views After days of research, I wasn't able to properly store a Selenium web driver object to recover it through different Django views. In fact, My project has only one view, and all I need is to recover the same instance of the web driver object every time that view is called. All my app does is making AJAX post requests to the view and updating the frontend and some data in the web driver window. Having initialized the driver as driver = webdriver.Chrome(executable_path=driverpath, desired_capabilities=caps) , these are all the things that I tried: 1) Storing the object in request.session array. Of course this doesn't work, a web driver object is to complex to be JSON serialized. TypeError: Object of type WebDriver is not JSON serializable 2) Pickle Serialize: Didn't work. A code like pickle.dumps(driver, open( "driver.p", "wb" )) throws this error AttributeError: Can't pickle local object '_createenviron.<locals>.encode' 3) Creating a new driver and assigning to its session_id attribute the previous web driver session_id value. Didn't work. This was the approach: request.session['driver_id'] = driver.session_id #And then on another view call: chrome_options = Options() chrome_options.add_argument("--headless") #to prevent opening a new window new_driver = webdriver.Chrome(options=chrome_options) new_driver.session_id = request.session['driver_id'] 4) Using Ctypes: This is the only solution that works some of the times, therefore the only solution that lets me use my project. As answered on This Question by Slipstream, this would be the approach: import ctypes request.session['id_my_obj'] = id(driver) id_my_obj = request.session.get('id_my_obj') obj = ctypes.cast(id_my_obj, ctypes.py_object).value As Martijn Pieters said on that answer, "this is a monumentally Bad Idea. If you are hosting Django in a multiprocess or multi-machine setup or let my_obj be garbage collected, this will not only not work, but WILL lead to memory corruption." I'm pretty sure that this is the reason why it fails and even crashes django sometimes. But sadly this is the only workaround. My question is, is there a proper way to serialize a Selenium web driver object, or at least store it in a file or in any way that can be recoverable later? Since the ctypes solution is the only one that works, is there a way to improve it to make it work everytime? I don't mind security implications since this is only for local use. Python version: 3.8.5 Django version: 3.1.1 Selenium version: 4.0.0a1 Thank you. A: I was in the same boat. The best solution that worked great for me was to use global variables. In this case, before creating the web driver object, just call "global driver" on each view that you need that same object. So, I would do this way: global driver driver = webdriver.Chrome(executable_path=driverpath, desired_capabilities=caps) On any other view in your views.py file, just call the "global driver" again and use the same object instance already instantiated before.
Can't store a Selenium web driver object to recover it through Django views
After days of research, I wasn't able to properly store a Selenium web driver object to recover it through different Django views. In fact, My project has only one view, and all I need is to recover the same instance of the web driver object every time that view is called. All my app does is making AJAX post requests to the view and updating the frontend and some data in the web driver window. Having initialized the driver as driver = webdriver.Chrome(executable_path=driverpath, desired_capabilities=caps) , these are all the things that I tried: 1) Storing the object in request.session array. Of course this doesn't work, a web driver object is to complex to be JSON serialized. TypeError: Object of type WebDriver is not JSON serializable 2) Pickle Serialize: Didn't work. A code like pickle.dumps(driver, open( "driver.p", "wb" )) throws this error AttributeError: Can't pickle local object '_createenviron.<locals>.encode' 3) Creating a new driver and assigning to its session_id attribute the previous web driver session_id value. Didn't work. This was the approach: request.session['driver_id'] = driver.session_id #And then on another view call: chrome_options = Options() chrome_options.add_argument("--headless") #to prevent opening a new window new_driver = webdriver.Chrome(options=chrome_options) new_driver.session_id = request.session['driver_id'] 4) Using Ctypes: This is the only solution that works some of the times, therefore the only solution that lets me use my project. As answered on This Question by Slipstream, this would be the approach: import ctypes request.session['id_my_obj'] = id(driver) id_my_obj = request.session.get('id_my_obj') obj = ctypes.cast(id_my_obj, ctypes.py_object).value As Martijn Pieters said on that answer, "this is a monumentally Bad Idea. If you are hosting Django in a multiprocess or multi-machine setup or let my_obj be garbage collected, this will not only not work, but WILL lead to memory corruption." I'm pretty sure that this is the reason why it fails and even crashes django sometimes. But sadly this is the only workaround. My question is, is there a proper way to serialize a Selenium web driver object, or at least store it in a file or in any way that can be recoverable later? Since the ctypes solution is the only one that works, is there a way to improve it to make it work everytime? I don't mind security implications since this is only for local use. Python version: 3.8.5 Django version: 3.1.1 Selenium version: 4.0.0a1 Thank you.
[ "I was in the same boat. The best solution that worked great for me was to use global variables.\nIn this case, before creating the web driver object, just call \"global driver\" on each view that you need that same object.\nSo, I would do this way:\nglobal driver\ndriver = webdriver.Chrome(executable_path=driverpath, desired_capabilities=caps)\n\nOn any other view in your views.py file, just call the \"global driver\" again and use the same object instance already instantiated before.\n" ]
[ 0 ]
[]
[]
[ "django", "python", "selenium", "session", "store" ]
stackoverflow_0063857592_django_python_selenium_session_store.txt
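A compact sketch (not from the original thread) of the same single-process idea as the accepted answer above: the driver is cached in one module-level variable so every view reuses it instead of re-declaring "global" each time. The view name and target URL are hypothetical, the Chrome constructor arguments from the question would be passed where noted, and this still breaks under a multi-process or multi-machine deployment, exactly as the question warns.

from django.http import HttpResponse
from selenium import webdriver

_driver = None  # lives only as long as this single Django process

def get_driver():
    global _driver
    if _driver is None:
        # the constructor arguments from the question (executable_path,
        # desired_capabilities) would go here
        _driver = webdriver.Chrome()
    return _driver

def my_view(request):  # hypothetical view name
    driver = get_driver()
    driver.get("https://example.com")  # hypothetical target URL
    return HttpResponse(driver.title)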
Q: cv2 import error on Jupyter notebook I'm trying to import cv2 on Jupyter notebook but I get this error: ImportError: No module named cv2 I am frustrated because I'm working on this simple issue for hours now. it works on Pycharm but not on Jupiter notebook. I've already installed cv2 into Python2.7's site packages, configured Jupyter's kernel to python2, browsed the documentation but I still don't get what I am missing ? (I'm using windows 10 and working with microsoft cognitives api, that's why I need to import this package.) here is the code: <ipython-input-1-9dee6ed62d2d> in <module>() ----> 1 import cv2 2 cv2.__version__ What should I do in order to make this work ? A: Is your python path looking in the right place? Check where python is looking for the module. Within the notebook try: import os os.sys.path Is the cv2 module located in any of those directories? If not your path is looking in the wrong place. If it is overlooking the install location, append it to your python path. You can follow the instructions here. A: I didn't have the openCV installation in my Python3 kernel, so I installed it by activating the specific environment and running this in the command prompt: pip install opencv-python How to find and activate my environment? To list all of Your conda environments, run this command: conda info --envs You will get something like this: ipykernel_py2 D:\Anaconda\envs\ipykernel_py2 root D:\Anaconda After that, activate the environment that is complaining for the missing cv2 and run the pip install opencv-python command. How to activate an environment? Just run the command: activate env_name where env_name is the wanted environment (for example, You could type activate ipykernel_py2 if You wanted to access the first of the two environments listed above). Note: If You are on Linux, You need to type source activate env_name. A: Go to your notebook, in menu section kernel -> Change kernel -> Python<desired version> Now in the notebook run following command to install opencv2 in the selected environment kernel python2: !pip install opencv-python python3: !pip3 install opencv-python A: Binmosa's explanation is great and to the point. As an alternative (easier, but I'm pretty sure it's just a band-aid fix), if you write: import sys !{sys.executable} -m pip install opencv-python directly into your notebook, you'll be able to actually install the module in the notebook itself. The longer explanation is interesting and informative, though. Link: https://jakevdp.github.io/blog/2017/12/05/installing-python-packages-from-jupyter/ A: To make this clear for those who are having the same issue: By default: Anaconda (jupyter notebook) has its own version of Python & packages once it has been installed on your PC. If you have Python x.x installed on your PC, and you installed OpenCV or -whatever packages- using the package manager of this python version, it does NOT mean your jupyter notebook will get access to these python packages you installed earlier. They are not living in the same folder. To illustrate this, open your windows CMD and write : python then write: import os os.path you will get the path of your python. in my case (C:\Python35) Now open the Anaconda Prompt and write the same commands again: you will get the anaconda's python path. In my case (C:\Users\MY_NAME\Anaconda3). 
As you can see, there are two different paths of python, so make sure that your first step in diagnosing such error (No module named x) is to ask yourself whether you installed the package in the right place or not! N.B: within Anaconda itself you can create environments, each environment may have different packages installed in it, so you also have to make sure you're in the right environment and it is the active one. A: It is because of opencv library. Try running this command in anaconda prompt: conda install -c conda-forge opencv A: You can simply open Jupyter Notebook and in any of the cell, just write: pip install opencv-python It will automatically install the file Note : Keep turn ON your Internet connection Then in next cell : import cv2 It will work. A: I added \envs\myenv\Library\bin also in the path variable and it got solved. A: You will need to install ipykernel for the jupyter notebook. Follow the following steps: python -m virtualenv env source env/bin/acitivate pip install opencv-contrib-python pip install ipykernel --upgrade python -m ipykernel install --user jupyter notebook A: I had this issue in my Jupyter Notebook after I had "installed" the opencv package, using Anaconda Navigator, on my base (root) environment. However, after "installing" the package and its dependencies, Anaconda Navigator showed a reminder popup to update to the next Anaconda Navigator version. I ignored this at first, but couldn't use the opencv package in my Jupyter Notebook. After I did update Anaconda Navigator to the newer version, the opencv package install worked fine. A: pip install opencv-python This solved the error for me in MacOS. A: I had similar problem. None of the above solution worked for me. I did below in my notebook and that solved the issue !pip install opencv-python !pip install opencv-python-headless A: I hope you have already activated the environment you know OpenCV is installed in but is not running/import error in jupyter notebook. If not then run the below command and activate your environment before running the jupyter notebook. conda activate /Users/prajendr/anaconda3/envs/cvpy39 Then, check all the anaconda environments on your machine using the below command on the jupyter notebook. !conda info --envs The output would be similar - Try to install OpenCV in the environment again. You know that you have OpenCV installed in this anaconda environment - cvpy39 and the path is "/Users/prajendr/anaconda3/envs/cvpy39/lib/python3.9/site-packages" Then type the below commands to see if the OpenCV path was imported in the notebook or not? import os os.sys.path you see the OpenCV path is not in this list so you need to manually import it. Then in a cell type the below set of code. Make sure to change the python path of the environment to yours. import sys path_to_module = "/User/prajendr/anaconda3/envs/cvpy39/lib/python3.9/site-packages/" sys.path.append(path_to_module) import cv2 You will now be able to import OpenCV to your jupyter notebook. A: You can simply try this in your jupyter notebook cell `%pip install opencv-python` no matter which python version you're using. you may need to restart kernel to use updated package
cv2 import error on Jupyter notebook
I'm trying to import cv2 on Jupyter notebook but I get this error: ImportError: No module named cv2 I am frustrated because I'm working on this simple issue for hours now. it works on Pycharm but not on Jupiter notebook. I've already installed cv2 into Python2.7's site packages, configured Jupyter's kernel to python2, browsed the documentation but I still don't get what I am missing ? (I'm using windows 10 and working with microsoft cognitives api, that's why I need to import this package.) here is the code: <ipython-input-1-9dee6ed62d2d> in <module>() ----> 1 import cv2 2 cv2.__version__ What should I do in order to make this work ?
[ "Is your python path looking in the right place? Check where python is looking for the module. Within the notebook try:\nimport os\nos.sys.path\n\nIs the cv2 module located in any of those directories? If not your path is looking in the wrong place. If it is overlooking the install location, append it to your python path. You can follow the instructions here.\n", "I didn't have the openCV installation in my Python3 kernel, so I installed it by activating the specific environment and running this in the command prompt:\npip install opencv-python\n\nHow to find and activate my environment?\nTo list all of Your conda environments, run this command:\nconda info --envs\n\nYou will get something like this:\nipykernel_py2 D:\\Anaconda\\envs\\ipykernel_py2\nroot D:\\Anaconda\n\nAfter that, activate the environment that is complaining for the missing cv2 and run the pip install opencv-python command.\nHow to activate an environment?\nJust run the command:\nactivate env_name\n\nwhere env_name is the wanted environment (for example, You could type activate ipykernel_py2 if You wanted to access the first of the two environments listed above).\nNote: If You are on Linux, You need to type source activate env_name.\n", "Go to your notebook, in menu section\nkernel -> Change kernel -> Python<desired version>\nNow in the notebook run following command to install opencv2 in the selected environment kernel\npython2:\n!pip install opencv-python\npython3:\n!pip3 install opencv-python\n", "Binmosa's explanation is great and to the point. As an alternative (easier, but I'm pretty sure it's just a band-aid fix), if you write:\n import sys\n !{sys.executable} -m pip install opencv-python\n\ndirectly into your notebook, you'll be able to actually install the module in the notebook itself.\nThe longer explanation is interesting and informative, though. Link: https://jakevdp.github.io/blog/2017/12/05/installing-python-packages-from-jupyter/\n", "To make this clear for those who are having the same issue:\nBy default: Anaconda (jupyter notebook) has its own version of Python & packages once it has been installed on your PC.\nIf you have Python x.x installed on your PC, and you installed OpenCV or -whatever packages- using the package manager of this python version, it does NOT mean your jupyter notebook will get access to these python packages you installed earlier. They are not living in the same folder.\nTo illustrate this, open your windows CMD and write :\npython\n\nthen write:\nimport os\nos.path\n\nyou will get the path of your python. in my case (C:\\Python35)\nNow open the Anaconda Prompt and write the same commands again:\nyou will get the anaconda's python path. 
In my case (C:\\Users\\MY_NAME\\Anaconda3).\nAs you can see, there are two different paths of python, so make sure that your first step in diagnosing such error (No module named x) is to ask yourself whether you installed the package in the right place or not!\nN.B: within Anaconda itself you can create environments, each environment may have different packages installed in it, so you also have to make sure you're in the right environment and it is the active one.\n", "It is because of opencv library.\nTry running this command in anaconda prompt:\nconda install -c conda-forge opencv\n\n", "You can simply open Jupyter Notebook and in any of the cell, just write:\npip install opencv-python\n\nIt will automatically install the file\nNote : Keep turn ON your Internet connection\nThen in next cell :\nimport cv2\n\nIt will work.\n", "I added \\envs\\myenv\\Library\\bin also in the path variable and it got solved.\n", "You will need to install ipykernel for the jupyter notebook. Follow the following steps:\npython -m virtualenv env\nsource env/bin/acitivate\npip install opencv-contrib-python\npip install ipykernel --upgrade\npython -m ipykernel install --user\njupyter notebook\n\n", "I had this issue in my Jupyter Notebook after I had \"installed\" the opencv package, using Anaconda Navigator, on my base (root) environment. \nHowever, after \"installing\" the package and its dependencies, Anaconda Navigator showed a reminder popup to update to the next Anaconda Navigator version. I ignored this at first, but couldn't use the opencv package in my Jupyter Notebook.\nAfter I did update Anaconda Navigator to the newer version, the opencv package install worked fine.\n", "pip install opencv-python\n\nThis solved the error for me in MacOS.\n", "I had similar problem. None of the above solution worked for me. I did below in my notebook and that solved the issue\n!pip install opencv-python\n!pip install opencv-python-headless\n\n", "I hope you have already activated the environment you know OpenCV is installed in but is not running/import error in jupyter notebook.\nIf not then run the below command and activate your environment before running the jupyter notebook.\nconda activate /Users/prajendr/anaconda3/envs/cvpy39\n\nThen, check all the anaconda environments on your machine using the below command on the jupyter notebook.\n!conda info --envs\n\nThe output would be similar -\n\nTry to install OpenCV in the environment again.\n\nYou know that you have OpenCV installed in this anaconda environment - cvpy39 and the path is \"/Users/prajendr/anaconda3/envs/cvpy39/lib/python3.9/site-packages\"\nThen type the below commands to see if the OpenCV path was imported in the notebook or not?\nimport os\nos.sys.path\n\n\nyou see the OpenCV path is not in this list so you need to manually import it.\nThen in a cell type the below set of code. Make sure to change the python path of the environment to yours.\nimport sys\npath_to_module = \"/User/prajendr/anaconda3/envs/cvpy39/lib/python3.9/site-packages/\"\nsys.path.append(path_to_module)\nimport cv2\n\nYou will now be able to import OpenCV to your jupyter notebook.\n", "You can simply try this in your jupyter notebook cell `%pip install opencv-python`\n\nno matter which python version you're using. you may need to restart kernel to use updated package\n" ]
[ 17, 13, 7, 6, 5, 3, 3, 1, 1, 0, 0, 0, 0, 0 ]
[ "One of possibility is that you could have written import cv2 and its utilisation in separate cells of jupyter notebook.If this is the case then first run the cell having import cv2 part and then run the cell utilising the cv2 library.\n" ]
[ -1 ]
[ "jupyter_notebook", "opencv", "python" ]
stackoverflow_0038109270_jupyter_notebook_opencv_python.txt
Q: Read and Write Structures in the Beckhoff Plc with Python Pyads Module(ADSError: symbol not found (1808)) I am trying to read and write to structure variables in the CX9020 Benchoff Plc at Linux. I am doing the same thing as in the Pyads documentation example but I am getting error. I added definitions and error to below . Thanks for your help. PLC Definition Code : TYPE sample_structure : STRUCT rVar : LREAL; rVar2 : LREAL; rVar3 : LREAL; rVar4 : ARRAY [1..3] OF LREAL; END_STRUCT END_TYPE Python Code : import sys import pyads PLC_AMS_ID= '5.41.49.218.1.1' SENDER_AMS = '192.168.0.5.1.1' PLC_IP = '192.168.0.8' PLC_USERNAME = 'Administrator' PLC_PASSWORD = '1' ROUTE_NAME = 'CX-682843' HOSTNAME = '192.168.0.5' # or IP vel_f,vel_b,vel_l,vel_r =0.0,0.0,0.0,0.0 sol_hiz,sag_hiz=0,0 vel_msg=0 pyads.set_local_address(SENDER_AMS) pyads.add_route_to_plc(SENDER_AMS, HOSTNAME, PLC_IP, PLC_USERNAME, PLC_PASSWORD, route_name=ROUTE_NAME) plc = pyads.Connection (PLC_AMS_ID, 851,PLC_IP) plc.open () plc.write_by_name('GVL.sample_structure',[11.1, 22.2, 33.3, 44.4, 55.5, 66.6],pyads.PLCTYPE_LREAL * 6) plc.read_by_name('GVL.sample_structure', pyads.PLCTYPE_LREAL * 6) Error Message : 2022-07-26T08:43:12+0300 Info: Connected to 192.168.0.8 Traceback (most recent call last): File "sampleStruct.py", line 22, in <module> plc.write_by_name('GVL.sample_structure',[11.1, 22.2, 33.3, 44.4, 55.5, 66.6],pyads.PLCTYPE_LREAL * 6) File "/usr/local/lib/python3.6/dist-packages/pyads/ads.py", line 900, in write_by_name self._port, self._adr, data_name, value, plc_datatype, handle=handle File "/usr/local/lib/python3.6/dist-packages/pyads/pyads_ex.py", line 1018, in adsSyncWriteByNameEx handle = adsGetHandle(port, address, data_name) File "/usr/local/lib/python3.6/dist-packages/pyads/pyads_ex.py", line 777, in adsGetHandle PLCTYPE_STRING, File "/usr/local/lib/python3.6/dist-packages/pyads/pyads_ex.py", line 638, in adsSyncReadWriteReqEx2 raise ADSError(err_code) pyads.pyads_ex.ADSError: ADSError: symbol not found (1808). 2022-07-26T08:43:12+0300 Info: connection closed by remote A: You've created the structure , but you haven't done the implementation in GVL. You have to add in GVL: VAR_GLOBAL sample_structure : sample_structure; END_VAR
Read and Write Structures in the Beckhoff Plc with Python Pyads Module(ADSError: symbol not found (1808))
I am trying to read and write to structure variables in the CX9020 Benchoff Plc at Linux. I am doing the same thing as in the Pyads documentation example but I am getting error. I added definitions and error to below . Thanks for your help. PLC Definition Code : TYPE sample_structure : STRUCT rVar : LREAL; rVar2 : LREAL; rVar3 : LREAL; rVar4 : ARRAY [1..3] OF LREAL; END_STRUCT END_TYPE Python Code : import sys import pyads PLC_AMS_ID= '5.41.49.218.1.1' SENDER_AMS = '192.168.0.5.1.1' PLC_IP = '192.168.0.8' PLC_USERNAME = 'Administrator' PLC_PASSWORD = '1' ROUTE_NAME = 'CX-682843' HOSTNAME = '192.168.0.5' # or IP vel_f,vel_b,vel_l,vel_r =0.0,0.0,0.0,0.0 sol_hiz,sag_hiz=0,0 vel_msg=0 pyads.set_local_address(SENDER_AMS) pyads.add_route_to_plc(SENDER_AMS, HOSTNAME, PLC_IP, PLC_USERNAME, PLC_PASSWORD, route_name=ROUTE_NAME) plc = pyads.Connection (PLC_AMS_ID, 851,PLC_IP) plc.open () plc.write_by_name('GVL.sample_structure',[11.1, 22.2, 33.3, 44.4, 55.5, 66.6],pyads.PLCTYPE_LREAL * 6) plc.read_by_name('GVL.sample_structure', pyads.PLCTYPE_LREAL * 6) Error Message : 2022-07-26T08:43:12+0300 Info: Connected to 192.168.0.8 Traceback (most recent call last): File "sampleStruct.py", line 22, in <module> plc.write_by_name('GVL.sample_structure',[11.1, 22.2, 33.3, 44.4, 55.5, 66.6],pyads.PLCTYPE_LREAL * 6) File "/usr/local/lib/python3.6/dist-packages/pyads/ads.py", line 900, in write_by_name self._port, self._adr, data_name, value, plc_datatype, handle=handle File "/usr/local/lib/python3.6/dist-packages/pyads/pyads_ex.py", line 1018, in adsSyncWriteByNameEx handle = adsGetHandle(port, address, data_name) File "/usr/local/lib/python3.6/dist-packages/pyads/pyads_ex.py", line 777, in adsGetHandle PLCTYPE_STRING, File "/usr/local/lib/python3.6/dist-packages/pyads/pyads_ex.py", line 638, in adsSyncReadWriteReqEx2 raise ADSError(err_code) pyads.pyads_ex.ADSError: ADSError: symbol not found (1808). 2022-07-26T08:43:12+0300 Info: connection closed by remote
[ "You've created the structure , but you haven't done the implementation in GVL.\nYou have to add in GVL:\nVAR_GLOBAL\n\n sample_structure : sample_structure;\n\nEND_VAR\n\n" ]
[ 0 ]
[]
[]
[ "linux", "plc", "python", "structure" ]
stackoverflow_0073118844_linux_plc_python_structure.txt
Q: Find missing numbers in a column dataframe pandas I have a dataframe with stores and its invoices numbers and I need to find the missing consecutive invoices numbers per Store, for example: df1 = pd.DataFrame() df1['Store'] = ['A','A','A','A','A','B','B','B','B','C','C','C','D','D'] df1['Invoice'] = ['1','2','5','6','8','20','23','24','30','200','202','203','204','206'] Store Invoice 0 A 1 1 A 2 2 A 5 3 A 6 4 A 8 5 B 20 6 B 23 7 B 24 8 B 30 9 C 200 10 C 202 11 C 203 12 D 204 13 D 206 And I want a dataframe like this: Store MissInvoice 0 A 3 1 A 4 2 A 7 3 B 21 4 B 22 5 B 25 6 B 26 7 B 27 8 B 28 9 B 29 10 C 201 11 D 205 Thanks in advance! A: You can use groupby.apply to compute a set difference with the range from the min to max value. Then explode: (df1.astype({'Invoice': int}) .groupby('Store')['Invoice'] .apply(lambda s: set(range(s.min(), s.max())).difference(s)) .explode().reset_index() ) NB. if you want to ensure having sorted values, use lambda s: sorted(set(range(s.min(), s.max())).difference(s)). Output: Store Invoice 0 A 3 1 A 4 2 A 7 3 B 21 4 B 22 5 B 25 6 B 26 7 B 27 8 B 28 9 B 29 10 C 201 11 D 205 A: Here's an approach: import pandas as pd import numpy as np df1 = pd.DataFrame() df1['Store'] = ['A','A','A','A','A','B','B','B','B','C','C','C'] df1['Invoice'] = ['1','2','5','6','8','20','23','24','30','200','202','203'] df1['Invoice'] = df1['Invoice'].astype(int) df2 = df1.groupby('Store')['Invoice'].agg(['min','max']) df2['MissInvoice'] = [[]]*len(df2) for store,row in df2.iterrows(): df2.at[store,'MissInvoice'] = np.setdiff1d(np.arange(row['min'],row['max']+1), df1.loc[df1['Store'] == store, 'Invoice']) df2 = df2.explode('MissInvoice').drop(columns = ['min','max']).reset_index() The resulting dataframe df2: Store MissInvoice 0 A 3 1 A 4 2 A 7 3 B 21 4 B 22 5 B 25 6 B 26 7 B 27 8 B 28 9 B 29 10 C 201 Note: Store D is absent from the dataframe in my code because it is omitted from the lines in the question defining df1.
Find missing numbers in a column dataframe pandas
I have a dataframe with stores and its invoices numbers and I need to find the missing consecutive invoices numbers per Store, for example: df1 = pd.DataFrame() df1['Store'] = ['A','A','A','A','A','B','B','B','B','C','C','C','D','D'] df1['Invoice'] = ['1','2','5','6','8','20','23','24','30','200','202','203','204','206'] Store Invoice 0 A 1 1 A 2 2 A 5 3 A 6 4 A 8 5 B 20 6 B 23 7 B 24 8 B 30 9 C 200 10 C 202 11 C 203 12 D 204 13 D 206 And I want a dataframe like this: Store MissInvoice 0 A 3 1 A 4 2 A 7 3 B 21 4 B 22 5 B 25 6 B 26 7 B 27 8 B 28 9 B 29 10 C 201 11 D 205 Thanks in advance!
[ "You can use groupby.apply to compute a set difference with the range from the min to max value. Then explode:\n(df1.astype({'Invoice': int})\n .groupby('Store')['Invoice']\n .apply(lambda s: set(range(s.min(), s.max())).difference(s))\n .explode().reset_index()\n)\n\nNB. if you want to ensure having sorted values, use lambda s: sorted(set(range(s.min(), s.max())).difference(s)).\nOutput:\n Store Invoice\n0 A 3\n1 A 4\n2 A 7\n3 B 21\n4 B 22\n5 B 25\n6 B 26\n7 B 27\n8 B 28\n9 B 29\n10 C 201\n11 D 205\n\n", "Here's an approach:\nimport pandas as pd\nimport numpy as np\n\ndf1 = pd.DataFrame()\ndf1['Store'] = ['A','A','A','A','A','B','B','B','B','C','C','C']\ndf1['Invoice'] = ['1','2','5','6','8','20','23','24','30','200','202','203']\ndf1['Invoice'] = df1['Invoice'].astype(int)\n\ndf2 = df1.groupby('Store')['Invoice'].agg(['min','max'])\ndf2['MissInvoice'] = [[]]*len(df2)\nfor store,row in df2.iterrows():\n df2.at[store,'MissInvoice'] = np.setdiff1d(np.arange(row['min'],row['max']+1), \n df1.loc[df1['Store'] == store, 'Invoice'])\ndf2 = df2.explode('MissInvoice').drop(columns = ['min','max']).reset_index()\n\nThe resulting dataframe df2:\n Store MissInvoice\n0 A 3\n1 A 4\n2 A 7\n3 B 21\n4 B 22\n5 B 25\n6 B 26\n7 B 27\n8 B 28\n9 B 29\n10 C 201\n\nNote: Store D is absent from the dataframe in my code because it is omitted from the lines in the question defining df1.\n" ]
[ 4, 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074618512_dataframe_pandas_python.txt
Q: Trying to get specific words from large body of text? Python I am trying to write a script in Python that, given the following large body of text, will find the username and password. Then I want it to be written into a row in a CSV. This script will run every day and add a new row each time a new account is created (usernames are unique). TEXT TEXT username: Test_user TEXT password: 123456 TEXT TEXT I tried adding the body to a list and indexing values, but that failed because the text could be different each time. A: This is a low effort response but you have a low effort question https://help.relativity.com/RelativityOne/Content/Relativity/Regular_expressions/Searching_with_regular_expressions.htm The answer is regular expressions.
Trying to get specific words from large body of text? Python
I am trying to write a script in Python that, given the following large body of text, will find the username and password. Then I want it to be written into a row in a CSV. This script will run every day and add a new row each time a new account is created (usernames are unique). TEXT TEXT username: Test_user TEXT password: 123456 TEXT TEXT I tried adding the body to a list and indexing values, but that failed because the text could be different each time.
[ "This is a low effort response but you have a low effort question https://help.relativity.com/RelativityOne/Content/Relativity/Regular_expressions/Searching_with_regular_expressions.htm\nThe answer is regular expressions.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074618887_python.txt
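A minimal sketch of the regex approach the answer above points to, applied to the sample text from this question; the label patterns ("username:", "password:") and the CSV layout are assumptions taken from the question, not code from the original answer.

import csv
import re

text = "TEXT TEXT username: Test_user TEXT password: 123456 TEXT TEXT"

# Capture the token that follows each label; \S+ stops at the next whitespace.
user_match = re.search(r"username:\s*(\S+)", text)
pass_match = re.search(r"password:\s*(\S+)", text)

if user_match and pass_match:
    with open("accounts.csv", "a", newline="") as f:
        csv.writer(f).writerow([user_match.group(1), pass_match.group(1)])

Opening the file in append mode adds one row per run, which matches the daily-run requirement; keeping usernames unique would additionally need a read of the existing rows before writing.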
Q: How do I create unlimited inputs in Python? I'm supposed to write a program that will determine letter grades (A, B, C, D, F), track how many students are passing and failing, and display the class average. One part that is getting me is that "the program will be able to handle as many students as the user indicates are in this class." How to I get to create unlimited inputs - as much as the user wants. I basically have a framework upon what I should do, but I'm stuck on how I could create as many inputs as the user wants and then use that information on the other functions (how I could get all those info. into another function). If any of you guys can tell me how to create unlimited number of inputs, it would be greatly appreciated!! Have a great day guys! :) My code: studentScore = input("Grade for a student: ") fail = 0 def determineGrade (studentScore): if studentScore <= 40: print 'F' elif studentScore <= 50: print 'D' elif studentScore <= 60: print 'C' elif studentScore <= 70: print 'B' elif studentScore <= 100: print 'A' else: print 'Invalid' def determinePass (studentScore): for i in range(): if studentScore <= 40: fail += 1 else: Pass += 1 def classAverage (studentScore): determineGrade(studentScore) determinePass(studentScore) A: The infinite input can be done using the while loop. You can save that input to the other data structure such a list, but you can also put below it the code. while True: x = input('Enter something') determineGrade(x) determinePass(x) A: Try this out. while True: try: variable = int(input("Enter your input")) # your code except (EOFError,ValueError): break EOFError- You will get this error if you are taking input from a file. ValueError- In case wrong input is provided. A: To ask for data an unlimited number of times, you want a while loop. scores=[] while True: score=input("Students score >>") #asks for an input if score in ("","q","quit","e","end","exit"): #if the input was any of these strings, stop asking for input. break elif score.isdigit(): #if the input was a number, add it to the list. scores.append(int(score)) else: #the user typed in nonsense, probably a typo, ask them to try again print("invalid score, please try again or press enter to end list") #you now have an array scores to process as you see fit. A: Have a look into and get the idea of https://docs.python.org/3/library/itertools.html, so just for the letters itertools.cycle('ABCDF') might fit. Or for the scores: import random def next_input(): return random.randint(1, 100) if __name__ == '__main__': while True: studentScore = next_input() print(f"score: {studentScore:3}") Further read (for probability distributions) could be https://numpy.org/doc/stable/reference/random/index.html. A: Sorry if I am a little late, but I will provide another example in case this is an assignment that may have others confused. I tried to approach it differently than using while True: as someone already explained this. I will be using a sentinel value(s) or the value(s) that cause a loop to terminate. 
See below: """ I kept your function the same besides removing the invalid conditional branch and adding a print statement """ def determineGrade (studentScore): print("Student earned the following grade: ", end = " ") if studentScore <= 40: print ('F') elif studentScore <= 50: print ('D') elif studentScore <= 60: print ('C') elif studentScore <= 70: print ('B') elif studentScore <= 100: print ('A') """ I kept this function very similar as well except I used a list initialized in main so you have a history of all student scores input and how many passed or failed. I also put the fail and passing variables here for readability and to avoid scope problems. """ def determinePass (score_list): fail = 0 passing = 0 for i in range(len(score_list)): if score_list[i] <= 40: fail += 1 else: passing += 1 print("Students passing: {}".format(passing)) print("Students failing: {}".format(fail)) """ I finished this function by using basic list functions that calculate the average. In the future, use the keyword pass or a stub in your definition if not finished so you can still test it :) """ def classAverage (score_list): avg = sum(score_list) / len(score_list) print("Class Average: {}".format(avg)) """ MAIN """ if __name__ == "__main__": # Makes sentinel value known to the user (any negative number). print("Welcome. Input a negative integer at any time to exit.") # Wrapped input in float() so input only accepts whole and decimal point numbers. studentScore = float(input("Grade for a student: ")) # Declares an empty list. Remember that they are mutable meaning they can change even in functions. score_list = [] # Anything below 0 is considered a sentinel value or what terminates the loop. while studentScore >= 0: # If input score is between 0-100: if studentScore <= 100: # Input score is added to our list for use in the functions. score_list.append(studentScore) determineGrade(studentScore) determinePass(score_list) classAverage(score_list) # If a number beyond 100 is input as a score. else: print("Invalid. Score must be between 0-100") # Used to avoid infinite loop and allow as many valid inputs as desired. print("Welcome. Input a negative integer at any time to exit.") studentScore = float(input("Grade for a student: ")) Some important notes I would like to add. One, a reference to what techniques were introduced in this example with a little more detail: (https://redirect.cs.umbc.edu/courses/201/fall16/labs/lab05/lab05_preLab.shtml). Second, I tried to follow my formatting based on the functional requirements provided in your code and explanation, but since I did not have the guidelines, you may need to reformat some things. Third, I tried to use techniques that you or someone is likely learning in the near future or already learned up to this assignment. As you gain experience, you may wish to alter this program to where entering anything but an integer or float will throw an exception and/or not terminate the program. You may also wish to reduce the runtime complexity by shifting or modifying some things. You could even track the name or id of students using a different structure such as a dictionary. Basically, what I provided is just a working example around what I deemed students may already know at this point to get you started :) If you have any questions, need additional resources, or want some examples of any other techniques, let me know and happy coding!
How do I create unlimited inputs in Python?
I'm supposed to write a program that will determine letter grades (A, B, C, D, F), track how many students are passing and failing, and display the class average. One part that is getting me is that "the program will be able to handle as many students as the user indicates are in this class." How to I get to create unlimited inputs - as much as the user wants. I basically have a framework upon what I should do, but I'm stuck on how I could create as many inputs as the user wants and then use that information on the other functions (how I could get all those info. into another function). If any of you guys can tell me how to create unlimited number of inputs, it would be greatly appreciated!! Have a great day guys! :) My code: studentScore = input("Grade for a student: ") fail = 0 def determineGrade (studentScore): if studentScore <= 40: print 'F' elif studentScore <= 50: print 'D' elif studentScore <= 60: print 'C' elif studentScore <= 70: print 'B' elif studentScore <= 100: print 'A' else: print 'Invalid' def determinePass (studentScore): for i in range(): if studentScore <= 40: fail += 1 else: Pass += 1 def classAverage (studentScore): determineGrade(studentScore) determinePass(studentScore)
[ "The infinite input can be done using the while loop. You can save that input to the other data structure such a list, but you can also put below it the code.\nwhile True:\n x = input('Enter something')\n determineGrade(x)\n determinePass(x)\n\n", "Try this out.\nwhile True:\n try:\n variable = int(input(\"Enter your input\"))\n # your code\n\n except (EOFError,ValueError):\n break\n\nEOFError- You will get this error if you are taking input from a file.\nValueError- In case wrong input is provided.\n", "To ask for data an unlimited number of times, you want a while loop.\nscores=[] \nwhile True:\n score=input(\"Students score >>\")\n #asks for an input\n if score in (\"\",\"q\",\"quit\",\"e\",\"end\",\"exit\"):\n #if the input was any of these strings, stop asking for input.\n break\n elif score.isdigit():\n #if the input was a number, add it to the list.\n scores.append(int(score))\n else:\n #the user typed in nonsense, probably a typo, ask them to try again \n print(\"invalid score, please try again or press enter to end list\")\n#you now have an array scores to process as you see fit.\n\n", "Have a look into and get the idea of https://docs.python.org/3/library/itertools.html, so just for the letters itertools.cycle('ABCDF') might fit. Or for the scores:\nimport random\n\ndef next_input():\n return random.randint(1, 100)\n\nif __name__ == '__main__':\n while True:\n studentScore = next_input()\n print(f\"score: {studentScore:3}\")\n\nFurther read (for probability distributions) could be https://numpy.org/doc/stable/reference/random/index.html.\n", "Sorry if I am a little late, but I will provide another example in case this is an assignment that may have others confused. I tried to approach it differently than using while True: as someone already explained this. I will be using a sentinel value(s) or the value(s) that cause a loop to terminate. See below:\n\"\"\"\nI kept your function the same besides removing the invalid conditional branch and adding a \nprint statement\n\"\"\"\ndef determineGrade (studentScore):\n print(\"Student earned the following grade: \", end = \" \")\n \n if studentScore <= 40:\n print ('F')\n elif studentScore <= 50:\n print ('D')\n elif studentScore <= 60:\n print ('C')\n elif studentScore <= 70:\n print ('B')\n elif studentScore <= 100:\n print ('A')\n\n\"\"\"\nI kept this function very similar as well except I used a list initialized in main so you have a \nhistory of all student scores input and how many passed or failed. I also put the fail and passing\nvariables here for readability and to avoid scope problems.\n\"\"\"\ndef determinePass (score_list):\n fail = 0\n passing = 0\n \n for i in range(len(score_list)):\n if score_list[i] <= 40:\n fail += 1\n else:\n passing += 1\n \n print(\"Students passing: {}\".format(passing))\n print(\"Students failing: {}\".format(fail))\n\n\"\"\"\nI finished this function by using basic list functions that calculate the average. In the future,\nuse the keyword pass or a stub in your definition if not finished so you can still test it :)\n\"\"\"\ndef classAverage (score_list):\n avg = sum(score_list) / len(score_list)\n \n print(\"Class Average: {}\".format(avg))\n\n\"\"\" MAIN \"\"\"\nif __name__ == \"__main__\":\n # Makes sentinel value known to the user (any negative number).\n print(\"Welcome. Input a negative integer at any time to exit.\")\n # Wrapped input in float() so input only accepts whole and decimal point numbers. \n studentScore = float(input(\"Grade for a student: \"))\n # Declares an empty list. 
Remember that they are mutable meaning they can change even in functions.\n score_list = []\n \n # Anything below 0 is considered a sentinel value or what terminates the loop. \n while studentScore >= 0:\n # If input score is between 0-100:\n if studentScore <= 100:\n # Input score is added to our list for use in the functions.\n score_list.append(studentScore)\n \n determineGrade(studentScore)\n determinePass(score_list)\n classAverage(score_list)\n # If a number beyond 100 is input as a score.\n else:\n print(\"Invalid. Score must be between 0-100\")\n \n # Used to avoid infinite loop and allow as many valid inputs as desired.\n print(\"Welcome. Input a negative integer at any time to exit.\")\n studentScore = float(input(\"Grade for a student: \"))\n\nSome important notes I would like to add. One, a reference to what techniques were introduced in this example with a little more detail: (https://redirect.cs.umbc.edu/courses/201/fall16/labs/lab05/lab05_preLab.shtml).\nSecond, I tried to follow my formatting based on the functional requirements provided in your code and explanation, but since I did not have the guidelines, you may need to reformat some things.\nThird, I tried to use techniques that you or someone is likely learning in the near future or already learned up to this assignment. As you gain experience, you may wish to alter this program to where entering anything but an integer or float will throw an exception and/or not terminate the program. You may also wish to reduce the runtime complexity by shifting or modifying some things. You could even track the name or id of students using a different structure such as a dictionary. Basically, what I provided is just a working example around what I deemed students may already know at this point to get you started :)\nIf you have any questions, need additional resources, or want some examples of any other techniques, let me know and happy coding!\n" ]
[ 2, 1, 1, 1, 0 ]
[]
[]
[ "function", "input", "python" ]
stackoverflow_0062978714_function_input_python.txt
Q: Add uuid to a new column in a pandas DataFrame I'm looking to add a uuid for every row in a single new column in a pandas DataFrame. This obviously fills the column with the same uuid: import uuid import pandas as pd import numpy as np df = pd.DataFrame(np.random.randn(4,3), columns=list('abc'), index=['apple', 'banana', 'cherry', 'date']) df['uuid'] = uuid.uuid4() print(df) a b c uuid apple 0.687601 -1.332904 -0.166018 34115445-c4b8-4e64-bc96-e120abda1653 banana -2.252191 -0.844470 0.384140 34115445-c4b8-4e64-bc96-e120abda1653 cherry -0.470388 0.642342 0.692454 34115445-c4b8-4e64-bc96-e120abda1653 date -0.943255 1.450051 -0.296499 34115445-c4b8-4e64-bc96-e120abda1653 What I am looking for is a new uuid in each row of the 'uuid' column. I have also tried using .apply() and .map() without success. A: This is one way: df['uuid'] = [uuid.uuid4() for _ in range(len(df.index))] A: I can't speak to computational efficiency here, but I prefer the syntax here, as it's consistent with the other apply-lambda modifications I usually use to generate new columns: df['uuid'] = df.apply(lambda _: uuid.uuid4(), axis=1) You can also pick a random column to remove the axis requirement (why axis=0 is the default, I'll never understand): df['uuid'] = df['col'].apply(lambda _: uuid.uuid4()) The downside to these is technically you're passing in a variable (_) that you don't actually use. It would be mildly nice to have the capability to do something like lambda: uuid.uuid4(), but apply doesn't support lambas with no args, which is reasonable given its use case would be rather limited. A: from uuid import uuid4 df['uuid'] = df.index.to_series().map(lambda x: uuid4()) A: To create a new column, you must have enough values to fill the column. If we know the number of rows (by calculating the len of the dataframe), we can create a set of values that can then be applied to a column. import uuid import pandas as pd import numpy as np df = pd.DataFrame(np.random.randn(4,3), columns=list('abc'), index=['apple', 'banana', 'cherry', 'date']) # you can create a simple list of values using a list comprehension # based on the len (or number of rows) of the dataframe df['uuid'] = [uuid.uuid4() for x in range(len(df))] print(df) apple -0.775699 -1.104219 1.144653 f98a9c76-99b7-4ba7-9c0a-9121cdf8ad7f banana -1.540495 -0.945760 0.649370 179819a0-3d0f-43f8-8645-da9229ef3fc3 cherry -0.340872 2.445467 -1.071793 b48a9830-3a10-4ce0-bca0-0cc136f09732 date -1.286273 0.244233 0.626831 e7b7c65c-0adc-4ba6-88ab-2160e9858fc4 A: A revised version of S. A. Calder's answer using Pandas v1.5.2: from uuid import uuid4 df['uuid'] = df.index.map(lambda _: uuid4()) There is no need to convert the index to a Series. replacing lambda x: with lambda _: indicates to the programmer that the series elements provided by the map method are unused in calculating the UUIDs.
Add uuid to a new column in a pandas DataFrame
I'm looking to add a uuid for every row in a single new column in a pandas DataFrame. This obviously fills the column with the same uuid: import uuid import pandas as pd import numpy as np df = pd.DataFrame(np.random.randn(4,3), columns=list('abc'), index=['apple', 'banana', 'cherry', 'date']) df['uuid'] = uuid.uuid4() print(df) a b c uuid apple 0.687601 -1.332904 -0.166018 34115445-c4b8-4e64-bc96-e120abda1653 banana -2.252191 -0.844470 0.384140 34115445-c4b8-4e64-bc96-e120abda1653 cherry -0.470388 0.642342 0.692454 34115445-c4b8-4e64-bc96-e120abda1653 date -0.943255 1.450051 -0.296499 34115445-c4b8-4e64-bc96-e120abda1653 What I am looking for is a new uuid in each row of the 'uuid' column. I have also tried using .apply() and .map() without success.
[ "This is one way:\ndf['uuid'] = [uuid.uuid4() for _ in range(len(df.index))]\n\n", "I can't speak to computational efficiency here, but I prefer the syntax here, as it's consistent with the other apply-lambda modifications I usually use to generate new columns:\ndf['uuid'] = df.apply(lambda _: uuid.uuid4(), axis=1)\n\nYou can also pick a random column to remove the axis requirement (why axis=0 is the default, I'll never understand):\ndf['uuid'] = df['col'].apply(lambda _: uuid.uuid4())\n\nThe downside to these is technically you're passing in a variable (_) that you don't actually use. It would be mildly nice to have the capability to do something like lambda: uuid.uuid4(), but apply doesn't support lambas with no args, which is reasonable given its use case would be rather limited.\n", "from uuid import uuid4\ndf['uuid'] = df.index.to_series().map(lambda x: uuid4())\n\n", "To create a new column, you must have enough values to fill the column. If we know the number of rows (by calculating the len of the dataframe), we can create a set of values that can then be applied to a column.\nimport uuid\nimport pandas as pd\nimport numpy as np\n\ndf = pd.DataFrame(np.random.randn(4,3), columns=list('abc'),\n index=['apple', 'banana', 'cherry', 'date'])\n\n\n# you can create a simple list of values using a list comprehension \n# based on the len (or number of rows) of the dataframe\ndf['uuid'] = [uuid.uuid4() for x in range(len(df))]\nprint(df)\n\napple -0.775699 -1.104219 1.144653 f98a9c76-99b7-4ba7-9c0a-9121cdf8ad7f\nbanana -1.540495 -0.945760 0.649370 179819a0-3d0f-43f8-8645-da9229ef3fc3\ncherry -0.340872 2.445467 -1.071793 b48a9830-3a10-4ce0-bca0-0cc136f09732\ndate -1.286273 0.244233 0.626831 e7b7c65c-0adc-4ba6-88ab-2160e9858fc4\n\n", "A revised version of S. A. Calder's answer using Pandas v1.5.2:\nfrom uuid import uuid4\ndf['uuid'] = df.index.map(lambda _: uuid4())\n\nThere is no need to convert the index to a Series. replacing lambda x: with lambda _: indicates to the programmer that the series elements provided by the map method are unused in calculating the UUIDs.\n" ]
[ 38, 20, 4, 2, 0 ]
[]
[]
[ "dataframe", "pandas", "python", "python_3.x", "uuid" ]
stackoverflow_0048837006_dataframe_pandas_python_python_3.x_uuid.txt
Q: Why does Docker on Windows CMD fail for bash script? I've got a Dockerfile containerizing a Python FastAPI application that works on my Mac with this final CMD statement: CMD ["./startup.sh"] However, when I try to run the Dockerfile on a Windows machine that has Docker installed, it can't run/find the startup.sh script. I have to change the CMD to this: CMD ["uvicorn", "app.main:app", "--reload", "--workers", "1", "--host", "0.0.0.0", "--port", "8000"] Any suggestions about why this is happening and how I can fix it? A: Check the startup.sh file permissions on the Windows machine and make sure that it's executable.
Why does Docker on Windows CMD fail for bash script?
I've got a Dockerfile containerizing a Python FastAPI application that works on my Mac with this final CMD statement: CMD ["./startup.sh"] However, when I try to run the Dockerfile on a Windows machine that has Docker installed, it can't run/find the startup.sh script. I have to change the CMD to this: CMD ["uvicorn", "app.main:app", "--reload", "--workers", "1", "--host", "0.0.0.0", "--port", "8000"] Any suggestions about why this is happening and how I can fix it?
[ "Check the startup.sh file permissions on the Windows machine and make sure that it's executable.\n" ]
[ 0 ]
[]
[]
[ "docker", "fastapi", "python", "windows" ]
stackoverflow_0074618824_docker_fastapi_python_windows.txt
Q: Get values of Enum fields from Django Queryset I have a model with an enum column, e.g. # Using Django 2.2 (does not support Enums natively) from django_enum_choices.fields import EnumChoiceField class Service(Enum) MOBILE: "MOBILE" LAPTOP: "LAPTOP" class Device(models.Model): service = EnumChoiceField(Service) ... Is it possible to get get the query results with the enumerated column being the value of the enum? For example: If I do: query = Device.objects.values("service") print(query) I get: <QuerySet [{'service': <Service.MOBILE: 'MOBILE'>}, {'service': <Service.MOBILE: 'MOBILE'>}, {'service': <Service.LAPTOP: 'LAPTOP'>}]> I wish to get: <QuerySet [{'service': 'MOBILE'}, {'service': 'MOBILE'}, {'service': 'LAPTOP'}]> I get errors when I run: query = Device.objects.values("service__value") or query = Device.objects.values("service.value") I want to something like how we can get value of an enum field by saying mobile_service = Service.MOBILE # <Service.MOBILE: "MOBILE"> mobile_service_as_string = mobile_service.value # "MOBILE" The errors: django.core.exceptions.FieldError: Cannot resolve keyword 'value' into field. Join on 'service' not permitted. django.core.exceptions.FieldError: Cannot resolve keyword 'service.value' into field. Choices are: service, .. A: It turns out that you can just use a serializer! from django_enum_choices.serializers import EnumChoiceModelSerializerMixin from rest_framework import serializer class DeviceSerializer(EnumChoiceModelSerializerMixin, serializers.ModelSerializer): class Meta: model = Device fields = ( "service", ) Results: query = Device.objects.values("service").all() results = DeviceSerializer(query, many=True) print(results) # [OrderedDict([('service', 'MOBILE')]), OrderedDict([('service', 'LAPTOP')])]
Get values of Enum fields from Django Queryset
I have a model with an enum column, e.g. # Using Django 2.2 (does not support Enums natively) from django_enum_choices.fields import EnumChoiceField class Service(Enum) MOBILE: "MOBILE" LAPTOP: "LAPTOP" class Device(models.Model): service = EnumChoiceField(Service) ... Is it possible to get get the query results with the enumerated column being the value of the enum? For example: If I do: query = Device.objects.values("service") print(query) I get: <QuerySet [{'service': <Service.MOBILE: 'MOBILE'>}, {'service': <Service.MOBILE: 'MOBILE'>}, {'service': <Service.LAPTOP: 'LAPTOP'>}]> I wish to get: <QuerySet [{'service': 'MOBILE'}, {'service': 'MOBILE'}, {'service': 'LAPTOP'}]> I get errors when I run: query = Device.objects.values("service__value") or query = Device.objects.values("service.value") I want to something like how we can get value of an enum field by saying mobile_service = Service.MOBILE # <Service.MOBILE: "MOBILE"> mobile_service_as_string = mobile_service.value # "MOBILE" The errors: django.core.exceptions.FieldError: Cannot resolve keyword 'value' into field. Join on 'service' not permitted. django.core.exceptions.FieldError: Cannot resolve keyword 'service.value' into field. Choices are: service, ..
[ "It turns out that you can just use a serializer!\nfrom django_enum_choices.serializers import EnumChoiceModelSerializerMixin\nfrom rest_framework import serializer\n\n\nclass DeviceSerializer(EnumChoiceModelSerializerMixin, serializers.ModelSerializer):\n class Meta:\n model = Device\n fields = (\n \"service\",\n )\n\nResults:\nquery = Device.objects.values(\"service\").all()\nresults = DeviceSerializer(query, many=True)\nprint(results)\n\n# [OrderedDict([('service', 'MOBILE')]), OrderedDict([('service', 'LAPTOP')])]\n\n" ]
[ 0 ]
[]
[]
[ "django", "enums", "python" ]
stackoverflow_0074616378_django_enums_python.txt
Q: How to make approximate equal division of task between given number of people? Let say task is to divide 33 tables between 3 people. If equally divided, then output is [11, 11, 11] and if number of tables is 35 tables, then output should be [12, 12, 11]. When I am trying to divide, I get [11, 11, 11, 1, 1]. I need help to solve this in python. This is part of my main problem statement. Here is my code: div2 = count2 // len(ri_ot_curr) # equal division of other tables rem2 = 0 rem2 = count2 % len(ri_ot_curr) # remaining tables tables unallocated for i in range(len(ri_ot_curr)): c = 0 for start in range(len(tft)): if tft.loc[start, 'Release Date'] == 'Release '+str(release_date) a: #some condition tft.loc[start, 'Quant RI - Table'] = ri_ot_curr[i] tft.loc[start, 'Date'] = date_tft() c = c+1 if c == div2: break if rem2 > 0: ri_ot_rem = random.sample(ri_ot_curr, rem2) for i in range(len(ri_ot_rem)): for start in range(len(tft)): if tft.loc[start, 'Release Date'] == 'Release '+str(release_date):#some condition tft.loc[start, 'Quant RI - Table'] = ri_ot_rem[i] tft.loc[start, 'Date'] = date_tft() break A: i hope i understood you correctly, if i did this code will do the trick: number_of_tables = 35 number_of_people = 3 tables_list = [int(number_of_tables / number_of_people) for _ in range(number_of_people)] remainder = number_of_tables % number_of_people for index in range(remainder): tables_list[index] += 1 print(tables_list)
How to make approximate equal division of task between given number of people?
Let say task is to divide 33 tables between 3 people. If equally divided, then output is [11, 11, 11] and if number of tables is 35 tables, then output should be [12, 12, 11]. When I am trying to divide, I get [11, 11, 11, 1, 1]. I need help to solve this in python. This is part of my main problem statement. Here is my code: div2 = count2 // len(ri_ot_curr) # equal division of other tables rem2 = 0 rem2 = count2 % len(ri_ot_curr) # remaining tables tables unallocated for i in range(len(ri_ot_curr)): c = 0 for start in range(len(tft)): if tft.loc[start, 'Release Date'] == 'Release '+str(release_date) a: #some condition tft.loc[start, 'Quant RI - Table'] = ri_ot_curr[i] tft.loc[start, 'Date'] = date_tft() c = c+1 if c == div2: break if rem2 > 0: ri_ot_rem = random.sample(ri_ot_curr, rem2) for i in range(len(ri_ot_rem)): for start in range(len(tft)): if tft.loc[start, 'Release Date'] == 'Release '+str(release_date):#some condition tft.loc[start, 'Quant RI - Table'] = ri_ot_rem[i] tft.loc[start, 'Date'] = date_tft() break
[ "i hope i understood you correctly, if i did this code will do the trick:\nnumber_of_tables = 35\nnumber_of_people = 3\n\ntables_list = [int(number_of_tables / number_of_people) for _ in range(number_of_people)]\n\nremainder = number_of_tables % number_of_people\n\nfor index in range(remainder):\n tables_list[index] += 1\n\nprint(tables_list)\n\n" ]
[ 2 ]
[]
[]
[ "list", "logic", "pandas", "python", "python_3.x" ]
stackoverflow_0074616430_list_logic_pandas_python_python_3.x.txt
Q: Why these two WAV-creating functions are not producing identical output? I am using these functions (that receive a pyaudio input) to produce an audio object usable on torchaudio. However, only "write2" produces a result that works, but not "write1". def write2(recording): n_files = len(os.listdir(f_name_directory)) filename = os.path.join(f_name_directory, 'file.wav') wf = wave.open(filename, 'wb') wf.setnchannels(CHANNELS) wf.setsampwidth(p.get_sample_size(FORMAT)) wf.setframerate(RATE) wf.writeframes(recording) wf.close() with open('file.wav', 'rb') as f: buffer = io.BytesIO(f.read()) return buffer def write1(recording): buffer = io.BytesIO() wave_write = wave.open(buffer, 'wb') wave_write.setnchannels(CHANNELS) wave_write.setsampwidth(p.get_sample_size(FORMAT)) wave_write.setframerate(RATE) wave_write.writeframes(recording) wave_write.close() return buffer What do I need to do for write1 become equivalent to write2 without the i/o operations? A: As said by @jasonharper on the opening post comment, the solution was to insert buffer.seek(0) at the end of the function before returning it. (...) wave_write.close() buffer.seek(0) return buffer
Why these two WAV-creating functions are not producing identical output?
I am using these functions (that receive a pyaudio input) to produce an audio object usable on torchaudio. However, only "write2" produces a result that works, but not "write1". def write2(recording): n_files = len(os.listdir(f_name_directory)) filename = os.path.join(f_name_directory, 'file.wav') wf = wave.open(filename, 'wb') wf.setnchannels(CHANNELS) wf.setsampwidth(p.get_sample_size(FORMAT)) wf.setframerate(RATE) wf.writeframes(recording) wf.close() with open('file.wav', 'rb') as f: buffer = io.BytesIO(f.read()) return buffer def write1(recording): buffer = io.BytesIO() wave_write = wave.open(buffer, 'wb') wave_write.setnchannels(CHANNELS) wave_write.setsampwidth(p.get_sample_size(FORMAT)) wave_write.setframerate(RATE) wave_write.writeframes(recording) wave_write.close() return buffer What do I need to do for write1 become equivalent to write2 without the i/o operations?
[ "As said by @jasonharper on the opening post comment, the solution was to insert buffer.seek(0) at the end of the function before returning it.\n(...)\n wave_write.close()\n buffer.seek(0)\n return buffer\n\n" ]
[ 0 ]
[]
[]
[ "pyaudio", "python", "torchaudio", "wave" ]
stackoverflow_0074618692_pyaudio_python_torchaudio_wave.txt
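The fix above works because wave.Wave_write leaves the BytesIO position at the end of the stream; any consumer that then read()s from the buffer sees nothing until it is rewound. A minimal sketch of that behaviour, independent of pyaudio/torchaudio (the placeholder bytes are purely illustrative):

import io

buffer = io.BytesIO()
buffer.write(b"RIFF....WAVEdata")  # stand-in for the bytes the wave module writes

print(buffer.read())   # b'' -- the stream position sits at the end after writing
buffer.seek(0)         # rewind to the beginning, as in the accepted answer
print(buffer.read())   # b'RIFF....WAVEdata' -- now a reader gets the full payload

write2 worked by accident of construction: io.BytesIO(initial_bytes) starts with its position already at 0, so no explicit seek was needed there.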
Q: I am using Python on Visual Studio Code to import an excel file but I am getting an import error I want to read the data on an excel file within a F drive. I am using python on Visual Studio Code to try achieve this however I am getting an error as seen in the pictures below. I installed pandas but I still get an error. How can I fix this issue? Coding Error Installed Pandas Library I tried closing and opening visual studio. I tried uninstalling and reinstalling pandas. Python on Computer A: You should try to open terminal in VS Code, and run pip freeze (and pip3 freeze). Check if you find pandas in the results, it won't. That must be because you'd have multiple installations of Python on your system. You may do any one of the below - Get rid of all but one Python installation. Install pandas on the VS Code installed python instance. Configure the same installation of python that is referenced by your command prompt. (choose the correct python interpreter from VS Code Command Palette)
I am using Python on Visual Studio Code to import an excel file but I am getting an import error
I want to read the data on an excel file within a F drive. I am using python on Visual Studio Code to try achieve this however I am getting an error as seen in the pictures below. I installed pandas but I still get an error. How can I fix this issue? Coding Error Installed Pandas Library I tried closing and opening visual studio. I tried uninstalling and reinstalling pandas. Python on Computer
[ "You should try to open terminal in VS Code, and run pip freeze (and pip3 freeze). Check if you find pandas in the results, it won't. That must be because you'd have multiple installations of Python on your system. You may do any one of the below -\n\nGet rid of all but one Python installation.\nInstall pandas on the VS Code installed python instance.\nConfigure the same installation of python that is referenced by your command prompt. (choose the correct python interpreter from VS Code Command Palette)\n\n" ]
[ 2 ]
[ "To read an Excel file with Python, you need to install the pandas library. To install pandas, open the command line or terminal and type:\npip install pandas\nOnce pandas is installed, you can read an Excel file like this:\nimport pandas as pd\ndf = pd.read_excel('file_name.xlsx')\nprint(df)\nYou should also make sure that the file path is correct and that you have the correct permissions to access the file.\nIf you are still having issues, try using the absolute file path instead of a relative file path.\n" ]
[ -1 ]
[ "pip", "python", "visual_studio_code" ]
stackoverflow_0074618712_pip_python_visual_studio_code.txt
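One way to see the interpreter mismatch described in the answer above is to print it from the failing script itself; a small diagnostic sketch (no project-specific names assumed):

import sys

print(sys.executable)   # the interpreter VS Code actually ran

try:
    import pandas as pd
    print(pd.__version__, pd.__file__)   # which pandas installation was found
except ImportError:
    # install into exactly this interpreter rather than whichever `pip` is on PATH
    print(f"pandas missing; run: {sys.executable} -m pip install pandas")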
Q: constrained linear regression / quadratic programming python I have a dataset like this: import numpy as np a = np.array([1.2, 2.3, 4.2]) b = np.array([1, 5, 6]) c = np.array([5.4, 6.2, 1.9]) m = np.vstack([a,b,c]) y = np.array([5.3, 0.9, 5.6]) and want to fit a constrained linear regression y = b1*a + b2*b + b3*c where all b's sum to one and are positive: b1+b2+b3=1 A similar problem in R is specified here: https://stats.stackexchange.com/questions/21565/how-do-i-fit-a-constrained-regression-in-r-so-that-coefficients-total-1 How can I do this in python? A: EDIT: These two approaches are very general and can work for small-medium scale instances. For a more efficient approach, check the answer of chthonicdaemon (using customized preprocessing and scipy's optimize.nnls). Using scipy Code import numpy as np from scipy.optimize import minimize a = np.array([1.2, 2.3, 4.2]) b = np.array([1, 5, 6]) c = np.array([5.4, 6.2, 1.9]) m = np.vstack([a,b,c]) y = np.array([5.3, 0.9, 5.6]) def loss(x): return np.sum(np.square((np.dot(x, m) - y))) cons = ({'type': 'eq', 'fun' : lambda x: np.sum(x) - 1.0}) x0 = np.zeros(m.shape[0]) res = minimize(loss, x0, method='SLSQP', constraints=cons, bounds=[(0, np.inf) for i in range(m.shape[0])], options={'disp': True}) print(res.x) print(np.dot(res.x, m.T)) print(np.sum(np.square(np.dot(res.x, m) - y))) Output Optimization terminated successfully. (Exit mode 0) Current function value: 18.817792344 Iterations: 5 Function evaluations: 26 Gradient evaluations: 5 [ 0.7760881 0. 0.2239119] [ 1.87173571 2.11955951 4.61630834] 18.817792344 Evaluation It's obvious that the model-capabilitites / model-complexity is not enough to obtain a good performance (high loss!) Using general-purpose QP/SOCP-optimization modelled by cvxpy Advantages: cvxpy prooves, that the problem is convex convergence for convex optimization problems is guaranteed (might be also true for the above) in general: better accuracy in general: more robust in regards to numerical instabilities (solver is only able to solve SOCPs; not non-convex models like the SLSQP-approach above!) Code import numpy as np from cvxpy import * a = np.array([1.2, 2.3, 4.2]) b = np.array([1, 5, 6]) c = np.array([5.4, 6.2, 1.9]) m = np.vstack([a,b,c]) y = np.array([5.3, 0.9, 5.6]) X = Variable(m.shape[0]) constraints = [X >= 0, sum_entries(X) == 1.0] product = m.T * diag(X) diff = sum_entries(product, axis=1) - y problem = Problem(Minimize(norm(diff)), constraints) problem.solve(verbose=True) print(problem.value) print(X.value) Output ECOS 2.0.4 - (C) embotech GmbH, Zurich Switzerland, 2012-15. Web: www.embotech.com/ECOS It pcost dcost gap pres dres k/t mu step sigma IR | BT 0 +0.000e+00 -0.000e+00 +2e+01 5e-01 1e-01 1e+00 4e+00 --- --- 1 1 - | - - 1 +2.451e+00 +2.539e+00 +4e+00 1e-01 2e-02 2e-01 8e-01 0.8419 4e-02 2 2 2 | 0 0 2 +4.301e+00 +4.306e+00 +2e-01 5e-03 7e-04 1e-02 4e-02 0.9619 1e-02 2 2 2 | 0 0 3 +4.333e+00 +4.334e+00 +2e-02 4e-04 6e-05 1e-03 4e-03 0.9326 2e-02 2 1 2 | 0 0 4 +4.338e+00 +4.338e+00 +5e-04 1e-05 2e-06 4e-05 1e-04 0.9698 1e-04 2 1 1 | 0 0 5 +4.338e+00 +4.338e+00 +3e-05 8e-07 1e-07 3e-06 7e-06 0.9402 7e-03 2 1 1 | 0 0 6 +4.338e+00 +4.338e+00 +7e-07 2e-08 2e-09 6e-08 2e-07 0.9796 1e-03 2 1 1 | 0 0 7 +4.338e+00 +4.338e+00 +1e-07 3e-09 4e-10 1e-08 3e-08 0.8458 2e-02 2 1 1 | 0 0 8 +4.338e+00 +4.338e+00 +7e-09 2e-10 2e-11 9e-10 2e-09 0.9839 5e-02 1 1 1 | 0 0 OPTIMAL (within feastol=1.7e-10, reltol=1.5e-09, abstol=6.5e-09). Runtime: 0.000555 seconds. 
4.337947939 # needs to be squared to be compared to scipy's output! # as we are using l2-norm (outer sqrt) instead of sum-of-squares # which is nicely converted to SOCP-form and easier to # tackle by SOCP-based solvers like ECOS # -> does not change the solution-vector x, only the obj-value [[ 7.76094262e-01] [ 7.39698388e-10] [ 2.23905737e-01]] A: You can get a good solution to this with a little bit of math and scipy.optimize.nnls: First we do the math: If y = b1*a + b2*b + b3*c and b1 + b2 + b3 = 1, then b3 = 1 - b1 - b2. If we substitute and simplify we end up with y - c = b1(a - c) + b2(b - c) Now, we don't have any equality constraints and nnls can solve directly: import scipy.optimize A = np.vstack([a - c, b - c]).T (b1, b2), norm = scipy.optimize.nnls(A, y - c) b3 = 1 - b1 - b2 This recovers the same solution as obtained in the other answer using cvxpy. b1 = 0.77608809648662802 b2 = 0.0 b3 = 0.22391190351337198 norm = 4.337947941595865 This approach can be generalised to an arbitrary number of dimensions as follows. Assume that we have a matrix B constructed with a, b, c from the original question arranged in the columns. Any additional dimensions will get added to this. Now, we can do A = B[:, :-1] - B[:, -1:] bb, norm = scipy.optimize.nnls(A, y - B[:, -1]) bi = np.append(bb, 1 - sum(bb)) A: One comment regarding sascha's scipy implementation: be aware that with scipy minimize, the trial-and-error nature of SLSQP may get you a solution that is slightly off unless you make some other specifications, namely the maximum iterations (maxiter) and maximum tolerance (ftol), as detailed in the scipy docs here. The default values are: maxiter=100 and ftol=1e-06. Here is an example to illustrate using matrix notation: first get rid of the constraints and bounds. Also assume for simplicity that the intercept=0. In that case, the coefficients for any multiple regression, as covered here on page 4, will be (precisely): def betas(y, x): # y and x are ndarrays--response & design matrixes return np.dot(np.linalg.inv(np.dot(x.T, x)), np.dot(x.T, y)) Now, given that the goal of least squares regression is to minimize the sum of squared residuals, take sascha's loss function (re-written slightly): def resids(b, y, x): resid = y - np.dot(x, b) return np.dot(resid.T, resid) Given your actual Y and X vectors, you can plug in the resulting "true" betas from the first equation above into the second to get a much better "benchmark." Compare this benchmark to the .fun attribute of res (what scipy minimize spits out). Even tiny changes can cause meaningful changes to the resulting coefficients. So to make a long story short, it will sacrifice speed but improve accuracy to use something like options={'maxiter' : 1000, 'ftol' : 1e-07} within sascha's code. A: Your problem is a linear least squares, you could solve it directly with a quadratic programming solver using the solve_ls function in qpsolvers. Here is a snippet adapted from this post on linear regression in Python: from qpsolvers import solve_ls # Objective (|| R x - s ||^2): || [a b c] x - y ||^2 R = m.T s = y # Linear constraint (A * x == b): sum(x) = 1 A = np.ones((1, 3)) b = np.array([1.0]) # Box constraint (lb <= x): x >= 0 lb = np.zeros(3) x = solve_ls(R, s, A=A, b=b, lb=lb, solver="quadprog") On my machine this code finds the solution x = array([0.7760881, 0.0, 0.2239119]). I've uploaded the full code to constrained_linear_regression.py, feel free to try it out. A: thanks @sascha for the great answer! 
the cvxpy example there is pretty out of date, so thought I would provide a slightly different version based on their current API and with some light editing, for clarity: import numpy as np import cvxpy as cp x1 = np.arange(1000) x2 = np.random.normal(size=(1000,)) x = np.vstack([x1, x2]).T y = np.random.random_sample(size=(1000,)) print("x shape", x.shape) print("y shape", y.shape) weights = cp.Variable(x.shape[1]) objective = cp.sum_squares(x @ weights - y) minimize = cp.Minimize(objective) constraints = [weights >= 0, cp.sum(weights) == 1.0] problem = cp.Problem(minimize, constraints) problem.solve(verbose=True) print(problem.value) print(weights.value) I'll add that cvxpy is actively managed and the team there seems very responsive.
constrained linear regression / quadratic programming python
I have a dataset like this: import numpy as np a = np.array([1.2, 2.3, 4.2]) b = np.array([1, 5, 6]) c = np.array([5.4, 6.2, 1.9]) m = np.vstack([a,b,c]) y = np.array([5.3, 0.9, 5.6]) and want to fit a constrained linear regression y = b1*a + b2*b + b3*c where all b's sum to one and are positive: b1+b2+b3=1 A similar problem in R is specified here: https://stats.stackexchange.com/questions/21565/how-do-i-fit-a-constrained-regression-in-r-so-that-coefficients-total-1 How can I do this in python?
[ "EDIT:\nThese two approaches are very general and can work for small-medium scale instances. For a more efficient approach, check the answer of chthonicdaemon (using customized preprocessing and scipy's optimize.nnls).\nUsing scipy\nCode\nimport numpy as np\nfrom scipy.optimize import minimize\n\na = np.array([1.2, 2.3, 4.2])\nb = np.array([1, 5, 6])\nc = np.array([5.4, 6.2, 1.9])\n\nm = np.vstack([a,b,c])\ny = np.array([5.3, 0.9, 5.6])\n\ndef loss(x):\n return np.sum(np.square((np.dot(x, m) - y)))\n\ncons = ({'type': 'eq',\n 'fun' : lambda x: np.sum(x) - 1.0})\n\nx0 = np.zeros(m.shape[0])\nres = minimize(loss, x0, method='SLSQP', constraints=cons,\n bounds=[(0, np.inf) for i in range(m.shape[0])], options={'disp': True})\n\nprint(res.x)\nprint(np.dot(res.x, m.T))\nprint(np.sum(np.square(np.dot(res.x, m) - y)))\n\nOutput\nOptimization terminated successfully. (Exit mode 0)\n Current function value: 18.817792344\n Iterations: 5\n Function evaluations: 26\n Gradient evaluations: 5\n[ 0.7760881 0. 0.2239119]\n[ 1.87173571 2.11955951 4.61630834]\n18.817792344\n\nEvaluation\n\nIt's obvious that the model-capabilitites / model-complexity is not enough to obtain a good performance (high loss!)\n\nUsing general-purpose QP/SOCP-optimization modelled by cvxpy\nAdvantages:\n\ncvxpy prooves, that the problem is convex\nconvergence for convex optimization problems is guaranteed (might be also true for the above)\nin general: better accuracy\nin general: more robust in regards to numerical instabilities (solver is only able to solve SOCPs; not non-convex models like the SLSQP-approach above!)\n\nCode\nimport numpy as np\nfrom cvxpy import *\n\na = np.array([1.2, 2.3, 4.2])\nb = np.array([1, 5, 6])\nc = np.array([5.4, 6.2, 1.9])\n\nm = np.vstack([a,b,c])\ny = np.array([5.3, 0.9, 5.6])\n\nX = Variable(m.shape[0])\nconstraints = [X >= 0, sum_entries(X) == 1.0]\n\nproduct = m.T * diag(X)\ndiff = sum_entries(product, axis=1) - y\nproblem = Problem(Minimize(norm(diff)), constraints)\nproblem.solve(verbose=True)\n\nprint(problem.value)\nprint(X.value)\n\nOutput\nECOS 2.0.4 - (C) embotech GmbH, Zurich Switzerland, 2012-15. 
Web: www.embotech.com/ECOS\n\nIt pcost dcost gap pres dres k/t mu step sigma IR | BT\n 0 +0.000e+00 -0.000e+00 +2e+01 5e-01 1e-01 1e+00 4e+00 --- --- 1 1 - | - - \n 1 +2.451e+00 +2.539e+00 +4e+00 1e-01 2e-02 2e-01 8e-01 0.8419 4e-02 2 2 2 | 0 0\n 2 +4.301e+00 +4.306e+00 +2e-01 5e-03 7e-04 1e-02 4e-02 0.9619 1e-02 2 2 2 | 0 0\n 3 +4.333e+00 +4.334e+00 +2e-02 4e-04 6e-05 1e-03 4e-03 0.9326 2e-02 2 1 2 | 0 0\n 4 +4.338e+00 +4.338e+00 +5e-04 1e-05 2e-06 4e-05 1e-04 0.9698 1e-04 2 1 1 | 0 0\n 5 +4.338e+00 +4.338e+00 +3e-05 8e-07 1e-07 3e-06 7e-06 0.9402 7e-03 2 1 1 | 0 0\n 6 +4.338e+00 +4.338e+00 +7e-07 2e-08 2e-09 6e-08 2e-07 0.9796 1e-03 2 1 1 | 0 0\n 7 +4.338e+00 +4.338e+00 +1e-07 3e-09 4e-10 1e-08 3e-08 0.8458 2e-02 2 1 1 | 0 0\n 8 +4.338e+00 +4.338e+00 +7e-09 2e-10 2e-11 9e-10 2e-09 0.9839 5e-02 1 1 1 | 0 0\n\nOPTIMAL (within feastol=1.7e-10, reltol=1.5e-09, abstol=6.5e-09).\nRuntime: 0.000555 seconds.\n\n4.337947939 # needs to be squared to be compared to scipy's output!\n # as we are using l2-norm (outer sqrt) instead of sum-of-squares\n # which is nicely converted to SOCP-form and easier to\n # tackle by SOCP-based solvers like ECOS\n # -> does not change the solution-vector x, only the obj-value\n[[ 7.76094262e-01]\n [ 7.39698388e-10]\n [ 2.23905737e-01]]\n\n", "You can get a good solution to this with a little bit of math and scipy.optimize.nnls:\nFirst we do the math:\nIf \ny = b1*a + b2*b + b3*c and b1 + b2 + b3 = 1, then b3 = 1 - b1 - b2. \nIf we substitute and simplify we end up with \ny - c = b1(a - c) + b2(b - c)\nNow, we don't have any equality constraints and nnls can solve directly:\nimport scipy.optimize\nA = np.vstack([a - c, b - c]).T\n(b1, b2), norm = scipy.optimize.nnls(A, y - c)\nb3 = 1 - b1 - b2\n\nThis recovers the same solution as obtained in the other answer using cvxpy.\nb1 = 0.77608809648662802\nb2 = 0.0\nb3 = 0.22391190351337198\nnorm = 4.337947941595865\n\nThis approach can be generalised to an arbitrary number of dimensions as follows. Assume that we have a matrix B constructed with a, b, c from the original question arranged in the columns. Any additional dimensions will get added to this.\nNow, we can do\nA = B[:, :-1] - B[:, -1:]\nbb, norm = scipy.optimize.nnls(A, y - B[:, -1])\nbi = np.append(bb, 1 - sum(bb))\n\n", "One comment regarding sascha's scipy implementation: be aware that with scipy minimize, the trial-and-error nature of SLSQP may get you a solution that is slightly off unless you make some other specifications, namely the maximum iterations (maxiter) and maximum tolerance (ftol), as detailed in the scipy docs here.\nThe default values are: maxiter=100 and ftol=1e-06.\nHere is an example to illustrate using matrix notation: first get rid of the constraints and bounds. Also assume for simplicity that the intercept=0. In that case, the coefficients for any multiple regression, as covered here on page 4, will be (precisely):\ndef betas(y, x):\n # y and x are ndarrays--response & design matrixes\n return np.dot(np.linalg.inv(np.dot(x.T, x)), np.dot(x.T, y))\n\nNow, given that the goal of least squares regression is to minimize the sum of squared residuals, take sascha's loss function (re-written slightly):\ndef resids(b, y, x):\n resid = y - np.dot(x, b)\n return np.dot(resid.T, resid)\n\nGiven your actual Y and X vectors, you can plug in the resulting \"true\" betas from the first equation above into the second to get a much better \"benchmark.\" Compare this benchmark to the .fun attribute of res (what scipy minimize spits out). 
Even tiny changes can cause meaningful changes to the resulting coefficients.\nSo to make a long story short, it will sacrifice speed but improve accuracy to use something like\noptions={'maxiter' : 1000, 'ftol' : 1e-07}\n\nwithin sascha's code.\n", "Your problem is a linear least squares, you could solve it directly with a quadratic programming solver using the solve_ls function in qpsolvers. Here is a snippet adapted from this post on linear regression in Python:\nfrom qpsolvers import solve_ls\n\n# Objective (|| R x - s ||^2): || [a b c] x - y ||^2\nR = m.T\ns = y\n\n# Linear constraint (A * x == b): sum(x) = 1\nA = np.ones((1, 3))\nb = np.array([1.0])\n\n# Box constraint (lb <= x): x >= 0\nlb = np.zeros(3)\n\nx = solve_ls(R, s, A=A, b=b, lb=lb, solver=\"quadprog\")\n\nOn my machine this code finds the solution x = array([0.7760881, 0.0, 0.2239119]). I've uploaded the full code to constrained_linear_regression.py, feel free to try it out.\n", "thanks @sascha for the great answer! the cvxpy example there is pretty out of date, so thought I would provide a slightly different version based on their current API and with some light editing, for clarity:\nimport numpy as np\nimport cvxpy as cp\n\nx1 = np.arange(1000)\nx2 = np.random.normal(size=(1000,))\nx = np.vstack([x1, x2]).T\n\ny = np.random.random_sample(size=(1000,))\n\nprint(\"x shape\", x.shape)\nprint(\"y shape\", y.shape)\n\nweights = cp.Variable(x.shape[1])\n\nobjective = cp.sum_squares(x @ weights - y)\nminimize = cp.Minimize(objective)\n\nconstraints = [weights >= 0, cp.sum(weights) == 1.0]\nproblem = cp.Problem(minimize, constraints)\n\nproblem.solve(verbose=True)\n\nprint(problem.value)\nprint(weights.value)\n\nI'll add that cvxpy is actively managed and the team there seems very responsive.\n" ]
[ 9, 6, 1, 0, 0 ]
[]
[]
[ "linear_regression", "python", "quadratic_programming", "scipy" ]
stackoverflow_0039852921_linear_regression_python_quadratic_programming_scipy.txt
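As a quick consistency check on the answers above, the reported solution can be verified against both constraints and both reported objective values (sum of squares from SLSQP, l2 norm from ECOS). A small sketch using only the numbers quoted in the answers:

import numpy as np

a = np.array([1.2, 2.3, 4.2])
b = np.array([1, 5, 6])
c = np.array([5.4, 6.2, 1.9])
m = np.vstack([a, b, c])
y = np.array([5.3, 0.9, 5.6])

x = np.array([0.7760881, 0.0, 0.2239119])   # solution reported by the answers

print(np.isclose(x.sum(), 1.0), bool((x >= 0).all()))  # True True -> constraints hold
residual = np.dot(x, m) - y                 # model is b1*a + b2*b + b3*c
print(np.sum(residual ** 2))                # ~18.818, the SLSQP objective
print(np.linalg.norm(residual))             # ~4.338, the norm reported by ECOS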
Q: Replace rows in an MxN matrix with numbers from 1 to N Im interested in replacing all of my rows in an MxN matrix with values from 1 to N. For example: [[4,6,8,9,3],[5,1,2,5,6],[1,9,4,5,7],[3,8,8,2,5],[1,4,2,2,7]] To: [[1,2,3,4,5],[1,2,3,4,5],[1,2,3,4,5],[1,2,3,4,5],[1,2,3,4,5]] I've tried using loops going through each row individually but struggle to replace elements. A: Try: lst = [ [4, 6, 8, 9, 3], [5, 1, 2, 5, 6], [1, 9, 4, 5, 7], [3, 8, 8, 2, 5], [1, 4, 2, 2, 7], ] for row in lst: row[:] = range(1, len(row) + 1) print(lst) Prints: [ [1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5], ] A: As you tagged your question with numpy, I assume that your source array is a Numpy array. So, for the test purpose, you can create it as: arr = np.array([[4,6,8,9,3],[5,1,2,5,6],[1,9,4,5,7],[3,8,8,2,5],[1,4,2,2,7]]) Then, to fill each row with consecutive numbers, you can run: arr[:] = np.arange(1, arr.shape[1] + 1) Due to Numpy broadcasting there is no need to use any loop to operate on consecutive rows. The result is: array([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5]])
Replace rows in an MxN matrix with numbers from 1 to N
I'm interested in replacing all of my rows in an MxN matrix with values from 1 to N. For example: [[4,6,8,9,3],[5,1,2,5,6],[1,9,4,5,7],[3,8,8,2,5],[1,4,2,2,7]] To: [[1,2,3,4,5],[1,2,3,4,5],[1,2,3,4,5],[1,2,3,4,5],[1,2,3,4,5]] I've tried using loops going through each row individually but struggle to replace elements.
[ "Try:\nlst = [\n [4, 6, 8, 9, 3],\n [5, 1, 2, 5, 6],\n [1, 9, 4, 5, 7],\n [3, 8, 8, 2, 5],\n [1, 4, 2, 2, 7],\n]\n\nfor row in lst:\n row[:] = range(1, len(row) + 1)\n\nprint(lst)\n\nPrints:\n[\n [1, 2, 3, 4, 5],\n [1, 2, 3, 4, 5],\n [1, 2, 3, 4, 5],\n [1, 2, 3, 4, 5],\n [1, 2, 3, 4, 5],\n]\n\n", "As you tagged your question with numpy, I assume that your source array\nis a Numpy array.\nSo, for the test purpose, you can create it as:\narr = np.array([[4,6,8,9,3],[5,1,2,5,6],[1,9,4,5,7],[3,8,8,2,5],[1,4,2,2,7]])\n\nThen, to fill each row with consecutive numbers, you can run:\narr[:] = np.arange(1, arr.shape[1] + 1)\n\nDue to Numpy broadcasting there is no need to use any loop\nto operate on consecutive rows.\nThe result is:\narray([[1, 2, 3, 4, 5],\n [1, 2, 3, 4, 5],\n [1, 2, 3, 4, 5],\n [1, 2, 3, 4, 5],\n [1, 2, 3, 4, 5]])\n\n" ]
[ 0, 0 ]
[]
[]
[ "arrays", "matrix", "numpy", "python", "row" ]
stackoverflow_0074618937_arrays_matrix_numpy_python_row.txt
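If building a fresh array is acceptable rather than overwriting one in place, the same result can be constructed directly; a short sketch assuming the example data from the question:

import numpy as np

arr = np.array([[4, 6, 8, 9, 3],
                [5, 1, 2, 5, 6],
                [1, 9, 4, 5, 7],
                [3, 8, 8, 2, 5],
                [1, 4, 2, 2, 7]])

rows, cols = arr.shape
result = np.tile(np.arange(1, cols + 1), (rows, 1))  # repeat 1..N for every row
print(result)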
Q: Removing € from a string and converting the string into an int using python to calculate the average price per year So I am scraping a website and the code gives me all the information I want however when scraping it also gives me the "€" symbol with the price. So I want to be able to have the price as a int and remove the "€" symbol so I can Calculate the average car price per year. It does give me the ValueError: invalid literal for int() with base 10: 'price' but when I try look at other questions on this website with the answer the solutions don't work for me. Year is also a string so would it make sense to convert the year to an int as well so I can do equations? import requests import pandas as pd from bs4 import BeautifulSoup url = "https://jammer.ie/used-cars?page={}&per-page=12" all_data = [] for page in range(1, 4): # <-- increase number of pages here soup = BeautifulSoup(requests.get(url.format(page)).text, "html.parser") for car in soup.select(".car"): info = car.select_one(".top-info").get_text(strip=True, separator="|") info = info.split("|") if len(info) == 4: make, model, year, price = info else: make, year, price = info model = "N/A" dealer_name = car.select_one(".dealer-name h6").get_text( strip=True, separator=" " ) address = car.select_one(".address").get_text(strip=True) features = {} for feature in car.select(".car--features li"): k = feature.img["src"].split("/")[-1].split(".")[0] v = feature.span.text features[f"feature_{k}"] = v all_data.append( { "make": make, "model": model, "year": year, "price": price, "dealer_name": dealer_name, "address": address, "url": "https://jammer.ie" + car.select_one("a[href*=vehicle]")["href"], **features, } ) df = pd.DataFrame(all_data) # prints sample data to screen: print(df.tail().to_markdown(index=False)) # saves all data to CSV df.to_csv("data.csv", index=False) I tired converting using df = pd.read_csv('data.csv', usecols= ['price','year']) print(type("price")) print(int("price")) But this did not work out for me. I also tired converting it to a float as well which did not work too. A: You can define a custom function for that and apply it on new/existing column, like so: pd = pd.DataFrame( {"col1": [1,2,2,3,4], "prices": ["1€", "2.2€", "5€","66€", "999€"] } ) # Use own function to create custom column def remove_currency_sign(price: str, sign:str = "€")->int: return int(price.replace(sign,"")) pd["new_col"] = pd["prices"].apply(remove_currency_sign) print(pd) A: When you have data in pandas DataFrame you can do: #... your code df = pd.DataFrame(all_data) df["price"] = df["price"].str.replace(r"[€,]", "", regex=True) df["price"] = pd.to_numeric(df["price"], errors="coerce") df["year"] = pd.to_numeric(df["year"], errors="coerce") print(df) Prints: make model year price dealer_name address url feature_speed feature_engine feature_transmission feature_door-icon1 feature_petrol5 feature_hatchback feature_owner feature_paint 0 BMW 5 Series 2012 14250.0 JOS Jack O Sullivan Cars Co. Wexford https://jammer.ie/vehicle/150168-bmw-5-series-2012 103000 miles 2.0 litres Automatic 0 doors Diesel Saloon NaN NaN 1 Citroen C3 2003 1999.0 Somerville Motors Co. Meath https://jammer.ie/vehicle/167272-citroen-c3-2003 74000 miles 1.4 litres Automatic 5 doors Petrol Hatchback 8 previous owners Grey 2 Volkswagen Touareg 2017 29958.0 Holden Motor Company Co. Dublin https://jammer.ie/vehicle/167271-volkswagen-touareg-2017 145000 miles 3.0 litres Automatic 5 doors Diesel SUV 3 previous owners Black 3 Audi A6 2010 7950.0 Roskeen Cars & Commercials Co. 
Cork https://jammer.ie/vehicle/167270-audi-a6-2010 174921 miles 2.0 litres Manual 5 doors Diesel Estate 1 previous owners Grey 4 Volkswagen Passat 2012 6999.0 Somerville Motors Co. Meath https://jammer.ie/vehicle/167269-volkswagen-passat-2012 150000 miles 1.6 litres Manual 4 doors Diesel Saloon 7 previous owners Silver 5 Vauxhall Insignia 2016 12499.0 ARTHUR AUTO SALES Co. Limerick https://jammer.ie/vehicle/167268-vauxhall-insignia-2016 127000 miles 2.0 litres Manual 5 doors Diesel Hatchback 1 previous owners Black 6 Audi A6 2022 NaN Audi Approved Plus Kilkenny Co. Kilkenny https://jammer.ie/vehicle/167267-audi-a6-2022 932 miles 2.0 litres Automatic NaN Diesel Saloon NaN Grey 7 Volkswagen Polo 2018 14999.0 O Neills Car Sales Co. Meath https://jammer.ie/vehicle/167266-volkswagen-polo-2018 73868 miles 1.0 litres Manual 5 doors Petrol Hatchback 1 previous owners White 8 Audi A1 2022 NaN Audi Approved Plus Kilkenny Co. Kilkenny https://jammer.ie/vehicle/167265-audi-a1-2022 932 miles 1.0 litres Manual NaN Petrol Hatchback NaN Grey 9 Volkswagen Golf 2014 24999.0 O Neills Car Sales Co. Meath https://jammer.ie/vehicle/167263-volkswagen-golf-2014 82523 miles 2.0 litres Manual 3 doors Petrol Hatchback 1 previous owners Black 10 Peugeot 5008 2014 13950.0 PRESTIGE AUTOS Co. Dublin https://jammer.ie/vehicle/167260-peugeot-5008-2014 47000 miles 1.6 litres Automatic 5 doors Petrol MPV 1 previous owners White 11 Audi A4 2014 14950.0 PRESTIGE AUTOS Co. Dublin https://jammer.ie/vehicle/167259-audi-a4-2014 120000 miles 2.0 litres Manual 4 doors Diesel Saloon 2 previous owners White 12 Volkswagen up! 2013 NaN PRESTIGE AUTOS Co. Dublin https://jammer.ie/vehicle/167257-volkswagen-up-2013 90101 miles 1.0 litres Manual 4 doors Petrol Hatchback 3 previous owners White 13 Ford Fiesta 2010 5950.0 Car Options Co. Dublin https://jammer.ie/vehicle/111682-ford-fiesta-2010 113144 miles 1.4 litres Manual 5 doors Diesel Hatchback 4 previous owners Silver 14 Vauxhall Insignia 2015 8950.0 PRESTIGE AUTOS Co. Dublin https://jammer.ie/vehicle/167256-vauxhall-insignia-2015 111849 miles 2.0 litres Manual 0 doors Diesel Hatchback 1 previous owners Black 15 Nissan X-Trail 2016 18950.0 PRESTIGE AUTOS Co. Dublin https://jammer.ie/vehicle/167255-nissan-x-trail-2016 83887 miles 1.6 litres Manual 4 doors Diesel MPV 2 previous owners White 16 Hyundai i10 2016 8950.0 PRESTIGE AUTOS Co. Dublin https://jammer.ie/vehicle/167254-hyundai-i10-2016 59031 miles 1.0 litres Manual 4 doors Petrol Hatchback 4 previous owners Silver 17 Peugeot 3008 2017 23950.0 PRESTIGE AUTOS Co. Dublin https://jammer.ie/vehicle/167253-peugeot-3008-2017 74566 miles 1.6 litres Manual 5 doors Diesel Hatchback 1 previous owners Black 18 Kia Sportage 2019 NaN Kirwan's Co. Wexford https://jammer.ie/vehicle/167252-kia-sportage-2019 56008 miles 1.6 litres Manual 5 doors Diesel MPV 2 previous owners Grey 19 Toyota Corolla 2007 2950.0 LPD Commercials Co. Dublin https://jammer.ie/vehicle/167251-toyota-corolla-2007 115000 miles 1.4 litres Manual 5 doors Petrol Hatchback 3 previous owners Blue 20 Peugeot Partner 2012 5950.0 LPD Commercials Co. Dublin https://jammer.ie/vehicle/167250-peugeot-partner-2012 118000 miles 1.6 litres Manual 5 doors Diesel MPV 3 previous owners Blue 21 Audi A6 2006 2950.0 LPD Commercials Co. Dublin https://jammer.ie/vehicle/167249-audi-a6-2006 130000 miles 2.0 litres Manual 6 doors Diesel Saloon 3 previous owners Black 22 Opel Insignia 2011 4950.0 Blue Diamond Cars Co. 
Cork https://jammer.ie/vehicle/167248-opel-insignia-2011 153000 miles 2.0 litres Manual 4 doors Diesel Saloon 6 previous owners Black 23 Hyundai Bayon 2023 26545.0 Doran Motors Co. Monaghan https://jammer.ie/vehicle/167247-hyundai-bayon-2023 NaN 1.2 litres Manual 5 doors Petrol SUV NaN Red 24 Volkswagen Jetta 2014 9950.0 Ballincollig Motor Company / Trident Co. Cork https://jammer.ie/vehicle/167246-volkswagen-jetta-2014 112000 miles 1.2 litres Manual 4 doors Petrol Saloon 2 previous owners Black 25 Lexus CT 2011 9950.0 Weirs Motors Co. Dublin https://jammer.ie/vehicle/167245-lexus-ct-2011 139210 miles 1.8 litres Automatic 5 doors NaN Hatchback 4 previous owners White 26 BMW 5 Series 2018 31995.0 Auto Vision Motor Company Co. Dublin https://jammer.ie/vehicle/159883-bmw-5-series-2018 78646 miles 2.0 litres Automatic 4 doors Hybrid Saloon 1 previous owners Grey 27 Kia N/A 2012 6950.0 Ballincollig Motor Company / Trident Co. Cork https://jammer.ie/vehicle/167244-kia-2012 115000 miles 1.6 litres Manual 3 doors Diesel Hatchback 1 previous owners Grey 28 Volkswagen Scirocco 2015 18950.0 Weirs Motors Co. Dublin https://jammer.ie/vehicle/167242-volkswagen-scirocco-2015 34928 miles 2.0 litres Manual 3 doors Diesel Coupe NaN Black 29 Volkswagen Fox 2010 4800.0 Pat & Jason Ryan Co. Waterford https://jammer.ie/vehicle/167241-volkswagen-fox-2010 150095 miles 1.2 litres Manual 3 doors Petrol Hatchback 3 previous owners Grey 30 Peugeot 3008 2019 30950.0 Gowan Motors Co. Dublin https://jammer.ie/vehicle/167240-peugeot-3008-2019 20989 miles 1.5 litres Automatic 5 doors Diesel SUV 1 previous owners Grey 31 Volkswagen Beetle 2017 20950.0 Grange Road Motors Co. Dublin https://jammer.ie/vehicle/167239-volkswagen-beetle-2017 39023 miles 1.2 litres Automatic 3 doors Petrol Hatchback 1 previous owners Red 32 Mercedes-Benz E-Class 2012 13950.0 Car Options Co. Dublin https://jammer.ie/vehicle/167238-mercedes-benz-e-class-2012 119487 miles 2.1 litres Automatic 4 doors Diesel Saloon 2 previous owners Silver 33 Kia Optima 2016 16500.0 BG Motors Ltd Co. Kerry https://jammer.ie/vehicle/167237-kia-optima-2016 86994 miles 1.7 litres Manual 5 doors Diesel Estate 2 previous owners Black 34 Volkswagen Beetle 2007 4950.0 Moloney Motors Co. Dublin https://jammer.ie/vehicle/167236-volkswagen-beetle-2007 103150 miles 1.4 litres Manual 2 doors Petrol Convertible 6 previous owners Blue 35 BMW 5 Series 2019 NaN Cieran McConnon Car Sales Co. Monaghan https://jammer.ie/vehicle/167235-bmw-5-series-2019 34500 miles 2.0 litres Automatic 4 doors Diesel Saloon 1 previous owners Blue 36 Opel Corsa 2011 6999.0 Michael Tynan Cars Co. Dublin https://jammer.ie/vehicle/167234-opel-corsa-2011 59032 miles 1.2 litres Manual 5 doors Petrol Hatchback 3 previous owners Black 37 Ford Focus 2018 21999.0 Michael Tynan Cars Co. Dublin https://jammer.ie/vehicle/167233-ford-focus-2018 31691 miles 1.5 litres Manual 4 doors Diesel Hatchback 2 previous owners Blue 38 Volkswagen Polo 2021 22222.0 Michael Tynan Cars Co. Dublin https://jammer.ie/vehicle/167232-volkswagen-polo-2021 10573 miles 1.0 litres Manual 5 doors Petrol Hatchback 1 previous owners Red
Removing € from a string and converting the string into an int using python to calculate the average price per year
So I am scraping a website and the code gives me all the information I want however when scraping it also gives me the "€" symbol with the price. So I want to be able to have the price as a int and remove the "€" symbol so I can Calculate the average car price per year. It does give me the ValueError: invalid literal for int() with base 10: 'price' but when I try look at other questions on this website with the answer the solutions don't work for me. Year is also a string so would it make sense to convert the year to an int as well so I can do equations? import requests import pandas as pd from bs4 import BeautifulSoup url = "https://jammer.ie/used-cars?page={}&per-page=12" all_data = [] for page in range(1, 4): # <-- increase number of pages here soup = BeautifulSoup(requests.get(url.format(page)).text, "html.parser") for car in soup.select(".car"): info = car.select_one(".top-info").get_text(strip=True, separator="|") info = info.split("|") if len(info) == 4: make, model, year, price = info else: make, year, price = info model = "N/A" dealer_name = car.select_one(".dealer-name h6").get_text( strip=True, separator=" " ) address = car.select_one(".address").get_text(strip=True) features = {} for feature in car.select(".car--features li"): k = feature.img["src"].split("/")[-1].split(".")[0] v = feature.span.text features[f"feature_{k}"] = v all_data.append( { "make": make, "model": model, "year": year, "price": price, "dealer_name": dealer_name, "address": address, "url": "https://jammer.ie" + car.select_one("a[href*=vehicle]")["href"], **features, } ) df = pd.DataFrame(all_data) # prints sample data to screen: print(df.tail().to_markdown(index=False)) # saves all data to CSV df.to_csv("data.csv", index=False) I tired converting using df = pd.read_csv('data.csv', usecols= ['price','year']) print(type("price")) print(int("price")) But this did not work out for me. I also tired converting it to a float as well which did not work too.
[ "You can define a custom function for that and apply it on new/existing column,\nlike so:\npd = pd.DataFrame(\n {\"col1\": [1,2,2,3,4],\n \"prices\": [\"1€\", \"2.2€\", \"5€\",\"66€\", \"999€\"]\n }\n)\n\n# Use own function to create custom column\ndef remove_currency_sign(price: str, sign:str = \"€\")->int:\n return int(price.replace(sign,\"\"))\n\npd[\"new_col\"] = pd[\"prices\"].apply(remove_currency_sign)\nprint(pd)\n\n", "When you have data in pandas DataFrame you can do:\n#... your code\n\ndf = pd.DataFrame(all_data)\n\ndf[\"price\"] = df[\"price\"].str.replace(r\"[€,]\", \"\", regex=True)\ndf[\"price\"] = pd.to_numeric(df[\"price\"], errors=\"coerce\")\n\ndf[\"year\"] = pd.to_numeric(df[\"year\"], errors=\"coerce\")\n\nprint(df)\n\nPrints:\n make model year price dealer_name address url feature_speed feature_engine feature_transmission feature_door-icon1 feature_petrol5 feature_hatchback feature_owner feature_paint\n0 BMW 5 Series 2012 14250.0 JOS Jack O Sullivan Cars Co. Wexford https://jammer.ie/vehicle/150168-bmw-5-series-2012 103000 miles 2.0 litres Automatic 0 doors Diesel Saloon NaN NaN\n1 Citroen C3 2003 1999.0 Somerville Motors Co. Meath https://jammer.ie/vehicle/167272-citroen-c3-2003 74000 miles 1.4 litres Automatic 5 doors Petrol Hatchback 8 previous owners Grey\n2 Volkswagen Touareg 2017 29958.0 Holden Motor Company Co. Dublin https://jammer.ie/vehicle/167271-volkswagen-touareg-2017 145000 miles 3.0 litres Automatic 5 doors Diesel SUV 3 previous owners Black\n3 Audi A6 2010 7950.0 Roskeen Cars & Commercials Co. Cork https://jammer.ie/vehicle/167270-audi-a6-2010 174921 miles 2.0 litres Manual 5 doors Diesel Estate 1 previous owners Grey\n4 Volkswagen Passat 2012 6999.0 Somerville Motors Co. Meath https://jammer.ie/vehicle/167269-volkswagen-passat-2012 150000 miles 1.6 litres Manual 4 doors Diesel Saloon 7 previous owners Silver\n5 Vauxhall Insignia 2016 12499.0 ARTHUR AUTO SALES Co. Limerick https://jammer.ie/vehicle/167268-vauxhall-insignia-2016 127000 miles 2.0 litres Manual 5 doors Diesel Hatchback 1 previous owners Black\n6 Audi A6 2022 NaN Audi Approved Plus Kilkenny Co. Kilkenny https://jammer.ie/vehicle/167267-audi-a6-2022 932 miles 2.0 litres Automatic NaN Diesel Saloon NaN Grey\n7 Volkswagen Polo 2018 14999.0 O Neills Car Sales Co. Meath https://jammer.ie/vehicle/167266-volkswagen-polo-2018 73868 miles 1.0 litres Manual 5 doors Petrol Hatchback 1 previous owners White\n8 Audi A1 2022 NaN Audi Approved Plus Kilkenny Co. Kilkenny https://jammer.ie/vehicle/167265-audi-a1-2022 932 miles 1.0 litres Manual NaN Petrol Hatchback NaN Grey\n9 Volkswagen Golf 2014 24999.0 O Neills Car Sales Co. Meath https://jammer.ie/vehicle/167263-volkswagen-golf-2014 82523 miles 2.0 litres Manual 3 doors Petrol Hatchback 1 previous owners Black\n10 Peugeot 5008 2014 13950.0 PRESTIGE AUTOS Co. Dublin https://jammer.ie/vehicle/167260-peugeot-5008-2014 47000 miles 1.6 litres Automatic 5 doors Petrol MPV 1 previous owners White\n11 Audi A4 2014 14950.0 PRESTIGE AUTOS Co. Dublin https://jammer.ie/vehicle/167259-audi-a4-2014 120000 miles 2.0 litres Manual 4 doors Diesel Saloon 2 previous owners White\n12 Volkswagen up! 2013 NaN PRESTIGE AUTOS Co. Dublin https://jammer.ie/vehicle/167257-volkswagen-up-2013 90101 miles 1.0 litres Manual 4 doors Petrol Hatchback 3 previous owners White\n13 Ford Fiesta 2010 5950.0 Car Options Co. 
Dublin https://jammer.ie/vehicle/111682-ford-fiesta-2010 113144 miles 1.4 litres Manual 5 doors Diesel Hatchback 4 previous owners Silver\n14 Vauxhall Insignia 2015 8950.0 PRESTIGE AUTOS Co. Dublin https://jammer.ie/vehicle/167256-vauxhall-insignia-2015 111849 miles 2.0 litres Manual 0 doors Diesel Hatchback 1 previous owners Black\n15 Nissan X-Trail 2016 18950.0 PRESTIGE AUTOS Co. Dublin https://jammer.ie/vehicle/167255-nissan-x-trail-2016 83887 miles 1.6 litres Manual 4 doors Diesel MPV 2 previous owners White\n16 Hyundai i10 2016 8950.0 PRESTIGE AUTOS Co. Dublin https://jammer.ie/vehicle/167254-hyundai-i10-2016 59031 miles 1.0 litres Manual 4 doors Petrol Hatchback 4 previous owners Silver\n17 Peugeot 3008 2017 23950.0 PRESTIGE AUTOS Co. Dublin https://jammer.ie/vehicle/167253-peugeot-3008-2017 74566 miles 1.6 litres Manual 5 doors Diesel Hatchback 1 previous owners Black\n18 Kia Sportage 2019 NaN Kirwan's Co. Wexford https://jammer.ie/vehicle/167252-kia-sportage-2019 56008 miles 1.6 litres Manual 5 doors Diesel MPV 2 previous owners Grey\n19 Toyota Corolla 2007 2950.0 LPD Commercials Co. Dublin https://jammer.ie/vehicle/167251-toyota-corolla-2007 115000 miles 1.4 litres Manual 5 doors Petrol Hatchback 3 previous owners Blue\n20 Peugeot Partner 2012 5950.0 LPD Commercials Co. Dublin https://jammer.ie/vehicle/167250-peugeot-partner-2012 118000 miles 1.6 litres Manual 5 doors Diesel MPV 3 previous owners Blue\n21 Audi A6 2006 2950.0 LPD Commercials Co. Dublin https://jammer.ie/vehicle/167249-audi-a6-2006 130000 miles 2.0 litres Manual 6 doors Diesel Saloon 3 previous owners Black\n22 Opel Insignia 2011 4950.0 Blue Diamond Cars Co. Cork https://jammer.ie/vehicle/167248-opel-insignia-2011 153000 miles 2.0 litres Manual 4 doors Diesel Saloon 6 previous owners Black\n23 Hyundai Bayon 2023 26545.0 Doran Motors Co. Monaghan https://jammer.ie/vehicle/167247-hyundai-bayon-2023 NaN 1.2 litres Manual 5 doors Petrol SUV NaN Red\n24 Volkswagen Jetta 2014 9950.0 Ballincollig Motor Company / Trident Co. Cork https://jammer.ie/vehicle/167246-volkswagen-jetta-2014 112000 miles 1.2 litres Manual 4 doors Petrol Saloon 2 previous owners Black\n25 Lexus CT 2011 9950.0 Weirs Motors Co. Dublin https://jammer.ie/vehicle/167245-lexus-ct-2011 139210 miles 1.8 litres Automatic 5 doors NaN Hatchback 4 previous owners White\n26 BMW 5 Series 2018 31995.0 Auto Vision Motor Company Co. Dublin https://jammer.ie/vehicle/159883-bmw-5-series-2018 78646 miles 2.0 litres Automatic 4 doors Hybrid Saloon 1 previous owners Grey\n27 Kia N/A 2012 6950.0 Ballincollig Motor Company / Trident Co. Cork https://jammer.ie/vehicle/167244-kia-2012 115000 miles 1.6 litres Manual 3 doors Diesel Hatchback 1 previous owners Grey\n28 Volkswagen Scirocco 2015 18950.0 Weirs Motors Co. Dublin https://jammer.ie/vehicle/167242-volkswagen-scirocco-2015 34928 miles 2.0 litres Manual 3 doors Diesel Coupe NaN Black\n29 Volkswagen Fox 2010 4800.0 Pat & Jason Ryan Co. Waterford https://jammer.ie/vehicle/167241-volkswagen-fox-2010 150095 miles 1.2 litres Manual 3 doors Petrol Hatchback 3 previous owners Grey\n30 Peugeot 3008 2019 30950.0 Gowan Motors Co. Dublin https://jammer.ie/vehicle/167240-peugeot-3008-2019 20989 miles 1.5 litres Automatic 5 doors Diesel SUV 1 previous owners Grey\n31 Volkswagen Beetle 2017 20950.0 Grange Road Motors Co. Dublin https://jammer.ie/vehicle/167239-volkswagen-beetle-2017 39023 miles 1.2 litres Automatic 3 doors Petrol Hatchback 1 previous owners Red\n32 Mercedes-Benz E-Class 2012 13950.0 Car Options Co. 
Dublin https://jammer.ie/vehicle/167238-mercedes-benz-e-class-2012 119487 miles 2.1 litres Automatic 4 doors Diesel Saloon 2 previous owners Silver\n33 Kia Optima 2016 16500.0 BG Motors Ltd Co. Kerry https://jammer.ie/vehicle/167237-kia-optima-2016 86994 miles 1.7 litres Manual 5 doors Diesel Estate 2 previous owners Black\n34 Volkswagen Beetle 2007 4950.0 Moloney Motors Co. Dublin https://jammer.ie/vehicle/167236-volkswagen-beetle-2007 103150 miles 1.4 litres Manual 2 doors Petrol Convertible 6 previous owners Blue\n35 BMW 5 Series 2019 NaN Cieran McConnon Car Sales Co. Monaghan https://jammer.ie/vehicle/167235-bmw-5-series-2019 34500 miles 2.0 litres Automatic 4 doors Diesel Saloon 1 previous owners Blue\n36 Opel Corsa 2011 6999.0 Michael Tynan Cars Co. Dublin https://jammer.ie/vehicle/167234-opel-corsa-2011 59032 miles 1.2 litres Manual 5 doors Petrol Hatchback 3 previous owners Black\n37 Ford Focus 2018 21999.0 Michael Tynan Cars Co. Dublin https://jammer.ie/vehicle/167233-ford-focus-2018 31691 miles 1.5 litres Manual 4 doors Diesel Hatchback 2 previous owners Blue\n38 Volkswagen Polo 2021 22222.0 Michael Tynan Cars Co. Dublin https://jammer.ie/vehicle/167232-volkswagen-polo-2021 10573 miles 1.0 litres Manual 5 doors Petrol Hatchback 1 previous owners Red\n\n" ]
[ 1, 1 ]
[]
[]
[ "beautifulsoup", "pandas", "python", "python_requests", "web_scraping" ]
stackoverflow_0074619038_beautifulsoup_pandas_python_python_requests_web_scraping.txt
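Neither answer above shows the final step the question actually asks for (the average price per year). A minimal sketch with made-up sample rows, assuming the price and year columns have already been scraped as strings:

import pandas as pd

df = pd.DataFrame({
    "year": ["2012", "2012", "2017"],            # illustrative sample data
    "price": ["€14,250", "€6,999", "€29,958"],
})

# strip the euro sign and thousands separators, then convert to numbers
df["price"] = pd.to_numeric(df["price"].str.replace(r"[€,]", "", regex=True), errors="coerce")
df["year"] = pd.to_numeric(df["year"], errors="coerce")

print(df.groupby("year")["price"].mean())        # average car price per year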
Q: Python: get a frequency count based on two columns (variables) in pandas dataframe some row appears Hello I have the following dataframe. Group Size Short Small Short Small Moderate Medium Moderate Small Tall Large I want to count the frequency of how many times the same row appears in the dataframe. Group Size Time Short Small 2 Moderate Medium 1 Moderate Small 1 Tall Large 1 A: You can use groupby's size: In [11]: df.groupby(["Group", "Size"]).size() Out[11]: Group Size Moderate Medium 1 Small 1 Short Small 2 Tall Large 1 dtype: int64 In [12]: df.groupby(["Group", "Size"]).size().reset_index(name="Time") Out[12]: Group Size Time 0 Moderate Medium 1 1 Moderate Small 1 2 Short Small 2 3 Tall Large 1 A: Update after pandas 1.1 value_counts now accept multiple columns df.value_counts(["Group", "Size"]) You can also try pd.crosstab() Group Size Short Small Short Small Moderate Medium Moderate Small Tall Large pd.crosstab(df.Group,df.Size) Size Large Medium Small Group Moderate 0 1 1 Short 0 0 2 Tall 1 0 0 EDIT: In order to get your out put pd.crosstab(df.Group,df.Size).replace(0,np.nan).\ stack().reset_index().rename(columns={0:'Time'}) Out[591]: Group Size Time 0 Moderate Medium 1.0 1 Moderate Small 1.0 2 Short Small 2.0 3 Tall Large 1.0 A: Other posibbility is using .pivot_table() and aggfunc='size' df_solution = df.pivot_table(index=['Group','Size'], aggfunc='size') A: You can use function pd.crosstab() from Pandas. It works the same way as value_counts() but for more than one column.
Python: get a frequency count based on two columns (variables) in pandas dataframe some row appears
Hello I have the following dataframe. Group Size Short Small Short Small Moderate Medium Moderate Small Tall Large I want to count the frequency of how many times the same row appears in the dataframe. Group Size Time Short Small 2 Moderate Medium 1 Moderate Small 1 Tall Large 1
[ "You can use groupby's size:\nIn [11]: df.groupby([\"Group\", \"Size\"]).size()\nOut[11]:\nGroup Size\nModerate Medium 1\n Small 1\nShort Small 2\nTall Large 1\ndtype: int64\n\nIn [12]: df.groupby([\"Group\", \"Size\"]).size().reset_index(name=\"Time\")\nOut[12]:\n Group Size Time\n0 Moderate Medium 1\n1 Moderate Small 1\n2 Short Small 2\n3 Tall Large 1\n\n", "Update after pandas 1.1 value_counts now accept multiple columns\ndf.value_counts([\"Group\", \"Size\"])\n\n\nYou can also try pd.crosstab()\nGroup Size\n\nShort Small\nShort Small\nModerate Medium\nModerate Small\nTall Large\n\npd.crosstab(df.Group,df.Size)\n\n\nSize Large Medium Small\nGroup \nModerate 0 1 1\nShort 0 0 2\nTall 1 0 0\n\n\nEDIT: In order to get your out put\npd.crosstab(df.Group,df.Size).replace(0,np.nan).\\\n stack().reset_index().rename(columns={0:'Time'})\nOut[591]: \n Group Size Time\n0 Moderate Medium 1.0\n1 Moderate Small 1.0\n2 Short Small 2.0\n3 Tall Large 1.0\n\n", "Other posibbility is using .pivot_table() and aggfunc='size'\ndf_solution = df.pivot_table(index=['Group','Size'], aggfunc='size')\n\n", "You can use function pd.crosstab() from Pandas. It works the same way as value_counts() but for more than one column.\n" ]
[ 191, 86, 5, 0 ]
[]
[]
[ "dataframe", "group_by", "pandas", "python" ]
stackoverflow_0033271098_dataframe_group_by_pandas_python.txt
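To reproduce the exact three-column output shown in the question (Group, Size, Time) with recent pandas, the value_counts approach from the second answer can be combined with reset_index; a small sketch assuming pandas >= 1.1:

import pandas as pd

df = pd.DataFrame({
    "Group": ["Short", "Short", "Moderate", "Moderate", "Tall"],
    "Size": ["Small", "Small", "Medium", "Small", "Large"],
})

out = df.value_counts(["Group", "Size"]).reset_index(name="Time")
print(out)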
Q: Playwright auto-scroll to bottom of infinite-scroll page I am trying to automate the scraping of a site with "infinite scroll" with Python and Playwright. The issue is that Playwright doesn't include, as of yet, a scroll functionnality let alone an infinite auto-scroll functionnality. From what I found on the net and my personnal testing, I can automate an infinite or finite scroll using the page.evaluate() function and some Javascript code. For example, this works: for i in range(20): page.evaluate('var div = document.getElementsByClassName("comment-container")[0];div.scrollTop = div.scrollHeight') page.wait_for_timeout(500) The problem with this approach is that it will either work by specifying a number of scrolls or by telling it to keep going forever with a while True loop. I need to find a way to tell it to keep scrolling until the final content loads. This is the Javascript that I am currently trying in page.evaluate(): var intervalID = setInterval(function() { var scrollingElement = (document.scrollingElement || document.body); scrollingElement.scrollTop = scrollingElement.scrollHeight; console.log('fail') }, 1000); var anotherID = setInterval(function() { if ((window.innerHeight + window.scrollY) >= document.body.offsetHeight) { clearInterval(intervalID); }}, 1000) This does not work either in my firefox browser or in the Playwright firefox browser. It returns immediately and doesn't execute the code in intervals. I would be grateful if someone could tell me how I can, using Playwright, create an auto-scroll function that will detect and stop when it reaches the bottom of a dynamically loading webpage. A: So I found a working solution. What I did was to combine Javascript with python Playwright code. I start the setInterval with a timer of 200ms to scroll down on the page with page.evaluate() and then I follow it up with a python loop that checks every second whether the total height of the page (scroll included) has changed. If it changes it continues to scroll and if it hasn't changed than the scroll is over. This is what it looks like: page.evaluate( """ var intervalID = setInterval(function () { var scrollingElement = (document.scrollingElement || document.body); scrollingElement.scrollTop = scrollingElement.scrollHeight; }, 200); """ ) prev_height = None while True: curr_height = page.evaluate('(window.innerHeight + window.scrollY)') if not prev_height: prev_height = curr_height time.sleep(1) elif prev_height == curr_height: page.evaluate('clearInterval(intervalID)') break else: prev_height = curr_height time.sleep(1) EDIT See the below answer using the new mouse.wheel(x, y) feature for an up to date way to scroll using playwright. Combine my answer with his to lessen the need to use JS. A: The new Playwright version has a scroll function. it's called mouse.wheel(x, y). 
In the below code, we'll be attempting to scroll through youtube.com which has an "infinite scroll": from playwright.sync_api import Playwright, sync_playwright import time def run(playwright: Playwright) -> None: browser = playwright.chromium.launch(headless=False) context = browser.new_context() # Open new page page = context.new_page() page.goto('https://www.youtube.com/') # page.mouse.wheel(horizontally, vertically(positive is # scrolling down, negative is scrolling up) for i in range(5): #make the range as long as needed page.mouse.wheel(0, 15000) time.sleep(2) i += 1 time.sleep(15) # --------------------- context.close() browser.close() with sync_playwright() as playwright: run(playwright) A: The other solutions were a tad bit verbose and "overkill" for me and this is what worked for me. Here's a two liner that took me a few migraines to come around to :) Note: you are going to have to put in your own selector. This is just an example... while page.locator("span",has_text="End of results").is_visible() is False: page.mouse.wheel(0,100) #page.keyboard.down(PageDown) also works Literally just keep scrolling until some sort of unique selector is present. In this case a span tag with the string "End of results" (for the context of my use case) popped up when you scroll to the bottom. I trust you can translate this logic for you own usage.. A: the playwright has the page.keyboard.down('End') command, it will scroll to the end of the page.
Playwright auto-scroll to bottom of infinite-scroll page
I am trying to automate the scraping of a site with "infinite scroll" with Python and Playwright. The issue is that Playwright doesn't include, as of yet, a scroll functionnality let alone an infinite auto-scroll functionnality. From what I found on the net and my personnal testing, I can automate an infinite or finite scroll using the page.evaluate() function and some Javascript code. For example, this works: for i in range(20): page.evaluate('var div = document.getElementsByClassName("comment-container")[0];div.scrollTop = div.scrollHeight') page.wait_for_timeout(500) The problem with this approach is that it will either work by specifying a number of scrolls or by telling it to keep going forever with a while True loop. I need to find a way to tell it to keep scrolling until the final content loads. This is the Javascript that I am currently trying in page.evaluate(): var intervalID = setInterval(function() { var scrollingElement = (document.scrollingElement || document.body); scrollingElement.scrollTop = scrollingElement.scrollHeight; console.log('fail') }, 1000); var anotherID = setInterval(function() { if ((window.innerHeight + window.scrollY) >= document.body.offsetHeight) { clearInterval(intervalID); }}, 1000) This does not work either in my firefox browser or in the Playwright firefox browser. It returns immediately and doesn't execute the code in intervals. I would be grateful if someone could tell me how I can, using Playwright, create an auto-scroll function that will detect and stop when it reaches the bottom of a dynamically loading webpage.
[ "So I found a working solution.\nWhat I did was to combine Javascript with python Playwright code.\nI start the setInterval with a timer of 200ms to scroll down on the page with page.evaluate() and then I follow it up with a python loop that checks every second whether the total height of the page (scroll included) has changed. If it changes it continues to scroll and if it hasn't changed than the scroll is over.\nThis is what it looks like:\npage.evaluate(\n \"\"\"\n var intervalID = setInterval(function () {\n var scrollingElement = (document.scrollingElement || document.body);\n scrollingElement.scrollTop = scrollingElement.scrollHeight;\n }, 200);\n\n \"\"\"\n)\nprev_height = None\nwhile True:\n curr_height = page.evaluate('(window.innerHeight + window.scrollY)')\n if not prev_height:\n prev_height = curr_height\n time.sleep(1)\n elif prev_height == curr_height:\n page.evaluate('clearInterval(intervalID)')\n break\n else:\n prev_height = curr_height\n time.sleep(1)\n\nEDIT\nSee the below answer using the new mouse.wheel(x, y) feature for an up to date way to scroll using playwright. Combine my answer with his to lessen the need to use JS.\n", "The new Playwright version has a scroll function. it's called mouse.wheel(x, y). In the below code, we'll be attempting to scroll through youtube.com which has an \"infinite scroll\":\nfrom playwright.sync_api import Playwright, sync_playwright\nimport time\n\n\ndef run(playwright: Playwright) -> None:\n browser = playwright.chromium.launch(headless=False)\n context = browser.new_context()\n\n # Open new page\n page = context.new_page()\n\n page.goto('https://www.youtube.com/')\n\n # page.mouse.wheel(horizontally, vertically(positive is \n # scrolling down, negative is scrolling up)\n for i in range(5): #make the range as long as needed\n page.mouse.wheel(0, 15000)\n time.sleep(2)\n i += 1\n \n time.sleep(15)\n # ---------------------\n context.close()\n browser.close()\n\n\nwith sync_playwright() as playwright:\n run(playwright)\n\n", "The other solutions were a tad bit verbose and \"overkill\" for me and this is what worked for me.\nHere's a two liner that took me a few migraines to come around to :)\nNote: you are going to have to put in your own selector. This is just an example...\n while page.locator(\"span\",has_text=\"End of results\").is_visible() is False:\n page.mouse.wheel(0,100)\n #page.keyboard.down(PageDown) also works\n\nLiterally just keep scrolling until some sort of unique selector is present. In this case a span tag with the string \"End of results\" (for the context of my use case) popped up when you scroll to the bottom.\nI trust you can translate this logic for you own usage..\n", "the playwright has the page.keyboard.down('End') command, it will scroll to the end of the page.\n" ]
[ 12, 5, 1, 0 ]
[ "This topic old, but new to me. I have been using the playwright wheel scroll but for me it takes control/focus on the mouse.\nSo if I happen to be typing (which i usually am) and it scrolls, my beautiful words go into the void to never be seen again.\nI am going to go ahead and try out the js solution posted above and see if that gets me around the mouse/focus issue.\n" ]
[ -2 ]
[ "javascript", "playwright", "python", "python_3.x" ]
stackoverflow_0069183922_javascript_playwright_python_python_3.x.txt
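Combining the first two answers above (mouse.wheel for the scrolling, a height check to detect the bottom) might look like the sketch below; the URL is the placeholder from the second answer, and the scrollHeight check may need adapting for pages that scroll inside a nested container:

import time
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)
    page = browser.new_page()
    page.goto("https://www.youtube.com/")

    prev_height = None
    while True:
        page.mouse.wheel(0, 10000)                          # scroll down a chunk
        time.sleep(2)                                       # let new content load
        curr_height = page.evaluate("document.body.scrollHeight")
        if curr_height == prev_height:                      # nothing new appeared
            break
        prev_height = curr_height

    browser.close()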
Q: Setting Mandelbrot Python Image Background Color to Cyan How do I set the Mandelbrot Set background to cyan? I don't understand the code. Here's the code: # Python code for Mandelbrot Fractal # Import necessary libraries from PIL import Image from numpy import complex, array import colorsys # setting the width of the output image as 1024 WIDTH = 1024 # a function to return a tuple of colors # as integer value of rgb def rgb_conv(i): color = 255 * array(colorsys.hsv_to_rgb(i / 255.0, 1.0, 0.5)) return tuple(color.astype(int)) # function defining a mandelbrot def mandelbrot(x, y): c0 = complex(x, y) c = 0 for i in range(1, 1000): if abs(c) > 2: return rgb_conv(i) c = c * c + c0 return (0, 0, 0) # creating the new image in RGB mode img = Image.new('RGB', (WIDTH, int(WIDTH / 2))) pixels = img.load() for x in range(img.size[0]): # displaying the progress as percentage print("%.2f %%" % (x / WIDTH * 100.0)) for y in range(img.size[1]): pixels[x, y] = mandelbrot((x - (0.75 * WIDTH)) / (WIDTH / 4), (y - (WIDTH / 4)) / (WIDTH / 4)) # to display the created fractal after # completing the given number of iterations img.show() I would like to set the background color to cyan. More Info needs to be entered here in order for me to post but I have no more info. Thanks. Neo A: try to change "mandelbrot" function to def mandelbrot(x, y): c0 = complex(x, y) c = 0 for i in range(1, 1000): if abs(c) > 2: return (0, 0, 0) c = c * c + c0 return (0, 255, 255) Final return statement is a background color
Setting Mandelbrot Python Image Background Color to Cyan
How do I set the Mandelbrot Set background to cyan? I don't understand the code. Here's the code: # Python code for Mandelbrot Fractal # Import necessary libraries from PIL import Image from numpy import complex, array import colorsys # setting the width of the output image as 1024 WIDTH = 1024 # a function to return a tuple of colors # as integer value of rgb def rgb_conv(i): color = 255 * array(colorsys.hsv_to_rgb(i / 255.0, 1.0, 0.5)) return tuple(color.astype(int)) # function defining a mandelbrot def mandelbrot(x, y): c0 = complex(x, y) c = 0 for i in range(1, 1000): if abs(c) > 2: return rgb_conv(i) c = c * c + c0 return (0, 0, 0) # creating the new image in RGB mode img = Image.new('RGB', (WIDTH, int(WIDTH / 2))) pixels = img.load() for x in range(img.size[0]): # displaying the progress as percentage print("%.2f %%" % (x / WIDTH * 100.0)) for y in range(img.size[1]): pixels[x, y] = mandelbrot((x - (0.75 * WIDTH)) / (WIDTH / 4), (y - (WIDTH / 4)) / (WIDTH / 4)) # to display the created fractal after # completing the given number of iterations img.show() I would like to set the background color to cyan. More Info needs to be entered here in order for me to post but I have no more info. Thanks. Neo
[ "try to change \"mandelbrot\" function to\ndef mandelbrot(x, y):\n c0 = complex(x, y)\n c = 0\n for i in range(1, 1000):\n if abs(c) > 2:\n return (0, 0, 0)\n c = c * c + c0\n return (0, 255, 255)\n\nFinal return statement is a background color\n" ]
[ 1 ]
[]
[]
[ "mandelbrot", "python" ]
stackoverflow_0074619148_mandelbrot_python.txt
Q: How to specify a group of objects in a dataframe column using Python In the example below, how do I specify 'mansion' under 'h_type', and find its highest price? (to prevent finding the highest price from the whole data which might include 'apartment')
ie: df=pd.DataFrame({'h_type':[apartment,mansion,....],'h_price':[..., ...,...]})
if df.loc[df['h_type']=='mansion']: ##<= do not work, 
    aidMax = priceSr.idxmax()
    if not isnan(aidMax):
        amaxSr = df.loc[aidMax]
        if amost is None:
            amost = amaxSr.copy()
        else:
            if float(amaxSr['h_price']) > float(amost['h_price']):
                amost = amaxSr.copy()
amost = amost.to_frame().transpose() 
print(amost, '\n==========')
 A: TL;DR:
That can be a one-liner:
max_price = df[df["h_type"] == "mansion"]["h_price"].max()

Explanation
A little bit of explaining here:
df[df["h_type"] == "mansion"]

That piece selects all the rows whose "h_type" column equals "mansion".
df[df["h_type"] == "mansion"]["h_price"]

Then, we access the column "h_price" of those rows.
Finally
df[df["h_type"] == "mansion"]["h_price"].max()

Will return the maximum value for that column (among the selected rows).
How to specify a group of objects in a dataframe column using Python
In the example below, how do I specify 'mansion' under 'h_type', and find its highest price? (to prevent finding the highest price from the whole data which might include 'apartment')
ie: df=pd.DataFrame({'h_type':[apartment,mansion,....],'h_price':[..., ...,...]})
if df.loc[df['h_type']=='mansion']: ##<= do not work, 
    aidMax = priceSr.idxmax()
    if not isnan(aidMax):
        amaxSr = df.loc[aidMax]
        if amost is None:
            amost = amaxSr.copy()
        else:
            if float(amaxSr['h_price']) > float(amost['h_price']):
                amost = amaxSr.copy()
amost = amost.to_frame().transpose() 
print(amost, '\n==========')
[ "TL;DR:\nThat can be a oneliner:\nmax_price = df[df[\"h_price\"] == \"mansion\"]][\"h_price\"].max()\n\nExplanation\nA little bit of explaining here:\ndf[df[\"h_price\"] == \"mansion\"]]\n\nThat pieces selects all the rows who's column \"h_price\" value is the maximum.\ndf[df[\"h_price\"] == \"mansion\"]][\"h_price\"]\n\nThen, we access the column \"h_price\" of all the rows.\nFinally\ndf[df[\"h_price\"] == \"mansion\"]][\"h_price\"].max()\n\nWill return the maximum value for that column (amongs all rows).\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074618146_dataframe_pandas_python.txt
Q: How to get specific value from JSON response in Python I have a response coming in as : b' { "_items": [ { "_id": "61a8dc29fab70adfacf59789", "name": "CP", "url": "", "sd_subscriber_id": "", "account_manager": "", "contact_name": "", "contact_email": "", "phone": "", "country": "other", "is_enabled": true, "company_type": null, "monitoring_administrator": null, "allowed_ip_list": null, "expiry_date": null, "original_creator": "6183d49420d13bc4e332281d", "events_only": false, "_created": "2021-12-02T14:46:01+0000", "_updated": "2022-02-06T11:59:32+0000", "_etag": "277e2a8667b650fe4ba56f4b9b44780f3992062a", "archive_access": false, "sections": { "wire": true, "agenda": true, "news_api": true, "monitoring": true }, "_links": { "self": { "title": "Companie", "href": "companies/61a8dc29fab70adfacf59789" }, "related": { "original_creator": { "title": "User", "href": "users/6183d49420d13bc4e332281d" } } } }, { "_id": "635ac6b9b837aa06e8e94ea3", "name": "Load Company No Exp", "url": "", "sd_subscriber_id": "", "account_manager": "", "contact_name": "", "contact_email": "[email protected]", "phone": "6478934734", "country": "", "is_enabled": true, "company_type": null, "monitoring_administrator": null, "allowed_ip_list": null, "expiry_date": null, "original_creator": "6298c949007f2fb1c968dfdf", "events_only": false, "_created": "2022-10-27T17:58:17+0000", "_updated": "2022-10-27T18:03:17+0000", "_etag": "9cb17d520b3ca9dc1c3326a1ccab8bbb5e7839f2", "version_creator": "6183d49420d13bc4e332281d", "_links": { "self": { "title": "Companie", "href": "companies/635ac6b9b837aa06e8e94ea3" }, "related": { "original_creator": { "title": "User", "href": "users/6298c949007f2fb1c968dfdf" }, "version_creator": { "title": "User", "href": "users/6183d49420d13bc4e332281d" } } } } ] } All i Need is the _items part of it that is inside the [] : [ { "_id": "61a8dc29fab70adfacf59789", "name": "CP", "url": "", "sd_subscriber_id": "", "account_manager": "", "contact_name": "", "contact_email": "", "phone": "", "country": "other", "is_enabled": true, "company_type": null, "monitoring_administrator": null, "allowed_ip_list": null, "expiry_date": null, "original_creator": "6183d49420d13bc4e332281d", "events_only": false, "_created": "2021-12-02T14:46:01+0000", "_updated": "2022-02-06T11:59:32+0000", "_etag": "277e2a8667b650fe4ba56f4b9b44780f3992062a", "archive_access": false, "sections": { "wire": true, "agenda": true, "news_api": true, "monitoring": true }, "_links": { "self": { "title": "Companie", "href": "companies/61a8dc29fab70adfacf59789" }, "related": { "original_creator": { "title": "User", "href": "users/6183d49420d13bc4e332281d" } } } }, { "_id": "635ac6b9b837aa06e8e94ea3", "name": "Load Company No Exp", "url": "", "sd_subscriber_id": "", "account_manager": "", "contact_name": "", "contact_email": "[email protected]", "phone": "6478934734", "country": "", "is_enabled": true, "company_type": null, "monitoring_administrator": null, "allowed_ip_list": null, "expiry_date": null, "original_creator": "6298c949007f2fb1c968dfdf", "events_only": false, "_created": "2022-10-27T17:58:17+0000", "_updated": "2022-10-27T18:03:17+0000", "_etag": "9cb17d520b3ca9dc1c3326a1ccab8bbb5e7839f2", "version_creator": "6183d49420d13bc4e332281d", "_links": { "self": { "title": "Companie", "href": "companies/635ac6b9b837aa06e8e94ea3" }, "related": { "original_creator": { "title": "User", "href": "users/6298c949007f2fb1c968dfdf" }, "version_creator": { "title": "User", "href": "users/6183d49420d13bc4e332281d" } } } } ] How to get it. 
I tried getting it as temp = response['_items'] but it won't work. Please help me out.
 A: You need to convert the raw byte string to a Python dict first, assuming that you are using Python version 3.6+ and your response object is either string or bytes:
import json

data = json.loads(response) # loads() decodes it to dict
temp = data['_items'] 
How to get specific value from JSON response in Python
I have a response coming in as : b' { "_items": [ { "_id": "61a8dc29fab70adfacf59789", "name": "CP", "url": "", "sd_subscriber_id": "", "account_manager": "", "contact_name": "", "contact_email": "", "phone": "", "country": "other", "is_enabled": true, "company_type": null, "monitoring_administrator": null, "allowed_ip_list": null, "expiry_date": null, "original_creator": "6183d49420d13bc4e332281d", "events_only": false, "_created": "2021-12-02T14:46:01+0000", "_updated": "2022-02-06T11:59:32+0000", "_etag": "277e2a8667b650fe4ba56f4b9b44780f3992062a", "archive_access": false, "sections": { "wire": true, "agenda": true, "news_api": true, "monitoring": true }, "_links": { "self": { "title": "Companie", "href": "companies/61a8dc29fab70adfacf59789" }, "related": { "original_creator": { "title": "User", "href": "users/6183d49420d13bc4e332281d" } } } }, { "_id": "635ac6b9b837aa06e8e94ea3", "name": "Load Company No Exp", "url": "", "sd_subscriber_id": "", "account_manager": "", "contact_name": "", "contact_email": "[email protected]", "phone": "6478934734", "country": "", "is_enabled": true, "company_type": null, "monitoring_administrator": null, "allowed_ip_list": null, "expiry_date": null, "original_creator": "6298c949007f2fb1c968dfdf", "events_only": false, "_created": "2022-10-27T17:58:17+0000", "_updated": "2022-10-27T18:03:17+0000", "_etag": "9cb17d520b3ca9dc1c3326a1ccab8bbb5e7839f2", "version_creator": "6183d49420d13bc4e332281d", "_links": { "self": { "title": "Companie", "href": "companies/635ac6b9b837aa06e8e94ea3" }, "related": { "original_creator": { "title": "User", "href": "users/6298c949007f2fb1c968dfdf" }, "version_creator": { "title": "User", "href": "users/6183d49420d13bc4e332281d" } } } } ] } All i Need is the _items part of it that is inside the [] : [ { "_id": "61a8dc29fab70adfacf59789", "name": "CP", "url": "", "sd_subscriber_id": "", "account_manager": "", "contact_name": "", "contact_email": "", "phone": "", "country": "other", "is_enabled": true, "company_type": null, "monitoring_administrator": null, "allowed_ip_list": null, "expiry_date": null, "original_creator": "6183d49420d13bc4e332281d", "events_only": false, "_created": "2021-12-02T14:46:01+0000", "_updated": "2022-02-06T11:59:32+0000", "_etag": "277e2a8667b650fe4ba56f4b9b44780f3992062a", "archive_access": false, "sections": { "wire": true, "agenda": true, "news_api": true, "monitoring": true }, "_links": { "self": { "title": "Companie", "href": "companies/61a8dc29fab70adfacf59789" }, "related": { "original_creator": { "title": "User", "href": "users/6183d49420d13bc4e332281d" } } } }, { "_id": "635ac6b9b837aa06e8e94ea3", "name": "Load Company No Exp", "url": "", "sd_subscriber_id": "", "account_manager": "", "contact_name": "", "contact_email": "[email protected]", "phone": "6478934734", "country": "", "is_enabled": true, "company_type": null, "monitoring_administrator": null, "allowed_ip_list": null, "expiry_date": null, "original_creator": "6298c949007f2fb1c968dfdf", "events_only": false, "_created": "2022-10-27T17:58:17+0000", "_updated": "2022-10-27T18:03:17+0000", "_etag": "9cb17d520b3ca9dc1c3326a1ccab8bbb5e7839f2", "version_creator": "6183d49420d13bc4e332281d", "_links": { "self": { "title": "Companie", "href": "companies/635ac6b9b837aa06e8e94ea3" }, "related": { "original_creator": { "title": "User", "href": "users/6298c949007f2fb1c968dfdf" }, "version_creator": { "title": "User", "href": "users/6183d49420d13bc4e332281d" } } } } ] How to get it. 
I tried getting it as temp = response['_items'] but it won't work. Please help me out.
[ "You need to convert raw byte string to Python dict first, assuming that you are using Python version 3.6+ and your response object is either string or bytes:\nimport json\n\ndata = json.loads(response) # loads() decodes it to dict\ntemp = response['_items'] \n\n" ]
[ 4 ]
[]
[]
[ "json", "python" ]
stackoverflow_0074619209_json_python.txt
Q: Annotation not found outside plotly graph I have a graph that looks like this: where I want to add some text towards the left bottom side of the plot, something similar to the text at the bottom here, but for me on my left or right side of the graph. I searched on stack and found many solutions, even one specific to the graph shown,however none work for me. My current code is below, where the annotation does not display on my plot. data1= final_api.query("info_title=='JupyterHub'").sort_values(by=['commitDate']) data1['Year-Month'] = pd.to_datetime(data1['Year-Month']) data1['Commit-growth'] = data1['commits'].cumsum() import plotly.graph_objects as go fig = go.Figure() fig = px.scatter(data1, x='Year-Month', y='Commit-growth', color='major_version', text='Commit-growth') fig.add_trace(go.Scatter(mode='lines', x=data1["Year-Month"], y=data1["Commit-growth"], line_color='black', line_width=1, line_shape='hvh', showlegend=False ) ) for _,row in data1.iterrows(): fig.add_annotation( go.layout.Annotation( x=row["Year-Month"], y=row["Commit-growth"], text=row['info_version'], showarrow=False, align='center', yanchor='bottom', yshift=5, textangle=-10) ) note = 'NYSE Trading Days After Announcement<br>Source:<a href="https://www.nytimes.com/"">The NY TIMES</a> Data: <a href="https://www.yahoofinance.com/">Yahoo! Finance</a>' fig.add_annotation( showarrow=False, text=note, font=dict(size=5), xref='x domain', x=0.5, yref='y domain', y=-0.5 ) fig.update_layout(template='plotly_white',title_text=' Version Change in Jupyter Hub API by commits',title_x=0.5, xaxis_title='Year-Month', yaxis_title='Number of Commits', yaxis_range=[0, 400],height=760, width=1600, xaxis_range=['2016-06-01', '2021-04-01']) fig.update_traces(textposition="bottom right", showlegend=False,marker_size=10,marker_line_width=2, marker_line_color='black') fig.show() Any help on this would be really helpful. A: Edit: figured how to do it by myself Add another annotation like this, although it gives the text at the upper left corner of the graph,works for me. Just add this code instead of the previous annotation code in the question,the rest of the code remains the same.These dimensions work for the particular alignment mentioned. fig.add_annotation( showarrow=False, text="23 paths over 450 updates", font=dict(size=15), xref='paper', x=0.014, yref='paper', y=1.077 )
Annotation not found outside plotly graph
I have a graph that looks like this: where I want to add some text towards the left bottom side of the plot, something similar to the text at the bottom here, but for me on my left or right side of the graph. I searched on stack and found many solutions, even one specific to the graph shown,however none work for me. My current code is below, where the annotation does not display on my plot. data1= final_api.query("info_title=='JupyterHub'").sort_values(by=['commitDate']) data1['Year-Month'] = pd.to_datetime(data1['Year-Month']) data1['Commit-growth'] = data1['commits'].cumsum() import plotly.graph_objects as go fig = go.Figure() fig = px.scatter(data1, x='Year-Month', y='Commit-growth', color='major_version', text='Commit-growth') fig.add_trace(go.Scatter(mode='lines', x=data1["Year-Month"], y=data1["Commit-growth"], line_color='black', line_width=1, line_shape='hvh', showlegend=False ) ) for _,row in data1.iterrows(): fig.add_annotation( go.layout.Annotation( x=row["Year-Month"], y=row["Commit-growth"], text=row['info_version'], showarrow=False, align='center', yanchor='bottom', yshift=5, textangle=-10) ) note = 'NYSE Trading Days After Announcement<br>Source:<a href="https://www.nytimes.com/"">The NY TIMES</a> Data: <a href="https://www.yahoofinance.com/">Yahoo! Finance</a>' fig.add_annotation( showarrow=False, text=note, font=dict(size=5), xref='x domain', x=0.5, yref='y domain', y=-0.5 ) fig.update_layout(template='plotly_white',title_text=' Version Change in Jupyter Hub API by commits',title_x=0.5, xaxis_title='Year-Month', yaxis_title='Number of Commits', yaxis_range=[0, 400],height=760, width=1600, xaxis_range=['2016-06-01', '2021-04-01']) fig.update_traces(textposition="bottom right", showlegend=False,marker_size=10,marker_line_width=2, marker_line_color='black') fig.show() Any help on this would be really helpful.
[ "Edit: figured how to do it by myself\nAdd another annotation like this, although it gives the text at the upper left corner of the graph,works for me. Just add this code instead of the previous annotation code in the question,the rest of the code remains the same.These dimensions work for the particular alignment mentioned.\nfig.add_annotation(\n showarrow=False,\n text=\"23 paths over 450 updates\",\n font=dict(size=15),\n xref='paper',\n x=0.014,\n yref='paper',\n y=1.077\n )\n\n" ]
[ 1 ]
[]
[]
[ "plotly", "python" ]
stackoverflow_0074606535_plotly_python.txt
Q: Translating Stata if else statement to python I have this piece of Stata code that I am trying to translate into python. if inlist(nid, 4580, 4250, 165101, 4679, 236205, 419098, 438439, 11240, 317089, 430032, 3716, 164729) { capture confirm variable child_age_year if !_rc { replace child_age_year = 0 } else { gen child_age_year = 0 } } It is supposed to create a age_year variable for surveys that have no child_demographics. What I have now is the translation of the first 2 lines like so: if sum((df['nid'] == i).any() for i in [4580, 4250, 165101, 4679, 236205, 419098, 438439, 11240, 317089, 430032, 3716, 164729]) == 12: How should I finish the statement so that it replicates the original Stata code? A: The code does not make much sense in Stata. What doesn't make sense is that the over-arching command if inlist(nid, 4580, 4250, 165101, 4679, 236205, 419098, 438439, 11240, 317089, 430032, 3716, 164729) can in Stata only apply to the first observation (case, record, row) in the dataset. In other words. it means in practice if inlist(nid[1], 4580, 4250, 165101, 4679, 236205, 419098, 438439, 11240, 317089, 430032, 3716, 164729) as a condition attaching to the rest of the code. The rest of the code says: if such and such a variable exists, overwrite its value with 0 in every observation; otherwise create it with 0 in every observation. What is perhaps more likely is the original programmer was confusing the if command (used here) and the if qualifier. I'd put a prior probability near 1 on this code and whatever it comes with as being not worth translation until it is checked.
Translating Stata if else statement to python
I have this piece of Stata code that I am trying to translate into python. if inlist(nid, 4580, 4250, 165101, 4679, 236205, 419098, 438439, 11240, 317089, 430032, 3716, 164729) { capture confirm variable child_age_year if !_rc { replace child_age_year = 0 } else { gen child_age_year = 0 } } It is supposed to create a age_year variable for surveys that have no child_demographics. What I have now is the translation of the first 2 lines like so: if sum((df['nid'] == i).any() for i in [4580, 4250, 165101, 4679, 236205, 419098, 438439, 11240, 317089, 430032, 3716, 164729]) == 12: How should I finish the statement so that it replicates the original Stata code?
[ "The code does not make much sense in Stata.\nWhat doesn't make sense is that the over-arching command\nif inlist(nid, 4580, 4250, 165101, 4679, 236205, 419098, 438439, 11240, 317089, 430032, 3716, 164729) \n\ncan in Stata only apply to the first observation (case, record, row) in the dataset.\nIn other words. it means in practice\nif inlist(nid[1], 4580, 4250, 165101, 4679, 236205, 419098, 438439, 11240, 317089, 430032, 3716, 164729) \n\nas a condition attaching to the rest of the code.\nThe rest of the code says: if such and such a variable exists, overwrite its value with 0 in every observation; otherwise create it with 0 in every observation.\nWhat is perhaps more likely is the original programmer was confusing the if command (used here) and the if qualifier.\nI'd put a prior probability near 1 on this code and whatever it comes with as being not worth translation until it is checked.\n" ]
[ 1 ]
[]
[]
[ "python", "python_3.x", "stata" ]
stackoverflow_0074618708_python_python_3.x_stata.txt
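For anyone who still wants the Python side of the translation, a rough pandas sketch of the literal behaviour described above. The DataFrame here is a toy stand-in and the extra column name is made up; only the nid list and child_age_year come from the original code.
import pandas as pd

# Toy stand-in for the survey data; in practice df would come from the survey files.
df = pd.DataFrame({"nid": [4580, 4580, 9999], "child_age_month": [14, 30, 7]})

nids = {4580, 4250, 165101, 4679, 236205, 419098, 438439,
        11240, 317089, 430032, 3716, 164729}

# Stata's `if inlist(nid, ...)` used as a command only evaluates the first
# observation, so the literal translation checks just the first row.
if df["nid"].iloc[0] in nids:
    # `capture confirm variable` plus replace/gen boils down to: create the
    # column if it is missing, otherwise overwrite it; every row becomes 0.
    df["child_age_year"] = 0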
Q: Number of words in text you can fully type using this keyboard There is such a task with Leetcode. Everything works for me when I press RUN, but when I submit, it gives an error: text = "a b c d e" brokenLetters = "abcde" Output : 1 Expected: 0 def canBeTypedWords(self, text, brokenLetters): for i in brokenLetters: cnt = 0 text = text.split() s1 = text[0] s2 = text[1] if i in s1 and i in s2: return 0 else: cnt += 1 return cnt Can you please assist what I missed here? Everything work exclude separate letters condition in a text. A: So consider logically what you have to do, then write that algorithmically. Logically, you have a list of words, a list of broken letters, and you need to return the count of words that have none of those broken letters in them. "None of those broken letters in them" is the important bit -- if even one broken letter is in the word, it's no good. def count_words(broken_letters, word_list) -> int: words = word_list.split() # split on spaces broken_letters = set(broken_letters) # we'll be doing membership checks # on this kind of a lot, so changing # it to a set is more performant count = 0 for word in words: for letter in word: if letter in broken_letters: # this word doesn't work, so break out of the # "for letter in word" loop break else: # a for..else block is only entered if execution # falls off the bottom naturally, so in this case # the word works! count += 1 return count This can, of course, be written much more concisely and (one might argue) idiomatically, but it is less obvious to a novice how this code works. As exercise to the reader: see if you can understand how this code works and how you might modify it if the exercise was, instead, giving you all the letters that work rather than the letters that are broken. def count_words(broken_letters, word_list) -> int: words = word_list.split() broken_letters = set(broken_letters) return sum((1 for word in words if all(lett not in broken_letters for lett in word)))
Number of words in text you can fully type using this keyboard
There is such a task with Leetcode. Everything works for me when I press RUN, but when I submit, it gives an error:
text = "a b c d e"
brokenLetters = "abcde"

Output : 1
Expected: 0

def canBeTypedWords(self, text, brokenLetters):
        for i in brokenLetters:
            cnt = 0
            text = text.split()
            s1 = text[0]
            s2 = text[1]
            if i in s1 and i in s2:
                return 0
            else:
                cnt += 1
                return cnt

Can you please assist with what I missed here? Everything works except the separate-letters condition in the text.
[ "So consider logically what you have to do, then write that algorithmically.\nLogically, you have a list of words, a list of broken letters, and you need to return the count of words that have none of those broken letters in them.\n\"None of those broken letters in them\" is the important bit -- if even one broken letter is in the word, it's no good.\ndef count_words(broken_letters, word_list) -> int:\n words = word_list.split() # split on spaces\n broken_letters = set(broken_letters) # we'll be doing membership checks\n # on this kind of a lot, so changing\n # it to a set is more performant\n count = 0\n for word in words:\n for letter in word:\n if letter in broken_letters:\n # this word doesn't work, so break out of the\n # \"for letter in word\" loop\n break\n else:\n # a for..else block is only entered if execution\n # falls off the bottom naturally, so in this case\n # the word works!\n count += 1\n\n return count\n\nThis can, of course, be written much more concisely and (one might argue) idiomatically, but it is less obvious to a novice how this code works. As exercise to the reader: see if you can understand how this code works and how you might modify it if the exercise was, instead, giving you all the letters that work rather than the letters that are broken.\ndef count_words(broken_letters, word_list) -> int:\n words = word_list.split()\n broken_letters = set(broken_letters)\n return sum((1 for word in words if all(lett not in broken_letters for lett in word)))\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074619248_python.txt
Q: How do i differentiate between two widgets on the same event I want to have some input boxes which contain an text for the user to know what is required to enter. This text should disappear when the user clicks on it. How do i know which box the user clicked? class window(): def handleEvent(self,event): self.text.set("") def handleEvent2(self,event): a = self.efeld.get() print(a) def page0(self): self.text = tk.StringVar(None) self.text.set("Enter text here") self.efeld = ttk.Entry(fenster, textvariable=self.text) self.efeld.place(x=5, y=20) self.efeld.bind("<Button-1>",self.handleEvent) self.efeld.bind("<Return>",self.handleEvent2) self.text2 = tk.StringVar(None) self.text2.set("Enter text 2 here") self.efeld2 = ttk.Entry(fenster, textvariable=self.text2) self.efeld2.place(x=5, y=50) self.efeld2.bind("<Button-1>",self.handleEvent) self.efeld2.bind("<Return>",self.handleEvent2) fenster = tk.Tk() fenster.title("Test") fenster.geometry("500x350") fenster.resizable(False,False) window().page0() fenster.mainloop() A: You can use the widget attribute of the event object. It is a reference to the widget that got the event. def handleEvent2(self,event): a = event.widget.get() print(a) A: You can use the event.widget attribute to get a reference to the widget that triggered the event. A: Since you are using tkinker, event.widget will contain what you want. Sorry for short reply, navigating from mobile.
How do i differentiate between two widgets on the same event
I want to have some input boxes which contain an text for the user to know what is required to enter. This text should disappear when the user clicks on it. How do i know which box the user clicked? class window(): def handleEvent(self,event): self.text.set("") def handleEvent2(self,event): a = self.efeld.get() print(a) def page0(self): self.text = tk.StringVar(None) self.text.set("Enter text here") self.efeld = ttk.Entry(fenster, textvariable=self.text) self.efeld.place(x=5, y=20) self.efeld.bind("<Button-1>",self.handleEvent) self.efeld.bind("<Return>",self.handleEvent2) self.text2 = tk.StringVar(None) self.text2.set("Enter text 2 here") self.efeld2 = ttk.Entry(fenster, textvariable=self.text2) self.efeld2.place(x=5, y=50) self.efeld2.bind("<Button-1>",self.handleEvent) self.efeld2.bind("<Return>",self.handleEvent2) fenster = tk.Tk() fenster.title("Test") fenster.geometry("500x350") fenster.resizable(False,False) window().page0() fenster.mainloop()
[ "You can use the widget attribute of the event object. It is a reference to the widget that got the event.\ndef handleEvent2(self,event):\n a = event.widget.get()\n print(a)\n\n", "You can use the event.widget attribute to get a reference to the widget that triggered the event.\n", "Since you are using tkinker,\nevent.widget will contain what you want.\nSorry for short reply, navigating from mobile.\n" ]
[ 2, 1, 0 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074619193_python_tkinter.txt
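To tie the answers above back to the placeholder use case in the question, a small self-contained sketch. It is deliberately simplified (a click clears whatever is currently in the entry, not just the placeholder), and the placeholder strings are only examples.
import tkinter as tk
from tkinter import ttk

root = tk.Tk()

def clear_placeholder(event):
    # event.widget is whichever Entry was clicked, so one callback can
    # serve every entry without tracking them individually.
    event.widget.delete(0, tk.END)

def show_value(event):
    print(event.widget.get())

for placeholder in ("Enter text here", "Enter text 2 here"):
    entry = ttk.Entry(root)
    entry.insert(0, placeholder)   # illustrative placeholder text
    entry.pack(padx=5, pady=5)
    entry.bind("<Button-1>", clear_placeholder)
    entry.bind("<Return>", show_value)

root.mainloop()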
Q: how can i do vector-matrix multiplication in python without numpy? Ok, so i know this question has been asked several times before but they all had different errors so i am a newbie in python and we were given a Algebra practical with python for vector-matrix multpilication and this was my code but i am getting a specific error everytime which is list index out of range line 20 in d=m[i][j]*v[j] i don't really understand what is the cause of this error! please help Heres my code: r=int(input("enter rows")) c=int(input("enter columns")) m=[] for i in range(r): m.append([]) for j in range(c): e=int(input("enter element")) m[i].append(e) for i in range(r): print(m[i]) vm=input("enter vector [ vector matrix] \n v :") v=[] v=[int(x) for x in vm.split()] print('vector v ',v) print('Vector-Matrix multiplication:') for i in range (c): re=0 for j in range(len(v)): d=m[i][j]*v[j] re+=d print('[',re,']') um=input("enter vector[ matrix -vector ]\n u :") u=[] u=[int(x)for x in um.split()] print('vector u',u) print("matrix vector multiplication") for i in range(r): res=0 for j in range(len(u)): c=m[i][j]*u[j] res+=c print('[',res,']') A: i is indexing the row in d=m[i][j]*v[j], but your loop is over the number of columns. You could eliminate errors like that and make the code cleaner by looping directly over the rows. Instead of for i in range (c): you'd have for row in m: and d=row[j]*v[j]. You should also take advantage of the sum function to do the dot products. Combining these you end up replacing for i in range (c): re=0 for j in range(len(v)): d=m[i][j]*v[j] re+=d print('[',re,']') with for row in m: re=sum(row[j] * v[j] for j in range(v)] print('[',re,']') If you really need an incrementing index then enumerate is the function that can provide it, for example: for i, row in enumerate(m): re=sum(row[j] * v[j] for j in range(v)] print(f'{i}. [{re}]')
how can i do vector-matrix multiplication in python without numpy?
Ok, so i know this question has been asked several times before but they all had different errors so i am a newbie in python and we were given a Algebra practical with python for vector-matrix multpilication and this was my code but i am getting a specific error everytime which is list index out of range line 20 in d=m[i][j]*v[j] i don't really understand what is the cause of this error! please help Heres my code: r=int(input("enter rows")) c=int(input("enter columns")) m=[] for i in range(r): m.append([]) for j in range(c): e=int(input("enter element")) m[i].append(e) for i in range(r): print(m[i]) vm=input("enter vector [ vector matrix] \n v :") v=[] v=[int(x) for x in vm.split()] print('vector v ',v) print('Vector-Matrix multiplication:') for i in range (c): re=0 for j in range(len(v)): d=m[i][j]*v[j] re+=d print('[',re,']') um=input("enter vector[ matrix -vector ]\n u :") u=[] u=[int(x)for x in um.split()] print('vector u',u) print("matrix vector multiplication") for i in range(r): res=0 for j in range(len(u)): c=m[i][j]*u[j] res+=c print('[',res,']')
[ "i is indexing the row in d=m[i][j]*v[j], but your loop is over the number of columns. You could eliminate errors like that and make the code cleaner by looping directly over the rows. Instead of for i in range (c): you'd have for row in m: and d=row[j]*v[j]. You should also take advantage of the sum function to do the dot products. Combining these you end up replacing\nfor i in range (c):\n re=0\n for j in range(len(v)):\n d=m[i][j]*v[j]\n re+=d\n print('[',re,']')\n\nwith\nfor row in m:\n re=sum(row[j] * v[j] for j in range(v)]\n print('[',re,']')\n\nIf you really need an incrementing index then enumerate is the function that can provide it, for example:\nfor i, row in enumerate(m):\n re=sum(row[j] * v[j] for j in range(v)]\n print(f'{i}. [{re}]')\n\n" ]
[ 0 ]
[]
[]
[ "linear_algebra", "python" ]
stackoverflow_0074618105_linear_algebra_python.txt
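For completeness, a minimal runnable version of the row-wise approach sketched in the answer above, with a made-up matrix and vector; it assumes every row of the matrix has the same length as the vector.
def mat_vec(m, v):
    # One dot product per row: multiply matching entries and add them up.
    return [sum(row[j] * v[j] for j in range(len(v))) for row in m]

# Illustrative 2x3 matrix and length-3 vector.
m = [[1, 2, 3],
     [4, 5, 6]]
v = [7, 8, 9]

print(mat_vec(m, v))  # [50, 122]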
Q: REGEX : Extracting Table information from connection strings I am attempting to extract Schema and Table information from connection string data. The Schema and Table information is in the format "Schema.Table" (e.g FROM EDWP_D2PM.SN_INC_RPTG_SCRUBBED in string below) . Multiple Schema and Tables can exist in the connection strings, and they always follow the words FROM or JOIN (Including INNER JOIN, LEFT JOIN etc.). I only want to extract Schema and Tables for Connections from a specific database, in the attached example this is DB2. "let Source = DB2.Database(""69.69.69.69"", ""bcudb"", [HierarchicalNavigation=true, Implementation=""Microsoft"", Query=""SELECT i.ASSIGNMENT_GROUP, i.BUSINESS_SERVICE, i.CATEGORY, i.CAUSED_BY, i.CLOSE_CODE, i.CLOSED_ON, i.COMPANY, i.CONTACT_TYPE, i.DESCRIPTION, i.NUMBER, i.PARENT_INCIDENT, i.PRIORITY, i.RESOLVED_ON, i.SHORT_DESCRIPTION, i.CREATED_ON, i.CAUSE_CODE, i.CLOSED, i.CREATED, i.RESOLVED, i.RESOLUTION_MET, p.problem_id FROM EDWP_D2PM.SN_INC_RPTG_SCRUBBED i LEFT OUTER JOIN EDWP_D2PM.SN_INCIDENTS_CUST_RPTG p on p.number = i.number WHERE i.PRIORITY INLEFT OUTER JOIN EDWP_D2PM.SN_INCIDENTS_CUST_RPTG ('1 - Critical', '2 - High') AND TO_CHAR(i.created_on,'YYYY-MM') > (select to_char((CURRENT DATE - 12 MONTHS),'YYYY-MM') from SYSIBM.SYSDUMMY1) AND (i.CATEGORY <> 'Alert' OR (i.CATEGORY IS NULL)) AND i.PARENT_INCIDENT IS NULL AND i.EXCLUSIONS <> 'R' AND i.CLOSE_CODE <> 'Duplicate - No Action Taken' with ur""]), RIGHT JOIN EDWP.TEMP blaha blah #""Changed Type"" = Table.TransformColumnTypes(Source,{{""RESOLVED_ON"", type datetime}}) in #""Changed Type""" I have a REGEX expression that is correctly returning the 4 Schemas.Tables in the sample text as expected (link to regex 101 with working example : (?:\s+JOIN\s+)(\w+\.\w+)|(?:\s+FROM\s+)(\w+\.\w+) I want to improve the REGEX so that that I only get Schemas & Tables back for strings that begin with the text "DB2.Database" . How can I do this? I have attempted adding in the prefix : (?:DB2.Database)(?:\s|\S)* but that stops the 4 Schema.Tables from being returned. Can anyone suggest a fix? If you do provide an answer, an explaination of the REGEX logic would be appreciated. Once I have the REGEX working I will run it in Python using the re module. A: You can try to put ? after the prefix which will cause the expression not to match as much of the text as possible (?:DB2.Database)(?:\s|\S)*? Then the following (?:DB2.Database)(((?:\s|\S)*?(?:\s+(JOIN|FROM)\s+)(\w+\.\w+))+) would almost work but re module doesn't support repeated captures As a workaround, running the original regex (?:\s+(FROM|JOIN)\s+)(\w+\.\w+) with the first matched group should give you the desired result. 
A: Simple implemention of the solution proposed by @radof import re import pandas text ="""let Source = DB2.Database(""69.69.69.69"", ""bcudb"", [HierarchicalNavigation=true, Implementation=""Microsoft"", Query=""SELECT i.ASSIGNMENT_GROUP, i.BUSINESS_SERVICE, i.CATEGORY, i.CAUSED_BY, i.CLOSE_CODE, i.CLOSED_ON, i.COMPANY, i.CONTACT_TYPE, i.DESCRIPTION, i.NUMBER, i.PARENT_INCIDENT, i.PRIORITY, i.RESOLVED_ON, i.SHORT_DESCRIPTION, i.CREATED_ON, i.CAUSE_CODE, i.CLOSED, i.CREATED, i.RESOLVED, i.RESOLUTION_MET, p.problem_id FROM EDWP_D2PM.SN_INC_RPTG_SCRUBBED i LEFT OUTER JOIN EDWP_D2PM.SN_INCIDENTS_CUST_RPTG p on p.number = i.number WHERE i.PRIORITY INLEFT OUTER JOIN EDWP_D2PM.SN_INCIDENTS_CUST_RPTG ('1 - Critical', '2 - High') AND TO_CHAR(i.created_on,'YYYY-MM') > (select to_char((CURRENT DATE - 12 MONTHS),'YYYY-MM') from SYSIBM.SYSDUMMY1) AND (i.CATEGORY <> 'Alert' OR (i.CATEGORY IS NULL)) AND i.PARENT_INCIDENT IS NULL AND i.EXCLUSIONS <> 'R' AND i.CLOSE_CODE <> 'Duplicate - No Action Taken' with ur""]), RIGHT JOIN EDWP.TEMP blaha blah #""Changed Type"" = Table.TransformColumnTypes(Source,{{""RESOLVED_ON"", type datetime #filler filler filler filler }}) in #""Changed Type""" DB2_pattern = re.compile(r'(DB2.Database)(?:\s|\S)*', re.IGNORECASE) Schema_Table = re.compile(r'(?:\s+JOIN\s+)(\w+\.\w+)|(?:\s+FROM\s+)(\w+\.\w+)', re.IGNORECASE) DB2_matches = DB2_pattern.finditer(text) for constring in DB2_matches: db2=constring.group() #print(db2) matches = Schema_Table.finditer(db2) for i in matches: tables=i.group() print(tables)
REGEX : Extracting Table information from connection strings
I am attempting to extract Schema and Table information from connection string data. The Schema and Table information is in the format "Schema.Table" (e.g FROM EDWP_D2PM.SN_INC_RPTG_SCRUBBED in string below) . Multiple Schema and Tables can exist in the connection strings, and they always follow the words FROM or JOIN (Including INNER JOIN, LEFT JOIN etc.). I only want to extract Schema and Tables for Connections from a specific database, in the attached example this is DB2. "let Source = DB2.Database(""69.69.69.69"", ""bcudb"", [HierarchicalNavigation=true, Implementation=""Microsoft"", Query=""SELECT i.ASSIGNMENT_GROUP, i.BUSINESS_SERVICE, i.CATEGORY, i.CAUSED_BY, i.CLOSE_CODE, i.CLOSED_ON, i.COMPANY, i.CONTACT_TYPE, i.DESCRIPTION, i.NUMBER, i.PARENT_INCIDENT, i.PRIORITY, i.RESOLVED_ON, i.SHORT_DESCRIPTION, i.CREATED_ON, i.CAUSE_CODE, i.CLOSED, i.CREATED, i.RESOLVED, i.RESOLUTION_MET, p.problem_id FROM EDWP_D2PM.SN_INC_RPTG_SCRUBBED i LEFT OUTER JOIN EDWP_D2PM.SN_INCIDENTS_CUST_RPTG p on p.number = i.number WHERE i.PRIORITY INLEFT OUTER JOIN EDWP_D2PM.SN_INCIDENTS_CUST_RPTG ('1 - Critical', '2 - High') AND TO_CHAR(i.created_on,'YYYY-MM') > (select to_char((CURRENT DATE - 12 MONTHS),'YYYY-MM') from SYSIBM.SYSDUMMY1) AND (i.CATEGORY <> 'Alert' OR (i.CATEGORY IS NULL)) AND i.PARENT_INCIDENT IS NULL AND i.EXCLUSIONS <> 'R' AND i.CLOSE_CODE <> 'Duplicate - No Action Taken' with ur""]), RIGHT JOIN EDWP.TEMP blaha blah #""Changed Type"" = Table.TransformColumnTypes(Source,{{""RESOLVED_ON"", type datetime}}) in #""Changed Type""" I have a REGEX expression that is correctly returning the 4 Schemas.Tables in the sample text as expected (link to regex 101 with working example : (?:\s+JOIN\s+)(\w+\.\w+)|(?:\s+FROM\s+)(\w+\.\w+) I want to improve the REGEX so that that I only get Schemas & Tables back for strings that begin with the text "DB2.Database" . How can I do this? I have attempted adding in the prefix : (?:DB2.Database)(?:\s|\S)* but that stops the 4 Schema.Tables from being returned. Can anyone suggest a fix? If you do provide an answer, an explaination of the REGEX logic would be appreciated. Once I have the REGEX working I will run it in Python using the re module.
[ "You can try to put ? after the prefix which will cause the expression not to match as much of the text as possible\n(?:DB2.Database)(?:\\s|\\S)*?\n\nThen the following\n(?:DB2.Database)(((?:\\s|\\S)*?(?:\\s+(JOIN|FROM)\\s+)(\\w+\\.\\w+))+)\n\nwould almost work but re module doesn't support repeated captures\nAs a workaround, running the original regex (?:\\s+(FROM|JOIN)\\s+)(\\w+\\.\\w+) with the first matched group should give you the desired result.\n", "Simple implemention of the solution proposed by @radof\nimport re\nimport pandas\n\ntext =\"\"\"let\n Source = DB2.Database(\"\"69.69.69.69\"\", \"\"bcudb\"\", [HierarchicalNavigation=true, Implementation=\"\"Microsoft\"\", Query=\"\"SELECT i.ASSIGNMENT_GROUP, i.BUSINESS_SERVICE, i.CATEGORY, i.CAUSED_BY, i.CLOSE_CODE, i.CLOSED_ON, i.COMPANY, i.CONTACT_TYPE, i.DESCRIPTION, i.NUMBER, i.PARENT_INCIDENT, i.PRIORITY, i.RESOLVED_ON, i.SHORT_DESCRIPTION, i.CREATED_ON, i.CAUSE_CODE, i.CLOSED, i.CREATED, i.RESOLVED, i.RESOLUTION_MET, p.problem_id FROM EDWP_D2PM.SN_INC_RPTG_SCRUBBED i LEFT OUTER JOIN EDWP_D2PM.SN_INCIDENTS_CUST_RPTG p on p.number = i.number WHERE i.PRIORITY INLEFT OUTER JOIN EDWP_D2PM.SN_INCIDENTS_CUST_RPTG ('1 - Critical', '2 - High') AND TO_CHAR(i.created_on,'YYYY-MM') > (select to_char((CURRENT DATE - 12 MONTHS),'YYYY-MM') from SYSIBM.SYSDUMMY1) AND (i.CATEGORY <> 'Alert' OR (i.CATEGORY IS NULL)) AND i.PARENT_INCIDENT IS NULL AND i.EXCLUSIONS <> 'R' AND i.CLOSE_CODE <> 'Duplicate - No Action Taken' with ur\"\"]), RIGHT JOIN EDWP.TEMP blaha blah\n #\"\"Changed Type\"\" = Table.TransformColumnTypes(Source,{{\"\"RESOLVED_ON\"\", type datetime\n #filler \n filler\n filler\n filler\n }})\nin\n #\"\"Changed Type\"\"\"\n\nDB2_pattern = re.compile(r'(DB2.Database)(?:\\s|\\S)*', re.IGNORECASE)\nSchema_Table = re.compile(r'(?:\\s+JOIN\\s+)(\\w+\\.\\w+)|(?:\\s+FROM\\s+)(\\w+\\.\\w+)', re.IGNORECASE)\n\nDB2_matches = DB2_pattern.finditer(text)\nfor constring in DB2_matches:\n db2=constring.group()\n #print(db2)\n matches = Schema_Table.finditer(db2)\n for i in matches:\n tables=i.group()\n print(tables)\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "regex", "regex_group" ]
stackoverflow_0074608287_python_regex_regex_group.txt
Q: My "list index out of range" problem in PYTHON total_task=float(input("Enter the assigned total task length(in half-hour(s)):")) total_len=total_task*2 leng=int(total_len) payments=[] hours=[] for i in range(leng): print("Enter the payment value( in TL) for task portion ID ",(i+1)," having length ",((i+1)*0.5)," hour(s):") portionLen=int(input()) payments.append(portionLen) hours.append(portionLen) paymentsTable=[] for i in range(leng): paymentsRow=[] for j in range(leng): paymentsRow.append(0) paymentsTable.append(paymentsRow) for i in range(leng): paymentsTable[i][i]=payments[i] for i in range(leng): for j in range(1,leng+1): maxPayment=0 for k in range(j): pay=paymentsTable[i][k]+paymentsTable[k+1][j] if(pay>maxPayment): maxPayment=pay paymentsTable[i][j]=maxPayment idTable=[] for i in range(leng): idTableRow=[] for j in range(leng): idTableRow.append(0) idTable.append(idTableRow) for i in range(leng): idTable[i][i]=i+1 for i in range(leng): for j in range(1,leng+1): maxPayment=0 for k in range(j): pay = paymentsTable[i][k] + paymentsTable[k + 1][j] if (pay > maxPayment): maxPayment = pay paymentsTable[i][j] = maxPayment for i in range(leng): for j in range(1,leng+1): maxPayment=0 for k in range(j): pay = paymentsTable[i][k] + paymentsTable[k + 1][j] if (pay > maxPayment): maxPayment = pay idTable[i][j]=k+1 My Sample input Enter the assigned total task length(in half-hour(s)):**2** Enter the payment value( in TL) for task portion ID 1 having length 0.5 hour(s): **100** Enter the payment value( in TL) for task portion ID 2 having length 1.0 hour(s): **400** Enter the payment value( in TL) for task portion ID 3 having length 1.5 hour(s): **500** Enter the payment value( in TL) for task portion ID 4 having length 2.0 hour(s): **600** and my sample error line 23, in <module> pay=paymentsTable[i][k]+paymentsTable[k+1][j] IndexError: list index out of range A: The error message you attached is pretty clear: in line 23, either paymentsTable[i][k] or paymentsTable[k+1][j] has an index out of range. paymentsTable has exactly leng elements, so their valid indices go from 0 to leng-1. Every element paymentsTable[i] is also a list with exactly leng elements, so their valid indices also go from 0 to leng-1. Now, i ranges inside range(leng), but j is ranging over (range(1, leng + 1)), which means the first value will be 1 and the last will be leng. Hence, in the last iteration of for j in range(1,leng+1), j is leng and the last valid index of paymentsTable[i] was leng-1, so you get the "IndexError: list index out of range" Moreover, k+1 also gets the value leng in the last iteration of that line, it is also out of range, considering that the last valid index of paymentsTable is also leng. A: I haven't tried this yet but I think the problem is with the row assignment. paymentsRow and idTableRow must be created outside the for a loop. Your implementation means it loops through and appends, but after appending, it makes the list empty again. So basically in your code, paymentsRow and idTableRow have only one element in them. [UPDATE] I integrated Rodrigo's solution with mine and no errors appeared. 
total_task=float(input("Enter the assigned total task length(in half-hour(s)):")) total_len=total_task*2 leng=int(total_len) payments=[] hours=[] for i in range(leng): print("Enter the payment value( in TL) for task portion ID ",(i+1)," having length ",((i+1)*0.5)," hour(s):") portionLen=int(input()) payments.append(portionLen) hours.append(portionLen) paymentsTable=[] paymentsRow=[] for i in range(leng): for j in range(leng): paymentsRow.append(0) paymentsTable.append(paymentsRow) for i in range(leng): paymentsTable[i][i]=payments[i] for i in range(leng): for j in range(1,leng): maxPayment=0 for k in range(j): pay=paymentsTable[i][k]+paymentsTable[k+1][j] if(pay>maxPayment): maxPayment=pay paymentsTable[i][j]=maxPayment idTable=[] idTableRow=[] for i in range(leng): for j in range(leng): idTableRow.append(0) idTable.append(idTableRow) for i in range(leng): idTable[i][i]=i+1 for i in range(leng): for j in range(1,leng): maxPayment=0 for k in range(j): pay = paymentsTable[i][k] + paymentsTable[k + 1][j] if (pay > maxPayment): maxPayment = pay paymentsTable[i][j] = maxPayment for i in range(leng): for j in range(1,leng): maxPayment=0 for k in range(j): pay = paymentsTable[i][k] + paymentsTable[k + 1][j] if (pay > maxPayment): maxPayment = pay idTable[i][j]=k+1
My "list index out of range" problem in PYTHON
total_task=float(input("Enter the assigned total task length(in half-hour(s)):")) total_len=total_task*2 leng=int(total_len) payments=[] hours=[] for i in range(leng): print("Enter the payment value( in TL) for task portion ID ",(i+1)," having length ",((i+1)*0.5)," hour(s):") portionLen=int(input()) payments.append(portionLen) hours.append(portionLen) paymentsTable=[] for i in range(leng): paymentsRow=[] for j in range(leng): paymentsRow.append(0) paymentsTable.append(paymentsRow) for i in range(leng): paymentsTable[i][i]=payments[i] for i in range(leng): for j in range(1,leng+1): maxPayment=0 for k in range(j): pay=paymentsTable[i][k]+paymentsTable[k+1][j] if(pay>maxPayment): maxPayment=pay paymentsTable[i][j]=maxPayment idTable=[] for i in range(leng): idTableRow=[] for j in range(leng): idTableRow.append(0) idTable.append(idTableRow) for i in range(leng): idTable[i][i]=i+1 for i in range(leng): for j in range(1,leng+1): maxPayment=0 for k in range(j): pay = paymentsTable[i][k] + paymentsTable[k + 1][j] if (pay > maxPayment): maxPayment = pay paymentsTable[i][j] = maxPayment for i in range(leng): for j in range(1,leng+1): maxPayment=0 for k in range(j): pay = paymentsTable[i][k] + paymentsTable[k + 1][j] if (pay > maxPayment): maxPayment = pay idTable[i][j]=k+1 My Sample input Enter the assigned total task length(in half-hour(s)):**2** Enter the payment value( in TL) for task portion ID 1 having length 0.5 hour(s): **100** Enter the payment value( in TL) for task portion ID 2 having length 1.0 hour(s): **400** Enter the payment value( in TL) for task portion ID 3 having length 1.5 hour(s): **500** Enter the payment value( in TL) for task portion ID 4 having length 2.0 hour(s): **600** and my sample error line 23, in <module> pay=paymentsTable[i][k]+paymentsTable[k+1][j] IndexError: list index out of range
[ "The error message you attached is pretty clear:\n\nin line 23, either paymentsTable[i][k] or paymentsTable[k+1][j]\nhas an index out of range.\n\npaymentsTable has exactly leng elements, so their valid indices go from 0 to leng-1.\nEvery element paymentsTable[i] is also a list with exactly leng elements, so their valid indices also go from 0 to leng-1.\nNow, i ranges inside range(leng), but j is ranging over (range(1, leng + 1)), which means the first value will be 1 and the last will be leng.\nHence, in the last iteration of for j in range(1,leng+1), j is leng and the last valid index of paymentsTable[i] was leng-1, so you get the \"IndexError: list index out of range\"\nMoreover, k+1 also gets the value leng in the last iteration of that line, it is also out of range, considering that the last valid index of paymentsTable is also leng.\n", "I haven't tried this yet but I think the problem is with the row assignment. paymentsRow and idTableRow must be created outside the for a loop. Your implementation means it loops through and appends, but after appending, it makes the list empty again. So basically in your code, paymentsRow and idTableRow have only one element in them.\n[UPDATE]\nI integrated Rodrigo's solution with mine and no errors appeared.\ntotal_task=float(input(\"Enter the assigned total task length(in half-hour(s)):\"))\ntotal_len=total_task*2\nleng=int(total_len)\npayments=[]\nhours=[]\nfor i in range(leng):\n print(\"Enter the payment value( in TL) for task portion ID \",(i+1),\" having length \",((i+1)*0.5),\" hour(s):\")\n portionLen=int(input())\n payments.append(portionLen)\n hours.append(portionLen)\npaymentsTable=[]\npaymentsRow=[]\nfor i in range(leng):\n for j in range(leng):\n paymentsRow.append(0)\n paymentsTable.append(paymentsRow)\nfor i in range(leng):\n paymentsTable[i][i]=payments[i]\nfor i in range(leng):\n for j in range(1,leng):\n maxPayment=0\n for k in range(j):\n pay=paymentsTable[i][k]+paymentsTable[k+1][j]\n if(pay>maxPayment):\n maxPayment=pay\n paymentsTable[i][j]=maxPayment\nidTable=[]\nidTableRow=[]\n\nfor i in range(leng):\n for j in range(leng):\n idTableRow.append(0)\n idTable.append(idTableRow)\nfor i in range(leng):\n idTable[i][i]=i+1\nfor i in range(leng):\n for j in range(1,leng):\n maxPayment=0\n for k in range(j):\n pay = paymentsTable[i][k] + paymentsTable[k + 1][j]\n if (pay > maxPayment):\n maxPayment = pay\n paymentsTable[i][j] = maxPayment\nfor i in range(leng):\n for j in range(1,leng):\n maxPayment=0\n for k in range(j):\n pay = paymentsTable[i][k] + paymentsTable[k + 1][j]\n if (pay > maxPayment):\n maxPayment = pay\n idTable[i][j]=k+1\n\n" ]
[ 2, 0 ]
[]
[]
[ "arrays", "multidimensional_array", "python" ]
stackoverflow_0074619007_arrays_multidimensional_array_python.txt
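A compact sketch of the two indexing points the answers above touch on: building a leng-by-leng table from independent row lists, and keeping every loop index below leng. The values are illustrative only.
leng = 4

# A fresh inner list per row: reusing one list object would make every row
# alias the same data, so writing to one row would change all of them.
table = [[0] * leng for _ in range(leng)]

# Valid indices run from 0 to leng - 1.  A bound like range(1, leng + 1)
# walks one step past the end, and any table[k + 1][j] lookup needs k to
# stop at leng - 2.
for j in range(1, leng):      # j stays at most leng - 1
    for k in range(j):        # k + 1 is at most j, also in range
        table[k + 1][j] += 1

print(table)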
Q: how to use script to conditionally modify values in CSV file The CSV file contains name and Values i want any value more than 1000 converted to 1000 in same file or in differentt file. mostly using shell script. what is the best way to it? for example the values are as follows Name Value ABV 1200 CCD 1000 CAD 500 DDD 1800 and i want it as Name Value ABV 1000 CCD 1000 CAD 500 DDD 1000 i tried awk function but it didnt work any other alteernatives A: Using awk: $ awk '{print $1,($2+0>1000?1000:$2)}' file Output: Name Value ABV 1000 CCD 1000 CAD 500 DDD 1000 A: A few issues with OP's current code: need to skip processing of the first line -gt is invalid in awk ... use > instead -F, says to use the comma as the input field delimiter but the sample input file does not contain commas; for now I'm going to assume the sample input file is correct (ie, there are no commas) Updating OP's current code to address these issues: awk 'NR==1 {print $0; next} {if ($2 > 1000) {$2=1000} {print $0}}' book1.csv Or an equivalent: awk 'NR>1 && ($2>1000) {$2=1000} 1' book1.csv Both of these generate: Name Value ABV 1000 CCD 1000 CAD 500 DDD 1000
how to use script to conditionally modify values in CSV file
The CSV file contains Name and Value columns. I want any value more than 1000 converted to 1000, in the same file or in a different file, mostly using a shell script. What is the best way to do it?
for example the values are as follows
Name Value
ABV 1200
CCD 1000
CAD 500
DDD 1800

and I want it as
Name Value
ABV 1000
CCD 1000
CAD 500
DDD 1000

I tried the awk function but it didn't work. Any other alternatives?
[ "Using awk:\n$ awk '{print $1,($2+0>1000?1000:$2)}' file\n\nOutput:\nName Value\nABV 1000\nCCD 1000\nCAD 500\nDDD 1000\n\n", "A few issues with OP's current code:\n\nneed to skip processing of the first line\n-gt is invalid in awk ... use > instead\n-F, says to use the comma as the input field delimiter but the sample input file does not contain commas; for now I'm going to assume the sample input file is correct (ie, there are no commas)\n\nUpdating OP's current code to address these issues:\nawk 'NR==1 {print $0; next} {if ($2 > 1000) {$2=1000} {print $0}}' book1.csv\n\nOr an equivalent:\nawk 'NR>1 && ($2>1000) {$2=1000} 1' book1.csv\n\nBoth of these generate:\nName Value\nABV 1000\nCCD 1000\nCAD 500\nDDD 1000\n\n" ]
[ 2, 1 ]
[]
[]
[ "awk", "bash", "python", "shell" ]
stackoverflow_0074619187_awk_bash_python_shell.txt
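Since the question is also tagged python, a pandas version of the same capping is another option. The input file name and the whitespace separator are assumptions based on the sample data shown.
import pandas as pd

# Assumes whitespace-separated columns named Name and Value, as in the sample.
df = pd.read_csv("book1.csv", sep=r"\s+")

# Cap every value above 1000 at 1000.
df["Value"] = df["Value"].clip(upper=1000)

# Write the capped table to a new file; the output name is a placeholder.
df.to_csv("book1_capped.csv", sep=" ", index=False)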
Q: group a list of paths based on directory name and file name I want to sort my paths, based on directory name and then on file name. They are separated by first different folder ("TENT1" and "TENT2"). Notice that some files are inside "Job1" and "Job2" folders but some are not but need them sorted as well. Thank you! paths = [ '/var/lib/conc/states/TENT1/Job1/metr-ok_2022_11_28', '/var/lib/conc/states/TENT1/Job1/metr-ok_2022_11_29', '/var/lib/conc/states/TENT1/Job1/weig-ok_2022_11_28', '/var/lib/conc/states/TENT1/Job1/weig-ok_2022_11_29', '/var/lib/conc/states/TENT1/down-ok_2022_11_27', '/var/lib/conc/states/TENT1/down-ok_2022_11_28', '/var/lib/conc/states/TENT1/serv-ok_2022_11_28', '/var/lib/conc/states/TENT1/serv-ok_2022_11_29', '/var/lib/conc/states/TENT2/Job2/metr-ok_2022_11_28', '/var/lib/conc/states/TENT2/Job2/metr-ok_2022_11_29', '/var/lib/conc/states/TENT2/Job2/weig-ok_2022_11_28', '/var/lib/conc/states/TENT2/Job2/weig-ok_2022_11_29', '/var/lib/conc/states/TENT2/down-ok_2022_11_27', '/var/lib/conc/states/TENT2/down-ok_2022_11_28', '/var/lib/conc/states/TENT2/serv-ok_2022_11_28', '/var/lib/conc/states/TENT2/serv-ok_2022_11_29', ] but this is what I want: paths = [ [ [ '/var/lib/conc/states/TENT1/Job1/metr-ok_2022_11_28', '/var/lib/conc/states/TENT1/Job1/metr-ok_2022_11_29' ], [ '/var/lib/conc/states/TENT1/Job1/weig-ok_2022_11_28', '/var/lib/conc/states/TENT1/Job1/weig-ok_2022_11_29' ], [ '/var/lib/conc/states/TENT1/down-ok_2022_11_27', '/var/lib/conc/states/TENT1/down-ok_2022_11_28', ], [ '/var/lib/conc/states/TENT1/serv-ok_2022_11_28', '/var/lib/conc/states/TENT1/serv-ok_2022_11_29' ], ], [ [ '/var/lib/conc/states/TENT2/Job2/metr-ok_2022_11_28', '/var/lib/conc/states/TENT2/Job2/metr-ok_2022_11_29' ], [ '/var/lib/conc/states/TENT2/Job2/weig-ok_2022_11_28', '/var/lib/conc/states/TENT2/Job2/weig-ok_2022_11_29' ], [ '/var/lib/conc/states/TENT2/down-ok_2022_11_27', '/var/lib/conc/states/TENT2/down-ok_2022_11_28', ], [ '/var/lib/conc/states/TENT2/serv-ok_2022_11_28', '/var/lib/conc/states/TENT2/serv-ok_2022_11_29', ], ] ] this is my code: from itertools import groupby from os.path import dirname sorted_by_file = [list(g) for _,g in groupby(paths, dirname)] I am struggling how to sort those files once they are sorted by folder name. A: Instead of the dirname function for grouping you have to create your own function which can look like import os.path def grouper(path): d, f = os.path.split(path) f = f.split('-')[0] return d, f It returns a tuple with the directory and the relevant part of the filename. It can be used in the same way as you did already in your code: from itertools import groupby sorted_by_file = [list(g) for _,g in groupby(paths, grouper)]
group a list of paths based on directory name and file name
I want to sort my paths, based on directory name and then on file name. They are separated by first different folder ("TENT1" and "TENT2"). Notice that some files are inside "Job1" and "Job2" folders but some are not but need them sorted as well. Thank you! paths = [ '/var/lib/conc/states/TENT1/Job1/metr-ok_2022_11_28', '/var/lib/conc/states/TENT1/Job1/metr-ok_2022_11_29', '/var/lib/conc/states/TENT1/Job1/weig-ok_2022_11_28', '/var/lib/conc/states/TENT1/Job1/weig-ok_2022_11_29', '/var/lib/conc/states/TENT1/down-ok_2022_11_27', '/var/lib/conc/states/TENT1/down-ok_2022_11_28', '/var/lib/conc/states/TENT1/serv-ok_2022_11_28', '/var/lib/conc/states/TENT1/serv-ok_2022_11_29', '/var/lib/conc/states/TENT2/Job2/metr-ok_2022_11_28', '/var/lib/conc/states/TENT2/Job2/metr-ok_2022_11_29', '/var/lib/conc/states/TENT2/Job2/weig-ok_2022_11_28', '/var/lib/conc/states/TENT2/Job2/weig-ok_2022_11_29', '/var/lib/conc/states/TENT2/down-ok_2022_11_27', '/var/lib/conc/states/TENT2/down-ok_2022_11_28', '/var/lib/conc/states/TENT2/serv-ok_2022_11_28', '/var/lib/conc/states/TENT2/serv-ok_2022_11_29', ] but this is what I want: paths = [ [ [ '/var/lib/conc/states/TENT1/Job1/metr-ok_2022_11_28', '/var/lib/conc/states/TENT1/Job1/metr-ok_2022_11_29' ], [ '/var/lib/conc/states/TENT1/Job1/weig-ok_2022_11_28', '/var/lib/conc/states/TENT1/Job1/weig-ok_2022_11_29' ], [ '/var/lib/conc/states/TENT1/down-ok_2022_11_27', '/var/lib/conc/states/TENT1/down-ok_2022_11_28', ], [ '/var/lib/conc/states/TENT1/serv-ok_2022_11_28', '/var/lib/conc/states/TENT1/serv-ok_2022_11_29' ], ], [ [ '/var/lib/conc/states/TENT2/Job2/metr-ok_2022_11_28', '/var/lib/conc/states/TENT2/Job2/metr-ok_2022_11_29' ], [ '/var/lib/conc/states/TENT2/Job2/weig-ok_2022_11_28', '/var/lib/conc/states/TENT2/Job2/weig-ok_2022_11_29' ], [ '/var/lib/conc/states/TENT2/down-ok_2022_11_27', '/var/lib/conc/states/TENT2/down-ok_2022_11_28', ], [ '/var/lib/conc/states/TENT2/serv-ok_2022_11_28', '/var/lib/conc/states/TENT2/serv-ok_2022_11_29', ], ] ] this is my code: from itertools import groupby from os.path import dirname sorted_by_file = [list(g) for _,g in groupby(paths, dirname)] I am struggling how to sort those files once they are sorted by folder name.
[ "Instead of the dirname function for grouping you have to create your own function which can look like\nimport os.path\n\ndef grouper(path):\n d, f = os.path.split(path)\n f = f.split('-')[0]\n return d, f\n\nIt returns a tuple with the directory and the relevant part of the filename.\nIt can be used in the same way as you did already in your code:\nfrom itertools import groupby\nsorted_by_file = [list(g) for _,g in groupby(paths, grouper)]\n\n" ]
[ 2 ]
[]
[]
[ "python" ]
stackoverflow_0074616163_python.txt
Q: Update single value in JSON I have a JSON file that looks like this: { "displayName": "", "Location": "Jacksonville", "directNumber": "+1 904-513-6504", "extension": "36504" }, { "displayName": "Lawrence Curka", "Location": "Jacksonville", "directNumber": "+1 123-513-6508", "extension": "36508" }, { "displayName": "Chris Brown", "Location": "Jacksonville", "directNumber": "+1 123-513-6511", "extension": "36511" Basically I'm just trying to write a short Python script that will loop through the JSON, finds number that's free (meaning no displayName assigned) and if it's free add user to it (first name, last name). But so far all examples I've found for JSON and Python is to append data but not updating individual key. Here is Python I use that returns me all the free numbers from the JSON: with open('file.json') as json_file: data = json.load(json_file) user_count = 0 for i in data: if i['displayName'] == "": print("Found Free Number: ", i['directNumber']) user_count += 1 print("Free Number Count: ", user_count) First object in JSON doesn't have user assigned (dispalyName). Is it possible to just update just that value with name if it's empty/null? A: i['displayName'] = "Name Surname" jsonFile.write(json.dumps(i))
Update single value in JSON
I have a JSON file that looks like this: { "displayName": "", "Location": "Jacksonville", "directNumber": "+1 904-513-6504", "extension": "36504" }, { "displayName": "Lawrence Curka", "Location": "Jacksonville", "directNumber": "+1 123-513-6508", "extension": "36508" }, { "displayName": "Chris Brown", "Location": "Jacksonville", "directNumber": "+1 123-513-6511", "extension": "36511" Basically I'm just trying to write a short Python script that will loop through the JSON, finds number that's free (meaning no displayName assigned) and if it's free add user to it (first name, last name). But so far all examples I've found for JSON and Python is to append data but not updating individual key. Here is Python I use that returns me all the free numbers from the JSON: with open('file.json') as json_file: data = json.load(json_file) user_count = 0 for i in data: if i['displayName'] == "": print("Found Free Number: ", i['directNumber']) user_count += 1 print("Free Number Count: ", user_count) First object in JSON doesn't have user assigned (dispalyName). Is it possible to just update just that value with name if it's empty/null?
[ "i['displayName'] = \"Name Surname\"\n\njsonFile.write(json.dumps(i))\n\n" ]
[ 0 ]
[]
[]
[ "json", "python" ]
stackoverflow_0074619364_json_python.txt
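The answer above writes a single object back; a more complete sketch of the same idea, assuming file.json actually holds a JSON array of these objects (the name "First Last" is only a placeholder):

import json

with open('file.json') as f:
    data = json.load(f)

# fill the first free number, i.e. the first entry with an empty displayName
for entry in data:
    if entry['displayName'] == "":
        entry['displayName'] = "First Last"
        break

with open('file.json', 'w') as f:
    json.dump(data, f, indent=4)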
Q: Set Airflow Variable dynamically Hi community I need for help. I have a GCS bucket called "staging". This bucket contain folders and subfolders (see picture). The "date-folders" (eg. 20221128) may be several. Each date-folder has 3 subfolders: I'm interested in the "main_folder". The main_folder has 2 "project folders". Each project folder has several subfolders. Each of these last subfolder has a .txt file. The main objective is: Obtain a list of all the path to .txt files (eg. gs://staging/20221128/main_folder/project_1/subfold_1/file.txt, ...) Export the list on an Airflow Variable Use the "list" Variable to run some DAGS dynamically. The folders in the staging bucket may vary everyday, so I don't have static paths. I'm using Apache Beam with Python SDK on Cloud Dataflow and Airflow with Cloud Composer. Is there a way to obtain the list of paths (as os.listdir() on python) with Beam and schedule this workflow daily? (I need to override the list Variable eveyday with new paths). For example I can achieve step n.1 (locally) with the following Python script: def collect_paths(startpath="C:/Users/PycharmProjects/staging/"): list_paths = [] for path, dirs, files in os.walk(startpath): for f in files: file_path = path + "/" + f list_paths .append(file_path ) return list_paths Thank you all in advance. Edit n.1. I've retrieved file paths thanks to google.cloud.storage API in my collect_paths script. Now, I want to access to XCom and get the list of paths. This is my task instance: collect_paths_job = PythonOperator( task_id='collect_paths', python_callable=collect_paths, op_kwargs={'bucket_name': 'staging'}, do_xcom_push=True ) I want to iterate over the list in order to run (in the same DAG) N concurrent task, each processing a single file. I tried with: files = collect_paths_job.xcom_pull(task_ids='collect_paths', include_prior_dates=False) for f in files: job = get_job_operator(f) chain(job) But got the following error: TypeError: xcom_pull() missing 1 required positional argument: 'context' A: I would like to correct you in your usage of the term Variable . Airflow attributes a special meaning to this object. What you want is for the file info to be accessible as parameters in a task. Use XCom Assume you have the DAG with the python task called -- list_files_from_gcs. This task is a python task which exactly runs the collect_path function that you have written. Since this function returns a list, airflow automatically stuffs this into XCom. So now you can access this information anywhere in your DAG in subsequent tasks. Now your subsequent task can again be a python task in the same DAG which case you can access XCom very very easily: @task def next_task(xyz, abc, **context): ti = context['ti'] files_list = ti.xcom_pull(task_ids='list_files_from_gcs') ... ... If you are now looking to call an entirely different DAG now, then you can use TriggerDagRunOperator as well and pass this list as dag_run config like this: TriggerDagRunOperator( conf={ "gcs_files_list": "{{task_instance.xcom_pull(task_ids='list_files_from_gcs'}}" }, .... .... ) Then your triggered DAG can just parse the DAG run config to move ahead.
Set Airflow Variable dynamically
Hi community I need for help. I have a GCS bucket called "staging". This bucket contain folders and subfolders (see picture). The "date-folders" (eg. 20221128) may be several. Each date-folder has 3 subfolders: I'm interested in the "main_folder". The main_folder has 2 "project folders". Each project folder has several subfolders. Each of these last subfolder has a .txt file. The main objective is: Obtain a list of all the path to .txt files (eg. gs://staging/20221128/main_folder/project_1/subfold_1/file.txt, ...) Export the list on an Airflow Variable Use the "list" Variable to run some DAGS dynamically. The folders in the staging bucket may vary everyday, so I don't have static paths. I'm using Apache Beam with Python SDK on Cloud Dataflow and Airflow with Cloud Composer. Is there a way to obtain the list of paths (as os.listdir() on python) with Beam and schedule this workflow daily? (I need to override the list Variable eveyday with new paths). For example I can achieve step n.1 (locally) with the following Python script: def collect_paths(startpath="C:/Users/PycharmProjects/staging/"): list_paths = [] for path, dirs, files in os.walk(startpath): for f in files: file_path = path + "/" + f list_paths .append(file_path ) return list_paths Thank you all in advance. Edit n.1. I've retrieved file paths thanks to google.cloud.storage API in my collect_paths script. Now, I want to access to XCom and get the list of paths. This is my task instance: collect_paths_job = PythonOperator( task_id='collect_paths', python_callable=collect_paths, op_kwargs={'bucket_name': 'staging'}, do_xcom_push=True ) I want to iterate over the list in order to run (in the same DAG) N concurrent task, each processing a single file. I tried with: files = collect_paths_job.xcom_pull(task_ids='collect_paths', include_prior_dates=False) for f in files: job = get_job_operator(f) chain(job) But got the following error: TypeError: xcom_pull() missing 1 required positional argument: 'context'
[ "I would like to correct you in your usage of the term Variable . Airflow attributes a special meaning to this object. What you want is for the file info to be accessible as parameters in a task.\nUse XCom\nAssume you have the DAG with the python task called -- list_files_from_gcs.\nThis task is a python task which exactly runs the collect_path function that you have written. Since this function returns a list, airflow automatically stuffs this into XCom. So now you can access this information anywhere in your DAG in subsequent tasks.\n\nNow your subsequent task can again be a python task in the same DAG which case you can access XCom very very easily:\n@task\ndef next_task(xyz, abc, **context):\n ti = context['ti']\n files_list = ti.xcom_pull(task_ids='list_files_from_gcs')\n ...\n ...\n\n\nIf you are now looking to call an entirely different DAG now, then you can use TriggerDagRunOperator as well and pass this list as dag_run config like this:\nTriggerDagRunOperator(\n conf={\n \"gcs_files_list\": \"{{task_instance.xcom_pull(task_ids='list_files_from_gcs'}}\"\n },\n ....\n ....\n)\n\nThen your triggered DAG can just parse the DAG run config to move ahead.\n\n\n" ]
[ 1 ]
[]
[]
[ "airflow", "apache_beam", "google_cloud_composer", "google_cloud_dataflow", "python" ]
stackoverflow_0074617283_airflow_apache_beam_google_cloud_composer_google_cloud_dataflow_python.txt
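As a follow-up to the XCom answer above: the edit in the question wants one task per returned path inside the same DAG. With Airflow 2.3+ dynamic task mapping this can be sketched roughly as follows (process_file is a hypothetical task; collect_paths_job.output is the XComArg of the existing PythonOperator):

from airflow.decorators import task

@task
def process_file(path):
    # handle a single gs:// path here
    ...

process_file.expand(path=collect_paths_job.output)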
Q: folium geojson multiple layers control I'm trying to create a map that has multiple layers output from the key value pairs of a geojson, I can create the map and the layers but the layer filter doesn't work. data = {"type": "FeatureCollection", "name": "OVE", "crs": {"type": "name", "properties": {"name": "urn:ogc:def:crs:OGC:1.3:CRS84"}}, "features": [{"type": "Feature", "geometry": {"type": "Point", "coordinates": [-75.849254, 47.643435]}, "properties": {"id": 1, "Country": "Canada", "Category": "hot dogs", "Designac": null}}, {"type": "Feature", "geometry": {"type": "Point", "coordinates": [-75.840219, 47.971115]}, "properties": {"id": 2, "Country": "Canada", "Category": "hamburger", "Designac": null}}, {"type": "Feature", "geometry": {"type": "Point", "coordinates": [-75.849254, 48.278694]}, "properties": {"id": 3, "Country": "Canada", "Category": "barbecue", "Designac": null}}, {"type": "Feature", "geometry": {"type": "Point", "coordinates": [-74.792153, 48.284706]}, "properties": {"id": 6, "Country": "Canada", "Category": "hot dogs", "Designac": null}}, {"type": "Feature", "geometry": {"type": "Point", "coordinates": [-75.298115, 47.643435]}, "properties": {"id": 7, "Country": "Canada", "Category": "barbecue", "Designac": null}}, {"type": "Feature", "geometry": {"type": "Point", "coordinates": [-75.298115, 47.971115]}, "properties": {"id": 8, "Country": "Canada", "Category": "barbecue", "Designac": null}}, {"type": "Feature", "geometry": {"type": "Point", "coordinates": [-75.28908, 48.284706]}, "properties": {"id": 9, "Country": "Canada", "Category": "hamburger", "Designac": null}}]} my code is this: data_geo= folium.GeoJson(data) Category = [] for i in data["features"]: Category.append(i["properties"]["Category"]) set_res = set(Category) list_category = list(set_res) for i in range(len(list_category)): if list_category[i] == None: list_category[i] = 'No Value' list_category_lower = [name.lower() for name in list_category] list_category_lower = sorted(list_category_lower) l_replace = [s.replace(' ', '') for s in list_category_lower] for feature in data_geo.data['features']: for replace, lower in zip(l_replace, list_category_lower): globals()['%s' % replace] = folium.FeatureGroup(lower) category = feature['properties']['Category'] if category == None: category = 'No Value' category = category.lower() if feature['properties']['Country'] == 'Canada': for i in list_category_lower: if category == i: folium.Marker(location=list(reversed(feature['geometry']['coordinates'])), icon=folium.Icon(color="green",icon='info-sign'), popup="<b>Timberline Lodge</b>", tooltip = feature['properties']['Category'], ).add_to(globals()['%s' % replace]) (globals()['%s' % replace]).add_to(m) folium.LayerControl().add_to(m) m.save('demo.html') m I deselect hamburger but the layer is still displayed A: I solved it adding one more conditional for replace, lower in zip(l_replace, list_category_lower): globals()['%s' % replace] = folium.FeatureGroup(lower) variable = globals()['%s' % replace] #temp = variable.layer_name for feature in data_geo.data['features']: category = feature['properties']['Category'] if category == None: category = 'No Value' category = category.lower() if feature['properties']['Country'] == 'Canada': for i in list_category_lower: if i == variable.layer_name: if category == i: folium.Marker(location=list(reversed(feature['geometry']['coordinates'])), icon=folium.Icon(color="green",icon='info-sign'), popup="<b>Timberline Lodge</b>", tooltip = feature['properties']['Category'], 
#Categoria=feature['properties']['entidad'] ).add_to(variable) (globals()['%s' % replace]).add_to(m) folium.LayerControl().add_to(m) m.save('demo.html') m
folium geojson multiple layers control
I'm trying to create a map that has multiple layers output from the key value pairs of a geojson, I can create the map and the layers but the layer filter doesn't work. data = {"type": "FeatureCollection", "name": "OVE", "crs": {"type": "name", "properties": {"name": "urn:ogc:def:crs:OGC:1.3:CRS84"}}, "features": [{"type": "Feature", "geometry": {"type": "Point", "coordinates": [-75.849254, 47.643435]}, "properties": {"id": 1, "Country": "Canada", "Category": "hot dogs", "Designac": null}}, {"type": "Feature", "geometry": {"type": "Point", "coordinates": [-75.840219, 47.971115]}, "properties": {"id": 2, "Country": "Canada", "Category": "hamburger", "Designac": null}}, {"type": "Feature", "geometry": {"type": "Point", "coordinates": [-75.849254, 48.278694]}, "properties": {"id": 3, "Country": "Canada", "Category": "barbecue", "Designac": null}}, {"type": "Feature", "geometry": {"type": "Point", "coordinates": [-74.792153, 48.284706]}, "properties": {"id": 6, "Country": "Canada", "Category": "hot dogs", "Designac": null}}, {"type": "Feature", "geometry": {"type": "Point", "coordinates": [-75.298115, 47.643435]}, "properties": {"id": 7, "Country": "Canada", "Category": "barbecue", "Designac": null}}, {"type": "Feature", "geometry": {"type": "Point", "coordinates": [-75.298115, 47.971115]}, "properties": {"id": 8, "Country": "Canada", "Category": "barbecue", "Designac": null}}, {"type": "Feature", "geometry": {"type": "Point", "coordinates": [-75.28908, 48.284706]}, "properties": {"id": 9, "Country": "Canada", "Category": "hamburger", "Designac": null}}]} my code is this: data_geo= folium.GeoJson(data) Category = [] for i in data["features"]: Category.append(i["properties"]["Category"]) set_res = set(Category) list_category = list(set_res) for i in range(len(list_category)): if list_category[i] == None: list_category[i] = 'No Value' list_category_lower = [name.lower() for name in list_category] list_category_lower = sorted(list_category_lower) l_replace = [s.replace(' ', '') for s in list_category_lower] for feature in data_geo.data['features']: for replace, lower in zip(l_replace, list_category_lower): globals()['%s' % replace] = folium.FeatureGroup(lower) category = feature['properties']['Category'] if category == None: category = 'No Value' category = category.lower() if feature['properties']['Country'] == 'Canada': for i in list_category_lower: if category == i: folium.Marker(location=list(reversed(feature['geometry']['coordinates'])), icon=folium.Icon(color="green",icon='info-sign'), popup="<b>Timberline Lodge</b>", tooltip = feature['properties']['Category'], ).add_to(globals()['%s' % replace]) (globals()['%s' % replace]).add_to(m) folium.LayerControl().add_to(m) m.save('demo.html') m I deselect hamburger but the layer is still displayed
[ "I solved it adding one more conditional\nfor replace, lower in zip(l_replace, list_category_lower): \n globals()['%s' % replace] = folium.FeatureGroup(lower)\n variable = globals()['%s' % replace]\n #temp = variable.layer_name\n for feature in data_geo.data['features']:\n\n category = feature['properties']['Category']\n\n if category == None:\n category = 'No Value'\n category = category.lower()\n\n if feature['properties']['Country'] == 'Canada': \n for i in list_category_lower:\n if i == variable.layer_name:\n if category == i: \n folium.Marker(location=list(reversed(feature['geometry']['coordinates'])),\n icon=folium.Icon(color=\"green\",icon='info-sign'), \n popup=\"<b>Timberline Lodge</b>\",\n tooltip = feature['properties']['Category'],\n #Categoria=feature['properties']['entidad']\n ).add_to(variable)\n (globals()['%s' % replace]).add_to(m) \n\nfolium.LayerControl().add_to(m) \nm.save('demo.html')\n\nm\n\n" ]
[ 1 ]
[]
[]
[ "folium", "layer", "loops", "maps", "python" ]
stackoverflow_0074616800_folium_layer_loops_maps_python.txt
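A variant of the self-answer above that avoids globals() by keeping the per-category FeatureGroups in a dict (a sketch reusing the variables already defined in the question; m is the existing map object):

groups = {}
for feature in data_geo.data['features']:
    category = (feature['properties']['Category'] or 'No Value').lower()
    if feature['properties']['Country'] != 'Canada':
        continue
    # one FeatureGroup per category, created on first use
    fg = groups.setdefault(category, folium.FeatureGroup(name=category))
    folium.Marker(
        location=list(reversed(feature['geometry']['coordinates'])),
        icon=folium.Icon(color="green", icon='info-sign'),
        tooltip=feature['properties']['Category'],
    ).add_to(fg)

for fg in groups.values():
    fg.add_to(m)
folium.LayerControl().add_to(m)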
Q: Python: How to force overwriting of files when using setup.py install (distutil) I am using distutil to install my python code using python setup.py install I run into problems when I want to install an older branch of my code over a new one: setup.py install won't overwrite older files. A work around is touching (touch <filename>) all files so they are forced to be newer than those installed, but this is pretty ugly. What I am looking for is an option to force overwriting of all files, eg. something like python setup.py --force install Any Ideas? A: The Python developers had the same idea, they just put the option after the command: python setup.py install --force The distutils documentation doesn't mention the --force option specifically, but you can find it by using the --help option: python setup.py --help install A: Go to the setup.py directory and I simply use: pip install . It works for me. A: In my case, I have to remove the build and dist folders rm -rf build rm -rf dist python3 setup.py install
Python: How to force overwriting of files when using setup.py install (distutil)
I am using distutil to install my python code using python setup.py install I run into problems when I want to install an older branch of my code over a new one: setup.py install won't overwrite older files. A work around is touching (touch <filename>) all files so they are forced to be newer than those installed, but this is pretty ugly. What I am looking for is an option to force overwriting of all files, eg. something like python setup.py --force install Any Ideas?
[ "The Python developers had the same idea, they just put the option after the command:\npython setup.py install --force\n\nThe distutils documentation doesn't mention the --force option specifically, but you can find it by using the --help option:\npython setup.py --help install\n\n", "Go to the setup.py directory and I simply use:\npip install .\n\nIt works for me.\n", "In my case, I have to remove the build and dist folders\nrm -rf build\nrm -rf dist\npython3 setup.py install\n\n" ]
[ 53, 4, 0 ]
[]
[]
[ "distutils", "installation", "overwrite", "python" ]
stackoverflow_0019133831_distutils_installation_overwrite_python.txt
Q: Python Scatter plot with matrix input. Having trouble getting number of columns showing on x axis, then a dot for each value in each column I'm making a bar chart and a scatter plot. The bar chart takes a vector as an input. I plotted the values on the x-axis, and the amount of times they repeat on the y-axis. This is did by converting the vector to a list and using .count(). That worked great and was relatively straightforward. As for the scatterplot, the input is going to be a matrix of any x and y dimensions. The idea is to have the amount of columns in the matrix show up on the x axis going from 1,2,3,4 etc depending on how many columns the inserted matrix is. The rows of each column will consist of many different numbers that I would like all to be displayed as dots or stars above the relevant column index, i. e. Column #3 consists of values 6,2,8,5,9,5 going down, and would like a dot for each of them going up the y-axis directly on top of the number 3 on the x axis. I have tried different approaches, some with dots showing up but in wrong places, other times the x axis is completely off even though I used .len(0,:) which prints out the correct amount of columns but doesn't chart it. My latest attempt which now doesn't even show the dots or stars: import numpy as np # Import NumPy import matplotlib.pyplot as plt # Import the matplotlib.pyplot module vector = np.array([[-3,7,12,4,0o2,7,-3],[7,7,12,4,0o2,4,12],[12,-3,4,10,12,4,-3],[10,12,4,0o3,7,10,12]]) x = len(vector[0,:]) print(x)#vector[0,:] y = vector[:,0] plt.plot(x, y, "r.") # Scatter plot with blue stars plt.title("Scatter plot") # Set the title of the graph plt.xlabel("Column #") # Set the x-axis label plt.ylabel("Occurences of values for each column") # Set the y-axis label plt.xlim([1,len(vector[0,:])]) # Set the limits of the x-axis plt.ylim([-5,15]) # Set the limits of the y-axis plt.show(vector) The matrix shown at the top is just one I made up for the purpose of testing, the idea is that it should work for any given matrix which is imported. I tried the above pasted code which is the closest I have gotten as it actually prints the amount of columns it has, but it doesn't show them on the plot. I haven't gotten to a point where it actually plots the points above the columns on y axis yet, only in completely wrong positions in a previous version. A: import numpy as np # Import NumPy import matplotlib.pyplot as plt # Import the matplotlib.pyplot module vector = np.array([[-3,7,12,4,0o2,7,-3], [7,7,12,4,0o2,4,12], [12,-3,4,10,12,4,-3], [10,12,4,0o3,7,10,12]]) rows, columns = vector.shape plt.title("Scatter plot") # Set the title of the graph plt.xlabel("Column #") # Set the x-axis label plt.ylabel("Occurences of values for each column") # Set the y-axis label plt.xlim([1,columns]) # Set the limits of the x-axis plt.ylim([-5,15]) # Set the limits of the y-axis for i in range(1, columns+1): y = vector[:,i-1] x = [i] * rows plt.plot(x, y, "r.") plt.show()
Python Scatter plot with matrix input. Having trouble getting number of columns showing on x axis, then a dot for each value in each column
I'm making a bar chart and a scatter plot. The bar chart takes a vector as an input. I plotted the values on the x-axis, and the amount of times they repeat on the y-axis. This is did by converting the vector to a list and using .count(). That worked great and was relatively straightforward. As for the scatterplot, the input is going to be a matrix of any x and y dimensions. The idea is to have the amount of columns in the matrix show up on the x axis going from 1,2,3,4 etc depending on how many columns the inserted matrix is. The rows of each column will consist of many different numbers that I would like all to be displayed as dots or stars above the relevant column index, i. e. Column #3 consists of values 6,2,8,5,9,5 going down, and would like a dot for each of them going up the y-axis directly on top of the number 3 on the x axis. I have tried different approaches, some with dots showing up but in wrong places, other times the x axis is completely off even though I used .len(0,:) which prints out the correct amount of columns but doesn't chart it. My latest attempt which now doesn't even show the dots or stars: import numpy as np # Import NumPy import matplotlib.pyplot as plt # Import the matplotlib.pyplot module vector = np.array([[-3,7,12,4,0o2,7,-3],[7,7,12,4,0o2,4,12],[12,-3,4,10,12,4,-3],[10,12,4,0o3,7,10,12]]) x = len(vector[0,:]) print(x)#vector[0,:] y = vector[:,0] plt.plot(x, y, "r.") # Scatter plot with blue stars plt.title("Scatter plot") # Set the title of the graph plt.xlabel("Column #") # Set the x-axis label plt.ylabel("Occurences of values for each column") # Set the y-axis label plt.xlim([1,len(vector[0,:])]) # Set the limits of the x-axis plt.ylim([-5,15]) # Set the limits of the y-axis plt.show(vector) The matrix shown at the top is just one I made up for the purpose of testing, the idea is that it should work for any given matrix which is imported. I tried the above pasted code which is the closest I have gotten as it actually prints the amount of columns it has, but it doesn't show them on the plot. I haven't gotten to a point where it actually plots the points above the columns on y axis yet, only in completely wrong positions in a previous version.
[ "import numpy as np # Import NumPy\nimport matplotlib.pyplot as plt # Import the matplotlib.pyplot module\n\nvector = np.array([[-3,7,12,4,0o2,7,-3],\n [7,7,12,4,0o2,4,12],\n [12,-3,4,10,12,4,-3],\n [10,12,4,0o3,7,10,12]])\n\nrows, columns = vector.shape\nplt.title(\"Scatter plot\") # Set the title of the graph\nplt.xlabel(\"Column #\") # Set the x-axis label\nplt.ylabel(\"Occurences of values for each column\") # Set the y-axis label\nplt.xlim([1,columns]) # Set the limits of the x-axis\nplt.ylim([-5,15]) # Set the limits of the y-axis\n\nfor i in range(1, columns+1):\n y = vector[:,i-1]\n x = [i] * rows\n plt.plot(x, y, \"r.\")\n\nplt.show()\n\n" ]
[ 1 ]
[]
[]
[ "matplotlib", "numpy", "python" ]
stackoverflow_0074617268_matplotlib_numpy_python.txt
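For reference alongside the loop-based answer above, the same figure can be produced without an explicit loop by flattening the matrix column-wise (a sketch reusing the vector array from the question):

rows, columns = vector.shape
x = np.repeat(np.arange(1, columns + 1), rows)   # column number for every entry
y = vector.flatten(order='F')                    # column-major, so y lines up with x
plt.scatter(x, y, c='r', marker='.')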
Q: How do i read the values of a returned pointer from ctypes? I am currently struggling with ctypes. I am able to convert a python list to a float array and give it to the C-function. But i can't figure out how to return this array from the C-function back to a python list... Python-Code class Point(ctypes.Structure): _fields_= [("a", ctypes.c_float * 4), ("aa", ctypes.c_int)] floats = [1.0, 2.0, 3.0, 4.0] FloatArray4 = (ctypes.c_float * 4) parameter_array = FloatArray4(*floats) test1 = clibrary.dosth test1.argtypes = [ctypes.c_float * 4, ctypes.c_int] test1.restype = ctypes.POINTER(Point) struc = test1(parameter_array, 9) p = (struc.contents.a) print(p) clibrary.free_memory(struc) The C-function basically puts the parameter_array into a structure ant returns the structure.. C-Code: #include <stdio.h> #include <stdlib.h> struct a{float *a; int aa; } ; struct a *dosth(float *lsit, int x){ struct a *b = malloc(200000); b -> a = lsit; b -> aa = 3; return b; } void free_memory(struct a *pointer){ free(pointer); } Output of print(p) in Python is : <__main__.c_float_Array_4 object at 0x000001FE9EEA79C0> How do i get access to those values? A: Slicing a ctypes pointer will generate a Python list of the contents. Since a pointer has no knowledge of how many items it points to, you'll need to know the size, usually through another parameter: >>> import ctypes as ct >>> f = (ct.c_float * 4)(1,2,3,4) >>> f <__main__.c_float_Array_4 object at 0x00000216D6B7A840> >>> f[:4] [1.0, 2.0, 3.0, 4.0] Here's a fleshed-out example based on your code: test.c #include <stdlib.h> #ifdef _WIN32 # define API __declspec(dllexport) #else # define API #endif struct Floats { float *fptr; size_t size; }; API struct Floats *alloc_floats(float *fptr, size_t size) { struct Floats *pFloats = malloc(sizeof(struct Floats)); pFloats->fptr = fptr; pFloats->size = size; return pFloats; } API void free_floats(struct Floats *pFloats) { free(pFloats); } test.py import ctypes as ct class Floats(ct.Structure): _fields_= (('fptr', ct.POINTER(ct.c_float)), # Pointer, not array. ('size', ct.c_int)) # Used to know the size of the array pointed to. # Display routine when printing this class. # Note the slicing of the pointer to generate a Python list. def __repr__(self): return f'Floats({self.fptr[:self.size]})' dll = ct.CDLL('./test') dll.alloc_floats.argtypes = ct.POINTER(ct.c_float), ct.c_size_t dll.alloc_floats.restype = ct.POINTER(Floats) dll.free_floats.argtypes = ct.POINTER(Floats), dll.free_floats.restype = None data = (ct.c_float * 4)(1.0, 2.0, 3.0, 4.0) p = dll.alloc_floats(data, len(data)) print(p.contents) dll.free_floats(p) Output: Floats([1.0, 2.0, 3.0, 4.0])
How do i read the values of a returned pointer from ctypes?
I am currently struggling with ctypes. I am able to convert a python list to a float array and give it to the C-function. But i can't figure out how to return this array from the C-function back to a python list... Python-Code class Point(ctypes.Structure): _fields_= [("a", ctypes.c_float * 4), ("aa", ctypes.c_int)] floats = [1.0, 2.0, 3.0, 4.0] FloatArray4 = (ctypes.c_float * 4) parameter_array = FloatArray4(*floats) test1 = clibrary.dosth test1.argtypes = [ctypes.c_float * 4, ctypes.c_int] test1.restype = ctypes.POINTER(Point) struc = test1(parameter_array, 9) p = (struc.contents.a) print(p) clibrary.free_memory(struc) The C-function basically puts the parameter_array into a structure ant returns the structure.. C-Code: #include <stdio.h> #include <stdlib.h> struct a{float *a; int aa; } ; struct a *dosth(float *lsit, int x){ struct a *b = malloc(200000); b -> a = lsit; b -> aa = 3; return b; } void free_memory(struct a *pointer){ free(pointer); } Output of print(p) in Python is : <__main__.c_float_Array_4 object at 0x000001FE9EEA79C0> How do i get access to those values?
[ "Slicing a ctypes pointer will generate a Python list of the contents. Since a pointer has no knowledge of how many items it points to, you'll need to know the size, usually through another parameter:\n>>> import ctypes as ct\n>>> f = (ct.c_float * 4)(1,2,3,4)\n>>> f\n<__main__.c_float_Array_4 object at 0x00000216D6B7A840>\n>>> f[:4]\n[1.0, 2.0, 3.0, 4.0]\n\nHere's a fleshed-out example based on your code:\ntest.c\n#include <stdlib.h>\n\n#ifdef _WIN32\n# define API __declspec(dllexport)\n#else\n# define API\n#endif\n\nstruct Floats {\n float *fptr;\n size_t size;\n};\n\nAPI struct Floats *alloc_floats(float *fptr, size_t size) {\n struct Floats *pFloats = malloc(sizeof(struct Floats));\n pFloats->fptr = fptr;\n pFloats->size = size;\n return pFloats;\n}\n\nAPI void free_floats(struct Floats *pFloats) {\n free(pFloats);\n}\n\ntest.py\nimport ctypes as ct\n\nclass Floats(ct.Structure):\n _fields_= (('fptr', ct.POINTER(ct.c_float)), # Pointer, not array.\n ('size', ct.c_int)) # Used to know the size of the array pointed to.\n # Display routine when printing this class.\n # Note the slicing of the pointer to generate a Python list.\n def __repr__(self):\n return f'Floats({self.fptr[:self.size]})'\n\ndll = ct.CDLL('./test')\ndll.alloc_floats.argtypes = ct.POINTER(ct.c_float), ct.c_size_t\ndll.alloc_floats.restype = ct.POINTER(Floats)\ndll.free_floats.argtypes = ct.POINTER(Floats),\ndll.free_floats.restype = None\n\ndata = (ct.c_float * 4)(1.0, 2.0, 3.0, 4.0)\np = dll.alloc_floats(data, len(data))\nprint(p.contents)\ndll.free_floats(p)\n\nOutput:\nFloats([1.0, 2.0, 3.0, 4.0])\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "c", "ctypes", "pointers", "python" ]
stackoverflow_0074615515_arrays_c_ctypes_pointers_python.txt
Q: Python selenium loop scrape with xpath try: for i in range(n): company=driver.find_element(By.XPATH,'//*[@id="main-content"]/section[2]/ul/li['+str(i)+']/div/div[2]/h4') companyname.append(company) except IndexError: print("no") Hi, the XPath doesn't work when scraping in a Python Selenium loop, or am I doing it wrong? A: Try to find this element using driver.find_element(By.CSS_SELECTOR, "CSS_SELECTOR") and simply copy the selector of the element from the page inspection section
Python selenium loop scrape with xpath
try: for i in range(n): company=driver.find_element(By.XPATH,'//*[@id="main-content"]/section[2]/ul/li['+str(i)+']/div/div[2]/h4') companyname.append(company) except IndexError: print("no") Hi, the XPath doesn't work when scraping in a Python Selenium loop, or am I doing it wrong?
[ "Try to find this element using driver.find_element(By.CSS_SELECTOR \"CSS_SELECTOR\")and simply copy selector of element in page inspection section\n" ]
[ 0 ]
[]
[]
[ "python", "selenium" ]
stackoverflow_0074617900_python_selenium.txt
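A small sketch expanding on the answer above: rather than building indexed XPaths by hand, all matching nodes can be fetched in one call and read in a loop (the selector string below is only a placeholder; the real one should be copied from the browser's inspector):

from selenium.webdriver.common.by import By

elements = driver.find_elements(By.CSS_SELECTOR, "#main-content section ul li h4")
companyname = [el.text for el in elements]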
Q: Showing an image from console in Python What is the easiest way to show a .jpg or .gif image from Python console? I've got a Python console program that is checking a data set which contains links to images stored locally. How should I write the script so that it would display images pop-up graphical windows? A: Using the awesome Pillow library: >>> from PIL import Image >>> img = Image.open('test.png') >>> img.show() This will open the image in your default image viewer. A: In a new window using Pillow/PIL Install Pillow (or PIL), e.g.: $ pip install pillow Now you can from PIL import Image with Image.open('path/to/file.jpg') as img: img.show() Using native apps Other common alternatives include running xdg-open or starting the browser with the image path: import webbrowser webbrowser.open('path/to/file.jpg') Inline a Linux console If you really want to show the image inline in the console and not as a new window, you may do that but only in a Linux console using fbi see ask Ubuntu or else use ASCII-art like CACA. A: Since you are probably running Windows (from looking at your tags), this would be the easiest way to open and show an image file from the console without installing extra stuff like PIL. import os os.system('start pic.png') A: In Xterm-compatible terminals, you can show the image directly in the terminal. See my answer to "PPM image to ASCII art in Python" A: Or simply execute the image through the shell, as in import subprocess subprocess.call([ fname ], shell=True) and whatever program is installed to handle images will be launched. A: Why not just display it in the user's web browser? A: If you would like to show it in a new window, you could use Tkinter + PIL library, like so: import tkinter as tk from PIL import ImageTk, Image def show_imge(path): image_window = tk.Tk() img = ImageTk.PhotoImage(Image.open(path)) panel = tk.Label(image_window, image=img) panel.pack(side="bottom", fill="both", expand="yes") image_window.mainloop() This is a modified example that can be found all over the web. A: You cannot display images in a console window. You need a graphical toolkit such as Tkinter, PyGTK, PyQt, PyKDE, wxPython, PyObjC, or PyFLTK. There are plenty of tutorials on how to create simple windows and loading images in python. A: You can also using the Python module Ipython, which in addition to displaying an image in the Spyder console can embed images in Jupyter notebook. In Spyder, the image will be displayed in full size, not scaled to fit the console. from IPython.display import Image, display display(Image(filename="mypic.png")) A: I made a simple tool that will display an image given a filename or image object or url. It's crude, but it'll do in a hurry. Installation: $ pip install simple-imshow Usage: from simshow import simshow simshow('some_local_file.jpg') # display from local file simshow('http://mathandy.com/escher_sphere.png') # display from url A: If you want to open the image in your native image viewer, try os.startfile: import os os.startfile('file') Or you could set the image as the background using a GUI library and then show it when you want to. But this way uses a lot more code and might impact the time your script takes to run. But it does allow you to customize the ui. 
Here's an example using wxpython: import wx ######################################################################## class MainPanel(wx.Panel): """""" #---------------------------------------------------------------------- def __init__(self, parent): """Constructor""" wx.Panel.__init__(self, parent=parent) self.SetBackgroundStyle(wx.BG_STYLE_PAINT) # Was wx.BG_STYLE_CUSTOM) self.frame = parent sizer = wx.BoxSizer(wx.VERTICAL) hSizer = wx.BoxSizer(wx.HORIZONTAL) for num in range(4): label = "Button %s" % num btn = wx.Button(self, label=label) sizer.Add(btn, 0, wx.ALL, 5) hSizer.Add((1,1), 1, wx.EXPAND) hSizer.Add(sizer, 0, wx.TOP, 100) hSizer.Add((1,1), 0, wx.ALL, 75) self.SetSizer(hSizer) self.Bind(wx.EVT_ERASE_BACKGROUND, self.OnEraseBackground) #---------------------------------------------------------------------- def OnEraseBackground(self, evt): """ Add a picture to the background """ # yanked from ColourDB.py dc = evt.GetDC() if not dc: dc = wx.ClientDC(self) rect = self.GetUpdateRegion().GetBox() dc.SetClippingRect(rect) dc.Clear() bmp = wx.Bitmap("file") dc.DrawBitmap(bmp, 0, 0) ######################################################################## class MainFrame(wx.Frame): """""" #---------------------------------------------------------------------- def __init__(self): """Constructor""" wx.Frame.__init__(self, None, size=(600,450)) panel = MainPanel(self) self.Center() ######################################################################## class Main(wx.App): """""" #---------------------------------------------------------------------- def __init__(self, redirect=False, filename=None): """Constructor""" wx.App.__init__(self, redirect, filename) dlg = MainFrame() dlg.Show() #---------------------------------------------------------------------- if __name__ == "__main__": app = Main() app.MainLoop() (source code from how to put a image as a background in wxpython) You can even show the image in your terminal using timg: import timg obj = timg.Renderer() obj.load_image_from_file("file") obj.render(timg.SixelMethod) (PyPI: https://pypi.org/project/timg) A: 2022: import os os.open("filename.png") It will open the filename.png in a window using default image viewer. A: You can use the following code: import matplotlib.pyplot as plt import matplotlib.image as mpimg %matplotlib inline img = mpimg.imread('FILEPATH/FILENAME.jpg') imgplot = plt.imshow(img) plt.axis('off') plt.show() A: Displaying images in console using Python For this you will need a library called ascii_magic Installation : pip install ascii_magic Sample Code : import ascii_magic img = ascii_magic.from_image_file("Image.png") result = ascii_magic.to_terminal(img) Reference : Ascii_Magic A: The easiest way to display an image from a console script is to open it in a web browser using webbrowser standard library module. No additional packages need to be installed Works across different operating systems On macOS, webbrowser directly opens Preview app if you pass it an image file Here is an example, tested on macOS. import webbrowser # Generate PNG file path = Path("/tmp/test-image.png") with open(path, "wb") as out: out.write(png_data) # Test the image on a local screen # using a web browser. # Path URL format may vary across different operating systems, # consult Python manual for details. webbrowser.open(f"file://{path.as_posix()}")
Showing an image from console in Python
What is the easiest way to show a .jpg or .gif image from Python console? I've got a Python console program that is checking a data set which contains links to images stored locally. How should I write the script so that it would display images pop-up graphical windows?
[ "Using the awesome Pillow library:\n>>> from PIL import Image \n>>> img = Image.open('test.png')\n>>> img.show() \n\nThis will open the image in your default image viewer.\n", "In a new window using Pillow/PIL\nInstall Pillow (or PIL), e.g.:\n$ pip install pillow\n\nNow you can\nfrom PIL import Image\nwith Image.open('path/to/file.jpg') as img:\n img.show()\n\nUsing native apps\nOther common alternatives include running xdg-open or starting the browser with the image path:\nimport webbrowser\nwebbrowser.open('path/to/file.jpg')\n\nInline a Linux console\nIf you really want to show the image inline in the console and not as a new window, you may do that but only in a Linux console using fbi see ask Ubuntu or else use ASCII-art like CACA.\n", "Since you are probably running Windows (from looking at your tags), this would be the easiest way to open and show an image file from the console without installing extra stuff like PIL.\nimport os\nos.system('start pic.png')\n\n", "In Xterm-compatible terminals, you can show the image directly in the terminal. See my answer to \"PPM image to ASCII art in Python\"\n\n", "Or simply execute the image through the shell, as in \nimport subprocess\nsubprocess.call([ fname ], shell=True)\n\nand whatever program is installed to handle images will be launched.\n", "Why not just display it in the user's web browser?\n", "If you would like to show it in a new window, you could use Tkinter + PIL library, like so:\nimport tkinter as tk\nfrom PIL import ImageTk, Image\n\ndef show_imge(path):\n image_window = tk.Tk()\n img = ImageTk.PhotoImage(Image.open(path))\n panel = tk.Label(image_window, image=img)\n panel.pack(side=\"bottom\", fill=\"both\", expand=\"yes\")\n image_window.mainloop()\n\nThis is a modified example that can be found all over the web.\n", "You cannot display images in a console window.\nYou need a graphical toolkit such as Tkinter, PyGTK, PyQt, PyKDE, wxPython, PyObjC, or PyFLTK.\nThere are plenty of tutorials on how to create simple windows and loading images in python.\n", "You can also using the Python module Ipython, which in addition to displaying an image in the Spyder console can embed images in Jupyter notebook. In Spyder, the image will be displayed in full size, not scaled to fit the console. \nfrom IPython.display import Image, display\ndisplay(Image(filename=\"mypic.png\"))\n\n", "I made a simple tool that will display an image given a filename or image object or url.\nIt's crude, but it'll do in a hurry.\nInstallation:\n $ pip install simple-imshow\n\nUsage:\nfrom simshow import simshow\nsimshow('some_local_file.jpg') # display from local file\nsimshow('http://mathandy.com/escher_sphere.png') # display from url\n\n", "If you want to open the image in your native image viewer, try os.startfile:\nimport os\n\nos.startfile('file')\n\nOr you could set the image as the background using a GUI library and then show it when you want to. But this way uses a lot more code and might impact the time your script takes to run. But it does allow you to customize the ui. 
Here's an example using wxpython:\nimport wx\n \n########################################################################\nclass MainPanel(wx.Panel):\n \"\"\"\"\"\"\n \n #----------------------------------------------------------------------\n def __init__(self, parent):\n \"\"\"Constructor\"\"\"\n wx.Panel.__init__(self, parent=parent)\n self.SetBackgroundStyle(wx.BG_STYLE_PAINT) # Was wx.BG_STYLE_CUSTOM)\n self.frame = parent\n \n sizer = wx.BoxSizer(wx.VERTICAL)\n hSizer = wx.BoxSizer(wx.HORIZONTAL)\n \n for num in range(4):\n label = \"Button %s\" % num\n btn = wx.Button(self, label=label)\n sizer.Add(btn, 0, wx.ALL, 5)\n hSizer.Add((1,1), 1, wx.EXPAND)\n hSizer.Add(sizer, 0, wx.TOP, 100)\n hSizer.Add((1,1), 0, wx.ALL, 75)\n self.SetSizer(hSizer)\n self.Bind(wx.EVT_ERASE_BACKGROUND, self.OnEraseBackground)\n \n #----------------------------------------------------------------------\n def OnEraseBackground(self, evt):\n \"\"\"\n Add a picture to the background\n \"\"\"\n # yanked from ColourDB.py\n dc = evt.GetDC()\n \n if not dc:\n dc = wx.ClientDC(self)\n rect = self.GetUpdateRegion().GetBox()\n dc.SetClippingRect(rect)\n dc.Clear()\n bmp = wx.Bitmap(\"file\")\n dc.DrawBitmap(bmp, 0, 0)\n \n \n########################################################################\nclass MainFrame(wx.Frame):\n \"\"\"\"\"\"\n \n #----------------------------------------------------------------------\n def __init__(self):\n \"\"\"Constructor\"\"\"\n wx.Frame.__init__(self, None, size=(600,450))\n panel = MainPanel(self) \n self.Center()\n \n########################################################################\nclass Main(wx.App):\n \"\"\"\"\"\"\n \n #----------------------------------------------------------------------\n def __init__(self, redirect=False, filename=None):\n \"\"\"Constructor\"\"\"\n wx.App.__init__(self, redirect, filename)\n dlg = MainFrame()\n dlg.Show()\n \n#----------------------------------------------------------------------\nif __name__ == \"__main__\":\n app = Main()\n app.MainLoop()\n\n(source code from how to put a image as a background in wxpython)\nYou can even show the image in your terminal using timg:\nimport timg\n\nobj = timg.Renderer() \nobj.load_image_from_file(\"file\")\nobj.render(timg.SixelMethod)\n\n(PyPI: https://pypi.org/project/timg)\n", "2022:\nimport os\n\nos.open(\"filename.png\")\n\nIt will open the filename.png in a window using default image viewer.\n", "You can use the following code:\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\n%matplotlib inline\nimg = mpimg.imread('FILEPATH/FILENAME.jpg')\nimgplot = plt.imshow(img)\nplt.axis('off')\nplt.show()\n\n", "Displaying images in console using Python\nFor this you will need a library called ascii_magic\nInstallation : pip install ascii_magic\nSample Code :\nimport ascii_magic\n\nimg = ascii_magic.from_image_file(\"Image.png\")\nresult = ascii_magic.to_terminal(img)\n\n\nReference : Ascii_Magic\n", "The easiest way to display an image from a console script is to open it in a web browser using webbrowser standard library module.\n\nNo additional packages need to be installed\n\nWorks across different operating systems\n\nOn macOS, webbrowser directly opens Preview app if you pass it an image file\n\n\nHere is an example, tested on macOS.\n import webbrowser\n\n # Generate PNG file\n path = Path(\"/tmp/test-image.png\")\n with open(path, \"wb\") as out:\n out.write(png_data)\n\n # Test the image on a local screen\n # using a web browser.\n # Path URL format may vary across different 
operating systems,\n # consult Python manual for details.\n webbrowser.open(f\"file://{path.as_posix()}\")\n\n" ]
[ 77, 12, 10, 8, 7, 6, 6, 5, 3, 2, 1, 1, 0, 0, 0 ]
[]
[]
[ "image", "python" ]
stackoverflow_0001413540_image_python.txt
Q: PyQt6: DLL load failed while importing QtGui: The specified procedure could not be found Windows 10 PyCharm Python 3.9.0 I installed PyQt6 and then pyqt6-tools in PyCharm throw the File->Settings. Now when i run my program i am getting the following error in the terminal from PyQt6.QtWidgets import QApplication, QWidget PyQt6: DLL load failed while importing QtGui: The specified procedure could not be found. My program code import sys from PyQt6.QtWidgets import QApplication, QWidget app = QApplication(sys.args) window = QWidget() window.show() app.exec() How can i solve this problem? A: The following command helped me solve this problem: pip install --upgrade PyQt6
PyQt6: DLL load failed while importing QtGui: The specified procedure could not be found
Windows 10 PyCharm Python 3.9.0 I installed PyQt6 and then pyqt6-tools in PyCharm throw the File->Settings. Now when i run my program i am getting the following error in the terminal from PyQt6.QtWidgets import QApplication, QWidget PyQt6: DLL load failed while importing QtGui: The specified procedure could not be found. My program code import sys from PyQt6.QtWidgets import QApplication, QWidget app = QApplication(sys.args) window = QWidget() window.show() app.exec() How can i solve this problem?
[ "The following command helped me solve this problem:\npip install --upgrade PyQt6\n\n" ]
[ 0 ]
[]
[]
[ "import", "pyqt", "pyqt6", "python" ]
stackoverflow_0074512247_import_pyqt_pyqt6_python.txt
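One extra note on the question above, separate from the DLL problem: the snippet passes sys.args, but the attribute is sys.argv. A minimal corrected version for reference:

import sys
from PyQt6.QtWidgets import QApplication, QWidget

app = QApplication(sys.argv)
window = QWidget()
window.show()
sys.exit(app.exec())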
Q: Best way to find a mismatched value that can exist in different locations in a nested dictionary So I have a dictionary that looks something like the following: { "tigj09j32f0j2": { "car": { "lead": { "version": "1.1" } }, "bike": { "lead": { "version": "2.2" } }, "jet_ski": { "lead": { "version": "3.3" } } }, "fj983j2r9jfjf": { "car": { "lead": { "version": "1.1" } }, "bike": { "lead": { "version": "2.3" } }, "jet_ski": { "lead": { "version": "3.3" } } } } The number of different dictionaries that contain car, bike and jet_ski can be huge and not just two as in my example. The number of different vehicle types can also be much larger. My goal is to find a mismatch in a given type of vehicle version between the different dictionaries. For example for bike the version is different between the two dictionaries. The way I currently do it is by iterating through all sub-dictionaries in my dictionary and then looking for the version. I save the version in a class dictionary that contains the vehicle type and version and then start comparing to it. I am sure there is a much more elegant and pythonic way to go about this and would appreciate any feedback! Here is more or less what I am doing: def is_version_issue(vehicle_type: str, object_json: dict): issue = False for object_id in object_json: current_object = object_json.get(object_id) if vehicle_type in current_object: current_vehicle_version = current_object.get(vehicle_type).get("lead").get("version") # vehicles is a class dictionary that contains the vehicles I am looking for if self.vehicles[vehicle_type]: if self.vehicles[vehicle_type] == current_vehicle_version: issue = False continue else: return True self.vehicles[vehicle_type] = current_vehicle_version issue = False return issue A: Well, your solution is not bad. There are a few things I would suggest to improve. Iterate over sub-dictionaries directly You don't seem to use the keys (object_id) at all, so you might as well iterate via dict.values. No need for the issue variable You can just return your flag once an "issue" is found and otherwise return the opposite at the end of the loop. Reduce indentation Use continue in the loop, if the vehicle_type is not present to reduce indentation. Decide what assumptions are sensible If you know that each vehicle sub-dictionary will have the lead key and the one below that will have the version key (which you imply by using dict.get multiple times without checking for None first), just use regular dictionary subscript notation ([]). No need for the class dictionary If you are checking a specific vehicle type when calling your function anyway, there is no need for that dictionary (as far as I can tell). You just need a local variable to hold the last known version number for that type. Semantics This may be a matter of personal preference, but I would design the function to return True, if everything is "fine" and False if there is a mismatch somewhere. Specify type arguments If you already take the time to use type annotations, you should take the time to specify your generics properly. Granted, in this case it may become unwieldy, if your dictionary nesting gets much deeper, but in that case you can still at least use dict[str, Any]. Use constants for repeating keys To reduce the room for error, I like to define constants for strings that have fixed meaning in my code and are used repeatedly. That schema seems to be more or less fixed, so you can define the keys once and then use the constants throughout the code. 
This has the added benefit that it will be very easy to fix, if the schema does change for some reason and one of the keys is renamed (e.g. from version to ver or something like that). Obviously, in this super simple situation this is overkill, but if you refer to the same keys in more places throughout your code, I highly suggest adopting this practice. Suggested implementation KEY_LEAD = "lead" KEY_VERSION = "version" def versions_consistent( vehicle_type: str, data: dict[str, dict[str, dict[str, dict[str, str]]]] ) -> bool: version_found: str | None = None for vehicles in data.values(): vehicle = vehicles.get(vehicle_type) if vehicle is None: continue if version_found is None: version_found = vehicle[KEY_LEAD][KEY_VERSION] elif version_found != vehicle[KEY_LEAD][KEY_VERSION]: return False return True Bonus You might consider performing an additional check at the end, to see if version_found is still None. That might indicate that an invalid vehicle_type was passed (e.g. due to a typo). In that case you could raise an exception. As an alternative, if you know the vehicle types in advance, you can avoid this by defining them again as constants ahead of time and then checking at the beginning of the function, if a valid type was passed. Finally, you could consider not just returning a bool, but actually saving the mismatches/inconsistencies in some data structure and returning that to indicate which IDs had which versions for a specified vehicle type. So it could also look something like this: ALLOWED_VEHICLES = {"car", "bike", "jet_ski"} def get_version_id_mapping( vehicle_type: str, data: dict[str, dict[str, dict[str, dict[str, str]]]] ) -> dict[str, set[str]]: if vehicle_type not in ALLOWED_VEHICLES: raise ValueError(f"{vehicle_type} is not a valid vehicle type") version_id_map: dict[str, set[str]] = {} for obj_id, vehicles in data.items(): vehicle = vehicles.get(vehicle_type) if vehicle is None: continue ids = version_id_map.setdefault(vehicle["lead"]["version"], set()) ids.add(obj_id) return version_id_map Calling get_version_id_mapping("bike", d) (d being your example data) gives the following: {'2.2': {'tigj09j32f0j2'}, '2.3': {'fj983j2r9jfjf'}} Calling it for jet_ski gives this: {'3.3': {'fj983j2r9jfjf', 'tigj09j32f0j2'}} So by checking the length of the output dictionary, you would see, if there is an inconsistency (length > 1) or not. 
Bonus 2 Come to think of it, if you want to do this check for every type of vehicle for the entire dataset anyway, this can all be done in one go: def vehicle_type_versions( data: dict[str, dict[str, dict[str, dict[str, str]]]] ) -> dict[str, dict[str, set[str]]]: output: dict[str, dict[str, set[str]]] = {} for obj_id, vehicles in data.items(): for vehicle_type, vehicle_data in vehicles.items(): sub_dict = output.setdefault(vehicle_type, {}) ids = sub_dict.setdefault(vehicle_data["lead"]["version"], set()) ids.add(obj_id) return output Calling this on your example data yield the following output: {'bike': {'2.2': {'tigj09j32f0j2'}, '2.3': {'fj983j2r9jfjf'}}, 'car': {'1.1': {'fj983j2r9jfjf', 'tigj09j32f0j2'}}, 'jet_ski': {'3.3': {'fj983j2r9jfjf', 'tigj09j32f0j2'}}} A: def is_version_issue(vehicle_type: str, object_json: dict): current_object = object_json[object_id] for object_id in current_object: if vehicle_type in object_json[object_id]: current_vehicle_version = current_object[vehicle_type]["lead"]["version"] # vehicles is a class dictionary that contains the vehicles I am looking for if self.vehicles[vehicle_type]: if self.vehicles[vehicle_type] != current_vehicle_version: return True return False No I think this is the most logical way I think you have some redundancy but this makes sense to me. Aslo some other problems with you not being clear about how you are using the dict. I would also not use the temp current_ variables but they are good if you are using a debugger.
Best way to find a mismatched value that can exist in different locations in a nested dictionary
So I have a dictionary that looks something like the following: { "tigj09j32f0j2": { "car": { "lead": { "version": "1.1" } }, "bike": { "lead": { "version": "2.2" } }, "jet_ski": { "lead": { "version": "3.3" } } }, "fj983j2r9jfjf": { "car": { "lead": { "version": "1.1" } }, "bike": { "lead": { "version": "2.3" } }, "jet_ski": { "lead": { "version": "3.3" } } } } The number of different dictionaries that contain car, bike and jet_ski can be huge and not just two as in my example. The number of different vehicle types can also be much larger. My goal is to find a mismatch in a given type of vehicle version between the different dictionaries. For example for bike the version is different between the two dictionaries. The way I currently do it is by iterating through all sub-dictionaries in my dictionary and then looking for the version. I save the version in a class dictionary that contains the vehicle type and version and then start comparing to it. I am sure there is a much more elegant and pythonic way to go about this and would appreciate any feedback! Here is more or less what I am doing: def is_version_issue(vehicle_type: str, object_json: dict): issue = False for object_id in object_json: current_object = object_json.get(object_id) if vehicle_type in current_object: current_vehicle_version = current_object.get(vehicle_type).get("lead").get("version") # vehicles is a class dictionary that contains the vehicles I am looking for if self.vehicles[vehicle_type]: if self.vehicles[vehicle_type] == current_vehicle_version: issue = False continue else: return True self.vehicles[vehicle_type] = current_vehicle_version issue = False return issue
[ "Well, your solution is not bad. There are a few things I would suggest to improve.\nIterate over sub-dictionaries directly\nYou don't seem to use the keys (object_id) at all, so you might as well iterate via dict.values.\nNo need for the issue variable\nYou can just return your flag once an \"issue\" is found and otherwise return the opposite at the end of the loop.\nReduce indentation\nUse continue in the loop, if the vehicle_type is not present to reduce indentation.\nDecide what assumptions are sensible\nIf you know that each vehicle sub-dictionary will have the lead key and the one below that will have the version key (which you imply by using dict.get multiple times without checking for None first), just use regular dictionary subscript notation ([]).\nNo need for the class dictionary\nIf you are checking a specific vehicle type when calling your function anyway, there is no need for that dictionary (as far as I can tell). You just need a local variable to hold the last known version number for that type.\nSemantics\nThis may be a matter of personal preference, but I would design the function to return True, if everything is \"fine\" and False if there is a mismatch somewhere.\nSpecify type arguments\nIf you already take the time to use type annotations, you should take the time to specify your generics properly. Granted, in this case it may become unwieldy, if your dictionary nesting gets much deeper, but in that case you can still at least use dict[str, Any].\nUse constants for repeating keys\nTo reduce the room for error, I like to define constants for strings that have fixed meaning in my code and are used repeatedly. That schema seems to be more or less fixed, so you can define the keys once and then use the constants throughout the code. This has the added benefit that it will be very easy to fix, if the schema does change for some reason and one of the keys is renamed (e.g. from version to ver or something like that).\nObviously, in this super simple situation this is overkill, but if you refer to the same keys in more places throughout your code, I highly suggest adopting this practice.\nSuggested implementation\nKEY_LEAD = \"lead\"\nKEY_VERSION = \"version\"\n\ndef versions_consistent(\n vehicle_type: str,\n data: dict[str, dict[str, dict[str, dict[str, str]]]]\n) -> bool:\n version_found: str | None = None\n for vehicles in data.values():\n vehicle = vehicles.get(vehicle_type)\n if vehicle is None:\n continue\n if version_found is None:\n version_found = vehicle[KEY_LEAD][KEY_VERSION]\n elif version_found != vehicle[KEY_LEAD][KEY_VERSION]:\n return False\n return True\n\nBonus\nYou might consider performing an additional check at the end, to see if version_found is still None. That might indicate that an invalid vehicle_type was passed (e.g. due to a typo). 
In that case you could raise an exception.\nAs an alternative, if you know the vehicle types in advance, you can avoid this by defining them again as constants ahead of time and then checking at the beginning of the function, if a valid type was passed.\nFinally, you could consider not just returning a bool, but actually saving the mismatches/inconsistencies in some data structure and returning that to indicate which IDs had which versions for a specified vehicle type.\nSo it could also look something like this:\nALLOWED_VEHICLES = {\"car\", \"bike\", \"jet_ski\"}\n\ndef get_version_id_mapping(\n vehicle_type: str,\n data: dict[str, dict[str, dict[str, dict[str, str]]]]\n) -> dict[str, set[str]]:\n if vehicle_type not in ALLOWED_VEHICLES:\n raise ValueError(f\"{vehicle_type} is not a valid vehicle type\")\n version_id_map: dict[str, set[str]] = {}\n for obj_id, vehicles in data.items():\n vehicle = vehicles.get(vehicle_type)\n if vehicle is None:\n continue\n ids = version_id_map.setdefault(vehicle[\"lead\"][\"version\"], set())\n ids.add(obj_id)\n return version_id_map\n\nCalling get_version_id_mapping(\"bike\", d) (d being your example data) gives the following:\n\n{'2.2': {'tigj09j32f0j2'}, '2.3': {'fj983j2r9jfjf'}}\n\nCalling it for jet_ski gives this:\n\n{'3.3': {'fj983j2r9jfjf', 'tigj09j32f0j2'}}\n\nSo by checking the length of the output dictionary, you would see, if there is an inconsistency (length > 1) or not.\nBonus 2\nCome to think of it, if you want to do this check for every type of vehicle for the entire dataset anyway, this can all be done in one go:\ndef vehicle_type_versions(\n data: dict[str, dict[str, dict[str, dict[str, str]]]]\n) -> dict[str, dict[str, set[str]]]:\n output: dict[str, dict[str, set[str]]] = {}\n for obj_id, vehicles in data.items():\n for vehicle_type, vehicle_data in vehicles.items():\n sub_dict = output.setdefault(vehicle_type, {})\n ids = sub_dict.setdefault(vehicle_data[\"lead\"][\"version\"], set())\n ids.add(obj_id)\n return output\n\nCalling this on your example data yield the following output:\n\n{'bike': {'2.2': {'tigj09j32f0j2'}, '2.3': {'fj983j2r9jfjf'}},\n 'car': {'1.1': {'fj983j2r9jfjf', 'tigj09j32f0j2'}},\n 'jet_ski': {'3.3': {'fj983j2r9jfjf', 'tigj09j32f0j2'}}}\n\n", "def is_version_issue(vehicle_type: str, object_json: dict):\n current_object = object_json[object_id]\n for object_id in current_object:\n if vehicle_type in object_json[object_id]:\n current_vehicle_version = current_object[vehicle_type][\"lead\"][\"version\"]\n # vehicles is a class dictionary that contains the vehicles I am looking for\n if self.vehicles[vehicle_type]:\n if self.vehicles[vehicle_type] != current_vehicle_version:\n return True\n return False\n\nNo I think this is the most logical way I think you have some redundancy but this makes sense to me. Aslo some other problems with you not being clear about how you are using the dict.\nI would also not use the temp current_ variables but they are good if you are using a debugger.\n" ]
[ 2, 1 ]
[]
[]
[ "dictionary", "python" ]
stackoverflow_0074619303_dictionary_python.txt
Q: Reference parameter in figure caption with Quarto Is there a way to reference a parameter in a Quarto figure or table caption? In the example below, I am able to reference my input parameter txt in a regular text block, but not in a figure caption. In the figure caption, only the raw text is displayed: --- title: "example" format: html params: txt: "example" --- ## I can reference it in text Here I can reference an input parameter: `r params$txt` ```{r} #| label: fig-example #| fig-cap: "Example: I cannot reference a parameter using `r params$txt` or params$txt." plot(1:10, 1:10) ``` A: Try with !expr ```{r} #| label: fig-example #| fig-cap: !expr params$txt plot(1:10, 1:10) ``` -output If we need to add some text, either paste or use glue #| fig-cap: !expr glue::glue("This should be {params$txt}")
Reference parameter in figure caption with Quarto
Is there a way to reference a parameter in a Quarto figure or table caption? In the example below, I am able to reference my input parameter txt in a regular text block, but not in a figure caption. In the figure caption, only the raw text is displayed: --- title: "example" format: html params: txt: "example" --- ## I can reference it in text Here I can reference an input parameter: `r params$txt` ```{r} #| label: fig-example #| fig-cap: "Example: I cannot reference a parameter using `r params$txt` or params$txt." plot(1:10, 1:10) ```
[ "Try with !expr\n ```{r}\n #| label: fig-example\n #| fig-cap: !expr params$txt\n plot(1:10, 1:10)\n ```\n\n-output\n\n\nIf we need to add some text, either paste or use glue\n#| fig-cap: !expr glue::glue(\"This should be {params$txt}\")\n\n" ]
[ 3 ]
[]
[]
[ "python", "quarto", "r" ]
stackoverflow_0074619283_python_quarto_r.txt
Q: Django project on AWS not updating code after git pull I am deploying a Django project on AWS. I am running Postgres, Redis, Nginx as well as my project on Docker there. Everything is working fine, but when I change something on my local machine, push changes to git and then pull them on the AWS instance, the code is changing, files are updated but they are not showing on the website. Only the static files are updating automatically (I guess because of Nginx). Here is my docker-compose config: version: '3.9' services: redis: image: redis command: redis-server ports: - "6379:6379" postgres: image: postgres environment: - POSTGRES_USER= - POSTGRES_PASSWORD= - POSTGRES_DB= ports: - "5432:5432" web: image: image_name build: . restart: always command: gunicorn project.wsgi:application --bind 0.0.0.0:8000 env_file: - envs/.env.prod ports: - "8000:8000" volumes: - ./staticfiles/:/tmp/project/staticfiles depends_on: - postgres - redis nginx: image: nginx ports: - "80:80" - "443:443" volumes: - ./staticfiles:/home/app/web/staticfiles - ./nginx/conf.d:/etc/nginx/conf.d - ./nginx/logs:/var/log/nginx - ./certbot/www:/var/www/certbot/:ro - ./certbot/conf/:/etc/nginx/ssl/:ro depends_on: - web Can you please tell me what to do? I tried deleting everything from docker and compose up again but nothing happened. I looked all over in here but I still don't understand... instance restart is not working as well. I tried cleaning redis cache because I have template caching and still nothing. A: After updating the code on the EC2 instance, you need to build a new web docker image from that new code. If you are just restarting things then docker-compose is going to continue to pick up the last docker image you built. You need to run the following sequence of commands (on the EC2 instance): docker-compose build web docker-compose up -d You are seeing the static files change immediately, without rebuilding the docker image, because you are mapping to those files via docker volume. A: I found the issue... it was because I had template caching. If I remove the cache and do what @MarkB suggested, all is updating. I don't understand why this happens since I tried flushing all redis cache after changes but I guess it solves my issues.
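Editor's note: the second answer identifies Django's template fragment caching as the reason old content kept showing. As a minimal sketch (assuming the {% cache %} fragments live in the project's default cache backend, e.g. the Redis service from the compose file, and with an illustrative function name), the cache can be cleared after each deploy:

from django.core.cache import cache

def clear_cache_after_deploy():
    # {% cache %} template fragments are stored in the default cache backend,
    # so clearing it forces freshly deployed templates to render with new content.
    cache.clear()

In practice this could be run once per deploy with something like docker-compose exec web python manage.py shell -c "from django.core.cache import cache; cache.clear()", after rebuilding the image as the first answer describes.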
Django project on AWS not updating code after git pull
I am deploying a Django project on AWS. I am running Postgres, Redis, Nginx as well as my project on Docker there. Everything is working fine, but when I change something on my local machine, push changes to git and then pull them on the AWS instance, the code is changing, files are updated but they are not showing on the website. Only the static files are updating automatically (I guess because of Nginx). Here is my docker-compose config: version: '3.9' services: redis: image: redis command: redis-server ports: - "6379:6379" postgres: image: postgres environment: - POSTGRES_USER= - POSTGRES_PASSWORD= - POSTGRES_DB= ports: - "5432:5432" web: image: image_name build: . restart: always command: gunicorn project.wsgi:application --bind 0.0.0.0:8000 env_file: - envs/.env.prod ports: - "8000:8000" volumes: - ./staticfiles/:/tmp/project/staticfiles depends_on: - postgres - redis nginx: image: nginx ports: - "80:80" - "443:443" volumes: - ./staticfiles:/home/app/web/staticfiles - ./nginx/conf.d:/etc/nginx/conf.d - ./nginx/logs:/var/log/nginx - ./certbot/www:/var/www/certbot/:ro - ./certbot/conf/:/etc/nginx/ssl/:ro depends_on: - web Can you please tell me what to do? I tried deleting everything from docker and compose up again but nothing happened. I looked all over in here but I still don't understand... instance restart is not working as well. I tried cleaning redis cache because I have template caching and still nothing.
[ "After updating the code on the EC2 instance, you need to build a new web docker image from that new code. If you are just restarting things then docker-compose is going to continue to pick up the last docker image you built.\nYou need to run the following sequence of commands (on the EC2 instance):\ndocker-compose build web\ndocker-compose up -d\n\n\nYou are seeing the static files change immediately, without rebuilding the docker image, because you are mapping to those files via docker volume.\n", "I found the issue... it was because I had template caching.\nIf I remove the cache and do what @MarkB suggested, all is updating.\nI don't understand why this happens since I tried flushing all redis cache after changes but I guess it solves my issues.\n" ]
[ 0, 0 ]
[]
[]
[ "amazon_web_services", "django", "nginx", "python", "redis" ]
stackoverflow_0074619051_amazon_web_services_django_nginx_python_redis.txt
Q: How to fix streched colorbar with Matplotlib's TwtoSlopeNorm I have a function whose image goes from 0 to infinity, for example $f(x,y)=x^2 + y^2$. I would like to use a diverging colormap to highlight the region where the function equals 1 with a flexible colorbar. The colorbar should go from 0 to whatever vmax, white ("center") at 1, and the interval between colors should be proportional to numbers. When plotting it with no constrains, the white region placement depends on the vmin-vmax range. I would like to have it fixed at 1, vmin cannot be less than 0. I partially solve the problem using Matplotloib's TwtoSlopeNorm setting the center to 1, whatever the vmin-vmax range is. But this causes the lower part of the colorbar to take as much space as the upper part, which is not correct. The colorbar is stretched. Trying spacing="proportional" does nothing. How can I have the colorbar use as much space for the 0-1 range as it does for the other unit intervals, while fixing the white to 1? import numpy as np import matplotlib.pyplot as plt import matplotlib.colors as colors import matplotlib.cbook as cbook from matplotlib import cm delta = 0.01 x = np.arange(0, 2.001, delta) y = np.arange(0.0, 2.001, delta) X, Y = np.meshgrid(x, y) Z = X**2 + Y**2 cmap = cm.seismic fig = plt.figure(figsize=(16,5)) ax = fig.add_subplot(131) plt.pcolormesh(Z, cmap=cmap, vmin=0, vmax=9) plt.title('White position depends on vmin-vmax range') plt.colorbar(extend='max') ax = fig.add_subplot(132) plt.pcolormesh(Z, cmap=cmap, norm=colors.TwoSlopeNorm(vmin=0, vmax=9, vcenter=1)) plt.title('White position is fixed, but colorbar is streched') plt.colorbar(extend='max') ax = fig.add_subplot(133) plt.pcolormesh(Z, cmap=cmap, norm=colors.TwoSlopeNorm(vmin=0, vmax=9, vcenter=1)) plt.title('spacing="proportional" does not work') plt.colorbar(extend='max', spacing='proportional') plt.show() A: Full answer is here import numpy as np import matplotlib.pyplot as plt import matplotlib.colors as colors from matplotlib import cm delta = 0.01 x = np.arange(0, 4.001, delta) y = np.arange(0.0, 4.001, delta) X, Y = np.meshgrid(x, y) Z = X**2 + Y**2 fig = plt.figure(figsize=(6,3)) ax = fig.add_subplot(121) m1 = plt.contourf(Z, cmap=cm.seismic, levels=np.arange(0,33,1), norm=colors.TwoSlopeNorm(vmin=0, vmax=32, vcenter=1), extend='max') plt.colorbar(m1) ax = fig.add_subplot(122) m1 = plt.pcolormesh(Z, cmap=cm.seismic, norm=colors.TwoSlopeNorm(vmin=0, vmax=32, vcenter=1)) cb = plt.colorbar(m1, extend='max', spacing='proportional') cb.ax.set_yscale('linear') plt.tight_layout() plt.show()
How to fix stretched colorbar with Matplotlib's TwoSlopeNorm
I have a function whose image goes from 0 to infinity, for example $f(x,y)=x^2 + y^2$. I would like to use a diverging colormap to highlight the region where the function equals 1 with a flexible colorbar. The colorbar should go from 0 to whatever vmax, white ("center") at 1, and the interval between colors should be proportional to the numbers. When plotting it with no constraints, the white region placement depends on the vmin-vmax range. I would like to have it fixed at 1; vmin cannot be less than 0. I partially solved the problem using Matplotlib's TwoSlopeNorm, setting the center to 1 whatever the vmin-vmax range is. But this causes the lower part of the colorbar to take as much space as the upper part, which is not correct. The colorbar is stretched. Trying spacing="proportional" does nothing. How can I have the colorbar use as much space for the 0-1 range as it does for the other unit intervals, while fixing the white to 1? import numpy as np import matplotlib.pyplot as plt import matplotlib.colors as colors import matplotlib.cbook as cbook from matplotlib import cm delta = 0.01 x = np.arange(0, 2.001, delta) y = np.arange(0.0, 2.001, delta) X, Y = np.meshgrid(x, y) Z = X**2 + Y**2 cmap = cm.seismic fig = plt.figure(figsize=(16,5)) ax = fig.add_subplot(131) plt.pcolormesh(Z, cmap=cmap, vmin=0, vmax=9) plt.title('White position depends on vmin-vmax range') plt.colorbar(extend='max') ax = fig.add_subplot(132) plt.pcolormesh(Z, cmap=cmap, norm=colors.TwoSlopeNorm(vmin=0, vmax=9, vcenter=1)) plt.title('White position is fixed, but colorbar is stretched') plt.colorbar(extend='max') ax = fig.add_subplot(133) plt.pcolormesh(Z, cmap=cmap, norm=colors.TwoSlopeNorm(vmin=0, vmax=9, vcenter=1)) plt.title('spacing="proportional" does not work') plt.colorbar(extend='max', spacing='proportional') plt.show()
[ "Full answer is here\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.colors as colors\nfrom matplotlib import cm\n\ndelta = 0.01\nx = np.arange(0, 4.001, delta)\ny = np.arange(0.0, 4.001, delta)\nX, Y = np.meshgrid(x, y)\nZ = X**2 + Y**2\n\nfig = plt.figure(figsize=(6,3))\nax = fig.add_subplot(121)\nm1 = plt.contourf(Z, cmap=cm.seismic, levels=np.arange(0,33,1), norm=colors.TwoSlopeNorm(vmin=0, vmax=32, vcenter=1), extend='max')\nplt.colorbar(m1)\n\nax = fig.add_subplot(122)\nm1 = plt.pcolormesh(Z, cmap=cm.seismic, norm=colors.TwoSlopeNorm(vmin=0, vmax=32, vcenter=1))\ncb = plt.colorbar(m1, extend='max', spacing='proportional')\ncb.ax.set_yscale('linear')\n\nplt.tight_layout()\nplt.show()\n\n\n" ]
[ 0 ]
[]
[]
[ "matplotlib", "python" ]
stackoverflow_0074613524_matplotlib_python.txt
Q: Can someone add and explain NEAT alogrithm to simple game I can't get the NEAT algo. Need someone to take my simple game made for human and add NEAT to it using NEAT-python library. Game - neural network must write a number that should be close to the randomly generated number in each round. Closer guess = better score and higher fitness. If you select randomly generated number you get punishment. Simple, right? But can't understand how to implement the NEAT. Here in the code the smaller the number the better. Here is the code: import os import math import sys import neat import random #idea = closer user input to the random number the better; #if input of the user is the number itself it is bad; class Agent(): def __init__(self): super().__init__() self.my_fitness = 0 def input(self): user_input = int(input()) if target_n[0] > user_input: dif = target_n[0] - user_input else: dif = user_input - target_n[0] self.my_fitness = self.my_fitness - dif bro1 = Agent() #make Agent def eval_genomes(): #main loop run = True while run == True: #set up global target_n target_n = random.sample(range(1,100),1) print(target_n,'is the target') #input bro1.input() print() print('new round') print(bro1.my_fitness) print() eval_genomes() #run the loop How to implement NEAT? A: I am also just starting my deep-in NL and interested in learning. First: Would recommend first to get in details with 2 good examples that are well implemented and explained: Flappy Bird 2. Google Dinosaur. Both have game code + NEAT implementation tied together. Google search will give plenty of examples. Second: as per your question. Per my mind no any NL would be able to solve it in such example. Just because there are no 'good and enough' input data. It will always be just 50/50 per poor probability theory. In order to make NEAT work, need to have some input data and some type of output data, and that output data should be a consequence of a 'choise' based on input data, hope it not get too complicated, but in one word should be some kind of dependency of input data and actions that should be taken to achieve valuable output data. More exactly: in your case, as input data you have a choosed random number - that ok. But you have no any data, parameters based on which that random number is actually choosed. This data needed in order to let NEAT create genomes that will make tries, calculations, fits...in order to find best parameters for its model, to get a good guess. That why NL model can not be constructed. Hope all makes sense and happy to discuss if interested.
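Editor's note: to make the answer's point about input/output dependency concrete, here is a minimal, hypothetical neat-python skeleton for a guessing-style task. The config file name, the choice of input (the round's target scaled to 0-1) and the fitness formula are illustrative assumptions, not the asker's setup, and a matching NEAT config file (num_inputs=1, num_outputs=1) would still be needed. Note that feeding the target in as the input is what creates the dependency between inputs and outputs that the answer says the original game lacks; without it, the network has nothing to learn from.

import random
import neat

def eval_genomes(genomes, config):
    # Each genome gets a network; fitness rewards guesses close to the target.
    for genome_id, genome in genomes:
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        genome.fitness = 0.0
        for _ in range(20):  # several rounds per genome
            target = random.randint(1, 100)
            # Input: the target scaled to [0, 1]; output (sigmoid by default) scaled back up.
            guess = net.activate([target / 100.0])[0] * 100.0
            genome.fitness -= abs(target - guess)  # smaller error means higher fitness

config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                     neat.DefaultSpeciesSet, neat.DefaultStagnation,
                     "neat_config.txt")  # hypothetical config file
population = neat.Population(config)
winner = population.run(eval_genomes, 50)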
Can someone add and explain the NEAT algorithm for a simple game
I can't get the NEAT algo. Need someone to take my simple game made for human and add NEAT to it using NEAT-python library. Game - neural network must write a number that should be close to the randomly generated number in each round. Closer guess = better score and higher fitness. If you select randomly generated number you get punishment. Simple, right? But can't understand how to implement the NEAT. Here in the code the smaller the number the better. Here is the code: import os import math import sys import neat import random #idea = closer user input to the random number the better; #if input of the user is the number itself it is bad; class Agent(): def __init__(self): super().__init__() self.my_fitness = 0 def input(self): user_input = int(input()) if target_n[0] > user_input: dif = target_n[0] - user_input else: dif = user_input - target_n[0] self.my_fitness = self.my_fitness - dif bro1 = Agent() #make Agent def eval_genomes(): #main loop run = True while run == True: #set up global target_n target_n = random.sample(range(1,100),1) print(target_n,'is the target') #input bro1.input() print() print('new round') print(bro1.my_fitness) print() eval_genomes() #run the loop How to implement NEAT?
[ "I am also just starting my deep-in NL and interested in learning.\nFirst:\nWould recommend first to get in details with 2 good examples that are well implemented and explained:\n\nFlappy Bird 2. Google Dinosaur.\nBoth have game code + NEAT implementation tied together.\nGoogle search will give plenty of examples.\n\nSecond: as per your question.\nPer my mind no any NL would be able to solve it in such example.\nJust because there are no 'good and enough' input data. It will always be just 50/50 per poor probability theory.\nIn order to make NEAT work, need to have some input data and some type of output data, and that output data should be a consequence of a 'choise' based on input data, hope it not get too complicated, but in one word should be some kind of dependency of input data and actions that should be taken to achieve valuable output data.\nMore exactly:\nin your case, as input data you have a choosed random number - that ok. But you have no any data, parameters based on which that random number is actually choosed.\nThis data needed in order to let NEAT create genomes that will make tries, calculations, fits...in order to find best parameters for its model, to get a good guess.\nThat why NL model can not be constructed.\nHope all makes sense and happy to discuss if interested.\n" ]
[ 0 ]
[]
[]
[ "neat", "python" ]
stackoverflow_0070245102_neat_python.txt
Q: IDA - execute commands in the WinDbg console from a Python script I have a Python script to run in IDA that generates commands for WinDbg. I also open the memory dump (via the windmp64.dll loader), where the WinDbg console is already available: I want to execute commands in the WinDbg console from a Python script. If I'm right, I need something like ida_expr.exec_idc_script() but for WinDbg. A: ida_dbg.send_dbg_command() is exactly what I needed.
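Editor's note: a short sketch of how the accepted ida_dbg.send_dbg_command() call is typically used from IDAPython once the WinDbg backend (windmp64) is active. The commands are arbitrary examples, and the exact return value can differ between IDA versions, so treat this as an assumption to verify against your IDA's documentation.

import ida_dbg

# Hypothetical commands; in the asker's case these would come from the generating script.
for cmd in ("lm", "!analyze -v"):
    output = ida_dbg.send_dbg_command(cmd)  # runs the command in the WinDbg console
    print("--- %s ---" % cmd)
    print(output)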
IDA - execute commands in the WinDbg console from a Python script
I have a Python script to run in IDA that generates commands for WinDbg. I also open the memory dump (via the windmp64.dll loader), where the WinDbg console is already available: I want to execute commands in the WinDbg console from a Python script. If I'm right, I need something like ida_expr.exec_idc_script() but for WinDbg.
[ "ida_dbg.send_dbg_command() is exactly what I needed.\n" ]
[ 0 ]
[]
[]
[ "console", "ida", "python", "scripting", "windbg" ]
stackoverflow_0074586201_console_ida_python_scripting_windbg.txt
Q: how to add pagination in Django? I want to apply pagination on my data I tried to watch lots of videos and read lots of articles but still can't solve my problem. This is my Views. def car(request): all_products = None all_category = category.get_all_category() categoryid = request.GET.get('category') if categoryid: all_products = Product.get_all_products_by_id(categoryid) else: all_products = Product.get_all_products() data = {} data['products'] = all_products # all products data['category'] = all_category # all category all_products = Product.get_all_products() data['product'] = all_products ] return render(request, 'car.html', data) as you can see I made some changes in above code but its make no diffrence def car(request): all_products = None all_category = category.get_all_category() categoryid = request.GET.get('category') if categoryid: all_products = Product.get_all_products_by_id(categoryid) else: all_products = Product.get_all_products() #pagination paginator = Paginator(all_products,2) **Changes** page_number=request.GET.get('page') **Changes** finaldata=paginator.get_page(page_number) **Changes** data = {'all_products':finaldata,} **Changes** data['products'] = all_products #all products data['category'] = all_category #all category all_products = Product.get_all_products() data['product'] = all_products return render(request, 'car.html', data) I want to display 4 products per page I tried to apply data limit query that work but that not a genuine approach to display data. I read many articles and watch YouTube video. but can't find any solution. which videos and articles I watched there pagination method is totally different they use pagination with objects.all method to get all data and I used .get method to get data I think that is my problem. and second thing is that they just working with simple data to paginate but in my case it's so complicated. I tried alot please guide. I got stuck in solving a problem for 5 days now. I am convinced that I'm not a good programmer. I tried a lot but I can't succussed. A: My problem is almost solve there is one issue. I can't get the next, previous and last option in pagination but soon I'll figure it out. this is my views.py file coding def car(request): all_products = None all_category = category.get_all_category() categoryid = request.GET.get('category') if categoryid: all_products = Product.get_all_products_by_id(categoryid) else: all_products = Product.get_all_products() #pagination paginator = Paginator(all_products,3) page_number=request.GET.get('page') finaldata=paginator.get_page(page_number) totalpage=finaldata.paginator.num_pages data = {'all_products':finaldata, 'lastpage':totalpage, 'totalPageList':[n+1 for n in range(totalpage)]} data['products'] = finaldata #all products data['category'] = all_category #all category all_products = Product.get_all_products() data['product'] = all_products return render(request, 'car.html', data) This is my html coding my html file screenshot I'm sharing html screenshot due to this error Your post appears to contain code that is not properly formatted as code. Please indent all code by 4 spaces using the code toolbar button or the CTRL+K keyboard shortcut. For more editing help, click the [?] toolbar icon. It looks like your post is mostly code; please add some more details. 
I'm not familiar with stack overflow so I don't know what to do so I shared screenshot when I find any solution for first previous and last option then soon I'll share that here A: Based on pagination documentation, it would be something similar to this: (Always explore the framework documentation, it is the best source to learn) views.py from django.core.paginator import Paginator from django.shortcuts import render def car(request, category_id=None): products = Products.objects.all() if category_id: products = Product.objects.filter(category__id=category_id) paginator = Paginator(products, 3) page_number = request.GET.get('page') page_obj = paginator.get_page(page_number) context = { 'page_obj': page_obj } return render(request, 'car.html', context) tempate.html (where you access previous and next pages, and total number of pages) {% for car in page_obj %} {{ car.category }}<br> ... {% endfor %} <div class="pagination"> <span class="step-links"> {% if page_obj.has_previous %} <a href="?page=1">&laquo; first</a> <a href="?page={{ page_obj.previous_page_number }}">previous</a> {% endif %} <span class="current"> Page {{ page_obj.number }} of {{ page_obj.paginator.num_pages }}. </span> {% if page_obj.has_next %} <a href="?page={{ page_obj.next_page_number }}">next</a> <a href="?page={{ page_obj.paginator.num_pages }}">last &raquo;</a> {% endif %} </span> </div> urls.py (you want to pass category_id, if you don't it will be none) path('cars/category/<int:category_id>/', views.car, name='car'),
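Editor's note: the second answer's template block already covers the first/previous/next/last links the first answerer was still missing. If many pages are involved, Django 3.2+ also offers Paginator.get_elided_page_range; a small sketch (reusing the second answer's variable names) is:

# In the view, after building page_obj:
elided = paginator.get_elided_page_range(page_obj.number, on_each_side=2, on_ends=1)
context["page_range"] = list(elided)  # e.g. [1, Paginator.ELLIPSIS, 5, 6, 7, ...]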
how to add pagination in Django?
I want to apply pagination to my data. I tried watching lots of videos and reading lots of articles but still can't solve my problem. This is my view: def car(request): all_products = None all_category = category.get_all_category() categoryid = request.GET.get('category') if categoryid: all_products = Product.get_all_products_by_id(categoryid) else: all_products = Product.get_all_products() data = {} data['products'] = all_products # all products data['category'] = all_category # all category all_products = Product.get_all_products() data['product'] = all_products ] return render(request, 'car.html', data) As you can see I made some changes in the code above (marked **Changes**) but it makes no difference: def car(request): all_products = None all_category = category.get_all_category() categoryid = request.GET.get('category') if categoryid: all_products = Product.get_all_products_by_id(categoryid) else: all_products = Product.get_all_products() #pagination paginator = Paginator(all_products,2) **Changes** page_number=request.GET.get('page') **Changes** finaldata=paginator.get_page(page_number) **Changes** data = {'all_products':finaldata,} **Changes** data['products'] = all_products #all products data['category'] = all_category #all category all_products = Product.get_all_products() data['product'] = all_products return render(request, 'car.html', data) I want to display 4 products per page. I tried applying a data limit query, which works, but that is not a genuine approach to displaying the data. I read many articles and watched YouTube videos but can't find any solution. The pagination method in the videos and articles I watched is totally different: they paginate with the objects.all method to get all the data, whereas I use a .get method to get my data, and I think that is my problem. The second thing is that they are only paginating simple data, while in my case it is more complicated. Please guide me; I have been stuck on this problem for 5 days now. I am starting to think I'm not a good programmer. I have tried a lot but I can't succeed.
[ "My problem is almost solve there is one issue. I can't get the next, previous and last option in pagination but soon I'll figure it out.\nthis is my views.py file coding\ndef car(request):\n all_products = None \n all_category = category.get_all_category()\n categoryid = request.GET.get('category')\n if categoryid:\n all_products = Product.get_all_products_by_id(categoryid)\n else:\n all_products = Product.get_all_products()\n #pagination\n paginator = Paginator(all_products,3)\n page_number=request.GET.get('page')\n finaldata=paginator.get_page(page_number)\n\n totalpage=finaldata.paginator.num_pages\n\n \n data = {'all_products':finaldata,\n 'lastpage':totalpage,\n 'totalPageList':[n+1 for n in range(totalpage)]}\n\n data['products'] = finaldata #all products\n\n \n data['category'] = all_category #all category\n all_products = Product.get_all_products()\n data['product'] = all_products\n \n return render(request, 'car.html', data)\n\nThis is my html coding\nmy html file screenshot\nI'm sharing html screenshot due to this error\nYour post appears to contain code that is not properly formatted as code. Please indent all code by 4 spaces using the code toolbar button or the CTRL+K keyboard shortcut. For more editing help, click the [?] toolbar icon.\nIt looks like your post is mostly code; please add some more details.\nI'm not familiar with stack overflow so I don't know what to do so I shared screenshot\nwhen I find any solution for first previous and last option then soon I'll share that here\n", "Based on pagination documentation, it would be something similar to this:\n(Always explore the framework documentation, it is the best source to learn)\nviews.py\nfrom django.core.paginator import Paginator\nfrom django.shortcuts import render\n\ndef car(request, category_id=None):\n products = Products.objects.all() \n\n if category_id:\n products = Product.objects.filter(category__id=category_id)\n\n paginator = Paginator(products, 3)\n\n page_number = request.GET.get('page')\n page_obj = paginator.get_page(page_number)\n\n context = {\n 'page_obj': page_obj\n }\n\n return render(request, 'car.html', context)\n\ntempate.html (where you access previous and next pages, and total number of pages)\n{% for car in page_obj %}\n {{ car.category }}<br>\n ...\n{% endfor %}\n\n<div class=\"pagination\">\n <span class=\"step-links\">\n {% if page_obj.has_previous %}\n <a href=\"?page=1\">&laquo; first</a>\n <a href=\"?page={{ page_obj.previous_page_number }}\">previous</a>\n {% endif %}\n\n <span class=\"current\">\n Page {{ page_obj.number }} of {{ page_obj.paginator.num_pages }}.\n </span>\n\n {% if page_obj.has_next %}\n <a href=\"?page={{ page_obj.next_page_number }}\">next</a>\n <a href=\"?page={{ page_obj.paginator.num_pages }}\">last &raquo;</a>\n {% endif %}\n </span>\n</div>\n\nurls.py (you want to pass category_id, if you don't it will be none)\npath('cars/category/<int:category_id>/', views.car, name='car'),\n\n" ]
[ 0, 0 ]
[]
[]
[ "django", "pagination", "python" ]
stackoverflow_0074615456_django_pagination_python.txt
Q: Can I add string to zipped lists? I am 4 classes into my first programming class and I am stumped. I am wondering if I am able to add strings to three zipped lists? For instance, I need to add the following format: '+' department_name, (department_number, product_variable) where department_name, department_number, product_variable are the separate lists zipped together and I need to add the + at the beginning and parenthesis around the two lists. This is what I have: output = zip(department_name, department_number, product_variable) for department_name, department_number, product_variable in zip(department_name, department_number, product_variable): print (department_name, department_number, product_variable) Any ideas? A: You may need string formatting. department_names = ["n1", "n2", "n3", "n4"] department_numbers = [1, 2, 3, 4] product_variables = ["p1", "p2", "p3", "p4"] for department_name, department_number, product_variable in zip(department_names, department_numbers, product_variables): print(f"'+' {department_name}, ({department_number}, {product_variable})") Output: '+' n1, (1, p1) '+' n2, (2, p2) '+' n3, (3, p3) '+' n4, (4, p4) A: l1= [ {'id': '2', 'name': 'Sou', 'Medicine': 'Paracetamol'}, {'id': '2', 'name': 'Sou', 'Medicine': 'Supradyn'}, {'id': '3', 'name': 'Roopa', 'Medicine': 'Revital'}, {'id': '3','name': 'Roopa', 'Medicine': 'Pain killer'}, {'id': '4','name': 'Tabu', 'Medicine': 'Vitamin C'} ] def mergeListOfDictionaries1(l1): dn = {} for d in l1: if d['id'] in dn: if isinstance(dn[d['id']]['Medicine'],list): dn[d['id']]['Medicine'].append(d['Medicine']) else: dn[d['id']]['Medicine'] = [dn[d['id']]['Medicine'],d['Medicine'] ] else: dn[d['id']] = d return list(dn.values()) aa= mergeListOfDictionaries1(l1) print(aa) """ [{'id': '2', 'name': 'Sou', 'Medicine': ['Paracetamol', 'Supradyn']}, {'id': '3', 'name': 'Roopa', 'Medicine': ['Revital', 'Pain killer']}, {'id': '4', 'name': 'Tabu', 'Medicine': 'Vitamin C'}] """ import itertools, pandas as pd l1 = ["n1", "n2", "n3", "n4"] l2 = [1, 2, 3, 4] l3 = ["p1", "p2", "p3", "p4"] nest = [l1,l2,l3] cc = pd.DataFrame( (x for x in itertools.zip_longest(*nest)) ) print(cc) """ Output : 0 1 2 0 A 1 P 1 B 2 Q 2 C 3 R """ for l1,l2,l3 in zip(l1,l2,l3): print(l1 , '+',l2 , '+', l3) """ Print all and put a plus sign in between so that the output becomes A + 1 + P B + 2 + Q C + 3 + R D + 4 + S """ Now If we want to zip and Print Alternate words from different lists. Output : [('A', 1, 'P'), ('B', 2, 'Q'), ('C', 3, 'R'), ('D', 4, 'S')] Then, l1 = ["A", "B", "C", "D"] l2 = [1, 2, 3, 4] l3 = ["P", "Q", "R", "S"] def zipAlternate(list1,*rest,fillvalue=None): rest = [iter(r) for r in rest] for x in list1: res = yield x, *[next(r,fillvalue) for r in rest] return res zz = zipAlternate(l1,l2,l3) zzl = list(zz) print(zzl) """ Output : [('A', 1, 'P'), ('B', 2, 'Q'), ('C', 3, 'R'), ('D', 4, 'S')] Now if few Strings are given. l1 = 'ABCDE', l2 = '12', l3 = 'PQXYZ' and we want the output as 'A1PB2QC'. 
l1 = 'ABCDE' l2 = '12' l3 = 'PQXYZ' def zipAlternateString(*allString): rest = [iter(r) for r in allString] str= '' while True: for r in rest: try : str += next(r) except StopIteration: return str return str aa = zipAlternateString(l1,l2,l3) print(aa) """ Output : A1PB2QC """ d1 = {'A': 'a', 'B': 'b'} d2 = {'A': 'c', 'B': 'd'} d3 = {'A': 'e', 'B': 'f'} aa = { k : [d1.get(k),d2.get(k),d3.get(k)] for k in d1.keys() | d2.keys() | d3.keys() } print(aa) """ {'A': ['a', 'c', 'e'], 'B': ['b', 'd', 'f']} """ l12= ['A','1','B','2'] i = iter(l12) l122 = dict(zip(i,i)) print(l122) """ {'A': '1', 'B': '2'} """ s = "banana 4 apple 2 orange 4" l1 = s.split() print(l1) """ ['banana', '4', 'apple', '2', 'orange', '4'] """ i= iter(l1) dict1 = dict(zip(i,i)) print(dict1) #{'banana': '4', 'apple': '2', 'orange': '4'}
Can I add string to zipped lists?
I am 4 classes into my first programming class and I am stumped. I am wondering if I am able to add strings to three zipped lists? For instance, I need to add the following format: '+' department_name, (department_number, product_variable) where department_name, department_number, product_variable are the separate lists zipped together and I need to add the + at the beginning and parenthesis around the two lists. This is what I have: output = zip(department_name, department_number, product_variable) for department_name, department_number, product_variable in zip(department_name, department_number, product_variable): print (department_name, department_number, product_variable) Any ideas?
[ "You may need string formatting.\ndepartment_names = [\"n1\", \"n2\", \"n3\", \"n4\"]\ndepartment_numbers = [1, 2, 3, 4]\nproduct_variables = [\"p1\", \"p2\", \"p3\", \"p4\"]\n\nfor department_name, department_number, product_variable in zip(department_names, department_numbers, product_variables):\n print(f\"'+' {department_name}, ({department_number}, {product_variable})\")\n\nOutput:\n'+' n1, (1, p1)\n'+' n2, (2, p2)\n'+' n3, (3, p3)\n'+' n4, (4, p4)\n\n", "l1= [\n {'id': '2', 'name': 'Sou', 'Medicine': 'Paracetamol'},\n {'id': '2', 'name': 'Sou', 'Medicine': 'Supradyn'},\n {'id': '3', 'name': 'Roopa', 'Medicine': 'Revital'},\n {'id': '3','name': 'Roopa', 'Medicine': 'Pain killer'},\n {'id': '4','name': 'Tabu', 'Medicine': 'Vitamin C'}\n \n]\n\ndef mergeListOfDictionaries1(l1):\n dn = {}\n for d in l1:\n if d['id'] in dn:\n if isinstance(dn[d['id']]['Medicine'],list):\n dn[d['id']]['Medicine'].append(d['Medicine'])\n else:\n dn[d['id']]['Medicine'] = [dn[d['id']]['Medicine'],d['Medicine'] ]\n \n else:\n dn[d['id']] = d\n \n return list(dn.values()) \n \naa= mergeListOfDictionaries1(l1)\nprint(aa)\n\n\"\"\"\n[{'id': '2', 'name': 'Sou', 'Medicine': ['Paracetamol', 'Supradyn']}, \n{'id': '3', 'name': 'Roopa', 'Medicine': ['Revital', 'Pain killer']}, \n{'id': '4', 'name': 'Tabu', 'Medicine': 'Vitamin C'}]\n\n\"\"\"\n\n import itertools, pandas as pd\n \n l1 = [\"n1\", \"n2\", \"n3\", \"n4\"]\n l2 = [1, 2, 3, 4]\n l3 = [\"p1\", \"p2\", \"p3\", \"p4\"]\n \n \n nest = [l1,l2,l3] \n cc = pd.DataFrame(\n (x for x in itertools.zip_longest(*nest))\n )\n print(cc)\n\"\"\"\nOutput : \n 0 1 2\n0 A 1 P\n1 B 2 Q\n2 C 3 R\n\"\"\"\n\n\n\n for l1,l2,l3 in zip(l1,l2,l3):\n print(l1 , '+',l2 , '+', l3)\n\"\"\"\nPrint all and put a plus sign in between so that the output becomes\nA + 1 + P\nB + 2 + Q\nC + 3 + R\nD + 4 + S\n\"\"\"\n\nNow If we want to zip and Print Alternate words from different lists.\nOutput :\n[('A', 1, 'P'), ('B', 2, 'Q'), ('C', 3, 'R'), ('D', 4, 'S')]\nThen,\n l1 = [\"A\", \"B\", \"C\", \"D\"]\n l2 = [1, 2, 3, 4]\n l3 = [\"P\", \"Q\", \"R\", \"S\"]\n \n \n def zipAlternate(list1,*rest,fillvalue=None):\n rest = [iter(r) for r in rest]\n for x in list1:\n res = yield x, *[next(r,fillvalue) for r in rest]\n return res \n zz = zipAlternate(l1,l2,l3)\n zzl = list(zz)\n print(zzl) \n\"\"\"\nOutput :\n[('A', 1, 'P'), ('B', 2, 'Q'), ('C', 3, 'R'), ('D', 4, 'S')]\n \n\nNow if few Strings are given.\nl1 = 'ABCDE',\nl2 = '12',\nl3 = 'PQXYZ'\nand we want the output as 'A1PB2QC'.\nl1 = 'ABCDE'\nl2 = '12'\nl3 = 'PQXYZ'\ndef zipAlternateString(*allString):\n rest = [iter(r) for r in allString]\n str= ''\n while True:\n for r in rest:\n try : \n str += next(r)\n except StopIteration:\n return str \n return str \n \naa = zipAlternateString(l1,l2,l3)\nprint(aa) \n\"\"\"\n Output :\n A1PB2QC\n\"\"\"\n\n\n\nd1 = {'A': 'a', 'B': 'b'}\nd2 = {'A': 'c', 'B': 'd'}\nd3 = {'A': 'e', 'B': 'f'}\n\naa = {\n \n k : [d1.get(k),d2.get(k),d3.get(k)] for k in d1.keys() | d2.keys() | d3.keys() \n \n }\n\nprint(aa)\n\n\"\"\"\n{'A': ['a', 'c', 'e'], 'B': ['b', 'd', 'f']}\n\n\"\"\"\n\n\nl12= ['A','1','B','2']\n\ni = iter(l12)\nl122 = dict(zip(i,i))\nprint(l122)\n\"\"\"\n{'A': '1', 'B': '2'}\n\"\"\"\n\ns = \"banana 4 apple 2 orange 4\"\n\nl1 = s.split()\nprint(l1)\n\"\"\"\n['banana', '4', 'apple', '2', 'orange', '4']\n\"\"\"\n\ni= iter(l1)\ndict1 = dict(zip(i,i))\nprint(dict1) #{'banana': '4', 'apple': '2', 'orange': '4'}\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "string", "zip" ]
stackoverflow_0073012213_python_string_zip.txt
Q: Counting elements inside an array/matrix I am struggling with what is hopefully a simple problem. Haven't been able to find a clear cut answer online. The program given, asks for a user input (n) and then produces an n-sized square matrix. The matrix will only be made of 0s and 1s. I am attempting to count the arrays (I have called this x) that contain a number, or those that do not only contain only 0s. Example output: n = 3 [0, 0, 0] [1, 0, 0] [0, 1, 0] In this case, x should = 2. n = 4 [0, 0, 0, 0] [1, 0, 0, 0] [0, 1, 0, 0] [0, 0, 0, 0] In this case, x should also be 2. def xArrayCount(MX): x = 0 count = 0 for i in MX: if i in MX[0 + count] == 0: x += 0 count += 1 else: x += 1 count += 1 return(x) Trying to count the number of 0s/1s in each index of the matrix but I am going wrong somewhere, could someone explain how this should work? (Use of extra python modules is disallowed) Thank you A: You need to count all the lists that contain at least once the number one. To do that you can't use any other module. def count_none_zero_items(matrix): count = 0 for row in matrix: if 1 in row: count += 1 return count x = [[0, 0, 0, 0], [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0]] count_none_zero_items(x) Also notice that i changed the function name to lower case as this is the convention in python. Read more about it here: Python3 Conventions Also it's worth mentioning that in python we call the variable list instead of array. A: Those look like tuples, not arrays. I tried this myself and if I change the tuples into nested arrays (adding outer brackets and commas between the single arrays), this function works: def xArrayCount(MX): x = 0 count = 0 for i in matrix: if MX[count][count] == 0: count += 1 else: x += 1 count += 1 return x
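Editor's note: the first answer's loop can also be written with builtins only (no extra modules, per the question's constraint); the function name here is just illustrative.

def count_nonzero_rows(matrix):
    # any(row) is True as soon as a row contains a 1, so this counts
    # every row that is not made up entirely of 0s.
    return sum(1 for row in matrix if any(row))

print(count_nonzero_rows([[0, 0, 0], [1, 0, 0], [0, 1, 0]]))  # 2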
Counting elements inside an array/matrix
I am struggling with what is hopefully a simple problem. I haven't been able to find a clear-cut answer online. The given program asks for a user input (n) and then produces an n-sized square matrix. The matrix will only be made of 0s and 1s. I am attempting to count the arrays (I have called this count x) that contain a 1, i.e. those that do not contain only 0s. Example output: n = 3 [0, 0, 0] [1, 0, 0] [0, 1, 0] In this case, x should = 2. n = 4 [0, 0, 0, 0] [1, 0, 0, 0] [0, 1, 0, 0] [0, 0, 0, 0] In this case, x should also be 2. def xArrayCount(MX): x = 0 count = 0 for i in MX: if i in MX[0 + count] == 0: x += 0 count += 1 else: x += 1 count += 1 return(x) I am trying to count the number of 0s/1s at each index of the matrix but I am going wrong somewhere; could someone explain how this should work? (Use of extra python modules is disallowed) Thank you
[ "You need to count all the lists that contain at least once the number one. To do that you can't use any other module.\ndef count_none_zero_items(matrix):\n count = 0\n for row in matrix:\n if 1 in row:\n count += 1\n return count\n\n\nx = [[0, 0, 0, 0], [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0]]\ncount_none_zero_items(x)\n\nAlso notice that i changed the function name to lower case as this is the convention in python. Read more about it here:\nPython3 Conventions\nAlso it's worth mentioning that in python we call the variable list instead of array.\n", "Those look like tuples, not arrays. I tried this myself and if I change the tuples into nested arrays (adding outer brackets and commas between the single arrays), this function works:\ndef xArrayCount(MX):\n x = 0\n count = 0\n\n for i in matrix:\n if MX[count][count] == 0:\n count += 1\n else:\n x += 1\n count += 1\n\n return x\n\n" ]
[ 8, 0 ]
[]
[]
[ "arrays", "python" ]
stackoverflow_0074619524_arrays_python.txt
Q: Recursive function call fails to recurse in AWS Lambda - Python3 Im trying to replace python dictionary key with a different key name recursively for which i am using aws lambda with a api endpoint to trigger. Suprisingly the recursion part fails for weird reason. The same code works fine in local. Checked cloudwatch logs. No error message get displayed there. Let me know if im missing anything here EDIT: Related to Unable to invoke a recursive function with AWS Lambda and recursive lambda function never seems to run ### function that is called inside lambda_handler def replace_recursive(data,mapping): for dict1 in data: for k,v in dict1.copy().items(): if isinstance(v,dict): dict1[k] = replace_recursive([v], mapping) try: dict1[mapping['value'][mapping['key'].index(k)]] = dict1.pop(mapping['key'][mapping['key'].index(k)]) except KeyError: continue return data ## lambda handler def lambda_handler(events,_): resp = {'statusCode': 200} parsed_events = json.loads(events['body']) if parsed_events: op = replace_recursive(parsed_events,schema) resp['body'] = json.dumps(op) return resp Input I pass: { "name": "michael", "age": 35, "family": { "name": "john", "relation": "father" } } In the output, keys in the nested dictionary are not getting updated. Only outer keys get modified A: Since you're ingesting JSON, you can do the key replacement right in the parse phase for a faster and simpler experience using the object_pairs_hook argument to json.loads. import json key_mapping = { "name": "noot", "age": "doot", "relation": "root", } def lambda_handler(events, _): replaced_events = json.loads( events["body"], object_pairs_hook=lambda pairs: dict( (key_mapping.get(k, k), v) for k, v in pairs ), ) return { "statusCode": 200, "body": json.dumps(replaced_events), } body = { "name": "michael", "age": 35, "family": {"name": "john", "relation": "father"}, } print( lambda_handler( { "body": json.dumps(body), }, None, ) ) prints out {'statusCode': 200, 'body': '{"noot": "michael", "doot": 35, "family": {"noot": "john", "root": "father"}}'}
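Editor's note: the accepted answer sidesteps the recursion by renaming keys during json.loads. For completeness, a hedged sketch of repairing the original replace_recursive is shown below; it assumes the schema has been reshaped into a plain old-key to new-key dict (unlike the question's parallel 'key'/'value' lists), and the key point is that the recursive call must mutate the nested dict in place rather than overwrite it with the list it returns.

def replace_recursive(data, mapping):
    # mapping is assumed to look like {"name": "noot", "age": "doot", "relation": "root"}
    for item in data:
        for key in list(item.keys()):
            if isinstance(item[key], dict):
                replace_recursive([item[key]], mapping)  # recurse, keep the value a dict
            if key in mapping:
                item[mapping[key]] = item.pop(key)
    return data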
Recursive function call fails to recurse in AWS Lambda - Python3
I'm trying to replace Python dictionary keys with different key names recursively, for which I am using AWS Lambda with an API endpoint as the trigger. Surprisingly, the recursion part fails for no obvious reason. The same code works fine locally. I checked the CloudWatch logs; no error message is displayed there. Let me know if I'm missing anything here. EDIT: Related to Unable to invoke a recursive function with AWS Lambda and recursive lambda function never seems to run ### function that is called inside lambda_handler def replace_recursive(data,mapping): for dict1 in data: for k,v in dict1.copy().items(): if isinstance(v,dict): dict1[k] = replace_recursive([v], mapping) try: dict1[mapping['value'][mapping['key'].index(k)]] = dict1.pop(mapping['key'][mapping['key'].index(k)]) except KeyError: continue return data ## lambda handler def lambda_handler(events,_): resp = {'statusCode': 200} parsed_events = json.loads(events['body']) if parsed_events: op = replace_recursive(parsed_events,schema) resp['body'] = json.dumps(op) return resp Input I pass: { "name": "michael", "age": 35, "family": { "name": "john", "relation": "father" } } In the output, the keys in the nested dictionary are not getting updated; only the outer keys get modified.
[ "Since you're ingesting JSON, you can do the key replacement right in the parse phase for a faster and simpler experience using the object_pairs_hook argument to json.loads.\nimport json\n\nkey_mapping = {\n \"name\": \"noot\",\n \"age\": \"doot\",\n \"relation\": \"root\",\n}\n\n\ndef lambda_handler(events, _):\n replaced_events = json.loads(\n events[\"body\"],\n object_pairs_hook=lambda pairs: dict(\n (key_mapping.get(k, k), v) for k, v in pairs\n ),\n )\n return {\n \"statusCode\": 200,\n \"body\": json.dumps(replaced_events),\n }\n\n\nbody = {\n \"name\": \"michael\",\n \"age\": 35,\n \"family\": {\"name\": \"john\", \"relation\": \"father\"},\n}\nprint(\n lambda_handler(\n {\n \"body\": json.dumps(body),\n },\n None,\n )\n)\n\nprints out\n{'statusCode': 200, 'body': '{\"noot\": \"michael\", \"doot\": 35, \"family\": {\"noot\": \"john\", \"root\": \"father\"}}'}\n\n" ]
[ 1 ]
[]
[]
[ "amazon_web_services", "aws_lambda", "python", "python_3.x", "recursion" ]
stackoverflow_0074619555_amazon_web_services_aws_lambda_python_python_3.x_recursion.txt