content (string, 85-101k chars) | title (string, 0-150 chars) | question (string, 15-48k chars) | answers (sequence) | answers_scores (sequence) | non_answers (sequence) | non_answers_scores (sequence) | tags (sequence) | name (string, 35-137 chars)
Q:
NumPy: apply vector-valued function to mesh grid
I am trying to do something like the following in NumPy:
import numpy as np
def f(x):
return x[0] + x[1]
X1 = np.array([0, 1, 2])
X2 = np.array([0, 1, 2])
X = np.meshgrid(X1, X2)
result = np.vectorize(f)(X)
with the expected result being array([[0, 1, 2], [1, 2, 3], [2, 3, 4]]), but it returns the following error:
2
3 def f(x):
----> 4 return x[0] + x[1]
5
6 X1 = np.array([0, 1, 2])
IndexError: invalid index to scalar variable
This is because it tries to apply f to all 18 scalar elements of the mesh grid, whereas I want it applied to 9 pairs of 2 scalars. What is the correct way to do this?
Note: I am aware this code will work if I do not vectorize f, but vectorizing matters because f can be any function; e.g. it could contain an if statement, which raises a ValueError without vectorizing.
A:
If you insist on using numpy.vectorize, you need to define a signature when creating the vectorized function.
import numpy as np
def f(x):
return x[0] + x[1]
# Or
# return np.add.reduce(x, axis=0)
X1 = np.array([0, 1, 2])
X2 = np.array([0, 1, 2])
X = np.meshgrid(X1, X2)
# np.asarray(X).shape -> (2, 3, 3)
# shape of the desired result is (3, 3)
f_vec = np.vectorize(f, signature='(n,m,m)->(m,m)')
result = f_vec(X)
print(result)
Output:
[[0 1 2]
[1 2 3]
[2 3 4]]
A:
For the function you mentioned in the comments:
f = lambda x: x[0] + x[1] if x[0] > 0 else 0
You can use np.where:
def f(x):
return np.where(x > 0, x[0] + x[1], 0)
# np.where(some_condition, value_if_true, value_if_false)
Numpy was designed with vectorization in mind -- unless you have some crazy edge-case there's almost always a way to take advantage of Numpy's broadcasting and vectorization. I strongly recommend seeking out vectorized solutions before giving up so easily and resorting to using for loops.
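As a rough sketch (an addition, assuming the meshgrid output is stacked into one array and using the x[0] > 0 condition from that lambda), applying this to the grids from the question could look like:
import numpy as np

X1 = np.array([0, 1, 2])
X2 = np.array([0, 1, 2])
X = np.stack(np.meshgrid(X1, X2))          # shape (2, 3, 3)

# x0 + x1 wherever x0 > 0, else 0 -- evaluated elementwise, no Python if needed
result = np.where(X[0] > 0, X[0] + X[1], 0)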
A:
If you are too lazy, or unable, to do a "proper" 'vectorization', you can use np.vectorize. But you need to take time to really read its docs. It isn't magic. It can be useful, especially if you need to take advantage of broadcasting, and the function, for some reason or other, only accepts scalars.
Rewriting your function to work with scalar inputs (though it also works fine with arrays, in this case):
In [91]: def foo(x,y): return x+y
...: f = np.vectorize(foo)
With scalar inputs:
In [92]: f(1,2)
Out[92]: array(3)
With 2 arrays (a (2,1) and (3,)), returning a (2,3):
In [93]: f(np.array([1,2])[:,None], np.arange(1,4))
Out[93]:
array([[2, 3, 4],
[3, 4, 5]])
Same thing with meshgrid:
In [94]: I,J = np.meshgrid(np.array([1,2]), np.arange(1,4),indexing='ij')
In [95]: I
Out[95]:
array([[1, 1, 1],
[2, 2, 2]])
In [96]: J
Out[96]:
array([[1, 2, 3],
[1, 2, 3]])
In [97]: f(I,J)
Out[97]:
array([[2, 3, 4],
[3, 4, 5]])
Or meshgrid arrays as defined in [93]:
In [98]: I,J = np.meshgrid(np.array([1,2]), np.arange(1,4),indexing='ij', sparse=True)
In [99]: I,J
Out[99]:
(array([[1],
[2]]),
array([[1, 2, 3]]))
But in a true vectorized sense, you can just add the 2 arrays:
In [100]: I+J
Out[100]:
array([[2, 3, 4],
[3, 4, 5]])
The first paragraph of np.vectorize docs (my emphasis):
Define a vectorized function which takes a nested sequence of objects or
numpy arrays as inputs and returns a single numpy array or a tuple of numpy
arrays. The vectorized function evaluates pyfunc over successive tuples
of the input arrays like the python map function, except it uses the
broadcasting rules of numpy.
edit
Starting with a function that expects a 2 element tuple, we could add a cover that splits it into two, and apply vectorize to that:
In [103]: def foo1(x): return x[0]+x[1]
...: def foo2(x,y): return foo1((x,y))
...: f = np.vectorize(foo2)
In [104]: f(1,2)
Out[104]: array(3)
X is a 2 element list (of 2d arrays):
In [105]: X = np.meshgrid(np.array([1,2]), np.arange(1,4),indexing='ij')
In [106]: X
Out[106]:
[array([[1, 1, 1],
[2, 2, 2]]),
array([[1, 2, 3],
[1, 2, 3]])]
which can be passed to f as:
In [107]: f(X[0],X[1])
Out[107]:
array([[2, 3, 4],
[3, 4, 5]])
But there's no need to slow things down with that iteration. Just pass the tuple to foo1:
In [108]: foo1(X)
Out[108]:
array([[2, 3, 4],
[3, 4, 5]])
In f = lambda x: x[0] + x[1] if x[0] > 0 else 0 you get the 'ambiguity' ValueError because if only works with scalars. But there are plenty of faster numpy ways of replacing such an if step.
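For instance (a sketch using the I and J grids from In [94] above, not part of the original answer), one vectorized replacement for that if step is:
np.where(I > 0, I + J, 0)    # I + J wherever I > 0, else 0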
A:
ChatGPT to the rescue! As it turns out, the better option here is np.apply_along_axis. The following code solved the problem:
import numpy as np
def f(x):
return x[0] + x[1]
X1 = np.array([0, 1, 2])
X2 = np.array([0, 1, 2])
X = np.meshgrid(X1, X2)
result = np.apply_along_axis(f, 0, X)
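For reference (a note on the shapes involved, not stated in the original answer): np.asarray(X) has shape (2, 3, 3) here, so applying f along axis 0 feeds each length-2 pair to f and leaves a (3, 3) result:
print(result)
# [[0 1 2]
#  [1 2 3]
#  [2 3 4]]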
Answer scores: 1, 0, 0, 0 | Tags: numpy, python, vectorization | Source: stackoverflow_0074584886_numpy_python_vectorization.txt
Q:
Comparing Django project structure to ruby on rails
After some years developing web apps using Ruby on Rails, I decided to give Django a try; however, it seems that I'm missing something, namely how to structure a large project, or any project in general.
For example, in rails we have a models folder which contains model classes, each in a separate ruby file, a controllers folder which contains controller classes, again each in a separate ruby file.
However, Django splits the project into independent apps, which can be installed independently in other Django projects; each app has a models.py file which contains all the model classes and a views.py file which contains all the view functions.
But then how do I group functions in views the way Rails does, i.e. one controller per model?
In general, how do I structure my project when it contains one large app that can't be separated into multiple independent apps? For example, I want to have an index view function for each model, but how do I do this if all functions are in one file?
If my project is about selling cars, for example, I should have an index function that maps to /cars, another index function that maps to /users, etc...
I searched the web but couldn't find a suitable answer.
It is unclear to me how to structure Django app, so any help will be appreciated.
A:
In short, Django is a Model-View-Template framework and Rails is a Model-View-Controller framework.
In Django we store controllers (sort of) in views.py for each app, while an MVC framework such as Rails stores them in controllers. In Django, you also have to create your own HTML templates separately, which some people may find tedious, but it is easier to work with frontend frameworks such as Vue or React because of that separation.
This is a general comparison I found on the net.
However, to answer your question on folder structure: Django is very flexible about folder arrangements; it really depends how you want to design the project structure. Normally what I'd do is keep every app in the main folder (project folder). This way you won't mess with the venv setup.
A:
As mentioned in @shanksfk's answer, Django is very flexible in folder arrangements. You don't have to follow the default app structure. When I create a purely backend Django project (with DRF), I usually have 3 base apps:
api - where modules, serializers, and urls are stored
core - the default app (the one that has the name of your Django project)
db - where models are stored
Then as I expand, I can add a folder dedicated to helpers, utils, and possibly abstraction layers for external services (a sketch of such a layout follows the list below). I recommend reading more about Domain-driven Design to get an idea on how to structure your project. You can also check other Django projects for inspiration:
django CMS
Baserow
Django API Domains
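As an illustration (a hypothetical layout, not taken from the answer), that kind of split might look like:
myproject/
    core/        # default app: settings, root urls
    api/         # serializers, routers/urls, views
    db/          # models
    manage.py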
Answer scores: 1, 1 | Tags: django, python, ruby_on_rails | Source: stackoverflow_0074651540_django_python_ruby_on_rails.txt
Q:
Get all combinations of several columns in a pandas dataframe and calculate sum for each combination
I have a dataframe as below:
df = pd.DataFrame({'id': ['a', 'b', 'c', 'd'],
'colA': [1, 2, 3, 4],
'colB': [5, 6, 7, 8],
'colC': [9, 10, 11, 12],
'colD': [13, 14, 15, 16]})
I want to get all combinations of 'colA', 'colB', 'colC' and 'colD' and calculate sum for each combination. I can get all combinations using itertools
cols = ['colA', 'colB', 'colC', 'colD']
all_combinations = [c for i in range(2, len(cols)+1) for c in combinations(cols, i)]
But how can I get the sum for each combination and create a new column in the dataframe? Expected output:
id colA colB colC colD colA+colB colB+colC ... colA+colB+colC+colD
a 1 5 9 13 6 14 ... 28
b 2 6 10 14 8 16 ... 32
c 3 7 11 15 10 18 ... 36
d 4 8 12 16 12 20 ... 40
A:
First, select from the frame a list of all columns starting with col. Then create a dictionary using combinations, where the keys are the names of the new sum columns and the values are the sums of the corresponding columns of the original dataframe; then unpack it with ** as keyword arguments to the assign method, thereby adding the new columns to the frame:
from itertools import combinations

cols = [c for c in df.columns if c.startswith('col')]  # columns to combine
df = df.assign(**{'+'.join(c): df.loc[:, c].sum(axis=1) for i in range(2, len(cols) + 1) for c in combinations(cols, i)})
print(df)
id colA colB colC colD colA+colB colA+colC colA+colD colB+colC colB+colD colC+colD colA+colB+colC colA+colB+colD colA+colC+colD colB+colC+colD colA+colB+colC+colD
0 a 1 5 9 13 6 10 14 14 18 22 15 19 23 27 28
1 b 2 6 10 14 8 12 16 16 20 24 18 22 26 30 32
2 c 3 7 11 15 10 14 18 18 22 26 21 25 29 33 36
3 d 4 8 12 16 12 16 20 20 24 28 24 28 32 36 40
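An equivalent sketch (an alternative, not from the answer) that builds all the new columns first and concatenates them in one step:
import pandas as pd
from itertools import combinations

cols = [c for c in df.columns if c.startswith('col')]
new_cols = {'+'.join(c): df[list(c)].sum(axis=1)
            for i in range(2, len(cols) + 1)
            for c in combinations(cols, i)}
df = pd.concat([df, pd.DataFrame(new_cols)], axis=1)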
Answer score: 2 | Tags: pandas, python | Source: stackoverflow_0074652092_pandas_python.txt
Q:
'Line2D' object has no property 'line' error when using Matplotlib in a PyQt5 subwindow
I am trying to embed a Matplotlib plot in a PyQt5 subwindow written in Python. When I plot a line in the plot I get the error:
'Line2D' object has no property 'line'
Below is the relevant code extracted from my application. Any help would be much appreciated and if anyone knows of a better way to do this that would be much appreciated too.
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.backends.backend_qt5agg import NavigationToolbar2QT as NavigationToolbar
import matplotlib.pyplot as plt
self.figure = plt.figure(figsize=(plotWidth, plotHeight), dpi=dotsPerInch)
self.canvas = FigureCanvas(self.figure)
self.subPlot = self.figure.add_subplot(111)
#xData, yData are lists of data points
self.subPlot.plot(xData, yData, line='b.-', label=lableText)
A:
You should try the below, as the official docs say:
linestyle='--'
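For the exact call in the question, a sketch of the fix (based on Matplotlib's plot signature: there is no line= keyword, so the format string is passed positionally, or split into explicit keyword arguments):
# format string passed positionally
self.subPlot.plot(xData, yData, 'b.-', label=lableText)

# or with explicit keyword arguments
self.subPlot.plot(xData, yData, color='b', marker='.', linestyle='-', label=lableText)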
Answer score: 0 | Tags: matplotlib, pyqt5, python | Source: stackoverflow_0071072530_matplotlib_pyqt5_python.txt
Q:
Why won't this prophet Python module import? I have tried everything possible
I am running Python version 3.7 and trying to create a stock prediction Python program using fbprophet. However, it doesn't want to install.
I have tried importing it using pip, through python, and using conda install. Nothing seems to work. Can I get some help here?
Showing that it isn't importing
A:
Try with a virtualenv in Python.
❯ python -m venv .venv
❯ source .venv/bin/activate
❯ pip install prophet
Then everything you installed will be in the virtual environment.
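If this is on Windows (an assumption; the commands above are for a Unix shell), the activation step would instead look like:
.venv\Scripts\activate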
Here is the sample code.
from prophet import Prophet
print("imported successfully!")
And the output would be following:
❯ python sample.py
Fontconfig warning: ignoring UTF-8: not a valid region tag
Importing plotly failed. Interactive plots will not work.
imported successfully!
Keep in mind that every library you install will only be in the virtual env, and you have to source the activate script first, before running the code.
Answer score: 0 | Tags: installation, module, prophet, python | Source: stackoverflow_0074652156_installation_module_prophet_python.txt
Q:
Python docx2pdf AttributeError: Open.SaveAs
I am trying to convert a docx file to pdf using the docx2pdf library, using the following code:
from docx2pdf import convert
convert("generated.docx")
As written here. But I have an error:
Traceback (most recent call last):
File "c:\Users\user\Desktop\folder\script.py", line 29, in <module>
convert("generated.docx")
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\docx2pdf-0.1.8-py3.10.egg\docx2pdf\__init__.py", line 106, in convert
return windows(paths, keep_active)
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\docx2pdf-0.1.8-py3.10.egg\docx2pdf\__init__.py", line 33, in windows
doc.SaveAs(str(pdf_filepath), FileFormat=wdFormatPDF)
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\win32com\client\dynamic.py", line 639, in __getattr__
raise AttributeError("%s.%s" % (self._username_, attr))
AttributeError: Open.SaveAs
I also tried converting with comtypes and pywin32, but I get the same error. I take code from here.
import sys
import comtypes.client
wdFormatPDF = 17
in_file = os.path.abspath("generated.docx")
out_file = os.path.abspath("generated.pdf")
word = comtypes.client.CreateObject('Word.Application')
doc = word.Documents.Open(in_file)
doc.SaveAs(out_file, FileFormat=wdFormatPDF)
doc.Close()
word.Quit()
---------------------------------
Traceback (most recent call last):
File "c:\Users\user\Desktop\folder\script.py", line 45, in <module>
doc.SaveAs(out_file, FileFormat=wdFormatPDF)
_ctypes.COMError: (-2147418111, 'Call was rejected by callee.', (None, None, None, 0, None))
import sys
import win32com.client
wdFormatPDF = 17
in_file = os.path.abspath("generated.docx")
out_file = os.path.abspath("generated.pdf")
word = win32com.client.Dispatch('Word.Application')
doc = word.Documents.Open(in_file)
doc.SaveAs(out_file, FileFormat=wdFormatPDF)
doc.Close()
word.Quit()
---------------------------------
Traceback (most recent call last):
File "c:\Users\user\Desktop\folder\script.py", line 46, in <module>
doc.SaveAs(out_file, FileFormat=wdFormatPDF)
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\win32com\client\dynamic.py", line 639, in __getattr__
raise AttributeError("%s.%s" % (self._username_, attr))
AttributeError: Open.SaveAs
How can I fix this error? Or please suggest another way to convert docx to pdf. Thank you in advance
A:
change:
word = win32com.client.Dispatch('Word.Application')
to
import pythoncom
word = win32com.client.Dispatch('Word.Application', pythoncom.CoInitialize())
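A common variant (an assumption about typical pywin32 usage, not part of the original answer) is to initialize COM for the current thread before creating the Word object, rather than passing it to Dispatch:
import pythoncom
import win32com.client

pythoncom.CoInitialize()   # initialize COM for this thread
word = win32com.client.Dispatch('Word.Application')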
A:
from docx2pdf import convert
inputFile = "document.docx"
outputFile = "document2.pdf"
file = open(outputFile, "w")
file.close()
convert(inputFile, outputFile)
You should create the output file first
A:
One observation I had on this issue: when the Word document is already open in Microsoft Word and we try to execute convert() for the same file, that is when this error occurs. In case the file is open, please close it and try again.
Answer scores: 1, 0, 0 | Tags: comtypes, docx, python, python_docx, pywin32 | Source: stackoverflow_0071292585_comtypes_docx_python_python_docx_pywin32.txt
Q:
AddPrivateFont to App Title / Title bar in WxPython?
My problem is I can't find the way to use AddPrivateFont to change the App Title font in WxPython.
From the demo
https://github.com/wxWidgets/Phoenix/blob/master/demo/AddPrivateFont.py
and the sample
https://wiki.wxpython.org/How%20to%20add%20a%20menu%20bar%20in%20the%20title%20bar%20%28Phoenix%29
I tried
f = wx.Font(
pointSize=18,
family=wx.FONTFAMILY_DEFAULT,
style=wx.FONTSTYLE_NORMAL,
weight=wx.FONTWEIGHT_NORMAL,
underline=False,
faceName="Youth Touch",
encoding=wx.FONTENCODING_DEFAULT,
)
xwq = self.SetAppName("Custom Gui 1")
xwq.SetFont(f)
But I'm getting the error:
AttributeError: 'NoneType' object has no attribute 'SetFont'
I also tested the same with SetTitle
f = wx.Font(
pointSize=18,
family=wx.FONTFAMILY_DEFAULT,
style=wx.FONTSTYLE_NORMAL,
weight=wx.FONTWEIGHT_NORMAL,
underline=False,
faceName="Youth Touch",
encoding=wx.FONTENCODING_DEFAULT,
)
frame.SetTitle('yourtitle')
frame.SetFont(f)
and the same error shows up.
import wx
import wx.grid as grid
import os
import sys
try:
gFileDir = os.path.dirname(os.path.abspath(__file__))
except:
gFileDir = os.path.dirname(os.path.abspath(sys.argv[0]))
gDataDir = os.path.join(gFileDir, "myfonts")
filename = os.path.join(gDataDir, "YouthTouchDemoRegular-4VwY.ttf")
wx.Font.AddPrivateFont(filename)
class MyFrame(wx.Frame):
def __init__(self, parent, title):
super(MyFrame, self).__init__(parent, title=title, size=(800, 600))
f = wx.Font(
pointSize=12,
family=wx.FONTFAMILY_DEFAULT,
style=wx.FONTSTYLE_NORMAL,
weight=wx.FONTWEIGHT_NORMAL,
underline=False,
faceName="Youth Touch DEMO Regular",
encoding=wx.FONTENCODING_DEFAULT,
)
title.SetFont(f)
self.panel = MyPanel(self)
class MyPanel(wx.Panel):
def __init__(self, parent):
super(MyPanel, self).__init__(parent)
mygrid = grid.Grid(self)
mygrid.CreateGrid(26, 9)
sizer = wx.BoxSizer(wx.VERTICAL)
sizer.Add(mygrid, 1, wx.EXPAND)
self.SetSizer(sizer)
class MyApp(wx.App):
def OnInit(self):
self.frame = MyFrame(parent=None, title="Grid In WxPython")
self.frame.Show()
return True
app = MyApp()
app.MainLoop()
Is there a documented way or other known workaround example to set a private font on the title?
EDIT:
Working result here from @Rob's solution :
Sample: https://web.archive.org/web/20221202192613/https://paste.c-net.org/HondoPrairie
try:
gFileDir = os.path.dirname(os.path.abspath(__file__))
except:
gFileDir = os.path.dirname(os.path.abspath(sys.argv[0]))
gDataDir = os.path.join(gFileDir, "myfonts")
print("gDataDir: ", gDataDir)
filename = os.path.join(gDataDir, "YouthTouchDemoRegular-4VwY.ttf")
wx.Font.AddPrivateFont(filename)
print("filename: ", filename)
self.label_font = self.GetFont()
self.label_font.SetPointSize(18)
self.label_font.SetFamily(wx.FONTFAMILY_DEFAULT)
self.label_font.SetStyle(wx.FONTSTYLE_NORMAL)
self.label_font.SetWeight(wx.FONTWEIGHT_NORMAL)
self.label_font.SetUnderlined(False)
self.label_font.SetFaceName("Youth Touch DEMO")
self.label_font.SetEncoding(wx.FONTENCODING_DEFAULT)
self.SetFont(self.label_font)
A:
The reason you are getting those errors is that the title for a frame is just a string.
To be able to modify the font in the title you will have to create a title bar class based off of wx.Control like they did in the sample.
The created title bar will house the label and you will be able to set its font.
When you are ready to set the font run AddPrivateFont for it and then you should be able to reference it by faceName.
Also, the faceName you have above is not correct compared to the ttf in https://paste.c-net.org/FellasDerby . The faceName should be "Youth Touch DEMO"
From the docs modified with your font name/file:
filename = os.path.join(gDataDir, "YouthTouchDemoRegular-4VwY.ttf")
wx.Font.AddPrivateFont(filename)
st2 = wx.StaticText(self, -1, 'SAMPLETEXT', (15, 42))
f = wx.Font(pointSize=12,
family=wx.FONTFAMILY_DEFAULT,
style=wx.FONTSTYLE_NORMAL,
weight=wx.FONTWEIGHT_NORMAL,
underline=False,
faceName="Youth Touch DEMO",
encoding=wx.FONTENCODING_DEFAULT)
st2.SetFont(f)
Answer score: 1 | Tags: fonts, python, python_3.x, wxpython, wxwidgets | Source: stackoverflow_0074651265_fonts_python_python_3.x_wxpython_wxwidgets.txt
Q:
Repairing pdfs with damaged xref table
Are there any solutions (preferably in Python) that can repair pdfs with damaged xref tables?
I have a pdf that I tried to convert to a png in Ghostscript and received the following error:
**** Error: An error occurred while reading an XREF table.
**** The file has been damaged. This may have been caused
**** by a problem while converting or transfering the file.
However, I am able to open the pdf in Preview on my Mac and when I export the pdf using Preview, I am able to convert the exported pdf.
Is there any way to repair pdfs without having to manually open them and export them?
A:
If the file renders as expected in Ghostscript then you can run it through GS to the pdfwrite device and create a new PDF file which won't be damaged.
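For example (a sketch; the exact invocation depends on your Ghostscript install, and the Python wrapper is just one way to call it):
gs -o repaired.pdf -sDEVICE=pdfwrite damaged.pdf

or from Python:
import subprocess
subprocess.run(["gs", "-o", "repaired.pdf", "-sDEVICE=pdfwrite", "damaged.pdf"], check=True)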
Preview is (like Acrobat) almost certainly silently repairing the problem in the background. Ghostscript will be doing the same, but unlike other applications we feel you need to know that the file has a problem. Firstly so that you know its broken, secondly so that if the file renders incorrectly in Ghostscript (or indeed, other applications) you know why.
Note that there are two main reasons for a damaged xref; firstly the developer of the application didn't read the specification carefully enough and the file offsets in the xref are correct, but the format is incorrect (this is not uncommon and a repair by GS will be harmless), secondly the file genuinely has been damaged in transit, or by editing it.
In the latter case there may be other problems and Ghostscript will try to warn you about those too. If you don't get any other warnings or errors, then its probably just a malformed xref table.
Answer score: 1 | Non-answer (score -1):
"i know im super late but, if you try...\ncat my.pdf > temp.pdf && hexdump temp.pdf > newpdf.pdf\n\n\nor\nzip my.pdf && unzip my.pdf\n\nif you opened the document in...\n\nutf-8 read mode\n\n...then you probably changed some key bytes around, specifically the octal 011, hexadecimal 0A, decimal 10... these are the line feed or \"new line\" characters and they are essential to documentation in ascii encoding.\nYou can hexdump the octal or hexadecimal line strings with hexdump, all-search the document for bad newline characters and change them back to ascii newline.\nBe sure to open the document in encoding='ascii' or in bytes mode. your have to get out a character matrix...\nIf heard of people just compressing the file with zip and uncompressing it to fix this problem as well.\nWhenever fiddling around in a pdf, first make a new copy, then fiddle it.\nTL;DR\non line 17 of your document \nyou hit a << or ascii 'Line/page Separator' character. \nThe guilleme or double chevron isnt used for \nthat in UTF-8, your reader panicked and raised an error\n\nPDF was written in postscript. If you want to learn how to go crazy on a pdf, i recommend learning postscript.\nThis forbidden text is a good start\n"
Tags: ghostscript, pdf, python | Source: stackoverflow_0043149372_ghostscript_pdf_python.txt
Q:
sqlalchemy.orm.exc.UnmappedInstanceError: Class 'services_backend.routes.models.category.CategoryCreate' is not mapped
I am using pydantic, FastAPI + SQLAlchemy, and PostgreSQL for my project. When I try to create a new button (or category), I get UnmappedInstanceError.
Here is my code:
button.py (from routes)
from fastapi import HTTPException, APIRouter
from fastapi_sqlalchemy import db
from .models.button import ButtonCreate, ButtonUpdate, ButtonGet
from ..models.database import Button
from .category import get_category
button = APIRouter(
tags=["button"],
responses={200: {"description": "Ok"}}
)
@button.post("/", response_model=ButtonCreate)
def create_button(button: ButtonCreate):
db_category = get_category(category_id=button.category_id)
if db_category is None:
raise HTTPException(status_code=404, detail="Category does not exist")
db_button = ButtonCreate(category_id=button.category_id, name=button.name,
icon=button.icon)
db.session.add(db_button)
return db_button
@button.get("/", response_model=list[ButtonGet])
def get_buttons(skip: int = 0, limit: int = 100):
return db.session.query(Button).offset(skip).limit(limit).all()
@button.get("/{button_id}", response_model=ButtonGet)
def get_button(button_id: int):
db_button = db.session.query(Button).filter(Button.id == button_id).first()
if db_button is None:
raise HTTPException(status_code=404, detail="Button does not exist")
return db_button
@button.delete("/")
def remove_button(button_id: int):
db_button = get_button(button_id=button_id)
if db_button is None:
raise HTTPException(status_code=404, detail="Button does not exist")
db.session.query(Button).filter(Button.id == button_id).first()
@button.patch("/", response_model=ButtonUpdate)
def update_button(button: ButtonUpdate):
db_old_button = get_button(button_id=button.id)
if db_old_button is None:
raise HTTPException(status_code=404, detail="Button does not exist")
return db.session.query(Button).update(button)
button.py (from models, my schemas)
from .base import Base
from typing import Optional
class ButtonCreate(Base):
category_id: int
icon: Optional[str]
name: Optional[str]
class ButtonUpdate(Base):
id: int
category_id: Optional[int]
icon: Optional[str]
name: Optional[str]
class ButtonGet(Base):
id: int
All of my schemas are inherited from custom Base class (which inherited from BaseModel)
from pydantic import BaseModel
class Base(BaseModel):
def __repr__(self) -> str:
attrs = []
for k, v in self.__class__.schema().items():
attrs.append(f"{k}={v}")
return "{}({})".format(self.__class__.__name__, ', '.join(attrs))
class Config:
orm_mode = True
A:
The error message is pretty clear: you're trying to use CategoryCreate as an ORM model, when it is a pydantic model (I'm guessing, you haven't included it here). Same in your create_button function, you're trying to add a ButtonCreate object to the database. That should be the ORM model Button instead.
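A sketch of the corrected create_button body (assuming the mapped Button model imported from ..models.database accepts the same category_id, name and icon columns as the pydantic schema):
db_button = Button(category_id=button.category_id, name=button.name, icon=button.icon)
db.session.add(db_button)
db.session.commit()
return db_button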
Answer score: 0 | Tags: fastapi, python, sqlalchemy | Source: stackoverflow_0074642491_fastapi_python_sqlalchemy.txt
Q:
Django Serializer - how to check if a ListField is valid?
I'm currently working on a practice social media app. In this app, current users can invite their friends by email to join the app (specifically, joining a 'channel' of the app, like Discord). I'm working on unit tests to ensure that emails are valid. I'm working with serializers for the first time and I'm trying to move past some issues I've been having.
Functionality: the user enters in a friend's email address. For the purposes of this test, the email address needs to be at least 6 characters long. So something like "[email protected]" (6 char) would be acceptable, but "[email protected]" (5 char) would not be accepted. Users can enter up to 10 emails at a time.
What I'm currently struggling with is what function to use for "is_valid"? I know Django forms has something for this, but I haven't found a satisfactory method for serializers.
Here is what's currently in serializers.py.
from django.core.validators import MinValueValidator, MaxValueValidator
from rest.framework.fields import ListField
from rest.framework import serializers
class EmailList(ListField):
"""Will validate the email input"""
def to_internal_value(self, data):
# if the EmailList is not valid, then a ValidationError should be raised--here's where I'm wondering what to put
raise serializers.ValidationError({"invalid_email": "Email is invalid, please recheck".})
class EmailListSerializer(serializers.Serializer):
"""Serializer for the email list"""
emails = EmailList(
required=True
child=Serializers.CharField(
validators= [
MinLengthValidator(6, message="too short"),
MaxLengthValidator(50, message="too long"),
],
),
max_length=10,
error_messages = ({"invalid_email": "Email is invalid, please recheck."}
)
Can anyone assist as to what function I should put in the to_internal_value method of the EmailList class to check that the emails are valid?
A:
If you are getting the emails in a list, then write a for loop to validate each email; in your to_internal_value function, write code as below:
from django.core.validators import validate_email

def to_internal_value(self, data):
    # validate each email before letting the parent ListField convert the values
    for email in data:
        try:
            validate_email(email)
        except Exception:
            raise serializers.ValidationError({"invalid_email": "Email is invalid, please recheck."})
    return super().to_internal_value(data)
let me know if this works for you or not
A:
You do not have to implement a custom serializer field for the reason you mentioned; all you have to do is put an EmailField under a ListField. For min and max length you can just pass the kwargs, because EmailField inherits from CharField.
In case you want to customize the error message, you can pass the error_messages kwarg to the email field.
class EmailListSerializer(serializers.Serializer):
    emails = serializers.ListField(
        child=serializers.EmailField(
            min_length=6,
            max_length=50,
            error_messages={
                "invalid": "Email is invalid, please recheck"
            },
        ),
        max_length=10,  # up to 10 emails at a time, per the question
    )
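To answer the is_valid part of the question directly (a usage sketch, assuming the serializer class above, here named EmailListSerializer to match the question; invalid entries end up in serializer.errors):
serializer = EmailListSerializer(data={"emails": ["[email protected]", "[email protected]"]})
if serializer.is_valid():
    emails = serializer.validated_data["emails"]
else:
    print(serializer.errors)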
Answer scores: 0, 0 | Tags: django, error_handling, python, serialization, validation | Source: stackoverflow_0074648592_django_error_handling_python_serialization_validation.txt
Q:
read json composite attribute by python
I have the following content in a JSON file:
{
"role": [
{
"type": "account",
"attributes": {
"order": 50
}
},
{
"type": "secretary",
"attributes": {
"order": 10
}
},
{
"type": "account",
"attributes": {
"order": 3
}
}
]
}
I want to get the role with the max order among all roles. How can I do this in Python?
I have tried the following:
lowestpriority = roles.attributes[max('order')]
but it gives an error : AttributeError: 'list' object has no attribute 'attributes'
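One way to do this (a sketch, assuming the JSON above is loaded from a file named roles.json, which is an invented name):
import json

with open("roles.json") as f:
    data = json.load(f)

# role whose attributes.order is largest
top_role = max(data["role"], key=lambda r: r["attributes"]["order"])
print(top_role)   # {'type': 'account', 'attributes': {'order': 50}}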
Answers: none | Non-answer (score -1):
"You from JS background?\nIndeed the list does not have such attribute.\nAssuming roles is your list (cause you didn't bother to provide any code unfortunately):\nmax_order_role = sorted(roles, key= lambda i:i.get(\"attributes\",{}).get(\"order\"))[-1]\n\n"
Tags: composite, json, python | Source: stackoverflow_0074652304_composite_json_python.txt
Q:
How to replace strings with int in sublist?
I'm trying to split above special letters '/' or '_' whatever comes first in the string column.
Here is the sample of the dataset(df2):
4. 발견장소 코딩사유 Unnamed : 1
1 67488 교외/야산_등산로 계곡 앞
2 100825 자택_자택 방안 텐트
3. 101199 숙박업소_게스트하우스 21층 복도
I converted Unnamed: 1 column like this:
4. 발견장소 코딩사유 Unnamed : 1
1 67488 [[교외],[야산_등산로 계곡 앞]]
2 100825 [[자택],[자택 방안 텐트]]
3. 101199 [[숙박업소],[게스트하우스 21층 복도]]
I'm trying to replace the strings with numbers like this. Here is the desired output:
4. 발견장소 코딩사유 Unnamed : 1
1 67488 [[7],[야산_등산로 계곡 앞]]
2 100825 [[1],[자택 방안 텐트]]
3 101199 [[6],[게스트하우스 21층 복도]]
What I tried:
for i in range(1,122):
# KEY값 추출
index_key = df2['4. 발견장소 코딩사유'][i]
# FIND_PLACETP값 추출
index_rawdata = df.loc[df['KEY'] == index_key,'FIND_PLACETP'].index[0]
num = df['FIND_PLACETP'][index_rawdata]
# text split
findplace = df2['Unnamed: 1'].str.split('/|_',expand=False)
# replace words with numbers
findplace[i][0] = findplace[i][0].str.replace('자택','1')
findplace[i][0]
findplace[i][0].str.replace(dict(zip(['자택','친척집','지인집','학교|직장','공공장소','숙박업소','교외|야산','병원','기타'],
[1,2,3,4,5,6,7,8,9])),regex=True)
but it caused the error like this
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-50-b8929ebb1248> in <module>
11
12 # replace words with numbers
---> 13 findplace[i][0] = findplace[i][0].str.replace('자택','1')
14 findplace[i][0]
15 # findplace[i][0].str.replace(dict(zip(['자택','친척집','지인집','학교|직장','공공장소','숙박업소','교외|야산','병원','기타'],
AttributeError: 'str' object has no attribute 'str'
A:
After str.split, findplace is a Series of lists: findplace[i] is the list for row i, and findplace[i][0] is its first element, a plain Python string. Strings have no .str accessor; that only exists on a pandas Series, which is why the error is raised.
For the whole column (take the first token of every list, then replace on the resulting Series):
first_tokens = findplace.str[0].str.replace('자택','1')

For a single row i, call the string's own replace directly:
findplace[i][0] = findplace[i][0].replace('자택','1')
#findplace[i][0] is a string so replace can be applied directly
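Going one step further, a minimal sketch of one way to produce the desired [[code], [rest]] structure in a single pass. It assumes df2 is the DataFrame shown in the question and uses the keyword-to-code mapping implied by the question's zip(...) call; anything that matches no keyword is kept unchanged.
import re

place_codes = {'자택': 1, '친척집': 2, '지인집': 3, '학교|직장': 4, '공공장소': 5,
               '숙박업소': 6, '교외|야산': 7, '병원': 8, '기타': 9}

def encode(text):
    place, rest = re.split(r'/|_', text, maxsplit=1)      # split on the first / or _
    code = next((v for k, v in place_codes.items()
                 if re.fullmatch(k, place)), place)       # '교외' matches '교외|야산' -> 7
    return [[code], [rest]]

df2['Unnamed: 1'] = df2['Unnamed: 1'].apply(encode)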
| How to replace strings with int in sublist? | I'm trying to split above special letters '/' or '_' whatever comes first in the string column.
Here is the sample of the dataset(df2):
4. 발견장소 코딩사유 Unnamed : 1
1 67488 교외/야산_등산로 계곡 앞
2 100825 자택_자택 방안 텐트
3. 101199 숙박업소_게스트하우스 21층 복도
I converted Unnamed: 1 column like this:
4. 발견장소 코딩사유 Unnamed : 1
1 67488 [[교외],[야산_등산로 계곡 앞]]
2 100825 [[자택],[자택 방안 텐트]]
3. 101199 [[숙박업소],[게스트하우스 21층 복도]]
I'm trying to replace the strings with numbers like this. Here is the desired output:
4. 발견장소 코딩사유 Unnamed : 1
1 67488 [[7],[야산_등산로 계곡 앞]]
2 100825 [[1],[자택 방안 텐트]]
3 101199 [[6],[게스트하우스 21층 복도]]
What I tried:
for i in range(1,122):
# KEY값 추출
index_key = df2['4. 발견장소 코딩사유'][i]
# FIND_PLACETP값 추출
index_rawdata = df.loc[df['KEY'] == index_key,'FIND_PLACETP'].index[0]
num = df['FIND_PLACETP'][index_rawdata]
# text split
findplace = df2['Unnamed: 1'].str.split('/|_',expand=False)
# replace words with numbers
findplace[i][0] = findplace[i][0].str.replace('자택','1')
findplace[i][0]
findplace[i][0].str.replace(dict(zip(['자택','친척집','지인집','학교|직장','공공장소','숙박업소','교외|야산','병원','기타'],
[1,2,3,4,5,6,7,8,9])),regex=True)
but it caused the error like this
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-50-b8929ebb1248> in <module>
11
12 # replace words with numbers
---> 13 findplace[i][0] = findplace[i][0].str.replace('자택','1')
14 findplace[i][0]
15 # findplace[i][0].str.replace(dict(zip(['자택','친척집','지인집','학교|직장','공공장소','숙박업소','교외|야산','병원','기타'],
AttributeError: 'str' object has no attribute 'str'
| [
"In pandas, with your method, you are referring to a value instead of row or column. In findplace[i][0], i is your column name and 0 is your row name, and it is returning the value of column i where row is 0.\nI don't really know, either you are trying to use replace on row (with index name as 0) or the whole column and row.\nFor the whole column:\nfindplace[i] = findplace[i].str.replace('자택','1') #[0] is removed\n\nFor the row with index name as 0:\nfindplace[i][0] = findplace[i][0].replace('자택','1')\n#findplace[i][0] is a string so replace can be applied directly\n\n"
] | [
1
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074652325_pandas_python.txt |
Q:
Converting stl code plot from Matlab to Python
I am looking to be able to plot an stl file in Python. I can do it in Matlab easily enough with this code:
phacon = stlread("Spine_PhaconModel.stl");
hold on
figure = trisurf(phacon,'FaceColor',[0.6 0.6 0.6], 'Edgecolor','none','FaceLighting','gouraud','AmbientStrength', 0.15, 'MarkerEdgeColor',"r")
view([180, -1])
camlight('headlight');
material('dull');
But when I try to change the code to Python, the output is not what I want. The output plot I'm looking for is something like what is attached below:
Spine ouput from matlab
I tried using functions such as mesh.Mesh.from_file to get the data as an equivalent of the Matlab function stlread() and plot_trisurf as an equivalent of the Matlab function trisurf(). The code I tried is:
fig = plt.figure()
ax = fig.gca(projection='3d')
stl_data = mesh.Mesh.from_file('Spine_PhaconModel.stl')
points = stl_data.points.reshape([-1, 3])
x = points[:,0]
y = points[:,1]
z = points[:,2]
collec = ax.plot_trisurf(x,y,z,linewidth=0.2)
However, I don't know how to make it look visually the same as the first attached image. What I get is this Spine Phacon output using Python
I would really appreciate the help, thanks so much!
A:
From the documentation it seems you need to plot it as 3D polygons; the snippet below shows how. It is a modified version of the example in the docs, because the docs are outdated and matplotlib seems to have changed since then.
from stl import mesh
from mpl_toolkits import mplot3d
from matplotlib import pyplot
# Create a new plot
figure = pyplot.figure()
axes = figure.add_subplot(projection='3d')
# Load the STL files and add the vectors to the plot
your_mesh = mesh.Mesh.from_file(r'your_mesh_path.stl')
poly_collection = mplot3d.art3d.Poly3DCollection(your_mesh.vectors)
poly_collection.set_color((0.7,0.7,0.7)) # play with color
axes.add_collection3d(poly_collection)
# Show the plot to the screen
pyplot.show()
If you want something even better than what Matlab draws, you should use vtkplotlib, as it draws using your GPU as opposed to matplotlib's CPU rasterization; an example is in this answer https://stackoverflow.com/a/57501486/15649230
Edit: I have modified the code above to remove autoscale. To scale the view yourself, rotate with the left mouse button, move with the middle mouse button and zoom with the right mouse button; this answer specifies how to do it in code https://stackoverflow.com/a/65017422/15649230
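For reference, a small way to do that auto-scaling in code with matplotlib itself, continuing the axes and your_mesh variables from the snippet above (this mirrors the usual numpy-stl example, so treat it as a sketch):
# fit the axes to the mesh bounds instead of dragging and zooming manually
scale = your_mesh.points.flatten()
axes.auto_scale_xyz(scale, scale, scale)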
| Converting stl code plot from Matlab to Python | I am looking to be able to plot an stl file in Python. I can do it in Matlab easily enough with this code:
phacon = stlread("Spine_PhaconModel.stl");
hold on
figure = trisurf(phacon,'FaceColor',[0.6 0.6 0.6], 'Edgecolor','none','FaceLighting','gouraud','AmbientStrength', 0.15, 'MarkerEdgeColor',"r")
view([180, -1])
camlight('headlight');
material('dull');
But when I try to change the code to Python, the output is not what I want. The output plot I'm looking for is something like what is attached below:
Spine ouput from matlab
I tried using functions such as mesh.Mesh.from_file to get the data as an equivalent of the Matlab function stlread() and plot_trisurf as an equivalent of the Matlab function trisurf(). The code I tried is:
fig = plt.figure()
ax = fig.gca(projection='3d')
stl_data = mesh.Mesh.from_file(Spine_PhaconModel.stl')
points = stl_data.points.reshape([-1, 3])
x = points[:,0]
y = points[:,1]
z = points[:,2]
collec = ax.plot_trisurf(x,y,z,linewidth=0.2)
However, I don't know how to make it look visually the same as the first attached image. What I get is this Spine Phacon output using Python
I would really appreciate the help, thanks so much!
| [
"from the documentation it seems you need to plot it as 3d polygons, the answer below shows how to plot it, the following is a modified example because the docs are outdated, and matplotlib seems to have changed since then.\nfrom stl import mesh\nfrom mpl_toolkits import mplot3d\nfrom matplotlib import pyplot\n\n# Create a new plot\nfigure = pyplot.figure()\naxes = figure.add_subplot(projection='3d')\n\n# Load the STL files and add the vectors to the plot\nyour_mesh = mesh.Mesh.from_file(r'your_mesh_path.stl')\npoly_collection = mplot3d.art3d.Poly3DCollection(your_mesh.vectors)\npoly_collection.set_color((0.7,0.7,0.7)) # play with color\naxes.add_collection3d(poly_collection)\n\n# Show the plot to the screen\npyplot.show()\n\nif you want something even better than what matlab draws then you should use vtkplotlib as it draws using your GPU as opposed to matplotlib CPU rasterization, an example is in this answer https://stackoverflow.com/a/57501486/15649230\nEdit: i have modified the code above to remove autoscale, now to scale it yourself, you rotate with left mouse button and move with middle mouse button and zoom with right mouse button, and this answer specifies how to do it in code https://stackoverflow.com/a/65017422/15649230\n"
] | [
1
] | [] | [] | [
"matlab",
"plot",
"python"
] | stackoverflow_0074617585_matlab_plot_python.txt |
Q:
TCP sockets. Missing bytes when transmitting data over internet
I have a simple client-server setup that communicates JPG bytes. When running locally it works perfectly. However, when transmitting over the internet the JPGs get corrupted and, when decoded, have severe visual artifacts.
The client is a Rust Tokio application. It consumes a JPEG stream from a camera and pushes the JPEG bytes to a TCP socket.
// Async application to run on the edge (Raspberry Pi)
// Reads MJPEG HTTP stream from provided URL and sends it to the CWS server over TCP
#![warn(rust_2018_idioms)]
use std::convert::TryFrom;
use futures::StreamExt;
use tokio::signal;
use tokio::io::AsyncWriteExt;
use tokio::net::TcpStream;
use std::error::Error;
use std::path::PathBuf;
use structopt::StructOpt;
//DEBUG
use std::fs;
#[derive(StructOpt, Debug)]
#[structopt(name = "basic")]
struct Opt {
#[structopt(long = "stream", required(true))]
stream: String,
#[structopt(long = "server", required(true))]
server: String,
}
async fn acquire_tcp_connection(server: &str) -> Result<TcpStream, Box<dyn Error>> {
// "127.0.0.1:6142"
loop {
match TcpStream::connect(server).await {
Ok(stream) => {
println!("Connected to server");
return Ok(stream);
}
Err(e) => {
println!("Failed to connect to server: {}", e);
tokio::time::sleep(tokio::time::Duration::from_secs(1)).await;
}
}
}
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
let opt = Opt::from_args();
let url = http::Uri::try_from(opt.stream).unwrap();
loop {
let mut tcp_conn = acquire_tcp_connection(&opt.server).await?;
// hyper client
let client = hyper::Client::new();
// Do the request
let res = client.get(url.clone()).await.unwrap();
// Check the status
if !res.status().is_success() {
eprintln!("HTTP request failed with status {}", res.status());
std::process::exit(1);
}
// https://docs.rs/mime/latest/mime/#what-is-mime
// Basically HTTP response content
let content_type: mime::Mime = res
.headers()
.get(http::header::CONTENT_TYPE)
.unwrap()
.to_str()
.unwrap()
.parse()
.unwrap();
assert_eq!(content_type.type_(), "multipart");
let boundary = content_type.get_param(mime::BOUNDARY).unwrap();
let stream = res.into_body();
// https://github.com/scottlamb/multipart-stream-rs
let mut stream = multipart_stream::parse(stream, boundary.as_str());
'outer: while let Some(p) = stream.next().await {
let p = p.unwrap();
// Split the jpeg bytes into chunks of 2048 bytes
for slice in p.body.chunks(2048) {
// DEBUG capture bytes to a file just for debugging
fs::write("tcp_debug.txt", &slice).expect("Unable to write file");
let tcp_result = tcp_conn.write(&slice).await;
match tcp_result {
Ok(_) => {
println!("Sent {} bytes", slice.len());
}
Err(e) => {
println!("Failed to send data: {}", e);
break 'outer;
}
}
}
}
}
Ok(())
}
Server is a Python app that accepts TCP connection and decodes the received bytes:
import asyncio
from threading import Thread
import cv2
import numpy as np
class SingleFrameReader:
"""
Simple API for reading a single frame from a video source
"""
def __init__(self, video_source, ..., tcp=False):
self._video_source = video_source
...
elif tcp:
self._stop_tcp_server = False
self._tcp_image = None
self._tcp_addr = video_source
self._tcp_thread = Thread(target=asyncio.run, args=(self._start_tcp_server(),)).start()
self.read = lambda: self._tcp_image
async def _start_tcp_server(self):
uri, port = self._tcp_addr.split(':')
port = int(port)
print(f"Starting TCP server on {uri}:{port}")
server = await asyncio.start_server(
self.handle_client, uri, port
)
addrs = ', '.join(str(sock.getsockname()) for sock in server.sockets)
print(f'Serving on {addrs}')
async with server:
await server.serve_forever()
async def handle_client(self, reader, writer):
"""
Handle a client connection. Receive JPEGs from the client and have them ready to be ready
"""
client_addr = writer.get_extra_info('peername')
print(f"New connection from {client_addr}")
jpg_bytes = b''
while True:
if self._stop_tcp_server:
break
data = await reader.read(2**16)
#DEBUG
print(f"Received {len(data)} bytes from {client_addr}")
byes_file = open('tcp_bytes_server.txt', 'wb')
if not data:
print("Client disconnected")
break
#DEBUG the corruption issue
byes_file.write(data)
jpg_bytes += data
start_idx = jpg_bytes.find(b'\xff\xd8')
end_idx = jpg_bytes.find(b'\xff\xd9')
if start_idx != -1 and end_idx != -1:
nparr = np.frombuffer(jpg_bytes[start_idx:end_idx+2], np.uint8)
img_np = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
if img_np is None:
continue
self._tcp_image = img_np
jpg_bytes = jpg_bytes[end_idx+2:]
...
I have tried sending different chunk sizes; when sending the entire image at once, the image artifacts are more severe.
I captured to files both the bytes sent from the client (Rust) and the bytes received on the server (Python). The bytes on the server were smaller (~1.6KB vs ~2KB from Rust), and the first received bytes do not match each other.
Again, when running locally this works smoothly: you can see the camera stream in real time. When client and server are separated by the internet, the bytes seem to get corrupted.
A:
To write the bytes to the TCP socket you are calling AsyncWriteExt::write:
let tcp_result = tcp_conn.write(&slice).await;
And quoting the docs:
This function will attempt to write the entire contents of buf, but the entire write may not succeed...
If the return value is Ok(n) then it must be guaranteed that n <= buf.len().
You are looking if there is an error and logging the number of bytes, but doing nothing if they are less than the length of the buffer.
But when will the buffer not be written entirely? Well, usually that might happen when your program is much faster than the network and/or your slices are too big. It looks like when you are connecting to localhost that is not an issue but when you connect to a remote system it is. YMMV.
If you were programming C, you would have to do a loop, advance a pointer and so on. In Rust you can do that too, but it is easier to call AsyncWriteExt::write_all that does the loop for you:
let tcp_result = tcp_conn.write_all(&slice).await;
Note that now tcp_result is a io::Result<()> instead of a io::Result<usize> because now not writing all the bytes in the slice is an error. Exactly what you want.
Also note that the peer, when reading from the TCP connection, also has this potential issue, even if using Python. This line could do a partial read, no matter if the server wrote the whole buffer or not:
data = await reader.read(2**16)
But this line is already in a loop concatenating the data and waiting for the whole image, so it causes no issues.
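If you want the message boundaries to be explicit rather than relying on scanning for the JPEG start and end markers, one sketch (an assumption about a protocol change, not what the current code does) is to have the sender prefix every JPEG with a 4-byte big-endian length and read exactly that many bytes on the Python side:
import asyncio
import struct

async def read_jpeg(reader: asyncio.StreamReader) -> bytes:
    header = await reader.readexactly(4)        # 4-byte big-endian length prefix
    (length,) = struct.unpack(">I", header)
    return await reader.readexactly(length)     # loops internally until every byte arrives

With framing like this, a partial TCP read can never be mistaken for a complete image.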
| TCP sockets. Missing bytes when transmitting data over internet | I have simple client-server setup that communicate JPG bytes. When running locally it works perfectly. However when transmitting over internet JPGs get corrupted and when decoded have severe visual artifacts.
Client is a Rust Tokio application. It consumes jpeg stream from a camera and pushed the jpeg bytes to TCP socket.
// Async application to run on the edge (Raspberry Pi)
// Reads MJPEG HTTP stream from provided URL and sends it to the CWS server over TCP
#![warn(rust_2018_idioms)]
use std::convert::TryFrom;
use futures::StreamExt;
use tokio::signal;
use tokio::io::AsyncWriteExt;
use tokio::net::TcpStream;
use std::error::Error;
use std::path::PathBuf;
use structopt::StructOpt;
//DEBUG
use std::fs;
#[derive(StructOpt, Debug)]
#[structopt(name = "basic")]
struct Opt {
#[structopt(long = "stream", required(true))]
stream: String,
#[structopt(long = "server", required(true))]
server: String,
}
async fn acquire_tcp_connection(server: &str) -> Result<TcpStream, Box<dyn Error>> {
// "127.0.0.1:6142"
loop {
match TcpStream::connect(server).await {
Ok(stream) => {
println!("Connected to server");
return Ok(stream);
}
Err(e) => {
println!("Failed to connect to server: {}", e);
tokio::time::sleep(tokio::time::Duration::from_secs(1)).await;
}
}
}
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
let opt = Opt::from_args();
let url = http::Uri::try_from(opt.stream).unwrap();
loop {
let mut tcp_conn = acquire_tcp_connection(&opt.server).await?;
// hyper client
let client = hyper::Client::new();
// Do the request
let res = client.get(url.clone()).await.unwrap();
// Check the status
if !res.status().is_success() {
eprintln!("HTTP request failed with status {}", res.status());
std::process::exit(1);
}
// https://docs.rs/mime/latest/mime/#what-is-mime
// Basically HTTP response content
let content_type: mime::Mime = res
.headers()
.get(http::header::CONTENT_TYPE)
.unwrap()
.to_str()
.unwrap()
.parse()
.unwrap();
assert_eq!(content_type.type_(), "multipart");
let boundary = content_type.get_param(mime::BOUNDARY).unwrap();
let stream = res.into_body();
// https://github.com/scottlamb/multipart-stream-rs
let mut stream = multipart_stream::parse(stream, boundary.as_str());
'outer: while let Some(p) = stream.next().await {
let p = p.unwrap();
// Split the jpeg bytes into chunks of 2048 bytes
for slice in p.body.chunks(2048) {
// DEBUG capture bytes to a file just for debugging
fs::write("tcp_debug.txt", &slice).expect("Unable to write file");
let tcp_result = tcp_conn.write(&slice).await;
match tcp_result {
Ok(_) => {
println!("Sent {} bytes", slice.len());
}
Err(e) => {
println!("Failed to send data: {}", e);
break 'outer;
}
}
}
}
}
Ok(())
}
Server is a Python app that accepts TCP connection and decodes the received bytes:
import asyncio
from threading import Thread
import cv2
import numpy as np
class SingleFrameReader:
"""
Simple API for reading a single frame from a video source
"""
def __init__(self, video_source, ..., tcp=False):
self._video_source = video_source
...
elif tcp:
self._stop_tcp_server = False
self._tcp_image = None
self._tcp_addr = video_source
self._tcp_thread = Thread(target=asyncio.run, args=(self._start_tcp_server(),)).start()
self.read = lambda: self._tcp_image
async def _start_tcp_server(self):
uri, port = self._tcp_addr.split(':')
port = int(port)
print(f"Starting TCP server on {uri}:{port}")
server = await asyncio.start_server(
self.handle_client, uri, port
)
addrs = ', '.join(str(sock.getsockname()) for sock in server.sockets)
print(f'Serving on {addrs}')
async with server:
await server.serve_forever()
async def handle_client(self, reader, writer):
"""
Handle a client connection. Receive JPEGs from the client and have them ready to be ready
"""
client_addr = writer.get_extra_info('peername')
print(f"New connection from {client_addr}")
jpg_bytes = b''
while True:
if self._stop_tcp_server:
break
data = await reader.read(2**16)
#DEBUG
print(f"Received {len(data)} bytes from {client_addr}")
byes_file = open('tcp_bytes_server.txt', 'wb')
if not data:
print("Client disconnected")
break
#DEBUG the corruption issue
byes_file.write(data)
jpg_bytes += data
start_idx = jpg_bytes.find(b'\xff\xd8')
end_idx = jpg_bytes.find(b'\xff\xd9')
if start_idx != -1 and end_idx != -1:
nparr = np.frombuffer(jpg_bytes[start_idx:end_idx+2], np.uint8)
img_np = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
if img_np is None:
continue
self._tcp_image = img_np
jpg_bytes = jpg_bytes[end_idx+2:]
...
I have tried sending different chunk sizes, when sending the entire image at a time, image artifact are more severe.
I captured the bytes to a file that were sent from the client (rust) and the bytes received on server (python). The bytes from the server were smaller ~1.6KB vs ~2KB from the rust. And the first received bytes do not match each other.
Again, when running locally it this works smoothly, you can see the camera stream in real time. When client and server are separated by the internet the bytes seem to get corrupted.
| [
"To write the bytes to the TCP socket you are calling AsyncWriteExt::write:\nlet tcp_result = tcp_conn.write(&slice).await;\n\nAnd quoting the docs:\n\nThis function will attempt to write the entire contents of buf, but the entire write may not succeed...\nIf the return value is Ok(n) then it must be guaranteed that n <= buf.len().\n\nYou are looking if there is an error and logging the number of bytes, but doing nothing if they are less than the length of the buffer.\nBut when will the buffer not be written entirely? Well, usually that might happen when your program is much faster than the network and/or your slices are too big. It looks like when you are connecting to localhost that is not an issue but when you connect to a remote system it is. YMMV.\nIf you were programming C, you would have to do a loop, advance a pointer and so on. In Rust you can do that too, but it is easier to call AsyncWriteExt::write_all that does the loop for you:\nlet tcp_result = tcp_conn.write_all(&slice).await;\n\nNote that now tcp_result is a io::Result<()> instead of a io::Result<usize> because now not writing all the bytes in the slice is an error. Exactly what you want.\nAlso note that the peer, when reading from the TCP connection, also has this potential issue, even if using Python. This line could do a partial read, no matter if the server wrote the whole buffer or not:\n data = await reader.read(2**16)\n\nBut this line is already in a loop concatenating the data and waiting for the whole image, so it causes no issues.\n"
] | [
1
] | [] | [] | [
"python",
"python_asyncio",
"rust",
"rust_tokio",
"tcp"
] | stackoverflow_0074608177_python_python_asyncio_rust_rust_tokio_tcp.txt |
Q:
How to copy a file in the time stamp folder
I make a directory with the current timestamp 12:32:57 PM. I want to copy the file into that time folder; how can I do this? I run the following code
import os
import shutil
cmd_mkdir = 'mkdir /home/mubashir/catkin_ws/src/germany1_trush/ros_bag_saved/"$(date +%r)"'
os.system(cmd_mkdir)
origin_past=f'/home/mubashir/catkin_ws/src/germany1_trush/rosbag/1.bag'
target_past=f'/home/mubashir/catkin_ws/src/germany1_trush/ros_bag_saved/$(date +%r)/1.bag'
shutil.copy(origin_past,target_past)
A:
What you want to do is grab the time in python and save it in a variable, that way you can reference the same time for both creating the folder and copying the file into it.
You could use
import os
import shutil
from datetime import datetime
time = datetime.now().strftime('%I:%M:%S %p')
This will get the time in the format you have above (ie 12:32:57 PM)
Then update your commands to use it:
cmd_mkdir = f'mkdir /home/mubashir/catkin_ws/src/germany1_trush/ros_bag_saved/{time}'
target_past = f'/home/mubashir/catkin_ws/src/germany1_trush/ros_bag_saved/{time}/1.bag'
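Putting the pieces together, a minimal sketch of the whole operation without shelling out to mkdir at all (paths copied from the question):
import os
import shutil
from datetime import datetime

stamp = datetime.now().strftime('%I:%M:%S %p')
target_dir = f'/home/mubashir/catkin_ws/src/germany1_trush/ros_bag_saved/{stamp}'
os.makedirs(target_dir, exist_ok=True)          # replaces the os.system('mkdir ...') call
shutil.copy('/home/mubashir/catkin_ws/src/germany1_trush/rosbag/1.bag',
            os.path.join(target_dir, '1.bag'))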
| How to copy a file in the time stamp folder | I make a directory with the current timestamp 12:32:57 PM. I want to copy the file to the time folder who can I do this? I run the following code
import os
import shutil
cmd_mkdir = 'mkdir /home/mubashir/catkin_ws/src/germany1_trush/ros_bag_saved/"$(date +%r)"'
os.system(cmd_mkdir)
origin_past=f'/home/mubashir/catkin_ws/src/germany1_trush/rosbag/1.bag'
target_past=f'/home/mubashir/catkin_ws/src/germany1_trush/ros_bag_saved/$(date +%r)/1.bag'
shutil.copy(origin_past,target_past)
| [
"What you want to do is grab the time in python and save it in a variable, that way you can reference the same time for both creating the folder and copying the file into it.\nYou could use\nimport os\nimport shuitil\nfrom datetime import datetime\ntime = datetime.now().strftime('%I:%M:%S %p')\n\nThis will get the time in the format you have above (ie 12:32:57 PM)\nThen update your commands to use it:\ncmd_mkdir = f'mkdir /home/mubashir/catkin_ws/src/germany1_trush/ros_bag_saved/{time}'\ntarget_past = f'/home/mubashir/catkin_ws/src/germany1_trush/ros_bag_saved/{time}/1.bag'\n\n"
] | [
0
] | [] | [] | [
"linux",
"mkdir",
"python",
"timestamp"
] | stackoverflow_0074652356_linux_mkdir_python_timestamp.txt |
Q:
How can i read large EmailMessage object using wb+ if the file is large using python3?
I wish to write a very large EmailMessage object in binary, in small chunks, just like it is done with buffer.read(1024). Is there a way?
I have tried the following, which is wrong because the EmailMessage object does not have a read() method:
with open(complete_path,'wb+') as fp:
while True:
x = msg.read(1024)
if x:
fp.write(x.as_bytes())
else:
break
fp.write(x.as_bytes())
Here msg is the EmailMessage object and I am trying to save it at location complete_path. When tried with the code below it works for most of the mails, but for a few mails it does not complete, gets stuck and generates zombies:
with open(complete_path,'wb+') as fp:
fp.write(msg.as_bytes())
I am reading the email in a fuglu plugin and not from a file.
Please guide.
A:
Try using this:
import email
import os
# Open the file in binary mode
with open(complete_path, 'rb') as fp:
msg = email.message_from_binary_file(fp)
# Write the message to a file in binary mode
with open(complete_path, 'wb+') as fp:
fp.write(msg.as_bytes())
OR
import email
import os
def read_email_from_file(filename):
with open(filename, 'rb') as fp:
msg = email.message_from_binary_file(fp)
return msg
def write_email_to_file(msg, filename):
with open(filename, 'wb+') as fp:
fp.write(msg.as_bytes())
def main():
msg = read_email_from_file(complete_path)
write_email_to_file(msg, 'email.txt')
if __name__ == '__main__':
main()
I hope it would help. Change it according to your requirements if you like.
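If the worry is building the whole msg.as_bytes() blob in memory for very large messages, another sketch is to stream the message straight into the file with email.generator.BytesGenerator, which serializes it out piece by piece instead of creating a second full copy (msg and complete_path are assumed to be the objects from the question):
from email.generator import BytesGenerator

with open(complete_path, 'wb') as fp:
    BytesGenerator(fp).flatten(msg)   # writes the message directly to the file object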
| How can i read large EmailMessage object using wb+ if the file is large using python3? | I wish to write a very large EmailMessage object using binary in small sizes just like it is done using buffer.read(1024). IS there a way?
I have tried like this which is wrong as the Email Message object is not having read() method:
with open(complete_path,'wb+') as fp:
while True:
x = msg.read(1024)
if x:
fp.write(x.as_bytes())
else:
break
fp.write(x.as_bytes())
Here msg is the EmailMessage object and trying to save it at location complete_path When tried with below code it works for most of the mails but does not complete and gets stuck for few mails and generated zombies:
with open(complete_path,'wb+') as fp:
fp.write(msg.as_bytes())
I am reading the email in a fuglu plugin and not from a file.
Please guide.
| [
"Try using this:\n import email\n import os\n\n # Open the file in binary mode\n with open(complete_path, 'rb') as fp:\n msg = email.message_from_binary_file(fp)\n\n # Write the message to a file in binary mode\n with open(complete_path, 'wb+') as fp:\n fp.write(msg.as_bytes())\n\nOR\nimport email\nimport os\n\ndef read_email_from_file(filename):\n with open(filename, 'rb') as fp:\n msg = email.message_from_binary_file(fp)\n return msg\n\ndef write_email_to_file(msg, filename):\n with open(filename, 'wb+') as fp:\n fp.write(msg.as_bytes())\n\n\ndef main():\n\n msg = read_email_from_file(complete_path)\n write_email_to_file(msg, 'email.txt')\n\n\nif __name__ == '__main__':\n\n main()\n\nI hope it would help. Change it according to your requirements if you like.\n"
] | [
0
] | [] | [] | [
"email",
"python"
] | stackoverflow_0074579971_email_python.txt |
Q:
ElementNotInteractableException: Message: Element could not be scrolled into view
I am trying to press the download button on this page
https://data.unwomen.org/data-portal/sdg?annex=All&finic[]=SUP_1_1_IPL_P&flocat[]=478&flocat[]=174&flocat[]=818&flocat[]=504&flocat[]=729&flocat[]=788&flocat[]=368&flocat[]=400&flocat[]=275&flocat[]=760&fys[]=2015&fyr[]=2030&fca[ALLAGE]=ALLAGE&fca[<15Y]=<15Y&fca[15%2B]=15%2B&fca[15-24]=15-24&fca[25-34]=25-34&fca[35-54]=35-54&fca[55%2B]=55%2B&tab=table
i am using python selenium with firefox and this is what i tried:
driver.set_page_load_timeout(30)
driver.get(url)
time.sleep(1)
WebDriverWait(driver, timeout=20).until(EC.presence_of_element_located((By.ID, 'SDG-Indicator-Dashboard')))
time.sleep(1)
download_div = driver.find_element(By.CLASS_NAME, 'float-buttons-wrap')
buttons = download_div.find_elements(By.TAG_NAME, 'button')
buttons_attributes = [i.get_attribute('title') for i in buttons]
download_button_index = buttons_attributes.index('Download')
buttons[download_button_index].location_once_scrolled_into_view
buttons[download_button_index].click()
I keep getting the same error:
ElementNotInteractableException: Message: Element <button class="btn btn-outline-light btn-icons" type="button"> could not be scrolled into view
even though I am getting the correct element. I also tried using JS like this:
driver.execute_script("return arguments[0].scrollIntoView(true);", element)
which also did not work.
A:
You have to modify the XPath, try the below code:
driver.get("https://data.unwomen.org/data-portal/sdg?annex=All&finic[]=SUP_1_1_IPL_P&flocat[]=478&flocat[]=174&flocat[]=818&flocat[]=504&flocat[]=729&flocat[]=788&flocat[]=368&flocat[]=400&flocat[]=275&flocat[]=760&fys[]=2015&fyr[]=2030&fca[ALLAGE]=ALLAGE&fca[<15Y]=<15Y&fca[15%2B]=15%2B&fca[15-24]=15-24&fca[25-34]=25-34&fca[35-54]=35-54&fca[55%2B]=55%2B&tab=table")
driver.implicitly_wait(15)
time.sleep(2)
download_btn = driver.find_element(By.XPATH, "(.//button[@type='button' and @title='Download'])[2]")
download_btn.location_once_scrolled_into_view
time.sleep(1)
download_btn.click()
A:
1. You need to make sure you are giving enough time for the page to finish loading.
2. Once you start looking for the button element, it's better to use try/except blocks and sleep when an exception occurs, to give the scripts and elements the time they need to load.
3. Try finding the element by XPath instead of by list index, and be aware that XPath indexing starts at [1] rather than [0].
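Putting those points together, a hedged sketch that waits until the button is actually clickable before clicking, reusing the driver object and the XPath from the first answer (the 20-second timeout is an arbitrary choice):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 20)
btn = wait.until(EC.element_to_be_clickable(
    (By.XPATH, "(.//button[@type='button' and @title='Download'])[2]")))
driver.execute_script("arguments[0].scrollIntoView({block: 'center'});", btn)  # center it in view
btn.click()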
| ElementNotInteractableException: Message: Element could not be scrolled into view | I am trying to press a the download on this page
https://data.unwomen.org/data-portal/sdg?annex=All&finic[]=SUP_1_1_IPL_P&flocat[]=478&flocat[]=174&flocat[]=818&flocat[]=504&flocat[]=729&flocat[]=788&flocat[]=368&flocat[]=400&flocat[]=275&flocat[]=760&fys[]=2015&fyr[]=2030&fca[ALLAGE]=ALLAGE&fca[<15Y]=<15Y&fca[15%2B]=15%2B&fca[15-24]=15-24&fca[25-34]=25-34&fca[35-54]=35-54&fca[55%2B]=55%2B&tab=table
i am using python selenium with firefox and this is what i tried:
driver.set_page_load_timeout(30)
driver.get(url)
time.sleep(1)
WebDriverWait(driver, timeout=20).until(EC.presence_of_element_located((By.ID, 'SDG-Indicator-Dashboard')))
time.sleep(1)
download_div = driver.find_element(By.CLASS_NAME, 'float-buttons-wrap')
buttons = download_div.find_elements(By.TAG_NAME, 'button')
buttons_attributes = [i.get_attribute('title') for i in buttons]
download_button_index = buttons_attributes.index('Download')
buttons[download_button_index].location_once_scrolled_into_view
buttons[download_button_index].click()```
i keep getting the same error:
ElementNotInteractableException: Message: Element <button class="btn btn-outline-light btn-icons" type="button"> could not be scrolled into view
eventho i am getting the correct element and i tried using js like this:
```driver.execute_script("return arguments[0].scrollIntoView(true);", element)```
also did not work.
| [
"You have to modify the XPath, try the below code:\ndriver.get(\"https://data.unwomen.org/data-portal/sdg?annex=All&finic[]=SUP_1_1_IPL_P&flocat[]=478&flocat[]=174&flocat[]=818&flocat[]=504&flocat[]=729&flocat[]=788&flocat[]=368&flocat[]=400&flocat[]=275&flocat[]=760&fys[]=2015&fyr[]=2030&fca[ALLAGE]=ALLAGE&fca[<15Y]=<15Y&fca[15%2B]=15%2B&fca[15-24]=15-24&fca[25-34]=25-34&fca[35-54]=35-54&fca[55%2B]=55%2B&tab=table\")\ndriver.implicitly_wait(15)\ntime.sleep(2)\ndownload_btn = driver.find_element(By.XPATH, \"(.//button[@type='button' and @title='Download'])[2]\")\ndownload_btn.location_once_scrolled_into_view\ntime.sleep(1)\ndownload_btn.click()\n\n",
"1.You need to make sure you are giving enough time to complete page loading\n2.Once you started to find button element , it's better to use try/catch blocks multiple times and put sleep function when exception occures to provide appropriate time to load scripts and elements.\n3.Try finding by xpath instead of finding elements by indexes and be aware that you need to use the index [1] instead of [0] in a xpath string\n"
] | [
2,
0
] | [] | [] | [
"download",
"python",
"python_3.x",
"selenium",
"web_scraping"
] | stackoverflow_0074652258_download_python_python_3.x_selenium_web_scraping.txt |
Q:
I have a problem about DuplicateWidgetID. When program runs it says there are multiple identical st.checkbox widgets with the same generated key
import streamlit as st
import functions
list = functions.get_list()
def add_todo():
todo = st.session_state["new_todo"] + "\n"
list.append(todo)
functions.write_list(list)
st.title("Simple To-Do app")
st.subheader("This is my todo app")
st.write("Increase your productivity")
for todo in list:
st.checkbox(todo)
st.text_input(label = "", placeholder = "Add a new todo...",
on_change=add_todo, key = 'new_todo')
When a widget is created, it's assigned an internal key based on its structure. Multiple widgets with an identical structure will result in the same internal key, which causes this error.
To fix this error, please pass a unique key argument to st.checkbox.
Traceback:
File "C:\Users\seraf\Desktop\Final project\web.py", line 17, in <module>
st.checkbox(todo)
A:
This happens because Streamlit auto-generates a widget's key from its parameters, so two widgets created with identical arguments collide; here the traceback points at the st.checkbox calls in the loop (two todos with the same text produce identical checkboxes). Passing an explicit, unique key to each widget fixes it, and a loop counter is an easy way to generate one.
Example:
unique_value = 0
for todo in list:
    unique_value += 1
    if st.checkbox(todo, key=f'todo{unique_value}'):
        st.text_input(label="",
                      placeholder="Add a new todo...",
                      on_change=add_todo,
                      key=f'new_todo{unique_value}')
| I have a problem about DuplicateWidgetID. When program runs it says there are multiple identical st.checkbox widgets with the same generated key | import streamlit as st
import functions
list = functions.get_list()
def add_todo():
todo = st.session_state["new_todo"] + "\n"
list.append(todo)
functions.write_list(list)
st.title("Simple To-Do app")
st.subheader("This is my todo app")
st.write("Increase your productivity")
for todo in list:
st.checkbox(todo)
st.text_input(label = "", placeholder = "Add a new todo...",
on_change=add_todo, key = 'new_todo')
When a widget is created, it's assigned an internal key based on its structure. Multiple widgets with an identical structure will result in the same internal key, which causes this error.
To fix this error, please pass a unique key argument to st.checkbox.
Traceback:
File "C:\Users\seraf\Desktop\Final project\web.py", line 17, in <module>
st.checkbox(todo)
| [
"This happens because every new st.text_input() expects a unique key value. You can use an increment in your for-loop to always re-assign a new key value. for the next st.text_input() to be created.\nExample:\nunique_value = 0\nfor todo in list:\n unique_value += 1\n if st.checkbox(todo):\n st.text_input(label=\"\",\n placeholder=\"Add a new todo...\",\n on_change=add_todo,\n key=f'new_todo{unique_value}')\n\n"
] | [
0
] | [] | [] | [
"pysimplegui",
"python",
"session_state",
"streamlit"
] | stackoverflow_0074650034_pysimplegui_python_session_state_streamlit.txt |
Q:
How can i fix this issue TypeError: 'SMTP_SSL' object is not callable
How can I fix this error in Python: TypeError: 'SMTP_SSL' object is not callable?
This is the function:
import smtplib, ssl
def show_im_having_fun():
return
smtplib.SMTP_SSL("smtp.gmail.com", 465, context=ssl.create_default_context()).login('[email protected]', 'pass')
smtplib.SMTP_SSL("smtp.gmail.com", 465, context=ssl.create_default_context())('[email protected]',"receemail", message = 'Subject: {}\n\n{}'.format('this is subject', """\ Hello World """))
print("sent")
and I am importing and calling this function like this in another file
from fun import show_im_having_fun
show_im_having_fun()
print(show_im_having_fun())
A:
Unlike in some other programming languages, you can't chain the object's initializer like that; the error message tells you as much.
Code which is not inside a def or class or similar will run when you import that file, which is probably not what you want here. The immediate fix would be to move your SMTP code to a different file, and only leave the function you actually wanted to import in this file.
(Also, the function will simply return None so that's what you end up printing. But I suspect this code is trimmed down to such a minimal form that discussing it is pointless.)
If I am able to guess what you are actually trying to do with smtplib, this should be a minimal fix:
# Probably don't stack multiple imports on the same line
# but we don't really need ssl here anyway
import smtplib
with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
server.login('[email protected]', 'pass')
server.sendmail(
'[email protected]', "receemail",
message='Subject: {}\n\n{}'.format('this is subject', """\ Hello World """))
print("sent")
Assembling an SMTP message by pasting together strings like that is extremely error-prone, though; you'll want to use the email.message.EmailMessage class going forward;
from email.message import EmailMessage
import smtplib
message = EmailMessage()
message["from"] = '[email protected]'
message["to"] = "receemail"
message["subject" ] = 'this is subject'
message.set_content("""\ Hello World """)
with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
server.login('[email protected]', 'pass')
server.send_message(message)
# probably don't
# print("sent")
This seems a bit more verbose, but it is also hugely more robust; your simple attempt will work fine if all you need is a few short lines of English text, but breaks with a fanfare and big explosions if you try to send non-English text, rich content, attachments, or etc.
As an aside, many on-line tutorials about sending email from Python which you find on the Internet are obsolescent. The email library was overhauled in Python 3.6 (which is when it became official; the code was introduced already in 3.3 as optional) and is now quite a bit more versatile and logical than the MIMEBase etc examples you will often find in search engines and on tutorial sites. A good intro for many use cases is in the current examples from the email documentation.
| How can i fix this issue TypeError: 'SMTP_SSL' object is not callable | How can i fix this error in python TypeError: 'SMTP_SSL' object is not callable
this is a function
import smtplib, ssl
def show_im_having_fun():
return
smtplib.SMTP_SSL("smtp.gmail.com", 465, context=ssl.create_default_context()).login('[email protected]', 'pass')
smtplib.SMTP_SSL("smtp.gmail.com", 465, context=ssl.create_default_context())('[email protected]',"receemail", message = 'Subject: {}\n\n{}'.format('this is subject', """\ Hello World """))
print("sent")
and i am importing and calling this function like this in other file
from fun import show_im_having_fun
show_im_having_fun()
print(show_im_having_fun())
| [
"Unlike in some other programming languages, you can't chain the object's initializer like that; the error message tells you as much.\nCode which is not inside a def or class or similar will run when you import that file, which is probably not what you want here. The immediate fix would be to move your SMTP code to a different file, and only leave the function you actually wanted to import in this file.\n(Also, the function will simply return None so that's what you end up printing. But I suspect this code is trimmed down to such a minimal form that discussing it is pointless.)\nIf I am able to guess what you are actually trying to do with smtplib, this should be a minimal fix:\n# Probably don't stack multiple imports on the same line\n# but we don't really need ssl here anyway\nimport smtplib\n\n\nwith smtplib.SMTP_SSL(\"smtp.gmail.com\", 465) as server:\n server.login('[email protected]', 'pass')\n server.sendmail(\n '[email protected]', \"receemail\", \n message='Subject: {}\\n\\n{}'.format('this is subject', \"\"\"\\ Hello World \"\"\"))\n\nprint(\"sent\")\n\nAssembling an SMTP message by pasting together strings like that is extremely error-prone, though; you'll want to use the email.message.EmailMessage class going forward;\nfrom email.message import EmailMessage\nimport smtplib\n\n\nmessage = EmailMessage()\nmessage[\"from\"] = '[email protected]'\nmessage[\"to\"] = \"receemail\"\nmessage[\"subject\" ] = 'this is subject'\nmessage.set_content(\"\"\"\\ Hello World \"\"\")\n\nwith smtplib.SMTP_SSL(\"smtp.gmail.com\", 465) as server:\n server.login('[email protected]', 'pass')\n server.send_message(message)\n\n# probably don't\n# print(\"sent\")\n\nThis seems a bit more verbose, but it is also hugely more robust; your simple attempt will work fine if all you need is a few short lines of English text, but breaks with a fanfare and big explosions if you try to send non-English text, rich content, attachments, or etc.\nAs an aside, many on-line tutorials about sending email from Python which you find on the Internet are obsolescent. The email library was overhauled in Python 3.6 (which is when it became official; the code was introduced already in 3.3 as optional) and is now quite a bit more versatile and logical than the MIMEBase etc examples you will often find in search engines and on tutorial sites. A good intro for many use cases is in the current examples from the email documentation.\n"
] | [
0
] | [] | [] | [
"python",
"python_3.x",
"ssl"
] | stackoverflow_0074651581_python_python_3.x_ssl.txt |
Q:
how to take take multiple pages as input in pdfplumber?
I am using pdfplumber to take input from a pdf file.
My question is: how can I take input from pages 1 to 7 using pdfplumber?
I'm using this code:
filename = "1st Year 1stSemester.pdf"
pdf = pdfplumber.open(filename)
totalpages = len(pdf.pages)
p0 = pdf.pages[0-6]
table = p0.extract_table()
table
I want to take input from page 1 to 7
I've also tried p0 = pdf.pages[0,1,2,3,6]. It also doesn't work.
A:
for i in range(0, 7):
    print(pdf.pages[i].extract_text())
To show the pages, use a for loop over the desired page indices. In your case you want pages 1-7; page indexing starts at 0, just like an array, so pages 1-7 correspond to indices 0-6, which is exactly what range(0, 7) covers.
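And since the question asks about extract_table rather than text, a minimal sketch for collecting the tables from pages 1 to 7 (same filename as in the question):
import pdfplumber

tables = []
with pdfplumber.open("1st Year 1stSemester.pdf") as pdf:
    for page in pdf.pages[:7]:              # pages 1-7 are indices 0-6
        tables.append(page.extract_table())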
| how to take take multiple pages as input in pdfplumber? | I am using pdfplumber to take input from a pdf file.
My question is how can I take from page 1-7 input using pdfplumber.
I'm using this code:
filename = "1st Year 1stSemester.pdf"
pdf = pdfplumber.open(filename)
totalpages = len(pdf.pages)
p0 = pdf.pages[0-6]
table = p0.extract_table()
table
I want to take input from page 1 to 7
I've also tried p0 = pdf.pages[0,1,2,3,6]. It also doesn't work.
| [
"for i in range (0,7):\n print(filename.pages[i].extract_text)\n\nTo show the pages, use the for loop. Input the desired pages you want to show for example in your case you want to display the pages 1-7, just like how you count an array, it start with 0 till the last page which is 7.\n"
] | [
0
] | [] | [] | [
"pdf",
"pdfplumber",
"python"
] | stackoverflow_0070461352_pdf_pdfplumber_python.txt |
Q:
missing 2 required positional arguments flask python
I'm developing a web application where I pull and use Bluetooth data via the browser using the Bleak library. I do not intend to connect to a database. My only purpose is to keep the person's Bluetooth data in the browser (cookies or sessions) as well. I haven't gotten to this stage yet. At the moment, I just need to view the detailed Bluetooth data in the browser. But I am getting this error.
"TypeError: devices() missing 2 required positional arguments: 'device' and 'advertisement_data'"
Look at my Codes
from flask import Flask
import asyncio
from bleak import BleakScanner
import asyncio
from uuid import UUID
import json
from construct import Array, Byte, Const, Int8sl, Int16ub, Struct
from construct.core import ConstError
from bleak import BleakScanner
from bleak.backends.device import BLEDevice
from bleak.backends.scanner import AdvertisementData
app = Flask(__name__)
ibeacon_format = Struct(
"type_length" / Const(b"\x02\x15"),
"uuid" / Array(16, Byte),
"major" / Int16ub,
"minor" / Int16ub,
"power" / Int8sl,
)
class UUIDEncoder(json.JSONEncoder):
def default(self, uuid):
if isinstance(uuid, UUID):
# if the obj is uuid, we simply return the value of uuid
return uuid.hex
return json.JSONEncoder.default(self, uuid)
@app.route("/")
def hello_world():
return "<p>Hello, World!</p>"
@app.route("/beacons")
async def devices(device: BLEDevice, advertisement_data: AdvertisementData):
try:
macadress = device.address
name = advertisement_data.local_name
apple_data = advertisement_data.manufacturer_data[0x004C]
ibeacon = ibeacon_format.parse(apple_data)
uuid = UUID(bytes=bytes(ibeacon.uuid))
minor = ibeacon.minor
major = ibeacon.major
power = ibeacon.power
rssi = device.rssi
rssi = int(rssi)
beacons = {
"Mac Adress": macadress,
"Local Name": name,
"UUID": uuid,
"Major": major,
"Minor": minor,
"TX Power": power,
"RSSI": rssi
}
if (beacons["Local Name"] == "POI" and beacons["RSSI"] <= -40 and beacons["RSSI"] >= -80):
return print(beacons)
# with open("data.json","a") as file:
# json.dump(beacons,file,sort_keys=True,indent=4,skipkeys=True,cls=UUIDEncoder,separators=(",",":"))
else:
pass
except KeyError:
# Apple company ID (0x004c) not found
pass
except ConstError:
# No iBeacon (type 0x02 and length 0x15)
pass
async def main():
"""Scan for devices."""
scanner = BleakScanner()
scanner.register_detection_callback(devices)
while (True):
await scanner.start()
await asyncio.sleep(0.5)
await scanner.stop()
if __name__ == '__main__':
app.run(host="127.0.0.0", port=5000, debug=True)
A:
The error comes from Flask, not from Bleak. Because devices is decorated with @app.route("/beacons"), Flask calls it with no arguments whenever that URL is requested, while Bleak's
scanner.register_detection_callback(devices)
is what supplies the device and advertisement_data arguments. Use two separate functions: a detection callback that takes the two arguments, and a Flask view that takes none.
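A minimal sketch of that separation (latest_beacons and detection_callback are made-up names for illustration): Bleak calls the two-argument callback and stores what it finds, while the Flask view, which Flask calls with no arguments, just reports it:
from flask import Flask

app = Flask(__name__)
latest_beacons = []                               # filled by the BLE detection callback

def detection_callback(device, advertisement_data):
    # Bleak supplies both arguments when it invokes this; Flask never calls it
    latest_beacons.append({"address": device.address, "rssi": device.rssi})

@app.route("/beacons")
def beacons():
    # the view takes no positional arguments, so Flask can dispatch to it
    return {"beacons": latest_beacons}

# elsewhere, in the scanner setup: scanner.register_detection_callback(detection_callback)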
| missing 2 required positional arguments flask python | I'm developing a web application where I pull and use bluetooth data via the browser via the Bleak library.I do not intend to connect to the database. My only purpose is to keep the person's bluettoh data on the browser (cookies or sessions) as well. I haven't gotten to this stage yet. At the moment, I just need to view the detailed bluetooth data on the browser. But I am getting this error.
"TypeError: devices() missing 2 required positional arguments: 'device' and 'advertisement_data'"
Look at my Codes
from flask import Flask
import asyncio
from bleak import BleakScanner
import asyncio
from uuid import UUID
import json
from construct import Array, Byte, Const, Int8sl, Int16ub, Struct
from construct.core import ConstError
from bleak import BleakScanner
from bleak.backends.device import BLEDevice
from bleak.backends.scanner import AdvertisementData
app = Flask(__name__)
ibeacon_format = Struct(
"type_length" / Const(b"\x02\x15"),
"uuid" / Array(16, Byte),
"major" / Int16ub,
"minor" / Int16ub,
"power" / Int8sl,
)
class UUIDEncoder(json.JSONEncoder):
def default(self, uuid):
if isinstance(uuid, UUID):
# if the obj is uuid, we simply return the value of uuid
return uuid.hex
return json.JSONEncoder.default(self, uuid)
@app.route("/")
def hello_world():
return "<p>Hello, World!</p>"
@app.route("/beacons")
async def devices(device: BLEDevice, advertisement_data: AdvertisementData):
try:
macadress = device.address
name = advertisement_data.local_name
apple_data = advertisement_data.manufacturer_data[0x004C]
ibeacon = ibeacon_format.parse(apple_data)
uuid = UUID(bytes=bytes(ibeacon.uuid))
minor = ibeacon.minor
major = ibeacon.major
power = ibeacon.power
rssi = device.rssi
rssi = int(rssi)
beacons = {
"Mac Adress": macadress,
"Local Name": name,
"UUID": uuid,
"Major": major,
"Minor": minor,
"TX Power": power,
"RSSI": rssi
}
if (beacons["Local Name"] == "POI" and beacons["RSSI"] <= -40 and beacons["RSSI"] >= -80):
return print(beacons)
# with open("data.json","a") as file:
# json.dump(beacons,file,sort_keys=True,indent=4,skipkeys=True,cls=UUIDEncoder,separators=(",",":"))
else:
pass
except KeyError:
# Apple company ID (0x004c) not found
pass
except ConstError:
# No iBeacon (type 0x02 and length 0x15)
pass
async def main():
"""Scan for devices."""
scanner = BleakScanner()
scanner.register_detection_callback(devices)
while (True):
await scanner.start()
await asyncio.sleep(0.5)
await scanner.stop()
if __name__ == '__main__':
app.run(host="127.0.0.0", port=5000, debug=True)
| [
"Don't you call the function devices in the line:\nscanner.register_detection_callback(devices)\nIf so you have to give it the two arguments device and advertisement_data.\n"
] | [
0
] | [] | [] | [
"bluetooth",
"flask",
"python",
"web"
] | stackoverflow_0074652327_bluetooth_flask_python_web.txt |
Q:
python.h file not found after using command on cygwin
I tried to include python.h but received fatal error
I downloaded a code which includes python.h, but i received fatal error python.h not found. I followed stackoverflow using the command shown below on cygwin
apt-cyg install python-devel
However, in \usr\include i only find a folder python2.7 with pyconfig.h file only. Is there a way I can copy python.h file to include?
A:
Do not use apt-cyg, as it is unmaintained and has not been updated for the latest setup.ini format.
Use Cygwin Setup.
You should install python3-devel or python39-devel
https://cygwin.com/packages/summary/python3-devel.html
| python.h file not found after using command on cygwin | I tried to include python.h but received fatal error
I downloaded a code which includes python.h, but i received fatal error python.h not found. I followed stackoverflow using the command shown below on cygwin
apt-cyg install python-devel
However, in \usr\include i only find a folder python2.7 with pyconfig.h file only. Is there a way I can copy python.h file to include?
| [
"do not use apt-cyg as it is unmaintained and not updated to the latest format of Setup.ini format\nUse Cygwin Setup.\nYou should install python3-devel or python39-devel\nhttps://cygwin.com/packages/summary/python3-devel.html\n"
] | [
0
] | [] | [] | [
"cygwin",
"python"
] | stackoverflow_0074651523_cygwin_python.txt |
Q:
not the desirable output
The expected output was
1 1
2 2
3 3
4 4
5 5
but the output I got is
1
1 2
2 3
3 4
4 5
5
for num in numlist:
print(num)
print(num,end=' ')
I tried to execute this Python code in the Python interpreter and got the wrong output
A:
Every print has an end. Unless you overwrite what print should end in, it ends in a new line. In your first print, you don't overwrite end, so you get a new line. In your second print command, you do overwrite end with a single whitespace.
What you get is this order:
1st print NEWLINE
2nd print SPACE 1st print NEWLINE
2nd print SPACE 1st print NEWLINE
...
You get the exact output you are asking for. I suggest you read the entire Input/Output section of this geeksforgeeks page: https://www.geeksforgeeks.org/taking-input-in-python/?ref=lbp
A:
newline ('\n') is the default end character. Therefore the first call to print() will emit the value of num followed by newline.
In the second print you override the end character with space (' ') so no newline will be emitted.
When you print multiple values, the default separator is a space. This means that you can achieve your objective with:
for num in numlist:
print(num, num)
| not the desirable output | the expected out put was
1 1
2 2
3 3
4 4
5 5
but the output I got is
1
1 2
2 3
3 4
4 5
5
for num in numlist:
print(num)
print(num,end=' ')
I tried to execute this python code in python interpreter and got the wrong output
| [
"Every print has an end. Unless you overwrite what print should end in, it ends in a new line. In your first print, you don't overwrite end, so you get a new line. In your second print command, you do overwrite end with a single whitespace.\nWhat you get is this order:\n1st print NEWLINE\n2nd print SPACE 1st print NEWLINE\n2nd print SPACE 1st print NEWLINE\n...\n\nYou get the exact output you are asking for. I suggest you read the entire Input/Output section of this geeksforgeeks page: https://www.geeksforgeeks.org/taking-input-in-python/?ref=lbp\n",
"newline ('\\n') is the default end character. Therefore the first call to print() will emit the value of num followed by newline.\nIn the second print you override the end character with space (' ') so no newline will be emitted.\nWhen you print multiple values, the default separator is a space. This means that you can achieve your objective with:\nfor num in numlist:\n print(num, num)\n\n"
] | [
1,
0
] | [] | [] | [
"for_loop",
"python",
"python_3.x"
] | stackoverflow_0074651923_for_loop_python_python_3.x.txt |
Q:
Match True in python
In JavaScript, using the switch statement, I can do the following code:
switch(true){
case 1 === 1:
console.log(1)
break
case 1 > 1:
console.log(2)
break
default:
console.log(3)
break
}
And it's going to return 1, since JavaScript switch is comparing true === (1 === 1)
But the same does not happen when I try it with Python Match statement, like as follows:
match True:
case 1 = 1:
print(1)
case 1 > 1:
print(2)
case _:
print(3)
It returns:
File "<stdin>", line 2
case 1 = 1:
^
SyntaxError: invalid syntax
And another error is returned if I try it this way:
Check1 = 1 == 1
Check2 = 1 > 1
match True:
case Check1:
print(1)
case Check2:
print(2)
case _:
print(3)
It returns:
case Check1:
^^^^^^
SyntaxError: name capture 'Check1' makes remaining patterns unreachable
What would be the cleanest/fastest way to do many different checks without using a lot of if's and elif's?
A:
In JavaScript, using the switch statement, I can do the following code
I definitely wouldn't be using JavaScript as any form of litmus or comparator for python.
If you used 1==1 in your first test case, the below is what both of your test cases are ultimately doing.
match True:
case True:
print(1)
case False: #will never get hit
print(2)
case _: #will never get hit
print(3)
This is why you get the error for the second version. True will only ever be True, so no other case will ever be hit.
Based on your example, it seems like you are trying to use match/case just to determine the "truthiness" of an expression. Put the expression in the match.
match a==1:
case True:
pass
case False:
pass
If you have a lot of expressions, you could do something like the below, although I don't think this is very good.
a = 2
match (a==1, a>1):
case (True, False):
print('equals 1')
case (False, True):
print('greater than 1')
case _:
print(_)
#OR
match ((a>1) << 1) | (a==1):
case 1:
print('equals 1')
case 2:
print('greater than 1')
case _:
print(_)
cases should be possible results of the match, NOT expressions that attempt to emulate the match. You're doing it backwards. The below link should tell you pretty much everything that you need to know about match/case, as-well-as provide you with alternatives.
Match/Case Examples and Alternatives
A:
If you don't want to use the match statement from Python 3.10, you can create a switch-like statement with a one line function and use it with a one-pass for loop. It would look very similar to the javascript syntax:
def switch(v): yield lambda *c:v in c
for case in switch(True):
if case(1 == 1):
print(1)
break
if case( 1 > 1):
print(2)
break
else:
print(3)
Note that if you don't use breaks, the conditions will flow through (which would allow multiple cases to be executed if that's what you need).
A:
Check if you are using Python 3.10; if not, use this instead. Also, match/case isn't really meant to be used like this: if you're just trying to print something out, you're better off emulating a switch another way.
A switch-like construct in Python can be written as a function containing an if/elif/else statement, with the case value passed in, like:
match = int(input("1-3: "))
def switch(case): #Function
if case == 1:
print(1)
elif case > 1:
print(2)
else:
print(3)
switch(match) #Main driver
else is the 'default' and you can add as many elif statements.
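For completeness, match statements can also carry per-case conditions directly through guards (Python 3.10+), which is probably the closest analogue to the JavaScript switch(true) pattern in the question:
a = 1
match a:
    case x if x == 1:
        print(1)
    case x if x > 1:
        print(2)
    case _:
        print(3)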
| Match True in python | In JavaScript, using the switch statement, I can do the following code:
switch(true){
case 1 === 1:
console.log(1)
break
case 1 > 1:
console.log(2)
break
default:
console.log(3)
break
}
And it's going to return 1, since JavaScript switch is comparing true === (1 === 1)
But the same does not happen when I try it with Python Match statement, like as follows:
match True:
case 1 = 1:
print(1)
case 1 > 1:
print(2)
case _:
print(3)
It returns:
File "<stdin>", line 2
case 1 = 1:
^
SyntaxError: invalid syntax
And another error is returned if I try it this way:
Check1 = 1 == 1
Check2 = 1 > 1
match True:
case Check1:
print(1)
case Check2:
print(2)
case _:
print(3)
It returns:
case Check1:
^^^^^^
SyntaxError: name capture 'Check1' makes remaining patterns unreachable
What would be the cleanest/fastest way to do many different checks without using a lot of if's and elif's?
| [
"\nIn JavaScript, using the switch statement, I can do the following code\n\nI definitely wouldn't be using JavaScript as any form of litmus or comparator for python.\n\nIf you used 1==1 in your first test case, the below is what both of your test cases are ultimately doing.\nmatch True:\n case True:\n print(1)\n\n case False: #will never get hit\n print(2)\n\n case _: #will never get hit\n print(3)\n\nThis is why you get the error for the second version. True will only ever be True, so no other case will ever be hit.\nBased on your example, it seems like you are trying to use match/case just to determine the \"truthiness\" of an expression. Put the expression in the match.\nmatch a==1:\n case True: \n pass\n case False:\n pass\n\nIf you have a lot of expressions, you could do something like the below, although I don't think this is very good.\na = 2\n\nmatch (a==1, a>1):\n case (True, False):\n print('equals 1')\n case (False, True):\n print('greater than 1')\n case _:\n print(_)\n\n#OR\n\nmatch ((a>1) << 1) | (a==1):\n case 1:\n print('equals 1')\n case 2:\n print('greater than 1')\n case _:\n print(_)\n\ncases should be possible results of the match, NOT expressions that attempt to emulate the match. You're doing it backwards. The below link should tell you pretty much everything that you need to know about match/case, as-well-as provide you with alternatives.\nMatch/Case Examples and Alternatives\n",
"If you don't want to use the match statement from Python 3.10, you can create a switch-like statement with a one line function and use it with a one-pass for loop. It would look very similar to the javascript syntax:\ndef switch(v): yield lambda *c:v in c\n\nfor case in switch(True):\n\n if case(1 == 1):\n print(1)\n break\n\n if case( 1 > 1):\n print(2)\n break\nelse:\n print(3)\n\nNote that if you don't use breaks, the conditions will flow through (which would allow multiple cases to be executed if that's what you need).\n",
"Check if you are using Python 3.10, if not then use this instead and\nalso the match case isn't meant to be used liked this, you're better off using switch case if you're just trying to print something out\nThe switch case in python is used by creating a function for an 'if-else-if' statement and declaring the case above like:\nmatch = int(input(\"1-3: \"))\n\n\ndef switch(case): #Function\n if case == 1:\n print(1)\n\n elif case > 1:\n print(2)\n\n else:\n print(3)\n\n\nswitch(match) #Main driver\n\nelse is the 'default' and you can add as many elif statements.\n"
] | [
1,
1,
0
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074209719_python_python_3.x.txt |
Q:
After updating python version, old downloaded packages not working in vsc when using jupyter
I updated my Python version to the latest in my WSL, and now when running the latest kernel version with the Visual Studio Code Jupyter extension it cannot recognize the packages I have downloaded from pip with the earlier version.
With the earlier version (3.8.10) when I run
import torch
all goes normal, but when using (3.11.0)
It says:
No module named 'torch'
I tried to download the packages from pip again and it seems to work, but do I really have to do it with every package I have used with the earlier version, or can I in some way update the packages or something?
A:
Good practice is to have a virtual env for each Python version, using Anaconda or pipenv.
In your case you may change the Python path to the new version
in your .bashrc file:
export PYTHONPATH=${PYTHONPATH}:${HOME}/[path to new version]
Then source .bashrc and you are good to go;
you can now install packages for the new version.
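If you are not sure which interpreter (and therefore which site-packages directory) a given Jupyter kernel is actually using, a quick diagnostic like this can help (just a small sketch):
import sys

print(sys.executable)   # path of the interpreter the kernel is running
print(sys.version)      # its version, e.g. 3.8.10 vs 3.11.0
print(sys.path)         # directories searched for installed packages such as torch
Packages installed with pip only go into the site-packages of the interpreter that ran pip, so each Python version (or virtual env) you use needs its own installs.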
| After updating python version, old downloaded packages not working in vsc when using jupyter | I updated my python version to latest in my WSL and now when running the latest kernel version with Visual Studio Code jupyter extension it cannot regognize the packages I have downloaded from pip with the earlier version.
With the earlier version (3.8.10) when I run
import torch
all goes normal, but when using (3.11.0)
It says:
No module named 'torch'
I tried to download the packages from pip again and it seems to work but do I really have to do it with every package I have used with the earlier version or can I in some way update the packages or something?
| [
"Good practice is to have virtual env for each python version, using Anaconda or pipenv.\nin your case you may change the python path to the new version\nin you .bashrc file\nexport PYTHONPATH=${PYTHONPATH}:${HOME}/[path to new version]\n\nsource .bashrc and you good to go\nyou can now install packages to the new version\n"
] | [
2
] | [] | [] | [
"jupyter_notebook",
"kernel",
"pip",
"python",
"visual_studio_code"
] | stackoverflow_0074652560_jupyter_notebook_kernel_pip_python_visual_studio_code.txt |
Q:
query = query % self._escape_args(args, conn) TypeError: unsupported operand type(s) for %: 'tuple' and 'str'
query = query % self._escape_args(args, conn)
TypeError: unsupported operand type(s) for %: 'tuple' and 'str'
(screenshots: https://i.stack.imgur.com/nkDJb.png and https://i.stack.imgur.com/zIEAR.png)
update on the data in the SQL database
| query = query % self._escape_args(args, conn) TypeError: unsupported operand type(s) for %: 'tuple' and 'str' | [enter[
query = query % self._escape_args(args, conn) TypeError: unsupported operand type(s) for %: 'tuple' and 'str'
](https://i.stack.imgur.com/nkDJb.png) image description here](https://i.stack.imgur.com/zIEAR.png)
update on the datas on the sql database
| [] | [] | [
"I'd like to help. But you haven't really posted a question here.\nThe pictures you have linked don't seem to explicitly cover the code you have in the post (query = query %...)\nFrom the code you have posted you are trying to run a math operation between a string and a tuple, which is not possible.\nIf you are trying to format a string with %s you would want something along the lines of:\nquery = 'query %s' % 'some string here'\nor\nquery = 'query %s %s' % ('string1', 'string2')\n\nPlease update your post with more information if you need more assistance.\n"
] | [
-1
] | [
"python"
] | stackoverflow_0074652272_python.txt |
Q:
How can get products for active owners (users) only with Django Rest Framework?
I'm creating an e-commerce API with DRF.
I would like to retrieve and display only active products from active owners (users) using a ModelViewSet. How can I do this? Here is my code:
views.py
class ProductViewSet(viewsets.ModelViewSet):
serializer_class = ProductSerializer
parser_classes = (MultiPartParser, FormParser)
search_fields = ['title', 'description']
ordering_fields = ['price', 'last_update']
permission_classes = [permissions.IsAuthenticatedOrReadOnly, IsVendorOrReadOnly, IsOwnerOrReadOnly]
def get_queryset(self):
return Product.objects.filter(is_active=True)
# WHEN I'M USING THIS COMMENTED CODE, I CAN'T RETRIEVE ONE PRODUCT BY PK
# products = Product.objects.all()
# new_products = []
# for p in products:
# if p.owner.is_active and p.is_active:
# new_products.append(p)
# return new_products
def perform_create(self, serializer):
serializer.save(owner=self.request.user)
A:
If you have a model like this:
class Product(m.Model):
...
is_active = m.BooleanField()
owner = m.ForeignKey(User, ...)
Then, in the get_queryset method:
def get_queryset(self):
return Product.objects.filter(is_active=True, owner__is_active=True)
This would return all products that are active and whose owner is active as well.
| How can get products for active owners (users) only with Django Rest Framework? | I'm creating an e-commerce API with DRF.
I would like to retrieve and display only active products from active owners (users) using a ModelViewSet How can I do this? Here is my code :
views.py
class ProductViewSet(viewsets.ModelViewSet):
serializer_class = ProductSerializer
parser_classes = (MultiPartParser, FormParser)
search_fields = ['title', 'description']
ordering_fields = ['price', 'last_update']
permission_classes = [permissions.IsAuthenticatedOrReadOnly, IsVendorOrReadOnly, IsOwnerOrReadOnly]
def get_queryset(self):
return Product.objects.filter(is_active=True)
# WHEN I'M USING THIS COMMENT CODE, I CAN'T RETRIEVE ONE PRODUCT BY PK
# products = Product.objects.all()
# new_products = []
# for p in products:
# if p.owner.is_active and p.is_active:
# new_products.append(p)
# return new_products
def perform_create(self, serializer):
serializer.save(owner=self.request.user)
| [
"if u have a model like this:\nclass Prooduct(m.Model):\n ...\n is_active = m.BooleanField()\n owner = m.ForeingKey(User, ...)\n\n\nThen on the get_queryset method\ndef get_queryset(self)\n\n return Product.objects.filter(is_active = True, owner__is_active = True)\n\nthis would return all products that are active and the owner of the product is active as well.\n"
] | [
0
] | [] | [] | [
"django",
"django_rest_framework",
"django_views",
"python"
] | stackoverflow_0074648593_django_django_rest_framework_django_views_python.txt |
Q:
Select all the values from a drag and drop Listbox in python
I am trying to create a gui in tkinter where I will have a Listbox and be able to drag and drop files into it. How can I store all the items inside this listbox in a list with a command on the button?
lb = tk.Listbox(root, height=8)
lb.drop_target_register(DND_FILES)
lb.dnd_bind("<<Drop>>", lambda e: lb.insert(tk.END, e.data))
lb.grid(row=1, column=0, sticky="ew")
btn = ttk.Button(root, text="Submit")
btn.grid(row=2,column=0)
A:
First you need to assign a callback to the command option of the button, then you can use lb.get(0, tk.END) to get the item list inside the callback:
...
def submit():
# in case you need to access itemlist outside this function
global itemlist
# get all the items inside the listbox to itemlist
itemlist = lb.get(0, tk.END)
print(itemlist)
# assign a callback via command option
btn = ttk.Button(root, text="Submit", command=submit)
btn.grid(row=2, column=0)
...
| Select all the values from a drag and drop Listbox in python | I am trying to create a gui in tkinter where I will have a Listbox and be able to drag and drop files into it. How can I store all the items inside this listbox in a list with a command on the button?
lb = tk.Listbox(root, height=8)
lb.drop_target_register(DND_FILES)
lb.dnd_bind("<<Drop>>", lambda e: lb.insert(tk.END, e.data))
lb.grid(row=1, column=0, sticky="ew")
btn = ttk.Button(root, text="Submit")
btn.grid(row=2,column=0)
| [
"First you need to assign a callback to the command option of the button, then you can use lb.get(0, tk.END) to get the item list inside the callback:\n...\ndef submit():\n # in case you need to access itemlist outside this function\n global itemlist\n # get all the items inside the listbox to itemlist\n itemlist = lb.get(0, tk.END)\n print(itemlist)\n\n# assign a callback via command option\nbtn = ttk.Button(root, text=\"Submit\", command=submit)\nbtn.grid(row=2, column=0)\n...\n\n"
] | [
1
] | [] | [] | [
"python",
"tkinter"
] | stackoverflow_0074639094_python_tkinter.txt |
Q:
python seaborn plot title or footnote including variable?
I'd like to be able to include a variable in either the title (subtitle) or footnote text on a seaborn pairplot -- I'm selecting for a date range and specifically want to add the dates and a name.
I can easily put a static title on the pairplot and it runs as expected.
Here's the code:
subData = rslt_avgData[['averageSpO2', 'averageHR', 'averageRR', 'averagePVi', 'averagePi']]
g=sns.pairplot(subData, hue = 'averageSpO2', palette= 'dark:salmon')
g.fig.suptitle("Average SpO2 compared to average of other variables, selected dates", y=1.05, fontweight='bold')
g.fig.text('Event Name: ', eventName,
'\nStart Date: ', startDate,
"\nEnd Date: ", endDate)
Here's the error:
----> 7 g.fig.text('Event Name: ', eventName,
8 '\nStart Date: ', startDate,
9 "\nEnd Date: ", endDate)
TypeError: text() takes from 4 to 5 positional arguments but 7 were given
If I try just using the fixed text, I get this error:
----> 7 g.fig.text('Event Name: \nStart Date: \nEnd Date: ')
TypeError: text() missing 2 required positional arguments: 'y' and 's'
If I try when specifying location (x,y), and the text (s), like this:
g.fig.text(x=0, y=-0.5, s='Event Name: ',eventName,'\nStart Date: \nEnd Date: ')
(whether I put the x and y at the beginning or the end...)
I get this error:
g.fig.text(x=0, y=-0.5, s='Event Name: ',eventName,'\nStart Date: \nEnd Date: ')
^
SyntaxError: positional argument follows keyword argument
A:
Seaborn is based on Matplotlib and you can use the syntax from matplotlib.pyplot.text()
text(x, y, s, fontdict=None, **kwargs)
Add text to the Axes.
Add the text s to the Axes at location x, y in data coordinates.
Parameters:
x, y: float
The position to place the text. By default, this is in data coordinates. The coordinate system can be changed using the transform parameter.
s: str
The text.
fontdict: dict, default: None
A dictionary to override the default text properties. If fontdict is None, the defaults are determined by rcParams.
To use this function, you always have to give the positions x and y and a string s. Keyword arguments are optional, and with them you can set the font style and text size.
It is not allowed to write something like text(x=1, 2). All parameters passed without a keyword have to come at the beginning of the function call; after that, all arguments have to have a keyword. text(x=1, y=2) is valid.
In your last line the problem is this:
g.fig.text(x=0, y=-0.5, s='Event Name: ', eventName,'\nStart Date: \nEnd Date: ')
^ This has to be "="
Because eventName is passed positionally after a keyword argument, this line is rejected before anything happens. See the linked documentation for more details.
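A minimal corrected version of that call, assuming eventName, startDate and endDate are plain strings, is to build the whole footnote as one string and pass it to the single s parameter:
g.fig.text(x=0, y=-0.5,
           s='Event Name: ' + eventName +
             '\nStart Date: ' + startDate +
             '\nEnd Date: ' + endDate)
Equivalently, with an f-string: s=f'Event Name: {eventName}\nStart Date: {startDate}\nEnd Date: {endDate}'.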
| python seaborn plot title or footnote including variable? | I'd like to be able to include a variable in either the title (subtitle) or footnote text on a seaborn pairplot -- I'm selecting for a date range and specifically want to add the dates and a name.
I can easily put a static title on the pairplot and it runs as expected.
Here's the code:
subData = rslt_avgData[['averageSpO2', 'averageHR', 'averageRR', 'averagePVi', 'averagePi']]
g=sns.pairplot(subData, hue = 'averageSpO2', palette= 'dark:salmon')
g.fig.suptitle("Average SpO2 compared to average of other variables, selected dates", y=1.05, fontweight='bold')
g.fig.text('Event Name: ', eventName,
'\nStart Date: ', startDate,
"\nEnd Date: ", endDate)
Here's the error:
----> 7 g.fig.text('Event Name: ', eventName,
8 '\nStart Date: ', startDate,
9 "\nEnd Date: ", endDate)
TypeError: text() takes from 4 to 5 positional arguments but 7 were given
If I try just using the fixed text, I get this error:
----> 7 g.fig.text('Event Name: \nStart Date: \nEnd Date: ')
TypeError: text() missing 2 required positional arguments: 'y' and 's'
If I try when specifying location (x,y), and the text (s), like this:
g.fig.text(x=0, y=-0.5, s='Event Name: ',eventName,'\nStart Date: \nEnd Date: ')
(whether I put the x and y at the beginning or the end...)
I get this error:
g.fig.text(x=0, y=-0.5, s='Event Name: ',eventName,'\nStart Date: \nEnd Date: ')
^
SyntaxError: positional argument follows keyword argument
| [
"Seaborn is based on Matplotlib and you can use the syntax from matplotlib.pyplot.text()\n\ntext(x, y, s, fontdict=None, **kwargs)\nAdd text to the Axes.\nAdd the text s to the Axes at location x, y in data coordinates.\nParameters:\nx, y: float\nThe position to place the text. By default, this is in data coordinates. >The coordinate system can be changed using the transform parameter.\ns: str\nThe text.\nfontdictdict, default: None\nA dictionary to override the default text properties. If fontdict is None, the defaults are determined by rcParams.\n\nTo use this function, you have to give always the positions x and y ans a string s. Keyword Arguments are optional, ant with this you can set the font style and text size.\nIt is not allowed to write something like text(x=1, 2). All parameters, without a parameter assignment have to be at the beginning of the function call. Afterwars all arguments have to have a keayword. text(x=1, y=2) is valid.\nIn your last line the problem is this:\ng.fig.text(x=0, y=-0.5, s='Event Name: ', eventName,'\\nStart Date: \\nEnd Date: ')\n ^ This has to be \"=\"\n\nBecause eventName is not a valid keyword, nothing should happen. See the linked documentation for more details.\n"
] | [
0
] | [] | [] | [
"pairplot",
"python",
"seaborn"
] | stackoverflow_0074652376_pairplot_python_seaborn.txt |
Q:
How to set proxy on cmd or python for windows 10?
Can anyone help me with this? I just want to set up a proxy with cmd or Python. I have tried these, but they don't work.
First try(on cmd):
netsh winhttp set proxy proxy-ip:proxy-port
Output:
C:\Windows\system32>netsh winhttp show proxy
Current WinHTTP proxy settings:
Proxy Server(s) : proxy-ip:proxy-port
Bypass List : (none)
I checked my IP on a website. Nothing changed.
Second try:
import os
import requests
os.system('netsh winhttp set proxy proxy-ip:proxy-port')
r = requests.get("https://ipinfo.io/")
print(r.text)
Still nothing changed. What do I need to do?
A:
When you need to make major changes to the OS, you have to provide admin permissions; here is the solution:
pip install elevate
And you can use this code;
import os
from elevate import elevate
elevate(show_console=False)
command = 'netsh winhttp set proxy proxy_ip:proxy_port bypass-list="localhost"'
os.system(command)
You can look at the details of the elevate package.
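Also note that if the goal is simply to send the request itself through the proxy, requests can be told about the proxy explicitly, independent of the system-wide WinHTTP setting (a sketch; proxy-ip:proxy-port is a placeholder):
import requests

proxies = {
    "http": "http://proxy-ip:proxy-port",
    "https": "http://proxy-ip:proxy-port",
}
r = requests.get("https://ipinfo.io/", proxies=proxies)  # request goes through the given proxy
print(r.text)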
| How to set proxy on cmd or python for windows 10? | can anyone help me about this? I just wanna set up a proxy with cmd or python. I have tried these but it doesn't work.
First try(on cmd):
netsh winhttp set proxy proxy-ip:proxy-port
Output:
C:\Windows\system32>netsh winhttp show proxy
Current WinHTTP proxy settings:
Proxy Server(s) : proxy-ip:proxy-port
Bypass List : (none)
I checked my ip on web site. Nothing changed.
Second try:
import os
import requests
os.system('netsh winhttp set proxy proxy-ip:proxy-port')
r = requests.get("https://ipinfo.io/")
print(r.text)
Stil nothing changed. What i need to do?
| [
"When you need something major changes on OS, you should provide admin permissions, here is the solution;\npip install elevate\n\nAnd you can use this code;\nimport os\nfrom elevate import elevate\n\nelevate(show_console=False)\n\ncommand = 'netsh winhttp set proxy proxy_ip:proxy_port bypass-list=\"localhost\"'\nos.system(command)\n\nYou can look at the details of elevate package.\n"
] | [
0
] | [] | [] | [
"cmd",
"proxy",
"python",
"windows"
] | stackoverflow_0074641570_cmd_proxy_python_windows.txt |
Q:
I have a dataframe in which one columns contains day and time and I want to put each day and its time in different column
I have a dataframe in which one column contains days and their times, and I want to put each day and its time in its respective column.
I have put a '$' in each day to either split or use it to put it in its respective column.
import pandas as pd
data = [{'timings' : 'Friday 10 am - 6:30 pm$Saturday 10am-6:30pm$Sunday Closed$Monday 10am-6:30pm$Tuesday 10am-6:30pm$Wednesday 10am-6:30pm$Thursday 10am-6:30pm',
'monday':'','tuesday':'','wednesday':'','thursday':'','friday':'','saturday':'','sunday':''
}]
df = pd.DataFrame.from_dict(data)
For example:
If the data contains df['timing'] = "friday 10 am, saturday 6:30pm", then df['friday'] should be '10 am' and df['saturday'] should be '6:30pm'.
I don't know how to put it in words.
Please help me solve this problem.
A:
Use nested list comprehension for list of dictionaries, then pass to DataFrame constructor:
L = [dict(y.split(maxsplit=1) for y in x.split('$')) for x in df['timings']]
df = pd.DataFrame(L, index=df.index)
print (df)
Friday Saturday Sunday Monday Tuesday \
0 10 am - 6:30 pm 10am-6:30pm Closed 10am-6:30pm 10am-6:30pm
Wednesday Thursday
0 10am-6:30pm 10am-6:30pm
EDIT: If you need to merge duplicated days, use:
from collections import defaultdict
out = []
for x in df['timings']:
d = defaultdict(list)
for y in x.split('$'):
k, v = y.split(maxsplit=1)
d[k].append(v)
out.append({k: '/'.join(v) for k, v in d.items()})
df = pd.DataFrame(out, index=df.index)
A:
You can use str.extractall to extract the day name and times and then reshape the DataFrame:
(df['timings'].str.extractall(r'(?P<day>[^$\s]+)\s+([^$]+)')
.droplevel('match')
.set_index('day', append=True)[1].unstack('day')
)
Output:
day Friday Monday Saturday Sunday Thursday Tuesday Wednesday
0 10 am - 6:30 pm 10am-6:30pm 10am-6:30pm Closed 10am-6:30pm 10am-6:30pm 10am-6:30pm
If you want to keep the original order of the days:
(df['timings'].str.extractall('(?P<day>[^$\s]+)\s+([^$]+)')
.set_index('day', append=True)[1].unstack(['match', 'day'])
.droplevel('match', axis=1)
)
Output:
day Friday Saturday Sunday Monday Tuesday Wednesday Thursday
0 10 am - 6:30 pm 10am-6:30pm Closed 10am-6:30pm 10am-6:30pm 10am-6:30pm 10am-6:30pm
Alternative to sort based on a custom order (here Friday first):
from calendar import day_name
sorter = pd.Series({d: (i+3)%7 for i,d in enumerate(day_name)})
out = (df['timings']
.str.extractall('(?P<day>[^$\s]+)\s+([^$]+)')
.droplevel('match')
.set_index('day', append=True)[1].unstack('day')
.sort_index(axis=1, key=sorter.get)
)
joining duplicates as string:
from calendar import day_name
sorter = pd.Series({d: (i+3)%7 for i,d in enumerate(day_name)})
out = (df['timings']
.str.extractall('(?P<day>[^$\s]+)\s+([^$]+)')
.droplevel('match')
.set_index('day', append=True)[1]
.groupby(level=[0,1]).agg('/'.join)
.unstack('day')
.sort_index(axis=1, key=sorter.get)
)
Output:
day Friday Saturday Sunday Monday Tuesday Wednesday Thursday
0 10 am - 6:30 pm 10am-6:30pm Closed 10am-12pm/1:30pm-6:30pm 10am-6:30pm 10am-6:30pm 10am-6:30pm
| I have a dataframe in which one columns contains day and time and I want to put each day and its time in different column | I have a dataframe in which one column contains day and its time, I want to put that each day and its time in its respective column.
I have put a '$' in each day to either split or use it to put it in its respective column.
import pandas as pd
data = [{'timings' : 'Friday 10 am - 6:30 pm$Saturday 10am-6:30pm$Sunday Closed$Monday 10am-6:30pm$Tuesday 10am-6:30pm$Wednesday 10am-6:30pm$Thursday 10am-6:30pm',
'monday':'','tuesday':'','wednesday':'','thursday':'','friday':'','saturday':'','sunday':''
}]
df = pd.DataFrame.from_dict(data)
For e.g.:
Data contains df['timing'] = "friday 10 am, saturday 6:30pm", then in df['friday'] = '10 am' and df['saturday'] = '6:30pm'.
I dont know how to put it in words.
Please me solve this problem.
| [
"Use nested list comprehension for list of dictionaries, then pass to DataFrame constructor:\nL = [dict(y.split(maxsplit=1) for y in x.split('$')) for x in df['timings']]\n\ndf = pd.DataFrame(L, index=df.index)\nprint (df) \n Friday Saturday Sunday Monday Tuesday \\\n0 10 am - 6:30 pm 10am-6:30pm Closed 10am-6:30pm 10am-6:30pm \n\n Wednesday Thursday \n0 10am-6:30pm 10am-6:30pm \n\nEDIT: If need merge duplicated days use:\nfrom collections import defaultdict\n\nout = []\nfor x in df['timings']:\n\n d = defaultdict(list)\n for y in x.split('$'):\n k, v = y.split(maxsplit=1)\n d[k].append(v)\n out.append({k: '/'.join(v) for k, v in d.items()})\n\ndf = pd.DataFrame(out, index=df.index)\n\n",
"You can use str.extractall to extract the day name and times and then reshaping the DataFrame:\n(df['timings'].str.extractall(r'(?P<day>[^$\\s]+)\\s+([^$]+)')\n .droplevel('match')\n .set_index('day', append=True)[1].unstack('day')\n)\n\nOutput:\nday Friday Monday Saturday Sunday Thursday Tuesday Wednesday\n0 10 am - 6:30 pm 10am-6:30pm 10am-6:30pm Closed 10am-6:30pm 10am-6:30pm 10am-6:30pm\n\nIf you want to keep the original order of the days:\n(df['timings'].str.extractall('(?P<day>[^$\\s]+)\\s+([^$]+)')\n .set_index('day', append=True)[1].unstack(['match', 'day'])\n .droplevel('match', axis=1)\n)\n\nOutput:\nday Friday Saturday Sunday Monday Tuesday Wednesday Thursday\n0 10 am - 6:30 pm 10am-6:30pm Closed 10am-6:30pm 10am-6:30pm 10am-6:30pm 10am-6:30pm\n\nAlternative to sort based on a custom order (here Friday first):\nfrom calendar import day_name\n\nsorter = pd.Series({d: (i+3)%7 for i,d in enumerate(day_name)})\n\nout = (df['timings']\n .str.extractall('(?P<day>[^$\\s]+)\\s+([^$]+)')\n .droplevel('match')\n .set_index('day', append=True)[1].unstack('day')\n .sort_index(axis=1, key=sorter.get)\n)\n\njoining duplicates as string:\nfrom calendar import day_name\n\nsorter = pd.Series({d: (i+3)%7 for i,d in enumerate(day_name)})\n\nout = (df['timings']\n .str.extractall('(?P<day>[^$\\s]+)\\s+([^$]+)')\n .droplevel('match')\n .set_index('day', append=True)[1]\n .groupby(level=[0,1]).agg('/'.join)\n .unstack('day')\n .sort_index(axis=1, key=sorter.get)\n)\n\nOutput:\nday Friday Saturday Sunday Monday Tuesday Wednesday Thursday\n0 10 am - 6:30 pm 10am-6:30pm Closed 10am-12pm/1:30pm-6:30pm 10am-6:30pm 10am-6:30pm 10am-6:30pm\n\n"
] | [
0,
0
] | [] | [] | [
"dataframe",
"numpy",
"pandas",
"python"
] | stackoverflow_0074652369_dataframe_numpy_pandas_python.txt |
Q:
Visual Studio Select Python Interpreter error
I am trying to use Visual Studio Code to run my Python notebook. I have set up my venv.
I managed to select my Python interpreter in visual studio code using the ctrl + shift + p function to the specific python script in my newly made venv as per image below
But the kernel still remains the same as per image below
I would appreciate any help to work around this. Thanks.
A:
My jupyter extension version is v2022.9.1303220346.
If yours is the pre-release version, you can switch to the release version.
And you can try the following way to find the Python interpreter.
Open your settings and search for Python Path.
Enter the absolute path here manually.
| Visual Studio Select Python Interpreter error | Am trying to use visual studio to run my python notebook. I have setup my venv.
I managed to select my Python interpreter in visual studio code using the ctrl + shift + p function to the specific python script in my newly made venv as per image below
But the kernel still remains the same as per image below
Appreciate any help to workaround this. Thnks
| [
"My jupyter extension version is v2022.9.1303220346.\nIf your are pre-release version. You can switch to release version.\nAnd you can try the following way to find python interpreter.\nOpen your settins and search for Python Path.\n\nEnter the absolute path here manually.\n"
] | [
0
] | [] | [] | [
"python",
"visual_studio_code"
] | stackoverflow_0074652226_python_visual_studio_code.txt |
Q:
having an issue when installing pygraphviz
I'm having an issue when trying to install pygraphviz via pip. It took a long time, but I managed to install graphviz.
(base) C:\Users\>pip install graphviz --upgrade
Requirement already satisfied: graphviz in c:\programdata\anaconda3\lib\site-packages (0.19.2)
I am using Python 3.9.7 and the OS is Windows 10. When I use Anaconda Prompt and type pip install pygraphviz, I receive the following error.
Installing collected packages: pygraphviz
Running setup.py install for pygraphviz ... error
ERROR: Command errored out with exit status 1:
command: 'C:\ProgramData\Anaconda3\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users \\AppData\\Local\\Temp\\pip-install-krs5aqg9\\pygraphviz_28eebe5841e1478493b04aa9e79a4fcb\\setup.py'"'"'; __file__='"'"'C:\\Users \\AppData\\Local\\Temp\\pip-install-krs5aqg9\\pygraphviz_28eebe5841e1478493b04aa9e79a4fcb\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users \AppData\Local\Temp\pip-record-qvuual1u\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\ProgramData\Anaconda3\Include\pygraphviz'
cwd: C:\Users \AppData\Local\Temp\pip-install-krs5aqg9\pygraphviz_28eebe5841e1478493b04aa9e79a4fcb\
writing pygraphviz.egg-info\PKG-INFO
writing dependency_links to pygraphviz.egg-info\dependency_links.txt
writing top-level names to pygraphviz.egg-info\top_level.txt
reading manifest file 'pygraphviz.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*.png' under directory 'doc'
warning: no files found matching '*.txt' under directory 'doc'
warning: no files found matching '*.css' under directory 'doc'
warning: no previously-included files matching '*~' found anywhere in distribution
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '.svn' found anywhere in distribution
no previously-included directories found matching 'doc\build'
adding license file 'LICENSE'
writing manifest file 'pygraphviz.egg-info\SOURCES.txt'
copying pygraphviz\graphviz.i -> build\lib.win-amd64-3.9\pygraphviz
copying pygraphviz\graphviz_wrap.c -> build\lib.win-amd64-3.9\pygraphviz
running build_ext
building 'pygraphviz._graphviz' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\ProgramData\Anaconda3\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\ \\AppData\\Local\\Temp\\pip-install-krs5aqg9\\pygraphviz_28eebe5841e1478493b04aa9e79a4fcb\\setup.py'"'"'; __file__='"'"'C:\\Users \\AppData\\Local\\Temp\\pip-install-krs5aqg9\\pygraphviz_28eebe5841e1478493b04aa9e79a4fcb\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users \AppData\Local\Temp\pip-record-qvuual1u\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\ProgramData\Anaconda3\Include\pygraphviz' Check the logs for full command output.
The only error that makes sense is "Microsoft Visual C++ 14.0 or greater is required.", but I already installed the latest version.
A:
Try to install from your Anaconda environment:
conda install -c conda-forge python-graphviz
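After that, a quick sanity check that the bindings can be imported and generate DOT source (a minimal sketch):
import graphviz

dot = graphviz.Digraph(comment="sanity check")
dot.edge("hello", "world")
print(dot.source)   # prints the generated DOT source
Note that python-graphviz (imported as graphviz) is a different package from pygraphviz, so this only verifies the graphviz bindings installed above.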
| having an issue when installing pygraphviz | I'm having an issue when trying to install pygraphviz via pip. It took a long time, but I managed to install graphviz.
(base) C:\Users\>pip install graphviz --upgrade
Requirement already satisfied: graphviz in c:\programdata\anaconda3\lib\site-packages (0.19.2)
I am using Python 3.9.7 and the OS is Windows 10. When I use Anaconda Prompt and type pip install pygraphviz, I receive the following error.
Installing collected packages: pygraphviz
Running setup.py install for pygraphviz ... error
ERROR: Command errored out with exit status 1:
command: 'C:\ProgramData\Anaconda3\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users \\AppData\\Local\\Temp\\pip-install-krs5aqg9\\pygraphviz_28eebe5841e1478493b04aa9e79a4fcb\\setup.py'"'"'; __file__='"'"'C:\\Users \\AppData\\Local\\Temp\\pip-install-krs5aqg9\\pygraphviz_28eebe5841e1478493b04aa9e79a4fcb\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users \AppData\Local\Temp\pip-record-qvuual1u\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\ProgramData\Anaconda3\Include\pygraphviz'
cwd: C:\Users \AppData\Local\Temp\pip-install-krs5aqg9\pygraphviz_28eebe5841e1478493b04aa9e79a4fcb\
writing pygraphviz.egg-info\PKG-INFO
writing dependency_links to pygraphviz.egg-info\dependency_links.txt
writing top-level names to pygraphviz.egg-info\top_level.txt
reading manifest file 'pygraphviz.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*.png' under directory 'doc'
warning: no files found matching '*.txt' under directory 'doc'
warning: no files found matching '*.css' under directory 'doc'
warning: no previously-included files matching '*~' found anywhere in distribution
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '.svn' found anywhere in distribution
no previously-included directories found matching 'doc\build'
adding license file 'LICENSE'
writing manifest file 'pygraphviz.egg-info\SOURCES.txt'
copying pygraphviz\graphviz.i -> build\lib.win-amd64-3.9\pygraphviz
copying pygraphviz\graphviz_wrap.c -> build\lib.win-amd64-3.9\pygraphviz
running build_ext
building 'pygraphviz._graphviz' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\ProgramData\Anaconda3\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\ \\AppData\\Local\\Temp\\pip-install-krs5aqg9\\pygraphviz_28eebe5841e1478493b04aa9e79a4fcb\\setup.py'"'"'; __file__='"'"'C:\\Users \\AppData\\Local\\Temp\\pip-install-krs5aqg9\\pygraphviz_28eebe5841e1478493b04aa9e79a4fcb\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users \AppData\Local\Temp\pip-record-qvuual1u\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\ProgramData\Anaconda3\Include\pygraphviz' Check the logs for full command output.
The only error that makes sense is Microsoft Visual C++ 14.0 or greater is required., but I already installed the latest version.
| [
"Try to install from your Anaconda environment:\nconda install -c conda-forge python-graphviz\n"
] | [
0
] | [] | [] | [
"graphviz",
"pygraphviz",
"python"
] | stackoverflow_0071845331_graphviz_pygraphviz_python.txt |
Q:
Google Auth Service Account Bearer Photo API
I am trying to upload an image to the Google Photos service using a Google service account with domain-wide delegation to use a [email protected] account.
Somehow I only get a 401 error: "Authentication session is not defined." when using the code below.
What am I doing wrong?
CODE:
class GooglePhotosApi:
def __init__(self):
self.SERVICE_ACCOUNT_FILE = (
os.path.dirname(os.path.abspath(__file__)) + "/service_account.json"
)
self.SCOPES = [
"https://www.googleapis.com/auth/photoslibrary",
"https://www.googleapis.com/auth/photoslibrary.sharing",
"https://www.googleapis.com/auth/photoslibrary.appendonly",
"https://www.googleapis.com/auth/iam",
"https://www.googleapis.com/auth/cloud-platform",
]
def auth_service_account(self):
self.credentials = service_account.Credentials.from_service_account_file(
self.SERVICE_ACCOUNT_FILE,
scopes=self.SCOPES,
subject="*[email protected]*",
)
self.access_token = self.credentials._make_authorization_grant_assertion()
return self.credentials
def upload_media(self, img):
self.service = discovery.build(
"photoslibrary", "v1", credentials=self.credentials, static_discovery=False
)
# step 1: Upload byte data to Google Server
image_dir = os.path.dirname(os.path.abspath(__file__)) + "/"
upload_url = "https://photoslibrary.googleapis.com/v1/uploads"
headers = {
# "Authorization": "Bearer {}".format(self.access_token.decode("utf-8")),
"Authorization": "Bearer " + str(self.access_token),
"Content-type": "application/octet-stream",
"X-Goog-Upload-Protocol": "raw",
}
image_file = os.path.join(image_dir, "img.jpg")
headers["X-Goog-Upload-File-Name"] = "img.jpg"
img = open(image_file, "rb").read()
response = requests.post(upload_url, data=img, headers=headers)
request_body = {
"newMediaItems": [
{
"description": "Static Name",
"simpleMediaItem": {
"uploadToken": response.content.decode("utf-8")
},
}
]
}
upload_response = (
self.service.mediaItems().batchCreate(body=request_body).execute()
)
return upload_response
These are the scopes for the domain-wide delegation:
https://www.googleapis.com/auth/drive
https://www.googleapis.com/auth/calendar,
https://www.googleapis.com/auth/photoslibrary,
https://www.googleapis.com/auth/photoslibrary.sharing,
https://www.googleapis.com/auth/photoslibrary.appendonly,
https://www.googleapis.com/auth/cloud-platform,
https://www.googleapis.com/auth/iam
Thank you!
SOLUTION:
OK I figured out how to obtain the correct JWT token. I simply needed to access the newly created discovery build after sending a request with the API-based function and then extract the JWT token afterwards. This way I am using the JWT token from the API call in my requests.post call. Here is the code:
def upload_media(self, img):
self.service = discovery.build(
"photoslibrary", "v1", credentials=self.credentials, static_discovery=False
)
results = (
self.service.albums()
.list(pageSize=10, fields="nextPageToken,albums(id,title)")
.execute()
)
# step 1: Upload byte data to Google Server
image_dir = os.path.dirname(os.path.abspath(__file__)) + "/"
upload_url = "https://photoslibrary.googleapis.com/v1/uploads"
headers = {
# "Authorization": "Bearer {}".format(self.access_token.decode("utf-8")),
"Authorization": "Bearer " + str(self.service._http.credentials.token),
"Content-type": "application/octet-stream",
"X-Goog-Upload-Protocol": "raw",
}
image_file = os.path.join(image_dir, "img.jpg")
headers["X-Goog-Upload-File-Name"] = "img.jpg"
img = open(image_file, "rb").read()
response = requests.post(upload_url, data=img, headers=headers)
# upload_response = self.service.uploads().(body=request_body).execute()
request_body = {
"newMediaItems": [
{
"description": "Kuma the corgi",
"simpleMediaItem": {
"uploadToken": response.content.decode("utf-8")
},
}
]
}
upload_response = (
self.service.mediaItems().batchCreate(body=request_body).execute()
)
return upload_response
Thank you and happy coding :)
A:
You may want to consult the Google Photos api documentation on service-accounts
| Google Auth Service Account Bearer Photo API | I am trying to upload an image to google photos service using a google service account with a domain-wide delegation to use a [email protected] account.
Somehow I only get a 401 error: "Authentication session is not defined." using the code below
What am I doing wrong?
CODE:
class GooglePhotosApi:
def __init__(self):
self.SERVICE_ACCOUNT_FILE = (
os.path.dirname(os.path.abspath(__file__)) + "/service_account.json"
)
self.SCOPES = [
"https://www.googleapis.com/auth/photoslibrary",
"https://www.googleapis.com/auth/photoslibrary.sharing",
"https://www.googleapis.com/auth/photoslibrary.appendonly",
"https://www.googleapis.com/auth/iam",
"https://www.googleapis.com/auth/cloud-platform",
]
def auth_service_account(self):
self.credentials = service_account.Credentials.from_service_account_file(
self.SERVICE_ACCOUNT_FILE,
scopes=self.SCOPES,
subject="*[email protected]*",
)
self.access_token = self.credentials._make_authorization_grant_assertion()
return self.credentials
def upload_media(self, img):
self.service = discovery.build(
"photoslibrary", "v1", credentials=self.credentials, static_discovery=False
)
# step 1: Upload byte data to Google Server
image_dir = os.path.dirname(os.path.abspath(__file__)) + "/"
upload_url = "https://photoslibrary.googleapis.com/v1/uploads"
headers = {
# "Authorization": "Bearer {}".format(self.access_token.decode("utf-8")),
"Authorization": "Bearer " + str(self.access_token),
"Content-type": "application/octet-stream",
"X-Goog-Upload-Protocol": "raw",
}
image_file = os.path.join(image_dir, "img.jpg")
headers["X-Goog-Upload-File-Name"] = "img.jpg"
img = open(image_file, "rb").read()
response = requests.post(upload_url, data=img, headers=headers)
request_body = {
"newMediaItems": [
{
"description": "Static Name",
"simpleMediaItem": {
"uploadToken": response.content.decode("utf-8")
},
}
]
}
upload_response = (
self.service.mediaItems().batchCreate(body=request_body).execute()
)
return upload_response
These are the domains for the domain-wide delegation:
https://www.googleapis.com/auth/drive
https://www.googleapis.com/auth/calendar,
https://www.googleapis.com/auth/photoslibrary,
https://www.googleapis.com/auth/photoslibrary.sharing,
https://www.googleapis.com/auth/photoslibrary.appendonly,
https://www.googleapis.com/auth/cloud-platform,
https://www.googleapis.com/auth/iam
Thank you!
SOLUTION:
OK I figured out how to obtain the correct JWT token. I simply needed to access the newly created discovery build after I send a request with the API based function and then extract the JWT token afterwards. This way I am using the JWT token from the API call in my request.post call. Here is the code:
def upload_media(self, img):
self.service = discovery.build(
"photoslibrary", "v1", credentials=self.credentials, static_discovery=False
)
results = (
self.service.albums()
.list(pageSize=10, fields="nextPageToken,albums(id,title)")
.execute()
)
# step 1: Upload byte data to Google Server
image_dir = os.path.dirname(os.path.abspath(__file__)) + "/"
upload_url = "https://photoslibrary.googleapis.com/v1/uploads"
headers = {
# "Authorization": "Bearer {}".format(self.access_token.decode("utf-8")),
"Authorization": "Bearer " + str(self.service._http.credentials.token),
"Content-type": "application/octet-stream",
"X-Goog-Upload-Protocol": "raw",
}
image_file = os.path.join(image_dir, "img.jpg")
headers["X-Goog-Upload-File-Name"] = "img.jpg"
img = open(image_file, "rb").read()
response = requests.post(upload_url, data=img, headers=headers)
# upload_response = self.service.uploads().(body=request_body).execute()
request_body = {
"newMediaItems": [
{
"description": "Kuma the corgi",
"simpleMediaItem": {
"uploadToken": response.content.decode("utf-8")
},
}
]
}
upload_response = (
self.service.mediaItems().batchCreate(body=request_body).execute()
)
return upload_response
Thank you and happy coding :)
| [
"You may want to consult the Google Photos api documentation on service-accounts\n\n"
] | [
0
] | [] | [] | [
"google_api_python_client",
"google_photos",
"google_photos_api",
"python",
"service_accounts"
] | stackoverflow_0074652665_google_api_python_client_google_photos_google_photos_api_python_service_accounts.txt |
Q:
Segmentation fault after calling py_Finalize() with python version higher than 3.6
I am using Ubuntu 18.04 LTS.
I am embedding Python in C++ for uploading logs to Azure Application Insights. My code worked well with Python 3.6, but Python 3.6 is no longer supported by the Python core team. So I am trying to use a higher version of Python for my code, but it causes a segmentation fault when repetitive calls to Py_Initialize() and Py_Finalize() are made. If Py_Finalize() is called only once, there is no crash, but the logs are not uploaded to the cloud. I want to keep the application running.
Install Python & App Insight Dependencies:
a) sudo apt-get update
b) sudo apt install python3.6 (or use a higher version)
c) python3 -V (Use to check python3 version)
d) sudo apt-get install python3-dev
e) sudo apt-get install libpython3.6-dev
f) sudo apt-get install python3-pip
g) sudo apt install rustc
h) sudo -H pip3 install setuptools_rust
i) sudo -H pip3 install opencensus-ext-azure applicationinsights
Code Sample:
#include <stdio.h>
#include <Python.h>
#include <iostream>
#include <string>
#include <stdint.h>
void CallAppInsightUploadFunction();
int main()
{
for (int i = 0; i <= 5; i++)
{
Py_Initialize();
CallAppInsightUploadFunction();
std::cout << "Loop count: " + std::to_string(i) << std::endl;
Py_Finalize();
}
printf("\nGood Bye...\n");
return 0;
}
void CallAppInsightUploadFunction()
{
PyRun_SimpleString("import sys");
PyRun_SimpleString("if not hasattr(sys, 'argv'): sys.argv = ['']");
PyRun_SimpleString("import logging");
PyRun_SimpleString("from opencensus.ext.azure.log_exporter import AzureLogHandler");
PyRun_SimpleString("logger = logging.getLogger(__name__)");
PyRun_SimpleString("logger.addHandler(AzureLogHandler(connection_string='InstrumentationKey=<YOUR-INSTRUMENTATION-KEY>'))");
PyRun_SimpleString("logger.setLevel(logging.INFO)");
PyRun_SimpleString("logger.info('Testing AppInsight Uploads from VM...')");
}
CMakeLists.txt file:
cmake_minimum_required(VERSION 3.13)
project(AppInsightTest)
find_package(PythonLibs 3 REQUIRED)
include_directories(include)
include_directories(${PYTHON_INCLUDE_DIRS})
message("Python Include directory:")
message("${PYTHON_INCLUDE_DIRS}")
message("Python Library:")
message("${PYTHON_LIBRARIES}")
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin)
include_directories(${CMAKE_SOURCE_DIR}/include)
file(GLOB SOURCES src/*.cpp)
add_executable(AppInsightTest ${SOURCES})
target_link_libraries(AppInsightTest PRIVATE ${PYTHON_LIBRARIES} )
Output when using python3.6:
Loop count: 0
context.c:55: warning: mpd_setminalloc: ignoring request to set MPD_MINALLOC a second time
Loop count: 1
context.c:55: warning: mpd_setminalloc: ignoring request to set MPD_MINALLOC a second time
Loop count: 2
context.c:55: warning: mpd_setminalloc: ignoring request to set MPD_MINALLOC a second time
Loop count: 3
context.c:55: warning: mpd_setminalloc: ignoring request to set MPD_MINALLOC a second time
Loop count: 4
context.c:55: warning: mpd_setminalloc: ignoring request to set MPD_MINALLOC a second time
Loop count: 5
Good Bye...
Expected output:
When I run the same code with a Python version higher than 3.6, the code should work and give the same output as above. I use the following flags in the cmake command when using a higher Python version (Python 3.8):
-DPYTHON_LIBRARY=/usr/lib/aarch64-linux-gnu/libpython3.8.so -DPYTHON_INCLUDE_DIR=/usr/include/python3.8
A:
I found a solution for the issue I was facing.
Issue, as per my understanding:
My concern was to upload data to the cloud on every iteration. I was calling Py_Initialize() and Py_FinalizeEx() in order to do that.
But with Python versions higher than 3.6, memory for the external library was not getting freed correctly. That is why, when the second iteration of the for loop tries to import it again, it causes a segmentation fault.
Workaround that worked for me:
Rather than using Py_Initialize() and Py_FinalizeEx() multiple times in a running application, I should use:
PyRun_SimpleString("handler.flush()");
It pushes data to the cloud at every iteration of the for loop. Now I can call Py_Initialize() and Py_FinalizeEx() only once in my code, as the data are pushed to the cloud.
Note:
It takes some time for your data to get reflected on the portal.
Modified Code which works fine
#include <stdio.h>
#include <Python.h>
#include <iostream>
#include <string>
#include <stdint.h>
#include <thread>
void CallAppInsightUploadFunction();
int main()
{
Py_Initialize();
PyRun_SimpleString("import sys");
PyRun_SimpleString("if not hasattr(sys, 'argv'): sys.argv = ['']");
PyRun_SimpleString("import logging");
PyRun_SimpleString("from opencensus.ext.azure.log_exporter import AzureLogHandler");
PyRun_SimpleString("logger = logging.getLogger(__name__)");
PyRun_SimpleString("handler = AzureLogHandler(connection_string='InstrumentationKey=<INSTRUMENTATION-KEY>')");
PyRun_SimpleString("logger.addHandler(handler)");
PyRun_SimpleString("logger.setLevel(logging.INFO)");
for (int i = 0; i <= 5; i++)
{
CallAppInsightUploadFunction();
std::cout << "Loop count: " + std::to_string(i) << std::endl;
// Uncomment following lines to verify that logs are being uploaded when the application is running
std::cout << "Device is going to sleep for 120 seconds..." << std::endl;
sleep(120);
std::cout << "Device wake-up from sleep..." << std::endl;
std::cout << "Please check the logs on web portal" << std::endl;
}
Py_FinalizeEx();
printf("\nGood Bye...\n");
return 0;
}
void CallAppInsightUploadFunction()
{
PyRun_SimpleString("logger.info('Testing AppInsight Uploads...')");
PyRun_SimpleString("handler.flush()");
}
Output Log of modified code
~/AppInsightTest/build/bin$ sudo ./AppInsightTest
Loop count: 0
Device is going to sleep for 120 seconds...
Device wake-up from sleep...
Please check the logs on web portal
Loop count: 1
Device is going to sleep for 120 seconds...
Device wake-up from sleep...
Please check the logs on web portal
Loop count: 2
Device is going to sleep for 120 seconds...
Device wake-up from sleep...
Please check the logs on web portal
Loop count: 3
Device is going to sleep for 120 seconds...
Device wake-up from sleep...
Please check the logs on web portal
Loop count: 4
Device is going to sleep for 120 seconds...
Device wake-up from sleep...
Please check the logs on web portal
Loop count: 5
Device is going to sleep for 120 seconds...
Device wake-up from sleep...
Please check the logs on web portal
Good Bye...
~/AppInsightTest/build/bin$
Snapshot of log from web portal:
Web portal log
| Segmentation fault after calling py_Finalize() with python version higher than 3.6 | I am using ubuntu 18.04 LTS.
I am embedding python to C++ for uploading logs to azure application insights. My code worked well with python3.6 but now support is not available for python3.6 for the python core team. So I am trying to use a higher version of python for my code, but it causes a segmentation fault when repetitive calls to py_Initialize() and py_Finalize() are made. If py_Finalize() is called only once, there is no crash, but the logs are not uploaded to the cloud. I want to keep the application running.
Install Python & App Insight Dependencies:
a) sudo apt-get update
b) sudo apt install python3.6 (or use a higher version)
c) python3 -V (Use to check python3 version)
d) sudo apt-get install python3-dev
e) sudo apt-get install libpython3.6-dev
f) sudo apt-get install python3-pip
h) sudo apt install rustc
i) sudo -H pip3 install setuptools_rust
g) sudo -H pip3 install opencensus-ext-azure applicationinsights
Code Sample:
#include <stdio.h>
#include <Python.h>
#include <iostream>
#include <string>
#include <stdint.h>
void CallAppInsightUploadFunction();
int main()
{
for (int i = 0; i <= 5; i++)
{
Py_Initialize();
CallAppInsightUploadFunction();
std::cout << "Loop count: " + std::to_string(i) << std::endl;
Py_Finalize();
}
printf("\nGood Bye...\n");
return 0;
}
void CallAppInsightUploadFunction()
{
PyRun_SimpleString("import sys");
PyRun_SimpleString("if not hasattr(sys, 'argv'): sys.argv = ['']");
PyRun_SimpleString("import logging");
PyRun_SimpleString("from opencensus.ext.azure.log_exporter import AzureLogHandler");
PyRun_SimpleString("logger = logging.getLogger(__name__)");
PyRun_SimpleString("logger.addHandler(AzureLogHandler(connection_string='InstrumentationKey=<YOUR-INSTRUMENTATION-KEY>'))");
PyRun_SimpleString("logger.setLevel(logging.INFO)");
PyRun_SimpleString("logger.info('Testing AppInsight Uploads from VM...')");
}
CMakeLists.txt file:
cmake_minimum_required(VERSION 3.13)
project(AppInsightTest)
find_package(PythonLibs 3 REQUIRED)
include_directories(include)
include_directories(${PYTHON_INCLUDE_DIRS})
message("Python Include directory:")
message("${PYTHON_INCLUDE_DIRS}")
message("Python Library:")
message("${PYTHON_LIBRARIES}")
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin)
include_directories(${CMAKE_SOURCE_DIR}/include)
file(GLOB SOURCES src/*.cpp)
add_executable(AppInsightTest ${SOURCES})
target_link_libraries(AppInsightTest PRIVATE ${PYTHON_LIBRARIES} )
Output when using python3.6:
Loop count: 0
context.c:55: warning: mpd_setminalloc: ignoring request to set MPD_MINALLOC a second time
Loop count: 1
context.c:55: warning: mpd_setminalloc: ignoring request to set MPD_MINALLOC a second time
Loop count: 2
context.c:55: warning: mpd_setminalloc: ignoring request to set MPD_MINALLOC a second time
Loop count: 3
context.c:55: warning: mpd_setminalloc: ignoring request to set MPD_MINALLOC a second time
Loop count: 4
context.c:55: warning: mpd_setminalloc: ignoring request to set MPD_MINALLOC a second time
Loop count: 5
Good Bye...
Expected output:
When I run the same code with a python version higher than 3.6, the code should work and give the same output as above. I use the following flag in cmake command when using a higher python version(python 3.8):
-DPYTHON_LIBRARY=/usr/lib/aarch64-linux-gnu/libpython3.8.so -DPYTHON_INCLUDE_DIR=/usr/include/python3.8
| [
"I found solution for the issue I was facing.\nIsuue as per my understandings:\nMy concern was to upload data to cloud on every iteration. I was calling Py_Initialize() and Py_FinalizeEx() in order to do that.\nBut with python versions higher than 3.6, memory for external library was not getting freed correctly. That is why when it comes to the second iteration in for loop and tries to import it again it was causing segmentation fault.\nWorkaround that worked for me:\nRather than using Py_Initialize() and Py_FinalizeEx() multiple times in a running application, I should use:\nPyRun_SimpleString(\"handler.flush()\");\nIt pushes data to cloud at every iteration in the for loop. Now I can call Py_Initialize() and Py_FinalizeEx() only once in my code as the data are pushed to the cloud.\nNote:\nIt takes some time for your data to get reflected on the portal.\nModified Code which works fine\n#include <stdio.h>\n#include <Python.h>\n#include <iostream>\n#include <string>\n#include <stdint.h>\n#include <thread>\n\nvoid CallAppInsightUploadFunction();\n\nint main()\n{\nPy_Initialize();\n\nPyRun_SimpleString(\"import sys\");\n\nPyRun_SimpleString(\"if not hasattr(sys, 'argv'): sys.argv = ['']\");\n\nPyRun_SimpleString(\"import logging\");\n\nPyRun_SimpleString(\"from opencensus.ext.azure.log_exporter import AzureLogHandler\");\n\nPyRun_SimpleString(\"logger = logging.getLogger(__name__)\");\nPyRun_SimpleString(\"handler = AzureLogHandler(connection_string='InstrumentationKey=<INSTRUMENTATION-KEY>')\");\nPyRun_SimpleString(\"logger.addHandler(handler)\");\nPyRun_SimpleString(\"logger.setLevel(logging.INFO)\");\n\nfor (int i = 0; i <= 5; i++)\n{\n CallAppInsightUploadFunction();\n std::cout << \"Loop count: \" + std::to_string(i) << std::endl;\n // Uncomment following lines to verify that logs are being uploaded when the application is running\n std::cout << \"Device is going to sleep for 120 seconds...\" << std::endl;\n sleep(120);\n std::cout << \"Device wake-up from sleep...\" << std::endl;\n std::cout << \"Please check the logs on web portal\" << std::endl;\n}\nPy_FinalizeEx();\n\nprintf(\"\\nGood Bye...\\n\");\nreturn 0;\n}\n\nvoid CallAppInsightUploadFunction()\n{\nPyRun_SimpleString(\"logger.info('Testing AppInsight Uploads...')\");\nPyRun_SimpleString(\"handler.flush()\");\n}\n\nOutput Log of modified code\n~/AppInsightTest/build/bin$ sudo ./AppInsightTest\nLoop count: 0\nDevice is going to sleep for 120 seconds...\nDevice wake-up from sleep...\nPlease check the logs on web portal\nLoop count: 1\nDevice is going to sleep for 120 seconds...\nDevice wake-up from sleep...\nPlease check the logs on web portal\nLoop count: 2\nDevice is going to sleep for 120 seconds...\nDevice wake-up from sleep...\nPlease check the logs on web portal\nLoop count: 3\nDevice is going to sleep for 120 seconds...\nDevice wake-up from sleep...\nPlease check the logs on web portal\nLoop count: 4\nDevice is going to sleep for 120 seconds...\nDevice wake-up from sleep...\nPlease check the logs on web portal\nLoop count: 5\nDevice is going to sleep for 120 seconds...\nDevice wake-up from sleep...\nPlease check the logs on web portal\nGood Bye...\n~/AppInsightTest/build/bin$\nSnapshot of log from web portal:\nWeb portal log\n"
] | [
0
] | [] | [] | [
"azure_application_insights",
"opencensus",
"python",
"python_3.8",
"segmentation_fault"
] | stackoverflow_0074637543_azure_application_insights_opencensus_python_python_3.8_segmentation_fault.txt |
Q:
LoRa HAT communication module doesn't send anything
I'm working on a long-range communication system, and I need help. I can't use GSM, WIFI, etc. I am testing a LoRa Raspberry Pi shield, and I bought this, and I have followed this tutorial. Of course I have tested other tutorials, like this, this lib, and others. None of them worked.
I have bought another 2 modules, because maybe my first purchase didn't work, but this wasn't the problem. (I didn't read any data from the other device in receiver module, but in the sender the TX LED was activated).
I want to communicate up to 30km (with a direct view), and I want to send data about 100 kbit per second.
So if you find another way to solve this problem (e.g. better tutorial), I will be grateful. I'm open to any other solution and idea. Thanks for your help.
(If this isn't the best forum for this, I'm sorry)
A:
The problem is with the eByte module on the RPi shield. What module are you using on the other end? The same? Or something else? If it's the same, it should work. But if you are using a non-eByte, regular LoRa module, like an SX1276 or SX1262 attached to an ESP32, forget about it, it won't work. It turns out that the "managed" eByte modules, with a -T- in their name, don't do LoRa, really, but FSK/OOK. So if you are trying to communicate between a RPi shield and a regular LoRa module, you will grow old before it works.
As for 30 km, forget about it. Not with that equipment. And especially not at 100 kbs, lol.
| LoRa HAT communication module doesn't send anything | I'm working on a long-range communication system, and I need help. I can't use GSM, WIFI, etc. I am testing a LoRa Raspberry Pi shield, and I bought this, and I have followed this tutorial. Of course I have tested other tutorials, like this, this lib, and others. None of them worked.
I have bought another 2 modules, because maybe my first purchase didn't work, but this wasn't the problem. (I didn't read any data from the other device in receiver module, but in the sender the TX LED was activated).
I want to communicate up to 30km (with a direct view), and I want to send data about 100 kbit per second.
So if you find another way to solve this problem (e.g. better tutorial), I will be grateful. I'm open to any other solution and idea. Thanks for your help.
(If this isn't the best forum for this, I'm sorry)
| [
"The problem is with the eByte module on the RPi shield. What module are you using on the other end? The same? Or something else? If it's the same, it should work. But if you are using a non-eByte, regular LoRa module, like an SX1276 or SX1262 attached to an ESP32, forget about it, it won't work. It turns out that the \"managed\" eByte modules, with a -T- in their name, don't do LoRa, really, but FSK/OOK. So if you are trying to communicate between a RPi shield and a regular LoRa module, you will grow old before it works.\nAs for 30 km, forget about it. Not with that equipment. And especially not at 100 kbs, lol.\n"
] | [
0
] | [] | [] | [
"communication",
"lora",
"python",
"raspberry_pi",
"raspberry_pi4"
] | stackoverflow_0074421719_communication_lora_python_raspberry_pi_raspberry_pi4.txt |
Q:
howto install pygraphviz on windows 10 64bit
Has anyone succeeded in installing pygraphviz on windows 10 64bit? I tried anaconda with python 3.5 64bit & 32bit with no success.
Here is the error I am getting with python 3.5 32bit on win10 64bit
python -m pip install pygraphviz --install-option="--include-path=C:\Program Files (x86)\Graphviz2.38\include" --install-option="--library-path=C:\Program Files (x86)\Graphviz2.38\lib"
Error:
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD "-IC:\Program Files (x86)\Graphviz2.38\include" -IC:\Users\tra20\Anaconda3\include -IC:\Users\tra20\Anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tcpygraphviz/graphviz_wrap.c /Fobuild\temp.win32-3.5\Release\pygraphviz/graphviz_wrap.obj
graphviz_wrap.c
pygraphviz/graphviz_wrap.c(3321): warning C4047: 'return': 'int' differs in levels of indirection from 'Agsym_t *'
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO "/LIBPATH:C:\Program Files (x86)\Graphviz2.38\lib" /LIBPATH:C:\Users\tra20\Anaconda3\libs /LIBPATH:C:\Users\tra20\Anaconda3\PCbuild\win32 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x86" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x86" cgraph.lib cdt.lib /EXPORT:PyInit__graphviz build\temp.win32-3.5\Release\pygraphviz/graphviz_wrap.obj /OUT:build\lib.win32-3.5\pygraphviz\_graphviz.cp35-win32.pyd /IMPLIB:build\temp.win32-3.5\Release\pygraphviz\_graphviz.cp35-win32.lib
LINK : fatal error LNK1181: cannot open input file 'cgraph.lib'
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\link.exe' failed with exit status 1181
I assume it has something to do with the fact graphviz is linked in 32bit?
Note - I tried all the pygraphviz binaries for Windows I could find on the internet (anaconda, internet), and none work on win10 64bit... if you have a working one (I mean you really tested it) I would also be happy ...
A:
I've created a build of PyGraphviz 1.5 on my Anaconda channel for Windows 64 bit running Python 3.6 through 3.9. If you're running Anaconda, you can install with:
conda install -c alubbock pygraphviz
This will also install Graphviz 2.41 as a dependency (don't install it separately, it might conflict and not all versions are 64-bit compatible).
I don't currently have a version for Python 3.5 or 32-bit versions of Windows, but I hope the above helps.
A:
The accepted answer didn't work for me running Python 2.7 (Anaconda) on Windows 10. The file path that @MiniMe suggested for --global-option didn't even exist in the git repo that he or she pointed to.
What did work for me was following instructions provided by the (currently) bottom answer to: Installing pygraphviz on windows
Steps:
1. Download graphviz-2.38.msi from https://graphviz.gitlab.io/_pages/Download/Download_windows.html and install
2. Download the 2.7 o̶r̶ ̶3̶.̶4̶ wheel file you need from http://www.lfd.uci.edu/~gohlke/pythonlibs/#pygraphviz
3. Navigate to the directory that you downloaded the wheel file to
4. Run pip install pygraphviz-1.3.1-cp27-none-win_amd64.whl
5. Rejoice
N̶o̶t̶e̶ ̶t̶h̶a̶t̶ ̶y̶o̶u̶ ̶m̶i̶g̶h̶t̶ ̶h̶a̶v̶e̶ ̶t̶o̶ ̶r̶u̶n̶ ̶̶p̶i̶p̶ ̶i̶n̶s̶t̶a̶l̶l̶ ̶p̶y̶g̶r̶a̶p̶h̶v̶i̶z̶-̶1̶.̶3̶.̶1̶-̶c̶p̶3̶4̶-̶n̶o̶n̶e̶-̶w̶i̶n̶_̶a̶m̶d̶6̶4̶.̶w̶h̶l̶̶ ̶i̶f̶ ̶y̶o̶u̶'̶r̶e̶ ̶t̶r̶y̶i̶n̶g̶ ̶t̶o̶ ̶g̶e̶t̶ ̶i̶t̶ ̶t̶o̶ ̶w̶o̶r̶k̶ ̶w̶i̶t̶h̶ ̶P̶y̶t̶h̶o̶n̶ ̶3̶.̶4̶.̶ ̶I̶ ̶d̶i̶d̶n̶'̶t̶ ̶t̶e̶s̶t̶ ̶t̶h̶a̶t̶ ̶t̶h̶o̶u̶g̶h̶.̶ Also, the SO answer I referenced also mentioned needing to add graphviz to your PATH but I didn't need to. Good luck!
Update: The python3 wheel vanished. If you're running python3, this answer worked for me. Follow step 1 above and then in WSL bash run:
1. sudo apt-get install python-dev graphviz libgraphviz-dev pkg-config
2. pip install pygraphviz
That answer says to use sudo pip install pygraphviz, but that gave me a dreaded pip import error for some reason. Dropping the sudo made it work in my case.
A:
Start reading from here
https://github.com/pygraphviz/pygraphviz/issues/58
At the bottom of that page there is a link to a x64 zip file in Github (like this)
Unpack that. Create a corresponding Program Files folder for your x64 files and put them there
Then install using this
pip install --global-option=build_ext --global-option="-IC:\Program Files\Graphviz2.38\include" --global-option="-LC:\Program Files\Graphviz2.38\lib\" pygraphviz
A:
It is a real pain to install pygraphviz on Windows 10, but this is the simplest solution that works for me:
Step 1: Download and install Graphviz
https://graphviz.gitlab.io/_pages/Download/Download_windows.html
Step 2: Add below path to your PATH environment variable
C:\Program Files (x86)\Graphviz2.38\bin
Step 3: Re-open command line and activate venv in your project, example:
venv\Scripts\activate
Step 4: Download binaries from below link:
https://github.com/CristiFati/Prebuilt-Binaries/tree/master/PyGraphviz/v1.5/Graphviz-2.42.2
Step 5. Install whl into your virtual environment
For example:
In case of python 3.7
pip install pygraphviz-1.5-cp37-cp37m-win_amd64.whl
In case of python 3.8
pip install pygraphviz-1.5-cp38-cp38-win_amd64.whl
A:
Here's how I installed 64 bit PyGraphViz for Windows 10:
Downloaded and installed GraphViz from https://www2.graphviz.org/Packages/stable/windows/10/cmake/Release/x64/graphviz-install-2.44.1-win64.exe
Made sure I had Visual C++ installed, e.g. from here:
https://visualstudio.microsoft.com/visual-cpp-build-tools/
Then I ran:
pip install --global-option=build_ext --global-option="-IC:\Program Files\Graphviz 2.44.1\include" --global-option="-LC:\Program Files\Graphviz 2.44.1\lib" pygraphviz
Then I had to add C:\Program Files\Graphviz 2.44.1\bin to my system path before import pygraphviz worked.
Finally, I had to run this in command prompt after the install to register plugins and be able to draw graphs: "C:\Program Files\Graphviz 2.44.1\bin\dot.exe" -c
Obviously, for a newer version of Graphviz you'll need to check and update all the paths given above.
A:
None of the above worked for me, so I will show what did work on my windows 11 machine (I don't think the windows version was the problem), which is in the pygraphviz documentation:
Install Visual C/C++, from here: https://visualstudio.microsoft.com/visual-cpp-build-tools/
It shows up as a requirement, and even if you did install it and try to reinstall with pip, it may not work because graphviz is another requirement.
Download and install graphviz for windows: stable_windows_10_cmake_Release_x64_graphviz-install-2.46.0-win64.exe
Restart your computer (as required per the first step)
Then install the library pygraphviz through Windows PowerShell with the following command (I don't know why, with pip it still wasn't working):
python -m pip install --global-option=build_ext `
--global-option="-IC:\Program Files\Graphviz\include" `
--global-option="-LC:\Program Files\Graphviz\lib" `
pygraphviz
A:
The true easiest way to install pygraphviz on Windows without conda or external packager is to use Gohlke's wheels (opinion: His works should be assumed daily by someone in the Python Software Foundation)
Install latest or adapted graphviz package from graphviz using the 64bit or 32bit exe. Don't forget to check the box "add to the path"
Restart the computer
Download Unofficial Windows Binaries for Python Extension Packages by Christoph Gohlke from the Laboratory for Fluorescence Dynamics, University of California, Irvine.
Open a terminal/powershell as admin on the folder where you downloaded the pygraphviz-version-python_version-win_version.whl and enter pip install pygraphviz-*version*-*python_version*-*win_version*.whl
Test the install by opening in the terminal/powershell and entering
python
import pygraphviz
if no error returns, pygraphviz is installed and functional
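Beyond the bare import, a slightly fuller smoke test also exercises the layout engine (a sketch, assuming the Graphviz binaries are on PATH as described above; the file name is arbitrary):
import pygraphviz as pgv

G = pgv.AGraph(directed=True)         # build a tiny two-node graph
G.add_edge("pygraphviz", "graphviz")
G.layout(prog="dot")                  # calls the external Graphviz 'dot' binary
G.draw("smoke_test.png")              # a PNG appears if everything is wired up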
A:
Try to install from your Anaconda (Python 3.8 recommended) environment using:
conda install -c conda-forge python-graphviz
| howto install pygraphviz on windows 10 64bit | Has anyone succeeded in installing pygraphviz on windows 10 64bit? I tried anaconda with python 3.5 64bit & 32bit with no success.
Here is the error I am getting with python 3.5 32bit on win10 64bit
python -m pip install pygraphviz --install-option="--include-path=C:\Program Files (x86)\Graphviz2.38\include" --install-option="--library-path=C:\Program Files (x86)\Graphviz2.38\lib"
Error:
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD "-IC:\Program Files (x86)\Graphviz2.38\include" -IC:\Users\tra20\Anaconda3\include -IC:\Users\tra20\Anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tcpygraphviz/graphviz_wrap.c /Fobuild\temp.win32-3.5\Release\pygraphviz/graphviz_wrap.obj
graphviz_wrap.c
pygraphviz/graphviz_wrap.c(3321): warning C4047: 'return': 'int' differs in levels of indirection from 'Agsym_t *'
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO "/LIBPATH:C:\Program Files (x86)\Graphviz2.38\lib" /LIBPATH:C:\Users\tra20\Anaconda3\libs /LIBPATH:C:\Users\tra20\Anaconda3\PCbuild\win32 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x86" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x86" cgraph.lib cdt.lib /EXPORT:PyInit__graphviz build\temp.win32-3.5\Release\pygraphviz/graphviz_wrap.obj /OUT:build\lib.win32-3.5\pygraphviz\_graphviz.cp35-win32.pyd /IMPLIB:build\temp.win32-3.5\Release\pygraphviz\_graphviz.cp35-win32.lib
LINK : fatal error LNK1181: cannot open input file 'cgraph.lib'
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\link.exe' failed with exit status 1181
I assume it has something to do with the fact graphviz is linked in 32bit?
Note - I tried all binary for pygraphviz i could found on internet(anaconda,internet), and none work on win10 64bit... if you have any working (i mean you realy tested it ) i would be also happy ...
| [
"I've created a build of PyGraphviz 1.5 on my Anaconda channel for Windows 64 bit running Python 3.6 through 3.9. If you're running Anaconda, you can install with:\nconda install -c alubbock pygraphviz\n\nThis will also install Graphviz 2.41 as a dependency (don't install it separately, it might conflict and not all versions are 64-bit compatible).\nI don't currently have a version for Python 3.5 or 32-bit versions of Windows, but I hope the above helps.\n",
"The accepted answer didn't work for me running Python 2.7 (Anaconda) on Windows 10. The file path that @MiniMe suggested for --global-option didn't even exist in the git repo that he or she pointed to.\nWhat did work for me was following instructions provided by the (currently) bottom answer to: Installing pygraphviz on windows\nSteps:\n1. Download graphviz-2.38.msi from https://graphviz.gitlab.io/_pages/Download/Download_windows.html and install\n2. Download the 2.7 o̶r̶ ̶3̶.̶4̶ wheel file you need from http://www.lfd.uci.edu/~gohlke/pythonlibs/#pygraphviz\n3. Navigate to the directory that you downloaded the wheel file to\n4. Run pip install pygraphviz-1.3.1-cp27-none-win_amd64.whl\n5. Rejoice\nN̶o̶t̶e̶ ̶t̶h̶a̶t̶ ̶y̶o̶u̶ ̶m̶i̶g̶h̶t̶ ̶h̶a̶v̶e̶ ̶t̶o̶ ̶r̶u̶n̶ ̶̶p̶i̶p̶ ̶i̶n̶s̶t̶a̶l̶l̶ ̶p̶y̶g̶r̶a̶p̶h̶v̶i̶z̶-̶1̶.̶3̶.̶1̶-̶c̶p̶3̶4̶-̶n̶o̶n̶e̶-̶w̶i̶n̶_̶a̶m̶d̶6̶4̶.̶w̶h̶l̶̶ ̶i̶f̶ ̶y̶o̶u̶'̶r̶e̶ ̶t̶r̶y̶i̶n̶g̶ ̶t̶o̶ ̶g̶e̶t̶ ̶i̶t̶ ̶t̶o̶ ̶w̶o̶r̶k̶ ̶w̶i̶t̶h̶ ̶P̶y̶t̶h̶o̶n̶ ̶3̶.̶4̶.̶ ̶I̶ ̶d̶i̶d̶n̶'̶t̶ ̶t̶e̶s̶t̶ ̶t̶h̶a̶t̶ ̶t̶h̶o̶u̶g̶h̶.̶ Also, the SO answer I referenced also mentioned needing to add graphviz to your PATH but I didn't need to. Good luck!\nUpdate: The python3 wheel vanished. If you're running python3, this answer worked for me. Follow step 1 above and then in WSL bash run:\n 1. sudo apt-get install python-dev graphviz libgraphviz-dev pkg-config\n 2. pip install pygraphviz \nThat answers says to use sudo pip install pygraphviz, but that gave me a dreaded pip import error for some reason. Dropping the sudo made it work in my case.\n",
"Start reading from here\nhttps://github.com/pygraphviz/pygraphviz/issues/58\nAt the bottom of that page there is a link to a x64 zip file in Github (like this)\nUnpack that. Create a coresponding Program Files folder for your x64 file and put them there\nThen install using this\npip install --global-option=build_ext --global-option=\"-IC:\\Program Files\\Graphviz2.38\\include\" --global-option=\"-LC:\\Program Files\\Graphviz2.38\\lib\\\" pygraphviz\n\n",
"It is a real pain to install pygraphviz on Windows 10 but this is the simples solution which works for me:\nStep 1: Download and install Graphviz\nhttps://graphviz.gitlab.io/_pages/Download/Download_windows.html\nStep 2: Add below path to your PATH environment variable\nC:\\Program Files (x86)\\Graphviz2.38\\bin\nStep 3: Re-open command line and activate venv in your project, example:\nvenv\\Scripts\\activate\nStep 4: Download binaries from below link:\nhttps://github.com/CristiFati/Prebuilt-Binaries/tree/master/PyGraphviz/v1.5/Graphviz-2.42.2\nStep 5. Install whl into your virtual environment\nFor example:\nIn case of python 3.7\npip install pygraphviz-1.5-cp37-cp37m-win_amd64.whl\nIn case of python 3.8\npip install pygraphviz-1.5-cp38-cp38-win_amd64.whl\n",
"Here's how I installed 64 bit PyGraphViz for Windows 10:\nDownloaded and installed GraphViz from https://www2.graphviz.org/Packages/stable/windows/10/cmake/Release/x64/graphviz-install-2.44.1-win64.exe\nMade sure I had Visual C++ installed, e.g. from here:\nhttps://visualstudio.microsoft.com/visual-cpp-build-tools/\nThen I ran:\npip install --global-option=build_ext --global-option=\"-IC:\\Program Files\\Graphviz 2.44.1\\include\" --global-option=\"-LC:\\Program Files\\Graphviz 2.44.1\\lib\" pygraphviz\n\nThen I had to add C:\\Program Files\\Graphviz 2.44.1\\bin to my system path before import pygraphviz worked.\nFinally, I had to run this in command prompt after the install to register plugins and be able to draw graphs: \"C:\\Program Files\\Graphviz 2.44.1\\bin\\dot.exe\" -c\nObviously, for a newer version of Graphviz you'll need to check and update all the paths given above.\n",
"None of the above worked for me, so I will show what did worked on my windows 11 machine (I don't think the windows version was the problem), which is in the pygraphviz documentation:\n\nInstall Visual C/C++, from here: https://visualstudio.microsoft.com/visual-cpp-build-tools/\nIt shows up as a requirement and even if you did install it and try to reinstall with pip, it may not work because garphviz is another requirement.\nDownload and install graphviz for windows: stable_windows_10_cmake_Release_x64_graphviz-install-2.46.0-win64.exe\nRestart your computer (as required per the first step)\nThen install the library pygraphviz through Windows PowerShell with the following command (I don't know why, with pip it still wasn't working):\n\n\npython -m pip install --global-option=build_ext `\n --global-option=\"-IC:\\Program Files\\Graphviz\\include\" `\n --global-option=\"-LC:\\Program Files\\Graphviz\\lib\" `\n pygraphviz\n\n\n",
"The true easiest way to install pygraphviz on Windows without conda or external packager is to use Gohlke's wheels (opinion: His works should be assumed daily by someone in the Python Software Foundation)\n\nInstall latest or adapted graphviz package from graphviz using the 64bit or 32bit exe. Don't forget to check the box \"add to the path\"\n\nRestart the computer\n\nDownload Unofficial Windows Binaries for Python Extension Packages by Christoph Gohlke from the Laboratory for Fluorescence Dynamics, University of California, Irvine.\n\nOpen a terminal/powershell as admin on the folder where you downloaded the pygraphviz-version-python_version-win_version.whl and enter pip install pygraphviz-*version*-*python_version*-*win_version*.whl\n\nTest the install by opening in the terminal/powershell and entering\npython\nimport pygraphviz\n\n\nif no error returns, pygraphviz is installed and functional\n",
"Try to install from your Anaconda (Python 3.8 recommended) environment using:\nconda install -c conda-forge python-graphviz\n"
] | [
42,
10,
8,
2,
2,
2,
1,
0
] | [
"If all the solutions above failed, you can still clone directly from the pygraphviz repository\n\nVisit: https://github.com/pygraphviz/pygraphviz.git\nDownload/Clone it\nput the folder into C:\\Users\\\\AppData\\Local\\Programs\\Python\\Python37-32\\Lib\\site-packages\nChange directory to “pygraphviz”\nRun “python setup.py install” to build and install\n(optional) Run “python setup_egg.py nosetests” to execute the tests\n\nSource: http://pygraphviz.github.io/documentation/pygraphviz-1.3.1/install.html\n"
] | [
-1
] | [
"64_bit",
"pygraphviz",
"python",
"windows_10"
] | stackoverflow_0040809758_64_bit_pygraphviz_python_windows_10.txt |
Q:
How Do You Enumerate Over a List of Strings, Converting Each String to Its Own List?
I know I can use split and variable assignment to convert a string within a list to a separate list, like this:
list_of_strings = ["the 1st string", "the 2nd string", "the 3rd string"]
first_list = list_of_strings[0].split(" ")
But what I cannot quite figure out how to do is how to do this conversion through enumeration so that I don't have to manually do this for each item in my list.
A:
I'd use list comprehension for this task:
list_of_split_string = [the_string.split(" ") for the_string in list_of_strings]
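Applied to the list from the question, this gives (a quick illustration, not part of the original answer):
list_of_strings = ["the 1st string", "the 2nd string", "the 3rd string"]
list_of_split_string = [the_string.split(" ") for the_string in list_of_strings]
print(list_of_split_string)
# [['the', '1st', 'string'], ['the', '2nd', 'string'], ['the', '3rd', 'string']]

And if you specifically want the enumeration mentioned in the title, an equivalent sketch is:
result = [None] * len(list_of_strings)
for i, s in enumerate(list_of_strings):
    result[i] = s.split(" ")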
A:
You could use map on your list to apply str.split to all its elements:
list(map(str.split,list_of_strings))
[['the', '1st', 'string'], ['the', '2nd', 'string'], ['the', '3rd', 'string']]
A:
I hope this one help:
ls = [i.split(",") for i in list_of_strings]
Output
[['the 1st string'], ['the 2nd string'], ['the 3rd string']]
| How Do You Enumerate Over a List of Strings, Converting Each String to Its Own List? | I know I can use split and variable assignment to convert a string within a list to a separate list, like this:
list_of_strings = ["the 1st string", "the 2nd string", "the 3rd string"]
first_list = list_of_strings[0].split(" ")
But what I cannot quite figure out how to do is how to do this conversion through enumeration so that I don't have to manually do this for each item in my list.
| [
"I'd use list comprehension for this task:\nlist_of_split_string = [the_string.split(\" \") for the_string in list_of_strings]\n\n",
"You could use map on your list to apply str.split to all its elements:\nlist(map(str.split,list_of_strings))\n\n[['the', '1st', 'string'], ['the', '2nd', 'string'], ['the', '3rd', 'string']]\n\n",
"I hope this one help:\nls = [i.split(\",\") for i in list_of_strings]\n\nOutput\n[['the 1st string'], ['the 2nd string'], ['the 3rd string']]\n\n"
] | [
0,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0074645728_python.txt |
Q:
How to put legends in each plots
I am a new learner of Python and I'm having a hard time making a legend in each of the 4 graphs.
I want to put each equation's formula in the legend, but I can't figure out how to get the legend to show on the graphs.
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['lines.color'] = 'k'
mpl.rcParams['axes.prop_cycle'] = mpl.cycler('color', ['k'])
x = np.linspace(-9, 9, 400)
y = np.linspace(-5, 5, 400)
x, y = np.meshgrid(x, y)
def axes(ax):
ax.axhline(0, alpha = 0.1)
ax.axvline(0, alpha = 0.1)
fig, ax = plt.subplots(2, 2)
fig = plt.gcf()
fig.suptitle("Conics vizualization of a circle, ellipse, parabola and hyperbola.")
a = 0.3
axes(ax[0,0])
ax[0,0].contour(x, y, (y**2 - 4*a*x), [2], colors='k')
ax[0,0].set_ylabel('parabola')
ax[0,0].set_xlabel('(y**2 - 4*a*x), a=0.3')
a = 2
b = 2
axes(ax[0,1])
ax[0,1].contour(x, y,(x**2/a**2 + y**2/b**2), [1], colors='k')
ax[0,1].set_title('circle')
ax[0,1].set_xlabel('(x**2/a**2 + y**2/b**2), a=2, b=2')
ax[0,1].legend()
a = 4
b = 2
axes(ax[1,0])
ax[1,0].contour(x, y,(x**2/a**2 + y**2/b**2), [1], colors='k')
ax[1,0].set_ylabel('Ellips')
ax[1,0].set_xlabel('(x**2/a**2 + y**2/b**2), a=4, b=2')
a = 2
b = 1
axes(ax[1,1])
ax[1,1].contour(x, y,(x**2/a**2 - y**2/b**2), [1], colors='k')
ax[1,1].set_ylabel('hyperbola')
ax[1,1].set_xlabel('(x**2/a**2 - y**2/b**2), a=2, b=1')
plt.show()
I tried
label='(y**2 - 4*a*x), a=0.3'
and
plt.legend(loc='best')
but both don't work
A:
The issue is that contour plots do not really support labels (they work a bit differently than the usual plot). If you need to use contour, I don't see a better way (please, prove me wrong!) than "sneaking" the label in by adding a regular plot with just a single point. E.g. like this:
a = 2
b = 1
axes(ax[1, 1])
ax[1, 1].contour(x, y, (x ** 2 / a ** 2 - y ** 2 / b ** 2), [1], colors='k')
ax[1, 1].set_ylabel('hyperbola')
# Here we add the 'plot' at (0, 0)
ax[1, 1].plot(0, 0, color='k', label='(x**2/a**2 - y**2/b**2), a=2, b=1')
# Call the legend function of the axis.
ax[1, 1].legend()
Doing so on all axes yields a legend in each of the four subplots.
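An alternative that avoids plotting a dummy point (a sketch using matplotlib's standard proxy-artist mechanism, which the original answer does not use) builds the legend handle directly:
from matplotlib.lines import Line2D

# A proxy artist stands in for the black contour line in the legend
proxy = Line2D([0], [0], color='k', label='(x**2/a**2 - y**2/b**2), a=2, b=1')
ax[1, 1].legend(handles=[proxy])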
| How to put legends in each plots | I am new learner of python and I'm having hard time to make a legend in each 4 graphs.
I want to put each equation's formula in the legend. I cant think about more to make the legend on the graphs.
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['lines.color'] = 'k'
mpl.rcParams['axes.prop_cycle'] = mpl.cycler('color', ['k'])
x = np.linspace(-9, 9, 400)
y = np.linspace(-5, 5, 400)
x, y = np.meshgrid(x, y)
def axes(ax):
ax.axhline(0, alpha = 0.1)
ax.axvline(0, alpha = 0.1)
fig, ax = plt.subplots(2, 2)
fig = plt.gcf()
fig.suptitle("Conics vizualization of a circle, ellipse, parabola and hyperbola.")
a = 0.3
axes(ax[0,0])
ax[0,0].contour(x, y, (y**2 - 4*a*x), [2], colors='k')
ax[0,0].set_ylabel('parabola')
ax[0,0].set_xlabel('(y**2 - 4*a*x), a=0.3')
a = 2
b = 2
axes(ax[0,1])
ax[0,1].contour(x, y,(x**2/a**2 + y**2/b**2), [1], colors='k')
ax[0,1].set_title('circle')
ax[0,1].set_xlabel('(x**2/a**2 + y**2/b**2), a=2, b=2')
ax[0,1].legend()
a = 4
b = 2
axes(ax[1,0])
ax[1,0].contour(x, y,(x**2/a**2 + y**2/b**2), [1], colors='k')
ax[1,0].set_ylabel('Ellips')
ax[1,0].set_xlabel('(x**2/a**2 + y**2/b**2), a=4, b=2')
a = 2
b = 1
axes(ax[1,1])
ax[1,1].contour(x, y,(x**2/a**2 - y**2/b**2), [1], colors='k')
ax[1,1].set_ylabel('hyperbola')
ax[1,1].set_xlabel('(x**2/a**2 - y**2/b**2), a=2, b=1')
plt.show()
I tired
label='(y**2 - 4*a*x), a=0.3'
and
plt.legend(loc='best')
both dosent work
| [
"The issue is that contour plots do not really support labels (they work a bit different than the usual plot). If you need to use contour, I don't see a better way (please, proof me wrong!) then \"sneaking\" the label in by adding a regular plot with just a single point. E.g. like this:\na = 2\nb = 1\naxes(ax[1, 1])\nax[1, 1].contour(x, y, (x ** 2 / a ** 2 - y ** 2 / b ** 2), [1], colors='k')\nax[1, 1].set_ylabel('hyperbola')\n# Here we add the 'plot' at (0, 0)\nax[1, 1].plot(0, 0, color='k', label='(x**2/a**2 - y**2/b**2), a=2, b=1')\n# Call the legend function of the axis.\nax[1, 1].legend()\n\nDoing so on all axis yields\n\n"
] | [
1
] | [] | [] | [
"graph",
"python",
"subplot"
] | stackoverflow_0074651128_graph_python_subplot.txt |
Q:
Use Spacy with Pandas
I'm trying to build a multi-class text classifier using Spacy and I have built the model, but facing a problem applying it to my full dataset. The model I have built so far is in the screenshot:
Screenshot
Below is the code I used to apply to my full dataset using Pandas:
Messages = pd.read_csv('Messages.csv', encoding='cp1252')
Messages['Body'] = Messages['Body'].astype(str)
Messages['NLP_Result'] = nlp(Messages['Body'])._.cats
But it gives me the error:
ValueError: [E1041] Expected a string, Doc, or bytes as input, but got: <class 'pandas.core.series.Series'>
The reason I wanted to use Pandas in this case is the dataset has 2 columns: ID and Body. I want to apply the NLP model only to the Body column, but I want the final dataset to have 3 columns: ID, Body and the NLP result like in the screenshot above.
Thanks so much
I tried Pandas apply method too, but had no luck. Code used:
Messages['NLP_Result'] = Messages['Body'].apply(nlp)._.cats
The error I got: AttributeError: 'Series' object has no attribute '_'
Expectation is to generate 3 columns as described above
A:
You should provide a callable into Series.apply call:
Messages['NLP_Result'] = Messages['Body'].apply(lambda x: nlp(x)._.cats)
Here, each value in the NLP_Result column will be assigned to x variable.
The nlp(x) will create an NLP object that contains the necessary properties you'd like to access. Then, the nlp(x)._.cats will return the expected value.
import spacy
import classy_classification
import csv
import pandas as pd
with open ('Deliveries.txt', 'r') as d:
Deliveries = d.read().splitlines()
with open ("Not Spam.txt", "r") as n:
Not_Spam = n.read().splitlines()
data = {}
data["Deliveries"] = Deliveries
data["Not_Spam"] = Not_Spam
# NLP model
nlp = spacy.blank("en")
nlp.add_pipe("text_categorizer",
config={
"data": data,
"model": "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"device": "gpu"
}
)
Messages['NLP_Result'] = Messages['Body'].apply(lambda x: nlp(x)._.cats)
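For larger frames it can be worth batching the texts through nlp.pipe instead of calling nlp row by row (a sketch, not part of the original answer; it assumes the same custom ._.cats extension is set by the pipeline):
Messages['NLP_Result'] = [doc._.cats for doc in nlp.pipe(Messages['Body'])]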
| Use Spacy with Pandas | I'm trying to build a multi-class text classifier using Spacy and I have built the model, but facing a problem applying it to my full dataset. The model I have built so far is in the screenshot:
Screenshot
Below is the code I used to apply to my full dataset using Pandas:
Messages = pd.read_csv('Messages.csv', encoding='cp1252')
Messages['Body'] = Messages['Body'].astype(str)
Messages['NLP_Result'] = nlp(Messages['Body'])._.cats
But it gives me the error:
ValueError: [E1041] Expected a string, Doc, or bytes as input, but got: <class 'pandas.core.series.Series'>
The reason I wanted to use Pandas in this case is the dataset has 2 columns: ID and Body. I want to apply the NLP model only to the Body column, but I want the final dataset to have 3 columns: ID, Body and the NLP result like in the screenshot above.
Thanks so much
I tried Pandas apply method too, but had no luck. Code used:
Messages['NLP_Result'] = Messages['Body'].apply(nlp)._.cats
The error I got: AttributeError: 'Series' object has no attribute '_'
Expectation is to generate 3 columns as described above
| [
"You should provide a callable into Series.apply call:\nMessages['NLP_Result'] = Messages['Body'].apply(lambda x: nlp(x)._.cats)\n\nHere, each value in the NLP_Result column will be assigned to x variable.\nThe nlp(x) will create an NLP object that contains the necessary properties you'd like to access. Then, the nlp(x)._.cats will return the expected value.\nimport spacy\nimport classy classification\nimport csv\nimport pandas as pd \n\nwith open ('Deliveries.txt', 'r') as d:\n Deliveries = d.read().splitlines()\nwith open (\"Not Spam.txt\", \"r\") as n:\n Not_Spam = n.read().splitlines()\n\ndata = {}\ndata[\"Deliveries\"] = Deliveries\ndata[\"Not_Spam\"] = Not_Spam\n\n# NLP model\nnlp = spacy.blank(\"en\")\nnlp.add pipe(\"text_categorizer\",\n config={\n \"data\": data,\n \"model\": \"sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2\",\n \"device\": \"gpu\"\n }\n)\n\nMessages['NLP_Result'] = Messages['Body'].apply(lambda x: nlp(x)._.cats)\n\n"
] | [
0
] | [] | [] | [
"pandas",
"python",
"spacy",
"text_classification"
] | stackoverflow_0074649908_pandas_python_spacy_text_classification.txt |
Q:
Trying to load cookie into requests session from dictionary
I'm working with the python requests library. I am trying to load a requests session with a cookie from a dictionary:
cookie = {'name':'my_cookie','value': 'kdfhgfkj' ,'domain':'.ZZZ.org', 'expires':'Fri, 01-Jan-2020 00:00:00 GMT'}
I've tried:
s.cookies.set_cookie(cookie)
but this gives:
File "....lib\site-packages\requests\cookies.py", line 298, in set_cookie
if hasattr(cookie.value, 'startswith') and cookie.value.startswith('"') and cookie.value.endswith('"'):
AttributeError: 'dict' object has no attribute 'value'
What am I doing wrong?
A:
cookies has a dictionary-like interface, you can use update():
s.cookies.update(cookie)
Or, just add cookies to the next request:
session.get(url, cookies=cookie)
It would "merge" the request cookies with session cookies and the newly added cookies would be retained for subsequent requests, see also:
Update Cookies in Session Using python-requests Module
A:
this doesn't seem to work for me for multiple cookies, they just get overwritten so the last one to get added is the only cookie in the jar.
For example, my multiple cookies (from Flaresolverr (which I believe is the same output as selenium)) outputs multiple dictionaries in a list, e.g.:
cookies = [
{
"name": "ASP.NET_SessionId",
"value": "SOMEVALUE",
"domain": "my.domain.co.uk",
"path": "/",
"expires": -1,
"size": 41,
"httpOnly": true,
"secure": false,
"session": true
},
{
"name": "__cfduid",
"value": "SOMEVALUE",
"domain": ".mydomain.co.uk",
"path": "/",
"expires": 1622898293.967355,
"size": 51,
"httpOnly": true,
"secure": true,
"session": false,
"sameSite": "Lax"
}
]
So if I then try to add them all in a cookie jar, you can see "__cfduid" is the only cookie in the jar:
r = requests.session()
for c in cookies:
r.cookies.update(c)
print(r.cookies)
<RequestsCookieJar[<Cookie domain=.mydomain.co.uk for />, <Cookie expires=1622898293.967355 for />, <Cookie httpOnly=True for />, <Cookie name=__cfduid for />, <Cookie path=/ for />, <Cookie sameSite=Lax for />, <Cookie secure=True for />, <Cookie session=False for />, <Cookie size=51 for />, <Cookie value=SOMEVALUE for />]>
I've also tried jar = requests.cookies.RequestsCookieJar() and jar.set(xxxx) but it doesn't like non-standard cookie attributes (e.g. httpOnly)
Any help would be very much appreciated, thank you!
A:
session.cookies.set_cookie's parameter should be a Cookie object, NOT a dict (of cookie attributes)
if you want to add a new cookie into session.cookies from a dict, you can use:
session.cookies.set
or
requests.cookies.create_cookie + session.cookies.set_cookie
for more detail please refer to my other post's answer
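For illustration, a minimal sketch of both approaches, reusing the cookie values from the question:
import requests

s = requests.Session()

# Build a real Cookie object from plain fields, then add it to the jar
cookie_obj = requests.cookies.create_cookie(
    name='my_cookie', value='kdfhgfkj', domain='.ZZZ.org'
)
s.cookies.set_cookie(cookie_obj)

# Or the shorter dictionary-style helper on the jar itself
s.cookies.set('my_cookie', 'kdfhgfkj', domain='.ZZZ.org', path='/')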
| Trying to load cookie into requests session from dictionary | I'm working with the python requests library. I am trying to load a requests session with a cookie from a dictionary:
cookie = {'name':'my_cookie','value': 'kdfhgfkj' ,'domain':'.ZZZ.org', 'expires':'Fri, 01-Jan-2020 00:00:00 GMT'}
I've tried:
s.cookies.set_cookie(cookie)
but this gives:
File "....lib\site-packages\requests\cookies.py", line 298, in set_cookie
if hasattr(cookie.value, 'startswith') and cookie.value.startswith('"') and cookie.value.endswith('"'):
AttributeError: 'dict' object has no attribute 'value'
What am I doing wrong?
| [
"cookies has a dictionary-like interface, you can use update():\ns.cookies.update(cookie)\n\nOr, just add cookies to the next request:\nsession.get(url, cookies=cookie)\n\nIt would \"merge\" the request cookies with session cookies and the newly added cookies would be retained for subsequent requests, see also:\n\nUpdate Cookies in Session Using python-requests Module\n\n",
"this doesn't seem to work for me for multiple cookies, they just get overwritten so the last one to get added is the only cookie in the jar.\nFor example, my multiple cookies (from Flaresolverr (which I believe is the same output as selenium)) outputs multiple dictionaries in a list, e.g.:\ncookies = [\n {\n \"name\": \"ASP.NET_SessionId\",\n \"value\": \"SOMEVALUE\",\n \"domain\": \"my.domain.co.uk\",\n \"path\": \"/\",\n \"expires\": -1,\n \"size\": 41,\n \"httpOnly\": true,\n \"secure\": false,\n \"session\": true\n },\n {\n \"name\": \"__cfduid\",\n \"value\": \"SOMEVALUE\",\n \"domain\": \".mydomain.co.uk\",\n \"path\": \"/\",\n \"expires\": 1622898293.967355,\n \"size\": 51,\n \"httpOnly\": true,\n \"secure\": true,\n \"session\": false,\n \"sameSite\": \"Lax\"\n }\n ]\n\nSo if I then try to add them all in a cookie jar, you can see \"__cfduid\" is the only cookie in the jar:\nr = requests.session()\nfor c in cookies:\n r.cookies.update(c)\n\nprint(r.cookies)\n<RequestsCookieJar[<Cookie domain=.mydomain.co.uk for />, <Cookie expires=1622898293.967355 for />, <Cookie httpOnly=True for />, <Cookie name=__cfduid for />, <Cookie path=/ for />, <Cookie sameSite=Lax for />, <Cookie secure=True for />, <Cookie session=False for />, <Cookie size=51 for />, <Cookie value=SOMEVALUE for />]>\n\nI've also tried jar = requests.cookies.RequestsCookieJar() and jar.set(xxxx) but it doesn't like non-standard cookie attributes (e.g. httpOnly)\nAny help would be very much appreciated, thank you!\n",
"session.cookies.set_cookie's parameter should be Cookie object, NOT dict (of cookie)\nif you want add new cookie into session.cookies from dict, you can use:\n\nsession.cookies.set\n\nor\n\nrequests.cookies.create_cookie + session.cookies.set_cookie\n\nmore detail pls refer to my another post's answer\n"
] | [
11,
0,
0
] | [] | [] | [
"cookies",
"python",
"python_requests",
"session"
] | stackoverflow_0031928942_cookies_python_python_requests_session.txt |
Q:
geting SyntaxError: invalid syntax jupyter labs
writing this batch of code and getting a syntax error; what seems to be the problem?
File "", line 2
if color == 1: color_sq =
^
SyntaxError: invalid syntax
def calc_color(data, color=None):
if color == 1: color_sq =
['#dadaebFF','#bcbddcF0','#9e9ac8F0',
'#807dbaF0','#6a51a3F0','#54278fF0'];
colors = 'Purples';
elif color == 2: color_sq =
['#c7e9b4','#7fcdbb','#41b6c4',
'#1d91c0','#225ea8','#253494'];
colors = 'YlGnBu';
elif color == 3: color_sq =
['#f7f7f7','#d9d9d9','#bdbdbd',
'#969696','#636363','#252525'];
colors = 'Greys';
elif color == 9: color_sq =
['#ff0000','#ff0000','#ff0000',
'#ff0000','#ff0000','#ff0000']
else: color_sq =
['#ffffd4','#fee391','#fec44f',
'#fe9929','#d95f0e','#993404'];
colors = 'YlOrBr';
new_data, bins = pd.qcut(data, 6, retbins=True,
labels=list(range(6)))
color_ton = []
for val in new_data:
color_ton.append(color_sq[val])
if color != 9:
colors = sns.color_palette(colors, n_colors=6)
sns.palplot(colors, 0.6);
for i in range(6):
print ("\n"+str(i+1)+': '+str(int(bins[i]))+
" => "+str(int(bins[i+1])-1), end =" ")
print("\n\n 1 2 3 4 5 6")
return color_ton, bins;
A:
A few things on python syntax:
indentations are required and need to be consistent
after a : you need a new-line
no ;
Try this:
def calc_color(data, color=None):
if color == 1:
color_sq = ['#dadaebFF','#bcbddcF0','#9e9ac8F0',
'#807dbaF0','#6a51a3F0','#54278fF0']
colors = 'Purples'
elif color == 2:
color_sq = ['#c7e9b4','#7fcdbb','#41b6c4',
'#1d91c0','#225ea8','#253494']
colors = 'YlGnBu'
elif color == 3:
color_sq = ['#f7f7f7','#d9d9d9','#bdbdbd',
'#969696','#636363','#252525']
colors = 'Greys'
elif color == 9:
color_sq = ['#ff0000','#ff0000','#ff0000',
'#ff0000','#ff0000','#ff0000']
else:
color_sq = ['#ffffd4','#fee391','#fec44f',
'#fe9929','#d95f0e','#993404']
colors = 'YlOrBr'
new_data, bins = pd.qcut(data, 6, retbins=True, labels=list(range(6)))
color_ton = []
for val in new_data:
color_ton.append(color_sq[val])
if color != 9:
colors = sns.color_palette(colors, n_colors=6)
sns.palplot(colors, 0.6)
for i in range(6):
print ("\n"+str(i+1)+': '+str(int(bins[i]))+
" => "+str(int(bins[i+1])-1), end =" ")
print("\n\n 1 2 3 4 5 6")
return color_ton, bins
I would recommend you check out something like this.
A:
Have you ever written python before? :D
I reformatted your code, now there aren't any syntax errors:
def calc_color(data, color=None):
if color == 1:
color_sq = ['#dadaebFF','#bcbddcF0','#9e9ac8F0',
'#807dbaF0','#6a51a3F0','#54278fF0']
colors = 'Purples'
elif color == 2:
color_sq = ['#c7e9b4','#7fcdbb','#41b6c4',
'#1d91c0','#225ea8','#253494']
colors = 'YlGnBu'
elif color == 3:
color_sq = ['#f7f7f7','#d9d9d9','#bdbdbd',
'#969696','#636363','#252525']
colors = 'Greys'
elif color == 9: color_sq = ['#ff0000','#ff0000','#ff0000',
'#ff0000','#ff0000','#ff0000']
else:
color_sq = ['#ffffd4','#fee391','#fec44f',
'#fe9929','#d95f0e','#993404']
colors = 'YlOrBr'
new_data, bins = pd.qcut(data, 6, retbins=True,
labels=list(range(6)))
color_ton = []
for val in new_data:
color_ton.append(color_sq[val])
if color != 9:
colors = sns.color_palette(colors, n_colors=6)
sns.palplot(colors, 0.6);
for i in range(6):
print ("\n"+str(i+1)+': '+str(int(bins[i]))+
" => "+str(int(bins[i+1])-1), end =" ")
print("\n\n 1 2 3 4 5 6")
return color_ton, bins;
| geting SyntaxError: invalid syntax jupyter labs | wring this batch of code and geting syntax exxor what seems to be the problem
File "", line 2
if color == 1: color_sq =
^
SyntaxError: invalid syntax
def calc_color(data, color=None):
if color == 1: color_sq =
['#dadaebFF','#bcbddcF0','#9e9ac8F0',
'#807dbaF0','#6a51a3F0','#54278fF0'];
colors = 'Purples';
elif color == 2: color_sq =
['#c7e9b4','#7fcdbb','#41b6c4',
'#1d91c0','#225ea8','#253494'];
colors = 'YlGnBu';
elif color == 3: color_sq =
['#f7f7f7','#d9d9d9','#bdbdbd',
'#969696','#636363','#252525'];
colors = 'Greys';
elif color == 9: color_sq =
['#ff0000','#ff0000','#ff0000',
'#ff0000','#ff0000','#ff0000']
else: color_sq =
['#ffffd4','#fee391','#fec44f',
'#fe9929','#d95f0e','#993404'];
colors = 'YlOrBr';
new_data, bins = pd.qcut(data, 6, retbins=True,
labels=list(range(6)))
color_ton = []
for val in new_data:
color_ton.append(color_sq[val])
if color != 9:
colors = sns.color_palette(colors, n_colors=6)
sns.palplot(colors, 0.6);
for i in range(6):
print ("\n"+str(i+1)+': '+str(int(bins[i]))+
" => "+str(int(bins[i+1])-1), end =" ")
print("\n\n 1 2 3 4 5 6")
return color_ton, bins;
| [
"A few things on python syntax:\n\nindentations are required and need to be consistent\nafter a : you need a new-line\nno ;\n\nTry this:\ndef calc_color(data, color=None):\n if color == 1: \n color_sq = ['#dadaebFF','#bcbddcF0','#9e9ac8F0',\n '#807dbaF0','#6a51a3F0','#54278fF0'] \n colors = 'Purples'\n elif color == 2: \n color_sq = ['#c7e9b4','#7fcdbb','#41b6c4',\n '#1d91c0','#225ea8','#253494'] \n colors = 'YlGnBu'\n elif color == 3: \n color_sq = ['#f7f7f7','#d9d9d9','#bdbdbd',\n '#969696','#636363','#252525'] \n colors = 'Greys'\n elif color == 9: \n color_sq = ['#ff0000','#ff0000','#ff0000',\n '#ff0000','#ff0000','#ff0000']\n else: \n color_sq = ['#ffffd4','#fee391','#fec44f',\n '#fe9929','#d95f0e','#993404'] \n colors = 'YlOrBr'\n new_data, bins = pd.qcut(data, 6, retbins=True, labels=list(range(6)))\n color_ton = []\n for val in new_data:\n color_ton.append(color_sq[val]) \n if color != 9:\n colors = sns.color_palette(colors, n_colors=6)\n sns.palplot(colors, 0.6)\n for i in range(6):\n print (\"\\n\"+str(i+1)+': '+str(int(bins[i]))+\n \" => \"+str(int(bins[i+1])-1), end =\" \")\n print(\"\\n\\n 1 2 3 4 5 6\") \n return color_ton, bins\n\nI would recommend you check out something like this.\n",
"Have you ever written python before? :D\nI reformatted your code, now there aren't any syntax errors:\ndef calc_color(data, color=None):\n if color == 1: \n color_sq = ['#dadaebFF','#bcbddcF0','#9e9ac8F0',\n '#807dbaF0','#6a51a3F0','#54278fF0']\n colors = 'Purples'\n elif color == 2: \n color_sq = ['#c7e9b4','#7fcdbb','#41b6c4',\n '#1d91c0','#225ea8','#253494']\n colors = 'YlGnBu'\n elif color == 3: \n color_sq = ['#f7f7f7','#d9d9d9','#bdbdbd',\n '#969696','#636363','#252525']\n colors = 'Greys'\n elif color == 9: color_sq = ['#ff0000','#ff0000','#ff0000',\n '#ff0000','#ff0000','#ff0000']\n else: \n color_sq = ['#ffffd4','#fee391','#fec44f',\n '#fe9929','#d95f0e','#993404']\n colors = 'YlOrBr'\n new_data, bins = pd.qcut(data, 6, retbins=True, \n labels=list(range(6)))\n color_ton = []\n for val in new_data:\n color_ton.append(color_sq[val]) \n if color != 9:\n colors = sns.color_palette(colors, n_colors=6)\n sns.palplot(colors, 0.6);\n for i in range(6):\n print (\"\\n\"+str(i+1)+': '+str(int(bins[i]))+\n \" => \"+str(int(bins[i+1])-1), end =\" \")\n print(\"\\n\\n 1 2 3 4 5 6\") \n return color_ton, bins;\n\n"
] | [
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0074652847_python.txt |
Q:
Why does calling the KFold generator with shuffle give the same indices?
With sklearn, when you create a new KFold object and shuffle is true, it'll produce different, newly randomized fold indices. However, every generator from a given KFold object gives the same indices for each fold even when shuffle is true. Why does it work like this?
Example:
from sklearn.cross_validation import KFold
X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]])
y = np.array([1, 2, 3, 4])
kf = KFold(4, n_folds=2, shuffle = True)
for fold in kf:
print fold
print '---second round----'
for fold in kf:
print fold
Output:
(array([2, 3]), array([0, 1]))
(array([0, 1]), array([2, 3]))
---second round----#same indices for the folds
(array([2, 3]), array([0, 1]))
(array([0, 1]), array([2, 3]))
This question was motivated by a comment on this answer. I decided to split it into a new question to prevent that answer from becoming too long.
A:
A new iteration with the same KFold object will not reshuffle the indices, that only happens during instantiation of the object. KFold() never sees the data but knows number of samples so it uses that to shuffle the indices. From the code during instantiation of KFold:
if shuffle:
rng = check_random_state(self.random_state)
rng.shuffle(self.idxs)
Each time a generator is called to iterate through the indices of each fold, it will use same shuffled indices and divide them the same way.
Take a look at the code for the base class of KFold _PartitionIterator(with_metaclass(ABCMeta)) where __iter__ is defined. The __iter__ method in the base class calls _iter_test_indices in KFold to divide and yield the train and test indices for each fold.
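So to actually get a new shuffle, instantiate a new KFold object rather than re-iterating the old one (a small sketch illustrating the point, using the same deprecated API as the question):
from sklearn.cross_validation import KFold

kf1 = KFold(4, n_folds=2, shuffle=True)  # indices are shuffled here, once
kf2 = KFold(4, n_folds=2, shuffle=True)  # a fresh object gets a fresh shuffle

print(list(kf1))  # identical every time kf1 is iterated
print(list(kf2))  # typically different folds than kf1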
A:
With new version of sklearn by calling from sklearn.model_selection import KFold, KFold generator with shuffle give the different indices:
import numpy as np
from sklearn.model_selection import KFold
X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]])
y = np.array([1, 2, 3, 4])
kf = KFold(n_splits=3, shuffle=True)
print('---first round----')
for train_index, test_index in kf.split(X):
print("TRAIN:", train_index, "TEST:", test_index)
print('---second round----')
for train_index, test_index in kf.split(X):
print("TRAIN:", train_index, "TEST:", test_index)
Out:
---first round----
TRAIN: [2 3] TEST: [0 1]
TRAIN: [0 1 3] TEST: [2]
TRAIN: [0 1 2] TEST: [3]
---second round----
TRAIN: [0 1] TEST: [2 3]
TRAIN: [1 2 3] TEST: [0]
TRAIN: [0 2 3] TEST: [1]
Alternatively, the code below iteratively generates same result:
from sklearn.model_selection import KFold
np.random.seed(42)
data = np.random.choice([0, 1], 10, p=[0.5, 0.5])
kf = KFold(2, shuffle=True, random_state=2022)
list(kf.split(data))
Out:
[(array([0, 1, 6, 8, 9]), array([2, 3, 4, 5, 7])),
(array([2, 3, 4, 5, 7]), array([0, 1, 6, 8, 9]))]
| Why does calling the KFold generator with shuffle give the same indices? | With sklearn, when you create a new KFold object and shuffle is true, it'll produce a different, newly randomized fold indices. However, every generator from a given KFold object gives the same indices for each fold even when shuffle is true. Why does it work like this?
Example:
from sklearn.cross_validation import KFold
X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]])
y = np.array([1, 2, 3, 4])
kf = KFold(4, n_folds=2, shuffle = True)
for fold in kf:
print fold
print '---second round----'
for fold in kf:
print fold
Output:
(array([2, 3]), array([0, 1]))
(array([0, 1]), array([2, 3]))
---second round----#same indices for the folds
(array([2, 3]), array([0, 1]))
(array([0, 1]), array([2, 3]))
This question was motivated by a comment on this answer. I decided to split it into a new question to prevent that answer from becoming too long.
| [
"A new iteration with the same KFold object will not reshuffle the indices, that only happens during instantiation of the object. KFold() never sees the data but knows number of samples so it uses that to shuffle the indices. From the code during instantiation of KFold:\nif shuffle:\n rng = check_random_state(self.random_state)\n rng.shuffle(self.idxs)\n\nEach time a generator is called to iterate through the indices of each fold, it will use same shuffled indices and divide them the same way.\nTake a look at the code for the base class of KFold _PartitionIterator(with_metaclass(ABCMeta)) where __iter__ is defined. The __iter__ method in the base class calls _iter_test_indices in KFold to divide and yield the train and test indices for each fold.\n",
"With new version of sklearn by calling from sklearn.model_selection import KFold, KFold generator with shuffle give the different indices:\nimport numpy as np\nfrom sklearn.model_selection import KFold\nX = np.array([[1, 2], [3, 4], [1, 2], [3, 4]])\ny = np.array([1, 2, 3, 4])\nkf = KFold(n_splits=3, shuffle=True)\n\nprint('---first round----')\nfor train_index, test_index in kf.split(X):\n print(\"TRAIN:\", train_index, \"TEST:\", test_index)\n \nprint('---second round----')\nfor train_index, test_index in kf.split(X):\n print(\"TRAIN:\", train_index, \"TEST:\", test_index)\n\nOut:\n---first round----\nTRAIN: [2 3] TEST: [0 1]\nTRAIN: [0 1 3] TEST: [2]\nTRAIN: [0 1 2] TEST: [3]\n---second round----\nTRAIN: [0 1] TEST: [2 3]\nTRAIN: [1 2 3] TEST: [0]\nTRAIN: [0 2 3] TEST: [1]\n\nAlternatively, the code below iteratively generates same result:\nfrom sklearn.model_selection import KFold\nnp.random.seed(42)\ndata = np.random.choice([0, 1], 10, p=[0.5, 0.5])\nkf = KFold(2, shuffle=True, random_state=2022)\nlist(kf.split(data))\n\nOut:\n[(array([0, 1, 6, 8, 9]), array([2, 3, 4, 5, 7])),\n (array([2, 3, 4, 5, 7]), array([0, 1, 6, 8, 9]))]\n\n"
] | [
6,
0
] | [] | [] | [
"cross_validation",
"python",
"scikit_learn"
] | stackoverflow_0034940465_cross_validation_python_scikit_learn.txt |
Q:
Anaconda Navigator and Spyder don't start
I am having a problem opening Spyder or Anaconda Navigator. Anaconda shell still works without a problem.
I already tried:
reinstalling Spyder
reinstalling Anaconda
opening it from shell with the following result:
(base) C:\Users\***>anaconda-navigator
Traceback (most recent call last):
File "C:\Users\***\anaconda3\lib\site-packages\qtpy\__init__.py", line 204, in <module>
from PySide import __version__ as PYSIDE_VERSION # analysis:ignore
ModuleNotFoundError: No module named 'PySide'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\***\anaconda3\Scripts\anaconda-navigator-script.py", line 6, in <module>
from anaconda_navigator.app.main import main
File "C:\Users\***\anaconda3\lib\site-packages\anaconda_navigator\app\main.py", line 22, in <module>
from anaconda_navigator.utils.conda import is_conda_available
File "C:\Users\***\anaconda3\lib\site-packages\anaconda_navigator\utils\__init__.py", line 15, in <module>
from qtpy.QtGui import QIcon
File "C:\Users\***\anaconda3\lib\site-packages\qtpy\__init__.py", line 210, in <module>
raise PythonQtError('No Qt bindings could be found')
qtpy.PythonQtError: No Qt bindings could be found
PyQt5 should be installed (I tried reinstalling and updating)
I also tried to update conda anaconda and python.
Do you have any suggestions? I already googled a lot, but wasn't able to find anything that works for me.
Thanks in advance!
UPDATE:
Creating a new env with python 3.8.2 allows me to open spyder, but the window stays a white Window and does not respond any more.
The following ERROR appears in the shell:
(newEnv) C:\Users\***>spyder
[2280:2332:0331/162242.873:ERROR:broker_win.cc(59)] Error reading broker pipe: Die Pipe wurde beendet. (0x6D)
[4620:17776:0331/162242.873:ERROR:broker_win.cc(59)] Error reading broker pipe: Die Pipe wurde beendet. (0x6D)
Translation: "Die Pipe wurde beendet." - "The pipe was stoped."
Downgrading Python to python 3.7.0 (the last version I know of where it worked) results in the same ERROR from the beginning.
A:
I also got the same error after installing anaconda. What did it for me was:
uninstall all previous Anaconda installations
extend admin rights to my local account
select the add PATH option during installation, even if not recommended
| Anaconda Navigator and Spyder don't start | I am having a problem opening Spyder or Anaconda Navigator. Anaconda shell still works without a problem.
I already tried:
reinstalling Spyder
reinstalling Anaconda
opening it from shell with the following result:
(base) C:\Users\***>anaconda-navigator
Traceback (most recent call last):
File "C:\Users\***\anaconda3\lib\site-packages\qtpy\__init__.py", line 204, in <module>
from PySide import __version__ as PYSIDE_VERSION # analysis:ignore
ModuleNotFoundError: No module named 'PySide'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\***\anaconda3\Scripts\anaconda-navigator-script.py", line 6, in <module>
from anaconda_navigator.app.main import main
File "C:\Users\***\anaconda3\lib\site-packages\anaconda_navigator\app\main.py", line 22, in <module>
from anaconda_navigator.utils.conda import is_conda_available
File "C:\Users\***\anaconda3\lib\site-packages\anaconda_navigator\utils\__init__.py", line 15, in <module>
from qtpy.QtGui import QIcon
File "C:\Users\***\anaconda3\lib\site-packages\qtpy\__init__.py", line 210, in <module>
raise PythonQtError('No Qt bindings could be found')
qtpy.PythonQtError: No Qt bindings could be found
PyQt5 should be installed (I tryed reinstalling and updating)
I also tried to update conda anaconda and python.
Do you have any suggestions? I already googled a lot, but wasn't able to find anything that works for me.
Thanks in advance!
UPDATE:
Creating a new env with python 3.8.2 allows me to open spyder, but the window stays a white Window and does not respond any more.
The following ERROR appears in the shell:
(newEnv) C:\Users\***>spyder
[2280:2332:0331/162242.873:ERROR:broker_win.cc(59)] Error reading broker pipe: Die Pipe wurde beendet. (0x6D)
[4620:17776:0331/162242.873:ERROR:broker_win.cc(59)] Error reading broker pipe: Die Pipe wurde beendet. (0x6D)
Translation: "Die Pipe wurde beendet." - "The pipe was stoped."
Downgrading Python to python 3.7.0 (The last version I know of that it worked) results in the same ERROR from the beginning.
| [
"I also got the same error after installing anaconda. What did for me was:\n\nuninstall all previous Anaconda installations\nextend admin rights to my local account\nselect the add PATH option during installation, even if not recommended\n\n"
] | [
0
] | [] | [] | [
"anaconda",
"python",
"spyder"
] | stackoverflow_0060952055_anaconda_python_spyder.txt |
Q:
AWS Greengrass doesn't send data to AWS Kinesis
The main purpose of my program is to connect to an incoming MQTT channel, and send the data received to my AWS Kinesis Stream called "MyKinesisStream".
Here is my code:
import argparse
import logging
import random
from paho.mqtt import client as mqtt_client
from stream_manager import (
ExportDefinition,
KinesisConfig,
MessageStreamDefinition,
ResourceNotFoundException,
StrategyOnFull,
StreamManagerClient, ReadMessagesOptions,
)
broker = 'localhost'
port = 1883
topic = "clients/test/hello/world"
client_id = f'python-mqtt-{random.randint(0, 100)}'
username = '...'
password = '...'
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger()
args = ""
def connect_mqtt() -> mqtt_client:
def on_connect(client, userdata, flags, rc):
if rc == 0:
print("Connected to MQTT Broker!")
else:
print("Failed to connect, return code %d\n", rc)
client = mqtt_client.Client(client_id)
client.username_pw_set(username, password)
client.on_connect = on_connect
client.connect(broker, port)
return client
def sendDataToKinesis(
stream_name: str,
kinesis_stream_name: str,
payload,
batch_size: int = None,
):
try:
print("Debug: sendDataToKinesis with params:", stream_name + " | ", kinesis_stream_name, " | ", batch_size)
print("payload:", payload)
print("type payload:", type(payload))
except Exception as e:
print("Error while printing out the parameters", str(e))
logger.exception(e)
try:
# Create a client for the StreamManager
kinesis_client = StreamManagerClient()
# Try deleting the stream (if it exists) so that we have a fresh start
try:
kinesis_client.delete_message_stream(stream_name=stream_name)
except ResourceNotFoundException:
pass
exports = ExportDefinition(
kinesis=[KinesisConfig(
identifier="KinesisExport" + stream_name,
kinesis_stream_name=kinesis_stream_name,
batch_size=batch_size,
)]
)
kinesis_client.create_message_stream(
MessageStreamDefinition(
name=stream_name,
strategy_on_full=StrategyOnFull.OverwriteOldestData,
export_definition=exports
)
)
sequence_no = kinesis_client.append_message(stream_name=stream_name, data=payload)
print(
"Successfully appended message to stream with sequence number ", sequence_no
)
readValue = kinesis_client.read_messages(stream_name, ReadMessagesOptions(min_message_count=1, read_timeout_millis=1000))
print("DEBUG read test: ", readValue)
except Exception as e:
print("Exception while running: " + str(e))
logger.exception(e)
finally:
# Always close the client to avoid resource leaks
print("closing connection")
if kinesis_client:
kinesis_client.close()
def subscribe(client: mqtt_client, args):
def on_message(client, userdata, msg):
print(f"Received `{msg.payload.decode()}` from `{msg.topic}` topic")
sendDataToKinesis(args.greengrass_stream, args.kinesis_stream, msg.payload, args.batch_size)
client.subscribe(topic)
client.on_message = on_message
def run(args):
mqtt_client_instance = connect_mqtt()
subscribe(mqtt_client_instance, args)
mqtt_client_instance.loop_forever()
def parse_args() -> argparse.Namespace:
parser = argparse.ArgumentParser()
parser.add_argument('--greengrass-stream', required=False, default='...')
parser.add_argument('--kinesis-stream', required=False, default='MyKinesisStream')
parser.add_argument('--batch-size', required=False, type=int, default=500)
return parser.parse_args()
if __name__ == '__main__':
args = parse_args()
run(args)
(the dotted parts ... are commented out as they are sensitive information, but they are correct values.)
The problem is that it just won't send any data to our kinesis stream. I get the following STDOUT from the run:
2022-11-25T12:13:47.640Z [INFO] (Copier) jp.co.xyz.StreamManagerKinesis: stdout. Connected to MQTT Broker!. {scriptName=services.jp.co.xyz.StreamManagerKinesis.lifecycle.Run, serviceName=jp.co.xyz.StreamManagerKinesis, currentState=RUNNING}
2022-11-25T12:13:47.641Z [INFO] (Copier) jp.co.xyz.StreamManagerKinesis: stdout. Received `{"machineId":2, .... "timestamp":"2022-10-24T12:21:34.8777249Z","value":true}` from `clients/test/hello/world` topic. {scriptName=services.jp.co.xyz.StreamManagerKinesis.lifecycle.Run, serviceName=jp.co.xyz.StreamManagerKinesis, currentState=RUNNING}
2022-11-25T12:13:47.641Z [INFO] (Copier) jp.co.xyz.StreamManagerKinesis: stdout. Debug: sendDataToKinesis with params: test | MyKinesisStream | 100. {scriptName=services.jp.co.xyz.StreamManagerKinesis.lifecycle.Run, serviceName=jp.co.xyz.StreamManagerKinesis, currentState=RUNNING}
2022-11-25T12:13:47.641Z [INFO] (Copier) jp.co.xyz.StreamManagerKinesis: stdout. payload: b'{"machineId":2,... ,"timestamp":"2022-10-24T12:21:34.8777249Z","value":true}'. {scriptName=services.jp.co.xyz.StreamManagerKinesis.lifecycle.Run, serviceName=jp.co.xyz.StreamManagerKinesis, currentState=RUNNING}
2022-11-25T12:13:47.641Z [INFO] (Copier) jp.co.xyz.StreamManagerKinesis: stdout. type payload: <class 'bytes'>. {scriptName=services.jp.co.xyz.StreamManagerKinesis.lifecycle.Run, serviceName=jp.co.xyz.StreamManagerKinesis, currentState=RUNNING}
2022-11-25T12:13:47.641Z [INFO] (Copier) jp.co.xyz.StreamManagerKinesis: stdout. Successfully appended message to stream with sequence number 0. {scriptName=services.jp.co.xyz.StreamManagerKinesis.lifecycle.Run, serviceName=jp.co.xyz.StreamManagerKinesis, currentState=RUNNING}
2022-11-25T12:13:47.641Z [INFO] (Copier) jp.co.xyz.StreamManagerKinesis: stdout. DEBUG read test: [<Class Message. stream_name: 'test', sequence_number: 0, ingest_time: 1669376980985, payload: b'{"machineId":2,"mach'>]. {scriptName=services.jp.co.xyz.StreamManagerKinesis.lifecycle.Run, serviceName=jp.co.xyz.StreamManagerKinesis, currentState=RUNNING}
2022-11-25T12:13:47.641Z [INFO] (Copier) jp.co.xyz.StreamManagerKinesis: stdout. closing connection. {scriptName=services.jp.co.xyz.StreamManagerKinesis.lifecycle.Run, serviceName=jp.co.xyz.StreamManagerKinesis, currentState=RUNNING}
So we can see that the data arrives from MQTT, the python code executes the append message, and it seems that my kinesis streams have the information as it can read it in the next step... then closes the connection without any error.
But the problem is, that from AWS side, we cannot see the data arriving on the stream:
What can be the problem here? Our greengrass core is configured properly, can be accessed from the AWS, and the Component is running and healthy also:
Update: we managed to get some messages out with the following code:
...
def sendDataToKinesis(
kinesis_client,
stream_name: str,
payload,
):
try:
print("payload:", payload)
print("type payload:", type(payload))
except Exception as e:
print("Error while printing out the parameters", str(e))
logger.exception(e)
try:
sequence_no = kinesis_client.append_message(stream_name=stream_name, data=payload)
print(
"Successfully appended message to stream with sequence number ", sequence_no
)
time.sleep(1)
except Exception as e:
print("Exception while running: " + str(e))
logger.exception(e)
# finally:
# # todo: Always close the client to avoid resource leaks!!!
# print("closing connection")
# if kinesis_client:
# kinesis_client.close()
def subscribe(client: mqtt_client, stream_name: str, args, kinesisClient):
def on_message(client, userdata, msg):
print(f"Received `{msg.payload.decode()}` from `{msg.topic}` topic")
sendDataToKinesis(kinesisClient, stream_name, msg.payload)
client.subscribe(topic)
client.on_message = on_message
def create_kinensis_client(greengrass_stream, kinesis_stream, batch_size):
# Create a client for the StreamManager
kinesis_client = StreamManagerClient()
# Try deleting the stream (if it exists) so that we have a fresh start
try:
kinesis_client.delete_message_stream(stream_name=greengrass_stream)
except ResourceNotFoundException:
pass
exports = ExportDefinition(
kinesis=[KinesisConfig(
identifier="KinesisExport" + greengrass_stream,
kinesis_stream_name=kinesis_stream,
batch_size=batch_size,
)]
)
kinesis_client.create_message_stream(
MessageStreamDefinition(
name=greengrass_stream,
strategy_on_full=StrategyOnFull.OverwriteOldestData,
export_definition=exports
)
)
print("Debug:created stream with parasm ", greengrass_stream + " | ", kinesis_stream, " | ", batch_size)
return kinesis_client
def run(args):
kinesis_client = create_kinensis_client(args.greengrass_stream, args.kinesis_stream, args.batch_size)
mqtt_client_instance = connect_mqtt()
subscribe(mqtt_client_instance, args.greengrass_stream, args, kinesis_client)
mqtt_client_instance.loop_forever()
def parse_args() -> argparse.Namespace:
parser = argparse.ArgumentParser()
parser.add_argument('--greengrass-stream', required=False, default='SiteWise_Stream_Kinesis')
parser.add_argument('--kinesis-stream', required=False, default='MyKinesisStream')
parser.add_argument('--batch-size', required=False, type=int, default=500)
return parser.parse_args()
if __name__ == '__main__':
args = parse_args()
print(f'args: {args.__dict__}')
run(args)
In this approach:
we create the connection only once
we do not close the connection,
and wait 1 second before moving on after appending the message to the kinesis stream.
Needless to say, this solution cannot be used in our production environment, but after a lot of random trying it seems to work somehow. We still need to find the root cause; it might be a Python threading problem? We are out of guesses.
A:
After contacting official AWS personnel, we got the following answer:
So looking at the code a bit further, it seems that the API call the
streams manager library is making to Kinesis is done asynchronously.
What this means for your program is that when you call
kinesis_client.append_message(stream_name, payload), the function is
executed locally but the results will only be sent to AWS later, while
the code proceeds with executing the next line. This in the
end causes the stream to be closed and destroyed via
kinesis_client.close() before the data is published to the AWS side of
the stream. This seems to be an oddity of how Python handles streams.
Since the API call is also asynchronous and you don’t have access to
the future (the future is not passed to the caller from the streams
library), you have no way of knowing that the publish to AWS failed due
to the stream being closed. The reason why your infinite loop client
works is that the infinite loop gives the asynchronous call time to
complete. The same goes for your example where you don’t close the
client.
As to the solution I am not sure what would be the correct
way to proceed. Looking at how streams are meant to be used it is not
unreasonable to keep the stream open as long as the MQTT library is
still processing messages. You will just need to be careful with error
handling to ensure you don’t have any leaks or lingering connections
to the stream should the program stop functioning.
A:
We also received a response on the AWS Q&A site:
From here:
You are deleting your stream every time you append a message to it.
Since the stream only ever contains a single message, you likely
aren't hitting the batch_size minimum in order for StreamManager to
upload.
You'll want to create your StreamManager client and stream initialization
a single time, and then re-use them when appending data. You may also
want to consider reducing your batch size.
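For reference, a minimal sketch of that shape, using only the classes already imported in the question (the stream names are the question's defaults, and batch_size=1 is an assumption to force prompt export): create the stream once at startup, append inside the MQTT callback, and close the client only when the component shuts down.
sm_client = StreamManagerClient()
try:
    sm_client.create_message_stream(
        MessageStreamDefinition(
            name="SiteWise_Stream_Kinesis",              # greengrass stream (placeholder)
            strategy_on_full=StrategyOnFull.OverwriteOldestData,
            export_definition=ExportDefinition(
                kinesis=[KinesisConfig(
                    identifier="KinesisExportSiteWise_Stream_Kinesis",
                    kinesis_stream_name="MyKinesisStream",
                    batch_size=1,                        # export even single messages promptly
                )]
            ),
        )
    )
except Exception:
    pass                                                 # the stream may already exist

def on_message(mqtt_client, userdata, msg):
    # re-use the long-lived client; never delete/recreate the stream per message
    sm_client.append_message(stream_name="SiteWise_Stream_Kinesis", data=msg.payload)

# ... subscribe and run loop_forever() as before; call sm_client.close() only on shutdown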
| AWS Greengrass doesn't send data to AWS Kinesis | The main purpose of my program is to connect to an incoming MQTT channel, and send the data received to my AWS Kinesis Stream called "MyKinesisStream".
Here is my code:
import argparse
import logging
import random
from paho.mqtt import client as mqtt_client
from stream_manager import (
ExportDefinition,
KinesisConfig,
MessageStreamDefinition,
ResourceNotFoundException,
StrategyOnFull,
StreamManagerClient, ReadMessagesOptions,
)
broker = 'localhost'
port = 1883
topic = "clients/test/hello/world"
client_id = f'python-mqtt-{random.randint(0, 100)}'
username = '...'
password = '...'
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger()
args = ""
def connect_mqtt() -> mqtt_client:
def on_connect(client, userdata, flags, rc):
if rc == 0:
print("Connected to MQTT Broker!")
else:
print("Failed to connect, return code %d\n", rc)
client = mqtt_client.Client(client_id)
client.username_pw_set(username, password)
client.on_connect = on_connect
client.connect(broker, port)
return client
def sendDataToKinesis(
stream_name: str,
kinesis_stream_name: str,
payload,
batch_size: int = None,
):
try:
print("Debug: sendDataToKinesis with params:", stream_name + " | ", kinesis_stream_name, " | ", batch_size)
print("payload:", payload)
print("type payload:", type(payload))
except Exception as e:
print("Error while printing out the parameters", str(e))
logger.exception(e)
try:
# Create a client for the StreamManager
kinesis_client = StreamManagerClient()
# Try deleting the stream (if it exists) so that we have a fresh start
try:
kinesis_client.delete_message_stream(stream_name=stream_name)
except ResourceNotFoundException:
pass
exports = ExportDefinition(
kinesis=[KinesisConfig(
identifier="KinesisExport" + stream_name,
kinesis_stream_name=kinesis_stream_name,
batch_size=batch_size,
)]
)
kinesis_client.create_message_stream(
MessageStreamDefinition(
name=stream_name,
strategy_on_full=StrategyOnFull.OverwriteOldestData,
export_definition=exports
)
)
sequence_no = kinesis_client.append_message(stream_name=stream_name, data=payload)
print(
"Successfully appended message to stream with sequence number ", sequence_no
)
readValue = kinesis_client.read_messages(stream_name, ReadMessagesOptions(min_message_count=1, read_timeout_millis=1000))
print("DEBUG read test: ", readValue)
except Exception as e:
print("Exception while running: " + str(e))
logger.exception(e)
finally:
# Always close the client to avoid resource leaks
print("closing connection")
if kinesis_client:
kinesis_client.close()
def subscribe(client: mqtt_client, args):
def on_message(client, userdata, msg):
print(f"Received `{msg.payload.decode()}` from `{msg.topic}` topic")
sendDataToKinesis(args.greengrass_stream, args.kinesis_stream, msg.payload, args.batch_size)
client.subscribe(topic)
client.on_message = on_message
def run(args):
mqtt_client_instance = connect_mqtt()
subscribe(mqtt_client_instance, args)
mqtt_client_instance.loop_forever()
def parse_args() -> argparse.Namespace:
parser = argparse.ArgumentParser()
parser.add_argument('--greengrass-stream', required=False, default='...')
parser.add_argument('--kinesis-stream', required=False, default='MyKinesisStream')
parser.add_argument('--batch-size', required=False, type=int, default=500)
return parser.parse_args()
if __name__ == '__main__':
args = parse_args()
run(args)
(the dotted parts ... are commented out as they are sensitive information, but they are correct values.)
The problem is that it just won't send any data to our kinesis stream. I get the following STDOUT from the run:
2022-11-25T12:13:47.640Z [INFO] (Copier) jp.co.xyz.StreamManagerKinesis: stdout. Connected to MQTT Broker!. {scriptName=services.jp.co.xyz.StreamManagerKinesis.lifecycle.Run, serviceName=jp.co.xyz.StreamManagerKinesis, currentState=RUNNING}
2022-11-25T12:13:47.641Z [INFO] (Copier) jp.co.xyz.StreamManagerKinesis: stdout. Received `{"machineId":2, .... "timestamp":"2022-10-24T12:21:34.8777249Z","value":true}` from `clients/test/hello/world` topic. {scriptName=services.jp.co.xyz.StreamManagerKinesis.lifecycle.Run, serviceName=jp.co.xyz.StreamManagerKinesis, currentState=RUNNING}
2022-11-25T12:13:47.641Z [INFO] (Copier) jp.co.xyz.StreamManagerKinesis: stdout. Debug: sendDataToKinesis with params: test | MyKinesisStream | 100. {scriptName=services.jp.co.xyz.StreamManagerKinesis.lifecycle.Run, serviceName=jp.co.xyz.StreamManagerKinesis, currentState=RUNNING}
2022-11-25T12:13:47.641Z [INFO] (Copier) jp.co.xyz.StreamManagerKinesis: stdout. payload: b'{"machineId":2,... ,"timestamp":"2022-10-24T12:21:34.8777249Z","value":true}'. {scriptName=services.jp.co.xyz.StreamManagerKinesis.lifecycle.Run, serviceName=jp.co.xyz.StreamManagerKinesis, currentState=RUNNING}
2022-11-25T12:13:47.641Z [INFO] (Copier) jp.co.xyz.StreamManagerKinesis: stdout. type payload: <class 'bytes'>. {scriptName=services.jp.co.xyz.StreamManagerKinesis.lifecycle.Run, serviceName=jp.co.xyz.StreamManagerKinesis, currentState=RUNNING}
2022-11-25T12:13:47.641Z [INFO] (Copier) jp.co.xyz.StreamManagerKinesis: stdout. Successfully appended message to stream with sequence number 0. {scriptName=services.jp.co.xyz.StreamManagerKinesis.lifecycle.Run, serviceName=jp.co.xyz.StreamManagerKinesis, currentState=RUNNING}
2022-11-25T12:13:47.641Z [INFO] (Copier) jp.co.xyz.StreamManagerKinesis: stdout. DEBUG read test: [<Class Message. stream_name: 'test', sequence_number: 0, ingest_time: 1669376980985, payload: b'{"machineId":2,"mach'>]. {scriptName=services.jp.co.xyz.StreamManagerKinesis.lifecycle.Run, serviceName=jp.co.xyz.StreamManagerKinesis, currentState=RUNNING}
2022-11-25T12:13:47.641Z [INFO] (Copier) jp.co.xyz.StreamManagerKinesis: stdout. closing connection. {scriptName=services.jp.co.xyz.StreamManagerKinesis.lifecycle.Run, serviceName=jp.co.xyz.StreamManagerKinesis, currentState=RUNNING}
So we can see that the data arrives from MQTT, the python code executes the append message, and it seems that my kinesis streams have the information as it can read it in the next step... then closes the connection without any error.
But the problem is, that from AWS side, we cannot see the data arriving on the stream:
What can be the problem here? Our greengrass core is configured properly, can be accessed from the AWS, and the Component is running and healthy also:
Update: we managed to get some messages out with the following code:
...
def sendDataToKinesis(
kinesis_client,
stream_name: str,
payload,
):
try:
print("payload:", payload)
print("type payload:", type(payload))
except Exception as e:
print("Error while printing out the parameters", str(e))
logger.exception(e)
try:
sequence_no = kinesis_client.append_message(stream_name=stream_name, data=payload)
print(
"Successfully appended message to stream with sequence number ", sequence_no
)
time.sleep(1)
except Exception as e:
print("Exception while running: " + str(e))
logger.exception(e)
# finally:
# # todo: Always close the client to avoid resource leaks!!!
# print("closing connection")
# if kinesis_client:
# kinesis_client.close()
def subscribe(client: mqtt_client, stream_name: str, args, kinesisClient):
def on_message(client, userdata, msg):
print(f"Received `{msg.payload.decode()}` from `{msg.topic}` topic")
sendDataToKinesis(kinesisClient, stream_name, msg.payload)
client.subscribe(topic)
client.on_message = on_message
def create_kinensis_client(greengrass_stream, kinesis_stream, batch_size):
# Create a client for the StreamManager
kinesis_client = StreamManagerClient()
# Try deleting the stream (if it exists) so that we have a fresh start
try:
kinesis_client.delete_message_stream(stream_name=greengrass_stream)
except ResourceNotFoundException:
pass
exports = ExportDefinition(
kinesis=[KinesisConfig(
identifier="KinesisExport" + greengrass_stream,
kinesis_stream_name=kinesis_stream,
batch_size=batch_size,
)]
)
kinesis_client.create_message_stream(
MessageStreamDefinition(
name=greengrass_stream,
strategy_on_full=StrategyOnFull.OverwriteOldestData,
export_definition=exports
)
)
print("Debug:created stream with parasm ", greengrass_stream + " | ", kinesis_stream, " | ", batch_size)
return kinesis_client
def run(args):
kinesis_client = create_kinensis_client(args.greengrass_stream, args.kinesis_stream, args.batch_size)
mqtt_client_instance = connect_mqtt()
subscribe(mqtt_client_instance, args.greengrass_stream, args, kinesis_client)
mqtt_client_instance.loop_forever()
def parse_args() -> argparse.Namespace:
parser = argparse.ArgumentParser()
parser.add_argument('--greengrass-stream', required=False, default='SiteWise_Stream_Kinesis')
parser.add_argument('--kinesis-stream', required=False, default='MyKinesisStream')
parser.add_argument('--batch-size', required=False, type=int, default=500)
return parser.parse_args()
if __name__ == '__main__':
args = parse_args()
print(f'args: {args.__dict__}')
run(args)
In this approach:
we create the connection only once
we do not close the connection,
and wait 1 second before moving on after appending the message to the kinesis stream.
No need to say that this solution cannot be used in our production environment, but after a lot of random trying, this seems to work somehow. We still need to find the root cause, but it might be a python threading problem? We are out of guesses.
| [
"After contacting official AWS personnel, we got the following answer:\n\nSo looking at the code a bit further its seems that The API call the\nstreams manager library is making to Kinesis is done asynchronously.\nWhat this means for your program is that when you try to call\nkinesis_client.append_message(stream_name, payload) the function is it\nis executed locally but the results will only be sent to AWS later,\nhowever the code proceeds with executing the next line. This in the\nend causes the stream to be closed and destroyed via\nkinesis_client.close() before the data is published to the AWS side of\nthe stream. This seems to be an oddity of how Python handles streams.\nSince the API call is also asynchronous and you don’t have access to\nthe future(the future is not passed to the caller from the streams\nlibrary) you have no way of knowing that the publish to AWS failed due\nto the stream being closed. The reason why your infinite loop client\nworks is that the infinite loop gives the asynchronous call time to\ncomplete. The same goes for you example where you don’t close the\nclient.\n\n\nAs to the solution I am not sure what would be the correct\nway to proceed. Looking at how streams are meant to be used it is not\nunreasonable to keep the stream open as long as the MQTT library is\nstill processing messages. You will just need to be careful with error\nhandling to ensure you don’t have any leaks or lingering connections\nto the stream should the program stop functioning.\n\n",
"We also received a response on the AWS Q&A site:\nFrom here:\n\nYou are deleting your stream every time you append a message to it.\nSince the stream only ever contains a single message, you likely\naren't hitting the batch_size minimum in order for StreamManager to\nupload.\nYou'll want create your StreamManager client and stream initialization\na single time, and then re-use them when appending data. You may also\nwant to consider reducing your batch size.\n\n"
] | [
0,
0
] | [] | [] | [
"amazon_kinesis",
"amazon_web_services",
"aws_iot_greengrass",
"greengrass",
"python"
] | stackoverflow_0074573002_amazon_kinesis_amazon_web_services_aws_iot_greengrass_greengrass_python.txt |
Q:
Decorator WITH arguments for FastAPI endpoint
I'm having this decorator:
def security(required_roles):
def decorator(function):
async def wrapper():
print("ROLES", required_roles)
return function
return wrapper
return decorator
and this endpoint, I want to decorate:
@app.get(
"/me", summary="Get details of currently logged in user", response_model=SystemUser
)
@security(required_roles=["role1", "role2"])
async def get_me(user: SystemUser = Depends(get_current_user)):
return user
But when I call it I get this:
File "/home/niels/PycharmProjects/fastApiProject/venv/lib/python3.10/site-packages/fastapi/routing.py", line 139, in serialize_response
raise ValidationError(errors, field.type_)
pydantic.error_wrappers.ValidationError: 1 validation error for SystemUser
response
value is not a valid dict (type=type_error.dict)
Can anybody tell me why and how I could rewrite the decorator. If I place the decorator before @app.get(...) it does not get executed, also not sure why. Any help would be much appreciated.
A:
You should pass the wrapper arguments to the wrapped function and await it:
def security(required_roles):
def decorator(function):
async def wrapper(user):
print("ROLES", required_roles)
return await function(user)
return wrapper
return decorator
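A related pitfall worth noting: FastAPI inspects the endpoint's signature to resolve Depends(), so a wrapper with a different signature can hide the user parameter. A sketch that forwards arbitrary arguments and preserves the original signature with functools.wraps:
import functools

def security(required_roles):
    def decorator(function):
        @functools.wraps(function)  # keeps get_me's signature, so Depends() still resolves
        async def wrapper(*args, **kwargs):
            print("ROLES", required_roles)
            return await function(*args, **kwargs)
        return wrapper
    return decorator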
| Decorator WITH arguments for FastAPI endpoint | I'm having this decorator:
def security(required_roles):
def decorator(function):
async def wrapper():
print("ROLES", required_roles)
return function
return wrapper
return decorator
and this endpoint, I want to decorate:
@app.get(
"/me", summary="Get details of currently logged in user", response_model=SystemUser
)
@security(required_roles=["role1", "role2"])
async def get_me(user: SystemUser = Depends(get_current_user)):
return user
But when I call it I get this:
File "/home/niels/PycharmProjects/fastApiProject/venv/lib/python3.10/site-packages/fastapi/routing.py", line 139, in serialize_response
raise ValidationError(errors, field.type_)
pydantic.error_wrappers.ValidationError: 1 validation error for SystemUser
response
value is not a valid dict (type=type_error.dict)
Can anybody tell me why and how I could rewrite the decorator. If I place the decorator before @app.get(...) it does not get executed, also not sure why. Any help would be much appreciated.
| [
"You should pass the wrapper arguments to the wrapped function and await it:\ndef security(required_roles):\n def decorator(function):\n async def wrapper(user):\n print(\"ROLES\", required_roles)\n return await function(user)\n return wrapper\n return decorator\n\n"
] | [
0
] | [] | [] | [
"fastapi",
"python",
"python_decorators"
] | stackoverflow_0074652763_fastapi_python_python_decorators.txt |
Q:
Altair Charts containing Encoding of Special Characters
I am trying to plot a chart from a spreadsheet with this code.
import pandas as pd
import altair as alt
crude_df = pd.read_excel(open('PET_CONS_PSUP_DC_NUS_MBBLPD_M.xls', 'rb'),
sheet_name='Data 1',index_col=None, header=2)
alt.Chart(crude_df.tail(100)).mark_circle().encode(
x = 'Date',
y = r'U\.S\. Product Supplied of Normal Butane (Thousand Barrels per Day)'
)
I got this part of the code from the Altair Documentation
y = r'U\.S\. Product Supplied of Normal Butane (Thousand Barrels per Day)'
This is to mitigate the Chart appearing with empty content problem due to presence of special characters in Encodings
But I still get Error.
ValueError: U\.S\. Product Supplied of Normal Butane (Thousand Barrels per Day) encoding field is specified without a type; the type cannot be inferred because it does not match any column in the data.
Not sure what I am doing wrong.
A:
It has to do with the dot (.) present in the column names of your spreadsheet/dataframe, and it seems that the escape (suggested by the documentation) does not work in your case.
As a workaround, you can remove the dot with pandas str.replace before using altair.Chart.
Try this :
import pandas as pd
import altair as alt
crude_df = pd.read_excel("PET_CONS_PSUP_DC_NUS_MBBLPD_M.xlsx", sheet_name="Data 1", header=2)
crude_df.columns = crude_df.columns.str.replace("\.", "", regex=True)
alt.Chart(crude_df.tail(100)).mark_circle().encode(
x = 'Date',
y = 'US Product Supplied of Normal Butane (Thousand Barrels per Day)'
)
# Output :
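If renaming the columns is not an option, another approach that may work is to skip Altair's shorthand parsing and give the field and type explicitly (a sketch, keeping the column name from the question):
alt.Chart(crude_df.tail(100)).mark_circle().encode(
    x='Date',
    y=alt.Y(
        field="U.S. Product Supplied of Normal Butane (Thousand Barrels per Day)",
        type="quantitative",
    ),
)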
| Altair Charts containing Encoding of Special Characters | I am trying to plot a chart from a spreadsheet with this code.
import pandas as pd
import altair as alt
crude_df = pd.read_excel(open('PET_CONS_PSUP_DC_NUS_MBBLPD_M.xls', 'rb'),
sheet_name='Data 1',index_col=None, header=2)
alt.Chart(crude_df.tail(100)).mark_circle().encode(
x = 'Date',
y = r'U\.S\. Product Supplied of Normal Butane (Thousand Barrels per Day)'
)
I got this part of the code from the Altair Documentation
y = r'U\.S\. Product Supplied of Normal Butane (Thousand Barrels per Day)'
This is to mitigate the Chart appearing with empty content problem due to presence of special characters in Encodings
But I still get Error.
ValueError: U\.S\. Product Supplied of Normal Butane (Thousand Barrels per Day) encoding field is specified without a type; the type cannot be inferred because it does not match any column in the data.
Not sure what I am doing wrong.
| [
"It has to do with the dot (.) present in the name of the columns of your spreadsheet/dataframe and It seems that the escape (suggested by the documentation) does not work in your case.\nAs a workaround, you can remove the dot with pandas str.replace before using altair.Chart.\nTry this :\nimport pandas as pd\nimport altair as alt\n\ncrude_df = pd.read_excel(\"PET_CONS_PSUP_DC_NUS_MBBLPD_M.xlsx\", sheet_name=\"Data 1\", header=2)\n\ncrude_df.columns = crude_df.columns.str.replace(\"\\.\", \"\", regex=True)\n\nalt.Chart(crude_df.tail(100)).mark_circle().encode(\n x = 'Date',\n y = 'US Product Supplied of Normal Butane (Thousand Barrels per Day)'\n)\n\n# Output :\n\n"
] | [
1
] | [] | [] | [
"altair",
"python",
"python_3.x",
"visualization"
] | stackoverflow_0074652841_altair_python_python_3.x_visualization.txt |
Q:
matplotlib moasic subplots share y axis
I created a plt.subplots with 4 subplots (2*2) and I wanted all of them to share y axis, so I used sharey=True.
Later, I wanted to add another subplot below (2*3), and used plt.subplot_mosaic for convenience. Now, I want the 2 subplots in the first row and the 2 subplots in the second row to still share their Y tick values. However, when I pass sharey I get an error, even though according to the documentation I should be able to pass this argument (https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.subplot_mosaic.html)
fig, axes = plt.subplot_mosaic("AB;CD;EE", figsize=(8,14), gridspec_kw={"height_ratios": (3,3,2)}, sharey=True)
returns:
TypeError: __init__() got an unexpected keyword argument 'sharey'
A:
In the Matplotlib version you are running, sharey is not accepted by subplot_mosaic; it is only valid for subplots (newer Matplotlib releases added sharex/sharey to subplot_mosaic as well).
You can get a similar visual effect by manually setting the y-axis limits of all your subplots to the same range with the set_ylim method of each axes object. Note this only fixes the limits; it does not link the axes the way sharey does.
Here is an example:
fig, axes = plt.subplot_mosaic("AB;CD;EE", figsize=(8,14), gridspec_kw={"height_ratios": (3,3,2)})
# Set the y-axis limits for all subplots to be the same
for ax in axes.values():  # subplot_mosaic returns a dict of axes, not an array
ax.set_ylim(0, 100)
This will set the y-axis limits of all subplots to be the same. You can adjust the limits as needed for your specific use case.
Alternatively, you could use the subplots function instead of subplot_mosaic to create your subplots, and then use the sharey argument as you originally intended.
Here is an example of how you could do this:
# Create a figure with 2 rows and 3 columns of subplots
fig, axes = plt.subplots(2, 3, figsize=(8, 14), sharey=True)
# Use the axes object to access and modify your subplots
axes[0, 0].plot(...
axes[0, 1].plot(...
...
This will create a figure with 2 rows and 3 columns of subplots, and with sharey=True all of the subplots will share the same y-axis.
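If you want real axis sharing (linked limits and ticks) while keeping the mosaic layout, the Axes.sharey method should also work; a sketch assuming the "AB;CD;EE" labels from the question:
fig, axes = plt.subplot_mosaic(
    "AB;CD;EE", figsize=(8, 14), gridspec_kw={"height_ratios": (3, 3, 2)}
)
# link the y-axis of the four upper subplots to subplot "A"
for label in ("B", "C", "D"):
    axes[label].sharey(axes["A"])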
| matplotlib moasic subplots share y axis | I created a plt.subplots with 4 subplots (2*2) and I wanted all of them to share y axis, so I used sharey=True.
Later, I wanted to add another subplot below (2*3), and used plt.subplot_mosaicfor convinence. Now, I want the 2 subplots in the first row and the 2 subplots in the seconed row to still share their Y ticks value. However, when I write shareyI get an error, even though according to documntation I should be able to pass this argument (https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.subplot_mosaic.html)
fig, axes = plt.subplot_mosaic("AB;CD;EE", figsize=(8,14), gridspec_kw={"height_ratios": (3,3,2)}, sharey=True)
returns:
TypeError: __init__() got an unexpected keyword argument 'sharey'
| [
"The sharey argument is not a valid argument for the subplot_mosaic function. It is only valid for the subplots function.\nYou can achieve the same effect by manually setting the y-axis limits on all of your subplots to be the same. This can be done using the set_ylim method of the axes object.\nHere is an example:\nfig, axes = plt.subplot_mosaic(\"AB;CD;EE\", figsize=(8,14), gridspec_kw={\"height_ratios\": (3,3,2)})\n\n# Set the y-axis limits for all subplots to be the same\nfor ax in axes.flat:\n ax.set_ylim(0, 100)\n\nThis will set the y-axis limits of all subplots to be the same. You can adjust the limits as needed for your specific use case.\nAlternatively, you could use the subplots function instead of subplot_mosaic to create your subplots, and then use the sharey argument as you originally intended.\nHere is an example of how you could do this:\n# Create a figure with 2 rows and 3 columns of subplots\nfig, axes = plt.subplots(2, 3, figsize=(8, 14), sharey=True)\n\n# Use the axes object to access and modify your subplots\naxes[0, 0].plot(...\naxes[0, 1].plot(...\n...\n\nThis will create a figure with 2 rows and 3 columns of subplots, and all subplots in the same row will share the same y-axis limits.\n"
] | [
0
] | [] | [] | [
"matplotlib",
"python"
] | stackoverflow_0074652999_matplotlib_python.txt |
Q:
Animation using pads in curses
I would like to move a curses pad across the screen, but I can't figure out a way to automatically erase the pad from the previous position in the screen without erasing the contents of the pad. I don't want to have to redraw the pad every time I move it. Here's my test program:
import curses
import time
def main(stdscr):
pad = curses.newpad(10, 10)
ch = ord('A')
pad.addch(4, 4, ch)
for y in range(0, 10):
for x in range(0, 10):
print("adding pad at {y},{x}")
try:
pad.insch(y, x, ch)
except:
pass
if x % 9 == 0:
ch += 1
pad.refresh(0, 0, 0, 0, 10, 10)
time.sleep(2)
pad.refresh(0, 0, 1, 1, 11, 11)
time.sleep(2)
pad.refresh(0, 0, 2, 2, 12, 12)
time.sleep(2)
curses.wrapper(main)
At the end of this script, the window looks like:
AAAAAAAAAA
BAAAAAAAAAA
CBAAAAAAAAAA
DCBBBBBBBBBB
EDCCCCCCCCCC
FEDDDDDDDDDD
GFEEEEEEEEEE
HGFFFFFFFFFF
IHGGGGGGGGGG
JIHHHHHHHHHH
JIIIIIIIIII
JJJJJJJJJJ
The first two lines and the first two characters of each line are leftover from previous pad displays. I want these erased.
I can create a different pad with the same dimensions and use it to erase the block from the screen:
def main(stdscr):
pad = curses.newpad(10, 10)
erasepad = curses.newpad(10, 10)
ch = ord('A')
pad.addch(4, 4, ch)
for y in range(0, 10):
for x in range(0, 10):
print("adding pad at {y},{x}")
try:
pad.insch(y, x, ch)
except:
pass
if x % 9 == 0:
ch += 1
pad.refresh(0, 0, 0, 0, 10, 10)
time.sleep(2)
erasepad.refresh(0, 0, 0, 0, 10, 10)
pad.refresh(0, 0, 1, 1, 11, 11)
time.sleep(2)
erasepad.refresh(0, 0, 1, 1, 11, 11)
pad.refresh(0, 0, 2, 2, 12, 12)
time.sleep(2)
That's workable for my application, but is there a more efficient way? This requires me to create two pads for every animation block, and to completely erase every pad every time.
A:
That's roughly the case. The sample code is inefficient however, doing extra repainting. Take a look at noutrefresh and doupdate (to replace those refresh calls), and replace the time.sleep with napms (again, to improve performance).
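A sketch of what that change could look like for one step of the animation, using the pads from the question:
# queue both updates, then repaint the physical screen once
erasepad.noutrefresh(0, 0, 0, 0, 10, 10)   # clear the old position
pad.noutrefresh(0, 0, 1, 1, 11, 11)        # draw at the new position
curses.doupdate()                          # single refresh for both pads
curses.napms(2000)                         # pause 2000 ms without leaving curses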
| Animation using pads in curses | I would like to move a curses pad across the screen, but I can't figure out a way to automatically erase the pad from the previous position in the screen without erasing the contents of the pad. I don't want to have to redraw the pad every time I move it. Here's my test program:
import curses
import time
def main(stdscr):
pad = curses.newpad(10, 10)
ch = ord('A')
pad.addch(4, 4, ch)
for y in range(0, 10):
for x in range(0, 10):
print("adding pad at {y},{x}")
try:
pad.insch(y, x, ch)
except:
pass
if x % 9 == 0:
ch += 1
pad.refresh(0, 0, 0, 0, 10, 10)
time.sleep(2)
pad.refresh(0, 0, 1, 1, 11, 11)
time.sleep(2)
pad.refresh(0, 0, 2, 2, 12, 12)
time.sleep(2)
curses.wrapper(main)
At the end of this script, the window looks like:
AAAAAAAAAA
BAAAAAAAAAA
CBAAAAAAAAAA
DCBBBBBBBBBB
EDCCCCCCCCCC
FEDDDDDDDDDD
GFEEEEEEEEEE
HGFFFFFFFFFF
IHGGGGGGGGGG
JIHHHHHHHHHH
JIIIIIIIIII
JJJJJJJJJJ
The first two lines and the first two characters of each line are leftover from previous pad displays. I want these erased.
I can create a different pad with the same dimensions and use it to erase the block from the screen:
def main(stdscr):
pad = curses.newpad(10, 10)
erasepad = curses.newpad(10, 10)
ch = ord('A')
pad.addch(4, 4, ch)
for y in range(0, 10):
for x in range(0, 10):
print("adding pad at {y},{x}")
try:
pad.insch(y, x, ch)
except:
pass
if x % 9 == 0:
ch += 1
pad.refresh(0, 0, 0, 0, 10, 10)
time.sleep(2)
erasepad.refresh(0, 0, 0, 0, 10, 10)
pad.refresh(0, 0, 1, 1, 11, 11)
time.sleep(2)
erasepad.refresh(0, 0, 1, 1, 11, 11)
pad.refresh(0, 0, 2, 2, 12, 12)
time.sleep(2)
That's workable for my application, but is there a more efficient way? This requires me to create two pads for every animation block, and to completely erase every pad every time.
| [
"That's roughly the case. The sample code is inefficient however, doing extra repainting. Take a look at noutrefresh and doupdate (to replace those refresh calls), and replace the time.sleep with napms (again, to improve performance).\n"
] | [
0
] | [] | [] | [
"curses",
"python",
"python_3.x",
"python_curses"
] | stackoverflow_0074649261_curses_python_python_3.x_python_curses.txt |
Q:
Error importing plugin "sqlalchemy.ext.mypy.plugin": cannot import name 'Optional' from 'mypy.plugin'
Following the documentation here https://docs.sqlalchemy.org/en/14/orm/extensions/mypy.html I got an error while running mypy on any file using sqlalchemy mypy plugin.
To Reproduce
Create a new virtualenv using Python 3.8
pip install sqlalchemy[mypy]==1.4
Create a simple mypy config (mypy.ini)
[mypy] plugins = sqlalchemy.ext.mypy.plugin
Run mypy on any file
mypy --config-file=mypy.ini
Error
Error importing plugin "sqlalchemy.ext.mypy.plugin": cannot import
name 'Optional' from 'mypy.plugin'
Versions.
OS: Linux Ubuntu 22.04
Python: 3.8
SQLAlchemy: 1.4
mypy: 0.991
A:
Following the issue on GitHub the problem is the SQLAlchemy version.
SQLAlchemy 1.4.0 is not supported.
Using SQLAlchemy 1.4.44 solved the problem.
| Error importing plugin "sqlalchemy.ext.mypy.plugin": cannot import name 'Optional' from 'mypy.plugin' | Following the documentation here https://docs.sqlalchemy.org/en/14/orm/extensions/mypy.html I got an error while running mypy on any file using sqlalchemy mypy plugin.
To Reproduce
Create a new virtualenv using Python 3.8
pip install sqlalchemy[mypy]==1.4
Create a simple mypy config (mypy.ini)
[mypy] plugins = sqlalchemy.ext.mypy.plugin
Run mypy on any file
mypy --config-file=mypy.ini
Error
Error importing plugin "sqlalchemy.ext.mypy.plugin": cannot import
name 'Optional' from 'mypy.plugin'
Versions.
OS: Linux Ubuntu 22.04
Python: 3.8
SQLAlchemy: 1.4
mypy: 0.991
| [
"Following the issue on GitHub the problem is the SQLAlchemy version.\nSQLAlchemy 1.4.0 is not supported.\nUsing SQLAlchemy 1.4.44 solved the problem.\n"
] | [
0
] | [] | [] | [
"mypy",
"python",
"sqlalchemy"
] | stackoverflow_0074642545_mypy_python_sqlalchemy.txt |
Q:
How to clean up anaconda base environment?
I have just reinstalled Miniconda. After that I ran pip list in base environment. The output is following:
Package Version
---------------------- ---------
brotlipy 0.7.0
certifi 2021.10.8
cffi 1.15.0
charset-normalizer 2.0.4
colorama 0.4.4
conda 4.12.0
conda-content-trust 0+unknown
conda-package-handling 1.8.1
cryptography 36.0.0
idna 3.3
menuinst 1.4.18
pip 21.2.4
pycosat 0.6.3
pycparser 2.21
pyOpenSSL 22.0.0
PySocks 1.7.1
pywin32 302
requests 2.27.1
ruamel-yaml-conda 0.15.100
setuptools 61.2.0
six 1.16.0
tqdm 4.63.0
urllib3 1.26.8
wheel 0.37.1
win-inet-pton 1.1.0
wincertstore 0.2
But the fact is that I did not install them. How did packages like tqdm, colorama, brotlipy, cryptography and others appear here? It is supposed to be an empty base environment. Any suggestions?
A:
Do:
conda uninstall -n base --all
A:
I would do:
conda clean -a -d
#-d = dryrun
and if that's okay with me
conda clean -a -y
#-y = yes (no prompt)
| How to clean up anaconda base environment? | I have just reinstalled Miniconda. After that I ran pip list in base environment. The output is following:
Package Version
---------------------- ---------
brotlipy 0.7.0
certifi 2021.10.8
cffi 1.15.0
charset-normalizer 2.0.4
colorama 0.4.4
conda 4.12.0
conda-content-trust 0+unknown
conda-package-handling 1.8.1
cryptography 36.0.0
idna 3.3
menuinst 1.4.18
pip 21.2.4
pycosat 0.6.3
pycparser 2.21
pyOpenSSL 22.0.0
PySocks 1.7.1
pywin32 302
requests 2.27.1
ruamel-yaml-conda 0.15.100
setuptools 61.2.0
six 1.16.0
tqdm 4.63.0
urllib3 1.26.8
wheel 0.37.1
win-inet-pton 1.1.0
wincertstore 0.2
But the fact is that I did not install them. How packages like tqdm, colorama, brotlipy, cryptography and others appeared here? It supposed to be an empty base environment. Your suggestions?
| [
"Do:\nconda uninstall -n base --all\n\n",
"i would do:\nconda clean -a -d\n#-d = dryrun\n\nand if that's okay with me\nconda clean -a -y\n#-y = yes (no promt)\n\n"
] | [
0,
0
] | [] | [] | [
"anaconda",
"miniconda",
"pip",
"python"
] | stackoverflow_0073978696_anaconda_miniconda_pip_python.txt |
Q:
how to not overwrite file in python in different os path
So I want to avoid overwriting a file name that already exists, but I don't know how to combine that code with my code. Please help me.
Here's my code for writing the file:
def filepass(f):
print(f)
with open ('media/pass/'+'filepass.txt', 'a') as fo:
fo.write(f)
fo.close()
return fo
And here's the code that adds a number to the filepass name:
def build_filename(name, num=0):
root, ext = os.path.splitext(name)
print(root)
return '%s%d%s' % (root, num, ext) if num else name
def find_next_filename(name, max_tries=20):
if not os.path.exists(name): return name
else:
for i in range(max_tries):
test_name = build_filename(name, i+1)
if not os.path.exists(test_name): return test_name
return None
All I want is to create the file names filepass.txt, filepass1.txt, filepass2.txt
A:
Something like this?
def filepass(f):
print(f)
filename = find_next_filename('media/pass/filepass.txt')
with open (filename, 'a') as fo:
fo.write(f)
# you don't need to close when you use "with open";
# but then it doesn't make sense to return a closed file handle
# maybe let's return the filename instead
return filename
| how to not overwrite file in python in different os path | so i want to avoid overwrite the file name that existed. but i don't know how to combine the code with mycode. please help me
here's my code for write file:
def filepass(f):
print(f)
with open ('media/pass/'+'filepass.txt', 'a') as fo:
fo.write(f)
fo.close()
return fo
and here's the code to create number in name filepass:
def build_filename(name, num=0):
root, ext = os.path.splitext(name)
print(root)
return '%s%d%s' % (root, num, ext) if num else name
def find_next_filename(name, max_tries=20):
if not os.path.exists(name): return name
else:
for i in range(max_tries):
test_name = build_filename(name, i+1)
if not os.path.exists(test_name): return test_name
return None
all i want is to create filename : filepass.txt, filepass1.txt, filepass2.txt
| [
"Something like this?\ndef filepass(f):\n print(f)\n filename = find_next_filename('media/pass/filepass.txt')\n with open (filename, 'a') as fo:\n fo.write(f)\n # you don't need to close when you use \"with open\";\n # but then it doesn't make sense to return a closed file handle\n # maybe let's return the filename instead\n return filename\n\n"
] | [
0
] | [] | [] | [
"file",
"filenames",
"python"
] | stackoverflow_0074652931_file_filenames_python.txt |
Q:
Writing in a file while reading the keybord with pynput not working
from pynput.keyboard import Key, Listener
lol = open("GOTEM.txt",'a')
def on_press(key):
lol.write('{0}'.format(key))
with Listener(on_press=on_press) as listener:
listener.join()
lol.close()
It's 100% a code problem, because I tried writing into a file without the whole keyboard thing and it works just fine. I am new to coding, so I know I made some stupid mistake.
A:
You need to convert the key to a char. You can use key.char to get the string without any single quotes around it. But be careful, as it will throw an AttributeError if the key is a special key like space, backspace, control keys etc., so you'll have to put it in a try/except block.
You should also only open the file when you are actually going to write to it. In your original code, listener.join() blocks forever, so lol.close() is never reached and the buffered writes may never be flushed to the file.
This should do the trick:
from pynput.keyboard import Key, Listener
def on_press(key):
with open("GOTEM.txt",'a') as lol:
try:
k = key.char
lol.write(k)
except AttributeError:
if key == Key.space:
lol.write('\n')
with Listener(on_press=on_press) as listener:
listener.join()
| Writing in a file while reading the keybord with pynput not working | from pynput.keyboard import Key, Listener
lol = open("GOTEM.txt",'a')
def on_press(key):
lol.write('{0}'.format(key))
with Listener(on_press=on_press) as listener:
listener.join()
lol.close()
100% a code problem because I tried writing into a file without the whole keybord thing and it works just fine.I am new to coding so ik I made some stupid mistake.
| [
"You need to convert the key to char. You can use key.char to get the string without any single quotes around it. But be careful, as it will throw an AttributeError if the key is a special key like space, backspace, control keys etc. So you'll have to put it in a try/except block.\nYou should only open file when you are going to perform any operation.\nThis should do the trick:\nfrom pynput.keyboard import Key, Listener\n \ndef on_press(key):\n with open(\"GOTEM.txt\",'a') as lol:\n try:\n k = key.char\n lol.write(k)\n except AttributeError:\n if key == Key.space:\n lol.write('\\n') \n \nwith Listener(on_press=on_press) as listener:\n listener.join()\n\n"
] | [
0
] | [] | [] | [
"file",
"pynput",
"python"
] | stackoverflow_0074652900_file_pynput_python.txt |
Q:
Can I select multiple elements in a Python dictionary with reference to position rather than specific keys?
I'm fairly new to Python, so I apologize if this question seems a little naive. I'm working on a final project for my comp sci class that involves visualizing some data, but I ran into some difficulty with it.
Basically, I'm using a CSV file (which you can find here) that contains reported emissions data from every country over a 29-year period -- but I only want to visualize data from the top five emitting countries in each given year. My code right now looks like this:
counter = 0
for i in range(30):
global_emissions = {}
for country in dataset:
'''dataset is a list of country-specific dictionaries; a key value for every year (e.g. '1991') corresponds to a float representing CO2 emissions in kt.'''
key = str(1990 + counter)
emissions = country[key]
global_emissions[country['COUNTRY']] = emissions
sorted_emissions = {}
for i in sorted(global_emissions.values()): # this loop is just here to match keys to sorted values
for k in global_emissions.keys():
if global_emissions[k] == i:
sorted_emissions[k] = global_emissions[k]
counter += 1
The sorted_emissions list for any given year then comes out as a bunch of key-value pairs in ascending order of value. At the very end are the highest-polluting countries. In the year 1990, for example, it looks like this:
...'Germany': 955310.0, 'Japan': 1090530.0, 'Russian Federation': 2163530.0, 'China': 2173360.0, 'United States': 4844520.0}
For the dictionaries that get produced for all 29 years, I want to isolate these last five elements -- but since the keys of the most-polluting nations will change from year to year, I can't just access them by their key.
I tried indexing to them with numerical slicing (i.e. trying something like sorted_emissions[-1]), but this doesn't work with dictionaries. Is there any workaround for this, or maybe just a better way to do it? Any suggestions at all would be appreciated. Thanks so much!
A:
Dictionaries don't support positional indexing (even though Python 3.7+ preserves insertion order, you can't slice a dict the way you slice a list). However, you can turn the dictionary into a list of (key, value) tuples, sort that list by the value (the element at position [1] of each tuple), and then slice off the last five entries.
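For the concrete "top five emitters" case, sorting the items by value and slicing does it in a couple of lines (a sketch using the global_emissions dict from the question):
from operator import itemgetter

# the five highest-emitting (country, emissions) pairs for the year
top_five = sorted(global_emissions.items(), key=itemgetter(1))[-5:]

# or keep them as a dict, smallest of the five first
top_five_dict = dict(top_five)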
I hope I understood your question correctly and this helps. :)
| Can I select multiple elements in a Python dictionary with reference to position rather than specific keys? | I'm fairly new to Python, so I apologize if this question seems a little naive. I'm working on a final project for my comp sci class that involves visualizing some data, but I ran into some difficulty with it.
Basically, I'm using a CSV file (which you can find here) that contains reported emissions data from every country over a 29-year period -- but I only want to visualize data from the top five emitting countries in each given year. My code right now looks like this:
counter = 0
for i in range(30):
global_emissions = {}
for country in dataset:
'''dataset is a list of country-specific dictionaries; a key value for every year (e.g. '1991') corresponds to a float representing CO2 emissions in kt.'''
key = str(1990 + counter)
emissions = country[key]
global_emissions[country['COUNTRY']] = emissions
sorted_emissions = {}
for i in sorted(global_emissions.values()): # this loop is just here to match keys to sorted values
for k in global_emissions.keys():
if global_emissions[k] == i:
sorted_emissions[k] = global_emissions[k]
counter += 1
The sorted_emissions list for any given year then comes out as a bunch of key-value pairs in ascending order of value. At the very end are the highest-polluting countries. In the year 1990, for example, it looks like this:
...'Germany': 955310.0, 'Japan': 1090530.0, 'Russian Federation': 2163530.0, 'China': 2173360.0, 'United States': 4844520.0}
For the dictionaries that get produced for all 29 years, I want to isolate these last five elements -- but since the keys of the most-polluting nations will change from year to year, I can't just access them by their key.
I tried indexing to them with numerical slicing (i.e. trying something like sorted_emissions[-1]), but this doesn't work with dictionaries. Is there any workaround for this, or maybe just a better way to do it? Any suggestions at all would be appreciated. Thanks so much!
| [
"Because dictionaries are by default not sorted, i.e., the position of a key-value pair in a dictionary carries no meaning, you can't access any item through indexing. However, you can turn the dictionary into a list of tuples, where the keys are the first element, and the values are the second element of the tuple. Then you can write code to always access the second element in the tuple (i.e., at position [1]) and sort those values.\nI hope I understood your question correctly and this helps. :)\n"
] | [
0
] | [] | [] | [
"csv",
"dictionary",
"python"
] | stackoverflow_0074636710_csv_dictionary_python.txt |
Q:
How to consume a rabbitmq stream starting from the last message in the stream?
I'd like to implement something with a similar behaviour to MQTT's "Retained Message". IE I want to attach a consumer and immediately start reading from the most recent message sent. It looks like Rabbitmq Streams should give me what I'm looking for.
I'm a little stuck because its possible to set the offset to last (see here) which begins reading from the last block of messages. But what I am looking for is the last message.
That is: I can't see how to determine which is the last message currently in the block when I subscribe.
Is there a way to set the offset to the last message in the stream?
A:
At the moment, it is not possible to determine the last message in a specific chunk.
This is because the clients don't expose all the chunk information.
When you select last you get the last chunk. The chunk itself contains the number of messages, but this information is currently not exposed.
You are using Pika, i.e., the AMQP way, but for more control over the stream you could use the native stream clients.
See here for more details.
You can also track the message offset to restart consuming from a specific offset. See, for example, the java client
We could add that info.
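With Pika, the offset handling goes through the x-stream-offset consumer argument and the x-stream-offset header on each delivery; a rough sketch of resuming from a stored offset (queue name and last_offset are placeholders, and stream consumers need a QoS prefetch):
def on_message(ch, method, properties, body):
    offset = properties.headers.get("x-stream-offset")   # offset of this message
    # ... process body and persist offset so the consumer can resume later

channel.basic_qos(prefetch_count=100)                     # required for stream consumers
channel.basic_consume(
    queue="my-stream",
    on_message_callback=on_message,
    arguments={"x-stream-offset": last_offset + 1},       # or "first" / "last" / "next"
)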
| How to consume a rabbitmq stream starting from the last message in the stream? | I'd like to implement something with a similar behaviour to MQTT's "Retained Message". IE I want to attach a consumer and immediately start reading from the most recent message sent. It looks like Rabbitmq Streams should give me what I'm looking for.
I'm a little stuck because its possible to set the offset to last (see here) which begins reading from the last block of messages. But what I am looking for is the last message.
That is: I can't see how to determine which is the last message currently in the block when I subscribe.
Is there a way to set the offset to the last message in the stream?
| [
"At the moment, it is not possible to determine the last message in a specific chuck.\nThis is because the clients don't expose all the chunk information.\nWhen you select last you get the last chuck. The chuck itself contains the number of messages. This information atm is not exposed.\nYou are using PIKA so amqp way, but to have more control in the stream, you could use native clients that give you more control.\nSee here for more details.\nYou can also track the message offset to restart consuming from a specific offset. See, for example, the java client\nWe could add that info.\n"
] | [
0
] | [] | [] | [
"pika",
"python",
"rabbitmq"
] | stackoverflow_0074644511_pika_python_rabbitmq.txt |
Q:
How to run the whole async function in given timeout?
From the last post, the duplicate post cannot answer my question.
Right now, I have a function f1() which contains a CPU-intensive part and an async IO-intensive part. Therefore f1() itself is an async function. How can I run the whole of f1() with a given timeout? I found that the method provided in that post cannot solve my situation. With the following code, it shows RuntimeWarning: coroutine 'f1' was never awaited handle = None # Needed to break cycles when an exception occurs.
import asyncio
import time
import concurrent.futures
executor = concurrent.futures.ThreadPoolExecutor(1)
async def f1():
print("start sleep")
time.sleep(3) # simulate CPU intensive part
print("end sleep")
print("start asyncio.sleep")
await asyncio.sleep(3) # simulate IO intensive part
print("end asyncio.sleep")
async def process():
print("enter process")
loop = asyncio.get_running_loop()
await loop.run_in_executor(executor, f1)
async def main():
print("-----f1-----")
t1 = time.time()
try:
await asyncio.wait_for(process(), timeout=2)
except:
pass
t2 = time.time()
print(f"f1 cost {(t2 - t1)} s")
if __name__ == '__main__':
asyncio.run(main())
From the previous post, loop.run_in_executor only works for normal functions, not async functions.
A:
one way to do it is to make process not an async function, so it can run in another thread, and have it start an asyncio loop in the other thread to run f1.
note that starting another loops means you cannot share coroutines and futures between the two loops.
import asyncio
import time
import concurrent.futures
executor = concurrent.futures.ThreadPoolExecutor(1)
async def f1():
print("start sleep")
time.sleep(3) # simulate CPU intensive part
print("end sleep")
print("start asyncio.sleep")
await asyncio.sleep(3) # simulate IO intensive part
print("end asyncio.sleep")
def process():
print("enter process")
asyncio.run(asyncio.wait_for(f1(),2))
async def main():
print("-----f1-----")
t1 = time.time()
try:
loop = asyncio.get_running_loop()
await loop.run_in_executor(executor, process)
except:
pass
t2 = time.time()
print(f"f1 cost {(t2 - t1)} s")
if __name__ == '__main__':
asyncio.run(main())
-----f1-----
enter process
start sleep
end sleep
start asyncio.sleep
f1 cost 3.0047199726104736 s
Keep in mind that f1 must await some IO to return control to the event loop before the future can be cancelled; you cannot cancel the CPU-intensive part of the code unless it does something like await asyncio.sleep(0), which yields to the event loop momentarily. This is why the time.sleep part cannot be cancelled.
A:
I have explained the cause of the issue. You should remove or replace the time.sleep at f1 as it blocks the thread, and asyncio.wait_for cannot handle the timeout.
Regarding to the RuntimeWarning
RuntimeWarning: coroutine 'f1' was never awaited handle = None # Needed to break cycles when an exception occurs.
It occurs because the loop.run_in_executor expects a non-async function as a second argument.
| How to run the whole async function in given timeout? | From the last post, the duplicate post cannot answer my question.
Right now, I have a function f1() which contains CPU intensive part and async IO intensive part. Therefore f1() itself is an async function. How can I run the whole f1() with given timeout? I found the method provided in the post cannot solve my situation. For the following part, it shows RuntimeWarning: coroutine 'f1' was never awaited handle = None # Needed to break cycles when an exception occurs.
import asyncio
import time
import concurrent.futures
executor = concurrent.futures.ThreadPoolExecutor(1)
async def f1():
print("start sleep")
time.sleep(3) # simulate CPU intensive part
print("end sleep")
print("start asyncio.sleep")
await asyncio.sleep(3) # simulate IO intensive part
print("end asyncio.sleep")
async def process():
print("enter process")
loop = asyncio.get_running_loop()
await loop.run_in_executor(executor, f1)
async def main():
print("-----f1-----")
t1 = time.time()
try:
await asyncio.wait_for(process(), timeout=2)
except:
pass
t2 = time.time()
print(f"f1 cost {(t2 - t1)} s")
if __name__ == '__main__':
asyncio.run(main())
From previous post, loop.run_in_executor can only work for normal function not async function.
| [
"one way to do it is to make process not an async function, so it can run in another thread, and have it start an asyncio loop in the other thread to run f1.\nnote that starting another loops means you cannot share coroutines and futures between the two loops.\nimport asyncio\nimport time\nimport concurrent.futures\n\nexecutor = concurrent.futures.ThreadPoolExecutor(1)\n\n\nasync def f1():\n print(\"start sleep\")\n time.sleep(3) # simulate CPU intensive part\n print(\"end sleep\")\n\n print(\"start asyncio.sleep\")\n await asyncio.sleep(3) # simulate IO intensive part\n print(\"end asyncio.sleep\")\n\n\ndef process():\n print(\"enter process\")\n asyncio.run(asyncio.wait_for(f1(),2))\n\nasync def main():\n print(\"-----f1-----\")\n t1 = time.time()\n try:\n loop = asyncio.get_running_loop()\n await loop.run_in_executor(executor, process)\n except:\n pass\n t2 = time.time()\n print(f\"f1 cost {(t2 - t1)} s\")\n\n\nif __name__ == '__main__':\n asyncio.run(main())\n\n-----f1-----\nenter process\nstart sleep\nend sleep\nstart asyncio.sleep\nf1 cost 3.0047199726104736 s\n\nkeep in mind that you must wait for any IO to return f1 to the eventloop so the future can be cancelled, you cannot cancel the CPU-intensive part of the code unless it does something like await asyncio.sleep(0) which returns to the event-loop momentarily, which is why time.sleep cannot be cancelled.\n",
"I have explained the cause of the issue. You should remove or replace the time.sleep at f1 as it blocks the thread, and asyncio.wait_for cannot handle the timeout.\nRegarding to the RuntimeWarning\n\nRuntimeWarning: coroutine 'f1' was never awaited handle = None # Needed to break cycles when an exception occurs.\n\nIt occurs because the loop.run_in_executor expects a non-async function as a second argument.\n"
] | [
0,
0
] | [] | [] | [
"python",
"python_asyncio"
] | stackoverflow_0074652884_python_python_asyncio.txt |
Q:
pandas `pd.melt` multiindex column usage
I'm having trouble trying to write intelligible pandas, which makes me feel like I'm missing some feature or usage (probably of the pd.melt method).
I have two datasets I want to combine. Both are similar:
time indicating when the state changed
name and instance a compound identity used to uniquely identify the record entry to a thing.
Finally a single value named for the state that has changed at that time for that thing.
So an example record from each one of these dataset I want to combine would be:
dict(time=0, name="a", instance=0, state=1)
dict(time=5, name="a", instance=0, location="london")
I want to combine these two record sets into one, which has the last known state and location for each (name, instance) at each time.
[
dict(time=0, name="a", instance=0, state=1, location=np.nan),
dict(time=5, name="a", instance=0, state=1, location="london"),
]
To get to this I currently do a combination of pd.DataFrame.pivot_table, pd.DataFrame.ffill, pd.DataFrame.melt, and pd.DataFrame.reset_index. It seems to work as intended, but it feels very cumbersome/unreadable, especially once I get into using the pd.DataFrame.melt.
I feel like I'm missing some usage of the pd.DataFrame.melt function, but I'm not really sure how to apply the documentation to the dataset with pd.MultiIndex columns that I'm working with, or whether I'm missing some other pandas utility I should be using instead.
I'll update the question title with something more appropriate if it turns out melt isn't what I should be using.
Here's what I have:
import pandas as pd
states = [
dict(time=0, name="a", instance=0, state=0),
dict(time=0, name="a", instance=1, state=0),
dict(time=0, name="a", instance=2, state=0),
dict(time=0, name="b", instance=1, state=0),
dict(time=0, name="b", instance=2, state=0),
dict(time=1, name="a", instance=1, state=1),
dict(time=2, name="a", instance=2, state=1),
dict(time=2, name="b", instance=1, state=1),
]
locations = [
dict(time=0, name="a", instance=0, location="tokyo"),
dict(time=0, name="a", instance=1, location="tokyo"),
dict(time=0, name="a", instance=2, location="tokyo"),
dict(time=0, name="b", instance=1, location="tokyo"),
dict(time=0, name="b", instance=2, location="tokyo"),
dict(time=1, name="a", instance=0, location="london"),
dict(time=1, name="a", instance=2, location="london"),
dict(time=1, name="b", instance=1, location="london"),
dict(time=1, name="b", instance=2, location="london"),
dict(time=1, name="a", instance=1, location="paris"),
dict(time=2, name="a", instance=2, location="paris"),
dict(time=2, name="b", instance=1, location="paris"),
]
states = pd.DataFrame.from_dict(states)
locations = pd.DataFrame.from_dict(locations)
combined = pd.concat([states, locations], axis="index")
combined = combined.pivot_table(
index="time",
columns=["name", "instance"],
values=["state", "location"],
aggfunc="last",
)
combined = combined.ffill()
ugly_melt = combined.melt(ignore_index=False)
ugly_melt = ugly_melt.rename(columns={None: "state_status"})
ugly_melt = (
ugly_melt.reset_index()
.pivot(
index=["time", "name", "instance"],
columns=["state_status"],
values="value",
)
.reset_index()
)
print(ugly_melt)
A:
Note that combined looks like this after ffill.
location state
name a b a b
instance 0 1 2 1 2 0 1 2 1 2
time
0 tokyo tokyo tokyo tokyo tokyo 0.0 0.0 0.0 0.0 0.0
1 london paris london london london 0.0 1.0 0.0 0.0 0.0
2 london paris paris paris london 0.0 1.0 1.0 1.0 0.0
There are 2 columns, location and state. You can melt them separately and merge them back.
melt1 = pd.melt(combined["location"], value_name="location", ignore_index=False).reset_index()
melt2 = pd.melt(combined["state"], value_name="state", ignore_index=False).reset_index()
better_melt = melt1.merge(melt2, on=["time", "name", "instance"])
better_melt
time name instance location state
0 0 a 0 tokyo 0.0
1 1 a 0 london 0.0
2 2 a 0 london 0.0
3 0 a 1 tokyo 0.0
4 1 a 1 paris 1.0
5 2 a 1 paris 1.0
6 0 a 2 tokyo 0.0
7 1 a 2 london 0.0
8 2 a 2 paris 1.0
9 0 b 1 tokyo 0.0
10 1 b 1 london 0.0
11 2 b 1 paris 1.0
12 0 b 2 tokyo 0.0
13 1 b 2 london 0.0
14 2 b 2 london 0.0
A:
Datasets:
import pandas as pd
states = [
dict(time=0, name="a", instance=0, state=0),
dict(time=0, name="a", instance=1, state=0),
dict(time=0, name="a", instance=2, state=0),
dict(time=0, name="b", instance=1, state=0),
dict(time=0, name="b", instance=2, state=0),
dict(time=1, name="a", instance=1, state=1),
dict(time=2, name="a", instance=2, state=1),
dict(time=2, name="b", instance=1, state=1),
]
locations = [
dict(time=0, name="a", instance=0, location="tokyo"),
dict(time=0, name="a", instance=1, location="tokyo"),
dict(time=0, name="a", instance=2, location="tokyo"),
dict(time=0, name="b", instance=1, location="tokyo"),
dict(time=0, name="b", instance=2, location="tokyo"),
dict(time=1, name="a", instance=0, location="london"),
dict(time=1, name="a", instance=2, location="london"),
dict(time=1, name="b", instance=1, location="london"),
dict(time=1, name="b", instance=2, location="london"),
dict(time=1, name="a", instance=1, location="paris"),
dict(time=2, name="a", instance=2, location="paris"),
dict(time=2, name="b", instance=1, location="paris"),
]
It seems like time, name and instance are the levels that you use to index your data. It makes sense to add them to the index using set_index:
states = (
pd.DataFrame.from_dict(states)
.set_index(["time", "name", "instance"])
)
locations = (
pd.DataFrame.from_dict(locations)
.set_index(["time", "name", "instance"])
)
Once the MultiIndex is in place, you can concatenate the states and locations along the columns. This will leave some NaNs in the state column. Sort the index first so the entries are arranged by time first, then group the data based on the same name and instance, and finally perform a forward-fill.
combined = (
pd.concat([states, locations], axis=1)
.sort_index()
.groupby(["name", "instance"])
.ffill()
)
The result is slightly different from yours. What I get:
location state
time name instance
0 a 0 tokyo 0.0
1 tokyo 0.0
2 tokyo 0.0
b 1 tokyo 0.0
2 tokyo 0.0
1 a 0 london 0.0
1 paris 1.0
2 london 0.0
b 1 london 0.0
2 london 0.0
2 a 2 paris 1.0
b 1 paris 1.0
What you get:
state_status time name instance location state
0 0 a 0 tokyo 0.0
1 0 a 1 tokyo 0.0
2 0 a 2 tokyo 0.0
3 0 b 1 tokyo 0.0
4 0 b 2 tokyo 0.0
5 1 a 0 london 0.0
6 1 a 1 paris 1.0
7 1 a 2 london 0.0
8 1 b 1 london 0.0
9 1 b 2 london 0.0
10 2 a 0 london 0.0
11 2 a 1 paris 1.0
12 2 a 2 paris 1.0
13 2 b 1 paris 1.0
14 2 b 2 london 0.0
First of all, I get a MultiIndex DataFrame. If you don't like it, just reset_index().
Most importantly, you get extra rows that I don't have, for example when (time, name, instance) = (2, a, 0). None of the input data has this combination of values, so that's why it's not present in my result. It is in yours because of how pivot_table works. It might be a desirable behaviour or not, up to you to decide.
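If the extra rows are actually wanted (a last-known value for every time for every name/instance, like in your output), one possible sketch (assuming an unstack/ffill/stack round-trip is acceptable; passing lists of levels to unstack/stack is supported) is:
filled = (
    pd.concat([states, locations], axis=1)
    .sort_index()
    .unstack(["name", "instance"])  # one column per (field, name, instance)
    .ffill()                        # forward-fill along time
    .stack(["name", "instance"])    # back to (time, name, instance) rows
)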
A:
You can simply do a join and filter on both datasets, keeping in mind the time constraint (you can only join on a previous/current state, not a future state).
This removes the need to perform a forward fill with ffill(), and joins and filters are easier to comprehend than pd.melt methods.
Your code for initialization
states = [
dict(time=0, name="a", instance=0, state=0),
dict(time=0, name="a", instance=1, state=0),
dict(time=0, name="a", instance=2, state=0),
dict(time=0, name="b", instance=1, state=0),
dict(time=0, name="b", instance=2, state=0),
dict(time=1, name="a", instance=1, state=1),
dict(time=2, name="a", instance=2, state=1),
dict(time=2, name="b", instance=1, state=1),
]
locations = [
dict(time=0, name="a", instance=0, location="tokyo"),
dict(time=0, name="a", instance=1, location="tokyo"),
dict(time=0, name="a", instance=2, location="tokyo"),
dict(time=0, name="b", instance=1, location="tokyo"),
dict(time=0, name="b", instance=2, location="tokyo"),
dict(time=1, name="a", instance=0, location="london"),
dict(time=1, name="a", instance=2, location="london"),
dict(time=1, name="b", instance=1, location="london"),
dict(time=1, name="b", instance=2, location="london"),
dict(time=1, name="a", instance=1, location="paris"),
dict(time=2, name="a", instance=2, location="paris"),
dict(time=2, name="b", instance=1, location="paris"),
]
Implementation
import pandas as pd
"""
Steps:
1. Convert to dataframe (Rename state time as state_time, keep location time as time)
2. Merge both dataframe together
3. Filter state time <= location time (since location uses current/previous state)
4. Filter for latest state time (since location must remember the latest state and not all previous states)
"""
# Step 1
states = pd.DataFrame(states).rename(columns={"time": "state_time"})
locations = pd.DataFrame(locations)
# Step 2
merged_df = pd.merge(locations, states, on=["name", "instance"])
# Step 3
merged_df = merged_df[merged_df["state_time"] <= merged_df["time"]]
# Step 4
merged_df = merged_df\
.sort_values(["time", "name", "instance", "state_time"])\
.drop_duplicates(["time", "name", "instance"], keep="last")\
.reset_index(drop=True)\
.drop(columns=["state_time"])
This results in the following merged_df
time name instance location state
0 0 a 0 tokyo 0
1 0 a 1 tokyo 0
2 0 a 2 tokyo 0
3 0 b 1 tokyo 0
4 0 b 2 tokyo 0
5 1 a 0 london 0
6 1 a 1 paris 1
7 1 a 2 london 0
8 1 b 1 london 0
9 1 b 2 london 0
10 2 a 2 paris 1
11 2 b 1 paris 1
The length of the result follows from the location data; if you want every name-instance-location to have a time, you can do an outer join beforehand.
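For completeness, one reading of that outer join (an assumption on my part; note that the Step 3 filter would then need to cope with missing state_time values) is simply to make the Step 2 merge an outer one:
# Step 2 (variant): keep name/instance pairs that exist in only one of the inputs
merged_df = pd.merge(locations, states, on=["name", "instance"], how="outer")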
A:
The pandas.melt method is a useful tool for reshaping data frames. It can be used to convert wide-form data frames, with multiple columns, into long-form data frames, with a single column for the variable and a single column for the value.
In your case, you can use pandas.melt to reshape your data frame to have a single column for the name, instance, state, and location variables, and a single column for the corresponding values. This will allow you to easily join the two data frames together using the pandas.merge method.
Here's an example of how you could do this:
states = [
dict(time=0, name="a", instance=0, state=0),
dict(time=0, name="a", instance=1, state=0),
dict(time=0, name="a", instance=2, state=0),
dict(time=0, name="b", instance=1, state=0),
dict(time=0, name="b", instance=2, state=0),
dict(time=1, name="a", instance=1, state=1),
dict(time=2, name="a", instance=2, state=1),
dict(time=2, name="b", instance=1, state=1),
]
locations = [
dict(time=0, name="a", instance=0, location="tokyo"),
dict(time=0, name="a", instance=1, location="tokyo"),
dict(time=0, name="a", instance=2, location="tokyo"),
dict(time=0, name="b", instance=1, location="tokyo"),
dict(time=0, name="b", instance=2, location="tokyo"),
dict(time=1, name="a", instance=0, location="london"),
dict(time=1, name="a", instance=2, location="london"),
dict(time=1, name="b", instance=1, location="london"),
| pandas `pd.melt` multiindex column usage | I'm having troubles trying to write intelligible pandas which makes me feel like I'm missing some feature or usage (probably of the pd.melt method).
I have two datasets I want to combine. Both are similar:
time indicating when the state changed
name and instance a compound identity used to uniquely identify the record entry to a thing.
Finally a single value named for the state that has changed at that time for that thing.
So an example record from each one of these dataset I want to combine would be:
dict(time=0, name="a", instance=0, state=1)
dict(time=5, name="a", instance=0, location="london")
I want to combine these two record sets into one, which have the last known state and location for each (name, instance) at each time.
[
dict(time=0, name="a", instance=0, state=1, location=np.nan),
dict(time=5, name="a", instance=0, state=1, location="london"),
]
To get to this I currently do a combination of pd.DataFrame.pivot_table, pd.DataFrame.ffill, pd.DataFrame.melt, and pd.DataFrame.reset_index. It seems to work as intended, but it feels very cumbersome/unreadable, especially once I get into using the pd.DataFrame.melt.
I feel like I'm missing some usage for the pd.DataFrame.melt function, but I'm not really sure how to apply the documentation to the dataset with a pd.MultiIndex columns, that I'm working with, or if I'm missing some other pandas utility I should be using instead.
I'll update the question title with something more appropriate if it turns out melt isn't what I should be using.
Here's what I have:
import pandas as pd
states = [
dict(time=0, name="a", instance=0, state=0),
dict(time=0, name="a", instance=1, state=0),
dict(time=0, name="a", instance=2, state=0),
dict(time=0, name="b", instance=1, state=0),
dict(time=0, name="b", instance=2, state=0),
dict(time=1, name="a", instance=1, state=1),
dict(time=2, name="a", instance=2, state=1),
dict(time=2, name="b", instance=1, state=1),
]
locations = [
dict(time=0, name="a", instance=0, location="tokyo"),
dict(time=0, name="a", instance=1, location="tokyo"),
dict(time=0, name="a", instance=2, location="tokyo"),
dict(time=0, name="b", instance=1, location="tokyo"),
dict(time=0, name="b", instance=2, location="tokyo"),
dict(time=1, name="a", instance=0, location="london"),
dict(time=1, name="a", instance=2, location="london"),
dict(time=1, name="b", instance=1, location="london"),
dict(time=1, name="b", instance=2, location="london"),
dict(time=1, name="a", instance=1, location="paris"),
dict(time=2, name="a", instance=2, location="paris"),
dict(time=2, name="b", instance=1, location="paris"),
]
states = pd.DataFrame.from_dict(states)
locations = pd.DataFrame.from_dict(locations)
combined = pd.concat([states, locations], axis="index")
combined = combined.pivot_table(
index="time",
columns=["name", "instance"],
values=["state", "location"],
aggfunc="last",
)
combined = combined.ffill()
ugly_melt = combined.melt(ignore_index=False)
ugly_melt = ugly_melt.rename(columns={None: "state_status"})
ugly_melt = (
ugly_melt.reset_index()
.pivot(
index=["time", "name", "instance"],
columns=["state_status"],
values="value",
)
.reset_index()
)
print(ugly_melt)
| [
"Note that combined look like this after ffill.\n location state \nname a b a b \ninstance 0 1 2 1 2 0 1 2 1 2\ntime \n0 tokyo tokyo tokyo tokyo tokyo 0.0 0.0 0.0 0.0 0.0\n1 london paris london london london 0.0 1.0 0.0 0.0 0.0\n2 london paris paris paris london 0.0 1.0 1.0 1.0 0.0\n\nThere are 2 columns, location and state. You can melt them separately and merge them back.\nmelt1 = pd.melt(combined[\"location\"], value_name=\"location\", ignore_index=False).reset_index()\nmelt2 = pd.melt(combined[\"state\"], value_name=\"state\", ignore_index=False).reset_index()\nbetter_melt = melt1.merge(melt2, on=[\"time\", \"name\", \"instance\"])\nbetter_melt\n\n time name instance location state\n0 0 a 0 tokyo 0.0\n1 1 a 0 london 0.0\n2 2 a 0 london 0.0\n3 0 a 1 tokyo 0.0\n4 1 a 1 paris 1.0\n5 2 a 1 paris 1.0\n6 0 a 2 tokyo 0.0\n7 1 a 2 london 0.0\n8 2 a 2 paris 1.0\n9 0 b 1 tokyo 0.0\n10 1 b 1 london 0.0\n11 2 b 1 paris 1.0\n12 0 b 2 tokyo 0.0\n13 1 b 2 london 0.0\n14 2 b 2 london 0.0\n\n",
"Datasets:\nimport pandas as pd\n\nstates = [\n dict(time=0, name=\"a\", instance=0, state=0),\n dict(time=0, name=\"a\", instance=1, state=0),\n dict(time=0, name=\"a\", instance=2, state=0),\n dict(time=0, name=\"b\", instance=1, state=0),\n dict(time=0, name=\"b\", instance=2, state=0),\n dict(time=1, name=\"a\", instance=1, state=1),\n dict(time=2, name=\"a\", instance=2, state=1),\n dict(time=2, name=\"b\", instance=1, state=1),\n]\n\nlocations = [\n dict(time=0, name=\"a\", instance=0, location=\"tokyo\"),\n dict(time=0, name=\"a\", instance=1, location=\"tokyo\"),\n dict(time=0, name=\"a\", instance=2, location=\"tokyo\"),\n dict(time=0, name=\"b\", instance=1, location=\"tokyo\"),\n dict(time=0, name=\"b\", instance=2, location=\"tokyo\"),\n dict(time=1, name=\"a\", instance=0, location=\"london\"),\n dict(time=1, name=\"a\", instance=2, location=\"london\"),\n dict(time=1, name=\"b\", instance=1, location=\"london\"),\n dict(time=1, name=\"b\", instance=2, location=\"london\"),\n dict(time=1, name=\"a\", instance=1, location=\"paris\"),\n dict(time=2, name=\"a\", instance=2, location=\"paris\"),\n dict(time=2, name=\"b\", instance=1, location=\"paris\"),\n]\n\nIt seems like time, name and instance are the levels that you use to index your data. It makes sense to add them to the index using set_index:\nstates = (\n pd.DataFrame.from_dict(states)\n .set_index([\"time\", \"name\", \"instance\"])\n)\nlocations = (\n pd.DataFrame.from_dict(locations)\n .set_index([\"time\", \"name\", \"instance\"])\n)\n\nOnce the MultiIndex is in place, you can concatenate the states and locations along the columns. This will leave some NaNs in the state column. Sort the index first so the entries are arranged by time first, then group the data based on the same name and instance, and finally perform a forward-fill.\ncombined = (\n pd.concat([states, locations], axis=1)\n .sort_index()\n .groupby([\"name\", \"instance\"])\n .ffill()\n)\n\nThe result is slightly different from yours. What I get:\n location state\ntime name instance \n0 a 0 tokyo 0.0\n 1 tokyo 0.0\n 2 tokyo 0.0\n b 1 tokyo 0.0\n 2 tokyo 0.0\n1 a 0 london 0.0\n 1 paris 1.0\n 2 london 0.0\n b 1 london 0.0\n 2 london 0.0\n2 a 2 paris 1.0\n b 1 paris 1.0\n\nWhat you get:\nstate_status time name instance location state\n0 0 a 0 tokyo 0.0\n1 0 a 1 tokyo 0.0\n2 0 a 2 tokyo 0.0\n3 0 b 1 tokyo 0.0\n4 0 b 2 tokyo 0.0\n5 1 a 0 london 0.0\n6 1 a 1 paris 1.0\n7 1 a 2 london 0.0\n8 1 b 1 london 0.0\n9 1 b 2 london 0.0\n10 2 a 0 london 0.0\n11 2 a 1 paris 1.0\n12 2 a 2 paris 1.0\n13 2 b 1 paris 1.0\n14 2 b 2 london 0.0\n\nFirst of all, I get a MultiIndex DataFrame. If you don't like it, just reset_index().\nMost importantly, you get extra rows that I don't have, for example when (time, name, instance) = (2, a, 0). None of the input data has this combination of values, so that's why it's not present in my result. It is in yours because of how pivot_table works. It might be a desirable behaviour or not, up to you to decide.\n",
"You can simply do a join and filter on both dataset, keeping in mind the time constraint (can only join on previous/current state and not future state).\nThis removes the need to perform forward fill ffill() and using joins and filter are easier to comprehend than pd.melt methods.\nYour code for initialization\nstates = [\n dict(time=0, name=\"a\", instance=0, state=0),\n dict(time=0, name=\"a\", instance=1, state=0),\n dict(time=0, name=\"a\", instance=2, state=0),\n dict(time=0, name=\"b\", instance=1, state=0),\n dict(time=0, name=\"b\", instance=2, state=0),\n dict(time=1, name=\"a\", instance=1, state=1),\n dict(time=2, name=\"a\", instance=2, state=1),\n dict(time=2, name=\"b\", instance=1, state=1),\n]\n\nlocations = [\n dict(time=0, name=\"a\", instance=0, location=\"tokyo\"),\n dict(time=0, name=\"a\", instance=1, location=\"tokyo\"),\n dict(time=0, name=\"a\", instance=2, location=\"tokyo\"),\n dict(time=0, name=\"b\", instance=1, location=\"tokyo\"),\n dict(time=0, name=\"b\", instance=2, location=\"tokyo\"),\n dict(time=1, name=\"a\", instance=0, location=\"london\"),\n dict(time=1, name=\"a\", instance=2, location=\"london\"),\n dict(time=1, name=\"b\", instance=1, location=\"london\"),\n dict(time=1, name=\"b\", instance=2, location=\"london\"),\n dict(time=1, name=\"a\", instance=1, location=\"paris\"),\n dict(time=2, name=\"a\", instance=2, location=\"paris\"),\n dict(time=2, name=\"b\", instance=1, location=\"paris\"),\n]\n\nImplementation\nimport pandas as pd\n\n\"\"\"\nSteps:\n1. Convert to dataframe (Rename state time as state_time, keep location time as time)\n2. Merge both dataframe together\n3. Filter state time <= location time (since location uses current/previous state)\n4. Filter for latest state time (since location must remember the latest state and not all previous states)\n\"\"\"\n\n# Step 1\nstates = pd.DataFrame(states).rename(columns={\"time\": \"state_time\"})\nlocations = pd.DataFrame(locations)\n\n# Step 2\nmerged_df = pd.merge(locations, states, on=[\"name\", \"instance\"])\n\n# Step 3\nmerged_df = merged_df[merged_df[\"state_time\"] <= merged_df[\"time\"]]\n\n# Step 4\nmerged_df = merged_df\\\n .sort_values([\"time\", \"name\", \"instance\", \"state_time\"])\\\n .drop_duplicates([\"time\", \"name\", \"instance\"], keep=\"last\")\\\n .reset_index(drop=True)\\\n .drop(columns=[\"state_time\"])\n\nThis results in the following merged_df\n time name instance location state\n0 0 a 0 tokyo 0\n1 0 a 1 tokyo 0\n2 0 a 2 tokyo 0\n3 0 b 1 tokyo 0\n4 0 b 2 tokyo 0\n5 1 a 0 london 0\n6 1 a 1 paris 1\n7 1 a 2 london 0\n8 1 b 1 london 0\n9 1 b 2 london 0\n10 2 a 2 paris 1\n11 2 b 1 paris 1\n\nThe length of results follows from location data, if you want every name-instance-location to have a time, you can do an outer join beforehand.\n",
"The pandas.melt method is a useful tool for reshaping data frames. It can be used to convert wide-form data frames, with multiple columns, into long-form data frames, with a single column for the variable and a single column for the value.\nIn your case, you can use pandas.melt to reshape your data frame to have a single column for the name, instance, state, and location variables, and a single column for the corresponding values. This will allow you to easily join the two data frames together using the pandas.merge method.\nHere's an example of how you could do this:\n\nstates = [\n dict(time=0, name=\"a\", instance=0, state=0),\n dict(time=0, name=\"a\", instance=1, state=0),\n dict(time=0, name=\"a\", instance=2, state=0),\n dict(time=0, name=\"b\", instance=1, state=0),\n dict(time=0, name=\"b\", instance=2, state=0),\n dict(time=1, name=\"a\", instance=1, state=1),\n dict(time=2, name=\"a\", instance=2, state=1),\n dict(time=2, name=\"b\", instance=1, state=1),\n]\n\nlocations = [\n dict(time=0, name=\"a\", instance=0, location=\"tokyo\"),\n dict(time=0, name=\"a\", instance=1, location=\"tokyo\"),\n dict(time=0, name=\"a\", instance=2, location=\"tokyo\"),\n dict(time=0, name=\"b\", instance=1, location=\"tokyo\"),\n dict(time=0, name=\"b\", instance=2, location=\"tokyo\"),\n dict(time=1, name=\"a\", instance=0, location=\"london\"),\n dict(time=1, name=\"a\", instance=2, location=\"london\"),\n dict(time=1, name=\"b\", instance=1, location=\"london\"),\n\n\n\n"
] | [
0,
0,
0,
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074575019_pandas_python.txt |
Q:
Executor Lost Failure (executor ID: 1): Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages
I'm using AWS Glue to run spark jobs.
My flow is more or less like:
the client has defined rules (hundreds of them)
the client selects rules to run and provides input data
the job takes the data and executes each rule on that data
Rules are defined as python files, and the system executes them by running:
for rule in rules:
result = importlib.import_module(module).handle(glue_context, dataframe, global_params, rule_params)
This is working fine when I'm running them in batches of 10-15.
When I'm executing them in bigger batches (25-50) I'm getting errors.
Data set is not huge - 70k rows, 200 columns
For this dataset Glue is configured to use max 15 DPUs and G.1X worker type
One of those errors:
Driver:
22/10/10 11:03:27 ERROR GlueExceptionAnalysisListener: [Glue Exception Analysis] {"Event":"GlueExceptionAnalysisTaskFailed","Timestamp":1665399807544,"Failure Reason":"Executor Lost Failure (executor ID: 1): Some(Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.)","Stack Trace":[],"Task Launch Time":1665399760009,"Stage ID":49,"Stage Attempt ID":0,"Task Type":"ShuffleMapTask","Executor ID":"1","Task ID":385}
And executor error:
ShuffleMapStage 49 (fromRDD at DynamicFrame.scala:313) failed in 82.192 s due to Job aborted due to stage failure: Task 3 in stage 49.0 failed 4 times, most recent failure: Lost task 3.3 in stage 49.0 (TID 413) (10.10.1.184 executor 7): java.lang.StackOverflowError
at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2830)
at java.io.ObjectInputStream$BlockDataInputStream.readInt(ObjectInputStream.java:3331)
at java.io.ObjectInputStream.readHandle(ObjectInputStream.java:1783)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1844)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2186)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1669)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2431)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2355)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2213)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1669)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:503)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:461)
at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:484)
at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1184)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2322)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2213)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1669)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2431)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2355)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2213)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1669)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2431)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2355)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2213)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1669)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:503)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:461)
at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:484)
at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498
Or another one:
org.apache.spark.SparkException: Job aborted due to stage failure:
ShuffleMapStage 116 (fromRDD at DynamicFrame.scala:313) has failed the maximum allowable number of times: 4.
Most recent failure reason: org.apache.spark.shuffle.FetchFailedException: The relative remote executor(Id: 7), which maintains the block data to fetch is dead. at
org.apache.spark.storage.ShuffleBlockFetcherIterator.throwFetchFailedException(ShuffleBlockFetcherIterator.scala:796) at
org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:711) at
org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:70) at
org.apache.spark.util.CompletionIterator.next(CompletionIterator.scala:29) at
scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:480) at
scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:486) at
scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:454) at
org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:31) at
org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) at
scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:454) at
org.apache.spark.sql.execution.UnsafeExternalRowSorter.sort(UnsafeExternalRowSorter.java:225) at
org.apache.spark.sql.execution.SortExec.$anonfun$doExecute$1(SortExec.scala:132) at
org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898) at
org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898) at
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373) at
org.apache.spark.rdd.RDD.iterator(RDD.scala:337) at org.apache.spark.rdd.ZippedPartitionsRDD2.compute(ZippedPartitionsRDD.scala:89) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373) at
From a performance point of view, I don't see any heavily stressed components.
One change I did make is '--conf': 'spark.task.cpus=2'; this allows me to run a few more rules, but it still fails after some time.
Full error logs: https://mega.nz/file/9QshCAAL#c4U7TSBJlEhD6SSrnhMXp1Wp8P9CuYXV9P7il4WqUqo
Any clues/hints on what I should check/modify?
A:
If you look closely at your error logs you see that you've got a java.lang.StackOverflowError at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:3109). That seems to hint at the fact that the problem is related to your input data.
Are the objects that you're reading in deeply nested? This might make the default stack size (256kB - 1MB, depending on your JVM) not large enough.
I'm not really experienced with pyspark, but this SO post shows how to do it in pyspark. Start out with 4MB and try increasing it slowly if needed. Possibly you'll need to increase it for your executors as well (just add a second config with executor instead of driver in there if needed)
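For reference, the thread stack size is a JVM option, so the general shape of the change would be something like the lines below (4m is only a starting value, and the exact way to pass extra --conf entries to a Glue job may differ from plain spark-submit):
spark.driver.extraJavaOptions=-Xss4m
spark.executor.extraJavaOptions=-Xss4m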
| Executor Lost Failure (executor ID: 1): Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages | I'm using AWS Glue to run spark jobs.
My flow is more or less like:
client have defined rules (hundreds of them)
client select rules to run and provides input data
job takes data and execute each rule on that data
Rules are definned as python files, and system executes them by running:
for rule in rules:
result = importlib.import_module(module).handle(glue_context, dataframe, global_params, rule_params)
This is working fine when I'm running them in batch of 10-15.
When I'm executing them in bigger batches (25-50) I'm getting errors.
Data set is not huge - 70k rows, 200 columns
For this dataset Glue is configured to use max 15 DPUs and G.1X worker type
One of those errors:
Driver:
22/10/10 11:03:27 ERROR GlueExceptionAnalysisListener: [Glue Exception Analysis] {"Event":"GlueExceptionAnalysisTaskFailed","Timestamp":1665399807544,"Failure Reason":"Executor Lost Failure (executor ID: 1): Some(Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.)","Stack Trace":[],"Task Launch Time":1665399760009,"Stage ID":49,"Stage Attempt ID":0,"Task Type":"ShuffleMapTask","Executor ID":"1","Task ID":385}
And executor error:
ShuffleMapStage 49 (fromRDD at DynamicFrame.scala:313) failed in 82.192 s due to Job aborted due to stage failure: Task 3 in stage 49.0 failed 4 times, most recent failure: Lost task 3.3 in stage 49.0 (TID 413) (10.10.1.184 executor 7): java.lang.StackOverflowError
at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2830)
at java.io.ObjectInputStream$BlockDataInputStream.readInt(ObjectInputStream.java:3331)
at java.io.ObjectInputStream.readHandle(ObjectInputStream.java:1783)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1844)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2186)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1669)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2431)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2355)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2213)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1669)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:503)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:461)
at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:484)
at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1184)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2322)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2213)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1669)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2431)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2355)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2213)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1669)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2431)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2355)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2213)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1669)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:503)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:461)
at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:484)
at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498
Or another one:
org.apache.spark.SparkException: Job aborted due to stage failure:
ShuffleMapStage 116 (fromRDD at DynamicFrame.scala:313) has failed the maximum allowable number of times: 4.
Most recent failure reason: org.apache.spark.shuffle.FetchFailedException: The relative remote executor(Id: 7), which maintains the block data to fetch is dead. at
org.apache.spark.storage.ShuffleBlockFetcherIterator.throwFetchFailedException(ShuffleBlockFetcherIterator.scala:796) at
org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:711) at
org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:70) at
org.apache.spark.util.CompletionIterator.next(CompletionIterator.scala:29) at
scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:480) at
scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:486) at
scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:454) at
org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:31) at
org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) at
scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:454) at
org.apache.spark.sql.execution.UnsafeExternalRowSorter.sort(UnsafeExternalRowSorter.java:225) at
org.apache.spark.sql.execution.SortExec.$anonfun$doExecute$1(SortExec.scala:132) at
org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898) at
org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898) at
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373) at
org.apache.spark.rdd.RDD.iterator(RDD.scala:337) at org.apache.spark.rdd.ZippedPartitionsRDD2.compute(ZippedPartitionsRDD.scala:89) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373) at
From performance point of view, I don't see any very stressed components:
One change i did is: '--conf': 'spark.task.cpus=2' this allows me to run a little bit more rules, but stil fails after some time.
Full error logs: https://mega.nz/file/9QshCAAL#c4U7TSBJlEhD6SSrnhMXp1Wp8P9CuYXV9P7il4WqUqo
Any clues/hints what should i check/modify ?
| [
"If you look closely at your error logs you see that you've got a java.lang.StackOverflowError at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:3109). That seems to hint at the fact that the problem is related to your input data.\nAre the objects that you're reading in deeply nested? This might make the default stack size (256kB - 1MB, depending on your JVM) not large enough.\nI'm not really experienced with pyspark, but this SO post shows how to do it in pyspark. Start out with 4MB and try increasing it slowly if needed. Possibly you'll need to increase it for your executors as well (just add a second config with executor instead of driver in there if needed)\n"
] | [
1
] | [] | [] | [
"apache_spark",
"aws_glue",
"java",
"pyspark",
"python"
] | stackoverflow_0074015450_apache_spark_aws_glue_java_pyspark_python.txt |
Q:
Print out array in table widget
How do I print out a 1D array in a table widget? I have an array Sum_main (float data) and a table table_Sum. The table has 1 column and 5 rows.
I tried this:
item=self.ui.table_Sum(str(Sum_main))
for row in range(5):
self.ui.table_Sum.setItem(row, 0, self.ui.table_Sum.item(str(Sum_main[row][0])))
But I get the error: TypeError: 'QTableWidget' object is not callable.
I have a textEdit and I call it using self.ui.textEdit.
When I changed the code to this, my array prints out fully in each row, with square brackets ([]) at the start and end:
for row in range(5):
item=QTableWidgetItem(str(Sum_main))
self.ui.table_Sum.setItem(row, 0, item)
A:
Try this:
for row in range(5):
item=QTableWidgetItem(str(Sum_main[row]))
self.ui.table_Sum.setItem(row, 0, item)
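As a follow-up (a small sketch, assuming Sum_main holds plain floats or NumPy scalars), you can also format each value explicitly so that no brackets or long decimal tails appear:
for row in range(5):
    # format the scalar to two decimals, e.g. "3.14"
    item = QTableWidgetItem(f"{Sum_main[row]:.2f}")
    self.ui.table_Sum.setItem(row, 0, item)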
| Print out array in table widget | How to print out 1d array in table widget? I have array Sum_main(float data) and table table_Sum. Table has 1 col and 5 rows.
I tried this:
item=self.ui.table_Sum(str(Sum_main))
for row in range(5):
self.ui.table_Sum.setItem(row, 0, self.ui.table_Sum.item(str(Sum_main[row][0])))
But get error: TypeError: 'QTableWidget' object is not callable.
I have textEdit but I call them using self.ui.textEdit.
When I changed code to this, my array print out fully in each row with square brackets at start and end([]):
for row in range(5):
item=QTableWidgetItem(str(Sum_main))
self.ui.table_Sum.setItem(row, 0, item)
| [
"Try this:\nfor row in range(5): \n item=QTableWidgetItem(str(Sum_main[row])) \n self.ui.table_Sum.setItem(row, 0, item) \n\n"
] | [
0
] | [] | [] | [
"python",
"qt",
"qt_designer"
] | stackoverflow_0074641616_python_qt_qt_designer.txt |
Q:
pytorch lightning "got an unexpected keyword argument 'weights_summary'"
I have been dealing with an error when trying to learn Google's "temporal fusion transformer" algorithm in Anaconda Spyder 5.1.5.
It is very important for me to solve this error, so any help would be greatly appreciated.
The example I am following is at the link below:
https://pytorch-forecasting.readthedocs.io/en/latest/tutorials/stallion.html
In the example, when I run the code below, I get the error
study = optimize_hyperparameters(
train_dataloader,
val_dataloader,
model_path="optuna_test",
n_trials=200,
max_epochs=50,
gradient_clip_val_range=(0.01, 1.0),
hidden_size_range=(8, 128),
hidden_continuous_size_range=(8, 128),
attention_head_size_range=(1, 4),
learning_rate_range=(0.001, 0.1),
dropout_range=(0.1, 0.3),
trainer_kwargs=dict(limit_train_batches=30),
reduce_on_plateau_patience=4,
use_learning_rate_finder=False # use Optuna to find ideal learning rate or use in-built learning rate finder
)
Here is the error below
A new study created in memory with name: no-name-fe7e21ce-3034-4679-b60a-ee4d5c9a4db5
[W 2022-10-21 19:36:49,382] Trial 0 failed because of the following error: TypeError("__init__() got an unexpected keyword argument 'weights_summary'")
Traceback (most recent call last):
File "C:\Users\omer\anaconda3\lib\site-packages\optuna\study\_optimize.py", line 196, in _run_trial
value_or_values = func(trial)
File "C:\Users\omer\anaconda3\lib\site-packages\pytorch_forecasting\models\temporal_fusion_transformer\tuning.py", line 150, in objective
trainer = pl.Trainer(
File "C:\Users\omer\anaconda3\lib\site-packages\pytorch_lightning\utilities\argparse.py", line 345, in insert_env_defaults
return fn(self, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'weights_summary'
Traceback (most recent call last):
Input In [3] in <cell line: 1>
study = optimize_hyperparameters(
File ~\anaconda3\lib\site-packages\pytorch_forecasting\models\temporal_fusion_transformer\tuning.py:217 in optimize_hyperparameters
study.optimize(objective, n_trials=n_trials, timeout=timeout)
File ~\anaconda3\lib\site-packages\optuna\study\study.py:419 in optimize
_optimize(
File ~\anaconda3\lib\site-packages\optuna\study\_optimize.py:66 in _optimize
_optimize_sequential(
File ~\anaconda3\lib\site-packages\optuna\study\_optimize.py:160 in _optimize_sequential
frozen_trial = _run_trial(study, func, catch)
File ~\anaconda3\lib\site-packages\optuna\study\_optimize.py:234 in _run_trial
raise func_err
File ~\anaconda3\lib\site-packages\optuna\study\_optimize.py:196 in _run_trial
value_or_values = func(trial)
File ~\anaconda3\lib\site-packages\pytorch_forecasting\models\temporal_fusion_transformer\tuning.py:150 in objective
trainer = pl.Trainer(
File ~\anaconda3\lib\site-packages\pytorch_lightning\utilities\argparse.py:345 in insert_env_defaults
return fn(self, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'weights_summary'
What is the problem with the code? Can anyone help me, please?
A:
So, I had the same problem as you.
I suggest you find the "weights_summary" variable in your code.
I use a .yaml file and pass the parameters of pytorch_lightning.Trainer automatically using Hydra; I also use strategy=DDPStrategy(find~).
I just realized there was weights_summary in the .yaml file;
the structure was
trainer:
  _target_: ~~
  ~~:
    weights_summary : "top"
and I removed weights_summary from it, and the problem was solved.
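For what it is worth, my understanding is that weights_summary was removed from Trainer in recent pytorch_lightning releases; treat the exact argument names below as an assumption to verify against your installed version, but the replacement looks roughly like:
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelSummary

# rough equivalent of weights_summary="top" in newer pytorch_lightning
trainer = Trainer(
    enable_model_summary=True,
    callbacks=[ModelSummary(max_depth=1)],
)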
A:
So it seems like there is an incompatibility issue with the pytorch_lightning version that you are using. Your version is probably too advanced.
I'm using pytorch_lightning v1.5 and pytorch_forecasting 0.10.2, and it works.
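If you want to try that downgrade, something like this should do it (the pins below are just the combination that works for me, adjust as needed):
pip install "pytorch-lightning==1.5.*" "pytorch-forecasting==0.10.2"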
| pytorch lightning "got an unexpected keyword argument 'weights_summary'" | I have been dealing an error when trying to learn Google "temporal fusion transformer" algorithm in anaconda spyder 5.1.5.
Guys, it is very important for me to solve this error. Somebody should say something. I will be very glad.
The example which i use in the link below;
https://pytorch-forecasting.readthedocs.io/en/latest/tutorials/stallion.html
In example, when i come to run the code which i mention below, i got the error
study = optimize_hyperparameters(
train_dataloader,
val_dataloader,
model_path="optuna_test",
n_trials=200,
max_epochs=50,
gradient_clip_val_range=(0.01, 1.0),
hidden_size_range=(8, 128),
hidden_continuous_size_range=(8, 128),
attention_head_size_range=(1, 4),
learning_rate_range=(0.001, 0.1),
dropout_range=(0.1, 0.3),
trainer_kwargs=dict(limit_train_batches=30),
reduce_on_plateau_patience=4,
use_learning_rate_finder=False # use Optuna to find ideal learning rate or use in-built learning rate finder
)
Here is the error below
A new study created in memory with name: no-name-fe7e21ce-3034-4679-b60a-ee4d5c9a4db5
[W 2022-10-21 19:36:49,382] Trial 0 failed because of the following error: TypeError("__init__() got an unexpected keyword argument 'weights_summary'")
Traceback (most recent call last):
File "C:\Users\omer\anaconda3\lib\site-packages\optuna\study\_optimize.py", line 196, in _run_trial
value_or_values = func(trial)
File "C:\Users\omer\anaconda3\lib\site-packages\pytorch_forecasting\models\temporal_fusion_transformer\tuning.py", line 150, in objective
trainer = pl.Trainer(
File "C:\Users\omer\anaconda3\lib\site-packages\pytorch_lightning\utilities\argparse.py", line 345, in insert_env_defaults
return fn(self, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'weights_summary'
Traceback (most recent call last):
Input In [3] in <cell line: 1>
study = optimize_hyperparameters(
File ~\anaconda3\lib\site-packages\pytorch_forecasting\models\temporal_fusion_transformer\tuning.py:217 in optimize_hyperparameters
study.optimize(objective, n_trials=n_trials, timeout=timeout)
File ~\anaconda3\lib\site-packages\optuna\study\study.py:419 in optimize
_optimize(
File ~\anaconda3\lib\site-packages\optuna\study\_optimize.py:66 in _optimize
_optimize_sequential(
File ~\anaconda3\lib\site-packages\optuna\study\_optimize.py:160 in _optimize_sequential
frozen_trial = _run_trial(study, func, catch)
File ~\anaconda3\lib\site-packages\optuna\study\_optimize.py:234 in _run_trial
raise func_err
File ~\anaconda3\lib\site-packages\optuna\study\_optimize.py:196 in _run_trial
value_or_values = func(trial)
File ~\anaconda3\lib\site-packages\pytorch_forecasting\models\temporal_fusion_transformer\tuning.py:150 in objective
trainer = pl.Trainer(
File ~\anaconda3\lib\site-packages\pytorch_lightning\utilities\argparse.py:345 in insert_env_defaults
return fn(self, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'weights_summary'
What is problem with the code? Is there anyone to help me, please?
| [
"So, i had same ploblom as you have.\nI suggest you find out \"weights_suammry\" variable on your code\nI use .yaml file and put parameters of pytorch_lightning.Trainer automatically using hydra also use strategy=DDPStrategy(find~)\ni just realize there was weights_summary in .yaml file,\nthe structure was\ntrainer:\n _target_: ~~\n ~~:\n weights_summary : \"top\"\n\nand i remove weights_summary on it and the plobloms have solved\n",
"So it seems like there is an incompatibility issue with the pytorch_lightning version that you are using. Your version is probably to advanced.\nI'm using pytorch_ligtning v1.5, and pytorch_forecasting at 0.10.2, and it works.\n"
] | [
0,
0
] | [] | [] | [
"anaconda",
"optuna",
"python",
"pytorch_lightning"
] | stackoverflow_0074157157_anaconda_optuna_python_pytorch_lightning.txt |
Q:
Can pip (or setuptools, distribute etc...) list the license used by each installed package?
I'm trying to audit a Python project with a large number of dependencies and while I can manually look up each project's homepage/license terms, it seems like most OSS packages should already contain the license name and version in their metadata.
Unfortunately I can't find any options in pip or easy_install to list more than the package name and installed version (via pip freeze).
Does anyone have pointers to a tool to list license metadata for Python packages?
A:
Here is a copy-pasteable snippet which will print your packages.
Requires: prettytable (pip install prettytable)
Code
import pkg_resources
import prettytable
def get_pkg_license(pkg):
try:
lines = pkg.get_metadata_lines('METADATA')
except:
lines = pkg.get_metadata_lines('PKG-INFO')
for line in lines:
if line.startswith('License:'):
return line[9:]
return '(Licence not found)'
def print_packages_and_licenses():
t = prettytable.PrettyTable(['Package', 'License'])
for pkg in sorted(pkg_resources.working_set, key=lambda x: str(x).lower()):
t.add_row((str(pkg), get_pkg_license(pkg)))
print(t)
if __name__ == "__main__":
print_packages_and_licenses()
Example Output
+---------------------------+--------------------------------------------------------------+
| Package | License |
+---------------------------+--------------------------------------------------------------+
| appdirs 1.4.3 | MIT |
| argon2-cffi 16.3.0 | MIT |
| boto3 1.4.4 | Apache License 2.0 |
| botocore 1.5.21 | Apache License 2.0 |
| cffi 1.10.0 | MIT |
| colorama 0.3.9 | BSD |
| decorator 4.0.11 | new BSD License |
| Django 1.11 | BSD |
| django-debug-toolbar 1.7 | BSD |
| django-environ 0.4.3 | MIT License |
| django-storages 1.5.2 | BSD |
| django-uuslug 1.1.8 | BSD |
| djangorestframework 3.6.2 | BSD |
| docutils 0.13.1 | public domain, Python, 2-Clause BSD, GPL 3 (see COPYING.txt) |
| EasyProcess 0.2.3 | BSD |
| ipython 6.0.0 | BSD |
| ipython-genutils 0.2.0 | BSD |
| jedi 0.10.2 | MIT |
| jmespath 0.9.1 | MIT |
| packaging 16.8 | BSD or Apache License, Version 2.0 |
| pickleshare 0.7.4 | MIT |
| pip 9.0.1 | MIT |
| prettytable 0.7.2 | BSD (3 clause) |
| prompt-toolkit 1.0.14 | UNKNOWN |
| psycopg2 2.6.2 | LGPL with exceptions or ZPL |
| pycparser 2.17 | BSD |
| Pygments 2.2.0 | BSD License |
| pyparsing 2.2.0 | MIT License |
| python-dateutil 2.6.0 | Simplified BSD |
| python-slugify 1.2.4 | MIT |
| pytz 2017.2 | MIT |
| PyVirtualDisplay 0.2.1 | BSD |
| s3transfer 0.1.10 | Apache License 2.0 |
| selenium 3.0.2 | UNKNOWN |
| setuptools 35.0.2 | UNKNOWN |
| simplegeneric 0.8.1 | ZPL 2.1 |
| six 1.10.0 | MIT |
| sqlparse 0.2.3 | BSD |
| traitlets 4.3.2 | BSD |
| Unidecode 0.4.20 | GPL |
| wcwidth 0.1.7 | MIT |
| wheel 0.30.0a0 | MIT |
| win-unicode-console 0.5 | MIT |
+---------------------------+--------------------------------------------------------------+
A:
You can use pkg_resources:
import pkg_resources
def get_pkg_license(pkgname):
"""
Given a package reference (as from requirements.txt),
return license listed in package metadata.
NOTE: This function does no error checking and is for
demonstration purposes only.
"""
pkgs = pkg_resources.require(pkgname)
pkg = pkgs[0]
for line in pkg.get_metadata_lines('PKG-INFO'):
(k, v) = line.split(': ', 1)
if k == "License":
return v
return None
Example use:
>>> get_pkg_license('mercurial')
'GNU GPLv2+'
>>> get_pkg_license('pytz')
'MIT'
>>> get_pkg_license('django')
'UNKNOWN'
A:
Here is a way to do this with yolk3k (Command-line tool for querying PyPI and Python packages installed on your system.)
pip install yolk3k
yolk -l -f license
#-l lists all installed packages
#-f Show specific metadata fields (In this case, License)
A:
1. Choice
pip-licenses PyPI package.
2. Relevance
This answer is relevant for March 2018. In the future, the data of this answer may be obsolete.
3. Argumentation
simple installation: pip install pip-licenses,
more features and options than yolk3k,
actively maintained.
4. Demonstration
Example output:
D:\KristinitaPelican>pipenv run pip-licenses --with-system --order=license --format-markdown
| Name | Version | License |
|---------------------|-----------|--------------------------------------------------------------|
| requests | 2.18.4 | Apache 2.0 |
| actdiag | 0.5.4 | Apache License 2.0 |
| blockdiag | 1.5.3 | Apache License 2.0 |
| nwdiag | 1.0.4 | Apache License 2.0 |
| seqdiag | 0.9.5 | Apache License 2.0 |
| Jinja2 | 2.10 | BSD |
| MarkupSafe | 1.0 | BSD |
| license-info | 0.8.7 | BSD |
| pip-review | 1.0 | BSD |
| pylicense | 1 | BSD |
| PTable | 0.9.2 | BSD (3 clause) |
| webcolors | 1.8.1 | BSD 3-Clause |
| Markdown | 2.6.11 | BSD License |
| Pygments | 2.2.0 | BSD License |
| yolk3k | 0.9 | BSD License |
| packaging | 17.1 | BSD or Apache License, Version 2.0 |
| idna | 2.6 | BSD-like |
| markdown-newtab | 0.2.0 | CC0 |
| pyembed | 1.3.3 | Copyright © 2013 Matt Thomson |
| pyembed-markdown | 1.1.0 | Copyright © 2013 Matt Thomson |
| python-dateutil | 2.7.2 | Dual License |
| Unidecode | 1.0.22 | GPL |
| chardet | 3.0.4 | LGPL |
| beautifulsoup4 | 4.6.0 | MIT |
| funcparserlib | 0.3.6 | MIT |
| gevent | 1.2.2 | MIT |
| markdown-blockdiag | 0.7.0 | MIT |
| pip | 9.0.1 | MIT |
| pkgtools | 0.7.3 | MIT |
| pytz | 2018.3 | MIT |
| six | 1.11.0 | MIT |
| urllib3 | 1.22 | MIT |
| wheel | 0.30.0 | MIT |
| blinker | 1.4 | MIT License |
| greenlet | 0.4.13 | MIT License |
| pip-licenses | 1.7.0 | MIT License |
| pymdown-extensions | 4.9.2 | MIT License |
| pyparsing | 2.2.0 | MIT License |
| certifi | 2018.1.18 | MPL-2.0 |
| markdown-downheader | 1.1.0 | Simplified BSD License |
| Pillow | 5.0.0 | Standard PIL License |
| feedgenerator | 1.9 | UNKNOWN |
| license-lister | 0.1.1 | UNKNOWN |
| md-environ | 0.1.0 | UNKNOWN |
| mdx-cite | 1.0 | UNKNOWN |
| mdx-customspanclass | 1.1.1 | UNKNOWN |
| pelican | 3.7.1 | UNKNOWN |
| setuptools | 38.5.1 | UNKNOWN |
| docutils | 0.14 | public domain, Python, 2-Clause BSD, GPL 3 (see COPYING.txt) |
5. External link
pip-licenses official description.
A:
According to the output of pip show -v, there are two possible places where the license information for each package appears.
Here are some examples:
$ pip show django -v | grep -i license
License: BSD
License :: OSI Approved :: BSD License
$ pip show setuptools -v | grep -i license
License: UNKNOWN
License :: OSI Approved :: MIT License
$ pip show python-dateutil -v | grep -i license
License: Dual License
License :: OSI Approved :: BSD License
License :: OSI Approved :: Apache Software License
$ pip show ipdb -v | grep -i license
License: BSD
The code below returns an iterator that contains all possible licenses of a package, using pkg_resources from setuptools:
from itertools import chain, compress
from pkg_resources import get_distribution
def filters(line):
return compress(
(line[9:], line[39:]),
(line.startswith('License:'), line.startswith('Classifier: License')),
)
def get_pkg_license(pkg):
distribution = get_distribution(pkg)
try:
lines = distribution.get_metadata_lines('METADATA')
except OSError:
lines = distribution.get_metadata_lines('PKG-INFO')
return tuple(chain.from_iterable(map(filters, lines)))
Here are the results:
>>> tuple(get_pkg_license(get_distribution('django')))
('BSD', 'BSD License')
>>> tuple(get_pkg_license(get_distribution('setuptools')))
('UNKNOWN', 'MIT License')
>>> tuple(get_pkg_license(get_distribution('python-dateutil')))
('Dual License', 'BSD License', 'Apache Software License')
>>> tuple(get_pkg_license(get_distribution('ipdb')))
('BSD',)
Finally, to get all licenses from installed apps:
>>> {
p.project_name: get_pkg_license(p)
for p in pkg_resources.working_set
}
A:
A slightly better version for those running jupyter - uses Anaconda defaults - no install needed
import pkg_resources
import pandas as pd
def get_pkg_license(pkg):
try:
lines = pkg.get_metadata_lines('METADATA')
except:
lines = pkg.get_metadata_lines('PKG-INFO')
for line in lines:
if line.startswith('License:'):
return line[9:]
return '(Licence not found)'
def print_packages_and_licenses():
table = []
for pkg in sorted(pkg_resources.working_set, key=lambda x: str(x).lower()):
table.append([str(pkg).split(' ',1)[0], str(pkg).split(' ',1)[1], get_pkg_license(pkg)])
df = pd.DataFrame(table, columns=['Package', 'Version', 'License'])
return df
print_packages_and_licenses()
A:
With pip:
pip show django | grep License
If you want to get the PyPI classifier for the license, use the verbose option:
pip show -v django | grep 'License ::'
A:
I found several ideas from the answers and comments for this question to be relevant and wrote a short script for generating the license information for the applicable virtualenv:
import pkg_resources
import copy
def get_packages_info():
KEY_MAP = {
"Name": 'name',
"Version": 'version',
"License": 'license',
}
empty_info = {}
for key, name in KEY_MAP.iteritems():
empty_info[name] = ""
packages = pkg_resources.working_set.by_key
infos = []
for pkg_name, pkg in packages.iteritems():
info = copy.deepcopy(empty_info)
try:
lines = pkg.get_metadata_lines('METADATA')
except (KeyError, IOError):
lines = pkg.get_metadata_lines('PKG-INFO')
for line in lines:
try:
key, value = line.split(': ', 1)
if KEY_MAP.has_key(key):
info[KEY_MAP[key]] = value
except ValueError:
pass
infos += [info]
return "name,version,license\n%s" % "\n".join(['"%s","%s","%s"' % (info['name'], info['version'], info['license']) for info in sorted(infos, key=(lambda item: item['name'].lower()))])
A:
Based on the answer provided by @garromark and tweaked for Python 3, I use this on the command line:
import pkg_resources
import copy
def get_packages_info():
KEY_MAP = {
"Name": 'name',
"Version": 'version',
"License": 'license',
}
empty_info = {}
for key, name in KEY_MAP.items():
empty_info[name] = ""
packages = pkg_resources.working_set.by_key
infos = []
for pkg_name, pkg in packages.items():
info = copy.deepcopy(empty_info)
try:
lines = pkg.get_metadata_lines('METADATA')
except (KeyError, IOError):
lines = pkg.get_metadata_lines('PKG-INFO')
for line in lines:
try:
key, value = line.split(': ', 1)
if key in KEY_MAP:
info[KEY_MAP[key]] = value
except ValueError:
pass
infos += [info]
return "name,version,license\n%s" % "\n".join(['"%s","%s","%s"' % (info['name'], info['version'], info['license']) for info in sorted(infos, key=(lambda item: item['name'].lower()))])
print(get_packages_info())
A:
Another option is to use Brian Dailey's Python Package License Checker.
git clone https://github.com/briandailey/python-packages-license-check.git
cd python-packages-license-check
... activate your chosen virtualenv ...
./check.py
A:
The answer didn't work for me; a lot of those libraries generated exceptions.
So I did a little brute force:
import re
import subprocess

def get_pkg_license_use_show(pkgname):
    """
    Given a package reference (as from requirements.txt),
    return license listed in package metadata.
    NOTE: This function does no error checking and is for
    demonstration purposes only.
    """
    # pip show returns bytes, so decode before splitting into lines
    out = subprocess.check_output(["pip", "show", pkgname]).decode()
    pattern = re.compile(r"License: (.*)")
    license_line = [i for i in out.split("\n") if i.startswith('License')]
    match = pattern.match(license_line[0])
    license = match.group(1)
    return license
A:
I found liccheck to be the best option. It shows all licenses in a dependency tree, so you immediately know where each license comes from. It also offers the ability to allow and forbid licenses, even via a pyproject.toml configuration.
pip install liccheck
liccheck
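As a rough sketch of what the pyproject.toml configuration can look like (the key names below are written from memory and should be treated as an assumption; double-check them against the liccheck README):
[tool.liccheck]
authorized_licenses = ["bsd", "mit", "apache software license"]
unauthorized_licenses = ["gpl v3"]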
| Can pip (or setuptools, distribute etc...) list the license used by each installed package? | I'm trying to audit a Python project with a large number of dependencies and while I can manually look up each project's homepage/license terms, it seems like most OSS packages should already contain the license name and version in their metadata.
Unfortunately I can't find any options in pip or easy_install to list more than the package name and installed version (via pip freeze).
Does anyone have pointers to a tool to list license metadata for Python packages?
| [
"Here is a copy-pasteable snippet which will print your packages.\nRequires: prettytable (pip install prettytable)\nCode\nimport pkg_resources\nimport prettytable\n\ndef get_pkg_license(pkg):\n try:\n lines = pkg.get_metadata_lines('METADATA')\n except:\n lines = pkg.get_metadata_lines('PKG-INFO')\n\n for line in lines:\n if line.startswith('License:'):\n return line[9:]\n return '(Licence not found)'\n\ndef print_packages_and_licenses():\n t = prettytable.PrettyTable(['Package', 'License'])\n for pkg in sorted(pkg_resources.working_set, key=lambda x: str(x).lower()):\n t.add_row((str(pkg), get_pkg_license(pkg)))\n print(t)\n\n\nif __name__ == \"__main__\":\n print_packages_and_licenses()\n\nExample Output\n+---------------------------+--------------------------------------------------------------+\n| Package | License |\n+---------------------------+--------------------------------------------------------------+\n| appdirs 1.4.3 | MIT |\n| argon2-cffi 16.3.0 | MIT |\n| boto3 1.4.4 | Apache License 2.0 |\n| botocore 1.5.21 | Apache License 2.0 |\n| cffi 1.10.0 | MIT |\n| colorama 0.3.9 | BSD |\n| decorator 4.0.11 | new BSD License |\n| Django 1.11 | BSD |\n| django-debug-toolbar 1.7 | BSD |\n| django-environ 0.4.3 | MIT License |\n| django-storages 1.5.2 | BSD |\n| django-uuslug 1.1.8 | BSD |\n| djangorestframework 3.6.2 | BSD |\n| docutils 0.13.1 | public domain, Python, 2-Clause BSD, GPL 3 (see COPYING.txt) |\n| EasyProcess 0.2.3 | BSD |\n| ipython 6.0.0 | BSD |\n| ipython-genutils 0.2.0 | BSD |\n| jedi 0.10.2 | MIT |\n| jmespath 0.9.1 | MIT |\n| packaging 16.8 | BSD or Apache License, Version 2.0 |\n| pickleshare 0.7.4 | MIT |\n| pip 9.0.1 | MIT |\n| prettytable 0.7.2 | BSD (3 clause) |\n| prompt-toolkit 1.0.14 | UNKNOWN |\n| psycopg2 2.6.2 | LGPL with exceptions or ZPL |\n| pycparser 2.17 | BSD |\n| Pygments 2.2.0 | BSD License |\n| pyparsing 2.2.0 | MIT License |\n| python-dateutil 2.6.0 | Simplified BSD |\n| python-slugify 1.2.4 | MIT |\n| pytz 2017.2 | MIT |\n| PyVirtualDisplay 0.2.1 | BSD |\n| s3transfer 0.1.10 | Apache License 2.0 |\n| selenium 3.0.2 | UNKNOWN |\n| setuptools 35.0.2 | UNKNOWN |\n| simplegeneric 0.8.1 | ZPL 2.1 |\n| six 1.10.0 | MIT |\n| sqlparse 0.2.3 | BSD |\n| traitlets 4.3.2 | BSD |\n| Unidecode 0.4.20 | GPL |\n| wcwidth 0.1.7 | MIT |\n| wheel 0.30.0a0 | MIT |\n| win-unicode-console 0.5 | MIT |\n+---------------------------+--------------------------------------------------------------+\n\n",
"You can use pkg_resources:\nimport pkg_resources\n\ndef get_pkg_license(pkgname):\n \"\"\"\n Given a package reference (as from requirements.txt),\n return license listed in package metadata.\n NOTE: This function does no error checking and is for\n demonstration purposes only.\n \"\"\"\n pkgs = pkg_resources.require(pkgname)\n pkg = pkgs[0]\n for line in pkg.get_metadata_lines('PKG-INFO'):\n (k, v) = line.split(': ', 1)\n if k == \"License\":\n return v\n return None\n\nExample use:\n>>> get_pkg_license('mercurial')\n'GNU GPLv2+'\n>>> get_pkg_license('pytz')\n'MIT'\n>>> get_pkg_license('django')\n'UNKNOWN'\n\n",
"Here is a way to do this with yolk3k (Command-line tool for querying PyPI and Python packages installed on your system.)\npip install yolk3k\n\nyolk -l -f license\n#-l lists all installed packages\n#-f Show specific metadata fields (In this case, License) \n\n",
"1. Choice\npip-licenses PyPI package.\n\n2. Relevance\nThis answer is relevant for March 2018. In the future, the data of this answer may be obsolete.\n\n3. Argumentation\n\nsimply installation — pip install pip-licenses,\nmore features and options, than yolk3k,\nactive maintained.\n\n\n4. Demonstration\nExample output:\nD:\\KristinitaPelican>pipenv run pip-licenses --with-system --order=license --format-markdown\n| Name | Version | License |\n|---------------------|-----------|--------------------------------------------------------------|\n| requests | 2.18.4 | Apache 2.0 |\n| actdiag | 0.5.4 | Apache License 2.0 |\n| blockdiag | 1.5.3 | Apache License 2.0 |\n| nwdiag | 1.0.4 | Apache License 2.0 |\n| seqdiag | 0.9.5 | Apache License 2.0 |\n| Jinja2 | 2.10 | BSD |\n| MarkupSafe | 1.0 | BSD |\n| license-info | 0.8.7 | BSD |\n| pip-review | 1.0 | BSD |\n| pylicense | 1 | BSD |\n| PTable | 0.9.2 | BSD (3 clause) |\n| webcolors | 1.8.1 | BSD 3-Clause |\n| Markdown | 2.6.11 | BSD License |\n| Pygments | 2.2.0 | BSD License |\n| yolk3k | 0.9 | BSD License |\n| packaging | 17.1 | BSD or Apache License, Version 2.0 |\n| idna | 2.6 | BSD-like |\n| markdown-newtab | 0.2.0 | CC0 |\n| pyembed | 1.3.3 | Copyright © 2013 Matt Thomson |\n| pyembed-markdown | 1.1.0 | Copyright © 2013 Matt Thomson |\n| python-dateutil | 2.7.2 | Dual License |\n| Unidecode | 1.0.22 | GPL |\n| chardet | 3.0.4 | LGPL |\n| beautifulsoup4 | 4.6.0 | MIT |\n| funcparserlib | 0.3.6 | MIT |\n| gevent | 1.2.2 | MIT |\n| markdown-blockdiag | 0.7.0 | MIT |\n| pip | 9.0.1 | MIT |\n| pkgtools | 0.7.3 | MIT |\n| pytz | 2018.3 | MIT |\n| six | 1.11.0 | MIT |\n| urllib3 | 1.22 | MIT |\n| wheel | 0.30.0 | MIT |\n| blinker | 1.4 | MIT License |\n| greenlet | 0.4.13 | MIT License |\n| pip-licenses | 1.7.0 | MIT License |\n| pymdown-extensions | 4.9.2 | MIT License |\n| pyparsing | 2.2.0 | MIT License |\n| certifi | 2018.1.18 | MPL-2.0 |\n| markdown-downheader | 1.1.0 | Simplified BSD License |\n| Pillow | 5.0.0 | Standard PIL License |\n| feedgenerator | 1.9 | UNKNOWN |\n| license-lister | 0.1.1 | UNKNOWN |\n| md-environ | 0.1.0 | UNKNOWN |\n| mdx-cite | 1.0 | UNKNOWN |\n| mdx-customspanclass | 1.1.1 | UNKNOWN |\n| pelican | 3.7.1 | UNKNOWN |\n| setuptools | 38.5.1 | UNKNOWN |\n| docutils | 0.14 | public domain, Python, 2-Clause BSD, GPL 3 (see COPYING.txt) |\n\n\n5. External link\n\npip-licenses official description.\n\n",
"According to the output of pip show -v, there are two possible places where the information about the license for each package, lies.\nHere are some examples:\n$ pip show django -v | grep -i license\nLicense: BSD\n License :: OSI Approved :: BSD License\n\n$ pip show setuptools -v | grep -i license\nLicense: UNKNOWN\n License :: OSI Approved :: MIT License\n\n$ pip show python-dateutil -v | grep -i license\nLicense: Dual License\n License :: OSI Approved :: BSD License\n License :: OSI Approved :: Apache Software License\n\n$ pip show ipdb -v | grep -i license\nLicense: BSD\n\nThe code below returns an iterator that contains all possible licenses of a package, using pkg_resources from setuptools:\nfrom itertools import chain, compress\nfrom pkg_resources import get_distribution\n\n\ndef filters(line):\n return compress(\n (line[9:], line[39:]),\n (line.startswith('License:'), line.startswith('Classifier: License')),\n )\n\n\ndef get_pkg_license(pkg):\n distribution = get_distribution(pkg)\n try:\n lines = distribution.get_metadata_lines('METADATA')\n except OSError:\n lines = distribution.get_metadata_lines('PKG-INFO')\n return tuple(chain.from_iterable(map(filters, lines)))\n\nHere are the results:\n>>> tuple(get_pkg_license(get_distribution('django')))\n('BSD', 'BSD License')\n\n>>> tuple(get_pkg_license(get_distribution('setuptools')))\n('UNKNOWN', 'MIT License')\n\n>>> tuple(get_pkg_license(get_distribution('python-dateutil')))\n('Dual License', 'BSD License', 'Apache Software License')\n\n>>> tuple(get_pkg_license(get_distribution('ipdb')))\n('BSD',)\n\nFinally, to get all licenses from installed apps:\n>>> {\n p.project_name: get_pkg_license(p) \n for p in pkg_resources.working_set\n } \n\n",
"A slightly better version for those running jupyter - uses Anaconda defaults - no install needed\nimport pkg_resources\nimport pandas as pd\n\ndef get_pkg_license(pkg):\n try:\n lines = pkg.get_metadata_lines('METADATA')\n except:\n lines = pkg.get_metadata_lines('PKG-INFO')\n\n for line in lines:\n if line.startswith('License:'):\n return line[9:]\n return '(Licence not found)'\n\ndef print_packages_and_licenses():\n table = []\n for pkg in sorted(pkg_resources.working_set, key=lambda x: str(x).lower()):\n table.append([str(pkg).split(' ',1)[0], str(pkg).split(' ',1)[1], get_pkg_license(pkg)])\n df = pd.DataFrame(table, columns=['Package', 'Version', 'License'])\n return df\n\nprint_packages_and_licenses() \n\n",
"With pip:\npip show django | grep License\nIf you want to get the PyPI classifier for the license, use the verbose option:\npip show -v django | grep 'License ::'\n",
"I found several ideas from the answers and comments for this question to be relevant and wrote a short script for generating the license information for the applicable virtualenv:\nimport pkg_resources\nimport copy\n\ndef get_packages_info():\n KEY_MAP = {\n \"Name\": 'name',\n \"Version\": 'version',\n \"License\": 'license',\n }\n empty_info = {}\n for key, name in KEY_MAP.iteritems():\n empty_info[name] = \"\"\n\n packages = pkg_resources.working_set.by_key\n infos = []\n for pkg_name, pkg in packages.iteritems():\n info = copy.deepcopy(empty_info)\n try:\n lines = pkg.get_metadata_lines('METADATA')\n except (KeyError, IOError):\n lines = pkg.get_metadata_lines('PKG-INFO')\n\n for line in lines:\n try:\n key, value = line.split(': ', 1)\n if KEY_MAP.has_key(key):\n info[KEY_MAP[key]] = value\n except ValueError:\n pass\n\n infos += [info]\n\n return \"name,version,license\\n%s\" % \"\\n\".join(['\"%s\",\"%s\",\"%s\"' % (info['name'], info['version'], info['license']) for info in sorted(infos, key=(lambda item: item['name'].lower()))])\n\n",
"Based on answer provided by @garromark and tweaked for Python 3, I use this on the command line:\nimport pkg_resources import copy\n\ndef get_packages_info():\n KEY_MAP = {\n \"Name\": 'name',\n \"Version\": 'version',\n \"License\": 'license',\n }\n empty_info = {}\n for key, name in KEY_MAP.items():\n empty_info[name] = \"\"\n\n packages = pkg_resources.working_set.by_key\n infos = []\n for pkg_name, pkg in packages.items():\n info = copy.deepcopy(empty_info)\n try:\n lines = pkg.get_metadata_lines('METADATA')\n except (KeyError, IOError):\n lines = pkg.get_metadata_lines('PKG-INFO')\n\n for line in lines:\n try:\n key, value = line.split(': ', 1)\n if key in KEY_MAP:\n info[KEY_MAP[key]] = value\n except ValueError:\n pass\n\n infos += [info]\n\n return \"name,version,license\\n%s\" % \"\\n\".join(['\"%s\",\"%s\",\"%s\"' % (info['name'], info['version'], info['license']) for info in sorted(infos, key=(lambda item: item['name'].lower()))])\n\n print(get_packages_info())\n\n",
"Another option is to use Brian Dailey's Python Package License Checker.\ngit clone https://github.com/briandailey/python-packages-license-check.git\ncd python-packages-license-check\n... activate your chosen virtualenv ...\n./check.py\n\n",
"The answer didn't work for me a lot of those libraries generated exception.\nSo did a little brute force\ndef get_pkg_license_use_show(pkgname):\n \"\"\"\n Given a package reference (as from requirements.txt),\n return license listed in package metadata.\n NOTE: This function does no error checking and is for\n demonstration purposes only.\n \"\"\"\n out = subprocess.check_output([\"pip\", 'show', pkgname])\n pattern = re.compile(r\"License: (.*)\")\n license_line = [i for i in out.split(\"\\n\") if i.startswith('License')]\n match = pattern.match(license_line[0])\n license = match.group(1)\n return license\n\n",
"I found liccheck to be the best option. It shows all licenses in a dependency tree, so you immediately know where licenses come from. It also offers the ability to allow and forbid licenses, even pyproject.toml format.\npip install liccheck\nliccheck\n\n"
] | [
36,
28,
17,
11,
4,
3,
2,
1,
1,
0,
0,
0
] | [] | [] | [
"easy_install",
"licensing",
"pip",
"python",
"virtualenv"
] | stackoverflow_0019086030_easy_install_licensing_pip_python_virtualenv.txt |
Q:
python - When are WebSocketHandler and TornadoWebSocketClient completely deleted?
I'm working on an application that must support client-server connections. To do that, I'm using the Tornado module that allows me to create WebSockets. I intend for the server side, at least, to be always running, so I am very concerned about the performance and memory usage of each of the objects created for these connections.
I have started running tests to detect when these objects are actually deleted by the library.
I took the example code and overrode the __del__() method:
server.py
#! /usr/bin/env python
import tornado.httpserver
import tornado.websocket
import tornado.ioloop
import tornado.web
import gc, sys
import resource
class WSHandler(tornado.websocket.WebSocketHandler):
def open(self):
print 'new connection'
self.write_message("h")
def check_origin(self, origin):
return True
def on_message(self, message):
print "Message: " + message
def on_close(self):
print 'Closed'
print 'GC count: ' + str(len(gc.get_referrers(self)))
def __del__(self):
print "DELETED"
application = tornado.web.Application([
(r'/s', WSHandler),
])
if __name__ == "__main__":
http_server = tornado.httpserver.HTTPServer(application)
http_server.listen(8888)
tornado.ioloop.IOLoop.instance().start()
client.py
#! /usr/bin/env python
from ws4py.client.tornadoclient import TornadoWebSocketClient
from tornado import ioloop
class MainClient(TornadoWebSocketClient):
def opened(self):
print "Connected"
def received_message(self, message):
print "Message"
#I close the connection
self.close()
def closed(self, code, reason=None):
print "Closed"
ioloop.IOLoop.instance().stop()
def __del__(self):
print "DELETED"
if __name__ == "__main__":
ws = MainClient('ws://localhost:8888/s', protocols=['http-only', 'chat'])
ws.connect()
ioloop.IOLoop.instance().start()
When the client receives a message, it closes the connection. I hoped that both objects would be deleted, because the connection was closed, and that the __del__() method would therefore be called, but that didn't happen.
server output:
new connection
Closed
GC count: 6
client output:
Connected
Message
Closed
As you can see, it didn't print the DELETED message that I was expecting from the __del__() method.
--edited--
Also, I've added a line that prints the number of referrers the GC reports for the object at the time the connection is closed. This shows that there are indeed reference cycles.
-----
Obviously the classes I will actually use will be more complex than these, but they help me understand the behavior of both objects, which is what I really want to know: when are they deleted? Is the memory freed when they are removed, or does something else happen to them? And how can I delete the objects explicitly?
I read the tornado.websocket.WebSocketHandler documentation; it explains when the object is "closed", but I don't know when the memory is released.
A:
The WebSocket code currently contains some reference cycles, which means that objects are not cleaned up until the next full GC. Even worse, __del__ methods can actually prevent the deletion of an object (in python 3.3 and older: https://docs.python.org/3.3/library/gc.html#gc.garbage), so it's difficult to tell when things are actually getting deleted. Instead, you'll need to just load test your system and see if its memory footprint increases over time.
(Patches to break up the reference cycles after a connection is closed would be welcome)
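To illustrate the point (this is a generic sketch, not Tornado code), a reference cycle combined with __del__ behaves roughly like this:
import gc

class Node(object):
    def __init__(self):
        self.other = None
    def __del__(self):
        print("DELETED")

a, b = Node(), Node()
a.other, b.other = b, a   # create a reference cycle
del a, b                  # nothing printed yet: the refcounts never reach zero

gc.collect()              # the cycle collector runs here
# On Python >= 3.4 (PEP 442) both DELETED lines are printed at this point;
# on Python <= 3.3 the objects are instead left in gc.garbage.
print(gc.garbage)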
A:
For me setting the ping interval/timeout solved the issue:
application = tornado.web.Application([(r'/ws', WSHandler), ], websocket_ping_interval=60, websocket_ping_timeout=180)
| python - When are WebSocketHandler and TornadoWebSocketClient completely deleted? | I'm working on an application that must support client-server connections. In order to do that, I'm using the module of tornado that allows me to create WebSockets. I intend to be always in operation, at least the server-side. So I am very worried about the performance and memory usage of each of the objects created on these connections.
I have started to do tests to detect when actually these objects are eliminated by the library.
Take the example code and i overwrote the method __del__()
server.py
#! /usr/bin/env python
import tornado.httpserver
import tornado.websocket
import tornado.ioloop
import tornado.web
import gc, sys
import resource
class WSHandler(tornado.websocket.WebSocketHandler):
def open(self):
print 'new connection'
self.write_message("h")
def check_origin(self, origin):
return True
def on_message(self, message):
print "Message: " + message
def on_close(self):
print 'Closed'
print 'GC count: ' + str(len(gc.get_referrers(self)))
def __del__(self):
print "DELETED"
application = tornado.web.Application([
(r'/s', WSHandler),
])
if __name__ == "__main__":
http_server = tornado.httpserver.HTTPServer(application)
http_server.listen(8888)
tornado.ioloop.IOLoop.instance().start()
client.py
#! /usr/bin/env python
from ws4py.client.tornadoclient import TornadoWebSocketClient
from tornado import ioloop
class MainClient(TornadoWebSocketClient):
def opened(self):
print "Connected"
def received_message(self, message):
print "Message"
#I close the connection
self.close()
def closed(self, code, reason=None):
print "Closed"
ioloop.IOLoop.instance().stop()
def __del__(self):
print "DELETED"
if __name__ == "__main__":
ws = MainClient('ws://localhost:8888/s', protocols=['http-only', 'chat'])
ws.connect()
ioloop.IOLoop.instance().start()
When the client receives a message, it closes the connection. I hoped that both objects were eliminated, because the connection was closed, and so call the __del__() method, but that didn't happened.
server output:
new connection
Closed
GC count: 6
client output:
Connected
Message
Closed
As you can see it didn't print the DELETED sentence that i was expecting from the __del__() method.
--edited--
Also I've added the line that prints the number of references which has the GC of that object at the time of closing the connection. This proves that there are indeed references cycles.
-----
Obviously the classes that I will use will be more complex than those, but help me to understand the behavior of both objects, which is what I really seek to know: when are they deleted? It frees the memory when remove them? or otherwise becoming somehow? or ¿how delete the object explicitly?
I read the tornado.websocket.WebSocketHandler documentation, it explain me when the object is "closed", but i don't know when the memory is released.
| [
"The WebSocket code currently contains some reference cycles, which means that objects are not cleaned up until the next full GC. Even worse, __del__ methods can actually prevent the deletion of an object (in python 3.3 and older: https://docs.python.org/3.3/library/gc.html#gc.garbage), so it's difficult to tell when things are actually getting deleted. Instead, you'll need to just load test your system and see if its memory footprint increases over time.\n(Patches to break up the reference cycles after a connection is closed would be welcome)\n",
"For me setting the ping interval/timeout solved the issue:\napplication = tornado.web.Application([(r'/ws', WSHandler), ], websocket_ping_interval=60, websocket_ping_timeout=180)\n\n"
] | [
3,
0
] | [] | [] | [
"object",
"python",
"tornado",
"websocket"
] | stackoverflow_0030806485_object_python_tornado_websocket.txt |
Q:
How to return a value from __init__ in Python?
I have a class with an __init__ function.
How can I return an integer value from this function when an object is created?
I wrote a program where __init__ does command-line parsing and I need to have some value set. Is it OK to set it in a global variable and use it in the other member functions? If so, how do I do that? So far, I have declared a variable outside the class, but setting it in one function is not reflected in the other functions.
A:
Why would you want to do that?
If you want to return some other object when a class is called, then use the __new__() method:
class MyClass(object):
def __init__(self):
print "never called in this case"
def __new__(cls):
return 42
obj = MyClass()
print obj
A:
__init__ is required to return None. You cannot (or at least shouldn't) return something else.
Try making whatever you want to return an instance variable (or function).
>>> class Foo:
... def __init__(self):
... return 42
...
>>> foo = Foo()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: __init__() should return None
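For instance, a minimal sketch of the instance-variable approach suggested above:
class Foo:
    def __init__(self):
        self.value = 42   # store the result instead of returning it

foo = Foo()
print(foo.value)          # 42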
A:
From the documentation of __init__:
As a special constraint on constructors, no value may be returned; doing so will cause a TypeError to be raised at runtime.
As a proof, this code:
class Foo(object):
def __init__(self):
return 2
f = Foo()
Gives this error:
Traceback (most recent call last):
File "test_init.py", line 5, in <module>
f = Foo()
TypeError: __init__() should return None, not 'int'
A:
A sample usage for the case in question could look like this:
class SampleObject(object):
def __new__(cls, item):
if cls.IsValid(item):
return super(SampleObject, cls).__new__(cls)
else:
return None
def __init__(self, item):
self.InitData(item) #large amount of data and very complex calculations
...
ValidObjects = []
for i in data:
item = SampleObject(i)
if item: # in case the i data is valid for the sample object
ValidObjects.append(item)
A:
The __init__ method, like other methods and functions, returns None by default in the absence of a return statement, so you can write it like either of these:
class Foo:
def __init__(self):
self.value=42
class Bar:
def __init__(self):
self.value=42
return None
But, of course, adding the return None doesn't buy you anything.
I'm not sure what you are after, but you might be interested in one of these:
class Foo:
def __init__(self):
self.value=42
def __str__(self):
return str(self.value)
f=Foo()
print f.value
print f
prints:
42
42
A:
__init__ doesn't return anything and should always return None.
A:
You can just set it to a class variable and read it from the main program:
class Foo:
def __init__(self):
#Do your stuff here
self.returncode = 42
bar = Foo()
baz = bar.returncode
A:
We cannot return a value from __init__, but we can return a value using __new__.
class Car:
def __new__(cls, speed, unit):
return (f"{speed} with unit {unit}")
car = Car(42, "km")
print(car)
A:
Having __init__() return None (implicitly) and exposing the value through __str__ solved this perfectly:
class Solve:
    def __init__(self, w, d):
        self.value = w
        self.unit = d

    def __str__(self):
        return str("my speed is " + str(self.value) + " " + str(self.unit))

ob = Solve(21, 'kmh')
print(ob)
output:
my speed is 21 kmh
A:
Just wanted to add, you can return classes in __init__
@property
def failureException(self):
class MyCustomException(AssertionError):
def __init__(self_, *args, **kwargs):
*** Your code here ***
return super().__init__(*args, **kwargs)
MyCustomException.__name__ = AssertionError.__name__
return MyCustomException
The above method helps you implement a specific action upon an Exception in your test
A:
I met this case when trying to parse some string data into a recursive data structure, with a counter that had to be passed through.
Python does not allow returning anything from __init__, but you can write a factory function, a class method, or a Parser class, depending on the code structure and the complexity of the parsing, that will parse your data into data objects.
A global variable is not a good solution, as it may be changed somewhere else, breaking the parsing logic.
Function example:
class MyClass():
def __init__(self, a, b, c):
# only assignments here
self.a = a
self.b = b
self.c = c
# return None
def parse(data):
# parsing here
a = ...
b = ...
c = ...
# status, counter, etc.
i = ...
# create an object
my_obj = MyClass(a, b, c)
# return both
return my_obj, i
# get data and parse
data = ...
my_obj, i = parse(data)
Class method example:
class MyClass():
def __init__(self, a, b, c):
self.a = a
self.b = b
self.c = c
@classmethod
def parse(cls, data):
a = ...
b = ...
c = ...
i = ...
obj = cls(a, b, c)
return obj, i
data = ...
my_obj, i = MyClass.parse(data)
| How to return a value from __init__ in Python? | I have a class with an __init__ function.
How can I return an integer value from this function when an object is created?
I wrote a program, where __init__ does command line parsing and I need to have some value set. Is it OK set it in global variable and use it in other member functions? If so how to do that? So far, I declared a variable outside class. and setting it one function doesn't reflect in other function ??
| [
"Why would you want to do that?\nIf you want to return some other object when a class is called, then use the __new__() method:\nclass MyClass(object):\n def __init__(self):\n print \"never called in this case\"\n def __new__(cls):\n return 42\n\nobj = MyClass()\nprint obj\n\n",
"__init__ is required to return None. You cannot (or at least shouldn't) return something else.\nTry making whatever you want to return an instance variable (or function).\n>>> class Foo:\n... def __init__(self):\n... return 42\n... \n>>> foo = Foo()\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: __init__() should return None\n\n",
"From the documentation of __init__:\n\nAs a special constraint on constructors, no value may be returned; doing so will cause a TypeError to be raised at runtime.\n\nAs a proof, this code:\nclass Foo(object):\n def __init__(self):\n return 2\n\nf = Foo()\n\nGives this error:\nTraceback (most recent call last):\n File \"test_init.py\", line 5, in <module>\n f = Foo()\nTypeError: __init__() should return None, not 'int'\n\n",
"Sample Usage of the matter in question can be like:\nclass SampleObject(object):\n\n def __new__(cls, item):\n if cls.IsValid(item):\n return super(SampleObject, cls).__new__(cls)\n else:\n return None\n\n def __init__(self, item):\n self.InitData(item) #large amount of data and very complex calculations\n\n...\n\nValidObjects = []\nfor i in data:\n item = SampleObject(i)\n if item: # in case the i data is valid for the sample object\n ValidObjects.append(item)\n\n",
"The __init__ method, like other methods and functions returns None by default in the absence of a return statement, so you can write it like either of these:\nclass Foo:\n def __init__(self):\n self.value=42\n\nclass Bar:\n def __init__(self):\n self.value=42\n return None\n\nBut, of course, adding the return None doesn't buy you anything.\nI'm not sure what you are after, but you might be interested in one of these:\nclass Foo:\n def __init__(self):\n self.value=42\n def __str__(self):\n return str(self.value)\n\nf=Foo()\nprint f.value\nprint f\n\nprints:\n42\n42\n\n",
"__init__ doesn't return anything and should always return None.\n",
"You can just set it to a class variable and read it from the main program:\nclass Foo:\n def __init__(self):\n #Do your stuff here\n self.returncode = 42\nbar = Foo()\nbaz = bar.returncode\n\n",
"We can not return value from init. But we can return value using new.\nclass Car:\n def __new__(cls, speed, unit):\n return (f\"{speed} with unit {unit}\")\n\n\ncar = Car(42, \"km\")\nprint(car)\n\n",
"init() return none value solved perfectly\nclass Solve:\ndef __init__(self,w,d):\n self.value=w\n self.unit=d\ndef __str__(self):\n return str(\"my speed is \"+str(self.value)+\" \"+str(self.unit))\nob=Solve(21,'kmh')\nprint (ob)\n\noutput:\nmy speed is 21 kmh\n",
"Just wanted to add, you can return classes in __init__\n@property\ndef failureException(self):\n class MyCustomException(AssertionError):\n def __init__(self_, *args, **kwargs):\n *** Your code here ***\n return super().__init__(*args, **kwargs)\n\n MyCustomException.__name__ = AssertionError.__name__\n return MyCustomException\n\nThe above method helps you implement a specific action upon an Exception in your test\n",
"Met this case when tried to parse some string data into a recursive data structure, and had a counter to be passed through.\nPython does not allow to return anything from __init__, but you may write a factory function, or a class method, or a Parser class, depending on the code structure and complexity of parsing, which will parse your data into data objects.\nGlobal variable is not a good solution, as it may be changed somewhere else, breaking the parsing logic.\nFunction example:\nclass MyClass():\n def __init__(self, a, b, c):\n # only assignments here\n self.a = a\n self.b = b\n self.c = c\n # return None\n\n\ndef parse(data):\n # parsing here\n a = ...\n b = ...\n c = ...\n\n # status, counter, etc.\n i = ...\n\n # create an object\n my_obj = MyClass(a, b, c)\n \n # return both\n return my_obj, i\n\n\n# get data and parse\ndata = ...\nmy_obj, i = parse(data)\n\nClass method example:\nclass MyClass():\n def __init__(self, a, b, c):\n self.a = a\n self.b = b\n self.c = c\n\n @classmethod\n def parse(cls, data):\n a = ...\n b = ...\n c = ...\n\n i = ...\n\n obj = cls(a, b, c)\n return obj, i\n\n\ndata = ...\nmy_obj, i = MyClass.parse(data)\n\n"
] | [
176,
157,
41,
23,
16,
10,
7,
4,
3,
0,
0
] | [
"solution here\nYes,\ntrying to return from the init method in python returns errors as it is a constructor of the class you can only assign values for the scope of the class but not return a specific value.\nif you want to return a value but do not wish to create a method, you can use\nstr method\ndef __init__(self,a):\n self.value=a\n\n def __str__(self):\n return str(\"all my return values are possible here\")`\n\n",
"Well, if you don't care about the object instance anymore ... you can just replace it!\nclass MuaHaHa():\ndef __init__(self, ret):\n self=ret\n\nprint MuaHaHa('foo')=='foo'\n\n"
] | [
-3,
-4
] | [
"class",
"init",
"python"
] | stackoverflow_0002491819_class_init_python.txt |
Q:
Moving tables data from sqlite3 to postgressql
What is the simplest way to move data from SQLite3 tables to PostgreSQL in a Django project?
A:
To perform a data backup, we use the following command:
python manage.py dumpdata > data.json  # run this while the project is still using SQLite, before switching to Postgres
This command will generate a data.json file in the root of your project, meaning you generated the dump from SQLite and stored it in JSON format.
Sync Database
python manage.py migrate --run-syncdb  # use this command only after PostgreSQL has been configured in the Django settings
Load Data
After switching the database backend to PostgreSQL, load the dumped data:
python manage.py loaddata data.json  # this loads the data previously exported from SQLite into Postgres
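For reference, the switch between the dump and the load happens in settings.py. A minimal sketch of the PostgreSQL configuration (the database name and credentials below are placeholders, adjust them to your setup):
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'mypassword',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}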
| Moving tables data from sqlite3 to postgressql | What is the simplest way to move data from sqlite3 tables to posgressql in Django project
| [
"To perform a data backup, we use the following command:\npython manage.py dumpdata > data.json #use this command adding before postgres in django\n\nThis command will generate a data.json file in the root of your project, meaning you generated the dumpdata from SQLite and stored it in JSON format.\nSync Database\npython manage.py migrate --run-syncdb #Use this command only after creation of postgres in django.\n\nLoad Data\nThis command will change the database backend to PostgreSQL.\npython manage.py loaddata data.json #This command dumps your previous data from SQlite into postgres.\n\n"
] | [
0
] | [] | [] | [
"django",
"postgresql",
"python",
"sqlite"
] | stackoverflow_0074653143_django_postgresql_python_sqlite.txt |
Q:
iPython custom cell magic - store output in variable?
I recently discovered iPython magic functions and wrote some custom magic. I would like to use cell magic to parse strings, change them slightly and return the result.
Is there a way to store the output of my custom cell magic function in a variable?
I know you can store the output of a line magic function like this:
@register_line_magic
def linemagic(line, cell=None):
#do something
return line
hello = %linemagic hello
print(hello)
Which returns:
>>> hello
In case I have larger strings, I would like to use cell magic instead:
@register_cell_magic
def cellmagic(line, cell=None):
#do something
return cell
It's not possible to use hello = %%cellmagic ... to store the result.
Is there another way to capture the function output?
A:
You can use IPython's input/output caching system:
Output caching:
_ (a single underscore): stores previous output, like Python’s default interpreter.
__ (two underscores): next previous.
___ (three underscores): next-next previous.
_n (n being the prompt counter): the result of output <n>;
actually, _4, Out[4] or _oh[4] all do the same thing
Similarly, for Input caching:
_i, _ii, _iii: store previous, next previous and next-next previous inputs.
_i4, _ih[4] and In[4]: the content of input <n> (e.g 4)
In [2]: from IPython.core.magic import register_cell_magic
In [3]: @register_cell_magic
...: def cellmagic(line, cell=None):
...: #do something
...: return cell
...:
In [4]: %%cellmagic
...: "line0"
...: "line1"
...: "line2"
...:
...:
Out[4]: '"line0"\n"line1"\n"line2"\n\n'
In [5]: _
Out[5]: '"line0"\n"line1"\n"line2"\n\n'
In [6]: _4
Out[6]: '"line0"\n"line1"\n"line2"\n\n'
In [8]: _i4
Out[8]: '%%cellmagic\n"line0"\n"line1"\n"line2"'
In [9]: var = _4
In [10]: var
Out[10]: '"line0"\n"line1"\n"line2"\n\n'
| iPython custom cell magic - store output in variable? | I recently discovered iPython magic functions and wrote some custom magic. I would like to use cell magic to parse strings, change them slightly and return the result.
Is there a way to store the output of my custom cell magic function in a variable?
I know you can store the output of a line magic function like this:
@register_line_magic
def linemagic(line, cell=None):
#do something
return line
hello = %linemagic hello
print(hello)
Which returns:
>>> hello
In case I have larger strings, I would like to use cell magic instead:
@register_cell_magic
def cellmagic(line, cell=None):
#do something
return cell
It's not possible to use hello = %%cellmagic ... to store the result.
Is there another way to capture the function output?
| [
"You can use IPython's input/output caching system:\nOutput caching:\n\n_ (a single underscore): stores previous output, like Python’s default interpreter.\n__ (two underscores): next previous.\n___ (three underscores): next-next previous.\n_n (n being the prompt counter): the result of output \nactually, _4, Out[4] or _oh[4] all do the same thing\n\nSimilarly, for Input caching:\n\n_i, _ii, _iii: store previous, next previous and next-next previous inputs.\n_i4, _ih[4] and In[4]: the content of input <n> (e.g 4)\n\nIn [2]: from IPython.core.magic import register_cell_magic\n\nIn [3]: @register_cell_magic\n ...: def cellmagic(line, cell=None):\n ...: #do something\n ...: return cell\n ...:\n\nIn [4]: %%cellmagic\n ...: \"line0\"\n ...: \"line1\"\n ...: \"line2\"\n ...:\n ...:\nOut[4]: '\"line0\"\\n\"line1\"\\n\"line2\"\\n\\n'\n\nIn [5]: _\nOut[5]: '\"line0\"\\n\"line1\"\\n\"line2\"\\n\\n'\n\nIn [6]: _4\nOut[6]: '\"line0\"\\n\"line1\"\\n\"line2\"\\n\\n'\n\nIn [8]: _i4\nOut[8]: '%%cellmagic\\n\"line0\"\\n\"line1\"\\n\"line2\"'\n\nIn [9]: var = _4\n\nIn [10]: var\nOut[10]: '\"line0\"\\n\"line1\"\\n\"line2\"\\n\\n'\n\n"
] | [
2
] | [] | [] | [
"ipython",
"ipython_magic",
"jupyter_notebook",
"parsing",
"python"
] | stackoverflow_0074643538_ipython_ipython_magic_jupyter_notebook_parsing_python.txt |
Q:
Python Custom Package tries to load Cuda for every module
I have written a custom Python package with two modules as follows:
Project/
|-- deepmodel
|-- __init__.py
|-- deepmodelmodule.py
|-- standalone_utilities
|-- __init__.py
|-- standalone_module.py
|-- setup.py
|-- README
For the deepmodel module I import TensorFlow models, so it makes sense that CUDA (which I don't have on my machine) is loaded every time I import that module somewhere else. For example, I have a hello.py file that looks like this:
import deepmodule
print("hello world")
When I run the hello.py script the following warning appears
2022-12-02 09:30:20.667026: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudnn64_8.dll'; dlerror: cudnn64_8.dll not found
2022-12-02 09:30:20.670433: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1850] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
This warning, however, also appears when I import my standalone_module, which is completely independent from the other module and has no TensorFlow imports whatsoever, e.g.:
import standalone_module
print("hello world")
leads to the same message, and it always takes some time while trying to load CUDA.
I don't really understand why the program tries to load CUDA, because the deepmodule is not used anywhere.
Did I do something wrong when setting up the package and is there a way to get rid of this behavior?
Thanks a lot!
This is my setup.py file
from setuptools import setup, find_namespace_packages
setup(name='my_package',
version='1.0',
description='some description',
author='test',
author_email='test',
packages=['deepmodule', 'standalone_module'],
)
A:
It looks like the problem is with your setup.py file. You're using the find_namespace_packages function from setuptools, which allows you to specify which packages should be included in your package. However, you're not using this function properly.
In your setup.py file, you should specify the names of your package's modules using the find_namespace_packages function. For example, you could use the following code to include the deepmodule and standalone_module modules in your package:
from setuptools import setup, find_namespace_packages
setup(name='my_package',
version='1.0',
description='some description',
author='test',
author_email='test',
packages=find_namespace_packages(include=['deepmodule', 'standalone_module'])
)
This will ensure that only the modules you want to include in your package are included, and that TensorFlow is not trying to load Cuda when you import your standalone module.
I hope this helps! Let me know if you have any other questions.
| Python Custom Package tries to load Cuda for every module | I have written a custom Python package with two modules as follows:
Project/
|-- deepmodel
|-- __init__.py
|-- deepmodelmodule.py
|-- standalone_utilities
|-- __init__.py
|-- standalone_module.py
|-- setup.py
|-- README
For the deepmodel module I import tensorflow models so it makes total sense that Cuda (which I don't have on my machine) is trying to be loaded every time I try to import it somewhere else. For example I have a hello.py file looking like this
import deepmodule
print("hello world")
When I run the hello.py script the following warning appears
2022-12-02 09:30:20.667026: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudnn64_8.dll'; dlerror: cudnn64_8.dll not found
2022-12-02 09:30:20.670433: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1850] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
This warning however also appears when I import my standalone_module which is completely independet from the other module, with no tensorflow imports whatsoever e.g.:
import standalone_module
print("hello world")
leads to the same message and this always takes some time for trying to import cuda.
I don't really understand why the program tries to import cuda because the deepmodule is not called anywhere.
Did I do something wrong when setting up the package and is there a way to get rid of this behavior?
Thanks a lot!
This is my setup.py file
from setuptools import setup, find_namespace_packages
setup(name='my_package',
version='1.0',
description='some description',
author='test',
author_email='test',
packages=['deepmodule', 'standalone_module'],
)
| [
"It looks like the problem is with your setup.py file. You're using the find_namespace_packages function from setuptools, which allows you to specify which packages should be included in your package. However, you're not using this function properly.\nIn your setup.py file, you should specify the names of your package's modules using the find_namespace_packages function. For example, you could use the following code to include the deepmodule and standalone_module modules in your package:\nfrom setuptools import setup, find_namespace_packages\n\nsetup(name='my_package',\n version='1.0',\n description='some description',\n author='test',\n author_email='test',\n packages=find_namespace_packages(include=['deepmodule', 'standalone_module'])\n )\n\nThis will ensure that only the modules you want to include in your package are included, and that TensorFlow is not trying to load Cuda when you import your standalone module.\nI hope this helps! Let me know if you have any other questions.\n"
] | [
0
] | [] | [] | [
"installation",
"python"
] | stackoverflow_0074653145_installation_python.txt |
Q:
Adding text or cross sign on every subplot of plotly, each in unique positions of the subplots
I am struggling to put a cross sign in certain positions of each subplots of plotly in Python. I have 2 subplots and in each one, I want to out the cross in certain positions as below.
Position of the cross sign at the subplot_1 and 2 are attached.
import numpy as np
import plotly.graph_objs as go
import plotly.figure_factory as ff
from plotly.subplots import make_subplots
import string
#Define data for heatmap
N=5
x = np.array([10*k for k in range(N)])
y = np.linspace(0, 2, N)
z1 = np.random.randint(5,15, (N,N))
z2 = np.random.randint(10,27, (N,N))
mytext = np.array(list(string.ascii_uppercase))[:25].reshape(N,N)
fig1 = ff.create_annotated_heatmap(z1, x.tolist(), y.tolist(), colorscale='matter')
fig2 = ff.create_annotated_heatmap(z2, x.tolist(), y.tolist(), annotation_text=mytext, colorscale='Viridis')
fig = make_subplots(
rows=1, cols=2,
horizontal_spacing=0.05,
)
fig.add_trace(fig1.data[0], 1, 1)
fig.add_trace(fig2.data[0], 1, 2)
annot1 = list(fig1.layout.annotations)
annot2 = list(fig2.layout.annotations)
for k in range(len(annot2)):
annot2[k]['xref'] = 'x2'
annot2[k]['yref'] = 'y2'
fig.update_layout(annotations=annot1+annot2)
A:
There are two ways to deal with this question: the first is to use the line mode of the scatterplot and the second is to add a shape. In the line mode of the scatterplot, the real starting position is -0.5, so the heatmap and the cross line are misaligned. So I chose to add a figure.
Also, I can now annotate without using figure_factory, so I'll use a graph object to construct the graph. The configuration is one heatmap combined with two shapes, with the y-axis and x-axis scales changed.
import numpy as np
import plotly.graph_objs as go
from plotly.subplots import make_subplots
np.random.seed(1)
fig = make_subplots(rows=1,
cols=2,
horizontal_spacing=0.05,
)
fig.add_trace(go.Heatmap(z=z1,
text=z1,
texttemplate='%{text}',
showscale=False,
),
row=1,col=1
)
fig.add_shape(type='line',
x0=1.5, y0=1.5, x1=2.5, y1=2.5,
line=dict(color='black', width=2)
)
fig.add_shape(type='line',
x0=2.5, y0=1.5, x1=1.5, y1=2.5,
line=dict(color='black', width=2)
)
fig.add_trace(go.Heatmap(z=z2,
text=mytext,
texttemplate='%{text}',
showscale=False,
colorscale = 'Viridis'
),
row=1,col=2
)
fig.add_shape(type='line',
x0=0.5, y0=-0.5, x1=1.5, y1=0.5,
line=dict(color='black', width=2),
row=1,col=2
)
fig.add_shape(type='line',
x0=1.5, y0=-0.5, x1=0.5, y1=0.5,
line=dict(color='black', width=2),
row=1, col=2
)
fig.update_yaxes(tickvals=[0,1,2,3,4], ticktext=y.tolist())
fig.update_xaxes(tickvals=[0,1,2,3,4], ticktext=x.tolist())
fig.update_layout(autosize=False, width=800)
fig.show()
| Adding text or cross sign on every subplot of plotly, each in unique positions of the subplots | I am struggling to put a cross sign in certain positions of each subplots of plotly in Python. I have 2 subplots and in each one, I want to out the cross in certain positions as below.
Position of the cross sign at the subplot_1 and 2 are attached.
import numpy as np
import plotly.graph_objs as go
import plotly.figure_factory as ff
from plotly.subplots import make_subplots
import string
#Define data for heatmap
N=5
x = np.array([10*k for k in range(N)])
y = np.linspace(0, 2, N)
z1 = np.random.randint(5,15, (N,N))
z2 = np.random.randint(10,27, (N,N))
mytext = np.array(list(string.ascii_uppercase))[:25].reshape(N,N)
fig1 = ff.create_annotated_heatmap(z1, x.tolist(), y.tolist(), colorscale='matter')
fig2 = ff.create_annotated_heatmap(z2, x.tolist(), y.tolist(), annotation_text=mytext, colorscale='Viridis')
fig = make_subplots(
rows=1, cols=2,
horizontal_spacing=0.05,
)
fig.add_trace(fig1.data[0], 1, 1)
fig.add_trace(fig2.data[0], 1, 2)
annot1 = list(fig1.layout.annotations)
annot2 = list(fig2.layout.annotations)
for k in range(len(annot2)):
annot2[k]['xref'] = 'x2'
annot2[k]['yref'] = 'y2'
fig.update_layout(annotations=annot1+annot2)
| [
"There are two ways to deal with this question: the first is to use the line mode of the scatterplot and the second is to add a shape. In the line mode of the scatterplot, the real starting position is -0.5, so the heatmap and the cross line are misaligned. So I chose to add a figure.\nAlso, I can now annotate without using figure_factory, so I'll use a graph object to construct the graph. The configuration is one heatmap combined with two shapes, with the y-axis and x-axis scales changed.\nimport numpy as np\nimport plotly.graph_objs as go\nfrom plotly.subplots import make_subplots\nnp.random.seed(1)\n\nfig = make_subplots(rows=1,\n cols=2,\n horizontal_spacing=0.05,\n)\n\nfig.add_trace(go.Heatmap(z=z1,\n text=z1,\n texttemplate='%{text}',\n showscale=False,\n ),\n row=1,col=1\n )\n\nfig.add_shape(type='line',\n x0=1.5, y0=1.5, x1=2.5, y1=2.5,\n line=dict(color='black', width=2)\n )\nfig.add_shape(type='line',\n x0=2.5, y0=1.5, x1=1.5, y1=2.5,\n line=dict(color='black', width=2)\n )\n\n\nfig.add_trace(go.Heatmap(z=z2,\n text=mytext,\n texttemplate='%{text}',\n showscale=False,\n colorscale = 'Viridis'\n ),\n row=1,col=2\n )\nfig.add_shape(type='line',\n x0=0.5, y0=-0.5, x1=1.5, y1=0.5,\n line=dict(color='black', width=2),\n row=1,col=2\n )\nfig.add_shape(type='line',\n x0=1.5, y0=-0.5, x1=0.5, y1=0.5,\n line=dict(color='black', width=2),\n row=1, col=2\n )\n\n\n\nfig.update_yaxes(tickvals=[0,1,2,3,4], ticktext=y.tolist())\nfig.update_xaxes(tickvals=[0,1,2,3,4], ticktext=x.tolist())\n\nfig.update_layout(autosize=False, width=800)\nfig.show()\n\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"matplotlib",
"numpy",
"plotly",
"python"
] | stackoverflow_0074650730_dataframe_matplotlib_numpy_plotly_python.txt |
Q:
Django not updated inside the docker cookiecutters template
My Django project, created from the Cookiecutter Django template, is not updated in the local development environment after I change the source code; I need to stop and start Docker again. I checked the volume and it seems OK, but there is still no auto-update. The files and their contents are as follows:
version: '3'
volumes:
one_sell_local_postgres_data: {}
one_sell_local_postgres_data_backups: {}
services:
django: &django
build:
context: .
dockerfile: ./compose/local/django/Dockerfile
image: one_sell_local_django
container_name: one_sell_local_django
platform: linux/x86_64
depends_on:
- postgres
- redis
volumes:
- .:/app
env_file:
- ./.envs/.local/.django
- ./.envs/.local/.postgres
ports:
- "8000:8000"
command: /start
postgres:
build:
context: .
dockerfile: ./compose/production/postgres/Dockerfile
image: one_sell_production_postgres
container_name: one_sell_local_postgres
volumes:
- one_sell_local_postgres_data:/var/lib/postgresql/data:Z
- one_sell_local_postgres_data_backups:/backups:z
env_file:
- ./.envs/.local/.postgres
redis:
image: redis:6
container_name: one_sell_local_redis
The Dockerfile for Django:
ARG PYTHON_VERSION=3.9-slim-bullseye
# define an alias for the specfic python version used in this file.
FROM python:${PYTHON_VERSION} as python
# Python build stage
FROM python as python-build-stage
ARG BUILD_ENVIRONMENT=local
# Install apt packages
RUN apt-get update && apt-get install --no-install-recommends -y \
# dependencies for building Python packages
build-essential \
&& apt-get install gdal-bin -y \
# psycopg2 dependencies
libpq-dev
# Requirements are installed here to ensure they will be cached.
COPY ./requirements .
# Create Python Dependency and Sub-Dependency Wheels.
RUN pip wheel --wheel-dir /usr/src/app/wheels \
-r ${BUILD_ENVIRONMENT}.txt
# Python 'run' stage
FROM python as python-run-stage
ARG BUILD_ENVIRONMENT=local
ARG APP_HOME=/app
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV BUILD_ENV ${BUILD_ENVIRONMENT}
WORKDIR ${APP_HOME}
# Install required system dependencies
RUN apt-get update && apt-get install --no-install-recommends -y \
# psycopg2 dependencies
libpq-dev \
# Translations dependencies
gettext \
&& apt-get install gdal-bin -y \
# cleaning up unused files
&& apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
&& rm -rf /var/lib/apt/lists/*
# All absolute dir copies ignore workdir instruction. All relative dir copies are wrt to the workdir instruction
# copy python dependency wheels from python-build-stage
COPY --from=python-build-stage /usr/src/app/wheels /wheels/
# use wheels to install python dependencies
RUN pip install --no-cache-dir --no-index --find-links=/wheels/ /wheels/* \
&& rm -rf /wheels/
COPY ./compose/production/django/entrypoint /entrypoint
RUN sed -i 's/\r$//g' /entrypoint
RUN chmod +x /entrypoint
COPY ./compose/local/django/start /start
RUN sed -i 's/\r$//g' /start
RUN chmod +x /start
COPY ./compose/local/django/celery/worker/start /start-celeryworker
RUN sed -i 's/\r$//g' /start-celeryworker
RUN chmod +x /start-celeryworker
COPY ./compose/local/django/celery/beat/start /start-celerybeat
RUN sed -i 's/\r$//g' /start-celerybeat
RUN chmod +x /start-celerybeat
COPY ./compose/local/django/celery/flower/start /start-flower
RUN sed -i 's/\r$//g' /start-flower
RUN chmod +x /start-flower
# copy application code to WORKDIR
COPY . ${APP_HOME}
ENTRYPOINT ["/entrypoint"]
The entrypoint:
#!/bin/bash
set -o errexit
set -o pipefail
set -o nounset
# N.B. If only .env files supported variable expansion...
export CELERY_BROKER_URL="${REDIS_URL}"
# if [ -z "${POSTGRES_USER}" ]; then
# base_postgres_image_default_user='postgres'
# export POSTGRES_USER="${base_postgres_image_default_user}"
# fi
export DATABASE_URL="postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}"
echo $DATABASE_URL
echo ${POSTGRES_DB}
postgres_ready() {
python << END
import sys
import psycopg2
try:
psycopg2.connect(
dbname="${POSTGRES_DB}",
user="${POSTGRES_USER}",
password="${POSTGRES_PASSWORD}",
host="${POSTGRES_HOST}",
port="${POSTGRES_PORT}",
)
except psycopg2.OperationalError as e:
print(e)
sys.exit(-1)
sys.exit(0)
END
}
# TODO: here the postgres readiness should be checked
until postgres_ready; do
>&2 echo 'Waiting for PostgreSQL to become available...'
sleep 1
done
>&2 echo 'PostgreSQL is available'
exec "$@"
A:
It seems you have copied a lot of things from the production Docker setup into your local Docker setup. I assume you have copied the production/django/start file as well. You can revert it to its original local version, because gunicorn does not reload the server when the code changes (unless you enable that). The original version of the file can be found on GitHub. Basically, it uses:
python manage.py runserver_plus 0.0.0.0:8000
runserver_plus reloads the server when there is a change in the code. Apart from that, you do not need to copy everything from production into the local setup; you can simply refer to the production scripts in the local Dockerfile when copying them into the Docker image, like this (unless there is a difference):
COPY ./compose/production/django/celery/worker/start /start-celeryworker
^^^^^^^^^^
RUN sed -i 's/\r$//g' /start-celeryworker
RUN chmod +x /start-celeryworker
COPY ./compose/production/django/celery/beat/start /start-celerybeat
^^^^^^^^^^
RUN sed -i 's/\r$//g' /start-celerybeat
RUN chmod +x /start-celerybeat
COPY ./compose/production/django/celery/flower/start /start-flower
^^^^^^^^^^
RUN sed -i 's/\r$//g' /start-flower
RUN chmod +x /start-flower
| Django not updated inside the docker cookiecutters template | My Django project that is created based on Cookiecutters is not updated in local development environment after I changed the source code, I need to stop and start the docker again. I checked the volume and it seems ok but still no auto-update. The files and their contents are as follow:
version: '3'
volumes:
one_sell_local_postgres_data: {}
one_sell_local_postgres_data_backups: {}
services:
django: &django
build:
context: .
dockerfile: ./compose/local/django/Dockerfile
image: one_sell_local_django
container_name: one_sell_local_django
platform: linux/x86_64
depends_on:
- postgres
- redis
volumes:
- .:/app
env_file:
- ./.envs/.local/.django
- ./.envs/.local/.postgres
ports:
- "8000:8000"
command: /start
postgres:
build:
context: .
dockerfile: ./compose/production/postgres/Dockerfile
image: one_sell_production_postgres
container_name: one_sell_local_postgres
volumes:
- one_sell_local_postgres_data:/var/lib/postgresql/data:Z
- one_sell_local_postgres_data_backups:/backups:z
env_file:
- ./.envs/.local/.postgres
redis:
image: redis:6
container_name: one_sell_local_redis
The Dockerfile for Django:
ARG PYTHON_VERSION=3.9-slim-bullseye
# define an alias for the specfic python version used in this file.
FROM python:${PYTHON_VERSION} as python
# Python build stage
FROM python as python-build-stage
ARG BUILD_ENVIRONMENT=local
# Install apt packages
RUN apt-get update && apt-get install --no-install-recommends -y \
# dependencies for building Python packages
build-essential \
&& apt-get install gdal-bin -y \
# psycopg2 dependencies
libpq-dev
# Requirements are installed here to ensure they will be cached.
COPY ./requirements .
# Create Python Dependency and Sub-Dependency Wheels.
RUN pip wheel --wheel-dir /usr/src/app/wheels \
-r ${BUILD_ENVIRONMENT}.txt
# Python 'run' stage
FROM python as python-run-stage
ARG BUILD_ENVIRONMENT=local
ARG APP_HOME=/app
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV BUILD_ENV ${BUILD_ENVIRONMENT}
WORKDIR ${APP_HOME}
# Install required system dependencies
RUN apt-get update && apt-get install --no-install-recommends -y \
# psycopg2 dependencies
libpq-dev \
# Translations dependencies
gettext \
&& apt-get install gdal-bin -y \
# cleaning up unused files
&& apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
&& rm -rf /var/lib/apt/lists/*
# All absolute dir copies ignore workdir instruction. All relative dir copies are wrt to the workdir instruction
# copy python dependency wheels from python-build-stage
COPY --from=python-build-stage /usr/src/app/wheels /wheels/
# use wheels to install python dependencies
RUN pip install --no-cache-dir --no-index --find-links=/wheels/ /wheels/* \
&& rm -rf /wheels/
COPY ./compose/production/django/entrypoint /entrypoint
RUN sed -i 's/\r$//g' /entrypoint
RUN chmod +x /entrypoint
COPY ./compose/local/django/start /start
RUN sed -i 's/\r$//g' /start
RUN chmod +x /start
COPY ./compose/local/django/celery/worker/start /start-celeryworker
RUN sed -i 's/\r$//g' /start-celeryworker
RUN chmod +x /start-celeryworker
COPY ./compose/local/django/celery/beat/start /start-celerybeat
RUN sed -i 's/\r$//g' /start-celerybeat
RUN chmod +x /start-celerybeat
COPY ./compose/local/django/celery/flower/start /start-flower
RUN sed -i 's/\r$//g' /start-flower
RUN chmod +x /start-flower
# copy application code to WORKDIR
COPY . ${APP_HOME}
ENTRYPOINT ["/entrypoint"]
The entrypoint:
#!/bin/bash
set -o errexit
set -o pipefail
set -o nounset
# N.B. If only .env files supported variable expansion...
export CELERY_BROKER_URL="${REDIS_URL}"
# if [ -z "${POSTGRES_USER}" ]; then
# base_postgres_image_default_user='postgres'
# export POSTGRES_USER="${base_postgres_image_default_user}"
# fi
export DATABASE_URL="postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}"
echo $DATABASE_URL
echo ${POSTGRES_DB}
postgres_ready() {
python << END
import sys
import psycopg2
try:
psycopg2.connect(
dbname="${POSTGRES_DB}",
user="${POSTGRES_USER}",
password="${POSTGRES_PASSWORD}",
host="${POSTGRES_HOST}",
port="${POSTGRES_PORT}",
)
except psycopg2.OperationalError as e:
print(e)
sys.exit(-1)
sys.exit(0)
END
}
# TODO: here the postgres readiness should be checked
until postgres_ready; do
>&2 echo 'Waiting for PostgreSQL to become available...'
sleep 1
done
>&2 echo 'PostgreSQL is available'
exec "$@"
| [
"It seems you have copied a lot of things from production docker setup to your local docker setup. I assume you have also copied the production/django/start file as well. You can revert it back to its original version because gunicorn does not reload the server when code is changed (unless you allow it). The original version of the code can be found in GitHub. Basically, it uses:\npython manage.py runserver_plus 0.0.0.0:8000\n\nRun server allows you to reload the server when there is a change in code. Apart from that, you do not need to copy everything from production to local settings. You can simply refer them in Dockerfile when copying in Docker image like this (unless there is a difference):\nCOPY ./compose/production/django/celery/worker/start /start-celeryworker\n ^^^^^^^^^^\nRUN sed -i 's/\\r$//g' /start-celeryworker\nRUN chmod +x /start-celeryworker\n\nCOPY ./compose/production/django/celery/beat/start /start-celerybeat^\n ^^^^^^^^^^\nRUN sed -i 's/\\r$//g' /start-celerybeat\nRUN chmod +x /start-celerybeat\n\nCOPY ./compose/production/django/celery/flower/start /start-flower\n ^^^^^^^^^^\nRUN sed -i 's/\\r$//g' /start-flower\nRUN chmod +x /start-flower\n\n"
] | [
0
] | [] | [] | [
"cookiecutter_django",
"django",
"docker",
"python"
] | stackoverflow_0073616735_cookiecutter_django_django_docker_python.txt |
Q:
Create a vector length n with n entries 'x'
Example:
n = 5
x = 3.5
Output:
array([3.5, 3.5, 3.5, 3.5, 3.5])
My code:
import numpy as np
def init_all_x(n, x):
np.all = [x]*n
return np.all
init_all_x(5, 3.5)
My question:
Why does init_all_x(5, 3.5).shape not run?
If my code is wrong, what is the correct code?
Thank you!
A:
you can use np.ones
arr = np.ones(5)*3.5
A:
Simple approach with numpy.repeat:
n = 5
x = 3.5
a = np.repeat(x, n)
Output:
array([3.5, 3.5, 3.5, 3.5, 3.5])
A:
For your requirement, no need to use Numpy lib, you can code like this:
def init_all_x(n, x):
return [x]*n
p = init_all_x(5, 3.5)
print(p)
Output:
[3.5, 3.5, 3.5, 3.5, 3.5]
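As a side note on the original question: a plain Python list, like the one returned here or by the code in the question, has no .shape attribute, which is why init_all_x(5, 3.5).shape fails. A minimal sketch that keeps the list-multiplication idea but returns an ndarray (so .shape works) could look like this:
import numpy as np

def init_all_x(n, x):
    return np.array([x] * n)  # an ndarray instead of a plain list

print(init_all_x(5, 3.5).shape)  # (5,)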
A:
Numpy has a dedicated function np.full to do just this:
n = 5
x = 3.5
out = np.full(n, x)
# array([3.5, 3.5, 3.5, 3.5, 3.5])
| Create a vector length n with n entries 'x' | Example:
n = 5
x = 3.5
Output:
array([3.5, 3.5, 3.5, 3.5, 3.5])
My code:
import numpy as np
def init_all_x(n, x):
np.all = [x]*n
return np.all
init_all_x(5, 3.5)
My question:
Why init_all_x(5, 3.5).shape cannot run?
If my code is wrong, what is the correct code?
Thank you!
| [
"you can use np.ones\narr = np.ones(5)*3.5\n\n",
"Simple approach with numpy.repeat:\nn = 5\nx = 3.5\na = np.repeat(x, n)\n\nOutput:\narray([3.5, 3.5, 3.5, 3.5, 3.5])\n\n",
"For your requirement, no need to use Numpy lib, you can code like this:\ndef init_all_x(n, x):\n return [x]*n\n\np = init_all_x(5, 3.5)\nprint(p)\n\nOutput:\n[3.5, 3.5, 3.5, 3.5, 3.5]\n\n",
"Numpy has a dedicated function np.full to do just this:\nn = 5\nx = 3.5\n\nout = np.full(n, x)\n# array([3.5, 3.5, 3.5, 3.5, 3.5])\n\n"
] | [
0,
0,
0,
0
] | [] | [] | [
"numpy",
"python",
"vector"
] | stackoverflow_0074652423_numpy_python_vector.txt |
Q:
Python Test Monkeypatch with Arguments
I try to create a mock test with monkeypatch. I have a typical service-repository class.
repository_class.py
find_by_id(id):
con.select(....);
service_class.py
get_details(id):
some pre-process...
item = repository_class.find_by_id(id)
post-process...
return result
then I try to create a mock test with mocking repository method under service:
def test_bid_on_brand_keyword(monkeypatch):
mock_data = "abc"
monkeypatch.setattr(repository_class, 'find_by_id', mock_data)
ans = service_class.get_details(id)
assert ans is not None
This doesn't work; it still ends up calling the real repository method. Any suggestions?
A:
monkeypatch couldn’t work for me. I used @patch or with patch functions.
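For reference, a minimal sketch of that patch-based alternative, with module names taken from the question's layout (so they are assumptions, not a verified API of the project):
from unittest.mock import patch

import service_class  # the module under test, per the question's layout

def test_bid_on_brand_keyword():
    # Patch the name where it is looked up, i.e. on the module that calls it.
    with patch.object(service_class.repository_class, "find_by_id",
                      return_value="abc"):
        ans = service_class.get_details(1)
    assert ans is not None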
A:
The attribute you set must be callable (a function returning the mock value), so I would do it like this
def test_bid_on_brand_keyword(monkeypatch):
monkeypatch.setattr(repository_class, 'find_by_id', lambda _:"abc")
ans = service_class.get_details(id)
assert ans is not None
| Python Test Monkeypatch with Arguments | I try to create a mock test with monkeypatch. I have a typical service-repository class.
repository_class.py
find_by_id(id):
con.select(....);
service_class.py
get_details(id):
some pre-process...
item = repository_class.find_by_id(id)
post-process...
return result
then I try to create a mock test with mocking repository method under service:
def test_bid_on_brand_keyword(monkeypatch):
mock_data = "abc"
monkeypatch.setattr(repository_class, 'find_by_id', mock_data)
ans = service_class.get_details(id)
assert ans is not None
This doesn’t work. It tries to call real repository method. Any suggestion?
| [
"monkeypatch couldn’t work for me. I used @patch or with patch functions.\n",
"Return value should be an (mock) object, so i would do it like this\ndef test_bid_on_brand_keyword(monkeypatch): \n monkeypatch.setattr(repository_class, 'find_by_id', lambda _:\"abc\")\n ans = service_class.get_details(id)\n assert ans is not None\n\n"
] | [
0,
0
] | [] | [] | [
"monkeypatching",
"pytest",
"python"
] | stackoverflow_0074516750_monkeypatching_pytest_python.txt |
Q:
cleaning date columns in python
Kindly assist me in cleaning my date types in python.
My sample data is as follows:
INITIATION DATE    DATE CUT        DATE GIVEN
1/July/2022        21 July 2022    11-July-2022
17-July-2022       16/July/2022    21/July/2022
16-July-2022       01-July-2022    09/July/2022
19-July-2022       31 July 2022    27 July 2022
How do I remove all dashes/slashes/hyphens from dates in the different columns? I have 8 columns and 300 rows.
What i tried:
df[['INITIATION DATE', 'DATE CUT', 'DATE GIVEN']]= df[['INITIATION DATE', 'DATE CUT', 'DATE GIVEN']].apply(pd.to_datetime, format = '%d%b%Y')
Desired output format for all: 1 July 2022
ValueError I'm getting:
time data '18 July 2022' does not match format '%d-%b-%Y' (match)
A:
To remove all dashes/slashes/hyphens from strings you can just use the replace method:
df.apply(lambda x: x.str.replace('[/-]',' ',regex=True))
>>>
'''
INITIATION DATE DATE CUT DATE GIVEN
0 1 July 2022 21 July 2022 11 July 2022
1 17 July 2022 16 July 2022 21 July 2022
2 16 July 2022 01 July 2022 09 July 2022
3 19 July 2022 31 July 2022 27 July 2022
and if you also need to convert strings to datetime then try this:
df.apply(lambda x: pd.to_datetime(x.str.replace('[/-]',' ',regex=True)))
>>>
'''
INITIATION DATE DATE CUT DATE GIVEN
0 2022-07-01 2022-07-21 2022-07-11
1 2022-07-17 2022-07-16 2022-07-21
2 2022-07-16 2022-07-01 2022-07-09
3 2022-07-19 2022-07-31 2022-07-27
A:
You can use pd.to_datetime to convert strings to datetime objects. The function takes a format argument which specifies the format of the datetime string, using the usual format codes
df['INITIATION DATE'] = pd.to_datetime(df['INITIATION DATE'], format='%d-%B-%Y').dt.strftime('%d %B %Y')
df['DATE CUT'] = pd.to_datetime(df['DATE CUT'], format='%d %B %Y').dt.strftime('%d %B %Y')
df['DATE GIVEN'] = pd.to_datetime(df['DATE GIVEN'], format='%d/%B/%Y').dt.strftime('%d %B %Y')
output
INITIATION DATE DATE CUT DATE GIVEN
0 01 July 2022 21 July 2022 11 July 2022
1 17 July 2022 16 July 2022 21 July 2022
2 16 July 2022 01 July 2022 09 July 2022
3 19 July 2022 31 July 2022 27 July 2022
You get that error because your datetime strings (e.g. '18 July 2022') do not match the format specifier '%d-%b-%Y': the specifier expects hyphens as separators (and %b matches an abbreviated month name), while the data uses spaces and full month names, which is why the code above uses %B together with the matching separators.
| cleaning date columns in python | Kindly assist me in cleaning my date types in python.
My sample data is as follows:
INITIATION DATE
DATE CUT
DATE GIVEN
1/July/2022
21 July 2022
11-July-2022
17-July-2022
16/July/2022
21/July/2022
16-July-2022
01-July-2022
09/July/2022
19-July-2022
31 July 2022
27 July 2022
How do I remove all dashes/slashes/hyphens from dates in the different columns? I have 8 columns and 300 rows.
What i tried:
df[['INITIATION DATE', 'DATE CUT', 'DATE GIVEN']]= df[['INITIATION DATE', 'DATE CUT', 'DATE GIVEN']].apply(pd.to_datetime, format = '%d%b%Y')
Desired output format for all: 1 July 2022
ValueError I'm getting:
time data '18 July 2022' does not match format '%d-%b-%Y' (match)
| [
"to remove all dashes/slashes/hyphens from strings you can just use replace method:\ndf.apply(lambda x: x.str.replace('[/-]',' ',regex=True))\n\n>>>\n'''\n INITIATION DATE DATE CUT DATE GIVEN\n0 1 July 2022 21 July 2022 11 July 2022\n1 17 July 2022 16 July 2022 21 July 2022\n2 16 July 2022 01 July 2022 09 July 2022\n3 19 July 2022 31 July 2022 27 July 2022\n\nand if you also need to conver strings to datetime then try this:\ndf.apply(lambda x: pd.to_datetime(x.str.replace('[/-]',' ',regex=True)))\n\n>>>\n'''\n INITIATION DATE DATE CUT DATE GIVEN\n0 2022-07-01 2022-07-21 2022-07-11\n1 2022-07-17 2022-07-16 2022-07-21\n2 2022-07-16 2022-07-01 2022-07-09\n3 2022-07-19 2022-07-31 2022-07-27\n\n",
"You can use pd.to_datetime to convert strings to datetime objects. The function takes a format argument which specifies the format of the datetime string, using the usual format codes\ndf['INITIATION DATE'] = pd.to_datetime(df['INITIATION DATE'], format='%d-%B-%Y').dt.strftime('%d %B %Y')\ndf['DATE CUT'] = pd.to_datetime(df['DATE CUT'], format='%d %B %Y').dt.strftime('%d %B %Y')\ndf['DATE GIVEN'] = pd.to_datetime(df['DATE GIVEN'], format='%d/%B/%Y').dt.strftime('%d %B %Y')\n\noutput\nINITIATION DATE DATE CUT DATE GIVEN\n0 01 July 2022 21 July 2022 11 July 2022\n1 17 July 2022 16 July 2022 21 July 2022\n2 16 July 2022 01 July 2022 09 July 2022\n3 19 July 2022 31 July 2022 27 July 2022\n\nYou get that error because your datetime strings (e.g. '18 July 2022') do not match your format specifiers ('%d-%b-%Y') because of the extra hyphens in the format specifier.\n"
] | [
1,
0
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074647975_python_python_3.x.txt |
Q:
How to get my program to read a text file properly?
Basically, I'm trying to get my program to read and display a list of grades from a file in the same folder, called grades.txt. The issue is that, for some reason, I keep getting an IndexError no matter what I do. I've tried moving the text file to a different directory, changing the parameters of what the program is supposed to read, and editing the process for looking at the file, but NOTHING is working! I keep getting the IndexError every time! What am I doing wrong here?! Any help would be very appreciated, as I am pulling my hair out!
END_OF_LINE = ""
EMPTY = ""
NEWLINE = "\n"
SPACE = " "
def display(grades, numRows, numColumns):
currentRow = 0
currentColumn = 0
total = 0
average = 0
while (currentRow < numRows):
total = 0
average = 0
currentColumn = 0
while (currentColumn < numColumns):
print("%s" % grades[currentRow][currentColumn], end=SPACE)
total = total + int(grades[currentRow][currentColumn])
currentColumn = currentColumn + 1
average = total / numColumns
print("\tAverage=%.2f" % (average))
currentRow = currentRow + 1
def fileRead():
fileOK = False
grades = []
maxColumns = 0
while (fileOK == False):
try:
filename = input("Name of input file: ")
inputfile = open(filename, "r")
fileOK = True
aLine = inputfile.readline()
if (aLine == EMPTY):
print("%s is an empty file")
else:
aLine = inputfile.readline()
currentRow = 0
while (aLine != END_OF_LINE):
currentCharacter = 0
grades.append([])
while (aLine[currentCharacter] != NEWLINE):
# Only add grades not spaces to the list of grade points
if (aLine[currentCharacter] != SPACE):
grades[currentRow].append(aLine[currentCharacter])
currentCharacter = currentCharacter + 1
currentRow = currentRow + 1
aLine = inputfile.readline()
inputfile.close()
except IOError:
print("Problem reading from file %s" % (filename))
fileOK = False
numRows = currentRow
numColumns = len(grades[0])
return (grades, numRows, numColumns)
def start():
grades, numRows, numColumns = fileRead()
display(grades, numRows, numColumns)
start()
Error:
Name of input file: grades.txt
Traceback (most recent call last):
File "C:", line 68, in <module>
start()
File "C:", line 64, in start
grades, numRows, numColumns = fileRead()
File "C:", line 47, in fileRead
while (aLine[currentCharacter] != NEWLINE):
IndexError: string index out of range
Grades.txt
Spring 2020 Grades for CPSC 217 Lecture
4 4 3 2 0
4 4 4 4 3
A:
The fileRead function can be shortened a lot, as you can use "normal" string manipulation to do everything you're doing with your custom logic.
And just to point out the problem with your current logic: the file you're reading does not end with a newline character (which is very common), so scanning for NEWLINE with aLine[currentCharacter] runs past the end of the last line and raises the IndexError.
def fileRead(filename):
with open(filename) as infile:
# read the whole file and split on newline characters
data = infile.read().splitlines()
if len(data) > 1:
result = []
for line in data[1:]:
result.append(line.split())
if len(result) > 0:
return result
print("invalid file")
The same goes for the other function. You're basically trying to re-invent the wheel, while Python has a lot of standard string/list/math tools that make it a lot easier.
def display(grades):
for line in grades:
print(" ".join(line))
average = sum([int(grade) for grade in line]) / len(line)
print(f"\tAverage={average:.2f}")
EDIT
To tie it into your logic, change your start function into the more Pythonic dunder-main idiom.
if __name__ == "__main__":
filename = input("please provide a file name:\n")
grades = fileRead(filename)
display(grades)
I would read up on what the so-called dunder main does, e.g. here
| How to get my program to read a text file properly? | Basically, I'm trying to get my program to read and display a list of grades from a file in the same folder called (grades.txt). Issue is, is the for some reason I keep getting an Index error no matter what I do. I've tried moving the text file in a different directory, I tried changing the parameters of what the program is supposed to read, I tried editing the process for looking at the file too; but NOTHING is working! I keep getting the Index Error again! What am I doing wrong here?! Any help would be very appreciated as I am pulling my hair out!
END_OF_LINE = ""
EMPTY = ""
NEWLINE = "\n"
SPACE = " "
def display(grades, numRows, numColumns):
currentRow = 0
currentColumn = 0
total = 0
average = 0
while (currentRow < numRows):
total = 0
average = 0
currentColumn = 0
while (currentColumn < numColumns):
print("%s" % grades[currentRow][currentColumn], end=SPACE)
total = total + int(grades[currentRow][currentColumn])
currentColumn = currentColumn + 1
average = total / numColumns
print("\tAverage=%.2f" % (average))
currentRow = currentRow + 1
def fileRead():
fileOK = False
grades = []
maxColumns = 0
while (fileOK == False):
try:
filename = input("Name of input file: ")
inputfile = open(filename, "r")
fileOK = True
aLine = inputfile.readline()
if (aLine == EMPTY):
print("%s is an empty file")
else:
aLine = inputfile.readline()
currentRow = 0
while (aLine != END_OF_LINE):
currentCharacter = 0
grades.append([])
while (aLine[currentCharacter] != NEWLINE):
# Only add grades not spaces to the list of grade points
if (aLine[currentCharacter] != SPACE):
grades[currentRow].append(aLine[currentCharacter])
currentCharacter = currentCharacter + 1
currentRow = currentRow + 1
aLine = inputfile.readline()
inputfile.close()
except IOError:
print("Problem reading from file %s" % (filename))
fileOK = False
numRows = currentRow
numColumns = len(grades[0])
return (grades, numRows, numColumns)
def start():
grades, numRows, numColumns = fileRead()
display(grades, numRows, numColumns)
start()
Error:
Name of input file: grades.txt
Traceback (most recent call last):
File "C:", line 68, in <module>
start()
File "C:", line 64, in start
grades, numRows, numColumns = fileRead()
File "C:", line 47, in fileRead
while (aLine[currentCharacter] != NEWLINE):
IndexError: string index out of range
Grades.txt
Spring 2020 Grades for CPSC 217 Lecture
4 4 3 2 0
4 4 4 4 3
| [
"the fileread function can be shortened a lot, as you can use \"normal\" string manipulation to do everything you're doing with your custom logic.\nand just to point out what is the problem with your current logic, the file you're reading does not end with a newline character, which is very common in files.\ndef fileRead(filename):\n with open(filename) as infile:\n # read the whole file and split on newline characters\n data = infile.read().splitlines()\n if len(data) > 1:\n result = []\n for line in data[1:]:\n result.append(line.split())\n if len(result) > 0:\n return result\n print(\"invalid file\")\n\nThe same goes for the other function. You're basically trying to re-invent the wheel while python has a lot of standard string/list/math things that make it a lot easier\ndef display(grades):\n for line in grades:\n print(\" \".join(line))\n average = sum([int(grade) for grade in line]) / len(line)\n print(f\"\\tAverage={average:.2f}\")\n\nEDIT\nto tie it into your logic, change your start function into the more pythonic dunder main magic.\nif __name__ == \"__main__\":\n filename = input(\"please provide a file name:\\n\")\n grades = fileRead(filename)\n display(grades)\n\nI would read up on what the so-called dunder main does, fe here\n"
] | [
0
] | [] | [] | [
"function",
"list",
"python",
"python_3.x",
"variables"
] | stackoverflow_0074653120_function_list_python_python_3.x_variables.txt |
Q:
Find out how much of a rectangle is filled with a color using OpenCV
I am using a webcam to segment a green piece of paper. I have tried different results using inRange and thresholding but have gotten a pretty good result so far.
I now have a rectangle in the middle of the screen which I want to check how much of it is filled with that green color because the camera will be moving, the rectangle will also be moving. And if 80% of the rectangle is filled with green color I want to return True.
Here's my code so far
import numpy as np
import cv2
cap = cv2.VideoCapture(1)
while True:
succ,img = cap.read()
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
lab = cv2.GaussianBlur(lab,(5,5),0)
a_channel = lab[:,:,1]
#th = cv2.threshold(a_channel,127,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)[1]
th = cv2.threshold(a_channel, 115, 255, cv2.THRESH_BINARY_INV)[1]
masked = cv2.bitwise_and(img, img, mask = th)
cv2.rectangle(masked,(640,440),(1280,548),(0,255,0),3)
cv2.imshow('mask',masked)
cv2.waitKey(1)
The picture returned is below, you can see the rectangle and the green piece of paper.
I would like to move the piece of paper closer to the rectangle and have it return True if it's almost filled with the green color.
A:
To check if the rectangle is filled with the green color, you can use the following steps:
Create a binary mask using the inRange method, with the range of the green color in the LAB color space. This will create a binary image where the green pixels are white, and the rest are black.
Use the bitwise_and method to apply this binary mask on the original image, to get only the green pixels in the rectangle.
Use the countNonZero method to count the number of white pixels in the rectangle.
Calculate the percentage of white pixels in the rectangle by dividing the number of white pixels by the total number of pixels in the rectangle.
If the percentage is greater than or equal to 80%, return True. Otherwise, return False.
Here is an implementation of these steps:
import numpy as np
import cv2
cap = cv2.VideoCapture(1)
while True:
# Capture the frame from the webcam
succ, img = cap.read()
if not succ:
break
# Convert the image to LAB color space
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
# Apply Gaussian blur to remove noise
lab = cv2.GaussianBlur(lab, (5,5), 0)
# Extract the A channel from the LAB image
a_channel = lab[:,:,1]
# Create a binary mask using the inRange method, with the range of the green color in the LAB color space
green_mask = cv2.inRange(lab, (50, 0, 0), (100, 255, 255))
# Use the bitwise_and method to apply this binary mask on the original image, to get only the green pixels in the rectangle
masked = cv2.bitwise_and(img, img, mask=green_mask)
# Draw the rectangle on the masked image
cv2.rectangle(masked, (640, 440), (1280, 548), (0,255,0), 3)
# Count the number of white pixels in the rectangle
white_pixels = cv2.countNonZero(green_mask[440:548, 640:1280])
    # Calculate the fraction of green pixels in the rectangle (0.0 to 1.0)
    green_percentage = white_pixels / ((1280 - 640) * (548 - 440))
# Check if the percentage is greater than or equal to 80%
if green_percentage >= 0.8:
print("The rectangle is filled with green color!")
# Display the masked image
cv2.imshow('mask', masked)
# Wait for 1ms
cv2.waitKey(1)
# Release the webcam
cap.release()
| Find out how much of a rectangle is filled with a color using OpenCV | I am using a webcam to segment a green piece of paper. I have tried different results using inRange and thresholding but have gotten a pretty good result so far.
I now have a rectangle in the middle of the screen which I want to check how much of it is filled with that green color because the camera will be moving, the rectangle will also be moving. And if 80% of the rectangle is filled with green color I want to return True.
Here's my code so far
import numpy as np
import cv2
cap = cv2.VideoCapture(1)
while True:
succ,img = cap.read()
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
lab = cv2.GaussianBlur(lab,(5,5),0)
a_channel = lab[:,:,1]
#th = cv2.threshold(a_channel,127,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)[1]
th = cv2.threshold(a_channel, 115, 255, cv2.THRESH_BINARY_INV)[1]
masked = cv2.bitwise_and(img, img, mask = th)
cv2.rectangle(masked,(640,440),(1280,548),(0,255,0),3)
cv2.imshow('mask',masked)
cv2.waitKey(1)
The picture returned is below, you can see the rectangle and the green piece of paper.
I would like to move the piece of paper closer to the rectangle and have it return True if its almost filled with the green color
| [
"To check if the rectangle is filled with the green color, you can use the following steps:\n\nCreate a binary mask using the inRange method, with the range of the green color in the LAB color space. This will create a binary image where the green pixels are white, and the rest are black.\n\nUse the bitwise_and method to apply this binary mask on the original image, to get only the green pixels in the rectangle.\n\nUse the countNonZero method to count the number of white pixels in the rectangle.\n\nCalculate the percentage of white pixels in the rectangle by dividing the number of white pixels by the total number of pixels in the rectangle.\n\nIf the percentage is greater than or equal to 80%, return True. Otherwise, return False.\n\n\nHere is an implementation of these steps:\nimport numpy as np\nimport cv2\n\ncap = cv2.VideoCapture(1)\nwhile True:\n\n # Capture the frame from the webcam\n succ, img = cap.read()\n if not succ:\n break\n\n # Convert the image to LAB color space\n lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)\n\n # Apply Gaussian blur to remove noise\n lab = cv2.GaussianBlur(lab, (5,5), 0)\n\n # Extract the A channel from the LAB image\n a_channel = lab[:,:,1]\n\n # Create a binary mask using the inRange method, with the range of the green color in the LAB color space\n green_mask = cv2.inRange(lab, (50, 0, 0), (100, 255, 255))\n\n # Use the bitwise_and method to apply this binary mask on the original image, to get only the green pixels in the rectangle\n masked = cv2.bitwise_and(img, img, mask=green_mask)\n\n # Draw the rectangle on the masked image\n cv2.rectangle(masked, (640, 440), (1280, 548), (0,255,0), 3)\n\n # Count the number of white pixels in the rectangle\n white_pixels = cv2.countNonZero(green_mask[440:548, 640:1280])\n\n # Calculate the percentage of white pixels in the rectangle\n green_percentage = white_pixels / (1280 - 640) * (548 - 440)\n\n # Check if the percentage is greater than or equal to 80%\n if green_percentage >= 0.8:\n print(\"The rectangle is filled with green color!\")\n\n # Display the masked image\n cv2.imshow('mask', masked)\n\n # Wait for 1ms\n cv2.waitKey(1)\n\n# Release the webcam\ncap.release()\n\n"
] | [
1
] | [] | [] | [
"opencv",
"python",
"segment",
"threshold"
] | stackoverflow_0074653267_opencv_python_segment_threshold.txt |
Q:
The ScreenManager on my coding doesn't work on my program
I have a problem with my coding. I am trying to make ScreenManager work on my app. I have three button actions related to my project: a QR Code Scanner example on one screen, a checklist on one screen, and a hyperlink action button.
I am now trying to combine all three into a single .py program with a single .kv design file. However, I have encountered a 'NoneType' error in my code. And if I remove the Builder call and reduce the .kv file to a simple GridLayout and button, the code runs, but the window is displayed blank.
The coding is listed below.
Test Integration.py
from kivy.app import App
from kivy.lang import Builder
from kivy.uix.screenmanager import ScreenManager,Screen
from kivy.uix.gridlayout import GridLayout
from kivy.uix.label import Label
from kivy.uix.image import Image
from kivy.uix.button import Button
from kivy.uix.textinput import TextInput
class MainMenu(Screen):
def build(self):
self.window = GridLayout()
self.window.cols = 1
self.window.size_hint = (0.6, 0.7)
self.window.pos_hint = {"center_x": 0.5, "center_y": 0.5}
self.window.add_widget(Image(source="Are You Safe.jpeg"))
self.greeting = Label(
text="What's your name?",
font_size=18,
color='#00FFCE'
)
self.window.add_widget(self.greeting)
self.user = TextInput(
multiline=False,
padding_y=(20, 20),
size_hint=(1, 0.5)
)
self.window.add_widget(self.user)
self.button = Button(
text="NEXT",
size_hint=(1, 0.5),
bold=True,
background_color='#00FFCE',
background_normal=""
)
self.button.bind(on_press=self.callback)
self.window.add_widget(self.button)
return self.window
def callback(self, instance):
self.greeting.text = "Hello " + self.user.text + "!"
class FirstScreen(Screen):
def checkbox_click(self, instance, value, safety):
if value == True:
self.ids.output_label.text = f'Well done.'
else:
self.ids.output_label.text = "You are recommended to acquire or purchase the missing safety equipment."
class SecondScreen(Screen):
...
class ThirdScreen(Screen):
...
class WindowManager(ScreenManager):
pass
class MyMainApp(App):
def build(self):
Builder.load_file('Test Integration.kv')
screen_manager = ScreenManager(transition="SlideTransition()")
screen_manager.add_widget(MainMenu(name="MainMenu"))
screen_manager.add_widget(FirstScreen(name="FirstScreen"))
screen_manager.add_widget(SecondScreen(name="SecondScreen"))
return screen_manager
if __name__ == '__main__':
MyMainApp().run()
Test Integration.kv
LogInMenu:
<LogInMenu>:
GridLayout:
cols: 1
Label:
text: "Are You Safe? \[A project by students of Temasek Polytechnic.\]"
Button:
text: "Mandatory Equipment Checklist."
on_press:
root.manager.current = "FirstScreen"
root.manager.transition = "right"
Button:
text: "QR Code Scanner"
on_press:
root.manager.current = "SecondScreen"
root.manager.transition = "right"
Button:
text: "Link to Our Survey."
on_press:
root.manager.current = "ThirdScreen"
root.manager.transition = "right"
<FirstScreen>:
GridLayout:
cols: 1
Label:
text: "A Mandatory Safety Checklist."
BoxLayout:
orientation: "vertical"
size: root.width, root.height
Label:
text: "Checklist: Do you acquired all the necessary safety equipment?"
font_size: 24
GridLayout:
cols:2
Label:
text: "Do you have your safety gloves with you?"
font_size:16
CheckBox:
on_active: root.checkbox_click(self, self.active, "Do you have your safety gloves with you?")
Label:
text: "Do you have your safety boots with you?"
font_size:16
CheckBox:
on_active: root.checkbox_click(self, self.active, "Do you have your safety boots with you?")
Label:
text: "Do you have your safety goggles with you?"
font_size:16
CheckBox:
on_active: root.checkbox_click(self, self.active, "Do you have your safety goggles with you?")
Label:
text: "Do you have your helmet with you?"
font_size:16
CheckBox:
on_active: root.checkbox_click(self, self.active, "Do you have your helmet with you?")
Label:
id: output_label
text: "You are recommended to acquire or purchase the missing safety equipment."
Button:
text: "Return to Main Menu."
on_press:
root.manager.current = "LogInMenu"
root.manager.transition = "left"
<SecondScreen>:
GridLayout:
cols: 1
Label:
text: "QR Code Scanner."
Button:
text: "Return to Main Menu."
on_press:
root.manager.current = "LogInMenu"
root.manager.transition = "left"
<ThirdScreen>:
GridLayout:
cols: 1
Label:
text: "A Link to Our Survey."
Button:
text:"Survey on feedback."
on_release:
# importing webbrowser module
import webbrowser
# it will open google window in your browser
webbrowser.open('https://forms.gle/k6cfYEU1snzpykxY9')
Button:
text: "Return to Main Menu."
on_press:
root.manager.current = "LogInMenu"
root.manager.transition = "right"
<WindowManager>:
LogInMenu:
FirstScreen:
SecondScreen:
ThirdScreen:
These are the errors I encountered:
Result of Coding:
Traceback (most recent call last):
File "C:\Users\nicho\PycharmProjects\Temasek Polytechnic MP (Software App)\Test Integration.py", line 76, in <module>
MyMainApp().run()
File "C:\Users\nicho\PycharmProjects\Temasek Polytechnic MP (Software App)\venv\lib\site-packages\kivy\app.py", line 954, in run
self._run_prepare()
File "C:\Users\nicho\PycharmProjects\Temasek Polytechnic MP (Software App)\venv\lib\site-packages\kivy\app.py", line 924, in _run_prepare
root = self.build()
File "C:\Users\nicho\PycharmProjects\Temasek Polytechnic MP (Software App)\Test Integration.py", line 68, in build
Builder.load_file('Test Integration.kv')
File "C:\Users\nicho\PycharmProjects\Temasek Polytechnic MP (Software App)\venv\lib\site-packages\kivy\lang\builder.py", line 305, in load_file
return self.load_string(data, **kwargs)
File "C:\Users\nicho\PycharmProjects\Temasek Polytechnic MP (Software App)\venv\lib\site-packages\kivy\lang\builder.py", line 372, in load_string
parser = Parser(content=string, filename=fn)
File "C:\Users\nicho\PycharmProjects\Temasek Polytechnic MP (Software App)\venv\lib\site-packages\kivy\lang\parser.py", line 483, in __init__
self.parse(content)
File "C:\Users\nicho\PycharmProjects\Temasek Polytechnic MP (Software App)\venv\lib\site-packages\kivy\lang\parser.py", line 593, in parse
objects, remaining_lines = self.parse_level(0, lines)
File "C:\Users\nicho\PycharmProjects\Temasek Polytechnic MP (Software App)\venv\lib\site-packages\kivy\lang\parser.py", line 696, in parse_level
_objects, _lines = self.parse_level(
File "C:\Users\nicho\PycharmProjects\Temasek Polytechnic MP (Software App)\venv\lib\site-packages\kivy\lang\parser.py", line 756, in parse_level
if current_property[:3] == 'on_':
TypeError: 'NoneType' object is not subscriptable
As you can see, I don't know how to fix the 'NoneType' object TypeError. And as mentioned earlier in the post, when I changed the .kv file's commands to a simple GridLayout and Button commands on the <FirstScreen>, <SecondScreen>, and <ThirdScreen>, and remove the Builder.load_file(Test Integration.kv) from the Test Integration.py file, the coding works, but nothing is displayed. This coding is based on my final year project.
A:
This is where your problem is:
screen_manager = ScreenManager(transition="SlideTransition()")
You already created a WindowManager class which inherits from ScreenManager, so you need to use WindowManager() instead of ScreenManager(). Also, the quotation marks must be removed: SlideTransition() should be passed as an object, not as a string.
Like this
screen_manager = WindowManager(transition=SlideTransition())
Lastly, there is a problem with how you set the transition direction.
Instead of
root.manager.transition = "left"
It must be like this
root.manager.transition.direction = "left"
This tutorial might serve as a reminder https://youtu.be/H3U29kXJlBk
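Putting those suggestions together, the app's build method might look roughly like this (a sketch that only consolidates the fixes above; the question's classes are assumed unchanged):
from kivy.app import App
from kivy.lang import Builder
from kivy.uix.screenmanager import ScreenManager, Screen, SlideTransition

# MainMenu, FirstScreen, SecondScreen and WindowManager are the classes
# already defined in the question's Test Integration.py.

class MyMainApp(App):
    def build(self):
        Builder.load_file('Test Integration.kv')
        screen_manager = WindowManager(transition=SlideTransition())
        screen_manager.add_widget(MainMenu(name="MainMenu"))
        screen_manager.add_widget(FirstScreen(name="FirstScreen"))
        screen_manager.add_widget(SecondScreen(name="SecondScreen"))
        return screen_manager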
| The ScreenManager on my coding doesn't work on my program | I have a problem with my coding. I am trying to make ScreenManager work on my app. I have three button actions related to my project: a QR Code Scanner example on one screen, a checklist on one screen, and a hyperlink action button.
I am now trying to combine all three into a single .py program, with a single .kv design file. However, I have encountered a 'NoneType' error on my coding. And if I remove the Builder file, and edit the .kv file to a simple GridLayout and button, the coding works, but it will be displayed blank.
The coding is listed below.
Test Integration.py
from kivy.app import App
from kivy.lang import Builder
from kivy.uix.screenmanager import ScreenManager,Screen
from kivy.uix.gridlayout import GridLayout
from kivy.uix.label import Label
from kivy.uix.image import Image
from kivy.uix.button import Button
from kivy.uix.textinput import TextInput
class MainMenu(Screen):
def build(self):
self.window = GridLayout()
self.window.cols = 1
self.window.size_hint = (0.6, 0.7)
self.window.pos_hint = {"center_x": 0.5, "center_y": 0.5}
self.window.add_widget(Image(source="Are You Safe.jpeg"))
self.greeting = Label(
text="What's your name?",
font_size=18,
color='#00FFCE'
)
self.window.add_widget(self.greeting)
self.user = TextInput(
multiline=False,
padding_y=(20, 20),
size_hint=(1, 0.5)
)
self.window.add_widget(self.user)
self.button = Button(
text="NEXT",
size_hint=(1, 0.5),
bold=True,
background_color='#00FFCE',
background_normal=""
)
self.button.bind(on_press=self.callback)
self.window.add_widget(self.button)
return self.window
def callback(self, instance):
self.greeting.text = "Hello " + self.user.text + "!"
class FirstScreen(Screen):
def checkbox_click(self, instance, value, safety):
if value == True:
self.ids.output_label.text = f'Well done.'
else:
self.ids.output_label.text = "You are recommended to acquire or purchase the missing safety equipment."
class SecondScreen(Screen):
...
class ThirdScreen(Screen):
...
class WindowManager(ScreenManager):
pass
class MyMainApp(App):
def build(self):
Builder.load_file('Test Integration.kv')
screen_manager = ScreenManager(transition="SlideTransition()")
screen_manager.add_widget(MainMenu(name="MainMenu"))
screen_manager.add_widget(FirstScreen(name="FirstScreen"))
screen_manager.add_widget(SecondScreen(name="SecondScreen"))
return screen_manager
if __name__ == '__main__':
MyMainApp().run()
Test Integration.kv
LogInMenu:
<LogInMenu>:
GridLayout:
cols: 1
Label:
text: "Are You Safe? \[A project by students of Temasek Polytechnic.\]"
Button:
text: "Mandatory Equipment Checklist."
on_press:
root.manager.current = "FirstScreen"
root.manager.transition = "right"
Button:
text: "QR Code Scanner"
on_press:
root.manager.current = "SecondScreen"
root.manager.transition = "right"
Button:
text: "Link to Our Survey."
on_press:
root.manager.current = "ThirdScreen"
root.manager.transition = "right"
<FirstScreen>:
GridLayout:
cols: 1
Label:
text: "A Mandatory Safety Checklist."
BoxLayout:
orientation: "vertical"
size: root.width, root.height
Label:
text: "Checklist: Do you acquired all the necessary safety equipment?"
font_size: 24
GridLayout:
cols:2
Label:
text: "Do you have your safety gloves with you?"
font_size:16
CheckBox:
on_active: root.checkbox_click(self, self.active, "Do you have your safety gloves with you?")
Label:
text: "Do you have your safety boots with you?"
font_size:16
CheckBox:
on_active: root.checkbox_click(self, self.active, "Do you have your safety boots with you?")
Label:
text: "Do you have your safety goggles with you?"
font_size:16
CheckBox:
on_active: root.checkbox_click(self, self.active, "Do you have your safety goggles with you?")
Label:
text: "Do you have your helmet with you?"
font_size:16
CheckBox:
on_active: root.checkbox_click(self, self.active, "Do you have your helmet with you?")
Label:
id: output_label
text: "You are recommended to acquire or purchase the missing safety equipment."
Button:
text: "Return to Main Menu."
on_press:
root.manager.current = "LogInMenu"
root.manager.transition = "left"
<SecondScreen>:
GridLayout:
cols: 1
Label:
text: "QR Code Scanner."
Button:
text: "Return to Main Menu."
on_press:
root.manager.current = "LogInMenu"
root.manager.transition = "left"
<ThirdScreen>:
GridLayout:
cols: 1
Label:
text: "A Link to Our Survey."
Button:
text:"Survey on feedback."
on_release:
# importing webbrowser module
import webbrowser
# it will open google window in your browser
webbrowser.open('https://forms.gle/k6cfYEU1snzpykxY9')
Button:
text: "Return to Main Menu."
on_press:
root.manager.current = "LogInMenu"
root.manager.transition = "right"
<WindowManager>:
LogInMenu:
FirstScreen:
SecondScreen:
ThirdScreen:
These are the errors I encountered:
Result of Coding:
Traceback (most recent call last):
File "C:\Users\nicho\PycharmProjects\Temasek Polytechnic MP (Software App)\Test Integration.py", line 76, in <module>
MyMainApp().run()
File "C:\Users\nicho\PycharmProjects\Temasek Polytechnic MP (Software App)\venv\lib\site-packages\kivy\app.py", line 954, in run
self._run_prepare()
File "C:\Users\nicho\PycharmProjects\Temasek Polytechnic MP (Software App)\venv\lib\site-packages\kivy\app.py", line 924, in _run_prepare
root = self.build()
File "C:\Users\nicho\PycharmProjects\Temasek Polytechnic MP (Software App)\Test Integration.py", line 68, in build
Builder.load_file('Test Integration.kv')
File "C:\Users\nicho\PycharmProjects\Temasek Polytechnic MP (Software App)\venv\lib\site-packages\kivy\lang\builder.py", line 305, in load_file
return self.load_string(data, **kwargs)
File "C:\Users\nicho\PycharmProjects\Temasek Polytechnic MP (Software App)\venv\lib\site-packages\kivy\lang\builder.py", line 372, in load_string
parser = Parser(content=string, filename=fn)
File "C:\Users\nicho\PycharmProjects\Temasek Polytechnic MP (Software App)\venv\lib\site-packages\kivy\lang\parser.py", line 483, in __init__
self.parse(content)
File "C:\Users\nicho\PycharmProjects\Temasek Polytechnic MP (Software App)\venv\lib\site-packages\kivy\lang\parser.py", line 593, in parse
objects, remaining_lines = self.parse_level(0, lines)
File "C:\Users\nicho\PycharmProjects\Temasek Polytechnic MP (Software App)\venv\lib\site-packages\kivy\lang\parser.py", line 696, in parse_level
_objects, _lines = self.parse_level(
File "C:\Users\nicho\PycharmProjects\Temasek Polytechnic MP (Software App)\venv\lib\site-packages\kivy\lang\parser.py", line 756, in parse_level
if current_property[:3] == 'on_':
TypeError: 'NoneType' object is not subscriptable
As you can see, I don't know how to fix the 'NoneType' object TypeError. And as mentioned earlier in the post, when I changed the .kv file's commands to a simple GridLayout and Button commands on the <FirstScreen>, <SecondScreen>, and <ThirdScreen>, and remove the Builder.load_file(Test Integration.kv) from the Test Integration.py file, the coding works, but nothing is displayed. This coding is based on my final year project.
| [
"This is were your problem is\nscreen_manager = ScreenManager(transition=\"SlideTransition()\")\n\nYou already created a WindowManager class which inherits from ScreenManager so you need to use WindowManager() instead of ScreenManager(). Also the qoutation marks must be removed\nLike this\nscreen_manager = WindowManager(transition=SlideTransition())\n\nLastly to use there is a probel with how your transition code.\nInsted of\nroot.manager.transition = \"left\"\n\nIt must be like this\nroot.manager.transition.direction = \"left\"\n\nThis tutorial might serve as a reminder https://youtu.be/H3U29kXJlBk\n"
] | [
0
] | [] | [] | [
"kivy",
"pycharm",
"python"
] | stackoverflow_0074651294_kivy_pycharm_python.txt |
Q:
Finding anagrams of a specific word in a list
So, the problem is:
Given an array of m words and 1 other word, find all anagrams of that word in the array and print them.
Do y’all have any faster algorithm?:)
I've successfully coded this one, but it seems rather slow (I've been using sorted() with a for loop, plus checking the length first). Found anagrams were added to a new array, and then the list of anagrams is printed with another for loop.
A:
I think that counting characters and comparing the counts will be faster, but I'm not sure; just check it ;)
defaultdict will be helpful.
from collections import defaultdict as dd
def char_counter(word: str) -> dict:
result = dd(int)
for c in word:
result[c]+=1
return result
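To extend that idea into a full check, here is a sketch using collections.Counter (which behaves like the defaultdict counter above); the word list is only illustrative:
from collections import Counter

def find_anagrams(words, target):
    target_counts = Counter(target)
    n = len(target)
    # Comparing character counts avoids sorting every candidate word.
    return [w for w in words if len(w) == n and Counter(w) == target_counts]

print(find_anagrams(["listen", "google", "inlets", "silent"], "enlist"))
# ['listen', 'inlets', 'silent']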
| Finding anagrams of a specific word in a list | So, the problem is:
Given an array of m words and 1 other word, find all anagrams of that word in the array and print them.
Do y’all have any faster algorithm?:)
I’ve succesfully coded this one, but it seems rather slow ( i’ve been using sorted() with a for loop + checking the length before). Found anagrams were added to a new array. Then printing the list of anagrams with a for loop again.
| [
"I think that counting characters and comparing it will be faster but im not sure. Just check it ;)\ndefaultdict will be helpful.\nfrom collections import defaultdict as dd\n\ndef char_counter(word: str)-> dict\n result = dd(int)\n for c in word:\n result[c]+=1\n return result\n\n"
] | [
0
] | [] | [] | [
"anagram",
"python"
] | stackoverflow_0074653326_anagram_python.txt |
Q:
How can you change the precision of the percentage field in tqdm?
I have a very large iterable which means a lot of iterations must pass before the bar updates by 1%. It populates a sqlite database from legacy excel sheets.
Minimum reproducible example is something like this.
from tqdm import tqdm, trange
import time
percentage = 0
total = 157834
l_bar = '{desc}: {percentage:.3f}%|'
r_bar = '| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, ' '{rate_fmt}{postfix}]'
format = '{l_bar}{bar}{r_bar}'
for row in tqdm(range(2, total), ncols=100, bar_format=format):
percentage = row/total * 100
time.sleep(0.1)
In this example I have left all these strings as their default values except for trying to modify the percentage field in l_bar in an attempt to get decimals of a percent to print. And I haven't been able to find a default definition of bar anywhere in the docs so this implementation causes the loading bar to stop working.
From the documentation:
bar_format : str, optional
Specify a custom bar string formatting. May impact performance. [default: '{l_bar}{bar}{r_bar}'], where l_bar='{desc}: {percentage:3.0f}%|' and r_bar='| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, ' '{rate_fmt}{postfix}]' Possible vars: l_bar, bar, r_bar, n, n_fmt, total, total_fmt, percentage, elapsed, elapsed_s, ncols, nrows, desc, unit, rate, rate_fmt, rate_noinv, rate_noinv_fmt, rate_inv, rate_inv_fmt, postfix, unit_divisor, remaining, remaining_s, eta. Note that a trailing ": " is automatically removed after {desc} if the latter is empty.
However I try it seems to come out as a flat 0% and then a jump to 1% every time.
How am I misunderstanding the documentation here?
A:
bar_format doesn't work like that - it's not going to look up values of l_bar or r_bar that you define in your own code. All format specifiers will be filled in with values provided on tqdm's end.
Use a single layer of formatting, based on the variables tqdm provides:
for row in tqdm(whatever, bar_format='{desc}: {percentage:3.2f}%|{bar}{r_bar}'):
...
A:
Try it in this way,
from tqdm import tqdm, trange
import time
total = 1000
class TqdmExtraFormat(tqdm):
@property
def format_dict(self):
d = super(TqdmExtraFormat, self).format_dict
decimalpercentage = '{:.2f}'.format(d["rate"]/100) if d["rate"] else '?'
d.update(percentage = decimalpercentage)
return d
b= '{percentage}%|{bar}{r_bar}'
for i in TqdmExtraFormat(range(2, total), bar_format=b):
time.sleep(0.1)
Even though percentage is listed as a variable in the documentation, I'm not able to find that key in the dictionary, so rate divided by 100 is used here instead. One thing missing is that I can't get the decimals to change to two places even after specifying the .2f format; maybe you could find out more on that. Otherwise, it works like what you requested.
Output:
| How can you change the precision of the percentage field in tqdm? | I have a very large iterable which means a lot of iterations must pass before the bar updates by 1%. It populates a sqlite database from legacy excel sheets.
Minimum reproducible example is something like this.
from tqdm import tqdm, trange
import time
percentage = 0
total = 157834
l_bar = '{desc}: {percentage:.3f}%|'
r_bar = '| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, ' '{rate_fmt}{postfix}]'
format = '{l_bar}{bar}{r_bar}'
for row in tqdm(range(2, total), ncols=100, bar_format=format):
percentage = row/total * 100
time.sleep(0.1)
In this example I have left all these strings as their default values except for trying to modify the percentage field in l_bar in an attempt to get decimals of a percent to print. And I haven't been able to find a default definition of bar anywhere in the docs so this implementation causes the loading bar to stop working.
From the documentation:
bar_format : str, optional
Specify a custom bar string formatting. May impact performance. [default: '{l_bar}{bar}{r_bar}'], where l_bar='{desc}: {percentage:3.0f}%|' and r_bar='| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, ' '{rate_fmt}{postfix}]' Possible vars: l_bar, bar, r_bar, n, n_fmt, total, total_fmt, percentage, elapsed, elapsed_s, ncols, nrows, desc, unit, rate, rate_fmt, rate_noinv, rate_noinv_fmt, rate_inv, rate_inv_fmt, postfix, unit_divisor, remaining, remaining_s, eta. Note that a trailing ": " is automatically removed after {desc} if the latter is empty.
However I try it seems to come out as a flat 0% and then a jump to 1% every time.
How am I misunderstanding the documentation here?
| [
"bar_format doesn't work like that - it's not going to look up values of l_bar or r_bar that you define in your own code. All format specifiers will be filled in with values provided on tqdm's end.\nUse a single layer of formatting, based on the variables tqdm provides:\nfor row in tqdm(whatever, bar_format='{desc}: {percentage:3.2f}%|{bar}{r_bar}'):\n ...\n\n",
"Try it in this way,\nfrom tqdm import tqdm, trange\nimport time\n\ntotal = 1000\n\nclass TqdmExtraFormat(tqdm):\n @property\n def format_dict(self):\n d = super(TqdmExtraFormat, self).format_dict\n decimalpercentage = '{:.2f}'.format(d[\"rate\"]/100) if d[\"rate\"] else '?'\n d.update(percentage = decimalpercentage)\n return d\n\nb= '{percentage}%|{bar}{r_bar}'\nfor i in TqdmExtraFormat(range(2, total), bar_format=b):\n time.sleep(0.1)\n\nBecause in the documentation, eventhough percentage is listed as a variable, I'm not able to find the key in the dictionary. Therefore, we shall utilize rate divided by 100 instead. One thing missing is I can't get the decimals to change to two places even after specifying .2f format. Maybe you could find out more on that. Otherwise, it works like what you requested.\nOutput:\n\n"
] | [
1,
0
] | [] | [] | [
"python",
"python_3.x",
"tqdm"
] | stackoverflow_0074650215_python_python_3.x_tqdm.txt |
Q:
Showing Levels end values on contourf
I'm using a contourf to plot my data (var) and I would like to have 20 levels going from -100 to 100, so this what I did.
plt.contourf(var, levels=np.linspace(-100, 100, 21))
But when I plot it, it misses the end values (-100 and 100); how can I solve this and show these values?
Thanks in advance.
Image
A:
You can have a fine control of the values shown in the colorbar:
import numpy as np
import matplotlib.pyplot as plt
x, y = np.mgrid[-2:2:100j,-2:2:100j]
z = 100 * np.cos(x**2 + y**2)
fig, ax = plt.subplots()
c = ax.contourf(x, y, z, levels=np.linspace(-100, 100, 21))
cb = fig.colorbar(c)
ticks = np.linspace(-100, 100, 9)
# if you change ticks, you want to change the following as well
labels = [str(int(t)) for t in ticks]
cb.set_ticks(ticks, labels=labels)
| Showing Levels end values on contourf | I'm using a contourf to plot my data (var) and I would like to have 20 levels going from -100 to 100, so this what I did.
plt.contourf(var, levels=np.linspace(-100, 100, 21))
But when I plot it it will miss the end values (-100 and 100), how can I solve it and show these values?
Thanks in advance.
Image
| [
"You can have a fine control of the values shown in the colorbar:\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx, y = np.mgrid[-2:2:100j,-2:2:100j]\nz = 100 * np.cos(x**2 + y**2)\n\nfig, ax = plt.subplots()\nc = ax.contourf(x, y, z, levels=np.linspace(-100, 100, 21))\ncb = fig.colorbar(c)\nticks = np.linspace(-100, 100, 9)\n# if you change ticks, you want to change the following as well\nlabels = [str(int(t)) for t in ticks]\ncb.set_ticks(ticks, labels=labels)\n\n"
] | [
0
] | [] | [] | [
"contourf",
"python"
] | stackoverflow_0074653337_contourf_python.txt |
Q:
Deleting immediately subsequent rows which are the exact same as the previous for specific columns
I have a dataframe similar to the following.
import pandas as pd
data = pd.DataFrame({'ind': [111,222,333,444,555,666,777,888,999,000],
'col1': [1,2,2,2,3,4,5,5,6,7],
'col2': [9,2,2,2,9,9,5,5,9,9],
'col3': [11,2,2,2,11,11,5,5,11,11],
'val': ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']})
There is an index ind, a number of columns col 1, 2 and 3, and some other column with a value val. Within the three columns 1, 2 and 3 there are a number of rows which are exactly the same as the previous row; for instance, the rows with indices 333 and 444 are the same as 222. My actual data set is larger, but what I need to do is delete all rows which have the exact same values as the immediately previous row for a number of columns (col1, col2, col3 here).
This would give me a dataframe like this, with indices 333/444 and 888 removed:
data_clean = pd.DataFrame({'ind': [111,222,555,666,777,999,000],
'col1': [1,2,3,4,5,6,7],
'col2': [9,2,9,9,5,9,9],
'col3': [11,2,11,11,5,11,11],
'val': ['a', 'b', 'e', 'f', 'g', 'i', 'j']})
What is the best way to go about this for a larger dataframe?
A:
You can use shift and any for boolean indexing:
cols = ['col1', 'col2', 'col3']
out = data[data[cols].ne(data[cols].shift()).any(axis=1)]
# DeMorgan's equivalent:
# out = data[~data[cols].eq(data[cols].shift()).all(axis=1)]
Output:
ind col1 col2 col3 val
0 111 1 9 11 a
1 222 2 2 2 b
4 555 3 9 11 e
5 666 4 9 11 f
6 777 5 5 5 g
8 999 6 9 11 i
9 0 7 9 11 j
Intermediates
# shifted dataset
data[cols].shift()
col1 col2 col3
0 NaN NaN NaN
1 1.0 9.0 11.0
2 2.0 2.0 2.0
3 2.0 2.0 2.0
4 2.0 2.0 2.0
5 3.0 9.0 11.0
6 4.0 9.0 11.0
7 5.0 5.0 5.0
8 5.0 5.0 5.0
9 6.0 9.0 11.0
# comparison
data[cols].ne(data[cols].shift())
col1 col2 col3
0 True True True
1 True True True
2 False False False
3 False False False
4 True True True
5 True False False
6 True True True
7 False False False
8 True True True
9 True False False
# aggregation
data[cols].ne(data[cols].shift()).any(axis=1)
0 True
1 True
2 False
3 False
4 True
5 True
6 True
7 False
8 True
9 True
dtype: bool
A:
One way to do this would be to use the shift() method to compare the values in each row with the previous row for the columns you're interested in, and then use the boolean indexing to select only the rows that have different values from the previous row. Here's an example:
# First, let's create a DataFrame with the columns we want to compare
compare_df = data[['col1', 'col2', 'col3']]
# Next, use the shift() method to compare each row with the previous row
comparison_result = compare_df.eq(compare_df.shift())
# Now we can use boolean indexing to select only the rows that have different values
# from the previous row
data_clean = data[~comparison_result.all(axis=1)]
This should give you a DataFrame that only contains rows that are different from the previous row for the specified columns.
Alternatively, you could use the drop_duplicates() method to remove duplicate rows based on the values in the specified columns. This method will remove all rows that have the same values for the specified columns, regardless of whether they are consecutive or not. Here's an example:
# Use the drop_duplicates() method to remove duplicate rows
data_clean = data.drop_duplicates(subset=['col1', 'col2', 'col3'])
For the sample data this happens to give the same result, but note that drop_duplicates removes all duplicate rows for those columns, not just consecutive ones, so it is not equivalent in general; use the shift-based approach if only immediately repeated rows should be removed.
| Deleting immediately subsequent rows which are the exact same as the previous for specific columns | I have a dataframe similar to the following.
import pandas as pd
data = pd.DataFrame({'ind': [111,222,333,444,555,666,777,888,999,000],
'col1': [1,2,2,2,3,4,5,5,6,7],
'col2': [9,2,2,2,9,9,5,5,9,9],
'col3': [11,2,2,2,11,11,5,5,11,11],
'val': ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']})
There is an index ind, a number of columns col 1, 2 and 3, and some other column with a value val. Within the three columns 1, 2 and 3 there are a number of rows which are the exact same as the previous row, for instance row with index 333 and 444 are the same as 222. My actual data set is larger but what I need to do is delete all rows which have the exact same value as the immediate previous row for a number of columns (col1, col2, col3 here).
This would give me a dataframe like this with indeces 333/444 and 888 removed:
data_clean = pd.DataFrame({'ind': [111,222,555,666,777,999,000],
'col1': [1,2,3,4,5,6,7],
'col2': [9,2,9,9,5,9,9],
'col3': [11,2,11,11,5,11,11],
'val': ['a', 'b', 'e', 'f', 'g', 'i', 'j']})
What is the best way to go about this for a larger dataframe?
| [
"You can use shift and any for boolean indexing:\ncols = ['col1', 'col2', 'col3']\nout = data[data[cols].ne(data[cols].shift()).any(axis=1)]\n# DeMorgan's equivalent:\n# out = data[~data[cols].eq(data[cols].shift()).all(axis=1)]\n\nOutput:\n ind col1 col2 col3 val\n0 111 1 9 11 a\n1 222 2 2 2 b\n4 555 3 9 11 e\n5 666 4 9 11 f\n6 777 5 5 5 g\n8 999 6 9 11 i\n9 0 7 9 11 j\n\nIntermediates\n# shifted dataset\ndata[cols].shift()\n\n col1 col2 col3\n0 NaN NaN NaN\n1 1.0 9.0 11.0\n2 2.0 2.0 2.0\n3 2.0 2.0 2.0\n4 2.0 2.0 2.0\n5 3.0 9.0 11.0\n6 4.0 9.0 11.0\n7 5.0 5.0 5.0\n8 5.0 5.0 5.0\n9 6.0 9.0 11.0\n\n# comparison\ndata[cols].ne(data[cols].shift())\n\n col1 col2 col3\n0 True True True\n1 True True True\n2 False False False\n3 False False False\n4 True True True\n5 True False False\n6 True True True\n7 False False False\n8 True True True\n9 True False False\n\n# aggregation\ndata[cols].ne(data[cols].shift()).any(axis=1)\n\n0 True\n1 True\n2 False\n3 False\n4 True\n5 True\n6 True\n7 False\n8 True\n9 True\ndtype: bool\n\n",
"One way to do this would be to use the shift() method to compare the values in each row with the previous row for the columns you're interested in, and then use the boolean indexing to select only the rows that have different values from the previous row. Here's an example:\n# First, let's create a DataFrame with the columns we want to compare\ncompare_df = data[['col1', 'col2', 'col3']]\n\n# Next, use the shift() method to compare each row with the previous row\ncomparison_result = compare_df.eq(compare_df.shift())\n\n# Now we can use boolean indexing to select only the rows that have different values\n# from the previous row\ndata_clean = data[~comparison_result.all(axis=1)]\n\nThis should give you a DataFrame that only contains rows that are different from the previous row for the specified columns.\nAlternatively, you could use the drop_duplicates() method to remove duplicate rows based on the values in the specified columns. This method will remove all rows that have the same values for the specified columns, regardless of whether they are consecutive or not. Here's an example:\n# Use the drop_duplicates() method to remove duplicate rows\ndata_clean = data.drop_duplicates(subset=['col1', 'col2', 'col3'])\n\nThis should give you the same result as the previous approach, but in a more concise and straightforward way.\n"
] | [
1,
1
] | [] | [] | [
"dataframe",
"duplicates",
"pandas",
"python"
] | stackoverflow_0074653428_dataframe_duplicates_pandas_python.txt |
Q:
Three lists within a tuple. How to make a dictionary?
I am trying to turn a list of tuples into a dictionary, but I keep on getting the same error: "'unhashable type: 'list'". I believe this might be the case due to having lists within the tuple itself.
An example of how the list looks now:
[([183, 'receiver', 'A', '-', '67', '-', 'Amsterdam'],
[31, 'donor', '-', 'O', '-', '62', 'Paris'],
['Amsterdam', 'Paris', 267])]
What I want is:
([183, 'receiver', 'A', '-', '67', '-', 'Amsterdam'],
[31, 'donor', '-', 'O', '-', '62', 'Paris'],
['Amsterdam', 'Paris', 267]): 0
So, I want to map each tuple to the value 0.
Code:
result = [([183, 'receiver', 'A', '-', '67', '-', 'Amsterdam'],
[31, 'donor', '-', 'O', '-', '62', 'Paris'],
['Amsterdam', 'Paris', 267])]
result_dict = {res: 0 for res in result}
Full traceback:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-37-cdd1628d9f41> in <module>
----> 1 result_dict = {res: 0 for res in result}
<ipython-input-37-cdd1628d9f41> in <dictcomp>(.0)
----> 1 result_dict = {res: 0 for res in result}
TypeError: unhashable type: 'list'
A:
As @ShadowRanger already mentioned, a mutable object cannot be a key of a dictionary. Python uses hashes to manage dictionary keys efficiently, and a list allows in-place modifications that would change its hash value. Supporting that would mean every modification of a list (or other mutable object) would have to track whether its hash is in use by some dictionary/set/etc. and possibly recompute the hash tables to reflect the change, which would mean a serious performance loss. Another problem would be conflicts; consider this faulty code:
key = [1]
test = {key: 1, []: 2}
key.pop()
What should the contents of the test dictionary be now?
So for the example given, one option is to convert your lists to tuples like so:
result = [([183, 'receiver', 'A', '-', '67', '-', 'Amsterdam'],
[31, 'donor', '-', 'O', '-', '62', 'Paris'],
['Amsterdam', 'Paris', 267])]
result_dict = {tuple(tuple(t for t in k) for k in p): 0 for p in result}
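A quick usage note: after this conversion the dictionary has to be indexed with the same tuple-of-tuples form, e.g. (illustrative):
key = tuple(tuple(part) for part in result[0])
print(result_dict[key])  # 0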
Still (again, as @ShadowRanger suggested), this looks to me like a serious code smell that I would try to refactor away.
| Three lists within a tuple. How to make a dictionary? | I am trying to turn a list of tuples into a dictionary, but I keep on getting the same error: "'unhashable type: 'list'". I believe this might be the case due to having lists within the tuple itself.
An example of how the list looks now:
[([183, 'receiver', 'A', '-', '67', '-', 'Amsterdam'],
[31, 'donor', '-', 'O', '-', '62', 'Paris'],
['Amsterdam', 'Paris', 267])]
What I want is:
([183, 'receiver', 'A', '-', '67', '-', 'Amsterdam'],
[31, 'donor', '-', 'O', '-', '62', 'Paris'],
['Amsterdam', 'Paris', 267]): 0
So, I want to equal the pair to 0.
Code:
result = [([183, 'receiver', 'A', '-', '67', '-', 'Amsterdam'],
[31, 'donor', '-', 'O', '-', '62', 'Paris'],
['Amsterdam', 'Paris', 267])]
result_dict = {res: 0 for res in result}
Full traceback:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-37-cdd1628d9f41> in <module>
----> 1 result_dict = {res: 0 for res in result}
<ipython-input-37-cdd1628d9f41> in <dictcomp>(.0)
----> 1 result_dict = {res: 0 for res in result}
TypeError: unhashable type: 'list'
| [
"As @ShadowRanger mentioned already, mutable object cannot be an key of dictionary. Python uses hashes to efficiently manage dictionary keys and list allows in place modifications that can change its hash value. Supporting it would mean each list (or other mutable object) modification would require tracking if hash is not used by some dictionary/set/etc and possibly to recompute hashing trees to reflect the change so it means serious performance loss. Another problem would be conflicts, consider this faulty code:\n key = [1]\n test = {key: 1, []: 2}\n key.pop()\n\nWhat should be contents of test dictionary now?happen with test dictionary?\nSo for the example given, one of options is to convert your lists to tuples like so:\nresult = [([183, 'receiver', 'A', '-', '67', '-', 'Amsterdam'],\n [31, 'donor', '-', 'O', '-', '62', 'Paris'],\n ['Amsterdam',\n \n\nSo for the example given, one of options is to convert your lists to tuples like so:\nresult = [([183, 'receiver', 'A', '-', '67', '-', 'Amsterdam'],\n [31, 'donor', '-', 'O', '-', '62', 'Paris'],\n ['Amsterdam', 'Paris', 267])]\nresult_dict = {tuple(tuple(t for t in k) for k in p): 0 for p in result}\n\nStill (again as @ShadowRanger suggested) also for me it looks like serious code smell that I would try to refactor to get rid of.\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074615327_python.txt |
Q:
How to call a python built-in function in c++ with pybind11
I am using pybind11 to call a python built-in function like range in c++ code. But I have only found a way to call a function in a module, like this:
py::object os = py::module::import("os");
py::object makedirs = os.attr("makedirs");
makedirs("/tmp/path/to/somewhere");
But a python built-in function like range needn't import any modules, so how can I use pybind11 to call range in c++ code?
A:
You could fetch range from the globals dict.
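For instance, a minimal sketch of that idea (my own illustration, assuming a running interpreter, e.g. inside a py::scoped_interpreter): py::eval looks a bare name up in the supplied globals dict and falls back to the builtins, so it hands back the built-in range as a py::object.
// Evaluate the bare name "range" against the globals dict; Python's name
// lookup falls through to builtins, so this returns the built-in range.
py::object range_fn = py::eval("range", py::globals());
py::object r = range_fn(0, 10);   // same as Python's range(0, 10)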
A:
You can also import the builtins module which contains all the built-in python functions.
In your case it would be something like:
py::object builtins = py::module_::import("builtins");
py::object range = builtins.attr("range");
range(0, 10);
| How to call a python built-in function in c++ with pybind11 | I am using pybind11 to call a python built-in function like range in c++ code. But I only found way to call a function in module like this:
py::object os = py::module::import("os");
py::object makedirs = os.attr("makedirs");
makedirs("/tmp/path/to/somewhere");
But a python built-in function like range needn't import any modules, so how can I use pybind11 to call range in c++ code?
| [
"You could fetch range from the globals dict.\n",
"You can also import the builtins module which contains all the built-in python functions.\nIn your case it would be something like:\npy::object builtins = py::module_::import(\"builtins\");\npy::object range = builtins.attr(\"range\");\nrange(0, 10);\n\n"
] | [
1,
0
] | [] | [] | [
"c++",
"pybind11",
"python"
] | stackoverflow_0064135495_c++_pybind11_python.txt |
Q:
Is it possible to rename a json key name using its value in Python?
I have this nested json dictionary and I want to rename the key name 'Keys' with its equivalent value using Python. I wonder if this is possible?
Current - 'Keys': ['AWS Backup']
I want it to be - AWS Backup: ['AWS Backup']
Sample json dictionary
{
"TimePeriod": {
"Start": "2022-11-28",
"End": "2022-11-29"
},
"Total": {},
"Groups": [
{
"Keys": [
"AWS Backup"
],
"Metrics": {
"UnblendedCost": {
"Amount": "0.000000111",
"Unit": "USD"
}
}
},
{
"Keys": [
"AWS Direct Connect"
],
"Metrics": {
"UnblendedCost": {
"Amount": "0.0000111",
"Unit": "USD"
}
}
},
{
"Keys": [
"AWS Key Management Service"
],
"Metrics": {
"UnblendedCost": {
"Amount": "0.000000111",
"Unit": "USD"
}
}
}
]
}
I tried to flatten the json, but after doing it still no luck. I'm not sure if I can do it using a pandas dataframe also? I plan to save that json in a csv file as well.
A:
Yes, it is possible to rename a JSON key name using its value in Python. Here is an example of how you can do this using the json module:
import json
# Original JSON dictionary
data = {
'TimePeriod': {'Start': '2022-11-28', 'End': '2022-11-29'},
'Total': {},
'Groups': [
{'Keys': ['AWS Backup'],
'Metrics': {
'UnblendedCost': {
'Amount': '0.000000111',
'Unit': 'USD'
}
}
},
{'Keys': ['AWS Direct Connect'],
'Metrics': {
'UnblendedCost': {
'Amount': '0.0000111',
'Unit': 'USD'
}
}
},
{'Keys': ['AWS Key Management Service'],
'Metrics': {
'UnblendedCost': {
'Amount': '0.000000111',
'Unit': 'USD'
}
}
}
]
}
# Loop through each group and rename the 'Keys' key with its value
for group in data['Groups']:
key = group['Keys'][0]
group[key] = group.pop('Keys')
# Print the updated JSON dictionary
print(json.dumps(data, indent=2))
The output will be:
{
"TimePeriod": {
"Start": "2022-11-28",
"End": "2022-11-29"
},
"Total": {},
"Groups": [
{
"AWS Backup": [
"AWS Backup"
],
"Metrics": {
"UnblendedCost": {
"Amount": "0.000000111",
"Unit": "USD"
}
}
},
{
"AWS Direct Connect": [
"AWS Direct Connect"
],
"Metrics": {
"UnblendedCost": {
"Amount": "0.0000111",
"Unit": "USD"
}
}
},
{
"AWS Key Management Service": [
"AWS Key Management Service"
],
"Metrics": {
"UnblendedCost": {
"Amount": "0.000000111",
"Unit": "USD"
}
}
}
]
}
| Is it possible to rename a json key name using its value in Python? | I have this nested json dictionary and I want to rename the key name 'Keys' with its equivalent value using Python. I wonder if this is possible?
Current - 'Keys': ['AWS Backup']
I want it to be - AWS Backup: ['AWS Backup']
Sample json dictionary
{
"TimePeriod": {
"Start": "2022-11-28",
"End": "2022-11-29"
},
"Total": {},
"Groups": [
{
"Keys": [
"AWS Backup"
],
"Metrics": {
"UnblendedCost": {
"Amount": "0.000000111",
"Unit": "USD"
}
}
},
{
"Keys": [
"AWS Direct Connect"
],
"Metrics": {
"UnblendedCost": {
"Amount": "0.0000111",
"Unit": "USD"
}
}
},
{
"Keys": [
"AWS Key Management Service"
],
"Metrics": {
"UnblendedCost": {
"Amount": "0.000000111",
"Unit": "USD"
}
}
}
]
}
Tried do flatten the json but after doing it still no luck. I'm not sure if I can do it using pandas dataframe also? Plan to save that json also in a csv file.
| [
"Yes, it is possible to rename a JSON key name using its value in Python. Here is an example of how you can do this using the json module:\nimport json\n\n# Original JSON dictionary\ndata = {\n 'TimePeriod': {'Start': '2022-11-28', 'End': '2022-11-29'},\n 'Total': {},\n 'Groups': [\n {'Keys': ['AWS Backup'],\n 'Metrics': {\n 'UnblendedCost': {\n 'Amount': '0.000000111',\n 'Unit': 'USD'\n }\n }\n },\n {'Keys': ['AWS Direct Connect'],\n 'Metrics': {\n 'UnblendedCost': {\n 'Amount': '0.0000111',\n 'Unit': 'USD'\n }\n }\n },\n {'Keys': ['AWS Key Management Service'],\n 'Metrics': {\n 'UnblendedCost': {\n 'Amount': '0.000000111',\n 'Unit': 'USD'\n }\n }\n }\n ]\n}\n\n# Loop through each group and rename the 'Keys' key with its value\nfor group in data['Groups']:\n key = group['Keys'][0]\n group[key] = group.pop('Keys')\n\n# Print the updated JSON dictionary\nprint(json.dumps(data, indent=2))\n\nThe output will be:\n{\n \"TimePeriod\": {\n \"Start\": \"2022-11-28\",\n \"End\": \"2022-11-29\"\n },\n \"Total\": {},\n \"Groups\": [\n {\n \"AWS Backup\": [\n \"AWS Backup\"\n ],\n \"Metrics\": {\n \"UnblendedCost\": {\n \"Amount\": \"0.000000111\",\n \"Unit\": \"USD\"\n }\n }\n },\n {\n \"AWS Direct Connect\": [\n \"AWS Direct Connect\"\n ],\n \"Metrics\": {\n \"UnblendedCost\": {\n \"Amount\": \"0.0000111\",\n \"Unit\": \"USD\"\n }\n }\n },\n {\n \"AWS Key Management Service\": [\n \"AWS Key Management Service\"\n ],\n \"Metrics\": {\n \"UnblendedCost\": {\n \"Amount\": \"0.000000111\",\n \"Unit\": \"USD\"\n }\n }\n }\n ]\n}\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"json",
"nested_json",
"pandas",
"python"
] | stackoverflow_0074653523_dataframe_json_nested_json_pandas_python.txt |
Q:
Change Aspect Ratio of Video in Python
I have a video in 16:9 that I would like to be in 9:16. I have tried to use python libraries such as cv2, ffmpeg or MoviePy but some of them did it without the sound and others just compressed the whole video (it did not crop the left and right sides it just made the picture messy).
Is there a way to change the aspect ratio while zooming in so that the new video fills out the whole canvas? And of course while keeping the audio, in python?
A:
I faced a similar problem to you and came up with the following solution using Moviepy. Moviepy will keep the sound.
I'm going to assume your 16:9 videos are 1920w by 1080h and you don't want to resize / compress your video.
This means the maximum dimensions for your new 9:16 video can be 607.5w by 1080h.
607.5 / 1080 = 0.5625 = 9:16
You can't have half a px (607.5) for your new video's width, therefore to keep the ratio of 9:16 the next best dimensions are 576w by 1024h (correct me if I'm wrong).
You can then crop the original video clip with those dimensions.
Here's an example of what it might look like in code:
import moviepy.editor as mp
import moviepy.video.fx.all as vfx
# Create a temp video clip for this example
temp_clip = mp.ColorClip(size=(1920, 1080), color=(0, 0, 255), duration=1)
temp_clip.write_videofile("blue_original_clip.mp4", fps=30)
# This is where you load in your original clip
clip_16_9 = mp.VideoFileClip("blue_original_clip.mp4")
# Now lets crop out a 9:16 section from the original
# x1=0, y1=0 will take the section from the top left corner
clip_9_16 = vfx.crop(clip_16_9, x1=0, y1=0, width=576, height=1024)
clip_9_16.write_videofile("new_clip.mp4")
Hope that helps.
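If you would rather take the 9:16 section from the middle of the frame instead of the top-left corner, vfx.crop also accepts center coordinates. A small sketch reusing the clip and dimensions from above (my own addition, so double-check it against your clip):
# Crop a 576x1024 section centred on the middle of the 1920x1080 frame
clip_9_16_centered = vfx.crop(clip_16_9, x_center=1920 / 2, y_center=1080 / 2,
                              width=576, height=1024)
clip_9_16_centered.write_videofile("new_clip_centered.mp4")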
| Change Aspect Ratio of Video in Python | I have a video in 16:9 that I would like to be in 9:16. I have tried to use python libraries such as cv2, ffmpeg or MoviePy but some of them did it without the sound and others just compressed the whole video (it did not crop the left and right sides it just made the picture messy).
Is there a way to change the change the aspect ratio while zooming in so that the new video fills out the whole canvas? And of course while keeping the audio in python?
| [
"I faced a similar problem to you and came up with the following solution using Moviepy. Moviepy will keep the sound.\nI'm going to assume your 16:9 videos are 1920w by 1080h and you don't want to resize / compress your video.\nThis means the maximum dimensions for your new 9:16 video can be 607.5w by 1080h.\n607.5 / 1080 = 0.5625 = 9:16\nYou can't have half a px (607.5) for your new video's width, therefor to keep the ratio of 9:16 the next best dimensions are 576w by 1024h (correct me if I'm wrong).\nYou can then crop the original video clip with those dimensions.\nHere's an example of what it might look like in code:\nimport moviepy.editor as mp\nimport moviepy.video.fx.all as vfx\n \n# Create a temp video clip for this example\ntemp_clip = mp.ColorClip(size=(1920, 1080), color=(0, 0, 255), duration=1)\ntemp_clip.write_videofile(\"blue_original_clip.mp4\", fps=30) \n\n# This is where you load in your original clip \nclip_16_9 = mp.VideoFileClip(\"blue_original_clip.mp4\")\n \n# Now lets crop out a 9:16 section from the original\n# x1=0, y1=0 will take the section from the top left corner\nclip_9_16 = vfx.crop(clip_16_9, x1=0, y1=0, width=576, height=1024)\nclip_9_16.write_videofile(\"new_clip.mp4\")\n\nHope that helps.\n"
] | [
0
] | [
"well try this\nimport cv2\nimport numpy as np\n\ncap = cv2.VideoCapture('C:/New folder/video.avi')\n\nfourcc = cv2.VideoWriter_fourcc(*'XVID')\nout = cv2.VideoWriter('output.avi',fourcc, 5, (1280,720))\n\nwhile True:\n ret, frame = cap.read()\n if ret == True:\n b = cv2.resize(frame,(1280,720),fx=0,fy=0, interpolation = cv2.INTER_CUBIC)\n out.write(b)\n else:\n break\n \ncap.release()\nout.release()\ncv2.destroyAllWindows()\n\n"
] | [
-1
] | [
"edit",
"python",
"video"
] | stackoverflow_0073003948_edit_python_video.txt |
Q:
How do I plot excel time format with matplot?
I am trying to plot Formula 1 laptimes with matplotlib. I want the Y axis to show the laptimes, while the Xaxis shows the iD number of the race the laptimes belong to. The time is from a CSV file and the cell contains a date format, e.g., 01:22.0 reads as 12:01:24AM on the excel cell.
`
raceId time milliseconds
6079 100 1:18.739 78739
4438 81 1:20.502 80502
63 8 1:20.735 80735
3517 60 1:21.599 81599
7280 118 1:22.236 82236
8065 133 1:23.083 83083
9018 151 1:23.405 83405
13205 215 1:24.475 84475
`
I tried using pd.to_datetime(df['time']). The graph was showing the dates only, no time. How do I plot the time against the raceId?
mpl_values = pd.to_datetime(df['time'])
plt.plot(df['raceId'],mpl_values,marker = '.', markersize = 10)
plt.xlabel('raceId')
plt.ylabel('Laptimes')
plt.show()
A:
It looks like you're trying to plot the time column from your dataframe as the y-axis values of your plot. However, since you're using the pd.to_datetime function to convert the time column to datetime values, the y-axis of your plot will show dates instead of times.
To fix this, you can simply pass the time column directly to the plt.plot function without using pd.to_datetime on it. This will plot the raw time values as they appear in the time column of your dataframe.
Here's an example of how you can do this:
# Import the necessary modules
import matplotlib.pyplot as plt
import pandas as pd
# Load the data from the CSV file into a pandas dataframe
df = pd.read_csv('laptimes.csv')
# Plot the raceId on the x-axis and the time on the y-axis
plt.plot(df['raceId'], df['time'], marker='.', markersize=10)
# Add axis labels
plt.xlabel('raceId')
plt.ylabel('Laptimes')
# Show the plot
plt.show()
This code should produce a line plot with the raceId on the x-axis and the laptimes on the y-axis.
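If you would rather have a true numeric time scale on the y-axis instead of the raw strings being treated as categories, a small sketch (my own addition, assuming the milliseconds column shown in your sample data) could be:
# Plot lap durations in seconds so the y-axis is numeric
plt.plot(df['raceId'], df['milliseconds'] / 1000, marker='.', markersize=10)
plt.xlabel('raceId')
plt.ylabel('Lap time (s)')
plt.show()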
| How do I plot excel time format with matplot? | I am trying to plot Formula 1 laptimes with matplotlib. I want the Y axis to show the laptimes, while the Xaxis shows the iD number of the race the laptimes belong to. The time is from a CSV file and the cell contains a date format, e.g., 01:22.0 reads as 12:01:24AM on the excel cell.
`
raceId time milliseconds
6079 100 1:18.739 78739
4438 81 1:20.502 80502
63 8 1:20.735 80735
3517 60 1:21.599 81599
7280 118 1:22.236 82236
8065 133 1:23.083 83083
9018 151 1:23.405 83405
13205 215 1:24.475 84475
`
Tried using pd.to_datetime(df['time]). Graph was showing the dates only, no time. How do i plot the time against the raceId?
mpl_values = pd.to_datetime(df['time'])
plt.plot(df['raceId'],mpl_values,marker = '.', markersize = 10)
plt.xlabel('raceId')
plt.ylabel('Laptimes')
plt.show()
| [
"It looks like you're trying to plot the time column from your dataframe as the y-axis values of your plot. However, since you're using the pd.to_datetime function to convert the time column to datetime values, the y-axis of your plot will show dates instead of times.\nTo fix this, you can simply pass the time column directly to the plt.plot function without using pd.to_datetime on it. This will plot the raw time values as they appear in the time column of your dataframe.\nHere's an example of how you can do this:\n# Import the necessary modules\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\n# Load the data from the CSV file into a pandas dataframe\ndf = pd.read_csv('laptimes.csv')\n\n# Plot the raceId on the x-axis and the time on the y-axis\nplt.plot(df['raceId'], df['time'], marker='.', markersize=10)\n\n# Add axis labels\nplt.xlabel('raceId')\nplt.ylabel('Laptimes')\n\n# Show the plot\nplt.show()\n\nThis code should produce a line plot with the raceId on the x-axis and the laptimes on the y-axis.\n"
] | [
0
] | [] | [] | [
"jupyter_notebook",
"matplotlib",
"pandas",
"python"
] | stackoverflow_0074651416_jupyter_notebook_matplotlib_pandas_python.txt |
Q:
Lookup of Data from a CSV file in Python
How do I achieve this in Python? I know there is a vlookup function in excel, but if there is a way in Python, I prefer to do it in Python. Basically my goal is to get data from CSV2 column Quantity and write the data to column Quantity of CSV1 based on Bin_Name. The script should not copy all the values at once; it must be done by selecting a Bin_Name. Ex: For today, I would like to get the data from Bin_Name ABCDE of CSV2 to CSV1, and then it will write the data in column Quantity of CSV1. If this is possible, I will be very grateful and will learn a lot from this. Thank you very much in advance.
CSV1 CSV2
Bin_Name Quantity Bin_Name Quantity
A A 43
B B 32
C C 28
D D 33
E E 37
F F 38
G G 39
H H 41
A:
Hi, you can simply iterate over CSV2 first, then after gathering the wanted value, you can search for it in CSV1. I wrote the code below; it might help you, but there can be much more efficient ways to do this.
def func(wanted_rows: list,csv2df: pd.DataFrame):
# Iterate csv2df
for index,row in csv2df.iterrows():
# Check if index in the wanted list
if index in wanted_rows:
# Get index of CSV1 for same value
csv1_index = CSV1[CSV1.Bin_Name == row['Bin_Name']].index[0]
CSV1.at[csv1_index,'Quantity'] = row['Quantity']
    return CSV1
wanted_list = [1,2,3,4,5]
func(wanted_list,CSV2df)
A:
Here is one way to achieve this in Python using only the built-in csv module:
Read the two CSV files into two separate lists of dictionaries, where each dictionary represents a row in the CSV file.
Iterate over the list of dictionaries from CSV1, and for each dictionary, search for a matching Bin_Name in the list of dictionaries from CSV2.
If a match is found, update the Quantity value in the dictionary from CSV1 with the Quantity value from the matching dictionary in CSV2.
Write the updated list of dictionaries from CSV1 back to a new CSV file.
Here is an example implementation of the above steps:
# Import the csv module to read and write CSV files
import csv
# Open the two CSV files in read mode
with open("CSV1.csv", "r") as csv1_file, open("CSV2.csv", "r") as csv2_file:
# Use the csv reader to read the CSV files into lists of dictionaries
csv1_reader = csv.DictReader(csv1_file)
csv1_data = list(csv1_reader)
csv2_reader = csv.DictReader(csv2_file)
csv2_data = list(csv2_reader)
# Iterate over the list of dictionaries from CSV1
for row in csv1_data:
# Search for a matching Bin_Name in the list of dictionaries from CSV2
match = next((r for r in csv2_data if r["Bin_Name"] == row["Bin_Name"]), None)
# If a match is found, update the Quantity value in the dictionary from CSV1
# with the Quantity value from the matching dictionary in CSV2
if match:
row["Quantity"] = match["Quantity"]
# Open a new CSV file in write mode
with open("updated_csv1.csv", "w") as updated_csv1_file:
# Use the csv writer to write the updated list of dictionaries to the new CSV file
csv1_writer = csv.DictWriter(updated_csv1_file, fieldnames=csv1_reader.fieldnames)
csv1_writer.writeheader()
csv1_writer.writerows(csv1_data)
A:
I would simply use pandas' built-in functions in this case; there is no need for loops.
So, assuming that there are no duplicate bin names, try the code below to copy the whole column:
df1= pd.read_csv("file1.csv")
df2= pd.read_csv("file2.csv")
df1["Quantity"]= df2["Quantity"].where(df1["Bin_Name"].eq(df2["Bin_Name"]))
print(df1)
Bin_Name Quantity
0 A 43
1 B 32
2 C 28
3 D 33
4 E 37
5 F 38
6 G 39
7 H 41
If you need to copy only a subset of rows, use boolean indexing with pandas.DataFrame.loc :
vals= ["A", "B", "C", "D"]
df1.loc[df1["Bin_Name"].isin(vals), "Quantity"] = df2.loc[df1["Bin_Name"].isin(vals), "Quantity"]
print(df1)
Bin_Name Quantity
0 A 43.0
1 B 32.0
2 C 28.0
3 D 33.0
4 E NaN
5 F NaN
6 G NaN
7 H NaN
A:
I am not really sure if I understood your question fully, but let me know if this answers your challenge.
The normal way of doing Excel-type operations in Python is by using the Pandas framework. Using this, you can read, manipulate and save your CSV-files (and many other formats) using Python code.
Setting up the example
EDIT: Ensure you have installed pandas by e.g. typing the following in your terminal: pip install pandas
Since I don't have your CSV-files, I will create them using Pandas, rather than using the built-in read_csv()-method.
import pandas as pd
csv1 = pd.DataFrame.from_dict({
"Bin_Name": ["A","B","C","D","E","F","G","H"],
"Quantity": []
}, orient="index").T
csv2 = pd.DataFrame.from_dict({
"Bin_Name": ["A","B","C","D","E","F","G","H"],
"Quantity": [43, 32, 28, 33, 37, 38, 39, 41]
}, orient="index").T
The way I understood your question, you want to specify which bins should be copied from your csv2-file to your csv1-file. In your example, you mention something like this:
# Specify bins you want to copy
bins_to_copy = ["A", "B", "C", "D", "E"]
Now, there are several ways of doing the copying operation you mentioned, some better than others. Since you explicitly say "the script should not copy all the value at once", I will give one suggestion that follows your instructions, and one that I believe is a better approach.
Solution 1 (bad - using for-loops)
# Loop through each bin and copy cell value from csv2 to csv1
for bin_to_copy in bins_to_copy:
csv1.loc[csv1["Bin_Name"]==bin_to_copy, "Quantity"] = csv2.loc[csv2["Bin_Name"]==bin_to_copy, "Quantity"]
# OUTPUT:
> csv1
Bin_Name Quantity
0 A 43
1 B 32
2 C 28
3 D 33
4 E 37
5 F None
6 G None
7 H None
This approach does exactly what I believe you are asking for. However, there are several weaknesses with it:
Looping through rows is a very slow approach compared to using more efficient, built-in methods provided in the Pandas-library
The approach is vulnerable to situations where you have duplicate bins in either of the CSV-files
The approach is vulnerable to situations where a bin only exists in one of the CSV-files
Since we have updated one cell at a time, Pandas doesn't understand that the datatype of the column has changed, and we are still left with None for the missing values (and an "object"-type for the column) rather than NaN (which would indicate a numeric (float) column datatype).
If I have understood your problem correctly, then a better approach would be as follows
Solution 2 (better - using merge)
# Select the columns with bins from csv1
csv1_bins = csv1["Bin_Name"]
# Select only the rows with the desired bins from csv2
csv2_desired_bins = csv2[csv2["Bin_Name"].isin(bins_to_copy)]
# Merge the columns (just "Quantity" in this case) from csv2 to csv1 using "Bin_Name" as "merging-key"
result = pd.merge(left=csv1_bins, right=csv2_desired_bins, on="Bin_Name", how="left")
# OUTPUT
> result
Bin_Name Quantity
0 A 43
1 B 32
2 C 28
3 D 33
4 E 37
5 F NaN
6 G NaN
7 H NaN
The merge()-method is much more powerful and answers all the challenges I listed in solution 1. It is also a more generic version of the join()-method, which according to the documentation is "like an Excel VLOOKUP operation" (which is what you mention would be your Excel equivalent).
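If you want to see the join()-variant in action, here is a small sketch using the same frames as above (my own addition; join() aligns on the index of the right-hand frame, so Bin_Name is set as its index first):
# Equivalent lookup with join(): align csv2's Quantity on the Bin_Name key
result_join = csv1[["Bin_Name"]].join(
    csv2_desired_bins.set_index("Bin_Name"), on="Bin_Name"
)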
| Lookup of Data from a CSV file in Python | How do I achieve this in Python. I know there is a vlookup function in excel but if there is a way in Python, I prefer to do it in Python. Basically my goal is to get data from CSV2 column Quantity and write the data to column Quantity of CSV1 based on Bin_Name. The script should not copy all the value at once, it must be by selecting a Bin_Name. Ex: For today, I would like to get the data from Bin_Name ABCDE of CSV2 to CSV1 then it will write the data in column Quantity of CSV1. If this is possible, I will be very grateful and will learn a lot from this. Thank you very much in advance.
CSV1 CSV2
Bin_Name Quantity Bin_Name Quantity
A A 43
B B 32
C C 28
D D 33
E E 37
F F 38
G G 39
H H 41
| [
"Hi you can simply iterate CSV2 first, then after gathering wanted value, you can search it in CSV1. I wrote a code below it might help you, but there can be much more efficient ways to do.\ndef func(wanted_rows: list,csv2df: pd.DataFrame):\n # Iterate csv2df\n for index,row in csv2df.iterrows():\n # Check if index in the wanted list\n if index in wanted_rows:\n # Get index of CSV1 for same value\n csv1_index = CSV1[CSV1.Bin_Name == row['Bin_Name']].index[0]\n CSV1.at[csv1_index,'Quantity'] = row['Quantity']\n return df\n\nwanted_list = [1,2,3,4,5]\nfunc(wanted_list,CSV2df)\n\n",
"Here is one way to achieve this in Python without using\n\nRead the two CSV files into two separate lists of dictionaries, where each dictionary represents a row in the CSV file.\nIterate over the list of dictionaries from CSV1, and for each dictionary, search for a matching Bin_Name in the list of dictionaries from CSV2.\nIf a match is found, update the Quantity value in the dictionary from CSV1 with the Quantity value from the matching dictionary in CSV2.\nWrite the updated list of dictionaries from CSV1 back to a new CSV file.\n\nHere is an example implementation of the above steps:\n# Import the csv module to read and write CSV files\nimport csv\n\n# Open the two CSV files in read mode\nwith open(\"CSV1.csv\", \"r\") as csv1_file, open(\"CSV2.csv\", \"r\") as csv2_file:\n # Use the csv reader to read the CSV files into lists of dictionaries\n csv1_reader = csv.DictReader(csv1_file)\n csv1_data = list(csv1_reader)\n\n csv2_reader = csv.DictReader(csv2_file)\n csv2_data = list(csv2_reader)\n\n # Iterate over the list of dictionaries from CSV1\n for row in csv1_data:\n # Search for a matching Bin_Name in the list of dictionaries from CSV2\n match = next((r for r in csv2_data if r[\"Bin_Name\"] == row[\"Bin_Name\"]), None)\n\n # If a match is found, update the Quantity value in the dictionary from CSV1\n # with the Quantity value from the matching dictionary in CSV2\n if match:\n row[\"Quantity\"] = match[\"Quantity\"]\n\n # Open a new CSV file in write mode\n with open(\"updated_csv1.csv\", \"w\") as updated_csv1_file:\n # Use the csv writer to write the updated list of dictionaries to the new CSV file\n csv1_writer = csv.DictWriter(updated_csv1_file, fieldnames=csv1_reader.fieldnames)\n csv1_writer.writeheader()\n csv1_writer.writerows(csv1_data)\n\n",
"I would simply use pandas built-in functions in this case and there is no need for loops.\nSo, assuming that there is no duplicate bin names, try the code below to copy the whole column :\ndf1= pd.read_csv(\"file1.csv\")\ndf2= pd.read_csv(\"file2.csv\")\n\ndf1[\"Quantity\"]= df2[\"Quantity\"].where(df1[\"Bin_Name\"].eq(df2[\"Bin_Name\"]))\n\nprint(df1)\n\n Bin_Name Quantity\n0 A 43\n1 B 32\n2 C 28\n3 D 33\n4 E 37\n5 F 38\n6 G 39\n7 H 41\n\nIf you need to copy only a subset of rows, use boolean indexing with pandas.DataFrame.loc :\n\nvals= [\"A\", \"B\", \"C\", \"D\"]\ndf1.loc[df1[\"Bin_Name\"].isin(vals), \"Quantity\"] = df2.loc[df1[\"Bin_Name\"].isin(vals), \"Quantity\"]\nprint(df1)\n\n Bin_Name Quantity\n0 A 43.0\n1 B 32.0\n2 C 28.0\n3 D 33.0\n4 E NaN\n5 F NaN\n6 G NaN\n7 H NaN\n\n",
"I am not really sure if I understood your question fully, but let me know if this answers your challenge.\nThe normally way of doing Excel-type operations in Python is by using the framework Pandas. Using this, you can read, manipulate and save your CSV-files (and many other formats) using Python code.\nSetting up the example\nEDIT: Ensure you have installed pandas by e.g. typing the following in your terminal: pip install pandas\nSince I don't have your CSV-files, I will create them using Pandas, rather than using the built-in read_csv()-method.\nimport pandas as pd\n\ncsv1 = pd.DataFrame.from_dict({\n \"Bin_Name\": [\"A\",\"B\",\"C\",\"D\",\"E\",\"F\",\"G\",\"H\"],\n \"Quantity\": []\n}, orient=\"index\").T\n\ncsv2 = pd.DataFrame.from_dict({\n \"Bin_Name\": [\"A\",\"B\",\"C\",\"D\",\"E\",\"F\",\"G\",\"H\"],\n \"Quantity\": [43, 32, 28, 33, 37, 38, 39, 41]\n}, orient=\"index\").T\n\nThe way I understood your question, you want to specify which bins should be copied from your csv1-file to your csv2-file. In your example, you mention something like this:\n# Specify bins you want to copy\nbins_to_copy = [\"A\", \"B\", \"C\", \"D\", \"E\"]\n\nNow, there are several ways of doing the copying-operation you mentioned. Some better than others. Since you explicitly say \"the script should not copy all the value at once\", I will give one suggestions that follows you instructions, and one that I believe is a better approach.\nSolution 1 (bad - using for-loops)\n# Loop through each bin and copy cell value from csv2 to csv1\nfor bin_to_copy in bins_to_copy:\n csv1.loc[csv1[\"Bin_Name\"]==bin_to_copy, \"Quantity\"] = csv2.loc[csv2[\"Bin_Name\"]==bin_to_copy, \"Quantity\"]\n\n# OUTPUT:\n> csv1\n Bin_Name Quantity\n0 A 43\n1 B 32\n2 C 28\n3 D 33\n4 E 37\n5 F None\n6 G None\n7 H None\n\nThis approach does exactly what I believe you are asking for. However, there are several weaknesses with it:\n\nLooping through rows is a very slow approach compared to using more efficient, built-in methods provided in the Pandas-library\nThe approach is vulnerable to situations where you have duplicate bins in either of the CSV-files\nThe approach is vulnerable to situations where a bin only exists in one of the CSV-files\nSince we have updated one cell at a time, Pandas doesn't understand that the datatype of the column has changed, and we are still left with None for the missing values (and an \"object\"-type for the column) rather than NaN (which would indicate a numeric (float) column datatype).\n\nIf I have understood your problem correctly, then a better approach would be as follows\nSolution 2 (better - using merge)\n# Select the columns with bins from csv1\ncsv1_bins = csv1[\"Bin_Name\"]\n\n# Select only the rows with the desired bins from csv2\ncsv2_desired_bins = csv2[csv2[\"Bin_Name\"].isin(bins_to_copy)]\n\n# Merge the columns (just \"Quantity\" in this case) from csv2 to csv1 using \"Bin_Name\" as \"merging-key\"\nresult = pd.merge(left=csv1_bins, right=csv2_desired_bins, on=\"Bin_Name\", how=\"left\")\n\n# OUTPUT\n> result\n Bin_Name Quantity\n0 A 43\n1 B 32\n2 C 28\n3 D 33\n4 E 37\n5 F NaN\n6 G NaN\n7 H NaN\n\nThe merge()-method is much more powerful and answers all the challenges I listed solution 1. It is also a more generic version of the join()-method, which according to the documentation is \"like an Excel VLOOKUP operation.\" (which is what you mention would be you Excel equivalent)\n"
] | [
0,
0,
0,
0
] | [] | [] | [
"csv",
"lookup",
"python"
] | stackoverflow_0074652883_csv_lookup_python.txt |
Q:
"Discovering Python Interpreters" taking Infinite time in VS Code
I am new to Ubuntu, as well as to python.
(This problem started just recently; until now everything was fine.)
Whenever I am trying to start my VS Code to learn Django, the VS Code is showing the following issue for an infinite time
i.e., it is not discovering Python interpreters.
The problem seems to be in the Python Extension which I am using.
I tried to Uninstall it and then reinstall it. But it turned out to be of no use.
I even tried to uninstall and reinstall vscode from my ubuntu (20.04) system itself. But vs code started from exactly where I left, with no change.
I even tried to change the python interpreter path from the command palette, but it also didn't work.
I could find something relatable here, but I couldn't understand/follow them.
Help from someone's side would be appreciated.
A:
I've recently started experiencing the same problem with VS Code (latest version), albeit on Windows 10. I'm using Python 3.10.4 from python.org, the latest Microsoft Python extension, and a virtual environment for the workspace in which I do Python development. The option to select the interpreter is dysfunctional - it doesn't offer an interpreter to select. The workaround I found is to terminate and restart VS Code until it discovers the Python interpreter in the virtual environment. Frustrating, but it works, without having to uninstall and reinstall everything (yes, I unsuccessfully tried that too).
A:
The problem seems to be in the Ms Python Extension which I am using so I disabled the Microsoft python extension.
A:
Try disabling and uninstalling similar extensions. There might be a conflict of some sort.
So, go to vscode Extensions > click the [...] button > show running extensions.
If the Python extension is running alongside pylint and pylance, then that might be where your problem stems from, since the Python extension already contains pylance, and maybe pylint too.
A:
My method is quite simple: just close VS Code for a minute, restart it, and run a simple piece of code again, like 1 + 1, then wait for the interpreter to connect; that settled my problem.
| "Discovering Python Interpreters" taking Infinite time in VS Code | I am new to Ubuntu, as well as to python.
( This problem has started just recently until now everything was fine.)
Whenever I am trying to start my VS Code to learn Django, the VS Code is showing the following issue for an infinite time
i.e., it is not discovering Python interpreters.
The problem seems to be in the Python Extension which I am using.
I tried to Uninstall it and then reinstall it. But it turned out to be of no use.
I even tried to uninstall and reinstall vscode from my ubuntu (20.04) system itself. But vs code started from exactly where I left, with no change.
I even tried to change the python interpreter path from the command palate, but it also didn't work.
I could find something relatable here, but I couldn't understand/follow them.
Help from someone's side would be appreciated.
| [
"I've recently started experiencing the same problem with VS Code (latest version), albeit on Windows 10. I'm using Python 3.10.4 from python.org, the latest Microsoft Python extension, and a virtual environment for the workspace in which I do Python development. The option to select the interpreter is dysfunctional - it doesn't offer an interpreter to select. The workaround I found is to terminate and restart VS Code until it discovers the Python interpreter in the virtual environment. Frustrating, but it works, without having to uninstall and reinstall everything (yes, I unsuccessfully tried that too).\n",
"The problem seems to be in the Ms Python Extension which I am using so I disabled the Microsoft python extension.\n",
"Try Disabling and uninstalling like extensions. There might be a conflict of some sort.\nSo, go to vscode Extensions > click the [...] button > show running extensions.\nif the Python extension is running alongside pylint and pylance, then that might be where your problem stems from. Since Python contains pylance, and maybe pylint too.\n",
"Mine method is quite simple, just close the vs code for a minute, restart it, and run a simple code again, like 1 +1, wait the Interpreter to connect, my problem settled.\n"
] | [
0,
0,
0,
0
] | [] | [] | [
"python",
"visual_studio_code"
] | stackoverflow_0071153859_python_visual_studio_code.txt |
Q:
print a one dictionary at a time to a table from a list of dictionaries
I want to print only Tom's record to a table without using a function
fe=[{"Name": "Tom", "age": 10,"group":"sdd","points":2,},
{"Name": "Mark", "age": 5,"group":"sdo","points":6,},
{"Name": "Pam", "age": 7,"group":"spp","points":4,}],
dashes = "{:<20} + {:<8} + {:^14} + {:^11} \n".format("-"*20, "-"*8, "-"*10, "-"*14, "-"*11)
info ="{:<20} | {:<8} | {:^14} | {:^11}\n".format("Name", "age", "group","points")
info+=dashes
value=0
fe[value]
info+="{:<20} | {:<8} | {:^14} | {:^11} \n".format(value["Name"],value["age"],value["group"],value["points"])
info+=dashes
print(info)
A:
value is 0 (a plain int), so indexing it with value["Name"] fails; you need the dictionary at that position instead.
value = fe[0]
info += "{:<20} | {:<8} | {:^14} | {:^11} \n".format(value["Name"],value["age"],value["group"],value["points"])
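If you want to pick the record by name rather than by position, a small sketch (my own addition; note that the trailing comma after the closing ] in your definition of fe wraps it in a tuple, so drop it to keep fe a plain list of dicts):
# Find the first dict whose Name is "Tom"
value = next(d for d in fe if d["Name"] == "Tom")
info += "{:<20} | {:<8} | {:^14} | {:^11} \n".format(
    value["Name"], value["age"], value["group"], value["points"])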
| print a one dictionary at a time to a table from a list of dictionaries | I want to print only Tom's recode to a table without using function
fe=[{"Name": "Tom", "age": 10,"group":"sdd","points":2,},
{"Name": "Mark", "age": 5,"group":"sdo","points":6,},
{"Name": "Pam", "age": 7,"group":"spp","points":4,}],
dashes = "{:<20} + {:<8} + {:^14} + {:^11} \n".format("-"*20, "-"*8, "-"*10, "-"*14, "-"*11)
info ="{:<20} | {:<8} | {:^14} | {:^11}\n".format("Name", "age", "group","points")
info+=dashes
value=0
fe[value]
info+="{:<20} | {:<8} | {:^14} | {:^11} \n".format(value["Name"],value["age"],value["group"],value["points"])
info+=dashes
print(info)
| [
"value is 0.\nvalue = fe[0]\ninfo += \"{:<20} | {:<8} | {:^14} | {:^11} \\n\".format(value[\"Name\"],value[\"age\"],value[\"group\"],value[\"points\"])\n\n"
] | [
0
] | [] | [] | [
"dictionary",
"list",
"python",
"string"
] | stackoverflow_0074653317_dictionary_list_python_string.txt |
Q:
Optimizing script
I am working with high resolution raster data on an enormous land cover. I have achieved what I set out to do (in terms of script result), and it works very well on a small raster file; however, when applying it to a big raster file it takes ages.
The work flow is this:
Get aspect and slope raster from a DEM raster file
Convert the aspect and slope rasters to polygon layers
Export certain aspect and slope combinations that are of importance
Merge the exported ones into one final layer, which in my case represents "NO-GO-ZONES".
Code:
import pandas as pd
from os.path import join, normpath
import time
start = time.time()
path = 'C:/Users/tlind/Dropbox/Documents/Temp/'
# iface.addRasterLayer(path+'riktigk_raster.tif')
processing.run("native:slope", {'INPUT':path+'riktig.tif','Z_FACTOR':1,'OUTPUT':path+'slope.tif'})
# iface.addRasterLayer(path+'slope.tif')
#
processing.run("native:aspect", {'INPUT':path+'riktig.tif','Z_FACTOR':1,'OUTPUT':path+'aspect.tif'})
# iface.addRasterLayer(path+'aspect.tif')
processing.run("gdal:merge", {'INPUT':['C:/Users/tlind/Dropbox/Documents/Temp/aspect.tif','C:/Users/tlind/Dropbox/Documents/Temp/slope.tif'],'PCT':True,'SEPARATE':True,'NODATA_INPUT':None,'NODATA_OUTPUT':None,'OPTIONS':'','EXTRA':'','DATA_TYPE':5,'OUTPUT':path+'merge.tif'})
processing.run("native:pixelstopolygons", {'INPUT_RASTER':path+'merge.tif','RASTER_BAND':1,'FIELD_NAME':'ASPECT','OUTPUT':path+'aspect.shp'})
processing.run("native:pixelstopolygons", {'INPUT_RASTER':path+'merge.tif','RASTER_BAND':2,'FIELD_NAME':'SLOPE','OUTPUT':path+'slope.shp'})
aspect_layer = iface.addVectorLayer(path+'aspect.shp', "", "ogr")
slope_layer = iface.addVectorLayer(path+'slope.shp', "", "ogr")
pv_apsect = aspect_layer.dataProvider()
pv_apsect.addAttributes([QgsField('ID', QVariant.Double)])
aspect_layer.updateFields()
expression = QgsExpression('$id')
context = QgsExpressionContext()
context.appendScopes(QgsExpressionContextUtils.globalProjectLayerScopes(aspect_layer))
with edit(aspect_layer):
for f in aspect_layer.getFeatures():
context.setFeature(f)
f['ID'] = expression.evaluate(context)
aspect_layer.updateFeature(f)
pv_slope = slope_layer.dataProvider()
pv_slope.addAttributes([QgsField('ID', QVariant.Double)])
slope_layer.updateFields()
expression = QgsExpression('$id')
context = QgsExpressionContext()
context.appendScopes(QgsExpressionContextUtils.globalProjectLayerScopes(slope_layer))
with edit(slope_layer):
for f in slope_layer.getFeatures():
context.setFeature(f)
f['ID'] = expression.evaluate(context)
slope_layer.updateFeature(f)
processing.run("native:joinattributestable", {'INPUT':path+'aspect.shp','FIELD':'ID','INPUT_2':path+'slope.shp','FIELD_2':'ID','FIELDS_TO_COPY':[],'METHOD':1,'DISCARD_NONMATCHING':False,'PREFIX':'','OUTPUT':path+'aspect_slope.shp'})
aspect_slope_layer = iface.addVectorLayer(path+'aspect_slope.shp', "", "ogr")
slope_list = [4.2, 4.6, 5.1, 5.7, 6.4, 7, 9.5, 12, 15, 18.2, 22.5, 30]
aspect_list_1 = [[0, 15], [15,30], [30, 45], [45, 60], [60, 75], [75, 90], [90, 105], [105, 120], [120, 135], [135, 150], [150, 165], [165, 180]]
aspect_list_2 = [[180, 195], [195, 210], [210, 225], [225, 240], [240, 255], [255, 270], [270, 285], [285, 300], [300, 315], [315, 330], [330, 345], [345, 360]]
# aspect_list_3 = aspect_list_1+aspect_list_2
for i in range(len(slope_list)):
aspect_list_1[i].append(slope_list[i])
for i in range(len(slope_list)):
aspect_list_2[i].append(slope_list[::-1][i])
for aspect_interval in aspect_list_1:
start = aspect_interval[0]
end = aspect_interval[1]
slope_loop = aspect_interval[2]
aspect_slope_layer.selectByExpression('"ASPECT">'+str(start)+' and "ASPECT"<='+str(end)+' and "SLOPE">='+str(slope_loop))
QgsVectorFileWriter.writeAsVectorFormat(aspect_slope_layer, str(path)+'aspect_slope_'+str(start)+'-'+str(end)+'.shp', "UTF-8", aspect_slope_layer.crs(), "ESRI Shapefile", onlySelected=True)
# iface.addVectorLayer(str(path)+'aspect_slope_'+str(start)+'-'+str(end)+'.shp', "", "ogr")
for aspect_interval in aspect_list_2:
start = aspect_interval[0]
end = aspect_interval[1]
slope_loop = aspect_interval[2]
aspect_slope_layer.selectByExpression('"ASPECT">'+str(start)+' and "ASPECT"<='+str(end)+' and "SLOPE">='+str(slope_loop))
QgsVectorFileWriter.writeAsVectorFormat(aspect_slope_layer, str(path)+'aspect_slope_'+str(start)+'-'+str(end)+'.shp', "UTF-8", aspect_slope_layer.crs(), "ESRI Shapefile", onlySelected=True)
# iface.addVectorLayer(str(path)+'aspect_slope_'+str(start)+'-'+str(end)+'.shp', "", "ogr")
processing.run("native:mergevectorlayers", {'LAYERS':['C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_0-15.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_105-120.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_120-135.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_135-150.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_15-30.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_150-165.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_165-180.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_180-195.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_195-210.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_210-225.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_225-240.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_240-255.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_255-270.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_270-285.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_285-300.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_30-45.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_300-315.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_315-330.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_330-345.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_345-360.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_45-60.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_60-75.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_75-90.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_90-105.shp'],'CRS':None,'OUTPUT':str(path)+'aspect_slope_final.shp'})
iface.addVectorLayer(str(path)+'aspect_slope_final.shp', "", "ogr")
end = time.time()
print("Elapsed time:", (end-start)/60, "minutes.")
Up until creating the polygon layers from the rasters, it is rather fast. Adding the ID field is something that is rather time-consuming. Is it possible to produce a vector from a raster while adding more than one field from the start, instead of manually having to do it afterwards? In this example, there is the 'FIELD_NAME':'ASPECT'.
processing.run("native:pixelstopolygons", {'INPUT_RASTER':path+'merge.tif','RASTER_BAND':1,'FIELD_NAME':'ASPECT','OUTPUT':path+'aspect.shp'})
When it comes to the actual raster-to-polygon conversion, there's not that much to do about it. That takes time.
In the ending loops, the selected polygons are exported. Is it possible to make one complete selection and perhaps do only one export? That would also speed things up, I guess.
Another, slightly unrelated question would be whether it's possible to speed things up by making QGIS use more of the CPU? Currently I think it runs on only one core.
A:
I managed to optimise it really well!
First replacing all of
pv_apsect = aspect_layer.dataProvider()
pv_apsect.addAttributes([QgsField('ID', QVariant.Double)])
aspect_layer.updateFields()
expression = QgsExpression('$id')
context = QgsExpressionContext()
context.appendScopes(QgsExpressionContextUtils.globalProjectLayerScopes(aspect_layer))
with edit(aspect_layer):
for f in aspect_layer.getFeatures():
context.setFeature(f)
f['ID'] = expression.evaluate(context)
aspect_layer.updateFeature(f)
pv_slope = slope_layer.dataProvider()
pv_slope.addAttributes([QgsField('ID', QVariant.Double)])
slope_layer.updateFields()
expression = QgsExpression('$id')
context = QgsExpressionContext()
context.appendScopes(QgsExpressionContextUtils.globalProjectLayerScopes(slope_layer))
with edit(slope_layer):
for f in slope_layer.getFeatures():
context.setFeature(f)
f['ID'] = expression.evaluate(context)
slope_layer.updateFeature(f)
processing.run("native:joinattributestable", {'INPUT':path+'aspect.shp','FIELD':'ID','INPUT_2':path+'slope.shp','FIELD_2':'ID','FIELDS_TO_COPY':[],'METHOD':1,'DISCARD_NONMATCHING':False,'PREFIX':'','OUTPUT':path+'aspect_slope.shp'})
with simply
processing.run("native:joinattributesbylocation", {'INPUT':path+'slope.shp','PREDICATE':[5],'JOIN':path+'aspect.shp','JOIN_FIELDS':[],'METHOD':0,'DISCARD_NONMATCHING':False,'PREFIX':'','OUTPUT':path+'aspect_slope.shp'})
Additionally I added a spatial index to the shape files with
processing.run("native:createspatialindex", {'INPUT':'LAYER'})
Went from a 28 hour process to a 50 minute process.
| Optimizing script | I am working with high resolution raster data on a enormous land cover. I have achieved what I set out to (in terms of script result) and it works very well on a small raster file, however when applying it to a big raster file it takes ages.
The work flow is this:
Get aspect and slope raster from a DEM raster file
Convert the aspect and slope rasters to polygon layers
Export certain aspect and slope combinations that are of importance
Merge the exported ones into on final layer, which in my case represents "NO-GO-ZONES".
Code:
import pandas as pd
from os.path import join, normpath
import time
start = time.time()
path = 'C:/Users/tlind/Dropbox/Documents/Temp/'
# iface.addRasterLayer(path+'riktigk_raster.tif')
processing.run("native:slope", {'INPUT':path+'riktig.tif','Z_FACTOR':1,'OUTPUT':path+'slope.tif'})
# iface.addRasterLayer(path+'slope.tif')
#
processing.run("native:aspect", {'INPUT':path+'riktig.tif','Z_FACTOR':1,'OUTPUT':path+'aspect.tif'})
# iface.addRasterLayer(path+'aspect.tif')
processing.run("gdal:merge", {'INPUT':['C:/Users/tlind/Dropbox/Documents/Temp/aspect.tif','C:/Users/tlind/Dropbox/Documents/Temp/slope.tif'],'PCT':True,'SEPARATE':True,'NODATA_INPUT':None,'NODATA_OUTPUT':None,'OPTIONS':'','EXTRA':'','DATA_TYPE':5,'OUTPUT':path+'merge.tif'})
processing.run("native:pixelstopolygons", {'INPUT_RASTER':path+'merge.tif','RASTER_BAND':1,'FIELD_NAME':'ASPECT','OUTPUT':path+'aspect.shp'})
processing.run("native:pixelstopolygons", {'INPUT_RASTER':path+'merge.tif','RASTER_BAND':2,'FIELD_NAME':'SLOPE','OUTPUT':path+'slope.shp'})
aspect_layer = iface.addVectorLayer(path+'aspect.shp', "", "ogr")
slope_layer = iface.addVectorLayer(path+'slope.shp', "", "ogr")
pv_apsect = aspect_layer.dataProvider()
pv_apsect.addAttributes([QgsField('ID', QVariant.Double)])
aspect_layer.updateFields()
expression = QgsExpression('$id')
context = QgsExpressionContext()
context.appendScopes(QgsExpressionContextUtils.globalProjectLayerScopes(aspect_layer))
with edit(aspect_layer):
for f in aspect_layer.getFeatures():
context.setFeature(f)
f['ID'] = expression.evaluate(context)
aspect_layer.updateFeature(f)
pv_slope = slope_layer.dataProvider()
pv_slope.addAttributes([QgsField('ID', QVariant.Double)])
slope_layer.updateFields()
expression = QgsExpression('$id')
context = QgsExpressionContext()
context.appendScopes(QgsExpressionContextUtils.globalProjectLayerScopes(slope_layer))
with edit(slope_layer):
for f in slope_layer.getFeatures():
context.setFeature(f)
f['ID'] = expression.evaluate(context)
slope_layer.updateFeature(f)
processing.run("native:joinattributestable", {'INPUT':path+'aspect.shp','FIELD':'ID','INPUT_2':path+'slope.shp','FIELD_2':'ID','FIELDS_TO_COPY':[],'METHOD':1,'DISCARD_NONMATCHING':False,'PREFIX':'','OUTPUT':path+'aspect_slope.shp'})
aspect_slope_layer = iface.addVectorLayer(path+'aspect_slope.shp', "", "ogr")
slope_list = [4.2, 4.6, 5.1, 5.7, 6.4, 7, 9.5, 12, 15, 18.2, 22.5, 30]
aspect_list_1 = [[0, 15], [15,30], [30, 45], [45, 60], [60, 75], [75, 90], [90, 105], [105, 120], [120, 135], [135, 150], [150, 165], [165, 180]]
aspect_list_2 = [[180, 195], [195, 210], [210, 225], [225, 240], [240, 255], [255, 270], [270, 285], [285, 300], [300, 315], [315, 330], [330, 345], [345, 360]]
# aspect_list_3 = aspect_list_1+aspect_list_2
for i in range(len(slope_list)):
aspect_list_1[i].append(slope_list[i])
for i in range(len(slope_list)):
aspect_list_2[i].append(slope_list[::-1][i])
for aspect_interval in aspect_list_1:
start = aspect_interval[0]
end = aspect_interval[1]
slope_loop = aspect_interval[2]
aspect_slope_layer.selectByExpression('"ASPECT">'+str(start)+' and "ASPECT"<='+str(end)+' and "SLOPE">='+str(slope_loop))
QgsVectorFileWriter.writeAsVectorFormat(aspect_slope_layer, str(path)+'aspect_slope_'+str(start)+'-'+str(end)+'.shp', "UTF-8", aspect_slope_layer.crs(), "ESRI Shapefile", onlySelected=True)
# iface.addVectorLayer(str(path)+'aspect_slope_'+str(start)+'-'+str(end)+'.shp', "", "ogr")
for aspect_interval in aspect_list_2:
start = aspect_interval[0]
end = aspect_interval[1]
slope_loop = aspect_interval[2]
aspect_slope_layer.selectByExpression('"ASPECT">'+str(start)+' and "ASPECT"<='+str(end)+' and "SLOPE">='+str(slope_loop))
QgsVectorFileWriter.writeAsVectorFormat(aspect_slope_layer, str(path)+'aspect_slope_'+str(start)+'-'+str(end)+'.shp', "UTF-8", aspect_slope_layer.crs(), "ESRI Shapefile", onlySelected=True)
# iface.addVectorLayer(str(path)+'aspect_slope_'+str(start)+'-'+str(end)+'.shp', "", "ogr")
processing.run("native:mergevectorlayers", {'LAYERS':['C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_0-15.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_105-120.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_120-135.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_135-150.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_15-30.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_150-165.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_165-180.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_180-195.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_195-210.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_210-225.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_225-240.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_240-255.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_255-270.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_270-285.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_285-300.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_30-45.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_300-315.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_315-330.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_330-345.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_345-360.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_45-60.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_60-75.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_75-90.shp','C:/Users/tlind/Dropbox/Documents/Temp/aspect_slope_90-105.shp'],'CRS':None,'OUTPUT':str(path)+'aspect_slope_final.shp'})
iface.addVectorLayer(str(path)+'aspect_slope_final.shp', "", "ogr")
end = time.time()
print("Elapsed time:", (end-start)/60, "minutes.")
Up til creating the polygon layers from the rasters is rather fast. Adding the ID field is something that is rather consuming. Is it possible to produce a vector from raster with adding more than one field from the start, instead of manually having to do it afterwards? In this example. There is the 'FIELD_NAME':'ASPECT'.
processing.run("native:pixelstopolygons", {'INPUT_RASTER':path+'merge.tif','RASTER_BAND':1,'FIELD_NAME':'ASPECT','OUTPUT':path+'aspect.shp'})
When it comes to the actual raster-to-pixel there's not that much to do about it. That takes time.
In the ending loops, where the selected polygons are exported. Is it possible to make a complete selection and do perhaps only one export? That would also speed things up, I guess.
Another thing slightly unlrelated question would be if it's possible to speed things up by making QGIS use more of the CPU? Currently I think it's surfs only on one core.
| [
"I managed to optimise it really well!\nFirst replacing all of\npv_apsect = aspect_layer.dataProvider()\npv_apsect.addAttributes([QgsField('ID', QVariant.Double)])\n\naspect_layer.updateFields()\n\nexpression = QgsExpression('$id')\n\ncontext = QgsExpressionContext()\ncontext.appendScopes(QgsExpressionContextUtils.globalProjectLayerScopes(aspect_layer))\n\nwith edit(aspect_layer):\n for f in aspect_layer.getFeatures():\n context.setFeature(f)\n f['ID'] = expression.evaluate(context)\n aspect_layer.updateFeature(f)\n \npv_slope = slope_layer.dataProvider()\npv_slope.addAttributes([QgsField('ID', QVariant.Double)])\n\nslope_layer.updateFields()\n\nexpression = QgsExpression('$id')\n\ncontext = QgsExpressionContext()\ncontext.appendScopes(QgsExpressionContextUtils.globalProjectLayerScopes(slope_layer))\n\nwith edit(slope_layer):\n for f in slope_layer.getFeatures():\n context.setFeature(f)\n f['ID'] = expression.evaluate(context)\n slope_layer.updateFeature(f)\n \nprocessing.run(\"native:joinattributestable\", {'INPUT':path+'aspect.shp','FIELD':'ID','INPUT_2':path+'slope.shp','FIELD_2':'ID','FIELDS_TO_COPY':[],'METHOD':1,'DISCARD_NONMATCHING':False,'PREFIX':'','OUTPUT':path+'aspect_slope.shp'})\n\nwith simply\nprocessing.run(\"native:joinattributesbylocation\", {'INPUT':path+'slope.shp','PREDICATE':[5],'JOIN':path+'aspect.shp','JOIN_FIELDS':[],'METHOD':0,'DISCARD_NONMATCHING':False,'PREFIX':'','OUTPUT':path+'aspect_slope.shp'})\n\nAdditionally I added a spatial index to the shape files with\nprocessing.run(\"native:createspatialindex\", {'INPUT':'LAYER'})\n\nWent from a 28 hour process to a 50 minute process.\n"
] | [
0
] | [] | [] | [
"optimization",
"pyqgis",
"python",
"qgis"
] | stackoverflow_0074635048_optimization_pyqgis_python_qgis.txt |
Q:
Setting up Jupyter lab for python scripts on a cloud provider as a beginner
I have python scripts for automated currency trading and I want to deploy them by running them in Jupyter Lab on a cloud instance. I have no experience with cloud computing or linux, so I have been trying for weeks to get into this cloud computing mania, but I have found it very difficult to participate in it.
My goal is to set up a full-fledged Python infrastructure on a cloud instance from whichever provider so that I can run my trading bot on the cloud.
I want to set up a cloud instance on whichever provider that has the latest python
installation plus the typically needed scientific packages (such as NumPy and pandas and others) in combination with a password-protected and Secure Sockets Layer (SSL)-encrypted Jupyter
Lab server installation.
So far I have gotten nowhere. I am currently looking at the DigitalOcean website for setting Jupyter Lab up, but there are so many confusing terms.
What is Ubuntu or Debian? Is it like a sub-variant of Linux operating system? Why do I have only 2 options here? I use neither of the operating system, I use the windows operating system on my laptop and it is also where I developed my python script. Do I need a window server or something?
How can I do this? I tried a lot of tutorials but I just got more confused.
| Setting up Jupyter lab for python scripts on a cloud provider as a beginner | I have python scripts for automated trading for currency and I want to deploy them by running on Jupter Lab on a cloud instance. I have no experience with cloud computing or linux, so I have been trying weeks to get into this cloud computing mania, but I found it very difficult to participate in it.
My goal is to set up a full-fledged Python infrastructure on a cloud instance from whichever provider so that I can run my trading bot on the cloud.
I want to set up a cloud instance on whichever provider that has the latest python
installation plus the typically needed scientific packages (such as NumPy and pandas and others) in combination with a password-protected and Secure Sockets Layer (SSL)-encrypted Jupyter
Lab server installation.
So far I have gotten no where. I am currently looking at the digital ocean website for setting jupter lab up but there are so many confusing terms.
What is Ubuntu or Debian? Is it like a sub-variant of Linux operating system? Why do I have only 2 options here? I use neither of the operating system, I use the windows operating system on my laptop and it is also where I developed my python script. Do I need a window server or something?
How can I do this? I tried a lot of tutorials but I just got more confused.
| [] | [] | [
"\nFirst, you need to choose a cloud provider that offers the latest version of python and the necessary scientific packages. Some popular options include Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).\n\nOnce you have chosen a provider, you need to create an account and select a cloud instance type. This will depend on your specific needs, such as the amount of memory, CPU, and storage required.\n\nAfter selecting the instance type, you will need to choose an operating system for your instance. Ubuntu and Debian are both versions of Linux, and are commonly used for hosting applications like Jupyter Lab. You can choose whichever one you prefer, or consult the provider's documentation for recommendations.\n\nOnce you have selected the operating system, you can install Jupyter Lab on the instance. This can typically be done using the provider's package manager (e.g. apt-get on Ubuntu/Debian), or by following the instructions provided by the Jupyter Lab project.\n\nAfter installing Jupyter Lab, you can upload your python scripts to the instance and run them using the Jupyter Lab interface. You can also set up password protection and SSL encryption for added security.\n\nTo access your Jupyter Lab server, you will need to configure the instance's security groups to allow incoming connections on the appropriate ports (e.g. 8888 for Jupyter Lab). You can then use a remote desktop client or web browser to connect to the instance and run your python scripts.\n\n\nOverall, setting up a cloud instance for running your python scripts can be a bit complex, but there are many resources available online to help you get started. It is worth taking the time to learn about cloud computing and Linux, as it can provide many benefits for your trading bot, such as scalability and availability.\n"
] | [
-1
] | [
"cloud",
"devops",
"jupyter_notebook",
"python"
] | stackoverflow_0074625243_cloud_devops_jupyter_notebook_python.txt |
Q:
How i can fix Cannot resolve keyword 'pub-date' into field. Choices are: choice, id, pub_date, question_text
Python django
when I start the local server,
I only get
Cannot resolve keyword 'pub-date' into field. Choices are: choice, id, pub_date, question_text
How can I fix it?
window
error
At first the problem was about the directory, so I read and searched the Django documentation about slashes,
and then I ran into this new problem.
A:
You got this error because you have written pub-date instead of pub_date somewhere in your code.
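For example, a hypothetical lookup using the field names from the error message would fail with the hyphen and work with the underscore (illustrative only; your actual query may differ):
Question.objects.order_by('-pub-date')   # raises the "Cannot resolve keyword 'pub-date'" error
Question.objects.order_by('-pub_date')   # works: pub_date is the real field name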
| How i can fix Cannot resolve keyword 'pub-date' into field. Choices are: choice, id, pub_date, question_text | Python django
when I starting local sever,
I met only
Cannot resolve keyword 'pub-date' into field. Choices are: choice, id, pub_date, question_text
how can i fix?
window
error
at first the problem was about direcotry, so read and search about django slash document.
and then i ment a new problem rn..
| [
"You got this error because you have written pub-date instead of pub_date somewhere in your code.\n"
] | [
1
] | [] | [] | [
"django",
"python"
] | stackoverflow_0074653593_django_python.txt |
Q:
FastAPI decorators on endpoint
I'm trying to create role-based access control on an endpoint, and since FastAPI has this built-in Depends mechanism with the possibility to cache the result, I'm trying to create something like this:
@router.get('/')
#decorator
@roles_decorator("admin")
async def get_items(user_id: str = Depends(get_current_user)):
return await get_all_items()
The get_current_user method will need to receive the role from the decorator (in this case admin) and receive user_id from the authorization service if the role matches the provided role. So my question is: can we pass the role from the decorator to the method in Depends, or is there any chance that we can do this role-based access control with predefined roles for every endpoint?
def get_current_user(role):
#connect to auth_serivce and do other logic
return user_id
A:
Instead of trying to mix dependencies and decorators (which won't do anything good), you can instead use a dynamically configured dependency:
async def get_current_user_with_role(role):
async def get_user_and_validate(user=Depends(get_current_user)):
if not user.has_role(role):
raise 403
return user.id
return get_user_and_validate
This will bind the role value you give to the dependency when it gets created:
@router.get('/')
async def get_items(user_id: str = Depends(get_current_user_with_role("admin"))):
pass
| FastAPI decorators on endpoint | I'm trying to create role-based access control on endpoint and since fastAPI has this build-in Depends method with possibility to cache result I'm trying to create something like this
@router.get('/')
#decorator
@roles_decorator("admin")
async def get_items(user_id: str = Depends(get_current_user)):
return await get_all_items()
get_current_user method will need to receive roles from decorator (in this case admin) and from authorization service receive user_id if role matches provided role. So my question is, can we pass the role from decorator to method in Depends, or is there any chance that we can do this role-based access control with predefined roles for every endpoint?
def get_current_user(role):
#connect to auth_serivce and do other logic
return user_id
| [
"Instead of trying to mix dependencies and decorators (which won't do anything good), you can instead use a dynamically configured dependency:\nasync def get_current_user_with_role(role):\n async def get_user_and_validate(user=Depends(get_current_user)):\n if not user.has_role(role):\n raise 403\n\n return user.id\n\n return get_user_and_validate\n\nThis will bind the role value you give to the dependency when it gets created:\[email protected]('/')\nasync def get_items(user_id: str = Depends(get_current_user_with_role(\"admin\"))):\n pass\n\n"
] | [
2
] | [] | [] | [
"fastapi",
"python"
] | stackoverflow_0074647143_fastapi_python.txt |
Q:
Mask an 2Darray row wise by another array
Consider the following 2d array:
>>> A = np.arange(2*3).reshape(2,3)
array([[0, 1, 2],
[3, 4, 5]])
>>> b = np.array([1, 2])
I would like to get the following mask from A as row wise condition from b as an upper index limit:
>>> mask
array([[True, False, False],
[True, True, False]])
Can numpy do this in a vectorized manner?
A:
You can use array broadcasting:
mask = np.arange(A.shape[1]) < b[:,None]
output:
array([[ True, False, False],
[ True, True, False]])
A:
Another possible solution, based on the idea that the wanted mask corresponds to a boolean lower triangular matrix:
mask = np.tril(np.ones(A.shape, dtype=bool))
Output:
array([[ True, False, False],
[ True, True, False]])
| Mask an 2Darray row wise by another array | Consider the following 2d array:
>>> A = np.arange(2*3).reshape(2,3)
array([[0, 1, 2],
[3, 4, 5]])
>>> b = np.array([1, 2])
I would like to get the following mask from A as row wise condition from b as an upper index limit:
>>> mask
array([[True, False, False],
[True, True, False]])
Can numpy do this in a vectorized manner?
| [
"You can use array broadcasting:\nmask = np.arange(A.shape[1]) < b[:,None]\n\noutput:\narray([[ True, False, False],\n [ True, True, False]])\n\n",
"Another possible solution, based on the idea that the wanted mask corresponds to a boolean lower triangular matrix:\nmask = np.tril(np.ones(A.shape, dtype=bool))\n\nOutput:\narray([[ True, False, False],\n [ True, True, False]])\n\n"
] | [
3,
0
] | [] | [] | [
"numpy",
"python"
] | stackoverflow_0074653163_numpy_python.txt |
Q:
Why am I getting this error while installing pygame in pycharm
Command "python setup.py egg_info" failed with error code 1 in C:\Users\Eli Heist\AppData\Local\Temp\pip-install-fjf50xi9\pygame\
this is the full process
(venv) C:\Users\Eli Heist\PycharmProjects\Space Invaders Ultimate>pip install pygame
Collecting pygame
Downloading https://files.pythonhosted.org/packages/c7/b8/06e02c7cca7aec915839927a9aa19f749ac17a3d2bb2610b945d2de0aa96/pygame-2.0.1.tar.gz (5.5MB)
100% |████████████████████████████████| 5.5MB 996kB/s
Complete output from command python setup.py egg_info:
WARNING, No "Setup" File Exists, Running "buildconfig/config.py"
Using WINDOWS configuration...
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\Eli Heist\AppData\Local\Temp\pip-install-fjf50xi9\pygame\setup.py", line 318, in <module>
buildconfig.config.main(AUTO_CONFIG)
File "C:\Users\Eli Heist\AppData\Local\Temp\pip-install-fjf50xi9\pygame\buildconfig\config.py", line 221, in main
deps = CFG.main(**kwds)
File "C:\Users\Eli Heist\AppData\Local\Temp\pip-install-fjf50xi9\pygame\buildconfig\config_win.py", line 574, in main
return setup_prebuilt_sdl2(prebuilt_dir)
File "C:\Users\Eli Heist\AppData\Local\Temp\pip-install-fjf50xi9\pygame\buildconfig\config_win.py", line 499, in setup_prebuilt_sdl2
DEPS.configure()
File "C:\Users\Eli Heist\AppData\Local\Temp\pip-install-fjf50xi9\pygame\buildconfig\config_win.py", line 336, in configure
from . import vstools
File "C:\Users\Eli Heist\AppData\Local\Temp\pip-install-fjf50xi9\pygame\buildconfig\vstools.py", line 11, in <module>
compiler.initialize()
File "C:\Users\Eli Heist\AppData\Local\Programs\Python\Python38\lib\distutils\msvc9compiler.py", line 372, in initialize
vc_env = query_vcvarsall(VERSION, plat_spec)
File "C:\Users\Eli Heist\PycharmProjects\Space Invaders Ultimate\venv\lib\site-packages\setuptools-40.8.0-py3.8.egg\setuptools\msvc.py", line
147, in msvc9_query_vcvarsall
File "C:\Users\Eli Heist\PycharmProjects\Space Invaders Ultimate\venv\lib\site-packages\setuptools-40.8.0-py3.8.egg\setuptools\msvc.py", line
1227, in return_env
File "C:\Users\Eli Heist\PycharmProjects\Space Invaders Ultimate\venv\lib\site-packages\setuptools-40.8.0-py3.8.egg\setuptools\msvc.py", line
876, in VCIncludes
File "C:\Users\Eli Heist\PycharmProjects\Space Invaders Ultimate\venv\lib\site-packages\setuptools-40.8.0-py3.8.egg\setuptools\msvc.py", line
555, in VCInstallDir
distutils.errors.DistutilsPlatformError: Microsoft Visual C++ 14.2 is required. Get it with "Microsoft Visual C++ Build Tools": https://visualst
udio.microsoft.com/downloads/
Making dir :prebuilt_downloads:
Downloading... https://www.libsdl.org/release/SDL2-devel-2.0.14-VC.zip 48d5dcd4a445410301f5575219cffb6de654edb8
Unzipping :prebuilt_downloads\SDL2-devel-2.0.14-VC.zip:
Downloading... https://www.libsdl.org/projects/SDL_image/release/SDL2_image-devel-2.0.5-VC.zip 137f86474691f4e12e76e07d58d5920c8d844d5b
Unzipping :prebuilt_downloads\SDL2_image-devel-2.0.5-VC.zip:
Downloading... https://www.libsdl.org/projects/SDL_ttf/release/SDL2_ttf-devel-2.0.15-VC.zip 1436df41ebc47ac36e02ec9bda5699e80ff9bd27
Unzipping :prebuilt_downloads\SDL2_ttf-devel-2.0.15-VC.zip:
Downloading... https://www.libsdl.org/projects/SDL_mixer/release/SDL2_mixer-devel-2.0.4-VC.zip 9097148f4529cf19f805ccd007618dec280f0ecc
Unzipping :prebuilt_downloads\SDL2_mixer-devel-2.0.4-VC.zip:
Downloading... https://www.ijg.org/files/jpegsr9d.zip ed10aa2b5a0fcfe74f8a6f7611aeb346b06a1f99
Unzipping :prebuilt_downloads\jpegsr9d.zip:
Downloading... https://pygame.org/ftp/prebuilt-x64-pygame-1.9.2-20150922.zip 3a5af3427b3aa13a0aaf5c4cb08daaed341613ed
Unzipping :prebuilt_downloads\prebuilt-x64-pygame-1.9.2-20150922.zip:
copying into .\prebuilt-x64
Path for SDL: prebuilt-x64\SDL2-2.0.14
...Library directory for SDL: prebuilt-x64/SDL2-2.0.14/lib/x64
...Include directory for SDL: prebuilt-x64/SDL2-2.0.14/include
Path for FONT: prebuilt-x64\SDL2_ttf-2.0.15
...Library directory for FONT: prebuilt-x64/SDL2_ttf-2.0.15/lib/x64
...Include directory for FONT: prebuilt-x64/SDL2_ttf-2.0.15/include
Path for IMAGE: prebuilt-x64\SDL2_image-2.0.5
...Library directory for IMAGE: prebuilt-x64/SDL2_image-2.0.5/lib/x64
...Include directory for IMAGE: prebuilt-x64/SDL2_image-2.0.5/include
Path for MIXER: prebuilt-x64\SDL2_mixer-2.0.4
...Library directory for MIXER: prebuilt-x64/SDL2_mixer-2.0.4/lib/x64
...Include directory for MIXER: prebuilt-x64/SDL2_mixer-2.0.4/include
Path for PORTMIDI: prebuilt-x64
...Library directory for PORTMIDI: prebuilt-x64/lib
...Include directory for PORTMIDI: prebuilt-x64/include
DLL for SDL2: prebuilt-x64/SDL2-2.0.14/lib/x64/SDL2.dll
DLL for SDL2_ttf: prebuilt-x64/SDL2_ttf-2.0.15/lib/x64/SDL2_ttf.dll
DLL for SDL2_image: prebuilt-x64/SDL2_image-2.0.5/lib/x64/SDL2_image.dll
DLL for SDL2_mixer: prebuilt-x64/SDL2_mixer-2.0.4/lib/x64/SDL2_mixer.dll
DLL for portmidi: prebuilt-x64/lib/portmidi.dll
Path for FREETYPE not found.
...Found include dir but no library dir in prebuilt-x64.
Path for PNG not found.
...Found include dir but no library dir in prebuilt-x64.
Path for JPEG not found.
...Found include dir but no library dir in prebuilt-x64.
DLL for freetype: prebuilt-x64/SDL2_ttf-2.0.15/lib/x64/libfreetype-6.dll
---
For help with compilation see:
https://www.pygame.org/wiki/CompileWindows
To contribute to pygame development see:
https://www.pygame.org/contribute.html
---
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in C:\Users\Eli Heist\AppData\Local\Temp\pip-install-fjf50xi9\pygame\
A:
You are seeing this error because pip is attempting to compile the complete SDL library for pygame, and your machine is missing the build requirements to do so. This is in your error message:
Microsoft Visual C++ 14.2 is required
Luckily, pygame offers pre-compiled binaries for most operating systems so you don't have to compile yourself. They are distributed as Python wheels. All you should have to do to access the pre-compiled version is install wheel:
C:\>pip install wheel
C:\>pip install pygame
A:
I am not sure how this problem can be solved but I will try to help.
Try the following things:
Don't install it for that specific project. Try to just go to cmd and install it for your whole python env.
Make sure python is properly installed in the PATH variable.
Try to use py -m pip install -U pygame --user instead. If that doesn't work, try python3 -m pip install -U pygame --user.
If you are using python 3.8 and/or none of the above methods work, you have two options: either revert to python 3.7 or wait for pygame to get updated. Pygame is known to function improperly in python 3.8. I had faced the same issue and I solved it by switching to python 3.7 for pygame projects and the latest version for other python projects.
A:
Today my friends ran into the same question.
It seems that pygame is not yet compatible with Python 3.11.0, because I can install it with Python 3.10.7 but not with Python 3.11.0. So you can use Python 3.10.7 instead of Python 3.11.0 to solve this question.
| Why am I getting this error while installing pygame in pycharm | Command "python setup.py egg_info" failed with error code 1 in C:\Users\Eli Heist\AppData\Local\Temp\pip-install-fjf50xi9\pygame\
this is the full process
(venv) C:\Users\Eli Heist\PycharmProjects\Space Invaders Ultimate>pip install pygame
Collecting pygame
Downloading https://files.pythonhosted.org/packages/c7/b8/06e02c7cca7aec915839927a9aa19f749ac17a3d2bb2610b945d2de0aa96/pygame-2.0.1.tar.gz (5.5MB)
100% |████████████████████████████████| 5.5MB 996kB/s
Complete output from command python setup.py egg_info:
WARNING, No "Setup" File Exists, Running "buildconfig/config.py"
Using WINDOWS configuration...
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\Eli Heist\AppData\Local\Temp\pip-install-fjf50xi9\pygame\setup.py", line 318, in <module>
buildconfig.config.main(AUTO_CONFIG)
File "C:\Users\Eli Heist\AppData\Local\Temp\pip-install-fjf50xi9\pygame\buildconfig\config.py", line 221, in main
deps = CFG.main(**kwds)
File "C:\Users\Eli Heist\AppData\Local\Temp\pip-install-fjf50xi9\pygame\buildconfig\config_win.py", line 574, in main
return setup_prebuilt_sdl2(prebuilt_dir)
File "C:\Users\Eli Heist\AppData\Local\Temp\pip-install-fjf50xi9\pygame\buildconfig\config_win.py", line 499, in setup_prebuilt_sdl2
DEPS.configure()
File "C:\Users\Eli Heist\AppData\Local\Temp\pip-install-fjf50xi9\pygame\buildconfig\config_win.py", line 336, in configure
from . import vstools
File "C:\Users\Eli Heist\AppData\Local\Temp\pip-install-fjf50xi9\pygame\buildconfig\vstools.py", line 11, in <module>
compiler.initialize()
File "C:\Users\Eli Heist\AppData\Local\Programs\Python\Python38\lib\distutils\msvc9compiler.py", line 372, in initialize
vc_env = query_vcvarsall(VERSION, plat_spec)
File "C:\Users\Eli Heist\PycharmProjects\Space Invaders Ultimate\venv\lib\site-packages\setuptools-40.8.0-py3.8.egg\setuptools\msvc.py", line
147, in msvc9_query_vcvarsall
File "C:\Users\Eli Heist\PycharmProjects\Space Invaders Ultimate\venv\lib\site-packages\setuptools-40.8.0-py3.8.egg\setuptools\msvc.py", line
1227, in return_env
File "C:\Users\Eli Heist\PycharmProjects\Space Invaders Ultimate\venv\lib\site-packages\setuptools-40.8.0-py3.8.egg\setuptools\msvc.py", line
876, in VCIncludes
File "C:\Users\Eli Heist\PycharmProjects\Space Invaders Ultimate\venv\lib\site-packages\setuptools-40.8.0-py3.8.egg\setuptools\msvc.py", line
555, in VCInstallDir
distutils.errors.DistutilsPlatformError: Microsoft Visual C++ 14.2 is required. Get it with "Microsoft Visual C++ Build Tools": https://visualst
udio.microsoft.com/downloads/
Making dir :prebuilt_downloads:
Downloading... https://www.libsdl.org/release/SDL2-devel-2.0.14-VC.zip 48d5dcd4a445410301f5575219cffb6de654edb8
Unzipping :prebuilt_downloads\SDL2-devel-2.0.14-VC.zip:
Downloading... https://www.libsdl.org/projects/SDL_image/release/SDL2_image-devel-2.0.5-VC.zip 137f86474691f4e12e76e07d58d5920c8d844d5b
Unzipping :prebuilt_downloads\SDL2_image-devel-2.0.5-VC.zip:
Downloading... https://www.libsdl.org/projects/SDL_ttf/release/SDL2_ttf-devel-2.0.15-VC.zip 1436df41ebc47ac36e02ec9bda5699e80ff9bd27
Unzipping :prebuilt_downloads\SDL2_ttf-devel-2.0.15-VC.zip:
Downloading... https://www.libsdl.org/projects/SDL_mixer/release/SDL2_mixer-devel-2.0.4-VC.zip 9097148f4529cf19f805ccd007618dec280f0ecc
Unzipping :prebuilt_downloads\SDL2_mixer-devel-2.0.4-VC.zip:
Downloading... https://www.ijg.org/files/jpegsr9d.zip ed10aa2b5a0fcfe74f8a6f7611aeb346b06a1f99
Unzipping :prebuilt_downloads\jpegsr9d.zip:
Downloading... https://pygame.org/ftp/prebuilt-x64-pygame-1.9.2-20150922.zip 3a5af3427b3aa13a0aaf5c4cb08daaed341613ed
Unzipping :prebuilt_downloads\prebuilt-x64-pygame-1.9.2-20150922.zip:
copying into .\prebuilt-x64
Path for SDL: prebuilt-x64\SDL2-2.0.14
...Library directory for SDL: prebuilt-x64/SDL2-2.0.14/lib/x64
...Include directory for SDL: prebuilt-x64/SDL2-2.0.14/include
Path for FONT: prebuilt-x64\SDL2_ttf-2.0.15
...Library directory for FONT: prebuilt-x64/SDL2_ttf-2.0.15/lib/x64
...Include directory for FONT: prebuilt-x64/SDL2_ttf-2.0.15/include
Path for IMAGE: prebuilt-x64\SDL2_image-2.0.5
...Library directory for IMAGE: prebuilt-x64/SDL2_image-2.0.5/lib/x64
...Include directory for IMAGE: prebuilt-x64/SDL2_image-2.0.5/include
Path for MIXER: prebuilt-x64\SDL2_mixer-2.0.4
...Library directory for MIXER: prebuilt-x64/SDL2_mixer-2.0.4/lib/x64
...Include directory for MIXER: prebuilt-x64/SDL2_mixer-2.0.4/include
Path for PORTMIDI: prebuilt-x64
...Library directory for PORTMIDI: prebuilt-x64/lib
...Include directory for PORTMIDI: prebuilt-x64/include
DLL for SDL2: prebuilt-x64/SDL2-2.0.14/lib/x64/SDL2.dll
DLL for SDL2_ttf: prebuilt-x64/SDL2_ttf-2.0.15/lib/x64/SDL2_ttf.dll
DLL for SDL2_image: prebuilt-x64/SDL2_image-2.0.5/lib/x64/SDL2_image.dll
DLL for SDL2_mixer: prebuilt-x64/SDL2_mixer-2.0.4/lib/x64/SDL2_mixer.dll
DLL for portmidi: prebuilt-x64/lib/portmidi.dll
Path for FREETYPE not found.
...Found include dir but no library dir in prebuilt-x64.
Path for PNG not found.
...Found include dir but no library dir in prebuilt-x64.
Path for JPEG not found.
...Found include dir but no library dir in prebuilt-x64.
DLL for freetype: prebuilt-x64/SDL2_ttf-2.0.15/lib/x64/libfreetype-6.dll
---
For help with compilation see:
https://www.pygame.org/wiki/CompileWindows
To contribute to pygame development see:
https://www.pygame.org/contribute.html
---
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in C:\Users\Eli Heist\AppData\Local\Temp\pip-install-fjf50xi9\pygame\
| [
"You are seeing this error because pip is attempting to compile the complete SDL library for pygame, and your machine is missing the build requirements to do so. This is in your error message:\nMicrosoft Visual C++ 14.2 is required\n\nLuckily, pygame offers pre-compiled binaries for most operating systems so you don't have to compile yourself. They are distributed as Python wheels. All you should have to do to access the pre-compiled version is install wheel:\nC:\\>pip install wheel\nC:\\>pip install pygame\n\n",
"I am not sure how this problem can be solved but I will try to help.\nTry the following things:\n\nDon't install it for that specific project. Try to just go to cmd and install it for your whole python env.\nMake sure python is properly installed in the PATH variable.\nTry to use py -m pip install -U pygame --user instead. If that doesn't work, try python3 -m pip install -U pygame --user.\n\nIf you are using python 3.8 and/or none of the above methods work, you have two options: either revert to python 3.7 or wait for pygame to get updated. Pygame is known to function improperly in python 3.8. I had faced the same issue and I solved it by switching to python 3.7 for pygame projects and the latest version for other python projects.\n",
"Today my friends met the same questions.\nIt seems that pygame has been not suit to python3.11.0,becase I can install it by python 3.10.7 but not by python 3.11.0.So you can use python3.10.7 instead of python3.11.0 to solve this quesiton.\n"
] | [
1,
0,
0
] | [] | [] | [
"pip",
"pycharm",
"pygame",
"python",
"python_3.x"
] | stackoverflow_0065858990_pip_pycharm_pygame_python_python_3.x.txt |
Q:
Get max key of dictionary based on ratio key:value
Here is my dictionary:
inventory = {60: 20, 100: 50, 120: 30}
Keys are total cost of goods [$] and values are available weight [lb].
I need to find a way to get key based on the highest cost per pound.
I have already tried using:
most_expensive = max(inventory, key={function that calculates the ratio})
But cannot guess the pythonic way to do that.
Thank you in advance.
A:
What you are doing is correct. By using the key argument you can calculate the ratio. The thing you are missing is that you want to compare the key and values, therefore you can use the inventory.items().
The code would be:
max(inventory.items(), key=lambda x: x[0]/x[1])
Which will result in the following if I use your values:
(120, 30)
In order to get only the key, you have to get the first element of the answer.
A:
You can get the key of the maximum ratio directly with a key function that takes a key and returns the ratio:
max(inventory, key=lambda k: k / inventory[k])
This returns, given your sample input:
120
A:
I guess the following would be considered pretty pythonic :)
max([key/value for key, value in inventory.items()])
| Get max key of dictionary based on ratio key:value | Here is my dictionary:
inventory = {60: 20, 100: 50, 120: 30}
Keys are total cost of goods [$] and values are available weight [lb].
I need to find a way to get key based on the highest cost per pound.
I have already tried using:
most_expensive = max(inventory, key={fucntion that calculates the ratio})
But cannot guess the pythonic way to do that.
Thank you in advance.
| [
"What you are doing is correct. By using the key argument you can calculate the ratio. The thing you are missing is that you want to compare the key and values, therefore you can use the inventory.items().\nThe code would be:\nmax(inventory.items(), key=lambda x: x[0]/x[1])\n\nWhich will result in the following if I use your values:\n(120, 30)\n\nIn order to get only the key, you have to get the first element of the answer.\n",
"You can get the key of the maximum ratio directly with a key function that takes a key and returns the ratio:\nmax(inventory, key=lambda k: k / inventory[k])\n\nThis returns, given your sample input:\n120\n\n",
"I guess the following would be considered pretty pythonic :)\nmax([key/value for key, value in inventory.items()])\n"
] | [
2,
1,
0
] | [] | [] | [
"dictionary",
"python",
"python_3.x"
] | stackoverflow_0074653677_dictionary_python_python_3.x.txt |
Q:
Python extract value from multiple substring
I have a dataframe named df which has a column named "text", where each row is a string like this (it is a string in the MARC data format):
d20s 22 i2as¶001VNINDEA455133910000005¶008180529c 1996 frmmm wz 7b ¶009se z 1 m mm c¶008a ¶008at ¶008ap ¶008a ¶0441 $a2609-2565$c2609-2565¶0410 $afre$aeng$apor ¶0569 $a2758-8965$c4578-7854¶0300 $a789$987$754 ¶051 $atxt$asti$atdi$bc¶110 $317737535$w20..b.....$astock market situation¶3330 $aimport and export agency ABC¶7146 $q1$uwwww.abc.org$ma1¶7146 $q9$uAgency XYZ¶8799 $q1$uAgency ABC$fHTML$
Here I want to extract the information contained in zone ¶7146 after $u, or in zone ¶0441 after $c.
The result table will be like this :
¶7146$u
¶0441$c
wwww.abc.org
2609-2565
Agency XYZ
2609-2565
Here is the code I made :
import os
import pandas as pd
import numpy as np
import requests
df = pd.read_csv('dataset.csv')
def extract(text, start_pattern, sc):
ist = text.find(start_pattern)
if ist < 0:
return ""
ist = text.find(sc, ist)
if ist < 0:
return ""
im = text.find("$", ist + len(sc))
iz = text.find("¶", ist + len(sc))
if im >= 0:
if iz >= 0:
ie = min(im, iz)
else:
ie = im
else:
ie = iz
if ie < 0:
return ""
return text[ist + len(sc): ie]
def extract_text(row, list_in_zones):
text = row["text"]
if pd.isna(text):
return [""] * len(list_in_zones)
patterns = [("¶" + p, "$" + c) for p, c in [zone.split("$") for zone in list_in_zones]]
return [extract(text, pattern, sc) for pattern, sc in patterns]
list_in_zones = ["7146$u", "0441$u", "200$y"]
df[list_in_zones] = df.apply(lambda row: extract_text(row, list_in_zones),
axis=1,
result_type="expand")
df.to_excel("extract.xlsx", index = False)
For zone ¶7146 and field $u, my code only extracted "wwww.abc.org"; it cannot extract the duplicate occurrence with value "Agency XYZ". What's wrong here?
Additional logical structure: each zone starts with the character ¶ (for example ¶7146, ¶0441, ...), and the fields start with $ (for example $u, $c); a field ends at either the next $ or the next ¶. Here, I want to extract the information in the $ fields.
A:
You could try splitting and then cleaning up strings as follows
import pandas as pd
text = ('d20s 22 i2as¶001VNINDEA455133910000005¶008180529c 1996 frmmm wz 7b ¶009se z 1 m mm c¶008a ¶008at ¶008ap ¶008a ¶0441 $a2609-2565$c2609-2565¶0410 $afre$aeng$apor ¶0569 $a2758-8965$c4578-7854¶0300 $a789$987$754 ¶051 $atxt$asti$atdi$bc¶110 $317737535$w20..b.....$astock market situation¶3330 $aimport and export agency ABC¶7146 $q1$uwwww.abc.org$ma1¶7146 $q9$uAgency XYZ¶8799 $q1$uAgency ABC$fHTML$')
u = text.split('$u')[1:3] # Taking just the second and third elements in the array because they match your desired output
c = text.split('$c')[1:3]
pd.DataFrame([u,c]).T
OUTPUT
0 1
0 wwww.abc.org$ma1¶7146 $q9 2609-2565¶0410 $afre$aeng$apor ¶0569 $a2758-8965
1 Agency XYZ¶8799 $q1 4578-7854¶0300 $a789$987$754 ¶051 $atxt$asti$a...
From here you can try to clean up the strings until they match the desired output.
It would be easier to give a more helpful answer if we could understand the logic behind this data structure - when do certain fields start and end?
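Given the logical structure described in the question (each zone starts with ¶, each field with $, and a field ends at the next $ or ¶), a regex-based sketch that collects every occurrence of a zone/field pair could look like this; it assumes text holds one row of the "text" column, like the sample above:
import re

def extract_all(text, zone, field):
    # match "¶<zone> ... $<field>value", capturing the value up to the next $ or ¶;
    # [^¶]*? keeps the search inside the current occurrence of the zone
    pattern = re.compile(r'¶' + re.escape(zone) + r'[^¶]*?\$' + re.escape(field) + r'([^$¶]*)')
    return pattern.findall(text)

extract_all(text, '7146', 'u')   # ['wwww.abc.org', 'Agency XYZ'] for the sample string
extract_all(text, '0441', 'c')   # ['2609-2565']
Note that this captures the first $<field> inside each occurrence of the zone, which matches the sample data.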
| Python extract value from multiple substring | I have a dataframe named df which has a column named "text" consisting of each row which a string like this: This is the string of the MARC data format.
d20s 22 i2as¶001VNINDEA455133910000005¶008180529c 1996 frmmm wz 7b ¶009se z 1 m mm c¶008a ¶008at ¶008ap ¶008a ¶0441 $a2609-2565$c2609-2565¶0410 $afre$aeng$apor ¶0569 $a2758-8965$c4578-7854¶0300 $a789$987$754 ¶051 $atxt$asti$atdi$bc¶110 $317737535$w20..b.....$astock market situation¶3330 $aimport and export agency ABC¶7146 $q1$uwwww.abc.org$ma1¶7146 $q9$uAgency XYZ¶8799 $q1$uAgency ABC$fHTML$
Here I want to extract information containing in zones ¶7146, after $u or zone ¶0441, after $c.
The result table will be like this :
¶7146$u
¶0441$c
wwww.abc.org
2609-2565
Agency XYZ
2609-2565
Here is the code I made :
import os
import pandas as pd
import numpy as np
import requests
df = pd.read_csv('dataset.csv')
def extract(text, start_pattern, sc):
ist = text.find(start_pattern)
if ist < 0:
return ""
ist = text.find(sc, ist)
if ist < 0:
return ""
im = text.find("$", ist + len(sc))
iz = text.find("¶", ist + len(sc))
if im >= 0:
if iz >= 0:
ie = min(im, iz)
else:
ie = im
else:
ie = iz
if ie < 0:
return ""
return text[ist + len(sc): ie]
def extract_text(row, list_in_zones):
text = row["text"]
if pd.isna(text):
return [""] * len(list_in_zones)
patterns = [("¶" + p, "$" + c) for p, c in [zone.split("$") for zone in list_in_zones]]
return [extract(text, pattern, sc) for pattern, sc in patterns]
list_in_zones = ["7146$u", "0441$u", "200$y"]
df[list_in_zones] = df.apply(lambda row: extract_text(row, list_in_zones),
axis=1,
result_type="expand")
df.to_excel("extract.xlsx", index = False)
For zones ¶7146 and after $u, my code only extracted "www.abc.org", he cannot extract the duplicate with value "Agency XYZ". What's wrong here?.
Additional logical structure : The logic about the structure of the string is that each zone will start with a character ¶ like ¶7146, ¶0441,.. , and the fields start with $ for example $u, $c and this field ends with either $ or ¶. Here, I want to extract information in the fields $.
| [
"You could try splitting and then cleaning up strings as follows\nimport pandas as pd\ntext = ('d20s 22 i2as¶001VNINDEA455133910000005¶008180529c 1996 frmmm wz 7b ¶009se z 1 m mm c¶008a ¶008at ¶008ap ¶008a ¶0441 $a2609-2565$c2609-2565¶0410 $afre$aeng$apor ¶0569 $a2758-8965$c4578-7854¶0300 $a789$987$754 ¶051 $atxt$asti$atdi$bc¶110 $317737535$w20..b.....$astock market situation¶3330 $aimport and export agency ABC¶7146 $q1$uwwww.abc.org$ma1¶7146 $q9$uAgency XYZ¶8799 $q1$uAgency ABC$fHTML$')\nu = text.split('$u')[1:3] # Taking just the seconds and third elements in the array because they match your desired output\nc = text.split('$c')[1:3]\n\npd.DataFrame([u,c]).T\n\nOUTPUT\n 0 1\n0 wwww.abc.org$ma1¶7146 $q9 2609-2565¶0410 $afre$aeng$apor ¶0569 $a2758-8965\n1 Agency XYZ¶8799 $q1 4578-7854¶0300 $a789$987$754 ¶051 $atxt$asti$a...\n\nFrom here you can try to clean up the strings until they match the desired output.\nIt would be easier to give a more helpful answer if we could understand the logic behind this data structure - when do certain fields start and end?\n"
] | [
0
] | [] | [] | [
"extract",
"lambda",
"python",
"substring"
] | stackoverflow_0074653580_extract_lambda_python_substring.txt |
Q:
OpenCV getting very slow when using cap.set(cv2.CAP_PROP_POS_FRAMES
I'm using the following code to create a simple video player but I've seen that when I introduce the
cap.set(cv2.CAP_PROP_POS_FRAMES,arg) line, the whole process gets very slow while playing the video. The player works correctly with its trackbar, but the speed is very slow.
In general I noted that every time you use the cap.set(cv2.CAP_PROP_POS_FRAMES,...) command, the speed gets much slower than leaving the player going with ret, frame = cap.read() without setting the frame number.
I need to use cv2 because the purpose of the job is to process all frames by overlaying text on every frame and showing it in the player (the text overlay code is not yet written here).
import cv2
def on_trackbar(arg):
global cap
cap.set(cv2.CAP_PROP_POS_FRAMES,arg)
filevideo = r'D:\Documenti\Regate\Progetti\VideoOverlayData\Sviluppo\VideoOverlayData\INPUTFILES\28AugRace5.MP4'
cap = cv2.VideoCapture(filevideo)
Videofps = cap.get(cv2.CAP_PROP_FPS)
nr_of_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
if (cap.isOpened() == False):
print("Error opening video stream or file")
cv2.namedWindow('Frame', cv2.WINDOW_KEEPRATIO)
cv2.createTrackbar("F", "Frame", 0, nr_of_frames, on_trackbar)
videoCurrentFrameNumber = 0
while(cap.isOpened()):
# cap.set(1,videoCurrentFrameNumber)
ret, frame = cap.read()
if ret is True:
cv2.imshow('Frame',frame)
videoCurrentFrameNumber = int(cap.get(cv2.CAP_PROP_POS_FRAMES))
#cap.set(cv2.CAP_PROP_POS_FRAMES,videoCurrentFrameNumber)
#cap.set(5,50)
cv2.setTrackbarPos('F','Frame',videoCurrentFrameNumber)
#videoCurrentFrameNumber = videoCurrentFrameNumber +1
frameclick = cv2.waitKey(1) & 0xFF
if frameclick == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
A:
I had the same problem using python3 with opencv-python 4.5.5.64 on an M1 Mac (the last version known to work with the trackbar, as of writing this). The best explanation I've found is that random access is naturally slow. Though, I recall using older versions of opencv-python with your code with absolutely no problem. Needless to say, the solution that made the most sense for my case was to load the entire video into memory and then display it. The code for my use-case is shown below, where I've built a video player with frame-by-frame movement (a/d keys) and play/pause (spacebar). The default state is pause:
import cv2
import numpy as np
import datetime
import os
import sys
# import the video
videoFile = sys.argv[1];
# load capture
cap = cv2.VideoCapture(videoFile)
if (cap.isOpened()== False):
print("Error opening video stream or file")
# prepare view
view_name = "Video in memory"
total_frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
cv2.namedWindow(view_name)
cv2.createTrackbar('S',view_name, 0,int(total_frames)-1, lambda x:x)
cv2.setTrackbarPos('S',view_name,0)
# read the entire video to memory
print("[INFO] loading frames to memory")
all_frames = []
while(cap.isOpened()):
frame_exists, frame = cap.read()
if frame_exists == True:
all_frames.append(frame)
else:
break
# When everything done, release the video capture object
cap.release()
print("[INFO] load compelte");
# set default play state to pause
status = 'pause'
previous_status = 'pause'
frame_idx = 0
while True:
try:
if frame_idx==total_frames-1:
frame_idx=0
frame = all_frames[frame_idx]
cv2.imshow(view_name,frame)
# event handling
keyPress = cv2.waitKey(1)
if keyPress & 0xFF == ord(' '):
status = 'play_pause_click'
elif keyPress & 0xFF == ord('a'):
status = 'prev_frame'
elif keyPress & 0xFF == ord('d'):
status = 'next_frame'
elif keyPress & 0xFF == -1:
status = status
elif keyPress & 0xFF == ord('q'):
status = 'exit'
if (status == 'play_pause_click' and previous_status == 'play_pause_click'):
status = 'pause'
previous_status = 'pause'
if (status == 'play_pause_click'):
status = 'play'
previous_status = 'play_pause_click'
if status == 'play':
frame_idx += 1
cv2.setTrackbarPos('S',view_name,frame_idx)
continue
if status == 'pause':
frame_idx = cv2.getTrackbarPos('S',view_name)
if status=='prev_frame':
frame_idx-=1
frame_idx = max(0, frame_idx)
cv2.setTrackbarPos('S',view_name,frame_idx)
status='pause'
if status=='next_frame':
frame_idx+=1
cv2.setTrackbarPos('S',view_name,frame_idx)
status='pause'
if status == 'exit':
break
except KeyError:
print("Invalid Key was pressed")
# Closes the generated window
cv2.destroyWindow(view_name)
| OpenCV getting very slow when using cap.set(cv2.CAP_PROP_POS_FRAMES | I'm using the following code to create a simple video player but I've seen that when I introduce the
cap.set(cv2.CAP_PROP_POS_FRAMES,arg) line, the all process is getting very slow while playing the video. The player works correctly with its trackbar but the speed is very slow.
In general I noted that every time you use the cap.set(cv2.CAP_PROP_POS_FRAMES,...) command the speed get much slower than leaving the player going with ret, frame = cap.read() without setting the framenumber
I need to use the cv2 because the purpose of job is to treat all frames by overlapping a text to every frame and show on the player (the text overlay code is not yet written here)
import cv2
def on_trackbar(arg):
global cap
cap.set(cv2.CAP_PROP_POS_FRAMES,arg)
filevideo = r'D:\Documenti\Regate\Progetti\VideoOverlayData\Sviluppo\VideoOverlayData\INPUTFILES\28AugRace5.MP4'
cap = cv2.VideoCapture(filevideo)
Videofps = cap.get(cv2.CAP_PROP_FPS)
nr_of_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
if (cap.isOpened() == False):
print("Error opening video stream or file")
cv2.namedWindow('Frame', cv2.WINDOW_KEEPRATIO)
cv2.createTrackbar("F", "Frame", 0, nr_of_frames, on_trackbar)
videoCurrentFrameNumber = 0
while(cap.isOpened()):
# cap.set(1,videoCurrentFrameNumber)
ret, frame = cap.read()
if ret is True:
cv2.imshow('Frame',frame)
videoCurrentFrameNumber = int(cap.get(cv2.CAP_PROP_POS_FRAMES))
#cap.set(cv2.CAP_PROP_POS_FRAMES,videoCurrentFrameNumber)
#cap.set(5,50)
cv2.setTrackbarPos('F','Frame',videoCurrentFrameNumber)
#videoCurrentFrameNumber = videoCurrentFrameNumber +1
frameclick = cv2.waitKey(1) & 0xFF
if frameclick == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
| [
"I had the same problem using python3 with opencv-python 4.5.5.64 on an m1 mac. ( Last known version to work with trackbar, as of writing this article ) The best explanation i've found is that random access it is naturally slow. Though, I recall using older versions of opencv-python with your code with absolutely no problem. Needless to say, the solution that made most sense to my case was to load the entire video into memory and then display it. The code for my use-case is shown below, where I've build a video player with frame by frame (a,b) movement and play/pause (spacebar). The default position is pause:\nimport cv2\nimport numpy as np\nimport datetime\nimport os\nimport sys\n\n# import the video\nvideoFile = sys.argv[1];\n\n# load capture\ncap = cv2.VideoCapture(videoFile)\nif (cap.isOpened()== False): \n print(\"Error opening video stream or file\")\n\n# prepare view\nview_name = \"Video in memory\"\ntotal_frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)\ncv2.namedWindow(view_name)\ncv2.createTrackbar('S',view_name, 0,int(total_frames)-1, lambda x:x)\ncv2.setTrackbarPos('S',view_name,0)\n\n# read the entire video to memory\nprint(\"[INFO] loading frames to memory\")\nall_frames = []\nwhile(cap.isOpened()):\n frame_exists, frame = cap.read()\n if frame_exists == True:\n all_frames.append(frame)\n else:\n break\n\n# When everything done, release the video capture object\ncap.release()\nprint(\"[INFO] load compelte\");\n\n# set default play state to pause\nstatus = 'pause'\nprevious_status = 'pause'\nframe_idx = 0\nwhile True:\n try:\n if frame_idx==total_frames-1:\n frame_idx=0\n\n frame = all_frames[frame_idx]\n cv2.imshow(view_name,frame)\n\n # event handling\n keyPress = cv2.waitKey(1)\n if keyPress & 0xFF == ord(' '):\n status = 'play_pause_click'\n elif keyPress & 0xFF == ord('a'):\n status = 'prev_frame'\n elif keyPress & 0xFF == ord('d'):\n status = 'next_frame'\n elif keyPress & 0xFF == -1:\n status = status\n elif keyPress & 0xFF == ord('q'):\n status = 'exit'\n\n if (status == 'play_pause_click' and previous_status == 'play_pause_click'):\n status = 'pause'\n previous_status = 'pause'\n if (status == 'play_pause_click'):\n status = 'play'\n previous_status = 'play_pause_click'\n\n if status == 'play':\n frame_idx += 1\n cv2.setTrackbarPos('S',view_name,frame_idx)\n continue\n if status == 'pause':\n frame_idx = cv2.getTrackbarPos('S',view_name)\n if status=='prev_frame':\n frame_idx-=1\n frame_idx = max(0, frame_idx)\n cv2.setTrackbarPos('S',view_name,frame_idx)\n status='pause'\n if status=='next_frame':\n frame_idx+=1\n cv2.setTrackbarPos('S',view_name,frame_idx)\n status='pause'\n if status == 'exit':\n break\n\n except KeyError:\n print(\"Invalid Key was pressed\")\n \n# Closes the generated window\ncv2.destroyWindow(view_name)\n\n"
] | [
0
] | [] | [] | [
"opencv",
"python"
] | stackoverflow_0059951178_opencv_python.txt |
Q:
How should i set DJANGO_SETTINGS_MODULE, django.core.exceptions.ImproperlyConfigured: Requested setting EMAIL_BACKEND
I'm just starting to learn Django, and while creating a project I ran into a problem: "django.core.exceptions.ImproperlyConfigured: Requested setting EMAIL_BACKEND..."
I found a description of this problem on Stack Overflow, but I don't understand in which file I should set DJANGO_SETTINGS_MODULE.
Please give me a detailed description.
I tried setting this environment variable in the "activate" file of the virtualenv I use for my project, but it didn't help.
Maybe my question is stupid, but I really don't understand; sorry.
A:
The settings.py file is where you set your project configuration; one of those settings is the email backend. I don't know what in your project raised this issue, but Django has a built-in backend for testing purposes.
Include this line on your settings.py file
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
This prints the message to the console every time you send an email from the Django app.
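For example, with that backend configured you can test from the Django shell (the addresses below are placeholders); the message is printed to the console instead of being sent:
from django.core.mail import send_mail
send_mail('Test subject', 'Test body', '[email protected]', ['[email protected]'])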
| How should i set DJANGO_SETTINGS_MODULE, django.core.exceptions.ImproperlyConfigured: Requested setting EMAIL_BACKEND | i'm just start to learn Django and during create a priject i get a problem: "django.core.exceptions.ImproperlyConfigured: Requested setting EMAIL_BACKEND..."
I met description fo this problem on Staciverflow but i don't understand in which file i should set DJANGO_SETTINGS_MODULE???
Please, give me detail description
I tried set this environment variable in my virtualenv which i use for my project in "activate" file, but it did'nt help me.
Maybe my question is stupid but i really don't understand, sorry
| [
"settings.py file is where you set your project configuration file. one of them is the email backend server. i dont what you have in your project to raise this issue, but django has a built in server for testing perpose.\nInclude this line on your settings.py file\nEMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'\nThis sends a message to console everytime u send an email from django app.\n"
] | [
0
] | [] | [] | [
"django",
"django_rest_framework",
"django_settings",
"python"
] | stackoverflow_0074646555_django_django_rest_framework_django_settings_python.txt |
Q:
UDP conformation, is it possible?
I am using Python to send a UDP command to a Tello EDU drone. The problem I am having is that the drone doesn't read anything past the 1st command I send.
Is there a way to confirm the UDP send so that the drone reads it at all costs, or to keep sending the command until the drone reads it?
I tried sending the 2nd command repeatedly, which works; however, at certain times it repeats that command more than once.
A:
UDP is an unreliable protocol, i.e. sending a message is basically fire and forget. Any acknowledgements for received messages or retransmissions in case packets are not acknowledged need to be implemented at the application level - both in sender and receiver.
If the protocol used to communicate with the drone does not provide any way for reliable communication, then one cannot enforce reliable communication from the sender side.
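As a minimal sketch of that idea on the sender side, assuming the drone answers each command with an 'ok'/'error' reply on the same socket (the address, port, and reply format below are assumptions to be checked against the Tello SDK documentation):
import socket

TELLO_ADDR = ('192.168.10.1', 8889)  # assumed default Tello command address
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('', 9000))                # any free local port
sock.settimeout(2.0)

def send_command(cmd, retries=3):
    """Send a command and wait for a reply; retransmit if no reply arrives in time."""
    for _ in range(retries):
        sock.sendto(cmd.encode('utf-8'), TELLO_ADDR)
        try:
            reply, _addr = sock.recvfrom(1024)
            return reply.decode('utf-8', errors='ignore').strip() == 'ok'
        except socket.timeout:
            continue  # no acknowledgement received, send again
    return False

send_command('command')   # enter SDK mode first
send_command('takeoff')
Note that retransmitting has the side effect the question already observed: if the drone received the command but the reply was lost, it may execute the command twice, so commands should ideally be safe to repeat.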
| UDP conformation, is it possible? | I am using python to send a udp command to a Tello edu drone. The problem I am having is that the drone doesn’t read anything past the 1st command i send.
Is there are a way to confirm sending the UDP so the drone reads it at all cost. Or send the command until the drone reads it?
I tried sending the 2nd command repeatedly which works however at certain times it repeats that command more than once.
| [
"UDP is an unreliable protocol, i.e. sending a message is basically fire and forget. Any acknowledgements for received messages or retransmissions in case packets are not acknowledged need to be implemented at the application level - both in sender and receiver.\nIf the protocol used to communicate with the drone does not provide any way for reliable communication, then one cannot enforce reliable communication from the sender side.\n"
] | [
0
] | [] | [] | [
"python",
"tello_drone",
"udp"
] | stackoverflow_0074653675_python_tello_drone_udp.txt |
Q:
How to hide/mask sensitive data from airflow connections and variable section?
We have many AWS connection strings in Apache Airflow, and anyone can see our access keys and secret keys in the Airflow webserver connections section. How do we hide or mask sensitive data in the Airflow webserver?
We have already enabled authentication (true) in the Airflow configuration, so it won't allow unauthorized users. But I don't want to show my keys in the web view.
A:
For the Airflow Variables section, Airflow will automatically hide any values if the variable name contains secret or password. The check for this value is case-insensitive, so the value of a variable with a name containing SECRET will also be hidden.
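For example (a minimal sketch, assuming the variables are created programmatically), a variable whose name contains one of those keywords has its value masked in the UI, while an ordinary name does not:
from airflow.models import Variable

Variable.set("aws_secret_access_key", "...")   # masked: name contains "secret"
Variable.set("db_password", "...")             # masked: name contains "password"
Variable.set("bucket_name", "my-bucket")       # shown as-is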
A:
I found a workaround for this use case. There is an option in the Airflow AWSHook: we can pass a key file path in the connection string instead of the secret key and access key.
/root/keys/aws_keys
[default]
aws_access_key_id=<access key>
aws_secret_access_key=<secret key>
region=<region>
[s3_prod]
aws_access_key_id=<access key>
aws_secret_access_key=<secret key>
region=<region>
How does it work? First it checks the extra params in the connection string for any added AWS keys; if there are none, it checks the key path (s3_config_file). If neither option is available, it looks for credentials in the boto.cfg file. So now there is no need to expose any keys explicitly in the UI :)
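To point a connection at that file, the key path goes in the connection's Extra field instead of the keys themselves, for example (s3_config_format is optional; "boto" is shown here only as an assumption about the format of the file above):
{"s3_config_file": "/root/keys/aws_keys", "s3_config_format": "boto"}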
A:
The LDAP authentication module provides the ability to specify a group based filter for a group which will be admins, and able to see that menu, and the rest.
See the documentation under security.
The superuser_filter and data_profiler_filter are optional. If defined, these configurations allow you to specify LDAP groups that users must belong to in order to have superuser (admin) and data-profiler permissions. If undefined, all users will be superusers and data profilers.
Note that data-profilers can run adhoc queries on any defined connection. They cannot see the admin menu however. You may not want a group of users to be able to do arbitrary SQL or whatever over these, so also set that filter.
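A sketch of the relevant part of airflow.cfg (the group DNs are placeholders, and the rest of the [ldap] section, such as uri, bind user and base DN, is omitted):
[ldap]
# only members of these groups get superuser / data-profiler access
superuser_filter = memberOf=CN=airflow-super-users,OU=Groups,DC=example,DC=com
data_profiler_filter = memberOf=CN=airflow-data-profilers,OU=Groups,DC=example,DC=com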
Any user can request in their DAG and tasks any variable. It's easy to put those variables in places where they will show up in the logs.
The database provides a way of storing the connection passwords, and variable values in an encrypted way, but that doesn't solve all your problems.
A:
I'm pretty sure the AWS hook also allows you to put the access key in the "Login" box and the secret key in the "Password" box on the connection screen. If the hook finds that there is something in the login box it'll use that and the password box as the connection information, here is the snippet from the source code for the AWS Hook:
if self.aws_conn_id:
try:
connection_object = self.get_connection(self.aws_conn_id)
if connection_object.login:
aws_access_key_id = connection_object.login
aws_secret_access_key = connection_object.password
elif 'aws_secret_access_key' in connection_object.extra_dejson:
aws_access_key_id = connection_object.extra_dejson['aws_access_key_id']
aws_secret_access_key = connection_object.extra_dejson['aws_secret_access_key']
elif 's3_config_file' in connection_object.extra_dejson:
aws_access_key_id, aws_secret_access_key = \
_parse_s3_config(connection_object.extra_dejson['s3_config_file'],
connection_object.extra_dejson.get('s3_config_format'))
I've also found that you need to specify the region_name in the "Extra" box for the AWSHook in Airflow 1.9 otherwise it will not work.
A:
If you want to hide S3 sensitive data in Extra (connection param), you can set aws_access_key in login-field and aws_secret_key - in password-field
Conn Id: <your_conn_id>
Conn Type: S3
Login: <aws_access_key>
Password: <aws_secret_key>
p.s. host can be in extra: Extra: {"host": "<host>"}
| How to hide/mask sensitive data from airflow connections and variable section? | We have many AWS connection string in apache airflow and anyone can see our access keys and secret keys in airflow webserver connections section. How to hide or mask sensitive data in airflow webserver?
We have already enabled authentication true in airflow configuration so it won't allow unauthorized users. But I don't want to show my keys in web view.
| [
"For the Airflow Variables section, Airflow will automatically hide any values if the variable name contains secret or password. The check for this value is case-insensitive, so the value of a variable with a name containing SECRET will also be hidden.\n",
"I found workaround for this use case. There is an option in airflow AWSHook, We can pass key path in connection string instead of secret key and access key. \n\n\n/root/keys/aws_keys\n\n[default]\naws_access_key_id=<access key>\naws_secret_access_key=<secret key>\nregion=<region>\n\n[s3_prod]\naws_access_key_id=<access key>\naws_secret_access_key=<secret key>\nregion=<region>\n\nHow it works? First it will check extra params in connection string for any aws keys added if not then it will check key path(s3_config_file). if both options are not available then it will look credentials from boto.cfg file. So now no need to expose any keys explicitly in UI :)\n",
"The LDAP authentication module provides the ability to specify a group based filter for a group which will be admins, and able to see that menu, and the rest.\nSee the documentation under security.\n\nThe superuser_filter and data_profiler_filter are optional. If defined, these configurations allow you to specify LDAP groups that users must belong to in order to have superuser (admin) and data-profiler permissions. If undefined, all users will be superusers and data profilers.\n\nNote that data-profilers can run adhoc queries on any defined connection. They cannot see the admin menu however. You may not want a group of users to be able to do arbitrary SQL or whatever over these, so also set that filter.\nAny user can request in their DAG and tasks any variable. It's easy to put those variables in places where they will show up in the logs.\nThe database provides a way of storing the connection passwords, and variable values in an encrypted way, but that doesn't solve all your problems.\n",
"I'm pretty sure the AWS hook also allows you to put the access key in the \"Login\" box and the secret key in the \"Password\" box on the connection screen. If the hook finds that there is something in the login box it'll use that and the password box as the connection information, here is the snippet from the source code for the AWS Hook:\n if self.aws_conn_id:\n try:\n connection_object = self.get_connection(self.aws_conn_id)\n if connection_object.login:\n aws_access_key_id = connection_object.login\n aws_secret_access_key = connection_object.password\n\n elif 'aws_secret_access_key' in connection_object.extra_dejson:\n aws_access_key_id = connection_object.extra_dejson['aws_access_key_id']\n aws_secret_access_key = connection_object.extra_dejson['aws_secret_access_key']\n\n elif 's3_config_file' in connection_object.extra_dejson:\n aws_access_key_id, aws_secret_access_key = \\\n _parse_s3_config(connection_object.extra_dejson['s3_config_file'],\n connection_object.extra_dejson.get('s3_config_format'))\n\nI've also found that you need to specify the region_name in the \"Extra\" box for the AWSHook in Airflow 1.9 otherwise it will not work.\n",
"If you want to hide S3 sensitive data in Extra (connection param), you can set aws_access_key in login-field and aws_secret_key - in password-field\nConn Id: <your_conn_id>\nConn Type: S3\n\nLogin: <aws_access_key>\nPassword: <aws_secret_key>\n\np.s. host can be in extra: Extra: {\"host\": \"<host>\"}\n"
] | [
6,
1,
0,
0,
0
] | [] | [] | [
"airflow",
"python"
] | stackoverflow_0049528230_airflow_python.txt |
Q:
Schedulling for Cloud Dataflow Job
So, I have already finished creating a job in Dataflow. This job processes ETL from PostgreSQL to BigQuery. However, I don't know how to create a schedule using Airflow. Can you share how to schedule a Dataflow job using Airflow?
Thank you
A:
You can schedule dataflow batch jobs using Cloud Scheduler (fully managed cron job scheduler) / Cloud Composer (fully managed workflow orchestration service built on Apache Airflow).
To schedule using Cloud Scheduler refer Schedule Dataflow batch jobs with Cloud Scheduler
To schedule using Cloud Composer refer Launching Dataflow pipelines with Cloud Composer using DataflowTemplateOperator.
For examples and more ways to run Dataflow jobs in Airflow using Java/Python SDKs refer Google Cloud Dataflow Operators
A:
In your Airflow DAG, you can define a cron schedule with the schedule_interval param:
with airflow.DAG(
my_dag,
default_args=args,
schedule_interval="5 3 * * *"
# Trigger Dataflow job with an operator
launch_dataflow_job = BeamRunPythonPipelineOperator(
runner='DataflowRunner',
py_file=python_main_file,
task_id='launch_dataflow_job',
pipeline_options=dataflow_job_options,
py_system_site_packages=False,
py_interpreter='python3',
dataflow_config=DataflowConfiguration(
location='region'
)
)
launch_dataflow_job
......
| Schedulling for Cloud Dataflow Job | So, I already finish to create a job in Dataflow. This job to process ETL from PostgreSQL to BigQuery. So, I don't know to create a schedulling using Airflow. Can share how to schedule job dataflow using Airflow?
Thank you
| [
"You can schedule dataflow batch jobs using Cloud Scheduler (fully managed cron job scheduler) / Cloud Composer (fully managed workflow orchestration service built on Apache Airflow).\nTo schedule using Cloud Scheduler refer Schedule Dataflow batch jobs with Cloud Scheduler\nTo schedule using Cloud Composer refer Launching Dataflow pipelines with Cloud Composer using DataflowTemplateOperator.\nFor examples and more ways to run Dataflow jobs in Airflow using Java/Python SDKs refer Google Cloud Dataflow Operators\n",
"In your Airflow DAG, you can define a cron and a scheduling with schedule_interval param :\nwith airflow.DAG(\n my_dag,\n default_args=args,\n schedule_interval=\"5 3 * * *\"\n\n # Trigger Dataflow job with an operator\n launch_dataflow_job = BeamRunPythonPipelineOperator(\n runner='DataflowRunner',\n py_file=python_main_file,\n task_id='launch_dataflow_job',\n pipeline_options=dataflow_job_options,\n py_system_site_packages=False,\n py_interpreter='python3',\n dataflow_config=DataflowConfiguration(\n location='region'\n )\n )\n\n launch_dataflow_job\n ......\n\n"
] | [
1,
1
] | [] | [] | [
"airflow",
"google_bigquery",
"google_cloud_dataflow",
"python"
] | stackoverflow_0074653520_airflow_google_bigquery_google_cloud_dataflow_python.txt |
Q:
What does `ValueError: cannot reindex from a duplicate axis` mean?
I am getting a ValueError: cannot reindex from a duplicate axis when I am trying to set an index to a certain value. I tried to reproduce this with a simple example, but I could not do it.
Here is my session inside an ipdb trace. I have a DataFrame with a string index, integer columns, and float values. However, when I try to create a sums index for the sum of all columns, I get a ValueError: cannot reindex from a duplicate axis error. I created a small DataFrame with the same characteristics but was not able to reproduce the problem; what could I be missing?
I don't really understand what ValueError: cannot reindex from a duplicate axis means. What does this error message mean? Maybe this will help me diagnose the problem, and this is the most answerable part of my question.
ipdb> type(affinity_matrix)
<class 'pandas.core.frame.DataFrame'>
ipdb> affinity_matrix.shape
(333, 10)
ipdb> affinity_matrix.columns
Int64Index([9315684, 9315597, 9316591, 9320520, 9321163, 9320615, 9321187, 9319487, 9319467, 9320484], dtype='int64')
ipdb> affinity_matrix.index
Index([u'001', u'002', u'003', u'004', u'005', u'008', u'009', u'010', u'011', u'014', u'015', u'016', u'018', u'020', u'021', u'022', u'024', u'025', u'026', u'027', u'028', u'029', u'030', u'032', u'033', u'034', u'035', u'036', u'039', u'040', u'041', u'042', u'043', u'044', u'045', u'047', u'047', u'048', u'050', u'053', u'054', u'055', u'056', u'057', u'058', u'059', u'060', u'061', u'062', u'063', u'065', u'067', u'068', u'069', u'070', u'071', u'072', u'073', u'074', u'075', u'076', u'077', u'078', u'080', u'082', u'083', u'084', u'085', u'086', u'089', u'090', u'091', u'092', u'093', u'094', u'095', u'096', u'097', u'098', u'100', u'101', u'103', u'104', u'105', u'106', u'107', u'108', u'109', u'110', u'111', u'112', u'113', u'114', u'115', u'116', u'117', u'118', u'119', u'121', u'122', ...], dtype='object')
ipdb> affinity_matrix.values.dtype
dtype('float64')
ipdb> 'sums' in affinity_matrix.index
False
Here is the error:
ipdb> affinity_matrix.loc['sums'] = affinity_matrix.sum(axis=0)
*** ValueError: cannot reindex from a duplicate axis
I tried to reproduce this with a simple example, but I failed
In [32]: import pandas as pd
In [33]: import numpy as np
In [34]: a = np.arange(35).reshape(5,7)
In [35]: df = pd.DataFrame(a, ['x', 'y', 'u', 'z', 'w'], range(10, 17))
In [36]: df.values.dtype
Out[36]: dtype('int64')
In [37]: df.loc['sums'] = df.sum(axis=0)
In [38]: df
Out[38]:
10 11 12 13 14 15 16
x 0 1 2 3 4 5 6
y 7 8 9 10 11 12 13
u 14 15 16 17 18 19 20
z 21 22 23 24 25 26 27
w 28 29 30 31 32 33 34
sums 70 75 80 85 90 95 100
A:
This error usually arises when you join or assign to a column while the index contains duplicate labels. Since you are assigning to a row, the usual suspect would be affinity_matrix.columns, but the columns shown above are unique; the duplicate is actually in affinity_matrix.index, where u'047' appears twice.
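For illustration, a minimal sketch (made-up data, not the asker's) of the rule pandas enforces here: an axis that already contains a repeated label cannot be used to reindex. The exact error wording varies between pandas versions.
import pandas as pd

s = pd.Series([1.0, 2.0], index=['047', '047'])  # repeated label, like u'047' above
s.reindex(['001', '047'])  # raises ValueError: cannot reindex from a duplicate axis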
A:
As others have said, you've probably got duplicate values in your original index. To find them do this:
df[df.index.duplicated()]
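If the duplicates are not meaningful for your data, a common follow-up is to keep a single row per label (a sketch with made-up data):
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3]}, index=['001', '047', '047'])  # '047' repeats
df[df.index.duplicated(keep=False)]            # show every row whose label repeats
df = df[~df.index.duplicated(keep='first')]    # keep only the first occurrence of each label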
A:
Indices with duplicate values often arise if you create a DataFrame by concatenating other DataFrames. If you don't care about preserving the values of your index and you want them to be unique values, then when you concatenate the data, set ignore_index=True (a short sketch follows at the end of this answer).
Alternatively, to overwrite your current index with a new one, instead of using df.reindex(), set:
df.index = new_index
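The ignore_index sketch, with made-up frames a and b:
import pandas as pd

a = pd.DataFrame({'x': [1, 2]})
b = pd.DataFrame({'x': [3, 4]})

pd.concat([a, b]).index.tolist()                     # [0, 1, 0, 1] -> duplicate labels
pd.concat([a, b], ignore_index=True).index.tolist()  # [0, 1, 2, 3] -> fresh RangeIndex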
A:
For people who are still struggling with this error, it can also happen if you accidentally create a duplicate column with the same name. Remove duplicate columns like so:
df = df.loc[:,~df.columns.duplicated()]
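One way such a duplicate column name can sneak in, and the filter above removing it (made-up data):
import pandas as pd

df = pd.concat([pd.DataFrame({'a': [1, 2]}), pd.DataFrame({'a': [3, 4]})], axis=1)
df.columns.tolist()                        # ['a', 'a']
df = df.loc[:, ~df.columns.duplicated()]   # keeps only the first 'a'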
A:
Simple Fix
Run this before grouping
df = df.reset_index()
Thanks to this github comment for the solution.
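A sketch of the idea with made-up data: reset_index() moves the old, possibly duplicated index into a plain column, so the frame has a fresh unique RangeIndex before the groupby.
import pandas as pd

df = pd.DataFrame({'key': ['a', 'a', 'b'], 'val': [1, 2, 3]}, index=['047', '047', '048'])
df = df.reset_index()                                    # old labels become an 'index' column
result = df.groupby('key', as_index=False)['val'].sum()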
A:
Simply skip the error using .values at the end.
affinity_matrix.loc['sums'] = affinity_matrix.sum(axis=0).values
A:
I came across this error today when I wanted to add a new column like this
df_temp['REMARK_TYPE'] = df.REMARK.apply(lambda v: 1 if str(v)!='nan' else 0)
I wanted to process the REMARK column of df_temp to return 1 or 0. However, I typed the wrong variable, df. And it returned an error like this:
----> 1 df_temp['REMARK_TYPE'] = df.REMARK.apply(lambda v: 1 if str(v)!='nan' else 0)
/usr/lib64/python2.7/site-packages/pandas/core/frame.pyc in __setitem__(self, key, value)
2417 else:
2418 # set column
-> 2419 self._set_item(key, value)
2420
2421 def _setitem_slice(self, key, value):
/usr/lib64/python2.7/site-packages/pandas/core/frame.pyc in _set_item(self, key, value)
2483
2484 self._ensure_valid_index(value)
-> 2485 value = self._sanitize_column(key, value)
2486 NDFrame._set_item(self, key, value)
2487
/usr/lib64/python2.7/site-packages/pandas/core/frame.pyc in _sanitize_column(self, key, value, broadcast)
2633
2634 if isinstance(value, Series):
-> 2635 value = reindexer(value)
2636
2637 elif isinstance(value, DataFrame):
/usr/lib64/python2.7/site-packages/pandas/core/frame.pyc in reindexer(value)
2625 # duplicate axis
2626 if not value.index.is_unique:
-> 2627 raise e
2628
2629 # other
ValueError: cannot reindex from a duplicate axis
As you can see, the right code should be
df_temp['REMARK_TYPE'] = df_temp.REMARK.apply(lambda v: 1 if str(v)!='nan' else 0)
Because df and df_temp have a different number of rows. So it returned ValueError: cannot reindex from a duplicate axis.
Hope you can understand it and my answer can help other people to debug their code.
A:
In my case, this error popped up not because of duplicate values, but because I attempted to join a shorter Series to a Dataframe: both had the same index, but the Series had fewer rows (missing the top few). The following worked for my purposes:
df.head()
SensA
date
2018-04-03 13:54:47.274 -0.45
2018-04-03 13:55:46.484 -0.42
2018-04-03 13:56:56.235 -0.37
2018-04-03 13:57:57.207 -0.34
2018-04-03 13:59:34.636 -0.33
series.head()
date
2018-04-03 14:09:36.577 62.2
2018-04-03 14:10:28.138 63.5
2018-04-03 14:11:27.400 63.1
2018-04-03 14:12:39.623 62.6
2018-04-03 14:13:27.310 62.5
Name: SensA_rrT, dtype: float64
df = series.to_frame().combine_first(df)
df.head(10)
SensA SensA_rrT
date
2018-04-03 13:54:47.274 -0.45 NaN
2018-04-03 13:55:46.484 -0.42 NaN
2018-04-03 13:56:56.235 -0.37 NaN
2018-04-03 13:57:57.207 -0.34 NaN
2018-04-03 13:59:34.636 -0.33 NaN
2018-04-03 14:00:34.565 -0.33 NaN
2018-04-03 14:01:19.994 -0.37 NaN
2018-04-03 14:02:29.636 -0.34 NaN
2018-04-03 14:03:31.599 -0.32 NaN
2018-04-03 14:04:30.779 -0.33 NaN
2018-04-03 14:05:31.733 -0.35 NaN
2018-04-03 14:06:33.290 -0.38 NaN
2018-04-03 14:07:37.459 -0.39 NaN
2018-04-03 14:08:36.361 -0.36 NaN
2018-04-03 14:09:36.577 -0.37 62.2
A:
I wasted a couple of hours on the same issue. In my case, I had to call reset_index() on the DataFrame before using the apply function.
Before merging, or looking up from another indexed dataset, you need to reset the index, as one dataset can have only one index.
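A sketch of that workflow with made-up frames; resetting simply gives each frame a unique RangeIndex before any alignment happens:
import pandas as pd

left = pd.DataFrame({'key': ['a', 'b'], 'x': [1, 2]}, index=['047', '047'])   # duplicated labels
right = pd.DataFrame({'key': ['a', 'b'], 'y': [10, 20]})

left = left.reset_index(drop=True)                # unique RangeIndex again
merged = left.merge(right, on='key', how='left')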
A:
I got this error when I tried adding a column from a different table. Indeed I got duplicate index values along the way. But it turned out I was just doing it wrong: I actually needed to df.join the other table.
This pointer might help someone in a similar situation.
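A sketch of the join-based approach with made-up frames, aligning on the shared index instead of copying values across:
import pandas as pd

df = pd.DataFrame({'a': [1, 2]}, index=['x', 'y'])
other = pd.DataFrame({'new_col': [10, 20]}, index=['x', 'y'])

df = df.join(other[['new_col']], how='left')   # align on the index, no manual copying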
A:
In my case it was caused by a mismatch in dimensions:
I was accidentally using a column from a different df during the mul operation.
A:
This can also be a cause of the error (this is how I solved my problem):
it may happen when you try to insert a DataFrame-typed column into a DataFrame.
You can try this:
df['my_new']=pd.Series(my_new.values)
A:
If you get this error after merging two DataFrames, removing the suffix, and trying to write to Excel:
Your problem is that there are columns you are not merging on that are common to both source DataFrames. Pandas needs a way to say which one came from where, so it adds the suffixes, the defaults being '_x' on the left and '_y' on the right.
If you have a preference on which source data frame to keep the columns from, then you can set the suffixes and filter accordingly, for example if you want to keep the clashing columns from the left:
# Label the two sides, with no suffix on the side you want to keep
df = pd.merge(
df,
tempdf[what_i_care_about],
on=['myid', 'myorder'],
how='outer',
suffixes=('', '_delete_suffix') # Left gets no suffix, right gets something identifiable
)
# Discard the columns that acquired a suffix
df = df[[c for c in df.columns if not c.endswith('_delete_suffix')]]
Alternatively, you can drop one of each of the clashing columns prior to merging, then Pandas has no need to assign a suffix.
A:
Just add .to_numpy() to the end of the series you want to concatenate.
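A sketch with made-up data: a plain ndarray carries no index, so pandas skips the alignment step that trips over duplicate labels; only the lengths have to match.
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3]})
s = pd.Series([10, 20, 30], index=['x', 'x', 'y'])   # duplicated index

df['b'] = s.to_numpy()   # positional assignment; df['b'] = s would raise the reindex error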
A:
It happened to me when I appended 2 dataframes into another (df3 = df1.append(df2)), so the output was:
df1
A B
0 1 a
1 2 b
2 3 c
df2
A B
0 4 d
1 5 e
2 6 f
df3
A B
0 1 a
1 2 b
2 3 c
0 4 d
1 5 e
2 6 f
The simplest way to fix the indexes is to use the "df.reset_index(drop=bool, inplace=bool)" method, as Connor said... you can also set the 'drop' argument to True to avoid the old index being added as a column, and 'inplace' to True to make the index reset permanent.
Here is the official reference: https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.reset_index.html
In addition, you can also use the ".set_index(keys=list, inplace=bool)" method, like this:
new_index_list = list(range(0, len(df3)))
df3['new_index'] = new_index_list
df3.set_index(keys='new_index', inplace=True)
official reference: https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.set_index.html
A:
Make sure your index does not have any duplicates. I simply did df.reset_index(drop=True, inplace=True) and I don't get the error anymore. But you might want to keep the index; in that case just set drop to False.
A:
df = df.reset_index(drop=True) worked for me
| What does `ValueError: cannot reindex from a duplicate axis` mean? | I am getting a ValueError: cannot reindex from a duplicate axis when I am trying to set an index to a certain value. I tried to reproduce this with a simple example, but I could not do it.
Here is my session inside of an ipdb trace. I have a DataFrame with a string index, integer columns, and float values. However, when I try to add a 'sums' row holding the sum of each column, I get the ValueError: cannot reindex from a duplicate axis error. I created a small DataFrame with the same characteristics, but was not able to reproduce the problem. What could I be missing?
I don't really understand what ValueError: cannot reindex from a duplicate axis means. What does this error message mean? Maybe that will help me diagnose the problem, and this is the most answerable part of my question.
ipdb> type(affinity_matrix)
<class 'pandas.core.frame.DataFrame'>
ipdb> affinity_matrix.shape
(333, 10)
ipdb> affinity_matrix.columns
Int64Index([9315684, 9315597, 9316591, 9320520, 9321163, 9320615, 9321187, 9319487, 9319467, 9320484], dtype='int64')
ipdb> affinity_matrix.index
Index([u'001', u'002', u'003', u'004', u'005', u'008', u'009', u'010', u'011', u'014', u'015', u'016', u'018', u'020', u'021', u'022', u'024', u'025', u'026', u'027', u'028', u'029', u'030', u'032', u'033', u'034', u'035', u'036', u'039', u'040', u'041', u'042', u'043', u'044', u'045', u'047', u'047', u'048', u'050', u'053', u'054', u'055', u'056', u'057', u'058', u'059', u'060', u'061', u'062', u'063', u'065', u'067', u'068', u'069', u'070', u'071', u'072', u'073', u'074', u'075', u'076', u'077', u'078', u'080', u'082', u'083', u'084', u'085', u'086', u'089', u'090', u'091', u'092', u'093', u'094', u'095', u'096', u'097', u'098', u'100', u'101', u'103', u'104', u'105', u'106', u'107', u'108', u'109', u'110', u'111', u'112', u'113', u'114', u'115', u'116', u'117', u'118', u'119', u'121', u'122', ...], dtype='object')
ipdb> affinity_matrix.values.dtype
dtype('float64')
ipdb> 'sums' in affinity_matrix.index
False
Here is the error:
ipdb> affinity_matrix.loc['sums'] = affinity_matrix.sum(axis=0)
*** ValueError: cannot reindex from a duplicate axis
I tried to reproduce this with a simple example, but I failed
In [32]: import pandas as pd
In [33]: import numpy as np
In [34]: a = np.arange(35).reshape(5,7)
In [35]: df = pd.DataFrame(a, ['x', 'y', 'u', 'z', 'w'], range(10, 17))
In [36]: df.values.dtype
Out[36]: dtype('int64')
In [37]: df.loc['sums'] = df.sum(axis=0)
In [38]: df
Out[38]:
10 11 12 13 14 15 16
x 0 1 2 3 4 5 6
y 7 8 9 10 11 12 13
u 14 15 16 17 18 19 20
z 21 22 23 24 25 26 27
w 28 29 30 31 32 33 34
sums 70 75 80 85 90 95 100
| [
"This error usually rises when you join / assign to a column when the index has duplicate values. Since you are assigning to a row, I suspect that there is a duplicate value in affinity_matrix.columns, perhaps not shown in your question.\n",
"As others have said, you've probably got duplicate values in your original index. To find them do this:\ndf[df.index.duplicated()]\n",
"Indices with duplicate values often arise if you create a DataFrame by concatenating other DataFrames. IF you don't care about preserving the values of your index, and you want them to be unique values, when you concatenate the the data, set ignore_index=True.\nAlternatively, to overwrite your current index with a new one, instead of using df.reindex(), set:\ndf.index = new_index\n\n",
"For people who are still struggling with this error, it can also happen if you accidentally create a duplicate column with the same name. Remove duplicate columns like so:\ndf = df.loc[:,~df.columns.duplicated()]\n\n",
"Simple Fix\nRun this before grouping\ndf = df.reset_index()\n\nThanks to this github comment for the solution.\n",
"Simply skip the error using .values at the end.\naffinity_matrix.loc['sums'] = affinity_matrix.sum(axis=0).values\n\n",
"I came across this error today when I wanted to add a new column like this\ndf_temp['REMARK_TYPE'] = df.REMARK.apply(lambda v: 1 if str(v)!='nan' else 0)\n\nI wanted to process the REMARK column of df_temp to return 1 or 0. However I typed wrong variable with df. And it returned error like this:\n----> 1 df_temp['REMARK_TYPE'] = df.REMARK.apply(lambda v: 1 if str(v)!='nan' else 0)\n\n/usr/lib64/python2.7/site-packages/pandas/core/frame.pyc in __setitem__(self, key, value)\n 2417 else:\n 2418 # set column\n-> 2419 self._set_item(key, value)\n 2420 \n 2421 def _setitem_slice(self, key, value):\n\n/usr/lib64/python2.7/site-packages/pandas/core/frame.pyc in _set_item(self, key, value)\n 2483 \n 2484 self._ensure_valid_index(value)\n-> 2485 value = self._sanitize_column(key, value)\n 2486 NDFrame._set_item(self, key, value)\n 2487 \n\n/usr/lib64/python2.7/site-packages/pandas/core/frame.pyc in _sanitize_column(self, key, value, broadcast)\n 2633 \n 2634 if isinstance(value, Series):\n-> 2635 value = reindexer(value)\n 2636 \n 2637 elif isinstance(value, DataFrame):\n\n/usr/lib64/python2.7/site-packages/pandas/core/frame.pyc in reindexer(value)\n 2625 # duplicate axis\n 2626 if not value.index.is_unique:\n-> 2627 raise e\n 2628 \n 2629 # other\n\nValueError: cannot reindex from a duplicate axis\n\nAs you can see it, the right code should be\ndf_temp['REMARK_TYPE'] = df_temp.REMARK.apply(lambda v: 1 if str(v)!='nan' else 0)\n\nBecause df and df_temp have a different number of rows. So it returned ValueError: cannot reindex from a duplicate axis.\nHope you can understand it and my answer can help other people to debug their code.\n",
"In my case, this error popped up not because of duplicate values, but because I attempted to join a shorter Series to a Dataframe: both had the same index, but the Series had fewer rows (missing the top few). The following worked for my purposes:\ndf.head()\n SensA\ndate \n2018-04-03 13:54:47.274 -0.45\n2018-04-03 13:55:46.484 -0.42\n2018-04-03 13:56:56.235 -0.37\n2018-04-03 13:57:57.207 -0.34\n2018-04-03 13:59:34.636 -0.33\n\nseries.head()\ndate\n2018-04-03 14:09:36.577 62.2\n2018-04-03 14:10:28.138 63.5\n2018-04-03 14:11:27.400 63.1\n2018-04-03 14:12:39.623 62.6\n2018-04-03 14:13:27.310 62.5\nName: SensA_rrT, dtype: float64\n\ndf = series.to_frame().combine_first(df)\n\ndf.head(10)\n SensA SensA_rrT\ndate \n2018-04-03 13:54:47.274 -0.45 NaN\n2018-04-03 13:55:46.484 -0.42 NaN\n2018-04-03 13:56:56.235 -0.37 NaN\n2018-04-03 13:57:57.207 -0.34 NaN\n2018-04-03 13:59:34.636 -0.33 NaN\n2018-04-03 14:00:34.565 -0.33 NaN\n2018-04-03 14:01:19.994 -0.37 NaN\n2018-04-03 14:02:29.636 -0.34 NaN\n2018-04-03 14:03:31.599 -0.32 NaN\n2018-04-03 14:04:30.779 -0.33 NaN\n2018-04-03 14:05:31.733 -0.35 NaN\n2018-04-03 14:06:33.290 -0.38 NaN\n2018-04-03 14:07:37.459 -0.39 NaN\n2018-04-03 14:08:36.361 -0.36 NaN\n2018-04-03 14:09:36.577 -0.37 62.2\n\n",
"I wasted couple of hours on the same issue. In my case, I had to reset_index() of a dataframe before using apply function.\nBefore merging, or looking up from another indexed dataset, you need to reset the index as 1 dataset can have only 1 Index.\n",
"I got this error when I tried adding a column from a different table. Indeed I got duplicate index values along the way. But it turned out I was just doing it wrong: I actually needed to df.join the other table.\nThis pointer might help someone in a similar situation.\n",
"In my case it was caused by mismatch in dimensions:\naccidentally using a column from different df during the mul operation\n",
"This can also be a cause for this[:) I solved my problem like this]\nIt may happen even if you are trying to insert a dataframe type column inside dataframe\nyou can try this\ndf['my_new']=pd.Series(my_new.values)\n\n",
"if you get this error after merging two dataframe and remove suffix adnd try to write to excel\nYour problem is that there are columns you are not merging on that are common to both source DataFrames. Pandas needs a way to say which one came from where, so it adds the suffixes, the defaults being '_x' on the left and '_y' on the right.\nIf you have a preference on which source data frame to keep the columns from, then you can set the suffixes and filter accordingly, for example if you want to keep the clashing columns from the left:\n# Label the two sides, with no suffix on the side you want to keep\ndf = pd.merge(\n df, \n tempdf[what_i_care_about], \n on=['myid', 'myorder'], \n how='outer',\n suffixes=('', '_delete_suffix') # Left gets no suffix, right gets something identifiable\n)\n# Discard the columns that acquired a suffix\ndf = df[[c for c in df.columns if not c.endswith('_delete_suffix')]]\n\nAlternatively, you can drop one of each of the clashing columns prior to merging, then Pandas has no need to assign a suffix.\n",
"Just add .to_numpy() to the end of the series you want to concatenate.\n",
"It happened to me when I appended 2 dataframes into another (df3 = df1.append(df2)), so the output was:\ndf1\n A B\n0 1 a\n1 2 b\n2 3 c\n\ndf2\n A B\n0 4 d\n1 5 e\n2 6 f\n\ndf3\n A B\n0 1 a\n1 2 b\n2 3 c\n0 4 d\n1 5 e\n2 6 f\n\nThe simplest way to fix the indexes is using the \"df.reset_index(drop=bool, inplace=bool)\" method, as Connor said... you can also set the 'drop' argument True to avoid the index list to be created as a columns, and 'inplace' to True to make the indexes reset permanent.\nHere is the official refference: https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.reset_index.html\nIn addition, you can also use the \".set_index(keys=list, inplace=bool)\" method, like this:\nnew_index_list = list(range(0, len(df3)))\ndf3['new_index'] = new_index_list \ndf3.set_index(keys='new_index', inplace=True)\n\nofficial refference: https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.set_index.html\n",
"Make sure your index does not have any duplicates, I simply did df.reset_index(drop=True, inplace=True) and I don't get the error anymore! But you might want to keep the index, in that case just set drop to False\n",
"df = df.reset_index(drop=True) worked for me\n"
] | [
305,
239,
65,
41,
40,
21,
11,
6,
2,
1,
1,
0,
0,
0,
0,
0,
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0027236275_pandas_python.txt |