content (string, 85-101k) | title (string, 0-150) | question (string, 15-48k) | answers (sequence) | answers_scores (sequence) | non_answers (sequence) | non_answers_scores (sequence) | tags (sequence) | name (string, 35-137)
---|---|---|---|---|---|---|---|---|
Q:
Changing the indexing order of BitArray in Python
I am using a BitArray in my Python script, and I was wondering if there is a way to change the index order of the BitArray that I create. Right now the indexes go from 0 to N-1, where N is the number of bits. Is there a way to change the indexes to go from N-1 to 0?
dataBits = BitArray('0b10100000')
print(dataBits[0])
The above two lines return True for the first bit because the order of indexes is from 0 to N-1. Can I change the order of indexes so that this returns the last bit, False?
A:
Yes. What you're looking for is the module variable lsb0:
bitstring.lsb0 = True
dataBits = bitstring.BitArray('0b10100000')
print(dataBits[0])
Prints out False.
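For illustration, a minimal sketch (assuming the bitstring package is installed) of how the lsb0 option flips which end of the data index 0 refers to:
import bitstring

bitstring.lsb0 = False  # default: index 0 is the leftmost (most significant) bit
print(bitstring.BitArray('0b10100000')[0])  # True

bitstring.lsb0 = True   # index 0 is now the rightmost (least significant) bit
print(bitstring.BitArray('0b10100000')[0])  # False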
| Changing the indexing order of BitArray in Python | I am using a BitArray in my Python script, and I was wondering if there is a way to change the index order of the BitArray that I create. Right now the indexes go from 0 to N-1, where N is the number of bits. Is there a way to change the indexes to go from N-1 to 0?
dataBits = BitArray('0b10100000')
print(dataBits[0])
The above two lines return True for the first bit because the order of indexes is from 0 to N-1. Can I change the order of indexes so that this returns the last bit, False?
| [
"Yes. What you're looking for is the module variable lsb0:\nbitstring.lsb0 = True\ndataBits = bitstring.BitArray('0b10100000')\nprint(dataBits[0])\n\nPrints out False.\n"
] | [
0
] | [] | [] | [
"bitstring",
"python"
] | stackoverflow_0058862579_bitstring_python.txt |
Q:
Involutive (up to precision) operations "dataframe to csv" and "csv to dataframe"
I have a numerically intensive vectorized Python function def f(x,y) in two variables that I evaluate (with frompyfunc and broadcasting) on a np.array X = [x0, ...., xN-1] of x's and a np.array Y = [y0, ...., yM-1] of y's, with N, M between 5 and 10 thousand. This returns a 2D np.array Z of shape (N,M) containing z[i,j]'s such that z[i,j] = f(X[i], Y[j]) for all i and j. The function f is optimized and this already takes roughly 45 minutes. As I then write, debug and profile code using Z, I want to "save" the "matrix" Z in a csv file, in this kind of format:
0.25 0.5 0.75 1 1.25
0.1 0.876155737 0.888282356 0.904731158 0.910351368 0.906284762
0.2 0.810528369 0.797068044 0.806520168 0.805697704 0.80659234
0.3 0.696280633 0.704307378 0.703540949 0.705198518 0.708672067
0.4 0.601264163 0.605194 0.607882 0.611616655 0.612408848
0.5 0.502995372 0.509209974 0.513651558 0.516065068 0.51994982
(This is a tiny upper-left part of my "matrix": the first column is the beginning of X, the first row is the beginning of Y, and the rest is the matrix itself, meaning that, for instance, f(0.4, 0.75) = 0.607882.)
I naturally used a pd.DataFrame as follows:
df = pd.DataFrame(data=Z, columns=Y, index=X)
df.to_csv(some_full_path_filename)
and indeed the csv file looks like I want it to look, that is, like the small bit of the matrix above.
Now if I do
df2 = pd.read_csv(some_full_path_filename)
df2.to_csv(some_full_path_filename2, index=False)
the second csv file looks like :
Unnamed:0 0.25 0.5 0.75 1 1.25
0.1 0.876155737 0.888282356 0.904731158 0.910351368 0.906284762
0.2 0.810528369 0.797068044 0.806520168 0.805697704 0.80659234
0.3 0.696280633 0.704307378 0.703540949 0.705198518 0.708672067
0.4 0.601264163 0.605194 0.607882 0.611616655 0.612408848
0.5 0.502995372 0.509209974 0.513651558 0.516065068 0.51994982
which is the closest to the first csv file I have managed to get while experimenting with pandas myself. And of course, the two dataframes df and df2 are not "equal".
Hence the question's title: an operation is involutive when applying it twice gives back the starting value, and in that sense, no, my "dataframe to csv file" and "csv file to dataframe" operations are not involutive.
To be precise, there are floating point rounding differences in the dataframes and the csv files; for example one could contain 0.0072618782055291 where the other contains 0.0072618782055290999999999 at the same place. This is not a problem for me.
What I would like is for my "dataframe to csv file" and "csv file to dataframe" operations to produce dataframes and csv files that are structurally equal.
"Structurally" meaning:
for the csv files: to have the same values (up to rounding) and strings (if any) in every "cell"
for the dataframes: of course they won't be equal per se, as they don't "point" to the same place in allocated memory, but I would like them to be equal in the sense that all numerical/text values in them represent the same numbers/strings (up to rounding for numbers)
A:
There will be differences, because in a csv file all data is saved as strings. If you use index_col=0, the float index is correctly recreated, but the column names stay strings, and data in the columns may also be parsed differently (e.g. if strings and numerics are mixed):
f = 'file.csv'
df.to_csv(f)
df = pd.read_csv(f, index_col=0)
print (df)
0.25 0.5 0.75 1 1.25
0.1 0.876156 0.888282 0.904731 0.910351 0.906285
0.2 0.810528 0.797068 0.806520 0.805698 0.806592
0.3 0.696281 0.704307 0.703541 0.705199 0.708672
0.4 0.601264 0.605194 0.607882 0.611617 0.612409
0.5 0.502995 0.509210 0.513652 0.516065 0.519950
print (df.columns)
Index(['0.25', '0.5', '0.75', '1.0', '1.25'], dtype='object')
Another idea is to use pickle, via read_pickle and DataFrame.to_pickle, which correctly saves DataFrames together with their columns and index:
print (df.columns)
Float64Index([0.25, 0.5, 0.75, 1.0, 1.25], dtype='float64')
f = 'file'
df.to_pickle(f)
df1 = pd.read_pickle(f)
print (df1)
0.25 0.50 0.75 1.00 1.25
0.1 0.876156 0.888282 0.904731 0.910351 0.906285
0.2 0.810528 0.797068 0.806520 0.805698 0.806592
0.3 0.696281 0.704307 0.703541 0.705199 0.708672
0.4 0.601264 0.605194 0.607882 0.611617 0.612409
0.5 0.502995 0.509210 0.513652 0.516065 0.519950
print (df1.columns)
Float64Index([0.25, 0.5, 0.75, 1.0, 1.25], dtype='float64')
print (df1.equals(df))
True
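If you do want to stay with csv, a minimal sketch of the idea (the data below is made up, standing in for the real X, Y and Z) is to cast the column labels back to float after reading and compare values with a tolerance:
import numpy as np
import pandas as pd

X = np.array([0.1, 0.2, 0.3])          # hypothetical row labels
Y = np.array([0.25, 0.5, 0.75])        # hypothetical column labels
Z = np.random.rand(len(X), len(Y))     # hypothetical values

df = pd.DataFrame(data=Z, columns=Y, index=X)
df.to_csv('file.csv')

df2 = pd.read_csv('file.csv', index_col=0)
df2.columns = df2.columns.astype(float)        # column labels come back as strings

print(np.allclose(df.values, df2.values))      # True, up to float rounding
print(df.columns.equals(df2.columns))          # True once the labels are floats again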
| Involutive (up to precision) operations "dataframe to csv" and "csv to dataframe" | I have a numerically really intensive vectorized python function def f(x,y) in two variables that I evaluate (with frompyfunc and broadcasting) on a np.array X = [x0, ...., xN-1] of x's and a np.array Y = [y0, ...., yM-1] of y's with N,M between 5 and 10 thousands. This returns as result a 2D np.array Z of shape (N,M) containing z[i,j]'s such that z[i,j] = f(X[i], Y[j]) for all i and j. The function f is optimized and this already takes roughly 45 minutes. As I then write, debug, profile code using Z, I want to "save" the "matrix" Z in a csv file, in this kind of format :
0.25 0.5 0.75 1 1.25
0.1 0.876155737 0.888282356 0.904731158 0.910351368 0.906284762
0.2 0.810528369 0.797068044 0.806520168 0.805697704 0.80659234
0.3 0.696280633 0.704307378 0.703540949 0.705198518 0.708672067
0.4 0.601264163 0.605194 0.607882 0.611616655 0.612408848
0.5 0.502995372 0.509209974 0.513651558 0.516065068 0.51994982
(This is a tiny upper-left part of my "matrix": the first column is the beginning of X, the first row is the beginning of Y, and the rest is the matrix itself, meaning that, for instance, f(0.4, 0.75) = 0.607882.)
I naturally used a pd.dataframe as follows :
df = pd.DataFrame(data=Z, columns=Y, index=X)
df.to_csv(some_full_path_filename)
and indeed the csv file looks like I want it to look, that is, like the small bit of the matrix above.
Now if I
df2 = pd.read_csv(some_full_path_filename)
df2.to_csv(some_full_path_filename2, index=False)
the second csv file looks like :
Unnamed:0 0.25 0.5 0.75 1 1.25
0.1 0.876155737 0.888282356 0.904731158 0.910351368 0.906284762
0.2 0.810528369 0.797068044 0.806520168 0.805697704 0.80659234
0.3 0.696280633 0.704307378 0.703540949 0.705198518 0.708672067
0.4 0.601264163 0.605194 0.607882 0.611616655 0.612408848
0.5 0.502995372 0.509209974 0.513651558 0.516065068 0.51994982
which is the closest to the first csv file I have managed to get while experimenting with pandas myself. And of course, the two dataframes df and df2 are not "equal".
Hence the question's title: an operation is involutive when applying it twice gives back the starting value, and in that sense, no, my "dataframe to csv file" and "csv file to dataframe" operations are not involutive.
To be precise, there are floating point rounding differences in the dataframes and the csv files; for example one could contain 0.0072618782055291 where the other contains 0.0072618782055290999999999 at the same place. This is not a problem for me.
What I would like is for my "dataframe to csv file" and "csv file to dataframe" operations to produce dataframes and csv files that are structurally equal.
"Structurally" meaning:
for the csv files: to have the same values (up to rounding) and strings (if any) in every "cell"
for the dataframes: of course they won't be equal per se, as they don't "point" to the same place in allocated memory, but I would like them to be equal in the sense that all numerical/text values in them represent the same numbers/strings (up to rounding for numbers)
| [
"It should be difference, because in csv all data are saved like strings, so if use index_col=0 here is correctly create FloatIndex, but columns names are strings, also data in columns should be parsed differently (e.g. if mixed strings and numeric):\nf = 'file.csv'\ndf.to_csv(f)\n\ndf = pd.read_csv(f, index_col=0)\nprint (df)\n 0.25 0.5 0.75 1 1.25\n0.1 0.876156 0.888282 0.904731 0.910351 0.906285\n0.2 0.810528 0.797068 0.806520 0.805698 0.806592\n0.3 0.696281 0.704307 0.703541 0.705199 0.708672\n0.4 0.601264 0.605194 0.607882 0.611617 0.612409\n0.5 0.502995 0.509210 0.513652 0.516065 0.519950\n\n\nprint (df.columns)\nIndex(['0.25', '0.5', '0.75', '1.0', '1.25'], dtype='object')\n\nAnother idea is use pickle, read_pickle and DataFrame.to_pickle for correct save DataFrames with columns and index:\nprint (df.columns)\nFloat64Index([0.25, 0.5, 0.75, 1.0, 1.25], dtype='float64')\n\nf = 'file'\ndf.to_pickle(f)\n\ndf1 = pd.read_pickle(f)\nprint (df1)\n 0.25 0.50 0.75 1.00 1.25\n0.1 0.876156 0.888282 0.904731 0.910351 0.906285\n0.2 0.810528 0.797068 0.806520 0.805698 0.806592\n0.3 0.696281 0.704307 0.703541 0.705199 0.708672\n0.4 0.601264 0.605194 0.607882 0.611617 0.612409\n0.5 0.502995 0.509210 0.513652 0.516065 0.519950\n\nprint (df1.columns)\nFloat64Index([0.25, 0.5, 0.75, 1.0, 1.25], dtype='float64')\n\nprint (df1.equals(df))\nTrue\n\n"
] | [
1
] | [] | [] | [
"dataframe",
"numpy",
"pandas",
"python"
] | stackoverflow_0074612027_dataframe_numpy_pandas_python.txt |
Q:
Best way to find a folder in the test directory for pytest
I have a folder structure as below for my pytest files
tests/my_test1.py
tests/input/data.txt
tests/bench/bench.csv
The test my_test1.py would have to read the file tests/input/data.txt. The challenge is to find the location of the file.
The tests can be invoked in multiple ways, as mentioned here: https://docs.pytest.org/en/6.2.x/usage.html. The current working directory may be different based on the invocation, so opening the file in the input folder with data_f = open("./input/data.txt") would be incorrect.
What would be the correct path to access the file? I tried to infer the path from os.getenv('PYTEST_CURRENT_TEST'). However, this always gives the value of the path as 'tests/' no matter where the test is invoked from.
A:
You could create a fixture in the root of the tests folder (tests/conftest.py) to help you read any file relative to the tests folder:
from pathlib import Path
import pytest
@pytest.fixture()
def get_file():
    def _(file_path: str):
        return (Path(__file__).parent / file_path).read_text()

    return _
In any test you could then use the fixture to read the file for you:
def test_a(get_file):
    content = get_file('input/data.txt')
    ...
In this example the fixture returns a function that reads the content of any file within the tests/ directory, but there are multiple other options:
The fixture can return a Path object instead of the content (see the sketch after this list).
If you always use the same file, the fixture can return the file object/content directly, so you wouldn't need to call it inside your test function.
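A minimal sketch of the first option (the fixture name here is illustrative, not from the original answer):
# tests/conftest.py
from pathlib import Path
import pytest

@pytest.fixture()
def tests_dir() -> Path:
    # __file__ is tests/conftest.py, so .parent is the tests/ folder regardless of the cwd
    return Path(__file__).parent

# tests/my_test1.py
def test_a(tests_dir):
    content = (tests_dir / "input" / "data.txt").read_text()
    ...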
| Best way to find a folder in the test directory for pytest | I have a folder structure as below for my pytest files
tests/my_test1.py
tests/input/data.txt
tests/bench/bench.csv
The test my_test1.py would have to read the file tests/input/data.txt. The challenge is to find the location of the file.
The tests can be invoked in multiple ways, as mentioned here: https://docs.pytest.org/en/6.2.x/usage.html. The current working directory may be different based on the invocation, so opening the file in the input folder with data_f = open("./input/data.txt") would be incorrect.
What would be the correct path to access the file? I tried to infer the path from os.getenv('PYTEST_CURRENT_TEST'). However, this always gives the value of the path as 'tests/' no matter where the test is invoked from.
| [
"You could create a fixture in the root of the tests folder (tests/conftest.py) to help you read any file relative to the tests folder:\nfrom pathlib import Path\nimport pytest\n\[email protected]()\ndef get_file():\n def _(file_path: str):\n return (Path(__file__).parent / file_path).read_text()\n\n return _\n\nIn any test you could then use the fixture to read the file for you:\ndef test_a(get_file):\n content = get_file('input/data.txt')\n ...\n\nIn this example the fixture returns a function that would read the content of any file within tests/ directory, but there are multiple other options:\n\nThe fixture can return a Path object instead of the content.\nIf you would always use the same file, it can return file object/content directly so within your test function you wouldn't need to call it.\n\n"
] | [
1
] | [] | [] | [
"pytest",
"python"
] | stackoverflow_0074611616_pytest_python.txt |
Q:
Run 2 applications simultaneously with Python and save csv files
I am trying to write a single Python app that extracts data from an ICU monitor via ETH and grabs data from an HTTP endpoint, and saves the incoming data as csv files. They should be opened at the exact same time so that the data timestamp is the same.
The program that reads the ICU data is called VitalSignsCapture.
VitalSignCapture
As shown in the photo, there are 6 in-app settings you have to make to run the program to your liking.
So far I have written code that opens both at the same time, but there is no possibility to set the in-app options.
Could somebody please help me? Thank you!
A:
I guess you could use multiple processes:
one main process which defines the future timestamp at which both actions will be performed, and which passes this timestamp via a queue to the second.
Then at the given timestamp both processes execute their own tasks.
(It seems better to edit than to add another answer, so I am editing this one.)
You can make 2 functions (A & B), one for each of your two actions.
def A(*args, **kwargs):
    ...  # get ICU data and save it to whatever format you need

def B(*args, **kwargs):
    ...  # get HTTP request data and save it to whatever format you need
Then, you define when you want to call them and use, for instance, threading.Timer to call them. Timer takes as input how long it should wait, from the moment it is started, before running a specific function.
So you do your math to define how long you want to wait before calling both functions together. For instance, something like:
waitTime = target_timeStamp - current_time
and you call your functions
threading.Timer(waitTime, A).start()
threading.Timer(waitTime, B).start()
See the docs there for the details of Timer (at the bottom of the page):
https://docs.python.org/3/library/threading.html
The thing is that the two threads will only be "simultaneous" modulo your interpreter and machine. If that limits you and you need the time between the two functions to be reduced, then you need to do something similar, but calling your Timers from 2 different processes beforehand.
And if that is not doing it for you, then I'd suggest moving to C/C++ if you really need to go as fast as the OS allows, but I am not sure it will really be relevant in your case, since your HTTP request will have some delay due to the ping anyway.
Also, you should save each data point with its corresponding timestamp, so if you cannot match the timings exactly, you can also interpolate the values.
Hope this explanation helps
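Putting the pieces together, a minimal runnable sketch (the function bodies and the target time are placeholders, not from the original answer):
import threading
import time

def capture_icu(*args, **kwargs):
    pass  # placeholder: read the ICU data and append it to a csv file

def capture_http(*args, **kwargs):
    pass  # placeholder: call the HTTP endpoint and append the result to a csv file

target_timestamp = time.time() + 5.0        # hypothetical start time, 5 seconds from now
wait_time = target_timestamp - time.time()  # how long both timers should wait

threading.Timer(wait_time, capture_icu).start()
threading.Timer(wait_time, capture_http).start()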
| Run 2 applications simultaneously with Python and save csv files | I am trying to write a single Python app that extracts data from an ICU monitor via ETH and grabs data from an HTTP endpoint, and saves the incoming data as csv files. They should be opened at the exact same time so that the data timestamp is the same.
The program that reads the ICU data is called VitalSignsCapture.
VitalSignCapture
As shown in the photo, there are 6 in-app settings you have to make to run the program to your liking.
So far I have written code that opens both at the same time, but there is no possibility to set the in-app options.
Could somebody please help me? Thank you!
| [
"I guess you could multiple processes.\none main which define the future time stamp at which both action will be performed, and which passes this timestamp via a queue to the second.\nThen at the given time stamp both process execute their own tasks~\nseems like it's better to edit than to add another answer so I edit~\nYou can make 2 functions (A & B), one for each of your two actions.\ndef A(*args, **kwargs):\n get ICU data and save it to whatever format you need\ndef B(*args, **kwargs):\n get HTTP request data and save it to whatever format you need\n\nThen, you define when you want to call them and use for instance threading.Timer to call them. Timer takes as input how long from call it should wait before running a specific function.\nSo you do your math to define how long you want to wait before calling both functions together. For instance, something like :\nwaitTime = target_timeStamp - current_time\nand you call your functions\nthreading.Timer(waitTime, A).start()\nthreading.Timer(waitTime, B).start()\n\ndoc there for the details of Timer (at the bottom)\nhttps://docs.python.org/3/library/threading.html\nThe thing is that the two thread will be \"simultaneous\" modulo your interpreter and machine. If you get limited and need the time between the two functions to be reduced, then you need to do something similar, but before you must call your Timer from 2 different processes.\nAnd if that it not doing it for you, then I'd suggest to move to C/CPP if you really need to go as fast as the OS can do, but I am not sure it will be really relevant in your case, since anyway your http request will have some delay due to the ping.\nAlso you should save each data with its corresponding timestamp, so if you cannot match exactly the timings you want, you can also interpolate the values~\nHope this explanation helps\n"
] | [
0
] | [] | [] | [
"csv",
"python",
"windows_11"
] | stackoverflow_0074611815_csv_python_windows_11.txt |
Q:
How to group a list of paths by their parent?
I have a list of paths and I want to dynamically separate them into the lists they should belong to, based on the folder they come from. The first two come from the "tent1" folder and I want them together in one list, and so on. I don't want to hardcode the names of those folders and then append paths to them. For example:
paths = [
'/var/lib/cons/states/tent1/tops-ok_2022_11_28',
'/var/lib/cons/states/tent1/tops-ok_2022_11_27',
'/var/lib/cons/states/tent2/tops-ok_2022_11_28',
'/var/lib/cons/states/tent2/tops-ok_2022_11_27',
'/var/lib/cons/states/tent3/tops-ok_2022_11_28',
'/var/lib/cons/states/tent3/tops-ok_2022_11_27',
'/var/lib/cons/states/tent4/tops-ok_2022_11_28',
'/var/lib/cons/states/tent4/tops-ok_2022_11_27',
]
and I want them to be like this:
[['/var/lib/cons/states/tent1/tops-ok_2022_11_28',
'/var/lib/cons/states/tent1/tops-ok_2022_11_27'],
['/var/lib/cons/states/tent2/tops-ok_2022_11_28',
'/var/lib/cons/states/tent2/tops-ok_2022_11_27'],
['/var/lib/cons/states/tent3/tops-ok_2022_11_28',
'/var/lib/cons/states/tent3/tops-ok_2022_11_27'],
['/var/lib/cons/states/tent4/tops-ok_2022_11_28',
'/var/lib/cons/states/tent4/tops-ok_2022_11_27']]
A:
If your input is sorted by path (i.e. paths with the same parent directory are consecutive), you can use itertools.groupby:
from itertools import groupby
from os.path import dirname
out = [list(g) for _,g in groupby(paths, dirname)]
If the paths are not sorted, you can use a dictionary as an intermediate:
out = {}
for p in paths:
    (out.setdefault(dirname(p), [])
     .append(p)
    )

out = list(out.values())
Output:
[['/var/lib/cons/states/tent1/tops-ok_2022_11_28',
'/var/lib/cons/states/tent1/tops-ok_2022_11_27'],
['/var/lib/cons/states/tent2/tops-ok_2022_11_28',
'/var/lib/cons/states/tent2/tops-ok_2022_11_27'],
['/var/lib/cons/states/tent3/tops-ok_2022_11_28',
'/var/lib/cons/states/tent3/tops-ok_2022_11_27'],
['/var/lib/cons/states/tent4/tops-ok_2022_11_28',
'/var/lib/cons/states/tent4/tops-ok_2022_11_27']]
alternative with pathlib:
from itertools import groupby
from pathlib import Path
out = [list(g) for _,g in groupby(paths, lambda x: Path(x).parent)]
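For completeness, a sketch of the same unsorted-input idea using collections.defaultdict (equivalent to the dictionary version above, just without setdefault):
from collections import defaultdict
from os.path import dirname

groups = defaultdict(list)
for p in paths:          # paths is the list from the question
    groups[dirname(p)].append(p)

out = list(groups.values())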
| How to group a list of paths by their parent? | I have a list of paths and I want to dynamically separate them into the lists they should belong to, based on the folder they come from. The first two come from the "tent1" folder and I want them together in one list, and so on. I don't want to hardcode the names of those folders and then append paths to them. For example:
paths = [
'/var/lib/cons/states/tent1/tops-ok_2022_11_28',
'/var/lib/cons/states/tent1/tops-ok_2022_11_27',
'/var/lib/cons/states/tent2/tops-ok_2022_11_28',
'/var/lib/cons/states/tent2/tops-ok_2022_11_27',
'/var/lib/cons/states/tent3/tops-ok_2022_11_28',
'/var/lib/cons/states/tent3/tops-ok_2022_11_27',
'/var/lib/cons/states/tent4/tops-ok_2022_11_28',
'/var/lib/cons/states/tent4/tops-ok_2022_11_27',
]
and I want them to be like this:
[['/var/lib/cons/states/tent1/tops-ok_2022_11_28',
'/var/lib/cons/states/tent1/tops-ok_2022_11_27'],
['/var/lib/cons/states/tent2/tops-ok_2022_11_28',
'/var/lib/cons/states/tent2/tops-ok_2022_11_27'],
['/var/lib/cons/states/tent3/tops-ok_2022_11_28',
'/var/lib/cons/states/tent3/tops-ok_2022_11_27'],
['/var/lib/cons/states/tent4/tops-ok_2022_11_28',
'/var/lib/cons/states/tent4/tops-ok_2022_11_27']]
| [
"If your input is sorted by path (i.e. the same paths are sequential), you can use itertools.groupby:\nfrom itertools import groupby\nfrom os.path import dirname\n\nout = [list(g) for _,g in groupby(paths, dirname)]\n\nIf the paths are not sorted, you can use a dictionary as intermediate:\nout = {}\nfor p in paths:\n (out.setdefault(dirname(p), [])\n .append(p)\n )\n \nout = list(out.values())\n\nOutput:\n[['/var/lib/cons/states/tent1/tops-ok_2022_11_28',\n '/var/lib/cons/states/tent1/tops-ok_2022_11_27'],\n ['/var/lib/cons/states/tent2/tops-ok_2022_11_28',\n '/var/lib/cons/states/tent2/tops-ok_2022_11_27'],\n ['/var/lib/cons/states/tent3/tops-ok_2022_11_28',\n '/var/lib/cons/states/tent3/tops-ok_2022_11_27'],\n ['/var/lib/cons/states/tent4/tops-ok_2022_11_28',\n '/var/lib/cons/states/tent4/tops-ok_2022_11_27']]\n\nalternative with pathlib:\nfrom itertools import groupby\nfrom pathlib import Path\n\nout = [list(g) for _,g in groupby(paths, lambda x: Path(x).parent)]\n\n"
] | [
6
] | [] | [] | [
"path",
"python"
] | stackoverflow_0074612068_path_python.txt |
Q:
Mocking os.environ with python unittests
I am trying to test a class that handles the working directory for me based on a given parameter. To do so, we are using a class variable to map them.
When a specific value is passed, the path is retrieved from the environment variables (See baz in the example below). This is the specific case that I'm trying to test.
I'm using Python 3.8.13 and unittest.
I'm trying to avoid:
I don't want to mock the WorkingDirectory.map dictionary because I want to make sure we are fetching from the environ with that particular variable (BAZ_PATH).
Unless it is the only solution, I would like to avoid editing the values during the test, i.e. I would prefer not to do something like: os.environ["baz"] = DUMMY_BAZ_PATH
What I've tried
I tried mocking the environ as a dictionary, as suggested in other publications, but I can't make it work for some reason.
# working_directory.py
import os
class WorkingDirectory:
    map = {
        "foo": "path/to/foo",
        "bar": "path/to/bar",
        "baz": os.environ.get("BAZ_PATH"),
    }

    def __init__(self, env: str):
        self.env = env
        self.path = self.map[self.env]

    @property
    def data_dir(self):
        return os.path.join(self.path, "data")

    # Other similar methods...
Test file:
# test.py
import os
import unittest
from unittest import mock
from working_directory import WorkingDirectory
DUMMY_BAZ_PATH = "path/to/baz"
class TestWorkingDirectory(unittest.TestCase):
    @mock.patch.dict(os.environ, {"BAZ_PATH": DUMMY_BAZ_PATH})
    def test_controlled_baz(self):
        wd = WorkingDirectory("baz")
        self.assertEqual(wd.path, DUMMY_BAZ_PATH)
Error
As shown in the error, os.environ doesn't seem to be properly patched as it returns Null.
======================================================================
FAIL: test_controlled_baz (test_directory_structure_utils.TestWorkingDirectory)
----------------------------------------------------------------------
Traceback (most recent call last):
File "~/.pyenv/versions/3.8.13/lib/python3.8/unittest/mock.py", line 1756, in _inner
return f(*args, **kw)
File "~/Projects/dummy_project/tests/unit/test_directory_structure_utils.py", line 127, in test_controlled_baz
self.assertEqual(wd.path, DUMMY_BAZ_PATH)
AssertionError: None != 'path/to/baz'
----------------------------------------------------------------------
Ran 136 tests in 0.325s
FAILED (failures=1, skipped=5)
This seems to be because BAZ_PATH doesn't actually exist in my environment. However, I would expect this to be OK since it is being patched.
When, in the mapping dictionary "baz": os.environ.get("BAZ_PATH"), I replace BAZ_PATH with a variable that actually exists in my environment, e.g. HOME, it returns the actual value of HOME instead of DUMMY_BAZ_PATH, which leads me to think that I'm definitely doing something wrong with the patching:
AssertionError: '/Users/cestla' != 'path/to/baz'
Expected result
Well, obviously, I am expecting test_controlled_baz to pass successfully.
A:
So the problem is that you added map as a static variable.
Your patch works correctly as you can see here:
patch actually works
The problem is that by the time the test runs it's already too late, because the map variable was already evaluated (before the patch).
If you want, you can move it into __init__ and it will work correctly:
class WorkingDirectory:
    def __init__(self, env: str):
        self.map = {
            "foo": "path/to/foo",
            "bar": "path/to/bar",
            "baz": os.environ.get("BAZ_PATH")
        }
        self.env = env
        self.path = self.map[self.env]
If for some reason you wish to keep it static, you have to also patch the object itself.
Writing something like this will do the trick:
class TestWorkingDirectory(unittest.TestCase):
    @mock.patch.dict(os.environ, {"BAZ_PATH": DUMMY_BAZ_PATH})
    def test_controlled_baz(self):
        with mock.patch.object(WorkingDirectory, "map", {
            "foo": "path/to/foo",
            "bar": "path/to/bar",
            "baz": os.environ.get("BAZ_PATH")
        }):
            wd = WorkingDirectory("baz")
            self.assertEqual(wd.path, DUMMY_BAZ_PATH)
A:
That's not directly an answer to your question, but a valid answer either way imo:
Don't try to patch that (it's possible, but harder and cumbersome).
Use a config file for your project.
E.g. use pyproject.toml and configure the pytest env option inside it (the env key is provided by the pytest-env plugin):
[tool.pytest.ini_options]
env=[
"SOME_VAR_FOR_TESTS=some_value_for_that_var"
]
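For completeness, a small sketch of the same test using pytest's built-in monkeypatch fixture (a sketch not taken from either answer); it carries the same caveat as above, because WorkingDirectory.map is evaluated at import time:
def test_controlled_baz(monkeypatch):
    monkeypatch.setenv("BAZ_PATH", DUMMY_BAZ_PATH)
    # only effective if the lookup happens after the test starts,
    # e.g. once map is built inside __init__ as suggested in the first answer
    wd = WorkingDirectory("baz")
    assert wd.path == DUMMY_BAZ_PATH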
| Mocking os.environ with python unittests | I am trying to test a class that handles for me the working directory based on a given parameter. To do so, we are using a class variable to map them.
When a specific value is passed, the path is retrieved from the environment variables (See baz in the example below). This is the specific case that I'm trying to test.
I'm using Python 3.8.13 and unittest.
I'm trying to avoid:
I don't want to mock the WorkingDirectory.map dictionary because I want to make sure we are fetching from the environ with that particular variable (BAZ_PATH).
Unless it is the only solution, I would like to avoid editing the values during the test, i.e. I would prefer not to do something like: os.environ["baz"] = DUMMY_BAZ_PATH
What I've tried
I tried mocking up the environ as a dictionary as suggested in other publications, but I can't make it work for some reason.
# working_directory.py
import os
class WorkingDirectory:
map = {
"foo": "path/to/foo",
"bar": "path/to/bar",
"baz": os.environ.get("BAZ_PATH"),
}
def __init__(self, env: str):
self.env = env
self.path = self.map[self.env]
@property
def data_dir(self):
return os.path.join(self.path, "data")
# Other similar methods...
Test file:
# test.py
import os
import unittest
from unittest import mock
from working_directory import WorkingDirectory
DUMMY_BAZ_PATH = "path/to/baz"
class TestWorkingDirectory(unittest.TestCase):
@mock.patch.dict(os.environ, {"BAZ_PATH": DUMMY_BAZ_PATH})
def test_controlled_baz(self):
wd = WorkingDirectory("baz")
self.assertEqual(wd.path, DUMMY_BAZ_PATH)
Error
As shown in the error, os.environ doesn't seem to be properly patched as it returns Null.
======================================================================
FAIL: test_controlled_baz (test_directory_structure_utils.TestWorkingDirectory)
----------------------------------------------------------------------
Traceback (most recent call last):
File "~/.pyenv/versions/3.8.13/lib/python3.8/unittest/mock.py", line 1756, in _inner
return f(*args, **kw)
File "~/Projects/dummy_project/tests/unit/test_directory_structure_utils.py", line 127, in test_controlled_baz
self.assertEqual(wd.path, DUMMY_BAZ_PATH)
AssertionError: None != 'path/to/baz'
----------------------------------------------------------------------
Ran 136 tests in 0.325s
FAILED (failures=1, skipped=5)
This seems to be because BAZ_PATH doesn't actually exist in my environment. However, I would expect this to be OK since it is being patched.
When, in the mapping dictionary "baz": os.environ.get("BAZ_PATH"), I replace BAZ_PATH with a variable that actually exists in my environment, e.g. HOME, it returns the actual value of HOME instead of DUMMY_BAZ_PATH, which leads me to think that I'm definitely doing something wrong with the patching:
AssertionError: '/Users/cestla' != 'path/to/baz'
Expected result
Well, obviously, I am expecting test_controlled_baz to pass successfully.
| [
"So the problem is that you added map as a static variable.\nYour patch works correctly as you can see here:\npatch actually works\nThe problem is that when it runs it's already too late because the map variable was already calculated (before the patch).\nIf you want you can move it to the init function and it will function correctly:\nclass WorkingDirectory:\n\ndef __init__(self, env: str):\n self.map = {\n \"foo\": \"path/to/foo\",\n \"bar\": \"path/to/bar\",\n \"baz\": os.environ.get(\"BAZ_PATH\")\n }\n self.env = env\n self.path = self.map[self.env]\n\nIf for some reason you wish to keep it static, you have to also patch the object itself.\nwriting something like this will do the trick:\nclass TestWorkingDirectory(unittest.TestCase):\[email protected](os.environ, {\"BAZ_PATH\": DUMMY_BAZ_PATH})\ndef test_controlled_baz(self):\n with mock.patch.object(WorkingDirectory, \"map\", {\n \"foo\": \"path/to/foo\",\n \"bar\": \"path/to/bar\",\n \"baz\": os.environ.get(\"BAZ_PATH\")\n }):\n wd = WorkingDirectory(\"baz\")\n self.assertEqual(wd.path, DUMMY_BAZ_PATH)\n\n",
"That's not directly answer to your question but a valid answer either way imo:\nDon't try to patch that (it's possible, but harder and cumbersome).\nUse config file for your project.\ne.g. use pyproject.toml and inside configure the pytest extension:\n[tool.pytest.ini_options]\nenv=[\n\"SOME_VAR_FOR_TESTS=some_value_for_that_var\"\n]\n\n"
] | [
5,
0
] | [] | [] | [
"environment_variables",
"python",
"python_unittest",
"unit_testing"
] | stackoverflow_0074611329_environment_variables_python_python_unittest_unit_testing.txt |
Q:
Python Dataframe editing
I want to replace ( [ with ([.
Essentially, get rid of the space between the brackets.
But df.replace() does not work.
Even though the code executes,
the output remains the same.
Code
cdf=cdf.replace('( [','([')
cdf is a dataframe with 3 columns.
There is no error, it's just that the replacement is not happening.
A:
If I understood your question right, then the df.applymap(mapping) function is what you are searching for. It applies the mapping (for example a lambda) to all cells of your data frame. Here is an example:
import pandas as pd
data = {"c1": ["Test", "Test[ (Test)]"],
"c2": ["Test", "Test[ (Test)]"],
"c3": ["Test", "Test[ (Test)]"]}
df = pd.DataFrame.from_dict(data)
filtered = df.applymap(lambda s : s.replace("[ (", "[("))
print(filtered)
And this is the result:
c1 c2 c3
0 Test Test Test
1 Test[(Test)] Test[(Test)] Test[(Test)]
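Alternatively (a sketch, not from the original answer), DataFrame.replace can do the same substring substitution once regex matching is enabled; the plain call in the question only matches whole cell values, which is why it appeared to do nothing. Since ( and [ are regex metacharacters, they need escaping:
import pandas as pd

cdf = pd.DataFrame({"c1": ["a( [b"], "c2": ["ok"], "c3": ["c( [d"]})  # made-up data
cdf = cdf.replace(r"\( \[", "([", regex=True)
print(cdf)  # the "( [" substrings are now "([" with no space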
| Python Dataframe editing | I want to replace ( [ with ([.
Essentially get rid of the space between the brackets.
But df.replace() does not work.
Even if the code executes
the output remains the same.
Code
cdf=cdf.replace('( [','([')
cdf is a dataframe with 3 columns.
There is no error, it's just that the replacement is not happening.
| [
"If I understood you question right, then the df.applymap(mapping) function is what you are searching for. It applies the mapping (for example a lambda) to all cells of you data frame. Here is an example:\nimport pandas as pd\n\ndata = {\"c1\": [\"Test\", \"Test[ (Test)]\"],\n \"c2\": [\"Test\", \"Test[ (Test)]\"],\n \"c3\": [\"Test\", \"Test[ (Test)]\"]}\n\ndf = pd.DataFrame.from_dict(data)\n\nfiltered = df.applymap(lambda s : s.replace(\"[ (\", \"[(\"))\n\nprint(filtered)\n\nAnd this is the result:\n c1 c2 c3\n0 Test Test Test\n1 Test[(Test)] Test[(Test)] Test[(Test)]\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074611366_dataframe_pandas_python.txt |
Q:
Replace text after string in python
I'm trying to replace strings in a very big JSON file (30 MB), so I am trying to automate this.
Here is an example of what I have
"local_notifications": [
{
"is_enabled": false,
"notification_type": "basic",
"notification_title": "localised_strings.show_name_debug_lobbies",
"notification_body": "localised_strings.show_desc_debug_lobbies",
"schedule": {
"schedule_type": "dynamic",
"elapsed_time": [
{
"days": 0,
"hours": 0,
"minutes": 1
}
]
},
"priority": 0,
"priority_override": false,
"id": "debug_notification"
},
I would like to replace the notification title and body with their localised strings, which are at the bottom of the file, like this:
"localised_strings": [
{
"text": "Debug lobbies",
"id": "show_name_debug_lobbies"
},
{
"text": "Debug lobbies 2",
"id": "show_desc_debug_lobbies"
}
]
I want to do this in Python but I don't know how to do it. Can you help me?
P.S. The only piece of code I have is to decrypt the encrypted file:
import argparse
from argparse import RawTextHelpFormatter
xor_key = [0x61, 0x23, 0x21, 0x73, 0x43, 0x30, 0x2c, 0x2e]
description = ('Decrypt or encrypt content_v1 of Fall Guys: Ultimate Knockout\n'
'content_v1 is usually found inside %UserProfile%\AppData\LocalLow\Mediatonic\FallGuys_client')
argument_parser = argparse.ArgumentParser(description=description, formatter_class=RawTextHelpFormatter)
argument_parser.add_argument('input_file', help='content_v1 or a valid JSON file')
argument_parser.add_argument('output_file')
arguments = argument_parser.parse_args()
content = bytearray()
content_idx = 0
try:
    with open(arguments.input_file, 'rb') as input_file:
        while (byte := input_file.read(1)):
            content += bytes([ord(byte) ^ xor_key[content_idx % (len(xor_key))]])
            content_idx += 1
except (IOError, OSError) as exception:
    print('Error: could not read input file')
    exit()

try:
    with open(arguments.output_file, 'wb') as output_file:
        output_file.write(content)
except (IOError, OSError) as exception:
    print('Error: could not create output file')
    exit()
A:
Did you try to convert your JSON into a dictionary?
For instance, using this resource:
https://www.geeksforgeeks.org/convert-json-to-dictionary-in-python/
You convert to a dictionary, edit the contents of the keys you want, then save it back to JSON if you need to.
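To make that concrete, here is a minimal sketch (the file names are placeholders, and it assumes local_notifications and localised_strings are reachable as top-level keys, as in the snippets above):
import json

with open("content_v1.json", "r", encoding="utf-8") as f:   # placeholder input file
    data = json.load(f)

# map e.g. "show_name_debug_lobbies" -> "Debug lobbies"
lookup = {entry["id"]: entry["text"] for entry in data["localised_strings"]}

def resolve(value):
    # values look like "localised_strings.<id>"; swap in the localised text if known
    prefix = "localised_strings."
    if isinstance(value, str) and value.startswith(prefix):
        return lookup.get(value[len(prefix):], value)
    return value

for notification in data["local_notifications"]:
    notification["notification_title"] = resolve(notification["notification_title"])
    notification["notification_body"] = resolve(notification["notification_body"])

with open("content_v1_patched.json", "w", encoding="utf-8") as f:  # placeholder output file
    json.dump(data, f, indent=2)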
| Replace text after string in python | I'm trying to replace strings in a very big JSON file (30 MB), so I am trying to automate this.
Here is an example of what I have
"local_notifications": [
{
"is_enabled": false,
"notification_type": "basic",
"notification_title": "localised_strings.show_name_debug_lobbies",
"notification_body": "localised_strings.show_desc_debug_lobbies",
"schedule": {
"schedule_type": "dynamic",
"elapsed_time": [
{
"days": 0,
"hours": 0,
"minutes": 1
}
]
},
"priority": 0,
"priority_override": false,
"id": "debug_notification"
},
I would like to replace the notification title and body with their localised strings, which are at the bottom of the file, like this:
"localised_strings": [
{
"text": "Debug lobbies",
"id": "show_name_debug_lobbies"
},
{
"text": "Debug lobbies 2",
"id": "show_desc_debug_lobbies"
}
]
I want to do this in Python but I don't know how to do it. Can you help me?
P.S. The only part of code I have is to decrypt the crypted file:
import argparse
from argparse import RawTextHelpFormatter
xor_key = [0x61, 0x23, 0x21, 0x73, 0x43, 0x30, 0x2c, 0x2e]
description = ('Decrypt or encrypt content_v1 of Fall Guys: Ultimate Knockout\n'
'content_v1 is usually found inside %UserProfile%\AppData\LocalLow\Mediatonic\FallGuys_client')
argument_parser = argparse.ArgumentParser(description=description, formatter_class=RawTextHelpFormatter)
argument_parser.add_argument('input_file', help='content_v1 or a valid JSON file')
argument_parser.add_argument('output_file')
arguments = argument_parser.parse_args()
content = bytearray()
content_idx = 0
try:
with open(arguments.input_file, 'rb') as input_file:
while (byte := input_file.read(1)):
content += bytes([ord(byte) ^ xor_key[content_idx % (len(xor_key))]])
content_idx += 1
except (IOError, OSError) as exception:
print('Error: could not read input file')
exit()
try:
with open(arguments.output_file, 'wb') as output_file:
output_file.write(content)
except (IOError, OSError) as exception:
print('Error: could not create output file')
exit()
| [
"Did you try to convert your Json into a dictionary?\nfor instance using these resources:\nhttps://www.geeksforgeeks.org/convert-json-to-dictionary-in-python/\nYou convert to a dictionary, you edit the contents of the key you want, then save it back to Json if you need\n"
] | [
1
] | [] | [] | [
"json",
"python",
"replace"
] | stackoverflow_0074611804_json_python_replace.txt |
Q:
ValueError: setting an array element with a sequence in SVM for simple arrays
I am trying to use SVM on my dataset but I am getting the error TypeError: only size-1 arrays can be converted to Python scalars. My inputs are:
y = df['emotion'].values.tolist()
X = df['flatten_embeddings']
Where y is just the target, like Sad, Angry, Neutral ... and X is
0 [1.702582, 1.277809, 1.7816906, -5.0634155, 0....
1 [-1.1865704, -0.698246, -1.7263901, -3.2054596...
2 [-1.7968469, -0.18659484, 2.1262107, -5.183001...
3 [-1.9038239, -2.7165074, 0.022676349, -2.31133...
4 [-0.34684253, 0.58175063, -2.0320444, -0.65968...
Name: flatten_embeddings, Length: 400, dtype: object
When I use SVM, I get the error. The code is as follows:
from sklearn import svm
clf = svm.SVC()
clf.fit(X, y)
ERROR:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
TypeError: only size-1 arrays can be converted to Python scalars
The above exception was the direct cause of the following exception:
ValueError Traceback (most recent call last)
<ipython-input-65-0d786f0eb252> in <module>
1 from sklearn import svm
2 clf = svm.SVC()
----> 3 clf.fit(xxx, y)
I have tried to vectorize but get the same error.
yyy = X
xxx = np.array(list(map(np.double, yyy)))
Just FYI, my rows are not the same size:
len(X[0]): 152576
len(X[1]): 101376
len(X[2]): 101376
A:
The hint at the end was really the issue. The actual error was ValueError: setting an array element with a sequence. I padded with 0 all the rows that were shorter than the longest row.
# Find the length of the longest row
m = max(map(len, X_arr))

# Pad the rows with 0
X_arr_same = np.array([np.pad(v, (0, m - len(v)), 'constant') for v in X_arr])
Result was:
len(X_arr_same[0]): 152576
len(X_arr_same[1]): 152576
len(X_arr_same[2]): 152576
This solved the issue and the block ran successfully,
from sklearn import svm
clf = svm.SVC()
clf.fit(X_arr_same, y)
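For context, here is a small self-contained sketch of the same padding idea (the column name and values are made up to mirror the question):
import numpy as np
import pandas as pd

# hypothetical ragged embeddings, standing in for df['flatten_embeddings']
df = pd.DataFrame({"flatten_embeddings": [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0]]})

X_arr = df["flatten_embeddings"].tolist()
m = max(map(len, X_arr))  # length of the longest row
X_arr_same = np.array([np.pad(v, (0, m - len(v)), 'constant') for v in X_arr])

print(X_arr_same.shape)   # (3, 3): every row now has the same length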
| ValueError: setting an array element with a sequence in SVM for simple arrays | I am trying to use SVM on my dataset but I am getting the error TypeError: only size-1 arrays can be converted to Python scalars. My inputs are:
y = df['emotion'].values.tolist()
X = df['flatten_embeddings']
Where y is just the target, like Sad, Angry, Neutral ... and X is
0 [1.702582, 1.277809, 1.7816906, -5.0634155, 0....
1 [-1.1865704, -0.698246, -1.7263901, -3.2054596...
2 [-1.7968469, -0.18659484, 2.1262107, -5.183001...
3 [-1.9038239, -2.7165074, 0.022676349, -2.31133...
4 [-0.34684253, 0.58175063, -2.0320444, -0.65968...
Name: flatten_embeddings, Length: 400, dtype: object
When I use SVM, I get the error. The code is as follows:
from sklearn import svm
clf = svm.SVC()
clf.fit(X, y)
ERROR:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
TypeError: only size-1 arrays can be converted to Python scalars
The above exception was the direct cause of the following exception:
ValueError Traceback (most recent call last)
<ipython-input-65-0d786f0eb252> in <module>
1 from sklearn import svm
2 clf = svm.SVC()
----> 3 clf.fit(xxx, y)
I have tried to vectorize but get the same error.
yyy = X
xxx = np.array(list(map(np.double, yyy)))
Just for FYI, my rows are not same in size:
len(X[0]): 152576
len(X[1]): 101376
len(X[2]): 101376
| [
"The hint at the end was really the issue. The error in actual was ValueError: setting an array element with a sequence. I used a function to pad all the rows with 0 with were less in size than row with max length.\n# Find length of max row\nmax = max(map(len, X_arr))\n\n# Pad the rows with 0\nX_arr_same = np.array([np.pad(v, (0, m - len(v)), 'constant') for v in X_arr])\n\nResult was:\nlen(X_arr_same[0]): 152576\nlen(X_arr_same[1]): 152576\nlen(X_arr_same[2]): 152576\n\nThis solved the issue and the block ran successfully,\nfrom sklearn import svm\nclf = svm.SVC()\nclf.fit(X_arr_same, y)\n\n"
] | [
0
] | [] | [] | [
"arrays",
"machine_learning",
"numpy",
"pandas",
"python"
] | stackoverflow_0074611951_arrays_machine_learning_numpy_pandas_python.txt |
Q:
How are small sets stored in memory?
If we look at the resize behavior for sets under 50k elements:
>>> import sys
>>> s = set()
>>> seen = {}
>>> for i in range(50_000):
... size = sys.getsizeof(s)
... if size not in seen:
... seen[size] = len(s)
... print(f"{size=} {len(s)=}")
... s.add(i)
...
size=216 len(s)=0
size=728 len(s)=5
size=2264 len(s)=19
size=8408 len(s)=77
size=32984 len(s)=307
size=131288 len(s)=1229
size=524504 len(s)=4915
size=2097368 len(s)=19661
This pattern is consistent with quadrupling of the backing storage size once the set is 3/5ths full, plus some presumably constant overhead for the PySetObject:
>>> for i in range(9, 22, 2):
... print(2**i + 216)
...
728
2264
8408
32984
131288
524504
2097368
A similar pattern continues even for larger sets, but the resize factor switches to doubling instead of quadrupling.
The reported size for small sets is an outlier. Instead of a size of 344 bytes, i.e. 16 * 8 + 216 (the storage array of a newly created empty set has 8 slots available until the first resize up to 32 slots), only 216 bytes is reported by sys.getsizeof.
What am I missing? How are those small sets stored so that they use only 216 bytes instead of 344?
A:
We are going to inspect how small sets use 216 bytes.
First of all, a set object in Python is represented by the following C structure.
typedef struct {
PyObject_HEAD
Py_ssize_t fill; /* Number active and dummy entries*/
Py_ssize_t used; /* Number active entries */
/* The table contains mask + 1 slots, and that's a power of 2.
* We store the mask instead of the size because the mask is more
* frequently needed.
*/
Py_ssize_t mask;
/* The table points to a fixed-size smalltable for small tables
* or to additional malloc'ed memory for bigger tables.
* The table pointer is never NULL which saves us from repeated
* runtime null-tests.
*/
setentry *table;
Py_hash_t hash; /* Only used by frozenset objects */
Py_ssize_t finger; /* Search finger for pop() */
setentry smalltable[PySet_MINSIZE];
PyObject *weakreflist; /* List of weak references */
} PySetObject;
Now remember, getsizeof() calls the object’s __sizeof__ method and adds an additional garbage collector overhead if the object is managed by the garbage collector.
Ok, set implements the __sizeof__.
static PyObject *
set_sizeof(PySetObject *so, PyObject *Py_UNUSED(ignored))
{
Py_ssize_t res;
res = _PyObject_SIZE(Py_TYPE(so));
if (so->table != so->smalltable)
res = res + (so->mask + 1) * sizeof(setentry);
return PyLong_FromSsize_t(res);
}
Now let's inspect the line
res = _PyObject_SIZE(Py_TYPE(so));
_PyObject_SIZE is just a macro which expands to (typeobj)->tp_basicsize.
#define _PyObject_SIZE(typeobj) ( (typeobj)->tp_basicsize )
This code is essentially trying to access the tp_basicsize slot to get the size in bytes of instances of the type which is just sizeof(PySetObject) in case of set.
PyTypeObject PySet_Type = {
PyVarObject_HEAD_INIT(&PyType_Type, 0)
"set", /* tp_name */
sizeof(PySetObject), /* tp_basicsize */
0, /* tp_itemsize */
# Skipped rest of the code for brevity.
I have modified the set_sizeof C function with the following changes.
static PyObject *
set_sizeof(PySetObject *so, PyObject *Py_UNUSED(ignored))
{
Py_ssize_t res;
unsigned long py_object_head_size = sizeof(so->ob_base); // Because PyObject_HEAD expands to PyObject ob_base;
unsigned long fill_size = sizeof(so->fill);
unsigned long used_size = sizeof(so->used);
unsigned long mask_size = sizeof(so->mask);
unsigned long table_size = sizeof(so->table);
unsigned long hash_size = sizeof(so->hash);
unsigned long finger_size = sizeof(so->finger);
unsigned long smalltable_size = sizeof(so->smalltable);
unsigned long weakreflist_size = sizeof(so->weakreflist);
int is_using_fixed_size_smalltables = so->table == so->smalltable;
printf("| PySetObject Fields | Size(bytes) |\n");
printf("|------------------------------------|\n");
printf("| PyObject_HEAD | '%zu' |\n", py_object_head_size);
printf("| fill | '%zu' |\n", fill_size);
printf("| used | '%zu' |\n", used_size);
printf("| mask | '%zu' |\n", mask_size);
printf("| table | '%zu' |\n", table_size);
printf("| hash | '%zu' |\n", hash_size);
printf("| finger | '%zu' |\n", finger_size);
printf("| smalltable | '%zu' |\n", smalltable_size);
printf("| weakreflist | '%zu' |\n", weakreflist_size);
printf("-------------------------------------|\n");
printf("| Total | '%zu' |\n", py_object_head_size+fill_size+used_size+mask_size+table_size+hash_size+finger_size+smalltable_size+weakreflist_size);
printf("\n");
printf("Total size of PySetObject '%zu' bytes\n", sizeof(PySetObject));
printf("Has set resized: '%s'\n", is_using_fixed_size_smalltables ? "No": "Yes");
if(!is_using_fixed_size_smalltables) {
printf("Size of malloc'ed table: '%zu' bytes\n", (so->mask + 1) * sizeof(setentry));
}
res = _PyObject_SIZE(Py_TYPE(so));
if (so->table != so->smalltable)
res = res + (so->mask + 1) * sizeof(setentry);
return PyLong_FromSsize_t(res);
}
and compiling and running these changes gives me
>>> import sys
>>>
>>> set_ = set()
>>> sys.getsizeof(set_)
| PySetObject Fields | Size(bytes) |
|------------------------------------|
| PyObject_HEAD | '16' |
| fill | '8' |
| used | '8' |
| mask | '8' |
| table | '8' |
| hash | '8' |
| finger | '8' |
| smalltable | '128' |
| weakreflist | '8' |
-------------------------------------|
| Total | '200' |
Total size of PySetObject '200' bytes
Has set resized: 'No'
216
>>> set_.add(1)
>>> set_.add(2)
>>> set_.add(3)
>>> set_.add(4)
>>> set_.add(5)
>>> sys.getsizeof(set_)
| PySetObject Fields | Size(bytes) |
|------------------------------------|
| PyObject_HEAD | '16' |
| fill | '8' |
| used | '8' |
| mask | '8' |
| table | '8' |
| hash | '8' |
| finger | '8' |
| smalltable | '128' |
| weakreflist | '8' |
-------------------------------------|
| Total | '200' |
Total size of PySetObject '200' bytes
Has set resized: 'Yes'
Size of malloc'ed table: '512' bytes
728
The return value is 216/728 bytes because sys.getsizeof adds 16 bytes of GC overhead.
But the important thing to note here is this line.
| smalltable | '128' |
Because for small tables (before the first resize) so->table is just a reference to the fixed-size (8-slot) so->smalltable (no malloc'ed memory), sizeof(PySetObject) is sufficient to get the size, because it already includes the storage size (128 = 16 (size of setentry) * 8).
Now what happens when the resize occurs? It constructs an entirely new table (malloc'ed) and uses that table instead of so->smalltable. This means that sets which have resized also carry a dead weight of 128 bytes (the size of the fixed-size small table) along with the size of the malloc'ed so->table.
else {
newtable = PyMem_NEW(setentry, newsize);
if (newtable == NULL) {
PyErr_NoMemory();
return -1;
}
}
/* Make the set empty, using the new table. */
assert(newtable != oldtable);
memset(newtable, 0, sizeof(setentry) * newsize);
so->mask = newsize - 1;
so->table = newtable;
| How are small sets stored in memory? | If we look at the resize behavior for sets under 50k elements:
>>> import sys
>>> s = set()
>>> seen = {}
>>> for i in range(50_000):
... size = sys.getsizeof(s)
... if size not in seen:
... seen[size] = len(s)
... print(f"{size=} {len(s)=}")
... s.add(i)
...
size=216 len(s)=0
size=728 len(s)=5
size=2264 len(s)=19
size=8408 len(s)=77
size=32984 len(s)=307
size=131288 len(s)=1229
size=524504 len(s)=4915
size=2097368 len(s)=19661
This pattern is consistent with quadrupling of the backing storage size once the set is 3/5ths full, plus some presumably constant overhead for the PySetObject:
>>> for i in range(9, 22, 2):
... print(2**i + 216)
...
728
2264
8408
32984
131288
524504
2097368
A similar pattern continues even for larger sets, but the resize factor switches to doubling instead of quadrupling.
The reported size for small sets is an outlier. Instead of size 344 bytes, i.e. 16 * 8 + 216 (the storage array of a newly created empty set has 8 slots avail until the first resize up to 32 slots) only 216 bytes is reported by sys.getsizeof.
What am I missing? How are those small sets stored so that they use only 216 bytes instead of 344?
| [
"We are going to inspect how small sets uses 216 bytes.\nFirst of all a set object in python is represented by following C structure.\ntypedef struct {\n PyObject_HEAD\n\n Py_ssize_t fill; /* Number active and dummy entries*/\n Py_ssize_t used; /* Number active entries */\n\n /* The table contains mask + 1 slots, and that's a power of 2.\n * We store the mask instead of the size because the mask is more\n * frequently needed.\n */\n Py_ssize_t mask;\n\n /* The table points to a fixed-size smalltable for small tables\n * or to additional malloc'ed memory for bigger tables.\n * The table pointer is never NULL which saves us from repeated\n * runtime null-tests.\n */\n setentry *table;\n Py_hash_t hash; /* Only used by frozenset objects */\n Py_ssize_t finger; /* Search finger for pop() */\n\n setentry smalltable[PySet_MINSIZE];\n PyObject *weakreflist; /* List of weak references */\n} PySetObject;\n\nNow remember, getsizeof() calls the object’s __sizeof__ method and adds an additional garbage collector overhead if the object is managed by the garbage collector.\nOk, set implements the __sizeof__.\nstatic PyObject *\nset_sizeof(PySetObject *so, PyObject *Py_UNUSED(ignored))\n{\n Py_ssize_t res;\n\n res = _PyObject_SIZE(Py_TYPE(so));\n if (so->table != so->smalltable)\n res = res + (so->mask + 1) * sizeof(setentry);\n return PyLong_FromSsize_t(res);\n}\n\nNow lets inspect the line\nres = _PyObject_SIZE(Py_TYPE(so));\n\n_PyObject_SIZE is just a macro which expands to (typeobj)->tp_basicsize.\n#define _PyObject_SIZE(typeobj) ( (typeobj)->tp_basicsize )\n\nThis code is essentially trying to access the tp_basicsize slot to get the size in bytes of instances of the type which is just sizeof(PySetObject) in case of set.\nPyTypeObject PySet_Type = {\n PyVarObject_HEAD_INIT(&PyType_Type, 0)\n \"set\", /* tp_name */\n sizeof(PySetObject), /* tp_basicsize */\n 0, /* tp_itemsize */\n # Skipped rest of the code for brevity.\n\nI have modified the set_sizeof C function with the following changes.\nstatic PyObject *\nset_sizeof(PySetObject *so, PyObject *Py_UNUSED(ignored))\n{\n Py_ssize_t res;\n\n unsigned long py_object_head_size = sizeof(so->ob_base); // Because PyObject_HEAD expands to PyObject ob_base;\n unsigned long fill_size = sizeof(so->fill);\n unsigned long used_size = sizeof(so->used);\n unsigned long mask_size = sizeof(so->mask);\n unsigned long table_size = sizeof(so->table);\n unsigned long hash_size = sizeof(so->hash);\n unsigned long finger_size = sizeof(so->finger);\n unsigned long smalltable_size = sizeof(so->smalltable);\n unsigned long weakreflist_size = sizeof(so->weakreflist);\n int is_using_fixed_size_smalltables = so->table == so->smalltable;\n\n printf(\"| PySetObject Fields | Size(bytes) |\\n\");\n printf(\"|------------------------------------|\\n\");\n printf(\"| PyObject_HEAD | '%zu' |\\n\", py_object_head_size);\n printf(\"| fill | '%zu' |\\n\", fill_size);\n printf(\"| used | '%zu' |\\n\", used_size);\n printf(\"| mask | '%zu' |\\n\", mask_size);\n printf(\"| table | '%zu' |\\n\", table_size);\n printf(\"| hash | '%zu' |\\n\", hash_size);\n printf(\"| finger | '%zu' |\\n\", finger_size);\n printf(\"| smalltable | '%zu' |\\n\", smalltable_size); \n printf(\"| weakreflist | '%zu' |\\n\", weakreflist_size);\n printf(\"-------------------------------------|\\n\");\n printf(\"| Total | '%zu' |\\n\", py_object_head_size+fill_size+used_size+mask_size+table_size+hash_size+finger_size+smalltable_size+weakreflist_size);\n printf(\"\\n\");\n printf(\"Total size of PySetObject '%zu' 
bytes\\n\", sizeof(PySetObject));\n printf(\"Has set resized: '%s'\\n\", is_using_fixed_size_smalltables ? \"No\": \"Yes\");\n if(!is_using_fixed_size_smalltables) {\n printf(\"Size of malloc'ed table: '%zu' bytes\\n\", (so->mask + 1) * sizeof(setentry));\n }\n\n res = _PyObject_SIZE(Py_TYPE(so));\n if (so->table != so->smalltable)\n res = res + (so->mask + 1) * sizeof(setentry);\n return PyLong_FromSsize_t(res);\n}\n\nand compiling and running these changes gives me\n>>> import sys\n>>> \n>>> set_ = set()\n>>> sys.getsizeof(set_)\n| PySetObject Fields | Size(bytes) |\n|------------------------------------|\n| PyObject_HEAD | '16' |\n| fill | '8' |\n| used | '8' |\n| mask | '8' |\n| table | '8' |\n| hash | '8' |\n| finger | '8' |\n| smalltable | '128' |\n| weakreflist | '8' |\n-------------------------------------|\n| Total | '200' |\n\nTotal size of PySetObject '200' bytes\nHas set resized: 'No'\n216\n>>> set_.add(1)\n>>> set_.add(2)\n>>> set_.add(3)\n>>> set_.add(4)\n>>> set_.add(5)\n>>> sys.getsizeof(set_)\n| PySetObject Fields | Size(bytes) |\n|------------------------------------|\n| PyObject_HEAD | '16' |\n| fill | '8' |\n| used | '8' |\n| mask | '8' |\n| table | '8' |\n| hash | '8' |\n| finger | '8' |\n| smalltable | '128' |\n| weakreflist | '8' |\n-------------------------------------|\n| Total | '200' |\n\nTotal size of PySetObject '200' bytes\nHas set resized: 'Yes'\nSize of malloc'ed table: '512' bytes\n728\n\nThe return value is 216/728 bytes because sys.getsize add 16 bytes of GC overhead.\nBut the important thing to note here is this line.\n| smalltable | '128' |\n\nBecause for small tables(before the first resize) so->table is just a reference to fixed size(8) so->smalltable(No malloc'ed memory) so sizeof(PySetObject) is sufficient enough to get the size because it also includes the storage size( 128(16(size of setentry) * 8)).\nNow what happens when the resize occurs. It constructs entirely new table(malloc'ed) and uses that table instead of so->smalltables,this means that the sets which have resized also carry out a dead-weight of 128 bytes(Size of fixed size small table) along with the size of malloc'ed so->table.\nelse {\n newtable = PyMem_NEW(setentry, newsize);\n if (newtable == NULL) {\n PyErr_NoMemory();\n return -1;\n }\n }\n\n /* Make the set empty, using the new table. */\n assert(newtable != oldtable);\n memset(newtable, 0, sizeof(setentry) * newsize);\n so->mask = newsize - 1;\n so->table = newtable;\n\n"
] | [
7
] | [] | [] | [
"cpython",
"memory",
"python",
"python_internals",
"set"
] | stackoverflow_0074606984_cpython_memory_python_python_internals_set.txt |
Q:
how to split raw data into excel/csv and identify rows that are not properly formed (python)?
I have raw data.
I want to split this into CSV/Excel.
After that, if the data in a row is not stored correctly (e.g. 0 is entered instead of 121324), I want Python to identify that row.
I mean that while splitting the raw data into CSV through Python code, some rows might be formed incorrectly.
How do I identify those rows through Python?
example:
S.11* N. ENGLAND L -8' 21-23 u44'\n
S.18 TAMPA BAY W -7 40-7 u49'\n
S.25 Buffalo L -4' 18-33 o48
result i want:
S,11,*,N.,ENGLAND,L,-8',21-23,u44'\n
S,18,,TAMPA,BAY,W,-7,40-7,u49'\n
S,25,,Buffalo,L,-4',18-33,o48\n
suppose the output is like this:
S,11,N.,ENGLAND,L,-8',21-23u,44'\n
S,18,,TAMPA,BAY,W,-7,40-7,u49'\n
S,25,,Buffalo,L,-4',18-33,o48\n
You can see that in the first row the * is missing and u44' is stored as only 44', with the u appended to the previous column.
This row should be identified by the Python code and returned to me.
Likewise, I want all rows with such errors.
This is what I have done so far:
import csv

input_filename = 'rawsample.txt'
output_filename = 'spreads.csv'

with open(input_filename, 'r', newline='') as infile, \
     open(output_filename, 'w', newline='') as outfile:
    reader = csv.reader(infile, delimiter=' ', skipinitialspace=True)
    writer = csv.writer(outfile, delimiter=',')
    for row in reader:
        # split the leading "S.11*" style token into its parts
        new_cols = row[0].split('.')
        if not new_cols[1].endswith('*'):
            new_cols.extend([''])
        else:
            new_cols[1] = new_cols[1][:-1]
            new_cols.extend(['*'])
        row = new_cols + row[1:]
        #print(row)
        writer.writerow(row)

er = []
# df is the dataframe loaded from the generated CSV (creation not shown)
for index, row in df.iterrows():
    for i in row:
        if str(i).lower() == 'nan' or i == '':
            er.append(row)
# I was able to check for null values but nothing more.
Please help me.
A:
@mozway is right: you had better give an example input and the expected result.
Anyway, if you're dealing with a variable number of columns in the input, please refer to Handling Variable Number of Columns with Pandas - Python
Best
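As an illustration of the kind of check the question asks for, here is a minimal sketch; the allowed field counts, the third-field values and the [ou]NN' shape for the last column are assumptions read off the desired output above, not a general rule:
import csv
import re

bad_rows = []
with open('spreads.csv', 'r', newline='') as f:
    for line_no, row in enumerate(csv.reader(f), start=1):
        # assumptions from the desired output: 8 or 9 fields, third field '' or '*',
        # second field all digits, last field shaped like u44' or o48
        ok = (
            len(row) in (8, 9)
            and row[1].isdigit()
            and row[2] in ('', '*')
            and re.fullmatch(r"[ou]\d+'?", row[-1]) is not None
        )
        if not ok:
            bad_rows.append((line_no, row))

for line_no, row in bad_rows:
    print("possible bad row", line_no, row)
Running this on the faulty output in the question flags the first row (third field "N.", last field "44'") while leaving the well-formed rows alone.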
| how to split raw data into excel/ csv and identitfy rows that are not properly formed (python)? | I have raw data.
I want to split this into csv/excel.
after that if the data in the rows are not correctly stored( for e.g. if 0 is there entered instead of 121324) I want python to identify those rows.
I mean while splitting raw data into csv through python code, some rows might form incorrectly( please understand).
How to identify those rows through python?
example:
S.11* N. ENGLAND L -8' 21-23 u44'\n
S.18 TAMPA BAY W -7 40-7 u49'\n
S.25 Buffalo L -4' 18-33 o48
result i want:
S,11,*,N.,ENGLAND,L,-8',21-23,u44'\n
S,18,,TAMPA,BAY,W,-7,40-7,u49'\n
S,25,,Buffalo,L,-4',18-33,o48\n
suppose the output is like this:
S,11,N.,ENGLAND,L,-8',21-23u,44'\n
S,18,,TAMPA,BAY,W,-7,40-7,u49'\n
S,25,,Buffalo,L,-4',18-33,o48\n
you can see that in first row * is missing and u44' is stored as only 44. and u is append with another column.
this row should be identified by python code and should return me this row.
likewise i want all rows those with error.
this is what i have done so far.
import csv
input_filename = 'rawsample.txt'
output_filename = 'spreads.csv'
with open(input_filename, 'r', newline='') as infile:
open(output_filename, 'w', newline='') as outfile:
reader = csv.reader(infile, delimiter=' ', skipinitialspace=True)
writer = csv.writer(outfile, delimiter=',')
for row in reader:
new_cols = row[0].split('.')
if not new_cols[1].endswith('*'):
new_cols.extend([''])
else:
new_cols[1] = new_cols[1][:-1]
new_cols.extend(['*'])
row = new_cols + row[1:]
#print(row)
writer.writerow(row)
er=[]
for index, row in df.iterrows():
for i in row:
if str(i).lower()=='nan' or i=='':
er.append(row)
# i was able to check for null values but nothing more.
please help me.
| [
"@mozway is right you better give an example input and expected result.\nAnyway if you're dealing with a variable number of columns in the input please refer to Handling Variable Number of Columns with Pandas - Python\nBest\n"
] | [
0
] | [] | [] | [
"database",
"dataframe",
"python",
"raw_data"
] | stackoverflow_0074611968_database_dataframe_python_raw_data.txt |
Q:
Unexpected round behaviour of Numpy float32
I am trying to understand how numpy handles the float32 datatype.
The following code produces 0.25815687
print(np.float32(0.2581568658351898).astype(str)) # 0.25815687
But an online float converter https://www.h-schmidt.net/FloatConverter/IEEE754.html gives 0.2581568658351898193359375. Is NumPy doing something special when printing the single-precision float, or is there something I missed?
Online converter result
A:
Is Numpy doing something special when printing the single-precision float or there is something I missed?
0.2581568658351898 is not exactly encodable as a 32-bit float.
The closest is 0.2581568658351898193359375 or 0x1.085a46p-2
When 0.2581568658351898193359375 is printed with reduced precision, the result is 0.25815687
0.2581568 658351898 Source code
0.2581568 658351898193359375 True value
0.2581568 7 Output
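To see this concretely, a quick sketch (assuming NumPy is installed; the commented outputs follow from the explanation above): np.format_float_positional prints the shortest round-tripping repr by default, and the full stored value when asked for more digits.
import numpy as np

x = np.float32(0.2581568658351898)
print(np.format_float_positional(x))                              # 0.25815687 (shortest repr that round-trips)
print(np.format_float_positional(x, unique=False, precision=25))  # 0.2581568658351898193359375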
A:
You can force the number of digits to be printed using f-strings
print(f"{np.float32(0.2581568658351898):.25f}")
where :.25f tells the interpreter to print the value as a floating point number with 25 decimal digits.
| Unexpected round behaviour of Numpy float32 | I am trying to understand how numpy handles the float32 datatype.
The following code produces 0.25815687
print(np.float32(0.2581568658351898).astype(str)) # 0.25815687
But an online float converter https://www.h-schmidt.net/FloatConverter/IEEE754.html gives 0.2581568658351898193359375, Is Numpy doing something special when printing the single-precision float or there is something I missed?
Online converter result
| [
"\nIs Numpy doing something special when printing the single-precision float or there is something I missed?\n\n0.2581568658351898 is not exactly encodable as a 32-bit float.\nThe closest is 0.2581568658351898193359375 or 0x1.085a46p-2\nWhen 0.2581568658351898193359375 is printed with reduced precision, the result is 0.25815687\n\n0.2581568 658351898 Source code\n0.2581568 658351898193359375 True value\n0.2581568 7 Output \n\n",
"You can force the number of digits to be printed using f-strings\nprint(f\"{np.float32(0.2581568658351898):.25f}\")\n\nwhere :.25f tells the interpreter to print the value as a floating point number with 25 decimal digits.\n"
] | [
1,
0
] | [] | [] | [
"floating_point",
"ieee_754",
"numpy",
"python"
] | stackoverflow_0074610105_floating_point_ieee_754_numpy_python.txt |
Q:
get public key from private key with python OpenSSL
Well, I generate a private key with pyOpenSSL as follows:
from OpenSSL import crypto
k = crypto.PKey()
k.generate_key(crypto.TYPE_RSA, 2048)
print crypto.dump_privatekey(crypto.FILETYPE_PEM, k)
How do I get the public key string from it? I've still not found what method of this library does it. Thanks
A:
If
cert = crypto.dump_certificate(crypto.FILETYPE_PEM, k)
doesn't do what you want, then it doesn't look like pyOpenSSL supports public key dumping. There is an unmerged branch here that adds that functionality, but I can't claim that it does what it purports.
A:
Updated:
Now it has a method to get the public key directly.
key = crypto.PKey()
key.generate_key(crypto.TYPE_RSA, 2048)
publickey_contents = crypto.dump_publickey(crypto.FILETYPE_PEM, key)
with the method dump_publickey you can get what you want.
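Another hedged option (a sketch assuming a reasonably recent pyOpenSSL with the cryptography package available) is to bridge to the cryptography library and serialize the public half of the key:
from OpenSSL import crypto
from cryptography.hazmat.primitives import serialization

k = crypto.PKey()
k.generate_key(crypto.TYPE_RSA, 2048)

# convert to a cryptography key object and dump the public key as PEM
pub_pem = k.to_cryptography_key().public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
print(pub_pem.decode())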
| get public key from private key with python OpenSSL | Well, I generate a private key with pyOpenSSL as follows:
from OpenSSL import crypto
k = crypto.PKey()
k.generate_key(crypto.TYPE_RSA, 2048)
print crypto.dump_privatekey(crypto.FILETYPE_PEM, k)
How do I get the public key string from it? I've still not found what method of this library does it. Thanks
| [
"If\ncert = crypto.dump_certificate(crypto.FILETYPE_PEM, k)\n\ndoesn't do what you want, then it doesn't look like pyOpenSSL supports public key dumping. There is an unmerged branch here that adds that functionality but I can't claim that it does what is purports.\n",
"Updated:\nNow it has the method to get public key directly.\nkey = crypto.PKey()\nkey.generate_key(crypto.TYPE_RSA, 2048)\npublickey_contents = crypto.dump_publickey(crypto.FILETYPE_PEM, key)\n\nwith the method dump_publickey you can get what you want.\n"
] | [
3,
1
] | [] | [] | [
"openssl",
"pyopenssl",
"python"
] | stackoverflow_0014939033_openssl_pyopenssl_python.txt |
Q:
python decrement at special case in for-loop
I need to decrement in a python for-loop at a special case (or just don't increment).
In C-like languages, this can be easily accomplished by decrementing the index, or if you have an iterator-like structure you could just "decrement" the iterator. But I have no clue how to achieve this in python.
One solution would be to create a while loop and increment manually, but that would, in my case, bring in lots of extra cases, where just one case is needed when I could decrement.
C Example
for (int i = 0; i < N; ++i) {
if (some_condition) {
i--;
}
}
Python equivalent
for i in range(0, N):
if some_condition:
i -= 1 # need something like this
i = i.prev() # or like this
A:
You can drive the iteration loop yourself. You could easily add a loop index independent of the next calls, so that you could even use a skip condition that depends on the current index.
skip_iteration = True
it = iter(range(10))
iterating = True
value = next(it)
while iterating:
try:
print(value, end=' ')
if skip_iteration:
# iteration skip code
skip_iteration = False
else:
# normal iteration code
value = next(it)
except StopIteration:
iterating = False
Here the advance is skipped only on the first iteration (so the first value is handled twice), which outputs:
0 0 1 2 3 4 5 6 7 8 9
This code relies on the fact that the next function raises StopIteration if it reaches the end of the iterator.
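If you want to keep a plain for loop, one hedged alternative (a sketch, not a standard-library feature) is a small wrapper whose repeat() method makes the loop see the current item again instead of advancing:
class RepeatableIter:
    """Wrap an iterable; calling repeat() makes the for loop see the current item again."""

    def __init__(self, iterable):
        self._it = iter(iterable)
        self._repeat = False
        self._current = None

    def repeat(self):
        self._repeat = True

    def __iter__(self):
        for item in self._it:
            self._current = item
            yield item
            while self._repeat:
                self._repeat = False
                yield self._current


r = RepeatableIter(range(5))
seen = False
for i in r:
    print(i, end=' ')
    if i == 2 and not seen:
        seen = True
        r.repeat()   # the next loop iteration sees 2 again
# prints: 0 1 2 2 3 4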
| python decrement at special case in for-loop | I need to decrement in a python for-loop at a special case (or just don't increment).
In C-like languages, this can be easily accomplished by decrementing the index, or if you have an iterator-like structure you could just "decrement" the iterator. But I have no clue how to achieve this in python.
One solution would be to create a while loop and increment manually, but that would, in my case, bring in lots of extra cases, where just one case is needed when I could decrement.
C Example
for (int i = 0; i < N; ++i) {
if (some_condition) {
i--;
}
}
Python equivalent
for i in range(0, N):
if some_condition:
i -= 1 # need something like this
i = i.prev() # or like this
| [
"You can make an iteration loop by yourself. You could easily add a loop index independent from next calls, so that you could even use a skip condition that uses the current index.\nskip_iteration = True\nit = iter(range(10))\niterating = True\nvalue = next(it)\nwhile iterating:\n try:\n print(value, end=' ')\n if skip_iteration:\n # iteration skip code\n skip_iteration = False\n else:\n # normal iteration code\n value = next(it)\n except StopIteration:\n iterating = False\n\nHere only the first iteration is skipped, which outputs :\n0 0 1 2 3 4 5 6 7 8 9\n\nThis code relies on the fact that the next function raises StopIteration if it reaches the end of the iterator.\n"
] | [
0
] | [
"If the condition in the if statement is somehow related to the iterator i itself then the loop might not end but if the condition is not depended on i then there shall not be any problem\nyou can also try skipping that particular iteration by using continue.\n"
] | [
-1
] | [
"loops",
"python"
] | stackoverflow_0068680782_loops_python.txt |
Q:
Converting a nested JSON into a flatten one and then to pandas dataframe using pd.json_normalize()
I am trying to convert the below JSON as a dataframe. The below JSON is in a string.
json_str='{"TileName": "Master",
"Report Details":
[
{
"r1name": "Primary",
"r1link": "link1",
"report Accessible": ["operations", "Sales"]
},
{
"r2name": "Secondry",
"r2link": "link2",
"report Accessible": ["operations", "Sales"]
}
]
}'
So I am trying to get below df
TileName ReportAccssible ReportName ReportLink
Master operations Primary link1
Master Sales Primary link1
Master operations Secondary link2
Master Sales Secondary link2
In order to achieve the above, I am trying the below code snippet:
js_str = json.loads(json_str)
df = pd.json_normalize(js_df,'Report Details',['TileName',['report Accessible']],\
record_prefix='Col_',errors='ignore')
But the above code is not giving me the output as per desired format.
What am I missing here?
A:
You have the wrong record_path; it should be ['Report Details', 'report Accessible'].
js_obj = json.loads(json_str.replace('r2', 'r1')) # keep columns consistent
df = pd.json_normalize(js_obj, ['Report Details', 'report Accessible'],
['TileName', ['Report Details', 'r1name'], ['Report Details', 'r1link']])
df.columns = ['ReportAccssible', 'TileName', 'ReportName', 'ReportLink']
You will get what you want.
ReportAccssible TileName ReportName ReportLink
0 operations Master Primary link1
1 Sales Master Primary link1
2 operations Master Secondry link2
3 Sales Master Secondry link2
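If you would rather not rewrite the 'r2' keys with a string replace, a hedged alternative is to unify the key names in the parsed object first and then normalize; the ReportName/ReportLink names and the endswith-based renaming below are illustrative assumptions, and json_str is assumed to hold the JSON text from the question:
import json
import pandas as pd

js_obj = json.loads(json_str)

# unify r1name/r2name and r1link/r2link into common keys before normalizing
for detail in js_obj["Report Details"]:
    for key in list(detail):
        if key.endswith("name"):
            detail["ReportName"] = detail.pop(key)
        elif key.endswith("link"):
            detail["ReportLink"] = detail.pop(key)

df = pd.json_normalize(js_obj, ["Report Details", "report Accessible"],
                       ["TileName", ["Report Details", "ReportName"],
                        ["Report Details", "ReportLink"]])
# column names follow the question's desired output
df.columns = ["ReportAccssible", "TileName", "ReportName", "ReportLink"]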
| Converting a nested JSON into a flatten one and then to pandas dataframe using pd.json_normalize() | I am trying to convert the below JSON as a dataframe. The below JSON is in a string.
json_str='{"TileName": "Master",
"Report Details":
[
{
"r1name": "Primary",
"r1link": "link1",
"report Accessible": ["operations", "Sales"]
},
{
"r2name": "Secondry",
"r2link": "link2",
"report Accessible": ["operations", "Sales"]
}
]
}'
So I am trying to get below df
TileName ReportAccssible ReportName ReportLink
Master operations Primary link1
Master Sales Primary link1
Master operations Secondary link2
Master Sales Secondary link2
In order to achieve the above, I am trying the below code snippet:
js_str = json.loads(json_str)
df = pd.json_normalize(js_df,'Report Details',['TileName',['report Accessible']],\
record_prefix='Col_',errors='ignore')
But the above code is not giving me the output as per desired format.
What I am missing here?
| [
"You have wrong record_path, which should be ['Report Details', 'report Accessible'].\njs_obj = json.loads(json_str.replace('r2', 'r1')) # keep columns consistent\ndf = pd.json_normalize(js_obj, ['Report Details', 'report Accessible'], \n ['TileName', ['Report Details', 'r1name'], ['Report Details', 'r1link']])\ndf.columns = ['ReportAccssible', 'TileName', 'ReportName', 'ReportLink']\n\nYou will get what you want.\n ReportAccssible TileName ReportName ReportLink\n0 operations Master Primary link1\n1 Sales Master Primary link1\n2 operations Master Secondry link2\n3 Sales Master Secondry link2\n\n"
] | [
1
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074611154_pandas_python.txt |
Q:
How to find how many iframes are present in the source code - Python Selenium
There is a span dropdown and inside there are 3 more dropdowns. I need to first select "home" and then "task tracking" and the option in that drop down.
I have seen the iframe present in the source code, but I am not able to find it and put the solution into my Python code.
example HTML
Both find_element_by_css_selector and find_element_by_xpath are giving the error
Errors- AttributeError: type object 'By' has no attribute 'tagName'
This is my code:
size = self.driver.find_element(By.tagName("iframe"))
print(size)
self.driver.find_element_by_css_selector('#TitleCaret').click() #Drop down
sleep(10)
self.driver.find_element_by_css_selector('#chrome-sidebar > div > div.chrome-links > ul > li:nth-child(1) > a').click() #Select Director from drop down
#self.driver.find_element_by_xpath('//*[@id="LoanBar"]/span[2]/button').click()
size = self.driver.find_element(By.tagName("iframe"))
print(size)
WebDriverWait(self.driver, 30).until(EC.frame_to_be_available_and_switch_to_it(By.ID, self.iframe_id))
sleep(10)
self.driver.switch_to.frame(self.iframe_id)
#self.driver.switch_to.frame('ifr_APP')
self.driver.find_element_by_css_selector('#LoanBar > span.dropdown > button').click()
A:
Selenium in Python has By.TAG_NAME attribute.
By.tagName is Selenium Java syntax.
A:
it's self.driver.find_element(By.TAG_NAME,"iframe")
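To actually count how many iframes are present (which is what the title asks), a minimal sketch using the plural find_elements; it assumes the same self.driver WebDriver instance as in the question:
from selenium.webdriver.common.by import By

# assumes the same self.driver WebDriver instance as in the question
iframes = self.driver.find_elements(By.TAG_NAME, "iframe")
print(f"{len(iframes)} iframe(s) found")
for frame in iframes:
    print(frame.get_attribute("id"), frame.get_attribute("src"))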
| How to find how many iframe present in source code -Python Selenium | There is a span dropdown and inside there are 3 more dropdowns. I need to first select "home" and then "task tracking" and the option in that drop down.
I have seen the iframe present in the source code and I am not able to find and put the solution in my python code.
example HTML
Both find_element_by_css_selector and find_element_by_xpath are giving the error
Errors- AttributeError: type object 'By' has no attribute 'tagName'
This is my code:
size = self.driver.find_element(By.tagName("iframe"))
print(size)
self.driver.find_element_by_css_selector('#TitleCaret').click() #Drop down
sleep(10)
self.driver.find_element_by_css_selector('#chrome-sidebar > div > div.chrome-links > ul > li:nth-child(1) > a').click() #Select Director from drop down
#self.driver.find_element_by_xpath('//*[@id="LoanBar"]/span[2]/button').click()
size = self.driver.find_element(By.tagName("iframe"))
print(size)
WebDriverWait(self.driver, 30).until(EC.frame_to_be_available_and_switch_to_it(By.ID, self.iframe_id))
sleep(10)
self.driver.switch_to.frame(self.iframe_id)
#self.driver.switch_to.frame('ifr_APP')
self.driver.find_element_by_css_selector('#LoanBar > span.dropdown > button').click()
| [
"Selenium in Python has By.TAG_NAME attribute.\nBy.tagName is Selenium Java syntax.\n",
"it's self.driver.find_element(By.TAG_NAME,\"iframe\")\n"
] | [
0,
0
] | [] | [] | [
"automation",
"python",
"selenium",
"web_scraping"
] | stackoverflow_0074611428_automation_python_selenium_web_scraping.txt |
Q:
Adding to 2d dict not working as expected (python)
I'm creating a function that returns a 2D dictionary. However, when adding the second part to the inner dict, it overwrites the first part and they both become the same value.
It will become more clear by reading the following:
# variables from other part of my code
int_col = [1,3]
data = [['CJ', '20', 'Male', '20000'], \
['Auts', '21', 'Female', '25000'], \
['Lucy', '20', 'Female', '10000'], \
['Lily', '20', 'Female', '15000']]
header = ['Name', 'Age', 'Sex', 'Salary']
# returns dict on numerical values in data
def stats():
stats_dict = {}
col_stats = {}
# loops through int_col
for col in int_col:
max_int = -1000000
min_int = 1000000
# loops through data
for row in data:
# converts str items to int
# done differerently in my code
row[col] = int(row[col])
if isinstance(row[col], int):
if row[col] > max_int:
max_int = row[col]
if row[col] < min_int:
min_int = row[col]
# fills one dict to be added to another
col_stats["max"] = max_int
col_stats["min"] = min_int
# adds first dict to outer dict
stats_dict[header[col]] = col_stats
print(stats_dict)
return stats_dict
stats()
I expect stats_dict to be:
{'Age': {'max': 21, 'min': 20}, 'Salary': {'max': 25000, 'min': 10000}}
however, it looks like this:
{'Age': {'max': 25000, 'min': 10000}, 'Salary': {'max': 25000, 'min': 10000}}
I was wondering what I was doing wrong and how to get the expected output?
Thanks,
CJ
A:
The problem is that you use the same col_stats twice.
It becomes clear if you have a bit of tracing :
+ print(f"comparing {row[col]=!r} > {max_int}")
if row[col] > max_int:
max_int = row[col]
+ print(f"{max_int=!r}")
+ print(f"comparing {row[col]=!r} < {min_int}")
if row[col] < min_int:
min_int = row[col]
+ print(f"{min_int=!r}")
comparing row[col]=20 > -1000000
max_int=20
comparing row[col]=20 < 1000000
min_int=20
comparing row[col]=21 > 20
max_int=21
comparing row[col]=21 < 20
comparing row[col]=20 > 21
comparing row[col]=20 < 20
comparing row[col]=20 > 21
comparing row[col]=20 < 20
{'Age': {'max': 21, 'min': 20}}
comparing row[col]=20000 > -1000000
max_int=20000
comparing row[col]=20000 < 1000000
min_int=20000
comparing row[col]=25000 > 20000
max_int=25000
comparing row[col]=25000 < 20000
comparing row[col]=10000 > 25000
comparing row[col]=10000 < 20000
min_int=10000
comparing row[col]=15000 > 25000
comparing row[col]=15000 < 10000
{'Age': {'max': 25000, 'min': 10000}, 'Salary': {'max': 25000, 'min': 10000}}
The calculations are indeed correct.
But what happens after is the error :
col_stats["max"] = max_int
col_stats["min"] = min_int
stats_dict[header[col]] = col_stats
The first time it will add two entries (max: 21 and min: 20) to the col_stats dict, and add one entry to stats_dict["Age"]. Ok.
The second time it will change the two entries of the same col_stats dictionary, thus changing what is accessible through stats_dict["Age"].
Your problem is you have mutable data, and you mutated it unintentionally.
One easy solution is to copy the col_stats :
stats_dict[header[col]] = col_stats.copy()
But you can also simply do not share the same dict :
# remove the declaration earlier
# [...]
+ col_stats = {} # A NEW DICT !
col_stats["max"] = max_int
col_stats["min"] = min_int
Beware of sharing mutable data !
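Putting the fix together, one possible rewrite of stats() (a sketch that keeps the same int_col, data and header inputs from the question) builds a fresh inner dict on every pass, so nothing is shared:
def stats():
    stats_dict = {}
    for col in int_col:
        values = [int(row[col]) for row in data]
        # a new inner dict is created here on every iteration
        stats_dict[header[col]] = {"max": max(values), "min": min(values)}
    return stats_dict

print(stats())
# {'Age': {'max': 21, 'min': 20}, 'Salary': {'max': 25000, 'min': 10000}}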
| Adding to 2d dict not working as expected (python) | Im creating a function that returns a 2d dictionary, however, when adding the second part of the inner dict, it overwrites the first part and they both become the same value.
It will become more clear by reading the following:
# variables from other part of my code
int_col = [1,3]
data = [['CJ', '20', 'Male', '20000'], \
['Auts', '21', 'Female', '25000'], \
['Lucy', '20', 'Female', '10000'], \
['Lily', '20', 'Female', '15000']]
header = ['Name', 'Age', 'Sex', 'Salary']
# returns dict on numerical values in data
def stats():
stats_dict = {}
col_stats = {}
# loops through int_col
for col in int_col:
max_int = -1000000
min_int = 1000000
# loops through data
for row in data:
# converts str items to int
# done differerently in my code
row[col] = int(row[col])
if isinstance(row[col], int):
if row[col] > max_int:
max_int = row[col]
if row[col] < min_int:
min_int = row[col]
# fills one dict to be added to another
col_stats["max"] = max_int
col_stats["min"] = min_int
# adds first dict to outer dict
stats_dict[header[col]] = col_stats
print(stats_dict)
return stats_dict
stats()
I expect stats_dict to be:
{'Age': {'max': 21, 'min': 20},, 'Salary': {'max': 25000, 'min': 10000}}
however, it looks like this:
'Age': {'max': 25000, 'min': 10000}, 'Salary': {'max': 25000, 'min': 10000}}
I was wondering what I was doing wrong and how to get the expected output?
Thanks,
CJ
| [
"The problem is that you use the same col_stats twice.\nIt becomes clear if you have a bit of tracing :\n+ print(f\"comparing {row[col]=!r} > {max_int}\")\n if row[col] > max_int:\n max_int = row[col]\n+ print(f\"{max_int=!r}\")\n\n+ print(f\"comparing {row[col]=!r} < {min_int}\")\n if row[col] < min_int:\n min_int = row[col]\n+ print(f\"{min_int=!r}\")\n\ncomparing row[col]=20 > -1000000\nmax_int=20\ncomparing row[col]=20 < 1000000\nmin_int=20\ncomparing row[col]=21 > 20\nmax_int=21\ncomparing row[col]=21 < 20\ncomparing row[col]=20 > 21\ncomparing row[col]=20 < 20\ncomparing row[col]=20 > 21\ncomparing row[col]=20 < 20\n{'Age': {'max': 21, 'min': 20}}\ncomparing row[col]=20000 > -1000000\nmax_int=20000\ncomparing row[col]=20000 < 1000000\nmin_int=20000\ncomparing row[col]=25000 > 20000\nmax_int=25000\ncomparing row[col]=25000 < 20000\ncomparing row[col]=10000 > 25000\ncomparing row[col]=10000 < 20000\nmin_int=10000\ncomparing row[col]=15000 > 25000\ncomparing row[col]=15000 < 10000\n{'Age': {'max': 25000, 'min': 10000}, 'Salary': {'max': 25000, 'min': 10000}}\n\nThe calculations are indeed correct.\nBut what happens after is the error :\n col_stats[\"max\"] = max_int\n col_stats[\"min\"] = min_int\n stats_dict[header[col]] = col_stats\n\nThe first time it will add two entries (max: 21 and min: 21) to the col_stats dict, and add one entry to stats_dict[\"Age\"]. Ok.\nThe second time it will change the two entries of the same col_stats dictionary, thus changing what is accessible through stats_dict[\"Age\"].\nYour problem is you have mutable data, and you mutated it unintentionally.\nOne easy solution is to copy the col_stats :\n stats_dict[header[col]] = col_stats.copy()\n\nBut you can also simply do not share the same dict :\n# remove the declaration earlier\n# [...]\n\n+ col_stats = {} # A NEW DICT !\n col_stats[\"max\"] = max_int\n col_stats[\"min\"] = min_int\n\nBeware of sharing mutable data !\n"
] | [
0
] | [] | [] | [
"dictionary",
"python"
] | stackoverflow_0074604396_dictionary_python.txt |
Q:
Regex selecting numbers
I am trying to capture the numbers in the text below with regex. But it seems to fail on the last line, which has only one digit inside the parentheses. I can't figure out why, since my knowledge of regex is limited.
Any suggestions?
Regex
[\s(](\d[\d,\.\s]+)
Text
This banana costs 0,5 usd from previous (50)
The toothbrush is worth 0,8 usd (1,5)
This orange costs 1 usd from previous 10 usd
My car is now worth 1 000 (1 800)
This apple now costs 1 usd (1)
Results
0,5 50
0,8 1,5
1 10
1 000 1 800
1
Link to regex101: https://regex101.com/r/uy9OOc/1
A:
Your pattern matches at least 2 characters: a digit followed by 1 or more occurrences of one of \d , . \s
You can match either a space or ( and then capture a single digit followed by optionally repeating the chars in the character class.
[\s(](\d[\d,.\s]*)
See a regex demo.
If you don't want trailing spaces, dots or comma's:
[\s(](\d+(?:[\d,.\s]*\d)?)\b
Explanation
[\s(] Match either a whitespace char or (
( Capture group 1
\d+ Match 1+ digits
(?:[\d,.\s]*\d)? Optionally match one of the chars in the character class followed by a digit
) Close group 1
\b A word boundary to prevent a partial word match
Regex demo
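As a quick check of the second pattern against the sample lines (a sketch using Python's re module, since the question is tagged python); the commented results are what findall returns for each line:
import re

lines = [
    "This banana costs 0,5 usd from previous (50)",
    "The toothbrush is worth 0,8 usd (1,5)",
    "This orange costs 1 usd from previous 10 usd",
    "My car is now worth 1 000 (1 800)",
    "This apple now costs 1 usd (1)",
]

pattern = re.compile(r"[\s(](\d+(?:[\d,.\s]*\d)?)\b")
for line in lines:
    print(pattern.findall(line))
# ['0,5', '50']
# ['0,8', '1,5']
# ['1', '10']
# ['1 000', '1 800']
# ['1', '1']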
| Regex selecting numbers | I am trying to capture the numbers in the text below with regex. But it seems to fail on the last text, which only has one digit inside a parenthesis. I can't figure out why since my knowledge with Regex is limited.
Any suggestions?
Regex
[\s(](\d[\d,\.\s]+)
Text
This banana costs 0,5 usd from previous (50)
The toothbrush is worth 0,8 usd (1,5)
This orange costs 1 usd from previous 10 usd
My car is now worth 1 000 (1 800)
This apple now costs 1 usd (1)
Results
0,5 50
0,8 1,5
1 10
1 000 1 800
1
Link to regex101: https://regex101.com/r/uy9OOc/1
| [
"Your pattern matches at least 2 characters, being a digit and 1 or more times one of \\d , . \\s\nYou can match either a space or ( and then capture a single digit followed by optionally repeating the chars in the character class.\n[\\s(](\\d[\\d,.\\s]*)\n\nSee a regex demo.\nIf you don't want trailing spaces, dots or comma's:\n[\\s(](\\d+(?:[\\d,.\\s]*\\d)?)\\b\n\nExplanation\n\n[\\s(] Match either a whitespace char or (\n( Capture group 1\n\n\\d+ Match 1+ digits\n(?:[\\d,.\\s]*\\d)? Optionally match one of the chars in the character class followed by a digit\n\n\n) Close group 1\n\\b A word boundary to prevent a partial word match\n\nRegex demo\n"
] | [
2
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0074606947_python_regex.txt |
Q:
Inserting tuple into PostgreSQL with Python - SELECT statement
Hello, I am trying to insert into a pgAdmin (PostgreSQL) table using Python. I used execute and it worked, but for my second part I need to use a function. I got everything working except the inserts with a SELECT; it reports a syntax error, a missing comma, literally everything. I'm new, so help would be appreciated.
def insrtDirector(q,w,e,r,t,y,u):
sql1 = (q,w,e,r,t,y,u)
insrt = """INSERT INTO "Director" VALUES (%s,%s,%s,%s,%s,%s,%s)"""
cur.execute(insrt,sql1)
insrtDirector(uuid.uuid4(), (SELECT "UName" from University WHERE "UName" = 'University College London') , (SELECT "DName" from Department WHERE "DName" ='English') , 'Christopher (3)', 'Nolan (3)', 1970, 'Westminster, London, United Kingdom' )
Error
insrtDirector(uuid.uuid4(), (SELECT "UName" from University WHERE "UName" = 'University College London') , (SELECT "DName" from Department WHERE "DName" ='English') , 'Christopher (3)', 'Nolan (3)', 1970, 'Westminster, London, United Kingdom' )
^^^^^^^
SyntaxError: invalid syntax
A:
You need to properly use subqueries (google it!). Try something like this, it might work (and please fix your variable names, qwertyu is not good, should be descriptive like unique_id, uname, dname, first_name, etc.)
def insrtDirector(q,w,e,r,t,y,u):
# Assume w and e are always subqueries with one result
# Also assume no outside source can supply w and e (must be sanitized to not be subject to sql injection)
insrt = "INSERT INTO Director VALUES (%s,{uname_subquery},{dname_subquery},%s,%s,%s,%s)".format(
uname_subquery=w,
        dname_subquery=e,
)
# This print command is just temporary so you can see what it looks like
print("insert command:", insrt)
sql1 = (q,r,t,y,u)
cur.execute(insrt, sql1)
insrtDirector(
uuid.uuid4(),
"(SELECT UName from University WHERE UName = 'University College London')",
"(SELECT DName from Department WHERE DName = 'English')",
'Christopher (3)',
'Nolan (3)',
1970,
'Westminster, London, United Kingdom',
)
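An alternative sketch that keeps everything parameterized, so the subquery inputs are also safe from SQL injection; it assumes the same cur cursor and follows the quoting used in the question:
import uuid

insert_sql = """
    INSERT INTO "Director" VALUES (
        %s,
        (SELECT "UName" FROM University WHERE "UName" = %s),
        (SELECT "DName" FROM Department WHERE "DName" = %s),
        %s, %s, %s, %s
    )
"""

params = (
    str(uuid.uuid4()),
    'University College London',
    'English',
    'Christopher (3)',
    'Nolan (3)',
    1970,
    'Westminster, London, United Kingdom',
)
cur.execute(insert_sql, params)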
| Inserting tuple into postegresql with python - Select statment | Hello I am trying to insert to a pgadmin table using python, I used execute and it worked, but for my second aprt i need to use a fucntion, I got everything working except the inserts with select, it tells my syntax error, or forgot comma, literally everything. Im new, so help would be apprecitated .
def insrtDirector(q,w,e,r,t,y,u):
sql1 = (q,w,e,r,t,y,u)
insrt = """INSERT INTO "Director" VALUES (%s,%s,%s,%s,%s,%s,%s)"""
cur.execute(insrt,sql1)
insrtDirector(uuid.uuid4(), (SELECT "UName" from University WHERE "UName" = 'University College London') , (SELECT "DName" from Department WHERE "DName" ='English') , 'Christopher (3)', 'Nolan (3)', 1970, 'Westminster, London, United Kingdom' )
Error
insrtDirector(uuid.uuid4(), (SELECT "UName" from University WHERE "UName" = 'University College London') , (SELECT "DName" from Department WHERE "DName" ='English') , 'Christopher (3)', 'Nolan (3)', 1970, 'Westminster, London, United Kingdom' )
^^^^^^^
SyntaxError: invalid syntax
| [
"You need to properly use subqueries (google it!). Try something like this, it might work (and please fix your variable names, qwertyu is not good, should be descriptive like unique_id, uname, dname, first_name, etc.)\ndef insrtDirector(q,w,e,r,t,y,u):\n # Assume w and e are always subqueries with one result\n # Also assume no outside source can supply w and e (must be sanitized to not be subject to sql injection)\n insrt = \"INSERT INTO Director VALUES (%s,{uname_subquery},{dname_subquery},%s,%s,%s,%s)\".format(\n uname_subquery=w,\n dmane_subquery=e,\n )\n # This print command is just temporary so you can see what it looks like\n print(\"insert command:\", insrt)\n sql1 = (q,r,t,y,u)\n cur.execute(insrt, sql1)\n\ninsrtDirector(\n uuid.uuid4(),\n \"(SELECT UName from University WHERE UName = 'University College London')\",\n \"(SELECT DName from Department WHERE DName = 'English')\",\n 'Christopher (3)',\n 'Nolan (3)',\n 1970,\n 'Westminster, London, United Kingdom',\n)\n\n"
] | [
0
] | [] | [] | [
"postgresql",
"psycopg2",
"python"
] | stackoverflow_0074608480_postgresql_psycopg2_python.txt |
Q:
Can we get a random row value for a specific key inside a dictionary?
I fetched records from a .CSV file into a Pandas dataframe. Then I want to fetch a random record/row from it and, without using a numeric index, access a specific key like French or English (for example a French word and its English meaning) and display the pulled-out random record for a specific key/row.
# this is the .CSV file having French word and English meaning
French,English
partie,part
histoire,history
chercher,search
seulement,only
police,police
pensais,thought
aide,help
demande,request
genre,kind
mois,month
frère,brother
laisser,let
car,because
mettre,to put
The Python code:
data = pandas.read_csv("french_words.csv")
#----converting read .CSV file to dictionary
to_learn = data.to_dict(orient="records")
current_card = random.choice(to_learn)
print(current_card["French"])
#----This is what I want to achieve using dictionary
#----This is what I tried but can't move forward
words_data_dict = {row.French: row.English for (index, row) in data.iterrows()}
A:
The solution I figured out
# read data from a dataframe into a dictionary
words_data_dict = {row.French: row.English for (index, row) in data.iterrows()}
print(words_data_dict)
# convert the dictionary to a list
list_of_entry = list(words_data_dict.items())
print(list_of_entry)
# generate a random row from the dictionary
random_data = random.choice(list_of_entry)
# specify Key using Index
print(random_data[0])
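If the goal is simply a random row with named access to French/English, a shorter hedged option is to sample the dataframe directly instead of going through a dict; data is assumed to be the dataframe read in the question:
# pick one random row straight from the dataframe
random_row = data.sample(n=1).iloc[0]
print(random_row["French"], "->", random_row["English"])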
| Can we get random row value from a specific Key inside a dictionary | I fetched records from a .CSV file into a Pandas dataframe, Then I want to fetch a random record/row in it without using index inside a specific key like French or English. Even a specific row like French word and its English meaning and display the pulled out random record at a specific key/row.
# this is the .CSV file having French word and English meaning
French,English
partie,part
histoire,history
chercher,search
seulement,only
police,police
pensais,thought
aide,help
demande,request
genre,kind
mois,month
frère,brother
laisser,let
car,because
mettre,to put
The Python code:
data = pandas.read_csv("french_words.csv")
#----converting read .CSV file to dictionary
to_learn = data.to_dict(orient="records")
current_card = random.choice(to_learn)
print(current_card["French"])
#----This is what I want to achieve using dictionary
#----This is what I tried but can't move forward
words_data_dict = {row.French: row.English for (index, row) in data.iterrows()}
| [
"The solution I figured out\n# read data from a dataframe into a dictionary\nwords_data_dict = {row.French: row.English for (index, row) in data.iterrows()}\nprint(words_data_dict)\n\n# convert the dictionary to a list\nlist_of_entry = list(words_data_dict.items())\nprint(list_of_entry)\n\n# generate a random row from the dictionary\nrandom_data = random.choice(list_of_entry)\n\n# specify Key using Index\nprint(random_data[0])\n\n"
] | [
0
] | [] | [] | [
"dictionary",
"dictionary_comprehension",
"nested",
"python"
] | stackoverflow_0074610164_dictionary_dictionary_comprehension_nested_python.txt |
Q:
How to use VSCode with the existing docker container
I've made a Python+Django+git Docker container.
Now, I would like to 'Attach to a running container..' with VSCode to develop, i.e. run and debug, a Python app inside.
Is it a good idea? Or is it better to only set up VSCode to run the app inside the container?
I don't want VSCode to make a Docker container by itself.
Thanks.
I tried 'Attach to a running container...' but got 'error xhr failed...' etc.
A:
I use such an environment to develop python app inside a container.
image_create.sh # script to create image to use it local and on the server
image_dockerfile # dockerfile with script how to create an image
container_create.sh # create named container from image
container_restart.sh # restart existing container
container_stop.sh # stop existing container
Examples:
image_dockerfile :
FROM python:3.9.15-slim-bullseye
USER root
RUN pip3 install requests telethon
RUN apt-get update
RUN apt-get --assume-yes install git
image_create.sh :
docker rmi python_find_a_job:lts
docker build . -f python_find_a_job -t python_find_a_job:lts
container_create.sh :
docker rm -f python_find_a_job
docker run -t -d --name python_find_a_job -i python_find_a_job:lts
docker ps -aq
container_restart.sh :
docker container restart python_find_a_job
docker ps -aq
container_stop.sh :
docker stop python_find_a_job
docker ps -aq
For VSCcode:
a) Prepare files (see above).
b) Run:
image_create.sh
container_create.sh
c) Open project folder in VSCode
d) Click on left bottom green / Attach to running container / select container name (python_find_a_job).
e) Clone repository.
f) Install extension 'Python'.
Now you can run and debug inside the container.
After work:
git push
container_stop.sh
Before work:
container_restart.sh
git pull
A:
Visual Studio Code and Docker Desktop each offer a feature called "Dev Containers" (VSCode), "Dev Environments" (Docker Desktop), or Codespaces (GitHub).
In this approach, a Docker container is created by scanning the source and generating a container that contains the development toolchain. Visual Studio Code then attaches to the container and allows you to develop even though you do not have node/python3/dotnet/etc. installed on your development PC.
The xhr error indicates something went wrong downloading a scanning image, or that something else is strange about your project.
There is an optional Dockerfile that can be created if scanning fails to find an image; it is normally kept in a .devcontainer / .devenvironments folder, depending on which of Docker / VSCode / GitHub / other you are using.
Your project might also have one (or more) Dockerfiles that are used to package the running app up as a Docker image, so don't be confused if you end up with two. That's not a problem and is expected, really.
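For reference, a minimal devcontainer.json sketch, normally stored in that .devcontainer folder; the image tag, extension id and post-create command below are illustrative assumptions, not something taken from the question:
{
  "name": "python-django-dev",
  "image": "python:3.9-slim-bullseye",
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python"]
    }
  },
  "postCreateCommand": "pip install -r requirements.txt"
}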
A:
I have got good results with VSCode using docker compose with compose.yml. But in this case, docker compose will create a new container from image every time.
compose.yml example:
version: '3.3'
services:
my_container:
image: python_find_a_job:lts
stdin_open: true # docker run -i
tty: true # docker run -t
container_name: find_a_job
network_mode: "host"
volumes:
- ~/.PYTHON/find_a_job:/opt/find_a_job
mongodb_container:
image: mongo:latest
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: pass
network_mode: "host"
ports:
- 27017:27017
volumes:
- mongodb_data_container:/data/db
volumes:
mongodb_data_container:
Before work:
docker compose up
VSCode / Attach to Running Container ...
File / Open Folder (/opt/find_a_job from above example of compose.yml).
After work:
docker compose down
P.S. I have tried PyCharm community ed. But it can't add Docker interpreter. Only PyCharm Professional ed. can do it.
So I use VSCode to run & debug (F5) inside a docker container. There was some minor problems with setting VSCode tasks, extensions etc. But main part about run & debug inside a container works perfect. Changes in a project can be saved locally or pushed to git by VSCode or manually.
| How to use VSCode with the existing docker container | I've made an Python+Django+git docker container.
Now, I would like to 'Attach to a running container..' with VSCode to develop, i.e. run and debug, a Python app inside.
Is it good idea? Or it is better only setting up VSCode to run app inside the container?
I don't want VSCode make a docker container by itself.
Thanks.
I tried to 'Attach to a running container..' but have got 'error xhr failed...' etc.
| [
"I use such an environment to develop python app inside a container.\nimage_create.sh # script to create image to use it local and on the server\n\nimage_dockerfile # dockerfile with script how to create an image\n\ncontainer_create.sh # create named container from image\n\ncontainer_restart.sh # restart existing container\n\ncontainer_stop.sh # stop existing container\n\nExamples:\nimage_dockerfile :\nFROM python:3.9.15-slim-bullseye\nUSER root\nRUN pip3 install requests telethon\nRUN apt-get update\nRUN apt-get --assume-yes install git\n\nimage_create.sh :\ndocker rmi python_find_a_job:lts\ndocker build . -f python_find_a_job -t python_find_a_job:lts\n\ncontainer_create.sh :\ndocker rm -f python_find_a_job\ndocker run -t -d --name python_find_a_job -i python_find_a_job:lts\ndocker ps -aq\n\ncontainer_restart.sh :\ndocker container restart python_find_a_job\ndocker ps -aq\n\ncontainer_stop.sh :\ndocker stop python_find_a_job\ndocker ps -aq\n\nFor VSCcode:\na) Prepare files (see above).\nb) Run:\nimage_create.sh\ncontainer_create.sh\nc) Open project folder in VSCode\nd) Click on left bottom green / Attach to running container / select container name (python_find_a_job).\ne) Clone repository.\nf) Install extension 'Python'.\nNow you can run and debug inside the container.\nAfter work:\ngit push\ncontainer_stop.sh\nBefore work:\ncontainer_restart.sh\ngit pull\n",
"Visual Studio Code, and Docker Desktop each offer a feature called \"Dev Containers\" (VSCode) or \"Dev Environments\" (DD), or CodeSpaces (GitHub)\nIn this approach, a Docker container is created by scanning the source, and generating a container that contains the development toolchain. Visual Studio then attaches to the container, and allows you to develop even though you do not have node/python3/dotnet/etc. installed on your development PC.\nThe xhr error indicates something went wrong downloading a scanning image or something else is strange about your project.\nThere is an optional Dockerfile that can be created if scanning fails to find an image, that is normally kept in a .devcontainers / .devenvironments folder depending on which of Docker / VSCode / GitHub / other you are using.\nYour project might also have one (or more) Dockerfile's that are used to package the running app up as a docker image, so don't be confused if you end up with 2. Thats not a problem and is expected really.\n",
"I have got good results with VSCode using docker compose with compose.yml. But in this case, docker compose will create a new container from image every time.\ncompose.yml example:\nversion: '3.3'\nservices:\n my_container:\n image: python_find_a_job:lts\n stdin_open: true # docker run -i\n tty: true # docker run -t\n container_name: find_a_job\n network_mode: \"host\"\n volumes:\n - ~/.PYTHON/find_a_job:/opt/find_a_job\n mongodb_container:\n image: mongo:latest\n environment:\n MONGO_INITDB_ROOT_USERNAME: root\n MONGO_INITDB_ROOT_PASSWORD: pass\n network_mode: \"host\"\n ports:\n - 27017:27017\n volumes:\n - mongodb_data_container:/data/db\nvolumes:\n mongodb_data_container:\n\nBefore work:\ndocker compose up\n\nVSCode / Attach to Running Container ...\nFile / Open Folder (/opt/find_a_job from above example of compose.yml).\nAfter work:\ndocker compose down\n\nP.S. I have tried PyCharm community ed. But it can't add Docker interpreter. Only PyCharm Professional ed. can do it.\nSo I use VSCode to run & debug (F5) inside a docker container. There was some minor problems with setting VSCode tasks, extensions etc. But main part about run & debug inside a container works perfect. Changes in a project can be saved locally or pushed to git by VSCode or manually.\n"
] | [
1,
0,
0
] | [] | [] | [
"docker",
"python",
"visual_studio_code"
] | stackoverflow_0074572709_docker_python_visual_studio_code.txt |
Q:
Numpy-MKL for OS X
I love being able to use Christoph Gohlke's numpy-MKL version of NumPy linked to Intel's Math Kernel Library on Windows. However, I have been unable to find a similar version for OS X, preferably NumPy 1.7 linked for Python 3.3 on Mountain Lion. Does anyone know where this might be obtained?
EDIT:
So after a bit of hunting I found this link to evaluate Intel's Composer XE2013 studios for C++ and Fortran (both of which contain the MKL), as well as a tutorial on building NumPy and SciPy with it, so this will serve for the present. However, the question remains - is there a frequently-updated archive for OS X similar to Christoph Gohlke's? If not, why not? :)
A:
Intel has release their MKL under a community license, which is free, with limited technical support. Currently MKL under the Community License is available for Linux and Windows, and it is expected they will provide a version for Mac OS X soon.
https://software.intel.com/en-us/comment/1839012
In one of their recent webinars, I asked for their plans for a Mac OS X MKL under the community license. They say it is coming soon.
Update 2:
Continuum provides Anaconda Python with Intel MKL included for all platforms.
https://www.anaconda.com/blog/developer-blog/anaconda-25-release-now-mkl-optimizations/
Intel even makes it easy to compile and link against the MKL from the Anaconda Python distribution.
https://software.intel.com/en-us/articles/using-intel-distribution-for-python-with-anaconda
Update:
It now appears that Intel has their own version of Python that they are providing to beta testers.
https://software.intel.com/en-us/forums/intel-distribution-for-python/topic/581593
A:
I know that this is an older question, but in case it comes up for someone who is searching: I would recommend trying out anaconda. For $29.00 they have an add-on that includes mkl optimized numpy + scipy.
A:
MacPorts seems to have recently added an MKL variant to their NumPy port (as well as to SciPy and PyTorch). Tested on my 16” MacBook Pro 2019 with 2.4GHz 8-core Intel Core i9 and macOS Ventura 13.0.1, Numpy with MKL is significantly faster than Numpy with the Accelerate framework, which is another fast replacement for OpenBLAS that is built into macOS. I tested using this code which I got from Puget Systems:
import numpy as np
import time
n = 20000
A = np.random.randn(n,n).astype('float64')
B = np.random.randn(n,n).astype('float64')
start_time = time.time()
nrm = np.linalg.norm(A@B)
print(" took {} seconds ".format(time.time() - start_time))
print(" norm = ",nrm)
The result of my testing is that Numpy with mkl took ~47 seconds while Numpy with accelerate took ~66 seconds. Accelerate also used more threads.
To install this with MacPorts you first have to install MacPorts, then run sudo port install py310-numpy -openblas +mkl in the terminal.
| Numpy-MKL for OS X | I love being able to use Christoph Gohlke's numpy-MKL version of NumPy linked to Intel's Math Kernel Library on Windows. However, I have been unable to find a similar version for OS X, preferably NumPy 1.7 linked for Python 3.3 on Mountain Lion. Does anyone know where this might be obtained?
EDIT:
So after a bit of hunting I found this link to evaluate Intel's Composer XE2013 studios for C++ and Fortran (both of which contain the MKL), as well as a tutorial on building NumPy and SciPy with it, so this will serve for the present. However, the question remains - is there a frequently-updated archive for OS X similar to Christoph Gohlke's? If not, why not? :)
| [
"Intel has release their MKL under a community license, which is free, with limited technical support. Currently MKL under the Community License is available for Linux and Windows, and it is expected they will provide a version for Mac OS X soon.\nhttps://software.intel.com/en-us/comment/1839012\nIn one of their recent webinars, I asked for their plans for a Mac OS X MKL under the community license. They say it is coming soon.\nUpdate 2:\nContinuum provides Anaconda Python with Intel MKL included for all platforms.\nhttps://www.anaconda.com/blog/developer-blog/anaconda-25-release-now-mkl-optimizations/\nIntel even makes it easy to compile and link against the MKL from the Anaconda Python distribution. \nhttps://software.intel.com/en-us/articles/using-intel-distribution-for-python-with-anaconda\nUpdate:\nIt now appears that Intel has their own version of Python that they are providing to beta testers.\nhttps://software.intel.com/en-us/forums/intel-distribution-for-python/topic/581593\n",
"I know that this is an older question, but in case it comes up for someone who is searching: I would recommend trying out anaconda. For $29.00 they have an add-on that includes mkl optimized numpy + scipy.\n",
"MacPorts seems to have recently added an MKL variant to their NumPy port (as well as to SciPy and PyTorch). Tested on my 16” MacBook Pro 2019 with 2.4GHz 8-core Intel Core i9 and macOS Ventura 13.0.1, Numpy with MKL is significantly faster than Numpy with the Accelerate framework, which is another fast replacement for OpenBLAS that is built into macOS. I tested using this code which I got from Puget Systems:\nimport numpy as np\nimport time\nn = 20000\nA = np.random.randn(n,n).astype('float64')\nB = np.random.randn(n,n).astype('float64')\nstart_time = time.time()\nnrm = np.linalg.norm(A@B)\nprint(\" took {} seconds \".format(time.time() - start_time))\nprint(\" norm = \",nrm)\n\nThe result of my testing is that Numpy with mkl took ~47 seconds while Numpy with accelerate took ~66 seconds. Accelerate also used more threads.\nTo install this with MacPorts you first have to install MacPorts, then run sudo port install py310-numpy -openblas +mkl in the terminal.\n"
] | [
4,
3,
1
] | [] | [] | [
"intel_mkl",
"macos",
"numpy",
"python",
"python_3.3"
] | stackoverflow_0015665385_intel_mkl_macos_numpy_python_python_3.3.txt |
Q:
Graphql and Stake.com, POST body missing, invalid Content-Type, or JSON object has no keys
Please, help. I'm use scrapingant for bypass cloudflare.
The task to develop a real-time data parser, stuck at the request stage... :(
`
headers = {
"accept": "*/*",
"accept-encoding": "gzip, deflate, br",
"accept-language": "ru-RU,ru;q=0.9,en-GB;q=0.8,en;q=0.7,en-US;q=0.6",
"cf-device-type": "",
"content-length": "3315",
"content-type": "application/json",
"cookie": "session_info=undefined; currency_currency=btc; currency_hideZeroBalances=false; currency_currencyView=crypto; currency_bankingCurrencies=[]; casinoSearch=['Monopoly','Crazy Time','Sweet Bonanza','Money Train','Reactoonz']; sportsSearch=['Liverpool FC','Kansas City Chiefs','Los Angeles Lakers','FC Barcelona','FC Bayern Munich']; oddsFormat=decimal; sportMarketGroupMap={}; locale=ru; intercom-id-cx1ywgf2=86f79ef7-ca71-4205-8f41-b73b0b559b2e; intercom-session-cx1ywgf2=; cookie_consent=true; leftSidebarView_v2=minimized; sidebarView=hidden; cf_clearance=6420a111bb498d49b56800690b298b7bba53e91d-1667643880-0-150; __cf_bm=cj_pRlIaag.zmXOLQPWJ0GEip_W3NuRcjBa.OlOvIzU-1667643883-0-Ad3+LGxBsAD+n4k5G6mVTfRhfqAthNtU9O9VY4MicOoFQ82/DvoS6h44JXKfexV2niXlGcEBTEMB9VUOYiNbr/2tr1EidvV2unVIk7hyX8cYAcc0btV2eZv1yvPZEcGumjKYXvKuFJOx/vPpi53NXizPc8apm56HvNxb9SkKULIy",
"dnt": "1",
"origin": "https://stake.com",
"referer": "https://stake.com/sports/home",
"sec-ch-ua": "'Google Chrome';v='107', 'Chromium';v='107', 'Not=A?Brand';v='24'",
"sec-ch-ua-mobile": "?0",
"sec-ch-ua-platform": "Windows",
"sec-fetch-dest": "empty",
"sec-fetch-mode": "cors",
"sec-fetch-site": "same-origin",
"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36",
"x-forwarded-for": "88.99.58.45, 162.158.38.53, 172.20.242.28",
"x-geoip-country": "DE",
"x-geoip-state": "",
"x-language": "ru"
}
body = """
query highrollerSportBets($limit: Int!) {
highrollerSportBets(limit: $limit) {
...RealtimeSportBet
__typename
}
}
fragment RealtimeSportBet on Bet {
id
iid
bet {
__typename
... on PlayerPropBet {
...PlayerPropBetFragment
__typename
}
... on SportBet {
outcomes {
fixture {
data {
__typename
... on SportFixtureDataMatch {
competitors {
name
abbreviation
__typename
}
__typename
}
}
tournament {
category {
sport {
slug
__typename
}
__typename
}
__typename
}
__typename
}
__typename
}
createdAt
potentialMultiplier
amount
currency
user {
id
name
__typename
}
__typename
}
}
}
fragment PlayerPropBetFragment on PlayerPropBet {
__typename
active
amount
cashoutMultiplier
createdAt
currency
customBet
id
odds
payout
payoutMultiplier
updatedAt
status
user {
id
name
__typename
}
playerProps {
id
odds
lineType
playerProp {
...PlayerPropLineFragment
__typename
}
__typename
}
}
fragment PlayerPropLineFragment on PlayerPropLine {
id
line
over
under
suspended
balanced
name
player {
id
name
__typename
}
market {
id
stat {
name
value
__typename
}
game {
id
fixture {
id
name
status
eventStatus {
...FixtureEventStatus
__typename
}
data {
... on SportFixtureDataMatch {
__typename
startTime
competitors {
...CompetitorFragment
__typename
}
}
__typename
}
tournament {
id
category {
id
sport {
id
name
slug
__typename
}
__typename
}
__typename
}
__typename
}
__typename
}
__typename
}
}
fragment FixtureEventStatus on SportFixtureEventStatus {
homeScore
awayScore
matchStatus
clock {
matchTime
remainingTime
__typename
}
periodScores {
homeScore
awayScore
matchStatus
__typename
}
currentServer {
extId
__typename
}
homeGameScore
awayGameScore
statistic {
yellowCards {
away
home
__typename
}
redCards {
away
home
__typename
}
corners {
home
away
__typename
}
__typename
}
}
fragment CompetitorFragment on SportFixtureCompetitor {
name
extId
countryCode
abbreviation
}
"""
operationName = "highrollerSportBets"
variables = {"limit":10}
url = 'https://stake.com/_api/graphql'
sa_key = '280a2b7336344a8ea15106dd3220cc5a'
sa_api = 'https://api.scrapingant.com/v2/general'
qParams = {'url': url, 'x-api-key': sa_key}
reqUrl = f'{sa_api}?{urllib.parse.urlencode(qParams)}'
r = requests.post(url=reqUrl, json={"query": body, "operationName": operationName, "variables": variables}, headers=headers)
print(r.text)
`
Output
POST body missing, invalid Content-Type, or JSON object has no keys.
Tell me, please, where did I make a mistake? Perhaps there is some library for similar tasks?
I'm running with a VPN. Passing cookies is mandatory
A:
CAUTION: NEVER publish any API key
I was able to run your code without error.
import requests
from urllib.parse import urlencode
...
operationName = "highrollerSportBets"
variables = {"limit":10}
url = 'https://stake.com/_api/graphql'
sa_key = 'xxxx'
sa_api = 'https://api.scrapingant.com/v2/general'
qParams = {'url': url, 'x-api-key': sa_key}
reqUrl = f'{sa_api}?{urlencode(qParams)}' # just change this
print(reqUrl)
r = requests.post(url=reqUrl, json={"query": body, "operationName": operationName, "variables": variables}, headers=headers)
print(r.text)
Run
$ python3 --version
Python 3.10.6
$ python3 so.py
https://api.scrapingant.com/v2/general?url=https%3A%2F%2Fstake.com%2F_api%2Fgraphql&x-api-key=XXX
{"detail":"Our browser was detected by target site. Try again later or contact us [email protected]"}
A:
I am a Stake.com player too, and i use the API (and a VPN), every day to automate some bets.
To bypass cloudflare, you just need to get one cookie, named 'cf_clearance', and then use it requests. That's all!
request_data = {
"query": body,
"variables": {"limit": 20,"offset": 0},
}
cookies = {"cf_clearance": cf_clearance_value}
session = requests.Session()
session.headers["x-access-token"] = "xxxx"
session.headers["user-agent"] = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36"
time.sleep(1)
response = session.post("https://stake.com/_api/graphql", cookies=cookies,json=request_data)
| Graphql and Stake.com, POST body missing, invalid Content-Type, or JSON object has no keys | Please, help. I'm use scrapingant for bypass cloudflare.
The task to develop a real-time data parser, stuck at the request stage... :(
`
headers = {
"accept": "*/*",
"accept-encoding": "gzip, deflate, br",
"accept-language": "ru-RU,ru;q=0.9,en-GB;q=0.8,en;q=0.7,en-US;q=0.6",
"cf-device-type": "",
"content-length": "3315",
"content-type": "application/json",
"cookie": "session_info=undefined; currency_currency=btc; currency_hideZeroBalances=false; currency_currencyView=crypto; currency_bankingCurrencies=[]; casinoSearch=['Monopoly','Crazy Time','Sweet Bonanza','Money Train','Reactoonz']; sportsSearch=['Liverpool FC','Kansas City Chiefs','Los Angeles Lakers','FC Barcelona','FC Bayern Munich']; oddsFormat=decimal; sportMarketGroupMap={}; locale=ru; intercom-id-cx1ywgf2=86f79ef7-ca71-4205-8f41-b73b0b559b2e; intercom-session-cx1ywgf2=; cookie_consent=true; leftSidebarView_v2=minimized; sidebarView=hidden; cf_clearance=6420a111bb498d49b56800690b298b7bba53e91d-1667643880-0-150; __cf_bm=cj_pRlIaag.zmXOLQPWJ0GEip_W3NuRcjBa.OlOvIzU-1667643883-0-Ad3+LGxBsAD+n4k5G6mVTfRhfqAthNtU9O9VY4MicOoFQ82/DvoS6h44JXKfexV2niXlGcEBTEMB9VUOYiNbr/2tr1EidvV2unVIk7hyX8cYAcc0btV2eZv1yvPZEcGumjKYXvKuFJOx/vPpi53NXizPc8apm56HvNxb9SkKULIy",
"dnt": "1",
"origin": "https://stake.com",
"referer": "https://stake.com/sports/home",
"sec-ch-ua": "'Google Chrome';v='107', 'Chromium';v='107', 'Not=A?Brand';v='24'",
"sec-ch-ua-mobile": "?0",
"sec-ch-ua-platform": "Windows",
"sec-fetch-dest": "empty",
"sec-fetch-mode": "cors",
"sec-fetch-site": "same-origin",
"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36",
"x-forwarded-for": "88.99.58.45, 162.158.38.53, 172.20.242.28",
"x-geoip-country": "DE",
"x-geoip-state": "",
"x-language": "ru"
}
body = """
query highrollerSportBets($limit: Int!) {
highrollerSportBets(limit: $limit) {
...RealtimeSportBet
__typename
}
}
fragment RealtimeSportBet on Bet {
id
iid
bet {
__typename
... on PlayerPropBet {
...PlayerPropBetFragment
__typename
}
... on SportBet {
outcomes {
fixture {
data {
__typename
... on SportFixtureDataMatch {
competitors {
name
abbreviation
__typename
}
__typename
}
}
tournament {
category {
sport {
slug
__typename
}
__typename
}
__typename
}
__typename
}
__typename
}
createdAt
potentialMultiplier
amount
currency
user {
id
name
__typename
}
__typename
}
}
}
fragment PlayerPropBetFragment on PlayerPropBet {
__typename
active
amount
cashoutMultiplier
createdAt
currency
customBet
id
odds
payout
payoutMultiplier
updatedAt
status
user {
id
name
__typename
}
playerProps {
id
odds
lineType
playerProp {
...PlayerPropLineFragment
__typename
}
__typename
}
}
fragment PlayerPropLineFragment on PlayerPropLine {
id
line
over
under
suspended
balanced
name
player {
id
name
__typename
}
market {
id
stat {
name
value
__typename
}
game {
id
fixture {
id
name
status
eventStatus {
...FixtureEventStatus
__typename
}
data {
... on SportFixtureDataMatch {
__typename
startTime
competitors {
...CompetitorFragment
__typename
}
}
__typename
}
tournament {
id
category {
id
sport {
id
name
slug
__typename
}
__typename
}
__typename
}
__typename
}
__typename
}
__typename
}
}
fragment FixtureEventStatus on SportFixtureEventStatus {
homeScore
awayScore
matchStatus
clock {
matchTime
remainingTime
__typename
}
periodScores {
homeScore
awayScore
matchStatus
__typename
}
currentServer {
extId
__typename
}
homeGameScore
awayGameScore
statistic {
yellowCards {
away
home
__typename
}
redCards {
away
home
__typename
}
corners {
home
away
__typename
}
__typename
}
}
fragment CompetitorFragment on SportFixtureCompetitor {
name
extId
countryCode
abbreviation
}
"""
operationName = "highrollerSportBets"
variables = {"limit":10}
url = 'https://stake.com/_api/graphql'
sa_key = '280a2b7336344a8ea15106dd3220cc5a'
sa_api = 'https://api.scrapingant.com/v2/general'
qParams = {'url': url, 'x-api-key': sa_key}
reqUrl = f'{sa_api}?{urllib.parse.urlencode(qParams)}'
r = requests.post(url=reqUrl, json={"query": body, "operationName": operationName, "variables": variables}, headers=headers)
print(r.text)
`
Output
POST body missing, invalid Content-Type, or JSON object has no keys.
Tell me, please, where did I make a mistake? Perhaps there is some library for similar tasks?
I'm running with a VPN. Passing cookies is mandatory
| [
"CAUTION: NEVER publish any API key\nI was able to run your code without error.\nimport requests\nfrom urllib.parse import urlencode\n...\noperationName = \"highrollerSportBets\"\nvariables = {\"limit\":10}\n\nurl = 'https://stake.com/_api/graphql'\nsa_key = 'xxxx'\nsa_api = 'https://api.scrapingant.com/v2/general'\nqParams = {'url': url, 'x-api-key': sa_key}\nreqUrl = f'{sa_api}?{urlencode(qParams)}' # just change this\n\nprint(reqUrl)\nr = requests.post(url=reqUrl, json={\"query\": body, \"operationName\": operationName, \"variables\": variables}, headers=headers)\nprint(r.text)\n\nRun\n$ python3 --version\nPython 3.10.6\n\n$ python3 so.py \nhttps://api.scrapingant.com/v2/general?url=https%3A%2F%2Fstake.com%2F_api%2Fgraphql&x-api-key=XXX\n{\"detail\":\"Our browser was detected by target site. Try again later or contact us [email protected]\"}\n\n",
"I am a Stake.com player too, and i use the API (and a VPN), every day to automate some bets.\nTo bypass cloudflare, you just need to get one cookie, named 'cf_clearance', and then use it requests. That's all!\nrequest_data = {\n \"query\": body,\n \"variables\": {\"limit\": 20,\"offset\": 0},\n}\n\ncookies = {\"cf_clearance\": cf_clearance_value} \nsession = requests.Session()\nsession.headers[\"x-access-token\"] = \"xxxx\"\nsession.headers[\"user-agent\"] = \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36\"\ntime.sleep(1)\nresponse = session.post(\"https://stake.com/_api/graphql\", cookies=cookies,json=request_data)\n\n"
] | [
0,
0
] | [] | [] | [
"graphql",
"python",
"python_3.x"
] | stackoverflow_0074367143_graphql_python_python_3.x.txt |
Q:
Sets intersection with returning every match
everyone!
I'm looking for the most elegant way to find the intersection of two sets, but I need to get every match of the keys
The examples of what I mean:
s1 = {1, 1, 2, 3}
s2 = {4, 5, 1, 1}
s1.intersection(s2)
Output is:
{1}
What output I need:
{1, 1}
Thank you everyone for your help, and sorry for my English
A:
If you want a set-like thing for which items can appear with multiplicity greater than 1, then you could use a multiset. These can be represented by Counter objects. There is no built-in intersection method for those, but you could write a function which computes it by taking the min of two counts:
from collections import Counter
def intersection(s1,s2):
'''intersection of multisets s1,s2'''
d = {}
for i in s1:
c = min(s1[i],s2[i])
if c > 0:
d[i] = c
return Counter(d)
#test:
s1 = Counter([1, 1, 2, 3])
s2 = Counter([4, 5, 1, 1])
print(s1)
print(s2)
print(intersection(s1,s2))
Output:
Counter({1: 2, 2: 1, 3: 1})
Counter({1: 2, 4: 1, 5: 1})
Counter({1: 2})
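As a side note (standard library behaviour, not something from the original answer): Counter already implements multiset intersection through the & operator, so the helper above can be replaced by a one-liner:
from collections import Counter

s1 = Counter([1, 1, 2, 3])
s2 = Counter([4, 5, 1, 1])

common = s1 & s2                    # keeps the minimum count of each element
print(common)                       # Counter({1: 2})
print(list(common.elements()))      # [1, 1]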
| Sets intersection with returning every match | everyone!
I'm looking for most elegant way to find intersection of two sets, but I need to get a every match of keys
The examples of what I mean:
s1 = {1, 1, 2, 3}
s2 = {4, 5, 1, 1}
s1.intersection(s2)
Output is:
{1}
What output I need:
{1, 1}
Thank you everyone for help and sorry for my english
| [
"If you want a set-like thing for which items can appear with multiplicity greater than 1, then you could use a multiset. These can be represented by Counter objects. There is no built-in intersection method for those, but you could write a function which computes it by taking the min of two counts:\nfrom collections import Counter\n\ndef intersection(s1,s2):\n '''intersection of multisets s1,s2'''\n d = {}\n for i in s1:\n c = min(s1[i],s2[i])\n if c > 0:\n d[i] = c\n return Counter(d)\n\n#test:\n\ns1 = Counter([1, 1, 2, 3])\ns2 = Counter([4, 5, 1, 1])\n\nprint(s1)\nprint(s2)\nprint(intersection(s1,s2))\n\nOutput:\nCounter({1: 2, 2: 1, 3: 1})\nCounter({1: 2, 4: 1, 5: 1})\nCounter({1: 2})\n\n"
] | [
2
] | [] | [] | [
"intersection",
"python",
"set"
] | stackoverflow_0074611882_intersection_python_set.txt |
Q:
Taking n elements at a time from 1d list and add them to 2d list
I have a list of data, and I'd like to take 4 elements at a time from this list and put them in a 2D list where each 4-element increment is a new row of said list.
My first attempts involve input to 1d list:
list.append(input("Enter data type 1:")) list.append(input("Enter data type 2:")) etc.
and then I've tried to loop the list and to "switch" rows once the index reaches 4.
for x in range(n * 4):
for idx, y in enumerate(list):
if idx % 4 == 0:
x = x + 1
list[y] = result[x][y]
where I've initialised ran and result according to the following:
ran = int(len(list)/4)
result=[[0 for x in range(ran)] for j in range(n)]
I've also attempted to ascribe a temporary empty list that will append to an initialised 2D list.
row.append(list)
result=[[x for x in row] for j in range(n + 1)]
#result[n]=row
print(result)
n = n + 1
row.clear()
list.clear()
so that each new loop starts with an empty row, takes input from user and copies it.
I'm at a loss for how to make result save the first entry and not be redefined at the second, third, and fourth entries.
A:
I think this post is probably what you need. With np.reshape() you can just have your list filled with all the values you need and do the reshaping after in a single step.
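A minimal sketch of that idea, assuming the flat input list is called data and each row should hold 4 values:
import numpy as np

data = ['a1', 'b1', 'c1', 'd1', 'a2', 'b2', 'c2', 'd2']   # flat 1d list built from input()
result = np.array(data).reshape(-1, 4)                    # -1 lets numpy infer the row count
print(result)                                             # 2 rows of 4 entries each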
| Taking n elements at a time from 1d list and add them to 2d list | I have a list making up data, and I'd like to take 4 elements at a time from this list and put them in a 2d list where each 4-element increment is a new row of said list.
My first attempts involve input to 1d list:
list.append(input("Enter data type 1:")) list.append(input("Enter data type 2:")) etc.
and then I've tried to loop the list and to "switch" rows once the index reaches 4.
for x in range(n * 4):
for idx, y in enumerate(list):
if idx % 4 == 0:
x = x + 1
list[y] = result[x][y]
where I've initialised result according to the following:
and
ran = int(len(list)/4)
result=[[0 for x in range(ran)] for j in range(n)]
I've also attempted to ascribe a temporary empty list that will append to an initialised 2D list.
`
row.append(list)
result=[[x for x in row] for j in range(n + 1)]
#result[n]=row
print(result)
n = n + 1
row.clear()
list.clear()
so that each new loop starts with an empty row, takes input from user and copies it.
I'm at a loss for how to make result save the first entry and not be redefined at second,third,fourth entries.
| [
"I think this post is probably what you need. With np.reshape() you can just have your list filled with all the values you need and do the reshaping after in a single step.\n"
] | [
0
] | [] | [] | [
"list",
"python"
] | stackoverflow_0074612332_list_python.txt |
Q:
How to append json in Python List
I want to create the JSON Python list shown below without repeating code (using Python functions).
Expected Output:
Steps=[
{
'Name': 'Download Config File',
'ActionOnFailure': 'CONTINUE',
'HadoopJarStep': {
'Jar': 'command-runner.jar',
'Args': [
'aws',
's3',
'cp',
's3://sample-qa/jars/Loads/',
'/home/hadoop/',
'--recursive'
]
}
},
{
'Name': 'Spark Job',
'ActionOnFailure': 'CONTINUE',
'HadoopJarStep': {
'Jar': 'command-runner.jar',
'Args': [
'spark-submit',
'--deploy-mode', 'cluster',
'--executor-memory', '10g',
'--conf', 'spark.serializer=org.apache.spark.serializer.KryoSerializer',
'--conf', 'spark.sql.hive.convertMetastoreParquet=false',
'--master', 'yarn',
'--class', 'com.general.Loads',
'/home/hadoop/Loads-assembly-0.1.jar',
'20120304',
'sample-qa'
]
}
}
{
'Name': 'Spark Job',
'ActionOnFailure': 'CONTINUE',
'HadoopJarStep': {
'Jar': 'command-runner.jar',
'Args': [
'spark-submit',
'--deploy-mode', 'cluster',
'--executor-memory', '10g',
'--conf', 'spark.serializer=org.apache.spark.serializer.KryoSerializer',
'--conf', 'spark.sql.hive.convertMetastoreParquet=false',
'--master', 'yarn',
'--class', 'com.general.Loads',
'/home/hadoop/Loads-assembly-0.1.jar',
'20220130',
'sample-qa'
]
}
},
{
'Name': 'Spark Job',
'ActionOnFailure': 'CONTINUE',
'HadoopJarStep': {
'Jar': 'command-runner.jar',
'Args': [
'spark-submit',
'--deploy-mode', 'cluster',
'--executor-memory', '10g',
'--conf', 'spark.serializer=org.apache.spark.serializer.KryoSerializer',
'--conf', 'spark.sql.hive.convertMetastoreParquet=false',
'--master', 'yarn',
'--class', 'com.general.Loads',
'/home/hadoop/Loads-assembly-0.1.jar',
'20220214',
'sample-qa'
]
}
}
]
Attempted:
def lambda_handler(event, context):
steps = [
{
"Name": "Download Config File",
"ActionOnFailure": "CONTINUE",
"HadoopJarStep": {
"Jar": "command-runner.jar",
"Args": [
"aws",
"s3",
"cp",
"s3://sample-qa/jars/Loads/",
"/home/hadoop/",
"--recursive",
],
},
},
]
def addSteps(date):
step = {
"Name": "Spark Job",
"ActionOnFailure": "CONTINUE",
"HadoopJarStep": {
"Jar": "command-runner.jar",
"Args": [
'spark-submit',
'--deploy-mode', 'cluster',
'--executor-memory', '10g',
'--conf', 'spark.serializer=org.apache.spark.serializer.KryoSerializer',
'--conf', 'spark.sql.hive.convertMetastoreParquet=false',
'--master', 'yarn',
'--class', 'com.general.Loads',
'/home/hadoop/Loads-assembly-0.1.jar',
date,
'sample-qa'
],
},
}
return step
for date in ['20210801','20210807','20210814']:
addingstep = addSteps(date)
steps.append(addingstep)
steps =json.dumps(steps)
print(steps)
I have attempted this implementation, but I am getting an error; please find it below.
Error:
AttributeError: 'str' object has no attribute 'append'
Any other possibility to create this list? How to achieve this ?
A:
There is an issue with the indentation here: because steps = json.dumps(steps) is inside the loop, steps becomes a string after the first iteration, so the next steps.append(...) raises the AttributeError:
for date in ['20210801','20210807','20210814']:
addingstep = addSteps(date)
steps.append(addingstep)
steps =json.dumps(steps)
Fix:
for date in ['20210801','20210807','20210814']:
addingstep = addSteps(date)
steps.append(addingstep)
steps = json.dumps(steps)
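As a side note, an equivalent sketch without the explicit loop (same behaviour, assuming the same addSteps function):
steps.extend(addSteps(date) for date in ['20210801', '20210807', '20210814'])
steps = json.dumps(steps)   # serialize once, after the list is complete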
| How to append json in Python List | I want to create a JSON Python List shown as below using without repeated codes(Python functions).
Expected Output:
Steps=[
{
'Name': 'Download Config File',
'ActionOnFailure': 'CONTINUE',
'HadoopJarStep': {
'Jar': 'command-runner.jar',
'Args': [
'aws',
's3',
'cp',
's3://sample-qa/jars/Loads/',
'/home/hadoop/',
'--recursive'
]
}
},
{
'Name': 'Spark Job',
'ActionOnFailure': 'CONTINUE',
'HadoopJarStep': {
'Jar': 'command-runner.jar',
'Args': [
'spark-submit',
'--deploy-mode', 'cluster',
'--executor-memory', '10g',
'--conf', 'spark.serializer=org.apache.spark.serializer.KryoSerializer',
'--conf', 'spark.sql.hive.convertMetastoreParquet=false',
'--master', 'yarn',
'--class', 'com.general.Loads',
'/home/hadoop/Loads-assembly-0.1.jar',
'20120304',
'sample-qa'
]
}
}
{
'Name': 'Spark Job',
'ActionOnFailure': 'CONTINUE',
'HadoopJarStep': {
'Jar': 'command-runner.jar',
'Args': [
'spark-submit',
'--deploy-mode', 'cluster',
'--executor-memory', '10g',
'--conf', 'spark.serializer=org.apache.spark.serializer.KryoSerializer',
'--conf', 'spark.sql.hive.convertMetastoreParquet=false',
'--master', 'yarn',
'--class', 'com.general.Loads',
'/home/hadoop/Loads-assembly-0.1.jar',
'20220130',
'sample-qa'
]
}
},
{
'Name': 'Spark Job',
'ActionOnFailure': 'CONTINUE',
'HadoopJarStep': {
'Jar': 'command-runner.jar',
'Args': [
'spark-submit',
'--deploy-mode', 'cluster',
'--executor-memory', '10g',
'--conf', 'spark.serializer=org.apache.spark.serializer.KryoSerializer',
'--conf', 'spark.sql.hive.convertMetastoreParquet=false',
'--master', 'yarn',
'--class', 'com.general.Loads',
'/home/hadoop/Loads-assembly-0.1.jar',
'20220214',
'sample-qa'
]
}
}
]
Attempted:
def lambda_handler(event, context):
steps = [
{
"Name": "Download Config File",
"ActionOnFailure": "CONTINUE",
"HadoopJarStep": {
"Jar": "command-runner.jar",
"Args": [
"aws",
"s3",
"cp",
"s3://sample-qa/jars/Loads/",
"/home/hadoop/",
"--recursive",
],
},
},
]
def addSteps(date):
step = {
"Name": "Spark Job",
"ActionOnFailure": "CONTINUE",
"HadoopJarStep": {
"Jar": "command-runner.jar",
"Args": [
'spark-submit',
'--deploy-mode', 'cluster',
'--executor-memory', '10g',
'--conf', 'spark.serializer=org.apache.spark.serializer.KryoSerializer',
'--conf', 'spark.sql.hive.convertMetastoreParquet=false',
'--master', 'yarn',
'--class', 'com.general.Loads',
'/home/hadoop/Loads-assembly-0.1.jar',
date,
'sample-qa'
],
},
}
return step
for date in ['20210801','20210807','20210814']:
addingstep = addSteps(date)
steps.append(addingstep)
steps =json.dumps(steps)
print(steps)
I have attempted this implementation, but getting error please find below.
Error:
AttributeError: 'str' object has no attribute 'append'
Any other possibility to create this list? How to achieve this ?
| [
"There is an issue with the indentation here:\nfor date in ['20210801','20210807','20210814']:\n addingstep = addSteps(date)\n steps.append(addingstep)\n steps =json.dumps(steps)\n\nFix:\nfor date in ['20210801','20210807','20210814']:\n addingstep = addSteps(date)\n steps.append(addingstep)\n\nsteps = json.dumps(steps)\n\n"
] | [
2
] | [] | [] | [
"amazon_emr",
"aws_lambda",
"list",
"python",
"python_3.x"
] | stackoverflow_0074612350_amazon_emr_aws_lambda_list_python_python_3.x.txt |
Q:
Get all permutations of bool array
I need all permutations of a bool array, the following code is inefficient, but does what I want:
from itertools import permutations
import numpy as np
n1=2
n2=3
a = np.array([True]*n1+[False]*n2)
perms = set(permutations(a))
However it is inefficient and fails for long arrays. Is there a more efficient implementation?
A:
What about sampling the combinations of indices of the True values:
from itertools import combinations
import numpy as np
a = np.arange(n1+n2)
out = [np.isin(a, x).tolist() for x in combinations(range(n1+n2), r=n1)]
Output:
[[True, True, False, False, False],
[True, False, True, False, False],
[True, False, False, True, False],
[True, False, False, False, True],
[False, True, True, False, False],
[False, True, False, True, False],
[False, True, False, False, True],
[False, False, True, True, False],
[False, False, True, False, True],
[False, False, False, True, True]]
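If you would rather avoid numpy entirely, a pure-Python sketch of the same idea builds each boolean row directly from the chosen index combination:
from itertools import combinations

def bool_arrangements(n1, n2):
    n = n1 + n2
    for idx in combinations(range(n), n1):
        row = [False] * n
        for i in idx:
            row[i] = True
        yield row

perms = list(bool_arrangements(2, 3))   # same 10 arrangements as the output above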
| Get all permutations of bool array | I need all permutations of a bool array, the following code is inefficient, but does what I want:
from itertools import permutations
import numpy as np
n1=2
n2=3
a = np.array([True]*n1+[False]*n2)
perms = set(permutations(a))
However it is inefficient and fails for long arrays. Is there a more efficent implementation?
| [
"What about sampling the combinations of indices of the True values:\nfrom itertools import combinations\nimport numpy as np\n\na = np.arange(n1+n2)\n\nout = [np.isin(a, x).tolist() for x in combinations(range(n1+n2), r=n1)]\n\nOutput:\n[[True, True, False, False, False],\n [True, False, True, False, False],\n [True, False, False, True, False],\n [True, False, False, False, True],\n [False, True, True, False, False],\n [False, True, False, True, False],\n [False, True, False, False, True],\n [False, False, True, True, False],\n [False, False, True, False, True],\n [False, False, False, True, True]]\n\n"
] | [
3
] | [] | [] | [
"numpy",
"python",
"python_itertools"
] | stackoverflow_0074612253_numpy_python_python_itertools.txt |
Q:
Pass a 2d numpy array to c using ctypes
What is the correct way to pass a numpy 2d - array to a c function using ctypes ?
My current approach so far (leads to a segfault):
C code :
void test(double **in_array, int N) {
int i, j;
for(i = 0; i<N; i++) {
for(j = 0; j<N; j++) {
printf("%e \t", in_array[i][j]);
}
printf("\n");
}
}
Python code:
from ctypes import *
import numpy.ctypeslib as npct
array_2d_double = npct.ndpointer(dtype=np.double,ndim=2, flags='CONTIGUOUS')
liblr = npct.load_library('libtest.so', './src')
liblr.test.restype = None
liblr.test.argtypes = [array_2d_double, c_int]
x = np.arange(100).reshape((10,10)).astype(np.double)
liblr.test(x, 10)
A:
This is probably a late answer, but I finally got it working. All credit goes to Sturla Molden at this link.
The key is to note that double ** corresponds to an array of pointers, which can be represented on the Python side as an array of type np.uintp. Therefore, we have
xpp = (x.ctypes.data + np.arange(x.shape[0]) * x.strides[0]).astype(np.uintp)
doublepp = np.ctypeslib.ndpointer(dtype=np.uintp)
And then use doublepp as the type, pass xpp in. See full code attached.
The C code:
// dummy.c
#include <stdlib.h>
__declspec(dllexport) void foobar(const int m, const int n, const
double **x, double **y)
{
size_t i, j;
for(i=0; i<m; i++)
for(j=0; j<n; j++)
y[i][j] = x[i][j];
}
The Python code:
# test.py
import numpy as np
from numpy.ctypeslib import ndpointer
import ctypes
_doublepp = ndpointer(dtype=np.uintp, ndim=1, flags='C')
_dll = ctypes.CDLL('dummy.dll')
_foobar = _dll.foobar
_foobar.argtypes = [ctypes.c_int, ctypes.c_int, _doublepp, _doublepp]
_foobar.restype = None
def foobar(x):
y = np.zeros_like(x)
xpp = (x.__array_interface__['data'][0]
+ np.arange(x.shape[0])*x.strides[0]).astype(np.uintp)
ypp = (y.__array_interface__['data'][0]
+ np.arange(y.shape[0])*y.strides[0]).astype(np.uintp)
m = ctypes.c_int(x.shape[0])
n = ctypes.c_int(x.shape[1])
_foobar(m, n, xpp, ypp)
return y
if __name__ == '__main__':
x = np.arange(9.).reshape((3, 3))
y = foobar(x)
Hope it helps,
Shawn
A:
#include <stdio.h>
void test(double (*in_array)[3], int N){
int i, j;
for(i = 0; i < N; i++){
for(j = 0; j < N; j++){
printf("%e \t", in_array[i][j]);
}
printf("\n");
}
}
int main(void)
{
double a[][3] = {
{1., 2., 3.},
{4., 5., 6.},
{7., 8., 9.},
};
test(a, 3);
return 0;
}
if you want to use a double ** in your function, you must pass an array of pointers to double (not a 2D array):
#include <stdio.h>
void test(double **in_array, int N){
int i, j;
for(i = 0; i < N; i++){
for(j = 0; j< N; j++){
printf("%e \t", in_array[i][j]);
}
printf("\n");
}
}
int main(void)
{
double a[][3] = {
{1., 2., 3.},
{4., 5., 6.},
{7., 8., 9.},
};
double *p[] = {a[0], a[1], a[2]};
test(p, 3);
return 0;
}
Another option (as suggested by @eryksun): pass a single pointer and do some arithmetic to compute the index:
#include <stdio.h>
void test(double *in_array, int N){
int i, j;
for(i = 0; i < N; i++){
for(j = 0; j< N; j++){
printf("%e \t", in_array[i * N + j]);
}
printf("\n");
}
}
int main(void)
{
double a[][3] = {
{1., 2., 3.},
{4., 5., 6.},
{7., 8., 9.},
};
test(a[0], 3);
return 0;
}
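For the last (single-pointer) variant, the Python side stays simple because a C-contiguous 2-D numpy array is already flat in memory; a minimal sketch, assuming the C code is compiled into libtest.so under ./src as in the question:
import numpy as np
import numpy.ctypeslib as npct
from ctypes import c_int

lib = npct.load_library('libtest.so', './src')
lib.test.restype = None
lib.test.argtypes = [npct.ndpointer(dtype=np.double, ndim=2, flags='C_CONTIGUOUS'), c_int]

x = np.arange(9, dtype=np.double).reshape(3, 3)
lib.test(x, 3)   # the C side indexes it as in_array[i * N + j]

This is essentially the calling convention the question's Python snippet already uses; the segfault comes from combining it with a double ** parameter on the C side.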
A:
While this reply might be rather late, I hope it can help other people with the same problem.
As numpy arrays are internally stored as 1D arrays, one can simply rebuild the 2D shape in C. Here is a small MWE:
// libtest2d.c
#include <stdlib.h> // for malloc and free
#include <stdio.h> // for printf
// create a 2d array from the 1d one
double ** convert2d(unsigned long len1, unsigned long len2, double * arr) {
double ** ret_arr;
// allocate the additional memory for the additional pointers
ret_arr = (double **)malloc(sizeof(double*)*len1);
// set the pointers to the correct address within the array
for (int i = 0; i < len1; i++) {
ret_arr[i] = &arr[i*len2];
}
// return the 2d-array
return ret_arr;
}
// print the 2d array
void print_2d_list(unsigned long len1,
unsigned long len2,
double * list) {
// call the 1d-to-2d-conversion function
double ** list2d = convert2d(len1, len2, list);
// print the array just to show it works
for (unsigned long index1 = 0; index1 < len1; index1++) {
for (unsigned long index2 = 0; index2 < len2; index2++) {
printf("%1.1f ", list2d[index1][index2]);
}
printf("\n");
}
// free the pointers (only)
free(list2d);
}
and
# test2d.py
import ctypes as ct
import numpy as np
libtest2d = ct.cdll.LoadLibrary("./libtest2d.so")
libtest2d.print_2d_list.argtypes = (ct.c_ulong, ct.c_ulong,
np.ctypeslib.ndpointer(dtype=np.float64,
ndim=2,
flags='C_CONTIGUOUS'
)
)
libtest2d.print_2d_list.restype = None
arr2d = np.meshgrid(np.linspace(0, 1, 6), np.linspace(0, 1, 11))[0]
libtest2d.print_2d_list(arr2d.shape[0], arr2d.shape[1], arr2d)
If you compile the code with gcc -shared -fPIC libtest2d.c -o libtest2d.so and then run python test2d.py it should print the array.
I hope the example is more or less self-explaining. The idea is, that the shape is also given to the C-Code which then creates a double ** pointer for which the space for the additional pointers is reserved. And these then are then set to point to the correct part of the original array.
PS: I am rather a beginner in C so please comment if there are reasons not to do this.
A:
Here I pass two 2D numpy arrays and print the values of one array for reference; you can use this and write your own logic in C++.
cpp_function.cpp
Compile it using: g++ -shared -fPIC cpp_function.cpp -o cpp_function.so
#include <iostream>
extern "C" {
void mult_matrix(double *a1, double *a2, size_t a1_h, size_t a1_w,
size_t a2_h, size_t a2_w, int size)
{
//std::cout << "a1_h & a1_w" << a1_h << a1_w << std::endl;
//std::cout << "a2_h & a2_w" << a2_h << a2_w << std::endl;
for (size_t i = 0; i < a1_h; i++) {
for (size_t j = 0; j < a1_w; j++) {
printf("%f ", a1[i * a1_h + j]);
}
printf("\n");
}
printf("\n");
}
}
Python File
main.py
import ctypes
import numpy
from time import time
libmatmult = ctypes.CDLL("./cpp_function.so")
ND_POINTER_1 = numpy.ctypeslib.ndpointer(dtype=numpy.float64,
ndim=2,
flags="C")
ND_POINTER_2 = numpy.ctypeslib.ndpointer(dtype=numpy.float64,
ndim=2,
flags="C")
libmatmult.mult_matrix.argtypes = [ND_POINTER_1, ND_POINTER_2, ctypes.c_size_t, ctypes.c_size_t]
# print("-->", ctypes.c_size_t)
def mult_matrix_cpp(a,b):
shape = a.shape[0] * a.shape[1]
libmatmult.mult_matrix.restype = None
libmatmult.mult_matrix(a, b, *a.shape, *b.shape , a.shape[0] * a.shape[1])
size_a = (300,300)
size_b = size_a
a = numpy.random.uniform(low=1, high=255, size=size_a)
b = numpy.random.uniform(low=1, high=255, size=size_b)
t2 = time()
out_cpp = mult_matrix_cpp(a,b)
print("cpp time taken:{:.2f} ms".format((time() - t2) * 1000))
out_cpp = numpy.array(out_cpp).reshape(size_a[0], size_a[1])
| Pass a 2d numpy array to c using ctypes | What is the correct way to pass a numpy 2d - array to a c function using ctypes ?
My current approach so far (leads to a segfault):
C code :
void test(double **in_array, int N) {
int i, j;
for(i = 0; i<N; i++) {
for(j = 0; j<N; j++) {
printf("%e \t", in_array[i][j]);
}
printf("\n");
}
}
Python code:
from ctypes import *
import numpy.ctypeslib as npct
array_2d_double = npct.ndpointer(dtype=np.double,ndim=2, flags='CONTIGUOUS')
liblr = npct.load_library('libtest.so', './src')
liblr.test.restype = None
liblr.test.argtypes = [array_2d_double, c_int]
x = np.arange(100).reshape((10,10)).astype(np.double)
liblr.test(x, 10)
| [
"This is probably a late answer, but I finally got it working. All credit goes to Sturla Molden at this link.\nThe key is, note that double** is an array of type np.uintp. Therefore, we have\nxpp = (x.ctypes.data + np.arange(x.shape[0]) * x.strides[0]).astype(np.uintp)\ndoublepp = np.ctypeslib.ndpointer(dtype=np.uintp)\n\nAnd then use doublepp as the type, pass xpp in. See full code attached.\nThe C code:\n// dummy.c \n#include <stdlib.h> \n\n__declspec(dllexport) void foobar(const int m, const int n, const \ndouble **x, double **y) \n{ \n size_t i, j; \n for(i=0; i<m; i++) \n for(j=0; j<n; j++) \n y[i][j] = x[i][j]; \n} \n\nThe Python code:\n# test.py \nimport numpy as np \nfrom numpy.ctypeslib import ndpointer \nimport ctypes \n\n_doublepp = ndpointer(dtype=np.uintp, ndim=1, flags='C') \n\n_dll = ctypes.CDLL('dummy.dll') \n\n_foobar = _dll.foobar \n_foobar.argtypes = [ctypes.c_int, ctypes.c_int, _doublepp, _doublepp] \n_foobar.restype = None \n\ndef foobar(x): \n y = np.zeros_like(x) \n xpp = (x.__array_interface__['data'][0] \n + np.arange(x.shape[0])*x.strides[0]).astype(np.uintp) \n ypp = (y.__array_interface__['data'][0] \n + np.arange(y.shape[0])*y.strides[0]).astype(np.uintp) \n m = ctypes.c_int(x.shape[0]) \n n = ctypes.c_int(x.shape[1]) \n _foobar(m, n, xpp, ypp) \n return y \n\nif __name__ == '__main__': \n x = np.arange(9.).reshape((3, 3)) \n y = foobar(x) \n\nHope it helps,\nShawn\n",
"#include <stdio.h>\n\nvoid test(double (*in_array)[3], int N){\n int i, j;\n\n for(i = 0; i < N; i++){\n for(j = 0; j < N; j++){\n printf(\"%e \\t\", in_array[i][j]);\n }\n printf(\"\\n\");\n }\n}\n\nint main(void)\n{\n double a[][3] = {\n {1., 2., 3.},\n {4., 5., 6.},\n {7., 8., 9.},\n };\n\n test(a, 3);\n return 0;\n}\n\nif you want to use a double ** in your function, you must pass an array of pointer to double (not a 2d array):\n#include <stdio.h>\n\nvoid test(double **in_array, int N){\n int i, j;\n\n for(i = 0; i < N; i++){\n for(j = 0; j< N; j++){\n printf(\"%e \\t\", in_array[i][j]);\n }\n printf(\"\\n\");\n }\n}\n\nint main(void)\n{\n double a[][3] = {\n {1., 2., 3.},\n {4., 5., 6.},\n {7., 8., 9.},\n };\n double *p[] = {a[0], a[1], a[2]};\n\n test(p, 3);\n return 0;\n}\n\nAnother (as suggested by @eryksun): pass a single pointer and do some arithmetic to get the index:\n#include <stdio.h>\n\nvoid test(double *in_array, int N){\n int i, j;\n\n for(i = 0; i < N; i++){\n for(j = 0; j< N; j++){\n printf(\"%e \\t\", in_array[i * N + j]);\n }\n printf(\"\\n\");\n }\n}\n\nint main(void)\n{\n double a[][3] = {\n {1., 2., 3.},\n {4., 5., 6.},\n {7., 8., 9.},\n };\n\n test(a[0], 3);\n return 0;\n}\n\n",
"While the reply might be rather late, I hope it could help other people with the same problem.\nAs numpy arrays are internally saved as 1d arrays, one can simply rebuild 2d shape in C. Here is a small MWE:\n// libtest2d.c\n#include <stdlib.h> // for malloc and free\n#include <stdio.h> // for printf\n\n// create a 2d array from the 1d one\ndouble ** convert2d(unsigned long len1, unsigned long len2, double * arr) {\n double ** ret_arr;\n\n // allocate the additional memory for the additional pointers\n ret_arr = (double **)malloc(sizeof(double*)*len1);\n\n // set the pointers to the correct address within the array\n for (int i = 0; i < len1; i++) {\n ret_arr[i] = &arr[i*len2];\n }\n\n // return the 2d-array\n return ret_arr;\n}\n\n// print the 2d array\nvoid print_2d_list(unsigned long len1,\n unsigned long len2,\n double * list) {\n\n // call the 1d-to-2d-conversion function\n double ** list2d = convert2d(len1, len2, list);\n\n // print the array just to show it works\n for (unsigned long index1 = 0; index1 < len1; index1++) {\n for (unsigned long index2 = 0; index2 < len2; index2++) {\n printf(\"%1.1f \", list2d[index1][index2]);\n }\n printf(\"\\n\");\n }\n\n // free the pointers (only)\n free(list2d);\n}\n\nand \n# test2d.py\n\nimport ctypes as ct\nimport numpy as np\n\nlibtest2d = ct.cdll.LoadLibrary(\"./libtest2d.so\")\nlibtest2d.print_2d_list.argtypes = (ct.c_ulong, ct.c_ulong,\n np.ctypeslib.ndpointer(dtype=np.float64,\n ndim=2,\n flags='C_CONTIGUOUS'\n )\n )\nlibtest2d.print_2d_list.restype = None\n\narr2d = np.meshgrid(np.linspace(0, 1, 6), np.linspace(0, 1, 11))[0]\n\nlibtest2d.print_2d_list(arr2d.shape[0], arr2d.shape[1], arr2d)\n\nIf you compile the code with gcc -shared -fPIC libtest2d.c -o libtest2d.so and then run python test2d.py it should print the array.\nI hope the example is more or less self-explaining. The idea is, that the shape is also given to the C-Code which then creates a double ** pointer for which the space for the additional pointers is reserved. And these then are then set to point to the correct part of the original array.\nPS: I am rather a beginner in C so please comment if there are reasons not to do this.\n",
"Here i hava pass two 2d numpy array and print value of one array for the reference\nyou can use and write your own logic in cpp\ncpp_function.cpp\ncompile it using : g++ -shared -fPIC cpp_function.cpp -o cpp_function.so\n#include <iostream>\nextern \"C\" {\nvoid mult_matrix(double *a1, double *a2, size_t a1_h, size_t a1_w, \n size_t a2_h, size_t a2_w, int size)\n{\n //std::cout << \"a1_h & a1_w\" << a1_h << a1_w << std::endl; \n //std::cout << \"a2_h & a2_w\" << a2_h << a2_w << std::endl; \n for (size_t i = 0; i < a1_h; i++) {\n for (size_t j = 0; j < a1_w; j++) {\n printf(\"%f \", a1[i * a1_h + j]);\n }\n printf(\"\\n\");\n }\n printf(\"\\n\");\n }\n\n}\n\nPython File\nmain.py\nimport ctypes\nimport numpy\nfrom time import time\n\nlibmatmult = ctypes.CDLL(\"./cpp_function.so\")\nND_POINTER_1 = numpy.ctypeslib.ndpointer(dtype=numpy.float64, \n ndim=2,\n flags=\"C\")\nND_POINTER_2 = numpy.ctypeslib.ndpointer(dtype=numpy.float64, \n ndim=2,\n flags=\"C\")\nlibmatmult.mult_matrix.argtypes = [ND_POINTER_1, ND_POINTER_2, ctypes.c_size_t, ctypes.c_size_t]\n# print(\"-->\", ctypes.c_size_t)\n\ndef mult_matrix_cpp(a,b):\n shape = a.shape[0] * a.shape[1]\n libmatmult.mult_matrix.restype = None\n libmatmult.mult_matrix(a, b, *a.shape, *b.shape , a.shape[0] * a.shape[1])\n\nsize_a = (300,300)\nsize_b = size_a\n\na = numpy.random.uniform(low=1, high=255, size=size_a)\nb = numpy.random.uniform(low=1, high=255, size=size_b)\n\nt2 = time()\nout_cpp = mult_matrix_cpp(a,b)\nprint(\"cpp time taken:{:.2f} ms\".format((time() - t2) * 1000))\nout_cpp = numpy.array(out_cpp).reshape(size_a[0], size_a[1])\n\n"
] | [
28,
2,
0,
0
] | [] | [] | [
"c",
"ctypes",
"numpy",
"python"
] | stackoverflow_0022425921_c_ctypes_numpy_python.txt |
Q:
How to convert string hex into bytes format?
The string is like "e52c886a88b6f421a9324ea175dc281478f03003499de6162ca72ddacf4b09e0", when I run the code, the output is not my expectation, like this.
hexstr = "e52c886a88b6f421a9324ea175dc281478f03003499de6162ca72ddacf4b09e0"
hexstr = bytes.fromhex(hexstr)
print(hexstr)
The output is
b'\xe5,\x88j\x88\xb6\xf4!\xa92N\xa1u\xdc(\x14x\xf00\x03I\x9d\xe6\x16,\xa7-\xda\xcfK\t\xe0'
My expected output should look like b'\xe5\x2c\xc8\x86......
A:
Your code is correct.
Python tries to be helpful by displaying bytes that map to an ASCII character as that character. For example, \x2c maps to ,.
>>> b',' == b'\x2c'
True
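If you specifically want to see every byte printed in \x form, a small sketch:
data = bytes.fromhex("e52c886a")
print("".join("\\x{:02x}".format(b) for b in data))   # prints \xe5\x2c\x88\x6a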
| How to convert string hex into bytes format? | The string is like "e52c886a88b6f421a9324ea175dc281478f03003499de6162ca72ddacf4b09e0", when I run the code, the output is not my expectation, like this.
hexstr = "e52c886a88b6f421a9324ea175dc281478f03003499de6162ca72ddacf4b09e0"
hexstr = bytes.fromhex(hexstr)
print(hexstr)
The output is
b'\xe5,\x88j\x88\xb6\xf4!\xa92N\xa1u\xdc(\x14x\xf00\x03I\x9d\xe6\x16,\xa7-\xda\xcfK\t\xe0'
My expected output should like b'\xe5\x2c\xc8\x86......
| [
"Your code is correct.\nPython tries to be helpful by displaying bytes that map to an ASCII character as that character. For example, \\x2c maps to ,.\n>>> b',' == b'\\x2c'\nTrue\n\n"
] | [
0
] | [] | [] | [
"byte",
"hex",
"python",
"python_2.7",
"python_3.x"
] | stackoverflow_0074612406_byte_hex_python_python_2.7_python_3.x.txt |
Q:
How to fix "ResourceExhaustedError: OOM when allocating tensor"
I want to make a model with multiple inputs. So, I am trying to build a model like this.
# define two sets of inputs
inputA = Input(shape=(32,64,1))
inputB = Input(shape=(32,1024))
# CNN
x = layers.Conv2D(32, kernel_size = (3, 3), activation = 'relu')(inputA)
x = layers.Conv2D(32, (3,3), activation='relu')(x)
x = layers.MaxPooling2D(pool_size=(2,2))(x)
x = layers.Dropout(0.2)(x)
x = layers.Flatten()(x)
x = layers.Dense(500, activation = 'relu')(x)
x = layers.Dropout(0.5)(x)
x = layers.Dense(500, activation='relu')(x)
x = Model(inputs=inputA, outputs=x)
# DNN
y = layers.Flatten()(inputB)
y = Dense(64, activation="relu")(y)
y = Dense(250, activation="relu")(y)
y = Dense(500, activation="relu")(y)
y = Model(inputs=inputB, outputs=y)
# Combine the output of the two models
combined = concatenate([x.output, y.output])
# combined outputs
z = Dense(300, activation="relu")(combined)
z = Dense(100, activation="relu")(combined)
z = Dense(1, activation="softmax")(combined)
model = Model(inputs=[x.input, y.input], outputs=z)
model.summary()
opt = Adam(lr=1e-3, decay=1e-3 / 200)
model.compile(loss = 'sparse_categorical_crossentropy', optimizer = opt,
metrics = ['accuracy'])
and here is the model summary (screenshot omitted).
But when I try to train this model,
history = model.fit([trainimage, train_product_embd],train_label,
validation_data=([validimage,valid_product_embd],valid_label), epochs=10,
steps_per_epoch=100, validation_steps=10)
the problem happens:
ResourceExhaustedError Traceback (most recent call
last) <ipython-input-18-2b79f16d63c0> in <module>()
----> 1 history = model.fit([trainimage, train_product_embd],train_label,
validation_data=([validimage,valid_product_embd],valid_label),
epochs=10, steps_per_epoch=100, validation_steps=10)
4 frames
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py
in __call__(self, *args, **kwargs) 1470 ret =
tf_session.TF_SessionRunCallable(self._session._session, 1471
self._handle, args,
-> 1472 run_metadata_ptr) 1473 if run_metadata: 1474
proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
ResourceExhaustedError: 2 root error(s) found. (0) Resource
exhausted: OOM when allocating tensor with shape[800000,32,30,62] and
type float on /job:localhost/replica:0/task:0/device:GPU:0 by
allocator GPU_0_bfc [[{{node conv2d_1/convolution}}]] Hint: If you
want to see a list of allocated tensors when OOM happens, add
report_tensor_allocations_upon_oom to RunOptions for current
allocation info.
[[metrics/acc/Mean_1/_185]] Hint: If you want to see a list of
allocated tensors when OOM happens, add
report_tensor_allocations_upon_oom to RunOptions for current
allocation info.
(1) Resource exhausted: OOM when allocating tensor with
shape[800000,32,30,62] and type float on
/job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node conv2d_1/convolution}}]] Hint: If you want to see a list of
allocated tensors when OOM happens, add
report_tensor_allocations_upon_oom to RunOptions for current
allocation info.
0 successful operations. 0 derived errors ignored.
Thanks for reading and hopefully helping me :)
A:
OOM stands for "out of memory". Your GPU is running out of memory, so it can't allocate memory for this tensor. There are a few things you can do:
Decrease the number of filters in your Dense, Conv2D layers
Use a smaller batch_size (or increase steps_per_epoch and validation_steps)
Use grayscale images (you can use tf.image.rgb_to_grayscale)
Reduce the number of layers
Use MaxPooling2D layers after convolutional layers
Reduce the size of your images (you can use tf.image.resize for that)
Use smaller float precision for your input, namely np.float32
If you're using a pre-trained model, freeze the first layers (like this; see the sketch at the end of this answer)
There is more useful information about this error:
OOM when allocating tensor with shape[800000,32,30,62]
This is a weird shape. If you're working with images, you should normally have 3 or 1 channel. On top of that, it seems like you are passing your entire dataset at once; you should instead pass it in batches.
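The 'freeze the first layers' suggestion from the list above, as a hedged Keras sketch (MobileNetV2 is just an example backbone, not something taken from the question):
import tensorflow as tf

base_model = tf.keras.applications.MobileNetV2(include_top=False, weights='imagenet',
                                               input_shape=(96, 96, 3))
for layer in base_model.layers[:-10]:    # freeze everything except the last 10 layers
    layer.trainable = False
print(sum(l.trainable for l in base_model.layers), "trainable layers remain")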
A:
From [800000,32,30,62] it seems your model puts all the data in one batch.
Try specified batch size like
history = model.fit([trainimage, train_product_embd],train_label, validation_data=([validimage,valid_product_embd],valid_label), epochs=10, steps_per_epoch=100, validation_steps=10, batch_size=32)
If it still OOMs, then try reducing the batch_size.
A:
Happened to me as well.
You can try reducing trainable parameters by using some form of Transfer Learning - try freezing the initial few layers and use lower batch sizes.
A:
I think the most common reason for this case to arise would be the absence of MaxPooling layers.
Use the same architecture, but add atleast one MaxPool layer after Conv2D layers. This might even improve the overall performance of the model.
You can even try reducing the depth of the model, i.e., remove the unnecessary layers and optimize.
| How to fix "ResourceExhaustedError: OOM when allocating tensor" | I wanna make a model with multiple inputs. So, I try to build a model like this.
# define two sets of inputs
inputA = Input(shape=(32,64,1))
inputB = Input(shape=(32,1024))
# CNN
x = layers.Conv2D(32, kernel_size = (3, 3), activation = 'relu')(inputA)
x = layers.Conv2D(32, (3,3), activation='relu')(x)
x = layers.MaxPooling2D(pool_size=(2,2))(x)
x = layers.Dropout(0.2)(x)
x = layers.Flatten()(x)
x = layers.Dense(500, activation = 'relu')(x)
x = layers.Dropout(0.5)(x)
x = layers.Dense(500, activation='relu')(x)
x = Model(inputs=inputA, outputs=x)
# DNN
y = layers.Flatten()(inputB)
y = Dense(64, activation="relu")(y)
y = Dense(250, activation="relu")(y)
y = Dense(500, activation="relu")(y)
y = Model(inputs=inputB, outputs=y)
# Combine the output of the two models
combined = concatenate([x.output, y.output])
# combined outputs
z = Dense(300, activation="relu")(combined)
z = Dense(100, activation="relu")(combined)
z = Dense(1, activation="softmax")(combined)
model = Model(inputs=[x.input, y.input], outputs=z)
model.summary()
opt = Adam(lr=1e-3, decay=1e-3 / 200)
model.compile(loss = 'sparse_categorical_crossentropy', optimizer = opt,
metrics = ['accuracy'])
and the summary
:
_
But, when i try to train this model,
history = model.fit([trainimage, train_product_embd],train_label,
validation_data=([validimage,valid_product_embd],valid_label), epochs=10,
steps_per_epoch=100, validation_steps=10)
the problem happens....
:
ResourceExhaustedError Traceback (most recent call
last) <ipython-input-18-2b79f16d63c0> in <module>()
----> 1 history = model.fit([trainimage, train_product_embd],train_label,
validation_data=([validimage,valid_product_embd],valid_label),
epochs=10, steps_per_epoch=100, validation_steps=10)
4 frames
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py
in __call__(self, *args, **kwargs) 1470 ret =
tf_session.TF_SessionRunCallable(self._session._session, 1471
self._handle, args,
-> 1472 run_metadata_ptr) 1473 if run_metadata: 1474
proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
ResourceExhaustedError: 2 root error(s) found. (0) Resource
exhausted: OOM when allocating tensor with shape[800000,32,30,62] and
type float on /job:localhost/replica:0/task:0/device:GPU:0 by
allocator GPU_0_bfc [[{{node conv2d_1/convolution}}]] Hint: If you
want to see a list of allocated tensors when OOM happens, add
report_tensor_allocations_upon_oom to RunOptions for current
allocation info.
[[metrics/acc/Mean_1/_185]] Hint: If you want to see a list of
allocated tensors when OOM happens, add
report_tensor_allocations_upon_oom to RunOptions for current
allocation info.
(1) Resource exhausted: OOM when allocating tensor with
shape[800000,32,30,62] and type float on
/job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node conv2d_1/convolution}}]] Hint: If you want to see a list of
allocated tensors when OOM happens, add
report_tensor_allocations_upon_oom to RunOptions for current
allocation info.
0 successful operations. 0 derived errors ignored.
Thanks for reading and hopefully helping me :)
| [
"OOM stands for \"out of memory\". Your GPU is running out of memory, so it can't allocate memory for this tensor. There are a few things you can do:\n\nDecrease the number of filters in your Dense, Conv2D layers\nUse a smaller batch_size (or increase steps_per_epoch and validation_steps)\nUse grayscale images (you can use tf.image.rgb_to_grayscale)\nReduce the number of layers\nUse MaxPooling2D layers after convolutional layers\nReduce the size of your images (you can use tf.image.resize for that)\nUse smaller float precision for your input, namely np.float32\nIf you're using a pre-trained model, freeze the first layers (like this)\n\nThere is more useful information about this error:\nOOM when allocating tensor with shape[800000,32,30,62]\n\nThis is a weird shape. If you're working with images, you should normally have 3 or 1 channel. On top of that, it seems like you are passing your entire dataset at once; you should instead pass it in batches.\n",
"From [800000,32,30,62] it seems your model put all the data in one batch.\nTry specified batch size like\nhistory = model.fit([trainimage, train_product_embd],train_label, validation_data=([validimage,valid_product_embd],valid_label), epochs=10, steps_per_epoch=100, validation_steps=10, batch_size=32)\n\nIf it still OOM then try reduce the batch_size\n",
"Happened to me as well. \nYou can try reducing trainable parameters by using some form of Transfer Learning - try freezing the initial few layers and use lower batch sizes. \n",
"I think the most common reason for this case to arise would be the absence of MaxPooling layers.\nUse the same architecture, but add atleast one MaxPool layer after Conv2D layers. This might even improve the overall performance of the model.\nYou can even try reducing the depth of the model, i.e., remove the unnecessary layers and optimize.\n"
] | [
56,
2,
0,
0
] | [] | [] | [
"deep_learning",
"keras",
"machine_learning",
"python",
"tensorflow"
] | stackoverflow_0059394947_deep_learning_keras_machine_learning_python_tensorflow.txt |
Q:
networkx directed graph can't add weight in the middle of the edges from pandas df
My dataframe columns are A, B, Weight, with around 50 rows.
Here is my code
G = nx.from_pandas_edgelist(df, source='A',
target='B',
create_using=nx.DiGraph())
weight = nx.get_edge_attributes(G, 'Weight')
pos = nx.circular_layout(G, scale=1)
nx.draw(G, pos, with_labels=True)
nx.draw_networkx_nodes(G, pos, node_size=300)
nx.draw_networkx_edges(G, pos=pos, edgelist=G.edges(), edge_color='black')
nx.draw_networkx_labels(G, pos=pos, edge_labels=weight)
plt.show()
It can generate a directed graph that has many edges and connects A to B.
But it still can't display the weight in the middle of the edge. What should I do?
A:
There are two errors that prevented this. First, you need to assign the edge attributes to the graph when defining it. Second, you need to use nx.draw_networkx_edge_labels() and not nx.draw_networkx_labels(). The latter is for node labels, not edge labels.
import pandas as pd
import networkx as nx
df = pd.DataFrame([[1, 2, 4], [3, 4, 2], [1, 3, 1]], columns=['A', 'B', 'Weight'])
G = nx.from_pandas_edgelist(df, source='A',
target='B',
create_using=nx.DiGraph(),
edge_attr='Weight')
weight = nx.get_edge_attributes(G, 'Weight')
pos = nx.circular_layout(G, scale=1)
nx.draw(G, pos, with_labels=True)
nx.draw_networkx_nodes(G, pos, node_size=300)
nx.draw_networkx_edges(G, pos=pos, edgelist=G.edges(), edge_color='black')
nx.draw_networkx_edge_labels(G, pos=pos, edge_labels=weight)
plt.show()
This plots the graph with the edge weights shown (output image omitted).
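If you also want to control where along each edge the label sits, nx.draw_networkx_edge_labels accepts a label_pos argument (0.5 is the midpoint; smaller values move the label towards the head node):
nx.draw_networkx_edge_labels(G, pos=pos, edge_labels=weight, label_pos=0.5)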
| networkx directed graph can't add weight in the middle of the edges from pandas df | My dataframe columns are A,B,Weight. Around 50 rows
Here is my code
G = nx.from_pandas_edgelist(df, source='A',
target='B',
create_using=nx.DiGraph())
weight = nx.get_edge_attributes(G, 'Weight')
pos = nx.circular_layout(G, scale=1)
nx.draw(G, pos, with_labels=True)
nx.draw_networkx_nodes(G, pos, node_size=300)
nx.draw_networkx_edges(G, pos=pos, edgelist=G.edges(), edge_color='black')
nx.draw_networkx_labels(G, pos=pos, edge_labels=weight)
plt.show()
It can generate a directed graph that has many edges, and connect A to B.
But it still can't display the weight in the middle of the edge,
What should I do?
| [
"There are two errors that prevented this. First, you need to assign the edge attributes to the graph when defining it. Second, you need to use nx.draw_networkx_edge_labels() and not nx.draw_networkx_labels(). The latter is for node labels, not edge labels.\nimport pandas as pd\nimport networkx as nx\n\ndf = pd.DataFrame([[1, 2, 4], [3, 4, 2], [1, 3, 1]], columns=['A', 'B', 'Weight'])\n\nG = nx.from_pandas_edgelist(df, source='A',\n target='B',\n create_using=nx.DiGraph(),\n edge_attr='Weight')\nweight = nx.get_edge_attributes(G, 'Weight')\npos = nx.circular_layout(G, scale=1)\nnx.draw(G, pos, with_labels=True)\nnx.draw_networkx_nodes(G, pos, node_size=300)\nnx.draw_networkx_edges(G, pos=pos, edgelist=G.edges(), edge_color='black')\nnx.draw_networkx_edge_labels(G, pos=pos, edge_labels=weight)\nplt.show()\n\nplots\n\n"
] | [
1
] | [] | [] | [
"networkx",
"pandas",
"python"
] | stackoverflow_0074610256_networkx_pandas_python.txt |
Q:
Running part of python program with/without sudo
I am trying to control some LEDs on my Raspberry Pi Zero 2w with rpi_ws281x using some audio from pyaudio as input. One of them needs sudo, the other only works without sudo...
I tried to import rpi_ws281x in a script without sudo. That crashes because permission to open /dev/mem is denied. So the program has to be run with sudo privileges in order to be able to control the ledstrip. However, using pyaudio in that script when called with sudo, I get this error: OSError: [Errno -9996] Invalid input device (no default output device).
To verify that the problem lies with using sudo, I made two minimal working examples using either rpi_ws281x or pyaudio. The first one only works with sudo, the second only without.
Is there any solution to this? I'm not an expert on pulseaudio, so maybe it is possible to have it accept connections even when pyaudio is run by a program that has been run with sudo.
Otherwise, I guess it should be possible to split the program into two parts. One is run with sudo, the other without. But how would they then communicate efficiently (given that about 20 updates per second are needed)?
Thanks in advance!
If needed, here is the entire error that pyaudio spits out when run with sudo:
ALSA lib pcm_dsnoop.c:638:(snd_pcm_dsnoop_open) unable to open slave
ALSA lib pcm_dmix.c:1075:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm.c:2660:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2660:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2660:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
ALSA lib confmisc.c:1281:(snd_func_refer) Unable to find definition 'cards.USB-Audio.pcm.hdmi.0:CARD=1,AES0=4,AES1=130,AES2=0,AES3=2'
ALSA lib conf.c:4745:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:5233:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2660:(snd_pcm_open_noupdate) Unknown PCM hdmi
ALSA lib confmisc.c:1281:(snd_func_refer) Unable to find definition 'cards.USB-Audio.pcm.hdmi.0:CARD=1,AES0=4,AES1=130,AES2=0,AES3=2'
ALSA lib conf.c:4745:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:5233:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2660:(snd_pcm_open_noupdate) Unknown PCM hdmi
ALSA lib pcm.c:2660:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.modem
ALSA lib pcm.c:2660:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.modem
ALSA lib pcm.c:2660:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.phoneline
ALSA lib pcm.c:2660:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.phoneline
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
ALSA lib pcm_oss.c:377:(_snd_pcm_oss_open) Unknown field port
ALSA lib pcm_oss.c:377:(_snd_pcm_oss_open) Unknown field port
ALSA lib pulse.c:242:(pulse_connect) PulseAudio: Unable to connect: Access denied
ALSA lib pulse.c:242:(pulse_connect) PulseAudio: Unable to connect: Access denied
ALSA lib pcm_a52.c:823:(_snd_pcm_a52_open) a52 is only for playback
ALSA lib pcm_usb_stream.c:486:(_snd_pcm_usb_stream_open) Invalid type for card
ALSA lib pcm_usb_stream.c:486:(_snd_pcm_usb_stream_open) Invalid type for card
ALSA lib pcm_dmix.c:1075:(snd_pcm_dmix_open) unable to open slave
ALSA lib pulse.c:242:(pulse_connect) PulseAudio: Unable to connect: Access denied
ALSA lib pulse.c:242:(pulse_connect) PulseAudio: Unable to connect: Access denied
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
Recording
Traceback (most recent call last):
File "/home/pi/Documents/test.py", line 15, in <module>
stream = p.open(format=sample_format,
File "/home/pi/environments/flask-env/lib/python3.9/site-packages/pyaudio.py", line 754, in open
stream = Stream(self, *args, **kwargs)
File "/home/pi/environments/flask-env/lib/python3.9/site-packages/pyaudio.py", line 445, in __init__
self._stream = pa.open(**arguments)
OSError: [Errno -9996] Invalid input device (no default output device)
A:
Okay, so I solved the issue after a non-zero amount of googling and messing up my pulseaudio installation enough times to warrant a reinstall more often than I would like to admit.
rpi_ws281x requires the PWM pin (GPIO 18 on the Pi Zero 2w) for which it needs sudo privileges. So the solution is to run pulseaudio as a system service, instead of a user service.
Now, the Pulseaudio creators warn against doing this, but in this case it is alright, since the Pi (in this case) is not really meant to have a user logged in anyway.
Setting up pulseaudio as a system service, is not very complicated (see this and this):
First, disable user mode (if it was enabled):
sudo systemctl --global disable pulseaudio.service pulseaudio.socket
Create a systemd file (/etc/systemd/system/pulseaudio.service):
[Unit]
Description=PulseAudio Daemon
[Install]
WantedBy=multi-user.target
[Service]
Type=simple
PrivateTmp=true
ExecStart=/usr/bin/pulseaudio --system --realtime --disallow-exit --no-cpu-limit
Add the pulse and pi user to the required groups:
sudo usermod -a -G audio pulse #add pulse to audio group
sudo usermod -a -G pulse-access pi #add pi to pulse-access group
sudo usermod -a -G pulse-access root #add root to pulse-access group
And finally, start the service:
sudo systemctl enable pulseaudio.service
sudo systemctl start pulseaudio.service
Now, scripts run with sudo should be able to interact with pulseaudio. I have only verified this with PyAudio, but it should work for others too.
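As a quick way to verify from a sudo context that PulseAudio now accepts the connection, a minimal PyAudio probe (run it with sudo; this is just a sanity check, not part of the original steps):
import pyaudio

p = pyaudio.PyAudio()
print("default input device:", p.get_default_input_device_info()["name"])
p.terminate()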
| Running part of python program with/without sudo | I am trying to control some LEDs on my Raspberry Pi Zero 2w with rpi_ws281x using some audio from pyaudio as input. One of them needs sudo, the other only works without sudo...
I tried to import rpi_ws281x in a script without sudo. That crashes because the 'permission to open \dev\mem is denied'. So the program has to be run with sudo privileges in order to be able to control the ledstrip. However, using pyaudio in that script when called with sudo, I get this error: OSError: [Errno -9996] Invalid input device (no default output device).
To verify that the problem lies with using sudo, I made two minimal working examples using either rpi_ws281x or pyaudio. The first one only works with sudo, the second only without.
Is there any solution to this? I'm not an expert on pulseaudio, so maybe it is possible to have it accept connections even when pyaudio is run by a program that has been run with sudo.
Otherwise, I guess it should be possible to split the program into two parts. One is run with sudo, the other without. But how would they then communicate efficiently (given that about 20 updates per second are needed)?
Thanks in advance!
If needed, here is the entire error that pyaudio spits out when run with sudo:
ALSA lib pcm_dsnoop.c:638:(snd_pcm_dsnoop_open) unable to open slave
ALSA lib pcm_dmix.c:1075:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm.c:2660:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2660:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2660:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
ALSA lib confmisc.c:1281:(snd_func_refer) Unable to find definition 'cards.USB-Audio.pcm.hdmi.0:CARD=1,AES0=4,AES1=130,AES2=0,AES3=2'
ALSA lib conf.c:4745:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:5233:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2660:(snd_pcm_open_noupdate) Unknown PCM hdmi
ALSA lib confmisc.c:1281:(snd_func_refer) Unable to find definition 'cards.USB-Audio.pcm.hdmi.0:CARD=1,AES0=4,AES1=130,AES2=0,AES3=2'
ALSA lib conf.c:4745:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:5233:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2660:(snd_pcm_open_noupdate) Unknown PCM hdmi
ALSA lib pcm.c:2660:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.modem
ALSA lib pcm.c:2660:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.modem
ALSA lib pcm.c:2660:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.phoneline
ALSA lib pcm.c:2660:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.phoneline
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
ALSA lib pcm_oss.c:377:(_snd_pcm_oss_open) Unknown field port
ALSA lib pcm_oss.c:377:(_snd_pcm_oss_open) Unknown field port
ALSA lib pulse.c:242:(pulse_connect) PulseAudio: Unable to connect: Access denied
ALSA lib pulse.c:242:(pulse_connect) PulseAudio: Unable to connect: Access denied
ALSA lib pcm_a52.c:823:(_snd_pcm_a52_open) a52 is only for playback
ALSA lib pcm_usb_stream.c:486:(_snd_pcm_usb_stream_open) Invalid type for card
ALSA lib pcm_usb_stream.c:486:(_snd_pcm_usb_stream_open) Invalid type for card
ALSA lib pcm_dmix.c:1075:(snd_pcm_dmix_open) unable to open slave
ALSA lib pulse.c:242:(pulse_connect) PulseAudio: Unable to connect: Access denied
ALSA lib pulse.c:242:(pulse_connect) PulseAudio: Unable to connect: Access denied
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
Recording
Traceback (most recent call last):
File "/home/pi/Documents/test.py", line 15, in <module>
stream = p.open(format=sample_format,
File "/home/pi/environments/flask-env/lib/python3.9/site-packages/pyaudio.py", line 754, in open
stream = Stream(self, *args, **kwargs)
File "/home/pi/environments/flask-env/lib/python3.9/site-packages/pyaudio.py", line 445, in __init__
self._stream = pa.open(**arguments)
OSError: [Errno -9996] Invalid input device (no default output device)
| [
"Okay, so I solved the issue after a non-zero amount of googling and messing up my pulseaudio installation enough times to warrant a reinstall more often than I would like to admit.\nrpi_ws281x requires the PWM pin (GPIO 18 on the Pi Zero 2w) for which it needs sudo privileges. So the solution is to run pulseaudio as a system service, instead of a user service.\nNow, the Pulseaudio creators warn against doing this, but in this case it is alright, since the Pi (in this case) is not really meant to have a user logged in anyway.\nSetting up pulseaudio as a system service, is not very complicated (see this and this):\nFirst, disable user mode (if it was enabled):\nsudo systemctl --global disable pulseaudio.service pulseaudio.socket\nCreate a systemd file (/etc/systemd/system/pulseaudio.service):\n[Unit]\nDescription=PulseAudio Daemon\n \n[Install]\nWantedBy=multi-user.target\n \n[Service]\nType=simple\nPrivateTmp=true\nExecStart=/usr/bin/pulseaudio --system --realtime --disallow-exit --no-cpu-limit \n\nAdd the pulse and pi user to the required groups:\nsudo usermod -a -G audio pulse #add pulse to audio group\nsudo usermod -a -G pulse-access pi #add pi to pulse-access group\nsudo usermod -a -G pulse-access root #add root to pulse-access group\n\nAnd finally, start the service:\nsudo systemctl enable pulseaudio.service\nsudo systemctl start pulseaudio.service\n\nNow, scripts run with sudo should be able to interact with pulseaudio. I have only verified this with PyAudio, but it should work for others too.\n"
] | [
0
] | [] | [] | [
"pulseaudio",
"pyaudio",
"python",
"raspberry_pi",
"sudo"
] | stackoverflow_0074591584_pulseaudio_pyaudio_python_raspberry_pi_sudo.txt |
Q:
Python, ctypes, multi-Dimensional Array
I have a structure in Python code and in C code. I fill these fields
("bones_pos_vect",((c_float*4)*30)),
("bones_rot_quat",((c_float*4)*30))
in Python code with the right values, but when I request them in C code, I get only 0.0 in all array cells. Why do I lose the values? All other fields of my structures work fine.
class SceneObject(Structure):
_fields_ = [("x_coord", c_float),
("y_coord", c_float),
("z_coord", c_float),
("x_angle", c_float),
("y_angle", c_float),
("z_angle", c_float),
("indexes_count", c_int),
("vertices_buffer", c_uint),
("indexes_buffer", c_uint),
("texture_buffer", c_uint),
("bones_pos_vect",((c_float*4)*30)),
("bones_rot_quat",((c_float*4)*30))]
typedef struct
{
float x_coord;
float y_coord;
float z_coord;
float x_angle;
float y_angle;
float z_angle;
int indexes_count;
unsigned int vertices_buffer;
unsigned int indexes_buffer;
unsigned int texture_buffer;
float bones_pos_vect[30][4];
float bones_rot_quat[30][4];
} SceneObject;
A:
Here's an example of how you can use a multidimensional array with Python and ctypes.
I wrote the following C code, and used gcc in MinGW to compile this to slib.dll:
#include <stdio.h>
typedef struct TestStruct {
int a;
float array[30][4];
} TestStruct;
extern void print_struct(TestStruct *ts) {
int i,j;
for (j = 0; j < 30; ++j) {
for (i = 0; i < 4; ++i) {
printf("%g ", ts->array[j][i]);
}
printf("\n");
}
}
Note that the struct contains a 'two-dimensional' array.
I then wrote the following Python script:
from ctypes import *
class TestStruct(Structure):
_fields_ = [("a", c_int),
("array", (c_float * 4) * 30)]
slib = CDLL("slib.dll")
slib.print_struct.argtypes = [POINTER(TestStruct)]
slib.print_struct.restype = None
t = TestStruct()
for i in range(30):
for j in range(4):
t.array[i][j] = i + 0.1*j
slib.print_struct(byref(t))
When I ran the Python script, it called the C function, which printed out the contents of the multidimensional array:
C:\>slib.py
0.1 0.2 0.3 0.4
1.1 1.2 1.3 1.4
2.1 2.2 2.3 2.4
3.1 3.2 3.3 3.4
4.1 4.2 4.3 4.4
5.1 5.2 5.3 5.4
... rest of output omitted
I've used Python 2, whereas the tags on your question indicate that you're using Python 3. However, I don't believe this should make a difference.
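Applying the same pattern to the SceneObject structure from the question would look roughly like the sketch below. The library name and the C function use_scene_object are assumptions for illustration; the two points that matter are filling the ctypes arrays element-wise and passing the structure by reference (a SceneObject* on the C side):
from ctypes import CDLL, POINTER, byref

lib = CDLL("scene.dll")                                  # assumed library name
lib.use_scene_object.argtypes = [POINTER(SceneObject)]   # hypothetical C function taking SceneObject*
lib.use_scene_object.restype = None

obj = SceneObject()                                      # SceneObject as defined in the question
obj.indexes_count = 42
for bone in range(30):
    for k in range(4):
        obj.bones_pos_vect[bone][k] = bone + 0.1 * k     # write into the existing ctypes array
        obj.bones_rot_quat[bone][k] = 0.0

lib.use_scene_object(byref(obj))                         # C now sees the filled arrays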
A:
Here I pass two 2D NumPy arrays and print the values of one array for reference.
You can use this as a template for passing and working with multidimensional arrays.
cpp_function.cpp
Compile it using: g++ -shared -fPIC cpp_function.cpp -o cpp_function.so
#include <iostream>
extern "C" {
void mult_matrix(double *a1, double *a2, size_t a1_h, size_t a1_w,
size_t a2_h, size_t a2_w, int size)
{
//std::cout << "a1_h & a1_w" << a1_h << a1_w << std::endl;
//std::cout << "a2_h & a2_w" << a2_h << a2_w << std::endl;
for (size_t i = 0; i < a1_h; i++) {
for (size_t j = 0; j < a1_w; j++) {
printf("%f ", a1[i * a1_h + j]);
}
printf("\n");
}
printf("\n");
}
}
Python File
main.py
import ctypes
import numpy
from time import time
libmatmult = ctypes.CDLL("./cpp_function.so")
ND_POINTER_1 = numpy.ctypeslib.ndpointer(dtype=numpy.float64,
ndim=2,
flags="C")
ND_POINTER_2 = numpy.ctypeslib.ndpointer(dtype=numpy.float64,
ndim=2,
flags="C")
libmatmult.mult_matrix.argtypes = [ND_POINTER_1, ND_POINTER_2, ctypes.c_size_t, ctypes.c_size_t]
# print("-->", ctypes.c_size_t)
def mult_matrix_cpp(a,b):
shape = a.shape[0] * a.shape[1]
libmatmult.mult_matrix.restype = None
libmatmult.mult_matrix(a, b, *a.shape, *b.shape , a.shape[0] * a.shape[1])
size_a = (300,300)
size_b = size_a
a = numpy.random.uniform(low=1, high=255, size=size_a)
b = numpy.random.uniform(low=1, high=255, size=size_b)
t2 = time()
out_cpp = mult_matrix_cpp(a,b)
print("cpp time taken:{:.2f} ms".format((time() - t2) * 1000))
out_cpp = numpy.array(out_cpp).reshape(size_a[0], size_a[1])
| Python, ctypes, multi-Dimensional Array | I have structure in Python code and in C code. I fill these fields
("bones_pos_vect",((c_float*4)*30)),
("bones_rot_quat",((c_float*4)*30))
in python code with the right values, but when I request them in C code, I get only 0.0 from all array cells. Why do I lose the values? All other fields of my structures work fine.
class SceneObject(Structure):
_fields_ = [("x_coord", c_float),
("y_coord", c_float),
("z_coord", c_float),
("x_angle", c_float),
("y_angle", c_float),
("z_angle", c_float),
("indexes_count", c_int),
("vertices_buffer", c_uint),
("indexes_buffer", c_uint),
("texture_buffer", c_uint),
("bones_pos_vect",((c_float*4)*30)),
("bones_rot_quat",((c_float*4)*30))]
typedef struct
{
float x_coord;
float y_coord;
float z_coord;
float x_angle;
float y_angle;
float z_angle;
int indexes_count;
unsigned int vertices_buffer;
unsigned int indexes_buffer;
unsigned int texture_buffer;
float bones_pos_vect[30][4];
float bones_rot_quat[30][4];
} SceneObject;
| [
"Here's an example of how you can use a multidimensional array with Python and ctypes. \nI wrote the following C code, and used gcc in MinGW to compile this to slib.dll:\n#include <stdio.h>\n\ntypedef struct TestStruct {\n int a;\n float array[30][4];\n} TestStruct;\n\nextern void print_struct(TestStruct *ts) {\n int i,j;\n for (j = 0; j < 30; ++j) {\n for (i = 0; i < 4; ++i) {\n printf(\"%g \", ts->array[j][i]);\n }\n printf(\"\\n\");\n }\n}\n\nNote that the struct contains a 'two-dimensional' array.\nI then wrote the following Python script:\nfrom ctypes import *\n\nclass TestStruct(Structure):\n _fields_ = [(\"a\", c_int),\n (\"array\", (c_float * 4) * 30)]\n\nslib = CDLL(\"slib.dll\")\nslib.print_struct.argtypes = [POINTER(TestStruct)]\nslib.print_struct.restype = None\n\nt = TestStruct()\n\nfor i in range(30):\n for j in range(4):\n t.array[i][j] = i + 0.1*j\n\nslib.print_struct(byref(t))\n\nWhen I ran the Python script, it called the C function, which printed out the contents of the multidimensional array: \nC:\\>slib.py\n0.1 0.2 0.3 0.4\n1.1 1.2 1.3 1.4\n2.1 2.2 2.3 2.4\n3.1 3.2 3.3 3.4\n4.1 4.2 4.3 4.4\n5.1 5.2 5.3 5.4\n... rest of output omitted\n\nI've used Python 2, whereas the tags on your question indicate that you're using Python 3. However, I don't believe this should make a difference.\n",
"Here i hava pass two 2d numpy array and print value of one array for the reference\nyou can use this and write multidimensional array\ncpp_function.cpp\ncompile it using : g++ -shared -fPIC cpp_function.cpp -o cpp_function.so\n#include <iostream>\nextern \"C\" {\nvoid mult_matrix(double *a1, double *a2, size_t a1_h, size_t a1_w, \n size_t a2_h, size_t a2_w, int size)\n{\n //std::cout << \"a1_h & a1_w\" << a1_h << a1_w << std::endl; \n //std::cout << \"a2_h & a2_w\" << a2_h << a2_w << std::endl; \n for (size_t i = 0; i < a1_h; i++) {\n for (size_t j = 0; j < a1_w; j++) {\n printf(\"%f \", a1[i * a1_h + j]);\n }\n printf(\"\\n\");\n }\n printf(\"\\n\");\n }\n\n}\n\nPython File\nmain.py\nimport ctypes\nimport numpy\nfrom time import time\n\nlibmatmult = ctypes.CDLL(\"./cpp_function.so\")\nND_POINTER_1 = numpy.ctypeslib.ndpointer(dtype=numpy.float64, \n ndim=2,\n flags=\"C\")\nND_POINTER_2 = numpy.ctypeslib.ndpointer(dtype=numpy.float64, \n ndim=2,\n flags=\"C\")\nlibmatmult.mult_matrix.argtypes = [ND_POINTER_1, ND_POINTER_2, ctypes.c_size_t, ctypes.c_size_t]\n# print(\"-->\", ctypes.c_size_t)\n\ndef mult_matrix_cpp(a,b):\n shape = a.shape[0] * a.shape[1]\n libmatmult.mult_matrix.restype = None\n libmatmult.mult_matrix(a, b, *a.shape, *b.shape , a.shape[0] * a.shape[1])\n\nsize_a = (300,300)\nsize_b = size_a\n\na = numpy.random.uniform(low=1, high=255, size=size_a)\nb = numpy.random.uniform(low=1, high=255, size=size_b)\n\nt2 = time()\nout_cpp = mult_matrix_cpp(a,b)\nprint(\"cpp time taken:{:.2f} ms\".format((time() - t2) * 1000))\nout_cpp = numpy.array(out_cpp).reshape(size_a[0], size_a[1])\n\n"
] | [
16,
0
] | [] | [] | [
"ctypes",
"multidimensional_array",
"python",
"python_3.x",
"structure"
] | stackoverflow_0011384015_ctypes_multidimensional_array_python_python_3.x_structure.txt |
Q:
Press key to game window with Python
I am trying to pass a keyboard event to a game window, but it doesn't work. For another program such as Notepad++ it works.
from pynput.keyboard import Controller
keyboard = Controller()
keyboard.press('a')
keyboard.release('a')
I have the same problem with mouse events. I tried the "Mouse and Keyboard Recorder" program and it works. What is the problem?
I am trying to write a bot for a game, just for fun.
A:
It's my answer:
Win32API Mouse vs Real Mouse Click
But I don't know how to write a custom driver :)
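For context, the linked question is about games that read input through DirectInput/raw input and therefore ignore the synthetic virtual-key events that pynput generates. A hedged sketch of a common workaround: the third-party pydirectinput package (pip install pydirectinput) sends scan codes through SendInput, which many (not all) games accept; whether your particular game does is something you would have to test.
import time
import pydirectinput   # third-party package, modeled on pyautogui's API

time.sleep(3)                    # time to click into the game window first
pydirectinput.keyDown('a')       # scan-code based key press
time.sleep(0.1)
pydirectinput.keyUp('a')
pydirectinput.moveTo(960, 540)   # example coordinates only
pydirectinput.click()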
| Press key to game window with Python | I try to pass keyboard event to game window but it doesn't work. For another program such as Notepad++ it is works.
from pynput.keyboard import Controller
keyboard = Controller()
keyboard.press('a')
keyboard.release('a')
The same a problem I have with mouse events. I tried use "Mouse and Keyboard Recorder" program and it work. Which is a problem?
I try to write bot to game for fun.
| [
"It's my answer:\nWin32API Mouse vs Real Mouse Click\nBut I don't know how to write a custom driver :)\n"
] | [
0
] | [] | [] | [
"events",
"keyboard",
"python"
] | stackoverflow_0074603939_events_keyboard_python.txt |
Q:
Matplotlib x-axis and secondary y-axis customization questions
Data - we import historical yields of the ten and thirty year Treasury and calculate the spread (difference) between the two (this block of code is good; feel free to skip):
#Import statements
import yfinance as yf
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.ticker as mticker
#Constants
start_date = "2018-01-01"
end_date = "2023-01-01"
#Pull in data
tenYear_master = yf.download('^TNX', start_date, end_date)
thirtyYear_master = yf.download('^TYX', start_date, end_date)
#Trim DataFrames to only include 'Adj Close columns'
tenYear = tenYear_master['Adj Close'].to_frame()
thirtyYear = thirtyYear_master['Adj Close'].to_frame()
#Rename columns
tenYear.rename(columns = {'Adj Close' : 'Adj Close - Ten Year'}, inplace= True)
thirtyYear.rename(columns = {'Adj Close' : 'Adj Close - Thirty Year'}, inplace= True)
#Join DataFrames
data = tenYear.join(thirtyYear)
#Add column for difference (spread)
data['Spread'] = data['Adj Close - Thirty Year'] - data['Adj Close - Ten Year']
data
This block is also good.
'''Plot data'''
#Delete top, left, and right borders from figure
plt.rcParams['axes.spines.top'] = False
plt.rcParams['axes.spines.left'] = False
plt.rcParams['axes.spines.right'] = False
fig, ax = plt.subplots(figsize = (15,10))
data.plot(ax = ax, secondary_y = ['Spread'], ylabel = 'Yield', legend = False);
'''Change left y-axis tick labels to percentage'''
left_yticks = ax.get_yticks().tolist()
ax.yaxis.set_major_locator(mticker.FixedLocator(left_yticks))
ax.set_yticklabels((("%.1f" % tick) + '%') for tick in left_yticks);
#Add legend
fig.legend(loc="upper center", ncol = 3, frameon = False)
fig.tight_layout()
plt.show()
I have questions concerning two features of the graph that I want to customize:
The x-axis currently has a tick and tick label for every year. How can I change this so that there is a tick and tick label for every 3 months in the form MMM-YY? (see picture below)
The spread was calculated as thirty year yield - ten year yield. Say I want to change the RIGHT y-axis tick labels so that their sign is flipped, but I want to leave both the original data and curves alone (for the sake of argument; bear with me, there is logic underlying this). In other words, the right y-axis tick labels currently go from -0.2 at the bottom to 0.8 at the top. How can I change them so that they go from 0.2 at the bottom to -0.8 at the top without changing anything about the data or curves? This is purely a cosmetic change of the right y-axis tick labels.
I tried doing the following:
'''Change right y-axis tick labels'''
right_yticks = (ax.right_ax).get_yticks().tolist()
#Loop through and multiply each right y-axis tick label by -1
for index, value in enumerate(right_yticks):
right_yticks[index] = value*(-1)
(ax.right_ax).yaxis.set_major_locator(mticker.FixedLocator(right_yticks))
(ax.right_ax).set_yticklabels(right_yticks)
But I got this:
Note how the right y-axis is incomplete.
I'd appreciate any help. Thank you!
A:
Let's create some data:
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import numpy as np
days = np.array(["2022-01-01", "2022-07-01", "2023-02-15", "2023-11-15", "2024-03-03"],
dtype = "datetime64")
val = np.array([20, 20, -10, -10, 10])
For the dates on the x-axis, we import matplotlib.dates, which provides the month locator and the date formatter. The locator places a tick every 3 months, and the formatter sets the way the labels are displayed (MMM-YY).
For the y-axis data, you need to change the sign of the data (hence the negative sign in ax2.plot()), but you want the curve in the same position, so afterwards you need to invert the axis. The curves in both plots are then identical, but the y-axis values have opposite signs and directions.
fig, (ax1, ax2) = plt.subplots(figsize = (10,5), nrows = 2)
ax1.plot(days, val, marker = "x")
# set the locator to Jan, Apr, Jul, Oct
ax1.xaxis.set_major_locator(mdates.MonthLocator( bymonth = (1, 4, 7, 10) ))
# set the formatter to month-year; the lowercase %y shows a two-digit year
ax1.xaxis.set_major_formatter(mdates.DateFormatter("%b-%y"))
# change the sign of the y data plotted
ax2.plot(days, -val, marker = "x")
#invert the y axis
ax2.invert_yaxis()
# set the locator to Jan, Apr, Jul, Oct
ax2.xaxis.set_major_locator(mdates.MonthLocator( bymonth = (1, 4, 7, 10) ))
# set the formatter to month-year; the lowercase %y shows a two-digit year
ax2.xaxis.set_major_formatter(mdates.DateFormatter("%b-%y"))
plt.show()
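For the second part of the question (flipping only the labels on the right-hand axis), an alternative sketch is to leave the Spread data and curve untouched and only change how the secondary axis prints its ticks with a FuncFormatter. In the question's code, ax.right_ax is the secondary axis pandas creates for secondary_y:
import matplotlib.ticker as mticker

# Purely cosmetic: print every right-hand tick value with its sign flipped.
ax.right_ax.yaxis.set_major_formatter(
    mticker.FuncFormatter(lambda val, pos: f"{-val:.1f}")
)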
| Matplotlib x-axis and secondary y-axis customization questions | Data - we import historical yields of the ten and thirty year Treasury and calculate the spread (difference) between the two (this block of code is good; feel free so skip):
#Import statements
import yfinance as yf
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.ticker as mticker
#Constants
start_date = "2018-01-01"
end_date = "2023-01-01"
#Pull in data
tenYear_master = yf.download('^TNX', start_date, end_date)
thirtyYear_master = yf.download('^TYX', start_date, end_date)
#Trim DataFrames to only include 'Adj Close columns'
tenYear = tenYear_master['Adj Close'].to_frame()
thirtyYear = thirtyYear_master['Adj Close'].to_frame()
#Rename columns
tenYear.rename(columns = {'Adj Close' : 'Adj Close - Ten Year'}, inplace= True)
thirtyYear.rename(columns = {'Adj Close' : 'Adj Close - Thirty Year'}, inplace= True)
#Join DataFrames
data = tenYear.join(thirtyYear)
#Add column for difference (spread)
data['Spread'] = data['Adj Close - Thirty Year'] - data['Adj Close - Ten Year']
data
This block is also good.
'''Plot data'''
#Delete top, left, and right borders from figure
plt.rcParams['axes.spines.top'] = False
plt.rcParams['axes.spines.left'] = False
plt.rcParams['axes.spines.right'] = False
fig, ax = plt.subplots(figsize = (15,10))
data.plot(ax = ax, secondary_y = ['Spread'], ylabel = 'Yield', legend = False);
'''Change left y-axis tick labels to percentage'''
left_yticks = ax.get_yticks().tolist()
ax.yaxis.set_major_locator(mticker.FixedLocator(left_yticks))
ax.set_yticklabels((("%.1f" % tick) + '%') for tick in left_yticks);
#Add legend
fig.legend(loc="upper center", ncol = 3, frameon = False)
fig.tight_layout()
plt.show()
I have questions concerning two features of the graph that I want to customize:
The x-axis currently has a tick and tick label for every year. How can I change this so that there is a tick and tick label for every 3 months in the form MMM-YY? (see picture below)
The spread was calculated as thirty year yield - ten year yield. Say I want to change the RIGHT y-axis tick labels so that their sign is flipped, but I want to leave both the original data and curves alone (for the sake of argument; bear with me, there is logic underlying this). In other words, the right y-axis tick labels currently go from -0.2 at the bottom to 0.8 at the top. How can I change them so that they go from 0.2 at the bottom to -0.8 at the top without changing anything about the data or curves? This is purely a cosmetic change of the right y-axis tick labels.
I tried doing the following:
'''Change right y-axis tick labels'''
right_yticks = (ax.right_ax).get_yticks().tolist()
#Loop through and multiply each right y-axis tick label by -1
for index, value in enumerate(right_yticks):
right_yticks[index] = value*(-1)
(ax.right_ax).yaxis.set_major_locator(mticker.FixedLocator(right_yticks))
(ax.right_ax).set_yticklabels(right_yticks)
But I got this:
Note how the right y-axis is incomplete.
I'd appreciate any help. Thank you!
| [
"Let's create some data:\nimport matplotlib.pyplot as plt\nimport matplotlib.dates as mdates\nimport numpy as np\n\ndays = np.array([\"2022-01-01\", \"2022-07-01\", \"2023-02-15\", \"2023-11-15\", \"2024-03-03\"],\n dtype = \"datetime64\")\nval = np.array([20, 20, -10, -10, 10])\n\nFor the date in the x-axis, we import matplotlib.dates, which provides the month locator and the date formater. The locator sets the ticks each 3 months, and the formater sets the way the labels are displayed (month-00).\nFor the y-axis data, you require changing the sign of the data (hence the negative sign in ax2.plot(), but you want the curve in the same position, so afterwards you need to invert the axis. And so, the curves in both plots are identical, but the y-axis values have different signs and directions.\nfig, (ax1, ax2) = plt.subplots(figsize = (10,5), nrows = 2)\n\nax1.plot(days, val, marker = \"x\")\n# set the locator to Jan, Apr, Jul, Oct\nax1.xaxis.set_major_locator(mdates.MonthLocator( bymonth = (1, 4, 7, 10) ))\n# set the formater for month-year, with lower y to show only two digits\nax1.xaxis.set_major_formatter(mdates.DateFormatter(\"%b-%y\"))\n\n# change the sign of the y data plotted\nax2.plot(days, -val, marker = \"x\")\n#invert the y axis\nax2.invert_yaxis()\n# set the locator to Jan, Apr, Jul, Oct\nax2.xaxis.set_major_locator(mdates.MonthLocator( bymonth = (1, 4, 7, 10) ))\n# set the formater for month-year, with lower y to show only two digits\nax2.xaxis.set_major_formatter(mdates.DateFormatter(\"%b-%y\"))\n\nplt.show()\n\n\n"
] | [
0
] | [] | [] | [
"matplotlib",
"pandas",
"python",
"yticks"
] | stackoverflow_0074608599_matplotlib_pandas_python_yticks.txt |
Q:
Multi-table inheritance and two many to many via through model not working in admin inline
I'm trying to create a navigation menu from the Django admin, as per the user's requirements.
The Model look like this:
class MenuItem(models.Model):
title = models.CharField(max_length=200, help_text='Title of the item')
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
is_published = models.BooleanField(default=False)
def __str__(self):
return self.title
class Page(MenuItem):
"""
To display non-hierarchical pages such as about us, or some page in menu
"""
slug = models.SlugField(max_length=200, unique=True, help_text='End of url')
content = HTMLField(help_text='Contents of the page')
author = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.DO_NOTHING)
class Category(MenuItem):
"""
For hierarchical display eg. in navigation menu use Category and Articles.
"""
slug = models.SlugField(max_length=200, unique=True, help_text='End of url')
class Meta:
verbose_name_plural = 'categories'
Page and Category can be different types of menu items, so I used inheritance. Now I want to bind the MenuItem to a Menu, hence I added two more models below.
class Menu(models.Model):
title = models.CharField(max_length=200, unique=True, help_text='Name of the menu.')
is_published = models.BooleanField(default=False)
item = models.ManyToManyField(MenuItem, through='MenuLevel', through_fields=['menu', 'menu_item'])
def __str__(self):
return self.title
class MenuLevel(models.Model):
menu = models.ForeignKey(Menu, on_delete=models.CASCADE)
menu_item = models.ForeignKey(MenuItem, on_delete=models.CASCADE, related_name='items')
level = models.IntegerField(default=0)
parent = models.ForeignKey(MenuItem, on_delete=models.CASCADE, related_name='parent', null=True, blank=True)
I need the parent key on the menu item to traverse the menu items from parent to children, and level to sort the menu in order.
On the admin I have two simple classes:
class MenuLevelInline(admin.TabularInline):
model = MenuLevel
@admin.register(Menu)
class MenuAdmin(admin.ModelAdmin):
inlines = [MenuLevelInline]
Here is the problem:
If I try to save two categories, one as a parent of another, things work fine. However, if I have a Category as a parent and a Page as a child, I get an IntegrityError: FOREIGN KEY constraint failed error.
When I look in the database, the menu_item table does contain all the keys for both the Category and Page tables.
What did I do wrong?
A:
Everything just worked once I switched to MySQL. It was not working on SQLite; I don't know why it doesn't work with SQLite, maybe it is a bug. But changing the database solved my problem.
| Multi-table inheritance and two many to many via through model not working in admin inline | I'm trying to create navigation menu from the django admin as per user's requirement.
The Model look like this:
class MenuItem(models.Model):
title = models.CharField(max_length=200, help_text='Title of the item')
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
is_published = models.BooleanField(default=False)
def __str__(self):
return self.title
class Page(MenuItem):
"""
To display non-hierarchical pages such as about us, or some page in menu
"""
slug = models.SlugField(max_length=200, unique=True, help_text='End of url')
content = HTMLField(help_text='Contents of the page')
author = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.DO_NOTHING)
class Category(MenuItem):
"""
For hierarchical display eg. in navigation menu use Category and Articles.
"""
slug = models.SlugField(max_length=200, unique=True, help_text='End of url')
class Meta:
verbose_name_plural = 'categories'
Page and Category can be different types of menu items, so, I used inheritance. Now I want to bind the MenuItem to a Menu, hence I added two more models below.
class Menu(models.Model):
title = models.CharField(max_length=200, unique=True, help_text='Name of the menu.')
is_published = models.BooleanField(default=False)
item = models.ManyToManyField(MenuItem, through='MenuLevel', through_fields=['menu', 'menu_item'])
def __str__(self):
return self.title
class MenuLevel(models.Model):
menu = models.ForeignKey(Menu, on_delete=models.CASCADE)
menu_item = models.ForeignKey(MenuItem, on_delete=models.CASCADE, related_name='items')
level = models.IntegerField(default=0)
parent = models.ForeignKey(MenuItem, on_delete=models.CASCADE, related_name='parent', null=True, blank=True)
I need the parent key to menu item to traverse through the menu items from parent to children and level to sort the menu in order.
On the admin I have two simple classes:
class MenuLevelInline(admin.TabularInline):
model = MenuLevel
@admin.register(Menu)
class MenuAdmin(admin.ModelAdmin):
inlines = [MenuLevelInline]
Here is the problem:
If I try to save two categories, one as a parent to another, things work fine. However, if I have a category as a parent and Page as a child I get IntegrityError: FOREIGN KEY constraint failed error.
When I look in the database, the menu_item table does contain all the keys for both Categories and pages table.
What did I do wrong?
| [
"Everything just worked once I switched to MySQL. It was not working on Sqlite. I don't know why it doesn't work with sqlite maybe it is a bug. But changing the database solved my problem.\n"
] | [
0
] | [] | [] | [
"django",
"django_4.1",
"django_modeladmin",
"django_models",
"python"
] | stackoverflow_0074610252_django_django_4.1_django_modeladmin_django_models_python.txt |
Q:
sys.exit() GCP Cloud Function
I have written a GCP Cloud Function in Python 3.7. When it executes sys.exit(), I'm getting 'A server error occurred ...'. I need to exit out of the function and have written the following code.
import sys
if str(strEnabled) == 'True':
printOperation = "Operation: Enabling of user"
else:
sys.exit() #Exit From the Program
Please suggest what I'm missing here.
A:
Use return instead of sys.exit
Copied from bigbounty's comment
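As a minimal sketch of that approach (the strEnabled handling is assumed from the question; GCP serves Python HTTP functions through Flask, so returning a (body, status) tuple works):
def main(request):
    str_enabled = request.args.get("enabled", "False")   # assumed source of strEnabled
    if str_enabled == "True":
        print_operation = "Operation: Enabling of user"
        return print_operation, 200
    # Exit early with a normal response instead of terminating the worker process
    return "Nothing to do", 200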
A:
If you are calling a function from another function and need to exit directly from the inner function without returning, you can use abort.
abort will immediately return the response and exit the program from the place where you call it.
import sys
from flask import abort
if str(strEnabled) == 'True':
printOperation = "Operation: Enabling of user"
else:
# abort(200) #return 200
abort(500, description="Exit") #return 500
Note: Cloud Functions use the Flask module internally, so you do not need to install it separately.
| sys.exit() GCP Cloud Function | Have written GCP Cloud Function in Python 3.7. While executing, sys.exit() I'm getting 'A server error occurred ...'. I need to exit out of the function and have written following code.
import sys
if str(strEnabled) == 'True':
printOperation = "Operation: Enabling of user"
else:
sys.exit() #Exit From the Program
Please suggest, what I'm missing here.
| [
"Use return instead of sys.exit\nCopied from bigbounty's comment\n",
"If you are calling a function from another function and needs to directly exit from the inner function without return you can use abort.\nabort will directly return the response and exit the program from the place you are calling it.\nimport sys \nfrom flask import abort\n\nif str(strEnabled) == 'True':\n printOperation = \"Operation: Enabling of user\" \nelse:\n # abort(200) #return 200\n abort(500, description=\"Exit\") #return 500\n\n\nNote: cloud functions use flask module internally, so you need not\ninstall it separately.\n\n"
] | [
3,
0
] | [] | [] | [
"google_cloud_functions",
"google_cloud_platform",
"python"
] | stackoverflow_0063090501_google_cloud_functions_google_cloud_platform_python.txt |
Q:
Get the column names for 2nd largest value for each row in a Pandas dataframe
Say I have a Pandas DataFrame like this:
df = pd.DataFrame({
'a': [4, 5, 3, 1, 2],
'b': [20, 10, 40, 50, 30],
'c': [25, 20, 5, 15, 10]
})
so df looks like:
print(df)
a b c
0 4 20 25
1 5 10 20
2 3 40 5
3 1 50 15
4 2 30 10
And I want to get the column name of the 2nd largest value in each row. Borrowing the answer from Felex Le in this thread, I can now get the 2nd largest value by:
def second_largest(l = []):
return (l.nlargest(2).min())
print(df.apply(second_largest, axis = 1))
which gives me:
0 20
1 10
2 5
3 15
4 10
dtype: int64
But what I really want is the column names for those values, or to say:
0 b
1 b
2 c
3 c
4 c
Pandas has a function idxmax which can do the job for the largest value:
df.idxmax(axis = 1)
0 c
1 c
2 b
3 b
4 b
dtype: object
Is there any elegant way to do the same job but for the 2nd largest value?
A:
Use numpy.argsort for positions of second largest values:
df['new'] = df.columns.to_numpy()[np.argsort(df.to_numpy())[:, -2]]
print(df)
a b c new
0 4 20 25 b
1 5 10 20 b
2 3 40 5 c
3 1 50 15 c
4 2 30 10 c
Your solution should work, but it is slow:
def second_largest(l = []):
return (l.nlargest(2).idxmin())
print(df.apply(second_largest, axis = 1))
A:
If efficiency is important, numpy.argpartition is quite efficient:
N = 2
cols = df.columns.to_numpy()
pd.Series(cols[np.argpartition(df.to_numpy().T, -N, axis=0)[-N]], index=df.index)
If you want a pure pandas (less efficient):
out = df.stack().groupby(level=0).apply(lambda s: s.nlargest(2).index[-1][1])
Output:
0 b
1 b
2 c
3 c
4 c
dtype: object
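For a quick sanity check of either approach against the idxmax behaviour mentioned in the question, you can also mask out each row's maximum and take idxmax of what remains (a sketch; note it behaves differently when a row's maximum value appears more than once):
second = df.mask(df.eq(df.max(axis=1), axis=0)).idxmax(axis=1)
print(second)   # 0 b, 1 b, 2 c, 3 c, 4 c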
| Get the column names for 2nd largest value for each row in a Pandas dataframe | Say I have such Pandas dataframe
df = pd.DataFrame({
'a': [4, 5, 3, 1, 2],
'b': [20, 10, 40, 50, 30],
'c': [25, 20, 5, 15, 10]
})
so df looks like:
print(df)
a b c
0 4 20 25
1 5 10 20
2 3 40 5
3 1 50 15
4 2 30 10
And I want to get the column name of the 2nd largest value in each row. Borrowing the answer from Felex Le in this thread, I can now get the 2nd largest value by:
def second_largest(l = []):
return (l.nlargest(2).min())
print(df.apply(second_largest, axis = 1))
which gives me:
0 20
1 10
2 5
3 15
4 10
dtype: int64
But what I really want is the column names for those values, or to say:
0 b
1 b
2 c
3 c
4 c
Pandas has a function idxmax which can do the job for the largest value:
df.idxmax(axis = 1)
0 c
1 c
2 b
3 b
4 b
dtype: object
Is there any elegant way to do the same job but for the 2nd largest value?
| [
"Use numpy.argsort for positions of second largest values:\ndf['new'] = df['new'] = df.columns.to_numpy()[np.argsort(df.to_numpy())[:, -2]]\nprint(df)\n a b c new\n0 4 20 25 b\n1 5 10 20 b\n2 3 40 5 c\n3 1 50 15 c\n4 2 30 10 c\n\nYour solution should working, but is slow:\ndef second_largest(l = []): \n return (l.nlargest(2).idxmin())\n\nprint(df.apply(second_largest, axis = 1))\n\n",
"If efficiency is important, numpy.argpartition is quite efficient:\nN = 2\ncols = df.columns.to_numpy()\npd.Series(cols[np.argpartition(df.to_numpy().T, -N, axis=0)[-N]], index=df.index)\n\nIf you want a pure pandas (less efficient):\nout = df.stack().groupby(level=0).apply(lambda s: s.nlargest(2).index[-1][1])\n\nOutput:\n0 b\n1 b\n2 c\n3 c\n4 c\ndtype: object\n\n"
] | [
4,
2
] | [] | [] | [
"dataframe",
"numpy",
"pandas",
"python"
] | stackoverflow_0074612525_dataframe_numpy_pandas_python.txt |
Q:
maximum word split
Given a string s and a dictionary of valid words d, determine the largest number of valid words the string
can be split up into.
I tried solving this problem with the code below but it is not giving me the answer I am looking for.
def word_split_dp(s):
n = len(s)
ans = [0]*n
# base case
ans[0] = 0
for i in range(1, len(s)):
ans[i] = float('-inf')
for j in range(1, i-1):
ans[i]= max(ans[i], ans[i-j] + wordcheck(s,i,j))
print(ans)
return ans[n-2]
def wordcheck(s,i,j):
for word in dict:
start = i-j+1
if(s[start:i] == word):
return 1
else:
return float('-inf')
print(word_split_dp(s))
A:
There are a few issues in your attempt. Starting from the top:
Assuming that ans[i] represents the maximum number of partitions of the substring s[0:i] into valid words, you'll need to make this list one entry longer, so that there is an ans[n], which will eventually contain the answer, i.e. the maximum number of partitions in s[0:n] which is s. So change:
ans = [0]*n
to
ans = [0]*(n+1)
Because of the reasoning given in the first bullet point, the final return should be return ans[n] instead of return ans[n-2].
Again because of the reasoning given in the first bullet point, the outer loop should make one more iteration, so change:
for i in range(1, len(s)):
to
for i in range(1, len(s)+1):
Assuming that j represents the size of the substring to test, the inner loop should allow i-j to eventually reach back to 0, so j should be able to reach the value i. That means the inner loop should make two more iterations. So change:
for j in range(1, i-1):
to
for j in range(1, i+1):
In wordcheck, the slice from s should start at i-j, as j represents the size of the slice. So change:
start = i-j+1
to
start = i-j
In wordcheck, the loop will always exit in its first iteration. It cannot loop more as it always executes a return. You should not exit the loop when there is no success. The return float('-inf') should only be made when all words have been tried. So remove:
else:
return float('-inf')
and instead add this after the loop:
return float('-inf')
Those are the issues. The code with all those corrections:
def word_split_dp(s):
n = len(s)
ans = [0]*(n+1) #correction
# base case
ans[0] = 0
for i in range(1, len(s) + 1): # correction
ans[i] = float('-inf')
for j in range(1, i + 1): # correction
ans[i] = max(ans[i], ans[i-j] + wordcheck(s,i,j))
print(ans)
return ans[n] # correction
def wordcheck(s,i,j):
words=("war","month","on","the","heat","eat","he","arm","at")
for word in words:
start = i-j # correction
if s[start:i] == word:
return 1
# don't interrupt loop otherwise
return float('-inf')
s="warmontheheat"
print(word_split_dp(s)) # -inf
s="warontheheat"
print(word_split_dp(s)) # 5
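A small follow-up sketch (not part of the corrections above): passing the dictionary in as a set parameter makes each word check O(1) instead of scanning every word, and the DP reads a little more directly:
def word_split_dp(s, words):
    n = len(s)
    ans = [0] + [float('-inf')] * n
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            if s[i - j:i] in words:                 # O(1) membership test
                ans[i] = max(ans[i], ans[i - j] + 1)
    return ans[n]

words = {"war", "month", "on", "the", "heat", "eat", "he", "arm", "at"}
print(word_split_dp("warontheheat", words))   # 5
print(word_split_dp("warmontheheat", words))  # -inf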
| maximum word split | Given a string s and a dictionary of valid words d, determine the largest number of valid words the string
can be split up into.
I tried solving this problem with the code below but it is not giving me the answer I am looking for.
def word_split_dp(s):
n = len(s)
ans = [0]*n
# base case
ans[0] = 0
for i in range(1, len(s)):
ans[i] = float('-inf')
for j in range(1, i-1):
ans[i]= max(ans[i], ans[i-j] + wordcheck(s,i,j))
print(ans)
return ans[n-2]
def wordcheck(s,i,j):
for word in dict:
start = i-j+1
if(s[start:i] == word):
return 1
else:
return float('-inf')
print(word_split_dp(s))
| [
"There are a few issues in your attempt. Starting from the top:\n\nAssuming that ans[i] represents the maximum number of partitions of the substring s[0:i] into valid words, you'll need to make this list one entry longer, so that there is an ans[n], which will eventually contain the answer, i.e. the maximum number of partitions in s[0:n] which is s. So change:\nans = [0]*n\n\nto\nans = [0]*(n+1)\n\n\nBecause of the reasoning given in the first bullet point, the final return should be return ans[n] instead of return ans[n-2].\n\nAgain because of the reasoning given in the first billet point, the outer loop should make one more iteration, so change:\nfor i in range(1, len(s)):\n\nto\nfor i in range(1, len(s)+1):\n\n\nAssuming that j represents the size of the substring to test, the inner loop should allow i-j to eventually reach back to 0, so j should be able to reach the value i. That means the inner loop should make two more iterations. So change:\nfor j in range(1, i-1):\n\nto\nfor j in range(1, i+1):\n\n\nIn wordcheck, the slice from s should start at i-j, as j represents the size of the slice. So change:\nstart = i-j+1\n\nto\nstart = i-j\n\n\nIn wordcheck, the loop will always exit in its first iteration. It cannot loop more as it always executes a return. You should not exit the loop when there is no success. The return float('-inf') should only be made when all words have been tried. So remove:\n else:\n return float('-inf')\n\nand instead add at after the loop:\nreturn float('-inf')\n\n\n\nThose are the issues. The code with all those corrections:\ndef word_split_dp(s):\n n = len(s)\n ans = [0]*(n+1) #correction\n # base case\n ans[0] = 0\n for i in range(1, len(s) + 1): # correction\n ans[i] = float('-inf')\n for j in range(1, i + 1): # correction\n ans[i] = max(ans[i], ans[i-j] + wordcheck(s,i,j))\n \n print(ans)\n return ans[n] # correction\n \ndef wordcheck(s,i,j):\n words=(\"war\",\"month\",\"on\",\"the\",\"heat\",\"eat\",\"he\",\"arm\",\"at\")\n for word in words:\n start = i-j # correction\n if s[start:i] == word:\n return 1\n # don't interrupt loop otherwise\n return float('-inf')\n \ns=\"warmontheheat\"\nprint(word_split_dp(s)) # -inf\ns=\"warontheheat\"\nprint(word_split_dp(s)) # 5\n\n"
] | [
0
] | [] | [] | [
"algorithm",
"python",
"word_break"
] | stackoverflow_0074606098_algorithm_python_word_break.txt |
Q:
Problem with matplotlib events (plot does not appear)
In the documentation about event handling, we have an interesting example (the "Picking exercise").
I am interested in something similar, but instead of a new window appearing every time a point is picked in the first window (as it does now), I would like to update the plot in the same second window.
So I did
"""
Compute the mean and stddev of 100 data sets and plot mean vs. stddev.
When you click on one of the (mean, stddev) points, plot the raw dataset
that generated that point.
"""
import numpy as np
import matplotlib.pyplot as plt
X = np.random.rand(100, 1000)
xs = np.mean(X, axis=1)
ys = np.std(X, axis=1)
fig, ax = plt.subplots()
ax.set_title('click on point to plot time series')
line, = ax.plot(xs, ys, 'o', picker=True, pickradius=5) # 5 points tolerance
figR, axsR = plt.subplots()
def onpick(event):
if event.artist != line:
return
n = len(event.ind)
if not n:
return
print("Index ",event.ind)
axsR.plot(X[event.ind])
axsR.set_ylim(-0.5,1.5)
# fig, axs = plt.subplots(n, squeeze=False)
# print(axs.flat)
# for dataind, ax in zip(event.ind, axs.flat):
# ax.plot(X[dataind])
# ax.text(0.05, 0.9,
# f"$\\mu$={xs[dataind]:1.3f}\n$\\sigma$={ys[dataind]:1.3f}",
# transform=ax.transAxes, verticalalignment='top')
# ax.set_ylim(-0.5, 1.5)
figR.show()
return True
fig.canvas.mpl_connect('pick_event', onpick)
plt.show()
So now I have two windows, and in one I can clearly pick, but the other window does not show the plot I want.
How can I make the plot appear?
EDIT: The strangest thing happened
In a conda environment with python 3.8.12 matplotlib 3.5.1 it is working
In another with python 3.7.10 matplotlib 3.3.4 it is not working
A:
I made a small test of your code; changing the last line of your onpick function to figR.canvas.draw() solves the issue for me. The function should look like:
def onpick(event):
if event.artist != line:
return
n = len(event.ind)
if not n:
return
print("Index ",event.ind)
axsR.plot(X[event.ind])
axsR.set_ylim(-0.5,1.5)
figR.canvas.draw()
return True
draw() re-draws the figure. This lets you work in interactive mode: if you have changed your data or reformatted the figure, it allows the graph itself to update.
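As a small extension (an assumption about the intended behaviour, not something stated in the question): if each pick should replace the previously shown series rather than draw on top of it, clear the axes first; draw_idle() can also be used so the redraw is scheduled on the GUI event loop:
def onpick(event):
    if event.artist != line:
        return
    if not len(event.ind):
        return
    axsR.cla()                  # drop the curves from the previous pick
    axsR.plot(X[event.ind].T)   # transpose so each picked dataset is its own line
    axsR.set_ylim(-0.5, 1.5)
    figR.canvas.draw_idle()     # schedule a redraw on the event loop
    return True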
| Problem with matplotlib events (plot does not appear) | In the documentation about event handling, we have an interesting example (the "Picking excercise")
I am interested in something similar but instead of a new window appearing every time a point is picked in the first window (as it is now) I would like to change the plot of the same second window.
So I did
"""
Compute the mean and stddev of 100 data sets and plot mean vs. stddev.
When you click on one of the (mean, stddev) points, plot the raw dataset
that generated that point.
"""
import numpy as np
import matplotlib.pyplot as plt
X = np.random.rand(100, 1000)
xs = np.mean(X, axis=1)
ys = np.std(X, axis=1)
fig, ax = plt.subplots()
ax.set_title('click on point to plot time series')
line, = ax.plot(xs, ys, 'o', picker=True, pickradius=5) # 5 points tolerance
figR, axsR = plt.subplots()
def onpick(event):
if event.artist != line:
return
n = len(event.ind)
if not n:
return
print("Index ",event.ind)
axsR.plot(X[event.ind])
axsR.set_ylim(-0.5,1.5)
# fig, axs = plt.subplots(n, squeeze=False)
# print(axs.flat)
# for dataind, ax in zip(event.ind, axs.flat):
# ax.plot(X[dataind])
# ax.text(0.05, 0.9,
# f"$\\mu$={xs[dataind]:1.3f}\n$\\sigma$={ys[dataind]:1.3f}",
# transform=ax.transAxes, verticalalignment='top')
# ax.set_ylim(-0.5, 1.5)
figR.show()
return True
fig.canvas.mpl_connect('pick_event', onpick)
plt.show()
So now I have two windows, and in one I can clearly pick but the other window does not show the plot I wish.
How can I make that the plot appears?
EDIT: The strangest thing happened
In a conda environment with python 3.8.12 matplotlib 3.5.1 it is working
In another with python 3.7.10 matplotlib 3.3.4 it is not working
| [
"I have made small test on your code, changing the last line of your onpick function to figR.canvas.draw() solve the issue for me, the function should look like:\ndef onpick(event):\n if event.artist != line:\n return\n n = len(event.ind)\n if not n:\n return\n print(\"Index \",event.ind)\n \n axsR.plot(X[event.ind])\n axsR.set_ylim(-0.5,1.5)\n \n figR.canvas.draw()\n return True\n\ndraw() will re-draw the figure. This allows you to work in interactive mode an in case you have changed your data or formatted the figure, allow the graph itself to change.\n"
] | [
1
] | [] | [] | [
"events",
"matplotlib",
"python"
] | stackoverflow_0074609730_events_matplotlib_python.txt |
Q:
How to nest a "for i in range()" loop in another loop that takes data from a dictionary?
The following code runs 599 instances of bootstrapping using data stored in the dictionary data_rois. data_rois is a dictionary that includes many keys and each key is associated with an array of numeric values. This part of the code works fine when it is coded as below:
boot_i = []
for i in range(599):
boot = np.random.choice(data_rois["interoception"], size=N)
boot = np.mean(boot)
boot_i.append(boot)
Now, I would like to apply bootstrapping for many keys in the dictionary data_rois. Therefore, I apply a for loop as below that aims to store the bootstrapping results in another dictionary called boot_rois = {}. The code shown below aims to shorten the code above, since the code above would get really long if I had to repeat it many times for all keys in data_rois.
rois = ["interoception", "extero", ...] # A long list of rois
boot_rois = {}
for roi in rois:
for i in range(599):
boot = np.random.choice(data_rois[roi], size=N)
boot = np.mean(boot)
boot_rois[roi] = roi
The problem: The code works. However, my code appears to ignore for i in range(599) but only runs boot = np.random.choice(data_rois[roi], size=N) one time instead of 599 times. What line of code is missing in the nested for loop so that it runs bootstrapping 599 times instead of 1 time?
Update:
I specify my aim here. My aim is to compute the standard deviation (SD) for each roi, based on the 599 bootstraps.
Here is an updated code suggested by someone in this topic. I changed that code to compute the SD and the results look fine.
boot_rois = {}
for roi in rois:
last_boot = None
for i in range(599):
boot = np.random.choice(data_rois[roi], size=N)
boot = np.std(boot)
if(last_boot is not None):
boot = np.std([boot,last_boot])
boot_rois[roi] = boot
last_boot = boot_rois[roi]
A:
It is partially unclear what you are trying to do since you didn't provide a minimal reproducible example. I've filled in as best I can and I think this should still help you solve your problem. You may need to adjust the type of aggregation to fit your needs.
Your for loop is of course executing as many times as it says it is.
I believe you forgot to set the value of boot_rois to boot, otherwise your code just regenerates the ROI list.
boot_rois[roi] = boot
A secondary problem with your code is that every time you run your inner for loop, you are just overwriting the same key in your dictionary. You probably want to do something like this instead. It isn't completely clear what type of math you are trying to do, but assuming you want to calculate the average of your 599 random arrays in a rolling fashion, you could do this:
import numpy as np
N=10
data_rois={"interoception" : [1,2], "extero" : [2,3] }
rois = data_rois.keys() # A long list of rois
boot_rois = {}
for roi in rois:
last_boot = None
for i in range(7):
boot = np.random.choice(data_rois[roi], size=N)
print(boot)
boot = np.mean(boot)
# During first iteration aggregator last_boot is None
if(last_boot is not None):
# Average with the last iteration and repeat
# This logic may need to be replaced with whatever math you are trying to do
boot = np.mean([boot,last_boot])
boot_rois[roi] = boot
last_boot = boot_rois[roi]
print(boot_rois)
Note: doing it this way means that the result is not the plain average of all 7 iterations; if you want that, you can store them in a sum variable and divide by the number of iterations you performed in the inner for loop. Mathematically, taking the mean repeatedly is different from summing everything and dividing by the number of samples. Make sure your math is correct.
A:
What about using a list comprehension instead of nested loops:
rois = ["interoception", "extero", ...] # A long list of rois
boot_rois = {}
for roi in rois:
# will execute np.random.choice 599 times and store these results in a list
rand_choices = [np.mean(np.random.choice(data_rois[roi], size=N)) for _ in range(599)]
# will calculate the standard deviation of those 599 results
boot_rois[roi] = np.std(rand_choices)
This will run np.random.choice 599 times and store these results in a list (I added np.mean(...) so you can calculate the stdev on those 599 mean values), so you can run np.std on that list, and store this final result in boot_rois[roi]
Below is runnable code I used for testing purposes. It generates 20 random numbers between 0 and 50 and calculates the stdev:
import random
import numpy as np
rand_ints = [random.randint(0, 50) for _ in range(20)]
print(rand_ints)
stdev = np.std(rand_ints)
print(stdev)
First execution:
[9, 44, 13, 0, 43, 12, 4, 40, 35, 38, 43, 0, 3, 38, 39, 45, 37, 14, 4, 21]
16.908281994336384
Second execution:
[2, 20, 17, 32, 0, 39, 23, 27, 24, 41, 8, 21, 2, 7, 21, 3, 27, 7, 15, 36]
12.531560158256433
In order to emulate calculating the stdev of means of samples, I tried this:
import random
import numpy as np
rand_ints = [np.mean([random.randint(0, 50) for _ in range(10)]) for _ in range(20)]
print(rand_ints)
stdev = np.std(rand_ints)
print(stdev)
Which gave me:
[25.8, 16.9, 27.6, 21.8, 20.6, 30.5, 19.4, 32.9, 27.8, 18.5, 24.5, 18.7, 23.1, 26.9, 30.6, 25.1, 24.9, 26.5, 21.8, 25.8]
4.2607833786758045
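A compact variant of the same idea, written as a sketch on top of the data_rois and rois from the question: a seeded Generator makes the resampling reproducible, and the whole {roi: SD of bootstrap means} dictionary can be built in one pass.
import numpy as np

rng = np.random.default_rng(42)   # seeded for reproducibility

def bootstrap_sd_of_mean(values, n_boot=599, size=None):
    size = len(values) if size is None else size
    means = [rng.choice(values, size=size, replace=True).mean() for _ in range(n_boot)]
    return np.std(means)

boot_rois = {roi: bootstrap_sd_of_mean(data_rois[roi]) for roi in rois}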
| How to nest a "for i in range()" loop in another loop that takes data from a dictionary? | The following code runs 599 instances of bootstrapping using data stored in the dictionary data_rois. data_rois is a dictionary that includes many keys and each key is associated with an array of numeric values. This part of the code works fine when it is coded as below:
boot_i = []
for i in range(599):
boot = np.random.choice(data_rois["interoception"], size=N)
boot = np.mean(boot)
boot_i.append(boot)
Now, I would like to apply bootstrapping for many keys in the dictionary data_rois. Therefore, I apply a for loop as below that aims to store the bootstrapping results in another dictionary called boot_rois = {}. The code shown below aims to shorten the code above, since the code above would get really long if I had to repeat it many times for all keys in data_rois.
rois = ["interoception", "extero", ...] # A long list of rois
boot_rois = {}
for roi in rois:
for i in range(599):
boot = np.random.choice(data_rois[roi], size=N)
boot = np.mean(boot)
boot_rois[roi] = roi
The problem: The code works. However, my code appears to ignore for i in range(599) but only runs boot = np.random.choice(data_rois[roi], size=N) one time instead of 599 times. What line of code is missing in the nested for loop so that it runs bootstrapping 599 times instead of 1 time?
Update:
I specify my aim here. My aim is to compute the standard deviation (SD) for each roi, based on the 599 bootstraps.
Here is an updated code suggested by someone in this topic. I changed that code to compute the SD and the results look fine.
boot_rois = {}
for roi in rois:
last_boot = None
for i in range(599):
boot = np.random.choice(data_rois[roi], size=N)
boot = np.std(boot)
if(last_boot is not None):
boot = np.std([boot,last_boot])
boot_rois[roi] = boot
last_boot = boot_rois[roi]
| [
"It is partially unclear what you are trying to do since you didn't provide a minimal reproducible example. I've filled in as best I can and I think this should still help you solve your problem. You may need to adjust the type of aggregation to fit your needs.\nYour for loop is of course executing as many times as it says it is.\nI believe you forgot to set the value of boot_rois to boot, otherwise your code just regenerates the ROI list.\n boot_rois[roi] = boot\n\nA secondary problem with your code is that everytime you do your inner for loop, you are just overwriting the same key in your dictionary. You probably want to do something like this instead. It isn't completely clear what type of math you are trying to do, but assuming you want to calculate the average between your 599 random arrays in a rolling fashion you could do this:\nimport numpy as np\nN=10\ndata_rois={\"interoception\" : [1,2], \"extero\" : [2,3] }\nrois = data_rois.keys() # A long list of rois\nboot_rois = {}\nfor roi in rois:\n last_boot = None\n for i in range(7):\n boot = np.random.choice(data_rois[roi], size=N)\n print(boot)\n boot = np.mean(boot)\n # During first iteration aggregator last_boot is None\n if(last_boot is not None):\n # Average with the last iteration and repeat\n # This logic may need to be replaced with whatever math you are trying to do\n boot = np.mean([boot,last_boot])\n \n boot_rois[roi] = boot\n last_boot = boot_rois[roi]\n \nprint(boot_rois)\n\nNote: Doing it this way means that the average is not the average of each of the 7, if you want to do that you can store them in a sum variable and divide by the number of iterations you performed in the inner for loop. Mathematically, doing the mean multiple times is different than summing everything and dividing by the number of sums. Make sure your math is correct.\n",
"What about using comprehension list instead of nested loops:\nrois = [\"interoception\", \"extero\", ...] # A long list of rois\nboot_rois = {}\nfor roi in rois:\n # will execute np.random.choice 599 times and store these results in a list\n rand_choices = [np.mean(np.random.choice(data_rois[roi], size=N)) for _ in range(599)]\n # will calculate the standard deviation of those 599 results\n boot_rois[roi] = np.std(rand_choices)\n\nThis will run np.random.choice 599 times and store these results in a list (I added np.mean(...) so you can calculate the stdev on those 599 mean values), so you can run np.std on that list, and store this final result in boot_rois[roi]\n\nBelow is runnable code I used for tests purpose. It generates 20 random numbers between 0 and 50, and calculates the stdev:\nimport random\nimport numpy as np\n\nrand_ints = [random.randint(0, 50) for _ in range(20)]\nprint(rand_ints)\nstdev = np.std(rand_ints)\nprint(stdev)\n\nFirst execution:\n\n[9, 44, 13, 0, 43, 12, 4, 40, 35, 38, 43, 0, 3, 38, 39, 45, 37, 14, 4, 21]\n16.908281994336384\n\n\nSecond execution:\n\n[2, 20, 17, 32, 0, 39, 23, 27, 24, 41, 8, 21, 2, 7, 21, 3, 27, 7, 15, 36]\n12.531560158256433\n\n\n\nIn order to emulate calculating the stdev of means of samples, I tried this:\nimport random\nimport numpy as np\n\nrand_ints = [np.mean([random.randint(0, 50) for _ in range(10)]) for _ in range(20)]\nprint(rand_ints)\nstdev = np.std(rand_ints)\nprint(stdev)\n\nWhich gave me:\n\n[25.8, 16.9, 27.6, 21.8, 20.6, 30.5, 19.4, 32.9, 27.8, 18.5, 24.5, 18.7, 23.1, 26.9, 30.6, 25.1, 24.9, 26.5, 21.8, 25.8]\n4.2607833786758045\n\n\n"
] | [
2,
1
] | [] | [] | [
"dictionary",
"for_loop",
"python"
] | stackoverflow_0074611900_dictionary_for_loop_python.txt |
Q:
Azure function returns 503 after 30 seconds
I created an Azure function in Python to insert data into SQL Server. The process was taking around a minute when I was testing it locally, but when I deployed the code, I ended up receiving a 403 error as shown below.
After debugging, I realized the data was successfully persisted in the database (the whole one-minute process); it is only the response that comes back as an error.
So I created a function that just sleeps for 30 seconds (after some trial and error) and renders a JSON response (see the code below).
I get a successful response for sleeps of 29 seconds or less, but when I make the sleep 30 seconds, I get the 403 error.
import json
import azure.functions as func
import time
def main(req: func.HttpRequest) -> func.HttpResponse:
data = req.get_json()
data["processed"] = True
time.sleep(30)
return func.HttpResponse(json.dumps(data))
Our services aren't available right now. We're working to restore all services as soon as possible. Please check back soon.
I started with a consumption plan and even changed it to Elastic Premium but the outcome did not change.
A:
In Azure Functions, an HTTP Error 503 Service Unavailable can be caused by a few things, such as:
The backend server returned 503 due to a memory leak/issue in the code.
A platform issue because the backend server is not running or not allocated.
The function host is down or restarting.
Have a look into the "Diagnose and Solve problems" blade in the Azure Function App and select the "Function app down or reporting" detector.
Also, Microsoft Azure Support can analyze the function logs to find the root cause on your function app if you send an email to [email protected] with your subscription ID as well as your function app name.
A:
There might be a few different causes here, so a single definitive solution does not seem possible, but I will list a few possibilities.
It seems to mostly be an issue when running code in the portal with Code+Test. Try running the function without using the portal (example on a similar issue: Azure Function dies after 30 seconds and cannot run multiple invocations concurrently).
Some report an issue with the region (not working in one region, but after moving to another region it works).
There have also been reports of solving this with the Premium plan (but it looks like you already tried this).
It could also be another resource, such as a load balancer with a lower timeout, in front of the function or in front of a request the function makes (example: Why does my POST request throw a 503 after 30 seconds?). I don't know your full setup.
If none of the above helps, it might be some Python-side timeout, such as an async timeout. I am no Python expert though, so I am not fully sure here.
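One concrete thing worth ruling out is the Functions host timeout, which is set in host.json (sketch below). Note, though, that the Consumption-plan default is 5 minutes and Premium's is 30, so a hard cut at exactly 30 seconds points more towards a proxy or load balancer in front of the app, as in the fourth point above.
{
  "version": "2.0",
  "functionTimeout": "00:10:00"
}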
| Azure function returns 503 after 30 seconds | I created an Azure function in python to insert data into SQL server. The process was taking around a minute when I was locally testing it. But when I deployed the code, I ended up receiving 403 error as shown below.
After debugging, I realized the data was successfully persisted in the database(the whole 1 min process) but its only the response that I get is the error.
So I created a function to just sleep for 30 seconds(after some trial and error) and render a JSON response(check below code).
I get a successful response for 29 seconds or lesser but when I make the sleep to 30 seconds, I get the 403 error
import json
import azure.functions as func
import time
def main(req: func.HttpRequest) -> func.HttpResponse:
data = req.get_json()
data["processed"] = True
time.sleep(30)
return func.HttpResponse(json.dumps(data))
Our services aren't
available right nowWe're working to restore all services as
soon as possible. Please check back soon.
I started with a consumption plan and even changed it to Elastic Premium but the outcome did not change.
| [
"In Azure Functions, HTTP Error 503 Service Unavailable can be caused due to few reasons like:\n\nThe backend server returned 503 due to a memory leak/issue in the code.\nPlatform issue due to the backend server not running/ allocated\nFunction host is down/restarting.\n\n\nHave a look into the \"Diagnose and Solve problems\" blade in the Azure Function App and select the \"Function app down or reporting\" detector.\n\nAlso, Microsoft Azure Support will do the analysis to find the root cause on your function app if we send an email at [email protected] with your subscription ID as well as your function app name using function logs.\n",
"There might be a few issues causing this so a solid 1 truth solutions seems not be fully possible. But I will list a few possible solutions.\n\nIt seems to mostly be an issue when running code in the portal with Code+test. Try run the function without using the portal. (example on others issue Azure Function dies after 30 seconds and cannot run multiple invocations concurrently )\n\nSome report there might be an issue with region. (Not working in one region but after move to other region it works)\n\nAlso been reports of solving this with premium plan (But looks like you tried this)\n\nCould be some other resources like a loadbalancer with lower timeouts. (Example Why does my POST request throw a 503 after 30 seconds? ). Don't know your full setup. Might be when calling the function or a request the function does.\n\nIf none of the above helps it might be some python timeout like python async timeout. I am no python expert though so not fully sure here.\n\n\n"
] | [
0,
0
] | [] | [] | [
"azure",
"azure_functions",
"python",
"serverless"
] | stackoverflow_0071658591_azure_azure_functions_python_serverless.txt |
Q:
How do I delete a file from a folder in Python?
I do know about os.remove(). What I am working on: when an Excel file is created in a folder, the script runs and adds the Excel file's data into particular columns of a database table. What I want is for the file to be deleted from that folder just after the script runs, so that when a new Excel file is created in the folder the script runs again and adds the new data into the database, and so on. I am a newbie to Python, and as far as I know os.remove needs a file name to be entered.
Is there any other way of doing this? Here is my script:
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
import ibm_db
import ibm_db_dbi
import os
import xlrd
class ExampleHandler(FileSystemEventHandler):
def on_created(self, event):
connection = ibm_db.connect("DATABASE=xyz; HOSTNAME=xyz; PORT=xyz; PROTOCOL=xyz; UID=xyz; PWD=xyz;","","")
conn = ibm_db_dbi.Connection(connection)
cur = conn.cursor()
query = "INSERT INTO EXCELUPLOAD (qty, amt, num, code, name) VALUES (?,?,?,?,?)"
for r in range(1, sheet.nrows):
qty = sheet.cell(r,0).value
amt = sheet.cell(r,1).value
num = sheet.cell(r,2).value
code = sheet.cell(r,3).value
name = sheet.cell(r,4).value
values = (qty, amt, num, code, name)
cur.execute(query, values)
cur.close()
conn.commit()
conn.close()
observer = Observer()
event_handler = ExampleHandler()
observer.schedule(event_handler, path=r'D:\New folder')
observer.start()
try:
while True:
time.sleep(1)
except KeyboardInterrupt:
observer.stop()
observer.join()
A:
You could use os.listdir() to list all files of a given folder.
Then you could either delete all files or delete files based on their file name or file type/suffix.
E. g.:
import os
from pathlib import Path

folder = "your/path"
for file in os.listdir(folder):
    full_path = os.path.join(folder, file)  # os.listdir returns names only, so build the full path

    # Option 1: delete every file in the folder
    os.remove(full_path)

    # Option 2: delete only files matching a name or suffix (use instead of option 1)
    # if Path(file).stem == "Test" or Path(file).suffix == ".txt":
    #     os.remove(full_path)
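In the watchdog handler from the question you already know exactly which file triggered the event, so instead of scanning the whole folder you could delete just that file once the rows have been inserted. A rough sketch (it assumes the workbook is read from event.src_path, which watchdog provides on the event object):
import os
from watchdog.events import FileSystemEventHandler

class ExampleHandler(FileSystemEventHandler):
    def on_created(self, event):
        if event.is_directory:
            return
        # ... open event.src_path with xlrd, insert the rows into the database ...
        os.remove(event.src_path)  # delete the Excel file that was just processed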
| How do I delete a file from a folder in Python? | I do know about os.remove(). What I am working on: when an Excel file is created in a folder, the script runs and adds the Excel file's data into the corresponding columns of a database table. What I want is for the file to be deleted from that folder right after the script has processed it, so that when a new Excel file is created in the folder the script runs again, adds the new data to the database, and so on. I am a newbie to Python, and as far as I know os.remove needs the file name to be passed to it.
Is there any other function for doing this? Here is my script:
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
import ibm_db
import ibm_db_dbi
import os
import xlrd
class ExampleHandler(FileSystemEventHandler):
def on_created(self, event):
connection = ibm_db.connect("DATABASE=xyz; HOSTNAME=xyz; PORT=xyz; PROTOCOL=xyz; UID=xyz; PWD=xyz;","","")
conn = ibm_db_dbi.Connection(connection)
cur = conn.cursor()
query = "INSERT INTO EXCELUPLOAD (qty, amt, num, code, name) VALUES (?,?,?,?,?)"
for r in range(1, sheet.nrows):
qty = sheet.cell(r,0).value
amt = sheet.cell(r,1).value
num = sheet.cell(r,2).value
code = sheet.cell(r,3).value
name = sheet.cell(r,4).value
values = (qty, amt, num, code, name)
cur.execute(query, values)
cur.close()
conn.commit()
conn.close()
observer = Observer()
event_handler = ExampleHandler()
observer.schedule(event_handler, path=r'D:\New folder')
observer.start()
try:
while True:
time.sleep(1)
except KeyboardInterrupt:
observer.stop()
observer.join()
| [
"You could use os.listdir() to list all files of a given folder.\nThen you could either delete all files or delete files based on their file name or file type/suffix.\nE. g.:\nimport os\nfrom pathlib import Path\n\nfiles = os.listdir(\"your/path\")\nfor file in files:\n os.remove(file) # delete all files\n\n if Path(file).name == \"Test\" or Path(file).suffix == \".txt\":\n os.remove(file) # delete file based on their name or suffix\n\n"
] | [
1
] | [] | [] | [
"delete_file",
"python"
] | stackoverflow_0074612487_delete_file_python.txt |
Q:
Azure function python: blob.set method store empty file in blob container through output binding
An azure function is triggered via blob trigger event.
Trigger gives a csv file as myblob to the function from
new-container.
The function also gets base.csv as base from base-container.
Both CSV files are read via pandas library.
Some processing is done to create df_final.
df_final is converted to string representation through to_csv().
The string representation of CSV is converted to utf-8 encoding.
The blobout.set() stores the encoded CSV to base-container as base.csv.
Everything works as expected; the only problem is that when base.csv is stored via blobout.set, it ends up as an empty file of 0 bytes, even though df_final has records.
__init__.py
import logging
import pandas as pd
import azure.functions as func
from io import BytesIO
def main(myblob: func.InputStream, base: func.InputStream, blobout: func.Out[bytes]):
df_base = pd.read_csv(BytesIO(base.read()))
df_new = pd.read_csv(BytesIO(myblob.read()))
df_final = process_data(df_base, df_new)
df_final = df_final.to_csv(index=False)
blobout.set(df_final.encode())
function.json
{
"scriptFile": "__init__.py",
"bindings": [
{
"name": "myblob",
"type": "blobTrigger",
"direction": "in",
"path": "new-container/{name}",
"connection": "my_storage"
},
{
"type": "blob",
"name": "base",
"path": "base-container/base.csv",
"connection": "my_storage",
"direction": "in"
},
{
"name": "blobout",
"type": "blob",
"direction": "out",
"path": "base-container/base.csv",
"connection": "my_storage"
}
]
}
A:
After reproducing this from my end, it was working fine. I believe there is no value being returned from process_data. Make sure process_data is not returning a null value. Below is sample code that I used in process_data, which gave me the expected results.
def process_data(df_base,df_new):
final_dataframe = pd.concat([df_base, df_new])
return final_dataframe
__init__.py
import logging
import pandas as pd
import azure.functions as func
from io import BytesIO
def main(myblob: func.InputStream, base: func.InputStream, blobout: func.Out[bytes]):
df_base = pd.read_csv(BytesIO(base.read()))
df_new = pd.read_csv(BytesIO(myblob.read()))
df_final = process_data(df_base, df_new)
df_final = df_final.to_csv(index=False)
blobout.set(df_final.encode())
def process_data(df_base,df_new):
final_dataframe = pd.concat([df_base, df_new])
return final_dataframe
function.json
{
"scriptFile": "__init__.py",
"bindings": [
{
"name": "myblob",
"type": "blobTrigger",
"direction": "in",
"path": "new-container/{name}",
"connection": "<STORAGE_CONNECTION>"
},
{
"name": "base",
"type": "blob",
"path": "base-container/base.csv",
"connection": "<STORAGE_CONNECTION>",
"direction": "in"
},
{
"name": "blobout",
"type": "blob",
"path": "base-container/base.csv",
"connection": "<STORAGE_CONNECTION>",
"direction": "out"
}
]
}
RESULT
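If you want to confirm on your side that an empty or None return from process_data is what produces the 0-byte blob, a small guard before writing makes the failure visible in the function logs instead of silently overwriting base.csv. A sketch (the helper name is made up; it reuses the logging import already present in __init__.py):
import logging
import pandas as pd

def write_if_nonempty(df_final: pd.DataFrame, blobout) -> None:
    # Only overwrite base.csv when process_data actually produced rows.
    if df_final is None or df_final.empty:
        logging.error("process_data returned no rows; skipping blobout.set")
        return
    blobout.set(df_final.to_csv(index=False).encode())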
| Azure function python: blob.set method store empty file in blob container through output binding | An azure function is triggered via blob trigger event.
Trigger gives a csv file as myblob to the function from
new-container.
The function also gets base.csv as base from base-container.
Both CSV files are read via pandas library.
Some processing is done to create df_final.
df_final is converted to string representation through to_csv().
The string representation of CSV is converted to utf-8 encoding.
The blobout.set() stores the encoded CSV to base-container as base.csv.
Everything works as expected; the only problem is that when base.csv is stored via blobout.set, it ends up as an empty file of 0 bytes, even though df_final has records.
__init__.py
import logging
import pandas as pd
import azure.functions as func
from io import BytesIO
def main(myblob: func.InputStream, base: func.InputStream, blobout: func.Out[bytes]):
df_base = pd.read_csv(BytesIO(base.read()))
df_new = pd.read_csv(BytesIO(myblob.read()))
df_final = process_data(df_base, df_new)
df_final = df_final.to_csv(index=False)
blobout.set(df_final.encode())
function.json
{
"scriptFile": "__init__.py",
"bindings": [
{
"name": "myblob",
"type": "blobTrigger",
"direction": "in",
"path": "new-container/{name}",
"connection": "my_storage"
},
{
"type": "blob",
"name": "base",
"path": "base-container/base.csv",
"connection": "my_storage",
"direction": "in"
},
{
"name": "blobout",
"type": "blob",
"direction": "out",
"path": "base-container/base.csv",
"connection": "my_storage"
}
]
}
| [
"After reproducing from my end, this was working fine. I believe that the there is no value that is getting returned from process_data. Make sure there is no null value that is getting returned from process_data. Below is a sample code that I used in process_data which gave me expected results.\ndef process_data(df_base,df_new):\n final_dataframe = pd.concat([df_base, df_new])\n return final_dataframe\n\ninit.py\nimport logging\nimport pandas as pd\n\nimport azure.functions as func\nfrom io import BytesIO\n\ndef main(myblob: func.InputStream, base: func.InputStream, blobout: func.Out[bytes]):\n df_base = pd.read_csv(BytesIO(base.read()))\n df_new = pd.read_csv(BytesIO(myblob.read()))\n df_final = process_data(df_base, df_new)\n df_final = df_final.to_csv(index=False)\n blobout.set(df_final.encode())\n \ndef process_data(df_base,df_new):\n final_dataframe = pd.concat([df_base, df_new])\n return final_dataframe\n\nfunction.json\n{\n \"scriptFile\": \"__init__.py\",\n \"bindings\": [\n {\n \"name\": \"myblob\",\n \"type\": \"blobTrigger\",\n \"direction\": \"in\",\n \"path\": \"new-container/{name}\",\n \"connection\": \"<STORAGE_CONNECTION>\"\n },\n {\n \"name\": \"base\",\n \"type\": \"blob\",\n \"path\": \"base-container/base.csv\",\n \"connection\": \"<STORAGE_CONNECTION>\",\n \"direction\": \"in\"\n },\n {\n \"name\": \"blobout\",\n \"type\": \"blob\",\n \"path\": \"base-container/base.csv\",\n \"connection\": \"<STORAGE_CONNECTION>\",\n \"direction\": \"out\"\n }\n ]\n}\n\nRESULT\n\n"
] | [
0
] | [] | [] | [
"azure",
"azure_blob_storage",
"azure_functions",
"csv",
"python"
] | stackoverflow_0074600764_azure_azure_blob_storage_azure_functions_csv_python.txt |
Q:
Why does Python Pandas read the string of an excel file as datetime
I have the following question.
I have Excel files as follows:
When I read the file using
df = pd.read_excel(file, dtype=str).
the first row is turned into 2003-02-14 00:00:00 while the rest are displayed as they are.
How do I prevent pd.read_excel() from converting its value into a datetime or something else?
Thanks!
A:
As @ddejohn correctly said in the comments, the behavior you are facing actually comes from Excel automatically converting the data to a date. Pandas therefore has to deal with that data as a date and treat it afterwards to get the format back the way you expect, since, as you say, you cannot modify the input Excel file.
Here is a short script to make it work as you expect:
import pandas as pd
def rev(x: str) -> str:
'''
converts '2003-02-14 00:00:00' to '14.02.03'
'''
hours = '00:00:00'
if not hours in x:
return x
y = x.split()[0]
y = y.split('-')
return '.'.join([i[-2:] for i in y[::-1]])
file = r'C:\your\folder\path\Classeur1.xlsx'
df = pd.read_excel(file, dtype=str)
df['column'] = df['column'].apply(rev)
Replace df['column'] by your actual column name.
You then get the desired format in your dataframe.
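An alternative to the string slicing is to let pandas parse and reformat the affected cells itself; a sketch under the same assumption that only the converted cells contain the '00:00:00' timestamp text (file is the same path variable as above):
import pandas as pd

df = pd.read_excel(file, dtype=str)

# Rows where Excel turned the original text into a full timestamp string
mask = df['column'].str.contains('00:00:00', na=False)
df.loc[mask, 'column'] = (
    pd.to_datetime(df.loc[mask, 'column'])
      .dt.strftime('%d.%m.%y')  # back to the '14.02.03' style
)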
| Why does Python Pandas read the string of an excel file as datetime | I have the following question.
I have Excel files as follows:
When I read the file using
df = pd.read_excel(file, dtype=str).
the first row is turned into 2003-02-14 00:00:00 while the rest are displayed as they are.
How do I prevent pd.read_excel() from converting its value into a datetime or something else?
Thanks!
| [
"As @ddejohn correctly said it in the comments, the behavior you face is actually coming from Excel, automatically converting the data to date. Thus pandas will have to deal with that data AS date, and treat it later to get the correct format back as you expect, as like you say you cannot modify the input Excel file.\nHere is a short script to make it work as you expect:\nimport pandas as pd\n\ndef rev(x: str) -> str:\n '''\n converts '2003-02-14 00:00:00' to '14.02.03'\n '''\n\n hours = '00:00:00'\n if not hours in x:\n return x\n y = x.split()[0]\n y = y.split('-')\n return '.'.join([i[-2:] for i in y[::-1]])\n\nfile = r'C:\\your\\folder\\path\\Classeur1.xlsx'\ndf = pd.read_excel(file, dtype=str)\n\ndf['column'] = df['column'].apply(rev)\n\n\nReplace df['column'] by your actual column name.\nYou then get the desired format in your dataframe.\n"
] | [
0
] | [] | [] | [
"excel",
"pandas",
"python"
] | stackoverflow_0074609395_excel_pandas_python.txt |
Q:
Get sort string array by column value in a pandas DataFrame
Consider the following Python pandas DataFrame.
| date | days | country |
| ------------- | ---------- | --------- |
| 2022-02-01 | 1 | Spain |
| 2022-02-02 | 2 | Spain |
| 2022-02-01 | 3 | Italy |
| 2022-02-03 | 2 | France |
| 2022-02-03 | 1 | Germany |
| 2022-02-04 | 1 | Italy |
| 2022-02-04 | 1 | UK |
| 2022-02-05 | 2 | UK |
| 2022-02-04 | 5 | Spain |
| 2022-02-04 | 1 | Portugal |
I want to get a ranking by country according to its number of days.
| country | count_days |
| ---------------- | ----------- |
| Spain | 8 |
| Italy | 4 |
| UK | 3 |
| France | 2 |
| Germany | 1 |
| Portugal | 1 |
Finally, I want to return the countries ordered from most to fewest days as a string array.
return: countries = ['Spain', 'Italy', 'UK', 'France', 'Germany', 'Portugal']
A:
First aggregate with sum, then sort the values and convert to a DataFrame:
df1 = (df.groupby('country')['days']
.sum()
.sort_values(ascending=False)
.reset_index(name='count_days'))
print (df1)
country count_days
0 Spain 8
1 Italy 4
2 UK 3
3 France 2
4 Germany 1
5 Portugal 1
Last, convert the column to a list:
countries = df1['country'].tolist()
A solution without the intermediate DataFrame df1:
countries = df.groupby('country')['days'].sum().sort_values(ascending=False).index.tolist()
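For completeness, a quick check against the sample data from the question (column names assumed to match the table above; the order of the tied countries at the bottom may vary):
import pandas as pd

df = pd.DataFrame({
    'date': ['2022-02-01', '2022-02-02', '2022-02-01', '2022-02-03', '2022-02-03',
             '2022-02-04', '2022-02-04', '2022-02-05', '2022-02-04', '2022-02-04'],
    'days': [1, 2, 3, 2, 1, 1, 1, 2, 5, 1],
    'country': ['Spain', 'Spain', 'Italy', 'France', 'Germany',
                'Italy', 'UK', 'UK', 'Spain', 'Portugal'],
})

countries = df.groupby('country')['days'].sum().sort_values(ascending=False).index.tolist()
print(countries)  # ['Spain', 'Italy', 'UK', 'France', 'Germany', 'Portugal']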
| Get sort string array by column value in a pandas DataFrame | Consider the following Python pandas DataFrame.
| date | days | country |
| ------------- | ---------- | --------- |
| 2022-02-01 | 1 | Spain |
| 2022-02-02 | 2 | Spain |
| 2022-02-01 | 3 | Italy |
| 2022-02-03 | 2 | France |
| 2022-02-03 | 1 | Germany |
| 2022-02-04 | 1 | Italy |
| 2022-02-04 | 1 | UK |
| 2022-02-05 | 2 | UK |
| 2022-02-04 | 5 | Spain |
| 2022-02-04 | 1 | Portugal |
I want to get a ranking by country according to its number of days.
| country | count_days |
| ---------------- | ----------- |
| Spain | 8 |
| Italy | 4 |
| UK | 3 |
| France | 2 |
| Germany | 1 |
| Portugal | 1 |
Finally, I want to return the countries ordered from most to fewest days as a string array.
return: countries = ['Spain', 'Italy', 'UK', 'France', 'Germany', 'Portugal']
| [
"Firat aggreagte sum, then sorting values and convert to DataFrame:\ndf1 = (df.groupby('country')['days']\n .sum()\n .sort_values(ascending=False)\n .reset_index(name='count_days'))\nprint (df1)\n country count_days\n0 Spain 8\n1 Italy 4\n2 UK 3\n3 France 2\n4 Germany 1\n5 Portugal 1\n\nLast convert column to list:\ncountries = df1['country'].tolist()\n\nSolution without DataFrame df1:\ncountries = df.groupby('country')['days'].sum().sort_values(ascending=False).index.tolist()\n\n"
] | [
2
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074612716_dataframe_pandas_python.txt |
Q:
Scrape all elements inside a li tag
I'm trying to scrape some information from a Kaggle page. All the elements I'm looking for are in <ul role="list" class="km-list km-list--three-line">, and each element is wrapped in a <li role="listitem" class="sc-jfmDQi hfJycS">. I'm trying to scrape all of these elements from the page. Here is an HTML example of a page element:
<ul role="list" class="km-list km-list--three-line"><li role="listitem" class="sc-jfmDQi hfJycS"><div class="sc-eKszNL ktNGam"><div class="sc-hiMGwR GvHYb sc-czGAKf ffgKfy"><div class="sc-ehmTmK bMKNkA"><a href="/pmarcelino" target="_blank" class="sc-kgUAyh eqNroj" aria-label="Pedro Marcelino, PhD"><div data-testid="avatar-image" title="Pedro Marcelino, PhD" class="sc-hTtwUo eLCpfL" style="background-image: url("https://storage.googleapis.com/kaggle-avatars/thumbnails/175415-gr.jpg");"></div><svg width="64" height="64" viewBox="0 0 64 64"><circle r="30.5" cx="32" cy="32" fill="none" stroke-width="3" style="stroke: rgb(241, 243, 244);"></circle><path d="M 49.92745019492043 56.6750183284359 A 30.5 30.5 0 0 0 32 1.5" fill="none" stroke-width="3" style="stroke: rgb(32, 190, 255);"></path></svg></a></div></div><a class="sc-lbOyJj eeGduD sc-jFAmCJ fNVSOc" href="/code/pmarcelino/comprehensive-data-exploration-with-python"><div class="sc-ckMVTt jHrWZQ"><div class="sc-iBkjds sc-fLlhyt sc-fbPSWO uVZhN izULIq A-dENW">Comprehensive data exploration with Python</div><span class="sc-jIZahH sc-himrzO sc-fXynhf kdTVzc glCpMy ctwKCt"><span><span>Updated <span title="Sat Apr 30 2022 21:20:37 GMT+0200 (heure d’été d’Europe centrale)" aria-label="7 months ago">7mo ago</span></span></span> </span><span class="sc-jIZahH sc-himrzO sc-fXynhf kdTVzc glCpMy ctwKCt"><span class="sc-bPPhlf hfaBPJ"><a href="/code/pmarcelino/comprehensive-data-exploration-with-python/comments" class="sc-dPyBCJ sc-bBXxYQ sc-bOJcbE cSRCiy cFEurs gTFrUa">1819 comments</a> · <span class="sc-ibQCxQ idHgMS"><span class="sc-cKajLJ jNrpDQ">House Prices - Advanced Regression Techniques</span></span></span></span></div></a><div class="sc-gFGZVQ jDMEwY sc-yTtWT kdALiC"><div class="sc-dICTr dlQsbO"><button mode="default" data-testid="upvotebutton__upvote" aria-label="Upvote" class="sc-dNezTh sc-lkcIho cSGKPD cTyEVx"><i class="rmwc-icon rmwc-icon--ligature google-material-icons sc-gKXOVf jWACgA" sizevalue="18px">arrow_drop_up</i></button><span mode="default" class="sc-gXmSlM sc-cCsOjp sc-hAGLhy cKhlzA piYDj mWvOY">12770</span></div><span class="sc-dXxSUK cHoXAr"><span class="sc-jIZahH sc-himrzO sc-hRwTwm kdTVzc glCpMy IISDK"><img role="presentation" alt="" src="/static/images/medals/competitions/[email protected]" style="height: 9px; width: 9px;"> Gold</span><div class="mdc-menu-surface--anchor"><button aria-label="more_horiz" class="sc-jSMfEi eiMRSN sc-bgrGEg hPFZMI google-material-icons">more_horiz</button></div></span></div></div><div class="sc-lbxAil LkNdN"></div></li>
import pandas as pd
import requests
from bs4 import BeautifulSoup
headers = {'User-Agent':'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.94 Safari/537.36'}
url = "https://www.kaggle.com/code?sortBy=voteCount&page=1"
req = requests.get(url, headers = headers)
soup = BeautifulSoup(req.text, 'html.parser')
html_content = soup.find_all('li', attrs = {'class': 'sc-jfmDQi hfJycS'})
data = []
for elements in html_content:
data.append({
'title': elements.find("div", {"class": "sc-iBkjds sc-fLlhyt sc-fbPSWO uVZhN izULIq A-dENW"}).text,
'stars': elements.find("span", {"class": "sc-gXmSlM sc-cCsOjp sc-hAGLhy cKhlzA piYDj mWvOY"}).text,
'resume': elements.find("span", {"class": "sc-cKajLJ jNrpDQ"}).text,
'comments': elements.find("span", {"class": "sc-dPyBCJ sc-bBXxYQ sc-bOJcbE cSRCiy cFEurs gTFrUa"}).text,
'link': elements.get('href')})
print(data)
My output:
[]
A:
The page loads data dynamically using an API, so you don't see that data when you try to fetch the page with requests. You need to figure out which query provides the information and what data is required to get it. I made a small example where we get the necessary tokens and information via the API. Then change the page variable to get new ids.
import requests
import pandas as pd
import json
def get_token():
url = 'https://www.kaggle.com/code?sortBy=voteCount&page=1'
response = requests.get(url)
return response.cookies.get_dict()['CSRF-TOKEN'], response.cookies.get_dict()['XSRF-TOKEN']
def get_kernel_ids(csrf_token: str, xsrf_token: str, page: int):
url = "https://www.kaggle.com/api/i/kernels.KernelsService/ListKernelIds"
payload = json.dumps({
"sortBy": "VOTE_COUNT",
"pageSize": 100,
"group": "EVERYONE",
"page": page,
"tagIds": "",
"excludeResultsFilesOutputs": False,
"wantOutputFiles": False,
"excludeKernelIds": []
})
headers = {
'accept': 'application/json',
'content-type': 'application/json',
'cookie': f'CSRF-TOKEN={csrf_token}',
'x-xsrf-token': xsrf_token
}
response = requests.post(url, headers=headers, data=payload)
return response.json()['kernelIds']
def get_info(csrf_token: str, xsrf_token: str):
url = "https://www.kaggle.com/api/i/kernels.KernelsService/GetKernelListDetails"
payload = json.dumps({
"deletedAccessBehavior": "RETURN_NOTHING",
"unauthorizedAccessBehavior": "RETURN_NOTHING",
"excludeResultsFilesOutputs": False,
"wantOutputFiles": False,
"kernelIds": get_kernel_ids(csrf_token, xsrf_token, 1),
"outputFileTypes": [],
"includeInvalidDataSources": False
})
headers = {
'accept': 'application/json',
'content-type': 'application/json',
'cookie': f'CSRF-TOKEN={csrf_token}',
'x-xsrf-token': xsrf_token
}
response = requests.post(url, headers=headers, data=payload)
data = []
for kernel in response.json()['kernels']:
data.append({
'title': kernel['title'],
'stars': kernel['totalVotes'],
'resume': kernel['dataSources'][0]['name'] if 'dataSources' in kernel else 'No attached data sources',
'comments': kernel['totalComments'],
'url': f'https://www.kaggle.com{kernel["scriptUrl"]}'
})
return data
csrf, xsrf = get_token()
df = pd.DataFrame(get_info(csrf, xsrf))
print(df.to_string())
OUTPUT:
title stars resume comments url
0 Comprehensive data exploration with Python 12772 House Prices - Advanced Regression Techniques 1819 https://www.kaggle.com/code/pmarcelino/comprehensive-data-exploration-with-python
1 Titanic Data Science Solutions 9374 Titanic - Machine Learning from Disaster 2316 https://www.kaggle.com/code/startupsci/titanic-data-science-solutions
2 Titanic Tutorial 8161 Titanic - Machine Learning from Disaster 26348 https://www.kaggle.com/code/alexisbcook/titanic-tutorial
3 Stacked Regressions : Top 4% on LeaderBoard 6749 House Prices - Advanced Regression Techniques 1075 https://www.kaggle.com/code/serigne/stacked-regressions-top-4-on-leaderboard
4 Introduction to CNN Keras - 0.997 (top 6%) 6438 Digit Recognizer 1002 https://www.kaggle.com/code/yassineghouzam/introduction-to-cnn-keras-0-997-top-6
5 Data ScienceTutorial for Beginners 6213 Pokemon- Weedle's Cave 1160 https://www.kaggle.com/code/kanncaa1/data-sciencetutorial-for-beginners
6 Hello, Python 5952 No attached data sources 329 https://www.kaggle.com/code/colinmorris/hello-python
7 Introduction to Ensembling/Stacking in Python 5657 Titanic - Machine Learning from Disaster 1031 https://www.kaggle.com/code/arthurtok/introduction-to-ensembling-stacking-in-python
8 How Models Work 5308 Mobile Price Classification 2 https://www.kaggle.com/code/dansbecker/how-models-work
9 A Data Science Framework: To Achieve 99% Accuracy 5266 Titanic - Machine Learning from Disaster 657 https://www.kaggle.com/code/ldfreeman3/a-data-science-framework-to-achieve-99-accuracy
10 Credit Fraud || Dealing with Imbalanced Datasets 4274 Credit Card Fraud Detection 629 https://www.kaggle.com/code/janiobachmann/credit-fraud-dealing-with-imbalanced-datasets
11 Exploring Survival on the Titanic 3735 Titanic - Machine Learning from Disaster 1041 https://www.kaggle.com/code/mrisdal/exploring-survival-on-the-titanic
12 Start Here: A Gentle Introduction 3372 Home Credit Default Risk 543 https://www.kaggle.com/code/willkoehrsen/start-here-a-gentle-introduction
13 Functions and Getting Help 2974 No attached data sources 145 https://www.kaggle.com/code/colinmorris/functions-and-getting-help
14 Your First Machine Learning Model 2913 Mobile Price Classification 386 https://www.kaggle.com/code/dansbecker/your-first-machine-learning-model
15 Exercise: Your First Machine Learning Model 2840 Melbourne Housing Snapshot 381 https://www.kaggle.com/code/yogeshtak/exercise-your-first-machine-learning-model
16 Titanic Top 4% with ensemble modeling 2724 Titanic - Machine Learning from Disaster 408 https://www.kaggle.com/code/yassineghouzam/titanic-top-4-with-ensemble-modeling
17 Model Validation 2655 Mobile Price Classification 5 https://www.kaggle.com/code/dansbecker/model-validation
18 EDA To Prediction(DieTanic) 2544 Titanic - Machine Learning from Disaster 342 https://www.kaggle.com/code/ash316/eda-to-prediction-dietanic
19 Winning solutions of kaggle competitions 2527 [Private Datasource] 207 https://www.kaggle.com/code/sudalairajkumar/winning-solutions-of-kaggle-competitions
20 Booleans and Conditionals 2501 No attached data sources 75 https://www.kaggle.com/code/colinmorris/booleans-and-conditionals
21 Underfitting and Overfitting 2500 Mobile Price Classification 7 https://www.kaggle.com/code/dansbecker/underfitting-and-overfitting
22 Getting Started with Kaggle 2420 No attached data sources 4265 https://www.kaggle.com/code/alexisbcook/getting-started-with-kaggle
23 Full Preprocessing Tutorial 2366 Data Science Bowl 2017 490 https://www.kaggle.com/code/gzuidhof/full-preprocessing-tutorial
24 Machine Learning Tutorial for Beginners 2333 Biomechanical features of orthopedic patients 292 https://www.kaggle.com/code/kanncaa1/machine-learning-tutorial-for-beginners
25 Everything you can do with a time series 2330 DJIA 30 Stock Time Series 171 https://www.kaggle.com/code/thebrownviking20/everything-you-can-do-with-a-time-series
26 Deep Learning Tutorial for Beginners 2296 Sign Language Digits Dataset 248 https://www.kaggle.com/code/kanncaa1/deep-learning-tutorial-for-beginners
27 Getting staRted in R: First Steps 2216 Chocolate Bar Ratings 102 https://www.kaggle.com/code/rtatman/getting-started-in-r-first-steps
28 Approaching (Almost) Any NLP Problem on Kaggle 2169 glove.840B.300d.txt 231 https://www.kaggle.com/code/abhishek/approaching-almost-any-nlp-problem-on-kaggle
29 Data Science Glossary on Kaggle 2125 [Private Datasource] 229 https://www.kaggle.com/code/shivamb/data-science-glossary-on-kaggle
30 Coronavirus (COVID-19) Visualization & Prediction 2069 No attached data sources 691 https://www.kaggle.com/code/therealcyberlord/coronavirus-covid-19-visualization-prediction
31 Creating, Reading and Writing 2063 18,393 Pitchfork Reviews 95 https://www.kaggle.com/code/residentmario/creating-reading-and-writing
32 Dive into dplyr (tutorial #1) 1828 Palmer Archipelago (Antarctica) penguin data 134 https://www.kaggle.com/code/jessemostipak/dive-into-dplyr-tutorial-1
33 Back to (predict) the future - Interactive M5 EDA 1820 US Natural Disaster Declarations 322 https://www.kaggle.com/code/headsortails/back-to-predict-the-future-interactive-m5-eda
34 COVID-19 - Analysis, Visualization & Comparisons 1773 World Happiness Report 359 https://www.kaggle.com/code/imdevskp/covid-19-analysis-visualization-comparisons
35 Time series Basics : Exploring traditional TS 1767 Predict Future Sales 174 https://www.kaggle.com/code/jagangupta/time-series-basics-exploring-traditional-ts
36 Titanic - Advanced Feature Engineering Tutorial 1753 Titanic - Machine Learning from Disaster 322 https://www.kaggle.com/code/gunesevitan/titanic-advanced-feature-engineering-tutorial
37 Regularized Linear Models 1721 House Prices - Advanced Regression Techniques 336 https://www.kaggle.com/code/apapiu/regularized-linear-models
38 Random Forests 1711 Mobile Price Classification 2 https://www.kaggle.com/code/dansbecker/random-forests
39 Deep Learning For NLP: Zero To Transformers & BERT 1682 glove.840B.300d.txt 89 https://www.kaggle.com/code/tanulsingh077/deep-learning-for-nlp-zero-to-transformers-bert
40 Feature Selection and Data Visualization 1681 Breast Cancer Wisconsin (Diagnostic) Data Set 304 https://www.kaggle.com/code/kanncaa1/feature-selection-and-data-visualization
41 Is it a bird? Creating a model from your own data 1670 No attached data sources 32 https://www.kaggle.com/code/jhoward/is-it-a-bird-creating-a-model-from-your-own-data
42 Lists 1651 No attached data sources 60 https://www.kaggle.com/code/colinmorris/lists
43 Submitting From A Kernel 1650 House Prices - Advanced Regression Techniques 491 https://www.kaggle.com/code/dansbecker/submitting-from-a-kernel
44 Basic Data Exploration 1552 Mobile Price Classification 8 https://www.kaggle.com/code/dansbecker/basic-data-exploration
45 Indexing, Selecting & Assigning 1494 18,393 Pitchfork Reviews 121 https://www.kaggle.com/code/residentmario/indexing-selecting-assigning
46 Getting Started with a Movie Recommendation System 1482 TMDB 5000 Movie Dataset 181 https://www.kaggle.com/code/ibtesama/getting-started-with-a-movie-recommendation-system
47 House prices: Lasso, XGBoost, and a detailed EDA 1470 House Prices - Advanced Regression Techniques 255 https://www.kaggle.com/code/erikbruin/house-prices-lasso-xgboost-and-a-detailed-eda
48 Twitter sentiment Extaction-Analysis,EDA and Model 1440 [Private Datasource] 122 https://www.kaggle.com/code/tanulsingh077/twitter-sentiment-extaction-analysis-eda-and-model
49 Loops and List Comprehensions 1436 No attached data sources 75 https://www.kaggle.com/code/colinmorris/loops-and-list-comprehensions
50 Exploratory Analysis - Zillow 1428 Zillow Prize: Zillow’s Home Value Prediction (Zestimate) 169 https://www.kaggle.com/code/philippsp/exploratory-analysis-zillow
51 Data Analysis & XGBoost Starter (0.35460 LB) 1414 Quora Question Pairs 168 https://www.kaggle.com/code/anokas/data-analysis-xgboost-starter-0-35460-lb
52 A Statistical Analysis & ML workflow of Titanic 1411 Titanic - Machine Learning from Disaster 321 https://www.kaggle.com/code/masumrumi/a-statistical-analysis-ml-workflow-of-titanic
53 A Journey through Titanic 1399 Titanic - Machine Learning from Disaster 417 https://www.kaggle.com/code/omarelgabry/a-journey-through-titanic
54 Plotly Tutorial for Beginners 1370 World University Rankings 143 https://www.kaggle.com/code/kanncaa1/plotly-tutorial-for-beginners
55 Tutorial on reading large datasets 1365 Riiid train data (multiple formats) 111 https://www.kaggle.com/code/rohanrao/tutorial-on-reading-large-datasets
56 Python Data Visualizations 1365 Iris Species 162 https://www.kaggle.com/code/benhamner/python-data-visualizations
57 Working with External Libraries 1353 No attached data sources 53 https://www.kaggle.com/code/colinmorris/working-with-external-libraries
58 Strings and Dictionaries 1346 No attached data sources 52 https://www.kaggle.com/code/colinmorris/strings-and-dictionaries
59 Explore Your Data 1329 home data for ml course 237 https://www.kaggle.com/code/dansbecker/explore-your-data
60 Handling Missing Values 1324 Melbourne Housing Market 440 https://www.kaggle.com/code/dansbecker/handling-missing-values
61 Basic EDA,Cleaning and GloVe 1311 GloVe: Global Vectors for Word Representation 162 https://www.kaggle.com/code/shahules/basic-eda-cleaning-and-glove
62 Pytorch Tutorial for Deep Learning Lovers 1310 Digit Recognizer 122 https://www.kaggle.com/code/kanncaa1/pytorch-tutorial-for-deep-learning-lovers
63 EDA is Fun! 1298 PUBG Finish Placement Prediction (Kernels Only) 186 https://www.kaggle.com/code/deffro/eda-is-fun
64 NLP with Disaster Tweets - EDA, Cleaning and BERT 1278 Pickled glove.840B.300d 209 https://www.kaggle.com/code/gunesevitan/nlp-with-disaster-tweets-eda-cleaning-and-bert
65 Data Cleaning Challenge: Handling missing values 1260 Detailed NFL Play-by-Play Data 2009-2018 377 https://www.kaggle.com/code/rtatman/data-cleaning-challenge-handling-missing-values
66 Titanic Survival Predictions (Beginner) 1252 Titanic - Machine Learning from Disaster 272 https://www.kaggle.com/code/nadintamer/titanic-survival-predictions-beginner
67 Santander EDA and Prediction 1215 Santander Customer Transaction Prediction 189 https://www.kaggle.com/code/gpreda/santander-eda-and-prediction
68 Titanic best working Classifier 1181 Titanic - Machine Learning from Disaster 195 https://www.kaggle.com/code/sinakhorami/titanic-best-working-classifier
69 Data Science for tabular data: Advanced Techniques 1143 No Data Sources 101 https://www.kaggle.com/code/vbmokin/data-science-for-tabular-data-advanced-techniques
70 Keras U-Net starter - LB 0.277 1120 2018 Data Science Bowl 163 https://www.kaggle.com/code/keegil/keras-u-net-starter-lb-0-277
71 Simple Exploration Notebook - Zillow Prize 1104 Zillow Prize: Zillow’s Home Value Prediction (Zestimate) 131 https://www.kaggle.com/code/sudalairajkumar/simple-exploration-notebook-zillow-prize
72 EDA and models 1096 IEEE-CIS Fraud Detection 218 https://www.kaggle.com/code/artgor/eda-and-models
73 How to: Preprocessing when using embeddings 1094 Quora Insincere Questions Classification 102 https://www.kaggle.com/code/christofhenkel/how-to-preprocessing-when-using-embeddings
74 COVID-19 Literature Clustering 1087 COVID-19 Open Research Dataset Challenge (CORD-19) 232 https://www.kaggle.com/code/maksimeren/covid-19-literature-clustering
75 Head Start for Data Scientist 1086 Titanic - Machine Learning from Disaster 233 https://www.kaggle.com/code/hiteshp/head-start-for-data-scientist
76 Be my guest - Recruit Restaurant EDA 1079 Weather Data for Recruit Restaurant Competition 237 https://www.kaggle.com/code/headsortails/be-my-guest-recruit-restaurant-eda
77 NB-SVM strong linear baseline 1070 Toxic Comment Classification Challenge 152 https://www.kaggle.com/code/jhoward/nb-svm-strong-linear-baseline
78 Seaborn Tutorial for Beginners 1070 Fatal Police Shootings in the US 182 https://www.kaggle.com/code/kanncaa1/seaborn-tutorial-for-beginners
79 A look at different embeddings.! 1057 Quora Insincere Questions Classification 111 https://www.kaggle.com/code/sudalairajkumar/a-look-at-different-embeddings
80 Deep Neural Network Keras way 1053 Digit Recognizer 201 https://www.kaggle.com/code/poonaml/deep-neural-network-keras-way
81 Simple Exploration+Baseline - GA Customer Revenue 1044 Google Analytics Customer Revenue Prediction 162 https://www.kaggle.com/code/sudalairajkumar/simple-exploration-baseline-ga-customer-revenue
82 Simple Matplotlib & Visualization Tips 1040 Students Performance in Exams 148 https://www.kaggle.com/code/subinium/simple-matplotlib-visualization-tips
83 Speech representation and data exploration 1031 TensorFlow Speech Recognition Challenge 110 https://www.kaggle.com/code/davids1992/speech-representation-and-data-exploration
84 Feature engineering, xgboost 1023 Predict Future Sales 240 https://www.kaggle.com/code/dlarionov/feature-engineering-xgboost
85 XGBoost 1018 House Prices - Advanced Regression Techniques 12 https://www.kaggle.com/code/dansbecker/xgboost
86 Stock Market Analysis + Prediction using LSTM 1015 Tesla Stock Price 190 https://www.kaggle.com/code/faressayah/stock-market-analysis-prediction-using-lstm
87 Two Sigma News Official Getting Started Kernel 994 Two Sigma: Using News to Predict Stock Movements 142 https://www.kaggle.com/code/dster/two-sigma-news-official-getting-started-kernel
88 EDA, feature engineering and everything 984 Two Sigma: Using News to Predict Stock Movements 171 https://www.kaggle.com/code/artgor/eda-feature-engineering-and-everything
89 A study on Regression applied to the Ames dataset 979 House Prices - Advanced Regression Techniques 139 https://www.kaggle.com/code/juliencs/a-study-on-regression-applied-to-the-ames-dataset
90 Customer Segmentation 976 E-Commerce Data 72 https://www.kaggle.com/code/fabiendaniel/customer-segmentation
91 How I made top 0.3% on a Kaggle competition 975 Kernel Files 124 https://www.kaggle.com/code/lavanyashukla01/how-i-made-top-0-3-on-a-kaggle-competition
92 Extensive EDA and Modeling XGB Hyperopt 972 IEEE-CIS Fraud Detection 187 https://www.kaggle.com/code/kabure/extensive-eda-and-modeling-xgb-hyperopt
93 Summary Functions and Maps 965 18,393 Pitchfork Reviews 132 https://www.kaggle.com/code/residentmario/summary-functions-and-maps
94 NovelAI with webUI(Stable Diffusion version) 955 NovelAi-model-pruned 7 https://www.kaggle.com/code/inmine/novelai-with-webui-stable-diffusion-version
95 Customer Segmentation: Clustering ️ 934 Customer Personality Analysis 256 https://www.kaggle.com/code/karnikakapoor/customer-segmentation-clustering
96 Introduction to financial concepts and data 934 Optiver Realized Volatility Prediction 130 https://www.kaggle.com/code/jiashenliu/introduction-to-financial-concepts-and-data
97 Resampling strategies for imbalanced datasets 934 Porto Seguro’s Safe Driver Prediction 80 https://www.kaggle.com/code/rafjaa/resampling-strategies-for-imbalanced-datasets
98 Jane Street: EDA of day 0 and feature importance 932 Jane Street Market Prediction 151 https://www.kaggle.com/code/carlmcbrideellis/jane-street-eda-of-day-0-and-feature-importance
99 10 Simple hacks to speed up your Data Analysis 929 Titanic - Machine Learning from Disaster 204 https://www.kaggle.com/code/parulpandey/10-simple-hacks-to-speed-up-your-data-analysis
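To collect more than the first page, one option is to loop over pages and concatenate the results; this sketch assumes get_info is changed to accept the page number and pass it through to get_kernel_ids (in the code above page 1 is hard-coded):
import pandas as pd

csrf, xsrf = get_token()

frames = []
for page in range(1, 4):  # pages 1..3, adjust as needed
    frames.append(pd.DataFrame(get_info(csrf, xsrf, page)))

df_all = pd.concat(frames, ignore_index=True)
print(len(df_all))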
| Scrape all elements inside a li tag | I'm trying to scrape some information from a Kaggle page. All the elements I'm looking for are in <ul role="list" class="km-list km-list--three-line">, and each element is wrapped in a <li role="listitem" class="sc-jfmDQi hfJycS">. I'm trying to scrape all of these elements from the page. Here is an HTML example of a page element:
<ul role="list" class="km-list km-list--three-line"><li role="listitem" class="sc-jfmDQi hfJycS"><div class="sc-eKszNL ktNGam"><div class="sc-hiMGwR GvHYb sc-czGAKf ffgKfy"><div class="sc-ehmTmK bMKNkA"><a href="/pmarcelino" target="_blank" class="sc-kgUAyh eqNroj" aria-label="Pedro Marcelino, PhD"><div data-testid="avatar-image" title="Pedro Marcelino, PhD" class="sc-hTtwUo eLCpfL" style="background-image: url("https://storage.googleapis.com/kaggle-avatars/thumbnails/175415-gr.jpg");"></div><svg width="64" height="64" viewBox="0 0 64 64"><circle r="30.5" cx="32" cy="32" fill="none" stroke-width="3" style="stroke: rgb(241, 243, 244);"></circle><path d="M 49.92745019492043 56.6750183284359 A 30.5 30.5 0 0 0 32 1.5" fill="none" stroke-width="3" style="stroke: rgb(32, 190, 255);"></path></svg></a></div></div><a class="sc-lbOyJj eeGduD sc-jFAmCJ fNVSOc" href="/code/pmarcelino/comprehensive-data-exploration-with-python"><div class="sc-ckMVTt jHrWZQ"><div class="sc-iBkjds sc-fLlhyt sc-fbPSWO uVZhN izULIq A-dENW">Comprehensive data exploration with Python</div><span class="sc-jIZahH sc-himrzO sc-fXynhf kdTVzc glCpMy ctwKCt"><span><span>Updated <span title="Sat Apr 30 2022 21:20:37 GMT+0200 (heure d’été d’Europe centrale)" aria-label="7 months ago">7mo ago</span></span></span> </span><span class="sc-jIZahH sc-himrzO sc-fXynhf kdTVzc glCpMy ctwKCt"><span class="sc-bPPhlf hfaBPJ"><a href="/code/pmarcelino/comprehensive-data-exploration-with-python/comments" class="sc-dPyBCJ sc-bBXxYQ sc-bOJcbE cSRCiy cFEurs gTFrUa">1819 comments</a> · <span class="sc-ibQCxQ idHgMS"><span class="sc-cKajLJ jNrpDQ">House Prices - Advanced Regression Techniques</span></span></span></span></div></a><div class="sc-gFGZVQ jDMEwY sc-yTtWT kdALiC"><div class="sc-dICTr dlQsbO"><button mode="default" data-testid="upvotebutton__upvote" aria-label="Upvote" class="sc-dNezTh sc-lkcIho cSGKPD cTyEVx"><i class="rmwc-icon rmwc-icon--ligature google-material-icons sc-gKXOVf jWACgA" sizevalue="18px">arrow_drop_up</i></button><span mode="default" class="sc-gXmSlM sc-cCsOjp sc-hAGLhy cKhlzA piYDj mWvOY">12770</span></div><span class="sc-dXxSUK cHoXAr"><span class="sc-jIZahH sc-himrzO sc-hRwTwm kdTVzc glCpMy IISDK"><img role="presentation" alt="" src="/static/images/medals/competitions/[email protected]" style="height: 9px; width: 9px;"> Gold</span><div class="mdc-menu-surface--anchor"><button aria-label="more_horiz" class="sc-jSMfEi eiMRSN sc-bgrGEg hPFZMI google-material-icons">more_horiz</button></div></span></div></div><div class="sc-lbxAil LkNdN"></div></li>
import pandas as pd
import requests
from bs4 import BeautifulSoup
headers = {'User-Agent':'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.94 Safari/537.36'}
url = "https://www.kaggle.com/code?sortBy=voteCount&page=1"
req = requests.get(url, headers = headers)
soup = BeautifulSoup(req.text, 'html.parser')
html_content = soup.find_all('li', attrs = {'class': 'sc-jfmDQi hfJycS'})
data = []
for elements in html_content:
data.append({
'title': elements.find("div", {"class": "sc-iBkjds sc-fLlhyt sc-fbPSWO uVZhN izULIq A-dENW"}).text,
'stars': elements.find("span", {"class": "sc-gXmSlM sc-cCsOjp sc-hAGLhy cKhlzA piYDj mWvOY"}).text,
'resume': elements.find("span", {"class": "sc-cKajLJ jNrpDQ"}).text,
'comments': elements.find("span", {"class": "sc-dPyBCJ sc-bBXxYQ sc-bOJcbE cSRCiy cFEurs gTFrUa"}).text,
'link': elements.get('href')})
print(data)
My output:
[]
| [
"The page loads data dynamically using an API, so you don't see that data when u try to get it. U need to figure out which query provides the information and what data is required to get it. I made a small example where we get the necessary tokens and information via api. Next change page varible to get new ids.\nimport requests\nimport pandas as pd\nimport json\n\n\ndef get_token():\n url = 'https://www.kaggle.com/code?sortBy=voteCount&page=1'\n response = requests.get(url)\n return response.cookies.get_dict()['CSRF-TOKEN'], response.cookies.get_dict()['XSRF-TOKEN']\n\n\ndef get_kernel_ids(csrf_token: str, xsrf_token: str, page: int):\n url = \"https://www.kaggle.com/api/i/kernels.KernelsService/ListKernelIds\"\n payload = json.dumps({\n \"sortBy\": \"VOTE_COUNT\",\n \"pageSize\": 100,\n \"group\": \"EVERYONE\",\n \"page\": page,\n \"tagIds\": \"\",\n \"excludeResultsFilesOutputs\": False,\n \"wantOutputFiles\": False,\n \"excludeKernelIds\": []\n })\n headers = {\n 'accept': 'application/json',\n 'content-type': 'application/json',\n 'cookie': f'CSRF-TOKEN={csrf_token}',\n 'x-xsrf-token': xsrf_token\n }\n response = requests.post(url, headers=headers, data=payload)\n return response.json()['kernelIds']\n\n\ndef get_info(csrf_token: str, xsrf_token: str):\n url = \"https://www.kaggle.com/api/i/kernels.KernelsService/GetKernelListDetails\"\n payload = json.dumps({\n \"deletedAccessBehavior\": \"RETURN_NOTHING\",\n \"unauthorizedAccessBehavior\": \"RETURN_NOTHING\",\n \"excludeResultsFilesOutputs\": False,\n \"wantOutputFiles\": False,\n \"kernelIds\": get_kernel_ids(csrf_token, xsrf_token, 1),\n \"outputFileTypes\": [],\n \"includeInvalidDataSources\": False\n })\n headers = {\n 'accept': 'application/json',\n 'content-type': 'application/json',\n 'cookie': f'CSRF-TOKEN={csrf_token}',\n 'x-xsrf-token': xsrf_token\n }\n response = requests.post(url, headers=headers, data=payload)\n data = []\n for kernel in response.json()['kernels']:\n data.append({\n 'title': kernel['title'],\n 'stars': kernel['totalVotes'],\n 'resume': kernel['dataSources'][0]['name'] if 'dataSources' in kernel else 'No attached data sources',\n 'comments': kernel['totalComments'],\n 'url': f'https://www.kaggle.com{kernel[\"scriptUrl\"]}'\n })\n return data\n\n\ncsrf, xsrf = get_token()\ndf = pd.DataFrame(get_info(csrf, xsrf))\nprint(df.to_string())\n\nOUTPUT:\n title stars resume comments url\n0 Comprehensive data exploration with Python 12772 House Prices - Advanced Regression Techniques 1819 https://www.kaggle.com/code/pmarcelino/comprehensive-data-exploration-with-python\n1 Titanic Data Science Solutions 9374 Titanic - Machine Learning from Disaster 2316 https://www.kaggle.com/code/startupsci/titanic-data-science-solutions\n2 Titanic Tutorial 8161 Titanic - Machine Learning from Disaster 26348 https://www.kaggle.com/code/alexisbcook/titanic-tutorial\n3 Stacked Regressions : Top 4% on LeaderBoard 6749 House Prices - Advanced Regression Techniques 1075 https://www.kaggle.com/code/serigne/stacked-regressions-top-4-on-leaderboard\n4 Introduction to CNN Keras - 0.997 (top 6%) 6438 Digit Recognizer 1002 https://www.kaggle.com/code/yassineghouzam/introduction-to-cnn-keras-0-997-top-6\n5 Data ScienceTutorial for Beginners 6213 Pokemon- Weedle's Cave 1160 https://www.kaggle.com/code/kanncaa1/data-sciencetutorial-for-beginners\n6 Hello, Python 5952 No attached data sources 329 https://www.kaggle.com/code/colinmorris/hello-python\n7 Introduction to Ensembling/Stacking in Python 5657 Titanic - Machine Learning from Disaster 1031 
https://www.kaggle.com/code/arthurtok/introduction-to-ensembling-stacking-in-python\n8 How Models Work 5308 Mobile Price Classification 2 https://www.kaggle.com/code/dansbecker/how-models-work\n9 A Data Science Framework: To Achieve 99% Accuracy 5266 Titanic - Machine Learning from Disaster 657 https://www.kaggle.com/code/ldfreeman3/a-data-science-framework-to-achieve-99-accuracy\n10 Credit Fraud || Dealing with Imbalanced Datasets 4274 Credit Card Fraud Detection 629 https://www.kaggle.com/code/janiobachmann/credit-fraud-dealing-with-imbalanced-datasets\n11 Exploring Survival on the Titanic 3735 Titanic - Machine Learning from Disaster 1041 https://www.kaggle.com/code/mrisdal/exploring-survival-on-the-titanic\n12 Start Here: A Gentle Introduction 3372 Home Credit Default Risk 543 https://www.kaggle.com/code/willkoehrsen/start-here-a-gentle-introduction\n13 Functions and Getting Help 2974 No attached data sources 145 https://www.kaggle.com/code/colinmorris/functions-and-getting-help\n14 Your First Machine Learning Model 2913 Mobile Price Classification 386 https://www.kaggle.com/code/dansbecker/your-first-machine-learning-model\n15 Exercise: Your First Machine Learning Model 2840 Melbourne Housing Snapshot 381 https://www.kaggle.com/code/yogeshtak/exercise-your-first-machine-learning-model\n16 Titanic Top 4% with ensemble modeling 2724 Titanic - Machine Learning from Disaster 408 https://www.kaggle.com/code/yassineghouzam/titanic-top-4-with-ensemble-modeling\n17 Model Validation 2655 Mobile Price Classification 5 https://www.kaggle.com/code/dansbecker/model-validation\n18 EDA To Prediction(DieTanic) 2544 Titanic - Machine Learning from Disaster 342 https://www.kaggle.com/code/ash316/eda-to-prediction-dietanic\n19 Winning solutions of kaggle competitions 2527 [Private Datasource] 207 https://www.kaggle.com/code/sudalairajkumar/winning-solutions-of-kaggle-competitions\n20 Booleans and Conditionals 2501 No attached data sources 75 https://www.kaggle.com/code/colinmorris/booleans-and-conditionals\n21 Underfitting and Overfitting 2500 Mobile Price Classification 7 https://www.kaggle.com/code/dansbecker/underfitting-and-overfitting\n22 Getting Started with Kaggle 2420 No attached data sources 4265 https://www.kaggle.com/code/alexisbcook/getting-started-with-kaggle\n23 Full Preprocessing Tutorial 2366 Data Science Bowl 2017 490 https://www.kaggle.com/code/gzuidhof/full-preprocessing-tutorial\n24 Machine Learning Tutorial for Beginners 2333 Biomechanical features of orthopedic patients 292 https://www.kaggle.com/code/kanncaa1/machine-learning-tutorial-for-beginners\n25 Everything you can do with a time series 2330 DJIA 30 Stock Time Series 171 https://www.kaggle.com/code/thebrownviking20/everything-you-can-do-with-a-time-series\n26 Deep Learning Tutorial for Beginners 2296 Sign Language Digits Dataset 248 https://www.kaggle.com/code/kanncaa1/deep-learning-tutorial-for-beginners\n27 Getting staRted in R: First Steps 2216 Chocolate Bar Ratings 102 https://www.kaggle.com/code/rtatman/getting-started-in-r-first-steps\n28 Approaching (Almost) Any NLP Problem on Kaggle 2169 glove.840B.300d.txt 231 https://www.kaggle.com/code/abhishek/approaching-almost-any-nlp-problem-on-kaggle\n29 Data Science Glossary on Kaggle 2125 [Private Datasource] 229 https://www.kaggle.com/code/shivamb/data-science-glossary-on-kaggle\n30 Coronavirus (COVID-19) Visualization & Prediction 2069 No attached data sources 691 https://www.kaggle.com/code/therealcyberlord/coronavirus-covid-19-visualization-prediction\n31 Creating, 
Reading and Writing 2063 18,393 Pitchfork Reviews 95 https://www.kaggle.com/code/residentmario/creating-reading-and-writing\n32 Dive into dplyr (tutorial #1) 1828 Palmer Archipelago (Antarctica) penguin data 134 https://www.kaggle.com/code/jessemostipak/dive-into-dplyr-tutorial-1\n33 Back to (predict) the future - Interactive M5 EDA 1820 US Natural Disaster Declarations 322 https://www.kaggle.com/code/headsortails/back-to-predict-the-future-interactive-m5-eda\n34 COVID-19 - Analysis, Visualization & Comparisons 1773 World Happiness Report 359 https://www.kaggle.com/code/imdevskp/covid-19-analysis-visualization-comparisons\n35 Time series Basics : Exploring traditional TS 1767 Predict Future Sales 174 https://www.kaggle.com/code/jagangupta/time-series-basics-exploring-traditional-ts\n36 Titanic - Advanced Feature Engineering Tutorial 1753 Titanic - Machine Learning from Disaster 322 https://www.kaggle.com/code/gunesevitan/titanic-advanced-feature-engineering-tutorial\n37 Regularized Linear Models 1721 House Prices - Advanced Regression Techniques 336 https://www.kaggle.com/code/apapiu/regularized-linear-models\n38 Random Forests 1711 Mobile Price Classification 2 https://www.kaggle.com/code/dansbecker/random-forests\n39 Deep Learning For NLP: Zero To Transformers & BERT 1682 glove.840B.300d.txt 89 https://www.kaggle.com/code/tanulsingh077/deep-learning-for-nlp-zero-to-transformers-bert\n40 Feature Selection and Data Visualization 1681 Breast Cancer Wisconsin (Diagnostic) Data Set 304 https://www.kaggle.com/code/kanncaa1/feature-selection-and-data-visualization\n41 Is it a bird? Creating a model from your own data 1670 No attached data sources 32 https://www.kaggle.com/code/jhoward/is-it-a-bird-creating-a-model-from-your-own-data\n42 Lists 1651 No attached data sources 60 https://www.kaggle.com/code/colinmorris/lists\n43 Submitting From A Kernel 1650 House Prices - Advanced Regression Techniques 491 https://www.kaggle.com/code/dansbecker/submitting-from-a-kernel\n44 Basic Data Exploration 1552 Mobile Price Classification 8 https://www.kaggle.com/code/dansbecker/basic-data-exploration\n45 Indexing, Selecting & Assigning 1494 18,393 Pitchfork Reviews 121 https://www.kaggle.com/code/residentmario/indexing-selecting-assigning\n46 Getting Started with a Movie Recommendation System 1482 TMDB 5000 Movie Dataset 181 https://www.kaggle.com/code/ibtesama/getting-started-with-a-movie-recommendation-system\n47 House prices: Lasso, XGBoost, and a detailed EDA 1470 House Prices - Advanced Regression Techniques 255 https://www.kaggle.com/code/erikbruin/house-prices-lasso-xgboost-and-a-detailed-eda\n48 Twitter sentiment Extaction-Analysis,EDA and Model 1440 [Private Datasource] 122 https://www.kaggle.com/code/tanulsingh077/twitter-sentiment-extaction-analysis-eda-and-model\n49 Loops and List Comprehensions 1436 No attached data sources 75 https://www.kaggle.com/code/colinmorris/loops-and-list-comprehensions\n50 Exploratory Analysis - Zillow 1428 Zillow Prize: Zillow’s Home Value Prediction (Zestimate) 169 https://www.kaggle.com/code/philippsp/exploratory-analysis-zillow\n51 Data Analysis & XGBoost Starter (0.35460 LB) 1414 Quora Question Pairs 168 https://www.kaggle.com/code/anokas/data-analysis-xgboost-starter-0-35460-lb\n52 A Statistical Analysis & ML workflow of Titanic 1411 Titanic - Machine Learning from Disaster 321 https://www.kaggle.com/code/masumrumi/a-statistical-analysis-ml-workflow-of-titanic\n53 A Journey through Titanic 1399 Titanic - Machine Learning from Disaster 417 
https://www.kaggle.com/code/omarelgabry/a-journey-through-titanic\n54 Plotly Tutorial for Beginners 1370 World University Rankings 143 https://www.kaggle.com/code/kanncaa1/plotly-tutorial-for-beginners\n55 Tutorial on reading large datasets 1365 Riiid train data (multiple formats) 111 https://www.kaggle.com/code/rohanrao/tutorial-on-reading-large-datasets\n56 Python Data Visualizations 1365 Iris Species 162 https://www.kaggle.com/code/benhamner/python-data-visualizations\n57 Working with External Libraries 1353 No attached data sources 53 https://www.kaggle.com/code/colinmorris/working-with-external-libraries\n58 Strings and Dictionaries 1346 No attached data sources 52 https://www.kaggle.com/code/colinmorris/strings-and-dictionaries\n59 Explore Your Data 1329 home data for ml course 237 https://www.kaggle.com/code/dansbecker/explore-your-data\n60 Handling Missing Values 1324 Melbourne Housing Market 440 https://www.kaggle.com/code/dansbecker/handling-missing-values\n61 Basic EDA,Cleaning and GloVe 1311 GloVe: Global Vectors for Word Representation 162 https://www.kaggle.com/code/shahules/basic-eda-cleaning-and-glove\n62 Pytorch Tutorial for Deep Learning Lovers 1310 Digit Recognizer 122 https://www.kaggle.com/code/kanncaa1/pytorch-tutorial-for-deep-learning-lovers\n63 EDA is Fun! 1298 PUBG Finish Placement Prediction (Kernels Only) 186 https://www.kaggle.com/code/deffro/eda-is-fun\n64 NLP with Disaster Tweets - EDA, Cleaning and BERT 1278 Pickled glove.840B.300d 209 https://www.kaggle.com/code/gunesevitan/nlp-with-disaster-tweets-eda-cleaning-and-bert\n65 Data Cleaning Challenge: Handling missing values 1260 Detailed NFL Play-by-Play Data 2009-2018 377 https://www.kaggle.com/code/rtatman/data-cleaning-challenge-handling-missing-values\n66 Titanic Survival Predictions (Beginner) 1252 Titanic - Machine Learning from Disaster 272 https://www.kaggle.com/code/nadintamer/titanic-survival-predictions-beginner\n67 Santander EDA and Prediction 1215 Santander Customer Transaction Prediction 189 https://www.kaggle.com/code/gpreda/santander-eda-and-prediction\n68 Titanic best working Classifier 1181 Titanic - Machine Learning from Disaster 195 https://www.kaggle.com/code/sinakhorami/titanic-best-working-classifier\n69 Data Science for tabular data: Advanced Techniques 1143 No Data Sources 101 https://www.kaggle.com/code/vbmokin/data-science-for-tabular-data-advanced-techniques\n70 Keras U-Net starter - LB 0.277 1120 2018 Data Science Bowl 163 https://www.kaggle.com/code/keegil/keras-u-net-starter-lb-0-277\n71 Simple Exploration Notebook - Zillow Prize 1104 Zillow Prize: Zillow’s Home Value Prediction (Zestimate) 131 https://www.kaggle.com/code/sudalairajkumar/simple-exploration-notebook-zillow-prize\n72 EDA and models 1096 IEEE-CIS Fraud Detection 218 https://www.kaggle.com/code/artgor/eda-and-models\n73 How to: Preprocessing when using embeddings 1094 Quora Insincere Questions Classification 102 https://www.kaggle.com/code/christofhenkel/how-to-preprocessing-when-using-embeddings\n74 COVID-19 Literature Clustering 1087 COVID-19 Open Research Dataset Challenge (CORD-19) 232 https://www.kaggle.com/code/maksimeren/covid-19-literature-clustering\n75 Head Start for Data Scientist 1086 Titanic - Machine Learning from Disaster 233 https://www.kaggle.com/code/hiteshp/head-start-for-data-scientist\n76 Be my guest - Recruit Restaurant EDA 1079 Weather Data for Recruit Restaurant Competition 237 https://www.kaggle.com/code/headsortails/be-my-guest-recruit-restaurant-eda\n77 NB-SVM strong linear baseline 1070 
Toxic Comment Classification Challenge 152 https://www.kaggle.com/code/jhoward/nb-svm-strong-linear-baseline\n78 Seaborn Tutorial for Beginners 1070 Fatal Police Shootings in the US 182 https://www.kaggle.com/code/kanncaa1/seaborn-tutorial-for-beginners\n79 A look at different embeddings.! 1057 Quora Insincere Questions Classification 111 https://www.kaggle.com/code/sudalairajkumar/a-look-at-different-embeddings\n80 Deep Neural Network Keras way 1053 Digit Recognizer 201 https://www.kaggle.com/code/poonaml/deep-neural-network-keras-way\n81 Simple Exploration+Baseline - GA Customer Revenue 1044 Google Analytics Customer Revenue Prediction 162 https://www.kaggle.com/code/sudalairajkumar/simple-exploration-baseline-ga-customer-revenue\n82 Simple Matplotlib & Visualization Tips 1040 Students Performance in Exams 148 https://www.kaggle.com/code/subinium/simple-matplotlib-visualization-tips\n83 Speech representation and data exploration 1031 TensorFlow Speech Recognition Challenge 110 https://www.kaggle.com/code/davids1992/speech-representation-and-data-exploration\n84 Feature engineering, xgboost 1023 Predict Future Sales 240 https://www.kaggle.com/code/dlarionov/feature-engineering-xgboost\n85 XGBoost 1018 House Prices - Advanced Regression Techniques 12 https://www.kaggle.com/code/dansbecker/xgboost\n86 Stock Market Analysis + Prediction using LSTM 1015 Tesla Stock Price 190 https://www.kaggle.com/code/faressayah/stock-market-analysis-prediction-using-lstm\n87 Two Sigma News Official Getting Started Kernel 994 Two Sigma: Using News to Predict Stock Movements 142 https://www.kaggle.com/code/dster/two-sigma-news-official-getting-started-kernel\n88 EDA, feature engineering and everything 984 Two Sigma: Using News to Predict Stock Movements 171 https://www.kaggle.com/code/artgor/eda-feature-engineering-and-everything\n89 A study on Regression applied to the Ames dataset 979 House Prices - Advanced Regression Techniques 139 https://www.kaggle.com/code/juliencs/a-study-on-regression-applied-to-the-ames-dataset\n90 Customer Segmentation 976 E-Commerce Data 72 https://www.kaggle.com/code/fabiendaniel/customer-segmentation\n91 How I made top 0.3% on a Kaggle competition 975 Kernel Files 124 https://www.kaggle.com/code/lavanyashukla01/how-i-made-top-0-3-on-a-kaggle-competition\n92 Extensive EDA and Modeling XGB Hyperopt 972 IEEE-CIS Fraud Detection 187 https://www.kaggle.com/code/kabure/extensive-eda-and-modeling-xgb-hyperopt\n93 Summary Functions and Maps 965 18,393 Pitchfork Reviews 132 https://www.kaggle.com/code/residentmario/summary-functions-and-maps\n94 NovelAI with webUI(Stable Diffusion version) 955 NovelAi-model-pruned 7 https://www.kaggle.com/code/inmine/novelai-with-webui-stable-diffusion-version\n95 Customer Segmentation: Clustering ️ 934 Customer Personality Analysis 256 https://www.kaggle.com/code/karnikakapoor/customer-segmentation-clustering\n96 Introduction to financial concepts and data 934 Optiver Realized Volatility Prediction 130 https://www.kaggle.com/code/jiashenliu/introduction-to-financial-concepts-and-data\n97 Resampling strategies for imbalanced datasets 934 Porto Seguro’s Safe Driver Prediction 80 https://www.kaggle.com/code/rafjaa/resampling-strategies-for-imbalanced-datasets\n98 Jane Street: EDA of day 0 and feature importance 932 Jane Street Market Prediction 151 https://www.kaggle.com/code/carlmcbrideellis/jane-street-eda-of-day-0-and-feature-importance\n99 10 Simple hacks to speed up your Data Analysis 929 Titanic - Machine Learning from Disaster 204 
https://www.kaggle.com/code/parulpandey/10-simple-hacks-to-speed-up-your-data-analysis\n\n"
] | [
3
] | [] | [] | [
"beautifulsoup",
"python",
"python_3.x",
"web_scraping"
] | stackoverflow_0074611861_beautifulsoup_python_python_3.x_web_scraping.txt |
Q:
Printing not happening in python pika microservice consumer inside a docker-compose service
I am trying to receive messages from my pika producer. I am following along with this tutorial: https://www.youtube.com/watch?v=0iB5IPoTDts.
I can see that when I manually run docker-compose exec backend bash and then run python consumer.py, I can receive messages and they are being logged to stdout through the print() function. However, when I add the following service to my docker-compose.yml, the container is not logging to stdout:
rabbitmq_queue:
build:
context: .
dockerfile: Dockerfile
command: 'python consumer.py && echo hello'
volumes:
- .:/app
depends_on:
- db
There is no error. When I change the command to echo hello, the container prints hello to stdout nicely. Why is my service not logging to stdout? Also, it does seem to be running properly - when I try to throw an exception, my service errors out. When it errors out, it also starts printing messages. It does not print error messages otherwise.
Is there a way to fix this issue?
consumer.py:
import pika
params = pika.URLParameters(
"hidden thing here"
)
connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue="main")
def callback(channel, method, properties, body):
print("[CONSUMER] Received message in main")
print(body)
# raise Exception()
channel.basic_consume(queue="main", on_message_callback=callback, auto_ack=True)
print("[CONSUMER] Started consuming")
channel.start_consuming()
channel.close()
A:
You need to actually start consuming messages. As it is right now you have a function called callback that never gets called (and is never referenced in your code).
channel.basic_consume(queue='main', on_message_callback=callback)
channel.start_consuming()
Add these lines at the end of your code. This will assign callback as the on message function and start_consuming will tell pika to start consuming the queue.
More detailed examples available here.
A:
Same problem: a working container with no output. I added PYTHONUNBUFFERED=1 to enable logging.
# Dockerfile
ENV PYTHONUNBUFFERED=1
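As an additional minimal sketch (my own addition, not from the original consumer code): you can also force flushing from the Python side, so the prints appear in docker logs even without the environment variable:
def callback(channel, method, properties, body):
    # flush=True pushes each line to stdout immediately, so it shows up in `docker logs`
    print("[CONSUMER] Received message in main", flush=True)
    print(body, flush=True)

Running the script as python -u consumer.py has the same unbuffered effect as setting PYTHONUNBUFFERED=1.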
| Printing not happening in python pika microservice consumer inside a docker-compose service | I am trying to receive messages from my pika producer. I am following along with this tutorial: https://www.youtube.com/watch?v=0iB5IPoTDts.
I can see that when I manually run docker-compose exec backend bash and then run python consumer.py, I can receive messages and they are being logged to stdout through the print() function. However, when I add the following service to my docker-compose.yml, the container is not logging to stdout:
rabbitmq_queue:
build:
context: .
dockerfile: Dockerfile
command: 'python consumer.py && echo hello'
volumes:
- .:/app
depends_on:
- db
There is no error. When I change the command to echo hello, the container prints hello to stdout nicely. Why is my service not logging to stdout? Also, it does seem to be running properly - when I try to throw an exception, my service errors out. When It errors out, it also starts printing messages. It does not print error messages otherwise.
Is there a way to fix this issue?
consumer.py:
import pika
params = pika.URLParameters(
"hidden thing here"
)
connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue="main")
def callback(channel, method, properties, body):
print("[CONSUMER] Received message in main")
print(body)
# raise Exception()
channel.basic_consume(queue="main", on_message_callback=callback, auto_ack=True)
print("[CONSUMER] Started consuming")
channel.start_consuming()
channel.close()
| [
"You need to actually start consuming messages. As it is right now you have a function called callback that never gets called (and is never referenced in your code).\nchannel.basic_consume(queue='main', on_message_callback=callback)\nchannel.start_consuming()\n\nAdd these lines at the end of your code. This will assign callback as the on message function and start_consuming will tell pika to start consuming the queue.\nMore detailed examples available here.\n",
"Same problem. Working container with no output. I added PYTHONUNBUFFERED=1 for enable log.\n# Dockerfile\nENV PYTHONUNBUFFERED=1 \n\n\n"
] | [
0,
0
] | [] | [] | [
"cloudamqp",
"docker",
"docker_compose",
"pika",
"python"
] | stackoverflow_0065444445_cloudamqp_docker_docker_compose_pika_python.txt |
Q:
Send different number of arguments to a function in a Pythonic way
I'm trying to make a small function which calls another function from a library I import. I have 8 similar use cases, but I don't want the code to be long and repetitive.
Each time I call the exact same function with the same arguments, just with a different number of them.
Let me show an example of what I mean:
This is my function
def num_pack(num, 8_bytes):
return struct.Struct(">Q Q Q Q Q Q Q Q").pack(num, num, num, num, num, num, num, num)
num is some generic counter, 8_bytes is a variable that runs from 1 to 8.
There are 8 possible options for the function that I use; it depends on the 8_bytes value.
The number of Q entries in the format string should be equal to the value of 8_bytes, and the same goes for the number of num arguments.
The naive way to do it is :
def num_pack(num, 8_bytes):
if 8_bytes == 8:
return struct.Struct(">Q Q Q Q Q Q Q Q").pack(num, num, num, num, num, num, num, num)
if 8_bytes == 7:
return struct.Struct(">Q Q Q Q Q Q Q").pack(num, num, num, num, num, num, num)
if 8_bytes == 6:
return struct.Struct(">Q Q Q Q Q Q").pack(num, num, num, num, num, num)
.
.
.
if 8_bytes == 1:
return struct.Struct(">Q").pack(num)
I know how to modify the ">Q" string each time, but I don't know how to change the pack function's number of arguments.
I also know how to do this with eval, but this is bad practice and I don't want to use this method.
I'm sure there is some Pythonic way of doing so,
Thanks in advance !
A:
You can create a function that accepts any number of arguments using the unpacking operator *.
Here is an example of a function that takes any amount of arguments:
def my_awesome_function(*args):
# do awesome job!
print(args)
# depending on len of args do something...
Or if you always have one argument, but an unknown number of following arguments, you can formulate it like this:
def my_awesome_function(num, *all_other_args):
# my number
print(num)
# do awesome job!
print(all_other_args)
# depending on len of args do something...
Here is a full explanation of how to deal with an unknown number of args, and more useful operators: https://www.scaler.com/topics/python/packing-and-unpacking-in-python/
If you are looking to do it the other way around, such that a function needs all the items in a list as separate arguments, this can be done with the * operator as well.
def foo(a, b, c):
print(a, b, c)
values = ['adam', 'dave', 'elon']
# a valid function call would be
foo(*values)
# values -> ['adam', 'dave', 'elon']
# *values -> 'adam', 'dave', 'elon'
Hopefully this helps. Please leave a comment if I misunderstood.
Good Luck!
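For the original struct question, a minimal sketch combining both ideas (building the format string dynamically and unpacking a repeated list) could look like this; n_bytes is used in place of 8_bytes, which is not a valid Python identifier:
import struct

def num_pack(num, n_bytes):
    # one ">Q" per requested 8-byte block, e.g. ">Q Q Q" for n_bytes == 3
    fmt = ">" + " ".join(["Q"] * n_bytes)
    # unpack a list of n_bytes copies of num as separate arguments
    return struct.Struct(fmt).pack(*([num] * n_bytes))

print(num_pack(7, 3))  # 24 bytes: three big-endian unsigned 64-bit copies of 7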
| Send different number of arguments to a function in a Pythonic way | I'm trying to make a small function which calls another function from a library I import, I have 8 similar use cases but I don't want the code to be long and repeating.
each time I send the exact same function and with the same arguments but with different number of them.
Let me show an example of what I mean:
This is my function
def num_pack(num, 8_bytes):
return struct.Struct(">Q Q Q Q Q Q Q Q").pack(num, num, num, num, num, num, num, num)
num is some generic counter, 8_bytes is a variable that runs from 1 to 8.
there are 8 possible options for the function that I use, it depends on the 8_bytes value.
The number of Q in the string should be equal to the number of 8_bytes and the same goes for num.
The naive way to do it is :
def num_pack(num, 8_bytes):
if 8_bytes == 8:
return struct.Struct(">Q Q Q Q Q Q Q Q").pack(num, num, num, num, num, num, num, num)
if 8_bytes == 7:
return struct.Struct(">Q Q Q Q Q Q Q").pack(num, num, num, num, num, num, num)
if 8_bytes == 6:
return struct.Struct(">Q Q Q Q Q Q").pack(num, num, num, num, num, num)
.
.
.
if 8_bytes == 1:
return struct.Struct(">Q").pack(num)
I know how to modify the ">Q" string at each time by I don't know how to change the pack function's number of arguments.
I also know how to do this with eval, but this is bad practice and I don't want to use this method.
I'm sure there is some Pythonic way of doing so,
Thanks in advance !
| [
"You can create a function that accepts any amount of arguments using the unpack operater *.\nHere is an example of a function that takes any amount of arguments:\n\ndef my_awesome_function(*args):\n # do awesome job!\n print(args)\n\n # depending on len of args do something...\n\n\nOr if you always have a one argument, but an unknown number of following arguments, you can formulate it like this:\n\ndef my_awesome_function(num, *all_other_args):\n # my number\n print(num)\n\n # do awesome job!\n print(all_other_args)\n\n # depending on len of args do something...\n\n\nHere is full explanation of how to deal with unknown number of args, and more usefull operators: https://www.scaler.com/topics/python/packing-and-unpacking-in-python/\nIf you are looking to do it the other way around, such that a function deeds all the items in a list as separate arguments. This could be done like this with the * operator as well.\ndef foo(a, b, c):\n print(a, b, c)\n\nvalues = ['adam', 'dave', 'elon']\n\n# a valid funciton call would be\nfoo(*values)\n\n# values -> ['adam', 'dave', 'elon']\n# *values -> 'adam', 'dave', 'elon'\n\nHopefully this helps. Please leave a comment if I misunderstood.\nGood Luck!\n"
] | [
2
] | [] | [] | [
"python"
] | stackoverflow_0074612684_python.txt |
Q:
batch_distance.cpp:274: error: (-215:Assertion failed)
I have the following code:
import cv2
import os
from os import listdir
import numpy as np
from PIL import Image
from tabulate import tabulate
import itertools
#sift
sift = cv2.SIFT_create()
#feature matching
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
# get the path/directory
folder_dir = "./runs/myDetect/SIFT"
col_names = []
data = []
all_keypoints = []
all_descriptors = []
for image in os.listdir(folder_dir):
# check if the image ends with png or jpg or jpeg
if (image.endswith(".png") or image.endswith(".jpg") or image.endswith(".jpeg")):
col_names = ["KeyPoints lenght", "Numbers", "Keypoints"]
opened_img = np.array(Image.open(folder_dir + image))
gray_img= cv2.cvtColor(opened_img, cv2.COLOR_BGR2GRAY)
keypoints, descriptors = sift.detectAndCompute(gray_img, None)
data.append([image, len(keypoints)])
all_keypoints.append([image, keypoints])
all_descriptors.append([descriptors]) #all_descriptors.append([image, descriptors])
print(tabulate(data, headers=col_names, tablefmt="fancy_grid"))
for a, b in itertools.combinations(all_descriptors, 2):
a=np.array(a).astype('uint8')
print(type(a))
b=np.array(b).astype('uint8')
print(type(b))
if type(a)!=type(None) and type(b)!=type(None) :
if a or b is None:
print(False)
matches = bf.match(a,b)
matches = sorted(matches, key = lambda x:x.distance)
cv2.waitKey(1)
cv2.destroyAllWindows()
I am trying to find similarities between 2 images in the same file. I am using SIFT. The outputs of SIFT are keypoints and a descriptor. I created a list named all_descriptors and for each image I add its descriptor to this list. Finally, I want to compare these descriptors with each other. On the part matches = bf.match(a,b) of the code, I receive the following error:
cv2.error: OpenCV(4.6.0) /io/opencv/modules/core/src/batch_distance.cpp:274: error: (-215:Assertion failed) type == src2.type() && src1.cols == src2.cols && (type == CV_32F || type == CV_8U) in function 'batchDistance'
What is the solution? How can I compare 2 images in the same file?
A:
Please try the same code with the steps below:
First :
all_descriptors.append(descriptors)
then :
a=np.array(a).astype('float32')
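A short sketch of how the matching loop could then look (assuming each descriptors array from sift.detectAndCompute is appended directly, as suggested above):
import cv2
import itertools

bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
all_descriptors = []  # append each image's `descriptors` array directly, not wrapped in a list

for a, b in itertools.combinations(all_descriptors, 2):
    if a is None or b is None:   # detectAndCompute returns None when no keypoints are found
        continue
    matches = bf.match(a, b)     # both inputs are float32 (N, 128) arrays, so the types match
    matches = sorted(matches, key=lambda m: m.distance)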
| batch_distance.cpp:274: error: (-215:Assertion failed) | ` I have the following code :
import cv2
import os
from os import listdir
import numpy as np
from PIL import Image
from tabulate import tabulate
import itertools
#sift
sift = cv2.SIFT_create()
#feature matching
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
# get the path/directory
folder_dir = "./runs/myDetect/SIFT"
col_names = []
data = []
all_keypoints = []
all_descriptors = []
for image in os.listdir(folder_dir):
# check if the image ends with png or jpg or jpeg
if (image.endswith(".png") or image.endswith(".jpg") or image.endswith(".jpeg")):
col_names = ["KeyPoints lenght", "Numbers", "Keypoints"]
opened_img = np.array(Image.open(folder_dir + image))
gray_img= cv2.cvtColor(opened_img, cv2.COLOR_BGR2GRAY)
keypoints, descriptors = sift.detectAndCompute(gray_img, None)
data.append([image, len(keypoints)])
all_keypoints.append([image, keypoints])
all_descriptors.append([descriptors]) #all_descriptors.append([image, descriptors])
print(tabulate(data, headers=col_names, tablefmt="fancy_grid"))
for a, b in itertools.combinations(all_descriptors, 2):
a=np.array(a).astype('uint8')
print(type(a))
b=np.array(b).astype('uint8')
print(type(b))
if type(a)!=type(None) and type(b)!=type(None) :
if a or b is None:
print(False)
matches = bf.match(a,b)
matches = sorted(matches, key = lambda x:x.distance)
cv2.waitKey(1)
cv2.destroyAllWindows()
I am trying to find similarities between 2 images on the same file. I am using SIFT. Output of SIFT are keypoints and descriptor. I created a list named all_descriptors and for each image I add new descriptor to this list. Finally, I want to compare this descriptors between each other. On this part matches = bf.match(a,b) of the code, I receive following error : `matches = bf.match(a,b)
cv2.error: OpenCV(4.6.0) /io/opencv/modules/core/src/batch_distance.cpp:274: error: (-215:Assertion failed) type == src2.type() && src1.cols == src2.cols && (type == CV_32F || type == CV_8U) in function 'batchDistance'. What is the solution? How can I compare 2 images in the same file?
| [
"Please try the same code with steps in below\n\nFirst :\n\nall_descriptors.append(descriptors)\n\n\nthen :\n\na=np.array(a).astype('float32')\n\n"
] | [
0
] | [] | [] | [
"feature_extraction",
"opencv",
"python",
"sift"
] | stackoverflow_0074612128_feature_extraction_opencv_python_sift.txt |
Q:
Extract all matches unless string contains
I am using the re package's re.findall to extract terms from strings. How can I make a regex to say capture these matches unless you see this substring (in this case the substring "fake"). I attempted this via a anchored look-ahead solution.
Current Output:
import re
for x in ['a man dogs', "fake: too many dogs", 'hi']:
print(re.findall(r"(man[a-z]?\b|dog)(?!^.*fake)", x, flags=re.IGNORECASE))
## ['man', 'dog']
## ['many', 'dog']
## []
Desired Output
## ['man', 'dog']
## []
## []
I could accomplish this with an if/else but was wondering how to use a pure regex to solve this?
for x in ['a man dogs', "fake: too many dogs", 'hi']:
if re.search('fake', x, flags=re.IGNORECASE):
print([])
else:
print(re.findall(r"(man[a-z]?\b|dog)", x, flags=re.IGNORECASE))
## ['man', 'dog']
## []
## []
A:
Since re does not support unknown length lookbehind patterns, the plain regex solution is not possible. However, the PyPi regex library supports such lookbehind patterns.
After installing PyPi regex, you can use
(?<!fake.*)(man[a-z]?\b|dog)(?!.*fake)
See the regex demo.
Details:
(?<!fake.*) - a negative lookbehind that fails the match if there is fake string followed with any zero or more chars other than line break chars as many as possible immediately to the left of the current location
(man[a-z]?\b|dog) - man + a lowercase ASCII letter followed with a word boundary or dog string
(?!.*fake) - a negative lookahead that fails the match if there are any zero or more chars other than line break chars as many as possible and then a fake string immediately to the right of the current location.
In Python:
import regex
for x in ['a man dogs', "fake: too many dogs", 'hi']:
    print(regex.findall(r"(?<!fake.*)(man[a-z]?\b|dog)(?!.*fake)", x, flags=regex.IGNORECASE))
A:
In your pattern (man[a-z]?\b|dog)(?!^.*fake) the negative lookahead is after the match, but the word fake can still occur before one of the matches.
With Python re you can get out of the way what you don't want to keep, and capture what you want to keep using a capture group.
What you could do is not use a negative lookahead, but match a whole line that contains the word fake
^.*\bfake\b.*|(man[a-z]?\b|dog)
Explanation
^.*\bfake\b.* Match a whole line that contains the word fake
| Or
(man[a-z]?\b|dog) Capture group 1, match either man and optional char a-z or match dog
Regex demo | Python demo
import re
pattern = r"^.*\bfake\b.*|(man[a-z]?\b|dog)"
for x in ['a man dogs', "fake: too many dogs", 'hi']:
res = [s for s in re.findall(pattern, x, re.IGNORECASE) if s]
print(res)
Output
['man', 'dog']
[]
[]
| Extract all matches unless string contains | I am using the re package's re.findall to extract terms from strings. How can I make a regex to say capture these matches unless you see this substring (in this case the substring "fake"). I attempted this via a anchored look-ahead solution.
Current Output:
import re
for x in ['a man dogs', "fake: too many dogs", 'hi']:
print(re.findall(r"(man[a-z]?\b|dog)(?!^.*fake)", x, flags=re.IGNORECASE))
## ['man', 'dog']
## ['many', 'dog']
## []
Desired Output
## ['man', 'dog']
## []
## []
I could accomplish this with an if/else but was wondering how to use a pure regex to solve this?
for x in ['a man dogs', "fake: too many dogs", 'hi']:
if re.search('fake', x, flags=re.IGNORECASE):
print([])
else:
print(re.findall(r"(man[a-z]?\b|dog)", x, flags=re.IGNORECASE))
## ['man', 'dog']
## []
## []
| [
"Since re does not support unknown length lookbehind patterns, the plain regex solution is not possible. However, the PyPi regex library supports such lookbehind patterns.\nAfter installing PyPi regex, you can use\n(?<!fake.*)(man[a-z]?\\b|dog)(?!.*fake)\n\nSee the regex demo.\nDetails:\n\n(?<!fake.*) - a negative lookbehind that fails the match if there is fake string followed with any zero or more chars other than line break chars as many as possible immediately to the left of the current location\n(man[a-z]?\\b|dog) - man + a lowercase ASCII letter followed with a word boundary or dog string\n(?!.*fake) - a negative lookahead that fails the match if there are any zero or more chars other than line break chars as many as possible and then a fake string immediately to the left of the current location.\n\nIn Python:\nimport regex\nfor x in ['a man dogs', \"fake: too many dogs\", 'hi']:\n print(regex.findall(r\"(?<!fake.*)(man[a-z]?\\b|dog)(?!.*fake)\", x, flags=re.IGNORECASE))\n\n\n",
"In your pattern (man[a-z]?\\b|dog)(?!^.*fake) the negative lookahead is after the match, but the word fake can still occur before one of the matches.\nWith Python re you can get out of the way what you don't want to keep, and capture what you want to keep using a capture group.\nWhat you could do is not use a negative lookahead, but match a whole line that contains the word fake\n^.*\\bfake\\b.*|(man[a-z]?\\b|dog)\n\nExplanation\n\n^.*\\bfake\\b.* Match a whole line that contains the word fake\n| Or\n(man[a-z]?\\b|dog) Capture group 1, match either man and optional char a-z or match dog\n\nRegex demo | Python demo\nimport re\n\npattern = r\"^.*\\bfake\\b.*|(man[a-z]?\\b|dog)\"\n\nfor x in ['a man dogs', \"fake: too many dogs\", 'hi']:\n res = [s for s in re.findall(pattern, x, re.IGNORECASE) if s]\n print(res)\n\nOutput\n['man', 'dog']\n[]\n[]\n\n"
] | [
1,
1
] | [] | [] | [
"python",
"python_3.x",
"python_re",
"regex"
] | stackoverflow_0074607870_python_python_3.x_python_re_regex.txt |
Q:
Python ibm_db equivalent of db2look
So I am using the ibm_db library to fetch the necessary information. Now I want to get the full table creation script along with indexes and everything else. I can see there is a db2look command to generate the same:
db2look -d some_db -z xxxx -t xxxx -e -i xxxx-w xxxx -o script.sql
Is there an equivalent thing in ibm_db?
A:
No, there is not an exact equivalent in the python ibm_db for the db2look tool.
Alternative approaches exist.
Nothing (except suitable authorities/permissions) prevents you from running a stored procedure that executes (i.e. shells out to) db2look on the database server and returns its output to the python script.
If the workstation running python ibm_db also has the Db2 fat client installed, then python can directly run db2look as long as your Db2 client has the relevant database(s) catalogued.
You can also use python to execute an undocumented DB2-LUW stored procedure (sysproc.DB2LK_GENERATE_DDL()) as described in this answer, subject to your account having relevant rights.
You can also write your own queries for the catalog views (i.e. re-invent the wheel), in order to generate the DDL, which lets you do whatever you want.
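For the second approach, a minimal sketch (with hypothetical database, schema and table names) could simply shell out to db2look from Python:
import subprocess

# Hypothetical names; assumes a Db2 client with db2look on PATH and the database catalogued
cmd = ["db2look", "-d", "SOMEDB", "-z", "MYSCHEMA", "-t", "MYTABLE", "-e", "-o", "script.sql"]
subprocess.run(cmd, check=True)

with open("script.sql") as f:
    ddl = f.read()  # the generated DDL, ready to use in the rest of the script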
| Python ibm_db equivalent of db2look | SO I am using ibm_db library for fetch necessary information. Now I want to get the full table creation script along with index and all. I can see there is one db2look command to generate the same
db2look -d some_db -z xxxx -t xxxx -e -i xxxx-w xxxx -o script.sql
Is there an equivalent thing in ibm_db?
| [
"No, there is not an exact equivalent in the python ibm_db for the db2look tool.\nAlternative approaches exist.\nNothing (except suitable authorities/permissions) prevents you from running a stored procedure that exececutes (i.e. shells out to) db2look on the database-server and return its output to the python script.\nIf the workstation running python ibm_db also has the Db2 fat client installed, then python can directly run db2look as long as your Db2 client has the relevant database(s) catalogued.\nYou can also use python to execute an undocumented DB2-LUW stored procedure (sysproc.DB2LK_GENERATE_DDL()) as described in this answer, subject to your account having relevant rights.\nYou can also write your own queries for the catalog views (i.e. re-invent the wheel), in order to generate the DDL, which lets you do whatever you want.\n"
] | [
1
] | [] | [] | [
"db2",
"python"
] | stackoverflow_0074612459_db2_python.txt |
Q:
CSS static file is not loading in Django
I am learning Django and I tried to implement a blog app. I am getting an error while using the static files inside the project. I know there are a lot of questions with the same title, but I tried every way and it's still not rendering the CSS file in the browser.
Folder structure:
firstBlog is the project name.
blog is the app inside firstBlog.
allstaticfiles is the STATIC_ROOT location.
static is the folder for storing the static contents.
templates for storing the respective app's templates.
Here are some of my codes :
settings.py
STATIC_URL = '/static_files/'
STATICFILES_DIRS = [
os.path.join(BASE_DIR, "static"),
'/Django/projects/project1/project1/firstBlog/static',
]
#print(os.path.exists(os.path.join(BASE_DIR,'static'))) ----------------TRUE
STATIC_ROOT=os.path.join(BASE_DIR, "allstaticfiles")
Base.html
I have used {% load static %} and for loading the CSS I have set the href attribute as below.
<link rel="stylesheet" type="text/css" href="{% static 'blog/main.css' % }">
Few things to consider :
When I am viewing my page in the browser, it's giving me an error in the console which is probably due to some path issue.
127.0.0.1/:1 Refused to apply style from 'http://127.0.0.1:8000/blog/%7B%%20static%20'blog/main.css'%20%%20%7D'
because its MIME type ('text/html') is not a supported stylesheet MIME
type, and strict MIME checking is enabled.
When I am viewing the style from the localhost link, i.e. http://127.0.0.1:8000/blog/static_files/blog/main.css, it's working.
When I am checking the view source of the page and then clicking on the href link, it's not loading the CSS file.
Folder structure :
C:\Users\tedd\OneDrive\Django\projects\project1\project1\firstBlog\static
A:
I'll try to help, but I'm not an expert!
The reason why http://127.0.0.1:8000/blog/static_files/blog/main.css is working is because you are accessing the main.css file directly. I would assume that your <link rel="stylesheet" type="text/css" href="{% static 'blog/main.css' % }"> is not pointing to the correct directory.
I looked at the documentation and it seems that your folder structure is not formed in the correct way. Here is a template for how everything should be structured:
[projectname]/ <- project root
├── [projectname]/ <- Django root
│ ├── __init__.py
│ ├── settings/
│ │ ├── common.py
│ │ ├── development.py
│ │ ├── i18n.py
│ │ ├── __init__.py
│ │ └── production.py
│ ├── urls.py
│ └── wsgi.py
├── apps/
│ └── __init__.py
├── configs/
│ ├── apache2_vhost.sample
│ └── README
├── doc/
│ ├── Makefile
│ └── source/
│ └── *snap*
├── manage.py
├── README.rst
├── run/
│ ├── media/
│ │ └── README
│ ├── README
│ └── static/
│ └── README
├── static/
│ └── README
└── templates/
├── base.html
├── core
│ └── login.html
└── README
On the screenshot that you provided it looks as if your templates folder is inside the django root instead of the project root. That's important, because it will be searching for your .css in the wrong place.
I encourage you to try fixing the directories and then instead of using href="{% static 'blog/main.css' % }"> simply use href="css/style.css".
Please notice that it's css/style.css instead of style.css. In your templates folder, you can create a folder called css and then store all of the .css files in there. Then you can simply point to the css folder with href="css/style.css".
This should work as a temporary solution until someone more knowledgeable can give you the correct answer. 
EDIT:
Your base.html should be in the templates folder also! So: project_root/templates (insert base.html here) /css/ (insert your .css here(
A:
Note: keeping DEBUG = False in your local settings might cause the CSS not to load. Change DEBUG = True to serve your CSS locally with Django.
A:
"whitenoise" can solve "MIME type" error then CSS is loaded:
First, install "whitenoise":
pip install whitenoise
Then, add it to "MIDDLEWARE" in "settings.py". That's it. Finally, CSS is loaded:
MIDDLEWARE = [
# ...
"django.middleware.security.SecurityMiddleware",
"whitenoise.middleware.WhiteNoiseMiddleware", # Here
# ...
]
A:
whitenoise can solve the MIME type error so that the CSS is loaded:
First, install whitenoise:
pip install whitenoise
Then, add it to MIDDLEWARE in settings.py. That's it.
Finally, CSS is loaded:
MIDDLEWARE = [
# ...
"django.middleware.security.SecurityMiddleware",
"whitenoise.middleware.WhiteNoiseMiddleware", # Here
# ...
]
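For completeness, a sketch of the related static-file settings (reusing the STATIC_URL/STATIC_ROOT values from the question; the optional storage backend is the one documented by whitenoise):
# settings.py (BASE_DIR is the usual variable defined at the top of settings.py)
import os

STATIC_URL = "/static_files/"
STATIC_ROOT = os.path.join(BASE_DIR, "allstaticfiles")  # then run: python manage.py collectstatic
# Optional: compressed, cache-busted static files served by whitenoise
STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"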
| CSS static file is not loading in Django | I am learning Django and i tried to implement a blog app . i am getting an error while using the static files inside project.i know, There are lot of questions with the same title. But i tried every way, still its not rendering the css file on the browser.
Folder structure:
firstBlog is the project name.
blog is the app inside firstBlog.
allstaticfiles is the STATIC_ROOT location.
static is the folder for stroing the static contents.
templates for storing the respective app's templates.
Here are some of my codes :
settings.py
STATIC_URL = '/static_files/'
STATICFILES_DIRS = [
os.path.join(BASE_DIR, "static"),
'/Django/projects/project1/project1/firstBlog/static',
]
#print(os.path.exists(os.path.join(BASE_DIR,'static'))) ----------------TRUE
STATIC_ROOT=os.path.join(BASE_DIR, "allstaticfiles")
Base.html
i have used the {% load static %} and for laoding the css i have set the href attribute as below.
<link rel="stylesheet" type="text/css" href="{% static 'blog/main.css' % }">
Few things to consider :
when i am viewing my page on browser, its giving me error on console which is probably due to some path issue.
127.0.0.1/:1 Refused to apply style from 'http://127.0.0.1:8000/blog/%7B%%20static%20'blog/main.css'%20%%20%7D'
because its MIME type ('text/html') is not a supported stylesheet MIME
type, and strict MIME checking is enabled.
When i am viewing the style from the localhost link i.e, http://127.0.0.1:8000/blog/static_files/blog/main.css, its working.
when i am checking the view source of the page and then clicking on the href link , it's not loading the css file.
Folder structure :
C:\Users\tedd\OneDrive\Django\projects\project1\project1\firstBlog\static
| [
"I'll try to help, but I'm not an expert!\nThe reason why http://127.0.0.1:8000/blog/static_files/blog/main.css is working is because you are accessing the main.css file directly. I would assume that your <link rel=\"stylesheet\" type=\"text/css\" href=\"{% static 'blog/main.css' % }\"> is not pointing to the correct directory. \nI looked at the documentation and it seems that your folder structure is not formed in the correct way. Here is a template for how everything should be structured:\n[projectname]/ <- project root\n├── [projectname]/ <- Django root\n│ ├── __init__.py\n│ ├── settings/\n│ │ ├── common.py\n│ │ ├── development.py\n│ │ ├── i18n.py\n│ │ ├── __init__.py\n│ │ └── production.py\n│ ├── urls.py\n│ └── wsgi.py\n├── apps/\n│ └── __init__.py\n├── configs/\n│ ├── apache2_vhost.sample\n│ └── README\n├── doc/\n│ ├── Makefile\n│ └── source/\n│ └── *snap*\n├── manage.py\n├── README.rst\n├── run/\n│ ├── media/\n│ │ └── README\n│ ├── README\n│ └── static/\n│ └── README\n├── static/\n│ └── README\n└── templates/\n ├── base.html\n ├── core\n │ └── login.html\n └── README\n\nOn the screenshot that you provided it looks as if your templates folder is inside the django root instead of the project root. That's important, because it will be searching for your .css in the wrong place. \nI encourage you to try fixing the directories and then instead of using href=\"{% static 'blog/main.css' % }\"> simply use href=\"css/style.css\". \nPlease notice that it's css/style.css instead of style.css. In your templates folder, you can create a folder called css and then store all of the .css files in there. Then you can simply point to the css folder with href=\"css/style.css\".\nThis should work as a temporary solution until someone more knowledgable can give you the correct answer. \nEDIT:\nYour base.html should be in the templates folder also! So: project_root/templates (insert base.html here) /css/ (insert your .css here(\n",
"Note this out: Keeping Debug = False in your local might cause the error of css not loading. Change Debug = True to run your css in local with django.\n",
"\"whitenoise\" can solve \"MIME type\" error then CSS is loaded:\nFirst, install \"whitenoise\":\npip install whitenoise\n\nThen, add it to \"MIDDLEWARE\" in \"settings.py\". That's it. Finally, CSS is loaded:\nMIDDLEWARE = [\n # ...\n \"django.middleware.security.SecurityMiddleware\",\n \"whitenoise.middleware.WhiteNoiseMiddleware\", # Here\n # ...\n]\n\n",
"whitenoise can solve MIME type error then CSS is loaded:\nFirst, install whitenoise:\npip install whitenoise\n\nThen, add it to MIDDLEWARE in settings.py. That's it.\nFinally, CSS is loaded:\nMIDDLEWARE = [\n # ...\n \"django.middleware.security.SecurityMiddleware\",\n \"whitenoise.middleware.WhiteNoiseMiddleware\", # Here\n # ...\n ]\n\n"
] | [
2,
2,
0,
0
] | [] | [] | [
"css",
"django",
"django_staticfiles",
"python",
"python_3.x"
] | stackoverflow_0059688135_css_django_django_staticfiles_python_python_3.x.txt |
Q:
Change color of figures with PyOpenGL
I have to do a basic program in Python using the OpenGL library... when somebody presses the key 'r', the figure changes to red; when somebody presses 'g' it changes to green; and when somebody presses 'b' it changes to blue. I don't know why the color doesn't change, but I know the program knows when a key is pressed. This is my code...
from OpenGL.GL import *
from OpenGL.GLUT import *
from math import pi
from math import sin
from math import cos
def initGL(width, height):
glClearColor(0.529, 0.529, 0.529, 0.0)
glMatrixMode(GL_PROJECTION)
def dibujarCirculo():
glClear(GL_COLOR_BUFFER_BIT)
glColor3f(0.0, 0.0, 0.0)
glBegin(GL_POLYGON)
for i in range(400):
x = 0.25*sin(i) #Cordenadas polares x = r*sin(t) donde r = radio/2 (Circunferencia centrada en el origen)
y = 0.25*cos(i) #Cordenadas polares y = r*cos(t)
glVertex2f(x, y)
glEnd()
glFlush()
def keyPressed(*args):
key = args[0]
if key == "r":
glColor3f(1.0, 0.0, 0.0)
print "Presionaste",key
elif key == "g":
glColor3f(0.0, 1.0, 0.0)
print "Presionaste g"
elif key == "b":
glColor3f(0.0, 0.0, 1.0)
print "Presionaste b"
def main():
global window
glutInit(sys.argv)
glutInitDisplayMode(GLUT_SINGLE|GLUT_RGB)
glutInitWindowSize(500,500)
glutInitWindowPosition(200,200)
#creando la ventana
window = glutCreateWindow("Taller uno")
glutDisplayFunc(dibujarCirculo)
glutIdleFunc(dibujarCirculo)
glutKeyboardFunc(keyPressed)
initGL(500,500)
glutMainLoop()
if __name__ == "__main__":
main()
A:
I suspect that because the 2nd line in dibujarCirculo resets glColor3f to (0,0,0), you keep losing the change you made in keyPressed. Have you tried initializing glColor3f somewhere other than dibujarCirculo ?
A:
You can do something like this where you store the current shape color globally and update that when key presses are detected
from OpenGL.GL import *
from OpenGL.GLUT import *
from math import pi
from math import sin
from math import cos
import random
shapeColor = [0, 0, 0]
numAgents = 300
def initGL(width, height):
glClearColor(0.529, 0.529, 0.529, 0.0)
glMatrixMode(GL_PROJECTION)
def dibujarCirculo():
glClear(GL_COLOR_BUFFER_BIT)
glColor3f(shapeColor[0], shapeColor[1], shapeColor[2])
glBegin(GL_POLYGON)
for i in range(400):
x = 0.25*sin(i) #Cordenadas polares x = r*sin(t) donde r = radio/2 (Circunferencia centrada en el origen)
y = 0.25*cos(i) #Cordenadas polares y = r*cos(t)
glVertex2f(x, y)
glEnd()
glFlush()
def keyPressed(*args):
key = args[0]
global shapeColor
if key == b'r':
shapeColor = [1,0,0]
elif key == b'g':
shapeColor = [0,1,0]
elif key == b'b':
shapeColor = [0,0,1]
def main():
global window
glutInit(sys.argv)
glutInitDisplayMode(GLUT_SINGLE|GLUT_RGB)
glutInitWindowSize(500,500)
glutInitWindowPosition(200,200)
#creando la ventana
window = glutCreateWindow("Taller uno")
glutDisplayFunc(dibujarCirculo)
glutIdleFunc(dibujarCirculo)
glutKeyboardFunc(keyPressed)
initGL(500,500)
glutMainLoop()
if __name__ == "__main__":
main()
| Change color of figures with PyOpenGL | I have to do a basic program in Python using the library Opengl...when somebody press the key 'r', the figure change to red, when somebody pressed key 'g' change green and when somebody pressed 'b' change blue. I don't know why the color doesn't change, but i know the program know when a key is pressed, this is my code...
from OpenGL.GL import *
from OpenGL.GLUT import *
from math import pi
from math import sin
from math import cos
def initGL(width, height):
glClearColor(0.529, 0.529, 0.529, 0.0)
glMatrixMode(GL_PROJECTION)
def dibujarCirculo():
glClear(GL_COLOR_BUFFER_BIT)
glColor3f(0.0, 0.0, 0.0)
glBegin(GL_POLYGON)
for i in range(400):
x = 0.25*sin(i) #Cordenadas polares x = r*sin(t) donde r = radio/2 (Circunferencia centrada en el origen)
y = 0.25*cos(i) #Cordenadas polares y = r*cos(t)
glVertex2f(x, y)
glEnd()
glFlush()
def keyPressed(*args):
key = args[0]
if key == "r":
glColor3f(1.0, 0.0, 0.0)
print "Presionaste",key
elif key == "g":
glColor3f(0.0, 1.0, 0.0)
print "Presionaste g"
elif key == "b":
glColor3f(0.0, 0.0, 1.0)
print "Presionaste b"
def main():
global window
glutInit(sys.argv)
glutInitDisplayMode(GLUT_SINGLE|GLUT_RGB)
glutInitWindowSize(500,500)
glutInitWindowPosition(200,200)
#creando la ventana
window = glutCreateWindow("Taller uno")
glutDisplayFunc(dibujarCirculo)
glutIdleFunc(dibujarCirculo)
glutKeyboardFunc(keyPressed)
initGL(500,500)
glutMainLoop()
if __name__ == "__main__":
main()
| [
"I suspect that because the 2nd line in dibujarCirculo resets glColor3f to (0,0,0), you keep losing the change you made in keyPressed. Have you tried initializing glColor3f somewhere other than dibujarCirculo ?\n",
"You can do something like this where you store the current shape color globally and update that when key presses are detected\nfrom OpenGL.GL import *\nfrom OpenGL.GLUT import *\nfrom math import pi \nfrom math import sin\nfrom math import cos\n\nimport random\n\nshapeColor = [0, 0, 0]\n\nnumAgents = 300\n\ndef initGL(width, height):\n glClearColor(0.529, 0.529, 0.529, 0.0)\n glMatrixMode(GL_PROJECTION)\n\ndef dibujarCirculo():\n glClear(GL_COLOR_BUFFER_BIT)\n glColor3f(shapeColor[0], shapeColor[1], shapeColor[2])\n\n glBegin(GL_POLYGON)\n for i in range(400):\n x = 0.25*sin(i) #Cordenadas polares x = r*sin(t) donde r = radio/2 (Circunferencia centrada en el origen)\n y = 0.25*cos(i) #Cordenadas polares y = r*cos(t)\n glVertex2f(x, y) \n glEnd()\n glFlush()\n\ndef keyPressed(*args):\n key = args[0]\n global shapeColor\n if key == b'r':\n shapeColor = [1,0,0]\n elif key == b'g':\n shapeColor = [0,1,0]\n elif key == b'b':\n shapeColor = [0,0,1] \n\ndef main():\n global window\n glutInit(sys.argv)\n glutInitDisplayMode(GLUT_SINGLE|GLUT_RGB)\n glutInitWindowSize(500,500)\n glutInitWindowPosition(200,200)\n\n #creando la ventana\n window = glutCreateWindow(\"Taller uno\")\n\n glutDisplayFunc(dibujarCirculo)\n glutIdleFunc(dibujarCirculo)\n glutKeyboardFunc(keyPressed)\n initGL(500,500)\n glutMainLoop()\n\nif __name__ == \"__main__\":\n main()\n\n"
] | [
0,
0
] | [] | [] | [
"opengl",
"pyopengl",
"python",
"python_2.7"
] | stackoverflow_0028523120_opengl_pyopengl_python_python_2.7.txt |
Q:
How does one show x10(superscript number) instead of 1e(number) for axes in matplotlib?
The only way I know how to use scientific notation for the end of the axes in matplotlib is with
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
but this will use 1e instead of x10. In the example code below it shows 1e6, but I want x10 to the power of 6, x10superscript6 (x10^6 with the 6 small and no ^). Is there a way to do this?
edit: I do not want scientific notation for each tick in the axis (that does not look good imho), only at the end, like the example shows but with just the 1e6 part altered to x10superscript6.
I can't include images yet.
Thanks
import numpy as np
import matplotlib.pyplot as plt
plt.figure()
x = np.linspace(0,1000)
y = x**2
plt.plot(x,y)
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
plt.show()
A:
The offset is formatted differently depending on the useMathText argument. If True it will show the offset in a latex-like (MathText) format as x 10^6 instead of 1e6
import numpy as np
import matplotlib.pyplot as plt
plt.figure()
x = np.linspace(0,1000)
y = x**2
plt.plot(x,y)
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0), useMathText=True)
plt.show()
Note that the above will not work for version 2.0.2 (possibly other older versions). In that case you need to set the formatter manually and specify the option:
import numpy as np
import matplotlib.pyplot as plt
plt.figure()
x = np.linspace(0,1000)
y = x**2
plt.plot(x,y)
plt.gca().yaxis.set_major_formatter(plt.ScalarFormatter(useMathText=True))
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
plt.show()
A:
If you need this for all of your plots, you can edit the rcParams, for example as follows.
import matplotlib as mpl
mpl.rc('axes.formatter', use_mathtext=True)
Put it at the top of your script. If you want it for all of your scripts, I recommend looking up matplotlib style sheets.
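A small sketch of the style-sheet route (the file name here is just an example):
# mystyle.mplstyle (hypothetical file) would contain the single line:
#     axes.formatter.use_mathtext : True
# and can be applied per script with:
import matplotlib.pyplot as plt
plt.style.use("./mystyle.mplstyle")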
| How does one show x10(superscript number) instead of 1e(number) for axes in matplotlib? | The only way I know how to use scientific notation for the end of the axes in matplotlib is with
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
but this will use 1e instead of x10. In the example code below it shows 1e6, but I want x10 to the power of 6, x10superscript6 (x10^6 with the 6 small and no ^). Is there a way to do this?
edit: I do not want scientific notation for each tick in the axis (that does not look good imho), only at the end, like the example shows but with just the 1e6 part altered to x10superscript6.
I can't include images yet.
Thanks
import numpy as np
import matplotlib.pyplot as plt
plt.figure()
x = np.linspace(0,1000)
y = x**2
plt.plot(x,y)
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
plt.show()
| [
"The offset is formatted differently depending on the useMathText argument. If True it will show the offset in a latex-like (MathText) format as x 10^6 instead of 1e6\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.figure()\nx = np.linspace(0,1000)\ny = x**2\nplt.plot(x,y)\nplt.ticklabel_format(style='sci', axis='y', scilimits=(0,0), useMathText=True)\nplt.show()\n\n\nNote that the above will not work for version 2.0.2 (possibly other older versions). In that case you need to set the formatter manually and specify the option:\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.figure()\nx = np.linspace(0,1000)\ny = x**2\nplt.plot(x,y)\nplt.gca().yaxis.set_major_formatter(plt.ScalarFormatter(useMathText=True))\nplt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))\nplt.show()\n\n",
"If you need this for all of your plots, you can edit the rcParams, for example as follows.\nimport matplotlib as mpl\n\nmpl.rc('axes.formatter', use_mathtext=True)\n\nPut at at the top of your script. If you want it for all of your scripts I recommend looking up matplotlib stylesheets.\n"
] | [
4,
0
] | [] | [] | [
"matplotlib",
"python",
"python_3.x"
] | stackoverflow_0054354823_matplotlib_python_python_3.x.txt |
Q:
python GPU memory exploding?
I'm having a hard time understanding why the memory of my GPU explodes whenever I run the following function to extract the CLIP embedding of images or captions:
def get_features(item):
"""
Function returning the clip embedding of an image or a caption, to be used to compute the similarity.
----
Arguments:
item: Str or PIL image
"""
# if we are dealing with an image
if not isinstance(item, str):
item_features = model_im.get_image_features(**processor_im(images=item, return_tensors="pt").to(device))
item_features_norm = item_features / item_features.norm(dim=-1, keepdim=True)
# if we are dealing with a caption
else:
# Calculate the image-text similarity score the same way
item_features = model_im.get_text_features(**processor_im(text=item, return_tensors="pt").to(device))
# Normalize
item_features_norm = item_features / item_features.norm(dim=-1, keepdim=True)
del item_features
return item_features_norm
In particular, I run the following function on several image-caption pairs, but after a few samples, I get a CUDA out of memory error. I don't really understand where the problem is; each variable is overwritten at each call and the images are small (128x128).
| python GPU memory exploding? | I'm having a hard time understanding why the memory of my GPU explodes whenever I run the following function to extract the CLIP embedding of images or captions:
def get_features(item):
"""
Function returning the clip embedding of an image or a caption, to be used to compute the similarity.
----
Arguments:
item: Str or PIL image
"""
# if we are dealing with an image
if not isinstance(item, str):
item_features = model_im.get_image_features(**processor_im(images=item, return_tensors="pt").to(device))
item_features_norm = item_features / item_features.norm(dim=-1, keepdim=True)
# if we are dealing with a caption
else:
# Calculate the image-text similarity score the same way
item_features = model_im.get_text_features(**processor_im(text=item, return_tensors="pt").to(device))
# Normalize
item_features_norm = item_features / item_features.norm(dim=-1, keepdim=True)
del item_features
return item_features_norm
In particular, I run the following function on several Image-caption pairs, but after a few samples, I get cuda out of memory error. I don't really understand where the problem is, each variable is overwritten at each call and the images are small images 128x128.
| [] | [] | [
"I don't see why your GPU RAM is exploding. Perhaps it helps if you debug your code following here:\nhttps://discuss.pytorch.org/t/how-to-debug-causes-of-gpu-memory-leaks/6741/13\nimport torch\nimport gc\nfor obj in gc.get_objects():\n try:\n if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):\n print(type(obj), obj.size())\n except:\n pass\n \n\nOr if you call torch.cuda.empty_cache() after del item_features according to here: https://discuss.pytorch.org/t/how-to-delete-a-tensor-in-gpu-to-free-up-memory/48879/3\n"
] | [
-1
] | [
"clip",
"gpu",
"memory",
"python"
] | stackoverflow_0074612575_clip_gpu_memory_python.txt |
Q:
How to pull value from a column when several columns match in two data frames?
I am trying to write a script which will search a database similar to that in Table 1 based on a product/region/year specification outlined in table 2. The plan is to search for a match in Table 1 to a specification outlined in Table 2 and then pull the observation value, as seen in Table 2 - with results.
I need this code to run several loops, where the year criteria is relaxed. For example, loop 1 would search for a match in Product_L1, Geography_L1 and Year and loop 2 would search for a match in Product_L1, Geography_L1 and Year-1 and so on.
Table 1
Product level 1
Product level 2
Region level 1
Region level 2
Year
Obs. value
Portland cement
Cement
Peru
South America
2021
1
Portland cement
Cement
Switzerland
Europe
2021
2
Portland cement
Cement
USA
North America
2021
3
Portland cement
Cement
Brazil
South America
2021
4
Portland cement
Cement
South Africa
Africa
2021
5
Portland cement
Cement
India
Asia
2021
6
Portland cement
Cement
Brazil
South America
2020
7
Table 2
Product level 1
Product level 2
Region level 1
Region level 2
Year
Portland cement
Cement
Brazil
South America
2021
Portland cement
Cement
Switzerland
Europe
2021
Table 2 - with results
Product level 1
Product level 2
Region level 1
Region level 2
Year
Loop 1
Loop 2
x
Portland cement
Cement
Brazil
South America
2021
4
7
I have tried using the following code, but it comes up with the error 'Can only compare identically-labeled Series objects'. Does anyone have any suggestions on how to prevent this error?
Table_2['Loop_1'] = np.where((Table_1.Product_L1 == Table_2.Product_L1)
& (Table_1.Geography_L1 == Table_2.Geography_L1)
& (Table_1.Year == Table_2.Year),
Table_1(['obs_value'], ''))
A:
You can perform a merge operation and provide a list of columns that you want from Table_1.
import pandas as pd
Table_1 = pd.DataFrame({
"Product_L1":["Portland cement", "Portland cement", "Portland cement", "Portland cement", "Portland cement", "Portland cement", "Portland cement"],
"Product_L2":["Cement", "Cement", "Cement", "Cement", "Cement", "Cement", "Cement"],
"Geography_L1": ["Peru", "Switzerland", "USA", "Brazil", "South Africa", "India", "Brazil"],
"Geography_L2": ["South America", "Europe", "North America", "South America", "Africa", "Asia", "South America"],
"Year": [2021, 2021, 2021, 2021, 2021, 2021, 2020],
"obs_value": [1, 2, 3, 4, 5, 6, 7]
})
Table_2 = pd.DataFrame({
"Product_L1":["Portland cement", "Portland cement"],
"Product_L2":["Cement", "Cement"],
"Geography_L1": ["Brazil", "Switzerland"],
"Geography_L2": ["South America", "Europe"],
"Year": [2021, 2021]
})
columns_list = ['Product_L1','Product_L2','Geography_L1','Geography_L2','Year','obs_value']
result = pd.merge(Table_2, Table_1[columns_list], how='left')
result is a new dataframe:
Product_L1 Product_L2 Geography_L1 Geography_L2 Year obs_value
0 Portland cement Cement Brazil South America 2021 4
1 Portland cement Cement Switzerland Europe 2021 2
EDIT: Based upon the update to the question, I think what you are trying to do is achievable using set_index and unstack. This will create a new dataframe with the observed values listed in columns 'Year_2020', 'Year_2021' etc.
index_columns = ['Product_L1','Product_L2','Geography_L1','Geography_L2', 'Year']
edit_df = Table_1.set_index(index_columns)['obs_value'].unstack().add_prefix('Year_').reset_index()
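To get the Loop 1 / Loop 2 columns from the question, one sketch (assuming at most one match per key combination) is to shift the Year in Table_1 before each merge:
# Loop 1 matches the same Year, Loop 2 matches Year - 1 in Table_1, and so on
for i in range(2):
    shifted = Table_1.copy()
    shifted["Year"] = shifted["Year"] + i
    merged = pd.merge(
        Table_2,
        shifted[["Product_L1", "Geography_L1", "Year", "obs_value"]],
        on=["Product_L1", "Geography_L1", "Year"],
        how="left",
    )
    Table_2[f"Loop_{i + 1}"] = merged["obs_value"].values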
| How to pull value from a column when several columns match in two data frames? | I am trying to write a script which will search a database similar to that in Table 1 based on a product/region/year specification outlined in table 2. The plan is to search for a match in Table 1 to a specification outlined in Table 2 and then pull the observation value, as seen in Table 2 - with results.
I need this code to run several loops, where the year criteria is relaxed. For example, loop 1 would search for a match in Product_L1, Geography_L1 and Year and loop 2 would search for a match in Product_L1, Geography_L1 and Year-1 and so on.
Table 1
Product level 1
Product level 2
Region level 1
Region level 2
Year
Obs. value
Portland cement
Cement
Peru
South America
2021
1
Portland cement
Cement
Switzerland
Europe
2021
2
Portland cement
Cement
USA
North America
2021
3
Portland cement
Cement
Brazil
South America
2021
4
Portland cement
Cement
South Africa
Africa
2021
5
Portland cement
Cement
India
Asia
2021
6
Portland cement
Cement
Brazil
South America
2020
7
Table 2
Product level 1
Product level 2
Region level 1
Region level 2
Year
Portland cement
Cement
Brazil
South America
2021
Portland cement
Cement
Switzerland
Europe
2021
Table 2 - with results
Product level 1
Product level 2
Region level 1
Region level 2
Year
Loop 1
Loop 2
x
Portland cement
Cement
Brazil
South America
2021
4
7
I have tried using the following code, but it comes up with the error 'Can only compare identically-labeled Series objects'. Does anyone have any suggestions on how to prevent this error?
Table_2['Loop_1'] = np.where((Table_1.Product_L1 == Table_2.Product_L1)
& (Table_1.Geography_L1 == Table_2.Geography_L1)
& (Table_1.Year == Table_2.Year),
Table_1(['obs_value'], ''))
| [
"You can perform a merge operation and provide a list of columns that you want from Table_1.\nimport pandas as pd\n\nTable_1 = pd.DataFrame({\n \"Product_L1\":[\"Portland cement\", \"Portland cement\", \"Portland cement\", \"Portland cement\", \"Portland cement\", \"Portland cement\", \"Portland cement\"],\n \"Product_L2\":[\"Cement\", \"Cement\", \"Cement\", \"Cement\", \"Cement\", \"Cement\", \"Cement\"],\n \"Geography_L1\": [\"Peru\", \"Switzerland\", \"USA\", \"Brazil\", \"South Africa\", \"India\", \"Brazil\"],\n \"Geography_L2\": [\"South America\", \"Europe\", \"North America\", \"South America\", \"Africa\", \"Asia\", \"South America\"],\n \"Year\": [2021, 2021, 2021, 2021, 2021, 2021, 2020],\n \"obs_value\": [1, 2, 3, 4, 5, 6, 7]\n })\n\nTable_2 = pd.DataFrame({\n \"Product_L1\":[\"Portland cement\", \"Portland cement\"],\n \"Product_L2\":[\"Cement\", \"Cement\"],\n \"Geography_L1\": [\"Brazil\", \"Switzerland\"],\n \"Geography_L2\": [\"South America\", \"Europe\"],\n \"Year\": [2021, 2021]\n })\n\ncolumns_list = ['Product_L1','Product_L2','Geography_L1','Geography_L2','Year','obs_value']\n\nresult = pd.merge(Table_2, Table_1[columns_list], how='left')\n\nresult is a new dataframe:\n Product_L1 Product_L2 Geography_L1 Geography_L2 Year obs_value\n0 Portland cement Cement Brazil South America 2021 4\n1 Portland cement Cement Switzerland Europe 2021 2\n\nEDIT: Based upon the update to the question, I think what you are trying to do is achievable using set_index and unstack. This will create a new dataframe with the observed values listed in columns 'Year_2020', 'Year_2021' etc.\nindex_columns = ['Product_L1','Product_L2','Geography_L1','Geography_L2', 'Year']\nedit_df = Table_1.set_index(index_columns)['obs_value'].unstack().add_prefix('Year_').reset_index()\n\n"
] | [
1
] | [] | [] | [
"match",
"python",
"where_clause"
] | stackoverflow_0074612422_match_python_where_clause.txt |
Q:
Original system and bakeoff meaning in NLP
So I am doing this homework in one of the Stanford courses and I managed to solve all the questions, but I am trying to understand the last one, so correct me if I am wrong.
One says build the original system: that is, building the model.
The other one is the bake-off: that is, comparing different models to each other to see which one performs the best.
Am I correct ?
This is the link to the homework: https://www.youtube.com/watch?v=vqNj1dr8-HM
It is the very end. It is just that these terms very confusing and new to me
Thanks in advance.
I need to know the exact steps for building the original system. What does it mean? What is the bake-off?
A:
Backoff means you go back to a n-1 gram level to calculate the probabilities when you encounter a word with prob=0. So in our case you will use a 3-gram model to calculate the probability of "sunny" in the context "is a very".
The most used scheme is called "stupid backoff" and whenever you go back 1 level you multiply the odds by 0.4. So if sunny exists in the 3-gram model the probability would be 0.4 * P("sunny"|"is a very").
You can go back to the unigram model if needed, multiplying by 0.4^n where n is the number of times you backed off.
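A tiny sketch of that scheme (the count tables here are hypothetical placeholders):
def stupid_backoff(word, context, counts, unigram_total, alpha=0.4):
    """Score `word` given a tuple of preceding words, backing off one level at a time."""
    factor = 1.0
    while context:
        ngram = context + (word,)
        if counts.get(ngram, 0) > 0:
            return factor * counts[ngram] / counts[context]
        context = context[1:]   # drop the oldest word: back off to an (n-1)-gram
        factor *= alpha         # multiply by 0.4 for every level we back off
    return factor * counts.get((word,), 0) / unigram_total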
| Original system and bakeoff meaning in NLP | So I am doing this homework in the one of standford courses and I managed to solve all the questions but I am trying to understand the last and correct me if I am wrong.
One it says build original system: That is building the model
The other one is bake off: That is comparing different models to each other to see the best one that perform the best.
Am I correct ?
This is the link to the homework: https://www.youtube.com/watch?v=vqNj1dr8-HM
It is the very end. It is just that these terms very confusing and new to me
Thank in advance.
I need to know the exact steps for building the original system. What does it mean>? What is the bake off?
| [
"Backoff means you go back to a n-1 gram level to calculate the probabilities when you encounter a word with prob=0. So in our case you will use a 3-gram model to calculate the probability of \"sunny\" in the context \"is a very\".\nThe most used scheme is called \"stupid backoff\" and whenever you go back 1 level you multiply the odds by 0.4. So if sunny exists in the 3-gram model the probability would be 0.4 * P(\"sunny\"|\"is a very\").\nYou can go back to the unigram model if needed multipliying by 0.4^n where n is the number of times you backed off.\n"
] | [
2
] | [] | [] | [
"nlp",
"python"
] | stackoverflow_0074612797_nlp_python.txt |
Q:
How can I use matplotlib.pyplot in a docker container?
I have a certain Python setup in a docker image named deep. I use it to run python code:
docker run --rm -it -v "$PWD":/app -w /app deep python some-code.py
For information, -v and -w options are to link a local file in the current path to the container.
However, I can't use matplotlib.pyplot. Let's say test.py is
import matplotlib.pyplot as plt
plt.plot([1,2], [3,4])
plt.show()
I got this error.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/dist-packages/matplotlib/pyplot.py", line 3147, in plot
ax = gca()
File "/usr/lib/python2.7/dist-packages/matplotlib/pyplot.py", line 928, in gca
return gcf().gca(**kwargs)
File "/usr/lib/python2.7/dist-packages/matplotlib/pyplot.py", line 578, in gcf
return figure()
File "/usr/lib/python2.7/dist-packages/matplotlib/pyplot.py", line 527, in figure
**kwargs)
File "/usr/lib/python2.7/dist-packages/matplotlib/backends/backend_tkagg.py", line 84, in new_figure_manager
return new_figure_manager_given_figure(num, figure)
File "/usr/lib/python2.7/dist-packages/matplotlib/backends/backend_tkagg.py", line 92, in new_figure_manager_given_figure
window = Tk.Tk()
File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 1818, in __init__
self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
_tkinter.TclError: no display name and no $DISPLAY environment variable
With solution search, I am having only one solution. I figured out I can do if
$ xauth list
xxxx/unix:0 yyyy 5nsk3hd # copy this list
$ docker run --rm -it -v "$PWD":/app -w /app \
--net=host -e DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
deep bash
inside-container$ xauth add xxxx/unix:0 yyyy 5nsk3hd # paste the list
inside-container$ python test.py # now the plot works!!
My question is, instead of all those launching bash, setting xauth, and running Python inside container, can I do such setting with docker run so that I can just run the code outside of the container?
I tried
docker run --rm -it -v "$PWD":/app -w /app \
--net=host -e DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e "xauth add xxxx/unix:0 yyyy 5nsk3hd" \
deep python test.py
using --entry parameter, but it didn't work. Please help.
A:
Interestingly, I found quite nice and thorough solutions in ROS community. http://wiki.ros.org/docker/Tutorials/GUI
For my problem, my final choice is the second way in the tutorial:
docker run --rm -it \
--user=$(id -u) \
--env="DISPLAY" \
--workdir=/app \
--volume="$PWD":/app \
--volume="/etc/group:/etc/group:ro" \
--volume="/etc/passwd:/etc/passwd:ro" \
--volume="/etc/shadow:/etc/shadow:ro" \
--volume="/etc/sudoers.d:/etc/sudoers.d:ro" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
deepaul python test.python
A:
As far as I know, there are two ways you can to this:
You can give Jupyter a try. Install Jupyter via Conda or pip, and then run Jupyter-notebook server. By exporting the server port of Jupyter, you can visit the Jupyter-notebook via a browser. You can then create a new python notebook and import the .py file you have, copy the code under your if __name__ == '__main__' to the new notebook if necessary. Finally, run the code in Jupyter, the image will show up below the code on the web page. matplotlib works smoothly with Jupyter. If you are willing to open a browser to run the code and view the result, this is the best way I can think of.
You can use the matplotlib headlessly. That means to remove all the code such as plt.show(). Use plt.savefig to save figures to filesystem instead of showing it in an opened window. Then you can check out these saved images using any image viewer.
I tried mounting X11 to docker images some time ago, like YW P Kwon's answer. It will only work on systems that use X11, and you can do this only on a local machine (I am not sure if X11 forward works). It is also not recommended in docker. While with the Jupyter and Headless solution, you can run your code on any platform. But you do need to modify your code a little bit.
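As a concrete illustration of the headless option described above, here is a minimal variant of test.py that needs no X server at all; the output path inside the container is an assumption.

# Headless version of test.py: select the non-interactive Agg backend
# before importing pyplot, then write the figure to disk instead of
# calling plt.show().
import matplotlib
matplotlib.use("Agg")            # no display / $DISPLAY needed
import matplotlib.pyplot as plt

plt.plot([1, 2], [3, 4])
plt.savefig("/app/plot.png")     # path inside the container (assumption)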
A:
I tried in my own way and it works fine..
#To run "matplotlib" inside docker, you must install these inside containers:-#
$yum install python3 -y //(must)
$yum install python3-tkinter -y //(must)
$pip3 install matplotlib // (must)
$pip3 install pandas (if required)
$pip3 install scikit-learn ( if required)
Now, run your container.
$docker run --rm -it --env=DISPLAY --workdir=/app -v="$PWD:/app <image_name> python3 model.py
A:
Below, I have described how to enable matplotlib plotting from a running docker container on a Windows 10 machine.
Note, I didn't have to change matplotlib backend using matplotlib.use('TkAgg') as maybe others suggested in other posts.
For test I used simple python program:
# test.py
import matplotlib.pyplot as plt
plt.plot([1,2], [3,4])
plt.show()
The first thing is to add the following into the dockerfile, to install tkinter backend which matplotlib uses for graphical representation:
RUN apt-get update -y
RUN apt-get install -y libx11-dev
RUN apt-get install -y python3-tk
The second thing is to install vcXsrv server onto windows machine, which enables and establishes communication from running container to windows machine. Install and run it - you may follow instructions described on this page.
The last thing to do before building and running the docker image is to set $DISPLAY variable in wsl terminal. I did this with:
export DISPLAY=IP:0.0
, where IP is the network IP of windows machine.
In my case I ran:
export DISPLAY=172.16.31.103:0.0
After successfully executed steps above the image can be built.
Build it with:
docker build -t image_name .
Then run the container using command below where for display the before-set variable will be used.
docker run -ti --rm -e DISPLAY=$DISPLAY image_name
If everything was set correctly the matplotlib figure will pop-up.
A:
It looks like matplotlib is not meant to be used in docker. Use seaborn instead.
I noticed, when I encountered that problem, that matplotlib was used to show and save figures. Seaborn can save files and it works fine in docker. Removing matplotlib and using seaborn might be a solution.
| How can I use matplotlib.pyplot in a docker container? | I have a certain setting of Python in an docker image named deep. I used to run python code
docker run --rm -it -v "$PWD":/app -w /app deep python some-code.py
For information, -v and -w options are to link a local file in the current path to the container.
However, I can't use matplotlib.pyplot. Let's say test.py is
import matplotlib.pyplot as plt
plt.plot([1,2], [3,4])
plt.show()
I got this error.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/dist-packages/matplotlib/pyplot.py", line 3147, in plot
ax = gca()
File "/usr/lib/python2.7/dist-packages/matplotlib/pyplot.py", line 928, in gca
return gcf().gca(**kwargs)
File "/usr/lib/python2.7/dist-packages/matplotlib/pyplot.py", line 578, in gcf
return figure()
File "/usr/lib/python2.7/dist-packages/matplotlib/pyplot.py", line 527, in figure
**kwargs)
File "/usr/lib/python2.7/dist-packages/matplotlib/backends/backend_tkagg.py", line 84, in new_figure_manager
return new_figure_manager_given_figure(num, figure)
File "/usr/lib/python2.7/dist-packages/matplotlib/backends/backend_tkagg.py", line 92, in new_figure_manager_given_figure
window = Tk.Tk()
File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 1818, in __init__
self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
_tkinter.TclError: no display name and no $DISPLAY environment variable
With solution search, I am having only one solution. I figured out I can do if
$ xauth list
xxxx/unix:0 yyyy 5nsk3hd # copy this list
$ docker run --rm -it -v "$PWD":/app -w /app \
--net=host -e DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
deep bash
inside-container$ xauth add xxxx/unix:0 yyyy 5nsk3hd # paste the list
inside-container$ python test.py # now the plot works!!
My question is, instead of all those launching bash, setting xauth, and running Python inside container, can I do such setting with docker run so that I can just run the code outside of the container?
I tried
docker run --rm -it -v "$PWD":/app -w /app \
--net=host -e DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e "xauth add xxxx/unix:0 yyyy 5nsk3hd" \
deep python test.py
using --entry parameter, but it didn't work. Please help.
| [
"Interestingly, I found quite nice and thorough solutions in ROS community. http://wiki.ros.org/docker/Tutorials/GUI\nFor my problem, my final choice is the second way in the tutorial:\ndocker run --rm -it \\\n --user=$(id -u) \\\n --env=\"DISPLAY\" \\\n --workdir=/app \\\n --volume=\"$PWD\":/app \\\n --volume=\"/etc/group:/etc/group:ro\" \\\n --volume=\"/etc/passwd:/etc/passwd:ro\" \\\n --volume=\"/etc/shadow:/etc/shadow:ro\" \\\n --volume=\"/etc/sudoers.d:/etc/sudoers.d:ro\" \\\n --volume=\"/tmp/.X11-unix:/tmp/.X11-unix:rw\" \\\n deepaul python test.python\n\n",
"As far as I know, there are two ways you can to this:\n\nYou can give Jupyter a try. Install Jupyter via Conda or pip, and then run Jupyter-notebook server. By exporting the server port of Jupyter, you can visit the Jupyter-notebook via a browser. You can then create a new python notebook and import the .py file you have, copy the code under your if __name__ == '__main__' to the new notebook if necessary. Finally, run the code in Jupyter, the image will show up below the code on the web page. matplotlib works smoothly with Jupyter. If you are willing to open a browser to run the code and view the result, this is the best way I can think of.\nYou can use the matplotlib headlessly. That means to remove all the code such as plt.show(). Use plt.savefig to save figures to filesystem instead of showing it in an opened window. Then you can check out these saved images using any image viewer.\n\nI tried mounting X11 to docker images some time ago, like YW P Kwon's answer. It will only work on systems that use X11, and you can do this only on a local machine (I am not sure if X11 forward works). It is also not recommended in docker. While with the Jupyter and Headless solution, you can run your code on any platform. But you do need to modify your code a little bit.\n",
"I tried in my own way and it works fine..\n#To run \"matplotlib\" inside docker, you must install these inside containers:-#\n\n$yum install python3 -y //(must)\n$yum install python3-tkinter -y //(must)\n$pip3 install matplotlib // (must)\n$pip3 install pandas (if required)\n$pip3 install scikit-learn ( if required)\n\nNow, run your container.\n\n$docker run --rm -it --env=DISPLAY --workdir=/app -v=\"$PWD:/app <image_name> python3 model.py\n\n",
"Bellow, I have described how to enable matplotlib plotting from running docker container and windows 10 machine.\nNote, I didn't have to change matplotlib backend using matplotlib.use('TkAgg') as maybe others suggested in other posts.\nFor test I used simple python program:\n# test.py\nimport matplotlib.pyplot as plt\nplt.plot([1,2], [3,4])\nplt.show()\n\nThe first thing is to add the following into the dockerfile, to install tkinter backend which matplotlib uses for graphical representation:\nRUN apt-get update -y\nRUN apt-get install -y libx11-dev\nRUN apt-get install -y python3-tk\n\nThe second thing is to install vcXsrv server onto windows machine, which enables and establishes communication from running container to windows machine. Install and run it - you may follow instructions described on this page.\nThe last thing to do before building and running the docker image is to set $DISPLAY variable in wsl terminal. I did this with:\nexport DISPLAY=IP:0.0\n\n, where IP is the network IP of windows machine.\nIn my case I ran:\nexport DISPLAY=172.16.31.103:0.0\n\nAfter successfully executed steps above the image can be built.\nBuild it with:\ndocker build -t image_name .\n\nThen run the container using command below where for display the before-set variable will be used.\ndocker run -ti --rm -e DISPLAY=$DISPLAY image_name\n\nIf everything was set correctly the matplotlib figure will pop-up.\n",
"It looks like matplotlib is not meant to be used in docker. Use seaborn instead.\nI noteice, when encounter thet problem, that matplotlib was used for show and save figures. Seaborn can save files and it works fine in docker. Removing matplotlib and use seaborn might be solution.\n"
] | [
25,
3,
3,
0,
0
] | [] | [] | [
"docker",
"matplotlib",
"python"
] | stackoverflow_0046018102_docker_matplotlib_python.txt |
Q:
Get pandas column where two column values are equal
I want to subset a DataFrame by two columns in different dataframes if the values in the columns are the same. Here is an example of df1 and df2:
df1
A
0 apple
1 pear
2 orange
3 apple
df2
B
0 apple
1 orange
2 orange
3 pear
I would like the output to be a subsetted df1 based upon the df2 column:
A
0 apple
2 orange
I tried
df1 = df1[df1.A == df2.B] but get the following error:
ValueError: Can only compare identically-labeled Series objects
I do not want to rename the column in either.
What is the best way to do this? Thanks
A:
If you need to compare the values positionally, create a MultiIndex by appending each column to the existing index and use Index.isin:
df = df1[df1.set_index('A', append=True).index.isin(df2.set_index('B', append=True).index)]
print (df)
A
0 apple
2 orange
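An alternative sketch (not part of the answer above): since the comparison is positional and both frames have the same length, comparing the underlying arrays avoids the label-alignment error entirely.

import pandas as pd

df1 = pd.DataFrame({"A": ["apple", "pear", "orange", "apple"]})
df2 = pd.DataFrame({"B": ["apple", "orange", "orange", "pear"]})

mask = df1["A"].to_numpy() == df2["B"].to_numpy()   # plain element-wise comparison
print(df1[mask])                                    # rows 0 and 2: apple, orange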
| Get pandas column where two column values are equal | I want to subset a DataFrame by two columns in different dataframes if the values in the columns are the same. Here is an example of df1 and df2:
df1
A
0 apple
1 pear
2 orange
3 apple
df2
B
0 apple
1 orange
2 orange
3 pear
I would like the output to be a subsetted df1 based upon the df2 column:
A
0 apple
2 orange
I tried
df1 = df1[df1.A == df2.B] but get the following error:
ValueError: Can only compare identically-labeled Series objects
I do not want to rename the column in either.
What is the best way to do this? Thanks
| [
"If need compare index values with both columns create Multiindex and use Index.isin:\ndf = df1[df1.set_index('A', append=True).index.isin(df2.set_index('B', append=True).index)]\nprint (df)\n A\n0 apple\n2 orange\n\n"
] | [
2
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074612960_pandas_python.txt |
Q:
Flask Form Action is not referring to the variable
I have a form and am passing the variable as below using Jinja template
<form action = "/user_data/{{period}}" method="POST">
It is not redirecting to the required page /user_data/Oct-2022.
But when just using {{period}} for testing in the html page, the variable returns the result Oct-2022. Not sure why the same variable is not getting passed in the form action.
Variable is printing as below in html page,
{{period}}
But it is not printing in the form,
<form action = "/user_data/{{period}}" method="POST">
This is the route method,
@app.route("/user_listing/<period>", methods = ['POST', 'GET'])
def user_data(period):
....
....
return render_template('user_data.html', period=period)
A:
First we imported the Flask class. An instance of this class will be our WSGI application.
Next we create an instance of this class. The first argument is the name of the application’s module or package. If you are using a single module (as in this example), you should use __name__ because depending on if it’s started as application or imported as module the name will be different ('__main__' versus the actual import name). This is needed so that Flask knows where to look for templates, static files, and so on. For more information have a look at the Flask documentation.
We then use the route() decorator to tell Flask what URL should trigger our function.
The function is given a name which is also used to generate URLs for that particular function, and returns the message we want to display in the user’s browser.
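A sketch of one way to keep the form action and the registered route in sync: note that the question's route is /user_listing/<period> while the form posts to /user_data/{{period}}. The inline template string below is an illustration, not the asker's actual user_data.html.

from flask import Flask, render_template_string

app = Flask(__name__)

FORM_TEMPLATE = """
<form action="{{ url_for('user_data', period=period) }}" method="POST">
    <input type="submit" value="Send">
</form>
"""

@app.route("/user_listing/<period>", methods=["POST", "GET"])
def user_data(period):
    # url_for('user_data', period='Oct-2022') renders as /user_listing/Oct-2022,
    # so the form always posts to the route that is actually registered.
    return render_template_string(FORM_TEMPLATE, period=period)

if __name__ == "__main__":
    app.run(debug=True)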
| Flask Form Action is not referring to the variable | I have a form and am passing the variable as below using Jinja template
<form action = "/user_data/{{period}}" method="POST">
It is not redirecting required page /user_data/Oct-2022
But while just using {{period}} for testing in html page, variable is returning the result as Oct-2022. Not sure why the same variable is not getting passed in the form action.
Variable is printing as below in html page,
{{period}}
But it is not printing in the form,
<form action = "/user_data/{{period}}" method="POST">
This is the route method,
@app.route("/user_listing/<period>", methods = ['POST', 'GET'])
def user_data(period):
....
....
return render_template('user_data.html', period=period)
| [
"First we imported the Flask class. An instance of this class will be our WSGI application.\nNext we create an instance of this class. The first argument is the name of the application’s module or package. If you are using a single module (as in this example), you should use name because depending on if it’s started as application or imported as module the name will be different ('main' versus the actual import name). This is needed so that Flask knows where to look for templates, static files, and so on. For more information have a look at the Flask documentation.\nWe then use the route() decorator to tell Flask what URL should trigger our function.\nThe function is given a name which is also used to generate URLs for that particular function, and returns the message we want to display in the user’s browser.\n"
] | [
0
] | [] | [] | [
"flask",
"python"
] | stackoverflow_0074612765_flask_python.txt |
Q:
Convert a String representation of a Dictionary to a dictionary
How can I convert the str representation of a dict, such as the following string, into a dict?
s = "{'muffin' : 'lolz', 'foo' : 'kitty'}"
I prefer not to use eval. What else can I use?
The main reason for this, is one of my coworkers classes he wrote, converts all input into strings. I'm not in the mood to go and modify his classes, to deal with this issue.
A:
You can use the built-in ast.literal_eval:
>>> import ast
>>> ast.literal_eval("{'muffin' : 'lolz', 'foo' : 'kitty'}")
{'muffin': 'lolz', 'foo': 'kitty'}
This is safer than using eval. As its own docs say:
>>> help(ast.literal_eval)
Help on function literal_eval in module ast:
literal_eval(node_or_string)
Safely evaluate an expression node or a string containing a Python
expression. The string or node provided may only consist of the following
Python literal structures: strings, numbers, tuples, lists, dicts, booleans,
and None.
For example:
>>> eval("shutil.rmtree('mongo')")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 1, in <module>
File "/opt/Python-2.6.1/lib/python2.6/shutil.py", line 208, in rmtree
onerror(os.listdir, path, sys.exc_info())
File "/opt/Python-2.6.1/lib/python2.6/shutil.py", line 206, in rmtree
names = os.listdir(path)
OSError: [Errno 2] No such file or directory: 'mongo'
>>> ast.literal_eval("shutil.rmtree('mongo')")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/Python-2.6.1/lib/python2.6/ast.py", line 68, in literal_eval
return _convert(node_or_string)
File "/opt/Python-2.6.1/lib/python2.6/ast.py", line 67, in _convert
raise ValueError('malformed string')
ValueError: malformed string
A:
https://docs.python.org/3.8/library/json.html
JSON can solve this problem though its decoder wants double quotes around keys and values. If you don't mind a replace hack...
import json
s = "{'muffin' : 'lolz', 'foo' : 'kitty'}"
json_acceptable_string = s.replace("'", "\"")
d = json.loads(json_acceptable_string)
# d = {u'muffin': u'lolz', u'foo': u'kitty'}
NOTE that if you have single quotes as a part of your keys or values this will fail due to improper character replacement. This solution is only recommended if you have a strong aversion to the eval solution.
More about json single quote: jQuery.parseJSON throws “Invalid JSON” error due to escaped single quote in JSON
A:
using json.loads:
>>> import json
>>> h = '{"foo":"bar", "foo2":"bar2"}'
>>> d = json.loads(h)
>>> d
{u'foo': u'bar', u'foo2': u'bar2'}
>>> type(d)
<type 'dict'>
A:
To OP's example:
s = "{'muffin' : 'lolz', 'foo' : 'kitty'}"
We can use Yaml to deal with this kind of non-standard json in string:
>>> import yaml
>>> s = "{'muffin' : 'lolz', 'foo' : 'kitty'}"
>>> s
"{'muffin' : 'lolz', 'foo' : 'kitty'}"
>>> yaml.load(s)
{'muffin': 'lolz', 'foo': 'kitty'}
A:
To summarize:
import ast, yaml, json, timeit
descs=['short string','long string']
strings=['{"809001":2,"848545":2,"565828":1}','{"2979":1,"30581":1,"7296":1,"127256":1,"18803":2,"41619":1,"41312":1,"16837":1,"7253":1,"70075":1,"3453":1,"4126":1,"23599":1,"11465":3,"19172":1,"4019":1,"4775":1,"64225":1,"3235":2,"15593":1,"7528":1,"176840":1,"40022":1,"152854":1,"9878":1,"16156":1,"6512":1,"4138":1,"11090":1,"12259":1,"4934":1,"65581":1,"9747":2,"18290":1,"107981":1,"459762":1,"23177":1,"23246":1,"3591":1,"3671":1,"5767":1,"3930":1,"89507":2,"19293":1,"92797":1,"32444":2,"70089":1,"46549":1,"30988":1,"4613":1,"14042":1,"26298":1,"222972":1,"2982":1,"3932":1,"11134":1,"3084":1,"6516":1,"486617":1,"14475":2,"2127":1,"51359":1,"2662":1,"4121":1,"53848":2,"552967":1,"204081":1,"5675":2,"32433":1,"92448":1}']
funcs=[json.loads,eval,ast.literal_eval,yaml.load]
for desc,string in zip(descs,strings):
print('***',desc,'***')
print('')
for func in funcs:
print(func.__module__+' '+func.__name__+':')
%timeit func(string)
print('')
Results:
*** short string ***
json loads:
4.47 µs ± 33.4 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
builtins eval:
24.1 µs ± 163 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
ast literal_eval:
30.4 µs ± 299 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
yaml load:
504 µs ± 1.29 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
*** long string ***
json loads:
29.6 µs ± 230 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
builtins eval:
219 µs ± 3.92 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
ast literal_eval:
331 µs ± 1.89 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
yaml load:
9.02 ms ± 92.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Conclusion:
prefer json.loads
A:
If the string can always be trusted, you could use eval (or use literal_eval as suggested; it's safe no matter what the string is.) Otherwise you need a parser. A JSON parser (such as simplejson) would work if he only ever stores content that fits with the JSON scheme.
A:
Use json. The ast library consumes a lot of memory and is slower. I have a process that needs to read a 156 MB text file: ast took 5 minutes for the dictionary conversion, while json took 1 minute and used 60% less memory!
A:
string = "{'server1':'value','server2':'value'}"
#Now removing { and }
s = string.replace("{" ,"")
finalstring = s.replace("}" , "")
#Splitting the string based on , we get key value pairs
list = finalstring.split(",")
dictionary ={}
for i in list:
#Get Key Value pairs separately to store in dictionary
keyvalue = i.split(":")
#Replacing the single quotes in the leading.
m= keyvalue[0].strip('\'')
m = m.replace("\"", "")
dictionary[m] = keyvalue[1].strip('"\'')
print dictionary
A:
Optimized code of Siva Kameswara Rao Munipalle
s = s.replace("{", "").replace("}", "").split(",")
dictionary = {}
for i in s:
dictionary[i.split(":")[0].strip('\'').replace("\"", "")] = i.split(":")[1].strip('"\'')
print(dictionary)
A:
No extra libs are used (python2):
dict_format_string = "{'1':'one', '2' : 'two'}"
d = {}
elems = filter(str.isalnum,dict_format_string.split("'"))
values = elems[1::2]
keys = elems[0::2]
d.update(zip(keys,values))
NOTE: As it has hardcoded split("'") will work only for strings where data is "single quoted".
NOTE2: In python3 you need to wrap filter() to list() to get list.
A:
My string didn't have quotes inside:
s = 'Date: 2022-11-29T10:57:01.024Z, Size: 910.11 KB'
My solution was to use str.split:
{k:v for k, v in map(lambda d: d.split(': '), s.split(', '))}
| Convert a String representation of a Dictionary to a dictionary | How can I convert the str representation of a dict, such as the following string, into a dict?
s = "{'muffin' : 'lolz', 'foo' : 'kitty'}"
I prefer not to use eval. What else can I use?
The main reason for this, is one of my coworkers classes he wrote, converts all input into strings. I'm not in the mood to go and modify his classes, to deal with this issue.
| [
"You can use the built-in ast.literal_eval:\n>>> import ast\n>>> ast.literal_eval(\"{'muffin' : 'lolz', 'foo' : 'kitty'}\")\n{'muffin': 'lolz', 'foo': 'kitty'}\n\nThis is safer than using eval. As its own docs say:\n\n>>> help(ast.literal_eval)\nHelp on function literal_eval in module ast:\n\nliteral_eval(node_or_string)\n Safely evaluate an expression node or a string containing a Python\n expression. The string or node provided may only consist of the following\n Python literal structures: strings, numbers, tuples, lists, dicts, booleans,\n and None.\n\nFor example:\n>>> eval(\"shutil.rmtree('mongo')\")\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"<string>\", line 1, in <module>\n File \"/opt/Python-2.6.1/lib/python2.6/shutil.py\", line 208, in rmtree\n onerror(os.listdir, path, sys.exc_info())\n File \"/opt/Python-2.6.1/lib/python2.6/shutil.py\", line 206, in rmtree\n names = os.listdir(path)\nOSError: [Errno 2] No such file or directory: 'mongo'\n>>> ast.literal_eval(\"shutil.rmtree('mongo')\")\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/opt/Python-2.6.1/lib/python2.6/ast.py\", line 68, in literal_eval\n return _convert(node_or_string)\n File \"/opt/Python-2.6.1/lib/python2.6/ast.py\", line 67, in _convert\n raise ValueError('malformed string')\nValueError: malformed string\n\n",
"https://docs.python.org/3.8/library/json.html\nJSON can solve this problem though its decoder wants double quotes around keys and values. If you don't mind a replace hack...\nimport json\ns = \"{'muffin' : 'lolz', 'foo' : 'kitty'}\"\njson_acceptable_string = s.replace(\"'\", \"\\\"\")\nd = json.loads(json_acceptable_string)\n# d = {u'muffin': u'lolz', u'foo': u'kitty'}\n\nNOTE that if you have single quotes as a part of your keys or values this will fail due to improper character replacement. This solution is only recommended if you have a strong aversion to the eval solution.\nMore about json single quote: jQuery.parseJSON throws “Invalid JSON” error due to escaped single quote in JSON\n",
"using json.loads:\n>>> import json\n>>> h = '{\"foo\":\"bar\", \"foo2\":\"bar2\"}'\n>>> d = json.loads(h)\n>>> d\n{u'foo': u'bar', u'foo2': u'bar2'}\n>>> type(d)\n<type 'dict'>\n\n",
"To OP's example:\ns = \"{'muffin' : 'lolz', 'foo' : 'kitty'}\"\n\nWe can use Yaml to deal with this kind of non-standard json in string:\n>>> import yaml\n>>> s = \"{'muffin' : 'lolz', 'foo' : 'kitty'}\"\n>>> s\n\"{'muffin' : 'lolz', 'foo' : 'kitty'}\"\n>>> yaml.load(s)\n{'muffin': 'lolz', 'foo': 'kitty'}\n\n",
"To summarize:\nimport ast, yaml, json, timeit\n\ndescs=['short string','long string']\nstrings=['{\"809001\":2,\"848545\":2,\"565828\":1}','{\"2979\":1,\"30581\":1,\"7296\":1,\"127256\":1,\"18803\":2,\"41619\":1,\"41312\":1,\"16837\":1,\"7253\":1,\"70075\":1,\"3453\":1,\"4126\":1,\"23599\":1,\"11465\":3,\"19172\":1,\"4019\":1,\"4775\":1,\"64225\":1,\"3235\":2,\"15593\":1,\"7528\":1,\"176840\":1,\"40022\":1,\"152854\":1,\"9878\":1,\"16156\":1,\"6512\":1,\"4138\":1,\"11090\":1,\"12259\":1,\"4934\":1,\"65581\":1,\"9747\":2,\"18290\":1,\"107981\":1,\"459762\":1,\"23177\":1,\"23246\":1,\"3591\":1,\"3671\":1,\"5767\":1,\"3930\":1,\"89507\":2,\"19293\":1,\"92797\":1,\"32444\":2,\"70089\":1,\"46549\":1,\"30988\":1,\"4613\":1,\"14042\":1,\"26298\":1,\"222972\":1,\"2982\":1,\"3932\":1,\"11134\":1,\"3084\":1,\"6516\":1,\"486617\":1,\"14475\":2,\"2127\":1,\"51359\":1,\"2662\":1,\"4121\":1,\"53848\":2,\"552967\":1,\"204081\":1,\"5675\":2,\"32433\":1,\"92448\":1}']\nfuncs=[json.loads,eval,ast.literal_eval,yaml.load]\n\nfor desc,string in zip(descs,strings):\n print('***',desc,'***')\n print('')\n for func in funcs:\n print(func.__module__+' '+func.__name__+':')\n %timeit func(string) \n print('')\n\nResults:\n*** short string ***\n\njson loads:\n4.47 µs ± 33.4 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)\nbuiltins eval:\n24.1 µs ± 163 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)\nast literal_eval:\n30.4 µs ± 299 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)\nyaml load:\n504 µs ± 1.29 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\n\n*** long string ***\n\njson loads:\n29.6 µs ± 230 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)\nbuiltins eval:\n219 µs ± 3.92 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\nast literal_eval:\n331 µs ± 1.89 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\nyaml load:\n9.02 ms ± 92.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n\nConclusion:\nprefer json.loads\n",
"If the string can always be trusted, you could use eval (or use literal_eval as suggested; it's safe no matter what the string is.) Otherwise you need a parser. A JSON parser (such as simplejson) would work if he only ever stores content that fits with the JSON scheme.\n",
"Use json. the ast library consumes a lot of memory and and slower. I have a process that needs to read a text file of 156Mb. Ast with 5 minutes delay for the conversion dictionary json and 1 minutes using 60% less memory!\n",
"string = \"{'server1':'value','server2':'value'}\"\n\n#Now removing { and }\ns = string.replace(\"{\" ,\"\")\nfinalstring = s.replace(\"}\" , \"\")\n\n#Splitting the string based on , we get key value pairs\nlist = finalstring.split(\",\")\n\ndictionary ={}\nfor i in list:\n #Get Key Value pairs separately to store in dictionary\n keyvalue = i.split(\":\")\n\n #Replacing the single quotes in the leading.\n m= keyvalue[0].strip('\\'')\n m = m.replace(\"\\\"\", \"\")\n dictionary[m] = keyvalue[1].strip('\"\\'')\n\nprint dictionary\n\n",
"Optimized code of Siva Kameswara Rao Munipalle\ns = s.replace(\"{\", \"\").replace(\"}\", \"\").split(\",\")\n \ndictionary = {}\n\nfor i in s:\n dictionary[i.split(\":\")[0].strip('\\'').replace(\"\\\"\", \"\")] = i.split(\":\")[1].strip('\"\\'')\n \nprint(dictionary)\n\n",
"no any libs are used (python2):\ndict_format_string = \"{'1':'one', '2' : 'two'}\"\nd = {}\nelems = filter(str.isalnum,dict_format_string.split(\"'\"))\nvalues = elems[1::2]\nkeys = elems[0::2]\nd.update(zip(keys,values))\n\nNOTE: As it has hardcoded split(\"'\") will work only for strings where data is \"single quoted\".\nNOTE2: In python3 you need to wrap filter() to list() to get list.\n",
"My string didn't have quotes inside:\ns = 'Date: 2022-11-29T10:57:01.024Z, Size: 910.11 KB'\nMy solution was to use str.split:\n{k:v for k, v in map(lambda d: d.split(': '), s.split(', '))}\n"
] | [
1594,
368,
225,
52,
31,
25,
20,
11,
7,
6,
0
] | [] | [] | [
"dictionary",
"python",
"string"
] | stackoverflow_0000988228_dictionary_python_string.txt |
Q:
Where does pip3 install package binaries?
I see that, depending on the system and configuration, packages are installed in different places.
Example:
Machine 1:
pip3 install fb-idb
pip3 show fb-idb
> ...
> /opt/homebrew/lib/python3.9/site-packages
Machine 2:
pip3 install fb-idb
pip3 show fb-idb
> ...
> /us/local/lib/python3.10/site-packages
Now the problem I have is that on machine 1, I got the path to the binary by executing
which idb (> /opt/homebrew/bin/idb), but on machine 2, it seems the bin dir wasn't added to the path, so which doesn't work.
Is there a way to figure out where the binaries are installed, if I only have the site-packages path?
A:
pip3 show --files fb-idb shows where pip has installed all the files of the package. Run
pip3 show --files fb-idb | grep -F /bin/
to extract the directory where pip installed scripts and entry points (On Windows it's \Scripts\). The directories are related to the header Location: so either do grep -F Location: separately or do it combined:
pip3 show --files fb-idb | grep 'Location:\|/bin/'
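If you would rather ask the interpreter itself than parse pip's output, a small sketch (an alternative approach, not a replacement for the answer above) is to use the standard sysconfig module:

import sysconfig

print(sysconfig.get_path("scripts"))   # e.g. /opt/homebrew/bin, /usr/local/bin or ...\Scripts
print(sysconfig.get_path("purelib"))   # the matching site-packages directory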
| Where does pip3 install package binaries? | It see that depending on system and configuration, packages are installed in different places.
Example:
Machine 1:
pip3 install fb-idb
pip3 show fb-idb
> ...
> /opt/homebrew/lib/python3.9/site-packages
Machine 2:
pip3 install fb-idb
pip3 show fb-idb
> ...
> /us/local/lib/python3.10/site-packages
Now the problem I have is that on machine 1, I got the path to the binary by executing
which idb (> /opt/homebrew/bin/idb), but on machine 2, it seems the bin dir wasn't added to the path, so which doesn't work.
Is there a way to figure out where the binaries are installed, if I only have the site-packages path?
| [
"pip3 show --files fb-idb shows where pip has installed all the files of the package. Run\npip3 show --files fb-idb | grep -F /bin/\n\nto extract the directory where pip installed scripts and entry points (On Windows it's \\Scripts\\). The directories are related to the header Location: so either do grep -F Location: separately or do it combined:\npip3 show --files fb-idb | grep 'Location:\\|/bin/'\n\n"
] | [
1
] | [] | [] | [
"pip",
"python"
] | stackoverflow_0074597855_pip_python.txt |
Q:
Fails using multiprocessing in decorators (Can't pickle : it's not the same object as __main__.test_func)
I want to use a decorator with multiprocessing to stop the function test_func by timeout.
When I don't use the decorator (the commented-out just_func()), the process with test_func is killed successfully. But when I do the same in the decorator function_to_process I get the error message:
_pickle.PicklingError: Can't pickle <function test_func at 0x000001647506BAE8>: it's not the same object as __main__.test_func
PS C:\data\temp> Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Program Files\Python36\lib\multiprocessing\spawn.py", line 99, in spawn_main
    new_handle = reduction.steal_handle(parent_pid, pipe_handle)
  File "C:\Program Files\Python36\lib\multiprocessing\reduction.py", line 82, in steal_handle
_winapi.PROCESS_DUP_HANDLE, False, source_pid)
OSError: [WinError 87] The parameter is incorrect
Could you please help me with what I do wrong and how to fix? In general, I try to create solution for windows part of pytest_timeout to kill test_case by test, not whole test_suite.
Thank you!
Code:
import time
import multiprocessing
def function_to_process(func):
def process_wrapper():
process = multiprocessing.Process(target=func, args=())
process.start()
process.join(timeout=5)
if (process.is_alive):
process.terminate()
print("done in decorator")
return process_wrapper
@function_to_process
def test_func():
time.sleep(10)
def just_func():
process = multiprocessing.Process(target=test_func, args=())
process.start()
process.join(timeout=5)
if (process.is_alive):
print("terminated")
process.terminate()
if __name__ == "__main__":
print("start")
test_func()
# just_func()
print("done in main")
A:
I've found the solution here Windows: using decorator with multiprocessing
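For reference, a minimal sketch of one workaround along those lines (an illustration, not the linked answer verbatim): on Windows the child process unpickles the target by its name, so __main__.test_func must still refer to the original, undecorated function; binding the wrapped version to a different name avoids the "not the same object" mismatch.

import time
import multiprocessing


def function_to_process(func):
    def process_wrapper():
        process = multiprocessing.Process(target=func, args=())
        process.start()
        process.join(timeout=5)
        if process.is_alive():          # note: is_alive() is a method call
            process.terminate()
        print("done in decorator")
    return process_wrapper


def test_func():                        # stays picklable as __main__.test_func
    time.sleep(10)


timed_test_func = function_to_process(test_func)   # decorate under a new name


if __name__ == "__main__":
    timed_test_func()
    print("done in main")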
| Fails using multiprocessing in decorators (Can't pickle : it's not the same object as __main__.test_func) | I want to use decorator with multiprocessing to stop function test_func by timout.
When I don't use decorator (commited just_func()), process with test_func killed successfully. But when I do the same in decorator function_to_process I've got the error message:
_pickle.PicklingError: Can't pickle <function test_func at 0x000001647506BAE8>: it's not the same object as __main__.test_func
PS C:\data\temp> Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Program Files\Python36\lib\multiprocessing\spawn.py", line 99, in spawn_main
new_handle = reduction.steal_handle(parent_pid, pipe_handle) File "C:\Program Files\Python36\lib\multiprocessing\reduction.py", line 82, in steal_handle
_winapi.PROCESS_DUP_HANDLE, False, source_pid)
OSError: [WinError 87] The parameter is incorrect
Could you please help me with what I do wrong and how to fix? In general, I try to create solution for windows part of pytest_timeout to kill test_case by test, not whole test_suite.
Thank you!
Code:
import time
import multiprocessing
def function_to_process(func):
def process_wrapper():
process = multiprocessing.Process(target=func, args=())
process.start()
process.join(timeout=5)
if (process.is_alive):
process.terminate()
print("done in decorator")
return process_wrapper
@function_to_process
def test_func():
time.sleep(10)
def just_func():
process = multiprocessing.Process(target=test_func, args=())
process.start()
process.join(timeout=5)
if (process.is_alive):
print("terminated")
process.terminate()
if __name__ == "__main__":
print("start")
test_func()
# just_func()
print("done in main")
| [
"I've found the solution here Windows: using decorator with multiprocessing\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074562015_python.txt |
Q:
Generate Header file for C code using python script
I am trying to generate a header file for C code using a Python script.
I want to read some variables from a csv file; the problem is that I can't use libraries in the C code, so I am not able to read the csv file from the C code.
I need to develop a Python script able to create a kind of simple input list for this C code.
Any suggestions ?
A:
I want to read some variables from csv file, the problem that i cant use libraries in the c code so i am not able to read the csv file from the c code.
And you don't need to. Simply this is how the file gets generated:
[CSV File] -> Python -> [.h file]
So, you actually need to parse and convert the file in python, not in C.
How to read CSV data in python?
You've got plenty of ways. My preference is reading it the simplest way. Consider your csv is just some floating point number lines:
with open('file.csv', 'r') as file:
data = file.readlines()
rows = []
for line in data:
    rows.append([float(item.replace(' ', '')) for item in line.replace('\n', '').split(',')])
How to generate header using extracted csv data?
Simply, it depends on you. It's not hard. Are you going to use 2d arrays?
Let me know if you've got any difficulties then I'll write a sample code for you. I don't find it hard to, though.
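Picking up the offer above, a minimal sketch of the header-generation step; the header file name, the array name, and the sample rows are assumptions, and in practice rows would come from the CSV-parsing snippet above.

# Write the parsed rows into a C header as a 2-D float array,
# so the C code only has to #include it.
rows = [[1.0, 2.5, 3.0], [4.0, 5.5, 6.0]]   # hypothetical output of the parsing step

with open("config_data.h", "w") as header:
    header.write("#ifndef CONFIG_DATA_H\n#define CONFIG_DATA_H\n\n")
    header.write(f"#define DATA_ROWS {len(rows)}\n")
    header.write(f"#define DATA_COLS {len(rows[0])}\n\n")
    header.write("static const float data[DATA_ROWS][DATA_COLS] = {\n")
    for row in rows:
        header.write("    {" + ", ".join(f"{v}f" for v in row) + "},\n")
    header.write("};\n\n#endif /* CONFIG_DATA_H */\n")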
| Generate Header file for C code using python script | I am trying to generate a header file for C code using pyhton script.
I want to read some variables from csv file, the problem that i cant use libraries in the c code so i am not able to read the csv file from the c code.
I need to develop python script able to create a kind of simple input list to this C code.
Any suggestions ?
| [
"\nI want to read some variables from csv file, the problem that i cant use libraries in the c code so i am not able to read the csv file from the c code.\n\nAnd you don't need to. Simply this is how the file gets generated:\n[CSV File] -> Python -> [.h file]\n\nSo, you actually need to parse and convert the file in python, not in C.\nHow to read CSV data in python?\nYou've got plenty of ways. My preference is reading it the simplest way. Consider your csv is just some floating point number lines:\nwith open('file.csv', 'r') as file:\n data = file.readlines()\nrows = []\nfor line in data:\n rows.append([float(item.replace(' ', '')) for item in line.replace('\\n', '')])\n\nHow to generate header using extracted csv data?\nSimply, it depends on you. It's not hard. Are you going to use 2d arrays?\nLet me know if you've got any difficulties then I'll write a sample code for you. I don't find it hard to, though.\n"
] | [
1
] | [] | [] | [
"c",
"python"
] | stackoverflow_0074612751_c_python.txt |
Q:
What is the most efficient way to extract and process files from the link?
I am able to browse all the links I need, but these links redirect me to websites which have other links to pdf files, and I have to open and process these pdfs. But I do not know the most efficient way to do it.
import requests
from bs4 import BeautifulSoup
import re
url = 'https://oeil.secure.europarl.europa.eu/oeil/popups/ficheprocedure.do?reference=2014/0124(COD)&l=en'
reqs = requests.get(url)
soup = BeautifulSoup(reqs.text, 'html.parser')
urls = []
for link in soup.find_all("a",href=re.compile("AM")):
print(link.get('href'))
Output:
https://www.europarl.europa.eu/doceo/document/EMPL-AM-544465_EN.html
https://www.europarl.europa.eu/doceo/document/EMPL-AM-541655_EN.html
https://www.europarl.europa.eu/doceo/document/EMPL-AM-551891_EN.html
https://www.europarl.europa.eu/doceo/document/EMPL-AM-544465_EN.html
https://www.europarl.europa.eu/doceo/document/EMPL-AM-541655_EN.html
https://www.europarl.europa.eu/doceo/document/EMPL-AM-551891_EN.html
A:
For all links that you crawl from the main url, you need to do exactly the same as before (request, bs4, extract hrefs).
Then check if href of link ends with ".pdf"
If href is a relative path of the pdf file, use urllib to extract the domain from the website url and concatenate the domain and the pdf file name again
E. g.:
from urllib.parse import urlparse
domain = urlparse("https://www.europarl.europa.eu/doceo/document/EMPL-AM-544465_EN.html").netloc
Do another get request to retrieve the content of the pdf file
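A short sketch of those steps (the choice to save each file under its own name is an assumption): resolve relative hrefs against the page URL with urljoin and download anything that ends in ".pdf".

from urllib.parse import urljoin
import requests
from bs4 import BeautifulSoup

page_url = "https://www.europarl.europa.eu/doceo/document/EMPL-AM-544465_EN.html"
soup = BeautifulSoup(requests.get(page_url).text, "html.parser")

for link in soup.find_all("a", href=True):
    href = link["href"]
    if href.lower().endswith(".pdf"):
        pdf_url = urljoin(page_url, href)            # handles relative and absolute hrefs
        pdf_bytes = requests.get(pdf_url).content
        with open(pdf_url.rsplit("/", 1)[-1], "wb") as f:   # save under the file's own name
            f.write(pdf_bytes)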
A:
You've done it right. You just need a simple modification:
Solution for this specific problem
You have done the hardest part. Just replace ".html" with ".pdf" and then download the file is a manner like this:
for link in soup.find_all("a",href=re.compile("AM")):
page_url = str(link.get('href'))
pdf_url = page_url.replace('.html', '.pdf')
pdf_response = requests.get(url)
with open('/blah_blah_blah/metadata.pdf', 'wb') as f:
f.write(response.content)
What was the problem?
Dear friends, many websites have a dedicated webpage for any of their links. Usually it is so, for the sake of "SEO" purposes!
The good point is: "There is often a relation between the page link and the target file's link.
For example here we have:
https://www.europarl.europa.eu/doceo/document/EMPL-AM-544465_EN.html
And when we check that page we find that the url is:
https://www.europarl.europa.eu/doceo/document/EMPL-AM-544465_EN.pdf
So you just need to change the ".html" tail to ".pdf" and then go for a download.
What to do if there is no relation?
Then you should do exactly what a human does. Open/Fetch the page and then extract the content link explicitly. After that you can download the target file.
Do we have even harder situations?
Yes. Sometimes the website has AJAX and ... operations to get the link. In those cases I suggest trying to follow the behavior of the browser and to check if there exists a pattern between the link (usually an API) and the content.
But there exists even harder situations that you will use alternatives like "Selenium".
| What is the most efficient way to extract and process files from the link? | I am able to browse all the links I need, but these links are redirecting me to the websites which have another links with pdf files, I have to open and process these pdfs. But I do not know what is the most efficient way to do it
import requests
from bs4 import BeautifulSoup
import re
url = 'https://oeil.secure.europarl.europa.eu/oeil/popups/ficheprocedure.do?reference=2014/0124(COD)&l=en'
reqs = requests.get(url)
soup = BeautifulSoup(reqs.text, 'html.parser')
urls = []
for link in soup.find_all("a",href=re.compile("AM")):
print(link.get('href'))
Output:
https://www.europarl.europa.eu/doceo/document/EMPL-AM-544465_EN.html
https://www.europarl.europa.eu/doceo/document/EMPL-AM-541655_EN.html
https://www.europarl.europa.eu/doceo/document/EMPL-AM-551891_EN.html
https://www.europarl.europa.eu/doceo/document/EMPL-AM-544465_EN.html
https://www.europarl.europa.eu/doceo/document/EMPL-AM-541655_EN.html
https://www.europarl.europa.eu/doceo/document/EMPL-AM-551891_EN.html
| [
"For all links that you crawl from the main url, you need to do exactly the same as before (request, bs4, extract hrefs).\n\nThen check if href of link ends with \".pdf\"\nIf href is a relative path of the pdf file, use urllib to extract the domain from the website url and concatenate the domain and the pdf file name again\n\nE. g.:\nfrom urllib.parse import urlparse\n\ndomain = urlparse(\"https://www.europarl.europa.eu/doceo/document/EMPL-AM-544465_EN.html\").netloc\n\n\nDo another get request to retrieve the context of the pdf file\n\n",
"You've done it right. You just need a simple modification:\nSolution for this specific problem\nYou have done the hardest part. Just replace \".html\" with \".pdf\" and then download the file is a manner like this:\nfor link in soup.find_all(\"a\",href=re.compile(\"AM\")):\n page_url = str(link.get('href'))\n pdf_url = page_url.replace('.html', '.pdf')\n pdf_response = requests.get(url)\n with open('/blah_blah_blah/metadata.pdf', 'wb') as f:\n f.write(response.content)\n\nWhat was the problem?\nDear friends, many websites have a dedicated webpage for any of their links. Usually it is so, for the sake of \"SEO\" purposes!\nThe good point is: \"There is often a relation between the page link and the target file's link.\nFor example here we have:\nhttps://www.europarl.europa.eu/doceo/document/EMPL-AM-544465_EN.html\n\nAnd when we check that page we find that the url is:\nhttps://www.europarl.europa.eu/doceo/document/EMPL-AM-544465_EN.pdf\n\nSo you just need to change the \".html\" tail to \".pdf\" and then go for a download.\nWhat to do if there is no relation?\nThen you should do exactly what a human does. Open/Fetch the page and then extract the content link explicitly. After that you can download the target file.\nDo we have even harder situations?\nYes. Sometimes the website has AJAX and ... operations to get the link. In those cases I suggest trying to follow the behavior of the browser and to check if there exists a pattern between the link (usually an API) and the content.\nBut there exists even harder situations that you will use alternatives like \"Selenium\".\n"
] | [
1,
0
] | [] | [] | [
"beautifulsoup",
"parsing",
"pdf",
"python",
"web_scraping"
] | stackoverflow_0074612624_beautifulsoup_parsing_pdf_python_web_scraping.txt |
Q:
How to get first n characters from another column that doesn't contain specific characters
I have this dataframe
ID       product name
1BJM10   1BJM10_RS2022_PK
         L_RS2022_PK
         2PKL10_RS2022_PK
         3BDG10_RS2022_PK
1BJM10   1BJM10_RS2022_PK
My desired output is like this
ID       product name
1BJM10   1BJM10_RS2022_PK
-        L_RS2022_PK
2PKL10   2PKL10_RS2022_PK
3BDG10   3BDG10_RS2022_PK
1BJM10   1BJM10_RS2022_PK
2nd row shouldn't get the ID because it has "_" in the product name's first 6 characters.
I have tried this code, but it doesn't work
df.loc[df['ID'].isna()] = df['ID'].fillna(~df['product name'].str[:6].contains("_"))
A:
Chain both conditions by & for bitwise AND with helper Series:
s = df['product name'].str[:6]
df.loc[df['ID'].isna() & ~s.str.contains("_"), 'ID'] = s
print (df)
ID product name
0 1BJM10 1BJM10_RS2022_PK
1 NaN L_RS2022_PK
2 2PKL10 2PKL10_RS2022_PK
3 3BDG10 3BDG10_RS2022_PK
4 1BJM10 1BJM10_RS2022_PK
A:
Try:
df['ID'] = df['product name'].apply(lambda x: x[:x.find('_')] if x.find('_')>=6 else '')
| How to get first n characters from another column that doesn't contain specific characters | I have this dataframe
ID
product name
1BJM10
1BJM10_RS2022_PK
L_RS2022_PK
2PKL10_RS2022_PK
3BDG10_RS2022_PK
1BJM10
1BJM10_RS2022_PK
My desired output is like this
ID
product name
1BJM10
1BJM10_RS2022_PK
-
L_RS2022_PK
2PKL10
2PKL10_RS2022_PK
3BDG10
3BDG10_RS2022_PK
1BJM10
1BJM10_RS2022_PK
2nd row shouldn't get the ID because is has "_" in the product name's first 6 characters.
I have tried this code, but it doesn't work
df.loc[df['ID'].isna()] = df['ID'].fillna(~df['product name'].str[:6].contains("_"))
| [
"Chain both conditions by & for bitwise AND with helper Series:\ns = df['product name'].str[:6]\ndf.loc[df['ID'].isna() & ~s.str.contains(\"_\"), 'ID'] = s\nprint (df)\n ID product name\n0 1BJM10 1BJM10_RS2022_PK\n1 NaN L_RS2022_PK\n2 2PKL10 2PKL10_RS2022_PK\n3 3BDG10 3BDG10_RS2022_PK\n4 1BJM10 1BJM10_RS2022_PK\n\n",
"Try:\ndf['ID'] = df['product name'].apply(lambda x: x[:x.find('_')] if x.find('_')>=6 else '')\n\n"
] | [
2,
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074612969_pandas_python.txt |
Q:
Displaying country in choropleth map - UnicodeDecodeError: 'charmap' codec can't decode byte 0x88
I'm trying to display the unemployment of the Czech Republic's counties in a choropleth map.
I have json coordinates and unemployment data saved in a csv file. But I'm getting this error:
UnicodeDecodeError: 'charmap' codec can't decode byte 0x88 in position
211750: character maps to <undefined>.
Which is weird because when I run this code (basically same code with different data): https://python-graph-gallery.com/292-choropleth-map-with-folium/ everything works fine.
I have a feeling that it's not possible to display CZ counties in choropleth map or is it?
You can find files I use right here: https://github.com/MichalLeh/CZ-map
import pandas as pd
import folium
# Load the shape of the zone
state_geo = 'J:/CZ-counties.json'
# Load the unemployment value of each state (county)
state_data = pd.read_csv('J:/CZ-unemploy.csv')
# Initialize the map:
m = folium.Map([15, 74], zoom_start=6)
# Add the color for the chloropleth:
choropleth = folium.Choropleth(
geo_data=state_geo,
name='choropleth',
data=state_data,
columns=['Name', 'Un'],
key_on='feature.id',
fill_color='YlGn',
fill_opacity=0.7,
line_opacity=0.2,
legend_name='Unemployment',
).add_to(m)
folium.LayerControl(collapsed=True).add_to(m)
# Save to html
m.save('map.html')
Terminal output:
Traceback (most recent call last):
File "J:\testCz", line 12, in <module>
choropleth = folium.Choropleth(
File "C:\Users\Michal\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\folium\features.py", line 1247, in __init__
self.geojson = GeoJson(
File "C:\Users\Michal\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\folium\features.py", line 453, in __init__
self.data = self.process_data(data)
File "C:\Users\Michal\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\folium\features.py", line 491, in process_data
return json.loads(f.read())
File "C:\ProgramFiles\WindowsApps\PythonSoftwareFoundation.Python.3.8_3.8.1008.0_x64__qbz5n2kfra8p0\lib\encodings\cp1250.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x88 in position 211750: character maps to <undefined>.
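One way to sidestep the locale-dependent default encoding seen in the traceback, sketched under the assumption that the GeoJSON file itself is UTF-8 encoded: decode the file explicitly and pass the parsed dict to Choropleth instead of the path, so folium never re-opens the file with the Windows cp1250 default.

import json
import folium
import pandas as pd

with open('J:/CZ-counties.json', encoding='utf-8') as f:   # decode explicitly as UTF-8
    state_geo = json.load(f)

state_data = pd.read_csv('J:/CZ-unemploy.csv')

m = folium.Map([49.8, 15.5], zoom_start=7)   # rough centre of the Czech Republic
folium.Choropleth(
    geo_data=state_geo,          # already-parsed dict, no file read inside folium
    data=state_data,
    columns=['Name', 'Un'],
    key_on='feature.id',
    fill_color='YlGn',
).add_to(m)
m.save('map.html')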
| Displaying country in choropleth map - UnicodeDecodeError: 'charmap' codec can't decode byte 0x88 | I'm trying to display unemployment of Czech Republic's counties in choropleth map.
I have json coordinates and unemployment data saved in csv file. But Im getting this error:
UnicodeDecodeError: 'charmap' codec can't decode byte 0x88 in position
211750: character maps to undefined.
Which is weird because when I run this code (basically same code with different data): https://python-graph-gallery.com/292-choropleth-map-with-folium/ everything works fine.
I have a feeling that it's not possible to display CZ counties in choropleth map or is it?
You can find files I use right here: https://github.com/MichalLeh/CZ-map
import pandas as pd
import folium
# Load the shape of the zone
state_geo = 'J:/CZ-counties.json'
# Load the unemployment value of each state (county)
state_data = pd.read_csv('J:/CZ-unemploy.csv')
# Initialize the map:
m = folium.Map([15, 74], zoom_start=6)
# Add the color for the chloropleth:
choropleth = folium.Choropleth(
geo_data=state_geo,
name='choropleth',
data=state_data,
columns=['Name', 'Un'],
key_on='feature.id',
fill_color='YlGn',
fill_opacity=0.7,
line_opacity=0.2,
legend_name='Unemployment',
).add_to(m)
folium.LayerControl(collapsed=True).add_to(m)
# Save to html
m.save('map.html')
Terminal output:
Traceback (most recent call last):
File "J:\testCz", line 12, in <module>
choropleth = folium.Choropleth(
File "C:\Users\Michal\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\folium\features.py", line 1247, in __init__
self.geojson = GeoJson(
File "C:\Users\Michal\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\folium\features.py", line 453, in __init__
self.data = self.process_data(data)
File "C:\Users\Michal\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\folium\features.py", line 491, in process_data
return json.loads(f.read())
File "C:\ProgramFiles\WindowsApps\PythonSoftwareFoundation.Python.3.8_3.8.1008.0_x64__qbz5n2kfra8p0\lib\encodings\cp1250.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x88 in position 211750: character maps to undefined.
| [] | [] | [
"It's because your json file is UTF-8. Add the UTF-8 enconding to your Choropleth class parameters:\nchoropleth = folium.Choropleth(\ngeo_data=state_geo,\nname='choropleth',\ndata=state_data,\ncolumns=['Name', 'Un'],\nkey_on='feature.id',\nfill_color='YlGn',\nfill_opacity=0.7,\nline_opacity=0.2,\nlegend_name='Unemployment',\nenconding='UTF-8'\n).add_to(m)\n\nSee post: Python - UnicodeDecodeError:\nYou could always run chardet to check:\nimport chardet \nrawdata=open(state_geo,'rb').read()\nchardet.detect(rawdata)\n\n"
] | [
-1
] | [
"choropleth",
"csv",
"folium",
"pandas",
"python"
] | stackoverflow_0062570459_choropleth_csv_folium_pandas_python.txt |
Q:
how to remove some entries from json file using python?
How to remove some entries from a JSON file using python?
I have a JSON file that looks like this:
json_data = [
{
"authType": "ldap",
"password": "",
"permissions": [
{
"collections": [
"aks9099",
"aks9098",
"aks9100",
"aks9101",
"aks9102",
"aks9103"
],
"project": "Central Project"
}
],
"role": "devSecOps",
"username": "[email protected]"
},
{
"authType": "ldap",
"password": "",
"permissions": [
{
"collections": [
"aks9099",
"aks9098",
"aks9100",
"aks9101",
"aks9102",
"aks9103"
],
"project": "Central Project"
}
],
"role": "devSecOps",
"username": "[email protected]"
},
{
"authType": "ldap",
"password": "",
"permissions": [
{
"collections": [
"aks9099",
"aks9098",
"aks9100",
"aks9101",
"aks9102",
"aks9103"
],
"project": "Central Project"
}
],
"role": "devSecOps",
"username": "[email protected]"
},
{
"authType": "ldap",
"password": "",
"permissions": [
{
"collections": [
"aks9099",
"aks9098",
"aks9100",
"aks9101",
"aks9102",
"aks9103"
],
"project": "Central Project"
}
],
"role": "devSecOps",
"username": "[email protected]"
}
]
I would like to remove aks9098, aks9099, and aks9103 from the list. The expected result should look like this:
[
{
"authType": "ldap",
"password": "",
"permissions": [
{
"collections": [
"aks9100",
"aks9101",
"aks9102"
],
"project": "Central Project"
}
],
"role": "devSecOps",
"username": "[email protected]"
},
{
"authType": "ldap",
"password": "",
"permissions": [
{
"collections": [
"aks9100",
"aks9101",
"aks9102"
],
"project": "Central Project"
}
],
"role": "devSecOps",
"username": "[email protected]"
},
{
"authType": "ldap",
"password": "",
"permissions": [
{
"collections": [
"aks9100",
"aks9101",
"aks9102"
],
"project": "Central Project"
}
],
"role": "devSecOps",
"username": "[email protected]"
},
{
"authType": "ldap",
"password": "",
"permissions": [
{
"collections": [
"aks9100",
"aks9101",
"aks9102"
],
"project": "Central Project"
}
],
"role": "devSecOps",
"username": "[email protected]"
}
]
A:
You can analyse recursively you input data and remove all forbidden values:
def filter_data(data, forbidden_values):
if isinstance(data, str):
return data
if isinstance(data, dict):
return {
key: filter_data(value, forbidden_values) for key, value in data.items()
}
if isinstance(data, list):
return [
filter_data(value, forbidden_values)
for value in data
if value not in forbidden_values
]
forbidden_values = ["aks9098", "aks9099", "aks9103"]
filtered_data = filter_data(json_data, forbidden_values)
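A short usage sketch for the file round-trip the title asks about (the file name is an assumption); it reuses the filter_data function defined just above.

import json

with open('users.json') as f:            # hypothetical path of the JSON file
    json_data = json.load(f)

filtered = filter_data(json_data, ["aks9098", "aks9099", "aks9103"])

with open('users.json', 'w') as f:
    json.dump(filtered, f, indent=4)     # write the filtered structure back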
| how to remove some entries from json file using python? | How to remove some entries from a JSON file using python?
I have a JSON file that looks like this:
json_data = [
{
"authType": "ldap",
"password": "",
"permissions": [
{
"collections": [
"aks9099",
"aks9098",
"aks9100",
"aks9101",
"aks9102",
"aks9103"
],
"project": "Central Project"
}
],
"role": "devSecOps",
"username": "[email protected]"
},
{
"authType": "ldap",
"password": "",
"permissions": [
{
"collections": [
"aks9099",
"aks9098",
"aks9100",
"aks9101",
"aks9102",
"aks9103"
],
"project": "Central Project"
}
],
"role": "devSecOps",
"username": "[email protected]"
},
{
"authType": "ldap",
"password": "",
"permissions": [
{
"collections": [
"aks9099",
"aks9098",
"aks9100",
"aks9101",
"aks9102",
"aks9103"
],
"project": "Central Project"
}
],
"role": "devSecOps",
"username": "[email protected]"
},
{
"authType": "ldap",
"password": "",
"permissions": [
{
"collections": [
"aks9099",
"aks9098",
"aks9100",
"aks9101",
"aks9102",
"aks9103"
],
"project": "Central Project"
}
],
"role": "devSecOps",
"username": "[email protected]"
}
]
I would like to remove aks9098, aks9099, and aks9103 from the list. The expected result should look like this:
[
{
"authType": "ldap",
"password": "",
"permissions": [
{
"collections": [
"aks9100",
"aks9101",
"aks9102"
],
"project": "Central Project"
}
],
"role": "devSecOps",
"username": "[email protected]"
},
{
"authType": "ldap",
"password": "",
"permissions": [
{
"collections": [
"aks9100",
"aks9101",
"aks9102"
],
"project": "Central Project"
}
],
"role": "devSecOps",
"username": "[email protected]"
},
{
"authType": "ldap",
"password": "",
"permissions": [
{
"collections": [
"aks9100",
"aks9101",
"aks9102"
],
"project": "Central Project"
}
],
"role": "devSecOps",
"username": "[email protected]"
},
{
"authType": "ldap",
"password": "",
"permissions": [
{
"collections": [
"aks9100",
"aks9101",
"aks9102"
],
"project": "Central Project"
}
],
"role": "devSecOps",
"username": "[email protected]"
}
]
| [
"You can analyse recursively you input data and remove all forbidden values:\ndef filter_data(data, forbidden_values):\n if isinstance(data, str):\n return data\n if isinstance(data, dict):\n return {\n key: filter_data(value, forbidden_values) for key, value in data.items()\n }\n if isinstance(data, list):\n return [\n filter_data(value, forbidden_values)\n for value in data\n if value not in forbidden_values\n ]\n\n\nforbidden_values = [\"aks9098\", \"aks9099\", \"aks9103\"]\nfiltered_data = filter_data(json_data, forbidden_values)\n\n"
] | [
0
] | [] | [] | [
"json",
"python",
"python_3.x"
] | stackoverflow_0074611405_json_python_python_3.x.txt |
Q:
Error converting Detectron2 torchscript model to CoreML using coremltools
I have a Detectron2 model that is trained to identify specific items on a backend server. I would like to make this model available on iOS devices and convert it to a CoreML model using coremltools v6.1. I used the export_model.py script provided by Facebook to create a torchscript model, but when I try to convert this to coreml I get a KeyError
def save_core_ml_package(scripted_model):
# Using image_input in the inputs parameter:
# Convert to Core ML neural network using the Unified Conversion API.
h = 224
w = 224
ctmodel = ct.convert(scripted_model,
inputs=[ct.ImageType(shape=(1, 3, h, w),
color_layout=ct.colorlayout.RGB)]
)
# Save the converted model.
ctmodel.save("newmodel.mlmodel")
I get the following error
Support for converting Torch Script Models is experimental. If possible you should use a traced model for conversion.
Traceback (most recent call last):
File "/usr/repo/URCV/src/Python/pytorch_to_torchscript.py", line 101, in <module>
save_trace_to_core_ml_package(test_model, outdir=outdir)
File "/usr/repo/URCV/src/Python/pytorch_to_torchscript.py", line 46, in save_trace_to_core_ml_package
ctmodel = ct.convert(
File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/_converters_entry.py", line 444, in convert
mlmodel = mil_convert(
File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 190, in mil_convert
return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 217, in _mil_convert
proto, mil_program = mil_convert_to_proto(
File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 282, in mil_convert_to_proto
prog = frontend_converter(model, **kwargs)
File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 112, in __call__
return load(*args, **kwargs)
File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 56, in load
converter = TorchConverter(torchscript, inputs, outputs, cut_at_symbols, specification_version)
File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 160, in __init__
raw_graph, params_dict = self._expand_and_optimize_ir(self.torchscript)
File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 486, in _expand_and_optimize_ir
graph, params_dict = TorchConverter._jit_pass_lower_graph(graph, torchscript)
File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 431, in _jit_pass_lower_graph
_lower_graph_block(graph)
File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 410, in _lower_graph_block
module = getattr(node_to_module_map[_input], attr_name)
KeyError: images.2 defined in (%images.2 : __torch__.detectron2.structures.image_list.ImageList = prim::CreateObject()
)
A:
From the error message it looks like you are using a torch script model:
Support for converting Torch Script Models is experimental. If
possible you should use a traced model for conversion.
if possible try to use a traced model e.g.:
dummy_input = torch.randn(batch, channels, width, height)
traceable_model = torch.jit.trace(model, dummy_input)
followed by your original code:
ct.convert(traceable_model,...
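Putting it together with the conversion call from the question, a minimal sketch could look like the following. Here model stands for the original eager-mode PyTorch model (not the scripted one), the 224x224 input shape is carried over from the question, and Detectron2 models may need extra wrapping before they can be traced like this, so treat it as an illustration rather than a verified export recipe:
import torch
import coremltools as ct

h, w = 224, 224
dummy_input = torch.randn(1, 3, h, w)  # batch of 1, RGB image

# Trace instead of scripting, then hand the traced module to coremltools.
traceable_model = torch.jit.trace(model, dummy_input)
ctmodel = ct.convert(traceable_model,
                     inputs=[ct.ImageType(shape=(1, 3, h, w),
                                          color_layout=ct.colorlayout.RGB)])
ctmodel.save("newmodel.mlmodel")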
| Error converting Detectron2 torchscript model to CoreML using coremltools | I have a Detectron2 model that is trained to identify specific items on a backend server. I would like to make this model available on iOS devices and convert it to a CoreML model using coremltools v6.1. I used the export_model.py script provided by Facebook to create a torchscript model, but when I try to convert this to coreml I get a KeyError
def save_core_ml_package(scripted_model):
# Using image_input in the inputs parameter:
# Convert to Core ML neural network using the Unified Conversion API.
h = 224
w = 224
ctmodel = ct.convert(scripted_model,
inputs=[ct.ImageType(shape=(1, 3, h, w),
color_layout=ct.colorlayout.RGB)]
)
# Save the converted model.
ctmodel.save("newmodel.mlmodel")
I get the following error
Support for converting Torch Script Models is experimental. If possible you should use a traced model for conversion.
Traceback (most recent call last):
File "/usr/repo/URCV/src/Python/pytorch_to_torchscript.py", line 101, in <module>
save_trace_to_core_ml_package(test_model, outdir=outdir)
File "/usr/repo/URCV/src/Python/pytorch_to_torchscript.py", line 46, in save_trace_to_core_ml_package
ctmodel = ct.convert(
File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/_converters_entry.py", line 444, in convert
mlmodel = mil_convert(
File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 190, in mil_convert
return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 217, in _mil_convert
proto, mil_program = mil_convert_to_proto(
File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 282, in mil_convert_to_proto
prog = frontend_converter(model, **kwargs)
File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 112, in __call__
return load(*args, **kwargs)
File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 56, in load
converter = TorchConverter(torchscript, inputs, outputs, cut_at_symbols, specification_version)
File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 160, in __init__
raw_graph, params_dict = self._expand_and_optimize_ir(self.torchscript)
File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 486, in _expand_and_optimize_ir
graph, params_dict = TorchConverter._jit_pass_lower_graph(graph, torchscript)
File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 431, in _jit_pass_lower_graph
_lower_graph_block(graph)
File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 410, in _lower_graph_block
module = getattr(node_to_module_map[_input], attr_name)
KeyError: images.2 defined in (%images.2 : __torch__.detectron2.structures.image_list.ImageList = prim::CreateObject()
)
| [
"From the error message it looks like you are using a torch script model:\n\nSupport for converting Torch Script Models is experimental. If\npossible you should use a traced model for conversion.\n\nif possible try to use a traced model e.g.:\ndummy_input = torch.randn(batch, channels, width, height)\ntraceable_model = torch.jit.trace(model, dummy_input)\n\nfollowed by your original code:\nct.convert(traceable_model,...\n\n"
] | [
0
] | [] | [] | [
"coreml",
"coremltools",
"detectron",
"python",
"torchscript"
] | stackoverflow_0074607629_coreml_coremltools_detectron_python_torchscript.txt |
Q:
Django 'model' object is not iterable when response
I have 2 models, and the two models are connected through a ManyToManyField.
models.py
class PostModel(models.Model):
id = models.AutoField(primary_key=True, null=False)
title = models.TextField()
comments = models.ManyToManyField('CommentModel')
class CommentModel(models.Model):
id = models.AutoField(primary_key=True, null=False)
post_id = models.ForeignKey(Post, on_delete=models.CASCADE)
body = models.TextField()
and serializers.py
class CommentModel_serializer(serializers.ModelSerializer):
class Meta:
model = MainCommentModel
fields = '__all__'
class PostModel_serializer(serializers.ModelSerializer):
comment = MainCommentModel_serializer(many=True, allow_null=True, read_only=True)
class Meta:
model = MainModel
fields = '__all__'
and views.py
@api_view(['GET'])
def getPost(request, pk):
post = PostModel.objects.filter(id=pk).first()
comment_list = CommentModel.objects.filter(post_id=post.id)
for i in comments_list:
post.comments.add(i)
serializer = PostModel_serializer(post, many=True)
return Response(serializer.data)
There is an error when I make a request.
'PostModel' object is not iterable
and the traceback points here.
return Response(serializer.data)
I tried to modify a lot of code but I can't find solutions.
Please tell me where and how it went wrong.
A:
Referring from this thread, you should remove many=True in PostModel_serializer.
Also it should be comment_list not comments_list.
@api_view(['GET'])
def getPost(request, pk):
post = PostModel.objects.filter(id=pk).first()
comment_list = CommentModel.objects.filter(post_id=post.id)
for i in comment_list:
post.comments.add(i)
serializer = PostModel_serializer(post)
return Response(serializer.data)
A:
I think you did wrong while creating ManyToManyField().
Instead of this:
comments = models.ManyToManyField('CommentModel') #Here you made mistake. you should not add single quote to CommentModel. I think that's why you got that error
Try this:
comments = models.ManyToManyField(CommentModel)
| Django 'model' object is not iterable when response | I have 2 models, and the two models are connected through a ManyToManyField.
models.py
class PostModel(models.Model):
id = models.AutoField(primary_key=True, null=False)
title = models.TextField()
comments = models.ManyToManyField('CommentModel')
class CommentModel(models.Model):
id = models.AutoField(primary_key=True, null=False)
post_id = models.ForeignKey(Post, on_delete=models.CASCADE)
body = models.TextField()
and serializers.py
class CommentModel_serializer(serializers.ModelSerializer):
class Meta:
model = MainCommentModel
fields = '__all__'
class PostModel_serializer(serializers.ModelSerializer):
comment = MainCommentModel_serializer(many=True, allow_null=True, read_only=True)
class Meta:
model = MainModel
fields = '__all__'
and views.py
@api_view(['GET'])
def getPost(request, pk):
post = PostModel.objects.filter(id=pk).first()
comment_list = CommentModel.objects.filter(post_id=post.id)
for i in comments_list:
post.comments.add(i)
serializer = PostModel_serializer(post, many=True)
return Response(serializer.data)
There is an error when I make a request.
'PostModel' object is not iterable
and the traceback points here.
return Response(serializer.data)
I tried to modify a lot of code but I can't find solutions.
Please tell me where and how it went wrong.
| [
"Referring from this thread, you should remove many=True in PostModel_serializer.\nAlso it should be comment_list not comments_list.\n@api_view(['GET'])\ndef getPost(request, pk):\n post = PostModel.objects.filter(id=pk).first()\n comment_list = CommentModel.objects.filter(post_id=post.id)\n for i in comment_list:\n post.comments.add(i)\n serializer = PostModel_serializer(post)\n return Response(serializer.data)\n\n",
"I think you did wrong while creating ManyToManyField().\nInstead of this:\ncomments = models.ManyToManyField('CommentModel') #Here you made mistake. you should not add single quote to CommentModel. I think that's why you got that error\n\nTry this:\ncomments = models.ManyToManyField(CommentModel)\n\n"
] | [
1,
0
] | [] | [] | [
"django",
"django_queryset",
"django_rest_framework",
"django_serializer",
"python"
] | stackoverflow_0074612530_django_django_queryset_django_rest_framework_django_serializer_python.txt |
Q:
Change color of every 2-nd element on x axis and every 3-rd element on y-axis to black
I don't know what's wrong and why nothing worked for me. The picture is 500x500.
I tried using arrays and loops but it didn't work out. My code:
from PIL import Image
picture_resized = picture.resize( (500,500) )
im = np.array(Image.open('Lenna.png').convert('RGB'))
Image.fromarray(im).save('result.png')
im[0::2,0::2] = [0,0,0]
im[0::3,0::3] = [0,0,0]
%matplotlib notebook
plt.imshow(picture_resized)
A:
You're close! This should do what you want:
import numpy as np
from PIL import Image
# Load image and make into Numpy array
im = np.array(Image.open('artistic-swirl.jpg'))
# Make every second element on x-axis black
im[:, 0::2] = [0,0,0]
# Make every third element on y-axis black
im[::3, :] = [0,0,0]
Note: You should read a colon (:) as meaning "all columns" or "all rows" depending on whether it is specified for the first or second axis.
Turns this:
into this:
If you are using Jupyter, based off your question, you'll need to add something like this at the end to actually display the result.
%matplotlib notebook
plt.imshow(im)
Or, if you want to save a PNG with PIL, convert the NumPy array back to an Image first:
Image.fromarray(im).save('result.png')
| Change color of every 2-nd element on x axis and every 3-rd element on y-axis to black | I don't know what's wrong and why nothing worked for me. The picture is 500x500.
I tried using arrays and loops but it didn't work out. My code:
from PIL import Image
picture_resized = picture.resize( (500,500) )
im = np.array(Image.open('Lenna.png').convert('RGB'))
Image.fromarray(im).save('result.png')
im[0::2,0::2] = [0,0,0]
im[0::3,0::3] = [0,0,0]
%matplotlib notebook
plt.imshow(picture_resized)
| [
"You're close! This should do what you want:\nimport numpy as np\nfrom PIL import Image\n\n# Load image and make into Numpy array\nim = np.array(Image.open('artistic-swirl.jpg'))\n\n# Make every second element on x-axis black\nim[:, 0::2] = [0,0,0]\n\n# Make every third element on y-axis black\nim[::3, :] = [0,0,0]\n\nNote: You should read a colon (:) as meaning \"all columns\" or \"all rows\" depending on whether it is specified for the first or second axis.\nTurns this:\n\ninto this:\n\nIf you are using Jupyter, based off your question, you'll need to add something like this at the end to actually display the result.\n%matplotlib notebook\nplt.imshow(im)\n\nOr, if you want to save a PNG with PIL, you can use:\nim.save('result.png')\n\n"
] | [
0
] | [] | [] | [
"image_processing",
"jupyter",
"jupyter_notebook",
"python"
] | stackoverflow_0074608032_image_processing_jupyter_jupyter_notebook_python.txt |
Q:
Machine learning options to detect errors in a large number of sql tables?
I'm new to ML and want to build a system that can detect errors or anomalies in input data that I receive from customers. The data is structured in sql tables with various column names. The value types for each column varies but the most common are numbers, strings and dates.
Some of the values in these tables will be wrong. Examples of errors that I can encounter are:
Null values or empty strings
Truncated strings and/or numbers
String formatted numbers
Weird date formats
Bad or missing references between tables
Up until now, the best option I can envision is to run some unsupervised edge case detection algorithm. But, from what I have understood by reading online about these algorithms, they do not really do much machine learning; rather, they just classify based on edge criteria.
The input data can reside in hundreds of tables with tens or hundreds of columns each. This means that just going through the data structure manually is a daunting task. My aim is a system that, just by looking at the data in one column, can detect data type and also automatically tell the outliers.
As I do think that there are patterns to be found in the errors that may occur and the fact that my dataset is huge, I would like to try out some semi-supervised algorithm where I could review the suggested errors from the algorithm classifying false positives etc. To feed back those assertions into the algorithm would improve the predictions I think.
Right now, I have started off using Python but have no clue on which algorithms to use and how to build a proper pipeline that adapts my input data to work well with the classifiers.
I would be very grateful if someone can give me suggestions on which algorithms and steps I could use to implement the system I have in mind or suggest already existing tools for this.
Thanks!
A:
Null values or empty strings: an ML algorithm will probably not accept such an input
Truncated strings in numbers: ?
String formatted numbers: numbers are always formatted as strings
Weird date formats: an ML system will require huge samples before it learns rules that you can implement in two minutes
Bad or missing references between tables: how could an ML algorithm deal with this ???
IMO, you forget the most important check: values out of the normal range. These ranges can be found by simple statistical observation or by... common sense.
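As a concrete illustration of that last point, here is a minimal sketch of flagging out-of-range and missing values in one numeric column with plain statistics; the column name, toy data, and 1.5*IQR threshold are hypothetical, not from the question:
import pandas as pd

# Hypothetical numeric column loaded from one of the customer tables.
df = pd.DataFrame({"amount": [10.5, 11.2, 9.8, 10.1, 250.0, 10.7, None]})

col = df["amount"]
q1, q3 = col.quantile(0.25), col.quantile(0.75)
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Flag nulls and values outside the IQR fences.
suspicious = df[col.isna() | (col < lower) | (col > upper)]
print(suspicious)  # with this toy data, the 250.0 row and the missing value are flagged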
| Machine learning options to detect errors in a large number of sql tables? | I'm new to ML and want to build a system that can detect errors or anomalies in input data that I receive from customers. The data is structured in sql tables with various column names. The value types for each column varies but the most common are numbers, strings and dates.
Some of the values in these tables will be wrong. Examples of errors that I can encounter are:
Null values or empty strings
Truncated strings and/or numbers
String formatted numbers
Weird date formats
Bad or missing references between tables
Up until now, the best option I can envision is to run some unsupervised edge case detection algorithm. But, from what I have understood by reading online about these algorithms, they do not really do much machine learning; rather, they just classify based on edge criteria.
The input data can reside in hundreds of tables with tens or hundreds of columns each. This means that just going through the data structure manually is a daunting task. My aim is a system that, just by looking at the data in one column, can detect data type and also automatically tell the outliers.
As I do think that there are patterns to be found in the errors that may occur and the fact that my dataset is huge, I would like to try out some semi-supervised algorithm where I could review the suggested errors from the algorithm classifying false positives etc. To feed back those assertions into the algorithm would improve the predictions I think.
Right now, I have started off using Python but have no clue on which algorithms to use and how to build a proper pipeline that adapts my input data to work well with the classifiers.
I would be very grateful if someone can give me suggestions on which algorithms and steps I could use to implement the system I have in mind or suggest already existing tools for this.
Thanks!
| [
"\nNull values or empty strings: an ML algorithm will probably not accept such an input\nTruncated strings in numbers: ?\nString formatted numbers: numbers are always formatted as strings\nWeird date formats: an ML system will require huge samples before it learns rules that you can implement in two minutes\nBad or missing references between tables: how could an ML algorithm deal with this ???\n\nIMO, you forget the most important check: values out of the normal range. These ranges can be found by simple statistical observation or by... common sense.\n"
] | [
2
] | [] | [] | [
"algorithm",
"anomaly_detection",
"machine_learning",
"python"
] | stackoverflow_0074612829_algorithm_anomaly_detection_machine_learning_python.txt |
Q:
python boto3 for AWS - S3 Bucket Sync optimization
Currently I am trying to compare two S3 buckets with the goal of deleting files.
Problem definition:
-BucketA
-BucketB
The script is looking for files (same key name) in BucketB which are not available in BucketA.
Those files which are only available in BucketB have to be deleted.
The buckets contain about 3-4 Million files each.
Many thanks.
Kind regards,
Alexander
My solution idea:
The List filling is quite slow. Is there any possibility to accelerate it?
#Filling the lists
#e.g. BucketA (BucketB same procedure)
s3_client = boto3.client("s3")
bucket_name = "BucketA"
paginator = s3_client.get_paginator("list_objects_v2")
response = paginator.paginate(Bucket="BucketA", PaginationConfig={"PageSize": 2})
for page in response:
files = page.get("Contents")
for file in files:
ListA.append(file['Key'])
#finding the deletion values
diff = list(set(BucketB) - set(BucketA))
#delete files from BucketB (building junks, since with delete_objects_from_bucket max. 1000 objects at once)
for delete_list in group_elements(diff , 1000):
delete_objects_from_bucket(delete_list)
A:
The ListBucket() API call only returns 1000 objects at a time, so listing buckets with 100,000+ objects is very slow and best avoided. You have 3-4 million objects, so definitely avoid listing them!
Instead, use Amazon S3 Inventory, which can provide a daily or weekly CSV file listing all objects in a bucket. Activate it for both buckets, and then use the provided CSV files as input to make the comparison.
You can also use the CSV file generated by Amazon S3 Inventory as a manifest file for Amazon S3 Batch Operations. So, your code could generate a file that lists only the objects that you would like deleted, then get S3 Batch Operations to process those deletions.
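To make the comparison concrete, here is a minimal sketch that works from two already-downloaded inventory CSV files. The filenames, and the assumption that the object key sits in the second column (the default Bucket, Key, ... layout of an S3 Inventory CSV without optional fields), are mine and not from the original question:
import csv

def load_keys(path):
    # Inventory CSVs have no header row; column 0 is the bucket, column 1 is the key.
    with open(path, newline="") as fh:
        return {row[1] for row in csv.reader(fh)}

keys_a = load_keys("inventory_bucket_a.csv")
keys_b = load_keys("inventory_bucket_b.csv")

# Objects present only in BucketB are the deletion candidates.
to_delete = keys_b - keys_a

# Write a Bucket,Key manifest that S3 Batch Operations can consume.
with open("delete_manifest.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    for key in sorted(to_delete):
        writer.writerow(["BucketB", key])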
| python boto3 for AWS - S3 Bucket Sync optimization | Currently I am trying to compare two S3 buckets with the goal of deleting files.
Problem definition:
-BucketA
-BucketB
The script is looking for files (same key name) in BucketB which are not available in BucketA.
Those files which are only available in BucketB have to be deleted.
The buckets contain about 3-4 Million files each.
Many thanks.
Kind regards,
Alexander
My solution idea:
The List filling is quite slow. Is there any possibility to accelerate it?
#Filling the lists
#e.g. BucketA (BucketB same procedure)
s3_client = boto3.client("s3")
bucket_name = "BucketA"
paginator = s3_client.get_paginator("list_objects_v2")
response = paginator.paginate(Bucket="BucketA", PaginationConfig={"PageSize": 2})
for page in response:
files = page.get("Contents")
for file in files:
ListA.append(file['Key'])
#finding the deletion values
diff = list(set(BucketB) - set(BucketA))
#delete files from BucketB (building junks, since with delete_objects_from_bucket max. 1000 objects at once)
for delete_list in group_elements(diff , 1000):
delete_objects_from_bucket(delete_list)
| [
"The ListBucket() API call only returns 1000 objects at a time, so listing buckets with 100,000+ objects is very slow and best avoided. You have 3-4 million objects, so definitely avoid listing them!\nInstead, use Amazon S3 Inventory, which can provide a daily or weekly CSV file listing all objects in a bucket. Activate it for both buckets, and then use the provided CSV files as input to make the comparison.\nYou can also use the CSV file generated by Amazon S3 Inventory as a manifest file for Amazon S3 Batch Operations. So, your code could generate a file that lists only the objects that you would like deleted, then get S3 Batch Operations to process those deletions.\n"
] | [
0
] | [] | [] | [
"amazon_web_services",
"boto3",
"paginator",
"python"
] | stackoverflow_0074611433_amazon_web_services_boto3_paginator_python.txt |
Q:
Extract output tensor from any layer of onnx model
I want to extract the output of different layers of an onnx model (e.g., squeezenet.onnx, etc.) during image inference. I am trying to use the code in [How to extract output tensor from any layer of models][1]:
# add all intermediate outputs to onnx net
ort_session = ort.InferenceSession('<you path>/model.onnx')
org_outputs = [x.name for x in ort_session.get_outputs()]
model = onnx.load('<you path>/model.onnx')
for node in model.graph.node:
for output in node.output:
if output not in org_outputs:
model.graph.output.extend([onnx.ValueInfoProto(name=output)])
# excute onnx
ort_session = ort.InferenceSession(model.SerializeToString())
outputs = [x.name for x in ort_session.get_outputs()]
img_path = '<you path>/input_img.raw'
img = get_image(img_path, show=True)
transform_fn = transforms.Compose([
transforms.Resize(224),
transforms.ToTensor(),
])
img = transform_fn(img)
img = img.expand_dims(axis=0)
ort_outs = ort_session.run(outputs, {'data': img} )
ort_outs = OrderedDict(zip(outputs, ort_outs))
I am getting the error below although I managed to have the input size required:
---> 40 ort_outs = ort_session.run(outputs, {'data': img} )
41 ort_outs = OrderedDict(zip(outputs, ort_outs))
/usr/local/lib/python3.7/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py in run(self, output_names, input_feed, run_options)
198 output_names = [output.name for output in self._outputs_meta]
199 try:
--> 200 return self._sess.run(output_names, input_feed, run_options)
201 except C.EPFail as err:
202 if self._enable_fallback:
RuntimeError: Input must be a list of dictionaries or a single numpy array for input 'data'.
How can I fix this? Appreciate your help! Thank you
[1]: https://github.com/microsoft/onnxruntime/issues/1455
A:
Unfortunately that is not possible. However you could re-export the original model from PyTorch to onnx, and add the output of the desired layer to the return statement of the forward method of your model. (you might have to feed it through a couple of methods up to the first forward method in your model)
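A minimal sketch of that idea, using a toy model rather than the actual Detectron2/SqueezeNet network from the question (all names and shapes here are illustrative assumptions):
import torch
import torch.nn as nn
import onnxruntime as ort

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.head = nn.Linear(8, 2)

    def forward(self, x):
        feat = torch.relu(self.conv(x))          # intermediate activation we want to inspect
        out = self.head(feat.mean(dim=(2, 3)))   # final prediction
        return out, feat                         # expose both as graph outputs

model = TinyNet().eval()
dummy = torch.randn(1, 3, 32, 32)

torch.onnx.export(model, dummy, "tiny_with_intermediate.onnx",
                  input_names=["input"], output_names=["logits", "features"])

sess = ort.InferenceSession("tiny_with_intermediate.onnx")
logits, features = sess.run(None, {"input": dummy.numpy()})
print(logits.shape, features.shape)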
| Extract output tensor from any layer of onnx model | I want to extract the output of different layers of an onnx model (e.g., squeezenet.onnx, etc.) during image inference. I am trying to use the code in [How to extract output tensor from any layer of models][1]:
# add all intermediate outputs to onnx net
ort_session = ort.InferenceSession('<you path>/model.onnx')
org_outputs = [x.name for x in ort_session.get_outputs()]
model = onnx.load('<you path>/model.onnx')
for node in model.graph.node:
for output in node.output:
if output not in org_outputs:
model.graph.output.extend([onnx.ValueInfoProto(name=output)])
# excute onnx
ort_session = ort.InferenceSession(model.SerializeToString())
outputs = [x.name for x in ort_session.get_outputs()]
img_path = '<you path>/input_img.raw'
img = get_image(img_path, show=True)
transform_fn = transforms.Compose([
transforms.Resize(224),
transforms.ToTensor(),
])
img = transform_fn(img)
img = img.expand_dims(axis=0)
ort_outs = ort_session.run(outputs, {'data': img} )
ort_outs = OrderedDict(zip(outputs, ort_outs))
I am getting the error below although I managed to have the input size required:
---> 40 ort_outs = ort_session.run(outputs, {'data': img} )
41 ort_outs = OrderedDict(zip(outputs, ort_outs))
/usr/local/lib/python3.7/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py in run(self, output_names, input_feed, run_options)
198 output_names = [output.name for output in self._outputs_meta]
199 try:
--> 200 return self._sess.run(output_names, input_feed, run_options)
201 except C.EPFail as err:
202 if self._enable_fallback:
RuntimeError: Input must be a list of dictionaries or a single numpy array for input 'data'.
How can I fix this? Appreciate your help! Thank you
[1]: https://github.com/microsoft/onnxruntime/issues/1455
| [
"Unfortunately that is not possible. However you could re-export the original model from PyTorch to onnx, and add the output of the desired layer to the return statement of the forward method of your model. (you might have to feed it through a couple of methods up to the first forward method in your model)\n"
] | [
0
] | [] | [] | [
"conv_neural_network",
"image_processing",
"onnx",
"python",
"tensorflow"
] | stackoverflow_0074600082_conv_neural_network_image_processing_onnx_python_tensorflow.txt |
Q:
Resample hourly to daily and group by min, max and mean values
I have a hourly dataframe, df, and i need to create a new dataframe with the min, mean and max values from each day. Here's what i tried to do:
df = pd.DataFrame(np.random.rand(72, 1),
columns=["Random"],
index=pd.date_range(start="20220101000000", end="20220103230000", freq='H'))
df_min = df.resample('D').min()
df_mean = df.resample('D').mean()
df_max = df.resample('D').max()
But I'm not really sure how I could group those three dataframes (df_min, df_mean and df_max) into a single new dataframe, df_new. This new dataframe should have only one column, something like this:
2022-01-01T00:00:00.00 0.002 <- min
2022-01-01T00:00:00.00 0.023 <- mean
2022-01-01T00:00:00.00 0.965 <- max
2022-01-02T00:00:00.00 0.013 <- min
2022-01-02T00:00:00.00 0.053 <- mean
2022-01-02T00:00:00.00 0.825 <- max
2022-01-03T00:00:00.00 0.011 <- min
2022-01-03T00:00:00.00 0.172 <- mean
2022-01-03T00:00:00.00 0.992 <- max
A:
Use Resample.agg with list of functions, then reshape by DataFrame.stack and remove second level of MultiIndex by Series.droplevel:
s = df.resample('D')['Random'].agg(['min','mean','max']).stack().droplevel(1)
print (s)
2022-01-01 0.162976
2022-01-01 0.574074
2022-01-01 0.980742
2022-01-02 0.012299
2022-01-02 0.467338
2022-01-02 0.962570
2022-01-03 0.000722
2022-01-03 0.426793
2022-01-03 0.947014
dtype: float64
| Resample hourly to daily and group by min, max and mean values | I have a hourly dataframe, df, and i need to create a new dataframe with the min, mean and max values from each day. Here's what i tried to do:
df = pd.DataFrame(np.random.rand(72, 1),
columns=["Random"],
index=pd.date_range(start="20220101000000", end="20220103230000", freq='H'))
df_min = df.resample('D').min()
df_mean = df.resample('D').mean()
df_max = df.resample('D').max()
But I'm not really sure how I could group those three dataframes (df_min, df_mean and df_max) into a single new dataframe, df_new. This new dataframe should have only one column, something like this:
2022-01-01T00:00:00.00 0.002 <- min
2022-01-01T00:00:00.00 0.023 <- mean
2022-01-01T00:00:00.00 0.965 <- max
2022-01-02T00:00:00.00 0.013 <- min
2022-01-02T00:00:00.00 0.053 <- mean
2022-01-02T00:00:00.00 0.825 <- max
2022-01-03T00:00:00.00 0.011 <- min
2022-01-03T00:00:00.00 0.172 <- mean
2022-01-03T00:00:00.00 0.992 <- max
| [
"Use Resample.agg with list of functions, then reshape by DataFrame.stack and remove second level of MultiIndex by Series.droplevel:\ns = df.resample('D')['Random'].agg(['min','mean','max']).stack().droplevel(1)\nprint (s)\n2022-01-01 0.162976\n2022-01-01 0.574074\n2022-01-01 0.980742\n2022-01-02 0.012299\n2022-01-02 0.467338\n2022-01-02 0.962570\n2022-01-03 0.000722\n2022-01-03 0.426793\n2022-01-03 0.947014\ndtype: float64\n\n"
] | [
1
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074613251_pandas_python.txt |
Q:
Python issue with fitting a custom function containing double integrals
I want to fit some data using a custom function which contains a double integral. a,b, and c are pre-defined parameters, and alpha and beta are two angles on which the function must be integrated.
import numpy as np
from scipy import integrate
x=np.linspace(0,100,100)
a=100
b=5
c=1
def custom_function(x,a,b,c):
f = lambda alpha,beta: (np.pi/2)*(np.sin(x*a*np.sin(alpha)*np.cos(beta))/x*a*np.sin(alpha)*np.cos(beta))*(np.sin(x*b*np.sin(alpha)*np.sin(beta))/x*b*np.sin(alpha)*np.sin(beta))*(np.sin(x*c*np.cos(alpha))/x*c*np.cos(alpha))*np.sin(alpha)
return integrate.dblquad(f, 0, np.pi/2, 0, np.pi/2)
when running the code, I get the following error:
TypeError: cannot convert the series to <class 'float'>
I've tried simplifying the function but I still get the same issue; could anyone help me locate the problem?
A:
Are you sure you are not trying to multiply sinc functions, sin(x*u)/(x*u)? Currently you are multiplying terms like u * sin(x*u) / x because there are no parentheses around the denominator.
You should be able to fit your function for small a, b, c. But with a = 100 you would need a much higher resolution (more steps).
I am assuming you are trying to fit using some local minimization method.
If your function has many maxima and minima while you are trying to fit, you are likely to get stuck. You could also try some of the non-convex optimization methods that are available.
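A tiny sketch of the parenthesisation issue pointed out above (the specific x and u values are arbitrary):
import numpy as np

x, u = 0.5, 3.0
z = x * u

wrong = np.sin(z) / x * u        # parsed as (sin(z) / x) * u
right = np.sin(z) / (x * u)      # the intended sin(z) / z term
via_sinc = np.sinc(z / np.pi)    # np.sinc(t) = sin(pi*t)/(pi*t), so this equals sin(z)/z

print(wrong, right, via_sinc)    # wrong differs from the other two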
| Python issue with fitting a custom function containing double integrals | I want to fit some data using a custom function which contains a double integral. a,b, and c are pre-defined parameters, and alpha and beta are two angles on which the function must be integrated.
import numpy as np
from scipy import integrate
x=np.linspace(0,100,100)
a=100
b=5
c=1
def custom_function(x,a,b,c):
f = lambda alpha,beta: (np.pi/2)*(np.sin(x*a*np.sin(alpha)*np.cos(beta))/x*a*np.sin(alpha)*np.cos(beta))*(np.sin(x*b*np.sin(alpha)*np.sin(beta))/x*b*np.sin(alpha)*np.sin(beta))*(np.sin(x*c*np.cos(alpha))/x*c*np.cos(alpha))*np.sin(alpha)
return integrate.dblquad(f, 0, np.pi/2, 0, np.pi/2)
when running the code, I get the following error:
TypeError: cannot convert the series to <class 'float'>
I've tried simplifying the function but I still get the same issue; could anyone help me locate the problem?
| [
"Are you sure you are not trying to multiply sinc functions, sin(x*u)/(x*u)? Currently you are multiplying terms like u * sin(x*u) / x because there are not parenthesis in the denominator.\nYou should be able to fit your function for small a,b,c. But having a = 100, you should have a much higher resolution, I would say steps.\nI am asuming you are trying to fit using some local minimization method.\nIf you have a function with more than many maxima and minima while you are trying to fit you are likely to get stuck. You could try some of non-convex optimization methods available\nas well\n"
] | [
0
] | [] | [] | [
"custom_function",
"integral",
"numpy",
"python"
] | stackoverflow_0074612970_custom_function_integral_numpy_python.txt |
Q:
Apply a function row by row using other dataframes' rows as list inputs in python
I'm trying to apply a function row-by-row which takes 5 inputs, 3 of which are lists. I want these lists to come from each row of 3 corresponding dataframes.
I've tried using 'apply' and 'lambda' as follows:
sol['tf_dd']=sol.apply(lambda tsol, rfsol, rbsol:
taurho_difdif(xy=xy,
l=l,
t=tsol,
rf=rfsol,
rb=rbsol),
axis=1)
However I get the error <lambda>() missing 2 required positional arguments: 'rfsol' and 'rbsol'
The DataFrame sol and the DataFrames tsol, rfsol and rbsol all have the same length. For each row, I want the entire row from tsol, rfsol and rbsol to be input as three lists.
Here is much simplified example (first with single lists, which I then want to replicate row by row with dataframes):
The output with single lists is a single value (120). With dataframes as inputs I want an output dataframe of length 10 where all values are 120.
t=[1,2,3,4,5]
rf=[6,7,8,9,10]
rb=[11,12,13,14,15]
def simple_func(t, rf, rb):
x=sum(t)
y=sum(rf)
z=sum(rb)
return x+y+z
out=simple_func(t,rf,rb)
# dataframe rows as lists
tsol=pd.DataFrame((t,t,t,t,t,t,t,t,t,t))
rfsol=pd.DataFrame((rf,rf,rf,rf,rf,rf,rf,rf,rf,rf))
rbsol=pd.DataFrame((rb,rb,rb,rb,rb,rb,rb,rb,rb,rb))
out2 = pd.DataFrame(index=range(len(tsol)), columns=['output'])
out2['output'] = out2.apply(lambda tsol, rfsol, rbsol:
simple_func(t=tsol.tolist(),
rf=rfsol.tolist(),
rb=rbsol.tolist()),
axis=1)
A:
Try to use "name" field in Series Type to get index value, and then get the same index for the other DataFrame
import pandas as pd
import numpy as np
def postional_sum(inot, df1, df2, df3):
"""
Get input index and gather the same position for the other DataFrame collection
"""
position = inot.name
x = df1.iloc[position].sum()
y = df2.iloc[position].sum()
z = df3.iloc[position].sum()
return x + y + z
# dataframe rows as lists
tsol = pd.DataFrame(np.random.randn(10, 5), columns=range(5))
rfsol = pd.DataFrame(np.random.randn(10, 5), columns=range(5))
rbsol = pd.DataFrame(np.random.randn(10, 5), columns=range(5))
out2 = pd.DataFrame(index=range(len(tsol)), columns=["output"])
out2["output"] = out2.apply(lambda x: postional_sum(x, tsol, rfsol, rbsol), axis=1)
out2
Hope this helps!
A:
When you run df.apply() with axis=1, it does not pass on the columns as individual arguments to the function, but as a Series object, as explained here. The correct way to do this would be
out2['output'] = out2.apply(lambda row:
simple_func(t=row["tsol"],
rf=row["rfsol"],
rb=row["rbsol"]),
axis=1)
A:
You can eliminate the simple function using this:
out2["output"] = tsol.sum(axis=1) + rfsol.sum(axis=1) + rbsol.sum(axis=1)
Here is the complete code:
t=[1,2,3,4,5]
rf=[6,7,8,9,10]
rb=[11,12,13,14,15]
# dataframe rows as lists
tsol=pd.DataFrame((t,t,t,t,t,t,t,t,t,t))
rfsol=pd.DataFrame((rf,rf,rf,rf,rf,rf,rf,rf,rf,rf))
rbsol=pd.DataFrame((rb,rb,rb,rb,rb,rb,rb,rb,rb,rb))
out2 = pd.DataFrame(index=range(len(tsol)), columns=["output"])
out2["output"] = tsol.sum(axis=1) + rfsol.sum(axis=1) + rbsol.sum(axis=1)
print(out2)
OUTPUT:
output
0 120
1 120
2 120
3 120
4 120
5 120
6 120
7 120
8 120
9 120
| Apply a function row by row using other dataframes' rows as list inputs in python | I'm trying to apply a function row-by-row which takes 5 inputs, 3 of which are lists. I want these lists to come from each row of 3 corresponding dataframes.
I've tried using 'apply' and 'lambda' as follows:
sol['tf_dd']=sol.apply(lambda tsol, rfsol, rbsol:
taurho_difdif(xy=xy,
l=l,
t=tsol,
rf=rfsol,
rb=rbsol),
axis=1)
However I get the error <lambda>() missing 2 required positional arguments: 'rfsol' and 'rbsol'
The DataFrame sol and the DataFrames tsol, rfsol and rbsol all have the same length. For each row, I want the entire row from tsol, rfsol and rbsol to be input as three lists.
Here is much simplified example (first with single lists, which I then want to replicate row by row with dataframes):
The output with single lists is a single value (120). With dataframes as inputs I want an output dataframe of length 10 where all values are 120.
t=[1,2,3,4,5]
rf=[6,7,8,9,10]
rb=[11,12,13,14,15]
def simple_func(t, rf, rb):
x=sum(t)
y=sum(rf)
z=sum(rb)
return x+y+z
out=simple_func(t,rf,rb)
# dataframe rows as lists
tsol=pd.DataFrame((t,t,t,t,t,t,t,t,t,t))
rfsol=pd.DataFrame((rf,rf,rf,rf,rf,rf,rf,rf,rf,rf))
rbsol=pd.DataFrame((rb,rb,rb,rb,rb,rb,rb,rb,rb,rb))
out2 = pd.DataFrame(index=range(len(tsol)), columns=['output'])
out2['output'] = out2.apply(lambda tsol, rfsol, rbsol:
simple_func(t=tsol.tolist(),
rf=rfsol.tolist(),
rb=rbsol.tolist()),
axis=1)
| [
"Try to use \"name\" field in Series Type to get index value, and then get the same index for the other DataFrame\nimport pandas as pd\nimport numpy as np\n\n\ndef postional_sum(inot, df1, df2, df3):\n \"\"\"\n Get input index and gather the same position for the other DataFrame collection\n \"\"\"\n\n position = inot.name\n\n x = df1.iloc[position].sum()\n y = df2.iloc[position].sum()\n z = df3.iloc[position].sum()\n return x + y + z\n\n\n# dataframe rows as lists\ntsol = pd.DataFrame(np.random.randn(10, 5), columns=range(5))\nrfsol = pd.DataFrame(np.random.randn(10, 5), columns=range(5))\nrbsol = pd.DataFrame(np.random.randn(10, 5), columns=range(5))\n\nout2 = pd.DataFrame(index=range(len(tsol)), columns=[\"output\"])\n\nout2[\"output\"] = out2.apply(lambda x: postional_sum(x, tsol, rfsol, rbsol), axis=1)\n\nout2\n\nHope this helps!\n",
"When you run df.apply() with axis=1, it does not pass on the columns as individual arguments to the function, but as a Series object, as explained here. The correct way to do this would be\nout2['output'] = out2.apply(lambda row:\n simple_func(t=row[\"tsol\"],\n rf=row[\"rfsol\"],\n rb=row[\"rbsol\"]),\n axis=1)\n\n",
"You can eliminate the simple function using this:\nout2[\"output\"] = tsol.sum(axis=1) + rfsol.sum(axis=1) + rbsol.sum(axis=1)\n\nHere is the complete code:\nt=[1,2,3,4,5]\nrf=[6,7,8,9,10]\nrb=[11,12,13,14,15]\n\n# dataframe rows as lists\ntsol=pd.DataFrame((t,t,t,t,t,t,t,t,t,t))\nrfsol=pd.DataFrame((rf,rf,rf,rf,rf,rf,rf,rf,rf,rf))\nrbsol=pd.DataFrame((rb,rb,rb,rb,rb,rb,rb,rb,rb,rb))\n\nout2 = pd.DataFrame(index=range(len(tsol)), columns=[\"output\"])\nout2[\"output\"] = tsol.sum(axis=1) + rfsol.sum(axis=1) + rbsol.sum(axis=1)\n\nprint(out2)\n\nOUTPUT:\n output\n0 120\n1 120\n2 120\n3 120\n4 120\n5 120\n6 120\n7 120\n8 120\n9 120\n\n"
] | [
2,
0,
0
] | [] | [] | [
"apply",
"function",
"lambda",
"pandas",
"python"
] | stackoverflow_0074612434_apply_function_lambda_pandas_python.txt |
Q:
gaierror, NewConnectionError, MaxRetryError, ConnectionError with URL in requests
I am trying to check whether the response of the URL is the same as the domain record from the WHOIS database.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry
The code:
def abnormal_url(url):
response = requests.get(url,verify=False)
domainname = urlparse(url).netloc
domain = whois.whois(domainname)
try:
if response.text == domain:
return 0 # legitimate
else:
return 1 # phishing
except:
return 1 # phishing
Append to dataframe:
df['abnormal url'] = df['url'].apply(lambda i: abnormal_url(i))
Error found:
gaierror Traceback (most recent call last)
File D:\anaconda3\lib\site-packages\urllib3\connection.py:174, in HTTPConnection._new_conn(self)
173 try:
--> 174 conn = connection.create_connection(
175 (self._dns_host, self.port), self.timeout, **extra_kw
176 )
178 except SocketTimeout:
File D:\anaconda3\lib\site-packages\urllib3\util\connection.py:72, in create_connection(address, timeout, source_address, socket_options)
68 return six.raise_from(
69 LocationParseError(u"'%s', label empty or too long" % host), None
70 )
---> 72 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
73 af, socktype, proto, canonname, sa = res
File D:\anaconda3\lib\socket.py:954, in getaddrinfo(host, port, family, type, proto, flags)
953 addrlist = []
--> 954 for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
955 af, socktype, proto, canonname, sa = res
gaierror: [Errno 11001] getaddrinfo failed
During handling of the above exception, another exception occurred:
NewConnectionError Traceback (most recent call last)
File D:\anaconda3\lib\site-packages\urllib3\connectionpool.py:703, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
702 # Make the request on the httplib connection object.
--> 703 httplib_response = self._make_request(
704 conn,
705 method,
706 url,
707 timeout=timeout_obj,
708 body=body,
709 headers=headers,
710 chunked=chunked,
711 )
713 # If we're going to release the connection in ``finally:``, then
714 # the response doesn't need to know about the connection. Otherwise
715 # it will also try to release it and we'll have a double-release
716 # mess.
File D:\anaconda3\lib\site-packages\urllib3\connectionpool.py:386, in HTTPConnectionPool._make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
385 try:
--> 386 self._validate_conn(conn)
387 except (SocketTimeout, BaseSSLError) as e:
388 # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
File D:\anaconda3\lib\site-packages\urllib3\connectionpool.py:1040, in HTTPSConnectionPool._validate_conn(self, conn)
1039 if not getattr(conn, "sock", None): # AppEngine might not have `.sock`
-> 1040 conn.connect()
1042 if not conn.is_verified:
File D:\anaconda3\lib\site-packages\urllib3\connection.py:358, in HTTPSConnection.connect(self)
356 def connect(self):
357 # Add certificate verification
--> 358 self.sock = conn = self._new_conn()
359 hostname = self.host
File D:\anaconda3\lib\site-packages\urllib3\connection.py:186, in HTTPConnection._new_conn(self)
185 except SocketError as e:
--> 186 raise NewConnectionError(
187 self, "Failed to establish a new connection: %s" % e
188 )
190 return conn
NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x0000023F7EA21520>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed
During handling of the above exception, another exception occurred:
MaxRetryError Traceback (most recent call last)
File D:\anaconda3\lib\site-packages\requests\adapters.py:440, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
439 if not chunked:
--> 440 resp = conn.urlopen(
441 method=request.method,
442 url=url,
443 body=request.body,
444 headers=request.headers,
445 redirect=False,
446 assert_same_host=False,
447 preload_content=False,
448 decode_content=False,
449 retries=self.max_retries,
450 timeout=timeout
451 )
453 # Send the request.
454 else:
File D:\anaconda3\lib\site-packages\urllib3\connectionpool.py:785, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
783 e = ProtocolError("Connection aborted.", e)
--> 785 retries = retries.increment(
786 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
787 )
788 retries.sleep()
File D:\anaconda3\lib\site-packages\urllib3\util\retry.py:592, in Retry.increment(self, method, url, response, error, _pool, _stacktrace)
591 if new_retry.is_exhausted():
--> 592 raise MaxRetryError(_pool, url, error or ResponseError(cause))
594 log.debug("Incremented Retry for (url='%s'): %r", url, new_retry)
MaxRetryError: HTTPSConnectionPool(host='www.list.tmall.com', port=443): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000023F7EA21520>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))
During handling of the above exception, another exception occurred:
ConnectionError Traceback (most recent call last)
Input In [16], in <cell line: 1>()
----> 1 df['abnormal url'] = df['url'].apply(lambda i: abnormal_url(i))
File D:\anaconda3\lib\site-packages\pandas\core\series.py:4433, in Series.apply(self, func, convert_dtype, args, **kwargs)
4323 def apply(
4324 self,
4325 func: AggFuncType,
(...)
4328 **kwargs,
4329 ) -> DataFrame | Series:
4330 """
4331 Invoke function on values of Series.
4332
(...)
4431 dtype: float64
4432 """
-> 4433 return SeriesApply(self, func, convert_dtype, args, kwargs).apply()
File D:\anaconda3\lib\site-packages\pandas\core\apply.py:1082, in SeriesApply.apply(self)
1078 if isinstance(self.f, str):
1079 # if we are a string, try to dispatch
1080 return self.apply_str()
-> 1082 return self.apply_standard()
File D:\anaconda3\lib\site-packages\pandas\core\apply.py:1137, in SeriesApply.apply_standard(self)
1131 values = obj.astype(object)._values
1132 # error: Argument 2 to "map_infer" has incompatible type
1133 # "Union[Callable[..., Any], str, List[Union[Callable[..., Any], str]],
1134 # Dict[Hashable, Union[Union[Callable[..., Any], str],
1135 # List[Union[Callable[..., Any], str]]]]]"; expected
1136 # "Callable[[Any], Any]"
-> 1137 mapped = lib.map_infer(
1138 values,
1139 f, # type: ignore[arg-type]
1140 convert=self.convert_dtype,
1141 )
1143 if len(mapped) and isinstance(mapped[0], ABCSeries):
1144 # GH#43986 Need to do list(mapped) in order to get treated as nested
1145 # See also GH#25959 regarding EA support
1146 return obj._constructor_expanddim(list(mapped), index=obj.index)
File D:\anaconda3\lib\site-packages\pandas\_libs\lib.pyx:2870, in pandas._libs.lib.map_infer()
Input In [16], in <lambda>(i)
----> 1 df['abnormal url'] = df['url'].apply(lambda i: abnormal_url(i))
Input In [15], in abnormal_url(url)
1 def abnormal_url(url):
----> 2 response = requests.get(url,verify=False)
3 domainname = urlparse(url).netloc
4 domain = whois.whois(domainname)
File D:\anaconda3\lib\site-packages\requests\api.py:75, in get(url, params, **kwargs)
64 def get(url, params=None, **kwargs):
65 r"""Sends a GET request.
66
67 :param url: URL for the new :class:`Request` object.
(...)
72 :rtype: requests.Response
73 """
---> 75 return request('get', url, params=params, **kwargs)
File D:\anaconda3\lib\site-packages\requests\api.py:61, in request(method, url, **kwargs)
57 # By using the 'with' statement we are sure the session is closed, thus we
58 # avoid leaving sockets open which can trigger a ResourceWarning in some
59 # cases, and look like a memory leak in others.
60 with sessions.Session() as session:
---> 61 return session.request(method=method, url=url, **kwargs)
File D:\anaconda3\lib\site-packages\requests\sessions.py:529, in Session.request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
524 send_kwargs = {
525 'timeout': timeout,
526 'allow_redirects': allow_redirects,
527 }
528 send_kwargs.update(settings)
--> 529 resp = self.send(prep, **send_kwargs)
531 return resp
File D:\anaconda3\lib\site-packages\requests\sessions.py:645, in Session.send(self, request, **kwargs)
642 start = preferred_clock()
644 # Send the request
--> 645 r = adapter.send(request, **kwargs)
647 # Total elapsed time of the request (approximately)
648 elapsed = preferred_clock() - start
File D:\anaconda3\lib\site-packages\requests\adapters.py:519, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
515 if isinstance(e.reason, _SSLError):
516 # This branch is for urllib3 v1.22 and later.
517 raise SSLError(e, request=request)
--> 519 raise ConnectionError(e, request=request)
521 except ClosedPoolError as e:
522 raise ConnectionError(e, request=request)
ConnectionError: HTTPSConnectionPool(host='www.list.tmall.com', port=443): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000023F7EA21520>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))
A:
The connection could not be established because the site can't be reached.
Just execute the get request inside your try/except and it will work.
def abnormal_url(url):
domainname = urlparse(url).netloc
domain = whois.whois(domainname)
try:
response = requests.get(url,verify=False)
if response.text == domain:
return 0 # legitimate
else:
return 1 # phishing
except:
return 1 # phishing
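A slightly more explicit variant of the same idea (my own refinement, not part of the original answer) catches the specific requests exception instead of a bare except; urlparse and whois are assumed to be imported as in the question, and the timeout value is an addition of mine:
import requests

def abnormal_url(url):
    domainname = urlparse(url).netloc
    domain = whois.whois(domainname)
    try:
        response = requests.get(url, verify=False, timeout=10)
    except requests.exceptions.RequestException:
        # DNS failures, refused connections, timeouts, etc. all land here.
        return 1  # phishing
    return 0 if response.text == domain else 1  # legitimate / phishing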
| gaierror, NewConnectionError, MaxRetryError, ConnectionError with URL in requests | I am trying to check whether the response of the URL is the same as the domain record from the WHOIS database.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry
The code:
def abnormal_url(url):
response = requests.get(url,verify=False)
domainname = urlparse(url).netloc
domain = whois.whois(domainname)
try:
if response.text == domain:
return 0 # legitimate
else:
return 1 # phishing
except:
return 1 # phishing
Append to dataframe:
df['abnormal url'] = df['url'].apply(lambda i: abnormal_url(i))
Error found:
gaierror Traceback (most recent call last)
File D:\anaconda3\lib\site-packages\urllib3\connection.py:174, in HTTPConnection._new_conn(self)
173 try:
--> 174 conn = connection.create_connection(
175 (self._dns_host, self.port), self.timeout, **extra_kw
176 )
178 except SocketTimeout:
File D:\anaconda3\lib\site-packages\urllib3\util\connection.py:72, in create_connection(address, timeout, source_address, socket_options)
68 return six.raise_from(
69 LocationParseError(u"'%s', label empty or too long" % host), None
70 )
---> 72 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
73 af, socktype, proto, canonname, sa = res
File D:\anaconda3\lib\socket.py:954, in getaddrinfo(host, port, family, type, proto, flags)
953 addrlist = []
--> 954 for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
955 af, socktype, proto, canonname, sa = res
gaierror: [Errno 11001] getaddrinfo failed
During handling of the above exception, another exception occurred:
NewConnectionError Traceback (most recent call last)
File D:\anaconda3\lib\site-packages\urllib3\connectionpool.py:703, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
702 # Make the request on the httplib connection object.
--> 703 httplib_response = self._make_request(
704 conn,
705 method,
706 url,
707 timeout=timeout_obj,
708 body=body,
709 headers=headers,
710 chunked=chunked,
711 )
713 # If we're going to release the connection in ``finally:``, then
714 # the response doesn't need to know about the connection. Otherwise
715 # it will also try to release it and we'll have a double-release
716 # mess.
File D:\anaconda3\lib\site-packages\urllib3\connectionpool.py:386, in HTTPConnectionPool._make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
385 try:
--> 386 self._validate_conn(conn)
387 except (SocketTimeout, BaseSSLError) as e:
388 # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
File D:\anaconda3\lib\site-packages\urllib3\connectionpool.py:1040, in HTTPSConnectionPool._validate_conn(self, conn)
1039 if not getattr(conn, "sock", None): # AppEngine might not have `.sock`
-> 1040 conn.connect()
1042 if not conn.is_verified:
File D:\anaconda3\lib\site-packages\urllib3\connection.py:358, in HTTPSConnection.connect(self)
356 def connect(self):
357 # Add certificate verification
--> 358 self.sock = conn = self._new_conn()
359 hostname = self.host
File D:\anaconda3\lib\site-packages\urllib3\connection.py:186, in HTTPConnection._new_conn(self)
185 except SocketError as e:
--> 186 raise NewConnectionError(
187 self, "Failed to establish a new connection: %s" % e
188 )
190 return conn
NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x0000023F7EA21520>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed
During handling of the above exception, another exception occurred:
MaxRetryError Traceback (most recent call last)
File D:\anaconda3\lib\site-packages\requests\adapters.py:440, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
439 if not chunked:
--> 440 resp = conn.urlopen(
441 method=request.method,
442 url=url,
443 body=request.body,
444 headers=request.headers,
445 redirect=False,
446 assert_same_host=False,
447 preload_content=False,
448 decode_content=False,
449 retries=self.max_retries,
450 timeout=timeout
451 )
453 # Send the request.
454 else:
File D:\anaconda3\lib\site-packages\urllib3\connectionpool.py:785, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
783 e = ProtocolError("Connection aborted.", e)
--> 785 retries = retries.increment(
786 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
787 )
788 retries.sleep()
File D:\anaconda3\lib\site-packages\urllib3\util\retry.py:592, in Retry.increment(self, method, url, response, error, _pool, _stacktrace)
591 if new_retry.is_exhausted():
--> 592 raise MaxRetryError(_pool, url, error or ResponseError(cause))
594 log.debug("Incremented Retry for (url='%s'): %r", url, new_retry)
MaxRetryError: HTTPSConnectionPool(host='www.list.tmall.com', port=443): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000023F7EA21520>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))
During handling of the above exception, another exception occurred:
ConnectionError Traceback (most recent call last)
Input In [16], in <cell line: 1>()
----> 1 df['abnormal url'] = df['url'].apply(lambda i: abnormal_url(i))
File D:\anaconda3\lib\site-packages\pandas\core\series.py:4433, in Series.apply(self, func, convert_dtype, args, **kwargs)
4323 def apply(
4324 self,
4325 func: AggFuncType,
(...)
4328 **kwargs,
4329 ) -> DataFrame | Series:
4330 """
4331 Invoke function on values of Series.
4332
(...)
4431 dtype: float64
4432 """
-> 4433 return SeriesApply(self, func, convert_dtype, args, kwargs).apply()
File D:\anaconda3\lib\site-packages\pandas\core\apply.py:1082, in SeriesApply.apply(self)
1078 if isinstance(self.f, str):
1079 # if we are a string, try to dispatch
1080 return self.apply_str()
-> 1082 return self.apply_standard()
File D:\anaconda3\lib\site-packages\pandas\core\apply.py:1137, in SeriesApply.apply_standard(self)
1131 values = obj.astype(object)._values
1132 # error: Argument 2 to "map_infer" has incompatible type
1133 # "Union[Callable[..., Any], str, List[Union[Callable[..., Any], str]],
1134 # Dict[Hashable, Union[Union[Callable[..., Any], str],
1135 # List[Union[Callable[..., Any], str]]]]]"; expected
1136 # "Callable[[Any], Any]"
-> 1137 mapped = lib.map_infer(
1138 values,
1139 f, # type: ignore[arg-type]
1140 convert=self.convert_dtype,
1141 )
1143 if len(mapped) and isinstance(mapped[0], ABCSeries):
1144 # GH#43986 Need to do list(mapped) in order to get treated as nested
1145 # See also GH#25959 regarding EA support
1146 return obj._constructor_expanddim(list(mapped), index=obj.index)
File D:\anaconda3\lib\site-packages\pandas\_libs\lib.pyx:2870, in pandas._libs.lib.map_infer()
Input In [16], in <lambda>(i)
----> 1 df['abnormal url'] = df['url'].apply(lambda i: abnormal_url(i))
Input In [15], in abnormal_url(url)
1 def abnormal_url(url):
----> 2 response = requests.get(url,verify=False)
3 domainname = urlparse(url).netloc
4 domain = whois.whois(domainname)
File D:\anaconda3\lib\site-packages\requests\api.py:75, in get(url, params, **kwargs)
64 def get(url, params=None, **kwargs):
65 r"""Sends a GET request.
66
67 :param url: URL for the new :class:`Request` object.
(...)
72 :rtype: requests.Response
73 """
---> 75 return request('get', url, params=params, **kwargs)
File D:\anaconda3\lib\site-packages\requests\api.py:61, in request(method, url, **kwargs)
57 # By using the 'with' statement we are sure the session is closed, thus we
58 # avoid leaving sockets open which can trigger a ResourceWarning in some
59 # cases, and look like a memory leak in others.
60 with sessions.Session() as session:
---> 61 return session.request(method=method, url=url, **kwargs)
File D:\anaconda3\lib\site-packages\requests\sessions.py:529, in Session.request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
524 send_kwargs = {
525 'timeout': timeout,
526 'allow_redirects': allow_redirects,
527 }
528 send_kwargs.update(settings)
--> 529 resp = self.send(prep, **send_kwargs)
531 return resp
File D:\anaconda3\lib\site-packages\requests\sessions.py:645, in Session.send(self, request, **kwargs)
642 start = preferred_clock()
644 # Send the request
--> 645 r = adapter.send(request, **kwargs)
647 # Total elapsed time of the request (approximately)
648 elapsed = preferred_clock() - start
File D:\anaconda3\lib\site-packages\requests\adapters.py:519, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
515 if isinstance(e.reason, _SSLError):
516 # This branch is for urllib3 v1.22 and later.
517 raise SSLError(e, request=request)
--> 519 raise ConnectionError(e, request=request)
521 except ClosedPoolError as e:
522 raise ConnectionError(e, request=request)
ConnectionError: HTTPSConnectionPool(host='www.list.tmall.com', port=443): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000023F7EA21520>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))
| [
"The connection could not be established because the site can't be reached.\nJust execute the get request inside your try/except and it will work.\ndef abnormal_url(url):\n domainname = urlparse(url).netloc\n domain = whois.whois(domainname)\n try:\n response = requests.get(url,verify=False)\n if response.text == domain:\n return 0 # legitimate\n else:\n return 1 # phishing\n except:\n return 1 # phishing\n\n"
] | [
1
] | [] | [] | [
"dataframe",
"jupyter_notebook",
"python",
"python_requests"
] | stackoverflow_0074613109_dataframe_jupyter_notebook_python_python_requests.txt |
Q:
Django logging custom attributes in formatter
How can Django use logging to log using custom attributes in the formatter? I'm thinking of logging the logged in username for example.
In the settings.py script, the LOGGING variable is defined:
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'filters': {
'require_debug_false': {
'()': 'django.utils.log.RequireDebugFalse'
},
},
'formatters' : {
'info_format' : {
'format' : '%(asctime)s %(levelname)s - %(message)s',
},
}
}
I wish to use a format, something like:
'format' : '%(asctime).19s %(levelname)s - %(username)s: %(message)s'
Where username would be the currently logged-in user. Other kinds of session variables might be added here as well.
A workaround here is to use the extra parameter on the logger methods, which receives a dictionary with the keys as the strings I want to use on the format string:
logger.info(message, extra={'username' : request.user.username})
Another (ugly) workaround would be to build the message attribute to include the things that are not part of the default attributes that logging formatters have.
message = request.user.username + " - " + message
logger.info(message)
But is there a way to set up the format string with certain attributes and have Django supply them automatically to the logging API? For example %(username)s for request.user.username, or perhaps any others...
A:
You can use a filter to add your custom attribute. For example :
def add_my_custom_attribute(record):
record.myAttribute = 'myValue'
record.username = record.request.user.username
return True
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'filters': {
...
'add_my_custom_attribute': {
'()': 'django.utils.log.CallbackFilter',
'callback': add_my_custom_attribute,
}
},
'handlers': {
...
'django.server': {
'level': 'INFO',
'class': 'logging.StreamHandler',
'filters': ['add_my_custom_attribute'],
'formatter': 'django.server',
},
},
...
}
By installing a filter, you can process each log record and decide whether it should be passed from logger to handler.
The filter gets the log record, which contains all the details of the log entry (e.g. time, severity, request, status code).
The attributes of the record are used by the formatter to format it into a string message. If you add your custom attributes to that record, they will also be available to the formatter.
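For completeness, a formatter that consumes the injected attributes could look like this; it is a sketch based on the format string from the question, reusing the info_format name defined there:
'formatters' : {
    'info_format' : {
        'format' : '%(asctime).19s %(levelname)s - %(username)s: %(message)s',
    },
},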
A:
The extra keyword is not a workaround. That's the most eloquent way of writing customized formatters, unless you are writing custom logging altogether.
format = '%(asctime).19s %(levelname)s - %(username)s: %(message)s'
logging.basicConfig(format=format)
logger.info(message, extra={'username' : request.user.username})
Some notes from the documentation (**kwargs for the Django logger):
The keys in the dictionary passed in extra should not clash with the keys used by the logging system.
If the strings expected by the Formatter are missing, the message will not be logged.
This feature is intended for use in specialized circumstances, and not always.
A:
I will provide one of many possible complete answers to this question:
How can Django use logging to log using custom attributes in the formatter? I'm thinking of logging the logged in username for example.
Other answers touched on the way to add extra contextual info through python's logging utilities. The method of using filters to attach additional information to the log record is ideal, and best described in the docs:
https://docs.python.org/3/howto/logging-cookbook.html#using-filters-to-impart-contextual-information
This still does not tell us how to get the user from the request in a universal way. The following library does this:
https://github.com/ninemoreminutes/django-crum
Thus, combine the two, and you will have a complete answer to the question that has been asked.
import logging
from crum import get_current_user
class ContextFilter(logging.Filter):
"""
This is a filter injects the Django user
"""
def filter(self, record):
record.user = get_current_user()
return True
if __name__ == '__main__':
logging.basicConfig(level=logging.DEBUG,
format='User: %(user)-8s %(message)s')
a1 = logging.getLogger('a.b.c')
f = ContextFilter()
a1.addFilter(f)
a1.debug('A debug message')
This will need to happen within a Django request-response cycle with the CRUM library properly installed.
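In a Django project the same filter can be wired into the LOGGING dict in settings.py instead of being attached manually; this is a sketch, and the dotted path to ContextFilter is a hypothetical module path that depends on where you place the class:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'filters': {
        'user_filter': {
            '()': 'myproject.logging_filters.ContextFilter',   # hypothetical path
        },
    },
    'formatters': {
        'info_format': {
            'format': '%(asctime).19s %(levelname)s - %(user)s: %(message)s',
        },
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'filters': ['user_filter'],
            'formatter': 'info_format',
        },
    },
    'root': {'handlers': ['console'], 'level': 'INFO'},
}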
A:
I came across this old question while looking for a solution to display the username in Django request logs.
napuzba's answer helped a lot, but it didn't work right away, because the user doesn't always exist in the request object.
It resulted in this piece of code in Django settings.py:
def add_username(record):
"""
Adds request user username to the log
:param record:
:return:
"""
try:
record.username = record.request.user.username
except AttributeError:
record.username = ""
return True
LOGGING = {
...
"formatters": {
"verbose": {
"format": "[%(asctime)s] %(levelname)s [%(name)s:%(module)s] [%(username)s] %(message)s",
"datefmt": "%d.%m.%Y %H:%M:%S",
},
},
"filters": {
"add_username": {
"()": "django.utils.log.CallbackFilter",
"callback": add_username,
}
},
"handlers": {
"console": {
"level": "DEBUG",
"class": "logging.StreamHandler",
"formatter": "verbose",
"filters": ["add_username"],
},
}
}
| Django logging custom attributes in formatter | How can Django use logging to log using custom attributes in the formatter? I'm thinking of logging the logged in username for example.
In the settings.py script, the LOGGING variable is defined:
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'filters': {
'require_debug_false': {
'()': 'django.utils.log.RequireDebugFalse'
},
},
'formatters' : {
'info_format' : {
'format' : '%(asctime)s %(levelname)s - %(message)s',
},
}
}
I wish to use a format, something like:
'format' : '%(asctime).19s %(levelname)s - %(username)s: %(message)s'
Where username would be the currently logged in user. Maybe any other kind of session's variables may be added here.
A workaround here is to use the extra parameter on the logger methods, which receives a dictionary with the keys as the strings I want to use on the format string:
logger.info(message, extra={'username' : request.user.username})
Another (ugly) workaround would be to build the message attribute to include the things that are not part of the default attributes that logging formatters have.
message = request.user.username + " - " + message
logger.info(message)
But, is there a way to set up the format string with certain attributes and make Django give them automatically to the logging API? If %(username)s for example, the request.user.username, of any others perhaps...
| [
"You can use a filter to add your custom attribute. For example :\ndef add_my_custom_attribute(record):\n record.myAttribute = 'myValue'\n record.username = record.request.user.username \n return True\n\nLOGGING = {\n 'version': 1,\n 'disable_existing_loggers': False,\n 'filters': {\n ...\n 'add_my_custom_attribute': {\n '()': 'django.utils.log.CallbackFilter',\n 'callback': add_my_custom_attribute,\n }\n },\n 'handlers': {\n ...\n 'django.server': {\n 'level': 'INFO',\n 'class': 'logging.StreamHandler',\n 'filters': ['add_my_custom_attribute'],\n 'formatter': 'django.server',\n }, \n },\n ...\n}\n\n\nBy installing a filter, you can process each log record and decide whether it should be passed from logger to handler. \nThe filter get the log record which contains all the details of log (i.e : time, severity, request, status code). \nThe attributes of the record are used by the formatter to format it to string message. If you add your custom attributes to that record - they will also be available to the formatter.\n",
"The extra keyword is not a workaround. That's the most eloquent way of writing customized formatters, unless you are writing a custom logging altogether.\nformat: '%(asctime).19s %(levelname)s - %(username)s: %(message)s'\nlogging.basicConfig(format=format)\nlogger.info(message, extra={'username' : request.user.username})\n\nSome note from the documentation (**kwars for Django logger):\n\nThe keys in the dictionary passed in extra should not clash with the keys used by the logging system.\nIf the strings expected by the Formatter are missing, the message will not be logged.\nThis feature is intended for use in specialized circumstances, and not always.\n\n",
"I will provide one of many possible complete answers to this question:\n\nHow can Django use logging to log using custom attributes in the formatter? I'm thinking of logging the logged in username for example.\n\nOther answers touched on the way to add extra contextual info through python's logging utilities. The method of using filters to attach additional information to the log record is ideal, and best described in the docs:\nhttps://docs.python.org/3/howto/logging-cookbook.html#using-filters-to-impart-contextual-information\nThis still does not tell us how to get the user from the request in a universal way. The following library does this:\nhttps://github.com/ninemoreminutes/django-crum\nThus, combine the two, and you will have a complete answer to the question that has been asked.\nimport logging\nfrom crum import get_current_user\n\nclass ContextFilter(logging.Filter):\n \"\"\"\n This is a filter injects the Django user\n \"\"\"\n\n def filter(self, record):\n\n record.user = get_current_user()\n return True\n\nif __name__ == '__main__':\n logging.basicConfig(level=logging.DEBUG,\n format='User: %(user)-8s %(message)s')\n a1 = logging.getLogger('a.b.c')\n\n f = ContextFilter()\n a1.addFilter(f)\n a1.debug('A debug message')\n\nThis will need to happen within a Django request-response cycle with the CRUM library properly installed.\n",
"I came accross this old question while looking for a solution to display username in Django request logs.\nResponse of napuzba helped a lot, but it didn't work right away, because the user doesn't always exist in the request object.\nIt resulted in this piece of code in Django settings.py:\ndef add_username(record):\n \"\"\"\n Adds request user username to the log\n :param record:\n :return:\n \"\"\"\n try:\n record.username = record.request.user.username\n except AttributeError:\n record.username = \"\"\n return True\n \n\nLOGGING = {\n ...\n \"formatters\": {\n \"verbose\": {\n \"format\": \"[%(asctime)s] %(levelname)s [%(name)s:%(module)s] [%(username)s] %(message)s\",\n \"datefmt\": \"%d.%m.%Y %H:%M:%S\",\n },\n },\n \"filters\": {\n \"add_username\": {\n \"()\": \"django.utils.log.CallbackFilter\",\n \"callback\": add_username,\n }\n },\n \"handlers\": { \n \"console\": {\n \"level\": \"DEBUG\",\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"verbose\",\n \"filters\": [\"add_username\"],\n },\n }\n}\n\n"
] | [
33,
3,
2,
0
] | [] | [] | [
"django",
"logging",
"python"
] | stackoverflow_0044424040_django_logging_python.txt |
Q:
How to make a code faster to calculate the linear distances from the point Python?
Here is the code:
Outputs = []
for X2, Y2 in X:
Color_Gradient = 0
Lowest = 0
for X1, Y1, grad in zip(Velocity_Momentum[:, 0], Velocity_Momentum[:, 1], Color):
XD = X2 - X1
YD = Y2 - Y1
Distance = math.sqrt((XD * XD) + (YD * YD))
if Lowest == 0 or Lowest > Distance:
Lowest = Distance
Color_Gradient = grad
Outputs.append(Color_Gradient)
print("X2 = ", X2, " Y2 = ", Y2, " Color = ", Color_Gradient)
Here X.shape = (572, 2), Velocity_Momentum.shape = (1000000, 2) and Color.shape = (1000000,).
Please let me know how to make it faster. I have tried the code above and it is very, very slow, which is a problem since I need the result quickly.
A:
It looks like you are using numpy arrays. With numpy, it is faster to use vectorized operations on whole arrays at the same time, compared to loops. It also usually gives cleaner code.
As I understand it, you want to extract the color corresponding to the smallest (euclidean) distance to a specific point.
Outputs = []
for X2, Y2 in X:
XD = X2 - Velocity_Momentum[:, 0]
YD = Y2 - Velocity_Momentum[:, 1]
Distance = (XD * XD) + (YD * YD)
Color_Gradient = Color[Distance.argmin()]
Outputs.append(Color_Gradient)
print("X2 = ", X2, " Y2 = ", Y2, " Color = ", Color_Gradient)
| How to make a code faster to calculate the linear distances from the point Python? | Here is the code:
Outputs = []
for X2, Y2 in X:
Color_Gradient = 0
Lowest = 0
for X1, Y1, grad in zip(Velocity_Momentum[:, 0], Velocity_Momentum[:, 1], Color):
XD = X2 - X1
YD = Y2 - Y1
Distance = math.sqrt((XD * XD) + (YD * YD))
if Lowest == 0 or Lowest > Distance:
Lowest = Distance
Color_Gradient = grad
Outputs.append(Color_Gradient)
print("X2 = ", X2, " Y2 = ", Y2, " Color = ", Color_Gradient)
Here X.shape = (572, 2) Velocity_Momentum = (1000000, 2) Color = (1000000,).
Please let me know how to make it faster. I have tried the code above and it is very very slow. That is not good since I am trying to get the result faster.
Please let me know.
| [
"It looks like you are using numpy arrays. With numpy, it is faster to use vectorized operations on whole arrays at the same time, compared to loops. It also usually gives cleaner code.\nAs I understand it, you want to extract the color corresponding to the smallest (euclidean) distance to a specific point.\nOutputs = []\n\nfor X2, Y2 in X:\n XD = X2 - Velocity_Momentum[:, 0]\n YD = Y2 - Velocity_Momentum[:, 1]\n Distance = (XD * XD) + (YD * YD)\n Color_Gradient = Color[Distance.argmin()]\n Outputs.append(Color_Gradient)\n print(\"X2 = \", X2, \" Y2 = \", Y2, \" Color = \", Color_Gradient)\n\n"
] | [
3
] | [] | [] | [
"performance",
"process",
"python",
"python_3.x"
] | stackoverflow_0074612910_performance_process_python_python_3.x.txt |
Q:
How to optimize N+1 SQL queries when serializing a post with mptt comments?
I have the following serializer for a detailed post:
class ArticleDetailSerializer(serializers.ModelSerializer):
author = ArticleAuthorSerializer(read_only=True)
comments = CommentSerializer(many=True, read_only=True)
class Meta:
model = Article
fields = '__all__'
Comment Serializer:
class CommentSerializer(serializers.ModelSerializer):
class Meta:
model = Comment
fields = '__all__'
def get_fields(self):
fields = super(CommentSerializer, self).get_fields()
fields['children'] = CommentSerializer(many=True, required=False, source='get_children')
return fields
When working with a list of comments, I get 2 sql queries if I work with get_cached_trees()
class CommentListAPIView(generics.ListAPIView):
serializer_class = serializers.CommentSerializer
queryset = Comment.objects.all().get_cached_trees()
But how do you get the same thing to work for an article with a list of comments?
class ArticleDetailAPIView(generics.RetrieveAPIView):
serializer_class = serializers.ArticleDetailSerializer
queryset = Article.custom.all()
lookup_field = 'slug'
def get_queryset(self):
queryset = self.queryset.prefetch_related(Prefetch('comments', queryset=Comment.objects.all().get_cached_trees()))
return queryset
I used prefetch_related() but it didn't work. I used Prefetch(), it gave me an error:
'list' object has no attribute '_add_hints'
I seem to be at a loss on how to optimize the mptt comments for the article.
However, if I render the same comments in a Django template following the documentation, this problem does not occur. I would appreciate your help.
A:
This kind of solution worked for me.
class ArticleDetailSerializer(serializers.ModelSerializer):
author = ArticleAuthorSerializer(read_only=True)
comments = serializers.SerializerMethodField()
class Meta:
model = Article
fields = '__all__'
def get_comments(self, obj):
qs = obj.comments.all().get_cached_trees()
return CommentSerializer(qs, many=True).data
This reduced all queries from 11 SQL to 4.
| How to optimize N+1 SQL queries when serializing a post with mptt comments? | I have the following serializer for a detailed post:
class ArticleDetailSerializer(serializers.ModelSerializer):
author = ArticleAuthorSerializer(read_only=True)
comments = CommentSerializer(many=True, read_only=True)
class Meta:
model = Article
fields = '__all__'
Comment Serializer:
class CommentSerializer(serializers.ModelSerializer):
class Meta:
model = Comment
fields = '__all__'
def get_fields(self):
fields = super(CommentSerializer, self).get_fields()
fields['children'] = CommentSerializer(many=True, required=False, source='get_children')
return fields
When working with a list of comments, I get 2 sql queries if I work with get_cached_trees()
class CommentListAPIView(generics.ListAPIView):
serializer_class = serializers.CommentSerializer
queryset = Comment.objects.all().get_cached_trees()
But how do you get the same thing to work for an article with a list of comments?
class ArticleDetailAPIView(generics.RetrieveAPIView):
serializer_class = serializers.ArticleDetailSerializer
queryset = Article.custom.all()
lookup_field = 'slug'
def get_queryset(self):
queryset = self.queryset.prefetch_related(Prefetch('comments', queryset=Comment.objects.all().get_cached_trees()))
return queryset
I used prefetch_related() but it didn't work. I used Prefetch(), it gave me an error:
'list' object has no attribute '_add_hints'
I seem to be lost in the ability to optimize the mptt comments for the article.
But if you use the same comments, rendering according to the documentation in the Django template, then this problem is not observed. I ask for your help, dear programmers and experts.
| [
"This kind of solution worked for me.\nclass ArticleDetailSerializer(serializers.ModelSerializer):\n author = ArticleAuthorSerializer(read_only=True)\n comments = serializers.SerializerMethodField()\n\n class Meta:\n model = Article\n fields = '__all__'\n\n def get_comments(self, obj):\n qs = obj.comments.all().get_cached_trees()\n return CommentSerializer(qs, many=True).data\n\nThis reduced all queries from 11 SQL to 4.\n"
] | [
0
] | [] | [] | [
"django",
"django_mptt",
"django_rest_framework",
"python"
] | stackoverflow_0074612577_django_django_mptt_django_rest_framework_python.txt |
Q:
How to solve "type object 'datetime.datetime' has no attribute 'timedelta'" when creating a new date?
I'm using Django and Python 3.7. I'm trying to calculate a new datetime by adding a number of seconds to an existing datetime. From this -- What is the standard way to add N seconds to datetime.time in Python?, I thought i could do
new_date = article.created_on + datetime.timedelta(0, elapsed_time_in_seconds)
where "article.created_on" is a datetime and "elapsed_time_in_seconds" is an integer. But the above is resulting in an
type object 'datetime.datetime' has no attribute 'timedelta'
error. What am I missing?
A:
You've imported the wrong thing; you've done from datetime import datetime so that datetime now refers to the class, not the containing module.
Either do:
import datetime
...article.created_on + datetime.timedelta(...)
or
from datetime import datetime, timedelta
...article.created_on + timedelta(...)
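A small self-contained example of the second form (the values are made up for illustration):
from datetime import datetime, timedelta

created_on = datetime(2023, 1, 1, 12, 0, 0)
elapsed_time_in_seconds = 90
new_date = created_on + timedelta(seconds=elapsed_time_in_seconds)
print(new_date)   # 2023-01-01 12:01:30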
| How to solve "type object 'datetime.datetime' has no attribute 'timedelta'" when creating a new date? | I'm using Django and Python 3.7. I'm trying to calculate a new datetime by adding a number of seconds to an existing datetime. From this -- What is the standard way to add N seconds to datetime.time in Python?, I thought i could do
new_date = article.created_on + datetime.timedelta(0, elapsed_time_in_seconds)
where "article.created_on" is a datetime and "elapsed_time_in_seconds" is an integer. But the above is resulting in an
type object 'datetime.datetime' has no attribute 'timedelta'
error. What am I missing
| [
"You've imported the wrong thing; you've done from datetime import datetime so that datetime now refers to the class, not the containing module.\nEither do:\nimport datetime\n...article.created_on + datetime.timedelta(...)\n\nor\nfrom datetime import datetime, timedelta\n...article.created_on + timedelta(...)\n\n"
] | [
8
] | [
"You should use the correct import:\nfrom datetime import timedelta\n\nnew_date = article.created_on + timedelta(0, elapsed_time_in_seconds)\n\n",
"I got the same issue, i solved it by replacing\nfrom datetime import datetime as my_datetime\nwith\nimport datetime as my_datetime\n"
] | [
-1,
-1
] | [
"datetime",
"django",
"python",
"python_3.x"
] | stackoverflow_0055340547_datetime_django_python_python_3.x.txt |
Q:
Not able to create table from CSV in Databricks
I am trying to create a table from a CSV file stored in Azure Storage Account. I am using the below code. I am using Azure Databricks. Notebook is in Python.
%sql
drop table if exists customer;
create table customer
using csv
options ( path "/mnt/datalake/data/Customer.csv", header "True", mode "FAILFAST", inferSchema "True");
I am getting the below error.
Unable to infer schema for CSV. It must be specified manually.
Does anyone have an idea how to resolve this error?
A:
I have reproduced the above and got the below results.
This is my csv file in the data container.
This is my mounting:
I mounted this, and when I tried to create a table from the CSV file, I got the same error.
The above error arises when we don't give the correct path to the csv file. In the file path, after /mnt, give the mount point (here onemount for me), not the container name, since the mount already points to the container.
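For example, with the container mounted at /mnt/onemount as above, the statement from the question would reference the mount point instead of the container name (the exact path below is an assumption about where the file sits inside the mounted container):
%sql
create table customer
using csv
options ( path "/mnt/onemount/Customer.csv", header "True", mode "FAILFAST", inferSchema "True");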
Result:
| Not able to create table from CSV in Databricks | I am trying to create a table from a CSV file stored in Azure Storage Account. I am using the below code. I am using Azure Databricks. Notebook is in Python.
%sql
drop table if exists customer;
create table customer
using csv
options ( path "/mnt/datalake/data/Customer.csv", header "True", mode "FAILFAST", inferSchema "True");
I am getting the below error.
Unable to infer schema for CSV. It must be specified manually.
Anyone having any idea, on how to resolve this error.
| [
"I have reproduced above and got the below results.\nThis my csv file in data container.\n\nThis is my mounting:\n\nI have mounted this and when I tried to create table from CSV file, I got same error.\n\nThe above error arises when we don't give correct path in the csv file. In file path after /mnt give mount point(here onemount for me) not container name as we already mounted till container.\n\nResult:\n\n"
] | [
0
] | [] | [] | [
"azure",
"azure_databricks",
"databricks",
"python",
"sql"
] | stackoverflow_0074605049_azure_azure_databricks_databricks_python_sql.txt |
Q:
How to return serialized JSON from Flask-SQLAlchemy relationship query?
I'm using Flask-SQLAlchemy and I have the following models with a one-to-many relationship,
class User(db.Model):
# Table name
__tablename__ = "users"
# Primary key
user_id = db.Column(db.Integer, primary_key=True)
# Fields (A-Z)
email = db.Column(db.String(50), nullable=False, unique=True)
password = db.Column(db.String, nullable=False)
username = db.Column(db.String(50), unique=True)
# Relationships (A-Z)
uploads = db.relationship("Upload", backref="user")
class Upload(db.Model):
# Table name
__tablename__ = "uploads"
# Primary key
upload_id = db.Column(db.Integer, primary_key=True)
# Fields (A-Z)
name = db.Column(db.String(50), nullable=False)
path_to_file = db.Column(db.String(256), nullable=False, unique=True)
uploaded_by = db.Column(db.Integer, db.ForeignKey("users.user_id"))
and i want to return JSON like this:
{
"users": [
{
"email": "[email protected]",
"uploads": [
{
"name": "1.png",
"path_to_file": "static/1.png"
}
],
"username": "maro"
},
{
"email": "[email protected]",
"uploads": [
{
"name": "2.jpg",
"path_to_file": "static/2.jpg"
}
],
"username": "makos"
}
]
}
So basically I want to return the user object with all uploads (files the user uploaded) in an array.
I know I can access the Upload class object within a user through User.uploads (created with db.relationship), but I need some kind of serializer.
I wanted to add a custom serialize() method to all my models:
# User serializer
def serialize_user(self):
if self.uploads:
uploads = [upload.serialize_upload() for upload in self.uploads]
return {
"email": self.email,
"password": self.password,
"username": self.username,
"uploads": uploads
}
# Upload serializer
def serialize_upload(self):
if self.user:
dict_user = self.user.serialize_user()
return {
"name": self.name,
"path_to_file": self.path_to_file,
"user": dict_user
}
But the problem with this is that I end up with a nested loop. My User object has upload files, each upload has its user's data, and that user's data has upload files again...
My view endpoint:
@app.route('/users', methods=["GET"])
def get_users():
users = [user.serialize_user() for user in User.query.all()]
return jsonify(users)
Error:
RecursionError: maximum recursion depth exceeded while calling a Python object
Partial solution:
I can simply omit serializing the user object inside the Upload serializer, but then I won't be able to create a similar endpoint to get uploads.
Example: /uploads - JSON with all uploads and the user object nested.
How can I effectively work with relationships to return them as serialized JSON data similar to the JSON structure above?
A:
As you said, you can simply write a second serializer method. So you keep the other one for your /uploads API call.
# User serializer
def serialize_user(self):
if self.uploads:
uploads = [upload.serialize_upload_bis() for upload in self.uploads]
return {
"email": self.email,
"password": self.password,
"username": self.username,
"uploads": uploads
}
# Upload serializer
def serialize_upload_bis(self):
return {
"name": self.name,
"path_to_file": self.path_to_file,
}
def serialize_upload(self):
if self.user:
dict_user = self.user.serialize_user()
return {
"name": self.name,
"path_to_file": self.path_to_file,
"user": dict_user
}
A:
Although the question is quite old, I want to add a more generic answer, because I also was faced with this issue. The following serializer works for all classes with relationships:
from sqlalchemy import inspect
from sqlalchemy.orm.collections import InstrumentedList


class Serializer(object):

    def serialize(self):
        serializedObject = {}
        for c in inspect(self).attrs.keys():
            attribute = getattr(self, c)
            if type(attribute) is InstrumentedList:
                # recurse into one-to-many relationships
                serializedObject[c] = Serializer.serialize_list(attribute)
            else:
                serializedObject[c] = attribute
        return serializedObject

    @staticmethod
    def serialize_list(l):
        return [m.serialize() for m in l]
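A sketch of how the models from the question could use this mixin (it assumes the column definitions from the question; dropping the password key is my own assumption about what you would want to expose):
class User(db.Model, Serializer):
    # ... columns and relationships as defined in the question ...

    def serialize(self):
        d = Serializer.serialize(self)
        del d['password']   # don't expose password hashes
        return d


@app.route('/users', methods=["GET"])
def get_users():
    return jsonify(Serializer.serialize_list(User.query.all()))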
| How to return serialized JSON from Flask-SQLAlchemy relationship query? | i'm using Flask-SQLAlchemy and i have the following models with one to many relationship,
class User(db.Model):
# Table name
__tablename__ = "users"
# Primary key
user_id = db.Column(db.Integer, primary_key=True)
# Fields (A-Z)
email = db.Column(db.String(50), nullable=False, unique=True)
password = db.Column(db.String, nullable=False)
username = db.Column(db.String(50), unique=True)
# Relationships (A-Z)
uploads = db.relationship("Upload", backref="user")
class Upload(db.Model):
# Table name
__tablename__ = "uploads"
# Primary key
upload_id = db.Column(db.Integer, primary_key=True)
# Fields (A-Z)
name = db.Column(db.String(50), nullable=False)
path_to_file = db.Column(db.String(256), nullable=False, unique=True)
uploaded_by = db.Column(db.Integer, db.ForeignKey("users.user_id"))
and i want to return JSON like this:
{
"users": [
{
"email": "[email protected]",
"uploads": [
{
"name": "1.png",
"path_to_file": "static/1.png"
}
],
"username": "maro"
},
{
"email": "[email protected]",
"uploads": [
{
"name": "2.jpg",
"path_to_file": "static/2.jpg"
}
],
"username": "makos"
}
]
}
So basically i want to return user object with all uploads (files user uploaded) in array.
I know i can access Upload class object within user with User.uploads (created with db.relationship) but i need some kind of serializer.
I wanted to add custom serialize() method to all my models:
# User serializer
def serialize_user(self):
if self.uploads:
uploads = [upload.serialize_upload() for upload in self.uploads]
return {
"email": self.email,
"password": self.password,
"username": self.username,
"uploads": uploads
}
# Upload serializer
def serialize_upload(self):
if self.user:
dict_user = self.user.serialize_user()
return {
"name": self.name,
"path_to_file": self.path_to_file,
"user": dict_user
}
But problem with this is that i end up with nesting loop. My User object has upload files and each upload has it's user's data and these user's data has uploads files...
My view endpoint:
@app.route('/users', methods=["GET"])
def get_users():
users = [user.serialize_user() for user in User.query.all()]
return jsonify(users)
Error:
RecursionError: maximum recursion depth exceeded while calling a Python object
Partial solution:
I can simply ommit serializing user object inside Upload serializer but then i won't be able to create similiar endpoint but to get uploads.
Example: /uploads - JSON with all uploads and user object nested.
How can i effectively work with relationships to return them as serialized JSON data similiar to JSON structure above?
| [
"As you said, you can simply write a second serializer method. So you keep the other one for your /uploads API call.\n# User serializer\ndef serialize_user(self):\n if self.uploads:\n uploads = [upload.serialize_upload_bis() for upload in self.uploads]\n return {\n \"email\": self.email,\n \"password\": self.password,\n \"username\": self.username,\n \"uploads\": uploads\n }\n\n# Upload serializer\ndef serialize_upload_bis(self):\n return {\n \"name\": self.name,\n \"path_to_file\": self.path_to_file,\n }\n\ndef serialize_upload(self):\n if self.user:\n dict_user = self.user.serialize_user()\n return {\n \"name\": self.name,\n \"path_to_file\": self.path_to_file,\n \"user\": dict_user\n }\n\n",
"Although the question is quite old, I want to add a more generic answer, because I also was faced with this issue. The following serializer works for all classes with relationships:\nfrom sqlalchemy.orm.collections import InstrumentedList\nclass Serializer(object):\n\n def serialize(self):\n serializedObject= {}\n for c in inspect(self).attrs.keys():\n attribute = getattr(self, c)\n if(type(attribute) is InstrumentedList ):\n serializedObject[c]= Component.serialize_list(attribute)\n else:\n serializedObject[c]= attribute \n return serializedObject\n\n @staticmethod\n def serialize_list(l):\n return [m.serialize() for m in l]\n\n"
] | [
0,
0
] | [] | [] | [
"flask_sqlalchemy",
"json",
"python",
"sqlalchemy"
] | stackoverflow_0049372413_flask_sqlalchemy_json_python_sqlalchemy.txt |
Q:
What is the behavior of `.close()` on a generator that has just started?
What is the behavior of .close() on a generator that has just started?
def gen():
while True:
yield 1
g = gen()
g.send(1)
throws TypeError: can't send non-None value to a just-started generator
def gen():
while True:
try:
yield 1
except GeneratorExit:
print("exit")
raise
g = gen()
next(g)
g.close()
prints exit
But what is the behavior of:
def gen():
while True:
try:
yield 1
except GeneratorExit:
print("exit")
raise
g = gen()
g.close()
and how can I check it?
EDIT:
When I try it, nothing happens, but subsequent calls to next(g) raise StopIteration.
My question is actually about what happens in terms of memory, but I guess most of it is freed.
A:
Quoting from the docs (shortened):
Raises a GeneratorExit at the point where the generator function was
paused.
close() does nothing if the generator has already exited
When it was not started, it could not have been paused or exited. The case where it was not started (with an initial next or send) is not mentioned.
But we could easily check that not a single statement from the generator function is executed before .close() and no statement can be executed after the .close().
That is the only logical behaviour, IMO. There is nothing to be cleaned up in the generator function, because it did not get a chance to run.
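A quick way to check this behaviour (nothing from the body runs, and the generator ends up closed):
def gen():
    print("body started")          # never printed if we close before the first next()
    try:
        yield 1
    except GeneratorExit:
        print("exit")              # never printed either
        raise

g = gen()
g.close()       # prints nothing: no frame of gen() has run, so no GeneratorExit is raised inside it
next(g)         # raises StopIteration: the generator is already in the closed state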
| What is the behavior of `.close()` on a generator that has just started? | What is the behavior of .close() on a generator that has just started?
def gen():
while True:
yield 1
g = gen()
g.send(1)
throws TypeError: can't send non-None value to a just-started generator
def gen():
while True:
try:
yield 1
except GeneratorExit:
print("exit")
raise
g = gen()
next(g)
g.close()
prints exit
But what is the behavior of:
def gen():
while True:
try:
yield 1
except GeneratorExit:
print("exit")
raise
g = gen()
g.close()
and how can I check it?
EDIT:
When I try it, nothing happens, but subsequent calls to next(g) raise StopIteration.
My question is actually: what happens in terms of memory, but I guess most is freed.
| [
"Quoting from the docs (shortened):\n\nRaises a GeneratorExit at the point where the generator function was\npaused.\n\n\nclose() does nothing if the generator has already exited\n\nWhen it was not started, it couldn't be paused or exited. The case it was not started (with an initial next or send) is not mentioned.\nBut we could easily check that not a single statemenet from the generator function was executed before .close() and no statement could be executed after the .close().\nThat is the only logical behaviour, IMO. There is nothing to be cleaned up in the generator function, because it did not get a chance to run.\n"
] | [
1
] | [] | [] | [
"generator",
"python"
] | stackoverflow_0074611602_generator_python.txt |
Q:
Ordering by time block - Pandas and Altair
I have data that I am visualizing that is categorized by time block. The column looks something like this:
time column
I ultimately want to get to a point where I can order my data by this time block on the x axis of my chart, while plotting the corresponding value on the y. Something like this:
example chart
The issue occurs when pandas or altair (my visualization library) tries to order these time blocks, inherently putting the 5:00 pm slot after the 5:00 am slot etc...
I've worked around this in the past by creating a separate 'Order' column and assigning the row a specific order based on the time block. Something like this:
# I would create a dict like this:
daypartsdict = {'11:00 pm - 11:30 pm': 7,
'11:30 pm - 01:00 am': 8,
'12:00 pm - 03:00 pm': 3,
'03:00 pm - 05:00 pm': 4,
'05:00 am - 09:00 am': 1,
'05:00 pm - 07:00 pm': 5,
'07:00 pm - 11:00 pm': 6,
'09:00 am - 12:00 pm': 2,
}
# Create a new column using that dict:
aggdf['Order'] = aggdf['Time'].apply(lambda x: daypartsdict[x])
# And then use the order column as a field in altair to visualize
alt.Chart(data).mark_line(point=True).encode(
x = alt.X(field = 'Time', sort = alt.Sort(field = 'Order')),
y='RTG',
color='Station'
)
Resulting in something like:
sample axis
But with over 80 time blocks in the 15-minute data, this method seems silly. I'm curious if there is a pandas function or method I could use to make this process more efficient. Open to any and all suggestions on how to improve this!
A:
My suggestion is not a perfect one (I didn't find a way to use your time blocks as axis labels), but it could be a starting point.
So here is my suggestion:
create a separate column with start of your time blocks using split function and transforming this column to a datetime type
data['start_block']=data['time_block'].str.split(" - ", expand=True)[0]
data['start_block']= pd.to_datetime(data['start_block'], format='%I:%M %p')
use time for axis encoding
alt.Chart(data).mark_line(point=True).encode(
x=alt.X('start_block:T',title="Time"),
y='blocks'
)
so you will have something like that.
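If you want to keep the original time-block strings as axis labels, another option is to leave the axis nominal and sort it by the parsed start time (an untested sketch that assumes the start_block column created above):
alt.Chart(data).mark_line(point=True).encode(
    x=alt.X('time_block:N', sort=alt.EncodingSortField(field='start_block', op='min'), title="Time"),
    y='blocks'
)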
| Ordering by time block - Pandas and Altair | I have data that I am visualizing that is categorized by time block. The column looks something like this:
time column
I ultimately want to get to a point where I can order my data by this time block on the x axis of my chart, while plotting the corresponding value on the y. Something like this:
example chart
The issue occurs when pandas or altair (my visualization library) tries to order these time blocks, inherently putting the 5:00 pm slot after the 5:00 am slot etc...
I've worked around this in the past by creating a sperate 'Order' column and assigning the row a specific order based on the time block. Something like this
# I would create a dict like this:
daypartsdict = {'11:00 pm - 11:30 pm': 7,
'11:30 pm - 01:00 am': 8,
'12:00 pm - 03:00 pm': 3,
'03:00 pm - 05:00 pm': 4,
'05:00 am - 09:00 am': 1,
'05:00 pm - 07:00 pm': 5,
'07:00 pm - 11:00 pm': 6,
'09:00 am - 12:00 pm': 2,
}
# Create a new column using that dict:
aggdf['Order'] = aggdf['Time'].apply(lambda x: daypartsdict[x])
# And then use the order column as a field in altair to visualize
alt.Chart(data).mark_line(point=True).encode(
x = alt.X(field = 'Time', sort = alt.Sort(field = 'Order')),
y='RTG',
color='Station'
Resulting in something like:
sample axis
But with over 80 time blocks in the 15minute data, this method seems silly. I'm curious if there is a pandas function or method I could use to make this process more efficient. Open to any and all suggestions on how to improve this!
| [
"My suggestion is not the perfect one, I didn't find the way to use your time blocks as axis labels. But it could be a starting point)\nSo here is my suggestion:\n\ncreate a separate column with start of your time blocks using split function and transforming this column to a datetime type\n\ndata['start_block']=data['time_block'].str.split(\" - \", expand=True)[0]\ndata['start_block']= pd.to_datetime(data['start_block'], format='%I:%M%p')\n\n\nuse time for axis encoding\n\nalt.Chart(data).mark_line(point=True).encode(\n x=alt.X('start_block:T',title=\"Time\"),\n y='blocks'\n)\n\nso you will have something like that.\n\n"
] | [
0
] | [] | [] | [
"altair",
"datetime",
"pandas",
"python",
"x_axis"
] | stackoverflow_0074606014_altair_datetime_pandas_python_x_axis.txt |
Q:
Get only positive answers in a subtraction question python
I'm making a math game and I want my subtraction answers to only contain positive integers. How do I do that? I don't want questions like 6-10 but questions like 10-6.
This is the code I tried to write, but it doesn't work. Any help and suggestions would be appreciated, thanks.
import random
x=random.randint(1,10)
y=random.randint(1,10)
def q():
global x,y
que=int(input("what is {}-{}?".format(x,y)))
if y>x:
q()
else:
pass
q()
A:
You can use built-in max and min functions:
def q():
greater, smaller = max(x, y), min(x, y)
que = int(input("what is {}-{}?".format(greater, smaller)))
...
A:
You can use:
def q(x, y):
if y > x:
return q(y, x)
return int(input("what is {}-{}?".format(x,y)))
A:
You can sort the numbers as a tuple first:
x, y = sorted((random.randint(1,10), random.randint(1,10)), reverse=True)
A:
I would do something like this:
import random
bigger = random.randint(1, 10)
lower = random.randint(1, bigger)
def q(bigger, lower):
que = int(input(f'What is {bigger} - {lower}?'))
return que
A:
You can also use abs() function
#random floating number
floating = -1.33
print('Absolute value of -1.33 is:', abs(floating)) # 1.33
| Get only positive answers in a subtraction question python | I'm making a math game, i want my subtraction answers to only have positive integers, how do i do that? I don't want questions like 6-10 but questions like 10-6.
this is the code i tried to make, but it doesn't work. Any help and suggestions would be appreciated, thanks.
import random
x=random.randint(1,10)
y=random.randint(1,10)
def q():
global x,y
que=int(input("what is {}-{}?".format(x,y)))
if y>x:
q()
else:
pass
q()
| [
"You can use built-in max and min functions:\ndef q():\n greater, smaller = max(x, y), min(x, y)\n que = int(input(\"what is {}-{}?\".format(greater, smaller)))\n ...\n\n",
"You can use:\ndef q(x, y):\n if y > x:\n return q(y, x)\n return int(input(\"what is {}-{}?\".format(x,y)))\n\n",
"You can sort the numbers as a tuple first:\nx, y = sorted((random.randint(1,10), random.randint(1,10)), reverse=True)\n\n",
"I would do something like this:\nimport random\n\nbigger = random.randint(1, 10)\nlower = random.randit(1, bigger)\n\ndef q(bigger, lower):\n\n que = int(input(f'What is {bigger} - {lower}?'))\n return que\n\n",
"You can also use abs() function\n#random floating number\nfloating = -1.33\nprint('Absolute value of -1.33 is:', abs(floating)) # 1.33\n\n"
] | [
2,
1,
1,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0062554315_python.txt |
Q:
Kubernetes deploy, how to solve : psycopg2.OperationalError: SCRAM authentication requires libpq version 10 or above?
I deployed a pod and service of a Flask API in Kubernetes.
When I run the Nifi processor InvoqueHTTP that calls the API, I have the error :
File "/opt/app-root/lib64/python3.8/site-packages/psycopg2/__init__.py"
psycopg2.OperationalError: SCRAM authentication requires libpq version 10 or above
The API connects to PGAAS database, in local it is running fine to connect but in the Kubernetes pod I need libpq library but I'm not finding the right library to install.
I also tried to install psycopg2-binary and it's throwing the same error.
Do you have any idea how to solve this issue ?
version tried in requirements : psycopg2==2.9.3 or psycopg2-binary==2.9.5
A:
For psycopg2.OperationalError: SCRAM authentication requires libpq version 10 or above, follow the workarounds below:
Solution :1
Download libpq.dll from https://www.exefiles.com/en/dll/libpq-dll/ and then replace the old libpq.dll in the PHP directory with the newly downloaded one.
Solution :2
Change authentication to md5, then reset your password and restart the postgresql service. Here are the steps:
Find the file postgresql.conf in C:\Program Files\PostgreSQL\13\data and set password_encryption = md5
Find the file pg_hba.conf in C:\Program Files\PostgreSQL\13\data and change all METHOD entries to md5
Open a command line (cmd, cmder, git bash, ...) and run psql -U postgres, then enter the password you set when installing PostgreSQL
Then change your password by running ALTER USER postgres WITH PASSWORD 'new-password'; in the command line
Restart the postgresql service in your Services
Solution :3
Check whether psycopg2 is using an additional copy of libpq that may be present on your machine. Identify that file, then upgrade or remove it. Perhaps psycopg2 has to be updated for that.
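A quick way to see which libpq the installed psycopg2 is actually using (run this inside the pod; libpq versions are reported as integers, e.g. 90613 for 9.6.13 and 100000 or more for 10.x, which is what SCRAM needs):
import psycopg2
from psycopg2 import extensions

print(psycopg2.__version__)            # psycopg2 package version
print(psycopg2.__libpq_version__)      # libpq version psycopg2 was built against
print(extensions.libpq_version())      # libpq version actually loaded at runtime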
| Kubernetes deploy, how to solve : psycopg2.OperationalError: SCRAM authentication requires libpq version 10 or above? | I deployed a pod and service of a Flask API in Kubernetes.
When I run the Nifi processor InvoqueHTTP that calls the API, I have the error :
File "/opt/app-root/lib64/python3.8/site-packages/psycopg2/__init__.py"
psycopg2.OperationalError: SCRAM authentication requires libpq version 10 or above
The API connects to PGAAS database, in local it is running fine to connect but in the Kubernetes pod I need libpq library but I'm not finding the right library to install.
I also tried to install psycopg2-binary and it's throwing the same error.
Do you have any idea how to solve this issue ?
version tried in requirements : psycopg2==2.9.3 or psycopg2-binary==2.9.5
| [
"For psycopg2.OperationalError: SCRAM authentication requires libpq version 10 or above follow the below work arounds:\nSolution :1\nDownload libpq.dll from https://www.exefiles.com/en/dll/libpq-dll/ then replace old libpq.dll at php directory with the latest downloaded\nSolution :2\nChange authentication to md5, then reset your password and restart the postgresql service and here are step by step:\n\nFind file postgresql.conf in C:\\Program Files\\PostgreSQL\\13\\data then set password_encryption = md5\nFind file pg_hba.conf in C:\\Program Files\\PostgreSQL\\13\\data then change all METHOD to md5\nOpen command line (cmd,cmder,git bash...) and run psql -U postgres then enter your password when installed postgres sql\n-Then change your password by running ALTER USER postgres WITH PASSWORD 'new-password'; in command line\nRestart service postgresql in your Service\n\nSolution :3\nCheck if psycopg is using the additional copy of libpq that may be present on your computer. Recognize that file, then upgrade or remove it. Perhaps psycopg has to be updated for that.\n"
] | [
0
] | [] | [] | [
"kubernetes",
"psycopg2",
"python"
] | stackoverflow_0074612583_kubernetes_psycopg2_python.txt |
Q:
How to do with Pydantic regex validation?
I'm trying to write a validator with usage of Pydantic for following strings (examples):
1.1.0, 3.5.6, 1.1.2, etc..
I'm failing with following syntax:
install_component_version: constr(regex=r"^[0-9]+.[0-9]+.[0-9]$")
install_component_version: constr(regex=r"^([0-9])+.([0-9])+.([0-9])$")
install_component_version: constr(regex=r"^([0-9])\.([0-9])\.([0-9])$")
Can anyone help me out what regex syntax should look like?
A:
The error you are facing is due to type annotation.
As per https://github.com/pydantic/pydantic/issues/156 this is not yet fixed, you can try using pydantic.Field and then pass the regex argument there like so
install_component_version: str = Field(regex=r"^[0-9]+.[0-9]+.[0-9]$")
This way you get the regex validation and type checking.
PS: This is not a 100% alternative to constr but if all you want is regex validation, the above alternative works and makes mypy happy.
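A minimal usage sketch of the Field-based variant, assuming pydantic v1 (where Field accepts a regex argument); the model and field values are just illustrative, and the pattern escapes the dots and allows multi-digit components:
from pydantic import BaseModel, Field, ValidationError

class InstallRequest(BaseModel):
    install_component_version: str = Field(regex=r"^[0-9]+\.[0-9]+\.[0-9]+$")

InstallRequest(install_component_version="3.5.6")        # passes validation
try:
    InstallRequest(install_component_version="3.5")      # rejected by the pattern
except ValidationError as exc:
    print(exc)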
| How to do with Pydantic regex validation? | I'm trying to write a validator with usage of Pydantic for following strings (examples):
1.1.0, 3.5.6, 1.1.2, etc..
I'm failing with following syntax:
install_component_version: constr(regex=r"^[0-9]+.[0-9]+.[0-9]$")
install_component_version: constr(regex=r"^([0-9])+.([0-9])+.([0-9])$")
install_component_version: constr(regex=r"^([0-9])\.([0-9])\.([0-9])$")
Can anyone help me out what regex syntax should look like?
| [
"The error you are facing is due to type annotation.\nAs per https://github.com/pydantic/pydantic/issues/156 this is not yet fixed, you can try using pydantic.Field and then pass the regex argument there like so\ninstall_component_version: str = Field(regex=r\"^[0-9]+.[0-9]+.[0-9]$\")\nThis way you get the regex validation and type checking.\nPS: This is not a 100% alternative to constr but if all you want is regex validation, the above alternative works and makes mypy happy.\n"
] | [
1
] | [] | [] | [
"pydantic",
"python"
] | stackoverflow_0074607041_pydantic_python.txt |
Q:
Jupyter - Split Classes in multiple Cells
I wonder if there is a possibility to split jupyter classes into different cells? Lets say:
#first cell:
class foo(object):
def __init__(self, var):
self.var = var
#second cell
def print_var(self):
print(self.var)
For more complex classes it's really annoying to write them into one cell.
I would like to put each method in a different cell.
Someone made this last year, but I wonder if there is something built in so I don't need external scripts/imports.
And if not, I would like to know if there is a reason not to offer the option of splitting your code so you can document / debug it more easily.
Thanks in advance
A:
Two solutions were provided to this problem on Github issue "Define a Python class across multiple cells #1243" which can be found here: https://github.com/jupyter/notebook/issues/1243
One solution is using a magic function from a package developed for this specific case called jdc - or Jupyter dynamic classes. The documentation on how to install it and how to use can be found on package url at https://alexhagen.github.io/jdc/
The second solution was provided by Doug Blank and which just work in regular Python, without resorting to any extra magic as follows:
Cell 1:
class MyClass():
def method1(self):
print("method1")
Cell 2:
class MyClass(MyClass):
def method2(self):
print("method2")
Cell 3:
instance = MyClass()
instance.method1()
instance.method2()
I tested the second solution myself in both Jupyter Notebook and VS Code, and it worked fine in both environments, except that I got a pylint error [pylint] E0102:class already defined line 5 in VS Code, which is kind of expected but still runs fine. Moreover, VS Code was not meant to be the target environment anyway.
A:
I don't feel like this whole thing is a real issue or a good idea... But maybe the following will work for you:
# First cell
class Foo(object):
pass
# Other cell
def __init__(self, var):
self.var = var
Foo.__init__ = __init__
# Yet another cell
def print_var(self):
print(self.var)
Foo.print_var = print_var
I don't expect it to be extremely robust, but... it should work for regular classes.
EDIT: I believe that there are a couple of situations where this may break. I am not sure if that will resist code inspection, given that the method lives "far" from the class. But you are using a notebook, so code inspection should not be an issue (?), although keep that in mind if debugging.
Another possible issue can be related to the use of metaclasses. If you try to use metaclasses (or derive from some class which uses a metaclass) that may break it, because metaclasses typically expect to be able to know all the methods of the class, and by dynamically adding methods to a class, we are bending the rules on the flow of class creation.
Without metaclasses or some "quite-strange" use cases, the approach should be safe-ish.
For "simple" classes, it is a perfectly valid approach. But... it is not exactly an expected feature, so (ab)using it may give some additional problems which I may not
A:
Here's a decorator which lets you add members to a class:
import functools
def update_class(
main_class=None, exclude=("__module__", "__name__", "__dict__", "__weakref__")
):
"""Class decorator. Adds all methods and members from the wrapped class to main_class
Args:
- main_class: class to which to append members. Defaults to the class with the same name as the wrapped class
- exclude: black-list of members which should not be copied
"""
def decorates(main_class, exclude, appended_class):
if main_class is None:
main_class = globals()[appended_class.__name__]
for k, v in appended_class.__dict__.items():
if k not in exclude:
setattr(main_class, k, v)
return main_class
return functools.partial(decorates, main_class, exclude)
Use it like this:
#%% Cell 1
class MyClass:
def method1(self):
print("method1")
me = MyClass()
#%% Cell 2
@update_class()
class MyClass:
def method2(self):
print("method2")
me.method1()
me.method2()
This solution has the following benefits:
pure python
Doesn't change the inheritance order
Affects existing instances
A:
There is no way to split a single class.
You could, however, add methods dynamically to an instance of it:
CELL #1
import types
class A:
def __init__(self, var):
self.var = var
a = A(1)
And in a different cell:
CELL #2
def print_var(self):
print (self.var)
a.print_var = types.MethodType( print_var, a )
Now, this should work:
CELL #3
a.print_var()
A:
Medhat Omr's answer provides some good options; another one I found that I thought someone might find useful is to dynamically assign methods to a class using a decorator function. For example, we can create a higher-order function like the one below, which takes some arbitrary function, gets its name as a string, and assigns it as a class method.
def classMethod(func):
setattr(MyClass, func.__name__, func)
return func
We can then use the syntactic sugar for a decorator above each method that should be bound to the class;
@classMethod
def get_numpy(self):
return np.array(self.data)
This way, each method can be stored in a different Jupyter notebook cell and the class will be updated with the new function each time the cell is run.
I should also note that since this initializes the methods as functions in the global scope, it might be a good idea to prefix them with an underscore or letter to avoid name conflicts (then replace func.__name__ with func.__name__[1:] or however characters at the beginning of each name you want to omit. The method will still have the "mangled" name since it is the same object, so be wary of this if you need to programmatically access the method name somewhere else in your program.
A:
Thanks @Medhat Omr, it works for me for the @classmethod as well.
Base class in the first cell
class Employee:
# define two class variables
num_empl = 0
raise_amt = 1.05
def __init__(self, first, last, pay):
self.first = first
self.last = last
self.pay = pay
...
...
@classmethod in an another cell:
class Employee(Employee):
@classmethod
def set_raise_amt(cls, amount):
cls.raise_amt = amount
empl = Employee("Jahn", "Smith", 65000)
Employee.set_raise_amt(1.04)
print(empl.full_name() + " is getting " + str(empl.apply_raise()))
| Jupyter - Split Classes in multiple Cells | I wonder if there is a possibility to split jupyter classes into different cells? Lets say:
#first cell:
class foo(object):
def __init__(self, var):
self.var = var
#second cell
def print_var(self):
print(self.var)
For more complex classes its really annoying to write them into one cell.
I would like to put each method in a different cell.
Someone made this this last year but i wonder if there is something build in so i dont need external scripts/imports.
And if not, i would like to know if there is a reason to not give the opportunity to split your code and document / debug it way easier.
Thanks in advance
| [
"Two solutions were provided to this problem on Github issue \"Define a Python class across multiple cells #1243\" which can be found here: https://github.com/jupyter/notebook/issues/1243\nOne solution is using a magic function from a package developed for this specific case called jdc - or Jupyter dynamic classes. The documentation on how to install it and how to use can be found on package url at https://alexhagen.github.io/jdc/\nThe second solution was provided by Doug Blank and which just work in regular Python, without resorting to any extra magic as follows:\nCell 1:\nclass MyClass():\n def method1(self):\n print(\"method1\")\n\nCell 2:\nclass MyClass(MyClass):\n def method2(self):\n print(\"method2\")\n\nCell 3:\ninstance = MyClass()\ninstance.method1()\ninstance.method2()\n\nI tested the second solution myself in both Jupyter Notebook and VS Code, and it worked fine in both environments, except that I got a pylint error [pylint] E0102:class already defined line 5 in VS Code, which is kind of expected but still runs fine. Moreover, VS Code was not meant to be the target environment anyway.\n",
"I don't feel like that whole stuff to be a issue or a good idea... But maybe the following will work for you:\n\n# First cell\nclass Foo(object):\n pass\n\n\n# Other cell\ndef __init__(self, var):\n self.var = var\n\nFoo.__init__ = __init__\n\n\n# Yet another cell\ndef print_var(self):\n print(self.var)\nFoo.print_var = print_var\n\n\nI don't expect it to be extremely robust, but... it should work for regular classes.\nEDIT: I believe that there are a couple of situations where this may break. I am not sure if that will resist code inspection, given that the method lives \"far\" from the class. But you are using a notebook, so code inspection should not be an issue (?), although keep that in mind if debugging.\nAnother possible issue can be related to use of metaclasses. If you try to use metaclasses (or derive from some class which uses a metaclass) that may broke it, because metaclasses typically expect to be able to know all the methods of the class, and by dynamically adding methods to a class, we are bending the rules on the flow of class creation.\nWithout metaclasses or some \"quite-strange\" use cases, the approach should be safe-ish.\nFor \"simple\" classes, it is a perfectly valid approach. But... it is not exactly an expected feature, so (ab)using it may give some additional problems which I may not \n",
"Here's a decorator which lets you add members to a class:\nimport functools\ndef update_class(\n main_class=None, exclude=(\"__module__\", \"__name__\", \"__dict__\", \"__weakref__\")\n):\n \"\"\"Class decorator. Adds all methods and members from the wrapped class to main_class\n\n Args:\n - main_class: class to which to append members. Defaults to the class with the same name as the wrapped class\n - exclude: black-list of members which should not be copied\n \"\"\"\n\n def decorates(main_class, exclude, appended_class):\n if main_class is None:\n main_class = globals()[appended_class.__name__]\n for k, v in appended_class.__dict__.items():\n if k not in exclude:\n setattr(main_class, k, v)\n return main_class\n\n return functools.partial(decorates, main_class, exclude)\n\nUse it like this:\n#%% Cell 1\nclass MyClass:\n def method1(self):\n print(\"method1\")\nme = MyClass()\n\n#%% Cell 2\n@update_class()\nclass MyClass:\n def method2(self):\n print(\"method2\")\nme.method1()\nme.method2()\n\nThis solution has the following benefits:\n\npure python\nDoesn't change the inheritance order\nEffects existing instances\n\n",
"There is no way to split a single class,\nYou could however, add methods dynamically to an instance of it\nCELL #1\nimport types\nclass A:\n def __init__(self, var):\n self.var = var\n\na = A()\n\nAnd in a different cell:\nCELL #2\ndef print_var(self):\n print (self.var)\na.print_var = types.MethodType( print_var, a )\n\nNow, this should work:\nCELL #3\na.print_var()\n\n",
"Medhat Omr's answer provides some good options; another one I found that I thought someone might find useful is to dynamically assign methods to a class using a decorator function. For example, we can create a higher-order function like the one below, which takes some arbitrary function, gets its name as a string, and assigns it as a class method.\ndef classMethod(func):\n setattr(MyClass, func.__name__, func)\n return func\n\nWe can then use the syntactic sugar for a decorator above each method that should be bound to the class;\n@classMethod\ndef get_numpy(self):\n return np.array(self.data)\n\nThis way, each method can be stored in a different Jupyter notebook cell and the class will be updated with the new function each time the cell is run.\nI should also note that since this initializes the methods as functions in the global scope, it might be a good idea to prefix them with an underscore or letter to avoid name conflicts (then replace func.__name__ with func.__name__[1:] or however characters at the beginning of each name you want to omit. The method will still have the \"mangled\" name since it is the same object, so be wary of this if you need to programmatically access the method name somewhere else in your program.\n",
"thanks@Medhat Omr, it works for me for the @classmethod as well.\nBase class in the first cell\nclass Employee:\n # define two class variables\n num_empl = 0\n raise_amt = 1.05\n\n def __init__(self, first, last, pay):\n\n self.first = first \n self.last = last\n self.pay = pay\n ...\n ...\n\n@classmethod in an another cell:\nclass Employee(Employee):\n\n @classmethod\n def set_raise_amt(cls, amount):\n cls.raise_amt = amount\n\n\nempl = Employee(\"Jahn\", \"Smith\", 65000)\nEmployee.set_raise_amt(1.04)\nprint(empl.full_name() + \" is getting \" + str(empl.apply_raise()))\n\n"
] | [
23,
8,
3,
1,
0,
0
] | [] | [] | [
"jupyter_notebook",
"python"
] | stackoverflow_0045161393_jupyter_notebook_python.txt |
Q:
Add reserved tokens to `tft.vocabulary`
I would like to append words to the vocabulary created by tft.vocabulary that are not a part of the training samples (i.e. <mask> and <pad> tokens).
I see in the docs that the tft.vocabulary function can take an argument key_fn which the docs says:
Supply key_fn if you would like to generate a vocabulary with coverage over specific keys.
but with the key_fn below it still does not append the <mask> and <pad> tokens to the vocabulary.
def _key_fn(x):
return tf.constant(['<mask>', '<pad>'])
vocab = tft.vocabulary(
words,
key_fn = lambda x : _key_fn(x),
top_k = config.VOCAB_SIZE
)
A:
What is it that you're trying to achieve?
I don't think that key_fn is related as it only affects the ordering of the vocabulary (and top k when provided)
Could you compute the vocabulary after appending the added information?
tft.vocabulary(tf.strings.join([words, <mask>, <pad>]), ...)
This would result in the vocabulary including the added suffix
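A minimal sketch of an alternative (my assumption, not part of the answer above): if the goal is to have <mask> and <pad> as standalone vocabulary entries, the reserved tokens can be concatenated onto the token tensor before computing the vocabulary. This assumes words is a dense rank-1 string tensor; note that with top_k their count of one may still push them out of the final list.
reserved = tf.constant(['<mask>', '<pad>'])
all_tokens = tf.concat([tf.reshape(words, [-1]), reserved], axis=0)
vocab = tft.vocabulary(all_tokens, top_k=config.VOCAB_SIZE)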
| Add reserved tokens to `tft.vocabulary` | I would like to append words to the vocabulary created by tft.vocabulary that are not a part of the training samples (i.e. <mask> and <pad> tokens).
I see in the docs that the tft.vocabulary function can take an argument key_fn which the docs says:
Supply key_fn if you would like to generate a vocabulary with coverage over specific keys.
but with the key_fn below it still does not append the <mask> and <pad> tokens to the vocabulary.
def _key_fn(x):
return tf.constant(['<mask>', '<pad>'])
vocab = tft.vocabulary(
words,
key_fn = lambda x : _key_fn(x),
top_k = config.VOCAB_SIZE
)
| [
"What is it that you're trying to achieve?\nI don't think that key_fn is related as it only affects the ordering of the vocabulary (and top k when provided)\nCould you compute the vocabulary after appending the added information?\ntft.vocabulary(tf.strings.join([words, <mask>, <pad>]), ...)\nThis would result in the vocabulary including the added suffix\n"
] | [
0
] | [] | [] | [
"mlops",
"python",
"tensorflow",
"tensorflow_transform",
"tfx"
] | stackoverflow_0071771353_mlops_python_tensorflow_tensorflow_transform_tfx.txt |
Q:
Parsing XML with python ElementTree: ParseError: mismatched tag
I have several XML files I have to parse through with Python ElementTree (they are legacy from another developer).
I've corrected those files a bit and parsed a good chunk so far, but at some point I got this parsing error, and I can't get around it. I tried parsing the original file (I was working with a copy, of course), and it's still the same error, even though it used to work fine in the first place.
Error:
ParseError: mismatched tag
My code is:
import xml.etree.ElementTree as ET
tree = ET.parse('astrod.xml')
Full error text:
Traceback (most recent call last):
File "D:\dev\tools\Anaconda\lib\site-packages\IPython\core\interactiveshell.py", line 3326, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-4-6aa074179306>", line 2, in <module>
tree = ET.parse('astrod.xml')
File "D:\dev\tools\Anaconda\lib\xml\etree\ElementTree.py", line 1197, in parse
tree.parse(source, parser)
File "D:\dev\tools\Anaconda\lib\xml\etree\ElementTree.py", line 598, in parse
self._root = parser._parse_whole(source)
File "<string>", line unknown
ParseError: mismatched tag: line 449, column 3
A:
Take a look at the message ParseError: mismatched tag: line 449, column 3.
Line 449 is the line number in your source XML file.
Find this line and look at what is wrong with the content.
Probably this line contains some tag (e.g. a closing tag) which has no
opening counterpart.
An alternative: visit any XML validation site and check what is wrong
with your file.
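A small sketch for locating the offending line programmatically (it assumes the file name from the question):
import xml.etree.ElementTree as ET

try:
    tree = ET.parse('astrod.xml')
except ET.ParseError as err:
    line, column = err.position  # (line, column) of the mismatch
    with open('astrod.xml', encoding='utf-8') as f:
        print('Offending line %d: %s' % (line, f.readlines()[line - 1].strip()))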
A:
I had the same issue and simply renamed my pictures that had this naming convention from frame15148 - Copy.jpg to frame15148.jpg, and the mismatch error disappeared.
| Parsing XML with python ElementTree: ParseError: mismatched tag | I have several XML files I have to parse through with Python ElementTree (they are legacy from another developer).
I've corrected those files a bit and parsed a good chunk so far, but at some point I got this parsing error, and I can't get around it. I tried parsing the original file (I was working with a copy, of course), and it's still the same error, even though it used to work fine in the first place.
Error:
ParseError: mismatched tag
My code is:
import xml.etree.ElementTree as ET
tree = ET.parse('astrod.xml')
Full error text:
Traceback (most recent call last):
File "D:\dev\tools\Anaconda\lib\site-packages\IPython\core\interactiveshell.py", line 3326, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-4-6aa074179306>", line 2, in <module>
tree = ET.parse('astrod.xml')
File "D:\dev\tools\Anaconda\lib\xml\etree\ElementTree.py", line 1197, in parse
tree.parse(source, parser)
File "D:\dev\tools\Anaconda\lib\xml\etree\ElementTree.py", line 598, in parse
self._root = parser._parse_whole(source)
File "<string>", line unknown
ParseError: mismatched tag: line 449, column 3
| [
"Take a look at line ParseError: mismatched tag: line 449, column 3.\nline 449 is the line number in your source XML file.\nFind this line and look what is wrong with the content.\nProbably this line contains some tag (e.g. closing) which has no\nopening conterpart.\nAn alternative: Visit any XML validation site and check what is wrong\nwith your file.\n",
"i had the same issue and simply renamed the name of my pictures that had this naming convention from frame15148 - Copy.jpg to frame15148.jpg and the mismatch error disappeared Picture of rename .\n"
] | [
5,
0
] | [
"I find that sometimes you get this error at a line number a lot later than the actual error. It only notices something wrong when you close a tag that you didn't correctly open. Look earlier in the file.\nFor example I had a similar problem where the error message claimed\n\nxml.etree.ElementTree.ParseError: mismatched tag: line 12, column 2\n\nbut the mistake wasn't at the mentioned line, I had misspelled one tag opening at the beginning.\nCheck backwards through your input for the opening tag of the one that's given you this error.\n"
] | [
-1
] | [
"elementtree",
"parsing",
"python",
"xml",
"xml_parsing"
] | stackoverflow_0059909308_elementtree_parsing_python_xml_xml_parsing.txt |
Q:
Check if all key-value pairs are present in dictionary (pytest)
I am trying to write a test to check if all key-value pairs from the expected result are present in the actual result
import pytest
def common_pairs(dict1, dict2):
return {key: dict1[key] for key in dict1 if key in dict2 and dict1[key] == dict2[key]}
@pytest.mark.parametrize(
"input_data,expected", [({"a": 1, "b": 2, "c": 3}, {"a": 1, "b": 2}),({"a": 5}, {})]
)
def test_output(input_data, expected):
assert common_pairs(input_data, expected) == expected
Does my test make sense? Is there a more conventional way to test such cases?
A:
The most natural way to do this, in my opinion, would be:
for key,val in dict1.items():
assert val == dict2[key]
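If it fits, the check can also be written as a one-liner using the set-like comparison of dict item views (a sketch, not necessarily more conventional):
assert expected.items() <= input_data.items()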
| Check if all key-value pairs are present in dictionary (pytest) | I am trying to write a test to check if all key-value pairs from the expected result are present in the actual result
import pytest
def common_pairs(dict1, dict2):
return {key: dict1[key] for key in dict1 if key in dict2 and dict1[key] == dict2[key]}
@pytest.mark.parametrize(
"input_data,expected", [({"a": 1, "b": 2, "c": 3}, {"a": 1, "b": 2}),({"a": 5}, {})]
)
def test_output(input_data, expected):
assert common_pairs(input_data, expected) == expected
Does my test make sense? Is there a more conventional way to test such cases?
| [
"I mean the most natural way to do this imo would be\nfor key,val in dict1.items():\n assert val == dict2[key]\n\n"
] | [
1
] | [] | [] | [
"pytest",
"python"
] | stackoverflow_0074613664_pytest_python.txt |
Q:
Find the highest value locations within an interval and for a specific index?
Given this pandas dataframe with three columns, 'room_id', 'temperature' and 'State', how do I get a fourth column 'Max' indicating when the value is a maximum for each interval where State is True, for each room?
117 1.489000 True
8.9 False
2.5 False
4.370000 False
4.363333 True
4.356667 True
118 4.35 True
6.648000 True
6.642667 True
7.3 False
9.4 False
5.3 True
7.1 True
What I am expecting
117 1.489000 True max
8.9 False
2.5 False
4.370000 False
4.363333 True max
4.356667 True
118 4.35 True
6.648000 True max
6.642667 True
7.3 False
9.4 False
5.3 True
7.1 True max
I used this:
Max = df_state.groupby(masque.cumsum()[~masque])['temperature'].agg(['idxmax'])
But I found this:
117 1.489000 True max
8.9 False
2.5 False
4.370000 False
4.363333 True
4.356667 True
118 4.35 True
6.648000 True max
6.642667 True
7.3 False
9.4 False
5.3 True
7.1 True max
I miss the last max of room 117 because the algorithm does not take into account the room id
A:
You can use groupby.idxmax to get the index of the max per custom group:
# get the indices of the max value per group
idx = (df[df['State']].groupby(['room_id', (~df['State']).cumsum()])
['temperature'].idxmax()
)
# assign the new value
df.loc[idx, 'max_temp'] = 'max'
If you want the temperature value instead of a literal 'max':
df.loc[idx, 'max'] = df.loc[idx, 'temperature']
Output:
room_id temperature State max max_temp
0 117 1.489000 True max 1.489000
1 117 8.900000 False NaN NaN
2 117 2.500000 False NaN NaN
3 117 4.370000 False NaN NaN
4 117 4.363333 True max 4.363333
5 117 4.356667 True NaN NaN
6 118 4.350000 True NaN NaN
7 118 6.648000 True max 6.648000
8 118 6.642667 True NaN NaN
9 118 7.300000 False NaN NaN
10 118 9.400000 False NaN NaN
11 118 5.300000 True NaN NaN
12 118 7.100000 True max 7.100000
| Find the highest value locations within an interval and for a specific index? | Given this pandas dataframe with three columns, 'room_id', 'temperature' and 'State', how do I get a fourth column 'Max' indicating when the value is a maximum for each interval where State is True, for each room?
117 1.489000 True
8.9 False
2.5 False
4.370000 False
4.363333 True
4.356667 True
118 4.35 True
6.648000 True
6.642667 True
7.3 False
9.4 False
5.3 True
7.1 True
What I am expecting
117 1.489000 True max
8.9 False
2.5 False
4.370000 False
4.363333 True max
4.356667 True
118 4.35 True
6.648000 True max
6.642667 True
7.3 False
9.4 False
5.3 True
7.1 True max
I used this:
Max = df_state.groupby(masque.cumsum()[~masque])['temperature'].agg(['idxmax'])
But I found this:
117 1.489000 True max
8.9 False
2.5 False
4.370000 False
4.363333 True
4.356667 True
118 4.35 True
6.648000 True max
6.642667 True
7.3 False
9.4 False
5.3 True
7.1 True max
I miss the last max of room 117 because the algorithm does not take into account the room id
| [
"You can use groupby.idxmax to get the index of the max per custom group:\n# get the indices of the max value per group\nidx = (df[df['State']].groupby(['room_id', (~df['State']).cumsum()])\n ['temperature'].idxmax()\n )\n\n# assign the new value\ndf.loc[idx, 'max_temp'] = 'max'\n\nIf you want the temperature value instead of a literax 'max':\ndf.loc[idx, 'max'] = df.loc[idx, 'temperature']\n\nOutput:\n room_id temperature State max max_temp\n0 117 1.489000 True max 1.489000\n1 117 8.900000 False NaN NaN\n2 117 2.500000 False NaN NaN\n3 117 4.370000 False NaN NaN\n4 117 4.363333 True max 4.363333\n5 117 4.356667 True NaN NaN\n6 118 4.350000 True NaN NaN\n7 118 6.648000 True max 6.648000\n8 118 6.642667 True NaN NaN\n9 118 7.300000 False NaN NaN\n10 118 9.400000 False NaN NaN\n11 118 5.300000 True NaN NaN\n12 118 7.100000 True max 7.100000\n\n\n"
] | [
0
] | [] | [] | [
"group_by",
"intervals",
"max",
"pandas",
"python"
] | stackoverflow_0074613397_group_by_intervals_max_pandas_python.txt |
Q:
Dataframe name with get_df_name(df) reset
I changed the name of the dataframe by mistake (no idea how; I was trying several things), and now I get the wrong name when calling get_df_name(df).
tables=[df1,df2,df3,df4,df5]
def get_df_name(df):
name = [x for x in globals() if globals()[x] is df][0]
return name
for i in tables:
print(get_df_name(i),list(i.columns))
What I get is:
i ['column1', 'column2']
i ['column3', 'column4', 'column5']
df3 ['column6', 'column7', 'column8', 'column9']
df4 ['column10', 'column11']
df5 ['column12', 'column13']
The name of the first two dataframes has been changed to i, and I don't know how to reset it. I have tried df1.name='df1', but it does not work.
NOTE: As user2357112 pointed out, this function is not good. I am using it just to see some column names without scrolling up my notebook; it should not be used in your code.
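A sketch of a more robust alternative (my suggestion, not part of the original question): keep the names explicit instead of searching globals(), so a loop variable can never shadow a dataframe.
tables = {'df1': df1, 'df2': df2, 'df3': df3, 'df4': df4, 'df5': df5}
for name, df in tables.items():
    print(name, list(df.columns))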
| Dataframe name with get_df_name(df) reset | I changed the name of the dataframe by mistake (no idea how; I was trying several things), and now I get the wrong name when calling get_df_name(df).
tables=[df1,df2,df3,df4,df5]
def get_df_name(df):
name = [x for x in globals() if globals()[x] is df][0]
return name
for i in tables:
print(get_df_name(i),list(i.columns))
What I get is:
i ['column1', 'column2']
i ['column3', 'column4', 'column5']
df3 ['column6', 'column7', 'column8', 'column9']
df4 ['column10', 'column11']
df5 ['column12', 'column13']
The name of the first two dataframes has been changed to i, and I don't know how to reset it. I have tried df1.name='df1', but it does not work.
NOTE: As user2357112 pointed out, this function is not good. I am using it just to see some column names without scrolling up my notebook; it should not be used in your code.
| [] | [] | [
"Solved it after posting. Changed the variable in the for loop from i to smth else, it reset the whole thing. If anyone has an explanation, can write it for other people.\n"
] | [
-1
] | [
"dataframe",
"python"
] | stackoverflow_0074613860_dataframe_python.txt |
Q:
Pandas: AttributeError: 'module' object has no attribute '__version__'
When I try to import pandas into Python I get this error:
>>> import pandas
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/robertdefilippi/miniconda2/lib/python2.7/site-packages/pandas/__init__.py", line 44, in <module>
from pandas.core.api import *
File "/Users/robertdefilippi/miniconda2/lib/python2.7/site-packages/pandas/core/api.py", line 9, in <module>
from pandas.core.groupby import Grouper
File "/Users/robertdefilippi/miniconda2/lib/python2.7/site-packages/pandas/core/groupby.py", line 17, in <module>
from pandas.core.frame import DataFrame
File "/Users/robertdefilippi/miniconda2/lib/python2.7/site-packages/pandas/core/frame.py", line 41, in <module>
from pandas.core.series import Series
File "/Users/robertdefilippi/miniconda2/lib/python2.7/site-packages/pandas/core/series.py", line 2909, in <module>
import pandas.tools.plotting as _gfx
File "/Users/robertdefilippi/miniconda2/lib/python2.7/site-packages/pandas/tools/plotting.py", line 135, in <module>
if _mpl_ge_1_5_0():
File "/Users/robertdefilippi/miniconda2/lib/python2.7/site-packages/pandas/tools/plotting.py", line 130, in _mpl_ge_1_5_0
return (matplotlib.__version__ >= LooseVersion('1.5')
AttributeError: 'module' object has no attribute '__version__'
But when I check if pandas is installed:
me$ conda install pandas
Fetching package metadata: ....
Solving package specifications: .....................
# All requested packages already installed.
# packages in environment at /Users/me/miniconda2:
#
pandas 0.17.1 np110py27_0
So I don't know what is wrong. What is going on with my pandas?
Edit
$ pip list |grep matplotlib
$ conda list matplotlib
# packages in environment at /Users/me/miniconda2:
#
matplotlib 1.5.0 np110py27_0
For some reason there was no output to pip list |grep matplotlib
Edit2
I wanted to see if there was a different path to the executables ipython and python. So I ran this:
$ python
>>> import sys
>>> print sys.executable
/Users/me/miniconda2/bin/python
However in IPython, I get this:
$ ipython notebook
>>> import sys
>>> print sys.executable
/usr/local/opt/python/bin/python2.7
Could that be the problem?
A:
Remove (or rename) the file matplotlib.py from your current working directory. It shadows the real library with the same name.
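A quick way to confirm the shadowing before deleting anything (a small sketch):
import matplotlib
print(matplotlib.__file__)  # if this points at ./matplotlib.py, the local file shadows the real package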
A:
I have a simple solution: delete the __init__.pyc and __init__.py files in your project directory. I encountered the problem too, and I solved it perfectly using this method.
A:
It worked for me
pip install pyparsing==2.4.7
| Pandas: AttributeError: 'module' object has no attribute '__version__' | When I try to import pandas into Python I get this error:
>>> import pandas
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/robertdefilippi/miniconda2/lib/python2.7/site-packages/pandas/__init__.py", line 44, in <module>
from pandas.core.api import *
File "/Users/robertdefilippi/miniconda2/lib/python2.7/site-packages/pandas/core/api.py", line 9, in <module>
from pandas.core.groupby import Grouper
File "/Users/robertdefilippi/miniconda2/lib/python2.7/site-packages/pandas/core/groupby.py", line 17, in <module>
from pandas.core.frame import DataFrame
File "/Users/robertdefilippi/miniconda2/lib/python2.7/site-packages/pandas/core/frame.py", line 41, in <module>
from pandas.core.series import Series
File "/Users/robertdefilippi/miniconda2/lib/python2.7/site-packages/pandas/core/series.py", line 2909, in <module>
import pandas.tools.plotting as _gfx
File "/Users/robertdefilippi/miniconda2/lib/python2.7/site-packages/pandas/tools/plotting.py", line 135, in <module>
if _mpl_ge_1_5_0():
File "/Users/robertdefilippi/miniconda2/lib/python2.7/site-packages/pandas/tools/plotting.py", line 130, in _mpl_ge_1_5_0
return (matplotlib.__version__ >= LooseVersion('1.5')
AttributeError: 'module' object has no attribute '__version__'
But when I check if pandas is installed:
me$ conda install pandas
Fetching package metadata: ....
Solving package specifications: .....................
# All requested packages already installed.
# packages in environment at /Users/me/miniconda2:
#
pandas 0.17.1 np110py27_0
So I don't know what is wrong. What is going on with my pandas?
Edit
$ pip list |grep matplotlib
$ conda list matplotlib
# packages in environment at /Users/me/miniconda2:
#
matplotlib 1.5.0 np110py27_0
For some reason there was no output to pip list |grep matplotlib
Edit2
I wanted to see if there was a different path to the executables ipython and python. So I ran this:
$ python
>>> import sys
>>> print sys.executable
/Users/me/miniconda2/bin/python
However in IPython, I get this:
$ ipython notebook
>>> import sys
>>> print sys.executable
/usr/local/opt/python/bin/python2.7
Could that be the problem?
| [
"Remove (or rename) the file matplotlib.py from your current working directory. It shadows the real library with the same name.\n",
"I have a simple solution,delete your __init__.pyc and __init__.py files in your project dictionary. because i encounter the problem too,I have been solve it perfectly use this method.\n",
"It worked for me\npip install pyparsing==2.4.7\n\n"
] | [
12,
0,
0
] | [] | [] | [
"import",
"pandas",
"python"
] | stackoverflow_0034564249_import_pandas_python.txt |
Q:
merge chronological elements
I have a set of items that consist of start and stop dates, as follows:
ID  started       stop
1   2019-01-14    2018-02-05
2   2019-01-14    2019-03-06
3   2019-03-07    2019-03-20->
4   Some-Date     NULL
5   2020-09-08    2020-09-14
6   2020-09-15    2020-10-14
7   ->2019-03-21  2019-03-30
I would like to merge those items that follow each other chronologically, i.e. where the next item starts one day after the previous one stops (nxtElem.started = elem.stop + 1).
The result should look like:
ID  started      stop
1   2019-01-14   2018-02-05
2   2019-01-14   2019-03-30
3   Some-Date    NULL
4   2020-09-08   2020-10-14
I am currently checking the difference between each pair of dates, and if it is one day I group them; however, I am getting weird results.
class Records:
def __init__(self, start_dt, stop_dt):
self.groupNum = None
self.dayDiff = None
self.start_dt = start_dt
self.stop_dt = stop_dt
def setGroupNum(self, groupNum):
self.groupNum = groupNum
def setdayDiff(self, dayDiff):
self.dayDiff = dayDiff
def main():
recordsLst = []
resultLst = []
recordsLst.append(Records(datetime.date(2017, 8, 14), datetime.date(2018, 3, 5)))
recordsLst.append(Records(datetime.date(2019, 1, 14), datetime.date(2019, 3, 6)))
recordsLst.append(Records(datetime.date(2019, 3, 7), datetime.date(2019, 3, 20)))
recordsLst.append(Records(datetime.date(2023, 12, 30), datetime.date(9999, 12, 31)))
recordsLst.append(Records(datetime.date(2020, 9, 8), datetime.date(2020, 9, 14)))
recordsLst.append(Records(datetime.date(2020, 9, 15), datetime.date(2020, 10, 14)))
recordsLst.append(Records(datetime.date(2019, 3, 21), datetime.date(2019, 3, 30)))
recordsLst .sort(key=lambda x: x.start_dt, reverse=False)
for index, a in enumerate(recordsLst):
for b in recordsLst[index:]:
# If same item
if (a.start_dt.day == b.start_dt.day and
a.start_dt.month == b.start_dt.month and
a.start_dt.year == b.start_dt.year) and \
(a.stop_dt.day == b.stop_dt.day and
a.stop_dt.month == b.stop_dt.month and
a.stop_dt.year == b.stop_dt.year):
a.setGroupNum('same')
# If in a chronological order
if a.stop_dt.month == b.start_dt.month \
and a.stop_dt.year == b.start_dt.year \
and (a.stop_dt.day - b.start_dt.day) == -1:
a.setdayDiff(-1)
a.setGroupNum(index)
resultLst.append(Datum(a.stop_dt, b.start_dt))
else:
a.setdayDiff(None)
print(index, a, b)
New pandas dataset
df = pd.DataFrame([[datetime.date(2016, 1, 2), datetime.date(2016, 5, 5)],
# case A->B, B->C, B->D => A->D
[datetime.date(2010, 2, 14), datetime.date(2010, 3, 22)],
[datetime.date(2010, 3, 23), datetime.date(2010, 4, 12)],
[datetime.date(2010, 3, 23), datetime.date(2010, 5, 14)],
[datetime.date(2010, 5, 15), datetime.date(2010, 6, 7)],
# -> 2010-02-14 | 2010-10-20
# case A->B, A->C, B->D => A->D
[datetime.date(2011, 1, 1), datetime.date(2011, 2, 2)],
[datetime.date(2011, 1, 1), datetime.date(2011, 3, 4)],
[datetime.date(2011, 2, 3), datetime.date(2011, 4, 4)],
# -> 2011-01-01 | 2011-04-04
# case A->C, B->C, C->D => A->D
[datetime.date(2012, 5, 5), datetime.date(2012, 6, 6)],
[datetime.date(2012, 5, 7), datetime.date(2012, 6, 6)],
[datetime.date(2012, 6, 7), datetime.date(2012, 12, 12)],
# -> 2012-05-05 | 2012-12-12
[datetime.date(2010, 6, 8), datetime.date(2010, 10, 20)],
[datetime.date(2016, 5, 6), datetime.date(2016, 10, 10)],
[datetime.date(2011, 1, 1), datetime.date(9999, 12, 31)]],
columns=['start', 'end'])
Thanks in advance.
A:
Do you have to use the Records-class? If not, pandas offers a very clean implementation of what you are looking for:
import datetime
import pandas as pd
import numpy as np
df = pd.DataFrame([[datetime.date(2017, 8, 14), datetime.date(2018, 3, 5)],
[datetime.date(2019, 1, 14), datetime.date(2019, 3, 6)],
[datetime.date(2019, 3, 7), datetime.date(2019, 3, 20)],
[datetime.date(2023, 12, 30), datetime.date(9999, 12, 31)],
[datetime.date(2020, 9, 8), datetime.date(2020, 9, 14)],
[datetime.date(2020, 9, 15), datetime.date(2020, 10, 14)],
[datetime.date(2019, 3, 21), datetime.date(2019, 3, 30)]],
columns=['start', 'end'])
df = df.sort_values('start').reset_index(drop=True)
mask = df['start'] - pd.to_timedelta('1 day') == df['end'].shift(1)
df.loc[mask.shift(-1).fillna(False), 'end'] = np.nan
df['end'] = df['end'].bfill()
df = df[~mask]
print(df)
And even if you have to use your class, you could just create it after you have done the data handling in pandas by running:
resultLst = df.apply(lambda x: Records(x['start'], x['end']), axis=1).tolist()
EDIT:
Unfortunately, it is not really easy to understand what your underlying rules are, but the following works out almost the same way as what you say:
df = df.groupby('end').min().reset_index() # If two end dates are identical, we keep the first?
df = df.sort_values('start').reset_index(drop=True)
df['start_reduced'] = df['start'] - pd.to_timedelta('1 day')
df['idx_orig'] = df.index
cols_to_drop = [x+'_y' for x in df.columns]
first_iter = True
seed_start_idx = []
while first_iter or mask.any():
df = df.merge(df, how='left', left_on='end', right_on='start_reduced', suffixes=('', '_y'))
mask = ~df['end_y'].isna()
df.loc[mask, 'end'] = df.loc[mask, 'end_y'].values
if first_iter:
seed_start_idx = df.loc[~df['start'].isin(df.loc[mask, 'start_y']), 'idx_orig'].tolist()
df = df.drop(columns=cols_to_drop)
first_iter = False
df = df[df['idx_orig'].isin(seed_start_idx)].drop_duplicates(subset='idx_orig', keep='last').drop(columns=['start_reduced', 'idx_orig'])
The only difference is that it is not possible to distinguish which of the ones starting 2011-01-01 should be kept. You state that the one ending 2011-03-04 should not be kept, but the one ending 9999-12-31 should seemingly be kept. I cannot understand the logic behind that differentiation. The rest works though.
| merge chronological elements | I have a set of items that consist of start and stop dates, as follows:
ID  started       stop
1   2019-01-14    2018-02-05
2   2019-01-14    2019-03-06
3   2019-03-07    2019-03-20->
4   Some-Date     NULL
5   2020-09-08    2020-09-14
6   2020-09-15    2020-10-14
7   ->2019-03-21  2019-03-30
I would like to merge those items that follow each other chronologically, i.e. where the next item starts one day after the previous one stops (nxtElem.started = elem.stop + 1).
The result should look like:
ID  started      stop
1   2019-01-14   2018-02-05
2   2019-01-14   2019-03-30
3   Some-Date    NULL
4   2020-09-08   2020-10-14
I am currently checking the difference between each pair of dates, and if it is one day I group them; however, I am getting weird results.
class Records:
def __init__(self, start_dt, stop_dt):
self.groupNum = None
self.dayDiff = None
self.start_dt = start_dt
self.stop_dt = stop_dt
def setGroupNum(self, groupNum):
self.groupNum = groupNum
def setdayDiff(self, dayDiff):
self.dayDiff = dayDiff
def main():
recordsLst = []
resultLst = []
recordsLst.append(Records(datetime.date(2017, 8, 14), datetime.date(2018, 3, 5)))
recordsLst.append(Records(datetime.date(2019, 1, 14), datetime.date(2019, 3, 6)))
recordsLst.append(Records(datetime.date(2019, 3, 7), datetime.date(2019, 3, 20)))
recordsLst.append(Records(datetime.date(2023, 12, 30), datetime.date(9999, 12, 31)))
recordsLst.append(Records(datetime.date(2020, 9, 8), datetime.date(2020, 9, 14)))
recordsLst.append(Records(datetime.date(2020, 9, 15), datetime.date(2020, 10, 14)))
recordsLst.append(Records(datetime.date(2019, 3, 21), datetime.date(2019, 3, 30)))
recordsLst .sort(key=lambda x: x.start_dt, reverse=False)
for index, a in enumerate(recordsLst):
for b in recordsLst[index:]:
# If same item
if (a.start_dt.day == b.start_dt.day and
a.start_dt.month == b.start_dt.month and
a.start_dt.year == b.start_dt.year) and \
(a.stop_dt.day == b.stop_dt.day and
a.stop_dt.month == b.stop_dt.month and
a.stop_dt.year == b.stop_dt.year):
a.setGroupNum('same')
# If in a chronological order
if a.stop_dt.month == b.start_dt.month \
and a.stop_dt.year == b.start_dt.year \
and (a.stop_dt.day - b.start_dt.day) == -1:
a.setdayDiff(-1)
a.setGroupNum(index)
resultLst.append(Datum(a.stop_dt, b.start_dt))
else:
a.setdayDiff(None)
print(index, a, b)
New pandas dataset
df = pd.DataFrame([[datetime.date(2016, 1, 2), datetime.date(2016, 5, 5)],
# case A->B, B->C, B->D => A->D
[datetime.date(2010, 2, 14), datetime.date(2010, 3, 22)],
[datetime.date(2010, 3, 23), datetime.date(2010, 4, 12)],
[datetime.date(2010, 3, 23), datetime.date(2010, 5, 14)],
[datetime.date(2010, 5, 15), datetime.date(2010, 6, 7)],
# -> 2010-02-14 | 2010-10-20
# case A->B, A->C, B->D => A->D
[datetime.date(2011, 1, 1), datetime.date(2011, 2, 2)],
[datetime.date(2011, 1, 1), datetime.date(2011, 3, 4)],
[datetime.date(2011, 2, 3), datetime.date(2011, 4, 4)],
# -> 2011-01-01 | 2011-04-04
# case A->C, B->C, C->D => A->D
[datetime.date(2012, 5, 5), datetime.date(2012, 6, 6)],
[datetime.date(2012, 5, 7), datetime.date(2012, 6, 6)],
[datetime.date(2012, 6, 7), datetime.date(2012, 12, 12)],
# -> 2012-05-05 | 2012-12-12
[datetime.date(2010, 6, 8), datetime.date(2010, 10, 20)],
[datetime.date(2016, 5, 6), datetime.date(2016, 10, 10)],
[datetime.date(2011, 1, 1), datetime.date(9999, 12, 31)]],
columns=['start', 'end'])
Thanks in advance.
| [
"Do you have to use the Records-class? If not, pandas offers a very clean implementation of what you are looking for:\nimport datetime\nimport pandas as pd\nimport numpy as np\n\ndf = pd.DataFrame([[datetime.date(2017, 8, 14), datetime.date(2018, 3, 5)],\n [datetime.date(2019, 1, 14), datetime.date(2019, 3, 6)],\n [datetime.date(2019, 3, 7), datetime.date(2019, 3, 20)],\n [datetime.date(2023, 12, 30), datetime.date(9999, 12, 31)],\n [datetime.date(2020, 9, 8), datetime.date(2020, 9, 14)],\n [datetime.date(2020, 9, 15), datetime.date(2020, 10, 14)],\n [datetime.date(2019, 3, 21), datetime.date(2019, 3, 30)]],\n columns=['start', 'end'])\n\ndf = df.sort_values('start').reset_index(drop=True)\nmask = df['start'] - pd.to_timedelta('1 day') == df['end'].shift(1)\ndf.loc[mask.shift(-1).fillna(False), 'end'] = np.nan\ndf['end'] = df['end'].bfill()\ndf = df[~mask]\nprint(df)\n\nAnd even if you have to use your class, you could just create it after you have done the data handling in pandas by running:\nresultLst = df.apply(lambda x: Records(x['start'], x['end']), axis=1).tolist()\n\nEDIT:\nUnfortunately, it is not really easy to understand what your underlying rules are, but the following works out almost the same way as what you say:\ndf = df.groupby('end').min().reset_index() # If two end dates are identical, we keep the first?\ndf = df.sort_values('start').reset_index(drop=True)\ndf['start_reduced'] = df['start'] - pd.to_timedelta('1 day')\ndf['idx_orig'] = df.index\ncols_to_drop = [x+'_y' for x in df.columns]\nfirst_iter = True\nseed_start_idx = []\nwhile first_iter or mask.any():\n df = df.merge(df, how='left', left_on='end', right_on='start_reduced', suffixes=('', '_y'))\n mask = ~df['end_y'].isna()\n df.loc[mask, 'end'] = df.loc[mask, 'end_y'].values\n if first_iter:\n seed_start_idx = df.loc[~df['start'].isin(df.loc[mask, 'start_y']), 'idx_orig'].tolist()\n df = df.drop(columns=cols_to_drop)\n first_iter = False\ndf = df[df['idx_orig'].isin(seed_start_idx)].drop_duplicates(subset='idx_orig', keep='last').drop(columns=['start_reduced', 'idx_orig'])\n\nThe only difference is that it is not possible to distinguish which of the ones starting 2011-01-01 should be kept. You state that the one ending 2011-03-04 should not be kept, but the one ending 9999-12-31 should seemingly be kept. I cannot understand the logic behind that differentiation. The rest works though.\n"
] | [
1
] | [] | [] | [
"python"
] | stackoverflow_0074612181_python.txt |
Q:
do not run lemmatize of nltk package
Hello, I have code for lemmatizing a string in Python. The code is below:
from nltk.stem.wordnet import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
print("better :", lemmatizer.lemmatize("better", pos ="a"))
but when I compile and run it, some errors occur.
The errors are:
Traceback (most recent call last):
File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\s
ite-packages\nltk\corpus\util.py", line 80, in __load
try: root = nltk.data.find('{}/{}'.format(self.subdir, zip_name))
File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\s
ite-packages\nltk\data.py", line 675, in find
raise LookupError(resource_not_found)
LookupError:
**********************************************************************
Resource wordnet not found.
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('wordnet')
Searched in:
- 'C:\\Users\\user1/nltk_data'
- 'C:\\nltk_data'
- 'D:\\nltk_data'
- 'E:\\nltk_data'
- 'C:\\Users\\user1\\AppData\\Local\\Programs\\Python\\Python310
\\nltk_data'
- 'C:\\Users\\user1\\AppData\\Local\\Programs\\Python\\Python310
\\share\\nltk_data'
- 'C:\\Users\\user1\\AppData\\Local\\Programs\\Python\\Python310
\\lib\\nltk_data'
- 'C:\\Users\\user1\\AppData\\Roaming\\nltk_data'
**********************************************************************
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\user1\Desktop\Tor Browser\sort.py", line 3, in <mod
ule>
print("better :", lemmatizer.lemmatize("better", pos ="a"))
File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\s
ite-packages\nltk\stem\wordnet.py", line 40, in lemmatize
lemmas = wordnet._morphy(word, pos)
File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\s
ite-packages\nltk\corpus\util.py", line 116, in __getattr__
self.__load()
File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\s
ite-packages\nltk\corpus\util.py", line 81, in __load
except LookupError: raise e
File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\s
ite-packages\nltk\corpus\util.py", line 78, in __load
root = nltk.data.find('{}/{}'.format(self.subdir, self.__name))
File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\s
ite-packages\nltk\data.py", line 675, in find
raise LookupError(resource_not_found)
LookupError:
**********************************************************************
Resource wordnet not found.
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('wordnet')
Searched in:
- 'C:\\Users\\user1/nltk_data'
- 'C:\\nltk_data'
- 'D:\\nltk_data'
- 'E:\\nltk_data'
- 'C:\\Users\\user1\\AppData\\Local\\Programs\\Python\\Python310
\\nltk_data'
- 'C:\\Users\\user1\\AppData\\Local\\Programs\\Python\\Python310
\\share\\nltk_data'
- 'C:\\Users\\user1\\AppData\\Local\\Programs\\Python\\Python310
\\lib\\nltk_data'
- 'C:\\Users\\user1\\AppData\\Roaming\\nltk_data'
**********************************************************************
I have already installed the NLTK package with the following command
import nltk
nltk.download()
How can I fix it?
I expected the function to work correctly.
A:
In addition to
import nltk
nltk.download("wordnet")
I also had to run this:
import nltk
nltk.download('omw-1.4')
Can you share the contents of your nltk_data directory (usually it is either C:\Users\yourusername\nltk_data\ or C:\Users\yourusername\AppData\Roaming\nltk_data)? Also, which version of nltk are you running? (You can get the version by running nltk.__version__.)
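Putting the pieces together, a minimal sketch of the full fix (the downloads only need to run once):
import nltk
nltk.download('wordnet')
nltk.download('omw-1.4')

from nltk.stem.wordnet import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
print("better :", lemmatizer.lemmatize("better", pos="a"))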
| do not run lemmatize of nltk package | Hello, I have code for lemmatizing a string in Python. The code is below:
from nltk.stem.wordnet import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
print("better :", lemmatizer.lemmatize("better", pos ="a"))
but when I compile and run it, some errors occur.
The errors are:
Traceback (most recent call last):
File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\s
ite-packages\nltk\corpus\util.py", line 80, in __load
try: root = nltk.data.find('{}/{}'.format(self.subdir, zip_name))
File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\s
ite-packages\nltk\data.py", line 675, in find
raise LookupError(resource_not_found)
LookupError:
**********************************************************************
Resource wordnet not found.
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('wordnet')
Searched in:
- 'C:\\Users\\user1/nltk_data'
- 'C:\\nltk_data'
- 'D:\\nltk_data'
- 'E:\\nltk_data'
- 'C:\\Users\\user1\\AppData\\Local\\Programs\\Python\\Python310
\\nltk_data'
- 'C:\\Users\\user1\\AppData\\Local\\Programs\\Python\\Python310
\\share\\nltk_data'
- 'C:\\Users\\user1\\AppData\\Local\\Programs\\Python\\Python310
\\lib\\nltk_data'
- 'C:\\Users\\user1\\AppData\\Roaming\\nltk_data'
**********************************************************************
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\user1\Desktop\Tor Browser\sort.py", line 3, in <mod
ule>
print("better :", lemmatizer.lemmatize("better", pos ="a"))
File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\s
ite-packages\nltk\stem\wordnet.py", line 40, in lemmatize
lemmas = wordnet._morphy(word, pos)
File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\s
ite-packages\nltk\corpus\util.py", line 116, in __getattr__
self.__load()
File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\s
ite-packages\nltk\corpus\util.py", line 81, in __load
except LookupError: raise e
File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\s
ite-packages\nltk\corpus\util.py", line 78, in __load
root = nltk.data.find('{}/{}'.format(self.subdir, self.__name))
File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\s
ite-packages\nltk\data.py", line 675, in find
raise LookupError(resource_not_found)
LookupError:
**********************************************************************
Resource wordnet not found.
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('wordnet')
Searched in:
- 'C:\\Users\\user1/nltk_data'
- 'C:\\nltk_data'
- 'D:\\nltk_data'
- 'E:\\nltk_data'
- 'C:\\Users\\user1\\AppData\\Local\\Programs\\Python\\Python310
\\nltk_data'
- 'C:\\Users\\user1\\AppData\\Local\\Programs\\Python\\Python310
\\share\\nltk_data'
- 'C:\\Users\\user1\\AppData\\Local\\Programs\\Python\\Python310
\\lib\\nltk_data'
- 'C:\\Users\\user1\\AppData\\Roaming\\nltk_data'
**********************************************************************
I have already installed the NLTK package with the following command
import nltk
nltk.download()
How can I fix it?
I expected the function to work correctly.
| [
"In addition to\nimport nltk\nnltk.download(\"wordnet\")\n\nI also had to run this:\nimport nltk\nnltk.download('omw-1.4')\n\nCan you share the contents of your nltk_data directory (usually it is either C:\\Users\\yourusername\\nltk_data\\ or C:\\Users\\yourusername\\AppData\\Roaming\\nltk_data)? Also which version of nltk are you running? (you can get the version by running nltk.version )\n"
] | [
0
] | [] | [] | [
"lemmatization",
"nltk",
"python"
] | stackoverflow_0074613719_lemmatization_nltk_python.txt |
Q:
How to get foreign key values with getattr from models
I have a model Project, and I am getting its attributes with the following instruction:
attr = getattr(project, 'id', None)
project is the instance, id is the field and None is the default return type.
My question is: what if I want to get foreign key values with this?
Get customer name
project.customer.name
How to get customer name with the above condition?
Already Tried
if callable(attr):
context[node][field] = '%s' % attr()
Current Code
context = {'project': {}}
fields = ('id', 'name', 'category', 'created_by', 'customer')
for field in fields:
attr = getattr(project, field, None)
if callable(attr):
context['project'][field] = '%s' % attr()
else:
context['project'][field] = attr
I need to adjust the customer object here, so that I can give something like customer__name or customer.name in my fields and it gets populated with the name of the customer, but I am not sure how.
A:
You can do something like follows:
def get_repr(value):
if callable(value):
return '%s' % value()
return value
def get_field(instance, field):
field_path = field.split('.')
attr = instance
for elem in field_path:
try:
attr = getattr(attr, elem)
except AttributeError:
return None
return attr
for field in fields:
context['project'][field] = get_repr(get_field(project, field))
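With this helper, a dotted path can go straight into the fields tuple, for example (assuming the models from the question):
fields = ('id', 'name', 'category', 'created_by', 'customer.name')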
A:
Here's an equivalent recursive getattr -
def get_foreign_key_attr(obj, field: str):
"""Get attr recursively by following foreign key relations
For example ...
get_foreign_key_attr(
<Station: Spire operational 1m>, "idproject__idcountry__name"
)
... splits "idproject__idcountry__name" into ["idproject", "idcountry", "name"]
    & follows each foreign key relation to find the value
"""
fields = field.split("__")
if len(fields) == 1:
return getattr(obj, fields[0], "")
else:
first_field = fields[0]
remaining_fields = "__".join(fields[1:])
return get_foreign_key_attr(getattr(obj, first_field), remaining_fields)
| How to get foreign key values with getattr from models | I have a model Project, and I am getting its attributes with the following instruction:
attr = getattr(project, 'id', None)
project is the instance, id is the field and None is the default return type.
My question is: what if I want to get the Foreign Key keys with this?
Get customer name
project.customer.name
How to get customer name with the above condition?
Already Tried
if callable(attr):
context[node][field] = '%s' % attr()
Current Code
context = {'project': {}}
fields = ('id', 'name', 'category', 'created_by', 'customer')
for field in fields:
attr = getattr(project, field, None)
if callable(attr):
context['project'][field] = '%s' % attr()
else:
context['project'][field] = attr
I need to adjust the customer object here, so that I can give something like customer__name or customer.name in my fields and it gets populated with the name of the customer, but I am not sure how.
| [
"You can do something like follows:\ndef get_repr(value): \n if callable(value):\n return '%s' % value()\n return value\n\ndef get_field(instance, field):\n field_path = field.split('.')\n attr = instance\n for elem in field_path:\n try:\n attr = getattr(attr, elem)\n except AttributeError:\n return None\n return attr\n\nfor field in fields:\n context['project'][field] = get_repr(get_field(project, field))\n\n",
"Here's an equivalent recursive getattr -\ndef get_foreign_key_attr(obj, field: str):\n \"\"\"Get attr recursively by following foreign key relations\n\n For example ...\n get_foreign_key_attr(\n <Station: Spire operational 1m>, \"idproject__idcountry__name\"\n )\n ... splits \"idproject__idcountry__name\" into [\"idproject\", \"idcountry\", \"name\"]\n & follows finds the value of each foreign key relation\n \"\"\"\n fields = field.split(\"__\")\n if len(fields) == 1:\n return getattr(obj, fields[0], \"\")\n else:\n first_field = fields[0]\n remaining_fields = \"__\".join(fields[1:])\n return get_foreign_key_attr(getattr(obj, first_field), remaining_fields)\n\n"
] | [
19,
0
] | [] | [] | [
"callable",
"getattr",
"model",
"python",
"relational_database"
] | stackoverflow_0020235807_callable_getattr_model_python_relational_database.txt |