content (stringlengths 85–101k) | title (stringlengths 0–150) | question (stringlengths 15–48k) | answers (sequence) | answers_scores (sequence) | non_answers (sequence) | non_answers_scores (sequence) | tags (sequence) | name (stringlengths 35–137)
---|---|---|---|---|---|---|---|---|
Q:
How to add an anchor to navigate within the same page in plotly dash
How to add an anchor to navigate within the same page in plotly dash?
A:
It's possible to place an anchor using href to where you want to navigate. You can use html.A to add an invisible anchor.
Note that there's no '#' in html.A id.
Ex:
html.A(id="PageSection1"),
Then just place the navigation button where you need it. Clicking the button will take you to the anchor position.
html.A(html.Button('Go to page section1', id='page1-btn', n_clicks=0),href='#PageSection1'),
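For context, a minimal self-contained sketch of how the two pieces might fit together in a layout. The filler content and app setup are made up for illustration, and it assumes a Dash 2.x install where html is importable from dash:
import dash
from dash import html

app = dash.Dash(__name__)

app.layout = html.Div([
    # Link-wrapped button that jumps to the anchor further down the page
    html.A(html.Button('Go to page section1', id='page1-btn', n_clicks=0), href='#PageSection1'),
    # Tall filler block so there is something to scroll past
    html.Div('...long page content...', style={'height': '150vh'}),
    # Invisible anchor that the link above navigates to
    html.A(id='PageSection1'),
    html.H2('Page section 1'),
])

if __name__ == '__main__':
    app.run_server(debug=True)  # newer Dash releases also expose this as app.run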
| How to add an anchor to navigate within same page in ploty dash | How to add an anchor to navigate within the same page in ploty dash?
| [
"It's possible to place an anchor using href to where you want to navigate. You can use html.A to add an invisible anchor.\nNote that there's no '#' in html.A id.\nEx:\nhtml.A(id=\"PageSection1\"),\n\nThen just place navigate button where you need it. Clicking the button will take you to the anchor position.\nhtml.A(html.Button('Go to page section1', id='page1-btn', n_clicks=0),href='#PageSection1'),\n\n"
] | [
0
] | [] | [] | [
"plotly_dash",
"python"
] | stackoverflow_0074615880_plotly_dash_python.txt |
Q:
How to pass -Xfrozen_modules=off to python to disable frozen modules?
Running a python script on VS outputs this error. How to pass -Xfrozen_modules=off to python to disable frozen modules?
I was trying to update the python version from 3.6 to 3.11 and then started seeing this message.
A:
In Python version 3.11 I also got the error.
Python version 3.11: https://i.stack.imgur.com/EI0Co.png
Python version 3.10 works fine.
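For what it's worth, the message itself comes from the VS Code Python debugger (debugpy/pydevd) rather than from the script. A sketch of the two things that are usually suggested, where your_script.py is a placeholder and the exact effect on the debugger is an assumption:
# Pass the interpreter option on the command line (Python 3.11+)
python -Xfrozen_modules=off your_script.py

# Or silence the debugger's file-validation warning via an environment variable
export PYDEVD_DISABLE_FILE_VALIDATION=1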
| How to pass -Xfrozen_modules=off to python to disable frozen modules? | Running a python script on VS outputs this error. How to pass -Xfrozen_modules=off to python to disable frozen modules?
I was trying to update the python version from 3.6 to 3.11 and then started seeing this message.
| [
"In Python version 3.11 I got also the error.\nPython version 3.11 https://i.stack.imgur.com/EI0Co.png\nPython version 3.10 works fine.\n"
] | [
0
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074364918_python_python_3.x.txt |
Q:
How can I take the Fourier Transform using np.fft.fft function?
I want to generate two sinusoidal signals where f1=5 Hz and f2=3 Hz, and the time duration of the signal is T=1.
x1(t) = sin(2*pi*f1*t)
x2(t) = sin(2*pi*(f1+f2)*t)
x(t) = x1(t) + x2(t)
So, I will try to analyze the frequency domain representation of this signal.
How can I take the Fourier Transform using the np.fft.fft function with DFT size N=64, sampling frequency fs=64 and T=1, which is the time duration over which you will have samples? In the end, I want to plot x(t) and X(f), which is the Fourier Transform of x(t).
Thanks, if you help me, I will be grateful to you.
In the end, it will be seen how to distinguish the two signals. Thanks for everything for everyone.
A:
It is not entirely clear what the final signal is you want to do a Fourier Transform on, but I made the following assumptions:
your time domain signal x(t) = sin(2*pi*f1*t) + sin(2*pi*f2*t). From what you wrote it wasn't entirely clear
you want to have the FFT of above x(t)
your total time T=1s (you wrote T=1, units are important)
your sampling frequency fs=64 sample/s (you wrote fs=64, units are important)
With these givens, you can do something like the following:
import numpy as np
import math
import matplotlib.pyplot as plt
# Entering your input variables
fs = 64
T = 1
f1 = 3
f2 = 5
# Creating the time and frequency axis
time = np.arange(0, T, 1/fs)
freq = np.arange(0, fs/2, 1/T)
# Creating the time domain x signal
x1 = np.sin(2*math.pi*f1*time)
x2 = np.sin(2*math.pi*f2*time)
x = x1 + x2
# Making the FFT, while normalizing to make sure the amplitude is correct
X = np.fft.fft(x)/len(time)*2
# Plotting the time domain + frequency domain signals on a subplot. For the
# frequency domain plot, we only take the first half of the frequency axis
# (going up to fs/2) because we're dealing with a real signal.
fig, axs = plt.subplots(2,1)
axs[0].plot(time, x)
axs[0].set_xlabel('Time [s]')
axs[0].set_ylabel('Amplitude')
axs[1].plot(freq, abs(np.split(X,2)[0]), "x")
axs[1].set_xlabel('Frequency [Hz]')
axs[1].set_ylabel('Amplitude')
plt.show()
Out of this, you get the following plot:
As you can see, there are clear peaks with amplitude 1 at 3 and 5 Hz! :)
Now, it's possible I didn't perfectly understand what your x(t) signal needed to be, but you can easily change the definitions of x1, x2 and x and get the output you want.
Hope this helps!
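As a side note, because the signal is real-valued, an equivalent sketch uses np.fft.rfft and np.fft.rfftfreq, which return only the non-negative frequencies and avoid building the frequency axis and splitting the spectrum by hand:
# Same normalization as above, but only the non-negative half of the spectrum is returned
X_half = np.fft.rfft(x) / len(time) * 2
freq_half = np.fft.rfftfreq(len(time), d=1/fs)
axs[1].plot(freq_half, abs(X_half), "x")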
| How can I take the Fourier Transform using np.fft.fft function? | I want to generate two sinusoidal signals where =5 Hz and f2=3 Hz, the time duration of the signal is =1.
x1(t)=sin(2)
X2(t)=sin(2(1+f2))
X()=1()+2()
So, I will try to analyze the frequency domain respresentation of this signal.
How can I take the Fourier Transform using np.fft.fft function by using DFT size =64 , sampling frequency =64 and =1 that is the time duration that you will have samples. In the end, I want to plot () and () which is the Fourier Transform of x(t).
Thanks, If you help me, I will be grateful to you.
In the end, it will be seen how to distinguish two signals. Thanks for everything for everyone.
| [
"It is not entirely clear what the final signal is you want to do a Fourier Transform on, but I made the following assumptions:\n\nyour time domain signal x(t)=sin(21)+sin(22). From what your wrote it wasn't entirely clear\nyou want to have the FFT of above x(t)\nyour total time T=1s (you wrote T=1, units are important)\nyour sampling frequency fs=64 sample/s (you wrote fs=64, units are important)\n\nWith these givens, you can do something like the following:\nimport numpy as np\nimport math\nimport matplotlib.pyplot as plt\n\n# Entering your input variables\nfs = 64\nT = 1\nf1 = 3\nf2 = 5\n\n# Creating the time and frequency axis\ntime = np.arange(0, T, 1/fs)\nfreq = np.arange(0, fs/2, 1/T)\n\n# Creating the time domain x signal\nx1 = np.sin(2*math.pi*f1*time)\nx2 = np.sin(2*math.pi*f2*time)\nx = x1 + x2\n\n# Making the FFT, while normalizing to make sure the amplitude is correct\nX = np.fft.fft(x)/len(time)*2\n\n# Plotting the time domain + frequency domain signals on a subplot. For the\n# frequency domain plot, we only take the first half of the frequency axis\n# (going up to fs/2) because we're dealing with a real signal.\nfig, axs = plt.subplots(2,1)\naxs[0].plot(time, x)\naxs[0].set_xlabel('Time [s]')\naxs[0].set_ylabel('Amplitude')\naxs[1].plot(freq, abs(np.split(X,2)[0]), \"x\")\naxs[1].set_xlabel('Frequency [Hz]')\naxs[1].set_ylabel('Amplitude')\nplt.show()\n\nOut of this, you get the following plot:\n\nAs you can see, you clearly see peaks with amplitude 1 on 3 and 5 Hz! :)\nNow, it's possible I didn't perfectly understand what your x(t) signal needed to be, but you can easily change the definitions of x1, x2 and x and get the output you want.\nHope this helps!\n"
] | [
0
] | [] | [] | [
"fft",
"python",
"signal_processing"
] | stackoverflow_0074614941_fft_python_signal_processing.txt |
Q:
Is there a python binding to the Berkeley DB XML database?
I'm trying to migrate some perl code to python and it uses Sleepycat::DbXml 'simple' to get read access to a .dbxml file, creates an XmlManager, calls createQueryContext, openContainer and query to get an XmlValue. I have found https://pypi.org/project/berkeleydb/ to support the Berkeley DB in general, but it has no mention of this XML layer. Is there an existing API I can use in python 3?
A:
Berkeley DB and Berkeley DB XML are two different products. My python bindings (legacy "bsddb3" and current "berkeleydb") only interface with Berkeley DB.
I am not aware of any Python bindings for Berkeley DB XML.
I am a freelancer with commercial contracts, if that option would be useful to you.
| Is there a python binding to the Berkeley DB XML database? | I'm trying to migrate some perl code to python and it uses Sleeypcat::DbXml 'simple' to get read access to a .dbxml file, creates a XmlManager, calls createQueryContext, openContainer and query to get an XmlValue. I have found https://pypi.org/project/berkeleydb/ to support the Berkeley DB in general, but it has no mention of this XML layer. Is there an existing API I can use in python 3?
| [
"Berkeley DB and Berkeley DB XML are two different products. My python bindings (legacy \"bsddb3\" and current \"berkeleydb\") only interface with Berkeley DB.\nI am not aware of any Python bindings for Berkeley DB XML.\nI am a freelance with commercial contracts, if that option would be useful to you.\n"
] | [
1
] | [] | [] | [
"api",
"berkeley_db",
"berkeley_db_xml",
"python",
"python_3.x"
] | stackoverflow_0074579981_api_berkeley_db_berkeley_db_xml_python_python_3.x.txt |
Q:
How to get the file-magic module working on Alpine Linux?
I'm trying to use file-magic on Alpine Linux and it keeps blowing up with AttributeError: Symbol not found: magic_open whenever I import the magic module.
I noted that there's two Python modules out there with the same magic namespace, but as most Linux distros appear to be using file-magic and not python-magic, I decided to make my module dependent on the former. However on Alpine, only python-magic appears to work:
Setup Alpine & install libmagic:
$ docker run --rm -it python:3-alpine /bin/sh
/ # apk add libmagic
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/community/x86_64/APKINDEX.tar.gz
(1/1) Installing libmagic (5.32-r0)
OK: 21 MiB in 35 packages
Install file-magic:
/ # pip install file-magic
Collecting file-magic
Downloading https://files.pythonhosted.org/packages/bb/7e/b256e53a6558afd348387c3119dc4a1e3003d36030584a427168c8d72a7a/file_magic-0.4.0-py3-none-any.whl
Installing collected packages: file-magic
Successfully installed file-magic-0.4.0
It doesn't work though:
/ # python
Python 3.7.1 (default, Dec 21 2018, 03:21:42)
[GCC 6.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import magic
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.7/site-packages/magic.py", line 61, in <module>
_open = _libraries['magic'].magic_open
File "/usr/local/lib/python3.7/ctypes/__init__.py", line 369, in __getattr__
func = self.__getitem__(name)
File "/usr/local/lib/python3.7/ctypes/__init__.py", line 374, in __getitem__
func = self._FuncPtr((name_or_ordinal, self))
AttributeError: Symbol not found: magic_open
>>>
Swap out file-magic for python-magic:
$ pip uninstall file-magic
Uninstalling file-magic-0.4.0:
Would remove:
/usr/local/lib/python3.7/site-packages/file_magic-0.4.0.dist-info/*
/usr/local/lib/python3.7/site-packages/magic.py
Proceed (y/n)? y
Successfully uninstalled file-magic-0.4.0
/ # pip install python-magic
Collecting python-magic
Downloading https://files.pythonhosted.org/packages/42/a1/76d30c79992e3750dac6790ce16f056f870d368ba142f83f75f694d93001/python_magic-0.4.15-py2.py3-none-any.whl
Installing collected packages: python-magic
Successfully installed python-magic-0.4.15
It works:
/ # python
Python 3.7.1 (default, Dec 21 2018, 03:21:42)
[GCC 6.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import magic
>>>
I understand that every distro has its quirks, and I want to accommodate, but I just don't know how to do that at the moment. Is there a way to define a module's setup.py to work around this situation? Is there a way to tweak the Docker container to work with file-magic? What's the right thing to do in this case?
A:
Python-Magic has a hard-coded fallback for Alpine Linux so you might be in for a hard time with file-magic, which just uses ctypes.util.find_library('magic') and apparently can't find your library.
The exact behavior of ctypes.util.find_library is pretty complicated but if you trace through the code and see which methods it attempts to use on your platform you will probably find a way to get it to find the library (probably one of gcc, /sbin/ldconfig, or objdump).
Another solution is to patch ctypes.py to properly handle systems like Alpine:
wget https://patch-diff.githubusercontent.com/raw/python/cpython/pull/10461.patch -O - | patch /usr/local/lib/python3.7/ctypes/util.py
Now you just need to set LD_LIBRARY_PATH and Python can find the library:
/ # LD_LIBRARY_PATH="/usr/lib/" python -c "import ctypes.util; print(ctypes.util.find_library('magic'))"
libmagic.so.1
A:
The other solution didn't work for me (Docker, Python 2.7, Alpine).
Solution - rename the file :)
# fix no libmagic found error
RUN cp /usr/lib/libmagic.so.1 /usr/lib/libmagic.so
| How to get the file-magic module working on Alpine Linux? | I'm trying to use file-magic on Alpine Linux and it keeps blowing up with AttributeError: Symbol not found: magic_open whenever I import the magic module.
I noted that there's two Python modules out there with the same magic namespace, but as most Linux distros appear to be using file-magic and not python-magic, I decided to make my module dependent on the former. However on Alpine, only python-magic appears to work:
Setup Alpine & install libmagic:
$ docker run --rm -it python:3-alpine /bin/sh
/ # apk add libmagic
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/community/x86_64/APKINDEX.tar.gz
(1/1) Installing libmagic (5.32-r0)
OK: 21 MiB in 35 packages
Install file-magic:
/ # pip install file-magic
Collecting file-magic
Downloading https://files.pythonhosted.org/packages/bb/7e/b256e53a6558afd348387c3119dc4a1e3003d36030584a427168c8d72a7a/file_magic-0.4.0-py3-none-any.whl
Installing collected packages: file-magic
Successfully installed file-magic-0.4.0
It doesn't work though:
/ # python
Python 3.7.1 (default, Dec 21 2018, 03:21:42)
[GCC 6.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import magic
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.7/site-packages/magic.py", line 61, in <module>
_open = _libraries['magic'].magic_open
File "/usr/local/lib/python3.7/ctypes/__init__.py", line 369, in __getattr__
func = self.__getitem__(name)
File "/usr/local/lib/python3.7/ctypes/__init__.py", line 374, in __getitem__
func = self._FuncPtr((name_or_ordinal, self))
AttributeError: Symbol not found: magic_open
>>>
Swap out file-magic for python-magic:
$ pip uninstall file-magic
Uninstalling file-magic-0.4.0:
Would remove:
/usr/local/lib/python3.7/site-packages/file_magic-0.4.0.dist-info/*
/usr/local/lib/python3.7/site-packages/magic.py
Proceed (y/n)? y
Successfully uninstalled file-magic-0.4.0
/ # pip install python-magic
Collecting python-magic
Downloading https://files.pythonhosted.org/packages/42/a1/76d30c79992e3750dac6790ce16f056f870d368ba142f83f75f694d93001/python_magic-0.4.15-py2.py3-none-any.whl
Installing collected packages: python-magic
Successfully installed python-magic-0.4.15
It works:
/ # python
Python 3.7.1 (default, Dec 21 2018, 03:21:42)
[GCC 6.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import magic
>>>
I understand that every distro has it's quirks, and I want to accommodate, but I just don't know how to do that at the moment. Is there a way to define a module's setup.py to work around this situation? Is there a way to tweak the Docker container to work with file-magic? What's the right thing to do in this case?
| [
"Python-Magic has a hard-coded fallback for Alpine Linux so you might be in for a hard time with file-magic, which just uses ctypes.util.find_library('magic') and apparently can't find your library.\nThe exact behavior of ctypes.util.find_library is pretty complicated but if you trace through the code and see which methods it attempts to use on your platform you will probably find a way to get it to find the library (probably one of gcc, /sbin/ldconfig, or objdump).\nAnother solution is to patch ctypes.py to properly handle systems like Alpine:\nwget https://patch-diff.githubusercontent.com/raw/python/cpython/pull/10461.patch -O - | patch /usr/local/lib/python3.7/ctypes/util.py\n\nNow you just need to set LD_LIBRARY_PATH and Python can find the library:\n/ # LD_LIBRARY_PATH=\"/usr/lib/\" python -c \"import ctypes.util; print(ctypes.util.find_library('magic'))\"\nlibmagic.so.1\n\n",
"The other solution didn't work for me Docker Python 2.7 Alpine\nSolution - rename the file :)\n# fix no libmagic found error\nRUN cp /usr/lib/libmagic.so.1 /usr/lib/libmagic.so \n\n"
] | [
4,
0
] | [] | [] | [
"alpine_linux",
"libmagic",
"python"
] | stackoverflow_0053936467_alpine_linux_libmagic_python.txt |
Q:
Scrapy - How does a request sent using requests library to an API differs from the request that is sent using Scrapy.Request?
I am a beginner at using Scrapy and I was trying to scrape this website https://directory.ntschools.net/#/schools which is using javascript to load the contents. So I checked the networks tab and there's an API address available https://directory.ntschools.net/api/System/GetAllSchools If you open this address, the data is in XML format. But when you check the response tab while inspecting the network tab, the data is there in json format.
I first tried using Scrapy, sent the request to the API address WITHOUT any headers and the response that it returned was in XML which was throwing JSONDecode error upon using json.loads(). So I used the header 'Accept' : 'application/json' and the response I got was in JSON. That worked well
import scrapy
import json
import requests
class NtseSpider_new(scrapy.Spider):
name = 'ntse_new'
header = {
'Accept': 'application/json',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36 Edg/107.0.1418.56',
}
def start_requests(self):
yield scrapy.Request('https://directory.ntschools.net/api/System/GetAllSchools',callback=self.parse,headers=self.header)
def parse(self,response):
data = json.loads(response.body) #returned json response
But then I used the requests module WITHOUT any headers and the response I got was in JSON too!
import requests
import json
res = requests.get('https://directory.ntschools.net/api/System/GetAllSchools')
js = json.loads(res.content) #returned json response
Can anyone please tell me if there's any difference between both the types of requests? Is there a default response format for requests module when making a request to an API? Surely, I am missing something?
Thanks
A:
It's because Scrapy sets the Accept header to 'text/html,application/xhtml+xml,application/xml ...'. You can see that from this.
I experimented and found that server sends a JSON response if the request has no Accept header.
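If you prefer not to set the header on every request, a sketch of a project-wide alternative is to override Scrapy's default headers in settings.py (DEFAULT_REQUEST_HEADERS is a standard Scrapy setting; sending only Accept here is an assumption about what this particular server needs):
# settings.py
DEFAULT_REQUEST_HEADERS = {
    "Accept": "application/json",
}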
| Scrapy - How does a request sent using requests library to an API differs from the request that is sent using Scrapy.Request? | I am a beginner at using Scrapy and I was trying to scrape this website https://directory.ntschools.net/#/schools which is using javascript to load the contents. So I checked the networks tab and there's an API address available https://directory.ntschools.net/api/System/GetAllSchools If you open this address, the data is in XML format. But when you check the response tab while inspecting the network tab, the data is there in json format.
I first tried using Scrapy, sent the request to the API address WITHOUT any headers and the response that it returned was in XML which was throwing JSONDecode error upon using json.loads(). So I used the header 'Accept' : 'application/json' and the response I got was in JSON. That worked well
import scrapy
import json
import requests
class NtseSpider_new(scrapy.Spider):
name = 'ntse_new'
header = {
'Accept': 'application/json',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36 Edg/107.0.1418.56',
}
def start_requests(self):
yield scrapy.Request('https://directory.ntschools.net/api/System/GetAllSchools',callback=self.parse,headers=self.header)
def parse(self,response):
data = json.loads(response.body) #returned json response
But then I used the requests module WITHOUT any headers and the response I got was in JSON too!
import requests
import json
res = requests.get('https://directory.ntschools.net/api/System/GetAllSchools')
js = json.loads(res.content) #returned json response
Can anyone please tell me if there's any difference between both the types of requests? Is there a default response format for requests module when making a request to an API? Surely, I am missing something?
Thanks
| [
"It's because Scrapy sets the Accept header to 'text/html,application/xhtml+xml,application/xml ...'. You can see that from this.\nI experimented and found that server sends a JSON response if the request has no Accept header.\n"
] | [
1
] | [] | [] | [
"api",
"python",
"python_requests",
"scrapy",
"web_scraping"
] | stackoverflow_0074612450_api_python_python_requests_scrapy_web_scraping.txt |
Q:
How can I get all values above a certain percentile in a data frame using pandas?
I can get the value of 75% using the quantile function in pandas, but how can I get all the values from 75% to 100% of each column in a data frame?
I tried this at the beginning to get the 75th percentile and the mean of that
n = df.quantile(0.75)
x = df.mean(n)
Then I tried a for loop, but it did not quite work because I cannot specify the loop to go between rows in each column (and do this for all the columns)
n = df.quantile(0.75)
for i in n.index:
if i >= n:
print(n)
A:
The expected output is unclear, but assuming you want a list/Series of those values.
Let's start with a dummy example:
np.random.seed(0)
df = pd.DataFrame(np.random.randint(0, 100, size=(20, 10)))
And get the quantile(0.75) per column:
df.quantile(0.75)
0 75.50
1 67.50
2 72.75
3 78.25
4 68.25
5 77.00
6 67.50
7 77.75
8 57.50
9 74.50
Name: 0.75, dtype: float64
We can then use:
df.where(df.gt(df.quantile(0.75))).stack().droplevel(0).sort_index()
0 77.0
0 94.0
0 78.0
0 81.0
0 80.0
1 88.0
1 69.0
1 95.0
1 84.0
1 79.0
2 82.0
2 75.0
2 80.0
2 88.0
2 82.0
...
Or as list:
df.where(df.gt(df.quantile(0.75))).stack().groupby(level=1).agg(list)
0 [81.0, 78.0, 94.0, 80.0, 77.0]
1 [88.0, 79.0, 84.0, 69.0, 95.0]
2 [88.0, 82.0, 75.0, 82.0, 80.0]
3 [99.0, 82.0, 99.0, 93.0, 86.0]
4 [72.0, 88.0, 91.0, 99.0, 77.0]
5 [95.0, 98.0, 86.0, 94.0, 83.0]
6 [83.0, 79.0, 69.0, 81.0, 98.0]
7 [87.0, 80.0, 99.0, 94.0, 98.0]
8 [69.0, 76.0, 85.0, 62.0, 70.0]
9 [87.0, 88.0, 79.0, 96.0, 85.0]
dtype: object
| How can I get all values above a certain percentile in a data frame using pandas? | I can get the value of 75% using the quantile function in pandas, but how can I get all the values from 75% to 100% of each column in a data frame?
I tried this at the beginning to get the 75 percentile and the mean of that
n = df.quantile(0.75)
x = df.mean(n)
Then I tried a for loop but did not quite work because I cannot specify the loop to go between rows in each column (and do this for all the columns)
n = df.quantile(0.75)
for i in n.index:
if i >= n:
print(n)
| [
"The expected output is unclear, but assuming you want a list/Series of those values.\nLet's start with a dummy example:\nnp.random.seed(0)\ndf = pd.DataFrame(np.random.randint(0, 100, size=(20, 10)))\n\nAnd get the quantile(0.75) per column:\ndf.quantile(0.75)\n\n0 75.50\n1 67.50\n2 72.75\n3 78.25\n4 68.25\n5 77.00\n6 67.50\n7 77.75\n8 57.50\n9 74.50\nName: 0.75, dtype: float64\n\nWe can then use:\ndf.where(df.gt(df.quantile(0.75))).stack().droplevel(0).sort_index()\n\n0 77.0\n0 94.0\n0 78.0\n0 81.0\n0 80.0\n1 88.0\n1 69.0\n1 95.0\n1 84.0\n1 79.0\n2 82.0\n2 75.0\n2 80.0\n2 88.0\n2 82.0\n...\n\nOr as list:\ndf.where(df.gt(df.quantile(0.75))).stack().groupby(level=1).agg(list)\n\n0 [81.0, 78.0, 94.0, 80.0, 77.0]\n1 [88.0, 79.0, 84.0, 69.0, 95.0]\n2 [88.0, 82.0, 75.0, 82.0, 80.0]\n3 [99.0, 82.0, 99.0, 93.0, 86.0]\n4 [72.0, 88.0, 91.0, 99.0, 77.0]\n5 [95.0, 98.0, 86.0, 94.0, 83.0]\n6 [83.0, 79.0, 69.0, 81.0, 98.0]\n7 [87.0, 80.0, 99.0, 94.0, 98.0]\n8 [69.0, 76.0, 85.0, 62.0, 70.0]\n9 [87.0, 88.0, 79.0, 96.0, 85.0]\ndtype: object\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"loops",
"pandas",
"percentile",
"python"
] | stackoverflow_0074615879_dataframe_loops_pandas_percentile_python.txt |
Q:
web scrape specific sets of data from table using API with python
I am looking to web scrape the large table showing the name, date, bought/ sold, amount of shares, etc from the following website:
https://www.nasdaq.com/market-activity/stocks/aapl/insider-activity
Preferably I need someone to show how to use the Nasdaq api if possible. I believe the way I'd normally webscrape (using beautifulSoup) would be inefficient for this task.
I only need the data of: date, transaction type & shares traded. My code (below) however pulls everything from the table:
import requests
import pandas as pd
import json
headers = {
"accept": "application/json, text/plain, */*",
"origin": "https://www.nasdaq.com",
"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.79 Safari/537.36",
}
pd.set_option("display.max_columns", None)
pd.set_option("display.max_colwidth", None)
url = "https://api.nasdaq.com/api/company/AAPL/insider-trades?limit=20&type=ALL&sortColumn=lastDate&sortOrder=DESC"
r = requests.get(url, headers=headers)
df = pd.json_normalize(
r.json()["data"]["transactionTable"]["table"]["rows"]
)
df.to_json("AAPL22_institutional_table_MRKTVAL.json", indent=4)
which gives the output:
{
"insider":{
"0":"KONDO CHRIS",
"1":"MAESTRI LUCA",
"2":"O'BRIEN DEIRDRE",
"3":"KONDO CHRIS",
"4":"KONDO CHRIS",
"5":"O'BRIEN DEIRDRE",
"6":"O'BRIEN DEIRDRE",
"7":"ADAMS KATHERINE L.",
"8":"ADAMS KATHERINE L.",
"9":"O'BRIEN DEIRDRE",
"10":"WILLIAMS JEFFREY E",
"11":"WILLIAMS JEFFREY E",
"12":"MAESTRI LUCA",
"13":"MAESTRI LUCA",
"14":"ADAMS KATHERINE L.",
"15":"ADAMS KATHERINE L.",
"16":"O'BRIEN DEIRDRE",
"17":"O'BRIEN DEIRDRE",
"18":"MAESTRI LUCA",
"19":"O'BRIEN DEIRDRE"
},
"relation":{
"0":"Officer",
"1":"Officer",
"2":"Officer",
"3":"Officer",
"4":"Officer",
"5":"Officer",
"6":"Officer",
"7":"Officer",
"8":"Officer",
"9":"Officer",
"10":"Officer",
"11":"Officer",
"12":"Officer",
"13":"Officer",
"14":"Officer",
"15":"Officer",
"16":"Officer",
"17":"Officer",
"18":"Officer",
"19":"Officer"
},
"lastDate":{
"0":"11\/22\/2022",
"1":"10\/28\/2022",
"2":"10\/17\/2022",
"3":"10\/15\/2022",
"4":"10\/15\/2022",
"5":"10\/15\/2022",
"6":"10\/15\/2022",
"7":"10\/03\/2022",
"8":"10\/03\/2022",
"9":"10\/03\/2022",
"10":"10\/01\/2022",
"11":"10\/01\/2022",
"12":"10\/01\/2022",
"13":"10\/01\/2022",
"14":"10\/01\/2022",
"15":"10\/01\/2022",
"16":"10\/01\/2022",
"17":"10\/01\/2022",
"18":"08\/17\/2022",
"19":"08\/08\/2022"
},
"transactionType":{
"0":"Sell",
"1":"Automatic Sell",
"2":"Automatic Sell",
"3":"Disposition (Non Open Market)",
"4":"Option Execute",
"5":"Disposition (Non Open Market)",
"6":"Option Execute",
"7":"Automatic Sell",
"8":"Sell",
"9":"Automatic Sell",
"10":"Disposition (Non Open Market)",
"11":"Option Execute",
"12":"Disposition (Non Open Market)",
"13":"Option Execute",
"14":"Disposition (Non Open Market)",
"15":"Option Execute",
"16":"Disposition (Non Open Market)",
"17":"Option Execute",
"18":"Automatic Sell",
"19":"Automatic Sell"
},
"ownType":{
"0":"Direct",
"1":"Direct",
"2":"Direct",
"3":"Direct",
"4":"Direct",
"5":"Direct",
"6":"Direct",
"7":"Direct",
"8":"Direct",
"9":"Direct",
"10":"Direct",
"11":"Direct",
"12":"Direct",
"13":"Direct",
"14":"Direct",
"15":"Direct",
"16":"Direct",
"17":"Direct",
"18":"Direct",
"19":"Direct"
},
"sharesTraded":{
"0":"20,200",
"1":"176,299",
"2":"8,053",
"3":"6,399",
"4":"13,136",
"5":"8,559",
"6":"16,612",
"7":"167,889",
"8":"13,250",
"9":"176,299",
"10":"177,870",
"11":"365,600",
"12":"189,301",
"13":"365,600",
"14":"184,461",
"15":"365,600",
"16":"189,301",
"17":"365,600",
"18":"96,735",
"19":"15,366"
},
"lastPrice":{
"0":"$148.72",
"1":"$154.70",
"2":"$142.45",
"3":"$138.38",
"4":"",
"5":"$138.38",
"6":"",
"7":"$138.44",
"8":"$142.93",
"9":"$141.09",
"10":"$138.20",
"11":"",
"12":"$138.20",
"13":"",
"14":"$138.20",
"15":"",
"16":"$138.20",
"17":"",
"18":"$174.66",
"19":"$164.86"
},
"sharesHeld":{
"0":"31,505",
"1":"110,673",
"2":"136,290",
"3":"51,705",
"4":"58,104",
"5":"144,343",
"6":"152,902",
"7":"440,584",
"8":"427,334",
"9":"136,290",
"10":"677,392",
"11":"855,262",
"12":"286,972",
"13":"476,273",
"14":"608,473",
"15":"792,934",
"16":"312,589",
"17":"501,890",
"18":"110,673",
"19":"136,290"
},
"url":{
"0":"\/market-activity\/insiders\/kondo-chris-956353",
"1":"\/market-activity\/insiders\/maestri-luca-848571",
"2":"\/market-activity\/insiders\/obrien-deirdre-1076854",
"3":"\/market-activity\/insiders\/kondo-chris-956353",
"4":"\/market-activity\/insiders\/kondo-chris-956353",
"5":"\/market-activity\/insiders\/obrien-deirdre-1076854",
"6":"\/market-activity\/insiders\/obrien-deirdre-1076854",
"7":"\/market-activity\/insiders\/adams-katherine-l-803988",
"8":"\/market-activity\/insiders\/adams-katherine-l-803988",
"9":"\/market-activity\/insiders\/obrien-deirdre-1076854",
"10":"\/market-activity\/insiders\/williams-jeffrey-e-833286",
"11":"\/market-activity\/insiders\/williams-jeffrey-e-833286",
"12":"\/market-activity\/insiders\/maestri-luca-848571",
"13":"\/market-activity\/insiders\/maestri-luca-848571",
"14":"\/market-activity\/insiders\/adams-katherine-l-803988",
"15":"\/market-activity\/insiders\/adams-katherine-l-803988",
"16":"\/market-activity\/insiders\/obrien-deirdre-1076854",
"17":"\/market-activity\/insiders\/obrien-deirdre-1076854",
"18":"\/market-activity\/insiders\/maestri-luca-848571",
"19":"\/market-activity\/insiders\/obrien-deirdre-1076854"
}
}
What do I need to include in my code so it only pulls a select few sets of data? (date, transaction type and shares traded)
A:
All you have to do is:
import requests
import json
import pandas as pd
url = 'https://api.nasdaq.com/api/company/AAPL/insider-trades?limit=25&offset=0&type=ALL&sortColumn=lastDate&sortOrder=DESC'
headers = {
'accept': 'application/json, text/plain, */*',
'accept-encoding': 'gzip, deflate, br',
'accept-language': 'en-US,en;q=0.9',
'origin': 'https://www.nasdaq.com',
'referer': 'https://www.nasdaq.com',
'sec-fetch-mode': 'cors',
'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.79 Safari/537.36'
}
r = requests.get(url, headers=headers)
df = pd.json_normalize(r.json()['data']['transactionTable']['table']['rows'])[['lastDate', 'transactionType', 'sharesTraded']]
df.to_json("AAPL22_institutional_table_MRKTVAL.json", indent=4)
| web scrape specific sets of data from table using API with python | I am looking to web scrape the large table showing the name, date, bought/ sold, amount of shares, etc from the following website:
https://www.nasdaq.com/market-activity/stocks/aapl/insider-activity
Preferably I need someone to show how to use the Nasdaq api if possible. I believe the way I'd normally webscrape (using beautifulSoup) would be inefficient for this task.
I only need the data of: date, transaction type & shares traded. My code (below) however pulls everything from the table:
import requests
import pandas as pd
import json
headers = {
"accept": "application/json, text/plain, */*",
"origin": "https://www.nasdaq.com",
"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.79 Safari/537.36",
}
pd.set_option("display.max_columns", None)
pd.set_option("display.max_colwidth", None)
url = "https://api.nasdaq.com/api/company/AAPL/insider-trades?limit=20&type=ALL&sortColumn=lastDate&sortOrder=DESC"
r = requests.get(url, headers=headers)
df = pd.json_normalize(
r.json()["data"]["transactionTable"]["table"]["rows"]
)
df.to_json("AAPL22_institutional_table_MRKTVAL.json", indent=4)
which gives the output:
{
"insider":{
"0":"KONDO CHRIS",
"1":"MAESTRI LUCA",
"2":"O'BRIEN DEIRDRE",
"3":"KONDO CHRIS",
"4":"KONDO CHRIS",
"5":"O'BRIEN DEIRDRE",
"6":"O'BRIEN DEIRDRE",
"7":"ADAMS KATHERINE L.",
"8":"ADAMS KATHERINE L.",
"9":"O'BRIEN DEIRDRE",
"10":"WILLIAMS JEFFREY E",
"11":"WILLIAMS JEFFREY E",
"12":"MAESTRI LUCA",
"13":"MAESTRI LUCA",
"14":"ADAMS KATHERINE L.",
"15":"ADAMS KATHERINE L.",
"16":"O'BRIEN DEIRDRE",
"17":"O'BRIEN DEIRDRE",
"18":"MAESTRI LUCA",
"19":"O'BRIEN DEIRDRE"
},
"relation":{
"0":"Officer",
"1":"Officer",
"2":"Officer",
"3":"Officer",
"4":"Officer",
"5":"Officer",
"6":"Officer",
"7":"Officer",
"8":"Officer",
"9":"Officer",
"10":"Officer",
"11":"Officer",
"12":"Officer",
"13":"Officer",
"14":"Officer",
"15":"Officer",
"16":"Officer",
"17":"Officer",
"18":"Officer",
"19":"Officer"
},
"lastDate":{
"0":"11\/22\/2022",
"1":"10\/28\/2022",
"2":"10\/17\/2022",
"3":"10\/15\/2022",
"4":"10\/15\/2022",
"5":"10\/15\/2022",
"6":"10\/15\/2022",
"7":"10\/03\/2022",
"8":"10\/03\/2022",
"9":"10\/03\/2022",
"10":"10\/01\/2022",
"11":"10\/01\/2022",
"12":"10\/01\/2022",
"13":"10\/01\/2022",
"14":"10\/01\/2022",
"15":"10\/01\/2022",
"16":"10\/01\/2022",
"17":"10\/01\/2022",
"18":"08\/17\/2022",
"19":"08\/08\/2022"
},
"transactionType":{
"0":"Sell",
"1":"Automatic Sell",
"2":"Automatic Sell",
"3":"Disposition (Non Open Market)",
"4":"Option Execute",
"5":"Disposition (Non Open Market)",
"6":"Option Execute",
"7":"Automatic Sell",
"8":"Sell",
"9":"Automatic Sell",
"10":"Disposition (Non Open Market)",
"11":"Option Execute",
"12":"Disposition (Non Open Market)",
"13":"Option Execute",
"14":"Disposition (Non Open Market)",
"15":"Option Execute",
"16":"Disposition (Non Open Market)",
"17":"Option Execute",
"18":"Automatic Sell",
"19":"Automatic Sell"
},
"ownType":{
"0":"Direct",
"1":"Direct",
"2":"Direct",
"3":"Direct",
"4":"Direct",
"5":"Direct",
"6":"Direct",
"7":"Direct",
"8":"Direct",
"9":"Direct",
"10":"Direct",
"11":"Direct",
"12":"Direct",
"13":"Direct",
"14":"Direct",
"15":"Direct",
"16":"Direct",
"17":"Direct",
"18":"Direct",
"19":"Direct"
},
"sharesTraded":{
"0":"20,200",
"1":"176,299",
"2":"8,053",
"3":"6,399",
"4":"13,136",
"5":"8,559",
"6":"16,612",
"7":"167,889",
"8":"13,250",
"9":"176,299",
"10":"177,870",
"11":"365,600",
"12":"189,301",
"13":"365,600",
"14":"184,461",
"15":"365,600",
"16":"189,301",
"17":"365,600",
"18":"96,735",
"19":"15,366"
},
"lastPrice":{
"0":"$148.72",
"1":"$154.70",
"2":"$142.45",
"3":"$138.38",
"4":"",
"5":"$138.38",
"6":"",
"7":"$138.44",
"8":"$142.93",
"9":"$141.09",
"10":"$138.20",
"11":"",
"12":"$138.20",
"13":"",
"14":"$138.20",
"15":"",
"16":"$138.20",
"17":"",
"18":"$174.66",
"19":"$164.86"
},
"sharesHeld":{
"0":"31,505",
"1":"110,673",
"2":"136,290",
"3":"51,705",
"4":"58,104",
"5":"144,343",
"6":"152,902",
"7":"440,584",
"8":"427,334",
"9":"136,290",
"10":"677,392",
"11":"855,262",
"12":"286,972",
"13":"476,273",
"14":"608,473",
"15":"792,934",
"16":"312,589",
"17":"501,890",
"18":"110,673",
"19":"136,290"
},
"url":{
"0":"\/market-activity\/insiders\/kondo-chris-956353",
"1":"\/market-activity\/insiders\/maestri-luca-848571",
"2":"\/market-activity\/insiders\/obrien-deirdre-1076854",
"3":"\/market-activity\/insiders\/kondo-chris-956353",
"4":"\/market-activity\/insiders\/kondo-chris-956353",
"5":"\/market-activity\/insiders\/obrien-deirdre-1076854",
"6":"\/market-activity\/insiders\/obrien-deirdre-1076854",
"7":"\/market-activity\/insiders\/adams-katherine-l-803988",
"8":"\/market-activity\/insiders\/adams-katherine-l-803988",
"9":"\/market-activity\/insiders\/obrien-deirdre-1076854",
"10":"\/market-activity\/insiders\/williams-jeffrey-e-833286",
"11":"\/market-activity\/insiders\/williams-jeffrey-e-833286",
"12":"\/market-activity\/insiders\/maestri-luca-848571",
"13":"\/market-activity\/insiders\/maestri-luca-848571",
"14":"\/market-activity\/insiders\/adams-katherine-l-803988",
"15":"\/market-activity\/insiders\/adams-katherine-l-803988",
"16":"\/market-activity\/insiders\/obrien-deirdre-1076854",
"17":"\/market-activity\/insiders\/obrien-deirdre-1076854",
"18":"\/market-activity\/insiders\/maestri-luca-848571",
"19":"\/market-activity\/insiders\/obrien-deirdre-1076854"
}
}
What do I need to include in my code so it only pulls a select few sets of data? (date, transaction type and shares traded)
| [
"All you have to do is:\nimport requests\nimport json \nimport pandas as pd\n\nurl = 'https://api.nasdaq.com/api/company/AAPL/insider-trades?limit=25&offset=0&type=ALL&sortColumn=lastDate&sortOrder=DESC'\n\nheaders = {\n 'accept': 'application/json, text/plain, */*',\n 'accept-encoding': 'gzip, deflate, br',\n 'accept-language': 'en-US,en;q=0.9',\n 'origin': 'https://www.nasdaq.com',\n 'referer': 'https://www.nasdaq.com',\n 'sec-fetch-mode': 'cors',\n'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.79 Safari/537.36'\n}\n\nr = requests.get(url, headers=headers)\n\ndf = pd.json_normalize(r.json()['data']['transactionTable']['table']['rows'])[['lastDate', 'transactionType', 'sharesTraded']]\ndf.to_json(\"AAPL22_institutional_table_MRKTVAL.json\", indent=4)\n\n"
] | [
1
] | [] | [] | [
"api",
"json",
"python"
] | stackoverflow_0074614995_api_json_python.txt |
Q:
How do I fit my X-Axis labels on my plot
I can't seem to find a way to fit my x labels in the right way on my plot.
Can someone help?
Here is the code; the output is in the picture.
Code:
sns.lineplot(x="ds", y="y", data=df_new, hue="Type", marker="o")
A:
You should be able to rotate your xlabels, which makes your plot more readable. Try:
ax = sns.lineplot(x="ds", y="y", data=df_new, hue="Type", marker="o")
ax.tick_params(axis='x', labelrotation=45)
And then plotting your data.
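If the rotated labels still run off the bottom of the figure, calling Matplotlib's tight_layout before showing the plot usually makes them fit. A small sketch, reusing the sns and df_new names from the question and assuming matplotlib.pyplot is available as plt:
import matplotlib.pyplot as plt

ax = sns.lineplot(x="ds", y="y", data=df_new, hue="Type", marker="o")
ax.tick_params(axis='x', labelrotation=45)
plt.tight_layout()  # shrink the margins so the rotated tick labels fit inside the figure
plt.show()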
| How do I fit my X - Axis labels on my plot | I cant seem to find a way to fit my x labels in the right way on my plot.
Can someone help ?
here is the code and the output is in the picture
Code:
sns.lineplot(x="ds", y="y", data=df_new, hue="Type", marker="o")
| [
"You should be able to rotate your xlabels, which makes your plot more readable. Try:\nax = sns.lineplot(x=\"ds\", y=\"y\", data=df_new, hue=\"Type\", marker=\"o\")\nax.tick_params(axis='x', labelrotation=45)\n\nAnd then plotting your data.\n"
] | [
0
] | [] | [] | [
"forecast",
"pandas",
"plot",
"python"
] | stackoverflow_0074615821_forecast_pandas_plot_python.txt |
Q:
Value count of columns in a pandas DataFrame where the string is 'nan'
Let's say I have the following pd.DataFrame
>>> df = pd.DataFrame({
'col_1': ['Elon', 'Jeff', 'Warren', 'Mark'],
'col_2': ['nan', 'Bezos', 'Buffet', 'nan'],
'col_3': ['nan', 'Amazon', 'Berkshire', 'Meta'],
})
which gets me
col_1 col_2 col_3
0 Elon nan nan
1 Jeff Bezos Amazon
2 Warren Buffet Berkshire
3 Mark nan Meta
All column types are strings. I would like a way to obtain the number of rows per column where the cell value is 'nan'.
When I simply run the following, I always get zeros as the missing count since it doesn't check for strings which contain 'nan'.
>>> df.isna().sum()
col_1 0
col_2 0
col_3 0
dtype: int64
However, what I want is to get
col_1 0
col_2 2
col_3 1
How can I do that?
A:
You have nan as a string, so you can do:
df.eq("nan").sum()
output :
col_1 0
col_2 2
col_3 1
dtype: int64
A:
It took me a while to see that you changed your initial code for the dataset. However, if you would like to extract all of the rows where you have the 'nan' string, I would use a mask.
mask = np.column_stack([df[col].str.contains("nan", na = False) for col in df])
df_new = df.loc[mask.any(axis = 1)]
This creates a new data frame that you can experiment with.
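Another option, if you want the normal missing-value tooling to work, is to first convert the 'nan' strings into real NaN values; a small sketch reusing the df from the question:
import numpy as np

df = df.replace('nan', np.nan)  # turn the literal 'nan' strings into real missing values
df.isna().sum()                 # now reports col_1 0, col_2 2, col_3 1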
| Value count of columns in a pandas DataFrame where where string is 'nan' | Let's say I have the following pd.DataFrame
>>> df = pd.DataFrame({
'col_1': ['Elon', 'Jeff', 'Warren', 'Mark'],
'col_2': ['nan', 'Bezos', 'Buffet', 'nan'],
'col_3': ['nan', 'Amazon', 'Berkshire', 'Meta'],
})
which gets me
col_1 col_2 col_3
0 Elon nan nan
1 Jeff Bezos Amazon
2 Warren Buffet Berkshire
3 Mark nan Meta
All column types are strings. I would like a way to obtain the number of rows per column where the cell value is 'nan'.
Where I simply run the following I get always zeros as missing count since it doesnt check for string which contain nan.
>>> df.isna().sum()
col_1 0
col_2 0
col_3 0
dtype: int64
However, what I want is to get
col_1 0
col_2 2
col_3 1
How can I do that?
| [
"you have nan as string , you can do :\ndf.eq(\"nan\").sum()\n\noutput :\ncol_1 0\ncol_2 2\ncol_3 1\ndtype: int64\n\n",
"It took me a while to see that you changed your initial code for the dataset. However, if you would like to extract all of the rows where you have the 'nan' string, I would use a mask.\nmask = np.column_stack([df[col].str.contains(\"nan\", na = False) for col in df])\ndf_new = df.loc[mask.any(axis = 1)]\n\nThis creates a new data frame that you can experiment with.\n"
] | [
2,
0
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074615797_dataframe_pandas_python.txt |
Q:
TensorRT - TensorFlow deserialization fails with Serialization Error in verifyHeader
I'm running the nvcr.io/nvidia/tensorflow:19.12-tf2-py3 docker image with the following runtime information:
Tensorflow 2.0.0 (tf.__version__)
Python 3.6 (!python --version)
TensorRT 6.0.1 (!dpkg -l | grep nvinfer)
cuda 10.2
I have built a model in TensorFlow 2.0 and converted+saved it to a dir:
1/
├── assets/
| └── trt-serialized-engine.TRTEngineOp_0
├── variables/
| ├── variables.data-00000-of-00002
| ├── variables.data-00001-of-00002
| └── variables.index
└── saved_model.pb
Now when I try deserializing the cuda engine with the TensorRT python API:
import tensorrt as trt
TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)
serialized_engine = './tmp/unet-FP32/1/assets/trt-serialized-engine.TRTEngineOp_0'
# serialized_engine = './tmp/unet-FP32/1/saved_model.pb'
trt_runtime = trt.Runtime(TRT_LOGGER)
with open(serialized_engine, 'rb') as f:
engine = trt_runtime.deserialize_cuda_engine(f.read())
I receive the following error messages:
[TensorRT] ERROR: ../rtSafe/coreReadArchive.cpp (31) - Serialization Error in verifyHeader: 0 (Magic tag does not match)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
I do the saving and loading on the exact same machine, inside the exact same docker container. Am I wrong to assume that 'trt-serialized-engine.TRTEngineOp_0' contains the actual serialized model?
I have also tried doing it with the uff parser, but the uff shipped in the NVidia container is incompatible with tensorflow 2.0.
Any ideas how to deserialize my trt engine?
A:
If the engine was created and run on different versions, this may happen. TensorRT engines are not compatible across different TensorRT versions. Go to this reference for more info.
A:
For future readers, if you get this error while building and running on the same machine or even in the same container, your tensorrt python package might have a different version than your system package; try the following commands to check:
In python:
import tensorrt as trt
trt.__version__
In terminal:
dpkg -l | grep TensorRT
| TensorRT - TensorFlow deserialization fails with Serialization Error in verifyHeader | I'm running the nvcr.io/nvidia/tensorflow:19.12-tf2-py3 docker image with the following runtime information:
Tensorflow 2.0.0 (tf.__version__)
Python 3.6 (!python --version)
TensorRT 6.0.1 (!dpkg -l | grep nvinfer)
cuda 10.2
I have built a model in TensorFlow 2.0 and converted+saved it to a dir:
1/
├── assets/
| └── trt-serialized-engine.TRTEngineOp_0
├── variables/
| ├── variables.data-00000-of-00002
| ├── variables.data-00001-of-00002
| └── variables.index
└── saved_model.pb
Now when I try deserializing the cuda engine with the TensorRT python API:
import tensorrt as trt
TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)
serialized_engine = './tmp/unet-FP32/1/assets/trt-serialized-engine.TRTEngineOp_0'
# serialized_engine = './tmp/unet-FP32/1/saved_model.pb'
trt_runtime = trt.Runtime(TRT_LOGGER)
with open(serialized_engine, 'rb') as f:
engine = trt_runtime.deserialize_cuda_engine(f.read())
I receive the following error messages:
[TensorRT] ERROR: ../rtSafe/coreReadArchive.cpp (31) - Serialization Error in verifyHeader: 0 (Magic tag does not match)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
I do the saving and loading on the exact same machine, inside the exact same docker container. Am I wrong to assume that 'trt-serialized-engine.TRTEngineOp_0' contains the actual serialized model?
I have also tried doing it with the uff-parserm, but the uff shipped in the NVidia container is incompatible with tensorflow 2.0.
Any ideas how to deserialize my trt engine?
| [
"If the engine was created and ran on different versions, this may happen. TensorRT engines are not compatible across different TensorRT versions. Go to this reference for more info.\n",
"For future readers, if you get this error building and running on the same machine or even the same container, your tensorrt python package might have a different version than your system package try the following commands to check\nIn python:\nimport tensorrt as trt\ntrt.__version__\n\nIn terminal:\ndpkg -l | grep TensorRT\n\n"
] | [
2,
0
] | [] | [] | [
"nvidia_docker",
"python",
"tensorflow",
"tensorrt"
] | stackoverflow_0059934105_nvidia_docker_python_tensorflow_tensorrt.txt |
Q:
I am trying to replace the values in a dictionary with updated ones and I am not getting the exact result
So I updated the values of a dictionary into percentages by multiplying by 100. Now I want to replace the initial decimals with the updated results, but instead, I am getting each value replaced by the whole set of new values.
job_role_overtime_att_rate = {'Healthcare Representative Overtime Rate' : 2/37, ' Human Resources Overtime Rate': 5/13,
'Laboratory Technician Total' : 31/62, 'Manager Total': 4/27, 'Manufacturing Director Total': 4/39,
'Research Director Total' : 1/23, 'Research Scientist Total' : 33/97,
'Sales Executive Total' : 31/94, 'Sales Representative Total' : 16/24}
job_role_overtime_att_rate()
{'Healthcare Representative Overtime Rate': 0.05405405405405406,
' Human Resources Overtime Rate': 0.38461538461538464,
'Laboratory Technician Total': 0.5,
'Manager Total': 0.14814814814814814,
'Manufacturing Director Total': 0.10256410256410256,
'Research Director Total': 0.043478260869565216,
'Research Scientist Total': 0.3402061855670103,
'Sales Executive Total': 0.32978723404255317,
'Sales Representative Total': 0.6666666666666666}
this is the result of the above code.
for i in job_role_overtime_att_rate.values():
a = i * 100
print('{0:.2f}'.format(a))
this multiplies the initial result by 100. now
5.41
38.46
50.00
14.81
10.26
4.35
34.02
32.98
66.67
here is the result.
for i in job_role_overtime_att_rate.values():
a = i * 100
print('{0:.2f}'.format(a))
for values in job_role_overtime_att_rate.values():
values = a
print(values)
this is to replace the initial values with the new ones. At least that's what I thought it would do.
5.41
5.405405405405405
5.405405405405405
5.405405405405405
5.405405405405405
5.405405405405405
5.405405405405405
5.405405405405405
5.405405405405405
5.405405405405405
38.46
38.46153846153847
38.46153846153847
38.46153846153847
38.46153846153847
38.46153846153847
38.46153846153847
38.46153846153847
38.46153846153847
38.46153846153847
50.00
50.0
50.0
50.0
50.0
50.0
50.0
50.0
50.0
50.0
14.81
14.814814814814813
14.814814814814813
14.814814814814813
14.814814814814813
14.814814814814813
14.814814814814813
14.814814814814813
14.814814814814813
14.814814814814813
10.26
10.256410256410255
10.256410256410255
10.256410256410255
10.256410256410255
10.256410256410255
10.256410256410255
10.256410256410255
10.256410256410255
10.256410256410255
4.35
4.3478260869565215
4.3478260869565215
4.3478260869565215
4.3478260869565215
4.3478260869565215
4.3478260869565215
4.3478260869565215
4.3478260869565215
4.3478260869565215
34.02
34.02061855670103
34.02061855670103
34.02061855670103
34.02061855670103
34.02061855670103
34.02061855670103
34.02061855670103
34.02061855670103
34.02061855670103
32.98
32.97872340425532
32.97872340425532
32.97872340425532
32.97872340425532
32.97872340425532
32.97872340425532
32.97872340425532
32.97872340425532
32.97872340425532
66.67
66.66666666666666
66.66666666666666
66.66666666666666
66.66666666666666
66.66666666666666
66.66666666666666
66.66666666666666
66.66666666666666
66.66666666666666
here is what it's returning. Kindly help. Thanks
A:
You can loop through the dict.items() function to loop through the key and the value.
job_role_overtime_att_rate = {
'Healthcare Representative Overtime Rate': 0.05405405405405406,
'Human Resources Overtime Rate': 0.38461538461538464,
'Laboratory Technician Total': 0.5,
'Manager Total': 0.14814814814814814,
'Manufacturing Director Total': 0.10256410256410256,
'Research Director Total': 0.043478260869565216,
'Research Scientist Total': 0.3402061855670103,
'Sales Executive Total': 0.32978723404255317,
'Sales Representative Total': 0.6666666666666666
}
for name, value in job_role_overtime_att_rate.items():
a = round(value*100, 2)
job_role_overtime_att_rate[name] = a
print(job_role_overtime_att_rate)
A:
I'm not sure what your point is, but you loop over the values and in the inner loop you loop over the values again.
So for each value you loop over all values, thus it's expected that you will get n^2 prints for a dictionary with n values.
If I guess right this code does what you need:
for key, value in job_role_overtime_att_rate.items():
job_role_overtime_att_rate[key]= value*100
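Equivalently, a dict comprehension rebuilds the dictionary in one step, rounding to two decimals like the print statements above:
job_role_overtime_att_rate = {key: round(value * 100, 2)
                              for key, value in job_role_overtime_att_rate.items()}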
| I am trying to replace the values in a dictionary with an updated one and i am not getting the exact result | So i updated the values of a dictionary into percentage by multiplying by 100. Now i want to replace the initial decimal with the updated results, but instead, i am getting each value replaced by the whole new values.
job_role_overtime_att_rate = {'Healthcare Representative Overtime Rate' : 2/37, ' Human Resources Overtime Rate': 5/13,
'Laboratory Technician Total' : 31/62, 'Manager Total': 4/27, 'Manufacturing Director Total': 4/39,
'Research Director Total' : 1/23, 'Research Scientist Total' : 33/97,
'Sales Executive Total' : 31/94, 'Sales Representative Total' : 16/24}
job_role_overtime_att_rate()
{'Healthcare Representative Overtime Rate': 0.05405405405405406,
' Human Resources Overtime Rate': 0.38461538461538464,
'Laboratory Technician Total': 0.5,
'Manager Total': 0.14814814814814814,
'Manufacturing Director Total': 0.10256410256410256,
'Research Director Total': 0.043478260869565216,
'Research Scientist Total': 0.3402061855670103,
'Sales Executive Total': 0.32978723404255317,
'Sales Representative Total': 0.6666666666666666}
this the result of the above code.
for i in job_role_overtime_att_rate.values():
a = i * 100
print('{0:.2f}'.format(a))
this multiplies the initial result by 100. now
5.41
38.46
50.00
14.81
10.26
4.35
34.02
32.98
66.67
here is the result.
for i in job_role_overtime_att_rate.values():
a = i * 100
print('{0:.2f}'.format(a))
for values in job_role_overtime_att_rate.values():
values = a
print(values)
this is to replace the intial values with the new one. at least that's what i thought it will do.
5.41
5.405405405405405
5.405405405405405
5.405405405405405
5.405405405405405
5.405405405405405
5.405405405405405
5.405405405405405
5.405405405405405
5.405405405405405
38.46
38.46153846153847
38.46153846153847
38.46153846153847
38.46153846153847
38.46153846153847
38.46153846153847
38.46153846153847
38.46153846153847
38.46153846153847
50.00
50.0
50.0
50.0
50.0
50.0
50.0
50.0
50.0
50.0
14.81
14.814814814814813
14.814814814814813
14.814814814814813
14.814814814814813
14.814814814814813
14.814814814814813
14.814814814814813
14.814814814814813
14.814814814814813
10.26
10.256410256410255
10.256410256410255
10.256410256410255
10.256410256410255
10.256410256410255
10.256410256410255
10.256410256410255
10.256410256410255
10.256410256410255
4.35
4.3478260869565215
4.3478260869565215
4.3478260869565215
4.3478260869565215
4.3478260869565215
4.3478260869565215
4.3478260869565215
4.3478260869565215
4.3478260869565215
34.02
34.02061855670103
34.02061855670103
34.02061855670103
34.02061855670103
34.02061855670103
34.02061855670103
34.02061855670103
34.02061855670103
34.02061855670103
32.98
32.97872340425532
32.97872340425532
32.97872340425532
32.97872340425532
32.97872340425532
32.97872340425532
32.97872340425532
32.97872340425532
32.97872340425532
66.67
66.66666666666666
66.66666666666666
66.66666666666666
66.66666666666666
66.66666666666666
66.66666666666666
66.66666666666666
66.66666666666666
66.66666666666666
here is what it's returning. Kindly help. Thanks
| [
"You can loop through the dict.items() function to loop through the key and the value.\njob_role_overtime_att_rate = {\n 'Healthcare Representative Overtime Rate': 0.05405405405405406,\n 'Human Resources Overtime Rate': 0.38461538461538464,\n 'Laboratory Technician Total': 0.5,\n 'Manager Total': 0.14814814814814814,\n 'Manufacturing Director Total': 0.10256410256410256,\n 'Research Director Total': 0.043478260869565216,\n 'Research Scientist Total': 0.3402061855670103,\n 'Sales Executive Total': 0.32978723404255317,\n 'Sales Representative Total': 0.6666666666666666\n}\n\nfor name, value in job_role_overtime_att_rate.items():\n a = round(value*100, 2)\n job_role_overtime_att_rate[name] = a\n\nprint(job_role_overtime_att_rate)\n\n",
"I'm not sure what your point is, but you loop over values and in the inner loop again over values.\nSo for each value you loop over all values, thus it's expected that you will get n^2 prints for dictionary with n values.\nIf I guess right this code does what you need:\nfor key, value in job_role_overtime_att_rate.items():\n job_role_overtime_att_rate[key]= value*100\n\n"
] | [
2,
0
] | [] | [] | [
"dictionary",
"nested_for_loop",
"python"
] | stackoverflow_0074615696_dictionary_nested_for_loop_python.txt |
Q:
Trouble reading some pdfs with PyPDF2
I'm having trouble reading a standard PDF with PyPDF2. The PdfReader class will read the document and give me the correct metadata properties for my document, but examining any other content gives me the filler text that a browser would if I do not have the adobe extension installed:
The document you are trying to load requires Adobe Reader 8 or higher. You may not have the Adobe Reader installed or your viewing environment may not be properly configured to use Adobe Reader. For information on how to install Adobe Reader and configure your viewing environment please see http://www.adobe.com/go/pdf_forms_configure.
I am able to successfully read the metadata for this particular pdf, as well as others published by the same entity and tool.
Some sample code to show the issue:
from PyPDF2 import PdfReader
from pathlib import Path, WindowsPath
award_test = PdfReader(WindowsPath("DA Form 638.pdf"))
print(award_test.metadata)
print(award_test.get_form_text_fields())
print(award_test.pages[0].extract_text())
Yields:
{'/CreationDate': "D:20210517070206-04'00'", '/Creator': 'Designer 6.3', '/Distrubution': 'Unrestricted', '/Doc_Num': '638', '/Form_Month': '04', '/Form_Version': '1.03', '/Form_Year': '2021', '/ModDate': "D:20210517070206-04'00'", '/OMB_Expire': '', '/OMB_Number': '', '/PA_Code': 'No', '/PIN': '083079', '/Pre_Dir': 'AR 600-8-22', '/Prefix': 'DA', '/Producer': 'Designer 6.3', '/Product_Type': 'Form', '/Proponent': 'DCS, G-1', '/Pub_Day': '05', '/Pub_ID': '8-22', '/Pub_Month': '03', '/Pub_Series': '600', '/Pub_Type': 'AR', '/Pub_Year': '2019', '/Scope': 'Army', '/Security_Class': 'UC', '/Signature': 'Yes', '/Subject': 'DA FORM 638, APR 2021', '/Suffix': '', '/Title': 'RECOMMENDATION FOR AWARD', '/Unicode': 'EMO'}
{}
The document you are trying to load requires Adobe Reader 8 or higher. You may not have the Adobe Reader installed or your viewing environment may not be properly configured to use Adobe Reader. For information on how to install Adobe Reader and configure your viewing environment please see http://www.adobe.com/go/pdf_forms_configure.
My question is: since I am able to read other forms published by the same entity and same tool per the metadata, is there some way to rip into this one to extract the information? Link to PDF: https://armypubs.army.mil/pub/eforms/DR_a/ARN32485-DA_FORM_638-003-EFILE-4.pdf (this is an unrestricted, unclassified document - I'm simply trying to save time, as I intend to read/write a lot of these en masse)
I did review similar question here: PDFMiner can't read pdf forms that require Adobe Acrobat but it seemed to be a false lead as I am using PyPDF, and I can open other fillable pdfs using this tool
A:
Your document is a dynamic XFA form. These dynamic forms are defined entirely in XML and the PDF file serves as a container. The PDF file has a single page with the message you extracted; it is there for PDF processors that do not support dynamic XFA forms.
Open the file with Adobe Reader and you will see a full PDF file with 3 pages. Open the file with SumatraPDF and you will see an empty PDF file just with the warning you got.
PyPDF2 may be able to reach the XFA data through its low-level object access. If not, you will need a low-level PDF tool to extract the XML streams.
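As a rough, unverified sketch of that low-level route: the /AcroForm -> /XFA lookup is standard PDF structure, but whether this particular file exposes it this way is an assumption, and /XFA may also be a single stream rather than an array.
import xml.etree.ElementTree as ET
from PyPDF2 import PdfReader

reader = PdfReader("DA Form 638.pdf")
root = reader.trailer["/Root"]
xfa = None
if "/AcroForm" in root and "/XFA" in root["/AcroForm"]:
    xfa = root["/AcroForm"]["/XFA"]

if isinstance(xfa, (list, tuple)):
    # The array alternates packet names and stream objects.
    packets = {
        str(name): part.get_object().get_data()
        for name, part in zip(xfa[0::2], xfa[1::2])
    }
    # Filled-in field values normally live in the "datasets" packet.
    datasets_xml = packets.get("datasets", b"").decode("utf-8", errors="replace")
    print(datasets_xml)

From there the datasets XML can be parsed with xml.etree.ElementTree or a similar parser.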
| Trouble reading some pdfs with PyPDF2 | I'm having trouble reading a standard PDF with PyPDF2. The PdfReader class will read the document and give me the correct metadata properties for my document, but examining any other content gives me the filler text that a browser would if I do not have the adobe extension installed:
The document you are trying to load requires Adobe Reader 8 or higher. You may not have the Adobe Reader installed or your viewing environment may not be properly configured to use Adobe Reader. For information on how to install Adobe Reader and configure your viewing environment please see http://www.adobe.com/go/pdf_forms_configure.
I am able to successfully read the metadata for this particular pdf, as well as others published by the same entity and tool.
Some sample code to show the issue:
from PyPDF2 import PdfReader
from pathlib import Path, WindowsPath
award_test = PdfReader(WindowsPath("DA Form 638.pdf"))
print(award_test.metadata)
print(award_test.get_form_text_fields())
print(award_test.pages[0].extract_text())
Yields:
{'/CreationDate': "D:20210517070206-04'00'", '/Creator': 'Designer 6.3', '/Distrubution': 'Unrestricted', '/Doc_Num': '638', '/Form_Month': '04', '/Form_Version': '1.03', '/Form_Year': '2021', '/ModDate': "D:20210517070206-04'00'", '/OMB_Expire': '', '/OMB_Number': '', '/PA_Code': 'No', '/PIN': '083079', '/Pre_Dir': 'AR 600-8-22', '/Prefix': 'DA', '/Producer': 'Designer 6.3', '/Product_Type': 'Form', '/Proponent': 'DCS, G-1', '/Pub_Day': '05', '/Pub_ID': '8-22', '/Pub_Month': '03', '/Pub_Series': '600', '/Pub_Type': 'AR', '/Pub_Year': '2019', '/Scope': 'Army', '/Security_Class': 'UC', '/Signature': 'Yes', '/Subject': 'DA FORM 638, APR 2021', '/Suffix': '', '/Title': 'RECOMMENDATION FOR AWARD', '/Unicode': 'EMO'}
{}
The document you are trying to load requires Adobe Reader 8 or higher. You may not have the Adobe Reader installed or your viewing environment may not be properly configured to use Adobe Reader. For information on how to install Adobe Reader and configure your viewing environment please see http://www.adobe.com/go/pdf_forms_configure.
My question is: I am able to read other forms published by the same entity and same tool per the metadata, is there some way to rip into this one to extract the information? Link to PDF: https://armypubs.army.mil/pub/eforms/DR_a/ARN32485-DA_FORM_638-003-EFILE-4.pdf (this is an unrestricted, unclassified document - I'm simply trying to save time intending to read/write a lot of these en masse)
I did review similar question here: PDFMiner can't read pdf forms that require Adobe Acrobat but it seemed to be a false lead as I am using PyPDF, and I can open other fillable pdfs using this tool
| [
"Your document is a dynamic XFA form. These dynamic forms are defined entirely in XML and the PDF file serves as a container. The PDF file has a single page with the message you extracted, this is for the PDF processors that do not support dynamic XFA forms.\nOpen the file with Adobe Reader and you will see a full PDF file with 3 pages. Open the file with SumatraPDF and you will see an empty PDF file just with the warning you got.\nMaybe PyPDF2 can work with XFA forms. If not, you will need a low level PDF tool to extract the XML streams.\n"
] | [
0
] | [] | [] | [
"adobe",
"pdf",
"pypdf2",
"python"
] | stackoverflow_0074613023_adobe_pdf_pypdf2_python.txt |
Q:
Python comprehension with multiple prints
I want to put this for loop into a comprehension. Is this even possible?
for i in range(1, 11):
result = len({tuple(walk) for (walk, distance) in dic[i] if distance == 0})
print(f'There are {result} different unique walks with length {i}')
I tried stuff like
print({tuple(walk) for i in range(1, 11) for (walk, distance) in dic[i] if distance == 0})
but this prints all the walks for every i together, whereas I want 10 separate print statements.
A:
You were pretty close actually:
[print(f'There are {len({tuple(walk) for (walk, distance) in dic[i] if distance == 0})} different unique walks with length {i}') for i in range(1,11)]
But it's a long and ugly one-liner; a regular for loop reads much better.
A:
Technically yes, but I'd not recommend doing that. You can actually use a list comprehension for that but it would really defy their purpose.
You can use functions in comprehensions, but their return values are accumulated in the resulting structure. Since the print function doesn't return a value (it returns None), such a comprehension would accumulate a series of Nones, which is not very pythonic.
>>> res = [print(i) for i in range(1, 11)]
1
2
...
>>> res
[None, None, ...]
Much better is to collect all the lines you want to print and concatenate them together (for example using str.join or unpacking them into print):
>>> print('\n'.join(str(i) for i in range(1, 11)))
1
2
...
or
>>> res = [i for i in range(1, 11)]
>>> res
[1, 2, ...]
>>> print(*res, sep='\n')
1
2
...
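Applied to the original problem, the join approach might look like the sketch below (it assumes, as in the question, that dic[i] yields (walk, distance) pairs):
# Build the report lines first, then print them in one go.
counts = {i: len({tuple(walk) for (walk, distance) in dic[i] if distance == 0})
          for i in range(1, 11)}
print("\n".join(f"There are {n} different unique walks with length {i}"
                for i, n in counts.items()))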
| Python comprehension with multiple prints | I want to put this for loop into a comprehension. Is this even possible?
for i in range(1, 11):
result = len({tuple(walk) for (walk, distance) in dic[i] if distance == 0})
print(f'There are {result} different unique walks with length {i}')
I tried stuff like
print({tuple(walk) for i in range(1, 11) for (walk, distance) in dic[i] if distance == 0})
but this prints all walks for all i together, but i want 10 different print statements.
| [
"You were pretty close actually:\n[print(f'There are {len({tuple(walk) for (walk, distance) in dic[i] if distance == 0})} different unique walks with length {i}') for i in range(1,11)]\n\nBut it's a long and ugly oneliner, in a regular for it looks way better.\n",
"Technically yes, but I'd not recommend doing that. You can actually use a list comprehension for that but it would really defy their purpose.\nYou can use functions in comprehensions, but usually their return values is accumulated in the structure. Since the print function doesn't return a value (None), such a comprehension would accumulate a series of Nones, which is not very pythonic.\n>>> res = [print(i) for i in range(1, 11)]\n1\n2\n...\n>>> res\n[None, None, ...]\n\nMuch better is to collect all the lines you want to print and concatenate them together (for example using str.join or unpacking them into print):\n>>> print('\\n'.join(str(i) for i in range(1, 11)))\n1\n2\n...\n\nor\n>>> res = [i for i in range(1, 11)]\n>>> res\n[1, 2, ...]\n>>> print(*res, sep='\\n')\n1\n2\n...\n\n"
] | [
2,
1
] | [] | [] | [
"list_comprehension",
"printing",
"python",
"set_comprehension"
] | stackoverflow_0074615760_list_comprehension_printing_python_set_comprehension.txt |
Q:
Send VISCA commands to IPcamera using python-onvif-zeep or valkka and sendreceiveserialcommand service from DeviceIO
The problem
I have recently acquired an Active Silicon IP-camera and I have been trying to control it using python-onvif-zeep or valkka. The camera implements the ONVIF Profile S standard. I would like to send zoom in/out, focus and other basic VISCA commands to the camera using the SendReceiveSerialCommand service as described in the DeviceIO WSDL; however, I cannot seem to get it to work. My knowledge of ONVIF, SOAP, and zeep is quite limited, so please forgive me if it's something obvious!
Minimal reproducible example and needed changes
MRE
Here is the python code I have (I have replaced the original python-onvif-zeep code I had with valkka because it makes everything a bit neater):
#! /usr/bin/env python3
from valkka.onvif import OnVif, DeviceManagement, Media, DeviceIO, PTZ, getWSDLPath
from zeep import Client
import zeep.helpers
from zeep.wsse.username import UsernameToken
import time
try:
device_service = DeviceManagement(
ip="10.0.0.250",
port=8000,
user="",
password=""
)
except Exception as e:
print(e)
cap = device_service.ws_client.GetCapabilities()
print("CAPABILITIES: \n{}".format(cap))
srv = device_service.ws_client.GetServices(True)
print("SERVICES: \n{}".format(srv))
try:
deviceIO_service = DeviceIO(
ip="10.0.0.250",
port=8000,
user="",
password=""
)
except Exception as e:
print(e)
# element = deviceIO_service.zeep_client.get_element('ns10:SendReceiveSerialCommand')
# print(element)
ports = deviceIO_service.ws_client.GetSerialPorts()
# print (ports)
serial_token = ports[0].token
# print(serial_token)
zoomin = bytes.fromhex('81 01 04 07 02 FF')
#zoomout = bytes.fromhex('81 01 04 07 03 FF')
#zoomin = b'\x81\x01\x04\x07\x02\xff'
data = {
'SerialData': zoomin
}
ack = deviceIO_service.ws_client.SendReceiveSerialCommand(serial_token, data)
print(ack)
time.sleep(3)
The first issue I encountered was that the SendReceiveSerialCommand service was not working and the following error occurred:
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
Needed changes
The reason behind that is the deviceIO.wsdl file of the library. It does not contain the "Token" element, unlike the official DeviceIO WSDL, so I had to manually add it under the SendReceiveSerialCommand element:
<xs:element name="SendReceiveSerialCommand">
<xs:annotation>
<xs:documentation>Transmitting arbitrary data to the connected serial device and then receiving its response data.</xs:documentation>
</xs:annotation>
<xs:complexType>
<xs:sequence>
<xs:element name="Token" type="tt:ReferenceToken" minOccurs="0">
<xs:annotation>
<xs:documentation>The physical serial port reference to be used when this request is invoked.</xs:documentation>
</xs:annotation>
</xs:element>
<xs:element name="SerialData" type="tmd:SerialData" minOccurs="0">
<xs:annotation>
<xs:documentation>The serial port data.</xs:documentation>
</xs:annotation>
</xs:element>
<xs:element name="TimeOut" type="xs:duration" minOccurs="0">
<xs:annotation>
<xs:documentation>Indicates that the command should be responded back within the specified period of time.</xs:documentation>
</xs:annotation>
</xs:element>
<xs:element name="DataLength" type="xs:integer" minOccurs="0">
<xs:annotation>
<xs:documentation>This element may be put in the case that data length returned from the connected serial device is already determined as some fixed bytes length. It indicates the length of received data which can be regarded as available.</xs:documentation>
</xs:annotation>
</xs:element>
<xs:element name="Delimiter" type="xs:string" minOccurs="0">
<xs:annotation>
<xs:documentation>This element may be put in the case that the delimiter codes returned from the connected serial device is already known. It indicates the termination data sequence of the responded data. In case the string has more than one character a device shall interpret the whole string as a single delimiter. Furthermore a device shall return the delimiter character(s) to the client.</xs:documentation>
</xs:annotation>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="SendReceiveSerialCommandResponse">
<xs:annotation>
<xs:documentation>Receiving the response data.</xs:documentation>
</xs:annotation>
<xs:complexType>
<xs:sequence>
<xs:element name="SerialData" type="tmd:SerialData" minOccurs="0"/>
</xs:sequence>
</xs:complexType>
</xs:element>
After this part was fixed the errors went away, but the camera does not zoom in or out (depending on which VISCA command is passed). Additionally, the acknowledgement comes back as "None".
Nothing changes if I fill the dictionary with all the fields from the SendReceiveSerialCommand service like so:
serial_data = {
'SerialData': zoomin,
'TimeOut': "PT0M0.1S",
'DataLength': "100",
'Delimiter': "",
}
The setup
I am running this python script on Ubuntu 20.04 with python 3.8. The setup is a network between the camera and the laptop with static IP addresses assigned.
Please note that the camera itself works: I can successfully send commands to it over the web interface, and the C# example it comes with (found under Downloads -> Software) runs fine on a Windows machine.
Thank you in advance for your time and effort to help me out!
A:
I managed to get your example to work using the following code. The data was not passed in the correct format to the function, so the SOAP message was incomplete.
Check the zeep documentation for details about passing SOAP datastructures.
deviceio_type_factory = deviceIO_service.zeep_client.type_factory("http://www.onvif.org/ver10/deviceIO/wsdl")
serial_data = deviceio_type_factory.SerialData(Binary=zoomin)
ack = deviceIO_service.ws_client.SendReceiveSerialCommand(Token= ports[0].token, SerialData=serial_data, TimeOut='PT0M6S', DataLength='100', Delimiter='')
visca_ack_comp = ack['Binary'].hex()
print(visca_ack_comp )
Here is what I get from the camera: 9041ff9051ff (ACK + COMP).
Note: you could use the WSDL file from ONVIF instead of creating your own, as that one is correct.
class MyDeviceIO(OnVif):
wsdl_file = "https://www.onvif.org/ver10/deviceio.wsdl"
namespace = "http://www.onvif.org/ver10/deviceIO/wsdl"
sub_xaddr = "DeviceIO"
port = "DeviceIOBinding"
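As a follow-up usage sketch (not verified against this camera), other VISCA commands can be sent the same way. This reuses the names from the snippet above; the byte sequence is the standard VISCA "zoom stop" command:
# Standard VISCA zoom stop: 81 01 04 07 00 FF
zoomstop = bytes.fromhex('81 01 04 07 00 FF')
serial_data = deviceio_type_factory.SerialData(Binary=zoomstop)
ack = deviceIO_service.ws_client.SendReceiveSerialCommand(
    Token=ports[0].token, SerialData=serial_data,
    TimeOut='PT0M6S', DataLength='100', Delimiter='')
print(ack['Binary'].hex())  # expect ACK (904yff) followed by COMP (905yff)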
| Send VISCA commands to IPcamera using python-onvif-zeep or valkka and sendreceiveserialcommand service from DeviceIO | The problem
I have recently acquired an Active Silicon IP-camera and I have been trying to control it using python-onvif-zeep or valkka. The camera implements the ONVIF Profile S standard. I would like to send zoom in / our, focus and other basic VISCA commands to the camera using the SendReceiveSerialData service as described in the DeviceIO wsdl, however I can not seem to get it to work. My knowledge in ONVIF, SOAP, and zeep is quite limited so please forgive me if its something too obvious!
Minimal reproducible example and needed changes
MRE
Here is the python code I have (I have replaced the original python-onvif-zeep code I had with valkka because it makes everything a bit neater):
#! /usr/bin/env python3
from valkka.onvif import OnVif, DeviceManagement, Media, DeviceIO, PTZ, getWSDLPath
from zeep import Client
import zeep.helpers
from zeep.wsse.username import UsernameToken
import time
try:
device_service = DeviceManagement(
ip="10.0.0.250",
port=8000,
user="",
password=""
)
except Exception as e:
print(e)
cap = device_service.ws_client.GetCapabilities()
print("CAPABILITIES: \n{}".format(cap))
srv = device_service.ws_client.GetServices(True)
print("SERVICES: \n{}".format(srv))
try:
deviceIO_service = DeviceIO(
ip="10.0.0.250",
port=8000,
user="",
password=""
)
except Exception as e:
print(e)
# element = deviceIO_service.zeep_client.get_element('ns10:SendReceiveSerialCommand')
# print(element)
ports = deviceIO_service.ws_client.GetSerialPorts()
# print (ports)
serial_token = ports[0].token
# print(serial_token)
zoomin = bytes.fromhex('81 01 04 07 02 FF')
#zoomout = bytes.fromhex('81 01 04 07 03 FF')
#zoomin = b'\x81\x01\x04\x07\x02\xff'
data = {
'SerialData': zoomin
}
ack = deviceIO_service.ws_client.SendReceiveSerialCommand(serial_token, data)
print(ack)
time.sleep(3)
The first issue I encountered was that the SendReceiveSerialCommand service was not working and the following error occurred:
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
Needed changes
The reason behind that is the deviceIO.wsdl file of the library. It does not contain the "Token" element, unlike DeviceIO wsdl,therefore I had to manually add it under the SendReceiveSerialCommand element:
<xs:element name="SendReceiveSerialCommand">
<xs:annotation>
<xs:documentation>Transmitting arbitrary data to the connected serial device and then receiving its response data.</xs:documentation>
</xs:annotation>
<xs:complexType>
<xs:sequence>
<xs:element name="Token" type="tt:ReferenceToken" minOccurs="0">
<xs:annotation>
<xs:documentation>The physical serial port reference to be used when this request is invoked.</xs:documentation>
</xs:annotation>
</xs:element>
<xs:element name="SerialData" type="tmd:SerialData" minOccurs="0">
<xs:annotation>
<xs:documentation>The serial port data.</xs:documentation>
</xs:annotation>
</xs:element>
<xs:element name="TimeOut" type="xs:duration" minOccurs="0">
<xs:annotation>
<xs:documentation>Indicates that the command should be responded back within the specified period of time.</xs:documentation>
</xs:annotation>
</xs:element>
<xs:element name="DataLength" type="xs:integer" minOccurs="0">
<xs:annotation>
<xs:documentation>This element may be put in the case that data length returned from the connected serial device is already determined as some fixed bytes length. It indicates the length of received data which can be regarded as available.</xs:documentation>
</xs:annotation>
</xs:element>
<xs:element name="Delimiter" type="xs:string" minOccurs="0">
<xs:annotation>
<xs:documentation>This element may be put in the case that the delimiter codes returned from the connected serial device is already known. It indicates the termination data sequence of the responded data. In case the string has more than one character a device shall interpret the whole string as a single delimiter. Furthermore a device shall return the delimiter character(s) to the client.</xs:documentation>
</xs:annotation>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="SendReceiveSerialCommandResponse">
<xs:annotation>
<xs:documentation>Receiving the response data.</xs:documentation>
</xs:annotation>
<xs:complexType>
<xs:sequence>
<xs:element name="SerialData" type="tmd:SerialData" minOccurs="0"/>
</xs:sequence>
</xs:complexType>
</xs:element>
After this part was fixed the errors go away, but the camera does not zoom in/out ( depending on which VISCA command is being passed). Additionally, the acknowledgement comes back as "None".
Nothing changes if I fill the dictionary with all the fields from the SendReceiveSerialCommand service like so:
serial_data = {
'SerialData': zoomin,
'TimeOut': "PT0M0.1S",
'DataLength': "100",
'Delimiter': "",
}
The setup
I am running this python script on Ubuntu 20.04 with python 3.8. The setup is a network between the camera and the laptop with static IP addresses assigned.
Please note that the camera is working as I can successfully send commands to the camera over the web interface and when running the C# example (can be found under downloads->software) it comes with, on a Windows machine.
Thank you in advance for your time and effort to help me out!
| [
"I managed to get your example to work using the following code. The data was not passed in the correct format to the function, so the SOAP message was incomplete.\nCheck the zeep documentation for details about passing SOAP datastructures.\ndeviceio_type_factory = deviceIO_service.zeep_client.type_factory(\"http://www.onvif.org/ver10/deviceIO/wsdl\")\nserial_data = deviceio_type_factory.SerialData(Binary=zoomin)\n\nack = deviceIO_service.ws_client.SendReceiveSerialCommand(Token= ports[0].token, SerialData=serial_data, TimeOut='PT0M6S', DataLength='100', Delimiter='')\n\nvisca_ack_comp = ack['Binary'].hex()\nprint(visca_ack_comp )\n\nHere is what I get from the camera: 9041ff9051ff (ACK + COMP).\nNote, you could use the WDSL file from ONVIF instead of creating your own, as that one is correct.\nclass MyDeviceIO(OnVif):\n wsdl_file = \"https://www.onvif.org/ver10/deviceio.wsdl\"\n namespace = \"http://www.onvif.org/ver10/deviceIO/wsdl\"\n sub_xaddr = \"DeviceIO\"\n port = \"DeviceIOBinding\"\n\n"
] | [
2
] | [] | [] | [
"ip_camera",
"onvif",
"python",
"python_onvif",
"zeep"
] | stackoverflow_0074504555_ip_camera_onvif_python_python_onvif_zeep.txt |
Q:
Subtracting times in a csv for a row by row basis in Python
I have a CSV that's a few thousand rows long. It contains data sent from various devices. They should transmit frequently (every 10 minutes); however, sometimes there is a lag. I'm trying to write a program that will highlight all instances where the delay between two readings is greater than 15 minutes.
I've written functional code, but with it I first have to manually edit the CSV to change the "eventTime" variable from time format to a fraction-of-a-day float value (e.g. 03:22:00 becomes 0.14027). Similarly, the 15-minute interval becomes 0.01042 (15/(60*24)).
import pandas as pd
df = pd.read_csv('file.csv')
df2 = pd.DataFrame()
deviceID = df["deviceId"].unique().tolist()
threshold = 0.01042
for id_no in range(0, len(deviceID)):
subset = df[df.deviceId == deviceID[id_no]]
for row in range(len(subset)-1):
difference = subset.iloc[row, 1] - subset.iloc[row+1, 1]
if difference > threshold:
df2 = df2.append(subset.iloc[row])
df2 = df2.append(subset.iloc[row+1])
df2.to_csv('file2.csv)
This works, and I can open the CSV in Excel and manually change the float values back to time format, but since I might be dealing with a few hundred CSV files, this becomes impractical.
I've tried this below
import pandas as pd
from datetime import datetime
df = pd.read_csv('file.csv')
df2 = pd.DataFrame()
deviceID = df["deviceId"].unique().tolist()
df['eventTime'].apply(lambda x: datetime.strptime(x, "%H:%M:%S"))
threshold = datetime.strptime("00:15:00", '%H:%M:%S')
for id_no in range(0, len(deviceID)):
subset = df[df.deviceId == deviceID[id_no]]
for row in range(len(subset)-1):
difference = datetime.strptime(subset.iloc[row, 1],'%H:%M:%S') - datetime.strptime(subset.iloc[row+1, 1], '%H:%M:%S')
if difference > threshold:
df2 = df2.append(subset.iloc[row])
df2 = df2.append(subset.iloc[row+1])
df2.to_csv('file2.csv')
but I get the following error:
if difference > threshold:
TypeError: '>' not supported between instances of 'datetime.timedelta' and 'datetime.datetime'
The data looks like this:
| eventTime| deviceId|
| -------- | -------- |
| 15:30:00 | 11234889|
| 15:45:00 | 11234889|
| 16:00:00 | 11234889|
and for different IDs
| eventTime| deviceId|
| -------- | -------- |
| 15:30:00 | 11234890|
| 15:45:00 | 11234890|
| 16:00:00 | 11234890|
A:
threshold is datetime and you compare it to timedelta object (difference). Did you mean:
from datetime import timedelta
...
threshold = timedelta(minutes=15)
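A minimal self-contained sketch of the corrected comparison (the sample times below are made up, not taken from the CSV):
from datetime import datetime, timedelta

threshold = timedelta(minutes=15)
t_new = datetime.strptime("15:52:00", "%H:%M:%S")
t_old = datetime.strptime("15:30:00", "%H:%M:%S")

difference = t_new - t_old      # a timedelta, comparable to another timedelta
print(difference > threshold)   # True: 22 minutes exceeds the 15-minute threshold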
A:
Given this dataframe:
actual_ts id
0 05:00:00 SPAM
1 5:15:00 SPAM
2 5:33:00 SPAM <-- Should highlight
3 5:45:00 SPAM
4 6:02:00 SPAM <-- Should highlight
5 11:15:00 FOO
6 11:32:00 FOO <-- Should highlight
7 11:45:00 FOO
8 12:08:00 FOO <-- Should highlight
This is a step-by-step way of getting to where you want, definitely not the most optimal but it's clear enough to teach you how to avoid looping over dataframes, which is a major no-no. Try running and printing the dataframe every step so you know what's happening.
# Convert column to timedelta.
df["actual_ts"] = pd.to_timedelta(df["actual_ts"])
# Sort as a best practice if not computationally expensive.
df = df.sort_values(by=["id", "actual_ts"])
# Shift the actual_ts by one row per group.
df["lagged_ts"] = df.groupby(["id"])["actual_ts"].shift(1)
# Fill nulls with same time if you want to avoid NaNs and NaTs.
df["lagged_ts"] = df["lagged_ts"].fillna(df["actual_ts"])
# Calculate difference in seconds.
df["diff_seconds"] = (df["actual_ts"] - df["lagged_ts"]).dt.seconds
# Mark as True all events greater than 15 minutes.
df["highlight"] = df["diff_seconds"] > 900
# Keep all columns you need.
new_df = df[["actual_ts", "id", "diff_seconds", "highlight"]]
You get this:
actual_ts id diff_seconds highlight
5 0 days 11:15:00 FOO 0 False
6 0 days 11:32:00 FOO 1020 True
7 0 days 11:45:00 FOO 780 False
8 0 days 12:08:00 FOO 1380 True
0 0 days 05:00:00 SPAM 0 False
1 0 days 05:15:00 SPAM 900 False
2 0 days 05:33:00 SPAM 1080 True
3 0 days 05:45:00 SPAM 720 False
4 0 days 06:02:00 SPAM 1020 True
Cleaning up the 0 days is up to you. You can also change diff_seconds to minutes but that's easy enough.
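For instance, the minutes conversion is a one-liner; a tiny self-contained illustration with made-up seconds values:
import pandas as pd

df = pd.DataFrame({"diff_seconds": [0, 1020, 780]})
df["diff_minutes"] = df["diff_seconds"] / 60   # 0.0, 17.0, 13.0
df["highlight"] = df["diff_minutes"] > 15      # False, True, False
print(df)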
| Subtracting times in a csv for a row by row basis in Python | I have a CSV that's few thousand rows long. It contains data sent from various devices. They should transmit frequently (every 10 minutes) however sometimes there is a lag. I'm trying to write a program that will highlight all instances where the delay between two readings is greater than 15 minutes
I've made a functional code that works, but with this code I first have to manually edit the CSV to change the "eventTime" variable from time format (e.g. 03:22:00) to a float value based on 1/24 (e.g. 03:22:00 becomes 0.14027). Similarly, the 15 minute interval becomes 0.01042 (15/(60*24))
import pandas as pd
df = pd.read_csv('file.csv')
df2 = pd.DataFrame()
deviceID = df["deviceId"].unique().tolist()
threshold = 0.01042
for id_no in range(0, len(deviceID)):
subset = df[df.deviceId == deviceID[id_no]]
for row in range(len(subset)-1):
difference = subset.iloc[row, 1] - subset.iloc[row+1, 1]
if difference > threshold:
df2 = df2.append(subset.iloc[row])
df2 = df2.append(subset.iloc[row+1])
df2.to_csv('file2.csv)
This works, and I can open the CSV in excel and manually change the float values back to time format, but when I might be dealing with a few hundred CSV files, this becomes impractical,
I've tried this below
import pandas as pd
from datetime import datetime
df = pd.read_csv('file.csv')
df2 = pd.DataFrame()
deviceID = df["deviceId"].unique().tolist()
df['eventTime'].apply(lambda x: datetime.strptime(x, "%H:%M:%S"))
threshold = datetime.strptime("00:15:00", '%H:%M:%S')
for id_no in range(0, len(deviceID)):
subset = df[df.deviceId == deviceID[id_no]]
for row in range(len(subset)-1):
difference = datetime.strptime(subset.iloc[row, 1],'%H:%M:%S') - datetime.strptime(subset.iloc[row+1, 1], '%H:%M:%S')
if difference > threshold:
df2 = df2.append(subset.iloc[row])
df2 = df2.append(subset.iloc[row+1])
df2.to_csv('file2.csv')
but I get the following error:
if difference > threshold:
TypeError: '>' not supported between instances of 'datetime.timedelta' and 'datetime.datetime'
The data looks like this:
| eventTime| deviceId|
| -------- | -------- |
| 15:30:00 | 11234889|
| 15:45:00 | 11234889|
| 16:00:00 | 11234889|
and for different IDs
| eventTime| deviceId|
| -------- | -------- |
| 15:30:00 | 11234890|
| 15:45:00 | 11234890|
| 16:00:00 | 11234890|
| [
"threshold is datetime and you compare it to timedelta object (difference). Did you mean:\nfrom datetime import timedelta\n...\nthreshold = datetime.timedelta(minutes=15)\n\n",
"Given this dataframe:\n actual_ts id\n0 05:00:00 SPAM\n1 5:15:00 SPAM\n2 5:33:00 SPAM <-- Should highlight\n3 5:45:00 SPAM\n4 6:02:00 SPAM <-- Should highlight\n5 11:15:00 FOO\n6 11:32:00 FOO <-- Should highlight\n7 11:45:00 FOO\n8 12:08:00 FOO <-- Should highlight\n\nThis is a step-by-step way of getting to where you want, definitely not the most optimal but it's clear enough to teach you how to avoid looping over dataframes, which is a major no-no. Try running and printing the dataframe every step so you know what's happening.\n# Convert column to timedelta.\ndf[\"actual_ts\"] = pd.to_timedelta(df[\"actual_ts\"])\n\n# Sort as a best practice if not computationally expensive.\ndf = df.sort_values(by=[\"id\", \"actual_ts\"])\n\n# Shift the actual_ts by one row per group.\ndf[\"lagged_ts\"] = df.groupby([\"id\"])[\"actual_ts\"].shift(1)\n\n# Fill nulls with same time if you want to avoid NaNs and NaTs.\ndf[\"lagged_ts\"] = df[\"lagged_ts\"].fillna(df[\"actual_ts\"])\n\n# Calculate difference in seconds.\ndf[\"diff_seconds\"] = (df[\"actual_ts\"] - df[\"lagged_ts\"]).dt.seconds\n\n# Mark as True all events greater than 15 minutes.\ndf[\"highlight\"] = df[\"diff_seconds\"] > 900\n\n# Keep all columns you need.\nnew_df = df[[\"actual_ts\", \"id\", \"diff_seconds\", \"highlight\"]]\n\nYou get this:\n actual_ts id diff_seconds highlight\n5 0 days 11:15:00 FOO 0 False\n6 0 days 11:32:00 FOO 1020 True\n7 0 days 11:45:00 FOO 780 False\n8 0 days 12:08:00 FOO 1380 True\n0 0 days 05:00:00 SPAM 0 False\n1 0 days 05:15:00 SPAM 900 False\n2 0 days 05:33:00 SPAM 1080 True\n3 0 days 05:45:00 SPAM 720 False\n4 0 days 06:02:00 SPAM 1020 True\n\nCleaning up the 0 days is up to you. You can also change diff_seconds to minutes but that's easy enough.\n"
] | [
0,
0
] | [] | [] | [
"csv",
"datetime",
"pandas",
"python"
] | stackoverflow_0074615585_csv_datetime_pandas_python.txt |
Q:
How to add rows as sums of other rows in DataFrame?
I'm not sure I titled this post correctly but I have a unique situation where I want to append a new set of rows to an existing DataFrame as a sum of rows from existing sets and I'm not sure where to start.
For example, I have the following DataFrame:
import pandas as pd
data = {'Team': ['Atlanta', 'Atlanta', 'Cleveland', 'Cleveland'],
'Position': ['Defense', 'Kicker', 'Defense', 'Kicker'],
'Points': [5, 10, 15, 20]}
df = pd.DataFrame(data)
print(df)
Team Position Points
0 Atlanta Defense 5
1 Atlanta Kicker 10
2 Cleveland Defense 15
3 Cleveland Kicker 20
How do I create/append new rows which create a new position for each team and sum the points of the two existing positions for each team? Additionally, the full dataset consists of several more teams so I'm looking for a solution that will work for any number of teams.
edit: I forgot to mention that there are other positions in the complete DataFrame, but I only want this solution applied to the positions "Defense" and "Kicker".
My desired output is below.
Team Position Points
0 Atlanta Defense 5
1 Atlanta Kicker 10
2 Cleveland Defense 15
3 Cleveland Kicker 20
4 Atlanta Defense + Kicker 15
5 Cleveland Defense + Kicker 35
Thanks in advance!
A:
We can use groupby agg to create the summary rows then append to the DataFrame:
df = df.append(df.groupby('Team', as_index=False).agg({
'Position': ' + '.join, # Concat Strings together
'Points': 'sum' # Total Points
}), ignore_index=True)
df:
Team Position Points
0 Atlanta Defense 5
1 Atlanta Kicker 10
2 Cleveland Defense 15
3 Cleveland Kicker 20
4 Atlanta Defense + Kicker 15
5 Cleveland Defense + Kicker 35
We can also whitelist certain positions by filtering df before groupby to aggregate only the desired positions:
whitelisted_positions = ['Kicker', 'Defense']
df = df.append(
df[df['Position'].isin(whitelisted_positions)]
.groupby('Team', as_index=False).agg({
'Position': ' + '.join, # Concat Strings together
'Points': 'sum' # Total Points
}), ignore_index=True
)
A:
pandas.DataFrame.append is deprecated since version 1.4.0. Henry Ecker's neat solution just needs a slight tweak to use concat instead.
whitelisted_positions = ['Kicker', 'Defense']
df = pd.concat([df,
df[df['Position'].isin(whitelisted_positions)]
.groupby('Team', as_index=False).agg({
'Position': ' + '.join, # Concat Strings together
'Points': 'sum' # Total Points
})],
ignore_index=True)
| How to add rows as sums of other rows in DataFrame? | I'm not sure I titled this post correctly but I have a unique situation where I want to append a new set of rows to an existing DataFrame as a sum of rows from existing sets and I'm not sure where to start.
For example, I have the following DataFrame:
import pandas as pd
data = {'Team': ['Atlanta', 'Atlanta', 'Cleveland', 'Cleveland'],
'Position': ['Defense', 'Kicker', 'Defense', 'Kicker'],
'Points': [5, 10, 15, 20]}
df = pd.DataFrame(data)
print(df)
Team Position Points
0 Atlanta Defense 5
1 Atlanta Kicker 10
2 Cleveland Defense 15
3 Cleveland Kicker 20
How do I create/append new rows which create a new position for each team and sum the points of the two existing positions for each team? Additionally, the full dataset consists of several more teams so I'm looking for a solution that will work for any number of teams.
edit: I forgot to include that there are other positions in the complete DataFrame; but, I only want this solution to be applied for the positions "Defense" and "Kicker".
My desired output is below.
Team Position Points
Team Position Points
0 Atlanta Defense 5
1 Atlanta Kicker 10
2 Cleveland Defense 15
3 Cleveland Kicker 20
4 Atlanta Defense + Kicker 15
5 Cleveland Defense + Kicker 35
Thanks in advance!
| [
"We can use groupby agg to create the summary rows then append to the DataFrame:\ndf = df.append(df.groupby('Team', as_index=False).agg({\n 'Position': ' + '.join, # Concat Strings together\n 'Points': 'sum' # Total Points\n}), ignore_index=True)\n\ndf:\n Team Position Points\n0 Atlanta Defense 5\n1 Atlanta Kicker 10\n2 Cleveland Defense 15\n3 Cleveland Kicker 20\n4 Atlanta Defense + Kicker 15\n5 Cleveland Defense + Kicker 35\n\nWe can also whitelist certain positions by filtering df before groupby to aggregate only the desired positions:\nwhitelisted_positions = ['Kicker', 'Defense']\ndf = df.append(\n df[df['Position'].isin(whitelisted_positions)]\n .groupby('Team', as_index=False).agg({\n 'Position': ' + '.join, # Concat Strings together\n 'Points': 'sum' # Total Points\n }), ignore_index=True\n)\n\n",
"pandas.DataFrame.append is deprecated since version 1.4.0. Henry Ecker's neat solution just needs a slight tweak to use concat instead.\nwhitelisted_positions = ['Kicker', 'Defense']\ndf = pd.concat([df,\n df[df['Position'].isin(whitelisted_positions)]\n .groupby('Team', as_index=False).agg({\n 'Position': ' + '.join, # Concat Strings together\n 'Points': 'sum' # Total Points\n })],\n ignore_index=True)\n\n"
] | [
1,
0
] | [] | [] | [
"aggregate",
"append",
"dataframe",
"pandas",
"python"
] | stackoverflow_0068773113_aggregate_append_dataframe_pandas_python.txt |
Q:
What's the equivalent of the php Laravel's "Http::fake()" in Python/ Django / DRF / Pytest?
Laravel's Http::fake() method allows you to instruct the HTTP client to return stubbed/dummy responses when requests are made. How can I achieve the same using the Django Rest Framework APIClient in tests?
I tried requests_mock but it didn't yield the result I was expecting. It only mocks requests made within the test function and not anywhere else within the application or project being tested.
A:
When you use pytest-django you can import the admin_client fixture and then make requests like this:
def test_get_project_list(admin_client):
resp = admin_client.get("/projects/")
assert resp.status_code == 200
resp_json = resp.json()
assert resp_json == {"some": "thing"}
| What's the equivalent of the php Laravel's "Http::fake()" in Python/ Django / DRF / Pytest? | Laravel's Http:fake() method allows you to instruct the HTTP client to return stubbed / dummy responses when requests are made. How can I achieve the same using Django Rest Framework APIClient in tests?
I tried requests_mock but it didn't yield the result I was expecting. It only mocks requests made within test function and not anywhere else within the application or project you're testing.
| [
"When you use pytest-django you can use import the fixture admin_client and then do requests like this:\ndef test_get_project_list(admin_client):\n resp = admin_client.get(\"/projects/\")\n assert resp.status_code == 200\n resp_json = resp.json()\n assert resp_json == {\"some\": \"thing\"}\n\n"
] | [
0
] | [] | [] | [
"django_rest_framework",
"django_testing",
"django_tests",
"http_mock",
"python"
] | stackoverflow_0074614274_django_rest_framework_django_testing_django_tests_http_mock_python.txt |
Q:
Python multiprocessing queue using a lot of resources with opencv
I am using multiprocessing to get frames of a video using OpenCV in Python.
My class looks like this :-
import cv2
from multiprocessing import Process, Queue
class StreamVideos:
def __init__(self):
self.image_data = Queue()
def start_proces(self):
p = Process(target=self.echo)
p.start()
def echo(self):
cap = cv2.VideoCapture('videoplayback.mp4')
while cap.isOpened():
ret,frame = cap.read()
self.image_data.put(frame)
# print("frame")
I start the process "echo" using :-
p = Process(target=self.echo)
p.start()
the echo function looks like this :-
def echo(self):
cap = cv2.VideoCapture('videoplayback.mp4')
while cap.isOpened():
ret,frame = cap.read()
self.image_data.put(frame)
in which i am using queue where i put these frames
self.image_data.put(frame)
and then in another process I start reviving these frames
self.obj = StreamVideos()
def start_process(self):
self.obj.start_proces()
p = Process(target=self.stream_videos)
p.start()
def stream_videos(self):
while True:
self.img = self.obj.image_data.get()
print(self.img)
but as soon as I start putting frames into the queue, the RAM fills up very quickly and the system gets stuck. The video I am using is just 25 fps and 39 MB in size, so this does not make sense.
One thing I noticed is that the "echo" process puts a lot of frames into the queue before the "stream_videos" process retrieves them.
What could be the root of this problem?
Thanks in advance.
Expectations: -
Able to retrieve the frames continuosly.
Tried :-
Not putting frames in the queue, in which case the RAM does not fill up.
A:
The following is a general purpose single producer/multiple consumer implementation. The producer (class StreamVideos) creates a shared memory array whose size is the number of bytes in the video frame. One or more consumers (you specify the number of consumers to StreamVideos) can then call StreamVideos.get_next_frame() to retrieve the next frame. This method converts the shared array back into a numpy array for subsequent processing. The producer will only read the next frame into the shared array after all consumers have called get_next_frame:
#!/usr/bin/env python3
import multiprocessing
import numpy as np
import ctypes
import cv2
class StreamVideos:
def __init__(self, path, n_consumers):
"""
path is the path to the video:
n_consumers is the number of tasks to which we will be sreaming this.
"""
self._path = path
self._event = multiprocessing.Event()
self._barrier = multiprocessing.Barrier(n_consumers + 1, self._reset_event)
# Discover how large a framesize is by getting the first frame
cap = cv2.VideoCapture(self._path)
ret, frame = cap.read()
if ret:
self._shape = frame.shape
frame_size = self._shape[0] * self._shape[1] * self._shape[2]
self._arr = multiprocessing.RawArray(ctypes.c_ubyte, frame_size)
else:
self._arr = None
cap.release()
def _reset_event(self):
self._event.clear()
def start_streaming(self):
cap = cv2.VideoCapture(self._path)
while True:
self._barrier.wait()
ret, frame = cap.read()
if not ret:
# No more readable frames:
break
# Store frame into shared array:
temp = np.frombuffer(self._arr, dtype=frame.dtype)
temp[:] = frame.flatten(order='C')
self._event.set()
cap.release()
self._arr = None
self._event.set()
def get_next_frame(self):
# Tell producer that this consumer is through with the previous frame:
self._barrier.wait()
# Wait for next frame to be read by the producer:
self._event.wait()
if self._arr is None:
return None
# Return shared array as a numpy array:
return np.ctypeslib.as_array(self._arr).reshape(self._shape)
def consumer(producer, id):
frame_name = f'Frame - {id}'
while True:
frame = producer.get_next_frame()
if frame is None:
break
cv2.imshow(frame_name, frame)
cv2.waitKey(1)
cv2.destroyAllWindows()
def main():
producer = StreamVideos('videoplayback.mp4', 2)
consumer1 = multiprocessing.Process(target=consumer, args=(producer, 1))
consumer1.start()
consumer2 = multiprocessing.Process(target=consumer, args=(producer, 2))
consumer2.start()
"""
# Run as a child process:
producer_process = multiprocessing.Process(target=producer.start_streaming)
producer_process.start()
producer_process.join()
"""
# Run in main process:
producer.start_streaming()
consumer1.join()
consumer2.join()
if __name__ == '__main__':
main()
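As a side note, separate from the shared-memory design above: the runaway memory in the original code comes from an unbounded Queue fed by a producer that is much faster than the consumer. A minimal bounded-queue sketch of that original approach (the maxsize value is an arbitrary choice) applies backpressure instead of growing without limit:
import cv2
from multiprocessing import Process, Queue

def producer(path, q):
    cap = cv2.VideoCapture(path)
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        q.put(frame)              # blocks once the queue holds `maxsize` frames
    cap.release()
    q.put(None)                   # sentinel: tell the consumer we are done

def consumer(q):
    while True:
        frame = q.get()
        if frame is None:
            break
        print(frame.shape)        # placeholder for real per-frame work

if __name__ == '__main__':
    q = Queue(maxsize=100)        # cap the buffer so RAM cannot grow without limit
    p = Process(target=producer, args=('videoplayback.mp4', q))
    c = Process(target=consumer, args=(q,))
    p.start(); c.start()
    p.join(); c.join()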
| Python multiprocessing queue using a lot of resources with opencv | I am using multiprocessing to get frames of a video using Opencv in python.
My class looks like this :-
import cv2
from multiprocessing import Process, Queue
class StreamVideos:
def __init__(self):
self.image_data = Queue()
def start_proces(self):
p = Process(target=self.echo)
p.start()
def echo(self):
cap = cv2.VideoCapture('videoplayback.mp4')
while cap.isOpened():
ret,frame = cap.read()
self.image_data.put(frame)
# print("frame")
I start the process "echo" using :-
p = Process(target=self.echo)
p.start()
the echo function looks like this :-
def echo(self):
cap = cv2.VideoCapture('videoplayback.mp4')
while cap.isOpened():
ret,frame = cap.read()
self.image_data.put(frame)
in which i am using queue where i put these frames
self.image_data.put(frame)
and then in another process I start reviving these frames
self.obj = StreamVideos()
def start_process(self):
self.obj.start_proces()
p = Process(target=self.stream_videos)
p.start()
def stream_videos(self):
while True:
self.img = self.obj.image_data.get()
print(self.img)
but as soon as I start putting frames to queue, the ram gets filled very quickly and the system gets stuck. The video I am using is just 25 fps and 39mb in size, so it does not make any sense.
One thing I noticed is that the "echo" process is putting a lot of frames in the queue before the "stream_videos" process retrives it.
What could be the root of this problem?
Thanks in advance.
Expectations: -
Able to retrieve the frames continuosly.
Tried :-
Not putting frames in queue, in which case the ram is not filled.
| [
"The following is a general purpose single producer/multiple consumer implementation. The producer (class StreamVideos) creates a shared memory array whose size is the number of bytes in the video frame. One or more consumers (you specify the number of consumers to StreamVideos) can then call StreamVideos.get_next_frame() to retrieve the next frame. This method converts the shared array back into a numpy array for subsequent processing. The producer will only read the next frame into the shared array after all consumers have called get_next_frame:\n#!/usr/bin/env python3\n\nimport multiprocessing\nimport numpy as np\nimport ctypes\nimport cv2\n\nclass StreamVideos:\n def __init__(self, path, n_consumers):\n \"\"\"\n path is the path to the video:\n n_consumers is the number of tasks to which we will be sreaming this.\n \"\"\"\n self._path = path\n\n self._event = multiprocessing.Event()\n\n self._barrier = multiprocessing.Barrier(n_consumers + 1, self._reset_event)\n\n # Discover how large a framesize is by getting the first frame\n cap = cv2.VideoCapture(self._path)\n ret, frame = cap.read()\n if ret:\n self._shape = frame.shape\n frame_size = self._shape[0] * self._shape[1] * self._shape[2]\n self._arr = multiprocessing.RawArray(ctypes.c_ubyte, frame_size)\n else:\n self._arr = None\n cap.release()\n\n def _reset_event(self):\n self._event.clear()\n\n def start_streaming(self):\n cap = cv2.VideoCapture(self._path)\n\n while True:\n self._barrier.wait()\n ret, frame = cap.read()\n if not ret:\n # No more readable frames:\n break\n\n # Store frame into shared array:\n temp = np.frombuffer(self._arr, dtype=frame.dtype)\n temp[:] = frame.flatten(order='C')\n\n self._event.set()\n\n cap.release()\n self._arr = None\n self._event.set()\n\n def get_next_frame(self):\n # Tell producer that this consumer is through with the previous frame:\n self._barrier.wait()\n # Wait for next frame to be read by the producer:\n self._event.wait()\n if self._arr is None:\n return None\n\n # Return shared array as a numpy array:\n return np.ctypeslib.as_array(self._arr).reshape(self._shape)\n\ndef consumer(producer, id):\n frame_name = f'Frame - {id}'\n while True:\n frame = producer.get_next_frame()\n if frame is None:\n break\n cv2.imshow(frame_name, frame)\n cv2.waitKey(1)\n\n cv2.destroyAllWindows()\n\n\ndef main():\n producer = StreamVideos('videoplayback.mp4', 2)\n\n consumer1 = multiprocessing.Process(target=consumer, args=(producer, 1))\n consumer1.start()\n consumer2 = multiprocessing.Process(target=consumer, args=(producer, 2))\n consumer2.start()\n\n \"\"\"\n # Run as a child process:\n producer_process = multiprocessing.Process(target=producer.start_streaming)\n producer_process.start()\n producer_process.join()\n \"\"\"\n # Run in main process:\n producer.start_streaming()\n\n consumer1.join()\n consumer2.join()\n\nif __name__ == '__main__':\n main()\n\n"
] | [
0
] | [] | [] | [
"multiprocessing",
"opencv",
"python",
"queue"
] | stackoverflow_0074600004_multiprocessing_opencv_python_queue.txt |
Q:
Call function without optional arguments if they are None
There's a function which takes optional arguments.
def alpha(p1="foo", p2="bar"):
print('{0},{1}'.format(p1, p2))
Let me iterate over what happens when we use that function in different ways:
>>> alpha()
foo,bar
>>> alpha("FOO")
FOO,bar
>>> alpha(p2="BAR")
foo,BAR
>>> alpha(p1="FOO", p2=None)
FOO,None
Now consider the case where I want to call it like alpha("FOO", myp2) and myp2 will either contain a value to be passed, or be None. But even though the function handles p2=None, I want it to use its default value "bar" instead.
Maybe that's worded confusingly, so let me reword that:
If myp2 is None, call alpha("FOO"). Else, call alpha("FOO", myp2).
The distinction is relevant because alpha("FOO", None) has a different result than alpha("FOO").
How can I concisely (but readably) make this distinction?
One possibility would usually be to check for None within alpha, which would be encouraged because that would make the code safer. But assume that alpha is used in other places where it is actually supposed to handle None as it does.
I'd like to handle that on the caller-side.
One possibility is to do a case distinction:
if myp2 is None:
alpha("FOO")
else:
alpha("FOO", myp2)
But that can quickly become a lot of code when there are multiple such arguments (exponentially many cases: 2^n).
Another possibility is to simply do alpha("FOO", myp2 or "bar"), but that requires us to know the default value. Usually, I'd probably go with this approach, but I might later change the default values for alpha and this call would then need to be updated manually in order to still call it with the (new) default value.
I am using python 3.4 but it would be best if your answers can provide a good way that works in any python version.
The question is technically finished here, but I'll restate one requirement, since the first answer glossed over it:
I want the behaviour of alpha with its default values "foo", "bar" preserved in general, so it is (probably) not an option to change alpha itself.
To put it yet another way, assume that alpha is being used somewhere else as alpha("FOO", None), where the output FOO,None is the expected behaviour.
A:
Pass the arguments as kwargs from a dictionary, from which you filter out the None values:
kwargs = dict(p1='FOO', p2=None)
alpha(**{k: v for k, v in kwargs.items() if v is not None})
A:
But assume that alpha is used in other places where it is actually supposed to handle None as it does.
To respond to this concern, I have been known to have a None-like value which isn't actually None for this exact purpose.
_novalue = object()
def alpha(p1=_novalue, p2=_novalue):
if p1 is _novalue:
p1 = "foo"
if p2 is _novalue:
p2 = "bar"
print('{0},{1}'.format(p1, p2))
Now the arguments are still optional, so you can neglect to pass either of them. And the function handles None correctly. If you ever want to explicitly not pass an argument, you can pass _novalue.
>>> alpha(p1="FOO", p2=None)
FOO,None
>>> alpha(p1="FOO")
FOO,bar
>>> alpha(p1="FOO", p2=_novalue)
FOO,bar
and since _novalue is a special made-up value created for this express purpose, anyone who passes _novalue is certainly intending the "default argument" behavior, as opposed to someone who passes None who might intend that the value be interpreted as literal None.
A:
although ** is definitely a language feature, it's surely not created for solving this particular problem. Your suggestion works, so does mine. Which one works better depends on the rest of the OP's code. However, there is still no way to write f(x or dont_pass_it_at_all)
- blue_note
Thanks to your great answers, I thought I'd try to do just that:
# gen.py
def callWithNonNoneArgs(f, *args, **kwargs):
kwargsNotNone = {k: v for k, v in kwargs.items() if v is not None}
return f(*args, **kwargsNotNone)
# python interpreter
>>> import gen
>>> def alpha(p1="foo", p2="bar"):
... print('{0},{1}'.format(p1,p2))
...
>>> gen.callWithNonNoneArgs(alpha, p1="FOO", p2=None)
FOO,bar
>>> def beta(ree, p1="foo", p2="bar"):
... print('{0},{1},{2}'.format(ree,p1,p2))
...
>>> beta('hello', p2="world")
hello,foo,world
>>> beta('hello', p2=None)
hello,foo,None
>>> gen.callWithNonNoneArgs(beta, 'hello', p2=None)
hello,foo,bar
This is probably not perfect, but it seems to work: it's a function that you call with another function and its arguments, and it applies deceze's answer to filter out the arguments that are None.
A:
You could inspect the default values via alpha.__defaults__ and then use them instead of None. That way you circumvent the hard-coding of default values:
>>> args = [None]
>>> alpha('FOO', *[x if x is not None else y for x, y in zip(args, alpha.__defaults__[1:])])
A:
I had the same problem when calling some Swagger generated client code, which I couldn't modify, where None could end up in the query string if I didn't clean up the arguments before calling the generated methods. I ended up creating a simple helper function:
def defined_kwargs(**kwargs):
return {k: v for k, v in kwargs.items() if v is not None}
>>> alpha(**defined_kwargs(p1="FOO", p2=None))
FOO,bar
It keeps things quite readable for more complex invocations:
def beta(a, b, p1="foo", p2="bar"):
print('{0},{1},{2},{3}'.format(a, b, p1, p2,))
p1_value = "foo"
p2_value = None
>>> beta("hello",
"world",
**defined_kwargs(
p1=p1_value,
p2=p2_value))
hello,world,FOO,bar
A:
I'm surprised nobody brought this up
def f(p1="foo", p2=None):
p2 = "bar" if p2 is None else p2
print(p1+p2)
You assign None to p2 as the default (or don't, but this way you have the true default in one place in your code) and use an inline if. IMO the most pythonic answer. Another thing that comes to mind is using a wrapper, but that would be way less readable.
EDIT:
What I'd probably do is use a dummy as the default value and check for that. So something like this:
class dummy():
pass
def alpha(p1="foo", p2=dummy()):
if isinstance(p2, dummy):
p2 = "bar"
print("{0},{1}".format(p1, p2))
alpha()
alpha("a","b")
alpha(p2=None)
produces:
foo,bar
a,b
foo,None
A:
Unfortunately, there's no way to do what you want. Even widely adopted python libraries/frameworks use your first approach. It's an extra line of code, but it is quite readable.
Do not use the alpha("FOO", myp2 or "bar") approach, because, as you mention yourself, it creates a terrible kind of coupling, since it requires the caller to know details about the function.
Regarding work-arounds: you could make a decorator for your function (using the inspect module) which checks the arguments passed to it. If one of them is None, it replaces the value with the parameter's own default value.
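A hedged sketch of what such a decorator could look like (the decorator name is made up, and note that applying it to alpha changes behaviour for every caller, which may not be what you want):
import inspect
from functools import wraps

def none_means_default(func):
    """Replace None arguments with the parameter's declared default, if it has one."""
    sig = inspect.signature(func)

    @wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in list(bound.arguments.items()):
            default = sig.parameters[name].default
            if value is None and default is not inspect.Parameter.empty:
                bound.arguments[name] = default
        return func(*bound.args, **bound.kwargs)
    return wrapper

@none_means_default
def alpha(p1="foo", p2="bar"):
    print('{0},{1}'.format(p1, p2))

alpha("FOO", None)  # prints: FOO,bar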
A:
Not a direct answer, but I think this is worth considering:
See if you can break your function into several functions, neither of which has any default arguments. Factor any shared functionality out to a function you designate as internal.
def alpha():
_omega('foo', 'bar')
def beta(p1):
_omega(p1, 'bar')
def _omega(p1, p2):
print('{0},{1}'.format(p1, p2))
This works well when the extra arguments trigger "extra" functionality, as it may allow you to give the functions more descriptive names.
Functions with boolean arguments with True and/or False defaults frequently benefit from this type of approach.
A:
Another possibility is to simply do alpha("FOO", myp2 or "bar"), but that requires us to know the default value. Usually, I'd probably go with this approach, but I might later change the default values for alpha and this call would then need to be updated manually in order to still call it with the (new) default value.
Just create a constant:
P2_DEFAULT = "bar"
def alpha(p1="foo", p2=P2_DEFAULT):
print('{0},{1}'.format(p1, p2))
and call the function:
alpha("FOO", myp2 or P2_DEFAULT)
If default values for alpha will be changed, we have to change only one constant.
Be careful with logical or for some cases, see https://stackoverflow.com/a/4978745/3605259
One more (better) use case
For example, we have some config (dictionary). But some values are not present:
config = {'name': 'Johnny', 'age': '33'}
work_type = config.get('work_type', P2_DEFAULT)
alpha("FOO", work_type)
So we use method get(key, default_value) of dict, which will return default_value if our config (dict) does not contain such key.
A:
As I cannot comment on answers yet, I'd like to add that the first solution (unpacking the kwargs) would fit nicely in a decorator as follows:
from functools import wraps

def remove_none_from_kwargs(func):
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        # Drop keyword arguments whose value is None before delegating to the wrapped method
        return func(self, *args, **{k: v for k, v in kwargs.items() if v is not None})
    return wrapper
| Call function without optional arguments if they are None | There's a function which takes optional arguments.
def alpha(p1="foo", p2="bar"):
print('{0},{1}'.format(p1, p2))
Let me iterate over what happens when we use that function in different ways:
>>> alpha()
foo,bar
>>> alpha("FOO")
FOO,bar
>>> alpha(p2="BAR")
foo,BAR
>>> alpha(p1="FOO", p2=None)
FOO,None
Now consider the case where I want to call it like alpha("FOO", myp2) and myp2 will either contain a value to be passed, or be None. But even though the function handles p2=None, I want it to use its default value "bar" instead.
Maybe that's worded confusingly, so let me reword that:
If myp2 is None, call alpha("FOO"). Else, call alpha("FOO", myp2).
The distinction is relevant because alpha("FOO", None) has a different result than alpha("FOO").
How can I concisely (but readably) make this distinction?
One possibility would usually be to check for None within alpha, which would be encouraged because that would make the code safer. But assume that alpha is used in other places where it is actually supposed to handle None as it does.
I'd like to handle that on the caller-side.
One possibility is to do a case distinction:
if myp2 is None:
alpha("FOO")
else:
alpha("FOO", myp2)
But that can quickly become much code when there are multiple such arguments. (exponentially, 2^n)
Another possibility is to simply do alpha("FOO", myp2 or "bar"), but that requires us to know the default value. Usually, I'd probably go with this approach, but I might later change the default values for alpha and this call would then need to be updated manually in order to still call it with the (new) default value.
I am using python 3.4 but it would be best if your answers can provide a good way that works in any python version.
The question is technically finished here, but I reword some requirement again, since the first answer did gloss over that:
I want the behaviour of alpha with its default values "foo", "bar" preserved in general, so it is (probably) not an option to change alpha itself.
In yet again other words, assume that alpha is being used somewhere else as alpha("FOO", None) where the output FOO,None is expected behaviour.
| [
"Pass the arguments as kwargs from a dictionary, from which you filter out the None values:\nkwargs = dict(p1='FOO', p2=None)\n\nalpha(**{k: v for k, v in kwargs.items() if v is not None})\n\n",
"\nBut assume that alpha is used in other places where it is actually supposed to handle None as it does.\n\nTo respond to this concern, I have been known to have a None-like value which isn't actually None for this exact purpose.\n_novalue = object()\n\ndef alpha(p1=_novalue, p2=_novalue):\n if p1 is _novalue:\n p1 = \"foo\"\n if p2 is _novalue:\n p2 = \"bar\"\n print('{0},{1}'.format(p1, p2))\n\nNow the arguments are still optional, so you can neglect to pass either of them. And the function handles None correctly. If you ever want to explicitly not pass an argument, you can pass _novalue.\n>>> alpha(p1=\"FOO\", p2=None)\nFOO,None\n>>> alpha(p1=\"FOO\")\nFOO,bar\n>>> alpha(p1=\"FOO\", p2=_novalue)\nFOO,bar\n\nand since _novalue is a special made-up value created for this express purpose, anyone who passes _novalue is certainly intending the \"default argument\" behavior, as opposed to someone who passes None who might intend that the value be interpreted as literal None.\n",
"\nalthough ** is definitely a language feature, it's surely not created for solving this particular problem. Your suggestion works, so does mine. Which one works better depends on the rest of the OP's code. However, there is still no way to write f(x or dont_pass_it_at_all)\n - blue_note\n\nThanks to your great answers, I thought I'd try to do just that:\n# gen.py\ndef callWithNonNoneArgs(f, *args, **kwargs):\n kwargsNotNone = {k: v for k, v in kwargs.items() if v is not None}\n return f(*args, **kwargsNotNone)\n\n \n# python interpreter\n>>> import gen\n>>> def alpha(p1=\"foo\", p2=\"bar\"):\n... print('{0},{1}'.format(p1,p2))\n...\n>>> gen.callWithNonNoneArgs(alpha, p1=\"FOO\", p2=None)\nFOO,bar\n>>> def beta(ree, p1=\"foo\", p2=\"bar\"):\n... print('{0},{1},{2}'.format(ree,p1,p2))\n...\n>>> beta('hello', p2=\"world\")\nhello,foo,world\n>>> beta('hello', p2=None)\nhello,foo,None\n>>> gen.callWithNonNoneArgs(beta, 'hello', p2=None)\nhello,foo,bar\n\nThis is probably not perfect, but it seems to work: It's a function that you can call with another function and it's arguments, and it applies deceze's answer to filter out the arguments that are None.\n",
"You could inspect the default values via alpha.__defaults__ and then use them instead of None. That way you circumvent the hard-coding of default values:\n>>> args = [None]\n>>> alpha('FOO', *[x if x is not None else y for x, y in zip(args, alpha.__defaults__[1:])])\n\n",
"I had the same problem when calling some Swagger generated client code, which I couldn't modify, where None could end up in the query string if I didn't clean up the arguments before calling the generated methods. I ended up creating a simple helper function:\ndef defined_kwargs(**kwargs):\n return {k: v for k, v in kwargs.items() if v is not None}\n\n>>> alpha(**defined_kwargs(p1=\"FOO\", p2=None))\nFOO,bar\n\nIt keeps things quite readable for more complex invocations:\ndef beta(a, b, p1=\"foo\", p2=\"bar\"):\n print('{0},{1},{2},{3}'.format(a, b, p1, p2,))\n\np1_value = \"foo\"\np2_value = None\n\n>>> beta(\"hello\",\n \"world\",\n **defined_kwargs(\n p1=p1_value, \n p2=p2_value))\n\nhello,world,FOO,bar\n\n",
"I'm surprised nobody brought this up\ndef f(p1=\"foo\", p2=None):\n p2 = \"bar\" if p2 is None else p2\n print(p1+p2)\n\nYou assign None to p2 as standart (or don't, but this way you have the true standart at one point in your code) and use an inline if. Imo the most pythonic answer. Another thing that comes to mind is using a wrapper, but that would be way less readable.\nEDIT:\nWhat I'd probably do is use a dummy as standart value and check for that. So something like this:\nclass dummy():\n pass\n\ndef alpha(p1=\"foo\", p2=dummy()):\n if isinstance(p2, dummy):\n p2 = \"bar\"\n print(\"{0},{1}\".format(p1, p2))\n\nalpha()\nalpha(\"a\",\"b\")\nalpha(p2=None)\n\nproduces:\nfoo,bar\na,b\nfoo,None\n\n",
"Unfortunately, there's no way to do what you want. Even widely adopted python libraries/frameworks use your first approach. It's an extra line of code, but it is quite readable. \nDo not use the alpha(\"FOO\", myp2 or \"bar\") approach, because, as you mention yourself, it creates a terrible kind of coupling, since it requires the caller to know details about the function.\nRegarding work-arounds: you could make a decorator for you function (using the inspect module), which checks the arguments passed to it. If one of them is None, it replaces the value with its own default value.\n",
"Not a direct answer, but I think this is worth considering:\nSee if you can break your function into several functions, neither of which has any default arguments. Factor any shared functionality out to a function you designate as internal.\ndef alpha():\n _omega('foo', 'bar')\n\ndef beta(p1):\n _omega(p1, 'bar')\n\ndef _omega(p1, p2):\n print('{0},{1}'.format(p1, p2))\n\nThis works well when the extra arguments trigger \"extra\" functionality, as it may allow you to give the functions more descriptive names.\nFunctions with boolean arguments with True and/or False defaults frequently benefit from this type of approach.\n",
"\nAnother possibility is to simply do alpha(\"FOO\", myp2 or \"bar\"), but that requires us to know the default value. Usually, I'd probably go with this approach, but I might later change the default values for alpha and this call would then need to be updated manually in order to still call it with the (new) default value.\n\nJust create a constant:\nP2_DEFAULT = \"bar\"\n\ndef alpha(p1=\"foo\", p2=P2_DEFAULT):\n print('{0},{1}'.format(p1, p2))\n\n\nand call the function:\nalpha(\"FOO\", myp2 or P2_DEFAULT)\n\nIf default values for alpha will be changed, we have to change only one constant.\nBe careful with logical or for some cases, see https://stackoverflow.com/a/4978745/3605259\nOne more (better) use case\nFor example, we have some config (dictionary). But some values are not present:\nconfig = {'name': 'Johnny', 'age': '33'}\nwork_type = config.get('work_type', P2_DEFAULT)\n\nalpha(\"FOO\", work_type)\n\nSo we use method get(key, default_value) of dict, which will return default_value if our config (dict) does not contain such key.\n",
"As I cannot comment on answers yet, I'd like to add that the first solution (unpacking the kwargs) would fit nicely in a decorator as follows:\ndef remove_none_from_kwargs(func):\n @wraps(func)\n def wrapper(self, *args, **kwargs):\n func(self,*args, **{k: v for k, v in kwargs.items() if v is not None})\n return wrapper\n\n"
] | [
66,
14,
12,
5,
3,
2,
1,
1,
1,
0
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0052494128_python_python_3.x.txt |
Q:
Python matplotlib adjust colormap
This is what I want to create.
This is what I get.
This is the code I have written.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
x = np.linspace(-90, 90, 181)
y = np.linspace(-90, 90, 181)
x_grid, y_grid = np.meshgrid(x, y)
z = np.e**x_grid
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1, projection="3d")
ax.plot_surface(x_grid, y_grid, z, cmap=cm.rainbow)
I also tried to normalize z and the colormap.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
import matplotlib as mpl
x = np.linspace(-90, 90, 181)
y = np.linspace(-90, 90, 181)
x_grid, y_grid = np.meshgrid(x, y)
z = np.e**x_grid
cmap = mpl.cm.rainbow
norm = mpl.colors.Normalize(vmin=0, vmax=1)
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1, projection="3d")
ax.plot_surface(x_grid, y_grid, z/np.max(z), norm=norm, cmap=cm.rainbow)
Question: How can I adjust the colormap to make it less discrete and more continuous for these simultaneously tiny and large values in z?
A:
Welcome to Stackoverflow!!
Your problem is related to the fact that you are working with exponential numbers, but you're using a linear colormap. For x=90 you have z=1.2e+39, reaaaally large.
You were very close with your second attempt! I just changed 1 line in there, instead of
norm = mpl.colors.Normalize(vmin=0, vmax=1)
I used
norm = mpl.colors.LogNorm()
And the result I got was the following:
Now, you can tweak this as much as you like in order to get the colors you want :) Just don't forget that your colormap should be normalized in a logarithmic fashion, so that it counters the exponential behaviour of your function in this case.
Hope this helps!
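Putting it together, a minimal sketch of the full script with the log-normalized colors (same data as in the question):
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
import matplotlib as mpl

x = np.linspace(-90, 90, 181)
y = np.linspace(-90, 90, 181)
x_grid, y_grid = np.meshgrid(x, y)
z = np.e**x_grid

norm = mpl.colors.LogNorm()   # logarithmic color scaling counters the exponential range of z

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1, projection="3d")
ax.plot_surface(x_grid, y_grid, z, norm=norm, cmap=cm.rainbow)
plt.show()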
| Python matplotlib adjust colormap | This is what I want to create.
This is what I get.
This is the code I have written.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
x = np.linspace(-90, 90, 181)
y = np.linspace(-90, 90, 181)
x_grid, y_grid = np.meshgrid(x, y)
z = np.e**x_grid
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1, projection="3d")
ax.plot_surface(x_grid, y_grid, z, cmap=cm.rainbow)
I also tried to normalize z and the colormap.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
import matplotlib as mpl
x = np.linspace(-90, 90, 181)
y = np.linspace(-90, 90, 181)
x_grid, y_grid = np.meshgrid(x, y)
z = np.e**x_grid
cmap = mpl.cm.rainbow
norm = mpl.colors.Normalize(vmin=0, vmax=1)
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1, projection="3d")
ax.plot_surface(x_grid, y_grid, z/np.max(z), norm=norm, cmap=cm.rainbow)
Question: How can I adjust the colormap to make it less discrete and more continuous for these simultaneously tiny and large values in z?
| [
"Welcome to Stackoverflow!!\nYour problem is related to the fact that you are working with exponential numbers, but you're using a linear colormap. For x=90 you have z=1.2e+39, reaaaally large.\nYou were very close with your second attempt! I just changed 1 line in there, instead of\nnorm = mpl.colors.Normalize(vmin=0, vmax=1)\nI used\nnorm = mpl.colors.LogNorm()\nAnd the result I got was the following:\n\nNow, you can tweak this as much as you like in order to get the colors you want :) Just don't forget that your colormap should be normalized in a logarithmic fashion, so that it counters the exponential behaviour of your function in this case.\nHope this helps!\n"
] | [
1
] | [] | [] | [
"colormap",
"matplotlib",
"python"
] | stackoverflow_0074615755_colormap_matplotlib_python.txt |
Q:
Why values of my specific key and value in a dictionary don't change in Python?
I am trying to handle a dictionary that has a list as the value for a key named 'notes'. I want to find the maximum element in that list, replace the value with that maximum, and also rename the key to 'top_notes', as follows.
Input = top_note({ "name": "John", "notes": [3, 5, 4] })
Output = { "name": "John", "top_note": 5 }.
The output that I am getting is 'None'
Below is my code.
class Solution(object):
def top_notes(self, di):
for key, values in di.items():
if key in di == 'notes':
lt = list(values)
maximum = max(lt)
di['top_notes'] = di['notes']
del di['notes']
di[maximum] = di[values]
del di[values]
return di
if __name__ == '__main__':
p = Solution()
dt = {"name": "John", "notes": [3, 5, 4]}
print(p.top_notes(dt))
A:
Iterating over dict using .items()will yield you a pair of (key, value)
Making a list of a single value...gives list with a single value, max of it returns that single item.
Your whole function body could be:
class Solution:
def top_notes(self, di: dict)->dict:
di["top_note"] = max(di["notes"])
di.pop("notes", None)
return di
Few more words/hints:
This "Solution" class suggests it's some kinda coding task, for that it's fine otherwise just drop it
Think of better name than "di" which means nothing literally
Functions should in general have a verb in a name as "do stuff",
the "stuff" is your object/class/variable name etc.
Use type hints, they save a lot of time when debugging if IDE is set right
A:
There are multiple issues:
if key in di == 'notes' - it can't work as di is a dictionary and not a string. Moreover, pay attention that the result of the expression di == 'notes' is of a boolean type.
You shouldn't mutate (add/remove items) while iterating over a dictionary (or any other iterable).
lt = list(values) - values is already a list, your code checks the maximum of a list of lists which not what you want probably
you don't have to iterate over the key/values of the dict as you can access/add keys/values directly.
I would suggest you to implement top_notes() as following:
def top_notes(self, di):
if 'notes' in di.keys():
di['top_note'] = max(di["notes"])
di.pop("notes")
return di
A:
When you say
if key in di == 'notes':
what happens is that Python chains the two comparisons, so the expression is evaluated as (key in di) and (di == 'notes').
The first part is True since we are looping over the keys of di, but di == 'notes' compares the whole dictionary to a string and is always False, so the condition never holds and the if clause is never entered.
To fix this, just drop the in di part:
if key == 'notes':
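A quick way to see this in the interpreter (illustrative session, not from the original post):
>>> di = {"name": "John", "notes": [3, 5, 4]}
>>> "notes" in di == "notes"        # chained: ("notes" in di) and (di == "notes")
False
>>> "notes" in di and di == "notes" # what the chained form actually means
False
>>> "notes" in di                   # the check you really want
True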
A:
There are some problems in your code. First, start with an empty list for calculating the maximum, then add the values of notes to it one by one, and finally store the maximum of those values under the new key of the dictionary.
You can refer to this code:
class Solution(object):
def top_notes(self, di):
lt=[]
for key, values in di.items():
if key == 'notes':
lt += list(values)
maximum = max(lt)
di['top_notes'] = di['notes']
del di['notes']
di['top_notes'] = maximum
return di
if __name__ == '__main__' :
p = Solution()
dt = {"name":"John", "notes": [3,5,4]}
print(p.top_notes(dt))
A:
You can use dict.pop() to change the key:
class Solution(object):
def top_notes(self, di):
di['top_notes'] = di.pop('notes')
di['top_notes'] = max(di['top_notes'])
return di
Or:
class Solution(object):
def top_notes(self, di):
di['top_notes'] = max(di.get('notes'))
del di['notes']
return di
| Why values of my specific key and value in a dictionary don't change in Python? | I am trying to handle a dictionary that has a list as a value for a key named 'notes' , so I am trying to find the maximum element from that list and reassign the value with that maximum element from the list and also change the key value to 'top_notes' as follows.
Input = top_note({ "name": "John", "notes": [3, 5, 4] })
Output = { "name": "John", "top_note": 5 }.
The output that I am getting is 'None'
Below is my code.
class Solution(object):
def top_notes(self, di):
for key, values in di.items():
if key in di == 'notes':
lt = list(values)
maximum = max(lt)
di['top_notes'] = di['notes']
del di['notes']
di[maximum] = di[values]
del di[values]
return di
if __name__ == '__main__':
p = Solution()
dt = {"name": "John", "notes": [3, 5, 4]}
print(p.top_notes(dt))
| [
"Iterating over dict using .items()will yield you a pair of (key, value)\nMaking a list of a single value...gives list with a single value, max of it returns that single item.\nYour whole function body could be:\nclass Solution:\n def top_notes(self, di: dict)->dict:\n di[\"top_note\"] = max(di[\"notes\"])\n di.pop(\"notes\", None)\n return di\n\nFew more words/hints:\n\nThis \"Solution\" class suggests it's some kinda coding task, for that it's fine otherwise just drop it\nThink of better name than \"di\" which means nothing literally\nFunctions should in general have a verb in a name as \"do stuff\",\nthe \"stuff\" is your object/class/variable name etc.\nUse type hints, they save a lot of time when debugging if IDE is set right\n\n",
"There are multiple issues:\n\nif key in di == 'notes' - it can't work as di is a dictionary and not a string. Moreover, pay attention that the result of the expression di == 'notes' is of a boolean type.\nYou shouldn't mutate (add/remove items) while iterating over a dictionary (or any other iterable).\nlt = list(values) - values is already a list, your code checks the maximum of a list of lists which not what you want probably\nyou don't have to iterate over the key/values of the dict as you can access/add keys/values directly.\n\nI would suggest you to implement top_notes() as following:\n def top_notes(self, di):\n if 'notes' in di.keys():\n di['top_note'] = max(di[\"notes\"])\n di.pop(\"notes\")\n return di\n\n",
"When you say\nif key in di == 'notes':\n\nwhat happens is that Python first evaluates key in di which will be True since we are looping over the keys in the di.\nIt then compares True == 'notes' which will always be False and it will never enter the if clause.\nTo fix this, just drop the in di part:\nif key == 'notes':\n\n",
"there are some problems in your code, firstly you have to take an empty list for calculating the maximum among all, then one by one add the values of notes in list, and then store the maximum of all values in the key of dictionary.\nyou can refer this code:\nclass Solution(object):\n\n def top_notes(self, di):\n lt=[]\n for key, values in di.items():\n if key == 'notes':\n lt += list(values)\n maximum = max(lt)\n di['top_notes'] = di['notes']\n del di['notes']\n di['top_notes'] = maximum\n return di\n\nif __name__ == '__main__' :\n p = Solution()\n dt = {\"name\":\"John\", \"notes\": [3,5,4]}\n print(p.top_notes(dt))\n\n",
"You can use dict.pop() to change the key:\nclass Solution(object):\n def top_notes(self, di):\n di['top_notes'] = di.pop('notes')\n di['top_notes'] = max(di['top_notes'])\n return di\n\nOr:\nclass Solution(object):\n def top_notes(self, di):\n di['top_notes'] = max(di.get('notes'))\n del di['notes']\n return di\n\n"
] | [
1,
1,
0,
0,
0
] | [] | [] | [
"dictionary",
"list",
"python"
] | stackoverflow_0074615463_dictionary_list_python.txt |
Q:
How to avoid selecting parents 'twice' using Roulette Wheel Selection?
I am working on a genetic algorithm in Python, where I want to use roulette wheel selection for selecting parents. However, I came to the conclusion that with my current code it is possible that certain parents are selected multiple times, which I want to avoid.
Here is the first part of my code; the part where I am struggling is the '''Roulette wheel selection''' block.
import numpy as np
import random
import time
import copy
'''Initialisation settings'''
num_jobs = 20 # number of jobs
proc_time = [10, 10, 13, 4, 9, 4, 8, 15, 7, 1, 9, 3, 15, 9, 11, 6, 5, 14, 18, 3]
due_dates = [12, 40, 50, 16, 20, 105, 73, 45, 6, 64, 15, 6, 92, 43, 78, 21, 15, 50, 150, 99]
# inputs
population_size = int(10) # size of the population
crossover_rate = float(0.8)
mutation_rate = float(0.2)
mutation_selection_rate = float(0.5)
num_mutation_jobs = round(num_jobs * mutation_selection_rate)
num_iteration = int(2000) # amount of iterations for the GA
start_time = time.time()
'''----- Generate the initial population -----'''
Tbest = 999999999999999
best_list, best_obj = [], []
population_list = []
for i in range(population_size):
random_num = list(np.random.permutation(num_jobs)) # generate a random permutation of 0 to num_jobs
population_list.append(random_num) # add to the population_list
#print(population_list)
''' Fitness value of the initial population'''
total_chromosome = copy.deepcopy(population_list) #initial population
chrom_fitness, chrom_fit = [], []
total_fitness = 0
num_tardy=0
for i in range(population_size): # solutions (chromosomes)
ptime = 0
tardiness = 0
for j in range(num_jobs): # genes in the chromosome
ptime = ptime + proc_time[total_chromosome[i][j]] # proc time is sum of the processing times of the genes, in the order that the genes appear in the chromosome
tardiness = tardiness + max(ptime - due_dates[total_chromosome[i][j]], 0) # calc tardiness of each gene (job) in a chromosome (sequence/solution)
if ptime >= due_dates[total_chromosome[i][j]]: # if due date is exceeded, the job is tardy
num_tardy = num_tardy + 1
chrom_fitness.append(num_tardy)
chrom_fit.append(num_tardy)
total_fitness = total_fitness + chrom_fitness[i] # total sum of the fitness values of the chromosomes
num_tardy=0
#print('chrom_fitness')
#print(chrom_fitness)
'''Rank the solutions best to worst'''
chrom_fitness_rank = copy.deepcopy(chrom_fitness)
chrom_fitness_rank = np.array(chrom_fitness_rank)
#print(chrom_fitness_rank)
combined = zip(chrom_fitness_rank, population_list)
zip_sort = sorted(combined, key=lambda x: x[0])
chrom_fitness_rank, population_list = map(list,zip(*zip_sort))
#print(chrom_fitness_rank)
#print(population_list)
'''Do the required amount of iterations'''
for n in range(num_iteration):
Tbest_now = 99999999999
'''----------Roulette wheel selection----------'''
parent_list = copy.deepcopy(population_list)
pk, qk = [], []
for i in range(population_size):
pk.append(chrom_fitness[i] / total_fitness) #chrom_fitness/total_fitness for each solution/sequence, relative fitness
cum_prob = [sum(pk[:i + 1]) for i in range(len(pk))] # get cumulative probabilities
parent_number = population_size
chosen = []
for n in range(parent_number):
r=random.random()
for (i, individual) in enumerate(population_list):
if cum_prob[i]>=r:
chosen.append(list(individual))
break
#print(r)
print('choose')
print(chosen)
I thought about setting the fitness value of the chosen individual to a very high value (999999) (in my case a lower fitness value is 'better') so that there is a very low chance that this individual is selected again. However, I am not sure how to do this.
A:
Just keep track which individuals were already selected by putting their indices into a set (already_selected). Then select only when cum_prob[i]>=r and i not in already_selected.
parent_number = population_size
chosen = []
already_selected = set()
for n in range(parent_number):
r=random.random()
for (i, individual) in enumerate(population_list):
if cum_prob[i]>=r and i not in already_selected:
chosen.append(list(individual))
already_selected.add(i)
break
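As an alternative sketch (not part of the answer above): NumPy can draw the whole set of parents without replacement in one call, which avoids the inner loop entirely. This reuses the pk list of selection probabilities from the question and assumes there are enough non-zero probabilities to draw parent_number distinct parents:
import numpy as np

pk = np.array(pk)
pk = pk / pk.sum()   # make sure the probabilities sum to exactly 1

chosen_idx = np.random.choice(len(population_list), size=parent_number,
                              replace=False, p=pk)
chosen = [list(population_list[i]) for i in chosen_idx]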
| How to avoid selecting parents 'twice' using Roulette Wheel Selection? | I am working on a genetic algorithm in Python, where I want to use the roulette wheel selection for selecting parents. However, I came to the conclusion that with my current code, it is possible that certain parents are selected multiple times, however I want to avoid this.
Here is the first part of my code: The part where I am struggling is in ' roulette wheel selection'''.
import numpy as np
import random
import time
import copy
'''Initialisation settings'''
num_jobs = 20 # number of jobs
proc_time = [10, 10, 13, 4, 9, 4, 8, 15, 7, 1, 9, 3, 15, 9, 11, 6, 5, 14, 18, 3]
due_dates = [12, 40, 50, 16, 20, 105, 73, 45, 6, 64, 15, 6, 92, 43, 78, 21, 15, 50, 150, 99]
# inputs
population_size = int(10) # size of the population
crossover_rate = float(0.8)
mutation_rate = float(0.2)
mutation_selection_rate = float(0.5)
num_mutation_jobs = round(num_jobs * mutation_selection_rate)
num_iteration = int(2000) # amount of iterations for the GA
start_time = time.time()
'''----- Generate the initial population -----'''
Tbest = 999999999999999
best_list, best_obj = [], []
population_list = []
for i in range(population_size):
random_num = list(np.random.permutation(num_jobs)) # generate a random permutation of 0 to num_jobs
population_list.append(random_num) # add to the population_list
#print(population_list)
''' Fitness value of the initial population'''
total_chromosome = copy.deepcopy(population_list) #initial population
chrom_fitness, chrom_fit = [], []
total_fitness = 0
num_tardy=0
for i in range(population_size): # solutions (chromosomes)
ptime = 0
tardiness = 0
for j in range(num_jobs): # genes in the chromosome
ptime = ptime + proc_time[total_chromosome[i][j]] # proc time is sum of the processing times of the genes, in the order that the genes appear in the chromosome
tardiness = tardiness + max(ptime - due_dates[total_chromosome[i][j]], 0) # calc tardiness of each gene (job) in a chromosome (sequence/solution)
if ptime >= due_dates[total_chromosome[i][j]]: # if due date is exceeded, the job is tardy
num_tardy = num_tardy + 1
chrom_fitness.append(num_tardy)
chrom_fit.append(num_tardy)
total_fitness = total_fitness + chrom_fitness[i] # total sum of the fitness values of the chromosomes
num_tardy=0
#print('chrom_fitness')
#print(chrom_fitness)
'''Rank the solutions best to worst'''
chrom_fitness_rank = copy.deepcopy(chrom_fitness)
chrom_fitness_rank = np.array(chrom_fitness_rank)
#print(chrom_fitness_rank)
combined = zip(chrom_fitness_rank, population_list)
zip_sort = sorted(combined, key=lambda x: x[0])
chrom_fitness_rank, population_list = map(list,zip(*zip_sort))
#print(chrom_fitness_rank)
#print(population_list)
'''Do the required amount of iterations'''
for n in range(num_iteration):
Tbest_now = 99999999999
'''----------Roulette wheel selection----------'''
parent_list = copy.deepcopy(population_list)
pk, qk = [], []
for i in range(population_size):
pk.append(chrom_fitness[i] / total_fitness) #chrom_fitness/total_fitness for each solution/sequence, relative fitness
cum_prob = [sum(pk[:i + 1]) for i in range(len(pk))] # get cumulative probabilities
parent_number = population_size
chosen = []
for n in range(parent_number):
r=random.random()
for (i, individual) in enumerate(population_list):
if cum_prob[i]>=r:
chosen.append(list(individual))
break
#print(r)
print('choose')
print(chosen)
I thought about setting the fitness value of the chosen individual to a very high value (999999) (in my case a lower fitness value is 'better') so that there is a very low chance that this individual is selected again. However, I am not sure how to do this.
| [
"Just keep track which individuals were already selected by putting their indices into a set (already_selected). Then select only when cum_prob[i]>=r and i not in already_selected.\nparent_number = population_size\nchosen = []\nalready_selected = set()\nfor n in range(parent_number):\n r=random.random()\n for (i, individual) in enumerate(population_list):\n if cum_prob[i]>=r and i not in already_selected:\n chosen.append(list(individual))\n already_selected.add(i)\n break\n\n"
] | [
0
] | [] | [] | [
"genetic_algorithm",
"python",
"roulette_wheel_selection"
] | stackoverflow_0074616090_genetic_algorithm_python_roulette_wheel_selection.txt |
Q:
CadQuery: Selecting an edge by index (Filleting specific edges)
I come from the engineering CAD world and I'm creating some designs in CadQuery. What I want to do is this (pseudocode):
edges = part.edges()
edges[n].fillet(r)
Or ideally have the ability to do something like this (though I can't find any methods for edge properties). Pseudocode:
edges = part.edges()
for edge in edges:
if edge.length() > x:
edge.fillet(a)
else:
edge.fillet(b)
This would be very useful when a design contains non-orthogonal faces. I understand that I can select edges with selectors, but I find them unnecessarily complicated, and they work best with orthogonal faces. FreeCAD lets you treat edges as a list.
I believe there might be a method to select the closest edge to a point, but I can't seem to track it down.
If someone can provide guidance that would be great -- thank you!
Bonus question: Is there a way to return coordinates of geometry as a list or vector? e.g.:
origin = cq.workplane.center().val
>> [x,y,z]
(or something like the above)
A:
Take a look at this code, i hope this will be helpful.
import cadquery as cq
plane1 = cq.Workplane()
block = plane1.rect(10,12).extrude(10)
edges = block.edges("|Z")
filleted_block = edges.all()[0].fillet(0.5)
show(filleted_block)
A:
For posterity: to select multiple edges, e.g. for chamfering, you can use newObject() on Workplane. The argument is a list of edges (they have to be cq.occ_impl.shapes.Edge instances, NOT cq.Workplane instances).
import cadquery as cq
model = cq.Workplane().box(10, 10, 5)
edges = model.edges()
# edges.all() returns worplanes, we have to get underlying geometry
selected = list(map(lambda x: x.objects[0], edges.all()))
model_with_chamfer = model.newObject(selected).chamfer(1)
To get edge length you can do something like this:
edge = model.edges().all()[0] # This select one 'random' edge
length = edge.objects[0].Length()
edge.Length() doesn't work since edge is a Workplane instance, not a geometry instance.
To get edges of a certain length you can just create a dict with edge geometry and length and filter it using Python's builtin filter(). Here is a snippet of my implementation for chamfering short edges on the topmost face:
top_edges = model.edges(">Z and #Z")
def get_length(edge):
try:
return edge.vals()[0].Length()
except Exception:
return 0.0
# Inside edges are shorter - filter only those
edge_len_list = list(map(
lambda x: (x.objects[0], get_length(x)),
top_edges.all()))
avg = mean([a for _, a in edge_len_list])
selected = filter(lambda x: x[1] < avg, edge_len_list)
selected = [e for e, _ in selected]
vertical_edges = model.edges("|Z").all()
selected.extend(vertical_edges)
model = model.newObject(selected)
model = model.chamfer(chamfer_size)
| CadQuery: Selecting an edge by index (Filleting specific edges) | I come from the engineering CAD world and I'm creating some designs in CadQuery. What I want to do is this (pseudocode):
edges = part.edges()
edges[n].fillet(r)
Or ideally have the ability to do something like this (though I can't find any methods for edge properties). Pseudocode:
edges = part.edges()
for edge in edges:
if edge.length() > x:
edge.fillet(a)
else:
edge.fillet(b)
This would be very useful when a design contains non-orthogonal faces. I understand that I can select edges with selectors, but I find them unnecessarily complicated and work best with orthogonal faces. FreeCAD lets you treat edges as a list.
I believe there might be a method to select the closest edge to a point, but I can't seem to track it down.
If someone can provide guidance that would be great -- thank you!
Bonus question: Is there a way to return coordinates of geometry as a list or vector? e.g.:
origin = cq.workplane.center().val
>> [x,y,z]
(or something like the above)
| [
"Take a look at this code, i hope this will be helpful.\nimport cadquery as cq\n\nplane1 = cq.Workplane()\n\nblock = plane1.rect(10,12).extrude(10)\n\nedges = block.edges(\"|Z\")\n\nfilleted_block = edges.all()[0].fillet(0.5)\n\nshow(filleted_block)\n\n",
"For the posterity. To select multiple edges eg. for chamfering you can use newObject() on Workplane. The argument is a list of edges (they have to be cq.occ_impl.shapes.Edge instances, NOT cq.Workplane instances).\nimport cadquery as cq\n\nmodel = cq.Workplane().box(10, 10, 5)\nedges = model.edges()\n\n# edges.all() returns worplanes, we have to get underlying geometry\nselected = list(map(lambda x: x.objects[0], edges.all()))\n\nmodel_with_chamfer = model.newObject(selected).chamfer(1)\n\nTo get edge length you can do something like this:\nedge = model.edges().all()[0] # This select one 'random' edge\nlength = edge.objects[0].Length()\n\nedge.Length() doesn't work since edge is Workplane instance, not geometry instance.\nTo get edges of certain length you can just create dict with edge geometry and length and filter it using builtin python's filter(). Here is a snippet of my implementation for chamfering short edges on topmost face:\ntop_edges = model.edges(\">Z and #Z\")\n\ndef get_length(edge):\n try:\n return edge.vals()[0].Length()\n except Exception:\n return 0.0\n\n# Inside edges are shorter - filter only those\nedge_len_list = list(map(\n lambda x: (x.objects[0], get_length(x)),\n top_edges.all()))\navg = mean([a for _, a in edge_len_list])\nselected = filter(lambda x: x[1] < avg, edge_len_list)\nselected = [e for e, _ in selected]\n\nvertical_edges = model.edges(\"|Z\").all()\nselected.extend(vertical_edges)\n\nmodel = model.newObject(selected)\nmodel = model.chamfer(chamfer_size)\n\n"
] | [
1,
0
] | [] | [] | [
"cad",
"cadquery",
"python"
] | stackoverflow_0072142702_cad_cadquery_python.txt |
Q:
Pythonic Way to create a Error Class from Exception
I am working on a project where I want to raise an error, and so far I have been creating a new class each time I need a new exception. I am staying away from generic / builtin errors as they are less descriptive for the purpose I need them for.
I came up with a solution but I am not sure if it is a pythonic way to create an instance.
def CustomError(name: str, message: str):
def constructor(self, msg=message):
self.message = msg
return
def repr_method(self):
return self.message
error = type(name,
(Exception, object),
{'__init__': constructor,
'__str__': repr_method})
return error(msg=message)
This does the trick, as I can raise this error with a custom name and message instead of creating a new class every time. Can someone please let me know if this actually is a pythonic way; if not, what is the recommended way?
I am expecting someone to confirm that this is actually a valid way of using higher order functions for creating a class instance or point me to a recommended way of doing it.
A:
I wouldn't say it's a pythonic way to be honest. Python has initializers, not "constructors", the constructor would be the magic method __new__ if you really needed that (which is rare).
For customizable initializers the decorator @classmethod (multiple factory methods) is the way to go.
Also __str__ and __repr__ are not the same (neither is their purpose).
Also most often you actually should define a new class for each type of custom exception (and inherit from the Exception class).
class MyCustomException(Exception):
def __init__(self, name:str, message:str)->None:
super().__init__(message)
self.name = name
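A short usage sketch of that class (the name and message strings here are just for illustration):
try:
    raise MyCustomException("config-parser", "could not parse the input file")
except MyCustomException as exc:
    print(exc.name, "-", exc)   # prints: config-parser - could not parse the input file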
| Pythonic Way to create a Error Class from Exception | I am working on a Project where I want to raise a error and I have been creating class each time I need a new Exception. I am staying away from generic / builtin errors as they are less descriptive for the purpose I need them for.
I came up with a solution but I am not sure if it is a pythonic way to create an instance.
def CustomError(name: str, message: str):
def constructor(self, msg=message):
self.message = msg
return
def repr_method(self):
return self.message
error = type(name,
(Exception, object),
{'__init__': constructor,
'__str__': repr_method})
return error(msg=message)
This does the trick as I can raise this error with a custom name and message instead of creating a new class every time. Please can someone let me know if this actually is a pythonic way if not then what is the recommended way?
I am expecting someone to confirm that this is actually a valid way of using higher order functions for creating a class instance or point me to a recommended way of doing it.
| [
"I wouldn't say it's a pythonic way to be honest. Python has initializers, not \"constructors\", the constructor would be the magic method __new__ if you really needed that (which is rare).\nFor customizable initializers the decorator @classmethod (multiple factory methods) is the way to go.\nAlso __str__ and __repr__ are not the same (neither is their purpose).\nAlso most often you actually should define a new class for each type of custom exception (and inherit from the Exception class).\nclass MyCustomException(Exception):\n def __init__(self, name:str, message:str)->None:\n super().__init__(message) \n self.name = name\n\n"
] | [
3
] | [] | [] | [
"error_handling",
"python"
] | stackoverflow_0074615061_error_handling_python.txt |
Q:
Bad Request 400 when uploading file to Flask
I have a Flask server that looks like this:
import flask, os, werkzeug.utils
UPLOAD_FOLDER = "files/"
ALLOWED_EXTENSIONS = {"txt"}
def isFileAllowed(file):
return str("." in file and file.rsplit(".", 1)[1].lower() in ALLOWED_EXTENSIONS)
app = flask.Flask(__name__)
app.config["UPLOAD_DIR"] = UPLOAD_FOLDER
@app.route("/log_receiver", methods = ["POST", "GET"])
def log_receiver():
if flask.request.method == "POST":
file = flask.request.files["file"]
if file and isFileAllowed(file):
filename = werkzeug.utils.secure_filename(file)
file.save(os.path.join(app.config["UPLOAD_DIR"], filename))
return "Sucessfully uploaded"
return "File couldn't be uploaded"
return "404, not found"
if __name__ == '__main__':
app.run()
I've made a test uploader, also in Python, that looks like this:
import requests
def log_uploader():
with open("log.txt", "rb") as log:
r = requests.post("http://localhost:5000/log_receiver", files={"log.txt": log})
print(r.text)
if __name__ == '__main__':
log_uploader()
The issue is, whenever I run it, I get a 400 error.
I've tried removing the ["file"] in flask.request.files, which removes the 400 error but brings in a 500 error with the following log:
[2022-11-29 16:05:46,547] ERROR in app: Exception on /log_receiver [POST]
Traceback (most recent call last):
File "/home/day/.local/lib/python3.9/site-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "/home/day/.local/lib/python3.9/site-packages/flask/app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/day/.local/lib/python3.9/site-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/home/day/.local/lib/python3.9/site-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/home/day/Desktop/Coding/Quickraft/Server-side/APIs/main.py", line 17, in log_receiver
filename = werkzeug.utils.secure_filename(file)
File "/home/day/.local/lib/python3.9/site-packages/werkzeug/utils.py", line 221, in secure_filename
filename = unicodedata.normalize("NFKD", filename)
TypeError: normalize() argument 2 must be str, not ImmutableMultiDict
127.0.0.1 - - [29/Nov/2022 16:05:46] "POST /log_receiver HTTP/1.1" 500 -
How could I fix this? Thanks in advance,
Daymons.
A:
See again the flask documentation for request.files
Each value in files is a Werkzeug FileStorage object
It is not just the filename but much more than that. The error message is telling you something: That secure_filename() expects a string, but you are passing it something that isn't a string.
Have a look again at the flask documentation for file uploads: https://flask.palletsprojects.com/en/2.2.x/patterns/fileuploads/
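A hedged sketch of how the two pieces could fit together, based only on the code in the question: the key used in request.files must match the key the client sends (here "file"), and secure_filename() needs the filename string of the FileStorage object, not the object itself.
# server side (inside the POST branch)
file = flask.request.files["file"]              # FileStorage object
if file and isFileAllowed(file.filename):       # check the *name*, not the object
    filename = werkzeug.utils.secure_filename(file.filename)
    file.save(os.path.join(app.config["UPLOAD_DIR"], filename))
    return "Successfully uploaded"
return "File couldn't be uploaded"

# client side: the dict key is the form field name, the tuple holds (filename, fileobject)
with open("log.txt", "rb") as log:
    r = requests.post("http://localhost:5000/log_receiver",
                      files={"file": ("log.txt", log)})

Note also that isFileAllowed() in the question wraps its boolean result in str(), which makes even a False result truthy; returning the bare boolean is safer.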
| Bad Request 400 when uploading file to Flask | I have a Flask server that looks like this:
import flask, os, werkzeug.utils
UPLOAD_FOLDER = "files/"
ALLOWED_EXTENSIONS = {"txt"}
def isFileAllowed(file):
return str("." in file and file.rsplit(".", 1)[1].lower() in ALLOWED_EXTENSIONS)
app = flask.Flask(__name__)
app.config["UPLOAD_DIR"] = UPLOAD_FOLDER
@app.route("/log_receiver", methods = ["POST", "GET"])
def log_receiver():
if flask.request.method == "POST":
file = flask.request.files["file"]
if file and isFileAllowed(file):
filename = werkzeug.utils.secure_filename(file)
file.save(os.path.join(app.config["UPLOAD_DIR"], filename))
return "Sucessfully uploaded"
return "File couldn't be uploaded"
return "404, not found"
if __name__ == '__main__':
app.run()
I've made a test uploader, also in Python, that looks like this:
import requests
def log_uploader():
with open("log.txt", "rb") as log:
r = requests.post("http://localhost:5000/log_receiver", files={"log.txt": log})
print(r.text)
if __name__ == '__main__':
log_uploader()
The issue is, whenever I run it, I get a 404 error.
I've tried removing the ["file"] in flask.request.files, which removes the 400 error but brings in a 500 error with the following log:
[2022-11-29 16:05:46,547] ERROR in app: Exception on /log_receiver [POST]
Traceback (most recent call last):
File "/home/day/.local/lib/python3.9/site-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "/home/day/.local/lib/python3.9/site-packages/flask/app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/day/.local/lib/python3.9/site-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/home/day/.local/lib/python3.9/site-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/home/day/Desktop/Coding/Quickraft/Server-side/APIs/main.py", line 17, in log_receiver
filename = werkzeug.utils.secure_filename(file)
File "/home/day/.local/lib/python3.9/site-packages/werkzeug/utils.py", line 221, in secure_filename
filename = unicodedata.normalize("NFKD", filename)
TypeError: normalize() argument 2 must be str, not ImmutableMultiDict
127.0.0.1 - - [29/Nov/2022 16:05:46] "POST /log_receiver HTTP/1.1" 500 -
How could I fix this? Thanks in advance,
Daymons.
| [
"See again the flask documentation for request.files\n\nEach value in files is a Werkzeug FileStorage object\n\nIt is not just the filename but much more than that. The error message is telling you something: That secure_filename() expects a string, but you are passing it something that isn't a string.\nHave a look again at the flask documentation for file uploads: https://flask.palletsprojects.com/en/2.2.x/patterns/fileuploads/\n"
] | [
0
] | [] | [] | [
"file",
"flask",
"python"
] | stackoverflow_0074616137_file_flask_python.txt |
Q:
How to merge two data frames on the basis of values present in multiple columns in pandas?
I have the following two data frames.
Column A    Column B
id1         name1
id2         name2
id3         name3

Column X1    Column X2    Column Y
name1        name4        company1
name6        name2        company2
name3        name8        company3
I want to merge the above two on the basis of names to get the final data frame like given below:
Column A    Column B    New_Column    Column Y
id1         name1       name4         company1
id2         name2       name6         company2
id3         name3       name8         company3
How can I do this using pandas or any other way in python?
Thanks a lot!
I tried using a for loop to loop over the data frame to look for the values of Column B individually in the two columns (Column X1 and Column X2) and adding the rows in a new data frame. However, I am looking for a more efficient way to do it.
A:
join each time with one column and then union all the results together:
out1 = df1.merge(df2, left_on=['Column B'], right_on=['Column X1'])
out2 = df1.merge(df2, left_on=['Column B'], right_on=['Column X2'])
out = pd.concat([out1, out2], ignore_index=True)
out['Column X1'].loc[out['Column B'] == out['Column X1']] = out['Column X2']
out = out.drop('Column X2', axis = 1).rename({'Column X1':'New_Column'}, axis = 1)
print(out)
output:
Column A Column B New_Column Column Y
0 id1 name1 name4 company1
1 id3 name3 name8 company3
2 id2 name2 name6 company2
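Another way to express the same join without the post-hoc column fix-up (a sketch assuming the column names from the question) is to build one long lookup table where each name/partner pair appears once in each direction and merge a single time:
matches = pd.concat([
    df2.rename(columns={"Column X1": "Column B", "Column X2": "New_Column"}),
    df2.rename(columns={"Column X2": "Column B", "Column X1": "New_Column"}),
], ignore_index=True)

out = df1.merge(matches, on="Column B", how="inner")
out = out[["Column A", "Column B", "New_Column", "Column Y"]]
print(out)

Rows of the lookup table whose "Column B" value never appears in df1 simply don't join, so only the three matched rows remain.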
| How to merge two data frames on the basis of values present in multiple columns in pandas? | I have the following two data frames.
Column A
Column B
id1
name1
id2
name2
id3
name3
Column X1
Column X2
Column Y
name1
name4
company1
name6
name2
company2
name3
name8
company3
I want to merge the above two on the basis of names to get the final data frame like given below:
Column A
Column B
New_Column
Column Y
id1
name1
name4
company1
id2
name2
name6
company2
id3
name3
name8
company3
How can I do this using pandas or any other way in python?
Thanks a lot!
I tried using a for loop to loop over the data frame to look for the values of Column B individually in the two columns (Column X1 and Column X2) and adding the rows in a new data frame. However, I am looking for a more efficient way to do it.
| [
"join each time with one column and then union all the results together:\nout1 = df1.merge(df2, left_on=['Column B'], right_on=['Column X1'])\nout2 = df1.merge(df2, left_on=['Column B'], right_on=['Column X2'])\nout = pd.concat([out1, out2], ignore_index=True)\n\nout['Column X1'].loc[out['Column B'] == out['Column X1']] = out['Column X2']\nout = out.drop('Column X2', axis = 1).rename({'Column X1':'New_Column'}, axis = 1)\nprint(out)\n\noutput:\n Column A Column B New_Column Column Y\n0 id1 name1 name4 company1\n1 id3 name3 name8 company3\n2 id2 name2 name6 company2\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074616120_dataframe_pandas_python.txt |
Q:
How to add a seconds to a timestamp value which is in your another variable/dataframe
lv_seconds_back = mv_time_horizon.select(col("max(time_horizon)") * 60).show()
mv_now =spark.sql("select from_unixtime(unix_timestamp()) as mv_now")
local_date_time =mv_now.select(date_format('mv_now', 'HH:mm:ss').alias("local_date_time"))
lv_start =local_date_time.select(col("local_date_time") - expr("INTERVAL $lv_seconds_back seconds"))
How do I subtract the number of seconds stored in the lv_seconds_back variable in lv_start?
I tried using expr with an "INTERVAL ... seconds" expression, but it won't take the variable, only a literal number.
Also, if I need to use that lv_start in the query, how do I do that?
mt_cache_fauf_r_2= spark.sql("select mt_cache_fauf_r_temp from mt_cache_fauf_r_temp where RM_ZEITPUNKT>= ${lv_start} & RM_ZEITPUNKT <= ${lv_end}")
This doesn't work
A:
Maybe you should remove .show() so that your column is captured in lv_seconds_back variable.
A:
Your first line has two issues. The first one: col("max(time_horizon)") cannot work because the col function expects a column name. Either do expr("max(time_horizon)") or max(col("time_horizon")). Then, the show function displays part of the dataframe but does not return anything. Therefore it does not make sense to assign the result of show to a variable.
If you remove show and with the col call, the result is a dataframe with one row of one element. The first function can get you the Row object from which you can access its only element like this:
lv_seconds_back = mv_time_horizon.select(F.expr("max(time_horizon)") * 60).first()[0]
Then, if you want to substract that value from a timestamp, do it before you convert it to a string:
mv_now = spark.sql(f"select from_unixtime(unix_timestamp() - {lv_seconds_back}) as mv_now")
| How to add a seconds to a timestamp value which is in your another variable/dataframe | lv_seconds_back = mv_time_horizon.select(col("max(time_horizon)") * 60).show()
mv_now =spark.sql("select from_unixtime(unix_timestamp()) as mv_now")
local_date_time =mv_now.select(date_format('mv_now', 'HH:mm:ss').alias("local_date_time"))
lv_start =local_date_time.select(col("local_date_time") - expr("INTERVAL $lv_seconds_back seconds"))
How do i substract no of seconds which is in lv_seconds_back variable in the lv start
I tried using expr(interval seconds) but it wont take the variable but takes number.
Also if I need too add that lv_start in the query how do i do that
mt_cache_fauf_r_2= spark.sql("select mt_cache_fauf_r_temp from mt_cache_fauf_r_temp where RM_ZEITPUNKT>= ${lv_start} & RM_ZEITPUNKT <= ${lv_end}")
This doesn't work
| [
"Maybe you should remove .show() so that your column is captured in lv_seconds_back variable.\n",
"Your first line has two issues. The first one: col(\"max(time_horizon)\") cannot work because the col function expects a column name. Either do expr(\"max(time_horizon)\") or max(col(\"time_horizon\")). Then, the show function displays part of the dataframe but does not return anything. Therefore it does not make sense to assign the result of show to a variable.\nIf you remove show and with the col call, the result is a dataframe with one row of one element. The first function can get you the Row object from which you can access its only element like this:\nlv_seconds_back = mv_time_horizon.select(F.expr(\"max(time_horizon)\") * 60).first()[0]\n\nThen, if you want to substract that value from a timestamp, do it before you convert it to a string:\nmv_now = spark.sql(f\"select from_unixtime(unix_timestamp() - {lv_seconds_back}) as mv_now\")\n\n"
] | [
0,
0
] | [] | [] | [
"apache_spark",
"dataframe",
"pyspark",
"python",
"sql"
] | stackoverflow_0074613675_apache_spark_dataframe_pyspark_python_sql.txt |
Q:
Turning a vector of length n squared into a matrix of size n times n python
In python I have a numpy vector array v of length n squared. I want to make it into a matrix M of size n times n, by laying the elements of v out into n rows, so that the first n elements of v comprise the first row of M, and similarly the i-th block of n elements of v comprises the i-th row of M.
I tried using numpy reshape, but as I am completely new to python I couldn't figure out how this is done. How can the above be done?
A:
You're definitely on the right track, you want to use np.reshape() here by doing
M = v.reshape(n, n)
or
M = np.reshape(v, (n, n))
Note the extra parentheses in the second case, they are important for it to work right because you are passing the tuple (n,n) as an argument.
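A tiny concrete example with n = 3:
import numpy as np

n = 3
v = np.arange(n * n)       # [0 1 2 3 4 5 6 7 8]
M = v.reshape(n, n)
print(M)
# [[0 1 2]
#  [3 4 5]
#  [6 7 8]]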
| Turning a vector of length n squared into a matrix of size n times n python | In python I have a numpy vector array v of length n squared. I want to make it into a matrix M of size n times n, by laying the elements of n into n rows, so the first n elements of v comprise the first row of M, similarly the i-th n elements of v comprise the i-th row of M.
I tired using numpy reshape, but as I am completely new to python I couldn't figure out how this is done. How can the above be done?
| [
"You're definitely on the right track, you want to use np.reshape() here by doing\nM = M.reshape(n,n)\n\nor\nM = np.reshape(M, (n,n))\n\nNote the extra parentheses in the second case, they are important for it to work right because you are passing the tuple (n,n) as an argument.\n"
] | [
0
] | [] | [] | [
"arrays",
"numpy",
"python",
"reshape",
"vector"
] | stackoverflow_0074615993_arrays_numpy_python_reshape_vector.txt |
Q:
Why wont my display update with my background? The window just opens black. Pygame
I'm trying to learn OOP but my pygame window won't update with the background I'm trying to put in. The gameObject class is in another file. Filling it with white color also isn't working and I don't know why. I was able to display a background in another project I did but I can't now and I have no idea what's different. I have compared the code and they seem like they should be doing the same thing.
gameObject.py
import pygame
class GameObject:
def __init__(self, x, y, width, height, image_path):
self.background= pygame.image.load(image_path)
self.background = pygame.transform.scale(self.background, (width, height))
self.x = x
self.y = y
self.width = width
self.height = height
main.py
import pygame
from gameObject import GameObject
pygame.init()
class Player(GameObject):
def __init__(self, x, y, width, height, image_path, speed):
super().__init__(x, y, width, height, image_path)
self.speed = speed
def move(self, direction, max_height):
if (self.y >= max_height - self.height and direction > 0) or (self.y <= 0 and direction < 0):
return
self.y += (direction * self.speed)
class Game:
def __init__(self):
self.width = 800
self.height = 800
self.color = (255, 255, 255)
self.game_window = pygame.display.set_mode((self.width, self.height))
self.clock = pygame.time.Clock()
self.background = GameObject(0, 0, self.width, self.height, 'assets/background.png')
self.player1 = Player(375, 700, 50, 50, 'assets/player.png', 10)
self.level = 1.0
def draw_objects(self):
self.game_window.fill(self.white_color)
self.game_window.blit(self.background.image, (self.background.x, self.background.y))
pygame.display.update()
def run_game_loop(self):
gameRunning = True
while gameRunning:
for event in pygame.event.get():
if event.type == pygame.QUIT:
gameRunning = False
if gameRunning == False:
pygame.quit()
self.draw_objects()
self.clock.tick(60)
game = Game()
game.run_game_loop()
quit()
I have tried doing basic research on it and looking at other code that uses a custom background with pygame.
A:
It is a matter of indentation. self.draw_objects() must be called in the application loop not after the application loop:
class Game:
# [...]
def run_game_loop(self):
gameRunning = True
while gameRunning:
for event in pygame.event.get():
if event.type == pygame.QUIT:
gameRunning = False
if gameRunning == False:
pygame.quit()
# INDENTATION
#-->|
self.draw_objects()
self.clock.tick(60)
A:
Your loop never actually does anything but clear the event queue looking for pygame.QUIT.
You need to indent the calls to self.draw_objects() and self.clock.tick(60) so they are inside the loop.
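A side note beyond the indentation fix: once draw_objects() actually runs every frame, two attribute names in the question's code will still raise AttributeError, because Game defines self.color (not self.white_color) and GameObject stores the scaled image in self.background (not self.image). A corrected sketch of that method:
def draw_objects(self):
    self.game_window.fill(self.color)
    self.game_window.blit(self.background.background,
                          (self.background.x, self.background.y))
    pygame.display.update()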
| Why wont my display update with my background? The window just opens black. Pygame | I'm trying to learn OOP but my pygame window wont update with the background I'm trying to put in. The gameObject class is in another file. Filling it with white color also isn't working and I don't know why. I was able to display a background on another project I did but I cant now and I have no idea what's different. I have compared the code and they seem like they should be doing the same thing.
gameObject.py
import pygame
class GameObject:
def __init__(self, x, y, width, height, image_path):
self.background= pygame.image.load(image_path)
self.background = pygame.transform.scale(self.background, (width, height))
self.x = x
self.y = y
self.width = width
self.height = height
main.py
import pygame
from gameObject import GameObject
pygame.init()
class Player(GameObject):
def __init__(self, x, y, width, height, image_path, speed):
super().__init__(x, y, width, height, image_path)
self.speed = speed
def move(self, direction, max_height):
if (self.y >= max_height - self.height and direction > 0) or (self.y <= 0 and direction < 0):
return
self.y += (direction * self.speed)
class Game:
def __init__(self):
self.width = 800
self.height = 800
self.color = (255, 255, 255)
self.game_window = pygame.display.set_mode((self.width, self.height))
self.clock = pygame.time.Clock()
self.background = GameObject(0, 0, self.width, self.height, 'assets/background.png')
self.player1 = Player(375, 700, 50, 50, 'assets/player.png', 10)
self.level = 1.0
def draw_objects(self):
self.game_window.fill(self.white_color)
self.game_window.blit(self.background.image, (self.background.x, self.background.y))
pygame.display.update()
def run_game_loop(self):
gameRunning = True
while gameRunning:
for event in pygame.event.get():
if event.type == pygame.QUIT:
gameRunning = False
if gameRunning == False:
pygame.quit()
self.draw_objects()
self.clock.tick(60)
game = Game()
game.run_game_loop()
quit()
I have tried basic research on it and looking at other code that uses a custom background with pygame
| [
"It is a matter of indentation. self.draw_objects() must be called in the application loop not after the application loop:\nclass Game:\n # [...]\n\n def run_game_loop(self):\n\n gameRunning = True\n while gameRunning:\n for event in pygame.event.get():\n if event.type == pygame.QUIT:\n gameRunning = False\n if gameRunning == False:\n pygame.quit()\n\n # INDENTATION\n #-->|\n \n self.draw_objects()\n self.clock.tick(60)\n\n",
"Your loop never actually does anything but clear the event queue looking for pygame.QUIT.\nYou need to indent the calls to self.draw_objects() and self.clock.tick(60) so they are inside the loop.\n"
] | [
1,
0
] | [] | [] | [
"pygame",
"python"
] | stackoverflow_0074615885_pygame_python.txt |
Q:
Dataframe group by only groups with 2 or more rows
Is there a way to groupby only groups with 2 or more rows?
Or can I delete the groups that contain only 1 row from a grouped dataframe?
Thank you very much for your help!
A:
Yes there is a way. Here is an example below
df = pd.DataFrame(
np.array([['A','A','B','C','C'],[1,2,1,1,2]]).T
, columns= ['type','value']
)
groups = df.groupby('type')
groups_without_single_row_df = [g for g in groups if len(g[1]) > 1]
Iterating over the groupby result yields tuples.
Here, 'type' (A, B or C) is the first element of each tuple and the sub-dataframe is the second element.
You can check the length of each sub-dataframe with len(), as in [g for g in groups if len(g[1]) > 1], where we check the length of the second element of the tuple.
If the len() is greater than 1, the group is included in the output list.
Hope it helps
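If you want the result back as a single DataFrame rather than a list of per-group frames, pandas also has GroupBy.filter for exactly this, e.g.:
df_filtered = df.groupby('type').filter(lambda g: len(g) > 1)
# keeps the rows of groups A and C, drops group B (which has only one row)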
| Dataframe group by only groups with 2 or more rows | Is there a way to groupby only groups with 2 or more rows?
Or can I delete groups from a grouped dataframe that contains only 1 row?
Thank you very much for your help!
| [
"Yes there is a way. Here is an example below\ndf = pd.DataFrame(\n np.array([['A','A','B','C','C'],[1,2,1,1,2]]).T\n , columns= ['type','value']\n )\ngroups = df.groupby('type')\ngroups_without_single_row_df = [g for g in groups if len(g[1]) > 1]\n\ngroupby return a list of tuples.\nHere, 'type' (A, b or C) is the first element of the tuple and the subdataframe the second element.\nYou can check length of each subdataframe with len() as in [g for g in groups if len(g[1]) > 1] where we check the lengh of the second element of the tuple.\nIf the the len() is greater than 1, it is include in the ouput list.\nHope it helps\n"
] | [
0
] | [] | [] | [
"dataframe",
"group_by",
"python"
] | stackoverflow_0074577647_dataframe_group_by_python.txt |
Q:
Scan paired cells of two columns for the same pattern using Python
I'm a Python beginner and would like to learn how to use it for operations on text files. I have an input txt file of 4 columns separated by TAB, and I want to search whether, row by row, the cell pairs in columns 1 and 4 simultaneously contain the pattern "BBB" or "CCC". If true, send the whole line to output1. If false, send the whole line to output2.
This is the input.txt:
more input.txt
AABBBAA 2 5 AACCCAA
AAAAAAA 4 10 AAAAAAA
AABBBAA 6 15 AABBBAA
AAAAAAA 8 20 AAAAAAA
AACCCAA 10 25 AACCCAA
AAAAAAA 12 30 AAAAAAA
This is the Python code I wrote:
more main.py
import sys
input = open(sys.argv[1], "r")
output1 = open(sys.argv[2], "w")
output2 = open(sys.argv[3], "w")
list = ["BBB", "CCC"]
for line in input:
for item in list:
if item in line.split("\t")[0] and item in line.split("\t")[3]:
output1.write(line)
else:
output2.write(line)
input.close()
output1.close()
output2.close()
Command:
python main.py input.txt output1.txt output2.txt
output1.txt is correct
more output1.txt
AABBBAA 6 15 AABBBAA
AACCCAA 10 25 AACCCAA
output2 is incorrect. I'm trying to understand why it contains the lines from output1.txt as well as duplicate copies of the other lines.
more output2.txt
AABBBAA 2 5 AACCCAA
AABBBAA 2 5 AACCCAA
AAAAAAA 4 10 AAAAAAA
AAAAAAA 4 10 AAAAAAA
AABBBAA 6 15 AABBBAA
AAAAAAA 8 20 AAAAAAA
AAAAAAA 8 20 AAAAAAA
AACCCAA 10 25 AACCCAA
AAAAAAA 12 30 AAAAAAA
AAAAAAA 12 30 AAAAAAA
output2.txt should be:
AABBBAA 2 5 AACCCAA
AAAAAAA 4 10 AAAAAAA
AAAAAAA 8 20 AAAAAAA
AAAAAAA 12 30 AAAAAAA
Thank you for your help!
A:
You get duplicated lines in output2 because you ask it to do so. Your condition is: If item exists in both columns, write the line to output1, else write it to output2. Then you proceed to do this for each item in list. Since there are two items in list, and (e.g. in line 1) the first item doesn't exist in both columns, it writes the line once to output2, then the second item doesn't exist in both columns either, so it writes the line again to output2.
Let's restate your condition:
[Check if] the cell pairs in columns 1 and 4 simultaneously contain the pattern "BBB" or "CCC". If true, send the whole line to output1. If false, send the whole line to output2.
So for each row, you want to check if any (any) of the items in list(for item in lst) occur in both those columns(item in cols[0] and item in cols[3]).
lst = ["BBB", "CCC"]
for line in input_file:
cols = line.split("\t")
if any(item in cols[0] and item in cols[3] for item in lst):
output1.write(line)
else:
output2.write(line)
Note that I renamed list to lst and input to input_file in my code to avoid shadowing the builtins
A:
The issue is with the else part of the if statement: every time the if condition doesn't return True for one of the items, the line is written to the output2.txt file, which is not the logic you want.
You will need to change the logic of the code so that it only writes to output2.txt if neither 'BBB' nor 'CCC' is found in both columns 1 and 4.
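For illustration, a minimal sketch of that logic (it reuses the question's file handles input, output1 and output2, which is an assumption here):
patterns = ["BBB", "CCC"]
for line in input:
    cols = line.split("\t")
    found = False
    for item in patterns:
        if item in cols[0] and item in cols[3]:
            found = True
            break
    if found:
        output1.write(line)
    else:
        output2.write(line)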
| Scan paired cells of two columns for the same pattern using Python | I'm a Python beginner and would like to learn how to use it for operations on text files. I have an input txt file of 4 columns separated by TAB, and I want to search whether, row by row, the cell pairs in columns 1 and 4 simultaneously contain the pattern "BBB" or "CCC". If true, send the whole line to output1. If false, send the whole line to output2.
This is the input.txt:
more input.txt
AABBBAA 2 5 AACCCAA
AAAAAAA 4 10 AAAAAAA
AABBBAA 6 15 AABBBAA
AAAAAAA 8 20 AAAAAAA
AACCCAA 10 25 AACCCAA
AAAAAAA 12 30 AAAAAAA
This is the Python code I wrote:
more main.py
import sys
input = open(sys.argv[1], "r")
output1 = open(sys.argv[2], "w")
output2 = open(sys.argv[3], "w")
list = ["BBB", "CCC"]
for line in input:
for item in list:
if item in line.split("\t")[0] and item in line.split("\t")[3]:
output1.write(line)
else:
output2.write(line)
input.close()
output1.close()
output2.close()
Command:
python main.py input.txt output1.txt output2.txt
output1.txt is correct
more output1.txt
AABBBAA 6 15 AABBBAA
AACCCAA 10 25 AACCCAA
output2 is incorrect. I'm trying to understand why it takes both the lines of output1.txt and the double copy of the other lines.
more output2.txt
AABBBAA 2 5 AACCCAA
AABBBAA 2 5 AACCCAA
AAAAAAA 4 10 AAAAAAA
AAAAAAA 4 10 AAAAAAA
AABBBAA 6 15 AABBBAA
AAAAAAA 8 20 AAAAAAA
AAAAAAA 8 20 AAAAAAA
AACCCAA 10 25 AACCCAA
AAAAAAA 12 30 AAAAAAA
AAAAAAA 12 30 AAAAAAA
output2.txt should be:
AABBBAA 2 5 AACCCAA
AAAAAAA 4 10 AAAAAAA
AAAAAAA 8 20 AAAAAAA
AAAAAAA 12 30 AAAAAAA
Thank you for your help!
| [
"You get duplicated lines in output2 because you ask it to do so. Your condition is: If item exists in both columns, write the line to output1, else write it to output2. Then you proceed to do this for each item in list. Since there are two items in list, and (e.g. in line 1) the first item doesn't exist in both columns, it writes the line once to output2, then the second item doesn't exist in both columns either, so it writes the line again to output2.\nLet's restate your condition:\n\n[Check if] the cell pairs in columns 1 and 4 simultaneously contain the pattern \"BBB\" or \"CCC\". If true, send the whole line to output1. If false, send the whole line to output2.\n\nSo for each row, you want to check if any (any) of the items in list(for item in lst) occur in both those columns(item in cols[0] and item in cols[3]).\nlst = [\"BBB\", \"CCC\"]\nfor line in input_file:\n cols = line.split(\"\\t\")\n if any(item in cols[0] and item in cols[3] for item in lst):\n output1.write(line)\n else:\n output2.write(line)\n\nNote that I renamed list to lst and input to input_file in my code to avoid shadowing the builtins\n",
"The issue is with the else part of the if statement. As every time the if condition doesn't return True you are writing the line to the output2.txt file, which is not the logic you want.\nYou will need to change the logic of the code to make it only write to output2.txt, if both 'BBB' and 'CCC' are not found.\n"
] | [
2,
0
] | [] | [] | [
"python"
] | stackoverflow_0074615905_python.txt |
Q:
Is it possible to pass command line arguments to a decorator in Django?
I have a decorator that is supposed to use a parameter that's passed in from the commandline e.g
@deco(name)
def handle(self, *_args, **options):
name = options["name"]
def deco(name):
// The name should come from commandline
pass
class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument(
"--name",
type=str,
required=True,
)
@deco(//How can I pass the name here?)
def handle(self, *_args, **options):
name = options["name"]
any suggestions on this?
A:
You don't have access to the command-line value when the @deco decorator is applied, no. But you can delay applying that decorator until you do have access.
Do so by creating your own decorator. A decorator is simply a function that is applied when Python parses the @decorator and def functionname lines, right after Python has created the function object; the return value of the decorator takes the place of the decorated function. What you need to make sure, then, is that your decorator returns a different function that can apply the deco decorator when the command is being executed.
Here is such a decorator:
from functools import wraps
def apply_deco_from_name(f):
@wraps(f)
def wrapper(self, *args, **options):
# this code is called instead of the decorated method
# and *now* we have access to the options mapping.
name = options["name"] # or use options.pop("name") to remove it
decorated = deco(name)(f) # the same thing as @deco(name) for the function
return decorated(self, *args, **options)
return wrapper
Then use that decorator on your command handler:
class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument(
"--name",
type=str,
required=True,
)
@apply_deco_from_name
def handle(self, *_args, **options):
name = options["name"]
What happens here? When Python handles the @apply_deco_from_name and def handle(...) lines, it sees this as a complete function statement inside the class. It creates a handle function object, then passes that to the decorator, so it calls apply_deco_from_name(handle). The decorator defined above returns wrapper instead.
And when Django then executes the command handler, it will do so by calling that replacement with wrapper(command, [other arguments], name="command-line-value-for-name", [other options]). At that point the code creates a new decorated version of the handler with decorated = deco("command-line-value-for-name")(f) just like Python would have done had you used @deco("command-line-value-for-name") in your command class. deco("command-line-value-for-name") returns a decorator function, and deco("command-line-value-for-name")(f) returns a wrapper, and you can call that wrapper at the end.
A:
You can make a "meta-decorator", something like:
from functools import wraps
def metadeco(function):
@wraps(function)
def func(*args, **kwargs):
name = kwargs['name']
return deco(name)(function)(*args, **kwargs)
return func
and then work with that meta-decorator:
class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument(
"--name",
type=str,
required=True,
)
@metadeco
def handle(self, *_args, **options):
name = options['name']
# …
A:
Your decorator doesn't really need to be a decorator. Since you are using classes, you can make use of the mixin pattern:
class YourMixin:
def handle(self, name):
# Code that was previously in deco
class Command(YourMixin, BaseCommand):
def add_arguments(self, parser):
parser.add_argument(
"--name",
type=str,
required=True,
)
def handle(self, *_args, **options):
# Code before calling YourMixin.handle
name = options["name"]
super().handle(name)
# Code after calling YourMixin.handle
| Is it possible to pass command line arguments to a decorator in Django? | I have a decorator that is supposed to use a parameter that's passed in from the commandline e.g
@deco(name)
def handle(self, *_args, **options):
name = options["name"]
def deco(name):
// The name should come from commandline
pass
class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument(
"--name",
type=str,
required=True,
)
@deco(//How can I pass the name here?)
def handle(self, *_args, **options):
name = options["name"]
any suggestions on this?
| [
"You don't have access to the command-line value when the @deco decorator is applied, no. But you can delay applying that decorator until you do have access.\nDo so by creating your own decorator. A decorator is simply a function that applied when Python parses the @decorator and def functionname lines, right after Python created the function object; the return value of the decorator takes the place of the decorated function. What you need to make sure, then, is that your decorator returns a different function that can apply the deco decorator when the command is being executed.\nHere is such a decorator:\nfrom functools import wraps\n\ndef apply_deco_from_name(f):\n @wraps(f)\n def wrapper(self, *args, **options):\n # this code is called instead of the decorated method\n # and *now* we have access to the options mapping.\n name = options[\"name\"] # or use options.pop(\"name\") to remove it\n decorated = deco(name)(f) # the same thing as @deco(name) for the function\n return decorated(self, *args, **options)\n \n return wrapper\n\nThen use that decorator on your command handler:\nclass Command(BaseCommand):\n def add_arguments(self, parser):\n parser.add_argument(\n \"--name\",\n type=str,\n required=True,\n )\n\n @apply_deco_from_name\n def handle(self, *_args, **options):\n name = options[\"name\"]\n\nWhat happens here? When Python handles the @apply_deco_from_name and def handle(...) lines, it sees this as a complete function statement inside the class. It'll create a handle function object, then passes that to the decorator, so it calls apply_deco_from_name(handle). The decorator defined above return wrapper instead.\nAnd when Django then executes the command handler, it will do so by calling that replacement with wrapper(command, [other arguments], name=\"command-line-value-for-name\", [other options]). At that point the code creates a new decorated version of the handler with decorated = deco(\"command-line-value-for-name\")(f) just like Python would have done had you used @deco(\"command-line-value-for-name\") in your command class. deco(\"command-line-value-for-name\") returns a decorator function, and deco(\"command-line-value-for-name\")(f) returns a wrapper, and you can call that wrapper at the end.\n",
"You can make a \"meta-decorator\", something like:\nfrom functool import wraps\n\ndef metadeco(function):\n @wraps(function)\n def func(*args, **kwargs):\n name = kwargs['name']\n return deco(name)(function)(*args, **kwargs)\n return func\n\nand then work with that meta-decorator:\nclass Command(BaseCommand):\n def add_arguments(self, parser):\n parser.add_argument(\n \"--name\",\n type=str,\n required=True,\n )\n \n @metadeco\n def handle(self, *_args, **options):\n name = options['name']\n # …\n",
"Your decorator doesn't really need to be a decorator. Since you are using classes, you can make use of the mixin pattern:\nclass YourMixin:\n def handle(self, name):\n # Code that was previously in deco\n\n\nclass Command(YourMixin, BaseCommand):\n def add_arguments(self, parser):\n parser.add_argument(\n \"--name\",\n type=str,\n required=True,\n )\n\n def handle(self, *_args, **options):\n # Code before calling YourMixin.handle\n name = options[\"name\"]\n super().handle(name)\n # Code after calling YourMixin.handle\n\n"
] | [
3,
2,
2
] | [] | [] | [
"argparse",
"django",
"python"
] | stackoverflow_0074616062_argparse_django_python.txt |
Q:
Extracting parameters from strings - SQL Server
I have a table with strings in one column, which are actually storing other SQL Queries written before and stored to be ran at later times. They contain parameters such as '@organisationId' or '@enterDateHere'. I want to be able to extract these.
Example:
ID
Query
1
SELECT * FROM table WHERE id = @organisationId
2
SELECT * FROM topic WHERE creation_time <=@startDate AND creation_time >= @endDate AND id = @enterOrgHere
3
SELECT name + '@' + domain FROM user
I want the following:
ID
Parameters
1
@organisationId
2
@startDate, @endDate, @enterOrgHere
3
NULL
No need to worry about how to separate/list them, as long as they are clearly visible and as long as the query lists all of them, which I don't know the number of. Please note that sometimes the queries contain just @ for example when email binding is being done, but it's not a parameter. I want just strings which start with @ and have at least one letter after it, ending with a non-letter character (space, enter, comma, semi-colon). If this causes problems, then return all strings starting with @ and I will simply identify the parameters manually.
It can include usage of Excel/Python/C# if needed, but SQL is preferable.
A:
It is very simple to implement by using tokenization via XML and XQuery.
Notable points:
1st CROSS APPLY is tokenizing the Query column as XML.
2nd CROSS APPLY is filtering out tokens that don't have the "@" symbol.
SQL #1
-- DDL and sample data population, start
DECLARE @tbl TABLE (ID INT IDENTITY PRIMARY KEY, Query VARCHAR(2048));
INSERT INTO @tbl (Query) VALUES
('SELECT * FROM table WHERE id = @organisationId'),
('SELECT * FROM topic WHERE creation_time <=@startDate AND creation_time >= @endDate AND id = @enterOrgHere'),
('SELECT name + ''@'' + domain FROM user');
-- DDL and sample data population, end
DECLARE @separator CHAR(1) = SPACE(1);
SELECT t.ID
, Parameters = IIF(t2.Par LIKE '@[a-z]%', t2.Par, NULL)
FROM @tbl AS t
CROSS APPLY (SELECT TRY_CAST('<root><r><![CDATA[' +
REPLACE(Query, @separator, ']]></r><r><![CDATA[') +
']]></r></root>' AS XML)) AS t1(c)
CROSS APPLY (SELECT TRIM('><=' FROM c.query('data(/root/r[contains(text()[1],"@")])').value('text()[1]','VARCHAR(1024)'))) AS t2(Par)
SQL #2
A cleansing step was added to first handle whitespace characters other than a regular space.
SELECT t.*
, Parameters = IIF(t2.Par LIKE '@[a-z]%', t2.Par, NULL)
FROM @tbl AS t
CROSS APPLY (SELECT TRY_CAST('<r><![CDATA[' + Query + ']]></r>' AS XML).value('(/r/text())[1] cast as xs:token?','VARCHAR(MAX)')) AS t0(pure)
CROSS APPLY (SELECT TRY_CAST('<root><r><![CDATA[' +
REPLACE(Pure, @separator, ']]></r><r><![CDATA[') +
']]></r></root>' AS XML)) AS t1(c)
CROSS APPLY (SELECT TRIM('><=' FROM c.query('data(/root/r[contains(text()[1],"@")])')
.value('text()[1]','VARCHAR(1024)'))) AS t2(Par);
Output
ID
Parameters
1
@organisationId
2
@startDate @endDate @enterOrgHere
3
NULL
A:
The official way to interrogate the parameters is with sp_describe_undeclared_parameters, eg
exec sp_describe_undeclared_parameters @tsql = N'SELECT * FROM topic WHERE creation_time <=@startDate AND creation_time >= @endDate AND id = @enterOrgHere'
A:
Using SQL Server for parsing is a very bad idea because of low performance and lack of tools. I highly recommend using a .NET assembly or an external language (since your project involves Python anyway) with a regexp or any other conversion method.
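For reference, a hedged Python sketch of that regex route (not part of the original answer; the sample queries are taken from the question):
import re

queries = {
    1: "SELECT * FROM table WHERE id = @organisationId",
    2: "SELECT * FROM topic WHERE creation_time <=@startDate AND creation_time >= @endDate AND id = @enterOrgHere",
    3: "SELECT name + '@' + domain FROM user",
}

# @ followed by at least one letter, then word characters; a bare '@' is ignored
param_pattern = re.compile(r"@[A-Za-z]\w*")

for query_id, text in queries.items():
    params = param_pattern.findall(text)
    print(query_id, ", ".join(params) if params else None)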
However, as a last resort, you can use something like this extremely slow and generally horrible code (this code works only on SQL Server 2017+, btw; on earlier versions the code would be even more terrible):
DECLARE @sql TABLE
(
id INT PRIMARY KEY IDENTITY
, sql_query NVARCHAR(MAX)
);
INSERT INTO @sql (sql_query)
VALUES (N'SELECT * FROM table WHERE id = @organisationId')
, (N'SELECT * FROM topic WHERE creation_time <=@startDate AND creation_time >= @endDate AND id = @enterOrgHere')
, (N' SELECT name + ''@'' + domain FROM user')
;
WITH prepared AS
(
SELECT id
, IIF(sql_query LIKE '%@%'
, SUBSTRING(sql_query, CHARINDEX('@', sql_query) + 1, LEN(sql_query))
, CHAR(32)
) prep_string
FROM @sql
),
parsed AS
(
SELECT id
, IIF(CHARINDEX(CHAR(32), value) = 0
, SUBSTRING(value, 1, LEN(VALUE))
, SUBSTRING(value, 1, CHARINDEX(CHAR(32), value) -1)
) parsed_value
FROM prepared p
CROSS APPLY STRING_SPLIT(p.prep_string, '@')
)
SELECT id, '@' + STRING_AGG(IIF(parsed_value LIKE '[a-zA-Z]%', parsed_value, NULL) , ', @')
FROM parsed
GROUP BY id
A:
You can use string split, and then remove the undesired caracters, here's a query :
DROP TABLE IF EXISTS #TEMP
SELECT 1 AS ID ,'SELECT * FROM table WHERE id = @organisationId' AS Query
INTO #TEMP
UNION ALL SELECT 2, 'SELECT * FROM topic WHERE creation_time <=@startDate AND creation_time >= @endDate AND id = @enterOrgHere'
UNION ALL SELECT 3, 'SELECT name + ''@'' + domain FROM user'
;WITH cte as
(
SELECT ID,
Query,
STRING_AGG(REPLACE(REPLACE(REPLACE(value,'<',''),'>',''),'=',''),', ') AS Parameters
FROM #TEMP
CROSS APPLY string_split(Query,' ')
WHERE value LIKE '%@[a-z]%'
GROUP BY ID,
Query
)
SELECT #TEMP.*,cte.Parameters
FROM #TEMP
LEFT JOIN cte on #TEMP.ID = cte.ID
| Extracting parameters from strings - SQL Server | I have a table with strings in one column, which are actually storing other SQL Queries written before and stored to be ran at later times. They contain parameters such as '@organisationId' or '@enterDateHere'. I want to be able to extract these.
Example:
ID
Query
1
SELECT * FROM table WHERE id = @organisationId
2
SELECT * FROM topic WHERE creation_time <=@startDate AND creation_time >= @endDate AND id = @enterOrgHere
3
SELECT name + '@' + domain FROM user
I want the following:
ID
Parameters
1
@organisationId
2
@startDate, @endDate, @enterOrgHere
3
NULL
No need to worry about how to separate/list them, as long as they are clearly visible and as long as the query lists all of them, which I don't know the number of. Please note that sometimes the queries contain just @ for example when email binding is being done, but it's not a parameter. I want just strings which start with @ and have at least one letter after it, ending with a non-letter character (space, enter, comma, semi-colon). If this causes problems, then return all strings starting with @ and I will simply identify the parameters manually.
It can include usage of Excel/Python/C# if needed, but SQL is preferable.
| [
"It is very simple to implement by using tokenization via XML and XQuery.\nNotable points:\n\n1st CROSS APPLY is tokenazing Query column as XML.\n2nd CROSS APPLY is filtering out tokens that don't have \"@\" symbol.\n\nSQL #1\n-- DDL and sample data population, start\nDECLARE @tbl TABLE (ID INT IDENTITY PRIMARY KEY, Query VARCHAR(2048));\nINSERT INTO @tbl (Query) VALUES\n('SELECT * FROM table WHERE id = @organisationId'),\n('SELECT * FROM topic WHERE creation_time <=@startDate AND creation_time >= @endDate AND id = @enterOrgHere'),\n('SELECT name + ''@'' + domain FROM user');\n-- DDL and sample data population, end\n\nDECLARE @separator CHAR(1) = SPACE(1);\n\nSELECT t.ID\n , Parameters = IIF(t2.Par LIKE '@[a-z]%', t2.Par, NULL)\nFROM @tbl AS t\nCROSS APPLY (SELECT TRY_CAST('<root><r><![CDATA[' + \n REPLACE(Query, @separator, ']]></r><r><![CDATA[') + \n ']]></r></root>' AS XML)) AS t1(c)\nCROSS APPLY (SELECT TRIM('><=' FROM c.query('data(/root/r[contains(text()[1],\"@\")])').value('text()[1]','VARCHAR(1024)'))) AS t2(Par)\n\nSQL #2\nA cleansing step was added to handle other than a regular space whitespaces first.\nSELECT t.*\n , Parameters = IIF(t2.Par LIKE '@[a-z]%', t2.Par, NULL)\nFROM @tbl AS t\nCROSS APPLY (SELECT TRY_CAST('<r><![CDATA[' + Query + ']]></r>' AS XML).value('(/r/text())[1] cast as xs:token?','VARCHAR(MAX)')) AS t0(pure)\nCROSS APPLY (SELECT TRY_CAST('<root><r><![CDATA[' + \n REPLACE(Pure, @separator, ']]></r><r><![CDATA[') + \n ']]></r></root>' AS XML)) AS t1(c)\nCROSS APPLY (SELECT TRIM('><=' FROM c.query('data(/root/r[contains(text()[1],\"@\")])')\n .value('text()[1]','VARCHAR(1024)'))) AS t2(Par);\n\nOutput\n\n\n\n\nID\nParameters\n\n\n\n\n1\n@organisationId\n\n\n2\n@startDate @endDate @enterOrgHere\n\n\n3\nNULL\n\n\n\n",
"The official way to interrogate the parameters is with sp_describe_undeclared_parameters, eg\nexec sp_describe_undeclared_parameters @tsql = N'SELECT * FROM topic WHERE creation_time <=@startDate AND creation_time >= @endDate AND id = @enterOrgHere' \n\n",
"Using SQL Server for parsing is a very bad idea because of low performance and lack of tools. I highly recommend using .net assembly or external language (since your project is in python anyway) with regexp or any other conversion method.\nHowever, as a last resort, you can use something like this extremely slow and generally horrible code (this code working just on sql server 2017+, btw. On earlier versions code will be much more terrible):\nDECLARE @sql TABLE\n(\n id INT PRIMARY KEY IDENTITY\n, sql_query NVARCHAR(MAX)\n);\n\nINSERT INTO @sql (sql_query)\nVALUES (N'SELECT * FROM table WHERE id = @organisationId')\n , (N'SELECT * FROM topic WHERE creation_time <=@startDate AND creation_time >= @endDate AND id = @enterOrgHere')\n , (N' SELECT name + ''@'' + domain FROM user')\n ;\n\n\nWITH prepared AS\n(\n SELECT id\n , IIF(sql_query LIKE '%@%'\n , SUBSTRING(sql_query, CHARINDEX('@', sql_query) + 1, LEN(sql_query))\n , CHAR(32)\n ) prep_string\n FROM @sql\n),\nparsed AS\n(\nSELECT id\n , IIF(CHARINDEX(CHAR(32), value) = 0\n , SUBSTRING(value, 1, LEN(VALUE))\n , SUBSTRING(value, 1, CHARINDEX(CHAR(32), value) -1)\n ) parsed_value\n FROM prepared p\n CROSS APPLY STRING_SPLIT(p.prep_string, '@')\n)\nSELECT id, '@' + STRING_AGG(IIF(parsed_value LIKE '[a-zA-Z]%', parsed_value, NULL) , ', @')\n FROM parsed\nGROUP BY id\n\n",
"You can use string split, and then remove the undesired caracters, here's a query :\nDROP TABLE IF EXISTS #TEMP\n\nSELECT 1 AS ID ,'SELECT * FROM table WHERE id = @organisationId' AS Query\nINTO #TEMP\nUNION ALL SELECT 2, 'SELECT * FROM topic WHERE creation_time <=@startDate AND creation_time >= @endDate AND id = @enterOrgHere'\nUNION ALL SELECT 3, 'SELECT name + ''@'' + domain FROM user'\n\n;WITH cte as\n(\n SELECT ID,\n Query,\n STRING_AGG(REPLACE(REPLACE(REPLACE(value,'<',''),'>',''),'=',''),', ') AS Parameters\n FROM #TEMP\n CROSS APPLY string_split(Query,' ')\n WHERE value LIKE '%@[a-z]%'\n GROUP BY ID,\n Query\n)\nSELECT #TEMP.*,cte.Parameters\nFROM #TEMP\nLEFT JOIN cte on #TEMP.ID = cte.ID\n\n"
] | [
2,
2,
0,
0
] | [] | [] | [
"c#",
"excel",
"python",
"sql",
"sql_server"
] | stackoverflow_0074615546_c#_excel_python_sql_sql_server.txt |
Q:
Forcing numpy.linspace to have specific entry
I am using numpy.linspace to let other functions sweep over some parameters, for example:
def fun(array):
newarray= []
for i in array:
newarray.append(i**2)
return newarray
Now I want to pass this function numpy.linspace(0,20,30) which has to contain the number 2.
Is there some way to force this without adjusting the boundaries and the number of points that have to be generated?
Beware: I am not interested in inserting it afterwards, I just want to know if there is a way to do this upon generation!
A:
If I understand your question correctly you want a numpy array that consists of 30 values that are equally spaced between 0 and 20, containing 0 and 20, but also to contain the value of 2.0 exactly? That is not possible, since the steps will not be equally spaced anymore.
You either have to adjust the boundaries, or the number of steps, for instance 31 steps will get you a nice spacing of exactly 2/3:
>>> a = np.linspace(0,20,31)
>>> a
array([ 0. , 0.66666667, 1.33333333, 2. , 2.66666667,
3.33333333, 4. , 4.66666667, 5.33333333, 6. ,
6.66666667, 7.33333333, 8. , 8.66666667, 9.33333333,
10. , 10.66666667, 11.33333333, 12. , 12.66666667,
13.33333333, 14. , 14.66666667, 15.33333333, 16. ,
16.66666667, 17.33333333, 18. , 18.66666667, 19.33333333,
20. ])
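As a hedged aside (not part of the original answer), the general rule for picking a compatible step count is that the spacing (stop - start)/(n - 1) must divide (target - start) evenly:
import numpy as np

start, stop, target = 0, 20, 2
n = 31                                  # spacing = 20/30 = 2/3, and 2 is a multiple of 2/3
grid = np.linspace(start, stop, n)
print(np.isclose(grid, target).any())   # True: 2.0 lies on the grid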
| Forcing numpy.linspace to have specific entry | I am using numpy.linspace to let other functions sweep over some parameters, for example:
def fun(array):
newarray= []
for i in array:
newarray.append(i**2)
return newarray
Now I want to pass this function numpy.linspace(0,20,30) which has to contain the number 2.
Is there some way to force this without adjusting the boundaries and the number of points that have to be generated?
Beware: I am not interested on inserting it afterwards, I just want to know If there is a way to do this upon generation!
| [
"If I understand your question correctly you want a numpy array that consists of 30 values that are equally spaced between 0 and 20, containing 0 and 20, but also to contain the value of 2.0 exactly? That is not possible, since the steps will not be equally spaced anymore.\nYou either have to adjust the boundaries, or the number of steps, for instance 31 steps will get you a nice spacing of exactly 2/3:\n>>> a = np.linspace(0,20,31)\n>>> a\narray([ 0. , 0.66666667, 1.33333333, 2. , 2.66666667,\n 3.33333333, 4. , 4.66666667, 5.33333333, 6. ,\n 6.66666667, 7.33333333, 8. , 8.66666667, 9.33333333,\n 10. , 10.66666667, 11.33333333, 12. , 12.66666667,\n 13.33333333, 14. , 14.66666667, 15.33333333, 16. ,\n 16.66666667, 17.33333333, 18. , 18.66666667, 19.33333333,\n 20. ])\n\n"
] | [
1
] | [] | [] | [
"numpy",
"python"
] | stackoverflow_0074616441_numpy_python.txt |
Q:
Pyspark dataframe sum variable up to the current row's month
I have a pyspark dataframe that looks as follows:
date, loan
1.1.2020, 0
1.2.2020, 0
1.3.2020, 0
1.4.2020, 10000
1.5.2020, 200
1.6.2020, 0
I would like to have the fact that they took out a loan in month 4 to reflect on the other later months as well. So the resulting dataframe would be:
date, loan
1.1.2020, 0
1.2.2020, 0
1.3.2020, 0
1.4.2020, 10000
1.5.2020, 10200
1.6.2020, 10200
Is there any simple way to do this in pyspark? Thanks.
A:
@Ehrendil - do you want to calculate a running total? If so, in SQL:
select date,loan,
sum(loan) over(order by date rows between unbounded preceding and current row) as running_total from table
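For the PySpark DataFrame API, a minimal hedged sketch of the same running total (it assumes df is the question's DataFrame and that the date column sorts chronologically):
from pyspark.sql import functions as F
from pyspark.sql.window import Window

w = Window.orderBy("date").rowsBetween(Window.unboundedPreceding, Window.currentRow)
# replace loan with its cumulative sum, matching the desired output table
df = df.withColumn("loan", F.sum("loan").over(w))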
| Pyspark dataframe sum variable up to the current row's month | I have a pyspark dataframe that looks as follows:
date, loan
1.1.2020, 0
1.2.2020, 0
1.3.2020, 0
1.4.2020, 10000
1.5.2020, 200
1.6.2020, 0
I would like to have the fact that they took out a loan in month 4 to reflect on the other later months as well. So the resulting dataframe would be:
date, loan
1.1.2020, 0
1.2.2020, 0
1.3.2020, 0
1.4.2020, 10000
1.5.2020, 10200
1.6.2020, 10200
Is there any simple way to do this in pyspark? Thanks.
| [
"@Ehrendil - do you want to calculate running total ..\nselect date,loan,\nsum(loan) over(order by date row between unbounded preceding and current row) as running_total from table\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"pyspark",
"python",
"sql"
] | stackoverflow_0074616093_dataframe_pyspark_python_sql.txt |
Q:
how a Toplevel class can inherit from the main class in tkinter (OOP) in order to access the main class's attributes
Hi, I want to access the MainWindow attributes and change some of its labels and buttons' states in my Toplevel class; however, it cannot find them. I'm not sure how to use the OOP approach in tkinter - I tried using super().__init__ and textvariable but failed. The main problem is inheritance in the tkinter framework, and I highlighted it in def login2. I appreciate the help. Peace.
import tkinter as tk
import sqlite3
cnt = sqlite3.connect("simple_store.db")
class MainWindow():
def __init__(self,master):
self.master=master
self.master.geometry('350x200')
self.master.resizable(False, False)
self.lbl_msg = tk.Label(self.master, text='')
self.lbl_msg.pack()
self.login_btn = tk.Button(self.master, text="Login ", command=login)
self.login_btn.pack()
self.submit_btn = tk.Button(self.master, text="Submit", command=submit)
self.submit_btn.pack()
class submit:
pass
class login(MainWindow):
def __init__(self):
self.login_win = tk.Toplevel()
self.login_win.title("Login")
self.login_win.geometry("350x200")
self.lbl_temp = tk.Label(self.login_win, text='')
self.lbl_temp.pack()
self.lbl_user = tk.Label(self.login_win, text='Username:')
self.lbl_user.pack()
self.userw = tk.Entry(self.login_win, width=15)
self.userw.pack()
self.lbl_pass = tk.Label(self.login_win, text='Password')
self.lbl_pass.pack()
self.passwordw = tk.Entry(self.login_win, width=15)
self.passwordw.pack()
self.login_btn2 = tk.Button(self.login_win, text="Login", command= self.login2)
self.login_btn2.pack(pady=20)
self.login_win.mainloop()
def login2(self):
global userid
self.user = self.userw.get()
self.password = self.passwordw.get()
query = '''SELECT * FROM costumers WHERE username=? AND PASSWORD=?'''
result = cnt.execute(query, (self.user, self.password))
row = result.fetchall()
if (row):
self.lbl_temp.configure(text="welcome")
userid = row[0][0]
####the problem is here####
self.lbl_msg.configure(text="welcome " + self.user)
# self.login_btn.configure(state="disabled")
self.userw.delete(0, 'end')
self.passwordw.delete(0, 'end')
else:
self.lbl_temp.configure(text="error")
root= tk.Tk()
window= MainWindow(root)
root.mainloop()
A:
Alright so, in order for you to inherit from the main class, you have to make your __init__() method's signature the same + whatever you might need for the top level class.
Example:
class Parent:
def __init__(self, attr1):
self.attr1 = attr1
class Child(Parent):
def __init__(self, attr1, attr2):
super().__init__(attr1) # You can manipulate attr1 here
self.attr2 = attr2
# You can also manipulate attr1 here using
# self.attr1 = whatever value.
A:
You shouldn't use inheritance for this. Inheritance is an "is a" relationship. If you inherit from MainWindow than your other window is a MainWindow with extra features. You don't want two MainWindow instances.
Instead, you need to pass the instance of MainWindow to login.
class MainWindow():
def __init__(self,master):
...
self.login_btn = tk.Button(self.master, text="Login ", command=self.login)
...
def login(self):
login_window = login(main_window=self)
# ^^^^^^^^^^^^^^^^
class login():
def __init__(self, main_window):
self.main_window = main_window
...
def login2(self):
...
self.main_window.lbl_msg.configure(text="welcome " + self.user)
# ^^^^^^^^^^^^
...
| how a toplevel class can inherite from the main class tkinter opp inorder to access main class's attr | hi i want to access the mainwindow attributes and change some of its labels and button's states in my toplevel class however it can not find them. so im not sure how to use opp approach in tkinter and i tried using super__init__ and textvariable but i failed. the main problem is inheritance in tkinter frame work and i highlighted it in def login 2. i appreciate the help. peace.
import tkinter as tk
import sqlite3
cnt = sqlite3.connect("simple_store.db")
class MainWindow():
def __init__(self,master):
self.master=master
self.master.geometry('350x200')
self.master.resizable(False, False)
self.lbl_msg = tk.Label(self.master, text='')
self.lbl_msg.pack()
self.login_btn = tk.Button(self.master, text="Login ", command=login)
self.login_btn.pack()
self.submit_btn = tk.Button(self.master, text="Submit", command=submit)
self.submit_btn.pack()
class submit:
pass
class login(MainWindow):
def __init__(self):
self.login_win = tk.Toplevel()
self.login_win.title("Login")
self.login_win.geometry("350x200")
self.lbl_temp = tk.Label(self.login_win, text='')
self.lbl_temp.pack()
self.lbl_user = tk.Label(self.login_win, text='Username:')
self.lbl_user.pack()
self.userw = tk.Entry(self.login_win, width=15)
self.userw.pack()
self.lbl_pass = tk.Label(self.login_win, text='Password')
self.lbl_pass.pack()
self.passwordw = tk.Entry(self.login_win, width=15)
self.passwordw.pack()
self.login_btn2 = tk.Button(self.login_win, text="Login", command= self.login2)
self.login_btn2.pack(pady=20)
self.login_win.mainloop()
def login2(self):
global userid
self.user = self.userw.get()
self.password = self.passwordw.get()
query = '''SELECT * FROM costumers WHERE username=? AND PASSWORD=?'''
result = cnt.execute(query, (self.user, self.password))
row = result.fetchall()
if (row):
self.lbl_temp.configure(text="welcome")
userid = row[0][0]
####the problem is here####
self.lbl_msg.configure(text="welcome " + self.user)
# self.login_btn.configure(state="disabled")
self.userw.delete(0, 'end')
self.passwordw.delete(0, 'end')
else:
self.lbl_temp.configure(text="error")
root= tk.Tk()
window= MainWindow(root)
root.mainloop()
| [
"Alright so, in order for you to inherit from the main class, you have to make your __init__() method's signature the same + whatever you might need for the top level class.\nExample:\nclass Parent:\n def __init__(self, attr1):\n self.attr1 = attr1\n\nclass Child(Parent):\n def __init__(self, attr1, attr2):\n super().__init__(attr1) # You can manipulate attr1 here\n self.attr2 = attr2\n # You can also manipulate attr1 here using\n # self.attr1 = whatever value.\n\n",
"You shouldn't use inheritance for this. Inheritance is an \"is a\" relationship. If you inherit from MainWindow than your other window is a MainWindow with extra features. You don't want two MainWindow instances.\nInstead, you need to pass the instance of MainWindow to login.\nclass MainWindow():\n def __init__(self,master):\n ...\n self.login_btn = tk.Button(self.master, text=\"Login \", command=self.login)\n ...\n\n def login(self):\n login_window = login(main_window=self)\n # ^^^^^^^^^^^^^^^^\n\nclass login():\n def __init__(self, main_window):\n self.main_window = main_window\n ...\n\n def login2(self):\n ...\n self.main_window.lbl_msg.configure(text=\"welcome \" + self.user)\n # ^^^^^^^^^^^^\n ...\n\n"
] | [
0,
0
] | [] | [] | [
"inheritance",
"python",
"subclass",
"tkinter",
"toplevel"
] | stackoverflow_0074613705_inheritance_python_subclass_tkinter_toplevel.txt |
Q:
Exclude Xpath from other Xpath
If you have two Xpaths you can join them with the | operator to return both their results in one result set. This essentially gives back the union of the two sets of elements. The example below gives back all divs and all spans on a website:
//div | //span
What I need is the difference (subsection). I need all elements in the first Xpath group that are not in the second Xpath group. So far I have seen that there is an except operator but that only works in Xpath2. I need an Xpath1 solution. I have seen that the not function might help but I was not able to make it work.
As an example imagine the following:
<tr>
<td>1</td>
<td>2</td>
<td>3</td>
<td>4</td>
<td>5</td>
</tr>
In this example I would have the Xpath group //tr/td. I would want to exclude <td>1</td> and <td>4</td>. Although there are many ways to solve the problem I am specifically looking for a solution where I can say in an Xpath: "Here is a group of elements and exclude this group of elements from it".
A:
An approach realizing this is using the self:: axis and the not() operator in a predicate:
For example, with an XML like this
<root>
<tr>
<td>1</td>
<td>2</td>
<td>3</td>
<td>4</td>
<td>5</td>
</tr>
<dr>
<td>1</td>
<td>4</td>
</dr>
</root>
you can use this XPath-1.0 expression:
//tr/td[not(self::*=//dr/td)]
which can be shortened to
//tr/td[not(.=//dr/td)]
The resulting nodeset is as desired
<td>2</td>
<td>3</td>
<td>5</td>
The XPath expression selects all elements of the first part and checks in the predicate if every element itself (self::* or .) is in the second part. If it is, it will be excluded (not(...)).
You can also apply this approach to attribute nodes. In this case you have to use the ., because self::* is more specific and only selects elements. So you could replace self::* by ., but not the other way round. (The most general axis would be self::node().)
A:
You can use the logical and and not operators here.
For your specific example you can use the following XPath:
"//tr/td[not(text()='1')][not(text()='4')]"
A:
In XPath 2.0+ there is an operator for this: except. If E and F are general expressions returning sets of nodes, then E except F returns all nodes selected by E that are not selected by F.
There's no convenient way of doing the same thing in XPath 1.0, but the rather cumbersome (and potentially expensive) expression E[count(.|F) != count(F)] is equivalent (though you need to take care about the context for evaluation of F).
In many practical cases you can achieve the desired effect with a filter predicate, for example //td[not(ancestor::tr)].
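For completeness (a hedged illustration, not from the answers themselves), the XPath-1.0 idiom applied with Python's lxml to the question's example:
from lxml import etree

doc = etree.fromstring(
    "<root><tr><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td></tr></root>"
)
# E = all td cells, F = the cells to exclude (here: text '1' or '4')
f_expr = "//tr/td[.='1' or .='4']"
result = doc.xpath(f"//tr/td[count(. | {f_expr}) != count({f_expr})]")
print([td.text for td in result])  # ['2', '3', '5']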
| Exclude Xpath from other Xpath | If you have two Xpaths you can join them with the | operator to return both their results in one result set. This essentially gives back the union of the two sets of elements. The example below gives back all divs and all spans on a website:
//div | //span
What I need is the difference (subsection). I need all elements in the first Xpath group that are not in the second Xpath group. So far I have seen that there is an except operator but that only works in Xpath2. I need an Xpath1 solution. I have seen that the not function might help but I was not able to make it work.
As an example imagine the following:
<tr>
<td>1</td>
<td>2</td>
<td>3</td>
<td>4</td>
<td>5</td>
</tr>
In this example I would have the Xpath group //tr/td. I would want to exclude <td>1</td> and <td>4</td>. Although there are many ways to solve the problem I am specifically looking for a solution where I can say in an Xpath: "Here is a group of elements and exclude this group of elements from it".
| [
"An approach realizing this is using the self:: axis and the not() operator in a predicate:\nFor example, with an XML like this\n<root>\n <tr>\n <td>1</td>\n <td>2</td>\n <td>3</td>\n <td>4</td>\n <td>5</td>\n </tr> \n <dr>\n <td>1</td>\n <td>4</td>\n </dr> \n</root>\n\nyou can use this XPath-1.0 expression:\n//tr/td[not(self::*=//dr/td)]\n\nwhich can be shortened to\n//tr/td[not(.=//dr/td)]\n\nThe resulting nodeset is as desired\n<td>2</td>\n<td>3</td>\n<td>5</td>\n\nThe XPath expression selects all elements of the first part and checks in the predicate if every element itself (self::* or .) is in the second part. If it is, it will be excluded (not(...)).\nYou can also apply this approach to attribute nodes. In this case you have to use the ., because self::* is more specific and only selects elements. So you could replace self::* by ., but not the other way round. (The most general axis would be self::node().)\n",
"You can use logic and and not operators here.\nFor your specific example you can use the following XPath\n\"//tr/td[not(text()=`1`)][not(text()=`4`)]\"\n\n",
"In XPath 2.0+ there is an operator for this: except. If E and F are general expressions returning sets of nodes, then E except F returns all nodes selected by E that are not selected by F.\nThere's no convenient way of doing the same thing in XPath 1.0, but the rather cumbersome (and potentially expensive) expression E[count(.|F) != count(F)] is equivalent (though you need to take care about the context for evaluation of F).\nIn many practical cases you can achieve the desired effect with a filter predicate, for example //td[not(ancestor::tr)].\n"
] | [
1,
0,
0
] | [] | [] | [
"html",
"python",
"web_scraping",
"xpath",
"xpath_1.0"
] | stackoverflow_0074614843_html_python_web_scraping_xpath_xpath_1.0.txt |
Q:
Type conversion of custom class to float
I have a class customInt, which looks something like this:
class customInt:
def __init__(self, value):
self.value=int(value)
def __add__(self, other):
return customInt(self.value+other.value)
# do some other stuff
obj=customInt(1.23)
Is it possible to create an operator/attribute/property/... to cast the object obj to a float, so that it's possible to cast the class with float(obj)?
The aim of this is to create a drop-in replacement to a normal int, but without the automatic casting to a float in divisions, etc. So, not desired is:
A method like obj.to_float(). Then it wouldn't be a drop-in replacement anymore.
Inheritance from a castable type: In this case, I would have to overload every single operator. Which is not ideal.
A:
You can define a def __float__(self): function in your class, and it will be called when you use float(obj). You can also add __int__, __str__ and __complex__ in the same way.
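A short sketch on the question's class for illustration (the method bodies are assumptions):
class customInt:
    def __init__(self, value):
        self.value = int(value)

    def __float__(self):
        return float(self.value)   # enables float(obj)

    def __int__(self):
        return self.value          # enables int(obj)

obj = customInt(1.23)
print(float(obj))   # 1.0
print(int(obj))     # 1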
| Type conversion of custom class to float | I have a class customInt, which looks something like this:
class customInt:
def __init__(self, value):
self.value=int(value)
def __add__(self, other):
return foo(self.value+other.value)
# do some other stuff
obj=foo(1.23)
Is it possible to create an operator/attribute/property/... to cast the object obj to a float, so that it's possible to cast the class with float(obj)?
The aim of this is to create a drop-in replacement to a normal int, but without the automatic casting to a float in divisions, etc. So, not desired is:
A method like obj.to_float(). Then it wouldn't be a drop-in replacement anymore.
Inheritance from a castable type: In this case, I would have to overload every single operator. Which is not ideal.
| [
"You can define a def __float__(self): function in your class, and it will be called when you use float(obj). You can also add __int__, __str__ and __complex__ in the same way.\n"
] | [
2
] | [] | [] | [
"casting",
"python",
"type_conversion"
] | stackoverflow_0074616570_casting_python_type_conversion.txt |
Q:
Histograms for lists inside a list
I have a list containing 8 lists. Each sub list has length of 100 and I want to plot histograms, 4 rows 2 columns.
I want to set the title, and x and y labels. In addition to a big title to all histograms.
I tried this:
mytitles = ['Label 30 sub 0', 'Label 30 sub 1',
'label 50 sub 0', 'label 50 sub 3',
'label 50 sub 5', 'label 50 sub 6',
'label 555', 'label 666']
fig, axes = plt.subplots(nrows=4, ncols=2)
fig.subplots_adjust(hspace=0.5)
fig.suptitle('Confidence Intervals lengths')
for i in sub_list:
plt.hist(i, bins = 20)
for j in mytitles:
plt.title(j)
plt.show()
This however created some empty histograms.
A:
You need to iterate over the axes. Otherwise it will just use the last subplot.
for i, ax, title in zip(sub_list, axes.flat, mytitles):
ax.hist(i, bins = 20)
ax.set_title(title)
plt.show()
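If the per-subplot x and y labels from the question are also wanted, the same per-axes calls apply (the label strings below are assumptions):
for i, ax, title in zip(sub_list, axes.flat, mytitles):
    ax.hist(i, bins=20)
    ax.set_title(title)
    ax.set_xlabel("interval length")   # assumed label text
    ax.set_ylabel("count")             # assumed label text
plt.show()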
| Historgrams for lists inside a list | I have a list containing 8 lists. Each sub list has length of 100 and I want to plot histograms, 4 rows 2 columns.
I want to set the title, and x and y labels. In addition to a big title to all histograms.
I tired this:
mytitles = ['Label 30 sub 0', 'Label 30 sub 1',
'label 50 sub 0', 'label 50 sub 3',
'label 50 sub 5', 'label 50 sub 6',
'label 555', 'label 666']
fig, axes = plt.subplots(nrows=4, ncols=2)
fig.subplots_adjust(hspace=0.5)
fig.suptitle('Confidence Intervals lengths')
for i in sub_list:
plt.hist(i, bins = 20)
for j in mytitles:
plt.title(j)
plt.show()
This however created some empty histograms.
| [
"You need to iterate over the axes. Otherwise it will just use the last subplot.\nfor i, ax, title in zip(sub_list, axes.flat, mytitles):\n ax.hist(i, bins = 20)\n ax.set_title(title)\nplt.show()\n\n"
] | [
1
] | [] | [] | [
"histogram",
"list",
"python"
] | stackoverflow_0074616545_histogram_list_python.txt |
Q:
Coloring data sets with two colors in Spyder(Python 3.9)
I have a dataset with 569 data points, each associated with two features and labelled either as 0 or 1. Based on the label, I want to make a scatterplot graph such that a data point associated with label 0 gets a green dot while one with label 1 gets a red dot on the scatterplot. I am using Spyder (Python 3.9).
Here is my code :
`
from sklearn.datasets import load_breast_cancer
breast=load_breast_cancer()
#print(breast)
breast_data=breast.data
#print(breast_data)
print(breast_data.shape)
breast_labels = breast.target
#print(breast_labels)
print(breast_labels.shape)
import numpy as np
labels=np.reshape(breast_labels,(569,1))
final_breast_data=np.concatenate([breast_data,labels],axis=1)
#print(final_breast_data)
print(final_breast_data.shape)
import pandas as pd
breast_dataset=pd.DataFrame(final_breast_data)
#print(breast_dataset)
features=breast.feature_names
print(features)
features_labels=np.append(features,'label')
#print(features_labels)
breast_dataset.columns=features_labels
print(breast_dataset.head())
breast_dataset['label'].replace(0,'Benign',inplace=True)
breast_dataset['label'].replace(1,'Malignant',inplace=True)
print(breast_dataset.tail())
from sklearn.preprocessing import StandardScaler
x=breast_dataset.loc[:,features].values
x=StandardScaler().fit_transform(x)
print(x.shape)
print(np.mean(x),np.std(x))
feat_cols=['feature'+str(i) for i in range(x.shape[1]) ]
print(feat_cols)
#print(x.shape[0])
normalised_breast=pd.DataFrame(x,columns=feat_cols)
print(normalised_breast.tail())
from sklearn.decomposition import PCA
pca_breast=PCA(n_components=2)
#print(pca_breast)
principalComponents_breast=pca_breast.fit_transform(x)
#print(principalComponents_breast)
principal_breast_Df=pd.DataFrame(data=principalComponents_breast,columns=['Principal component 1','Principal component 2'])
print(principal_breast_Df.tail())
print('Explained variation per principal component:{}'.format(pca_breast.explained_variance_ratio_))
import pandas as pd
import matplotlib.pyplot as plt
#x=principal_breast_Df['Principal component 1']
#y=principal_breast_Df['Principal component 2']
#plt.scatter(x,y)
#plt.show()
plt.figure()
plt.figure(figsize=(10,10))
plt.xticks(fontsize=12)
plt.yticks(fontsize=14)
plt.xlabel('Principal Component 1',fontsize=20)
plt.ylabel('Principal Component 2',fontsize=20)
plt.title("Principal Component Analysis of Breast Cancer Dataset",fontsize=20)
targets=['Benign','Malignant']
colors=['r','g']
for target,color in zip(targets,colors):
indicesToKeep=breast_dataset['label']==target
plt.scatter(principal_breast_Df.loc[indicesToKeep,'Principal componenet 1'],principal_breast_Df.loc[indicesToKeep,'Principal component 2'],c=color,s=50)
plt.legend(targets,prop={'size':15});
`
I tried to print the graph as mentioned in the problem but got a blank graph with no dots ! I think that I am missing some packages that I should include. Any help would be appreciated.
A:
Change the last part to:
for target,color in zip(targets,colors):
indicesToKeep=breast_dataset['label']==target
plt.scatter(principal_breast_Df[breast_dataset['label']==target]['Principal component 1'], principal_breast_Df[breast_dataset['label']==target]['Principal component 2'],c=color,s=50)
plt.legend(targets,prop={'size':15});
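Equivalently, the mask the loop already computes can be reused with .loc, which also works once the misspelled column name from the question ('Principal componenet 1') is corrected (a sketch):
for target, color in zip(targets, colors):
    indicesToKeep = breast_dataset['label'] == target
    plt.scatter(principal_breast_Df.loc[indicesToKeep, 'Principal component 1'],
                principal_breast_Df.loc[indicesToKeep, 'Principal component 2'],
                c=color, s=50)
plt.legend(targets, prop={'size': 15})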
| Coloring data sets with two colors in Spyder(Python 3.9) | I have a dataset with 569 data points, each associated with two features and labelled either as 0 0r 1. Based on the label, I want to make a scatterplot graph such that data point associated with label 0 gets a green dot while the other with label 1 gets red dot on the scatterplot.I am using spyder(python 3.9).
Here is my code :
`
from sklearn.datasets import load_breast_cancer
breast=load_breast_cancer()
#print(breast)
breast_data=breast.data
#print(breast_data)
print(breast_data.shape)
breast_labels = breast.target
#print(breast_labels)
print(breast_labels.shape)
import numpy as np
labels=np.reshape(breast_labels,(569,1))
final_breast_data=np.concatenate([breast_data,labels],axis=1)
#print(final_breast_data)
print(final_breast_data.shape)
import pandas as pd
breast_dataset=pd.DataFrame(final_breast_data)
#print(breast_dataset)
features=breast.feature_names
print(features)
features_labels=np.append(features,'label')
#print(features_labels)
breast_dataset.columns=features_labels
print(breast_dataset.head())
breast_dataset['label'].replace(0,'Benign',inplace=True)
breast_dataset['label'].replace(1,'Malignant',inplace=True)
print(breast_dataset.tail())
from sklearn.preprocessing import StandardScaler
x=breast_dataset.loc[:,features].values
x=StandardScaler().fit_transform(x)
print(x.shape)
print(np.mean(x),np.std(x))
feat_cols=['feature'+str(i) for i in range(x.shape[1]) ]
print(feat_cols)
#print(x.shape[0])
normalised_breast=pd.DataFrame(x,columns=feat_cols)
print(normalised_breast.tail())
from sklearn.decomposition import PCA
pca_breast=PCA(n_components=2)
#print(pca_breast)
principalComponents_breast=pca_breast.fit_transform(x)
#print(principalComponents_breast)
principal_breast_Df=pd.DataFrame(data=principalComponents_breast,columns=['Principal component 1','Principal component 2'])
print(principal_breast_Df.tail())
print('Explained variation per principal component:{}'.format(pca_breast.explained_variance_ratio_))
import pandas as pd
import matplotlib.pyplot as plt
#x=principal_breast_Df['Principal component 1']
#y=principal_breast_Df['Principal component 2']
#plt.scatter(x,y)
#plt.show()
plt.figure()
plt.figure(figsize=(10,10))
plt.xticks(fontsize=12)
plt.yticks(fontsize=14)
plt.xlabel('Principal Component 1',fontsize=20)
plt.ylabel('Principal Component 2',fontsize=20)
plt.title("Principal Component Analysis of Breast Cancer Dataset",fontsize=20)
targets=['Benign','Malignant']
colors=['r','g']
for target,color in zip(targets,colors):
indicesToKeep=breast_dataset['label']==target
plt.scatter(principal_breast_Df.loc[indicesToKeep,'Principal componenet 1'],principal_breast_Df.loc[indicesToKeep,'Principal component 2'],c=color,s=50)
plt.legend(targets,prop={'size':15});
`
I tried to print the graph as mentioned in the problem but got a blank graph with no dots ! I think that I am missing some packages that I should include. Any help would be appreciated.
| [
"Change the last part to:\nfor target,color in zip(targets,colors):\n indicesToKeep=breast_dataset['label']==target\n plt.scatter(principal_breast_Df[breast_dataset['label']==target]['Principal component 1'], principal_breast_Df[breast_dataset['label']==target]['Principal component 2'],c=color,s=50)\n plt.legend(targets,prop={'size':15});\n\n"
] | [
0
] | [] | [] | [
"colors",
"graph",
"python",
"spyder"
] | stackoverflow_0074616534_colors_graph_python_spyder.txt |
Q:
Saving files inside a subfolder using Pathlib
I am using Pathlib to store my output images from a script.
My working tree looks like this : root> time.strftime("%H%M%S") > Scans.
I want to save the images using cv2.imwrite to the subfolder Scans. Using np.tofile to any other format is also fine.
But the problem is I am not able to save the images to the folder "Scans", instead they are saved in the folder "time.strftime("%H%M%S")".
I use the following codes:
from pathlib import Path
import cv2,time
import numpy as np
root = Path(".")
measurement_dir = root / time.strftime("%H%M%S")
scan_dir = measurement_dir / "Scans"
scan_dir.mkdir(parents=True, exist_ok=True)
data = np.random.randint(255, size=(512,512), dtype=np.uint8)
cv2.imwrite(str(scan_dir) +"image_name"+ ".png", data)
How can I go inside my subfolder using pathlib?
A:
The problem is that you are not adding the / (path separator) when you pass the output path to imwrite. This way, it reads the filename as, for example, 164617/Scansimage_name.png, so the image ends up in the timestamp folder next to Scans instead of inside it.
I guess what you are looking for is:
cv2.imwrite(str(scan_dir / "image_name.png"), data)
| Saving files inside a subfolder using Pathlib | I am using Pathlib to store my output images from a script.
My working tree looks like this : root> time.strftime("%H%M%S") > Scans.
I want to save the images using cv2.imwrite to the subfolder Scans. Using np.tofile to any other format is also fine.
But the problem is I am not able to save the images to the folder "Scans", instead they are saved in the folder "time.strftime("%H%M%S")".
I use the following codes:
from pathlib import Path
import cv2,time
import numpy as np
root = Path(".")
measurement_dir = root / time.strftime("%H%M%S")
scan_dir = measurement_dir / "Scans"
scan_dir.mkdir(parents=True, exist_ok=True)
data = np.random.randint(255, size=(512,512), dtype=np.uint8)
cv2.imwrite(str(scan_dir) +"image_name"+ ".png", data)
How can I go inside my subfolder using pathlib?
| [
"The problem is that you are not adding the / when you pass the output path to imwrite. This way, it reads the filename as for example Scans164617.png.\nI guess what you are looking for is:\ncv2.imwrite(str(scan_dir / \"image_name.png\"), data)\n\n"
] | [
0
] | [] | [] | [
"pathlib",
"python"
] | stackoverflow_0074616401_pathlib_python.txt |
Q:
GSPREAD: How do I fetch the last added worksheet from a spreadsheet having many worksheets?
Using gspread, I know how to access a sheet by name, id or index, like:
import gspread
gc = gspread.authorize(credentials)
worksheet = sh.worksheet("January")
or
worksheet = sh.sheet1
But I was wondering if it is possible to open a last added or last updated sheet?
A:
It's not possible to get the last modification time of each individual sheet, because that information is tracked through Google Drive at the level of the whole file.
It is possible to obtain the last modification time of the entire spreadsheet using lastUpdateTime:
import gspread
sa = gspread.service_account('authentication')
sa.open("worksheet name").lastUpdateTime
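If "last added" only needs to mean the sheet in the last tab position (new sheets are usually appended to the end, although manual reordering breaks this assumption), a hedged sketch:
import gspread

sa = gspread.service_account('authentication')
sh = sa.open("worksheet name")
last_sheet = sh.worksheets()[-1]   # last sheet in tab order, often the most recently added
print(last_sheet.title)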
| GSPREAD: How do I fetch the last added worksheet from a spreadsheet having many worksheets? | Using gspread, I know how to access a sheet by name, id or index, like:
import gspread
gc = gspread.authorize(credentials)
worksheet = sh.worksheet("January")
or
worksheet = sh.sheet1
But I was wondering if it is possible to open a last added or last updated sheet?
| [
"It's not possible to get the last modification of each spreadsheet sheet because this modification is fetched through Google Drive.\nIt's possible to obtain the last modification of the entire worksheet using the lastUpdateTime:\nimport gspread\nsa = gspread.service_account('authentication')\nsa.open(\"worksheet name\").lastUpdateTime\n\n"
] | [
0
] | [] | [] | [
"google_sheets",
"google_sheets_api",
"gspread",
"python"
] | stackoverflow_0054902092_google_sheets_google_sheets_api_gspread_python.txt |
Q:
Removing pip cache after installing dependencies in Docker image
I noticed that docker images may be large because of keeping pip cache in /root/.cache/pip. I know I can remove this directory after all my dependencies are installed in my docker image. What I'm not sure is how this relates to docker's BuildKit which allows quicker installation by using cache. Are these two somehow related? So if I'd like to benefit from BuildKit is it safe to remove /root/.cache/pip? My question is motivated by heavy python dependencies like torch and nvidia which may occupy few GB. Removing pip cache may decrease the size of the image by 2-3 GB.
A:
The better solution here is to not cache the packages in the first place (you're not going to need them anyway; the image build process won't benefit from them unless you're doing something terrible).
The simplest solution is to just pass --no-cache-dir to your pip invocations, and it won't cache the packages to disk in the first place. Alternatively, you can drop a pip.conf containing:
[global]
no-cache-dir = True
to /etc/pip.conf in the container to disable it globally (without needing to pass the switch each time). Note that if your image ships with a version of pip prior to 19.0.1 the pip.conf solution is buggy; if that's the case, you can use --no-cache-dir command line switch manually to update pip to a post-19.0.1 version, then modify /etc/pip.conf to add the extra line if needed.
Bonus: You may want to expand the pip.conf to:
[install]
compile = no
[global]
no-cache-dir = True
where the compile = no tells pip not to compile the Python source files to bytecode on install; the benefit of pre-compiled bytecode is (slightly) faster startup, but by bloating your image, it will take longer to download/run it, so the cost to the Docker layer outweighs any savings for Python launch itself.
Lastly, add:
ENV PYTHONDONTWRITEBYTECODE=1
to your Dockerfile (can be combined with other ENV settings to avoid extra layers) near the top of the file. Where the pip.conf prevents compiling/writing bytecode on install, the environment variable prevents writing them at runtime (which would be a pointless exercise; when the container exits, the bytecode would be lost anyway).
| Removing pip cache after installing dependencies in Docker image | I noticed that docker images may be large because of keeping pip cache in /root/.cache/pip. I know I can remove this directory after all my dependencies are installed in my docker image. What I'm not sure is how this relates to docker's BuildKit which allows quicker installation by using cache. Are these two somehow related? So if I'd like to benefit from BuildKit is it safe to remove /root/.cache/pip? My question is motivated by heavy python dependencies like torch and nvidia which may occupy few GB. Removing pip cache may decrease the size of the image by 2-3 GB.
| [
"The better solution here is to not cache the packages in the first place (you're not going to need them anyway; the image build process won't benefit from them unless you're doing something terrible).\nThe simplest solution is to just pass --no-cache-dir to your pip invocations, and it won't cache the packages to disk in the first place. Alternatively, you can drop a pip.conf containing:\n[global]\nno-cache-dir = True\n\nto /etc/pip.conf in the container to disable it globally (without needing to pass the switch each time). Note that if your image ships with a version of pip prior to 19.0.1 the pip.conf solution is buggy; if that's the case, you can use --no-cache-dir command line switch manually to update pip to a post-19.0.1 version, then modify /etc/pip.conf to add the extra line if needed.\nBonus: You may want to expand the pip.conf to:\n[install]\ncompile = no\n\n[global]\nno-cache-dir = True\n\nwhere the compile = no tells pip not to compile the Python source files to bytecode on install; the benefit of pre-compiled bytecode is (slightly) faster startup, but by bloating your image, it will take longer to download/run it, so the cost to the Docker layer outweighs any savings for Python launch itself.\nLastly, add:\nENV PYTHONDONTWRITEBYTECODE=1\n\nto your Dockerfile (can be combined with other ENV settings to avoid extra layers) near the top of the file. Where the pip.conf prevents compiling/writing bytecode on install, the environment variable prevents writing them at runtime (which would be a pointless exercise; when the container exits, the bytecode would be lost anyway).\n"
] | [
1
] | [] | [] | [
"docker",
"linux",
"pip",
"python"
] | stackoverflow_0074616667_docker_linux_pip_python.txt |
Q:
AES-GCM 256-bit VS. SSL/TLS for socket security
Is there a difference between using AES-GCM 256-bit encryption, or using SSL/TLS to pass data over a socket.
I am currently passing data back and forth from client to server, using asymmetric AES-GCM 256-bit encryption. Is there an advantage to using SSL/TLS as opposed to my current security method?
A:
difference between using AES-GCM 256-bit encryption, or using SSL/TLS
These cannot be directly compared.
AES-GCM is encryption with integrity protection - nothing more. It needs an encryption key which somehow has to be exchanged between the sender and recipient - how this is done is out of scope of AES-GCM.
SSL/TLS is a protocol specifically designed to protect a communication between two parties. It provides encryption and integrity protection (for example using AES-GCM), but much more: key exchange to compute a common key which is then used in the encryption, replay protection, and authentication of the server to protect against man-in-the-middle attacks.
Thus, it is better to use SSL/TLS, since it provides not only encryption but much more of what is needed for secure communication.
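For a rough illustration in Python (a sketch only: example.com is a placeholder host and the AES-GCM part assumes the third-party cryptography package): the TLS socket gets key exchange and server authentication from the protocol, while the raw AES-GCM call leaves key distribution, nonce handling and peer authentication entirely to you.
import os
import socket
import ssl
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# TLS: the library negotiates keys and verifies the server certificate for you
ctx = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as raw_sock:
    with ctx.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        tls_sock.sendall(b"hello over TLS")

# Raw AES-GCM: encryption + integrity only; sharing `key` safely and never
# reusing a nonce per key is your problem, as is authenticating the peer
key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"hello over a plain socket", None)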
| AES-GCM 256-bit VS. SSL/TLS for socket security | Is there a difference between using AES-GCM 256-bit encryption, or using SSL/TLS to pass data over a socket.
I am currently passing data back and forth from client to server, using asymmetric AES-GCM 256-bit encryption. Is there an advantage to using SSL/TLS as opposed to my current security method?
| [
"\ndifference between using AES-GCM 256-bit encryption, or using SSL/TLS\n\nThese cannot be directly compared.\n\nAES-GCM is encryption with integrity protection - nothing more. It needs an encryption key which somehow needs to be exchanged between the sender and recipient - how this is done is out of scope of AES-CGM.\nSSL/TLS is a protocol specifically to protect a communication between two parties.It provides encryption and integrity protection (for example using AES-CGM), but much more: Key exchange to compute a common key which is then used in the encryption, replay protection, authentication of the server to protect against man in the middle attacks.\n\nThus, better use SSL/TLS since it provides not only encryption but much more of what is needed for secure communication.\n"
] | [
2
] | [] | [] | [
"aes",
"encryption",
"python",
"security",
"ssl"
] | stackoverflow_0074616446_aes_encryption_python_security_ssl.txt |
Q:
pandas: merge strings when other columns satisfy a condition
I have a table:
genome start end strand etc
GUT_GENOME270877.fasta 98 396 +
GUT_GENOME270877.fasta 384 574 -
GUT_GENOME270877.fasta 593 984 +
GUT_GENOME270877.fasta 991 999 -
I'd like to make a new table with a column coordinates, which joins the start and end columns and looks like this:
genome start end strand etc coordinates
GUT_GENOME270877.fasta 98 396 + 98..396
GUT_GENOME270877.fasta 384 574 - complement(384..574)
GUT_GENOME270877.fasta 593 984 + 593..984
GUT_GENOME270877.fasta 991 999 - complement(991..999)
so that if there's a - in the etc column, I'd like to do not just
df['coordinates'] = df['start'].astype(str) + '..' + df['end'].astype(str)
but to add brackets and complement, like this:
df['coordinates'] = 'complement(' + df['start'].astype(str) + '..' + df['end'].astype(str) + ')'
The only thing I'm missing is how to introduce the condition.
A:
You can use numpy.where:
m = df['strand'].eq('-')
df['coordinates'] = (np.where(m, 'complement(', '')
+df['start'].astype(str)+'..'+df['end'].astype(str)
+np.where(m, ')', '')
)
Or boolean indexing:
m = df['strand'].eq('-')
df['coordinates'] = df['start'].astype(str)+'..'+df['end'].astype(str)
df.loc[m, 'coordinates'] = 'complement('+df.loc[m, 'coordinates']+')'
Output:
genome start end strand coordinates
0 GUT_GENOME270877.fasta 98 396 + 98..396
1 GUT_GENOME270877.fasta 384 574 - complement(384..574)
2 GUT_GENOME270877.fasta 593 984 + 593..984
3 GUT_GENOME270877.fasta 991 999 - complement(991..999)
| pandas: merge strings when other columns satisfy a condition | I have a table:
genome start end strand etc
GUT_GENOME270877.fasta 98 396 +
GUT_GENOME270877.fasta 384 574 -
GUT_GENOME270877.fasta 593 984 +
GUT_GENOME270877.fasta 991 999 -
I'd like to make a new table with a column coordinates, which joins the start and end columns and looks like this:
genome start end strand etc coordinates
GUT_GENOME270877.fasta 98 396 + 98..396
GUT_GENOME270877.fasta 384 574 - complement(384..574)
GUT_GENOME270877.fasta 593 984 + 593..984
GUT_GENOME270877.fasta 991 999 - complement(991..999)
so that if there's a - in the etc column, I'd like to do not just
df['coordinates'] = df['start'].astype(str) + '..' + df['end'].astype(str)
but to add brackets and complement, like this:
df['coordinates'] = 'complement(' + df['start'].astype(str) + '..' + df['end'].astype(str) + ')'
The only thing I'm missing is how to introduce the condition.
| [
"You can use numpy.where:\nm = df['strand'].eq('-')\n\ndf['coordinates'] = (np.where(m, 'complement(', '')\n +df['start'].astype(str)+'..'+df['end'].astype(str)\n +np.where(m, ')', '')\n )\n\nOr boolean indexing:\nm = df['strand'].eq('-')\n\ndf['coordinates'] = df['start'].astype(str)+'..'+df['end'].astype(str)\n\ndf.loc[m, 'coordinates'] = 'complement('+df.loc[m, 'coordinates']+')'\n\nOutput:\n genome start end strand coordinates\n0 GUT_GENOME270877.fasta 98 396 + 98..396\n1 GUT_GENOME270877.fasta 384 574 - complement(384..574)\n2 GUT_GENOME270877.fasta 593 984 + 593..984\n3 GUT_GENOME270877.fasta 991 999 - complement(991..999)\n\n"
] | [
1
] | [] | [] | [
"conditional_statements",
"if_statement",
"pandas",
"python"
] | stackoverflow_0074616690_conditional_statements_if_statement_pandas_python.txt |
Q:
Error in pytube
Code:
from pytube import Playlist
playlist = Playlist('https://www.youtube.com/playlist?list=PLWPirh4EWFpEpO6NjjWLbKSCb-wx3hMql')
for video in playlist.videos:
print("Video: ",video)
video.streams.get_highest_resolution().download()
Error I am getting:
<urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)>
A:
Go to /Applications/Python3.x and run 'Install Certificates.command'
A:
(for OSX users)
In other words...
Go to your Applications folder and look for the Python folder. Mine says "Python 3.11"
Open that directory, there is a file called "Install Certificates.command".
Open that file and it will run the command.
A:
# Solved by adding:
import ssl
ssl._create_default_https_context = ssl._create_stdlib_context
| Error in pytube | Code:
from pytube import Playlist
playlist = Playlist('https://www.youtube.com/playlist?list=PLWPirh4EWFpEpO6NjjWLbKSCb-wx3hMql')
for video in playlist.videos:
print("Video: ",video)
video.streams.get_highest_resolution().download()
Error I am getting:
<urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)>
| [
"Go to /Applications/Python3.x and run 'Install Certificates.command'\n",
"(for OSX users)\nIn another words...\n\nGo to your applications folder and look for Python folder. Mine says \"Python 3.11\"\nOpen that directory, there is a file called \"Install Certificates.command\".\nOpen that file and it will run the command.\n\n",
"# Solved in \n# add\nimport ssl\nssl._create_default_https_context = ssl._create_stdlib_context\n\n"
] | [
0,
0,
0
] | [] | [] | [
"python",
"pytube",
"ssl",
"youtube"
] | stackoverflow_0072306952_python_pytube_ssl_youtube.txt |
Q:
Vscode black formatter is not working in poetry project
I have these settings in vscode for the black extension in a poetry project, which uses system cache and poetry venv.
"editor.formatOnSave": true,
"python.formatting.provider": "black",
"python.formatting.blackPath": "path-to-/bin/black",
"python.pythonPath": "path-to-/python",
"python.linting.mypyEnabled": true,
"python.linting.mypyPath": "path-to-/bin/mypy"
I cannot understand why the formatter formats nothing. I am using local workspace settings ( above ).
A:
Make sure black is installed in the currently used environment.
Open an integrated terminal and activate the venv, then run pip show black to see if it's installed in the current environment. If not:
1. Comment out these two settings:
"python.formatting.provider": "black",
"python.formatting.blackPath":"path-to-/bin/black",
2. Open a Python file, right-click and choose Format Document With... --> Python; a prompt will pop up asking you to install a formatter, so choose to install black. After installation, the following setting will appear automatically in Settings.json:
"python.formatting.provider": "black"
Then you can Format Document.
A:
I found out that you must set the default formatter, which is language specific.
For Python this is the ms-python.python extension by Microsoft, which allows a specific formatter to be enabled, e.g. autopep8, black, yapf, etc. Note I was getting a notification telling me that Extension 'prettier - Code formatter' cannot format file.py
"[python]": {
"editor.defaultFormatter": "ms-python.python",
}
Then include your actual formatter:
"python.formatting.provider": "black",
"python.formatting.blackPath": "/path/"
A:
If you're developing in docker you can add
RUN ln -s $(poetry env info -p)/bin/black /usr/local/bin/black
and in your settings.json use the link
...
"python.formatting.provider": "black",
"python.formatting.blackPath": "/usr/local/bin/black",
which will use your black settings in your pyproject.toml (i.e) something like
[tool.black]
line-length = 80
target-version = ['py39']
| Vscode black formatter is not working in poetry project | I have these settings in vscode for the black extension in a poetry project, which uses system cache and poetry venv.
"editor.formatOnSave": true,
"python.formatting.provider": "black",
"python.formatting.blackPath": "path-to-/bin/black",
"python.pythonPath": "path-to-/python",
"python.linting.mypyEnabled": true,
"python.linting.mypyPath": "path-to-/bin/mypy"
I cannot understand why the formatter formats nothing. I am using local workspace settings ( above ).
| [
"Make sure black is installed in current used environment.\nOpen a integrated Terminal and activate the venv, run pip show black to see if it's installed in current environment. If not,\n1.Comment these two settings;\n\n\"python.formatting.provider\": \"black\",\n\"python.formatting.blackPath\":\"path-to-/bin/black\",\n\n2.Turn to python file, right click choose Format Document With... --> Python, there would be a prompt popping up to mention you install formatter, choose install black. After installation, the following setting will occur automatically in the Settings.json:\n\"python.formatting.provider\": \"black\"\n\nThen you can Format Document.\n\n",
"I found out that you must set the default formatter which is language specific.\nFor python this is ms-python.python extension by Microsoft, which allows a specific formatter to be enabled, e.g autopep8, black, yapf, etc. Note I was getting notification telling me that Extension 'prettier - Code formatter' cannot format file.py\n\"[python]\": {\n \"editor.defaultFormatter\": \"ms-python.python\",\n}\n\nThen include your actual formatter:\n\"python.formatting.provider\": \"black\",\n\"python.formatting.blackPath\": \"/path/\"\n\n",
"If you're developing in docker you can add\nRUN ln -s $(poetry env info -p)/bin/black /usr/local/bin/black\n\nand in your settings.json use the link\n...\n\"python.formatting.provider\": \"black\",\n\"python.formatting.blackPath\": \"/usr/local/bin/black\",\n\nwhich will use your black settings in your pyproject.toml (i.e) something like\n[tool.black]\nline-length = 80\ntarget-version = ['py39']\n\n"
] | [
2,
1,
0
] | [] | [] | [
"django",
"python",
"visual_studio_code"
] | stackoverflow_0067287004_django_python_visual_studio_code.txt |
Q:
how to check if a dynamodb attribute is a reserved keyword (in Python)
I need to update a dynamodb table using the update_item method.
I get a dynamic list of attribute to update, so some of them can be reserved words.
Is there a way to check (in Python) if an attribute is a reserved word?
There is a list of all ~570 words here. So in theory I can create a static list and check it, but I want to believe there is a better way.
Example to clarify:
cols_to_update = ["a", "comment"] -- comment is a dynamodb reserved word
My code will generate the following:
update_expression = "SET a = :_a, comment = :_comment"
expression_attribute_values = {":_a": some_value, ":_comment": some_value}
and use it in the update command:
table.update_item(
Key={
'id': some_id
},
UpdateExpression=update_expression,
ExpressionAttributeValues=expression_attribute_values
)
this will throw an error saying the comment is a reserved word: botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the UpdateItem operation: Invalid UpdateExpression: Attribute name is a reserved keyword; reserved keyword: comment
I can use ExpressionAttributeNames :
table.update_item(
Key={
'id': some_id
},
UpdateExpression=update_expression,
ExpressionAttributeValues=expression_attribute_values,
ExpressionAttributeNames={
"#c": "comment"
}
)
but I don't know which of the attributes I update are reserved words and need to be in the ExpressionAttributeNames
A:
Always use ExpressionAttributeValues/ExpressionAttributeNames so that you do not have to check whether the word is reserved or not. The list of reserved words is not static and can change at any time. There is no API to check for reserved keywords.
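As an illustration (not an official snippet), the dynamic update from the question could be built so every attribute name and value goes through a placeholder; table, some_id and the values dict are assumed from the question's context, and the #c{i}/:v{i} naming is just one choice:
cols_to_update = ["a", "comment"]                 # dynamic list, may contain reserved words
values = {"a": "value_a", "comment": "value_b"}   # hypothetical values to write

update_expression = "SET " + ", ".join(
    f"#c{i} = :v{i}" for i in range(len(cols_to_update))
)
expression_attribute_names = {f"#c{i}": col for i, col in enumerate(cols_to_update)}
expression_attribute_values = {f":v{i}": values[col] for i, col in enumerate(cols_to_update)}

table.update_item(
    Key={"id": some_id},
    UpdateExpression=update_expression,
    ExpressionAttributeNames=expression_attribute_names,
    ExpressionAttributeValues=expression_attribute_values,
)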
| how to check if a dynamodb attribute is a reserved keyword (in Python) | I need to update a dynamodb table using the update_item method.
I get a dynamic list of attribute to update, so some of them can be reserved words.
Is there a way to check (in Python) if an attribute is a reserved word?
There is a list of all ~570 words here. So in theory I can create a static list and check it, but I want to believe there is a better way.
Example to clarify:
cols_to_update = ["a", "comment"] -- comment is a dynamodb reserved word
My code will generate the following:
update_expression = "SET a = :_a, comment = :_comment"
expression_attribute_values = {":_a": some_value, ":_comment": some_value}
and use it in the update command:
table.update_item(
Key={
'id': some_id
},
UpdateExpression=update_expression,
ExpressionAttributeValues=expression_attribute_values
)
this will throw an error saying the comment is a reserved word: botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the UpdateItem operation: Invalid UpdateExpression: Attribute name is a reserved keyword; reserved keyword: comment
I can use ExpressionAttributeNames :
table.update_item(
Key={
'id': some_id
},
UpdateExpression=update_expression,
ExpressionAttributeValues=expression_attribute_values,
ExpressionAttributeNames={
"#c": "comment"
}
)
but I don't know which of the attributes I update are reserved words and need to be in the ExpressionAttributeNames
| [
"Always use expressionAttributeValues/Names so that you do not have to check if the word is reserved or not. That is not a static list of reserved words and can be changed at anytime. There is no API to check for reserved key words.\n"
] | [
1
] | [] | [] | [
"amazon_dynamodb",
"boto3",
"python"
] | stackoverflow_0074615649_amazon_dynamodb_boto3_python.txt |
Q:
Why does not pandas datetime work when trying to change dateformat?
I have the following code
temp1 = df.iloc[1:,0]
print(type(temp1))
temp2 = pd.to_datetime(temp1, format='%Y/%m/%d')
where df is a dataframe whose first column (i.e. column 0) contains dates with the format "YYYY-MM-DD-hh-mm-ss". Now I'm trying to convert that into the format "YYYY-MM-DD" with lines 2 and 3, but it does not do anything to the format. The output is the following:
Can someone explain why my code does not work? When I try to print out the type of the two temporary variables, I get <class 'pandas.core.series.Series'> for both.
A:
The format is to tell pandas how to parse the datetime string, not how to output the result. From the docs:
The strftime to parse time, e.g. "%d/%m/%Y"
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_datetime.html
You can use something like this to format the datetime:
import pandas as pd
df = pd.DataFrame(['2012-01-01 12:00:01', '2012-01-01 12:00:02'])
pd.to_datetime(df[0]).dt.strftime('%d-%m-%Y')
0 01-01-2012
1 01-01-2012
Name: 0, dtype: object
| Why does not pandas datetime work when trying to change dateformat? | I have the following code
temp1 = df.iloc[1:,0]
print(type(temp1))
temp2 = pd.to_datetime(temp1, format='%Y/%m/%d')
where df is a dataframe whose first column (i.e. column 0) contains dates with the format "YYYY-MM-DD-hh-mm-ss". Now I'm trying to convert that into the format "YYYY-MM-DD" with lines 2 and 3, but it does not do anything to the format. The output is the following:
Can someone explain why my code does not work? When I try to print out the type of the two temporary variables, I get <class 'pandas.core.series.Series'> for both.
| [
"The format is to tell pandas how to parse the datetime string, not how to output the result. From the docs:\nThe strftime to parse time, e.g. \"%d/%m/%Y\"\n\nhttps://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_datetime.html\nYou can use something like this to format the datetime:\nimport pandas as pd\n\ndf = pd.DataFrame(['2012-01-01 12:00:01', '2012-01-01 12:00:02'])\n\npd.to_datetime(df[0]).dt.strftime('%d-%m-%Y')\n\n0 01-01-2012\n1 01-01-2012\nName: 0, dtype: object\n\n\n"
] | [
2
] | [] | [] | [
"pandas",
"python",
"python_3.x",
"python_datetime"
] | stackoverflow_0074616802_pandas_python_python_3.x_python_datetime.txt |
Q:
Issues while Generating TFrecords
I am Trying to generate tfrecord file with tensorflow 2.0, at the first I have generated correctly the files, but when I am trying generate them again, python console show an error below:
Traceback (most recent call last):
File "generate_tfrecordv2.py", line 106, in <module>
tf.compat.v1.app.run()
File "C:\Users\LUIS\AppData\Roaming\Python\Python37\site-packages\tensorflow_c ore\python\platform\app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "C:\Users\LUIS\AppData\Roaming\Python\Python37\site-packages\absl\app.py" , line 299, in run
_run_main(main, args)
File "C:\Users\LUIS\AppData\Roaming\Python\Python37\site-packages\absl\app.py" , line 250, in _run_main
sys.exit(main(argv))
File "generate_tfrecordv2.py", line 95, in main
grouped = split(examples, 'filename')
File "generate_tfrecordv2.py", line 45, in split
gb = df.groupby(group)
File "D:\ProgramData\Anaconda3\lib\site-packages\pandas\core\generic.py", line 7632, in groupby
observed=observed, **kwargs)
File "D:\ProgramData\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.p y", line 2110, in
groupby return klass(obj, by, **kwds)
File "D:\ProgramData\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.p y", line 360, in __init__
mutated=self.mutated)
File "D:\ProgramData\Anaconda3\lib\site-packages\pandas\core\groupby\grouper.p y", line 578, in _get_grouper
raise KeyError(gpr) KeyError: 'filename'
CSV file content is just like that:
filename;width;height;class;xmin;ymin;xmax;ymax
19219.jpg;800;600;person;49;49;377;559
19219.jpg;800;600;person;431;131;644;592
Can you tell me what the error is? The command that I used is this:
python generate_tfrecord.py --csv_input=train_labels.csv --image_dir=train --output_path=train.record
this is my xml sample:
<annotation>
<folder>data_modified</folder>
<filename>1_245</filename>
<path>C:\material\dataset\test\1_245.jpg</path>
<source>
<database>Unknown</database>
</source>
<size>
<width>800</width>
<height>600</height>
<depth>3</depth>
</size>
<segmented>0</segmented>
<object>
<name>person</name>
<pose>Unspecified</pose>
<truncated>0</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>279</xmin>
<ymin>116</ymin>
<xmax>423</xmax>
<ymax>415</ymax>
</bndbox>
</object>
</annotation>
I did change generate_tfrecords.py and regenerated xml_to_csv, but it doesn't work.
"""
Usage:
# From tensorflow/models/
# Create train data:
python generate_tfrecord.py --csv_input=data/train_labels.csv --output_path=train.record
# Create test data:
python generate_tfrecord.py --csv_input=data/test_labels.csv --output_path=test.record
"""
from __future__ import division
from __future__ import print_function
from __future__ import absolute_import
import os
import io
import pandas as pd
import tensorflow as tf
from PIL import Image
#from object_detection.utils import dataset_util
#lc inicio
import dataset_util
#lc fin
from collections import namedtuple, OrderedDict
#lc inicio
#flags = tf.app.flags
flags = tf.compat.v1.flags
#lc fin
flags.DEFINE_string('csv_input', '', 'Path to the CSV input')
flags.DEFINE_string('output_path', '', 'Path to output TFRecord')
flags.DEFINE_string('image_dir', '', 'Path to images')
FLAGS = flags.FLAGS
# TO-DO replace this with label map
def class_text_to_int(row_label):
if row_label == 'person':
return 1
else:
return None
def split(df, group):
data = namedtuple('data', ['filename', 'object'])
gb = df.groupby(group)
return [data(filename, gb.get_group(x)) for filename, x in zip(gb.groups.keys(), gb.groups)]
def create_tf_example(group, path):
with tf.compat.v1.gfile.GFile(os.path.join(path, '{}'.format(group.filename)), 'rb') as fid:
encoded_jpg = fid.read()
encoded_jpg_io = io.BytesIO(encoded_jpg)
image = Image.open(encoded_jpg_io)
width, height = image.size
filename = group.filename.encode('utf8')
image_format = b'jpg'
xmins = []
xmaxs = []
ymins = []
ymaxs = []
classes_text = []
classes = []
for index, row in group.object.iterrows():
xmins.append(row['xmin'] / width)
xmaxs.append(row['xmax'] / width)
ymins.append(row['ymin'] / height)
ymaxs.append(row['ymax'] / height)
classes_text.append(row['class'].encode('utf8'))
classes.append(class_text_to_int(row['class']))
tf_example = tf.train.Example(features=tf.train.Features(feature={
'image/height': dataset_util.int64_feature(height),
'image/width': dataset_util.int64_feature(width),
'image/filename': dataset_util.bytes_feature(filename),
'image/source_id': dataset_util.bytes_feature(filename),
'image/encoded': dataset_util.bytes_feature(encoded_jpg),
'image/format': dataset_util.bytes_feature(image_format),
'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
'image/object/class/label': dataset_util.int64_list_feature(classes),
}))
return tf_example
def main(_):
#writer = tf.python_io.TFRecordWriter(FLAGS.output_path)
writer = tf.compat.v1.python_io.TFRecordWriter(FLAGS.output_path)
path = os.path.join(FLAGS.image_dir)
examples = pd.read_csv(FLAGS.csv_input)
grouped = split(examples, 'filename')
for group in grouped:
tf_example = create_tf_example(group, path)
writer.write(tf_example.SerializeToString())
writer.close()
output_path = os.path.join(os.getcwd(), FLAGS.output_path)
print('Successfully created the TFRecords: {}'.format(output_path))
if __name__ == '__main__':
tf.compat.v1.app.run()
A:
I had the same error because I used RectLabel and I exported the CSV file directly from there.
The CSV must have this line first:
filename,width,height,class,xmin,ymin,xmax,ymax
Example annotations.csv:
filename,width,height,class,xmin,ymin,xmax,ymax
8.jpg,1280,720,label1,427,82,848,578
9.jpg,1280,720,label1,426,87,845,585
28.jpg,1280,720,label1,435,100,847,563
14.jpg,352,640,label2,103,215,276,398
15.jpg,352,640,label2,106,215,279,399
29.jpg,352,640,label2,61,197,270,405
17.jpg,1280,720,label1,471,178,875,671
I made that file running this script:
https://gist.github.com/iKhushPatel/ed1f837656b155d9b94d45b42e00f5e4
And following this tutorial:
https://towardsdatascience.com/custom-object-detection-using-tensorflow-from-scratch-e61da2e10087
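As an aside (not part of the linked tutorial): the KeyError in the question also goes away if the semicolon-delimited CSV is kept as-is and pandas is simply told about the delimiter. A small check you could run, with the file name taken from the question's command:
import pandas as pd

# The question's CSV uses ';' as separator; with the default ',' the whole header
# becomes one column, so df.groupby('filename') raises KeyError: 'filename'
examples = pd.read_csv("train_labels.csv", sep=";")
print(examples.columns.tolist())  # expect ['filename', 'width', 'height', 'class', 'xmin', 'ymin', 'xmax', 'ymax']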
| Issues while Generating TFrecords | I am Trying to generate tfrecord file with tensorflow 2.0, at the first I have generated correctly the files, but when I am trying generate them again, python console show an error below:
Traceback (most recent call last):
File "generate_tfrecordv2.py", line 106, in <module>
tf.compat.v1.app.run()
File "C:\Users\LUIS\AppData\Roaming\Python\Python37\site-packages\tensorflow_c ore\python\platform\app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "C:\Users\LUIS\AppData\Roaming\Python\Python37\site-packages\absl\app.py" , line 299, in run
_run_main(main, args)
File "C:\Users\LUIS\AppData\Roaming\Python\Python37\site-packages\absl\app.py" , line 250, in _run_main
sys.exit(main(argv))
File "generate_tfrecordv2.py", line 95, in main
grouped = split(examples, 'filename')
File "generate_tfrecordv2.py", line 45, in split
gb = df.groupby(group)
File "D:\ProgramData\Anaconda3\lib\site-packages\pandas\core\generic.py", line 7632, in groupby
observed=observed, **kwargs)
File "D:\ProgramData\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.p y", line 2110, in
groupby return klass(obj, by, **kwds)
File "D:\ProgramData\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.p y", line 360, in __init__
mutated=self.mutated)
File "D:\ProgramData\Anaconda3\lib\site-packages\pandas\core\groupby\grouper.p y", line 578, in _get_grouper
raise KeyError(gpr) KeyError: 'filename'
CSV file content is just like that:
filename;width;height;class;xmin;ymin;xmax;ymax
19219.jpg;800;600;person;49;49;377;559
19219.jpg;800;600;person;431;131;644;592
Can you tell me what the error is? The command that I used is this:
python generate_tfrecord.py --csv_input=train_labels.csv --image_dir=train --output_path=train.record
this is my xml sample:
<annotation>
<folder>data_modified</folder>
<filename>1_245</filename>
<path>C:\material\dataset\test\1_245.jpg</path>
<source>
<database>Unknown</database>
</source>
<size>
<width>800</width>
<height>600</height>
<depth>3</depth>
</size>
<segmented>0</segmented>
<object>
<name>person</name>
<pose>Unspecified</pose>
<truncated>0</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>279</xmin>
<ymin>116</ymin>
<xmax>423</xmax>
<ymax>415</ymax>
</bndbox>
</object>
</annotation>
I did change generate_tfrecords.py and regenerated xml_to_csv, but it doesn't work.
"""
Usage:
# From tensorflow/models/
# Create train data:
python generate_tfrecord.py --csv_input=data/train_labels.csv --output_path=train.record
# Create test data:
python generate_tfrecord.py --csv_input=data/test_labels.csv --output_path=test.record
"""
from __future__ import division
from __future__ import print_function
from __future__ import absolute_import
import os
import io
import pandas as pd
import tensorflow as tf
from PIL import Image
#from object_detection.utils import dataset_util
#lc inicio
import dataset_util
#lc fin
from collections import namedtuple, OrderedDict
#lc inicio
#flags = tf.app.flags
flags = tf.compat.v1.flags
#lc fin
flags.DEFINE_string('csv_input', '', 'Path to the CSV input')
flags.DEFINE_string('output_path', '', 'Path to output TFRecord')
flags.DEFINE_string('image_dir', '', 'Path to images')
FLAGS = flags.FLAGS
# TO-DO replace this with label map
def class_text_to_int(row_label):
if row_label == 'person':
return 1
else:
return None
def split(df, group):
data = namedtuple('data', ['filename', 'object'])
gb = df.groupby(group)
return [data(filename, gb.get_group(x)) for filename, x in zip(gb.groups.keys(), gb.groups)]
def create_tf_example(group, path):
with tf.compat.v1.gfile.GFile(os.path.join(path, '{}'.format(group.filename)), 'rb') as fid:
encoded_jpg = fid.read()
encoded_jpg_io = io.BytesIO(encoded_jpg)
image = Image.open(encoded_jpg_io)
width, height = image.size
filename = group.filename.encode('utf8')
image_format = b'jpg'
xmins = []
xmaxs = []
ymins = []
ymaxs = []
classes_text = []
classes = []
for index, row in group.object.iterrows():
xmins.append(row['xmin'] / width)
xmaxs.append(row['xmax'] / width)
ymins.append(row['ymin'] / height)
ymaxs.append(row['ymax'] / height)
classes_text.append(row['class'].encode('utf8'))
classes.append(class_text_to_int(row['class']))
tf_example = tf.train.Example(features=tf.train.Features(feature={
'image/height': dataset_util.int64_feature(height),
'image/width': dataset_util.int64_feature(width),
'image/filename': dataset_util.bytes_feature(filename),
'image/source_id': dataset_util.bytes_feature(filename),
'image/encoded': dataset_util.bytes_feature(encoded_jpg),
'image/format': dataset_util.bytes_feature(image_format),
'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
'image/object/class/label': dataset_util.int64_list_feature(classes),
}))
return tf_example
def main(_):
#writer = tf.python_io.TFRecordWriter(FLAGS.output_path)
writer = tf.compat.v1.python_io.TFRecordWriter(FLAGS.output_path)
path = os.path.join(FLAGS.image_dir)
examples = pd.read_csv(FLAGS.csv_input)
grouped = split(examples, 'filename')
for group in grouped:
tf_example = create_tf_example(group, path)
writer.write(tf_example.SerializeToString())
writer.close()
output_path = os.path.join(os.getcwd(), FLAGS.output_path)
print('Successfully created the TFRecords: {}'.format(output_path))
if __name__ == '__main__':
tf.compat.v1.app.run()
| [
"I had the same error because I used RectLabel and I exported the CSV file directly from there.\nThe CSV must have this line first:\nfilename,width,height,class,xmin,ymin,xmax,ymax\nExample annotations.csv:\nfilename,width,height,class,xmin,ymin,xmax,ymax\n8.jpg,1280,720,label1,427,82,848,578\n9.jpg,1280,720,label1,426,87,845,585\n28.jpg,1280,720,label1,435,100,847,563\n14.jpg,352,640,label2,103,215,276,398\n15.jpg,352,640,label2,106,215,279,399\n29.jpg,352,640,label2,61,197,270,405\n17.jpg,1280,720,label1,471,178,875,671\n\nI made that file running this script:\nhttps://gist.github.com/iKhushPatel/ed1f837656b155d9b94d45b42e00f5e4\nAnd following this tutorial:\nhttps://towardsdatascience.com/custom-object-detection-using-tensorflow-from-scratch-e61da2e10087\n"
] | [
0
] | [
"@Louis and @jedesha if you check my name, you see it's kw while I specified on my label that it's a capital letter K and not a small letter k or Kw. what you need to do is ensure your filename is correctly written without any underscore in the filename as 1_245 to 1245\n`<object>\n <name>Kw</name> (wrong name)\n <pose>Unspecified</pose>\n <truncated>0</truncated>\n <difficult>0</difficult>\n <bndbox>\n <xmin>733</xmin>\n <ymin>175</ymin>\n <xmax>1210</xmax>\n <ymax>960</ymax>\n </bndbox>\n </object>\n </annotation>\n\n Right Name\n\n<object>\n<name>K</name> (Right name)\n<pose>Unspecified</pose>\n<truncated>0</truncated>\n<difficult>0</difficult>\n<bndbox>\n<xmin>733</xmin>\n<ymin>175</ymin>\n<xmax>1210</xmax>\n<ymax>960</ymax>\n</bndbox>\n</object>\n</annotation>`\n \n\n"
] | [
-1
] | [
"pandas_groupby",
"python",
"python_3.x",
"tensorflow"
] | stackoverflow_0058753666_pandas_groupby_python_python_3.x_tensorflow.txt |
Q:
ERROR: Could not install packages due to an OSError: [13] Permission denied: '/nix/store/8d3695w7vasap3kkcn3yk731v4iw2kcv-python3.8-pip-21.1.3/bin/pip
I've been working on one error for about an hour. I've been developing an app in Nix on Replit, but no matter what I do this error comes up while installing packages with Python pip:
Firstly, this came up whilst installing any packages... But I realized it also comes up when attempting to update pip.
ERROR: Could not install packages due to an OSError: [Errno 13] Permission denied: '/nix/store/8d3695w7vasap3kkcn3yk731v4iw2kcv-python3.8-pip-21.1.3/bin/pip' Consider using the `--user` option or check the permissions.
i1
My replit.nix:
{ pkgs }: {
deps = [
pkgs.cowsay
pkgs.python38Full
pkgs.python38Packages.pip
];
env = {
PYTHONBIN = "${pkgs.python38Full}/bin/python3.8";
LANG = "en_US.UTF-8";
};
}
A:
You could try using the --user option, which installs the package in the user's home directory.
Eg.
pip install octosuite --user
| ERROR: Could not install packages due to an OSError: [13] Permission denied: '/nix/store/8d3695w7vasap3kkcn3yk731v4iw2kcv-python3.8-pip-21.1.3/bin/pip | I've been working on one error for about an hour. I've been development an app in Nix on REPLIT. But no matter what I do this error comes while installing packages with with Python Pip:
Firstly, this came up whilst installing any packages... But I realized it also comes up in attempt to update Pip.
ERROR: Could not install packages due to an OSError: [Errno 13] Permission denied: '/nix/store/8d3695w7vasap3kkcn3yk731v4iw2kcv-python3.8-pip-21.1.3/bin/pip' Consider using the `--user` option or check the permissions.
i1
My replit.nix:
{ pkgs }: {
deps = [
pkgs.cowsay
pkgs.python38Full
pkgs.python38Packages.pip
];
env = {
PYTHONBIN = "${pkgs.python38Full}/bin/python3.8";
LANG = "en_US.UTF-8";
};
}
| [
"You could try using the --user option installs the package in the user's home directory.\nEg.\npip install octosuite --user\n\n"
] | [
0
] | [] | [] | [
"nix",
"pip",
"python",
"replit"
] | stackoverflow_0072117127_nix_pip_python_replit.txt |
Q:
Unable to download a file from Azure Repos using azure devops api python
I am trying to write a script to download a single file from azure repos using python.
I am using the official Microsoft library https://github.com/microsoft/azure-devops-python-api
from azure.devops.connection import Connection
from msrest.authentication import BasicAuthentication
personal_access_token = "MY_PAT"
organization_url = "MY_ORGANIZATION_URL"
repo_id = "MY_REPOSITORY_ID"
file_path = "/FOLDER_NAME/FILE_NAME"
credentials = BasicAuthentication('', personal_access_token)
connection = Connection(base_url=organization_url, creds=credentials)
git_client = connection.clients.get_git_client()
file_content = git_client.get_item_content(repo_id,path=file_path,download=True,include_content=True)
print(file_content)
The response is a generator object
<generator object RequestsClientResponse.stream_download at 0x000002517CCC8900>
But the file is not actually downloading. Any ideas?
A:
It cannot be saved directly; it is a generator object. You can use the reference below to save it into a file.
file_content = git_client.get_item_content(repo_id,path=file_path,download=True,include_content=True)
with open('output.txt', 'wb') as f:
for x in file_content:
f.write(x)
| Unable to download a file from Azure Repos using azure devops api python | I am trying to write a script to download a single file from azure repos using python.
I am using the official Microsoft library https://github.com/microsoft/azure-devops-python-api
from azure.devops.connection import Connection
from msrest.authentication import BasicAuthentication
personal_access_token = "MY_PAT"
organization_url = "MY_ORGANIZATION_URL"
repo_id = "MY_REPOSITORY_ID"
file_path = "/FOLDER_NAME/FILE_NAME"
credentials = BasicAuthentication('', personal_access_token)
connection = Connection(base_url=organization_url, creds=credentials)
git_client = connection.clients.get_git_client()
file_content = git_client.get_item_content(repo_id,path=file_path,download=True,include_content=True)
print(file_content)
The response is a generator object
<generator object RequestsClientResponse.stream_download at 0x000002517CCC8900>
But the file is not actually downloading. Any idea.
| [
"It cannot be saved directly. it is generator object. You can use my reference below to save it into file.\nfile_content = git_client.get_item_content(repo_id,path=file_path,download=True,include_content=True)\nwith open('output.txt', 'wb') as f:\n for x in file_content:\n f.write(x)\n\n"
] | [
0
] | [] | [] | [
"azure_devops",
"azure_devops_rest_api",
"azure_repos",
"python"
] | stackoverflow_0072989160_azure_devops_azure_devops_rest_api_azure_repos_python.txt |
Q:
How to upload file to Sharepoint folder
I want to upload a file from my local machine to SharePoint using the Office365-REST-Python-Client library. The issue I'm having is uploading the file to a specific folder in SharePoint. The documentation seems to be all over the place. The GitHub repo provides the following solution to upload a file to the main "Documents" SP folder, but what if I had subfolders like "Documents/Folder1"?
Here's the code to upload file to Documents in SharePoint:
import os
from office365.sharepoint.client_context import ClientContext
from tests import test_user_credentials, test_team_site_url
ctx = ClientContext(test_team_site_url).with_credentials(test_user_credentials)
path = "test_file.txt"
with open(path, 'rb') as content_file:
file_content = content_file.read()
list_title = "Documents"
target_folder = ctx.web.lists.get_by_title(list_title).root_folder
name = os.path.basename(path)
target_file = target_folder.upload_file(name, file_content).execute_query()
print("File has been uploaded to url: {0}".format(target_file.serverRelativeUrl))
I tried to change the list_title variable to "Documents/Folder1" but get the following error message as a result:
Traceback (most recent call last):
File "/Users/a/venv/lib/python3.9/site-packages/office365/runtime/client_request.py", line 80, in execute_query
response.raise_for_status()
File "/Users/a/venv/lib/python3.9/site-packages/requests/models.py", line 953, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://company.sharepoint.com/sites/companySite/_api/Web/lists/GetByTitle('Documents%2FFolder1')/RootFolder/Files/add(overwrite=true,url='test_file.txt')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/example.py", line 27, in <module>
target_file = target_folder.upload_file(name, file_content).execute_query()
File "/Users/a/venv/lib/python3.9/site-packages/office365/runtime/client_object.py", line 41, in execute_query
self.context.execute_query()
File "/Users/a/venv/lib/python3.9/site-packages/office365/runtime/client_runtime_context.py", line 139, in execute_query
self.pending_request().execute_query()
File "/Users/a/venv/lib/python3.9/site-packages/office365/runtime/client_request.py", line 84, in execute_query
raise ClientRequestException(*e.args, response=e.response)
office365.runtime.client_request_exception.ClientRequestException: ('-1, System.ArgumentException', "List 'Documents/Folder1' does not exist at site with URL 'https://company.sharepoint.com/sites/companySite'.", "404 Client Error: Not Found for url: https://company.sharepoint.com/sites/companySite/_api/Web/lists/GetByTitle('Documents%2FFolder1')/RootFolder/Files/add(overwrite=true,url='test_file.txt')")
A:
I found your question while I was looking for the exact same answer, having come to the same conclusion that the documentation needs to be a little more helpful.
I'm using an Azure Service Principal (Registered App), not username/password; that's the primary difference.
I store creds securely in Windows Credentials Manager using keyring.
So, in case you didn't figure it out yet, and for anyone else who finds your question, here's code that worked for me:
scriptName ='send_csv_to_spo.py'
import keyring #to access Windows Credentials Manager, where user/passwords are stored securely
from office365.sharepoint.client_context import ClientContext
from office365.runtime.auth.client_credential import ClientCredential
import os #interact with OS for file actions
#Get creds from Windows Credentials Manager
appID = 'AppName' #Azure Registered App
clientId = '********-****-****-****-*************'
secret=keyring.get_password(appID,clientId)
sharepoint_url = f'https://*******.sharepoint.com'
siteName = f'/sites/*******'
target_url = f"folder/sub-folder"
site_url = sharepoint_url + siteName
#ensure file to send is in the same folder as this script
#call function from other script, passing the fileName
#eg:
#import send_csv_to_spo as sendFile #imports this file as module
#fileStr = 'test.csv' #test file in same folder
#sendFile.sendFileToSPO(fileStr) #calls function in module
def sendFileToSPO(fileName):
#Authenticate using stored Azure App credentials
from office365.sharepoint.client_context import ClientContext
ctx = ClientContext(site_url).with_credentials(ClientCredential(clientId, secret))
#Collect file for uploading
with open(fileName, 'rb') as content_file:
file_content = content_file.read()
#Upload file to target url
target_folder = ctx.web.get_folder_by_server_relative_url(target_url)
name = os.path.basename(fileName)
target_file = target_folder.upload_file(name, file_content).execute_query()
print("File has been uploaded as follows: {0}".format(target_file.serverRelativeUrl))
#File has been uploaded to url: /sites/******/******/******/testPush.csv
| How to upload file to Sharepoint folder | I want to upload a file from my local machine to SharePoint using the Office365-REST-Python-Client library. The issue I'm having to uploading the file to a specific folder in SharePoint. Documentation seems to be all over the place. The Github repo provides the following solution to upload a file to the main "Documents" SP folder but what if I had subfolders like "Documents/Folder1"?
Here's the code to upload file to Documents in SharePoint:
import os
from office365.sharepoint.client_context import ClientContext
from tests import test_user_credentials, test_team_site_url
ctx = ClientContext(test_team_site_url).with_credentials(test_user_credentials)
path = "test_file.txt"
with open(path, 'rb') as content_file:
file_content = content_file.read()
list_title = "Documents"
target_folder = ctx.web.lists.get_by_title(list_title).root_folder
name = os.path.basename(path)
target_file = target_folder.upload_file(name, file_content).execute_query()
print("File has been uploaded to url: {0}".format(target_file.serverRelativeUrl))
I tried to change the list_title variable to "Documents/Folder1" but get the following error message as a result:
Traceback (most recent call last):
File "/Users/a/venv/lib/python3.9/site-packages/office365/runtime/client_request.py", line 80, in execute_query
response.raise_for_status()
File "/Users/a/venv/lib/python3.9/site-packages/requests/models.py", line 953, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://company.sharepoint.com/sites/companySite/_api/Web/lists/GetByTitle('Documents%2FFolder1')/RootFolder/Files/add(overwrite=true,url='test_file.txt')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/example.py", line 27, in <module>
target_file = target_folder.upload_file(name, file_content).execute_query()
File "/Users/a/venv/lib/python3.9/site-packages/office365/runtime/client_object.py", line 41, in execute_query
self.context.execute_query()
File "/Users/a/venv/lib/python3.9/site-packages/office365/runtime/client_runtime_context.py", line 139, in execute_query
self.pending_request().execute_query()
File "/Users/a/venv/lib/python3.9/site-packages/office365/runtime/client_request.py", line 84, in execute_query
raise ClientRequestException(*e.args, response=e.response)
office365.runtime.client_request_exception.ClientRequestException: ('-1, System.ArgumentException', "List 'Documents/Folder1' does not exist at site with URL 'https://company.sharepoint.com/sites/companySite'.", "404 Client Error: Not Found for url: https://company.sharepoint.com/sites/companySite/_api/Web/lists/GetByTitle('Documents%2FFolder1')/RootFolder/Files/add(overwrite=true,url='test_file.txt')")
| [
"I found your question while I was looking for the exact same answer and having come to the same conclusions about the documentation needing to be a little more helpful.\nI'm using an Azure Service Principal (Registered App) not username/password, that's the primary difference.\nI store creds securely in Windows Credentials Manager using keyring.\nSo, in case you didn't figure it out yet, and for anyone else who finds your question, here's code that worked for me;\nscriptName ='send_csv_to_spo.py'\nimport keyring #to access Windows Credentials Manager, where user/passwords are stored securely\nfrom office365.sharepoint.client_context import ClientContext\nfrom office365.runtime.auth.client_credential import ClientCredential\nimport os #interact with OS for file actions\n#Get creds from Windows Credentials Manager\nappID = 'AppName' #Azure Registered App\nclientId = '********-****-****-****-*************'\nsecret=keyring.get_password(appID,clientId)\nsharepoint_url = f'https://*******.sharepoint.com'\nsiteName = f'/sites/*******'\ntarget_url = f\"folder/sub-folder\"\nsite_url = sharepoint_url + siteName\n#ensure file to send is in the same folder as this script\n#call function from other script, passing the fileName\n#eg:\n#import send_csv_to_spo as sendFile #imports this file as module\n#fileStr = 'test.csv' #test file in same folder\n#sendFile.sendFileToSPO(fileStr) #calls function in module\n\ndef sendFileToSPO(fileName):\n #Authenticate using stored Azure App credentials\n from office365.sharepoint.client_context import ClientContext\n ctx = ClientContext(site_url).with_credentials(ClientCredential(clientId, secret))\n #Collect file for uploading\n with open(fileName, 'rb') as content_file:\n file_content = content_file.read()\n #Upload file to target url\n target_folder = ctx.web.get_folder_by_server_relative_url(target_url)\n name = os.path.basename(fileName)\n target_file = target_folder.upload_file(name, file_content).execute_query()\n print(\"File has been uploaded as follows: {0}\".format(target_file.serverRelativeUrl))\n #File has been uploaded to url: /sites/******/******/******/testPush.csv\n\n"
] | [
0
] | [] | [] | [
"python",
"sharepoint"
] | stackoverflow_0072628931_python_sharepoint.txt |
Q:
Inline buttons doesnt work in Aiogram Telegram Bot
I'm trying to get a random number when clicking inline buttons. I'm getting a message with buttons, but when I click on them nothing happens; I just see a small clock in the button. Here are my handlers:
from loader import dp
from random import randint
from aiogram.types import InlineKeyboardMarkup, InlineKeyboardButton
from aiogram import types
button1 = InlineKeyboardButton(text="Random 1-10", callback_data="random1-10")
button2 = InlineKeyboardButton(text="Random 1-100", callback_data="random1-100")
keyboard_inline = InlineKeyboardMarkup().add(button1, button2)
@dp.message_handler(commands=["random"], state="*", content_types=types.ContentTypes.ANY)
async def cmd_random(message: types.Message):
await message.reply("Select a range", reply_markup=keyboard_inline)
@dp.callback_query_handler(text=["random1-10", "random1-100"])
async def random_value(call: types.CallbackQuery):
if call.data == "random1-10":
await call.message.answer(str(randint(1, 10)))
elif call.data == "random1-100":
await call.message.answer(str(randint(1, 100)))
await call.answer()
OR
@dp.callback_query_handler(lambda c: c.data == 'random1-10')
async def process_callback_button1(callback_query: types.CallbackQuery):
await dp.bot.answer_callback_query(callback_query.id)
await dp.bot.send_message(callback_query.from_user.id, 'First button clicked!')
I also tried to send a static message without any functions, but nothing happens. Maybe I need to switch something on?
A:
You need to add
state="*"
into
@dp.callback_query_handler(text=["random1-10", "random1-100"])
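For completeness, a minimal sketch of the corrected handler (assuming the imports, dp and randint from the question are unchanged):
@dp.callback_query_handler(text=["random1-10", "random1-100"], state="*")
async def random_value(call: types.CallbackQuery):
    if call.data == "random1-10":
        await call.message.answer(str(randint(1, 10)))
    elif call.data == "random1-100":
        await call.message.answer(str(randint(1, 100)))
    await call.answer()  # answers the callback so the "clock" on the button disappears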
| Inline buttons doesnt work in Aiogram Telegram Bot | Im trying to get a random number while clicking inline buttons. Im getting a message with buttons, but when i click on them nothing happends — i just see a small clock in the button. Here are my handlers:
from loader import dp
from random import randint
from aiogram.types import InlineKeyboardMarkup, InlineKeyboardButton
from aiogram import types
button1 = InlineKeyboardButton(text="Random 1-10", callback_data="random1-10")
button2 = InlineKeyboardButton(text="Random 1-100", callback_data="random1-100")
keyboard_inline = InlineKeyboardMarkup().add(button1, button2)
@dp.message_handler(commands=["random"], state="*", content_types=types.ContentTypes.ANY)
async def cmd_random(message: types.Message):
await message.reply("Select a range", reply_markup=keyboard_inline)
@dp.callback_query_handler(text=["random1-10", "random1-100"])
async def random_value(call: types.CallbackQuery):
if call.data == "random1-10":
await call.message.answer(str(randint(1, 10)))
elif call.data == "random1-100":
await call.message.answer(str(randint(1, 100)))
await call.answer()
OR
@dp.callback_query_handler(lambda c: c.data == 'random1-10')
async def process_callback_button1(callback_query: types.CallbackQuery):
await dp.bot.answer_callback_query(callback_query.id)
await dp.bot.send_message(callback_query.from_user.id, 'First button clicked!')
I also tried to send a static message w/o any functions, but nothing happens. Maybe i need to switch on smth?
| [
"You need to add\nstate=\"*\"\n\ninto\[email protected]_query_handler(text=[\"random1-10\", \"random1-100\"])\n\n"
] | [
0
] | [] | [] | [
"aiogram",
"python",
"telegram_api",
"telegram_bot"
] | stackoverflow_0074616302_aiogram_python_telegram_api_telegram_bot.txt |
Q:
How to end the loop when I type n?
import random
#yes or no
yrn = input("R u going to play black jack? (Y/N): ").upper()
if yrn == "Y":
player1 = random.randint(1,19)
player2 = random.randint(1,19)
print(player1,player2)
while True:
player1_yrn = input("Player 1, Do you want more numbers? (Y/N): ").upper()
if player1_yrn == "Y":
player1 = player1 + random.randint(1,19)
print(f"Player 1's number is {player1}")
else:
print(f"Player 1's number is {player1}")
quit()
player2_yrn = input("Player 2, Do you want more numbers? (Y/N) : ").upper()
if player2_yrn == "Y":
player2 = player2 + random.randint(1,19)
print(f"Player 2's number is {player2}")
else:
print(f"Player 2's number is {player2}")
What I want is when I press 'n', the asking loop needs to end only for that player.
For example: When I press 'n' for the question: "Player 2, Do you want more numbers? (Y/N) : ", then the asking loop ends only for player 2 and the program only asks for player 1 for more numbers.
A:
You need some state to allow you to remember when a player says "no more cards". A boolean flag per player would be one way to do it:
player1_keep_dealing = True
player2_keep_dealing = True
while player1_keep_dealing or player2_keep_dealing:
if player1_keep_dealing:
player1_yrn = input("Player 1, Do you want more numbers? (Y/N): ").upper()
if player1_yrn == "Y":
player1 = player1 + random.randint(1,19)
print(f"Player 1's number is {player1}")
else:
print(f"Player 1's number is {player1}")
player1_keep_dealing = False
if player2_keep_dealing:
player2_yrn = input("Player 2, Do you want more numbers? (Y/N) : ").upper()
if player2_yrn == "Y":
player2 = player2 + random.randint(1,19)
print(f"Player 2's number is {player2}")
else:
print(f"Player 2's number is {player2}")
player2_keep_dealing = False
Notice that you now have a natural loop termination condition.
Design Note
If you have a bunch of repeated variables with numbers in the names, and may want to add more, it may be time to look at classes. For example, player state and functionality can reasonably be encapsulated in a Player class:
class Player:
def __init__(self, n):
self.n = n
        self.keep_dealing = True
self.value = random.randint(1, 19)
def ask(self):
yrn = input(f"Player {self.n}, Do you want more numbers? (Y/N): ").upper()
if yrn == "Y":
self.value += random.randint(1, 19)
else:
self.keep_dealing = False
print(f"Player {self.n}'s number is {self.value}")
In your loop, you can then keep a list of Player objects and call ask() on each player whose keep_dealing flag is still True.
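A minimal sketch of that loop, assuming the Player class above and the random import from the question:
players = [Player(1), Player(2)]

while any(p.keep_dealing for p in players):
    for p in players:
        if p.keep_dealing:
            p.ask()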
A:
Make a set storing which players are still actively playing (being dealt to). You can also store their scores / numbers in a dictionary for convenience and avoiding duplication.
playing = {1, 2}
numbers = {1: random.randint(1,19), 2: random.randint(1, 19)}
print(*numbers.values())
while playing:
for player in list(playing): # we make a copy of the active players to iterate over
more = input(f"Player {player}, Do you want more numbers? (Y/N): ").upper()
if more == "Y":
numbers[player] += random.randint(1,19)
else:
playing.remove(player) # stop dealing to this player
print(f"Player {player}'s number is {numbers[player]}")
A:
An easy way would be to keep track of which player is still playing with booleans, and execute the request code only if the player is still playing.
Also, you could use a list to store players scores, and a list to store which player is still playing, which would lead to something like this :
players = []
yrn = input("R u going to play black jack? (Y/N): ").upper()
if yrn == "Y":
player_number = int(input("how many players ?"))
for player in range(player_number):
        players.append([random.randint(1,19), True])
while any(player[1] for player in players):
for i,player in enumerate(players):
if not player[1]:
continue
player_yrn = input(f"Player {i}, Do you want more numbers? (Y/N): ").upper()
if player_yrn == "Y":
player[0] = player[0] + random.randint(1,19)
else:
player[1] = False
print(f"Player {i}'s number is {player1}")
Such code would be much more modular :)
| How to end the loop when I type n? | import random
#yes or no
yrn = input("R u going to play black jack? (Y/N): ").upper()
if yrn == "Y":
player1 = random.randint(1,19)
player2 = random.randint(1,19)
print(player1,player2)
while True:
player1_yrn = input("Player 1, Do you want more numbers? (Y/N): ").upper()
if player1_yrn == "Y":
player1 = player1 + random.randint(1,19)
print(f"Player 1's number is {player1}")
else:
print(f"Player 1's number is {player1}")
quit()
player2_yrn = input("Player 2, Do you want more numbers? (Y/N) : ").upper()
if player2_yrn == "Y":
player2 = player2 + random.randint(1,19)
print(f"Player 2's number is {player2}")
else:
print(f"Player 2's number is {player2}")
What I want is when I press 'n', the asking loop needs to end only for that player.
For example: When I press 'n' for the question: "Player 2, Do you want more numbers? (Y/N) : ", then the asking loop ends only for player 2 and the program only asks for player 1 for more numbers.
| [
"You need some state to allow you to remember when a player says \"no more cards\". A boolean flag per player would be one way to do it:\nplayer1_keep_dealing = True\nplayer2_keep_dealing = True\nwhile player1_keep_dealing and player2_keep_dealing:\n if player1_keep_dealing:\n player1_yrn = input(\"Player 1, Do you want more numbers? (Y/N): \").upper()\n if player1_yrn == \"Y\":\n player1 = player1 + random.randint(1,19)\n print(f\"Player 1's number is {player1}\")\n else:\n print(f\"Player 1's number is {player1}\")\n player1_keep_dealing = False\n\n if player2_keep_dealing:\n player2_yrn = input(\"Player 2, Do you want more numbers? (Y/N) : \").upper()\n if player2_yrn == \"Y\":\n player2 = player2 + random.randint(1,19)\n print(f\"Player 2's number is {player2}\")\n else:\n print(f\"Player 2's number is {player2}\")\n player2_keep_dealing = False\n\nNotice that you now have a natural loop termination condition.\nDesign Note\nIf you have a bunch of repeated variables with numbers in the names, and may want to add more, it may be time to look at classes. For example, player state and functionality can reasonably be encapsulated in a Player class:\nclass Player:\n def __init__(self, n):\n self.n = n\n keep_dealing = True\n self.value = random.randint(1, 19)\n\n def ask(self):\n yrn = input(f\"Player {self.n}, Do you want more numbers? (Y/N): \").upper()\n if yrn == \"Y\":\n self.value += random.randint(1, 19)\n else:\n self.keep_dealing = False\n print(f\"Player {self.n}'s number is {self.value}\")\n\nIn your loop, you can\n",
"Make a set storing which players are still actively playing (being dealt to). You can also store their scores / numbers in a dictionary for convenience and avoiding duplication.\nplaying = {1, 2}\nnumbers = {1: random.randint(1,19), 2: random.randint(1, 19)}\nprint(*numbers.values())\nwhile playing:\n for player in list(playing): # we make a copy of the active players to iterate over\n more = input(f\"Player {player}, Do you want more numbers? (Y/N): \").upper()\n if more == \"Y\":\n numbers[player] += random.randint(1,19)\n else:\n playing.remove(player) # stop dealing to this player\n print(f\"Player {player}'s number is {numbers[player]}\")\n\n",
"An easy way would be to keep track of which player is still playing with booleans, and execute the request code only if the player is still playing.\nAlso, you could use a list to store players scores, and a list to store which player is still playing, which would lead to something like this :\nplayers = []\nyrn = input(\"R u going to play black jack? (Y/N): \").upper()\nif yrn == \"Y\":\n player_number = int(input(\"how many players ?\"))\n for player in range(player_number):\n players.append((random.randint(1,19), True))\n\nwhile True:\n for i,player in enumerate(players):\n if not player[1]:\n continue\n player_yrn = input(f\"Player {i}, Do you want more numbers? (Y/N): \").upper()\n if player_yrn == \"Y\":\n player[0] = player[0] + random.randint(1,19)\n else:\n player[1] = False\n print(f\"Player {i}'s number is {player1}\")\n\nSuch code would be much more modular :)\n"
] | [
0,
0,
0
] | [] | [] | [
"input",
"python"
] | stackoverflow_0074616599_input_python.txt |
Q:
how to fix error UNIQUE constraint failed: auth_user.username
I was trying to use Django's email-sending module in order to send a sign-up email to each user when they sign up, and I'm facing this error: UNIQUE constraint failed: auth_user.username
# this is my views.py
from django.shortcuts import render ,redirect
from django.contrib.auth import login
from django.contrib.auth.decorators import login_required
from . import forms
# Create your views here.
def home(request):
return render(request,'home.html')
@login_required
def customer_page(request):
return render(request,'home.html')
@login_required
def courier_page(request):
return render(request,'home.html')
def sign_up(request):
form = forms.SignUPForm()
if request.method=='POST':
form=forms.SignUPForm(request.POST)
if form.is_valid():
email=form.cleaned_data.get('email').lower()
user= form.save(commit=False)
user.useranme = email
user.save()
login(request,user,backend='django.contrib.auth.backends.ModelBackend')
return redirect('/')
return render(request,'sign_up.html',{
'form':form
})
Now I have created signals.py to send the email:
# my signals.py file
from email.base64mime import body_decode
from django.db.models.signals import post_save
from django.dispatch import receiver
from django.core.mail import send_mail
from django.conf import settings
from django.contrib.auth.models import User
from django.template.loader import render_to_string
from django.core import mail
connection = mail.get_connection()
@receiver(post_save,sender=User)
def send_welcome_email(sender,instance,created,**kwarg):
if created and instance.email:
connection.open()
body=render_to_string(
'welcome_email_template.html',
{
'name':instance.get_full_name()
}
)
email2 = mail.EmailMessage(
'Welcome to fast Parcle',
body,
settings.DEFAULT_FROM_EMAIL,
[instance.email],
fail_silently=False,
)
connection.send_messages([email2])
connection.close()
#also my settings.py
from pathlib import Path
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = ''
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'bootstrap4',
'social_django',
'core.apps.CoreConfig',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'fastparcel.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
'social_django.context_processors.backends',
'social_django.context_processors.login_redirect',
],
},
},
]
WSGI_APPLICATION = 'fastparcel.wsgi.application'
# Database
# https://docs.djangoproject.com/en/3.1/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': BASE_DIR / 'db.sqlite3',
}
}
# Password validation
# https://docs.djangoproject.com/en/3.1/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/3.1/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/3.1/howto/static-files/
STATIC_URL = '/static/'
LOGIN_URL='/sign_in/'
LOGIN_REDRICTED_URL='/'
AUTHENTICATION_BACKENDS = (
'social_core.backends.facebook.FacebookOAuth2',
'django.contrib.auth.backends.ModelBackend',
)
SOCIAL_AUTH_FACEBOOK_KEY="######"
SOCIAL_AUTH_FACEBOOK_SECRET="############"
SOCIAL_AUTH_FACEBOOK_SCOPE=['email']
SOCIAL_AUTH_FACEBOOK_PROFILE_EXTRA_PARAMS= {
'fields':'id,name,email'
}
SECURE_SSL_REDIRECT = False
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST='smtp.gmail.com'
EMAIL_USE_TLS=True
EMAIL_PORT=587
EMAIL_HOST_USER='####'
EMAIL_HOST_PASSWORD='####'
DEFAULT_FROM_EMAIL='FAST Parcle <[email protected]>'
ACCOUNT_SIGNUP_FORM_CLASS = 'core.forms.SignupForm' ```
A:
Someone is signing up with a username that already exists. The username field in the auth_user table has a uniqueness constraint that prevents you from inserting a row that has a username that already exists in the table.
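For illustration, here is a minimal sketch (based on the view shown in the question; it is an assumption that the duplicate comes from this path) of making the failure explicit instead of letting the database raise it. Note that the question's code writes user.useranme, so username is never actually set from the email:
from django.contrib.auth import login
from django.contrib.auth.models import User
from django.shortcuts import render, redirect

from . import forms

def sign_up(request):
    form = forms.SignUPForm(request.POST or None)
    if request.method == 'POST' and form.is_valid():
        email = form.cleaned_data.get('email').lower()
        if User.objects.filter(username=email).exists():
            # surface the duplicate to the user instead of hitting the UNIQUE constraint
            form.add_error('email', 'A user with this email already exists.')
        else:
            user = form.save(commit=False)
            user.username = email  # the question has "useranme" here, a typo
            user.save()
            login(request, user, backend='django.contrib.auth.backends.ModelBackend')
            return redirect('/')
    return render(request, 'sign_up.html', {'form': form})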
A:
Thanks @Eshter, the solution was as mentioned above: create an "application password" for this specific program (you find this out if you follow the link given in the error message) and use that generated password inside the code exactly the way a regular password is used.
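For reference, the generated app password simply replaces the normal account password in the existing setting (the values below are placeholders):
# settings.py
EMAIL_HOST_USER = 'your-account@gmail.com'          # placeholder
EMAIL_HOST_PASSWORD = 'your-16-char-app-password'   # the app password, not the account password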
| how to fix error UNIQUE constraint failed: auth_user.username | i was trying to use sending email modual from django in order to send sign up email for each user when sign up im facing this error :UNIQUE constraint failed: auth_user.username
` i was trying to use sending email modual from django in order to send sign up email for each user when sign up im facing this error :UNIQUE constraint failed: auth_user.username`
# this is my views.py
from django.shortcuts import render ,redirect
from django.contrib.auth import login
from django.contrib.auth.decorators import login_required
from . import forms
# Create your views here.
def home(request):
return render(request,'home.html')
@login_required
def customer_page(request):
return render(request,'home.html')
@login_required
def courier_page(request):
return render(request,'home.html')
def sign_up(request):
form = forms.SignUPForm()
if request.method=='POST':
form=forms.SignUPForm(request.POST)
if form.is_valid():
email=form.cleaned_data.get('email').lower()
user= form.save(commit=False)
user.useranme = email
user.save()
login(request,user,backend='django.contrib.auth.backends.ModelBackend')
return redirect('/')
return render(request,'sign_up.html',{
'form':form
})
` now i have created signals.py to use email forms `
# my signals.py file
from email.base64mime import body_decode
from django.db.models.signals import post_save
from django.dispatch import receiver
from django.core.mail import send_mail
from django.conf import settings
from django.contrib.auth.models import User
from django.template.loader import render_to_string
from django.core import mail
connection = mail.get_connection()
@receiver(post_save,sender=User)
def send_welcome_email(sender,instance,created,**kwarg):
if created and instance.email:
connection.open()
body=render_to_string(
'welcome_email_template.html',
{
'name':instance.get_full_name()
}
)
email2 = mail.EmailMessage(
'Welcome to fast Parcle',
body,
settings.DEFAULT_FROM_EMAIL,
[instance.email],
fail_silently=False,
)
connection.send_messages([email2])
connection.close()
#also my settings.py
from pathlib import Path
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = ''
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'bootstrap4',
'social_django',
'core.apps.CoreConfig',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'fastparcel.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
'social_django.context_processors.backends',
'social_django.context_processors.login_redirect',
],
},
},
]
WSGI_APPLICATION = 'fastparcel.wsgi.application'
# Database
# https://docs.djangoproject.com/en/3.1/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': BASE_DIR / 'db.sqlite3',
}
}
# Password validation
# https://docs.djangoproject.com/en/3.1/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/3.1/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/3.1/howto/static-files/
STATIC_URL = '/static/'
LOGIN_URL='/sign_in/'
LOGIN_REDRICTED_URL='/'
AUTHENTICATION_BACKENDS = (
'social_core.backends.facebook.FacebookOAuth2',
'django.contrib.auth.backends.ModelBackend',
)
SOCIAL_AUTH_FACEBOOK_KEY="######"
SOCIAL_AUTH_FACEBOOK_SECRET="############"
SOCIAL_AUTH_FACEBOOK_SCOPE=['email']
SOCIAL_AUTH_FACEBOOK_PROFILE_EXTRA_PARAMS= {
'fields':'id,name,email'
}
SECURE_SSL_REDIRECT = False
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST='smtp.gmail.com'
EMAIL_USE_TLS=True
EMAIL_PORT=587
EMAIL_HOST_USER='####'
EMAIL_HOST_PASSWORD='####'
DEFAULT_FROM_EMAIL='FAST Parcle <[email protected]>'
ACCOUNT_SIGNUP_FORM_CLASS = 'core.forms.SignupForm' ```
| [
"Someone is signing up with a username that already exists. The username field in the auth_user table has a uniqueness constraint that prevents you from inserting a row that has a username that already exists in the table.\n",
"thanks @Eshter the solution was as mention above :create an \"application password\" for this specific program, which you would find out if you went to the link they gave you in the error message and have to use this created password inside the code exactly the way i used to use a regular password\n"
] | [
2,
0
] | [] | [] | [
"django",
"email",
"oauth_2.0",
"python"
] | stackoverflow_0074605979_django_email_oauth_2.0_python.txt |
Q:
Why does defining new class sometimes call the __init__() function of objects that the class inherits from?
I'm trying to understand what actually happens when you declare a new class which inherits from a parent class in python.
Here's a very simple code snippet:
# inheritance.py
class Foo():
def __init__(self, *args, **kwargs):
print("Inside foo.__init__")
print(args)
print(kwargs)
class Bar(Foo):
pass
print("complete")
If I run this there are no errors and the output is as I would expect.
❯ python inheritance.py
complete
Here's a script with an obvious bug in it, I inherit from an instance of Foo() rather than the class Foo
# inheritance.py
class Foo():
def __init__(self, *args, **kwargs):
print("Inside foo.__init__")
print(f"{args=}")
print(f"{kwargs=}\n")
foo = Foo()
class Bar(foo): <---- This is wrong
pass
print("complete")
This code runs without crashing however I don't understand why Foo.__init__() is called twice.
Here's the output:
❯ python inheritance.py
Inside foo.__init__ <--- this one I expected
args=()
kwargs={}
Inside foo.__init__ <--- What is going on here...?
args=('Bar', (<__main__.Foo object at 0x10f190b10>,), {'__module__': '__main__', '__qualname__': 'Bar'})
kwargs={}
complete
On line 8 I instantiate Foo() with no arguments which is what I expected. However on line 9 Foo.__init__ is called with the arguments that would normally be passed to type() to generate a new class.
I can see vaguely what's happening: class Bar(...) is code that generates a new class so at some point type("Bar", ...) needs to be called but:
How does this actually happen?
Why does inheriting from an instance of Foo() cause Foo.__init__("Bar", <tuple>, <dict>) to be called?
Why isn't type("Bar", <tuple>, <dict>) called instead?
A:
Python is using foo to determine the metaclass to use for Bar. No explicit metaclass is given, so the "most derived metaclass" must be determined. The metaclass of a base class is its type; usually, that's type itself. But in this case, the type of the only base "class", foo, is Foo, so that becomes the most derived metaclass. And so,
class Bar(foo):
pass
is being treated as
class Bar(metaclass=Foo):
pass
which means that Bar is created by calling Foo:
Bar = Foo('Bar', (foo,), {})
Note that Bar is now an instance of Foo, not a type. Yes, a class statement does not necessarily create a class.
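A short check (reusing the names from the question) makes this visible:
class Foo:
    def __init__(self, *args, **kwargs):
        print("Inside foo.__init__", args, kwargs)

foo = Foo()        # first call: ordinary instantiation

class Bar(foo):    # second call: Foo acts as the metaclass, i.e. Foo('Bar', (foo,), {...})
    pass

print(type(Bar))             # <class '__main__.Foo'> -- Bar is an instance of Foo
print(isinstance(Bar, Foo))  # True
# Bar()                      # would raise TypeError: 'Foo' object is not callable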
| Why does defining new class sometimes call the __init__() function of objects that the class inherits from? | I'm trying to understand what actually happens when you declare a new class which inherits from a parent class in python.
Here's a very simple code snippet:
# inheritance.py
class Foo():
def __init__(self, *args, **kwargs):
print("Inside foo.__init__")
print(args)
print(kwargs)
class Bar(Foo):
pass
print("complete")
If I run this there are no errors and the output is as I would expect.
❯ python inheritance.py
complete
Here's a script with an obvious bug in it, I inherit from an instance of Foo() rather than the class Foo
# inheritance.py
class Foo():
def __init__(self, *args, **kwargs):
print("Inside foo.__init__")
print(f"{args=}")
print(f"{kwargs=}\n")
foo = Foo()
class Bar(foo): <---- This is wrong
pass
print("complete")
This code runs without crashing however I don't understand why Foo.__init__() is called twice.
Here's the output:
❯ python inheritance.py
Inside foo.__init__ <--- this one I expected
args=()
kwargs={}
Inside foo.__init__ <--- What is going on here...?
args=('Bar', (<__main__.Foo object at 0x10f190b10>,), {'__module__': '__main__', '__qualname__': 'Bar'})
kwargs={}
complete
On line 8 I instantiate Foo() with no arguments which is what I expected. However on line 9 Foo.__init__ is called with the arguments that would normally be passed to type() to generate a new class.
I can see vaguely what's happening: class Bar(...) is code that generates a new class so at some point type("Bar", ...) needs to be called but:
How does this actually happen?
Why does inheriting from an instance of Foo() cause Foo.__init__("Bar", <tuple>, <dict>) to be called?
Why isn't type("Bar", <tuple>, <dict>) called instead?
| [
"Python is using foo to determine the metaclass to use for Bar. No explicit metaclass is given, so the \"most derived metaclass\" must be determined. The metaclass of a base class is its type; usually, that's type itself. But in this case, the type of the only base \"class\", foo, is Foo, so that becomes the most derived metaclass. And so,\nclass Bar(foo):\n pass\n\nis being treated as\nclass Bar(metaclass=Foo):\n pass\n\nwhich means that Bar is created by calling Foo:\nBar = Foo('Bar', (foo,), {})\n\nNote that Bar is now an instance of Foo, not a type. Yes, a class statement does not necessarily create a class.\n"
] | [
6
] | [] | [] | [
"inheritance",
"metaclass",
"python",
"python_3.x"
] | stackoverflow_0074616757_inheritance_metaclass_python_python_3.x.txt |
Q:
Converting Matlab function to Python
I have a Matlab file that I'm looking to convert into Python. Part of it is functions and I'm a bit confused by it. In the Matlab code, there is a function that looks like this:
function [u, v, speed, dir] = NewCalculations(data)
Would the Python equivalent of this be?:
def NewCalculations(data):
u = data.u
v = data.v
speed = data.speed
dir = data.dir
I have read up on Matlab functions, but am still confused by the syntax and how it could be rewritten into Python. Any help/suggestions would be greatly appreciated!
A:
It would be like this:
def NewCalculations(data):
# some calculations
return u, v, speed, dir
The matlab syntax is
function outvars = functionname(inputvars)
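To make the shape of the translation concrete, here is a hypothetical sketch; the speed/direction formulas are placeholders and not taken from the original MATLAB file. It shows the tuple return and how it is unpacked at the call site ("direction" is used instead of "dir" to avoid shadowing Python's built-in dir()):
from types import SimpleNamespace
import numpy as np

def new_calculations(data):
    # placeholder computations -- substitute the body of the original MATLAB function here
    u = np.asarray(data.u)
    v = np.asarray(data.v)
    speed = np.hypot(u, v)                    # e.g. sqrt(u**2 + v**2)
    direction = np.degrees(np.arctan2(v, u))  # e.g. angle in degrees
    return u, v, speed, direction             # tuple return mirrors MATLAB's [u, v, speed, dir]

data = SimpleNamespace(u=[1.0, 2.0], v=[3.0, 4.0])
u, v, speed, direction = new_calculations(data)  # unpack like the MATLAB output list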
| Converting Matlab function to Python | I have a Matlab file that I'm looking to convert into Python. Part of it is functions and I'm a bit confused by it. In the Matlab code, there is a function that looks like this:
function [u, v, speed, dir] = NewCalculations(data)
Would the Python equivalent of this be?:
def NewCalculations(data):
u = data.u
v = data.v
speed = data.speed
dir = data.dir
I have read up on Matlab functions, but am still confused by the syntax and how it could be rewritten into Python. Any help/suggestions would be greatly appreciated!
| [
"It would be like this:\ndef NewCalculations(data):\n# some calculations\n return u, v, speed, dir\n\nThe matlab syntax is\nfunction outvars = functionname(inputvars)\n\n"
] | [
1
] | [] | [] | [
"function",
"matlab",
"python"
] | stackoverflow_0074616904_function_matlab_python.txt |
Q:
Read a CSV file and connect to a database based on columns
I have a file.csv that contains columns databasename and tablename. Using Python I have to perform an action in such a way if column1 is pointing to a database a then connect to server a and perform action on that table. Otherwise, connect to server b and perform action on tables in server b.
Example file.csv:
database, tablename
db1, tbl1
db2, tbl2
db2, tbl3
db1, tbl4
Expected output: when db1, connect server a and delete from tbl1. When db2, connect server b and delete from tbl2.
I tried using pandas and csv.reader but unable to perform loop when the CSV is read as a data frame.
df = pd.read_csv(
"log_tables.csv")
print("The dataframe is:")
print(df)
specific_columns = df[["table_name"]]
Or using below code:
with open('log_tables.csv') as f:
firstColumn = [line.split(',')[0] for line in f]
A:
A simple example using psycopg2 as the Python driver to a Postgres
database:
cat file.csv
database,tablename
db1,tbl1
db2,tbl2
db2,tbl3
db1,tbl4
import psycopg2
from psycopg2 import sql
import csv
with open('file.csv') as csv_file:
r = csv.DictReader(csv_file)
for row in r:
print(row['database'], row['tablename'])
con = psycopg2.connect(dbname=row['database'], host="localhost", user="postgres")
cur = con.cursor()
        cur.execute(sql.SQL("DELETE FROM {}").format(sql.Identifier(row["tablename"])))
If this was a large file you would probably want to look at sorting by database to reduce the connections and/or use a connection pooler.
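A sketch of that idea (host names and the user are placeholders; the question maps db1 and db2 to different servers), grouping the rows per database first so each database is connected to only once:
import csv
from collections import defaultdict

import psycopg2
from psycopg2 import sql

# Placeholder mapping of database name -> host; adjust to the real servers.
hosts = {'db1': 'server_a', 'db2': 'server_b'}

# Group table names by database so each database gets a single connection.
tables_by_db = defaultdict(list)
with open('file.csv') as csv_file:
    for row in csv.DictReader(csv_file):
        tables_by_db[row['database'].strip()].append(row['tablename'].strip())  # strip() handles "db1, tbl1"

for dbname, tables in tables_by_db.items():
    with psycopg2.connect(dbname=dbname, host=hosts[dbname], user="postgres") as con:
        with con.cursor() as cur:
            for table in tables:
                cur.execute(sql.SQL("DELETE FROM {}").format(sql.Identifier(table)))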
UPDATE
To read connection information from file:
cat pg.json
{"DBMSType": "POSTGRESQL", "serverName": "server_name", "databaseName": "mars", "port": "5601", "userID": "test", "password": "test_pswd"}
with open('pg.json', 'r') as pg_file:
db_dict = json.loads(pg_file.read())
db_dict
{'DBMSType': 'POSTGRESQL',
'serverName': 'server_name',
'databaseName': 'mars',
'port': '5601',
'userID': 'test',
'password': 'test_pswd'}
with open('file.csv') as csv_file:
r = csv.DictReader(csv_file)
for row in r:
print(row['database'], row['tablename'])
if row['database'] == db_dict['databaseName']:
con = psycopg2.connect(dbname=db_dict['databaseName'], host=db_dict['serverName'], user=db_dict['userID'], port=db_dict['port'], password=db_dict['password'])
cur = con.cursor()
            cur.execute(sql.SQL("DELETE FROM {}").format(sql.Identifier(row["tablename"])))
| Read a CSV file and connec to database based on columns | I have a file.csv that contains columns databasename and tablename. Using Python I have to perform an action in such a way if column1 is pointing to a database a then connect to server a and perform action on that table. Otherwise, connect to server b and perform action on tables in server b.
Example file.csv:
database, tablename
db1, tbl1
db2, tbl2
db2, tbl3
db1, tbl4
Expected output: when db1, connect server a and delete from tbl1. When db2, connect server b and delete from tbl2.
I tried using pandas and csv.reader but unable to perform loop when the CSV is read as a data frame.
df = pd.read_csv(
"log_tables.csv")
print("The dataframe is:")
print(df)
specific_columns = df[["table_name"]]
Or using below code:
with open('log_tables.csv') as f:
firstColumn = [line.split(',')[0] for line in f]
| [
"A simple example using psycopg2 as the Python driver to a Postgres\ndatabase:\ncat file.csv \ndatabase,tablename\ndb1,tbl1\ndb2,tbl2 \ndb2,tbl3\ndb1,tbl4\n\nimport psycopg2\nfrom psycopg2 import sql\nimport csv\n\nwith open('file.csv') as csv_file:\n r = csv.DictReader(csv_file)\n for row in r:\n print(row['database'], row['tablename'])\n con = psycopg2.connect(dbname=row['database'], host=\"localhost\", user=\"postgres\")\n cur = con.cursor()\n cur.execute(sql.SQL(\"delete * from {}\").format(sql.Identifier(row[\"tablename\"])))\n\n\nIf this was a large file you would probably want to look at sorting by database to reduce the connections and/or use a connection pooler.\nUPDATE\nTo read connection information from file:\ncat pg.json \n\n{\"DBMSType\": \"POSTGRESQL\", \"serverName\": \"server_name\", \"databaseName\": \"mars\", \"port\": \"5601\", \"userID\": \"test\", \"password\": \"test_pswd\"}\n\nwith open('pg.json', 'r') as pg_file:\n db_dict = json.loads(pg_file.read())\n\ndb_dict\n{'DBMSType': 'POSTGRESQL',\n 'serverName': 'server_name',\n 'databaseName': 'mars',\n 'port': '5601',\n 'userID': 'test',\n 'password': 'test_pswd'}\n\nwith open('file.csv') as csv_file:\n r = csv.DictReader(csv_file)\n for row in r:\n print(row['database'], row['tablename'])\n if row['database'] == db_dict['databaseName']:\n con = psycopg2.connect(dbname=db_dict['databaseName'], host=db_dict['serverName'], user=db_dict['userID'], port=db_dict['port'], password=db_dict['password'])\n cur = con.cursor()\n cur.execute(sql.SQL(\"delete * from {}\").format(sql.Identifier(row[\"tablename\"]\n\n\n"
] | [
0
] | [] | [] | [
"csv",
"dataframe",
"pandas",
"python",
"split"
] | stackoverflow_0074609706_csv_dataframe_pandas_python_split.txt |
Q:
Peak Detection of Partial Discharges with CNN
This is one of my first posts here so if I make any mistakes or don't follow some guidelines please be considerate.
For my bachelor's thesis I'm trying to create a CNN with TensorFlow which has the ability to do basic peak detection, like scipy.find_peaks for example. My input data consists of 2 numpy arrays with timeseries data of different partial discharges combined together with both their current channel and a light channel of the oscilloscope. My y labels for classifying are also 2 arrays with the same shape but are 0 when there is no peak to detect and 1 when there is a peak. So you can already guess the output is very sparse.
Input features
Labels
I batch this dataset and feed it into the neural network; see the structure and code further down. As the loss function I use BinaryFocalCrossentropy from Keras and as the metric, precision. After that I compile the model and round the prediction to get the indices of where the peaks are. The problem is that the predictions are wrong and far from what they should be, but I think peak detection with a CNN should be possible. What am I doing wrong here?
Down below is the code and here is a link to a onedrive folder with my code and the datasets:text
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math
import scipy.signal as ss
import tensorflow as tf
from functions import slice_up
from datetime import datetime
from keras import layers, losses, Sequential, metrics
from keras.models import Model
df1=pd.read_feather('C:/Users/noahp/OneDrive - ETH Zurich/Schule/Fächer/Semester 7/Bachelor Arbeit/Data/train1.feather')
df1=df1.iloc[:,1:]
df2=pd.read_feather('C:/Users/noahp/OneDrive - ETH Zurich/Schule/Fächer/Semester 7/Bachelor Arbeit/Data/val.feather')
df2=df2.iloc[:,1:]
df3=pd.read_feather('C:/Users/noahp/OneDrive - ETH Zurich/Schule/Fächer/Semester 7/Bachelor Arbeit/Data/test.feather')
df3=df3.iloc[:,1:]
input_dim=100000
overlap=10000 #overlapping of batches
LR = 1e-3 #learning rate
EPOCHS = 10
batch_size=5
alpha=8.5 #asymmetric loss constant
test_index=3
test_length=100000 #length of signal for testing
def slice_up(data: np.array,input_dim: int, overlap: int) -> np.array:
signal_len = len(data)
n_slices = int((signal_len - input_dim)/(input_dim - overlap)) + 1
data_sliced = np.zeros([n_slices, input_dim, 2])
for i_example in range(n_slices):
index = (input_dim-overlap)*i_example
data_sliced[i_example, :, :] = data[index:index+input_dim,:]
return data_sliced
x_train=np.array(np.transpose(np.array([df1.iloc[:,2],df1.iloc[:,3]])))
y_train=np.array(np.transpose(np.array([df1.iloc[:,0],df1.iloc[:,1]])))
x_val=np.array(np.transpose(np.array([df2.iloc[:,2],df2.iloc[:,3]])))
y_val=np.array(np.transpose(np.array([df2.iloc[:,0],df2.iloc[:,1]])))
x_test=np.array([np.transpose(np.array([df3.iloc[2,test_index][:test_length],df3.iloc[3,test_index][:test_length]]))])
x_train_slices = slice_up(data=x_train, input_dim=input_dim, overlap=overlap)
y_train_slices = slice_up(data=y_train, input_dim=input_dim, overlap=overlap)
x_val_slices = slice_up(data=x_val, input_dim=input_dim, overlap=overlap)
y_val_slices = slice_up(data=y_val, input_dim=input_dim, overlap=overlap)
y_train_binary=np.where(y_train != 0.0, 1, 0)
y_val_binary=np.where(y_val != 0.0, 1, 0)
y_train_slices_binary = slice_up(data=y_train_binary, input_dim=input_dim, overlap=overlap)
y_val_slices_binary = slice_up(data=y_val_binary, input_dim=input_dim, overlap=overlap)
model = Sequential([
layers.Conv1D(4, 50, activation='relu',input_shape=(input_dim,2)),
layers.MaxPooling1D(50),
layers.Dropout(0.2),
layers.BatchNormalization(),
layers.Conv1D(2, 10, activation='relu'),
layers.MaxPooling1D(5),
layers.Dropout(0.2),
layers.BatchNormalization(),
layers.Conv1D(2, 50, activation='relu'),
layers.Dropout(0.2),
layers.BatchNormalization(),
layers.Conv1D(2, 10, activation='relu'),
#layers.Dropout(0.2),
#layers.BatchNormalization(),
#layers.Conv1D(2, 10, activation='relu'),
layers.Flatten(),
layers.Dense(500, activation='relu'),
layers.Dropout(0.2),
layers.Dense(500, activation='relu'),
layers.Dense(input_dim*2, activation='sigmoid'),
layers.Reshape((input_dim,2))
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=LR),loss=losses.BinaryFocalCrossentropy(gamma=alpha),metrics=[metrics.Precision()])
history = model.fit(x_train_slices, y_train_slices_binary,epochs=EPOCHS,batch_size=batch_size, validation_data=(x_val_slices, y_val_slices_binary),shuffle=True)
prediction = model.predict(x_test, verbose=1)
plt.plot(np.around(prediction[0]))
I also already tried to approach the same problem with an autoencoder but that also doesn't seem to work. You can find the code for the autoencoder also in the onedrive folder.
In my opinion this problem should be solvable and I don't know what I'm missing right now because this task shouldn't be too hard for a CNN.
A:
Since your output is of the same shape as your input, and you want to classify each point, maybe you should try a unet, similar to this one:
Something like the following (added normalization, and some noise and dropout to prevent overfitting):
skip_connections = []
kernel_size = 7
pool_size = 10
deepth = 3
def units(i: int): return 32*2**i
norm_layer = layers.Normalization()
norm_layer.adapt(x_train_slices)
inputs = layers.Input(shape=(input_dim, 2))
x = norm_layer(inputs)
x = layers.GaussianNoise(stddev=0.01)(x)
for i in range(deepth):
x = layers.Conv1D(units(i), kernel_size, activation='relu', padding='same')(x)
x = layers.BatchNormalization()(x)
x = layers.SpatialDropout1D(.3)(x)
x = layers.Conv1D(units(i), kernel_size, activation='relu', padding='same')(x)
x = layers.BatchNormalization()(x)
skip_connections.append(x)
#x = layers.Conv1D(units(i), kernel_size, strides=pool_size, activation='relu', padding='same')(x)
x = layers.MaxPool1D(pool_size)(x)
x = layers.BatchNormalization()(x)
x = layers.SpatialDropout1D(.3)(x)
x = layers.Conv1D(units(deepth+1), kernel_size, activation='relu', padding='same')(x)
x = layers.BatchNormalization()(x)
x = layers.Conv1D(units(deepth+1), kernel_size, activation='relu', padding='same')(x)
x = layers.BatchNormalization()(x)
for i in range(deepth-1, -1, -1):
x = layers.Conv1DTranspose(units(i), kernel_size, strides=pool_size, activation='relu', padding='same')(x)
skip_x = skip_connections.pop()
x = layers.Concatenate(axis=-1)([x, skip_x])
x = layers.BatchNormalization()(x)
x = layers.Conv1D(units(i), kernel_size, activation='relu', padding='same')(x)
x = layers.BatchNormalization()(x)
x = layers.Conv1D(units(i), kernel_size, activation='relu', padding='same')(x)
x = layers.BatchNormalization()(x)
x = layers.Conv1D(2, kernel_size, activation='sigmoid', padding='same')(x)
model = tf.keras.Model(inputs=inputs, outputs=x)
Which gives some results:
Epoch 150/1000
5/5 [==============================] - 2s 330ms/step - loss: 1.5309e-05 -
precision: 0.0903 - val_loss: 1.5629e-05 - val_precision: 1.0000
Epoch 151/1000
5/5 [==============================] - 2s 329ms/step - loss: 1.6166e-05 -
precision: 0.0681 - val_loss: 1.4105e-05 - val_precision: 0.0938
Epoch 152/1000
5/5 [==============================] - 2s 329ms/step - loss: 1.3713e-05 -
precision: 0.1277 - val_loss: 1.5543e-05 - val_precision: 1.0000
Epoch 153/1000
5/5 [==============================] - 2s 330ms/step - loss: 1.3353e-05 -
precision: 0.2615 - val_loss: 1.4101e-05 - val_precision: 0.4091
Epoch 154/1000
5/5 [==============================] - 2s 331ms/step - loss: 1.4418e-05 -
precision: 0.1412 - val_loss: 1.4621e-05 - val_precision: 0.8333
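To turn the per-sample probabilities into peak indices, one option is to threshold the prediction and read off the flagged positions (a sketch; the 0.5 threshold is an arbitrary starting point and may need tuning given the class imbalance):
import numpy as np

prediction = model.predict(x_test, verbose=1)   # shape (1, input_dim, 2), values in [0, 1]
binary = (prediction[0] > 0.5).astype(int)      # 0.5 is just a starting threshold

current_peaks = np.flatnonzero(binary[:, 0])    # sample indices flagged in channel 0
light_peaks = np.flatnonzero(binary[:, 1])      # sample indices flagged in channel 1
print(current_peaks, light_peaks)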
| Peak Detection of Partial Discharges with CNN | This is one of my first posts here so if I make any mistakes or don't follow some guidelines please be considerate.
For my bachelors thesis im trying to create a CNN with tensorflow which has the ability to do basic peak detection like scipy.find_peaks for example. My input data consists of 2 numpy arrays with timeseries data of different partial discharges combined together with both their current channel and a light channel of the oscilloscope. My y labels for classifying are also 2 arrays with the same shape but are 0 when there is no peak to detect and 1 when there is a peak. So you can already guess the output is very sparse.
Input features
Labels
I batch this dataset and feed it into the neural network for the structure and code check further down. As loss function I use BinaryFocalCrossentropy from keras and as metric precision. After that I complie the model and round the prediction to get the indices of where the peaks are. The problem is the predictions are wrong and far away from what they should be but I think peak detection with a CNN should be possible. What am I doing wrong here?
Down below is the code and here is a link to a onedrive folder with my code and the datasets:text
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math
import scipy.signal as ss
import tensorflow as tf
from functions import slice_up
from datetime import datetime
from keras import layers, losses, Sequential, metrics
from keras.models import Model
df1=pd.read_feather('C:/Users/noahp/OneDrive - ETH Zurich/Schule/Fächer/Semester 7/Bachelor Arbeit/Data/train1.feather')
df1=df1.iloc[:,1:]
df2=pd.read_feather('C:/Users/noahp/OneDrive - ETH Zurich/Schule/Fächer/Semester 7/Bachelor Arbeit/Data/val.feather')
df2=df2.iloc[:,1:]
df3=pd.read_feather('C:/Users/noahp/OneDrive - ETH Zurich/Schule/Fächer/Semester 7/Bachelor Arbeit/Data/test.feather')
df3=df3.iloc[:,1:]
input_dim=100000
overlap=10000 #overlapping of batches
LR = 1e-3 #learning rate
EPOCHS = 10
batch_size=5
alpha=8.5 #asymmetric loss constant
test_index=3
test_length=100000 #length of signal for testing
def slice_up(data: np.array,input_dim: int, overlap: int) -> np.array:
signal_len = len(data)
n_slices = int((signal_len - input_dim)/(input_dim - overlap)) + 1
data_sliced = np.zeros([n_slices, input_dim, 2])
for i_example in range(n_slices):
index = (input_dim-overlap)*i_example
data_sliced[i_example, :, :] = data[index:index+input_dim,:]
return data_sliced
x_train=np.array(np.transpose(np.array([df1.iloc[:,2],df1.iloc[:,3]])))
y_train=np.array(np.transpose(np.array([df1.iloc[:,0],df1.iloc[:,1]])))
x_val=np.array(np.transpose(np.array([df2.iloc[:,2],df2.iloc[:,3]])))
y_val=np.array(np.transpose(np.array([df2.iloc[:,0],df2.iloc[:,1]])))
x_test=np.array([np.transpose(np.array([df3.iloc[2,test_index][:test_length],df3.iloc[3,test_index][:test_length]]))])
x_train_slices = slice_up(data=x_train, input_dim=input_dim, overlap=overlap)
y_train_slices = slice_up(data=y_train, input_dim=input_dim, overlap=overlap)
x_val_slices = slice_up(data=x_val, input_dim=input_dim, overlap=overlap)
y_val_slices = slice_up(data=y_val, input_dim=input_dim, overlap=overlap)
y_train_binary=np.where(y_train != 0.0, 1, 0)
y_val_binary=np.where(y_val != 0.0, 1, 0)
y_train_slices_binary = slice_up(data=y_train_binary, input_dim=input_dim, overlap=overlap)
y_val_slices_binary = slice_up(data=y_val_binary, input_dim=input_dim, overlap=overlap)
model = Sequential([
layers.Conv1D(4, 50, activation='relu',input_shape=(input_dim,2)),
layers.MaxPooling1D(50),
layers.Dropout(0.2),
layers.BatchNormalization(),
layers.Conv1D(2, 10, activation='relu'),
layers.MaxPooling1D(5),
layers.Dropout(0.2),
layers.BatchNormalization(),
layers.Conv1D(2, 50, activation='relu'),
layers.Dropout(0.2),
layers.BatchNormalization(),
layers.Conv1D(2, 10, activation='relu'),
#layers.Dropout(0.2),
#layers.BatchNormalization(),
#layers.Conv1D(2, 10, activation='relu'),
layers.Flatten(),
layers.Dense(500, activation='relu'),
layers.Dropout(0.2),
layers.Dense(500, activation='relu'),
layers.Dense(input_dim*2, activation='sigmoid'),
layers.Reshape((input_dim,2))
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=LR),loss=losses.BinaryFocalCrossentropy(gamma=alpha),metrics=[metrics.Precision()])
history = model.fit(x_train_slices, y_train_slices_binary,epochs=EPOCHS,batch_size=batch_size, validation_data=(x_val_slices, y_val_slices_binary),shuffle=True)
prediction = model.predict(x_test, verbose=1)
plt.plot(np.around(prediction[0]))
I also already tried to approach the same problem with an autoencoder but that also doesn't seem to work. You can find the code for the autoencoder also in the onedrive folder.
In my opinion this problem should be solvable and I don't know what I'm missing right now because this task shouldn't be too hard for a CNN.
| [
"Since your output is of the same shape as your input, and you want to classify each point, maybe you should try a unet, similar to this one:\n Something like the following (added normalization, and some noise and dropout to prevent overfitting):\nskip_connections = []\nkernel_size = 7\npool_size = 10\ndeepth = 3\ndef units(i: int): return 32*2**i\nnorm_layer = layers.Normalization()\nnorm_layer.adapt(x_train_slices)\ninputs = layers.Input(shape=(input_dim, 2))\nx = norm_layer(inputs)\nx = layers.GaussianNoise(stddev=0.01)(x)\nfor i in range(deepth):\n x = layers.Conv1D(units(i), kernel_size, activation='relu', padding='same')(x)\n x = layers.BatchNormalization()(x)\n x = layers.SpatialDropout1D(.3)(x)\n x = layers.Conv1D(units(i), kernel_size, activation='relu', padding='same')(x)\n x = layers.BatchNormalization()(x)\n skip_connections.append(x)\n #x = layers.Conv1D(units(i), kernel_size, strides=pool_size, activation='relu', padding='same')(x)\n x = layers.MaxPool1D(pool_size)(x)\n x = layers.BatchNormalization()(x)\nx = layers.SpatialDropout1D(.3)(x)\nx = layers.Conv1D(units(deepth+1), kernel_size, activation='relu', padding='same')(x)\nx = layers.BatchNormalization()(x)\nx = layers.Conv1D(units(deepth+1), kernel_size, activation='relu', padding='same')(x)\nx = layers.BatchNormalization()(x)\nfor i in range(deepth-1, -1, -1):\n x = layers.Conv1DTranspose(units(i), kernel_size, strides=pool_size, activation='relu', padding='same')(x)\n skip_x = skip_connections.pop()\n x = layers.Concatenate(axis=-1)([x, skip_x])\n x = layers.BatchNormalization()(x)\n x = layers.Conv1D(units(i), kernel_size, activation='relu', padding='same')(x)\n x = layers.BatchNormalization()(x)\n x = layers.Conv1D(units(i), kernel_size, activation='relu', padding='same')(x)\n x = layers.BatchNormalization()(x)\n\n\nx = layers.Conv1D(2, kernel_size, activation='sigmoid', padding='same')(x)\nmodel = tf.keras.Model(inputs=inputs, outputs=x)\n\nWhich gives some results:\nEpoch 150/1000\n5/5 [==============================] - 2s 330ms/step - loss: 1.5309e-05 - \nprecision: 0.0903 - val_loss: 1.5629e-05 - val_precision: 1.0000\nEpoch 151/1000\n5/5 [==============================] - 2s 329ms/step - loss: 1.6166e-05 - \nprecision: 0.0681 - val_loss: 1.4105e-05 - val_precision: 0.0938\nEpoch 152/1000\n5/5 [==============================] - 2s 329ms/step - loss: 1.3713e-05 - \nprecision: 0.1277 - val_loss: 1.5543e-05 - val_precision: 1.0000\nEpoch 153/1000\n5/5 [==============================] - 2s 330ms/step - loss: 1.3353e-05 - \nprecision: 0.2615 - val_loss: 1.4101e-05 - val_precision: 0.4091\nEpoch 154/1000\n5/5 [==============================] - 2s 331ms/step - loss: 1.4418e-05 - \nprecision: 0.1412 - val_loss: 1.4621e-05 - val_precision: 0.8333\n\n"
] | [
0
] | [] | [] | [
"autoencoder",
"conv_neural_network",
"machine_learning",
"python",
"tensorflow"
] | stackoverflow_0074600357_autoencoder_conv_neural_network_machine_learning_python_tensorflow.txt |
Q:
how to use if statement with return to check if the function returns value when calling stored proc using python
I need to check, using the returned value, whether I got some value back when calling the stored procedure in Python.
def get_order_count(salesman_id, year):
try:
# create a connection to the Oracle Database
with cx_Oracle.connect(cfg.username,
cfg.password,
cfg.dsn,
encoding=cfg.encoding) as connection:
# create a new cursor
with connection.cursor() as cursor:
# create a new variable to hold the value of the
# OUT parameter
order_count = cursor.var(int)
# call the stored procedure
cursor.callproc('get_order_count',
[salesman_id, year, order_count])
return order_count.getvalue()
except cx_Oracle.Error as error:
print(error)
if order_count ==1
print ('succesesfull' )
else:
pass
A:
Inside the procedure there is an if statement that checks whether it ran correctly: it returns 1 if it did and 0 if it did not. So we assign that to an int value and then use it after calling the procedure:
if order_count == 1:
    print('successful')
elif order_count == 0:
    print('failed')
else:
    pass
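Putting it together, a minimal sketch (the salesman id and year below are placeholder values) of driving the branch from the function's return value:
order_count = get_order_count(54, 2017)  # placeholder arguments

if order_count == 1:
    print('successful')
elif order_count == 0:
    print('failed')
else:
    print('no value returned')  # e.g. the function hit the except branch and returned None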
| how to use if statement with return to check if the function returns value when calling stored proc using python | i need to check the value with return if getting some value when calling procedure in python
` i need to check the value with return if getting some value when calling procedure in python`
def get_order_count(salesman_id, year):
try:
# create a connection to the Oracle Database
with cx_Oracle.connect(cfg.username,
cfg.password,
cfg.dsn,
encoding=cfg.encoding) as connection:
# create a new cursor
with connection.cursor() as cursor:
# create a new variable to hold the value of the
# OUT parameter
order_count = cursor.var(int)
# call the stored procedure
cursor.callproc('get_order_count',
[salesman_id, year, order_count])
return order_count.getvalue()
except cx_Oracle.Error as error:
print(error)
if order_count ==1
print ('succesesfull' )
else:
pass
| [
" so inside a procedder there was if stament to check if run right then will return 1 if not 0 so we assing that to int value and then use it when calling procedure\nif order_count ==1:\nprint ('succesesfull' )\nelif: order_count ==0:\nprint('faild')\nelse:\npass ```\n\n"
] | [
0
] | [] | [] | [
"cx_oracle",
"oracle",
"python",
"sql"
] | stackoverflow_0074379044_cx_oracle_oracle_python_sql.txt |
Q:
Group separate strings of an OCR-Result based on coordinates in the image
I use easyocr to read the key figures from an image (display output of measuring instrument).
Because of different proportions of characters on the picture, some characters/strings, that are meant to be one unit, like value and unit (e.g "230 Volt"), are recognised as separate strings ("230", "Volt"). Another example are multiline strings, where each line is recognised as separate string. To illustrate it I prepared a picture. Its a little bit exaggerated but I hope it´s easy to understand.
Example picture to illustrate the problem
What I want to do
I try to find the elements that are on the same line or column (and very close to each other) and concatenate these strings.
Example data to handle (Output of easyocr)
(Coordinates are from top-left corner to bottom-left corner in clockwise direction)
([[239, 31], [563, 31], [563, 195], [239, 195]], '230', 0.7262734770774841)
([[591, 147], [661, 147], [661, 183], [591, 183]], 'Volt', 0.983400155647826)
([[801, 171], [1039, 171], [1039, 239], [801, 239]], 'This is a', 0.9870205241250117)
([[802, 256], [1232, 256], [1232, 328], [802, 328]], 'sentence with', 0.9997852752308181)
([[805, 341], [1065, 341], [1065, 427], [805, 427]], 'multiple', 0.9999849956753041)
([[212, 427], [311, 427], [311, 479], [212, 479]], 'Text', 0.9999873638153076)
([[362, 428], [474, 428], [474, 476], [362, 476]], 'More', 0.9999922513961792)
([[505, 413], [643, 413], [643, 479], [505, 479]], 'Text', 0.9999755620956421)
([[798, 428], [1136, 428], [1136, 500], [798, 500]], 'linebreaks.', 0.8525006562415545)
([[317, 601], [479, 601], [479, 669], [317, 669]], 'More', 0.9999911785125732)
([[529, 603], [665, 603], [665, 669], [529, 669]], 'Text', 0.9757571413464591)
([[699, 603], [841, 603], [841, 669], [699, 669]], 'with', 0.9999924302101135)
([[950, 608], [1182, 608], [1182, 683], [950, 683]], 'spaces.', 0.8026406194725301)
Output as DataFrame
I tried to handle it as a DataFrame and split the values into x and y for each point. I thought this view would help me, but I am still stuck.
Text Score tl_x tl_y tr_x tr_y bl_x bl_y br_x br_y
0 230 0.726273 239 31 563 31 239 195 563 195
1 Volt 0.983400 591 147 661 147 591 183 661 183
2 This is a 0.987021 801 171 1039 171 801 239 1039 239
3 sentence with 0.999785 802 256 1232 256 802 328 1232 328
4 multiple 0.999985 805 341 1065 341 805 427 1065 427
5 Text 0.999987 212 427 311 427 212 479 311 479
6 More 0.999992 362 428 474 428 362 476 474 476
7 Text 0.999976 505 413 643 413 505 479 643 479
8 linebreaks. 0.852501 798 428 1136 428 798 500 1136 500
9 More 0.999991 317 601 479 601 317 669 479 669
10 Text 0.975757 529 603 665 603 529 669 665 669
11 with 0.999992 699 603 841 603 699 669 841 669
12 spaces. 0.802641 950 608 1182 608 950 683 1182 683
What output I want to achieve:
I'm happy with a list of concatenated strings
["230 Volt", "This is a sentence with multiple linebreaks.","More text with spaces", ...]
What I tried
I tried to sort the column values with exactly or almost the same values into bins with pd.cut() and access the strings via the bin name, but I have no idea how to code it so that it's not hardcoded and can run automatically on different pictures.
Also tried to use np.isclose() with relative tolerance to values to group them together
tried looping with a lot of if-conditions, but nothing works.
Help
I am sure there is a very simple solution to this, just I am not good enough at programming yet to see it.
What would be the best approach to find the closest neighbours (on the same line/column) and group them together?
A:
Will you get similarly patterned text every time, like in the case above where you get "230 Volt", and in another example perhaps "320 Volt"? That is, will it always be formatted as "x Volt"?
If so:
import pandas as pd
c1_str = ' '.join(df["Text"])
c1_str = c1_str.replace('linebreaks', 'linebreaks|')
c1_str = c1_str.replace('Volt', 'Volt|')
mask =c1_str.split('|')
print(mask)
Gives #
['230 Volt', ' This is a sentence with multiple linebreaks', '. More text with spaces']
Explanation:
Convert the df column to a single string and manipulate that string according to the pattern you want by inserting split markers, then convert the string back to a list by splitting on the | marker.
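Since the question specifically asks for a coordinate-based grouping, here is an alternative rough sketch that joins boxes sitting on the same line, left to right. The 2 * min(height) gap threshold is a guess that needs tuning, and the multi-line paragraph case would still need an analogous vertical/left-edge check; df is the DataFrame shown in the question:
df = df.sort_values('tl_x').reset_index(drop=True)   # process boxes left to right
df['height'] = df['bl_y'] - df['tl_y']

def same_line(a, b):
    # b starts to the right of a: vertical ranges overlap and the horizontal gap is small
    vertical_overlap = min(a.bl_y, b.bl_y) - max(a.tl_y, b.tl_y) > 0
    horizontal_gap = b.tl_x - a.tr_x
    return vertical_overlap and 0 <= horizontal_gap < 2 * min(a.height, b.height)

groups = []
for box in df.itertuples(index=False):
    for g in groups:
        if same_line(g[-1], box):
            g.append(box)
            break
    else:
        groups.append([box])

print([' '.join(b.Text for b in g) for g in groups])
# e.g. ['Text More Text', '230 Volt', 'More Text with spaces.', ...]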
| Group separate strings of an OCR-Result based on coordinates in the image | I use easyocr to read the key figures from an image (display output of measuring instrument).
Because of different proportions of characters on the picture, some characters/strings, that are meant to be one unit, like value and unit (e.g "230 Volt"), are recognised as separate strings ("230", "Volt"). Another example are multiline strings, where each line is recognised as separate string. To illustrate it I prepared a picture. Its a little bit exaggerated but I hope it´s easy to understand.
Example picture to illustrate the problem
What I want to do
I try to find the elements that are on the same line or column (and very close to each other) and concatenate these strings.
Example data to handle (Output of easyocr)
(Coordinates are from top-left corner to bottom-left corner in clockwise direction)
([[239, 31], [563, 31], [563, 195], [239, 195]], '230', 0.7262734770774841)
([[591, 147], [661, 147], [661, 183], [591, 183]], 'Volt', 0.983400155647826)
([[801, 171], [1039, 171], [1039, 239], [801, 239]], 'This is a', 0.9870205241250117)
([[802, 256], [1232, 256], [1232, 328], [802, 328]], 'sentence with', 0.9997852752308181)
([[805, 341], [1065, 341], [1065, 427], [805, 427]], 'multiple', 0.9999849956753041)
([[212, 427], [311, 427], [311, 479], [212, 479]], 'Text', 0.9999873638153076)
([[362, 428], [474, 428], [474, 476], [362, 476]], 'More', 0.9999922513961792)
([[505, 413], [643, 413], [643, 479], [505, 479]], 'Text', 0.9999755620956421)
([[798, 428], [1136, 428], [1136, 500], [798, 500]], 'linebreaks.', 0.8525006562415545)
([[317, 601], [479, 601], [479, 669], [317, 669]], 'More', 0.9999911785125732)
([[529, 603], [665, 603], [665, 669], [529, 669]], 'Text', 0.9757571413464591)
([[699, 603], [841, 603], [841, 669], [699, 669]], 'with', 0.9999924302101135)
([[950, 608], [1182, 608], [1182, 683], [950, 683]], 'spaces.', 0.8026406194725301)
Output as DataFrame
I tried to handle it as Dataframe and split the values to x and y for each point. I though this view will help me. But i am still stucked
Text Score tl_x tl_y tr_x tr_y bl_x bl_y br_x br_y
0 230 0.726273 239 31 563 31 239 195 563 195
1 Volt 0.983400 591 147 661 147 591 183 661 183
2 This is a 0.987021 801 171 1039 171 801 239 1039 239
3 sentence with 0.999785 802 256 1232 256 802 328 1232 328
4 multiple 0.999985 805 341 1065 341 805 427 1065 427
5 Text 0.999987 212 427 311 427 212 479 311 479
6 More 0.999992 362 428 474 428 362 476 474 476
7 Text 0.999976 505 413 643 413 505 479 643 479
8 linebreaks. 0.852501 798 428 1136 428 798 500 1136 500
9 More 0.999991 317 601 479 601 317 669 479 669
10 Text 0.975757 529 603 665 603 529 669 665 669
11 with 0.999992 699 603 841 603 699 669 841 669
12 spaces. 0.802641 950 608 1182 608 950 683 1182 683
What output I want to achieve:
I`m happy with a list of concatinated strings
["230 Volt", "This is a sentence with multiple linebreaks.","More text with spaces", ...]
What I tried
I tried to sort the column-values with exact or almost the same values to bins with pd.cut(). And access the strings over the bin-name, but I have no idea how to code it, so its not hardcoded and can be run automatically on different pictures.
Also tried to use np.isclose() with relative tolerance to values to group them together
tried looping with a lot of if-conditions, but nothing works.
Help
I am sure there is a very simple solution to this, just I am not good enough at programming yet to see it.
What would be the best approach to find the closest neighbours (on the same line/column) and group them together?
| [
"Every time you will similar pattern text like in above case you get 230 volts...Like in another example you will get 320 volts? ...So which will be formatted as x volts?\nIf so\nimport pandas as pd\nc1_str = ' '.join(df[\"Text\"])\n\nc1_str = c1_str.replace('linebreaks', 'linebreaks|')\nc1_str = c1_str.replace('Volt', 'Volt|')\nmask =c1_str.split('|')\nprint(mask)\n\nGives #\n['230 Volt', ' This is a sentence with multiple linebreaks', '. More text with spaces']\n\nExplination:\nConvert df column to string & manuplate string according to pattern you want by creating split patterns. Converting string to list based on split pattern |\n"
] | [
0
] | [] | [] | [
"easyocr",
"numpy",
"pandas",
"python"
] | stackoverflow_0074616726_easyocr_numpy_pandas_python.txt |
Q:
How to export YahooQuery Dict type modules to CSV from AttributeError: 'dict' object has no attribute 'to_csv'?
AttributeError: 'dict' object has no attribute 'to_csv' occurs with the following:
import pandas as pd
from yahooquery import Ticker
symbols = ['AAPL','GOOG','MSFT']
faang = Ticker(symbols)
faang.asset_profile
df = (faang.asset_profile)
df.to_csv('output.csv', mode='a', index=True, header=True)
It happens with a lot of Dict type YahooQuery modules. Changing faang.asset_profile to faang.valuation_measures (a pandas:DataFrame type) works great.
A:
This seems to be working:
import pandas as pd
from yahooquery import Ticker
symbols = ['AAPL','GOOG','MSFT']
faang = Ticker(symbols)
faang.asset_profile
df = pd.DataFrame(faang.asset_profile).T
df.to_csv('output.csv', mode='a', index=True, header=True)
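A small equivalent variation (assuming faang.asset_profile is a dict keyed by ticker symbol, as in the question) that makes the orientation explicit:
df = pd.DataFrame.from_dict(faang.asset_profile, orient='index')
df.to_csv('output.csv', mode='a', index=True, header=True)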
| How to export YahooQuery Dict type modules to CSV from AttributeError: 'dict' object has no attribute 'to_csv'? | AttributeError: 'dict' object has no attribute 'to_csv' occurs with the following:
import pandas as pd
from yahooquery import Ticker
symbols = ['AAPL','GOOG','MSFT']
faang = Ticker(symbols)
faang.asset_profile
df = (faang.asset_profile)
df.to_csv('output.csv', mode='a', index=True, header=True)
It happens with a lot of Dict type YahooQuery modules. Changing faang.asset_profile to faang.valuation_measures (a pandas:DataFrame type) works great.
| [
"This seems to be working:\nimport pandas as pd\nfrom yahooquery import Ticker\nsymbols = ['AAPL','GOOG','MSFT'] \nfaang = Ticker(symbols)\nfaang.asset_profile\ndf = pd.DataFrame(faang.asset_profile).T\ndf.to_csv('output.csv', mode='a', index=True, header=True)\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"dictionary",
"pandas",
"python",
"yfinance"
] | stackoverflow_0074616735_dataframe_dictionary_pandas_python_yfinance.txt |
Q:
OSError: [WinError 193] %1 is not a valid Win32 application - nltk
So, I keep getting this error:
OSError: [WinError 193] %1 is not a valid Win32 application
I believed it to be because of my environment variables. So, I fixed that, but still keep getting the error. I'm at a loss currently. Here's the complete error output:
Traceback (most recent call last):
File "c:\Users\angel\Desktop\Programming Related\Python\improvedTherapibot\copyImprovedBot.py", line 5, in <module>
import nltk
File "C:\Users\angel\AppData\Local\Programs\Python\Python38\lib\site-packages\nltk\__init__.py", line 128, in <module>
from nltk.collocations import *
File "C:\Users\angel\AppData\Local\Programs\Python\Python38\lib\site-packages\nltk\collocations.py", line 39, in <module>
from nltk.metrics import (
File "C:\Users\angel\AppData\Local\Programs\Python\Python38\lib\site-packages\nltk\metrics\__init__.py", line 16, in <module>
from nltk.metrics.scores import (
File "C:\Users\angel\AppData\Local\Programs\Python\Python38\lib\site-packages\nltk\metrics\scores.py", line 15, in <module>
from scipy.stats.stats import betai
File "C:\Users\angel\AppData\Roaming\Python\Python38\site-packages\scipy\__init__.py", line 106, in <module>
from . import _distributor_init
File "C:\Users\angel\AppData\Roaming\Python\Python38\site-packages\scipy\_distributor_init.py", line 26, in <module>
WinDLL(os.path.abspath(filename))
File "C:\Users\angel\AppData\Local\Programs\Python\Python38\lib\ctypes\__init__.py", line 373, in __init__
self._handle = _dlopen(self._name, mode)
OSError: [WinError 193] %1 is not a valid Win32 application
Edit: Here is the code I'm trying to run.
import nltk
from nltk.corpus import wordnet
good_words = []
bad_words = []
for syn in wordnet.synsets("happy"):
for l in syn.lemmas():
good_words.append(l.name())
for syn in wordnet.synsets("sad"):
for l in syn.lemmas():
bad_words.append(l.name())
print(set(good_words))
Edit 2: My os is Windows 10 and running on x64
A:
OSError: [WinError 193] %1 is not a valid Win32 application
Maybe your computer's OS is not 32-bit Windows, but your Python version is 32-bit. So check your OS (I guess your OS is 64-bit Windows) and install the right Python version.
Here is the Python installer for 64-bit Windows:
python installer for win_64
A:
I ran into the same OS error on 64-bit Windows, and I have both 32-bit and 64-bit Python installed. The problem is definitely the nltk module.
In the nltk documentation, the nltk webpage (https://www.nltk.org/install.html) suggests installing the Windows 32-bit version of Python. Try 32-bit Python.
I would have just left a comment, but I have no reputation.
A:
Also as described in this link make sure you are using a 64 bit console e.g.
C:\Windows\SysWOW64\cmd.exe
while using pip to install your dependencies. This ensures the 64 bit version of the dependencies is installed. Also make sure you are using Python 64 bits and if loading any external .dll or .so compiled in C it was compiled using 64 bits.
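A quick way to confirm which interpreter bitness is actually running (standard library only):
import platform
import struct
import sys

print(platform.architecture())   # e.g. ('64bit', 'WindowsPE')
print(struct.calcsize("P") * 8)  # pointer size in bits: 32 or 64
print(sys.version)               # Windows builds also state "32 bit" / "64 bit" here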
| OSError: [WinError 193] %1 is not a valid Win32 application - nltk | So, I keep getting this error:
OSError: [WinError 193] %1 is not a valid Win32 application
I believed it to be because of my environment variables. So, I fixed that, but still keep getting the error. I'm at a loss currently. Here's the complete error output:
Traceback (most recent call last):
File "c:\Users\angel\Desktop\Programming Related\Python\improvedTherapibot\copyImprovedBot.py", line 5, in <module>
import nltk
File "C:\Users\angel\AppData\Local\Programs\Python\Python38\lib\site-packages\nltk\__init__.py", line 128, in <module>
from nltk.collocations import *
File "C:\Users\angel\AppData\Local\Programs\Python\Python38\lib\site-packages\nltk\collocations.py", line 39, in <module>
from nltk.metrics import (
File "C:\Users\angel\AppData\Local\Programs\Python\Python38\lib\site-packages\nltk\metrics\__init__.py", line 16, in <module>
from nltk.metrics.scores import (
File "C:\Users\angel\AppData\Local\Programs\Python\Python38\lib\site-packages\nltk\metrics\scores.py", line 15, in <module>
from scipy.stats.stats import betai
File "C:\Users\angel\AppData\Roaming\Python\Python38\site-packages\scipy\__init__.py", line 106, in <module>
from . import _distributor_init
File "C:\Users\angel\AppData\Roaming\Python\Python38\site-packages\scipy\_distributor_init.py", line 26, in <module>
WinDLL(os.path.abspath(filename))
File "C:\Users\angel\AppData\Local\Programs\Python\Python38\lib\ctypes\__init__.py", line 373, in __init__
self._handle = _dlopen(self._name, mode)
OSError: [WinError 193] %1 is not a valid Win32 application
Edit: Here is the code I'm trying to run.
import nltk
from nltk.corpus import wordnet
good_words = []
bad_words = []
for syn in wordnet.synsets("happy"):
for l in syn.lemmas():
good_words.append(l.name())
for syn in wordnet.synsets("sad"):
for l in syn.lemmas():
bad_words.append(l.name())
print(set(good_words))
Edit 2: My os is Windows 10 and running on x64
| [
"OSError: [WinError 193] %1 is not a valid Win32 application\n\nMaybe your computer's OS is not window_32, But your python version is 32-bit. So check your OS, I guess your OS is window_64, And install right python version.\nHere is python installer for window_64.\npython installer for win_64\n",
"I ran into the same OS error using, win_64 Bit and I have both python 32-bit and 64-bit installed. The problem is definitely the nltk module.\non the nltk documentation, the [nltk webpage] (https://www.nltk.org/install.html) suggests to install Windows 32-bit versions of python. Try python 32.\nI would have just done a comment but I have no reputation.\n",
"Also as described in this link make sure you are using a 64 bit console e.g.\nC:\\Windows\\SysWOW64\\cmd.exe\n\nwhile using pip to install your dependencies. This ensures the 64 bit version of the dependencies is installed. Also make sure you are using Python 64 bits and if loading any external .dll or .so compiled in C it was compiled using 64 bits.\n"
] | [
0,
0,
0
] | [] | [] | [
"corpus",
"nltk",
"python",
"python_3.x"
] | stackoverflow_0062337501_corpus_nltk_python_python_3.x.txt |
Q:
trying to make snakecase program
So I have to make a snakecase program:
camelcase = input("camelCase: ")
snakecase = camelcase.lower()
for c in camelcase:
if c.isupper():
snakecase += "_"
snakecase += c.lower()
print(snakecase)
With the for loop I'm going through each letter, and the if is for finding the uppercase letters, right? But I'm failing on the last part: I don't really understand how to avoid adding the "_" and c.lower() at the end of the word and instead just replace the character in place.
A:
Use list comprehension: it is more Pythonic and concise:
camelcase = input("camelCase: ")
snakecase = ''.join(c if c.lower() == c else '_' + c.lower() for c in camelcase)
print(snakecase)
A:
With += you are adding the _ to the end of the snakecase variable, which already contains the whole lowercased input. You do this for every uppercase character and then add the character itself, so the output ends up as something like myteststring_t_s for myTestString.
Instead, build the new string one character at a time, starting from an empty string.
camel_case = 'myTestString'
snake_case = ""
for c in camel_case:
if c.isupper():
snake_case += f'_{c.lower()}'
else:
snake_case += c
print(snake_case)
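Applied to the code from the question, the key change is to start snakecase as an empty string instead of the already-lowercased input, so nothing gets duplicated (a minimal sketch of the fix):
camelcase = input("camelCase: ")
snakecase = ""
for c in camelcase:
    if c.isupper():
        snakecase += "_" + c.lower()  # prefix uppercase letters with an underscore
    else:
        snakecase += c
print(snakecase)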
| trying to make snakecase program | so i have to make an snakecase program
camelcase = input("camelCase: ")
snakecase = camelcase.lower()
for c in camelcase:
if c.isupper():
snakecase += "_"
snakecase += c.lower()
print(snakecase)
with the for im going through each letter, the if is for finding the uppercase right? but im failing on the last part, i dont really understand how to not add the "_" and c.lower() at the end of the word but just replace it.
| [
"Use list comprehension: it is more Pythonic and concise:\ncamelcase = input(\"camelCase: \")\nsnakecase = ''.join(c if c.lower() == c else '_' + c.lower() for c in camelcase)\nprint(snakecase)\n\n",
"With += you are adding the _ to the end of the snakecase variable. This you do for every uppercase character and then add the character itself. The output should be something like myteststring_t_s for myTestString.\nInstead, build the new string one char by the other.\ncamel_case = 'myTestString'\nsnake_case = \"\"\n\nfor c in camel_case:\n if c.isupper():\n snake_case += f'_{c.lower()}'\n else:\n snake_case += c\n\nprint(snake_case)\n\n"
] | [
2,
1
] | [
"Your function would work if snakecase was empty, but it is not.\nYou should initialize it to snakecase = str()\n"
] | [
-1
] | [
"python",
"snakecase"
] | stackoverflow_0074616852_python_snakecase.txt |
Q:
Use TensorFlow model to guess / predict values between 2 points
My question is something I haven't encountered anywhere: I've been wondering whether it is possible for a TF model to determine values between 2 dates that have real / validated values assigned to them.
I have an example:
Let's take the price of Nickel, here's its chart for the last week:
There is no data for the two following dates: 19/11 and 20/11.
But we have the data points before and after.
So is it possible to use the data from before and after these 2 points to guess the values of the 2 missing dates?
Thank you a lot !
A:
It would be possible to create a machine learning model to predict the prices given a dataset of previous prices. Take a look at this post for instance. You would have to modify it slightly such that it predicts the prices in the gaps given previous and upcoming prices.
But for the example you gave, assuming the dates are from this year 2022, these are a Saturday and a Sunday; the stock market is closed on weekends, hence there is no price for the item. Also notice that there are other days in the year when no trading occurs (think about holidays), and then there is also no price, of course.
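As a much simpler non-ML baseline (my own illustration, not part of the answer above), gaps between two known trading days can be filled by interpolation, for example with pandas; the prices and dates here are made up:
import pandas as pd

# hypothetical daily prices with a gap on 19/11 and 20/11
prices = pd.Series(
    [24.1, 24.3, None, None, 24.8],
    index=pd.to_datetime(["2022-11-17", "2022-11-18", "2022-11-19",
                          "2022-11-20", "2022-11-21"]),
)
filled = prices.interpolate(method="time")  # linear in time between the known points
print(filled)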
| Use TensorFlow model to guess / predict values between 2 points | My question is something that I didn't encounter anywhere, I've been wondering if it was possible for a TF Model to determinate values between 2 dates that have real / validated values assigned to them.
I have an example :
Let's take the price of Nickel, here's it's chart the last week :
There is no data for the two following dates : 19/11 and 20/11
But we have the data points before and after.
So is it possible to use the datas from before and after these 2 points to guess the values of the 2 missing dates ?
Thank you a lot !
| [
"It would be possible to create a machine learning model to predict the prices given a dataset of previous prices. Take a look at this post for instance. You would have to modify it slightly such that it predicts the prices in the gaps given previous and upcoming prices.\nBut for the example you gave assuming the dates are of this year 2022, these are a Saturday and Sunday, the stock market is closed on the weekends, hence there is not price of the item. Also notice that there are other days in the year where there is not trading occurring, think about holidays, then there also is not price of course.\n"
] | [
1
] | [] | [] | [
"keras",
"python",
"tensorflow",
"tf.keras"
] | stackoverflow_0074616938_keras_python_tensorflow_tf.keras.txt |
Q:
How to use numpy.arange with two other arrays as the start and stop parameter?
I want to build a 2D array where the first dimension has the same length as two other arrays, and the second dimension is an array created by numpy.arange based on every element of the other two arrays: one array defines the start parameter, and the second array defines the stop parameter minus 1.
Let me give an example:
arr_1 = array([0, 0, 0, 1, 2, 3, 4])
arr_2 = array([0, 1, 2, 3, 4, 5, 6])
I'm trying to create a resulting array like this:
res_arr = array([[0], [0, 1], [0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6]])
Can this be done with numpy ?
A:
You can do this by using zip and list comprehension
res_arr = [np.arange(start, stop + 1) for start, stop in zip(arr_1, arr_2)]
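For completeness, a runnable version with the arrays from the question; note that the rows have different lengths, so the result is a Python list of arrays rather than a rectangular 2D NumPy array:
import numpy as np

arr_1 = np.array([0, 0, 0, 1, 2, 3, 4])
arr_2 = np.array([0, 1, 2, 3, 4, 5, 6])

res_arr = [np.arange(start, stop + 1) for start, stop in zip(arr_1, arr_2)]
print(res_arr)
# [array([0]), array([0, 1]), array([0, 1, 2]), array([1, 2, 3]),
#  array([2, 3, 4]), array([3, 4, 5]), array([4, 5, 6])]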
| How to use numpy.arange with two other arrays as the start and stop parameter? | I want to build a 2D-array where the first dimension has the same length as two other arrays and the second dimension is an array created by numpy.arange and based on every element of the other two arrays, where one array defines the start parameter and the second array defines the stop parameter-1.
Let me give an example:
arr_1 = array([0, 0, 0, 1, 2, 3, 4])
arr_2 = array([0, 1, 2, 3, 4, 5, 6])
I'm trying to create a resulting array like this:
res_arr = array([[0], [0, 1], [0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6]])
Can this be done with numpy ?
| [
"You can do this by using zip and list comprehension\nres_arr = [np.arange(start, stop + 1) for start, stop in zip(arr_1, arr_2)]\n\n"
] | [
2
] | [] | [] | [
"arrays",
"multidimensional_array",
"numpy",
"python"
] | stackoverflow_0074616862_arrays_multidimensional_array_numpy_python.txt |
Q:
Convert Unicode char code to char on Python
I have a list of Unicode character codes I need to convert into chars on python 2.7.
U+0021
U+0022
U+0023
.......
U+0024
How to do that?
A:
This regular expression will replace all U+nnnn sequences with the corresponding Unicode character:
import re
s = u'''\
U+0021
U+0022
U+0023
.......
U+0024
'''
s = re.sub(ur'U\+([0-9A-F]{4})',lambda m: unichr(int(m.group(1),16)),s)
print(s)
Output:
!
"
#
.......
$
Explanation:
unichr gives the character of a codepoint, e.g. unichr(0x21) == u'!'.
int('0021',16) converts a hexadecimal string to an integer.
lambda(m): expression is an anonymous function that receives the regex match.
It defines a function equivalent to def func(m): return expression but inline.
re.sub matches a pattern and sends each match to a function that returns the replacement. In this case, the pattern is U+hhhh where h is a hexadecimal digit, and the replacement function converts the hexadecimal digit string into a Unicode character.
A:
In case anyone using Python 3 and above wonders, how to do this effectively, I'll leave this post here for reference, since I didn't realize the author was asking about Python 2.7...
Just use the built-in python function chr():
char = chr(0x2474)
print(char)
Output:
⑴
Remember that the four digits in Unicode codenames U+WXYZ stand for a hexadecimal number WXYZ, which in python should be written as 0xWXYZ.
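If the codes are given as strings like "U+0021" (as in the question), a small Python 3 sketch that converts the whole list:
codes = ["U+0021", "U+0022", "U+0023", "U+0024"]
chars = [chr(int(code[2:], 16)) for code in codes]  # strip "U+", parse hex, convert
print(chars)  # ['!', '"', '#', '$']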
A:
The code written below will take every Unicode string and encode it to ASCII, ignoring any characters that cannot be encoded:
for I in list:
print(I.encode('ascii', 'ignore'))
| Convert Unicode char code to char on Python | I have a list of Unicode character codes I need to convert into chars on python 2.7.
U+0021
U+0022
U+0023
.......
U+0024
How to do that?
| [
"This regular expression will replace all U+nnnn sequences with the corresponding Unicode character:\nimport re\n\ns = u'''\\\nU+0021\nU+0022\nU+0023\n.......\nU+0024\n'''\n\ns = re.sub(ur'U\\+([0-9A-F]{4})',lambda m: unichr(int(m.group(1),16)),s)\n\nprint(s)\n\nOutput:\n!\n\"\n#\n.......\n$\n\nExplanation:\n\nunichr gives the character of a codepoint, e.g. unichr(0x21) == u'!'.\nint('0021',16) converts a hexadecimal string to an integer.\nlambda(m): expression is an anonymous function that receives the regex match.\nIt defines a function equivalent to def func(m): return expression but inline.\nre.sub matches a pattern and sends each match to a function that returns the replacement. In this case, the pattern is U+hhhh where h is a hexadecimal digit, and the replacement function converts the hexadecimal digit string into a Unicode character.\n\n",
"In case anyone using Python 3 and above wonders, how to do this effectively, I'll leave this post here for reference, since I didn't realize the author was asking about Python 2.7...\nJust use the built-in python function chr():\nchar = chr(0x2474)\nprint(char)\n\nOutput:\n⑴\n\nRemember that the four digits in Unicode codenames U+WXYZ stand for a hexadecimal number WXYZ, which in python should be written as 0xWXYZ.\n",
"The code written below will take every Unicode string and will convert into the string.\nfor I in list:\n print(I.encode('ascii', 'ignore'))\n\n"
] | [
2,
1,
0
] | [
"a = 'U+aaa'\na.encode('ascii','ignore')\n'aaa'\n\nThis will convert for unicode to Ascii which i think is what you want.\n"
] | [
-2
] | [
"python",
"python_2.7",
"unicode"
] | stackoverflow_0050283553_python_python_2.7_unicode.txt |
Q:
PyCharm does not highlight errors
I have a problem with PyCharm v2.7.
It does not show me errors.
I have configured it to show them, as here,
but nothing.
Here is a screenshot of what I see (no error displayed).
If I run code analysis it shows the errors marked as INVALID in the window, but it does not highlight the code.
Any idea?
A:
I had this issue recently on PyCharm 2020.3.3 Community Edition.
What I've found is in the top right corner of the editor there is a Reader Mode button.
If you click it you turn the Reader Mode off and then you can see your errors.
You can re-enable it by clicking the book icon
A:
I found it: I had enabled "Power Save Mode" by accident, which disables error checking. It can be toggled off (re-enabling inspections) by clicking the little man icon in the bottom right corner.
A:
In my case, I had some boxes unticked in the Python Inspections menu in Settings > Editor > Inspections > Python. I ticked everything and applied, and now it is working.
I don't really understand why this happened, as the problem arose from one day to the next. I had even re-installed the whole PyCharm, tried older versions, and deleted the .pycharm configuration folder from home.
A:
I had the same issue with PyCharm Community Edition, v.2016.3.2. To fix,
go to Preferences... from the PyCharm menu, and open Editor/Colors & Fonts/General
Now go to Errors and Warnings/Error under your current schema.
I also had to set my Project Interpreter under Preferences/Project:<name>/Project Interpreter.
See screenshot.
A:
None of the previous answers worked for me when I ran into this issue, but I was able to fix it by doing a hard reset on my PyCharm settings:
From the main menu, select File | Manage IDE Settings | Restore Default Settings.
You will lose all your custom settings this way of course, but this was the only thing that worked for me.
A:
I have tried many things, but only Invalidate Caches did the trick, after that I could hover the green arrow (top right side) and change Highlight to All Problems
A:
I just faced a similar issue.
I ran my web app with IIS, so I didn't have an interpreter set on my PyCharm project; when I added my virtual environment interpreter, everything started working again.
I hope this will help someone in the future.
A:
I needed to "Add content root" under "Project Structure" in Preferences. I had no content root.
A:
None of this worked for me.
My problem was a single warning, "unreachable code", which blocked all errors from being highlighted.
I removed this line of code (and with it the warning) and all errors showed up again.
| PyCharm does not highlight errors | I have a problem with PyCharm v2.7.
it does not show me errors.
I have configured it to show them as here
but nothing.
here a screenshot of what I see (no error displayed)
if I run code analysis it shows the errors marked as INVALID in the window but it does not highlight the code.
any idea?
| [
"I had this issue recently on PyCharm 2020.3.3 Community Edition.\nWhat I've found is in the top right corner of the editor there is a Reader Mode button.\nIf you click it you turn the Reader Mode off and then you can see your errors.\n\nYou can re-enable it by clicking the book icon\n\n",
"I found. i've enabled by chance the \"power safe mode\" that avoid error checking. it can be re-enabled by clicking on the little man on bottom right corner.\n",
"In my case, I had some ticks disabled in the Python Inspections menu in Settings > Editor > Inspections > Python. I have ticked everything and applied, and now it is working.\n\nI don't really understand why this happened as the problem arose from one day to another. I had even re-installed the whole PyCharm, trying older versions, and deleted the .pycharm configuration folder from home.\n",
"I had the same issue with PyCharm Community Edition, v.2016.3.2. To fix, \ngo to Preferences... from the PyCharm menu, and open Editor/Colors & Fonts/General\nNow go to Errors and Warnings/Error under your current schema.\nI also had to set my Project Interpreter under Preferences/Project:<name>/Project Interpreter.\nSee screenshot.\n\n",
"None of the previous answers worked for me when I ran into this issue, but I was able to fix it by doing a hard reset on my PyCharm settings:\nFrom the main menu, select File | Manage IDE Settings | Restore Default Settings.\nYou will lose all your custom settings this way of course, but this was the only thing that worked for me.\n",
"I have tried many things, but only Invalidate Caches did the trick, after that I could hover the green arrow (top right side) and change Highlight to All Problems\n",
"I just faced a similar issue.\nI ran my web with IIS so I didn't have an interpreter on my Pycharm project, when I added my virtual environment interpreter everything returned to work.\nI hope this will help someone in the future\n",
"I needed to \"Add content root\" under \"Project Structure\" in Preferences. I had no content root.\n",
"Nothing of this worked for me.\nMy problem was a single warning: \"unreachable code\" which blocked all errors from being highlighted.\nI removed this line of code (+ the warning) and all errors showed up again.\n"
] | [
57,
6,
3,
2,
2,
0,
0,
0,
0
] | [] | [] | [
"pycharm",
"python"
] | stackoverflow_0020663545_pycharm_python.txt |
Q:
Constraining select coefficients to be the same across equations in a SUR system
I want to estimate Labor Force Participation Rates for different age groups in a system of seemingly unrelated regressions (SUR), regressing them on some age group specific variables and shared birth-cohort dummies. This is the typical cohort-based approach (see for example Grigoli et al (2018).
I manage to estimate the system as a SUR using linearmodels, but I cannot figure out how one would specify constraints that force the coefficients on the birth-cohort dummies to be equal across all equations where they appear. Is this at all possible with current libraries?
To make an easily replicable example, lets leave Labor Force Participation Rates in favor of a commonly used dataset: Let's say I wanted the estimated "male" coefficient to be the same across the two equations from the example in the package linearmodels vignette:
import numpy as np
import pandas as pd
from linearmodels.datasets import fringe
from collections import OrderedDict
from linearmodels.system import SUR
data = fringe.load()
formula = OrderedDict()
formula["benefits"] = "hrbens ~ nrthcen + male"
formula["earnings"] = "hrearn ~ married + male"
mod = SUR.from_formula(formula, data)
print(mod.fit(cov_type="unadjusted"))
I'm trying to do this:
constraints_matrix=pd.DataFrame([0,0,0,1]).transpose()
constraints_values=pd.Series(['benefits_male'])
mod.add_constraints(constraints_matrix,constraints_values)
A:
Answering my own question after I got inspired by a friend. This is easily formulated in a linear constraint. The coefficient of benefits_male minus earnings_male needs to equal 0:
constraints_matrix=pd.DataFrame([0,1,0,-1]).transpose()
constraints_values=pd.Series([0])
mod.add_constraints(constraints_matrix,constraints_values)
print(mod.fit(cov_type="unadjusted"))
| Constraining select coefficients to be the same across equations in a SUR system | I want to estimate Labor Force Participation Rates for different age groups in a system of seemingly unrelated regressions (SUR), regressing them on some age group specific variables and shared birth-cohort dummies. This is the typical cohort-based approach (see for example Grigoli et al (2018).
I manage to estimate the system as a SUR using linearmodels, but I cannot figure out how one would specify constraints that force the coefficients on the birth-cohort dummies to be equal across all equations where they appear. Is this at all possible with current libraries?
To make an easily replicable example, lets leave Labor Force Participation Rates in favor of a commonly used dataset: Let's say I wanted the estimated "male" coefficient to be the same across the two equations from the example in the package linearmodels vignette:
import numpy as np
import pandas as pd
from linearmodels.datasets import fringe
from collections import OrderedDict
from linearmodels.system import SUR
data = fringe.load()
formula = OrderedDict()
formula["benefits"] = "hrbens ~ nrthcen + male"
formula["earnings"] = "hrearn ~ married + male"
mod = SUR.from_formula(formula, data)
print(mod.fit(cov_type="unadjusted"))
I'm trying to do this:
constraints_matrix=pd.DataFrame([0,0,0,1]).transpose()
constraints_values=pd.Series(['benefits_male'])
mod.add_constraints(constraints_matrix,constraints_values)
| [
"Answering my own question after I got inspired by a friend. This is easily formulated in a linear constraint. The coefficient of benefits_male minus earnings_male needs to equal 0:\nconstraints_matrix=pd.DataFrame([0,1,0,-1]).transpose()\nconstraints_values=pd.Series([0])\nmod.add_constraints(constraints_matrix,constraints_values)\nprint(mod.fit(cov_type=\"unadjusted\"))\n\n"
] | [
0
] | [] | [] | [
"linearmodels",
"python",
"seemingly_unrelated_regression"
] | stackoverflow_0074606527_linearmodels_python_seemingly_unrelated_regression.txt |
Q:
Python f-string with variable width alignment
I want to print below code.
!!!!**
!!!****
!!******
!********
So I use a while loop with i, j. But in some parts, the output of ! becomes weird.
I tried some cases: there is no problem if i and j are in ascending order, but there is a problem if they are in descending order. In my code below, print(i, j) shows there was no problem with the values of i and j.
i = 0
j = 6
s1 = ""
s2 = ""
while True:
i += 1
j -= 1
if i > 5: break
s1 = f"{s1:!<{j}}"
s2 = f"{s2:*^{i*2}}"
print(i, j)
print(s1+s2)
1 5
!!!!!**
2 4
!!!!!****
3 3
!!!!!******
4 2
!!!!!********
5 1
!!!!!**********
A:
Aren't you over-complicating things a bit here?
If it's the pattern you are looking for here:
def print_pattern(bangs: int, stars: int)->None:
output = f"{'!'*bangs}{'*'*stars}"
print(output)
Care to provide some more explanation about what you actually expect?
Is the total number of chars fixed?
Is it the sum of i and j, or something else?
i, j = 0, 6
total = i+1
while i+j == total:
if i>5:
break
print_pattern(bangs=i, stars=j)
i+=1
j-=1
A:
Use only 1 print, and include 'end' with a space:
print( ...., end = " ")
i = 0
j = 6
s1 = ""
s2 = ""
while True:
i += 1
j -= 1
if i >= 5:
break
s1 = f"{'!'*(j-1)}"
s2 = f"{s2:*^{i*2}}"
print(s1 + s2, end=" ")
Output:
!!!!** !!!**** !!****** !********
I also modified your break clause.
A:
It looks like you want a space after each sequence of asterisks but probably not at the end of the output. Therefore:
M = 4
N = 2
for m in range(M, 0, -1):
print(f"{'!' * m}{'*' * N} ", end='')
N += 2
print('\b')
Output:
!!!!** !!!**** !!****** !********
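If the goal is the exact four-line block shown at the top of the question (each row on its own line, bangs decreasing while stars increase by two), a minimal sketch:
rows = 4
for i in range(rows):
    print(f"{'!' * (rows - i)}{'*' * (2 * (i + 1))}")
Output:
!!!!**
!!!****
!!******
!********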
| Python f-string with variable width alignment | I want to print below code.
!!!!**
!!!****
!!******
!********
So I use while loop with i, j. But, in some parts, the output of ! becomes weird.
I tried some case, there is no problem if the i and j are in ascending order, but there is a problem if they are in descending order. Below my code, print(i, j) means there was no problem with the value of i and j.
i = 0
j = 6
s1 = ""
s2 = ""
while True:
i += 1
j -= 1
if i > 5: break
s1 = f"{s1:!<{j}}"
s2 = f"{s2:*^{i*2}}"
print(i, j)
print(s1+s2)
1 5
!!!!!**
2 4
!!!!!****
3 3
!!!!!******
4 2
!!!!!********
5 1
!!!!!**********
| [
"Aren't you over-complicating things a bit here?\nIf it's the pattern you are looking for here:\ndef print_pattern(bangs: int, stars: int)->None:\n output = f\"{'!'*bangs}{'*'*stars}\"\n print(output)\n\nCare to provide some more explanations about what do you expect actually?\nIs the total number of chars fixed?\nIs it the sum of i and j, something else?\ni, j = 0, 6\ntotal = i+1\nwhile i+j == total:\n if i>5:\n break\n print_pattern(bangs=i, stars=j)\n i+=1\n j-=1\n\n",
"Use only 1 print, and include 'end' with a space:\nprint( ...., end = \" \")\ni = 0\nj = 6\ns1 = \"\"\ns2 = \"\"\nwhile True:\n i += 1\n j -= 1\n if i >= 5:\n break\n s1 = f\"{'!'*(j-1)}\"\n s2 = f\"{s2:*^{i*2}}\"\n print(s1 + s2, end=\" \")\n\nOutput:\n!!!!** !!!**** !!****** !******** \n\nI also modified your break clause.\n",
"It looks like you want a space after each sequence of asterisks but probably not at the end of the output. Therefore:\nM = 4\nN = 2\n\nfor m in range(M, 0, -1):\n print(f\"{'!' * m}{'*' * N} \", end='')\n N += 2\n\nprint('\\b')\n\nOutput:\n!!!!** !!!**** !!****** !********\n\n"
] | [
0,
0,
0
] | [
"I found wrong thing.\ns1 = f\"{s1:!<{j}}\" in this part, value(j) is max value.\nSo, s1 already full.\nAt the end of the loop, s1 must be initialized.\nI should add s1 = \"\"\n"
] | [
-1
] | [
"alignment",
"f_string",
"python",
"while_loop",
"width"
] | stackoverflow_0074616829_alignment_f_string_python_while_loop_width.txt |
Q:
How to check if the first line has changed in a text file using python
I'm trying to write a script that will check if the first line of a text file has changed and print the value once. It needs to be an infinite loop so it will always keep checking for a change. The problem I'm having is that when the value is changed it keeps constantly printing, and it does not detect the next change.
What I need is for the script to constantly check the first line, print the value once if it changes, and do nothing if it does not change.
This is what I tried so far:
def getvar():
with open('readme.txt') as f:
first_line = f.readline().strip('\n')
result = first_line
return result
def checkvar():
initial = getvar()
print("Initial var: {}".format(initial))
while True:
current = getvar()
if initial == current:
pass
else:
print("var has changed!")
pass
checkvar()
A:
If the initial line has changed, set initial to current.
In the function checkvar()
def checkvar():
initial = getvar()
print("Initial var: {}".format(initial))
while True:
current = getvar()
if initial == current:
pass
else:
initial = current #initial line set to current line
print("var has changed!")
pass
A:
You are never re-assigning initial, so when you check if initial == current, it's only checking against the very first version of the file. Instead, re-assign as follows
def getvar():
with open('readme.txt') as f:
first_line = f.readline().strip('\n')
result = first_line
return result
def checkvar():
last = getvar()
while True:
current = getvar()
if last != current:
print("var has changed!")
last = current
checkvar()
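One extra detail, added here as a suggestion rather than part of the answer above: without a pause the loop re-reads the file as fast as the CPU allows, so a short sleep keeps the polling cheap:
import time

def checkvar(interval=1.0):
    last = getvar()
    while True:
        current = getvar()
        if last != current:
            print("var has changed!")
            last = current
        time.sleep(interval)  # wait a bit before reading the file again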
| How to check if the first line has changed in a text file using python | I'm trying to write a script that will check if the first line of a text file has changed and print the value once. It needs to be an infinite loop so It will always keep checking for a change. The problem I'm having is when the value is changed it will keep constantly printing and it does not detect the new change.
What I need to is the script to constantly check the first line and print the value once if it changes and do nothing if it does not change.
This is what I tried so far:
def getvar():
with open('readme.txt') as f:
first_line = f.readline().strip('\n')
result = first_line
return result
def checkvar():
initial = getvar()
print("Initial var: {}".format(initial))
while True:
current = getvar()
if initial == current:
pass
else:
print("var has changed!")
pass
checkvar()
| [
"If the initial line has changed, set initial to current.\nIn the function checkvar()\ndef checkvar():\n initial = getvar()\n print(\"Initial var: {}\".format(initial))\n while True:\n current = getvar()\n if initial == current:\n pass \n else:\n initial = current #initial line set to current line\n print(\"var has changed!\")\n pass\n\n",
"You are never re-assigning initial, so when you check if initial == current, it's only checking against the very first version of the file. Instead, re-assign as follows\ndef getvar():\n with open('readme.txt') as f:\n first_line = f.readline().strip('\\n')\n result = first_line\n return result\n\ndef checkvar():\n last = getvar()\n while True:\n current = getvar()\n if last != current:\n print(\"var has changed!\")\n last = current\n\ncheckvar()\n\n"
] | [
2,
2
] | [] | [] | [
"if_statement",
"python",
"while_loop"
] | stackoverflow_0074617156_if_statement_python_while_loop.txt |
Q:
Unzigzag an array into a matrix
I have a 1D array which is in fact a 2D matrix sampled this way:
↓-------<------S <- start
>------->------↓
↓-------<------<
>------->------E <- end
For example
B = 1 2 3 4
5 6 7 8
9 10 11 12
is coded as A = [4, 3, 2, 1, 5, 6, 7, 8, 12, 11, 10, 9].
The number of rows can be odd or even.
The following code works but is inefficient due to the loop (and no vectorization). How can I do a more efficient NumPy "unzigzagging"?
import numpy as np
def unzigzag(z, numcols):
numrows = len(z) // numcols
a = np.zeros((numrows, numcols))
col = numcols - 1
row = 0
sign = -1
for i in range(numrows*numcols):
if col == -1:
col = 0
sign = 1
row += 1
if col == numcols:
col = numcols - 1
sign = -1
row += 1
a[row, col] = z[i]
col += sign
return a
A = [4, 3, 2, 1, 5, 6, 7, 8, 12, 11, 10, 9]
B = unzigzag(A, 4)
#[[ 1. 2. 3. 4.]
# [ 5. 6. 7. 8.]
# [ 9. 10. 11. 12.]]
If possible, it would be useful to have it working even if more dimensions than 1D:
if A has shape (12, ), unzigzag(A, numcols=4) has shape (3, 4)
if A has shape (12, 100), unzigzag(A, numcols=4) has shape (3, 4, 100)
if A has shape (n, i, j, k, ...), unzigzag(A, numcols) has shape (n/numcols, numcols, i, j, k, ...)
Edit: example for the n-dimensional case:
import numpy as np
def unzigzag3(z, numcols):
numrows = z.shape[0] // numcols
new_shape = (numrows, numcols) + z.shape[1:]
a = np.zeros(new_shape)
col = numcols - 1
row = 0
sign = -1
for i in range(numrows*numcols):
if col == -1:
col = 0
sign = 1
row += 1
if col == numcols:
col = numcols - 1
sign = -1
row += 1
a[row, col, :] = z[i, :]
col += sign
return a
A = np.array([[4,4], [3, 3], [2, 2], [1, 1], [5, 5], [6, 6], [7, 7], [8, 8], [12, 12], [11, 11], [10, 10], [9, 9]])
print(A.shape) # (12, 2)
B = unzigzag3(A, 4)
print(B)
print(B.shape) # (3, 4, 2)
A:
I would reshape, then update every other row with the horizontally flipped version of that row.
import numpy as np
a = np.array([4, 3, 2, 1, 5, 6, 7, 8, 12, 11, 10, 9])
numcols = 4
a = a.reshape(-1,numcols)
a[::2] = np.flip(a, axis=1)[::2]
print(a)
Output
[[ 1 2 3 4]
[ 5 6 7 8]
[ 9 10 11 12]]
A:
Based on @Chris' answer (full credit to him), here seems to be a working version for both 1D and n-dimensional input:
import numpy as np
def unzigzag(z, numcols):
new_shape = (z.shape[0] // numcols, numcols) + z.shape[1:]
a = z.copy().reshape(new_shape)
a[::2] = np.flip(a, axis=1)[::2]
return a
unzigzag(np.array([4, 3, 2, 1, 5, 6, 7, 8, 12, 11, 10, 9]), numcols=4)
# [[ 1 2 3 4]
# [ 5 6 7 8]
# [ 9 10 11 12]]
# shape: (3, 4)
unzigzag(np.array([[4,4], [3, 3], [2, 2], [1, 1], [5, 5], [6, 6], [7, 7], [8, 8], [12, 12], [11, 11], [10, 10], [9, 9]]), numcols=4)
# [[[ 1 1]
# [ 2 2]
# [ 3 3]
# [ 4 4]]
# [[ 5 5]
# [ 6 6]
# [ 7 7]
# [ 8 8]]
# [[ 9 9]
# [10 10]
# [11 11]
# [12 12]]]
# shape: (3, 4, 2)
| Unzigzag an array into a matrix | I have a 1D array which is in fact a 2D matrix sampled this way:
↓-------<------S <- start
>------->------↓
↓-------<------<
>------->------E <- end
For example
B = 1 2 3 4
5 6 7 8
9 10 11 12
is coded as A = [4, 3, 2, 1, 5, 6, 7, 8, 12, 11, 10, 9].
The number of rows can be odd or even.
The following code works but is inefficient due to the loop (and no vectorization). How to do a more efficient Numpy "unzigzaging"?
import numpy as np
def unzigzag(z, numcols):
numrows = len(z) // numcols
a = np.zeros((numrows, numcols))
col = numcols - 1
row = 0
sign = -1
for i in range(numrows*numcols):
if col == -1:
col = 0
sign = 1
row += 1
if col == numcols:
col = numcols - 1
sign = -1
row += 1
a[row, col] = z[i]
col += sign
return a
A = [4, 3, 2, 1, 5, 6, 7, 8, 12, 11, 10, 9]
B = unzigzag(A, 4)
#[[ 1. 2. 3. 4.]
# [ 5. 6. 7. 8.]
# [ 9. 10. 11. 12.]]
If possible, it would be useful to have it working even if more dimensions than 1D:
if A has shape (12, ), unzigzag(A, numcols=4) has shape (3, 4)
if A has shape (12, 100), unzigzag(A, numcols=4) has shape (3, 4, 100)
if A has shape (n, i, j, k, ...), unzigzag(A, numcols) has shape (n/numcols, numcols, i, j, k, ...)
Edit: example for the n-dimensional case:
import numpy as np
def unzigzag3(z, numcols):
numrows = z.shape[0] // numcols
new_shape = (numrows, numcols) + z.shape[1:]
a = np.zeros(new_shape)
col = numcols - 1
row = 0
sign = -1
for i in range(numrows*numcols):
if col == -1:
col = 0
sign = 1
row += 1
if col == numcols:
col = numcols - 1
sign = -1
row += 1
a[row, col, :] = z[i, :]
col += sign
return a
A = np.array([[4,4], [3, 3], [2, 2], [1, 1], [5, 5], [6, 6], [7, 7], [8, 8], [12, 12], [11, 11], [10, 10], [9, 9]])
print(A.shape) # (12, 2)
B = unzigzag3(A, 4)
print(B)
print(B.shape) # (3, 4, 2)
| [
"I would reshape, then updated every other row with the horizontally flipped version of that row.\nimport numpy as np\n\na = np.array([4, 3, 2, 1, 5, 6, 7, 8, 12, 11, 10, 9])\nnumcols = 4\n\na = a.reshape(-1,numcols)\na[::2] = np.flip(a, axis=1)[::2]\n\nprint(a)\n\nOutput\n[[ 1 2 3 4]\n [ 5 6 7 8]\n [ 9 10 11 12]]\n\n",
"Based on @Chris' answer (full credit to him), here seems to be a working version for both 1D and n-dimensional input:\nimport numpy as np\n\ndef unzigzag(z, numcols):\n new_shape = (z.shape[0] // numcols, numcols) + z.shape[1:]\n a = z.copy().reshape(new_shape)\n a[::2] = np.flip(a, axis=1)[::2]\n return a\n\nunzigzag(np.array([4, 3, 2, 1, 5, 6, 7, 8, 12, 11, 10, 9]), numcols=4)\n\n# [[ 1 2 3 4]\n# [ 5 6 7 8]\n# [ 9 10 11 12]]\n# shape: (3, 4)\n\nunzigzag(np.array([[4,4], [3, 3], [2, 2], [1, 1], [5, 5], [6, 6], [7, 7], [8, 8], [12, 12], [11, 11], [10, 10], [9, 9]]), numcols=4)\n\n# [[[ 1 1]\n# [ 2 2]\n# [ 3 3]\n# [ 4 4]]\n\n# [[ 5 5]\n# [ 6 6]\n# [ 7 7]\n# [ 8 8]]\n\n# [[ 9 9]\n# [10 10]\n# [11 11]\n# [12 12]]]\n# shape: (3, 4, 2)\n\n"
] | [
4,
1
] | [] | [] | [
"matrix",
"numpy",
"numpy_ndarray",
"performance",
"python"
] | stackoverflow_0074615495_matrix_numpy_numpy_ndarray_performance_python.txt |
Q:
Take only one side of a list of lists and add to a new list
Say I have multiple lists of lists. Something like this:
list1 = [[1,2],[56,32],[34,244]]
list2 = [[43,21],[30,1],[19,3]]
list3 = [[1,3],[8,21],[9,57]]
I want to create two new lists:
right_side = [2,32,244,21,1,3,3,21,57]
left_side = [1,56,34,43,30,19,1,8,9]
All sub-lists have only two values. And all big lists (list1,list2,list3) have the same number of values as well.
How do I do that?
A:
By using zip built-in function you get tuples:
left_side, right_side = zip(*list1, *list2, *list3)
And if you really need lists:
left_side, right_side = map(list, zip(*list1, *list2, *list3))
A:
The below seems to work.
list1 = [[1, 2], [56, 32], [34, 244]]
list2 = [[43, 21], [30, 1], [19, 3]]
list3 = [[1, 3], [8, 21], [9, 57]]
left = []
right = []
lst = [list1, list2, list3]
for l in lst:
for ll in l:
left.append(ll[0])
right.append(ll[1])
print(f'Left: {left}')
print(f'Right: {right}')
A:
If you do not have any issues with importing some standard libraries, you might achieve the goal as follows:
import itertools
from operator import itemgetter
right_side = list(map(itemgetter(1), itertools.chain(list1, list2, list3)))
left_side = list(map(itemgetter(0), itertools.chain(list1, list2, list3)))
Output of the due prints shall be:
[2, 32, 244, 21, 1, 3, 3, 21, 57]
[1, 56, 34, 43, 30, 19, 1, 8, 9]
A:
You can use numpy:
import numpy as np
All = list1+list2+list3
All = np.array(All)
print(All[:,1].tolist())
This gives the right-side elements:
[2, 32, 244, 21, 1, 3, 3, 21, 57]
To get the left side:
print(All[:,0].tolist())
which gives:
[1, 56, 34, 43, 30, 19, 1, 8, 9]
| Take only one side of a list of lists and add to a new list | Say I have multiple lists of lists. Something like this:
list1 = [[1,2],[56,32],[34,244]]
list2 = [[43,21],[30,1],[19,3]]
list3 = [[1,3],[8,21],[9,57]]
I want to create two new lists:
right_side = [2,32,244,21,1,3,3,21,57]
left_side = [1,56,34,43,30,19,1,8,9]
All sub-lists have only two values. And all big lists (list1,list2,list3) have the same number of values as well.
How do I do that?
| [
"By using zip built-in function you get tuples:\nleft_side, right_side = zip(*list1, *list2, *list3)\n\nAnd if you really need lists:\nleft_side, right_side = map(list, zip(*list1, *list2, *list3))\n\n",
"The below seems to work.\nlist1 = [[1, 2], [56, 32], [34, 244]]\nlist2 = [[43, 21], [30, 1], [19, 3]]\nlist3 = [[1, 3], [8, 21], [9, 57]]\n\nleft = []\nright = []\nlst = [list1, list2, list3]\nfor l in lst:\n for ll in l:\n left.append(ll[0])\n right.append(ll[1])\nprint(f'Left: {left}')\nprint(f'Right: {right}')\n\n",
"If you do not have any issues with importing some standard libraries, you might achieve the goal as follows:\nimport itertools\nfrom operator import itemgetter\n\nright_side = list(map(itemgetter(1), itertools.chain(list1, list2, list3)))\nleft_side = list(map(itemgetter(0), itertools.chain(list1, list2, list3)))\n\nOutput of the due prints shall be:\n[2, 32, 244, 21, 1, 3, 3, 21, 57]\n[1, 56, 34, 43, 30, 19, 1, 8, 9]\n\n",
"You can use numpy\nimport numpy as np\nAll = list1+list2+list3\nAll = np.array(All)\nprint(All[:,1].tolist())\n\nGives right elemnst #\n[2, 32, 244, 21, 1, 3, 3, 21, 57]\n\nTo get left #\nprint(All[:,0].tolist())\n\nGives #\n[1, 56, 34, 43, 30, 19, 1, 8, 9]\n>>> \n\n"
] | [
2,
2,
2,
1
] | [] | [] | [
"list",
"python"
] | stackoverflow_0074617210_list_python.txt |
Q:
Write multiple rows to a CSV
I have a small GUI which should write the values into a CSV file. However, the header is always written instead of just the new entries.
this is how it looks in the csv:
Amount, Time
1000,12:13:40
Amount, Time
2000,12:14:30
What I want:
Amount, Time
1000,12:13:40
2000,12:14:30
def submit():
import csv
import datetime
with open("Data_amount.csv", "a", newline="") as csvfile:
writer = csv.writer(csvfile)
#Header
writer.writerow(["Amount", "Time"])
#Input
input_amount = entry_listbox.get()
#Time
now = datetime.datetime.now()
now_str = now.time().strftime("%H:%M:%S")
writer.writerow([input_amount, now_str])
timestamp = datetime.datetime.now()
input_amount2 = entry_listbox.get()
if input_amount2 != "":
listbox_stuekzahlen.insert(tkinter.END, f'{timestamp:%H:%M:%S} - {input_amount2} Stk.')
entry_listbox.delete(0,tkinter.END)
else:
tkinter.messagebox.showwarning(title="Warning!", message="INPUT!")
A:
You unconditionally write the header again each time you call submit. Either:
Remove that header write, and have, somewhere outside submit (early in your program, run exactly once), the code that initializes (opens in "w" mode so the file is cleared) the file with just the header, so each submit doesn't add an extra copy, or
Leave the header write it, but make it conditional on the file being empty (so you only write it when the file is already empty, and otherwise assume the file already has the header), e.g.
with open("Data_amount.csv", "a", newline="") as csvfile:
writer = csv.writer(csvfile)
#Header
if not os.fstat(csvfile.fileno()).st_size:
            writer.writerow(["Amount", "Time"]) # Write header only if file is empty
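Putting option 2 together as a self-contained sketch (note the extra import os; submit_row is a hypothetical helper name, not the original function):
import csv
import datetime
import os

def submit_row(amount):
    with open("Data_amount.csv", "a", newline="") as csvfile:
        writer = csv.writer(csvfile)
        if not os.fstat(csvfile.fileno()).st_size:  # file is empty, so write the header once
            writer.writerow(["Amount", "Time"])
        writer.writerow([amount, datetime.datetime.now().strftime("%H:%M:%S")])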
| Write multiple rows to a CSV | I have a small GUI which should write the values into a CSV file. However, the header is always written instead of just the new entries.
this is how it looks in the csv:
Amount, Time
1000,12:13:40
Amount, Time
2000,12:14:30
What I want:
Amount, Time
1000,12:13:40
2000,12:14:30
def submit():
import csv
import datetime
with open("Data_amount.csv", "a", newline="") as csvfile:
writer = csv.writer(csvfile)
#Header
writer.writerow(["Amount", "Time"])
#Input
input_amount = entry_listbox.get()
#Time
now = datetime.datetime.now()
now_str = now.time().strftime("%H:%M:%S")
writer.writerow([input_amount, now_str])
timestamp = datetime.datetime.now()
input_amount2 = entry_listbox.get()
if input_amount2 != "":
listbox_stuekzahlen.insert(tkinter.END, f'{timestamp:%H:%M:%S} - {input_amount2} Stk.')
entry_listbox.delete(0,tkinter.END)
else:
tkinter.messagebox.showwarning(title="Warning!", message="INPUT!")
| [
"You unconditionally write the header again each time you call submit. Either:\n\nRemove that header write, and have, somewhere outside submit (early in your program, run exactly once), the code that initializes (opens in \"w\" mode so the file is cleared) the file with just the header, so each submit doesn't add an extra copy, or\n\nLeave the header write it, but make it conditional on the file being empty (so you only write it when the file is already empty, and otherwise assume the file already has the header), e.g.\n with open(\"Data_amount.csv\", \"a\", newline=\"\") as csvfile:\n writer = csv.writer(csvfile)\n #Header \n if not os.fstat(csvfile.fileno()).st_size:\n writer.writerow([\"Amount\", \"Time\"]) # Write header only if input is empty\n\n\n\n"
] | [
0
] | [] | [] | [
"csv",
"python"
] | stackoverflow_0074617295_csv_python.txt |
Q:
get data sent from flask to html
I am building a Flask application and I have an issue related to data sent from render_template() in Flask to the HTML web page.
This is my Flask code (I want to pass a number):
screenx = ((int(width[0][0]) - 0))
return render_template('/barchart.html', screen = screenx)
while this is my html code.
<canvas id="myCanvas" width="1024" height="768"></canvas>
<script >
screenx1={{screen}}
count = 0;
var canvas = document.getElementById('myCanvas');
var context = canvas.getContext('2d');
myint = {{ mydict }}
console.log(" queste sono le coordinates", myint)
//var coordinates = canvas.toDataURL("text/plain");
//console.log(" queste sono le coordinates", coordinates)
var ref = canvas.toDataURL("image/png");
//console.log(ref)
//var imageObj = document.getElementById("mappa2");
//ctx.drawImage(imageObj,300,100);
var markerObj = new Image();
ref.onload = function() {
context.drawImage(ref, 0, 0, 1024,768);
};
markerObj.onload = function() {
context.drawImage(markerObj, 0,0, 20,20);
markerObj.style['z-index'] = "1";
};
markerObj.src = "https://cdn-icons-png.flaticon.com/512/684/684908.png";
var canvas = document.getElementById("myCanvas");
var canvasWidth = canvas.width;
var canvasHeight = canvas.height;
var ctx = canvas.getContext("2d");
var canvasData = ctx.getImageData(0, 0, canvasWidth, canvasHeight);
</script>
</body>
I tried to get the data by using this:
{{ screen}}
but nothing is sent.
Can you please help me?
A:
Flask
screenx = ((int(width[0][0]) - 0))
return render_template('/barchart.html', data=screenx)
HTML
screenx1={{data}}
Use data instead of screen?
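More generally, the keyword name passed to render_template must match the name used inside {{ ... }} in the template. A minimal sketch, not the asker's actual app:
from flask import Flask, render_template

app = Flask(__name__)

@app.route("/barchart")
def barchart():
    screenx = 1024  # placeholder value
    # "screen" is the name the template must reference as {{ screen }}
    return render_template("barchart.html", screen=screenx)
Any other placeholder used in the template (such as {{ mydict }} above) also has to be passed the same way, otherwise it renders as empty by default, which can break the surrounding JavaScript.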
| get data sent from flask to html | I am doing a flask application and I have a issue related to data sent from render_template() in flask to html web page.
This is my flask code ( I want to pass a number)
screenx = ((int(width[0][0]) - 0))
return render_template('/barchart.html', screen = screenx)
while this is my html code.
<canvas id="myCanvas" width="1024" height="768"></canvas>
<script >
screenx1={{screen}}
count = 0;
var canvas = document.getElementById('myCanvas');
var context = canvas.getContext('2d');
myint = {{ mydict }}
console.log(" queste sono le coordinates", myint)
//var coordinates = canvas.toDataURL("text/plain");
//console.log(" queste sono le coordinates", coordinates)
var ref = canvas.toDataURL("image/png");
//console.log(ref)
//var imageObj = document.getElementById("mappa2");
//ctx.drawImage(imageObj,300,100);
var markerObj = new Image();
ref.onload = function() {
context.drawImage(ref, 0, 0, 1024,768);
};
markerObj.onload = function() {
context.drawImage(markerObj, 0,0, 20,20);
markerObj.style['z-index'] = "1";
};
markerObj.src = "https://cdn-icons-png.flaticon.com/512/684/684908.png";
var canvas = document.getElementById("myCanvas");
var canvasWidth = canvas.width;
var canvasHeight = canvas.height;
var ctx = canvas.getContext("2d");
var canvasData = ctx.getImageData(0, 0, canvasWidth, canvasHeight);
</script>
</body>
I tried to get the data by using this
{{ screen}}
but nothing is sent.
Please can you help me ?
| [
"Flask\nscreenx = ((int(width[0][0]) - 0)) \nreturn render_template('/barchart.html', data=screenx)\n\nHTML\nscreenx1={{data}}\n\nUse data instead of screen?\n"
] | [
0
] | [] | [] | [
"canvas",
"flask",
"html",
"python"
] | stackoverflow_0074616644_canvas_flask_html_python.txt |
Q:
Code returning :.2f as invalid for a str, not sure where I'm missing the Float conversion
I'm working on a Python program that reads a CSV file and, depending on user input, prints/returns the connected information from the file. I'm slowly but surely debugging it, but I'm stuck (most likely out of frustration) and would love some help catching what's wrong. I've been trying to hunt down where I should convert to float; if you can spot it, I'd appreciate any help.
My code:
import csv
import sys
from csv import reader
def read_csv(file):
with open(file, 'r') as read_obj:
csv_reader = list(csv.reader(read_obj, delimiter= ','))
return csv_reader
def filter_data(user_input, data):
filtered_data = []
for row in data:
if (row[0].lower() == user_input.lower()):
filtered_data.append(row)
elif (row[4].lower() == user_input.lower()):
filtered_data.append(row)
return filtered_data
def calc_averages(filtered_data):
lst = [0, 0, 0]
for row in filtered_data:
lst[0] += float(row[7])
lst[1] += float(row[8])
lst[2] += float(row[9])
lst[0] = (lst[0]/len(filtered_data))
lst[1] = (lst[1]/len(filtered_data))
lst[2] = (lst[2]/len(filtered_data))
return lst
def calc_minimums(filtered_data):
min_concentration = filtered_data[0][8]
min_longevity = filtered_data[0][9]
min_paralyzed = filtered_data[0][11]
for line in filtered_data:
if line[8] < min_concentration:
min_concentration = line[8]
if line[9] < min_longevity:
min_longevity = line[9]
if line[11] < min_paralyzed:
min_paralyzed = line[11]
return [min_concentration, min_longevity, min_paralyzed]
def calc_maximums(filtered_data):
max_concentration = filtered_data[0][8]
max_longevity = filtered_data[0][9]
max_paralyzed = filtered_data[0][11]
for line in filtered_data:
if line[8] > max_concentration:
max_concentration = line[8]
if line[9] > max_longevity:
max_longevity = line[9]
if line[11] > max_paralyzed:
max_paralyzed = line[11]
return [max_concentration, max_longevity, max_paralyzed]
def print_stats(user_input, stat_type, stats):
print (f'{stat_type}s for {user_input} bees:')
print (f'{stat_type}s for {user_input} bees:')
print (f'{stat_type} Imidacloprid Concentration: {stats[0]:.2f}')
print (f'{stat_type} Days Paralyzed: {stats[2]:.2f}\n')
def run(data):
user_input = input('Enter the species/genus or the sociality of bee you would like information about: ')
filtered_data = filter_data(user_input, data)
averages = calc_averages(filtered_data)
minimums = calc_minimums(filtered_data)
maximums = calc_maximums(filtered_data)
print_stats(user_input, 'Average', averages)
print_stats(user_input, 'Minimum', minimums)
print_stats(user_input, 'Maximum', maximums)
more_data = input('Would you like to see more data? (Y/N) ')
if more_data == 'Y' or more_data == 'y':
return True
else:
return False
def main():
if len(sys.argv) != 2:
print('Invalid arguments given')
return
data = read_csv(sys.argv[1])
while run(data):
continue
if __name__ == '__main__':
main()
The error with input "solitary \n n":
Traceback (most recent call last):
File "main.py", line 121, in <module>
main()
File "main.py", line 117, in main
while run(data):
File "main.py", line 104, in run
print_stats(user_input, 'Minimum', minimums)
File "main.py", line 88, in print_stats
print (f'{stat_type} Imidacloprid Concentration: {stats[0]:.2f}')
ValueError: Unknown format code 'f' for object of type 'str'
The output is currently:
Enter the species/genus or the sociality of bee you would like information about: Averages for solitary bees:
Averages for solitary bees:
Average Imidacloprid Concentration: 29.48
Average Days Paralyzed: 1.33
Minimums for solitary bees:
Minimums for solitary bees:
A:
Both calc_minimums and calc_maximums are copying string data directly from the provided filtered_data argument without performing any type conversions. It's string data because the filter_data function is not performing any type conversions itself, and since you're not reading from the file with the csv.QUOTE_NONNUMERIC flag operating on the reader, no type conversion is performed (all the fields in each row are str).
It's possible you think you've performed the conversions because you called float on the fields in calc_averages, but that only read from the fields, parsed them, and returned a new float, leaving the original list unchanged (the values are still str).
If you want this to work, do one of:
Change calc_minimums and calc_maximums to perform the same type conversions that calc_averages is doing (kind of important, because right now your mins and maxes are based on str comparisons, not float comparisons, and string lexicographic sorting is almost certainly wrong much of the time, unless all your values have the exact same number of digits)
Apply a conversion step to convert the necessary fields to float ahead of time, before the data is used (saves repeated conversion work)
(If the CSV format supports it) Add the csv.QUOTE_NONNUMERIC to your csv.reader creation (assumes all non-numeric fields are in fact quoted, which is unlikely, unless the file was created using the same flag, or comes from an unusual CSV generator)
Assuming #3 isn't an option, the easiest solution (as well as fastest, and least error-prone) is #2; your existing functions don't have to change (the float conversions could even be removed from calc_averages). Just make a new function, e.g.:
# Defaulted arguments match the type and indices your code uses
def convert_data(data, totype=float, convert_indices=(7, 8, 9, 11)):
'''Modifies data in-place, converting specified indices in each row to totype'''
for row in data:
for idx in convert_indices:
row[idx] = totype(row[idx])
Then just add:
convert_data(filtered_data)
immediately after the line filtered_data = filter_data(user_input, data).
If you prefer not operating in-place, you can make a new list with something like this:
# Defaulted arguments match the type and indices your code uses
def convert_data(data, totype=float, convert_indices=frozenset({7, 8, 9, 11})):
'''Returns new copy of data with specified indices in each row converted to totype'''
convert_indices = frozenset(convert_indices) # Optimize to reduce cost of
# checking if index should be converted
return [[totype(x) if i in convert_indices else x for i, x in enumerate(row)]
for row in data]
and make the inserted line:
filtered_data = convert_data(filtered_data)
| Code returning :.2f as invalid for a str, not sure where I'm missing the Float conversion | Working on a code for Python, where essentially I am taking a csv file and depending on user input, I can print/return the connected information within the csv file. Essentially I'm slowly but surely de-bugging it but I'm stuck most likely due to frustration and would love some help catching what's wrong. So been trying to hunt where I should convert to float, if you see it. I'd love any help to see it.
My code:
import csv
import sys
from csv import reader
def read_csv(file):
with open(file, 'r') as read_obj:
csv_reader = list(csv.reader(read_obj, delimiter= ','))
return csv_reader
def filter_data(user_input, data):
filtered_data = []
for row in data:
if (row[0].lower() == user_input.lower()):
filtered_data.append(row)
elif (row[4].lower() == user_input.lower()):
filtered_data.append(row)
return filtered_data
def calc_averages(filtered_data):
lst = [0, 0, 0]
for row in filtered_data:
lst[0] += float(row[7])
lst[1] += float(row[8])
lst[2] += float(row[9])
lst[0] = (lst[0]/len(filtered_data))
lst[1] = (lst[1]/len(filtered_data))
lst[2] = (lst[2]/len(filtered_data))
return lst
def calc_minimums(filtered_data):
min_concentration = filtered_data[0][8]
min_longevity = filtered_data[0][9]
min_paralyzed = filtered_data[0][11]
for line in filtered_data:
if line[8] < min_concentration:
min_concentration = line[8]
if line[9] < min_longevity:
min_longevity = line[9]
if line[11] < min_paralyzed:
min_paralyzed = line[11]
return [min_concentration, min_longevity, min_paralyzed]
def calc_maximums(filtered_data):
max_concentration = filtered_data[0][8]
max_longevity = filtered_data[0][9]
max_paralyzed = filtered_data[0][11]
for line in filtered_data:
if line[8] > max_concentration:
max_concentration = line[8]
if line[9] > max_longevity:
max_longevity = line[9]
if line[11] > max_paralyzed:
max_paralyzed = line[11]
return [max_concentration, max_longevity, max_paralyzed]
def print_stats(user_input, stat_type, stats):
print (f'{stat_type}s for {user_input} bees:')
print (f'{stat_type}s for {user_input} bees:')
print (f'{stat_type} Imidacloprid Concentration: {stats[0]:.2f}')
print (f'{stat_type} Days Paralyzed: {stats[2]:.2f}\n')
def run(data):
user_input = input('Enter the species/genus or the sociality of bee you would like information about: ')
filtered_data = filter_data(user_input, data)
averages = calc_averages(filtered_data)
minimums = calc_minimums(filtered_data)
maximums = calc_maximums(filtered_data)
print_stats(user_input, 'Average', averages)
print_stats(user_input, 'Minimum', minimums)
print_stats(user_input, 'Maximum', maximums)
more_data = input('Would you like to see more data? (Y/N) ')
if more_data == 'Y' or more_data == 'y':
return True
else:
return False
def main():
if len(sys.argv) != 2:
print('Invalid arguments given')
return
data = read_csv(sys.argv[1])
while run(data):
continue
if __name__ == '__main__':
main()
The error with input "solitary \n n":
Traceback (most recent call last):
File "main.py", line 121, in <module>
main()
File "main.py", line 117, in main
while run(data):
File "main.py", line 104, in run
print_stats(user_input, 'Minimum', minimums)
File "main.py", line 88, in print_stats
print (f'{stat_type} Imidacloprid Concentration: {stats[0]:.2f}')
ValueError: Unknown format code 'f' for object of type 'str'
The output is currently:
Enter the species/genus or the sociality of bee you would like information about: Averages for solitary bees:
Averages for solitary bees:
Average Imidacloprid Concentration: 29.48
Average Days Paralyzed: 1.33
Minimums for solitary bees:
Minimums for solitary bees:
| [
"Both calc_minimums and calc_maximums are copying string data directly from the provided filtered_data argument without performing any type conversions. It's string data because the filter_data function is not performing any type conversions itself, and since you're not reading from the file with the csv.QUOTE_NONNUMERIC flag operating on the reader, no type conversion is performed (all the fields in each row are str).\nIt's possible you think you've performed the conversions because you called float on the fields in calc_averages, but that only read from the fields, parsed them, and returned a new float, leaving the original list unchanged (the values are still str).\nIf you want this to work, do one of:\n\nChange calc_minimums and calc_maximums to perform the same type conversions that calc_averages is doing (kind of important, because right now your mins and maxes are based on str comparisons, not float comparisons, and string lexicographic sorting is almost certainly wrong much of the time, unless all your values have the exact same number of digits)\nApply a conversion step to convert the necessary fields to float ahead of time, before the data is used (saves repeated conversion work)\n(If the CSV format supports it) Add the csv.QUOTE_NONNUMERIC to your csv.reader creation (assumes all non-numeric fields are in fact quoted, which is unlikely, unless the file was created using the same flag, or comes from an unusual CSV generator)\n\nAssuming #3 isn't an option, the easiest solution (as well as fastest, and least error-prone) is #2; your existing functions don't have to change (the float conversions could even be removed from calc_averages). Just make a new function, e.g.:\n# Defaulted arguments match the type and indices your code uses\ndef convert_data(data, totype=float, convert_indices=(7, 8, 9, 11)):\n '''Modifies data in-place, converting specified indices in each row to totype'''\n for row in data:\n for idx in convert_indices:\n row[idx] = totype(row[idx])\n\nThen just add:\nconvert_data(filtered_data)\n\nimmediately after the line filtered_data = filter_data(user_input, data).\nIf you prefer not operating in-place, you can make a new list with something like this:\n# Defaulted arguments match the type and indices your code uses\ndef convert_data(data, totype=float, convert_indices=frozenset({7, 8, 9, 11})):\n '''Returns new copy of data with specified indices in each row converted to totype'''\n convert_indices = frozenset(convert_indices) # Optimize to reduce cost of\n # checking if index should be converted\n return [[totype(x) if i in convert_indices else x for i, x in enumerate(row)]\n for row in data]\n\nand make the inserted line:\nfiltered_data = convert_data(filtered_data)\n\n"
] | [
1
] | [] | [] | [
"csv",
"python",
"string"
] | stackoverflow_0074617352_csv_python_string.txt |
Q:
Writing into a specific location in a text file
How do I add a string/integer into an existing text file at a specific location?
My sample text looks like below:
No, Color, Height, age
1, blue,70,
2, white,65,
3, brown,49,
4, purple,71,
5, grey,60,
My text file has 4 columns, three columns have text, how do I write to any row in the fourth column?
If I want to write 12 to the second row, the updated file (sample.txt) should look like this:
No, Color, Height, age
1, blue,70,12
2, white,65,
3, brown,49,
4, purple,71,
5, grey,60,
I have tried this:
with open("sample.txt", 'r') as file:
    data = file.readlines()
    data[1].split(",")[3] = 1

with open('sample.txt', 'w') as file:
    file.writelines(data)

with open('sample.txt', 'r') as file:
    print(file.read())
But it does not work. Your help is needed.
A:
Alternate way is read that text file to a list & append based on index as follows.
with open('sample.txt') as file:
lines = [line.rstrip() for line in file]
line1 = lines[1]
new_line1 = line1 + str(12)
lines[1] = new_line1
with open('sample.txt', mode='wt', encoding='utf-8') as f:
f.write('\n'.join(lines))
Updated sample.txt
No, Color, Height, age
1, blue,70,12
2, white,65,
3, brown,49,
4, purple,71,
5, grey,60,
Explanation:
Once you read the text file into a list:
the 0th element becomes the header (1st line),
the 1st element becomes the 2nd line,
and so on.
Access the line you want by index and append your string using the + operator.
A:
In this example, I used a 2D array and it worked for me. Try it out and send a chat to me if it doesn't work or if you need a more in-depth explanation.
with open("sample.txt", 'r') as file:
data = file.readlines()
for index, item in enumerate(data):
data[index] = item.strip("\n")
for index, item in enumerate(data):
data[index] = item.split(", ")
data[1][3] = "12"
ndata = []
for index, _ in enumerate(data):
data1 = ", ".join(data[index])
ndata.append(data1)
mdata = "\n".join(ndata)
with open("sample.txt", 'w') as file:
file.write(mdata)
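Since the file is essentially comma-separated, the csv module is another option. A minimal sketch, assuming the same sample.txt layout and that you want to write 12 into the age column of the first data row (note it re-writes the delimiter as "," without the extra space):
import csv

with open("sample.txt", newline="") as f:
    rows = list(csv.reader(f, skipinitialspace=True))

rows[1][3] = "12"  # row 0 is the header, row 1 is the first data row

with open("sample.txt", "w", newline="") as f:
    csv.writer(f).writerows(rows)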
| Writing into a specific location in a text file | How do I add a string/integer into an existing text file at a specific location?
My sample text looks like below:
No, Color, Height, age
1, blue,70,
2, white,65,
3, brown,49,
4, purple,71,
5, grey,60,
My text file has 4 columns, three columns have text, how do I write to any row in the fourth column?
If I want to write 12 to the second row, the updated file (sample.txt) should look like this:
No, Color, Height, age
1, blue,70,12
2, white,65,
3, brown,49,
4, purple,71,
5, grey,60,
I have tried this:
with open("sample.txt",'r') as file:
data =file.readlines()
data[1]. split(",") [3] = 1
with open ('sample.txt', 'w') as file:
file.writelines(data)
with open ('sample.txt', 'r') as file:
print (file. Read())
But it does not work. Your help is needed.
| [
"Alternate way is read that text file to a list & append based on index as follows.\nwith open('sample.txt') as file:\n lines = [line.rstrip() for line in file]\n\nline1 = lines[1]\nnew_line1 = line1 + str(12)\nlines[1] = new_line1\n\nwith open('sample.txt', mode='wt', encoding='utf-8') as f:\n f.write('\\n'.join(lines))\n\nUpdated sample.txt\nNo, Color, Height, age\n1, blue,70,12\n2, white,65,\n3, brown,49,\n4, purple,71,\n5, grey,60,\n\nExplination:\nOnce you read the text file to list.\n0th element will ebcome ---> index(1st line)\n1st element will become ----> 2nd line \n---\n--\n\n--\n\nAccess based on index & append your strings uinsg + operator\n",
"In this example, I used a 2D array and it worked for me. Try it out and send a chat to me if it doesn't work or if you need a more in-depth explanation.\nwith open(\"sample.txt\", 'r') as file:\n data = file.readlines()\n\n for index, item in enumerate(data):\n data[index] = item.strip(\"\\n\")\n\n for index, item in enumerate(data):\n data[index] = item.split(\", \")\n\ndata[1][3] = \"12\"\n\nndata = []\n\nfor index, _ in enumerate(data):\n data1 = \", \".join(data[index])\n ndata.append(data1)\n\nmdata = \"\\n\".join(ndata)\n\nwith open(\"sample.txt\", 'w') as file:\n file.write(mdata)\n\n"
] | [
0,
0
] | [] | [] | [
"file_writing",
"overwrite",
"python",
"text"
] | stackoverflow_0074616016_file_writing_overwrite_python_text.txt |
Q:
Selenium : Add multiple path with send_keys
When I try to add multiple path to an input type=file, one per one, it adds the file 1 then the file 1 and 2 etc...
new_images = modify_images(ad['images_url'])
time.sleep(4)
add_images = driver.find_element(By.XPATH, image_add_xpath)
time.sleep(2)
for i in range(len(new_images)):
print("image " + str(index) + " " + new_images[i])
time.sleep(3)
url = r"C:\\Users\\33651\Documents\Documents\\facebook-automate\\facebook-automate\\"+new_images[i]
print(url)
print("uploading")
add_images.send_keys(url)
print("Image uploadé")
time.sleep(2)
I already tried to execute only one send_keys with the concatenation of the multiple path with the " \n " separator. It doesn't work.
I also tried to clear the input but It doesn't work neither.
add_images = driver.find_element(By.XPATH, image_add_xpath)
time.sleep(2)
all_path_images = ""
for i in range(len(new_images)):
all_path_images += new_images[i] + ' \n '
add_images.send_keys(all_path_images)
print("Image uploadé")
time.sleep(2)
A:
From the code trials shown here, it looks like you didn't build the newline-separated string correctly.
To upload multiple files you can construct a single string containing all the absolute paths separated by \n, as follows:
all_path_images = ""
for i in range(len(new_images)):
    all_path_images += r"C:\\Users\\33651\Documents\Documents\\facebook-automate\\facebook-automate\\" + new_images[i] + '\n'
# remove the trailing newline character from the string
all_path_images = all_path_images.strip()
add_images.send_keys(all_path_images)
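A more compact way to build the same newline-separated string, assuming new_images holds the file names and the folder stays the one used above, is a single join:
import os

base = r"C:\Users\33651\Documents\Documents\facebook-automate\facebook-automate"
all_path_images = "\n".join(os.path.join(base, name) for name in new_images)
add_images.send_keys(all_path_images)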
| Selenium : Add multiple path with send_keys | When I try to add multiple path to an input type=file, one per one, it adds the file 1 then the file 1 and 2 etc...
new_images = modify_images(ad['images_url'])
time.sleep(4)
add_images = driver.find_element(By.XPATH, image_add_xpath)
time.sleep(2)
for i in range(len(new_images)):
print("image " + str(index) + " " + new_images[i])
time.sleep(3)
url = r"C:\\Users\\33651\Documents\Documents\\facebook-automate\\facebook-automate\\"+new_images[i]
print(url)
print("uploading")
add_images.send_keys(url)
print("Image uploadé")
time.sleep(2)
I already tried to execute only one send_keys with the concatenation of the multiple path with the " \n " separator. It doesn't work.
I also tried to clear the input but It doesn't work neither.
add_images = driver.find_element(By.XPATH, image_add_xpath)
time.sleep(2)
all_path_images = ""
for i in range(len(new_images)):
all_path_images += new_images[i] + ' \n '
add_images.send_keys(all_path_images)
print("Image uploadé")
time.sleep(2)
| [
"From your code trials shown here looks like you didn't add the new line correctly.\nTo upload multiple files you can construct a string adding all the absolute paths of the uploaded files separated by \\n , as following:\nall_path_images = \"\"\nfor i in range(len(new_images)):\n all_path_images + = r\"C:\\\\Users\\\\33651\\Documents\\Documents\\\\facebook-automate\\\\facebook-automate\\\\\" + new_images[i] + ' \\n '\n#remove the last new line character from the string\nall_path_images = all_path_images.strip()\nadd_images.send_keys(path)\n\n"
] | [
1
] | [] | [] | [
"file_upload",
"python",
"selenium",
"selenium_webdriver",
"web_scraping"
] | stackoverflow_0074616739_file_upload_python_selenium_selenium_webdriver_web_scraping.txt |
Q:
slope varying on straight, parallel linestrings (shapely)
I have 2 parallel lines (line1 = [(1,3),(4,3)] and line2 = [(1,2),(4,2)]):
Both have the same slope if I calculate it manually with m = (y-y1)/(x-x1):
line1: m1 = (3-3)/(1-4) = 0
line2: m2 = (2-2)/(1-4) = 0
m1 = m2 and therefore the lines are parallel, like shown in the picture.
But if I create shapely linestrings from line1 and line2 the slope are no longer equal...
import matplotlib.pyplot as plt
from shapely.geometry import LineString
line1 = LineString([(1,3),(4,3)])
line2 = LineString([(1,2),(4,2)])
fig, ax = plt.subplots()
ax.plot(*line1.xy)
ax.plot(*line2.xy)
xs1, xe1 = line1.coords[0]
ys1, ye1 = line1.coords[1]
m1 = (ys1-ye1)/(xs1-xe1)
xs2, xe2 = line2.coords[0]
ys2, ye2 = line2.coords[1]
m2 = (ys2-ye2)/(xs2-xe2)
print(m1,m2)
This prints -0.5 for m1 and -2.0 for m2...
But why is it not 0 or at least equal?
A:
line1.coords[0] returns the first point (x1, y1), not (x1, x2), so your unpacking mixed x and y values. Unpack each vertex as an (x, y) pair instead:
import matplotlib.pyplot as plt
from shapely.geometry import LineString
line1 = LineString([(1,3),(4,3)])
line2 = LineString([(1,2),(4,2)])
fig, ax = plt.subplots()
ax.plot(*line1.xy)
ax.plot(*line2.xy)
xs1, ys1 = line1.coords[0]
xe1, ye1 = line1.coords[1]
m1 = (ye1-ys1)/(xe1-xs1)
xs2, ys2 = line2.coords[0]
xe2, ye2 = line2.coords[1]
m2 = (ye2-ys2)/(xe2-xs2)
print(m1,m2)
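With the corrected unpacking both slopes evaluate to 0.0, so the lines are parallel as expected. A small helper (a sketch) makes the pattern reusable for any two-point LineString:
from shapely.geometry import LineString

def slope(line):
    # unpack each vertex as an (x, y) pair
    x1, y1 = line.coords[0]
    x2, y2 = line.coords[1]
    return (y2 - y1) / (x2 - x1)

print(slope(LineString([(1, 3), (4, 3)])), slope(LineString([(1, 2), (4, 2)])))  # 0.0 0.0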
| slope varying on straight, parallel linestrings (shapely) | I have 2 parallel lines (line1 = [(1,3),(4,3)] and line2 = [(1,2),(4,2)]):
Both have the same slope if I calculate it manually with m = (y-y1)/(x-x1):
line1: m1 = (3-3)/(1-4) = 0
line2: m2 = (2-2)/(1-4) = 0
m1 = m2 and therefore the lines are parallel, like shown in the picture.
But if I create shapely linestrings from line1 and line2 the slope are no longer equal...
import matplotlib.pyplot as plt
from shapely.geometry import LineString
line1 = LineString([(1,3),(4,3)])
line2 = LineString([(1,2),(4,2)])
fig, ax = plt.subplots()
ax.plot(*line1.xy)
ax.plot(*line2.xy)
xs1, xe1 = line1.coords[0]
ys1, ye1 = line1.coords[1]
m1 = (ys1-ye1)/(xs1-xe1)
xs2, xe2 = line2.coords[0]
ys2, ye2 = line2.coords[1]
m2 = (ys2-ye2)/(xs2-xe2)
print(m1,m2)
This prints -0.5 for m1 and -2.0 for m2...
But why is it not 0 or at least equal?
| [
"line1.cord[0] returns x1,y1 not x1,x2\nimport matplotlib.pyplot as plt\nfrom shapely.geometry import LineString\n\nline1 = LineString([(1,3),(4,3)])\nline2 = LineString([(1,2),(4,2)])\n\nfig, ax = plt.subplots()\nax.plot(*line1.xy)\nax.plot(*line2.xy)\n\nxs1, ys1 = line1.coords[0]\nxe1, ye1 = line1.coords[1]\n\nm1 = (ye1-ys1)/(xe1-xs1)\n\nxs2, ys2 = line2.coords[0]\nxe2, ye2 = line2.coords[1]\n\nm2 = (ye2-ys2)/(xe2-xs2)\n\nprint(m1,m2)\n\n"
] | [
2
] | [] | [] | [
"line",
"math",
"python",
"shapely"
] | stackoverflow_0074617327_line_math_python_shapely.txt |
Q:
Returning people born before certain year from tuple?
I need to write a function named older_people(people: list, year: int), which selects all those people on the list who were born before the year given as an argument. The function should return the names of these people in a new list.
An example of its use:
p1 = ("Adam", 1977)
p2 = ("Ellen", 1985)
p3 = ("Mary", 1953)
p4 = ("Ernest", 1997)
people = [p1, p2, p3, p4]
older = older_people(people, 1979)
print(older)
Sample output:
[ 'Adam', 'Mary' ]
So far I got:
def older_people(people: list, year: int):
for person in plist:
if person[1] < year:
return person[0]
p1 = ("Adam", 1977)
p2 = ("Ellen", 1985)
p3 = ("Mary", 1953)
p4 = ("Ernest", 1997)
plist = [p1, p2, p3, p4]
older = older_people(plist, 1979)
print(older)
At the moment this just prints the first person (Adam) who is born before 1979.
Any help for this one?
A:
First, you should use the function's argument in the body of older_people instead of the global variable plist: people should be used instead of plist.
Then, your return statement is inside the for loop, which means the function returns the first time the if condition is true, hence only one person is printed.
def older_people(people: list, year: int):
result = []
for person in people:
if person[1] < year:
result.append(person[0])
return result
p1 = ("Adam", 1977)
p2 = ("Ellen", 1985)
p3 = ("Mary", 1953)
p4 = ("Ernest", 1997)
plist = [p1, p2, p3, p4]
older = older_people(plist, 1979)
print(older)
A:
Alternately you could reference both elements of the tuple with labels instead of by index, also list comprehension...
def older_people(people: list, year: int):
    return [person for person, birth_year in people if birth_year < year]  # "before" means strictly less than
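Either version returns the expected result for the sample data from the question:
people = [("Adam", 1977), ("Ellen", 1985), ("Mary", 1953), ("Ernest", 1997)]
print(older_people(people, 1979))  # ['Adam', 'Mary']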
| Returning people born before certain year from tuple? | I need to write a function named older_people(people: list, year: int), which selects all those people on the list who were born before the year given as an argument. The function should return the names of these people in a new list.
An example of its use:
p1 = ("Adam", 1977)
p2 = ("Ellen", 1985)
p3 = ("Mary", 1953)
p4 = ("Ernest", 1997)
people = [p1, p2, p3, p4]
older = older_people(people, 1979)
print(older)
Sample output:
[ 'Adam', 'Mary' ]
So far I got:
def older_people(people: list, year: int):
for person in plist:
if person[1] < year:
return person[0]
p1 = ("Adam", 1977)
p2 = ("Ellen", 1985)
p3 = ("Mary", 1953)
p4 = ("Ernest", 1997)
plist = [p1, p2, p3, p4]
older = older_people(plist, 1979)
print(older)
At the moment this just prints the first person (Adam) who is born before 1979.
Any help for this one?
| [
"First you should use the argument of the function in the body of older_people instead of the global variable plist. people should be used instead of plist.\nThen, your return statement is inside the for loop, this means that it will leave the function at the first time the if condition is true, hence printing only one person.\ndef older_people(people: list, year: int):\n result = []\n for person in people:\n if person[1] < year:\n result.append(person[0])\n return result\n\n\n\n\np1 = (\"Adam\", 1977)\np2 = (\"Ellen\", 1985)\np3 = (\"Mary\", 1953)\np4 = (\"Ernest\", 1997)\nplist = [p1, p2, p3, p4]\n\nolder = older_people(plist, 1979)\nprint(older)\n\n",
"Alternately you could reference both elements of the tuple with labels instead of by index, also list comprehension...\ndef older_people(people: list, year: int):\n return [person for person, birth_year in people if birth_year <= year]\n\n"
] | [
2,
0
] | [] | [] | [
"python",
"tuples"
] | stackoverflow_0074616213_python_tuples.txt |
Q:
New Conda environment with latest Python Version for Jupyter Notebook
Since Python version changes are far and few between, I always forget how I have created a new Conda environment with the latest Python for Jupyter Notebook, so I thought I'd list it down for next time. From StackOverflow, there are some answers that no longer worked, and below is a compilation of commands I found on StackOverflow that worked for me, Nov-29-2022. These instructions below are for Windows, and using Powershell (although they can also be used for the normal command line cmd.exe)
# make sure you are in the base env
# update conda
conda update conda
# to allow support for powershell
conda init --all
# The conda-forge repository seems to have at least the latest
# stable Python version, so we will get Python from there.
# add conda-forge to channels of conda.
conda config --add channels conda-forge
conda update jupyter
# to fix 500 internal server error when trying to open a notebook later
pip3 install --upgrade --user nbconvert
# nb_conda_kernels enables a Jupyter Notebook or JupyterLab
# application in one conda environment to access kernels for Python,
# R, and other languages found in other environments.
conda install nb_conda_kernels
# I will now create a new conda env for Python 3.11 and name it as Python3.11
conda create -n python3.11 python=3.11
# check that it was created
conda info --envs
conda activate python3.11
# Once installed, need to install ipykernel so Jupyter notebook can
# see the new environment python3.11.
conda install -n python3.11 ipykernel
# install ipywidgets as well for some useful functionalities
conda install -n python3.11 ipywidgets
# Since I use R too, I'll also add a note here on R
# To utilize an R environment, it must have the r-irkernel package; e.g.
# conda install -n r_env r-irkernel
# example to install a package in the new env, if desired
# conda install --update-all --name python3.11 numpy
#conda list will show the env's packages, versions, and where they came from too
conda activate python3.11
conda list
conda deactivate
# Now to check if the new environment can be selected in Jupyter
# Notebook. I change to the root directory first so jupyter
# notebook can see every folder. Note that we are in base
# environment, although no problem if in another environment
cd\
jupyter notebook
# If I open an existing notebook for example, I can tap on Kernel,
# then Change kernel, and I should now be able to select the kernel
# from the new environment I created, shown as "Python [conda env:python3.11]".
#
# There will also be another entry showing just the name of the env,
# in this case, python3.11. Just ignore this, select the entries
# starting with "Python [conda env" ...
#
# If I tapped on New instead when Jupyter Notebook opened, it will
# also show the list of envs.
# to check version, either use :
!python --version
# or
from platform import python_version
print(python_version())
# both will show the Python version of whatever kernel is in use
# by Jupyter notebook
# to test Python 3.10 or 3.11 for example... from 3.10, an optional
# strict parameter for zip has been added and can be used to
# generate an error if lists' lengths are not the same
a = [1,2,3,4]
b = ['a', 'b', 'c']
for val1, val2 in zip(a,b, strict = True):
print(val1, val2)
# this should appear - ValueError: zip() argument 2 is shorter than argument 1
Is there another way ?
A:
The steps in the main question above are the nb_conda_kernels way. With nb_conda_kernels installed in the base environment, any notebook running from the base environment will automatically show the kernel from any other environment which has ipykernel installed. We only need one jupyter notebook, ideally installed in the base env.
Not ideal way: The "quick and dirty method" is to install jupyter notebook in each env. "If you install jupyter in any environment and run jupyter notebook from that environment the notebook will use the kernel from the active environment. The kernel will show with the default name Python 3 but we can verify this works by doing the following."
import os
print (os.environ['CONDA_DEFAULT_ENV'])
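Checking the interpreter path works as well and does not rely on conda-specific environment variables (a small sketch; the path in the comment is just an example):
import sys
print(sys.executable)  # e.g. ...\envs\python3.11\python.exe when that env's kernel is active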
The "usual or easy way" is "to individually register each environment you want to show in your kernels list." There is no need to install nb_conda_kernels.
After creating the new env, python3.11 for example
conda activate python3.11 # make it active
conda install ipykernel # needed for each env
conda install ipywidgets # for additional jupyter functionalities
python -m ipykernel install --user --name python3.11 --display-name "Python 3.11 env" # will install kernelspec python3.11
That's it, when running jupyter notebook, one will see "Python 3.11 env" as one of the env to select from.
NOTE:
The problem with this easy way is that running commands using ! like:
!python --version
!pip3 list
!conda list
will always refer to the env where jupyter notebook was started from, regardless of which kernel is currently selected (in use) by jupyter notebook.
So if we do this :
!pip3 install --upgrade numpy
numpy will be installed or upgraded in the env where jupyter notebook was started from. This is a problem if we are programmatically trying to upgrade a package within jupyter notebook itself based on a condition, for example.
If instead we used the nb_conda_kernels way, the above command will always install/upgrade in the env of the active kernel, regardless of the env where jupyter notebook was started from.
So that's something to take note of, if one is installing packages in environments other than base.
Personally, I use both nb_conda_kernels way (nb_conda_kernels) and the usual/easy way. Just follow all the steps of the usual way, then the last step, before running jupyter notebook, is :
# make sure you are in base env
conda install nb_conda_kernels
So my kernel list in Jupyter notebook would look like :
Python 3 (ipykernel)
Python3.11 env
Python [conda env:python3.11]
etc.
I can select whichever kernel I want and it will work, while remembering the behavior I mentioned above.
If I want !pip3 install --upgrade to work on the packages in the base, while using a kernel whose version is different from the base (Python 3.8 for example), I would start notebook from the base env, and select kernel "Python3.11 env".
And if I want !pip3 install --upgrade to work on the packages in the env of a certain env, I can start notebook from any env, and select the kernel "Python [conda env:python3.11]".
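One way to sidestep the ! ambiguity described above is the %pip (and %conda) line magic, which installs into the environment of the currently active kernel rather than the one the notebook server was launched from. A minimal example, run inside a notebook cell:
%pip install --upgrade numpy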
| New Conda environment with latest Python Version for Jupyter Notebook | Since Python version changes are far and few between, I always forget how I have created a new Conda environment with the latest Python for Jupyter Notebook, so I thought I'd list it down for next time. From StackOverflow, there are some answers that no longer worked, and below is a compilation of commands I found on StackOverflow that worked for me, Nov-29-2022. These instructions below are for Windows, and using Powershell (although they can also be used for the normal command line cmd.exe)
# make sure you are in the base env
# update conda
conda update conda
# to allow support for powershell
conda init --all
# The conda-forge repository seems to have at least the latest
# stable Python version, so we will get Python from there.
# add conda-forge to channels of conda.
conda config --add channels conda-forge
conda update jupyter
# to fix 500 internal server error when trying to open a notebook later
pip3 install --upgrade --user nbconvert
# nb_conda_kernels enables a Jupyter Notebook or JupyterLab
# application in one conda environment to access kernels for Python,
# R, and other languages found in other environments.
conda install nb_conda_kernels
# I will now create a new conda env for Python 3.11 and name it as Python3.11
conda create -n python3.11 python=3.11
# check that it was created
conda info --envs
conda activate python3.11
# Once installed, need to install ipykernel so Jupyter notebook can
# see the new environment python3.11.
conda install -n python3.11 ipykernel
# install ipywidgets as well for some useful functionalities
conda install -n python3.11 ipywidgets
# Since I use R too, I'll also add a note here on R
# To utilize an R environment, it must have the r-irkernel package; e.g.
# conda install -n r_env r-irkernel
# example to install a package in the new env, if desired
# conda install --update-all --name python3.11 numpy
#conda list will show the env's packages, versions, and where they came from too
conda activate python3.11
conda list
conda deactivate
# Now to check if the new environment can be selected in Jupyter
# Notebook. I change to the root directory first so jupyter
# notebook can see every folder. Note that we are in base
# environment, although no problem if in another environment
cd\
jupyter notebook
# If I open an existing notebook for example, I can tap on Kernel,
# then Change kernel, and I should now be able to select the kernel
# from the new environment I created, shown as "Python [conda env:python3.11]".
#
# There will also be another entry showing just the name of the env,
# in this case, python3.11. Just ignore this, select the entries
# starting with "Python [conda env" ...
#
# If I tapped on New instead when Jupyter Notebook opened, it will
# also show the list of envs.
# to check version, either use :
!python --version
# or
from platform import python_version
print(python_version())
# both will show the Python version of whatever kernel is in use
# by Jupyter notebook
# to test Python 3.10 or 3.11 for example... from 3.10, an optional
# strict parameter for zip has been added and can be used to
# generate an error if lists' lengths are not the same
a = [1,2,3,4]
b = ['a', 'b', 'c']
for val1, val2 in zip(a,b, strict = True):
print(val1, val2)
# this should appear - ValueError: zip() argument 2 is shorter than argument 1
Is there another way ?
| [
"\nThe steps in the main question above is the nb_conda_kernels way. With nb_conda_kernels installed in the base environment, any notebook running from the base environment will automatically show the kernel from any other environment which has ipykernel installed. We only need one jupyter notebook, ideally installed in the base env.\n\nNot ideal way: The \"quick and dirty method\" is to install jupyter notebook in each env. \"If you install jupyter in any environment and run jupyter notebook from that environment the notebook will use the kernel from the active environment. The kernel will show with the default name Python 3 but we can verify this works by doing the following.\"\n\n\nimport os\nprint (os.environ['CONDA_DEFAULT_ENV'])\n\nThe \"usual or easy way\" is \"to individually register each environment you want to show in your kernels list.\" There is no need to install nb_conda_kernels.\n\nAfter creating the new env, python3.11 for example\n`\nconda activate python3.11 # make it active\n\nconda install ipykernel # needed for each env\n\nconda install ipywidgets # for additional jupyter functionalities\n\npython -m ipykernel install --user --name python3.11 --display-name \"Python 3.11 env\" # will install kernelspec python3.11\n\n`\nThat's it, when running jupyter notebook, one will see \"Python 3.11 env\" as one of the env to select from.\nNOTE:\nThe problem with this easy way is that running commands using ! like:\n!python --version\n\n!pip3 list\n\n!conda list\n\nwill always refer to the env where jupyter notebook was started from, irregardless of whatever kernel version is currently selected (in use) by jupyter notebook.\nSo if we do this :\n!pip3 install --upgrade numpy\n\nnumpy will be installed or upgraded in the env where jupyter notebook was started from. This is a problem if we are programatically trying to upgrade a package within jupyter notebook itself based on a condition, for example.\nIf instead, we used the nb_conda_kernels way, the above command will always install/upgrade in the env of the active kernel, irregardless of the env where jupyter notebook was started from.\nSo that's something to take note of, if one is installing packages in environments other than base.\nPersonally, I use both nb_conda_kernels way (nb_conda_kernels) and the usual/easy way. Just follow all the steps of the usual way, then the last step, before running jupyter notebook, is :\n# make sure you are in base env\nconda install nb_conda_kernels\n\nSo my kernel list in Jupyter notebook would look like :\nPython 3 (ipykernel)\nPython3.11 env\nPython [conda env:python3.11]\netc.\nI can select whichever kernel I want and it will work, while remembering the behavior I mentioned above.\nIf I want !pip3 install --upgrade to work on the packages in the base, while using a kernel whose version is different from the base (Python 3.8 for example), I would start notebook from the base env, and select kernel \"Python3.11 env\".\nAnd if I want !pip3 install --upgrade to work on the packages in the env of a certain env, I can start notebook from any env, and select the kernel \"Python [conda env:python3.11]\".\n"
] | [
0
] | [] | [] | [
"conda",
"jupyter_notebook",
"python"
] | stackoverflow_0074611535_conda_jupyter_notebook_python.txt |
Q:
Change a url parameter
How change a parameter's value of url? Without regexps.
Now I try this, but it's long:
from urllib.parse import parse_qs, urlencode, urlsplit
url = 'http://example.com/?page=1&text=test#section'
param, newvalue = 'page', '2'
url, sharp, frag = url.partition('#')
base, q, query = url.partition('?')
query_dict = parse_qs(query)
query_dict[param][0] = newvalue
query_new = urlencode(query_dict, doseq=True)
url_new = f'{base}{q}{query_new}{sharp}{frag}'
Also, I tried by urlsplit:
parsed = urlsplit(url)
query_dict = parse_qs(parsed.query)
query_dict[param][0] = newvalue
query_new = urlencode(query_dict, doseq=True)
parsed.query = query_new
url_new = urlencode(parsed)
But on urlparsed.query = query_new it rise error AttributeError: can't set attribute.
A:
The result of urlsplit is a named tuple, which is immutable, so you have to create a modified copy with _replace (the leading underscore is there to avoid conflicts with field names).
from urllib.parse import parse_qs, urlencode, urlsplit
url = 'http://example.com/?page=1&text=test#section'
param, newvalue = 'page', '2'
parsed = urlsplit(url)
query_dict = parse_qs(parsed.query)
query_dict[param][0] = newvalue
query_new = urlencode(query_dict, doseq=True)
parsed=parsed._replace(query=query_new)
url_new = (parsed.geturl())
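For the URL from the question, this produces the expected result (keep in mind that parse_qs/urlencode may re-encode or reorder parameters in the general case):
print(url_new)  # http://example.com/?page=2&text=test#section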
A:
Just using urllib for python 3 (quite long but flexible):
from urllib.parse import urlparse, ParseResult, parse_qs, urlencode
u = urlparse('http://example.com/?page=1&text=test#section')
params = parse_qs(u.query)
params['page'] = 22 # change query param here
res = ParseResult(scheme=u.scheme, netloc=u.hostname, path=u.path, params=u.params, query=urlencode(params), fragment=u.fragment)
print (res.geturl())
A:
As named tuples are immutable, we have to work around that.
A solution is to use a very basic and fast string replace on the original URL (this assumes the old query string does not also appear elsewhere in the URL).
from urllib.parse import urlparse, parse_qs, urlencode
url = 'http://example.com/?page=1&text=test#section'
parsed_url = urlparse(url)
params = parse_qs(parsed_url.query)
params['page'] = 22 # changing query parameters
new_url = url.replace(parsed_url.query, urlencode(params))
| Change a url parameter | How change a parameter's value of url? Without regexps.
Now I try this, but it's long:
from urllib.parse import parse_qs, urlencode, urlsplit
url = 'http://example.com/?page=1&text=test#section'
param, newvalue = 'page', '2'
url, sharp, frag = url.partition('#')
base, q, query = url.partition('?')
query_dict = parse_qs(query)
query_dict[param][0] = newvalue
query_new = urlencode(query_dict, doseq=True)
url_new = f'{base}{q}{query_new}{sharp}{frag}'
Also, I tried by urlsplit:
parsed = urlsplit(url)
query_dict = parse_qs(parsed.query)
query_dict[param][0] = newvalue
query_new = urlencode(query_dict, doseq=True)
parsed.query = query_new
url_new = urlencode(parsed)
But on urlparsed.query = query_new it rise error AttributeError: can't set attribute.
| [
"Tuples are immutable.So you have to replace it .Here _ is meant to avoid conflict with fieldnames ._replace\nfrom urllib.parse import parse_qs, urlencode, urlsplit\nurl = 'http://example.com/?page=1&text=test#section'\nparam, newvalue = 'page', '2'\nparsed = urlsplit(url)\nquery_dict = parse_qs(parsed.query)\nquery_dict[param][0] = newvalue\nquery_new = urlencode(query_dict, doseq=True)\nparsed=parsed._replace(query=query_new)\nurl_new = (parsed.geturl())\n\n",
"Just using urllib for python 3 (quite long but flexible):\nfrom urllib.parse import urlparse, ParseResult, parse_qs, urlencode\n\nu = urlparse('http://example.com/?page=1&text=test#section')\nparams = parse_qs(u.query)\nparams['page'] = 22 # change query param here\nres = ParseResult(scheme=u.scheme, netloc=u.hostname, path=u.path, params=u.params, query=urlencode(params), fragment=u.fragment)\nprint (res.geturl())\n\n",
"As namedtuple are immutable, we have to work around.\nA solution is to use a very basic and fast replace.\nfrom urllib.parse import urlparse, parse_qs, urlencode\n\nurl = 'http://example.com/?page=1&text=test#section'\n\nparsed_url = urlparse(url)\nparams = parse_qs(parsed_url.query)\nparams['page'] = 22 # changing query parameters\n\nnew_url = url.replace(parsed_url.query, urlencode(params))\n\n"
] | [
9,
6,
0
] | [] | [] | [
"python",
"url",
"urllib"
] | stackoverflow_0050893347_python_url_urllib.txt |
Q:
pandas vs. datetime: calculating time deltas between timezone-aware datetimes
On the one hand, for performance requirements, I'm using pandas to compute time difference between 2 timezone-aware datetimes, that is to say between 2 timezone-aware pandas.Timestamp objects.
On the other hand, for testing purposes (mainly), I'm using exclusively the Python datetime module. The idea was to achieve the same result with 2 different pieces of code.
My issue is that the time difference calculation turns out not to be the same in pandas and in the datetime module of Python. Here's an illustration of my point:
from zoneinfo import ZoneInfo
import datetime
import pandas as pd
utc = ZoneInfo(key='UTC')
d1 = datetime.datetime(2023, 3, 1, 0, 0, tzinfo=ZoneInfo(key='Europe/Paris'))
d2 = datetime.datetime(2023, 9, 1, 0, 0, tzinfo=ZoneInfo(key='Europe/Paris'))
p1 = pd.Timestamp(d1)
p2 = pd.Timestamp(d2)
# with the datetime module, this is True...
d2 - d1 != d2.astimezone(utc) - d1.astimezone(utc)
# but with pandas, it isn't...
p2 - p1 != p2.astimezone(utc) - p1.astimezone(utc)
# ⚠ thus, this is False and my tests fail!
d2 - d1 == p2 - p1
How can I change the behavior of either the pandas or the datetime module for my tests to pass? Is there another solution than adding .astimezone() everywhere? For instance, a default setting to update somewhere?
Also, to better understand the root of this inconsistency, what are the differences between the implementation of the timedelta calculation in pandas and in datetime?
Please note that I'm also using Django and I would like to avoid resorting to pytz because it was replaced by zoneinfo in Django 4.
A:
# You can try something like this:
date=datetime.datetime(2023, 3, 1, 0, 0,)
timestamp = pd.Timestamp(date, tz='Europe/Paris') #Convert timezone of timestamp.
print(timestamp)
After looking into p1 and p2 and trying to find the difference, I found the following:
print(p1,p2)
print(d1,d2)
print(p2.astimezone(utc))
print(d2.astimezone(utc))
print(d1.astimezone(utc))
print(p1.astimezone(utc))
print(p2.astimezone(utc) - p1.astimezone(utc))
print(p2 - p1)
print(d2-d1)
print(d2.astimezone(utc) - d1.astimezone(utc))
2023-03-01 00:00:00+01:00 2023-09-01 00:00:00+02:00
2023-03-01 00:00:00+01:00 2023-09-01 00:00:00+02:00
2023-08-31 22:00:00+00:00
2023-08-31 22:00:00+00:00
2023-02-28 23:00:00+00:00
2023-02-28 23:00:00+00:00
183 days 23:00:00
183 days 23:00:00
184 days, 0:00:00
183 days, 23:00:00
# As you can see, d2 - d1 and p2 - p1 differ slightly:
# 184 days, 0:00:00   (datetime with the same tzinfo: wall-clock difference)
# 183 days, 23:00:00  (pandas: absolute/UTC difference across the DST change)
date = datetime.datetime(2023, 3, 1, 0, 0)
date2 = datetime.datetime(2023, 9, 1, 0, 0)
timestamp = pd.Timestamp(date, tz='Europe/Paris')   # localize the naive datetime to Europe/Paris
timestamp2 = pd.Timestamp(date2, tz='Europe/Paris')
# so p2 - p1 == timestamp2 - timestamp
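The root cause, as far as I can tell: when both operands share the same tzinfo, datetime subtracts the naive wall-clock values and ignores the timezone, whereas pandas stores tz-aware Timestamps as UTC instants, so its subtraction always measures elapsed time across the DST change. Converting both datetimes to a common zone before subtracting makes the two libraries agree; a minimal check:
from zoneinfo import ZoneInfo
import datetime
import pandas as pd

utc = ZoneInfo('UTC')
d1 = datetime.datetime(2023, 3, 1, tzinfo=ZoneInfo('Europe/Paris'))
d2 = datetime.datetime(2023, 9, 1, tzinfo=ZoneInfo('Europe/Paris'))
p1, p2 = pd.Timestamp(d1), pd.Timestamp(d2)

# elapsed (absolute) time: both libraries agree once datetime is normalized to UTC
assert p2 - p1 == d2.astimezone(utc) - d1.astimezone(utc)  # 183 days 23:00:00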
| pandas vs. datetime: calculating time deltas between timezone-aware datetimes | On the one hand, for performance requirements, I'm using pandas to compute time difference between 2 timezone-aware datetimes, that is to say between 2 timezone-aware pandas.Timestamp objects.
On the other hand, for testing purposes (mainly), I'm using exclusively the Python datetime module. The idea was to achieve the same result with 2 different pieces of code.
My issue is that the time difference calculation turns out not to be the same in pandas and in the datetime module of Python. Here's an illustration of my point:
from zoneinfo import ZoneInfo
import datetime
import pandas as pd
utc = ZoneInfo(key='UTC')
d1 = datetime.datetime(2023, 3, 1, 0, 0, tzinfo=ZoneInfo(key='Europe/Paris'))
d2 = datetime.datetime(2023, 9, 1, 0, 0, tzinfo=ZoneInfo(key='Europe/Paris'))
p1 = pd.Timestamp(d1)
p2 = pd.Timestamp(d2)
# with the datetime module, this is True...
d2 - d1 != d2.astimezone(utc) - d1.astimezone(utc)
# but with pandas, it isn't...
p2 - p1 != p2.astimezone(utc) - p1.astimezone(utc)
# ⚠ thus, this is False and my tests fail!
d2 - d1 == p2 - p1
How can I change the behavior of either the pandas or the datetime module for my tests to pass? Is there another solution than adding .astimezone() everywhere? For instance, a default setting to update somewhere?
Also, to better understand the root of this inconsistency, what are the differences between the implementation of the timedelta calculation in pandas and in datetime?
Please note that I'm also using Django and I would like to avoid resorting to pytz because it was replaced by zoneinfo in Django 4.
| [
"# you can try some thing like \ndate=datetime.datetime(2023, 3, 1, 0, 0,)\ntimestamp = pd.Timestamp(date, tz='Europe/Paris') #Convert timezone of timestamp.\nprint(timestamp)\n\nafter looking into p1 and p2 and trying to find the difference i found out this :\nprint(p1,p2)\nprint(d1,d2)\nprint(p2.astimezone(utc))\nprint(d2.astimezone(utc))\nprint(d1.astimezone(utc))\nprint(p1.astimezone(utc))\nprint(p2.astimezone(utc) - p1.astimezone(utc))\nprint(p2 - p1)\nprint(d2-d1)\nprint(d2.astimezone(utc) - d1.astimezone(utc))\n\n2023-03-01 00:00:00+01:00 2023-09-01 00:00:00+02:00\n2023-03-01 00:00:00+01:00 2023-09-01 00:00:00+02:00\n2023-08-31 22:00:00+00:00\n2023-08-31 22:00:00+00:00\n2023-02-28 23:00:00+00:00\n2023-02-28 23:00:00+00:00\n183 days 23:00:00\n183 days 23:00:00\n184 days, 0:00:00\n183 days, 23:00:00\n\n# if you can see the difference between both p1 and p2 also d1 d2 its slightly #different :\n#184 days, 0:00:00\n#183 days, 23:00:00 \n\ndate=datetime.datetime(2023, 3, 1, 0, 0,)\ndate2=datetime.datetime(2023, 9, 1, 0, 0,)\ntimestamp = pd.Timestamp(date, tz='Europe/Paris') #Convert timezone of timestamp.\ntimestamp2 = pd.Timestamp(date2, tz='Europe/Paris')\nso p2-p1 == timestamp2-timestamp\n\n\n"
] | [
0
] | [] | [] | [
"datetime",
"pandas",
"python",
"python_datetime",
"timezone"
] | stackoverflow_0074617029_datetime_pandas_python_python_datetime_timezone.txt |
Q:
Clear slash command in discord.py
At the beginning I would like to point out that I do not use the py-cord module only and only discord.py. I wanted to create a / clear command.The problem is when the application that has to return the feedback that successfully deleted n messages from the user xyz.
There is an error mentioning
"await interaction.response.send_message (content = content, ephemeral
= True)"
is an unknown interaction
All code slash command:
client = MyClient(intents=intents)
t = app_commands.CommandTree(client)
@t.command(name="clear", description="Clear n messages specific user", guild=discord.Object(id=867851000286806016))
async def self(interaction: discord.Interaction, amount: int, member: discord.Member):
channel = interaction.channel
def check_author(m):
return m.author.id == member.id
await channel.purge(limit=amount, check=check_author)
content = f"Successfully deleted {amount} messages from {member.name}"
await interaction.response.send_message(content=content, ephemeral=True)
client.run(discord_TOKEN)
At the end, I wanted to point out that the bot removes the number of messages that were given. Only feedback from the bot application is missing.
I have the message: The application is not responding
A:
Use this; I hope it helps. The key point is to defer the interaction first: purging can take longer than the 3-second response window, which is what produces the "Unknown interaction" / "application is not responding" error.
client = MyClient(intents=intents)
t = app_commands.CommandTree(client)

@t.command(name="clear", description="Clear n messages specific user", guild=discord.Object(id=867851000286806016))
async def self(interaction: discord.Interaction, amount: int, member: discord.Member):
    # acknowledge right away; the real reply is sent later as a follow-up
    await interaction.response.defer(ephemeral=True)

    def check_author(m):
        return m.author.id == member.id

    await interaction.channel.purge(limit=amount, check=check_author)
    await interaction.followup.send(f"Successfully deleted {amount} messages from {member.name}")
| Clear slash command in discord.py | At the beginning I would like to point out that I do not use the py-cord module only and only discord.py. I wanted to create a / clear command.The problem is when the application that has to return the feedback that successfully deleted n messages from the user xyz.
There is an error mentioning
"await interaction.response.send_message (content = content, ephemeral
= True)"
is an unknown interaction
All code slash command:
client = MyClient(intents=intents)
t = app_commands.CommandTree(client)
@t.command(name="clear", description="Clear n messages specific user", guild=discord.Object(id=867851000286806016))
async def self(interaction: discord.Interaction, amount: int, member: discord.Member):
channel = interaction.channel
def check_author(m):
return m.author.id == member.id
await channel.purge(limit=amount, check=check_author)
content = f"Successfully deleted {amount} messages from {member.name}"
await interaction.response.send_message(content=content, ephemeral=True)
client.run(discord_TOKEN)
At the end, I wanted to point out that the bot removes the number of messages that were given. Only feedback from the bot application is missing.
I have the message: The application is not responding
| [
"Use this. I hope that I can help you with it.\nclient = MyClient(intents=intents)\nt = app_commands.CommandTree(client)\n\[email protected](name=\"clear\", description=\"Clear n messages specific user\", guild=discord.Object(id=867851000286806016))\nasync def self(ctx, interaction: discord.Interaction, amount: int, member: discord.Member):\n channel = interaction.channel\n\n def check_author(m):\n return m.author.id == member.id\n await ctx.channel.purge(limit=amount, check=check_author)\n await ctx.send(f\"Successfully deleted {amount} messages from {member.name}\") \n\n"
] | [
0
] | [] | [] | [
"discord.py",
"python"
] | stackoverflow_0073629825_discord.py_python.txt |
Q:
how to submit button click by text link
I am trying to submit a form using Selenium, but the submit button isn't working. driver.find_element_by_xpath('//button[@type="submit"]').click() isn't working for me; how can I click the submit button?
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
driver.get('https://splendour.themerex.net/contact/')
driver.find_element_by_link_text("Get In Touch").click()
Advance thanks
A:
I guess you are using Selenium 4. If so, find_element_by_* is no longer supported there. You need to use the modern driver.find_element(By...) syntax, as shown below.
You also need to introduce WebDriverWait with expected_conditions to wait for the element to become clickable.
The following code works
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument("start-maximized")
webdriver_service = Service('C:\webdrivers\chromedriver.exe')
driver = webdriver.Chrome(options=options, service=webdriver_service)
wait = WebDriverWait(driver, 10)
url = "https://splendour.themerex.net/contact/"
driver.get(url)
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "input[type='submit']"))).click()
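If you prefer to keep the link-text approach from your attempt, the Selenium 4 equivalent (reusing the imports and wait from the snippet above) would be the following; it only works if the element really is a link with that exact visible text:
wait.until(EC.element_to_be_clickable((By.LINK_TEXT, "Get In Touch"))).click()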
| how to submit button click by text link | I am trying submit form using selenium, but submit button isn't working , How can I submit button through driver.find_element_by_xpath('//button\[@type="submit"\]').click() this isn't working for me.
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
driver.get('https://splendour.themerex.net/contact/')
driver.find_element_by_link_text("Get In Touch").click()`
Advance thanks
| [
"\nI guess you are using Selenium 4. If so find_element_by_* are no more supported there. You need to use the modern syntax of driver.find_element(By. as folllowing.\nYou need to introduce WebDriverWait expected_conditions to wait for element to become clickable.\n\nThe following code works\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(options=options, service=webdriver_service)\nwait = WebDriverWait(driver, 10)\n\nurl = \"https://splendour.themerex.net/contact/\"\ndriver.get(url)\n\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, \"input[type='submit']\"))).click()\n\n"
] | [
0
] | [] | [] | [
"python",
"selenium",
"selenium4",
"webdriverwait"
] | stackoverflow_0074617483_python_selenium_selenium4_webdriverwait.txt |
Q:
How to print my class and not get memory reference
I have a class called Pension, with attributes like a person's name, age, savings and a growth rate.
I have a class method which calculates the person's total savings at retirement year.
Under my main function, I want to print the class to see if my code is working as intended, but I don't know how to do as I only get the memory reference when printing.
How can I print the class instance, so that it goes through all its attributes and runs the function result, and prints the result? Worth to note; to run the function 'result' which calculates the total pension, the growth rate is user inputted and in a function of its own (and is run in main())
For example, if I try to print the 2nd last line: print(pensions) I only get the memory reference. So in this case, if a person (the data for which I read in from a file) has saved up 1000 dollars (using my result method), I would like that fact to be printed into a list.
This is my code:
class Pension:
def __init__(self, name,age,savings,growth):
self.name = name
self.age = age
self.savings = savings
self.growth = growth
def result(self):
amount=self.savings
rate=1+(self.growth/100)
years=65-self.age
return (amount * (1 - pow(rate, years))) / (1 - rate)
def convert(elem: str):
if not elem.isdigit():
return elem
return float(elem)
def convert_row(r: list) -> list:
return [convert(e) for e in r]
def get_growth(msg: str = "Enter growth rate: "):
return float((input(msg).strip()))
def main():
with open('personer.txt') as f:
raw_data = f.readlines()
data = [row.split("/") for row in raw_data]
data = [convert_row(row) for row in data]
pensions = [Pension(*i, get_growth()) for i in data]
main()
A:
From the Pension object's perspective it doesn't actually matter how the growth is provided. In this case it may also be worth making result a property, so there's no need to call it as a function (you access it like any other attribute, but the value is calculated dynamically).
You can customize the __str__ method to return any str representation of your object.
class Pension:
    def __init__(self, name, age, savings, growth):
        self.name = name
        self.age = age
        self.savings = savings
        self.growth = growth

    @property
    def result(self):
        amount = self.savings
        rate = 1 + (self.growth / 100)
        years = 65 - self.age
        return (amount * (1 - pow(rate, years))) / (1 - rate)

    def __str__(self):
        return f"Pension:\n{self.name=}\n{self.age=}\n{self.savings=}\n{self.growth=}\n{self.result=}"
And then just:
for p in pensions:
print(p)
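One caveat: print(pensions) prints a list, and containers display their elements via __repr__, not __str__. To get readable output when printing the whole list too, also define __repr__, for example by delegating to __str__. A self-contained toy example (the Point class is just an illustration):
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __str__(self):
        return f"Point({self.x}, {self.y})"

    # containers (lists, dicts, ...) show their elements via __repr__, so reuse __str__
    __repr__ = __str__

print([Point(1, 2), Point(3, 4)])  # [Point(1, 2), Point(3, 4)]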
| How to print my class and not get memory reference | I have a class called Pension, with attributes like a person's name, age, savings and a growth rate.
I have a class method which calculates the person's total savings at retirement year.
Under my main function, I want to print the class to see if my code is working as intended, but I don't know how to do as I only get the memory reference when printing.
How can I print the class instance, so that it goes through all its attributes and runs the function result, and prints the result? Worth to note; to run the function 'result' which calculates the total pension, the growth rate is user inputted and in a function of its own (and is run in main())
For example, if I try to print the 2nd last line: print(pensions) I only get the memory reference. So in this case, if a person (the data for which I read in from a file) has saved up 1000 dollars (using my result method), I would like that fact to be printed into a list.
This is my code:
class Pension:
def __init__(self, name,age,savings,growth):
self.name = name
self.age = age
self.savings = savings
self.growth = growth
def result(self):
amount=self.savings
rate=1+(self.growth/100)
years=65-self.age
return (amount * (1 - pow(rate, years))) / (1 - rate)
def convert(elem: str):
if not elem.isdigit():
return elem
return float(elem)
def convert_row(r: list) -> list:
return [convert(e) for e in r]
def get_growth(msg: str = "Enter growth rate: "):
return float((input(msg).strip()))
def main():
with open('personer.txt') as f:
raw_data = f.readlines()
data = [row.split("/") for row in raw_data]
data = [convert_row(row) for row in data]
pensions = [Pension(*i, get_growth()) for i in data]
main()
| [
"From the Pension class object's perspective it doesn't actually matter how is the growth provided. Also in this case maybe it's worth to make the result a property, then there's no need to call it as a function (just access like any other property, but the values will be calculated \"dynamically\").\nYou can customize the __str__ method to return any str representation of your object.\nclass Pension:\n def __init__(self, name,age,savings,growth):\n self.name = name\n self.age = age\n self.savings = savings\n self.growth = growth\n self._result = result\n\n @property\n def result(self):\n amount=self.savings\n rate=1+(self.growth/100)\n years=65-self.age\n return (amount * (1 - pow(rate, years))) / (1 - rate)\n\n\n def __str__(self):\n return f\"Pension:\\n{self.amount=}\\n{self.age=}\\n{self.savings}\\n{self.growth=}\\n{self.result}\"\n\nAnd then just:\nfor p in pensions:\n print(p)\n\n"
] | [
0
] | [] | [] | [
"class",
"methods",
"printing",
"python"
] | stackoverflow_0074617432_class_methods_printing_python.txt |
Q:
Fastest possible optimisation of an area difference with a constrained sum
I have four arrays, a1,a2,a3,a4, each of length 500. I have a target array at, also of length 500. These arrays each represent the y coordinates of unevenly spaced points on a graph. I have the x coordinates in a separate array.
I want to optimise the coefficients c1,c2,c3,c4 such that the area difference between c1x1 + c2x2 + c3x3 + c4x4 and at is minimised. The coefficients must sum to 1 and be between 0 and 1 inclusive.
I currently do this using scipy.optimize.minimize. Using scipy.optimize.LinearConstraints, I constrain my target variable (an array [c1,c2,c3,c4]) to sum to 1. The loss function multiplies the target variable by the arrays a1,a2,a3,a4, and then finds the absolute difference between this sum and at. This absolute difference is integrated between each consecutive pair of x coordinates, and the sum of these integrals is outputted.
Here is the code:
xpoint_diffs = xpoints[1:] - xpoints[:-1]
a1a2a3a4 = np.array([a1,a2,a3,a4])
def lossfunc(x):
    absdiff = abs(at - (a1a2a3a4 * x).sum(axis = 1))
    return ((absdiff[:-1] + absdiff[1:])/2 * xpoint_diffs).sum()
Is there a faster way to do this?
I then pass my constraint and loss function into scipy.optimize.minimize.
A:
This isn't a full solution, more like some thoughts that may lead to one. Also, someone please double-check my math.
Variables and test data
First, let's begin by defining some test data. While doing so, I transpose the a1a2a3a4 matrix because it will prove more convenient. Plus, I'm renaming it to a14 because I cannot be bothered to write the whole thing out every time, sorry.
import numpy as np
from numpy import linalg, random
from scipy import optimize
a14 = random.uniform(-1., 1., (500, 4)) * (0.25, 0.75, 1., 2.)
at = random.uniform(-1., 1., 500)
xpoints = np.cumsum(random.random(500))
initial_guess = np.full(4, 0.25)
Target
Okay, here is my first slight point of confusion: OP described the target as the "area difference" between two graphs. X coordinates are given by xpoints. Y coordinates of the target are given as at and we search coeffs so that a14 @ coeffs gives Y coordinates close to the target.
Now, when we talk "area under the graph", my first instinct is piecewise linear. Similar to the trapezoidal rule this gives us the formula sum{(y[i+1]+y[i]) * (x[i+1]-x[i])} / 2.
We can reformulate that a bit to get something of the form (y[0]*(x[1]-x[0]) + y[1]*(x[2]-x[0]) + y[2]*(x[3]-x[1]) + ... + y[n-1]*(x[n-1]-x[n-2])) / 2. We want to precompute the x differences and scaling factor.
weights = 0.5 * np.concatenate(
((xpoints[1] - xpoints[0],),
xpoints[2:] - xpoints[:-2],
(xpoints[-1] - xpoints[-2],)))
These factors are different from the ones OP used. I guess they are integrating a piecewise constant function instead of piecewise linear? Anyway, switching between these two should be simple enough so I leave this here as an alternative.
Loss function
Adapting OP's loss function to my interpretation gives
def loss(coeffs):
absdiff = abs(at - (a14 * coeffs).sum(axis=1))
return (absdiff * weights).sum()
As I've noted in comments, this can be simplified by using matrix vector multiplications.
def loss(coeffs):
return abs(a14 @ coeffs - at) @ weights
This is the same as abs((a14 * weights[:, np.newaxis]) @ coeffs - (at * weights)).sum() which in turn makes it obvious that we are talking about minimizing the L1 norm.
a14w = a14 * weights[:, np.newaxis]
atw = at * weights
def loss(coeffs):
return linalg.norm(a14w @ coeffs - atw, 1)
This isn't a huge improvement as far as computing the loss function goes. However, it goes a long way towards pressing our problem into a regular pattern.
Approximate solution
As noted in comments, the abs() function in the L1 norm is poison for optimization because it is non-differentiable. If we switch to an L2 norm, we basically have a least squares fit with additional constraints. Of course this is somewhat unsound as we start solving a different, if closely related, problem. I suspect the solution would be "good enough"; or it could be the starting solution that is then polished to conform with the actual target.
In any case, we can use scipy.optimize.lsq_linear as a quick and easy solver. However, that one does not support linearity constraints. We can mimic that with a regularization parameter.
def loss_lsqr(coeffs):
return 0.5 * linalg.norm(a14w @ coeffs - atw)**2
grad1 = a14w.T @ a14w
grad2 = a14w.T @ at
def jac_lsqr(coeffs):
return grad1 @ coeffs - grad2
regularization = loss_lsqr(initial_guess) # TODO: Pick better value
a14w_regular = np.append(a14w, np.full((1, 4), regularization), axis=0)
atw_regular = np.append(atw, regularization)
approx = optimize.lsq_linear(a14w_regular, atw_regular, bounds=(0., 1.))
# polish further, maybe?
bounds = ((0., 1.),) * 4
constraint = optimize.LinearConstraint(np.ones(4), 1., 1.)
second_guess = approx.x
second_guess /= linalg.norm(second_guess) # enforce constraint
exact = optimize.minimize(
loss_lsqr, second_guess, jac=jac_lsqr,
bounds=bounds, constraints=constraint)
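Since the true objective is a weighted L1 norm under linear constraints, it can also be solved exactly as a linear program by introducing one slack variable per data point (t_i >= |residual_i|). A sketch with scipy.optimize.linprog, reusing a14w and atw from above:
import numpy as np
from scipy import optimize

n, k = a14w.shape  # 500 points, 4 coefficients
# decision variables: [c_1..c_k, t_1..t_n]; minimize sum(t)
cost = np.concatenate([np.zeros(k), np.ones(n)])
# enforce  a14w @ c - atw <= t   and   -(a14w @ c - atw) <= t
A_ub = np.block([[ a14w, -np.eye(n)],
                 [-a14w, -np.eye(n)]])
b_ub = np.concatenate([atw, -atw])
# coefficients must sum to 1
A_eq = np.concatenate([np.ones(k), np.zeros(n)])[np.newaxis, :]
b_eq = np.array([1.0])
bounds = [(0.0, 1.0)] * k + [(0.0, None)] * n

res = optimize.linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
coeffs = res.x[:k]  # optimal c1..c4 for the weighted L1 objective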
| Fastest possible optimisation of an area difference with a constrained sum | I have four arrays, a1,a2,a3,a4, each of length 500. I have a target array at, also of length 500. These arrays each represent the y coordinates of unevenly spaced points on a graph. I have the x coordinates in a separate array.
I want to optimise the coefficients c1,c2,c3,c4 such that the area difference between c1x1 + c2x2 + c3x3 + c4x4 and at is minimised. The coefficients must sum to 1 and be between 0 and 1 inclusive.
I currently do this using scipy.optimize.minimize. Using scipy.optimize.LinearConstraints, I constrain my target variable (an array [c1,c2,c3,c4]) to sum to 1. The loss function multiplies the target variable by the arrays a1,a2,a3,a4, and then finds the absolute difference between this sum and at. This absolute difference is integrated between each consecutive pair of x coordinates, and the sum of these integrals is outputted.
Here is the code:
xpoint_diffs = xpoints[1:] - xpoints[:-1]
a1a2a3a4 = np.array([a1,a2,a3,a4])
def lossfunc(x):
abs_diff = abs(at - (a1a2a3a4 * x).sum(axis = 1))
return ((absdiff[:-1] + absdiff[1:])/2 * xpoint_diffs).sum()
Is there a faster way to do this?
I then pass my constraint and loss function into scipy.optimize.minimize.
| [
"This isn't a full solution, more like some thoughts that may lead to one. Also, someone please double-check my math.\nVariables and test data\nFirst, let's begin by defining some test data. While doing so, I transpose the a1a2a3a4 matrix because it will prove more conventient. Plus, I'm renaming it to a14 because I cannot be bothered writing the whole thing all the time, sorry.\nimport numpy as np\nfrom numpy import linalg, random\nfrom scipy import optimize\n\na14 = random.uniform(-1., 1., (500, 4)) * (0.25, 0.75, 1., 2.)\nat = random.uniform(-1., 1., 500)\nxpoints = np.cumsum(random.random(500))\ninitial_guess = np.full(4, 0.25)\n\nTarget\nOkay, here is my first slight point of confusion: OP described the target as the \"area difference\" between two graphs. X coordinates are given by xpoints. Y coordinates of the target are given as at and we search coeffs so that a14 @ coeffs gives Y coordinates close to the target.\nNow, when we talk \"area under the graph\", my first instinct is piecewise linear. Similar to the trapezoidal rule this gives us the formula sum{(y[i+1]+y[i]) * (x[i+1]-x[i])} / 2.\nWe can reformulate that a bit to get something of the form (y[0]*(x[1]-x[0]) + y[1]*(x[2]-x[0]) + y[2]*(x[3]-x[1]) + ... + y[n-1]*(x[n-1]-x[n-2])) / 2. We want to precompute the x differences and scaling factor.\nweights = 0.5 * np.concatenate(\n ((xpoints[1] - xpoints[0],),\n xpoints[2:] - xpoints[:-2],\n (xpoints[-1] - xpoints[-2],)))\n\nThese factors are different from the ones OP used. I guess they are integrating a piecewise constant function instead of piecewise linear? Anyway, switching between these two should be simple enough so I leave this here as an alternative.\nLoss function\nAdapting OP's loss function to my interpretation gives\ndef loss(coeffs):\n absdiff = abs(at - (a14 * coeffs).sum(axis=1))\n return (absdiff * weights).sum()\n\nAs I've noted in comments, this can be simplified by using matrix vector multiplications.\ndef loss(coeffs):\n return abs(a14 @ coeffs - at) @ weights\n\nThis is the same as abs((a14 * weights[:, np.newaxis]) @ coeffs - (at * weights)).sum() which in turn makes it obvious that we are talking about minimizing the L1 norm.\na14w = a14 * weights[:, np.newaxis]\natw = at * weights\ndef loss(coeffs):\n return linalg.norm(a14w @ coeffs - atw, 1)\n\nThis isn't a huge improvement as far as computing the loss function goes. However, it goes a long way towards pressing our problem into a regular pattern.\nApproximate solution\nAs noted in comments, the abs() function in the L1 norm is poison for optimization because it is non-differentiable. If we switch to an L2 norm, we basically have a least squares fit with additional constraints. Of course this is somewhat unsound as we start solving a different, if closely related, problem. I suspect the solution would be \"good enough\"; or it could be the starting solution that is then polished to conform with the actual target.\nIn any case, we can use scipy.optimize.lsq_linear as a quick and easy solver. However, that one does not support linearity constraints. 
We can mimic that with a regularization parameter.\ndef loss_lsqr(coeffs):\n return 0.5 * linalg.norm(a14w @ coeffs - atw)**2\n\ngrad1 = a14w.T @ a14w\ngrad2 = a14w.T @ at\ndef jac_lsqr(coeffs):\n return grad1 @ coeffs - grad2\n\nregularization = loss_lsqr(initial_guess) # TODO: Pick better value\na14w_regular = np.append(a14w, np.full((1, 4), regularization), axis=0)\natw_regular = np.append(atw, regularization)\napprox = optimize.lsq_linear(a14w_regular, atw_regular, bounds=(0., 1.))\n\n# polish further, maybe?\nbounds = ((0., 1.),) * 4\nconstraint = optimize.LinearConstraint(np.ones(4), 1., 1.)\nsecond_guess = approx.x\nsecond_guess /= linalg.norm(second_guess) # enforce constraint\nexact = optimize.minimize(\n loss_lsqr, second_guess, jac=jac_lsqr,\n bounds=bounds, constraints=constraint)\n\n"
] | [
2
] | [] | [] | [
"optimization",
"performance",
"python",
"scipy",
"scipy_optimize"
] | stackoverflow_0074613758_optimization_performance_python_scipy_scipy_optimize.txt |
Q:
Find closest point in Pandas DataFrames
I am quite new to Python. I have the following table in Postgres. These are polygon values, with four coordinates sharing the same Id and a Zone name. I have stored this data in a Python dataframe called df1.
Id Order Lat Lon Zone
00001 1 50.6373473 3.075029928 A
00001 2 50.63740441 3.075068636 A
00001 3 50.63744285 3.074951754 A
00001 4 50.63737839 3.074913884 A
00002 1 50.6376054 3.0750528 B
00002 2 50.6375896 3.0751209 B
00002 3 50.6374239 3.0750246 B
00002 4 50.6374404 3.0749554 B
I have JSON data with Lon and Lat values and I have stored them in a Python dataframe called df2.
Lat Lon
50.6375524099 3.07507914474
50.6375714407 3.07508201591
My task is to compare df2 Lat and Lon values with four coordinates of each zone in df1 to extract the zone name and add it to df2.
For instance (50.637552409 3.07507914474) belongs to Zone B.
#This is ID with Zone
df1 = pd.read_sql_query("""SELECT * from "zmap" """,con=engine)
#This is with lat,lon values
df2 = pd.read_sql_query("""SELECT * from "E1" """,con=engine)
df2['latlon'] = zip(df2.lat, df2.lon)
zones = [
["A", [[50.637347297, 3.075029928], [50.637404408, 3.075068636], [50.637442847, 3.074951754],[50.637378390, 3.074913884]]]]
for i in range(0, len(zones)): # for each zone points
X = mplPath.Path(np.array(zones[i][1]))
# find if points are Zones
Y= X.contains_points(df2.latlon.values.tolist())
# Label points that are in the current zone
df2[Y, 'zone'] = zones[i][0]
Currently I have done it manually for Zone 'A'. I need to generate the "Zones" for the coordinates in df2.
A:
This sounds like a good use case for scipy cdist, also discussed here.
import pandas as pd
from scipy.spatial.distance import cdist
data1 = {'Lat': pd.Series([50.6373473,50.63740441,50.63744285,50.63737839,50.6376054,50.6375896,50.6374239,50.6374404]),
'Lon': pd.Series([3.075029928,3.075068636,3.074951754,3.074913884,3.0750528,3.0751209,3.0750246,3.0749554]),
'Zone': pd.Series(['A','A','A','A','B','B','B','B'])}
data2 = {'Lat': pd.Series([50.6375524099,50.6375714407]),
'Lon': pd.Series([3.07507914474,3.07508201591])}
def closest_point(point, points):
""" Find closest point from a list of points. """
return points[cdist([point], points).argmin()]
def match_value(df, col1, x, col2):
""" Match value x from col1 row to value in col2. """
return df[df[col1] == x][col2].values[0]
df1 = pd.DataFrame(data1)
df2 = pd.DataFrame(data2)
df1['point'] = [(x, y) for x,y in zip(df1['Lat'], df1['Lon'])]
df2['point'] = [(x, y) for x,y in zip(df2['Lat'], df2['Lon'])]
df2['closest'] = [closest_point(x, list(df1['point'])) for x in df2['point']]
df2['zone'] = [match_value(df1, 'point', x, 'Zone') for x in df2['closest']]
print(df2)
# Lat Lon point closest zone
# 0 50.637552 3.075079 (50.6375524099, 3.07507914474) (50.6375896, 3.0751209) B
# 1 50.637571 3.075082 (50.6375714407, 3.07508201591) (50.6375896, 3.0751209) B
A:
Note that the current title of the post is Find closest point in Pandas DataFrames, but OP's attempt shows that they are looking for the zone within which a point is found.
It is possible to leverage the geopandas library to do this operation elegantly & efficiently.
Convert the DataFrame into a GeoDataFrame.
Then aggregate the points in df1 to create a polygon. The aggregation operation is called dissolve.
Finally, use a spatial join sjoin with a predicate such that points in df2 are covered by the polygon representing a Zone in zones, and output the Lat, Lon and Zone columns.
# set up
import pandas as pd
import geopandas as gpd
df1 = pd.DataFrame({
'Id': [1, 1, 1, 1, 2, 2, 2, 2],
'Order': [1, 2, 3, 4, 1, 2, 3, 4],
'Lat': [50.6373473, 50.63740441, 50.63744285, 50.63737839, 50.6376054, 50.6375896, 50.6374239, 50.6374404],
'Lon': [3.075029928, 3.075068636, 3.074951754, 3.074913884, 3.0750528, 3.0751209, 3.0750246, 3.0749554],
'Zone': ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B']
})
df2 = pd.DataFrame({
'Lat': [50.6375524099, 50.6375714407],
'Lon': [3.07507914474, 3.07508201591]
})
# convert to GeoDataFrame
df1 = gpd.GeoDataFrame(df1, geometry=gpd.points_from_xy(df1.Lon, df1.Lat))
df2 = gpd.GeoDataFrame(df2, geometry=gpd.points_from_xy(df2.Lon, df2.Lat))
# aggregate & merge
zones = df1.dissolve(by='Zone').convex_hull.rename('geometry').reset_index()
merged = df2.sjoin(zones, how='left', predicate='covered_by')
# output
output_columns = ['Lat', 'Lon', 'Zone']
merged[output_columns]
this outputs:
Lat Lon Zone
0 50.637552 3.075079 B
1 50.637571 3.075082 B
| Find closest point in Pandas DataFrames | I am quite new to Python. I have the following table in Postgres. These are polygon values, with four coordinates sharing the same Id and a Zone name. I have stored this data in a Python dataframe called df1.
Id Order Lat Lon Zone
00001 1 50.6373473 3.075029928 A
00001 2 50.63740441 3.075068636 A
00001 3 50.63744285 3.074951754 A
00001 4 50.63737839 3.074913884 A
00002 1 50.6376054 3.0750528 B
00002 2 50.6375896 3.0751209 B
00002 3 50.6374239 3.0750246 B
00002 4 50.6374404 3.0749554 B
I have JSON data with Lon and Lat values and I have stored them in a Python dataframe called df2.
Lat Lon
50.6375524099 3.07507914474
50.6375714407 3.07508201591
My task is to compare df2 Lat and Lon values with four coordinates of each zone in df1 to extract the zone name and add it to df2.
For instance (50.637552409 3.07507914474) belongs to Zone B.
#This is ID with Zone
df1 = pd.read_sql_query("""SELECT * from "zmap" """,con=engine)
#This is with lat,lon values
df2 = pd.read_sql_query("""SELECT * from "E1" """,con=engine)
df2['latlon'] = zip(df2.lat, df2.lon)
zones = [
["A", [[50.637347297, 3.075029928], [50.637404408, 3.075068636], [50.637442847, 3.074951754],[50.637378390, 3.074913884]]]]
for i in range(0, len(zones)): # for each zone points
X = mplPath.Path(np.array(zones[i][1]))
# find if points are Zones
Y= X.contains_points(df2.latlon.values.tolist())
# Label points that are in the current zone
df2[Y, 'zone'] = zones[i][0]
Currently I have done it manually for Zone 'A'. I need to generate the "Zones" for the coordinates in df2.
| [
"This sounds like a good use case for scipy cdist, also discussed here.\nimport pandas as pd\nfrom scipy.spatial.distance import cdist\n\n\ndata1 = {'Lat': pd.Series([50.6373473,50.63740441,50.63744285,50.63737839,50.6376054,50.6375896,50.6374239,50.6374404]),\n 'Lon': pd.Series([3.075029928,3.075068636,3.074951754,3.074913884,3.0750528,3.0751209,3.0750246,3.0749554]),\n 'Zone': pd.Series(['A','A','A','A','B','B','B','B'])}\n\ndata2 = {'Lat': pd.Series([50.6375524099,50.6375714407]),\n 'Lon': pd.Series([3.07507914474,3.07508201591])}\n\n\ndef closest_point(point, points):\n \"\"\" Find closest point from a list of points. \"\"\"\n return points[cdist([point], points).argmin()]\n\ndef match_value(df, col1, x, col2):\n \"\"\" Match value x from col1 row to value in col2. \"\"\"\n return df[df[col1] == x][col2].values[0]\n\n\ndf1 = pd.DataFrame(data1)\ndf2 = pd.DataFrame(data2)\n\ndf1['point'] = [(x, y) for x,y in zip(df1['Lat'], df1['Lon'])]\ndf2['point'] = [(x, y) for x,y in zip(df2['Lat'], df2['Lon'])]\n\ndf2['closest'] = [closest_point(x, list(df1['point'])) for x in df2['point']]\ndf2['zone'] = [match_value(df1, 'point', x, 'Zone') for x in df2['closest']]\n\nprint(df2)\n# Lat Lon point closest zone\n# 0 50.637552 3.075079 (50.6375524099, 3.07507914474) (50.6375896, 3.0751209) B\n# 1 50.637571 3.075082 (50.6375714407, 3.07508201591) (50.6375896, 3.0751209) B\n\n",
"note that the current title of the post Find closest point in Pandas DataFrames but OP's attempt shows that they are looking for the zone within which a point is found.\nIt is possible to leverage the geopandas library to do this operation elegantly & efficiently.\nConvert the DataFrame into a GeoDataFrame.\nThen aggregate the points in df1 to create a polygon. The aggregation operation is called dissolve.\nFinally, use a spatial join sjoin with the predicate such that points in df2 are covered by the polygon representing a Zone in zones and output the Lat, Lon&Zone` columns.\n# set up\nimport pandas as pd\nimport geopandas as gpd\n\ndf1 = pd.DataFrame({\n 'Id': [1, 1, 1, 1, 2, 2, 2, 2],\n 'Order': [1, 2, 3, 4, 1, 2, 3, 4],\n 'Lat': [50.6373473, 50.63740441, 50.63744285, 50.63737839, 50.6376054, 50.6375896, 50.6374239, 50.6374404], \n 'Lon': [3.075029928, 3.075068636, 3.074951754, 3.074913884, 3.0750528, 3.0751209, 3.0750246, 3.0749554],\n 'Zone': ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B']\n})\n\ndf2 = pd.DataFrame({\n 'Lat': [50.6375524099, 50.6375714407],\n 'Lon': [3.07507914474, 3.07508201591] \n})\n\n# convert to GeoDataFrame\ndf1 = gpd.GeoDataFrame(df1, geometry=gpd.points_from_xy(df1.Lon, df1.Lat))\ndf2 = gpd.GeoDataFrame(df2, geometry=gpd.points_from_xy(df2.Lon, df2.Lat))\n\n# aggregate & merge\nzones = df1.dissolve(by='Zone').convex_hull.rename('geometry').reset_index()\nmerged = df2.sjoin(zones, how='left', predicate='covered_by')\n\n# output\noutput_columns = ['Lat', 'Lon', 'Zone']\nmerged[output_columns]\n\nthis outputs:\n Lat Lon Zone\n0 50.637552 3.075079 B\n1 50.637571 3.075082 B\n\n"
] | [
14,
0
] | [] | [] | [
"pandas",
"postgresql",
"python"
] | stackoverflow_0038965720_pandas_postgresql_python.txt |
Q:
how to save a pandas DataFrame to an excel file?
I am trying to load data from a web source and save it as an Excel file, but I am not sure how to do it. What should I do?
import requests
import pandas as pd
import xmltodict
url = "https://www.kstan.ua/sitemap.xml"
res = requests.get(url)
raw = xmltodict.parse(res.text)
data = [[r["loc"], r["lastmod"]] for r in raw["urlset"]["url"]]
print("Number of sitemaps:", len(data))
df = pd.DataFrame(data, columns=["links", "lastmod"])
A:
df.to_csv("output.csv", index=False)
OR
df.to_excel("output.xlsx")
A:
You can write the dataframe to excel using the pandas ExcelWriter, such as this:
import pandas as pd
with pd.ExcelWriter('path_to_file.xlsx') as writer:
dataframe.to_excel(writer)
A:
If you want to create multiple sheets in the same file
with pd.ExcelWriter('csv_s/results.xlsx') as writer:
same_res.to_excel(writer, sheet_name='same')
diff_res.to_excel(writer, sheet_name='sheet2')
| how to save a pandas DataFrame to an excel file? | I am trying to load data from a web source and save it as an Excel file, but I am not sure how to do it. What should I do?
import requests
import pandas as pd
import xmltodict
url = "https://www.kstan.ua/sitemap.xml"
res = requests.get(url)
raw = xmltodict.parse(res.text)
data = [[r["loc"], r["lastmod"]] for r in raw["urlset"]["url"]]
print("Number of sitemaps:", len(data))
df = pd.DataFrame(data, columns=["links", "lastmod"])
| [
"df.to_csv(\"output.csv\", index=False)\n\nOR\ndf.to_excel(\"output.xlsx\")\n\n",
"You can write the dataframe to excel using the pandas ExcelWriter, such as this:\nimport pandas as pd\nwith pd.ExcelWriter('path_to_file.xlsx') as writer:\n dataframe.to_excel(writer)\n\n",
"If you want to create multiple sheets in the same file\nwith pd.ExcelWriter('csv_s/results.xlsx') as writer:\n same_res.to_excel(writer, sheet_name='same')\n diff_res.to_excel(writer, sheet_name='sheet2')\n\n"
] | [
21,
3,
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0055170300_pandas_python.txt |
Q:
error:'_UserObject' object has no attribute 'predict'
I am building an ANN model for machine learning on training data. When I call the model to validate the test data, an error occurs.
model = Sequential()
model.add(Dense(8,activation='tanh',input_dim = 10))
model.add(Dense(6,activation='tanh'))
model.add(Dense(4,activation='softmax'))
model.summary()
from tensorflow.keras.models import Sequential, save_model, load_model
filepath = './input/saved_model'
save_model(model, filepath)
test = pd.read_csv('test.csv')
when I process the code below, an error message appears
predictions = model.predict(test)
AttributeError Traceback (most recent call last)
<ipython-input-141-82c4f2e9fa53> in <module>()
----> 1 predictions = model.predict(test)
AttributeError: '_UserObject' object has no attribute 'predict'
A:
I solved this issue by using the older keras h5 format:
h5-format
Simply load and save the model with the .h5 extension:
model.save('model_name.h5')
loaded_model = keras.models.load_model('model_name.h5')
A:
For me this happened when I trained a network on computer1 and tried to predict using it on computer2. Might be related to differing tensorflow or h5py versions.
A:
You have saved your model using the save_model method of tensorflow.keras.models. By default this saves it in TensorFlow's SavedModel format.
When you load the model back using the tensorflow.keras.models.load_model method, you can use the predict method of the model.
model = tf.keras.models.load_model(<saved_model_folder>)
predictions = model.predict(input_tensor)
But if you try to load the same model, which was saved using the tf.keras.models.save_model, using tf.saved_model.load you have to do it in this way:
model = tf.saved_model.load(<saved_model_folder>)
predictions = model(input_tensor) # Notice predict is not used.
This is written in the Note section of https://www.tensorflow.org/api_docs/python/tf/keras/Model#call.
If you want an inference (predict) function, you can do it as follows:
model_loaded = tf.keras.models.load_model(<saved_model_folder>)
DEFAULT_FUNCTION_KEY = 'serving_default'
predict_func = model_loaded.signatures[DEFAULT_FUNCTION_KEY]
for batch in predict_dataset.take(1):
print(predict_func(batch))
I think you are using keras API to save and saved_model API to load the model and hence the issue.
| error:'_UserObject' object has no attribute 'predict' | I am building an ANN model for machine learning on training data. When I call the model to validate the test data, an error occurs.
model = Sequential()
model.add(Dense(8,activation='tanh',input_dim = 10))
model.add(Dense(6,activation='tanh'))
model.add(Dense(4,activation='softmax'))
model.summary()
from tensorflow.keras.models import Sequential, save_model, load_model
filepath = './input/saved_model'
save_model(model, filepath)
test = pd.read_csv('test.csv')
enter code here
when I process the code below, an error message appears
predictions = model.predict(test)
AttributeError Traceback (most recent call last)
<ipython-input-141-82c4f2e9fa53> in <module>()
----> 1 predictions = model.predict(test)
AttributeError: '_UserObject' object has no attribute 'predict'
| [
"I solved this issue by using the older keras h5 format:\nh5-format\nSimply load and save the model with the .h5 extension:\nmodel.save('model_name.h5')\nloaded_model = keras.model.load_model('model_name.h5')\n\n",
"For me this happened when I trained a network on computer1 and tried to predict using it on computer2. Might be related to differing tensorflow of h5py versions\n",
"You have saved your model using .save_model method of the tensorflow.keras.models. This by default saves it in SavedModel format of tensorflow.\nWhen you load back the model using tensorflow.keras.models.load_model method, you can use the predict method of the model.\nmodel = tf.keras.models.load_model(<saved_model_folder>)\npredictions = model.predict(input_tensor)\n\nBut if you try to load the same model, which was saved using the tf.keras.models.save_model, using tf.saved_model.load you have to do it in this way:\nmodel = tf.saved_model.load(<saved_model_folder>)\npredictions = model(input_tensor) # Notice predict is not used.\n\nThe is written in the Note section of https://www.tensorflow.org/api_docs/python/tf/keras/Model#call.\nIf you want inference(predict) function, you can do it as follows:\nmodel_loaded = tf.keras.models.load_model(<saved_model_folder>)\nDEFAULT_FUNCTION_KEY = 'serving_default'\npredict_func = model_loaded.signatures[DEFAULT_FUNCTION_KEY]\n\nfor batch in predict_dataset.take(1):\n print(predict_func(batch))\n\nI think you are using keras API to save and saved_model API to load the model and hence the issue.\n"
] | [
4,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0068173923_python.txt |
Q:
Pandas lambda function syntax error (with dictionary)
I use lambda functions in Python a lot. All of a sudden, I cannot figure out why there is a syntax error message for this:
table['sp1 name'] = table['sp1'].apply(lambda x: sp1_new_dict[x] if x in sp1_new_dict.keys())
Any ideas?
Thanks!
A:
You need an else. Boiling down your error:
x = 1 if True
File "<stdin>", line 1
x = 1 if True
^
SyntaxError: invalid syntax
# No error here
x = 1 if True else 2
Since you are using a dictionary, maybe use dict.get:
table['sp1 name'] = table['sp1'].apply(lambda x: sp1_new_dict.get(x))
Which returns None if the key isn't present
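If you would rather fall back to the original value when the key is missing (just one possible choice of default), dict.get also takes a second argument:
table['sp1 name'] = table['sp1'].apply(lambda x: sp1_new_dict.get(x, x))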
| Pandas lambda function syntax error (with dictionary) | I use lambda functions in Python a lot. All of a sudden, I cannot figure out why there is a syntax error message for this:
table['sp1 name'] = table['sp1'].apply(lambda x: sp1_new_dict[x] if x in sp1_new_dict.keys())
Any ideas?
Thanks!
| [
"You need an else. Boiling down your error:\nx = 1 if True\n\n File \"<stdin>\", line 1\n x = 1 if True\n ^\nSyntaxError: invalid syntax\n\n\n# No error here\nx = 1 if True else 2\n\nSince you are using a dictionary, maybe use dict.get:\ntable['sp1 name'] = table['sp1'].apply(lambda x: sp1_new_dict.get(x))\n\nWhich returns None if the key isn't present\n"
] | [
2
] | [] | [] | [
"dictionary",
"lambda",
"pandas",
"python"
] | stackoverflow_0074617658_dictionary_lambda_pandas_python.txt |
Q:
How do I add an assembled field to a Pydantic model
Say I have a model
class UserDB(BaseModel):
first_name: Optional[str] = None
last_name: Optional[str] = None
How do I make another model that is constructed from this one and has a field that changes based on the fields in this model?
For instance, something like this
class User(BaseModel):
full_name: str = first_name + ' ' + last_name
Constructed like this maybe
User.parse_obj(UserDB)
Thanks!
A:
If you do not want to keep first_name and last_name in User then you can
customize __init__.
use validator for setting full_name.
Both methods do what you want:
from typing import Optional
from pydantic import BaseModel, validator
class UserDB(BaseModel):
first_name: Optional[str] = None
last_name: Optional[str] = None
class User_1(BaseModel):
location: str # for a change
full_name: Optional[str] = None
def __init__(self, user_db: UserDB, **data):
super().__init__(full_name=f"{user_db.first_name} {user_db.last_name}", **data)
user_db = UserDB(first_name="John", last_name="Stark")
user = User_1(user_db, location="Mars")
print(user)
class User_2(BaseModel):
first_name: Optional[str] = None
last_name: Optional[str] = None
full_name: Optional[str] = None
@validator('full_name', always=True)
def ab(cls, v, values) -> str:
return f"{values['first_name']} {values['last_name']}"
user = User_2(**user_db.dict())
print(user)
output
location='Mars' full_name='John Stark'
first_name='John' last_name='Stark' full_name='John Stark'
UPDATE:
For working with response_model you can customize __init__ in such way:
class User_1(BaseModel):
location: str # for a change
full_name: Optional[str] = None
# def __init__(self, user_db: UserDB, **data):
def __init__(self, first_name, last_name, **data):
super().__init__(full_name=f"{first_name} {last_name}", **data)
user_db = UserDB(first_name="John", last_name="Stark")
user = User_1(**user_db.dict(), location="Mars")
print(user)
A:
I created a pip package that seems to do exactly what you need. Here is the link: https://pypi.org/project/pydantic-computed/
Your example would then look like this:
from pydantic import BaseModel
from pydantic_computed import Computed, computed
class UserDB(BaseModel):
first_name: Optional[str] = None
last_name: Optional[str] = None
class User(UserDB):
full_name: Computed[str]
@computed('full_name')
def compute_full_name(first_name: str, last_name: str):
return first_name + ' ' + last_name
# parsing also works as normal:
user_db = UserDB(first_name='John', last_name='Doe')
user = User.parse_obj(user_db)
print(user.full_name) # Outputs "John Doe"
This will also work for response_model (e.g. in FastAPI) since the computed value is actually set on the full_name property.
| How do I add an assembled field to a Pydantic model | Say I have a model
class UserDB(BaseModel):
first_name: Optional[str] = None
last_name: Optional[str] = None
How do I make another model that is constructed from this one and has a field that changes based on the fields in this model?
For instance, something like this
class User(BaseModel):
full_name: str = first_name + ' ' + last_name
Constructed like this maybe
User.parse_obj(UserDB)
Thanks!
| [
"If you do not want to keep first_name and last_name in User then you can\n\ncustomize __init__.\nuse validator for setting full_name.\n\nBoth methods do what you want:\nfrom typing import Optional\nfrom pydantic import BaseModel, validator\n\n\nclass UserDB(BaseModel):\n first_name: Optional[str] = None\n last_name: Optional[str] = None\n\n\nclass User_1(BaseModel):\n location: str # for a change\n full_name: Optional[str] = None\n\n def __init__(self, user_db: UserDB, **data):\n super().__init__(full_name=f\"{user_db.first_name} {user_db.last_name}\", **data)\n\n\nuser_db = UserDB(first_name=\"John\", last_name=\"Stark\")\nuser = User_1(user_db, location=\"Mars\")\nprint(user)\n\n\nclass User_2(BaseModel):\n first_name: Optional[str] = None\n last_name: Optional[str] = None\n full_name: Optional[str] = None\n\n @validator('full_name', always=True)\n def ab(cls, v, values) -> str:\n return f\"{values['first_name']} {values['last_name']}\"\n\n\nuser = User_2(**user_db.dict())\nprint(user)\n\noutput\nlocation='Mars' full_name='John Stark'\nfirst_name='John' last_name='Stark' full_name='John Stark'\n\nUPDATE:\nFor working with response_model you can customize __init__ in such way:\nclass User_1(BaseModel):\n location: str # for a change\n full_name: Optional[str] = None\n\n # def __init__(self, user_db: UserDB, **data):\n def __init__(self, first_name, last_name, **data):\n super().__init__(full_name=f\"{first_name} {last_name}\", **data)\n\n\nuser_db = UserDB(first_name=\"John\", last_name=\"Stark\")\nuser = User_1(**user_db.dict(), location=\"Mars\")\nprint(user)\n\n",
"I created a pip package that seems to do exactly what you need. Here is the link: https://pypi.org/project/pydantic-computed/\nYour example would then look like this:\nfrom pydantic import BaseModel\nfrom pydantic_computed import Computed, computed\n\nclass UserDB(BaseModel):\n first_name: Optional[str] = None\n last_name: Optional[str] = None\n\nclass User(UserDB):\n full_name: Computed[str]\n\n @computed('full_name')\n def compute_full_name(first_name: str, last_name: str):\n return first_name + ' ' + last_name\n\n\n# parsing also works as normal:\nuser_db = UserDB(first_name='John', last_name='Doe')\nuser = User.parse_obj(user_db)\nprint(user.full_name) # Outputs \"John Doe\"\n\n\nThis will also work for response_model (e.g. in FastAPI) since the computed value is actually set on the full_name property.\n"
] | [
16,
0
] | [] | [] | [
"fastapi",
"pydantic",
"python"
] | stackoverflow_0063492123_fastapi_pydantic_python.txt |
Q:
Changing the format of a column of data in a Pandas Series
I would like to change the format of the returned dates from this:
listOfDates = df2['TradeDate'].drop_duplicates().reset_index(drop=True)
print(listOfDates)
0 2022-02-02
1 2022-02-08
2 2022-05-01
3 2022-05-06
4 2022-06-05
5 2022-06-17
6 2022-07-30
7 2022-08-03
8 2022-10-10
9 2022-11-18
Name: TradeDate, dtype: datetime64[ns]
to this:
listOfDates = df2['TradeDate'].drop_duplicates().reset_index(drop=True)
print(listOfDates)
0 20220202
1 20220208
2 20220501
3 20220506
4 20220605
5 20220617
6 20220730
7 20220803
8 20221010
9 20221118
Name: TradeDate, dtype: datetime64[ns]
I attempted the below, but to no avail as a pandas series has no attribute 'strftime':
listOfDates = df2['TradeDate'].drop_duplicates().reset_index(drop=True).strftime("%Y%m%d")
Any suggestions greatly appreciated, thanks!
A:
You need to use the datetime accessor dt like this:
listOfDates = df2['TradeDate'].drop_duplicates().reset_index(drop=True).dt.strftime("%Y%m%d")
Documented here
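Note that strftime returns strings, so the resulting Series will have dtype object rather than datetime64[ns]. If you need plain integers instead (assuming that is the goal), you could append a cast:
listOfDates = df2['TradeDate'].drop_duplicates().reset_index(drop=True).dt.strftime("%Y%m%d").astype(int)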
| Changing the format of a column of data in a Pandas Series | I would like to change the format of the returned dates from this:
listOfDates = df2['TradeDate'].drop_duplicates().reset_index(drop=True)
print(listOfDates)
0 2022-02-02
1 2022-02-08
2 2022-05-01
3 2022-05-06
4 2022-06-05
5 2022-06-17
6 2022-07-30
7 2022-08-03
8 2022-10-10
9 2022-11-18
Name: TradeDate, dtype: datetime64[ns]
to this:
listOfDates = df2['TradeDate'].drop_duplicates().reset_index(drop=True)
print(listOfDates)
0 20220202
1 20220208
2 20220501
3 20220506
4 20220605
5 20220617
6 20220730
7 20220803
8 20221010
9 20221118
Name: TradeDate, dtype: datetime64[ns]
I attempted the below, but to no avail as a pandas series has no attribute 'strftime':
listOfDates = df2['TradeDate'].drop_duplicates().reset_index(drop=True).strftime("%Y%m%d")
Any suggestions greatly appreciated, thanks!
| [
"You need to use the datetime accessor dt like this:\nlistOfDates = df2['TradeDate'].drop_duplicates().reset_index(drop=True).dt.strftime(\"%Y%m%d\")\n\nDocumented here\n"
] | [
1
] | [] | [] | [
"pandas",
"python",
"series"
] | stackoverflow_0074617681_pandas_python_series.txt |
Q:
Create a custom column in a dict format
I currently have a dataframe like this
Person
Analysis
Dexterity
Skills
174
3.76
4.12
1.20
239
4.10
3.78
3.77
557
5.00
2.00
4.40
674
2.23
2.40
2.80
122
3.33
4.80
4.10
I want to add a column to compile all this information like below
Person
Analysis
Dexterity
Skills
new_column
174
3.76
4.12
1.20
{"Analysis":"3.76", "Dexterity":"4.12", "Skills":"1.20"}
239
4.10
3.78
3.77
{"Analysis":"4.10", "Dexterity":"3.78", "Skills":"3.77"}
557
5.00
2.00
4.40
{"Analysis":"5.00", "Dexterity":"2.00", "Skills":"4.40"}
674
2.23
2.40
2.80
{"Analysis":"2.23", "Dexterity":"2.40", "Skills":"2.80"}
122
3.33
4.80
4.10
{"Analysis":"3.33", "Dexterity":"4.80", "Skills":"4.10"}
A:
You can use the to_dict method like so:
import pandas as pd
rows = [
{'a': 1, 'b': 2},
{'a': 3, 'b': 4},
]
df = pd.DataFrame(rows)
# define new column as the json format of another
# also convert to str as that is what you have in your output
df['c'] = df[['a', 'b']].astype(str).to_dict(orient='records')
to_dict has a really nice interface for transforming to JSON format or to other object types. In this instance you are looking for orient='records' which is a list of dictionaries.
In your case, you will use:
df['new_column'] = df[
['Analysis', 'Dexterity', 'Skills']
].astype(str).to_dict(orient='records')
| Create a custom column in a dict format | I currently have a dataframe like this
Person
Analysis
Dexterity
Skills
174
3.76
4.12
1.20
239
4.10
3.78
3.77
557
5.00
2.00
4.40
674
2.23
2.40
2.80
122
3.33
4.80
4.10
I want to add a column to compile all this information like below
Person
Analysis
Dexterity
Skills
new_column
174
3.76
4.12
1.20
{"Analysis":"3.76", "Dexterity":"4.12", "Skills":"1.20"}
239
4.10
3.78
3.77
{"Analysis":"4.10", "Dexterity":"3.78", "Skills":"3.77"}
557
5.00
2.00
4.40
{"Analysis":"5.00", "Dexterity":"2.00", "Skills":"4.40"}
674
2.23
2.40
2.80
{"Analysis":"2.23", "Dexterity":"2.40", "Skills":"2.80"}
122
3.33
4.80
4.10
{"Analysis":"3.33", "Dexterity":"4.80", "Skills":"4.10"}
| [
"You can use to to_dict method like so:\nimport pandas as pd\n\nrows = [\n {'a': 1, 'b': 2},\n {'a': 3, 'b': 4},\n]\n\ndf = pd.DataFrame(rows)\n\n# define new column as the json format of another\n# also convert to str as that is what you have in your output\ndf['c'] = df[['a', 'b']].astype(str).to_dict(orient='records')\n\nto_dict has a really nice interface for transforming to JSON format or to other object types. In this instance you are looking for orient='records' which is a list of dictionaries.\nIn your case, you will use:\ndf['new_column'] = df[\n ['Analysis', 'Dexterity', 'Skills']\n].astype(str).to_dict(orient='records')\n\n"
] | [
4
] | [] | [] | [
"data_science",
"dataframe",
"pandas",
"python"
] | stackoverflow_0074617602_data_science_dataframe_pandas_python.txt |
Q:
How do you shuffle functions using python to then call the shuffled result?
I am trying to make functions which would print separate questions. I then want the functions to be put into a list and shuffled with the shuffled functions being called in their shuffled order.
I tried to change the functions into variables and then put the variables in a list. I then tried to use random.shuffle()
to then shuffle the list and then print the shuffled list by using return(questionlist[0]). However, it simply returned an error message.
My code looked a little like this:
import random
def Question1():
print("Which one of these is a safe password?")
def Question2():
print("What can you do to get less spam?")
def Question3():
print("What term describes the act of annoying someone online?")
def questionfunction():
q1 = Question1
q2 = Question2
q3 = Question3
questionlist = [q1,q2,q3]
random.shuffle(questionlist)
return(questionlist[0])
return(questionlist[1])
return(questionlist[2])
questionfunction()
A:
You have to actually call the functions. Further, you don't want to return until you actually call all three. You also don't need any additional variables; Question1 is already a variable that refers to the function originally bound to it.
def questionfunction():
questionlist = [Question1, Question2, Question3]
random.shuffle(questionlist)
questionlist[0]()
questionlist[1]()
questionlist[2]()
You could, alternatively, just return the shuffled list and let the caller call the functions.
def questionfunction():
questionlist = [Question1, Question2, Question3]
random.shuffle(questionlist)
return questionlist
for q in questionfunction():
q()
| How do you shuffle functions using python to then call the shuffled result? | I am trying to make functions which would print separate questions. I then want the functions to be put into a list and shuffled with the shuffled functions being called in their shuffled order.
I tried to change the functions into variables and then put the variables in a list. I then tried to use random.shuffle()
to then shuffle the list and then print the shuffled list by using return(questionlist[0]). However, it simply returned an error message.
My code looked a little like this:
import random
def Question1():
print("Which one of these is a safe password?")
def Question2():
print("What can you do to get less spam?")
def Question3():
print("What term describes the act of annoying someone online?")
def questionfunction():
q1 = Question1
q2 = Question2
q3 = Question3
questionlist = [q1,q2,q3]
random.shuffle(questionlist)
return(questionlist[0])
return(questionlist[1])
return(questionlist[2])
questionfunction()
| [
"You have to actually call the functions. Further, you don't want to return until you actually call all three. You also don't need any additional variables; Question1 is already a variable that refers to the function originally bound to it.\ndef questionfunction():\n questionlist = [Question1, Question2, Question3]\n random.shuffle(questionlist)\n questionlist[0]()\n questionlist[1]()\n questionlist[2]()\n\nYou could, alternatively, just return the shuffled list and let the caller call the functions.\ndef questionfunction():\n questionlist = [Question1, Question2, Question3]\n random.shuffle(questionlist)\n return questionlist\n\nfor q in questionfunction():\n q()\n\n"
] | [
0
] | [] | [] | [
"python",
"python_3.x",
"random"
] | stackoverflow_0074617688_python_python_3.x_random.txt |
Q:
Removing charaters from string in a list of list
I have a list, lst =[['ABC, CN', 'X'], ['DCS, BA', 'X'], ['SCS, TW', 'X'], ['SFA, GW', 'X']]. Want to remove the last 4 charaters in the in the first part of ever string? Is this possible?
Wanted outcome would be:
newlist = [['ABC', 'X'], ['DCS', 'X'], ['SCS', 'X'], ['SFA', 'X']]
I've tried:
newlist= [sub[:][: -1] for sub in list]
But this gives me:
newlist = [['ABC'], ['DCS'], ['SCS'], ['SFA']]
A:
You can use unpacking when iterate over elements of the list.
>>> [[a[:-4], b] for (a, b) in lst]
[['ABC', 'X'], ['DCS', 'X'], ['SCS', 'X'], ['SFA', 'X']]
| Removing charaters from string in a list of list | I have a list, lst =[['ABC, CN', 'X'], ['DCS, BA', 'X'], ['SCS, TW', 'X'], ['SFA, GW', 'X']]. Want to remove the last 4 charaters in the in the first part of ever string? Is this possible?
Wanted outcome would be:
newlist = [['ABC', 'X'], ['DCS', 'X'], ['SCS', 'X'], ['SFA', 'X']]
I've tried:
newlist= [sub[:][: -1] for sub in list]
But this gives me:
newlist = [['ABC'], ['DCS'], ['SCS'], ['SFA']]
| [
"You can use unpacking when iterate over elements of the list.\n>>> [[a[:-4], b] for (a, b) in lst]\n[['ABC', 'X'], ['DCS', 'X'], ['SCS', 'X'], ['SFA', 'X']]\n\n"
] | [
1
] | [] | [] | [
"list",
"python"
] | stackoverflow_0074617685_list_python.txt |
Q:
How to write uwsgi ini file equivalent to a uwsgi command
I am testing an application in uWsgi server using the command,
uwsgi --http :9090 --wsgi-file myapp.py --callable app --processes 4 --threads 2 --stats 127.0.0.1:9191
That starts the application on port 9090. I want to write a .ini file for this, but I am stuck on the --http :9090 part. How will it be written in the ini file? So far my uwsgi.ini file looks like this,
[uwsgi]
wsgi-file = myapp.py
callable = app
processes = 4
threads = 2
stats = 127.0.0.1:9191
A:
Configuration directives and command line options are managed by the same parser: https://uwsgi-docs.readthedocs.org/en/latest/Configuration.html
In the case of the ini format you only need to remove the double dashes before the option.
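For the command in the question, that gives roughly:
[uwsgi]
http = :9090
wsgi-file = myapp.py
callable = app
processes = 4
threads = 2
stats = 127.0.0.1:9191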
A:
uwsgi.ini
stats = :1717 --stats-http
Documentation: http://uwsgi-docs.readthedocs.org/en/latest/StatsServer.html
A:
This worked for me:
stats = 0.0.0.0:6000
stats-http = true
| How to write uwsgi ini file equivalent to a uwsgi command | I am testing an application in uWsgi server using the command,
uwsgi --http :9090 --wsgi-file myapp.py --callable app --processes 4 --threads 2 --stats 127.0.0.1:9191
That starts the application on port 9090. I want to write a .ini file for this, but I am stuck on the --http :9090 part. How will it be written in the ini file? So far my uwsgi.ini file looks like this,
[uwsgi]
wsgi-file = myapp.py
callable = app
processes = 4
threads = 2
stats = 127.0.0.1:9191
| [
"Configuration directives and command line options are managed by the same parser: https://uwsgi-docs.readthedocs.org/en/latest/Configuration.html\nIn the case of the ini format you only need to remove the double dashes before the option.\n",
"uwisgi.ini \nstats = :1717 --stats-http\n\nDocumentation: http://uwsgi-docs.readthedocs.org/en/latest/StatsServer.html\n",
"This worked for me:\nstats = 0.0.0.0:6000\nstats-http = true\n\n"
] | [
4,
4,
0
] | [] | [] | [
"ini",
"python",
"uwsgi"
] | stackoverflow_0026302562_ini_python_uwsgi.txt |
Q:
How to remove the UnboundLocalError in Python while calculating profit from the values given in a dictionary?
I have a dictionary dt which consists of cost price, selling price and the inventory. The purpose of the code is to calculate the Profit. Profit can be calculated by
Profit = Total selling price - Total cost price. For example, the following is the input
profit({
"cost_price": 32.67,
"sell_price": 45.00,
"inventory": 1200
})
And its output is 14796. To calculate the individual totals, the formulas are Total cost price = inventory * cost_price and Total selling price = inventory * sell_price. Below is my code and the error.
class Solution(object):
def total_profit(self, di):
global total_selling_price
global total_cost_price
for k, v in enumerate(di):
if k == 'cost_price':
cp = di[v]
elif k == 'inventory':
inventory = di[v]
total_cost_price = cp * inventory
else:
sp = di[v]
total_selling_price = sp * inventory
profit = total_selling_price - total_cost_price
return profit
if __name__ == '__main__':
p = Solution()
dt = {"cost_price": 2.77,
"sell_price": 7.95,
"inventory": 8500}
print(p.total_profit(dt))
Error shown is as follows
Traceback (most recent call last):
File "/Users/tejas/PycharmProjects/LeetcodeinPython/EdbatsQuestions/Profit.py", line 27, in <module>
print(p.total_profit(dt))
File "/Users/tejas/PycharmProjects/LeetcodeinPython/EdbatsQuestions/Profit.py", line 15, in total_profit
total_selling_price = sp * inventory
UnboundLocalError: local variable 'inventory' referenced before assignment
A:
When extracting values from the dictionary you need to validate their existence. You may even want to check the data type(s).
There is no need for global variables in this case.
The main method might as well be static as it does not depend upon any other attributes of the Solution class.
Something like this will suffice and be more robust:
class Solution:
@staticmethod
def total_profit(di):
cp = di.get('cost_price')
sp = di.get('sell_price')
iv = di.get('inventory')
if cp and sp and iv:
return (sp - cp) * iv
# implicit return of None if any keys unavailable
dt = {"cost_price": 2.77,
"sell_price": 7.95,
"inventory": 8500}
print(Solution.total_profit(dt))
Output:
44030.0
A:
In your first "elif" block indeed the inventory variable is unbound (it hasn't been defined before). The line earlier you just checked if k is equal to string "inventory", but as explained below it never is in your case.
TLDR;
Ain't somebody learning for a interview huh? :D
By default if you loop over a dictionary you are looping over its keys.
Enumerate yields idx of an iterable and its value. So you are mixing stuff here.
This (compare with your case please):
x = {"a":5, "b":77}
for k,v in enumerate(x):
print(k,v)
Will print for you:
0, "a"
1, "b"
Rather not what you are aiming at. So iterate using .items(), which will give you pairs of key, value on each loop iteration.
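For example, adapted to the dictionary from the question (collecting all three values first, since the loop order does not have to match your branches):
for k, v in di.items():
    if k == 'cost_price':
        cp = v
    elif k == 'sell_price':
        sp = v
    elif k == 'inventory':
        inventory = v
profit = (sp - cp) * inventory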
| How to remove the UnboundLocalError in Python while calculating profit from the values given in a dictionary? | I have a dictionary dt which consists of cost price, selling price and the inventory. The purpose of the code is to calculate the Profit. Profit can be calculated by
Profit = Total selling price - Total cost price. For example, the following is the input
profit({
"cost_price": 32.67,
"sell_price": 45.00,
"inventory": 1200
})
And its output is 14796. To calculate the individual totals, the formulas are Total cost price = inventory * cost_price and Total selling price = inventory * sell_price. Below is my code and the error.
class Solution(object):
def total_profit(self, di):
global total_selling_price
global total_cost_price
for k, v in enumerate(di):
if k == 'cost_price':
cp = di[v]
elif k == 'inventory':
inventory = di[v]
total_cost_price = cp * inventory
else:
sp = di[v]
total_selling_price = sp * inventory
profit = total_selling_price - total_cost_price
return profit
if __name__ == '__main__':
p = Solution()
dt = {"cost_price": 2.77,
"sell_price": 7.95,
"inventory": 8500}
print(p.total_profit(dt))
Error shown is as follows
Traceback (most recent call last):
File "/Users/tejas/PycharmProjects/LeetcodeinPython/EdbatsQuestions/Profit.py", line 27, in <module>
print(p.total_profit(dt))
File "/Users/tejas/PycharmProjects/LeetcodeinPython/EdbatsQuestions/Profit.py", line 15, in total_profit
total_selling_price = sp * inventory
UnboundLocalError: local variable 'inventory' referenced before assignment
| [
"When extracting values from the dictionary you need to validate their existence. You may even want to check the data type(s).\nThere is no need for global variables in this case.\nThe main method might as well be static as it does not depend upon any other attributes of the Solution class.\nSomething like this will suffice and be more robust:\nclass Solution:\n @staticmethod\n def total_profit(di):\n cp = di.get('cost_price')\n sp = di.get('sell_price')\n iv = di.get('inventory')\n if cp and sp and iv:\n return (sp - cp) * iv\n # implicit return of None if any keys unavailable\n\n\ndt = {\"cost_price\": 2.77,\n \"sell_price\": 7.95,\n \"inventory\": 8500}\n\nprint(Solution.total_profit(dt))\n\nOutput:\n44030.0\n\n",
"In your first \"elif\" block indeed the inventory variable is unbound (it hasn't been defined before). The line earlier you just checked if k is equal to string \"inventory\", but as explained below it never is in your case.\nTLDR;\nAin't somebody learning for a interview huh? :D\nBy default if you loop over a dictionary you are looping over its keys.\nEnumerate yields idx of an iterable and its value. So you are mixing stuff here.\nThis (compare with your case please):\n x = {\"a\":5, \"b\":77}\n for k,v in enumerate(x):\n print(k,v)\n\nWill print for you:\n 0, \"a\"\n 1, \"b\"\n\nRather not what you are aiming at. So iterate using .items(), which will give you pairs of key, value on each loop iteration.\n"
] | [
0,
0
] | [] | [] | [
"dictionary",
"global_variables",
"python"
] | stackoverflow_0074617610_dictionary_global_variables_python.txt |