| content (string, 85-101k chars) | title (string, 0-150) | question (string, 15-48k) | answers (sequence) | answers_scores (sequence) | non_answers (sequence) | non_answers_scores (sequence) | tags (sequence) | name (string, 35-137) |
---|---|---|---|---|---|---|---|---|
Q:
How to get difference between current timestamp and a different timestamp?
I am trying to get the difference between two timestamps and check if it's greater than 30 minutes.
timestamp1 = 1668027512
now = datetime.now(tz)
And this is what I am trying to do:
from datetime import datetime
import pytz as tz
tz = tz.timezone('UTC')
import time
timestamp1 = 1668027512
timestamp1 = datetime.utcfromtimestamp(int(timestamp1)).strftime("%Y-%m-%dT%H:%M:%S")
print(timestamp1)
now = datetime.now(tz).strftime("%Y-%m-%dT%H:%M:%S")
print(now)
print(timestamp1 - now)
This is giving me this error:
TypeError: unsupported operand type(s) for -: 'str' and 'str'
So I tried to convert them to Unix timestamps and then compute the difference:
d1_ts = time.mktime(timestamp1.timetuple())
d2_ts = time.mktime(now.timetuple())
print(d1_ts - d2_ts)
But now the error is this:
'str' object has no attribute 'timetuple'
This datetime package is confusing. What am I missing here?
A:
from datetime import datetime
# 30 minutes times 60 seconds
thirty_minutes = 30 * 60
past_timestamp = 1668027512
now_timestamp = datetime.now().timestamp()
if (now_timestamp - past_timestamp) > thirty_minutes:
    pass  # do your thing
A:
The problem you are facing is that when you use .strftime("%Y-%m-%dT%H:%M:%S") you convert the datetime object to a string, and you can't subtract strings.
I suggest you only convert your variables to strings at the moment you print them (but that depends on your goals for the variables).
Also, you need to resolve the mismatch between the naive and timezone-aware datetimes before subtracting them; for this you have to include now = now.replace(tzinfo=None), which strips the timezone information.
So your code would be:
from datetime import datetime
import pytz as tz
tz = tz.timezone('UTC')
import time
timestamp1 = 1668027512
timestamp1 = datetime.utcfromtimestamp(int(timestamp1))
print(timestamp1.strftime("%Y-%m-%dT%H:%M:%S"))
now = datetime.now(tz)
print(now.strftime("%Y-%m-%dT%H:%M:%S"))
now = now.replace(tzinfo=None)
print(timestamp1 - now)
A:
I recommend always working with aware datetimes in order to avoid issues and ensure reproducibility, especially if you'd like to share your code.
utcfromtimestamp() unfortunately returns a naive datetime, meaning it doesn't set the tzinfo attribute.
It's better to use fromtimestamp() and specify the timezone.
From pytz you can directly import the UTC constant, for convenience.
You can use timedelta() to express the time delta.
Parentheses are not mandatory in the comparison, thanks to the operator precedence rules:
from datetime import datetime, timedelta
from pytz import UTC
dt1 = datetime.fromtimestamp(1668027512, UTC)
now = datetime.now(UTC)
if now - dt1 > timedelta(minutes=30):
print("period expired")
| answers_scores: [1, 0, 0] | non_answers: [] | non_answers_scores: [] | tags: [datetime, python, unix_timestamp] | stackoverflow_0074605684_datetime_python_unix_timestamp.txt |
Q:
Malformed query in elasticsearch
search = {
"from": str(start),
"size": str(size),
"query": {
"bool": {
"must": {
"multi_match": {
"query":query,
"fields":["name","description","tags","comments","created","creator","transaction","wallet"],
"operator":"or"}
},
"filter": { "term": { "channel": channel } } } } }
This is the python dict object. It gets the following error:
elasticsearch.BadRequestError: BadRequestError(400, 'parsing_exception', '[bool] malformed query, expected [END_OBJECT] but found [FIELD_NAME]')
I'm not seeing it. Please help. Start, size, query, and channel are all variables.
I have looked at a lot of example Elasticsearch queries. Nothing I've tried has gotten past syntax errors. I've also tried simple_query_string and a simple multi_match. I always need start and size, and always need to filter on channel.
A:
So the issue is that some of those fields are arrays and need [] around their contents, specifically must and filter. Adding the appropriate brackets solved the issue. Here's the new format:
search = {
"from": start,
"query": {
"bool": {
"must": [
{ "multi_match": {
"query": query,
"fields": ["name","description","tags","comments","created","creator","transaction","wallet"]
} },
{ "match": {
"channel": channel
} }
]
}
}
}
Notice I've also dropped using the filter and just added another match term. I'm using size in the search call as one of its parameters.
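For context, here is a sketch of how that query might be executed with the official Python client; the index name, connection URL, and client setup are assumptions, not part of the original answer:
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # hypothetical connection details
# 'size' is passed as a call parameter, as described above
resp = es.search(index="my-index", body=search, size=size)
for hit in resp["hits"]["hits"]:
    print(hit["_source"])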
| answers_scores: [0] | non_answers: [] | non_answers_scores: [] | tags: [elasticsearch, python] | stackoverflow_0074540296_elasticsearch_python.txt |
Q:
PySpark: create column based on value and dictionary in columns
I have a PySpark dataframe with values and dictionaries that provide a textual mapping for the values.
Not every row has the same dictionary and the values can vary too.
| value | dict |
| -------- | ---------------------------------------------- |
| 1 | {"1": "Text A", "2": "Text B"} |
| 2 | {"1": "Text A", "2": "Text B"} |
| 0 | {"0": "Another text A", "1": "Another text B"} |
I want to make a "status" column that contains the right mapping.
| value | dict | status |
| -------- | ------------------------------- | -------- |
| 1 | {"1": "Text A", "2": "Text B"} | Text A |
| 2 | {"1": "Text A", "2": "Text B"} | Text B |
| 0 | {"0": "Other A", "1": "Other B"} | Other A |
I have tried this code:
df.withColumn("status", F.col("dict").getItem(F.col("value"))
This code does not work. With a hard coded value, like "2", the same code does provide an output, but of course not the right one:
df.withColumn("status", F.col("dict").getItem("2"))
Could someone help me with getting the right mapped value in the status column?
EDIT: my code did work, except for the fact that my "value" was a double and the keys in dict are strings. When casting the column from double to int to string, the code works.
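For illustration, the cast described in that edit might look like this (a sketch using the column names from the question):
from pyspark.sql import functions as F

# cast the double to int (drops the .0), then to string to match the dict keys
df = df.withColumn(
    "status",
    F.col("dict").getItem(F.col("value").cast("int").cast("string"))
)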
A:
Hope this helps.
from pyspark.sql import SparkSession
from pyspark.sql.functions import *
from pyspark.sql.types import *
import json
if __name__ == '__main__':
spark = SparkSession.builder.appName('Medium').master('local[1]').getOrCreate()
df = spark.read.format('csv').option("header","true").option("delimiter","|").load("/Users/dshanmugam/Desktop/ss.csv")
schema = StructType([
StructField("1", StringType(), True)
])
    def return_value(data):
        # 'newCol' below holds "<value>-<dict json>", so split the key from the JSON
        key = data.split('-')[0]
        value = json.loads(data.split('-')[1])[key]
        return value
returnVal = udf(return_value)
df_new = df.withColumn("newCol",concat_ws("-",col("value"),col("dict"))).withColumn("result",returnVal(col("newCol")))
df_new.select(["value","result"]).show(10,False)
Result:
+-----+--------------+
|value|result |
+-----+--------------+
|1 |Text A |
|2 |Text B |
|0 |Another text A|
+-----+--------------+
I am using a UDF. You can try some other options if performance is a concern.
A:
Here are my 2 cents
Create the dataframe by reading from CSV or any other source (in my case it is just static data)
from pyspark.sql.types import *
data = [
(1 , {"1": "Text A", "2": "Text B"}),
(2 , {"1": "Text A", "2": "Text B"}),
(0 , {"0": "Another text A", "1": "Another text B"} )
]
schema = StructType([
StructField("ID",StringType(),True),
StructField("Dictionary",MapType(StringType(),StringType()),True),
])
df = spark.createDataFrame(data,schema=schema)
df.show(truncate=False)
Then directly extract the dictionary value based on the id as a key.
df.withColumn('extract',df.Dictionary[df.ID]).show(truncate=False)
(The original answer ended with a screenshot of the resulting dataframe for reference; the new extract column contains the mapped text for each row.)
| answers_scores: [1, 1] | non_answers: [] | non_answers_scores: [] | tags: [apache_spark_sql, dictionary, mapping, pyspark, python] | stackoverflow_0074599729_apache_spark_sql_dictionary_mapping_pyspark_python.txt |
Q:
Safely and Asynchronously Interrupt an Infinite-Loop Python Script started by a BASH script via SSH
My Setup:
I have a Python script that I'd like to run on a remote host. I'm running a BASH script on my local machine that SSH's into my remote server, runs yet another BASH script, which then kicks off the Python script:
Local BASH script --> SSH --> Remote BASH script --> Remote Python script
The Python script configures a device (a DAQ) connected to the remote server and starts a while(True) loop of sampling and signal generation. When developing this script locally, I had relied on using Ctrl+C and a KeyboardInterrupt exception to interrupt the infinite loop and (most importantly) safely close the device sessions.
After exiting the Python script, I have my BASH script do a few additional chores while still SSH'd into the remote server.
Examples of my various scripts...
local-script.sh:
ssh user@remotehost "remote-script.sh"
remote-script.sh:
python3 infinite-loop.py
infinite-loop.py:
while True:
# do stuff...
My Issue(s):
Now that I've migrated this script to my remote server and am running it via SSH, I can no longer use the KeyboardInterrupt to safely exit my Python script. In fact, when I do, I notice that the device being controlled by the Python script is still running (the output signals from my DAQ keep changing as though the Python script were still active), and when I manually SSH back into the remote server, I can find the persisting Python script process and must kill it from there (otherwise I get two instances of the Python script running on top of one another if I run the script again). This leads me to believe that I'm actually exiting only the SSH session that my local script kicked off, leaving my remote BASH and Python scripts off wandering on their own... (updated, following the investigation outlined in the Edit 1 section)
In summary, using Ctrl+C while in the remote Python script results in:
Remote Python Script = Still Running
Remote BASH Script = Still Running
Remote SSH Session = Closed
Local BASH Script = Active ([Ctrl]+[C] lands me here)
My Ask:
How can I asynchronously interrupt (but not fully exit) a Python script that was kicked off over an SSH session via a BASH script? Bonus points if we can work within my BASH --> SSH --> BASH --> Python framework, whack as it may be. If we can do it with as few extra pip modules as possible installed on top, you just might become my favorite person!
Edit 1:
Per @dan's recommendation, I started exploring trap statements in BASH scripts. I have yet to implement this successfully, but as a way to test its effectiveness, I decided to monitor the process list at different stages of execution... It seems that, once started, I can see the processes for my SSH session, my remote BASH script, and its subsequent remote Python script. But when I use Ctrl+C to exit, I'm kicked back into the top-level "local" BASH script, and when I check the process list on my remote server, I see both my remote BASH script and my remote Python script still running... so my remote BASH script is not stopping; I am, in fact, ONLY ending my SSH session...
A:
In combining the suggestions from the comments (and lots of help from a buddy), I've got something that works for me:
Solution 1:
In summary, I made my remote BASH script record its group process ID (GPID, which is also assigned to the Python script spawned by the remote BASH script) to a file, and then had the local BASH script read that file to kill the process group remotely.
Now my scripts look like:
local-script.sh
ssh user@remotehost "remote-script.sh"
remotegpid=`ssh user@remotehost "cat ~/gpid_file"`
ssh user@remotehost "kill -SIGTERM -- -$remotegpid && rm ~/gpid_file"
# ^ After the SSH closes, this goes back in to grab the GPID from the file and then kills it
remote-script.sh
ps -o pgid= $$ | xargs > ~/gpid_file
# ^ This gets the BASH script's GPID and writes it, without whitespace, to the file the local script later reads
python3 infinite-loop.py
infinite-loop.py (unchanged)
while True:
# do stuff...
This solves most of the problem. Originally I had set out to be able to do things in my Python script after it was interrupted and before exiting back into my BASH scripts, but it turned out I had a bigger problem to catch (what with the scripts continuing to run even after closing my SSH session)...
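One way to get that in-Python cleanup back (not part of the original answer, just a sketch) is to catch the SIGTERM sent by the kill command above inside the Python script and run the shutdown code there:
import signal
import sys

def handle_sigterm(signum, frame):
    # safely close the device sessions here before exiting
    print("cleaning up DAQ sessions...")
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)

while True:
    pass  # do stuff...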
| answers_scores: [0] | non_answers: [] | non_answers_scores: [] | tags: [bash, python, ssh] | stackoverflow_0074527122_bash_python_ssh.txt |
Q:
Python subprocess.run() batch file with updating variables
I want to ask how I can run an external batch file while updating its variables before I run the process. The details of my question follow:
I have a batch file that performs a simulation process. I want to write a module where I can update the variables first, without manually editing the batch file, then run the simulation, and finally import the result, like:
# this will be the variable that I want to update
yyyy = 2022
mm = 11
dd = 28
Path1 = 'the path for first variable'
Path2 = 'the path for second variable'
# the batch file is like:
Batch_simulation.bat
Path 2/remote/noclear/Path 1/%yyyy%%mm%%dd%
# therefore, I want to update the variable in batch file first, then run the simulation, my code is looking like this right now:
import subprocess
yyyy = 2022
mm = 11
dd = 28
Path1 = 'the path for first variable'
Path2 = 'the path for second variable'
paramStr = str(yyyy)+','+str(mm)+','+str(dd)+','+Path1+','+Path2
bat_file = ['pathway for Batch_simulation.bat', paramStr]
process = subprocess.run([bat_file])
stdout, stderr = process.communicate()
Can someone give some advice or a possible solution, please? Thank you so much.
A:
Can someone give ... any possible solution ...?
in a loop
open and read the batch file (--> results in a string)
use pattern matching to find then replace the relevant parts of that string with your data
write the modified string to the batch file
run the batch file with subprocess and get its results.
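A minimal sketch of the steps above; the %yyyy%%mm%%dd% placeholder comes from the question, while the file path and replacement logic are assumptions:
import subprocess

yyyy, mm, dd = 2022, 11, 28
bat_path = r"C:\path\to\Batch_simulation.bat"  # hypothetical location

# open and read the batch file
with open(bat_path) as f:
    text = f.read()

# pattern-match and replace the date placeholder with the new values
text = text.replace("%yyyy%%mm%%dd%", f"{yyyy}{mm:02d}{dd:02d}")

# write the modified string back to the batch file
with open(bat_path, "w") as f:
    f.write(text)

# run the batch file and capture its results
result = subprocess.run(bat_path, capture_output=True, text=True, shell=True)
print(result.stdout)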
A:
I don't know Python at all.
It would appear to me that the string presented to the batch would be
yyyy,mm,dd,Path1,Path2
So within the batch, %1 would deliver yyyy, %2 would deliver mm and so on.
Note: if Path? contains a separator like a space, then the value path? must be quoted as "path?". It's probably best to quote these elements in any case (spaces, commas and some other characters act as separators for batch).
Please also note that batch requires a backslash as the directory separator, not a forward slash.
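For illustration, passing the values as separate arguments rather than one comma-joined string delivers them to the batch file as %1 through %5 (a sketch; the paths are hypothetical):
import subprocess

yyyy, mm, dd = 2022, 11, 28
path1, path2 = r"C:\first\path", r"C:\second\path"  # hypothetical paths

# inside the batch file: %1=yyyy, %2=mm, %3=dd, %4=path1, %5=path2
subprocess.run([r"C:\path\to\Batch_simulation.bat", str(yyyy), str(mm), str(dd), path1, path2])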
| answers_scores: [0, 0] | non_answers: [] | non_answers_scores: [] | tags: [batch_file, python, subprocess] | stackoverflow_0074605590_batch_file_python_subprocess.txt |
Q:
How to convert a sympy decimal number to a Python decimal?
In sympy I can for example do:
from sympy import EulerGamma
EulerGamma.n(60)
0.577215664901532860606512090082402431042159335939923598805767
I would like to convert that into a Decimal number without losing any precision.
from decimal import Decimal as D
import decimal
decimal.getcontext().prec = 100
D(EulerGamma.n(60))
TypeError: conversion from Float to Decimal is not supported
What is the right way to do this?
A:
Try doing D(str(EulerGamma.n(60))). This will construct a Decimal from the object's string representation.
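A quick round-trip check (a sketch reusing the question's imports; the printed digits should match the sympy output above):
from decimal import Decimal as D
import decimal
from sympy import EulerGamma

decimal.getcontext().prec = 100
d = D(str(EulerGamma.n(60)))
print(d)  # 0.577215664901532860606512090082402431042159335939923598805767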
| answers_scores: [3] | non_answers: [] | non_answers_scores: [] | tags: [python] | stackoverflow_0074606080_python.txt |
Q:
installing python packages without internet and using source code as .tar.gz and .whl
We are trying to install a couple of Python packages without internet access.
For example: python-keystoneclient
For that, we have the packages downloaded from https://pypi.python.org/pypi/python-keystoneclient/1.7.1 and kept on the server.
However, while installing the tar.gz and .whl packages, the installation looks for dependent packages to be installed first. Since there is no internet connection on the server, it fails.
For example, python-keystoneclient has the following dependent packages:
stevedore (>=1.5.0)
six (>=1.9.0)
requests (>=2.5.2)
PrettyTable (<0.8,>=0.7)
oslo.utils (>=2.0.0)
oslo.serialization (>=1.4.0)
oslo.i18n (>=1.5.0)
oslo.config (>=2.3.0)
netaddr (!=0.7.16,>=0.7.12)
debtcollector (>=0.3.0)
iso8601 (>=0.1.9)
Babel (>=1.3)
argparse
pbr (<2.0,>=1.6)
When I try to install the packages one by one from the above list, each again asks for its own nested dependencies.
Is there any way we could list ALL the dependent packages needed to install a Python module like python-keystoneclient?
A:
This is how I handle this case:
On the machine where I have access to Internet:
mkdir keystone-deps
pip download python-keystoneclient -d "/home/aviuser/keystone-deps"
tar cvfz keystone-deps.tgz keystone-deps
Then move the tar file to the destination machine that does not have Internet access and perform the following:
tar xvfz keystone-deps.tgz
cd keystone-deps
pip install python_keystoneclient-2.3.1-py2.py3-none-any.whl -f ./ --no-index
You may need to add --no-deps to the command as follows:
pip install python_keystoneclient-2.3.1-py2.py3-none-any.whl -f ./ --no-index --no-deps
A:
If you want to install a bunch of dependencies from, say a requirements.txt, you would do:
mkdir dependencies
pip download -r requirements.txt -d "./dependencies"
tar cvfz dependencies.tar.gz dependencies
And, once you transfer the dependencies.tar.gz to the machine which does not have internet you would do:
tar zxvf dependencies.tar.gz
cd dependencies
pip install * -f ./ --no-index
A:
We have a similar situation at work, where the production machines have no access to the Internet; therefore everything has to be managed offline and off-host.
Here is what I tried with varied amounts of success:
basket, a small utility that you run on your internet-connected host. Instead of trying to install a package, it will instead download it, and everything else it requires, into a directory. You then move this directory onto your target machine. Pros: very easy and simple to use; no server headaches; no ports to configure. Cons: there aren't any real showstoppers, but the biggest one is that it doesn't respect any version pinning you may have; it will always download the latest version of a package.
Run a local pypi server. Used pypiserver and devpi. pypiserver is super simple to install and setup; devpi takes a bit more finagling. They both do the same thing - act as a proxy/cache for the real pypi and as a local pypi server for any home-grown packages. localshop is a new one that wasn't around when I was looking, it also has the same idea. So how it works is your internet-restricted machine will connect to these servers, they are then connected to the Internet so that they can cache and proxy the actual repository.
The problem with the second approach is that although you get maximum compatibility and access to the entire repository of Python packages, you still need to make sure any/all dependencies are installed on your target machines (for example, any headers for database drivers and a build toolchain). Further, these solutions do not cater for non-pypi repositories (for example, packages that are hosted on github).
We got very far with the second option though, so I would definitely recommend it.
Eventually, getting tired of having to deal with compatibility issues and libraries, we migrated the entire circus of servers to commercially supported docker containers.
This means that we ship everything pre-configured, nothing actually needs to be installed on the production machines and it has been the most headache-free solution for us.
We replaced the pypi repositories with a local docker image server.
A:
pipdeptree is a command-line utility for displaying the Python packages installed in a virtualenv in the form of a dependency tree.
Just use it:
https://github.com/naiquevin/pipdeptree
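For example (a sketch; pipdeptree itself is installed with pip, and the -p flag restricts the output to the named package and its dependency subtree):
pip install pipdeptree
pipdeptree -p python-keystoneclient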
A:
If you want to download for a specific platform, you can use the --platform argument:
pip3 download notebook --platform manylinux1_x86_64 --only-binary=:all: -d "/Users/ajaytomgeorge/Dev/wheels/"
There are also more advanced arguments you can pass, which are documented in the README:
Mac/linux
pip download \
--only-binary=:all: \
--platform macosx-10_10_x86_64 \
--python-version 27 \
--implementation cp \
SomePackage
Windows
pip download ^
--only-binary=:all: ^
--platform macosx-10_10_x86_64 ^
--python-version 27 ^
--implementation cp ^
SomePackage
Non-answers (scores: -1, -1):
"This isn't an answer. I was struggling but then realized that my install was trying to connect to internet to download dependencies. So, I downloaded and installed dependencies first and then installed with below command. It worked.
python -m pip install filename.tar.gz"
"You can manually download the 'whl' file from PyPI:
https://pypi.org/project/google-cloud-debugger-client/#files
Then locate it in the root folder and you can just install it via pip:
pip install google_cloud_debugger_client-1.2.1-py2.py3-none-any.whl"
| answers_scores: [138, 49, 10, 0, 0] | non_answers_scores: [-1, -1] | tags: [openstack, pip, python] | stackoverflow_0036725843_openstack_pip_python.txt |
Q:
cx_oracle and sqlalchemy performance comparison
I am working with an Oracle database and wanted to know which toolkit (SQLAlchemy or cx_Oracle) performs better, as I can't find any comparisons online; I hope someone can help me. I have listed the key performance indicators that need to be addressed:
Bulk insertion and single-row insertion
Connection complexity
Which one can be used for streaming more efficiently
Which one is better for OLTP connections (loading data from OLTP with that toolkit)
Any comparisons beyond these are also highly appreciated.
My Target and Source databases are Oracle
A comparison of cx_Oracle and SQLAlchemy
A:
SQLAlchemy is a layer on top of cx_Oracle so it will always have more overhead.
When looking for performance, you should evaluate the latest cx_Oracle release (now called python-oracledb) since the new Thin mode has some advantages (e.g with DB Object types). See the release announcement for information about python-oracledb. Also see this blog post about using SQLAlchemy 1.4 with python-oracledb.
Fundamentally, you need to decide which coding style (ORM or raw API calls) you want to use. You may also want to look at pandas, which uses SQLAlchemy. And you should benchmark in your own environment with your own code.
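To make the benchmarking advice concrete, here is a minimal sketch comparing a bulk insert through each layer; the DSN, credentials, and demo table are assumptions, not a definitive benchmark:
import time
import cx_Oracle
import sqlalchemy as sa

rows = [(i, f"name-{i}") for i in range(10_000)]

# raw cx_Oracle bulk insert with executemany
conn = cx_Oracle.connect("user/password@localhost/orclpdb1")  # hypothetical DSN
cur = conn.cursor()
start = time.perf_counter()
cur.executemany("INSERT INTO demo (id, name) VALUES (:1, :2)", rows)
conn.commit()
print("cx_Oracle executemany:", time.perf_counter() - start)

# the same bulk insert through SQLAlchemy Core, on top of the same driver
engine = sa.create_engine("oracle+cx_oracle://user:password@localhost/?service_name=orclpdb1")
demo = sa.Table("demo", sa.MetaData(), autoload_with=engine)
start = time.perf_counter()
with engine.begin() as sa_conn:
    sa_conn.execute(demo.insert(), [{"id": i, "name": n} for i, n in rows])
print("SQLAlchemy Core insert:", time.perf_counter() - start)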
| answers_scores: [1] | non_answers: [] | non_answers_scores: [] | tags: [database, oracle, performance, python, sqlalchemy] | stackoverflow_0074597303_database_oracle_performance_python_sqlalchemy.txt |
Q:
Parse HTML to find titles with Python and BeautifulSoup
This is the code I'm currently using...
import requests
from bs4 import BeautifulSoup
headers = {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Methods': 'GET',
'Access-Control-Allow-Headers': 'Content-Type',
'Access-Control-Max-Age': '3600',
'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0'
}
url = "https://blah.com"
req = requests.get(url, headers)
soup = BeautifulSoup(req.content, 'html.parser')
titles = soup.select('a.title')
print (titles)
When executing this Python script, I get a bunch of text coming back that looks similar to this...
<a class="title" fill="false" arrow="false" duration="0" followcursor="1" theme="translucent" title-auto-hide="Blah" href="/url/blah/" title="Blah">Blah</a>
I'm trying to parse the data only to show the title Blah.
How can I make this happen?
A:
If I understand you correctly, you want to get the text from the title= attribute:
titles = soup.select("a.title")
for a in titles:
print(a["title"])
If you want the text inside <a>:
titles = soup.select("a.title")
for a in titles:
print(a.text)
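A compact variant that collects every matching title at once (a small sketch building on the answer above):
titles = [a["title"] for a in soup.select("a.title")]
print(titles)  # e.g. ['Blah', ...]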
| answers_scores: [1] | non_answers: [] | non_answers_scores: [] | tags: [beautifulsoup, parsing, python] | stackoverflow_0074606128_beautifulsoup_parsing_python.txt |
Q:
Writing interpolated grib2 data with pygrib leads to unusable grib file
I'm trying to use pygrib to read data from a grib2 file, interpolate it in Python, and write it to another file. I've tried both pygrib and eccodes, and both produce the same problem. The output file size increases by a factor of 3, but when I try to view the data in applications like Weather and Climate Toolkit (WCT), it has all the variables listed but shows "No Data" when plotted. If I use the same script without interpolating the data, and just write it to the new file, it works fine in WCT. If I use wgrib2, it lists all the grib messages, but if I use wgrib2 -V, it works on the unaltered data and produces the error "*** FATAL ERROR: unsupported: code table 5.6=0 ***" for the interpolated data. Am I doing something wrong in my Python script? Here is an example of what I'm doing to write the file (same result using pygrib 2.05 and 2.1.3). I used a basic HRRR file for the example.
import pygrib
import numpy as np
import sys
def writeNoChange():
# This produces a useable grib file.
filename = 'hrrr.t00z.wrfprsf06.grib2'
outfile = 'test.grib2'
grbs = pygrib.open(filename)
with open(outfile, 'wb') as outgrb:
for grb in grbs:
msg = grb.tostring()
outgrb.write(msg)
outgrb.close()
grbs.close()
def writeChange():
# This method produces a grib file that isn't recognized by WCT
filename = 'hrrr.t00z.wrfprsf06.grib2'
outfile = 'testChange.grib2'
grbs = pygrib.open(filename)
with open(outfile, 'wb') as outgrb:
for grb in grbs:
vals = grb.values * 1
grb['values'] = vals
msg = grb.tostring()
outgrb.write(msg)
outgrb.close()
grbs.close()
#-------------------------------
if __name__ == "__main__":
writeNoChange()
writeChange()
A:
Table 5.6 for GRIB2 (https://www.nco.ncep.noaa.gov/pmb/docs/grib2/grib2_doc/) is related to "ORDER OF SPATIAL DIFFERENCING".
For some reason, when you modify grb['values'], it sets grb['orderOfSpatialDifferencing'] = 0, which "wgrib2 -V" doesn't like. So, after changing 'values', change 'orderOfSpatialDifferencing' to what it was initially:
orderOfSpatialDifferencing = grb['orderOfSpatialDifferencing']
grb['values'] = new_vals  # the modified data array (hypothetical name)
grb['orderOfSpatialDifferencing'] = orderOfSpatialDifferencing
This worked for me in terms of getting wgrib2 -V to work, but messed up the data. Possibly some other variables in Section 5 also need to be modified.
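Applied to the writeChange() function from the question, the workaround might look like this; a sketch, with the caveat above that other Section 5 keys may also need restoring:
import pygrib

def writeChangePreserving():
    filename = 'hrrr.t00z.wrfprsf06.grib2'
    outfile = 'testChangePreserving.grib2'
    grbs = pygrib.open(filename)
    with open(outfile, 'wb') as outgrb:
        for grb in grbs:
            order = grb['orderOfSpatialDifferencing']  # remember the Section 5 setting
            grb['values'] = grb.values * 1
            grb['orderOfSpatialDifferencing'] = order  # restore it before serializing
            outgrb.write(grb.tostring())
    grbs.close()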
| answers_scores: [0] | non_answers: [] | non_answers_scores: [] | tags: [pygrib, python] | stackoverflow_0066552793_pygrib_python.txt |
Q:
Python for i in range loop as argument a variable
The problem is that I am trying to implement the Fibonacci series with a start value that comes from an input, along with the number of terms in the series. My problem is that the number added to the start value is only ever 1 more than the number that was added on the previous loop iteration.
I tried changing the loop and searching online, but I couldn't find any relevant advice.
Code:
print("In mathematics, the Fibonacci numbers, commonly denoted Fn, form a sequence, the Fibonacci sequence, in which each number is the sum of the two preceding ones. The sequence commonly starts from 0 and 1, although some authors start the sequence from 1 and 1 or sometimes (as did Fibonacci) from 1 and 2. Starting from 0 and 1, the first few values in the sequence are:")
startValue = int(input("Start term: "))
numberTerms = int(input("Number of terms: "))
a =0
for i in range(numberTerms):
print(startValue + a)
a += 1
A:
It's not very clear what your for loop is trying to do here.
My problem is that when I enter my variable n (the number that gets added to the start value, its 1 more than the number it was getting added on one loop before.
Well, all you're doing in the loop is incrementing your variable by 1 each iteration and nothing else.
Try to think about how the Fibonacci sequence is calculated (you state it yourself in the print statement: Fn is F(n-1) + F(n-2)).
Do you really think that one variable is enough to keep track of what is needed?
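For reference, a minimal two-variable sketch of the idea hinted at above; this is an illustration, not code from the answer, and it treats the entered start term as the first printed value:
start_value = int(input("Start term: "))
number_terms = int(input("Number of terms: "))

previous, current = 0, start_value  # the two preceding terms
for _ in range(number_terms):
    print(current)
    previous, current = current, previous + current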
| answers_scores: [0] | non_answers: [] | non_answers_scores: [] | tags: [fibonacci, python] | stackoverflow_0074606167_fibonacci_python.txt |
Q:
Python object of type has no len()
I tried to solve an easy problem on LeetCode. Here is the source: https://leetcode.com/problems/remove-duplicates-from-sorted-list/
I almost solved it with a for loop, but I get the error "object of type 'ListNode' has no len()". I have tried to use __call__() or __len__(), but I have no knowledge or understanding of how these built-in methods work. I read about them in many places, but the mess only got bigger. If anyone could help me, it would be great.
P.S. I know the solution with a while loop is better, but I want this one to work somehow if possible, or at least produce some output.
# Definition for singly-linked list.
# class ListNode:
#     def __init__(self, sequence, val=0, next=None):
#         self.val = val
#         self.next = next
#         self.sequence = sequence
class Solution:
    def deleteDuplicates(self, head: Optional[ListNode]) -> Optional[ListNode]:
        j = 0
        for i in range(0, len(head) - 1):
            if head[i-1-j] == head[i-j]:
                head.remove(head[i-j])
                j += 1
        head.remove(head[-1])
        return head
Non-answer (score: -1):
"I can't really decipher how your ListNode class is designed to work, but the answer to how to be able to call len on an instance of your own class is to define the __len__ dunder method for the class. As others have pointed out in the comments, you'll also need to define the __getitem__ method in order to be able to do stuff like head[i-1-j]. It might look something like this (just guessing as to the actual definitions as it's not clear to me how you want the class to work):
class ListNode:
    def __init__(self, sequence, val=0, next=None):
        self.val = val
        self.next = next
        self.sequence = sequence

    def __len__(self):
        return len(self.sequence)  # Maybe? Or whatever you want len to return

    def __getitem__(self, i):
        return self.sequence[i]  # Maybe? Or whatever you want [ ] to return

More on dunder methods here."
| answers: [] | answers_scores: [] | non_answers_scores: [-1] | tags: [object, python] | stackoverflow_0074606115_object_python.txt |
Q:
Celery task not working in Django framework
I tried code to send an email to a user 5 times as an asynchronous task, using Celery with a Redis broker in the Django framework. My Celery server is working and responds on the Celery CLI; it is even receiving the task from Django, but after that I am getting an error like:
Traceback (most recent call last):
  File "c:\users\vipin\appdata\local\programs\python\python37-32\lib\site-packages\billiard\pool.py", line 358, in workloop
    result = (True, prepare_result(fun(*args, **kwargs)))
  File "c:\users\vipin\appdata\local\programs\python\python37-32\lib\site-packages\celery\app\trace.py", line 544, in _fast_trace_task
    tasks, accept, hostname = _loc
ValueError: not enough values to unpack (expected 3, got 0)
task.py -
from celery.decorators import task
from django.core.mail import EmailMessage
import time
@task(name="Sending_Emails")
def send_email(to_email,message):
time1 = 1
while(time1 != 5):
print("Sending Email")
email = EmailMessage('Checking Asynchronous Task', message+str(time1), to=[to_email])
email.send()
time.sleep(1)
time1 += 1
views.py -
print("sending for Queue")
send_email.delay(request.user.email,"Email sent : ")
print("sent for Queue")
settings.py -
# CELERY STUFF
BROKER_URL = 'redis://localhost:6379'
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Asia/India'
celery.py -
from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'ECartApplication.settings')
app = Celery('ECartApplication')
# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
@app.task(bind=True)
def debug_task(self):
print('Request: {0!r}'.format(self.request))
I expect Email should be sent 5 times but getting error:
[tasks]
. ECartApplication.celery.debug_task
. Sending_Emails
[2019-05-19 12:41:27,695: INFO/SpawnPoolWorker-2] child process 3628 calling sel
f.run()
[2019-05-19 12:41:27,696: INFO/SpawnPoolWorker-1] child process 5748 calling sel
f.run()
[2019-05-19 12:41:28,560: INFO/MainProcess] Connected to redis://localhost:6379/
/
[2019-05-19 12:41:30,599: INFO/MainProcess] mingle: searching for neighbors
[2019-05-19 12:41:35,035: INFO/MainProcess] mingle: all alone
[2019-05-19 12:41:39,069: WARNING/MainProcess] c:\users\vipin\appdata\local\prog
rams\python\python37-32\lib\site-packages\celery\fixups\django.py:202: UserWarni
ng: Using settings.DEBUG leads to a memory leak, never use this setting in produ
ction environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2019-05-19 12:41:39,070: INFO/MainProcess] celery@vipin-PC ready.
[2019-05-19 12:41:46,448: INFO/MainProcess] Received task: Sending_Emails[db10da
d4-a8ec-4ad2-98a6-60e8c3183dd1]
[2019-05-19 12:41:47,455: ERROR/MainProcess] Task handler raised error: ValueErr
or('not enough values to unpack (expected 3, got 0)')
Traceback (most recent call last):
File "c:\users\vipin\appdata\local\programs\python\python37-32\lib\site-packag
es\billiard\pool.py", line 358, in workloop
result = (True, prepare_result(fun(*args, **kwargs)))
File "c:\users\vipin\appdata\local\programs\python\python37-32\lib\site-packag
es\celery\app\trace.py", line 544, in _fast_trace_task
tasks, accept, hostname = _loc
ValueError: not enough values to unpack (expected 3, got 0)
A:
This is an issue when you are running Python on Windows 7/10.
There is a workaround: you just need to use the eventlet module, which you can install using pip:
pip install eventlet
After that execute your worker with -P eventlet at the end of the command:
celery -A MyWorker worker -l info -P eventlet
A:
This command below also works on Windows 11:
celery -A core worker --pool=solo -l info
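As a usage note, assuming the Django project from the question (where the Celery app is named ECartApplication), the worker would be started with one of:
celery -A ECartApplication worker -l info -P eventlet
celery -A ECartApplication worker -l info --pool=solo
The default prefork pool relies on fork semantics that billiard cannot reproduce on Windows, which is what triggers the ValueError above; the eventlet and solo pools avoid it.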
| Celery task not working in Django framework | I tried code to send an email 5 times to a user as an asynchronous task, using Celery with a Redis broker in the Django framework. My Celery server is working and responding to the Celery CLI; it even receives the task from Django, but after that I get an error like:
Traceback (most recent call last):
File "c:\users\vipin\appdata\local\programs\python\python3
es\billiard\pool.py", line 358, in workloop
result = (True, prepare_result(fun(*args, **kwargs)))
File "c:\users\vipin\appdata\local\programs\python\python3
es\celery\app\trace.py", line 544, in _fast_trace_task
tasks, accept, hostname = _loc
ValueError: not enough values to unpack (expected 3, got 0)
task.py -
from celery.decorators import task
from django.core.mail import EmailMessage
import time
@task(name="Sending_Emails")
def send_email(to_email,message):
time1 = 1
while(time1 != 5):
print("Sending Email")
email = EmailMessage('Checking Asynchronous Task', message+str(time1), to=[to_email])
email.send()
time.sleep(1)
time1 += 1
views.py -
print("sending for Queue")
send_email.delay(request.user.email,"Email sent : ")
print("sent for Queue")
settings.py -
# CELERY STUFF
BROKER_URL = 'redis://localhost:6379'
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Asia/India'
celery.py -
from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'ECartApplication.settings')
app = Celery('ECartApplication')
# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
@app.task(bind=True)
def debug_task(self):
print('Request: {0!r}'.format(self.request))
I expect Email should be sent 5 times but getting error:
[tasks]
. ECartApplication.celery.debug_task
. Sending_Emails
[2019-05-19 12:41:27,695: INFO/SpawnPoolWorker-2] child process 3628 calling sel
f.run()
[2019-05-19 12:41:27,696: INFO/SpawnPoolWorker-1] child process 5748 calling sel
f.run()
[2019-05-19 12:41:28,560: INFO/MainProcess] Connected to redis://localhost:6379/
/
[2019-05-19 12:41:30,599: INFO/MainProcess] mingle: searching for neighbors
[2019-05-19 12:41:35,035: INFO/MainProcess] mingle: all alone
[2019-05-19 12:41:39,069: WARNING/MainProcess] c:\users\vipin\appdata\local\prog
rams\python\python37-32\lib\site-packages\celery\fixups\django.py:202: UserWarni
ng: Using settings.DEBUG leads to a memory leak, never use this setting in produ
ction environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2019-05-19 12:41:39,070: INFO/MainProcess] celery@vipin-PC ready.
[2019-05-19 12:41:46,448: INFO/MainProcess] Received task: Sending_Emails[db10da
d4-a8ec-4ad2-98a6-60e8c3183dd1]
[2019-05-19 12:41:47,455: ERROR/MainProcess] Task handler raised error: ValueErr
or('not enough values to unpack (expected 3, got 0)')
Traceback (most recent call last):
File "c:\users\vipin\appdata\local\programs\python\python37-32\lib\site-packag
es\billiard\pool.py", line 358, in workloop
result = (True, prepare_result(fun(*args, **kwargs)))
File "c:\users\vipin\appdata\local\programs\python\python37-32\lib\site-packag
es\celery\app\trace.py", line 544, in _fast_trace_task
tasks, accept, hostname = _loc
ValueError: not enough values to unpack (expected 3, got 0)
| [
"This is an issue when you running Python over Windows 7/10.\nThere are a workaround, you just need to use the module eventlet that you can install using pip:\n\npip install eventlet\n\nAfter that execute your worker with -P eventlet at the end of the command:\n\ncelery -A MyWorker worker -l info -P eventlet\n\n",
"This command below also works on Windows 11:\ncelery -A core worker --pool=solo -l info\n\n"
] | [
0,
0
] | [] | [] | [
"celery",
"django",
"python",
"python_3.x",
"redis"
] | stackoverflow_0056205396_celery_django_python_python_3.x_redis.txt |
Q:
How convert user inputted constant (pi,e) to float in python?
I'm writing code that must compute the definite integral of a function; I'll provide the code below. How can I get the program to understand user input "pi" or "e" as constants? The problem is that I must convert the input to float for the following calculations, so when the user inputs pi it raises a ValueError. How can we accept constants like pi as integration bounds?
from sympy import *
from sympy.abc import *
import math
import numpy as np
import time
import sys
import pandas as pd
############ ############
####### Calculating Definite Integral of a given function using trapezium method #######
######### ##############
cuts = 100 #is number of cuts
########################## DataFrame object of available differentiable functions
all_functions = {"Trigonometric": ["sin", "cos",'tan','cot','sec','csec','sinc'],
"Trigonometric Inverses": ["asin", "acos",'atan','acot','asec','acsec'," "],
'Hyperbolic Functions': [ 'sinh', 'cosh', 'tanh', 'coth'," "," "," "],
'Hyperbolic Inverses':['asinh','acosh','atanh','acoth','asech','acsch'," "],
"Exponential": ["exp", 'log','ln',"log(base,x)", ' ', " "," "],
"Roots":["root","sqrt",'cbrt',' '," "," "," "],
"Powers": ["x**n (n is all real numbers)"," "," "," "," "," "," "],
"Combinatorical": ['factorial'," "," "," "," "," "," "]}
df = pd.DataFrame(all_functions,index = [" "," "," "," "," "," "," "])
df.columns.name = "Funcion'c classes"
df.index.name = "Functions"
df = df.T
###############################################
#####Defining functions which will compute integral using trapezium method
##### Trapezium method formula -- Integral(f(x),a,b) = (b-a)/n * ( (y_0+y_n)/2 + y_1+y_2+...+ y_(n-1) )
def integral():
print("Enter Function to integrate: ", end=" ")
function = sympify(input()) #converting string input to sympy expression
print("Enter lower bound: ", end = " ")
lower = float(input()) #lower bound of definite integral
print("Enter upper bound: ", end = " ")
upper = float(input()) # upper bound of definite integral
xi = np.linspace(lower,upper,cuts+1) #cutting X axis to n+1 parts, for x0=a<x1<x2<...xi<x(i+1)<...<xn=b
####### y_i = function(x_i) ########inserting "x"s in function and computing y values, for using trapezium method formula
ylist = []
for i in range(len(xi)):
ys = function.subs(x,xi[i])
ylist.append(ys)
sum2 = 0 #second part of trapezium method sum
for j in range(1,cuts):
sum2 = sum2 + ylist[j]
sum1 = (ylist[0]+ylist[cuts])/2 #first part of trapezium method sum
result = (upper-lower)*(sum1+sum2)/cuts #result of an integral
####computing error of an integral
derivative = diff(function,x,2) #2nd differential of function at given point
    derresult = derivative.subs(x,(lower-upper)/2) #value of the 2nd derivative used in the error bound
    error = abs((upper-lower)**3*derresult/(12*cuts**2)) #error bound of the definite integral
dots = "Integrating....\n"
    ####typing out ^^^ this line one character at a time
for l in dots:
sys.stdout.write(l)
sys.stdout.flush()
time.sleep(0.045)
equals = "================\n\n"
    ####typing out ^^^ this line one character at a time
for l in equals:
sys.stdout.write(l)
sys.stdout.flush()
time.sleep(0.045)
#raise this error when bounds give infinity result
if result == math.inf or result == -math.inf:
print("Bounds are false")
else:
###printing integral result
print("Derfinite integral of " + str(function) +" from " +str(lower)+" to "+ str(upper)+" = "+ "%.5f +- %e" %(result, error)+"\n")
        ######## typing out equals one character at a time
for l in equals:
sys.stdout.write(l)
sys.stdout.flush()
time.sleep(0.055)
try: ### Try the integral() function; if errors occur, execute the excepts
integral()
except TypeError: ##execute this when a TypeError occurred, i.e. the function was typed incorrectly
print("The Function You have entered does not exist or is not differentiable or consists other symbol ranther then \'x\' !\nTo see list of all differentiable functions please type \"Functions\" \n")
function_input = input()
function_list = ["Functions", "Function","functions",'function',"FUNCTIONS",'FUNCTIONS']
if function_input in function_list: #if user input is correct print out DataFrame of aveliable functions, and excecute integral()
print(df, end = "\n\n")
integral()
else: #if user input is incorrect return statement below, which will wait to user input print out function's list and excecute integral()
print("Please Type \'Functions\' correctly")
function_input = input()
print(df, end = "\n\n")
integral()
except SympifyError:
print("\nExpression You have entered is not a fully written function or not not written correctly.\n")
integral()
except ValueError:
print("\nBounds must be numbers.\n")
integral()
A:
You can write a function that recognizes certain names before calling float() to parse it normally.
def my_float(s):
    constants = {"pi": 3.14159, "e": 2.71828}
if s in constants:
return constants[s]
else:
return float(s)
Then you can write:
print("Enter lower bound: ", end = " ")
lower = my_float(input()) #lower bound of definite integral
A:
you can use this code:
from numpy import pi,e
value = eval(input()) # if you entered "pi", value will be = 3.14
print(value)`
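Since the question already imports sympy, a related sketch (my suggestion, not from the answers) is to let sympify parse the bound and then convert it:

from sympy import sympify

def parse_bound(s):
    # sympify understands "pi" and "E" (sympy's name for Euler's number)
    return float(sympify(s))

lower = parse_bound(input('Enter lower bound: '))

Note that sympify also uses eval internally, so like the numpy/eval approach it should only be fed trusted input.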
| How convert user inputted constant (pi,e) to float in python? | I'm writing code that must compute the definite integral of a function; I'll provide the code below. How can I get the program to understand user input "pi" or "e" as constants? The problem is that I must convert the input to float for the following calculations, so when the user inputs pi it raises a ValueError. How can we accept constants like pi as integration bounds?
from sympy import *
from sympy.abc import *
import math
import numpy as np
import time
import sys
import pandas as pd
############ ############
####### Calculating Definite Integral of a given function using trapezium method #######
######### ##############
cuts = 100 #is number of cuts
########################## DataFrame object of available differentiable functions
all_functions = {"Trigonometric": ["sin", "cos",'tan','cot','sec','csec','sinc'],
"Trigonometric Inverses": ["asin", "acos",'atan','acot','asec','acsec'," "],
'Hyperbolic Functions': [ 'sinh', 'cosh', 'tanh', 'coth'," "," "," "],
'Hyperbolic Inverses':['asinh','acosh','atanh','acoth','asech','acsch'," "],
"Exponential": ["exp", 'log','ln',"log(base,x)", ' ', " "," "],
"Roots":["root","sqrt",'cbrt',' '," "," "," "],
"Powers": ["x**n (n is all real numbers)"," "," "," "," "," "," "],
"Combinatorical": ['factorial'," "," "," "," "," "," "]}
df = pd.DataFrame(all_functions,index = [" "," "," "," "," "," "," "])
df.columns.name = "Funcion'c classes"
df.index.name = "Functions"
df = df.T
###############################################
#####Defining functions which will compute integral using trapezium method
##### Trapezium method formula -- Integral(f(x),a,b) = (b-a)/n * ( (y_0+y_n)/2 + y_1+y_2+...+ y_(n-1) )
def integral():
print("Enter Function to integrate: ", end=" ")
function = sympify(input()) #converting string input to sympy expression
print("Enter lower bound: ", end = " ")
lower = float(input()) #lower bound of definite integral
print("Enter upper bound: ", end = " ")
upper = float(input()) # upper bound of definite integral
xi = np.linspace(lower,upper,cuts+1) #cutting X axis to n+1 parts, for x0=a<x1<x2<...xi<x(i+1)<...<xn=b
####### y_i = function(x_i) ########inserting "x"s in function and computing y values, for using trapezium method formula
ylist = []
for i in range(len(xi)):
ys = function.subs(x,xi[i])
ylist.append(ys)
sum2 = 0 #second part of trapezium method sum
for j in range(1,cuts):
sum2 = sum2 + ylist[j]
sum1 = (ylist[0]+ylist[cuts])/2 #first part of trapezium method sum
result = (upper-lower)*(sum1+sum2)/cuts #result of an integral
####computing error of an integral
derivative = diff(function,x,2) #2nd differential of function at given point
    derresult = derivative.subs(x,(lower-upper)/2) #value of the 2nd derivative used in the error bound
    error = abs((upper-lower)**3*derresult/(12*cuts**2)) #error bound of the definite integral
dots = "Integrating....\n"
    ####typing out ^^^ this line one character at a time
for l in dots:
sys.stdout.write(l)
sys.stdout.flush()
time.sleep(0.045)
equals = "================\n\n"
    ####typing out ^^^ this line one character at a time
for l in equals:
sys.stdout.write(l)
sys.stdout.flush()
time.sleep(0.045)
#raise this error when bounds give infinity result
if result == math.inf or result == -math.inf:
print("Bounds are false")
else:
###printing integral result
print("Derfinite integral of " + str(function) +" from " +str(lower)+" to "+ str(upper)+" = "+ "%.5f +- %e" %(result, error)+"\n")
        ######## typing out equals one character at a time
for l in equals:
sys.stdout.write(l)
sys.stdout.flush()
time.sleep(0.055)
try: ### Try the integral() function; if errors occur, execute the excepts
integral()
except TypeError: ##execute this when a TypeError occurred, i.e. the function was typed incorrectly
print("The Function You have entered does not exist or is not differentiable or consists other symbol ranther then \'x\' !\nTo see list of all differentiable functions please type \"Functions\" \n")
function_input = input()
function_list = ["Functions", "Function","functions",'function',"FUNCTIONS",'FUNCTIONS']
if function_input in function_list: #if user input is correct print out DataFrame of aveliable functions, and excecute integral()
print(df, end = "\n\n")
integral()
else: #if user input is incorrect return statement below, which will wait to user input print out function's list and excecute integral()
print("Please Type \'Functions\' correctly")
function_input = input()
print(df, end = "\n\n")
integral()
except SympifyError:
print("\nExpression You have entered is not a fully written function or not not written correctly.\n")
integral()
except ValueError:
print("\nBounds must be numbers.\n")
integral()
| [
"You can write a function that recognizes certain names before calling float() to parse it normally.\ndef my_float(s):\n constants = {\"pi\": 3.14159, \"e\": 2.71928}\n if s in constants:\n return constants[s]\n else:\n return float(s)\n\nThen you can read write:\nprint(\"Enter lower bound: \", end = \" \")\nlower = my_float(input()) #lower bound of definite integral\n\n",
"you can use this code:\nfrom numpy import pi,e\n\nvalue = eval(input()) # if you entered \"pi\", value will be = 3.14\n\nprint(value)`\n\n"
] | [
1,
1
] | [] | [] | [
"constants",
"pi",
"python",
"sympy",
"user_input"
] | stackoverflow_0052820034_constants_pi_python_sympy_user_input.txt |
Q:
How to append/concat dataframes from within a function to a global dataframe
I have a scraping function that returns a dataframe as such:
(screenshot of the returned single-column dataframe omitted)
How can I add this dataframe to a global dataframe so as to keep extending my dataframe like so:
(screenshot of the accumulated multi-column dataframe omitted)
The result should be a function i can run again and again with different arguments to compile data into my dataframe.
`
def getTicketPrices(IPLocation, X, Y):
#Scraping...
df = pd.DataFrame (price_list_cleaned, columns = [IPLocation])
`
I have tried creating a dataframe before the function as such:
resultsDF = pd.DataFrame()
and then trying to use the .concat() method within the function, but it returns an empty dataframe...
A:
You can use concat:
appended_df = pd.DataFrame() # Initilize
appended_df = pd.concat([appended_df, new_df], axis=1)
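A usage note (my addition, with hypothetical arguments): because pd.concat copies data on every call, repeatedly extending a global dataframe is slow. It is usually better to collect the per-call frames in a list and concatenate once:

import pandas as pd

frames = []
for location in ["US", "DE", "JP"]:                 # hypothetical arguments
    frames.append(getTicketPrices(location, 0, 0))  # function from the question
result_df = pd.concat(frames, axis=1)               # one column per location

For this to work, getTicketPrices must return its df rather than only building it locally.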
| How to append/concat dataframes from within a function to a global dataframe | I have a scraping function that returns a dataframe as such:
(screenshot of the returned single-column dataframe omitted)
How can I add this dataframe to a global dataframe so as to keep extending my dataframe like so:
(screenshot of the accumulated multi-column dataframe omitted)
The result should be a function i can run again and again with different arguments to compile data into my dataframe.
`
def getTicketPrices(IPLocation, X, Y):
#Scraping...
df = pd.DataFrame (price_list_cleaned, columns = [IPLocation])
`
I have tried creating a dataframe before the function as such:
resultsDF = pd.DataFrame()
and then trying to use the .concat() method within the function, but it returns an empty dataframe...
| [
"You can use concat:\nappended_df = pd.DataFrame() # Initilize\nappended_df = pd.concat([appended_df, new_df], axis=1)\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074606230_dataframe_pandas_python.txt |
Q:
Open the same excel file in different windows with python
I am very new to Python. I'm currently trying to open multiple windows of the same Excel file (Excel 2013) and move the opened windows using Python, but I can't find any info on how to do it. Manually I would just click "New Window" on the "View" tab.
If I'll try open it with subprocess, it'll successively open and close windows.
Have you got any suggestions?
Thanks in advance.
My current code:
import ctypes
import subprocess
import time
import sys
from win32.win32gui import FindWindow, MoveWindow, GetForegroundWindow
user32 = ctypes.windll.user32
x = user32.GetSystemMetrics(78)
y = user32.GetSystemMetrics(79)
p1 = subprocess.Popen(["C:\\Program Files (x86)\\Microsoft Office\\Office15\\EXCEL.EXE", "C:\\Users\\user\\Desktop\\python\\TestBook1.xlsx"])
time.sleep(1)
window_handle1 = GetForegroundWindow()
MoveWindow(window_handle1, 0, 0, int(2/3*x), int(0.5*y), True)
p2 = subprocess.Popen(["C:\\Program Files (x86)\\Microsoft Office\\Office15\\EXCEL.EXE","C:\\Users\\user\\Desktop\\python\\TestBook1.xlsx"])
time.sleep(1)
window_handle2 = GetForegroundWindow()
MoveWindow(window_handle2, 0, int(0.5*y), int(2/3*x), int(0.5*y), True)
p3 = subprocess.Popen(["C:\\Program Files (x86)\\Microsoft Office\\Office15\\EXCEL.EXE", "C:\\Users\\user\\Desktop\\python\\TestBook1.xlsx"])
time.sleep(1)
window_handle3 = GetForegroundWindow()
MoveWindow(window_handle3, int(2/3*x), 0, int(1/3*x), int(y), True)
(sorry for my bad coding in advance)
A:
I don't know python, so likely to be garbage code!
Using the Excel COM object you get programmatic access to the "new window" (among other things).
The Window object that is returned from Workbook.NewWindow has a hWnd property (which might be what you need for the MoveWindow) and/or there is also a Top and Left property you can use to move the window too.
import win32com
import win32com.client
app = win32com.client.Dispatch("Excel.application")
workbook = app.Workbooks.Open("your workbook")
newwindow = workbook.NewWindow()
MoveWindow(newwindow.hwnd,.......)
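As a follow-up sketch (an assumption based on the Top/Left properties the answer mentions, not tested code), the new window can also be positioned through COM directly, without win32gui:

newwindow = workbook.NewWindow()
newwindow.Top = 0      # position in points
newwindow.Left = 0
newwindow.Width = 640
newwindow.Height = 480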
| Open the same excel file in different windows with python | I am very new to Python. I'm currently trying to open multiple windows of the same Excel file (Excel 2013) and move the opened windows using Python, but I can't find any info on how to do it. Manually I would just click "New Window" on the "View" tab.
If I'll try open it with subprocess, it'll successively open and close windows.
Have you got any suggestions?
Thanks in advance.
My current code:
import ctypes
import subprocess
import time
import sys
from win32.win32gui import FindWindow, MoveWindow, GetForegroundWindow
user32 = ctypes.windll.user32
x = user32.GetSystemMetrics(78)
y = user32.GetSystemMetrics(79)
p1 = subprocess.Popen(["C:\\Program Files (x86)\\Microsoft Office\\Office15\\EXCEL.EXE", "C:\\Users\\user\\Desktop\\python\\TestBook1.xlsx"])
time.sleep(1)
window_handle1 = GetForegroundWindow()
MoveWindow(window_handle1, 0, 0, int(2/3*x), int(0.5*y), True)
p2 = subprocess.Popen(["C:\\Program Files (x86)\\Microsoft Office\\Office15\\EXCEL.EXE","C:\\Users\\user\\Desktop\\python\\TestBook1.xlsx"])
time.sleep(1)
window_handle2 = GetForegroundWindow()
MoveWindow(window_handle2, 0, int(0.5*y), int(2/3*x), int(0.5*y), True)
p3 = subprocess.Popen(["C:\\Program Files (x86)\\Microsoft Office\\Office15\\EXCEL.EXE", "C:\\Users\\user\\Desktop\\python\\TestBook1.xlsx"])
time.sleep(1)
window_handle3 = GetForegroundWindow()
MoveWindow(window_handle3, int(2/3*x), 0, int(1/3*x), int(y), True)
(sorry for my bad coding in advance)
| [
"I don't know python, so likely to be garbage code!\nUsing the Excel COM object you get programmatic access to the \"new window\" (among other things).\nThe Window object that is returned from Workbook.NewWindow has a hWnd property (which might be what you need for the MoveWindow) and/or there is also a Top and Left property you can use to move the window too.\nimport win32com\nimport win32com.client\n\napp = win32com.client.Dispatch(\"Excel.application\")\nworkbook = app.Workbooks.Open(\"your workbook\")\nnewwindow = workbook.NewWindow()\nMoveWindow(newwindow.hwnd,.......)\n\n"
] | [
0
] | [] | [] | [
"excel",
"python"
] | stackoverflow_0074605764_excel_python.txt |
Q:
Base 64 decode with Python on XAMPP
I'm trying to bring on my local server (XAMPP), a script that's working on my VPS server (Linux CentOS7).
On XAMPP, I call the Python script with PHP, something like:
$hotel = array("Name"=>$_POST["NAME"]
,..
);
$param = escapeshellcmd(base64_encode(json_encode($hotel)));
$result = shell_exec('python C:\xampp\htdocs\bounce.py $param');
$obj = json_decode($result);
The Python script is something like:
#! /Users/<user>/AppData/Local/Programs/Python/Python37/python.exe
import sys
import json
import base64
content = json.loads(base64.b64decode(sys.argv[1]))
print(json.dumps(content))
The returned JSON string is NULL
This is the Apache error:
[php:warn] [pid 11176:tid 1884] [client ::1:55182] PHP Warning: Attempt to read property "Name" on null in C:\\xampp\\htdocs\\hotel_results.php on line 46, referer: http://localhost/
I also updated Django, with no results.
A:
If the problem is caused by not reading the "$param" variable due to single quotes, replacing the line of
$result = shell_exec('python C:\xampp\htdocs\bounce.py $param');
with
$result = shell_exec("python C:\\xampp\\htdocs\\bounce.py $param");
could help. Worth a try.
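A related note, as a suggestion rather than a confirmed fix: for a single argument, PHP's escapeshellarg() is generally safer than escapeshellcmd(), since it quotes the whole value — e.g. $param = escapeshellarg(base64_encode(json_encode($hotel)));.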
| Base 64 decode with Python on XAMPP | I'm trying to bring on my local server (XAMPP), a script that's working on my VPS server (Linux CentOS7).
On XAMPP, I call the Python script with PHP, something like:
$hotel = array("Name"=>$_POST["NAME"]
,..
);
$param = escapeshellcmd(base64_encode(json_encode($hotel)));
$result = shell_exec('python C:\xampp\htdocs\bounce.py $param');
$obj = json_decode($result);
The Python script is something like:
#! /Users/<user>/AppData/Local/Programs/Python/Python37/python.exe
import sys
import json
import base64
content = json.loads(base64.b64decode(sys.argv[1]))
print(json.dumps(content))
The returned JSON string is NULL
This is the Apache error:
[php:warn] [pid 11176:tid 1884] [client ::1:55182] PHP Warning: Attempt to read property "Name" on null in C:\\xampp\\htdocs\\hotel_results.php on line 46, referer: http://localhost/
I also updated Django, with no results.
| [
"If the problem is caused by not reading the \"$param\" variable due to single quotes, replacing the line of\n$result = shell_exec('python C:\\xampp\\htdocs\\bounce.py $param');\n\nwith\n$result = shell_exec(\"python C:\\\\xampp\\\\htdocs\\\\bounce.py $param\");\n\ncould help. worth a try.\n"
] | [
0
] | [] | [] | [
"base64",
"php",
"python"
] | stackoverflow_0074065117_base64_php_python.txt |
Q:
How to format textual output with whitespace like tabs or newline?
I am somewhat experienced in Python, but I have never really had to format the return value of a function.
This is my desired format:
This is what my output is currently looking like:
I have been researching how to use escaped whitespace-chars like \t, and I know about \n from C++, but I am unsure how to implement these functions.
I have been trying to use \t and \n, but have not gotten my desired result.
A:
Something like this should work:
words = [
('THE', 30062),
('AND', 28379),
('I', 22307),
('THAT', 11924),
]
def print_words(all_words, limit=19):
totwords = sum([y for x,y in all_words])
print("Total words:", totwords)
print()
print("Top", limit, "words:")
words = all_words[:limit]
maxlen = max([len(x) for x, y in words])
for word, count in words:
print(f"{word:{maxlen}} {count}")
print_words(words)
where
print(f"{word:{maxlen}} {count}")
means: print word left-aligned in a field maxlen characters wide, then print count.
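An equivalent sketch without f-strings (my addition):

for word, count in words:
    print(word.ljust(maxlen), count)  # pad word with spaces to maxlen characters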
| How to format textual output with whitespace like tabs or newline? | I am somewhat experienced in Python, but I have never really had to format the return value of a function.
This is my desired format:
This is what my output is currently looking like:
I have been researching how to use escaped whitespace-chars like \t, and I know about \n from C++, but I am unsure how to implement these functions.
I have been trying to use \t and \n, but have not gotten my desired result.
| [
"Something like this should work:\nwords = [\n ('THE', 30062),\n ('AND', 28379),\n ('I', 22307),\n ('THAT', 11924),\n]\n\ndef print_words(all_words, limit=19):\n totwords = sum([y for x,y in all_words])\n print(\"Total words:\", totwords)\n print()\n print(\"Top\", limit, \"words:\")\n\n words = all_words[:limit]\n maxlen = max([len(x) for x, y in words])\n for word, count in words:\n print(f\"{word:{maxlen}} {count}\")\n\nprint_words(words)\n\nwhere\nprint(f\"{word:{maxlen}} {count}\")\n\nmeans print word, and use maxlen characters, then print count.\n"
] | [
0
] | [] | [] | [
"python",
"string_formatting",
"whitespace"
] | stackoverflow_0074606251_python_string_formatting_whitespace.txt |
Q:
How to read a jpg. from google storage as a path or file type
As the topic indicates...
I have tried two ways and neither of them works:
First:
I want to programmatically talk to GCS in Python, such as reading gs://{bucketname}/{blobname} as a path or a file. The only thing I can find is the gsutil module; however, it seems to be used on the command line rather than inside a Python application.
I found code here: Accessing data in google cloud bucket, but I am still confused about how to retrieve the object as the type I need. There is a jpg file in the bucket that I want to download for text detection; this will be deployed on Google Cloud Functions.
Second:
the download_as_bytes() method (Link to the blob document): I import the google.cloud.storage module and provide the GCP key; however, an error is raised saying the Blob has no attribute download_as_bytes().
Is there anything else I haven't tried? Thank you!
for the reference:
def text_detected(user_id):
bucket=storage_client.bucket(
'img_platecapture')
    blob = bucket.blob(f'{user_id}.jpg')  # f-string to match the uploaded object name
content= blob.download_as_bytes()
image = vision.Image(content=content) #insert a content
response = vision_client.text_detection(image=image)
if response.error.message:
raise Exception(
'{}\nFor more info on error messages, check: '
'https://cloud.google.com/apis/design/errors'.format(
response.error.message))
img = Image.open(input_file) #insert a path
draw = ImageDraw.Draw(img)
font = ImageFont.truetype("simsun.ttc", 18)
for text in response.text_annotations[1::]:
ocr = text.description
draw.text((bound.vertices[0].x-25, bound.vertices[0].y-25),ocr,fill=(255,0,0),font=font)
draw.polygon(
[
bound.vertices[0].x,
bound.vertices[0].y,
bound.vertices[1].x,
bound.vertices[1].y,
bound.vertices[2].x,
bound.vertices[2].y,
bound.vertices[3].x,
bound.vertices[3].y,
],
None,
'yellow',
)
texts=response.text_annotations
a=str(texts[0].description.split())
    b=re.sub(u"([^\u4e00-\u9fa5\u0030-\u0039])","",a)
b1="".join(b)
print("ε΅ζΈ¬ε°ηε°εηΊ:",b1)
return b1
@handler.add(MessageEvent, message=ImageMessage)
def handle_content_message(event):
message_content = line_bot_api.get_message_content(event.message.id)
user = line_bot_api.get_profile(event.source.user_id)
data=b''
for chunk in message_content.iter_content():
data+= chunk
global bucket_name
bucket_name = 'img_platecapture'
bucket = storage_client.bucket(bucket_name)
blob = bucket.blob(f'{user.user_id}.jpg')
blob.upload_from_string(data)
text_detected1=text_detected(user.user_id) ####Here's the problem
line_bot_api.reply_message(
event.reply_token,
messages=TextSendMessage(
text=text_detected1
))
reference code (gcsfs/fsspec):
gcs = gcsfs.GCSFileSystem()
f = fsspec.open(f"gs://img_platecapture/{user_id}.jpg")  # f-string; '.jpg' matches the upload code
with f as fp:  # the OpenFile returned by fsspec.open is itself a context manager
    content = fp.read()
image = vision.Image(content=content)
response = vision_client.text_detection(image=image)
A:
I'd be using fsspec's GCS filesystem implementation instead.
https://github.com/fsspec/gcsfs/
>>> import gcsfs
>>> fs = gcsfs.GCSFileSystem(project='my-google-project')
>>> fs.ls('my-bucket')
['my-file.txt']
>>> with fs.open('my-bucket/my-file.txt', 'rb') as f:
... print(f.read())
b'Hello, world'
https://gcsfs.readthedocs.io/en/latest/#examples
A:
You can do that with the Cloud Storage Python client :
def download_blob(bucket_name, source_blob_name, destination_file_name):
"""Downloads a blob from the bucket."""
# The ID of your GCS bucket
# bucket_name = "your-bucket-name"
# The ID of your GCS object
# source_blob_name = "storage-object-name"
# The path to which the file should be downloaded
# destination_file_name = "local/path/to/file"
storage_client = storage.Client()
bucket = storage_client.bucket(bucket_name)
# Construct a client side representation of a blob.
# Note `Bucket.blob` differs from `Bucket.get_blob` as it doesn't retrieve
# any content from Google Cloud Storage. As we don't need additional data,
# using `Bucket.blob` is preferred here.
blob = bucket.blob(source_blob_name)
# blob.download_to_filename(destination_file_name)
# blob.download_as_string()
blob.download_as_bytes()
print(
"Downloaded storage object {} from bucket {} to local file {}.".format(
source_blob_name, bucket_name, destination_file_name
)
)
You can use the following methods :
blob.download_to_filename(destination_file_name)
blob.download_as_string()
blob.download_as_bytes()
To be able to correctly use this library, you have to install the expected pip package in your virtual env.
Example of project structure :
my-project
requirements.txt
your_python_script.py
The requirements.txt file :
google-cloud-storage==2.6.0
Run the following command :
pip install -r requirements.txt
In your case maybe the package was not installed correctly in your virtual env, or an older version predating download_as_bytes is installed; that's why you could not access the download_as_bytes method.
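Tying this back to the question, a minimal sketch of the download step inside text_detected would look like this (the .jpg suffix is inferred from the upload code in the question):

bucket = storage_client.bucket('img_platecapture')
content = bucket.blob(f'{user_id}.jpg').download_as_bytes()
image = vision.Image(content=content)
response = vision_client.text_detection(image=image)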
| How to read a jpg. from google storage as a path or file type | As the topic indicates...
I have tried two ways and neither of them works:
First:
I want to programmatically talk to GCS in Python, such as reading gs://{bucketname}/{blobname} as a path or a file. The only thing I can find is the gsutil module; however, it seems to be used on the command line rather than inside a Python application.
I found code here: Accessing data in google cloud bucket, but I am still confused about how to retrieve the object as the type I need. There is a jpg file in the bucket that I want to download for text detection; this will be deployed on Google Cloud Functions.
Second:
the download_as_bytes() method (Link to the blob document): I import the google.cloud.storage module and provide the GCP key; however, an error is raised saying the Blob has no attribute download_as_bytes().
Is there anything else I haven't tried? Thank you!
For reference:
def text_detected(user_id):
bucket=storage_client.bucket(
'img_platecapture')
    blob = bucket.blob(f'{user_id}.jpg')  # f-string to match the uploaded object name
content= blob.download_as_bytes()
image = vision.Image(content=content) #insert a content
response = vision_client.text_detection(image=image)
if response.error.message:
raise Exception(
'{}\nFor more info on error messages, check: '
'https://cloud.google.com/apis/design/errors'.format(
response.error.message))
img = Image.open(input_file) #insert a path
draw = ImageDraw.Draw(img)
font = ImageFont.truetype("simsun.ttc", 18)
for text in response.text_annotations[1::]:
ocr = text.description
draw.text((bound.vertices[0].x-25, bound.vertices[0].y-25),ocr,fill=(255,0,0),font=font)
draw.polygon(
[
bound.vertices[0].x,
bound.vertices[0].y,
bound.vertices[1].x,
bound.vertices[1].y,
bound.vertices[2].x,
bound.vertices[2].y,
bound.vertices[3].x,
bound.vertices[3].y,
],
None,
'yellow',
)
texts=response.text_annotations
a=str(texts[0].description.split())
    b=re.sub(u"([^\u4e00-\u9fa5\u0030-\u0039])","",a)
b1="".join(b)
print("ε΅ζΈ¬ε°ηε°εηΊ:",b1)
return b1
@handler.add(MessageEvent, message=ImageMessage)
def handle_content_message(event):
message_content = line_bot_api.get_message_content(event.message.id)
user = line_bot_api.get_profile(event.source.user_id)
data=b''
for chunk in message_content.iter_content():
data+= chunk
global bucket_name
bucket_name = 'img_platecapture'
bucket = storage_client.bucket(bucket_name)
blob = bucket.blob(f'{user.user_id}.jpg')
blob.upload_from_string(data)
text_detected1=text_detected(user.user_id) ####Here's the problem
line_bot_api.reply_message(
event.reply_token,
messages=TextSendMessage(
text=text_detected1
))
reference code (gcsfs/fsspec):
gcs = gcsfs.GCSFileSystem()
f = fsspec.open(f"gs://img_platecapture/{user_id}.jpg")  # f-string; '.jpg' matches the upload code
with f as fp:  # the OpenFile returned by fsspec.open is itself a context manager
    content = fp.read()
image = vision.Image(content=content)
response = vision_client.text_detection(image=image)
| [
"I'd be using fsspec's GCS filesystem implementation instead.\nhttps://github.com/fsspec/gcsfs/\n>>> import gcsfs\n>>> fs = gcsfs.GCSFileSystem(project='my-google-project')\n>>> fs.ls('my-bucket')\n['my-file.txt']\n>>> with fs.open('my-bucket/my-file.txt', 'rb') as f:\n... print(f.read())\nb'Hello, world'\n\nhttps://gcsfs.readthedocs.io/en/latest/#examples\n",
"You can do that with the Cloud Storage Python client :\ndef download_blob(bucket_name, source_blob_name, destination_file_name):\n \"\"\"Downloads a blob from the bucket.\"\"\"\n # The ID of your GCS bucket\n # bucket_name = \"your-bucket-name\"\n\n # The ID of your GCS object\n # source_blob_name = \"storage-object-name\"\n\n # The path to which the file should be downloaded\n # destination_file_name = \"local/path/to/file\"\n\n storage_client = storage.Client()\n\n bucket = storage_client.bucket(bucket_name)\n\n # Construct a client side representation of a blob.\n # Note `Bucket.blob` differs from `Bucket.get_blob` as it doesn't retrieve\n # any content from Google Cloud Storage. As we don't need additional data,\n # using `Bucket.blob` is preferred here.\n blob = bucket.blob(source_blob_name)\n\n # blob.download_to_filename(destination_file_name)\n # blob.download_as_string()\n\n blob.download_as_bytes()\n\n print(\n \"Downloaded storage object {} from bucket {} to local file {}.\".format(\n source_blob_name, bucket_name, destination_file_name\n )\n )\n\nYou can use the following methods :\n blob.download_to_filename(destination_file_name)\n blob.download_as_string()\n\n blob.download_as_bytes()\n\nTo be able to correctly use this library, you have to install the expected pip package in your virtual env.\nExample of project structure :\nmy-project\n requirements.txt\n your_python_script.py\n\nThe requirements.txt file :\ngoogle-cloud-storage==2.6.0\n\nRun the following command :\npip install -r requirements.txt\n\nIn your case maybe the package was not installed correctly in your virtual env, that's why you could not access to the download_as_bytes method.\n"
] | [
0,
0
] | [] | [] | [
"google_cloud_storage",
"python"
] | stackoverflow_0074605414_google_cloud_storage_python.txt |
Q:
How to print all rows containing a part of input?
I have a csv file that contains sequences and gene names. I want to take an input from the user and print all the rows that contain the user input as a substring. As an example, my data is:
Gene 1 ATGCGGTCTA
Gene 2 ACGCCCATGA
Gene 3 TCGAC
When the user enters GC the outcome must be
Gene 1 ATGCGGTCTA
Gene 2 ACGCCCATGA
since both have GC in their sequences.
So far I try;
import csv
import sys
import pandas as pd
csv_file = csv.reader(open('DATA.csv', "r"), delimiter=",")
z=input('what would you like to search?').lower()
if z=='sequence':
s=input('Enter sequence : ').upper()
df = pd.read_csv('DATA.csv')
a = list(df['seq'])
b = ' '.join(str(s) for s in a)
c= b.find(s)
A:
Using pandas and assuming the column of your dataframe with the sequences is called sequences, you can do :
filtered_df = df[df['sequences'].str.contains(s)]
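A fuller sketch of how this fits the question's flow (the column name 'seq' is taken from the question; na=False is my addition to guard against missing values):

import pandas as pd

df = pd.read_csv('DATA.csv')
s = input('Enter sequence : ').upper()
filtered_df = df[df['seq'].str.contains(s, na=False)]
print(filtered_df.to_string(index=False))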
| How to print all rows containing a part of input? | I have a csv file that contains sequences and gene names. I want to take an input from the user and print all the rows that contain the user input as a substring. As an example, my data is:
Gene 1 ATGCGGTCTA
Gene 2 ACGCCCATGA
Gene 3 TCGAC
When the user enters GC the outcome must be
Gene 1 ATGCGGTCTA
Gene 2 ACGCCCATGA
since both have GC in their sequences.
So far I try;
import csv
import sys
import pandas as pd
csv_file = csv.reader(open('DATA.csv', "r"), delimiter=",")
z=input('what would you like to search?').lower()
if z=='sequence':
s=input('Enter sequence : ').upper()
df = pd.read_csv('DATA.csv')
a = list(df['seq'])
b = ' '.join(str(s) for s in a)
c= b.find(s)
| [
"Using pandas and assuming the column of your dataframe with the sequences is called sequences, you can do :\nfiltered_df = df[df['sequences'].str.contains(s)]\n"
] | [
0
] | [] | [] | [
"input",
"list",
"python",
"python_3.x",
"sequence"
] | stackoverflow_0074606197_input_list_python_python_3.x_sequence.txt |
Q:
How to use functools.partial for a class method?
I'd like to apply partial from functools to a class method.
from functools import partial
class A:
def __init__(self, i):
self.i = i
def process(self, constant):
self.result = self.i * constant
CONST = 2
FUNC = partial(A.process, CONST)
When I try:
FUNC(A(4))
I got this error:
'int' object has no attribute 'i'
It seems like CONST has been exchanged with the A object.
A:
You're binding one positional argument with partial which will go to the first argument of process, self. When you then call it you're passing A(4) as the second positional argument, constant. In other words, the order of arguments is messed up. You need to bind CONST to constant explicitly:
FUNC = partial(A.process, constant=CONST)
Alternatively this would do the same thing:
FUNC = lambda self: A.process(self, CONST)
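A quick usage check of the keyword-bound version (my addition):

FUNC = partial(A.process, constant=CONST)
obj = A(4)
FUNC(obj)          # equivalent to obj.process(constant=2)
print(obj.result)  # prints 8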
A:
Try creating a partial from an instance of A (note that A's __init__ requires i, and the bound argument then fills constant):
CONST = 2
a = A(4)
FUNC = partial(a.process, CONST)
You can then call FUNC with no further arguments:
FUNC()
| How to use functools.partial for a class method? | I'd like to apply partial from functools to a class method.
from functools import partial
class A:
def __init__(self, i):
self.i = i
def process(self, constant):
self.result = self.i * constant
CONST = 2
FUNC = partial(A.process, CONST)
When I try:
FUNC(A(4))
I got this error:
'int' object has no attribute 'i'
It seems like CONST has been exchanged with the A object.
| [
"You're binding one positional argument with partial which will go to the first argument of process, self. When you then call it you're passing A(4) as the second positional argument, constant. In other words, the order of arguments is messed up. You need to bind CONST to constant explicitly:\nFUNC = partial(A.process, constant=CONST)\n\nAlternatively this would do the same thing:\nFUNC = lambda self: A.process(self, CONST)\n\n",
"Try creating a partial from an instance of A.\nCONST = 2\na = A()\nFUNC = partial(a.process, CONST)\n\nYou can then call FUNC like so:\nFUNC(A(4))\n\n"
] | [
4,
0
] | [] | [] | [
"class",
"functools",
"object",
"partial",
"python"
] | stackoverflow_0060756161_class_functools_object_partial_python.txt |
Q:
YOLOv7 --save-txt Argument Path Change
I do object detection with a pretrained YOLOv7 model. I'm running detect.py like this: "python detect.py --source human.jpg --save-txt".
The --save-txt argument gives me the coordinates in the form of .txt
and it saves to 'runs/detect/exp/labels'
but I want it to save in 'runs/data/train'
I think the relevant code is on line 108, but I couldn't give the file path I wanted.
txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}') # img.txt
I tried this but I couldn't set the name:
txt_path = 'runs/data/train' # img.txt
Anyone who knows how to change the txt_path variable can help?
A:
not a very professional solution but it worked
img_name = p.name.split(".")
txt_path = f'runs/data/train/{img_name[0]}' # img.txt
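A slightly tidier variant (my suggestion; it relies on p being a pathlib.Path, which the original line 108 already implies via p.stem):

txt_path = f'runs/data/train/{p.stem}'  # p.stem is the filename without its extension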
| YOLOv7 --save-txt Argument Path Change | I do object detection with a pretrained YOLOv7 model. I'm running detect.py like this: "python detect.py --source human.jpg --save-txt".
The --save-txt argument gives me the coordinates in the form of .txt
and it saves to 'runs/detect/exp/labels'
but I want it to save in 'runs/data/train'
I think the relevant code is on line 108, but I couldn't give the file path I wanted.
txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}') # img.txt
I tried this but I couldn't set the name:
txt_path = 'runs/data/train' # img.txt
Anyone who knows how to change the txt_path variable can help?
| [
"not a very professional solution but it worked\nimg_name = p.name.split(\".\")\ntxt_path = f'runs/data/train/{img_name[0]}' # img.txt\n\n"
] | [
0
] | [] | [] | [
"path",
"python",
"python_3.x",
"yolo"
] | stackoverflow_0074594411_path_python_python_3.x_yolo.txt |
Q:
HTTP status code is not handled using scrapy and selenium
I am facing the error "HTTP status code is not handled or not allowed". How can I solve this error? I am using Selenium and Scrapy together, and I am also setting a user agent in settings, but the HTTP error is not resolved. Kindly recommend a solution. This is the page link: https://www.askgamblers.com/online-casinos/countries/uk
import scrapy
from scrapy.http import Request
from selenium import webdriver
import time
from scrapy_selenium import SeleniumRequest
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
from webdriver_manager.chrome import ChromeDriverManager
class TestSpider(scrapy.Spider):
name = 'test'
def start_requests(self):
options = webdriver.ChromeOptions()
options.add_argument("--no-sandbox")
options.add_argument("--disable-gpu")
options.add_argument("--window-size=1920x1080")
options.add_argument("--disable-extensions")
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
URL = 'https://www.askgamblers.com/online-casinos/countries/uk'
driver.get(URL)
time.sleep(3)
page_links =driver.find_elements(By.XPATH, "//div[@class='card__desc']//a[starts-with(@href, '/online')]")
for link in page_links:
href=link.get_attribute("href")
yield scrapy.Request(href)
driver.quit()
def parse(self, response):
        title = response.css("h1.ch-title::text").get()  # response.css takes only a selector string
yield{
'title':title
}
A:
You are getting such error because the website is under cloudflare protection.
https://www.askgamblers.com/online-casinos/countries/uk is using Cloudflare CDN/Proxy!
https://www.askgamblers.com/online-casinos/countries/uk is NOT using Cloudflare SSL
And Scrapy with Selenium (I tested scrapy-selenium) can't handle Cloudflare protection; only the plain Selenium engine can do the job. Finally, I integrate bs4 with Selenium to parse the content in a more robust way.
Script:
from selenium import webdriver
import time
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from webdriver_manager.chrome import ChromeDriverManager
from bs4 import BeautifulSoup
options = webdriver.ChromeOptions()
options.add_argument("--no-sandbox")
options.add_argument("--disable-gpu")
options.add_argument("--window-size=1920x1080")
options.add_argument("--disable-extensions")
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
URL = 'https://www.askgamblers.com/online-casinos/countries/uk'
driver.get(URL)
time.sleep(2)
urls= []
page_links =driver.find_elements(By.XPATH, "//div[@class='card__desc']//a[starts-with(@href, '/online')]")
for link in page_links:
href=link.get_attribute("href")
urls.append(href)
#print(href)
for url in urls:
driver.get(url)
time.sleep(1)
soup = BeautifulSoup(driver.page_source,"lxml")
try:
title=soup.select_one("h1.ch-title").get_text(strip=True)
print(title)
except:
print('empty')
pass
Output:
Mr.Play Casino
Bet365 Casino
Slotnite Casino
Trada Casino
PlayFrank Casino
Karamba Casino
Hello! Casino
21 Prive Casino
Casilando Casino
AHTI Games Casino
BacanaPlay Casino
Spinland Casino
Fun Casino
Slot Planet Casino
21 Casino
Conquer Casino
CasinoCasino
Barbados Casino
King Casino
Slots Magic Casino
Spin Station Casino
HeySpin Casino
CasinoLuck
Casino RedKings
| HTTP status code is not handled using scrapy and selenium | I am facing the error "HTTP status code is not handled or not allowed". How can I solve this error? I am using Selenium and Scrapy together, and I am also setting a user agent in settings, but the HTTP error is not resolved. Kindly recommend a solution. This is the page link: https://www.askgamblers.com/online-casinos/countries/uk
import scrapy
from scrapy.http import Request
from selenium import webdriver
import time
from scrapy_selenium import SeleniumRequest
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
from webdriver_manager.chrome import ChromeDriverManager
class TestSpider(scrapy.Spider):
name = 'test'
def start_requests(self):
options = webdriver.ChromeOptions()
options.add_argument("--no-sandbox")
options.add_argument("--disable-gpu")
options.add_argument("--window-size=1920x1080")
options.add_argument("--disable-extensions")
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
URL = 'https://www.askgamblers.com/online-casinos/countries/uk'
driver.get(URL)
time.sleep(3)
page_links =driver.find_elements(By.XPATH, "//div[@class='card__desc']//a[starts-with(@href, '/online')]")
for link in page_links:
href=link.get_attribute("href")
yield scrapy.Request(href)
driver.quit()
def parse(self, response):
        title = response.css("h1.ch-title::text").get()  # response.css takes only a selector string
yield{
'title':title
}
| [
"You are getting such error because the website is under cloudflare protection.\nhttps://www.askgamblers.com/online-casinos/countries/uk is using Cloudflare CDN/Proxy!\n\nhttps://www.askgamblers.com/online-casinos/countries/uk is NOT using Cloudflare SSL\n\nAnd Scrapy with Selenium/scrapy can't handle(I tested) cloudflare protection but only the powerful selenium engine can do the job.Finally, I integrate bs4 with selenium to parse content more robust way.\nScript:\nfrom selenium import webdriver\nimport time\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.common.by import By\nfrom webdriver_manager.chrome import ChromeDriverManager\nfrom bs4 import BeautifulSoup\n\noptions = webdriver.ChromeOptions()\noptions.add_argument(\"--no-sandbox\")\noptions.add_argument(\"--disable-gpu\")\noptions.add_argument(\"--window-size=1920x1080\")\noptions.add_argument(\"--disable-extensions\")\ndriver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))\n \nURL = 'https://www.askgamblers.com/online-casinos/countries/uk'\ndriver.get(URL)\ntime.sleep(2)\nurls= []\npage_links =driver.find_elements(By.XPATH, \"//div[@class='card__desc']//a[starts-with(@href, '/online')]\")\nfor link in page_links:\n href=link.get_attribute(\"href\")\n urls.append(href)\n #print(href)\n\nfor url in urls:\n driver.get(url)\n time.sleep(1)\n soup = BeautifulSoup(driver.page_source,\"lxml\")\n try:\n title=soup.select_one(\"h1.ch-title\").get_text(strip=True)\n print(title)\n except:\n print('empty')\n pass\n\nOutput:\nMr.Play Casino\nBet365 Casino\nSlotnite Casino\nTrada Casino\nPlayFrank Casino\nKaramba Casino\nHello! Casino\n21 Prive Casino\nCasilando Casino\nAHTI Games Casino\nBacanaPlay Casino\nSpinland Casino\nFun Casino\nSlot Planet Casino\n21 Casino\nConquer Casino\nCasinoCasino\nBarbados Casino\nKing Casino\nSlots Magic Casino\nSpin Station Casino\nHeySpin Casino\nCasinoLuck\nCasino RedKings\n\n"
] | [
0
] | [] | [] | [
"python",
"scrapy",
"selenium",
"web_scraping"
] | stackoverflow_0074605560_python_scrapy_selenium_web_scraping.txt |
Q:
I need a bit of lead with solving this fish detector problem by using a for loop
A fish-finder is a device used by anglers to find fish in a lake. If the fish-finder finds a fish, it will sound an alarm. It uses depth readings to determine whether to sound an alarm. For our purposes, the fish-finder will decide that a fish is swimming past if:
there are four consecutive depth readings which form a strictly increasing sequence (such as 3 4 7 9) (which we call "Fish Rising"), or
there are four consecutive depth readings which form a strictly decreasing sequence (such as 9 6 5 2) (which we call "Fish Diving"), or
there are four consecutive depth readings which are identical (which we call "Constant Depth").
All other readings will be considered random noise or debris, which we call "No Fish."
Your task is to read a sequence of depth readings and determine if the alarm will sound.
Sample Input
The input will be four positive integers, representing the depth readings. each integer will be on its own line of input.
Sample Output
The output is one of four possibilities. If the depth readings are increasing, then the output should be Fish Rising. If the depth readings are decreasing, then the output should be Fish Diving. If the depth readings are identical, then the output should be Fish At Constant Depth. Otherwise, the output should be No Fish.
Sample Input 1
30
10
20
20
Sample Output 1
No Fish
Sample Input 2
1
10
12
13
Sample Output 2
Fish Rising
I've solved it normally, but now I have to do it using for loops and I have absolutely NO idea how to even start. I have an example, but it isn't helping.
num=int(input('Enter the number: '))
k = int(input('Enter the times the number has been shifted : '))
sum=0
sum+=num
for i in range(1,k+1):
sum+=num*10**i
print(sum)
'for i in range(1,k+1):'
I solved this normally using elif and else statements, but as for the for loop part, I don't even know where to begin.
P.S: This is how I solved it.
d1, d2, d3 ,d4 = input("Enter first depth reading:"), input("Enter second depth reading:"), input("Enter third depth reading:"), input("Enter fourth depth reading:")
if int(d4) > int(d3) > int(d2) > int(d1):
print("Fish Rising")
elif int(d1) > int(d2) > int(d3) > int(d4):
print("Fish Diving")
elif int(d1) == int(d2) == int(d3) == int(d4):
print("Constant Depth")
else:
print("No Fish")
A:
The advantage of the for loop is that you can handle an arbitrary number of fish.
def fishies( depths ):
# This could be done easier with zip, but that's an advanced topic.
ups = 0
downs = 0
for i in range(len(depths)-1):
if depths[i] < depths[i+1]:
ups += 1
elif depths[i] > depths[i+1]:
downs += 1
if ups+downs == 0:
return "Constant"
    elif ups == len(depths)-1:
        return "Rising"
    elif downs == len(depths)-1:
        return "Diving"
return "No Fish"
print( fishies([3, 4, 7, 9]) )
print( fishies([9, 6, 5, 2, 1]) )
print( fishies([6, 6, 6, 6]) )
print( fishies([30, 10, 20, 20]) )
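As a usage note (my addition), the four depth readings from the problem statement — one per line — map naturally onto another loop:

depths = [int(input()) for _ in range(4)]
print(fishies(depths))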
| I need a bit of lead with solving this fish detector problem by using a for loop | A fish-finder is a device used by anglers to find fish in a lake. If the fish-finder finds a fish, it will sound an alarm. It uses depth readings to determine whether to sound an alarm. For our purposes, the fish-finder will decide that a fish is swimming past if:
there are four consecutive depth readings which form a strictly increasing sequence (such as 3 4 7 9) (which we call "Fish Rising"), or
there are four consecutive depth readings which form a strictly decreasing sequence (such as 9 6 5 2) (which we call "Fish Diving"), or
there are four consecutive depth readings which are identical (which we call "Constant Depth").
All other readings will be considered random noise or debris, which we call "No Fish."
Your task is to read a sequence of depth readings and determine if the alarm will sound.
Sample Input
The input will be four positive integers, representing the depth readings. each integer will be on its own line of input.
Sample Output
The output is one of four possibilities. If the depth readings are increasing, then the output should be Fish Rising. If the depth readings are decreasing, then the output should be Fish Diving. If the depth readings are identical, then the output should be Fish At Constant Depth. Otherwise, the output should be No Fish.
Sample Input 1
30
10
20
20
Sample Output 1
No Fish
Sample Input 2
1
10
12
13
Sample Output 2
Fish Rising
I've solved it normally, but now I have to do it using for loops and I have absolutely NO idea how to even start. I have an example, but it isn't helping.
num=int(input('Enter the number: '))
k = int(input('Enter the times the number has been shifted : '))
sum=0
sum+=num
for i in range(1,k+1):
sum+=num*10**i
print(sum)
'for i in range(1,k+1):'
I solved this normally using elif and else statements, but as for the for loop part, I don't even know where to begin.
P.S: This is how I solved it.
d1, d2, d3 ,d4 = input("Enter first depth reading:"), input("Enter second depth reading:"), input("Enter third depth reading:"), input("Enter fourth depth reading:")
if int(d4) > int(d3) > int(d2) > int(d1):
print("Fish Rising")
elif int(d1) > int(d2) > int(d3) > int(d4):
print("Fish Diving")
elif int(d1) == int(d2) == int(d3) == int(d4):
print("Constant Depth")
else:
print("No Fish")
| [
"The advantage of the for loop is that you can handle an arbitrary number of fish.\ndef fishies( depths ):\n # This could be done easier with zip, but that's an advanced topic.\n ups = 0\n downs = 0\n for i in range(len(depths)-1):\n if depths[i] < depths[i+1]:\n ups += 1\n elif depths[i] > depths[i+1]:\n downs += 1\n\n if ups+downs == 0:\n return \"Constant\"\n elif ups == len(depths)-1:\n return \"Diving\"\n elif downs == len(depths)-1:\n return \"Rising\"\n return \"No Fish\"\n\nprint( fishies([3, 4, 7, 9]) )\nprint( fishies([9, 6, 5, 2, 1]) )\nprint( fishies([6, 6, 6, 6]) )\nprint( fishies([30, 10, 20, 20]) )\n\n"
] | [
0
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074606389_python_python_3.x.txt |
Q:
snake game will fail when rapidly pressing three buttons at once
Currently I'm using pygame to create my first snake game, but a weird bug exists. When I press three buttons, such as up+left+right, simultaneously, my game automatically stops. Python might think the snake collides with its own body, so it stops the game, but the snake actually doesn't.
I don't know how I can fix this.
This is my code:
from all_class_and_setting import *
def main():
interaction = Interaction()
moved_snake = pygame.USEREVENT
# slow down the while loop into 120 ms per cycle
pygame.time.set_timer(moved_snake, 120)
while True:
display.fill((175, 215, 70))
interaction.fruit.create_fruit()
interaction.snake.create_snake()
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
sys.exit()
if event.type == moved_snake:
interaction.snake_moved()
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_w:
if interaction.snake.direction.y != 1:
interaction.snake.direction = Vector2(0, -1)
if event.key == pygame.K_s:
if interaction.snake.direction.y != -1:
interaction.snake.direction = Vector2(0, 1)
if event.key == pygame.K_a:
if interaction.snake.direction.x != 1:
interaction.snake.direction = Vector2(-1, 0)
if event.key == pygame.K_d:
if interaction.snake.direction.x != -1:
interaction.snake.direction = Vector2(1, 0)
if event.type == pygame.KEYDOWN:
# change snake moving direction if snake is not moving in opposite of the changed direction
if event.key == pygame.K_UP:
if interaction.snake.direction.y != 1:
interaction.snake.direction = Vector2(0, -1)
if event.key == pygame.K_DOWN:
if interaction.snake.direction.y != -1:
interaction.snake.direction = Vector2(0, 1)
if event.key == pygame.K_LEFT:
if interaction.snake.direction.x != 1:
interaction.snake.direction = Vector2(-1, 0)
if event.key == pygame.K_RIGHT:
if interaction.snake.direction.x != -1:
interaction.snake.direction = Vector2(1, 0)
pygame.display.update()
run_speed.tick(60)
main()
This is all the classes and variables I created:
import pygame
import random
import sys
from pygame.math import Vector2
pygame.init()
pygame.display.set_caption("CS150_Final_Project:Snake")
run_speed = pygame.time.Clock()
cell_size = 40
cell_number = 20
display = pygame.display.set_mode((cell_size * cell_number, cell_size * cell_number))
apple = pygame.image.load("Snake-main/apple.png").convert_alpha()
class Fruit:
"""
"""
def __init__(self):
# create an x and y position
self.x = random.randint(0, cell_number - 1)
self.y = random.randint(0, cell_number - 1)
self.pos = Vector2(self.x, self.y)
def create_fruit(self):
fruit_rect = pygame.Rect(int(self.pos.x * cell_size), int(self.pos.y * cell_size),cell_size,cell_size)
display.blit(apple,fruit_rect)
#pygame.draw.rect(display, (98, 166, 140), fruit_rect)
def new_fruit(self):
self.x = random.randint(0, cell_number - 1)
self.y = random.randint(0, cell_number - 1)
self.pos = Vector2(self.x, self.y)
class Snake:
def __init__(self):
# treat snake as different position vectors
self.body = [Vector2(3, 2), Vector2(2, 2), Vector2(1, 2)]
self.direction = Vector2(1, 0)
def create_snake(self):
for body in self.body:
x_pos = int(body.x * cell_size)
y_pos = int(body.y * cell_size)
snake = pygame.Rect(x_pos, y_pos, cell_size, cell_size)
pygame.draw.rect(display, (0, 150, 232), snake)
def move_snake(self):
"""
change self.body into body_copy
SNAKE -> None
"""
# add new position vectors into the list's index zero as the head of the snake
body_copy = self.body[:-1]
body_copy.insert(0, body_copy[0] + self.direction)
self.body = body_copy
def add_length(self):
body_copy = self.body[:]
body_copy.insert(0, body_copy[0] + self.direction)
self.body = body_copy
class Interaction:
def __init__(self):
self.snake = Snake()
self.fruit = Fruit()
def snake_moved(self):
self.snake.move_snake()
self.collision()
self.death()
def collision(self):
if self.snake.body[0] == self.fruit.pos:
self.fruit.new_fruit()
self.snake.add_length()
def death(self):
if not 0 <= self.snake.body[0].x < cell_number or not 0 <= self.snake.body[0].y < cell_number:
pygame.quit()
sys.exit()
for block in self.snake.body[1:]:
if block == self.snake.body[0]:
pygame.quit()
sys.exit()
A:
The problem is that more than one KEYDOWN event can be handled in one frame. Do not change interaction.snake.direction in the event loop, but set a local variable with the new direction and update interaction.snake.direction after the event loop:
new_direction = interaction.snake.direction
for event in pygame.event.get():
# [...]
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_w or event.key == pygame.K_UP:
if interaction.snake.direction.y != 1:
new_direction = Vector2(0, -1)
if event.key == pygame.K_s or event.key == pygame.K_DOWN:
if interaction.snake.direction.y != -1:
new_direction = Vector2(0, 1)
if event.key == pygame.K_a or event.key == pygame.K_LEFT:
if interaction.snake.direction.x != 1:
new_direction = Vector2(-1, 0)
if event.key == pygame.K_d or event.key == pygame.K_RIGHT:
if interaction.snake.direction.x != -1:
new_direction = Vector2(1, 0)
interaction.snake.direction = new_direction
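A hedged alternative to the fix above, not part of the original answer: buffer direction changes in a queue and consume at most one valid turn per movement tick, so a single tick can never apply two turns. The names KEY_TO_DIR, queue_turn and apply_one_turn are hypothetical:
from collections import deque
import pygame
from pygame.math import Vector2
# Hypothetical key-to-direction mapping.
KEY_TO_DIR = {
    pygame.K_UP: Vector2(0, -1), pygame.K_DOWN: Vector2(0, 1),
    pygame.K_LEFT: Vector2(-1, 0), pygame.K_RIGHT: Vector2(1, 0),
}
pending = deque(maxlen=2)  # keep at most two buffered turns
def queue_turn(event):
    # Call from the event loop instead of changing the direction directly.
    if event.type == pygame.KEYDOWN and event.key in KEY_TO_DIR:
        pending.append(KEY_TO_DIR[event.key])
def apply_one_turn(snake):
    # Call once per moved_snake tick, before moving the snake.
    while pending:
        d = pending.popleft()
        if d + snake.direction != Vector2(0, 0):  # skip 180-degree reversals
            snake.direction = d
            return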
| snake game will fail when rapidly pressing three buttons at once | Currently I'm using pygame to create my first snake game, but a weird bug exists. When I press three buttons, such as up+left+right, simultaneously, my game automatically stops. Python might think the snake collides with its own body, so it stops the game, but the snake actually doesn't.
I don't know how I can fix this.
This is my code:
from all_class_and_setting import *
def main():
interaction = Interaction()
moved_snake = pygame.USEREVENT
# slow down the while loop into 120 ms per cycle
pygame.time.set_timer(moved_snake, 120)
while True:
display.fill((175, 215, 70))
interaction.fruit.create_fruit()
interaction.snake.create_snake()
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
sys.exit()
if event.type == moved_snake:
interaction.snake_moved()
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_w:
if interaction.snake.direction.y != 1:
interaction.snake.direction = Vector2(0, -1)
if event.key == pygame.K_s:
if interaction.snake.direction.y != -1:
interaction.snake.direction = Vector2(0, 1)
if event.key == pygame.K_a:
if interaction.snake.direction.x != 1:
interaction.snake.direction = Vector2(-1, 0)
if event.key == pygame.K_d:
if interaction.snake.direction.x != -1:
interaction.snake.direction = Vector2(1, 0)
if event.type == pygame.KEYDOWN:
# change snake moving direction if snake is not moving in opposite of the changed direction
if event.key == pygame.K_UP:
if interaction.snake.direction.y != 1:
interaction.snake.direction = Vector2(0, -1)
if event.key == pygame.K_DOWN:
if interaction.snake.direction.y != -1:
interaction.snake.direction = Vector2(0, 1)
if event.key == pygame.K_LEFT:
if interaction.snake.direction.x != 1:
interaction.snake.direction = Vector2(-1, 0)
if event.key == pygame.K_RIGHT:
if interaction.snake.direction.x != -1:
interaction.snake.direction = Vector2(1, 0)
pygame.display.update()
run_speed.tick(60)
main()
This is all the classes and variables I created:
import pygame
import random
import sys
from pygame.math import Vector2
pygame.init()
pygame.display.set_caption("CS150_Final_Project:Snake")
run_speed = pygame.time.Clock()
cell_size = 40
cell_number = 20
display = pygame.display.set_mode((cell_size * cell_number, cell_size * cell_number))
apple = pygame.image.load("Snake-main/apple.png").convert_alpha()
class Fruit:
"""
"""
def __init__(self):
# create an x and y position
self.x = random.randint(0, cell_number - 1)
self.y = random.randint(0, cell_number - 1)
self.pos = Vector2(self.x, self.y)
def create_fruit(self):
fruit_rect = pygame.Rect(int(self.pos.x * cell_size), int(self.pos.y * cell_size),cell_size,cell_size)
display.blit(apple,fruit_rect)
#pygame.draw.rect(display, (98, 166, 140), fruit_rect)
def new_fruit(self):
self.x = random.randint(0, cell_number - 1)
self.y = random.randint(0, cell_number - 1)
self.pos = Vector2(self.x, self.y)
class Snake:
def __init__(self):
# treat snake as different position vectors
self.body = [Vector2(3, 2), Vector2(2, 2), Vector2(1, 2)]
self.direction = Vector2(1, 0)
def create_snake(self):
for body in self.body:
x_pos = int(body.x * cell_size)
y_pos = int(body.y * cell_size)
snake = pygame.Rect(x_pos, y_pos, cell_size, cell_size)
pygame.draw.rect(display, (0, 150, 232), snake)
def move_snake(self):
"""
change self.body into body_copy
SNAKE -> None
"""
# add new position vectors into the list's index zero as the head of the snake
body_copy = self.body[:-1]
body_copy.insert(0, body_copy[0] + self.direction)
self.body = body_copy
def add_length(self):
body_copy = self.body[:]
body_copy.insert(0, body_copy[0] + self.direction)
self.body = body_copy
class Interaction:
def __init__(self):
self.snake = Snake()
self.fruit = Fruit()
def snake_moved(self):
self.snake.move_snake()
self.collision()
self.death()
def collision(self):
if self.snake.body[0] == self.fruit.pos:
self.fruit.new_fruit()
self.snake.add_length()
def death(self):
if not 0 <= self.snake.body[0].x < cell_number or not 0 <= self.snake.body[0].y < cell_number:
pygame.quit()
sys.exit()
for block in self.snake.body[1:]:
if block == self.snake.body[0]:
pygame.quit()
sys.exit()
| [
"The problem is that more than on KEYDOWN has be handled in on frame. Do not change interaction.snake.direction in the event loop, but set a local variable with the new direction and update interaction.snake.direction after the event loop:\nnew_direction = interaction.snake.direction\n\nfor event in pygame.event.get():\n # [...]\n\n if event.type == pygame.KEYDOWN:\n if event.key == pygame.K_w or event.key == pygame.K_UP:\n if interaction.snake.direction.y != 1:\n new_direction = Vector2(0, -1)\n if event.key == pygame.K_s or event.key == pygame.K_DOWN:\n if interaction.snake.direction.y != -1:\n new_direction = Vector2(0, 1)\n if event.key == pygame.K_a or event.key == pygame.K_LEFT:\n if interaction.snake.direction.x != 1:\n new_direction = Vector2(-1, 0)\n if event.key == pygame.K_d or event.key == pygame.K_RIGHT:\n if interaction.snake.direction.x != -1:\n new_direction = Vector2(1, 0)\n\ninteraction.snake.direction = new_direction\n\n"
] | [
0
] | [] | [] | [
"pygame",
"python"
] | stackoverflow_0074606448_pygame_python.txt |
Q:
Tweepy.errors.NotFound: 404 Not Found 50 - User not found
I am trying to make a twitter bot in python using tweepy. When running the below code I get this error:
tweepy.errors.NotFound: 404 Not Found
50 - User not found.
My code:
import tweepy
import logging
from config import create_api
import json
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger()
class FavRetweetListener(tweepy.Stream):
def __init__(self, api):
self.api = api
self.me = api.get_user()
def on_status(self, tweet):
logger.info(f"Processing tweet id {tweet.id}")
if tweet.in_reply_to_status_id is not None or \
tweet.user.id == self.me.id:
# This tweet is a reply or I'm its author so, ignore it
return
if not tweet.favorited:
# Mark it as Liked, since we have not done it yet
try:
tweet.favorite()
except Exception as e:
logger.error("Error on fav", exc_info=True)
if not tweet.retweeted:
# Retweet, since we have not retweeted it yet
try:
tweet.retweet()
except Exception as e:
logger.error("Error on fav and retweet", exc_info=True)
def on_error(self, status):
logger.error(status)
def main(keywords):
api = create_api()
tweets_listener = FavRetweetListener(api)
stream = tweepy.Stream(api.auth, tweets_listener)
stream.filter(track=keywords, languages=["en"])
if __name__ == "__main__":
main(["Python", "Tweepy"])
Is this something to do with the `tweet.user.id == self.me.id` comparison?
A:
The user does not exist anymore in Twitter.
You must catch the error with an except statement in python.
You did not provide the exact line where the error happens, but you should try to catch it with something like this:
try:
api.get_user()
except tweepy.errors.NotFound:
print("user not found")
| Tweepy.errors.NotFound: 404 Not Found 50 - User not found | I am trying to make a twitter bot in python using tweepy. When running the below code I get this error:
tweepy.errors.NotFound: 404 Not Found
50 - User not found.
My code:
import tweepy
import logging
from config import create_api
import json
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger()
class FavRetweetListener(tweepy.Stream):
def __init__(self, api):
self.api = api
self.me = api.get_user()
def on_status(self, tweet):
logger.info(f"Processing tweet id {tweet.id}")
if tweet.in_reply_to_status_id is not None or \
tweet.user.id == self.me.id:
# This tweet is a reply or I'm its author so, ignore it
return
if not tweet.favorited:
# Mark it as Liked, since we have not done it yet
try:
tweet.favorite()
except Exception as e:
logger.error("Error on fav", exc_info=True)
if not tweet.retweeted:
# Retweet, since we have not retweeted it yet
try:
tweet.retweet()
except Exception as e:
logger.error("Error on fav and retweet", exc_info=True)
def on_error(self, status):
logger.error(status)
def main(keywords):
api = create_api()
tweets_listener = FavRetweetListener(api)
stream = tweepy.Stream(api.auth, tweets_listener)
stream.filter(track=keywords, languages=["en"])
if __name__ == "__main__":
main(["Python", "Tweepy"])
Is this something to do with the `tweet.user.id == self.me.id` comparison?
| [
"The user does not exist anymore in Twitter.\nYou must catch the error with an except statement in python.\nYou did not provide the exact line where the error happen but you should try to catch it with something like this:\ntry:\n api.get_user()\nexcept tweepy.error.NotFound:\n print(\"user not found\")\n\n"
] | [
0
] | [] | [] | [
"bots",
"python",
"python_3.x",
"tweepy",
"twitter"
] | stackoverflow_0072015504_bots_python_python_3.x_tweepy_twitter.txt |
Q:
cx_Oracle for Oracle Linux 7 not working after install
For the last week I have been trying to get cx_oracle installed and working.
I started with an Oracle 19 appliance which is on Oracle Linux 7.
I used the official oracle site to install cx_oracle as listed below.
The install seems to have worked fine, but when I try to import the module, it is not found.
I checked all the env variables and the path, and spent countless hours trying to get this to work; what am I missing?
If anyone could please point me to my mistake, I would really appreciate it.
Here are all the steps I have taken so far:
Installing cx_Oracle for Python 3
To install cx_Oracle for Python 3 on Oracle Linux 7:
$ sudo yum -y install oraclelinux-developer-release-el7
$ sudo yum -y install oracle-instantclient-release-el7
$ sudo yum -y install python36-cx_Oracle
https://yum.oracle.com/oracle-linux-python.html#cx_OraclePython3FromLatest
[oracle@localhost tmp]$ yum list installed |grep cx
python36-cx_Oracle.x86_64 8.3.0-1.el7 @ol7_developer
[oracle@localhost tmp]$ yum list installed |grep instant
oracle-instantclient-basic.x86_64 21.8.0.0.0-1 @ol7_oracle_instantclient21
oracle-instantclient-release-el7.x86_64
[oracle@localhost ~]$ yum search cx_oracle
Loaded plugins: langpacks, ulninfo
============================================================= N/S matched: cx_oracle ==============================================================
cx_Oracle-12c-py27.x86_64 : Python interface to Oracle
cx_Oracle-py27.x86_64 : Python interface to Oracle
python-cx_Oracle.x86_64 : Python interface to Oracle
python-cx_Oracle-12c.x86_64 : Python interface to Oracle
python36-cx_Oracle.x86_64 : Python interface to Oracle
Name and summary matches only, use "search all" for everything.
[oracle@localhost ~]$ sudo yum install python36-cx_Oracle.x86_64
Loaded plugins: langpacks, ulninfo
Package python36-cx_Oracle-8.3.0-1.el7.x86_64 already installed and latest version
Nothing to do
[oracle@localhost ~]$ python3
Python 3.11.0 (main, Nov 26 2022, 17:15:54) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44.0.3)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cx_oracle
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'cx_oracle'
>>> quit()
[oracle@localhost ~]$ python
Python 2.7.5 (default, Jul 1 2022, 08:35:16)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-44.0.3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import cx_oracle
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named cx_oracle
>>> quit()
[oracle@localhost ~]$ python --version
Python 2.7.5
[oracle@localhost ~]$ python3 --version
Python 3.11.0
[oracle@localhost ~]$ which python
/usr/bin/python
[oracle@localhost ~]$ which python3
/usr/local/bin/python3
[oracle@localhost ~]$ echo $PATH
/home/oracle/Desktop/Database_Track/coffeeshop:/home/oracle/java/jdk1.8.0_201/bin:/home/oracle/bin:/home/oracle/sqlcl/bin:/home/oracle/sqldeveloper:/home/oracle/datamodeler:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/oracle/sqlcl/bin:/home/oracle/sqldeveloper:/home/oracle/bin:/home/oracle/.local/bin:/home/oracle/bin
[oracle@localhost ~]$ echo $ORACLE_BASE
/u01/app/oracle
[oracle@localhost ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/version/db_1
[oracle@localhost ~]$ echo $LD_LIBRARY_PATH
/usr/lib/oracle/21/client64/lib/
[oracle@localhost ~]$ echo $LD_LIBRARY_PATH64
/usr/lib/oracle/21/client64/lib/
[oracle@localhost ~]$ env
XDG_SESSION_ID=2
HOSTNAME=localhost.localdomain
SELINUX_ROLE_REQUESTED=
TERM=xterm-256color
SHELL=/bin/bash
HISTSIZE=1000
SSH_CLIENT=192.168.59.1 65195 22
SELINUX_USE_CURRENT_RANGE=
SSH_TTY=/dev/pts/2
USER=oracle
LD_LIBRARY_PATH=/usr/lib/oracle/21/client64/lib/
TWO_TASK=ORCL
LS_COLORS=rs=0:di=38;5;27:ln=38;5;51:mh=44;38;5;15:pi=40;38;5;11:so=38;5;13:do=38;5;5:bd=48;5;232;38;5;11:cd=48;5;232;38;5;3:or=48;5;232;38;5;9:mi=05;48;5;232;38;5;15:su=48;5;196;38;5;15:sg=48;5;11;38;5;16:ca=48;5;196;38;5;226:tw=48;5;10;38;5;16:ow=48;5;10;38;5;21:st=48;5;21;38;5;15:ex=38;5;34:*.tar=38;5;9:*.tgz=38;5;9:*.arc=38;5;9:*.arj=38;5;9:*.taz=38;5;9:*.lha=38;5;9:*.lz4=38;5;9:*.lzh=38;5;9:*.lzma=38;5;9:*.tlz=38;5;9:*.txz=38;5;9:*.tzo=38;5;9:*.t7z=38;5;9:*.zip=38;5;9:*.z=38;5;9:*.Z=38;5;9:*.dz=38;5;9:*.gz=38;5;9:*.lrz=38;5;9:*.lz=38;5;9:*.lzo=38;5;9:*.xz=38;5;9:*.bz2=38;5;9:*.bz=38;5;9:*.tbz=38;5;9:*.tbz2=38;5;9:*.tz=38;5;9:*.deb=38;5;9:*.rpm=38;5;9:*.jar=38;5;9:*.war=38;5;9:*.ear=38;5;9:*.sar=38;5;9:*.rar=38;5;9:*.alz=38;5;9:*.ace=38;5;9:*.zoo=38;5;9:*.cpio=38;5;9:*.7z=38;5;9:*.rz=38;5;9:*.cab=38;5;9:*.jpg=38;5;13:*.jpeg=38;5;13:*.gif=38;5;13:*.bmp=38;5;13:*.pbm=38;5;13:*.pgm=38;5;13:*.ppm=38;5;13:*.tga=38;5;13:*.xbm=38;5;13:*.xpm=38;5;13:*.tif=38;5;13:*.tiff=38;5;13:*.png=38;5;13:*.svg=38;5;13:*.svgz=38;5;13:*.mng=38;5;13:*.pcx=38;5;13:*.mov=38;5;13:*.mpg=38;5;13:*.mpeg=38;5;13:*.m2v=38;5;13:*.mkv=38;5;13:*.webm=38;5;13:*.ogm=38;5;13:*.mp4=38;5;13:*.m4v=38;5;13:*.mp4v=38;5;13:*.vob=38;5;13:*.qt=38;5;13:*.nuv=38;5;13:*.wmv=38;5;13:*.asf=38;5;13:*.rm=38;5;13:*.rmvb=38;5;13:*.flc=38;5;13:*.avi=38;5;13:*.fli=38;5;13:*.flv=38;5;13:*.gl=38;5;13:*.dl=38;5;13:*.xcf=38;5;13:*.xwd=38;5;13:*.yuv=38;5;13:*.cgm=38;5;13:*.emf=38;5;13:*.axv=38;5;13:*.anx=38;5;13:*.ogv=38;5;13:*.ogx=38;5;13:*.aac=38;5;45:*.au=38;5;45:*.flac=38;5;45:*.mid=38;5;45:*.midi=38;5;45:*.mka=38;5;45:*.mp3=38;5;45:*.mpc=38;5;45:*.ogg=38;5;45:*.ra=38;5;45:*.wav=38;5;45:*.axa=38;5;45:*.oga=38;5;45:*.spx=38;5;45:*.xspf=38;5;45:
LD_LIBRARY_PATH64=/usr/lib/oracle/21/client64/lib/
GNOME_CHECK=1
ORACLE_BASE=/u01/app/oracle
MAIL=/var/spool/mail/oracle
PATH=/home/oracle/Desktop/Database_Track/coffeeshop:/home/oracle/java/jdk1.8.0_201/bin:/home/oracle/bin:/home/oracle/sqlcl/bin:/home/oracle/sqldeveloper:/home/oracle/datamodeler:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/oracle/sqlcl/bin:/home/oracle/sqldeveloper:/home/oracle/bin:/home/oracle/.local/bin:/home/oracle/bin
PWD=/home/oracle
JAVA_HOME=/home/oracle/java/jdk1.8.0_201
LANG=en_US.UTF-8
SELINUX_LEVEL_REQUESTED=
HISTCONTROL=ignoredups
SHLVL=1
HOME=/home/oracle
LOGNAME=oracle
JAVAENV=true
XDG_DATA_DIRS=/home/oracle/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share:/usr/share
SSH_CONNECTION=192.168.59.1 65195 192.168.59.130 22
LESSOPEN=||/usr/bin/lesspipe.sh %s
TMZ=GMT
XDG_RUNTIME_DIR=/run/user/1000
ORACLE_HOME=/u01/app/oracle/product/version/db_1
_=/usr/bin/env
A:
You now seem to have multiple cx_Oracle packages installed, which isn't helping untangle your problems (and not something I want to replicate).
To start from scratch with a clean OL7 image, follow the "Installing Python 3 from the Oracle Linux 7 Latest Repository" section of https://yum.oracle.com/oracle-linux-python.html:
$ sudo yum -y install python3
and then the "Installing cx_Oracle for Python 3" section:
$ sudo yum -y install oraclelinux-developer-release-el7
$ sudo yum -y install oracle-instantclient-release-el7
$ sudo yum -y install python36-cx_Oracle
You can then use the driver:
cjones@localhost:~$ python3
Python 3.6.8 (default, Nov 18 2021, 10:07:16)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-44.0.3)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cx_Oracle
>>> cx_Oracle.version
'8.3.0'
Update: as commented on by Anthony in another answer, make sure you use the correct case for import cx_Oracle.
If you install a version of Python or cx_Oracle that is not listed on the page, you will need to install cx_Oracle using 'pip' packages from PyPI instead of RPMs. Note that cx_Oracle on PyPI currently has prebuilt packages up to Python 3.10.
Some other notes:
You are setting ORACLE_HOME and LD_LIBRARY_PATH. You should never set ORACLE_HOME with Instant Client. And you don't need to set LD_LIBRARY_PATH when using Instant Client 19 (or later) RPM packages.
cx_Oracle 8 is only available for Python 3
The latest version of cx_Oracle is now called python-oracledb, see the release announcement.
To install python-oracledb on a clean OL7 machine:
sudo yum -y install python3
sudo yum -y install oraclelinux-developer-release-el7
sudo yum -y install python3-oracledb
You can then use it immediately in the new Thin mode without needing Oracle Instant Client:
cjones@localhost:~$ python3
Python 3.6.8 (default, Nov 18 2021, 10:07:16)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-44.0.3)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import oracledb
>>> oracledb.version
'1.1.1'
(There are no casing or underscore issues with this new name!!)
To use python-oracledb in Thick mode, install Instant Client manually:
sudo yum -y install oracle-instantclient-release-el7
sudo yum -y install oracle-instantclient-basic
The python-oracledb RPMs are currently only available for the basic Python 3 version. If you have other versions, then install python-oracledb with pip, see the python-oracledb instructions Installing python-oracledb on Linux.
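A quick diagnostic worth running on the machine in the question, a sketch resting on one assumption: the python36-cx_Oracle RPM installs the module only for the OL7 system interpreter at /usr/bin/python3 (Python 3.6), not for the separately installed Python 3.11 at /usr/local/bin/python3:
/usr/bin/python3 -c "import cx_Oracle; print(cx_Oracle.version)"  # system Python 3.6: should succeed
/usr/local/bin/python3 -c "import cx_Oracle"  # Python 3.11: ModuleNotFoundError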
A:
The name of the module is cx_Oracle, not cx_oracle. Note the case!
| cx_Oracle for Oracle Linux 7 not working after install | For the last week I have been trying to get cx_oracle installed and working.
I started with an Oracle 19 appliance which is on Oracle Linux 7.
I used the official oracle site to install cx_oracle as listed below.
The install seems to have worked fine, but when I try to import the module, it is not found.
I checked all the env variables and the path, and spent countless hours trying to get this to work; what am I missing?
If anyone could please point me to my mistake, I would really appreciate it.
Here are all the steps I have taken so far:
Installing cx_Oracle for Python 3
To install cx_Oracle for Python 3 on Oracle Linux 7:
$ sudo yum -y install oraclelinux-developer-release-el7
$ sudo yum -y install oracle-instantclient-release-el7
$ sudo yum -y install python36-cx_Oracle
https://yum.oracle.com/oracle-linux-python.html#cx_OraclePython3FromLatest
[oracle@localhost tmp]$ yum list installed |grep cx
python36-cx_Oracle.x86_64 8.3.0-1.el7 @ol7_developer
[oracle@localhost tmp]$ yum list installed |grep instant
oracle-instantclient-basic.x86_64 21.8.0.0.0-1 @ol7_oracle_instantclient21
oracle-instantclient-release-el7.x86_64
[oracle@localhost ~]$ yum search cx_oracle
Loaded plugins: langpacks, ulninfo
============================================================= N/S matched: cx_oracle ==============================================================
cx_Oracle-12c-py27.x86_64 : Python interface to Oracle
cx_Oracle-py27.x86_64 : Python interface to Oracle
python-cx_Oracle.x86_64 : Python interface to Oracle
python-cx_Oracle-12c.x86_64 : Python interface to Oracle
python36-cx_Oracle.x86_64 : Python interface to Oracle
Name and summary matches only, use "search all" for everything.
[oracle@localhost ~]$ sudo yum install python36-cx_Oracle.x86_64
Loaded plugins: langpacks, ulninfo
Package python36-cx_Oracle-8.3.0-1.el7.x86_64 already installed and latest version
Nothing to do
[oracle@localhost ~]$ python3
Python 3.11.0 (main, Nov 26 2022, 17:15:54) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44.0.3)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cx_oracle
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'cx_oracle'
>>> quit()
[oracle@localhost ~]$ python
Python 2.7.5 (default, Jul 1 2022, 08:35:16)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-44.0.3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import cx_oracle
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named cx_oracle
>>> quit()
[oracle@localhost ~]$ python --version
Python 2.7.5
[oracle@localhost ~]$ python3 --version
Python 3.11.0
[oracle@localhost ~]$ which python
/usr/bin/python
[oracle@localhost ~]$ which python3
/usr/local/bin/python3
[oracle@localhost ~]$ echo $PATH
/home/oracle/Desktop/Database_Track/coffeeshop:/home/oracle/java/jdk1.8.0_201/bin:/home/oracle/bin:/home/oracle/sqlcl/bin:/home/oracle/sqldeveloper:/home/oracle/datamodeler:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/oracle/sqlcl/bin:/home/oracle/sqldeveloper:/home/oracle/bin:/home/oracle/.local/bin:/home/oracle/bin
[oracle@localhost ~]$ echo $ORACLE_BASE
/u01/app/oracle
[oracle@localhost ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/version/db_1
[oracle@localhost ~]$ echo $LD_LIBRARY_PATH
/usr/lib/oracle/21/client64/lib/
[oracle@localhost ~]$ echo $LD_LIBRARY_PATH64
/usr/lib/oracle/21/client64/lib/
[oracle@localhost ~]$ env
XDG_SESSION_ID=2
HOSTNAME=localhost.localdomain
SELINUX_ROLE_REQUESTED=
TERM=xterm-256color
SHELL=/bin/bash
HISTSIZE=1000
SSH_CLIENT=192.168.59.1 65195 22
SELINUX_USE_CURRENT_RANGE=
SSH_TTY=/dev/pts/2
USER=oracle
LD_LIBRARY_PATH=/usr/lib/oracle/21/client64/lib/
TWO_TASK=ORCL
LS_COLORS=rs=0:di=38;5;27:ln=38;5;51:mh=44;38;5;15:pi=40;38;5;11:so=38;5;13:do=38;5;5:bd=48;5;232;38;5;11:cd=48;5;232;38;5;3:or=48;5;232;38;5;9:mi=05;48;5;232;38;5;15:su=48;5;196;38;5;15:sg=48;5;11;38;5;16:ca=48;5;196;38;5;226:tw=48;5;10;38;5;16:ow=48;5;10;38;5;21:st=48;5;21;38;5;15:ex=38;5;34:*.tar=38;5;9:*.tgz=38;5;9:*.arc=38;5;9:*.arj=38;5;9:*.taz=38;5;9:*.lha=38;5;9:*.lz4=38;5;9:*.lzh=38;5;9:*.lzma=38;5;9:*.tlz=38;5;9:*.txz=38;5;9:*.tzo=38;5;9:*.t7z=38;5;9:*.zip=38;5;9:*.z=38;5;9:*.Z=38;5;9:*.dz=38;5;9:*.gz=38;5;9:*.lrz=38;5;9:*.lz=38;5;9:*.lzo=38;5;9:*.xz=38;5;9:*.bz2=38;5;9:*.bz=38;5;9:*.tbz=38;5;9:*.tbz2=38;5;9:*.tz=38;5;9:*.deb=38;5;9:*.rpm=38;5;9:*.jar=38;5;9:*.war=38;5;9:*.ear=38;5;9:*.sar=38;5;9:*.rar=38;5;9:*.alz=38;5;9:*.ace=38;5;9:*.zoo=38;5;9:*.cpio=38;5;9:*.7z=38;5;9:*.rz=38;5;9:*.cab=38;5;9:*.jpg=38;5;13:*.jpeg=38;5;13:*.gif=38;5;13:*.bmp=38;5;13:*.pbm=38;5;13:*.pgm=38;5;13:*.ppm=38;5;13:*.tga=38;5;13:*.xbm=38;5;13:*.xpm=38;5;13:*.tif=38;5;13:*.tiff=38;5;13:*.png=38;5;13:*.svg=38;5;13:*.svgz=38;5;13:*.mng=38;5;13:*.pcx=38;5;13:*.mov=38;5;13:*.mpg=38;5;13:*.mpeg=38;5;13:*.m2v=38;5;13:*.mkv=38;5;13:*.webm=38;5;13:*.ogm=38;5;13:*.mp4=38;5;13:*.m4v=38;5;13:*.mp4v=38;5;13:*.vob=38;5;13:*.qt=38;5;13:*.nuv=38;5;13:*.wmv=38;5;13:*.asf=38;5;13:*.rm=38;5;13:*.rmvb=38;5;13:*.flc=38;5;13:*.avi=38;5;13:*.fli=38;5;13:*.flv=38;5;13:*.gl=38;5;13:*.dl=38;5;13:*.xcf=38;5;13:*.xwd=38;5;13:*.yuv=38;5;13:*.cgm=38;5;13:*.emf=38;5;13:*.axv=38;5;13:*.anx=38;5;13:*.ogv=38;5;13:*.ogx=38;5;13:*.aac=38;5;45:*.au=38;5;45:*.flac=38;5;45:*.mid=38;5;45:*.midi=38;5;45:*.mka=38;5;45:*.mp3=38;5;45:*.mpc=38;5;45:*.ogg=38;5;45:*.ra=38;5;45:*.wav=38;5;45:*.axa=38;5;45:*.oga=38;5;45:*.spx=38;5;45:*.xspf=38;5;45:
LD_LIBRARY_PATH64=/usr/lib/oracle/21/client64/lib/
GNOME_CHECK=1
ORACLE_BASE=/u01/app/oracle
MAIL=/var/spool/mail/oracle
PATH=/home/oracle/Desktop/Database_Track/coffeeshop:/home/oracle/java/jdk1.8.0_201/bin:/home/oracle/bin:/home/oracle/sqlcl/bin:/home/oracle/sqldeveloper:/home/oracle/datamodeler:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/oracle/sqlcl/bin:/home/oracle/sqldeveloper:/home/oracle/bin:/home/oracle/.local/bin:/home/oracle/bin
PWD=/home/oracle
JAVA_HOME=/home/oracle/java/jdk1.8.0_201
LANG=en_US.UTF-8
SELINUX_LEVEL_REQUESTED=
HISTCONTROL=ignoredups
SHLVL=1
HOME=/home/oracle
LOGNAME=oracle
JAVAENV=true
XDG_DATA_DIRS=/home/oracle/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share:/usr/share
SSH_CONNECTION=192.168.59.1 65195 192.168.59.130 22
LESSOPEN=||/usr/bin/lesspipe.sh %s
TMZ=GMT
XDG_RUNTIME_DIR=/run/user/1000
ORACLE_HOME=/u01/app/oracle/product/version/db_1
_=/usr/bin/env
| [
"You now seem to have multiple cx_Oracle packages installed, which isn't helping untangle your problems (and not something I want to replicate).\nTo start from scratch with a clean OL7 image, follow the \"Installing Python 3 from the Oracle Linux 7 Latest Repository\" section of https://yum.oracle.com/oracle-linux-python.html:\n$ sudo yum -y install python3\n\nand then the \"Installing cx_Oracle for Python 3\" section:\n$ sudo yum -y install oraclelinux-developer-release-el7\n$ sudo yum -y install oracle-instantclient-release-el7\n$ sudo yum -y install python36-cx_Oracle\n\nYou can then use the driver:\ncjones@localhost:~$ python3\nPython 3.6.8 (default, Nov 18 2021, 10:07:16) \n[GCC 4.8.5 20150623 (Red Hat 4.8.5-44.0.3)] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import cx_Oracle\n>>> cx_Oracle.version\n'8.3.0'\n\nUpdate: as commented on by Anthony in another answer, make sure you use the correct case for import cx_Oracle.\nIf you install a version of Python or cx_Oracle that is not listed on the page, you will need to install cx_Oracle using 'pip' packages from PyPI instead of RPMs. Note that cx_Oracle on PyPI currently has prebuilt packages up to Python 3.10.\nSome other notes:\n\nYou are setting ORACLE_HOME and LD_LIBRARY_PATH. You should never set ORACLE_HOME with Instant Client. And you don't need to set LD_LIBRARY_PATH when using Instant Client 19 (or later) RPM packages.\ncx_Oracle 8 is only available for Python 3\n\nThe latest version of cx_Oracle is now called python-oracledb, see the release announcement.\nTo install python-oracledb on a clean OL7 machine:\nsudo yum -y install python3\nsudo yum -y install oraclelinux-developer-release-el7\nsudo yum -y install python3-oracledb\n\nYou can then use it immediately in the new Thin mode without needing Oracle Instant Client:\ncjones@localhost:~$ python3\nPython 3.6.8 (default, Nov 18 2021, 10:07:16)\n[GCC 4.8.5 20150623 (Red Hat 4.8.5-44.0.3)] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import oracledb\n>>> oracledb.version\n'1.1.1'\n\n(There are no casing or underscore issues with this new name!!)\nTo use python-oracledb in Thick mode, install Instant Client manually:\nsudo yum -y install oracle-instantclient-release-el7\nsudo yum -y install oracle-instantclient-basic\n\nThe python-oracledb RPMs are currently only available for the basic Python 3 version. If you have other versions, then install python-oracledb with pip, see the python-oracledb instructions Installing python-oracledb on Linux.\n",
"The name of the module is cx_Oracle, not cx_oracle. Note the case!\n"
] | [
1,
0
] | [] | [] | [
"cx_oracle",
"oracle19c",
"oraclelinux",
"python",
"rhel7"
] | stackoverflow_0074605940_cx_oracle_oracle19c_oraclelinux_python_rhel7.txt |
Q:
Creating functions (or lambdas) in a loop (or comprehension)
I'm trying to create functions inside of a loop:
functions = []
for i in range(3):
def f():
return i
# alternatively: f = lambda: i
functions.append(f)
The problem is that all functions end up being the same. Instead of returning 0, 1, and 2, all three functions return 2:
print([f() for f in functions])
# expected output: [0, 1, 2]
# actual output: [2, 2, 2]
Why is this happening, and what should I do to get 3 different functions that output 0, 1, and 2 respectively?
A:
You're running into a problem with late binding -- each function looks up i as late as possible (thus, when called after the end of the loop, i will be set to 2).
Easily fixed by forcing early binding: change def f(): to def f(i=i): like this:
def f(i=i):
return i
Default values (the right-hand i in i=i is a default value for argument name i, which is the left-hand i in i=i) are looked up at def time, not at call time, so essentially they're a way to specifically looking for early binding.
If you're worried about f getting an extra argument (and thus potentially being called erroneously), there's a more sophisticated way which involves using a closure as a "function factory":
def make_f(i):
def f():
return i
return f
and in your loop use f = make_f(i) instead of the def statement.
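A quick usage check of the factory (expected output shown in the comment):
functions = [make_f(i) for i in range(3)]
print([f() for f in functions])  # [0, 1, 2]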
A:
The Explanation
The issue here is that the value of i is not saved when the function f is created. Rather, f looks up the value of i when it is called.
If you think about it, this behavior makes perfect sense. In fact, it's the only reasonable way functions can work. Imagine you have a function that accesses a global variable, like this:
global_var = 'foo'
def my_function():
print(global_var)
global_var = 'bar'
my_function()
When you read this code, you would - of course - expect it to print "bar", not "foo", because the value of global_var has changed after the function was declared. The same thing is happening in your own code: By the time you call f, the value of i has changed and been set to 2.
The Solution
There are actually many ways to solve this problem. Here are a few options:
Force early binding of i by using it as a default argument
Unlike closure variables (like i), default arguments are evaluated immediately when the function is defined:
for i in range(3):
def f(i=i): # <- right here is the important bit
return i
functions.append(f)
To give a little bit of insight into how/why this works: A function's default arguments are stored as an attribute of the function; thus the current value of i is snapshotted and saved.
>>> i = 0
>>> def f(i=i):
... pass
>>> f.__defaults__ # this is where the current value of i is stored
(0,)
>>> # assigning a new value to i has no effect on the function's default arguments
>>> i = 5
>>> f.__defaults__
(0,)
Use a function factory to capture the current value of i in a closure
The root of your problem is that i is a variable that can change. We can work around this problem by creating another variable that is guaranteed to never change - and the easiest way to do this is a closure:
def f_factory(i):
def f():
return i # i is now a *local* variable of f_factory and can't ever change
return f
for i in range(3):
f = f_factory(i)
functions.append(f)
Use functools.partial to bind the current value of i to f
functools.partial lets you attach arguments to an existing function. In a way, it too is a kind of function factory.
import functools
def f(i):
return i
for i in range(3):
f_with_i = functools.partial(f, i) # important: use a different variable than "f"
functions.append(f_with_i)
Caveat: These solutions only work if you assign a new value to the variable. If you modify the object stored in the variable, you'll experience the same problem again:
>>> i = [] # instead of an int, i is now a *mutable* object
>>> def f(i=i):
... print('i =', i)
...
>>> i.append(5) # instead of *assigning* a new value to i, we're *mutating* it
>>> f()
i = [5]
Notice how i still changed even though we turned it into a default argument! If your code mutates i, then you must bind a copy of i to your function, like so:
def f(i=i.copy()):
f = f_factory(i.copy())
f_with_i = functools.partial(f, i.copy())
A:
To add onto @Aran-Fey's excellent answer, in the second solution you might also wish to modify the variable inside your function which can be accomplished with the keyword nonlocal:
def f_factory(i):
def f(offset):
nonlocal i
i += offset
return i # i lives in f_factory's enclosing scope; nonlocal lets f modify it between calls
return f
for i in range(3):
f = f_factory(i)
print(f(10))
A:
You have to save each value of i in a separate space in memory, e.g.:
class StaticValue:
val = None
def __init__(self, value: int):
StaticValue.val = value
@staticmethod
def get_lambda():
return lambda x: x*StaticValue.val
class NotStaticValue:
def __init__(self, value: int):
self.val = value
def get_lambda(self):
return lambda x: x*self.val
if __name__ == '__main__':
def foo():
return [lambda x: x*i for i in range(4)]
def bar():
return [StaticValue(i).get_lambda() for i in range(4)]
def foo_repaired():
return [NotStaticValue(i).get_lambda() for i in range(4)]
print([x(2) for x in foo()])
print([x(2) for x in bar()])
print([x(2) for x in foo_repaired()])
Result:
[6, 6, 6, 6]
[6, 6, 6, 6]
[0, 2, 4, 6]
| Creating functions (or lambdas) in a loop (or comprehension) | I'm trying to create functions inside of a loop:
functions = []
for i in range(3):
def f():
return i
# alternatively: f = lambda: i
functions.append(f)
The problem is that all functions end up being the same. Instead of returning 0, 1, and 2, all three functions return 2:
print([f() for f in functions])
# expected output: [0, 1, 2]
# actual output: [2, 2, 2]
Why is this happening, and what should I do to get 3 different functions that output 0, 1, and 2 respectively?
| [
"You're running into a problem with late binding -- each function looks up i as late as possible (thus, when called after the end of the loop, i will be set to 2). \nEasily fixed by forcing early binding: change def f(): to def f(i=i): like this:\ndef f(i=i):\n return i\n\nDefault values (the right-hand i in i=i is a default value for argument name i, which is the left-hand i in i=i) are looked up at def time, not at call time, so essentially they're a way to specifically looking for early binding.\nIf you're worried about f getting an extra argument (and thus potentially being called erroneously), there's a more sophisticated way which involved using a closure as a \"function factory\":\ndef make_f(i):\n def f():\n return i\n return f\n\nand in your loop use f = make_f(i) instead of the def statement.\n",
"The Explanation\nThe issue here is that the value of i is not saved when the function f is created. Rather, f looks up the value of i when it is called.\nIf you think about it, this behavior makes perfect sense. In fact, it's the only reasonable way functions can work. Imagine you have a function that accesses a global variable, like this:\nglobal_var = 'foo'\n\ndef my_function():\n print(global_var)\n\nglobal_var = 'bar'\nmy_function()\n\nWhen you read this code, you would - of course - expect it to print \"bar\", not \"foo\", because the value of global_var has changed after the function was declared. The same thing is happening in your own code: By the time you call f, the value of i has changed and been set to 2.\nThe Solution\nThere are actually many ways to solve this problem. Here are a few options:\n\nForce early binding of i by using it as a default argument\nUnlike closure variables (like i), default arguments are evaluated immediately when the function is defined:\nfor i in range(3):\n def f(i=i): # <- right here is the important bit\n return i\n\n functions.append(f)\n\nTo give a little bit of insight into how/why this works: A function's default arguments are stored as an attribute of the function; thus the current value of i is snapshotted and saved.\n>>> i = 0\n>>> def f(i=i):\n... pass\n>>> f.__defaults__ # this is where the current value of i is stored\n(0,)\n>>> # assigning a new value to i has no effect on the function's default arguments\n>>> i = 5\n>>> f.__defaults__\n(0,)\n\nUse a function factory to capture the current value of i in a closure\nThe root of your problem is that i is a variable that can change. We can work around this problem by creating another variable that is guaranteed to never change - and the easiest way to do this is a closure:\ndef f_factory(i):\n def f():\n return i # i is now a *local* variable of f_factory and can't ever change\n return f\n\nfor i in range(3): \n f = f_factory(i)\n functions.append(f)\n\nUse functools.partial to bind the current value of i to f\nfunctools.partial lets you attach arguments to an existing function. In a way, it too is a kind of function factory.\nimport functools\n\ndef f(i):\n return i\n\nfor i in range(3): \n f_with_i = functools.partial(f, i) # important: use a different variable than \"f\"\n functions.append(f_with_i)\n\n\nCaveat: These solutions only work if you assign a new value to the variable. If you modify the object stored in the variable, you'll experience the same problem again:\n>>> i = [] # instead of an int, i is now a *mutable* object\n>>> def f(i=i):\n... print('i =', i)\n...\n>>> i.append(5) # instead of *assigning* a new value to i, we're *mutating* it\n>>> f()\ni = [5]\n\nNotice how i still changed even though we turned it into a default argument! If your code mutates i, then you must bind a copy of i to your function, like so:\n\ndef f(i=i.copy()):\nf = f_factory(i.copy())\nf_with_i = functools.partial(f, i.copy())\n\n",
"To add onto @Aran-Fey's excellent answer, in the second solution you might also wish to modify the variable inside your function which can be accomplished with the keyword nonlocal:\ndef f_factory(i):\n def f(offset):\n nonlocal i\n i += offset\n return i # i is now a *local* variable of f_factory and can't ever change\n return f\n\nfor i in range(3): \n f = f_factory(i)\n print(f(10))\n\n",
"You have to save the each of the i value in a separate space in memory e.g.:\nclass StaticValue:\n val = None\n\n def __init__(self, value: int):\n StaticValue.val = value\n\n @staticmethod\n def get_lambda():\n return lambda x: x*StaticValue.val\n\n\nclass NotStaticValue:\n def __init__(self, value: int):\n self.val = value\n\n def get_lambda(self):\n return lambda x: x*self.val\n\n\nif __name__ == '__main__':\n def foo():\n return [lambda x: x*i for i in range(4)]\n\n def bar():\n return [StaticValue(i).get_lambda() for i in range(4)]\n\n def foo_repaired():\n return [NotStaticValue(i).get_lambda() for i in range(4)]\n\n print([x(2) for x in foo()])\n print([x(2) for x in bar()])\n print([x(2) for x in foo_repaired()])\n\nResult:\n[6, 6, 6, 6]\n[6, 6, 6, 6]\n[0, 2, 4, 6]\n\n"
] | [
235,
49,
0,
0
] | [
"You can try like this:\nl=[]\nfor t in range(10):\n def up(y):\n print(y)\n l.append(up)\nl[5]('printing in 5th function')\n\n",
"just modify the last line for\nfunctions.append(f())\n\nEdit: This is because f is a function - python treats functions as first-class citizens and you can pass them around in variables to be called later on. So what your original code is doing is appending the function itself to the list, while what you want to do is append the results of the function to the list, which is what the line above achieves by calling the function.\n"
] | [
-1,
-2
] | [
"python"
] | stackoverflow_0003431676_python.txt |
Q:
How do I make these values sit next to each other with commas in between them
I want these to be formatted as 1girl, belt, etc. What can I do to achieve this?
1girl
belt
breasts
gloves
long_hair
long_sleeves
medium_breasts
military
solo
uniform
white_gloves
Here's what I have
I want them side by side with commas
I don't know what to do to get what I want. It's probably something either really simple or hard
A:
You can try either of these, depending on what you need:
newstr = ', '.join(text.split(' ')) ## if the separator is a space (text holds your input string)
print(newstr)
or
import re
newstr = ', '.join(re.split(r'\s|_', text)) ## if the separator is whitespace or _
print(newstr)
(The variable is named text rather than str so it does not shadow the built-in type.)
're' is a module made to work with regex (regular expression)
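Since the values in the question appear one per line, a minimal sketch that joins them directly, assuming the input is held in a single multi-line string named text:
text = "1girl\nbelt\nwhite_gloves"  # one value per line, as in the question
print(', '.join(text.splitlines()))  # 1girl, belt, white_gloves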
| How do I make these values sit next to each other with commas in between them | I want these to be formatted as 1girl, belt, etc. What can I do to achieve this?
1girl
belt
breasts
gloves
long_hair
long_sleeves
medium_breasts
military
solo
uniform
white_gloves
Here's what I have
I want them side by side with commas
I don't know what to do to get what I want. It's probably something either really simple or hard
| [
"You can try either of these, depending on what you need:\nnewstr = ', '.join(str.split(' ')) ## if you want the separator to be an empty space\nprint(newstr)\n\nor\nimport re\nnewstr = ', '.join(re.split('\\s|_', str)) ## if you want the separator to be an empty space or _\nprint(newstr)\n\n're' is a module made to work with regex (regular expression)\n"
] | [
0
] | [] | [] | [
"format",
"python"
] | stackoverflow_0074606210_format_python.txt |
Q:
Building a "half" polar diagram using matplotlib
I would like to draw this using matplotlib and for now I have this using this code :
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(111, projection='polar', xlim=(-90, 90))
ax.set_thetamin(-90) # set the limits
ax.set_thetamax(90)
ax.set_theta_offset(.5*np.pi) # point the origin towards the top
ax.set_thetagrids(range(-90, 100, 15)) # set the gridlines
angle = np.linspace(-90,90,19)
valeurs = np.array([4.9,6.5,11,29.1,44.9,57.2,76,87.7,97.2,100,95,83,68,48.1,31.1,19.7,9,5.8,5.3])
plt.plot(angle,valeurs,"o ")
plt.show()
The graph I got from matplotlib is extremely different from what I was trying to draw, and I believe I defined my values array correctly
A:
You need to convert the angles to radians: set_thetamin/set_thetamax take degrees, but plotting on polar axes expects radians.
plt.plot(np.deg2rad(angle), valeurs, "o")
| Building a "half" polar diagram using matplotlib | I would like to draw this using matplotlib and for now I have this using this code :
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(111, projection='polar', xlim=(-90, 90))
ax.set_thetamin(-90) # set the limits
ax.set_thetamax(90)
ax.set_theta_offset(.5*np.pi) # point the origin towards the top
ax.set_thetagrids(range(-90, 100, 15)) # set the gridlines
angle = np.linspace(-90,90,19)
valeurs = np.array([4.9,6.5,11,29.1,44.9,57.2,76,87.7,97.2,100,95,83,68,48.1,31.1,19.7,9,5.8,5.3])
plt.plot(angle,valeurs,"o ")
plt.show()
The graph I got from matplotlib is extremely different from what I was trying to draw, and I believe I defined my values array correctly
| [
"You need to change the values of angles:\nplt.plot(angle / np.pi * 2,valeurs,\"o \")\n\n\n"
] | [
0
] | [] | [] | [
"diagram",
"matplotlib",
"polar_coordinates",
"python"
] | stackoverflow_0074606531_diagram_matplotlib_polar_coordinates_python.txt |
Q:
Embedding a custom widget into a stacked widget page WITHOUT using qtdesigner
I am trying to embed two custom widgets into the pages of a stacked widget on a dialog page. I have mocked up my problem using the main script dialog.py which has a stacked widget with two pages promoting widget_1_UI from widget_1.py and widget_2_UI from widget_2.py to each page, respectively. It doesn't throw any errors when run but also doesn't show the content from the two modules in the stacked widget.
dialog.py
from PyQt5 import QtCore, QtGui, QtWidgets
from widget_1 import widget_1_UI
from widget_2 import widget_2_UI
class Ui_Dialog(object):
def setupUi(self, Dialog):
Dialog.setWindowTitle("Dialog")
Dialog.resize(640, 480)
self.verticalLayout = QtWidgets.QVBoxLayout(Dialog)
self.stackedWidget = QtWidgets.QStackedWidget(Dialog)
self.page_1 = widget_1_UI()
self.stackedWidget.addWidget(self.page_1)
self.page_2 = widget_2_UI()
self.stackedWidget.addWidget(self.page_2)
self.verticalLayout.addWidget(self.stackedWidget)
self.btn_nextPage = QtWidgets.QPushButton(Dialog)
self.btn_nextPage.setText("PushButton")
self.verticalLayout.addWidget(self.btn_nextPage)
self.btn_nextPage.clicked.connect(self.nextPage)
def nextPage(self):
pageNum = self.stackedWidget.currentIndex()
if pageNum == 0:
self.stackedWidget.setCurrentIndex(1)
else:
self.stackedWidget.setCurrentIndex(0)
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
Dialog = QtWidgets.QDialog()
ui = Ui_Dialog()
ui.setupUi(Dialog)
Dialog.show()
sys.exit(app.exec_())
widget_1.py
from PyQt5 import QtCore, QtGui, QtWidgets
class widget_1_UI(QtWidgets.QWidget):
def setupUi(self, Form):
Form.resize(614, 411)
self.verticalLayout = QtWidgets.QVBoxLayout(Form)
self.label = QtWidgets.QLabel(Form)
font = QtGui.QFont()
font.setPointSize(26)
self.label.setFont(font)
self.label.setAlignment(QtCore.Qt.AlignCenter)
self.label.setText("AY YO BBY GRILL")
self.verticalLayout.addWidget(self.label)
self.pushButton = QtWidgets.QPushButton(Form)
self.pushButton.setText("Dis one do nuffin")
self.verticalLayout.addWidget(self.pushButton)
widget_2.py
from PyQt5 import QtCore, QtGui, QtWidgets
class widget_2_UI(QtWidgets.QWidget):
def setupUi(self, Form):
Form.resize(614, 411)
self.verticalLayout = QtWidgets.QVBoxLayout(Form)
self.label = QtWidgets.QLabel(Form)
font = QtGui.QFont()
font.setPointSize(26)
self.label.setFont(font)
self.label.setAlignment(QtCore.Qt.AlignCenter)
self.label.setText("HOW YOU DOIN")
self.verticalLayout.addWidget(self.label)
self.pushButton = QtWidgets.QPushButton(Form)
self.pushButton.setText("Dis do nuffin 2")
self.verticalLayout.addWidget(self.pushButton)
A:
Figured it out! In dialog.py widget_1_UI() is set as the promoted widget for the first page of the stacked widget but there is nothing to tell it what should be inside it. Adding self.page_1.setupUi(self.page_1) does just that.
self.page_1 = widget_1_UI()
self.page_1.setupUi(self.page_1)
self.stackedWidget.addWidget(self.page_1)
The same is applied to widget_2_UI.
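A common variant (a sketch, not part of the answer above) is to have each page widget call setupUi on itself in __init__, so Ui_Dialog only needs to instantiate it:
class widget_1_UI(QtWidgets.QWidget):
    def __init__(self, parent=None):
        super().__init__(parent)
        self.setupUi(self)  # the widget builds its own UI on construction
    def setupUi(self, Form):
        ...  # body unchanged from widget_1.py
With that change, self.page_1 = widget_1_UI() is enough on its own.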
| Embedding a custom widget into a stacked widget page WITHOUT using qtdesigner | I am trying to embed two custom widgets into the pages of a stacked widget on a dialog page. I have mocked up my problem using the main script dialog.py which has a stacked widget with two pages promoting widget_1_UI from widget_1.py and widget_2_UI from widget_2.py to each page, respectively. It doesn't throw any errors when run but also doesn't show the content from the two modules in the stacked widget.
dialog.py
from PyQt5 import QtCore, QtGui, QtWidgets
from widget_1 import widget_1_UI
from widget_2 import widget_2_UI
class Ui_Dialog(object):
def setupUi(self, Dialog):
Dialog.setWindowTitle("Dialog")
Dialog.resize(640, 480)
self.verticalLayout = QtWidgets.QVBoxLayout(Dialog)
self.stackedWidget = QtWidgets.QStackedWidget(Dialog)
self.page_1 = widget_1_UI()
self.stackedWidget.addWidget(self.page_1)
self.page_2 = widget_2_UI()
self.stackedWidget.addWidget(self.page_2)
self.verticalLayout.addWidget(self.stackedWidget)
self.btn_nextPage = QtWidgets.QPushButton(Dialog)
self.btn_nextPage.setText("PushButton")
self.verticalLayout.addWidget(self.btn_nextPage)
self.btn_nextPage.clicked.connect(self.nextPage)
def nextPage(self):
pageNum = self.stackedWidget.currentIndex()
if pageNum == 0:
self.stackedWidget.setCurrentIndex(1)
else:
self.stackedWidget.setCurrentIndex(0)
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
Dialog = QtWidgets.QDialog()
ui = Ui_Dialog()
ui.setupUi(Dialog)
Dialog.show()
sys.exit(app.exec_())
widget_1.py
from PyQt5 import QtCore, QtGui, QtWidgets
class widget_1_UI(QtWidgets.QWidget):
def setupUi(self, Form):
Form.resize(614, 411)
self.verticalLayout = QtWidgets.QVBoxLayout(Form)
self.label = QtWidgets.QLabel(Form)
font = QtGui.QFont()
font.setPointSize(26)
self.label.setFont(font)
self.label.setAlignment(QtCore.Qt.AlignCenter)
self.label.setText("AY YO BBY GRILL")
self.verticalLayout.addWidget(self.label)
self.pushButton = QtWidgets.QPushButton(Form)
self.pushButton.setText("Dis one do nuffin")
self.verticalLayout.addWidget(self.pushButton)
widget_2.py
from PyQt5 import QtCore, QtGui, QtWidgets
class widget_2_UI(QtWidgets.QWidget):
def setupUi(self, Form):
Form.resize(614, 411)
self.verticalLayout = QtWidgets.QVBoxLayout(Form)
self.label = QtWidgets.QLabel(Form)
font = QtGui.QFont()
font.setPointSize(26)
self.label.setFont(font)
self.label.setAlignment(QtCore.Qt.AlignCenter)
self.label.setText("HOW YOU DOIN")
self.verticalLayout.addWidget(self.label)
self.pushButton = QtWidgets.QPushButton(Form)
self.pushButton.setText("Dis do nuffin 2")
self.verticalLayout.addWidget(self.pushButton)
| [
"Figured it out! In dialog.py widget_1_UI() is set as the promoted widget for the first page of the stacked widget but there is nothing to tell it what should be inside it. Adding self.page_1.setupUi(self.page_1) does just that.\nself.page_1 = widget_1_UI()\nself.page_1.setupUi(self.page_1)\nself.stackedWidget.addWidget(self.page_1)\n\nThe same is applied to widget_2_UI.\n"
] | [
0
] | [] | [] | [
"embed",
"python",
"qstackedwidget"
] | stackoverflow_0074602812_embed_python_qstackedwidget.txt |
Q:
Selenium doesn't open the specified URL and shows data:,
I am trying to open the URL using selenium in chrome. I have chromedriver available with me.
Following is the code I want to execute.
from selenium import webdriver
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--disable-infobars")
driver = webdriver.Chrome(executable_path="./chromedriver", chrome_options=chrome_options)
driver.get("https://google.com")
The browser is opened successfully but it doesn't open the specified URL. The URL in the browser is data:,.
Any help will be greatly appreciated. Please!
Please see the attached image.
Note: Selenium version : 3.14.0
I get the following error on closing the chrome tab.
File "test.py", line 6, in <module>
driver = webdriver.Chrome(executable_path="./chromedriver", chrome_options=chrome_options)
File "/home/speedious/anaconda3/lib/python3.5/site-packages/selenium/webdriver/chrome/webdriver.py", line 75, in __init__
desired_capabilities=desired_capabilities)
File "/home/speedious/anaconda3/lib/python3.5/site-packages/selenium/webdriver/remote/webdriver.py", line 156, in __init__
self.start_session(capabilities, browser_profile)
File "/home/speedious/anaconda3/lib/python3.5/site-packages/selenium/webdriver/remote/webdriver.py", line 251, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/home/speedious/anaconda3/lib/python3.5/site-packages/selenium/webdriver/remote/webdriver.py", line 320, in execute
self.error_handler.check_response(response)
File "/home/speedious/anaconda3/lib/python3.5/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: Chrome failed to start: exited normally
(chrome not reachable)
(The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
(Driver info: chromedriver=2.42.591071 (0b695ff80972cc1a65a5cd643186d2ae582cd4ac),platform=Linux 4.10.0-37-generic x86_64)
A:
This error message...
selenium.common.exceptions.WebDriverException: Message: unknown error: Chrome failed to start: exited normally
(chrome not reachable)
(The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
...implies that the ChromeDriver instance was unable to start the Chrome Browser process.
Your main issue is that google-chrome is no longer present at the expected default location of /usr/bin/
As per ChromeDriver - Requirements the server expects you to have Chrome installed in the default location for each system:
1 For Linux systems, the ChromeDriver expects /usr/bin/google-chrome to be a symlink to the actual Chrome binary. You can also override the Chrome binary location as follows:
A Windows OS based example:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
options = Options()
options.add_argument("start-maximized")
options.binary_location = "C:\\path\\to\\chrome.exe"
driver = webdriver.Chrome(executable_path=r'C:\path\to\chromedriver.exe', chrome_options=options)
driver.get('http://google.com/')
Additional Considerations
Upgrade ChromeDriver to current ChromeDriver v2.42 level.
Keep Chrome version between Chrome v68-70 levels. (as per ChromeDriver v2.42 release notes)
Clean your Project Workspace through your IDE and Rebuild your project with required dependencies only.
If your base Web Client version is too old, then uninstall it through Revo Uninstaller and install a recent GA and released version of Web Client.
Execute your @Test.
A:
I found another reason for this behavior, which I don't see listed here. I believe Selenium libraries require Chrome Developer Tools to not be disabled. I was running into this issue and getting this regkey set got me past it.
Key - HKLM\Software\Policies\Google\Chrome
DWORD - DeveloperToolsDisabled
Value - 0
| Selenium doesn't open the specified URL and shows data:, | I am trying to open the URL using selenium in chrome. I have chromedriver available with me.
Following is the code I want to execute.
from selenium import webdriver
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--disable-infobars")
driver = webdriver.Chrome(executable_path="./chromedriver", chrome_options=chrome_options)
driver.get("https://google.com")
The browser is opened successfully but it doesn't open the specified URL. The URL in the browser is data:,.
Any help will be greatly appreciated. Please!
Please see the attached image.
Note: Selenium version : 3.14.0
I get the following error on closing the chrome tab.
File "test.py", line 6, in <module>
driver = webdriver.Chrome(executable_path="./chromedriver", chrome_options=chrome_options)
File "/home/speedious/anaconda3/lib/python3.5/site-packages/selenium/webdriver/chrome/webdriver.py", line 75, in __init__
desired_capabilities=desired_capabilities)
File "/home/speedious/anaconda3/lib/python3.5/site-packages/selenium/webdriver/remote/webdriver.py", line 156, in __init__
self.start_session(capabilities, browser_profile)
File "/home/speedious/anaconda3/lib/python3.5/site-packages/selenium/webdriver/remote/webdriver.py", line 251, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/home/speedious/anaconda3/lib/python3.5/site-packages/selenium/webdriver/remote/webdriver.py", line 320, in execute
self.error_handler.check_response(response)
File "/home/speedious/anaconda3/lib/python3.5/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: Chrome failed to start: exited normally
(chrome not reachable)
(The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
(Driver info: chromedriver=2.42.591071 (0b695ff80972cc1a65a5cd643186d2ae582cd4ac),platform=Linux 4.10.0-37-generic x86_64)
| [
"This error message...\nselenium.common.exceptions.WebDriverException: Message: unknown error: Chrome failed to start: exited normally\n (chrome not reachable)\n (The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)\n\n...implies that the ChromeDriver instance was unable to start the Chrome Browser process.\nYour main issue is the google-chrome is no longer present at the expected default location of /usr/bin/\nAs per ChromeDriver - Requirements the server expects you to have Chrome installed in the default location for each system:\n\n1 For Linux systems, the ChromeDriver expects /usr/bin/google-chrome to be a symlink to the actual Chrome binary. You can also override the Chrome binary location as follows:\n\nA Windows OS based example:\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.options import Options\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\noptions.binary_location(\"C:\\\\path\\\\to\\\\chrome.exe\")\ndriver = webdriver.Chrome(executable_path=r'C:\\path\\to\\chromedriver.exe', chrome_options=options)\ndriver.get('http://google.com/')\n\n\nAdditional Considerations\n\nUpgrade ChromeDriver to current ChromeDriver v2.42 level.\nKeep Chrome version between Chrome v68-70 levels. (as per ChromeDriver v2.42 release notes)\nClean your Project Workspace through your IDE and Rebuild your project with required dependencies only.\nIf your base Web Client version is too old, then uninstall it through Revo Uninstaller and install a recent GA and released version of Web Client.\nExecute your @Test.\n\n",
"I found another reason for this behavior, which I don't see listed here. I believe Selenium libraries require Chrome Developer Tools to not be disabled. I was running into this issue and getting this regkey set got me past it.\nKey - HKLM\\Software\\Policies\\Google\\Chrome\n\nDWORD - DeveloperToolsDisabled\n\nValue - 0\n\n"
] | [
4,
0
] | [] | [] | [
"google_chrome",
"python",
"selenium",
"selenium_chromedriver",
"selenium_webdriver"
] | stackoverflow_0052760842_google_chrome_python_selenium_selenium_chromedriver_selenium_webdriver.txt |
Q:
How do I create a drop down menu inside of a command with discord.py
I was going to make a rock paper scissors game with discord.py. I first defaulted to using select menus in messages, which would work if I could find a way to do it, so I decided to just make it use a string object. But then I thought it would not be easy if the user had to guess what the choices were, so I wanted to include a select menu inside of a slash command, like the one you get from this code:
@tree.command(name="boolean_test", description="Boolean test")
async def boolean_test(command, boolean: bool):
pass
but I could not find a way to do that.
I'm not using any forks of discord.py
Can someone help?
A:
There are three ways to do this:
choices
Literal
Enum
The docs page for choices shows an example for all three of them. The one you decide to go for depends on your use case. For example, Choices have a value attribute that you can use to make them easier to work with internally (like attaching an id to the choices instead of the raw strings).
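As a minimal sketch of the Literal approach (assuming discord.py 2.x and the tree object from your snippet; the command name and choice values are placeholders):
from typing import Literal
import discord

@tree.command(name="rps", description="Rock paper scissors")
async def rps(interaction: discord.Interaction, choice: Literal["rock", "paper", "scissors"]):
    await interaction.response.send_message(f"You chose {choice}")
Discord renders the Literal values as a fixed dropdown of choices when the user fills in the option.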
| How do I create a drop down menu inside of a command with discord.py | I was going to make a rock paper scissors game with discord.py. I first defaulted to using select menus in messages, which would work if I could find a way to do it, so I decided to just make it use a string object. But then I thought it would not be easy if the user had to guess what the choices were, so I wanted to include a select menu inside of a slash command, like the one you get from this code:
@tree.command(name="boolean_test", description="Boolean test")
async def boolean_test(command, boolean: bool):
pass
but I could not find a way to do that.
I'm not using any forks of discord.py
Can someone help?
| [
"There are three ways to do this:\n\nchoices\nLiteral\nEnum\n\nThe docs page for choices shows an example for all three of them. The one you decide to go for depends on your use case. For example, Choices have a value attribute that you can use to make them easier to work with internally (like attaching an id to the choices instead of the raw strings).\n"
] | [
1
] | [] | [] | [
"discord",
"discord.py",
"menu",
"python",
"python_3.x"
] | stackoverflow_0074606436_discord_discord.py_menu_python_python_3.x.txt |
Q:
How to get the numerical value for each matching value in a row in a csv file
I have a csv file with "years" in row[0] and I need to get a count of how many times each year occurs and pair it with that year in a dictionary. To be clear, the year is the key and the amount of times it occurs in the csv is the value.
Here's what I have, but I am missing something. I just can't figure out how to get the count of each year to pair it with that year.
def incidents_per_year():
dict = {}
count = 0
with open("saved_data.csv") as f:
reader = csv.reader(f)
next(reader)
for row in reader:
count += 1
year = row[0]
dict[year] = count
return dict
Here is the part of the csv file (overall it's 50,000 rows so this is a small subset).
year,month,hour_of_day,incident_type_primary,day_of_week
2022,6,8,LARCENY/THEFT,Monday
2016,10,5,ASSAULT,Tuesday
2016,8,12,LARCENY/THEFT,Wednesday
2014,9,5,LARCENY/THEFT,Sunday
2015,8,7,ASSAULT,Wednesday
2016,11,2,LARCENY/THEFT,Tuesday
2015,7,11,ASSAULT,Friday
2015,4,12,LARCENY/THEFT,Friday
2016,3,2,BURGLARY,Wednesday
2014,10,4,LARCENY/THEFT,Thursday
2016,8,3,LARCENY/THEFT,Friday
2016,1,12,LARCENY/THEFT,Monday
2016,3,1,BURGLARY,Friday
2014,8,7,BURGLARY,Saturday
2017,4,10,UUV,Wednesday
2017,6,5,BURGLARY,Thursday
2017,1,4,BURGLARY,Wednesday
2016,7,9,BURGLARY,Thursday
2015,9,9,LARCENY/THEFT,Monday
2017,4,12,LARCENY/THEFT,Thursday
2016,3,4,LARCENY/THEFT,Friday
2016,4,5,BURGLARY,Thursday
2017,10,12,LARCENY/THEFT,Sunday
2015,7,11,ASSAULT,Monday
2012,5,12,LARCENY/THEFT,Friday
2014,12,11,LARCENY/THEFT,Thursday
2015,3,4,LARCENY/THEFT,Tuesday
2017,11,8,LARCENY/THEFT,Wednesday
2011,7,17,LARCENY/THEFT,Thursday
2015,9,17,ROBBERY,Wednesday
2015,5,12,BURGLARY,Thursday
2013,11,14,ASSAULT,Tuesday
2015,6,16,LARCENY/THEFT,Friday
2010,10,18,LARCENY/THEFT,Monday
2007,8,21,LARCENY/THEFT,Tuesday
2015,5,16,LARCENY/THEFT,Tuesday
2013,11,8,LARCENY/THEFT,Wednesday
2007,6,15,BURGLARY,Sunday
2012,5,19,ASSAULT,Tuesday
2007,8,20,LARCENY/THEFT,Thursday
2018,7,18,LARCENY/THEFT,Sunday
2019,2,2,BURGLARY,Tuesday
2012,11,20,BURGLARY,Monday
2012,6,15,ASSAULT,Wednesday
2011,10,21,THEFT OF SERVICES,Monday
2008,11,23,LARCENY/THEFT,Wednesday
2014,7,4,BURGLARY,Wednesday
2013,11,9,LARCENY/THEFT,Wednesday
2021,7,11,ASSAULT,Saturday
2018,7,12,LARCENY/THEFT,Friday
2013,2,0,LARCENY/THEFT,Friday
2007,1,0,ROBBERY,Thursday
2008,7,4,ASSAULT,Saturday
2007,8,4,BURGLARY,Saturday
2019,12,17,LARCENY/THEFT,Thursday
2018,7,0,LARCENY/THEFT,Wednesday
2010,2,12,BURGLARY,Sunday
2012,4,3,BURGLARY,Tuesday
2012,1,23,LARCENY/THEFT,Monday
2006,1,23,LARCENY/THEFT,Wednesday
2015,6,0,LARCENY/THEFT,Tuesday
2015,4,21,BURGLARY,Wednesday
2017,5,11,ROBBERY,Tuesday
2017,9,16,LARCENY/THEFT,Wednesday
2016,7,12,LARCENY/THEFT,Friday
2006,6,6,LARCENY/THEFT,Friday
2010,5,0,BURGLARY,Thursday
2010,11,21,ROBBERY,Wednesday
2011,2,3,BURGLARY,Friday
2017,8,0,LARCENY/THEFT,Wednesday
2011,12,23,BURGLARY,Friday
2012,8,0,LARCENY/THEFT,Saturday
2012,7,22,ROBBERY,Thursday
2016,9,8,ASSAULT,Saturday
2013,7,7,BURGLARY,Friday
2010,4,5,ASSAULT,Saturday
2022,3,13,LARCENY/THEFT,Saturday
2009,5,13,BURGLARY,Thursday
2010,2,12,ASSAULT,Saturday
2011,12,20,LARCENY/THEFT,Friday
2007,10,4,BURGLARY,Thursday
2007,8,19,LARCENY/THEFT,Saturday
2011,12,4,LARCENY/THEFT,Saturday
2012,10,23,UUV,Friday
2018,4,19,UUV,Sunday
2010,5,13,LARCENY/THEFT,Wednesday
2017,5,11,BURGLARY,Saturday
2009,9,2,ASSAULT,Thursday
2016,6,0,LARCENY/THEFT,Wednesday
2012,4,4,LARCENY/THEFT,Monday
2009,9,19,BURGLARY,Tuesday
2009,6,10,BURGLARY,Monday
2007,10,0,LARCENY/THEFT,Wednesday
2016,5,1,ASSAULT,Tuesday
2010,10,0,ROBBERY,Friday
2013,10,11,LARCENY/THEFT,Monday
2018,9,19,BURGLARY,Friday
2006,6,14,BURGLARY,Wednesday
2010,5,21,ASSAULT,Sunday
2010,6,10,LARCENY/THEFT,Monday
2018,9,10,LARCENY/THEFT,Friday
2007,11,0,LARCENY/THEFT,Tuesday
2008,8,22,ASSAULT,Thursday
2016,10,19,LARCENY/THEFT,Wednesday
2018,1,1,LARCENY/THEFT,Tuesday
2015,8,7,BURGLARY,Friday
2016,4,20,LARCENY/THEFT,Tuesday
2015,10,1,LARCENY/THEFT,Thursday
2010,3,15,ASSAULT,Monday
2014,4,4,ASSAULT,Monday
2011,10,21,LARCENY/THEFT,Friday
2016,9,12,LARCENY/THEFT,Thursday
2011,8,10,LARCENY/THEFT,Wednesday
2012,10,16,LARCENY/THEFT,Friday
2016,3,20,ASSAULT,Saturday
2020,11,11,UUV,Tuesday
2013,11,5,LARCENY/THEFT,Wednesday
2010,4,4,BURGLARY,Friday
2011,9,23,ASSAULT,Friday
2008,10,14,LARCENY/THEFT,Thursday
2015,6,0,UUV,Saturday
2010,12,23,LARCENY/THEFT,Saturday
2015,6,14,LARCENY/THEFT,Tuesday
2008,10,22,ASSAULT,Friday
2010,11,12,BURGLARY,Monday
2006,5,20,ASSAULT,Sunday
2012,9,16,BURGLARY,Sunday
2020,7,3,ASSAULT,Thursday
2014,1,5,BURGLARY,Tuesday
2015,4,1,ASSAULT,Thursday
2014,10,7,LARCENY/THEFT,Sunday
2007,11,9,LARCENY/THEFT,Wednesday
2008,7,17,BURGLARY,Sunday
2011,4,23,BURGLARY,Saturday
2014,7,17,LARCENY/THEFT,Wednesday
2008,10,10,LARCENY/THEFT,Tuesday
2007,7,18,LARCENY/THEFT,Sunday
2011,3,18,ROBBERY,Wednesday
2010,12,0,LARCENY/THEFT,Thursday
2013,5,0,LARCENY/THEFT,Tuesday
2006,9,14,LARCENY/THEFT,Friday
2014,2,1,ROBBERY,Thursday
2020,5,17,UUV,Sunday
2007,4,23,LARCENY/THEFT,Sunday
2015,6,12,LARCENY/THEFT,Monday
2010,1,5,ROBBERY,Monday
2011,11,18,LARCENY/THEFT,Tuesday
2008,10,23,LARCENY/THEFT,Thursday
2019,8,17,UUV,Friday
2006,9,17,LARCENY/THEFT,Friday
2015,7,9,LARCENY/THEFT,Monday
2013,2,23,ROBBERY,Sunday
2012,8,15,ASSAULT,Sunday
2015,3,0,LARCENY/THEFT,Friday
2006,12,15,BURGLARY,Thursday
2021,12,10,LARCENY/THEFT,Thursday
2006,11,11,BURGLARY,Sunday
2009,7,0,LARCENY/THEFT,Tuesday
2006,5,17,LARCENY/THEFT,Thursday
2016,7,0,BURGLARY,Wednesday
2017,1,14,LARCENY/THEFT,Tuesday
2010,11,13,LARCENY/THEFT,Tuesday
2015,9,13,BURGLARY,Wednesday
2008,10,1,BURGLARY,Wednesday
2009,4,22,LARCENY/THEFT,Thursday
2016,5,20,ASSAULT,Wednesday
2009,7,12,LARCENY/THEFT,Thursday
2021,6,20,LARCENY/THEFT,Sunday
A:
You are using the count variable wrong, try this:
def incidents_per_year():
dict = {}
with open("saved_data.csv") as f:
reader = csv.reader(f)
next(reader)
for row in reader:
year = row[0]
dict[year] = (dict.get(year) or 0) + 1
return dict
For every year in the file it will either start the count at 1 if the dict doesn't contain that specific year yet, or add 1 to the count of that specific year.
A:
First, I'd suggest not using built-in names as variables.
Second, your code uses the counter as a global counter, and therefore it is not unique for each year.
def incidents_per_year():
year_dict = {}
with open("saved_data.csv") as f:
reader = csv.reader(f)
next(reader)
for row in reader:
year = row[0]
            year_dict[year] = year_dict.get(year, 0) + 1  # dict.get takes the default positionally
return year_dict
In this code, I'm using the dict.get method, which takes a key and returns its value if the key is in the dict, else a default value (None if not passed).
This way, I'm making sure that each year will be calculated separately with its own counter.
A:
You can try using DictReader from csv to read csv header as key:
from csv import DictReader
def incidents_per_year():
res = {}
with open("saved_data.csv") as f:
reader = DictReader(f)
for k in reader:
if k['year'] in res:
res[k['year']] += 1
else:
res[k['year']] = 1
return res
or using Counter from collections to count value occurrences:
from csv import DictReader
from collections import Counter
def incidents_per_year():
return dict(Counter(k['year'] for k in DictReader(open("saved_data.csv"))))
| How to get the numerical value for each matching value in a row in a csv file | I have a csv file with "years" in row[0] and I need to get a count of how many times each year occurs and pair it with that year in a dictionary. To be clear, the year is the key and the amount of times it occurs in the csv is the value.
Here's what I have, but I am missing something. I just can't figure out how to get the count of each year to pair it with that year.
def incidents_per_year():
dict = {}
count = 0
with open("saved_data.csv") as f:
reader = csv.reader(f)
next(reader)
for row in reader:
count += 1
year = row[0]
dict[year] = count
return dict
Here is the part of the csv file (overall it's 50,000 rows so this is a small subset).
year,month,hour_of_day,incident_type_primary,day_of_week
2022,6,8,LARCENY/THEFT,Monday
2016,10,5,ASSAULT,Tuesday
2016,8,12,LARCENY/THEFT,Wednesday
2014,9,5,LARCENY/THEFT,Sunday
2015,8,7,ASSAULT,Wednesday
2016,11,2,LARCENY/THEFT,Tuesday
2015,7,11,ASSAULT,Friday
2015,4,12,LARCENY/THEFT,Friday
2016,3,2,BURGLARY,Wednesday
2014,10,4,LARCENY/THEFT,Thursday
2016,8,3,LARCENY/THEFT,Friday
2016,1,12,LARCENY/THEFT,Monday
2016,3,1,BURGLARY,Friday
2014,8,7,BURGLARY,Saturday
2017,4,10,UUV,Wednesday
2017,6,5,BURGLARY,Thursday
2017,1,4,BURGLARY,Wednesday
2016,7,9,BURGLARY,Thursday
2015,9,9,LARCENY/THEFT,Monday
2017,4,12,LARCENY/THEFT,Thursday
2016,3,4,LARCENY/THEFT,Friday
2016,4,5,BURGLARY,Thursday
2017,10,12,LARCENY/THEFT,Sunday
2015,7,11,ASSAULT,Monday
2012,5,12,LARCENY/THEFT,Friday
2014,12,11,LARCENY/THEFT,Thursday
2015,3,4,LARCENY/THEFT,Tuesday
2017,11,8,LARCENY/THEFT,Wednesday
2011,7,17,LARCENY/THEFT,Thursday
2015,9,17,ROBBERY,Wednesday
2015,5,12,BURGLARY,Thursday
2013,11,14,ASSAULT,Tuesday
2015,6,16,LARCENY/THEFT,Friday
2010,10,18,LARCENY/THEFT,Monday
2007,8,21,LARCENY/THEFT,Tuesday
2015,5,16,LARCENY/THEFT,Tuesday
2013,11,8,LARCENY/THEFT,Wednesday
2007,6,15,BURGLARY,Sunday
2012,5,19,ASSAULT,Tuesday
2007,8,20,LARCENY/THEFT,Thursday
2018,7,18,LARCENY/THEFT,Sunday
2019,2,2,BURGLARY,Tuesday
2012,11,20,BURGLARY,Monday
2012,6,15,ASSAULT,Wednesday
2011,10,21,THEFT OF SERVICES,Monday
2008,11,23,LARCENY/THEFT,Wednesday
2014,7,4,BURGLARY,Wednesday
2013,11,9,LARCENY/THEFT,Wednesday
2021,7,11,ASSAULT,Saturday
2018,7,12,LARCENY/THEFT,Friday
2013,2,0,LARCENY/THEFT,Friday
2007,1,0,ROBBERY,Thursday
2008,7,4,ASSAULT,Saturday
2007,8,4,BURGLARY,Saturday
2019,12,17,LARCENY/THEFT,Thursday
2018,7,0,LARCENY/THEFT,Wednesday
2010,2,12,BURGLARY,Sunday
2012,4,3,BURGLARY,Tuesday
2012,1,23,LARCENY/THEFT,Monday
2006,1,23,LARCENY/THEFT,Wednesday
2015,6,0,LARCENY/THEFT,Tuesday
2015,4,21,BURGLARY,Wednesday
2017,5,11,ROBBERY,Tuesday
2017,9,16,LARCENY/THEFT,Wednesday
2016,7,12,LARCENY/THEFT,Friday
2006,6,6,LARCENY/THEFT,Friday
2010,5,0,BURGLARY,Thursday
2010,11,21,ROBBERY,Wednesday
2011,2,3,BURGLARY,Friday
2017,8,0,LARCENY/THEFT,Wednesday
2011,12,23,BURGLARY,Friday
2012,8,0,LARCENY/THEFT,Saturday
2012,7,22,ROBBERY,Thursday
2016,9,8,ASSAULT,Saturday
2013,7,7,BURGLARY,Friday
2010,4,5,ASSAULT,Saturday
2022,3,13,LARCENY/THEFT,Saturday
2009,5,13,BURGLARY,Thursday
2010,2,12,ASSAULT,Saturday
2011,12,20,LARCENY/THEFT,Friday
2007,10,4,BURGLARY,Thursday
2007,8,19,LARCENY/THEFT,Saturday
2011,12,4,LARCENY/THEFT,Saturday
2012,10,23,UUV,Friday
2018,4,19,UUV,Sunday
2010,5,13,LARCENY/THEFT,Wednesday
2017,5,11,BURGLARY,Saturday
2009,9,2,ASSAULT,Thursday
2016,6,0,LARCENY/THEFT,Wednesday
2012,4,4,LARCENY/THEFT,Monday
2009,9,19,BURGLARY,Tuesday
2009,6,10,BURGLARY,Monday
2007,10,0,LARCENY/THEFT,Wednesday
2016,5,1,ASSAULT,Tuesday
2010,10,0,ROBBERY,Friday
2013,10,11,LARCENY/THEFT,Monday
2018,9,19,BURGLARY,Friday
2006,6,14,BURGLARY,Wednesday
2010,5,21,ASSAULT,Sunday
2010,6,10,LARCENY/THEFT,Monday
2018,9,10,LARCENY/THEFT,Friday
2007,11,0,LARCENY/THEFT,Tuesday
2008,8,22,ASSAULT,Thursday
2016,10,19,LARCENY/THEFT,Wednesday
2018,1,1,LARCENY/THEFT,Tuesday
2015,8,7,BURGLARY,Friday
2016,4,20,LARCENY/THEFT,Tuesday
2015,10,1,LARCENY/THEFT,Thursday
2010,3,15,ASSAULT,Monday
2014,4,4,ASSAULT,Monday
2011,10,21,LARCENY/THEFT,Friday
2016,9,12,LARCENY/THEFT,Thursday
2011,8,10,LARCENY/THEFT,Wednesday
2012,10,16,LARCENY/THEFT,Friday
2016,3,20,ASSAULT,Saturday
2020,11,11,UUV,Tuesday
2013,11,5,LARCENY/THEFT,Wednesday
2010,4,4,BURGLARY,Friday
2011,9,23,ASSAULT,Friday
2008,10,14,LARCENY/THEFT,Thursday
2015,6,0,UUV,Saturday
2010,12,23,LARCENY/THEFT,Saturday
2015,6,14,LARCENY/THEFT,Tuesday
2008,10,22,ASSAULT,Friday
2010,11,12,BURGLARY,Monday
2006,5,20,ASSAULT,Sunday
2012,9,16,BURGLARY,Sunday
2020,7,3,ASSAULT,Thursday
2014,1,5,BURGLARY,Tuesday
2015,4,1,ASSAULT,Thursday
2014,10,7,LARCENY/THEFT,Sunday
2007,11,9,LARCENY/THEFT,Wednesday
2008,7,17,BURGLARY,Sunday
2011,4,23,BURGLARY,Saturday
2014,7,17,LARCENY/THEFT,Wednesday
2008,10,10,LARCENY/THEFT,Tuesday
2007,7,18,LARCENY/THEFT,Sunday
2011,3,18,ROBBERY,Wednesday
2010,12,0,LARCENY/THEFT,Thursday
2013,5,0,LARCENY/THEFT,Tuesday
2006,9,14,LARCENY/THEFT,Friday
2014,2,1,ROBBERY,Thursday
2020,5,17,UUV,Sunday
2007,4,23,LARCENY/THEFT,Sunday
2015,6,12,LARCENY/THEFT,Monday
2010,1,5,ROBBERY,Monday
2011,11,18,LARCENY/THEFT,Tuesday
2008,10,23,LARCENY/THEFT,Thursday
2019,8,17,UUV,Friday
2006,9,17,LARCENY/THEFT,Friday
2015,7,9,LARCENY/THEFT,Monday
2013,2,23,ROBBERY,Sunday
2012,8,15,ASSAULT,Sunday
2015,3,0,LARCENY/THEFT,Friday
2006,12,15,BURGLARY,Thursday
2021,12,10,LARCENY/THEFT,Thursday
2006,11,11,BURGLARY,Sunday
2009,7,0,LARCENY/THEFT,Tuesday
2006,5,17,LARCENY/THEFT,Thursday
2016,7,0,BURGLARY,Wednesday
2017,1,14,LARCENY/THEFT,Tuesday
2010,11,13,LARCENY/THEFT,Tuesday
2015,9,13,BURGLARY,Wednesday
2008,10,1,BURGLARY,Wednesday
2009,4,22,LARCENY/THEFT,Thursday
2016,5,20,ASSAULT,Wednesday
2009,7,12,LARCENY/THEFT,Thursday
2021,6,20,LARCENY/THEFT,Sunday
| [
"You are using the count variable wrong, try this:\ndef incidents_per_year():\n dict = {}\n with open(\"saved_data.csv\") as f:\n reader = csv.reader(f)\n next(reader)\n for row in reader:\n year = row[0]\n dict[year] = (dict.get(year) or 0) + 1\n return dict\n\nFor every year in the file it will either set the count to 0 if the dict doesn't contain that specific year yet, or add 1 to the count of the specific year\n",
"First, I'd suggest try to not use the builtin keywords as variables.\nSecond, your code uses the counter as a global counter and therefor it is not unique for each year.\ndef incidents_per_year():\n year_dict = {}\n with open(\"saved_data.csv\") as f:\n reader = csv.reader(f)\n next(reader)\n for row in reader:\n year = row[0]\n year_dict[year] = year_dict.get(year, default=0) + 1\n return year_dict\n\nIn this code, I'm using dict.get method, which get the key and returns the value, if the key in dict, else a default value (defaults to None if not passed).\nThis way, I'm making sure that each year will be calculated separately with it's own counter.\n",
"You can try using DictReader from csv to read csv header as key:\nfrom csv import DictReader\n\ndef incidents_per_year():\n res = {}\n with open(\"saved_data.csv\") as f:\n reader = DictReader(f)\n for k in reader:\n if k['year'] in res:\n res[k['year']] += 1\n else:\n res[k['year']] = 1\n return res\n\nor using Counter from collection to count value occurance:\nfrom csv import DictReader\nfrom collections import Counter\n\ndef incidents_per_year():\n return dict(Counter(k['year'] for k in DictReader(open(\"saved_data.csv\"))))\n\n"
] | [
0,
0,
0
] | [] | [] | [
"csv",
"python"
] | stackoverflow_0074601906_csv_python.txt |
Q:
Creating a Protobuf file from CSV
Good day everyone. I need to create a simple file in Protobuf (proto) format, preferably using Python (I'm currently using PyCharm). It should be very simple and resemble the following CSV structure:
header = ['Surname', 'Name']
data = ['John', 'Doe']
If anyone knows how to do it, it would help me a lot. Thanks!
I have already transformed this CSV structure into a Parquet file and tried doing the same for Protobuf, but it didn't work.
A:
What issues have you faced?
syntax="proto3";
message Data {
string surname = 1;
string name = 2;
}
message CSV {
repeated Data data = 1;
}
or
syntax="proto3";
message Data {
string data1 = 1;
string data2 = 2;
}
message CSV {
string header1 = 1;
string header2 = 2;
repeated Data data = 3;
}
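Once you have a .proto file, you can compile it and fill it from Python. A rough sketch, assuming the first definition above is saved as data.proto and compiled with protoc --python_out=. data.proto (the generated module name data_pb2 follows from that filename):
import data_pb2  # generated by protoc from data.proto

csv_msg = data_pb2.CSV()
row = csv_msg.data.add()  # append one Data entry to the repeated field
row.surname = "Doe"
row.name = "John"

with open("people.pb", "wb") as f:
    f.write(csv_msg.SerializeToString())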
| Creating a Protobuf file from CSV | Good day everyone. I need to create a simple file in Protobuf (proto) format, preferably using Python (I'm currently using PyCharm). It should be very simple and resemble the following CSV structure:
header = ['Surname', 'Name']
data = ['John', 'Doe']
If anyone knows how to do it, it would help me a lot. Thanks!
I have already transformed this CSV structure into a Parquet file and tried doing the same for Protobuf, but it didn't work.
| [
"What issues have you faced?\nsyntax=\"proto3\";\n\nmessage Data {\n string surname = 1;\n string name = 2;\n}\n\nmessage CSV {\n repeated Data data = 1;\n}\n\nor\nsyntax=\"proto3\";\n\nmessage Data {\n string data1 = 1;\n string data2 = 2;\n}\n\nmessage CSV {\n string header1 = 1;\n string header2 = 2;\n repeated Data data = 3;\n}\n\n"
] | [
0
] | [] | [] | [
"csv",
"protocol_buffers",
"python"
] | stackoverflow_0074604304_csv_protocol_buffers_python.txt |
Q:
Python Canvas bind and resize
When the window is resized, I want the height to be set equal to the width so the window is always a square. In the code below, print(event.width) does print the new window width, but canvas.configure(height=event.width) doesn't change the canvas height. What am I doing wrong?
EDIT: I want the whole window to stay squared.
import tkinter as tk
from timeit import default_timer as timer
start_timer = timer()
height = 600
width = 600
red = ("#ff7663")
root = tk.Tk()
canvas = tk.Canvas(root, height=height, width=width, background=red)
canvas.pack(expand=True,fill="both")
def resize(event):
end_timer = timer()
if end_timer - start_timer > 0.5:
print(event.width)
canvas.configure(height=event.width)
canvas.bind("<Configure>", resize)
root.mainloop()
A:
Changing the size of the canvas won't override the size created by the user. If you're wanting the whole window to remain square you must explicitly set the size of the window.
For example:
def resize(event):
width = root.winfo_width()
root.wm_geometry(f"{width}x{width}")
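A minimal runnable sketch of that idea, reusing the setup from the question (note that setting the geometry from inside a <Configure> handler can fire further events, so treat this as a sketch rather than a polished solution):
import tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root, width=600, height=600, background="#ff7663")
canvas.pack(expand=True, fill="both")

def resize(event):
    width = root.winfo_width()
    root.wm_geometry(f"{width}x{width}")

root.bind("<Configure>", resize)
root.mainloop()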
A:
I'm not sure if this is what you're after, but this example maintains a square canvas regardless of window size*:
import tkinter as tk
root = tk.Tk()
root.geometry('600x600')
canvas = tk.Canvas(root, background="#ff7663")
canvas.pack()
def on_resize(event):
w, h = event.width, event.height
if w <= h:
canvas.configure(width=w, height=w)
if __name__ == '__main__':
root.bind('<Configure>', on_resize)
root.mainloop()
*Unless the window is resized to be narrower/shorter than the initial geometry given
| Python Canvas bind and resize | When the window is resized, I want the height to be set equal to the width so the window is always a square. In the code below, print(event.width) does print the new window width, but canvas.configure(height=event.width) doesn't change the canvas height. What am I doing wrong?
EDIT: I want the whole window to stay squared.
import tkinter as tk
from timeit import default_timer as timer
start_timer = timer()
height = 600
width = 600
red = ("#ff7663")
root = tk.Tk()
canvas = tk.Canvas(root, height=height, width=width, background=red)
canvas.pack(expand=True,fill="both")
def resize(event):
end_timer = timer()
if end_timer - start_timer > 0.5:
print(event.width)
canvas.configure(height=event.width)
canvas.bind("<Configure>", resize)
root.mainloop()
| [
"Changing the size of the canvas won't override the size created by the user. If you're wanting the whole window to remain square you must explicitly set the size of the window.\nFor example:\ndef resize(event):\n width = root.winfo_width()\n root.wm_geometry(f\"{width}x{width}\")\n\n",
"I'm not sure if this is what you're after, but this example maintains a square canvas regardless of window size*:\nimport tkinter as tk\n\nroot = tk.Tk\nroot.geometry('600x600')\ncanvas = tk.Canvas(root, background=red)\ncanvas.pack()\n\n\ndef on_resize(event):\n w, h = event.width, event.height\n if w <= h:\n canvas.configure(width=w, height=w)\n\n\nif __name__ == '__main__':\n root.bind('<Configure>', on_resize)\n root.mainloop()\n\n*Unless the window is resized to be narrower/shorter than the initial geometry given\n"
] | [
1,
0
] | [] | [] | [
"python",
"tkinter",
"tkinter_canvas"
] | stackoverflow_0074606422_python_tkinter_tkinter_canvas.txt |
Q:
A variable updating every time it is changed
I'm trying to create a variable that, every time it is changed, checks whether the value is within certain bounds and updates itself.
I have something like this
max = 10
min = 0
var = 1
And then some code updating it. Is there any way to keep it updating? It is in a class, if that helps in some way.
I tried to put something like
if var < min:
var = min
elif var > max:
var = max
After every block that could change the var, but it is very slow like this.
A:
Here's how you might approach doing something like this. Although you cannot mess with assignment directly, you can change how properties are set and accessed:
class LimitedValue:
def __init__(self, min, max):
self.min = min
self.max = max
self._value = None
@property
def value(self):
return self._value
@value.setter
def value(self, value):
self._value = min(max(value, self.min), self.max)
This could be used as follows:
lv = LimitedValue(0, 100)
lv.value = 50
print(lv.value) # 50
lv.value = 200
print(lv.value) # 100
lv.value = -100
print(lv.value) # 0
Depending on the context, this might not be very idiomatic. You could instead just shorten the code you use to constrain var to a range min, max.
First, I would rename min and max to something that does not collide with the built-in functions of the same name. Maybe min_var and max_var. Then, you can simply use
var = min(max_var, max(min_var, var))
to constrain var to the range you want.
| A variable updating every time it is changed | I'm trying to create a variable that, every time it is changed, checks whether the value is within certain bounds and updates itself.
I have something like this
max = 10
min = 0
var = 1
And then some code updating it. Is there any way to keep it updating? It is in a class, if that helps in some way.
I tried to put something like
if var < min:
var = min
elif var > max:
var = max
After every block that could change the var, but it is very slow like this.
| [
"Here's how you might approach doing something like this. Although you cannot mess with assignment directly, you can change how properties are set and accessed:\nclass LimitedValue:\n def __init__(self, min, max):\n self.min = min\n self.max = max\n self._value = None\n\n @property\n def value(self):\n return self._value\n\n @value.setter\n def value(self, value):\n self._value = min(max(value, self.min), self.max)\n\nThis could be used as follows:\nlv = LimitedValue(0, 100)\nlv.value = 50\nprint(lv.value) # 50\nlv.value = 200\nprint(lv.value) # 100\nlv.value = -100\nprint(lv.value) # 0\n\nDepending on the context, this might not be very idiomatic. You could instead just shorten the code you use to constrain var to a range min, max.\nFirst, I would rename min and max to something that does not collide with the globals of the same name. Maybe min_var and max_var. Then, you can simply use\nvar = min(max_var, max(min_var, var))\n\nto constrain var to the range you want.\n"
] | [
1
] | [] | [] | [
"class",
"python",
"python_3.x",
"variables"
] | stackoverflow_0074606553_class_python_python_3.x_variables.txt |
Q:
how to add double quotes to a string in text file
I have a file with words that are each in a separate line. I want to read each word and add quotes around it, and a comma after the word. After that I want the words back into a new text file with the added symbols.
Example, this as input file:
ram
shyam
raja
I want this to be in the output file:
"ram",
"shyam",
"raja"
A:
If each word is on the same line of the input file, separated by spaces. Then you could approach it like this:
# read lines from file
text = ""
with open('filename.txt', 'r') as ifile:
text = ifile.readline()
# get all sepearte string split by space
data = text.split(" ")
# add quotes to each one
data = [f"\"{name}\"" for name in data]
# append them together with commas inbetween
updated_text = ", ".join(data)
# write to some file
with open("outfilename.txt", 'w') as ofile:
ofile.write(updated_text)
Input:
jeff adam bezos
Output
"jeff", "adam", "bezos"
If you want to work with each input and output file word on a separate line, we could take this approach:
# read lines from file
words = []
with open('filename.txt', 'r') as ifile:
words = [line.replace("\n", "") for line in ifile.readlines()]
# add quotes to each one
updated_words = [f"\"{word}\"" for word in words]
# append them together with commas inbetween
updated_text = ",\n".join(updated_words)
# write to some file
with open("outfilename.txt", 'w') as ofile:
ofile.write(updated_text)
Input:
jeff
adam
bezos
Output
"jeff",
"adam",
"bezos"
Good Luck!
A:
Read the file as a list using read().splitlines() and write each line using an f-string and writelines:
with open('text.txt', 'r') as r: lines = r.read().splitlines()
with open('text.txt', 'w') as w: w.writelines(f'"{line}",'+'\n' for line in lines)
Note that this puts a comma after every line, including the last; use ',\n'.join(...) as in the previous answer if the final line must not end with one.
| how to add double quotes to a string in text file | I have a file with words that are each in a separate line. I want to read each word and add quotes around it, and a comma after the word. After that I want the words back into a new text file with the added symbols.
Example, this as input file:
ram
shyam
raja
I want this to be in the output file:
"ram",
"shyam",
"raja"
| [
"If each word is on the same line of the input file, separated by spaces. Then you could approach it like this:\n# read lines from file\ntext = \"\"\nwith open('filename.txt', 'r') as ifile:\n text = ifile.readline()\n\n# get all sepearte string split by space\ndata = text.split(\" \")\n\n# add quotes to each one\ndata = [f\"\\\"{name}\\\"\" for name in data]\n\n# append them together with commas inbetween\nupdated_text = \", \".join(data)\n\n# write to some file\nwith open(\"outfilename.txt\", 'w') as ofile:\n ofile.write(updated_text)\n\nInput:\njeff adam bezos\n\nOutput\n\"jeff\", \"adam\", \"bezos\"\n\nIf you want to work with each input and output file word on a separate line, we could take this approach:\n# read lines from file\nwords = []\nwith open('filename.txt', 'r') as ifile:\n words = [line.replace(\"\\n\", \"\") for line in ifile.readlines()]\n\n# add quotes to each one\nupdated_words = [f\"\\\"{word}\\\"\" for word in words]\n\n# append them together with commas inbetween\nupdated_text = \",\\n\".join(updated_words)\n\n# write to some file\nwith open(\"outfilename.txt\", 'w') as ofile:\n ofile.write(updated_text)\n\nInput:\njeff\nadam\nbezos\n\nOutput\n\"jeff\",\n\"adam\",\n\"bezos\"\n\nGood Luck!\n",
"Read the file as list using read().splitlines() and write the line using f-string and writelines:\nwith open('text.txt', 'r') as r: lines = r.read().splitlines()\nwith open('text.txt', 'w') as w: w.writelines(f'\"{line}\",'+'\\n' for line in lines)\n\n"
] | [
2,
1
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074601508_python_python_3.x.txt |
Q:
How to create a new column in a dataframe based on the values of multiple columns in a different dataframe?
Let's say I have the two data frames below:
data = {
'Part' : ['part1', 'part2', 'part3', 'part4', 'part5'],
'Number' : ['123', '234', '345', '456', '567'],
'Code' : ['R2', 'R2', 'R4', 'R5', 'R5']
}
df = pd.DataFrame(data, dtype = object)
data2 = {
'Part' : ['part1', 'part2', 'part6', 'part4'],
'Number' : ['123', '234', '345', '456'],
'Code' : ['M2', 'R2', 'R4', 'M5']
}
df2 = pd.DataFrame(data2, dtype = object)
And my goal is to create a new column in df called Old_Code that lists the value of Code from df2 if the Part and Number in df and df2 match.
i.e Old_Code would have the following values: ['M2', 'R2', NaN, 'M5', NaN]
I've tried:
def add_code(df):
pdf_short.loc[(df['Part'] == df2['Part']) & (df['Number'] == df2['Number']), 'Old_Code'] = df2['Code']
add_code(df)
but I keep getting an error due to the shape of the dataframes not matching. Is there a way to get around this issue?
I've also tried:
def add_code1(df):
if (df['Part'] == df2['Part']) & (df['Number'] == df2['Number']):
return df2['Code']
df['Old_Code'] = df.apply(add_code1, axis = 1)
However, I just get errors.
A:
Here are two ways to do what you've asked:
# First way
df = df.set_index(['Part','Number']).assign(Old_code=df2.set_index(['Part','Number']).Code).reset_index()
# Second way
df = df.merge(df2.rename(columns={'Code':'Old_code'}), how='left', on=['Part','Number'])
Output:
Part Number Code Old_code
0 part1 123 R2 M2
1 part2 234 R2 R2
2 part3 345 R4 NaN
3 part4 456 R5 M5
4 part5 567 R5 NaN
| How to create a new column in a dataframe based on the values of multiple columns in a different dataframe? | Let's say I have the two data frames below:
data = {
'Part' : ['part1', 'part2', 'part3', 'part4', 'part5'],
'Number' : ['123', '234', '345', '456', '567'],
'Code' : ['R2', 'R2', 'R4', 'R5', 'R5']
}
df = pd.DataFrame(data, dtype = object)
data2 = {
'Part' : ['part1', 'part2', 'part6', 'part4'],
'Number' : ['123', '234', '345', '456'],
'Code' : ['M2', 'R2', 'R4', 'M5']
}
df2 = pd.DataFrame(data2, dtype = object)
And my goal is to create a new column in df called Old_Code that lists the value of Code from df2 if the Part and Number in df and df2 match.
i.e Old_Code would have the following values: ['M2', 'R2', NaN, 'M5', NaN]
I've tried:
def add_code(df):
pdf_short.loc[(df['Part'] == df2['Part']) & (df['Number'] == df2['Number']), 'Old_Code'] = df2['Code']
add_code(df)
but I keep getting an error due to the shape of the dataframes not matching. Is there a way to get around this issue?
I've also tried:
def add_code1(df):
if (df['Part'] == df2['Part']) & (df['Number'] == df2['Number']):
return df2['Code']
df['Old_Code'] = df.apply(add_code1, axis = 1)
However, I just get errors.
| [
"Here are two ways to do what you've asked:\n# First way\ndf = df.set_index(['Part','Number']).assign(Old_code=df2.set_index(['Part','Number']).Code).reset_index()\n\n# Second way\ndf = df.merge(df2.rename(columns={'Code':'Old_code'}), how='left', on=['Part','Number'])\n\nOutput:\n Part Number Code Old_code\n0 part1 123 R2 M2\n1 part2 234 R2 R2\n2 part3 345 R4 NaN\n3 part4 456 R5 M5\n4 part5 567 R5 NaN\n\n"
] | [
3
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074606446_dataframe_pandas_python.txt |
Q:
Create multiple for i in X depending on a given number
I want to create a class in Python that contains variables and a domain for each variable,
to generate a list of dictionaries containing all the possible solutions for variable assignments.
For example:
x='x'
y='y'
domainx=[1,2]
domainy=[3,4]
def solutions(elt,elt2,domainx,domainy):
for i in domainx:
for j in domainy:
liste.append({elt:i,elt2:j})
return liste
liste is now equal to [{'x':1,'y':3},{'x':1,'y':4},{'x':2,'y':3}, etc.]
but this function is not flexible or reusable because I must declare the variables one by one.
What I want is to generate solutions from a liste like:
liste=[(x,[1,2]),(y,[3,4])] or
liste=[(x,[324,3433]),(y,[43,34354,45]),(z,[5445,653,3,34,4,5])]
then I pass the liste to a function that will generate the solutions. I don't know how to do it, especially since the number of nested loops should match the length of liste.
My purpose for this question is to create this class,
class CST:
def __init__(self):
self.variables=[]
self.contraintes=[]
self.VarWithdomaines=[]
def addVarDom(self,variable,domain):
self.VarWithdomaines.append((variable,domain))
# def generate(self): it's not the correct way
# liste=[]
# for repitition in range(len(self.domaines)):
# solution={}
# for noeud in self.domaines:
# var=noeud[0]
# for x in noeud[1]:
# solution[var]=x
# liste.append(solution)
# return liste
Game=CST()
domain = [int(i) for i in range(2)]
domain2 = [int(i) for i in range(3)]
domain3 = [int(i) for i in range(21)]
# domain4 = [int(i) for i in range()]
Game.addVarDom("E1",domain)
Game.addVarDom("E2",domain2)
Game.addVarDom("C20",domain3)
listofsolutions=Game.generate()
#list should be for exemple list=[{E1:1,E2:0,C20:20},{E1:0,E2:2,C20:12}...]
A:
Try using itertools!
Essentially, itertools.product does what you want. As explained in the docs, this function takes the Cartesian product of input iterables. So, list(itertools.product([0, 1], [2, 3])) produces [(0, 2), (0, 3), (1, 2), (1, 3)], which is basically all you want.
All that's left is converting that output to the format you're looking for:
import itertools
def assignments(names, values):
combinations = itertools.product(*values)
return [
{
name: value
for name, value in zip(names, combination)
}
for combination in combinations
]
assignments(['x', 'y', 'z'], [[1, 2], [3, 4], [5, 6]])
This would produce the array
[
{'x': 1, 'y': 3, 'z': 5},
{'x': 1, 'y': 3, 'z': 6},
{'x': 1, 'y': 4, 'z': 5},
{'x': 1, 'y': 4, 'z': 6},
{'x': 2, 'y': 3, 'z': 5},
{'x': 2, 'y': 3, 'z': 6},
{'x': 2, 'y': 4, 'z': 5},
{'x': 2, 'y': 4, 'z': 6},
]
A:
Thanks to @BrownieInMotion,
I have modified the methods in the class and it worked (note that __init__ also needs self.domaines = [] for these methods to run).
The modified methods:
def addVarDom(self,variable,domain):
self.variables.append(variable)
self.domaines.append(domain)
def generate(self):#names, values):
combinations = itertools.product(*self.domaines)
return [
{
name: value
for name, value in zip(self.variables, combination)
}
for combination in combinations
]
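With those methods in place (and self.domaines initialised in __init__), a quick check of the earlier example:
Game = CST()
Game.addVarDom("x", [1, 2])
Game.addVarDom("y", [3, 4])
print(Game.generate())
# [{'x': 1, 'y': 3}, {'x': 1, 'y': 4}, {'x': 2, 'y': 3}, {'x': 2, 'y': 4}]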
| Create multiple for i in X depending on a given number | I want to create a class in Python that contains variables and a domain for each variable,
to generate a list of dictionaries containing all the possible solutions for variable assignments.
For example:
x='x'
y='y'
domainx=[1,2]
domainy=[3,4]
def solutions(elt,elt2,domainx,domainy):
for i in domainx:
for j in domainy:
liste.append({elt:i,elt2:j})
return liste
liste is now equal to [{'x':1,'y':3},{'x':1,'y':4},{'x':2,'y':3}, etc.]
but this function is not flexible or reusable because I must declare the variables one by one.
What I want is to generate solutions from a liste like:
liste=[(x,[1,2]),(y,[3,4])] or
liste=[(x,[324,3433]),(y,[43,34354,45]),(z,[5445,653,3,34,4,5])]
then I pass the liste to a function that will generate the solutions. I don't know how to do it, especially since the number of nested loops should match the length of liste.
My purpose for this question is to create this class,
class CST:
def __init__(self):
self.variables=[]
self.contraintes=[]
self.VarWithdomaines=[]
def addVarDom(self,variable,domain):
self.VarWithdomaines.append((variable,domain))
# def generate(self): it's not the correct way
# liste=[]
# for repitition in range(len(self.domaines)):
# solution={}
# for noeud in self.domaines:
# var=noeud[0]
# for x in noeud[1]:
# solution[var]=x
# liste.append(solution)
# return liste
Game=CST()
domain = [int(i) for i in range(2)]
domain2 = [int(i) for i in range(3)]
domain3 = [int(i) for i in range(21)]
# domain4 = [int(i) for i in range()]
Game.addVarDom("E1",domain)
Game.addVarDom("E2",domain2)
Game.addVarDom("C20",domain3)
listofsolutions=Game.generate()
#list should be for exemple list=[{E1:1,E2:0,C20:20},{E1:0,E2:2,C20:12}...]
| [
"Try using itertools!\nEssentially, itertools.product does what you want. As explained in the docs, this function takes the Cartesian product of input iterables. So, list(itertools.product([0, 1], [2, 3])) produces [(0, 2), (0, 3), (1, 2), (1, 3)], which is basically all you want.\nAll that's left is converting that output to the format you're looking for:\nimport itertools\n\ndef assignments(names, values):\n combinations = itertools.product(*values)\n return [\n {\n name: value\n for name, value in zip(names, combination)\n }\n for combination in combinations\n ]\n\nassignments(['x', 'y', 'z'], [[1, 2], [3, 4], [5, 6]])\n\nThis would produce the array\n[\n {'x': 1, 'y': 3, 'z': 5},\n {'x': 1, 'y': 3, 'z': 6},\n {'x': 1, 'y': 4, 'z': 5},\n {'x': 1, 'y': 4, 'z': 6},\n {'x': 2, 'y': 3, 'z': 5},\n {'x': 2, 'y': 3, 'z': 6},\n {'x': 2, 'y': 4, 'z': 5},\n {'x': 2, 'y': 4, 'z': 6},\n]\n\n",
"Thanks to @BrownieInMotion\nI have modified the methods in the class and it's worked,\nmodified methods:\n def addVarDom(self,variable,domain):\n self.variables.append(variable)\n self.domaines.append(domain)\n \n def generate(self):#names, values):\n combinations = itertools.product(*self.domaines)\n return [\n {\n name: value\n for name, value in zip(self.variables, combination)\n }\n for combination in combinations\n ]\n\n"
] | [
1,
1
] | [] | [] | [
"python"
] | stackoverflow_0074606461_python.txt |
Q:
Pandas read_pickle, UnpicklingError: invalid load key, '\xfd'
I am trying to read in my pickle file, however I am getting the following error: UnpicklingError: invalid load key, '\xfd'. Does anyone know how to solve this?
import pandas as pd
file = r"O:\Stack\Over\Flow\202210_Other.pkl"
test = pd.read_pickle(file)
print(test)
Any advice would be appreciated.
A:
Was able to figure it out: the file was xz-compressed rather than a plain pickle (the '\xfd' in the error is the first byte of the xz magic number), so it needed
pd.read_pickle(file, compression="xz")
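If you are unsure which compression a file uses, a quick sketch is to peek at its magic bytes; xz files start with b'\xfd7zXZ\x00', which matches the '\xfd' in the error:
with open(file, "rb") as f:
    magic = f.read(6)
print(magic)  # b'\xfd7zXZ\x00' indicates xz/LZMA compression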
| Pandas read_pickle, UnpicklingError: invalid load key, '\xfd' | I am trying to read in my pickle file, however I am getting the following error: UnpicklingError: invalid load key, '\xfd'. Does anyone know how to solve this?
import pandas as pd
file = r"O:\Stack\Over\Flow\202210_Other.pkl"
test = pd.read_pickle(file)
print(test)
Any advice would be appreciated.
| [
"Was able to figure it out, it was\npd.read_pickle(file, compression=\"xz\")\n"
] | [
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074606016_pandas_python.txt |
Q:
How can I add labels to a distance matrix used to make a dendrogram and have the labels also show on the dendrogram
I have a distance matrix:
array('d', [188.61516889752, 226.68716730362135, 188.96015266132167])
I would like to add labels to the matrix before performing hierarchical clustering using scipy.
I produce a UPGMA dendrogram from the distance matrix using:
from scipy.cluster.hierarchy import average, fcluster
#from scipy.spatial.distance import pdist
outDND=average(distanceMatrix)
I have tried adding the labels to the dendrogram using:
from scipy.cluster.hierarchy import average, fcluster
#from scipy.spatial.distance import pdist
outDND=average(distanceMatrix, labels=['A','B','C'])
But that does not work. I get the error:
TypeError: average() got an unexpected keyword argument 'labels'
How can I add labels to 'distanceMatrix' and have them carry through to outDND?
A:
It looks like you're missing a couple steps between "create the distance matrix" and "create the dendrogram".
See this other StackOverflow question for several worked examples.
In general, scipy and the underlying numpy tend not to include labels in their data structures. (Unlike, say, pandas, which does track labels.) That means you're responsible for keeping separate lists of labels and figuring out the correct order & references.
The steps you'll need are:
Compute the distance matrix (which you've done, although you should drop the unrecognized "labels" parameter)
Use the scipy.cluster.hierarchy.linkage() function to find hierarchies using the just-computed distance matrix.
Display the resulting linkages using scipy.cluster.hierarchy.dendrogram(). This is the step at which you'll be able to insert your labels using the "labels" argument.
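Putting the three steps together, a minimal sketch for the three-point condensed matrix above (the labels ['A', 'B', 'C'] are the ones you wanted to attach):
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import average, dendrogram

distanceMatrix = np.array([188.61516889752, 226.68716730362135, 188.96015266132167])
outDND = average(distanceMatrix)            # step 2: UPGMA (average) linkage
dendrogram(outDND, labels=['A', 'B', 'C'])  # step 3: labels are attached here
plt.show()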
| How can I add labels to a distance matrix used to make a dendrogram and have the labels also show on the dendrogram | I have a distance matrix:
array('d', [188.61516889752, 226.68716730362135, 188.96015266132167])
I would like to add labels to the matrix before performing hierarchical clustering using scipy.
I produce a UPGMA dendrogram from the distance matrix using:
from scipy.cluster.hierarchy import average, fcluster
#from scipy.spatial.distance import pdist
outDND=average(distanceMatrix)
I have tried adding the labels to the dendrogram using:
from scipy.cluster.hierarchy import average, fcluster
#from scipy.spatial.distance import pdist
outDND=average(distanceMatrix, labels=['A','B','C'])
But that does not work. I get the error:
TypeError: average() got an unexpected keyword argument 'labels'
How can I add labels to 'distanceMatrix' and have them carry through to outDND?
| [
"It looks like you're missing a couple steps between \"create the distance matrix\" and \"create the dendrogram\".\nSee this other StackOverflow question for several worked examples.\nIn general, scipy and the underlying numpy tend not to include labels in their data structures. (Unlike, say pandas, which does track labels.). That means you're responsible for keeping separate lists of labels and figuring out the correct order & references.\nThe steps you'll need are:\n\nCompute the distance matrix (which you've done, although you should drop the unrecognized \"labels\" parameter)\nUse the scipy.cluster.hierarchy.linkage() function to find hierarchies using the just-computed distance matrix.\nDisplay the resulting linkages using scipy.cluster.hierarchy.dendrogram(). This is the step at which you'll be able to insert your labels using the \"labels\" argument.\n\n"
] | [
0
] | [] | [] | [
"arrays",
"numpy",
"python",
"scikit_learn",
"scipy"
] | stackoverflow_0074606336_arrays_numpy_python_scikit_learn_scipy.txt |
Q:
auto detect face only when the human is in motion and take a snapshot with opencv
I'm working on a face recognition project in Python, trying to take a snapshot of a human face from an IP cam whenever a human appears in the cam stream. Here is the code:
import numpy as np
import cv2
import time
#import the cascade for face detection
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
def TakeSnapshotAndSave():
video = cv2.VideoCapture("rtsp://user:[email protected]:553/Streaming/Channels/401")
width = 1500
height = 1080
dim = (width, height)
num = 0
while True:
_, frame = video.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
for (x, y, w, h) in faces:
cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
roi_gray = gray[y:y + h, x:x + w]
roi_color = frame[y:y + h, x:x + w]
x = 0
y = 20
text_color = (0, 255, 0)
cv2.imwrite('opencv' + str(num) + '.jpg', frame)
num = num + 1
frame = cv2.resize(frame, (1500, 1000))
cv2.imshow("Lodhran Camera", frame)
k = cv2.waitKey(1)
if k == ord('q'):
break
video.release()
cv2.destroyAllWindows()
if __name__ == "__main__":
TakeSnapshotAndSave()
But it takes an image of the full frame, not only the face, while I just want the faces to be snapped and saved: if the frame has 5 humans at the same time, the output should be 5 images of the faces of those 5 humans, not the full frame with all humans. Any help with the code will be appreciated, thanks in advance.
A:
I guess you need to modify your code to make a cropped image for every facebox and then write the cropped image, not a basic frame:
for (x, y, w, h) in faces:
cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
roi_gray = gray[y:y + h, x:x + w]
roi_color = frame[y:y + h, x:x + w]
    cropped_frame = frame[y:y + h, x:x + w]  # slice rows (y) first, then columns (x)
    cv2.imwrite("some_name.jpg", cropped_frame)  # use a unique filename per face in practice
...
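A slightly fuller sketch of that loop (the counter and filename pattern are assumptions, just to give each face its own file):
num = 0
for (x, y, w, h) in faces:
    face_crop = frame[y:y + h, x:x + w]  # rows first, then columns
    cv2.imwrite(f"face_{num}.jpg", face_crop)
    num += 1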
| auto detect face only when the human is in motion and take a snapshot with opencv | I'm working on a face recognition project in Python, trying to take a snapshot of a human face from an IP cam whenever a human appears in the cam stream. Here is the code:
import numpy as np
import cv2
import time
#import the cascade for face detection
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
def TakeSnapshotAndSave():
video = cv2.VideoCapture("rtsp://user:[email protected]:553/Streaming/Channels/401")
width = 1500
height = 1080
dim = (width, height)
num = 0
while True:
_, frame = video.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
for (x, y, w, h) in faces:
cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
roi_gray = gray[y:y + h, x:x + w]
roi_color = frame[y:y + h, x:x + w]
x = 0
y = 20
text_color = (0, 255, 0)
cv2.imwrite('opencv' + str(num) + '.jpg', frame)
num = num + 1
frame = cv2.resize(frame, (1500, 1000))
cv2.imshow("Lodhran Camera", frame)
k = cv2.waitKey(1)
if k == ord('q'):
break
video.release()
cv2.destroyAllWindows()
if __name__ == "__main__":
TakeSnapshotAndSave()
But it takes an image of the full frame, not only the face, while I just want the faces to be snapped and saved: if the frame has 5 humans at the same time, the output should be 5 images of the faces of those 5 humans, not the full frame with all humans. Any help with the code will be appreciated, thanks in advance.
| [
"I guess you need to modify your code to make a cropped image for every facebox and then write the cropped image, not a basic frame:\nfor (x, y, w, h) in faces:\n cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)\n roi_gray = gray[y:y + h, x:x + w]\n roi_color = frame[y:y + h, x:x + w]\n cropped_frame = frame[(x, y), (x + w, y + h)]\n cv2.imwrite(\"some_name.jpg\", cropped_frame)\n ...\n\n"
] | [
0
] | [] | [] | [
"face_recognition",
"opencv",
"python"
] | stackoverflow_0074600768_face_recognition_opencv_python.txt |
Q:
Horizontal concatenating dataframes without taking into account the index
I'm stuck with something which looks super easy:
I have a dataframe df1:
df1 = pd.DataFrame(np.random.randint(25, size=(4, 4)),
index=["1", "2", "3", "4"],
columns=["A", "B", "C", "D"])
I have another dataframe df2:
df2 = pd.DataFrame(np.random.randint(25, size=(4, 2)),
index=["5", "6", "7", "8"],
columns=["A", "F"])
They don't have the same index, but I want to concatenate them so that I have something like this:
df_final = pd.DataFrame(np.random.randint(25, size=(4, 6)),
index=["1", "2", "3", "4"],
columns=["A", "B", "C", "D","A","F"])
I don't care about the index, it may be df1's index or something else.
I tried different approaches, such as
horizontal_concat_init_index = pd.concat([df1, df2], axis=1).reindex(df1.index)
or
horizontal_concat_ignore_index = pd.concat([df1, df2], ignore_index=True, axis=0)
I don't know what to do, I'm lost, can you help me?
A:
Unfortunately ignore_index only works on the axis you are trying to concat (which should be axis 1). You could remove the index before the concat:
pd.concat([df1.reset_index(drop=True), df2.reset_index(drop=True)], axis=1)
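An equivalent sketch that keeps df1's original index ('1'..'4') on the result instead of dropping it:
df_final = pd.concat([df1, df2.set_axis(df1.index)], axis=1)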
| Horizontal concatenating dataframes without taking into account the index | I'm stuck with something which looks super easy:
I have a dataframe df1:
df1 = pd.DataFrame(np.random.randint(25, size=(4, 4)),
index=["1", "2", "3", "4"],
columns=["A", "B", "C", "D"])
I have another dataframe df2:
df2 = pd.DataFrame(np.random.randint(25, size=(4, 2)),
index=["5", "6", "7", "8"],
columns=["A", "F"])
They don't have the same index, but I want to concatenate them so that I have something like this:
df_final = pd.DataFrame(np.random.randint(25, size=(4, 6)),
index=["1", "2", "3", "4"],
columns=["A", "B", "C", "D","A","F"])
I don't care about the index, it may be df1's index or something else.
I tried different approaches, such as
horizontal_concat_init_index = pd.concat([df1, df2], axis=1).reindex(df1.index)
or
horizontal_concat_ignore_index = pd.concat([df1, df2], ignore_index=True, axis=0)
I don't know what to do, I'm lost, can you help me?
| [
"Unfortunately ignore_index only works on the axis you are trying to concat (which should be axis 1). You could remove the index before the concat:\npd.concat([df1.reset_index(drop=True), df2.reset_index(drop=True)], axis=1)\n\n"
] | [
0
] | [] | [] | [
"concatenation",
"dataframe",
"python"
] | stackoverflow_0074606611_concatenation_dataframe_python.txt |
Q:
Order a list of ip,domain and url
I have a txt file with some IPs, domains and URLs, but the list is not organized. I want to separate the IPs, domains and URLs like this:
List before:
1.1.1.1
domain.com
2.2.2.2
https://url.com/test
3.3.3.3
domain2.com
List after:
IP
1.1.1.1
2.2.2.2
3.3.3.3
DOMAIN
domain.com
domain2.com
URL
https://url.com/test
How can i do it? Thanks!
A:
I think that's what you need. You must first install validators, with the following command: pip install validators. Hope this helps.
Code:
import validators
import re
my_dict = dict()
with open('config.txt') as f:
for line in f:
if bool(re.match(r"[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}", line)):
my_dict.setdefault('IP', []).append(line)
elif validators.url(line):
my_dict.setdefault('URL', []).append(line)
elif validators.domain(line):
my_dict.setdefault('DOMAIN', []).append(line)
for key in my_dict.keys():
print(key, '\n')
for value in my_dict[key]:
print(value , '\n')
Output:
IP
1.1.1.1
2.2.2.2
3.3.3.3
DOMAIN
domain.com
domain2.com
URL
https://url.com/test
you should also replace config.txt with your text file
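If you would rather avoid the extra dependency, a rough stdlib-only sketch (the heuristics are assumptions: anything parseable by ipaddress is an IP, anything containing '://' is a URL, everything else is treated as a domain):
import ipaddress

def classify(line):
    line = line.strip()
    try:
        ipaddress.ip_address(line)
        return "IP"
    except ValueError:
        pass
    return "URL" if "://" in line else "DOMAIN"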
| Order a list of ip,domain and url | I have a txt file with some IPs, domains and URLs, but the list is not organized. I want to separate the IPs, domains and URLs like this:
List before:
1.1.1.1
domain.com
2.2.2.2
https://url.com/test
3.3.3.3
domain2.com
List after:
IP
1.1.1.1
2.2.2.2
3.3.3.3
DOMAIN
domain.com
domain2.com
URL
https://url.com/test
How can i do it? Thanks!
| [
"I think thats what you need, you must install validators with the following command pip install validators. Hope this helps\nCode:\nimport validators\nimport re\n\nmy_dict = dict()\nwith open('config.txt') as f:\n for line in f:\n if bool(re.match(r\"[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\", line)):\n my_dict.setdefault('IP', []).append(line)\n elif validators.url(line):\n my_dict.setdefault('URL', []).append(line)\n elif validators.domain(line):\n my_dict.setdefault('DOMAIN', []).append(line)\n\nfor key in my_dict.keys():\n print(key, '\\n')\n for value in my_dict[key]:\n print(value , '\\n')\n\nOutput:\nIP \n\n1.1.1.1\n \n\n2.2.2.2\n \n\n3.3.3.3\n \n\nDOMAIN \n\ndomain.com\n \n\ndomain2.com \n\nURL \n\nhttps://url.com/test\n\nyou should also replace config.txt with your text file\n"
] | [
1
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074606365_python_python_3.x.txt |
Q:
Find the width of tree at each level/height (non-binary tree)
Dear experienced friends, I am looking for an algorithm (Python) that outputs the width of a tree at each level. Here are the input and expected outputs.
(I have updated the problem with a more complex edge list. The original question with sorted edge list can be elegantly solved by @Samwise answer.)
Input (Edge List: source-->target)
[[11,1],[11,2],
[10,11],[10,22],[10,33],
[33,3],[33,4],[33,5],[33,6]]
The tree graph looks like this:
10
/ | \
11 22 33
/ \ / | \ \
1 2 3 4 5 6
Expected Output (Width of each level/height)
[1,3,6] # according to the width of level 0,1,2
I have looked through the web. It seems this topic is related to BFS and level-order traversal. However, most solutions are based on binary trees. How can I solve the problem when the tree is not binary (e.g. the above case)?
(I'm new to algorithms, and any references would be really appreciated. Thank you!)
A:
Build a dictionary of the "level" of each node, and then count the number of nodes at each level:
>>> from collections import Counter
>>> def tree_width(edges):
... levels = {} # {node: level}
... for [p, c] in edges:
... levels[c] = levels.setdefault(p, 0) + 1
... widths = Counter(levels.values()) # {level: width}
... return [widths[level] for level in sorted(widths)]
...
>>> tree_width([[0,1],[0,2],[0,3],
... [1,4],[1,5],
... [3,6],[3,7],[3,8],[3,9]])
[1, 3, 6]
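Note (as the updated question itself points out) that this single pass assumes a parent's level is settled before its children are seen, so on the unsorted edge list it miscounts; as a quick check with the same function:
>>> tree_width([[11,1],[11,2],
...             [10,11],[10,22],[10,33],
...             [33,3],[33,4],[33,5],[33,6]])
[1, 5, 4]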
A:
This might not be the most efficient, but it requires only two scans over the edge list, so it's optimal up to a constant factor. It places no requirement on the order of the edges in the edge list, but it does insist that each edge be (source, dest). Also, it doesn't check that the edge list describes a connected tree (or a tree at all; if the edge list is cyclic, the program will never terminate).
from collections import defaultdict
# Turn the edge list into a (non-binary) tree, represented as a
# dictionary whose keys are the source nodes with the list of children
# as its value.
def edge_list_to_tree(edges):
'''Given a list of (source, dest) pairs, constructs a tree.
Returns a tuple (tree, root) where root is the root node
and tree is a dict which maps each node to a list of its children.
(Leaves are not present as keys in the dictionary.)
'''
tree = defaultdict(list)
sources = set() # nodes used as sources
dests = set() # nodes used as destinations
for source, dest in edges:
tree[source].append(dest)
sources.add(source)
dests.add(dest)
roots = sources - dests # Source nodes which are not destinations
assert(len(roots) == 1) # There is only one in a tree
tree.default_factory = None # Defang the defaultdict
return tree, roots.pop()
# A simple breadth-first-search, keeping the count of nodes at each level.
def level_widths(tree, root):
'''Does a BFS of tree starting at root counting nodes at each level.
Returns a list of counts.
'''
widths = [] # Widths of the levels
fringe = [root] # List of nodes at current level
while fringe:
widths.append(len(fringe))
kids = [] # List of nodes at next level
for parent in fringe:
if parent in tree:
for kid in tree[parent]:
kids.append(kid)
fringe = kids # For next iteration, use this level's kids
return widths
# Put the two pieces together.
def tree_width(edges):
return level_widths(*edge_list_to_tree(edges))
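For example, applying this to the unsorted edge list from the question yields the expected widths:
edges = [[11,1],[11,2],
         [10,11],[10,22],[10,33],
         [33,3],[33,4],[33,5],[33,6]]
print(tree_width(edges))  # [1, 3, 6]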
| Find the width of tree at each level/height (non-binary tree) | Dear experienced friends, I am looking for an algorithm (Python) that outputs the width of a tree at each level. Here are the input and expected outputs.
(I have updated the problem with a more complex edge list. The original question with a sorted edge list can be elegantly solved by @Samwise's answer.)
Input (Edge List: source-->target)
[[11,1],[11,2],
[10,11],[10,22],[10,33],
[33,3],[33,4],[33,5],[33,6]]
The tree graph looks like this:
10
/ | \
11 22 33
/ \ / | \ \
1 2 3 4 5 6
Expected Output (Width of each level/height)
[1,3,6] # according to the width of level 0,1,2
I have looked through the web. It seems this topic is related to BFS and level-order traversal. However, most solutions are based on binary trees. How can I solve the problem when the tree is not binary (e.g. the above case)?
(I'm new to algorithms, and any references would be really appreciated. Thank you!)
| [
"Build a dictionary of the \"level\" of each node, and then count the number of nodes at each level:\n>>> from collections import Counter\n>>> def tree_width(edges):\n... levels = {} # {node: level}\n... for [p, c] in edges:\n... levels[c] = levels.setdefault(p, 0) + 1\n... widths = Counter(levels.values()) # {level: width}\n... return [widths[level] for level in sorted(widths)]\n...\n>>> tree_width([[0,1],[0,2],[0,3],\n... [1,4],[1,5],\n... [3,6],[3,7],[3,8],[3,9]])\n[1, 3, 6]\n\n",
"This might not be the most efficient, but it requires only two scans over the edge list, so it's optimal up to a constant factor. It places no requirement on the order of the edges in the edge list, but does insist that each edge be (source, dest). Also, doesn't check that the edge list describes a connected tree (or a tree at all; if the edge list is cyclic, the program will never terminate).\nfrom collections import defauiltdict\n\n# Turn the edge list into a (non-binary) tree, represented as a\n# dictionary whose keys are the source nodes with the list of children\n# as its value.\ndef edge_list_to_tree(edges):\n '''Given a list of (source, dest) pairs, constructs a tree.\n Returns a tuple (tree, root) where root is the root node\n and tree is a dict which maps each node to a list of its children.\n (Leaves are not present as keys in the dictionary.)\n ''' \n tree = defaultdict(list)\n sources = set() # nodes used as sources\n dests = set() # nodes used as destinations\n for source, dest in edges:\n tree[source].append(dest)\n sources.add(source)\n dests.add(dest)\n roots = sources - dests # Source nodes which are not destinations\n assert(len(roots) == 1) # There is only one in a tree\n tree.default_factory = None # Defang the defaultdict\n return tree, roots.pop()\n\n# A simple breadth-first-search, keeping the count of nodes at each level.\ndef level_widths(tree, root):\n '''Does a BFS of tree starting at root counting nodes at each level.\n Returns a list of counts.\n '''\n widths = [] # Widths of the levels\n fringe = [root] # List of nodes at current level\n while fringe:\n widths.append(len(fringe))\n kids = [] # List of nodes at next level\n for parent in fringe:\n if parent in tree:\n for kid in tree[parent]:\n kids.append(kid)\n fringe = kids # For next iteration, use this level's kids\n return widths\n\n# Put the two pieces together.\ndef tree_width(edges):\n return level_widths(*edge_list_to_tree(edges))\n\n"
] | [
2,
1
] | [] | [] | [
"algorithm",
"python",
"python_3.x"
] | stackoverflow_0074604676_algorithm_python_python_3.x.txt |
Q:
Can't get href from Selenium webdriver scraping youtube
I am trying to scrape YouTube videos from a channel with the code below; however, it seems that my element_titles don't have an href attribute. This worked about a year ago and I am unsure why it doesn't work now. Did YouTube change the way we can get the href?
#Scrape for videos
# WARNING: Takes very long
HOME = "https://www.youtube.com/user/theneedledrop/videos"
driver = webdriver.Chrome("C:\webdriver\chromedriver.exe")
driver.get(HOME)
scroll()
element_titles = driver.find_elements(By.ID,"video-title")
The following attributes are what is found in the WebDriver objects:
> element_titles[0].get_property('attributes')[0]
{'ATTRIBUTE_NODE': 2,
'CDATA_SECTION_NODE': 4,
'COMMENT_NODE': 8,
'DOCUMENT_FRAGMENT_NODE': 11,
'DOCUMENT_NODE': 9,
'DOCUMENT_POSITION_CONTAINED_BY': 16,
'DOCUMENT_POSITION_CONTAINS': 8,
'DOCUMENT_POSITION_DISCONNECTED': 1,
'DOCUMENT_POSITION_FOLLOWING': 4,
'DOCUMENT_POSITION_IMPLEMENTATION_SPECIFIC': 32,
'DOCUMENT_POSITION_PRECEDING': 2,
'DOCUMENT_TYPE_NODE': 10,
'ELEMENT_NODE': 1,
'ENTITY_NODE': 6,
'ENTITY_REFERENCE_NODE': 5,
'NOTATION_NODE': 12,
'PROCESSING_INSTRUCTION_NODE': 7,
'TEXT_NODE': 3,
'__shady_addEventListener': {},
'__shady_appendChild': {},
'__shady_childNodes': [],
'__shady_cloneNode': {},
'__shady_contains': {},
'__shady_dispatchEvent': {},
'__shady_firstChild': None,
'__shady_getRootNode': {},
'__shady_insertBefore': {},
'__shady_isConnected': False,
'__shady_lastChild': None,
'__shady_native_addEventListener': {},
'__shady_native_appendChild': {},
'__shady_native_childNodes': [],
'__shady_native_cloneNode': {},
'__shady_native_contains': {},
'__shady_native_dispatchEvent': {},
'__shady_native_firstChild': None,
'__shady_native_insertBefore': {},
'__shady_native_lastChild': None,
'__shady_native_nextSibling': None,
'__shady_native_parentElement': None,
'__shady_native_parentNode': None,
'__shady_native_previousSibling': None,
'__shady_native_removeChild': {},
'__shady_native_removeEventListener': {},
'__shady_native_replaceChild': {},
'__shady_native_textContent': 'video-title',
'__shady_nextSibling': None,
'__shady_parentElement': None,
'__shady_parentNode': None,
'__shady_previousSibling': None,
'__shady_removeChild': {},
'__shady_removeEventListener': {},
'__shady_replaceChild': {},
'__shady_textContent': 'video-title',
'addEventListener': {},
'appendChild': {},
'baseURI': 'https://www.youtube.com/user/theneedledrop/videos',
'childNodes': [],
'cloneNode': {},
'compareDocumentPosition': {},
'contains': {},
'dispatchEvent': {},
'firstChild': None,
'getRootNode': {},
'hasChildNodes': {},
'insertBefore': {},
'isConnected': False,
'isDefaultNamespace': {},
'isEqualNode': {},
'isSameNode': {},
'lastChild': None,
'localName': 'id',
'lookupNamespaceURI': {},
'lookupPrefix': {},
'name': 'id',
'namespaceURI': None,
'nextSibling': None,
'nodeName': 'id',
'nodeType': 2,
'nodeValue': 'video-title',
'normalize': {},
'ownerDocument': <selenium.webdriver.remote.webelement.WebElement (session="906f0b2a91a96de78811a8b48c702ce9", element="4105d26d-55b3-49a1-b657-10bbbbf43c84")>,
'ownerElement': <selenium.webdriver.remote.webelement.WebElement (session="906f0b2a91a96de78811a8b48c702ce9", element="c0d38452-435c-489a-8cb8-858adc4828b9")>,
'parentElement': None,
'parentNode': None,
'prefix': None,
'previousSibling': None,
'removeChild': {},
'removeEventListener': {},
'replaceChild': {},
'specified': True,
'textContent': 'video-title',
'value': 'video-title'}
I have tried exploring the web pages of YouTube videos for the href; however, I am unable to find it.
A:
The full working code below will smoothly pull all the required data - here, the video links.
Example:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
import time
import pandas as pd
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
options = webdriver.ChromeOptions()
#All are optional
#options.add_experimental_option("detach", True)
options.add_argument("--disable-extensions")
options.add_argument("--disable-notifications")
options.add_argument("--disable-Advertisement")
options.add_argument("--disable-popup-blocking")
options.add_argument("start-maximized")
s=Service('./chromedriver')
driver = webdriver.Chrome(service=s, options=options)
driver.get('https://www.youtube.com/user/theneedledrop/videos')
time.sleep(3)
item = []
SCROLL_PAUSE_TIME = 1
last_height = driver.execute_script("return document.documentElement.scrollHeight")
item_count = 100
while item_count > len(item):
driver.execute_script("window.scrollTo(0,document.documentElement.scrollHeight);")
time.sleep(SCROLL_PAUSE_TIME)
new_height = driver.execute_script("return document.documentElement.scrollHeight")
if new_height == last_height:
break
last_height = new_height
data = []
try:
for e in WebDriverWait(driver, 20).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, 'div#details'))):
vurl = e.find_element(By.CSS_SELECTOR,'a#video-title-link').get_attribute('href')
data.append({
'video_url':vurl,
})
except:
pass
item = data
#print(item)
#print(len(item))
df = pd.DataFrame(item).drop_duplicates()
print(df.to_markdown())
Output:
| video_url |
|----:|:--------------------------------------------|
| 0 | https://www.youtube.com/watch?v=UZcSkasvj5c |
| 1 | https://www.youtube.com/watch?v=9c8AXKAnp_E |
| 2 | https://www.youtube.com/watch?v=KaLUHF7nQic |
| 3 | https://www.youtube.com/watch?v=rxb2L0Bgp3U |
| 4 | https://www.youtube.com/watch?v=z3L1wXvMN0Q |
| 5 | https://www.youtube.com/watch?v=q7vqR74WVYc |
| 6 | https://www.youtube.com/watch?v=Kb31OTOYYG8 |
| 7 | https://www.youtube.com/watch?v=F-CaQbxwMZ0 |
| 8 | https://www.youtube.com/watch?v=AWDWTyC0jls |
| 9 | https://www.youtube.com/watch?v=LXWbnTgxeT4 |
| 10 | https://www.youtube.com/watch?v=5KlHjDnefYQ |
| 11 | https://www.youtube.com/watch?v=yfq8rdBcAMg |
| 12 | https://www.youtube.com/watch?v=lATG1JBzVIU |
| 13 | https://www.youtube.com/watch?v=SNmZfHDOHQw |
| 14 | https://www.youtube.com/watch?v=IsQBbO_4EQI |
| 15 | https://www.youtube.com/watch?v=wcSyXUOM63g |
| 16 | https://www.youtube.com/watch?v=5hIaJZ9M8ZI |
| 17 | https://www.youtube.com/watch?v=ikryWQEHsCE |
| 18 | https://www.youtube.com/watch?v=5ARVgrao6E0 |
| 19 | https://www.youtube.com/watch?v=_1q6-POT8sY |
| 20 | https://www.youtube.com/watch?v=ycyxm3rgQG0 |
| 21 | https://www.youtube.com/watch?v=InirkRGnC2w |
| 22 | https://www.youtube.com/watch?v=nrvq5lY9oy0 |
| 23 | https://www.youtube.com/watch?v=M1yGh3D_KI8 |
| 24 | https://www.youtube.com/watch?v=Yn_4mtMYyXU |
| 25 | https://www.youtube.com/watch?v=8vmm8x_Cq4s |
| 26 | https://www.youtube.com/watch?v=Zfyojbr-cEQ |
| 27 | https://www.youtube.com/watch?v=NqrVX-WOrc0 |
| 28 | https://www.youtube.com/watch?v=Hx6k20LsAJ4 |
| 29 | https://www.youtube.com/watch?v=OB6ZI5Bicww |
| 30 | https://www.youtube.com/watch?v=uNMnIRKx0GE |
| 31 | https://www.youtube.com/watch?v=U7w_MKl5_hE |
| 32 | https://www.youtube.com/watch?v=KGi4Cpbh_Y0 |
| 33 | https://www.youtube.com/watch?v=mQqRtaoyAdw |
| 34 | https://www.youtube.com/watch?v=s3VzTy9oXXM |
| 35 | https://www.youtube.com/watch?v=eCaojgO-ZWs |
| 36 | https://www.youtube.com/watch?v=SeOLXwvu87E |
| 37 | https://www.youtube.com/watch?v=IlZ6Y21rxTU |
| 38 | https://www.youtube.com/watch?v=HxoRbEQFx3U |
| 39 | https://www.youtube.com/watch?v=NDCAImW1o6o |
| 40 | https://www.youtube.com/watch?v=gE778rR6-EM |
| 41 | https://www.youtube.com/watch?v=cQ0eY9NJACQ |
| 42 | https://www.youtube.com/watch?v=-x5Bx-leRWI |
| 43 | https://www.youtube.com/watch?v=XQ0C_Dmf0hI |
| 44 | https://www.youtube.com/watch?v=0eJ4JRNi4J8 |
| 45 | https://www.youtube.com/watch?v=YczkDCv3GiM |
| 46 | https://www.youtube.com/watch?v=GQmUsdUI20A |
| 47 | https://www.youtube.com/watch?v=4CFnoywFia4 |
| 48 | https://www.youtube.com/watch?v=A0Bzv8weX4s |
| 49 | https://www.youtube.com/watch?v=YbxcaHn_d_o |
| 50 | https://www.youtube.com/watch?v=GwUNT2k26mQ |
| 51 | https://www.youtube.com/watch?v=zktcHftIhDs |
| 52 | https://www.youtube.com/watch?v=_rY7Hvxe4x4 |
| 53 | https://www.youtube.com/watch?v=rqB9gd4fbfE |
| 54 | https://www.youtube.com/watch?v=oNPAhe7G3yg |
| 55 | https://www.youtube.com/watch?v=37_aCQW98sU |
| 56 | https://www.youtube.com/watch?v=GjA4fWIUv-A |
| 57 | https://www.youtube.com/watch?v=8THBFF024ho |
| 58 | https://www.youtube.com/watch?v=HLErXgsV3Nk |
| 59 | https://www.youtube.com/watch?v=GsvdLIxY6Fg |
| 60 | https://www.youtube.com/watch?v=iUU48DuTpl8 |
| 61 | https://www.youtube.com/watch?v=5UluxcFJVx0 |
| 62 | https://www.youtube.com/watch?v=5lOvAHg12uw |
| 63 | https://www.youtube.com/watch?v=2UADjU66-4M |
| 64 | https://www.youtube.com/watch?v=Qvr2labD_Es |
| 65 | https://www.youtube.com/watch?v=qUWRnIn5oB0 |
| 66 | https://www.youtube.com/watch?v=Qk7MPEyGhQ4 |
| 67 | https://www.youtube.com/watch?v=bN7SDJFanS4 |
| 68 | https://www.youtube.com/watch?v=6YoUjUGvHUk |
| 69 | https://www.youtube.com/watch?v=NjiLz3HoWkM |
| 70 | https://www.youtube.com/watch?v=rRdU7VhoWdI |
| 71 | https://www.youtube.com/watch?v=zOm5n0OJLfc |
| 72 | https://www.youtube.com/watch?v=z9jMFiSUe5Q |
| 73 | https://www.youtube.com/watch?v=M6VLYjFnXMU |
| 74 | https://www.youtube.com/watch?v=4iFEpKDQx-o |
| 75 | https://www.youtube.com/watch?v=Zc1SE66DEYo |
| 76 | https://www.youtube.com/watch?v=645qisC4slI |
| 77 | https://www.youtube.com/watch?v=QeIRfgsVX5k |
| 78 | https://www.youtube.com/watch?v=0jUr57dIMq4 |
| 79 | https://www.youtube.com/watch?v=EjaTJGmoT_w |
| 80 | https://www.youtube.com/watch?v=roXy5LA17fU |
| 81 | https://www.youtube.com/watch?v=UeSwqepnAX0 |
| 82 | https://www.youtube.com/watch?v=BDYSYypzhxE |
| 83 | https://www.youtube.com/watch?v=iyBNxEnP7rk |
| 84 | https://www.youtube.com/watch?v=YCUmI9f77qs |
| 85 | https://www.youtube.com/watch?v=h21LYpHEfNU |
| 86 | https://www.youtube.com/watch?v=LBQDuTn6T0c |
| 87 | https://www.youtube.com/watch?v=le_0jyqCXFU |
| 88 | https://www.youtube.com/watch?v=tGClvgTCrIY |
| 89 | https://www.youtube.com/watch?v=969qt4RUx74 |
| 90 | https://www.youtube.com/watch?v=XL8li__PnaA |
| 91 | https://www.youtube.com/watch?v=RKf3ppfFUkg |
| 92 | https://www.youtube.com/watch?v=xY5RyjaQJCE |
| 93 | https://www.youtube.com/watch?v=6bjliN6hJTs |
| 94 | https://www.youtube.com/watch?v=KcYBolH-j9c |
| 95 | https://www.youtube.com/watch?v=nlsnpbRyvtU |
| 96 | https://www.youtube.com/watch?v=AOWmL1eydWI |
| 97 | https://www.youtube.com/watch?v=I8RPsF-hdXo |
| 98 | https://www.youtube.com/watch?v=9NSOGd2p530 |
| 99 | https://www.youtube.com/watch?v=8EdqpZu9lkM |
| 100 | https://www.youtube.com/watch?v=a23wQEA4EAA |
| 101 | https://www.youtube.com/watch?v=7g6TXGY-T6k |
| 102 | https://www.youtube.com/watch?v=iXZNlGwOuWY |
| 103 | https://www.youtube.com/watch?v=miR30bsSH4E |
| 104 | https://www.youtube.com/watch?v=zb8-aHiTKL4 |
| 105 | https://www.youtube.com/watch?v=rTEZmXq9K3k |
| 106 | https://www.youtube.com/watch?v=OBeOJiolMug |
| 107 | https://www.youtube.com/watch?v=fA0nxixnS-A |
| 108 | https://www.youtube.com/watch?v=dMhpDlUTT_U |
| 109 | https://www.youtube.com/watch?v=SgjDaPWjzuU |
| 110 | https://www.youtube.com/watch?v=2lokqffmF2A |
| 111 | https://www.youtube.com/watch?v=jmHZvGMe8pQ |
| 112 | https://www.youtube.com/watch?v=KPYvMIMON9g |
... so on
A:
Try video-title-link.
Exactly which element contains the /watch link depends slightly on the context, in the current state of YouTube. On the homepage and in a channel's "videos" tab, the URL of a given video can be found in its anchor element with id video-title-link.
On the "home" tab of a given channel, the relevant links still have id video-title.
| Can't get href from Selenium webdriver scraping youtube | I am trying to scrape YouTube videos from a channel with the code below; however, it seems that my element_titles don't have an href attribute. This worked about a year ago and I am unsure why it doesn't work now. Did YouTube change the way we can get the href?
#Scrape for videos
# WARNING: Takes very long
HOME = "https://www.youtube.com/user/theneedledrop/videos"
driver = webdriver.Chrome("C:\webdriver\chromedriver.exe")
driver.get(HOME)
scroll()
element_titles = driver.find_elements(By.ID,"video-title")
The following attributes are what is found in the WebDriver objects:
> element_titles[0].get_property('attributes')[0]
{'ATTRIBUTE_NODE': 2,
'CDATA_SECTION_NODE': 4,
'COMMENT_NODE': 8,
'DOCUMENT_FRAGMENT_NODE': 11,
'DOCUMENT_NODE': 9,
'DOCUMENT_POSITION_CONTAINED_BY': 16,
'DOCUMENT_POSITION_CONTAINS': 8,
'DOCUMENT_POSITION_DISCONNECTED': 1,
'DOCUMENT_POSITION_FOLLOWING': 4,
'DOCUMENT_POSITION_IMPLEMENTATION_SPECIFIC': 32,
'DOCUMENT_POSITION_PRECEDING': 2,
'DOCUMENT_TYPE_NODE': 10,
'ELEMENT_NODE': 1,
'ENTITY_NODE': 6,
'ENTITY_REFERENCE_NODE': 5,
'NOTATION_NODE': 12,
'PROCESSING_INSTRUCTION_NODE': 7,
'TEXT_NODE': 3,
'__shady_addEventListener': {},
'__shady_appendChild': {},
'__shady_childNodes': [],
'__shady_cloneNode': {},
'__shady_contains': {},
'__shady_dispatchEvent': {},
'__shady_firstChild': None,
'__shady_getRootNode': {},
'__shady_insertBefore': {},
'__shady_isConnected': False,
'__shady_lastChild': None,
'__shady_native_addEventListener': {},
'__shady_native_appendChild': {},
'__shady_native_childNodes': [],
'__shady_native_cloneNode': {},
'__shady_native_contains': {},
'__shady_native_dispatchEvent': {},
'__shady_native_firstChild': None,
'__shady_native_insertBefore': {},
'__shady_native_lastChild': None,
'__shady_native_nextSibling': None,
'__shady_native_parentElement': None,
'__shady_native_parentNode': None,
'__shady_native_previousSibling': None,
'__shady_native_removeChild': {},
'__shady_native_removeEventListener': {},
'__shady_native_replaceChild': {},
'__shady_native_textContent': 'video-title',
'__shady_nextSibling': None,
'__shady_parentElement': None,
'__shady_parentNode': None,
'__shady_previousSibling': None,
'__shady_removeChild': {},
'__shady_removeEventListener': {},
'__shady_replaceChild': {},
'__shady_textContent': 'video-title',
'addEventListener': {},
'appendChild': {},
'baseURI': 'https://www.youtube.com/user/theneedledrop/videos',
'childNodes': [],
'cloneNode': {},
'compareDocumentPosition': {},
'contains': {},
'dispatchEvent': {},
'firstChild': None,
'getRootNode': {},
'hasChildNodes': {},
'insertBefore': {},
'isConnected': False,
'isDefaultNamespace': {},
'isEqualNode': {},
'isSameNode': {},
'lastChild': None,
'localName': 'id',
'lookupNamespaceURI': {},
'lookupPrefix': {},
'name': 'id',
'namespaceURI': None,
'nextSibling': None,
'nodeName': 'id',
'nodeType': 2,
'nodeValue': 'video-title',
'normalize': {},
'ownerDocument': <selenium.webdriver.remote.webelement.WebElement (session="906f0b2a91a96de78811a8b48c702ce9", element="4105d26d-55b3-49a1-b657-10bbbbf43c84")>,
'ownerElement': <selenium.webdriver.remote.webelement.WebElement (session="906f0b2a91a96de78811a8b48c702ce9", element="c0d38452-435c-489a-8cb8-858adc4828b9")>,
'parentElement': None,
'parentNode': None,
'prefix': None,
'previousSibling': None,
'removeChild': {},
'removeEventListener': {},
'replaceChild': {},
'specified': True,
'textContent': 'video-title',
'value': 'video-title'}
I have tried exploring the web pages of YouTube videos for the href; however, I am unable to find it.
| [
"The below full working code will pull the required data here all the video links smoothly.\nExample:\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.common.by import By\nimport time\nimport pandas as pd\nfrom selenium.webdriver.support.wait import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = webdriver.ChromeOptions()\n#All are optional\n#options.add_experimental_option(\"detach\", True)\noptions.add_argument(\"--disable-extensions\")\noptions.add_argument(\"--disable-notifications\")\noptions.add_argument(\"--disable-Advertisement\")\noptions.add_argument(\"--disable-popup-blocking\")\noptions.add_argument(\"start-maximized\")\n\ns=Service('./chromedriver')\ndriver= webdriver.Chrome(service=s,options=options)\n\ndriver.get('https://www.youtube.com/user/theneedledrop/videos')\ntime.sleep(3)\n\nitem = []\nSCROLL_PAUSE_TIME = 1\nlast_height = driver.execute_script(\"return document.documentElement.scrollHeight\")\n\nitem_count = 100\n\nwhile item_count > len(item):\n driver.execute_script(\"window.scrollTo(0,document.documentElement.scrollHeight);\")\n time.sleep(SCROLL_PAUSE_TIME)\n new_height = driver.execute_script(\"return document.documentElement.scrollHeight\")\n\n if new_height == last_height:\n break\n last_height = new_height\n \n\ndata = []\ntry:\n for e in WebDriverWait(driver, 20).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, 'div#details'))):\n vurl = e.find_element(By.CSS_SELECTOR,'a#video-title-link').get_attribute('href')\n data.append({\n 'video_url':vurl,\n \n })\nexcept:\n pass\n \nitem = data\n#print(item)\n#print(len(item))\ndf = pd.DataFrame(item).drop_duplicates()\nprint(df.to_markdown())\n\nOutput:\n| video_url |\n|----:|:--------------------------------------------|\n| 0 | https://www.youtube.com/watch?v=UZcSkasvj5c |\n| 1 | https://www.youtube.com/watch?v=9c8AXKAnp_E |\n| 2 | https://www.youtube.com/watch?v=KaLUHF7nQic |\n| 3 | https://www.youtube.com/watch?v=rxb2L0Bgp3U |\n| 4 | https://www.youtube.com/watch?v=z3L1wXvMN0Q |\n| 5 | https://www.youtube.com/watch?v=q7vqR74WVYc |\n| 6 | https://www.youtube.com/watch?v=Kb31OTOYYG8 |\n| 7 | https://www.youtube.com/watch?v=F-CaQbxwMZ0 |\n| 8 | https://www.youtube.com/watch?v=AWDWTyC0jls |\n| 9 | https://www.youtube.com/watch?v=LXWbnTgxeT4 |\n| 10 | https://www.youtube.com/watch?v=5KlHjDnefYQ |\n| 11 | https://www.youtube.com/watch?v=yfq8rdBcAMg |\n| 12 | https://www.youtube.com/watch?v=lATG1JBzVIU |\n| 13 | https://www.youtube.com/watch?v=SNmZfHDOHQw |\n| 14 | https://www.youtube.com/watch?v=IsQBbO_4EQI |\n| 15 | https://www.youtube.com/watch?v=wcSyXUOM63g |\n| 16 | https://www.youtube.com/watch?v=5hIaJZ9M8ZI |\n| 17 | https://www.youtube.com/watch?v=ikryWQEHsCE |\n| 18 | https://www.youtube.com/watch?v=5ARVgrao6E0 |\n| 19 | https://www.youtube.com/watch?v=_1q6-POT8sY |\n| 20 | https://www.youtube.com/watch?v=ycyxm3rgQG0 |\n| 21 | https://www.youtube.com/watch?v=InirkRGnC2w |\n| 22 | https://www.youtube.com/watch?v=nrvq5lY9oy0 |\n| 23 | https://www.youtube.com/watch?v=M1yGh3D_KI8 |\n| 24 | https://www.youtube.com/watch?v=Yn_4mtMYyXU |\n| 25 | https://www.youtube.com/watch?v=8vmm8x_Cq4s |\n| 26 | https://www.youtube.com/watch?v=Zfyojbr-cEQ |\n| 27 | https://www.youtube.com/watch?v=NqrVX-WOrc0 |\n| 28 | https://www.youtube.com/watch?v=Hx6k20LsAJ4 |\n| 29 | https://www.youtube.com/watch?v=OB6ZI5Bicww |\n| 30 | https://www.youtube.com/watch?v=uNMnIRKx0GE |\n| 31 | 
https://www.youtube.com/watch?v=U7w_MKl5_hE |\n| 32 | https://www.youtube.com/watch?v=KGi4Cpbh_Y0 |\n| 33 | https://www.youtube.com/watch?v=mQqRtaoyAdw |\n| 34 | https://www.youtube.com/watch?v=s3VzTy9oXXM |\n| 35 | https://www.youtube.com/watch?v=eCaojgO-ZWs |\n| 36 | https://www.youtube.com/watch?v=SeOLXwvu87E |\n| 37 | https://www.youtube.com/watch?v=IlZ6Y21rxTU |\n| 38 | https://www.youtube.com/watch?v=HxoRbEQFx3U |\n| 39 | https://www.youtube.com/watch?v=NDCAImW1o6o |\n| 40 | https://www.youtube.com/watch?v=gE778rR6-EM |\n| 41 | https://www.youtube.com/watch?v=cQ0eY9NJACQ |\n| 42 | https://www.youtube.com/watch?v=-x5Bx-leRWI |\n| 43 | https://www.youtube.com/watch?v=XQ0C_Dmf0hI |\n| 44 | https://www.youtube.com/watch?v=0eJ4JRNi4J8 |\n| 45 | https://www.youtube.com/watch?v=YczkDCv3GiM |\n| 46 | https://www.youtube.com/watch?v=GQmUsdUI20A |\n| 47 | https://www.youtube.com/watch?v=4CFnoywFia4 |\n| 48 | https://www.youtube.com/watch?v=A0Bzv8weX4s |\n| 49 | https://www.youtube.com/watch?v=YbxcaHn_d_o |\n| 50 | https://www.youtube.com/watch?v=GwUNT2k26mQ |\n| 51 | https://www.youtube.com/watch?v=zktcHftIhDs |\n| 52 | https://www.youtube.com/watch?v=_rY7Hvxe4x4 |\n| 53 | https://www.youtube.com/watch?v=rqB9gd4fbfE |\n| 54 | https://www.youtube.com/watch?v=oNPAhe7G3yg |\n| 55 | https://www.youtube.com/watch?v=37_aCQW98sU |\n| 56 | https://www.youtube.com/watch?v=GjA4fWIUv-A |\n| 57 | https://www.youtube.com/watch?v=8THBFF024ho |\n| 58 | https://www.youtube.com/watch?v=HLErXgsV3Nk |\n| 59 | https://www.youtube.com/watch?v=GsvdLIxY6Fg |\n| 60 | https://www.youtube.com/watch?v=iUU48DuTpl8 |\n| 61 | https://www.youtube.com/watch?v=5UluxcFJVx0 |\n| 62 | https://www.youtube.com/watch?v=5lOvAHg12uw |\n| 63 | https://www.youtube.com/watch?v=2UADjU66-4M |\n| 64 | https://www.youtube.com/watch?v=Qvr2labD_Es |\n| 65 | https://www.youtube.com/watch?v=qUWRnIn5oB0 |\n| 66 | https://www.youtube.com/watch?v=Qk7MPEyGhQ4 |\n| 67 | https://www.youtube.com/watch?v=bN7SDJFanS4 |\n| 68 | https://www.youtube.com/watch?v=6YoUjUGvHUk |\n| 69 | https://www.youtube.com/watch?v=NjiLz3HoWkM |\n| 70 | https://www.youtube.com/watch?v=rRdU7VhoWdI |\n| 71 | https://www.youtube.com/watch?v=zOm5n0OJLfc |\n| 72 | https://www.youtube.com/watch?v=z9jMFiSUe5Q |\n| 73 | https://www.youtube.com/watch?v=M6VLYjFnXMU |\n| 74 | https://www.youtube.com/watch?v=4iFEpKDQx-o |\n| 75 | https://www.youtube.com/watch?v=Zc1SE66DEYo |\n| 76 | https://www.youtube.com/watch?v=645qisC4slI |\n| 77 | https://www.youtube.com/watch?v=QeIRfgsVX5k |\n| 78 | https://www.youtube.com/watch?v=0jUr57dIMq4 |\n| 79 | https://www.youtube.com/watch?v=EjaTJGmoT_w |\n| 80 | https://www.youtube.com/watch?v=roXy5LA17fU |\n| 81 | https://www.youtube.com/watch?v=UeSwqepnAX0 |\n| 82 | https://www.youtube.com/watch?v=BDYSYypzhxE |\n| 83 | https://www.youtube.com/watch?v=iyBNxEnP7rk |\n| 84 | https://www.youtube.com/watch?v=YCUmI9f77qs |\n| 85 | https://www.youtube.com/watch?v=h21LYpHEfNU |\n| 86 | https://www.youtube.com/watch?v=LBQDuTn6T0c |\n| 87 | https://www.youtube.com/watch?v=le_0jyqCXFU |\n| 88 | https://www.youtube.com/watch?v=tGClvgTCrIY |\n| 89 | https://www.youtube.com/watch?v=969qt4RUx74 |\n| 90 | https://www.youtube.com/watch?v=XL8li__PnaA |\n| 91 | https://www.youtube.com/watch?v=RKf3ppfFUkg |\n| 92 | https://www.youtube.com/watch?v=xY5RyjaQJCE |\n| 93 | https://www.youtube.com/watch?v=6bjliN6hJTs |\n| 94 | https://www.youtube.com/watch?v=KcYBolH-j9c |\n| 95 | https://www.youtube.com/watch?v=nlsnpbRyvtU |\n| 96 | https://www.youtube.com/watch?v=AOWmL1eydWI 
|\n| 97 | https://www.youtube.com/watch?v=I8RPsF-hdXo |\n| 98 | https://www.youtube.com/watch?v=9NSOGd2p530 |\n| 99 | https://www.youtube.com/watch?v=8EdqpZu9lkM |\n| 100 | https://www.youtube.com/watch?v=a23wQEA4EAA |\n| 101 | https://www.youtube.com/watch?v=7g6TXGY-T6k |\n| 102 | https://www.youtube.com/watch?v=iXZNlGwOuWY |\n| 103 | https://www.youtube.com/watch?v=miR30bsSH4E |\n| 104 | https://www.youtube.com/watch?v=zb8-aHiTKL4 |\n| 105 | https://www.youtube.com/watch?v=rTEZmXq9K3k |\n| 106 | https://www.youtube.com/watch?v=OBeOJiolMug |\n| 107 | https://www.youtube.com/watch?v=fA0nxixnS-A |\n| 108 | https://www.youtube.com/watch?v=dMhpDlUTT_U |\n| 109 | https://www.youtube.com/watch?v=SgjDaPWjzuU |\n| 110 | https://www.youtube.com/watch?v=2lokqffmF2A |\n| 111 | https://www.youtube.com/watch?v=jmHZvGMe8pQ |\n| 112 | https://www.youtube.com/watch?v=KPYvMIMON9g |\n\n... so on\n",
"Try video-title-link.\nExactly which element contains the /watch link depends slightly on the context, in the current state of YouTube. On the homepage and in a channel's \"videos\" tab, the URL of a given video can be found in its anchor element with id video-title-link.\nOn the \"home\" tab of a given channel, the relevant links still have id video-title.\n"
] | [
1,
0
] | [] | [] | [
"beautifulsoup",
"python",
"python_requests",
"selenium"
] | stackoverflow_0074606385_beautifulsoup_python_python_requests_selenium.txt |
Q:
Efficient way to get counts from a query
I am trying to find the occupancy of a parking lot for every time a vehicle exits. I have a data frame where each row corresponds to a parking entry and exit timestamp. The dataset is quite large and the solution I have currently takes a bit of time to process. I am able to find the occupancy by performing the following query:
Count('Exit Time Stamp of Row n' > 'Entry Date of All Rows' & 'Exit Time Stamp of Row n' <= 'Exit Date of All Rows')
This can be accomplished in python by creating the following function:
# Find the occupancy
def get_occ(df):
count_list = []
for exit_date in df['EXIT DATE']:
# Perform Query, append count to list
count = df.query("@exit_date > `ENTRY DATE` & @exit_date <= `EXIT DATE`" )['TYPE'].count()
count_list.append(count)
# Add counts to df
df['OCCUPANCY'] = count_list
NOTE: 'Type' is a separate column which I am using to perform the count operation.
This unfortunately takes a very long time to process for a dataset with hundreds of thousands of rows. Any suggestions for how I can improve the time it takes to process the script?
A:
Here's what I was suggesting. I print out the combined dataframe so you can see what it looks like.
import pandas as pd
data = [
[1, "2022-11-01 08:00:00", "2022-11-01 17:00:00"],
[2, "2022-11-01 09:00:00", "2022-11-01 13:00:00"],
[3, "2022-11-01 10:00:00", "2022-11-01 16:00:00"],
[4, "2022-11-01 11:00:00", "2022-11-01 18:00:00"],
[5, "2022-11-01 12:00:00", "2022-11-01 14:00:00"],
[6, "2022-11-01 13:00:00", "2022-11-01 18:00:00"],
[7, "2022-11-01 14:00:00", "2022-11-01 16:00:00"],
[8, "2022-11-01 15:00:00", "2022-11-01 18:00:00"],
[9, "2022-11-01 16:00:00", "2022-11-01 17:00:00"],
]
df = pd.DataFrame( data, columns=['id', 'ENTRY', 'EXIT'] )
df2 = df["ENTRY"].to_frame().rename(columns={"ENTRY":"timestamp"})
df2['action'] = 'enter'
df3 = df["EXIT"].to_frame().rename(columns={"EXIT":"timestamp"})
df3['action'] = 'exit'
df4 = pd.concat([df2,df3])
df4.sort_values(by='timestamp', inplace=True)
print(df4)
present = 0
for idx,row in df4.iterrows():
if row['action'] == 'enter':
present += 1
else:
present -= 1
print("At",row['timestamp'],"room holds",present,"people")
Output:
timestamp action
0 2022-11-01 08:00:00 enter
1 2022-11-01 09:00:00 enter
2 2022-11-01 10:00:00 enter
3 2022-11-01 11:00:00 enter
4 2022-11-01 12:00:00 enter
5 2022-11-01 13:00:00 enter
1 2022-11-01 13:00:00 exit
6 2022-11-01 14:00:00 enter
4 2022-11-01 14:00:00 exit
7 2022-11-01 15:00:00 enter
6 2022-11-01 16:00:00 exit
8 2022-11-01 16:00:00 enter
2 2022-11-01 16:00:00 exit
0 2022-11-01 17:00:00 exit
8 2022-11-01 17:00:00 exit
3 2022-11-01 18:00:00 exit
5 2022-11-01 18:00:00 exit
7 2022-11-01 18:00:00 exit
At 2022-11-01 08:00:00 room holds 1 people
At 2022-11-01 09:00:00 room holds 2 people
At 2022-11-01 10:00:00 room holds 3 people
At 2022-11-01 11:00:00 room holds 4 people
At 2022-11-01 12:00:00 room holds 5 people
At 2022-11-01 13:00:00 room holds 6 people
At 2022-11-01 13:00:00 room holds 5 people
At 2022-11-01 14:00:00 room holds 6 people
At 2022-11-01 14:00:00 room holds 5 people
At 2022-11-01 15:00:00 room holds 6 people
At 2022-11-01 16:00:00 room holds 5 people
At 2022-11-01 16:00:00 room holds 6 people
At 2022-11-01 16:00:00 room holds 5 people
At 2022-11-01 17:00:00 room holds 4 people
At 2022-11-01 17:00:00 room holds 3 people
At 2022-11-01 18:00:00 room holds 2 people
At 2022-11-01 18:00:00 room holds 1 people
At 2022-11-01 18:00:00 room holds 0 people
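If you want the counts back as a column instead of prints, a vectorized variant of the same idea (a sketch, reusing the df4 built above; tie-breaking at identical timestamps may need adjusting to match your exact query semantics):
df4['delta'] = df4['action'].map({'enter': 1, 'exit': -1})
df4['present'] = df4['delta'].cumsum()

# exit rows keep the index of the original df, so this aligns row-for-row
df['OCCUPANCY'] = df4.loc[df4['action'] == 'exit', 'present']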
| Efficient way to get counts from a query | I am trying to find the occupancy of a parking lot for every time a vehicle exits. I have a data frame where each row corresponds to a parking entry and exit timestamp. The dataset is quite large and the solution I have currently takes a bit of time to process. I am able to find the occupancy by performing the following query:
Count('Exit Time Stamp of Row n' > 'Entry Date of All Rows' & 'Exit Time Stamp of Row n' <= 'Exit Date of All Rows')
This can be accomplished in python by creating the following function:
# Find the occupancy
def get_occ(df):
count_list = []
for exit_date in df['EXIT DATE']:
# Perform Query, append count to list
count = df.query("@exit_date > `ENTRY DATE` & @exit_date <= `EXIT DATE`" )['TYPE'].count()
count_list.append(count)
# Add counts to df
df['OCCUPANCY'] = count_list
NOTE: 'Type' is a separate column which I am using to perform the count operation.
This unfortunately takes a very long time to process for a dataset with hundreds of thousands of rows. Any suggestions for how I can improve the time it takes to process the script?
| [
"Here's what I was suggesting. I print out the combined dataframe so you can see what it looks like.\nimport pandas as pd\n\ndata = [\n [1, \"2022-11-01 08:00:00\", \"2022-11-01 17:00:00\"],\n [2, \"2022-11-01 09:00:00\", \"2022-11-01 13:00:00\"],\n [3, \"2022-11-01 10:00:00\", \"2022-11-01 16:00:00\"],\n [4, \"2022-11-01 11:00:00\", \"2022-11-01 18:00:00\"],\n [5, \"2022-11-01 12:00:00\", \"2022-11-01 14:00:00\"],\n [6, \"2022-11-01 13:00:00\", \"2022-11-01 18:00:00\"],\n [7, \"2022-11-01 14:00:00\", \"2022-11-01 16:00:00\"],\n [8, \"2022-11-01 15:00:00\", \"2022-11-01 18:00:00\"],\n [9, \"2022-11-01 16:00:00\", \"2022-11-01 17:00:00\"],\n]\n\ndf = pd.DataFrame( data, columns=['id', 'ENTRY', 'EXIT'] )\ndf2 = df[\"ENTRY\"].to_frame().rename(columns={\"ENTRY\":\"timestamp\"})\ndf2['action'] = 'enter'\ndf3 = df[\"EXIT\"].to_frame().rename(columns={\"EXIT\":\"timestamp\"})\ndf3['action'] = 'exit'\ndf4 = pd.concat([df2,df3])\ndf4.sort_values(by='timestamp', inplace=True)\nprint(df4)\n\npresent = 0\nfor idx,row in df4.iterrows():\n if row['action'] == 'enter':\n present += 1\n else:\n present -= 1\n print(\"At\",row['timestamp'],\"room holds\",present,\"people\")\n\nOutput:\n timestamp action\n0 2022-11-01 08:00:00 enter\n1 2022-11-01 09:00:00 enter\n2 2022-11-01 10:00:00 enter\n3 2022-11-01 11:00:00 enter\n4 2022-11-01 12:00:00 enter\n5 2022-11-01 13:00:00 enter\n1 2022-11-01 13:00:00 exit\n6 2022-11-01 14:00:00 enter\n4 2022-11-01 14:00:00 exit\n7 2022-11-01 15:00:00 enter\n6 2022-11-01 16:00:00 exit\n8 2022-11-01 16:00:00 enter\n2 2022-11-01 16:00:00 exit\n0 2022-11-01 17:00:00 exit\n8 2022-11-01 17:00:00 exit\n3 2022-11-01 18:00:00 exit\n5 2022-11-01 18:00:00 exit\n7 2022-11-01 18:00:00 exit\n\nAt 2022-11-01 08:00:00 room holds 1 people\nAt 2022-11-01 09:00:00 room holds 2 people\nAt 2022-11-01 10:00:00 room holds 3 people\nAt 2022-11-01 11:00:00 room holds 4 people\nAt 2022-11-01 12:00:00 room holds 5 people\nAt 2022-11-01 13:00:00 room holds 6 people\nAt 2022-11-01 13:00:00 room holds 5 people\nAt 2022-11-01 14:00:00 room holds 6 people\nAt 2022-11-01 14:00:00 room holds 5 people\nAt 2022-11-01 15:00:00 room holds 6 people\nAt 2022-11-01 16:00:00 room holds 5 people\nAt 2022-11-01 16:00:00 room holds 6 people\nAt 2022-11-01 16:00:00 room holds 5 people\nAt 2022-11-01 17:00:00 room holds 4 people\nAt 2022-11-01 17:00:00 room holds 3 people\nAt 2022-11-01 18:00:00 room holds 2 people\nAt 2022-11-01 18:00:00 room holds 1 people\nAt 2022-11-01 18:00:00 room holds 0 people\n\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074605690_python.txt |
Q:
How to use type hints in python 3.6?
I noticed Python 3.5 and Python 3.6 added a lot of features about static type checking, so I tried with the following code (in python 3.6, stable version).
from typing import List
a: List[str] = []
a.append('a')
a.append(1)
print(a)
What surprised me was that Python didn't give me an error or warning, although 1 was appended to a list which should only contain strings. PyCharm detected the type error and gave me a warning about it, but it was not obvious, and it was not shown in the output console; I was afraid I might sometimes miss it. I would like the following effects:
If it's obvious that I used the wrong type just as shown above, throw out a warning or error.
If the compiler couldn't reliably check if the type I used was right or wrong, ignore it.
Is that possible? Maybe mypy could do it, but I'd prefer to use Python-3.6-style type checking (like a: List[str]) instead of the comment-style (like # type: List[str]) used in mypy. And I'm curious if there's a switch in native Python 3.6 to achieve the two points I said above.
A:
Type hints are entirely meant to be ignored by the Python runtime, and are checked only by 3rd party tools like mypy and Pycharm's integrated checker. There are also a variety of lesser known 3rd party tools that do typechecking at either compile time or runtime using type annotations, but most people use mypy or Pycharm's integrated checker AFAIK.
In fact, I actually doubt that typechecking will ever be integrated into Python proper in the foreseeable future -- see the 'non-goals' section of PEP 484 (which introduced type annotations) and PEP 526 (which introduced variable annotations), as well as Guido's comments here.
I'd personally be happy with type checking being more strongly integrated with Python, but it doesn't seem the Python community at large is ready or willing for such a change.
The latest version of mypy should understand both the Python 3.6 variable annotation syntax and the comment-style syntax. In fact, variable annotations were basically Guido's idea in the first place (Guido is currently a part of the mypy team) -- basically, support for type annotations in mypy and in Python was developed pretty much simultaneously.
A:
Is that possible? Maybe mypy could do it, but I'd prefer to use Python-3.6-style type checking (like a: List[str]) instead of the comment-style (like # type: List[str]) used in mypy. And I'm curious if there's a switch in native python 3.6 to achieve the two points I said above.
There's no way Python will do this for you; you can use mypy to get type checking (and PyCharm's built-in checker should do it too). In addition to that, mypy also doesn't restrict you to only type comments (# type: List[str]); you can use variable annotations as you do in Python 3.6, so a: List[str] works equally well.
With mypy as is, because the release is fresh, you'll need to install typed_ast and execute mypy with --fast-parser and --python-version 3.6 as documented in mypy's docs. This will probably change soon but for now you'll need them to get it running smoothly
Update: --fast-parser and --python-version 3.6 aren't needed now.
After you do that, mypy detects the incompatibility of the second operation on your a: List[str] just fine. Let's say your file is called tp_check.py with statements:
from typing import List
a: List[str] = []
a.append('a')
a.append(1)
print(a)
Running mypy with the aforementioned arguments (you must first pip install -U typed_ast):
python -m mypy --fast-parser --python-version 3.6 tp_check.py
catches the error:
tp_check.py:5: error: Argument 1 to "append" of "list" has incompatible type "int"; expected "str"
As noted in many other answers on type hinting with Python, mypy's and PyCharm's type-checkers are the ones performing the validation, not Python itself. Python doesn't use this information currently; it only stores it as metadata and ignores it during execution.
A:
Type annotations in Python are not meant to be type-enforcing. Anything involving runtime static-type dependency would mean changes so fundamental that it would not even make sense to continue to call the resulting language "Python".
Notice that the dynamic nature of Python does ALLOW for one to build an external tool, using pure-python code, to perform runtime type checking. It would make the program run (very) slowly, but maybe it is suitable for certain test categories.
To be sure - one of the fundamentals of the Python language is that everything is an object, and that you can try to perform any action on an object at runtime. If the object fails to have an interface that conforms with the attempted operation, it will fail - at runtime.
Languages that are by nature statically typed work in a different way: operations have to be known to be available on objects before run time. At the compile step, the compiler creates the spaces and slots for the appropriate objects all over the place - and, on non-conforming typing, breaks the compilation.
Python's typechecking allows any number of tools to do exactly that: to break and warn at a step prior to actually running the application (but independent from the compiling itself). But the nature of the language can't be changed to actually require the objects to comply at runtime - and verifying the typing and breaking at the compile step itself would be artificial.
Although, one can expect that future versions of Python may incorporate compile-time typechecking into the Python runtime itself - most likely through an optional command line switch. (I don't think it will ever be the default - at least not to break the build - maybe it can be made the default for emitting warnings.)
So, Python does not require static type-checking at runtime because it would cease being Python. But at least one language exists that makes use both of dynamic objects and static typing - the Cython language, which in practice works as a Python superset. One should expect Cython to incorporate the new type-hinting syntax as actual type declarations very soon. (Currently it uses a different syntax for its optional statically typed variables.)
A:
The pydantic package has a validate_arguments decorator that checks type hints at runtime. You can add this decorator to all functions or methods where you want type hints enforced.
I wrote some code to help automate this for an entire module, so that I could enable runtime checks for my test suite to help with debugging, but then have them off for code that uses the library so there's no performance impact.
import sys
import inspect
import types
from pydantic import validate_arguments
class ConfigAllowArbitraryTypes:
"""allows for custom classes to be used in type hints"""
arbitrary_types_allowed = True
def add_runtime_type_checks(module):
"""adds runtime typing checks to the given module/class"""
if isinstance(module, str):
module = sys.modules[module]
for attr in module.__dict__:
obj = getattr(module, attr)
if isinstance(obj, types.FunctionType):
setattr(module, attr, validate_arguments(obj, config=ConfigAllowArbitraryTypes))
elif inspect.isclass(obj):
# recursively add decorator to class methods
add_runtime_type_checks(obj)
In my test suite I then add decorators by calling add_runtime_type_checks with the name of the module.
import mymodule
def setup_module(module):
"""executes once"""
add_runtime_type_checks('mymodule')
def test_behavior():
...
Note that with pydantic it might do some unexpected conversions when type checking, so if the desired type is an int and you pass the function 0.2, it will cast it to 0 silently rather than failing. In principle, you could do almost the same thing with the typen library's enforce_type_hints decorator, but that does not do recursive checks (so you can't use types like list[int], only list).
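For example, a minimal sketch (assuming pydantic v1, where validate_arguments lives) showing both the runtime check and the silent-coercion caveat:
from pydantic import validate_arguments

@validate_arguments
def halve(n: int) -> int:
    return n // 2

print(halve(0.2))  # pydantic v1 silently coerces 0.2 -> 0, so this prints 0
halve("abc")       # raises pydantic.ValidationError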
| How to use type hints in python 3.6? | I noticed Python 3.5 and Python 3.6 added a lot of features about static type checking, so I tried with the following code (in python 3.6, stable version).
from typing import List
a: List[str] = []
a.append('a')
a.append(1)
print(a)
What surprised me was that Python didn't give me an error or warning, although 1 was appended to a list which should only contain strings. PyCharm detected the type error and gave me a warning about it, but it was not obvious, and it was not shown in the output console; I was afraid I might sometimes miss it. I would like the following effects:
If it's obvious that I used the wrong type just as shown above, throw out a warning or error.
If the compiler couldn't reliably check if the type I used was right or wrong, ignore it.
Is that possible? Maybe mypy could do it, but I'd prefer to use Python-3.6-style type checking (like a: List[str]) instead of the comment-style (like # type: List[str]) used in mypy. And I'm curious if there's a switch in native Python 3.6 to achieve the two points I said above.
| [
"Type hints are entirely meant to be ignored by the Python runtime, and are checked only by 3rd party tools like mypy and Pycharm's integrated checker. There are also a variety of lesser known 3rd party tools that do typechecking at either compile time or runtime using type annotations, but most people use mypy or Pycharm's integrated checker AFAIK.\nIn fact, I actually doubt that typechecking will ever be integrated into Python proper in the foreseable future -- see the 'non-goals' section of PEP 484 (which introduced type annotations) and PEP 526 (which introduced variable annotations), as well as Guido's comments here.\nI'd personally be happy with type checking being more strongly integrated with Python, but it doesn't seem the Python community at large is ready or willing for such a change.\nThe latest version of mypy should understand both the Python 3.6 variable annotation syntax and the comment-style syntax. In fact, variable annotations were basically Guido's idea in the first place (Guido is currently a part of the mypy team) -- basically, support for type annotations in mypy and in Python was developed pretty much simultaneously.\n",
"\nIs that possible? Maybe mypy could do it, but I'd prefer to use Python-3.6-style type checking (like a: List[str]) instead of the comment-style (like # type: List[str]) used in mypy. And I'm curious if there's a switch in native python 3.6 to achieve the two points I said above.\n\nThere's no way Python will do this for you; you can use mypy to get type checking (and PyCharms built-in checker should do it too). In addition to that, mypy also doesn't restrict you to only type comments # type List[str], you can use variable annotations as you do in Python 3.6 so a: List[str] works equally well.\nWith mypy as is, because the release is fresh, you'll need to install typed_ast and execute mypy with --fast-parser and --python-version 3.6 as documented in mypy's docs. This will probably change soon but for now you'll need them to get it running smoothly\nUpdate: --fast-parser and --python-version 3.6 aren't needed now.\nAfter you do that, mypy detects the incompatibility of the second operation on your a: List[str] just fine. Let's say your file is called tp_check.py with statements:\nfrom typing import List\n\na: List[str] = []\na.append('a')\na.append(1)\nprint(a)\n\nRunning mypy with the aforementioned arguments (you must first pip install -U typed_ast):\npython -m mypy --fast-parser --python-version 3.6 tp_check.py\n\ncatches the error:\ntp_check.py:5: error: Argument 1 to \"append\" of \"list\" has incompatible type \"int\"; expected \"str\"\n\nAs noted in many other answers on type hinting with Python, mypy and PyCharms' type-checkers are the ones performing the validation, not Python itself. Python doesn't use this information currently, it only stores it as metadata and ignores it during execution.\n",
"Type annotations in Python are not meant to be type-enforcing. Anything involving runtime static-type dependency would mean changes so fundamental that it would not even make sense to continue call the resulting language \"Python\".\nNotice that the dynamic nature of Python does ALLOW for one to build an external tool, using pure-python code, to perform runtime type checking. It would make the program run (very) slowly, but maybe it is suitable for certain test categories.\nTo be sure - one of the fundamentals of the Python language is that everything is an object, and that you can try to perform any action on an object at runtime. If the object fails to have an interface that conforms with the attempted operation, it will fail - at runtime.\nLanguages that are by nature statically typed work in a different way: operations simply have to be available on objects when tried at run time. At the compile step, the compiler creates the spaces and slots for the appropriate objects all over the place - and, on non-conforming typing, breaks the compilation.\nPython's typechecking allows any number of tools to do exactly that: to break and warn at a step prior to actually running the application (but independent from the compiling itself). But the nature of the language can't be changed to actually require the objects to comply in runtime - and veryfying the typing and breaking at the compile step itself would be artificial.\nAlthough, one can expect that future versions of Python may incoroporate compile-time typechecking on the Python runtime itself - most likely through and optional command line switch. (I don't think it will ever be default - at least not to break the build - maybe it can be made default for emitting warnings)\nSo, Python does not require static type-checking at runtime because it would cease being Python. But at least one language exists that makes use both of dynamic objects and static typing - the Cython language, that in practice works as a Python superset. One should expect Cython to incorporate the new type-hinting syntax to be actual type-declaring very soon. (Currently it uses a differing syntax for the optional statically typed variables)\n",
"The pydantic package has a validate_arguments decorator that checks type hints at runtime. You can add this decorator to all functions or methods where you want type hints enforced.\nI wrote some code to help automate this for an entire module, so that I could enable runtime checks for my test suite to help with debugging, but then have them off for code that uses the library so there's no performance impact.\nimport sys\nimport inspect\nimport types\nfrom pydantic import validate_arguments\n\nclass ConfigAllowArbitraryTypes:\n \"\"\"allows for custom classes to be used in type hints\"\"\"\n \n arbitrary_types_allowed = True\n\ndef add_runtime_type_checks(module):\n \"\"\"adds runtime typing checks to the given module/class\"\"\"\n\n if isinstance(module, str):\n module = sys.modules[module]\n \n for attr in module.__dict__:\n obj = getattr(module, attr)\n \n if isinstance(obj, types.FunctionType):\n setattr(module, attr, validate_arguments(obj, config=ConfigAllowArbitraryTypes))\n elif inspect.isclass(obj):\n # recursively add decorator to class methods\n add_runtime_type_checks(obj)\n\nIn my test suite I then add decorators by calling add_runtime_type_checks with the name of the module.\nimport mymodule\n\ndef setup_module(module):\n \"\"\"executes once\"\"\"\n \n add_runtime_type_checks('mymodule')\n\ndef test_behavior():\n ...\n\nNote that with pydantic it might do some unexpected conversions when type checking, so if the desired type is an int and you pass the function 0.2, it will cast it to 0 silently rather than failing. In principle, you could do almost the same thing with the typen library's enforce_type_hints decorator, but that does not do recursive checks (so you can't use types like list[int], only list).\n"
] | [
42,
24,
11,
0
] | [] | [] | [
"mypy",
"python",
"python_3.x",
"python_typing",
"type_hinting"
] | stackoverflow_0041356784_mypy_python_python_3.x_python_typing_type_hinting.txt |
Q:
AttributeError at /social-auth/complete/google-oauth2/ 'Request' object has no attribute 'login'
I had written a REST API service in Python using Django REST Framework, to which I wanted to attach authentication/authorization using OAuth2 (Google). I used the social-django lib; however, when I started my service locally and put my credentials into the Google form to authenticate, I kept getting this error (look at the image)...
The point where the error began was auth = request.login in /messenger/venv/lib/python3.8/site-packages/requests/sessions.py, line 479, in prepare_request. It was weird, but I solved my problem!
A:
I reinstalled the entire virtual environment and reworked all the places in the settings.py file that were responsible for auth. It helped me.
Also you can check URLs and Redirect URLs on console.cloud.google.com where you have registered your web app.
| AttributeError at /social-auth/complete/google-oauth2/ 'Request' object has no attribute 'login' |
I had written a REST API service in Python using Django REST Framework, to which I wanted to attach authentication/authorization using OAuth2 (Google). I used the social-django lib; however, when I started my service locally and put my credentials into the Google form to authenticate, I kept getting this error (look at the image)...
The point where the error began was auth = request.login in /messenger/venv/lib/python3.8/site-packages/requests/sessions.py, line 479, in prepare_request. It was weird, but I solved my problem!
| [
"I reinstalled the entire virtual environment, as well as all the places in the settings.py file that were responsible for auth. It helped me.\nAlso you can check URLs and Redirect URLs on console.cloud.google.com where you have registered your web app.\n"
] | [
0
] | [] | [] | [
"authentication",
"django",
"django_socialauth",
"oauth_2.0",
"python"
] | stackoverflow_0074606805_authentication_django_django_socialauth_oauth_2.0_python.txt |
Q:
Make directed graph run clockwise and change its orientation
The following code generates a circular directed graph with networkx.
from matplotlib import pyplot as plt
import networkx as nx
def make_cyclic_edge(lst):
cyclic = []
for i, elem in enumerate(lst):
if i+1 < len(lst):
cyclic.append((elem, lst[i+1]))
else:
cyclic.append((elem, lst[0]))
return cyclic
def cycle_diagram(generate, inhibit, organ_func=False, plot=False):
"""Generate element cycle diagram with Networkx. """
G = nx.MultiDiGraph()
for pair in generate:
G.add_edge(*pair, color="g", group="generate")
for pair in inhibit:
G.add_edge(*pair, color="r", group="inhibit")
pos = nx.circular_layout(G, center=(0, 0))
edges_to_adjust = []
if organ_func:
fig = plt.figure(2, figsize=(5, 5), dpi=200)
edges_to_adjust = [("LI", "GB", 0), ("LI", "GB", 1), ("GB", "TE", 0), ("TE", "LI", 0),
("ST", "BL", 0), ("BL", "SI", 0)]
for edge in edges_to_adjust:
G.remove_edge(*edge)
color = list(nx.get_edge_attributes(G, 'color').values())
if organ_func:
nx.draw(
G, pos=pos,
with_labels=True,
font_size=16,
node_size=800,
node_color='#C6DDCB',
connectionstyle=f"arc3, rad=0.0",
edge_color=color,
)
nx.draw_networkx_edges(G,
pos=pos,
node_size=800,
edgelist=edges_to_adjust,
connectionstyle=f"arc3, rad=0.6",
edge_color=["r"]*len(edges_to_adjust),
)
else:
fig = plt.figure(1, figsize=(4, 4), dpi=200)
nx.draw(
G, pos=pos,
with_labels=True,
font_family='AR PL KaitiM Big5',
font_size=16,
node_size=800,
node_color='#C6DDCB',
connectionstyle=f"arc3, rad=0.235",
edge_color=color,
)
if plot:
plt.show()
return G
ORGAN_FUNC = (
"SP",
"LU",
"PC",
"LI",
"TE",
"GB",
"LR",
"KI",
"HT",
"SI",
"BL",
"ST",
)
ORGAN_FUNC_GEN = make_cyclic_edge(ORGAN_FUNC)
ORGAN_FUNC_INHIBIT_A = (
"BL",
"SI",
"LI",
"GB",
"ST",
)
ORGAN_FUNC_INHIBIT_Am = (
"LI",
"GB",
"TE",
)
ORGAN_FUNC_INHIBIT_B = (
"SP",
"KI",
"PC",
"HT",
"LU",
"LR",
)
ORGAN_FUNC_INHIBIT_C = (
"LR",
"PC",
"HT",
"LU",
)
ORGAN_FUNC_INHIBIT_D = (
"BL",
"TE",
)
ORGAN_FUNC_inhibit_list = [
ORGAN_FUNC_INHIBIT_A,
ORGAN_FUNC_INHIBIT_Am,
ORGAN_FUNC_INHIBIT_B,
ORGAN_FUNC_INHIBIT_C,
ORGAN_FUNC_INHIBIT_D,
]
ORGAN_FUNC_INHIBIT = []
for cycle in ORGAN_FUNC_inhibit_list:
ORGAN_FUNC_INHIBIT += make_cyclic_edge(cycle)
ORGAN_FUNC_CYCLE = cycle_diagram(ORGAN_FUNC_GEN, ORGAN_FUNC_INHIBIT, organ_func=True, plot=True)
I would like to:
Make the arrows along its circumference run clockwise.
Rotate the graph so that "BL" is at the top.
like so (organ code-names differ slightly):
A:
It's not the most elegant, but here's one approach that works. I made the following changes to your cycle_diagram function. I added optional arguments top (to specify the top node) and flip (to specify whether to flip coordinates horizontally). Within the code, I added the following after the layout pos is defined.
if top:
c,s = pos[top]
R_mat = np.array([[s,-c],[c,s]])
for k,v in pos.items():
pos[k] = R_mat@v
if flip:
for k,v in pos.items():
pos[k][0] *= -1
R_mat is a rotation matrix that rotates the point at pos[top] to the point (0,1), assuming that all points in the layout have magnitude 1 (which is indeed the case when the circular_layout function is used).
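As a quick sanity check on that claim (my addition, not part of the original answer): if pos[top] = (c, s) with c**2 + s**2 = 1, then R_mat maps it exactly onto (0, 1).
import numpy as np

c, s = np.cos(0.7), np.sin(0.7)      # any unit-length (c, s) works
R_mat = np.array([[s, -c], [c, s]])
print(R_mat @ np.array([c, s]))      # -> [0. 1.], i.e. the chosen node lands at the top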
Here's the full script, with the additions.
from matplotlib import pyplot as plt
import networkx as nx
import numpy as np
def make_cyclic_edge(lst):
cyclic = []
for i, elem in enumerate(lst):
if i+1 < len(lst):
cyclic.append((elem, lst[i+1]))
else:
cyclic.append((elem, lst[0]))
return cyclic
def cycle_diagram(generate, inhibit, organ_func=False, plot=False, top=None, flip=False):
"""Generate element cycle diagram with Networkx. """
G = nx.MultiDiGraph()
for pair in generate:
G.add_edge(*pair, color="g", group="generate")
for pair in inhibit:
G.add_edge(*pair, color="r", group="inhibit")
pos = nx.circular_layout(G, center=(0, 0))
if top:
c,s = pos[top]
R_mat = np.array([[s,-c],[c,s]])
for k,v in pos.items():
pos[k] = R_mat@v
if flip:
for k,v in pos.items():
pos[k][0] *= -1
edges_to_adjust = []
if organ_func:
fig = plt.figure(2, figsize=(5, 5), dpi=200)
edges_to_adjust = [("LI", "GB", 0), ("LI", "GB", 1), ("GB", "TE", 0), ("TE", "LI", 0),
("ST", "BL", 0), ("BL", "SI", 0)]
for edge in edges_to_adjust:
G.remove_edge(*edge)
color = list(nx.get_edge_attributes(G, 'color').values())
if organ_func:
nx.draw(
G, pos=pos,
with_labels=True,
font_size=16,
node_size=800,
node_color='#C6DDCB',
connectionstyle=f"arc3, rad=0.0",
edge_color=color,
)
nx.draw_networkx_edges(G,
pos=pos,
node_size=800,
edgelist=edges_to_adjust,
connectionstyle=f"arc3, rad=0.6",
edge_color=["r"]*len(edges_to_adjust),
)
else:
fig = plt.figure(1, figsize=(4, 4), dpi=200)
nx.draw(
G, pos=pos,
with_labels=True,
font_family='AR PL KaitiM Big5',
font_size=16,
node_size=800,
node_color='#C6DDCB',
connectionstyle=f"arc3, rad=0.235",
edge_color=color,
)
if plot:
plt.show()
return G
ORGAN_FUNC = (
"SP",
"LU",
"PC",
"LI",
"TE",
"GB",
"LR",
"KI",
"HT",
"SI",
"BL",
"ST",
)
ORGAN_FUNC_GEN = make_cyclic_edge(ORGAN_FUNC)
ORGAN_FUNC_INHIBIT_A = (
"BL",
"SI",
"LI",
"GB",
"ST",
)
ORGAN_FUNC_INHIBIT_Am = (
"LI",
"GB",
"TE",
)
ORGAN_FUNC_INHIBIT_B = (
"SP",
"KI",
"PC",
"HT",
"LU",
"LR",
)
ORGAN_FUNC_INHIBIT_C = (
"LR",
"PC",
"HT",
"LU",
)
ORGAN_FUNC_INHIBIT_D = (
"BL",
"TE",
)
ORGAN_FUNC_inhibit_list = [
ORGAN_FUNC_INHIBIT_A,
ORGAN_FUNC_INHIBIT_Am,
ORGAN_FUNC_INHIBIT_B,
ORGAN_FUNC_INHIBIT_C,
ORGAN_FUNC_INHIBIT_D,
]
ORGAN_FUNC_INHIBIT = []
for cycle in ORGAN_FUNC_inhibit_list:
ORGAN_FUNC_INHIBIT += make_cyclic_edge(cycle)
ORGAN_FUNC_CYCLE = cycle_diagram(ORGAN_FUNC_GEN, ORGAN_FUNC_INHIBIT,
organ_func=True, plot=True, top = "BL", flip = True)
The result I get:
| Make directed graph run clockwise and change its orientation | The following code generates a circular directed graph with networkx.
from matplotlib import pyplot as plt
import networkx as nx
def make_cyclic_edge(lst):
cyclic = []
for i, elem in enumerate(lst):
if i+1 < len(lst):
cyclic.append((elem, lst[i+1]))
else:
cyclic.append((elem, lst[0]))
return cyclic
def cycle_diagram(generate, inhibit, organ_func=False, plot=False):
"""Generate element cycle diagram with Networkx. """
G = nx.MultiDiGraph()
for pair in generate:
G.add_edge(*pair, color="g", group="generate")
for pair in inhibit:
G.add_edge(*pair, color="r", group="inhibit")
pos = nx.circular_layout(G, center=(0, 0))
edges_to_adjust = []
if organ_func:
fig = plt.figure(2, figsize=(5, 5), dpi=200)
edges_to_adjust = [("LI", "GB", 0), ("LI", "GB", 1), ("GB", "TE", 0), ("TE", "LI", 0),
("ST", "BL", 0), ("BL", "SI", 0)]
for edge in edges_to_adjust:
G.remove_edge(*edge)
color = list(nx.get_edge_attributes(G, 'color').values())
if organ_func:
nx.draw(
G, pos=pos,
with_labels=True,
font_size=16,
node_size=800,
node_color='#C6DDCB',
connectionstyle=f"arc3, rad=0.0",
edge_color=color,
)
nx.draw_networkx_edges(G,
pos=pos,
node_size=800,
edgelist=edges_to_adjust,
connectionstyle=f"arc3, rad=0.6",
edge_color=["r"]*len(edges_to_adjust),
)
else:
fig = plt.figure(1, figsize=(4, 4), dpi=200)
nx.draw(
G, pos=pos,
with_labels=True,
font_family='AR PL KaitiM Big5',
font_size=16,
node_size=800,
node_color='#C6DDCB',
connectionstyle=f"arc3, rad=0.235",
edge_color=color,
)
if plot:
plt.show()
return G
ORGAN_FUNC = (
"SP",
"LU",
"PC",
"LI",
"TE",
"GB",
"LR",
"KI",
"HT",
"SI",
"BL",
"ST",
)
ORGAN_FUNC_GEN = make_cyclic_edge(ORGAN_FUNC)
ORGAN_FUNC_INHIBIT_A = (
"BL",
"SI",
"LI",
"GB",
"ST",
)
ORGAN_FUNC_INHIBIT_Am = (
"LI",
"GB",
"TE",
)
ORGAN_FUNC_INHIBIT_B = (
"SP",
"KI",
"PC",
"HT",
"LU",
"LR",
)
ORGAN_FUNC_INHIBIT_C = (
"LR",
"PC",
"HT",
"LU",
)
ORGAN_FUNC_INHIBIT_D = (
"BL",
"TE",
)
ORGAN_FUNC_inhibit_list = [
ORGAN_FUNC_INHIBIT_A,
ORGAN_FUNC_INHIBIT_Am,
ORGAN_FUNC_INHIBIT_B,
ORGAN_FUNC_INHIBIT_C,
ORGAN_FUNC_INHIBIT_D,
]
ORGAN_FUNC_INHIBIT = []
for cycle in ORGAN_FUNC_inhibit_list:
ORGAN_FUNC_INHIBIT += make_cyclic_edge(cycle)
ORGAN_FUNC_CYCLE = cycle_diagram(ORGAN_FUNC_GEN, ORGAN_FUNC_INHIBIT, organ_func=True, plot=True)
I would like to:
Make the arrows along its circumference run clockwise.
Rotate the graph so that "BL" is at the top.
like so (organ code-names differ slightly):
| [
"It's not the most elegant, but here's one approach that works. I made the following changes to your cycle_diagram function. I added optional arguments top (to specify the top node) and flip (to specify whether to flip coordinates horizontally). Within the code, I added the following in after the layout pos is defined.\n if top:\n c,s = pos[top]\n R_mat = np.array([[s,-c],[c,s]])\n for k,v in pos.items():\n pos[k] = R_mat@v\n if flip:\n for k,v in pos.items():\n pos[k][0] *= -1\n\nR_mat is a rotation matrix that rotates the point at pos[top] to the point (0,1), assuming that all points in the layout have magnitude 1 (which is indeed the case when the circular_layout function is used).\nHere's the full script, with the additions.\nfrom matplotlib import pyplot as plt\nimport networkx as nx\nimport numpy as np\n\ndef make_cyclic_edge(lst):\n cyclic = []\n for i, elem in enumerate(lst):\n if i+1 < len(lst):\n cyclic.append((elem, lst[i+1]))\n else:\n cyclic.append((elem, lst[0]))\n\n return cyclic\n\n\ndef cycle_diagram(generate, inhibit, organ_func=False, plot=False, top=None, flip=False):\n \"\"\"Generate element cycle diagram with Networkx. \"\"\"\n\n G = nx.MultiDiGraph()\n for pair in generate:\n G.add_edge(*pair, color=\"g\", group=\"generate\")\n\n for pair in inhibit:\n G.add_edge(*pair, color=\"r\", group=\"inhibit\")\n\n pos = nx.circular_layout(G, center=(0, 0))\n \n if top:\n c,s = pos[top]\n R_mat = np.array([[s,-c],[c,s]])\n for k,v in pos.items():\n pos[k] = R_mat@v\n if flip:\n for k,v in pos.items():\n pos[k][0] *= -1\n\n edges_to_adjust = []\n\n if organ_func:\n fig = plt.figure(2, figsize=(5, 5), dpi=200)\n\n edges_to_adjust = [(\"LI\", \"GB\", 0), (\"LI\", \"GB\", 1), (\"GB\", \"TE\", 0), (\"TE\", \"LI\", 0),\n (\"ST\", \"BL\", 0), (\"BL\", \"SI\", 0)]\n\n for edge in edges_to_adjust:\n G.remove_edge(*edge)\n\n color = list(nx.get_edge_attributes(G, 'color').values())\n\n if organ_func:\n nx.draw(\n G, pos=pos,\n with_labels=True,\n font_size=16,\n node_size=800,\n node_color='#C6DDCB',\n connectionstyle=f\"arc3, rad=0.0\",\n edge_color=color,\n )\n\n nx.draw_networkx_edges(G,\n pos=pos,\n node_size=800,\n edgelist=edges_to_adjust,\n connectionstyle=f\"arc3, rad=0.6\",\n edge_color=[\"r\"]*len(edges_to_adjust),\n )\n\n else:\n\n fig = plt.figure(1, figsize=(4, 4), dpi=200)\n\n nx.draw(\n G, pos=pos,\n with_labels=True,\n font_family='AR PL KaitiM Big5',\n font_size=16,\n node_size=800,\n node_color='#C6DDCB',\n connectionstyle=f\"arc3, rad=0.235\",\n edge_color=color,\n )\n\n if plot:\n plt.show()\n\n return G\n\n\nORGAN_FUNC = (\n \"SP\",\n \"LU\",\n \"PC\",\n \"LI\",\n \"TE\",\n \"GB\",\n \"LR\",\n \"KI\",\n \"HT\",\n \"SI\",\n \"BL\",\n \"ST\",\n)\n\nORGAN_FUNC_GEN = make_cyclic_edge(ORGAN_FUNC)\n\nORGAN_FUNC_INHIBIT_A = (\n \"BL\",\n \"SI\",\n \"LI\",\n \"GB\",\n \"ST\",\n)\n\nORGAN_FUNC_INHIBIT_Am = (\n \"LI\",\n \"GB\",\n \"TE\",\n)\n\nORGAN_FUNC_INHIBIT_B = (\n \"SP\",\n \"KI\",\n \"PC\",\n \"HT\",\n \"LU\",\n \"LR\",\n)\n\nORGAN_FUNC_INHIBIT_C = (\n \"LR\",\n \"PC\",\n \"HT\",\n \"LU\",\n)\n\nORGAN_FUNC_INHIBIT_D = (\n \"BL\",\n \"TE\",\n)\n\nORGAN_FUNC_inhibit_list = [\n ORGAN_FUNC_INHIBIT_A,\n ORGAN_FUNC_INHIBIT_Am,\n ORGAN_FUNC_INHIBIT_B,\n ORGAN_FUNC_INHIBIT_C,\n ORGAN_FUNC_INHIBIT_D,\n ]\n\nORGAN_FUNC_INHIBIT = []\nfor cycle in ORGAN_FUNC_inhibit_list:\n ORGAN_FUNC_INHIBIT += make_cyclic_edge(cycle)\n\n\nORGAN_FUNC_CYCLE = cycle_diagram(ORGAN_FUNC_GEN, ORGAN_FUNC_INHIBIT, \n organ_func=True, plot=True, top = \"BL\", flip = True)\n\nThe result I get:\n\n"
] | [
2
] | [] | [] | [
"matplotlib",
"networkx",
"python",
"rotation"
] | stackoverflow_0074602010_matplotlib_networkx_python_rotation.txt |
Q:
how to do less than django queryset with column parameter
I want to count my stock with this SQL code:
SELECT COUNT(*) FROM management_stock
WHERE stockCount < minStock
How to do that query in django queryset?
I got an error in this query:
Stock.objects.all().filter(stockCount__lt=minStock).count()
my table is like this:
class Stock(models.Model):
product = models.OneToOneField(Product, on_delete=models.CASCADE)
bengkel = models.OneToOneField(Bengkel, on_delete=models.CASCADE)
stockCount = models.PositiveIntegerField()
minStock = models.PositiveIntegerField()
I tried that code, and the error is: NameError: name 'minStock' is not defined.
A:
You can use an F-expression [Django-doc] to reference a field, so:
from django.db.models import F
Stock.objects.filter(stockCount__lt=F('minStock')).count()
| how to do less than django queryset with column parameter | I want to count my stock with this SQL code:
SELECT COUNT(*) FROM management_stock
WHERE stockCount < minStock
How to do that query in django queryset?
I got an error in this query:
Stock.objects.all().filter(stockCount__lt=minStock).count()
my table is like this:
class Stock(models.Model):
product = models.OneToOneField(Product, on_delete=models.CASCADE)
bengkel = models.OneToOneField(Bengkel, on_delete=models.CASCADE)
stockCount = models.PositiveIntegerField()
minStock = models.PositiveIntegerField()
I tried that code, and the error is: NameError: name 'minStock' is not defined.
| [
 "You can use an F-expression [Django-doc] to reference a field, so:\nfrom django.db.models import F\n\nStock.objects.filter(stockCount__lt=F('minStock')).count()\n"
] | [
1
] | [] | [] | [
"django",
"django_filter",
"django_models",
"python"
] | stackoverflow_0074606678_django_django_filter_django_models_python.txt |
Q:
Python & Regex: Simple findall issue
Can someone please help me understand why the last result isn't returning [+50, -50] and is capturing that annoying "/".
To be clear, I am trying to match on "-" or "+/-"; that's why I'm confused as to why "/-50" is being captured.
a = ['a+/-50', 'a +50', 'a', '+50,+100', '+50/-50']
pattern = r'[-|+/-]*\d+'
for x in a:
print(re.findall(pattern,x))
['+/-50']
['+50']
[]
['+50', '+100']
['+50', '/-50']
For bonus points, I would love it if someone could show me how to turn the case of "a+/-50" into "+50,-50". I'm trying to avoid a bunch of if statements...
Thanks in advance!
A:
There seems to be a misunderstanding of character classes here: they match one character, and the | symbol is not a special operator but a literal character option when it appears in such a character class.
You'll want to change your regex to this:
r'(?:-|\+/-)?\d+'
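A minimal sketch showing this in action, plus one way to handle the bonus '+/-' expansion. Note I've extended the pattern to (?:\+/-|[+-])?\d+ so that a standalone '+' sign is kept as well; that extension is my assumption about the desired output, not something stated above.
import re

pattern = r'(?:\+/-|[+-])?\d+'  # try '+/-' first, otherwise a single optional sign

def expand(token):
    # '+/-50' -> ['+50', '-50']; anything else passes through unchanged
    if token.startswith('+/-'):
        return ['+' + token[3:], '-' + token[3:]]
    return [token]

for s in ['a+/-50', 'a +50', 'a', '+50,+100', '+50/-50']:
    tokens = [t for m in re.findall(pattern, s) for t in expand(m)]
    print(s, '->', tokens)
# 'a+/-50' -> ['+50', '-50'] and '+50/-50' -> ['+50', '-50']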
| Python & Regex: Simple findall issue | Can someone please help me understand why the last result isn't returning [+50, -50] and is capturing that annoying "/".
To be clear, I am trying to match on "-" or "+/-"; that's why I'm confused as to why "/-50" is being captured.
a = ['a+/-50', 'a +50', 'a', '+50,+100', '+50/-50']
pattern = r'[-|+/-]*\d+'
for x in a:
print(re.findall(pattern,x))
['+/-50']
['+50']
[]
['+50', '+100']
['+50', '/-50']
For bonus points, I would love it if someone could show me how to turn the case of "a+/-50" into "+50,-50". I'm trying to avoid a bunch of if statements...
Thanks in advance!
| [
"There seems to be a misunderstanding of character classes here: they match one character, and the | symbol is not a special operator but a literal character option when appearing in such character class.\nYou'll want to change your regex to this:\nr'(?:-|\\+/-)?\\d+'\n\n"
] | [
1
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0074606792_python_regex.txt |
Q:
Loop through multiple html tables looking for specific values
I'm trying to find an account number in a table (it can be in any of multiple tables) along with the status of the account. I'm trying to utilize find_element using the XPath, and the odd thing is that it says it cannot find it. You can see in the HTML that the id exists, yet it falls through to my except saying the table was not found. My end goal is to find the table that has the header Instance ID with the value of 9083495r3498q345 and to get the value under the Status field for that row in the same table. Please keep in mind that it may not be DataTables_Table_6 but could be any DataTables_Table_i
<table class="data-table clear-both dataTable no-footer" cellspacing="0" id="DataTables_Table_6" role="grid">
<thead>
<tr role="row"><th style="text-align: left; width: 167.104px;" class="ui-state-default sorting_disabled" rowspan="1" colspan="1"><div class="DataTables_sort_wrapper"><span class="DataTables_sort_icon"></span>Parent Instance ID</div></th><th style="text-align: left; width: 116.917px;" class="ui-state-default sorting_disabled" rowspan="1" colspan="1"><div class="DataTables_sort_wrapper"><span class="DataTables_sort_icon"></span>Instance ID</div></th><th style="text-align: left; width: 97.1771px;" class="ui-state-default sorting_disabled" rowspan="1" colspan="1"><div class="DataTables_sort_wrapper"><span class="DataTables_sort_icon"></span>Plan Name</div></th><th style="text-align: left; width: 168.719px;" class="ui-state-default sorting_disabled" rowspan="1" colspan="1"><div class="DataTables_sort_wrapper"><span class="DataTables_sort_icon"></span>Client Defined Identifier</div></th><th style="text-align: left; width: 39.5729px;" class="ui-state-default sorting_disabled" rowspan="1" colspan="1"><div class="DataTables_sort_wrapper"><span class="DataTables_sort_icon"></span>Units</div></th><th style="text-align: left; width: 89.8438px;" class="ui-state-default sorting_disabled" rowspan="1" colspan="1"><div class="DataTables_sort_wrapper"><span class="DataTables_sort_icon"></span>Status</div></th></tr>
</thead>
<tbody>
<tr role="row" class="odd">
<td style="text-align: left;"><span style="padding-left:px;\"><a href="#" class="doAccountsPanel" ></a></span></td>
<td style="text-align: left;"><span style="padding-left:px;\"><a href="#" class="doAccountsPanel" ">Not Needed</a></span></td>
<td style="text-align: left;">The Product</td>
<td style="text-align: left;">9083495r3498q345</td>
<td style="text-align: left;">1</td>
<td style="text-align: left;">Suspended</td>
</tr></tbody></table>
try:
driver_chrom.find_element(By.XPATH,'//*[@id="DataTables_Table_6"]')
print("Found The Table")
except:
print("Didn't find the table")
I would have expected my print result to be "Found the Table", but I'm getting the "Didn't find the table".
A:
DataTables are dynamic elements - the actual info they hold is being hydrated by javascript on an empty table skeleton, after page loads. Therefore, you need to wait for the table to fully load, then look up the information it holds:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
[...]
wait = WebDriverWait(driver, 20)
[...]
desired_info = wait.until(EC.element_to_be_clickable((By.XPATH, '//*[@id="DataTables_Table_6"]')))
print(desired_info.text)
See Selenium documentation here.
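For the broader goal of scanning every DataTables_Table_i for the identifier and reading that row's Status, a sketch along these lines may help. It assumes the tables have finished loading and that Status is the last cell in each row, as in the sample HTML.
tables = driver_chrom.find_elements(By.CSS_SELECTOR, "table[id^='DataTables_Table_']")
for table in tables:
    # rows whose cells contain the identifier we are after
    rows = table.find_elements(By.XPATH, ".//tbody/tr[td[contains(., '9083495r3498q345')]]")
    for row in rows:
        cells = row.find_elements(By.TAG_NAME, 'td')
        print(table.get_attribute('id'), 'status:', cells[-1].text)  # e.g. 'Suspended'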
| Loop through multiple html tables looking for specific values | I'm trying to find an account number in a table (it can be in any of multiple tables) along with the status of the account. I'm trying to utilize find_element using the XPath, and the odd thing is that it says it cannot find it. You can see in the HTML that the id exists, yet it falls through to my except saying the table was not found. My end goal is to find the table that has the header Instance ID with the value of 9083495r3498q345 and to get the value under the Status field for that row in the same table. Please keep in mind that it may not be DataTables_Table_6 but could be any DataTables_Table_i
<table class="data-table clear-both dataTable no-footer" cellspacing="0" id="DataTables_Table_6" role="grid">
<thead>
<tr role="row"><th style="text-align: left; width: 167.104px;" class="ui-state-default sorting_disabled" rowspan="1" colspan="1"><div class="DataTables_sort_wrapper"><span class="DataTables_sort_icon"></span>Parent Instance ID</div></th><th style="text-align: left; width: 116.917px;" class="ui-state-default sorting_disabled" rowspan="1" colspan="1"><div class="DataTables_sort_wrapper"><span class="DataTables_sort_icon"></span>Instance ID</div></th><th style="text-align: left; width: 97.1771px;" class="ui-state-default sorting_disabled" rowspan="1" colspan="1"><div class="DataTables_sort_wrapper"><span class="DataTables_sort_icon"></span>Plan Name</div></th><th style="text-align: left; width: 168.719px;" class="ui-state-default sorting_disabled" rowspan="1" colspan="1"><div class="DataTables_sort_wrapper"><span class="DataTables_sort_icon"></span>Client Defined Identifier</div></th><th style="text-align: left; width: 39.5729px;" class="ui-state-default sorting_disabled" rowspan="1" colspan="1"><div class="DataTables_sort_wrapper"><span class="DataTables_sort_icon"></span>Units</div></th><th style="text-align: left; width: 89.8438px;" class="ui-state-default sorting_disabled" rowspan="1" colspan="1"><div class="DataTables_sort_wrapper"><span class="DataTables_sort_icon"></span>Status</div></th></tr>
</thead>
<tbody>
<tr role="row" class="odd">
<td style="text-align: left;"><span style="padding-left:px;\"><a href="#" class="doAccountsPanel" ></a></span></td>
<td style="text-align: left;"><span style="padding-left:px;\"><a href="#" class="doAccountsPanel" ">Not Needed</a></span></td>
<td style="text-align: left;">The Product</td>
<td style="text-align: left;">9083495r3498q345</td>
<td style="text-align: left;">1</td>
<td style="text-align: left;">Suspended</td>
</tr></tbody></table>
try:
driver_chrom.find_element(By.XPATH,'//*[@id="DataTables_Table_6"]')
print("Found The Table")
except:
print("Didn't find the table")
I would have expected my print result to be "Found the Table", but I'm getting the "Didn't find the table".
| [
"DataTables are dynamic elements - the actual info they hold is being hydrated by javascript on an empty table skeleton, after page loads. Therefore, you need to wait for the table to fully load, then look up the information it holds:\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\n[...]\nwait = WebDriverWait(driver, 20)\n[...]\ndesired_info = wait.until(EC.element_to_be_clickable((By.XPATH, '//*[@id=\"DataTables_Table_6\"]')))\nprint(desired_info.text)\n\nSee Selenium documentation here.\n"
] | [
1
] | [] | [] | [
"python",
"selenium",
"xpath"
] | stackoverflow_0074606694_python_selenium_xpath.txt |
Q:
Dask object memory size larger than the file size?
I have a csv file that is 15Gb in size according to du -sh filename.txt. However, when I load the file to dask, the dask array is almost 4 times larger at 55Gb. Is this normal? Here is how I am loading the file.
cluster = LocalCluster() # Launches a scheduler and workers locally
client = Client(cluster) # Connect to distributed cluster and override default@delayed
input_file_name = 'filename.txt'
@delayed
def load_file(fname, dtypes=dtypes):
    ddf = dd.read_csv(fname, sep='\t', dtype=dtypes)  # dtypes is a dict of {colname: bool}
arr = ddf.to_dask_array(lengths=True)
return arr
result = load_file(input_file_name)
arr = result.compute()
arr
        Array            Chunk
Bytes   54.58 GiB        245.18 MiB
Shape   (1787307, 4099)  (7840, 4099)
Count   456 Tasks        228 Chunks
Type    object           numpy.ndarray
I wasn't expecting the dask array to be so much larger than the input file size.
The file contains binary values, so I tried passing a bool dtype to see if it would shrink in size, but I see no difference.
A:
Found the answer: it was right there in the output of arr, where the Type shows 'object'. It looks like converting from a Dask dataframe to an array using arr = ddf.to_dask_array(lengths=True) does not preserve the bool dtype. I was able to reduce the memory load significantly by explicitly casting it to bool again.
# load the dask dataframe and convert to array
@delayed
def load_to_arr(fname, dt):
ddf = dd.read_csv(fname, sep='\t', dtype=dt)
    arr = ddf.to_dask_array(lengths=True)
    arr = arr.astype(bool, copy=False)  # recast as bool array
    return arr
result = load_to_arr(input_file_name, dt=dtypes)
arr = result.compute()
arr
This now gives an object of type bool at 6.82 GiB, which is more reasonable.
        Array            Chunk
Bytes   6.82 GiB         30.65 MiB
Shape   (1787307, 4099)  (7840, 4099)
Count   456 Tasks        228 Chunks
Type    bool             numpy.ndarray
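An equivalent fix (my variation, not from the answer above) is to cast at the dataframe level before converting, which avoids the object-dtype intermediate entirely:
ddf = dd.read_csv(input_file_name, sep='\t', dtype=dtypes)
arr = ddf.astype(bool).to_dask_array(lengths=True)  # bool chunks from the start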
| Dask object memory size larger than the file size? | I have a csv file that is 15Gb in size according to du -sh filename.txt. However, when I load the file to dask, the dask array is almost 4 times larger at 55Gb. Is this normal? Here is how I am loading the file.
cluster = LocalCluster() # Launches a scheduler and workers locally
client = Client(cluster) # Connect to distributed cluster and override default@delayed
input_file_name = 'filename.txt'
@delayed
def load_file(fname, dtypes=dtypes):
    ddf = dd.read_csv(fname, sep='\t', dtype=dtypes)  # dtypes is a dict of {colname: bool}
arr = ddf.to_dask_array(lengths=True)
return arr
result = load_file(input_file_name)
arr = result.compute()
arr
        Array            Chunk
Bytes   54.58 GiB        245.18 MiB
Shape   (1787307, 4099)  (7840, 4099)
Count   456 Tasks        228 Chunks
Type    object           numpy.ndarray
I wasn't expecting the dask array to be so much larger than the input file size.
The file contains binary values, so I tried passing a bool dtype to see if it would shrink in size, but I see no difference.
| [
 "Found the answer: it was right there in the output of arr, where the Type shows 'object'. It looks like converting from a Dask dataframe to an array using arr = ddf.to_dask_array(lengths=True) does not preserve the bool dtype. I was able to reduce the memory load significantly by explicitly casting it to bool again.\n# load the dask dataframe and convert to array\n@delayed\ndef load_to_arr(fname, dt):\n    ddf = dd.read_csv(fname, sep='\\t', dtype=dt)\n    arr = ddf.to_dask_array(lengths=True)\n    arr = arr.astype(bool, copy=False) # recast as bool array\n    return arr\nresult = load_to_arr(input_file_name, dt=dtypes)\narr = result.compute()\narr\n\nThis now gives an object of type bool at 6.82 GiB, which is more reasonable.\n\n\n\n\n\n\nArray\nChunk\n\n\n\n\nBytes\n6.82 GiB\n30.65 MiB\n\n\nShape\n(1787307, 4099)\n(7840, 4099)\n\n\nCount\n456 Tasks\n228 Chunks\n\n\nType\nbool\nnumpy.ndarray\n\n\n\n\n\n"
] | [
0
] | [] | [] | [
"bigdata",
"csv",
"dask",
"python"
] | stackoverflow_0074597754_bigdata_csv_dask_python.txt |
Q:
Setting group order on pySankey sankey chart
I'm trying to use a sankey chart to show some user segmentation change using PySankey but the class order is the opposite to what I want. Is there a way for me to specify the order in which each class is posted?
Here is the code I'm using (a dummy version):
test_df = pd.DataFrame({
'curr_seg':np.repeat(['A','B','C','D'],4),
'new_seg':['A','B','C','D']*4,
'num_users':np.random.randint(low=10, high=20, size=16)
})
sankey(
left=test_df["curr_seg"], right=test_df["new_seg"],
leftWeight= test_df["num_users"], rightWeight=test_df["num_users"],
aspect=20, fontsize=20
)
Which produces this chart:
I want to have the A class first and the D class last on both the left and right axes. Does anybody know how I can set this up? Thank you very much.
A:
There is a bug in the first line of the check_data_matches_labels function; you need to change it to the following:
if len(labels) > 0:
Then you can use leftLabels and rightLabels to control the order.
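With that patch applied, the call might look like the sketch below. Depending on the pySankey version, the label lists may need to be reversed, since labels are drawn in order along the axis; treat the exact ordering as an assumption to verify.
sankey(
    left=test_df["curr_seg"], right=test_df["new_seg"],
    leftWeight=test_df["num_users"], rightWeight=test_df["num_users"],
    leftLabels=["D", "C", "B", "A"],   # intended to put A at the top
    rightLabels=["D", "C", "B", "A"],
    aspect=20, fontsize=20,
)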
| Setting group order on pySankey sankey chart | I'm trying to use a sankey chart to show some user segmentation change using PySankey but the class order is the opposite to what I want. Is there a way for me to specify the order in which each class is posted?
Here is the code I'm using (a dummy version):
test_df = pd.DataFrame({
'curr_seg':np.repeat(['A','B','C','D'],4),
'new_seg':['A','B','C','D']*4,
'num_users':np.random.randint(low=10, high=20, size=16)
})
sankey(
left=test_df["curr_seg"], right=test_df["new_seg"],
leftWeight= test_df["num_users"], rightWeight=test_df["num_users"],
aspect=20, fontsize=20
)
Which produces this chart:
I want to have the A class first and the D class last on both the left and right axes. Does anybody know how I can set this up? Thank you very much.
| [
"There is a bug in the first line of check_data_matches_labels function, you need to change to the following:\nif len(labels) > 0:\nThen you can use leftLabels and rightLabels to control order.\n"
] | [
0
] | [] | [] | [
"charts",
"python",
"sankey_diagram"
] | stackoverflow_0070986564_charts_python_sankey_diagram.txt |
Q:
pyMongo MongoDB Query for all databases
I want to run the same query for all databases
for example
import pymongo
db = pymongo.MongoClient("localhost")
db["*"]["test"].find()
or
import pymongo
db = pymongo.MongoClient("localhost")
db["db1","db2","db3"]["test"].find()
To put it bluntly, how do I run this logic in MongoDB?
import pymongo
db = pymongo.MongoClient("localhost")
db_names = db.list_database_names()
for x in db_names:
db[x]["test"].find()
A:
There is no single command that runs the same query across multiple databases.
You can get the list of databases and repeat the same command for each one by iterating over the list_database_names() method, which you can call on the MongoClient instance, e.g.
import pymongo
client = pymongo.MongoClient()
collection = 'mycollection'
for x in client.list_database_names():
print(client[x][collection].count_documents({}))
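Applying the same pattern to the find() logic from the question would look like this sketch, where 'test' is the collection name used in the question:
import pymongo

client = pymongo.MongoClient("localhost")
for db_name in client.list_database_names():
    for doc in client[db_name]["test"].find():
        print(db_name, doc)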
| pyMongo MongoDB Query for all databases | I want to run the same query for all databases
for example
import pymongo
db = pymongo.MongoClient("localhost")
db["*"]["test"].find()
or
import pymongo
db = pymongo.MongoClient("localhost")
db["db1","db2","db3"]["test"].find()
To put it bluntly, how do I run this logic in MongoDB?
import pymongo
db = pymongo.MongoClient("localhost")
db_names = db.list_database_names()
for x in db_names:
db[x]["test"].find()
| [
"There are no commands to run the same query on multiple databases in a single command.\nYou can get the list of databases and repeat the same command by iterating the list_database_names() method which you can run against the MongoClient instance, e.g.\nimport pymongo\n\nclient = pymongo.MongoClient()\ncollection = 'mycollection'\n\n\nfor x in client.list_database_names():\n print(client[x][collection].count_documents({}))\n\n"
] | [
0
] | [] | [] | [
"mongodb",
"pymongo",
"python"
] | stackoverflow_0074606515_mongodb_pymongo_python.txt |
Q:
UnicodeEncodeError: 'charmap' codec can't encode characters/ writing in txt file
I'm parsing a text file that has text in an XML-like configuration, and the code I tried is this
file_handle_tester = open("C:/Users/pc/Desktop/talabat yarmook.txt","r", encoding="utf8")
sec_file = open("C:/Users/pc/Desktop/parced_text.txt","w")
a='com.talabat:id/textView_restaurantName'
menu = list()
for line in file_handle_tester:
line = line.strip()
menu.append(line)
for line in menu:
sec_file.write(line)
Python is not letting me write lines from the original file to the new file, and I get this error:
Traceback (most recent call last):
File "C:\Users\pc\Desktop\pyAppiumSandBox\venv\parcing_handle.py", line 14, in <module>
sec_file.write(line)
File "C:\Users\pc\AppData\Local\Programs\Python\Python311\Lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeEncodeError: 'charmap' codec can't encode characters in position 95-101: character maps to <undefined>
In the code above I tried to put the lines in a list, because Python has no problem printing them on screen; the entire issue is when writing them. But I still get the same error.
I also tried opening the txt in byte mode and decoding it, but that didn't work either.
A:
Simply change sec_file = open("C:/Users/pc/Desktop/parced_text.txt","w") to
sec_file = open("C:/Users/pc/Desktop/parced_text.txt","w", encoding='utf-8')
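If the output file must stay in the default cp1252 encoding for some reason, a lossy fallback is to pass an errors policy instead; characters that cannot be mapped are then replaced rather than raising:
sec_file = open("C:/Users/pc/Desktop/parced_text.txt", "w", errors="replace")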
| UnicodeEncodeError: 'charmap' codec can't encode characters/ writing in txt file | I'm parsing a text file that has text in an XML-like configuration, and the code I tried is this
file_handle_tester = open("C:/Users/pc/Desktop/talabat yarmook.txt","r", encoding="utf8")
sec_file = open("C:/Users/pc/Desktop/parced_text.txt","w")
a='com.talabat:id/textView_restaurantName'
menu = list()
for line in file_handle_tester:
line = line.strip()
menu.append(line)
for line in menu:
sec_file.write(line)
Python is not letting me write lines from the original file to the new file, and I get this error:
Traceback (most recent call last):
File "C:\Users\pc\Desktop\pyAppiumSandBox\venv\parcing_handle.py", line 14, in <module>
sec_file.write(line)
File "C:\Users\pc\AppData\Local\Programs\Python\Python311\Lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeEncodeError: 'charmap' codec can't encode characters in position 95-101: character maps to <undefined>
In the code above I tried to put the lines in a list, because Python has no problem printing them on screen; the entire issue is when writing them. But I still get the same error.
I also tried opening the txt in byte mode and decoding it, but that didn't work either.
| [
"Simply change sec_file = open(\"C:/Users/pc/Desktop/parced_text.txt\",\"w\") to\nsec_file = open(\"C:/Users/pc/Desktop/parced_text.txt\",\"w\", encoding='utf-8')\n"
] | [
1
] | [] | [] | [
"python",
"unicode"
] | stackoverflow_0074606785_python_unicode.txt |
Q:
Dash app not rendering design/ taking into account color etc
I am trying to add a navigation bar on a new Dash app.
If I run the code straight from the Dash website, the output does not render properly.
What it is supposed to look like:
What I get locally (Dash 2.7.0 + chrome + dbc 1.2.1):
I have seen other strange behavior such as text in two dbc.col on the same dbc.row not showing up side by side. I don't know if that is related.
Code:
import dash
from dash import html
from dash.dependencies import Input, Output, State
import dash_bootstrap_components as dbc
app = dash.Dash()
PLOTLY_LOGO = "https://images.plot.ly/logo/new-branding/plotly-logomark.png"
search_bar = dbc.Row(
[
dbc.Col(dbc.Input(type="search", placeholder="Search")),
dbc.Col(
dbc.Button(
"Search", color="primary", className="ms-2", n_clicks=0
),
width="auto",
),
],
className="g-0 ms-auto flex-nowrap mt-3 mt-md-0",
align="center",
)
navbar = dbc.Navbar(
dbc.Container(
[
html.A(
# Use row and col to control vertical alignment of logo / brand
dbc.Row(
[
dbc.Col(html.Img(src=PLOTLY_LOGO, height="30px")),
dbc.Col(dbc.NavbarBrand("Navbar", className="ms-2")),
],
align="center",
className="g-0",
),
href="https://plotly.com",
style={"textDecoration": "none"},
),
dbc.NavbarToggler(id="navbar-toggler", n_clicks=0),
dbc.Collapse(
search_bar,
id="navbar-collapse",
is_open=False,
navbar=True,
),
]
),
color="dark",
dark=True,
)
# add callback for toggling the collapse on small screens
@app.callback(
Output("navbar-collapse", "is_open"),
[Input("navbar-toggler", "n_clicks")],
[State("navbar-collapse", "is_open")],
)
def toggle_navbar_collapse(n, is_open):
if n:
return not is_open
return is_open
app.layout = navbar
app.run_server(debug=True, use_reloader=False)
A:
You'll need to define a stylesheet in order for your className references to take effect:
app = dash.Dash(external_stylesheets=[dbc.themes.SLATE])
Result:
Complete code:
import dash
from dash import html
from dash.dependencies import Input, Output, State
import dash_bootstrap_components as dbc
app = dash.Dash(external_stylesheets=[dbc.themes.SLATE])
PLOTLY_LOGO = "https://images.plot.ly/logo/new-branding/plotly-logomark.png"
search_bar = dbc.Row(
[
dbc.Col(dbc.Input(type="search", placeholder="Search")),
dbc.Col(
dbc.Button(
"Search", color="primary", className="ms-2", n_clicks=0
),
width="auto",
),
],
className="g-0 ms-auto flex-nowrap mt-3 mt-md-0",
align="center",
)
navbar = dbc.Navbar(
dbc.Container(
[
html.A(
# Use row and col to control vertical alignment of logo / brand
dbc.Row(
[
dbc.Col(html.Img(src=PLOTLY_LOGO, height="30px")),
dbc.Col(dbc.NavbarBrand("Navbar", className="ms-2")),
],
align="center",
className="g-0",
),
href="https://plotly.com",
style={"textDecoration": "none"},
),
dbc.NavbarToggler(id="navbar-toggler", n_clicks=0),
dbc.Collapse(
search_bar,
id="navbar-collapse",
is_open=False,
navbar=True,
),
]
),
color="dark",
dark=True,
)
# add callback for toggling the collapse on small screens
@app.callback(
Output("navbar-collapse", "is_open"),
[Input("navbar-toggler", "n_clicks")],
[State("navbar-collapse", "is_open")],
)
def toggle_navbar_collapse(n, is_open):
if n:
return not is_open
return is_open
app.layout = navbar
app.run_server(debug=True, use_reloader=False)
| Dash app not rendering design/ taking into account color etc | I am trying to add a navigation bar on a new Dash app.
If I run the code straight from the Dash website, the output does not render properly.
What it is supposed to look like:
What I get locally (Dash 2.7.0 + chrome + dbc 1.2.1):
I have seen other strange behavior such as text in two dbc.col on the same dbc.row not showing up side by side. I don't know if that is related.
Code:
import dash
from dash import html
from dash.dependencies import Input, Output, State
import dash_bootstrap_components as dbc
app = dash.Dash()
PLOTLY_LOGO = "https://images.plot.ly/logo/new-branding/plotly-logomark.png"
search_bar = dbc.Row(
[
dbc.Col(dbc.Input(type="search", placeholder="Search")),
dbc.Col(
dbc.Button(
"Search", color="primary", className="ms-2", n_clicks=0
),
width="auto",
),
],
className="g-0 ms-auto flex-nowrap mt-3 mt-md-0",
align="center",
)
navbar = dbc.Navbar(
dbc.Container(
[
html.A(
# Use row and col to control vertical alignment of logo / brand
dbc.Row(
[
dbc.Col(html.Img(src=PLOTLY_LOGO, height="30px")),
dbc.Col(dbc.NavbarBrand("Navbar", className="ms-2")),
],
align="center",
className="g-0",
),
href="https://plotly.com",
style={"textDecoration": "none"},
),
dbc.NavbarToggler(id="navbar-toggler", n_clicks=0),
dbc.Collapse(
search_bar,
id="navbar-collapse",
is_open=False,
navbar=True,
),
]
),
color="dark",
dark=True,
)
# add callback for toggling the collapse on small screens
@app.callback(
Output("navbar-collapse", "is_open"),
[Input("navbar-toggler", "n_clicks")],
[State("navbar-collapse", "is_open")],
)
def toggle_navbar_collapse(n, is_open):
if n:
return not is_open
return is_open
app.layout = navbar
app.run_server(debug=True, use_reloader=False)
| [
"You'll need to define a stylesheet in order for your className references to take effect:\napp = dash.Dash(external_stylesheets=[dbc.themes.SLATE])\n\nResult:\n\nComplete code:\nimport dash\nfrom dash import html\nfrom dash.dependencies import Input, Output, State\nimport dash_bootstrap_components as dbc\n\n\napp = dash.Dash(external_stylesheets=[dbc.themes.SLATE])\n\nPLOTLY_LOGO = \"https://images.plot.ly/logo/new-branding/plotly-logomark.png\"\n\nsearch_bar = dbc.Row(\n [\n dbc.Col(dbc.Input(type=\"search\", placeholder=\"Search\")),\n dbc.Col(\n dbc.Button(\n \"Search\", color=\"primary\", className=\"ms-2\", n_clicks=0\n ),\n width=\"auto\",\n ),\n ],\n className=\"g-0 ms-auto flex-nowrap mt-3 mt-md-0\",\n align=\"center\",\n)\n\nnavbar = dbc.Navbar(\n dbc.Container(\n [\n html.A(\n # Use row and col to control vertical alignment of logo / brand\n dbc.Row(\n [\n dbc.Col(html.Img(src=PLOTLY_LOGO, height=\"30px\")),\n dbc.Col(dbc.NavbarBrand(\"Navbar\", className=\"ms-2\")),\n ],\n align=\"center\",\n className=\"g-0\",\n ),\n href=\"https://plotly.com\",\n style={\"textDecoration\": \"none\"},\n ),\n dbc.NavbarToggler(id=\"navbar-toggler\", n_clicks=0),\n dbc.Collapse(\n search_bar,\n id=\"navbar-collapse\",\n is_open=False,\n navbar=True,\n ),\n ]\n ),\n color=\"dark\",\n dark=True,\n)\n\n\n# add callback for toggling the collapse on small screens\[email protected](\n Output(\"navbar-collapse\", \"is_open\"),\n [Input(\"navbar-toggler\", \"n_clicks\")],\n [State(\"navbar-collapse\", \"is_open\")],\n)\ndef toggle_navbar_collapse(n, is_open):\n if n:\n return not is_open\n return is_open\n\n\napp.layout = navbar\napp.run_server(debug=True, use_reloader=False)\n\n"
] | [
1
] | [] | [] | [
"frontend",
"plotly",
"plotly_dash",
"python"
] | stackoverflow_0074601707_frontend_plotly_plotly_dash_python.txt |
Q:
Inserting csv file into a database using Python
In Python I've connected to a Postgres database using the following code:
conn = psycopg2.connect(
host = "localhost",
port = "5432",
database = "postgres",
user = "postgres",
password = "123"
)
cur = conn.cursor()
I have created a table called departments and want to insert data into the database from a CSV file. I read the csv in as follows:
departments = pd.DataFrame(pd.read_csv('departments.csv'))
And I am trying to insert this data into the table with the following code:
for row in departments.itertuples():
cur.execute('''
INSERT INTO departments VALUES (?,?,?)
''',
row.id, row.department_name, row.annual_budget)
conn.commit()
which I've seen done in various articles but I keep getting the error:
TypeError: function takes at most 2 arguments (4 given)
How can I correct this, or is there another way to insert the csv?
A:
You have to pass the row information as a tuple. Try this instead:
for row in departments.itertuples():
cur.execute('''
INSERT INTO departments VALUES (%s, %s, %s)
''',
(row.id, row.department_name, row.annual_budget))
conn.commit()
See the docs for more info: https://www.psycopg.org/docs/usage.html
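For larger CSVs, a bulk insert is usually much faster than one execute per row. Here is a sketch using psycopg2's execute_values helper, assuming the three columns line up with the table definition:
from psycopg2.extras import execute_values

rows = list(departments[['id', 'department_name', 'annual_budget']]
            .itertuples(index=False, name=None))
execute_values(cur, "INSERT INTO departments VALUES %s", rows)
conn.commit()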
| Inserting csv file into a database using Python | In Python I've connected to a Postgres database using the following code:
conn = psycopg2.connect(
host = "localhost",
port = "5432",
database = "postgres",
user = "postgres",
password = "123"
)
cur = conn.cursor()
I have created a table called departments and want to insert data into the database from a CSV file. I read the csv in as follows:
departments = pd.DataFrame(pd.read_csv('departments.csv'))
And I am trying to insert this data into the table with the following code:
for row in departments.itertuples():
cur.execute('''
INSERT INTO departments VALUES (?,?,?)
''',
row.id, row.department_name, row.annual_budget)
conn.commit()
which I've seen done in various articles but I keep getting the error:
TypeError: function takes at most 2 arguments (4 given)
How can I correct this, or is there another way to insert the csv?
| [
"You have to pass the row information as a tuple. Try this instead:\nfor row in departments.itertuples():\n cur.execute('''\n INSERT INTO departments VALUES (%s, %s, %s)\n ''',\n (row.id, row.department_name, row.annual_budget))\nconn.commit()\n\nSee the docs for more info: https://www.psycopg.org/docs/usage.html\n"
] | [
0
] | [] | [] | [
"psycopg2",
"python",
"sql"
] | stackoverflow_0074606989_psycopg2_python_sql.txt |
Q:
Extract data from string object with regex into Python
I have this string:
a = '91:99 OT (87:87)'
I would like to split it into:
['91', '99', '87', '87']
In my case numerical values can vary from 01 to 999, so I have to use the regex module. I am working with Python.
A:
Sorry, I found a simple solution:
re.findall('[0-9]+', a)
It will return:
['91', '99', '87', '87']
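If integers rather than strings are needed, the matches can be converted directly (note that leading zeros such as '01' are dropped in the process):
[int(x) for x in re.findall('[0-9]+', a)]
# [91, 99, 87, 87]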
A:
regex is a pretty complicated topic. This site can help a lot https://regex101.com/
https://cheatography.com/davechild/cheat-sheets/regular-expressions/
I hope this is the answer you are looking for
(\d+)(?=\s*)
1st Capturing Group (\d+)
\d matches a digit (equivalent to [0-9])
+ matches the previous token between one and unlimited times, as many times as possible, giving back as needed (greedy)
Positive Lookahead (?=\s*)
\s matches any whitespace character (equivalent to [\r\n\t\f\v ])
* matches the previous token between zero and unlimited times, as many times as possible, giving back as needed (greedy)
import re
regex = r"(\d+)(?=\s*)"
test_str = "a = '91:99 OT (87:87)'"
matches = re.finditer(regex, test_str, re.MULTILINE)
for matchNum, match in enumerate(matches, start=1):
    print("Match {matchNum}: {match}".format(matchNum=matchNum, match=match.group()))
    for groupNum in range(0, len(match.groups())):
        groupNum = groupNum + 1
        print("Group {groupNum}: {group}".format(groupNum=groupNum, group=match.group(groupNum)))
| Extract data from string object with regex into Python | I have this string:
a = '91:99 OT (87:87)'
I would like to split it into:
['91', '99', '87', '87']
In my case numerical values can vary from 01 to 999, so I have to use the regex module. I am working with Python.
| [
 "Sorry, I found a simple solution:\nre.findall('[0-9]+', a)\nIt will return:\n['91', '99', '87', '87']\n",
"regex is a pretty complicated topic. This site can help a lot https://regex101.com/\nhttps://cheatography.com/davechild/cheat-sheets/regular-expressions/\nI hope this is the answer you are looking for\n(\\d+)(?=\\s*)\n1st Capturing Group (\\d+)\n\n\\d matches a digit (equivalent to [0-9])\n+ matches the previous token between one and unlimited times, as many times as possible, giving back as needed (greedy)\n\nPositive Lookahead (?=\\s*)\n\n\\s matches any whitespace character (equivalent to [\\r\\n\\t\\f\\v ])\n* matches the previous token between zero and unlimited times, as many times as possible, giving back as needed (greedy)\n\n import re\n\nregex = r\"(\\d+)(?=\\s*)\"\n\ntest_str = \"a = '91:99 OT (87:87)'\"\n\nmatches = re.finditer(regex, test_str, re.MULTILINE)\n\nfor matchNum, match in enumerate(matches, start=1):\n \n print (\" F{matchNum} : {match}\".format(matchNum = matchNum, match = match.group()))\n \n for groupNum in range(0, len(match.groups())):\n groupNum = groupNum + 1\n\n\n"
] | [
0,
0
] | [] | [] | [
"extract",
"python",
"string"
] | stackoverflow_0074606882_extract_python_string.txt |
Q:
comparing values from dictionary with same key python
Below is a dictionary called total_per_person that maps total spent by a person in one week
{'Edith': 79.24, 'Carol': 176.05, 'Hannah': 90.45, 'Frank': 66.6, 'Alice': 64.10, 'Ingrid': 59.45, 'Bob': 103.50, 'Gertrude': 107.45, 'Dave': 62.24}
Below is another dictionary called name_to_budget that maps the weekly budget of a person:
{'Alice': 62.12, 'Bob': 40.34, 'Carol': 46.69, 'Dave': 37.79, 'Edith': 95.39, 'Frank': 32.87, 'Gertrude': 29.13, 'Hannah': 24.21, 'Ingrid': 91.19}
How do I compare the values and decide if they were over or under budget? Should I make a function so it's easier?
A:
You need to iterate over the keys and compare each one on the two dicts.
dict.keys() return a list with all the keys.
This code snippets also consider corresponding budget and make sure the key is in the second dict with the in operator.
total_per_person = {'Edith': 79.24, 'Carol': 176.05, 'Hannah': 90.45, 'Frank': 66.6, 'Alice': 64.10, 'Ingrid': 59.45, 'Bob': 103.50, 'Gertrude': 107.45, 'Dave': 62.24, 'Alex': 12.12}
name_to_budget = {'Alice': 62.12, 'Bob': 40.34, 'Carol': 46.69, 'Dave': 37.79, 'Edith': 95.39, 'Frank': 32.87, 'Gertrude': 29.13, 'Hannah': 24.21, 'Ingrid': 91.19}
compared_to_budget = {}
for key in total_per_person.keys():
if key not in name_to_budget:
compared_to_budget[key] = "missing" # Not in name_to_budget dict
continue
if total_per_person[key] == name_to_budget[key]:
compared_to_budget[key] = "same"
elif total_per_person[key] < name_to_budget[key]:
compared_to_budget[key] = "under"
else:
compared_to_budget[key] = "over"
print(compared_to_budget)
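The same comparison can also be written as a dict comprehension if a separate function feels like overkill; this sketch has the same semantics as the loop above:
compared_to_budget = {
    name: "missing" if name not in name_to_budget
    else "same" if spent == name_to_budget[name]
    else "under" if spent < name_to_budget[name]
    else "over"
    for name, spent in total_per_person.items()
}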
| comparing values from dictionary with same key python | Below is a dictionary called total_per_person that maps total spent by a person in one week
{'Edith': 79.24, 'Carol': 176.05, 'Hannah': 90.45, 'Frank': 66.6, 'Alice': 64.10, 'Ingrid': 59.45, 'Bob': 103.50, 'Gertrude': 107.45, 'Dave': 62.24}
Below is another dictionary called name_to_budget that maps the weekly budget of a person:
{'Alice': 62.12, 'Bob': 40.34, 'Carol': 46.69, 'Dave': 37.79, 'Edith': 95.39, 'Frank': 32.87, 'Gertrude': 29.13, 'Hannah': 24.21, 'Ingrid': 91.19}
How do I compare the values and decide if they were over or under budget? Should I make a function so it's easier?
| [
"You need to iterate over the keys and compare each one on the two dicts.\ndict.keys() return a list with all the keys.\nThis code snippets also consider corresponding budget and make sure the key is in the second dict with the in operator.\ntotal_per_person = {'Edith': 79.24, 'Carol': 176.05, 'Hannah': 90.45, 'Frank': 66.6, 'Alice': 64.10, 'Ingrid': 59.45, 'Bob': 103.50, 'Gertrude': 107.45, 'Dave': 62.24, 'Alex': 12.12}\n\nname_to_budget = {'Alice': 62.12, 'Bob': 40.34, 'Carol': 46.69, 'Dave': 37.79, 'Edith': 95.39, 'Frank': 32.87, 'Gertrude': 29.13, 'Hannah': 24.21, 'Ingrid': 91.19}\n\ncompared_to_budget = {}\n\nfor key in total_per_person.keys():\n if key not in name_to_budget:\n compared_to_budget[key] = \"missing\" # Not in name_to_budget dict\n continue\n if total_per_person[key] == name_to_budget[key]:\n compared_to_budget[key] = \"same\"\n elif total_per_person[key] < name_to_budget[key]:\n compared_to_budget[key] = \"under\"\n else:\n compared_to_budget[key] = \"over\"\n\nprint(compared_to_budget)\n\n"
] | [
1
] | [] | [] | [
"compare",
"dictionary",
"python"
] | stackoverflow_0074606823_compare_dictionary_python.txt |
Q:
requirements.txt with actual dependencies
Is there a way to create a requirements.txt file that only contains the modules that my script actually needs?
I usually just do a pip freeze and then remove unused modules.
A:
You can use pipreqs to analyze your project files, and automatically generate a requirements.txt for you.
First step is to install pipreqs. You can do so, by executing the following command in your console:
pip install pipreqs
Then, the only thing you need to do is to execute the command pipreqs, specifying the path where your files are located at. For example, to generate a requirements.txt, based on the modules on the current working directory, you could execute:
pipreqs .
For some other directory:
pipreqs "PATH/TO/YOUR/PROJECT/PYTHON/FILES"
Real World Example
Here's the tree-view of the folder we're going to use as example:
.
├── data
│   └── EXCEL_FILES
│       ├── DIVISION_MAP.xlsx
│       ├── INPUT_CONTAINER.xlsx
│       ├── KITS.xlsx
│       ├── LOADING.xlsx
│       ├── MOQ_CBM_SHIPPING_TYPE.xlsx
│       ├── OUTBOUND_OTM.XLSX
│       ├── PLANT_SOURCE_MAP.xlsx
│       ├── PORT_STATE.xlsx
│       └── SALABLE_STORAGE_LOCATION.xlsx
├── loading
│   ├── __init__.py
│   ├── configs
│   │   ├── Initialization.py
│   │   ├── __init__.py
│   │   ├── config.ini
│   │   └── logconfig.py
│   ├── constants.py
│   ├── read_files.py
│   ├── reports.py
│   └── utils
│       ├── __init__.py
│       ├── date_utils.py
│       ├── file_utils.py
│       └── utils.py
├── logs
│   └── po_opt.log
├── main.py
└── outputs
    ├── 2022-10-03
    ├── 2022-10-04
    ├── 2022-10-05
    ├── 2022-10-06
    ├── 2022-11-23
    ├── 2022-11-24
    └── 2022-11-28
Executing the command pipreqs . from the root path I get the following requirements.txt:
❯ pipreqs .
INFO: Successfully saved requirements file in ./requirements.txt
Output:
holidays==0.17.2
numpy==1.20.1
pandas==1.2.4
python_dateutil==2.8.2
six==1.16.0
| requirements.txt with actual dependencies | Is there a way to create a requirements.txt file that only contains the modules that my script actually needs?
I usually just do a pip freeze and then remove unused modules.
| [
"You can use pipreqs to analyze your project files, and automatically generate a requirements.txt for you.\nFirst step is to install pipreqs. You can do so, by executing the following command in your console:\npip install pipreqs\n\nThen, the only thing you need to do is to execute the command pipreqs, specifying the path where your files are located at. For example, to generate a requirements.txt, based on the modules on the current working directory, you could execute:\npipreqs .\n\nFor some other directory:\npipreqs \"PATH/TO/YOUR/PROJECT/PYTHON/FILES\"\n\nReal World Example\nHere's the tree-view of the folder we're going to use as example:\n.\nβββ data\nβΒ Β βββ EXCEL_FILES\nβΒ Β βββ DIVISION_MAP.xlsx\nβΒ Β βββ INPUT_CONTAINER.xlsx\nβΒ Β βββ KITS.xlsx\nβΒ Β βββ LOADING.xlsx\nβΒ Β βββ MOQ_CBM_SHIPPING_TYPE.xlsx\nβΒ Β βββ OUTBOUND_OTM.XLSX\nβΒ Β βββ PLANT_SOURCE_MAP.xlsx\nβΒ Β βββ PORT_STATE.xlsx\nβΒ Β βββ SALABLE_STORAGE_LOCATION.xlsx\nβββ loading\nβΒ Β βββ __init__.py\nβΒ Β βββ configs\nβΒ Β βΒ Β βββ Initialization.py\nβΒ Β βΒ Β βββ __init__.py\nβΒ Β βΒ Β βββ config.ini\nβΒ Β βΒ Β βββ logconfig.py\nβΒ Β βββ constants.py\nβΒ Β βββ read_files.py\nβΒ Β βββ reports.py\nβΒ Β βββ utils\nβΒ Β βββ __init__.py\nβΒ Β βββ date_utils.py\nβΒ Β βββ file_utils.py\nβΒ Β βββ utils.py\nβββ logs\nβΒ Β βββ po_opt.log\nβββ main.py\nβββ outputs\n Β Β βββ 2022-10-03\n Β Β βββ 2022-10-04\n Β Β βββ 2022-10-05\n Β Β βββ 2022-10-06\n Β Β βββ 2022-11-23\nΒ Β βββ 2022-11-24\n Β Β βββ 2022-11-28\n\n\nExecuting the command pipreqs . from the root path I get the following requirements.txt:\nβ― pipreqs .\nINFO: Successfully saved requirements file in ./requirements.txt\n\nOutput:\nholidays==0.17.2\nnumpy==1.20.1\npandas==1.2.4\npython_dateutil==2.8.2\nsix==1.16.0\n\n"
] | [
1
] | [] | [] | [
"python"
] | stackoverflow_0074606944_python.txt |
Q:
How to display a list of children objects on detail view for Django Admin?
I have two models: Setting and SettingsGroup.
When someone clicks on a specific SettingsGroup in the Django Admin and the edit/detail page appears I'd like for the child Setting objects to be displayed but as a list not a form.
I know that Django has InlineModelAdmin but this displays the children as editable forms.
My concern isn't with the child objects being editable from the parent object but rather the amount of space it consumes. I'd rather have a list with either a link to the appropriate child record or that changes a particular object to be inline editable.
Here is my Setting model:
class Setting(models.Model):
key = models.CharField(max_length=255, blank=False)
value = models.TextField(blank=True)
group = models.ForeignKey('SettingsGroup', blank=True,
on_delete=models.SET_NULL, null=True)
def __str__(self):
return str(self.key)
And the SettingsGroup model:
class SettingsGroup(models.Model):
name = models.CharField(max_length=255)
description = models.TextField(blank=True)
def __str__(self):
return str(self.name)
The method I don't want to use (or need to find a different way to use) is InlineModelAdmin which appears in my admin.py currently as:
class SettingsGroupInline(admin.StackedInline):
model = Setting
fk_name = 'group'
@admin.register(SettingsGroup)
class SettingsGroupAdmin(admin.ModelAdmin):
    inlines = [ SettingsGroupInline, ]
Here is an example of how I'd like it to work:
There is a MySettings object, an instance of the SettingsGroup model.
There is a CoolSetting object and a BoringSetting object, each an instance of the Setting model.
The CoolSetting object has its group set to the MySettings object.
The BoringSetting object does not have a group set.
When I open the detail/edit view of the Django Admin for the MySettings object I see the normal edit form for the MySettings object and below it the CoolSetting object (but not as a form).
I do not see the BoringSetting object because it is not a child/member/related of/to MySettings.
I have some ideas on how this could be accomplished but this seems like fairly basic functionality and I don't want to go building something if Django (or other existing code) provides a way to accomplish this.
Any ideas?
A:
Why can't you just access the children using something like Setting.objects.filter(group=SettingsGroup.objects.get(name={name}))
If being presented in a template you could pass the SettingsGroup name to the context and iterate over the children and present them however you like.
I may not understand your question exactly so if this is not what you're looking for let me know!
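If you do want it inside the admin detail page itself, one common middle ground is a read-only inline that renders each child as a compact row with a link to its own change page. A sketch (show_change_link is a standard InlineModelAdmin attribute):
class SettingInline(admin.TabularInline):
    model = Setting
    fk_name = 'group'
    fields = ('key', 'value')
    readonly_fields = ('key', 'value')
    show_change_link = True   # link to each Setting's own change page
    extra = 0                 # no empty extra forms

    def has_add_permission(self, request, obj=None):
        return False

@admin.register(SettingsGroup)
class SettingsGroupAdmin(admin.ModelAdmin):
    inlines = [SettingInline]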
| How to display a list of children objects on detail view for Django Admin? | I have two models: Setting and SettingsGroup.
When someone clicks on a specific SettingsGroup in the Django Admin and the edit/detail page appears I'd like for the child Setting objects to be displayed but as a list not a form.
I know that Django has InlineModelAdmin but this displays the children as editable forms.
My concern isn't with the child objects being editable from the parent object but rather the amount of space it consumes. I'd rather have a list with either a link to the appropriate child record or that changes a particular object to be inline editable.
Here is my Setting model:
class Setting(models.Model):
key = models.CharField(max_length=255, blank=False)
value = models.TextField(blank=True)
group = models.ForeignKey('SettingsGroup', blank=True,
on_delete=models.SET_NULL, null=True)
def __str__(self):
return str(self.key)
And the SettingsGroup model:
class SettingsGroup(models.Model):
name = models.CharField(max_length=255)
description = models.TextField(blank=True)
def __str__(self):
return str(self.name)
The method I don't want to use (or need to find a different way to use) is InlineModelAdmin which appears in my admin.py currently as:
class SettingsGroupInline(admin.StackedInline):
model = Setting
fk_name = 'group'
@admin.register(SettingsGroup)
class SettingsGroupAdmin(admin.ModelAdmin):
    inlines = [ SettingsGroupInline, ]
Here is an example of how I'd like it to work:
There is a MySettings object, an instance of the SettingsGroup model.
There is a CoolSetting object and a BoringSetting object, each an instance of the Setting model.
The CoolSetting object has its group set to the MySettings object.
The BoringSetting object does not have a group set.
When I open the detail/edit view of the Django Admin for the MySettings object I see the normal edit form for the MySettings object and below it the CoolSetting object (but not as a form).
I do not see the BoringSetting object because it is not a child/member/related of/to MySettings.
I have some ideas on how this could be accomplished but this seems like fairly basic functionality and I don't want to go building something if Django (or other existing code) provides a way to accomplish this.
Any ideas?
| [
"Why can't you just access the children using something like Setting.objects.filter(group=SettingsGroup.objects.get(name={name}))\nIf being presented in a template you could pass the SettingsGroup name to the context and iterate over the children and present them however you like.\nI may not understand your question exactly so if this is not what you're looking for let me know!\n"
] | [
0
] | [] | [] | [
"django",
"django_admin",
"django_models",
"python"
] | stackoverflow_0074606999_django_django_admin_django_models_python.txt |
Q:
How to print highest score from an external text file
Help! I'm a starter coder and I am trying to build a top trumps game. I have created an external CSV file that stores the scores of the game. I am trying to get the game to print the highest score recorded, but I am running into a lot of errors. SOMEONE PLEASE HELP :(. I've been working on this for days now and the code keeps breaking more and more every time I try to fix it.
import random
import requests
import csv
def random_person():
person_number = random.randint(1, 82)
url = 'https://swapi.dev/api/people/{}/'.format(person_number)
response = requests.get(url)
person = response.json()
return {
'name': person['name'],
'height': person['height'],
'mass': person['mass'],
'birth year': person['birth_year'],
}
def run():
highest_score = 0
with open('score.csv', 'r') as csv_file:
spreadsheet = csv.DictReader(csv_file)
for row in spreadsheet:
intscore = int(row['score'])
if intscore > highest_score:
highest_score = intscore
print('The highest score to beat is', highest_score)
game = input('Do you think you can beat it? y/n')
if game == 'y':
print('Good Luck')
else:
print('You got this')
print('Hello stranger, Welcome to StarWars Top Trump!')
player_name = input('What is your name?')
lives_remaining = 1
score = 0
while lives_remaining > 0:
my_person = random_person()
print(player_name, ', you were given', my_person['name'])
while True:
stat_choice = input('Which stat do you want to use? ( height, mass, birth year)')
if stat_choice.lower() not in ('height', 'mass', 'birth year'):
print('Not an appropriate answer. Try again.')
else:
break
opponent_person = random_person()
print('The opponent chose', opponent_person['name'])
my_stat = my_person[stat_choice]
opponent_stat = opponent_person[stat_choice]
if my_stat > opponent_stat:
print(player_name, 'You Win! ')
score = score + 1
print(player_name, 'You have ', lives_remaining, 'lives remaining!')
print('Your score is', score)
elif my_stat == opponent_stat:
print('Its A Draw!')
print(player_name, 'You have', lives_remaining, 'lives remaining!')
print('Your score is', score)
elif my_stat < opponent_stat:
lives_remaining = lives_remaining - 1
print(player_name, 'You have,', lives_remaining, 'lives remaining!')
print('Your score is', score)
field_names = ['player_name', 'score']
with open("score.csv", "w") as csv_file:
spreadsheet = csv.DictWriter(csv_file, fieldnames=field_names)
spreadsheet.writeheader()
data = [{"player_name": player_name, 'score': score}]
with open("score.csv", "w") as csv_file:
spreadsheet = csv.DictWriter(csv_file, fieldnames=field_names)
spreadsheet.writeheader()
with open("score.csv", "a") as csv_file:
spreadsheet = csv.DictWriter(csv_file, fieldnames=field_names)
spreadsheet.writerows(data)
with open('score.csv', 'r') as csv_file:
print('open file for reading')
spreadsheet = csv.DictReader(csv_file)
for row in spreadsheet:
print(row)
intscore = int(row['score'])
print('SCORE: ', intscore)
if intscore > highest_score:
highest_score = intscore
print(highest_score)
run()
This is one of the errors I get when I run the code
line 26, in run
intscore = int(row['score'])
ValueError: invalid literal for int() with base 10: 'score'
enter image description here
A:
Here is a way to manage the score database. There's a "reader" that translates from CSV and returns a dictionary, and a "writer" that accepts a dictionary and writes it to file. This is called "serialization" and "deserialization".
import random
import requests
import csv
def readscores(filename):
scores = {}
with open(filename, 'r') as csv_file:
for row in csv.DictReader(csv_file):
scores[row['player_name']] = int(row['score'])
return scores
def writescores(scores, filename):
with open(filename, 'w') as csv_file:
writer = csv.writer(csv_file)
writer.writerow(['player_name','score'])
writer.writerows( scores.items() )
def random_person():
person_number = random.randint(1, 82)
url = 'https://swapi.dev/api/people/{}/'.format(person_number)
response = requests.get(url)
person = response.json()
return {
'name': person['name'],
'height': person['height'],
'mass': person['mass'],
'birth year': person['birth_year'],
}
def run():
scores = readscores('score.csv')
highest_score = max(scores.values())
print('The highest score to beat is', highest_score)
# game = input('Do you think you can beat it? y/n')
# if game == 'y':
# print('Good Luck????')
# else:
# print('You got this????')
print('Hello stranger, Welcome to StarWars Top Trump!')
player_name = input('What is your name?')
lives_remaining = 1
score = 0
while lives_remaining > 0:
my_person = random_person()
print(player_name, ', you were given', my_person['name'])
while True:
stat_choice = input('Which stat do you want to use? ( height, mass, birth year)')
if stat_choice.lower() not in ('height', 'mass', 'birth year'):
print('Not an appropriate answer. Try again.')
else:
break
opponent_person = random_person()
print('The opponent chose', opponent_person['name'])
my_stat = my_person[stat_choice]
opponent_stat = opponent_person[stat_choice]
if my_stat > opponent_stat:
print(player_name, 'You Win! ????')
score = score + 1
elif my_stat == opponent_stat:
print('Its A Draw!')
elif my_stat < opponent_stat:
lives_remaining = lives_remaining - 1
print(player_name, 'You have,', lives_remaining, 'lives remaining!')
print('Your score is', score)
scores[player_name] = score
writescores( scores, 'score.csv' )
print('SCORE: ', score)
if score > highest_score:
highest_score = score
print(highest_score)
run()
This script works when starting with this data. Note that it doesn't work if "score.csv" is missing; that's code you'd need to add. It's not hard to handle.
c:\tmp>type score.csv
player_name,score
Tim,1
Joe,1
Bill,0
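A hedged sketch of that missing-file handling: wrap the reader so an absent score.csv simply means there are no scores yet:
import os

def readscores(filename):
    # If the score file does not exist yet, start with an empty table
    if not os.path.exists(filename):
        return {}
    scores = {}
    with open(filename, 'r') as csv_file:
        for row in csv.DictReader(csv_file):
            scores[row['player_name']] = int(row['score'])
    return scores
With an empty dict, max(scores.values()) would still raise, so guard it with something like highest_score = max(scores.values(), default=0).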
| How to print highest score from an external text file | Help! I'm a starter coder and I am trying to build a top trumps game. I have created an external CSV file that stores the scores of the game. I am trying to get the game to print the highest score recorded, but I am running into a lot of errors. SOMEONE PLEASE HELP :(. I've been working on this for days now and the code keeps breaking more and more every time I try to fix it.
import random
import requests
import csv
def random_person():
person_number = random.randint(1, 82)
url = 'https://swapi.dev/api/people/{}/'.format(person_number)
response = requests.get(url)
person = response.json()
return {
'name': person['name'],
'height': person['height'],
'mass': person['mass'],
'birth year': person['birth_year'],
}
def run():
highest_score = 0
with open('score.csv', 'r') as csv_file:
spreadsheet = csv.DictReader(csv_file)
for row in spreadsheet:
intscore = int(row['score'])
if intscore > highest_score:
highest_score = intscore
print('The highest score to beat is', highest_score)
game = input('Do you think you can beat it? y/n')
if game == 'y':
print('Good Luck')
else:
print('You got this')
print('Hello stranger, Welcome to StarWars Top Trump!')
player_name = input('What is your name?')
lives_remaining = 1
score = 0
while lives_remaining > 0:
my_person = random_person()
print(player_name, ', you were given', my_person['name'])
while True:
stat_choice = input('Which stat do you want to use? ( height, mass, birth year)')
if stat_choice.lower() not in ('height', 'mass', 'birth year'):
print('Not an appropriate answer. Try again.')
else:
break
opponent_person = random_person()
print('The opponent chose', opponent_person['name'])
my_stat = my_person[stat_choice]
opponent_stat = opponent_person[stat_choice]
if my_stat > opponent_stat:
print(player_name, 'You Win! ')
score = score + 1
print(player_name, 'You have ', lives_remaining, 'lives remaining!')
print('Your score is', score)
elif my_stat == opponent_stat:
print('Its A Draw!')
print(player_name, 'You have', lives_remaining, 'lives remaining!')
print('Your score is', score)
elif my_stat < opponent_stat:
lives_remaining = lives_remaining - 1
print(player_name, 'You have,', lives_remaining, 'lives remaining!')
print('Your score is', score)
field_names = ['player_name', 'score']
with open("score.csv", "w") as csv_file:
spreadsheet = csv.DictWriter(csv_file, fieldnames=field_names)
spreadsheet.writeheader()
data = [{"player_name": player_name, 'score': score}]
with open("score.csv", "w") as csv_file:
spreadsheet = csv.DictWriter(csv_file, fieldnames=field_names)
spreadsheet.writeheader()
with open("score.csv", "a") as csv_file:
spreadsheet = csv.DictWriter(csv_file, fieldnames=field_names)
spreadsheet.writerows(data)
with open('score.csv', 'r') as csv_file:
print('open file for reading')
spreadsheet = csv.DictReader(csv_file)
for row in spreadsheet:
print(row)
intscore = int(row['score'])
print('SCORE: ', intscore)
if intscore > highest_score:
highest_score = intscore
print(highest_score)
run()
This is one of the errors I get when I run the code
line 26, in run
intscore = int(row['score'])
ValueError: invalid literal for int() with base 10: 'score'
enter image description here
| [
"Here is a way to manage the score database. There's a \"reader\" that translates from CSV and returns a dictionary, and a \"writer\" that accepts a dictionary and writes it to file. This is called \"serialization\" and \"deserialization\".\nimport random\nimport requests\nimport csv\n\ndef readscores(filename):\n scores = {}\n with open(filename, 'r') as csv_file:\n for row in csv.DictReader(csv_file):\n scores[row['player_name']] = int(row['score'])\n return scores\n\ndef writescores(scores, filename):\n with open(filename, 'w') as csv_file:\n writer = csv.writer(csv_file)\n writer.writerow(['player_name','score'])\n writer.writerows( scores.items() )\n\n\ndef random_person():\n person_number = random.randint(1, 82)\n url = 'https://swapi.dev/api/people/{}/'.format(person_number)\n response = requests.get(url)\n person = response.json()\n return {\n 'name': person['name'],\n 'height': person['height'],\n 'mass': person['mass'],\n 'birth year': person['birth_year'],\n }\n\n\ndef run():\n scores = readscores('score.csv')\n highest_score = max(scores.values())\n\n print('The highest score to beat is', highest_score)\n# game = input('Do you think you can beat it? y/n')\n# if game == 'y':\n# print('Good Luck????')\n# else:\n# print('You got this????')\n print('Hello stranger, Welcome to StarWars Top Trump!')\n player_name = input('What is your name?')\n lives_remaining = 1\n score = 0\n while lives_remaining > 0:\n my_person = random_person()\n print(player_name, ', you were given', my_person['name'])\n\n while True:\n stat_choice = input('Which stat do you want to use? ( height, mass, birth year)')\n if stat_choice.lower() not in ('height', 'mass', 'birth year'):\n print('Not an appropriate answer. Try again.')\n else:\n break\n\n opponent_person = random_person()\n print('The opponent chose', opponent_person['name'])\n my_stat = my_person[stat_choice]\n opponent_stat = opponent_person[stat_choice]\n\n if my_stat > opponent_stat:\n print(player_name, 'You Win! ????')\n score = score + 1\n elif my_stat == opponent_stat:\n print('Its A Draw!')\n elif my_stat < opponent_stat:\n lives_remaining = lives_remaining - 1\n\n print(player_name, 'You have,', lives_remaining, 'lives remaining!')\n print('Your score is', score)\n\n scores[player_name] = score\n writescores( scores, 'score.csv' )\n\n print('SCORE: ', score)\n if score > highest_score:\n highest_score = score\n print(highest_score)\n\nrun()\n\nThis script works when starting with this data. Note that it doesn't work if \"score.csv\" is missing; that's code you'd need to add. It's not hard to handle.\nc:\\tmp>type score.csv\nplayer_name,score\nTim,1\nJoe,1\nBill,0\n\n"
] | [
0
] | [] | [] | [
"csv",
"python"
] | stackoverflow_0074606831_csv_python.txt |
Q:
using regex to split TM symbol from string?
As the title states, I am trying to use regex to split the trademark ™ symbol from a string. I am looking for two possible patterns:
string™ --> expected result: string ™
or
string™2 --> expected result: string ™ 2
I came up with the below pattern to check whether a string contains either potential option:
pattern = "[a-zA-Z0-9]+[™]([0-9])?$"
Is there any way to add some functionality to split it to end up with the expected results mentioned above?
A:
I'd do it with re.sub in two steps. First add space from the left side where necessary and then from the right side:
import re
s = """\
string™
string™2
string©2
test test string™9 test test test"""
s = re.sub(r"([a-zA-Z0-9])([™©])", r"\1 \2", s)
s = re.sub(r"([™©])([0-9])", r"\1 \2", s)
print(s)
Prints:
string ™
string ™ 2
string © 2
test test string ™ 9 test test test
A:
I would simply do it by adding one space before and after the ™ character and removing a potential right space at the end afterwards.
import re
text = ("string™", "string™2", "test test string™9 test test test")
pattern = re.compile(r"([^ ])(™)(.*)")
for t in text:
    print(re.sub(pattern, r"\1 \2 \3", t).rstrip())
# Outputs:
# --------
# string ™
# string ™ 2
# test test string ™ 9 test test test
If you're really looking for numbers after the trademark symbol, simply replace the dot with [0-9].
But honestly, why use regex for this task at all? A simple string replacement is also sufficient:
for t in text:
    print(t.replace("™", " ™ ").rstrip())
Less/no dependencies, better readability, better testability, better maintenance, imho.
A:
re.split will do the job, just give us all the information in the question.
Below uses a list comprehension to remove the splits caused by spaces and the extra empty string split when TM is at the end of the string:
import re
trials = 'string™', 'string™2', 'test test string™9 test test test'
for trial in trials:
    result = [x for x in re.split(' |(™)', trial) if x]
    print(f'{result!r} {" ".join(result)!r}')
Output:
['string', '™'] 'string ™'
['string', '™', '2'] 'string ™ 2'
['test', 'test', 'string', '™', '9', 'test', 'test', 'test'] 'test test string ™ 9 test test test'
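For what it's worth, a single re.sub pass can normalize both sides at once; a small sketch on the same inputs:
import re

for s in ('string™', 'string™2', 'test test string™9 test test test'):
    print(re.sub(r"\s*™\s*", " ™ ", s).strip())
# string ™
# string ™ 2
# test test string ™ 9 test test test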
| using regex to split TM symbol from string? | As the title states, I am trying to use regex to split the trademark ™ symbol from a string. I am looking for two possible patterns:
string™ --> expected result: string ™
or
string™2 --> expected result: string ™ 2
I came up with the below pattern to check whether a string contains either potential option:
pattern = "[a-zA-Z0-9]+[™]([0-9])?$"
Is there any way to add some functionality to split it to end up with the expected results mentioned above?
| [
"I'd do it with re.sub in two steps. First add space from the left side where necessary and then from the right side:\nimport re\n\ns = \"\"\"\\\nstringβ’\nstringβ’2\nstringΒ©2\ntest test stringβ’9 test test test\"\"\"\n\n\ns = re.sub(r\"([a-zA-Z0-9])([β’Β©])\", r\"\\1 \\2\", s)\ns = re.sub(r\"([β’Β©])([0-9])\", r\"\\1 \\2\", s)\n\nprint(s)\n\nPrints:\nstring β’\nstring β’ 2\nstring Β© 2\ntest test string β’ 9 test test test\n\n",
"I would simply do it by adding one space before and after the β’ character and removing a potential right space at the end afterwards.\nimport re\n\ntext = (\"stringβ’\", \"stringβ’2\", \"test test stringβ’9 test test test\")\npattern = re.compile(r\"([^ ])(β’)(.*)\")\n\nfor t in text:\n print(re.sub(pattern, r\"\\1 \\2 \\3\", t).rstrip())\n\n# Outputs:\n# --------\n# string β’\n# string β’ 2\n# test test string β’ 9 test test test\n\nIf you're really looking for numbers after the trademark symbol, simply replace the dot with [0-9].\nBut honestly, why using regex for this task, at all? A simple string replacement is also sufficient:\nfor t in text:\n print(t.replace(\"β’\", \" β’ \").rstrip())\n\nLess/no dependencies, better readability, better testability, better maintenance, imho.\n",
"re.split will do the job, just give us all the information in the question.\nBelow uses a list comprehension to remove the splits caused by spaces and the extra empty string split when TM is at the end of the string:\nimport re\n\ntrials = 'stringβ’', 'stringβ’2', 'test test stringβ’9 test test test'\n\nfor trial in trials:\n result = [x for x in re.split(' |(β’)', trial) if x]\n print(f'{result!r} {\" \".join(result)!r}')\n\nOutput:\n['string', 'β’'] 'string β’'\n['string', 'β’', '2'] 'string β’ 2'\n['test', 'test', 'string', 'β’', '9', 'test', 'test', 'test'] 'test test string β’ 9 test test test'\n\n"
] | [
1,
0,
0
] | [] | [] | [
"python",
"regex",
"superscript"
] | stackoverflow_0074606033_python_regex_superscript.txt |
Q:
get specific number of data from values in a column in pandas
In order to keep my machine learning algorithm from skewing toward certain data, I want to reduce the frequency differences in my dataset, which is a pandas table.
For example, in column X:
value A appears 1500 times
value B appears 3000 times
value C appears 1300 times
Is there a way to get 1250 of each?
A:
can you try this:
df2 = pd.concat([df[df['X']=='A'][:1250], df[df['X']=='B'][:1250], df[df['X']=='C'][:1250]])
A:
You can group the table by the column whose frequency you want to cap ("X" in your example) and take as many rows as you want with the head function (if a value has fewer rows than the count you give, it will take them all).
df = df.groupby('X').head(1250)
A:
A solution assuming you may have an unknown number of unique values:
import pandas as pd
# Creating a pandas dataframe with the example counts
d = {'X': 1500*["A"]+3000*["B"]+1300*["C"]}
df = pd.DataFrame(data=d)
# Create a dictionary containing 1 dataframe for each unique value
dfDict = dict(iter(df.groupby('X')))
# Keep only the first n rows for each value, then build the filtered dataframe
for unique_val in dfDict:
    dfDict[unique_val] = dfDict[unique_val][:1250]
filtered = pd.concat(dfDict, ignore_index=True)
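Since the goal is to reduce class imbalance for a machine learning model, a random draw per value may be preferable to always taking the first rows. A small sketch along those lines (the min() guard keeps groups with fewer than 1250 rows intact; random_state is only for reproducibility):
# Randomly sample up to 1250 rows per value in column X
df_balanced = (
    df.groupby('X', group_keys=False)
      .apply(lambda g: g.sample(n=min(len(g), 1250), random_state=42))
)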
| get specific number of data from values in a column in pandas | In order to keep my machine learning algorithm from skewing toward certain data, I want to reduce the frequency differences in my dataset, which is a pandas table.
For example, in column X:
value A appears 1500 times
value B appears 3000 times
value C appears 1300 times
Is there a way to get 1250 of each?
| [
"can you try this:\ndf2=pd.concat(df[df['X']=='A'][:1250],df[df['X']=='B'][:1250],df[df['X']=='C'][:1250])\n\n",
"You can group the table according to the column you want to set the frequency of (\"X\" for your example) and get as many data as you want with the head function (if there is less of a value than the frequency you have given, it will take them all)\ndf = df.groupby('X').head(1250)\n\n",
"A solution assuming you may have an unknown number of unique values:\nimport pandas as pd\n\n#Creating a Panda dafatframme with the number of elements\nd = {'X': 1500*[\"A\"]+3000*[\"B\"]+1300*[\"C\"]}\ndf = pd.DataFrame(data=d)\n\n#Create a dictionnary containing 1 dataframe for each unique value\ndfDict = dict(iter(df.groupby('X'))) \n\n#Keep only the first n values for each and add them to filtered dataframe\nfor unique_val in dfDict:\n dfDict[unique_val] = dfDict[unique_val][:1250]\n filetered = pd.concat(dfDict, ignore_index=True)\n\n"
] | [
1,
1,
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074578055_pandas_python.txt |
Q:
Replace value in rows (does not meet condition) with next closest row (meets condition)
I have a dataframe as shown below. It is sorted in ascending order by Column A and Column B. Only rows with Occurrences >= 10 are valid, so for rows with fewer than 10 occurrences I want to replace their values with the value of the next/closest valid row.
| Column A | Column B | Occurrences | Value |
| -------- | -------- | ----------- | ----- |
| Cell 1 | Cell 2 | 1 | 0 |
| Cell 1 | Cell 3 | 2 | 0 |
| Cell 1 | Cell 4 | 10 | 5 |
| Cell 1 | Cell 5 | 1 | 1 |
| Cell 1 | Cell 6 | 12 | 4 |
| Cell 2 | Cell 1 | 1 | 7 |
Here is what the final dataframe should look like. I would like to do this in BigQuery, but if it's not possible, Python would work as well.
| Column A | Column B | Occurrences | Value |
| -------- | -------- | ----------- | ----- |
| Cell 1 | Cell 2 | 1 | 5 |
| Cell 1 | Cell 3 | 2 | 5 |
| Cell 1 | Cell 4 | 10 | 5 |
| Cell 1 | Cell 5 | 1 | 4 |
| Cell 1 | Cell 6 | 12 | 4 |
| Cell 2 | Cell 1 | 1 | 4 |
I have the dataframe all set up, but just having trouble figuring out the logic to apply this.
Logic:
Start from the top and go through each row to check number of occurrences.
If occurrences < 10, look for the next valid row and take that value to replace the non-valid row.
If the last row is non-valid, it should take the value from the previous row that is valid.
A:
Something like this should work in Python:
import pandas as pd
import numpy as np
# example dataframe
dict = {'Column A': ['Cell 1', 'Cell 1', 'Cell 1', 'Cell 1', 'Cell 1', 'Cell 2'],
'Column B': ['Cell 2', 'Cell 3', 'Cell 4', 'Cell 5', 'Cell 6', 'Cell 1'],
'Occurrences': [1, 2, 10, 1, 12, 1],
'Value': [0, 0, 5, 1, 4, 7]}
df = pd.DataFrame(dict)
# if Occurrences < 10, set Value to nan
df.loc[df['Occurrences'] < 10, 'Value'] = np.nan
# use interpolate and fillna to set missing (nan) values to nearest non-missing row value
# (first use bfill or "backfill" method, so take the value from the "next" valid row; then apply the ffill or
# "forwardfill" method for any rows at the end that were not filled)
df.loc[:, 'Value'] = df['Value'].fillna(method='bfill').fillna(method='ffill')
Note that if you just want to fill these values with the nearest row value, you can use Series.interpolate with method='nearest'
A:
You might consider below approach in BigQuery
SELECT * EXCEPT(Value),
-- Logic #3. If the last row is non-valid,
-- it should take the value from previous row that is valid.
COALESCE(Value, LAST_VALUE(Value IGNORE NULLS) OVER w1) AS Value
FROM (
SELECT * EXCEPT(value),
-- Logic #2, If occurrences <10, look for the next valid row
-- and take that value replace the non-valid row.
FIRST_VALUE(IF(Occurrences < 10, NULL, Value) IGNORE NULLS) OVER w0 AS Value
FROM sample_table
WINDOW w0 AS (ORDER BY columnA, columnB ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)
)
WINDOW w1 AS (ORDER BY columnA, columnB ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW);
Query results
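As a side note on the pandas answer above: newer pandas versions deprecate fillna(method=...) in favor of the dedicated methods, so an equivalent (assuming a recent pandas) would be:
# Back-fill first, then forward-fill any trailing gaps
df['Value'] = df['Value'].bfill().ffill()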
| Replace value in rows (does not meet condition) with next closest row (meets condition) | I have a dataframe as shown below. It is sorted in ascending order by Column A and Column B. Only rows with Occurrences >= 10 are valid, so for rows with fewer than 10 occurrences I want to replace their values with the value of the next/closest valid row.
| Column A | Column B | Occurrences | Value |
| -------- | -------- | ----------- | ----- |
| Cell 1 | Cell 2 | 1 | 0 |
| Cell 1 | Cell 3 | 2 | 0 |
| Cell 1 | Cell 4 | 10 | 5 |
| Cell 1 | Cell 5 | 1 | 1 |
| Cell 1 | Cell 6 | 12 | 4 |
| Cell 2 | Cell 1 | 1 | 7 |
Here is what the final dataframe should look like. I would like to do this in BigQuery, but if it's not possible, Python would work as well.
| Column A | Column B | Occurrences | Value |
| -------- | -------- | ----------- | ----- |
| Cell 1 | Cell 2 | 1 | 5 |
| Cell 1 | Cell 3 | 2 | 5 |
| Cell 1 | Cell 4 | 10 | 5 |
| Cell 1 | Cell 5 | 1 | 4 |
| Cell 1 | Cell 6 | 12 | 4 |
| Cell 2 | Cell 1 | 1 | 4 |
I have the dataframe all set up, but just having trouble figuring out the logic to apply this.
Logic:
Start from the top and go through each row to check number of occurrences.
If occurrences < 10, look for the next valid row and take that value to replace the non-valid row.
If the last row is non-valid, it should take the value from the previous row that is valid.
| [
"Something like this should work in Python:\nimport pandas as pd\nimport numpy as np\n\n# example dataframe\ndict = {'Column A': ['Cell 1', 'Cell 1', 'Cell 1', 'Cell 1', 'Cell 1', 'Cell 2'],\n 'Column B': ['Cell 2', 'Cell 3', 'Cell 4', 'Cell 5', 'Cell 6', 'Cell 1'],\n 'Occurrences': [1, 2, 10, 1, 12, 1],\n 'Value': [0, 0, 5, 1, 4, 7]}\ndf = pd.DataFrame(dict)\n\n# if Occurrences < 10, set Value to nan\ndf.loc[df['Occurrences'] < 10, 'Value'] = np.nan\n\n# use interpolate and fillna to set missing (nan) values to nearest non-missing row value\n# (first use bfill or \"backfill\" method, so take the value from the \"next\" valid row; then apply the ffill or\n# \"forwardfill\" method for any rows at the end that were not filled)\ndf.loc[:, 'Value'] = df['Value'].fillna(method='bfill').fillna(method='ffill')\n\nNote that if you just want to fill these values with the nearest row value, you can use Series.interpolate with method='nearest'\n",
"You might consider below approach in BigQuery\nSELECT * EXCEPT(Value),\n -- Logic #3. If the last row is non-valid,\n -- it should take the value from previous row that is valid.\n COALESCE(Value, LAST_VALUE(Value IGNORE NULLS) OVER w1) AS Value\n FROM (\n SELECT * EXCEPT(value),\n -- Logic #2, If occurrences <10, look for the next valid row \n -- and take that value replace the non-valid row.\n FIRST_VALUE(IF(Occurrences < 10, NULL, Value) IGNORE NULLS) OVER w0 AS Value\n FROM sample_table\n WINDOW w0 AS (ORDER BY columnA, columnB ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)\n )\nWINDOW w1 AS (ORDER BY columnA, columnB ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW);\n\nQuery results\n\n"
] | [
0,
0
] | [] | [] | [
"google_bigquery",
"python",
"sql"
] | stackoverflow_0074604707_google_bigquery_python_sql.txt |
Q:
how to use fstring in a complex json object
Is there a way to use an f-string to change a variable dynamically in a complex JSON object like this:
payload = json.dumps({
"query": "query ($network: EthereumNetwork!, $dateFormat: String!, $from: ISO8601DateTime, $till: ISO8601DateTime) {\n ethereum(network: $network) {\n transactions(options: {asc: \"date.date\"}, date: {since: $from, till: $till}) {\n date: date {\n date(format: $dateFormat)\n }\n count: countBigInt\n gasValue\n }\n }\n}\n",
"variables": "{\n \"limit\": 10,\n \"offset\": 0,\n \"network\": \"ethereum\",\n \"from\": \"2022-11-25T23:59:59\",\"till\":\"2022-11-28T23:59:59\",\n \"dateFormat\": \"%Y-%m-%d\"\n}"
})
I am trying to change the \"from\": \"2022-11-25T23:59:59\" section to input a string date variable, but I am running into many problems, as the numerous brackets and embedded strings make it difficult to use an f-string.
I am also open to any alternative ideas other than f-strings if it fixes the problem.
A:
As I suggested, just convert the nested JSON to a dict and manipulate it:
import json
payload = {
"query": "query ($network: EthereumNetwork!, $dateFormat: String!, $from: ISO8601DateTime, $till: ISO8601DateTime) {\n ethereum(network: $network) {\n transactions(options: {asc: \"date.date\"}, date: {since: $from, till: $till}) {\n date: date {\n date(format: $dateFormat)\n }\n count: countBigInt\n gasValue\n }\n }\n}\n",
"variables": "{\n \"limit\": 10,\n \"offset\": 0,\n \"network\": \"ethereum\",\n \"from\": \"2022-11-25T23:59:59\",\"till\":\"2022-11-28T23:59:59\",\n \"dateFormat\": \"%Y-%m-%d\"\n}"
}
var = json.loads(payload['variables'])
var['from'] = var['from'].replace('2022','2024')
payload['variables'] = json.dumps(var)
print(json.dumps(payload))
Output:
{"query": "query ($network: EthereumNetwork!, $dateFormat: String!, $from: ISO8601DateTime, $till: ISO8601DateTime) {\n ethereum(network: $network) {\n transactions(options: {asc: \"date.date\"}, date: {since: $from, till: $till}) {\n date: date {\n date(format: $dateFormat)\n }\n count: countBigInt\n gasValue\n }\n }\n}\n", "variables": "{\"limit\": 10, \"offset\": 0, \"network\": \"ethereum\", \"from\": \"2024-11-25T23:59:59\", \"till\": \"2022-11-28T23:59:59\", \"dateFormat\": \"%Y-%m-%d\"}"}
| how to use fstring in a complex json object | Is there a way to use an f-string to change a variable dynamically in a complex JSON object like this:
payload = json.dumps({
"query": "query ($network: EthereumNetwork!, $dateFormat: String!, $from: ISO8601DateTime, $till: ISO8601DateTime) {\n ethereum(network: $network) {\n transactions(options: {asc: \"date.date\"}, date: {since: $from, till: $till}) {\n date: date {\n date(format: $dateFormat)\n }\n count: countBigInt\n gasValue\n }\n }\n}\n",
"variables": "{\n \"limit\": 10,\n \"offset\": 0,\n \"network\": \"ethereum\",\n \"from\": \"2022-11-25T23:59:59\",\"till\":\"2022-11-28T23:59:59\",\n \"dateFormat\": \"%Y-%m-%d\"\n}"
})
I am trying to change the \"from\": \"2022-11-25T23:59:59\" section to input a string date variable, but I am running into many problems, as the numerous brackets and embedded strings make it difficult to use an f-string.
I am also open to any alternative ideas other than f-strings if it fixes the problem.
| [
"As I suggested, just convert the nested JSON to a dict and manipulate it:\nimport json\n\npayload = {\n \"query\": \"query ($network: EthereumNetwork!, $dateFormat: String!, $from: ISO8601DateTime, $till: ISO8601DateTime) {\\n ethereum(network: $network) {\\n transactions(options: {asc: \\\"date.date\\\"}, date: {since: $from, till: $till}) {\\n date: date {\\n date(format: $dateFormat)\\n }\\n count: countBigInt\\n gasValue\\n }\\n }\\n}\\n\",\n \"variables\": \"{\\n \\\"limit\\\": 10,\\n \\\"offset\\\": 0,\\n \\\"network\\\": \\\"ethereum\\\",\\n \\\"from\\\": \\\"2022-11-25T23:59:59\\\",\\\"till\\\":\\\"2022-11-28T23:59:59\\\",\\n \\\"dateFormat\\\": \\\"%Y-%m-%d\\\"\\n}\"\n}\n\nvar = json.loads(payload['variables'])\nvar['from'] = var['from'].replace('2022','2024')\npayload['variables'] = json.dumps(var)\nprint(json.dumps(payload))\n\nOutput:\n{\"query\": \"query ($network: EthereumNetwork!, $dateFormat: String!, $from: ISO8601DateTime, $till: ISO8601DateTime) {\\n ethereum(network: $network) {\\n transactions(options: {asc: \\\"date.date\\\"}, date: {since: $from, till: $till}) {\\n date: date {\\n date(format: $dateFormat)\\n }\\n count: countBigInt\\n gasValue\\n }\\n }\\n}\\n\", \"variables\": \"{\\\"limit\\\": 10, \\\"offset\\\": 0, \\\"network\\\": \\\"ethereum\\\", \\\"from\\\": \\\"2024-11-25T23:59:59\\\", \\\"till\\\": \\\"2022-11-28T23:59:59\\\", \\\"dateFormat\\\": \\\"%Y-%m-%d\\\"}\"}\n\n"
] | [
1
] | [] | [] | [
"f_string",
"json",
"python",
"variables"
] | stackoverflow_0074607184_f_string_json_python_variables.txt |
Q:
How to pip install tkinter
I use pip install python-tk
but have an error
ERROR: Could not find a version that satisfies the requirement python-tk (from versions: none)
ERROR: No matching distribution found for python-tk
A:
Tkinter isn't distributed through pip; if it didn't come pre-packaged with Python, you have to get it from elsewhere:
Ubuntu
sudo apt-get install python3-tk
Fedora
sudo dnf install python3-tkinter
MacOS
brew install python-tk
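To verify the install afterwards, this one-liner should open a small Tk test window:
python3 -m tkinter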
| How to pip install tkinter | I use pip install python-tk
but have an error
ERROR: Could not find a version that satisfies the requirement python-tk (from versions: none)
ERROR: No matching distribution found for python-tk
| [
"Tkinter isn't distributed through pip; if it didn't come pre-packaged with Python, you have to get it from elsewhere:\n\nUbuntu\n\nsudo apt-get install python3-tk \n\n\nFedora\n\nsudo dnf install python3-tkinter\n\n\nMacOS\n\nbrew install python-tk\n\n"
] | [
1
] | [
"It seems like you are trying to install tkinter but using a package name that is not supported.\nEdit this may help\n",
"Firstly Make sure Python and pip is preinstalled on your system.\nType the following commands in command prompt to check is python and pip is installed on your system.\nTo check Python:\npython --version\nIf python is successfully installed, the version of python installed on your system will be displayed.\nTo check pip\npip -V\nThe version of pip will be displayed, if it is successfully installed on your system.\nNow Install Tkinter\nTkinter can be installed using pip. The following command is run in the command prompt to install Tkinter.\npip install tk\nThis command will start downloading and installing packages related to the Tkinter library. Once done, the message of successful installation will be displayed.\n"
] | [
-3,
-3
] | [
"python"
] | stackoverflow_0069603788_python.txt |
Q:
Numpy how to handle a number larger than int64 max?
I am working with a database that is very poorly organized. There are CustomerIds that are somehow bigger than the int64 maximum. Here is an example: 88168142359034442077.0
In order to be able to use this ID, I need to turn it into a string and remove the decimal.
I have tried to use the following code:
testdf = pd.DataFrame({'CUSTID': ['99418675896216.02342351', '88168142359034442077.0213', '53056496953']})
testdf['CUSTID'] = testdf['CUSTID'].astype('float64').astype('int64').astype(str)
testdf.display()
When I use the above method I get an overflow and then the numbers that are bigger than int64 becomes negative like: -9223372036854775808 for 88168142359034442077.0213
I have been looking for other ways to be able to make the change from string to float, then float to int, and finally int to string again.
One method that I tried is to just not use astype('int64'), but that puts the output in scientific format like: 8.816814235903445e+19 for 88168142359034442077.0213, and other than using regex to remove the decimal and 'e+19' I don't really see what else I can do.
Any information is appreciated. Thanks!
A:
Posting as an Answer because this became too large and I believe has further value
I'd be very surprised if those values are the real and expected IDs and not an erroneous artifact of importing some text or binary format
Specifically, the authoring program(s) and database itself are almost-certainly not using some high-memory decimal representation for a customer identifier, and would instead be "normal" types such as an int64 if they are represented that way at all!
Further, floating-point values expose programs to IEEE 754 floating point aliasing woes (see Is floating point math broken?), which will subtly foil all sorts of lookups and comparisons, and generally just wouldn't be able to pleasantly or consistently represent these values, so it's unlikely that anyone would reasonably use them
A contrived example
>>> data1 = "111001111001110100110001111000110110110111110101111000111001110110110010110001110110101110110000110010110011110100110010110011110101110001"
>>> data2 = "111000111000110001110110111000110001110100110010110011110101111001110000110011110100110100110100110010110000110111110111101110110000110010110001110011"
>>> for data in (data1, data2):
... print("".join(chr(eval("0b" + data[block:block+6])) for block in range(0, len(data), 6)))
...
99418675896216.02342351
88168142359034442077.0213
It's a long shot, but perhaps a fair suspicion that this can happen when
a user(s) is entering a new entry, but doesn't have a customer ID (yet?)
a UI is coded to only accept numeric strings
there is no other checking and the database stores the value as a string
upon discovering this, user(s) regularly jumble essentially meaningless, but check-passing characters into the field to progress their work
You could attempt to do another comparison of these to see for example if
they are all from a specific user
they are all from a specific date
the string representation becomes longer or shorter as time progresses (as the user becomes lazier or less sure they have used a value)
A:
testdf['CUSTID'] is a pandas.Series object containing Python string objects. For a pandas.Series object to contain large integers, the most straightforward type to use is Python int objects (as opposed to native Numpy types that are more efficient). You can do a conversion to a Decimal type to get rid of the non-integer part. The conversion can be done using map:
import decimal
testdf['CUSTID'] = list(map(int, map(decimal.Decimal, testdf['CUSTID'].to_list())))
This is not very efficient, but both Unicode string objects and large variable-sized integer objects are actually inefficient. Since Numpy does not support large integers natively, this is certainly the best option (though one may find a faster way to get rid of the non-integer part than using the decimal package).
Here is a string-based parsing method that is certainly slower but supports very large integers without using a large fixed-size decimal precision:
testdf['CUSTID'] = [int(s.split('.')[0]) for s in testdf['CUSTID'].to_list()]
A:
I would recommend just leaving them as strings and trimming everything after the .:
import pandas as pd
testdf = pd.DataFrame({'CUSTID': ['99418675896216.02342351', '88168142359034442077.0213', '53056496953']})
testdf['CUSTID'] = testdf['CUSTID'].apply(lambda s: s.split(".", 1)[0])
testdf.display()
Note that you could replace lambda s: s.split(".", 1)[0] with something different (e.g. lambda s: re.match(r"^(\d+)(?:\.(\d+))?$", s).groups()[0]), but I would not expect any variation to be much faster; beware that lambda s: s[:s.find(".")] silently drops the last character of values without a ".", since find() returns -1 there. Just test them for some sample input to see which one works best for you.
Alternatively, you may want to use the str methods for Pandas series with extract(), i.e.:
testdf['CUSTID'] = testdf['CUSTID'].str.extract(r"^(\d+)(?:\.\d+)?$", expand=False)
but I am unsure this would be any faster than the aforementioned solutions.
Perhaps you can achieve something faster with rstrip() but your code would not be as simple as the above, since you would need to handle values without the . differently (no-op) from the rest.
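A vectorized variant of the same split idea, assuming the values are already strings: Series.str.partition splits on the first . and leaves dot-less values intact in column 0:
testdf['CUSTID'] = testdf['CUSTID'].str.partition('.')[0]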
| Numpy how to handle a number larger than int64 max? | I am working with a database that is very poorly organized. There are CustomerIds that are somehow bigger than the int64 maximum. Here is an example: 88168142359034442077.0
In order to be able to use this ID, I need to turn it into a string and remove the decimal.
I have tried to use the following code:
testdf = pd.DataFrame({'CUSTID': ['99418675896216.02342351', '88168142359034442077.0213', '53056496953']})
testdf['CUSTID'] = testdf['CUSTID'].astype('float64').astype('int64').astype(str)
testdf.display()
When I use the above method I get an overflow and then the numbers that are bigger than int64 becomes negative like: -9223372036854775808 for 88168142359034442077.0213
I have been looking for other ways to be able to make the change from string to float, then float to int, and finally int to string again.
One method that I tried is to just not use astype('int64'), but that puts the output in scientific format like: 8.816814235903445e+19 for 88168142359034442077.0213, and other than using regex to remove the decimal and 'e+19' I don't really see what else I can do.
Any information is appreciated. Thanks!
| [
"Posting as an Answer because this became too large and I believe has further value\nI'd be very surprised if those values are the real and expected IDs and not an erroneous artifact of importing some text or binary format\nSpecifically, the authoring program(s) and database itself are almost-certainly not using some high-memory decimal representation for a customer identifier, and would instead be \"normal\" types such as an int64 if they are represented that way at all!\nFurther, floating-point values expose programs to IEEE 754 floating point aliasing woes (see Is floating point math broken?), which will subtly foil all sorts of lookups and comparisons, and generally just wouldn't be able to pleasantly or consistently represent these values, so it's unlikely that anyone would reasonably use them\nA contrived example\n>>> data1 = \"111001111001110100110001111000110110110111110101111000111001110110110010110001110110101110110000110010110011110100110010110011110101110001\"\n>>> data2 = \"111000111000110001110110111000110001110100110010110011110101111001110000110011110100110100110100110010110000110111110111101110110000110010110001110011\"\n>>> for data in (data1, data2):\n... print(\"\".join(chr(eval(\"0b\" + data[block:block+6])) for block in range(0, len(data), 6)))\n... \n99418675896216.02342351\n88168142359034442077.0213\n\n\nIt's a long shot, but perhaps a fair suspicion that this can happen when\n\na user(s) is entering a new entry, but doesn't have a customer ID (yet?)\na UI is coded to only accept numeric strings\nthere is no other checking and the database stores the value as a string\nupon discovering this, user(s) regularly jumble essentially meaningless, but check-passing characters into the field to progress their work\n\nYou could attempt to do another comparison of these to see for example if\n\nthey are all from a specific user\nthey are all from a specific date\nthe string representation becomes longer or shorter as time progresses (as the user becomes lazier or less sure they have used a value)\n\n",
"testdf['CUSTID'] is a pandas.Series object containing Python string objects. For a pandas.Series object to contain large integer, the most straightforward type to use is the int Python objects (as opposed to native Numpy types that are more efficient). You can do a conversion to a Decimal type so to get ride of the non-integer part. The conversion can be done using map:\ntestdf['CUSTID'] = list(map(int, map(decimal.Decimal, testdf['CUSTID'].to_list())))\n\nThis is not very efficient, but both Unicode string objects and large variable-sized integer objects are actually inefficient. Since Numpy does not support large integer natively, this is certainly the best option (though one may find a faster way to get ride of the non-integer part than using the decimal package).\nHere is a string-based parsing method that is certainly slower but supporting very large integers without using a large fixed-size decimal precision:\ntestdf['CUSTID'] = [int(s.split('.')[0]) for s in testdf['CUSTID'].to_list()]\n\n",
"I would recommend just leave them as string and trim everything after the .:\nimport pandas as pd\n\n\ntestdf = pd.DataFrame({'CUSTID': ['99418675896216.02342351', '88168142359034442077.0213', '53056496953']})\ntestdf['CUSTID'] = testdf['CUSTID'].apply(lambda s: s[:s.find(\".\")])\ntestdf.display()\n\nNote that you could replace: lambda s: s[:s.find(\".\")] with something different, but I would not expect any variation (e.g. lambda s: s.split(\".\", 1)[0] or lambda s: re.match(r\"^(\\d+)(?:\\.(\\d+))?$\", s).groups()[0]) to be much further than that. Just test them for some sample input to see which one works best for you.\n\nAlternatively, you may want to use str method for Pandas series with extract(), i.e.:\ntestdf['CUSTID'] = testdf['CUSTID'].str.extract(r\"^(\\d+)(?:\\.(\\d+))?$\")\n\nbut I am unsure this would be any faster than the aforementioned solutions.\nPerhaps you can achieve something faster with rstrip() but your code would not be as simple as the above, since you would need to handle values without the . differently (no-op) from the rest.\n"
] | [
2,
0,
0
] | [] | [] | [
"numpy",
"pandas",
"python"
] | stackoverflow_0074606765_numpy_pandas_python.txt |
Q:
Python: How can I pass this return value from one method to another?
So I have a class which helps me to get past dates and parse them in a specific format. I know datetime has some functionality around this but I am trying to get a wide different array of formats for my use-case.
Here is my setup so you can see where I am coming from.
I have an engine class which houses all my classes for the automation engine I am working on. Inside my engine class I have a data class and a date class.
The date class has the following method:
import datetime
def get_past_date(self, days_in_past):
# getting current date
start_date = datetime.date(
datetime.datetime.now().year,
datetime.datetime.now().month,
datetime.datetime.now().day
)
# getting the past day
delta = datetime.timedelta(days=days_in_past)
past_date = start_date - delta
# getting past date out of original format
month = ''
day = ''
year = ''
dash_count = 0
for char in str(past_date):
if char == '-':
dash_count = dash_count + 1
continue
if dash_count == 0:
year = year + char
if dash_count == 1:
month = month + char
if dash_count == 2:
day = day + char
return (month, day, year)
Then I have in my data class this function.
import PyPDF2
def extract_cem_spreadsheet_data(engine):
# we want to do a couple things here. First, we need to get the current month.
past_date = engine.date.get_past_date(90)
print(past_date)
The problem is that engine.date.get_past_date(90) has the correct result inside the "get_past_date" function, but it returns None after I pass the result to the "extract_cem_spreadsheet_data" function.
I've had this problem a few times in other places and hacked a few workarounds, but I really want to be able to pass the return value from "get_past_date" to "extract_cem_spreadsheet_data".
I have looked up multiple resources but can't seem to pinpoint this issue.
Thank you for your time!
I have tried searching on multiple other forums and even on stack overflow. I have not found a valid solution for my use-case.
A:
I found the solution. So, I set my class structure up like this to clean up my file system and keep my classes clean.
import datetime
from ._get_past_date import get_past_date
from ._format_date import format_date
class date:
def __init__(self):
self.days_of_week = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
self.months_of_year = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December']
def get_past_date(self, days_in_past): get_past_date(self, days_in_past)
def format_date(self, date_tuple, date_format): format_date(self, date_tuple, date_format)
Well, I missed the return statement in this line:
def get_past_date(self, days_in_past): get_past_date(self, days_in_past)
It should be:
def get_past_date(self, days_in_past): return get_past_date(self, days_in_past)
So thank you to whoever might need this in the future. If you clean up your classes with this methodology, then you need to include a return statement in the actual body of your class.
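For what it's worth, since these imported functions already take self as their first parameter, a slightly leaner sketch of the same wiring binds them directly as methods, so there is no return statement to forget (class body trimmed to the relevant lines):
class date:
    def __init__(self):
        self.days_of_week = [...]  # unchanged
    # Plain functions assigned in a class body become methods,
    # and these already accept self as their first parameter
    get_past_date = get_past_date
    format_date = format_date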
| Python: How can I pass this return value from one method to another? | So I have a class which helps me to get past dates and parse them in a specific format. I know datetime has some functionality around this but I am trying to get a wide different array of formats for my use-case.
Here is my setup so you can see where I am coming from.
I have an engine class which houses all my classes for the automation engine I am working on. Inside my engine class I have a data class and a date class.
The date class has the following method:
import datetime
def get_past_date(self, days_in_past):
# getting current date
start_date = datetime.date(
datetime.datetime.now().year,
datetime.datetime.now().month,
datetime.datetime.now().day
)
# getting the past day
delta = datetime.timedelta(days=days_in_past)
past_date = start_date - delta
# getting past date out of original format
month = ''
day = ''
year = ''
dash_count = 0
for char in str(past_date):
if char == '-':
dash_count = dash_count + 1
continue
if dash_count == 0:
year = year + char
if dash_count == 1:
month = month + char
if dash_count == 2:
day = day + char
return (month, day, year)
Then I have in my data class this function.
import PyPDF2
def extract_cem_spreadsheet_data(engine):
# we want to do a couple things here. First, we need to get the current month.
past_date = engine.date.get_past_date(90)
print(past_date)
The problem is that engine.date.get_past_date(90) has the correct result inside the "get_past_date" function, but it returns None after I pass the result to the "extract_cem_spreadsheet_data" function.
I've had this problem a few times in other places and hacked a few workarounds, but I really want to be able to pass the return value from "get_past_date" to "extract_cem_spreadsheet_data".
I have looked up multiple resources but can't seem to pinpoint this issue.
Thank you for your time!
I have tried searching on multiple other forums and even on stack overflow. I have not found a valid solution for my use-case.
| [
"I found the solution. So, I set my class structure up like this to clean up my file system and keep my classes clean.\nimport datetime\nfrom ._get_past_date import get_past_date\nfrom ._format_date import format_date\n\n\nclass date:\n\n def __init__(self):\n self.days_of_week = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']\n self.months_of_year = ['January', 'Febuary', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December']\n\n def get_past_date(self, days_in_past): get_past_date(self, days_in_past)\n def format_date(self, date_tuple, date_format): format_date(self, date_tuple, date_format)\n\nWell, I missed the return statement in this line:\ndef get_past_date(self, days_in_past): get_past_date(self, days_in_past)\n\nIt should be:\ndef get_past_date(self, days_in_past): return get_past_date(self, days_in_past)\n\nSo thank you to whoever might need this in the future. If you clean up your classes with this methodology, then you need to include a return statement in the actual body of your class.\n"
] | [
0
] | [] | [] | [
"automation",
"python",
"return",
"scripting"
] | stackoverflow_0074607212_automation_python_return_scripting.txt |
Q:
Multiple class inheritance TypeError with one grandparent, two parents, one child class
I'm practicing OOP and keep running into this issue. Here's one example.
Take a diamond-shaped multiple class inheritance arrangement, with Weapon feeding Edge and Long, both of which are inherited by Zweihander. If I code Edge without inheriting Weapon, the code works fine. But as soon as I make Weapon its parent, Edge can't seem to find the argument for its 'sharpness' parameter anymore. It gives me a
TypeError: Edge.__init__() missing 1 required positional argument: 'sharpness'
Oddly, the final line referenced by the error is the super().__init__() line in the Long class constructor. If the eventual object I create is a Zweihander, it has Edge and gets all the Weapon elements via Long, so that's functionally acceptable. But if I want for instance a knife object that's just Edge, it needs to inherit Weapon, which breaks the program.
What am I missing? My best guess is an MRO issue, but I can't figure it out.
Thanks, all.
class Weapon:
def __init__(self):
self.does_damage = "very yes"
def attack(self):
print("Je touche!")
class Edge(Weapon):
def __init__(self, sharpness):
super().__init__()
self.sharpness = sharpness
class Long(Weapon):
def __init__(self, length):
super().__init__()
self.length = length
class Zweihander(Long, Edge):
def __init__(self, name, length, sharpness):
Long.__init__(self, length)
Edge.__init__(self, sharpness)
self.name = name
def warning(self):
print("I will show you...\nTHE GREATEST NIGHTMARE!!!")
soulcal = Zweihander(name="soulcal", sharpness=100, length=54)
soulcal.warning()
A:
You only need to make a few small changes to each class's __init__ method to properly support cooperative multiple inheritance, per https://rhettinger.wordpress.com/2011/05/26/super-considered-super/
class Weapon:
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.does_damage = "very yes"
def attack(self):
print("Je touche!")
class Edge(Weapon):
def __init__(self, *, sharpness, **kwargs):
super().__init__(**kwargs)
self.sharpness = sharpness
class Long(Weapon):
def __init__(self, *, length, **kwargs):
super().__init__(**kwargs)
self.length = length
class Zweihander(Long, Edge):
def __init__(self, *, name, **kwargs):
super().__init__(**kwargs)
self.name = name
def warning(self):
print("I will show you...\nTHE GREATEST NIGHTMARE!!!")
soulcal = Zweihander(name="soulcal", sharpness=100, length=54)
soulcal.warning()
This approach follows a few simple rules:
Every __init__ method accepts arbitrary keyword arguments via **kwargs.
Every __init__ method calls super.__init__(**kwargs).
Every __init__ method defines keyword-only parameters instead of ordinary parameters, ensuring that keyword arguments are used at instantiation to simplify the delegation of each argument to the appropriate class.
For example, Zweihander.__init__ has its keyword-only argument name set by a keyword argument, with the remaining keyword arguments collected in kwargs. It neither knows nor cares what those arguments are; it just assumes that super().__init__, whichever method that winds up being, will handle them appropriately.
Eventually, all keyword arguments will be consumed by one of the classes in the instance's MRO, or one or more will remain in kwargs when object.__init__ is finally called, raising a TypeError.
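One nice consequence for the knife case from the question: because every class forwards **kwargs, an Edge-only weapon now instantiates cleanly on its own, for example:
knife = Edge(sharpness=90)
print(knife.sharpness, knife.does_damage)  # 90 very yes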
| Multiple class inheritance TypeError with one grandparent, two parents, one child class | I'm practicing OOP and keep running into this issue. Here's one example.
Take a diamond-shaped multiple class inheritance arrangement, with Weapon feeding Edge and Long, both of which are inherited by Zweihander. If I code Edge without inheriting Weapon, the code works fine. But as soon as I make Weapon its parent, Edge can't seem to find the argument for its 'sharpness' parameter anymore. It gives me a
TypeError: Edge.__init__() missing 1 required positional argument: 'sharpness'
Oddly, the final line referenced by the error is the super().__init__() line in the Long class constructor. If the eventual object I create is a Zweihander, it has Edge and gets all the Weapon elements via Long, so that's functionally acceptable. But if I want for instance a knife object that's just Edge, it needs to inherit Weapon, which breaks the program.
What am I missing? My best guess is an MRO issue, but I can't figure it out.
Thanks, all.
class Weapon:
def __init__(self):
self.does_damage = "very yes"
def attack(self):
print("Je touche!")
class Edge(Weapon):
def __init__(self, sharpness):
super().__init__()
self.sharpness = sharpness
class Long(Weapon):
def __init__(self, length):
super().__init__()
self.length = length
class Zweihander(Long, Edge):
def __init__(self, name, length, sharpness):
Long.__init__(self, length)
Edge.__init__(self, sharpness)
self.name = name
def warning(self):
print("I will show you...\nTHE GREATEST NIGHTMARE!!!")
soulcal = Zweihander(name="soulcal", sharpness=100, length=54)
soulcal.warning()
| [
"You only need to make a few small changes to each class's __init__ method to properly support cooperative multiple inheritance, per https://rhettinger.wordpress.com/2011/05/26/super-considered-super/\nclass Weapon:\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n self.does_damage = \"very yes\"\n\n def attack(self):\n print(\"Je touche!\")\n\n\nclass Edge(Weapon):\n def __init__(self, *, sharpness, **kwargs):\n super().__init__(**kwargs)\n self.sharpness = sharpness\n\n\nclass Long(Weapon):\n def __init__(self, *, length, **kwargs):\n super().__init__(**kwargs)\n self.length = length\n\n\nclass Zweihander(Long, Edge):\n def __init__(self, *, name, **kwargs):\n super().__init__(**kwargs)\n self.name = name\n\n def warning(self):\n print(\"I will show you...\\nTHE GREATEST NIGHTMARE!!!\")\n\n\nsoulcal = Zweihander(name=\"soulcal\", sharpness=100, length=54)\n\nsoulcal.warning()\n\nThis approach follows a few simple rules:\n\nEvery __init__ method accepts arbitrary keyword arguments via **kwargs.\nEvery __init__ method calls super.__init__(**kwargs).\nEvery __init__ method defines keyword-only parameters instead of ordinary parameters, ensuring that keyword arguments are used at instantiation to simplify the delegation of each argument to the appropriate class.\n\nFor example, Zweihander.__init__ has its keyword-only argument name set by a keyword argument, with the remaining keyword arguments collected in kwargs. It neither knows nor cares what those arguments are; it just assumes that super().__init__, whichever method that winds up being, will handle them appropriately.\nEventually, all keyword arguments will be consumed by one of the classes in the instance's MRO, or one or more will remain in kwargs when object.__init__ is finally called, raising a TypeError.\n"
] | [
2
] | [] | [] | [
"multiple_inheritance",
"oop",
"python"
] | stackoverflow_0074607049_multiple_inheritance_oop_python.txt |
Q:
Runtime error: Attempt to start new process before current process finished in python simple LDA implementation
I tried running Latent Dirichlet Allocation on a very large dataset using simple LDA and LDAMulticore, but I am getting the below error after two days of execution: "An attempt has been made to start a new process before the current process has finished its bootstrapping phase."
from gensim.models.coherencemodel import CoherenceModel
print('started')
Lda = gensim.models.ldamodel.LdaModel
ldamodel = Lda(corpus, num_topics=50, id2word = id2word, passes=40,iterations=100, chunksize = 10000, eval_every = None,random_state=100)
print('lda completed')
coherencemodel = CoherenceModel(model=ldamodel, texts=data_ready, dictionary=id2word, coherence='c_v')
print('coherence completed')
coherence_lda = coherencemodel.get_coherence()
perplexity_values=ldamodel.log_perplexity(corpus)
I got the first three print statements and the error is happening when getting the coherence value to the variable.
Also, the whole process is taking a long time as the document has around 2400000 lines.
I got to know from another post that the error can be resolved by using if __name__ == '__main__':
I am new to python and not sure how to use it in my case as all the other data preprocessing and data loading is done within the same file and each step is done one by one.
Any help would be appreciated.
Thanks in advance.
A:
It is caused by the get_coherence() function; you need to wrap the whole code into a main() function and add the __name__ == "__main__" structure, see:
https://github.com/RaRe-Technologies/gensim/issues/2291#issuecomment-447269158
(You can also try it first on some very simple text sample, like this one: https://radimrehurek.com/gensim/models/ldamodel.html)
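A minimal sketch of that restructuring (assuming the preprocessing that builds corpus, id2word and data_ready is moved inside main() as well):
import gensim
from gensim.models.coherencemodel import CoherenceModel

def main():
    # ... data loading / preprocessing that defines corpus, id2word and
    # data_ready goes here (assumed from the question's existing code) ...
    ldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics=50, id2word=id2word,
                                               passes=40, iterations=100, chunksize=10000,
                                               eval_every=None, random_state=100)
    coherencemodel = CoherenceModel(model=ldamodel, texts=data_ready,
                                    dictionary=id2word, coherence='c_v')
    print(coherencemodel.get_coherence())

if __name__ == '__main__':
    main()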
| Runtime error: Attempt to start new process before current process finished in python simple LDA implementation | I tried running Latent Dirichlet Allocation on a very large dataset using simple LDA and LDAMulticore, but after two days of execution I get the error below: "An attempt has been made to start a new process before the current process has finished its bootstrapping phase."
from gensim.models.coherencemodel import CoherenceModel
print('started')
Lda = gensim.models.ldamodel.LdaModel
ldamodel = Lda(corpus, num_topics=50, id2word = id2word, passes=40,iterations=100, chunksize = 10000, eval_every = None,random_state=100)
print('lda completed')
coherencemodel = CoherenceModel(model=ldamodel, texts=data_ready, dictionary=id2word, coherence='c_v')
print('coherence completed')
coherence_lda = coherencemodel.get_coherence()
perplexity_values=ldamodel.log_perplexity(corpus)
I got the first three print statements, and the error happens when assigning the coherence value to the variable.
Also, the whole process takes a long time, as the document has around 2,400,000 lines.
I learned from another post that the error can be resolved by using if __name__ == '__main__':
I am new to Python and not sure how to use it in my case, as all the other data preprocessing and data loading is done within the same file and each step is done one by one.
Any help would be appreciated.
Thanks in advance.
| [
"It is caused by get_coherence() function, you need to wrap the whole code into main() function and add __name__ == \"__main__\" structure, see:\nhttps://github.com/RaRe-Technologies/gensim/issues/2291#issuecomment-447269158\n(You can also try it first on some very simple text sample, like this one: https://radimrehurek.com/gensim/models/ldamodel.html)\n"
] | [
0
] | [] | [] | [
"large_data",
"latentdirichletallocation",
"process",
"python",
"runtime_error"
] | stackoverflow_0073263582_large_data_latentdirichletallocation_process_python_runtime_error.txt |
Q:
Moving widgets in Canvas Tkinter
I have a canvas with a little oval in it. It moves around the widget using the arrow keys, but when it's on the edge of the canvas and I move it further, the oval just disappears.
I want the oval to stay on any edge of the canvas, without disappearing, no matter how long I keep pressing the arrow key pointing toward that edge.
This is the code:
from tkinter import *
root = Tk()
root.title("Oval")
root.geometry("800x600")
w = 600
h = 400
x = w//2
y = h//2
my_canvas = Canvas(root, width=w, height=h, bg='black')
my_canvas.pack(pady=20)
my_circle = my_canvas.create_oval(x, y, x+20, y+20, fill='cyan')
def left(event):
x = -10
y = 0
my_canvas.move(my_circle, x, y)
def right(event):
x = 10
y = 0
my_canvas.move(my_circle, x, y)
def up(event):
x = 0
y = -10
my_canvas.move(my_circle, x, y)
def down(event):
x = 0
y = 10
my_canvas.move(my_circle, x, y)
root.bind('<Left>', left)
root.bind('<Right>', right)
root.bind('<Up>', up)
root.bind('<Down>', down)
root.mainloop()
This is what it looks like:
The oval on an edge
And if I continue pressing the key looks like this:
The oval disappearing
A:
You could test the current coordinates and compare them to your canvas size.
I created a function to get the current x1, y1, x2, y2 from your oval. This way you have the coordinates of the borders of your oval.
So all I do is testing if the oval is touching a border.
from tkinter import *
root = Tk()
root.title("Oval")
root.geometry("800x600")
w = 600
h = 400
x = w // 2
y = h // 2
my_canvas = Canvas(root, width=w, height=h, bg='black')
my_canvas.pack(pady=20)
my_circle = my_canvas.create_oval(x, y, x + 20, y + 20, fill='cyan')
def left(event):
x1, y1, x2, y2 = get_canvas_position()
if x1 > 0:
x = -10
y = 0
my_canvas.move(my_circle, x, y)
def right(event):
x1, y1, x2, y2 = get_canvas_position()
if x2 < w:
x = 10
y = 0
my_canvas.move(my_circle, x, y)
def up(event):
x1, y1, x2, y2 = get_canvas_position()
if y1 > 0:
x = 0
y = -10
my_canvas.move(my_circle, x, y)
def down(event):
x1, y1, x2, y2 = get_canvas_position()
if y2 < h:
x = 0
y = 10
my_canvas.move(my_circle, x, y)
def get_canvas_position():
return my_canvas.coords(my_circle)
root.bind('<Left>', left)
root.bind('<Right>', right)
root.bind('<Up>', up)
root.bind('<Down>', down)
root.mainloop()
A:
The canvas object is stored via 2 sets of coordinates [x1, y1, x2, y2]. You should check against the object's current location by using the .coords() method. The dimensions of the canvas object will affect the coordinates.
def left(event):
x = -10
y = 0
if my_canvas.coords(my_circle)[0] > 0: # index 0 is X coord left side object.
my_canvas.move(my_circle, x, y)
def right(event):
x = 10
y = 0
# The border collision now happens at 600 as per var "w" as previously defined above.
if my_canvas.coords(my_circle)[2] < w: # index 2 is X coord right side object.
my_canvas.move(my_circle, x, y)
Now repeat a similar process for up and down.
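For completeness, a sketch of the same pattern for the vertical axis:
def up(event):
    x = 0
    y = -10
    if my_canvas.coords(my_circle)[1] > 0: # index 1 is Y coord top side object.
        my_canvas.move(my_circle, x, y)

def down(event):
    x = 0
    y = 10
    if my_canvas.coords(my_circle)[3] < h: # index 3 is Y coord bottom side object.
        my_canvas.move(my_circle, x, y)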
| Moving widgets in Canvas Tkinter | I have a canvas with a little oval in it. It moves around the widget using the arrow keys, but when it's on the edge of the canvas and I move it further, the oval just disappears.
I want the oval to stay on any edge of the canvas, without disappearing, no matter how long I keep pressing the arrow key pointing toward that edge.
This is the code:
from tkinter import *
root = Tk()
root.title("Oval")
root.geometry("800x600")
w = 600
h = 400
x = w//2
y = h//2
my_canvas = Canvas(root, width=w, height=h, bg='black')
my_canvas.pack(pady=20)
my_circle = my_canvas.create_oval(x, y, x+20, y+20, fill='cyan')
def left(event):
x = -10
y = 0
my_canvas.move(my_circle, x, y)
def right(event):
x = 10
y = 0
my_canvas.move(my_circle, x, y)
def up(event):
x = 0
y = -10
my_canvas.move(my_circle, x, y)
def down(event):
x = 0
y = 10
my_canvas.move(my_circle, x, y)
root.bind('<Left>', left)
root.bind('<Right>', right)
root.bind('<Up>', up)
root.bind('<Down>', down)
root.mainloop()
This is what it looks like:
The oval on an edge
And if I continue pressing the key looks like this:
The oval disappearing
| [
"You could test the current coordinates and compare them to your canvas size.\nI created a function to get the current x1, y1, x2, y2 from your oval. This way you have the coordiantes of the borders of your oval.\nSo all I do is testing if the oval is touching a border.\nfrom tkinter import *\n\nroot = Tk()\nroot.title(\"Oval\")\nroot.geometry(\"800x600\")\n\nw = 600\nh = 400\nx = w // 2\ny = h // 2\n\nmy_canvas = Canvas(root, width=w, height=h, bg='black')\nmy_canvas.pack(pady=20)\n\nmy_circle = my_canvas.create_oval(x, y, x + 20, y + 20, fill='cyan')\n\n\ndef left(event):\n x1, y1, x2, y2 = get_canvas_position()\n if x1 > 0:\n x = -10\n y = 0\n my_canvas.move(my_circle, x, y)\n\n\ndef right(event):\n x1, y1, x2, y2 = get_canvas_position()\n if x2 < w:\n x = 10\n y = 0\n my_canvas.move(my_circle, x, y)\n\n\ndef up(event):\n x1, y1, x2, y2 = get_canvas_position()\n if y1 > 0:\n x = 0\n y = -10\n my_canvas.move(my_circle, x, y)\n\n\ndef down(event):\n x1, y1, x2, y2 = get_canvas_position()\n if y2 < h:\n x = 0\n y = 10\n my_canvas.move(my_circle, x, y)\n\n\ndef get_canvas_position():\n return my_canvas.coords(my_circle)\n\n\nroot.bind('<Left>', left)\nroot.bind('<Right>', right)\nroot.bind('<Up>', up)\nroot.bind('<Down>', down)\n\nroot.mainloop()\n\n",
"The canvas object is stored via 2 sets of coordinates [x1, y1, x2, y2]. You should check against the objects current location by using the .coords() method. The dimensions of the canvas object will affect the coordinates.\ndef left(event):\n x = -10\n y = 0\n if my_canvas.coords(my_circle)[0] > 0: # index 0 is X coord left side object.\n my_canvas.move(my_circle, x, y)\n\ndef right(event):\n x = 10\n y = 0\n # The border collision now happens at 600 as per var \"w\" as previously defined above.\n if my_canvas.coords(my_circle)[2] < w: # index 2 is X coord right side object.\n my_canvas.move(my_circle, x, y)\n\nNow repeat a similar process for up and down.\n"
] | [
1,
1
] | [] | [] | [
"canvas",
"python",
"tkinter"
] | stackoverflow_0074606986_canvas_python_tkinter.txt |
Q:
How to iterate throughout the column by comparing two row values in one iteration in python?
In this Excel analysis, the following condition was used: =IF(AND(B2<1;B3>5);1;0). Please refer to the image below.
(https://i.stack.imgur.com/FpPIK.png)
If Compressor 1's first-row value is less than 1 (<1) and the second-row value is greater than 5 (>5), it returns the value '1';
if the condition is not satisfied, it returns the value '0'.
Even if one row satisfies the condition and the other row doesn't, it returns '0'
(for the first output the 1st & 2nd rows, for the second output the 2nd & 3rd rows, and so on for the rest of the rows).
So, in a Jupyter notebook I have tried to write code that iterates through all rows in one column, comparing them against this condition:
df3['cycle']=0&1
df3.loc[(df3['Kompressor_1_val']<1&5),['cycle']]=0
df3.loc[(df3['Kompressor_1_val']>1&5),['cycle']]=1
df3
But could anyone please help me to write the code by considering the above Excel analysis?
In the new column, I need the output to satisfy this condition: it should be 0 or 1 based on the description provided in the Excel analysis,
i.e. for the 1st iteration, it should compare the 1st and 2nd rows of the selected column against the condition to give the output either 1 or 0;
for the 2nd iteration, it should compare the 2nd and 3rd rows of the selected column against the condition to give the output either 1 or 0, and so on for the rest of the rows.
(https://i.stack.imgur.com/dCuMr.png)
A:
You can check the current Compressor 1 row using .lt(...)
df["Compressor 1"].lt(1)
And the next row using .shift(-1) and .gt(...)
df["Compressor 1"].shift(-1).gt(5)
Put them together with & and convert to int
df["Frequency Cycle Comp 1"] = (df["Compressor 1"].lt(1) & df["Compressor 1"].shift(-1).gt(5)).astype(int)
An example
import pandas as pd
import numpy as np
np.random.seed(0)
df = pd.DataFrame(np.random.randint(low=-10, high=10, size=(10,)), columns=["Compressor 1"])
df["Frequency Cycle Comp 1"] = (df["Compressor 1"].lt(1) & df["Compressor 1"].shift(-1).gt(5)).astype(int)
print(df)
Compressor 1 Frequency Cycle Comp 1
0 2 0
1 5 0
2 -10 0
3 -7 0
4 -7 0
5 -3 0
6 -1 1
7 9 0
8 8 0
9 -6 0
| How to iterate throughout the column by comparing two row values in one iteration in python? | In this Excel analysis, the following condition was used: =IF(AND(B2<1;B3>5);1;0). Please refer to the image below.
(https://i.stack.imgur.com/FpPIK.png)
If Compressor 1's first-row value is less than 1 (<1) and the second-row value is greater than 5 (>5), it returns the value '1';
if the condition is not satisfied, it returns the value '0'.
Even if one row satisfies the condition and the other row doesn't, it returns '0'
(for the first output the 1st & 2nd rows, for the second output the 2nd & 3rd rows, and so on for the rest of the rows).
So, in a Jupyter notebook I have tried to write code that iterates through all rows in one column, comparing them against this condition:
df3['cycle']=0&1
df3.loc[(df3['Kompressor_1_val']<1&5),['cycle']]=0
df3.loc[(df3['Kompressor_1_val']>1&5),['cycle']]=1
df3
But could anyone please help me to write the code by considering the above Excel analysis?
In the new column, I need the output to satisfy this condition: it should be 0 or 1 based on the description provided in the Excel analysis,
i.e. for the 1st iteration, it should compare the 1st and 2nd rows of the selected column against the condition to give the output either 1 or 0;
for the 2nd iteration, it should compare the 2nd and 3rd rows of the selected column against the condition to give the output either 1 or 0, and so on for the rest of the rows.
(https://i.stack.imgur.com/dCuMr.png)
| [
"You can check the current Compressor 1 row using .lt(...)\ndf[\"Compressor 1\"].lt(1)\n\nAnd the next row using .shift(-1) and .gt(...)\ndf[\"Compressor 1\"].shift(-1).gt(5)\n\nPut them together with & and convert to int\ndf[\"Frequency Cycle Comp 1\"] = (df[\"Compressor 1\"].lt(1) & df[\"Compressor 1\"].shift(-1).gt(5)).astype(int)\n\nAn example\nimport pandas as pd\nimport numpy as np\n\n\nnp.random.seed(0)\n\ndf = pd.DataFrame(np.random.randint(low=-10, high=10, size=(10,)), columns=[\"Compressor 1\"])\n\ndf[\"Frequency Cycle Comp 1\"] = (df[\"Compressor 1\"].lt(1) & df[\"Compressor 1\"].shift(-1).gt(5)).astype(int)\n\nprint(df)\n\n Compressor 1 Frequency Cycle Comp 1\n0 2 0\n1 5 0\n2 -10 0\n3 -7 0\n4 -7 0\n5 -3 0\n6 -1 1\n7 9 0\n8 8 0\n9 -6 0\n\n"
] | [
0
] | [] | [] | [
"data_analysis",
"dataframe",
"excel",
"pandas",
"python"
] | stackoverflow_0074606900_data_analysis_dataframe_excel_pandas_python.txt |
Q:
If a user enters an invalid string option in python how should I handle the exception?
I'm writing a rock, paper, scissors game for a user and computer, and I want the user to type in one of the three options, e.g. "rock", but I'm not sure what kind of exception to use if the user enters, say, "monkey".
class RockPaperScissors:
def getUserChoice(userchoice):
while True:
try:
userchoice = input("Type in your choice: rock, paper, scissors: ")
if userchoice != "rock" or userchoice != "paper" or userchoice != "scissors":
raise ValueError("Try typing in your choice again")
break
except:
print("Invalid Input.")
return userchoice.lower()
A:
It's very helpful to use a sentinel value like None, if you want to remind the user of the input choices after each failed attempt:
class RockPaperScissors:
def getUserChoice(self):
choice = None
while choice not in ('rock', 'paper', 'scissors'):
if choice is not None:
print('Please enter only rock, paper, or scissors.')
choice = input('Type in your choice: rock, paper, scissors: ').lower()
return choice
You can also do an even shorter version in later versions of Python:
class RockPaperScissors:
def getUserChoice(self):
prompt = 'Type in your choice: rock, paper, scissors: '
choices = 'rock', 'paper', 'scissors'
while (choice := input(prompt).lower()) not in choices:
print('Please enter only rock, paper, or scissors.')
return choice
A:
In this case, you probably do not need an exception.
You probably want to loop again on input().
The following code is more explicit:
class RockPaperScissors:
def getUserChoice():
while True:
userchoice = input("Type in your choice: rock, paper, scissors: ").lower()
if userchoice in ( "rock", "paper" "scissors"):
return userchoice
else:
print( "Invalid input. Try again.)
The try|except mechanism is more appropriate when you want to handle the error at an upper level.
A:
Another option is to have the user enter a number. That way, you don't have to worry about spelling:
class RockPaperScissors:
def getUserChoice(options = ("rock", "paper", "scissors")):
options_str = "\n\t" + "\n\t".join(f'{x+1}. {opt}' for x,opt in enumerate(options))
while True:
userchoice = input(f"Type in your choice:{options_str}\n").lower()
try:
userchoice = int(userchoice)
if 1 <= userchoice <= len(options):
return options[int(userchoice) - 1]
except:
if userchoice in options:
return userchoice
print("Invalid Input.")
Note: this uses try/except but only for the string-to-int conversion. Also, the code will work if the user types the word instead of a number.
Sample run:
>>> RockPaperScissors.getUserChoice()
Type in your choice:
1. rock
2. paper
3. scissors
4
Invalid Input.
Type in your choice:
1. rock
2. paper
3. scissors
3
'scissors'
Typing the word:
>>> RockPaperScissors.getUserChoice()
Type in your choice:
1. rock
2. paper
3. scissors
Paper
'paper'
And, it works with any options:
>>> RockPaperScissors.getUserChoice(("C", "C++", "Python", "Java", "Prolog"))
Type in your choice:
1. C
2. C++
3. Python
4. Java
5. Prolog
3
'Python'
A:
Try this. I hope this helps you out. Best of luck!
class RockPaperScissors:
def getUserChoice(self):
while True:
try:
userchoice = input("Type in your choice: rock, paper, scissors: ")
if userchoice not in ("rock", "paper", "scissors"):
raise ValueError
except ValueError:
print("Invalid Input!\nTry typing in your choice again")
continue
else:
print("You choice is:", userchoice)
return userchoice.lower()
| If a user enters an invalid string option in python how should I handle the exception? | I'm writing a rock, paper, scissors game for a user and computer, and I want the user to type in one of the three options, e.g. "rock", but I'm not sure what kind of exception to use if the user enters, say, "monkey".
class RockPaperScissors:
def getUserChoice(userchoice):
while True:
try:
userchoice = input("Type in your choice: rock, paper, scissors: ")
if userchoice != "rock" or userchoice != "paper" or userchoice != "scissors":
raise ValueError("Try typing in your choice again")
break
except:
print("Invalid Input.")
return userchoice.lower()
| [
"It's very helpful to use a sentinel value like None, if you want to remind the user of the input choices after each failed attempt:\nclass RockPaperScissors:\n\n def getUserChoice(self):\n choice = None\n while choice not in ('rock', 'paper', 'scissors'):\n if choice is not None:\n print('Please enter only rock, paper, or scissors.')\n choice = input('Type in your choice: rock, paper, scissors: ').lower()\n return choice\n\nYou can also do an even shorter version in later versions of Python:\nclass RockPaperScissors:\n\n def getUserChoice(self):\n prompt = 'Type in your choice: rock, paper, scissors: '\n choices = 'rock', 'paper', 'scissors'\n while (choice := input(prompt).lower()) not in choices:\n print('Please enter only rock, paper, or scissors.')\n return choice\n\n",
"In this case, you probably do not need an exception.\nYou probably want to loop again on input().\nThe following code is more explicit:\nclass RockPaperScissors:\n def getUserChoice():\n while True:\n userchoice = input(\"Type in your choice: rock, paper, scissors: \").lower()\n if userchoice in ( \"rock\", \"paper\" \"scissors\"):\n return userchoice\n else:\n print( \"Invalid input. Try again.)\n\nThe try|except mechanism is more appropriate when you want to handle the error at an upper level.\n",
"Another option is to have the user enter a number. That way, you don't have to worry about spelling:\nclass RockPaperScissors:\n def getUserChoice(options = (\"rock\", \"paper\", \"scissors\")):\n options_str = \"\\n\\t\" + \"\\n\\t\".join(f'{x+1}. {opt}' for x,opt in enumerate(options))\n while True:\n userchoice = input(f\"Type in your choice:{options_str}\\n\").lower()\n try:\n userchoice = int(userchoice)\n if 1 <= userchoice <= len(options):\n return options[int(userchoice) - 1]\n except:\n if userchoice in options:\n return userchoice\n print(\"Invalid Input.\")\n\nNote: this uses try/except but only for the string-to-int conversion. Also, the code will work if the user types the word instead of a number.\nSample run:\n>>> RockPaperScissors.getUserChoice()\nType in your choice:\n 1. rock\n 2. paper\n 3. scissors\n4\nInvalid Input.\nType in your choice:\n 1. rock\n 2. paper\n 3. scissors\n3\n'scissors'\n\nTyping the word:\n>>> RockPaperScissors.getUserChoice()\nType in your choice:\n 1. rock\n 2. paper\n 3. scissors\nPaper\n'paper'\n\nAnd, it works with any options:\n>>> RockPaperScissors.getUserChoice((\"C\", \"C++\", \"Python\", \"Java\", \"Prolog\"))\nType in your choice:\n 1. C\n 2. C++\n 3. Python\n 4. Java\n 5. Prolog\n3\n'Python'\n\n",
"Try this. I hope this helps you out. Best of luck!\nclass RockPaperScissors:\n def getUserChoice(self):\n while True:\n try:\n userchoice = input(\"Type in your choice: rock, paper, scissors: \")\n if userchoice not in (\"rock\", \"paper\", \"scissors\"): \n raise ValueError\n except ValueError:\n print(\"Invalid Input!\\nTry typing in your choice again\") \n continue\n else:\n print(\"You choice is:\", userchoice)\n return userchoice.lower()\n\n"
] | [
0,
0,
0,
0
] | [] | [] | [
"function",
"python"
] | stackoverflow_0074606911_function_python.txt |
Q:
How to write a Python program which sums up all the given numbers in a date within a given range?
I have written a Python program that prints out all the dates in yyyy-mm-dd format within the year 2020. Now I am trying to write a program that loops/iterates through the year 2020 and prints out the sum of the digits of each date. For example: the sum for the date 2020-01-01 is 6 (2+0+2+0+0+1+0+1), the sum for the date 2020-01-02 is 7, etc. My problem is that I do not know how to write code that takes out those digits, adds them up, and prints the result for each date within 2020.
I have also seen another post with the headline "Sum the digits of a number", but it did not really help me to solve my problem because the length of some months is not the same as the rest, and also not all of the code was obvious.
from datetime import date, timedelta

def date_range(start, end): # creating a generator function
for i in range(int((end - start).days)):
yield start + timedelta(i)
start = date(2020, 1, 1)
end = date(2020, 12, 31)
for each_date in date_range(start, end):
    print(each_date.strftime("%Y-%m-%d")) # iterating and printing all dates within the year of 2020
A:
[int(x) for x in "2020-01-02" if x!='-']
is
[2, 0, 2, 0, 0, 1, 0, 2]
So, from there
sum(int(x) for x in "2020-01-02" if x!='-')
# 7
And you can go further
[(ed,sum(int(x) for x in ed.strftime("%Y-%m-%d") if x!='-')) for ed in date_range(start, end)]
# [(datetime.date(2020, 1, 1), 6), (datetime.date(2020, 1, 2), 7), (datetime.date(2020, 1, 3), 8), ..., (datetime.date(2020, 12, 29), 18), (datetime.date(2020, 12, 30), 10)]
Or to just print them
for ed in date_range(start, end):
print(ed, sum(int(x) for x in ed.strftime("%Y-%m-%d") if x!='-'))
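An equivalent filter, if you prefer not to special-case the '-' separator, keeps only digit characters explicitly:
for ed in date_range(start, end):
    print(ed, sum(int(c) for c in ed.strftime("%Y-%m-%d") if c.isdigit()))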
| How to write a Python program which sums up all the given numbers in a date within a given range? | I have written a Python program that prints out all the dates in yyyy-mm-dd format within the year 2020. Now I am trying to write a program that loops/iterates through the year 2020 and prints out the sum of the digits of each date. For example: the sum for the date 2020-01-01 is 6 (2+0+2+0+0+1+0+1), the sum for the date 2020-01-02 is 7, etc. My problem is that I do not know how to write code that takes out those digits, adds them up, and prints the result for each date within 2020.
I have also seen another post with the headline "Sum the digits of a number", but it did not really help me to solve my problem because the length of some months is not the same as the rest, and also not all of the code was obvious.
from datetime import date, timedelta

def date_range(start, end): # creating a generator function
for i in range(int((end - start).days)):
yield start + timedelta(i)
start = date(2020, 1, 1)
end = date(2020, 12, 31)
for each_date in date_range(start, end):
    print(each_date.strftime("%Y-%m-%d")) # iterating and printing all dates within the year of 2020
| [
"[int(x) for x in \"2020-01-02\" if x!='-']\n\nis\n[2, 0, 2, 0, 0, 1, 0, 2]\n\nSo, from there\nsum(int(x) for x in \"2020-01-02\" if x!='-')\n# 7\n\nAnd you can go further\n[(ed,sum(int(x) for x in ed.strftime(\"%Y-%m-%d\") if x!='-')) for ed in date_range(start, end)]\n# [(datetime.date(2020, 1, 1), 6), (datetime.date(2020, 1, 2), 7), (datetime.date(2020, 1, 3), 8), ..., (datetime.date(2020, 12, 29), 18), (datetime.date(2020, 12, 30), 10)] \n\nOr to just print them\nfor ed in date_range(start, end):\n print(ed, sum(int(x) for x in ed.strftime(\"%Y-%m-%d\") if x!='-'))\n\n"
] | [
1
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074607268_python_python_3.x.txt |
Q:
Assuming the structure of the json string does not change, is the order of a jsonpath match value result stable?
Assuming the structure of the json string does not change, is the order of a jsonpath match value result stable?
import jsonpath_ng
response = json.loads(response)
jsonpath_expression_name = jsonpath_ng.parse("$[forms][0][questionGroups][*][questions]..[name]")
match_name = [match.value for match in jsonpath_expression_name.find(response)]
jsonpath_expression_id = jsonpath_ng.parse("$[forms][0][questionGroups][*][questions]..[id]")
matches_id = [match.value for match in jsonpath_expression_id.find(response)]
survey_q_dict = { k:v for (k,v) in zip(matches_id, match_name)}
Thank you!
A:
Yes, the order of a jsonpath match value result is stable as long as the structure of the json string does not change. This is because jsonpath expressions are evaluated in a deterministic manner, meaning that the same expression will always return the same result.
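If you would rather not rely on two parallel result orders at all, a sketch (assuming each matched question object is a dict carrying both id and name keys) pairs them in a single traversal:
import json
import jsonpath_ng

response = json.loads(response) # as in the question
expr = jsonpath_ng.parse("$[forms][0][questionGroups][*][questions][*]")
survey_q_dict = {m.value["id"]: m.value["name"] for m in expr.find(response)}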
| Assuming the structure of the json string does not change, is the order of a jsonpath match value result stable? | Assuming the structure of the json string does not change, is the order of a jsonpath match value result stable?
import jsonpath_ng
response = json.loads(response)
jsonpath_expression_name = jsonpath_ng.parse("$[forms][0][questionGroups][*][questions]..[name]")
match_name = [match.value for match in jsonpath_expression_name.find(response)]
jsonpath_expression_id = jsonpath_ng.parse("$[forms][0][questionGroups][*][questions]..[id]")
matches_id = [match.value for match in jsonpath_expression_id.find(response)]
survey_q_dict = { k:v for (k,v) in zip(matches_id, match_name)}
Thank you!
| [
"Yes, the order of a jsonpath match value result is stable as long as the structure of the json string does not change. This is because jsonpath expressions are evaluated in a deterministic manner, meaning that the same expression will always return the same result.\n"
] | [
3
] | [] | [] | [
"dictionary_comprehension",
"json",
"jsonpath",
"python"
] | stackoverflow_0074497071_dictionary_comprehension_json_jsonpath_python.txt |
Q:
How to render a Django Serializer Template with React Js?
I've set up my api for a basic model in my Django project. I've defined my post and get methods and everything is working correctly. Now, I'm wondering how I can render my Django model using react js. Essentially, I'm wondering how I can use react js to assign my django model values that were inserted using reactjs instead of django. For example, if my django model's fields are "name" and "id", how can I assign these fields using reactjs instead of the basic Django page(I've inserted my current page below)? Do we need to use axios or jquery to make these transitions? Thanks.
A:
Instead of thinking about how to render your Django model in React, I think it is easier to understand when you think in terms of the server-client model.
Basically you want to get your data from your Django server and then show it in the client, using React.
To do this with Django REST framework, the best way is to make HTTP requests from the client to the server. You could use the browser's built-in fetch API from JavaScript or an HTTP library like axios.
You can read more about fetch here
| How to render a Django Serializer Template with React Js? | I've set up my api for a basic model in my Django project. I've defined my post and get methods and everything is working correctly. Now, I'm wondering how I can render my Django model using react js. Essentially, I'm wondering how I can use react js to assign my django model values that were inserted using reactjs instead of django. For example, if my django model's fields are "name" and "id", how can I assign these fields using reactjs instead of the basic Django page(I've inserted my current page below)? Do we need to use axios or jquery to make these transitions? Thanks.
| [
"Instead of thinking how to render your django model in react, i think it is easier to understand when you think in the server-client model.\nBasically you want to get you data from your django server and then show it in the client, using react.\nTo do this with Django rest framework, the best way is to make HTTP requests from the client to the server. You could use the browser built-in fetch api from javascript or a http library like axios.\nYou can read more about fetch here\n"
] | [
0
] | [] | [] | [
"django",
"javascript",
"python",
"reactjs"
] | stackoverflow_0074606323_django_javascript_python_reactjs.txt |
Q:
How to change the value of a python variable from a .kv file
I am fairly new to Python and have just started using the Kivy library. I am trying to change the value of a variable in the .py file when a button from the .kv file is pressed. I am unsure how to go about this.
The code I currently have is:
python file:
from kivy.app import App
from kivy.uix.widget import Widget
class experienceScreen(Widget):
pass
experience=""
class workoutApp(App):
def build(self):
return experienceScreen()
workoutApp().run()
def beginnerpressed(self, instance):
experience==1
if experience == 1:
print("test code works.")
if experience == 2:
print("test code works.")
if experience == 3:
print("test code works.")
kivy file:
#: kivy 2.1.0
<experienceScreen>:
FloatLayout:
pos:0,0
size: root.width, root.height
Label:
text: "What level of gym go-er are you?"
pos_hint: {'x':.4,'y':.85}
size_hint:0.2,0.1
Button:
text: "Beginner"
pos_hint: {'x':.25,'y':.6}
size_hint:0.5,0.1
on_press: experience=1
Button:
text: "Intermediate"
pos_hint: {'x':.25,'y':.4}
size_hint:0.5,0.1
on_press: experience=2
Button:
text: "Advanced"
pos_hint: {'x':.25,'y':.2}
size_hint:0.5,0.1
on_press: experience=3
I had expected that when I pressed any of the buttons, the "test code works" text would display in the console. However, this is not the case. I expect this is because variables are assigned differently within the .kv file.
A:
The on_press item needs to be connected to a method (function) in your code. One can use root.something to reach the widget or app.something to reach a method in the app object.
Kivy file
<experienceScreen>:
FloatLayout:
pos:0,0
size: root.width, root.height
Label:
text: 'What level of gym go-er are you?'
pos_hint: {'x':.4,'y':.85}
size_hint:0.2,0.1
Button:
text: 'Beginner'
pos_hint: {'x':.25,'y':.6}
size_hint:0.5,0.1
on_press: root.beginnerpressed(1)
Button:
text: 'Intermediate'
pos_hint: {'x':.25,'y':.4}
size_hint:0.5,0.1
on_press: root.beginnerpressed(2)
Button:
text: 'Advanced'
pos_hint: {'x':.25,'y':.2}
size_hint:0.5,0.1
on_press: root.beginnerpressed(3)
python
class experienceScreen(Widget):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.experience = 0
def beginnerpressed(self, experience: int):
self.experience = experience
print(f"test code works. {self.experience}")
class workoutApp(App):
def build(self):
return experienceScreen()
workoutApp().run()
| How to change the value of a python variable from a .kv file | I am fairly new to Python and have just started using the Kivy library. I am trying to change the value of a variable in the .py file when a button from the .kv file is pressed. I am unsure how to go about this.
The code I currently have is:
python file:
from kivy.app import App
from kivy.uix.widget import Widget
class experienceScreen(Widget):
pass
experience=""
class workoutApp(App):
def build(self):
return experienceScreen()
workoutApp().run()
def beginnerpressed(self, instance):
experience==1
if experience == 1:
print("test code works.")
if experience == 2:
print("test code works.")
if experience == 3:
print("test code works.")
kivy file:
#: kivy 2.1.0
<experienceScreen>:
FloatLayout:
pos:0,0
size: root.width, root.height
Label:
text: "What level of gym go-er are you?"
pos_hint: {'x':.4,'y':.85}
size_hint:0.2,0.1
Button:
text: "Beginner"
pos_hint: {'x':.25,'y':.6}
size_hint:0.5,0.1
on_press: experience=1
Button:
text: "Intermediate"
pos_hint: {'x':.25,'y':.4}
size_hint:0.5,0.1
on_press: experience=2
Button:
text: "Advanced"
pos_hint: {'x':.25,'y':.2}
size_hint:0.5,0.1
on_press: experience=3
I had expected that when I pressed any of the buttons, the "test code works" text would display in the console. However, this is not the case. I expect this is because variables are assigned differently within the .kv file.
| [
"The on_press item needs to be connected to a method (function) in your code. One can use root.something to reach the widget or app.something to reach a method in the app object.\nKivy file\n<experienceScreen>:\n FloatLayout:\n pos:0,0\n size: root.width, root.height\n Label:\n text: 'What level of gym go-er are you?'\n pos_hint: {'x':.4,'y':.85}\n size_hint:0.2,0.1\n Button:\n text: 'Beginner'\n pos_hint: {'x':.25,'y':.6}\n size_hint:0.5,0.1\n on_press: root.beginnerpressed(1)\n Button:\n text: 'Intermediate'\n pos_hint: {'x':.25,'y':.4}\n size_hint:0.5,0.1\n on_press: root.beginnerpressed(2)\n Button:\n text: 'Advanced'\n pos_hint: {'x':.25,'y':.2}\n size_hint:0.5,0.1\n on_press: root.beginnerpressed(3)\n\npython\nclass experienceScreen(Widget):\n\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n self.experience = 0\n\n def beginnerpressed(self, experience: int):\n self.experience = experience\n print(f\"test code works. {self.experience}\")\n\n\nclass workoutApp(App):\n def build(self):\n return experienceScreen()\n\nworkoutApp().run()\n\n"
] | [
0
] | [] | [] | [
"kivy",
"kivy_language",
"python"
] | stackoverflow_0074605282_kivy_kivy_language_python.txt |
Q:
Cannot import pycaret in google colab
I can't import pycaret in a Google Colab notebook.
Here are all the steps I had taken:
Change python version to 3.8
Installed pip
I then ran
!pip install pycaret
import pycaret
the install works, but then
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-27-fdea18e6876c> in <module>
1 get_ipython().system('pip install pycaret ')
----> 2 import pycaret
ModuleNotFoundError: No module named 'pycaret'
I must be doing something very wrong!
In troubleshooting I also pip installed numpy and pandas which both imported just fine
A:
!pip install pycaret
should work without any issues. I have used it multiple times on Google Colab. Alternately, you can use
pip install --pre pycaret
A:
For importing use this, this is one is for classification:
from pycaret.classification import *
And for regression:
from pycaret.regression import *
For NLP:
from pycaret.nlp import *
| Cannot import pycaret in google colab | I can't import pycaret in a Google Colab notebook.
Here are all the steps I had taken:
Change python version to 3.8
Installed pip
I then ran
!pip install pycaret
import pycaret
the install works, but then
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-27-fdea18e6876c> in <module>
1 get_ipython().system('pip install pycaret ')
----> 2 import pycaret
ModuleNotFoundError: No module named 'pycaret'
I must be doing something very wrong!
In troubleshooting I also pip installed numpy and pandas which both imported just fine
| [
"!pip install pycaret\n\nshould work without any issues. I have used it multiple times on Google Colab. Alternately, you can use\npip install --pre pycaret\n\n",
"For importing use this, this is one is for classification:\nfrom pycaret.classification import *\n\nAnd for regression:\nfrom pycaret.regression import *\n\nFor NLP:\nfrom pycaret.nlp import *\n\n"
] | [
0,
0
] | [] | [] | [
"google_colaboratory",
"import",
"pip",
"pycaret",
"python"
] | stackoverflow_0074295700_google_colaboratory_import_pip_pycaret_python.txt |
Q:
How to produce correct endless socket connection?
I need to produce endless socket connections, which can be broken only with KeyboardInterrupt or a special word.
When I start both programs in different IDEs, the sender asks me to input the message, but only the first message is sent to the server and all the others are not.
I need to produce an endless cycle where every input is sent to the server and printed there.
The server part:
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('127.0.0.1', 8888))
s.listen(5)
while True:
try:
client, addr = s.accept()
except KeyboardInterrupt:
s.close()
break
else:
res = client.recv(1024)
print(addr, 'says:', res.decode('utf-8'))
And the client:
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('127.0.0.1', 8888))
while True:
com = input('Enter the message: ')
s.send(com.encode())
print('sended')
if com == 'exit':
s.close()
break
I tried to do this on the client:
import socket
while True:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('127.0.0.1', 8888))
com = input('Enter the message: ')
s.send(com.encode())
print('sended')
s.close()
if com == 'exit':
break
But this way needs to create a socket, make connection and close socket every iteration.
Is there the way how to do what I described above with only one socket initialization?
A:
The s.close() must be out of the while loop.
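Beyond that, for the server side a sketch that keeps reading from one accepted connection (assuming an empty recv() result means the client closed its end):
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('127.0.0.1', 8888))
s.listen(5)
client, addr = s.accept()
while True:
    res = client.recv(1024)
    if not res: # empty bytes: the peer closed the connection
        break
    print(addr, 'says:', res.decode('utf-8'))
client.close()
s.close()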
| How to produce correct endless socket connection? | I need to produce endless socket connections, which can be broken only with KeyboardInterrupt or a special word.
When I start both programs in different IDEs, the sender asks me to input the message, but only the first message is sent to the server and all the others are not.
I need to produce an endless cycle where every input is sent to the server and printed there.
The server part:
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('127.0.0.1', 8888))
s.listen(5)
while True:
try:
client, addr = s.accept()
except KeyboardInterrupt:
s.close()
break
else:
res = client.recv(1024)
print(addr, 'says:', res.decode('utf-8'))
And the client:
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('127.0.0.1', 8888))
while True:
com = input('Enter the message: ')
s.send(com.encode())
print('sended')
if com == 'exit':
s.close()
break
I tried to do this on the client:
import socket
while True:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('127.0.0.1', 8888))
com = input('Enter the message: ')
s.send(com.encode())
print('sended')
s.close()
if com == 'exit':
break
But this way I need to create a socket, make a connection, and close the socket on every iteration.
Is there a way to do what I described above with only one socket initialization?
| [
"The s.close() must be out of the while loop.\n"
] | [
0
] | [] | [] | [
"cycle",
"python",
"sockets"
] | stackoverflow_0074606513_cycle_python_sockets.txt |
Q:
Python - Grouping by multiple columns (Categorical dtype vs numeric dtype)- why are the results so different?
I have a question about grouping pandas DataFrames by multiple columns. I am looking at some data for a TV show and trying to ensure that no season has two contestants with the same name.
| Series | Name |
| --- | --- |
| 1 | David |
| 1 | Edward |
| 1 | Jasmine |
| 2 | Lea |
| 2 | Jonathan |
| 2 | Louise |
I want a unique count for groupings of Series + Name, which works well when the Series contains a numeric data type. I can do:
df.groupby(['Series','Name'])['Name'].count()
and get
| Series | Name | Count |
| --- | --- | --- |
| 1 | David | 1 |
| 1 | Edward | 1 |
| 1 | Jasmine | 1 |
| 2 | Lea | 1 |
| 2 | Jonathan | 1 |
| 2 | Louise | 1 |
However, if series is set to a categorical data type then
df.groupby(['Series','Name'])['Name'].count()
returns the following table
| Series | Name | Count |
| --- | --- | --- |
| 1 | David | 1 |
| 2 | David | 0 |
| 1 | Edward | 1 |
| 2 | Edward | 0 |
| 1 | Jasmine | 1 |
| 2 | Jasmine | 0 |
| 1 | Jonathan | 0 |
| 2 | Jonathan | 1 |
| 1 | Lea | 0 |
| 2 | Lea | 1 |
| 1 | Louise | 0 |
| 2 | Louise | 1 |
Pandas groups every possible combination of series and names and then sorts alphanumerically. I don't understand why. Any help would be most appreciated.
A:
Actually, this is the default behaviour of Pandas when grouping categorical value, it adds missing categories (checkout this thread).
To group only the observed categories on the dataframe you can use:
df.groupby(['Series','Name'],observed=True)['Name'].count()
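A minimal runnable demo of the difference, on made-up data:
import pandas as pd

df = pd.DataFrame({"Series": [1, 1, 2], "Name": ["David", "Edward", "Lea"]})
df["Series"] = df["Series"].astype("category")
# With observed=True only the (Series, Name) pairs that actually occur are kept;
# without it, every category/name combination appears, padded with 0 counts.
print(df.groupby(["Series", "Name"], observed=True)["Name"].count())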
| Python - Grouping by multiple columns (Categorical dtype vs numeric dtype)- why are the results so different? | I have a question about grouping pandas DataFrames by multiple columns. I am looking at some data for a TV show and trying to ensure that no season has two contestants with the same name.
| Series | Name |
| --- | --- |
| 1 | David |
| 1 | Edward |
| 1 | Jasmine |
| 2 | Lea |
| 2 | Jonathan |
| 2 | Louise |
I want a unique count for groupings of Series + Name, which works well when the Series contains a numeric data type. I can do:
df.groupby(['Series','Name'])['Name'].count()
and get
| Series | Name | Count |
| --- | --- | --- |
| 1 | David | 1 |
| 1 | Edward | 1 |
| 1 | Jasmine | 1 |
| 2 | Lea | 1 |
| 2 | Jonathan | 1 |
| 2 | Louise | 1 |
However, if series is set to a categorical data type then
df.groupby(['Series','Name'])['Name'].count()
returns the following table
| Series | Name | Count |
| --- | --- | --- |
| 1 | David | 1 |
| 2 | David | 0 |
| 1 | Edward | 1 |
| 2 | Edward | 0 |
| 1 | Jasmine | 1 |
| 2 | Jasmine | 0 |
| 1 | Jonathan | 0 |
| 2 | Jonathan | 1 |
| 1 | Lea | 0 |
| 2 | Lea | 1 |
| 1 | Louise | 0 |
| 2 | Louise | 1 |
Pandas groups every possible combination of series and names and then sorts alphanumerically. I don't understand why. Any help would be most appreciated.
| [
"Actually, this is the default behaviour of Pandas when grouping categorical value, it adds missing categories (checkout this thread).\nTo group only the observed categories on the dataframe you can use:\ndf.groupby(['Series','Name'],observed=True)['Name'].count()\n\n"
] | [
0
] | [] | [] | [
"group_by",
"multiple_columns",
"pandas",
"python"
] | stackoverflow_0074607271_group_by_multiple_columns_pandas_python.txt |
Q:
Type Error, Need a Y/N user confirmation. Python
I am working on a function that will delete records of individuals but before doing so, it will display:
Are you sure you want to delete record with Last Name: Apple, First Name: Amy ? Enter Y or N
I got through most of my function. I am having difficulty with this Yes or No part. The code I have for the delete function so far is as follows
def delete_student():
global student_info
global database
print("--- Delete Student ---")
roll = input("Enter a Last Name: ")
student_found = False
updated_data = []
with open(database, "r", encoding="utf-8") as f:
reader = csv.reader(f)
counter = 0
for row in reader:
if len(row) > 0:
if roll != row[2]:
updated_data.append(row)
counter += 1
else:
student_found = True
if student_found is True:
if input("Are you sure you want to delete record", roll, "(y/n) ") != "y":
exit()
with open(database, "w", encoding="utf-8") as f:
writer = csv.writer(f)
writer.writerows(updated_data)
print("Student ", roll, "deleted successfully")
else:
print("Record not found")
input("Press any key to continue")
This gives me a TypeError. I need to display the name of the person as confirmation (Y/N input) before deleting the record.
Type Error:
Traceback (most recent call last):
File "/Users/jake./PycharmProjects/Munyak_Jacob_FinalProject/FileRecords.py", line 58, in <module>
delete_student()
File "/Users/jake./PycharmProjects/Munyak_Jacob_FinalProject/deleteRecord.py", line 27, in delete_student
if input("Are you sure you want to delete record", roll, "(y/n) ") != "y":
TypeError: input expected at most 1 argument, got 3
Process finished with exit code 1
A:
input takes a single string as a parameter, but you've provided three. Construct a single string:
if input(f"Are you sure you want to delete record {roll} (y/n)? ") != "y":
| Type Error, Need a Y/N user confirmation. Python | I am working on a function that will delete records of individuals but before doing so, it will display:
Are you sure you want to delete record with Last Name: Apple, First Name: Amy ? Enter Y or N
I got through most of my function. I am having difficulty with this Yes or No part. The code I have for the delete function so far is as follows
def delete_student():
global student_info
global database
print("--- Delete Student ---")
roll = input("Enter a Last Name: ")
student_found = False
updated_data = []
with open(database, "r", encoding="utf-8") as f:
reader = csv.reader(f)
counter = 0
for row in reader:
if len(row) > 0:
if roll != row[2]:
updated_data.append(row)
counter += 1
else:
student_found = True
if student_found is True:
if input("Are you sure you want to delete record", roll, "(y/n) ") != "y":
exit()
with open(database, "w", encoding="utf-8") as f:
writer = csv.writer(f)
writer.writerows(updated_data)
print("Student ", roll, "deleted successfully")
else:
print("Record not found")
input("Press any key to continue")
This gives me a TypeError. I need to display the name of the person as confirmation (Y/N input) before deleting the record.
Type Error:
Traceback (most recent call last):
File "/Users/jake./PycharmProjects/Munyak_Jacob_FinalProject/FileRecords.py", line 58, in <module>
delete_student()
File "/Users/jake./PycharmProjects/Munyak_Jacob_FinalProject/deleteRecord.py", line 27, in delete_student
if input("Are you sure you want to delete record", roll, "(y/n) ") != "y":
TypeError: input expected at most 1 argument, got 3
Process finished with exit code 1
| [
"input takes a single string as a parameter, but you've provided three. Construct a single string:\nif input(f\"Are you sure you want to delete record {roll} (y/n)? \") != \"y\":\n\n"
] | [
0
] | [] | [] | [
"python",
"typeerror",
"types"
] | stackoverflow_0074607473_python_typeerror_types.txt |
Q:
Is there a way to send query params for tests?
I am trying to make some tests which ask for ads of a particular type.
for instance:
http://127.0.0.1:8000/ads/?type=normal should return the normal ads
and
http://127.0.0.1:8000/ads/?type=premium should return the premium ads
the tests ask for the ads like this: response = self.client.get(reverse("ads")); self.client is the test client for the site.
reverse() is the function I have been using for the other tests, so I thought it would work here as well.
I was looking for a way to send the parameters, but I could not find anything on the internet, and I have been struggling with this for hours.
┻━┻ ︵ヽ(`Д´)ﾉ︵ ┻━┻
If you need any more info, I can provide it.
I tried using:
reverse("ads", kwargs={"type": "normal"})
reverse("ads", QUERY_PARAMS={"type": "normal"})
reverse("ads", QUERY_KWARGS={"type": "normal"})
reverse("ads", {"type": "normal"})
These are all things I found online.
However, nothing worked.
A:
When a URL is like domain/search/?q=haha, you would use request.GET.get('q', '').
q is the parameter you want, and '' is the default value if q isn't found.
However, if you are instead just configuring your URLconf, then your captures from the regex are passed to the function as arguments (or named arguments).
Such as:
(r'^user/(?P<username>\w{0,50})/$', views.profile_page,),
Then in your views.py you would have
def profile_page(request, username):
# Rest of the method
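For the test-client side of the question, a sketch: Django's test client encodes a dict passed as the second argument of get() into the query string, so reverse() itself stays unchanged.
response = self.client.get(reverse("ads"), {"type": "normal"})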
| Is there a way to send query params for tests? | I am trying to make some tests which ask for ads of a particular type.
for instance:
http://127.0.0.1:8000/ads/?type=normal should return the normal ads
and
http://127.0.0.1:8000/ads/?type=premium should return the premium ads
the tests ask for the ads like this: response = self.client.get(reverse("ads")); self.client is the test client for the site.
reverse() is the function I have been using for the other tests, so I thought it would work here as well.
I was looking for a way to send the parameters, but I could not find anything on the internet, and I have been struggling with this for hours.
┻━┻ ︵ヽ(`Д´)ﾉ︵ ┻━┻
If you need any more info, I can provide it.
I tried using:
reverse("ads", kwargs={"type": "normal"})
reverse("ads", QUERY_PARAMS={"type": "normal"})
reverse("ads", QUERY_KWARGS={"type": "normal"})
reverse("ads", {"type": "normal"})
These are all things I found online.
However, nothing worked.
| [
"When a URL is like domain/search/?q=haha, you would use request.GET.get('q', '').\nq is the parameter you want, and '' is the default value if q isn't found.\nHowever, if you are instead just configuring your URLconf**, then your captures from the regex are passed to the function as arguments (or named arguments).\nSuch as:\n(r'^user/(?P<username>\\w{0,50})/$', views.profile_page,),\n\nThen in your views.py you would have\ndef profile_page(request, username):\n # Rest of the method\n\n"
] | [
0
] | [] | [] | [
"django",
"python",
"testing"
] | stackoverflow_0074606501_django_python_testing.txt |
Q:
Keras Invalid argument: required broadcastable shapes at loc(unknown)
I have been training my model by feeding the fit() method with train and test generators built from data stored in HDF5 files (approx. 25,000 images and labels). I have recently processed negative cases into a new HDF5 file with a similar number of images. However, after updating the generator to read from both files, grab half the batch size from each set, and merge them together, the training crashes with Invalid argument: required broadcastable shapes at loc(unknown) after a single epoch.
I have made sure that my model output, generator output, and data types are all correct (model: UNet, sigmoid, classes=1, output shape = (...,1), output type = bool) as other answers of the same issue suggest, yet I am still getting the same error.
train.py
db = h5py.File(db_output_path, 'r')
a = db['data'][200]
b = db['labels'][200]
db_neg = h5py.File(db_negatives_path, 'r')
train_neg_gen = kfold.split(db_neg['data'])
neg_idx = []
for t in train_neg_gen:
neg_idx.append(t)
batch_size=16
for train, test in kfold.split(db['data'], db['labels']):
train_neg_idx, test_neg_idx = neg_idx[fold_no-1]
gen_train = create_hdf5_generator(db_output_path, train, batch_size, CLASSES, db_negatives_path, train_neg_idx)
gen_val = create_hdf5_generator(db_output_path, test, batch_size, CLASSES, db_negatives_path, test_neg_idx)
model.load_weights('weights/weights_2022-11-20.h5')
# Generate a print
print('------------------------------------------------------------------------')
print(f'Training for fold {fold_no} ...')
steps_per_epoch = (2*len(train))//batch_size
validation_steps= (2*len(test))//batch_size
results = model.fit(gen_train,
epochs=10, validation_data=gen_val,
steps_per_epoch=steps_per_epoch,
validation_steps=validation_steps,
callbacks=callbacks)
# Increase fold number
fold_no = fold_no + 1
Generator
def create_hdf5_generator(db_path, indices, batch_size, classes, neg_db_path=None, neg_indices=None):
db = h5py.File(db_path)
neg_db = h5py.File(neg_db_path)
while True:
if neg_indices is not None:
skip = batch_size//2
restart = 0
for i in np.arange(0, len(indices), skip):
j = i
#j tracks neg_db indices which is smaller in size than positive indices tracked by i
if i >= len(neg_indices):
j = restart
restart += skip
images = db['data'][indices[i:i+skip]]
labels = db['labels'][indices[i:i+skip]]
neg_images = neg_db['data'][neg_indices[j:j+skip]]
neg_labels = np.zeros(labels.shape).astype(np.float32)
images_concat = np.concatenate((images, neg_images), axis=0)
labels_concat = np.concatenate((labels, neg_labels), axis=0)
np.random.seed(123)
np.random.shuffle(images_concat)
np.random.seed(123)
np.random.shuffle(labels_concat)
yield images_concat, labels_concat.astype(bool)
console output
------------------------------------------------------------------------
Training for fold 1 ...
Epoch 1/10
2773/2774 [============================>.] - ETA: 0s - loss: 0.1157 - mean_io_u_2: 0.4766 Traceback (most recent call last):
File "C:\Users\Noam\github\proj\train.py", line 181, in <module>
results = model.fit(gen_train,
File "C:\Users\Noam\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1214, in fit
val_logs = self.evaluate(
File "C:\Users\Noam\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1489, in evaluate
tmp_logs = self.test_function(iterator)
File "C:\Users\Noam\anaconda3\lib\site-packages\tensorflow\python\eager\def_function.py", line 889, in __call__
result = self._call(*args, **kwds)
File "C:\Users\Noam\anaconda3\lib\site-packages\tensorflow\python\eager\def_function.py", line 924, in _call
results = self._stateful_fn(*args, **kwds)
File "C:\Users\Noam\anaconda3\lib\site-packages\tensorflow\python\eager\function.py", line 3023, in __call__
return graph_function._call_flat(
File "C:\Users\Noam\anaconda3\lib\site-packages\tensorflow\python\eager\function.py", line 1960, in _call_flat
return self._build_call_outputs(self._inference_function.call(
File "C:\Users\Noam\anaconda3\lib\site-packages\tensorflow\python\eager\function.py", line 591, in call
outputs = execute.execute(
File "C:\Users\Noam\anaconda3\lib\site-packages\tensorflow\python\eager\execute.py", line 59, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: required broadcastable shapes at loc(unknown)
[[node binary_crossentropy/logistic_loss/mul (defined at C:\Users\Noam\github\proj\train.py:181) ]]
[[confusion_matrix/assert_non_negative_1/assert_less_equal/Assert/AssertGuard/pivot_f/_12/_33]]
(1) Invalid argument: required broadcastable shapes at loc(unknown)
[[node binary_crossentropy/logistic_loss/mul (defined at C:\Users\Noam\github\proj\train.py:181) ]]
0 successful operations.
0 derived errors ignored. [Op:__inference_test_function_79850]
Function call stack:
test_function -> test_function
2022-11-27 19:22:08.581553: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudart64_110.dll
2022-11-27 19:22:18.055899: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library nvcuda.dll
2022-11-27 19:22:18.073779: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: NVIDIA GeForce RTX 3090 computeCapability: 8.6
coreClock: 1.8GHz coreCount: 82 deviceMemorySize: 24.00GiB deviceMemoryBandwidth: 871.81GiB/s
2022-11-27 19:22:18.073819: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudart64_110.dll
2022-11-27 19:22:18.093917: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublas64_11.dll
2022-11-27 19:22:18.093939: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublasLt64_11.dll
2022-11-27 19:22:18.100311: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cufft64_10.dll
2022-11-27 19:22:18.102617: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library curand64_10.dll
2022-11-27 19:22:18.105904: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cusolver64_11.dll
2022-11-27 19:22:18.111640: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cusparse64_11.dll
2022-11-27 19:22:18.112034: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudnn64_8.dll
2022-11-27 19:22:18.112100: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2022-11-27 19:22:18.112463: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-11-27 19:22:18.113094: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: NVIDIA GeForce RTX 3090 computeCapability: 8.6
coreClock: 1.8GHz coreCount: 82 deviceMemorySize: 24.00GiB deviceMemoryBandwidth: 871.81GiB/s
2022-11-27 19:22:18.113127: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2022-11-27 19:22:18.495306: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-11-27 19:22:18.495334: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264] 0
2022-11-27 19:22:18.495341: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0: N
2022-11-27 19:22:18.495486: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 21670 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:01:00.0, compute capability: 8.6)
2022-11-27 19:22:21.753068: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:176] None of the MLIR Optimization Passes are enabled (registered 2)
2022-11-27 19:22:23.357640: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudnn64_8.dll
2022-11-27 19:22:23.868767: I tensorflow/stream_executor/cuda/cuda_dnn.cc:359] Loaded cuDNN version 8201
2022-11-27 19:22:24.730172: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublas64_11.dll
2022-11-27 19:22:25.324257: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublasLt64_11.dll
2022-11-27 19:23:30.675901: W tensorflow/core/framework/op_kernel.cc:1755] Invalid argument: required broadcastable shapes at loc(unknown)
2022-11-27 19:29:53.026090: W tensorflow/core/framework/op_kernel.cc:1755] Invalid argument: required broadcastable shapes at loc(unknown)
2022-11-27 19:46:47.257803: W tensorflow/core/framework/op_kernel.cc:1755] Invalid argument: required broadcastable shapes at loc(unknown)
2022-11-27 19:50:09.871857: W tensorflow/core/framework/op_kernel.cc:1755] Invalid argument: required broadcastable shapes at loc(unknown)
2022-11-27 19:51:28.339643: W tensorflow/core/framework/op_kernel.cc:1755] Invalid argument: required broadcastable shapes at loc(unknown)
2022-11-27 20:22:00.445508: W tensorflow/core/framework/op_kernel.cc:1755] Invalid argument: required broadcastable shapes at loc(unknown)
2022-11-27 20:30:20.786297: W tensorflow/core/framework/op_kernel.cc:1755] Invalid argument: required broadcastable shapes at loc(unknown)
2022-11-27 20:45:59.779202: W tensorflow/core/framework/op_kernel.cc:1755] Invalid argument: required broadcastable shapes at loc(unknown)
2022-11-27 21:06:14.203518: W tensorflow/core/framework/op_kernel.cc:1755] Invalid argument: required broadcastable shapes at loc(unknown)
UNet
sigmoid
binary_crossentropy
Model: "model_3"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_4 (InputLayer) [(None, 128, 128, 3) 0
__________________________________________________________________________________________________
conv2d_57 (Conv2D) (None, 128, 128, 32) 896 input_4[0][0]
__________________________________________________________________________________________________
dropout_27 (Dropout) (None, 128, 128, 32) 0 conv2d_57[0][0]
__________________________________________________________________________________________________
conv2d_58 (Conv2D) (None, 128, 128, 32) 9248 dropout_27[0][0]
__________________________________________________________________________________________________
max_pooling2d_12 (MaxPooling2D) (None, 64, 64, 32) 0 conv2d_58[0][0]
__________________________________________________________________________________________________
conv2d_59 (Conv2D) (None, 64, 64, 64) 18496 max_pooling2d_12[0][0]
__________________________________________________________________________________________________
dropout_28 (Dropout) (None, 64, 64, 64) 0 conv2d_59[0][0]
__________________________________________________________________________________________________
conv2d_60 (Conv2D) (None, 64, 64, 64) 36928 dropout_28[0][0]
__________________________________________________________________________________________________
max_pooling2d_13 (MaxPooling2D) (None, 32, 32, 64) 0 conv2d_60[0][0]
__________________________________________________________________________________________________
conv2d_61 (Conv2D) (None, 32, 32, 128) 73856 max_pooling2d_13[0][0]
__________________________________________________________________________________________________
dropout_29 (Dropout) (None, 32, 32, 128) 0 conv2d_61[0][0]
__________________________________________________________________________________________________
conv2d_62 (Conv2D) (None, 32, 32, 128) 147584 dropout_29[0][0]
__________________________________________________________________________________________________
max_pooling2d_14 (MaxPooling2D) (None, 16, 16, 128) 0 conv2d_62[0][0]
__________________________________________________________________________________________________
conv2d_63 (Conv2D) (None, 16, 16, 256) 295168 max_pooling2d_14[0][0]
__________________________________________________________________________________________________
dropout_30 (Dropout) (None, 16, 16, 256) 0 conv2d_63[0][0]
__________________________________________________________________________________________________
conv2d_64 (Conv2D) (None, 16, 16, 256) 590080 dropout_30[0][0]
__________________________________________________________________________________________________
max_pooling2d_15 (MaxPooling2D) (None, 8, 8, 256) 0 conv2d_64[0][0]
__________________________________________________________________________________________________
conv2d_65 (Conv2D) (None, 8, 8, 512) 1180160 max_pooling2d_15[0][0]
__________________________________________________________________________________________________
dropout_31 (Dropout) (None, 8, 8, 512) 0 conv2d_65[0][0]
__________________________________________________________________________________________________
conv2d_66 (Conv2D) (None, 8, 8, 512) 2359808 dropout_31[0][0]
__________________________________________________________________________________________________
conv2d_transpose_12 (Conv2DTran (None, 16, 16, 256) 524544 conv2d_66[0][0]
__________________________________________________________________________________________________
concatenate_12 (Concatenate) (None, 16, 16, 512) 0 conv2d_transpose_12[0][0]
conv2d_64[0][0]
__________________________________________________________________________________________________
conv2d_67 (Conv2D) (None, 16, 16, 256) 1179904 concatenate_12[0][0]
__________________________________________________________________________________________________
dropout_32 (Dropout) (None, 16, 16, 256) 0 conv2d_67[0][0]
__________________________________________________________________________________________________
conv2d_68 (Conv2D) (None, 16, 16, 256) 590080 dropout_32[0][0]
__________________________________________________________________________________________________
conv2d_transpose_13 (Conv2DTran (None, 32, 32, 128) 131200 conv2d_68[0][0]
__________________________________________________________________________________________________
concatenate_13 (Concatenate) (None, 32, 32, 256) 0 conv2d_transpose_13[0][0]
conv2d_62[0][0]
__________________________________________________________________________________________________
conv2d_69 (Conv2D) (None, 32, 32, 128) 295040 concatenate_13[0][0]
__________________________________________________________________________________________________
dropout_33 (Dropout) (None, 32, 32, 128) 0 conv2d_69[0][0]
__________________________________________________________________________________________________
conv2d_70 (Conv2D) (None, 32, 32, 128) 147584 dropout_33[0][0]
__________________________________________________________________________________________________
conv2d_transpose_14 (Conv2DTran (None, 64, 64, 64) 32832 conv2d_70[0][0]
__________________________________________________________________________________________________
concatenate_14 (Concatenate) (None, 64, 64, 128) 0 conv2d_transpose_14[0][0]
conv2d_60[0][0]
__________________________________________________________________________________________________
conv2d_71 (Conv2D) (None, 64, 64, 64) 73792 concatenate_14[0][0]
__________________________________________________________________________________________________
dropout_34 (Dropout) (None, 64, 64, 64) 0 conv2d_71[0][0]
__________________________________________________________________________________________________
conv2d_72 (Conv2D) (None, 64, 64, 64) 36928 dropout_34[0][0]
__________________________________________________________________________________________________
conv2d_transpose_15 (Conv2DTran (None, 128, 128, 32) 8224 conv2d_72[0][0]
__________________________________________________________________________________________________
concatenate_15 (Concatenate) (None, 128, 128, 64) 0 conv2d_transpose_15[0][0]
conv2d_58[0][0]
__________________________________________________________________________________________________
conv2d_73 (Conv2D) (None, 128, 128, 32) 18464 concatenate_15[0][0]
__________________________________________________________________________________________________
dropout_35 (Dropout) (None, 128, 128, 32) 0 conv2d_73[0][0]
__________________________________________________________________________________________________
conv2d_74 (Conv2D) (None, 128, 128, 32) 9248 dropout_35[0][0]
__________________________________________________________________________________________________
conv2d_75 (Conv2D) (None, 128, 128, 1) 33 conv2d_74[0][0]
==================================================================================================
Total params: 7,760,097
Trainable params: 7,760,097
Non-trainable params: 0
A:
After some debugging, the error lay in one of the output shapes from the generator.
I was always guaranteeing that neg_labels had the same shape as labels, even though neg_images might not match it on the zeroth axis.
The fix was to set the shape of neg_labels to neg_images's shape on the first three axes and labels' last axis:
neg_images = neg_db['data'][neg_indices[j:j+skip]]
neg_labels = np.zeros((neg_images.shape[0], neg_images.shape[1],
                       neg_images.shape[2], labels.shape[3])).astype(np.float32)
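For illustration, here is a minimal standalone sketch of the mismatch and the fix; the shapes are made up, the real ones come from the hdf5 batches:
import numpy as np

# The last positive batch has 8 samples but the negative slice only has 5,
# so zeros built from labels.shape no longer match neg_images on axis 0.
labels = np.zeros((8, 128, 128, 1), dtype=np.float32)
neg_images = np.zeros((5, 128, 128, 3), dtype=np.float32)

bad = np.zeros(labels.shape, dtype=np.float32)                # (8, 128, 128, 1)
good = np.zeros(neg_images.shape[:3] + (labels.shape[3],),
                dtype=np.float32)                             # (5, 128, 128, 1)
print(bad.shape, good.shape)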
| Keras Invalid argument: required broadcastable shapes at loc(unknown) | I have been training my model by feeding the fit() method with train and test generators from the data that is stored away in hdf5 files (approx. 25,000 images and labels). I have recently processed negative cases into a new hdf5 file with a similar amount of images, however, after updating the generator to read from both files, grab half the batch size amount of images from each set, and merge them together, the training crashes with Invalid argument: required broadcastable shapes at loc(unknown) after a single epoch.
I have made sure that my model output, generator output, and data types are all correct (model: UNet, sigmoid, classes=1, output shape = (...,1), output type = bool) as other answers of the same issue suggest, yet I am still getting the same error.
train.py
db = h5py.File(db_output_path, 'r')
a = db['data'][200]
b = db['labels'][200]
db_neg = h5py.File(db_negatives_path, 'r')
train_neg_gen = kfold.split(db_neg['data'])
neg_idx = []
for t in train_neg_gen:
neg_idx.append(t)
batch_size=16
for train, test in kfold.split(db['data'], db['labels']):
train_neg_idx, test_neg_idx = neg_idx[fold_no-1]
gen_train = create_hdf5_generator(db_output_path, train, batch_size, CLASSES, db_negatives_path, train_neg_idx)
gen_val = create_hdf5_generator(db_output_path, test, batch_size, CLASSES, db_negatives_path, test_neg_idx)
model.load_weights('weights/weights_2022-11-20.h5')
# Generate a print
print('------------------------------------------------------------------------')
print(f'Training for fold {fold_no} ...')
steps_per_epoch = (2*len(train))//batch_size
validation_steps= (2*len(test))//batch_size
results = model.fit(gen_train,
epochs=10, validation_data=gen_val,
steps_per_epoch=steps_per_epoch,
validation_steps=validation_steps,
callbacks=callbacks)
# Increase fold number
fold_no = fold_no + 1
Generator
def create_hdf5_generator(db_path, indices, batch_size, classes, neg_db_path=None, neg_indices=None):
db = h5py.File(db_path)
neg_db = h5py.File(neg_db_path)
while True:
if neg_indices is not None:
skip = batch_size//2
restart = 0
for i in np.arange(0, len(indices), skip):
j = i
#j tracks neg_db indices which is smaller in size than positive indices tracked by i
if i >= len(neg_indices):
j = restart
restart += skip
images = db['data'][indices[i:i+skip]]
labels = db['labels'][indices[i:i+skip]]
neg_images = neg_db['data'][neg_indices[j:j+skip]]
neg_labels = np.zeros(labels.shape).astype(np.float32)
images_concat = np.concatenate((images, neg_images), axis=0)
labels_concat = np.concatenate((labels, neg_labels), axis=0)
np.random.seed(123)
np.random.shuffle(images_concat)
np.random.seed(123)
np.random.shuffle(labels_concat)
yield images_concat, labels_concat.astype(bool)
console output
------------------------------------------------------------------------
Training for fold 1 ...
Epoch 1/10
2773/2774 [============================>.] - ETA: 0s - loss: 0.1157 - mean_io_u_2: 0.4766 Traceback (most recent call last):
File "C:\Users\Noam\github\proj\train.py", line 181, in <module>
results = model.fit(gen_train,
File "C:\Users\Noam\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1214, in fit
val_logs = self.evaluate(
File "C:\Users\Noam\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1489, in evaluate
tmp_logs = self.test_function(iterator)
File "C:\Users\Noam\anaconda3\lib\site-packages\tensorflow\python\eager\def_function.py", line 889, in __call__
result = self._call(*args, **kwds)
File "C:\Users\Noam\anaconda3\lib\site-packages\tensorflow\python\eager\def_function.py", line 924, in _call
results = self._stateful_fn(*args, **kwds)
File "C:\Users\Noam\anaconda3\lib\site-packages\tensorflow\python\eager\function.py", line 3023, in __call__
return graph_function._call_flat(
File "C:\Users\Noam\anaconda3\lib\site-packages\tensorflow\python\eager\function.py", line 1960, in _call_flat
return self._build_call_outputs(self._inference_function.call(
File "C:\Users\Noam\anaconda3\lib\site-packages\tensorflow\python\eager\function.py", line 591, in call
outputs = execute.execute(
File "C:\Users\Noam\anaconda3\lib\site-packages\tensorflow\python\eager\execute.py", line 59, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: required broadcastable shapes at loc(unknown)
[[node binary_crossentropy/logistic_loss/mul (defined at C:\Users\Noam\github\proj\train.py:181) ]]
[[confusion_matrix/assert_non_negative_1/assert_less_equal/Assert/AssertGuard/pivot_f/_12/_33]]
(1) Invalid argument: required broadcastable shapes at loc(unknown)
[[node binary_crossentropy/logistic_loss/mul (defined at C:\Users\Noam\github\proj\train.py:181) ]]
0 successful operations.
0 derived errors ignored. [Op:__inference_test_function_79850]
Function call stack:
test_function -> test_function
2022-11-27 19:22:08.581553: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudart64_110.dll
2022-11-27 19:22:18.055899: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library nvcuda.dll
2022-11-27 19:22:18.073779: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: NVIDIA GeForce RTX 3090 computeCapability: 8.6
coreClock: 1.8GHz coreCount: 82 deviceMemorySize: 24.00GiB deviceMemoryBandwidth: 871.81GiB/s
2022-11-27 19:22:18.073819: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudart64_110.dll
2022-11-27 19:22:18.093917: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublas64_11.dll
2022-11-27 19:22:18.093939: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublasLt64_11.dll
2022-11-27 19:22:18.100311: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cufft64_10.dll
2022-11-27 19:22:18.102617: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library curand64_10.dll
2022-11-27 19:22:18.105904: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cusolver64_11.dll
2022-11-27 19:22:18.111640: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cusparse64_11.dll
2022-11-27 19:22:18.112034: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudnn64_8.dll
2022-11-27 19:22:18.112100: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2022-11-27 19:22:18.112463: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-11-27 19:22:18.113094: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: NVIDIA GeForce RTX 3090 computeCapability: 8.6
coreClock: 1.8GHz coreCount: 82 deviceMemorySize: 24.00GiB deviceMemoryBandwidth: 871.81GiB/s
2022-11-27 19:22:18.113127: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2022-11-27 19:22:18.495306: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-11-27 19:22:18.495334: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264] 0
2022-11-27 19:22:18.495341: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0: N
2022-11-27 19:22:18.495486: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 21670 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:01:00.0, compute capability: 8.6)
2022-11-27 19:22:21.753068: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:176] None of the MLIR Optimization Passes are enabled (registered 2)
2022-11-27 19:22:23.357640: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudnn64_8.dll
2022-11-27 19:22:23.868767: I tensorflow/stream_executor/cuda/cuda_dnn.cc:359] Loaded cuDNN version 8201
2022-11-27 19:22:24.730172: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublas64_11.dll
2022-11-27 19:22:25.324257: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublasLt64_11.dll
2022-11-27 19:23:30.675901: W tensorflow/core/framework/op_kernel.cc:1755] Invalid argument: required broadcastable shapes at loc(unknown)
2022-11-27 19:29:53.026090: W tensorflow/core/framework/op_kernel.cc:1755] Invalid argument: required broadcastable shapes at loc(unknown)
2022-11-27 19:46:47.257803: W tensorflow/core/framework/op_kernel.cc:1755] Invalid argument: required broadcastable shapes at loc(unknown)
2022-11-27 19:50:09.871857: W tensorflow/core/framework/op_kernel.cc:1755] Invalid argument: required broadcastable shapes at loc(unknown)
2022-11-27 19:51:28.339643: W tensorflow/core/framework/op_kernel.cc:1755] Invalid argument: required broadcastable shapes at loc(unknown)
2022-11-27 20:22:00.445508: W tensorflow/core/framework/op_kernel.cc:1755] Invalid argument: required broadcastable shapes at loc(unknown)
2022-11-27 20:30:20.786297: W tensorflow/core/framework/op_kernel.cc:1755] Invalid argument: required broadcastable shapes at loc(unknown)
2022-11-27 20:45:59.779202: W tensorflow/core/framework/op_kernel.cc:1755] Invalid argument: required broadcastable shapes at loc(unknown)
2022-11-27 21:06:14.203518: W tensorflow/core/framework/op_kernel.cc:1755] Invalid argument: required broadcastable shapes at loc(unknown)
UNet
sigmoid
binary_crossentropy
Model: "model_3"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_4 (InputLayer) [(None, 128, 128, 3) 0
__________________________________________________________________________________________________
conv2d_57 (Conv2D) (None, 128, 128, 32) 896 input_4[0][0]
__________________________________________________________________________________________________
dropout_27 (Dropout) (None, 128, 128, 32) 0 conv2d_57[0][0]
__________________________________________________________________________________________________
conv2d_58 (Conv2D) (None, 128, 128, 32) 9248 dropout_27[0][0]
__________________________________________________________________________________________________
max_pooling2d_12 (MaxPooling2D) (None, 64, 64, 32) 0 conv2d_58[0][0]
__________________________________________________________________________________________________
conv2d_59 (Conv2D) (None, 64, 64, 64) 18496 max_pooling2d_12[0][0]
__________________________________________________________________________________________________
dropout_28 (Dropout) (None, 64, 64, 64) 0 conv2d_59[0][0]
__________________________________________________________________________________________________
conv2d_60 (Conv2D) (None, 64, 64, 64) 36928 dropout_28[0][0]
__________________________________________________________________________________________________
max_pooling2d_13 (MaxPooling2D) (None, 32, 32, 64) 0 conv2d_60[0][0]
__________________________________________________________________________________________________
conv2d_61 (Conv2D) (None, 32, 32, 128) 73856 max_pooling2d_13[0][0]
__________________________________________________________________________________________________
dropout_29 (Dropout) (None, 32, 32, 128) 0 conv2d_61[0][0]
__________________________________________________________________________________________________
conv2d_62 (Conv2D) (None, 32, 32, 128) 147584 dropout_29[0][0]
__________________________________________________________________________________________________
max_pooling2d_14 (MaxPooling2D) (None, 16, 16, 128) 0 conv2d_62[0][0]
__________________________________________________________________________________________________
conv2d_63 (Conv2D) (None, 16, 16, 256) 295168 max_pooling2d_14[0][0]
__________________________________________________________________________________________________
dropout_30 (Dropout) (None, 16, 16, 256) 0 conv2d_63[0][0]
__________________________________________________________________________________________________
conv2d_64 (Conv2D) (None, 16, 16, 256) 590080 dropout_30[0][0]
__________________________________________________________________________________________________
max_pooling2d_15 (MaxPooling2D) (None, 8, 8, 256) 0 conv2d_64[0][0]
__________________________________________________________________________________________________
conv2d_65 (Conv2D) (None, 8, 8, 512) 1180160 max_pooling2d_15[0][0]
__________________________________________________________________________________________________
dropout_31 (Dropout) (None, 8, 8, 512) 0 conv2d_65[0][0]
__________________________________________________________________________________________________
conv2d_66 (Conv2D) (None, 8, 8, 512) 2359808 dropout_31[0][0]
__________________________________________________________________________________________________
conv2d_transpose_12 (Conv2DTran (None, 16, 16, 256) 524544 conv2d_66[0][0]
__________________________________________________________________________________________________
concatenate_12 (Concatenate) (None, 16, 16, 512) 0 conv2d_transpose_12[0][0]
conv2d_64[0][0]
__________________________________________________________________________________________________
conv2d_67 (Conv2D) (None, 16, 16, 256) 1179904 concatenate_12[0][0]
__________________________________________________________________________________________________
dropout_32 (Dropout) (None, 16, 16, 256) 0 conv2d_67[0][0]
__________________________________________________________________________________________________
conv2d_68 (Conv2D) (None, 16, 16, 256) 590080 dropout_32[0][0]
__________________________________________________________________________________________________
conv2d_transpose_13 (Conv2DTran (None, 32, 32, 128) 131200 conv2d_68[0][0]
__________________________________________________________________________________________________
concatenate_13 (Concatenate) (None, 32, 32, 256) 0 conv2d_transpose_13[0][0]
conv2d_62[0][0]
__________________________________________________________________________________________________
conv2d_69 (Conv2D) (None, 32, 32, 128) 295040 concatenate_13[0][0]
__________________________________________________________________________________________________
dropout_33 (Dropout) (None, 32, 32, 128) 0 conv2d_69[0][0]
__________________________________________________________________________________________________
conv2d_70 (Conv2D) (None, 32, 32, 128) 147584 dropout_33[0][0]
__________________________________________________________________________________________________
conv2d_transpose_14 (Conv2DTran (None, 64, 64, 64) 32832 conv2d_70[0][0]
__________________________________________________________________________________________________
concatenate_14 (Concatenate) (None, 64, 64, 128) 0 conv2d_transpose_14[0][0]
conv2d_60[0][0]
__________________________________________________________________________________________________
conv2d_71 (Conv2D) (None, 64, 64, 64) 73792 concatenate_14[0][0]
__________________________________________________________________________________________________
dropout_34 (Dropout) (None, 64, 64, 64) 0 conv2d_71[0][0]
__________________________________________________________________________________________________
conv2d_72 (Conv2D) (None, 64, 64, 64) 36928 dropout_34[0][0]
__________________________________________________________________________________________________
conv2d_transpose_15 (Conv2DTran (None, 128, 128, 32) 8224 conv2d_72[0][0]
__________________________________________________________________________________________________
concatenate_15 (Concatenate) (None, 128, 128, 64) 0 conv2d_transpose_15[0][0]
conv2d_58[0][0]
__________________________________________________________________________________________________
conv2d_73 (Conv2D) (None, 128, 128, 32) 18464 concatenate_15[0][0]
__________________________________________________________________________________________________
dropout_35 (Dropout) (None, 128, 128, 32) 0 conv2d_73[0][0]
__________________________________________________________________________________________________
conv2d_74 (Conv2D) (None, 128, 128, 32) 9248 dropout_35[0][0]
__________________________________________________________________________________________________
conv2d_75 (Conv2D) (None, 128, 128, 1) 33 conv2d_74[0][0]
==================================================================================================
Total params: 7,760,097
Trainable params: 7,760,097
Non-trainable params: 0
| [
"After some debugging, the error lied in one of the output shapes from the generator.\nI was always guaranteeing that neg_labels to have the same shape as labels even though neg_images might not on the zeroth axis.\nThe fix was to set the shape of neg_labels to neg_images's shape on the first three axes and labels last axis:\nneg_images = neg_db['data'][neg_indices[j:j+skip]]\nneg_labels = np.zeros((neg_images.shape[0],neg_images.shape[1],neg_images.shape[2],labels.shape[3])).astype(np.float32)\n\n"
] | [
0
] | [] | [] | [
"keras",
"python",
"tensorflow"
] | stackoverflow_0074595310_keras_python_tensorflow.txt |
Q:
Python - split string without inner string
I wanted to ask if there is an efficient way to split a string while ignoring inner quoted strings.
Example:
I get a string in this format:
s = 'name,12345,Hello,\"12,34,56\",World'
and the output I want, is:
['name', '12345', 'Hello', "12,34,56", 'World']
by splitting at the "," the numbers become separated:
['name', '12345', 'Hello', '"12', '34', '56"', 'World']
I know I could split at \" and split the first and the third part separately, but this seems kind of inefficient to me
A:
This job belongs to the csv module:
import csv
out = next(csv.reader([s], skipinitialspace=True))
# out is
# ['name', '12345', 'Hello', '12,34,56', 'World']
Notes
The csv library deals with comma-separated values, perfect for this job
It understands quotes, which your input calls for
The csv.reader takes in a sequence of texts (i.e. lines of text)
The skipinitialspace flag is just that: it tells the reader to skip initial spaces before each value
A:
If your string separates each element with a comma and a space you could pass ", " to your split method like this:
>>> s = 'name, 12345, Hello, \"12,34,56\", World'
>>> print(s.split(", "))
['name', '12345', 'Hello', '"12,34,56"', 'World']
A:
You can try built-in csv module to parse the string:
import csv
from io import StringIO
s = 'name, 12345, Hello, "12,34,56", World'
print(next(csv.reader(StringIO(s), skipinitialspace=True)))
Prints:
['name', '12345', 'Hello', '12,34,56', 'World']
| Python - split string without inner string | I wanted to ask if there is an efficient way to split a string while ignoring inner quoted strings.
Example:
I get a string in this format:
s = 'name,12345,Hello,\"12,34,56\",World'
and the output I want, is:
['name', '12345', 'Hello', "12,34,56", 'World']
by splitting at the "," the numbers become separated:
['name', '12345', 'Hello', '"12', '34', '56"', 'World']
I know I could split at \" and split the first and the third part separately, but this seems kind of inefficient to me
| [
"This job belongs to the csv module:\nimport csv\n\nout = next(csv.reader([s], skipinitialspace=True))\n# out is\n# ['name', '12345', 'Hello', '12,34,56', 'World']\n\nNotes\n\nThe csv library deals with comma-separated values, perfect for this job\nIt understands quotes, which your input calls for\nThe csv.reader takes in a sequence of texts (i.e. lines of text)\nThe skipinitialspace flag is just that: it tells the reader to skip initial spaces before each value\n\n",
"If your string separates each element with a comma and a space you could pass \", \" to your split method like this:\n>>> s = 'name, 12345, Hello, \\\"12,34,56\\\", World'\n>>> print(s.split(\", \"))\n['name', '12345', 'Hello', '\"12,34,56\"', 'World']\n\n",
"You can try built-in csv module to parse the string:\nimport csv\nfrom io import StringIO\n\ns = 'name, 12345, Hello, \"12,34,56\", World'\n\nprint(next(csv.reader(StringIO(s), skipinitialspace=True)))\n\nPrints:\n['name', '12345', 'Hello', '12,34,56', 'World']\n\n"
] | [
3,
2,
1
] | [] | [] | [
"python",
"split",
"string"
] | stackoverflow_0074607505_python_split_string.txt |
Q:
Plotly: How to plot a tetrahedron volume
I have a set of xyz points and a set of tetrahedrons. Where each node of the tetrahedron points to an index in the points table.
I need to plot the tetrahedrons with a corresponding color based on the tag attribute.
points
Index | x   | y   | z
0     | x_1 | y_1 | z_1
1     | x_2 | y_2 | z_2
...   | ... | ... | ...
tetrahedrons
Index | a      | b      | c      | d      | tag
0     | a_1.pt | b_1.pt | c_1.pt | d_1.pt | 9
1     | a_2.pt | b_2.pt | c_2.pt | d_2.pt | 0
...   | ...    | ...    | ...    | ...    | ...
I have tried using the Mesh3d api but it does not allow for a 4th vertex.
I can plot something like the code below but it does not have all the faces of the tetrahedron.
go.Figure(data=[
go.Mesh3d(
x=mesh_pts.x, y=mesh_pts.y, z=mesh_pts.z,
i=tagged_th.a, j=tagged_th.b, k=tagged_th.c,
),
]).show()
I think the Volume or Isosurface plots might work but I'm not sure how to convert my data into a format to be consumed by those apis.
A:
I can't hide the fact that, a few minutes ago, I wasn't even aware of the i,j,k parameters. But, still, I know that Mesh3D draws triangles, not tetrahedrons. You need to take advantage of those i,j,k parameters to control which triangles are drawn. But it is still your job to tell which triangles need to be drawn so that the result looks like tetrahedrons.
Yes, there are 4 triangles per tetrahedron. If you wish to draw all four, you need to explicitly pass i,j,k for all 4, not just pass i,j,k plus a nonexistent l and expect plotly to understand that this means 4 triangles.
If a, b, c and d are 4 vertices of a tetrahedron, then the 4 triangles you need to draw are the 4 combinations of 3 vertices from those. That is bcd, acd, abd and abc.
Let's write this in 4 rows
bcd
acd
abd
abc
^^^
|||
||\------k
|\------ j
\------- i
So, if, now, a, b, c and d are lists of n vertices, then i, j, k must be lists 4 times longer
i=b + a + a + a
j=c + c + b + b
k=d + d + d + c
Application: let's define 2 tetrahedrons, one sitting on the spike of the other, using your dataframes format
import plotly.graph_objects as go
import pandas as pd
mesh_pts = pd.DataFrame({'x':[0, 1, 0, 0, 1, 0, 0],
'y':[0, 0, 1, 0, 0, 1, 0],
'z':[0, 0, 0, 1, 1, 1, 2]})
tagged_th = pd.DataFrame({'a':[0,3],
'b':[1,4],
'c':[2,5],
'd':[3,6],
'tag':[0,1]})
# And from there, just create a list of triangles, made of 4 combinations
# of 3 points taken from list of tetrahedron vertices
go.Figure(data=[
go.Mesh3d(
x=mesh_pts.x,
y=mesh_pts.y,
z=mesh_pts.z,
i=pd.concat([tagged_th.a, tagged_th.a, tagged_th.a, tagged_th.b]),
j=pd.concat([tagged_th.b, tagged_th.b, tagged_th.c, tagged_th.c]),
k=pd.concat([tagged_th.c, tagged_th.d, tagged_th.d, tagged_th.d]),
intensitymode='cell',
intensity=pd.concat([tagged_th.tag, tagged_th.tag, tagged_th.tag, tagged_th.tag])
)
]).show()
A:
I don't see what you mean by "does not allow for a 4th vertex". Here is an example with two tetrahedra:
import plotly.graph_objects as go
import plotly.io as pio
import numpy as np
i = np.array([0, 0, 0, 1])
j = np.array([1, 2, 3, 2])
k = np.array([2, 3, 1, 3])
fig = go.Figure(data = [
go.Mesh3d(
x = [0,1,2,0, 4,5,6,4],
y = [0,0,1,2, 0,0,1,2],
z = [0,2,2,3, 4,2,4,1],
i = np.concatenate((i, i+4)),
j = np.concatenate((j, j+4)),
k = np.concatenate((k, k+4)),
facecolor = ["red","red","red","red", "green","green","green","green"]
)
])
pio.write_html(fig, file = "tetrahedra.html", auto_open = True)
| Plotly: How to plot a tetrahedron volume | I have a set of xyz points and a set of tetrahedrons. Where each node of the tetrahedron points to an index in the points table.
I need to plot the tetrahedrons with a corresponding color based on the tag attribute.
points
Index | x   | y   | z
0     | x_1 | y_1 | z_1
1     | x_2 | y_2 | z_2
...   | ... | ... | ...
tetrahedrons
Index | a      | b      | c      | d      | tag
0     | a_1.pt | b_1.pt | c_1.pt | d_1.pt | 9
1     | a_2.pt | b_2.pt | c_2.pt | d_2.pt | 0
...   | ...    | ...    | ...    | ...    | ...
I have tried using the Mesh3d api but it does not allow for a 4th vertex.
I can plot something like the code below but it does not have all the faces of the tetrahedron.
go.Figure(data=[
go.Mesh3d(
x=mesh_pts.x, y=mesh_pts.y, z=mesh_pts.z,
i=tagged_th.a, j=tagged_th.b, k=tagged_th.c,
),
]).show()
I think the Volume or Isosurface plots might work but I'm not sure how to convert my data into a format to be consumed by those apis.
| [
"I can't hide the fact that, a few minutes ago, I wasn't even aware of i,j,k parameters. But, still, I know that Mesh3D draws triangles, not tetrahedron. You need to take advantage of those i,j,k parameters to control which triangles are drawn. But it is still your job to tell which triangles need to be drawn to that it look like tetrahedrons.\nYes, there are 4 triangles per tetrahedron. If you wish to draw them four, you need to explicitly pass i,j,k for all 4. Not just pass i,j,k and an nonexistent l and expect plotly to understand that this means 4 triangles.\nIf a, b, c and d are 4 vertices of a tetrahedron, then the 4 triangles you need to draw are the 4 combinations of 3 of vertices from those. That is bcd, acd, abd and abc.\nLet's write this in 4 rows\nbcd\nacd\nabd\nabc\n^^^\n|||\n||\\------k\n|\\------ j\n\\------- i\n\nSo, if, now, a, b, c and d are list of n vertices, then i, j, k must be lists 4 times longer\ni=b + a + a + a\nj=c + c + b + b\nk=d + d + d + c\n\nApplication: let's define 2 tetrahedrons, one sitting on the spike of the other, using your dataframes format\nimport plotly.graph_objects as go\nimport pandas as pd\n\nmesh_pts = pd.DataFrame({'x':[0, 1, 0, 0, 1, 0, 0],\n 'y':[0, 0, 1, 0, 0, 1, 0],\n 'z':[0, 0, 0, 1, 1, 1, 2]})\n\ntagged_th = pd.DataFrame({'a':[0,3],\n 'b':[1,4],\n 'c':[2,5],\n 'd':[3,6],\n 'tag':[0,1]})\n\n# And from there, just create a list of triangles, made of 4 combinations \n# of 3 points taken from list of tetrahedron vertices\ngo.Figure(data=[\n go.Mesh3d(\n x=mesh_pts.x,\n y=mesh_pts.y,\n z=mesh_pts.z,\n i=pd.concat([tagged_th.a, tagged_th.a, tagged_th.a, tagged_th.b]),\n j=pd.concat([tagged_th.b, tagged_th.b, tagged_th.c, tagged_th.c]),\n k=pd.concat([tagged_th.c, tagged_th.d, tagged_th.d, tagged_th.d]),\n intensitymode='cell',\n intensity=pd.concat([tagged_th.tag, tagged_th.tag, tagged_th.tag, tagged_th.tag])\n )\n]).show()\n\n\n",
"I don't see what you mean by \"does not allow for a 4th vertex\". Here is an example with two tetrahedra:\nimport plotly.graph_objects as go\nimport plotly.io as pio\nimport numpy as np\n\ni = np.array([0, 0, 0, 1])\nj = np.array([1, 2, 3, 2])\nk = np.array([2, 3, 1, 3])\n\nfig = go.Figure(data = [\n go.Mesh3d(\n x = [0,1,2,0, 4,5,6,4],\n y = [0,0,1,2, 0,0,1,2],\n z = [0,2,2,3, 4,2,4,1],\n i = np.concatenate((i, i+4)),\n j = np.concatenate((j, j+4)),\n k = np.concatenate((k, k+4)),\n facecolor = [\"red\",\"red\",\"red\",\"red\", \"green\",\"green\",\"green\",\"green\"]\n )\n])\n\npio.write_html(fig, file = \"tetrahedra.html\", auto_open = True)\n\n\n"
] | [
1,
0
] | [] | [] | [
"plotly",
"python"
] | stackoverflow_0074607379_plotly_python.txt |
Q:
Improve the implementation of worldquant 101 alpha factors using numpy
I was trying to implement 101 quant trading factors that were published by WorldQuant (https://arxiv.org/pdf/1601.00991.pdf).
A typical factor is about processing stocks' price and volume information along both the time dimension and the stock dimension. Take the example of alpha factor #4: (-1 * Ts_Rank(rank(low), 9)). This is a momentum alpha signal. low is a panel of stocks' low price within a certain time period. rank is a cross-sectional process that ranks each row of the panel (a time snapshot). Ts_Rank is a time-series process that applies moving_rank to each column of the panel (a stock) with a specified window.
Intuitively, the Pandas dataframe or NumPy matrix should fit the implementation of 101 alpha factors. Below is the best implementation using NumPy I have so far. However, the performance was too low. On my Intel Core i7 Windows machine, it took around 45 seconds to run the alpha #4 factor with a 5000 (trade dates) by 200 (stocks) matrix as input.
I also came across DolphinDB, a time series database with built-in analytics features (https://www.dolphindb.com/downloads.html). For the same factor Alpha#4, DolphinDB ran for a mere 0.04 seconds, 1000 times faster than the NumPy version. However, DolphinDB is commercial software. Does anybody know of better Python implementations? Or any tips to improve my current Python code to achieve performance comparable to DolphinDB?
Numpy implementation (based on https://github.com/yli188/WorldQuant_alpha101_code)
import numpy as np
def rankdata(a, method='average', *, axis=None):
# this rankdata refer to scipy.stats.rankdata (https://github.com/scipy/scipy/blob/v1.9.1/scipy/stats/_stats_py.py#L9047-L9153)
if method not in ('average', 'min', 'max', 'dense', 'ordinal'):
raise ValueError('unknown method "{0}"'.format(method))
if axis is not None:
a = np.asarray(a)
if a.size == 0:
np.core.multiarray.normalize_axis_index(axis, a.ndim)
dt = np.float64 if method == 'average' else np.int_
return np.empty(a.shape, dtype=dt)
return np.apply_along_axis(rankdata, axis, a, method)
arr = np.ravel(np.asarray(a))
algo = 'mergesort' if method == 'ordinal' else 'quicksort'
sorter = np.argsort(arr, kind=algo)
inv = np.empty(sorter.size, dtype=np.intp)
inv[sorter] = np.arange(sorter.size, dtype=np.intp)
if method == 'ordinal':
return inv + 1
arr = arr[sorter]
obs = np.r_[True, arr[1:] != arr[:-1]]
dense = obs.cumsum()[inv]
if method == 'dense':
return dense
# cumulative counts of each unique value
count = np.r_[np.nonzero(obs)[0], len(obs)]
if method == 'max':
return count[dense]
if method == 'min':
return count[dense - 1] + 1
# average method
return .5 * (count[dense] + count[dense - 1] + 1)
def rank(x):
return rankdata(x,method='min',axis=1)/np.size(x, 1)
def rolling_rank(na):
return rankdata(na.transpose(),method='min',axis=0)[-1].transpose()
def ts_rank(x, window=10):
a_rolled = np.lib.stride_tricks.sliding_window_view(x, window,axis = 0)
return np.append(np.full([window-1,np.size(x, 1)],np.nan),rolling_rank(a_rolled),axis = 0)
def alpha004(data):
return -1 * ts_rank(rank(data), 9)
import time
# The input is a 5000 by 200 matrix, where the row index represents trade date and the column index represents security ID.
data=np.random.random((5000, 200))
start_time = time.time()
alpha004(data)
print("--- %s seconds ---" % (time.time() - start_time))
--- 44.85099506378174 seconds ---
DolphinDB implementation
def WQAlpha4(low){
return -mrank(rowRank(low, percent=true), true, 9)
}
// The input is a 5000 by 200 matrix, where the row index represents trade date and the column index represents security ID.
low = rand(1000.0,5000:200);
timer WQAlpha4(low);
Time elapsed: 44.036 ms (0.044s)
A:
This part of the code:
return np.apply_along_axis(rankdata, axis, a, method)
...is going to be quite slow. Function application like this means more of the computation runs in Python, and relatively little of it runs in C.
There's a much faster solution available here, if you're okay with a slight change in how your rank function is defined. Specifically, the below code is equivalent to changing from method='min' to method='ordinal'. On a test dataset of random numbers, it agrees with your method 95% of the time, and only disagrees by 1 where it is different.
By using argsort along the axis, numpy can do the entire calculation without dropping into Python.
def rank(x):
    return (x.argsort(axis=1).argsort(axis=1) + 1) / np.size(x, 1)
def ts_rank(x, window=10):
a_rolled = np.lib.stride_tricks.sliding_window_view(x, window, axis = 0)
rolling_rank_fast = (a_rolled.argsort(axis=2).argsort(axis=2) + 1)[:, :, -1]
# Fill initial window - 1 rows with nan
initial_window = np.full([window-1,np.size(x, 1)],np.nan)
return np.append(initial_window,rolling_rank_fast,axis = 0)
def alpha004(data):
return -1 * ts_rank(rank(data), 9)
Benchmarking this, I find it runs roughly 100x faster.
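For intuition on why the double argsort produces ranks, here is a minimal sketch with made-up numbers:
import numpy as np

a = np.array([[30.0, 10.0, 20.0]])
# The first argsort gives the permutation that would sort each row;
# argsort-ing that permutation inverts it, yielding 0-based ordinal ranks.
ranks = a.argsort(axis=1).argsort(axis=1) + 1
print(ranks)  # [[3 1 2]]: 10.0 is the smallest, 30.0 the largest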
| Improve the implementation of worldquant 101 alpha factors using numpy | I was trying to implement 101 quant trading factors that were published by WorldQuant (https://arxiv.org/pdf/1601.00991.pdf).
A typical factor is about processing stocks' price and volume information along both the time dimension and the stock dimension. Take the example of alpha factor #4: (-1 * Ts_Rank(rank(low), 9)). This is a momentum alpha signal. low is a panel of stocks' low price within a certain time period. rank is a cross-sectional process that ranks each row of the panel (a time snapshot). Ts_Rank is a time-series process that applies moving_rank to each column of the panel (a stock) with a specified window.
Intuitively, the Pandas dataframe or NumPy matrix should fit the implementation of 101 alpha factors. Below is the best implementation using NumPy I have so far. However, the performance was too low. On my Intel Core i7 Windows machine, it took around 45 seconds to run the alpha #4 factor with a 5000 (trade dates) by 200 (stocks) matrix as input.
I also came across DolphinDB, a time series database with built-in analytics features (https://www.dolphindb.com/downloads.html). For the same factor Alpha#4, DolphinDB ran for a mere 0.04 seconds, 1000 times faster than the NumPy version. However, DolphinDB is commercial software. Does anybody know of better Python implementations? Or any tips to improve my current Python code to achieve performance comparable to DolphinDB?
Numpy implementation (based on https://github.com/yli188/WorldQuant_alpha101_code)
import numpy as np
def rankdata(a, method='average', *, axis=None):
# this rankdata refer to scipy.stats.rankdata (https://github.com/scipy/scipy/blob/v1.9.1/scipy/stats/_stats_py.py#L9047-L9153)
if method not in ('average', 'min', 'max', 'dense', 'ordinal'):
raise ValueError('unknown method "{0}"'.format(method))
if axis is not None:
a = np.asarray(a)
if a.size == 0:
np.core.multiarray.normalize_axis_index(axis, a.ndim)
dt = np.float64 if method == 'average' else np.int_
return np.empty(a.shape, dtype=dt)
return np.apply_along_axis(rankdata, axis, a, method)
arr = np.ravel(np.asarray(a))
algo = 'mergesort' if method == 'ordinal' else 'quicksort'
sorter = np.argsort(arr, kind=algo)
inv = np.empty(sorter.size, dtype=np.intp)
inv[sorter] = np.arange(sorter.size, dtype=np.intp)
if method == 'ordinal':
return inv + 1
arr = arr[sorter]
obs = np.r_[True, arr[1:] != arr[:-1]]
dense = obs.cumsum()[inv]
if method == 'dense':
return dense
# cumulative counts of each unique value
count = np.r_[np.nonzero(obs)[0], len(obs)]
if method == 'max':
return count[dense]
if method == 'min':
return count[dense - 1] + 1
# average method
return .5 * (count[dense] + count[dense - 1] + 1)
def rank(x):
return rankdata(x,method='min',axis=1)/np.size(x, 1)
def rolling_rank(na):
return rankdata(na.transpose(),method='min',axis=0)[-1].transpose()
def ts_rank(x, window=10):
a_rolled = np.lib.stride_tricks.sliding_window_view(x, window,axis = 0)
return np.append(np.full([window-1,np.size(x, 1)],np.nan),rolling_rank(a_rolled),axis = 0)
def alpha004(data):
return -1 * ts_rank(rank(data), 9)
import time
# The input is a 5000 by 200 matrix, where the row index represents trade date and the column index represents security ID.
data=np.random.random((5000, 200))
start_time = time.time()
alpha004(data)
print("--- %s seconds ---" % (time.time() - start_time))
--- 44.85099506378174 seconds ---
DolphinDB implementation
def WQAlpha4(low){
return -mrank(rowRank(low, percent=true), true, 9)
}
// The input is a 5000 by 200 matrix, where the row index represents trade date and the column index represents security ID.
low = rand(1000.0,5000:200);
timer WQAlpha4(low);
Time elapsed: 44.036 ms (0.044s)
| [
"This part of the code:\nreturn np.apply_along_axis(rankdata, axis, a, method)\n\n...is going to be quite slow. Function application like this means more of the computation runs in Python, and relatively little of it runs in C.\nThere's a much faster solution available here, if you're okay with a slight change in how your rank function is defined. Specifically, the below code is equivalent to changing from method='min' to method='ordinal'. On a test dataset of random numbers, it agrees with your method 95% of the time, and only disagrees by 1 where it is different.\nBy using argsort along the axis, numpy can do the entire calculation without dropping into Python.\ndef rank(x):\n return (data.argsort(axis=1).argsort(axis=1) + 1) / np.size(x, 1)\n\n\ndef ts_rank(x, window=10):\n a_rolled = np.lib.stride_tricks.sliding_window_view(x, window, axis = 0)\n rolling_rank_fast = (a_rolled.argsort(axis=2).argsort(axis=2) + 1)[:, :, -1]\n # Fill initial window - 1 rows with nan\n initial_window = np.full([window-1,np.size(x, 1)],np.nan)\n return np.append(initial_window,rolling_rank_fast,axis = 0)\n\n\ndef alpha004(data):\n return -1 * ts_rank(rank(data), 9)\n\nBenchmarking this, I find it runs roughly 100x faster.\n"
] | [
1
] | [] | [] | [
"dolphindb",
"numpy",
"pandas",
"python",
"quantitative_finance"
] | stackoverflow_0073694527_dolphindb_numpy_pandas_python_quantitative_finance.txt |
Q:
How to click a word on screen with pyautogui
The error I am getting is:
OSError: Failed to read Ok because file is missing, has improper permissions or is an unsupported or invalid format.
Can someone help me?
I'm using the command:
pyautogui.click('Ok')
I was expecting this to click Ok on the screen when it pops up.
My code is:
from pyvirtualdisplay import Display
from selenium import webdriver
from pynput.keyboard import Key, Controller
from pynput.mouse import Button, Controller as MController
import pyautogui
import time
def Mid():
keyboard = Controller
mouse = MController()
pyautogui.press('win')
pyautogui.write('Change user account control settings')
time.sleep(1)
pyautogui.press('enter')
time.sleep(2)
pyautogui.leftClick('OK')
A:
pyautogui.leftClick('OK') will do nothing here, since pyautogui.click() takes x and y coordinates, for example:
pyautogui.click(100,100) #will click at the coordinates x 100 and y 100
We can use locateOnScreen to get x,y coordinates of a image and then click it with pyautogui.click
In your case:
time.sleep(2)
# Find yourimage.png in your screen
button = pyautogui.locateOnScreen("yourimage.png")
# Click at x,y of where the button is found on the screen
pyautogui.click(button)
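One caveat worth guarding against: locateOnScreen returns None when the image is not found (newer PyAutoGUI versions may raise ImageNotFoundException instead; treat that as an assumption to verify for your version), so a small check avoids an unintended click:
import pyautogui

button = pyautogui.locateOnScreen("yourimage.png")
if button is not None:
    # center() turns the located Box into clickable x, y coordinates
    pyautogui.click(pyautogui.center(button))
else:
    print("Button image not found on screen")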
| How to click a word on screen with pyautogui | The error I am getting is:
OSError: Failed to read Ok because file is missing, has improper permissions or is an unsupported or invalid format.
Can someone help me?
I'm using the command:
pyautogui.click('Ok')
I was expecting this to click Ok on the screen when it pops up.
My code is:
from pyvirtualdisplay import Display
from selenium import webdriver
from pynput.keyboard import Key, Controller
from pynput.mouse import Button, Controller as MController
import pyautogui
import time
def Mid():
keyboard = Controller
mouse = MController()
pyautogui.press('win')
pyautogui.write('Change user account control settings')
time.sleep(1)
pyautogui.press('enter')
time.sleep(2)
pyautogui.leftClick('OK')
| [
"pyautogui.leftClick('OK') will do nothing since pyautogui.click() will take x and y coordinates for example:\npyautogui.click(100,100) #will click at the coordinates x 100 and y 100 \n\nWe can use locateOnScreen to get x,y coordinates of a image and then click it with pyautogui.click\nIn your case:\ntime.sleep(2)\n# Find yourimage.png in your screen\nbutton = pyautogui.locateOnScreen(\"yourimage.png\")\n# Click at x,y of where the button is found on the screen\npyautogui.click(button)\n\n"
] | [
0
] | [] | [] | [
"pyautogui",
"python"
] | stackoverflow_0074607144_pyautogui_python.txt |
Q:
Python selenium gets stuck on site loading screen
First of all, this is the website I use.
The code block I used:
from selenium import webdriver
browserProfile = webdriver.ChromeOptions()
browserProfile.add_argument("start-maximized")
browserProfile.add_argument('--disable-blink-features=AutomationControlled')
browserProfile.add_argument('--user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36')
browserProfile.add_experimental_option("excludeSwitches", ["enable-automation"])
browserProfile.add_experimental_option('useAutomationExtension', False)
browser = webdriver.Chrome("chromedriver.exe", chrome_options=browserProfile)
browser.get('https://www.gamermarkt.com')
ChromeDriver image (screenshot omitted): it stays on this loading screen.
I think there is a bot block on the site, but I have no idea how to bypass it.
A:
I would suggest you explore the following code:
import time
time.sleep(1) #sleep for 1 sec
time.sleep(0.25) #sleep for 250 milliseconds
A:
Make sure your chromedriver is up-to-date.
In your browser, go to Help -> About Google Chrome
and check your version and then get the latest chromedriver
from chromedriver download
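For a quick runtime comparison of the two versions, a sketch like this can help (the capability key names follow what ChromeDriver reports; verify them for your setup):
from selenium import webdriver

driver = webdriver.Chrome()
caps = driver.capabilities
# A major-version mismatch between these two is a common cause of hangs.
print("browser:", caps.get("browserVersion"))
print("driver :", caps.get("chrome", {}).get("chromedriverVersion"))
driver.quit()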
| Python selenium gets stuck on site loading screen | First of all, this is the website I use.
The code block I used:
from selenium import webdriver
browserProfile = webdriver.ChromeOptions()
browserProfile.add_argument("start-maximized")
browserProfile.add_argument('--disable-blink-features=AutomationControlled')
browserProfile.add_argument('--user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36')
browserProfile.add_experimental_option("excludeSwitches", ["enable-automation"])
browserProfile.add_experimental_option('useAutomationExtension', False)
browser = webdriver.Chrome("chromedriver.exe", chrome_options=browserProfile)
browser.get('https://www.gamermarkt.com')
ChromeDriver image (screenshot omitted): it stays on this loading screen.
I think there is a bot block on the site, but I have no idea how to bypass it.
| [
"I would suggest you explore the following code:\nimport time\ntime.sleep(1) #sleep for 1 sec\ntime.sleep(0.25) #sleep for 250 milliseconds\n\n",
"Make sure your chromedriver is up-to-date.\nIn your browser goto help -> about Google Chrome\nand check your version and then get the latest chromedriver\nfrom chromedriver download\n"
] | [
0,
0
] | [] | [] | [
"python",
"selenium",
"selenium_chromedriver",
"selenium_webdriver"
] | stackoverflow_0072622791_python_selenium_selenium_chromedriver_selenium_webdriver.txt |
Q:
Azure functions app keeps returning 401 unauthorized
I have created an Azure Function base on a custom image (docker) using VS Code.
I used the deployment feature of VS code to deploy it to azure and everything was fine.
My function.json file specifies anonymous auth level:
{
"scriptFile": "__init__.py",
"bindings": [
{
"authLevel": "anonymous",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
}
]
}
Why am I still getting the 401 unauthorized error?
Thanks
Amit
A:
Below methods can fix 4XX errors in our function app:
Make sure you add all the values from the local.settings.json file to Application settings (Function App -> Configuration -> Application Settings)
Check for CORS in your function app. Try adding '*' and saving it, then reload the function app and try to run it.
(Any request made against a storage resource when CORS is enabled must either have a valid authorization header or must be made against a public resource.)
A:
I changed my authLevel from function to anonymous and it finally worked!
A:
When you make your request to your function, you may need to pass an authorization header with key 'x-functions-key' and value equal to either your default key for all functions (Function App > App Keys > default in the Azure portal) or a key specific to that function (Function App > Functions > [specific function] > Function Keys > default in Azure Portal).
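As an illustration, a request with the key in the header might look like this sketch; the URL and key are placeholders, not real values:
import requests

# Hypothetical app/function names -- substitute your own.
url = "https://<your-app>.azurewebsites.net/api/<your-function>"
resp = requests.get(url, headers={"x-functions-key": "<your-function-key>"})
print(resp.status_code)  # 200 once the key (or anonymous auth level) is accepted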
| Azure functions app keeps returning 401 unauthorized | I have created an Azure Function based on a custom image (Docker) using VS Code.
I used the deployment feature of VS code to deploy it to azure and everything was fine.
My function.json file specifies anonymous auth level:
{
"scriptFile": "__init__.py",
"bindings": [
{
"authLevel": "anonymous",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
}
]
}
Why am I still getting the 401 unauthorized error?
Thanks
Amit
| [
"Below methods can fix 4XX errors in our function app:\n\nMake sure you add all the values from Local.Settings.json file to Application settings (FunctionApp -> Configuration -> Application Settings)\n\nCheck for CORS in your function app. Try adding β*β and saving it.. reload the function app and try to run it.\n(Any request made against a storage resource when CORS is enabled must either have a valid authorization header or must be made against a public resource.)\n\n\n",
"I changed my authLevel from function to anonymous and it finally worked!\n",
"When you make your request to your function, you may need to pass an authorization header with key 'x-functions-key' and value equal to either your default key for all functions (Function App > App Keys > default in the Azure portal) or a key specific to that function (Function App > Functions > [specific function] > Function Keys > default in Azure Portal).\n"
] | [
0,
0,
0
] | [] | [] | [
"azure_functions",
"docker",
"python"
] | stackoverflow_0069188433_azure_functions_docker_python.txt |
Q:
Creating a day of year column overriding the leap day in a leap year
I have a large database of climate variables - daily values of temp, humidity, etc. I have a timestamp column in %Y%m%d format. I have removed leap days, as I need a uniform 365 days for each of my years. I want to add a new column called 'day_of_year' with 1 to 365 for each year, for as many years as I have in my database. How can I accomplish this in python, any pointers, please?
If I use the day of year function from pandas, I get 59 for Feb 28 and 61 for Mar 1. Is there a way to override the leap year, as I have dropped the leap day and want to get 60 for Mar 1?
A:
Use pandas' day of year function, but instead of giving it the real timestamp, e.g. "2022-11-27", give it "2021-" + timestamp[-5:]. This will give you the altered number as if the timestamp was not a leap year.
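A minimal pandas sketch of that idea; the column name is a placeholder and 2021 stands in for any non-leap year:
import pandas as pd

df = pd.DataFrame({"timestamp": pd.to_datetime(["2020-02-28", "2020-03-01"])})
# Re-anchor every date into a fixed non-leap year before taking
# dayofyear, so Mar 1 is always day 60 even in leap years.
reanchored = pd.to_datetime("2021-" + df["timestamp"].dt.strftime("%m-%d"))
df["day_of_year"] = reanchored.dt.dayofyear
print(df)  # Feb 28 -> 59, Mar 1 -> 60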
| Creating a day of year column overriding the leap day in a leap year | I have a large database of climate variables - daily values of temp, humidity, etc. I have a timestamp column in %Y%m%d format. I have removed leap days, as I need a uniform 365 days for each of my years. I want to add a new column called 'day_of_year' with 1 to 365 for each year, for as many years as I have in my database. How can I accomplish this in python, any pointers, please?
If I use the day of year function from pandas, I get 59 for Feb 28 and 61 for Mar 1. Is there a way to override the leap year, as I have dropped the leap day and want to get 60 for Mar 1?
| [
"Use pandas' day of year function, but instead of giving it the real timestamp, e.g. \"2022-11-27\", give it \"2021-\" + timestamp[-5:]. This will give you the altered number as if the timestamp was not a leap year.\n"
] | [
0
] | [] | [] | [
"arrays",
"for_loop",
"numpy",
"pandas",
"python"
] | stackoverflow_0074607654_arrays_for_loop_numpy_pandas_python.txt |
Q:
Selenium-webdriver script breaks loop
I had been tasked with fixing a digital sign loop running off Python in the office. The original script was lost due to an OS crash and I had to recreate it. I am at the limits of my Python knowledge in fixing what I was able to create using Selenium.
I wrote the below script and it functions for random periods of time before the loop breaks and the script must be executed again.
import time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys
website = ["https://www.fireeye.com/cyber-map/threat-map.html",
"https://horizon.netscout.com/?sidebar=close",
"https://www.accuweather.com/en/us/minneapolis/55415/hourly-
weather-forecast/348794?=page",
"https://www.accuweather.com/en/us/minneapolis/55415/daily-
weather-forecast/348794?=page"
]
driver = webdriver.Chrome(r'/usr/bin/chromedriver')
driver.get(website[0])
driver.maximize_window()
driver.execute_script("window.open('about:blank', 'secondtab');")
driver.switch_to.window("secondtab")
driver.get(website[1])
driver.execute_script("window.open('about:blank', 'thirdtab');")
driver.switch_to.window("thirdtab")
driver.get(website[2])
driver.execute_script("window.scrollBy(0,250);")
driver.execute_script("window.open('about:blank', 'fourthtab');")
driver.switch_to.window("fourthtab")
driver.get(website[3])
driver.execute_script("window.scrollBy(0,100);")
Can anyone tell me why the loop breaks?
The loop is a while True condition:
while True:
if "FireEye" in driver.title:
time.sleep(20)
driver.switch_to.window(driver.window_handles[1])
elif "Attack" in driver.title:
time.sleep(20)
driver.switch_to.window(driver.window_handles[2])
elif "Hourly" in driver.title:
time.sleep(10)
driver.switch_to.window(driver.window_handles[3])
elif "Daily" in driver.title:
time.sleep(10)
driver.switch_to.window(driver.window_handles[0])
The conditions check the web tab titles of each site, and each should always be true.
At random intervals it returns the following traceback error:
driver.switch_to.window(driver.window_handles[3])
IndexError: list index out of range
I cannot determine what is causing the index to go out of range.
A:
At the bottom of this, python thinks there isn't a fourth window open. So first, let's put in some error handling, something like this:
while True:
try:
if "FireEye" in driver.title:
time.sleep(20)
driver.switch_to.window(driver.window_handles[1])
elif "Attack" in driver.title:
time.sleep(20)
driver.switch_to.window(driver.window_handles[2])
elif "Hourly" in driver.title:
time.sleep(10)
driver.switch_to.window(driver.window_handles[3])
elif "Daily" in driver.title:
time.sleep(10)
driver.switch_to.window(driver.window_handles[0])
except:
        for w in driver.window_handles:
print("{} is open!".format(w))
This will at least continue the loop regardless of the error, and tell you what tabs it thinks are open.
Edit: Also, as a note, window_handles is not ordered. So what may be window_handles[1] in one iteration may end up as window_handles[3] in the next.
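Building on that last note, a sketch that avoids positional indexing entirely by scanning the open tabs for a matching title (the title fragments and sleep times mirror the original loop; this is an illustration, not the poster's script):
def switch_to_tab_with_title(driver, fragment):
    # Scan every open tab and stop on the first title containing `fragment`.
    for handle in driver.window_handles:
        driver.switch_to.window(handle)
        if fragment in driver.title:
            return True
    return False

while True:
    for fragment, pause in [("FireEye", 20), ("Attack", 20), ("Hourly", 10), ("Daily", 10)]:
        if switch_to_tab_with_title(driver, fragment):
            time.sleep(pause)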
| Selenium-webdriver script breaks loop | I had been tasked with fixing a digital sign loop running off Python in the office. The original script was lost due to an OS crash and I had to recreate it. I am at my Python limits on fixing what I had been able to create using Selenium.
I wrote the below script and it functions for random periods of time before the loop breaks and the script must be executed again.
import time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys
website = ["https://www.fireeye.com/cyber-map/threat-map.html",
"https://horizon.netscout.com/?sidebar=close",
"https://www.accuweather.com/en/us/minneapolis/55415/hourly-
weather-forecast/348794?=page",
"https://www.accuweather.com/en/us/minneapolis/55415/daily-
weather-forecast/348794?=page"
]
driver = webdriver.Chrome(r'/usr/bin/chromedriver')
driver.get(website[0])
driver.maximize_window()
driver.execute_script("window.open('about:blank', 'secondtab');")
driver.switch_to.window("secondtab")
driver.get(website[1])
driver.execute_script("window.open('about:blank', 'thirdtab');")
driver.switch_to.window("thirdtab")
driver.get(website[2])
driver.execute_script("window.scrollBy(0,250);")
driver.execute_script("window.open('about:blank', 'fourthtab');")
driver.switch_to.window("fourthtab")
driver.get(website[3])
driver.execute_script("window.scrollBy(0,100);")
Can anyone tell me why the loop breaks?
The loop is a while True condition:
while True:
if "FireEye" in driver.title:
time.sleep(20)
driver.switch_to.window(driver.window_handles[1])
elif "Attack" in driver.title:
time.sleep(20)
driver.switch_to.window(driver.window_handles[2])
elif "Hourly" in driver.title:
time.sleep(10)
driver.switch_to.window(driver.window_handles[3])
elif "Daily" in driver.title:
time.sleep(10)
driver.switch_to.window(driver.window_handles[0])
The conditions check the web tab titles of each site, and each should always be true.
At random intervals it returns the following traceback error:
driver.switch_to.window(driver.window_handles[3])
IndexError: list index out of range
I cannot determine what is causing the index to go out of range.
| [
"At the bottom of this, python thinks there isn't a fourth window open. So first, let's put in some error handling, something like this:\nwhile True:\n try:\n if \"FireEye\" in driver.title:\n time.sleep(20)\n driver.switch_to.window(driver.window_handles[1])\n \n elif \"Attack\" in driver.title:\n time.sleep(20)\n driver.switch_to.window(driver.window_handles[2])\n \n elif \"Hourly\" in driver.title:\n time.sleep(10)\n driver.switch_to.window(driver.window_handles[3])\n \n elif \"Daily\" in driver.title:\n time.sleep(10)\n driver.switch_to.window(driver.window_handles[0])\n except:\n foreach w in driver.window_handles:\n print(\"{} is open!\".format(w))\n\nThis will at least continue the loop regardless of the error, and tell you what tabs it thinks are open.\nEdit: Also, as a note, window_handles is not ordered. So what may be window_handles[1] in one iteration may end up as window_handles[3] in the next.\n"
] | [
0
] | [] | [] | [
"python",
"selenium_webdriver",
"while_loop"
] | stackoverflow_0074604780_python_selenium_webdriver_while_loop.txt |
Q:
Generate pairs of users from list
I'm trying to create a function that generates pairs of users from a list. Everyone has to get a pair.
Example list of user IDs:
list = [123, 456, 789]
...smth...
result = {123:456, 456:789, 789:123} β OK
list = [123, 456, 789]
...smth...
result = {123:456, 456:123, 789:789} β BAD
list = [123, 456, 789, 234, 678]
...smth...
result = {123:456, 456:123, xxxxx} - BAD
I'm trying to do this in Python but can't find a solution.
I've tried lists, sets, and dicts, but can't find an algorithm.
A:
Assuming the pairs to be tuples, you can use itertools.combinations.
From the docs (table on top of the page):
r-length tuples, in sorted order, no repeated elements
And:
Elements are treated as unique based on their position, not on their value. So if the input elements are unique, there will be no repeated values in each combination.
So:
from itertools import combinations
data = [123, 456, 789, 234, 678]
result = list(combinations(data, 2))
# [(123, 456), (123, 789), (123, 234), (123, 678), (456, 789), (456, 234), (456, 678), (789, 234), (789, 678), (234, 678)]
Also, avoid using the word "list" for naming, as list is a built-in Python container.
A:
Your question is definitely not clear, and you should read a bit more about how to ask questions and show what code you have tried and where it failed. But if I understand correctly, the pairs will be consecutive, i.e., the first item pairs with the second, the second with the third, and so on, until the last item pairs with the first. You're showing a dictionary structure in your example, so maybe you can use a function like this.
def make_pairs(input_list):
out_dict = dict() # output dictionary
for i in range(len(input_list)): # i is from 0 to len(list) - 1
# if i is not pointing to the last item
if not i == len(input_list) - 1:
# make pair from the i-th item and the (i+1) item
out_dict[input_list[i]] = input_list[i+1]
# i points to the last item
else:
# make pair from the last item and the first item
out_dict[input_list[i]] = input_list[0]
return out_dict
input_list = [123, 456, 789]
out_dict = make_pairs(input_list)
print(out_dict)
Of course, this is somewhat of a bad solution, but you did not give enough information; for example, if the list has repeating items, then keys will overwrite each other.
As mentioned above, avoid using list as a variable name, since it shadows the built-in list type.
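For the cyclic pairing shown in the question's first example (each ID paired with the next, and the last wrapping back to the first), a compact sketch using zip:
ids = [123, 456, 789, 234, 678]
# Pair each ID with the next one; the slice rotation wraps the last back to the first.
pairs = dict(zip(ids, ids[1:] + ids[:1]))
print(pairs)  # {123: 456, 456: 789, 789: 234, 234: 678, 678: 123}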
| Generate pairs of users from list | I'm trying to create a function that generates pairs of users from a list. Everyone has to get a pair.
Example list of user IDs:
list = [123, 456, 789]
...smth...
result = {123:456, 456:789, 789:123} β OK
list = [123, 456, 789]
...smth...
result = {123:456, 456:123, 789:789} β BAD
list = [123, 456, 789, 234, 678]
...smth...
result = {123:456, 456:123, xxxxx} - BAD
I'm trying to do this in Python but can't find a solution.
I've tried lists, sets, and dicts, but can't find an algorithm.
| [
"Assuming the pairs to be tuples, you can use itertools.combinations.\nFrom the docs (table on top of the page):\n\nr-length tuples, in sorted order, no repeated elements\n\nAnd:\n\nElements are treated as unique based on their position, not on their value. So if the input elements are unique, there will be no repeated values in each combination.\n\nSo:\nfrom itertools import combinations\n\ndata = [123, 456, 789, 234, 678]\n\nresult = list(combinations(data, 2))\n# [(123, 456), (123, 789), (123, 234), (123, 678), (456, 789), (456, 234), (456, 678), (789, 234), (789, 678), (234, 678)]\n\nAlso, avoid using the \"list\" word for naming, as list is a Python container.\n",
"Your question is definitely not clear and you should read a bit more about how to ask questions and show what code you have tried and failed. But if I understand correctly, the pairs will be consecutive. I.e. first item with the second, second with the third, and so on, until the last item pairs with the first, and you're showing in your example a dictionary structure, so maybe you can use a function like this.\ndef make_pairs(input_list):\n out_dict = dict() # output dictionary\n for i in range(len(input_list)): # i is from 0 to len(list) - 1\n # if i is not pointing to the last item\n if not i == len(input_list) - 1:\n # make pair from the i-th item and the (i+1) item\n out_dict[input_list[i]] = input_list[i+1]\n # i points to the last item\n else:\n # make pair from the last item and the first item\n out_dict[input_list[i]] = input_list[0]\n return out_dict\n\ninput_list = [123, 456, 789]\nout_dict = make_pairs(input_list)\nprint(out_dict)\n\nOf course, this is somewhat of a bad solution, but you did not give enough information, for example, if the list has repeating items, then keys will overwrite each other.\nAs mentioned above, avoid using list as a variable name, it's a reserved Python keyword.\n"
] | [
0,
0
] | [] | [] | [
"dictionary",
"list",
"python",
"python_3.x",
"shuffle"
] | stackoverflow_0074607587_dictionary_list_python_python_3.x_shuffle.txt |
Q:
Where to store cross-testrun state in pytest (binary files)?
I have a session-level fixture in pytest that downloads several binary files that I use throughout my test suite. The current fixture looks something like the following:
@pytest.fixture(scope="session")
def image_cache(pytestconfig, tmp_path_factory):
# A temporary directory loaded with the test image files downloaded once.
remote_location = pytestconfig.getoption("remote_test_images")
tmp_path = tmp_path_factory.mktemp("image_cache", numbered=False)
# ... download the files and store them into tmp_path
yield tmp_path
This used to work well; however, the amount of data is now making things slow, so I wish to cache it between test runs (similar to this question). Contrary to the related question, I want to use pytest's own cache for this, i.e., I'd like to do something like the following:
@pytest.fixture(scope="session")
def image_cache(request, tmp_path_factory):
# A temporary directory loaded with the test image files downloaded once.
remote_location = request.config.option.remote_test_images
tmp_path = request.config.cache.get("image_cache_dir", None)
if tmp_path is None:
# what is the correct location here?
tmp_path = ...
request.config.cache.set("image_cache_dir", tmp_path)
# ... ensure path exists and is empty, clean if necessary
# ... download the files and store them into tmp_path
yield tmp_path
Is there a typical/default/expected location that I should use to store the binary data?
If not, what is a good (platform-independent) location to choose? (tests run on the three major OS: linux, mac, windows)
A:
A bit late here, but maybe I can still offer a helpful suggestion. You have at least two options, I think:
use pytest's caching solution on its own. Yes, it expects JSON-serializable data, so you'll need to convert your "binary" into strings. You can use base64 to safely encode arbitrary binary into letters that can be stored as a string and then later converted from a string back into the original binary (ie back into an image, or whatever).
use pytest's caching solution as a means of remembering a filename or a directory name. Then, use python's generic temporary file system so you can manage temporary files in a platform-independent way. In the end, you'll write just filenames or paths into pytest cache and do everything else manually.
Solution 1 benefits from fully-automatic management of cache content and will be compatible with other pytest extensions (such as distributed testing with xdist). It means that all files are kept in the same place and it is easier to see and manage disk usage.
Solution 2 is likely to be faster and will scale safely. It avoids the need to transcode to base64, which will be a waste of CPU and of space (since base64 will take a lot more space than the original binary). Additionally, the pytest cache may not be well suited for a large number of potentially large values (depending on the number of files and sizes of images we're talking about here).
Converting image / arbitrary bytes into something that can be encoded to JSON:
import base64
# encodebytes returns bytes; call .decode("ascii") on the result if the cache needs a str
now_im_a_string = base64.encodebytes(your_bytes)
...
# cache it, store it, whatever
...
# decodestring was removed in Python 3.9; decodebytes is the current name
im_your_bytes_again = base64.decodebytes(string_read_from_cache)
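And a minimal sketch of option 2, assuming the OS temp directory survives between runs (swap in a project-local directory if it does not); the fixture name and cache key are illustrative:
import tempfile
from pathlib import Path

import pytest

@pytest.fixture(scope="session")
def image_cache(request):
    cached = request.config.cache.get("image_cache_dir", None)
    if cached is None or not Path(cached).is_dir():
        cached = tempfile.mkdtemp(prefix="image_cache_")
        # ... download the files into `cached` here ...
        request.config.cache.set("image_cache_dir", cached)
    return Path(cached)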
| Where to store cross-testrun state in pytest (binary files)? | I have a session-level fixture in pytest that downloads several binary files that I use throughout my test suite. The current fixture looks something like the following:
@pytest.fixture(scope="session")
def image_cache(pytestconfig, tmp_path_factory):
# A temporary directory loaded with the test image files downloaded once.
remote_location = pytestconfig.getoption("remote_test_images")
tmp_path = tmp_path_factory.mktemp("image_cache", numbered=False)
# ... download the files and store them into tmp_path
yield tmp_path
This used to work well; however, the amount of data is now making things slow, so I wish to cache it between test runs (similar to this question). Contrary to the related question, I want to use pytest's own cache for this, i.e., I'd like to do something like the following:
@pytest.fixture(scope="session")
def image_cache(request, tmp_path_factory):
# A temporary directory loaded with the test image files downloaded once.
remote_location = request.config.option.remote_test_images
tmp_path = request.config.cache.get("image_cache_dir", None)
if tmp_path is None:
# what is the correct location here?
tmp_path = ...
request.config.cache.set("image_cache_dir", tmp_path)
# ... ensure path exists and is empty, clean if necessary
# ... download the files and store them into tmp_path
yield tmp_path
Is there a typical/default/expected location that I should use to store the binary data?
If not, what is a good (platform-independent) location to choose? (tests run on the three major OS: linux, mac, windows)
| [
"A bit late here, but maybe I can still offer a helpful suggestion. You have at least two options, I think:\n\nuse pytest's caching solution on its own. Yes, it expects JSON-serializable data, so you'll need to convert your \"binary\" into strings. You can use base64 to safely encode arbitrary binary into letters that can be stored as a string and then later converted from a string back into the original binary (ie back into an image, or whatever).\n\nuse pytest's caching solution as a means of remembering a filename or a directory name. Then, use python's generic temporary file system so you can manage temporary files in a platform-independent way. In the end, you'll write just filenames or paths into pytest cache and do everything else manually.\n\n\nSolution 1 benefits from fully-automatic management of cache content and will be compatible with other pytest extensions (such as distributed testing with xdist). It means that all files are kept in the same place and it is easier to see and manage disk usage.\nSolution 2 is likely to be faster and will scale safely. It avoids the need to transcode to base64, which will be a waste of CPU and of space (since base64 will take a lot more space than the original binary). Additionally, the pytest cache may not be well suited for a large number of potentially large values (depending on the number of files and sizes of images we're talking about here).\nConverting image / arbitrary bytes into something that can be encoded to JSON:\nimport base64\nnow_im_a_string = base64.encodebytes(your_bytes)\n... \n# cache it, store it whatever\n... \nim_your_bytes_again = base64.decodestring(string_read_from_cache)\n\n"
] | [
0
] | [] | [] | [
"caching",
"pytest",
"python",
"state"
] | stackoverflow_0070711084_caching_pytest_python_state.txt |
Q:
Convert directed graph to horizontal data format in pandas python
I just started with Python and know almost nothing about pandas.
So now I have a directed graph which looks like this:
from ID | to ID
13      | 22
13      | 56
14      | 10
14      | 15
14      | 16
now I need to transform it to horizontal data format like this:
from ID | To 0 | To 1 | To 2
13      | 22   | 56   | NAN
14      | 10   | 15   | 16
I found something like Pandas Merge duplicate index into single index, but it seems it cannot merge rows into different columns.
Will pandas use NaN to fill in the blanks during this process, due to the need to add more columns?
A:
You can do something like this to transform your dataframe, pandas will automatically add NaN values for any number of columns you have with this solution.
df = df.groupby('from ID')['to ID'].apply(list)

## create the desired df from the lists
df = pd.DataFrame(df.tolist(), index=df.index).reset_index()

## Rename columns to match the desired "To 0", "To 1", ... headers
df.columns = ["from ID"] + [f"To {i}" for i in range(len(df.columns) - 1)]
| Convert directed graph to horizontal data format in pandas python | I just started with Python and know almost nothing about pandas.
So now I have a directed graph which looks like this:
from ID | to ID
13      | 22
13      | 56
14      | 10
14      | 15
14      | 16
now I need to transform it to horizontal data format like this:
from ID | To 0 | To 1 | To 2
13      | 22   | 56   | NAN
14      | 10   | 15   | 16
I found something like Pandas Merge duplicate index into single index, but it seems it cannot merge rows into different columns.
Will pandas use NaN to fill in the blanks during this process, due to the need to add more columns?
| [
"You can do something like this to transform your dataframe, pandas will automatically add NaN values for any number of columns you have with this solution.\ndf = df.groupby('from')['to'].apply(list)\n\n## create the desired df from the lists\ndf = pd.DataFrame(df.tolist(),index=df.index).reset_index()\n\n\n## Rename columns\ndf.columns = [\"from ID\"] + [f\"To {i}\" for i in range(1,len(df.columns))]\n\n\n"
] | [
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074607584_pandas_python.txt |
Q:
Problem in debugging django project using VS code
I have a problem while debugging a Django project using VS Code: nothing happens when I click the debug button, although I can launch the server just by typing python manage.py runserver in the terminal.
Here is my launch.json file; please note that I tried a lot of examples, and the problem remains:
{
"version": "0.2.0",
"configurations": [
{
"name": "Django",
"python": "C:/Users/msekmani/Desktop/dashboard_project/venv/Scripts/python.exe",
"type": "python",
"request": "launch",
"program": "C:/Users/msekmani/Desktop/dashboard_project/IPv2/src/manage.py",
"console": "internalConsole",
"args": ["runserver"],
"django": true,
"justMyCode": true,
},
]
}
I am using Python version 3.6, and the OS is Windows.
Please note that I also tried to create a plain Python debug configuration, and it's not working either; here is that launch.json script:
{
"version": "0.2.0",
"configurations": [
{
"name": "Python: Current File",
"type": "python",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal",
"justMyCode": true
}
]
}
A:
I found a solution to this problem by upgrading the Python version to 3.7. I have no idea what exactly went wrong with version 3.6 (quite possibly newer releases of the VS Code Python debugger, debugpy, dropping support for Python 3.6); either way, upgrading Python is the solution.
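One hedged way to check whether the debug engine itself runs on your interpreter is to launch it manually; debugpy is the engine the VS Code Python extension uses, and its newer releases require Python 3.7+ (the paths below reuse the ones from the question):
C:/Users/msekmani/Desktop/dashboard_project/venv/Scripts/python.exe -m pip install debugpy
C:/Users/msekmani/Desktop/dashboard_project/venv/Scripts/python.exe -m debugpy --listen 5678 manage.py runserver
If this starts the server under the debugger, the interpreter/debugger pair is fine and the issue is in the launch configuration instead.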
| Problem in debugging django project using VS code | I have a problem while debugging a Django project using VS Code: nothing happens when I click the debug button, although I can launch the server just by typing python manage.py runserver in the terminal.
Here is my launch.json file; please note that I tried a lot of examples, and the problem remains:
{
"version": "0.2.0",
"configurations": [
{
"name": "Django",
"python": "C:/Users/msekmani/Desktop/dashboard_project/venv/Scripts/python.exe",
"type": "python",
"request": "launch",
"program": "C:/Users/msekmani/Desktop/dashboard_project/IPv2/src/manage.py",
"console": "internalConsole",
"args": ["runserver"],
"django": true,
"justMyCode": true,
},
]
}
I am using Python version 3.6, and the OS is Windows.
Please note that I also tried to create a plain Python debug configuration, and it's not working either; here is that launch.json script:
{
"version": "0.2.0",
"configurations": [
{
"name": "Python: Current File",
"type": "python",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal",
"justMyCode": true
}
]
}
| [
"I find a solution to solve this problem by upgrade the python version to 3.7, I don't have any idea about the problem happened in version 3.6, by the way, the upgrade python version is the solution.\n"
] | [
0
] | [] | [] | [
"debugging",
"django",
"python",
"visual_studio_code"
] | stackoverflow_0074594548_debugging_django_python_visual_studio_code.txt |
Q:
How to scroll an element to exactly the middle of a screen using Selenium with Python
I'm automating a webpage using Robot Framework, Selenium and python. There are several clicks on different elements of the page, and I want to first scroll the element exactly to the middle of the screen, and then click on it.
I have already used this:
self.seleniumlibrary.scroll_element_into_view(element)
but it doesn't seem to do anything, or at least it doesn't scroll the element to the middle of the screen. Is there JavaScript code I can use? I know that I can get the width and the height of the screen like this:
width, height = self.seleniumlibrary.get_window_size()
but then what? I checked other questions/answers, but none of them were using Python.
A:
There are several steps you can take:
Try to maximize the browser screen with the keyword:
maximize browser window
Do a web page scroll:
execute javascript    window.scrollTo(0, 2500)
(0 = the starting position for scrolling the page; 2500 = how far down to scroll the web page)
Or you can do:
scroll element into view    (element/xpath)
Hope this can help you.
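If the goal is specifically centering one element before clicking it, a sketch using the standard DOM scrollIntoView option (plain Selenium shown; the driver comes from the existing session and the locator is illustrative):
from selenium.webdriver.common.by import By

element = driver.find_element(By.ID, "target")  # hypothetical locator
# block/inline 'center' asks the browser to center the element in the viewport.
driver.execute_script(
    "arguments[0].scrollIntoView({block: 'center', inline: 'center'});",
    element,
)
element.click()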
| How to scroll an element to exactly the middle of a screen using Selenium with Python | I'm automating a webpage using Robot Framework, Selenium and python. There are several clicks on different elements of the page, and I want to first scroll the element exactly to the middle of the screen, and then click on it.
I have already used this:
self.seleniumlibrary.scroll_element_into_view(element)
but it doesn't seem to do anything, or at least it doesn't scroll the element to the middle of the screen. Is there JavaScript code I can use? I know that I can get the width and the height of the screen like this:
width, height = self.seleniumlibrary.get_window_size()
but then what? I checked other questions/answers, but none of them were using Python.
| [
"There are several steps you can take:\n\nTry to maximize the browser screen with the command\n\n\nmaximize browser window\n\n\nDo a web page scroll\n\n\nexecute javascript window.scrollTo(0,2500)\n\n\n0 = start condition for scrolling page\n\n\n2500 = how far to scroll the webpage\n\nor you can do :\n\nscroll element into view (element/xpath)\n\nhope can help you\n"
] | [
1
] | [] | [] | [
"javascript",
"python",
"robotframework",
"screen",
"selenium"
] | stackoverflow_0074592595_javascript_python_robotframework_screen_selenium.txt |