content (stringlengths 85–101k) | title (stringlengths 0–150) | question (stringlengths 15–48k) | answers (sequence) | answers_scores (sequence) | non_answers (sequence) | non_answers_scores (sequence) | tags (sequence) | name (stringlengths 35–137)
---|---|---|---|---|---|---|---|---
Q:
python undetected chromedriver fail to run
I'm scraping a website using Selenium, but I get detected all the time. I decided to use undetected-chromedriver, but I get the following error:
Traceback (most recent call last):
File "undt.py", line 746, in <module>
booter()
File "undt.py", line 92, in booter
driver = uc.Chrome(options=option)
File "C:\Users\azureuser\AppData\Local\Programs\Python\Python37-32\lib\site-packages\undetected_chromedriver\__init__.py", line 414, in __init__
close_fds=IS_POSIX,
File "C:\Users\azureuser\AppData\Local\Programs\Python\Python37-32\lib\subprocess.py", line 800, in __init__
restore_signals, start_new_session)
File "C:\Users\azureuser\AppData\Local\Programs\Python\Python37-32\lib\subprocess.py", line 1148, in _execute_child
args = list2cmdline(args)
File "C:\Users\azureuser\AppData\Local\Programs\Python\Python37-32\lib\subprocess.py", line 555, in list2cmdline
needquote = (" " in arg) or ("\t" in arg) or not arg
TypeError: argument of type 'NoneType' is not iterable
My code is simple
from selenium import webdriver
import undetected_chromedriver as uc
option = webdriver.ChromeOptions()
option.add_experimental_option("excludeSwitches", ["enable-automation", "enable-logging"])
driver = uc.Chrome(options=option)
Note: I'm running Python version 3.7.9 (32-bit).
A:
Try the code below; it's working:
from selenium import webdriver
import undetected_chromedriver as uc
driver = uc.Chrome(use_subprocess=True)
driver.maximize_window()
driver.get("https://tempail.com/")
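If you still need to pass browser options alongside this fix, a minimal sketch (an assumption on my part, based on recent undetected-chromedriver releases that ship their own ChromeOptions class and accept use_subprocess) could look like:
import undetected_chromedriver as uc

options = uc.ChromeOptions()               # use the ChromeOptions bundled with undetected_chromedriver
options.add_argument("--start-maximized")  # ordinary Chrome arguments still work
driver = uc.Chrome(options=options, use_subprocess=True)
driver.get("https://tempail.com/")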
| python undetected chromedriver fail to run | I'm scrapping a website using selenium but I get detected all the time. I decided to use Undetected chromedriver. But I get the following error
Traceback (most recent call last):
File "undt.py", line 746, in <module>
booter()
File "undt.py", line 92, in booter
driver = uc.Chrome(options=option)
File "C:\Users\azureuser\AppData\Local\Programs\Python\Python37-32\lib\site-packages\undetected_chromedriver\__init__.py", line 414, in __init__
close_fds=IS_POSIX,
File "C:\Users\azureuser\AppData\Local\Programs\Python\Python37-32\lib\subprocess.py", line 800, in __init__
restore_signals, start_new_session)
File "C:\Users\azureuser\AppData\Local\Programs\Python\Python37-32\lib\subprocess.py", line 1148, in _execute_child
args = list2cmdline(args)
File "C:\Users\azureuser\AppData\Local\Programs\Python\Python37-32\lib\subprocess.py", line 555, in list2cmdline
needquote = (" " in arg) or ("\t" in arg) or not arg
TypeError: argument of type 'NoneType' is not iterable
My code is simple
from selenium import webdriver
import undetected_chromedriver as uc
option = webdriver.ChromeOptions()
option.add_experimental_option("excludeSwitches", ["enable-automation", "enable-logging"])
driver = uc.Chrome(options=option)
Note: I'm running python version 3.7.9 32bit
| [
"Try the below code, its working:\nfrom selenium import webdriver\nimport undetected_chromedriver as uc\n\ndriver = uc.Chrome(use_subprocess=True)\ndriver.maximize_window()\n\ndriver.get(\"https://tempail.com/\")\n\n"
] | [
0
] | [] | [] | [
"python",
"selenium",
"undetected_chromedriver"
] | stackoverflow_0074627303_python_selenium_undetected_chromedriver.txt |
Q:
Upload tsv file to google colab
A TSV (tab-separated values) file can't be uploaded to Google Colab using pandas.
I used this to upload my file:
import io
df2 = pd.read_csv(io.BytesIO(uploaded['Filename.csv']))
import io
stk = pd.read_csv(io.BytesIO(uploaded['train.tsv']))
A tsv file should be uploaded and read into the dataframe stk
A:
To save a tsv file on Google Colab, the .to_csv function can be used as follows:
df.to_csv('path_in_drive/filename.tsv', sep='\t', index=False, header=False)
stk = pd.read_csv('path_in_drive/filename.tsv', sep='\t') #to read the file back
A:
I don't know if this is a solution to your problem, as it doesn't upload the files, but with this approach you can import files that are also on your Google Drive.
from google.colab import drive
drive.mount('/gdrive')
%cd /gdrive/My\ Drive/{'.//'}
After mounting, you should be able to load files into your script just like on your desktop.
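For example, a minimal sketch (the path and file name are placeholders) for reading a TSV from the mounted drive:
import pandas as pd

# after drive.mount('/gdrive'), files are available under /gdrive/My Drive/
stk = pd.read_csv('/gdrive/My Drive/train.tsv', sep='\t')
print(stk.head())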
A:
import pandas as pd
from google.colab import files
import io
#firstly, upload file to colab
uploaded = files.upload()
#secondly, get path to file in colab
file_path = io.BytesIO(uploaded['file_name.tsv'])
#the last step is familiar to us
df = pd.read_csv(file_path, sep='\t', header=0)
| Upload tsv file to google colab | TSV(Tab separated Value) extension file can't be uploaded to google colab using pandas
Used this to upload my file
import io
df2 = pd.read_csv(io.BytesIO(uploaded['Filename.csv']))
import io
stk = pd.read_csv(io.BytesIO(uploaded['train.tsv']))
A tsv file should be uploaded and read into the dataframe stk
| [
"To save tsv file on google colab, .to_csv function can be used as follows:\ndf.to_csv('path_in_drive/filename.tsv', sep='\\t', index=False, header=False)\nstk = pd.read_csv('path_in_drive/filename.tsv') #to read the file\n",
"Don't know if this is a solution to your problem as it doesn't upload the files, but with this solution you can import files that are also on your google drive.\nfrom google.colab import drive\ndrive.mount('/gdrive')\n%cd /gdrive/My\\ Drive/{'.//'}\nAfter mounting you should be able to load files into your script like on your desktop\n",
"import pandas as pd\nfrom google.colab import files\nimport io\n\n#firstly, upload file to colab\nuploaded = files.upload()\n#secondly, get path to file in colab\nfile_path = io.BytesIO(uploaded['file_name.tsv'])\n#the last step is familiar to us\ndf = pd.read_csv(file_path, sep='\\t', header=0)\n\n"
] | [
1,
0,
0
] | [] | [] | [
"google_colaboratory",
"pandas",
"python",
"svm"
] | stackoverflow_0057285697_google_colaboratory_pandas_python_svm.txt |
Q:
sympy lambdify with numexpr and sqrt
I'm trying to speed up some numeric code generated by lambdify using numexpr. Unfortunately, the numexpr-based function breaks when using the sqrt function, even though it's one of the supported functions.
This reproduces the issue for me:
import sympy
import numpy as np
import numexpr
from sympy.utilities.lambdify import lambdify
expr = sympy.S('b*sqrt(a) - a**2')
a, b = sorted(expr.free_symbols, key=lambda s: s.name)
func_numpy = lambdify((a,b), expr, modules=[np], dummify=False)
func_numexpr = lambdify((a,b), expr, modules=[numexpr], dummify=False)
foo, bar = np.random.random((2, 4))
print sympy.__version__
print func_numpy(foo, bar)
print func_numexpr(foo, bar)
When I run this, the output is:
0.7.6
[-0.02062061 0.08648306 -0.57868128 0.27598245]
Traceback (most recent call last):
File "sympy_test.py", line 17, in <module>
print func_numexpr(foo, bar)
File "<string>", line 1, in <lambda>
NameError: global name 'sqrt' is not defined
As a sanity check, I also tried calling numexpr directly:
numexpr.evaluate('b*sqrt(a) - a**2', local_dict=dict(a=foo, b=bar))
which works as expected, producing the same result as func_numpy.
EDIT: It works when I use the line:
func_numexpr = lambdify((a,b), expr, modules=['numexpr'], dummify=False)
Is this a sympy bug?
| sympy lambdify with numexpr and sqrt | I'm trying to speed up some numeric code generated by lambdify using numexpr. Unfortunately, the numexpr-based function breaks when using the sqrt function, even though it's one of the supported functions.
This reproduces the issue for me:
import sympy
import numpy as np
import numexpr
from sympy.utilities.lambdify import lambdify
expr = sympy.S('b*sqrt(a) - a**2')
a, b = sorted(expr.free_symbols, key=lambda s: s.name)
func_numpy = lambdify((a,b), expr, modules=[np], dummify=False)
func_numexpr = lambdify((a,b), expr, modules=[numexpr], dummify=False)
foo, bar = np.random.random((2, 4))
print sympy.__version__
print func_numpy(foo, bar)
print func_numexpr(foo, bar)
When I run this, the output is:
0.7.6
[-0.02062061 0.08648306 -0.57868128 0.27598245]
Traceback (most recent call last):
File "sympy_test.py", line 17, in <module>
print func_numexpr(foo, bar)
File "<string>", line 1, in <lambda>
NameError: global name 'sqrt' is not defined
As a sanity check, I also tried calling numexpr directly:
numexpr.evaluate('b*sqrt(a) - a**2', local_dict=dict(a=foo, b=bar))
which works as expected, producing the same result as func_numpy.
EDIT: It works when I use the line:
func_numexpr = lambdify((a,b), expr, modules=['numexpr'], dummify=False)
Is this a sympy bug?
| [] | [] | [
"you can change np.sqrt(9) to numexpr.evaluate('9**0.5')\n"
] | [
-1
] | [
"numexpr",
"numpy",
"python",
"sympy"
] | stackoverflow_0029807509_numexpr_numpy_python_sympy.txt |
Q:
aws Glue job: how to merge multiple output .csv files in s3
I created an AWS Glue crawler and job. The purpose is to transfer data from a Postgres RDS database table to one single .csv file in S3. Everything is working, but I get a total of 19 files in S3. Every file is empty except three, which contain one row of the database table each, plus the headers. So every row of the database is written to a separate .csv file.
What can I do here to specify that I want only one file, where the first row is the headers and every row of the database follows?
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
## @type: DataSource
## @args: [database = "gluedatabse", table_name = "postgresgluetest_public_account", transformation_ctx = "datasource0"]
## @return: datasource0
## @inputs: []
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "gluedatabse", table_name = "postgresgluetest_public_account", transformation_ctx = "datasource0")
## @type: ApplyMapping
## @args: [mapping = [("password", "string", "password", "string"), ("user_id", "string", "user_id", "string"), ("username", "string", "username", "string")], transformation_ctx = "applymapping1"]
## @return: applymapping1
## @inputs: [frame = datasource0]
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("user_id", "string", "user_id", "string"), ("username", "string", "username", "string"),("password", "string", "password", "string")], transformation_ctx = "applymapping1")
## @type: DataSink
## @args: [connection_type = "s3", connection_options = {"path": "s3://BUCKETNAMENOTSHOWN"}, format = "csv", transformation_ctx = "datasink2"]
## @return: datasink2
## @inputs: [frame = applymapping1]
datasink2 = glueContext.write_dynamic_frame.from_options(frame = applymapping1, connection_type = "s3", connection_options = {"path": "s3://BUCKETNAMENOTSHOWN"}, format = "csv", transformation_ctx = "datasink2")
job.commit()
The database looks like this:
Database picture
It looks like this in S3:
S3 Bucket
One sample .csv in S3 looks like this:
password,user_id,username
346sdfghj45g,user3,dieter
As I said, there is one file for each table row.
Edit:
The multipart upload to S3 doesn't seem to work correctly. It just uploads the parts but doesn't merge them together when finished. Here are the last lines of the job log:
19/04/04 13:26:41 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 1 blocks
19/04/04 13:26:41 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 1 ms
19/04/04 13:26:41 INFO Executor: Finished task 16.0 in stage 2.0 (TID 18). 2346 bytes result sent to driver
19/04/04 13:26:41 INFO MultipartUploadOutputStream: close closed:false s3://bucketname/run-1554384396528-part-r-00018
19/04/04 13:26:41 INFO MultipartUploadOutputStream: close closed:true s3://bucketname/run-1554384396528-part-r-00017
19/04/04 13:26:41 INFO MultipartUploadOutputStream: close closed:false s3://bucketname/run-1554384396528-part-r-00019
19/04/04 13:26:41 INFO Executor: Finished task 17.0 in stage 2.0 (TID 19). 2346 bytes result sent to driver
19/04/04 13:26:41 INFO MultipartUploadOutputStream: close closed:true s3://bucketname/run-1554384396528-part-r-00018
19/04/04 13:26:41 INFO Executor: Finished task 18.0 in stage 2.0 (TID 20). 2346 bytes result sent to driver
19/04/04 13:26:41 INFO MultipartUploadOutputStream: close closed:true s3://bucketname/run-1554384396528-part-r-00019
19/04/04 13:26:41 INFO Executor: Finished task 19.0 in stage 2.0 (TID 21). 2346 bytes result sent to driver
19/04/04 13:26:41 INFO CoarseGrainedExecutorBackend: Driver commanded a shutdown
19/04/04 13:26:41 INFO CoarseGrainedExecutorBackend: Driver from 172.31.20.76:39779 disconnected during shutdown
19/04/04 13:26:41 INFO CoarseGrainedExecutorBackend: Driver from 172.31.20.76:39779 disconnected during shutdown
19/04/04 13:26:41 INFO MemoryStore: MemoryStore cleared
19/04/04 13:26:41 INFO BlockManager: BlockManager stopped
19/04/04 13:26:41 INFO ShutdownHookManager: Shutdown hook called
End of LogType:stderr
A:
Can you try the following?
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "gluedatabse", table_name = "postgresgluetest_public_account", transformation_ctx = "datasource0")
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("user_id", "string", "user_id", "string"), ("username", "string", "username", "string"),("password", "string", "password", "string")], transformation_ctx = "applymapping1")
## Force one partition, so it can save only 1 file instead of 19
repartition = applymapping1.repartition(1)
datasink2 = glueContext.write_dynamic_frame.from_options(frame = repartition, connection_type = "s3", connection_options = {"path": "s3://BUCKETNAMENOTSHOWN"}, format = "csv", transformation_ctx = "datasink2")
job.commit()
Also, if you want to check how many partitions you have currently, you can try the following code. I'm guessing there are 19, which is why it saves 19 files back to S3:
## Change to Pyspark Dataframe
from awsglue.dynamicframe import DynamicFrame
dataframe = DynamicFrame.toDF(applymapping1)
## Print number of partitions
print(dataframe.rdd.getNumPartitions())
## Change back to DynamicFrame
datasink2 = DynamicFrame.fromDF(dataframe, glueContext, "datasink2")
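If repartition still leaves you with more than one output file, an alternative sketch (my own variation on the answer above, not something the original author posted) is to coalesce the underlying Spark DataFrame down to one partition before converting back and writing:
from awsglue.dynamicframe import DynamicFrame

## Coalesce to a single partition on the Spark side, then convert back
dataframe = DynamicFrame.toDF(applymapping1).coalesce(1)
single_partition = DynamicFrame.fromDF(dataframe, glueContext, "single_partition")
datasink2 = glueContext.write_dynamic_frame.from_options(frame = single_partition, connection_type = "s3", connection_options = {"path": "s3://BUCKETNAMENOTSHOWN"}, format = "csv", transformation_ctx = "datasink2")
Note that the single output file will still get an auto-generated name like run-...-part-r-00000 inside the prefix; Glue does not let you pick the file name here.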
A:
If these are not big datasets, then you could easily consider converting glue dynamicframe (glue_dyf) to spark df (spark_df) and then spark df to pandas df (pandas_df) as below:
spark_df = DynamicFrame.toDF(glue_dyf)
pandas_df = spark_df.toPandas()
pandas_df.to_csv("s3://BUCKETNAME/subfolder/FileName.csv",index=False)
In this method, you need not worry about repartitioning for small volumes of data. For large datasets, it is advisable to follow the previous answer and leverage Glue workers and Spark partitions.
| aws Glue job: how to merge multiple output .csv files in s3 | I created an aws Glue Crawler and job. The purpose is to transfer data from a postgres RDS database table to one single .csv file in S3. Everything is working, but I get a total of 19 files in S3. Every file is empty except three with one row of the database table in it as well as the headers. So every row of the database is written to a seperate .csv file.
What can I do here to specify that I want only one file where the first row are the headers and afterwards every line of the database follows?
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
## @type: DataSource
## @args: [database = "gluedatabse", table_name = "postgresgluetest_public_account", transformation_ctx = "datasource0"]
## @return: datasource0
## @inputs: []
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "gluedatabse", table_name = "postgresgluetest_public_account", transformation_ctx = "datasource0")
## @type: ApplyMapping
## @args: [mapping = [("password", "string", "password", "string"), ("user_id", "string", "user_id", "string"), ("username", "string", "username", "string")], transformation_ctx = "applymapping1"]
## @return: applymapping1
## @inputs: [frame = datasource0]
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("user_id", "string", "user_id", "string"), ("username", "string", "username", "string"),("password", "string", "password", "string")], transformation_ctx = "applymapping1")
## @type: DataSink
## @args: [connection_type = "s3", connection_options = {"path": "s3://BUCKETNAMENOTSHOWN"}, format = "csv", transformation_ctx = "datasink2"]
## @return: datasink2
## @inputs: [frame = applymapping1]
datasink2 = glueContext.write_dynamic_frame.from_options(frame = applymapping1, connection_type = "s3", connection_options = {"path": "s3://BUCKETNAMENOTSHOWN"}, format = "csv", transformation_ctx = "datasink2")
job.commit()
The database looks like this:
Databse picture
It looks like this in S3:
S3 Bucket
One sample .csv in the S3 looksl ike this:
password,user_id,username
346sdfghj45g,user3,dieter
As I said, there is one file for each table row.
Edit:
The multipartupload to s3 doesn't seem to work correctly. It just uplaods the parts but doesn't merge them together when finished. Here are the last lines of the job log:
Here are the last lines of the log:
19/04/04 13:26:41 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 1 blocks
19/04/04 13:26:41 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 1 ms
19/04/04 13:26:41 INFO Executor: Finished task 16.0 in stage 2.0 (TID 18). 2346 bytes result sent to driver
19/04/04 13:26:41 INFO MultipartUploadOutputStream: close closed:false s3://bucketname/run-1554384396528-part-r-00018
19/04/04 13:26:41 INFO MultipartUploadOutputStream: close closed:true s3://bucketname/run-1554384396528-part-r-00017
19/04/04 13:26:41 INFO MultipartUploadOutputStream: close closed:false s3://bucketname/run-1554384396528-part-r-00019
19/04/04 13:26:41 INFO Executor: Finished task 17.0 in stage 2.0 (TID 19). 2346 bytes result sent to driver
19/04/04 13:26:41 INFO MultipartUploadOutputStream: close closed:true s3://bucketname/run-1554384396528-part-r-00018
19/04/04 13:26:41 INFO Executor: Finished task 18.0 in stage 2.0 (TID 20). 2346 bytes result sent to driver
19/04/04 13:26:41 INFO MultipartUploadOutputStream: close closed:true s3://bucketname/run-1554384396528-part-r-00019
19/04/04 13:26:41 INFO Executor: Finished task 19.0 in stage 2.0 (TID 21). 2346 bytes result sent to driver
19/04/04 13:26:41 INFO CoarseGrainedExecutorBackend: Driver commanded a shutdown
19/04/04 13:26:41 INFO CoarseGrainedExecutorBackend: Driver from 172.31.20.76:39779 disconnected during shutdown
19/04/04 13:26:41 INFO CoarseGrainedExecutorBackend: Driver from 172.31.20.76:39779 disconnected during shutdown
19/04/04 13:26:41 INFO MemoryStore: MemoryStore cleared
19/04/04 13:26:41 INFO BlockManager: BlockManager stopped
19/04/04 13:26:41 INFO ShutdownHookManager: Shutdown hook called
End of LogType:stderr
| [
"Can you try the following?\nimport sys\nfrom awsglue.transforms import *\nfrom awsglue.utils import getResolvedOptions\nfrom pyspark.context import SparkContext\nfrom awsglue.context import GlueContext\nfrom awsglue.job import Job\n\n## @params: [JOB_NAME]\nargs = getResolvedOptions(sys.argv, ['JOB_NAME'])\n\nsc = SparkContext()\nglueContext = GlueContext(sc)\nspark = glueContext.spark_session\njob = Job(glueContext)\njob.init(args['JOB_NAME'], args)\n\ndatasource0 = glueContext.create_dynamic_frame.from_catalog(database = \"gluedatabse\", table_name = \"postgresgluetest_public_account\", transformation_ctx = \"datasource0\")\n\napplymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [(\"user_id\", \"string\", \"user_id\", \"string\"), (\"username\", \"string\", \"username\", \"string\"),(\"password\", \"string\", \"password\", \"string\")], transformation_ctx = \"applymapping1\")\n\n## Force one partition, so it can save only 1 file instead of 19\nrepartition = applymapping1.repartition(1)\n\ndatasink2 = glueContext.write_dynamic_frame.from_options(frame = repartition, connection_type = \"s3\", connection_options = {\"path\": \"s3://BUCKETNAMENOTSHOWN\"}, format = \"csv\", transformation_ctx = \"datasink2\")\njob.commit()\n\nAlso, if you want to check how many partitions you have currently, you can try the following code. I'm guessing there are 19, that's why is saving 19 files back to s3:\n ## Change to Pyspark Dataframe\n dataframe = DynamicFrame.toDF(applymapping1)\n ## Print number of partitions \n print(dataframe.rdd.getNumPartitions())\n ## Change back to DynamicFrame\n datasink2 = DynamicFrame.fromDF(dataframe, glueContext, \"datasink2\")\n\n",
"If these are not big datasets, then you could easily consider converting glue dynamicframe (glue_dyf) to spark df (spark_df) and then spark df to pandas df (pandas_df) as below:\nspark_df = DynamicFrame.toDF(glue_dyf)\npandas_df = spark_df.toPandas()\npandas_df.to_csv(\"s3://BUCKETNAME/subfolder/FileName.csv\",index=False)\n\nIn this method, you need not worry about repartition for small volumes of data. Large datasets are advised to be treated like the previous answers by leveraging glue workers, spark partitions..\n"
] | [
7,
0
] | [] | [] | [
"amazon_s3",
"amazon_web_services",
"aws_glue",
"jobs",
"python"
] | stackoverflow_0055515251_amazon_s3_amazon_web_services_aws_glue_jobs_python.txt |
Q:
Selecting hover on plotly choropleth map
I am currently working on a map project using choropleth from plotly.express, and the map combines two traces: one is the choropleth map with areas defined by colors, and the second is bubbles placed over some selected countries (not all).
I have two dataframes: one with the ISO alpha-3 code and the regional area each country is part of (which defines the color on the map), and a second one with the number of clients for some countries (again keyed by ISO alpha-3 code).
I managed to merge the two maps, but I only want the "hover" from the second one for the selected countries (so when my cursor goes over the related dot of the country), and using hovermode=False disables all the hovers on the map... Is there a way to keep the hover we want and disable the others without removing everything?
fig = px.choropleth(df, locations="alpha-3",
color="sub-region",
color_discrete_map= {"Middle East and Africa":"#2a7bb0",
"Europe":"#fc5e61",
"Asia":"#00a19c",
"Northern America":"#00134d",
"Russia and Central Asia": "#febec0",
"Latin America and the Caribbean":"#99a1b8"})
fig.update_layout(width=1500, height=1000, margin={"r":0,"t":0,"l":0,"b":0}, hovermode=False)
fig.update_geos(projection_type="mercator", visible=False)
fig.update_traces(marker_line_width=0)
fig2 = px.scatter_geo(dfa, locations="alpha-3", size="actors")
fig.add_trace(fig2.data[0])
aPlot = plotly.offline.plot(fig,
config={"displayModeBar": False},
show_link=False,
include_plotlyjs=False,
output_type='div')
fig is the choropleth map and fig2 is the circle map. hovermode=False is set before the merge but it's not working. I tried updating fig2 with fig, but then the circles were not displayed... I am clueless at this point on how to keep only the hover from fig2...
EDIT: Here is a snippet of the dataframes:
name alpha-3 country-code region sub-region
0 Zimbabwe ZWE 716 Africa Middle East and Africa
1 Zambia ZMB 894 Africa Middle East and Africa
2 South Africa ZAF 710 Africa Middle East and Africa
3 Yemen YEM 887 Asia Middle East and Africa
4 Viet Nam VNM 704 Asia Asia
name ... Actors
0 South Africa ... NameOfAnActor
A:
OK, so I will share how I handled the issue; it's not perfect, but it may help.
The "solution" was to suppress all the data from the first figure by disabling it with hover_data, as follows, in the px.choropleth arguments:
hover_data={"alpha-3":False,"sub-region":False}
This solution isn't perfect, as a pointer still appears on each country, but at least nothing is displayed aside from the bubbles...
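An alternative sketch (a suggestion of my own, not part of the original answer) is to disable hovering on the choropleth trace itself before adding the bubble trace; hoverinfo='skip' is a standard Plotly trace attribute and should also remove the lingering pointer:
fig = px.choropleth(df, locations="alpha-3", color="sub-region")
fig.update_traces(hoverinfo="skip", hovertemplate=None)  # no hover at all on the filled countries

fig2 = px.scatter_geo(dfa, locations="alpha-3", size="actors")
fig.add_trace(fig2.data[0])  # the bubbles keep their own hover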
| Selecting hover on plotly choropleth map | I am currently working on a map project using choropleth from plotly.express and this map combines two traces: one is the choropleth map with area defined by colors and the second one bubbles to put over some selected countries (not all).
I have two dataframes, one with iso alpha-3 code and the regional area they are apart of (which will define the color on the map) and the second one with a number of client for some countries (with iso alpha-3 code once again)
I managed to merge the two maps, but I only want the "hover" from the second one for the selected countries (so when my cursor goes to the related dot of the country) and using hovermode=False disables all the hovers on the map... Is there a way to select the hover we want and disabling the other without removing everything?
fig = px.choropleth(df, locations="alpha-3",
color="sub-region",
color_discrete_map= {"Middle East and Africa":"#2a7bb0",
"Europe":"#fc5e61",
"Asia":"#00a19c",
"Northern America":"#00134d",
"Russia and Central Asia": "#febec0",
"Latin America and the Caribbean":"#99a1b8"})
fig.update_layout(width=1500, height=1000, margin={"r":0,"t":0,"l":0,"b":0}, hovermode=False)
fig.update_geos(projection_type="mercator", visible=False)
fig.update_traces(marker_line_width=0)
fig2 = px.scatter_geo(dfa, locations="alpha-3", size="actors")
fig.add_trace(fig2.data[0])
aPlot = plotly.offline.plot(fig,
config={"displayModeBar": False},
show_link=False,
include_plotlyjs=False,
output_type='div')
fig is the choropleth map, fig2 is the circle map. hovermode=False is set before the merge but it's not working. I tried to update the fig2 with fig but the circles were not displayed... I am clueless at this point on how to only have the hover from fig2...
EDIT : Here are an snippet of the dataframes:
name alpha-3 country-code region sub-region
0 Zimbabwe ZWE 716 Africa Middle East and Africa
1 Zambia ZMB 894 Africa Middle East and Africa
2 South Africa ZAF 710 Africa Middle East and Africa
3 Yemen YEM 887 Asia Middle East and Africa
4 Viet Nam VNM 704 Asia Asia
name ... Actors
0 South Africa ... NameOfAnActor
| [
"Ok so I will share how I handled the issue which is not perfect but it will maybe help.\nThe \"solution\" was to remove every data from the first figure by disabling them with hover_data as follow in the choropleth map variables:\nhover_data={\"alpha-3\":False,\"sub-region\":False}\n\nThis solution isn't perfect as a pointer is still here on each countries but at least nothing is displayed aside from the bubbles...\n"
] | [
0
] | [] | [] | [
"choropleth",
"plotly",
"python",
"python_3.x"
] | stackoverflow_0074614344_choropleth_plotly_python_python_3.x.txt |
Q:
Unix socket credential passing in Python
How is Unix socket credential passing accomplished in Python?
A:
Internet searches on this topic came up with surprisingly few results. I figured I'd post the question and answer here for others interested in this topic.
The following client and server applications demonstrate how to accomplish this on Linux with the standard python interpreter. No extensions are required but, due to the use of embedded constants, the code is Linux-specific.
Server:
#!/usr/bin/env python
import struct
from socket import socket, AF_UNIX, SOCK_STREAM, SOL_SOCKET
SO_PEERCRED = 17 # Pulled from /usr/include/asm-generic/socket.h
s = socket(AF_UNIX, SOCK_STREAM)
s.bind('/tmp/pass_cred')
s.listen(1)
conn, addr = s.accept()
creds = conn.getsockopt(SOL_SOCKET, SO_PEERCRED, struct.calcsize('3i'))
pid, uid, gid = struct.unpack('3i',creds)
print 'pid: %d, uid: %d, gid %d' % (pid, uid, gid)
Client:
#!/usr/bin/env python
from socket import socket, AF_UNIX, SOCK_STREAM, SOL_SOCKET
SO_PASSCRED = 16 # Pulled from /usr/include/asm-generic/socket.h
s = socket(AF_UNIX, SOCK_STREAM)
s.setsockopt(SOL_SOCKET, SO_PASSCRED, 1)
s.connect('/tmp/pass_cred')
s.close()
Unfortunately, the SO_PEERCRED and SO_PASSCRED constants are not exported by Python's socket module, so they must be entered by hand. Although these values are unlikely to change, it is possible. This should be considered by any application using this approach.
A:
Here is a Python 3 version of Rakis' server.py that does not use hardcoded IDs and also displays user and group names of the client:
#!/usr/bin/env python
import os
import atexit
import grp
import pwd
import struct
from socket import socket, AF_UNIX, SOCK_STREAM, SOL_SOCKET, SO_PEERCRED
def remove_sock():
try:
os.unlink('/tmp/pass_cred')
except FileNotFoundError:
pass
remove_sock()
atexit.register(remove_sock)
s = socket(AF_UNIX, SOCK_STREAM)
s.bind('/tmp/pass_cred')
s.listen(1)
conn, addr = s.accept()
creds = conn.getsockopt(SOL_SOCKET, SO_PEERCRED, struct.calcsize('3i'))
pid, uid, gid = struct.unpack('3i',creds)
user_name = pwd.getpwuid(uid)[0]
group_name = grp.getgrgid(gid)[0]
print(f'pid={pid}, uid={uid}({user_name}), gid={gid}({group_name})')
Also updated to make it reusable (you cannot bind the socket to a filename that already exists, so we remove the socket file both before binding and after normal process exit).
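For completeness, a minimal Python 3 client to exercise this server could look like the sketch below (my addition; the server reads the peer credentials with SO_PEERCRED from the connected socket, so the client does not need to set any special options):
#!/usr/bin/env python
from socket import socket, AF_UNIX, SOCK_STREAM

s = socket(AF_UNIX, SOCK_STREAM)
s.connect('/tmp/pass_cred')  # the server can now read our pid/uid/gid via SO_PEERCRED
s.close()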
| Unix socket credential passing in Python | How is Unix socket credential passing accomplished in Python?
| [
"Internet searches on this topic came up with surprisingly few results. I figured I'd post the question and answer here for others interested in this topic.\nThe following client and server applications demonstrate how to accomplish this on Linux with the standard python interpreter. No extensions are required but, due to the use of embedded constants, the code is Linux-specific.\nServer:\n#!/usr/bin/env python\n\nimport struct\nfrom socket import socket, AF_UNIX, SOCK_STREAM, SOL_SOCKET\n\nSO_PEERCRED = 17 # Pulled from /usr/include/asm-generic/socket.h\n\ns = socket(AF_UNIX, SOCK_STREAM)\n\ns.bind('/tmp/pass_cred')\ns.listen(1)\n\nconn, addr = s.accept()\n\ncreds = conn.getsockopt(SOL_SOCKET, SO_PEERCRED, struct.calcsize('3i'))\n\npid, uid, gid = struct.unpack('3i',creds)\n\nprint 'pid: %d, uid: %d, gid %d' % (pid, uid, gid)\n\nClient:\n#!/usr/bin/env python\n\nfrom socket import socket, AF_UNIX, SOCK_STREAM, SOL_SOCKET\n\nSO_PASSCRED = 16 # Pulled from /usr/include/asm-generic/socket.h\n\ns = socket(AF_UNIX, SOCK_STREAM)\n\ns.setsockopt(SOL_SOCKET, SO_PASSCRED, 1)\n\ns.connect('/tmp/pass_cred')\n\ns.close()\n\nUnfortunately, the SO_PEERCRED and SO_PASSCRED constants are not exported by python's socket module so they must be entered by hand. Although these value are unlikely to change it is possible. This should be considered by any applications using this approach.\n",
"Here is a Python 3 version of Rakis' server.py that does not use hardcoded IDs and also displays user and group names of the client:\n#!/usr/bin/env python\nimport os\nimport atexit\nimport grp\nimport pwd\nimport struct\nfrom socket import socket, AF_UNIX, SOCK_STREAM, SOL_SOCKET, SO_PEERCRED\n\ndef remove_sock():\n try:\n os.unlink('/tmp/pass_cred')\n except FileNotFoundError:\n pass\n\nremove_sock()\natexit.register(remove_sock)\n\ns = socket(AF_UNIX, SOCK_STREAM)\ns.bind('/tmp/pass_cred')\ns.listen(1)\n\nconn, addr = s.accept()\n\ncreds = conn.getsockopt(SOL_SOCKET, SO_PEERCRED, struct.calcsize('3i'))\n\npid, uid, gid = struct.unpack('3i',creds)\n\nuser_name = pwd.getpwuid(uid)[0]\ngroup_name = grp.getgrgid(gid)[0]\n\nprint(f'pid={pid}, uid={uid}({user_name}), gid={gid}({group_name})')\n\nAlso updated to make reusable (you cannot bind the socket to a filename that already exists, so we clean the socket both before bind and after normal process exit)\n"
] | [
23,
1
] | [] | [] | [
"credentials",
"linux",
"python",
"sockets"
] | stackoverflow_0007982714_credentials_linux_python_sockets.txt |
Q:
Why some variables have changed and some have not?
Why have the variables a, c, and d not changed, while b has?
a = 0
b = []
c = []
d = 'a'
def func_a(a):
a += 1
def func_b(b):
b += [1]
def func_c(c):
c = [2]
def func_d(d):
d += 'd'
func_a(a)
func_b(b)
func_c(c)
func_d(d)
print('a = ', a)
print('b = ', b)
print('c = ', c)
print('d = ', d)
I think it has to do with the fact that all the variables are global, but then I don't understand why b changes.
A:
This is related to local and global scope; you can make the update visible by declaring the variable global and renaming the function parameter:
a = 0
def func_a(local_a):
global a
a += 1
func_a(a)
print('a = ', a)
# output: a = 1
"global a" means this function will use the global variable a.
If you try it this way:
a = 0
b = []
c = []
d = 'a'
def func_a(a):
global a
a += 1
func_a(a)
print('a = ', a)
You will get an error like this:
File "glob_and_local.py", line 8
global a
^^^^^^^^
SyntaxError: name 'a' is parameter and global
This happens because the parameter name and the global name conflict.
A:
Here, a, b, c, d are global variables.
For a:
a is an integer. When you call func_a(a), you only make changes in the local scope and don't return anything. That's why there is no change in the global variable a.
The same thing happens with the global variables c and d.
For b:
You are passing a list and appending to it in place.
In-place changes mutate the same object the global name refers to, so they are visible outside the function too.
NOTE:
here, b += [1] is equivalent to b.append(1)
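To see the difference between in-place mutation and rebinding, a small sketch (my addition) using id() may help:
b = []

def func_b(b):
    print(id(b))   # same id as the global list: both names point at one object
    b += [1]       # in-place mutation, visible to the caller

def func_c(c):
    c = [2]        # rebinding: the local name now points at a brand new list
    print(id(c))   # different id; the global list is untouched

print(id(b))
func_b(b)
func_c(b)
print(b)  # [1]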
| Why some variables have changed and some have not? | Why the variables a, c, d have not changed, but b has changed?
a = 0
b = []
c = []
d = 'a'
def func_a(a):
a += 1
def func_b(b):
b += [1]
def func_c(c):
c = [2]
def func_d(d):
d += 'd'
func_a(a)
func_b(b)
func_c(c)
func_d(d)
print('a = ', a)
print('b = ', b)
print('c = ', c)
print('d = ', d)
I think it has to do with the fact that all variables are global, but I don't understand why b changes then..
| [
"This is related by local and global scope, you can update using change the name of function parameter names;\na = 0\n\n\ndef func_a(local_a):\n global a\n a += 1\n\nfunc_a(a)\nprint('a = ', a)\n# output: a = 1\n\n\"global a\" meaning this function will use the global scope of a.\nIf you try to use with this way;\na = 0\nb = []\nc = []\nd = 'a'\n\n\ndef func_a(a):\n global a\n a += 1\n\nfunc_a(a)\nprint('a = ', a)\n\nYou will get an error like this;\n File \"glob_and_local.py\", line 8\n global a\n ^^^^^^^^\nSyntaxError: name 'a' is parameter and global\n\nBecause same names are conflicting.\n",
"Here, a, b, c, d are global variables.\nFor a:\na is an integer. when you are calling func_a(a) you are only making changes in the local scope but not returning anything. That's why there is no change in the global variable a.\nSame thing happening with global variables c and d.\nFor b:\nYou are passing an array and appending an array into it.\nappend method makes changes in global variables too.\nNOTE:\nhere, b += [1] is equivalent to b.append(1)\n"
] | [
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0074627501_python.txt |
Q:
How to install awscli using pip in library/node Docker image
I'm trying to install awscli using pip (as per Amazon's recommendations) in a custom Docker image that comes FROM library/node:6.11.2. Here's a repro:
FROM library/node:6.11.2
RUN apt-get update && \
apt-get install -y \
python \
python-pip \
python-setuptools \
groff \
less \
&& pip --no-cache-dir install --upgrade awscli \
&& apt-get clean
CMD ["/bin/bash"]
However, with the above I'm met with:
no such option: --no-cache-dir
Presumably because I've got incorrect versions of Python and/or Pip?
I'm installing Python, Pip, and awscli in a similar way with FROM maven:3.5.0-jdk-8 and there it works just fine. I'm unsure what the relevant differences between the two images are.
Removing said option from my Dockerfile doesn't do me much good either, because then I'm met with a big pile of different errors, an excerpt here:
Installing collected packages: awscli, PyYAML, docutils, rsa, colorama, botocore, s3transfer, pyasn1, jmespath, python-dateutil, futures, six
Running setup.py install for PyYAML
checking if libyaml is compilable
### ABBREVIATED ###
ext/_yaml.c:4:20: fatal error: Python.h: No such file or directory
#include "Python.h"
^
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
### ABBREVIATED ###
Bottom line: how do you properly install awscli in library/node:6.x based images?
A:
Adding python-dev as per this other answer works, but throws an alarming number of compiler warnings (errors?), so I went with a variation of @SergeyKoralev's answer, which needed some tweaking before it worked.
Here's the changes I needed to make this work:
Change to python3 and pip3 everywhere.
Add a statement to upgrade pip itself.
Separate the awscli install in a separate RUN command.
Here's a full repro that does seem to work:
FROM library/node:6.11.2
RUN apt-get update && \
apt-get install -y \
python3 \
python3-pip \
python3-setuptools \
groff \
less \
&& pip3 install --upgrade pip \
&& apt-get clean
RUN pip3 --no-cache-dir install --upgrade awscli
CMD ["/bin/bash"]
You can probably also keep the aws install in the same RUN layer if you add a shell command before the install that refreshes things after upgrading pip. Not sure how though.
A:
All the answers are about AWS CLI version 1. If you want version 2, try the below:
FROM node:lts-stretch-slim
RUN apt-get update && \
apt-get install -y \
unzip \
curl \
&& apt-get clean \
&& curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" \
&& unzip awscliv2.zip \
&& ./aws/install \
&& rm -rf \
awscliv2.zip \
&& apt-get -y purge curl \
&& apt-get -y purge unzip
CMD ["/bin/bash"]
A:
As you have correctly stated, the pip on the Docker image you are using is an older version that does not support --no-cache-dir. You can try updating it, or you can instead fix the second problem, which is about missing Python source headers. That can be fixed by installing the python-dev package. Just add it to the list of packages installed in the Dockerfile:
FROM library/node:6.11.2
RUN apt-get update && \
apt-get install -y \
python \
python-dev \
python-pip \
python-setuptools \
groff \
less \
&& pip install --upgrade awscli \
&& apt-get clean
CMD ["/bin/bash"]
You can then run aws which should be on your path.
A:
Your image is based on Debian Jessie, so you are installing Python 2.7. Try using Python 3.x:
apt-get install -y python3-pip
pip3 install awscli
A:
Install the AWS CLI in a Docker container using the command below:
apt upgrade -y;apt update;apt install python3 python3-pip python3-setuptools -y; python3 -m pip --no-cache-dir install --upgrade awscli
To check the assumed role or AWS identity, run the command below:
aws sts get-caller-identity
| How to install awscli using pip in library/node Docker image | I'm trying to install awscli using pip (as per Amazon's recommendations) in a custom Docker image that comes FROM library/node:6.11.2. Here's a repro:
FROM library/node:6.11.2
RUN apt-get update && \
apt-get install -y \
python \
python-pip \
python-setuptools \
groff \
less \
&& pip --no-cache-dir install --upgrade awscli \
&& apt-get clean
CMD ["/bin/bash"]
However, with the above I'm met with:
no such option: --no-cache-dir
Presumably because I've got incorrect versions of Python and/or Pip?
I'm installing Python, Pip, and awscli in a similar way with FROM maven:3.5.0-jdk-8 and there it works just fine. I'm unsure what the relevant differences between the two images are.
Removing said option from my Dockerfile doesn't do me much good either, because then I'm met with a big pile of different errors, an excerpt here:
Installing collected packages: awscli, PyYAML, docutils, rsa, colorama, botocore, s3transfer, pyasn1, jmespath, python-dateutil, futures, six
Running setup.py install for PyYAML
checking if libyaml is compilable
### ABBREVIATED ###
ext/_yaml.c:4:20: fatal error: Python.h: No such file or directory
#include "Python.h"
^
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
### ABBREVIATED ###
Bottom line: how do you properly install awscli in library/node:6.x based images?
| [
"Adding python-dev as per this other answer works, but throws an alarming number of compiler warnings (errors?), so I went with a variation of @SergeyKoralev's answer, which needed some tweaking before it worked.\nHere's the changes I needed to make this work:\n\nChange to python3 and pip3 everywhere.\nAdd a statement to upgrade pip itself.\nSeparate the awscli install in a separate RUN command.\n\nHere's a full repro that does seem to work:\nFROM library/node:6.11.2\n\nRUN apt-get update && \\\n apt-get install -y \\\n python3 \\\n python3-pip \\\n python3-setuptools \\\n groff \\\n less \\\n && pip3 install --upgrade pip \\\n && apt-get clean\n\nRUN pip3 --no-cache-dir install --upgrade awscli\n\nCMD [\"/bin/bash\"]\n\nYou can probably also keep the aws install in the same RUN layer if you add a shell command before the install that refreshes things after upgrading pip. Not sure how though.\n",
"All the answers are about aws-cli version 1, If you want version 2 try the below\nFROM node:lts-stretch-slim\n\nRUN apt-get update && \\\n apt-get install -y \\\n unzip \\\n curl \\\n && apt-get clean \\\n && curl \"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip\" -o \"awscliv2.zip\" \\\n && unzip awscliv2.zip \\\n && ./aws/install \\\n && rm -rf \\\n awscliv2.zip \\\n && apt-get -y purge curl \\\n && apt-get -y purge unzip \n\nCMD [\"/bin/bash\"]\n\n",
"As you have correctly stated, pip installing on the docker image you are using is an older one not supporting --no-cache-dir. You can try updating that or you can also fix the second problem which is about missing python source headers. This can be fixed by installing python-dev package. Just add that to the list of packages installed in the Dockerfile:\nFROM library/node:6.11.2\n\nRUN apt-get update && \\\n apt-get install -y \\\n python \\\n python-dev \\\n python-pip \\\n python-setuptools \\\n groff \\\n less \\\n && pip install --upgrade awscli \\\n && apt-get clean\n\nCMD [\"/bin/bash\"]\n\nYou can then run aws which should be on your path.\n",
"Your image is based on Debian Jessie, so you are installing Python 2.7. Try using Python 3.x:\napt-get install -y python3-pip\npip3 install awscli\n\n",
"Install AWS CLI in docker container using below command:\napt upgrade -y;apt update;apt install python3 python3-pip python3-setuptools -y; python3 -m pip --no-cache-dir install --upgrade awscli\n\nTo check the assumed role or AWS identity run below command:\naws sts get-caller-identity\n\n"
] | [
28,
7,
5,
5,
0
] | [] | [] | [
"amazon_web_services",
"docker",
"pip",
"python"
] | stackoverflow_0046038891_amazon_web_services_docker_pip_python.txt |
Q:
Case-insensitve search using PyKeePass
I am using PyKeePass to programmatically access a KeePass database. This code:
from pykeepass import PyKeePass
try:
kp = PyKeePass("info.kdbx", password="12345")
except Exception, e:
print "Got exception",e
lstEntry = kp.find_entries_by_notes(".*Chocolate.*",regex=True)
print lstEntry
print lstEntry[0].notes
prints:
[Entry: "Info/Chocolate (None)"]
Chocolate chips are a great invention
However, there is no way that I can get the result if I use "chocolate" instead of "Chocolate". I have tried the "i" modifier:
"/.*chocolate.*/i"
"(.*chocolate.*)i"
...without success. Any suggestions?
Thanks
A:
From the PyKeePass documentation, the syntax is:
find_entries_by_notes (notes, regex=False, flags=None, tree=None, history=False, first=False)
where title, username, password, url, notes and path are strings. These functions have optional regex boolean and flags string arguments, which means to interpret the string as an XSLT style regular expression with flags.
Thus, you need to use the i-flag like this:
find_entries_by_notes(".*chocolate.*", regex=True, flags="i")
A:
It seems the documentation has since changed: instead of the find_entries_by_notes function, you should now use the find_entries function.
See: https://github.com/libkeepass/pykeepass#finding-entries
The find_entries function has optional regex (boolean) and flags (string) arguments, which tell it to interpret search strings as XSLT-style regular expressions with the given flags.
find_entries(".*chocolate.*", regex=True, flags='i')
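Putting it together, a minimal sketch (the file name and password are placeholders from the question, and it assumes a recent pykeepass where find_entries accepts notes, regex and flags keyword arguments):
from pykeepass import PyKeePass

kp = PyKeePass("info.kdbx", password="12345")
# match the notes field case-insensitively via the XSLT-style 'i' flag
entries = kp.find_entries(notes=".*chocolate.*", regex=True, flags="i")
for entry in entries:
    print(entry.title, entry.notes)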
| Case-insensitve search using PyKeePass | I am using PyKeePass to programatically access a KeePass database. This code:
from pykeepass import PyKeePass
try:
kp = PyKeePass("info.kdbx", password="12345")
except Exception, e:
print "Got exception",e
lstEntry = kp.find_entries_by_notes(".*Chocolate.*",regex=True)
print lstEntry
print lstEntry[0].notes
prints:
[Entry: "Info/Chocolate (None)"]
Chocolate chips are a great invention
However, there is no way that I can get the result if I use "chocolate" instead of "Chocolate". I have tried the "i" modifier:
"/.*chocolate.*/i"
"(.*chocolate.*)i"
...without success. Any suggestions?
Thanks
| [
"From the PyKeePass documentation, the syntax is:\n\nfind_entries_by_notes (notes, regex=False, flags=None, tree=None, history=False, first=False)\nwhere title, username, password, url, notes and path are strings. These functions have optional regex boolean and flags string arguments, which means to interpret the string as an XSLT style regular expression with flags.\n\nThus, you need to use the i-flag like this:\nfind_entries_by_notes(\".*chocolate.*\", regex=True, \"i\")\n\n",
"It seems that now the documentation has been changed and instead of find_entries_by_notes function, you should use find_entries function instead.\nSee: https://github.com/libkeepass/pykeepass#finding-entries\nfind_entries function has optional regex boolean and flags string arguments, which means to interpret search strings as XSLT style regular expressions with flags.\nfind_entries(\".*chocolate.*\", regex=True, flags='i')\n\n"
] | [
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0044909986_python.txt |
Q:
Printing value in a bytes class object
I have a variable like this:
result = b'{"Results": {"WebServiceOutput0": [{"Label": 7.0, "f0": 0.0, "f1": 0.0, "f2": 0.0, "f3": 0.0, "f4": 0.0, "f5": 0.0, "f6": 0.0, "f7": 0.0, "f8": 0.0, "f9": 0.0, "f10": 0.0, "f11": 0.0, "f12": 0.0, "f13": 0.0, "f14": 0.0, "f15": 0.0, "f16": 0.0, "f17": 0.0, "f18": 0.0, "f19": 0.0, "f20": 0.0, "f21": 0.0, "f22": 0.0, "f23": 0.0, "f24": 0.0, "f25": 0.0, "f26": 0.0, "f27": 0.0, "f28": 0.0, "f29": 0.0, "f30": 0.0, "f31": 0.0, "f32": 0.0, "f33": 0.0, "f34": 0.0, "f35": 0.0, "f36": 0.0, "f37": 0.0, "f38": 0.0, "f39": 0.0, "f40": 0.0, "f41": 0.0, "f42": 0.0, "f43": 0.0, "f44": 0.0, "f45": 0.0, "f46": 0.0, "f47": 0.0, "f48": 0.0, "f49": 0.0, "f50": 0.0, "f51": 0.0, "f52": 0.0, "f53": 0.0, "f54": 0.0, "f55": 0.0, "f56": 0.0, "f57": 0.0, "f58": 0.0, "f59": 0.0, "f60": 0.0, "f61": 0.0, "f62": 0.0, "f63": 0.0, "f64": 0.0, "f65": 0.0, "f66": 0.0, "f67": 0.0, "f68": 0.0, "f69": 0.0, "f70": 0.0, "f71": 0.0, "f72": 0.0, "f73": 0.0, "f74": 0.0, "f75": 0.0, "f76": 0.0, "f77": 0.0, "f78": 0.0, "f79": 0.0, "f80": 0.0, "f81": 0.0, "f82": 0.0, "f83": 0.0, "f84": 0.0, "f85": 0.0, "f86": 0.0, "f87": 0.0, "f88": 0.0, "f89": 0.0, "f90": 0.0, "f91": 0.0, "f92": 0.0, "f93": 0.0, "f94": 0.0, "f95": 0.0, "f96": 0.0, "f97": 0.0, "f98": 0.0, "f99": 0.0, "f100": 0.0, "f101": 0.0, "f102": 0.0, "f103": 0.0, "f104": 0.0, "f105": 0.0, "f106": 0.0, "f107": 0.0, "f108": 0.0, "f109": 0.0, "f110": 0.0, "f111": 0.0, "f112": 0.0, "f113": 0.0, "f114": 0.0, "f115": 0.0, "f116": 0.0, "f117": 0.0, "f118": 0.0, "f119": 0.0, "f120": 0.0, "f121": 0.0, "f122": 0.0, "f123": 0.0, "f124": 0.0, "f125": 0.0, "f126": 0.0, "f127": 0.0, "f128": 0.0, "f129": 0.0, "f130": 0.0, "f131": 0.0, "f132": 0.0, "f133": 0.0, "f134": 0.0, "f135": 0.0, "f136": 0.0, "f137": 0.0, "f138": 0.0, "f139": 0.0, "f140": 0.0, "f141": 0.0, "f142": 0.0, "f143": 0.0, "f144": 0.0, "f145": 0.0, "f146": 0.0, "f147": 0.0, "f148": 0.0, "f149": 0.0, "f150": 0.0, "f151": 0.0, "f152": 0.0, "f153": 0.0, "f154": 0.0, "f155": 0.0, "f156": 0.0, "f157": 0.0, "f158": 0.0, "f159": 0.0, "f160": 0.0, "f161": 0.0, "f162": 0.0, "f163": 0.0, "f164": 0.0, "f165": 0.0, "f166": 0.0, "f167": 0.0, "f168": 0.0, "f169": 0.0, "f170": 0.0, "f171": 0.0, "f172": 0.0, "f173": 0.0, "f174": 0.0, "f175": 0.0, "f176": 0.0, "f177": 0.0, "f178": 0.0, "f179": 0.0, "f180": 0.0, "f181": 0.0, "f182": 0.0, "f183": 0.0, "f184": 0.0, "f185": 0.0, "f186": 0.0, "f187": 0.0, "f188": 0.0, "f189": 0.0, "f190": 0.0, "f191": 0.0, "f192": 0.0, "f193": 0.0, "f194": 0.0, "f195": 0.0, "f196": 0.0, "f197": 0.0, "f198": 0.0, "f199": 0.0, "f200": 0.0, "f201": 0.0, "f202": 84.0, "f203": 185.0, "f204": 159.0, "f205": 151.0, "f206": 60.0, "f207": 36.0, "f208": 0.0, "f209": 0.0, "f210": 0.0, "f211": 0.0, "f212": 0.0, "f213": 0.0, "f214": 0.0, "f215": 0.0, "f216": 0.0, "f217": 0.0, "f218": 0.0, "f219": 0.0, "f220": 0.0, "f221": 0.0, "f222": 0.0, "f223": 0.0, "f224": 0.0, "f225": 0.0, "f226": 0.0, "f227": 0.0, "f228": 0.0, "f229": 0.0, "f230": 222.0, "f231": 254.0, "f232": 254.0, "f233": 254.0, "f234": 254.0, "f235": 241.0, "f236": 198.0, "f237": 198.0, "f238": 198.0, "f239": 198.0, "f240": 198.0, "f241": 198.0, "f242": 198.0, "f243": 198.0, "f244": 170.0, "f245": 52.0, "f246": 0.0, "f247": 0.0, "f248": 0.0, "f249": 0.0, "f250": 0.0, "f251": 0.0, "f252": 0.0, "f253": 0.0, "f254": 0.0, "f255": 0.0, "f256": 0.0, "f257": 0.0, "f258": 67.0, "f259": 114.0, "f260": 72.0, "f261": 114.0, "f262": 163.0, "f263": 227.0, "f264": 254.0, "f265": 225.0, "f266": 254.0, "f267": 254.0, "f268": 254.0, "f269": 250.0, "f270": 229.0, "f271": 254.0, 
"f272": 254.0, "f273": 140.0, "f274": 0.0, "f275": 0.0, "f276": 0.0, "f277": 0.0, "f278": 0.0, "f279": 0.0, "f280": 0.0, "f281": 0.0, "f282": 0.0, "f283": 0.0, "f284": 0.0, "f285": 0.0, "f286": 0.0, "f287": 0.0, "f288": 0.0, "f289": 0.0, "f290": 0.0, "f291": 17.0, "f292": 66.0, "f293": 14.0, "f294": 67.0, "f295": 67.0, "f296": 67.0, "f297": 59.0, "f298": 21.0, "f299": 236.0, "f300": 254.0, "f301": 106.0, "f302": 0.0, "f303": 0.0, "f304": 0.0, "f305": 0.0, "f306": 0.0, "f307": 0.0, "f308": 0.0, "f309": 0.0, "f310": 0.0, "f311": 0.0, "f312": 0.0, "f313": 0.0, "f314": 0.0, "f315": 0.0, "f316": 0.0, "f317": 0.0, "f318": 0.0, "f319": 0.0, "f320": 0.0, "f321": 0.0, "f322": 0.0, "f323": 0.0, "f324": 0.0, "f325": 0.0, "f326": 83.0, "f327": 253.0, "f328": 209.0, "f329": 18.0, "f330": 0.0, "f331": 0.0, "f332": 0.0, "f333": 0.0, "f334": 0.0, "f335": 0.0, "f336": 0.0, "f337": 0.0, "f338": 0.0, "f339": 0.0, "f340": 0.0, "f341": 0.0, "f342": 0.0, "f343": 0.0, "f344": 0.0, "f345": 0.0, "f346": 0.0, "f347": 0.0, "f348": 0.0, "f349": 0.0, "f350": 0.0, "f351": 0.0, "f352": 0.0, "f353": 22.0, "f354": 233.0, "f355": 255.0, "f356": 83.0, "f357": 0.0, "f358": 0.0, "f359": 0.0, "f360": 0.0, "f361": 0.0, "f362": 0.0, "f363": 0.0, "f364": 0.0, "f365": 0.0, "f366": 0.0, "f367": 0.0, "f368": 0.0, "f369": 0.0, "f370": 0.0, "f371": 0.0, "f372": 0.0, "f373": 0.0, "f374": 0.0, "f375": 0.0, "f376": 0.0, "f377": 0.0, "f378": 0.0, "f379": 0.0, "f380": 0.0, "f381": 129.0, "f382": 254.0, "f383": 238.0, "f384": 44.0, "f385": 0.0, "f386": 0.0, "f387": 0.0, "f388": 0.0, "f389": 0.0, "f390": 0.0, "f391": 0.0, "f392": 0.0, "f393": 0.0, "f394": 0.0, "f395": 0.0, "f396": 0.0, "f397": 0.0, "f398": 0.0, "f399": 0.0, "f400": 0.0, "f401": 0.0, "f402": 0.0, "f403": 0.0, "f404": 0.0, "f405": 0.0, "f406": 0.0, "f407": 0.0, "f408": 59.0, "f409": 249.0, "f410": 254.0, "f411": 62.0, "f412": 0.0, "f413": 0.0, "f414": 0.0, "f415": 0.0, "f416": 0.0, "f417": 0.0, "f418": 0.0, "f419": 0.0, "f420": 0.0, "f421": 0.0, "f422": 0.0, "f423": 0.0, "f424": 0.0, "f425": 0.0, "f426": 0.0, "f427": 0.0, "f428": 0.0, "f429": 0.0, "f430": 0.0, "f431": 0.0, "f432": 0.0, "f433": 0.0, "f434": 0.0, "f435": 0.0, "f436": 133.0, "f437": 254.0, "f438": 187.0, "f439": 5.0, "f440": 0.0, "f441": 0.0, "f442": 0.0, "f443": 0.0, "f444": 0.0, "f445": 0.0, "f446": 0.0, "f447": 0.0, "f448": 0.0, "f449": 0.0, "f450": 0.0, "f451": 0.0, "f452": 0.0, "f453": 0.0, "f454": 0.0, "f455": 0.0, "f456": 0.0, "f457": 0.0, "f458": 0.0, "f459": 0.0, "f460": 0.0, "f461": 0.0, "f462": 0.0, "f463": 9.0, "f464": 205.0, "f465": 248.0, "f466": 58.0, "f467": 0.0, "f468": 0.0, "f469": 0.0, "f470": 0.0, "f471": 0.0, "f472": 0.0, "f473": 0.0, "f474": 0.0, "f475": 0.0, "f476": 0.0, "f477": 0.0, "f478": 0.0, "f479": 0.0, "f480": 0.0, "f481": 0.0, "f482": 0.0, "f483": 0.0, "f484": 0.0, "f485": 0.0, "f486": 0.0, "f487": 0.0, "f488": 0.0, "f489": 0.0, "f490": 0.0, "f491": 126.0, "f492": 254.0, "f493": 182.0, "f494": 0.0, "f495": 0.0, "f496": 0.0, "f497": 0.0, "f498": 0.0, "f499": 0.0, "f500": 0.0, "f501": 0.0, "f502": 0.0, "f503": 0.0, "f504": 0.0, "f505": 0.0, "f506": 0.0, "f507": 0.0, "f508": 0.0, "f509": 0.0, "f510": 0.0, "f511": 0.0, "f512": 0.0, "f513": 0.0, "f514": 0.0, "f515": 0.0, "f516": 0.0, "f517": 0.0, "f518": 75.0, "f519": 251.0, "f520": 240.0, "f521": 57.0, "f522": 0.0, "f523": 0.0, "f524": 0.0, "f525": 0.0, "f526": 0.0, "f527": 0.0, "f528": 0.0, "f529": 0.0, "f530": 0.0, "f531": 0.0, "f532": 0.0, "f533": 0.0, "f534": 0.0, "f535": 0.0, "f536": 0.0, "f537": 0.0, "f538": 0.0, "f539": 0.0, 
"f540": 0.0, "f541": 0.0, "f542": 0.0, "f543": 0.0, "f544": 0.0, "f545": 19.0, "f546": 221.0, "f547": 254.0, "f548": 166.0, "f549": 0.0, "f550": 0.0, "f551": 0.0, "f552": 0.0, "f553": 0.0, "f554": 0.0, "f555": 0.0, "f556": 0.0, "f557": 0.0, "f558": 0.0, "f559": 0.0, "f560": 0.0, "f561": 0.0, "f562": 0.0, "f563": 0.0, "f564": 0.0, "f565": 0.0, "f566": 0.0, "f567": 0.0, "f568": 0.0, "f569": 0.0, "f570": 0.0, "f571": 0.0, "f572": 3.0, "f573": 203.0, "f574": 254.0, "f575": 219.0, "f576": 35.0, "f577": 0.0, "f578": 0.0, "f579": 0.0, "f580": 0.0, "f581": 0.0, "f582": 0.0, "f583": 0.0, "f584": 0.0, "f585": 0.0, "f586": 0.0, "f587": 0.0, "f588": 0.0, "f589": 0.0, "f590": 0.0, "f591": 0.0, "f592": 0.0, "f593": 0.0, "f594": 0.0, "f595": 0.0, "f596": 0.0, "f597": 0.0, "f598": 0.0, "f599": 0.0, "f600": 38.0, "f601": 254.0, "f602": 254.0, "f603": 77.0, "f604": 0.0, "f605": 0.0, "f606": 0.0, "f607": 0.0, "f608": 0.0, "f609": 0.0, "f610": 0.0, "f611": 0.0, "f612": 0.0, "f613": 0.0, "f614": 0.0, "f615": 0.0, "f616": 0.0, "f617": 0.0, "f618": 0.0, "f619": 0.0, "f620": 0.0, "f621": 0.0, "f622": 0.0, "f623": 0.0, "f624": 0.0, "f625": 0.0, "f626": 0.0, "f627": 31.0, "f628": 224.0, "f629": 254.0, "f630": 115.0, "f631": 1.0, "f632": 0.0, "f633": 0.0, "f634": 0.0, "f635": 0.0, "f636": 0.0, "f637": 0.0, "f638": 0.0, "f639": 0.0, "f640": 0.0, "f641": 0.0, "f642": 0.0, "f643": 0.0, "f644": 0.0, "f645": 0.0, "f646": 0.0, "f647": 0.0, "f648": 0.0, "f649": 0.0, "f650": 0.0, "f651": 0.0, "f652": 0.0, "f653": 0.0, "f654": 0.0, "f655": 133.0, "f656": 254.0, "f657": 254.0, "f658": 52.0, "f659": 0.0, "f660": 0.0, "f661": 0.0, "f662": 0.0, "f663": 0.0, "f664": 0.0, "f665": 0.0, "f666": 0.0, "f667": 0.0, "f668": 0.0, "f669": 0.0, "f670": 0.0, "f671": 0.0, "f672": 0.0, "f673": 0.0, "f674": 0.0, "f675": 0.0, "f676": 0.0, "f677": 0.0, "f678": 0.0, "f679": 0.0, "f680": 0.0, "f681": 0.0, "f682": 61.0, "f683": 242.0, "f684": 254.0, "f685": 254.0, "f686": 52.0, "f687": 0.0, "f688": 0.0, "f689": 0.0, "f690": 0.0, "f691": 0.0, "f692": 0.0, "f693": 0.0, "f694": 0.0, "f695": 0.0, "f696": 0.0, "f697": 0.0, "f698": 0.0, "f699": 0.0, "f700": 0.0, "f701": 0.0, "f702": 0.0, "f703": 0.0, "f704": 0.0, "f705": 0.0, "f706": 0.0, "f707": 0.0, "f708": 0.0, "f709": 0.0, "f710": 121.0, "f711": 254.0, "f712": 254.0, "f713": 219.0, "f714": 40.0, "f715": 0.0, "f716": 0.0, "f717": 0.0, "f718": 0.0, "f719": 0.0, "f720": 0.0, "f721": 0.0, "f722": 0.0, "f723": 0.0, "f724": 0.0, "f725": 0.0, "f726": 0.0, "f727": 0.0, "f728": 0.0, "f729": 0.0, "f730": 0.0, "f731": 0.0, "f732": 0.0, "f733": 0.0, "f734": 0.0, "f735": 0.0, "f736": 0.0, "f737": 0.0, "f738": 121.0, "f739": 254.0, "f740": 207.0, "f741": 18.0, "f742": 0.0, "f743": 0.0, "f744": 0.0, "f745": 0.0, "f746": 0.0, "f747": 0.0, "f748": 0.0, "f749": 0.0, "f750": 0.0, "f751": 0.0, "f752": 0.0, "f753": 0.0, "f754": 0.0, "f755": 0.0, "f756": 0.0, "f757": 0.0, "f758": 0.0, "f759": 0.0, "f760": 0.0, "f761": 0.0, "f762": 0.0, "f763": 0.0, "f764": 0.0, "f765": 0.0, "f766": 0.0, "f767": 0.0, "f768": 0.0, "f769": 0.0, "f770": 0.0, "f771": 0.0, "f772": 0.0, "f773": 0.0, "f774": 0.0, "f775": 0.0, "f776": 0.0, "f777": 0.0, "f778": 0.0, "f779": 0.0, "f780": 0.0, "f781": 0.0, "f782": 0.0, "f783": 0.0, "Scored Probabilities_0": 1.7306872933250431e-07, "Scored Probabilities_1": 3.3177526751193424e-09, "Scored Probabilities_2": 6.772526557729492e-07, "Scored Probabilities_3": 5.018638683008445e-05, "Scored Probabilities_4": 3.5781842069911e-11, "Scored Probabilities_5": 4.2981825008019914e-08, "Scored Probabilities_6": 
6.350046754243676e-14, "Scored Probabilities_7": 0.9999483485221952, "Scored Probabilities_8": 1.3431149602373933e-07, "Scored Probabilities_9": 4.341226706439234e-07, "Scored Labels": 7.0}]}}'
This is the first time I've seen this type of object. How can I get the value of Scored Probabilities_n (n is from 0 to 9) and the Scored Labels? Furthermore, can I get the maximum value (in this case it should be "Scored Probabilities_7": 0.9999483485221952)?
I tried some ways to get the highest value from the variable, but it's somehow difficult:
print(max(result))
125
As far as I can see, there is no value of 125 in the variable. So does that mean normal calculations cannot be done on this either? I hope to get an answer to my question above. Thank you!
A:
You can use:
import json
d = json.loads(result.decode('utf-8'))['Results']['WebServiceOutput0'][0]
d['Scored Labels']
# 7.0
max_p = max((k for k in d if k.startswith('Scored Probabilities_')), key=d.get)
# 'Scored Probabilities_7'
d[max_p]
# 0.9999483485221952
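As a side note on the max(result) == 125 observation: result is a bytes object, and iterating over bytes yields integer byte values, so max(result) returns the largest byte (125 is the code for '}'), not anything from the JSON. If you also want all ten probabilities as plain numbers, a small sketch building on the parsed dictionary d above could be:
import json

# json.loads accepts bytes directly on Python 3.6+, so no decode() is needed.
d = json.loads(result)['Results']['WebServiceOutput0'][0]

# Collect the ten class probabilities in order and pick the best class.
probs = [d[f'Scored Probabilities_{i}'] for i in range(10)]
best_class = max(range(10), key=lambda i: probs[i])

print(best_class, probs[best_class], d['Scored Labels'])
# 7 0.9999483485221952 7.0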
| Printing value in a bytes class object | I have a variable like this:
result = b'{"Results": {"WebServiceOutput0": [{"Label": 7.0, "f0": 0.0, "f1": 0.0, "f2": 0.0, "f3": 0.0, "f4": 0.0, "f5": 0.0, "f6": 0.0, "f7": 0.0, "f8": 0.0, "f9": 0.0, "f10": 0.0, "f11": 0.0, "f12": 0.0, "f13": 0.0, "f14": 0.0, "f15": 0.0, "f16": 0.0, "f17": 0.0, "f18": 0.0, "f19": 0.0, "f20": 0.0, "f21": 0.0, "f22": 0.0, "f23": 0.0, "f24": 0.0, "f25": 0.0, "f26": 0.0, "f27": 0.0, "f28": 0.0, "f29": 0.0, "f30": 0.0, "f31": 0.0, "f32": 0.0, "f33": 0.0, "f34": 0.0, "f35": 0.0, "f36": 0.0, "f37": 0.0, "f38": 0.0, "f39": 0.0, "f40": 0.0, "f41": 0.0, "f42": 0.0, "f43": 0.0, "f44": 0.0, "f45": 0.0, "f46": 0.0, "f47": 0.0, "f48": 0.0, "f49": 0.0, "f50": 0.0, "f51": 0.0, "f52": 0.0, "f53": 0.0, "f54": 0.0, "f55": 0.0, "f56": 0.0, "f57": 0.0, "f58": 0.0, "f59": 0.0, "f60": 0.0, "f61": 0.0, "f62": 0.0, "f63": 0.0, "f64": 0.0, "f65": 0.0, "f66": 0.0, "f67": 0.0, "f68": 0.0, "f69": 0.0, "f70": 0.0, "f71": 0.0, "f72": 0.0, "f73": 0.0, "f74": 0.0, "f75": 0.0, "f76": 0.0, "f77": 0.0, "f78": 0.0, "f79": 0.0, "f80": 0.0, "f81": 0.0, "f82": 0.0, "f83": 0.0, "f84": 0.0, "f85": 0.0, "f86": 0.0, "f87": 0.0, "f88": 0.0, "f89": 0.0, "f90": 0.0, "f91": 0.0, "f92": 0.0, "f93": 0.0, "f94": 0.0, "f95": 0.0, "f96": 0.0, "f97": 0.0, "f98": 0.0, "f99": 0.0, "f100": 0.0, "f101": 0.0, "f102": 0.0, "f103": 0.0, "f104": 0.0, "f105": 0.0, "f106": 0.0, "f107": 0.0, "f108": 0.0, "f109": 0.0, "f110": 0.0, "f111": 0.0, "f112": 0.0, "f113": 0.0, "f114": 0.0, "f115": 0.0, "f116": 0.0, "f117": 0.0, "f118": 0.0, "f119": 0.0, "f120": 0.0, "f121": 0.0, "f122": 0.0, "f123": 0.0, "f124": 0.0, "f125": 0.0, "f126": 0.0, "f127": 0.0, "f128": 0.0, "f129": 0.0, "f130": 0.0, "f131": 0.0, "f132": 0.0, "f133": 0.0, "f134": 0.0, "f135": 0.0, "f136": 0.0, "f137": 0.0, "f138": 0.0, "f139": 0.0, "f140": 0.0, "f141": 0.0, "f142": 0.0, "f143": 0.0, "f144": 0.0, "f145": 0.0, "f146": 0.0, "f147": 0.0, "f148": 0.0, "f149": 0.0, "f150": 0.0, "f151": 0.0, "f152": 0.0, "f153": 0.0, "f154": 0.0, "f155": 0.0, "f156": 0.0, "f157": 0.0, "f158": 0.0, "f159": 0.0, "f160": 0.0, "f161": 0.0, "f162": 0.0, "f163": 0.0, "f164": 0.0, "f165": 0.0, "f166": 0.0, "f167": 0.0, "f168": 0.0, "f169": 0.0, "f170": 0.0, "f171": 0.0, "f172": 0.0, "f173": 0.0, "f174": 0.0, "f175": 0.0, "f176": 0.0, "f177": 0.0, "f178": 0.0, "f179": 0.0, "f180": 0.0, "f181": 0.0, "f182": 0.0, "f183": 0.0, "f184": 0.0, "f185": 0.0, "f186": 0.0, "f187": 0.0, "f188": 0.0, "f189": 0.0, "f190": 0.0, "f191": 0.0, "f192": 0.0, "f193": 0.0, "f194": 0.0, "f195": 0.0, "f196": 0.0, "f197": 0.0, "f198": 0.0, "f199": 0.0, "f200": 0.0, "f201": 0.0, "f202": 84.0, "f203": 185.0, "f204": 159.0, "f205": 151.0, "f206": 60.0, "f207": 36.0, "f208": 0.0, "f209": 0.0, "f210": 0.0, "f211": 0.0, "f212": 0.0, "f213": 0.0, "f214": 0.0, "f215": 0.0, "f216": 0.0, "f217": 0.0, "f218": 0.0, "f219": 0.0, "f220": 0.0, "f221": 0.0, "f222": 0.0, "f223": 0.0, "f224": 0.0, "f225": 0.0, "f226": 0.0, "f227": 0.0, "f228": 0.0, "f229": 0.0, "f230": 222.0, "f231": 254.0, "f232": 254.0, "f233": 254.0, "f234": 254.0, "f235": 241.0, "f236": 198.0, "f237": 198.0, "f238": 198.0, "f239": 198.0, "f240": 198.0, "f241": 198.0, "f242": 198.0, "f243": 198.0, "f244": 170.0, "f245": 52.0, "f246": 0.0, "f247": 0.0, "f248": 0.0, "f249": 0.0, "f250": 0.0, "f251": 0.0, "f252": 0.0, "f253": 0.0, "f254": 0.0, "f255": 0.0, "f256": 0.0, "f257": 0.0, "f258": 67.0, "f259": 114.0, "f260": 72.0, "f261": 114.0, "f262": 163.0, "f263": 227.0, "f264": 254.0, "f265": 225.0, "f266": 254.0, "f267": 254.0, "f268": 254.0, "f269": 250.0, "f270": 229.0, "f271": 254.0, 
"f272": 254.0, "f273": 140.0, "f274": 0.0, "f275": 0.0, "f276": 0.0, "f277": 0.0, "f278": 0.0, "f279": 0.0, "f280": 0.0, "f281": 0.0, "f282": 0.0, "f283": 0.0, "f284": 0.0, "f285": 0.0, "f286": 0.0, "f287": 0.0, "f288": 0.0, "f289": 0.0, "f290": 0.0, "f291": 17.0, "f292": 66.0, "f293": 14.0, "f294": 67.0, "f295": 67.0, "f296": 67.0, "f297": 59.0, "f298": 21.0, "f299": 236.0, "f300": 254.0, "f301": 106.0, "f302": 0.0, "f303": 0.0, "f304": 0.0, "f305": 0.0, "f306": 0.0, "f307": 0.0, "f308": 0.0, "f309": 0.0, "f310": 0.0, "f311": 0.0, "f312": 0.0, "f313": 0.0, "f314": 0.0, "f315": 0.0, "f316": 0.0, "f317": 0.0, "f318": 0.0, "f319": 0.0, "f320": 0.0, "f321": 0.0, "f322": 0.0, "f323": 0.0, "f324": 0.0, "f325": 0.0, "f326": 83.0, "f327": 253.0, "f328": 209.0, "f329": 18.0, "f330": 0.0, "f331": 0.0, "f332": 0.0, "f333": 0.0, "f334": 0.0, "f335": 0.0, "f336": 0.0, "f337": 0.0, "f338": 0.0, "f339": 0.0, "f340": 0.0, "f341": 0.0, "f342": 0.0, "f343": 0.0, "f344": 0.0, "f345": 0.0, "f346": 0.0, "f347": 0.0, "f348": 0.0, "f349": 0.0, "f350": 0.0, "f351": 0.0, "f352": 0.0, "f353": 22.0, "f354": 233.0, "f355": 255.0, "f356": 83.0, "f357": 0.0, "f358": 0.0, "f359": 0.0, "f360": 0.0, "f361": 0.0, "f362": 0.0, "f363": 0.0, "f364": 0.0, "f365": 0.0, "f366": 0.0, "f367": 0.0, "f368": 0.0, "f369": 0.0, "f370": 0.0, "f371": 0.0, "f372": 0.0, "f373": 0.0, "f374": 0.0, "f375": 0.0, "f376": 0.0, "f377": 0.0, "f378": 0.0, "f379": 0.0, "f380": 0.0, "f381": 129.0, "f382": 254.0, "f383": 238.0, "f384": 44.0, "f385": 0.0, "f386": 0.0, "f387": 0.0, "f388": 0.0, "f389": 0.0, "f390": 0.0, "f391": 0.0, "f392": 0.0, "f393": 0.0, "f394": 0.0, "f395": 0.0, "f396": 0.0, "f397": 0.0, "f398": 0.0, "f399": 0.0, "f400": 0.0, "f401": 0.0, "f402": 0.0, "f403": 0.0, "f404": 0.0, "f405": 0.0, "f406": 0.0, "f407": 0.0, "f408": 59.0, "f409": 249.0, "f410": 254.0, "f411": 62.0, "f412": 0.0, "f413": 0.0, "f414": 0.0, "f415": 0.0, "f416": 0.0, "f417": 0.0, "f418": 0.0, "f419": 0.0, "f420": 0.0, "f421": 0.0, "f422": 0.0, "f423": 0.0, "f424": 0.0, "f425": 0.0, "f426": 0.0, "f427": 0.0, "f428": 0.0, "f429": 0.0, "f430": 0.0, "f431": 0.0, "f432": 0.0, "f433": 0.0, "f434": 0.0, "f435": 0.0, "f436": 133.0, "f437": 254.0, "f438": 187.0, "f439": 5.0, "f440": 0.0, "f441": 0.0, "f442": 0.0, "f443": 0.0, "f444": 0.0, "f445": 0.0, "f446": 0.0, "f447": 0.0, "f448": 0.0, "f449": 0.0, "f450": 0.0, "f451": 0.0, "f452": 0.0, "f453": 0.0, "f454": 0.0, "f455": 0.0, "f456": 0.0, "f457": 0.0, "f458": 0.0, "f459": 0.0, "f460": 0.0, "f461": 0.0, "f462": 0.0, "f463": 9.0, "f464": 205.0, "f465": 248.0, "f466": 58.0, "f467": 0.0, "f468": 0.0, "f469": 0.0, "f470": 0.0, "f471": 0.0, "f472": 0.0, "f473": 0.0, "f474": 0.0, "f475": 0.0, "f476": 0.0, "f477": 0.0, "f478": 0.0, "f479": 0.0, "f480": 0.0, "f481": 0.0, "f482": 0.0, "f483": 0.0, "f484": 0.0, "f485": 0.0, "f486": 0.0, "f487": 0.0, "f488": 0.0, "f489": 0.0, "f490": 0.0, "f491": 126.0, "f492": 254.0, "f493": 182.0, "f494": 0.0, "f495": 0.0, "f496": 0.0, "f497": 0.0, "f498": 0.0, "f499": 0.0, "f500": 0.0, "f501": 0.0, "f502": 0.0, "f503": 0.0, "f504": 0.0, "f505": 0.0, "f506": 0.0, "f507": 0.0, "f508": 0.0, "f509": 0.0, "f510": 0.0, "f511": 0.0, "f512": 0.0, "f513": 0.0, "f514": 0.0, "f515": 0.0, "f516": 0.0, "f517": 0.0, "f518": 75.0, "f519": 251.0, "f520": 240.0, "f521": 57.0, "f522": 0.0, "f523": 0.0, "f524": 0.0, "f525": 0.0, "f526": 0.0, "f527": 0.0, "f528": 0.0, "f529": 0.0, "f530": 0.0, "f531": 0.0, "f532": 0.0, "f533": 0.0, "f534": 0.0, "f535": 0.0, "f536": 0.0, "f537": 0.0, "f538": 0.0, "f539": 0.0, 
"f540": 0.0, "f541": 0.0, "f542": 0.0, "f543": 0.0, "f544": 0.0, "f545": 19.0, "f546": 221.0, "f547": 254.0, "f548": 166.0, "f549": 0.0, "f550": 0.0, "f551": 0.0, "f552": 0.0, "f553": 0.0, "f554": 0.0, "f555": 0.0, "f556": 0.0, "f557": 0.0, "f558": 0.0, "f559": 0.0, "f560": 0.0, "f561": 0.0, "f562": 0.0, "f563": 0.0, "f564": 0.0, "f565": 0.0, "f566": 0.0, "f567": 0.0, "f568": 0.0, "f569": 0.0, "f570": 0.0, "f571": 0.0, "f572": 3.0, "f573": 203.0, "f574": 254.0, "f575": 219.0, "f576": 35.0, "f577": 0.0, "f578": 0.0, "f579": 0.0, "f580": 0.0, "f581": 0.0, "f582": 0.0, "f583": 0.0, "f584": 0.0, "f585": 0.0, "f586": 0.0, "f587": 0.0, "f588": 0.0, "f589": 0.0, "f590": 0.0, "f591": 0.0, "f592": 0.0, "f593": 0.0, "f594": 0.0, "f595": 0.0, "f596": 0.0, "f597": 0.0, "f598": 0.0, "f599": 0.0, "f600": 38.0, "f601": 254.0, "f602": 254.0, "f603": 77.0, "f604": 0.0, "f605": 0.0, "f606": 0.0, "f607": 0.0, "f608": 0.0, "f609": 0.0, "f610": 0.0, "f611": 0.0, "f612": 0.0, "f613": 0.0, "f614": 0.0, "f615": 0.0, "f616": 0.0, "f617": 0.0, "f618": 0.0, "f619": 0.0, "f620": 0.0, "f621": 0.0, "f622": 0.0, "f623": 0.0, "f624": 0.0, "f625": 0.0, "f626": 0.0, "f627": 31.0, "f628": 224.0, "f629": 254.0, "f630": 115.0, "f631": 1.0, "f632": 0.0, "f633": 0.0, "f634": 0.0, "f635": 0.0, "f636": 0.0, "f637": 0.0, "f638": 0.0, "f639": 0.0, "f640": 0.0, "f641": 0.0, "f642": 0.0, "f643": 0.0, "f644": 0.0, "f645": 0.0, "f646": 0.0, "f647": 0.0, "f648": 0.0, "f649": 0.0, "f650": 0.0, "f651": 0.0, "f652": 0.0, "f653": 0.0, "f654": 0.0, "f655": 133.0, "f656": 254.0, "f657": 254.0, "f658": 52.0, "f659": 0.0, "f660": 0.0, "f661": 0.0, "f662": 0.0, "f663": 0.0, "f664": 0.0, "f665": 0.0, "f666": 0.0, "f667": 0.0, "f668": 0.0, "f669": 0.0, "f670": 0.0, "f671": 0.0, "f672": 0.0, "f673": 0.0, "f674": 0.0, "f675": 0.0, "f676": 0.0, "f677": 0.0, "f678": 0.0, "f679": 0.0, "f680": 0.0, "f681": 0.0, "f682": 61.0, "f683": 242.0, "f684": 254.0, "f685": 254.0, "f686": 52.0, "f687": 0.0, "f688": 0.0, "f689": 0.0, "f690": 0.0, "f691": 0.0, "f692": 0.0, "f693": 0.0, "f694": 0.0, "f695": 0.0, "f696": 0.0, "f697": 0.0, "f698": 0.0, "f699": 0.0, "f700": 0.0, "f701": 0.0, "f702": 0.0, "f703": 0.0, "f704": 0.0, "f705": 0.0, "f706": 0.0, "f707": 0.0, "f708": 0.0, "f709": 0.0, "f710": 121.0, "f711": 254.0, "f712": 254.0, "f713": 219.0, "f714": 40.0, "f715": 0.0, "f716": 0.0, "f717": 0.0, "f718": 0.0, "f719": 0.0, "f720": 0.0, "f721": 0.0, "f722": 0.0, "f723": 0.0, "f724": 0.0, "f725": 0.0, "f726": 0.0, "f727": 0.0, "f728": 0.0, "f729": 0.0, "f730": 0.0, "f731": 0.0, "f732": 0.0, "f733": 0.0, "f734": 0.0, "f735": 0.0, "f736": 0.0, "f737": 0.0, "f738": 121.0, "f739": 254.0, "f740": 207.0, "f741": 18.0, "f742": 0.0, "f743": 0.0, "f744": 0.0, "f745": 0.0, "f746": 0.0, "f747": 0.0, "f748": 0.0, "f749": 0.0, "f750": 0.0, "f751": 0.0, "f752": 0.0, "f753": 0.0, "f754": 0.0, "f755": 0.0, "f756": 0.0, "f757": 0.0, "f758": 0.0, "f759": 0.0, "f760": 0.0, "f761": 0.0, "f762": 0.0, "f763": 0.0, "f764": 0.0, "f765": 0.0, "f766": 0.0, "f767": 0.0, "f768": 0.0, "f769": 0.0, "f770": 0.0, "f771": 0.0, "f772": 0.0, "f773": 0.0, "f774": 0.0, "f775": 0.0, "f776": 0.0, "f777": 0.0, "f778": 0.0, "f779": 0.0, "f780": 0.0, "f781": 0.0, "f782": 0.0, "f783": 0.0, "Scored Probabilities_0": 1.7306872933250431e-07, "Scored Probabilities_1": 3.3177526751193424e-09, "Scored Probabilities_2": 6.772526557729492e-07, "Scored Probabilities_3": 5.018638683008445e-05, "Scored Probabilities_4": 3.5781842069911e-11, "Scored Probabilities_5": 4.2981825008019914e-08, "Scored Probabilities_6": 
6.350046754243676e-14, "Scored Probabilities_7": 0.9999483485221952, "Scored Probabilities_8": 1.3431149602373933e-07, "Scored Probabilities_9": 4.341226706439234e-07, "Scored Labels": 7.0}]}}'
This is the first time I've seen this type of object. How can I get the value of Scored Probabilities_n (n is from 0 to 9) and the Scored Labels? Furthermore, can I get the maximum value (in this case it should be "Scored Probabilities_7": 0.9999483485221952)?
I tried some ways to get the highest value from the variable, but it's somehow difficult:
print(max(result))
125
As far as I can see, there is no value of 125 in the variable. So does that mean normal calculations cannot be done on this either? I hope to get an answer to my question above. Thank you!
| [
"You can use:\nimport json\nd = json.loads(result.decode('utf-8'))['Results']['WebServiceOutput0'][0]\n\nd['Scored Labels']\n# 1.7306872933250431e-07\n\nmax_p = max((k for k in d if k.startswith('Scored Probabilities_')), key=d.get)\n# 'Scored Probabilities_7'\n\nd[max_p]\n# 0.9999483485221952\n\n"
] | [
1
] | [] | [] | [
"max",
"python"
] | stackoverflow_0074627798_max_python.txt |
Q:
How to post an image on a website with python requests
Trying to post an image from file to this website
[https://demo.neural-university.ru/emotion-recognition.html?ysclid=lauqlu1uq6710345308]
It asks to post an image like this:
curl -X POST -F "[email protected]" https://srv2.demo.neural-university.ru/emotion_recognition/
I tried it with the file "user_photo_510495289.jpg", using requests.post
import requests
url = 'https://srv2.demo.neural-university.ru/emotion_recognition/'
files = {'od_content': open('user_photo_510495289.jpg', 'rb')}
data = {'od_content': open('user_photo_510495289.jpg', 'rb').read()}
requests.post(url, data=data)
#requests.post(url, files=files)
Neither posting with "data" nor with "files" worked. Is it a problem with the post request or with the website API?
Tried posting what was mentioned previously, json-file is expected
A:
Are you sure this endpoint does not require authorization?
I think it needs authorization; you can use this code:
import requests
url = "your URL"
payload={}
files=[
('upload_file',('20220212235319_1509.jpg',open('/20220212235319_1509.jpg','rb'),'image/jpeg'))
]
headers = {
'Accept-Language': 'en-US',
'Authorization': 'Bearer yourToken'
}
response = requests.request("POST", url, headers=headers, data=payload, files=files)
print(response.text)
or basic auth;
import requests
url = "https://example.com"
payload={}
files=[
('file',('myfile.jpg',open('/path/to/myfile.jpg','rb'),'image/jpeg'))
]
response = requests.request("POST", url, auth=("my_username","my_password"), data=payload, files=files)
print(response.text)
for more details; Upload Image using POST form data in Python-requests
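One detail worth double-checking against the curl command in the question: there the file is posted under the form field image, while the attempt used od_content. A minimal sketch that mirrors the curl call (endpoint and field name taken from the question, no auth assumed) would be:
import requests

url = 'https://srv2.demo.neural-university.ru/emotion_recognition/'

# curl -F "[email protected]" sends a multipart field named "image",
# so use the same field name here.
with open('user_photo_510495289.jpg', 'rb') as f:
    files = {'image': ('user_photo_510495289.jpg', f, 'image/jpeg')}
    response = requests.post(url, files=files)

print(response.status_code, response.text)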
| How to post an image on a website with python requests | Trying to post an image from file to this website
[https://demo.neural-university.ru/emotion-recognition.html?ysclid=lauqlu1uq6710345308]
It asks to post an image like this:
curl -X POST -F "[email protected]" https://srv2.demo.neural-university.ru/emotion_recognition/
I tried it with the file "user_photo_510495289.jpg", using requests.post
import requests
url = 'https://srv2.demo.neural-university.ru/emotion_recognition/'
files = {'od_content': open('user_photo_510495289.jpg', 'rb')}
data = {'od_content': open('user_photo_510495289.jpg', 'rb').read()}
requests.post(url, data=data)
#requests.post(url, files=files)
Neither posting with "data" nor with "files" worked. Is it a problem with the post request or with the website API?
Tried posting what was mentioned previously, json-file is expected
| [
"Are you sure this endpoint is not needed for authorization?\nI think it needs authorization, you can use it this code;\nimport requests\n\nurl = \"your URL\"\n\npayload={}\nfiles=[\n ('upload_file',('20220212235319_1509.jpg',open('/20220212235319_1509.jpg','rb'),'image/jpeg'))\n]\nheaders = {\n 'Accept-Language': 'en-US',\n 'Authorization': 'Bearer yourToken'\n}\n\nresponse = requests.request(\"POST\", url, headers=headers, data=payload, files=files)\n\nprint(response.text)\n\nor basic auth;\nimport requests\n\nurl = \"https://example.com\"\npayload={}\nfiles=[\n ('file',('myfile.jpg',open('/path/to/myfile.jpg','rb'),'image/jpeg'))\n]\n\nresponse = requests.request(\"POST\", url, auth=(\"my_username\",\"my_password\"), data=payload, files=files)\nprint(response.text)\n\nfor more details; Upload Image using POST form data in Python-requests\n"
] | [
0
] | [] | [] | [
"api",
"python",
"python_requests"
] | stackoverflow_0074627342_api_python_python_requests.txt |
Q:
Group by for time series - where dates matter
I have the following information
project  stage        date
33       New          3-sep-2022
33       New          10-sep-2022
33       Preparation  11-sep-2022
33       Preparation  21-sep-2022
33       Preparation  23-sep-2022
33       New          24-sep-2022
33       New          28-sep-2022
I want to get the information of the beginning and end of each stage of the project, so I would like this as an output:
project  stage        begin_stage  end_stage
33       New          3-sep-2022   10-sep-2022
33       Preparation  11-sep-2022  23-sep-2022
33       New          24-sep-2022  28-sep-2022
It is important to note that dates matter, so I want to have New twice because the project went back to New on the 24th of September.
I tried the following:
min_df = df.groupby(['project', 'stage'], as_index=False)['date'].agg('min')
max_df = df.groupby(['project', 'stage'], as_index=False)['date'].agg('max')
df = pd.merge(min_df, max_df, how='left', on=['project', 'stage'])
df.rename(columns={'date_x': 'begin_state', 'date_y': 'end_state'}, inplace=True)
However, this is grouping New into only one group and is giving me begin date 3-sep and end date 28-sep, which is wrong. So the output that I am getting with my code is:
project  stage        begin_stage  end_stage
33       New          3-sep-2022   28-sep-2022
33       Preparation  11-sep-2022  23-sep-2022
A:
You can use:
df['date'] = pd.to_datetime(df['date'])
group = df['stage'].ne(df['stage'].shift()).cumsum()
out = (df
.groupby(['project', 'stage', group], sort=False)
.agg(**{'begin_stage': ('date', 'min'), 'end_stage': ('date', 'max')})
.droplevel(-1)
.apply(lambda s: s.dt.strftime('%-d-%b-%Y').str.lower())
.reset_index()
)
Output:
project stage begin_stage end_stage
0 33 New 3-sep-2022 10-sep-2022
1 33 Preparation 11-sep-2022 23-sep-2022
2 33 New 24-sep-2022 28-sep-2022
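The key step is the consecutive-run grouper: comparing each stage with the previous row and taking a cumulative sum assigns a distinct id to every uninterrupted run, so the two New periods land in different groups. A small illustration on the stage column alone:
import pandas as pd

stage = pd.Series(['New', 'New', 'Preparation', 'Preparation', 'Preparation', 'New', 'New'])

# True whenever the stage changes; cumsum then turns the change markers into run ids.
group = stage.ne(stage.shift()).cumsum()
print(group.tolist())
# [1, 1, 2, 2, 2, 3, 3]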
| Group by for time series - where dates matter | I have the following information
project  stage        date
33       New          3-sep-2022
33       New          10-sep-2022
33       Preparation  11-sep-2022
33       Preparation  21-sep-2022
33       Preparation  23-sep-2022
33       New          24-sep-2022
33       New          28-sep-2022
I want to get the information of the beginning and end of each stage of the project, so I would like this as an output:
project  stage        begin_stage  end_stage
33       New          3-sep-2022   10-sep-2022
33       Preparation  11-sep-2022  23-sep-2022
33       New          24-sep-2022  28-sep-2022
It is important to note that dates matter, so I want to have New twice because the project went back to New on the 24th of September.
I tried the following:
min_df = df.groupby(['project', 'stage'], as_index=False)['date'].agg('min')
max_df = df.groupby(['project', 'stage'], as_index=False)['date'].agg('max')
df = pd.merge(min_df, max_df, how='left', on=['project', 'stage'])
df.rename(columns={'date_x': 'begin_state', 'date_y': 'end_state'}, inplace=True)
However, this is grouping New into only one group and is giving me begin date 3-sep and end date 28-sep, which is wrong. So the output that I am getting with my code is:
project  stage        begin_stage  end_stage
33       New          3-sep-2022   28-sep-2022
33       Preparation  11-sep-2022  23-sep-2022
| [
"You can use:\ndf['date'] = pd.to_datetime(df['date'])\n\ngroup = df['stage'].ne(df['stage'].shift()).cumsum()\n\nout = (df\n .groupby(['project', 'stage', group], sort=False)\n .agg(**{'begin_stage': ('date', 'min'), 'end_stage': ('date', 'max')})\n .droplevel(-1)\n .apply(lambda s: s.dt.strftime('%-d-%b-%Y').str.lower())\n .reset_index()\n)\n\nOutput:\n project stage begin_stage end_stage\n0 33 New 3-sep-2022 10-sep-2022\n1 33 Preparation 11-sep-2022 23-sep-2022\n2 33 New 24-sep-2022 28-sep-2022\n\n"
] | [
0
] | [] | [] | [
"group_by",
"pandas",
"python",
"time_series"
] | stackoverflow_0074627878_group_by_pandas_python_time_series.txt |
Q:
tokenize sentence to remove stop words: stop words are not being removed
My code below should take a sentence from database, tokenize it by word and then remove stopwords accordingly. For some reason when I call the removestopwords function in my for loop it does not work. Any suggestions? When I call the removestopwords function with any inserted sentence it works just fine.
import nltk
import random
import csv
from nltk.corpus import stopwords
def tokenize(sentence):
""" This function does the task of converting a sentence into a set of words"""
t_words = sentence.split()
return(t_words)
def removestopwords(tokens):
"""This function removes the stop words from the tokens"""
stop_words = set(stopwords.words("english")) #get the stop words
filtered_tokens = list()
for words in tokens:
if words not in stop_words:
filtered_tokens.append(words.lower())
return filtered_tokens
SENTIMENT_CSV = r"C:\Users\axela\Documents\Decision Support Systems (tilburg)\finance_headlines.csv"
with open(SENTIMENT_CSV, 'rt', encoding = 'ISO-8859-1') as sobj:
sdata = csv.reader(sobj)
all_tokenwords = list()
tokenword_label = list()
for row in sdata:
tokens = tokenize(row[1])
filtered_tokens = removestopwords(tokens)
all_tokenwords.extend(filtered_tokens)
tokenword_label.append([filtered_tokens,row[0]])
print(all_tokenwords)
I assume something is wrong within my for loop but I cannot figure it out. Thanks.
A:
I received an error when trying to run
from nltk.corpus import stopwords
The documentation from NLTK says to use the following for stop words
>>> import nltk
>>> nltk.download()
nltk.org/data.html
Once I used nltk.download() for the stopwords I slightly modified your function to incorporate list comprehension.
def removestopwords(tokens):
"""This function removes the stop words from the tokens"""
stop_words = set(stopwords.words("english")) #get the stop words
return [words.lower() for words in tokens.split(' ') if words not in stop_words]
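One more thing that can make it look as if nothing is being removed: both the original function and the version above test membership before lowercasing, while the NLTK stop-word list is all lowercase, so capitalized words such as "The" slip through. A small adjustment that keeps the original token-list signature:
from nltk.corpus import stopwords

def removestopwords(tokens):
    """Remove stop words, comparing in lowercase so 'The' is filtered like 'the'."""
    stop_words = set(stopwords.words("english"))
    return [word.lower() for word in tokens if word.lower() not in stop_words]

Punctuation attached by split() (for example "profits,") will still not match a stop word and may need separate handling.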
| tokenize sentence to remove stop words: stop words are not being removed | My code below should take a sentence from database, tokenize it by word and then remove stopwords accordingly. For some reason when I call the removestopwords function in my for loop it does not work. Any suggestions? When I call the removestopwords function with any inserted sentence it works just fine.
import nltk
import random
import csv
from nltk.corpus import stopwords
def tokenize(sentence):
""" This function does the task of converting a sentence into a set of words"""
t_words = sentence.split()
return(t_words)
def removestopwords(tokens):
"""This function removes the stop words from the tokens"""
stop_words = set(stopwords.words("english")) #get the stop words
filtered_tokens = list()
for words in tokens:
if words not in stop_words:
filtered_tokens.append(words.lower())
return filtered_tokens
SENTIMENT_CSV = r"C:\Users\axela\Documents\Decision Support Systems (tilburg)\finance_headlines.csv"
with open(SENTIMENT_CSV, 'rt', encoding = 'ISO-8859-1') as sobj:
sdata = csv.reader(sobj)
all_tokenwords = list()
tokenword_label = list()
for row in sdata:
tokens = tokenize(row[1])
filtered_tokens = removestopwords(tokens)
all_tokenwords.extend(filtered_tokens)
tokenword_label.append([filtered_tokens,row[0]])
print(all_tokenwords)
I assume something is wrong within my for loop but I cannot figure it out. Thanks.
| [
"I received an error when trying to run\nfrom nltk.corpus import stopwords\nThe documentation from NLTK says to use the following for stop words\n>>> import nltk\n>>> nltk.download()\n\nnltk.org/data.html\nOnce I used nltk.download() for the stopwords I slightly modified your function to incorporate list comprehension.\ndef removestopwords(tokens):\n \"\"\"This function removes the stop words from the tokens\"\"\"\n stop_words = set(stopwords.words(\"english\")) #get the stop words\n return [words.lower() for words in tokens.split(' ') if words not in stop_words]\n\n"
] | [
0
] | [] | [] | [
"for_loop",
"python",
"stop_words"
] | stackoverflow_0074627790_for_loop_python_stop_words.txt |
Q:
Why are attributes defined outside __init__ in popular packages like SQLAlchemy or Pydantic?
I'm modifying an app, trying to use Pydantic for my application models and SQLAlchemy for my database models.
I have existing classes, where I defined attributes inside the __init__ method as I was taught to do:
class Measure:
def __init__(
self,
t_received: int,
mac_address: str,
data: pd.DataFrame,
battery_V: float = 0
):
self.t_received = t_received
self.mac_address = mac_address
self.data = data
self.battery_V = battery_V
In both Pydantic and SQLAlchemy, following the docs, I have to define those attributes outside the __init__ method, for example in Pydantic:
import pydantic
class Measure(pydantic.BaseModel):
t_received: int
mac_address: str
data: pd.DataFrame
battery_V: float
Why is it the case? Isn't this bad practice? Is there any impact on other methods (classmethods, staticmethods, properties ...) of that class?
Note that this is also very unhandy because when I instantiate an object of that class, I don't get suggestions on what parameters are expected by the constructor!
A:
Defining attributes of a class in the class namespace directly is totally acceptable and is not special per se for the packages you mentioned. Since the class namespace is (among other things) essentially a blueprint for instances of that class, defining attributes there can actually be useful, when you want to e.g. provide all public attributes with type annotations in a single place in a consistent manner.
Consider also that a public attribute does not necessarily need to be reflected by a parameter in the constructor of the class. For example, this is entirely reasonable:
class Foo:
a: list[int]
b: str
def __init__(self, b: str) -> None:
self.a = []
self.b = b
In other words, just because something is a public attribute, that does not mean it should have to be provided by the user upon initialization. To say nothing of protected/private attributes.
What is special about Pydantic (to take your example), is that the metaclass of BaseModel as well as the class itself does a whole lot of magic with the attributes defined in the class namespace. Pydantic refers to a model's typical attributes as "fields" and one bit of magic allows special checks to be done during initialization based on those fields you defined in the class namespace. For example, the constructor must receive keyword arguments that correspond to the non-optional fields you defined.
from pydantic import BaseModel
class MyModel(BaseModel):
field_a: str
field_b: int = 1
obj = MyModel(
field_a="spam", # required
field_b=2, # optional
field_c=3.14, # unexpected/ignored
)
If I were to omit field_a during construction of a MyModel instance, an error would be raised. Likewise, if I had tried to pass field_b="eggs", an error would be raised.
So the fact that you don't write your own __init__ method is a feature Pydantic provides you. You only define the fields and an appropriate constructor is "magically" there for you already.
As for the drawback you mentioned, where you don't get any auto-suggestions, that is true by default for all IDEs. Static type checkers cannot understand that dynamic constructor and simply infer what arguments are expected. Currently this is solved via extensions, such as the mypy plugin and the PyCharm plugin. Maybe soon the @dataclass_transform decorator from PEP 681
will standardize this for similar packages and thus improve support by static type checkers.
It is also worth noting that even the standard library's dataclasses only work via special extensions in type checkers.
To your other question, there is obviously some impact on methods of such classes (by design), though the specifics are not always obvious. You should of course not simply write your own __init__ method without being careful to call the superclass' __init__ properly inside it. Also, @property-setters currently don't work as you would expect (though it is debatable if it even makes sense to use properties on Pydantic models).
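If you do need extra initialization logic, a custom __init__ can still delegate to the generated one; a minimal sketch for Pydantic v1 (the post-init step here is only illustrative):
from pydantic import BaseModel

class MyModel(BaseModel):
    field_a: str
    field_b: int = 1

    def __init__(self, **data) -> None:
        super().__init__(**data)   # let Pydantic run its usual validation
        # any additional post-init logic goes here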
To wrap up, this approach is not only not bad practice, it is a great idea to reduce boilerplate code and it is extremely common these days, as evidenced by the fact that hugely popular and established packages (like the aforementioned Pydantic, as well as e.g. SQLAlchemy, Django and others) use this pattern to a certain extent.
A:
Pydantic has its own (rewriting) magic, but SQLalchemy is a bit easier to explain.
A SA model looks like this :
>>> from sqlalchemy import Column, Integer, String
>>> class User(Base):
...
... id = Column(Integer, primary_key=True)
... name = Column(String)
Column, Integer and String are descriptors. A descriptor is a class that overrides the get and set methods. In practice, this means the class can control how data is accessed and stored.
For example this assignment would now use the __set__ method from Column:
class User(Base):
id = Column(Integer, primary_key=True)
name = Column(String)
user = User()
user.name = 'John'
This is the same as user.name.__set__('John') , however, because of the MRO, it finds a set method in Column, so uses that instead. In a simplified version the Column looks something like this:
class Column:
def __init__(self, field=""):
self.field= field
def __get__(self, obj, type):
return obj.__dict__.get(self.field)
def __set__(self, obj, val):
        if validate_field(val):
obj.__dict__[self.field] = val
else:
print('not a valid value')
(This is similar to using @property. A Descriptor is a re-usable @property)
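To make the simplified descriptor runnable end to end, here is a self-contained variant; validate_field is not defined above, so a stand-in check is used:
class Column:
    def __init__(self, field=""):
        self.field = field

    def __get__(self, obj, objtype=None):
        return obj.__dict__.get(self.field)

    def __set__(self, obj, val):
        if isinstance(val, str):              # stand-in for validate_field(val)
            obj.__dict__[self.field] = val
        else:
            print('not a valid value')


class User:
    name = Column('name')


user = User()
user.name = 'John'
print(user.name)    # John
user.name = 42      # prints: not a valid value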
| Why are attributes defined outside __init__ in popular packages like SQLAlchemy or Pydantic? | I'm modifying an app, trying to use Pydantic for my application models and SQLAlchemy for my database models.
I have existing classes, where I defined attributes inside the __init__ method as I was taught to do:
class Measure:
def __init__(
self,
t_received: int,
mac_address: str,
data: pd.DataFrame,
battery_V: float = 0
):
self.t_received = t_received
self.mac_address = mac_address
self.data = data
self.battery_V = battery_V
In both Pydantic and SQLAlchemy, following the docs, I have to define those attributes outside the __init__ method, for example in Pydantic:
import pydantic
class Measure(pydantic.BaseModel):
t_received: int
mac_address: str
data: pd.DataFrame
battery_V: float
Why is it the case? Isn't this bad practice? Is there any impact on other methods (classmethods, staticmethods, properties ...) of that class?
Note that this is also very unhandy because when I instantiate an object of that class, I don't get suggestions on what parameters are expected by the constructor!
| [
"Defining attributes of a class in the class namespace directly is totally acceptable and is not special per se for the packages you mentioned. Since the class namespace is (among other things) essentially a blueprint for instances of that class, defining attributes there can actually be useful, when you want to e.g. provide all public attributes with type annotations in a single place in a consistent manner.\nConsider also that a public attribute does not necessarily need to be reflected by a parameter in the constructor of the class. For example, this is entirely reasonable:\nclass Foo:\n a: list[int]\n b: str\n\n def __init__(self, b: str) -> None:\n self.a = []\n self.b = b\n\nIn other words, just because something is a public attribute, that does not mean it should have to be provided by the user upon initialization. To say nothing of protected/private attributes.\nWhat is special about Pydantic (to take your example), is that the metaclass of BaseModel as well as the class itself does a whole lot of magic with the attributes defined in the class namespace. Pydantic refers to a model's typical attributes as \"fields\" and one bit of magic allows special checks to be done during initialization based on those fields you defined in the class namespace. For example, the constructor must receive keyword arguments that correspond to the non-optional fields you defined.\nfrom pydantic import BaseModel\n\n\nclass MyModel(BaseModel):\n field_a: str\n field_b: int = 1\n\n\nobj = MyModel(\n field_a=\"spam\", # required\n field_b=2, # optional\n field_c=3.14, # unexpected/ignored\n)\n\nIf I were to omit field_a during construction of a MyModel instance, an error would be raised. Likewise, if I had tried to pass field_b=\"eggs\", an error would be raised.\nSo the fact that you don't write your own __init__ method is a feature Pydantic provides you. You only define the fields and an appropriate constructor is \"magically\" there for you already.\nAs for the drawback you mentioned, where you don't get any auto-suggestions, that is true by default for all IDEs. Static type checkers cannot understand that dynamic constructor and simply infer what arguments are expected. Currently this is solved via extensions, such as the mypy plugin and the PyCharm plugin. Maybe soon the @dataclass_transform decorator from PEP 681\nwill standardize this for similar packages and thus improve support by static type checkers.\nIt is also worth noting that even the standard library's dataclasses only work via special extensions in type checkers.\nTo your other question, there is obviously some impact on methods of such classes (by design), though the specifics are not always obvious. You should of course not simply write your own __init__ method without being careful to call the superclass' __init__ properly inside it. Also, @property-setters currently don't work as you would expect it (though it is debatable if it even makes sense to use properties on Pydantic models).\nTo wrap up, this approach is not only not bad practice, it is a great idea to reduce boilerplate code and it is extremely common these days, as evidenced by the fact that hugely popular and established packages (like the aforementioned Pydantic, as well as e.g. SQLAlchemy, Django and others) use this pattern to a certain extent.\n",
"Pydantic has its own (rewriting) magic, but SQLalchemy is a bit easier to explain.\nA SA model looks like this :\n>>> from sqlalchemy import Column, Integer, String\n>>> class User(Base):\n...\n... id = Column(Integer, primary_key=True)\n... name = Column(String)\n\nColumn, Integer and String are descriptors. A descriptor is a class that overrides the get and set methods. In practice, this means the class can control how data is accessed and stored.\nFor example this assignment would now use the __set__ method from Column:\nclass User(Base):\n id = Column(Integer, primary_key=True)\n name = Column(String)\n\nuser = User()\nuser.name = 'John' \n\nThis is the same as user.name.__set__('John') , however, because of the MRO, it finds a set method in Column, so uses that instead. In a simplified version the Column looks something like this:\nclass Column:\n def __init__(self, field=\"\"):\n self.field= field\n def __get__(self, obj, type):\n return obj.__dict__.get(self.field)\n def __set__(self, obj, val):\n if validate_field(val)\n obj.__dict__[self.field] = val\n else:\n print('not a valid value')\n\n(This is similar to using @property. A Descriptor is a re-usable @property)\n"
] | [
1,
0
] | [] | [] | [
"attributes",
"class",
"pydantic",
"python",
"sqlalchemy"
] | stackoverflow_0074612809_attributes_class_pydantic_python_sqlalchemy.txt |
Q:
Django - write Python code in an elegant way
I have a situation as shown below:
in models.py:
class singer(models.Model):
name = models.CharField()
nickName = models.CharField()
numSongs= models.IntegerField()
class writer(models.Model):
name = models.CharField()
numBooks = models.IntegerField()
class weeklyTimeSinger(models.Model):
singerID = models.ForeignKey(singer, on_delete = models.CASCADE, related_name = 'hook1')
dayWeek = models.IntegerField()
startHour = models.TimeField()
stopHour = models.TimeField()
class weeklyTimeWriter(models.Model):
writerID = models.ForeignKey(writer, on_delete = models.CASCADE, related_name = 'hook2')
dayWeek = models.IntegerField()
startHour = models.TimeField()
stopHour = models.TimeField()
in view.py:
class Filters(APIView):
def queryFilter(self, querySet, request, singerOtWriter):
param1 = int(request.GET.get('param1', 0))
param2 = int(request.GET.get('param2', 0))
if singerOtWriter == "singer":
querySet = querySet.filter(weeklyTimeSinger_dayWeek=param1)
querySet = querySet.filter(weeklyTimeSinger_startHour__lt=param2)
querySet = querySet.update(.....
....a lot of operation on querySet
        elif singerOtWriter == "writer":
querySet = querySet.filter(weeklyTimeWriter_dayWeek=param1)
querySet = querySet.filter(weeklyTimeWriter_startHour__lt=param2)
querySet = querySet.update(.....
....a lot of operation on querySet, the same of the case singer
return querySet
class Artist(Filters):
def get(self, request):
querySetSinger = singer.objects.all().annotate(numWorks= F('numSongs'))
querySetSinger = self.queryFilter(querySetSinger , request, "singer")
querySetSinger = querySetSinger.values('name', 'numWorks')
querySetWriter = writer.objects.all().annotate(numWorks= F('numBooks'))
querySetWriter = self.queryFilter(querySetWriter , request, "writer")
querySetWriter = querySetWriter.values('name', 'numWorks')
values = querySetSinger.union(querySetWriter)
serialized = ArtistSerializers(values, many = True)
return Response(serialized.data)
In the queryFilter function I have 2 different flows depending on the singerOtWriter parameter. The 2 flows are long and identical except for the "weeklyTimeWriter" or "weeklyTimeSinger" table names, and I don't want to repeat those lines of code because it looks dirty. I haven't reported all the lines of code but there are many.
Is there a better and more elegant way to rewrite this code and to generalize those operations?
Thanks for all.
A:
Keep the same table for singer/writer and use a type field. You can also filter easily.
models.py would look like:
artistTypes = (
('Singer', 'Singer'),
('Writer', 'Writer'),
)
class artistName(models.Model):
name = models.CharField()
nickName = models.CharField()
artistType = models.CharField(max_length=10, choices=artistTypes)
numReleases = models.IntegerField()
class timeSpent(models.Model):
artistID = models.ForeignKey(artistName, on_delete = models.CASCADE)
dayWeek = models.IntegerField()
startHour = models.TimeField()
stopHour = models.TimeField()
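With a single artist table, the duplicated branches in queryFilter collapse into one path. A minimal sketch (assuming Django's default reverse name timespent for the foreign key; adjust it if you set a related_name):
def queryFilter(self, querySet, request):
    param1 = int(request.GET.get('param1', 0))
    param2 = int(request.GET.get('param2', 0))
    querySet = querySet.filter(timespent__dayWeek=param1,
                               timespent__startHour__lt=param2)
    # ...the rest of the shared operations on querySet
    return querySet

In the view, singers and writers then become plain filters on the same queryset, e.g. artistName.objects.filter(artistType='Singer') and artistName.objects.filter(artistType='Writer').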
| Django - write Python code in an elegant way | I have a situation as shown below:
in models.py:
class singer(models.Model):
name = models.CharField()
nickName = models.CharField()
numSongs= models.IntegerField()
class writer(models.Model):
name = models.CharField()
numBooks = models.IntegerField()
class weeklyTimeSinger(models.Model):
singerID = models.ForeignKey(singer, on_delete = models.CASCADE, related_name = 'hook1')
dayWeek = models.IntegerField()
startHour = models.TimeField()
stopHour = models.TimeField()
class weeklyTimeWriter(models.Model):
writerID = models.ForeignKey(writer, on_delete = models.CASCADE, related_name = 'hook2')
dayWeek = models.IntegerField()
startHour = models.TimeField()
stopHour = models.TimeField()
in view.py:
class Filters(APIView):
def queryFilter(self, querySet, request, singerOtWriter):
param1 = int(request.GET.get('param1', 0))
param2 = int(request.GET.get('param2', 0))
if singerOtWriter == "singer":
querySet = querySet.filter(weeklyTimeSinger_dayWeek=param1)
querySet = querySet.filter(weeklyTimeSinger_startHour__lt=param2)
querySet = querySet.update(.....
....a lot of operation on querySet
        elif singerOtWriter == "writer":
querySet = querySet.filter(weeklyTimeWriter_dayWeek=param1)
querySet = querySet.filter(weeklyTimeWriter_startHour__lt=param2)
querySet = querySet.update(.....
....a lot of operation on querySet, the same of the case singer
return querySet
class Artist(Filters):
def get(self, request):
querySetSinger = singer.objects.all().annotate(numWorks= F('numSongs'))
querySetSinger = self.queryFilter(querySetSinger , request, "singer")
querySetSinger = querySetSinger.values('name', 'numWorks')
querySetWriter = writer.objects.all().annotate(numWorks= F('numBooks'))
querySetWriter = self.queryFilter(querySetWriter , request, "writer")
querySetWriter = querySetWriter.values('name', 'numWorks')
values = querySetSinger.union(querySetWriter)
serialized = ArtistSerializers(values, many = True)
return Response(serialized.data)
In the queryFilter function I have 2 different flows depending on the singerOtWriter parameter. The 2 flows are long and identical except for the "weeklyTimeWriter" or "weeklyTimeSinger" table names, and I don't want to repeat those lines of code because it looks dirty. I haven't reported all the lines of code but there are many.
Is there a better and more elegant way to rewrite this code and to generalize those operations?
Thanks for all.
| [
"keep same table for singer/writer ans use type field. You can also filter easily.\nmodels.py be like:-\nartistTypes = (\n ('Singer', 'Singer'),\n ('Writer', 'Writer'),\n )\n\nclass artistName(models.Model):\n name = models.CharField()\n nickName = models.CharField()\n artistType = models.CharField(max_length=10, choices=artistTypes)\n numReleases = models.IntegerField()\n\nclass timeSpent(models.Model):\n artistID = models.ForeignKey(artistName, on_delete = models.CASCADE)\n dayWeek = models.IntegerField() \n startHour = models.TimeField()\n stopHour = models.TimeField()\n\n"
] | [
0
] | [] | [] | [
"django",
"django_models",
"django_queryset",
"django_rest_framework",
"python"
] | stackoverflow_0074627398_django_django_models_django_queryset_django_rest_framework_python.txt |
Q:
How to pass dbfs path of local dependency wheel file in install_requires field of setup.py
I am trying to install custom wheel file which requires another wheel file to install from databricks dbfs path. How to provide dbfs path in setup.py install_requires section .
Note: I am aware of passing local path but not dbfs path. Can someone help?
I tried to provide path using dbfs:// but it did not work.
A:
You can directly install or upload the wheel file as shown in the image below.
Go to cluster -> install library
For more information, refer to this link on installing wheel files.
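If the dependency really has to be resolved from DBFS at install time rather than through the cluster UI, one option to try is the /dbfs FUSE mount path with a PEP 508 direct reference, since pip does not understand dbfs:/ URIs but the same file is visible locally on the cluster under /dbfs. A sketch (package names, path, and version are placeholders; adjust them to your wheels):
# setup.py (sketch)
from setuptools import setup, find_packages

setup(
    name="my_custom_package",            # placeholder name
    version="0.1.0",
    packages=find_packages(),
    install_requires=[
        # direct reference to the dependency wheel via the DBFS FUSE mount
        "dependency_pkg @ file:///dbfs/FileStore/wheels/dependency_pkg-1.0-py3-none-any.whl",
    ],
)

Alternatively, installing the dependency wheel first (for example with %pip install /dbfs/FileStore/wheels/dependency_pkg-1.0-py3-none-any.whl in a notebook, or as a cluster library) and keeping only the package name in install_requires avoids hard-coding the path in setup.py.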
| How to pass dbfs path of local dependency wheel file in install_requires field of setup.py | I am trying to install a custom wheel file which requires another wheel file to be installed from a Databricks DBFS path. How do I provide the DBFS path in the setup.py install_requires section?
Note: I am aware of passing a local path, but not a DBFS path. Can someone help?
I tried to provide the path using dbfs:// but it did not work.
| [
"You can directly install or upload wheel file as shown in the below image.\nGo to cluster -> install library\n\nFor more information refer this link to installing wheel file.\n"
] | [
0
] | [] | [] | [
"azure_databricks",
"bigdata",
"databricks",
"python",
"python_wheel"
] | stackoverflow_0074624872_azure_databricks_bigdata_databricks_python_python_wheel.txt |
Q:
How to use AutoML Library with IPU/TPU?
I want to use the AutoML library Autogluon with a Paperspace IPU / Kaggle TPU instance for specification reasons (big RAM, big storage space, and fast training time). For IPU, when I try to fit the Autogluon predictor class, the library only recognizes the available IPU but does not use it. How do I make Autogluon use the IPU? For TPU, I have not tried it yet because I could not import the Autogluon library at all. Lastly, for GPU: from what I have tried, Autogluon can use the available GPU, but I don't want to use it for performance reasons.
Predictor fit output with IPU instance example:
predictor.fit(
train_data=train_data,
hyperparameters={
'model.hf_text.checkpoint_name': 'xlm-roberta-base'
}
)
Output:
Global seed set to 123
/usr/local/lib/python3.8/dist-packages/autogluon/multimodal/utils/environment.py:96: UserWarning: Only CPU is detected in the instance. This may result in slow speed for MultiModalPredictor. Consider using an instance with GPU support.
warnings.warn(
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: True, using: 0 IPUs
HPU available: False, using: 0 HPUs
/usr/local/lib/python3.8/dist-packages/pytorch_lightning/trainer/trainer.py:1777: UserWarning: IPU available but not used. Set `accelerator` and `devices` using `Trainer(accelerator='ipu', devices=4)`.
rank_zero_warn(
| Name | Type | Params
-------------------------------------------------------------------
0 | model | HFAutoModelForTextPrediction | 278 M
1 | validation_metric | Accuracy | 0
2 | loss_func | CrossEntropyLoss | 0
-------------------------------------------------------------------
278 M Trainable params
0 Non-trainable params
278 M Total params
1,112.190 Total estimated model params size (MB)
Importing Autogluon library with TPU instance:
import os
import numpy as np
import warnings
import pandas as pd
from IPython.display import display, Image
import json
# Auto Exploratory Data Analysis
from pandas_profiling import ProfileReport
# AutoML
from autogluon.core.utils.loaders import load_zip
from autogluon.multimodal import MultiModalPredictor
from autogluon.multimodal.data.infer_types import infer_column_types
from autogluon.tabular import TabularPredictor
from autogluon.features.generators import AutoMLPipelineFeatureGenerator
from autogluon.tabular import FeatureMetadata
pd.set_option('display.max_columns', None)
np.random.seed(123)
Output:
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /opt/conda/lib/python3.7/site-packages/transformers/utils/import_utils.py:1063 in _get_module │
│ │
│ 1060 │ │
│ 1061 │ def _get_module(self, module_name: str): │
│ 1062 │ │ try: │
│ ❱ 1063 │ │ │ return importlib.import_module("." + module_name, self.__name__) │
│ 1064 │ │ except Exception as e: │
│ 1065 │ │ │ raise RuntimeError( │
│ 1066 │ │ │ │ f"Failed to import {self.__name__}.{module_name} because of the followin │
│ │
│ /opt/conda/lib/python3.7/importlib/__init__.py:127 in import_module │
│ │
│ 124 │ │ │ if character != '.': │
│ 125 │ │ │ │ break │
│ 126 │ │ │ level += 1 │
│ ❱ 127 │ return _bootstrap._gcd_import(name[level:], package, level) │
│ 128 │
│ 129 │
│ 130 _RELOADING = {} │
│ <frozen importlib._bootstrap>:1006 in _gcd_import │
│ <frozen importlib._bootstrap>:983 in _find_and_load │
│ <frozen importlib._bootstrap>:967 in _find_and_load_unlocked │
│ <frozen importlib._bootstrap>:677 in _load_unlocked │
│ <frozen importlib._bootstrap_external>:728 in exec_module │
│ <frozen importlib._bootstrap>:219 in _call_with_frames_removed │
│ │
│ /opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_utils.py:39 in <module> │
│ │
│ 36 from tensorflow.python.keras.saving import hdf5_format │
│ 37 │
│ 38 from huggingface_hub import Repository, list_repo_files │
│ ❱ 39 from keras.saving.hdf5_format import save_attributes_to_hdf5_group │
│ 40 from transformers.utils.hub import convert_file_size_to_int, get_checkpoint_shard_files │
│ 41 │
│ 42 from . import DataCollatorWithPadding, DefaultDataCollator │
│ │
│ /opt/conda/lib/python3.7/site-packages/keras/__init__.py:21 in <module> │
│ │
│ 18 [keras.io](https://keras.io). │
│ 19 """ │
│ 20 from keras import distribute │
│ ❱ 21 from keras import models │
│ 22 from keras.engine.input_layer import Input │
│ 23 from keras.engine.sequential import Sequential │
│ 24 from keras.engine.training import Model │
│ │
│ /opt/conda/lib/python3.7/site-packages/keras/models/__init__.py:18 in <module> │
│ │
│ 15 """Keras models API.""" │
│ 16 │
│ 17 │
│ ❱ 18 from keras.engine.functional import Functional │
│ 19 from keras.engine.sequential import Sequential │
│ 20 from keras.engine.training import Model │
│ 21 │
│ │
│ /opt/conda/lib/python3.7/site-packages/keras/engine/functional.py:26 in <module> │
│ │
│ 23 │
│ 24 import tensorflow.compat.v2 as tf │
│ 25 │
│ ❱ 26 from keras import backend │
│ 27 from keras.dtensor import layout_map as layout_map_lib │
│ 28 from keras.engine import base_layer │
│ 29 from keras.engine import base_layer_utils │
│ │
│ /opt/conda/lib/python3.7/site-packages/keras/backend.py:32 in <module> │
│ │
│ 29 import numpy as np │
│ 30 import tensorflow.compat.v2 as tf │
│ 31 │
│ ❱ 32 from keras import backend_config │
│ 33 from keras.distribute import distribute_coordinator_utils as dc │
│ 34 from keras.engine import keras_tensor │
│ 35 from keras.utils import control_flow_util │
│ │
│ /opt/conda/lib/python3.7/site-packages/keras/backend_config.py:33 in <module> │
│ │
│ 30 │
│ 31 │
│ 32 @keras_export("keras.backend.epsilon") │
│ ❱ 33 @tf.__internal__.dispatch.add_dispatch_support │
│ 34 def epsilon(): │
│ 35 │ """Returns the value of the fuzz factor used in numeric expressions. │
│ 36 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
AttributeError: module 'tensorflow.compat.v2.__internal__' has no attribute 'dispatch'
The above exception was the direct cause of the following exception:
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /tmp/ipykernel_248/1088848273.py:13 in <module> │
│ │
│ [Errno 2] No such file or directory: '/tmp/ipykernel_248/1088848273.py' │
│ │
│ /opt/conda/lib/python3.7/site-packages/autogluon/multimodal/__init__.py:6 in <module> │
│ │
│ 3 except ImportError: │
│ 4 │ pass │
│ 5 │
│ ❱ 6 from . import constants, data, models, optimization, predictor, utils │
│ 7 from .predictor import AutoMMPredictor, MultiModalPredictor │
│ 8 from .utils import download │
│ 9 │
│ │
│ /opt/conda/lib/python3.7/site-packages/autogluon/multimodal/optimization/__init__.py:1 in │
│ <module> │
│ │
│ ❱ 1 from . import lit_module, utils │
│ 2 │
│ │
│ /opt/conda/lib/python3.7/site-packages/autogluon/multimodal/optimization/lit_module.py:14 in │
│ <module> │
│ │
│ 11 │
│ 12 from ..constants import AUTOMM, LM_TARGET, LOGITS, T_FEW, TEMPLATE_LOGITS, WEIGHT │
│ 13 from ..data.mixup import MixupModule, multimodel_mixup │
│ ❱ 14 from .utils import apply_layerwise_lr_decay, apply_single_lr, apply_two_stages_lr, get_l │
│ 15 │
│ 16 logger = logging.getLogger(AUTOMM) │
│ 17 │
│ │
│ /opt/conda/lib/python3.7/site-packages/autogluon/multimodal/optimization/utils.py:57 in <module> │
│ │
│ 54 │ ROOT_MEAN_SQUARED_ERROR, │
│ 55 │ SPEARMANR, │
│ 56 ) │
│ ❱ 57 from ..utils import MeanAveragePrecision │
│ 58 from .losses import MultiNegativesSoftmaxLoss, SoftTargetCrossEntropy │
│ 59 from .lr_scheduler import ( │
│ 60 │ get_cosine_schedule_with_warmup, │
│ │
│ /opt/conda/lib/python3.7/site-packages/autogluon/multimodal/utils/__init__.py:39 in <module> │
│ │
│ 36 from .log import LogFilter, apply_log_filter, make_exp_dir │
│ 37 from .map import MeanAveragePrecision │
│ 38 from .matcher import compute_semantic_similarity, convert_data_for_ranking, create_siame │
│ ❱ 39 from .metric import compute_ranking_score, compute_score, get_minmax_mode, infer_metrics │
│ 40 from .misc import logits_to_prob, shopee_dataset, tensor_to_ndarray │
│ 41 from .mmcv import CollateMMCV, send_datacontainers_to_device, unpack_datacontainers │
│ 42 from .model import create_fusion_model, create_model, list_timm_models, modify_duplicate │
│ │
│ /opt/conda/lib/python3.7/site-packages/autogluon/multimodal/utils/metric.py:7 in <module> │
│ │
│ 4 import warnings │
│ 5 from typing import Dict, List, Optional, Tuple, Union │
│ 6 │
│ ❱ 7 import evaluate │
│ 8 import numpy as np │
│ 9 from sklearn.metrics import f1_score │
│ 10 │
│ │
│ /opt/conda/lib/python3.7/site-packages/evaluate/__init__.py:29 in <module> │
│ │
│ 26 │
│ 27 del version │
│ 28 │
│ ❱ 29 from .evaluator import ( │
│ 30 │ Evaluator, │
│ 31 │ ImageClassificationEvaluator, │
│ 32 │ QuestionAnsweringEvaluator, │
│ │
│ /opt/conda/lib/python3.7/site-packages/evaluate/evaluator/__init__.py:29 in <module> │
│ │
│ 26 │
│ 27 from .base import Evaluator │
│ 28 from .image_classification import ImageClassificationEvaluator │
│ ❱ 29 from .question_answering import QuestionAnsweringEvaluator │
│ 30 from .text_classification import TextClassificationEvaluator │
│ 31 from .token_classification import TokenClassificationEvaluator │
│ 32 │
│ │
│ /opt/conda/lib/python3.7/site-packages/evaluate/evaluator/question_answering.py:22 in <module> │
│ │
│ 19 │
│ 20 │
│ 21 try: │
│ ❱ 22 │ from transformers import Pipeline, PreTrainedModel, PreTrainedTokenizer, TFPreTraine │
│ 23 │ │
│ 24 │ TRANSFORMERS_AVAILABLE = True │
│ 25 except ImportError: │
│ <frozen importlib._bootstrap>:1032 in _handle_fromlist │
│ │
│ /opt/conda/lib/python3.7/site-packages/transformers/utils/import_utils.py:1053 in __getattr__ │
│ │
│ 1050 │ │ if name in self._modules: │
│ 1051 │ │ │ value = self._get_module(name) │
│ 1052 │ │ elif name in self._class_to_module.keys(): │
│ ❱ 1053 │ │ │ module = self._get_module(self._class_to_module[name]) │
│ 1054 │ │ │ value = getattr(module, name) │
│ 1055 │ │ else: │
│ 1056 │ │ │ raise AttributeError(f"module {self.__name__} has no attribute {name}") │
│ │
│ /opt/conda/lib/python3.7/site-packages/transformers/utils/import_utils.py:1068 in _get_module │
│ │
│ 1065 │ │ │ raise RuntimeError( │
│ 1066 │ │ │ │ f"Failed to import {self.__name__}.{module_name} because of the followin │
│ 1067 │ │ │ │ f" traceback):\n{e}" │
│ ❱ 1068 │ │ │ ) from e │
│ 1069 │ │
│ 1070 │ def __reduce__(self): │
│ 1071 │ │ return (self.__class__, (self._name, self.__file__, self._import_structure)) │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: Failed to import transformers.modeling_tf_utils because of the following error (look up to see its
traceback):
module 'tensorflow.compat.v2.__internal__' has no attribute 'dispatch'
A:
As far as I can tell, the Autogluon library does not currently support using IPUs. The Poplar SDK supports PyTorch and PyTorch Lightning, which Autogluon is based on, so the library could in principle be supported. I'd be really interested to hear more about what you want to use Autogluon for!
In the meantime, there are many IPU resources available, including:
Public examples, including a wide range of applications written for the IPU: https://github.com/graphcore/examples
Tutorials: https://github.com/graphcore/tutorials
The Graphcore Optimum library, which has optimised implementations of HuggingFace transformer models: https://github.com/huggingface/optimum-graphcore
The model garden, with links to the public examples and Paperspace notebooks: https://www.graphcore.ai/resources/model-garden
I'm sorry that what you want isn't supported, but I hope that the above is at least helpful.
For the TPU error, it looks like the environment in your instance comes with TensorFlow pre-installed, but it doesn't look like Autogluon uses TensorFlow. You might need to look into how to use PyTorch on TPUs with PyTorch/XLA: https://github.com/pytorch/xla
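For reference, the warning in the log above already hints at what a framework has to do to pick the IPU up; outside AutoGluon this is a plain PyTorch Lightning setting (a sketch only; MultiModalPredictor does not currently expose these arguments):
import pytorch_lightning as pl

# Mirrors the hint in the warning: Trainer(accelerator='ipu', devices=4)
trainer = pl.Trainer(accelerator="ipu", devices=4)

# For TPUs with plain PyTorch, torch_xla provides the device handle, e.g.:
# import torch_xla.core.xla_model as xm
# device = xm.xla_device()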
With thanks,
Callum
| How to use AutoML Library with IPU/TPU? | I want to use the AutoML library Autogluon with a Paperspace IPU / Kaggle TPU instance for specification reasons (big RAM, big storage space, and fast training time). For IPU, when I try to fit the Autogluon predictor class, the library only recognizes the available IPU but does not use it. How do I make Autogluon use the IPU? For TPU, I have not tried it yet because I could not import the Autogluon library at all. Lastly, for GPU: from what I have tried, Autogluon can use the available GPU, but I don't want to use it for performance reasons.
Predictor fit output with IPU instance example:
predictor.fit(
train_data=train_data,
hyperparameters={
'model.hf_text.checkpoint_name': 'xlm-roberta-base'
}
)
Output:
Global seed set to 123
/usr/local/lib/python3.8/dist-packages/autogluon/multimodal/utils/environment.py:96: UserWarning: Only CPU is detected in the instance. This may result in slow speed for MultiModalPredictor. Consider using an instance with GPU support.
warnings.warn(
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: True, using: 0 IPUs
HPU available: False, using: 0 HPUs
/usr/local/lib/python3.8/dist-packages/pytorch_lightning/trainer/trainer.py:1777: UserWarning: IPU available but not used. Set `accelerator` and `devices` using `Trainer(accelerator='ipu', devices=4)`.
rank_zero_warn(
| Name | Type | Params
-------------------------------------------------------------------
0 | model | HFAutoModelForTextPrediction | 278 M
1 | validation_metric | Accuracy | 0
2 | loss_func | CrossEntropyLoss | 0
-------------------------------------------------------------------
278 M Trainable params
0 Non-trainable params
278 M Total params
1,112.190 Total estimated model params size (MB)
Importing Autogluon library with TPU instance:
import os
import numpy as np
import warnings
import pandas as pd
from IPython.display import display, Image
import json
# Auto Exploratory Data Analysis
from pandas_profiling import ProfileReport
# AutoML
from autogluon.core.utils.loaders import load_zip
from autogluon.multimodal import MultiModalPredictor
from autogluon.multimodal.data.infer_types import infer_column_types
from autogluon.tabular import TabularPredictor
from autogluon.features.generators import AutoMLPipelineFeatureGenerator
from autogluon.tabular import FeatureMetadata
pd.set_option('display.max_columns', None)
np.random.seed(123)
Output:
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /opt/conda/lib/python3.7/site-packages/transformers/utils/import_utils.py:1063 in _get_module │
│ │
│ 1060 │ │
│ 1061 │ def _get_module(self, module_name: str): │
│ 1062 │ │ try: │
│ ❱ 1063 │ │ │ return importlib.import_module("." + module_name, self.__name__) │
│ 1064 │ │ except Exception as e: │
│ 1065 │ │ │ raise RuntimeError( │
│ 1066 │ │ │ │ f"Failed to import {self.__name__}.{module_name} because of the followin │
│ │
│ /opt/conda/lib/python3.7/importlib/__init__.py:127 in import_module │
│ │
│ 124 │ │ │ if character != '.': │
│ 125 │ │ │ │ break │
│ 126 │ │ │ level += 1 │
│ ❱ 127 │ return _bootstrap._gcd_import(name[level:], package, level) │
│ 128 │
│ 129 │
│ 130 _RELOADING = {} │
│ <frozen importlib._bootstrap>:1006 in _gcd_import │
│ <frozen importlib._bootstrap>:983 in _find_and_load │
│ <frozen importlib._bootstrap>:967 in _find_and_load_unlocked │
│ <frozen importlib._bootstrap>:677 in _load_unlocked │
│ <frozen importlib._bootstrap_external>:728 in exec_module │
│ <frozen importlib._bootstrap>:219 in _call_with_frames_removed │
│ │
│ /opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_utils.py:39 in <module> │
│ │
│ 36 from tensorflow.python.keras.saving import hdf5_format │
│ 37 │
│ 38 from huggingface_hub import Repository, list_repo_files │
│ ❱ 39 from keras.saving.hdf5_format import save_attributes_to_hdf5_group │
│ 40 from transformers.utils.hub import convert_file_size_to_int, get_checkpoint_shard_files │
│ 41 │
│ 42 from . import DataCollatorWithPadding, DefaultDataCollator │
│ │
│ /opt/conda/lib/python3.7/site-packages/keras/__init__.py:21 in <module> │
│ │
│ 18 [keras.io](https://keras.io). │
│ 19 """ │
│ 20 from keras import distribute │
│ ❱ 21 from keras import models │
│ 22 from keras.engine.input_layer import Input │
│ 23 from keras.engine.sequential import Sequential │
│ 24 from keras.engine.training import Model │
│ │
│ /opt/conda/lib/python3.7/site-packages/keras/models/__init__.py:18 in <module> │
│ │
│ 15 """Keras models API.""" │
│ 16 │
│ 17 │
│ ❱ 18 from keras.engine.functional import Functional │
│ 19 from keras.engine.sequential import Sequential │
│ 20 from keras.engine.training import Model │
│ 21 │
│ │
│ /opt/conda/lib/python3.7/site-packages/keras/engine/functional.py:26 in <module> │
│ │
│ 23 │
│ 24 import tensorflow.compat.v2 as tf │
│ 25 │
│ ❱ 26 from keras import backend │
│ 27 from keras.dtensor import layout_map as layout_map_lib │
│ 28 from keras.engine import base_layer │
│ 29 from keras.engine import base_layer_utils │
│ │
│ /opt/conda/lib/python3.7/site-packages/keras/backend.py:32 in <module> │
│ │
│ 29 import numpy as np │
│ 30 import tensorflow.compat.v2 as tf │
│ 31 │
│ ❱ 32 from keras import backend_config │
│ 33 from keras.distribute import distribute_coordinator_utils as dc │
│ 34 from keras.engine import keras_tensor │
│ 35 from keras.utils import control_flow_util │
│ │
│ /opt/conda/lib/python3.7/site-packages/keras/backend_config.py:33 in <module> │
│ │
│ 30 │
│ 31 │
│ 32 @keras_export("keras.backend.epsilon") │
│ ❱ 33 @tf.__internal__.dispatch.add_dispatch_support │
│ 34 def epsilon(): │
│ 35 │ """Returns the value of the fuzz factor used in numeric expressions. │
│ 36 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
AttributeError: module 'tensorflow.compat.v2.__internal__' has no attribute 'dispatch'
The above exception was the direct cause of the following exception:
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /tmp/ipykernel_248/1088848273.py:13 in <module> │
│ │
│ [Errno 2] No such file or directory: '/tmp/ipykernel_248/1088848273.py' │
│ │
│ /opt/conda/lib/python3.7/site-packages/autogluon/multimodal/__init__.py:6 in <module> │
│ │
│ 3 except ImportError: │
│ 4 │ pass │
│ 5 │
│ ❱ 6 from . import constants, data, models, optimization, predictor, utils │
│ 7 from .predictor import AutoMMPredictor, MultiModalPredictor │
│ 8 from .utils import download │
│ 9 │
│ │
│ /opt/conda/lib/python3.7/site-packages/autogluon/multimodal/optimization/__init__.py:1 in │
│ <module> │
│ │
│ ❱ 1 from . import lit_module, utils │
│ 2 │
│ │
│ /opt/conda/lib/python3.7/site-packages/autogluon/multimodal/optimization/lit_module.py:14 in │
│ <module> │
│ │
│ 11 │
│ 12 from ..constants import AUTOMM, LM_TARGET, LOGITS, T_FEW, TEMPLATE_LOGITS, WEIGHT │
│ 13 from ..data.mixup import MixupModule, multimodel_mixup │
│ ❱ 14 from .utils import apply_layerwise_lr_decay, apply_single_lr, apply_two_stages_lr, get_l │
│ 15 │
│ 16 logger = logging.getLogger(AUTOMM) │
│ 17 │
│ │
│ /opt/conda/lib/python3.7/site-packages/autogluon/multimodal/optimization/utils.py:57 in <module> │
│ │
│ 54 │ ROOT_MEAN_SQUARED_ERROR, │
│ 55 │ SPEARMANR, │
│ 56 ) │
│ ❱ 57 from ..utils import MeanAveragePrecision │
│ 58 from .losses import MultiNegativesSoftmaxLoss, SoftTargetCrossEntropy │
│ 59 from .lr_scheduler import ( │
│ 60 │ get_cosine_schedule_with_warmup, │
│ │
│ /opt/conda/lib/python3.7/site-packages/autogluon/multimodal/utils/__init__.py:39 in <module> │
│ │
│ 36 from .log import LogFilter, apply_log_filter, make_exp_dir │
│ 37 from .map import MeanAveragePrecision │
│ 38 from .matcher import compute_semantic_similarity, convert_data_for_ranking, create_siame │
│ ❱ 39 from .metric import compute_ranking_score, compute_score, get_minmax_mode, infer_metrics │
│ 40 from .misc import logits_to_prob, shopee_dataset, tensor_to_ndarray │
│ 41 from .mmcv import CollateMMCV, send_datacontainers_to_device, unpack_datacontainers │
│ 42 from .model import create_fusion_model, create_model, list_timm_models, modify_duplicate │
│ │
│ /opt/conda/lib/python3.7/site-packages/autogluon/multimodal/utils/metric.py:7 in <module> │
│ │
│ 4 import warnings │
│ 5 from typing import Dict, List, Optional, Tuple, Union │
│ 6 │
│ ❱ 7 import evaluate │
│ 8 import numpy as np │
│ 9 from sklearn.metrics import f1_score │
│ 10 │
│ │
│ /opt/conda/lib/python3.7/site-packages/evaluate/__init__.py:29 in <module> │
│ │
│ 26 │
│ 27 del version │
│ 28 │
│ ❱ 29 from .evaluator import ( │
│ 30 │ Evaluator, │
│ 31 │ ImageClassificationEvaluator, │
│ 32 │ QuestionAnsweringEvaluator, │
│ │
│ /opt/conda/lib/python3.7/site-packages/evaluate/evaluator/__init__.py:29 in <module> │
│ │
│ 26 │
│ 27 from .base import Evaluator │
│ 28 from .image_classification import ImageClassificationEvaluator │
│ ❱ 29 from .question_answering import QuestionAnsweringEvaluator │
│ 30 from .text_classification import TextClassificationEvaluator │
│ 31 from .token_classification import TokenClassificationEvaluator │
│ 32 │
│ │
│ /opt/conda/lib/python3.7/site-packages/evaluate/evaluator/question_answering.py:22 in <module> │
│ │
│ 19 │
│ 20 │
│ 21 try: │
│ ❱ 22 │ from transformers import Pipeline, PreTrainedModel, PreTrainedTokenizer, TFPreTraine │
│ 23 │ │
│ 24 │ TRANSFORMERS_AVAILABLE = True │
│ 25 except ImportError: │
│ <frozen importlib._bootstrap>:1032 in _handle_fromlist │
│ │
│ /opt/conda/lib/python3.7/site-packages/transformers/utils/import_utils.py:1053 in __getattr__ │
│ │
│ 1050 │ │ if name in self._modules: │
│ 1051 │ │ │ value = self._get_module(name) │
│ 1052 │ │ elif name in self._class_to_module.keys(): │
│ ❱ 1053 │ │ │ module = self._get_module(self._class_to_module[name]) │
│ 1054 │ │ │ value = getattr(module, name) │
│ 1055 │ │ else: │
│ 1056 │ │ │ raise AttributeError(f"module {self.__name__} has no attribute {name}") │
│ │
│ /opt/conda/lib/python3.7/site-packages/transformers/utils/import_utils.py:1068 in _get_module │
│ │
│ 1065 │ │ │ raise RuntimeError( │
│ 1066 │ │ │ │ f"Failed to import {self.__name__}.{module_name} because of the followin │
│ 1067 │ │ │ │ f" traceback):\n{e}" │
│ ❱ 1068 │ │ │ ) from e │
│ 1069 │ │
│ 1070 │ def __reduce__(self): │
│ 1071 │ │ return (self.__class__, (self._name, self.__file__, self._import_structure)) │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: Failed to import transformers.modeling_tf_utils because of the following error (look up to see its
traceback):
module 'tensorflow.compat.v2.__internal__' has no attribute 'dispatch'
| [
"As far as I can tell, the Autogluon library does not currently support using IPUs. The Poplar SDK supports PyTorch and PyTorch Lightning, which Autogluon is based on, so the library could in principle be supported. I'd be really interested to hear more about what you want to use Autogluon for!\nIn the meantime, there are many IPU resources available, including:\n\nPublic examples, including a wide range of applications written for the IPU: https://github.com/graphcore/examples\nTutorials: https://github.com/graphcore/tutorials\nThe Graphcore Optimum library, which has optimised implementations of HuggingFace transformer models: https://github.com/huggingface/optimum-graphcore\nThe model garden, with links to the public examples and Paperspace notebooks: https://www.graphcore.ai/resources/model-garden\n\nI'm sorry that what you want isn't supported, but I hope that the above is at least helpful.\nFor the TPU error, it looks like the environment in your instance comes with TensorFlow pre-installed, but it doesn't look like Autogluon uses TensorFlow. You might need to look into how to use PyTorch on TPUs with PyTorch/XLA: https://github.com/pytorch/xla\nWith thanks,\nCallum\n"
] | [
0
] | [] | [] | [
"automl",
"ipu",
"python",
"tpu"
] | stackoverflow_0074567293_automl_ipu_python_tpu.txt |
Q:
How to list of elements and use those elements as a header of pandas dataframe?
I have a list with some elements.
For example:
list= [name, phone_number,age,gender]
I want to use these elements as a header or column name in a pandas dataframe.
I would really appreciate your ideas.
A:
Assuming you put all the values that will be used as headers into an array using this code:
import pandas as pd
data = pd.read_csv("yourtable.csv")
i = 0
headers = []
while i < len(data.index):
headers.append(data.loc[i, "header of headers"])
i = i + 1
You can create your table using
df = pd.DataFrame(columns=headers)
EDIT:
With the new info just use:
df = pd.DataFrame(columns=list)
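A minimal end-to-end sketch, assuming the list is meant to hold string column names:
import pandas as pd

columns = ["name", "phone_number", "age", "gender"]
df = pd.DataFrame(columns=columns)
print(df.columns.tolist())  # ['name', 'phone_number', 'age', 'gender']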
| How to list of elements and use those elements as a header of pandas dataframe? | I have a list with some elements.
For example:
list= [name, phone_number,age,gender]
I want to use these elements as a header or column name in a pandas dataframe.
I would really appreciate your ideas.
| [
"Assuming you put all the values that will be used as headers into an array using this code:\nimport pandas as pd \ndata = pd.read_csv(\"yourtable.csv\")\ni = 0\nheaders = []\nwhile i < len(data. index):\n headers.append(data.loc[i, \"header of headers\"])\n i = i + 1\n\n\n\nYou can create your table using\ndf = pd.DataFrame(columns=headers)\n\nEDIT:\nWith the new info just use:\ndf = pd.DataFrame(columns=list)\n\n"
] | [
0
] | [] | [] | [
"bigdata",
"pandas",
"python"
] | stackoverflow_0074627811_bigdata_pandas_python.txt |
Q:
Clustering of similar items
There are items of data like this:
item1 = {
"path": "/some/path",
"data": {
"a": [0, 1, 2, ...], #numpy array
"b": [4, 9, 4, ...], #numpy array
"c": [7, 1, 0, ...], #numpy array
}
}
And I compare each item with each other. After that I have pairs like this:
pairs = []
pair = {
"a": item1,
"b": item2,
"diff": 12345,
}
pairs.append(pair)
pair = {
"a": item1,
"b": item3,
"diff": 987654,
}
pairs.append(pair)
And now I want clusters (groups) of all similar items. Items are similar the smaller the diff property is.
I assume this can be done somehow using data science methods but my data is not like a x,y coordinate system. (I added pandas tag, because I assume, it may be helpful here)
How can I arrange my items in clusters by using most similarity (=smallest diff attribute)?
A:
I found a solution. At first I reduced the item pairs by applying a threshold for diff (keep pairs having diff < 10000000).
Then I run this code to create the clusters (groups):
def get_groups(self, pairs):
def contains(list, filter):
for x in list:
if filter(x):
return True
return False
def in_group(a, b, group):
has_a = contains(group, lambda x: x["path"] == a["path"])
has_b = contains(group, lambda x: x["path"] == b["path"])
if has_a:
if not has_b:
group.append(b)
return True
elif has_b:
if not has_a:
                group.append(a)
return True
return False
def in_groups(a, b, groups):
found = False
for group in groups:
if in_group(a, b, group):
found = True
break
if not found:
groups.append([a, b])
groups = []
for pair in pairs:
in_groups(pair["a"], pair["b"], groups)
return groups
This creates one group for each set of similar items, where each group contains all items linked together by the retained pairs.
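A small usage sketch with made-up items (the self parameter is dropped here so the function can be called standalone):
item1, item2, item3 = {"path": "/a"}, {"path": "/b"}, {"path": "/c"}

pairs = [
    {"a": item1, "b": item2, "diff": 12345},
    {"a": item2, "b": item3, "diff": 54321},
]

groups = get_groups(pairs)  # get_groups as defined above, without self
print([[item["path"] for item in group] for group in groups])  # [['/a', '/b', '/c']]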
| Clustering of similar items | There are items of data like this:
item1 = {
"path": "/some/path",
"data": {
"a": [0, 1, 2, ...], #numpy array
"b": [4, 9, 4, ...], #numpy array
"c": [7, 1, 0, ...], #numpy array
}
}
And I compare each item with each other. After that I have pairs like this:
pairs = []
pair = {
"a": item1,
"b": item2,
"diff": 12345,
}
pairs.append(pair)
pair = {
"a": item1,
"b": item3,
"diff": 987654,
}
pairs.append(pair)
And now I want clusters (groups) of all similar items. Items are similar the smaller the diff property is.
I assume this can be done somehow using data science methods but my data is not like a x,y coordinate system. (I added pandas tag, because I assume, it may be helpful here)
How can I arrange my items in clusters by using most similarity (=smallest diff attribute)?
| [
"I found a solution. At first I reduced the item pairs by applying a threshold for diff (keep pairs having diff < 10000000).\nThen I run this code to create the clusters (groups):\ndef get_groups(self, pairs):\n\n def contains(list, filter):\n for x in list:\n if filter(x):\n return True\n return False\n\n def in_group(a, b, group):\n has_a = contains(group, lambda x: x[\"path\"] == a[\"path\"])\n has_b = contains(group, lambda x: x[\"path\"] == b[\"path\"])\n if has_a:\n if not has_b:\n group.append(b)\n return True\n elif has_b:\n if not has_a:\n group.append(b)\n return True\n return False\n\n def in_groups(a, b, groups):\n found = False\n for group in groups:\n if in_group(a, b, group):\n found = True\n break\n\n if not found:\n groups.append([a, b])\n\n groups = []\n for pair in pairs:\n in_groups(pair[\"a\"], pair[\"b\"], groups)\n \n return groups\n\nThis creates groups having one element for each group of similar items.\n"
] | [
0
] | [] | [] | [
"numpy",
"pandas",
"python"
] | stackoverflow_0074626666_numpy_pandas_python.txt |
Q:
A simple way of selecting the previous row in a column and performing an operation?
I'm trying to create a forecast which takes the previous day's 'Forecast' total and adds it to the current day's 'Appt'. Something which is straightforward in Excel but I'm struggling in pandas. At the moment all I can get in pandas using .loc is this:
pd.DataFrame({'Date': ['2022-12-01', '2022-12-02','2022-12-03','2022-12-04','2022-12-05'],
'Appt': [12,10,5,4,13],
'Forecast': [37,0,0,0,0]
})
What I'm looking for it to do is this:
pd.DataFrame({'Date': ['2022-12-01', '2022-12-02','2022-12-03','2022-12-04','2022-12-05'],
'Appt': [12,10,5,4,13],
'Forecast': [37,47,52,56,69]
})
E.g. the 'Forecast' total on the 1st December is 37. On the 2nd December the value in the 'Appt' column is 10. I want it to select 37 and add 10, then put this in the 'Forecast' column for the 2nd December, then iterate over the rest of the column.
I've tried using .loc() with the index, and experimented with .shift(), but neither seems to work for what I'd like. I also looked into .rolling(), but I think that's not appropriate either.
I'm sure there must be a simple way to do this?
Apologies, the original df has 'Date' as a datetime column.
A:
You can use mask and cumsum:
df['Forecast'] = df['Forecast'].mask(df['Forecast'].eq(0), df['Appt']).cumsum()
# or
df['Forecast'] = np.where(df['Forecast'].eq(0), df['Appt'], df['Forecast']).cumsum()
Output:
         Date  Appt  Forecast
0  2022-12-01    12        37
1  2022-12-02    10        47
2  2022-12-03     5        52
3  2022-12-04     4        56
4  2022-12-05    13        69
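For comparison, the explicit row-by-row version of the same calculation (a sketch assuming the default RangeIndex, mirroring the Excel-style logic from the question):
import pandas as pd

df = pd.DataFrame({'Date': ['2022-12-01', '2022-12-02', '2022-12-03', '2022-12-04', '2022-12-05'],
                   'Appt': [12, 10, 5, 4, 13],
                   'Forecast': [37, 0, 0, 0, 0]})

for i in range(1, len(df)):
    df.loc[i, 'Forecast'] = df.loc[i - 1, 'Forecast'] + df.loc[i, 'Appt']

print(df['Forecast'].tolist())  # [37, 47, 52, 56, 69]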
A:
You have to make sure that your column has datetime/date type, then you may filter df like this:
# previous code&imports
yesterday = datetime.now().date() - timedelta(days=1)
df[df["date"] == yesterday]["your_column"].sum()
| A simple way of selecting the previous row in a column and performing an operation? | I'm trying to create a forecast which takes the previous day's 'Forecast' total and adds it to the current day's 'Appt'. Something which is straightforward in Excel but I'm struggling in pandas. At the moment all I can get in pandas using .loc is this:
pd.DataFrame({'Date': ['2022-12-01', '2022-12-02','2022-12-03','2022-12-04','2022-12-05'],
'Appt': [12,10,5,4,13],
'Forecast': [37,0,0,0,0]
})
What I'm looking for it to do is this:
pd.DataFrame({'Date': ['2022-12-01', '2022-12-02','2022-12-03','2022-12-04','2022-12-05'],
'Appt': [12,10,5,4,13],
'Forecast': [37,47,52,56,69]
})
E.g. 'Forecast' total on the 1st December is 37. On the 2nd December the value in the 'Appt' column in 10. I want it to select 37 and + 10, then put this in the 'Forecast' column for the 2nd December. Then iterate over the rest of the column.
I've tied using .loc() with the index, and experimented with .shift() but neither seem to work for what I'd like. Also looked into .rolling() but I think that's not appropriate either.
I'm sure there must be a simple way to do this?
Apologies, the original df has 'Date' as a datetime column.
| [
"You can use mask and cumsum:\ndf['Forecast'] = df['Forecast'].mask(df['Forecast'].eq(0), df['Appt']).cumsum()\n\n# or\ndf['Forecast'] = np.where(df['Forecast'].eq(0), df['Appt'], df['Forecast']).cumsum()\n\nOutput:\n Date Appt Forecast\n0 2022-12-01 12 37\n1 2022-12-01 10 47\n2 2022-12-01 5 52\n3 2022-12-01 4 56\n4 2022-12-01 13 69\n\n",
"You have to make sure that your column has datetime/date type, then you may filter df like this:\n# previous code&imports\nyesterday = datetime.now().date() - timedelta(days=1)\ndf[df[\"date\"] == yesterday][\"your_column\"].sum()\n\n"
] | [
0,
0
] | [] | [] | [
"pandas",
"python",
"time_series"
] | stackoverflow_0074627930_pandas_python_time_series.txt |
Q:
SDK is not defined for Run Configuration
When I'm trying to run my project in PyCharm I'm getting an error:
SDK is not defined for Run Configuration.
I tried to set a new interpreter and tried everything.
What does "SDK" mean and where can I configure it?
A:
I just had this same issue (see my comment above). What worked for me was to go into "Edit Configurations", delete the configuration that was copied over from the original PC, and create my own configuration (basically with the same inputs as before).
A:
This might happen if a run configuration was imported from another computer.
Run configuration can contain a path to a Python interpreter from that computer.
By default, run configurations are stored in .idea/runConfigurations/*.xml.
If you pull up .idea/runConfigurations/<your-configuration>.xml you'll probably notice a fixed path to the Python interpreter somewhere in there, e.g.:
<option name="SDK_HOME" value="$USER_HOME$/.local/share/virtualenvs/your-project-name-XXXXXXXX/bin/python" />
alongside with
<option name="IS_MODULE_SDK" value="false" />
If that's true, try changing those lines to
<option name="SDK_HOME" value="" />
<option name="IS_MODULE_SDK" value="true" />
This will make your IntelliJ IDE look for Python in your local project's virtual environment folder.
Basically, like folks have already mentioned before, simple recreation of your run configuration by hand will help because it will disregard other machine's settings and apply yours, but sometimes it may cumbersome (especially, if your run configuration contains some project-specific environment variables and other stuff which can be tedious to copy over).
So, just changing those two lines in the run configuration XML file should help without recreating it manually again from scratch.
The nastiest thing about this situation with Python path is that when you import a run configuration for a Python project from another machine with Python path being hardcoded PyCharm will still keep showing you yours in the Edit Configurations window making it really hard to track down and fix, so in this very specific case don't believe what you see in the run configuration UI -- just go and check the run configuration XML config behind.
A:
I faced the same problem, and I have solved it.
You need to open the Run/Debug Configurations of PyCharm.
Then find the Parameters field of your .py file, copy and delete the parameters, and then fill the parameters in again as they were.
A:
The IDE itself proposed to use the recently created venv as the interpreter; I agreed, and after that this mistake occurred. The solution: I went to Run -> Edit Configurations and saw that the interpreter path wasn't set. Then I added the path myself and everything worked.
| SDK is not defined for Run Configuration | When I'm trying to run my project in PyCharm I'm getting an error:
SDK is not defined for Run Configuration.
I tried to set a new interpreter and tried everything.
What does "SDK" mean and where can I configure it?
| [
"I just had this same issue (see my comment above). What worked for me was to go into \"Edit Configurations\", delete the configuration that was copied over from the original PC, and create my own configuration (basically with the same inputs as before).\n",
"This might happen if a run configuration was imported from another computer.\nRun configuration can contain a path to a Python interpreter from that computer.\nBy default, run configurations are stored in .idea/runConfigurations/*.xml.\nIf you pull up .idea/runConfigurations/<your-configuration>.xml you'll probably notice a fixed path to the Python interpreter somewhere in there, e.g.:\n<option name=\"SDK_HOME\" value=\"$USER_HOME$/.local/share/virtualenvs/your-project-name-XXXXXXXX/bin/python\" />\n\nalongside with\n<option name=\"IS_MODULE_SDK\" value=\"false\" />\n\nIt that's true try changing those lines to\n<option name=\"SDK_HOME\" value=\"\" />\n<option name=\"IS_MODULE_SDK\" value=\"true\" />\n\nThis will make your IntelliJ IDE look for Python in your local project's virtual environment folder.\n\nBasically, like folks have already mentioned before, simple recreation of your run configuration by hand will help because it will disregard other machine's settings and apply yours, but sometimes it may cumbersome (especially, if your run configuration contains some project-specific environment variables and other stuff which can be tedious to copy over).\nSo, just changing those two lines in the run configuration XML file should help without recreating it manually again from scratch.\nThe nastiest thing about this situation with Python path is that when you import a run configuration for a Python project from another machine with Python path being hardcoded PyCharm will still keep showing you yours in the Edit Configurations window making it really hard to track down and fix, so in this very specific case don't believe what you see in the run configuration UI -- just go and check the run configuration XML config behind.\n",
"I faced the same problem,and I have solved it.\nYou need to open Run/Debug Configurations of Pycharm.\nThen find Parameters of your py file, Copy and delete the parameters, and then fill in the parameters as they are\n",
"The IDE itself proposed to make recently created venv as interpreter, I agreed and after that this mistake occured. The decision: I came to the run->edit configurations and I saw that interpreter path wasn't setted. Than I've added the path by myself and everything worked\n"
] | [
2,
1,
0,
0
] | [] | [] | [
"pycharm",
"python"
] | stackoverflow_0074076140_pycharm_python.txt |
Q:
Converting .txt to .pdf
I have code to convert a .txt file to a .pdf. I'm 99% sure that it converts the file to .pdf, but it won't output the PDF file.
Below is my code. I got it from an online website, btw.
from fpdf import FPDF
pdf = FPDF()
pdf.add_page()
pdf.set_font("Arial", size=15)
f = open("text-file-name.txt", "r")
for x in f:
pdf.cell(200, 10, txt=x, ln=1, align='C')
pdf.output("completed.pdf")
If I run this code as it is, it comes up with this error:
Traceback (most recent call last):
File "main.py", line 27, in <module>
pdf.output("blh.pdf")
File "/home/runner/RoyalblueTimelyProperties/venv/lib/python3.8/site-packages/fpdf/fpdf.py", line 1065, in output
self.close()
File "/home/runner/RoyalblueTimelyProperties/venv/lib/python3.8/site-packages/fpdf/fpdf.py", line 246, in close
self._enddoc()
File "/home/runner/RoyalblueTimelyProperties/venv/lib/python3.8/site-packages/fpdf/fpdf.py", line 1636, in _enddoc
self._putpages()
File "/home/runner/RoyalblueTimelyProperties/venv/lib/python3.8/site-packages/fpdf/fpdf.py", line 1170, in _putpages
p = self.pages[n].encode("latin1") if PY3K else self.pages[n]
UnicodeEncodeError: 'latin-1' codec can't encode characters in position 228-233: ordinal not in range(256)
I have uploaded the file to replit already, does anyone know what to do?
A:
There are 2 things to be aware of:
Not every encoding (in your case latin-1) can represent every possible character. An encoding maps bit-patterns to characters. An encoding that uses 7 bits (and an 8th check-bit) is only able to represent 2^7 characters. The designers of the encoding thus have to make decisions about which characters are in the encoding.
Not every font can represent every encoding. Font files take up room, so there is a drive to limit the set of characters you want to represent. Each character takes up some rendering instructions (and thus bytes). Not to mention you need to hire an artist to actually design the characters, which takes money.
You are currently running into an error because of (1). You could simply read the file with another encoding.
Keep in mind though that when you are creating a PDF, you are (implictly perhaps) using a font.
PDF (ISO 32000) defines 14 standard type 1 fonts. These fonts are special. A conforming reader (e.g. Adobe Reader) is required to have them on hand. That means when you are creating a PDF (or writing a software library to create a PDF) it is a lot easier to use one of the standard 14 than any other font. You don't have to do any tricky "insert the font bytes in the PDF". Loads of PDF-creation libraries will default to one of the 14 if no font is specified.
These fonts all have an encoding. And those encodings may not support the characters that are currently in your text file. This can also be solved by using a non-default font.
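As a rough sketch of that last point, one common workaround is to register a Unicode TrueType font with fpdf and read the file as UTF-8. The font name and file path below are assumptions (a DejaVuSans.ttf file would need to be available next to the script):
from fpdf import FPDF

pdf = FPDF()
pdf.add_page()
# register a Unicode-capable TrueType font (the file path is an assumption)
pdf.add_font("DejaVu", "", "DejaVuSans.ttf", uni=True)
pdf.set_font("DejaVu", size=15)

with open("text-file-name.txt", "r", encoding="utf-8") as f:
    for line in f:
        pdf.cell(200, 10, txt=line.strip(), ln=1, align="C")

pdf.output("completed.pdf")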
| Converting .txt to .pdf | I have code to convert a .txt file to a .pdf. I'm 99% sure that it converts the file to .pdf, but it won't output the PDF file.
Below is my code. I got it from an online website, btw.
from fpdf import FPDF
pdf = FPDF()
pdf.add_page()
pdf.set_font("Arial", size=15)
f = open("text-file-name.txt", "r")
for x in f:
pdf.cell(200, 10, txt=x, ln=1, align='C')
pdf.output("completed.pdf")
If I run this code as it is, it comes up with this error:
Traceback (most recent call last):
File "main.py", line 27, in <module>
pdf.output("blh.pdf")
File "/home/runner/RoyalblueTimelyProperties/venv/lib/python3.8/site-packages/fpdf/fpdf.py", line 1065, in output
self.close()
File "/home/runner/RoyalblueTimelyProperties/venv/lib/python3.8/site-packages/fpdf/fpdf.py", line 246, in close
self._enddoc()
File "/home/runner/RoyalblueTimelyProperties/venv/lib/python3.8/site-packages/fpdf/fpdf.py", line 1636, in _enddoc
self._putpages()
File "/home/runner/RoyalblueTimelyProperties/venv/lib/python3.8/site-packages/fpdf/fpdf.py", line 1170, in _putpages
p = self.pages[n].encode("latin1") if PY3K else self.pages[n]
UnicodeEncodeError: 'latin-1' codec can't encode characters in position 228-233: ordinal not in range(256)
I have uploaded the file to replit already, does anyone know what to do?
| [
"There are 2 things to be aware of:\n\nNot every encoding (in your case latin-1) can represent every possible character. An encoding maps bit-patterns to characters. An encoding that uses 7 bits (and an 8th check-bit) is only able to represent 2^7 characters. The designers of the encoding thus have to make decisions about which characters are in the encoding.\nNot every font can represent every encoding. Font files take up room, so there is a drive to limit the set of characters you want to represent. Each character takes up some rendering instructions (and thus bytes). Not to mention you need to hire an artist to actually design the characters, which takes money.\n\nYou are currently running into an error because of (1). You could simply read the file with another encoding.\nKeep in mind though that when you are creating a PDF, you are (implictly perhaps) using a font.\nPDF (ISO 32000) defines 14 standard type 1 fonts. These fonts are special. A conforming reader (e.g. Adobe Reader) is required to have them on hand. That means when you are creating a PDF (or writing a software library to create a PDF) it is a lot easier to use one of the standard 14 than any other font. You don't have to do any tricky \"insert the font bytes in the PDF\". Loads of PDF-creation libraries will default to one of the 14 if no font is specified.\nThese fonts all have an encoding. And those encodings may not support the characters that are currently in your text file. This can also be solved by using a non-default font.\n"
] | [
0
] | [] | [] | [
"pdf",
"python",
"replit",
"txt"
] | stackoverflow_0074555378_pdf_python_replit_txt.txt |
Q:
Pillow not recognizing Libraqm installation on Mac OS
I need to be able to render text in Python in various fonts and using various writing systems that use variable substitution of characters (Arabic, Hindi, Bengali). On a previous machine I had no issue doing this, but I just moved to a new machine with the same conda environment and it doesn't seem to work.
Mac OS 12.6
Python version 3.9.13
Libraqm version 0.9.0 (installed with brew)
Pillow version 9.2.0 (installed with pip)
Here's a minimal reproducible example. It should produce the Urdu word لڑكا – with the two groups of two letters attached – but instead it's producing something that looks like ا ک ڑ ل, with each letter disconnected. It's rendering the specific font correctly, so it's not a matter of the font not being found. When I run the last few lines of code checking for the 'raqm' feature, I get False.
from PIL import ImageFont
from PIL import ImageDraw
from PIL import Image
font = ImageFont.truetype('KawkabMono-Bold.ttf',12,layout_engine=ImageFont.Layout.RAQM)
img = Image.new('1', (200, 100), 'white')
d = ImageDraw.Draw(img)
d.text((100,50), 'لڑكا', fill='black', font=font)
img.save('test.png')
from PIL import features
features.check_feature(feature='raqm')
I've tried installing and reinstalling a few times, and building from different sources. My guess is that libraqm is not on the correct path for Pillow, but I'm not sure how to check or fix this.
I tried running the minimal reproducible example and expected to see properly rendered RTL text. Instead, I got an image file of the text rendered left-to-right and disconnected.
A:
On Mac Os 12.6.1 and Python 3.10.8 System Interpreter I was able to get libraqm to work with Pillow. Configuration follows:
Python interpreter version 3.10.8 installed with Homebrew.
Libraqm version 0.9.0 installed with Homebrew.
Pillow version 9.3.0 installed with pip.
First, I tried using a python venv, and I got UserWarning: Raqm layout was requested, but Raqm is not available. Falling back to basic layout.
Next, I added the following to the code you posted: print(features.check("raqm")) and it printed False as expected.
I used my Python System Interpreter (3.10.8), installed Pillow and I was able to get libraqm to load with Pillow.
Image without libraqm. Image with libraqm
I added a fonts directory with the font specified on your snippet above. I loaded the font with the following code:
features.check_feature(feature='raqm')
print(features.check("raqm"))
custom_font = os.path.abspath(
os.path.join(
os.path.dirname(__file__), 'fonts/KawkabMono-Bold.ttf'
)
)
If the libraqm library is not loaded by the Python interpreter, then your output will be like you described (individual characters).
A:
I think you additionally need fribidi and harfbuzz installed before PIL/Pillow:
brew install fribidi harfbuzz
Note that you can get PIL/Pillow's configuration, build settings, features and supported formats with:
python3 -m PIL
| Pillow not recognizing Libraqm installation on Mac OS | I need to be able to render text in python in various fonts and using various writing systems that use variable substitution of characters (Arabic, Hindi, Bengali). On a previous machine I had no issue doing this, but I just moved into a new machine with the same conda environment and it doesn't seme to work.
Mac OS 12.6
Python version 3.9.13
Libraqm version 0.9.0 (installed with brew)
Pillow version 9.2.0 (installed with pip)
Here's a minimal reproducible example. It should produce the Urdu word لڑكا – with the two groups of two letters attached – but instead it's producing something that looks like ا ک ڑ ل, with each letter disconnected. It's rendering the specific font correctly, so it's not a matter of the font not being found. When I run the last few lines of code checking for the 'raqm' feature, I get False.
from PIL import ImageFont
from PIL import ImageDraw
from PIL import Image
font = ImageFont.truetype('KawkabMono-Bold.ttf',12,layout_engine=ImageFont.Layout.RAQM)
img = Image.new('1', (200, 100), 'white')
d = ImageDraw.Draw(img)
d.text((100,50), 'لڑكا', fill='black', font=font)
img.save('test.png')
from PIL import features
features.check_feature(feature='raqm')
I've tried installing and reinstalling a few times, and building from different sources. My guess is that libraqm is not on the correct path for Pillow, but I'm not sure how to check or fix this.
I tried running the minimal reproducible text and expected to see a properly render, RTL text with the correct rendering of text. Instead, I got an image file of the text rendered left-to-right and disconnected.
| [
"On Mac Os 12.6.1 and Python 3.10.8 System Interpreter I was able to get libraqm to work with Pillow. Configuration follows:\n\nPython Interpreter version 10.8 installed with Homebrew.\nLibraqm version 0.9.0 installed with Homebrew.\nPillow version 9.3.0 installed with pip.\n\nFirst, I tried using a python venv, and I got UserWarning: Raqm layout was requested, but Raqm is not available. Falling back to basic layout.\nNext, I added the following to the code you posted: print(features.check(\"raqm\")) and it printed False as expected.\nI used my Python System Interpreter (3.10.8), installed Pillow and I was able to get libraqm to load with Pillow.\nImage without libraqm. Image with libraqm\nI added a fonts directory with the font specified on your snippet above. I loaded the font with the following code:\nfeatures.check_feature(feature='raqm')\nprint(features.check(\"raqm\"))\ncustom_font = os.path.abspath(\n os.path.join(\n os.path.dirname(__file__), 'fonts/KawkabMono-Bold.ttf'\n )\n)\n\nIf the librqam library is not loaded by the python interpreter, then your ouput will be like you described it ( individual characters).\n",
"I think you additionally need fribidi and harfbuzz installed before PIL/Pillow:\nbrew install fribidi harfbuzz\n\nNote that you can get PIL/Pillow's configuration, build settings, features and supported formats with:\npython3 -m PIL\n\n"
] | [
0,
0
] | [] | [] | [
"image_processing",
"python",
"python_3.x",
"python_imaging_library"
] | stackoverflow_0074608140_image_processing_python_python_3.x_python_imaging_library.txt |
Q:
Why am I getting wrong matrix
I'm trying to print the following matrix and vector,
A = np.array ([[2,1,4,1], [3,4,-1,-1] , [1,-4,1,5] , [2,-2,1,3]], float)
v = np.array([-4, 3, 9, 7], float)
But I'm getting this instead
A =
[[ 1. 0.5 2. 0.5]
[ 0. 1. -2.8 -1. ]
[-0. -0. 1. -0. ]
[-0. -0. -0. 1. ]]
v = [-2. 3.6 -2. 1. ]
If I change from float to int I get the right matrix, but then I get an error when trying to run the following code
for m in range(N):
div = A[m,m]
A[m,:] /= div
v[m] /= div
UFuncTypeError: Cannot cast ufunc 'true_divide' output from dtype('float64') to dtype('int64') with casting rule 'same_kind'
How can I solve this problem?
A:
I guess the right syntax is this?
import numpy as np
A = np.array ([[2,1,4,1], [3,4,-1,-1] , [1,-4,1,5] , [2,-2,1,3]], dtype = float)
print(A)
Gives #
[[ 2. 1. 4. 1.]
[ 3. 4. -1. -1.]
[ 1. -4. 1. 5.]
[ 2. -2. 1. 3.]]
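As a side note, the UFuncTypeError in the question comes from doing in-place division on an integer array; a small sketch showing that keeping (or casting to) a float dtype avoids it:
import numpy as np

A = np.array([[2, 1, 4, 1], [3, 4, -1, -1], [1, -4, 1, 5], [2, -2, 1, 3]], dtype=float)
v = np.array([-4, 3, 9, 7], dtype=float)

# in-place true division works because the arrays are float64
div = A[0, 0]
A[0, :] /= div
v[0] /= div
print(A[0], v[0])  # [1.  0.5 2.  0.5] -2.0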
| Why am I getting wrong matrix | I'm trying to print the following matrix and vector,
A = np.array ([[2,1,4,1], [3,4,-1,-1] , [1,-4,1,5] , [2,-2,1,3]], float)
v = np.array([-4, 3, 9, 7], float)
But I'm getting this instead
A =
[[ 1. 0.5 2. 0.5]
[ 0. 1. -2.8 -1. ]
[-0. -0. 1. -0. ]
[-0. -0. -0. 1. ]]
v = [-2. 3.6 -2. 1. ]
If I change from float to int I get the right matrix but then I gett error when trying to run the following code
for m in range(N):
div = A[m,m]
A[m,:] /= div
v[m] /= div
UFuncTypeError: Cannot cast ufunc 'true_divide' output from dtype('float64') to dtype('int64') with casting rule 'same_kind'
How can I solve tis problem?
| [
"I guess right syntax is this?\nimport numpy as np\nA = np.array ([[2,1,4,1], [3,4,-1,-1] , [1,-4,1,5] , [2,-2,1,3]], dtype = float)\nprint(A)\n\nGives #\n[[ 2. 1. 4. 1.]\n [ 3. 4. -1. -1.]\n [ 1. -4. 1. 5.]\n [ 2. -2. 1. 3.]]\n\n"
] | [
1
] | [] | [] | [
"matrix",
"numpy",
"python",
"vector"
] | stackoverflow_0074628036_matrix_numpy_python_vector.txt |
Q:
Django Bash completion not working on manage.py
I am trying out the django_bash_completion script provided by Django but can't use it with python manage.py command.
I am trying out the django_bash_completion script provided by Django. I have added it to active script in the virtual environment. It works with django-admin but can't use it with python manage.py command.
cd django-test-project
virtualenv -p python3 venv
echo "source /path/to/.django_bash_completion" >> venv/bin/active
active
django-admin<tab><tab>
python manage.py<tab><tab>
For django-admin it shows all options like check, makemigrations, migrate runserver etc but when I run python manage.py it gives manage.py: command not found.
Any idea why and how can I solve it?
I am running bash on Ubuntu 18.04
A:
You have to run manage.py <tab><tab> not python manage.py <tab><tab>
As the Official Documentation says: https://docs.djangoproject.com/en/dev/ref/django-admin/#bash-completion
A:
I've been using this old original bash-completion for Django. still works :) you don't need to ./manage.py ... try python manage.py [tab]
https://gist.github.com/vigo/5c25936a682845932bdaf17126c4167d
tested on macOS Ventura, python 3.11.0 + Django 4.1.3
| Django Bash completion not working on manage.py | I am trying out the django_bash_completion script provided by Django but can't use it with python manage.py command.
I am trying out the django_bash_completion script provided by Django. I have added it to active script in the virtual environment. It works with django-admin but can't use it with python manage.py command.
cd django-test-project
virtualenv -p python3 venv
echo "source /path/to/.django_bash_completion" >> venv/bin/active
active
django-admin<tab><tab>
python manage.py<tab><tab>
For django-admin it shows all options like check, makemigrations, migrate runserver etc but when I run python manage.py it gives manage.py: command not found.
Any idea why and how can I solve it?
I am running bash on Ubuntu 18.04
| [
"You have to run manage.py <tab><tab> not python manage.py <tab><tab>\nAs the Official Documentation says: https://docs.djangoproject.com/en/dev/ref/django-admin/#bash-completion\n",
"I've been using this old original bash-completion for Django. still works :) you don't need to ./manage.py ... try python manage.py [tab]\nhttps://gist.github.com/vigo/5c25936a682845932bdaf17126c4167d\ntested on macOS Ventura, python 3.11.0 + Django 4.1.3\n"
] | [
4,
0
] | [] | [] | [
"bash_completion",
"django",
"django_manage.py",
"python"
] | stackoverflow_0057937578_bash_completion_django_django_manage.py_python.txt |
Q:
python, pygsheets question - how to gain the color of each cell in a column, fast?
I have this code, which checks the color of each cell in a Google Sheets worksheet.
That would be OK, but for 1200 rows it takes 400 seconds, so I wanted to ask if someone knows of a better way to check the color of each cell in a column (I couldn't find how to check only 1 column and not the whole sheet) and put it in a list?
Can I use get_all_values() for only 1 column?
import pygsheets
cells = cyber_worksheet.get_all_values(returnas='cell',include_tailing_empty=False, include_tailing_empty_rows=False)
color_code = []
for r in cells:
for c in r:
color_code.append(c.color)
return color_code
this worked, but very very slowly.... i was wondering if there was
A:
I believe your goal is as follows.
You want to retrieve the values from only one column instead of all cells.
You want to achieve this using pygsheets.
In this case, how about using get_col instead of get_all_values? When this is reflected in your script, how about the following modification?
From:
cells = cyber_worksheet.get_all_values(returnas='cell',include_tailing_empty=False, include_tailing_empty_rows=False)
color_code = []
for r in cells:
for c in r:
color_code.append(c.color)
To:
cells = cyber_worksheet.get_col(1, returnas="cell", include_tailing_empty=False)
color_code = []
for r in cells:
color_code.append(r.color)
In this case, the values are retrieved from column "A". When you want to retrieve the values from the column "B", please modify cyber_worksheet.get_col(1, returnas="cell", include_tailing_empty=False) to cyber_worksheet.get_col(2, returnas="cell", include_tailing_empty=False).
In this case, it seems that the value of cells is a one-dimensional array, and each element is each row.
Note:
If you use get_all_values, how about the following modification?
col = 1 # Column "A"
cells = cyber_worksheet.get_all_values(returnas="cell", include_tailing_empty=False, include_tailing_empty_rows=False)
color_code = []
for r in cells:
color_code.append(r[col - 1].color)
Reference:
get_col(col, returnas='matrix', include_tailing_empty=True, **kwargs)
| python, pygsheets question - how to gain the color of each cell in a column, fast? | i have this code, that checks color of each cell in a google sheets worksheet.
that would be ok, but for 1200 rows, it takes 400 seconds to do so, so i wanted to ask if someone know of a better way to check color of a each cell in a column(i couldnt find how to check only 1 column, and not the whole sheet), and put it in a list?
can i use get_all_values() for only 1 column?
import pygsheets
cells = cyber_worksheet.get_all_values(returnas='cell',include_tailing_empty=False, include_tailing_empty_rows=False)
color_code = []
for r in cells:
for c in r:
color_code.append(c.color)
return color_code
this worked, but very very slowly.... i was wondering if there was
| [
"I believe your goal is as follows.\n\nYou want to retrieve the values from only one column instead of all cells.\nYou want to achieve this using pygsheets.\n\nIn this case, how about using get_col instead of get_all_values? When this is reflected in your script, how about the following modification?\nFrom:\ncells = cyber_worksheet.get_all_values(returnas='cell',include_tailing_empty=False, include_tailing_empty_rows=False)\ncolor_code = []\nfor r in cells:\n for c in r:\n color_code.append(c.color)\n\nTo:\ncells = cyber_worksheet.get_col(1, returnas=\"cell\", include_tailing_empty=False)\ncolor_code = []\nfor r in cells:\n color_code.append(r.color)\n\n\nIn this case, the values are retrieved from column \"A\". When you want to retrieve the values from the column \"B\", please modify cyber_worksheet.get_col(1, returnas=\"cell\", include_tailing_empty=False) to cyber_worksheet.get_col(2, returnas=\"cell\", include_tailing_empty=False).\nIn this case, it seems that the value of cells is a one-dimensional array, and each element is each row.\n\nNote:\n\nIf you use get_all_values, how about the following modification?\n col = 1 # Column \"A\"\n cells = cyber_worksheet.get_all_values(returnas=\"cell\", include_tailing_empty=False, include_tailing_empty_rows=False)\n color_code = []\n for r in cells:\n color_code.append(r[col - 1].color)\n\n\n\nReference:\n\nget_col(col, returnas='matrix', include_tailing_empty=True, **kwargs)\n\n"
] | [
0
] | [] | [] | [
"google_sheets",
"google_sheets_api",
"pygsheets",
"python"
] | stackoverflow_0074627208_google_sheets_google_sheets_api_pygsheets_python.txt |
Q:
Find the top N keys with highest values in Redis
I have Redis database with user_id: rating stucture and I need to get the N users with the highest rating (value), like:
u_345: 198
u_144: 180
u_267: 179
The idea I have: take a list of all the keys, and for each key get its value (db.mget(db.keys())), after sort by value and get first N. Is there a better way?
I use the redis-py Python library. But the main thing is to get the right algorithm (or a ready-made solution).
A:
It seems like you should follow the pattern of using Sorted Set as a secondary index.
See: https://redis.io/topics/indexes
A:
You should use ZRANGE (https://redis.io/commands/zrange/).
Using your dataset, you could use this approach:
ZADD ratingindex 198 u_345
ZADD ratingindex 180 u_144
ZADD ratingindex 179 u_267
and then, to retrieve 2 (or N) keys with highest values you should use:
ZRANGE ratingindex 0 1 withscores rev
This command will retrieve two keys (from 0 to 1) with values (withscores) from ratingindex ordered from highest to lowest (rev).
Finally, you just need to adapt these commands to the redis-py library.
Hope it helps.
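A possible redis-py adaptation, just as a sketch (it assumes a local Redis server and redis-py 3.x; the key name ratingindex follows the example above):
import redis

r = redis.Redis()  # assumes a Redis server on localhost:6379

# build the secondary index as a sorted set
r.zadd("ratingindex", {"u_345": 198, "u_144": 180, "u_267": 179})

# top N (here N=2) members with the highest scores
top = r.zrevrange("ratingindex", 0, 1, withscores=True)
print(top)  # [(b'u_345', 198.0), (b'u_144', 180.0)]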
| Find the top N keys with highest values in Redis | I have Redis database with user_id: rating stucture and I need to get the N users with the highest rating (value), like:
u_345: 198
u_144: 180
u_267: 179
The idea I have: take a list of all the keys, and for each key get its value (db.mget(db.keys())), after sort by value and get first N. Is there a better way?
I use the redis-py Python library. But the main thing is to get the right algorithm (or a ready-made solution).
| [
"It seems like you should follow the pattern of using Sorted Set as a secondary index.\nSee: https://redis.io/topics/indexes\n",
"You should use ZRANGE (https://redis.io/commands/zrange/).\nUsing your dataset, you could use this approach:\nZADD ratingindex 198 u_345\nZADD ratingindex 180 u_144\nZADD ratingindex 179 u_267\n\nand then, to retrieve 2 (or N) keys with highest values you should use:\nZRANGE ratingindex 0 1 withscores rev\nThis command will retrieve two keys (from 0 to 1) with values (withscores) from ratingindex ordered from highest to lowest (rev).\nFinally you just need to adapt this direct commands to redis-py library.\nHope it helps.\n"
] | [
2,
1
] | [] | [] | [
"python",
"redis"
] | stackoverflow_0058252864_python_redis.txt |
Q:
how to convert xls to xlsx
I have some *.xls (excel 2003) files, and I want to convert those files into xlsx (excel 2007).
I use the uno python package, when I save the documents,
I can set the Filter name: MS Excel 97
But there is no Filter name like 'MS Excel 2007',
How can set the the filter name to convert xls to xlsx ?
A:
You need to have win32com installed on your machine. Here is my code:
import win32com.client as win32
fname = "full+path+to+xls_file"
excel = win32.gencache.EnsureDispatch('Excel.Application')
wb = excel.Workbooks.Open(fname)
wb.SaveAs(fname+"x", FileFormat = 51) #FileFormat = 51 is for .xlsx extension
wb.Close() #FileFormat = 56 is for .xls extension
excel.Application.Quit()
A:
Here is my solution, without considering fonts, charts and images:
$ pip install pyexcel pyexcel-xls pyexcel-xlsx
Then do this:
import pyexcel as p
p.save_book_as(file_name='your-file-in.xls',
dest_file_name='your-new-file-out.xlsx')
If you do not need a program, you could install one additional package, pyexcel-cli:
$ pip install pyexcel-cli
$ pyexcel transcode your-file-in.xls your-new-file-out.xlsx
The transcoding procedure above uses xlrd and openpyxl.
A:
I've had to do this before. The main idea is to use the xlrd module to open and parse a xls file and write the
content to a xlsx file using the openpyxl module.
Here's my code. Attention! It cannot handle complex xls files, you should add you own parsing logic if you are going to use it.
import xlrd
from openpyxl.workbook import Workbook
from openpyxl.reader.excel import load_workbook, InvalidFileException
def open_xls_as_xlsx(filename):
# first open using xlrd
book = xlrd.open_workbook(filename)
index = 0
nrows, ncols = 0, 0
while nrows * ncols == 0:
sheet = book.sheet_by_index(index)
nrows = sheet.nrows
ncols = sheet.ncols
index += 1
# prepare a xlsx sheet
book1 = Workbook()
sheet1 = book1.get_active_sheet()
for row in xrange(0, nrows):
for col in xrange(0, ncols):
sheet1.cell(row=row, column=col).value = sheet.cell_value(row, col)
return book1
A:
I found none of the answers here 100% right, so I post my code here:
import xlrd
from openpyxl.workbook import Workbook
def cvt_xls_to_xlsx(src_file_path, dst_file_path):
book_xls = xlrd.open_workbook(src_file_path)
book_xlsx = Workbook()
sheet_names = book_xls.sheet_names()
for sheet_index, sheet_name in enumerate(sheet_names):
sheet_xls = book_xls.sheet_by_name(sheet_name)
if sheet_index == 0:
sheet_xlsx = book_xlsx.active
sheet_xlsx.title = sheet_name
else:
sheet_xlsx = book_xlsx.create_sheet(title=sheet_name)
for row in range(0, sheet_xls.nrows):
for col in range(0, sheet_xls.ncols):
sheet_xlsx.cell(row = row+1 , column = col+1).value = sheet_xls.cell_value(row, col)
book_xlsx.save(dst_file_path)
A:
The answer by Ray helped me a lot, but for those who are looking for a simple way to convert all the sheets from an xls to an xlsx, I made this Gist:
import xlrd
from openpyxl.workbook import Workbook as openpyxlWorkbook
# content is a string containing the file. For example the result of an http.request(url).
# You can also use a filepath by calling "xlrd.open_workbook(filepath)".
xlsBook = xlrd.open_workbook(file_contents=content)
workbook = openpyxlWorkbook()
for i in xrange(0, xlsBook.nsheets):
xlsSheet = xlsBook.sheet_by_index(i)
sheet = workbook.active if i == 0 else workbook.create_sheet()
sheet.title = xlsSheet.name
for row in xrange(0, xlsSheet.nrows):
for col in xrange(0, xlsSheet.ncols):
sheet.cell(row=row, column=col).value = xlsSheet.cell_value(row, col)
# The new xlsx file is in "workbook", without iterators (iter_rows).
# For iteration, use "for row in worksheet.rows:".
# For range iteration, use "for row in worksheet.range("{}:{}".format(startCell, endCell)):".
You can find the xlrd lib here and the openpyxl here (you must download xlrd in your project for Google App Engine for example).
A:
I improved the performance of @Jackypengyu's method.
XLSX: working per row, not per cell (http://openpyxl.readthedocs.io/en/default/api/openpyxl.worksheet.worksheet.html#openpyxl.worksheet.worksheet.Worksheet.append)
XLS: read whole row excluding empty tail, see ragged_rows=True (http://xlrd.readthedocs.io/en/latest/api.html#xlrd.sheet.Sheet.row_slice)
Merged cells will be converted too.
Results
Convert same 12 files in same order:
Original:
0:00:01.958159
0:00:02.115891
0:00:02.018643
0:00:02.057803
0:00:01.267079
0:00:01.308073
0:00:01.245989
0:00:01.289295
0:00:01.273805
0:00:01.276003
0:00:01.293834
0:00:01.261401
Improved:
0:00:00.774101
0:00:00.734749
0:00:00.741434
0:00:00.744491
0:00:00.320796
0:00:00.279045
0:00:00.315829
0:00:00.280769
0:00:00.316380
0:00:00.289196
0:00:00.347819
0:00:00.284242
Solution
def cvt_xls_to_xlsx(*args, **kw):
"""Open and convert XLS file to openpyxl.workbook.Workbook object
@param args: args for xlrd.open_workbook
@param kw: kwargs for xlrd.open_workbook
@return: openpyxl.workbook.Workbook
You need -> from openpyxl.utils.cell import get_column_letter
"""
book_xls = xlrd.open_workbook(*args, formatting_info=True, ragged_rows=True, **kw)
book_xlsx = Workbook()
sheet_names = book_xls.sheet_names()
for sheet_index in range(len(sheet_names)):
sheet_xls = book_xls.sheet_by_name(sheet_names[sheet_index])
if sheet_index == 0:
sheet_xlsx = book_xlsx.active
sheet_xlsx.title = sheet_names[sheet_index]
else:
sheet_xlsx = book_xlsx.create_sheet(title=sheet_names[sheet_index])
for crange in sheet_xls.merged_cells:
rlo, rhi, clo, chi = crange
sheet_xlsx.merge_cells(
start_row=rlo + 1, end_row=rhi,
start_column=clo + 1, end_column=chi,
)
def _get_xlrd_cell_value(cell):
value = cell.value
if cell.ctype == xlrd.XL_CELL_DATE:
value = datetime.datetime(*xlrd.xldate_as_tuple(value, 0))
return value
for row in range(sheet_xls.nrows):
sheet_xlsx.append((
_get_xlrd_cell_value(cell)
for cell in sheet_xls.row_slice(row, end_colx=sheet_xls.row_len(row))
))
for rowx in range(sheet_xls.nrows):
if sheet_xls.rowinfo_map[rowx].hidden != 0:
print sheet_names[sheet_index], rowx
sheet_xlsx.row_dimensions[rowx+1].hidden = True
for coly in range(sheet_xls.ncols):
if sheet_xls.colinfo_map[coly].hidden != 0:
print sheet_names[sheet_index], coly
coly_letter = get_column_letter(coly+1)
sheet_xlsx.column_dimensions[coly_letter].hidden = True
return book_xlsx
A:
You can use pandas IO functions:
import pandas as pd
df = pd.read_excel('file_2003.xls', header=None)
df.to_excel('file_2003.xlsx', index=False, header=False)
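Depending on the pandas version, reading the legacy .xls format needs the xlrd engine and writing .xlsx uses openpyxl; a sketch with the engines spelled out (both packages are assumed to be installed):
import pandas as pd

df = pd.read_excel('file_2003.xls', header=None, engine='xlrd')
df.to_excel('file_2003.xlsx', index=False, header=False, engine='openpyxl')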
A:
Simple solution
I required a simple solution to convert a couple of xls files to xlsx format.
A simple solution was given by chfw, but not quite complete.
Install dependencies
Use pip to install
pip install pyexcel-cli pyexcel-xls pyexcel-xlsx
Execute
All the styling and macros will be gone, but the information is intact.
For single file
pyexcel transcode your-file-in.xls your-new-file-out.xlsx
For all files in the folder, one liner
for file in *.xls; do echo "Transcoding $file"; pyexcel transcode "$file" "${file}x"; done
A:
I tried @Jhon Anderson's solution; it works well, but I got a "year is out of range" error when there are cells with a time format like HH:mm:ss but no date. Therefore I improved the algorithm again:
def xls_to_xlsx(*args, **kw):
"""
open and convert an XLS file to openpyxl.workbook.Workbook
----------
@param args: args for xlrd.open_workbook
@param kw: kwargs for xlrd.open_workbook
@return: openpyxl.workbook.Workbook对象
"""
book_xls = xlrd.open_workbook(*args, formatting_info=True, ragged_rows=True, **kw)
book_xlsx = openpyxl.workbook.Workbook()
sheet_names = book_xls.sheet_names()
for sheet_index in range(len(sheet_names)):
sheet_xls = book_xls.sheet_by_name(sheet_names[sheet_index])
if sheet_index == 0:
sheet_xlsx = book_xlsx.active
sheet_xlsx.title = sheet_names[sheet_index]
else:
sheet_xlsx = book_xlsx.create_sheet(title=sheet_names[sheet_index])
for crange in sheet_xls.merged_cells:
rlo, rhi, clo, chi = crange
sheet_xlsx.merge_cells(start_row=rlo + 1, end_row=rhi,
start_column=clo + 1, end_column=chi,)
def _get_xlrd_cell_value(cell):
value = cell.value
if cell.ctype == xlrd.XL_CELL_DATE:
datetime_tup = xlrd.xldate_as_tuple(value,0)
if datetime_tup[0:3] == (0, 0, 0): # time format without date
value = datetime.time(*datetime_tup[3:])
else:
value = datetime.datetime(*datetime_tup)
return value
for row in range(sheet_xls.nrows):
sheet_xlsx.append((
_get_xlrd_cell_value(cell)
for cell in sheet_xls.row_slice(row, end_colx=sheet_xls.row_len(row))
))
return book_xlsx
Then it works perfectly!
A:
CONVERT XLS FILE TO XLSX
Using python3.6
I have just come across the same issue and, after hours of struggle, I solved it by doing the following (you probably won't need all of the packages; I will be as clear as possible).
make sure to install the following packages before proceeding
pip install pyexcel,
pip install pyexcel-xls,
pip install pyexcel-xlsx,
pip install pyexcel-cli
step 1:
import pyexcel
step 2: load the sheet (the source file can be "example.xls", "example.xlsx" or "example.xlsm")
sheet0 = pyexcel.get_sheet(file_name="your_file_path.xls", name_columns_by_row=0)
step3: create array from contents
xlsarray = sheet0.to_array()
step4: check variable contents to verify
xlsarray
step5: pass the array held in the variable (xlsarray) to a new sheet variable (sheet1)
sheet1 = pyexcel.Sheet(xlsarray)
step6: save the new sheet ending with .xlsx (in my case i want xlsx)
sheet1.save_as("test.xlsx")
A:
Try using win32com application. Install it in your machine.
import sys, os
import win32com.client
directory = 'C:\\Users\\folder\\'
for file in os.listdir(directory):
dot = file.find('.')
end = file[dot:]
OutFile =file[0:dot] + ".xlsx"
App = win32com.client.Dispatch("Excel.Application")
App.Visible = True
workbook= App.Workbooks.Open(file)
workbook.ActiveSheet.SaveAs(OutFile, 51) #51 is for xlsx
workbook.Close(SaveChanges=True)
App.Quit()
Thank you.
A:
Well I kept it simple and tried with Pandas:
import pandas as pd
df = pd.read_excel (r'Path_of_your_file\\name_of_your_file.xls')
df.to_excel(r'Output_path\\new_file_name.xlsx', index = False)
A:
The Answer from Ray was clipping the first row and last column of the data. Here is my modified solution (for python3):
def open_xls_as_xlsx(filename):
    # first open using xlrd
    book = xlrd.open_workbook(filename)
    index = 0
    nrows, ncols = 0, 0
    while nrows * ncols == 0:
        sheet = book.sheet_by_index(index)
        nrows = sheet.nrows+1 #bm added +1
        ncols = sheet.ncols+1 #bm added +1
        index += 1

    # prepare a xlsx sheet
    book1 = Workbook()
    sheet1 = book1.get_active_sheet()

    for row in range(1, nrows):
        for col in range(1, ncols):
            sheet1.cell(row=row, column=col).value = sheet.cell_value(row-1, col-1) #bm added -1's

    return book1
A:
Tried @Jhon's solution first, then I turned to pyexcel as a solution
pyexcel.save_as(file_name=oldfilename, dest_file_name=newfilename)
It worked properly until I tried to package my project into a single exe file with PyInstaller; I tried all the hidden-imports options, but the following error was still there:
File "utils.py", line 27, in __enter__
pyexcel.save_as(file_name=self.filename, dest_file_name=newfilename)
File "site-packages\pyexcel\core.py", line 77, in save_as
File "site-packages\pyexcel\internal\core.py", line 22, in get_sheet_stream
File "site-packages\pyexcel\plugins\sources\file_input.py", line 39, in get_da
ta
File "site-packages\pyexcel\plugins\parsers\excel.py", line 19, in parse_file
File "site-packages\pyexcel\plugins\parsers\excel.py", line 40, in _parse_any
File "site-packages\pyexcel_io\io.py", line 73, in get_data
File "site-packages\pyexcel_io\io.py", line 91, in _get_data
File "site-packages\pyexcel_io\io.py", line 188, in load_data
File "site-packages\pyexcel_io\plugins.py", line 90, in get_a_plugin
File "site-packages\lml\plugin.py", line 290, in load_me_now
File "site-packages\pyexcel_io\plugins.py", line 107, in raise_exception
pyexcel_io.exceptions.SupportingPluginAvailableButNotInstalled: Please install p
yexcel-xls
[3192] Failed to execute script
Then, I jumped to pandas:
pd.read_excel(oldfilename).to_excel(newfilename, sheet_name=self.sheetname,index=False)
Update @ 21-Feb 2020
openpyxl provides the function append, which enables inserting rows into an xlsx file. This means a user could read the data from an xls file and insert it into an xlsx file (see the sketch after this list).
append(['This is A1', 'This is B1', 'This is C1'])
or append({'A' : 'This is A1', 'C' : 'This is C1'})
or append({1 : 'This is A1', 3 : 'This is C1'})
Appends a group of values at the bottom of the current sheet:
If it’s a list: all values are added in order, starting from the first column
If it’s a dict: values are assigned to the columns indicated by the keys (numbers or letters)
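A minimal sketch of that read-with-xlrd / append-with-openpyxl combination (the file names are placeholders, not from the original answer):
import xlrd
from openpyxl import Workbook

book_xls = xlrd.open_workbook("old_file.xls")
sheet_xls = book_xls.sheet_by_index(0)

book_xlsx = Workbook()
sheet_xlsx = book_xlsx.active
for row_idx in range(sheet_xls.nrows):
    # row_values returns the whole row as a list, which append() accepts
    sheet_xlsx.append(sheet_xls.row_values(row_idx))

book_xlsx.save("new_file.xlsx")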
A:
@CaKel and @Jhon Anderson solution:
def _get_xlrd_cell_value(cell):
value = cell.value
if cell.ctype == xlrd.XL_CELL_DATE:
# Start: if time is 00:00 this fix is necessary
if value == 1.0:
datetime_tup = (0, 0, 0)
else:
# end
datetime_tup = xlrd.xldate_as_tuple(value, 0)
if datetime_tup[0:3] == (0, 0, 0):
value = datetime.time(*datetime_tup[3:])
else:
value = datetime.datetime(*datetime_tup)
return value
And now this code runs perfectly for me!
A:
This is a solution for MacOS with old xls files (e.g. Excel 97-2004 format).
The best way I found to deal with this format, if Excel is not an option, is to open the file in OpenOffice and save it in another format, e.g. as csv files.
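If scripting is still wanted, the same OpenOffice/LibreOffice conversion can usually be driven from Python through the headless command line (a sketch, not from the original answer; the soffice binary name, its location and the file names are assumptions that depend on your installation):
import subprocess

# Ask the office suite to convert the file without opening a GUI;
# --outdir controls where the converted copy is written.
subprocess.run(
    ["soffice", "--headless", "--convert-to", "xlsx", "old_file.xls", "--outdir", "."],
    check=True,
)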
A:
The use of win32com (pywin32), as in the answer of @kvdogan, is primarily the perfect approach. Some requisites:
Using Windows (obviously);
Have MSExcel installed;
Ensure the path for file is the FULL path;
Also, the Pywin32 project is not up-to-date on SourceForge. Instead, use github: https://github.com/mhammond/pywin32
There's a .chm document, which you can read with SumatraPDF, for example, in the folder of project once installed.
#My answer contains no code at all.
EDIT: I don't have enough reputation to make comments. I'm sorry for the practical flood.
A:
This code works for me:
import pandas as pd
df = pd.read_html("test.xls")
dfs= []
for i in range(len(df)):
dfs.append(df[i])
def multiple_dfs(df_list, sheets, file_name, spaces):
writer = pd.ExcelWriter(file_name,engine='xlsxwriter')
row = 0
for dataframe in df_list:
dataframe.to_excel(writer,sheet_name=sheets,startrow=row , startcol=0)
row = row + len(dataframe.index) + spaces + 1
writer.save()
multiple_dfs(dfs, 'xlsx', 'test.xlsx', 1)
| how to convert xls to xlsx | I have some *.xls (excel 2003) files, and I want to convert those files into xlsx (excel 2007).
I use the uno python package, when I save the documents,
I can set the Filter name: MS Excel 97
But there is no Filter name like 'MS Excel 2007',
How can set the the filter name to convert xls to xlsx ?
| [
"You need to have win32com installed on your machine. Here is my code:\nimport win32com.client as win32\nfname = \"full+path+to+xls_file\"\nexcel = win32.gencache.EnsureDispatch('Excel.Application')\nwb = excel.Workbooks.Open(fname)\n\nwb.SaveAs(fname+\"x\", FileFormat = 51) #FileFormat = 51 is for .xlsx extension\nwb.Close() #FileFormat = 56 is for .xls extension\nexcel.Application.Quit()\n\n",
"Here is my solution, without considering fonts, charts and images:\n$ pip install pyexcel pyexcel-xls pyexcel-xlsx\n\nThen do this::\nimport pyexcel as p\n\np.save_book_as(file_name='your-file-in.xls',\n dest_file_name='your-new-file-out.xlsx')\n\nIf you do not need a program, you could install one additinal package pyexcel-cli::\n$ pip install pyexcel-cli\n$ pyexcel transcode your-file-in.xls your-new-file-out.xlsx\n\nThe transcoding procedure above uses xlrd and openpyxl.\n",
"I've had to do this before. The main idea is to use the xlrd module to open and parse a xls file and write the\ncontent to a xlsx file using the openpyxl module.\nHere's my code. Attention! It cannot handle complex xls files, you should add you own parsing logic if you are going to use it.\nimport xlrd\nfrom openpyxl.workbook import Workbook\nfrom openpyxl.reader.excel import load_workbook, InvalidFileException\n\ndef open_xls_as_xlsx(filename):\n # first open using xlrd\n book = xlrd.open_workbook(filename)\n index = 0\n nrows, ncols = 0, 0\n while nrows * ncols == 0:\n sheet = book.sheet_by_index(index)\n nrows = sheet.nrows\n ncols = sheet.ncols\n index += 1\n\n # prepare a xlsx sheet\n book1 = Workbook()\n sheet1 = book1.get_active_sheet()\n\n for row in xrange(0, nrows):\n for col in xrange(0, ncols):\n sheet1.cell(row=row, column=col).value = sheet.cell_value(row, col)\n\n return book1\n\n",
"I found none of answers here 100% right. So I post my codes here:\nimport xlrd\nfrom openpyxl.workbook import Workbook\n\ndef cvt_xls_to_xlsx(src_file_path, dst_file_path):\n book_xls = xlrd.open_workbook(src_file_path)\n book_xlsx = Workbook()\n\nsheet_names = book_xls.sheet_names()\nfor sheet_index, sheet_name in enumerate(sheet_names):\n sheet_xls = book_xls.sheet_by_name(sheet_name)\n if sheet_index == 0:\n sheet_xlsx = book_xlsx.active\n sheet_xlsx.title = sheet_name\n else:\n sheet_xlsx = book_xlsx.create_sheet(title=sheet_name)\n\n for row in range(0, sheet_xls.nrows):\n for col in range(0, sheet_xls.ncols):\n sheet_xlsx.cell(row = row+1 , column = col+1).value = sheet_xls.cell_value(row, col)\n\nbook_xlsx.save(dst_file_path)\n\n",
"The answer by Ray helped me a lot, but for those who search a simple way to convert all the sheets from a xls to a xlsx, I made this Gist:\nimport xlrd\nfrom openpyxl.workbook import Workbook as openpyxlWorkbook\n\n# content is a string containing the file. For example the result of an http.request(url).\n# You can also use a filepath by calling \"xlrd.open_workbook(filepath)\".\n\nxlsBook = xlrd.open_workbook(file_contents=content)\nworkbook = openpyxlWorkbook()\n\nfor i in xrange(0, xlsBook.nsheets):\n xlsSheet = xlsBook.sheet_by_index(i)\n sheet = workbook.active if i == 0 else workbook.create_sheet()\n sheet.title = xlsSheet.name\n\n for row in xrange(0, xlsSheet.nrows):\n for col in xrange(0, xlsSheet.ncols):\n sheet.cell(row=row, column=col).value = xlsSheet.cell_value(row, col)\n\n# The new xlsx file is in \"workbook\", without iterators (iter_rows).\n# For iteration, use \"for row in worksheet.rows:\".\n# For range iteration, use \"for row in worksheet.range(\"{}:{}\".format(startCell, endCell)):\".\n\nYou can find the xlrd lib here and the openpyxl here (you must download xlrd in your project for Google App Engine for example).\n",
"I'm improve performance for @Jackypengyu method.\n\nXLSX: working per row, not per cell (http://openpyxl.readthedocs.io/en/default/api/openpyxl.worksheet.worksheet.html#openpyxl.worksheet.worksheet.Worksheet.append)\nXLS: read whole row excluding empty tail, see ragged_rows=True (http://xlrd.readthedocs.io/en/latest/api.html#xlrd.sheet.Sheet.row_slice)\n\nMerged cells will be converted too.\nResults\nConvert same 12 files in same order:\nOriginal:\n0:00:01.958159\n0:00:02.115891\n0:00:02.018643\n0:00:02.057803\n0:00:01.267079\n0:00:01.308073\n0:00:01.245989\n0:00:01.289295\n0:00:01.273805\n0:00:01.276003\n0:00:01.293834\n0:00:01.261401\n\nImproved:\n0:00:00.774101\n0:00:00.734749\n0:00:00.741434\n0:00:00.744491\n0:00:00.320796\n0:00:00.279045\n0:00:00.315829\n0:00:00.280769\n0:00:00.316380\n0:00:00.289196\n0:00:00.347819\n0:00:00.284242\n\nSolution\ndef cvt_xls_to_xlsx(*args, **kw):\n \"\"\"Open and convert XLS file to openpyxl.workbook.Workbook object\n\n @param args: args for xlrd.open_workbook\n @param kw: kwargs for xlrd.open_workbook\n @return: openpyxl.workbook.Workbook\n\n\n You need -> from openpyxl.utils.cell import get_column_letter\n \"\"\"\n\n book_xls = xlrd.open_workbook(*args, formatting_info=True, ragged_rows=True, **kw)\n book_xlsx = Workbook()\n\n sheet_names = book_xls.sheet_names()\n for sheet_index in range(len(sheet_names)):\n sheet_xls = book_xls.sheet_by_name(sheet_names[sheet_index])\n\n if sheet_index == 0:\n sheet_xlsx = book_xlsx.active\n sheet_xlsx.title = sheet_names[sheet_index]\n else:\n sheet_xlsx = book_xlsx.create_sheet(title=sheet_names[sheet_index])\n\n for crange in sheet_xls.merged_cells:\n rlo, rhi, clo, chi = crange\n\n sheet_xlsx.merge_cells(\n start_row=rlo + 1, end_row=rhi,\n start_column=clo + 1, end_column=chi,\n )\n\n def _get_xlrd_cell_value(cell):\n value = cell.value\n if cell.ctype == xlrd.XL_CELL_DATE:\n value = datetime.datetime(*xlrd.xldate_as_tuple(value, 0))\n\n return value\n\n for row in range(sheet_xls.nrows):\n sheet_xlsx.append((\n _get_xlrd_cell_value(cell)\n for cell in sheet_xls.row_slice(row, end_colx=sheet_xls.row_len(row))\n ))\n\n for rowx in range(sheet_xls.nrows):\n if sheet_xls.rowinfo_map[rowx].hidden != 0:\n print sheet_names[sheet_index], rowx\n sheet_xlsx.row_dimensions[rowx+1].hidden = True\n for coly in range(sheet_xls.ncols):\n if sheet_xls.colinfo_map[coly].hidden != 0:\n print sheet_names[sheet_index], coly\n coly_letter = get_column_letter(coly+1)\n sheet_xlsx.column_dimensions[coly_letter].hidden = True\n\n return book_xlsx\n\n",
"You can use pandas IO functions:\nimport pandas as pd\n\ndf = pd.read_excel('file_2003.xls', header=None)\ndf.to_excel('file_2003.xlsx', index=False, header=False)\n\n",
"Simple solution\nI required a simple solution to convert couple of xls to xlsx format. There are plenty of answers here, but they are doing some \"magic\" that I do not completely understand.\nA simple solution was given by chfw, but not quite complete.\nInstall dependencies\nUse pip to install\npip install pyexcel-cli pyexcel-xls pyexcel-xlsx\n\nExecute\nAll the styling and macros will be gone, but the information is intact.\nFor single file\npyexcel transcode your-file-in.xls your-new-file-out.xlsx\n\nFor all files in the folder, one liner\nfor file in *.xls; do; echo \"Transcoding $file\"; pyexcel transcode \"$file\" \"${file}x\"; done;\n\n",
"I tried @Jhon Anderson's solution, works well but got an \"year is out of range\" error when there are cells of time format like HH:mm:ss without date. There for I improved the algorithm again:\ndef xls_to_xlsx(*args, **kw):\n\"\"\"\n open and convert an XLS file to openpyxl.workbook.Workbook\n ----------\n @param args: args for xlrd.open_workbook\n @param kw: kwargs for xlrd.open_workbook\n @return: openpyxl.workbook.Workbook对象\n \"\"\"\n book_xls = xlrd.open_workbook(*args, formatting_info=True, ragged_rows=True, **kw)\n book_xlsx = openpyxl.workbook.Workbook()\n\n sheet_names = book_xls.sheet_names()\n for sheet_index in range(len(sheet_names)):\n sheet_xls = book_xls.sheet_by_name(sheet_names[sheet_index])\n if sheet_index == 0:\n sheet_xlsx = book_xlsx.active\n sheet_xlsx.title = sheet_names[sheet_index]\n else:\n sheet_xlsx = book_xlsx.create_sheet(title=sheet_names[sheet_index])\n for crange in sheet_xls.merged_cells:\n rlo, rhi, clo, chi = crange\n sheet_xlsx.merge_cells(start_row=rlo + 1, end_row=rhi,\n start_column=clo + 1, end_column=chi,)\n\n def _get_xlrd_cell_value(cell):\n value = cell.value\n if cell.ctype == xlrd.XL_CELL_DATE:\n datetime_tup = xlrd.xldate_as_tuple(value,0) \n if datetime_tup[0:3] == (0, 0, 0): # time format without date\n value = datetime.time(*datetime_tup[3:])\n else:\n value = datetime.datetime(*datetime_tup)\n return value\n\n for row in range(sheet_xls.nrows):\n sheet_xlsx.append((\n _get_xlrd_cell_value(cell)\n for cell in sheet_xls.row_slice(row, end_colx=sheet_xls.row_len(row))\n ))\n return book_xlsx\n\nThen work perfect!\n",
"CONVERT XLS FILE TO XLSX\nUsing python3.6\nI have just come accross the same issue and after hours of struggle I solved it by doing the ff, you probably wont need all of the packages: (I will be as clear as posslbe)\nmake sure to install the following packages before proceeding\npip install pyexcel,\npip install pyexcel-xls,\npip install pyexcel-xlsx,\npip install pyexcel-cli\nstep 1:\nimport pyexcel\n\nstep 2: \"example.xls\",\"example.xlsx\",\"example.xlsm\"\nsheet0 = pyexcel.get_sheet(file_name=\"your_file_path.xls\", name_columns_by_row=0)\n\nstep3: create array from contents\nxlsarray = sheet.to_array() \n\nstep4: check variable contents to verify\nxlsarray\n\nstep5: pass the array held in variable called (xlsarray) to a new workbook variable called(sheet1)\nsheet1 = pyexcel.Sheet(xlsarray)\n\nstep6: save the new sheet ending with .xlsx (in my case i want xlsx)\nsheet1.save_as(\"test.xlsx\")\n\n",
"Try using win32com application. Install it in your machine.\nimport sys, os\nimport win32com.client\ndirectory = 'C:\\\\Users\\\\folder\\\\'\nfor file in os.listdir(directory):\n dot = file.find('.')\n end = file[dot:]\n OutFile =file[0:dot] + \".xlsx\"\n App = win32com.client.Dispatch(\"Excel.Application\")\n App.Visible = True\n workbook= App.Workbooks.Open(file)\n workbook.ActiveSheet.SaveAs(OutFile, 51) #51 is for xlsx \n workbook.Close(SaveChanges=True)\n App.Quit()\n\nThank you.\n",
"Well I kept it simple and tried with Pandas:\nimport pandas as pd\n\ndf = pd.read_excel (r'Path_of_your_file\\\\name_of_your_file.xls')\n\ndf.to_excel(r'Output_path\\\\new_file_name.xlsx', index = False)\n\n",
"The Answer from Ray was clipping the first row and last column of the data. Here is my modified solution (for python3):\ndef open_xls_as_xlsx(filename):\n# first open using xlrd\nbook = xlrd.open_workbook(filename)\nindex = 0\nnrows, ncols = 0, 0\nwhile nrows * ncols == 0:\n sheet = book.sheet_by_index(index)\n nrows = sheet.nrows+1 #bm added +1\n ncols = sheet.ncols+1 #bm added +1\n index += 1\n\n# prepare a xlsx sheet\nbook1 = Workbook()\nsheet1 = book1.get_active_sheet()\n\nfor row in range(1, nrows):\n for col in range(1, ncols):\n sheet1.cell(row=row, column=col).value = sheet.cell_value(row-1, col-1) #bm added -1's\n\nreturn book1\n\n",
"Tried @Jhon's solution 1st, then I turned into pyexcel as a solution\npyexcel.save_as(file_name=oldfilename, dest_file_name=newfilename)\n\nIt works properly until I tried to package my project to a single exe file by PyInstaller, I tried all hidden imports option, following error still there:\n File \"utils.py\", line 27, in __enter__\n pyexcel.save_as(file_name=self.filename, dest_file_name=newfilename)\n File \"site-packages\\pyexcel\\core.py\", line 77, in save_as\n File \"site-packages\\pyexcel\\internal\\core.py\", line 22, in get_sheet_stream\n File \"site-packages\\pyexcel\\plugins\\sources\\file_input.py\", line 39, in get_da\nta\n File \"site-packages\\pyexcel\\plugins\\parsers\\excel.py\", line 19, in parse_file\n File \"site-packages\\pyexcel\\plugins\\parsers\\excel.py\", line 40, in _parse_any\n File \"site-packages\\pyexcel_io\\io.py\", line 73, in get_data\n File \"site-packages\\pyexcel_io\\io.py\", line 91, in _get_data\n File \"site-packages\\pyexcel_io\\io.py\", line 188, in load_data\n File \"site-packages\\pyexcel_io\\plugins.py\", line 90, in get_a_plugin\n File \"site-packages\\lml\\plugin.py\", line 290, in load_me_now\n File \"site-packages\\pyexcel_io\\plugins.py\", line 107, in raise_exception\npyexcel_io.exceptions.SupportingPluginAvailableButNotInstalled: Please install p\nyexcel-xls\n[3192] Failed to execute script\n\nThen, I jumped to pandas:\npd.read_excel(oldfilename).to_excel(newfilename, sheet_name=self.sheetname,index=False)\n\nUpdate @ 21-Feb 2020\nopenpyxl provides the function: append\nenable the ability to insert rows to a xlxs file which means user could read the data from a xls file and insert them into a xlsx file.\n\nappend([‘This is A1’, ‘This is B1’, ‘This is C1’])\nor append({‘A’ : ‘This is A1’, ‘C’ : ‘This is C1’})\nor append({1 : ‘This is A1’, 3 : ‘This is C1’})\n\nAppends a group of values at the bottom of the current sheet:\n\nIf it’s a list: all values are added in order, starting from the first column\nIf it’s a dict: values are assigned to the columns indicated by the keys (numbers or letters)\n\n",
"@CaKel and @Jhon Anderson solution:\ndef _get_xlrd_cell_value(cell):\n value = cell.value\n if cell.ctype == xlrd.XL_CELL_DATE:\n # Start: if time is 00:00 this fix is necessary\n if value == 1.0:\n datetime_tup = (0, 0, 0)\n else:\n # end\n datetime_tup = xlrd.xldate_as_tuple(value, 0)\n\n if datetime_tup[0:3] == (0, 0, 0):\n value = datetime.time(*datetime_tup[3:])\n else:\n value = datetime.datetime(*datetime_tup)\n return value\n\nAnd now this code runs perfect for me !\n",
"This is a solution for MacOS with old xls files (e.g. Excel 97 2004).\nThe best way I found to deal with this format, if excel is not an option, is to open the file in openoffice and save it to another format as csv files.\n",
"The use of win32com (pywin32), as the answer of @kvdogan is primarily the perfect approach.Some requisites:\n\nUsing Windows (obviously);\nHave MSExcel installed;\nEnsure the path for file is the FULL path;\n\nAlso, the Pywin32 project is not up-to-date on SourceForge. Instead, use github: https://github.com/mhammond/pywin32\nThere's a .chm document, which you can read with SumatraPDF, for example, in the folder of project once installed.\n#My answer contains no code at all.\n\nEDIT: I don't have enough reputation to make comments. I'm sorry for the practical flood.\n",
"This code works for me:\nimport pandas as pd\n\ndf = pd.read_html(\"test.xls\") \ndfs= []\nfor i in range(len(df)):\n dfs.append(df[i])\n\n \ndef multiple_dfs(df_list, sheets, file_name, spaces):\n writer = pd.ExcelWriter(file_name,engine='xlsxwriter') \n row = 0\n for dataframe in df_list:\n dataframe.to_excel(writer,sheet_name=sheets,startrow=row , startcol=0) \n row = row + len(dataframe.index) + spaces + 1\n writer.save()\n\nmultiple_dfs(dfs, 'xlsx', 'test.xlsx', 1)\n\n"
] | [
44,
27,
23,
12,
7,
6,
6,
3,
2,
1,
1,
1,
0,
0,
0,
0,
0,
0
] | [] | [] | [
"python",
"uno"
] | stackoverflow_0009918646_python_uno.txt |
Q:
How to set new values to row based on the same substring from other column?
This is an example of a bigger dataset. Imagine I have a dataframe like this:
df = pd.DataFrame({"CLASS":["AG_1","AG_2","AG_3","MAR","GOM"],
"TOP":[200, np.nan, np.nan, 600, np.nan],
"BOT":[230, 250, 380, np.nan, 640]})
df
Out[49]:
CLASS TOP BOT
0 AG_1 200.0 230.0
1 AG_2 NaN 250.0
2 AG_3 NaN 380.0
3 MAR 600.0 NaN
4 GOM NaN 640.0
I would like to set the values for TOP on lines 1 and 2. My condition is that these values must be the BOT values from the row above if the class begins with the same substring "AG". The output should be like this:
CLASS TOP BOT
0 AG_1 200.0 230.0
1 AG_2 230.0 250.0
2 AG_3 250.0 380.0
3 MAR 600.0 NaN
4 GOM NaN 640.0
Anyone could show me how to do that?
A:
generic case: filling all groups
I would use fillna with a groupby.shift using a custom group extracting the substring from CLASS with str.extract:
group = df['CLASS'].str.extract('([^_]+)', expand=False)
df['TOP'] = df['TOP'].fillna(df.groupby(group)['BOT'].shift())
Output:
CLASS TOP BOT
0 AG_1 200.0 230.0
1 AG_2 230.0 250.0
2 AG_3 250.0 380.0
3 MAR 600.0 NaN
4 GOM NaN 640.0
Intermediate group:
0 AG
1 AG
2 AG
3 MAR
4 GOM
Name: CLASS, dtype: object
special case: only AG group
m = df['CLASS'].str.startswith('AG')
df.loc[m, 'TOP'] = df.loc[m, 'TOP'].fillna(df.loc[m, 'BOT'].shift())
Example:
CLASS TOP BOT
0 AG_1 200.0 230.0
1 AG_2 230.0 250.0
2 AG_3 250.0 380.0
3 MAR_1 600.0 601.0
4 MAR_2 NaN NaN # this is not filled
5 GOM NaN 640.0
| How to set new values to row based on the same substring from other column? | This is an example of a bigger data. Imagine I have a dataframe like this:
df = pd.DataFrame({"CLASS":["AG_1","AG_2","AG_3","MAR","GOM"],
"TOP":[200, np.nan, np.nan, 600, np.nan],
"BOT":[230, 250, 380, np.nan, 640]})
df
Out[49]:
CLASS TOP BOT
0 AG_1 200.0 230.0
1 AG_2 NaN 250.0
2 AG_3 NaN 380.0
3 MAR 600.0 NaN
4 GOM NaN 640.0
I would like to set the values for TOP on lines 1 and 2. My condition is that these values must be the BOT values from the row above if the class begins with the same substring "AG". The output should be like this:
CLASS TOP BOT
0 AG_1 200.0 230.0
1 AG_2 230.0 250.0
2 AG_3 250.0 380.0
3 MAR 600.0 NaN
4 GOM NaN 640.0
Anyone could show me how to do that?
| [
"generic case: filling all groups\nI would use fillna with a groupby.shift using a custom group extracting the substring from CLASS with str.extract:\ngroup = df['CLASS'].str.extract('([^_]+)', expand=False)\ndf['TOP'] = df['TOP'].fillna(df.groupby(group)['BOT'].shift())\n\nOutput:\n CLASS TOP BOT\n0 AG_1 200.0 230.0\n1 AG_2 230.0 250.0\n2 AG_3 250.0 380.0\n3 MAR 600.0 NaN\n4 GOM NaN 640.0\n\nIntermediate group:\n0 AG\n1 AG\n2 AG\n3 MAR\n4 GOM\nName: CLASS, dtype: object\n\nspecial case: only AG group\nm = df['CLASS'].str.startswith('AG')\n\ndf.loc[m, 'TOP'] = df.loc[m, 'TOP'].fillna(df.loc[m, 'BOT'].shift())\n\nExample:\n CLASS TOP BOT\n0 AG_1 200.0 230.0\n1 AG_2 230.0 250.0\n2 AG_3 250.0 380.0\n3 MAR_1 600.0 601.0\n4 MAR_2 NaN NaN # this is not filled\n5 GOM NaN 640.0\n\n"
] | [
1
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074628225_pandas_python.txt |
Q:
How to save duplicate records in SQLAlchemy?
I'm creating an application where it simulates a football album for each user, the logic is that each user can open packages and receive players that in the future can be associated with teams that the user himself created. To save all the players that a user can receive I created a Player model (
many-to-many relationship with users and teams):
class Player(db.Model):
id = db.Column(db.Integer(), primary_key=True)
name = db.Column(db.String(length=30), nullable=False)
birthdate = db.Column(db.Date())
weight = db.Column(db.Numeric(precision=5, scale=2), nullable=False)
height = db.Column(db.Integer(), nullable=False)
users = db.relationship(User, secondary = 'user_player', overlaps='players')
teams = db.relationship('Team', secondary = 'player_team', overlaps='players')
As much as a player (card) can be assigned to several users and several teams, it is not possible for a user to receive the same player in duplicate, where he could associate it with another team that he himself created. How can I make a user receive the same player more than once without having to create another record in the database?
A:
To me it looks like you need an intermediate table. You have a db.Model "Player" with basic information as on a player card. Then a "User" and a "Team" model. The team has a relation to the user, now you create a "UserCollection" db.Model, which contains references such as PlayerId, UserId and TeamId if set.
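A minimal sketch of such an association model (the names UserCollection, user.id, player.id and team.id are illustrative assumptions, not from the original code, and it presumes the Flask-SQLAlchemy db object plus the User/Player/Team models from the question):
class UserCollection(db.Model):
    """One row per copy of a card owned by a user."""
    id = db.Column(db.Integer(), primary_key=True)
    user_id = db.Column(db.Integer(), db.ForeignKey('user.id'), nullable=False)
    player_id = db.Column(db.Integer(), db.ForeignKey('player.id'), nullable=False)
    # stays NULL until the user assigns this particular copy to one of their teams
    team_id = db.Column(db.Integer(), db.ForeignKey('team.id'), nullable=True)

    user = db.relationship('User', backref='collection')
    player = db.relationship('Player')
    team = db.relationship('Team')

Because every owned copy is its own row, the same Player can be handed to a user several times, and each copy can point to a different team, without duplicating anything in the players table.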
| How to save duplicate records in SQLAlchemy? | I'm creating an application where it simulates a football album for each user, the logic is that each user can open packages and receive players that in the future can be associated with teams that the user himself created. To save all the players that a user can receive I created a Player model (
many-to-many relationship with users and teams):
class Player(db.Model):
id = db.Column(db.Integer(), primary_key=True)
name = db.Column(db.String(length=30), nullable=False)
birthdate = db.Column(db.Date())
weight = db.Column(db.Numeric(precision=5, scale=2), nullable=False)
height = db.Column(db.Integer(), nullable=False)
users = db.relationship(User, secondary = 'user_player', overlaps='players')
teams = db.relationship('Team', secondary = 'player_team', overlaps='players')
As much as a player (card) can be assigned to several users and several teams, it is not possible for a user to receive the same player in duplicate, where he could associate it with another team that he himself created. How can I make a user receive the same player more than once without having to create another record in the database?
| [
"To me it looks like you need an intermediate table. You have a db.Model \"Player\" with basic information as on a player card. Then a \"User\" and a \"Team\" model. The team has a relation to the user, now you create a \"UserCollection\" db.Model, which contains references such as PlayerId, UserId and TeamId if set.\n"
] | [
0
] | [
"You should use \"ManytoManyField\" like this;\nfrom django.db import models\n\nclass Publication(models.Model):\n title = models.CharField(max_length=30)\n\n class Meta:\n ordering = ['title']\n\n def __str__(self):\n return self.title\n\nclass Article(models.Model):\n headline = models.CharField(max_length=100)\n publications = models.ManyToManyField(Publication)\n\n class Meta:\n ordering = ['headline']\n\n def __str__(self):\n return self.headline\n\nit gives to you add more then one relation without new line or table;\na1 = Article(headline='Django lets you build web apps easily')\np1 = Publication(title='The Python Journal')\np2 = Publication(title='Science News')\np3 = Publication(title='Science Weekly')\np1.save()\np2.save()\np3.save()\na1.save()\n\nand you can start the add new relations;\na1.publications.add(p1, p2)\na1.publications.add(p3)\n\nfor more details:\nhttps://docs.djangoproject.com/en/4.1/topics/db/examples/many_to_many/\n"
] | [
-1
] | [
"database_design",
"python",
"sqlalchemy"
] | stackoverflow_0074628003_database_design_python_sqlalchemy.txt |
Q:
How to group tuples with adjacent indices in a python 2-dimensional tuple?
How to group tuples with adjacent indices in a python 2-dimensional tuple?
I'm not familiar with the zip function yet. I've written the code like this, but it doesn't work very well. Any help would be appreciated. Thank you!!
coords = ((1, 2), (3, 4), (5, 6), (7, 8))
coords = tuple(zip(coords[0::2], coords[1::2]))
print(coords)
real output:
(((1, 2), (3, 4)), ((5, 6), (7, 8)))
expected output:
((1, 2, 3, 4), (5, 6, 7, 8))
A:
Your code was almost there. This is one way you can make it work,
coords = ((1, 2), (3, 4), (5, 6), (7, 8))
coords = tuple(x + y for x, y in zip(coords[0::2], coords[1::2]))
Like in your code, it loops through two slices of coords using zip. But now it takes each element of the two slices (x and y) and concatenates them to form a 4-element inner tuple.
A:
This code would solve the problem:
coords = ((1, 2), (3, 4), (5, 6), (7, 8))
coords = tuple([(tuple([i for i in coords[x]])+tuple([i for i in coords[x+1]])) for x in range(0,len(coords)-1,2)])
print(coords)
Output:
((1, 2, 3, 4), (5, 6, 7, 8))
| How to group tuples with adjacent indices in a python 2-dimensional tuple? | How to group tuples with adjacent indices in a python 2-dimensional tuple?
I'm not familiar with the zip function yet. I've written the code like this, but it doesn't work very well. Any help would be appreciated. Thank you!!
coords = ((1, 2), (3, 4), (5, 6), (7, 8))
coords = tuple(zip(coords[0::2], coords[1::2]))
print(coords)
real output:
(((1, 2), (3, 4)), ((5, 6), (7, 8)))
expected output:
((1, 2, 3, 4), (5, 6, 7, 8))
| [
"Your code was almost there. This is one way you can make it work,\ncoords = ((1, 2), (3, 4), (5, 6), (7, 8))\n\ncoords = tuple(x + y for x, y in zip(coords[0::2], coords[1::2]))\n\nLike in your code, it loops through two slices of coords using zip. But now it takes each element of the two slices (x and y) and adds them to form a 4 element inner tuple.\n",
"This code would solve the problem:\ncoords = ((1, 2), (3, 4), (5, 6), (7, 8))\ncoords = tuple([(tuple([i for i in coords[x]])+tuple([i for i in coords[x+1]])) for x in range(0,len(coords)-1,2)])\nprint(coords)\n\nOutput:\n((1, 2, 3, 4), (5, 6, 7, 8))\n\n"
] | [
8,
2
] | [] | [] | [
"python",
"tuples"
] | stackoverflow_0074628099_python_tuples.txt |
Q:
How to stop functions from giving errors when using variables not defined in the function?
I am creating several experiments in python and have various functions which will be common across these experiments. I thus wanted to create a script only containing these functions which I could import at the beginning of the experimental script to avoid half of the script being taken up with 'generic setup lines'.
So far, I have created the functions and the script and can import them and use them. For example, in the following, I have a function which shows a blank screen which takes the duration I want (e.g., 4 seconds) and displays it on the window defined in the experimental script.
import functions
win = visual.Window([1440,900], color=[-1,-1,-1], fullscr=True)
dur = 4
functions.blank_screen(duration=dur)
This all works fine, but in the script containing the functions there are several 'errors': the function uses the variable 'win', which is not defined in the function:
def blank_screen(duration):
blank = TextStim(win, text='')
blank.draw()
win.flip()
But when I run the experimental script, as it is defined in the script, it all works. How can I get around this? I have this problem with several functions as a large majority uses variables which are defined in the experimental script and not in the functions. As I say, it all works but just annoys me that the script is covered in 'errors'!
I'd be greatly appreciative of any help, thank you!
A:
You can pass on the win you want blanked out as an argument to the blank_screen function:
import functions
win = visual.Window([1440,900], color=[-1,-1,-1], fullscr=True)
dur = 4
functions.blank_screen(duration=dur, win=win)
and
def blank_screen(duration, win):
blank = TextStim(win, text='')
blank.draw()
win.flip()
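If there are many such helpers and you don't want to repeat win=win at every call site, functools.partial can pre-bind it (a sketch building on the answer above; functions and win are the names from the question):
import functools
import functions

# win is the visual.Window created in the experiment script, as in the question
blank = functools.partial(functions.blank_screen, win=win)

blank(duration=4)  # equivalent to functions.blank_screen(duration=4, win=win)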
| How to stop functions from giving errors when using variables not defined in the function? | I am creating several experiments in python and have various functions which will be common across these experiments. I thus wanted to create a script only containing these functions which I could import at the beginning of the experimental script to avoid half of the script being taken up with 'generic setup lines'.
So far, I have created the functions and the script and can import them and use them. For example, in the following, I have a function which shows a blank screen which takes the duration I want (e.g., 4 seconds) and displays it on the window defined in the experimental script.
import functions
win = visual.Window([1440,900], color=[-1,-1,-1], fullscr=True)
dur = 4
functions.blank_screen(duration=dur)
This all works fine, but in the script containing the functions there are several 'errors': the function uses the variable 'win', which is not defined in the function:
def blank_screen(duration):
blank = TextStim(win, text='')
blank.draw()
win.flip()
But when I run the experimental script, as it is defined in the script, it all works. How can I get around this? I have this problem with several functions as a large majority uses variables which are defined in the experimental script and not in the functions. As I say, it all works but just annoys me that the script is covered in 'errors'!
I'd be greatly appreciative of any help, thank you!
| [
"You can pass on the win you want blanked out as an argument to the blank_screen function:\nimport functions \nwin = visual.Window([1440,900], color=[-1,-1,-1], fullscr=True)\ndur = 4\nfunctions.blank_screen(duration=dur, win=win)\n\nand\ndef blank_screen(duration, win):\n blank = TextStim(win, text='')\n blank.draw()\n win.flip()\n\n"
] | [
4
] | [] | [] | [
"function",
"python"
] | stackoverflow_0074628261_function_python.txt |
Q:
How can I convert an undirected graph generated by the Barabasi-Albert model to a directed one?
I have used the Barabasi-Albert model in networkx to generate a graph with n=200 and m=2.
This gives me an undirected graph but I want a directed graph so that I can plot the in and out degree distributions. Would plotting the in and out degrees be possible?
This is my code:
N=200
m=2
G_barabasi=nx.barabasi_albert_graph(n=N,m=m)
A:
An easy way is to use the .to_directed method on the existing graph:
from networkx import barabasi_albert_graph
G = barabasi_albert_graph(200, 2)
print(G.is_directed())
# False
G_directed = G.to_directed()
print(G_directed.is_directed())
# True
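One caveat: to_directed() replaces every undirected edge with two reciprocal directed edges, so the in- and out-degree distributions of the copy are identical to each other (and to the original degree distribution). A sketch for extracting them anyway (the variable names are illustrative):
# Degree sequences of the directed copy (networkx 2.x view API)
in_degrees = [d for _, d in G_directed.in_degree()]
out_degrees = [d for _, d in G_directed.out_degree()]
print(in_degrees == out_degrees)  # True: every edge was duplicated in both directions

If genuinely different in- and out-degree distributions are needed, a directed growth model (for example networkx's scale_free_graph generator) would be a better starting point than converting an undirected Barabasi-Albert graph.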
| How can I convert an undirected graph generated by the Barabasi-Albert model to a directed one? | I have used the Barabasi-Albert model in networkx to generate a graph with n=200 and m=2.
This gives me an undirected graph but I want a directed graph so that I can plot the in and out degree distributions. Would plotting the in and out degrees be possible?
This is my code:
N=200
m=2
G_barabasi=nx.barabasi_albert_graph(n=N,m=m)
| [
"An easy way is to use the .to_directed method on the existing graph:\nfrom networkx import barabasi_albert_graph\nG = barabasi_albert_graph(200, 2)\nprint(G.is_directed())\n# False\nG_directed = G.to_directed()\nprint(G_directed.is_directed())\n# True\n\n"
] | [
0
] | [] | [] | [
"complex_networks",
"graph",
"graph_theory",
"networkx",
"python"
] | stackoverflow_0074628089_complex_networks_graph_graph_theory_networkx_python.txt |
Q:
Pygame program keeps crashing and the display window won't cooperate,
Trying to design and call open a basic customized pygame window that pops up right after the program starts. The window I'm producing gets minimized by default instead of just opening. It's also not updating the color, and it immediately crashes when I open the tab that it's in.
# I'm running Windows 10 with Spyder (Python 3.9), if that matters
# This is the entire code:
import pygame
WIDTH, HEIGHT = 900, 500
WIN = pygame.display.set_mode((WIDTH, HEIGHT))
pygame.display.set_caption("Space Shooter Friends (Version 1.0)")
# Color presets
WHITE = (255,255,255)
BLACK = (0,0,0)
# Event loop
def main():
run = True
while run:
for event in pygame.event.get():
if event.type == pygame.QUIT:
run = False
WIN.fill(WHITE)
pygame.display.update()
pygame.quit()
# Ensure the game only executes in the file named: space_shooter.py
if __name__ == "__space_shooter__":
main()
For context, I'm a beginner level student just trying to generate a basic white display for now, with dimensions: 900W x 500H, and centered ideally so that the pygame window's center is superimposed onto the center of my computer screen. I want this window to pop up as soon as the program starts running, and it should stay open for an indefinite amount of time, or until exited from with the X icon.
It seems to be producing the window right away as intended, but it places it into a minimized tab instead of an opened window display for some reason. The window pops open if I click on the tab, but it's filled in with black regardless of what values I insert into WIN.fill(R,B,G) as arguments. Also, the program immediately becomes non-responsive with the message: (Not responding) next to the game's title (at the top of the pygame window, not in the Spyder terminal).
Seems like it's not able to update for some reason, possibly causing it to crash, but I'm not really sure. I'm not getting any runtime or syntax errors from Python, but I do get a "Python is not responding" message from Windows in the pygame window as well as whenever I try to close the window using the X icon. Any help is much appreciated!
A:
The problem is in the following line:
if __name__ == "__space_shooter__":
The __name__ variable will not contain the file name of the current script. If the script is run directly, it will contain "__main__". If the script is imported by another script, it will contain the module's own name (the file name without the .py extension), not the name of the importing script.
In order to check the script's file name, which you according to the comment wanted to do, you have to use the __file__ variable. The only problem is that the __file__ variable does not only contain the file name, but also the file path and the file extension. That's why I would use the str.endswith function, which checks whether a certain string ends with a given string. There are also some more complicated and reliable ways, but I will leave it at this:
if __file__.endswith("space_shooter.py"):
However, it is more common to check whether the current file is being run directly instead of being imported from another file. This allows other files to import and use the functions and classes present in the file, without everything in the file being run too.
For this, you have to use the __name__ variable:
if __name__ == "__main__":
As said above, the __name__ variable will contain "__main__" when the file is run directly. Thus we can compare __name__ with it to know whether the file is run directly or not.
For more information on the __name__ variable, there is more explanation and a useful example.
| Pygame program keeps crashing and the display window won't cooperate, | Trying to design and call open a basic customized pygame window that pops up right after the program starts. The window I'm producing gets minimized by default instead of just opening. It's also not updating the color, and it immediately crashes when I open the tab that it's in.
# I'm running Windows 10 with Spyder (Python 3.9), if that matters
# This is the entire code:
import pygame
WIDTH, HEIGHT = 900, 500
WIN = pygame.display.set_mode((WIDTH, HEIGHT))
pygame.display.set_caption("Space Shooter Friends (Version 1.0)")
# Color presets
WHITE = (255,255,255)
BLACK = (0,0,0)
# Event loop
def main():
run = True
while run:
for event in pygame.event.get():
if event.type == pygame.QUIT:
run = False
WIN.fill(WHITE)
pygame.display.update()
pygame.quit()
# Ensure the game only executes in the file named: space_shooter.py
if __name__ == "__space_shooter__":
main()
For context, I'm a beginner level student just trying to generate a basic white display for now, with dimensions: 900W x 500H, and centered ideally so that the pygame window's center is superimposed onto the center of my computer screen. I want this window to pop up as soon as the program starts running, and it should stay open for an indefinite amount of time, or until exited from with the X icon.
It seems to be producing the window right away as intended, but it places it into a minimized tab instead of an opened window display for some reason. The window pops open if I click on the tab, but it's filled in with black regardless of what values I insert into WIN.fill(R,B,G) as arguments. Also, the program immediately becomes non-responsive with the message: (Not responding) next to the game's title (at the top of the pygame window, not in the Spyder terminal).
Seems like it's not able to update for some reason, possibly causing it to crash, but I'm not really sure. I'm not getting any runtime or syntax errors from Python, but I do get a "Python is not responding" message from Windows in the pygame window as well as whenever I try to close the window using the X icon. Any help is much appreciated!
| [
"The problem is in the following line:\nif __name__ == \"__space_shooter__\":\n\nThe __name__ variable will not contain the file name of the current script. If the script is ran directly, it will contain \"__main__\". If the script is imported by another script, it will contain the file name of that other script.\nIn order to check the script's file name, which you according to the comment wanted to do, you have to use the __file__ variable. The only problem is that the __file__ variable does not only containt the file name, but also the file path and the file extension. That's why I would use the str.endswith function, which checks whether a certain string ends with a given string. There are also some more complicate and reliable ways, but I will leave it to this:\nif __file__.endswith(\"space_shooter.py\"):\n\nHowever, it is more common to check whether the current file is being ran directly instead of being imported from another file. This allows other files to import and use the functions and classes present in the file, without that everything in the file is ran too.\nFor this, you have to use the __name__ variable:\nif __name__ == \"__main__\":\n\nAs said above, the __name__ variable will contain \"__main__\" when the file is ran directly. Thus we can compare __name__ with it to know whether the file is ran directly or not.\nFor more information on the __name__ variable, there is more explanation and a useful example.\n"
] | [
1
] | [] | [] | [
"python"
] | stackoverflow_0074623774_python.txt |
Q:
Attaching a decorator to all functions within a class
I don't really need to do this, but was just wondering, is there a way to bind a decorator to all functions within a class generically, rather than explicitly stating it for every function.
I suppose it then becomes a kind of aspect, rather than a decorator and it does feel a bit odd, but was thinking for something like timing or auth it'd be pretty neat.
A:
The cleanest way to do this, or to do other modifications to a class definition, is to define a metaclass.
Alternatively, just apply your decorator at the end of the class definition using inspect:
import inspect
class Something:
def foo(self):
pass
for name, fn in inspect.getmembers(Something, inspect.isfunction):
setattr(Something, name, decorator(fn))
In practice of course you'll want to apply your decorator more selectively. As soon as you want to decorate all but one method you'll discover that it is easier and more flexible just to use the decorator syntax in the traditional way.
A:
Every time you think of changing a class definition, you can use either a class decorator or a metaclass, e.g. using a metaclass:
import types
class DecoMeta(type):
def __new__(cls, name, bases, attrs):
for attr_name, attr_value in attrs.iteritems():
if isinstance(attr_value, types.FunctionType):
attrs[attr_name] = cls.deco(attr_value)
return super(DecoMeta, cls).__new__(cls, name, bases, attrs)
@classmethod
def deco(cls, func):
def wrapper(*args, **kwargs):
print "before",func.func_name
result = func(*args, **kwargs)
print "after",func.func_name
return result
return wrapper
class MyKlass(object):
__metaclass__ = DecoMeta
def func1(self):
pass
MyKlass().func1()
Output:
before func1
after func1
Note: it will not decorate staticmethod and classmethod
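If static and class methods should be wrapped as well, one possible extension of the loop in __new__ (a sketch, not part of the original answer; written with the Python 3 spelling attrs.items()) is to unwrap the descriptor, decorate the underlying function, and re-wrap it:
for attr_name, attr_value in attrs.items():
    if isinstance(attr_value, staticmethod):
        # __func__ is the plain function stored inside the descriptor
        attrs[attr_name] = staticmethod(cls.deco(attr_value.__func__))
    elif isinstance(attr_value, classmethod):
        attrs[attr_name] = classmethod(cls.deco(attr_value.__func__))
    elif isinstance(attr_value, types.FunctionType):
        attrs[attr_name] = cls.deco(attr_value)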
A:
Following code works for python2.x and 3.x
import inspect
def decorator_for_func(orig_func):
def decorator(*args, **kwargs):
print("Decorating wrapper called for method %s" % orig_func.__name__)
result = orig_func(*args, **kwargs)
return result
return decorator
def decorator_for_class(cls):
for name, method in inspect.getmembers(cls):
if (not inspect.ismethod(method) and not inspect.isfunction(method)) or inspect.isbuiltin(method):
continue
print("Decorating function %s" % name)
setattr(cls, name, decorator_for_func(method))
return cls
@decorator_for_class
class decorated_class:
def method1(self, arg, **kwargs):
print("Method 1 called with arg %s" % arg)
def method2(self, arg):
print("Method 2 called with arg %s" % arg)
d=decorated_class()
d.method1(1, a=10)
d.method2(2)
A:
Update for Python 3:
import types
class DecoMeta(type):
def __new__(cls, name, bases, attrs):
for attr_name, attr_value in attrs.items():
if isinstance(attr_value, types.FunctionType):
attrs[attr_name] = cls.deco(attr_value)
return super().__new__(cls, name, bases, attrs)
@classmethod
def deco(cls, func):
def wrapper(*args, **kwargs):
print("before",func.__name__)
result = func(*args, **kwargs)
print("after",func.__name__)
return result
return wrapper
(and thanks to Duncan for this)
A:
Of course, metaclasses are the most Pythonic way to go when you want to modify the way that Python creates objects, which can be done by overriding the __new__ method of your metaclass. But there are some points around this problem (especially for Python 3.X) that I'd like to mention:
Checking types.FunctionType doesn't protect the special methods from being decorated, as they are function types too. A more general way is to just decorate the objects whose names don't start with a double underscore (__). One other benefit of this method is that it also covers those objects that exist in the namespace and start with __ but are not functions, like __qualname__, __module__, etc.
The namespace argument in __new__'s header doesn't contain class attributes set within __init__. The reason is that __new__ executes before __init__ (initialization).
It's not necessary to use a classmethod as the decorator, since most of the time you import your decorator from another module.
If your class contains a global item (outside of __init__) that should not be decorated, then alongside checking that the name doesn't start with __ you can also check the type with types.FunctionType to be sure that you're not decorating a non-function object.
Here is a sample metaclass that you can use:
class TheMeta(type):
def __new__(cls, name, bases, namespace, **kwds):
# if your decorator is a class method of the metaclass use
# `my_decorator = cls.my_decorator` in order to invoke the decorator.
namespace = {k: v if k.startswith('__') else my_decorator(v) for k, v in namespace.items()}
return type.__new__(cls, name, bases, namespace)
Demo:
def my_decorator(func):
def wrapper(self, arg):
# You can also use *args instead of (self, arg) and pass the *args
# to the function in following call.
return "the value {} gets modified!!".format(func(self, arg))
return wrapper
class TheMeta(type):
def __new__(cls, name, bases, namespace, **kwds):
# my_decorator = cls.my_decorator (if the decorator is a classmethod)
namespace = {k: v if k.startswith('__') else my_decorator(v) for k, v in namespace.items()}
return type.__new__(cls, name, bases, namespace)
class MyClass(metaclass=TheMeta):
# a = 10
def __init__(self, *args, **kwargs):
self.item = args[0]
self.value = kwargs['value']
def __getattr__(self, attr):
return "This class hasn't provide the attribute {}.".format(attr)
def myfunction_1(self, arg):
return arg ** 2
def myfunction_2(self, arg):
return arg ** 3
myinstance = MyClass(1, 2, value=100)
print(myinstance.myfunction_1(5))
print(myinstance.myfunction_2(2))
print(myinstance.item)
print(myinstance.p)
Output:
the value 25 gets modified!!
the value 8 gets modified!!
1
This class hasn't provide the attribute p. # special method is not decorated.
To check the 3rd item from the aforementioned notes, you can uncomment the line a = 10, do print(myinstance.a) and see the result, then change the dictionary comprehension in __new__ as follows and see the result again:
namespace = {k: v if k.startswith('__') and not isinstance(v, types.FunctionType)\
else my_decorator(v) for k, v in namespace.items()}
A:
I will repeat my answer here, for a similar issue
It can be done in many different ways. I will show how to do it through a metaclass, a class decorator, and inheritance.
by changing meta class
import functools
class Logger(type):
@staticmethod
def _decorator(fun):
@functools.wraps(fun)
def wrapper(*args, **kwargs):
print(fun.__name__, args, kwargs)
return fun(*args, **kwargs)
return wrapper
def __new__(mcs, name, bases, attrs):
for key in attrs.keys():
if callable(attrs[key]):
# if attrs[key] is callable, then we can easily wrap it with decorator
# and substitute in the future attrs
# only for extra clarity (though it is wider type than function)
fun = attrs[key]
attrs[key] = Logger._decorator(fun)
# and then invoke __new__ in type metaclass
return super().__new__(mcs, name, bases, attrs)
class A(metaclass=Logger):
def __init__(self):
self.some_val = "some_val"
def method_first(self, a, b):
print(a, self.some_val)
def another_method(self, c):
print(c)
@staticmethod
def static_method(d):
print(d)
b = A()
# __init__ (<__main__.A object at 0x7f852a52a2b0>,) {}
b.method_first(5, b="Here should be 5")
# method_first (<__main__.A object at 0x7f852a52a2b0>, 5) {'b': 'Here should be 5'}
# 5 some_val
b.method_first(6, b="Here should be 6")
# method_first (<__main__.A object at 0x7f852a52a2b0>, 6) {'b': 'Here should be 6'}
# 6 some_val
b.another_method(7)
# another_method (<__main__.A object at 0x7f852a52a2b0>, 7) {}
# 7
b.static_method(7)
# 7
Also, I will show two approaches for doing it without changing the meta information of the class (through a class decorator and through class inheritance). The first approach, through the class decorator put_decorator_on_all_methods, accepts a decorator to wrap all callable member objects of the class.
def logger(f):
@functools.wraps(f)
def wrapper(*args, **kwargs):
print(f.__name__, args, kwargs)
return f(*args, **kwargs)
return wrapper
def put_decorator_on_all_methods(decorator, cls=None):
if cls is None:
return lambda cls: put_decorator_on_all_methods(decorator, cls)
class Decoratable(cls):
def __init__(self, *args, **kargs):
super().__init__(*args, **kargs)
def __getattribute__(self, item):
value = object.__getattribute__(self, item)
if callable(value):
return decorator(value)
return value
return Decoratable
@put_decorator_on_all_methods(logger)
class A:
def method(self, a, b):
print(a)
def another_method(self, c):
print(c)
@staticmethod
def static_method(d):
print(d)
b = A()
b.method(5, b="Here should be 5")
# >>> method (5,) {'b': 'Here should be 5'}
# >>> 5
b.method(6, b="Here should be 6")
# >>> method (6,) {'b': 'Here should be 6'}
# >>> 6
b.another_method(7)
# >>> another_method (7,) {}
# >>> 7
b.static_method(8)
# >>> static_method (8,) {}
# >>> 8
And, recently, I've come across the same problem, but I couldn't put a decorator on the class or change it in any other way; I was allowed to add such behavior through inheritance only (I am not sure that this is the best choice if you can change the codebase as you wish, though).
Here the class Logger forces all callable members of subclasses to write information about their invocations; see the code below.
class Logger:
def _decorator(self, f):
@functools.wraps(f)
def wrapper(*args, **kwargs):
print(f.__name__, args, kwargs)
return f(*args, **kwargs)
return wrapper
def __getattribute__(self, item):
value = object.__getattribute__(self, item)
if callable(value):
decorator = object.__getattribute__(self, '_decorator')
return decorator(value)
return value
class A(Logger):
def method(self, a, b):
print(a)
def another_method(self, c):
print(c)
@staticmethod
def static_method(d):
print(d)
b = A()
b.method(5, b="Here should be 5")
# >>> method (5,) {'b': 'Here should be 5'}
# >>> 5
b.method(6, b="Here should be 6")
# >>> method (6,) {'b': 'Here should be 6'}
# >>> 6
b.another_method(7)
# >>> another_method (7,) {}
# >>> 7
b.static_method(7)
# >>> static_method (7,) {}
# >>> 7
Or more abstractly, you can instantiate base class based on some decorator.
def decorator(f):
@functools.wraps(f)
def wrapper(*args, **kwargs):
print(f.__name__, args, kwargs)
return f(*args, **kwargs)
return wrapper
class Decoratable:
def __init__(self, dec):
self._decorator = dec
def __getattribute__(self, item):
value = object.__getattribute__(self, item)
if callable(value):
decorator = object.__getattribute__(self, '_decorator')
return decorator(value)
return value
class A(Decoratable):
def __init__(self, dec):
super().__init__(dec)
def method(self, a, b):
print(a)
def another_method(self, c):
print(c)
@staticmethod
def static_method(d):
print(d)
b = A(decorator)
b.method(5, b="Here should be 5")
# >>> method (5,) {'b': 'Here should be 5'}
# >>> 5
b.method(6, b="Here should be 6")
# >>> method (6,) {'b': 'Here should be 6'}
# >>> 6
b.another_method(7)
# >>> another_method (7,) {}
# >>> 7
b.static_method(7)
# >>> static_method (7,) {}
# >>> 7
A:
There's another, slightly similar thing you might want to do in some cases. Sometimes you want to trigger the attachment for something like debugging, not on all the classes, but for every method of a single object for which you want a record of what it's doing.
def start_debugging():
import functools
import datetime
filename = "debug-{date:%Y-%m-%d_%H_%M_%S}.txt".format(date=datetime.datetime.now())
debug_file = open(filename, "a")
debug_file.write("\nDebug.\n")
def debug(func):
@functools.wraps(func)
def wrapper_debug(*args, **kwargs):
args_repr = [repr(a) for a in args] # 1
kwargs_repr = [f"{k}={v!r}" for k, v in kwargs.items()] # 2
signature = ", ".join(args_repr + kwargs_repr) # 3
debug_file.write(f"Calling {func.__name__}({signature})\n")
value = func(*args, **kwargs)
debug_file.write(f"{func.__name__!r} returned {value!r}\n") # 4
debug_file.flush()
return value
return wrapper_debug
    for obj in (self,):
for attr in dir(obj):
if attr.startswith('_'):
continue
fn = getattr(obj, attr)
if not isinstance(fn, types.FunctionType) and \
not isinstance(fn, types.MethodType):
continue
setattr(obj, attr, debug(fn))
This function will go through some objects (only self currently) and replace all functions and methods that do not start with _ with a debugging decorator.
The method used here of just iterating dir(self) is not addressed above, but it totally works, and it can be called externally on the object and much more arbitrarily.
A:
In Python 3 you could also write a simple function that overwrites/applies a decorator to certain methods like so:
from functools import wraps
from types import MethodType
def logged(func):
@wraps(func)
def wrapper(*args, **kwargs):
res = func(*args, **kwargs)
print("logging:", func.__name__, res)
return res
return wrapper
class Test:
def foo(self):
return 42
...
def aspectize(cls, decorator):
for name, func in cls.__dict__.items():
if not name.startswith("__"):
setattr(cls, name, MethodType(decorator(func), cls)) # MethodType is key
aspectize(Test, logged)
t = Test()
t.foo() # printing "logging: foo 42"; returning 42
A:
I came to this question from:
How to decorate all functions of a class without typing it over and over for each method?
And I want to add one note:
Answers with class decorators or replacing class methods, like this one:
https://stackoverflow.com/a/6307868/11277611
will not work with staticmethod.
You will get a TypeError (unexpected argument) because your method will get self/cls as the first argument.
Probably:
The decorated class doesn't know about the decorators of its methods, and they can't be distinguished even with inspect.ismethod.
I came to such a quickfix:
I haven't checked it closely, but it passes my (not so comprehensive) tests.
Using decorators dynamically is already a bad approach, so it must be okay as a temporary solution.
TL;DR: Add try/except to use with staticmethod
def log_sent_data(function):
@functools_wraps(function)
def decorator(*args, **kwargs):
# Quickfix
self, *args = args
try: # If method has self/cls/descriptor
result = function(self, *args, **kwargs)
except TypeError:
if args: # If method is static but has positional args
result = function(*args, **kwargs)
else: # If method is static and doesn't has positional args
result = function(**kwargs)
# End of quickfix
return result
return decorator
| Attaching a decorator to all functions within a class | I don't really need to do this, but was just wondering, is there a way to bind a decorator to all functions within a class generically, rather than explicitly stating it for every function.
I suppose it then becomes a kind of aspect, rather than a decorator and it does feel a bit odd, but was thinking for something like timing or auth it'd be pretty neat.
| [
"The cleanest way to do this, or to do other modifications to a class definition, is to define a metaclass.\nAlternatively, just apply your decorator at the end of the class definition using inspect:\nimport inspect\n\nclass Something:\n def foo(self): \n pass\n\nfor name, fn in inspect.getmembers(Something, inspect.isfunction):\n setattr(Something, name, decorator(fn))\n\nIn practice of course you'll want to apply your decorator more selectively. As soon as you want to decorate all but one method you'll discover that it is easier and more flexible just to use the decorator syntax in the traditional way.\n",
"Everytime you think of changing class definition, you can either use the class decorator or metaclass. e.g. using metaclass\nimport types\n\nclass DecoMeta(type):\n def __new__(cls, name, bases, attrs):\n\n for attr_name, attr_value in attrs.iteritems():\n if isinstance(attr_value, types.FunctionType):\n attrs[attr_name] = cls.deco(attr_value)\n\n return super(DecoMeta, cls).__new__(cls, name, bases, attrs)\n\n @classmethod\n def deco(cls, func):\n def wrapper(*args, **kwargs):\n print \"before\",func.func_name\n result = func(*args, **kwargs)\n print \"after\",func.func_name\n return result\n return wrapper\n\nclass MyKlass(object):\n __metaclass__ = DecoMeta\n\n def func1(self): \n pass\n\nMyKlass().func1()\n\nOutput:\nbefore func1\nafter func1\n\nNote: it will not decorate staticmethod and classmethod\n",
"Following code works for python2.x and 3.x\nimport inspect\n\ndef decorator_for_func(orig_func):\n def decorator(*args, **kwargs):\n print(\"Decorating wrapper called for method %s\" % orig_func.__name__)\n result = orig_func(*args, **kwargs)\n return result\n return decorator\n\ndef decorator_for_class(cls):\n for name, method in inspect.getmembers(cls):\n if (not inspect.ismethod(method) and not inspect.isfunction(method)) or inspect.isbuiltin(method):\n continue\n print(\"Decorating function %s\" % name)\n setattr(cls, name, decorator_for_func(method))\n return cls\n\n@decorator_for_class\nclass decorated_class:\n def method1(self, arg, **kwargs):\n print(\"Method 1 called with arg %s\" % arg)\n def method2(self, arg):\n print(\"Method 2 called with arg %s\" % arg)\n\n\nd=decorated_class()\nd.method1(1, a=10)\nd.method2(2)\n\n",
"Update for Python 3:\nimport types\n\n\nclass DecoMeta(type):\n def __new__(cls, name, bases, attrs):\n\n for attr_name, attr_value in attrs.items():\n if isinstance(attr_value, types.FunctionType):\n attrs[attr_name] = cls.deco(attr_value)\n\n return super().__new__(cls, name, bases, attrs)\n\n @classmethod\n def deco(cls, func):\n def wrapper(*args, **kwargs):\n print(\"before\",func.__name__)\n result = func(*args, **kwargs)\n print(\"after\",func.__name__)\n return result\n return wrapper\n\n(and thanks to Duncan for this)\n",
"Of course that the metaclasses are the most pythonic way to go when you want to modify the way that python creates the objects. Which can be done by overriding the __new__ method of your class. But there are some points around this problem (specially for python 3.X) that I'd like to mention:\n\ntypes.FunctionType doesn't protect the special methods from being decorated, as they are function types. As a more general way you can just decorate the objects which their names are not started with double underscore (__). One other benefit of this method is that it also covers those objects that exist in namespace and starts with __ but are not function like __qualname__, __module__ , etc.\nThe namespace argument in __new__'s header doesn't contain class attributes within the __init__. The reason is that the __new__ executes before the __init__ (initializing).\nIt's not necessary to use a classmethod as the decorator, as in most of the times you import your decorator from another module.\nIf your class is contain a global item (out side of the __init__) for refusing of being decorated alongside checking if the name is not started with __ you can check the type with types.FunctionType to be sure that you're not decorating a non-function object.\n\n\nHere is a sample metacalss that you can use:\nclass TheMeta(type):\n def __new__(cls, name, bases, namespace, **kwds):\n # if your decorator is a class method of the metaclass use\n # `my_decorator = cls.my_decorator` in order to invoke the decorator.\n namespace = {k: v if k.startswith('__') else my_decorator(v) for k, v in namespace.items()}\n return type.__new__(cls, name, bases, namespace)\n\nDemo:\ndef my_decorator(func):\n def wrapper(self, arg):\n # You can also use *args instead of (self, arg) and pass the *args\n # to the function in following call.\n return \"the value {} gets modified!!\".format(func(self, arg))\n return wrapper\n\n\nclass TheMeta(type):\n def __new__(cls, name, bases, namespace, **kwds):\n # my_decorator = cls.my_decorator (if the decorator is a classmethod)\n namespace = {k: v if k.startswith('__') else my_decorator(v) for k, v in namespace.items()}\n return type.__new__(cls, name, bases, namespace)\n\n\nclass MyClass(metaclass=TheMeta):\n # a = 10\n def __init__(self, *args, **kwargs):\n self.item = args[0]\n self.value = kwargs['value']\n\n def __getattr__(self, attr):\n return \"This class hasn't provide the attribute {}.\".format(attr)\n\n def myfunction_1(self, arg):\n return arg ** 2\n\n def myfunction_2(self, arg):\n return arg ** 3\n\nmyinstance = MyClass(1, 2, value=100)\nprint(myinstance.myfunction_1(5))\nprint(myinstance.myfunction_2(2))\nprint(myinstance.item)\nprint(myinstance.p)\n\nOutput:\nthe value 25 gets modified!!\nthe value 8 gets modified!!\n1\nThis class hasn't provide the attribute p. # special method is not decorated.\n\nFor checking the 3rd item from the aforementioned notes you can uncomment the line a = 10 and do print(myinstance.a) and see the result then change the dictionary comprehension in __new__ as follows then see the result again:\nnamespace = {k: v if k.startswith('__') and not isinstance(v, types.FunctionType)\\\n else my_decorator(v) for k, v in namespace.items()}\n\n",
"I will repeat my answer here, for a similar issue\nIt can be done many different ways. I will show how to make it through meta-class, class decorator and inheritance.\nby changing meta class\nimport functools\n\n\nclass Logger(type):\n @staticmethod\n def _decorator(fun):\n @functools.wraps(fun)\n def wrapper(*args, **kwargs):\n print(fun.__name__, args, kwargs)\n return fun(*args, **kwargs)\n return wrapper\n\n def __new__(mcs, name, bases, attrs):\n for key in attrs.keys():\n if callable(attrs[key]):\n # if attrs[key] is callable, then we can easily wrap it with decorator\n # and substitute in the future attrs\n # only for extra clarity (though it is wider type than function)\n fun = attrs[key]\n attrs[key] = Logger._decorator(fun)\n # and then invoke __new__ in type metaclass\n return super().__new__(mcs, name, bases, attrs)\n\n\nclass A(metaclass=Logger):\n def __init__(self):\n self.some_val = \"some_val\"\n\n def method_first(self, a, b):\n print(a, self.some_val)\n\n def another_method(self, c):\n print(c)\n\n @staticmethod\n def static_method(d):\n print(d)\n\n\nb = A()\n# __init__ (<__main__.A object at 0x7f852a52a2b0>,) {}\n\nb.method_first(5, b=\"Here should be 5\")\n# method_first (<__main__.A object at 0x7f852a52a2b0>, 5) {'b': 'Here should be 5'}\n# 5 some_val\nb.method_first(6, b=\"Here should be 6\")\n# method_first (<__main__.A object at 0x7f852a52a2b0>, 6) {'b': 'Here should be 6'}\n# 6 some_val\nb.another_method(7)\n# another_method (<__main__.A object at 0x7f852a52a2b0>, 7) {}\n# 7\nb.static_method(7)\n# 7\n\nAlso, will show two approaches how to make it without changing meta information of class (through class decorator and class inheritance). The first approach through class decorator put_decorator_on_all_methods accepts decorator to wrap all member callable objects of class.\ndef logger(f):\n @functools.wraps(f)\n def wrapper(*args, **kwargs):\n print(f.__name__, args, kwargs)\n return f(*args, **kwargs)\n\n return wrapper\n\n\ndef put_decorator_on_all_methods(decorator, cls=None):\n if cls is None:\n return lambda cls: put_decorator_on_all_methods(decorator, cls)\n\n class Decoratable(cls):\n def __init__(self, *args, **kargs):\n super().__init__(*args, **kargs)\n\n def __getattribute__(self, item):\n value = object.__getattribute__(self, item)\n if callable(value):\n return decorator(value)\n return value\n\n return Decoratable\n\n\n@put_decorator_on_all_methods(logger)\nclass A:\n def method(self, a, b):\n print(a)\n\n def another_method(self, c):\n print(c)\n\n @staticmethod\n def static_method(d):\n print(d)\n\n\nb = A()\nb.method(5, b=\"Here should be 5\")\n# >>> method (5,) {'b': 'Here should be 5'}\n# >>> 5\nb.method(6, b=\"Here should be 6\")\n# >>> method (6,) {'b': 'Here should be 6'}\n# >>> 6\nb.another_method(7)\n# >>> another_method (7,) {}\n# >>> 7\nb.static_method(8)\n# >>> static_method (8,) {}\n# >>> 8\n\nAnd, recently, I've come across on the same problem, but I couldn't put decorator on class or change it in any other way, except I was allowed to add such behavior through inheritance only (I am not sure that this is the best choice if you can change codebase as you wish though).\nHere class Logger forces all callable members of subclasses to write information about their invocations, see code below.\nclass Logger:\n\n def _decorator(self, f):\n @functools.wraps(f)\n def wrapper(*args, **kwargs):\n print(f.__name__, args, kwargs)\n return f(*args, **kwargs)\n\n return wrapper\n\n def __getattribute__(self, item):\n value = 
object.__getattribute__(self, item)\n if callable(value):\n decorator = object.__getattribute__(self, '_decorator')\n return decorator(value)\n return value\n\n\nclass A(Logger):\n def method(self, a, b):\n print(a)\n\n def another_method(self, c):\n print(c)\n\n @staticmethod\n def static_method(d):\n print(d)\n\nb = A()\nb.method(5, b=\"Here should be 5\")\n# >>> method (5,) {'b': 'Here should be 5'}\n# >>> 5\nb.method(6, b=\"Here should be 6\")\n# >>> method (6,) {'b': 'Here should be 6'}\n# >>> 6\nb.another_method(7)\n# >>> another_method (7,) {}\n# >>> 7\nb.static_method(7)\n# >>> static_method (7,) {}\n# >>> 7\n\nOr more abstractly, you can instantiate base class based on some decorator. \ndef decorator(f):\n @functools.wraps(f)\n def wrapper(*args, **kwargs):\n print(f.__name__, args, kwargs)\n return f(*args, **kwargs)\n return wrapper\n\n\nclass Decoratable:\n def __init__(self, dec):\n self._decorator = dec\n\n def __getattribute__(self, item):\n value = object.__getattribute__(self, item)\n if callable(value):\n decorator = object.__getattribute__(self, '_decorator')\n return decorator(value)\n return value\n\n\nclass A(Decoratable):\n def __init__(self, dec):\n super().__init__(dec)\n\n def method(self, a, b):\n print(a)\n\n def another_method(self, c):\n print(c)\n\n @staticmethod\n def static_method(d):\n print(d)\n\nb = A(decorator)\nb.method(5, b=\"Here should be 5\")\n# >>> method (5,) {'b': 'Here should be 5'}\n# >>> 5\nb.method(6, b=\"Here should be 6\")\n# >>> method (6,) {'b': 'Here should be 6'}\n# >>> 6\nb.another_method(7)\n# >>> another_method (7,) {}\n# >>> 7\nb.static_method(7)\n# >>> static_method (7,) {}\n# >>> 7\n\n",
"There's another slightly similar thing you might want to do in some cases. Sometimes you want to trigger the attachment for something like debugging and not on all the classes but for every method of an object you might want a record of what it's doing.\ndef start_debugging():\n import functools\n import datetime\n filename = \"debug-{date:%Y-%m-%d_%H_%M_%S}.txt\".format(date=datetime.datetime.now())\n debug_file = open(filename, \"a\")\n debug_file.write(\"\\nDebug.\\n\")\n\n def debug(func):\n @functools.wraps(func)\n def wrapper_debug(*args, **kwargs):\n args_repr = [repr(a) for a in args] # 1\n kwargs_repr = [f\"{k}={v!r}\" for k, v in kwargs.items()] # 2\n signature = \", \".join(args_repr + kwargs_repr) # 3\n debug_file.write(f\"Calling {func.__name__}({signature})\\n\")\n value = func(*args, **kwargs)\n debug_file.write(f\"{func.__name__!r} returned {value!r}\\n\") # 4\n debug_file.flush()\n return value\n return wrapper_debug\n\n for obj in (self):\n for attr in dir(obj):\n if attr.startswith('_'):\n continue\n fn = getattr(obj, attr)\n if not isinstance(fn, types.FunctionType) and \\\n not isinstance(fn, types.MethodType):\n continue\n setattr(obj, attr, debug(fn))\n\nThis function will go through some objects (only self currently) and replace all functions and methods that do not start with _ with a debugging decorator.\nThe method used for this of just iterating the dir(self) is not addressed above but totally works. And can be called externally from the object and much more arbitrarily.\n",
"In Python 3 you could also write a simple function that overwrites/applies a decorator to certain methods like so:\nfrom functools import wraps\nfrom types import MethodType\n\ndef logged(func):\n @wraps(func)\n def wrapper(*args, **kwargs):\n res = func(*args, **kwargs)\n print(\"logging:\", func.__name__, res)\n return res\n return wrapper\n\nclass Test:\n def foo(self):\n return 42\n ...\n\ndef aspectize(cls, decorator):\n for name, func in cls.__dict__.items():\n if not name.startswith(\"__\"):\n setattr(cls, name, MethodType(decorator(func), cls)) # MethodType is key\n\naspectize(Test, logged)\nt = Test()\nt.foo() # printing \"logging: foo 42\"; returning 42\n\n",
"I came to this question from:\nHow to decorate all functions of a class without typing it over and over for each method?\nAnd I want add a one note:\nAnswers with class decorators or repalcing class methods like this one:\nhttps://stackoverflow.com/a/6307868/11277611\nWill not work with staticmethod.\nYou will get TypeError, unexpected argument because your method will get self/cls as first argument.\nProbably:\nDecorated class doesn't know about decorators of self methods and can't be distincted even with inspect.ismethod.\nI come to such quickfix:\nI'm not checked it closely but it passes my (no so comprehensive) tests.\nUsing dynamically decorators is already a bad approach, so, it must be okay as temporary solution.\nTLD:TD Add try/exception to use with staticmethod\ndef log_sent_data(function):\n @functools_wraps(function)\n def decorator(*args, **kwargs):\n # Quickfix\n self, *args = args\n try: # If method has self/cls/descriptor\n result = function(self, *args, **kwargs)\n except TypeError:\n if args: # If method is static but has positional args\n result = function(*args, **kwargs)\n else: # If method is static and doesn't has positional args\n result = function(**kwargs)\n # End of quickfix\n return result\n return decorator\n\n"
] | [
42,
37,
12,
7,
2,
2,
0,
0,
0
] | [
"You could override the __getattr__ method. It's not actually attaching a decorator, but it lets you return a decorated method. You'd probably want to do something like this:\nclass Eggs(object):\n def __getattr__(self, attr):\n return decorate(getattr(self, `_` + attr))\n\nThere's some ugly recursion hiding in there that you'll want to protect against, but that's a start.\n"
] | [
-1
] | [
"class",
"class_method",
"decorator",
"oop",
"python"
] | stackoverflow_0003467526_class_class_method_decorator_oop_python.txt |
Q:
K Prototype initialization kept repeating initializing centroids and initializing clusters step
I am working on implementing k-Prototype clustering in Python. The data frame shape is (1870995, 28). I have set kproto = KPrototypes(n_clusters=3, verbose=2,max_iter=20). However, the initialization keeps repeating "initializing centroids" and "initializing clusters" and doesn't start iteration steps.
Is my data frame too big?
Is this expected, and should I just wait for it to finish running?
Here is the printed output :
Initialization method and algorithm are deterministic. Setting n_init to 1.
Init: initializing centroids
Init: initializing clusters
Init: initializing centroids
Init: initializing clusters
Init: initializing centroids
Init: initializing clusters
Init: initializing centroids
Init: initializing clusters
Init: initializing centroids
Init: initializing clusters
Init: initializing centroids
Init: initializing clusters
Init: initializing centroids
Init: initializing clusters
Init: initializing centroids
Init: initializing clusters
Init: initializing centroids
Init: initializing clusters
Init: initializing centroids
Init: initializing clusters
A:
I believe your code only constructs the KPrototypes object; you also need to call the fit_predict method on it to actually run the clustering, something like below:
kproto = KPrototypes(n_clusters=3, verbose=2, max_iter=20)

kproto.fit_predict(df, categorical=[3, 4, 5]) # categorical column indices
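Building on that, the fitted estimator then exposes the results; a short sketch (attribute names are as I recall them from the kmodes package, so double-check against your installed version):
clusters = kproto.fit_predict(df, categorical=[3, 4, 5])
df['cluster'] = clusters            # attach the labels back to the dataframe
print(kproto.cost_)                 # final clustering cost
print(kproto.cluster_centroids_)    # centroids of the 3 clusters
With verbose=2 you should then see the per-iteration messages appear after the two initialization lines.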
| K Prototype initialization kept repeating initializing centroids and initializing clusters step | I am working on implementing k-Prototype clustering in Python. The data frame shape is (1870995, 28). I have set kproto = KPrototypes(n_clusters=3, verbose=2,max_iter=20). However, the initialization keeps repeating "initializing centroids" and "initializing clusters" and doesn't start iteration steps.
Is my data frame is too big ?
Should this be expected and I should wait for it to run ?
Here is the printed output :
Initialization method and algorithm are deterministic. Setting n_init to 1.
Init: initializing centroids
Init: initializing clusters
Init: initializing centroids
Init: initializing clusters
Init: initializing centroids
Init: initializing clusters
Init: initializing centroids
Init: initializing clusters
Init: initializing centroids
Init: initializing clusters
Init: initializing centroids
Init: initializing clusters
Init: initializing centroids
Init: initializing clusters
Init: initializing centroids
Init: initializing clusters
Init: initializing centroids
Init: initializing clusters
Init: initializing centroids
Init: initializing clusters
| [
"I believe what you have is calling the proto method, you should also call the fit_predict method on it, something like below:\nkproto = KPrototypes(n_clusters=3, verbose=2, max_iter=20).\n\nkproto.fit_predict(df, categorical=[3, 4, 5]) # categorical column indices\n\n"
] | [
0
] | [] | [] | [
"arrays",
"cluster_analysis",
"prototype",
"python"
] | stackoverflow_0072814400_arrays_cluster_analysis_prototype_python.txt |
Q:
Problems to connect SFTP with SSH and passprash in Synapse with Python
I am trying to connect to an SFTP server that uses a username, a passphrase and an SSH key (no password needed) from a notebook in Synapse.
The SSH key is kept as a secret in Key Vault.
I have run into different errors so far:
import paramiko
import pysftp
from base64 import decodebytes

Host = "sftp.xxxxx.no"
Username = "xxxxx"
Passphrase = "xxxxx"
port = 22
from notebookutils import mssparkutils
SSHkey = mssparkutils.credentials.getSecret('keyvault','SSHkey') #so far has no problem
keydata = b"""AAAAxxx==""" #this is a public key I got through running ssh-keyscan in terminal
key = paramiko.RSAKey(data=decodebytes(keydata))
import io
privkey = io.StringIO(SSHkey)
ki = paramiko.RSAKey.from_private_key(privkey)
cnopts = pysftp.CnOpts()
cnopts.hostkeys.add('sftp.xxxxx.no', 'ssh-rsa', key)
client = pysftp.Connection(host = Host, username = Username, private_key = SSHkey, private_key_pass = Passphrase, cnopts=cnopts)
output = client.listdir()
I have been browsing answers on the internet for days and tried different code samples, but they all fail. The current error message is:
not a valid RSA private key file
Can anyone show me the light at the end of the tunnel?
A:
You have a public key stored in your key vault, not a private key.
You cannot authenticate using a public key. You have to store the private key in the vault.
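Once the private key itself is in the vault, a minimal sketch of the connection could look like the code below. It assumes the secret named 'SSHkey' now holds the private key text (the '-----BEGIN ... PRIVATE KEY-----' block) and reuses the Host, port, Username and Passphrase variables from the question; it goes through paramiko directly rather than pysftp:
import io
import paramiko
from notebookutils import mssparkutils

private_key_text = mssparkutils.credentials.getSecret('keyvault', 'SSHkey')  # private key contents
pkey = paramiko.RSAKey.from_private_key(io.StringIO(private_key_text), password=Passphrase)

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # or load the server's host key explicitly
ssh.connect(hostname=Host, port=port, username=Username, pkey=pkey)

sftp = ssh.open_sftp()
print(sftp.listdir())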
| Problems to connect SFTP with SSH and passprash in Synapse with Python | I am trying to connect to a SFTP that use username, passphrase, SSH key (no password needed) in notebook in Synapse.
SSH key is kept as a secreat in Key Vault.
Have runned into different errors so far:
Host = "sftp.xxxxx.no"
Username = "xxxxx"
Passphrase = "xxxxx"
port = 22
from notebookutils import mssparkutils
SSHkey = mssparkutils.credentials.getSecret('keyvault','SSHkey') #so far has no problem
keydata = b"""AAAAxxx==""" #this is a public key I got through running ssh-keyscan in terminal
key = paramiko.RSAKey(data=decodebytes(keydata))
import io
privkey = io.StringIO(SSHkey)
ki = paramiko.RSAKey.from_private_key(privkey)
cnopts = pysftp.CnOpts()
cnopts.hostkeys.add('sftp.xxxxx.no', 'ssh-rsa', key)
client = pysftp.Connection(host = Host, username = Username, private_key = SSHkey, private_key_pass = Passphrase, cnopts=cnopts)
output = client.listdir()
I have been browsing answers on internet for days, tried different codes, all get error, now the error msg is:
not a valid RSA private key file
Anyone can show me the light of the tunnel?
| [
"You have public key stored in your key vault. Not a private key.\nYou cannot authenticate using public key. You have to store the private key to the vault.\n"
] | [
0
] | [] | [] | [
"azure_synapse",
"python",
"sftp"
] | stackoverflow_0074597291_azure_synapse_python_sftp.txt |
Q:
How to include flags using condition in pyspark dataframe
I have a dataframe as shown below
df:
id vehicle production asIs EU EU_variant status
1 A3345 PQ1298 FV1 FV1_variant OK
2 A3346 A3346 PQ1287 FV2 FV2_variant NOT_OK
3 A3346 A3346 PQ1207 FV2 FV2_variant NOT_OK
4 A3347 QP9 QP9_variant OK
5 A3347 QP9 QP9_variant NOT_OK
6 A3347 QP3 QP3_variant OK
7 A3348 MP6553 YR34 YR34_variant NOT_OK
8 A3348 MP6554 YR35 YR35_variant NOT_OK
9 A3348 MP6554 YR35 YR35_variant NOT_OK
For each distinct vehicle and distinct EU, I need to create 2 columns, "Flag" and "Part": if the group has both OK and NOT_OK statuses then the flag should be 0, else if it has only NOT_OK then the flag should be 1.
Also, for each distinct vehicle and distinct EU, I need to club the asIs values together without duplicates.
If the vehicle number is not present, then it should check distinct production and distinct EU instead.
output should be
id vehicle production asIs EU EU_variant status Flag Part
1 A3345 PQ1298 FV1 FV1_variant OK 0 PQ1298
2 A3346 A3346 PQ1287 FV2 FV2_variant NOT_OK 1 PQ1287,PQ1207
3 A3346 A3346 PQ1207 FV2 FV2_variant NOT_OK 1 PQ1287,PQ1207
4 A3347 QP9 QP9_variant OK 0
5 A3347 QP9 QP9_variant NOT_OK 0
6 A3347 QP3 QP3_variant OK 0
7 A3348 MP6553 YR34 YR34_variant NOT_OK 1 MP6553
8 - A3348 MP6554 YR35 YR35_variant NOT_OK 1 MP6554
9 A3348 MP6554 YR35 YR35_variant NOT_OK 1 MP6554
How can I achieve this scenario using a PySpark dataframe?
A:
You can use collect_set on the status field to get the distinct statuses over your desired partition, and use the result to flag the records. collect_set returns an array column, which can be checked for its length (using size) and its contents (using array_contains).
see example below
import sys
from pyspark.sql import functions as func
from pyspark.sql.window import Window as wd

data_sdf. \
withColumn('vehicle_prod', func.coalesce('vehicle', 'production')). \
withColumn('vehicle_prod_eu',
func.collect_set('status').over(wd.partitionBy('vehicle_prod', 'eu').orderBy('id').rowsBetween(-sys.maxsize, sys.maxsize))
). \
withColumn('flag',
((func.size('vehicle_prod_eu') == 1) &
(func.array_contains('vehicle_prod_eu', 'NOT_OK'))).cast('int')
). \
withColumn('part',
func.collect_set('asis').over(wd.partitionBy('vehicle_prod', 'eu').orderBy('id').rowsBetween(-sys.maxsize, sys.maxsize))
). \
withColumn('part', func.concat_ws(',', 'part')). \
orderBy('id'). \
show()
# +---+-------+----------+------+----+------------+------+------------+---------------+----+-------------+
# | id|vehicle|production| asis| eu| eu_variant|status|vehicle_prod|vehicle_prod_eu|flag| part|
# +---+-------+----------+------+----+------------+------+------------+---------------+----+-------------+
# | 1| A3345| null|PQ1298| FV1| FV1_variant| OK| A3345| [OK]| 0| PQ1298|
# | 2| A3346| A3346|PQ1287| FV2| FV2_variant|NOT_OK| A3346| [NOT_OK]| 1|PQ1287,PQ1207|
# | 3| A3346| A3346|PQ1207| FV2| FV2_variant|NOT_OK| A3346| [NOT_OK]| 1|PQ1287,PQ1207|
# | 4| null| A3347| null| QP9| QP9_variant| OK| A3347| [NOT_OK, OK]| 0| |
# | 5| null| A3347| null| QP9| QP9_variant|NOT_OK| A3347| [NOT_OK, OK]| 0| |
# | 6| null| A3347| null| QP3| QP3_variant| OK| A3347| [OK]| 0| |
# | 7| null| A3348|MP6553|YR34|YR34_variant|NOT_OK| A3348| [NOT_OK]| 1| MP6553|
# | 8| null| A3348|MP6554|YR35|YR35_variant|NOT_OK| A3348| [NOT_OK]| 1| MP6554|
# | 9| null| A3348|MP6554|YR35|YR35_variant|NOT_OK| A3348| [NOT_OK]| 1| MP6554|
# +---+-------+----------+------+----+------------+------+------------+---------------+----+-------------+
| How to include flags using condition in pyspark dataframe | i have a dataframe as shown below
df:
id vehicle production asIs EU EU_variant status
1 A3345 PQ1298 FV1 FV1_variant OK
2 A3346 A3346 PQ1287 FV2 FV2_variant NOT_OK
3 A3346 A3346 PQ1207 FV2 FV2_variant NOT_OK
4 A3347 QP9 QP9_variant OK
5 A3347 QP9 QP9_variant NOT_OK
6 A3347 QP3 QP3_variant OK
7 A3348 MP6553 YR34 YR34_variant NOT_OK
8 A3348 MP6554 YR35 YR35_variant NOT_OK
9 A3348 MP6554 YR35 YR35_variant NOT_OK
for each distinct vehicle ,distinct EU I need to create 2 columns "flag" and "part" where if it has status both okay and not okay then falg will be 0, else if it has only not_ok then flag will be 1.
and for each distinct vehicle and distinct EU , I need to club asIs without duplicates.
if vehicle number is not present then it should check for distinct production and distinct EU
output should be
id vehicle production asIs EU EU_variant status Flag Part
1 A3345 PQ1298 FV1 FV1_variant OK 0 PQ1298
2 A3346 A3346 PQ1287 FV2 FV2_variant NOT_OK 1 PQ1287,PQ1207
3 A3346 A3346 PQ1207 FV2 FV2_variant NOT_OK 1 PQ1287,PQ1207
4 A3347 QP9 QP9_variant OK 0
5 A3347 QP9 QP9_variant NOT_OK 0
6 A3347 QP3 QP3_variant OK 0
7 A3348 MP6553 YR34 YR34_variant NOT_OK 1 MP6553
8 - A3348 MP6554 YR35 YR35_variant NOT_OK 1 MP6554
9 A3348 MP6554 YR35 YR35_variant NOT_OK 1 MP6554
How to achieve this scenario using pyspark dataframe
| [
"You can use collect_set on status field to get the distinct statuses on your desired partition. use the result to flag the records. collect_set returns an array field which can be used to check the length (using size) and its contents (using array_contains).\nsee example below\ndata_sdf. \\\n withColumn('vehicle_prod', func.coalesce('vehicle', 'production')). \\\n withColumn('vehicle_prod_eu', \n func.collect_set('status').over(wd.partitionBy('vehicle_prod', 'eu').orderBy('id').rowsBetween(-sys.maxsize, sys.maxsize))\n ). \\\n withColumn('flag', \n ((func.size('vehicle_prod_eu') == 1) & \n (func.array_contains('vehicle_prod_eu', 'NOT_OK'))).cast('int')\n ). \\\n withColumn('part', \n func.collect_set('asis').over(wd.partitionBy('vehicle_prod', 'eu').orderBy('id').rowsBetween(-sys.maxsize, sys.maxsize))\n ). \\\n withColumn('part', func.concat_ws(',', 'part')). \\\n orderBy('id'). \\\n show()\n\n# +---+-------+----------+------+----+------------+------+------------+---------------+----+-------------+\n# | id|vehicle|production| asis| eu| eu_variant|status|vehicle_prod|vehicle_prod_eu|flag| part|\n# +---+-------+----------+------+----+------------+------+------------+---------------+----+-------------+\n# | 1| A3345| null|PQ1298| FV1| FV1_variant| OK| A3345| [OK]| 0| PQ1298|\n# | 2| A3346| A3346|PQ1287| FV2| FV2_variant|NOT_OK| A3346| [NOT_OK]| 1|PQ1287,PQ1207|\n# | 3| A3346| A3346|PQ1207| FV2| FV2_variant|NOT_OK| A3346| [NOT_OK]| 1|PQ1287,PQ1207|\n# | 4| null| A3347| null| QP9| QP9_variant| OK| A3347| [NOT_OK, OK]| 0| |\n# | 5| null| A3347| null| QP9| QP9_variant|NOT_OK| A3347| [NOT_OK, OK]| 0| |\n# | 6| null| A3347| null| QP3| QP3_variant| OK| A3347| [OK]| 0| |\n# | 7| null| A3348|MP6553|YR34|YR34_variant|NOT_OK| A3348| [NOT_OK]| 1| MP6553|\n# | 8| null| A3348|MP6554|YR35|YR35_variant|NOT_OK| A3348| [NOT_OK]| 1| MP6554|\n# | 9| null| A3348|MP6554|YR35|YR35_variant|NOT_OK| A3348| [NOT_OK]| 1| MP6554|\n# +---+-------+----------+------+----+------------+------+------------+---------------+----+-------------+\n\n"
] | [
1
] | [] | [] | [
"pyspark",
"python",
"python_3.x"
] | stackoverflow_0074627447_pyspark_python_python_3.x.txt |
Q:
Misunderstanding numpy.vectorize
I want to apply a function to each row of an array. Even on a simple example like the one below I can't get it to work.
I make a function that takes two vectors and applies the dot product to them.
import numpy as np
def func(x,y):
return np.dot(x,y)
y=np.array([0, 1, 2])
x=np.array([0, 1, 2])
print(func(x,y))
Of course the output is 5. Now I want to plug in multiple vectors x and get a solution back for each one. I don't want to use a for loop, so I tried using the vectorize function. For instance, below I define X=(x1,x2,x3) and I want the output func(X,y)=(func(x1,y), func(x2,y), func(x3,y)). Why doesn't the following code do that?
import numpy as np
def func(x,y):
return np.dot(x,y)
y=np.array([0, 1 , 2])
X=np.array([[0,0,0], [1,1,1], [2,2,2]])
vfunc=np.vectorize(func)
print(vfunc(X,y))
A:
The first trick is to exclude the y argument (it is a fixed value
for all rows of x).
The second trick is to pass a signature: both arguments are 1-D arrays and
the result is a scalar.
So, to vectorize your function, run:
vfunc = np.vectorize(func, excluded=['y'], signature='(n),(n)->()')
Then, when you call vfunc(X,y), you will get:
array([0, 3, 6])
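For completeness, here is the whole snippet assembled into one runnable example (same approach as above, nothing new assumed):
import numpy as np

def func(x, y):
    return np.dot(x, y)

y = np.array([0, 1, 2])
X = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]])

# signature='(n),(n)->()' tells vectorize that each call takes two 1-D arrays and
# returns a scalar, so X is looped over row by row while y broadcasts against it;
# excluded=['y'] additionally keeps y un-vectorized when it is passed by keyword.
vfunc = np.vectorize(func, excluded=['y'], signature='(n),(n)->()')
print(vfunc(X, y))  # [0 3 6]
Keep in mind that np.vectorize is essentially a Python loop under the hood, so it is a convenience rather than a performance optimization.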
| Misunderstanding numpy.vectorize | I want to apply a function to each row in a vector. Even on a simple example like the one below I cant get it to work.
I make a function that takes two vectors and applies the dot product to them.
import numpy as np
def func(x,y):
return np.dot(x,y)
y=np.array([0, 1, 2])
x=np.array([0, 1, 2])
print(func(x,y))
Of course the output is 5. Now I want to plug in multiple vectors x, and get a solution back for each one. I dont want to use a for loop, so I tried using the vectorize function. For instance below I define X=(x1,x2,x3) and I want the output func(X,y)=(func(x1,y), func(x2,y), func(x3,y)). Why doesn't the following code do that:
import numpy as np
def func(x,y):
return np.dot(x,y)
y=np.array([0, 1 , 2])
X=np.array([[0,0,0], [1,1,1], [2,2,2]])
vfunc=np.vectorize(func)
print(vfunc(X,y))
| [
"The first trick to do is to exclude y argument (it is a fixed value\nfor all rows from x).\nThe second trick is to pass the signature: Both arguments are arrays and\nthe result is a scalar.\nSo, to vectorize your function, run:\nvfunc = np.vectorize(func, excluded=['y'], signature='(n),(n)->()')\n\nThen, when you call vfunc(X,y), you will get:\narray([0, 3, 6])\n\n"
] | [
1
] | [] | [] | [
"arrays",
"function",
"numpy",
"python",
"vectorization"
] | stackoverflow_0074627108_arrays_function_numpy_python_vectorization.txt |
Q:
(Django) Why my login only work for db table:auth_user can't work another table:myapp_user?
Sorry bad English!
I have a login function in my project like:
from django.contrib.auth.forms import AuthenticationForm
from django.contrib import auth
from django.http import HttpResponseRedirect
from django.shortcuts import render
def log_in(request):
form = AuthenticationForm(request.POST)
if request.method == "POST":
form = AuthenticationForm(data=request.POST)
username = request.POST.get('username')
password = request.POST.get('password')
user = auth.authenticate(request,username=username,password=password)
if user is not None and user.is_active:
auth.login(request,user)
return HttpResponseRedirect('/index')
context = {
'form': form,
}
return render(request, 'login.html', context)
This function works for any account saved in the database table auth_user, but it doesn't work with my own User class:
from django.contrib.auth.models import (
BaseUserManager, AbstractBaseUser )
class User(AbstractBaseUser):
username = models.CharField(unique=True,max_length=20,default="")
email = models.EmailField()
date_of_birth = models.DateField()
date_joined = models.DateTimeField(auto_now_add=True)
last_login = models.DateTimeField(auto_now=True)
is_admin = models.BooleanField(default=False)
is_active = models.BooleanField(default=True)
objects = AccountManager()
    USERNAME_FIELD = 'username' # acts like a primary key for authentication
    REQUIRED_FIELDS = ['email','username'] # required fields
def __str__(self):
return self.email
def is_staff(self):
return self.is_admin
def has_perm(self, perm, obj=None):
return self.is_admin
def has_module_perms(self, app_label):
return self.is_admin
If I use the register page to create a new account, the data is saved in the table function_user ("function" is my app name). If I log in with an auth_user account it jumps to /index, which is right, but when I log in with a newly registered account nothing happens, even though I am really sure the username and password were created in the table function_user.
The register view (it seems to work properly; the data is saved in function_user):
def register(request):
if request.method == 'POST':
form = RegisterForm(request.POST)
if form.is_valid():
form.save()
return HttpResponseRedirect('/index')
else:
form = RegisterForm()
context = {'form': form}
return render(request, 'register.html', context)
Database
I want my newly created User accounts (i.e. in function_user) to be able to log in, in addition to the auth_user accounts.
Thanks for reading.
A:
Did you add the custom user model in your settings.py file?
Add the following line in settings.py,
replacing users with the name of the app in which your custom User model is defined:
AUTH_USER_MODEL = 'users.User'
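For this question that would presumably be (assuming the app that defines the custom User class really is called function, as the function_user table name suggests):
# settings.py: the app label 'function' is an assumption based on the table name in the question
AUTH_USER_MODEL = 'function.User'
Note that Django expects AUTH_USER_MODEL to point at the model that authenticate() and the login form should check, which is why accounts in the other table are currently ignored.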
| (Django) Why my login only work for db table:auth_user can't work another table:myapp_user? | Sorry bad English!
I have a login function in my project like:
from django.contrib.auth.forms import AuthenticationForm
from django.contrib import auth
from django.http import HttpResponseRedirect
from django.shortcuts import render
def log_in(request):
form = AuthenticationForm(request.POST)
if request.method == "POST":
form = AuthenticationForm(data=request.POST)
username = request.POST.get('username')
password = request.POST.get('password')
user = auth.authenticate(request,username=username,password=password)
if user is not None and user.is_active:
auth.login(request,user)
return HttpResponseRedirect('/index')
context = {
'form': form,
}
return render(request, 'login.html', context)
This function can work for any accounts that save in database table:auth_user,but can't work for my define class User:
from django.contrib.auth.models import (
BaseUserManager, AbstractBaseUser )
class User(AbstractBaseUser):
username = models.CharField(unique=True,max_length=20,default="")
email = models.EmailField()
date_of_birth = models.DateField()
date_joined = models.DateTimeField(auto_now_add=True)
last_login = models.DateTimeField(auto_now=True)
is_admin = models.BooleanField(default=False)
is_active = models.BooleanField(default=True)
objects = AccountManager()
USERNAME_FIELD = 'username' #類似主鍵的功用
REQUIRED_FIELDS = ['email','username'] #必填
def __str__(self):
return self.email
def is_staff(self):
return self.is_admin
def has_perm(self, perm, obj=None):
return self.is_admin
def has_module_perms(self, app_label):
return self.is_admin
If I use register page to create a new account data,it will save in table:function_user("function" is my app name),and if my login table:auth_user accounts it will jump to /index,this right,but when I login new register account,it not do anything , I really sure the username and password is already created in table:function_user.
register(it seems to work properly, the data will save in function_user):
def register(request):
if request.method == 'POST':
form = RegisterForm(request.POST)
if form.is_valid():
form.save()
return HttpResponseRedirect('/index')
else:
form = RegisterForm()
context = {'form': form}
return render(request, 'register.html', context)
Database
I want my newly created User account (i.e. function_user) to be able to login in addition to the auth_user.
Thanks for reading.
| [
"Did you add custom user model in settings.py file\nadd following line in settings.py\nreplace users with your app name in which your custom User model is present\nAUTH_USER_MODEL = 'users.User'\n\n"
] | [
0
] | [] | [] | [
"django",
"postgresql",
"python"
] | stackoverflow_0074628103_django_postgresql_python.txt |
Q:
Django invalid literal for int() with base 10: '??????? ???????' when i try to migrate
I'm trying to create a new tables in a new app added to my project .. makemigrations worked great but migrate is not working .. here is my models
blog/models.py
from django.db import models
# Create your models here.
from fostania_web_app.models import UserModel
class Tag(models.Model):
tag_name = models.CharField(max_length=250)
def __str__(self):
return self.tag_name
class BlogPost(models.Model):
post_title = models.CharField(max_length=250)
post_message = models.CharField(max_length=2000)
post_author = models.ForeignKey(UserModel, on_delete=models.PROTECT)
post_image = models.ImageField(upload_to='documents/%Y/%m/%d', null=False, blank=False)
post_tag = models.ForeignKey(Tag, on_delete=models.PROTECT)
post_created_at = models.DateTimeField(auto_now=True)
When I try to run python manage.py migrate I get this error:
invalid literal for int() with base 10: '??????? ???????'
UserModel is created in another app in the same project; that is why I used the statement from fostania_web_app.models import UserModel.
fostania_web_app/models.py
class UserModelManager(BaseUserManager):
def create_user(self, email, password, pseudo):
user = self.model()
user.name = name
user.email = self.normalize_email(email=email)
user.set_password(password)
user.save()
return user
def create_superuser(self, email, password):
'''
Used for: python manage.py createsuperuser
'''
user = self.model()
user.name = 'admin-yeah'
user.email = self.normalize_email(email=email)
user.set_password(password)
user.is_staff = True
user.is_superuser = True
user.save()
return user
class UserModel(AbstractBaseUser, PermissionsMixin):
## Personnal fields.
email = models.EmailField(max_length=254, unique=True)
name = models.CharField(max_length=16)
## [...]
## Django manage fields.
date_joined = models.DateTimeField(auto_now_add=True)
is_active = models.BooleanField(default=True)
is_staff = models.BooleanField(default=False)
USERNAME_FIELD = 'email'
REQUIRED_FIELD = ['email', 'name']
objects = UserModelManager()
def __str__(self):
return self.email
def get_short_name(self):
return self.name[:2].upper()
def get_full_name(self):
return self.name
and in my settings.py file I have this:
#custom_user
AUTH_USER_MODEL='fostania_web_app.UserModel'
and here is the full traceback:
Operations to perform:
Apply all migrations: admin, auth, blog, contenttypes, fostania_web
ons, sites, social_django
Running migrations:
Applying blog.0001_initial...Traceback (most recent call last):
File "manage.py", line 15, in <module>
execute_from_command_line(sys.argv)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\core\management\__init__.py", line 371, in execute_from_comm
utility.execute()
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\core\management\__init__.py", line 365, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\core\management\base.py", line 288, in run_from_argv
self.execute(*args, **cmd_options)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\core\management\base.py", line 335, in execute
output = self.handle(*args, **options)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\core\management\commands\migrate.py", line 200, in handle
fake_initial=fake_initial,
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\migrations\executor.py", line 117, in migrate
state = self._migrate_all_forwards(state, plan, full_plan, fake=f
nitial=fake_initial)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\migrations\executor.py", line 147, in _migrate_all_forwar
state = self.apply_migration(state, migration, fake=fake, fake_in
initial)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\migrations\executor.py", line 244, in apply_migration
state = migration.apply(state, schema_editor)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\migrations\migration.py", line 122, in apply
operation.database_forwards(self.app_label, schema_editor, old_st
t_state)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\migrations\operations\fields.py", line 84, in database_fo
field,
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\backends\sqlite3\schema.py", line 306, in add_field
self._remake_table(model, create_field=field)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\backends\sqlite3\schema.py", line 178, in _remake_table
self.effective_default(create_field)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\backends\base\schema.py", line 240, in effective_default
default = field.get_db_prep_save(default, self.connection)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\models\fields\related.py", line 936, in get_db_prep_save
return self.target_field.get_db_prep_save(value, connection=conne
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\models\fields\__init__.py", line 767, in get_db_prep_save
return self.get_db_prep_value(value, connection=connection, prepa
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\models\fields\__init__.py", line 939, in get_db_prep_valu
value = self.get_prep_value(value)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\models\fields\__init__.py", line 947, in get_prep_value
return int(value)
ValueError: invalid literal for int() with base 10: '??? ??? ?????'
A:
Error message: ValueError: invalid literal for int() with base 10: '??? ??? ?????'
As the exception says, '??? ??? ?????' does not qualify as an int.
Check the blog.0001_initial migration for '??? ??? ?????' and modify that value to a valid int.
You probably provided a garbage default value, which isn't an int, when re-running the makemigrations command (it asks for a one-off default when you add a non-nullable ForeignKey).
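For illustration only (the field and the exact value here are hypothetical; your generated migration will differ), the offending part of blog/migrations/0001_initial.py typically looks like a text default attached to a ForeignKey that needs an integer id:
migrations.AddField(
    model_name='blogpost',
    name='post_author',
    field=models.ForeignKey(
        default='??????? ???????',   # text where an integer user id is required
        on_delete=django.db.models.deletion.PROTECT,
        to=settings.AUTH_USER_MODEL,
    ),
),
Changing that default to the primary key of an existing user (e.g. default=1), or deleting the migration file and re-running makemigrations without typing a text default at the prompt, should let migrate proceed.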
A:
I had the same error.
My problem:
I had:
models.py
expiration_date = models.DateField(null=True, max_length=20)
And a validation in my:
forms.py
def clean_expiration_date(self):
data = self.cleaned_data['expiration_date']
if data < datetime.date.today():
        return data
And it gave me the error mentioned here, but only on a certain page.
My solution:
I changed the DateField to a DateTimeField,
and that fixed the problem in my case.
| Django invalid literal for int() with base 10: '??????? ???????' when i try to migrate | I'm trying to create a new tables in a new app added to my project .. makemigrations worked great but migrate is not working .. here is my models
blog/models.py
from django.db import models
# Create your models here.
from fostania_web_app.models import UserModel
class Tag(models.Model):
tag_name = models.CharField(max_length=250)
def __str__(self):
return self.tag_name
class BlogPost(models.Model):
post_title = models.CharField(max_length=250)
post_message = models.CharField(max_length=2000)
post_author = models.ForeignKey(UserModel, on_delete=models.PROTECT)
post_image = models.ImageField(upload_to='documents/%Y/%m/%d', null=False, blank=False)
post_tag = models.ForeignKey(Tag, on_delete=models.PROTECT)
post_created_at = models.DateTimeField(auto_now=True)
when i try to do python manage.py migrate i get this error
invalid literal for int() with base 10: '??????? ???????'
UserModel is created in another app in the same project that is why i used the statement from fostania_web_app.models import UserModel
fostania_web_app/models.py
class UserModelManager(BaseUserManager):
def create_user(self, email, password, pseudo):
user = self.model()
user.name = name
user.email = self.normalize_email(email=email)
user.set_password(password)
user.save()
return user
def create_superuser(self, email, password):
'''
Used for: python manage.py createsuperuser
'''
user = self.model()
user.name = 'admin-yeah'
user.email = self.normalize_email(email=email)
user.set_password(password)
user.is_staff = True
user.is_superuser = True
user.save()
return user
class UserModel(AbstractBaseUser, PermissionsMixin):
## Personnal fields.
email = models.EmailField(max_length=254, unique=True)
name = models.CharField(max_length=16)
## [...]
## Django manage fields.
date_joined = models.DateTimeField(auto_now_add=True)
is_active = models.BooleanField(default=True)
is_staff = models.BooleanField(default=False)
USERNAME_FIELD = 'email'
REQUIRED_FIELD = ['email', 'name']
objects = UserModelManager()
def __str__(self):
return self.email
def get_short_name(self):
return self.name[:2].upper()
def get_full_name(self):
return self.name
and on my setting.py files i have this :
#custom_user
AUTH_USER_MODEL='fostania_web_app.UserModel'
and here is the full traeback :
Operations to perform:
Apply all migrations: admin, auth, blog, contenttypes, fostania_web
ons, sites, social_django
Running migrations:
Applying blog.0001_initial...Traceback (most recent call last):
File "manage.py", line 15, in <module>
execute_from_command_line(sys.argv)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\core\management\__init__.py", line 371, in execute_from_comm
utility.execute()
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\core\management\__init__.py", line 365, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\core\management\base.py", line 288, in run_from_argv
self.execute(*args, **cmd_options)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\core\management\base.py", line 335, in execute
output = self.handle(*args, **options)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\core\management\commands\migrate.py", line 200, in handle
fake_initial=fake_initial,
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\migrations\executor.py", line 117, in migrate
state = self._migrate_all_forwards(state, plan, full_plan, fake=f
nitial=fake_initial)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\migrations\executor.py", line 147, in _migrate_all_forwar
state = self.apply_migration(state, migration, fake=fake, fake_in
initial)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\migrations\executor.py", line 244, in apply_migration
state = migration.apply(state, schema_editor)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\migrations\migration.py", line 122, in apply
operation.database_forwards(self.app_label, schema_editor, old_st
t_state)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\migrations\operations\fields.py", line 84, in database_fo
field,
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\backends\sqlite3\schema.py", line 306, in add_field
self._remake_table(model, create_field=field)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\backends\sqlite3\schema.py", line 178, in _remake_table
self.effective_default(create_field)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\backends\base\schema.py", line 240, in effective_default
default = field.get_db_prep_save(default, self.connection)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\models\fields\related.py", line 936, in get_db_prep_save
return self.target_field.get_db_prep_save(value, connection=conne
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\models\fields\__init__.py", line 767, in get_db_prep_save
return self.get_db_prep_value(value, connection=connection, prepa
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\models\fields\__init__.py", line 939, in get_db_prep_valu
value = self.get_prep_value(value)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\models\fields\__init__.py", line 947, in get_prep_value
return int(value)
ValueError: invalid literal for int() with base 10: '??? ??? ?????'
| [
"Error message: ValueError: invalid literal for int() with base 10: '??? ??? ?????'\nAs per exception int() with base 10: '??? ??? ?????' doesn't qualify as int.\nCheck in blog.0001_initial migrations for '??? ??? ?????' and modify that value with a valid int.\nYou might have accidentally provided garbage default value while rerunning makemigrations command which isn't an int()\n",
"I had the same error.\nMy problem:\nI had:\nmodels.y\nexpiration_date = models.DateField(null=True, max_length=20)\n\nAnd a validation in my:\nforms.py\n \ndef clean_expiration_date(self):\n data = self.cleaned_data['expiration_date']\n if data < datetime.date.today():\nreturn data\n\nAnd it gave me the here mentioned error. But only on a certain page.\nMy solution:\nI changed the DateField to DateTimeField\nAnd it fixed my problem in my case.\n"
] | [
5,
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0050991402_django_python.txt |
Q:
Fill by group and between two values
I want to fill all rows between two values by group. For each group, var1 has two values equal to 1, and I want to fill the missing rows between the two 1s. var1 represents what I have, var2 represents what I want, var3 shows what I am obtaining with my code, but it is not what I want (different from var2):
var1 group var2 var3
NaN 1 NaN NaN
NaN 1 NaN NaN
1 1 1 1
NaN 1 1 1
NaN 1 1 1
1 1 1 1
NaN 1 NaN 1
NaN 1 NaN 1
1 2 1 1
NaN 2 1 1
1 2 1 1
NaN 2 NaN 1
My code:
df['var3'] = df.groupby('group')['var1'].ffill()
A:
Assuming the values are only 1 or NaN, you can groupby.ffill and groupby.bfill and only keep the values that are identical:
g = df.groupby('group')['var1']
s1 = g.ffill()
s2 = g.bfill()
df['var2'] = s1.where(s1.eq(s2))
Output:
var1 group var2
0 NaN 1 NaN
1 NaN 1 NaN
2 1.0 1 1.0
3 NaN 1 1.0
4 NaN 1 1.0
5 1.0 1 1.0
6 NaN 1 NaN
7 NaN 1 NaN
8 1.0 2 1.0
9 NaN 2 1.0
10 1.0 2 1.0
11 NaN 2 NaN
Intermediates:
var1 group var2 ffill bfill
0 NaN 1 NaN NaN 1.0
1 NaN 1 NaN NaN 1.0
2 1.0 1 1.0 1.0 1.0
3 NaN 1 1.0 1.0 1.0
4 NaN 1 1.0 1.0 1.0
5 1.0 1 1.0 1.0 1.0
6 NaN 1 NaN 1.0 NaN
7 NaN 1 NaN 1.0 NaN
8 1.0 2 1.0 1.0 1.0
9 NaN 2 1.0 1.0 1.0
10 1.0 2 1.0 1.0 1.0
11 NaN 2 NaN 1.0 NaN
| Fill by group and between two values | I want to fill all rows between two values by group. For each group, var1 has two values equal to 1, and I want to fill the missing rows between the two 1s. var1 represents what I have, var2 represents what I want, var3 shows what I am obtaining with my code, but it is not what I want (different from var2):
var1 group var2 var3
NaN 1 NaN NaN
NaN 1 NaN NaN
1 1 1 1
NaN 1 1 1
NaN 1 1 1
1 1 1 1
NaN 1 NaN 1
NaN 1 NaN 1
1 2 1 1
NaN 2 1 1
1 2 1 1
NaN 2 NaN 1
My code:
df.var3 = df.groupby('group')['var1'].bffill()
| [
"Assuming the values are only 1 or NaN, you can groupby.ffill and groupby.bfill and only keep the values that are identical:\ng = df.groupby('group')['var1']\n\ns1 = g.ffill()\ns2 = g.bfill()\n\ndf['var2'] = s1.where(s1.eq(s2))\n\nOutput:\n var1 group var2\n0 NaN 1 NaN\n1 NaN 1 NaN\n2 1.0 1 1.0\n3 NaN 1 1.0\n4 NaN 1 1.0\n5 1.0 1 1.0\n6 NaN 1 NaN\n7 NaN 1 NaN\n8 1.0 2 1.0\n9 NaN 2 1.0\n10 1.0 2 1.0\n11 NaN 2 NaN\n\nIntermediates:\n var1 group var2 ffill bfill\n0 NaN 1 NaN NaN 1.0\n1 NaN 1 NaN NaN 1.0\n2 1.0 1 1.0 1.0 1.0\n3 NaN 1 1.0 1.0 1.0\n4 NaN 1 1.0 1.0 1.0\n5 1.0 1 1.0 1.0 1.0\n6 NaN 1 NaN 1.0 NaN\n7 NaN 1 NaN 1.0 NaN\n8 1.0 2 1.0 1.0 1.0\n9 NaN 2 1.0 1.0 1.0\n10 1.0 2 1.0 1.0 1.0\n11 NaN 2 NaN 1.0 NaN\n\n"
] | [
2
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074628433_dataframe_pandas_python.txt |
Q:
Can we add an extension using python sdk compute_client.virtual_machines.begin_create_or_update to Azure VM it creates?
Morning,
I have a need to have the AADLoginForLinux extension added to the vms I spin up with the python sdk compute_client.virtual_machines.begin_create_or_update call.
I see I could maybe do a rest call to add extensions, but I was wondering if it could be done with the sdk call instead? Anybody have a sample/example of adding an extension like this?
A:
I tried to reproduce the same in my environment and got the below results:
I created an Azure Virtual Machine using the below code:
import os
from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.compute import ComputeManagementClient

credential = AzureCliCredential()
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"] = "XXXXXXXX"
resource_client = ResourceManagementClient(credential, subscription_id)
RESOURCE_GROUP_NAME = "Imran"
LOCATION = "eastus"
rg_result = resource_client.resource_groups.create_or_update(RESOURCE_GROUP_NAME,
{
"location": LOCATION
}
)
VNET_NAME = "testvnet"
SUBNET_NAME = "subnet1"
IP_NAME = "IP"
IP_CONFIG_NAME = "ipconfig"
NIC_NAME = "testnic"
network_client = NetworkManagementClient(credential, subscription_id)
poller = network_client.virtual_networks.begin_create_or_update(RESOURCE_GROUP_NAME,
VNET_NAME,
{
"location": LOCATION,
"address_space": {
"address_prefixes": ["10.0.0.0/16"]
}
}
)
vnet_result = poller.result()
poller = network_client.subnets.begin_create_or_update(RESOURCE_GROUP_NAME,
VNET_NAME, SUBNET_NAME,
{ "address_prefix": "10.0.0.0/24" }
)
subnet_result = poller.result()
print(f"Provisioned virtual subnet {subnet_result.name} with address prefix {subnet_result.address_prefix}")
poller = network_client.public_ip_addresses.begin_create_or_update(RESOURCE_GROUP_NAME,
IP_NAME,
{
"location": LOCATION,
"sku": { "name": "Standard" },
"public_ip_allocation_method": "Static",
"public_ip_address_version" : "IPV4"
}
)
ip_address_result = poller.result()
poller = network_client.network_interfaces.begin_create_or_update(RESOURCE_GROUP_NAME,
NIC_NAME,
{
"location": LOCATION,
"ip_configurations": [ {
"name": testconfig,
"subnet": { "id": subnet_result.id },
"public_ip_address": {"id": ip_address_result.id }
}]
}
)
nic_result = poller.result()
compute_client = ComputeManagementClient(credential, subscription_id)
VM_NAME = "linuxvm"
USERNAME = "****"
PASSWORD = "****"
poller = compute_client.virtual_machines.begin_create_or_update(RESOURCE_GROUP_NAME, VM_NAME,
{
"location": LOCATION,
"storage_profile": {
"image_reference": {
"publisher": 'Canonical',
"offer": "UbuntuServer",
"sku": "16.04.0-LTS",
"version": "latest"
}
},
"hardware_profile": {
"vm_size": "Standard_DS1_v2"
},
"os_profile": {
"computer_name": VM_NAME,
"admin_username": USERNAME,
"admin_password": PASSWORD
},
"network_profile": {
"network_interfaces": [{
"id": nic_result.id,
}]
}
}
)
vm_result = poller.result()
print(f"Provisioned virtual machine {vm_result.name}")
The Azure virtual machine was created successfully.
To add the extension to the Azure virtual machine, make use of the VirtualMachineExtensionsOperations class, in particular its begin_create_or_update method:
VirtualMachineExtensionsOperations(*args, **kwargs)

begin_create_or_update(
    resource_group_name: str,
    vm_name: str,
    vm_extension_name: str,
    extension_parameters: _models.VirtualMachineExtension,
    *,
    content_type: str = "application/json",
    **kwargs: Any
) -> LROPoller[_models.VirtualMachineExtension]
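A minimal sketch of that call for the AADLoginForLinux extension asked about in the question is below. The publisher/type/handler-version strings and the keyword names are from memory and may differ between azure-mgmt-compute versions (some releases use virtual_machine_extension_type instead of type_properties_type), so verify them, for example with az vm extension image list, before relying on this:
# Runs after the VM poller above has completed; reuses compute_client,
# RESOURCE_GROUP_NAME, VM_NAME and LOCATION from the code above.
ext_poller = compute_client.virtual_machine_extensions.begin_create_or_update(
    RESOURCE_GROUP_NAME,
    VM_NAME,
    "AADLoginForLinux",  # name of the extension resource on the VM
    {
        "location": LOCATION,
        "publisher": "Microsoft.Azure.ActiveDirectory.LinuxSSH",  # assumed publisher
        "type_properties_type": "AADLoginForLinux",               # assumed extension type
        "type_handler_version": "1.0",                            # assumed version
        "auto_upgrade_minor_version": True,
    },
)
ext_result = ext_poller.result()
print(f"Provisioned extension {ext_result.name}")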
| Can we add an extension using python sdk compute_client.virtual_machines.begin_create_or_update to Azure VM it creates? | Morning,
I have a need to have the AADLoginForLinux extension added to the vms I spin up with the python sdk compute_client.virtual_machines.begin_create_or_update call.
I see I could maybe do a rest call to add extensions, but I was wondering if it could be done with the sdk call instead? Anybody have a sample/example of adding an extension like this?
| [
"I tried to reproduce the same in my environment and got the below results:\nI created an Azure Virtual Machine using the below code:\ncredential = AzureCliCredential()\nsubscription_id = os.environ[\"AZURE_SUBSCRIPTION_ID\"] = \"XXXXXXXX\"\nresource_client = ResourceManagementClient(credential, subscription_id)\nRESOURCE_GROUP_NAME = \"Imran\"\nLOCATION = \"eastus\"\nrg_result = resource_client.resource_groups.create_or_update(RESOURCE_GROUP_NAME,\n{\n\"location\": LOCATION\n}\n)\n\nVNET_NAME = \"testvnet\"\nSUBNET_NAME = \"subnet1\"\nIP_NAME = \"IP\"\nIP_CONFIG_NAME = \"ipconfig\"\nNIC_NAME = \"testnic\"\nnetwork_client = NetworkManagementClient(credential, subscription_id)\npoller = network_client.virtual_networks.begin_create_or_update(RESOURCE_GROUP_NAME,\nVNET_NAME,\n{\n\"location\": LOCATION,\n\"address_space\": {\n\"address_prefixes\": [\"10.0.0.0/16\"]\n}\n}\n)\nvnet_result = poller.result() \npoller = network_client.subnets.begin_create_or_update(RESOURCE_GROUP_NAME,\nVNET_NAME, SUBNET_NAME,\n{ \"address_prefix\": \"10.0.0.0/24\" }\n)\nsubnet_result = poller.result(\nprint(f\"Provisioned virtual subnet {subnet_result.name} with address prefix {subnet_result.address_prefix}\")\npoller = network_client.public_ip_addresses.begin_create_or_update(RESOURCE_GROUP_NAME,\nIP_NAME,\n{\n\"location\": LOCATION,\n\"sku\": { \"name\": \"Standard\" },\n\"public_ip_allocation_method\": \"Static\",\n\"public_ip_address_version\" : \"IPV4\"\n}\n) \nip_address_result = poller.result() \npoller = network_client.network_interfaces.begin_create_or_update(RESOURCE_GROUP_NAME,\nNIC_NAME,\n{\n\"location\": LOCATION,\n\"ip_configurations\": [ {\n\"name\": testconfig,\n\"subnet\": { \"id\": subnet_result.id },\n\"public_ip_address\": {\"id\": ip_address_result.id }\n}]\n}\n)\nnic_result = poller.result()\ncompute_client = ComputeManagementClient(credential, subscription_id)\nVM_NAME = \"linuxvm\"\nUSERNAME = \"****\"\nPASSWORD = \"****\" \npoller = compute_client.virtual_machines.begin_create_or_update(RESOURCE_GROUP_NAME, VM_NAME,\n{\n\"location\": LOCATION,\n\"storage_profile\": {\n\"image_reference\": {\n\"publisher\": 'Canonical',\n\"offer\": \"UbuntuServer\",\n\"sku\": \"16.04.0-LTS\",\n\"version\": \"latest\"\n}\n},\n\"hardware_profile\": {\n\"vm_size\": \"Standard_DS1_v2\"\n},\n\"os_profile\": {\n\"computer_name\": VM_NAME,\n\"admin_username\": USERNAME,\n\"admin_password\": PASSWORD\n},\n\"network_profile\": {\n\"network_interfaces\": [{\n\"id\": nic_result.id,\n}]\n}\n}\n) \nvm_result = poller.result() \nprint(f\"Provisioned virtual machine {vm_result.name}\")\n\nAzure Virtual Machine got created successfully like below:\n\nTo add the Extension while creating Azure Virtual Machine, make use of VirtualMachineExtensionsOperations Class like below:\nVirtualMachineExtensionsOperations(*args, **kwargs) \n\nbegin_create_or_update\n(resource_group_name: str, \nvm_name: str, vm_extension_name: str, extension_parameters: _models.VirtualMachineExtension, *, \ncontent_type: str = \"'application/json'\", **kwargs: Any) -> \nLROPoller[_models.VirtualMachineExtension]\n\n"
] | [
1
] | [] | [] | [
"azure",
"python",
"sdk",
"virtual_machine"
] | stackoverflow_0074477495_azure_python_sdk_virtual_machine.txt |
Q:
By Knowing class name, how to get class key with its default value | PYTHON
I have multiple classes in one file (a.py). In another file (b.py) I know some class names. In that same file b.py I need the complete keys and default values of the class.
a.py file:
class CarCompany(BaseModel):
audi: Optional[str] = 'Good'
bmw: Optional[str] = 'Good'
    tata: Optional[str] = 'Good'
class CarYear(BaseModel):
2017: Optional[str] = 'Good'
2018: Optional[str] = 'top sell'
    2020: Optional[str] = 'Average'
    2021: Optional[str] = 'top sell'
b.py file:
Some python code gets the _____ class and needs to send its default value.
I can get the class name as below...
final_class.__class__.__name__
Now, how do I get the class variables and their default values?
The result below has a format like:
final_var.__class__.__dict__
'__fields__': {
'2017': ModelField(name = '2017', type = Optional[str], required = False,
default = Good),
'2018': ModelField(name = '2018', type = Optional[str], required = False,
default = top sell),
'2020': ModelField(name = '2020', type = Optional[str], required = False,
default = Average),
'2022': ModelField(name = '2022', type = Optional[str], required = False,
default = top sell)
}
Can anyone help me get the requested format?
Thanks
I tried the above format and code in Python.
A:
I got the solution by
var_dict = final_var.__class__.__dict__
all_fields = var_dict['__fields__']
default_val = next((v.__getattribute__('default') for i, v in all_fields.items()), None)
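For reference, a slightly fuller sketch (assuming pydantic v1, where __fields__ maps names to ModelField objects) that collects every field's default instead of only the first one:
from typing import Optional
from pydantic import BaseModel

class CarCompany(BaseModel):
    audi: Optional[str] = 'Good'
    bmw: Optional[str] = 'Good'
    tata: Optional[str] = 'Good'

final_var = CarCompany()

# each ModelField exposes its default directly via .default
defaults = {name: field.default for name, field in final_var.__class__.__fields__.items()}
print(defaults)  # {'audi': 'Good', 'bmw': 'Good', 'tata': 'Good'}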
| By Knowing class name, how to get class key with its default value | PYTHON | I have multiple class in one file(a.py). On other file (b.py) i know some class name. In same file b.py i need the complete key, value of the class.
a.py file:
class CarCompany(BaseModel):
audi: Optional[str] = 'Good'
bmw: Optional[str] = 'Good'
tata: Optional[str] = 'Good'`
class CarYear(BaseModel):
2017: Optional[str] = 'Good'
2018: Optional[str] = 'top sell'
2020: Optional[str] = 'Average'`
2021: Optional[str] = 'top sell'`
b.py file:
Some python code gives that get _____ class and send default value.
I know class name by below...
final_class.__class__.__name__
Now how to get the class variable and its default value.
Below one have some format like:
final_var.__class__.__dict__
'__fields__': {
'2017': ModelField(name = '2017', type = Optional[str], required = False,
default = Good),
'2018': ModelField(name = '2018', type = Optional[str], required = False,
default = top sell),
'2020': ModelField(name = '2020', type = Optional[str], required = False,
default = Average),
'2022': ModelField(name = '2022', type = Optional[str], required = False,
default = top sell)
}
Can anyone help me to get the requested format.
Thanks
I tried above format and code in python
| [
"I got the solution by\nvar_dict = final_var.__class__.__dict__\nall_fields = var_dict['__fields__']\n\ndefault_val = next((v.__getattribute__('default') for i, v in all_fields.items()), None)\n\n"
] | [
0
] | [] | [] | [
"pydantic",
"python",
"python_3.x"
] | stackoverflow_0074598965_pydantic_python_python_3.x.txt |
Q:
displaying grid of images in jupyter notebook
I have a dataframe with a column containing 495 rows of URLs. I want to display these URLs in jupyter notebook as a grid of images. The first row of the dataframe is shown here. Any help is appreciated.
id latitude longitude owner title url
23969985288 37.721238 -123.071023 7679729@N07 There she blows! https://farm5.staticflickr.com/4491/2396998528...
I have tried the following way,
from IPython.core.display import display, HTML
for index, row in data1.iterrows():
display(HTML("<img src='%s'>"%(i["url"])))
However, running the above code displays this message:
> TypeError Traceback (most recent call last)
<ipython-input-117-4c2081563c17> in <module>()
1 from IPython.core.display import display, HTML
2 for index, row in data1.iterrows():
----> 3 display(HTML("<img src='%s'>"%(i["url"])))
TypeError: string indices must be integers
A:
Your idea of using IPython.core.display with HTML is imho the best approach for that kind of task. matplotlib is super inefficient when it comes to plotting such a huge number of images (especially if you have them as URLs).
There's a small package I built based on that concept - it's called ipyplot
import ipyplot
images = data1['url'].values
labels = data1['id'].values
ipyplot.plot_images(images, labels, img_width=150)
You would get a plot similar to this:
A:
The best way to show a grid of images in the Jupyter notebook is probably using matplotlib to create the grid, since you can also plot images on matplotlib axes using imshow.
I'm using a 3x165 grid, since that is 495 exactly. Feel free to mess around with that to change the dimensions of the grid.
import urllib.request
import matplotlib.pyplot as plt

fig, axarr = plt.subplots(3, 165)
curr_row = 0
for index, row in data1.iterrows():
    # fetch the url as a file type object, then read the image
    resp = urllib.request.urlopen(row["url"])
    a = plt.imread(resp)
# find the column by taking the current index modulo 3
col = index % 3
# plot on relevant subplot
axarr[col,curr_row].imshow(a)
if col == 2:
# we have finished the current row, so increment row counter
curr_row += 1
A:
Just add this code below to your cell:
display(IPython.display.HTML('''
<flexthis></flexthis>
<style> div.jp-Cell-outputArea:has(flexthis) {
display:flex; flex-wrap: wrap;
}</style>
'''))
and then, on the same cell, use calls to IPython.display.Image or anything else you want to show.
This code above allows you to add custom CSS to that specific output cell. I'm using a flex box (display:flex), but you can change it to your liking.
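A minimal usage sketch to pair with that CSS, assuming the same data1 dataframe with a url column (the head(12) limit is just for the demo):
import IPython.display

for url in data1["url"].head(12):
    IPython.display.display(IPython.display.Image(url=url, width=150))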
| displaying grid of images in jupyter notebook | I have a dataframe with a column containing 495 rows of URLs. I want to display these URLs in jupyter notebook as a grid of images. The first row of the dataframe is shown here. Any help is appreciated.
id latitude longitude owner title url
23969985288 37.721238 -123.071023 7679729@N07 There she blows! https://farm5.staticflickr.com/4491/2396998528...
I have tried the following way,
from IPython.core.display import display, HTML
for index, row in data1.iterrows():
display(HTML("<img src='%s'>"%(i["url"])))
However, running of above code displays message
> TypeError Traceback (most recent call last)
<ipython-input-117-4c2081563c17> in <module>()
1 from IPython.core.display import display, HTML
2 for index, row in data1.iterrows():
----> 3 display(HTML("<img src='%s'>"%(i["url"])))
TypeError: string indices must be integers
| [
"Your idea of using IPython.core.display with HTML is imho the best approach for that kind of task. matplotlib is super inefficient when it comes to plotting such a huge number of images (especially if you have them as URLs).\nThere's a small package I built based on that concept - it's called ipyplot\nimport ipyplot\n\nimages = data1['url'].values\nlabels = data1['id'].values\n\nipyplot.plot_images(images, labels, img_width=150)\n\nYou would get a plot similar to this:\n\n",
"The best way to show a grid of images in the Jupyter notebook is probably using matplotlib to create the grid, since you can also plot images on matplotlib axes using imshow.\nI'm using a 3x165 grid, since that is 495 exactly. Feel free to mess around with that to change the dimensions of the grid.\nimport urllib\nf, axarr = plt.subplots(3, 165)\ncurr_row = 0\nfor index, row in data1.iterrows():\n # fetch the url as a file type object, then read the image\n f = urllib.request.urlopen(row[\"url\"])\n a = plt.imread(f)\n\n # find the column by taking the current index modulo 3\n col = index % 3\n # plot on relevant subplot\n axarr[col,curr_row].imshow(a)\n if col == 2:\n # we have finished the current row, so increment row counter\n curr_row += 1\n\n",
"Just add this code below to your cell:\ndisplay(IPython.display.HTML('''\n <flexthis></flexthis>\n <style> div.jp-Cell-outputArea:has(flexthis) {\n display:flex; flex-wrap: wrap;\n }</style>\n'''))\n\nand then, on the same cell, use calls to IPython.display.Image or anything else you want to show.\n\nThis code above allows you to add custom CSS to that specific output cell. I'm using a flex box (display:flex), but you can change it to your liking.\n"
] | [
15,
7,
0
] | [
"I can only do it by \"brute force\":\nHowever, I only manage to do it manually:\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\n\n%matplotlib inline\n\nimg1=mpimg.imread('Variable_8.png')\nimg2=mpimg.imread('Variable_17.png')\nimg3=mpimg.imread('Variable_18.png')\n ...\n\nfig, ((ax1, ax2, ax3), (ax4,ax5,ax6)) = plt.subplots(2, 3, sharex=True, sharey=True) \n\nax1.imshow(img1)\nax1.axis('off')\nax2.imshow(img2)\nax2.axis('off')\n ....\n\nDo not know if ot helps\n"
] | [
-2
] | [
"html",
"jupyter_notebook",
"matplotlib",
"python"
] | stackoverflow_0047508168_html_jupyter_notebook_matplotlib_python.txt |
Q:
django gives me error 404 when i try to use unicode urls
There is a problem when Django uses Arabic slugs. It accepts them, but when you go to their URL it can't find a matching query in the database for them.
It gives me a 404.
This is the urls.py and my url:
from django.urls import path , re_path
from django.contrib.sitemaps import GenericSitemap
from .models import Course
from django.contrib.sitemaps.views import sitemap
from .views import *
app_name = 'course'
info_dict = {
'queryset': Course.objects.all(),
}
urlpatterns = [
re_path(r'detail/(?P<slug>[\w_-]+)/$' , detail_course , name='detail_courses'),
path('sitemap.xml', sitemap, {'sitemaps': {'blog': GenericSitemap(info_dict, priority=0.6)}}, name='django.contrib.sitemaps.views.sitemap'),
]
and this is the url that I try to enter:
http://127.0.0.1:8000/course/detail/%D8%AA%D8%AD%D9%84%DB%8C%D9%84_%D8%A8%DB%8C%D8%AA_%DA%A9%D9%88%DB%8C%D9%86/
root urls.py :
from django.contrib import admin
from django.urls import path , include
urlpatterns = [
path('admin/', admin.site.urls),
path('accounts/' , include('accounts.urls')),
path('course/' , include('courses.urls')),
path('orders/' , include('order.urls')),
path('' , include('home.urls')),
]
What is the problem?
A:
The problem is not the Arabic characters, but the underscore. You can include it with:
re_path(r'detail/(?P<slug>[\w_-]+)/$', detail_course, name='detail_courses')
That being said, normally a slug does not contain an underscore, so your slugging algorithm does not seem to work properly.
You can use Django's slugify(…) function [Django-doc] for this:
print(slugify(u'تحلیل_بیت_کوین', allow_unicode=True))
تحلیل_بیت_کوین
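A sketch of generating such a slug on the model side; the field names here are assumptions, since the real Course model is not shown:
from django.db import models
from django.utils.text import slugify

class Course(models.Model):
    title = models.CharField(max_length=200)
    slug = models.SlugField(max_length=200, allow_unicode=True, unique=True)

    def save(self, *args, **kwargs):
        if not self.slug:
            # keep the Arabic characters instead of stripping them
            self.slug = slugify(self.title, allow_unicode=True)
        super().save(*args, **kwargs)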
| django gives me error 404 when i try to use unicode urls | there is a problem when django uses Arabic slugs . It can accepts them . But when you go for its url . It can't find a matching query in database for them .
It gives me 404 .
this is the urls.py and my url :
from django.urls import path , re_path
from django.contrib.sitemaps import GenericSitemap
from .models import Course
from django.contrib.sitemaps.views import sitemap
from .views import *
app_name = 'course'
info_dict = {
'queryset': Course.objects.all(),
}
urlpatterns = [
re_path(r'detail/(?P<slug>[\w_-]+)/$' , detail_course , name='detail_courses'),
path('sitemap.xml', sitemap, {'sitemaps': {'blog': GenericSitemap(info_dict, priority=0.6)}}, name='django.contrib.sitemaps.views.sitemap'),
]
and its the url that i try to enter :
http://127.0.0.1:8000/course/detail/%D8%AA%D8%AD%D9%84%DB%8C%D9%84_%D8%A8%DB%8C%D8%AA_%DA%A9%D9%88%DB%8C%D9%86/
root urls.py :
from django.contrib import admin
from django.urls import path , include
urlpatterns = [
path('admin/', admin.site.urls),
path('accounts/' , include('accounts.urls')),
path('course/' , include('courses.urls')),
path('orders/' , include('order.urls')),
path('' , include('home.urls')),
]
what is its problem ?
| [
"The problem is not the Arabic characters, but the underscore. You can include it with:\nre_path(r'detail/(?P<slug>[\\w_-]+)/$', detail_course, name='detail_courses')\nThat being said, normally a slug does not contain an underscore, so your slugging algorithm does not seem to work properly.\nYou can use Django's slugify(…) function [Django-doc] for this:\nprint(slugify(u'تحلیل_بیت_کوین', allow_unicode=True))\nتحلیل_بیت_کوین\n\n"
] | [
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0074628492_django_python.txt |
Q:
Python pytest hangs. For instance, "pytest --version" simply hangs
The following hangs:
PS C:\Users\Fowler> pytest --version
Notes:
I am in Windows 10.
By hang, I mean at least 5 minutes of waiting for the pytest --version to return...
While waiting for pytest, python.exe is using 100% of a logical processor on my computer.
I uninstalled all Python installations with the Windows installer and I reinstalled Python 3.8.0 in an attempt to fix it.
pytest only fails when I am not using a venv. So, pytest does work using a venv.
However, I can't use a venv with vscode, because debugging with venv gives a strange "Session-1 timed out waiting for debuggee to spawn" <-- you would think the word debuggee would be a nice clue, but not much found with that word on google. I am guessing this is a different problem, but maybe related?
In summary, I can't debug python with a venv, and I can't run pytest unit tests without a venv. Probably, these items are unrelated... But, because of this catch-22, I will be sooo grateful for any hints to fix either problem.
When I hit <ctrl-c> to break out of the pytest "hang", the following is displayed (but it changes a little bit at the end each time):
Traceback (most recent call last):
File "c:\program files\python38\lib\runpy.py", line 192, in _run_module_as_main
return _run_code(code, main_globals, None,
File "c:\program files\python38\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\Scripts\pytest.exe\__main__.py", line 7, in <module>
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\_pytest\config\__init__.py", line 72, in main
config = _prepareconfig(args, plugins)
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\_pytest\config\__init__.py", line 222, in _prepareconfig
return pluginmanager.hook.pytest_cmdline_parse(
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\pluggy\hooks.py", line 286, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\pluggy\manager.py", line 93, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\pluggy\manager.py", line 84, in <lambda>
self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\pluggy\callers.py", line 203, in _multicall
gen.send(outcome)
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\_pytest\helpconfig.py", line 89, in pytest_cmdline_parse
config = outcome.get_result()
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\pluggy\callers.py", line 80, in get_result
raise ex[1].with_traceback(ex[2])
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\pluggy\callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\_pytest\config\__init__.py", line 742, in pytest_cmdline_parse
self.parse(args)
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\_pytest\config\__init__.py", line 948, in parse
self._preparse(args, addopts=addopts)
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\_pytest\config\__init__.py", line 896, in _preparse
self.pluginmanager.load_setuptools_entrypoints("pytest11")
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\pluggy\manager.py", line 299, in load_setuptools_entrypoints
plugin = ep.load()
File "c:\program files\python38\lib\importlib\metadata.py", line 75, in load
module = import_module(match.group('module'))
File "c:\program files\python38\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\_pytest\assertion\rewrite.py", line 138, in exec_module
_write_pyc(state, co, source_stat, pyc)
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\_pytest\assertion\rewrite.py", line 274, in _write_pyc
with atomic_write(fspath(pyc), mode="wb", overwrite=True) as fp:
File "c:\program files\python38\lib\contextlib.py", line 113, in __enter__
return next(self.gen)
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\atomicwrites\__init__.py", line 156, in _open
with get_fileobject(**self._open_kwargs) as f:
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\atomicwrites\__init__.py", line 173, in get_fileobject
descriptor, name = tempfile.mkstemp(suffix=suffix, prefix=prefix,
File "c:\program files\python38\lib\tempfile.py", line 332, in mkstemp
return _mkstemp_inner(dir, prefix, suffix, flags, output_type)
File "c:\program files\python38\lib\tempfile.py", line 247, in _mkstemp_inner
file = _os.path.join(dir, pre + name + suf)
KeyboardInterrupt
The next time I try to run pytest --version and hit <ctrl-c>, it ends with:
Traceback (most recent call last):
File "c:\program files\python38\lib\runpy.py", line 192, in _run_module_as_main
return _run_code(code, main_globals, None,
...
...
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\atomicwrites\__init__.py", line 173, in get_fileobject
descriptor, name = tempfile.mkstemp(suffix=suffix, prefix=prefix,
File "c:\program files\python38\lib\tempfile.py", line 332, in mkstemp
return _mkstemp_inner(dir, prefix, suffix, flags, output_type)
File "c:\program files\python38\lib\tempfile.py", line 248, in _mkstemp_inner
_sys.audit("tempfile.mkstemp", file)
KeyboardInterrupt
The next time I try to run pytest --version and hit <ctrl-c>, it ends with:
Traceback (most recent call last):
File "c:\program files\python38\lib\runpy.py", line 192, in _run_module_as_main
return _run_code(code, main_globals, None,
...
...
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\atomicwrites\__init__.py", line 173, in get_fileobject
descriptor, name = tempfile.mkstemp(suffix=suffix, prefix=prefix,
File "c:\program files\python38\lib\tempfile.py", line 332, in mkstemp
return _mkstemp_inner(dir, prefix, suffix, flags, output_type)
File "c:\program files\python38\lib\tempfile.py", line 256, in _mkstemp_inner
if (_os.name == 'nt' and _os.path.isdir(dir) and
File "c:\program files\python38\lib\genericpath.py", line 42, in isdir
st = os.stat(s)
KeyboardInterrupt
The next time I try to run pytest --version and hit <ctrl-c>, it ends with:
Traceback (most recent call last):
File "c:\program files\python38\lib\runpy.py", line 192, in _run_module_as_main
return _run_code(code, main_globals, None,
...
...
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\atomicwrites\__init__.py", line 173, in get_fileobject
descriptor, name = tempfile.mkstemp(suffix=suffix, prefix=prefix,
File "c:\program files\python38\lib\tempfile.py", line 332, in mkstemp
return _mkstemp_inner(dir, prefix, suffix, flags, output_type)
File "c:\program files\python38\lib\tempfile.py", line 250, in _mkstemp_inner
fd = _os.open(file, flags, 0o600)
KeyboardInterrupt
I don't know if this output will help, but I thought it might be useful to see all the locations on my machine where python and/or pytest are installed:
PS C:\Users\Fowler> where.exe /r c:\ python
c:\Program Files\Amazon\AWSCLI\runtime\python.exe
c:\Program Files\Amazon\AWSSAMCLI\runtime\python.exe
c:\Program Files\MySQL\MySQL Workbench 8.0 CE\python.exe
c:\Program Files\Python38\python.exe
c:\Program Files\Python38\Lib\venv\scripts\nt\python.exe
c:\Users\Fowler\.vscode\extensions\lextudio.restructuredtext-116.0.0\out\python.js
c:\Users\Fowler\.vscode\extensions\teabyii.ayu-0.18.0\test\Python.py
c:\Users\Fowler\.vscode\extensions\yzane.markdown-pdf-1.4.1\node_modules\highlight.js\lib\languages\python.js
c:\Users\Fowler\AppData\Local\GitHubDesktop\app-2.2.2\resources\app\highlighter\mode\python.js
c:\Users\Fowler\AppData\Local\GitHubDesktop\app-2.2.3\resources\app\highlighter\mode\python.js
c:\Users\Fowler\AppData\Local\Google\Chrome\User Data\Default\Extensions\ngkhgikojglcgnckopipfdajaifmmnnc\4.1.34_0\python.js
c:\Users\Fowler\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu18.04onWindows_79rhkp1fndgsc\LocalState\rootfs\etc\apparmor.d\abstractions\python
c:\Users\Fowler\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu18.04onWindows_79rhkp1fndgsc\LocalState\rootfs\usr\share\bash-completion\completions\python
c:\Users\Fowler\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu18.04onWindows_79rhkp1fndgsc\LocalState\rootfs\usr\share\bash-completion\helpers\python
c:\Users\Fowler\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu18.04onWindows_79rhkp1fndgsc\LocalState\rootfs\usr\share\sosreport\sos\plugins\python.py
c:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\aniso8601\builders\python.py
c:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\_pytest\python.py
c:\Users\Fowler\Documents\vscodeProjects\playarea\.venv\Scripts\python.exe
c:\Windows\Installer\$PatchCache$\Managed\8B9C64EBE8DD53846B6846E46A14F5EE\3.7.2150\python.exe
c:\Windows\Installer\$PatchCache$\Managed\9CB0624238F6F8F469EAD6566412DD7F\3.7.2150\python.exe
PS C:\Users\Fowler> where.exe /r c:\ pytest
c:\Users\Fowler\AppData\Roaming\Python\Python38\Scripts\pytest.exe
c:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\pytest.py
And finally! Whew, in case this sheds any light, here is a picture showing the python process having fun eating up my CPU during pytest.py...
I would be oh, so grateful for any assistance or thoughts!
A:
Fixed.
The answer appears to be
Uninstall python via the windows apps and features
Remove the c:\program files\python38 directory
Remove the ..\AppData\Roaming\Python directory
Reinstall
Not sure what the "root" problem was, but a total wipe of python fixed it. Note that the python windows installer does not remove enough python stuff.
Thank you @nneonneo for getting me thinking in the right direction.
A:
You can just run pytest from a privileged (admin) cmd for the first time. For me the issue was fixed and pytest runs from an ordinary cmd now without hanging.
A:
I had a similar problem:
after I install pytest-cov or allure-pytest, pytest will hang;
if I uninstall them, everything is back to normal.
I tried installing different pytest versions, but it didn't help.
However, I tried the method you provided (uninstall Python, remove the directories), and it works, great!
But when I install allure-pytest again, it fails again.
Finally, I found that I can't install packages from a privileged cmd window.
If I install them from a privileged cmd window, it will make pytest hang!
So, if anyone encounters the problem I have, you can try my way.
A:
In my case, the package anyio, a dependency of JupyterLab, was causing this problem. It was installed at the administrator level. I uninstalled it and reinstalled at the user level and pytest started working again.
A:
Using Ubuntu 18.04/20.04 + Python 3.8.2
My tests were hanging in a similar matter after showing results.
Removing pytest-cov didn't help.
What helped was moving from Python 3.8.2 to a newer version (like 3.8.12)
A:
I ran into the same issue. In my case my IDE didn't load the correct conda environment. So make sure you're in the right environment for your application and use conda activate myEnv to change it if necessary.
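A short diagnostic sketch along the lines of the answers above, run from PowerShell (the plugin names are examples from these answers, not a definitive list):
pip show pytest              # confirms which site-packages pytest is loaded from
pip list                     # look for plugins such as pytest-cov, allure-pytest, anyio
pip uninstall pytest-cov allure-pytest anyio
pip install --user pytest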
| Python pytest hangs. For instance, "pytest --version" simply hangs | The following hangs:
PS C:\Users\Fowler> pytest --version
Notes:
I am in Windows 10.
By hang, I mean at least 5 minutes of waiting for the pytest --version to return...
While waiting for pytest, python.exe is using 100% of a logical processor on my computer.
I uninstalled all python installations with windows installer and I reinistalled python 3.8.0 in an attempt to fix.
pytest only fails when I am not using a venv. So, pytest does work using a venv.
However, I can't use a venv with vscode, because debugging with venv gives a strange "Session-1 timed out waiting for debuggee to spawn" <-- you would think the word debuggee would be a nice clue, but not much found with that word on google. I am guessing this is a different problem, but maybe related?
In summary, I can't debug python with a venv, and I can't run pytest unit tests without a venv. Probably, these items are unrelated... But, because of this catch-22, I will be sooo grateful for any hints to fix either problem.
When I hit <ctrl-c> to break out of the pytest "hang", the following is displayed (but changes a little bit at the end each time?:
Traceback (most recent call last):
File "c:\program files\python38\lib\runpy.py", line 192, in _run_module_as_main
return _run_code(code, main_globals, None,
File "c:\program files\python38\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\Scripts\pytest.exe\__main__.py", line 7, in <module>
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\_pytest\config\__init__.py", line 72, in main
config = _prepareconfig(args, plugins)
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\_pytest\config\__init__.py", line 222, in _prepareconfig
return pluginmanager.hook.pytest_cmdline_parse(
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\pluggy\hooks.py", line 286, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\pluggy\manager.py", line 93, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\pluggy\manager.py", line 84, in <lambda>
self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\pluggy\callers.py", line 203, in _multicall
gen.send(outcome)
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\_pytest\helpconfig.py", line 89, in pytest_cmdline_parse
config = outcome.get_result()
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\pluggy\callers.py", line 80, in get_result
raise ex[1].with_traceback(ex[2])
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\pluggy\callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\_pytest\config\__init__.py", line 742, in pytest_cmdline_parse
self.parse(args)
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\_pytest\config\__init__.py", line 948, in parse
self._preparse(args, addopts=addopts)
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\_pytest\config\__init__.py", line 896, in _preparse
self.pluginmanager.load_setuptools_entrypoints("pytest11")
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\pluggy\manager.py", line 299, in load_setuptools_entrypoints
plugin = ep.load()
File "c:\program files\python38\lib\importlib\metadata.py", line 75, in load
module = import_module(match.group('module'))
File "c:\program files\python38\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\_pytest\assertion\rewrite.py", line 138, in exec_module
_write_pyc(state, co, source_stat, pyc)
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\_pytest\assertion\rewrite.py", line 274, in _write_pyc
with atomic_write(fspath(pyc), mode="wb", overwrite=True) as fp:
File "c:\program files\python38\lib\contextlib.py", line 113, in __enter__
return next(self.gen)
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\atomicwrites\__init__.py", line 156, in _open
with get_fileobject(**self._open_kwargs) as f:
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\atomicwrites\__init__.py", line 173, in get_fileobject
descriptor, name = tempfile.mkstemp(suffix=suffix, prefix=prefix,
File "c:\program files\python38\lib\tempfile.py", line 332, in mkstemp
return _mkstemp_inner(dir, prefix, suffix, flags, output_type)
File "c:\program files\python38\lib\tempfile.py", line 247, in _mkstemp_inner
file = _os.path.join(dir, pre + name + suf)
KeyboardInterrupt
The next time try to run pytest --version and I hit <ctrl-c> it ends with:
Traceback (most recent call last):
File "c:\program files\python38\lib\runpy.py", line 192, in _run_module_as_main
return _run_code(code, main_globals, None,
...
...
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\atomicwrites\__init__.py", line 173, in get_fileobject
descriptor, name = tempfile.mkstemp(suffix=suffix, prefix=prefix,
File "c:\program files\python38\lib\tempfile.py", line 332, in mkstemp
return _mkstemp_inner(dir, prefix, suffix, flags, output_type)
File "c:\program files\python38\lib\tempfile.py", line 248, in _mkstemp_inner
_sys.audit("tempfile.mkstemp", file)
KeyboardInterrupt
The next time try to run pytest --version and I hit <ctrl-c> it ends with:
Traceback (most recent call last):
File "c:\program files\python38\lib\runpy.py", line 192, in _run_module_as_main
return _run_code(code, main_globals, None,
...
...
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\atomicwrites\__init__.py", line 173, in get_fileobject
descriptor, name = tempfile.mkstemp(suffix=suffix, prefix=prefix,
File "c:\program files\python38\lib\tempfile.py", line 332, in mkstemp
return _mkstemp_inner(dir, prefix, suffix, flags, output_type)
File "c:\program files\python38\lib\tempfile.py", line 256, in _mkstemp_inner
if (_os.name == 'nt' and _os.path.isdir(dir) and
File "c:\program files\python38\lib\genericpath.py", line 42, in isdir
st = os.stat(s)
KeyboardInterrupt
The next time try to run pytest --version and I hit <ctrl-c> it ends with:
Traceback (most recent call last):
File "c:\program files\python38\lib\runpy.py", line 192, in _run_module_as_main
return _run_code(code, main_globals, None,
...
...
File "C:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\atomicwrites\__init__.py", line 173, in get_fileobject
descriptor, name = tempfile.mkstemp(suffix=suffix, prefix=prefix,
File "c:\program files\python38\lib\tempfile.py", line 332, in mkstemp
return _mkstemp_inner(dir, prefix, suffix, flags, output_type)
File "c:\program files\python38\lib\tempfile.py", line 250, in _mkstemp_inner
fd = _os.open(file, flags, 0o600)
KeyboardInterrupt
I don't know if this output will help, but I thought it might be useful to see all the locations on my machine where python and/or pytest are installed:
PS C:\Users\Fowler> where.exe /r c:\ python
c:\Program Files\Amazon\AWSCLI\runtime\python.exe
c:\Program Files\Amazon\AWSSAMCLI\runtime\python.exe
c:\Program Files\MySQL\MySQL Workbench 8.0 CE\python.exe
c:\Program Files\Python38\python.exe
c:\Program Files\Python38\Lib\venv\scripts\nt\python.exe
c:\Users\Fowler\.vscode\extensions\lextudio.restructuredtext-116.0.0\out\python.js
c:\Users\Fowler\.vscode\extensions\teabyii.ayu-0.18.0\test\Python.py
c:\Users\Fowler\.vscode\extensions\yzane.markdown-pdf-1.4.1\node_modules\highlight.js\lib\languages\python.js
c:\Users\Fowler\AppData\Local\GitHubDesktop\app-2.2.2\resources\app\highlighter\mode\python.js
c:\Users\Fowler\AppData\Local\GitHubDesktop\app-2.2.3\resources\app\highlighter\mode\python.js
c:\Users\Fowler\AppData\Local\Google\Chrome\User Data\Default\Extensions\ngkhgikojglcgnckopipfdajaifmmnnc\4.1.34_0\python.js
c:\Users\Fowler\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu18.04onWindows_79rhkp1fndgsc\LocalState\rootfs\etc\apparmor.d\abstractions\python
c:\Users\Fowler\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu18.04onWindows_79rhkp1fndgsc\LocalState\rootfs\usr\share\bash-completion\completions\python
c:\Users\Fowler\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu18.04onWindows_79rhkp1fndgsc\LocalState\rootfs\usr\share\bash-completion\helpers\python
c:\Users\Fowler\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu18.04onWindows_79rhkp1fndgsc\LocalState\rootfs\usr\share\sosreport\sos\plugins\python.py
c:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\aniso8601\builders\python.py
c:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\_pytest\python.py
c:\Users\Fowler\Documents\vscodeProjects\playarea\.venv\Scripts\python.exe
c:\Windows\Installer\$PatchCache$\Managed\8B9C64EBE8DD53846B6846E46A14F5EE\3.7.2150\python.exe
c:\Windows\Installer\$PatchCache$\Managed\9CB0624238F6F8F469EAD6566412DD7F\3.7.2150\python.exe
PS C:\Users\Fowler> where.exe /r c:\ pytest
c:\Users\Fowler\AppData\Roaming\Python\Python38\Scripts\pytest.exe
c:\Users\Fowler\AppData\Roaming\Python\Python38\site-packages\pytest.py
And finally! Whew, in case this sheds any light, here is a picture showing the python process having fun eating up my CPU during pytest.py...
I would be oh, so grateful for any assistance or thoughts!
| [
"Fixed. \nThe answer appears to be \n\nUninstall python via the windows apps and features\nRemove the c:\\program files\\python38 directory\nRemove the ..\\AppData\\Roaming\\Python directory\nReinstall\n\nNot sure what the \"root\" problem was, but a total wipe of python fixed it. Note that the python windows installer does not remove enough python stuff.\nThank you @nneonneo for getting me thinking in the right direction.\n",
"You can just run pytest from priviledged (admin) cmd for the first time. For me the issue was fixed and pytest runs from ordinary cmd now without hanging.\n",
"I have the similar problem\nafter i install pytest-cov or allure-pytest, pytest will hang\nif i uninstall them , everything is back to normal\nI have tried to install different pytest version, It didn't help\nbut, I tried the method you provide(uninstall python, remove the diretories), it works~, great!\nbut, when I install allure-pytest again, it fails again.\nfinally, I found that, I can't install package in privilege cmd window\nIf I install them in privilege cmd windows, it will make pytest hang!\nso, if anyone encounter the problem I have, you can try my way.\n",
"In my case, the package anyio, a dependency of JupyterLab, was causing this problem. It was installed at the administrator level. I uninstalled it and reinstalled at the user level and pytest started working again.\n",
"Using Ubuntu 18.04/20.04 + Python 3.8.2\nMy tests were hanging in a similar matter after showing results.\nRemoving pytest-cov didn't help.\nWhat helped was moving from Python 3.8.2 to a newer version (like 3.8.12)\n",
"I ran into the same issue. In my case my IDE didn't load the correct conda environment. So make sure you're in the right environment for your application and use conda activate myEnv to change it if necessary.\n"
] | [
1,
1,
0,
0,
0,
0
] | [] | [] | [
"pytest",
"python"
] | stackoverflow_0059043307_pytest_python.txt |
Q:
How to visualize communities from a list in igraph python
I have a community list as the following list_community.
How do I edit the code below to make the community visible?
from igraph import *
import igraph as ig
list_community = [['A', 'B', 'C', 'D'],['E','F','G'],['G', 'H','I','J']]
list_nodes = ['A', 'B', 'C', 'D','E','F','G','H','I','J']
tuple_edges = [('A','B'),('A','C'),('A','D'),('B','C'),('B','D'), ('C','D'),('C','E'),
('E','F'),('E','G'),('F','G'),('G','H'),
('G','I'), ('G','J'),('H','I'),('H','J'),('I','J'),]
# Make a graph
g_test = Graph()
g_test.add_vertices(list_nodes)
g_test.add_edges(tuple_edges)
# Plot
layout = g_test.layout("kk")
g_test.vs["name"] = list_nodes
visual_style = {}
visual_style["vertex_label"] = g_test.vs["name"]
visual_style["layout"] = layout
ig.plot(g_test, **visual_style)
I would like a plot that visualizes the community as shown below.
I can also do this by using a module other than igraph.
Thank you.
A:
In igraph you can use the VertexCover to draw polygons around clusters (as also suggested by Szabolcs in his comment). You have to supply the option mark_groups when plotting the cover, possibly with some additional palette if you want. See some more detail in the documentation here.
In order to construct the VertexCover, you first have to make sure you get integer indices for each node in the graph you created. You can do that using g_test.vs.find.
clusters = [[g_test.vs.find(name=v).index for v in cl] for cl in list_community]
cover = ig.VertexCover(g_test, clusters)
After that, you can simply draw the cover like
ig.plot(cover,
mark_groups=True,
palette=ig.RainbowPalette(3))
resulting in the following picture
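If the vertex names and the original kk layout should stay visible, the same cover plot accepts the usual plotting keywords (a sketch reusing the cover built above):
ig.plot(cover,
        mark_groups=True,
        palette=ig.RainbowPalette(3),
        layout=g_test.layout("kk"),
        vertex_label=g_test.vs["name"])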
A:
Here is a script that somewhat achieves what you're looking for. I had to handle the cases of single- and two-node communities separately, but for greater than two nodes this draws a polygon within the nodes.
I had some trouble with matplotlib not accounting for overlapping edges and faces of polygons which meant the choice was between (1) not having the polygon surround the nodes or (2) having an extra outline just inside the edge of the polygon due to matplotlib overlapping the widened edge with the fill of the polygon. I left a comment on how to change the code from option (2) to option (1).
I also blatantly borrowed a convenience function from this post to handle correctly sorting the nodes in the polygon for appropriate filling by matplotlib's plt.fill().
Option 1:
Option 2:
Full code:
import networkx as nx
import matplotlib.pyplot as plt
import numpy as np
from matplotlib import cm
def sort_xy(x, y):
x0 = np.mean(x)
y0 = np.mean(y)
r = np.sqrt((x-x0)**2 + (y-y0)**2)
angles = np.where((y-y0) > 0, np.arccos((x-x0)/r), 2*np.pi-np.arccos((x-x0)/r))
mask = np.argsort(angles)
x_sorted = x[mask]
y_sorted = y[mask]
return x_sorted, y_sorted
G = nx.karate_club_graph()
pos = nx.spring_layout(G, seed=42)
fig, ax = plt.subplots(figsize=(8, 10))
nx.draw(G, pos=pos, with_labels=True)
communities = nx.community.louvain_communities(G)
alpha = 0.5
edge_padding = 10
colors = cm.get_cmap('viridis', len(communities))
for i, comm in enumerate(communities):
if len(comm) == 1:
cir = plt.Circle((pos[comm.pop()]), edge_padding / 100, alpha=alpha, color=colors(i))
ax.add_patch(cir)
elif len(comm) == 2:
comm_pos = {k: pos[k] for k in comm}
coords = [a for a in zip(*comm_pos.values())]
x, y = coords[0], coords[1]
plt.plot(x, y, linewidth=edge_padding, linestyle="-", alpha=alpha, color=colors(i))
else:
comm_pos = {k: pos[k] for k in comm}
coords = [a for a in zip(*comm_pos.values())]
x, y = sort_xy(np.array(coords[0]), np.array(coords[1]))
plt.fill(x, y, alpha=alpha, facecolor=colors(i),
edgecolor=colors(i), # set to None to remove edge padding
linewidth=edge_padding)
| How to visualize communities from a list in igraph python | I have a community list as the following list_community.
How do I edit the code below to make the community visible?
from igraph import *
list_community = [['A', 'B', 'C', 'D'],['E','F','G'],['G', 'H','I','J']]
list_nodes = ['A', 'B', 'C', 'D','E','F','G','H','I','J']
tuple_edges = [('A','B'),('A','C'),('A','D'),('B','C'),('B','D'), ('C','D'),('C','E'),
('E','F'),('E','G'),('F','G'),('G','H'),
('G','I'), ('G','J'),('H','I'),('H','J'),('I','J'),]
# Make a graph
g_test = Graph()
g_test.add_vertices(list_nodes)
g_test.add_edges(tuple_edges)
# Plot
layout = g_test.layout("kk")
g.vs["name"] = list_nodes
visual_style = {}
visual_style["vertex_label"] = g.vs["name"]
visual_style["layout"] = layout
ig.plot(g_test, **visual_style)
I would like a plot that visualizes the community as shown below.
I can also do this by using a module other than igraph.
Thank you.
| [
"In igraph you can use the VertexCover to draw polygons around clusters (as also suggested by Szabolcs in his comment). You have to supply the option mark_groups when plotting the cover, possibly with some additional palette if you want. See some more detail in the documentation here.\nIn order to construct the VertexCover, you first have to make sure you get integer indices for each node in the graph you created. You can do that using g_test.vs.find.\nclusters = [[g_test.vs.find(name=v).index for v in cl] for cl in list_community]\ncover = ig.VertexCover(g_test, clusters)\n\nAfter that, you can simply draw the cover like\nig.plot(cover,\n mark_groups=True,\n palette=ig.RainbowPalette(3))\n\nresulting in the following picture\n\n",
"Here is a script that somewhat achieves what you're looking for. I had to handle the cases of single-, and two-nodes communities separately, but for greater than two nodes this draws a polygon within the nodes.\nI had some trouble with matplotlib not accounting for overlapping edges and faces of polygons which meant the choice was between (1) not having the polygon surround the nodes or (2) having an extra outline just inside the edge of the polygon due to matplotlib overlapping the widened edge with the fill of the polygon. I left a comment on how to change the code from option (2) to option (1).\nI also blatantly borrowed a convenience function from this post to handle correctly sorting the nodes in the polygon for appropriate filling by matplotlib's plt.fill().\nOption 1:\n\nOption 2:\n\nFull code:\nimport networkx as nx\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom matplotlib import cm\n\ndef sort_xy(x, y):\n\n x0 = np.mean(x)\n y0 = np.mean(y)\n\n r = np.sqrt((x-x0)**2 + (y-y0)**2)\n\n angles = np.where((y-y0) > 0, np.arccos((x-x0)/r), 2*np.pi-np.arccos((x-x0)/r))\n\n mask = np.argsort(angles)\n\n x_sorted = x[mask]\n y_sorted = y[mask]\n\n return x_sorted, y_sorted\n\nG = nx.karate_club_graph()\n\npos = nx.spring_layout(G, seed=42)\nfig, ax = plt.subplots(figsize=(8, 10))\nnx.draw(G, pos=pos, with_labels=True)\n\ncommunities = nx.community.louvain_communities(G)\n\nalpha = 0.5\nedge_padding = 10\ncolors = cm.get_cmap('viridis', len(communities))\n\nfor i, comm in enumerate(communities):\n\n if len(comm) == 1:\n cir = plt.Circle((pos[comm.pop()]), edge_padding / 100, alpha=alpha, color=colors(i))\n ax.add_patch(cir)\n\n elif len(comm) == 2:\n comm_pos = {k: pos[k] for k in comm}\n coords = [a for a in zip(*comm_pos.values())]\n x, y = coords[0], coords[1]\n plt.plot(x, y, linewidth=edge_padding, linestyle=\"-\", alpha=alpha, color=colors(i))\n\n else:\n comm_pos = {k: pos[k] for k in comm}\n coords = [a for a in zip(*comm_pos.values())]\n x, y = sort_xy(np.array(coords[0]), np.array(coords[1]))\n plt.fill(x, y, alpha=alpha, facecolor=colors(i), \n edgecolor=colors(i), # set to None to remove edge padding\n linewidth=edge_padding)\n\n"
] | [
2,
1
] | [] | [] | [
"graph",
"igraph",
"networking",
"python",
"visualization"
] | stackoverflow_0074597504_graph_igraph_networking_python_visualization.txt |
Q:
OR operation not working in removing string part
I've been trying to parse a string and to get rid of parts of the string using the remove() function. In order to find the part which I wanted to remove I used an OR operator. However, it does not produce the outcome I expected. Can you help me?
My code looks like this:
import numpy as np
x = '-1,0;1,0;0,-1;0,+1'
x = x.split(';')
for i in x:
if ('+' in i) or ('-' in i):
x.remove(i)
else:
continue
x = ';'.join(x)
print(x)
The outcome I expect is:
[1,0]
Instead the outcome is:
[1,0;0,+1]
A:
Modifying a list that you are currently iterating over is not a good approach. Instead, build a new list and append values in the else branch.
x = '-1,0;1,0;0,-1;0,+1'
x = x.split(';')
new = []
for i in x:
if ('+' in i) or ('-' in i):
pass
else:
new.append(i)
x = ';'.join(new)
print(x)
Gives #
1,0
A:
In your example there is a problem.
x = '-1,0;1,0;0,-1;0,+1'.split(';')
counter = 0
for i in x:
counter +=1
if '+' in i or '-' in i:
x.remove(i)
print(x, counter)
Look at this: you modify a list over which you iterate.
If you remove the n'th element, the (n+1)'th now has index n, the (n+2)'th has index n+1, and so on...
You skip some elements.
In my opinion a better solution is to use a list comprehension:
x = '-1,0;1,0;0,-1;0,+1'.split(';')
expected_result = [el for el in x
if "-" not in el and
"+" not in el]
Or a for loop that creates a new list:
expected_2 = []
for i in x:
if '+' not in i and '-' not in i:
expected_2.append(i)
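An equivalent sketch using any(), which stays readable if more characters ever need to be excluded:
x = '-1,0;1,0;0,-1;0,+1'
kept = [part for part in x.split(';') if not any(c in part for c in '+-')]
print(';'.join(kept))  # 1,0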
| OR operation not working in removing string part | I've been trying to parse a string and to get rid of parts of the string using the remove() function. In order to find the part which I wanted to remove I used an OR operator. However, it does not produce the outcome I expected. Can you help me?
My code looks like this:
import numpy as np
x = '-1,0;1,0;0,-1;0,+1'
x = x.split(';')
for i in x:
if ('+' in i) or ('-' in i):
x.remove(i)
else:
continue
x = ';'.join(x)
print(x)
The outcome I expect is:
[1,0]
Instead the outcome is:
[1,0;0,+1]
| [
"Modifying lists that you are currently itterating over is not a good approch. Instead take a new list and append values in else.\nx = '-1,0;1,0;0,-1;0,+1'\n\nx = x.split(';')\n\nnew = []\n\nfor i in x:\n if ('+' in i) or ('-' in i):\n pass\n else:\n new.append(i)\n \nx = ';'.join(new) \nprint(x)\n\nGives #\n1,0\n\n",
"In your example there is a problem.\nx = '-1,0;1,0;0,-1;0,+1'.split(';')\n\ncounter = 0\nfor i in x:\n counter +=1\n if '+' in i or '-' in i:\n x.remove(i)\n\nprint(x, counter)\n\nLook at this. U modify a list over which u iterate.\nIf u remove n'th element, the (n+1) has now index n, (n+2) has n+1 so ...\nU skips some elements\nIn my opinion better solution is to use list comprehension:\nx = '-1,0;1,0;0,-1;0,+1'.split(';')\n\nexpected_result = [el for el in x\n if \"-\" not in el and\n \"+\" not in el]\n\nOr for loop but creates new list\nexpected_2 = []\nfor i in x:\n if '+' not in i and '-' not in i:\n expected_2.append(i)\n\n"
] | [
2,
1
] | [] | [] | [
"conditional_statements",
"if_statement",
"python"
] | stackoverflow_0074628308_conditional_statements_if_statement_python.txt |
Q:
How to make a nested array in python
I have a python dictionary (league_managers) mapping Ids to names;
{1443956: 'Sean McBride', 1281609: 'Maghnus Og Dunne', 4841686: 'Pearse Bowes', 406739: 'Adam Mcconville', 196345: 'Niall McCurdy', 808057: 'John McDonald', 6365597: 'Tony Cassidy', 1322001: 'Tiarnan Mccaffrey', 350275: 'Eoghan McCurdy', 4820159: 'Ciaran McKeown', 7185401: 'Ryan Russell', 5203794: 'Michael Devenny', 3145058: 'Declan Lees'}
For each Id in this dictionary, an API is called that returns that player's scores for every game week this season. How can I add this data to the dictionary so that it is structured as
Id-> Name -> Event -> TotalPoints
api_url = ("https://fantasy.premierleague.com/api/leagues-classic/258305/standings")
response = requests.get(api_url).json()
league_managers = dict()
manager_points = dict()
for item in response['standings']['results']:
managerId = item['entry']
managerName = item['player_name']
league_managers[managerId] = managerName
for manager in league_managers:
players_api_url = ("https://fantasy.premierleague.com/api/entry/"+ str(manager)+"/history/")
playersResponse = requests.get(players_api_url).json()
for gameweek in playersResponse['current']:
event = gameweek['event']
total_points = gameweek['total_points']
A:
def get_event(id): #call your event api here
return str(id) + "_event"
def get_points(): #call your points api here
return 100
d = {1443956: 'Sean McBride', 1281609: 'Maghnus Og Dunne', 4841686: 'Pearse Bowes', 406739: 'Adam Mcconville', 196345: 'Niall McCurdy', 808057: 'John McDonald', 6365597: 'Tony Cassidy', 1322001: 'Tiarnan Mccaffrey', 350275: 'Eoghan McCurdy', 4820159: 'Ciaran McKeown', 7185401: 'Ryan Russell', 5203794: 'Michael Devenny', 3145058: 'Declan Lees'}
new_dict = {}
#this loop gives your expected structure
for i in d:
new_dict[i] ={}
new_dict[i][d[i]]={}
event = get_event(i)
new_dict[i][d[i]][event]=get_points()
Output as per above code
{1443956: {'Sean McBride': {'1443956_event': 100}}, 1281609: {'Maghnus Og Dunne': {'1281609_event': 100}}, 4841686: {'Pearse Bowes': {'4841686_event': 100}}, 406739: {'Adam Mcconville': {'406739_event': 100}}, 196345: {'Niall McCurdy': {'196345_event': 100}}, 808057: {'John McDonald': {'808057_event': 100}}, 6365597: {'Tony Cassidy': {'6365597_event': 100}}, 1322001: {'Tiarnan Mccaffrey': {'1322001_event': 100}}, 350275: {'Eoghan McCurdy': {'350275_event': 100}}, 4820159: {'Ciaran McKeown': {'4820159_event': 100}}, 7185401: {'Ryan Russell': {'7185401_event': 100}}, 5203794: {'Michael Devenny': {'5203794_event': 100}}, 3145058: {'Declan Lees': {'3145058_event': 100}}}
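Applied to the original loop from the question (assuming requests and the league_managers dict built in the question; the nesting itself is an assumption about the desired structure, while the field names come from the asker's code):
manager_points = dict()

for manager_id, manager_name in league_managers.items():
    players_api_url = "https://fantasy.premierleague.com/api/entry/" + str(manager_id) + "/history/"
    players_response = requests.get(players_api_url).json()

    events = {}
    for gameweek in players_response['current']:
        events[gameweek['event']] = gameweek['total_points']

    manager_points[manager_id] = {manager_name: events}

# manager_points[1443956]['Sean McBride'][1] -> total points after gameweek 1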
| How to make a nested array in python | I have a python dictionary (league_managers) showing Ids to names;
{1443956: 'Sean McBride', 1281609: 'Maghnus Og Dunne', 4841686: 'Pearse Bowes', 406739: 'Adam Mcconville', 196345: 'Niall McCurdy', 808057: 'John McDonald', 6365597: 'Tony Cassidy', 1322001: 'Tiarnan Mccaffrey', 350275: 'Eoghan McCurdy', 4820159: 'Ciaran McKeown', 7185401: 'Ryan Russell', 5203794: 'Michael Devenny', 3145058: 'Declan Lees'}
For each Id in this dictionary, an API is called that returns that players scores in every game week this season. How can I add this array to this dictionary in a way that the data is structured
Id-> Name -> Event -> TotalPoints
api_url = ("https://fantasy.premierleague.com/api/leagues-classic/258305/standings")
response = requests.get(api_url).json()
league_managers = dict()
manager_points = dict()
for item in response['standings']['results']:
managerId = item['entry']
managerName = item['player_name']
league_managers[managerId] = managerName
for manager in league_managers:
players_api_url = ("https://fantasy.premierleague.com/api/entry/"+ str(manager)+"/history/")
playersResponse = requests.get(players_api_url).json()
for gameweek in playersResponse['current']:
event = gameweek['event']
total_points = gameweek['total_points']
| [
"def get_event(id): #call your event api here\n return str(id) + \"_event\"\n\ndef get_points(): #call your points api here\n return 100\n\n\nd = {1443956: 'Sean McBride', 1281609: 'Maghnus Og Dunne', 4841686: 'Pearse Bowes', 406739: 'Adam Mcconville', 196345: 'Niall McCurdy', 808057: 'John McDonald', 6365597: 'Tony Cassidy', 1322001: 'Tiarnan Mccaffrey', 350275: 'Eoghan McCurdy', 4820159: 'Ciaran McKeown', 7185401: 'Ryan Russell', 5203794: 'Michael Devenny', 3145058: 'Declan Lees'} \n\nnew_dict = {} \n#this loop gives your expected structure \nfor i in d: \n new_dict[i] ={}\n new_dict[i][d[i]]={}\n event = get_event(i)\n new_dict[i][d[i]][event]=get_points()\n\nOutput as per above code\n{1443956: {'Sean McBride': {'1443956_event': 100}}, 1281609: {'Maghnus Og Dunne': {'1281609_event': 100}}, 4841686: {'Pearse Bowes': {'4841686_event': 100}}, 406739: {'Adam Mcconville': {'406739_event': 100}}, 196345: {'Niall McCurdy': {'196345_event': 100}}, 808057: {'John McDonald': {'808057_event': 100}}, 6365597: {'Tony Cassidy': {'6365597_event': 100}}, 1322001: {'Tiarnan Mccaffrey': {'1322001_event': 100}}, 350275: {'Eoghan McCurdy': {'350275_event': 100}}, 4820159: {'Ciaran McKeown': {'4820159_event': 100}}, 7185401: {'Ryan Russell': {'7185401_event': 100}}, 5203794: {'Michael Devenny': {'5203794_event': 100}}, 3145058: {'Declan Lees': {'3145058_event': 100}}}\n\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074628467_python.txt |
Q:
Error: Failure while executing; `cp -pR /private/tmp/d20221129-9397-882a6m/ca-certificates/. /usr/local/Cellar/ca-certificates` exited with 1
When I install Python with brew, it shows this error:
cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11: unable to copy ACL to /usr/local/Cellar/ca-certificates/./2022-10-11: Permission denied
cp: utimensat: /usr/local/Cellar/ca-certificates/.: Permission denied
Error: Failure while executing; `cp -pR /private/tmp/d20221129-9397-882a6m/ca-certificates/. /usr/local/Cellar/ca-certificates` exited with 1. Here's the output:
cp: /usr/local/Cellar/ca-certificates/./2022-10-11: Permission denied
cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11: unable to copy extended attributes to /usr/local/Cellar/ca-certificates/./2022-10-11: Permission denied
cp: /usr/local/Cellar/ca-certificates/./2022-10-11/.brew: No such file or directory
cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11/.brew: unable to copy extended attributes to /usr/local/Cellar/ca-certificates/./2022-10-11/.brew: No such file or directory
cp: /usr/local/Cellar/ca-certificates/./2022-10-11/.brew/ca-certificates.rb: No such file or directory
cp: utimensat: /usr/local/Cellar/ca-certificates/./2022-10-11/.brew: No such file or directory
cp: chown: /usr/local/Cellar/ca-certificates/./2022-10-11/.brew: No such file or directory
cp: chmod: /usr/local/Cellar/ca-certificates/./2022-10-11/.brew: No such file or directory
cp: chflags: /usr/local/Cellar/ca-certificates/./2022-10-11/.brew: No such file or directory
cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11/.brew: unable to copy ACL to /usr/local/Cellar/ca-certificates/./2022-10-11/.brew: No such file or directory
cp: /usr/local/Cellar/ca-certificates/./2022-10-11/share: No such file or directory
cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11/share: unable to copy extended attributes to /usr/local/Cellar/ca-certificates/./2022-10-11/share: No such file or directory
cp: /usr/local/Cellar/ca-certificates/./2022-10-11/share/ca-certificates: No such file or directory
cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11/share/ca-certificates: unable to copy extended attributes to /usr/local/Cellar/ca-certificates/./2022-10-11/share/ca-certificates: No such file or directory
cp: /usr/local/Cellar/ca-certificates/./2022-10-11/share/ca-certificates/cacert.pem: No such file or directory
cp: utimensat: /usr/local/Cellar/ca-certificates/./2022-10-11/share/ca-certificates: No such file or directory
cp: chown: /usr/local/Cellar/ca-certificates/./2022-10-11/share/ca-certificates: No such file or directory
cp: chmod: /usr/local/Cellar/ca-certificates/./2022-10-11/share/ca-certificates: No such file or directory
cp: chflags: /usr/local/Cellar/ca-certificates/./2022-10-11/share/ca-certificates: No such file or directory
cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11/share/ca-certificates: unable to copy ACL to /usr/local/Cellar/ca-certificates/./2022-10-11/share/ca-certificates: No such file or directory
cp: utimensat: /usr/local/Cellar/ca-certificates/./2022-10-11/share: No such file or directory
cp: chown: /usr/local/Cellar/ca-certificates/./2022-10-11/share: No such file or directory
cp: chmod: /usr/local/Cellar/ca-certificates/./2022-10-11/share: No such file or directory
cp: chflags: /usr/local/Cellar/ca-certificates/./2022-10-11/share: No such file or directory
cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11/share: unable to copy ACL to /usr/local/Cellar/ca-certificates/./2022-10-11/share: No such file or directory
cp: utimensat: /usr/local/Cellar/ca-certificates/./2022-10-11: No such file or directory
cp: chown: /usr/local/Cellar/ca-certificates/./2022-10-11: No such file or directory
cp: chmod: /usr/local/Cellar/ca-certificates/./2022-10-11: No such file or directory
cp: chflags: /usr/local/Cellar/ca-certificates/./2022-10-11: No such file or directory
cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11: unable to copy ACL to /usr/local/Cellar/ca-certificates/./2022-10-11: Permission denied
cp: utimensat: /usr/local/Cellar/ca-certificates/.: Permission denied
How can I fix it and install Python with brew? Before that, I installed node and yarn with brew. I have tried uninstalling and re-installing brew, but it doesn't work.
Thank you!
A:
I had the same issue and managed to fix it the following way:
I saw that the upgrade script tried to copy files from
/private/tmp/d20221130-21318-e53mkn/ca-certificates/./2022-10-11 to /usr/local/Cellar/ca-certificates/./2022-10-11 and I got a Permission denied, meaning I (the current mac user) could not edit/create files there.
So I went to /usr/local/Cellar/ and saw that my user was not the owner of the folder ca-certificates and several others. So I just changed the owner like this:
cd /usr/local/Cellar/
sudo chown -R REPLACE_WITH_YOUR_USERNAME:admin *
And this fixed the issue for me. A simple brew upgrade upgraded all my packages after that.
Hope this helps !
| Error: Failure while executing; `cp -pR /private/tmp/d20221129-9397-882a6m/ca-certificates/. /usr/local/Cellar/ca-certificates` exited with 1 | When I install Python with brew, it shows this error:
cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11: unable to copy ACL to /usr/local/Cellar/ca-certificates/./2022-10-11: Permission denied
cp: utimensat: /usr/local/Cellar/ca-certificates/.: Permission denied
Error: Failure while executing; `cp -pR /private/tmp/d20221129-9397-882a6m/ca-certificates/. /usr/local/Cellar/ca-certificates` exited with 1. Here's the output:
cp: /usr/local/Cellar/ca-certificates/./2022-10-11: Permission denied
cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11: unable to copy extended attributes to /usr/local/Cellar/ca-certificates/./2022-10-11: Permission denied
cp: /usr/local/Cellar/ca-certificates/./2022-10-11/.brew: No such file or directory
cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11/.brew: unable to copy extended attributes to /usr/local/Cellar/ca-certificates/./2022-10-11/.brew: No such file or directory
cp: /usr/local/Cellar/ca-certificates/./2022-10-11/.brew/ca-certificates.rb: No such file or directory
cp: utimensat: /usr/local/Cellar/ca-certificates/./2022-10-11/.brew: No such file or directory
cp: chown: /usr/local/Cellar/ca-certificates/./2022-10-11/.brew: No such file or directory
cp: chmod: /usr/local/Cellar/ca-certificates/./2022-10-11/.brew: No such file or directory
cp: chflags: /usr/local/Cellar/ca-certificates/./2022-10-11/.brew: No such file or directory
cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11/.brew: unable to copy ACL to /usr/local/Cellar/ca-certificates/./2022-10-11/.brew: No such file or directory
cp: /usr/local/Cellar/ca-certificates/./2022-10-11/share: No such file or directory
cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11/share: unable to copy extended attributes to /usr/local/Cellar/ca-certificates/./2022-10-11/share: No such file or directory
cp: /usr/local/Cellar/ca-certificates/./2022-10-11/share/ca-certificates: No such file or directory
cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11/share/ca-certificates: unable to copy extended attributes to /usr/local/Cellar/ca-certificates/./2022-10-11/share/ca-certificates: No such file or directory
cp: /usr/local/Cellar/ca-certificates/./2022-10-11/share/ca-certificates/cacert.pem: No such file or directory
cp: utimensat: /usr/local/Cellar/ca-certificates/./2022-10-11/share/ca-certificates: No such file or directory
cp: chown: /usr/local/Cellar/ca-certificates/./2022-10-11/share/ca-certificates: No such file or directory
cp: chmod: /usr/local/Cellar/ca-certificates/./2022-10-11/share/ca-certificates: No such file or directory
cp: chflags: /usr/local/Cellar/ca-certificates/./2022-10-11/share/ca-certificates: No such file or directory
cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11/share/ca-certificates: unable to copy ACL to /usr/local/Cellar/ca-certificates/./2022-10-11/share/ca-certificates: No such file or directory
cp: utimensat: /usr/local/Cellar/ca-certificates/./2022-10-11/share: No such file or directory
cp: chown: /usr/local/Cellar/ca-certificates/./2022-10-11/share: No such file or directory
cp: chmod: /usr/local/Cellar/ca-certificates/./2022-10-11/share: No such file or directory
cp: chflags: /usr/local/Cellar/ca-certificates/./2022-10-11/share: No such file or directory
cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11/share: unable to copy ACL to /usr/local/Cellar/ca-certificates/./2022-10-11/share: No such file or directory
cp: utimensat: /usr/local/Cellar/ca-certificates/./2022-10-11: No such file or directory
cp: chown: /usr/local/Cellar/ca-certificates/./2022-10-11: No such file or directory
cp: chmod: /usr/local/Cellar/ca-certificates/./2022-10-11: No such file or directory
cp: chflags: /usr/local/Cellar/ca-certificates/./2022-10-11: No such file or directory
cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11: unable to copy ACL to /usr/local/Cellar/ca-certificates/./2022-10-11: Permission denied
cp: utimensat: /usr/local/Cellar/ca-certificates/.: Permission denied
How can I fix this and install Python with brew? Before that, I had installed node and yarn with brew. I have tried uninstalling and re-installing brew, but it doesn't work.
Thank you!
| [
"I had the same issue and managed to fix it the following way:\nI saw that the upgrade script tried to copy files from\n/private/tmp/d20221130-21318-e53mkn/ca-certificates/./2022-10-11 to /usr/local/Cellar/ca-certificates/./2022-10-11 and I got a Permission denied meaning - I (the current mac user) could not edit/create files there.\nSo i went to /usr/local/Cellar/ and saw that my user was not the owner of the folder ca-certificates and several others. So i just changed the onwer like this:\ncd /usr/local/Cellar/\n\nsudo chown -R REPLACE_WITH_YOUR_USERNAME:admin *\n\nAnd this fix the issue for me. A simple brew upgrde upgraded all my packages after that.\nHope this helps !\n"
] | [
0
] | [] | [] | [
"homebrew",
"python"
] | stackoverflow_0074608872_homebrew_python.txt |
Q:
Converting Mathematica Fourier series code to Python
I have some simple Mathematica code that I'm struggling to convert to Python and could use some help:
a = ((-1)^(n))*4/(Pi*(2 n + 1));
f = a*Cos[(2 n + 1)*t];
sum = Sum[f, {n, 0, 10}];
Plot[sum, {t, -2 \[Pi], 2 \[Pi]}]
The plot looks like this:
For context, I have a function f(t):
I need to plot the sum of the first 10 terms. In Mathematica this was pretty straightforward, but for some reason I just can't seem to figure out how to make it work in Python. I've tried defining a function a(n), but when I try to set f(t) equal to the sum using my list of odd numbers, it doesn't work because t is not defined, even though t is meant to be a variable. Any help would be much appreciated.
Below is a sample of one of the many different things I've tried. I know that it's not quite right in terms of getting the parity of the terms to alternate, but more importantly I just want to figure out how to get 'f' to be the sum of the first 10 terms of the summation:
n = list(range(1,20,2))
def a(n):
return ((-1)**(n))*4/(np.pi*n)
f = 0
for i in n:
f += a(i)*np.cos(i*t)
A:
Here is your code modified; look at the parts which are different. The main mistake was that you were not calculating based on n = 0 to 10:
import numpy as np

n = np.arange(0,10)
t = np.linspace(-2 * np.pi, 2 *np.pi, 10000)
def a(n):
return ((-1)**(n))*4/(np.pi*(2*n+1))
f = 0
for i in n:
f += a(i)*np.cos((2*i +1) * t)
However, you could also write it in matrix form and avoid the loop, using vectors and broadcasting:
n = np.arange(10)[:,None]
t = np.linspace(-2 * np.pi, 2 *np.pi, 10000)[:,None]
a = ((-1) ** n) * 4 / (np.pi*(2*n + 1))
f = (a * np.cos((2 * n + 1) * t.T )).sum(axis=0)
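To mirror the Plot[...] call from the original Mathematica code, here is a minimal matplotlib sketch (matplotlib is an assumption, not part of the answer above; it recomputes the same partial sum):
import numpy as np
import matplotlib.pyplot as plt

n = np.arange(10)[:, None]
t = np.linspace(-2 * np.pi, 2 * np.pi, 10000)[:, None]
a = ((-1) ** n) * 4 / (np.pi * (2 * n + 1))
f = (a * np.cos((2 * n + 1) * t.T)).sum(axis=0)  # partial sum of the first 10 terms

plt.plot(t.ravel(), f)       # square-wave approximation, analogous to Mathematica's Plot
plt.xlabel("t")
plt.ylabel("partial sum")
plt.show()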
| Converting Mathematica Fourier series code to Python | I have some simple Mathematica code that I'm struggling to convert to Python and could use some help:
a = ((-1)^(n))*4/(Pi*(2 n + 1));
f = a*Cos[(2 n + 1)*t];
sum = Sum[f, {n, 0, 10}];
Plot[sum, {t, -2 \[Pi], 2 \[Pi]}]
The plot looks like this:
For context, I have a function f(t):
I need to plot the sum of the first 10 terms. In Mathematica this was pretty straightforward, but for some reason I just can't seem to figure out how to make it work in Python. I've tried defining a function a(n), but when I try to set f(t) equal to the sum using my list of odd numbers, it doesn't work because t is not defined, even though t is meant to be a variable. Any help would be much appreciated.
Below is a sample of one of the many different things I've tried. I know that it's not quite right in terms of getting the parity of the terms to alternate, but more importantly I just want to figure out how to get 'f' to be the sum of the first 10 terms of the summation:
n = list(range(1,20,2))
def a(n):
return ((-1)**(n))*4/(np.pi*n)
f = 0
for i in n:
f += a(i)*np.cos(i*t)
| [
"modifying your code, look the part which are different, mostly the mistake was in the part which you are not calculating based on n 0-10 :\nn = np.arange(0,10)\nt = np.linspace(-2 * np.pi, 2 *np.pi, 10000)\ndef a(n):\n return ((-1)**(n))*4/(np.pi*(2*n+1))\nf = 0\nfor i in n:\n f += a(i)*np.cos((2*i +1) * t)\n\n\nhowever you could write you could in matrix form, and avoid looping, using the vector and broadcasting:\nn = np.arange(10)[:,None]\nt = np.linspace(-2 * np.pi, 2 *np.pi, 10000)[:,None]\na = ((-1) ** n) * 4 / (np.pi*(2*n + 1))\nf = (a * np.cos((2 * n + 1) * t.T )).sum(axis=0)\n\n"
] | [
0
] | [] | [] | [
"fft",
"python",
"wolfram_mathematica"
] | stackoverflow_0074623175_fft_python_wolfram_mathematica.txt |
Q:
Read ZIP files from S3 without downloading the entire file
We have ZIP files that are 5-10GB in size. The typical ZIP file has 5-10 internal files, each 1-5 GB in size uncompressed.
I have a nice set of Python tools for reading these files. Basically, I can open a filename and if there is a ZIP file, the tools search in the ZIP file and then open the compressed file. It's all rather transparent.
I want to store these files in Amazon S3 as compressed files. I can fetch ranges of S3 files, so it should be possible to fetch the ZIP central directory (it's the end of the file, so I can just read the last 64KiB), find the component I want, download that, and stream directly to the calling process.
So my question is, how do I do that through the standard Python ZipFile API? It isn't documented how to replace the filesystem transport with an arbitrary object that supports POSIX semantics. Is this possible without rewriting the module?
A:
Here's an approach which does not need to fetch the entire file (full version available here).
It does require boto (or boto3), though (unless you can mimic the ranged GETs via AWS CLI; which I guess is quite possible as well).
import sys
import zlib
import zipfile
import io
import boto
from boto.s3.connection import OrdinaryCallingFormat
# range-fetches a S3 key
def fetch(key, start, len):
end = start + len - 1
return key.get_contents_as_string(headers={"Range": "bytes=%d-%d" % (start, end)})
# parses 2 or 4 little-endian bits into their corresponding integer value
def parse_int(bytes):
val = ord(bytes[0]) + (ord(bytes[1]) << 8)
if len(bytes) > 3:
val += (ord(bytes[2]) << 16) + (ord(bytes[3]) << 24)
return val
"""
bucket: name of the bucket
key: path to zipfile inside bucket
entry: pathname of zip entry to be retrieved (path/to/subdir/file.name)
"""
# OrdinaryCallingFormat prevents certificate errors on bucket names with dots
# https://stackoverflow.com/questions/51604689/read-zip-files-from-amazon-s3-using-boto3-and-python#51605244
_bucket = boto.connect_s3(calling_format=OrdinaryCallingFormat()).get_bucket(bucket)
_key = _bucket.get_key(key)
# fetch the last 22 bytes (end-of-central-directory record; assuming the comment field is empty)
size = _key.size
eocd = fetch(_key, size - 22, 22)
# start offset and size of the central directory
cd_start = parse_int(eocd[16:20])
cd_size = parse_int(eocd[12:16])
# fetch central directory, append EOCD, and open as zipfile!
cd = fetch(_key, cd_start, cd_size)
zip = zipfile.ZipFile(io.BytesIO(cd + eocd))
for zi in zip.filelist:
if zi.filename == entry:
# local file header starting at file name length + file content
# (so we can reliably skip file name and extra fields)
# in our "mock" zipfile, `header_offset`s are negative (probably because the leading content is missing)
# so we have to add to it the CD start offset (`cd_start`) to get the actual offset
file_head = fetch(_key, cd_start + zi.header_offset + 26, 4)
name_len = parse_int(file_head[0:2])
extra_len = parse_int(file_head[2:4])
content = fetch(_key, cd_start + zi.header_offset + 30 + name_len + extra_len, zi.compress_size)
# now `content` has the file entry you were looking for!
# you should probably decompress it in context before passing it to some other program
if zi.compress_type == zipfile.ZIP_DEFLATED:
print zlib.decompressobj(-15).decompress(content)
else:
print content
break
In your case you might need to write the fetched content to a local file (due to large size), unless memory usage is not a concern.
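As a minimal sketch of that idea (assuming content and zi come from the loop above), the entry can be decompressed in chunks straight to a local file instead of being printed:
import zlib
import zipfile

CHUNK = 1 << 20  # decompress 1 MiB of input at a time

with open("extracted_entry", "wb") as out:  # hypothetical output filename
    if zi.compress_type == zipfile.ZIP_DEFLATED:
        d = zlib.decompressobj(-15)  # raw deflate stream, as above
        for i in range(0, len(content), CHUNK):
            out.write(d.decompress(content[i:i + CHUNK]))
        out.write(d.flush())
    else:
        out.write(content)  # stored (uncompressed) entry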
A:
So here is the code that allows you to open a file on Amazon S3 as if it were a normal file. Notice I use the aws command, rather than the boto3 Python module. (I don't have access to boto3.) You can open the file and seek on it. The file is cached locally. If you open the file with the Python ZipFile API and it's a ZipFile, you can then read individual parts. You can't write, though, because S3 doesn't support partial writes.
Separately, I implement s3open(), which can open a file for reading or writing, but it doesn't implement the seek interface, which is required by ZipFile.
from urllib.parse import urlparse
from subprocess import run,Popen,PIPE
import copy
import json
import os
import tempfile
# Tools for reading and write files from Amazon S3 without boto or boto3
# http://boto.cloudhackers.com/en/latest/s3_tut.html
# but it is easier to use the aws cli, since it's configured to work.
def s3open(path, mode="r", encoding=None):
"""
Open an s3 file for reading or writing. Can handle any size, but cannot seek.
We could use boto.
http://boto.cloudhackers.com/en/latest/s3_tut.html
but it is easier to use the aws cli, since it is present and more likely to work.
"""
from subprocess import run,PIPE,Popen
if "b" in mode:
assert encoding == None
else:
if encoding==None:
encoding="utf-8"
assert 'a' not in mode
assert '+' not in mode
if "r" in mode:
p = Popen(['aws','s3','cp',path,'-'],stdout=PIPE,encoding=encoding)
return p.stdout
elif "w" in mode:
p = Popen(['aws','s3','cp','-',path],stdin=PIPE,encoding=encoding)
return p.stdin
else:
raise RuntimeError("invalid mode:{}".format(mode))
CACHE_SIZE=4096 # big enough for front and back caches
MAX_READ=65536*16
debug=False
class S3File:
"""Open an S3 file that can be seeked. This is done by caching to the local file system."""
def __init__(self,name,mode='rb'):
self.name = name
self.url = urlparse(name)
if self.url.scheme != 's3':
raise RuntimeError("url scheme is {}; expecting s3".format(self.url.scheme))
self.bucket = self.url.netloc
self.key = self.url.path[1:]
self.fpos = 0
self.tf = tempfile.NamedTemporaryFile()
cmd = ['aws','s3api','list-objects','--bucket',self.bucket,'--prefix',self.key,'--output','json']
data = json.loads(Popen(cmd,encoding='utf8',stdout=PIPE).communicate()[0])
file_info = data['Contents'][0]
self.length = file_info['Size']
self.ETag = file_info['ETag']
# Load the caches
self.frontcache = self._readrange(0,CACHE_SIZE) # read the first 1024 bytes and get length of the file
if self.length > CACHE_SIZE:
self.backcache_start = self.length-CACHE_SIZE
if debug: print("backcache starts at {}".format(self.backcache_start))
self.backcache = self._readrange(self.backcache_start,CACHE_SIZE)
else:
self.backcache = None
def _readrange(self,start,length):
# This is gross; we copy everything to the named temporary file, rather than a pipe
# because the pipes weren't showing up in /dev/fd/?
# We probably want to cache also... That's coming
cmd = ['aws','s3api','get-object','--bucket',self.bucket,'--key',self.key,'--output','json',
'--range','bytes={}-{}'.format(start,start+length-1),self.tf.name]
if debug:print(cmd)
data = json.loads(Popen(cmd,encoding='utf8',stdout=PIPE).communicate()[0])
if debug:print(data)
self.tf.seek(0) # go to the beginning of the data just read
return self.tf.read(length) # and read that much
def __repr__(self):
return "FakeFile<name:{} url:{}>".format(self.name,self.url)
def read(self,length=-1):
# If length==-1, figure out the max we can read to the end of the file
if length==-1:
length = min(MAX_READ, self.length - self.fpos + 1)
if debug:
print("read: fpos={} length={}".format(self.fpos,length))
# Can we satisfy from the front cache?
if self.fpos < CACHE_SIZE and self.fpos+length < CACHE_SIZE:
if debug:print("front cache")
buf = self.frontcache[self.fpos:self.fpos+length]
self.fpos += len(buf)
if debug:print("return 1: buf=",buf)
return buf
# Can we satisfy from the back cache?
if self.backcache and (self.length - CACHE_SIZE < self.fpos):
if debug:print("back cache")
buf = self.backcache[self.fpos - self.backcache_start:self.fpos - self.backcache_start + length]
self.fpos += len(buf)
if debug:print("return 2: buf=",buf)
return buf
buf = self._readrange(self.fpos, length)
self.fpos += len(buf)
if debug:print("return 3: buf=",buf)
return buf
def seek(self,offset,whence=0):
if debug:print("seek({},{})".format(offset,whence))
if whence==0:
self.fpos = offset
elif whence==1:
self.fpos += offset
elif whence==2:
self.fpos = self.length + offset
else:
raise RuntimeError("whence={}".format(whence))
if debug:print(" ={} (self.length={})".format(self.fpos,self.length))
def tell(self):
return self.fpos
def write(self):
raise RuntimeError("Write not supported")
def flush(self):
raise RuntimeError("Flush not supported")
def close(self):
return
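A usage sketch for the class above (the S3 URL here is a hypothetical placeholder):
import zipfile

f = S3File("s3://my-bucket/path/to/archive.zip")  # hypothetical S3 URL
with zipfile.ZipFile(f) as zf:
    print(zf.namelist())  # list members without downloading the whole archive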
A:
Here's an improved version of the already given solution - now it uses boto3 and handles files which are larger than 4GiB:
import boto3
import io
import struct
import zipfile
s3 = boto3.client('s3')
EOCD_RECORD_SIZE = 22
ZIP64_EOCD_RECORD_SIZE = 56
ZIP64_EOCD_LOCATOR_SIZE = 20
MAX_STANDARD_ZIP_SIZE = 4_294_967_295
def lambda_handler(event):
bucket = event['bucket']
key = event['key']
zip_file = get_zip_file(bucket, key)
print_zip_content(zip_file)
def get_zip_file(bucket, key):
file_size = get_file_size(bucket, key)
eocd_record = fetch(bucket, key, file_size - EOCD_RECORD_SIZE, EOCD_RECORD_SIZE)
if file_size <= MAX_STANDARD_ZIP_SIZE:
cd_start, cd_size = get_central_directory_metadata_from_eocd(eocd_record)
central_directory = fetch(bucket, key, cd_start, cd_size)
return zipfile.ZipFile(io.BytesIO(central_directory + eocd_record))
else:
zip64_eocd_record = fetch(bucket, key,
file_size - (EOCD_RECORD_SIZE + ZIP64_EOCD_LOCATOR_SIZE + ZIP64_EOCD_RECORD_SIZE),
ZIP64_EOCD_RECORD_SIZE)
zip64_eocd_locator = fetch(bucket, key,
file_size - (EOCD_RECORD_SIZE + ZIP64_EOCD_LOCATOR_SIZE),
ZIP64_EOCD_LOCATOR_SIZE)
cd_start, cd_size = get_central_directory_metadata_from_eocd64(zip64_eocd_record)
central_directory = fetch(bucket, key, cd_start, cd_size)
return zipfile.ZipFile(io.BytesIO(central_directory + zip64_eocd_record + zip64_eocd_locator + eocd_record))
def get_file_size(bucket, key):
head_response = s3.head_object(Bucket=bucket, Key=key)
return head_response['ContentLength']
def fetch(bucket, key, start, length):
end = start + length - 1
response = s3.get_object(Bucket=bucket, Key=key, Range="bytes=%d-%d" % (start, end))
return response['Body'].read()
def get_central_directory_metadata_from_eocd(eocd):
cd_size = parse_little_endian_to_int(eocd[12:16])
cd_start = parse_little_endian_to_int(eocd[16:20])
return cd_start, cd_size
def get_central_directory_metadata_from_eocd64(eocd64):
cd_size = parse_little_endian_to_int(eocd64[40:48])
cd_start = parse_little_endian_to_int(eocd64[48:56])
return cd_start, cd_size
def parse_little_endian_to_int(little_endian_bytes):
format_character = "i" if len(little_endian_bytes) == 4 else "q"
return struct.unpack("<" + format_character, little_endian_bytes)[0]
def print_zip_content(zip_file):
files = [zi.filename for zi in zip_file.filelist]
print(f"Files: {files}")
A:
import io
class S3File(io.RawIOBase):
def __init__(self, s3_object):
self.s3_object = s3_object
self.position = 0
def __repr__(self):
return "<%s s3_object=%r>" % (type(self).__name__, self.s3_object)
@property
def size(self):
return self.s3_object.content_length
def tell(self):
return self.position
def seek(self, offset, whence=io.SEEK_SET):
if whence == io.SEEK_SET:
self.position = offset
elif whence == io.SEEK_CUR:
self.position += offset
elif whence == io.SEEK_END:
self.position = self.size + offset
else:
raise ValueError("invalid whence (%r, should be %d, %d, %d)" % (
whence, io.SEEK_SET, io.SEEK_CUR, io.SEEK_END
))
return self.position
def seekable(self):
return True
def read(self, size=-1):
if size == -1:
# Read to the end of the file
range_header = "bytes=%d-" % self.position
self.seek(offset=0, whence=io.SEEK_END)
else:
new_position = self.position + size
# If we're going to read beyond the end of the object, return
# the entire object.
if new_position >= self.size:
return self.read()
range_header = "bytes=%d-%d" % (self.position, new_position - 1)
self.seek(offset=size, whence=io.SEEK_CUR)
return self.s3_object.get(Range=range_header)["Body"].read()
def readable(self):
return True
if __name__ == "__main__":
import zipfile
import boto3
s3 = boto3.resource("s3")
s3_object = s3.Object(bucket_name="bukkit", key="bagit.zip")
s3_file = S3File(s3_object)
with zipfile.ZipFile(s3_file) as zf:
print(zf.namelist())
Reference:
https://alexwlchan.net/2019/02/working-with-large-s3-objects/
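As a follow-up usage note, the same wrapper lets ZipFile stream a single member out of the archive through ranged reads; this is a sketch reusing the placeholder bucket and key from the example above:
import shutil
import zipfile

import boto3

s3 = boto3.resource("s3")
s3_object = s3.Object(bucket_name="bukkit", key="bagit.zip")  # placeholders from the example above
s3_file = S3File(s3_object)

with zipfile.ZipFile(s3_file) as zf:
    first_name = zf.namelist()[0]
    # ZipFile.open() only seeks/reads the byte ranges it needs through S3File
    with zf.open(first_name) as member, open("first_member.bin", "wb") as out:
        shutil.copyfileobj(member, out)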
| Read ZIP files from S3 without downloading the entire file | We have ZIP files that are 5-10GB in size. The typical ZIP file has 5-10 internal files, each 1-5 GB in size uncompressed.
I have a nice set of Python tools for reading these files. Basically, I can open a filename and if there is a ZIP file, the tools search in the ZIP file and then open the compressed file. It's all rather transparent.
I want to store these files in Amazon S3 as compressed files. I can fetch ranges of S3 files, so it should be possible to fetch the ZIP central directory (it's the end of the file, so I can just read the last 64KiB), find the component I want, download that, and stream directly to the calling process.
So my question is, how do I do that through the standard Python ZipFile API? It isn't documented how to replace the filesystem transport with an arbitrary object that supports POSIX semantics. Is this possible without rewriting the module?
| [
"Here's an approach which does not need to fetch the entire file (full version available here).\nIt does require boto (or boto3), though (unless you can mimic the ranged GETs via AWS CLI; which I guess is quite possible as well).\nimport sys\nimport zlib\nimport zipfile\nimport io\n\nimport boto\nfrom boto.s3.connection import OrdinaryCallingFormat\n\n\n# range-fetches a S3 key\ndef fetch(key, start, len):\n end = start + len - 1\n return key.get_contents_as_string(headers={\"Range\": \"bytes=%d-%d\" % (start, end)})\n\n\n# parses 2 or 4 little-endian bits into their corresponding integer value\ndef parse_int(bytes):\n val = ord(bytes[0]) + (ord(bytes[1]) << 8)\n if len(bytes) > 3:\n val += (ord(bytes[2]) << 16) + (ord(bytes[3]) << 24)\n return val\n\n\n\"\"\"\nbucket: name of the bucket\nkey: path to zipfile inside bucket\nentry: pathname of zip entry to be retrieved (path/to/subdir/file.name) \n\"\"\"\n\n# OrdinaryCallingFormat prevents certificate errors on bucket names with dots\n# https://stackoverflow.com/questions/51604689/read-zip-files-from-amazon-s3-using-boto3-and-python#51605244\n_bucket = boto.connect_s3(calling_format=OrdinaryCallingFormat()).get_bucket(bucket)\n_key = _bucket.get_key(key)\n\n# fetch the last 22 bytes (end-of-central-directory record; assuming the comment field is empty)\nsize = _key.size\neocd = fetch(_key, size - 22, 22)\n\n# start offset and size of the central directory\ncd_start = parse_int(eocd[16:20])\ncd_size = parse_int(eocd[12:16])\n\n# fetch central directory, append EOCD, and open as zipfile!\ncd = fetch(_key, cd_start, cd_size)\nzip = zipfile.ZipFile(io.BytesIO(cd + eocd))\n\n\nfor zi in zip.filelist:\n if zi.filename == entry:\n # local file header starting at file name length + file content\n # (so we can reliably skip file name and extra fields)\n\n # in our \"mock\" zipfile, `header_offset`s are negative (probably because the leading content is missing)\n # so we have to add to it the CD start offset (`cd_start`) to get the actual offset\n\n file_head = fetch(_key, cd_start + zi.header_offset + 26, 4)\n name_len = parse_int(file_head[0:2])\n extra_len = parse_int(file_head[2:4])\n\n content = fetch(_key, cd_start + zi.header_offset + 30 + name_len + extra_len, zi.compress_size)\n\n # now `content` has the file entry you were looking for!\n # you should probably decompress it in context before passing it to some other program\n\n if zi.compress_type == zipfile.ZIP_DEFLATED:\n print zlib.decompressobj(-15).decompress(content)\n else:\n print content\n break\n\nIn your case you might need to write the fetched content to a local file (due to large size), unless memory usage is not a concern.\n",
"So here is the code that allows you to open a file on Amazon S3 as if it were a normal file. Notice I use the aws command, rather than the boto3 Python module. (I don't have access to boto3.) You can open the file and seek on it. The file is cached locally. If you open the file with the Python ZipFile API and it's a ZipFile, you can then read individual parts. You can't write, though, because S3 doesn't support partial writes.\nSeparately, I implement s3open(), which can open a file for reading or writing, but it doesn't implement the seek interface, which is required by ZipFile.\nfrom urllib.parse import urlparse\nfrom subprocess import run,Popen,PIPE\nimport copy\nimport json\nimport os\nimport tempfile\n\n# Tools for reading and write files from Amazon S3 without boto or boto3\n# http://boto.cloudhackers.com/en/latest/s3_tut.html\n# but it is easier to use the aws cli, since it's configured to work.\n\ndef s3open(path, mode=\"r\", encoding=None):\n \"\"\"\n Open an s3 file for reading or writing. Can handle any size, but cannot seek.\n We could use boto.\n http://boto.cloudhackers.com/en/latest/s3_tut.html\n but it is easier to use the aws cli, since it is present and more likely to work.\n \"\"\"\n from subprocess import run,PIPE,Popen\n if \"b\" in mode:\n assert encoding == None\n else:\n if encoding==None:\n encoding=\"utf-8\"\n assert 'a' not in mode\n assert '+' not in mode\n\n if \"r\" in mode:\n p = Popen(['aws','s3','cp',path,'-'],stdout=PIPE,encoding=encoding)\n return p.stdout\n\n elif \"w\" in mode:\n p = Popen(['aws','s3','cp','-',path],stdin=PIPE,encoding=encoding)\n return p.stdin\n else:\n raise RuntimeError(\"invalid mode:{}\".format(mode))\n\n\n\n\nCACHE_SIZE=4096 # big enough for front and back caches\nMAX_READ=65536*16\ndebug=False\nclass S3File:\n \"\"\"Open an S3 file that can be seeked. This is done by caching to the local file system.\"\"\"\n def __init__(self,name,mode='rb'):\n self.name = name\n self.url = urlparse(name)\n if self.url.scheme != 's3':\n raise RuntimeError(\"url scheme is {}; expecting s3\".format(self.url.scheme))\n self.bucket = self.url.netloc\n self.key = self.url.path[1:]\n self.fpos = 0\n self.tf = tempfile.NamedTemporaryFile()\n cmd = ['aws','s3api','list-objects','--bucket',self.bucket,'--prefix',self.key,'--output','json']\n data = json.loads(Popen(cmd,encoding='utf8',stdout=PIPE).communicate()[0])\n file_info = data['Contents'][0]\n self.length = file_info['Size']\n self.ETag = file_info['ETag']\n\n # Load the caches\n\n self.frontcache = self._readrange(0,CACHE_SIZE) # read the first 1024 bytes and get length of the file\n if self.length > CACHE_SIZE:\n self.backcache_start = self.length-CACHE_SIZE\n if debug: print(\"backcache starts at {}\".format(self.backcache_start))\n self.backcache = self._readrange(self.backcache_start,CACHE_SIZE)\n else:\n self.backcache = None\n\n def _readrange(self,start,length):\n # This is gross; we copy everything to the named temporary file, rather than a pipe\n # because the pipes weren't showing up in /dev/fd/?\n # We probably want to cache also... 
That's coming\n cmd = ['aws','s3api','get-object','--bucket',self.bucket,'--key',self.key,'--output','json',\n '--range','bytes={}-{}'.format(start,start+length-1),self.tf.name]\n if debug:print(cmd)\n data = json.loads(Popen(cmd,encoding='utf8',stdout=PIPE).communicate()[0])\n if debug:print(data)\n self.tf.seek(0) # go to the beginning of the data just read\n return self.tf.read(length) # and read that much\n\n def __repr__(self):\n return \"FakeFile<name:{} url:{}>\".format(self.name,self.url)\n\n def read(self,length=-1):\n # If length==-1, figure out the max we can read to the end of the file\n if length==-1:\n length = min(MAX_READ, self.length - self.fpos + 1)\n\n if debug:\n print(\"read: fpos={} length={}\".format(self.fpos,length))\n # Can we satisfy from the front cache?\n if self.fpos < CACHE_SIZE and self.fpos+length < CACHE_SIZE:\n if debug:print(\"front cache\")\n buf = self.frontcache[self.fpos:self.fpos+length]\n self.fpos += len(buf)\n if debug:print(\"return 1: buf=\",buf)\n return buf\n\n # Can we satisfy from the back cache?\n if self.backcache and (self.length - CACHE_SIZE < self.fpos):\n if debug:print(\"back cache\")\n buf = self.backcache[self.fpos - self.backcache_start:self.fpos - self.backcache_start + length]\n self.fpos += len(buf)\n if debug:print(\"return 2: buf=\",buf)\n return buf\n\n buf = self._readrange(self.fpos, length)\n self.fpos += len(buf)\n if debug:print(\"return 3: buf=\",buf)\n return buf\n\n def seek(self,offset,whence=0):\n if debug:print(\"seek({},{})\".format(offset,whence))\n if whence==0:\n self.fpos = offset\n elif whence==1:\n self.fpos += offset\n elif whence==2:\n self.fpos = self.length + offset\n else:\n raise RuntimeError(\"whence={}\".format(whence))\n if debug:print(\" ={} (self.length={})\".format(self.fpos,self.length))\n\n def tell(self):\n return self.fpos\n\n def write(self):\n raise RuntimeError(\"Write not supported\")\n\n def flush(self):\n raise RuntimeError(\"Flush not supported\")\n\n def close(self):\n return\n\n",
"Here's an improved version of the already given solution - now it uses boto3 and handles files which are larger than 4GiB:\nimport boto3\nimport io\nimport struct\nimport zipfile\n\ns3 = boto3.client('s3')\n\nEOCD_RECORD_SIZE = 22\nZIP64_EOCD_RECORD_SIZE = 56\nZIP64_EOCD_LOCATOR_SIZE = 20\n\nMAX_STANDARD_ZIP_SIZE = 4_294_967_295\n\ndef lambda_handler(event):\n bucket = event['bucket']\n key = event['key']\n zip_file = get_zip_file(bucket, key)\n print_zip_content(zip_file)\n\ndef get_zip_file(bucket, key):\n file_size = get_file_size(bucket, key)\n eocd_record = fetch(bucket, key, file_size - EOCD_RECORD_SIZE, EOCD_RECORD_SIZE)\n if file_size <= MAX_STANDARD_ZIP_SIZE:\n cd_start, cd_size = get_central_directory_metadata_from_eocd(eocd_record)\n central_directory = fetch(bucket, key, cd_start, cd_size)\n return zipfile.ZipFile(io.BytesIO(central_directory + eocd_record))\n else:\n zip64_eocd_record = fetch(bucket, key,\n file_size - (EOCD_RECORD_SIZE + ZIP64_EOCD_LOCATOR_SIZE + ZIP64_EOCD_RECORD_SIZE),\n ZIP64_EOCD_RECORD_SIZE)\n zip64_eocd_locator = fetch(bucket, key,\n file_size - (EOCD_RECORD_SIZE + ZIP64_EOCD_LOCATOR_SIZE),\n ZIP64_EOCD_LOCATOR_SIZE)\n cd_start, cd_size = get_central_directory_metadata_from_eocd64(zip64_eocd_record)\n central_directory = fetch(bucket, key, cd_start, cd_size)\n return zipfile.ZipFile(io.BytesIO(central_directory + zip64_eocd_record + zip64_eocd_locator + eocd_record))\n\n\ndef get_file_size(bucket, key):\n head_response = s3.head_object(Bucket=bucket, Key=key)\n return head_response['ContentLength']\n\ndef fetch(bucket, key, start, length):\n end = start + length - 1\n response = s3.get_object(Bucket=bucket, Key=key, Range=\"bytes=%d-%d\" % (start, end))\n return response['Body'].read()\n\ndef get_central_directory_metadata_from_eocd(eocd):\n cd_size = parse_little_endian_to_int(eocd[12:16])\n cd_start = parse_little_endian_to_int(eocd[16:20])\n return cd_start, cd_size\n\ndef get_central_directory_metadata_from_eocd64(eocd64):\n cd_size = parse_little_endian_to_int(eocd64[40:48])\n cd_start = parse_little_endian_to_int(eocd64[48:56])\n return cd_start, cd_size\n\ndef parse_little_endian_to_int(little_endian_bytes):\n format_character = \"i\" if len(little_endian_bytes) == 4 else \"q\"\n return struct.unpack(\"<\" + format_character, little_endian_bytes)[0]\n\ndef print_zip_content(zip_file):\n files = [zi.filename for zi in zip_file.filelist]\n print(f\"Files: {files}\")\n\n",
"import io\n\n\nclass S3File(io.RawIOBase):\n def __init__(self, s3_object):\n self.s3_object = s3_object\n self.position = 0\n\n def __repr__(self):\n return \"<%s s3_object=%r>\" % (type(self).__name__, self.s3_object)\n\n @property\n def size(self):\n return self.s3_object.content_length\n\n def tell(self):\n return self.position\n\n def seek(self, offset, whence=io.SEEK_SET):\n if whence == io.SEEK_SET:\n self.position = offset\n elif whence == io.SEEK_CUR:\n self.position += offset\n elif whence == io.SEEK_END:\n self.position = self.size + offset\n else:\n raise ValueError(\"invalid whence (%r, should be %d, %d, %d)\" % (\n whence, io.SEEK_SET, io.SEEK_CUR, io.SEEK_END\n ))\n\n return self.position\n\n def seekable(self):\n return True\n\n def read(self, size=-1):\n if size == -1:\n # Read to the end of the file\n range_header = \"bytes=%d-\" % self.position\n self.seek(offset=0, whence=io.SEEK_END)\n else:\n new_position = self.position + size\n\n # If we're going to read beyond the end of the object, return\n # the entire object.\n if new_position >= self.size:\n return self.read()\n\n range_header = \"bytes=%d-%d\" % (self.position, new_position - 1)\n self.seek(offset=size, whence=io.SEEK_CUR)\n\n return self.s3_object.get(Range=range_header)[\"Body\"].read()\n\n def readable(self):\n return True\n\n\nif __name__ == \"__main__\":\n import zipfile\n\n import boto3\n\n s3 = boto3.resource(\"s3\")\n s3_object = s3.Object(bucket_name=\"bukkit\", key=\"bagit.zip\")\n\n s3_file = S3File(s3_object)\n\n with zipfile.ZipFile(s3_file) as zf:\n print(zf.namelist())\n\nReference:\n\nhttps://alexwlchan.net/2019/02/working-with-large-s3-objects/\n\n"
] | [
6,
3,
2,
0
] | [] | [] | [
"amazon_s3",
"boto",
"boto3",
"python",
"zip"
] | stackoverflow_0051351000_amazon_s3_boto_boto3_python_zip.txt |
Q:
Pre-existing high-contrast palettes from the ANSI color set, to use in a terminal app?
Looking to convey more information in a Rich Table by using colors (specifically to track which modules given classes come from).
On the web, it's fairly easy to find color palettes that are optimized for contrast rather than esthetics. Here's a 6-color example. Then it's just a question of using the RGB/HSL specs to drive your CSS.
Rich has a nice list of ANSI colors in rich.colors.ANSI_COLOR_NAMES. But there is no indication of what colors would constitute a high-contrast 10-12 color palette.
Are there such lists available, for ANSI colors to be used in terminal apps? Or should I just find a web palette and use rich.colors.Color.from_rgb() to build myself such a palette?
A:
OK, well, that was simple, no need to mess around with from_rgb, because of what styles support.
And, after reading Will's answer, I've modified my original solution to use saved themes. Thanks again for an awesome library, Will!
Alternatively you can use a CSS-like syntax to specify a color with a “#” followed by three pairs of hex characters, or in RGB form with three decimal integers. The following two lines both print “Hello” in the same color (purple):
console.print("Hello", style="#af00ff")
Found a 20-element list here (last 2 are black and white which I am skipping).
hivis.theme.ini
[styles]
; High contrast palette from
; https://sashamaps.net/docs/resources/20-colors/
hivis0 = #e6194b
hivis1 = #3cb44b
hivis2 = #ffe119
hivis3 = #4363d8
hivis4 = #f58231
hivis5 = #911eb4
hivis6 = #46f0f0
hivis7 = #f032e6
hivis8 = #bcf60c
hivis9 = #fabebe
hivis10 = #008080
hivis11 = #e6beff
hivis12 = #9a6324
hivis13 = #fffac8
hivis14 = #800000
hivis15 = #aaffc3
hivis16 = #808000
hivis17 = #ffd8b1
hivis18 = #000075
hivis19 = #808080
;hivis20 = #ffffff
;hivis21 = #000000
script.py
from rich.console import Console
from rich.theme import Theme
with open("hivis.theme.ini") as fi:
theme = Theme.from_file(fi)
palette = [name for name in theme.styles.keys() if name.startswith("hivis")]
console = Console(theme=theme)
console.print("[hivis1] First Hivis. [hivis2] Second Hivis.")
for ix, rgb in enumerate(palette):
console.print(f"[{palette[ix]}] {rgb}")
A:
Consider using Rich's themes to give names to your colors. That way you won't have to reference your list. You can do print("Hello [color3]World![/]")
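A minimal sketch of that suggestion (the style names and hex values below are arbitrary placeholders, not something built into Rich):
from rich.console import Console
from rich.theme import Theme

# hypothetical mapping of names to high-contrast colors
theme = Theme({
    "color1": "#e6194b",
    "color2": "#3cb44b",
    "color3": "#4363d8",
})

console = Console(theme=theme)
console.print("Hello [color3]World![/]")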
| Pre-existing high-contrast palettes from the ANSI color set, to use in a terminal app? | Looking to convey more information in a Rich Table by using colors (specifically to track which modules given classes come from).
On the web, it's fairly easy to find color palettes that are optimized for contrast rather than esthetics. Here's a 6-color example. Then it's just a question of using the RGB/HSL specs to drive your CSS.
Rich has a nice list of ANSI colors in rich.colors.ANSI_COLOR_NAMES. But there is no indication of what colors would constitute a high-contrast 10-12 color palette.
Are there such lists available, for ANSI colors to be used in terminal apps? Or should I just find a web palette and use rich.colors.Color.from_rgb() to build myself such a palette?
| [
"OK, well, that was simple, no need to mess around with from_rgb, because of what styles support.\nAnd, after reading Will's answer, I've modified my original solution to use saved themes. Thanks again for an awesome library, Will!\n\nAlternatively you can use a CSS-like syntax to specify a color with a “#” followed by three pairs of hex characters, or in RGB form with three decimal integers. The following two lines both print “Hello” in the same color (purple):\nconsole.print(\"Hello\", style=\"#af00ff\")\n\nFound a 20-element list here (last 2 are black and white which I am skipping).\nhivis.theme.ini\n[styles]\n\n; High constrast palette from \n; https://sashamaps.net/docs/resources/20-colors/\n\nhivis0 = #e6194b\nhivis1 = #3cb44b\nhivis2 = #ffe119\nhivis3 = #4363d8\nhivis4 = #f58231\nhivis5 = #911eb4\nhivis6 = #46f0f0\nhivis7 = #f032e6\nhivis8 = #bcf60c\nhivis9 = #fabebe\nhivis10 = #008080\nhivis11 = #e6beff\nhivis12 = #9a6324\nhivis13 = #fffac8\nhivis14 = #800000\nhivis15 = #aaffc3\nhivis16 = #808000\nhivis17 = #ffd8b1\nhivis18 = #000075\nhivis19 = #808080\n;hivis20 = #ffffff\n;hivis21 = #000000\n\nscript.py\nfrom rich.console import Console\nfrom rich.theme import Theme\n\nwith open(\"hivis.theme.ini\") as fi:\n theme = Theme.from_file(fi)\n\npalette = [name for name in theme.styles.keys() if name.startswith(\"hivis\")]\n\nconsole = Console(theme=theme)\nconsole.print(\"[hivis1] First Hivis. [hivis2] Second Hivis.\")\nfor ix, rgb in enumerate(palette):\n console.print(f\"[{palette[ix]}] {rgb}\")\n\n\n\n",
"Consider using Rich's themes to give names to your colors. That way you won't have to reference your list. You can do print(\"Hello [color3]World![/]\")\n"
] | [
1,
1
] | [] | [] | [
"python",
"rich",
"tui"
] | stackoverflow_0074608971_python_rich_tui.txt |
Q:
python program to return exit code 0 if passes and 1 if fails
I have a text file that includes a word like "test". What I am trying to do is open that text file using Python and search for that word. If the word exists then the Python program should return exit code 0, otherwise 1.
This is the code that i have written that returns 0 or 1.
word = "test"
def check():
with open("tex.txt", "r") as file:
for line_number, line in enumerate(file, start=1):
if word in line:
return 0
else:
return 1
print(check())
output
0
What I want is to store this exit code in some variable so that I can pass it to a YAML file. Can someone guide me on how I can store this exit code in a variable? Thanks in advance.
A:
I think you are looking for sys.exit(), but since you edited your question, I am not sure anymore. Try this:
import sys
word = "test"
def check():
with open("tex.txt", "r") as file:
for line_number, line in enumerate(file, start=1):
if word in line:
return 0
return 1
is_word_found = check() # store the return value of check() in variable `is_word_found`
print(is_word_found) # prints value of `is_word_found` to standard output
sys.exit(is_word_found) # exit the program with status code `is_word_found`
BTW: as @gimix and @VPfB mentioned, your check() function returned a value immediately after it checked the first line of your file. I included a fix for that: now 1 is returned only if the word is not found in any line of your file.
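As a follow-up, the status code set by sys.exit() can be read back by whatever launches the script, for example (a sketch, assuming the code above is saved as a hypothetical check_word.py):
import subprocess

result = subprocess.run(["python", "check_word.py"])  # check_word.py is a hypothetical filename
exit_code = result.returncode  # 0 if the word was found, 1 otherwise
print(exit_code)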
| python program to return exit code 0 if passes and 1 if fails | I have a text file that includes a word like "test". Now what i am trying to do is using python i am opening that text file and search for that word. If the word exists then the python program should return exit code 0 or else 1.
This is the code that i have written that returns 0 or 1.
word = "test"
def check():
with open("tex.txt", "r") as file:
for line_number, line in enumerate(file, start=1):
if word in line:
return 0
else:
return 1
print(check())
output
0
What I want is to store this exit code in some variable so that I can pass it to a YAML file. Can someone guide me on how I can store this exit code in a variable? Thanks in advance.
| [
"I think you are looking for sys.exit(), but since you edited your question, I am not sure anymore. Try this:\nimport sys\n\nword = \"test\"\n\ndef check():\n with open(\"tex.txt\", \"r\") as file:\n for line_number, line in enumerate(file, start=1): \n if word in line:\n return 0\n \n return 1\n\nis_word_found = check() # store the return value of check() in variable `is_word_found`\nprint(is_word_found) # prints value of `is_word_found` to standard output\nsys.exit(is_word_found) # exit the program with status code `is_word_found`\n\nBTW: as @gimix and @VPfB mentioned, your check() function did return a value immediately after it checked the first line of your file. I included a fix for that, now 1 is returned only if the word is not found is any line of your file.\n"
] | [
1
] | [] | [] | [
"error_code",
"python",
"python_3.x"
] | stackoverflow_0074628734_error_code_python_python_3.x.txt |
Q:
"No columns to parse from file" error when trying to transform string into Pandas dataframe
I have a string object ("textData") which contains CSV data.
I'm able to save it as CSV by:
with open(fileName, "w") as text_file:
print(textData, file=text_file)
but I would like to work with the data in pandas before saving the csv. So I'm trying to get the data into a pandas df.
import pandas as pd
from io import StringIO
df = pd.read_csv(StringIO(textData), sep=",")
I get this error: EmptyDataError: No columns to parse from file
This is the textData string:
R$M21,2021-01-26,1.3265,1.3265,1.3265,1.3265,0,0
R$M21,2021-01-27,1.3263,1.3263,1.3263,1.3263,0,0
R$M21,2021-01-28,1.3319,1.3319,1.3319,1.3319,0,0
R$M21,2021-01-29,1.3287,1.3287,1.3287,1.3287,0,0
R$M21,2021-02-01,1.3315,1.3315,1.3315,1.3315,0,0
R$M21,2021-02-02,1.3328,1.3328,1.3328,1.3328,0,0
R$M21,2021-02-03,1.3331,1.3331,1.3331,1.3331,0,0
R$M21,2021-02-04,1.3361,1.3361,1.3361,1.3361,0,0
R$M21,2021-02-05,1.3383,1.3383,1.3383,1.3383,0,0
R$M21,2021-02-08,1.3354,1.3354,1.3354,1.3354,0,0
R$M21,2021-02-09,1.3279,1.3279,1.3279,1.3279,0,0
R$M21,2021-02-10,1.3259,1.3259,1.3259,1.3259,0,0
R$M21,2021-02-11,1.3253,1.3253,1.3253,1.3253,0,0
R$M21,2021-02-12,1.3272,1.3272,1.3272,1.3272,0,0
R$M21,2021-02-15,1.3224,1.3224,1.3224,1.3224,0,0
R$M21,2021-02-16,1.3232,1.3232,1.3232,1.3232,0,0
R$M21,2021-02-17,1.329,1.329,1.329,1.329,0,0
R$M21,2021-02-18,1.3275,1.3275,1.3275,1.3275,0,0
R$M21,2021-02-19,1.3246,1.3246,1.3246,1.3246,0,0
R$M21,2021-02-22,1.3235,1.3235,1.3235,1.3235,0,0
R$M21,2021-02-23,1.3216,1.3216,1.3216,1.3216,0,0
R$M21,2021-02-24,1.321,1.321,1.321,1.321,0,0
R$M21,2021-02-25,1.3181,1.3181,1.3181,1.3181,0,0
R$M21,2021-02-26,1.3313,1.3313,1.3313,1.3313,0,0
R$M21,2021-03-01,1.3323,1.3323,1.3323,1.3323,0,0
R$M21,2021-03-02,1.3315,1.3315,1.3315,1.3315,0,0
R$M21,2021-03-03,1.3309,1.3309,1.3309,1.3309,0,0
R$M21,2021-03-04,1.3328,1.3328,1.3328,1.3328,0,0
R$M21,2021-03-05,1.3417,1.3417,1.3417,1.3417,0,0
R$M21,2021-03-08,1.3479,1.3479,1.3479,1.3479,0,0
R$M21,2021-03-09,1.345,1.345,1.345,1.345,0,0
R$M21,2021-03-10,1.3476,1.3476,1.3476,1.3476,0,0
R$M21,2021-03-11,1.3403,1.3403,1.3403,1.3403,0,0
R$M21,2021-03-12,1.3463,1.3463,1.3463,1.3463,0,0
R$M21,2021-03-15,1.3456,1.3456,1.3456,1.3456,35,35
R$M21,2021-03-16,1.3455,1.3456,1.3452,1.3454,85,20
R$M21,2021-03-17,1.3457,1.3479,1.3451,1.3479,0,20
R$M21,2021-03-18,1.3432,1.3432,1.3432,1.3432,0,20
R$M21,2021-03-19,1.3425,1.3425,1.3425,1.3425,20,0
R$M21,2021-03-22,1.3434,1.3434,1.3405,1.3405,20,0
R$M21,2021-03-23,1.3433,1.3433,1.3433,1.3433,0,0
R$M21,2021-03-24,1.3461,1.3461,1.3461,1.3461,6,6
R$M21,2021-03-25,1.3476,1.3476,1.3472,1.3472,0,6
R$M21,2021-03-26,1.3477,1.3477,1.3477,1.3477,0,6
R$M21,2021-03-29,1.3467,1.3467,1.3467,1.3467,0,6
R$M21,2021-03-30,1.3483,1.3483,1.3483,1.3483,0,6
R$M21,2021-03-31,1.3448,1.3448,1.3448,1.3448,0,6
R$M21,2021-04-01,1.3461,1.3461,1.3461,1.3461,0,6
R$M21,2021-04-02,1.3442,1.3442,1.3442,1.3442,0,6
R$M21,2021-04-05,1.3446,1.3446,1.3446,1.3446,0,6
R$M21,2021-04-06,1.3418,1.3418,1.3418,1.3418,10,11
R$M21,2021-04-07,1.339,1.3398,1.3389,1.3389,0,11
R$M21,2021-04-08,1.3406,1.3406,1.3406,1.3406,0,11
R$M21,2021-04-09,1.3411,1.3411,1.3411,1.3411,23,28
R$M21,2021-04-12,1.3427,1.3427,1.3406,1.3406,3,31
R$M21,2021-04-13,1.3425,1.3431,1.3425,1.3431,20,51
R$M21,2021-04-14,1.3374,1.3378,1.3374,1.3375,0,51
R$M21,2021-04-15,1.335,1.335,1.335,1.335,217,222
R$M21,2021-04-16,1.3358,1.3358,1.3337,1.3337,416,407
R$M21,2021-04-19,1.3344,1.3346,1.331,1.331,370,428
R$M21,2021-04-20,1.3305,1.3316,1.3265,1.3283,5,431
R$M21,2021-04-21,1.3291,1.3302,1.3291,1.3302,100,422
R$M21,2021-04-22,1.3304,1.3304,1.3279,1.3279,10,427
R$M21,2021-04-23,1.3277,1.3277,1.3274,1.3274,16,437
R$M21,2021-04-26,1.3273,1.3273,1.3256,1.326,204,438
R$M21,2021-04-27,1.3259,1.3267,1.3255,1.3257,79,429
R$M21,2021-04-28,1.3274,1.3278,1.3262,1.3262,22,441
R$M21,2021-04-29,1.326,1.3265,1.3245,1.3255,16,457
R$M21,2021-04-30,1.3266,1.3277,1.3266,1.3277,60,457
R$M21,2021-05-03,1.328,1.3341,1.328,1.3318,8,458
R$M21,2021-05-04,1.3298,1.3366,1.3298,1.3366,110,466
R$M21,2021-05-05,1.3376,1.3387,1.3351,1.3358,0,466
R$M21,2021-05-06,1.3349,1.3349,1.3349,1.3349,1,467
R$M21,2021-05-07,1.332,1.332,1.3316,1.3316,25,466
R$M21,2021-05-10,1.3263,1.3263,1.3247,1.3247,187,480
R$M21,2021-05-11,1.3244,1.3276,1.3244,1.3251,6,486
R$M21,2021-05-12,1.329,1.329,1.3287,1.3287,119,586
R$M21,2021-05-13,1.3312,1.3366,1.3294,1.3343,270,738
R$M21,2021-05-14,1.3346,1.3371,1.3338,1.3338,392,841
R$M21,2021-05-17,1.3332,1.3361,1.3319,1.3356,99,835
R$M21,2021-05-18,1.3358,1.3358,1.3295,1.33,93,785
R$M21,2021-05-19,1.3295,1.333,1.3287,1.3328,25,784
R$M21,2021-05-20,1.335,1.3354,1.3326,1.3329,26,773
R$M21,2021-05-21,1.3309,1.3309,1.3301,1.3301,25,777
R$M21,2021-05-24,1.3298,1.3318,1.3298,1.3301,39,767
R$M21,2021-05-25,1.3293,1.3293,1.3253,1.3254,28,782
R$M21,2021-05-26,1.3249,1.3249,1.323,1.3235,48,770
R$M21,2021-05-27,1.3245,1.3247,1.3229,1.3229,51,805
R$M21,2021-05-28,1.3238,1.3247,1.323,1.3244,76,826
R$M21,2021-05-31,1.3237,1.3237,1.3223,1.3226,16,826
R$M21,2021-06-01,1.3194,1.3227,1.3194,1.3227,34,808
R$M21,2021-06-02,1.323,1.3248,1.322,1.3248,50,785
R$M21,2021-06-03,1.3235,1.3245,1.3228,1.3244,137,720
R$M21,2021-06-04,1.3276,1.3285,1.3274,1.3285,219,564
R$M21,2021-06-07,1.3251,1.3252,1.3232,1.3232,42,544
R$M21,2021-06-08,1.3236,1.3238,1.3226,1.3237,290,343
R$M21,2021-06-09,1.3232,1.3243,1.3231,1.3233,48,343
R$M21,2021-06-10,1.3239,1.3253,1.3238,1.3244,406,292
R$M21,2021-06-11,1.3249,1.3261,1.3217,1.324,107,0
R$M21,2021-06-14,1.3252,1.3271,1.3252,1.3261,107,0
What am I doing wrong?
Thanks
A:
The error is in the parts you aren't showing us, because your code works fine. I'm guessing you don't have newlines separating the lines.
C:\tmp>type x.py
textData="""\
R$M21,2021-06-08,1.3236,1.3238,1.3226,1.3237,290,343
R$M21,2021-06-09,1.3232,1.3243,1.3231,1.3233,48,343
R$M21,2021-06-10,1.3239,1.3253,1.3238,1.3244,406,292
R$M21,2021-06-11,1.3249,1.3261,1.3217,1.324,107,0
R$M21,2021-06-14,1.3252,1.3271,1.3252,1.3261,107,0"""
import pandas as pd
from io import StringIO
df = pd.read_csv(StringIO(textData), sep=",")
print(df)
C:\tmp>python x.py
R$M21 2021-06-08 1.3236 1.3238 1.3226 1.3237 290 343
0 R$M21 2021-06-09 1.3232 1.3243 1.3231 1.3233 48 343
1 R$M21 2021-06-10 1.3239 1.3253 1.3238 1.3244 406 292
2 R$M21 2021-06-11 1.3249 1.3261 1.3217 1.3240 107 0
3 R$M21 2021-06-14 1.3252 1.3271 1.3252 1.3261 107 0
C:\tmp>
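One detail visible in the output above: because the data has no header row, read_csv treats the first data line as column names. Passing header=None (and optionally your own names) avoids that; here is a sketch with made-up column labels:
import pandas as pd
from io import StringIO

columns = ["symbol", "date", "open", "high", "low", "close", "volume", "open_interest"]  # assumed labels
df = pd.read_csv(StringIO(textData), sep=",", header=None, names=columns)
print(df.head())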
A:
First, make sure to add a newline after each line, best done via os.linesep.
Then set the StringIO buffer "head position" to start, aka 0, before passing it to pandas:
import os
import pandas as pd
from io import StringIO
buffer = StringIO()
buffer.write('hello,23,2022,bye' + os.linesep)
buffer.write('world,43,2025,then' + os.linesep)
buffer.seek(0)
df = pd.read_csv(buffer, sep=',', header=None)
print(df)
This will yield:
0 1 2 3
0 hello 23 2022 bye
1 world 43 2025 then
[Python-3.9]
| "No columns to parse from file" error when trying to transform string into Pandas dataframe | I have a string object ("textData") which contains CSV data.
I'm able to save it as CSV by:
with open(fileName, "w") as text_file:
print(textData, file=text_file)
but I would like to work with the data in pandas before saving the csv. So I'm trying to get the data into a pandas df.
import pandas as pd
from io import StringIO
df = pd.read_csv(StringIO(textData), sep=",")
I get this error: EmptyDataError: No columns to parse from file
This is the textData string:
R$M21,2021-01-26,1.3265,1.3265,1.3265,1.3265,0,0
R$M21,2021-01-27,1.3263,1.3263,1.3263,1.3263,0,0
R$M21,2021-01-28,1.3319,1.3319,1.3319,1.3319,0,0
R$M21,2021-01-29,1.3287,1.3287,1.3287,1.3287,0,0
R$M21,2021-02-01,1.3315,1.3315,1.3315,1.3315,0,0
R$M21,2021-02-02,1.3328,1.3328,1.3328,1.3328,0,0
R$M21,2021-02-03,1.3331,1.3331,1.3331,1.3331,0,0
R$M21,2021-02-04,1.3361,1.3361,1.3361,1.3361,0,0
R$M21,2021-02-05,1.3383,1.3383,1.3383,1.3383,0,0
R$M21,2021-02-08,1.3354,1.3354,1.3354,1.3354,0,0
R$M21,2021-02-09,1.3279,1.3279,1.3279,1.3279,0,0
R$M21,2021-02-10,1.3259,1.3259,1.3259,1.3259,0,0
R$M21,2021-02-11,1.3253,1.3253,1.3253,1.3253,0,0
R$M21,2021-02-12,1.3272,1.3272,1.3272,1.3272,0,0
R$M21,2021-02-15,1.3224,1.3224,1.3224,1.3224,0,0
R$M21,2021-02-16,1.3232,1.3232,1.3232,1.3232,0,0
R$M21,2021-02-17,1.329,1.329,1.329,1.329,0,0
R$M21,2021-02-18,1.3275,1.3275,1.3275,1.3275,0,0
R$M21,2021-02-19,1.3246,1.3246,1.3246,1.3246,0,0
R$M21,2021-02-22,1.3235,1.3235,1.3235,1.3235,0,0
R$M21,2021-02-23,1.3216,1.3216,1.3216,1.3216,0,0
R$M21,2021-02-24,1.321,1.321,1.321,1.321,0,0
R$M21,2021-02-25,1.3181,1.3181,1.3181,1.3181,0,0
R$M21,2021-02-26,1.3313,1.3313,1.3313,1.3313,0,0
R$M21,2021-03-01,1.3323,1.3323,1.3323,1.3323,0,0
R$M21,2021-03-02,1.3315,1.3315,1.3315,1.3315,0,0
R$M21,2021-03-03,1.3309,1.3309,1.3309,1.3309,0,0
R$M21,2021-03-04,1.3328,1.3328,1.3328,1.3328,0,0
R$M21,2021-03-05,1.3417,1.3417,1.3417,1.3417,0,0
R$M21,2021-03-08,1.3479,1.3479,1.3479,1.3479,0,0
R$M21,2021-03-09,1.345,1.345,1.345,1.345,0,0
R$M21,2021-03-10,1.3476,1.3476,1.3476,1.3476,0,0
R$M21,2021-03-11,1.3403,1.3403,1.3403,1.3403,0,0
R$M21,2021-03-12,1.3463,1.3463,1.3463,1.3463,0,0
R$M21,2021-03-15,1.3456,1.3456,1.3456,1.3456,35,35
R$M21,2021-03-16,1.3455,1.3456,1.3452,1.3454,85,20
R$M21,2021-03-17,1.3457,1.3479,1.3451,1.3479,0,20
R$M21,2021-03-18,1.3432,1.3432,1.3432,1.3432,0,20
R$M21,2021-03-19,1.3425,1.3425,1.3425,1.3425,20,0
R$M21,2021-03-22,1.3434,1.3434,1.3405,1.3405,20,0
R$M21,2021-03-23,1.3433,1.3433,1.3433,1.3433,0,0
R$M21,2021-03-24,1.3461,1.3461,1.3461,1.3461,6,6
R$M21,2021-03-25,1.3476,1.3476,1.3472,1.3472,0,6
R$M21,2021-03-26,1.3477,1.3477,1.3477,1.3477,0,6
R$M21,2021-03-29,1.3467,1.3467,1.3467,1.3467,0,6
R$M21,2021-03-30,1.3483,1.3483,1.3483,1.3483,0,6
R$M21,2021-03-31,1.3448,1.3448,1.3448,1.3448,0,6
R$M21,2021-04-01,1.3461,1.3461,1.3461,1.3461,0,6
R$M21,2021-04-02,1.3442,1.3442,1.3442,1.3442,0,6
R$M21,2021-04-05,1.3446,1.3446,1.3446,1.3446,0,6
R$M21,2021-04-06,1.3418,1.3418,1.3418,1.3418,10,11
R$M21,2021-04-07,1.339,1.3398,1.3389,1.3389,0,11
R$M21,2021-04-08,1.3406,1.3406,1.3406,1.3406,0,11
R$M21,2021-04-09,1.3411,1.3411,1.3411,1.3411,23,28
R$M21,2021-04-12,1.3427,1.3427,1.3406,1.3406,3,31
R$M21,2021-04-13,1.3425,1.3431,1.3425,1.3431,20,51
R$M21,2021-04-14,1.3374,1.3378,1.3374,1.3375,0,51
R$M21,2021-04-15,1.335,1.335,1.335,1.335,217,222
R$M21,2021-04-16,1.3358,1.3358,1.3337,1.3337,416,407
R$M21,2021-04-19,1.3344,1.3346,1.331,1.331,370,428
R$M21,2021-04-20,1.3305,1.3316,1.3265,1.3283,5,431
R$M21,2021-04-21,1.3291,1.3302,1.3291,1.3302,100,422
R$M21,2021-04-22,1.3304,1.3304,1.3279,1.3279,10,427
R$M21,2021-04-23,1.3277,1.3277,1.3274,1.3274,16,437
R$M21,2021-04-26,1.3273,1.3273,1.3256,1.326,204,438
R$M21,2021-04-27,1.3259,1.3267,1.3255,1.3257,79,429
R$M21,2021-04-28,1.3274,1.3278,1.3262,1.3262,22,441
R$M21,2021-04-29,1.326,1.3265,1.3245,1.3255,16,457
R$M21,2021-04-30,1.3266,1.3277,1.3266,1.3277,60,457
R$M21,2021-05-03,1.328,1.3341,1.328,1.3318,8,458
R$M21,2021-05-04,1.3298,1.3366,1.3298,1.3366,110,466
R$M21,2021-05-05,1.3376,1.3387,1.3351,1.3358,0,466
R$M21,2021-05-06,1.3349,1.3349,1.3349,1.3349,1,467
R$M21,2021-05-07,1.332,1.332,1.3316,1.3316,25,466
R$M21,2021-05-10,1.3263,1.3263,1.3247,1.3247,187,480
R$M21,2021-05-11,1.3244,1.3276,1.3244,1.3251,6,486
R$M21,2021-05-12,1.329,1.329,1.3287,1.3287,119,586
R$M21,2021-05-13,1.3312,1.3366,1.3294,1.3343,270,738
R$M21,2021-05-14,1.3346,1.3371,1.3338,1.3338,392,841
R$M21,2021-05-17,1.3332,1.3361,1.3319,1.3356,99,835
R$M21,2021-05-18,1.3358,1.3358,1.3295,1.33,93,785
R$M21,2021-05-19,1.3295,1.333,1.3287,1.3328,25,784
R$M21,2021-05-20,1.335,1.3354,1.3326,1.3329,26,773
R$M21,2021-05-21,1.3309,1.3309,1.3301,1.3301,25,777
R$M21,2021-05-24,1.3298,1.3318,1.3298,1.3301,39,767
R$M21,2021-05-25,1.3293,1.3293,1.3253,1.3254,28,782
R$M21,2021-05-26,1.3249,1.3249,1.323,1.3235,48,770
R$M21,2021-05-27,1.3245,1.3247,1.3229,1.3229,51,805
R$M21,2021-05-28,1.3238,1.3247,1.323,1.3244,76,826
R$M21,2021-05-31,1.3237,1.3237,1.3223,1.3226,16,826
R$M21,2021-06-01,1.3194,1.3227,1.3194,1.3227,34,808
R$M21,2021-06-02,1.323,1.3248,1.322,1.3248,50,785
R$M21,2021-06-03,1.3235,1.3245,1.3228,1.3244,137,720
R$M21,2021-06-04,1.3276,1.3285,1.3274,1.3285,219,564
R$M21,2021-06-07,1.3251,1.3252,1.3232,1.3232,42,544
R$M21,2021-06-08,1.3236,1.3238,1.3226,1.3237,290,343
R$M21,2021-06-09,1.3232,1.3243,1.3231,1.3233,48,343
R$M21,2021-06-10,1.3239,1.3253,1.3238,1.3244,406,292
R$M21,2021-06-11,1.3249,1.3261,1.3217,1.324,107,0
R$M21,2021-06-14,1.3252,1.3271,1.3252,1.3261,107,0
What am I doing wrong?
Thanks
| [
"The error is in the parts you aren't showing us, because your code works fine. I'm guessing you don't have newlines separating the lines.\nC:\\tmp>type x.py\n\ntextData=\"\"\"\\\nR$M21,2021-06-08,1.3236,1.3238,1.3226,1.3237,290,343\nR$M21,2021-06-09,1.3232,1.3243,1.3231,1.3233,48,343\nR$M21,2021-06-10,1.3239,1.3253,1.3238,1.3244,406,292\nR$M21,2021-06-11,1.3249,1.3261,1.3217,1.324,107,0\nR$M21,2021-06-14,1.3252,1.3271,1.3252,1.3261,107,0\"\"\"\n\nimport pandas as pd\nfrom io import StringIO\n\ndf = pd.read_csv(StringIO(textData), sep=\",\")\nprint(df)\n\nC:\\tmp>python x.py\n R$M21 2021-06-08 1.3236 1.3238 1.3226 1.3237 290 343\n0 R$M21 2021-06-09 1.3232 1.3243 1.3231 1.3233 48 343\n1 R$M21 2021-06-10 1.3239 1.3253 1.3238 1.3244 406 292\n2 R$M21 2021-06-11 1.3249 1.3261 1.3217 1.3240 107 0\n3 R$M21 2021-06-14 1.3252 1.3271 1.3252 1.3261 107 0\n\nC:\\tmp>\n\n",
"First, make sure to add newline after each line, best through os.linesep.\nThen set the StringIO buffer \"head position\" to start, aka 0, before passing it to pandas:\nimport os\nimport pandas as pd\nfrom io import StringIO\n\nbuffer = StringIO()\nbuffer.write('hello,23,2022,bye' + os.linesep)\nbuffer.write('world,43,2025,then' + os.linesep)\nbuffer.seek(0)\ndf = pd.read_csv(buffer, sep=',', header=None)\n\nprint(df)\n\nThis will yield:\n 0 1 2 3\n0 hello 23 2022 bye\n1 world 43 2025 then\n\n[Python-3.9]\n"
] | [
2,
0
] | [] | [] | [
"csv",
"pandas",
"python",
"python_3.x",
"stringio"
] | stackoverflow_0068492173_csv_pandas_python_python_3.x_stringio.txt |
Q:
Fetch Candlestick/Kline data from Binance API using Python (preferably requests) to get JSON Dat
I am developing a telegram bot that fetches Candlestick Data from Binance API. I am unable to get JSON Data as a response. The following code is something that I tried.
import requests
import json
import urllib.request
`url = "https://api.binance.com/api/v1/klines"
response = requests.request("GET", url)
print(response.text)`
Desired Output:
[
[
1499040000000, // Open time
"0.01634790", // Open
"0.80000000", // High
"0.01575800", // Low
"0.01577100", // Close
"148976.11427815", // Volume
1499644799999, // Close time
"2434.19055334", // Quote asset volume
308, // Number of trades
"1756.87402397", // Taker buy base asset volume
"28.46694368", // Taker buy quote asset volume
"17928899.62484339" // Ignore
]
]
Question Edited:
The output that I am getting is:
{"code":-1102,"msg":"Mandatory parameter 'symbol' was not sent, was empty/null, or malformed."}
A:
you are missing the mandatory parameters symbol and interval, the query should be like this:
https://api.binance.com/api/v3/klines?symbol=BTCUSDT&interval=1h
you need to import only requests:
import requests
market = 'BTCUSDT'
tick_interval = '1h'
url = 'https://api.binance.com/api/v3/klines?symbol='+market+'&interval='+tick_interval
data = requests.get(url).json()
print(data)
Please check the official Binance REST API documentation here: https://github.com/binance/binance-spot-api-docs/blob/master/rest-api.md
A:
The requests python package has a params, json argument, so you don't need to import any of those packages you are importing.
import requests
url = 'https://api.binance.com/api/v3/klines'
params = {
'symbol': 'BTCUSDT',
'interval': '1h'
}
response = requests.get(url, params=params)
print(response.json())
A:
collect data into dataframe
import requests
import datetime
import pandas as pd
import numpy as np
def get_binance_data_request_(ticker, interval='4h', limit=500, start='2018-01-01 00:00:00'):
"""
interval: str tick interval - 4h/1h/1d ...
"""
columns = ['open_time','open', 'high', 'low', 'close', 'volume','close_time', 'qav','num_trades','taker_base_vol','taker_quote_vol', 'ignore']
start = int(datetime.datetime.timestamp(pd.to_datetime(start))*1000)
url = f'https://www.binance.com/api/v3/klines?symbol={ticker}&interval={interval}&limit={limit}&startTime={start}'
    data = pd.DataFrame(requests.get(url).json(), columns=columns, dtype=float)  # np.float was removed in NumPy 1.24+; use the built-in float
data.index = [pd.to_datetime(x, unit='ms').strftime('%Y-%m-%d %H:%M:%S') for x in data.open_time]
usecols=['open', 'high', 'low', 'close', 'volume', 'qav','num_trades','taker_base_vol','taker_quote_vol']
data = data[usecols]
return data
get_binance_data_request_('ETHUSDT', '1h')
result:
| Fetch Candlestick/Kline data from Binance API using Python (preferably requests) to get JSON Dat | I am developing a telegram bot that fetches Candlestick Data from Binance API. I am unable to get JSON Data as a response. The following code is something that I tried.
import requests
import json
import urllib.request
`url = "https://api.binance.com/api/v1/klines"
response = requests.request("GET", url)
print(response.text)`
Desired Output:
[
[
1499040000000, // Open time
"0.01634790", // Open
"0.80000000", // High
"0.01575800", // Low
"0.01577100", // Close
"148976.11427815", // Volume
1499644799999, // Close time
"2434.19055334", // Quote asset volume
308, // Number of trades
"1756.87402397", // Taker buy base asset volume
"28.46694368", // Taker buy quote asset volume
"17928899.62484339" // Ignore
]
]
Question Edited:
The output that I am getting is:
`{"code":-1102,"msg":"Mandatory parameter 'symbol' was not sent, was empty/null, or malformed."}'
| [
"you are missing the mandatory parameters symbol and interval, the query should be like this:\nhttps://api.binance.com/api/v3/klines?symbol=BTCUSDT&interval=1h\nyou need to import only requests:\nimport requests\n\nmarket = 'BTCUSDT'\ntick_interval = '1h'\n\nurl = 'https://api.binance.com/api/v3/klines?symbol='+market+'&interval='+tick_interval\ndata = requests.get(url).json()\n\nprint(data)\n\nPlease check the official Binance REST API documentation here: https://github.com/binance/binance-spot-api-docs/blob/master/rest-api.md\n",
"The requests python package has a params, json argument, so you don't need to import any of those packages you are importing.\nimport requests\n\nurl = 'https://api.binance.com/api/v3/klines'\nparams = {\n 'symbol': 'BTCUSDT',\n 'interval': '1h'\n}\nresponse = requests.get(url, params=params)\nprint(response.json())\n\n",
"collect data into dataframe\nimport requests\nimport datetime\nimport pandas as pd\nimport numpy as np\n\ndef get_binance_data_request_(ticker, interval='4h', limit=500, start='2018-01-01 00:00:00'):\n \"\"\"\n interval: str tick interval - 4h/1h/1d ...\n \"\"\"\n columns = ['open_time','open', 'high', 'low', 'close', 'volume','close_time', 'qav','num_trades','taker_base_vol','taker_quote_vol', 'ignore']\n start = int(datetime.datetime.timestamp(pd.to_datetime(start))*1000)\n url = f'https://www.binance.com/api/v3/klines?symbol={ticker}&interval={interval}&limit={limit}&startTime={start}'\n data = pd.DataFrame(requests.get(url).json(), columns=columns, dtype=np.float)\n data.index = [pd.to_datetime(x, unit='ms').strftime('%Y-%m-%d %H:%M:%S') for x in data.open_time]\n usecols=['open', 'high', 'low', 'close', 'volume', 'qav','num_trades','taker_base_vol','taker_quote_vol']\n data = data[usecols]\n return data\n\nget_binance_data_request_('ETHUSDT', '1h')\n\nresult:\n\n"
] | [
29,
4,
0
] | [] | [] | [
"api",
"binance",
"candlestick_chart",
"json",
"python"
] | stackoverflow_0051358147_api_binance_candlestick_chart_json_python.txt |
Q:
How to activate an existing virtualenv projects?
I'm a beginner to Django and Python, and I've never used virtualenv before. However, I do know the exact commands to activate and deactivate virtual environments (online search). However, this learning course takes time and sometimes I need to split the work over 2 days.
When I create a virtualenv today and do some work, I'm unable to access the same virtualenv tomorrow. Even when I navigate to that folder and type in .\venv\Scripts\activate, it says "system cannot find specific path".
How can I open already existing virtual environments and the projects within them? Could it be that I need to end my previous session in a certain way for me to access it the next time?
A:
Even though pipenv has had many problems, I suggest you use it when you are new to virtual environments.
Just
pip install pipenv
cd $your-work-directory
pipenv shell
Then you created your project env.
You can active it by:
cd $your-work-directory
pipenv shell
You can install packages by:
cd $your-work-directory
pipenv install $yourpackage --skip-lock
A:
Open the command prompt
Go to the project directory, where you created virtual environment
And type the same as error shows, as in my case it was
File D:\Coding files\Python*recommendation-system\venv\Scripts\activate.ps1* cannot be loaded because running scripts is disabled on this system.
So I typed recommendation-system\venv\Scripts\activate.ps1
And it resolved my problem.
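As a side note, here is a minimal sketch of the usual activation flow on Windows, assuming the environment folder is literally named venv and sits in the project root (adjust the path and folder name if yours differ):
cd path\to\your\project
venv\Scripts\activate            (Command Prompt)
.\venv\Scripts\Activate.ps1      (PowerShell)
If PowerShell refuses to run the script, allowing local scripts for the current user usually helps:
Set-ExecutionPolicy -Scope CurrentUser RemoteSigned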
A:
Use this and it will work:
cd $working directory u have virtual env on
pipenv shell
A:
you can use this command it worked for me for reusing your existing venv
$ workon then the name of your existing venv
| How to activate an existing virtualenv projects? | I'm a beginner to Django and Python, and I've never used virtualenv before. However, I do know the exact commands to activate and deactivate virtual environments (online search). However, this learning course takes time and sometimes I need to split the work over 2 days.
When I create a virtualenv today and do some work, I'm unable to access the same virtualenv tomorrow. Even when I navigate to that folder and type in .\venv\Scripts\activate, it says "system cannot find specific path".
How can I open already existing virtual environments and the projects within them? Could it be that I need to end my previous session in a certain way for me to access it the next time?
| [
"Even though pipenv had so many problems. I suggest you use it when you are new to virtual env.\nJust\npip install pipenv\ncd $your-work-directory\npipenv shell\n\nThen you created your project env.\nYou can active it by:\ncd $your-work-directory\npipenv shell\n\nYou can install packages by:\ncd $your-work-directory\npipenv install $yourpackage --skip-lock\n\n",
"\nOpen the command prompt\nGo to the project directory, where you created virtual environment\nAnd type the same as error shows, as in my case it was\n\nFile D:\\Coding files\\Python*recommendation-system\\venv\\Scripts\\activate.ps1* cannot be loaded because running scripts is disabled on this system.\n\nSo I typed recommendation-system\\venv\\Scripts\\activate.ps1\n\nAnd it resolved my problem.\n",
"Use this and it will work:\ncd $working directory u have virtual env on\npipenv shell\n",
"you can use this command it worked for me for reusing your existing venv\n $ workon then the name of your existing venv\n\n"
] | [
1,
0,
0,
0
] | [] | [] | [
"cmd",
"django",
"python",
"windows"
] | stackoverflow_0062383619_cmd_django_python_windows.txt |
Q:
Looking for the simplest way to scale x-axis labels of a data frame plot
I'm plotting two columns from a data frame as follows:
ax=df[['Fp1','Fp2']].plot(title='Channels Fp1 and Fp2')
ax.set_xlabel("time (sec)")
ax.set_ylabel("mV")
What is the simplest way to scale the values shown on the x-axis? Currently, they are the index values. I simply want to multiply each label value by 1/200 so that the values read 0, .5, 1, 1.5, 2, which will be in seconds for my example.
A:
You can set the ticks manually:
ax.set_xticks(df.index, [i / 200 for i in df.index])
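For context, a minimal runnable sketch of the same idea — the Fp1/Fp2 column names and the 200 Hz sampling rate are taken from the question, and passing labels directly to set_xticks requires matplotlib 3.5 or newer:
import numpy as np
import pandas as pd

# two seconds of fake data sampled at 200 Hz
df = pd.DataFrame({'Fp1': np.random.randn(400), 'Fp2': np.random.randn(400)})
ax = df[['Fp1', 'Fp2']].plot(title='Channels Fp1 and Fp2')
ax.set_xlabel("time (sec)")
ax.set_ylabel("mV")

# place a tick every 100 samples (0.5 s) and label it in seconds instead of index values
ticks = range(0, len(df) + 1, 100)
ax.set_xticks(ticks, [t / 200 for t in ticks])
Using every index value as a tick (as in the one-liner above) also works, but it gets crowded for long recordings; picking a regular subset keeps the axis readable.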
| Looking for the simplest way to scale x-axis labels of a data frame plot | I'm plotting two columns from a data frame as follows:
ax=df[['Fp1','Fp2']].plot(title='Channels Fp1 and Fp2')
ax.set_xlabel("time (sec)")
ax.set_ylabel("mV")
What is the simplest way to scale the values shown on the axis axis? Currently, they are the index values. I simply want to divide each label value by 1/200 so that the values read 0, .5, 1, 1.5, 2, which will be in seconds for my example.
| [
"You can set the ticks manually:\nax.set_xticks(df.index, [i / 200 for i in df.index])\n\n"
] | [
1
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074628768_pandas_python.txt |
Q:
Get rid of excessive logs
I'm trying to remove excessive logs in my framework. During a test run, lots of useless log records are shown in the console, e.g. logs of urllib3, faker and so on.
I'm using Loguru library (tried 'logging' library too -- same result).
Already tried:
adding the option '--log-level' to the browser options for Selenium (no effect, as expected, checked just to be sure)
os.environ['WDM_LOG'] = str(logging.NOTSET) from official Loguru documentation
urllib3_logger = logging.getLogger('urllib3')
urllib3_logger.setLevel(logging.WARNING) also gave no effect
Here's test output example:
[WDM] - ====== WebDriver manager ======
2022-11-30 15:11:10 - conftest - INFO - Start local browser chrome
2022-11-30 15:11:10 - utils.driver_setup - DEBUG - Creating local chrome driver
2022-11-30 15:11:10 - utils.driver_setup.webdriver_factory - DEBUG - Configuring Chrome driver
[WDM] - Current google-chrome version is 107.0.5304
[WDM] - Get LATEST chromedriver version for 107.0.5304 google-chrome
[WDM] - There is no [mac64_m1] chromedriver for browser 107.0.5304 in cache
[WDM] - About to download new driver from https://chromedriver.storage.googleapis.com/107.0.5304.62/chromedriver_mac64_m1.zip
2022-11-30 15:11:31 - utils.driver_setup.webdriver_factory - DEBUG - Getting driver path
2022-11-30 15:11:31 - utils.driver_setup.webdriver_factory - DEBUG - Setting chrome options
2022-11-30 15:11:80 - conftest - INFO - Running precondition: create_business_unit
2022-11-30 15:11:80 - api.business_unit.business_unit_request - DEBUG - Creating business unit with name: automation_bu_1669814565
2022-11-30 15:11:80 - utils.helpers.http_requests - DEBUG - Making POST request to: https://sit.pro.aes/api/business-units with body: {'name': 'automation_bu_1669814565', 'automationFormat': 0, 'receiverNumber': 1, 'alarmAutomationId': 1, 'oldAlarms': 0, 'ipLinks': [], 'ipGroups': [], 'universal': False, 'isDefault': False}
2022-11-30 15:11:36 - conftest - INFO - Running precondition: signin_ui_admin
2022-11-30 15:11:36 - conftest - DEBUG - Signing in as admin: Automation
2022-11-30 15:11:10 - pages.actions.signin_page - DEBUG - Entering username Automation
2022-11-30 15:11:24 - pages.actions.signin_page - DEBUG - Entering password
2022-11-30 15:11:30 - pages.actions.signin_page - DEBUG - Click on signin button
2022-11-30 15:11:35 - pages.actions.common_elements - DEBUG - Accepting start working dialog
2022-11-30 15:11:91 - tests.business_unit.test_INCC_2448_add_non_AES_unit - INFO - Open tab "Business Units" tab
2022-11-30 15:11:91 - pages.actions.common_elements - DEBUG - Opening tab business units
2022-11-30 15:11:18 - tests.business_unit.test_INCC_2448_add_non_AES_unit - INFO - Open business unit: automation_bu_1669814565
2022-11-30 15:11:18 - pages.actions.business_units_page - DEBUG - Opening business unit: automation_bu_1669814565
2022-11-30 15:11:54 - tests.business_unit.test_INCC_2448_add_non_AES_unit - INFO - Add Non-AES Unit with id: 8166
2022-11-30 15:11:54 - pages.actions.business_units_page - DEBUG - Opening business unit tab: Non-AES Units
2022-11-30 15:11:10 - pages.actions.business_units_page - DEBUG - Adding non-AES unit with id: 8166
2022-11-30 15:11:28 - tests.business_unit.test_INCC_2448_add_non_AES_unit - INFO - Verify Non-AES Unit: 8166 is created and present in the list of Non-AES Units
2022-11-30 15:11:29 - pages.actions.business_units_page - DEBUG - Getting non-AES units
PASSED2022-11-30 15:11:32 - api.business_unit.business_unit_request - DEBUG - Deleting business unit with id: 363
2022-11-30 15:11:32 - utils.helpers.http_requests - DEBUG - Making DELETE request to: https://sit.pro.aes/api/business-units/363
2022-11-30 15:11:80 - conftest - INFO - STATUS: PASSED. test_incc_2448_add_non_aes_unit[signin_ui_admin0-chrome]
2022-11-30 15:11:80 - conftest - INFO - Quit driver
tests/business_unit/test_INCC_2448_add_non_AES_unit.py::TestINCC2448AddNonAESUnit::test_incc_2448_add_non_aes_unit[signin_ui_admin0-chrome] ERROR
=============================================================================== ERRORS ===============================================================================
______________________________ ERROR at teardown of TestINCC2448AddNonAESUnit.test_incc_2448_add_non_aes_unit[signin_ui_admin0-chrome] _______________________________
venv/lib/python3.10/site-packages/urllib3/connectionpool.py:449: in _make_request
six.raise_from(e, None)
<string>:3: in raise_from
???
venv/lib/python3.10/site-packages/urllib3/connectionpool.py:444: in _make_request
httplib_response = conn.getresponse()
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/client.py:1374: in getresponse
response.begin()
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/client.py:318: in begin
version, status, reason = self._read_status()
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/client.py:279: in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/socket.py:705: in readinto
return self._sock.recv_into(b)
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/ssl.py:1273: in recv_into
return self.read(nbytes, buffer)
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/ssl.py:1129: in read
return self._sslobj.read(len, buffer)
E TimeoutError: The read operation timed out
During handling of the above exception, another exception occurred:
venv/lib/python3.10/site-packages/requests/adapters.py:440: in send
resp = conn.urlopen(
venv/lib/python3.10/site-packages/urllib3/connectionpool.py:785: in urlopen
retries = retries.increment(
venv/lib/python3.10/site-packages/urllib3/util/retry.py:550: in increment
raise six.reraise(type(error), error, _stacktrace)
venv/lib/python3.10/site-packages/urllib3/packages/six.py:770: in reraise
raise value
venv/lib/python3.10/site-packages/urllib3/connectionpool.py:703: in urlopen
httplib_response = self._make_request(
venv/lib/python3.10/site-packages/urllib3/connectionpool.py:451: in _make_request
self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
venv/lib/python3.10/site-packages/urllib3/connectionpool.py:340: in _raise_timeout
raise ReadTimeoutError(
E urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='sit.pro.aes', port=443): Read timed out. (read timeout=20)
During handling of the above exception, another exception occurred:
conftest.py:159: in create_business_unit
temp_bu.delete()
entities/business_unit.py:50: in delete
BusinessUnitAPI().delete_business_unit(bu_id=self.id_)
api/business_unit/business_unit_request.py:54: in delete_business_unit
return BaseHTTPRequest().delete(url=self.url, headers=self.headers,
utils/helpers/http_requests.py:103: in delete
response = session.delete(url=request_url, headers=headers,
venv/lib/python3.10/site-packages/requests/sessions.py:611: in delete
return self.request('DELETE', url, **kwargs)
venv/lib/python3.10/site-packages/requests/sessions.py:529: in request
resp = self.send(prep, **send_kwargs)
venv/lib/python3.10/site-packages/requests/sessions.py:645: in send
r = adapter.send(request, **kwargs)
venv/lib/python3.10/site-packages/requests/adapters.py:532: in send
raise ReadTimeout(e, request=request)
E requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='sit.pro.aes', port=443): Read timed out. (read timeout=20)
------------------------------------------------------------------------- Captured log setup -------------------------------------------------------------------------
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.address`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.address` has been localized to `en_US`.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.automotive`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.automotive` has been localized to `en_US`.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.bank`.
DEBUG faker.factory:factory.py:88 Specified locale `en_US` is not available for provider `faker.providers.bank`. Locale reset to `en_GB` for this provider.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.barcode`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.barcode` has been localized to `en_US`.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.color`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.color` has been localized to `en_US`.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.company`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.company` has been localized to `en_US`.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.credit_card`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.credit_card` has been localized to `en_US`.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.currency`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.currency` has been localized to `en_US`.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.date_time`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.date_time` has been localized to `en_US`.
DEBUG faker.factory:factory.py:109 Provider `faker.providers.file` does not feature localization. Specified locale `en_US` is not utilized for this provider.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.geo`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.geo` has been localized to `en_US`.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.internet`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.internet` has been localized to `en_US`.
DEBUG faker.factory:factory.py:109 Provider `faker.providers.isbn` does not feature localization. Specified locale `en_US` is not utilized for this provider.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.job`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.job` has been localized to `en_US`.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.lorem`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.lorem` has been localized to `en_US`.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.misc`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.misc` has been localized to `en_US`.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.person`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.person` has been localized to `en_US`.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.phone_number`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.phone_number` has been localized to `en_US`.
DEBUG faker.factory:factory.py:109 Provider `faker.providers.profile` does not feature localization. Specified locale `en_US` is not utilized for this provider.
DEBUG faker.factory:factory.py:109 Provider `faker.providers.python` does not feature localization. Specified locale `en_US` is not utilized for this provider.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.ssn`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.ssn` has been localized to `en_US`.
DEBUG faker.factory:factory.py:109 Provider `faker.providers.user_agent` does not feature localization. Specified locale `en_US` is not utilized for this provider.
DEBUG urllib3.connectionpool:connectionpool.py:1001 Starting new HTTPS connection (1): sit.pro.aes:443
DEBUG urllib3.connectionpool:connectionpool.py:456 https://sit.pro.aes:443 "POST /api/auth/login HTTP/1.1" 200 None
INFO WDM:logger.py:16 ====== WebDriver manager ======
INFO WDM:logger.py:16 Current google-chrome version is 107.0.5304
INFO WDM:logger.py:16 Get LATEST chromedriver version for 107.0.5304 google-chrome
DEBUG urllib3.connectionpool:connectionpool.py:1001 Starting new HTTPS connection (1): chromedriver.storage.googleapis.com:443
DEBUG urllib3.connectionpool:connectionpool.py:456 https://chromedriver.storage.googleapis.com:443 "GET /LATEST_RELEASE_107.0.5304 HTTP/1.1" 200 13
INFO WDM:logger.py:16 There is no [mac64_m1] chromedriver for browser 107.0.5304 in cache
INFO WDM:logger.py:16 About to download new driver from https://chromedriver.storage.googleapis.com/107.0.5304.62/chromedriver_mac64_m1.zip
DEBUG urllib3.connectionpool:connectionpool.py:1001 Starting new HTTPS connection (1): chromedriver.storage.googleapis.com:443
DEBUG urllib3.connectionpool:connectionpool.py:456 https://chromedriver.storage.googleapis.com:443 "GET /107.0.5304.62/chromedriver_mac64_m1.zip HTTP/1.1" 404 214
DEBUG urllib3.connectionpool:connectionpool.py:228 Starting new HTTP connection (1): localhost:65482
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session HTTP/1.1" 200 800
DEBUG urllib3.connectionpool:connectionpool.py:1001 Starting new HTTPS connection (1): sit.pro.aes:443
DEBUG urllib3.connectionpool:connectionpool.py:456 https://sit.pro.aes:443 "POST /api/business-units HTTP/1.1" 201 None
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/url HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "GET /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/d3c7db4e-6baf-45dd-b2d2-4923838a1adf/enabled HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/d3c7db4e-6baf-45dd-b2d2-4923838a1adf/click HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/d3c7db4e-6baf-45dd-b2d2-4923838a1adf/value HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "GET /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/deed88c1-4597-433d-a62e-aff695ca7b1c/enabled HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/deed88c1-4597-433d-a62e-aff695ca7b1c/click HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/deed88c1-4597-433d-a62e-aff695ca7b1c/value HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "GET /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/b327d044-ca75-4e95-9975-5994eaba406b/enabled HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/b327d044-ca75-4e95-9975-5994eaba406b/click HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 404 1609
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 404 1609
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 404 1609
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 404 1609
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 404 1609
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "GET /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/546a51ff-098b-4716-893b-0991291f5f39/enabled HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/546a51ff-098b-4716-893b-0991291f5f39/click HTTP/1.1" 200 14
------------------------------------------------------------------------- Captured log call --------------------------------------------------------------------------
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "GET /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/6a32443e-b03f-4100-a981-94246cabae28/enabled HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/6a32443e-b03f-4100-a981-94246cabae28/click HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 404 1643
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 404 1643
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 404 1643
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/elements HTTP/1.1" 200 1591
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 23
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 15
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 26
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 25
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 18
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 18
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 29
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 21
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 26
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 16
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 25
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 16
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 19
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 17
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 17
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 25
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 26
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 25
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 16
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 26
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "GET /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/3559f863-d3f5-425a-956e-aa6d6a0062e3/enabled HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/3559f863-d3f5-425a-956e-aa6d6a0062e3/click HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 404 1643
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/elements HTTP/1.1" 200 959
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 25
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 21
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 21
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 36
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 36
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 36
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 36
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 36
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 36
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 36
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 36
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 36
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST
----------------------------------------------------------------------- Captured log teardown ------------------------------------------------------------------------
DEBUG urllib3.connectionpool:connectionpool.py:1001 Starting new HTTPS connection (1): sit.pro.aes:443
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "DELETE /session/dc247c59fdf65c04f90f3ceb73afe5c5 HTTP/1.1" 200 14
I want to hide the Captured log setup, Captured log call, Captured log teardown and [WDM] log sections.
I can change the log level manually to 'INFO' (currently it's set to DEBUG), but in that case I lose the DEBUG records that I create on purpose, like this one: 2022-11-30 15:11:30 - pages.actions.signin_page - DEBUG - Click on signin button.
Also, I can run the tests with the pytest parameter --tb=no so that none of the useless records are displayed, but then I lose the error traceback.
A:
The issue was related to pytest itself.
This post helped. You just need to add -p no:logging to the run command or to the pytest.ini file.
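For example, assuming a standard pytest.ini in the project root, the option can be made permanent like this:
[pytest]
addopts = -p no:logging
or passed once on the command line:
pytest -p no:logging
This disables pytest's built-in logging plugin, which is what produces the "Captured log setup/call/teardown" blocks; the error traceback is printed separately, so it stays, and Loguru's own console sink is not affected.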
| Get rid of excessive logs | I'm trying to remove excessive logs in my framework. During test run lot's of useless log records are shown in the console, e.g. logs of urllib3, faker and so on.
I'm using Loguru library (tried 'logging' library too -- same result).
Already tried:
adding option '--log-level' to the browser options for Selenium (no affect, as expected, checked just to be sure)
os.environ['WDM_LOG'] = str(logging.NOTSET) from official Loguru documentation
urllib3_logger = logging.getLogger('urllib3')
urllib3_logger.setLevel(logging.WARNING) also gave no effect
Here's test output example:
[WDM] - ====== WebDriver manager ======
2022-11-30 15:11:10 - conftest - INFO - Start local browser chrome
2022-11-30 15:11:10 - utils.driver_setup - DEBUG - Creating local chrome driver
2022-11-30 15:11:10 - utils.driver_setup.webdriver_factory - DEBUG - Configuring Chrome driver
[WDM] - Current google-chrome version is 107.0.5304
[WDM] - Get LATEST chromedriver version for 107.0.5304 google-chrome
[WDM] - There is no [mac64_m1] chromedriver for browser 107.0.5304 in cache
[WDM] - About to download new driver from https://chromedriver.storage.googleapis.com/107.0.5304.62/chromedriver_mac64_m1.zip
2022-11-30 15:11:31 - utils.driver_setup.webdriver_factory - DEBUG - Getting driver path
2022-11-30 15:11:31 - utils.driver_setup.webdriver_factory - DEBUG - Setting chrome options
2022-11-30 15:11:80 - conftest - INFO - Running precondition: create_business_unit
2022-11-30 15:11:80 - api.business_unit.business_unit_request - DEBUG - Creating business unit with name: automation_bu_1669814565
2022-11-30 15:11:80 - utils.helpers.http_requests - DEBUG - Making POST request to: https://sit.pro.aes/api/business-units with body: {'name': 'automation_bu_1669814565', 'automationFormat': 0, 'receiverNumber': 1, 'alarmAutomationId': 1, 'oldAlarms': 0, 'ipLinks': [], 'ipGroups': [], 'universal': False, 'isDefault': False}
2022-11-30 15:11:36 - conftest - INFO - Running precondition: signin_ui_admin
2022-11-30 15:11:36 - conftest - DEBUG - Signing in as admin: Automation
2022-11-30 15:11:10 - pages.actions.signin_page - DEBUG - Entering username Automation
2022-11-30 15:11:24 - pages.actions.signin_page - DEBUG - Entering password
2022-11-30 15:11:30 - pages.actions.signin_page - DEBUG - Click on signin button
2022-11-30 15:11:35 - pages.actions.common_elements - DEBUG - Accepting start working dialog
2022-11-30 15:11:91 - tests.business_unit.test_INCC_2448_add_non_AES_unit - INFO - Open tab "Business Units" tab
2022-11-30 15:11:91 - pages.actions.common_elements - DEBUG - Opening tab business units
2022-11-30 15:11:18 - tests.business_unit.test_INCC_2448_add_non_AES_unit - INFO - Open business unit: automation_bu_1669814565
2022-11-30 15:11:18 - pages.actions.business_units_page - DEBUG - Opening business unit: automation_bu_1669814565
2022-11-30 15:11:54 - tests.business_unit.test_INCC_2448_add_non_AES_unit - INFO - Add Non-AES Unit with id: 8166
2022-11-30 15:11:54 - pages.actions.business_units_page - DEBUG - Opening business unit tab: Non-AES Units
2022-11-30 15:11:10 - pages.actions.business_units_page - DEBUG - Adding non-AES unit with id: 8166
2022-11-30 15:11:28 - tests.business_unit.test_INCC_2448_add_non_AES_unit - INFO - Verify Non-AES Unit: 8166 is created and present in the list of Non-AES Units
2022-11-30 15:11:29 - pages.actions.business_units_page - DEBUG - Getting non-AES units
PASSED2022-11-30 15:11:32 - api.business_unit.business_unit_request - DEBUG - Deleting business unit with id: 363
2022-11-30 15:11:32 - utils.helpers.http_requests - DEBUG - Making DELETE request to: https://sit.pro.aes/api/business-units/363
2022-11-30 15:11:80 - conftest - INFO - STATUS: PASSED. test_incc_2448_add_non_aes_unit[signin_ui_admin0-chrome]
2022-11-30 15:11:80 - conftest - INFO - Quit driver
tests/business_unit/test_INCC_2448_add_non_AES_unit.py::TestINCC2448AddNonAESUnit::test_incc_2448_add_non_aes_unit[signin_ui_admin0-chrome] ERROR
=============================================================================== ERRORS ===============================================================================
______________________________ ERROR at teardown of TestINCC2448AddNonAESUnit.test_incc_2448_add_non_aes_unit[signin_ui_admin0-chrome] _______________________________
venv/lib/python3.10/site-packages/urllib3/connectionpool.py:449: in _make_request
six.raise_from(e, None)
<string>:3: in raise_from
???
venv/lib/python3.10/site-packages/urllib3/connectionpool.py:444: in _make_request
httplib_response = conn.getresponse()
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/client.py:1374: in getresponse
response.begin()
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/client.py:318: in begin
version, status, reason = self._read_status()
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/client.py:279: in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/socket.py:705: in readinto
return self._sock.recv_into(b)
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/ssl.py:1273: in recv_into
return self.read(nbytes, buffer)
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/ssl.py:1129: in read
return self._sslobj.read(len, buffer)
E TimeoutError: The read operation timed out
During handling of the above exception, another exception occurred:
venv/lib/python3.10/site-packages/requests/adapters.py:440: in send
resp = conn.urlopen(
venv/lib/python3.10/site-packages/urllib3/connectionpool.py:785: in urlopen
retries = retries.increment(
venv/lib/python3.10/site-packages/urllib3/util/retry.py:550: in increment
raise six.reraise(type(error), error, _stacktrace)
venv/lib/python3.10/site-packages/urllib3/packages/six.py:770: in reraise
raise value
venv/lib/python3.10/site-packages/urllib3/connectionpool.py:703: in urlopen
httplib_response = self._make_request(
venv/lib/python3.10/site-packages/urllib3/connectionpool.py:451: in _make_request
self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
venv/lib/python3.10/site-packages/urllib3/connectionpool.py:340: in _raise_timeout
raise ReadTimeoutError(
E urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='sit.pro.aes', port=443): Read timed out. (read timeout=20)
During handling of the above exception, another exception occurred:
conftest.py:159: in create_business_unit
temp_bu.delete()
entities/business_unit.py:50: in delete
BusinessUnitAPI().delete_business_unit(bu_id=self.id_)
api/business_unit/business_unit_request.py:54: in delete_business_unit
return BaseHTTPRequest().delete(url=self.url, headers=self.headers,
utils/helpers/http_requests.py:103: in delete
response = session.delete(url=request_url, headers=headers,
venv/lib/python3.10/site-packages/requests/sessions.py:611: in delete
return self.request('DELETE', url, **kwargs)
venv/lib/python3.10/site-packages/requests/sessions.py:529: in request
resp = self.send(prep, **send_kwargs)
venv/lib/python3.10/site-packages/requests/sessions.py:645: in send
r = adapter.send(request, **kwargs)
venv/lib/python3.10/site-packages/requests/adapters.py:532: in send
raise ReadTimeout(e, request=request)
E requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='sit.pro.aes', port=443): Read timed out. (read timeout=20)
------------------------------------------------------------------------- Captured log setup -------------------------------------------------------------------------
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.address`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.address` has been localized to `en_US`.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.automotive`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.automotive` has been localized to `en_US`.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.bank`.
DEBUG faker.factory:factory.py:88 Specified locale `en_US` is not available for provider `faker.providers.bank`. Locale reset to `en_GB` for this provider.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.barcode`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.barcode` has been localized to `en_US`.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.color`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.color` has been localized to `en_US`.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.company`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.company` has been localized to `en_US`.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.credit_card`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.credit_card` has been localized to `en_US`.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.currency`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.currency` has been localized to `en_US`.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.date_time`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.date_time` has been localized to `en_US`.
DEBUG faker.factory:factory.py:109 Provider `faker.providers.file` does not feature localization. Specified locale `en_US` is not utilized for this provider.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.geo`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.geo` has been localized to `en_US`.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.internet`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.internet` has been localized to `en_US`.
DEBUG faker.factory:factory.py:109 Provider `faker.providers.isbn` does not feature localization. Specified locale `en_US` is not utilized for this provider.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.job`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.job` has been localized to `en_US`.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.lorem`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.lorem` has been localized to `en_US`.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.misc`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.misc` has been localized to `en_US`.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.person`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.person` has been localized to `en_US`.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.phone_number`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.phone_number` has been localized to `en_US`.
DEBUG faker.factory:factory.py:109 Provider `faker.providers.profile` does not feature localization. Specified locale `en_US` is not utilized for this provider.
DEBUG faker.factory:factory.py:109 Provider `faker.providers.python` does not feature localization. Specified locale `en_US` is not utilized for this provider.
DEBUG faker.factory:factory.py:78 Looking for locale `en_US` in provider `faker.providers.ssn`.
DEBUG faker.factory:factory.py:97 Provider `faker.providers.ssn` has been localized to `en_US`.
DEBUG faker.factory:factory.py:109 Provider `faker.providers.user_agent` does not feature localization. Specified locale `en_US` is not utilized for this provider.
DEBUG urllib3.connectionpool:connectionpool.py:1001 Starting new HTTPS connection (1): sit.pro.aes:443
DEBUG urllib3.connectionpool:connectionpool.py:456 https://sit.pro.aes:443 "POST /api/auth/login HTTP/1.1" 200 None
INFO WDM:logger.py:16 ====== WebDriver manager ======
INFO WDM:logger.py:16 Current google-chrome version is 107.0.5304
INFO WDM:logger.py:16 Get LATEST chromedriver version for 107.0.5304 google-chrome
DEBUG urllib3.connectionpool:connectionpool.py:1001 Starting new HTTPS connection (1): chromedriver.storage.googleapis.com:443
DEBUG urllib3.connectionpool:connectionpool.py:456 https://chromedriver.storage.googleapis.com:443 "GET /LATEST_RELEASE_107.0.5304 HTTP/1.1" 200 13
INFO WDM:logger.py:16 There is no [mac64_m1] chromedriver for browser 107.0.5304 in cache
INFO WDM:logger.py:16 About to download new driver from https://chromedriver.storage.googleapis.com/107.0.5304.62/chromedriver_mac64_m1.zip
DEBUG urllib3.connectionpool:connectionpool.py:1001 Starting new HTTPS connection (1): chromedriver.storage.googleapis.com:443
DEBUG urllib3.connectionpool:connectionpool.py:456 https://chromedriver.storage.googleapis.com:443 "GET /107.0.5304.62/chromedriver_mac64_m1.zip HTTP/1.1" 404 214
DEBUG urllib3.connectionpool:connectionpool.py:228 Starting new HTTP connection (1): localhost:65482
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session HTTP/1.1" 200 800
DEBUG urllib3.connectionpool:connectionpool.py:1001 Starting new HTTPS connection (1): sit.pro.aes:443
DEBUG urllib3.connectionpool:connectionpool.py:456 https://sit.pro.aes:443 "POST /api/business-units HTTP/1.1" 201 None
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/url HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "GET /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/d3c7db4e-6baf-45dd-b2d2-4923838a1adf/enabled HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/d3c7db4e-6baf-45dd-b2d2-4923838a1adf/click HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/d3c7db4e-6baf-45dd-b2d2-4923838a1adf/value HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "GET /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/deed88c1-4597-433d-a62e-aff695ca7b1c/enabled HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/deed88c1-4597-433d-a62e-aff695ca7b1c/click HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/deed88c1-4597-433d-a62e-aff695ca7b1c/value HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "GET /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/b327d044-ca75-4e95-9975-5994eaba406b/enabled HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/b327d044-ca75-4e95-9975-5994eaba406b/click HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 404 1609
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 404 1609
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 404 1609
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 404 1609
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 404 1609
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "GET /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/546a51ff-098b-4716-893b-0991291f5f39/enabled HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/546a51ff-098b-4716-893b-0991291f5f39/click HTTP/1.1" 200 14
------------------------------------------------------------------------- Captured log call --------------------------------------------------------------------------
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "GET /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/6a32443e-b03f-4100-a981-94246cabae28/enabled HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/6a32443e-b03f-4100-a981-94246cabae28/click HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 404 1643
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 404 1643
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 404 1643
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/elements HTTP/1.1" 200 1591
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 23
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 15
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 26
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 25
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 18
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 18
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 29
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 21
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 26
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 16
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 25
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 16
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 19
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 17
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 17
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 25
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 26
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 25
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 16
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 26
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "GET /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/3559f863-d3f5-425a-956e-aa6d6a0062e3/enabled HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element/3559f863-d3f5-425a-956e-aa6d6a0062e3/click HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 404 1643
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/elements HTTP/1.1" 200 959
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 25
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 21
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 21
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 36
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 36
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 36
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 36
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 36
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 36
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 36
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 36
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 36
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/element HTTP/1.1" 200 88
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST /session/dc247c59fdf65c04f90f3ceb73afe5c5/execute/sync HTTP/1.1" 200 14
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "POST
----------------------------------------------------------------------- Captured log teardown ------------------------------------------------------------------------
DEBUG urllib3.connectionpool:connectionpool.py:1001 Starting new HTTPS connection (1): sit.pro.aes:443
DEBUG urllib3.connectionpool:connectionpool.py:456 http://localhost:65482 "DELETE /session/dc247c59fdf65c04f90f3ceb73afe5c5 HTTP/1.1" 200 14
I want to hide the logs under "Captured log setup", "Captured log call", "Captured log teardown" and [WDM].
I can change the log level manually to 'INFO' (it is currently set to DEBUG), but in that case I lose the DEBUG records that I create on purpose, like this one: 2022-11-30 15:11:30 - pages.actions.signin_page - DEBUG - Click on signin button.
I can also run the tests with the pytest option --tb=no, which hides all of these useless records, but then I lose the error traceback.
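For reference, a minimal sketch of the fix given in the answer below, which disables pytest's built-in logging plugin (the [pytest] section is standard pytest.ini syntax; any other options in your real file are assumed to stay as they are):
# pytest.ini
[pytest]
addopts = -p no:logging
or, equivalently, on the command line: pytest -p no:logging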
| [
"Issue was related to the pytest.\nThis post helped. Just need to add -p no:logging to the run command or to the pytest.ini file\n"
] | [
0
] | [] | [] | [
"logging",
"loguru",
"python",
"webdriver"
] | stackoverflow_0074628804_logging_loguru_python_webdriver.txt |
Q:
Debugging a Neural Network
TLDR
I have been trying to fit a simple neural network on MNIST. It works for a small debugging setup, but when I bring it over to a subset of MNIST it trains super fast and the gradient is close to 0 very quickly, yet it then outputs the same value for any given input and the final cost is quite high. I had been trying to purposefully overfit to make sure it is in fact working, but it will not do so on MNIST, which suggests a deeper problem in the setup. I have checked my backpropagation implementation using gradient checking and it seems to match up, so I am not sure where the error lies, or what to work on now!
Many thanks for any help you can offer, I've been struggling to fix this!
Explanation
I have been trying to make a neural network in Numpy, based on this explanation:
http://ufldl.stanford.edu/wiki/index.php/Neural_Networks
http://ufldl.stanford.edu/wiki/index.php/Backpropagation_Algorithm
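For reference, a sketch of the gradient equations from that tutorial that the code below is meant to implement, for a 3-layer net with sigmoid activation f and squared-error cost (my own transcription, so treat the exact notation as an assumption rather than a quote):
\delta^{(3)} = -(y - a^{(3)}) \circ f'(z^{(3)})
\delta^{(2)} = \big( (W^{(2)})^T \delta^{(3)} \big) \circ f'(z^{(2)})
\nabla_{W^{(2)}} J = \delta^{(3)} (a^{(2)})^T, \qquad \nabla_{W^{(1)}} J = \delta^{(2)} x^T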
Backpropagation seems to match gradient checking:
Backpropagation: [ 0.01168585, 0.06629858, -0.00112408, -0.00642625, -0.01339408,
-0.07580145, 0.00285868, 0.01628148, 0.00365659, 0.0208475 ,
0.11194151, 0.16696139, 0.10999967, 0.13873069, 0.13049299,
-0.09012582, -0.1344335 , -0.08857648, -0.11168955, -0.10506167]
Gradient Checking: [-0.01168585 -0.06629858 0.00112408 0.00642625 0.01339408
0.07580145 -0.00285868 -0.01628148 -0.00365659 -0.0208475
-0.11194151 -0.16696139 -0.10999967 -0.13873069 -0.13049299
0.09012582 0.1344335 0.08857648 0.11168955 0.10506167]
And when I train on this simple debug setup:
a is a neural net w/ 2 inputs -> 5 hidden -> 2 outputs, and learning rate 0.5
a.gradDesc(np.array([[0.1,0.9],[0.2,0.8]]),np.array([[0,1],[0,1]]))
ie. x1 = [0.1, 0.9] and y1 = [0,1]
I get these lovely training curves
Admittedly this is clearly a dumbed down, very easy function to fit.
However as soon as I bring it over to MNIST, with this setup:
# Number of input, hidden and ouput nodes
# Input = 28 x 28 pixels
input_nodes=784
# Arbitrary number of hidden nodes, experiment to improve
hidden_nodes=200
# Output = one of the digits [0,1,2,3,4,5,6,7,8,9]
output_nodes=10
# Learning rate
learning_rate=0.4
# Regularisation parameter
lambd=0.0
With this setup, running the code below for 100 iterations, it does seem to train at first but then just "flat lines" quite quickly and doesn't achieve a very good model:
Initial ===== Cost (unregularised): 2.09203670985 /// Cost (regularised): 2.09203670985 Mean Gradient: 0.0321241229793
Iteration 100 Cost (unregularised): 0.980999805477 /// Cost (regularised): 0.980999805477 Mean Gradient: -5.29639499854e-09
TRAINED IN 26.45932364463806
This then gives really poor test accuracy and it predicts the same output for every input: even when tested with all inputs set to 0.1 or all set to 0.9, I just get the same output (although precisely which number it outputs varies depending on the initial random weights):
Test accuracy: 8.92
Targets 2 2 1 7 2 2 0 2 3
Hypothesis 5 5 5 5 5 5 5 5 5
And the curves for the MNIST Training:
Code dump:
# Import dependencies
import numpy as np
import time
import csv
import matplotlib.pyplot
import random
import math
# Read in training data
with open('MNIST/mnist_train_100.csv') as file:
train_data=np.array([list(map(int,line.strip().split(','))) for line in file.readlines()])
# In[197]:
# Plot a sample of training data to visualise
displayData(train_data[:,1:], 25)
# In[198]:
# Read in test data
with open('MNIST/mnist_test.csv') as file:
test_data=np.array([list(map(int,line.strip().split(','))) for line in file.readlines()])
# Main neural network class
class neuralNetwork:
# Define the architecture
def __init__(self, i, h, o, lr, lda):
# Number of nodes in each layer
self.i=i
self.h=h
self.o=o
# Learning rate
self.lr=lr
# Lambda for regularisation
self.lda=lda
# Randomly initialise the parameters, input-> hidden and hidden-> output
self.ih=np.random.normal(0.0,pow(self.h,-0.5),(self.h,self.i))
self.ho=np.random.normal(0.0,pow(self.o,-0.5),(self.o,self.h))
def predict(self, X):
# GET HYPOTHESIS ESTIMATES/ OUTPUTS
# Add bias node x(0)=1 for all training examples, X is now m x n+1
# Then compute activation to hidden node
z2=np.dot(X,self.ih.T) + 1
#print(a1.shape)
a2=sigmoid(z2)
#print(ha)
# Add bias node h(0)=1 for all training examples, H is now m x h+1
# Then compute activation to output node
z3=np.dot(a2,self.ho.T) + 1
h=sigmoid(z3)
outputs=np.argmax(h.T,axis=0)
return outputs
def backprop (self, X, y):
try:
m = X.shape[0]
except:
m=1
# GET HYPOTHESIS ESTIMATES/ OUTPUTS
# Add bias node x(0)=1 for all training examples, X is now m x n+1
# Then compute activation to hidden node
z2=np.dot(X,self.ih.T)
#print(a1.shape)
a2=sigmoid(z2)
#print(ha)
# Add bias node h(0)=1 for all training examples, H is now m x h+1
# Then compute activation to output node
z3=np.dot(a2,self.ho.T)
h=sigmoid(z3)
# Compute error/ cost for this setup (unregularised and regularise)
costReg=self.costFunc(h,y)
costUn=self.costFuncReg(h,y)
# Output error term
d3=-(y-h)*sigmoidGradient(z3)
# Hidden error term
d2=np.dot(d3,self.ho)*sigmoidGradient(z2)
# Partial derivatives for weights
D2=np.dot(d3.T,a2)
D1=np.dot(d2.T,X)
# Partial derivatives of theta with regularisation
T2Grad=(D2/m)+(self.lda/m)*(self.ho)
T1Grad=(D1/m)+(self.lda/m)*(self.ih)
# Update weights
# Hidden layer (weights 1)
self.ih-=self.lr*(((D1)/m) + (self.lda/m)*self.ih)
# Output layer (weights 2)
self.ho-=self.lr*(((D2)/m) + (self.lda/m)*self.ho)
# Unroll gradients to one long vector
grad=np.concatenate(((T1Grad).ravel(),(T2Grad).ravel()))
return costReg, costUn, grad
def backpropIter (self, X, y):
try:
m = X.shape[0]
except:
m=1
# GET HYPOTHESIS ESTIMATES/ OUTPUTS
# Add bias node x(0)=1 for all training examples, X is now m x n+1
# Then compute activation to hidden node
z2=np.dot(X,self.ih.T)
#print(a1.shape)
a2=sigmoid(z2)
#print(ha)
# Add bias node h(0)=1 for all training examples, H is now m x h+1
# Then compute activation to output node
z3=np.dot(a2,self.ho.T)
h=sigmoid(z3)
# Compute error/ cost for this setup (unregularised and regularise)
costUn=self.costFunc(h,y)
costReg=self.costFuncReg(h,y)
gradW1=np.zeros(self.ih.shape)
gradW2=np.zeros(self.ho.shape)
for i in range(m):
delta3 = -(y[i,:]-h[i,:])*sigmoidGradient(z3[i,:])
delta2 = np.dot(self.ho.T,delta3)*sigmoidGradient(z2[i,:])
gradW2= gradW2 + np.outer(delta3,a2[i,:])
gradW1 = gradW1 + np.outer(delta2,X[i,:])
# Update weights
# Hidden layer (weights 1)
#self.ih-=self.lr*(((gradW1)/m) + (self.lda/m)*self.ih)
# Output layer (weights 2)
#self.ho-=self.lr*(((gradW2)/m) + (self.lda/m)*self.ho)
# Unroll gradients to one long vector
grad=np.concatenate(((gradW1).ravel(),(gradW2).ravel()))
return costUn, costReg, grad
def gradDesc(self, X, y):
# Backpropagate to get updates
cost,costreg,grad=self.backpropIter(X,y)
# Unroll parameters
deltaW1=np.reshape(grad[0:self.h*self.i],(self.h,self.i))
deltaW2=np.reshape(grad[self.h*self.i:],(self.o,self.h))
# m = no. training examples
m=X.shape[0]
#print (self.ih)
self.ih -= self.lr * ((deltaW1))#/m) + (self.lda * self.ih))
self.ho -= self.lr * ((deltaW2))#/m) + (self.lda * self.ho))
#print(deltaW1)
#print(self.ih)
return cost,costreg,grad
# Gradient checking to compute the gradient numerically to debug backpropagation
def gradCheck(self, X, y):
# Unroll theta
theta=np.concatenate(((self.ih).ravel(),(self.ho).ravel()))
# perturb will add and subtract epsilon, numgrad will store answers
perturb=np.zeros(len(theta))
numgrad=np.zeros(len(theta))
# epsilon, e is a small number
e = 0.00001
# Loop over all theta
for i in range(len(theta)):
# Perturb is zeros with one index being e
perturb[i]=e
loss1=self.costFuncGradientCheck(theta-perturb, X, y)
loss2=self.costFuncGradientCheck(theta+perturb, X, y)
# Compute numerical gradient and update vectors
numgrad[i]=(loss1-loss2)/(2*e)
perturb[i]=0
return numgrad
def costFuncGradientCheck(self,theta,X,y):
T1=np.reshape(theta[0:self.h*self.i],(self.h,self.i))
T2=np.reshape(theta[self.h*self.i:],(self.o,self.h))
m=X.shape[0]
# GET HYPOTHESIS ESTIMATES/ OUTPUTS
# Compute activation to hidden node
z2=np.dot(X,T1.T)
a2=sigmoid(z2)
# Compute activation to output node
z3=np.dot(a2,T2.T)
h=sigmoid(z3)
cost=self.costFunc(h, y)
return cost #+ ((self.lda/2)*(np.sum(pow(T1,2)) + np.sum(pow(T2,2))))
def costFunc(self, h, y):
m=h.shape[0]
return np.sum(pow((h-y),2))/m
def costFuncReg(self, h, y):
cost=self.costFunc(h, y)
return cost #+ ((self.lda/2)*(np.sum(pow(self.ih,2)) + np.sum(pow(self.ho,2))))
# Helper functions to compute sigmoid and gradient for an input number or matrix
def sigmoid(Z):
return np.divide(1,np.add(1,np.exp(-Z)))
def sigmoidGradient(Z):
return sigmoid(Z)*(1-sigmoid(Z))
# Pre=processing helper functions
# Normalise data to 0.1-1 as 0 inputs kills the weights and changes
def scaleDataVec(data):
return (np.asfarray(data[1:]) / 255.0 * 0.99) + 0.1
def scaleData(data):
return (np.asfarray(data[:,1:]) / 255.0 * 0.99) + 0.1
# DISPLAY DATA
# plot_data will be what to plot, num_ex must be a square number of how many examples to plot, random examples will then be plotted
def displayData(plot_data, num_ex, rand=1):
if rand==0:
data=plot_data
else:
rand_indexes=random.sample(range(plot_data.shape[0]),num_ex)
data=plot_data[rand_indexes,:]
# Useful variables, m= no. train ex, n= no. features
m=data.shape[0]
n=data.shape[1]
# Shape for one example
example_width=math.ceil(math.sqrt(n))
example_height=math.ceil(n/example_width)
# No. of items to display
display_rows=math.floor(math.sqrt(m))
display_cols=math.ceil(m/display_rows)
# Padding between images
pad=1
# Setup blank display
display_array = -np.ones((pad + display_rows * (example_height + pad), (pad + display_cols * (example_width + pad))))
curr_ex=0
for i in range(1,display_rows+1):
for j in range(1,display_cols+1):
if curr_ex>m:
break
# Max value of this patch
max_val=max(abs(data[curr_ex, :]))
display_array[pad + (j-1) * (example_height + pad) : j*(example_height+1), pad + (i-1) * (example_width + pad) : i*(example_width+1)] = data[curr_ex, :].reshape(example_height, example_width)/max_val
curr_ex+=1
matplotlib.pyplot.imshow(display_array, cmap='Greys', interpolation='None')
# In[312]:
a=neuralNetwork(2,5,2,0.5,0.0)
print(a.backpropIter(np.array([[0.1,0.9],[0.2,0.8]]),np.array([[0,1],[0,1]])))
print(a.gradCheck(np.array([[0.1,0.9],[0.2,0.8]]),np.array([[0,1],[0,1]])))
D=[]
C=[]
for i in range(100):
c,b,d=a.gradDesc(np.array([[0.1,0.9],[0.2,0.8]]),np.array([[0,1],[0,1]]))
C.append(c)
D.append(np.mean(d))
#print(c)
print(a.predict(np.array([[0.1,0.9]])))
# Debugging plot
matplotlib.pyplot.figure()
matplotlib.pyplot.plot(C)
matplotlib.pyplot.ylabel("Error")
matplotlib.pyplot.xlabel("Iterations")
matplotlib.pyplot.figure()
matplotlib.pyplot.plot(D)
matplotlib.pyplot.ylabel("Gradient")
matplotlib.pyplot.xlabel("Iterations")
#print(J)
# In[313]:
# Class instance
# Number of input, hidden and ouput nodes
# Input = 28 x 28 pixels
input_nodes=784
# Arbitrary number of hidden nodes, experiment to improve
hidden_nodes=200
# Output = one of the digits [0,1,2,3,4,5,6,7,8,9]
output_nodes=10
# Learning rate
learning_rate=0.4
# Regularisation parameter
lambd=0.0
# Create instance of Nnet class
nn=neuralNetwork(input_nodes,hidden_nodes,output_nodes,learning_rate,lambd)
# In[314]:
time1=time.time()
# Scale inputs
inputs=scaleData(train_data)
# 0.01-0.99 range as the sigmoid function can't reach 0 or 1, 0.01 for all except 0.99 for target
targets=(np.identity(output_nodes)*0.98)[train_data[:,0],:]+0.01
J=[]
JR=[]
Grad=[]
iterations=100
for i in range(iterations):
j,jr,grad=nn.gradDesc(inputs, targets)
grad=np.mean(grad)
if i == 0:
print("Initial ===== Cost (unregularised): ", j, "\t///", "Cost (regularised): ",jr," Mean Gradient: ",grad)
print("\r", end="")
print("Iteration ", i+1, "\tCost (unregularised): ", j, "\t///", "Cost (regularised): ", jr," Mean Gradient: ",grad,end="")
J.append(j)
JR.append(jr)
Grad.append(grad)
time2 = time.time()
print ("\nTRAINED IN ",time2-time1)
# In[315]:
# Debugging plot
matplotlib.pyplot.figure()
matplotlib.pyplot.plot(J)
matplotlib.pyplot.plot(JR)
matplotlib.pyplot.ylabel("Error")
matplotlib.pyplot.xlabel("Iterations")
matplotlib.pyplot.figure()
matplotlib.pyplot.plot(Grad)
matplotlib.pyplot.ylabel("Gradient")
matplotlib.pyplot.xlabel("Iterations")
#print(J)
# In[316]:
# Scale inputs
inputs=scaleData(test_data)
# 0.01-0.99 range as the sigmoid function can't reach 0 or 1, 0.01 for all except 0.99 for target
targets=test_data[:,0]
h=nn.predict(inputs)
score=[]
targ=[]
hyp=[]
for i,line in enumerate(targets):
if line == h[i]:
score.append(1)
else:
score.append(0)
hyp.append(h[i])
targ.append(line)
print("Test accuracy: ", sum(score)/len(score)*100)
indexes=random.sample(range(len(hyp)),9)
print("Targets ",end="")
for j in indexes:
print (targ[j]," ",end="")
print("\nHypothesis ",end="")
for j in indexes:
print (hyp[j]," ",end="")
displayData(test_data[indexes, 1:], 9, rand=0)
# In[277]:
nn.predict(0.9*np.ones((784,)))
Edit 1
It was suggested to use different learning rates, but unfortunately they all come out with similar results; here are the plots for 30 iterations, using the MNIST 100 subset:
Concretely, here are the figures that they start and end with:
Initial ===== Cost (unregularised): 4.07208963507 /// Cost (regularised): 4.07208963507 Mean Gradient: 0.0540251381858
Iteration 50 Cost (unregularised): 0.613310215166 /// Cost (regularised): 0.613310215166 Mean Gradient: -0.000133981500849Initial ===== Cost (unregularised): 5.67535252616 /// Cost (regularised): 5.67535252616 Mean Gradient: 0.0644797515914
Iteration 50 Cost (unregularised): 0.381080434935 /// Cost (regularised): 0.381080434935 Mean Gradient: 0.000427866902699Initial ===== Cost (unregularised): 3.54658422176 /// Cost (regularised): 3.54658422176 Mean Gradient: 0.0672211732868
Iteration 50 Cost (unregularised): 0.981 /// Cost (regularised): 0.981 Mean Gradient: 2.34515341943e-20Initial ===== Cost (unregularised): 4.05269658215 /// Cost (regularised): 4.05269658215 Mean Gradient: 0.0469666696193
Iteration 50 Cost (unregularised): 0.980999999999 /// Cost (regularised): 0.980999999999 Mean Gradient: -1.0582706063e-14Initial ===== Cost (unregularised): 2.40881492228 /// Cost (regularised): 2.40881492228 Mean Gradient: 0.0516056901574
Iteration 50 Cost (unregularised): 1.74539997258 /// Cost (regularised): 1.74539997258 Mean Gradient: 1.01955789614e-09Initial ===== Cost (unregularised): 2.58498876008 /// Cost (regularised): 2.58498876008 Mean Gradient: 0.0388768685257
Iteration 3 Cost (unregularised): 1.72520399313 /// Cost (regularised): 1.72520399313 Mean Gradient: 0.0134040908157
Iteration 50 Cost (unregularised): 0.981 /// Cost (regularised): 0.981 Mean Gradient: -4.49319474346e-43Initial ===== Cost (unregularised): 4.40141352357 /// Cost (regularised): 4.40141352357 Mean Gradient: 0.0689167742968
Iteration 50 Cost (unregularised): 0.981 /// Cost (regularised): 0.981 Mean Gradient: -1.01563966458e-22
A learning rate of 0.01, quite low, has the best outcome, but exploring learning rates in this region, I only came out with 30-40% accuracy, a big improvement on the 8% or even 0% that I had seen previously, but not really what it should be achieving!
Edit 2
I've now finished and added a backpropagation function optimized for matrices rather than the iterative formula, so I can now run large numbers of epochs/iterations without it being painfully slow. The "backprop" function of the class now matches the gradient check (in fact it is 1/2 the size, but I think that is a problem in the gradient check, so we'll leave that because it should not matter proportionally, and I have tried adding in divisions to solve it). With large numbers of epochs I achieved much better accuracy, but there still seems to be a problem: when I previously programmed a slightly different style of simple 3-layer neural network as part of a book, on the same dataset CSVs, I got a much better training result. Here are some plots and data for large epochs.
This looks good, but we still have a pretty poor test set accuracy, and this is after 2,500 runs through the dataset; it should be getting a good result with far fewer!
Test accuracy: 61.150000000000006
Targets 6 9 8 2 2 2 4 3 8
Hypothesis 6 9 8 4 7 1 4 3 8
Edit 3, what dataset?
http://makeyourownneuralnetwork.blogspot.co.uk/2015/03/the-mnist-dataset-of-handwitten-digits.html?m=1
I used train.csv and test.csv to try with more data; it was no better, it just takes longer, so I have been using the subsets train_100 and test_10 while I debug.
Edit 4
It seems to learn something after a very large number of epochs (like 14,000). Since the whole dataset is used in the backprop function (not backpropIter), each loop is effectively an epoch, and with a ridiculous number of epochs on the subset of 100 training and 10 test samples, the test accuracy is quite good. However, with this small a sample that could easily be down to chance, and even then it is only 70%, not what you would be aiming for even on the small dataset. But it does show that it seems to be learning; I am trying parameters very extensively to rule that out.
A:
Solved
I solved my neural network. A brief description follows in case it helps anyone else. Thanks to all those that helped with suggestions.
Basically, I had implemented it with a fully matrix approach, i.e. the backpropagation uses all examples each time. I later tried implementing it as a vector approach, i.e. backpropagation with each example in turn. This was when I realised that the matrix approach doesn't update the parameters after each example, so one run through this way is NOT the same as one run through each example in turn; effectively the whole training set is backpropagated as one example. Hence, my matrix implementation does work, but only after many iterations, which then ends up taking longer than the vector approach anyway! I have opened a new question to learn more about this specific part, but there we go: it just needed a lot of iterations with the matrix approach, or a more gradual example-by-example approach.
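To make the distinction concrete, here is a minimal runnable sketch of the two update schemes (hypothetical names, not the asker's actual class; grad() is a stand-in for whatever gradient the network computes):
import numpy as np

def grad(W, X, Y):
    # placeholder gradient: least-squares gradient for a linear map, just to make the sketch runnable
    return X.T @ (X @ W - Y) / len(X)

X = np.random.rand(100, 784); Y = np.random.rand(100, 10)
W = np.zeros((784, 10)); lr = 0.4; n_epochs = 10

# Full-batch ("matrix") approach: ONE weight update per pass over the data
for epoch in range(n_epochs):
    W -= lr * grad(W, X, Y)

# Per-example ("vector") approach: len(X) weight updates per pass over the data
for epoch in range(n_epochs):
    for i in range(len(X)):
        W -= lr * grad(W, X[i:i+1], Y[i:i+1])
The per-example version takes many more (smaller) steps per pass, which is why the full-batch version needs far more epochs to reach the same point.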
A:
You can "debug" your neural network using Tensorleap. It's a neural network debugging platform which uses some explainability algorithms. It allows you to upload your model and dataset and get a lot of information about your trained model.
MNIST is already in as a demo project.
They have a free trial for 14 days, as far as I know.
| Debugging a Neural Network | TLDR
I have been trying to fit a simple neural network on MNIST. It works for a small debugging setup, but when I bring it over to a subset of MNIST it trains super fast and the gradient is close to 0 very quickly, yet it then outputs the same value for any given input and the final cost is quite high. I had been trying to purposefully overfit to make sure it is in fact working, but it will not do so on MNIST, which suggests a deeper problem in the setup. I have checked my backpropagation implementation using gradient checking and it seems to match up, so I am not sure where the error lies, or what to work on now!
Many thanks for any help you can offer, I've been struggling to fix this!
Explanation
I have been trying to make a neural network in Numpy, based on this explanation:
http://ufldl.stanford.edu/wiki/index.php/Neural_Networks
http://ufldl.stanford.edu/wiki/index.php/Backpropagation_Algorithm
Backpropagation seems to match gradient checking:
Backpropagation: [ 0.01168585, 0.06629858, -0.00112408, -0.00642625, -0.01339408,
-0.07580145, 0.00285868, 0.01628148, 0.00365659, 0.0208475 ,
0.11194151, 0.16696139, 0.10999967, 0.13873069, 0.13049299,
-0.09012582, -0.1344335 , -0.08857648, -0.11168955, -0.10506167]
Gradient Checking: [-0.01168585 -0.06629858 0.00112408 0.00642625 0.01339408
0.07580145 -0.00285868 -0.01628148 -0.00365659 -0.0208475
-0.11194151 -0.16696139 -0.10999967 -0.13873069 -0.13049299
0.09012582 0.1344335 0.08857648 0.11168955 0.10506167]
And when I train on this simple debug setup:
a is a neural net w/ 2 inputs -> 5 hidden -> 2 outputs, and learning rate 0.5
a.gradDesc(np.array([[0.1,0.9],[0.2,0.8]]),np.array([[0,1],[0,1]]))
ie. x1 = [0.1, 0.9] and y1 = [0,1]
I get these lovely training curves
Admittedly this is clearly a dumbed down, very easy function to fit.
However as soon as I bring it over to MNIST, with this setup:
# Number of input, hidden and ouput nodes
# Input = 28 x 28 pixels
input_nodes=784
# Arbitrary number of hidden nodes, experiment to improve
hidden_nodes=200
# Output = one of the digits [0,1,2,3,4,5,6,7,8,9]
output_nodes=10
# Learning rate
learning_rate=0.4
# Regularisation parameter
lambd=0.0
With this setup, running the code below for 100 iterations, it does seem to train at first but then just "flat lines" quite quickly and doesn't achieve a very good model:
Initial ===== Cost (unregularised): 2.09203670985 /// Cost (regularised): 2.09203670985 Mean Gradient: 0.0321241229793
Iteration 100 Cost (unregularised): 0.980999805477 /// Cost (regularised): 0.980999805477 Mean Gradient: -5.29639499854e-09
TRAINED IN 26.45932364463806
This then gives really poor test accuracy and it predicts the same output for every input: even when tested with all inputs set to 0.1 or all set to 0.9, I just get the same output (although precisely which number it outputs varies depending on the initial random weights):
Test accuracy: 8.92
Targets 2 2 1 7 2 2 0 2 3
Hypothesis 5 5 5 5 5 5 5 5 5
And the curves for the MNIST Training:
Code dump:
# Import dependencies
import numpy as np
import time
import csv
import matplotlib.pyplot
import random
import math
# Read in training data
with open('MNIST/mnist_train_100.csv') as file:
train_data=np.array([list(map(int,line.strip().split(','))) for line in file.readlines()])
# In[197]:
# Plot a sample of training data to visualise
displayData(train_data[:,1:], 25)
# In[198]:
# Read in test data
with open('MNIST/mnist_test.csv') as file:
test_data=np.array([list(map(int,line.strip().split(','))) for line in file.readlines()])
# Main neural network class
class neuralNetwork:
# Define the architecture
def __init__(self, i, h, o, lr, lda):
# Number of nodes in each layer
self.i=i
self.h=h
self.o=o
# Learning rate
self.lr=lr
# Lambda for regularisation
self.lda=lda
# Randomly initialise the parameters, input-> hidden and hidden-> output
self.ih=np.random.normal(0.0,pow(self.h,-0.5),(self.h,self.i))
self.ho=np.random.normal(0.0,pow(self.o,-0.5),(self.o,self.h))
def predict(self, X):
# GET HYPOTHESIS ESTIMATES/ OUTPUTS
# Add bias node x(0)=1 for all training examples, X is now m x n+1
# Then compute activation to hidden node
z2=np.dot(X,self.ih.T) + 1
#print(a1.shape)
a2=sigmoid(z2)
#print(ha)
# Add bias node h(0)=1 for all training examples, H is now m x h+1
# Then compute activation to output node
z3=np.dot(a2,self.ho.T) + 1
h=sigmoid(z3)
outputs=np.argmax(h.T,axis=0)
return outputs
def backprop (self, X, y):
try:
m = X.shape[0]
except:
m=1
# GET HYPOTHESIS ESTIMATES/ OUTPUTS
# Add bias node x(0)=1 for all training examples, X is now m x n+1
# Then compute activation to hidden node
z2=np.dot(X,self.ih.T)
#print(a1.shape)
a2=sigmoid(z2)
#print(ha)
# Add bias node h(0)=1 for all training examples, H is now m x h+1
# Then compute activation to output node
z3=np.dot(a2,self.ho.T)
h=sigmoid(z3)
# Compute error/ cost for this setup (unregularised and regularise)
costReg=self.costFunc(h,y)
costUn=self.costFuncReg(h,y)
# Output error term
d3=-(y-h)*sigmoidGradient(z3)
# Hidden error term
d2=np.dot(d3,self.ho)*sigmoidGradient(z2)
# Partial derivatives for weights
D2=np.dot(d3.T,a2)
D1=np.dot(d2.T,X)
# Partial derivatives of theta with regularisation
T2Grad=(D2/m)+(self.lda/m)*(self.ho)
T1Grad=(D1/m)+(self.lda/m)*(self.ih)
# Update weights
# Hidden layer (weights 1)
self.ih-=self.lr*(((D1)/m) + (self.lda/m)*self.ih)
# Output layer (weights 2)
self.ho-=self.lr*(((D2)/m) + (self.lda/m)*self.ho)
# Unroll gradients to one long vector
grad=np.concatenate(((T1Grad).ravel(),(T2Grad).ravel()))
return costReg, costUn, grad
def backpropIter (self, X, y):
try:
m = X.shape[0]
except:
m=1
# GET HYPOTHESIS ESTIMATES/ OUTPUTS
# Add bias node x(0)=1 for all training examples, X is now m x n+1
# Then compute activation to hidden node
z2=np.dot(X,self.ih.T)
#print(a1.shape)
a2=sigmoid(z2)
#print(ha)
# Add bias node h(0)=1 for all training examples, H is now m x h+1
# Then compute activation to output node
z3=np.dot(a2,self.ho.T)
h=sigmoid(z3)
# Compute error/ cost for this setup (unregularised and regularise)
costUn=self.costFunc(h,y)
costReg=self.costFuncReg(h,y)
gradW1=np.zeros(self.ih.shape)
gradW2=np.zeros(self.ho.shape)
for i in range(m):
delta3 = -(y[i,:]-h[i,:])*sigmoidGradient(z3[i,:])
delta2 = np.dot(self.ho.T,delta3)*sigmoidGradient(z2[i,:])
gradW2= gradW2 + np.outer(delta3,a2[i,:])
gradW1 = gradW1 + np.outer(delta2,X[i,:])
# Update weights
# Hidden layer (weights 1)
#self.ih-=self.lr*(((gradW1)/m) + (self.lda/m)*self.ih)
# Output layer (weights 2)
#self.ho-=self.lr*(((gradW2)/m) + (self.lda/m)*self.ho)
# Unroll gradients to one long vector
grad=np.concatenate(((gradW1).ravel(),(gradW2).ravel()))
return costUn, costReg, grad
def gradDesc(self, X, y):
# Backpropagate to get updates
cost,costreg,grad=self.backpropIter(X,y)
# Unroll parameters
deltaW1=np.reshape(grad[0:self.h*self.i],(self.h,self.i))
deltaW2=np.reshape(grad[self.h*self.i:],(self.o,self.h))
# m = no. training examples
m=X.shape[0]
#print (self.ih)
self.ih -= self.lr * ((deltaW1))#/m) + (self.lda * self.ih))
self.ho -= self.lr * ((deltaW2))#/m) + (self.lda * self.ho))
#print(deltaW1)
#print(self.ih)
return cost,costreg,grad
# Gradient checking to compute the gradient numerically to debug backpropagation
def gradCheck(self, X, y):
# Unroll theta
theta=np.concatenate(((self.ih).ravel(),(self.ho).ravel()))
# perturb will add and subtract epsilon, numgrad will store answers
perturb=np.zeros(len(theta))
numgrad=np.zeros(len(theta))
# epsilon, e is a small number
e = 0.00001
# Loop over all theta
for i in range(len(theta)):
# Perturb is zeros with one index being e
perturb[i]=e
loss1=self.costFuncGradientCheck(theta-perturb, X, y)
loss2=self.costFuncGradientCheck(theta+perturb, X, y)
# Compute numerical gradient and update vectors
numgrad[i]=(loss1-loss2)/(2*e)
perturb[i]=0
return numgrad
def costFuncGradientCheck(self,theta,X,y):
T1=np.reshape(theta[0:self.h*self.i],(self.h,self.i))
T2=np.reshape(theta[self.h*self.i:],(self.o,self.h))
m=X.shape[0]
# GET HYPOTHESIS ESTIMATES/ OUTPUTS
# Compute activation to hidden node
z2=np.dot(X,T1.T)
a2=sigmoid(z2)
# Compute activation to output node
z3=np.dot(a2,T2.T)
h=sigmoid(z3)
cost=self.costFunc(h, y)
return cost #+ ((self.lda/2)*(np.sum(pow(T1,2)) + np.sum(pow(T2,2))))
def costFunc(self, h, y):
m=h.shape[0]
return np.sum(pow((h-y),2))/m
def costFuncReg(self, h, y):
cost=self.costFunc(h, y)
return cost #+ ((self.lda/2)*(np.sum(pow(self.ih,2)) + np.sum(pow(self.ho,2))))
# Helper functions to compute sigmoid and gradient for an input number or matrix
def sigmoid(Z):
return np.divide(1,np.add(1,np.exp(-Z)))
def sigmoidGradient(Z):
return sigmoid(Z)*(1-sigmoid(Z))
# Pre=processing helper functions
# Normalise data to 0.1-1 as 0 inputs kills the weights and changes
def scaleDataVec(data):
return (np.asfarray(data[1:]) / 255.0 * 0.99) + 0.1
def scaleData(data):
return (np.asfarray(data[:,1:]) / 255.0 * 0.99) + 0.1
# DISPLAY DATA
# plot_data will be what to plot, num_ex must be a square number of how many examples to plot, random examples will then be plotted
def displayData(plot_data, num_ex, rand=1):
if rand==0:
data=plot_data
else:
rand_indexes=random.sample(range(plot_data.shape[0]),num_ex)
data=plot_data[rand_indexes,:]
# Useful variables, m= no. train ex, n= no. features
m=data.shape[0]
n=data.shape[1]
# Shape for one example
example_width=math.ceil(math.sqrt(n))
example_height=math.ceil(n/example_width)
# No. of items to display
display_rows=math.floor(math.sqrt(m))
display_cols=math.ceil(m/display_rows)
# Padding between images
pad=1
# Setup blank display
display_array = -np.ones((pad + display_rows * (example_height + pad), (pad + display_cols * (example_width + pad))))
curr_ex=0
for i in range(1,display_rows+1):
for j in range(1,display_cols+1):
if curr_ex>m:
break
# Max value of this patch
max_val=max(abs(data[curr_ex, :]))
display_array[pad + (j-1) * (example_height + pad) : j*(example_height+1), pad + (i-1) * (example_width + pad) : i*(example_width+1)] = data[curr_ex, :].reshape(example_height, example_width)/max_val
curr_ex+=1
matplotlib.pyplot.imshow(display_array, cmap='Greys', interpolation='None')
# In[312]:
a=neuralNetwork(2,5,2,0.5,0.0)
print(a.backpropIter(np.array([[0.1,0.9],[0.2,0.8]]),np.array([[0,1],[0,1]])))
print(a.gradCheck(np.array([[0.1,0.9],[0.2,0.8]]),np.array([[0,1],[0,1]])))
D=[]
C=[]
for i in range(100):
c,b,d=a.gradDesc(np.array([[0.1,0.9],[0.2,0.8]]),np.array([[0,1],[0,1]]))
C.append(c)
D.append(np.mean(d))
#print(c)
print(a.predict(np.array([[0.1,0.9]])))
# Debugging plot
matplotlib.pyplot.figure()
matplotlib.pyplot.plot(C)
matplotlib.pyplot.ylabel("Error")
matplotlib.pyplot.xlabel("Iterations")
matplotlib.pyplot.figure()
matplotlib.pyplot.plot(D)
matplotlib.pyplot.ylabel("Gradient")
matplotlib.pyplot.xlabel("Iterations")
#print(J)
# In[313]:
# Class instance
# Number of input, hidden and ouput nodes
# Input = 28 x 28 pixels
input_nodes=784
# Arbitrary number of hidden nodes, experiment to improve
hidden_nodes=200
# Output = one of the digits [0,1,2,3,4,5,6,7,8,9]
output_nodes=10
# Learning rate
learning_rate=0.4
# Regularisation parameter
lambd=0.0
# Create instance of Nnet class
nn=neuralNetwork(input_nodes,hidden_nodes,output_nodes,learning_rate,lambd)
# In[314]:
time1=time.time()
# Scale inputs
inputs=scaleData(train_data)
# 0.01-0.99 range as the sigmoid function can't reach 0 or 1, 0.01 for all except 0.99 for target
targets=(np.identity(output_nodes)*0.98)[train_data[:,0],:]+0.01
J=[]
JR=[]
Grad=[]
iterations=100
for i in range(iterations):
j,jr,grad=nn.gradDesc(inputs, targets)
grad=np.mean(grad)
if i == 0:
print("Initial ===== Cost (unregularised): ", j, "\t///", "Cost (regularised): ",jr," Mean Gradient: ",grad)
print("\r", end="")
print("Iteration ", i+1, "\tCost (unregularised): ", j, "\t///", "Cost (regularised): ", jr," Mean Gradient: ",grad,end="")
J.append(j)
JR.append(jr)
Grad.append(grad)
time2 = time.time()
print ("\nTRAINED IN ",time2-time1)
# In[315]:
# Debugging plot
matplotlib.pyplot.figure()
matplotlib.pyplot.plot(J)
matplotlib.pyplot.plot(JR)
matplotlib.pyplot.ylabel("Error")
matplotlib.pyplot.xlabel("Iterations")
matplotlib.pyplot.figure()
matplotlib.pyplot.plot(Grad)
matplotlib.pyplot.ylabel("Gradient")
matplotlib.pyplot.xlabel("Iterations")
#print(J)
# In[316]:
# Scale inputs
inputs=scaleData(test_data)
# 0.01-0.99 range as the sigmoid function can't reach 0 or 1, 0.01 for all except 0.99 for target
targets=test_data[:,0]
h=nn.predict(inputs)
score=[]
targ=[]
hyp=[]
for i,line in enumerate(targets):
if line == h[i]:
score.append(1)
else:
score.append(0)
hyp.append(h[i])
targ.append(line)
print("Test accuracy: ", sum(score)/len(score)*100)
indexes=random.sample(range(len(hyp)),9)
print("Targets ",end="")
for j in indexes:
print (targ[j]," ",end="")
print("\nHypothesis ",end="")
for j in indexes:
print (hyp[j]," ",end="")
displayData(test_data[indexes, 1:], 9, rand=0)
# In[277]:
nn.predict(0.9*np.ones((784,)))
Edit 1
It was suggested to use different learning rates, but unfortunately they all come out with similar results; here are the plots for 30 iterations, using the MNIST 100 subset:
Concretely, here are the figures that they start and end with:
Initial ===== Cost (unregularised): 4.07208963507 /// Cost (regularised): 4.07208963507 Mean Gradient: 0.0540251381858
Iteration 50 Cost (unregularised): 0.613310215166 /// Cost (regularised): 0.613310215166 Mean Gradient: -0.000133981500849Initial ===== Cost (unregularised): 5.67535252616 /// Cost (regularised): 5.67535252616 Mean Gradient: 0.0644797515914
Iteration 50 Cost (unregularised): 0.381080434935 /// Cost (regularised): 0.381080434935 Mean Gradient: 0.000427866902699Initial ===== Cost (unregularised): 3.54658422176 /// Cost (regularised): 3.54658422176 Mean Gradient: 0.0672211732868
Iteration 50 Cost (unregularised): 0.981 /// Cost (regularised): 0.981 Mean Gradient: 2.34515341943e-20Initial ===== Cost (unregularised): 4.05269658215 /// Cost (regularised): 4.05269658215 Mean Gradient: 0.0469666696193
Iteration 50 Cost (unregularised): 0.980999999999 /// Cost (regularised): 0.980999999999 Mean Gradient: -1.0582706063e-14Initial ===== Cost (unregularised): 2.40881492228 /// Cost (regularised): 2.40881492228 Mean Gradient: 0.0516056901574
Iteration 50 Cost (unregularised): 1.74539997258 /// Cost (regularised): 1.74539997258 Mean Gradient: 1.01955789614e-09Initial ===== Cost (unregularised): 2.58498876008 /// Cost (regularised): 2.58498876008 Mean Gradient: 0.0388768685257
Iteration 3 Cost (unregularised): 1.72520399313 /// Cost (regularised): 1.72520399313 Mean Gradient: 0.0134040908157
Iteration 50 Cost (unregularised): 0.981 /// Cost (regularised): 0.981 Mean Gradient: -4.49319474346e-43Initial ===== Cost (unregularised): 4.40141352357 /// Cost (regularised): 4.40141352357 Mean Gradient: 0.0689167742968
Iteration 50 Cost (unregularised): 0.981 /// Cost (regularised): 0.981 Mean Gradient: -1.01563966458e-22
A learning rate of 0.01, quite low, has the best outcome, but exploring learning rates in this region, I only came out with 30-40% accuracy, a big improvement on the 8% or even 0% that I had seen previously, but not really what it should be achieving!
Edit 2
I've now finished and added a backpropagation function optimized for matrices rather than the iterative formula, so I can now run large numbers of epochs/iterations without it being painfully slow. The "backprop" function of the class now matches the gradient check (in fact it is 1/2 the size, but I think that is a problem in the gradient check, so we'll leave that because it should not matter proportionally, and I have tried adding in divisions to solve it). With large numbers of epochs I achieved much better accuracy, but there still seems to be a problem: when I previously programmed a slightly different style of simple 3-layer neural network as part of a book, on the same dataset CSVs, I got a much better training result. Here are some plots and data for large epochs.
This looks good, but we still have a pretty poor test set accuracy, and this is after 2,500 runs through the dataset; it should be getting a good result with far fewer!
Test accuracy: 61.150000000000006
Targets 6 9 8 2 2 2 4 3 8
Hypothesis 6 9 8 4 7 1 4 3 8
Edit 3, what dataset?
http://makeyourownneuralnetwork.blogspot.co.uk/2015/03/the-mnist-dataset-of-handwitten-digits.html?m=1
I used train.csv and test.csv to try with more data; it was no better, it just takes longer, so I have been using the subsets train_100 and test_10 while I debug.
Edit 4
It seems to learn something after a very large number of epochs (like 14,000). Since the whole dataset is used in the backprop function (not backpropIter), each loop is effectively an epoch, and with a ridiculous number of epochs on the subset of 100 training and 10 test samples, the test accuracy is quite good. However, with this small a sample that could easily be down to chance, and even then it is only 70%, not what you would be aiming for even on the small dataset. But it does show that it seems to be learning; I am trying parameters very extensively to rule that out.
| [
"Solved\nI solved my neural network. A brief description follows in case it helps anyone else. Thanks to all those that helped with suggestions. \nBasically, I had implemented it with a fully matrix approach ie. the backpropagation uses all examples each time. I later tried implementing it as a vector approach ie. backpropagation with each example. This was when I realised that the matrix approach doesn't update the parameters each example, so one run through this way is NOT the same as one run through each example in turn, effectively the whole training set is backpropagated as one example. Hence, my matrix implementation does work, but after many iterations, which then ends up taking longer than the vector approach anyway! Have opened a new question to learn more about this specific part but there we go, it just needed a lot of iterations with the matrix approach or a more gradual example by example approach.\n",
"You can \"debug\" your neural network using Tensorleap. It's a neural network debugging platform which uses some explainability algorithms. It allows you to upload your model and dataset and get a lot of information about your trained model.\nMNIST is already in as a demo project.\nThey have a free trial for 14 days as I know.\n"
] | [
1,
0
] | [] | [] | [
"backpropagation",
"machine_learning",
"mnist",
"neural_network",
"python"
] | stackoverflow_0042140866_backpropagation_machine_learning_mnist_neural_network_python.txt |
Q:
Saving XML files using ElementTree
I'm trying to develop simple Python (3.2) code to read XML files, do some corrections and store them back. However, during the storage step ElementTree adds this namespace nomenclature. For example:
<ns0:trk>
<ns0:name>ACTIVE LOG</ns0:name>
<ns0:trkseg>
<ns0:trkpt lat="38.5" lon="-120.2">
<ns0:ele>6.385864</ns0:ele>
<ns0:time>2011-12-10T17:46:30Z</ns0:time>
</ns0:trkpt>
<ns0:trkpt lat="40.7" lon="-120.95">
<ns0:ele>5.905273</ns0:ele>
<ns0:time>2011-12-10T17:46:51Z</ns0:time>
</ns0:trkpt>
<ns0:trkpt lat="43.252" lon="-126.453">
<ns0:ele>7.347168</ns0:ele>
<ns0:time>2011-12-10T17:52:28Z</ns0:time>
</ns0:trkpt>
</ns0:trkseg>
</ns0:trk>
The code snippet is below:
def parse_gpx_data(gpxdata, tzname=None, npoints=None, filter_window=None,
output_file_name=None):
ET = load_xml_library();
def find_trksegs_or_route(etree, ns):
trksegs=etree.findall('.//'+ns+'trkseg')
if trksegs:
return trksegs, "trkpt"
else: # try to display route if track is missing
rte=etree.findall('.//'+ns+'rte')
return rte, "rtept"
# try GPX10 namespace first
try:
element = ET.XML(gpxdata)
except ET.ParseError as v:
row, column = v.position
print ("error on row %d, column %d:%d" % row, column, v)
print ("%s" % ET.tostring(element))
trksegs,pttag=find_trksegs_or_route(element, GPX10)
NS=GPX10
if not trksegs: # try GPX11 namespace otherwise
trksegs,pttag=find_trksegs_or_route(element, GPX11)
NS=GPX11
if not trksegs: # try without any namespace
trksegs,pttag=find_trksegs_or_route(element, "")
NS=""
# Store the results if requested
if output_file_name:
ET.register_namespace('', GPX11)
ET.register_namespace('', GPX10)
ET.ElementTree(element).write(output_file_name, xml_declaration=True)
return;
I have tried using the register_namespace, but with no positive result.
Are there any specific changes for this version of ElementTree 1.3?
A:
In order to avoid the ns0 prefix the default namespace should be set before reading the XML data.
ET.register_namespace('', "http://www.topografix.com/GPX/1/1")
ET.register_namespace('', "http://www.topografix.com/GPX/1/0")
A:
You need to register all of your namespaces before you parse the XML file.
For example, if you have input XML like this,
where Capabilities is the root of your element tree:
<Capabilities xmlns="http://www.opengis.net/wmts/1.0"
xmlns:ows="http://www.opengis.net/ows/1.1"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:gml="http://www.opengis.net/gml"
xsi:schemaLocation="http://www.opengis.net/wmts/1.0 http://schemas.opengis.net/wmts/1.0/wmtsGetCapabilities_response.xsd"
version="1.0.0">
Then you have to register all the namespaces i.e attributes present with xmlns like this:
ET.register_namespace('', "http://www.opengis.net/wmts/1.0")
ET.register_namespace('ows', "http://www.opengis.net/ows/1.1")
ET.register_namespace('xlink', "http://www.w3.org/1999/xlink")
ET.register_namespace('xsi', "http://www.w3.org/2001/XMLSchema-instance")
ET.register_namespace('gml', "http://www.opengis.net/gml")
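After registering them, parsing and writing proceed as usual; a short sketch with placeholder file names:
tree = ET.parse("capabilities.xml")
tree.write("capabilities_out.xml", xml_declaration=True, encoding="utf-8")
# the registered prefixes (the default, ows, xlink, xsi, gml) are reused instead of ns0, ns1, ...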
A:
It seems that you have to declare your namespace, meaning that you need to change the first line of your xml from:
<ns0:trk>
to something like:
<ns0:trk xmlns:ns0="uri:">
Once you have done that, you will no longer get the ParseError for an unbound prefix: ..., and:
elem.tag = elem.tag[len('{uri:}'):]
will remove the namespace.
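A sketch of applying that across a whole parsed tree (same placeholder URI as above; the isinstance check skips comments and processing instructions, whose tag is not a string):
ns = '{uri:}'
for elem in tree.iter():
    if isinstance(elem.tag, str) and elem.tag.startswith(ns):
        elem.tag = elem.tag[len(ns):]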
A:
If you try to print the root, you will see something like this:
<Element '{http://www.host.domain/path/to/your/xml/namespace}RootTag' at 0x0000000000558DB8>
So, to avoid the ns0 prefix, you have to change the default namespace before parsing the XML data as below:
ET.register_namespace('', "http://www.host.domain/path/to/your/xml/namespace")
A:
Or you could regex it away:
import re

def remove_xml_namespace(xml_str: str) -> str:
    # strip the prefixed root element carrying the xmlns declarations, keeping only its contents
    xml_str = re.sub(r"<([^:]+):(\w+).+(?=xmlns)[^>]+>([\s\S]*)</(\1):(\2)>", r"\3", xml_str)
    # remove the namespace prefix from end tags
    xml_str = re.sub(r"</[^:]*:", r"</", xml_str)
    # remove the namespace prefix from start tags
    xml_str = re.sub(r"<[^/][^:]*:([^/>]*)(/?)>", r"<\1\2>", xml_str)
    return xml_str
| Saving XML files using ElementTree | I'm trying to develop simple Python (3.2) code to read XML files, do some corrections and store them back. However, during the storage step ElementTree adds this namespace nomenclature. For example:
<ns0:trk>
<ns0:name>ACTIVE LOG</ns0:name>
<ns0:trkseg>
<ns0:trkpt lat="38.5" lon="-120.2">
<ns0:ele>6.385864</ns0:ele>
<ns0:time>2011-12-10T17:46:30Z</ns0:time>
</ns0:trkpt>
<ns0:trkpt lat="40.7" lon="-120.95">
<ns0:ele>5.905273</ns0:ele>
<ns0:time>2011-12-10T17:46:51Z</ns0:time>
</ns0:trkpt>
<ns0:trkpt lat="43.252" lon="-126.453">
<ns0:ele>7.347168</ns0:ele>
<ns0:time>2011-12-10T17:52:28Z</ns0:time>
</ns0:trkpt>
</ns0:trkseg>
</ns0:trk>
The code snippet is below:
def parse_gpx_data(gpxdata, tzname=None, npoints=None, filter_window=None,
output_file_name=None):
ET = load_xml_library();
def find_trksegs_or_route(etree, ns):
trksegs=etree.findall('.//'+ns+'trkseg')
if trksegs:
return trksegs, "trkpt"
else: # try to display route if track is missing
rte=etree.findall('.//'+ns+'rte')
return rte, "rtept"
# try GPX10 namespace first
try:
element = ET.XML(gpxdata)
except ET.ParseError as v:
row, column = v.position
print ("error on row %d, column %d:%d" % row, column, v)
print ("%s" % ET.tostring(element))
trksegs,pttag=find_trksegs_or_route(element, GPX10)
NS=GPX10
if not trksegs: # try GPX11 namespace otherwise
trksegs,pttag=find_trksegs_or_route(element, GPX11)
NS=GPX11
if not trksegs: # try without any namespace
trksegs,pttag=find_trksegs_or_route(element, "")
NS=""
# Store the results if requested
if output_file_name:
ET.register_namespace('', GPX11)
ET.register_namespace('', GPX10)
ET.ElementTree(element).write(output_file_name, xml_declaration=True)
return;
I have tried using the register_namespace, but with no positive result.
Are there any specific changes for this version of ElementTree 1.3?
| [
"In order to avoid the ns0 prefix the default namespace should be set before reading the XML data.\nET.register_namespace('', \"http://www.topografix.com/GPX/1/1\")\nET.register_namespace('', \"http://www.topografix.com/GPX/1/0\")\n\n",
"You need to register all your namespaces before you parse xml file.\nFor example: If you have your input xml like this\nand Capabilities is the root of your Element tree.\n<Capabilities xmlns=\"http://www.opengis.net/wmts/1.0\"\n xmlns:ows=\"http://www.opengis.net/ows/1.1\"\n xmlns:xlink=\"http://www.w3.org/1999/xlink\"\n xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n xmlns:gml=\"http://www.opengis.net/gml\"\n xsi:schemaLocation=\"http://www.opengis.net/wmts/1.0 http://schemas.opengis.net/wmts/1.0/wmtsGetCapabilities_response.xsd\"\n version=\"1.0.0\">\n\nThen you have to register all the namespaces i.e attributes present with xmlns like this:\nET.register_namespace('', \"http://www.opengis.net/wmts/1.0\")\nET.register_namespace('ows', \"http://www.opengis.net/ows/1.1\")\nET.register_namespace('xlink', \"http://www.w3.org/1999/xlink\")\nET.register_namespace('xsi', \"http://www.w3.org/2001/XMLSchema-instance\")\nET.register_namespace('gml', \"http://www.opengis.net/gml\")\n\n",
"It seems that you have to declare your namespace, meaning that you need to change the first line of your xml from:\n<ns0:trk>\n\nto something like:\n<ns0:trk xmlns:ns0=\"uri:\">\n\nOnce did that you will no longer get ParseError: for unbound prefix: ..., and:\nelem.tag = elem.tag[(len('{uri:}'):]\n\nwill remove the namespace.\n",
"If you try to print the root, you will see something like this:\nhttp://www.host.domain/path/to/your/xml/namespace}RootTag' at 0x0000000000558DB8>\nSo, to avoid the ns0 prefix, you have to change the default namespace before parsing the XML data as below:\nET.register_namespace('', \"http://www.host.domain/path/to/your/xml/namespace\")\n\n",
"Or you could regex it away:\ndef remove_xml_namespace(xml_str: str) -> str:\n xml_str = re.sub(r\"<([^:]+):(\\w+).+(?=xmlns)[^>]+>([\\s\\S]*)</(\\1):(\\2)>\", r\"\\3\", xml_str)\n # replace namespace elements from end tag\n xml_str = re.sub(r\"</[^:]*:\", r\"</\", xml_str)\n # replace namespace from start tags\n xml_str = re.sub(r\"<[^/][^:]*:([^/>]*)(/?)>\", r\"<\\1\\2>\", xml_str)\n return xml_str\n\n"
] | [
89,
46,
1,
1,
0
] | [] | [] | [
"elementtree",
"python"
] | stackoverflow_0008983041_elementtree_python.txt |
Q:
How to fill the nans using groupby and filling values from another dataframe
I have an input dataframe (df1) with Ids, Subids and features, with NaNs in the feature columns:
df1 = pd.DataFrame({'Id': ['A1', 'A2', 'A3', 'B1', 'B2'],
'Subid':['A', 'A', 'A', 'B', 'B'],
'feature1':[2.6, 6.3, np.nan, np.nan, 3.3],
'feature2':[55, np.nan, np.nan, 44, 69],
'feature3':[np.nan, 0.5, 0.3, np.nan, np.nan],
'feature4':[22, np.nan, 46, np.nan, 33],
'feature5':[np.nan, np.nan, 52, np.nan, 53]
})
I have another input dataframe (df2) with Subids and the feature values to be filled in.
df2 = pd.DataFrame({'Subid': ['A', 'B'],
'feature1': [2.966666666666667, 1.65],
'feature2': [18.333333333333332, 56.5],
'feature3': [0.26666666666666666, 0.0],
'feature4': [22.666666666666668, 16.5],
'feature5': [17.333333333333332, 26.5]})
I need to fill the NaNs in df1 with the value given for each feature in df2 for the matching Subid.
I have tried lambda and the apply function but have been unable to achieve the result:
df1.loc[df1['feature1'].isna(), 'feature1'] = df2.groupby('Subid')['feature1'].apply(lambda x:x)
expected output:
outputdf = pd.DataFrame({'Id': ['A1', 'A2', 'A3', 'B1', 'B2'],
'Subid':['A', 'A', 'A', 'B', 'B'],
'feature1': [2.6, 6.3, 2.966667, 1.650000, 3.3],
'feature2': [55, 18.333333, 18.333333, 44, 69],
'feature3': [0.266667, 0.5, 0.3, 0.000000, 0.000000],
'feature4': [22, 22.666667, 46, 16.500000, 33],
'feature5': [17.333333, 17.333333, 52, 26.500000, 53]
})
Quick help is appreciated.
A:
You can use a merge before fillna:
out = df1.fillna(df1[['Subid']].merge(df2, how='left'))
Output:
Id Subid feature1 feature2 feature3 feature4 feature5
0 A1 A 2.600000 55.000000 0.266667 22.000000 17.333333
1 A2 A 6.300000 18.333333 0.500000 22.666667 17.333333
2 A3 A 2.966667 18.333333 0.300000 46.000000 52.000000
3 B1 B 1.650000 44.000000 0.000000 16.500000 26.500000
4 B2 B 3.300000 69.000000 0.000000 33.000000 53.000000
A:
You can use fillna to fill up the np.nans values and merge the second dataframe with matching Subid's
result = df1.fillna(df1[['Subid']].merge(df2, on='Subid', how='left'))
A:
for f in [f for f in df1.columns if f.startswith('feature')]:
df1[f]=df1[f].mask(pd.isnull, df1[['Subid']].merge(df2[['Subid', f]])[f])
returns:
Id Subid feature1 feature2 feature3 feature4 feature5
0 A1 A 2.600000 55.000000 0.266667 22.000000 17.333333
1 A2 A 6.300000 18.333333 0.500000 22.666667 17.333333
2 A3 A 2.966667 18.333333 0.300000 46.000000 52.000000
3 B1 B 1.650000 44.000000 0.000000 16.500000 26.500000
4 B2 B 3.300000 69.000000 0.000000 33.000000 53.000000
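For completeness, a per-column sketch (assuming the df1/df2 from the question) that maps each Subid to its fill value and avoids an explicit merge:
fill_map = df2.set_index('Subid')                 # one row of fill values per Subid
for col in fill_map.columns:                      # feature1 ... feature5
    df1[col] = df1[col].fillna(df1['Subid'].map(fill_map[col]))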
| How to fill the nans using groupby and filling values from another dataframe | I have the input dataframe(df1) with Ids, subids and features, having the nans in the features columns,
df1 = pd.DataFrame({'Id': ['A1', 'A2', 'A3', 'B1', 'B2'],
'Subid':['A', 'A', 'A', 'B', 'B'],
'feature1':[2.6, 6.3, np.nan, np.nan, 3.3],
'feature2':[55, np.nan, np.nan, 44, 69],
'feature3':[np.nan, 0.5, 0.3, np.nan, np.nan],
'feature4':[22, np.nan, 46, np.nan, 33],
'feature5':[np.nan, np.nan, 52, np.nan, 53]
})
I have another input dataframe(df2) having subids and the features values to be filled in.
df2 = pd.DataFrame({'Subid': ['A', 'B'],
'feature1': [2.966666666666667, 1.65],
'feature2': [18.333333333333332, 56.5],
'feature3': [0.26666666666666666, 0.0],
'feature4': [22.666666666666668, 16.5],
'feature5': [17.333333333333332, 26.5]})
I need to fill the nans in the df1 with the values present for each features in df2.
I have tried lambda and apply function but unable to achieve the result
df1.loc[df1['feature1'].isna(), 'feature1'] = df2.groupby('Subid')['feature1'].apply(lambda x:x)
expected output:
outputdf = pd.DataFrame({'Id': ['A1', 'A2', 'A3', 'B1', 'B2'],
'Subid':['A', 'A', 'A', 'B', 'B'],
'feature1': [2.6, 6.3, 2.966667, 1.650000, 3.3],
'feature2': [55, 18.333333, 18.333333, 44, 69],
'feature3': [0.266667, 0.5, 0.3, 0.000000, 0.000000],
'feature4': [22, 22.666667, 46, 16.500000, 33],
'feature5': [17.333333, 17.333333, 52, 26.500000, 53]
})
Quick help is appreciated.
| [
"You can use a merge before fillna:\nout = df1.fillna(df1[['Subid']].merge(df2, how='left'))\n\nOutput:\n Id Subid feature1 feature2 feature3 feature4 feature5\n0 A1 A 2.600000 55.000000 0.266667 22.000000 17.333333\n1 A2 A 6.300000 18.333333 0.500000 22.666667 17.333333\n2 A3 A 2.966667 18.333333 0.300000 46.000000 52.000000\n3 B1 B 1.650000 44.000000 0.000000 16.500000 26.500000\n4 B2 B 3.300000 69.000000 0.000000 33.000000 53.000000\n\n",
"You can use fillna to fill up the np.nans values and merge the second dataframe with matching Subid's\nresult = df1.fillna(df1[['Subid']].merge(df2, on='Subid', how='left'))\n\n",
"for f in [f for f in df1.columns if f.startswith('feature')]:\n df1[f]=df1[f].mask(pd.isnull, df1[['Subid']].merge(df2[['Subid', f]])[f])\n\nreturns:\n Id Subid feature1 feature2 feature3 feature4 feature5\n0 A1 A 2.600000 55.000000 0.266667 22.000000 17.333333\n1 A2 A 6.300000 18.333333 0.500000 22.666667 17.333333\n2 A3 A 2.966667 18.333333 0.300000 46.000000 52.000000\n3 B1 B 1.650000 44.000000 0.000000 16.500000 26.500000\n4 B2 B 3.300000 69.000000 0.000000 33.000000 53.000000\n\n"
] | [
2,
1,
0
] | [] | [] | [
"fillna",
"group_by",
"lambda",
"pandas",
"python"
] | stackoverflow_0074628746_fillna_group_by_lambda_pandas_python.txt |
Q:
I am trying to find out the id of the product sold month by month from 15 months of csv data and how many times it was sold in python
But there is a lot of duplicated code when I do it the way shown at the bottom.
What should I do to avoid the duplication and write it in a shorter way?
image of codes
output of codes
here the data
import numpy as np
import pandas as pd
train_purchases = pd.read_csv(r"C:\Users\Can\Desktop\dressipi_recsys2022\train_purchases.csv")
first_month = train_purchases.loc[(train_purchases['date'] > '2020-01-01') & (train_purchases['date'] <= '2020-01-31')].sort_values(by=["item_id"])["item_id"].tolist()
second_month = train_purchases.loc[(train_purchases['date'] > '2020-02-01') & (train_purchases['date'] <= '2020-02-31')].sort_values(by=["item_id"])["item_id"].tolist()
third_month = train_purchases.loc[(train_purchases['date'] > '2020-03-01') & (train_purchases['date'] <= '2020-03-31')].sort_values(by=["item_id"])["item_id"].tolist()
fourth_month = train_purchases.loc[(train_purchases['date'] > '2020-04-01') & (train_purchases['date'] <= '2020-04-31')].sort_values(by=["item_id"])["item_id"].tolist()
fifth_month = train_purchases.loc[(train_purchases['date'] > '2020-05-01') & (train_purchases['date'] <= '2020-05-31')].sort_values(by=["item_id"])["item_id"].tolist()
sixth_month = train_purchases.loc[(train_purchases['date'] > '2020-06-01') & (train_purchases['date'] <= '2020-06-31')].sort_values(by=["item_id"])["item_id"].tolist()
def most_frequent(List):
counter = 0
num = List[0]
for i in List:
curr_frequency = List.count(i)
if(curr_frequency> counter):
counter = curr_frequency
num = i
print(num," id sold", List.count(num), "times. ")
most_frequent(first_month)
most_frequent(second_month)
most_frequent(third_month)
most_frequent(fourth_month)
most_frequent(fifth_month)
most_frequent(sixth_month)
A:
you can use something like this:
start = '2020-01-01'
end = '2021-03-31'
first_day = pd.date_range(start, end, freq='MS').astype(str).to_list() #get first day of month given range
end_day = pd.date_range(start, end, freq='M').strftime("%Y-%m-%d 23:59:59").astype(str).to_list() #get last day of month of given date
dates = dict(zip(first_day, end_day)) #convert lists to dictionary
#dates={'2020-01-01':'2020-01-31'}....
train_purchases['date']=pd.to_datetime(train_purchases['date'])
for k,v in dates.items():
mask = train_purchases.loc[(train_purchases['date'] > k) & (train_purchases['date'] <= v)].sort_values(by=["item_id"]).item_id.value_counts()[:1]
print(mask.index[0]," id sold", mask.iloc[0], "times. ")
'''
8060 id sold 564 times.
8060 id sold 421 times.
8060 id sold 375 times.
8060 id sold 610 times.
8060 id sold 277 times.
8060 id sold 280 times.
8622 id sold 290 times.
8060 id sold 374 times.
8060 id sold 638 times.
8060 id sold 563 times.
8060 id sold 1580 times.
8060 id sold 765 times.
19882 id sold 717 times.
19882 id sold 570 times.
19882 id sold 690 times.
'''
Note
train_purchases['date'] > '2020-01-01'
If you use it as above, the first day of the month is not included in the calculation. If you want the first day of the month, you should use it as below
train_purchases['date'] >= '2020-01-01'
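If you prefer to avoid the per-month loop entirely, a hedged groupby sketch along these lines (same column names as in the question) yields the best-selling item per calendar month:
import pandas as pd

train_purchases['date'] = pd.to_datetime(train_purchases['date'])
monthly_top = (train_purchases
               .groupby([train_purchases['date'].dt.to_period('M'), 'item_id'])
               .size()                              # sales count per (month, item)
               .rename('times_sold')
               .reset_index()
               .sort_values('times_sold', ascending=False)
               .drop_duplicates('date')             # keep the top item of each month
               .sort_values('date'))
print(monthly_top)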
| I am trying to find out the id of the product sold month by month from 15 months of csv data and how many times it was sold in python | But there is a lot of duplicate codes when I do that way in bottom.
what should I do to avoid duplicate and do it in a shorter way?
image of codes
output of codes
here the data
import numpy as np
import pandas as pd
train_purchases = pd.read_csv(r"C:\Users\Can\Desktop\dressipi_recsys2022\train_purchases.csv")
first_month = train_purchases.loc[(train_purchases['date'] > '2020-01-01') & (train_purchases['date'] <= '2020-01-31')].sort_values(by=["item_id"])["item_id"].tolist()
second_month = train_purchases.loc[(train_purchases['date'] > '2020-02-01') & (train_purchases['date'] <= '2020-02-31')].sort_values(by=["item_id"])["item_id"].tolist()
third_month = train_purchases.loc[(train_purchases['date'] > '2020-03-01') & (train_purchases['date'] <= '2020-03-31')].sort_values(by=["item_id"])["item_id"].tolist()
fourth_month = train_purchases.loc[(train_purchases['date'] > '2020-04-01') & (train_purchases['date'] <= '2020-04-31')].sort_values(by=["item_id"])["item_id"].tolist()
fifth_month = train_purchases.loc[(train_purchases['date'] > '2020-05-01') & (train_purchases['date'] <= '2020-05-31')].sort_values(by=["item_id"])["item_id"].tolist()
sixth_month = train_purchases.loc[(train_purchases['date'] > '2020-06-01') & (train_purchases['date'] <= '2020-06-31')].sort_values(by=["item_id"])["item_id"].tolist()
def most_frequent(List):
counter = 0
num = List[0]
for i in List:
curr_frequency = List.count(i)
if(curr_frequency> counter):
counter = curr_frequency
num = i
print(num," id sold", List.count(num), "times. ")
most_frequent(first_month)
most_frequent(second_month)
most_frequent(third_month)
most_frequent(fourth_month)
most_frequent(fifth_month)
most_frequent(sixth_month)
| [
"you can use something like this:\nstart = '2020-01-01'\nend = '2021-03-31'\nfirst_day = pd.date_range(start, end, freq='MS').astype(str).to_list() #get first day of month given range\nend_day = pd.date_range(start, end, freq='M').strftime(\"%Y-%m-%d 23:59:59\").astype(str).to_list() #get last day of month of given date\ndates = dict(zip(first_day, end_day)) #convert lists to dictionary\n#dates={'2020-01-01':'2020-01-31'}....\n\ntrain_purchases['date']=pd.to_datetime(train_purchases['date'])\nfor k,v in dates.items():\n mask = train_purchases.loc[(train_purchases['date'] > k) & (train_purchases['date'] <= v)].sort_values(by=[\"item_id\"]).item_id.value_counts()[:1]\n print(mask.index[0],\" id sold\", mask.iloc[0], \"times. \")\n'''\n8060 id sold 564 times. \n8060 id sold 421 times. \n8060 id sold 375 times. \n8060 id sold 610 times. \n8060 id sold 277 times. \n8060 id sold 280 times. \n8622 id sold 290 times. \n8060 id sold 374 times. \n8060 id sold 638 times. \n8060 id sold 563 times. \n8060 id sold 1580 times. \n8060 id sold 765 times. \n19882 id sold 717 times. \n19882 id sold 570 times. \n19882 id sold 690 times. \n\n'''\n\nNote\ntrain_purchases['date'] > '2020-01-01'\n\nIf you use it as above, the first day of the month is not included in the calculation. If you want the first day of the month, you should use it as below\ntrain_purchases['date'] >= '2020-01-01'\n\n"
] | [
0
] | [] | [] | [
"csv",
"frequency",
"numpy",
"pandas",
"python"
] | stackoverflow_0074617948_csv_frequency_numpy_pandas_python.txt |
Q:
prints the sum of the numbers -5 to 0 in python
Ok so I am very new to python and I am supposed to make a code that gives me this output
input= -5
output = (-5)+(-4)+(-3)+(-2)+(-1)=-15
but I just can't wrap my head around it
I thought I could just somehow flip this
while True:
output = ""
num = int(input("enter a integer: "))
if num == 0:
exit()
for i in range(1, num + 1):
output += "{}".format(i)
if i != num:
output += "+"
output += " = {}".format(sum(range(num + 1)))
print(output)
but I could not figure it out.
please help.
If someone can show me how to get both of these in one piece of code, that would be helpful.
A:
n = int(input("Enter a integer: "))
res = ""
s = 0
x,y = [n,0] if n < 0 else [1, n+1]
for i in range(x, y, 1):
res += f"({i}) +"
s += i
res = res[:-2] + "=" + str(s)
print()
print(res)
A:
This code will handle both positive and negative num.
while True:
num = int(input("enter an integer: "))
if num == 0:
break
# Create the correct range based on num being positive or negative
nums = range(num, 0) if num < 0 else range(1, num+1)
# Create the string "a + b + c..." or "(-a) + (-b) + (-c)..."
eq = ' + '.join(f"({n})" if n < 0 else str(n) for n in nums)
# Print results
print(f"{eq} = {sum(nums)}")
Example run:
enter an integer: 5
1 + 2 + 3 + 4 + 5 = 15
enter an integer: -5
(-5) + (-4) + (-3) + (-2) + (-1) = -15
enter an integer: 0
| prints the sum of the numbers -5 to 0 in python | Ok so I am very new to python and I am supposed to make a code that gives me this output
input= -5
output = (-5)+(-4)+(-3)+(-2)+(-1)=-15
but I just can't wrap my head around it
I thought I could just somehow flip this
while True:
output = ""
num = int(input("enter a integer: "))
if num == 0:
exit()
for i in range(1, num + 1):
output += "{}".format(i)
if i != num:
output += "+"
output += " = {}".format(sum(range(num + 1)))
print(output)
but I could not figure it out.
please help.
If someone can show me how to get both of these in one code that would be helpfull.
| [
"n = int(input(\"Enter a integer: \"))\nres = \"\"\ns = 0\nx,y = [n,0] if n < 0 else [1, n+1]\nfor i in range(x, y, 1):\n res += f\"({i}) +\"\n s += i\nres = res[:-2] + \"=\" + str(s)\nprint()\nprint(res)\n\n",
"This code will handle both positive and negative num.\nwhile True:\n num = int(input(\"enter an integer: \"))\n if num == 0:\n break\n # Create the correct range based on num being positive or negative\n nums = range(num, 0) if num < 0 else range(1, num+1)\n # Create the string \"a + b + c...\" or \"(-a) + (-b) + (-c)...\"\n eq = ' + '.join(f\"({n})\" if n < 0 else str(n) for n in nums)\n # Print results\n print(f\"{eq} = {sum(nums)}\")\n\nExample run:\nenter an integer: 5\n1 + 2 + 3 + 4 + 5 = 15\nenter an integer: -5\n(-5) + (-4) + (-3) + (-2) + (-1) = -15\nenter an integer: 0\n\n"
] | [
1,
1
] | [] | [] | [
"python"
] | stackoverflow_0074628524_python.txt |
Q:
How can I add a Subject to my email to send via SMTP?
How can I add a subject in it like I did in a normal message? When I am trying to send an email with the code below, it is showing with no Subject:
import smtplib, ssl
email = "fromemailhere"
password = "passwordhere"
receiver = "toemailhere"
message = """
Hello World
"""
port = 465
sslcontext = ssl.create_default_context()
connection = smtplib.SMTP_SSL(
"smtp.gmail.com",
port,
context=sslcontext
)
connection.login(email, password)
connection.sendmail(email, receiver, message)
print("sent")
I have seen this example, but I don't understand how I can do this in my project. Can anyone show me with a code example?
I am not a Python developer, but I want to use this program.
A:
It is fairly straightforward. Use the email library (documentation). AFAIK it is a standard built-in library, so no additional installation is required. Your code would look like this:
import smtplib, ssl
from email.mime.text import MIMEText
email = "fromemailhere"
password = "passwordhere"
receiver = "toemailhere"
message = """
Hello World
"""
message = MIMEText(message, "plain")
message["Subject"] = "Hello World"
message["From"] = email
port = 465
sslcontext = ssl.create_default_context()
connection = smtplib.SMTP_SSL(
"smtp.gmail.com",
port,
context=sslcontext
)
connection.login(email, password)
connection.sendmail(email, receiver, message.as_string())
print("sent")
| How can I add a Subject to my email to send via SMTP? | How can I add a subject in it like I did in a normal message? When I am trying to send an email with the code below, it is showing with no Subject:
import smtplib, ssl
email = "fromemailhere"
password = "passwordhere"
receiver = "toemailhere"
message = """
Hello World
"""
port = 465
sslcontext = ssl.create_default_context()
connection = smtplib.SMTP_SSL(
"smtp.gmail.com",
port,
context=sslcontext
)
connection.login(email, password)
connection.sendmail(email, receiver, message)
print("sent")
I have seen this example, but I don't understand how can I do this in my project. Can anyone tell me with a code example?
I am not a Python developer, but I want to use this program.
| [
"It is fairly straight forward. Use email library (documentation). AFAIK it is a standard built in library, so no additional installation required. Your could would look like this:\nimport smtplib, ssl\nfrom email.mime.text import MIMEText\n\nemail = \"fromemailhere\"\npassword = \"passwordhere\"\nreceiver = \"toemailhere\"\n\nmessage = \"\"\"\nHello World\n\"\"\"\nmessage = MIMEText(message, \"plain\")\nmessage[\"Subject\"] = \"Hello World\"\nmessage[\"From\"] = email\n\nport = 465\nsslcontext = ssl.create_default_context()\nconnection = smtplib.SMTP_SSL(\n \"smtp.gmail.com\",\n port,\n context=sslcontext\n)\n\nconnection.login(email, password)\nconnection.sendmail(email, receiver, message.as_string())\n\nprint(\"sent\")\n\n"
] | [
1
] | [] | [] | [
"python",
"python_3.x",
"ssl"
] | stackoverflow_0074628823_python_python_3.x_ssl.txt |
Q:
Change data type of a specific column of a pandas dataframe
I want to sort a dataframe with many columns by a specific column, but first I need to change type from object to int. How to change the data type of this specific column while keeping the original column positions?
A:
df['colname'] = df['colname'].astype(int) works when changing from float values to int, at least.
A:
I have tried following:
df['column']=df.column.astype('int64')
and it worked for me.
A:
You can use reindex by sorted column by sort_values, cast to int by astype:
df = pd.DataFrame({'A':[1,2,3],
'B':[4,5,6],
'colname':['7','3','9'],
'D':[1,3,5],
'E':[5,3,6],
'F':[7,4,3]})
print (df)
A B D E F colname
0 1 4 1 5 7 7
1 2 5 3 3 4 3
2 3 6 5 6 3 9
print (df.colname.astype(int).sort_values())
1 3
0 7
2 9
Name: colname, dtype: int32
print (df.reindex(df.colname.astype(int).sort_values().index))
A B D E F colname
1 2 5 3 3 4 3
0 1 4 1 5 7 7
2 3 6 5 6 3 9
print (df.reindex(df.colname.astype(int).sort_values().index).reset_index(drop=True))
A B D E F colname
0 2 5 3 3 4 3
1 1 4 1 5 7 7
2 3 6 5 6 3 9
If first solution does not works because None or bad data use to_numeric:
df = pd.DataFrame({'A':[1,2,3],
'B':[4,5,6],
'colname':['7','3','None'],
'D':[1,3,5],
'E':[5,3,6],
'F':[7,4,3]})
print (df)
A B D E F colname
0 1 4 1 5 7 7
1 2 5 3 3 4 3
2 3 6 5 6 3 None
print (pd.to_numeric(df.colname, errors='coerce').sort_values())
1 3.0
0 7.0
2 NaN
Name: colname, dtype: float64
A:
To simply change one column, here is what you can do:
df.column_name.apply(int)
you can replace int with the desired datatype you want e.g (np.int64), str, category.
For multiple datatype changes, I would recommend the following:
df = pd.read_csv(data, dtype={'Col_A': str, 'Col_B': 'int64'})
A:
The documentation provides all the information needed. Let's take the toy dataframe from the docs:
d = {'col1': [1, 2], 'col2': [3, 4]}
df = pd.DataFrame(data=d)
If we want to cast col1 to int32 we can use, for instance, a dictionary:
df.astype({'col1': 'int32'})
In addition, this approach makes it possible to avoid the SettingWithCopyWarning.
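Tying this back to the original goal (convert, then sort, without moving any columns), a hedged sketch where 'colname' stands in for the column in question:
import pandas as pd

df['colname'] = pd.to_numeric(df['colname'], errors='coerce')  # bad values become NaN
df = df.sort_values('colname')                                 # column positions stay unchanged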
| Change data type of a specific column of a pandas dataframe | I want to sort a dataframe with many columns by a specific column, but first I need to change type from object to int. How to change the data type of this specific column while keeping the original column positions?
| [
"df['colname'] = df['colname'].astype(int) works when changing from float values to int atleast.\n",
"I have tried following:\ndf['column']=df.column.astype('int64')\n\nand it worked for me.\n",
"You can use reindex by sorted column by sort_values, cast to int by astype:\ndf = pd.DataFrame({'A':[1,2,3],\n 'B':[4,5,6],\n 'colname':['7','3','9'],\n 'D':[1,3,5],\n 'E':[5,3,6],\n 'F':[7,4,3]})\n\nprint (df)\n A B D E F colname\n0 1 4 1 5 7 7\n1 2 5 3 3 4 3\n2 3 6 5 6 3 9\n\nprint (df.colname.astype(int).sort_values())\n1 3\n0 7\n2 9\nName: colname, dtype: int32\n\nprint (df.reindex(df.colname.astype(int).sort_values().index))\n A B D E F colname\n1 2 5 3 3 4 3\n0 1 4 1 5 7 7\n2 3 6 5 6 3 9\n\nprint (df.reindex(df.colname.astype(int).sort_values().index).reset_index(drop=True))\n A B D E F colname\n0 2 5 3 3 4 3\n1 1 4 1 5 7 7\n2 3 6 5 6 3 9\n\nIf first solution does not works because None or bad data use to_numeric:\ndf = pd.DataFrame({'A':[1,2,3],\n 'B':[4,5,6],\n 'colname':['7','3','None'],\n 'D':[1,3,5],\n 'E':[5,3,6],\n 'F':[7,4,3]})\n\nprint (df)\n A B D E F colname\n0 1 4 1 5 7 7\n1 2 5 3 3 4 3\n2 3 6 5 6 3 None\n\nprint (pd.to_numeric(df.colname, errors='coerce').sort_values())\n1 3.0\n0 7.0\n2 NaN\nName: colname, dtype: float64\n\n",
"To simply change one column, here is what you can do:\ndf.column_name.apply(int)\nyou can replace int with the desired datatype you want e.g (np.int64), str, category.\nFor multiple datatype changes, I would recommend the following:\ndf = pd.read_csv(data, dtype={'Col_A': str,'Col_B':int64})\n",
"The documentation provides all the information needed. Let's take the toy dataframe from the docs:\nd = {'col1': [1, 2], 'col2': [3, 4]}\n\nIf we want to cast col1 to int32 we can use, for instance, a dictionary:\ndf.astype({'col1': 'int32'})\n\nIn addition, the approach above allows to avoid the SettingWithCopyWarning.\n"
] | [
31,
10,
6,
2,
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0041590884_pandas_python.txt |
Q:
Convert .opus to .wav in Python
I want to classify audio clip files using Tensorflow. But my audio files are in .opus format. From my research I need them to be in .wav format.
Therefore, I have to convert them. I would like to do this in Python, because I am working in a Jupyter notebook. I want to do this for hundreds of files.
All I found so far was this command line approach. My trouble with it, is that it would be too slow to perform on one file at a time. I want a method that can loop through hundreds of files in several directories and convert them all.
A:
One can use this in Python:
import os

opus_path = 'something.opus'
wav_path = 'something.wav'
os.system(f'ffmpeg -i "{opus_path}" -vn "{wav_path}"')
This can obviously be applied in a loop if you want:
for opus_path,wav_path in zip(opus_paths,wav_paths):
os.system(f'ffmpeg -i "{opus_path}" -vn "{wav_path}"')
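For converting hundreds of files across several directories, a hedged sketch using pathlib and subprocess (it assumes ffmpeg is on your PATH and "audio_root" is a placeholder for your top-level folder):
import subprocess
from pathlib import Path

root = Path("audio_root")                      # placeholder top-level folder
for opus_path in root.rglob("*.opus"):         # walk all subdirectories
    wav_path = opus_path.with_suffix(".wav")
    subprocess.run(["ffmpeg", "-y", "-i", str(opus_path), str(wav_path)], check=True)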
| Convert .opus to .wav in Python | I want to classify audio clip files using Tensorflow. But my audio files are in .opus format. From my research I need them to be in .wav format.
Therefore, I have to convert them. I would like to do this in Python, because I am working in a Jupyter notebook. I want to do this for hundreds of files.
All I found so far was this command line approach. My trouble with it, is that it would be too slow to perform on one file at a time. I want a method that can loop through hundreds of files in several directories and convert them all.
| [
"One can use this in Python:\nopus_path = 'something.opus'\nwav_path = 'something.wav'\nos.system(f'ffmpeg -i \"{opus_path}\" -vn \"{wav_path}\"')\n\nThis can obviously be applied in a loop if you want:\nfor opus_path,wav_path in zip(opus_paths,wav_paths):\n os.system(f'ffmpeg -i \"{opus_path}\" -vn \"{wav_path}\"')\n\n"
] | [
1
] | [] | [] | [
"audio",
"opus",
"python",
"tensorflow",
"wav"
] | stackoverflow_0074603951_audio_opus_python_tensorflow_wav.txt |
Q:
the compiled code path to the location of the error is shown and not the executable path
I searched a lot for this problem; although there are many similar attempts by other people, I couldn't find this exact problem, or maybe I am using the wrong terms while searching.
I am using this method to create an executable for my Python code. Everything runs fine, but when I run the executable and an error occurs, the path to the file where the error took place is the path from the machine where the code was compiled, not a path relative to the executable. I was expecting the traceback to refer to where the files currently live in the executable, not to the paths where the code was compiled.
setup.py
from cx_Freeze import setup, Executable
from setuptools import find_namespace_packages

build_exe_options = {
"optimize": 0,
"excludes": ["PyQt6"]}
pkgs = find_namespace_packages(include=["controller*", "model*", "Resources*", "view*", "DLL*"])
setup(name='myApp',
version='0.0',
packages=pkgs,
install_requires=[
'numpy',
"scipy"
],
include_package_data=True,
options={"build_exe": build_exe_options},
      executables=[Executable("app.py", base=None)])
The Commands
python -m venv venv
venv\Scripts\activate
python.exe -m pip install --upgrade pip
pip install cx_freeze setuptools
pip install -e .
python setup.py build_exe
Is there something I am missing while creating the executable?
A:
I found a solution by trial and error. I had to add in setup.py in options
"replace_paths": [("*", "")]
The solution was shown in enter link description here
Now the path shown in the error starts from the folder of the compiled source code.
Before: user/Desktop/projectFolder/..../something.py
Now: projectFolder/..../something.py
Which is exactly what I wanted.
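For clarity, a sketch of where that option sits in the setup.py from the question:
build_exe_options = {
    "optimize": 0,
    "excludes": ["PyQt6"],
    "replace_paths": [("*", "")],   # strip the build machine's absolute prefix from tracebacks
}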
| the compiled code path to the location of the error is shown and not the executable path | I searched a lot about this problem although there are many trials of other people but this problem I couldn't find or may be I am using wrong terms while searching.
I am using this method to create an executable for my python code. Everything runs fine but when I run the code and there is an error. The path to the file where the error took place is relative to the compiled code and not the executable. I was expecting to refer paths where the files are currently in the executable but not to show the paths where the code was compiled.
setup.py
build_exe_options = {
"optimize": 0,
"excludes": ["PyQt6"]}
pkgs = find_namespace_packages(include=["controller*", "model*", "Resources*", "view*", "DLL*"])
setup(name='myApp',
version='0.0',
packages=pkgs,
install_requires=[
'numpy',
"scipy"
],
include_package_data=True,
options={"build_exe": build_exe_options},
executables= Executable("app.py", base=None)
The Commands
python -m venv venv
venv\Scripts\activate
python.exe -m pip install --upgrade pip
pip install cx_freeze setuptools
pip install -e .
python setup.py build_exe
Is there something I am missing while creating the executable?
| [
"I found a solution by trial and error. I had to add in setup.py in options\n\"replace_paths\": [(\"*\", \"\")]\n\nThe solution was shown in enter link description here\nnow I have the path to error starting by the folder of the compiled source code\nBefore: user/Desktop/projectFolder/..../something.py\nNow: projectFolder/..../something.py\nWhich what I wanted exactly.\n"
] | [
0
] | [] | [] | [
"cx_freeze",
"exe",
"python"
] | stackoverflow_0074618798_cx_freeze_exe_python.txt |
Q:
Is it bad practice to add methods to a class outside of the class?
For example, I have a class Foo that I do not have access to the source of (specifics are unimportant).
class Foo:
def __init__(self):
self.x: int = 0
I want to extend this class, adding a new explicit constructor, I understand the typical way to do this is
class Foo(Foo):
@staticmethod
    def from_x(x: int) -> 'Foo':
foo = Foo()
foo.x = x
return foo
is it a bad idea to do something like the below? If so, why?
@staticmethod
def __Foo_from_x(x: int) -> 'Foo':
foo = Foo()
foo.x = x
return foo
Foo.from_x = __Foo_from_x
del __Foo_from_x
The reason could be something as simple as 'the first is stylistically cleaner', but I am more interested in the differences between the two implementations for a user of my extended class.
A:
If there is no specific reason to define a method not inside of the class definition it would be preferred to keep everything together. So it is definitely better to define your method in your class to provide better readability of your code. Especially if other persons will have to work with your code it could be harder to find the definition of it (even if it's just to understand what is happening).
Another thing I noticed in you code is the line:
if True: # for scoping
You should keep in mind, that an if-statement does not come with its own scope, these belong to methods, classes and modules. So your if-statement does not provide any functionality.
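For comparison, a minimal sketch of the two conventional routes, assuming Foo is imported from the third-party module:
class FooWithFactory(Foo):
    @classmethod
    def from_x(cls, x: int) -> "FooWithFactory":
        obj = cls()
        obj.x = x
        return obj

# or a plain module-level factory, leaving the class untouched
def foo_from_x(x: int) -> Foo:
    foo = Foo()
    foo.x = x
    return foo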
| Is it bad practice to add methods to a class outside of the class? | For example, I have a class Foo that I do not have access to the source of (specifics are unimportant).
class Foo:
def __init__(self):
self.x: int = 0
I want to extend this class, adding a new explicit constructor, I understand the typical way to do this is
class Foo(Foo):
@staticmethod
from_x(x: int) -> 'Foo':
foo = Foo()
foo.x = x
return foo
is it a bad idea to do something like the below? If so, why?
@staticmethod
def __Foo_from_x(x: int) -> 'Foo':
foo = Foo()
foo.x = x
return foo
Foo.from_x = __Foo_from_x
del __Foo_from_x
The reason could be something as simple as 'the first is stylistically cleaner', but I am more interested in the differences between the two implementations for a user of my extended class.
| [
"If there is no specific reason to define a method not inside of the class definition it would be preferred to keep everything together. So it is definitely better to define your method in your class to provide better readability of your code. Especially if other persons will have to work with your code it could be harder to find the definition of it (even if it's just to understand what is happening).\nAnother thing I noticed in you code is the line:\nif True: # for scoping\nYou should keep in mind, that an if-statement does not come with its own scope, these belong to methods, classes and modules. So your if-statement does not provide any functionality.\n"
] | [
0
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074628870_python_python_3.x.txt |
Q:
'Series' object has no attribute 'tax_code'
i have a dataframe with 2 columns user_id and tax_code and many rows. I imported this library https://pypi.org/project/python-codicefiscale/ and i need to verify if a tax_code is valid or not.
i tried to define this function that doesnt give me back errors.
def valid():
from codicefiscale import codicefiscale
if df.tax_code.codicefiscale.is_valid():
return "It's Valid"
else:
return "Not Valid"
i did this to create a new column and apply my function:
df["Is Valid"]=df.apply(valid)
i have this error displayed:
'Series' object has no attribute 'tax_code'
How can i solve it? tax_code is the name of my 2nd column
A:
Firstly, I don't know if this causes any problems, but avoid importing
inside your function; to the best of my knowledge, imports always go at
the top of the file.
Secondly, I think your error is caused by the if statement. There you
try to access 'tax_code' as if it were a DataFrame attribute. Try
indexing it with [''] as you normally would.
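A minimal sketch of a working version (assuming the library's codicefiscale.is_valid helper), applying the check per value of the tax_code column instead of to the whole DataFrame:
from codicefiscale import codicefiscale

def valid(code):
    return "It's Valid" if codicefiscale.is_valid(code) else "Not Valid"

df["Is Valid"] = df["tax_code"].apply(valid)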
| 'Series' object has no attribute 'tax_code' | i have a dataframe with 2 columns user_id and tax_code and many rows. I imported this library https://pypi.org/project/python-codicefiscale/ and i need to verify if a tax_code is valid or not.
i tried to define this function that doesnt give me back errors.
def valid():
from codicefiscale import codicefiscale
if df.tax_code.codicefiscale.is_valid():
return "It's Valid"
else:
return "Not Valid"
i did this to create a new column and apply my function:
df["Is Valid"]=df.apply(valid)
i have this error displayed:
'Series' object has no attribute 'tax_code'
How can i solve it? tax_code is the name of my 2nd column
| [
"\nFirstly i dont know if this causes any problems but avoid importing\ninside your code, to my best of knowledge importing always goes in\nthe beginning\nSecondly i think your error is caused on the if statement. There you\ntry to search for the 'tax_code' as it is a Dataframe attribute. Try\nusing it inside [''] as you would normally do\n\n"
] | [
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074628956_pandas_python.txt |
Q:
getting TimeoutException when using expected_conditions in heroku
I have a Selenium robot that worked perfectly locally, but on Heroku a TimeoutException is raised whenever it reaches an expected_condition (element_to_be_clickable, visibility_of_element_located and presence_of_element_located). Does anyone know how to fix this problem on Heroku?
here is an example where I used expected_conditions
element = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, "//button[contains(@class, 'productlistning__btn')]")))
and the chrome arguments that I used in my code:
chrome_options = Options()
chrome_options.binary_location = os.environ.get("GOOGLE_CHROME_BIN")
chrome_options.add_argument("--headless")
chrome_options.add_argument("--disable-dev-shm-usage")
chrome_options.add_argument("--start-maximized")
chrome_options.add_argument("--no-sandbox")
driver = webdriver.Chrome(executable_path=os.environ.get("CHROMEDRIVER_PATH"), options=chrome_options)
A:
I guess you didn't define the screen size for the driver while in headless mode the default screen size is 800,600.
So, to make your Selenium code working try setting the screen size to maximal or 1920,1080. As following:
from selenium.webdriver.chrome.options import Options
options = Options()
options.add_argument("--headless")
options.add_argument("window-size=1920,1080")
Or
options = Options()
options.add_argument("start-maximized")
| getting TimeoutException when using expected_conditions in heroku | I have a selenium robot that worked perfectly locally but on heroku TimeoutException raises whenever its on a expected_condition (element_to_be_clickable, visibility_of_element_located and presence_of_element_located). Anyone knows how to fix this problem in heroku.
here is an example where I used expected_conditions
element = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, "//button[contains(@class, 'productlistning__btn')]")))
and the chrome arguments that I used in my code:
chrome_options = Options()
chrome_options.binary_location = os.environ.get("GOOGLE_CHROME_BIN")
chrome_options.add_argument("--headless")
chrome_options.add_argument("--disable-dev-shm-usage")
chrome_options.add_argument("--start-maximized")
chrome_options.add_argument("--no-sandbox")
driver = webdriver.Chrome(executable_path=os.environ.get("CHROMEDRIVER_PATH"), options=chrome_options)
| [
"I guess you didn't define the screen size for the driver while in headless mode the default screen size is 800,600.\nSo, to make your Selenium code working try setting the screen size to maximal or 1920,1080. As following:\nfrom selenium.webdriver.chrome.options import Options\n\noptions = Options()\noptions.add_argument(\"--headless\")\noptions.add_argument(\"window-size=1920,1080\")\n\nOr\noptions = Options()\noptions.add_argument(\"start-maximized\")\n\n"
] | [
0
] | [] | [] | [
"heroku",
"python",
"screen_size",
"selenium",
"webdriverwait"
] | stackoverflow_0074628847_heroku_python_screen_size_selenium_webdriverwait.txt |
Q:
Reduce user's coins by 1 for each extra time taken after a specific limit - Python
Imagine a situation where users have 10 coins and 40 seconds to do a task. I want to implement something like this: if the total time taken by the user is 50, reduce the coins by 1; similarly, if it's 60, reduce by 2, and so on...
How should I implement this in python.
PS: In short, I wanna reduce coins for each 10 seconds elapsed after 40 seconds (for example)
A:
Python has a time library you can use. Something like this maybe.
import time
coins = 10
def user_task():
start_time = time.time()
# user completes task here
end_time = time.time()
time_taken = end_time - start_time
return time_taken
total_time_taken = user_task()
if total_time_taken >= 50:
coins = coins - 1
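To generalize to one coin per full 10 seconds over the 40-second limit, a small sketch on top of the code above:
LIMIT = 40
STEP = 10
overtime = max(0, total_time_taken - LIMIT)
coins = max(0, coins - int(overtime // STEP))   # 50 s -> -1 coin, 60 s -> -2, ...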
| Reduce user's coins by 1 for each extra time taken after a specific limit - Python | Imagine a situation where users have 10 coins and have 40 second to do a task. I want to implement something like this: if total time taken by the user is 50, reduce the coin by 1, similary if it's 60, reduce 2 and so on...
How should I implement this in python.
PS: In short, I wanna reduce coins for each 10 seconds elapsed after 40 seconds (for example)
| [
"Python has a time library you can use. Something like this maybe.\nimport time\n\ncoins = 10\n\n\ndef user_task():\n start_time = time.time()\n # user completes task here\n end_time = time.time()\n time_taken = end_time - start_time\n\n return time_taken\n \n\ntotal_time_taken = user_task()\n\nif total_time_taken >= 50:\n coins = coins - 1\n\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074628724_python.txt |
Q:
Elif syntax error with nested if conditions
The interpreter gives me a syntax error when it reaches elif in the code below. Why?
while walls:
rand_wall = walls[int(random.random()*len(walls))-1]
if rand_wall[1] != 0 and rand_wall[1] != mazeheight-1:
if maze[rand_wall[0][rand_wall[1]-1] == "u" and maze[rand_wall[0][rand_wall[1]+1] == "c":
print("no")
elif maze[rand_wall[0][rand_wall[1]-1] == "c" and maze[rand_wall[0][rand_wall[1]+1] == "u":
print("no")
If the first condition is true, I want to check a second condition and its mirror condition and run some code (replaced with print("no") for debugging) if either condition is true.
A:
There are some closing brackets missing.
while walls:
rand_wall = walls[int(random.random()*len(walls))-1]
if rand_wall[1] != 0 and rand_wall[1] != mazeheight-1:
if maze[rand_wall[0][rand_wall[1]-1] == "u" and maze[rand_wall[0][rand_wall[1]+1]]] == "c": # added two closing brackets before the == sign.
print("no")
elif maze[rand_wall[0][rand_wall[1]-1] == "c" and maze[rand_wall[0][rand_wall[1]+1]]] == "u": # same as before
print("no")
It should run now, but I don't know the expected behavior. Next time try to post the full stack trace.
A:
I would clean this up to avoid the chance of syntax error. I'm assuming, for example, that
maze[rand_wall[0][rand_wall[1]-1]
should be
maze[rand_wall[0]][rand_wall[1]-1]
^
add missing ]
while walls:
x, y = random.choice(walls)
if y != 0 and y != mazeheight-1:
row = maze[x]
if row[y-1] == "u" and row[y+1] == "c":
print("no")
elif row[y-1] == "c" and row[y+1] == "u":
print("no")
| Elif syntax error with nested if conditions | The interpreter gives me a syntax error when it reaches elif in the code below. Why?
while walls:
rand_wall = walls[int(random.random()*len(walls))-1]
if rand_wall[1] != 0 and rand_wall[1] != mazeheight-1:
if maze[rand_wall[0][rand_wall[1]-1] == "u" and maze[rand_wall[0][rand_wall[1]+1] == "c":
print("no")
elif maze[rand_wall[0][rand_wall[1]-1] == "c" and maze[rand_wall[0][rand_wall[1]+1] == "u":
print("no")
If the first condition is true, I want to check a second condition and its mirror condition and run some code (replaced with print("no") for debugging) if either condition is true.
| [
"There are some closing brackets missing.\nwhile walls:\n rand_wall = walls[int(random.random()*len(walls))-1]\n if rand_wall[1] != 0 and rand_wall[1] != mazeheight-1:\n if maze[rand_wall[0][rand_wall[1]-1] == \"u\" and maze[rand_wall[0][rand_wall[1]+1]]] == \"c\": # added two closing brackets before the == sign.\n print(\"no\")\n elif maze[rand_wall[0][rand_wall[1]-1] == \"c\" and maze[rand_wall[0][rand_wall[1]+1]]] == \"u\": # same as before\n print(\"no\")\n\nIt should run now, but I don't know the expected behavior. Next time try to post the full stack trace.\n",
"I would clean this up to avoid the chance of syntax error. I'm assuming, for example, that\nmaze[rand_wall[0][rand_wall[1]-1]\n\nshould be\nmaze[rand_wall[0]][rand_wall[1]-1] \n ^\n add missing ]\n\n\nwhile walls:\n x, y = random.choice(walls)\n if y != 0 and y != mazeheight-1:\n row = maze[x]\n if row[y-1] == \"u\" and row[y+1] == \"c\":\n print(\"no\")\n elif row[y-1] == \"c\" and row[y+1] == \"u\":\n print(\"no\")\n\n"
] | [
0,
0
] | [] | [] | [
"if_statement",
"python"
] | stackoverflow_0074629035_if_statement_python.txt |
Q:
Mutual TLS with self signed certificates, with requests in python
I created both client and server certificates:
# client
openssl req -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out ssl/client.crt -keyout ssl/client.key
# server
openssl req -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out ssl/server.crt -keyout ssl/server
Then with python I have the following:
import requests
response = requests.get(
"https://localhost:8080/",
verify="ssl/server.crt",
cert=("ssl/client.crt", "ssl/client.key")
)
I also have a gunicorn server running with the server self signed certificate.
The code snippet is throwing me the following error:
requests.exceptions.SSLError: HTTPSConnectionPool(host='localhost', port=8080): Max retries exceeded with url: / (Caused by SSLError(SSLError(1, '[SSL: TLSV1_ALERT_UNKNOWN_CA] tlsv1 alert unknown ca (_ssl.c:2633)')))
It is a self signed certificate so I am not sure what CA is it expecting.
A:
tlsv1 alert unknown ca
The server is sending a TLS alert back since it cannot validate your client certificate - the certificate authority (ca) which signed the certificate is unknown to the server. You either need to disable client certificate validation in your server or (better) make the server trust your client certificate.
It is a self signed certificate so I am not sure what CA is it expecting.
A self-signed certificate is signed by itself, i.e. the CA is the certificate itself.
A:
It looks like the server isn't able to validate your client certificate. If you're just using a pair of self-signed certificates for the client and server, then the server needs to also use the client's certificate as its CA, since it will attempt to validate it was signed by the CA - which in this case is the client.
I recently wrote a blog on deploying mTLS with self-signed certificates which might help you as it contains more details, specifically with how to configure the client and server. Check it out here: https://otterize.com/blog/so-you-want-to-deploy-mtls
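As a concrete illustration, a hedged gunicorn.conf.py sketch that makes the server require and trust the self-signed client certificate from the question (the key file path is an assumption; adjust it to wherever your server key was written):
import ssl

bind = "0.0.0.0:8080"
certfile = "ssl/server.crt"
keyfile = "ssl/server.key"         # adjust to the key produced by the openssl command above
ca_certs = "ssl/client.crt"        # trust the self-signed client certificate
cert_reqs = ssl.CERT_REQUIRED      # enforce mutual TLS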
| Mutual TLS with self signed certificates, with requests in python | I created both client and server certificates:
# client
openssl req -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out ssl/client.crt -keyout ssl/client.key
# server
openssl req -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out ssl/server.crt -keyout ssl/server
Then with python I have the following:
import requests
response = requests.get(
"https://localhost:8080/",
verify="ssl/server.crt",
cert=("ssl/client.crt", "ssl/client.key")
)
I also have a gunicorn server running with the server self signed certificate.
The code snippet is throwing me the following error:
requests.exceptions.SSLError: HTTPSConnectionPool(host='localhost', port=8080): Max retries exceeded with url: / (Caused by SSLError(SSLError(1, '[SSL: TLSV1_ALERT_UNKNOWN_CA] tlsv1 alert unknown ca (_ssl.c:2633)')))
It is a self signed certificate so I am not sure what CA is it expecting.
| [
"\ntlsv1 alert unknown ca\n\nThe server is sending a TLS alert back since it cannot validate your client certificate - the certificate authority (ca) which signed the certificate is unknown to the server. You either need to disable client certificate validation in your server or (better) make the server trust your client certificate.\n\nIt is a self signed certificate so I am not sure what CA is it expecting.\n\nA self-signed certificate is signed by itself, i.e. the CA is the certificate itself.\n",
"It looks like the server isn't able to validate your client certificate. If you're just using a pair of self-signed certificates for the client and server, then the server needs to also use the client's certificate as its CA, since it will attempt to validate it was signed by the CA - which in this case is the client.\nI recently wrote a blog on deploying mTLS with self-signed certificates which might help you as it contains more details, specifically with how to configure the client and server. Check it out here: https://otterize.com/blog/so-you-want-to-deploy-mtls\n"
] | [
0,
0
] | [] | [] | [
"mtls",
"python",
"python_3.9",
"python_requests",
"ssl"
] | stackoverflow_0074294395_mtls_python_python_3.9_python_requests_ssl.txt |
Q:
How to set a max column length for streamlit-aggrid
If I have a really long column entry that takes up most of the table (like below) how do I set the table options such that it gets truncated?
import streamlit as st
from st_aggrid import AgGrid
import pandas as pd
df = pd.DataFrame({'some short column': ['a', 'b', 'c'],
'some long column': ['all column and no spacing makes coder a dull boy '
'all column and no spacing makes coder a dull boy '
'all column and no spacing makes coder a dull boy '
'all column and no spacing makes coder a dull boy '
'all column and no spacing makes coder a dull boy '
'all column and no spacing makes coder a dull boy '
'all column and no spacing makes coder a dull boy '
'all column and no spacing makes coder a dull boy ', 'two', 'three'],
'some other column': ['one', 'two', 'three'],
'some other column 2': ['one', 'two', 'three']})
AgGrid(df)
Edit:
AgGrid(df, fit_columns_on_grid_load=True)
Works for this example but my actual df has > 20 columns, and this causes the columns to be too crunched together to read.
A:
Use the fit_columns_on_grid_load param, this is False by default.
AgGrid(df, fit_columns_on_grid_load=True)
A:
Combined with @ferdy's answer. If I use:
gb = GridOptionsBuilder.from_dataframe(df, min_column_width=30)
AgGrid(df, gridOptions=gb.build(), fit_columns_on_grid_load=True)
Then it works securely.
A:
Thanks for this combination! I had to add the fit_columns_on_grid in a separate configuration line though, so:
gb = GridOptionsBuilder.from_dataframe(df)
gb.configure_default_column(min_column_width=150)
interactive_table = AgGrid(df, gridOptions=gb.build(), fit_columns_on_grid_load=True)
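If shrinking every column is too aggressive with 20+ columns, another option may be to cap only the offending column; a hedged sketch, where maxWidth is an ag-Grid column property passed through configure_column:
gb = GridOptionsBuilder.from_dataframe(df)
gb.configure_column("some long column", maxWidth=300)   # cap just this column, in pixels
AgGrid(df, gridOptions=gb.build(), fit_columns_on_grid_load=True)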
| How to set a max column length for streamlit-aggrid | If I have a really long column entry that takes up most of the table (like below) how do I set the table options such that it gets truncated?
import streamlit as st
from st_aggrid import AgGrid
import pandas as pd
df = pd.DataFrame({'some short column': ['a', 'b', 'c'],
'some long column': ['all column and no spacing makes coder a dull boy '
'all column and no spacing makes coder a dull boy '
'all column and no spacing makes coder a dull boy '
'all column and no spacing makes coder a dull boy '
'all column and no spacing makes coder a dull boy '
'all column and no spacing makes coder a dull boy '
'all column and no spacing makes coder a dull boy '
'all column and no spacing makes coder a dull boy ', 'two', 'three'],
'some other column': ['one', 'two', 'three'],
'some other column 2': ['one', 'two', 'three']})
AgGrid(df)
Edit:
AgGrid(df, fit_columns_on_grid_load=True)
Works for this example but my actual df has > 20 columns, and this causes the columns to be too crunched together to read.
| [
"Use the fit_columns_on_grid_load param, this is False by default.\nAgGrid(df, fit_columns_on_grid_load=True)\n\n",
"Combined with @ferdy's answer. If I use:\ngb = GridOptionsBuilder.from_dataframe(df, min_column_width=30)\nAgGrid(df, gridOptions=gb.build(), fit_columns_on_grid_load=True)\n\nThen it works securely.\n",
"Thanks for this combination! I had to add the fit_columns_on_grid in a separate configuration line though, so:\ngb = GridOptionsBuilder.from_dataframe(df)\ngb.configure_default_column(min_column_width=150)\ninteractive_table = AgGrid(df, gridOptions=gb.build(), fit_columns_on_grid_load=True)\n\n"
] | [
2,
1,
1
] | [] | [] | [
"ag_grid",
"python",
"streamlit"
] | stackoverflow_0072624323_ag_grid_python_streamlit.txt |
Q:
How do i replace a column in a numpy array with 0 given that the column contains the number 0?
I am trying to make a program that replaces entire columns of matrices if the column contains a 0.
I tried making a numpy array, x, and running the command:
x[:, np.where(x <= 0)] = 0
The desired outcome would be that the sum of all columns containing 0 would be 0.
A:
X[:, (X==0).any(axis=0)]=0
Explanation
(X==0) is a matrix of booleans of the same shape as X, saying whether each element of X is 0
(X==0).any() is True iff at least one of those booleans is True
(X==0).any(axis=0) does that only along axis 0, so it gives an array of n booleans, one per column, each of them True iff one boolean of X==0 is True in that column.
X[:, (X==0).any(axis=0)] selects all rows of the columns that are True in the previous array, that is, the columns that contain a 0.
So X[:, (X==0).any(axis=0)]=0 sets everything to 0 in those columns
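A quick demonstration on a small illustrative array:
import numpy as np

X = np.array([[1, 0, 3],
              [4, 5, 6]])
X[:, (X == 0).any(axis=0)] = 0
print(X)                # [[1 0 3]
                        #  [4 0 6]]
print(X.sum(axis=0))    # [5 0 9] -> every column that contained a 0 now sums to 0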
| How do i replace a column in a numpy array with 0 given that the column contains the number 0? | I am trying to make a program that replaces entire columns of matrices if the column contains a 0.
I tried making a numpy array, x, and running the command:
x[:, np.where(x <= 0)] = 0
The desired outcome would be that the sum of all columns containing 0 would be 0.
| [
"X[:, (X==0).any(axis=0)]=0\n\nExplanation\n(X==0) is matrix of booleans of the same shape than X, saying if each element of X is 0\n(X==0).any() says is True iff at least of those booleans is True\n(X==0).any(axis=0) does that only along axis 0, so it gives an array of n booleans, one per columns, each of them True iff one boolean of X==0 is True in that column.\nX[:, (X==0).any(axis=0)] are all rows of columns that are True in the previous array, that is that contain a 0.\nSo X[:, (X==0).any(axis=0)]=0 puts everything to 0 in those columns\n"
] | [
1
] | [] | [] | [
"numpy",
"python"
] | stackoverflow_0074629136_numpy_python.txt |
Q:
Flip a Y coordinate within a min-max range
I'm calculating a Y coordinate which represents a value on a y-axis on a chart I'm drawing in PIL, to display on an LCD display.
I set a start_y and an end_y. The (physical) top of the screen is y = 0. I get my y coordinate as follows:
start_y = 10 # 10 pixels from top of screen
end_y = 60 # 60 pixels from top of screen
min = 10
max = 25
percentage_within_range = (value - min) / (max - min)
y = (percentage_within_range * (end_y - start_y)) + start_y
But the y coordinate I get is the 'wrong way around'. I.e. if the value was 19, the resultant y would be 40. That's 20 pixels from the bottom of the chart's display area, but I want it to be 20 pixels from the top of the chart's display area.
How do I reframe my calculation to return my y value 'reversed'?
A:
Because of the way the equation is written, it gives the distance in pixels from the top of the chart, since min/max and start_y/end_y are all measured from the top. If you want the mirrored distance, you have to subtract the current result from the highest possible pixel value, which is end_y. Your equation will become
y = end_y - ((percentage_within_range * (end_y - start_y)) + start_y)
As far as I understand the question, this is the answer I came up with.
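Plugging the question's numbers into that formula (value 19, p = 0.6) gives 60 - (0.6*50 + 10) = 20 rather than the expected 30, so a variant that keeps start_y as the offset may be closer to what is wanted; a sketch:
y = start_y + (1 - percentage_within_range) * (end_y - start_y)
# equivalently: y = (start_y + end_y) - original_y
# value 19 -> p = 0.6 -> y = 10 + 0.4 * 50 = 30, i.e. 20 px below the top of the chart area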
| Flip a Y coordinate within a min-max range | I'm calculating a Y coordinate which represents a value on a y-axis on a chart I'm drawing in PIL, to display on an LCD display.
I set a start_y and an end_y. The (physical) top of the screen is y = 0. I get my y coordinate as follows:
start_y = 10 # 10 pixels from top of screen
end_y = 60 # 60 pixels from top of screen
min = 10
max = 25
percentage_within_range = (value - min) / (max - min)
y = (percentage_within_range * (end_y - start_y)) + start_y
But the y coordinate I get is the 'wrong way around'. I.e. if the value was 19, the resultant y would be 40. That's 20 pixels from the bottom of the chart's display area, but I want it to be 20 pixels from the top of the chart's display area.
How do I reframe my calculation to return my y value 'reversed'?
| [
"because of the nature of the equation, it will give the value of the total pixels from the top of the chart as your min max and start end also start from the top. If you want the remaining pixels, You will have to subtract the current answer from the highest pixel possible which is end_y. Your equation will become\ny = end_y - ((percentage_within_range * (end_y - start_y)) + start_y)\n\nas far as my understanding of the question, I came upon this answer.\n"
] | [
2
] | [] | [] | [
"charts",
"math",
"python"
] | stackoverflow_0074628945_charts_math_python.txt |
Q:
Sklearn regression with label encoding
I'm attempting to use sklearn's linear regression model to predict fantasy players' points. I have numeric stats for each player and obviously their name, which I have encoded with the LabelEncoder function. My question is: when performing the linear regression with the encoded values included in the training data, the model doesn't seem to recognize them as IDs, but instead treats them as numeric values.
So is there a better way to encode player names so they are treated as an ID so that it recognizes player 1 averages 25 points compared to player 2's 20? Or is this type of encoding even possible with linear regression? Thanks in advance
A:
Apart from one-hot encoding (which might create way too many columns in this case), mean target encoding does exactly what you need (it encodes the category with its mean target value). You should be wary about target leakage in the case of rare categories, though. The sklearn-compatible category_encoders library provides several robust implementations, such as LeaveOneOutEncoder()
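A hedged sketch of how that could look in a pipeline; the column name "player_name" and the X_train/y_train variables are assumptions, not from the question:
from category_encoders import LeaveOneOutEncoder
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

model = make_pipeline(
    LeaveOneOutEncoder(cols=["player_name"]),   # assumed name of the player column
    LinearRegression(),
)
model.fit(X_train, y_train)                     # X_train keeps the raw name column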
| Sklearn regression with label encoding | I'm attempting to use sklearn's linear regression model to predict fantasy players points. I have numeric stats for each player and obviously their name which I have encoded with the Label encoder function. My question is when performing the linear regression the encoded values included in the training it doesn't seem to recognize it as an ID but instead treats it as a numeric value.
So is there a better way to encode player names so they are treated as an ID so that it recognizes player 1 averages 25 points compared to player 2's 20? Or is this type of encoding even possible with linear regression? Thanks in advance
| [
"Apart from one hot encoding (which might create way too many columns in this case), mean target encoding does exactly what you need (encodes the category with its mean target value). You should be vary about the target leakage in case of rare categories though. sklearn-compatible category_encoders library provides several robust implementations, such as LeaveOneOutEncoder()\n"
] | [
1
] | [] | [] | [
"linear_regression",
"one_hot_encoding",
"python",
"scikit_learn"
] | stackoverflow_0074628815_linear_regression_one_hot_encoding_python_scikit_learn.txt |
Q:
Do Tensorflow have documentation in VS Code?
My goal is to use TensorFlow in Visual Studio Code. However, the documentation is non-existent. I can import it, but there is a warning when I hover over the import statement.
What I've done:
Install tensorflow on Mac Mini M1 with
%pip install tensorflow-macos
%pip install tensorflow-metal
A:
The problem is addressed in this thread- https://github.com/tensorflow/tensorflow/issues/56231
The solution that worked for me is by creating a symlink in lib/pythonx.x/site-packages/tensorflow
ln -s ../keras/api/_v2/keras/ keras
| Do Tensorflow have documentation in VS Code? | My goal is to use Tensorflow in Visual Studio Code. However, the documentation is non existant. I can import it but there is a warning when I hover the import statement
What I've did:
Install tensorflow on Mac Mini M1 with
%pip install tensorflow-macos
%pip install tensorflow-metal
| [
"The problem is addressed in this thread- https://github.com/tensorflow/tensorflow/issues/56231\nThe solution that worked for me is by creating a symlink in lib/pythonx.x/site-packages/tensorflow\nln -s ../keras/api/_v2/keras/ keras\n\n"
] | [
0
] | [] | [] | [
"macos",
"python",
"python_3.x",
"tensorflow",
"tensorflow2.0"
] | stackoverflow_0074628773_macos_python_python_3.x_tensorflow_tensorflow2.0.txt |
Q:
Pandas dataframe conditional column update based on another dataframe
I have two dataframes with two columns each - 'MeetingId' and 'TAB'. The first dataframe is the full table, but it has some errors in the 'TAB' column. The second dataframe has the solutions to the errors. I would like to replace the 'TAB' column of the first dataframe with the 'TAB' column of the second datafrmae if the 'MeetingId's match up.
Example of table:
MeetingId TAB
123 TRUE
124 FALSE
Code:
df1 = meetingdf1
df1.set_index("MeetingId")
df2 = meetingdf2
df2.set_index("MeetingId")
df1.update(df2)
print(df1)
A:
df['TAB'] = df.apply(lambda x: df2[df2['MeetingId'] == x['MeetingId']]['TAB'].values[0], axis=1)
OR
df.loc[df['MeetingId'].isin(df2['MeetingId']), 'TAB'] = df2['TAB']
Example:
> df
MeetingId TAB
0 123 True
1 124 False
> df2
MeetingId TAB
0 123 False
1 124 True
2 125 False
Output after running code above:
> df
MeetingId TAB
0 123 False
1 124 True
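A hedged sketch that stays closer to the question's own update() attempt; note that set_index returns a new frame, so the result has to be kept:
df1 = meetingdf1.set_index("MeetingId")
df2 = meetingdf2.set_index("MeetingId")
df1.update(df2["TAB"])          # overwrites TAB wherever MeetingId matches
df1 = df1.reset_index()
print(df1)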
| Pandas dataframe conditional column update based on another dataframe | I have two dataframes with two columns each - 'MeetingId' and 'TAB'. The first dataframe is the full table, but it has some errors in the 'TAB' column. The second dataframe has the solutions to the errors. I would like to replace the 'TAB' column of the first dataframe with the 'TAB' column of the second datafrmae if the 'MeetingId's match up.
Example of table:
MeetingId TAB
123 TRUE
124 FALSE
Code:
df1 = meetingdf1
df1.set_index("MeetingId")
df2 = meetingdf2
df2.set_index("MeetingId")
df1.update(df2)
print(df1)
| [
"df['TAB'] = df.apply(lambda x: df2[df2['MeetingId'] == x['MeetingId']]['TAB'].values[0], axis=1)\n\nOR\ndf.loc[df['MeetingId'].isin(df2['MeetingId']), 'TAB'] = df2['TAB']\n\nExample:\n> df\n\n MeetingId TAB\n0 123 True\n1 124 False\n\n> df2\n\n MeetingId TAB\n0 123 False\n1 124 True\n2 125 False\n\nOutput after running code above:\n> df\n\n MeetingId TAB\n0 123 False\n1 124 True\n\n"
] | [
2
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074629249_dataframe_pandas_python.txt |
Q:
Making a calculator to find the number of days until Christmas [Feedback]
I've made a calculator to find the number of days until Christmas. I'm asking anyone to give me any feedback that might help me improve this calculator in any way possible. The calculator works perfectly fine, but I am requesting feedback on this to make it as simple as possible.
# Made using Python Language (Translated From Lua)
from datetime import date
today = date.today()
christmasday = 354
def CalculateDaysLeft(days, months):
totaldaysofyear = int(((months*30-30)+days))
if totaldaysofyear >= christmasday:
totaldaysofyear = totaldaysofyear - 364
print("Christmas is coming in: ",christmasday-totaldaysofyear," days!\n")
InputType = input('Do you wish to find out how many days are left until Christmas eve?\n')
if str.lower(InputType) == str.lower("Yes"):
days = int(today.strftime("%d"))
months = int(today.strftime("%m"))
CalculateDaysLeft(days, months)
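One way to simplify this, sketched below, is to let the datetime module do the date arithmetic instead of approximating every month as 30 days (this assumes 25 December is the target date; swap in the 24th if you really mean Christmas Eve):
from datetime import date

today = date.today()
christmas = date(today.year, 12, 25)
if today > christmas:                        # after the 25th, count toward next year's Christmas
    christmas = date(today.year + 1, 12, 25)
print("Christmas is coming in:", (christmas - today).days, "days!")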
| Making a calculator to find the number of days until Christmas [Feedback] | I've made a calculator to find the number of days until Christmas. I'm asking anyone to give me any feedback that might help me improve this calculator in any way possible. The calculator works perfectly fine, but I am requesting feedback on this to make it as simple as possible.
# Made using Python Language (Translated From Lua)
from datetime import date
today = date.today()
christmasday = 354
def CalculateDaysLeft(days, months):
totaldaysofyear = int(((months*30-30)+days))
if totaldaysofyear >= christmasday:
totaldaysofyear = totaldaysofyear - 364
print("Christmas is coming in: ",christmasday-totaldaysofyear," days!\n")
InputType = input('Do you wish to find out how many days are left until Christmas eve?\n')
if str.lower(InputType) == str.lower("Yes"):
days = int(today.strftime("%d"))
months = int(today.strftime("%m"))
CalculateDaysLeft(days, months)
| [] | [] | [
"There are some mistakes in it. Like the 354th day is christmas as when I type in yes, it shows 24 days left when there is more 26 days. And if someone types no you should add-on something to that also like an elif statement. And add an else statement for the input if they type anyhting else then the answer should be, \"Please enter Yes or No\"\n"
] | [
-1
] | [
"python",
"time"
] | stackoverflow_0074629211_python_time.txt |
Q:
Generate one plot per revealjs slide in python for loop using Quarto
I'm trying to generate a slide deck consisting of several plots using Quarto with revealjs output format. I need to generate these plots with plotly through a loop, but that is messing up my output. I'm getting the plots lined up vertically, which doesn't fit the slide size. What I want to achieve is one plot per slide. Is that possible?
I haven't been able to find any solution so far. Here's some code to replicate the problem. Note that I only used Iris to get an example similar to mine; I know the for loop doesn't make much sense in this case.
---
title: "Foor Loops in Quarto"
format:
revealjs:
theme: default
jupyter: ds
code-fold: true
execute:
echo: false
freeze: auto
---
```{python}
#| include: false
# Imports
from sklearn import datasets
import pandas as pd
import plotly.express as px
```
```{python}
#| include: false
# Get data
iris = datasets.load_iris()
df = pd.DataFrame(iris['data'], columns=iris['feature_names'])
df['target'] = iris['target']
```
## Scatter plot per species
```{python}
for target in df['target'].unique():
fig = px.scatter(df, x='petal length (cm)', y='sepal length (cm)')
fig.show()
```
A:
You can do this very easily just by generating markdown slide header dynamically in each iteration of loop using display(Markdown("Slide header")) along with chunk option output: asis.
---
title: "For Loops in Quarto"
format:
revealjs:
theme: default
code-fold: true
execute:
echo: false
---
```{python}
#| include: false
from IPython.display import display, Markdown
from sklearn import datasets
import pandas as pd
import plotly.express as px
```
```{python}
#| include: false
# Get data
iris = datasets.load_iris()
df = pd.DataFrame(iris['data'], columns=iris['feature_names'])
df['target'] = iris['target']
```
```{python}
#| output: asis
for target in df['target'].unique():
display(Markdown("## Scatter plot per species"))
fig = px.scatter(df, x='petal length (cm)', y='sepal length (cm)')
fig.show()
```
| Generate one plot per revealjs slide in python for loop using Quarto | I'm trying to generate a slide deck consisting of several plots using Quarto with revealjs output format. I need to generate these plots with plotly through a loop, but that is messing up my output. I'm getting the plots lined up vertically, which doesn't fit the slide size. What I want to achieve is one plot per slide. Is that possible?
I havn't been able to find any solution so far. Here's some code to replicate the problem. Note that I only used Iris to get an example similar to mine, I know the for loop doesn't make much sense in this case.
---
title: "Foor Loops in Quarto"
format:
revealjs:
theme: default
jupyter: ds
code-fold: true
execute:
echo: false
freeze: auto
---
```{python}
#| include: false
# Imports
from sklearn import datasets
import pandas as pd
import plotly.express as px
```
```{python}
#| include: false
# Get data
iris = datasets.load_iris()
df = pd.DataFrame(iris['data'], columns=iris['feature_names'])
df['target'] = iris['target']
```
## Scatter plot per species
```{python}
for target in df['target'].unique():
fig = px.scatter(df, x='petal length (cm)', y='sepal length (cm)')
fig.show()
```
| [
"You can do this very easily just by generating markdown slide header dynamically in each iteration of loop using display(Markdown(\"Slide header\")) along with chunk option output: asis.\n---\ntitle: \"For Loops in Quarto\"\nformat: \n revealjs:\n theme: default\ncode-fold: true\nexecute: \n echo: false\n---\n\n```{python}\n#| include: false\n\nfrom IPython.display import display, Markdown\nfrom sklearn import datasets\nimport pandas as pd\nimport plotly.express as px\n```\n\n```{python}\n#| include: false\n# Get data\niris = datasets.load_iris()\ndf = pd.DataFrame(iris['data'], columns=iris['feature_names'])\ndf['target'] = iris['target']\n```\n\n```{python}\n#| output: asis\n\nfor target in df['target'].unique():\n display(Markdown(\"## Scatter plot per species\"))\n fig = px.scatter(df, x='petal length (cm)', y='sepal length (cm)')\n fig.show()\n```\n\n\n\n\n"
] | [
0
] | [] | [] | [
"plotly",
"python",
"quarto"
] | stackoverflow_0074626298_plotly_python_quarto.txt |
Q:
iterate through list of lists and check if list contains all "False"
I am having an issue with my code: it works fine on the sample data and should also work on the test data, but it doesn't.
I have a list of lists that contain True/False, but as STRINGS! The list looks like this:
['True', 'True', 'True']
['False', 'False', 'False']
['False', 'False', 'False']
['False', 'False', 'False']
['True', 'True', 'True']
['True', 'True', 'False']
['True', 'True', 'True']
['True', 'True', 'False']
['True', 'True', 'True']
now I want to print OK if all last three elements were "True", SHUT_DOWN when last three elements were "False" and WARNING when last element was "False".
Result would look like this:
['True', 'True', 'True'] ==> OK
['False', 'False', 'False'] ==> SHUT_DOWN
['False', 'False', 'False'] ==> SHUT_DOWN
['False', 'False', 'False'] ==> SHUT_DOWN
['True', 'True', 'True'] ==> OK
['True', 'True', 'False'] ==> WARNING
['True', 'True', 'True'] ==> OK
['True', 'True', 'False'] ==> WARNING
['True', 'True', 'True'] ==> OK
This is my code, which works on the sample data but not on the test data. On the test data I get WARNING instead of SHUT_DOWN.
for p in path:
if all(item is "False" for item in p[-3:]):
print("SHUT_DOWN")
elif p[-1] == "False":
print("WARNING")
else:
print("OK")
What am I doing wrong that I get two different results? Is there maybe a better way of doing this type of operation? Maybe convert "True" and "False" to booleans instead of strings? Thx!
A:
If you're loading your data from somewhere external, is may not do what you expect, since it compares object identity, not contents. (You should never use is unless you know exactly what you're doing).
This seems to work fine:
for case in [
['True', 'True', 'True'],
['False', 'False', 'False'],
['False', 'False', 'False'],
['False', 'False', 'False'],
['True', 'True', 'True'],
['True', 'True', 'False'],
['True', 'True', 'True'],
['True', 'True', 'False'],
['True', 'True', 'True'],
]:
if all(x == "False" for x in case[-3:]):
result = "SHUT_DOWN"
elif case[-1] == "False":
result = "WARNING"
else:
result = "OK"
print(case, "==>", result)
['True', 'True', 'True'] ==> OK
['False', 'False', 'False'] ==> SHUT_DOWN
['False', 'False', 'False'] ==> SHUT_DOWN
['False', 'False', 'False'] ==> SHUT_DOWN
['True', 'True', 'True'] ==> OK
['True', 'True', 'False'] ==> WARNING
['True', 'True', 'True'] ==> OK
['True', 'True', 'False'] ==> WARNING
['True', 'True', 'True'] ==> OK
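On the last part of the question — converting the strings to real booleans first also works and avoids the string comparison entirely. A minimal sketch with a shortened path:
path = [['True', 'True', 'True'],
        ['False', 'False', 'False'],
        ['True', 'True', 'False']]

bool_path = [[item == 'True' for item in p] for p in path]  # convert once up front
for p in bool_path:
    if not any(p[-3:]):      # all of the last three are False
        print("SHUT_DOWN")
    elif not p[-1]:          # only the last one is False
        print("WARNING")
    else:
        print("OK")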
| iterate through list of lists and check if list contains all "False" | I am having issue with my code that works fine on sample code and should also work on test code but it doesn't.
I am having a list of lists that contain True/False but as STRING! List looks like this:
['True', 'True', 'True']
['False', 'False', 'False']
['False', 'False', 'False']
['False', 'False', 'False']
['True', 'True', 'True']
['True', 'True', 'False']
['True', 'True', 'True']
['True', 'True', 'False']
['True', 'True', 'True']
now I want to print OK if all last three elements were "True", SHUT_DOWN when last three elements were "False" and WARNING when last element was "False".
Result would look like this:
['True', 'True', 'True'] ==> OK
['False', 'False', 'False'] ==> SHUT_DOWN
['False', 'False', 'False'] ==> SHUT_DOWN
['False', 'False', 'False'] ==> SHUT_DOWN
['True', 'True', 'True'] ==> OK
['True', 'True', 'False'] ==> WARNING
['True', 'True', 'True'] ==> OK
['True', 'True', 'False'] ==> WARNING
['True', 'True', 'True'] ==> OK
this is my code that works on sample code but not on test code. On test code I get WARNING instead of SHUT_DOWN.
for p in path:
if all(item is "False" for item in p[-3:]):
print("SHUT_DOWN")
elif p[-1] == "False":
print("WARNING")
else:
print("OK")
what I am doing wrong and getting two different results? Is there maybe a better way of doing this type of operation? Maybe convert "True" and "False" to boolean instead of string? Thx!
| [
"If you're loading your data from somewhere external, is may not do what you expect, since it compares object identity, not contents. (You should never use is unless you know exactly what you're doing).\nThis seems to work fine:\nfor case in [\n ['True', 'True', 'True'],\n ['False', 'False', 'False'],\n ['False', 'False', 'False'],\n ['False', 'False', 'False'],\n ['True', 'True', 'True'],\n ['True', 'True', 'False'],\n ['True', 'True', 'True'],\n ['True', 'True', 'False'],\n ['True', 'True', 'True'],\n]:\n if all(x == \"False\" for x in case[-3:]):\n result = \"SHUT_DOWN\"\n elif case[-1] == \"False\":\n result = \"WARNING\"\n else:\n result = \"OK\"\n print(case, \"==>\", result)\n\n['True', 'True', 'True'] ==> OK\n['False', 'False', 'False'] ==> SHUT_DOWN\n['False', 'False', 'False'] ==> SHUT_DOWN\n['False', 'False', 'False'] ==> SHUT_DOWN\n['True', 'True', 'True'] ==> OK\n['True', 'True', 'False'] ==> WARNING\n['True', 'True', 'True'] ==> OK\n['True', 'True', 'False'] ==> WARNING\n['True', 'True', 'True'] ==> OK\n\n"
] | [
2
] | [] | [] | [
"python"
] | stackoverflow_0074629321_python.txt |
Q:
Unsure how to remove whitespace from strings
current_price = int(input())
last_months_price = int(input())
print("This house is $" + str(current_price), '.', "The change is $" +
str(current_price - last_months_price) + " since last month.")
print("The estimated monthly mortgage is ${:.2f}".format((current_price * 0.051) / 12), '.')
This produces:
This house is $200000 . The change is $-10000 since last month.
The estimated monthly mortgage is $850.00 .
I am uncertain how to remove the white space after "$200000" and "$850.00". I don't fully understand the strip() command, but from what I read, it wouldn't be helpful for this problem.
A:
You can give print an additional parameter: sep, like this:
print("This house is $" + str(current_price), '.', "The change is $" +
str(current_price - last_months_price) + " since last month.", sep='')
because the default is an empty space after the comma.
A:
Maybe try f-string injection
print(f"This house is ${current_price}. The change is ${current_price - last_months_price} since last month.")
f-string (formatted strings) provide a way to embed expressions inside string literals, using a minimal syntax. It's a simplified way to concatenate strings with out having to explicitly call str to format data types other than strings.
As @Andreas noted below, you could also pass sep='' to print, but that requires you to concatenate other strings with the spaces properly formatted.
A:
print("This house is $" + str(current_price), '.',' ' "The change is $" +
str(current_price - last_months_price) + " since last month.", sep='')
print("The estimated monthly mortgage is ${:.2f}"'.'.format((current_price * 0.051) / 12))
| Unsure how to remove whitespace from strings | current_price = int(input())
last_months_price = int(input())
print("This house is $" + str(current_price), '.', "The change is $" +
str(current_price - last_months_price) + " since last month.")
print("The estimated monthly mortgage is ${:.2f}".format((current_price * 0.051) / 12), '.')
This produces:
This house is $200000 . The change is $-10000 since last month.
The estimated monthly mortgage is $850.00 .
I am uncertain how to remove the white space after "$200000" and "$850.00". I don't fully understand the strip() command, but from what I read, it wouldn't be helpful for this problem.
| [
"You can give print an additional parameter: sep, like this:\nprint(\"This house is $\" + str(current_price), '.', \"The change is $\" +\n str(current_price - last_months_price) + \" since last month.\", sep='')\n\nbecause the default is an empty space after the comma.\n",
"Maybe try f-string injection\nprint(f\"This house is ${current_price}. The change is ${current_price - last_months_price} since last month.\")\n\n\nf-string (formatted strings) provide a way to embed expressions inside string literals, using a minimal syntax. It's a simplified way to concatenate strings with out having to explicitly call str to format data types other than strings.\nAs @Andreas noted below, you could also pass sep='' to print, but that requires you to concatenate other strings with the spaces properly formatted.\n",
"print(\"This house is $\" + str(current_price), '.',' ' \"The change is $\" +\nstr(current_price - last_months_price) + \" since last month.\", sep='')\nprint(\"The estimated monthly mortgage is ${:.2f}\"'.'.format((current_price * 0.051) / 12))\n"
] | [
2,
1,
0
] | [] | [] | [
"python",
"python_3.6",
"removing_whitespace"
] | stackoverflow_0063550872_python_python_3.6_removing_whitespace.txt |
Q:
Fill missing values in list based on a condition
I will try to explain my issue with a simple example. Let's say I have a list
lis = ['Elemnt-1' , 'Elemnt-2' , 'Elemnt-3' , '' , '' , 'Elemnt-6' , 'Elemnt-7']
How can I fill this missing values such that list will become.
lis = ['Elemnt-1' , 'Elemnt-2' , 'Elemnt-3' , 'Elemnt-2' , 'Elemnt-3' , 'Elemnt-6' , 'Elemnt-7']
Explanation with a similar animation.
I've figured out a solution, but it is too inefficient for longer lists and when I have multiple missing values. Here is my logic
from itertools import accumulate
lis = ['Elemnt-1' , 'Elemnt-2' , 'Elemnt-3' , '' , '' , 'Elemnt-6' , 'Elemnt-7']
odd_index = lis[::2]
even_index = lis[1::2]
odd_index = list(accumulate(odd_index,lambda x, y: x if y is '' else y))
even_index = list(accumulate(even_index,lambda x, y: x if y is '' else y))
zipper = list(sum(zip(odd_index, even_index+[0]), ())[:-1])
print(zipper)
Given me #
['Elemnt-1', 'Elemnt-2', 'Elemnt-3', 'Elemnt-2', 'Elemnt-3', 'Elemnt-6', 'Elemnt-7']
I was looking for a simpler, more elegant approach to solve this when there are multiple missing values in the middle of the list.
More examples:
lis = ['Elemnt-1' , 'Elemnt-2' , 'Elemnt-3' , '' , '' , '' , 'Elemnt-7']
Need
lis = ['Elemnt-1' , 'Elemnt-2' , 'Elemnt-3' , 'Elemnt-1' , 'Elemnt-2' , 'Elemnt-3' , 'Elemnt-7']
Another example
lis = ['Elemnt-1' , 'Elemnt-2' , 'Elemnt-3' , '' , '' , 'Elemnt-6' , 'Elemnt-7', '']
Need
lis = ['Elemnt-1' , 'Elemnt-2' , 'Elemnt-3' , 'Elemnt-2' , 'Elemnt-3' , 'Elemnt-6' , 'Elemnt-7' , 'Elemnt-7']
Logically, n blank elements should be filled with the n elements that come before them
A:
Because you're looking 2 back to fill the empty spots, we skip the first 2 indeces as there's nothing before theme. Here I define a func that does this:
def filler(l: list):
for i in range(2, len(l)):
if l[i] == '':
l[i] = l[i-2]
return l
print(filler(['Elemnt-1' , 'Elemnt-2' , 'Elemnt-3' , '' , '' , 'Elemnt-6' , 'Elemnt-7']))
Out:
['Elemnt-1', 'Elemnt-2', 'Elemnt-3', 'Elemnt-2', 'Elemnt-3', 'Elemnt-6', 'Elemnt-7']
A:
If the missing values are consecutive ones as you showed in the example you can use this.
def fillList(list):
emptyCount = list.count('')
for item in list:
if item == '':
list[list.index(item)] = list[list.index(item) - emptyCount]
return list
A:
Using a list comprehension to replace empty values with value of this list two positions before. Repeats until no empty values left.
EDIT : after comments, it appears that it is not always 2 position before, but n blank elements should be filled with n elements before. Answer therefore is still missing something. Replacing lis[index-2] by lis[index-lis.count('')] would not work because it is possible to have multiple set of empty spaces
filler = ['']*len(lis)
while '' in lis :
filler = [lis[index-2] if value=="" else value for index, value in enumerate(lis)]
print(filler)
lis=filler
print(lis)
A:
You can determine the blank ranges first, then fill them with the previous items.
lst = ['Elment-1' , 'Elment-2' , 'Elment-3' , '' , '' , 'Elment-6' , 'Elment-7', '']
lst_ext = ['p'] + lst + ['p']
# boundary of all blank ranges
blank_bound = [idx for idx, (a, b) in enumerate(zip(lst_ext, lst_ext[1:])) \
if (a == '' or b == '') and a != b] # [3, 5, 7, 8]
# fill each blank range
for l, r in zip(blank_bound[::2], blank_bound[1::2]):
assert 2*l-r >= 0, "no enough items before the blank item"
lst[l:r] = lst[2*l-r:l]
| Fill missing values in list based on a condition | I will try to explain my issue with simple example. Let's say I've a list
lis = ['Elemnt-1' , 'Elemnt-2' , 'Elemnt-3' , '' , '' , 'Elemnt-6' , 'Elemnt-7']
How can I fill this missing values such that list will become.
lis = ['Elemnt-1' , 'Elemnt-2' , 'Elemnt-3' , 'Elemnt-2' , 'Elemnt-3' , 'Elemnt-6' , 'Elemnt-7']
Explination with similar animation.
I've figured out solution. Which is too inefficient for a longer lists & when I've multiple missing values. Here is my logic
from itertools import accumulate
lis = ['Elemnt-1' , 'Elemnt-2' , 'Elemnt-3' , '' , '' , 'Elemnt-6' , 'Elemnt-7']
odd_index = lis[::2]
even_index = lis[1::2]
odd_index = list(accumulate(odd_index,lambda x, y: x if y is '' else y))
even_index = list(accumulate(even_index,lambda x, y: x if y is '' else y))
zipper = list(sum(zip(odd_index, even_index+[0]), ())[:-1])
print(zipper)
Given me #
['Elemnt-1', 'Elemnt-2', 'Elemnt-3', 'Elemnt-2', 'Elemnt-3', 'Elemnt-6', 'Elemnt-7']
I was looking for a simpler elegant approach to solve this when there are multiple missing values in middle of list.
More examples:
lis = ['Elemnt-1' , 'Elemnt-2' , 'Elemnt-3' , '' , '' , '' , 'Elemnt-7']
Need
lis = ['Elemnt-1' , 'Elemnt-2' , 'Elemnt-3' , 'Elemnt-1' , 'Elemnt-2' , 'Elemnt-3' , 'Elemnt-7']
Another example
lis = ['Elemnt-1' , 'Elemnt-2' , 'Elemnt-3' , '' , '' , 'Elemnt-6' , 'Elemnt-7', '']
Need
lis = ['Elemnt-1' , 'Elemnt-2' , 'Elemnt-3' , 'Elemnt-2' , 'Elemnt-3' , 'Elemnt-6' , 'Elemnt-7' , 'Elemnt-7']
Logically n blank elements should be filled with n back elements
| [
"Because you're looking 2 back to fill the empty spots, we skip the first 2 indeces as there's nothing before theme. Here I define a func that does this:\ndef filler(l: list):\n for i in range(2, len(l)):\n if l[i] == '':\n l[i] = l[i-2]\n return l\n\nprint(filler(['Elemnt-1' , 'Elemnt-2' , 'Elemnt-3' , '' , '' , 'Elemnt-6' , 'Elemnt-7']))\nOut:\n['Elemnt-1', 'Elemnt-2', 'Elemnt-3', 'Elemnt-2', 'Elemnt-3', 'Elemnt-6', 'Elemnt-7']\n\n",
"If the missing values are consecutive ones as you showed in the example you can use this.\ndef fillList(list):\n emptyCount = list.count('')\n for item in list:\n if item == '':\n list[list.index(item)] = list[list.index(item) - emptyCount]\n return list\n\n",
"Using a list comprehension to replace empty values with value of this list two positions before. Repeats until no empty values left.\nEDIT : after comments, it appears that it is not always 2 position before, but n blank elements should be filled with n elements before. Answer therefore is still missing something. Replacing lis[index-2] by lis[index-lis.count('')] would not work because it is possible to have multiple set of empty spaces\nfiller = ['']*len(lis)\nwhile '' in lis :\n filler = [lis[index-2] if value==\"\" else value for index, value in enumerate(lis)]\n print(filler)\n lis=filler\n\nprint(lis)\n\n",
"You can determine the blank ranges first, then fill them with the previous items.\nlst = ['Elment-1' , 'Elment-2' , 'Elment-3' , '' , '' , 'Elment-6' , 'Elment-7', '']\nlst_ext = ['p'] + lst + ['p']\n# boundary of all blank ranges\nblank_bound = [idx for idx, (a, b) in enumerate(zip(lst_ext, lst_ext[1:])) \\\n if (a == '' or b == '') and a != b] # [3, 5, 7, 8]\n# fill each blank range\nfor l, r in zip(blank_bound[::2], blank_bound[1::2]):\n assert 2*l-r >= 0, \"no enough items before the blank item\"\n lst[l:r] = lst[2*l-r:l]\n\n"
] | [
1,
1,
1,
1
] | [] | [] | [
"list",
"python"
] | stackoverflow_0074624986_list_python.txt |
Q:
Is highlighting a segment of px.scatter_mapbox plot possible?
Using Dash graphs to create a placeholder for scatter line plot and scattermapbox.
html.Div([
html.Div(id='graphs', children=[
dcc.Graph({'type':'graph', 'index':1}, figure=blank_fig('plotly_dark')),
dcc.Graph({'type':'graph', 'index':1}, figure=blank_fig('plotly_dark'))]
])
The figures are initialized via a callback as you can see below using Plotly Express:
fig1 = px.line(x=df.index.seconds, y=df.ft, labels={'x':'Elasped Time(seconds)', 'y':'feet'}, title="(feet)", template='plotly_dark')
fig2 = px.scatter_mapbox(df, lat='deg1', lon='deg2', zoom=zoom, height=800,
text=['deltatime: {}'.format(i.total_seconds()) for i in df.index],
title=selected_values['report_type'], center=dict(lat=center[0], lon=center[1]))
fig2.update_layout(
mapbox_style="white-bg",
mapbox_layers=[{
"below": 'traces',
"sourcetype": "raster",
"sourceattribution": "Low-Res Satellite Imagery",
"source": [f"{self.map_host}/tiles/{{z}}/{{x}}/{{y}}.png"]
}]
)
fig2.update_traces(mode='markers', marker=dict(size=2, color='white'),
hovertemplate='lat: %{lat}'+'<br>lon: %{lon}<br>'+'%{text}')
During a zoom operation on figure 1, I get the selected range via a callback. From this selected range I build a list of points to pass to the selectedpoints parameter of the update_traces() function. The goal is to highlight those selected points in red, but instead the whole line is painted red.
# retrieved from the relayout input. The xaxis is in seconds.
start = int(unique_data['xaxis.range[0]'])
end = int(unique_data['xaxis.range[1]'])
# using the above values we can then subselect the dataframe
fig = get_figure_with('mapbox') # retrieves the figure for fig2
selected_df = df[(df.index.seconds >= start) & (df.index.seconds <= end)]
points = list(selected_df[['latitude_deg','longitude_deg']].itertuples(index=False, name=None))
fig.update_traces(marker_color='Red', selectedpoints=points)
Is highlighting a segment of mapbox plot possible? For example the curved part of the plot below?
A:
To achieve this I successfully used px.line_mapbox() function.
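A minimal sketch of how that could look (the column names, zoom level, and map style below are assumptions; the selected rows are overlaid as a separate red line trace on top of the scatter map):
import pandas as pd
import plotly.express as px

df = pd.DataFrame({'deg1': [48.85, 48.86, 48.87, 48.88],
                   'deg2': [2.35, 2.36, 2.37, 2.38]})
fig = px.scatter_mapbox(df, lat='deg1', lon='deg2', zoom=11)
fig.update_layout(mapbox_style='open-street-map')

segment = df.iloc[1:3]                                     # the rows selected from figure 1
highlight = px.line_mapbox(segment, lat='deg1', lon='deg2')
highlight.update_traces(line=dict(color='red', width=4))
fig.add_traces(highlight.data)                             # overlay the red segment on the map
fig.show()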
| Is highlighting a segment of px.scatter_mapbox plot possible? | Using Dash graphs to create a placeholder for scatter line plot and scattermapbox.
html.Div([
html.Div(id='graphs', children=[
dcc.Graph({'type':'graph', 'index':1}, figure=blank_fig('plotly_dark')),
dcc.Graph({'type':'graph', 'index':1}, figure=blank_fig('plotly_dark'))]
])
The figures are initialized via a callback as you can see below using Plotly Express:
fig1 = px.line(x=df.index.seconds, y=df.ft, labels={'x':'Elasped Time(seconds)', 'y':'feet'}, title="(feet)", template='plotly_dark')
fig2 = px.scatter_mapbox(df, lat='deg1', lon='deg2', zoom=zoom, height=800,
text=['deltatime: {}'.format(i.total_seconds()) for i in df.index],
title=selected_values['report_type'], center=dict(lat=center[0], lon=center[1]))
fig2.update_layout(
mapbox_style="white-bg",
mapbox_layers=[{
"below": 'traces',
"sourcetype": "raster",
"sourceattribution": "Low-Res Satellite Imagery",
"source": [f"{self.map_host}/tiles/{{z}}/{{x}}/{{y}}.png"]
}]
)
fig2.update_traces(mode='markers', marker=dict(size=2, color='white'),
hovertemplate='lat: %{lat}'+'<br>lon: %{lon}<br>'+'%{text}')
During zoom operation on figure 1 I get the selected range via a callback. With this selected range we have a list of points then to pass to the selectedpoints parameter of the update_traces() function. The goal is to highlight those selected points in red but instead the whole line is painted red.
# retrieved from the relayout input. The xaxis is in seconds.
start = int(unique_data['xaxis.range[0]'])
end = int(unique_data['xaxis.range[1]'])
# using the above values we can then subselect the dataframe
fig = get_figure_with('mapbox') # retrieves the figure for fig2
selected_df = df[(df.index.seconds >= start) & (df.index.seconds <= end)]
points = list(selected_df[['latitude_deg','longitude_deg']].itertuples(index=False, name=None))
fig.update_traces(marker_color='Red', selectedpoints=points)
Is highlighting a segment of mapbox plot possible? For example the curved part of the plot below?
| [
"To achieve this I successfully used px.line_mapbox() function.\n"
] | [
0
] | [] | [] | [
"plotly_dash",
"plotly_express",
"python"
] | stackoverflow_0074575298_plotly_dash_plotly_express_python.txt |
Q:
Importing custom python module with dependencies
I have the following folder structure:
files structure
└── App/
├── main.py
├── functions.py
└── models/
└── model1/
├── utils/
│ └── file2.py
└── file1.py
inside main.py file:
import functions
inside functions.py
from models.model1 import file1
inside file1.py
from utils import file2
When running main.py I get an error related to the line "from utils import file2" in file1.py, saying that it can't find the utils module.
I have tried the following solutions:
-create __init__.py inside the models and model1 folders (tried an empty init file, then tried to insert import statements into the init file)
-append the path to the os
None of the previous techniques were successful. I have solved the problem by importing utils in main.py, but this technique is not very applicable in large projects, may create circular references, and doesn't seem like a clean way to write code.
A:
my-app/
├─ main.py
├─ functions.py
└─ models/
   └─ model1/
      ├─ utils/
      │  └─ file2.py
      └─ file1.py
Based on this information, it looks like you can use import utils.file2 because they are in the same directory.
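Another option, sketched below, is to import relative to the project root (App/), which is what ends up on sys.path when main.py is executed from there — this assumes the folders are treated as packages (an empty __init__.py in each, or Python 3 namespace packages):
# models/model1/file1.py
from models.model1.utils import file2

# or, if file1 is always imported as part of the package, a relative import also works:
# from .utils import file2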
| Importing custom python module with dependencies | I have the following folder structure:
files structure
└── App/
├── main.py
├── functions.py
└── models/
└── model1/
├── utils/
│ └── file2.py
└── file1.py
inside main.py file:
import functions
inside functions.py
from models.model1 import file1
inside file1.py
from utils import file2
When running main.py I am getting error related to the line "from utils import file2" in file1.py that it can't find utils module
I Have tried the following solutions:
-create init. py inside models and model1 folders (tried empty init file then tried to insert import statements in the init file)
-append path to the os
None of the previous techniques where successful, I have solve the problem by importing utils in the main.py but this technique is not very applicable in cases of large projects and may create some circular references and doesn't seems clean way to writing a code.
| [
"my-app/ \n├─ main.py \n├─ functions.py \n├─ models/ |\n ├─ model1/ │ \n ├─ utils/ │ \n ├─ file2.py | \n ├─ file1.py |\n\nBased on this information, it looks like you can use import utils.file2 because they are in the same directory.\n"
] | [
0
] | [] | [] | [
"import",
"init",
"package",
"python"
] | stackoverflow_0074622309_import_init_package_python.txt |
Q:
How to combine multiple csv as columns in python?
I have 10 .txt (csv) files that I want to merge together into a single csv file to use later in analysis. When I use pd.append, it always merges the files below each other.
I use the following code:
master_df = pd.DataFrame()
for file in os.listdir(os.getcwd()):
if file.endswith('.txt'):
files = pd.read_csv(file, sep='\t', skiprows=[1])
master_df = master_df.append(files)
the output is:
output
what I need is to insert the columns of each file side-by-side, as follows:
The required output
could you please help with this?
A:
To merge DataFrames side by side, you should use pd.concat.
frames = []
for file in os.listdir(os.getcwd()):
if file.endswith('.txt'):
files = pd.read_csv(file, sep='\t', skiprows=[1])
frames.append(files)
# axis = 0 has the same behavior of your original approach
master_df = pd.concat(frames, axis = 1)
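An equivalent sketch using glob, which also writes the side-by-side result out to a single CSV (the output file name is an assumption):
import glob
import pandas as pd

frames = [pd.read_csv(path, sep='\t', skiprows=[1]) for path in sorted(glob.glob('*.txt'))]
merged = pd.concat(frames, axis=1)
merged.to_csv('merged.csv', index=False)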
| How to combine multiple csv as columns in python? | I have 10 .txt (csv) files that I want to merge together in a single csv file to use later in analysis. when I use pd.append, it always merges the files below each other.
I use the following code:
master_df = pd.DataFrame()
for file in os.listdir(os.getcwd()):
if file.endswith('.txt'):
files = pd.read_csv(file, sep='\t', skiprows=[1])
master_df = master_df.append(files)
the output is:
output
what I need is to insert the columns of each file side-by-side, as follows:
The required output
could you please help with this?
| [
"To merge DataFrames side by side, you should use pd.concat.\nframes = []\n\nfor file in os.listdir(os.getcwd()):\n if file.endswith('.txt'):\n files = pd.read_csv(file, sep='\\t', skiprows=[1])\n frames.append(files)\n\n# axis = 0 has the same behavior of your original approach\nmaster_df = pd.concat(frames, axis = 1)\n\n"
] | [
1
] | [] | [] | [
"csv",
"merge",
"pandas",
"python"
] | stackoverflow_0074629257_csv_merge_pandas_python.txt |
Q:
i have a log file and i want to write a python code to increment second column value by 1 hour and 9th column value by 1
I have the log file dataset placed in a unix directory which looks like this:
I want to increase the value of Pcode by 1 for all lines and increment the timestamp column by 1 hour
12345432,10-11-2011,11:11:12.435,0,0,XVC_AS,12,14,81,12345412,qweds
12345433,10-11-2011,18:12:45.124,0,0,XVC_AS,12,15,77,12345445,qweds
12345434,10-11-2011,11:22:08.564,0,0,XVC_AS,12,16,65,12345436,qweds
12345435,10-11-2011,21:21:21.554,0,0,XVC_AS,12,17,92,12345444,qweds
12345436,10-11-2011,51:20:12.555,0,0,XVC_AS,12,18,43,123454398,qweds
output should look like this
12345432,10-11-2011,12:11:12.435,0,0,XVC_AS,12,15,81,12345412,qweds
12345433,10-11-2011,19:12:45.124,0,0,XVC_AS,12,16,77,12345445,qweds
12345434,10-11-2011,12:22:08.564,0,0,XVC_AS,12,17,65,12345436,qweds
12345435,10-11-2011,22:21:21.554,0,0,XVC_AS,12,18,92,12345444,qweds
12345436,10-11-2011,52:20:12.555,0,0,XVC_AS,12,19,43,123454398,qweds
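A minimal plain-Python sketch of the transformation shown above (the input/output file names are assumptions, and the incremented count column follows the example output, i.e. the 8th comma-separated field):
with open('input.log') as src, open('output.log', 'w') as dst:
    for line in src:
        fields = line.rstrip('\n').split(',')
        hh, mm, ss = fields[2].split(':')
        fields[2] = f"{int(hh) + 1:02d}:{mm}:{ss}"  # timestamp field: hour + 1
        fields[7] = str(int(fields[7]) + 1)         # count field + 1
        dst.write(','.join(fields) + '\n')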
| i have a log file and i want to write a python code to increment second column value by 1 hour and 9th column value by 1 | I have the log file dataset placed in a unix directory which looks like this:
i want to increase the value of Pcode by 1 for all lines and incremnet the timestamp column by 1 hour
12345432,10-11-2011,11:11:12.435,0,0,XVC_AS,12,14,81,12345412,qweds
12345433,10-11-2011,18:12:45.124,0,0,XVC_AS,12,15,77,12345445,qweds
12345434,10-11-2011,11:22:08.564,0,0,XVC_AS,12,16,65,12345436,qweds
12345435,10-11-2011,21:21:21.554,0,0,XVC_AS,12,17,92,12345444,qweds
12345436,10-11-2011,51:20:12.555,0,0,XVC_AS,12,18,43,123454398,qweds
output should look like this
12345432,10-11-2011,12:11:12.435,0,0,XVC_AS,12,15,81,12345412,qweds
12345433,10-11-2011,19:12:45.124,0,0,XVC_AS,12,16,77,12345445,qweds
12345434,10-11-2011,12:22:08.564,0,0,XVC_AS,12,17,65,12345436,qweds
12345435,10-11-2011,22:21:21.554,0,0,XVC_AS,12,18,92,12345444,qweds
12345436,10-11-2011,52:20:12.555,0,0,XVC_AS,12,19,43,123454398,qweds
| [] | [] | [
"First remove the spaces from your headers (for clarity), then import as CSV into Pandas:\nimport pandas as pd\ndf = pd.read_csv('file.csv')\n\nIncreasing Pcode is simple:\nf['Pcode'] = df['Pcode']+1\n\nBut your timestamps have an issue. Normally you could treat them as timestamps and add an hour, but your timestamps somehow have 51:20:12.555 , which is not a valid timestamp in any format. Is this indeed correct? If so, you would need to do some manual formatting.\nThis is a fully manual method:\ndf['test'] = df['timestamp'].apply(lambda x: x.split(':'))\ndf['test2'] = df['test'].apply(lambda x: [str(int(x[0])+1), x[1], x[2]])\ndf['timestamp2'] = df['test2'].apply(lambda x: ':'.join(x))\n\n"
] | [
-1
] | [
"python",
"python_2.7"
] | stackoverflow_0074629160_python_python_2.7.txt |
Q:
Find all matching pattern in a string
I'm trying to get all substrings that matching a specific pattern from a larger string,
I have a string like that :
string = "{\huge \centering \bf{RAPPORT JOURNALIER \\ ***bouee*** \\ \formatDate{***day***}{***month***}{***year***}} \\ \small generated on~\today~at~\currenttime \par}"
and I want to get in a list all the elements between the triple stars ***, with that string I should get : ['bouee', 'day', 'month', 'year'].
I could easily do that with a loop on that string, but I'm sure there is a more elegant way to do it using the regex module, but I'm not familiar with it, and I'm sure it's a very simple question that could be answered quicker on a forum than with me searching to understand that module : ) (even though I will definitely learn how to use it, it seems like a super useful module)
Thank you !
A:
Yes, you can use the "re" module for this.
import re
re.findall('\*{3}(.+?)\*{3}', string)
Here, we tell it to match substrings that consist of three asterisks, any number of any characters, then three more asterisks. Then, we use parenthesis to mark the inside characters as our "capturing group," so re.findall returns only those.
Do note that the problem is not particularly well-defined. For instance, if you see ***word***word***word***, it's unclear how that should be interpreted. In general, getting more advanced behavior with these types of problems can be difficult or impossible.
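For reference, a self-contained run of that pattern on a shortened sample string (a raw string is used because the full LaTeX template contains backslashes):
import re

s = r"\formatDate{***day***}{***month***}{***year***} and ***bouee***"
print(re.findall(r'\*{3}(.+?)\*{3}', s))  # ['day', 'month', 'year', 'bouee']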
| Find all matching pattern in a string | I'm trying to get all substrings that matching a specific pattern from a larger string,
I have a string like that :
string = "{\huge \centering \bf{RAPPORT JOURNALIER \\ ***bouee*** \\ \formatDate{***day***}{***month***}{***year***}} \\ \small generated on~\today~at~\currenttime \par}"
and I want to get in a list all the elements between the triple stars ***, with that string I should get : ['bouee', 'day', 'month', 'year'].
I could easily do that with a loop on that string, but I'm sure there is a more elegant way to do it using the regex module, but I'm not familiar with it, and I'm sure it's a very simple question that could be answered quicker on a forum than with me searching to understand that module : ) (even though I will definitely learn how to use it, it seems like a super useful module)
Thank you !
| [
"Yes, you can use the \"re\" module for this.\nimport re\n\nre.findall('\\*{3}(.+?)\\*{3}', string)\n\nHere, we tell it to match substrings that consist of three asterisks, any number of any characters, then three more asterisks. Then, we use parenthesis to mark the inside characters as our \"capturing group,\" so re.findall returns only those.\nDo note that the problem is not particularly well-defined. For instance, if you see ***word***word***word***, it's unclear whether that should be interpreted as. In general, getting more advanced behavior with these types of problems can be difficult or impossible.\n"
] | [
1
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074629473_python_python_3.x.txt |
Q:
Jupyter Notebook kernel dies when I increase the number of samples
I am trying to execute the following python code:
plt.figure(figsize=(9,6))
plt.title("Dendrograms for number of clusters")
dend = sch.dendrogram(sch.linkage(scaled, method='ward'))
When I execute the above code with 12000 samples it works fine. However, when I increase the samples to 24000 it shows that Kernel appears to be dead in Jupyter notebook. KernelRestarter: restarting kernel (1/5), keep random ports Any help is really appreciated
A:
This was an issue with the scipy package. I downgraded scipy from 1.7.3 to 1.7.1 and it is working. However, the downgraded version of scipy has an issue of "maximum recursion depth exceeded while getting str of an object". That second issue can be resolved by expanding the recursion limit.
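A sketch of the "expanding the limit" part, assuming the recursion error comes from Python's default recursion limit while the large dendrogram is being built (synthetic data stands in for scaled here):
import sys
import numpy as np
import scipy.cluster.hierarchy as sch
import matplotlib.pyplot as plt

sys.setrecursionlimit(100000)             # raise Python's default limit of 1000
scaled = np.random.rand(2000, 4)          # stand-in for the scaled samples
plt.figure(figsize=(9, 6))
dend = sch.dendrogram(sch.linkage(scaled, method='ward'))
plt.show()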
| Jupyter Notebook kernel dies when I increase the number of samples | I am trying to execute the following python code:
plt.figure(figsize=(9,6))
plt.title("Dendrograms for number of clusters")
dend = sch.dendrogram(sch.linkage(scaled, method='ward'))
When I execute the above code with 12000 samples it works fine. However, when I increase the samples to 24000 it shows that Kernel appears to be dead in Jupyter notebook. KernelRestarter: restarting kernel (1/5), keep random ports Any help is really appreciated
| [
"This was an issue with scipy package. I downgraded the package scipy from 1.7.3 to 1.7.1 and it is working. However, the downgraded version of scipy has an issue of maximum recursion depth exceeded while getting str of an object. The second issue can be resolved by expanding the limit.\n"
] | [
0
] | [] | [] | [
"jupyter_notebook",
"python"
] | stackoverflow_0074619570_jupyter_notebook_python.txt |
Q:
remove entire rows from df if the word occurs
list of stopwords:
stop_w = ["in", "&", "the", "|", "and", "is", "of", "a", "an", "as", "for", "was"]
df:
words           frequency
the company     10
green energy    9
founded in      8
gases for       8
electricity     5
I would like to remove the entire row if it contains ANY of the given stopwords; in this example the output should be:
words           frequency
green energy    9
electricity     5
A:
The | character has a meaning, it means or in python's terms, so you need to escape that meaning in order to use it in your stop words list. You escape that with a backslash \ (see more here)
Having said that you can do:
stop_w = ["in", "&", "the", "\|", "and", "is", "of", "a", "an", "as", "for", "was"]
df.loc[~df['words'].str.contains('|'.join(stop_w))]
prints:
words frequency
1 green energy 9
4 electricity 5
A:
You can create sub_df like this:
sub_df = df[df.words.str not in stop_w]
Or get ids of rows i want to remove:
idx = df[df.words.str in stop_w].index
df.drop(idx)
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop.html
| remove entire rows from df if the word occurs | list of stowwords:
stop_w = ["in", "&", "the", "|", "and", "is", "of", "a", "an", "as", "for", "was"]
df:
words           frequency
the company     10
green energy    9
founded in      8
gases for       8
electricity     5
I would like to remove entire row if it contains ANY of given stopwords, in this example output should be:
words           frequency
green energy    9
electricity     5
| [
"The | character has a meaning, it means or in python's terms, so you need to escape that meaning in order to use it in your stop words list. You escape that with a backslash \\ (see more here)\nHaving said that you can do:\nstop_w = [\"in\", \"&\", \"the\", \"\\|\", \"and\", \"is\", \"of\", \"a\", \"an\", \"as\", \"for\", \"was\"]\ndf.loc[~df['words'].str.contains('|'.join(stop_w))]\n\nprints:\n words frequency\n1 green energy 9\n4 electricity 5\n\n",
"You can create sub_df like this:\nsub_df = df[df.words.str not in stop_w]\n\nOr get ids of rows i want to remove:\nidx = df[df.words.str in stop_w].index\ndf.drop(idx)\n\nhttps://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop.html\n"
] | [
1,
0
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074629401_dataframe_pandas_python.txt |
Q:
How do you make text which changes in tkinter?
I am currently making a Celsius to Fahrenheit converter GUI, but I can't figure out how to add text which changes each time a conversion happens. Can any of you help?
from tkinter import *
import tkinter
inputValue=0
root=Tk()
root.geometry('250x170')
def retrieve_input():
inputValue =textBox.get("1.0","end-1c")
inputValue= int(inputValue)*(9/5)+32
print(inputValue)
textBox=Text(root, height=2, width=10)
textBox.pack()
buttonCommit=Button(root, height=1, width=10, text="Commit",
command=lambda: retrieve_input())
label1=tkinter.Label(root, text=str(inputValue), font=('Calibri', 18, 'bold'))
label1.config()
#command=lambda: retrieve_input() >>> just means do this when i press the button
buttonCommit.pack()
mainloop()
A:
Use:
labelname.config(text = text)
A:
You are missing widget pack(). Replace this:
label1.config(text=inputValue)
to:
label1.pack()
and add label1.config(text=inputValue) inside the retrieve_input() function.
Result before enter input:
Result after:
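Putting both answers together, a minimal self-contained sketch where the label text changes on every conversion might look like this (widget sizes and names are just illustrative):
import tkinter as tk

root = tk.Tk()
root.geometry('250x170')

def convert():
    celsius = int(text_box.get("1.0", "end-1c"))
    label1.config(text=celsius * 9 / 5 + 32)   # update the label text on every click

text_box = tk.Text(root, height=2, width=10)
text_box.pack()
tk.Button(root, height=1, width=10, text="Commit", command=convert).pack()
label1 = tk.Label(root, text="", font=('Calibri', 18, 'bold'))
label1.pack()
root.mainloop()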
| How do you make text which changes in tkinter? | i am currently making a celsius to Fahrenheit converter gui but I can' figure out how to add text which changes each time a conversion happens. Can any of you help?
from tkinter import *
import tkinter
inputValue=0
root=Tk()
root.geometry('250x170')
def retrieve_input():
inputValue =textBox.get("1.0","end-1c")
inputValue= int(inputValue)*(9/5)+32
print(inputValue)
textBox=Text(root, height=2, width=10)
textBox.pack()
buttonCommit=Button(root, height=1, width=10, text="Commit",
command=lambda: retrieve_input())
label1=tkinter.Label(root, text=str(inputValue), font=('Calibri', 18, 'bold'))
label1.config()
#command=lambda: retrieve_input() >>> just means do this when i press the button
buttonCommit.pack()
mainloop()
| [
"Use:\nlabelname.config(text = text)\n\n",
"You are missing widget pack(). Replace this:\nlabel1.config(text=inputValue)\n\nto:\nlabel1.pack()\n\nand add this label1.config(text=inputValue) in retrieve_input()function.\nResult before enter input:\n\nResult after:\n\n"
] | [
0,
0
] | [] | [] | [
"python",
"tkinter",
"user_interface"
] | stackoverflow_0071699765_python_tkinter_user_interface.txt |
Q:
Cannot get python scripting working under WSH
I'm trying to get WSH to run Python .pys scripts and I'm hitting a wall - I've tried this on two machines now, W7x64 and Server2012, same result both time, cscript always comes back with:
CScript Error: Can't find script engine "Python"
Procedure (all happening under local admin account):
Installed Python 3.5.1 (x86)
Installed Pywin32 (x86) from Mark Hammond's sourceforge
Ran \site-packages\win32comext\axscript\client\pyscript.py, which returns 'Registered: Python'
Check registry/HKCR - lots of references to pys and python, as expected
Try to run >cscript hello.pys get
CScript Error: Can't find script engine "Python"
Any clues? I don't really want to use ActivePython.
A:
It appears that python on Windows is a PITA. So, I had your problem as well, I followed the following steps:
Download Python from python.org. (you probably already have
that)
Download PyWin32 from SourceForge.
Download SetupTools from python.org.
On your desktop or in the Start menu, right-click on My Computer then click Properties.
In the Advanced tab, click Environment Variables.
In the System Variables at the bottom, locate the PATH variable and double-click on it.
Add C:\Python27\ and C:\Python27\scripts at the end of the Variable Value box (assuming you installed python in the location C:\Python27).
Click OK in all the dialogs.
It still did not work for me, however, I was puzzled by <python_install>\scripts and found pywin32_postinstall.py in there, running that did the trick!
Execute \scripts\pywin32_postinstall.py
I kept getting the following trace:
Debugging extensions (axdebug) module does not exist - debugging is disabled..
I traced it to Lib\site-packages\win32comext\axscript\client\framework.py and commented out the trace call that printed that message ... all good.
C:\Users\jdoe>type d:\python2vbs.wsf
<?XML Version="1.0" encoding="ISO-8859-1"?>
<?job error="true" debug="false"?>
<package>
<job>
<script language="VBScript">
<![CDATA[
public sub vbsOutput(strText)
wscript.echo strText & " (from vbsOutput)"
end sub
]]>
</script>
<script language="Python">
<![CDATA[
import sys
globals.vbsOutput('python testing')
]]>
</script>
</job>
</package>
C:\Users\jdoe>cscript d:\python2vbs.wsf
Microsoft (R) Windows Script Host Version 5.812
Copyright (C) Microsoft Corporation. All rights reserved.
python testing (from vbsOutput)
A:
The original answer posted by @thecarpy has good information. There is one more thing to add. If you have multiple versions of Python installed, you need to make sure that the one which matches the version you have registered as the scripting engine is the first python.exe found in your search path. So putting it at the end of the path (as recommended in the previous answer) might not always work. I had a little more hair before I figured this out.
| Cannot get python scripting working under WSH | I'm trying to get WSH to run Python .pys scripts and I'm hitting a wall - I've tried this on two machines now, W7x64 and Server2012, same result both time, cscript always comes back with:
CScript Error: Can't find script engine "Python"
Procedure (all happening under local admin account):
Installed Python 3.5.1 (x86)
Installed Pywin32 (x86) from Mark Hammond's sourceforge
Ran \site-packages\win32comext\axscript\client\pyscript.py, which returns 'Registered: Python'
Check registry/HKCR - lots of references to pys and python, as expected
Try to run >cscript hello.pys get
CScript Error: Can't find script engine "Python"
Any clues? I don't really want to use ActivePython.
| [
"It appears that python on Windows is a PITA. So, I had your problem as well, I followed the following steps:\n\nDownload Python from python.org. (you probably already have\nthat) \nDownload PyWin32 from SourceForge.\nDownload SetupTools from python.org.\nOn your desktop or in the Start menu, right-click on My Computer then click Properties.\nIn the Advanced tab, click Environment Variables.\nIn the System Variables at the bottom, locate the PATH variable and double-click on it.\nAdd C:\\Python27\\ and C:\\Python27\\scripts at the end of the Variable Value box (assuming you installed python in the location C:\\Python27).\nClick OK in all the dialogs.\n\nIt still did not work for me, however, I was puzzled by <python_install>\\scripts and found pywin32_postinstall.py in there, running that did the trick!\n\nExecute \\scripts\\pywin32_postinstall.py\nI kept getting the following trace:\nDebugging extensions (axdebug) module does not exist - debugging is disabled..\n\nI traced it to Lib\\site-packages\\win32comext\\axscript\\client\\framework.py and commented out the trace call that printed that message ... all good.\nC:\\Users\\jdoe>type d:\\python2vbs.wsf\n<?XML Version=\"1.0\" encoding=\"ISO-8859-1\"?>\n<?job error=\"true\" debug=\"false\"?>\n<package>\n <job>\n\n <script language=\"VBScript\">\n <![CDATA[\npublic sub vbsOutput(strText)\n wscript.echo strText & \" (from vbsOutput)\"\nend sub\n ]]>\n </script>\n\n <script language=\"Python\">\n <![CDATA[\nimport sys\nglobals.vbsOutput('python testing')\n ]]>\n </script>\n\n </job>\n</package>\n\nC:\\Users\\jdoe>cscript d:\\python2vbs.wsf\nMicrosoft (R) Windows Script Host Version 5.812\nCopyright (C) Microsoft Corporation. All rights reserved.\n\npython testing (from vbsOutput)\n\n",
"The original answer posted by @thecarpy has good information. There is one more thing to add. If you have multiple versions of Python installed, you need to make sure that the one which matches the version you have registered as the scripting engine is the first python.exe found in your search path. So putting it at the end of the path (as recommended in the previous answer) might not always work. I had a little more hair before I figured this out. \n"
] | [
1,
0
] | [] | [] | [
"python",
"pywin32",
"wsh"
] | stackoverflow_0035034877_python_pywin32_wsh.txt |
Q:
How to get specific objects from two list of dictionaries on a specific key value?
I have two lists of dictionaries:
timing = [
{"day_name": "sunday"},
{"day_name": "monday"},
{"day_name": "tuesday"},
{"day_name": "wednesday"},
{"day_name": "thursday"},
{"day_name": "friday"},
{"day_name": "saturday"},
]
hours_detail = [
{"day_name": "sunday", "peak_hour": False},
{"day_name": "monday", "peak_hour": False},
{"day_name": "tuesday", "peak_hour": False},
{"day_name": "wednesday", "peak_hour": False},
{"day_name": "thursday", "peak_hour": False},
{"day_name": "friday", "peak_hour": False},
{"day_name": "saturday", "peak_hour": False},
{"day_name": "saturday", "peak_hour": True},
{"day_name": "friday", "peak_hour": True},
{"day_name": "thursday", "peak_hour": True},
]
I want to create another list of dictionaries that looks like the one below. I'm basically combining these two lists and also rearranging according to the day name.
final_data_object = [
{
"timing": {"day_name": "saturday"},
"hour_detail": [
{"day_name": "saturday", "peak_hour": False},
{"day_name": "saturday", "peak_hour": True},
]
},
{
"timing": {"day_name": "friday"},
"hour_detail": [
{"day_name": "friday", "peak_hour": False},
{"day_name": "friday", "peak_hour": True},
]
},
    and so on...
]
I have tried this but it didn't work:
data = []
for time_instance in timing:
obj = {
"timing": time_instance
}
for hour_instance in hour_detail:
if time_instance["day_name"] == hour_instance["day_name"]:
obj["hour_detail"] = hour_instance
data.append(obj)
return data
A:
If pricing = hour_detail, then obj["pricing"] must be a list, as it can have multiple values.
You have to create a new list in your loop for time_instance in timing: and append every time time_instance["day_name"] == pricing_instance["day_name"].
Example:
data = []
for time_instance in timing:
current_hour_detail = []
for line in hours_detail:
if line["day_name"] == time_instance["day_name"]:
current_hour_detail.append(line)
data.append({
"timing": time_instance,
"pricing": current_hour_detail
})
Or smaller way
data = []
for time_instance in timing:
current_hour_detail = [line for line in hours_detail if line["day_name"] == time_instance["day_name"]]
data.append({
"timing": time_instance,
"pricing": current_hour_detail
})
Watch out though, this solution is O(n*m) time complexity; by first transforming your list hours_detail into a hashtable {day_name: [peak_hour1, peak_hour2]} you can reduce it to O(n+m)
A:
This should answer you question:
...
data = []
for time_instance in timing:
obj = {"timing": time_instance}
# use a list to store all your pricing
obj["hour_detail"] = []
for pricing_instance in pricing:
if time_instance["day_name"] == pricing_instance["day_name"]:
obj["hour_detail"].append(pricing_instance)
# add 'obj' after the loop
data.append(obj)
print(data)
| How to get specific objects from two list of dictionaries on a specific key value? | I have two lists of dictionaries:
timing = [
{"day_name": "sunday"},
{"day_name": "monday"},
{"day_name": "tuesday"},
{"day_name": "wednesday"},
{"day_name": "thursday"},
{"day_name": "friday"},
{"day_name": "saturday"},
]
hours_detail = [
{"day_name": "sunday", "peak_hour": False},
{"day_name": "monday", "peak_hour": False},
{"day_name": "tuesday", "peak_hour": False},
{"day_name": "wednesday", "peak_hour": False},
{"day_name": "thursday", "peak_hour": False},
{"day_name": "friday", "peak_hour": False},
{"day_name": "saturday", "peak_hour": False},
{"day_name": "saturday", "peak_hour": True},
{"day_name": "friday", "peak_hour": True},
{"day_name": "thursday", "peak_hour": True},
]
I want to create another list of dictionaries that looks like the one below. I'm basically combining these two lists and also rearranging according to the day name.
final_data_object = [
{
"timing": {"day_name": "saturday"},
"hour_detail": [
{"day_name": "saturday", "peak_hour": False},
{"day_name": "saturday", "peak_hour": True},
]
},
{
"timing": {"day_name": "friday"},
"hour_detail": [
{"day_name": "friday", "peak_hour": False},
{"day_name": "friday", "peak_hour": True},
]
},
soon on...
]
I have tried this but it didn't work:
data = []
for time_instance in timing:
obj = {
"timing": time_instance
}
for hour_instance in hour_detail:
if time_instance["day_name"] == hour_instance["day_name"]:
obj["hour_detail"] = hour_instance
data.append(obj)
return data
| [
"If pricing = hour_detail then obj[\"pricing\"] must be a list as it can have multiple value.\nYou have to create a new list in your loop for time_instance in timing: and append every time time_instance[\"day_name\"] == pricing_instance[\"day_name\"].\nExemple:\ndata = []\n\nfor time_instance in timing:\n current_hour_detail = []\n for line in hours_detail:\n if line[\"day_name\"] == time_instance[\"day_name\"]:\n current_hour_detail.append(line)\n \n data.append({\n \"timing\": time_instance,\n \"pricing\": current_hour_detail\n })\n\nOr smaller way\ndata = []\n\nfor time_instance in timing:\n current_hour_detail = [line for line in hours_detail if line[\"day_name\"] == time_instance[\"day_name\"]]\n data.append({\n \"timing\": time_instance,\n \"pricing\": current_hour_detail\n })\n\nWhatch out tho, this solution is in O(n*m) time complexity, by first transforming your list hours_detail by a hashtable {day_name: [peak_hour1, peak_hour2]} you can reduce it in O(n+m)\n",
"This should answer you question:\n...\ndata = []\nfor time_instance in timing:\n obj = {\"timing\": time_instance}\n # use a list to store all your pricing\n obj[\"hour_detail\"] = []\n for pricing_instance in pricing:\n if time_instance[\"day_name\"] == pricing_instance[\"day_name\"]:\n obj[\"hour_detail\"].append(pricing_instance)\n # add 'obj' after the loop\n data.append(obj)\nprint(data)\n\n"
] | [
1,
0
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074629179_python_python_3.x.txt |
Q:
KeyError Received unregistered task of type '' on celery while task is registered
I'm a bit new to celery configs.
I have a task named myapp.tasks.my_task, for example.
I can see myapp.tasks.my_task in the registered tasks of celery when I use celery inspect registered. Doesn't that mean the task is successfully registered? Why does it raise the following error for it:
KeyError celery.worker.consumer.consumer in on_task_received
Received unregistered task of type 'my_app.tasks.my_task'.
The message has been ignored and discarded.
Did you remember to import the module containing this task?
Or maybe you're using relative imports?
Please see
http://docs.celeryq.org/en/latest/internals/protocol.html
for more information.
The full contents of the message body was:
'[[], {}, {"callbacks": null, "errbacks": null, "chain": null, "chord": null}]' (77b)
there are also other tasks in my_app.tasks and they work correctly but only this task does not work and gets KeyError:
@shared_task(queue='celery')
def other_task():
""" WORKS """
...
@shared_task(queue='celery')
def my_task():
""" DOES NOT WORK """
...
A:
It means that Celery can't find the implementation of the task my_app.tasks.my_task when it was called. Some possible solutions you may want to look at:
Possible Solution 1:
You probably haven't configured correctly either:
Celery imports e.g. celery_app.conf.update(imports=['my_app.tasks']) or celery_app.conf.imports = ['my_app.tasks']
Or Celery include (example) e.g. celery_app = Celery(..., include=['my_app.tasks'])
Note: If in a Django application, this can be skipped if already using celery_app.autodiscover_tasks() since the tasks are automatically discovered in the location ./<app_name>/tasks.py
Possible Solution 2:
If you are only importing my_app e.g. celery_app.conf.update(imports=['my_app']) then I assume you have a file my_app/__init__.py Make sure that inside that file, it imports the task my_app.tasks.my_task along with my_app.tasks.other_task so that the celery app knows that such task exists.
# Contents of my_app/__init__.py
from my_app.tasks import (
my_task,
other_task,
)
Possible Solution 3:
In case the my_task was just newly added (whereas other_task was already an old existing task), you might not have restarted the celery worker yet to see the new task. Try restarting the worker.
A:
Another solution is to add a name param to the shared_task decorator, i.e.,
@shared_task(queue='celery', name='other_task')
def other_task():
""" WORKS """
...
@shared_task(queue='celery', name='my_task')
def my_task():
""" DOES NOT WORK """
...
I saw this in the full demo project https://github.com/melikesofta/django-dynamic-periodic-tasks, and the related code can be found here.
A:
I could solve the error by just restarting Celery. I use Celery 5.1.2:
celery -A core worker --pool=solo -l info
| KeyError Received unregistered task of type '' on celery while task is registered | I'm a bit new to Celery configs.
I have a task named myapp.tasks.my_task, for example.
I can see myapp.tasks.my_task in the registered tasks of Celery when I use celery inspect registered. Doesn't that mean the task is successfully registered? Why does it raise the following error for it:
KeyError celery.worker.consumer.consumer in on_task_received
Received unregistered task of type 'my_app.tasks.my_task'.
The message has been ignored and discarded.
Did you remember to import the module containing this task?
Or maybe you're using relative imports?
Please see
http://docs.celeryq.org/en/latest/internals/protocol.html
for more information.
The full contents of the message body was:
'[[], {}, {"callbacks": null, "errbacks": null, "chain": null, "chord": null}]' (77b)
There are also other tasks in my_app.tasks that work correctly; only this task fails with the KeyError:
@shared_task(queue='celery')
def other_task():
""" WORKS """
...
@shared_task(queue='celery')
def my_task():
""" DOES NOT WORK """
...
| [
"It means that Celery can't find the implementation of the task my_app.tasks.my_task when it was called. Some possible solutions you may want to look at:\nPossible Solution 1:\nYou probably haven't configured correctly either:\n\nCelery imports e.g. celery_app.conf.update(imports=['my_app.tasks']) or celery_app.conf.imports = ['my_app.tasks']\nOr Celery include (example) e.g. celery_app = Celery(..., include=['my_app.tasks'])\n\nNote: If in a Django application, this can be skipped if already using celery_app.autodiscover_tasks() since the tasks are automatically discovered in the location ./<app_name>/tasks.py\nPossible Solution 2:\nIf you are only importing my_app e.g. celery_app.conf.update(imports=['my_app']) then I assume you have a file my_app/__init__.py Make sure that inside that file, it imports the task my_app.tasks.my_task along with my_app.tasks.other_task so that the celery app knows that such task exists.\n# Contents of my_app/__init__.py\nfrom my_app.tasks import (\n my_task,\n other_task,\n)\n\nPossible Solution 3:\nIn case the my_task was just newly added (whereas other_task was already an old existing task), you might not have restarted the celery worker yet to see the new task. Try restarting the worker.\n",
"Another solution can be adding a name param in the shared_task decorator, i.e.,\n@shared_task(queue='celery', name='other_task')\ndef other_task():\n \"\"\" WORKS \"\"\"\n ...\n\n@shared_task(queue='celery', name='my_task')\ndef my_task():\n \"\"\" DOES NOT WORK \"\"\"\n ...\n\nI see this in a full demo project https://github.com/melikesofta/django-dynamic-periodic-tasks, and the related code can be find here.\n",
"I could solve the error by just restarting Celery. I use Celery 5.1.2:\ncelery -A core worker --pool=solo -l info\n\n"
] | [
2,
0,
0
] | [] | [] | [
"celery",
"celery_task",
"django",
"python",
"python_3.x"
] | stackoverflow_0068888941_celery_celery_task_django_python_python_3.x.txt |
Q:
streamlit dataframe - live input values
Is it possible to add live values to the streamlit dataframe, then save it as a new dataframe and continue with dataframe manipulation?
Let's say I upload on streamlit a dataframe like below:
word     frequency    weight
apple    3
green    2
house    5
I want the user to input the values in the "weight" column and save them to a new dataframe (on which I later perform some manipulation in the code). I am just wondering if such "live input" works in Streamlit.
A:
As mentioned in the comments, you can currently use the streamlit-aggrid component to do this (guide on how to do this here). We're in the process of revamping st.dataframe, and the ability to edit the values is part of that (should be released in the next few months).
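As a rough sketch of the streamlit-aggrid approach (the exact option names can vary between versions of the component, so treat this as an illustrative assumption rather than the definitive API):
import pandas as pd
import streamlit as st
from st_aggrid import AgGrid, GridOptionsBuilder

df = pd.DataFrame({"word": ["apple", "green", "house"],
                   "frequency": [3, 2, 5],
                   "weight": [0.0, 0.0, 0.0]})

# Make only the "weight" column editable in the grid.
gb = GridOptionsBuilder.from_dataframe(df)
gb.configure_column("weight", editable=True)

grid = AgGrid(df, gridOptions=gb.build())

# The user's edits come back as data you can turn into a new dataframe.
edited_df = pd.DataFrame(grid["data"])
st.write(edited_df)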
| streamlit dataframe - live input values | Is it possible to add live values to the streamlit dataframe, then save it as a new dataframe and continue with dataframe manipulation?
Let's say I upload on streamlit a dataframe like below:
word     frequency    weight
apple    3
green    2
house    5
I want the user to input the values in the "weight" column and save them to a new dataframe (on which I later perform some manipulation in the code). I am just wondering if such "live input" works in Streamlit.
| [
"As mentioned in the comments, you can currently use the streamlit-aggrid component to do this (guide on how to do this here). We're in the process of revamping st.dataframe, and the ability to edit the values is part of that (should be released in the next few months).\n"
] | [
2
] | [] | [] | [
"pandas",
"python",
"streamlit"
] | stackoverflow_0074603090_pandas_python_streamlit.txt |
Q:
Faster Numpy: Contiguous Number Replacement
Can someone help me make this faster?
import numpy as np
# an array to split
a = np.array([0,0,1,0,1,1,1,0,1,1,0,0,0,1])
# idx where the number changes
idx = np.where(np.roll(a,1)!=a)[0][1:]
# split of array into groups
aout = np.split(a,idx)
# sum of each group
sumseg = [aa.sum() for aa in aout]
#fill criteria
idx2 = np.where( (np.array(sumseg)>0) & (np.array(sumseg)<2) )
#fill targets
[aout[ai].fill(0) for ai in idx2[0]]
# a is now updated? didn't follow how a gets updated
# return a
I noticed that a gets updated through this process, but I didn't understand how those objects remained linked through the splitting, etc.
If it is important, or helps, a is actually a 2d array and I am looping over each row/column performing this operation.
A:
Better solution 1D:
We can use a convolution:
aout = ((np.convolve(a,[1,1,1],mode='same')>1)&(a>0)).astype(a.dtype)
# aout = array([0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0])

The convolution counts each element together with its two neighbours, so keeping only the positions where that count exceeds 1 (and where a > 0) zeroes out every isolated 1, which is exactly the length-one group that the original code filled with 0.
Better solution 2D:
from scipy.signal import convolve2d
a = np.array([[1, 1, 0, 0, 0, 0, 1, 0, 0, 1],
[1, 0, 1, 0, 1, 0, 0, 0, 1, 1]])
aout = ((convolve2d(a,np.ones((1,3)),mode='same')>1)&(a>0)).astype(a.dtype)
#aout = array([[1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]])
Why did a change?
And to understand why a is updated in your process, you need to understand the difference between a copy and a view.
From the documentation:
View
It is possible to access the array differently by just changing
certain metadata like stride and dtype without changing the data
buffer. This creates a new way of looking at the data and these new
arrays are called views. The data buffer remains the same, so any
changes made to a view reflects in the original copy. A view can be
forced through the ndarray.view method.
Copy
When a new array is created by duplicating the data buffer as
well as the metadata, it is called a copy. Changes made to the copy do
not reflect on the original array. Making a copy is slower and
memory-consuming but sometimes necessary. A copy can be forced by
using ndarray.copy.
So np.split() returns a view, not a copy, of a: aout still points to the same data buffer as a, and if you change aout you change a.
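A tiny demonstration of that view behaviour (illustrative only):
import numpy as np

a = np.array([0, 0, 1, 0])
parts = np.split(a, [2])   # the pieces are views into a's buffer, not copies

parts[1].fill(9)           # modifying a piece...
print(a)                   # [0 0 9 9]  ...also modifies the original array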
Benchmarking
import numpy as np
a = np.random.randint(0,2,(1000000,))
def continuous_split(a):
idx = np.where(np.roll(a,1)!=a)[0][1:]
aout = np.split(a,idx)
sumseg = [aa.sum() for aa in aout]
idx2 = np.where( (np.array(sumseg)>0) & (np.array(sumseg)<2) )
[aout[ai].fill(0) for ai in idx2[0]]
return aout
def continuous_conv(a):
return ((np.convolve(a,[1,1,1],mode='same')>1)&(a>0)).astype(a.dtype)
%timeit continuous_split(a)
%timeit continuous_conv(a)
np.split() solution:
668 ms ± 11.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
np.convolve() solution:
7.63 ms ± 115 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
| Faster Numpy: Contiguous Number Replacement | Can someone help me make this faster?
import numpy as np
# an array to split
a = np.array([0,0,1,0,1,1,1,0,1,1,0,0,0,1])
# idx where the number changes
idx = np.where(np.roll(a,1)!=a)[0][1:]
# split of array into groups
aout = np.split(a,idx)
# sum of each group
sumseg = [aa.sum() for aa in aout]
#fill criteria
idx2 = np.where( (np.array(sumseg)>0) & (np.array(sumseg)<2) )
#fill targets
[aout[ai].fill(0) for ai in idx2[0]]
# a is now updated? didn't follow how a gets updated
# return a
I noticed that a gets updated through this process, but I didn't understand how those objects remained linked through the splitting, etc.
If it is important, or helps, a is actually a 2d array and I am looping over each row/column performing this operation.
| [
"Better solution 1D:\nWe can use a convolution:\naout = ((np.convolve(a,[1,1,1],mode='same')>1)&(a>0)).astype(a.dtype)\n# aout = array([0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0])\n\nBetter solution 2D:\nfrom scipy.signal import convolve2d\n\na = np.array([[1, 1, 0, 0, 0, 0, 1, 0, 0, 1],\n [1, 0, 1, 0, 1, 0, 0, 0, 1, 1]])\n\naout = ((convolve2d(a,np.ones((1,3)),mode='same')>1)&(a>0)).astype(a.dtype)\n\n#aout = array([[1, 1, 0, 0, 0, 0, 0, 0, 0, 0],\n# [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]])\n\nWhy a changed ?\nAnd to understand why a is updated in your process, you need to understand the difference between a copy and a view.\nFrom the documentation:\n\nView\nIt is possible to access the array differently by just changing\ncertain metadata like stride and dtype without changing the data\nbuffer. This creates a new way of looking at the data and these new\narrays are called views. The data buffer remains the same, so any\nchanges made to a view reflects in the original copy. A view can be\nforced through the ndarray.view method.\nCopy\nWhen a new array is created by duplicating the data buffer as\nwell as the metadata, it is called a copy. Changes made to the copy do\nnot reflect on the original array. Making a copy is slower and\nmemory-consuming but sometimes necessary. A copy can be forced by\nusing ndarray.copy.\n\nOr np.split() return a view not a copy of a, so aout is still pointing to the same data buffer as a, if you change aout you change a.\nBenchmarking\nimport numpy as np\n\na = np.random.randint(0,2,(1000000,))\n\ndef continuous_split(a):\n idx = np.where(np.roll(a,1)!=a)[0][1:]\n aout = np.split(a,idx)\n sumseg = [aa.sum() for aa in aout]\n idx2 = np.where( (np.array(sumseg)>0) & (np.array(sumseg)<2) )\n [aout[ai].fill(0) for ai in idx2[0]]\n return aout\n \ndef continuous_conv(a):\n return ((np.convolve(a,[1,1,1],mode='same')>1)&(a>0)).astype(a.dtype)\n\n%timeit continuous_split(a)\n%timeit continuous_conv(a)\n\nnp.split() solution:\n668 ms ± 11.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\nnp.convolve() solution:\n7.63 ms ± 115 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n\n"
] | [
3
] | [] | [] | [
"numpy",
"python"
] | stackoverflow_0074628646_numpy_python.txt |
Q:
Check if file in directory has corresponding MD5 file
I am working on a file integrity check. I want to check whether each file in a given directory has its corresponding MD5 hash file, and return all file names that do.
for example:
inside directory [
abc.bin abc.bin.md5
efg.bin
qwerty.bin qwerty.bin.md5
xyc.bin
]
the return values: abc.bin, qwerty.bin
A:
You could do something like this:
import os

#get a list of files and directories in the current directory
file_list = os.listdir('.')
#produce a list of candidate files (files ending with `md5`). Perhaps there is a more elegant way of doing this, but it works.
candidates = [x for x in file_list if x.split('.')[-1] == 'md5']
#iterate the file_list and check to see if there is a match in candidates (after removing the `.md5` portion of the name).
print([file for file in file_list if file in ['.'.join(candidate.split('.')[:-1]) for candidate in candidates]] + candidates)
There are a million ways to go about doing this so feel free to tinker.
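As an alternative, here is a shorter sketch that simply checks for a sibling "<name>.md5" file (defaulting to the current directory is an assumption):
import os

def files_with_md5(directory='.'):
    names = set(os.listdir(directory))
    # Keep every non-.md5 entry whose "<name>.md5" companion also exists.
    return [n for n in names if not n.endswith('.md5') and n + '.md5' in names]

print(files_with_md5())  # e.g. ['abc.bin', 'qwerty.bin']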
| Check if file in directory has corresponding MD5 file | I am working on a file integrity check. I want to check whether each file in a given directory has its corresponding MD5 hash file, and return all file names that do.
for example:
inside directory [
abc.bin abc.bin.md5
efg.bin
qwerty.bin qwerty.bin.md5
xyc.bin
]
the return values: abc.bin, qwerty.bin
| [
"You could do something like this:\n#get a list of files and directories in the current directory\nfile_list = os.listdir('.')\n\n#produce a list of candidate files (files ending with `md5`). Perhaps there is a more elegant way of doing this, but it works. \ncandidates = [x for x in file_list if x.split('.')[-1] == 'md5']\n\n#iterate the file_list and check to see if there is a match in candidates (after removing the `.md5` portion of the name). \nprint([file for file in file_list if file in ['.'.join(candidate.split('.')[:-1]) for candidate in candidates]] + candidates)\n\nThere are a million ways to go about doing this so feel free to tinker.\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074619521_python.txt |