Dataset schema (from the dataset viewer header):
- content: string, 85 to 101k characters
- title: string, 0 to 150 characters
- question: string, 15 to 48k characters
- answers: sequence
- answers_scores: sequence
- non_answers: sequence
- non_answers_scores: sequence
- tags: sequence
- name: string, 35 to 137 characters
Q: How to fix concurrency issue with multiprocessing? Serial function working differently than similarly designed parallel function def update(sharedDict,node,...): for neighbor in weightNode: sharedDict[neighbor] += *positive float(0->1) sharedDict[node] += *positive float(0->1) This is the slave function of each process. Each addition is a positive float unaffected by any value in sharedDict is added. No keys are added. def pSerial(graph): ... #setup left out for readability for i in range(100): last_serialDict = serialDict serialDict = dict.fromkeys(last_serialDict.keys(),0.0) s = *positive float(0->1) #used in later *positive float for node in serialDict: for neighbor in wDigraph[node]: serialDict[neighbor] += *positive float(0->1) serialDict[node] += *positive float(0->1) err = sum([abs(serialDict[node] - last_serialDict[node]) for node in serialDict]) if err < nodeCount * 0.000001: return serialDict raise RuntimeError('failed to converge in 100 iterations') This is the serial implementation of the algorithm. Note that the slave function is identical to the nested for loop. def pParallel(graph): ... #setup left out for readability with Manager() as manager: parallelDict = dict(dict.fromkeys(wDigraph, 1.0 / nodeCount)) #from weighted graph for i in range(100): lastParallel = parallelDict parallelDict = dict.fromkeys(lastParallel.keys(),0.0) s = *positive float(0->1) pool = Pool() sharedDict = manager.dict(parallelDict) pool.starmap(update, [(sharedDict,node,...) for node in parallelDict]) pool.close() pool.join() parallelDict = dict(sharedDict) err = sum([abs(parallelDict[node] - lastParallel[node]) for node in parallelDict]) if err < nodeCount * 0.000001: return parallelDict raise RuntimeError('failed to converge in 100 iterations') With this function computing the PageRank of variable size graphs in parallel, the update function not only is slower than the serial version(programmed the same way) but also does not converge in the same amount of iterations(concurrency issue) What is the issue with the code that is causing this? A: It is not safe to update the same dict in parallel from multiple processes/threads. This cause a race condition. The threads needs to write in different places so to avoid this (they can read the same part safely though). Adding new key or removing existing ones to the same shared dict also cause a race condition due to the key that needs to be possibly re-hashed internally.
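The accepted answer points at the race on the shared Manager dict: += is a read-modify-write, so concurrent workers overwrite each other's updates. A minimal sketch of one common fix, assuming the workers can compute their contributions locally and let the parent merge them serially; the graph, the constant contribution value, and all names below are illustrative, not taken from the original code.

    from collections import Counter
    from multiprocessing import Pool

    def local_update(args):
        # Each worker gets read-only inputs and returns its own partial sums,
        # so no two processes ever write to the same mapping.
        node, neighbors, contribution = args
        partial = Counter()
        for neighbor in neighbors:
            partial[neighbor] += contribution
        partial[node] += contribution
        return partial

    def parallel_iteration(wDigraph, contribution=0.5):
        # Merge the per-worker results in the parent process, serially.
        tasks = [(node, list(wDigraph[node]), contribution) for node in wDigraph]
        totals = Counter()
        with Pool() as pool:
            for partial in pool.map(local_update, tasks):
                totals.update(partial)
        return dict(totals)

    if __name__ == "__main__":
        graph = {"A": ["B"], "B": ["A", "C"], "C": []}
        print(parallel_iteration(graph))

Because only the parent ever mutates the combined dict, the result no longer depends on process scheduling.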
How to fix concurrency issue with multiprocessing?
Serial function working differently than similarly designed parallel function def update(sharedDict,node,...): for neighbor in weightNode: sharedDict[neighbor] += *positive float(0->1) sharedDict[node] += *positive float(0->1) This is the slave function of each process. Each addition is a positive float unaffected by any value in sharedDict is added. No keys are added. def pSerial(graph): ... #setup left out for readability for i in range(100): last_serialDict = serialDict serialDict = dict.fromkeys(last_serialDict.keys(),0.0) s = *positive float(0->1) #used in later *positive float for node in serialDict: for neighbor in wDigraph[node]: serialDict[neighbor] += *positive float(0->1) serialDict[node] += *positive float(0->1) err = sum([abs(serialDict[node] - last_serialDict[node]) for node in serialDict]) if err < nodeCount * 0.000001: return serialDict raise RuntimeError('failed to converge in 100 iterations') This is the serial implementation of the algorithm. Note that the slave function is identical to the nested for loop. def pParallel(graph): ... #setup left out for readability with Manager() as manager: parallelDict = dict(dict.fromkeys(wDigraph, 1.0 / nodeCount)) #from weighted graph for i in range(100): lastParallel = parallelDict parallelDict = dict.fromkeys(lastParallel.keys(),0.0) s = *positive float(0->1) pool = Pool() sharedDict = manager.dict(parallelDict) pool.starmap(update, [(sharedDict,node,...) for node in parallelDict]) pool.close() pool.join() parallelDict = dict(sharedDict) err = sum([abs(parallelDict[node] - lastParallel[node]) for node in parallelDict]) if err < nodeCount * 0.000001: return parallelDict raise RuntimeError('failed to converge in 100 iterations') With this function computing the PageRank of variable size graphs in parallel, the update function not only is slower than the serial version(programmed the same way) but also does not converge in the same amount of iterations(concurrency issue) What is the issue with the code that is causing this?
[ "It is not safe to update the same dict in parallel from multiple processes/threads. This cause a race condition. The threads needs to write in different places so to avoid this (they can read the same part safely though). Adding new key or removing existing ones to the same shared dict also cause a race condition due to the key that needs to be possibly re-hashed internally.\n" ]
[ 1 ]
[]
[]
[ "concurrency", "dictionary", "parallel_processing", "python", "python_multiprocessing" ]
stackoverflow_0074636784_concurrency_dictionary_parallel_processing_python_python_multiprocessing.txt
Q: reporting R2 score in tensorflow "regression" model I would like my model to report r2 square in the validation, however I cannot find the right metric to fill in ???, model.compile(loss = 'mse', optimizer = 'adam', metrics = '???') Thanks for any hint in advance A: my answer to the question comment is y calculation R2 scores is R square scores but tfa.metrics.RSquare needs to use the same sizes same order of y_true and y_predict R2 but you can do it for multi-classes when you need input to output as channels or discrete. Sample: Custom multi classes, it required tf.float32 as default tf.variable assignment and return as multi-class attacks. import tensorflow as tf """"""""""""""""""""""""""""""""""""""""""""""""""""""""" : Class / Definition """"""""""""""""""""""""""""""""""""""""""""""""""""""""" class MulticlassSumSquare(tf.keras.metrics.Metric): def __init__(self, name='multiclass_true_positives', **kwargs): super(MulticlassSumSquare, self).__init__(name=name, **kwargs) self.value = self.add_weight(name='tp', initializer='zeros') def update_state(self, y_true, y_pred, sample_weight=None): y_pred = tf.reshape( tf.argmax(y_pred, axis=0), shape=(-1, 1)) y_pred = tf.cast(y_pred, 'float32') y_true = tf.cast(y_true, 'float32') temp = 1 - ( ( y_pred - y_true ) / ( y_pred + y_true ) ) self.value.assign_add( tf.reduce_sum(temp) ) def result(self): return self.value def reset_state(self): # The state of the metric will be reset at the start of each epoch. self.value.assign(0.) class MyDenseLayer(tf.keras.layers.Layer): def __init__(self, num_outputs): super(MyDenseLayer, self).__init__() self.num_outputs = num_outputs def build(self, input_shape): min_size_init = tf.keras.initializers.Ones() self.kernel = self.add_weight(shape=[int(input_shape[-1]), self.num_outputs], initializer = min_size_init, trainable=True) def call(self, inputs): temp = tf.matmul(inputs, self.kernel) # , shape=(10, 10), dtype=float32) return temp start = 3 limit = 33 delta = 3 sample = tf.range(start, limit, delta) sample = tf.cast( sample, dtype=tf.float32 ) sample = tf.constant( sample, shape=( 10, 1 ) ) label = tf.constant([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=tf.int64) layer = MyDenseLayer(10) """"""""""""""""""""""""""""""""""""""""""""""""""""""""" DataSet """"""""""""""""""""""""""""""""""""""""""""""""""""""""" dataset = tf.data.Dataset.from_tensor_slices((tf.constant(tf.cast(sample, dtype=tf.int64), shape=(1, 10, 1), dtype=tf.int64),tf.constant(label, shape=(1, 10, 1), dtype=tf.int64))) """"""""""""""""""""""""""""""""""""""""""""""""""""""""" : Model Initialize """"""""""""""""""""""""""""""""""""""""""""""""""""""""" model = tf.keras.models.Sequential([ tf.keras.layers.InputLayer(input_shape=( 1, 1 )), layer ]) """"""""""""""""""""""""""""""""""""""""""""""""""""""""" : Optimizer """"""""""""""""""""""""""""""""""""""""""""""""""""""""" optimizer = tf.keras.optimizers.Nadam( learning_rate=0.00001, beta_1=0.9, beta_2=0.999, epsilon=1e-07, name='Nadam' ) """"""""""""""""""""""""""""""""""""""""""""""""""""""""" : Loss Fn """"""""""""""""""""""""""""""""""""""""""""""""""""""""" lossfn = tf.keras.losses.SparseCategoricalCrossentropy( from_logits=False, reduction=tf.keras.losses.Reduction.AUTO, name='sparse_categorical_crossentropy' ) """"""""""""""""""""""""""""""""""""""""""""""""""""""""" : Model Summary """"""""""""""""""""""""""""""""""""""""""""""""""""""""" model.compile(optimizer=optimizer, loss=lossfn, metrics=[MulticlassSumSquare()]) """"""""""""""""""""""""""""""""""""""""""""""""""""""""" : Training 
""""""""""""""""""""""""""""""""""""""""""""""""""""""""" history = model.fit( dataset, batch_size=100, epochs=10000 ) Output: Single record R suqare no change no loss fun return value. Epoch 1/10000 1/1 [==============================] - 1s 1s/step - loss: nan - multiclass_true_positives: 7.0621 Epoch 2/10000 1/1 [==============================] - 0s 5ms/step - loss: nan - multiclass_true_positives: 7.0621 Sample: Discrete actions in games group_2_Heriken_kick_Left = tf.constant([ 0,0,0,0,0,1,0,0,0,0,0,0, 0,0,0,0,0,1,1,0,0,0,0,0, 0,0,0,0,0,0,1,0,0,0,0,0, 0,0,0,0,0,0,0,0,1,0,0,0 ], shape=(1, 1, 1, 48))
reporting R2 score in tensorflow "regression" model
I would like my model to report the R² (R-squared) score during validation; however, I cannot find the right metric to fill in for ???: model.compile(loss = 'mse', optimizer = 'adam', metrics = '???'). Thanks for any hint in advance.
[ "my answer to the question comment is y calculation R2 scores is R square scores but tfa.metrics.RSquare needs to use the same sizes same order of y_true and y_predict R2 but you can do it for multi-classes when you need input to output as channels or discrete.\nSample: Custom multi classes, it required tf.float32 as default tf.variable assignment and return as multi-class attacks.\nimport tensorflow as tf\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Class / Definition\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nclass MulticlassSumSquare(tf.keras.metrics.Metric):\n def __init__(self, name='multiclass_true_positives', **kwargs):\n super(MulticlassSumSquare, self).__init__(name=name, **kwargs)\n self.value = self.add_weight(name='tp', initializer='zeros')\n \n def update_state(self, y_true, y_pred, sample_weight=None):\n y_pred = tf.reshape( tf.argmax(y_pred, axis=0), shape=(-1, 1))\n y_pred = tf.cast(y_pred, 'float32')\n y_true = tf.cast(y_true, 'float32')\n temp = 1 - ( ( y_pred - y_true ) / ( y_pred + y_true ) )\n self.value.assign_add( tf.reduce_sum(temp) )\n\n def result(self):\n return self.value\n\n def reset_state(self):\n # The state of the metric will be reset at the start of each epoch.\n self.value.assign(0.)\n \nclass MyDenseLayer(tf.keras.layers.Layer):\n def __init__(self, num_outputs):\n super(MyDenseLayer, self).__init__()\n self.num_outputs = num_outputs\n \n def build(self, input_shape):\n min_size_init = tf.keras.initializers.Ones()\n self.kernel = self.add_weight(shape=[int(input_shape[-1]), self.num_outputs],\n initializer = min_size_init, trainable=True)\n\n def call(self, inputs):\n temp = tf.matmul(inputs, self.kernel) # , shape=(10, 10), dtype=float32)\n return temp\n \nstart = 3\nlimit = 33\ndelta = 3\nsample = tf.range(start, limit, delta)\nsample = tf.cast( sample, dtype=tf.float32 )\nsample = tf.constant( sample, shape=( 10, 1 ) )\nlabel = tf.constant([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=tf.int64)\nlayer = MyDenseLayer(10)\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nDataSet\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\ndataset = tf.data.Dataset.from_tensor_slices((tf.constant(tf.cast(sample, dtype=tf.int64), shape=(1, 10, 1), dtype=tf.int64),tf.constant(label, shape=(1, 10, 1), dtype=tf.int64)))\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Model Initialize\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nmodel = tf.keras.models.Sequential([\n tf.keras.layers.InputLayer(input_shape=( 1, 1 )),\n layer\n])\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Optimizer\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\noptimizer = tf.keras.optimizers.Nadam(\n learning_rate=0.00001, beta_1=0.9, beta_2=0.999, epsilon=1e-07,\n name='Nadam'\n)\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Loss Fn\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\" \nlossfn = 
tf.keras.losses.SparseCategoricalCrossentropy(\n from_logits=False,\n reduction=tf.keras.losses.Reduction.AUTO,\n name='sparse_categorical_crossentropy'\n)\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Model Summary\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nmodel.compile(optimizer=optimizer, loss=lossfn, metrics=[MulticlassSumSquare()])\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Training\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nhistory = model.fit( dataset, batch_size=100, epochs=10000 )\n\nOutput: Single record R suqare no change no loss fun return value.\nEpoch 1/10000\n1/1 [==============================] - 1s 1s/step - loss: nan - multiclass_true_positives: 7.0621\nEpoch 2/10000\n1/1 [==============================] - 0s 5ms/step - loss: nan - multiclass_true_positives: 7.0621\n\nSample: Discrete actions in games\ngroup_2_Heriken_kick_Left = tf.constant([ 0,0,0,0,0,1,0,0,0,0,0,0, 0,0,0,0,0,1,1,0,0,0,0,0, 0,0,0,0,0,0,1,0,0,0,0,0, 0,0,0,0,0,0,0,0,1,0,0,0 ], shape=(1, 1, 1, 48))\n\n\n" ]
[ 0 ]
[]
[]
[ "keras", "python", "tensorflow" ]
stackoverflow_0074638809_keras_python_tensorflow.txt
Q: GitLab CI Python black formatter says: would reformat, whereas running black does not reformat When I run GitLab CI on this commit with this gitlab-ci.yml: stages: - format - test black_formatting: image: python:3.6 stage: format before_script: # Perform an update to make sure the system is up to date. - sudo apt-get update --fix-missing # Download miniconda. - wget -q https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh; bash miniconda.sh -b -f -p $HOME/miniconda; # Ensure the (mini) conda environment can be activated. - export PATH="$HOME/miniconda/bin:$PATH" # (Re)create the environment.yml file for the repository. - conda env create -q -f environment.yml -n checkstyle-for-bash --force # Activate the environment of the repository. - source activate checkstyle-for-bash script: # Verify the Python code is black formatting compliant. - black . --check --exclude '\.venv/|\.local/|\.cache/|\.git/' # Verify the Python code is flake8 formatting compliant. - flake8 . allow_failure: false test:pytest:36: stage: test image: python:3.6 script: # Ensure the (mini) conda environment can be activated. - export PATH="$HOME/miniconda/bin:$PATH" # Activate the environment of the repository. - source activate checkstyle-for-bash # Run the python tests. - python -m pytest it outputs: Running with gitlab-runner 14.8.0 (565b6c0b) on trucolrunner DS42qHSq Preparing the "shell" executor 00:00 Using Shell executor... Preparing environment 00:00 Running on pcname... Getting source from Git repository 00:02 Fetching changes with git depth set to 20... Reinitialized existing Git repository in /home/gitlab-runner/builds/DS42qHSq/0/root/checkstyle-for-bash/.git/ Checking out 001577c3 as main... Removing miniconda.sh Skipping Git submodules setup Executing "step_script" stage of the job script 02:55 $ sudo apt-get update --fix-missing Hit:1 http://nl.archive.ubuntu.com/ubuntu impish InRelease Get:2 http://security.ubuntu.com/ubuntu impish-security InRelease [110 kB] Get:3 http://nl.archive.ubuntu.com/ubuntu impish-updates InRelease [115 kB] Hit:4 https://repo.nordvpn.com/deb/nordvpn/debian stable InRelease Get:5 http://nl.archive.ubuntu.com/ubuntu impish-backports InRelease [101 kB] Hit:6 https://brave-browser-apt-release.s3.brave.com stable InRelease Get:7 http://security.ubuntu.com/ubuntu impish-security/main amd64 DEP-11 Metadata [20,3 kB] Get:8 http://security.ubuntu.com/ubuntu impish-security/universe amd64 DEP-11 Metadata [3.624 B] Get:9 http://nl.archive.ubuntu.com/ubuntu impish-updates/main amd64 DEP-11 Metadata [25,8 kB] Get:10 http://nl.archive.ubuntu.com/ubuntu impish-updates/universe amd64 DEP-11 Metadata [35,4 kB] Get:11 http://nl.archive.ubuntu.com/ubuntu impish-updates/multiverse amd64 DEP-11 Metadata [940 B] Get:12 http://nl.archive.ubuntu.com/ubuntu impish-backports/universe amd64 DEP-11 Metadata [16,4 kB] Fetched 428 kB in 2s (235 kB/s) Reading package lists... $ wget -q https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh; bash miniconda.sh -b -f -p $HOME/miniconda; PREFIX=/home/gitlab-runner/miniconda Unpacking payload ... Collecting package metadata (current_repodata.json): ...working... done Solving environment: ...working... done # All requested packages already installed. installation finished. $ export PATH="$HOME/miniconda/bin:$PATH" $ conda env create -q -f environment.yml -n checkstyle-for-bash --force Collecting package metadata (repodata.json): ...working... done Solving environment: ...working... 
done Preparing transaction: ...working... done Verifying transaction: ...working... done Executing transaction: ...working... done Installing pip dependencies: ...working... done $ source activate checkstyle-for-bash $ black . --check --exclude '\.venv/|\.local/|\.cache/|\.git/' would reformat src/arg_parser.py would reformat src/helper_text_parsing.py Oh no! 2 files would be reformatted, 10 files would be left unchanged. ERROR: Job failed: exit status 1 However, if I run black src/** on the GitHub repository, black returns: ~/git/checkstyle-for-bash$ git pull Already up to date. (base) some@name:~/git/checkstyle-for-bash$ black src/** All done! ✨ ✨ 8 files left unchanged. Just in case I did not clone the right repository, I also manually copy-pasted the content of the src/arg_parser.py file from GitLab into ~/git/checkstyle-for-bash/src/arg_parser.py and ran black again. However, the output is the same, it does not change anything. For completeness, this is the content of the src/arg_parser.py file: # This is the main code of this project nr, and it manages running the code and # outputting the results to LaTex. import argparse def parse_cli_args(): # Instantiate the parser parser = argparse.ArgumentParser(description="Optional app description") # Include argument parsing for default code. # Allow user to load a graph from file. parser.add_argument( "--ggl", dest="google_style_guide", action="store_true", help=( "boolean flag, determines whether the Google Style Guide for " "Bash rules are followed." ), ) # Allow user to specify an infile. parser.add_argument("infile", nargs="?", type=argparse.FileType("r")) # Specify default argument values for the parser. parser.set_defaults(google_style_guide=True,) # Load the arguments that are given. args = parser.parse_args() return args Question What could be causing the GitLab CI to say the files would be reformatted by black (even though the files are not reformatted when running black on them (on the same device (in a different conda environment)))? Setup I run my own GitLab CI on the same device on which I test the conda black commands. The GitLab CI copies the GitHub commits of a repository, one at a time, runs its CI on it, and then reports the results back to GitHub. I am currently not able to expose my GitLab server on clearnet as I am behind a gateway that I currently do not control. Doubts I am fairly certain it is a "silly" mistake from my side, however I have not yet been able to figure out what it is. Especially since I manually copy pasted the GitLab file content of the src/arg_parser.py file twice, and ran black twice to verify black indeed does not change "that" file. Also, to be sure it was not a trailing newline or anything, I used the copy button with the mouse, instead of manual selection: Additionally, the file is flake8 compliant. My current guess is that somehow the CI is not ran on the most recent commit of that repository. However, to verify that I clicked on the GitLab of the failed commit, which indeed redirects to the "05e85fd54f93ccfc427023b21f9cdb0c0cd6db2e" commit (copy) in GitLab: It is from this commit that I copy-pasted the src/arg_parser.py file twice. Another guess would be that the gitlab-ci.yml loads a miniconda environment whereas my local version of black uses a full conda environment. Perhaps they have a different newline character that would lead to a formatting difference. (Even though I doubt that would be the case). Issue I ran the CI again whilst include the black . 
--diff command in the gitlab-ci.yml script, and the difference is between a: ''' Return and: '''Return as is displayed in the accompanying output: $ black . --diff --exclude '\.venv/|\.local/|\.cache/|\.git/' --- src/arg_parser.py 2022-04-03 10:13:10.751289 +0000 +++ src/arg_parser.py 2022-04-03 11:11:26.297995 +0000 @@ -24,10 +24,12 @@ # Allow user to specify an infile. parser.add_argument("infile", nargs="?", type=argparse.FileType("r")) # Specify default argument values for the parser. - parser.set_defaults(google_style_guide=True,) + parser.set_defaults( + google_style_guide=True, + ) # Load the arguments that are given. args = parser.parse_args() return args would reformat src/arg_parser.py --- src/helper_text_parsing.py 2022-04-02 19:35:45.142619 +0000 +++ src/helper_text_parsing.py 2022-04-03 11:11:26.342908 +0000 @@ -5,11 +5,11 @@ def add_two(x): return x + 2 def get_function_line_nrs(filecontent, rules): - """ Returns two lists containing the starting and ending line numbers of + """Returns two lists containing the starting and ending line numbers of the functions respectively. :param filecontent: The content of the bash file that is being analysed. :param rules: The Bash formatting rules that are chosen by the user. """ would reformat src/helper_text_parsing.py All done! ✨ ✨ 2 files would be reformatted, 10 files would be left unchanged. I am not quite sure exactly why this happens, as I thought python black would always converge to the exact same formatting for a given valid python file. I assume it is because two different black versions are used between the miniconda environment and the anaconda environment on the same device. To test this hypothesis I am including a black --version command in the gitlab-ci.yml. A: The miniconda environment in the GitLab CI used python black version: black, 22.3.0 (compiled: yes) Whereas the local environment used python black version: black, version 19.10b0 Updating the local black version, pushing the formatted code according to the latest python black version, and running the GitLab CI on that GitHub commit resulted in a successful GitLab CI run. A: another way to ensure you have the version you want you can do e.g pip install black==22.1.0 in before_script.
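Since the root cause turned out to be mismatched Black versions, one way to keep local runs and CI in agreement is to pin the version and fail fast when it differs. A rough sketch under that assumption; the 22.3.0 pin comes from the CI output above, while the script itself is hypothetical.

    import subprocess
    import sys
    from importlib.metadata import version

    PINNED_BLACK = "22.3.0"  # version reported by the CI job above

    installed = version("black")  # raises PackageNotFoundError if black is missing
    if installed != PINNED_BLACK:
        sys.exit(f"black {installed} found, but {PINNED_BLACK} is required; "
                 f"run: pip install black=={PINNED_BLACK}")

    # Same check the CI job runs, so local and CI results should agree.
    result = subprocess.run(
        ["black", ".", "--check", "--exclude", r"\.venv/|\.local/|\.cache/|\.git/"],
    )
    sys.exit(result.returncode)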
GitLab CI Python black formatter says: would reformat, whereas running black does not reformat
When I run GitLab CI on this commit with this gitlab-ci.yml: stages: - format - test black_formatting: image: python:3.6 stage: format before_script: # Perform an update to make sure the system is up to date. - sudo apt-get update --fix-missing # Download miniconda. - wget -q https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh; bash miniconda.sh -b -f -p $HOME/miniconda; # Ensure the (mini) conda environment can be activated. - export PATH="$HOME/miniconda/bin:$PATH" # (Re)create the environment.yml file for the repository. - conda env create -q -f environment.yml -n checkstyle-for-bash --force # Activate the environment of the repository. - source activate checkstyle-for-bash script: # Verify the Python code is black formatting compliant. - black . --check --exclude '\.venv/|\.local/|\.cache/|\.git/' # Verify the Python code is flake8 formatting compliant. - flake8 . allow_failure: false test:pytest:36: stage: test image: python:3.6 script: # Ensure the (mini) conda environment can be activated. - export PATH="$HOME/miniconda/bin:$PATH" # Activate the environment of the repository. - source activate checkstyle-for-bash # Run the python tests. - python -m pytest it outputs: Running with gitlab-runner 14.8.0 (565b6c0b) on trucolrunner DS42qHSq Preparing the "shell" executor 00:00 Using Shell executor... Preparing environment 00:00 Running on pcname... Getting source from Git repository 00:02 Fetching changes with git depth set to 20... Reinitialized existing Git repository in /home/gitlab-runner/builds/DS42qHSq/0/root/checkstyle-for-bash/.git/ Checking out 001577c3 as main... Removing miniconda.sh Skipping Git submodules setup Executing "step_script" stage of the job script 02:55 $ sudo apt-get update --fix-missing Hit:1 http://nl.archive.ubuntu.com/ubuntu impish InRelease Get:2 http://security.ubuntu.com/ubuntu impish-security InRelease [110 kB] Get:3 http://nl.archive.ubuntu.com/ubuntu impish-updates InRelease [115 kB] Hit:4 https://repo.nordvpn.com/deb/nordvpn/debian stable InRelease Get:5 http://nl.archive.ubuntu.com/ubuntu impish-backports InRelease [101 kB] Hit:6 https://brave-browser-apt-release.s3.brave.com stable InRelease Get:7 http://security.ubuntu.com/ubuntu impish-security/main amd64 DEP-11 Metadata [20,3 kB] Get:8 http://security.ubuntu.com/ubuntu impish-security/universe amd64 DEP-11 Metadata [3.624 B] Get:9 http://nl.archive.ubuntu.com/ubuntu impish-updates/main amd64 DEP-11 Metadata [25,8 kB] Get:10 http://nl.archive.ubuntu.com/ubuntu impish-updates/universe amd64 DEP-11 Metadata [35,4 kB] Get:11 http://nl.archive.ubuntu.com/ubuntu impish-updates/multiverse amd64 DEP-11 Metadata [940 B] Get:12 http://nl.archive.ubuntu.com/ubuntu impish-backports/universe amd64 DEP-11 Metadata [16,4 kB] Fetched 428 kB in 2s (235 kB/s) Reading package lists... $ wget -q https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh; bash miniconda.sh -b -f -p $HOME/miniconda; PREFIX=/home/gitlab-runner/miniconda Unpacking payload ... Collecting package metadata (current_repodata.json): ...working... done Solving environment: ...working... done # All requested packages already installed. installation finished. $ export PATH="$HOME/miniconda/bin:$PATH" $ conda env create -q -f environment.yml -n checkstyle-for-bash --force Collecting package metadata (repodata.json): ...working... done Solving environment: ...working... done Preparing transaction: ...working... done Verifying transaction: ...working... 
done Executing transaction: ...working... done Installing pip dependencies: ...working... done $ source activate checkstyle-for-bash $ black . --check --exclude '\.venv/|\.local/|\.cache/|\.git/' would reformat src/arg_parser.py would reformat src/helper_text_parsing.py Oh no! 2 files would be reformatted, 10 files would be left unchanged. ERROR: Job failed: exit status 1 However, if I run black src/** on the GitHub repository, black returns: ~/git/checkstyle-for-bash$ git pull Already up to date. (base) some@name:~/git/checkstyle-for-bash$ black src/** All done! ✨ ✨ 8 files left unchanged. Just in case I did not clone the right repository, I also manually copy-pasted the content of the src/arg_parser.py file from GitLab into ~/git/checkstyle-for-bash/src/arg_parser.py and ran black again. However, the output is the same, it does not change anything. For completeness, this is the content of the src/arg_parser.py file: # This is the main code of this project nr, and it manages running the code and # outputting the results to LaTex. import argparse def parse_cli_args(): # Instantiate the parser parser = argparse.ArgumentParser(description="Optional app description") # Include argument parsing for default code. # Allow user to load a graph from file. parser.add_argument( "--ggl", dest="google_style_guide", action="store_true", help=( "boolean flag, determines whether the Google Style Guide for " "Bash rules are followed." ), ) # Allow user to specify an infile. parser.add_argument("infile", nargs="?", type=argparse.FileType("r")) # Specify default argument values for the parser. parser.set_defaults(google_style_guide=True,) # Load the arguments that are given. args = parser.parse_args() return args Question What could be causing the GitLab CI to say the files would be reformatted by black (even though the files are not reformatted when running black on them (on the same device (in a different conda environment)))? Setup I run my own GitLab CI on the same device on which I test the conda black commands. The GitLab CI copies the GitHub commits of a repository, one at a time, runs its CI on it, and then reports the results back to GitHub. I am currently not able to expose my GitLab server on clearnet as I am behind a gateway that I currently do not control. Doubts I am fairly certain it is a "silly" mistake from my side, however I have not yet been able to figure out what it is. Especially since I manually copy pasted the GitLab file content of the src/arg_parser.py file twice, and ran black twice to verify black indeed does not change "that" file. Also, to be sure it was not a trailing newline or anything, I used the copy button with the mouse, instead of manual selection: Additionally, the file is flake8 compliant. My current guess is that somehow the CI is not ran on the most recent commit of that repository. However, to verify that I clicked on the GitLab of the failed commit, which indeed redirects to the "05e85fd54f93ccfc427023b21f9cdb0c0cd6db2e" commit (copy) in GitLab: It is from this commit that I copy-pasted the src/arg_parser.py file twice. Another guess would be that the gitlab-ci.yml loads a miniconda environment whereas my local version of black uses a full conda environment. Perhaps they have a different newline character that would lead to a formatting difference. (Even though I doubt that would be the case). Issue I ran the CI again whilst include the black . 
--diff command in the gitlab-ci.yml script, and the difference is between a: ''' Return and: '''Return as is displayed in the accompanying output: $ black . --diff --exclude '\.venv/|\.local/|\.cache/|\.git/' --- src/arg_parser.py 2022-04-03 10:13:10.751289 +0000 +++ src/arg_parser.py 2022-04-03 11:11:26.297995 +0000 @@ -24,10 +24,12 @@ # Allow user to specify an infile. parser.add_argument("infile", nargs="?", type=argparse.FileType("r")) # Specify default argument values for the parser. - parser.set_defaults(google_style_guide=True,) + parser.set_defaults( + google_style_guide=True, + ) # Load the arguments that are given. args = parser.parse_args() return args would reformat src/arg_parser.py --- src/helper_text_parsing.py 2022-04-02 19:35:45.142619 +0000 +++ src/helper_text_parsing.py 2022-04-03 11:11:26.342908 +0000 @@ -5,11 +5,11 @@ def add_two(x): return x + 2 def get_function_line_nrs(filecontent, rules): - """ Returns two lists containing the starting and ending line numbers of + """Returns two lists containing the starting and ending line numbers of the functions respectively. :param filecontent: The content of the bash file that is being analysed. :param rules: The Bash formatting rules that are chosen by the user. """ would reformat src/helper_text_parsing.py All done! ✨ ✨ 2 files would be reformatted, 10 files would be left unchanged. I am not quite sure exactly why this happens, as I thought python black would always converge to the exact same formatting for a given valid python file. I assume it is because two different black versions are used between the miniconda environment and the anaconda environment on the same device. To test this hypothesis I am including a black --version command in the gitlab-ci.yml.
[ "The miniconda environment in the GitLab CI used python black version:\nblack, 22.3.0 (compiled: yes)\n\nWhereas the local environment used python black version:\nblack, version 19.10b0\n\nUpdating the local black version, pushing the formatted code according to the latest python black version, and running the GitLab CI on that GitHub commit resulted in a successful GitLab CI run.\n", "another way to ensure you have the version you want you can do e.g pip install black==22.1.0 in before_script.\n" ]
[ 3, 1 ]
[]
[]
[ "continuous_integration", "formatting", "gitlab", "python", "python_black" ]
stackoverflow_0071724842_continuous_integration_formatting_gitlab_python_python_black.txt
Q: Cant run flask on ngrok from flask import Flask, escape, request app = Flask(__name__) run_with_ngrok() @app.route('/') def hello(): name = request.args.get("name", "World") return f'Hello, {escape(name)}!' When I run the this from terminal with "flask run" it doesn't print an ngrok link. Im i an virtual env and i have tried running it with python "file name" and it did not work. A: if you are trying to expose your ip through ngrok, you can try tunneling with ngrok on terminal for the flask app's port your app code should look like : from flask import Flask, escape, request app = Flask(__name__) @app.route('/') def hello(): name = request.args.get("name", "World") return f'Hello, {escape(name)}!' if __name__ == "__main__": app.run(port=5000) you can tunnel the flask app port with the following command: ngrok http 5000 here the port 5000 denotes the flask app port. A: I think you forgot to add this part to end of your file if __name__ == "__main__": app.run() A: from flask_ngrok import run_with_ngrok from flask import Flask, escape, request app = Flask(__name__) app.secret_key = '33d5f499c564155e5d2795f5b6f8c5f6' run_with_ngrok(app) @app.route('/') def hello(): name = request.args.get("name", "World") return f'Hello, {escape(name)}!' if __name__ == "__main__": app.run(debug=True) We can grab token from ngrok.com website by signin In terminal we need to run like ngrok config add-authtoken <your_token> ngrok http 5000 for flask it is 5000 and for other application it would be different And we also need to run our application side by side
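A minimal sketch of what the answers converge on, assuming the flask-ngrok package is installed: import run_with_ngrok, pass it the app object, and start the script with python app.py instead of flask run, so that app.run() (which flask-ngrok hooks) actually executes. Note that newer Flask releases moved escape to markupsafe.

    from flask import Flask, request
    from flask_ngrok import run_with_ngrok  # the missing import
    from markupsafe import escape           # flask.escape was removed in newer Flask

    app = Flask(__name__)
    run_with_ngrok(app)  # must receive the app object

    @app.route("/")
    def hello():
        name = request.args.get("name", "World")
        return f"Hello, {escape(name)}!"

    if __name__ == "__main__":
        # Start with `python app.py`; flask-ngrok prints the public URL
        # once the development server is up.
        app.run()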
Can't run Flask on ngrok
from flask import Flask, escape, request app = Flask(__name__) run_with_ngrok() @app.route('/') def hello(): name = request.args.get("name", "World") return f'Hello, {escape(name)}!' When I run this from the terminal with "flask run" it doesn't print an ngrok link. I'm in a virtual env, and I have also tried running it with python "file name", which did not work.
[ "if you are trying to expose your ip through ngrok, you can try tunneling with ngrok on terminal for the flask app's port\nyour app code should look like :\nfrom flask import Flask, escape, request\n\napp = Flask(__name__)\n\[email protected]('/')\ndef hello():\n name = request.args.get(\"name\", \"World\")\n return f'Hello, {escape(name)}!'\n \n \nif __name__ == \"__main__\":\n app.run(port=5000)\n\nyou can tunnel the flask app port with the following command:\nngrok http 5000\n\nhere the port 5000 denotes the flask app port.\n", "I think you forgot to add this part to end of your file\nif __name__ == \"__main__\":\n app.run()\n\n", "from flask_ngrok import run_with_ngrok\nfrom flask import Flask, escape, request\n\napp = Flask(__name__)\napp.secret_key = '33d5f499c564155e5d2795f5b6f8c5f6'\nrun_with_ngrok(app)\[email protected]('/')\ndef hello():\n name = request.args.get(\"name\", \"World\")\n return f'Hello, {escape(name)}!'\n\nif __name__ == \"__main__\":\napp.run(debug=True)\n\nWe can grab token from ngrok.com website by signin\nIn terminal we need to run like\nngrok config add-authtoken <your_token>\nngrok http 5000\n\nfor flask it is 5000 and for other application it would be different\nAnd we also need to run our application side by side\n" ]
[ 1, 0, 0 ]
[]
[]
[ "ngrok", "python" ]
stackoverflow_0072240708_ngrok_python.txt
Q: Issues with too many interactive plotly figures I am using Jupyter notebook on my laptop (the version coming with Anaconda) to perform some sensitivity analysis. I use plotly to display the results and I like the interactive features that it has. However, when I am trying to display more than 7/8 interactive plots on the same notebook, some plots disappears and the output cells of those plots go crazy (see picture attached). Issue with plotly A solution I found was to disable the interactive feature at least for some of the plots, changing the diplay mode in config as: config = {'staticPlot': True} fig.show(config=config) This method works, however, I like the feature and I was wondering if there was a solution that does not imply disabling the interactive view. I read about this post where they say it might be a memory issue (even though their graphs are going blank while mine are behaving crazy): Plotly: How to prevent graphs from going blank when there are too many interactive plots? However I did not manage to find/change the jupyter configuration file, maybe because I installed it via Anaconda? I was also wondering if someone experienced exactly the same or there might be a simpler solution to this issue. Thanks a lot in advance A: I believe that in the second link the config file should be generated if not existing. You can also try changing to gl rendering: https://plotly.com/python/webgl-vs-svg/
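To illustrate the WebGL suggestion from the answer, a small hedged sketch using Scattergl, which renders with WebGL instead of SVG and is much lighter when a notebook holds many interactive figures; the data is made up.

    import numpy as np
    import plotly.graph_objects as go

    rng = np.random.default_rng(0)
    x = rng.normal(size=10_000)
    y = rng.normal(size=10_000)

    # Scattergl draws with WebGL instead of SVG, so many such figures
    # put far less strain on the browser than regular Scatter traces.
    fig = go.Figure(go.Scattergl(x=x, y=y, mode="markers"))
    fig.show()

With plotly express, the equivalent switch is the render_mode="webgl" argument of px.scatter.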
Issues with too many interactive plotly figures
I am using Jupyter Notebook on my laptop (the version that comes with Anaconda) to perform some sensitivity analysis. I use plotly to display the results and I like its interactive features. However, when I try to display more than 7 or 8 interactive plots in the same notebook, some plots disappear and the output cells of those plots go haywire (see picture attached). Issue with plotly A solution I found was to disable the interactive feature, at least for some of the plots, by changing the display mode in config: config = {'staticPlot': True} fig.show(config=config) This works; however, I like the interactivity and was wondering whether there is a solution that does not require disabling the interactive view. I read this post, where they say it might be a memory issue (even though their graphs go blank while mine misbehave): Plotly: How to prevent graphs from going blank when there are too many interactive plots? However, I did not manage to find or change the Jupyter configuration file, maybe because I installed it via Anaconda? I was also wondering whether someone has experienced exactly the same thing, or whether there is a simpler solution to this issue. Thanks a lot in advance.
[ "I believe that in the second link the config file should be generated if not existing.\nYou can also try changing to gl rendering:\nhttps://plotly.com/python/webgl-vs-svg/\n" ]
[ 0 ]
[]
[]
[ "jupyter", "jupyter_notebook", "memory", "plotly", "python" ]
stackoverflow_0074639389_jupyter_jupyter_notebook_memory_plotly_python.txt
Q: I need the approach to solve this problem in python You get an integer which indicates the number of strings you get next. You have N strings where (N-1) strings follow the same pattern but one string doesnot follow the pattern. You have to find that odd string which doesnot follow the pattern and print it. Pattern varies in each case. Each string length may vary. Only one odd string is present in all the given strings. Example : INPUT : 4 ABCD BCDE CDEF DGES OUTPUT: DGES Please give me the approach to solve this. A: You need a list to store tuples of the form (query, pattern). Once you have this list you just need to find which pattern occurs only once. In that way you can easily detect the "odd one out" n = int(input()) qp = [] for _ in range(n): query = input() pattern = [ord(x)-ord(y) for x, y in zip(query, query[1:])] qp.append((query, pattern)) for q, p in qp: if sum(1 for _, _p in qp if p == _p) == 1: print(q) break else: print('No unique patterns detected')
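A compact restatement of the answer's idea, run on the example from the question: encode each string as the differences between consecutive character codes and return the string whose pattern occurs exactly once. The list-based wrapper below is illustrative; the original reads from stdin.

    def odd_one_out(strings):
        # Pattern = differences between consecutive character codes.
        patterns = [[ord(b) - ord(a) for a, b in zip(s, s[1:])] for s in strings]
        for s, p in zip(strings, patterns):
            if patterns.count(p) == 1:
                return s
        return None

    print(odd_one_out(["ABCD", "BCDE", "CDEF", "DGES"]))  # prints DGES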
I need the approach to solve this problem in python
You get an integer which indicates the number of strings that follow. You have N strings, where (N-1) strings follow the same pattern but one string does not follow the pattern. You have to find the odd string that does not follow the pattern and print it. The pattern varies in each case. Each string's length may vary. Only one odd string is present among all the given strings. Example: INPUT: 4 ABCD BCDE CDEF DGES OUTPUT: DGES Please give me an approach to solve this.
[ "You need a list to store tuples of the form (query, pattern).\nOnce you have this list you just need to find which pattern occurs only once. In that way you can easily detect the \"odd one out\"\nn = int(input())\n\nqp = []\n\nfor _ in range(n):\n query = input()\n pattern = [ord(x)-ord(y) for x, y in zip(query, query[1:])]\n qp.append((query, pattern))\n\nfor q, p in qp:\n if sum(1 for _, _p in qp if p == _p) == 1:\n print(q)\n break\nelse:\n print('No unique patterns detected')\n\n" ]
[ 0 ]
[]
[]
[ "conditional_statements", "dictionary", "loops", "python", "string" ]
stackoverflow_0074639074_conditional_statements_dictionary_loops_python_string.txt
Q: Spark + Kafka app, getting "CassandraCatalogException: Attempting to write to C* Table but missing primary key columns: [col1,col2,col3]" Run env kafka ----ReadStream----> local ----WriteStream----> cassandra source code place on local and kafka, local, writeStream is different IP \ table culumn is col1 | col2 | col3 | col4 | col5 | col6 | col7 df.printSchema is root |-- key: binary (nullable = true) |-- value: binary (nullable = true) |-- topic: string (nullable = true) |-- partition: integer (nullable = true) |-- offset: long (nullable = true) |-- timestamp: timestamp (nullable = true) |-- timestampType: integer (nullable = true) sorry, i try solve alone but can`t find solution. Run Code spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.2, com.datastax.spark:spark-cassandra-connector_2.12:3.2.0, com.github.jnr:jnr-posix:3.1.15 --conf com.datastax.spark:spark.cassandra.connectiohost{cassandraIP}, spark.sql.extensions=com.datastax.spark.connector.CassandraSparkExtensions test.py Source Code from pyspark.sql import SparkSession # Spark Bridge local to spark_master == Connect master spark = SparkSession.builder \ .master("spark://{SparkMasterIP}:7077") \ .appName("Spark_Streaming+kafka+cassandra") \ .config('spark.cassandra.connection.host', '{cassandraIP}') \ .config('spark.cassandra.connection.port', '9042') \ .getOrCreate() # Read Stream From {Topic} at BootStrap df = spark.readStream \ .format("kafka") \ .option("kafka.bootstrap.servers", "{KafkaIP}:9092") \ .option('startingOffsets','earliest') \ .option('failOnDataLoss','False') \ .option("subscribe", "{Topic}") \ .load() \ df.printSchema() # write Stream at cassandra ds = df.writeStream \ .trigger(processingTime='15 seconds') \ .format("org.apache.spark.sql.cassandra") \ .option("checkpointLocation","{checkpoint}") \ .options(table='{table}',keyspace="{key}") \ .outputMode('update') \ .start() ds.awaitTermination() error com.datastax.spark.connector.datasource.CassandraCatalogException: Attempting to write to C* Table but missing primary key columns: [col1,col2,col3] at com.datastax.spark.connector.datasource.CassandraWriteBuilder.<init>(CassandraWriteBuilder.scala:44) at com.datastax.spark.connector.datasource.CassandraTable.newWriteBuilder(CassandraTable.scala:69) at org.apache.spark.sql.execution.streaming.StreamExecution.createStreamingWrite(StreamExecution.scala:590) at org.apache.spark.sql.execution.streaming.MicroBatchExecution.logicalPlan$lzycompute(MicroBatchExecution.scala:140) at org.apache.spark.sql.execution.streaming.MicroBatchExecution.logicalPlan(MicroBatchExecution.scala:59) at org.apache.spark.sql.execution.streaming.StreamExecution.$anonfun$runStream$1(StreamExecution.scala:295) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775) at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStr at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:209) Traceback (most recent call last): File "/home/test.py", line 33, in <module> ds.awaitTermination() File "/venv/lib64/python3.6/site-packages/pyspark/python/lib/pyspark.zip/pyspark/sql/streaming.py", line 101, in awaitTe File "/venv/lib64/python3.6/site-packages/pyspark/python/lib/py4j-0.10.9.5-src.zip/py4j/java_gateway.py", line 1322, in File "/home/jeju/venv/lib64/python3.6/site-packages/pyspark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 117, in 
deco pyspark.sql.utils.StreamingQueryException: Attempting to write to C* Table but missing primary key columns: [col1,col2,col3] === Streaming Query === Identifier: [id = d7da05f9-29a2-4597-a2c9-86a4ebfa65f2, runId = eea59c10-30fa-4939-8a30-03bd7c96b3f2] Current Committed Offsets: {} Current Available Offsets: {} A: Error says primary key columns: [col1,col2,col3] are missing. So df doesn't have these columns. You already have df.printSchema(). You can see it yourself that thats the case. df read from Kafka has a fixed schema and you can extract your data by parsing key and value columns. In my case data sent was in value column(if you need you can add key column as well) and json formatted. So i could read it by following code: dfPerson = spark \ .readStream \ .format("kafka") \ .option("kafka.bootstrap.servers", "x.x.x.x") \ .option("subscribe", TOPIC) \ .option("startingOffsets", "latest") \ .load()\ .select(from_json(col("value").cast("string"), schema).alias("data"))\ .select("data.*") Hope it helps.
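To make the fix in the answer concrete: the writer needs a DataFrame whose columns match the Cassandra table, so the Kafka value has to be parsed first. A hedged sketch assuming the messages are JSON documents carrying the table's columns; the broker address, topic, keyspace, table, checkpoint path, and the string-typed col1..col7 schema are all placeholders.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StringType, StructField, StructType

    spark = SparkSession.builder.appName("kafka-to-cassandra").getOrCreate()

    # Schema of the JSON payload; the column names must match the Cassandra
    # table, including every primary-key column (col1, col2, col3 here).
    schema = StructType([StructField(f"col{i}", StringType()) for i in range(1, 8)])

    parsed = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "kafka-host:9092")  # placeholder address
        .option("subscribe", "topic")                           # placeholder topic
        .load()
        .select(from_json(col("value").cast("string"), schema).alias("data"))
        .select("data.*")
    )

    query = (
        parsed.writeStream
        .format("org.apache.spark.sql.cassandra")
        .option("checkpointLocation", "/tmp/checkpoint")        # placeholder path
        .options(table="table", keyspace="keyspace")            # placeholders
        .outputMode("update")
        .start()
    )
    query.awaitTermination()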
Spark + Kafka app, getting "CassandraCatalogException: Attempting to write to C* Table but missing primary key columns: [col1,col2,col3]"
Run env kafka ----ReadStream----> local ----WriteStream----> cassandra source code place on local and kafka, local, writeStream is different IP \ table culumn is col1 | col2 | col3 | col4 | col5 | col6 | col7 df.printSchema is root |-- key: binary (nullable = true) |-- value: binary (nullable = true) |-- topic: string (nullable = true) |-- partition: integer (nullable = true) |-- offset: long (nullable = true) |-- timestamp: timestamp (nullable = true) |-- timestampType: integer (nullable = true) sorry, i try solve alone but can`t find solution. Run Code spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.2, com.datastax.spark:spark-cassandra-connector_2.12:3.2.0, com.github.jnr:jnr-posix:3.1.15 --conf com.datastax.spark:spark.cassandra.connectiohost{cassandraIP}, spark.sql.extensions=com.datastax.spark.connector.CassandraSparkExtensions test.py Source Code from pyspark.sql import SparkSession # Spark Bridge local to spark_master == Connect master spark = SparkSession.builder \ .master("spark://{SparkMasterIP}:7077") \ .appName("Spark_Streaming+kafka+cassandra") \ .config('spark.cassandra.connection.host', '{cassandraIP}') \ .config('spark.cassandra.connection.port', '9042') \ .getOrCreate() # Read Stream From {Topic} at BootStrap df = spark.readStream \ .format("kafka") \ .option("kafka.bootstrap.servers", "{KafkaIP}:9092") \ .option('startingOffsets','earliest') \ .option('failOnDataLoss','False') \ .option("subscribe", "{Topic}") \ .load() \ df.printSchema() # write Stream at cassandra ds = df.writeStream \ .trigger(processingTime='15 seconds') \ .format("org.apache.spark.sql.cassandra") \ .option("checkpointLocation","{checkpoint}") \ .options(table='{table}',keyspace="{key}") \ .outputMode('update') \ .start() ds.awaitTermination() error com.datastax.spark.connector.datasource.CassandraCatalogException: Attempting to write to C* Table but missing primary key columns: [col1,col2,col3] at com.datastax.spark.connector.datasource.CassandraWriteBuilder.<init>(CassandraWriteBuilder.scala:44) at com.datastax.spark.connector.datasource.CassandraTable.newWriteBuilder(CassandraTable.scala:69) at org.apache.spark.sql.execution.streaming.StreamExecution.createStreamingWrite(StreamExecution.scala:590) at org.apache.spark.sql.execution.streaming.MicroBatchExecution.logicalPlan$lzycompute(MicroBatchExecution.scala:140) at org.apache.spark.sql.execution.streaming.MicroBatchExecution.logicalPlan(MicroBatchExecution.scala:59) at org.apache.spark.sql.execution.streaming.StreamExecution.$anonfun$runStream$1(StreamExecution.scala:295) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775) at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStr at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:209) Traceback (most recent call last): File "/home/test.py", line 33, in <module> ds.awaitTermination() File "/venv/lib64/python3.6/site-packages/pyspark/python/lib/pyspark.zip/pyspark/sql/streaming.py", line 101, in awaitTe File "/venv/lib64/python3.6/site-packages/pyspark/python/lib/py4j-0.10.9.5-src.zip/py4j/java_gateway.py", line 1322, in File "/home/jeju/venv/lib64/python3.6/site-packages/pyspark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 117, in deco pyspark.sql.utils.StreamingQueryException: Attempting to write to C* Table but missing primary key columns: [col1,col2,col3] === 
Streaming Query === Identifier: [id = d7da05f9-29a2-4597-a2c9-86a4ebfa65f2, runId = eea59c10-30fa-4939-8a30-03bd7c96b3f2] Current Committed Offsets: {} Current Available Offsets: {}
[ "Error says primary key columns: [col1,col2,col3] are missing. So df doesn't have these columns. You already have df.printSchema(). You can see it yourself that thats the case. df read from Kafka has a fixed schema and you can extract your data by parsing key and value columns. In my case data sent was in value column(if you need you can add key column as well) and json formatted. So i could read it by following code:\ndfPerson = spark \\\n.readStream \\\n.format(\"kafka\") \\\n.option(\"kafka.bootstrap.servers\", \"x.x.x.x\") \\\n.option(\"subscribe\", TOPIC) \\\n.option(\"startingOffsets\", \"latest\") \\\n.load()\\\n.select(from_json(col(\"value\").cast(\"string\"), schema).alias(\"data\"))\\\n.select(\"data.*\")\n\nHope it helps.\n" ]
[ 0 ]
[]
[]
[ "apache_kafka", "apache_spark", "cassandra", "python", "spark_cassandra_connector" ]
stackoverflow_0074638402_apache_kafka_apache_spark_cassandra_python_spark_cassandra_connector.txt
Q: temporary logging format in a same file i want all modules logs that import to the main.py save in main.log that i mention in logging.basicConfig() but with their formats here is the code: file main.py import module1 import logging logging.basicConfig( filename="main.log", format="%(asctime)s , <%(name)s> , %(levelname)s : %(message)s", datefmt="%Y-%m-%d %I:%M:%S", level=logging.DEBUG ) def main(ip): lg = logging.getLogger("main") newformat="%(asctime)s , <%(name)s> , [" + ip + "] , %(levelname)s : %(message)s", # do another things lg.debug("somthing done...") # call func1 module1.func1() lg.debug("service done") def main2(ip): lg = logging.getLogger("main") newformat="%(asctime)s , <%(name)s> , [" + ip + "] , %(levelname)s : %(message)s", # do another things lg.debug("somthing done...") # call func1 module1.func1() lg.debug("service done") main("192.168.1.100") main2("192.168.1.101") file module1.py import logging def func1(): lg = logging.getLogger("module1.func1") lg.setLevel(logging.DEBUG) # do somthing ... lg.debug("done") the output is: file main.log 2022-07-02 05:39:51 , <main1> , DEBUG : somthing done... 2022-07-02 05:39:51 , <module1.func1> , DEBUG : done 2022-07-02 05:39:51 , <main1> , DEBUG : service done 2022-07-02 05:39:51 , <main2> , DEBUG : somthing done... 2022-07-02 05:39:51 , <module1.func1> , DEBUG : done 2022-07-02 05:39:51 , <main2> , DEBUG : service done but i want this with new format: 2022-07-02 05:39:51 , <main1> , [192.168.1.100] , DEBUG : somthing done... 2022-07-02 05:39:51 , <module1.func1> , DEBUG : done 2022-07-02 05:39:51 , <main1> , [192.168.1.100] , DEBUG : service done 2022-07-02 05:39:51 , <main2> , [192.168.1.101] , DEBUG : somthing done... 2022-07-02 05:39:51 , <module1.func1> , DEBUG : done 2022-07-02 05:39:51 , <main2> , [192.168.1.101] , DEBUG : service done A: Not the best solution but this should get the job done import module1 import logging logging.basicConfig( filename="main.log", format="%(asctime)s , <%(name)s> , %(levelname)s : %(message)s", datefmt="%Y-%m-%d %I:%M:%S", level=logging.DEBUG ) def setBasicConfigFormat(format): ROOT_LOGGER = logging.getLogger() ROOT_LOGGER.handlers[0].setFormatter(logging.Formatter( format )) def main(ip): lg = logging.getLogger("main") newformat = "%(asctime)s , <%(name)s> , [" + \ ip + "] , %(levelname)s : %(message)s" setBasicConfigFormat(newformat) # do another things lg.debug("somthing done...") # call func1 module1.func1() lg.debug("service done") def main2(ip): lg = logging.getLogger("main") newformat = "%(asctime)s , <%(name)s> , [" + \ ip + "] , %(levelname)s : %(message)s" setBasicConfigFormat(newformat) # do another things lg.debug("somthing done...") # call func1 module1.func1() lg.debug("service done") main("192.168.1.100") main2("192.168.1.101") A: I found a way with logging.LoggerAdapter(logger, {parameters}) you can pass vars to log format so I wrote a function to return logger and format and handlers want. 
def get_logger(name:str, path:str, ip:bool = False, format: bool or str = False) -> logging.Logger: fmt = format if format else f"%(asctime)s , <%(name)s> , {' [%(ip)s] ,' if ip else ''} , %(levelname)s : %(message)s" logConf = logging.getLogger(name) # set level of logger logConf.setLevel(get_log_level()) # stream handler for Errors stream_handler = logging.StreamHandler() # set level of this stream handler stream_handler.setLevel(logging.WARNING) stream_handler.setFormatter(logging.Formatter(fmt)) logConf.addHandler(stream_handler) # file handler file_handler = logging.FileHandler(path) # set level of this file handler file_handler.setLevel(logging.DEBUG) file_handler.setFormatter(logging.Formatter(fmt)) logConf.addHandler(file_handler) return logConf and call this function in each module body and pass IP with logging.Logger.Adapter() module1.py ip = get_ip_from_some_where() lgConfig = get_logger(__name__, "module1.log", ip=True) lg = logging.LoggerAdapter(lgConfig, {"ip": ip}) lg.setLevel(logging.debug) # make sure to set level lg.info('started') # some codes lg.info('done') or you can use it in functions and logging chain module2.py ip = get_ip_from_some_where() lgConfig = get_logger(__name__, "module1.log", ip=True) def func1(): lg = logging.getLogger("__name__.func1") # define func1 as a chain of __name__ lg = logging.LoggerAdapter(lg, {"ip": ip}) lg.setLevel(logging.debug) # make sure to set level # do something lg.info('started') # some codes lg.info('done') func1()
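Alongside the LoggerAdapter approach above, a hedged alternative sketch: keep a single format string containing %(ip)s and attach a logging.Filter that injects a default value, so records from modules that don't supply an ip still format (they show a '-' placeholder instead of omitting the bracket). The field names are illustrative.

    import logging

    class IPDefaultFilter(logging.Filter):
        """Ensure every record has an `ip` attribute so one format string works."""
        def filter(self, record):
            if not hasattr(record, "ip"):
                record.ip = "-"
            return True

    logging.basicConfig(
        filename="main.log",
        format="%(asctime)s , <%(name)s> , [%(ip)s] , %(levelname)s : %(message)s",
        datefmt="%Y-%m-%d %I:%M:%S",
        level=logging.DEBUG,
    )
    logging.getLogger().handlers[0].addFilter(IPDefaultFilter())

    lg = logging.getLogger("main")
    lg.debug("something done...", extra={"ip": "192.168.1.100"})  # tagged with the IP
    logging.getLogger("module1.func1").debug("done")              # falls back to '-'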
temporary logging format in a same file
i want all modules logs that import to the main.py save in main.log that i mention in logging.basicConfig() but with their formats here is the code: file main.py import module1 import logging logging.basicConfig( filename="main.log", format="%(asctime)s , <%(name)s> , %(levelname)s : %(message)s", datefmt="%Y-%m-%d %I:%M:%S", level=logging.DEBUG ) def main(ip): lg = logging.getLogger("main") newformat="%(asctime)s , <%(name)s> , [" + ip + "] , %(levelname)s : %(message)s", # do another things lg.debug("somthing done...") # call func1 module1.func1() lg.debug("service done") def main2(ip): lg = logging.getLogger("main") newformat="%(asctime)s , <%(name)s> , [" + ip + "] , %(levelname)s : %(message)s", # do another things lg.debug("somthing done...") # call func1 module1.func1() lg.debug("service done") main("192.168.1.100") main2("192.168.1.101") file module1.py import logging def func1(): lg = logging.getLogger("module1.func1") lg.setLevel(logging.DEBUG) # do somthing ... lg.debug("done") the output is: file main.log 2022-07-02 05:39:51 , <main1> , DEBUG : somthing done... 2022-07-02 05:39:51 , <module1.func1> , DEBUG : done 2022-07-02 05:39:51 , <main1> , DEBUG : service done 2022-07-02 05:39:51 , <main2> , DEBUG : somthing done... 2022-07-02 05:39:51 , <module1.func1> , DEBUG : done 2022-07-02 05:39:51 , <main2> , DEBUG : service done but i want this with new format: 2022-07-02 05:39:51 , <main1> , [192.168.1.100] , DEBUG : somthing done... 2022-07-02 05:39:51 , <module1.func1> , DEBUG : done 2022-07-02 05:39:51 , <main1> , [192.168.1.100] , DEBUG : service done 2022-07-02 05:39:51 , <main2> , [192.168.1.101] , DEBUG : somthing done... 2022-07-02 05:39:51 , <module1.func1> , DEBUG : done 2022-07-02 05:39:51 , <main2> , [192.168.1.101] , DEBUG : service done
[ "Not the best solution but this should get the job done\nimport module1\nimport logging\n\nlogging.basicConfig(\n filename=\"main.log\",\n format=\"%(asctime)s , <%(name)s> , %(levelname)s : %(message)s\",\n datefmt=\"%Y-%m-%d %I:%M:%S\",\n level=logging.DEBUG\n)\n\n\ndef setBasicConfigFormat(format):\n ROOT_LOGGER = logging.getLogger()\n ROOT_LOGGER.handlers[0].setFormatter(logging.Formatter(\n format\n ))\n\n\ndef main(ip):\n lg = logging.getLogger(\"main\")\n\n newformat = \"%(asctime)s , <%(name)s> , [\" + \\\n ip + \"] , %(levelname)s : %(message)s\"\n setBasicConfigFormat(newformat)\n\n # do another things\n lg.debug(\"somthing done...\")\n\n # call func1\n module1.func1()\n\n lg.debug(\"service done\")\n\n\ndef main2(ip):\n lg = logging.getLogger(\"main\")\n\n newformat = \"%(asctime)s , <%(name)s> , [\" + \\\n ip + \"] , %(levelname)s : %(message)s\"\n setBasicConfigFormat(newformat)\n # do another things\n lg.debug(\"somthing done...\")\n\n # call func1\n module1.func1()\n\n lg.debug(\"service done\")\n\n\nmain(\"192.168.1.100\")\nmain2(\"192.168.1.101\")\n\n", "I found a way with logging.LoggerAdapter(logger, {parameters})\nyou can pass vars to log format\nso I wrote a function to return logger and format and handlers want.\ndef get_logger(name:str, path:str, ip:bool = False, format: bool or str = False) -> logging.Logger:\n \n fmt = format if format else f\"%(asctime)s , <%(name)s> , {' [%(ip)s] ,' if ip else ''} , %(levelname)s : %(message)s\"\n\n logConf = logging.getLogger(name)\n # set level of logger\n logConf.setLevel(get_log_level())\n # stream handler for Errors \n stream_handler = logging.StreamHandler()\n # set level of this stream handler\n stream_handler.setLevel(logging.WARNING)\n stream_handler.setFormatter(logging.Formatter(fmt))\n logConf.addHandler(stream_handler)\n # file handler\n file_handler = logging.FileHandler(path)\n # set level of this file handler\n file_handler.setLevel(logging.DEBUG)\n file_handler.setFormatter(logging.Formatter(fmt))\n logConf.addHandler(file_handler)\n\n return logConf\n\nand call this function in each module body and pass IP with logging.Logger.Adapter()\nmodule1.py\nip = get_ip_from_some_where()\n\nlgConfig = get_logger(__name__, \"module1.log\", ip=True)\nlg = logging.LoggerAdapter(lgConfig, {\"ip\": ip})\nlg.setLevel(logging.debug) # make sure to set level\n\nlg.info('started')\n# some codes\nlg.info('done')\n\nor you can use it in functions and logging chain\nmodule2.py\nip = get_ip_from_some_where()\n\nlgConfig = get_logger(__name__, \"module1.log\", ip=True)\n\ndef func1():\n lg = logging.getLogger(\"__name__.func1\") # define func1 as a chain of __name__\n lg = logging.LoggerAdapter(lg, {\"ip\": ip})\n lg.setLevel(logging.debug) # make sure to set level\n\n # do something\n\n lg.info('started')\n # some codes\n lg.info('done')\n\nfunc1()\n\n" ]
[ 0, 0 ]
[]
[]
[ "logging", "python" ]
stackoverflow_0072839320_logging_python.txt
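A compact alternative for the logging question above: a logging.Filter attached to the handler guarantees every record has an ip attribute, so a single format string containing %(ip)s never fails. This is a sketch rather than the asker's exact code; it reuses main.log and the format from the question, and records written by other modules during a call will show the currently set IP (or the "-" placeholder) instead of omitting the field entirely.

import logging

class IPFilter(logging.Filter):
    """Stamp every record that passes through the handler with an 'ip' attribute."""
    def __init__(self, default="-"):
        super().__init__()
        self.ip = default

    def filter(self, record):
        record.ip = self.ip
        return True

logging.basicConfig(
    filename="main.log",
    format="%(asctime)s , <%(name)s> , [%(ip)s] , %(levelname)s : %(message)s",
    datefmt="%Y-%m-%d %I:%M:%S",
    level=logging.DEBUG,
)

ip_filter = IPFilter()
logging.getLogger().handlers[0].addFilter(ip_filter)  # the FileHandler created by basicConfig

def main(ip):
    ip_filter.ip = ip                  # everything logged from here on carries this IP
    lg = logging.getLogger("main")
    lg.debug("something done...")
    lg.debug("service done")

main("192.168.1.100")
main("192.168.1.101")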
Q: ImportError: No module named 'tensorflow.python' here i wanna run this code for try neural network with python : from __future__ import print_function from keras.datasets import mnist from keras.models import Sequential from keras.layers import Activation, Dense from keras.utils import np_utils import tensorflow as tf batch_size = 128 nb_classes = 10 nb_epoch = 12 #input image dimensions img_row, img_cols = 28, 28 #the data, Shuffled and split between train and test sets (X_train, y_train), (X_test, y_test) = mnist.load_data() X_train = X_train.reshape(X_train.shape[0], img_rows * img_cols) X_test = X_test.reshape(X_test.shape[0], img_row * img_cols) X_train = X_train.astype('float32') X_test = X_test.astype('float32') X_train /= 255 X_text /= 255 print('X_train shape:', X_train.shape) print(X_train_shape[0], 'train samples') print(X_test_shape[0], 'test samples') #convert class vectors to binary category Y_train = np_utils.to_categorical(y_train, nb_classes) Y_test = np_utils.to_categorical(y_test, nb_classes) model = Sequential() model.add(Dense(output_dim = 800, input_dim=X_train.shape[1])) model.add(Activation('sigmoid')) model.add(Dense(nb_classes)) model.add(Actiovation('softmax')) model.compile(loss = 'categorical_crossentropy', optimizer='sgd', metrics=['accuracy']) #crossentropy fungsi galat atau fungsi error dipakai kalo class biner #model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch = nb_poch, verbose=1, validation_data=(X_test, Y_test)) score = model.evaluate(X_test, Y_test, verbose = 0) print('Test Score : ', score[0]) print('Test Accuracy : ', score[1]) at the beginning it must install keras, and success. but when try to run the code at the first the error is : ImportError : No Moduled Name "tensorflow" then i install using pip : pip install tensorflow after installation i try to run code again, got another message like this : ImportError : No Moduled Name "tensorflow.python" Message Error i dont have any idea with the error A: Uninstall tensorflow: pip uninstall tensorflow Then reinstall it: pip install tensorflow A: for me upgrading pip helped, pip install --upgrade pip pip uninstall tensorflow pip install tensorflow A: I have the same problem in Windows 10. Until now I don't know why. But if I create an virtual environment cd <your project path> Install virtualenv pip install virtualenv Create the virtual environment virtualenv <envname> Activate the env Windows Powershell: .\<envname>\Scripts\activate Unix with Bash or zsh: source <envname>/bin/activate Then now you install tensorflow (<envname>) $ pip install tensorflow And then run Hello World successfully. *Don't forget that you need to activate or configure everytime the virtual environment jupyter, command-line, etc. A: If you have python 3.6 and up (most likely), a pip3 package will be installed by default. Installing tensorflow using pip3 will make the path of the installation visible to python. So try pip3 install tensorflow First response ever, hope it helps! A: I had the same issue running a python file named tensorflow.py, after renaming it the issue dissapeared and the file started running properly. A: Installing tensorflow 1.15 solved my issue A: Open a python shell and type: help('modules') This will gather a list of all available modules. tensor flow should not show up, as it is not installed correctly (according to the traceback). Then: import sys sys.path() This will give you a list of system paths where modules can be installed. 
If there is a known issue with installing a module, I recommend moving the files manually to the right system path. The system path depends on the OS you are using, so without knowing that I can't tell you where to move it. But sys.path() can! A: Try changing your file name to something unique. Apparently the python script with same name exits inside, this is the one thats causing the issue. I was using my script, was working fine with bert_base_tf_20.py but when i changed the name to code.py , this happened. So changed it back to bert_code.py Working fine A: There's another problem which is not mentioned here, and took me a bit to figure out. If you have python installed on C:\Program Files\Python, when installing tensorflow, pip will default to another directory. Removing python from C:\Program Files\Python and installing it in another directory such as C:\Python fixed the issue for me. A: For me reintsalling tensorflow==2.5.0 on upgrading with pip worked.
ImportError: No module named 'tensorflow.python'
here i wanna run this code for try neural network with python : from __future__ import print_function from keras.datasets import mnist from keras.models import Sequential from keras.layers import Activation, Dense from keras.utils import np_utils import tensorflow as tf batch_size = 128 nb_classes = 10 nb_epoch = 12 #input image dimensions img_row, img_cols = 28, 28 #the data, Shuffled and split between train and test sets (X_train, y_train), (X_test, y_test) = mnist.load_data() X_train = X_train.reshape(X_train.shape[0], img_rows * img_cols) X_test = X_test.reshape(X_test.shape[0], img_row * img_cols) X_train = X_train.astype('float32') X_test = X_test.astype('float32') X_train /= 255 X_text /= 255 print('X_train shape:', X_train.shape) print(X_train_shape[0], 'train samples') print(X_test_shape[0], 'test samples') #convert class vectors to binary category Y_train = np_utils.to_categorical(y_train, nb_classes) Y_test = np_utils.to_categorical(y_test, nb_classes) model = Sequential() model.add(Dense(output_dim = 800, input_dim=X_train.shape[1])) model.add(Activation('sigmoid')) model.add(Dense(nb_classes)) model.add(Actiovation('softmax')) model.compile(loss = 'categorical_crossentropy', optimizer='sgd', metrics=['accuracy']) #crossentropy fungsi galat atau fungsi error dipakai kalo class biner #model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch = nb_poch, verbose=1, validation_data=(X_test, Y_test)) score = model.evaluate(X_test, Y_test, verbose = 0) print('Test Score : ', score[0]) print('Test Accuracy : ', score[1]) at the beginning it must install keras, and success. but when try to run the code at the first the error is : ImportError : No Moduled Name "tensorflow" then i install using pip : pip install tensorflow after installation i try to run code again, got another message like this : ImportError : No Moduled Name "tensorflow.python" Message Error i dont have any idea with the error
[ "Uninstall tensorflow:\npip uninstall tensorflow\n\nThen reinstall it:\npip install tensorflow\n\n", "for me upgrading pip helped,\npip install --upgrade pip\npip uninstall tensorflow\npip install tensorflow\n\n", "I have the same problem in Windows 10. Until now I don't know why.\nBut if I create an virtual environment\ncd <your project path>\nInstall virtualenv\npip install virtualenv\nCreate the virtual environment\nvirtualenv <envname>\nActivate the env\n\nWindows Powershell: .\\<envname>\\Scripts\\activate\nUnix with Bash or zsh: source <envname>/bin/activate\n\nThen now you install tensorflow\n(<envname>) $ pip install tensorflow\nAnd then run Hello World successfully.\n*Don't forget that you need to activate or configure everytime the virtual environment jupyter, command-line, etc.\n", "If you have python 3.6 and up (most likely), a pip3 package will be installed by default. Installing tensorflow using pip3 will make the path of the installation visible to python. So try\npip3 install tensorflow\n\nFirst response ever, hope it helps!\n", "I had the same issue running a python file named tensorflow.py, after renaming it the issue dissapeared and the file started running properly.\n", "Installing tensorflow 1.15 solved my issue\n", "Open a python shell and type:\nhelp('modules') \n\nThis will gather a list of all available modules. tensor flow should not show up, as it is not installed correctly (according to the traceback).\nThen:\nimport sys\nsys.path()\n\nThis will give you a list of system paths where modules can be installed. If there is a known issue with installing a module, I recommend moving the files manually to the right system path.\nThe system path depends on the OS you are using, so without knowing that I can't tell you where to move it.\nBut sys.path() can!\n", "Try changing your file name to something unique. Apparently the python script with same name exits inside, this is the one thats causing the issue.\nI was using my script, was working fine with bert_base_tf_20.py but when i changed the name to code.py , this happened. So changed it back to bert_code.py\nWorking fine\n", "There's another problem which is not mentioned here, and took me a bit to figure out. If you have python installed on C:\\Program Files\\Python, when installing tensorflow, pip will default to another directory. Removing python from C:\\Program Files\\Python and installing it in another directory such as C:\\Python fixed the issue for me.\n", "For me reintsalling tensorflow==2.5.0 on upgrading with pip worked.\n" ]
[ 42, 2, 1, 1, 1, 1, 0, 0, 0, 0 ]
[ "pip install --upgrade pip \n\nThis worked for me\n", "try to change the actual running python directory.\nand make sure that running python directory is not where you downloaded tensorflow. else go to any other directory and you're fine.\ni hope that solves your probleme.\n", "try these steps\npip install --upgrade pip\npip install tensorflow \n\n" ]
[ -1, -2, -2 ]
[ "keras", "neural_network", "python", "tensorflow" ]
stackoverflow_0041415629_keras_neural_network_python_tensorflow.txt
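For the import error above, a short diagnostic sketch can show which file Python would actually load as tensorflow and which interpreter and search paths are in play — useful for spotting a local tensorflow.py shadowing the installed package, one of the causes mentioned in the answers. Nothing here is specific to the asker's machine; it is a generic check, not a fix.

import importlib.util
import sys

spec = importlib.util.find_spec("tensorflow")
if spec is None:
    print("tensorflow is not visible to this interpreter:", sys.executable)
else:
    print("tensorflow would be loaded from:", spec.origin)
    # If this path points at a file inside your project (e.g. .../tensorflow.py)
    # instead of site-packages, a local file is shadowing the real package.

print("interpreter:", sys.executable)
print("search path:")
for p in sys.path:
    print("  ", p)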
Q: Add value to new column depending on values in another in pandas I have a dataframe such as Names Values A 0.20 A 1.30 A 1.2 B 0.30 B 0.40 C 1.2 D 0.70 E 0.12 E 1.3 F 0.90 F 0.78 F 0.88 And I would like to add to a New_col the number : 1 where for each Names with at least one Values > 0.75 and one Values < 0.75 0 for each Names with only Values > 0.75 2 for each Names with only Values < 0.75 I should then get: Names Values New_col A 0.20 1 A 1.30 1 A 1.2 1 B 0.30 2 B 0.40 2 C 1.2 0 D 0.70 2 E 0.12 1 E 1.3 1 F 0.90 2 F 0.78 2 F 0.88 2 A: First test by condition for compare threshold 0.75, get names if match at least one value, compare again membership of Names and last pass to numpy.select: m = df.Values > 0.75 s1 = df.loc[m, 'Names'].unique() s2 = df.loc[~m, 'Names'].unique() m1 = df['Names'].isin(s1) m2 = df['Names'].isin(s2) df['New_col'] = np.select([m1 & ~m2, ~m1 & m2], [0, 2], default=1) print (df) Names Values New_col 0 A 0.20 1 1 A 1.30 1 2 A 1.20 1 3 B 0.30 2 4 B 0.40 2 5 C 1.20 0 6 D 0.70 2 7 E 0.12 1 8 E 1.30 1 9 F 0.90 0 10 F 0.78 0 11 F 0.88 0 If need another ouput for only 0.75 values per names use: print (df) Names Values 0 A 0.20 1 A 1.30 2 A 1.20 3 B 0.30 4 B 0.40 5 C 1.20 6 D 0.70 7 E 0.12 8 E 1.30 9 F 0.90 10 F 0.78 11 F 0.88 12 G 0.75 13 G 0.75 m1 = df.Values > 0.75 m2 = df.Values < 0.75 s1 = df.loc[m1, 'Names'].unique() s2 = df.loc[m2, 'Names'].unique() m1 = df['Names'].isin(s1) m2 = df['Names'].isin(s2) df['New_col'] = np.select([m1 & ~m2, ~m1 & m2, m1 & m2], [0, 2, 1], default=None) print (df) Names Values New_col 0 A 0.20 1 1 A 1.30 1 2 A 1.20 1 3 B 0.30 2 4 B 0.40 2 5 C 1.20 0 6 D 0.70 2 7 E 0.12 1 8 E 1.30 1 9 F 0.90 0 10 F 0.78 0 11 F 0.88 0 12 G 0.75 None 13 G 0.75 None A: df = pd.DataFrame({"Names":['A','A','A','B','B','C','D','E','E','F','F','F'], "Values":[0.20,1.30,1.2,0.30,0.40,1.2,0.70,0.12,1.3,0.90,0.78,0.88]}) df["New_col"] = None for val in set(df.Names): flags = [True if x>0.75 else False for x in df[df['Names']==val].Values ] if sum(flags)==0: df.loc[ df['Names']==val, "New_col"] = 2 elif sum(flags)==len(df[df['Names']==val]): df.loc[ df['Names']==val, "New_col"] = 0 else: df.loc[ df['Names']==val, "New_col"] = 1 Output: Names Values New_col 0 A 0.20 1 1 A 1.30 1 2 A 1.20 1 3 B 0.30 2 4 B 0.40 2 5 C 1.20 0 6 D 0.70 2 7 E 0.12 1 8 E 1.30 1 9 F 0.90 0 10 F 0.78 0 11 F 0.88 0 Respect to your question the values for the "F" Nnames columns should be 0 instead of 2 A: Am a bit late to the party, but you could use a groupby approach: df = df.merge(df.groupby(by="Names").apply(lambda x: 0 if all(x['Values']>0.75) else (2 if all(x['Values']<0.75) else 1)).reset_index()) Here is the full code: import pandas as pd import numpy as np df = pd.DataFrame({ 'Names': ['A', 'A', 'A', 'B', 'B', 'C', 'D', 'E', 'E', 'F', 'F', 'F'], 'Values': [0.2, 1.3, 1.2, 0.3, 0.4, 1.2, 0.7, 0.12, 1.3, 0.9, 0.78, 0.88]}) df = df.merge(df.groupby(by="Names").apply(lambda x: 0 if all(x['Values']>0.75) else (2 if all(x['Values']<0.75) else 1)).reset_index()) df.columns = ['Names', 'Values', 'New_col'] print(df) OUTPUT: Names Values New_col 0 A 0.20 1 1 A 1.30 1 2 A 1.20 1 3 B 0.30 2 4 B 0.40 2 5 C 1.20 0 6 D 0.70 2 7 E 0.12 1 8 E 1.30 1 9 F 0.90 0 10 F 0.78 0 11 F 0.88 0
Add value to new column depending on values in another in pandas
I have a dataframe such as Names Values A 0.20 A 1.30 A 1.2 B 0.30 B 0.40 C 1.2 D 0.70 E 0.12 E 1.3 F 0.90 F 0.78 F 0.88 And I would like to add to a New_col the number : 1 where for each Names with at least one Values > 0.75 and one Values < 0.75 0 for each Names with only Values > 0.75 2 for each Names with only Values < 0.75 I should then get: Names Values New_col A 0.20 1 A 1.30 1 A 1.2 1 B 0.30 2 B 0.40 2 C 1.2 0 D 0.70 2 E 0.12 1 E 1.3 1 F 0.90 2 F 0.78 2 F 0.88 2
[ "First test by condition for compare threshold 0.75, get names if match at least one value, compare again membership of Names and last pass to numpy.select:\nm = df.Values > 0.75\n\ns1 = df.loc[m, 'Names'].unique()\ns2 = df.loc[~m, 'Names'].unique()\n\nm1 = df['Names'].isin(s1)\nm2 = df['Names'].isin(s2)\n\ndf['New_col'] = np.select([m1 & ~m2, ~m1 & m2], [0, 2], default=1)\nprint (df)\n Names Values New_col\n0 A 0.20 1\n1 A 1.30 1\n2 A 1.20 1\n3 B 0.30 2\n4 B 0.40 2\n5 C 1.20 0\n6 D 0.70 2\n7 E 0.12 1\n8 E 1.30 1\n9 F 0.90 0\n10 F 0.78 0\n11 F 0.88 0\n\nIf need another ouput for only 0.75 values per names use:\nprint (df)\n Names Values\n0 A 0.20\n1 A 1.30\n2 A 1.20\n3 B 0.30\n4 B 0.40\n5 C 1.20\n6 D 0.70\n7 E 0.12\n8 E 1.30\n9 F 0.90\n10 F 0.78\n11 F 0.88\n12 G 0.75\n13 G 0.75\n\nm1 = df.Values > 0.75\nm2 = df.Values < 0.75\n\ns1 = df.loc[m1, 'Names'].unique()\ns2 = df.loc[m2, 'Names'].unique()\n\n\nm1 = df['Names'].isin(s1)\nm2 = df['Names'].isin(s2)\n\ndf['New_col'] = np.select([m1 & ~m2, ~m1 & m2, m1 & m2], \n [0, 2, 1], default=None)\n\n\nprint (df)\n Names Values New_col\n0 A 0.20 1\n1 A 1.30 1\n2 A 1.20 1\n3 B 0.30 2\n4 B 0.40 2\n5 C 1.20 0\n6 D 0.70 2\n7 E 0.12 1\n8 E 1.30 1\n9 F 0.90 0\n10 F 0.78 0\n11 F 0.88 0\n12 G 0.75 None\n13 G 0.75 None\n\n", "df = pd.DataFrame({\"Names\":['A','A','A','B','B','C','D','E','E','F','F','F'], \"Values\":[0.20,1.30,1.2,0.30,0.40,1.2,0.70,0.12,1.3,0.90,0.78,0.88]})\n\ndf[\"New_col\"] = None\nfor val in set(df.Names):\n flags = [True if x>0.75 else False for x in df[df['Names']==val].Values ]\n \n if sum(flags)==0: \n df.loc[ df['Names']==val, \"New_col\"] = 2\n \n elif sum(flags)==len(df[df['Names']==val]): \n df.loc[ df['Names']==val, \"New_col\"] = 0\n \n else:\n df.loc[ df['Names']==val, \"New_col\"] = 1\n\nOutput:\n Names Values New_col\n0 A 0.20 1\n1 A 1.30 1\n2 A 1.20 1\n3 B 0.30 2\n4 B 0.40 2\n5 C 1.20 0\n6 D 0.70 2\n7 E 0.12 1\n8 E 1.30 1\n9 F 0.90 0\n10 F 0.78 0\n11 F 0.88 0\n\nRespect to your question the values for the \"F\" Nnames columns should be 0 instead of 2\n", "Am a bit late to the party, but you could use a groupby approach:\ndf = df.merge(df.groupby(by=\"Names\").apply(lambda x: 0 if all(x['Values']>0.75) else (2 if all(x['Values']<0.75) else 1)).reset_index())\n\nHere is the full code:\nimport pandas as pd\nimport numpy as np\n\ndf = pd.DataFrame({ 'Names': ['A', 'A', 'A', 'B', 'B', 'C', 'D', 'E', 'E', 'F', 'F', 'F'],\n 'Values': [0.2, 1.3, 1.2, 0.3, 0.4, 1.2, 0.7, 0.12, 1.3, 0.9, 0.78, 0.88]})\n\ndf = df.merge(df.groupby(by=\"Names\").apply(lambda x: 0 if all(x['Values']>0.75) else (2 if all(x['Values']<0.75) else 1)).reset_index())\ndf.columns = ['Names', 'Values', 'New_col']\n\nprint(df)\n\nOUTPUT:\n Names Values New_col\n0 A 0.20 1\n1 A 1.30 1\n2 A 1.20 1\n3 B 0.30 2\n4 B 0.40 2\n5 C 1.20 0\n6 D 0.70 2\n7 E 0.12 1\n8 E 1.30 1\n9 F 0.90 0\n10 F 0.78 0\n11 F 0.88 0\n\n" ]
[ 2, 1, 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074638806_pandas_python.txt
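A vectorised sketch of the grouping logic from the question above, using groupby(...).transform("any") instead of building name lists. It follows the accepted answer's reading of the rule (strictly greater than 0.75 counts as "above", everything else as "below"), so group F comes out as 0.

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Names":  list("AAABBCDEEFFF"),
    "Values": [0.20, 1.30, 1.2, 0.30, 0.40, 1.2, 0.70, 0.12, 1.3, 0.90, 0.78, 0.88],
})

# For each name: does the group contain at least one value above / not above 0.75?
above = (df["Values"] > 0.75).groupby(df["Names"]).transform("any")
below = (df["Values"] <= 0.75).groupby(df["Names"]).transform("any")

df["New_col"] = np.select([above & below, above & ~below], [1, 0], default=2)
print(df)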
Q: replace last occurrence if equal the first I have df like: value 0 yes 1 nan 2 no 3 nan 4 yes 5 no 6 yes 7 nan 8 nan 9 nan I do not have a guarantee that the first not nan value,yes, will be at the first row. It could as well start at later index. I need to check if the first occurrence of string that is not Nan, equals the last string that is not nan, and if so, set it to nan. Here, index 6 equals index 0, means we need to set it to nan and result in : value 0 yes 1 nan 2 no 3 nan 4 yes 5 no 6 nan #set to nan since equals first non Nan value 7 nan 8 nan 9 nan A: Use Series.first_valid_index and Series.last_valid_index for indices first and last non missing values, get values by DataFrame.loc, last use if-else statement for set values by scalars: first_idx = df['value'].first_valid_index() last_idx = df['value'].last_valid_index() first = df.loc[first_idx, 'value'] last = df.loc[first_idx, 'value'] df.loc[last_idx, 'value'] = np.nan if first == last else last Or assign only if True : if first == last: df.loc[last_idx, 'value'] = np.nan print (df) value 0 yes 1 NaN 2 no 3 NaN 4 yes 5 no 6 NaN 7 NaN 8 NaN 9 NaN If only one non missing value (and avoid replacement) also test if not equal indices: print (df) value 0 yes 1 NaN 2 NaN first_idx = df['value'].first_valid_index() last_idx = df['value'].last_valid_index() first = df.loc[first_idx, 'value'] last = df.loc[first_idx, 'value'] df.loc[last_idx, 'value'] = np.nan if (first == last) and (first_idx != last_idx) else last print (df) value 0 yes 1 NaN 2 NaN
replace last occurrence if equal the first
I have df like: value 0 yes 1 nan 2 no 3 nan 4 yes 5 no 6 yes 7 nan 8 nan 9 nan I do not have a guarantee that the first not nan value,yes, will be at the first row. It could as well start at later index. I need to check if the first occurrence of string that is not Nan, equals the last string that is not nan, and if so, set it to nan. Here, index 6 equals index 0, means we need to set it to nan and result in : value 0 yes 1 nan 2 no 3 nan 4 yes 5 no 6 nan #set to nan since equals first non Nan value 7 nan 8 nan 9 nan
[ "Use Series.first_valid_index and\nSeries.last_valid_index for indices first and last non missing values, get values by DataFrame.loc, last use if-else statement for set values by scalars:\nfirst_idx = df['value'].first_valid_index()\nlast_idx = df['value'].last_valid_index()\nfirst = df.loc[first_idx, 'value']\nlast = df.loc[first_idx, 'value']\n\ndf.loc[last_idx, 'value'] = np.nan if first == last else last\n\nOr assign only if True :\nif first == last:\n df.loc[last_idx, 'value'] = np.nan \n\n\nprint (df)\n value\n0 yes\n1 NaN\n2 no\n3 NaN\n4 yes\n5 no\n6 NaN\n7 NaN\n8 NaN\n9 NaN\n\nIf only one non missing value (and avoid replacement) also test if not equal indices:\nprint (df)\n value\n0 yes\n1 NaN\n2 NaN\n\n\nfirst_idx = df['value'].first_valid_index()\nlast_idx = df['value'].last_valid_index()\nfirst = df.loc[first_idx, 'value']\nlast = df.loc[first_idx, 'value']\n\ndf.loc[last_idx, 'value'] = np.nan if (first == last) and (first_idx != last_idx) else last\nprint (df)\n value\n0 yes\n1 NaN\n2 NaN\n\n" ]
[ 2 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074639669_pandas_python.txt
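A condensed sketch of the same first/last-valid-index idea for the question above; note that it reads the last value from last_idx (the answer reads both values from first_idx) and that it also skips the replacement when the column has only one non-missing value.

import numpy as np
import pandas as pd

df = pd.DataFrame({"value": ["yes", np.nan, "no", np.nan, "yes",
                             "no", "yes", np.nan, np.nan, np.nan]})

first_idx = df["value"].first_valid_index()
last_idx = df["value"].last_valid_index()

if (first_idx is not None
        and first_idx != last_idx
        and df.loc[first_idx, "value"] == df.loc[last_idx, "value"]):
    df.loc[last_idx, "value"] = np.nan   # last non-missing value equals the first -> blank it

print(df)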
Q: Is there a way to send multiple image to tf serving model? I'm trying this below code but I got unexpected error This is my code for getting input and pass it to model. def get_instances(dir = '/test_data'): instances = list() file_names = [file.split('/')[-1] for file in os.listdir(dir)] for file in file_names : image = nv.imread(os.path.join(dir ,file), resize = (300,300), color_mode='rgb',normalize=True) image = combine_rgb_xyz(image) #image = nv.expand_dims(image,axis=0) instances.append(image) return np.array(instances) ,file_names After I send these data to model with below code : def make_prediction(instances): url = get_url() data = json.dumps({"signature_name": "serving_default", "instances": instances.tolist()}) headers = {"content-type": "application/json"} json_response = requests.post(url, data=data, headers=headers) predictions = json.loads(json_response.text)['predictons'] return predictions but I get unexpected output : 'predictons' A: You can use tf.make_tensor_proto and tf.make_ndarray for image numpy array to/from tensor conversions. Then, you can use 'serving_default' signature to make predictions and pass multiple images to serving_default request to achieve faster results. 'serving_default' signature supports multiple images to be processed at once since it has 4d input (batch, height, width, channel). Please refer this guide to pass multiple images to TF serving.
Is there a way to send multiple image to tf serving model?
I'm trying this below code but I got unexpected error This is my code for getting input and pass it to model. def get_instances(dir = '/test_data'): instances = list() file_names = [file.split('/')[-1] for file in os.listdir(dir)] for file in file_names : image = nv.imread(os.path.join(dir ,file), resize = (300,300), color_mode='rgb',normalize=True) image = combine_rgb_xyz(image) #image = nv.expand_dims(image,axis=0) instances.append(image) return np.array(instances) ,file_names After I send these data to model with below code : def make_prediction(instances): url = get_url() data = json.dumps({"signature_name": "serving_default", "instances": instances.tolist()}) headers = {"content-type": "application/json"} json_response = requests.post(url, data=data, headers=headers) predictions = json.loads(json_response.text)['predictons'] return predictions but I get unexpected output : 'predictons'
[ "You can use tf.make_tensor_proto and tf.make_ndarray for image numpy array to/from tensor conversions. Then, you can use 'serving_default' signature to make predictions and pass multiple images to serving_default request to achieve faster results. 'serving_default' signature supports multiple images to be processed at once since it has 4d input (batch, height, width, channel).\nPlease refer this guide to pass multiple images to TF serving.\n" ]
[ 0 ]
[]
[]
[ "json", "python", "tensorflow", "tensorflow_serving" ]
stackoverflow_0072728351_json_python_tensorflow_tensorflow_serving.txt
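A sketch of a batched REST request to TensorFlow Serving for the question above. The model URL and the preprocessing are hypothetical placeholders; the two points it illustrates are that "instances" takes one list entry per image and that the response key is spelled "predictions" (the failure in the question comes from the 'predictons' typo).

import json
import numpy as np
import requests

def predict_batch(images, url):
    """images: numpy array shaped (batch, height, width, channels), already preprocessed."""
    payload = json.dumps({
        "signature_name": "serving_default",
        "instances": images.tolist(),          # one list entry per image
    })
    response = requests.post(url, data=payload,
                             headers={"content-type": "application/json"})
    response.raise_for_status()                # surface HTTP errors instead of a KeyError later
    body = response.json()
    if "predictions" not in body:              # note the spelling: 'predictions'
        raise RuntimeError(f"Unexpected response: {body}")
    return body["predictions"]

# Hypothetical usage (model name and port are assumptions, not from the question):
# batch = np.stack([img1, img2, img3])         # each img is (300, 300, 3)
# preds = predict_batch(batch, "http://localhost:8501/v1/models/my_model:predict")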
Q: How to apply filtering over order by and distict in django orm? Name email date _________________________________________________ Dane [email protected] 2017-06-20 Dane [email protected] 2017-06-20 Dane [email protected] 2017-06-20 Dane [email protected] 2017-06-20 Kim [email protected] 2017-06-10 Hong [email protected] 2016-06-25 Hong [email protected] 2016-06-25 Hong [email protected] 2016-06-25 Dane [email protected] 2017-06-04 Susan [email protected] 2017-05-21 Dane [email protected] 2017-02-01 Susan [email protected] 2017-05-20 I can get the first entries of each unique by using EmailModel.objects.all().order_by('date').distinct('Name'). this returns Name email date _________________________________________________ Dane [email protected] 2017-06-20 Kim [email protected] 2017-06-10 Hong [email protected] 2016-06-25 Susan [email protected] 2017-05-21 What i want to do here is to only include it in the result if the very first entry is something different like more filtering over it? for ex- i don't want to include it in the result if the first email id is [email protected] for Dave and only include it if it is something different. Expected result: if the email for Dane is not [email protected] then Name email date _________________________________________________ Kim [email protected] 2017-06-10 Hong [email protected] 2016-06-25 Susan [email protected] 2017-05-21 A: You can use F() expressions with __istartswith lookup to exclude those emails which starts with their name so: EmailModel.objects.exclude(email__istartswith=F('Name')).order_by("date").distinct("Name") Or you'd like to avoid the Name in entire email so you can use __icontains lookup so: EmailModel.objects.exclude(email__icontains=F('Name')).order_by("date").distinct("Name") A: Well, You can try something crazy like this: variable = modelname.objects.filter( ....=....).values( 'mention_the_column_names_if_you_need_any' ).order_by( 'mention_the_column_name_to_be_sorted' ).distinct() Reply to this message if the issue still persist.
How to apply filtering over order by and distict in django orm?
Name email date _________________________________________________ Dane [email protected] 2017-06-20 Dane [email protected] 2017-06-20 Dane [email protected] 2017-06-20 Dane [email protected] 2017-06-20 Kim [email protected] 2017-06-10 Hong [email protected] 2016-06-25 Hong [email protected] 2016-06-25 Hong [email protected] 2016-06-25 Dane [email protected] 2017-06-04 Susan [email protected] 2017-05-21 Dane [email protected] 2017-02-01 Susan [email protected] 2017-05-20 I can get the first entries of each unique by using EmailModel.objects.all().order_by('date').distinct('Name'). this returns Name email date _________________________________________________ Dane [email protected] 2017-06-20 Kim [email protected] 2017-06-10 Hong [email protected] 2016-06-25 Susan [email protected] 2017-05-21 What i want to do here is to only include it in the result if the very first entry is something different like more filtering over it? for ex- i don't want to include it in the result if the first email id is [email protected] for Dave and only include it if it is something different. Expected result: if the email for Dane is not [email protected] then Name email date _________________________________________________ Kim [email protected] 2017-06-10 Hong [email protected] 2016-06-25 Susan [email protected] 2017-05-21
[ "You can use F() expressions with __istartswith lookup to exclude those emails which starts with their name so:\nEmailModel.objects.exclude(email__istartswith=F('Name')).order_by(\"date\").distinct(\"Name\")\n\nOr you'd like to avoid the Name in entire email so you can use __icontains lookup so:\nEmailModel.objects.exclude(email__icontains=F('Name')).order_by(\"date\").distinct(\"Name\")\n\n", "Well, You can try something crazy like this:\n variable = modelname.objects.filter(\n ....=....).values(\n 'mention_the_column_names_if_you_need_any'\n ).order_by(\n 'mention_the_column_name_to_be_sorted'\n ).distinct()\n\nReply to this message if the issue still persist.\n" ]
[ 0, 0 ]
[ "Supposed your model name is User\nSo for order by with filter you can used\n\n\nUser.object.filter(parameters).order_by(parameters)\n\n\n\n" ]
[ -2 ]
[ "django", "django_models", "django_orm", "django_queryset", "python" ]
stackoverflow_0074623133_django_django_models_django_orm_django_queryset_python.txt
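A two-step sketch for the Django question above, assuming the model is called EmailModel with fields Name, email and date, and a PostgreSQL backend (required for distinct("Name")). The address to exclude is a made-up placeholder, since the real one is redacted in the question.

# Step 1: take the latest row per Name (the distinct-on trick from the question,
# with ORDER BY starting on the distinct field as PostgreSQL requires).
latest_per_name = (EmailModel.objects
                   .order_by("Name", "-date")
                   .distinct("Name"))

# Step 2: collect the names whose latest row carries the unwanted address...
unwanted_names = [row.Name for row in latest_per_name
                  if row.email == "[email protected]"]   # placeholder address

# ...and rebuild the per-name result without them.
result = (EmailModel.objects
          .exclude(Name__in=unwanted_names)
          .order_by("Name", "-date")
          .distinct("Name"))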
Q: Use Python code to show a list of users that have Owner permissions to my Azure subscription I am trying to find all the users in my subscription that have the role of Owner. I have to use python to do this and Microsoft doesn't seem to have any working options for this. I've ran several options with the Azure SDK for Python class at azure.mgmt.authorization to try to return an appropriate result but nothing is working. Here are 2 options I've tried so far: from azure.mgmt.authorization import AuthorizationManagementClient authorization_client = AuthorizationManagementClient(credentials, '<your subscription guid>') roles = authorization_client.role_assignments.list() for role in roles: print(role) This returns all of the role assignments in the subscription but only displays the RoleAssignmentID. It does not show the username or role definition name (owner). from azure.mgmt.authorization import AuthorizationManagementClient authorization_client = AuthorizationManagementClient(credentials, '<your subscription guid>') roles = authorization_client.role_definitions.list( scope=subscription_id, filter='roleName eq \'Owner\'' ) for role in roles: print(role) This returns details on the Owner role but doesn't show who has this role in my subscription. Any help would be great! A: To list all the users in a subscription that have owner role: This Azure PowerShell command get-azroleassignment provides all the owner roles. It worked as below: get-azroleassignment -RoleDefinitionId "xxxxxxxx" -Scope "/subscriptions/<subscriptionID>" To display name & signin name(mail) use select operator: get-azroleassignment -RoleDefinitionId "xxxxxxxx" -Scope "/subscriptions/<subscriptionID>" | select Displayname, signinName Alternatively, if you are trying to list out using azure python sdk, below given code will work given by @colbydh. I modified and created a below script: from azure.mgmt.authorization import AuthorizationManagementClient from azure.identity import AzureCliCredential credentials = AzureCliCredential() authorizationClient = AuthorizationManagementClient(credentials, subscriptionID='<subscriptionID>') def list_of_owners(client): results = [] owners_list = [] subscription_scope = '/subscriptions/<subscriptionID>' owner_role = '8e3af657-a8ff-xxxxxxxxxxxxxxxxxxxxx' roles = client.role_assignments.list_for_scope( scope = subscriptionScope, filter = 'atScope()' ) for role in roles: role_name_id = role.name role_assignment_details client.role_assignments.get( scope = subscriptionScope, role_assignment_name = role_name_id ) roleid = role_assignment_details.properties.role_definition_id if owner_role in role_ids: owner_role_list = roleid.count(owner_role) print(owner_role_list) Refer Pyquestions article.
Use Python code to show a list of users that have Owner permissions to my Azure subscription
I am trying to find all the users in my subscription that have the role of Owner. I have to use python to do this and Microsoft doesn't seem to have any working options for this. I've ran several options with the Azure SDK for Python class at azure.mgmt.authorization to try to return an appropriate result but nothing is working. Here are 2 options I've tried so far: from azure.mgmt.authorization import AuthorizationManagementClient authorization_client = AuthorizationManagementClient(credentials, '<your subscription guid>') roles = authorization_client.role_assignments.list() for role in roles: print(role) This returns all of the role assignments in the subscription but only displays the RoleAssignmentID. It does not show the username or role definition name (owner). from azure.mgmt.authorization import AuthorizationManagementClient authorization_client = AuthorizationManagementClient(credentials, '<your subscription guid>') roles = authorization_client.role_definitions.list( scope=subscription_id, filter='roleName eq \'Owner\'' ) for role in roles: print(role) This returns details on the Owner role but doesn't show who has this role in my subscription. Any help would be great!
[ "To list all the users in a subscription that have owner role:\nThis Azure PowerShell command get-azroleassignment provides all the owner roles. It worked as below:\nget-azroleassignment -RoleDefinitionId \"xxxxxxxx\" -Scope \"/subscriptions/<subscriptionID>\"\n\n\nTo display name & signin name(mail) use select operator:\n get-azroleassignment -RoleDefinitionId \"xxxxxxxx\" -Scope \"/subscriptions/<subscriptionID>\" | select Displayname, signinName\n\n\nAlternatively, if you are trying to list out using azure python sdk, below given code will work given by @colbydh.\nI modified and created a below script:\n\nfrom azure.mgmt.authorization import AuthorizationManagementClient\nfrom azure.identity import AzureCliCredential\ncredentials = AzureCliCredential()\nauthorizationClient = AuthorizationManagementClient(credentials, subscriptionID='<subscriptionID>')\ndef list_of_owners(client):\nresults = []\nowners_list = []\nsubscription_scope = '/subscriptions/<subscriptionID>'\nowner_role = '8e3af657-a8ff-xxxxxxxxxxxxxxxxxxxxx' \nroles = client.role_assignments.list_for_scope(\nscope = subscriptionScope,\nfilter = 'atScope()'\n)\nfor role in roles:\nrole_name_id = role.name\nrole_assignment_details client.role_assignments.get(\nscope = subscriptionScope,\nrole_assignment_name = role_name_id\n)\nroleid = role_assignment_details.properties.role_definition_id\nif owner_role in role_ids:\nowner_role_list = roleid.count(owner_role)\nprint(owner_role_list)\n\n\nRefer Pyquestions article.\n" ]
[ 0 ]
[]
[]
[ "azure", "python", "sdk" ]
stackoverflow_0074482794_azure_python_sdk.txt
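A cleaned-up sketch of the listing loop from the answer above (the snippet there has a missing "=" and mismatched variable names). The Owner role definition GUID is the well-known built-in one; depending on the azure-mgmt-authorization version the assignment fields are either top-level or nested under .properties, which the getattr below handles. Resolving principal_id to a display name or e-mail still needs a separate Microsoft Graph call, which is outside this sketch.

from azure.identity import AzureCliCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "<your subscription guid>"
scope = f"/subscriptions/{subscription_id}"

# GUID of the built-in "Owner" role definition (the same value in every tenant).
OWNER_GUID = "8e3af657-a8ff-443c-a75c-2fe8c4bcb635"

client = AuthorizationManagementClient(AzureCliCredential(), subscription_id)

for assignment in client.role_assignments.list_for_scope(scope, filter="atScope()"):
    # Newer SDK versions expose the fields directly; older ones nest them under .properties.
    props = getattr(assignment, "properties", None) or assignment
    if props.role_definition_id.lower().endswith(OWNER_GUID):
        # principal_id is the AAD object id of the user/group/service principal.
        print(assignment.name, props.principal_id)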
Q: Pytorch LSTM and cross entropy I am working on sentiment analysis, I want to classify the output into 4 classes. For loss I am using cross-entropy. The problem is PyTorch cross-entropy needs the input of (batch_size, output) which is am having trouble with. I am taking a batch size of 12 and sequence size is 32 import torch.nn as nn class RNN(nn.Module): def __init__(self, hidden_dim = 256, input_size = 32 , num_layers = 1, num_classes=4, vocab_size = len(vocab_to_int)+1, embedding_dim=100): super().__init__() self.input_size = input_size self.hidden_dim = hidden_dim self.num_layers = num_layers self.num_classes = num_classes self.embedding = nn.Embedding(vocab_size, embedding_dim) self.lstm = nn.LSTM(embedding_dim, hidden_dim, num_layers) self.fc1 = nn.Linear(hidden_dim, 50) self.fc2 = nn.Linear(50, 4) def forward(self, x, hidden): x = self.embedding(x) x = x.view(32, 12, 100) x, hidden = self.lstm(x, hidden) x = x.contiguous().view(-1, 256) x = self.fc1(x) # output shape ([384, 50]) x = self.fc2(x) # output shape [384, 4] return x, hidden def init_hidden(self, batch_size=12): weight = next(self.parameters()).data hidden = (weight.new(self.num_layers, 12, self.hidden_dim).zero_().cuda(), weight.new(self.num_layers, 12, self.hidden_dim).zero_().cuda()) return hidden A: According to the CrossEntropyLoss docs: input has to be a Tensor of size (C) for unbatched input, (minibatch,C) [for batched input] [...] The code you provided is only the RNN class and not the data processing and the actual call to CrossEntropyLoss, but the error you stated in the comments makes me think that you didn't reshape the labels tensor to have the same size as the output from the neural network. Therefore, you'd be calculating the loss of a tensor with size (384, 4) against another tensor which I infer is of size (12, 32). Your labels tensor should be of size (384) to match the first dimension of the neural network output. Also, you don't have to manually reshape your tensors, you can reshape them after the forward() call through the torch.nn.utils.rnn.pack_padded_sequence() function. If you do apply this function to both the output of the neural network and the labels, you will have a tensor of size (384, 4) that PyTorch can handle in the call to CrossEntropyLoss. See the note in the pack_padded_sequence() function docs for more details.
Pytorch LSTM and cross entropy
I am working on sentiment analysis, I want to classify the output into 4 classes. For loss I am using cross-entropy. The problem is PyTorch cross-entropy needs the input of (batch_size, output) which is am having trouble with. I am taking a batch size of 12 and sequence size is 32 import torch.nn as nn class RNN(nn.Module): def __init__(self, hidden_dim = 256, input_size = 32 , num_layers = 1, num_classes=4, vocab_size = len(vocab_to_int)+1, embedding_dim=100): super().__init__() self.input_size = input_size self.hidden_dim = hidden_dim self.num_layers = num_layers self.num_classes = num_classes self.embedding = nn.Embedding(vocab_size, embedding_dim) self.lstm = nn.LSTM(embedding_dim, hidden_dim, num_layers) self.fc1 = nn.Linear(hidden_dim, 50) self.fc2 = nn.Linear(50, 4) def forward(self, x, hidden): x = self.embedding(x) x = x.view(32, 12, 100) x, hidden = self.lstm(x, hidden) x = x.contiguous().view(-1, 256) x = self.fc1(x) # output shape ([384, 50]) x = self.fc2(x) # output shape [384, 4] return x, hidden def init_hidden(self, batch_size=12): weight = next(self.parameters()).data hidden = (weight.new(self.num_layers, 12, self.hidden_dim).zero_().cuda(), weight.new(self.num_layers, 12, self.hidden_dim).zero_().cuda()) return hidden
[ "According to the CrossEntropyLoss docs:\n\ninput has to be a Tensor of size (C) for unbatched input, (minibatch,C) [for batched input] [...]\n\nThe code you provided is only the RNN class and not the data processing and the actual call to CrossEntropyLoss, but the error you stated in the comments makes me think that you didn't reshape the labels tensor to have the same size as the output from the neural network. Therefore, you'd be calculating the loss of a tensor with size (384, 4) against another tensor which I infer is of size (12, 32). Your labels tensor should be of size (384) to match the first dimension of the neural network output.\nAlso, you don't have to manually reshape your tensors, you can reshape them after the forward() call through the torch.nn.utils.rnn.pack_padded_sequence() function. If you do apply this function to both the output of the neural network and the labels, you will have a tensor of size (384, 4) that PyTorch can handle in the call to CrossEntropyLoss. See the note in the pack_padded_sequence() function docs for more details.\n" ]
[ 0 ]
[]
[]
[ "cross_entropy", "nlp", "numpy", "python", "pytorch" ]
stackoverflow_0067529350_cross_entropy_nlp_numpy_python_pytorch.txt
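A shape-only sketch of the contract CrossEntropyLoss expects for the question above, assuming one sentiment label per sequence (the usual setup for this task): logits of size (batch, num_classes) taken from the last LSTM time step, and labels of size (batch,). The dimensions match the question's 12x32 batches; the data is random.

import torch
import torch.nn as nn

batch_size, seq_len, embed_dim, hidden_dim, num_classes = 12, 32, 100, 256, 4

# (seq, batch, features), as in the question's x.view(32, 12, 100)
embedded = torch.randn(seq_len, batch_size, embed_dim)

lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=1)
fc = nn.Linear(hidden_dim, num_classes)

output, (h_n, c_n) = lstm(embedded)        # output: (seq, batch, hidden)
logits = fc(output[-1])                    # last time step -> (batch, num_classes) = (12, 4)

labels = torch.randint(0, num_classes, (batch_size,))   # one class index per sequence
loss = nn.CrossEntropyLoss()(logits, labels)
print(logits.shape, loss.item())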
Q: How to color second level columns based on condition Below is the script I am currently working with. I'd like to color the C column only based on the condition below. So in column C, anything positive should be colored green, anything negative with red, and lastly when 0, it would be yellow. I've attached the expected outcome. Any help would be greatly appreciated. import pandas as pd df = pd.DataFrame(data=[[100,200,400,500,222,222], [77,28,110,211,222,222], [11,22,33,11,22,33],[213,124,136,147,54,56]]) df.columns = pd.MultiIndex.from_product([['x', 'y', 'z'], list('ab')]) for c in df.columns.levels[0]: df[(c, 'c')] = df[(c, 'a')].sub(df[(c, 'b')]) df = df.sort_index(axis=1) def colors(i): if i < 0: return 'background: red' elif i > 0: return 'background: green' elif i == 0: return 'background: yellow' else: '' idx = pd.IndexSlice sliced=df.loc[idx[:],idx[:,['c']]] df.style.apply(colors, subset=sliced) A: Just added a few more lines, you can try something like below #your code df = pd.DataFrame(data=[[100,200,400,500,222,222], [77,28,110,211,222,222], [11,22,33,11,22,33],[213,124,136,147,54,56]]) df.columns = pd.MultiIndex.from_product([['x', 'y', 'z'], list('ab')]) for c in df.columns.levels[0]: df[(c, 'c')] = df[(c, 'a')].sub(df[(c, 'b')]) df = df.sort_index(axis=1) #made some changes inorder to get the background color def colors(i): if i < 0: return 'background-color: red' elif i > 0: return 'background-color: green' elif i == 0: return 'background-color: yellow' else: '' #styling the specific columns req_cols = [col for col in df.columns if col[1]=='c'] df.style.applymap(lambda x: colors(x), subset=req_cols) Output: Hope this code might helps
How to color second level columns based on condition
Below is the script I am currently working with. I'd like to color the C column only based on the condition below. So in column C, anything positive should be colored green, anything negative with red, and lastly when 0, it would be yellow. I've attached the expected outcome. Any help would be greatly appreciated. import pandas as pd df = pd.DataFrame(data=[[100,200,400,500,222,222], [77,28,110,211,222,222], [11,22,33,11,22,33],[213,124,136,147,54,56]]) df.columns = pd.MultiIndex.from_product([['x', 'y', 'z'], list('ab')]) for c in df.columns.levels[0]: df[(c, 'c')] = df[(c, 'a')].sub(df[(c, 'b')]) df = df.sort_index(axis=1) def colors(i): if i < 0: return 'background: red' elif i > 0: return 'background: green' elif i == 0: return 'background: yellow' else: '' idx = pd.IndexSlice sliced=df.loc[idx[:],idx[:,['c']]] df.style.apply(colors, subset=sliced)
[ "Just added a few more lines, you can try something like below\n#your code\ndf = pd.DataFrame(data=[[100,200,400,500,222,222], [77,28,110,211,222,222], [11,22,33,11,22,33],[213,124,136,147,54,56]])\ndf.columns = pd.MultiIndex.from_product([['x', 'y', 'z'], list('ab')])\n\nfor c in df.columns.levels[0]:\n df[(c, 'c')] = df[(c, 'a')].sub(df[(c, 'b')])\n df = df.sort_index(axis=1)\n\n#made some changes inorder to get the background color\ndef colors(i):\n if i < 0:\n return 'background-color: red'\n elif i > 0:\n return 'background-color: green'\n elif i == 0:\n return 'background-color: yellow'\n else:\n ''\n\n#styling the specific columns\nreq_cols = [col for col in df.columns if col[1]=='c']\ndf.style.applymap(lambda x: colors(x), subset=req_cols)\n\nOutput:\n\nHope this code might helps\n" ]
[ 1 ]
[]
[]
[ "dataframe", "multi_index", "pandas", "python" ]
stackoverflow_0074621869_dataframe_multi_index_pandas_python.txt
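A sketch of the styling call for the question above: subset expects a .loc-style indexer (for example pd.IndexSlice), not a pre-sliced DataFrame, and per-cell styling goes through applymap (renamed Styler.map in pandas 2.1) rather than apply. The frame is rebuilt with two rows just to keep the example short.

import pandas as pd

df = pd.DataFrame([[100, 200, 400, 500, 222, 222],
                   [77, 28, 110, 211, 222, 222]],
                  columns=pd.MultiIndex.from_product([["x", "y", "z"], list("ab")]))
for top in df.columns.levels[0]:
    df[(top, "c")] = df[(top, "a")] - df[(top, "b")]
df = df.sort_index(axis=1)

def colors(v):
    if v < 0:
        return "background-color: red"
    if v > 0:
        return "background-color: green"
    return "background-color: yellow"

idx = pd.IndexSlice
styled = df.style.applymap(colors, subset=idx[:, idx[:, "c"]])  # all rows, second-level 'c' columns
styled.to_html("styled.html")   # or simply display `styled` in a notebook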
Q: Pascal Triangle Tkinter Python i'm new to python so i would be very grateful if you could help me. i need to make a program that gives me pascal triangle lines by writing a number and clicking a button i have a problem with pulling out a number from entry.get. here is my code: from tkinter import * from tkinter import messagebox, ttk root=Tk() w = (root.winfo_screenwidth() // 2) - 500 h = (root.winfo_screenheight() // 2) - 250 root.geometry('700x500+{}+{}'.format(w,h)) root.title("Главное окно") def PrintPasTriangle(rows): row = [1] for i in range(rows): print(row) row = [sum(x) for x in zip([0]+row, row+[0])] label = Label(text = PrintPasTriangle(10)) def show_message(): e1 = entry e1.insert = (PrintPasTriangle(10)) entry = ttk.Entry() entry.pack(anchor=NW, padx=6, pady=6) btn = ttk.Button(text="Click", command=show_message) btn.pack(anchor=NW, padx=6, pady=6) label = ttk.Label() label.pack(anchor=NW, padx=6, pady=9) label = Label(text = PrintPasTriangle(10)) label.pack() root.mainloop() A: Edit: Redo: To get input working. I commented out line 34 and 34. I also removed show_message function. It is up to you. I added from math import factorial Then I changed formula for pascal triangle in PrintPasTriangle function. Try this: from tkinter import * from tkinter import messagebox, ttk from math import factorial root=Tk() w = (root.winfo_screenwidth() // 2) - 500 h = (root.winfo_screenheight() // 2) - 250 root.geometry('700x500+{}+{}'.format(w,h)) root.title("Главное окно") ent = IntVar() def PrintPasTriangle(rows): for i in range(rows): for j in range(rows-i+1): print(end=" ") for j in range(i+1): print(factorial(i)//(factorial(j)*factorial(i-j)), end=" ") print() label1.config(text=f"Pascal Triangle: {ent.get()}", font=("Calibri,15,Bold")) entry = ttk.Entry(root, textvariable=ent) entry.pack(anchor=NW, padx=6, pady=6) btn = ttk.Button(root,text="Click", command=lambda: PrintPasTriangle(ent.get())) btn.pack(anchor=NW, padx=6, pady=6) label1 = ttk.Label(root, text='label') label1.pack(anchor=NW, padx=6, pady=9) #label2 = Label(root, text = str(ent.get())) #label2.pack() root.mainloop() Result screenshot:
Pascal Triangle Tkinter Python
i'm new to python so i would be very grateful if you could help me. i need to make a program that gives me pascal triangle lines by writing a number and clicking a button i have a problem with pulling out a number from entry.get. here is my code: from tkinter import * from tkinter import messagebox, ttk root=Tk() w = (root.winfo_screenwidth() // 2) - 500 h = (root.winfo_screenheight() // 2) - 250 root.geometry('700x500+{}+{}'.format(w,h)) root.title("Главное окно") def PrintPasTriangle(rows): row = [1] for i in range(rows): print(row) row = [sum(x) for x in zip([0]+row, row+[0])] label = Label(text = PrintPasTriangle(10)) def show_message(): e1 = entry e1.insert = (PrintPasTriangle(10)) entry = ttk.Entry() entry.pack(anchor=NW, padx=6, pady=6) btn = ttk.Button(text="Click", command=show_message) btn.pack(anchor=NW, padx=6, pady=6) label = ttk.Label() label.pack(anchor=NW, padx=6, pady=9) label = Label(text = PrintPasTriangle(10)) label.pack() root.mainloop()
[ "Edit: Redo: To get input working. I commented out line 34 and 34. I also removed show_message function. It is up to you.\nI added from math import factorial Then I changed formula for pascal triangle in PrintPasTriangle function.\nTry this:\nfrom tkinter import *\nfrom tkinter import messagebox, ttk\nfrom math import factorial\n\nroot=Tk()\nw = (root.winfo_screenwidth() // 2) - 500\nh = (root.winfo_screenheight() // 2) - 250\nroot.geometry('700x500+{}+{}'.format(w,h))\nroot.title(\"Главное окно\")\n\nent = IntVar()\n \ndef PrintPasTriangle(rows): \n for i in range(rows): \n for j in range(rows-i+1):\n print(end=\" \")\n\n for j in range(i+1):\n print(factorial(i)//(factorial(j)*factorial(i-j)), end=\" \")\n\n print() \n label1.config(text=f\"Pascal Triangle: {ent.get()}\", font=(\"Calibri,15,Bold\"))\n\n\nentry = ttk.Entry(root, textvariable=ent)\nentry.pack(anchor=NW, padx=6, pady=6)\n\nbtn = ttk.Button(root,text=\"Click\", command=lambda: PrintPasTriangle(ent.get()))\nbtn.pack(anchor=NW, padx=6, pady=6)\n\nlabel1 = ttk.Label(root, text='label')\nlabel1.pack(anchor=NW, padx=6, pady=9)\n\n#label2 = Label(root, text = str(ent.get()))\n#label2.pack()\n\nroot.mainloop()\n\nResult screenshot:\n\n" ]
[ 0 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074636867_python_tkinter.txt
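A minimal sketch of the Entry-to-Label round trip for the tkinter question above: entry.get() returns a string, so it is converted with int() before building the triangle, and the rows are shown in a Label instead of printed. Widget names and layout are simplified, not the asker's exact window.

import tkinter as tk
from tkinter import ttk

def pascal_rows(n):
    row, rows = [1], []
    for _ in range(n):
        rows.append(" ".join(map(str, row)))
        row = [a + b for a, b in zip([0] + row, row + [0])]
    return "\n".join(rows)

root = tk.Tk()

entry = ttk.Entry(root)
entry.pack(anchor="nw", padx=6, pady=6)

result = ttk.Label(root, justify="left")
result.pack(anchor="nw", padx=6, pady=6)

def on_click():
    try:
        n = int(entry.get())        # entry.get() returns a string -> convert it
    except ValueError:
        result.config(text="Please enter a whole number")
        return
    result.config(text=pascal_rows(n))

ttk.Button(root, text="Click", command=on_click).pack(anchor="nw", padx=6, pady=6)

root.mainloop()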
Q: python google_auth_oauthlib Out Of Band (OOB) error I am trying to run an internal app (this is a simple script) and authenticating using OAuth with python. But I run into the following error when I click on the link to authenticate my user after running my script : The out-of-band (OOB) flow has been blocked in order to keep users secure. Follow the Out-of-Band (OOB) flow migration guide linked in the developer docs below to migrate your app to an alternative method. Détails de la requête : redirect_uri=urn:ietf:wg:oauth:2.0:oob Here is my code : # More code import google_auth_oauthlib.flow # More code scopes = ["https://www.googleapis.com/auth/youtube.upload"] # More code def upload(): # Disable OAuthlib's HTTPS verification when running locally. # *DO NOT* leave this option enabled in production. # os.environ["OAUTHLIB_INSECURE_TRANSPORT"] = "1" client_secrets_file = os.path.abspath(os.path.join(os.path.dirname(__file__), "client_secrets.json")) api_service_name = "youtube" api_version = "v3" # Get credentials and create an API client flow = google_auth_oauthlib.flow.InstalledAppFlow.from_client_secrets_file( client_secrets_file, scopes) credentials = flow.run_console() And my client_secrets.json : { "installed": { "client_id": "**********", "project_id": "MY-PROJECT-ID", "auth_uri": "https://accounts.google.com/o/oauth2/auth", "token_uri": "https://oauth2.googleapis.com/token", "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", "client_secret": "*****", "redirect_uris": [ "http://localhost:3000" ] } } But the URL to retrieve the code to authenticate is still : https://accounts.google.com/o/oauth2/auth?response_type=code&client_id=*********&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fyoutube.upload&state=******&prompt=consent&access_type=offline As you can see the url still uses oob and I didn't find any way to make it change to use localhost instead. I run pip library versions : pip list oauthlib 3.2.2 requests-oauthlib 1.3.1 google-api-python-client 2.66.0 google-auth 2.14.1 google-auth-httplib2 0.1.0 google-auth-oauthlib 0.7.1 googleapis-common-protos 1.57.0 http-client 0.1.22 httplib2 0.21.0 and using python 3.10.7 I already check this doc, for desktop app it just says "use loopback url": https://developers.google.com/identity/protocols/oauth2/resources/oob-migration I also read this to use address loopback properly but didn't work: https://developers.google.com/identity/protocols/oauth2/native-app#handlingresponse I tried running a local webserver launched with python -m http.server 3000 to listen events on the loopback but it didn't work Do you have any idea on how to make this work? Thank you so much! A: First off make sure that you have updated the client library. Im not sure which version you are running but the library was fixed about a year ago. Second remove the port. "redirect_uris":["http://localhost"] Third If that doesn't work here is my sample for videos.insert it should work out of the box. 
# To install the Google client library for Python, run the following command: # pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib from __future__ import print_function import os.path import google.auth.exceptions from google.auth.transport.requests import Request from google.oauth2.credentials import Credentials from google_auth_oauthlib.flow import InstalledAppFlow from googleapiclient.discovery import build from googleapiclient.errors import HttpError from googleapiclient.http import MediaFileUpload # If modifying these scopes, delete the file token.json. SCOPES = ['https://www.googleapis.com/auth/youtube'] def main(): """Shows basic usage of the YouTube v3 API. Uploads a private video to YouTube """ creds = None # The file token.json stores the user's access and refresh tokens, and is # created automatically when the authorization flow completes for the first # time. if os.path.exists('token.json'): try: creds = Credentials.from_authorized_user_file('token.json', SCOPES) creds.refresh(Request()) except google.auth.exceptions.RefreshError as error: # if refresh token fails, reset creds to none. #creds = None print(f'An error occurred: {error}') # If there are no (valid) credentials available, let the user log in. if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( 'C:\YouTube\dev\credentials.json', SCOPES) creds = flow.run_local_server(port=0) # Save the credentials for the next run with open('token.json', 'w') as token: token.write(creds.to_json()) try: service = build('youtube', 'v3', credentials=creds) body = dict( snippet=dict( title="Test", description="test", tags="tes" ), status=dict( privacyStatus="private" ) ) media = MediaFileUpload("dummyvideo.mkv", chunksize=-1, resumable=True) results = service.videos().insert( part=",".join(body.keys()), body=body, media_body=media).execute() print(F'video ID: {results.get("id")}') except HttpError as error: # TODO(developer) - Handle errors from drive API. print(f'An error occurred: {error}') if __name__ == '__main__': main() version of google api python client I am running these versions of the relevant google apis python library pip list google-api-core 2.10.1 google-api-python-client 2.62.0 requests-oauthlib 1.3.1 I think oauthlib may be part of it as well.
python google_auth_oauthlib Out Of Band (OOB) error
I am trying to run an internal app (this is a simple script) and authenticating using OAuth with python. But I run into the following error when I click on the link to authenticate my user after running my script : The out-of-band (OOB) flow has been blocked in order to keep users secure. Follow the Out-of-Band (OOB) flow migration guide linked in the developer docs below to migrate your app to an alternative method. Détails de la requête : redirect_uri=urn:ietf:wg:oauth:2.0:oob Here is my code : # More code import google_auth_oauthlib.flow # More code scopes = ["https://www.googleapis.com/auth/youtube.upload"] # More code def upload(): # Disable OAuthlib's HTTPS verification when running locally. # *DO NOT* leave this option enabled in production. # os.environ["OAUTHLIB_INSECURE_TRANSPORT"] = "1" client_secrets_file = os.path.abspath(os.path.join(os.path.dirname(__file__), "client_secrets.json")) api_service_name = "youtube" api_version = "v3" # Get credentials and create an API client flow = google_auth_oauthlib.flow.InstalledAppFlow.from_client_secrets_file( client_secrets_file, scopes) credentials = flow.run_console() And my client_secrets.json : { "installed": { "client_id": "**********", "project_id": "MY-PROJECT-ID", "auth_uri": "https://accounts.google.com/o/oauth2/auth", "token_uri": "https://oauth2.googleapis.com/token", "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", "client_secret": "*****", "redirect_uris": [ "http://localhost:3000" ] } } But the URL to retrieve the code to authenticate is still : https://accounts.google.com/o/oauth2/auth?response_type=code&client_id=*********&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fyoutube.upload&state=******&prompt=consent&access_type=offline As you can see the url still uses oob and I didn't find any way to make it change to use localhost instead. I run pip library versions : pip list oauthlib 3.2.2 requests-oauthlib 1.3.1 google-api-python-client 2.66.0 google-auth 2.14.1 google-auth-httplib2 0.1.0 google-auth-oauthlib 0.7.1 googleapis-common-protos 1.57.0 http-client 0.1.22 httplib2 0.21.0 and using python 3.10.7 I already check this doc, for desktop app it just says "use loopback url": https://developers.google.com/identity/protocols/oauth2/resources/oob-migration I also read this to use address loopback properly but didn't work: https://developers.google.com/identity/protocols/oauth2/native-app#handlingresponse I tried running a local webserver launched with python -m http.server 3000 to listen events on the loopback but it didn't work Do you have any idea on how to make this work? Thank you so much!
[ "First off make sure that you have updated the client library. Im not sure which version you are running but the library was fixed about a year ago.\nSecond remove the port.\n\"redirect_uris\":[\"http://localhost\"]\n\nThird\nIf that doesn't work here is my sample for videos.insert it should work out of the box.\n# To install the Google client library for Python, run the following command:\n# pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib\n\nfrom __future__ import print_function\n\nimport os.path\n\nimport google.auth.exceptions\nfrom google.auth.transport.requests import Request\nfrom google.oauth2.credentials import Credentials\nfrom google_auth_oauthlib.flow import InstalledAppFlow\nfrom googleapiclient.discovery import build\nfrom googleapiclient.errors import HttpError\nfrom googleapiclient.http import MediaFileUpload\n\n# If modifying these scopes, delete the file token.json.\nSCOPES = ['https://www.googleapis.com/auth/youtube']\n\ndef main():\n\n \"\"\"Shows basic usage of the YouTube v3 API.\n Uploads a private video to YouTube\n \"\"\"\n creds = None\n # The file token.json stores the user's access and refresh tokens, and is\n # created automatically when the authorization flow completes for the first\n # time.\n if os.path.exists('token.json'):\n try:\n creds = Credentials.from_authorized_user_file('token.json', SCOPES)\n creds.refresh(Request())\n except google.auth.exceptions.RefreshError as error:\n # if refresh token fails, reset creds to none.\n #creds = None\n print(f'An error occurred: {error}')\n # If there are no (valid) credentials available, let the user log in.\n if not creds or not creds.valid:\n if creds and creds.expired and creds.refresh_token:\n creds.refresh(Request())\n else:\n flow = InstalledAppFlow.from_client_secrets_file(\n 'C:\\YouTube\\dev\\credentials.json', SCOPES)\n creds = flow.run_local_server(port=0)\n # Save the credentials for the next run\n with open('token.json', 'w') as token:\n token.write(creds.to_json())\n\n try:\n service = build('youtube', 'v3', credentials=creds)\n\n body = dict(\n snippet=dict(\n title=\"Test\",\n description=\"test\",\n tags=\"tes\"\n ),\n status=dict(\n privacyStatus=\"private\"\n )\n )\n\n media = MediaFileUpload(\"dummyvideo.mkv\", chunksize=-1, resumable=True)\n\n results = service.videos().insert(\n part=\",\".join(body.keys()),\n body=body,\n media_body=media).execute()\n print(F'video ID: {results.get(\"id\")}')\n\n except HttpError as error:\n # TODO(developer) - Handle errors from drive API.\n print(f'An error occurred: {error}')\n\nif __name__ == '__main__':\n main()\n\nversion of google api python client\nI am running these versions of the relevant google apis python library\npip list\n\ngoogle-api-core 2.10.1\ngoogle-api-python-client 2.62.0\nrequests-oauthlib 1.3.1\n\nI think oauthlib may be part of it as well.\n" ]
[ 1 ]
[]
[]
[ "google_api_python_client", "google_oauth", "python", "youtube_api", "youtube_data_api" ]
stackoverflow_0074635264_google_api_python_client_google_oauth_python_youtube_api_youtube_data_api.txt
Q: Passing python objects from main flask app to blueprints I am trying to define a mongodb object inside main flask app. And I want to send that object to one of the blueprints that I created. I may have to create more database objects in main app and import them in different blueprints. I tried to do it this way. from flask import Flask, render_template import pymongo from admin_component.bp1 import bp_1 def init_db1(): try: mongo = pymongo.MongoClient( host='mongodb+srv://<username>:<passwrd>@cluster0.bslkwxdx.mongodb.net/?retryWrites=true&w=majority', serverSelectionTimeoutMS = 1000 ) db1 = mongo.test_db1.test_collection1 mongo.server_info() #this is the line that triggers exception. return db1 except: print('Cannot connect to db!!') app = Flask(__name__) app.register_blueprint(bp_1, url_prefix='/admin') #only if we see /admin in url we gonna extend things in bp_1 with app.app_context(): db1 = init_db1() @app.route('/') def test(): return '<h1>This is a Test</h1>' if __name__ == '__main__': app.run(port=10001, debug=True) And this is the blueprint and I tried to import the init_db1 using current_app. from flask import Blueprint, render_template, Response, request, current_app import pymongo from bson.objectid import ObjectId import json bp_1 = Blueprint('bp1', __name__, static_folder='static', template_folder='templates') print(current_app.config) db = current_app.config['db1'] But it gives this error without specifying more details into deep. raise RuntimeError(unbound_message) from None RuntimeError: Working outside of application context. This typically means that you attempted to use functionality that needed the current application. To solve this, set up an application context with app.app_context(). See the documentation for more information. Can someone point out what am I doing wrong here?? A: The idea you are attempting is correct; however it just needs to be done a little differently. First, start by declaring your mongo object in your application factory: In your app/__init__.py: import pymongo from flask import Flask mongo = pymongo.MongoClient( host='mongodb+srv://<username>:<passwrd>@cluster0.bslkwxdx.mongodb.net/?retryWrites=true&w=majority', serverSelectionTimeoutMS = 1000 ) # Mongo is declared outside of function def create_app(app): app = Flask(__name__) return app And then in your other blueprint, you would call: from app import mongo # This right here will get you the mongo object from flask import Blueprint bp_1 = Blueprint('bp1', __name__, static_folder='static', template_folder='templates') db = mongo
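As a small illustration of the pattern described in the answer, a blueprint module could consume the module-level client like this; the module layout and collection name follow the question, the route itself is made up for the sketch, and it assumes the blueprint is imported and registered inside create_app() so the import of app does not become circular.

    # admin_component/bp1.py (sketch)
    from flask import Blueprint, jsonify
    from app import mongo  # module-level client created in app/__init__.py

    bp_1 = Blueprint('bp1', __name__, static_folder='static', template_folder='templates')
    collection = mongo.test_db1.test_collection1

    @bp_1.route('/items')
    def list_items():
        # _id is excluded because ObjectId is not JSON-serializable by default
        return jsonify(list(collection.find({}, {'_id': 0})))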
Passing python objects from main flask app to blueprints
I am trying to define a mongodb object inside main flask app. And I want to send that object to one of the blueprints that I created. I may have to create more database objects in main app and import them in different blueprints. I tried to do it this way. from flask import Flask, render_template import pymongo from admin_component.bp1 import bp_1 def init_db1(): try: mongo = pymongo.MongoClient( host='mongodb+srv://<username>:<passwrd>@cluster0.bslkwxdx.mongodb.net/?retryWrites=true&w=majority', serverSelectionTimeoutMS = 1000 ) db1 = mongo.test_db1.test_collection1 mongo.server_info() #this is the line that triggers exception. return db1 except: print('Cannot connect to db!!') app = Flask(__name__) app.register_blueprint(bp_1, url_prefix='/admin') #only if we see /admin in url we gonna extend things in bp_1 with app.app_context(): db1 = init_db1() @app.route('/') def test(): return '<h1>This is a Test</h1>' if __name__ == '__main__': app.run(port=10001, debug=True) And this is the blueprint and I tried to import the init_db1 using current_app. from flask import Blueprint, render_template, Response, request, current_app import pymongo from bson.objectid import ObjectId import json bp_1 = Blueprint('bp1', __name__, static_folder='static', template_folder='templates') print(current_app.config) db = current_app.config['db1'] But it gives this error without specifying more details into deep. raise RuntimeError(unbound_message) from None RuntimeError: Working outside of application context. This typically means that you attempted to use functionality that needed the current application. To solve this, set up an application context with app.app_context(). See the documentation for more information. Can someone point out what am I doing wrong here??
[ "The idea you are attempting is correct; however it just needs to be done a little differently.\nFirst, start by declaring your mongo object in your application factory:\nIn your app/__init__.py:\nimport pymongo\nfrom flask import Flask\n \nmongo = pymongo.MongoClient(\n host='mongodb+srv://<username>:<passwrd>@cluster0.bslkwxdx.mongodb.net/?retryWrites=true&w=majority',\n serverSelectionTimeoutMS = 1000\n )\n# Mongo is declared outside of function\n\ndef create_app(app):\n app = Flask(__name__)\n return app\n\nAnd then in your other blueprint, you would call:\nfrom app import mongo # This right here will get you the mongo object\nfrom flask import Blueprint\n \nbp_1 = Blueprint('bp1', __name__, static_folder='static', template_folder='templates')\ndb = mongo\n\n" ]
[ 0 ]
[]
[]
[ "flask", "python" ]
stackoverflow_0073727018_flask_python.txt
Q: Mocking an I/O event in Python My code is listening for file changes in a folder, in class A. When a change occurs, then I trigger a function of class B, which is a field in class A. class A: def __init__(self, b): ... self.handler = b def run(self): # listen for changes in a folder using watchdog.observers.Observer self.observer.schedule(self.handler, self.input_directory, recursive=True) self.observer.start() try: while not self.stopped: time.sleep(self.scanning_frequency) except: self.observer.stop() self.observer.join() class B(FileSystemEventHandler): ... def on_any_event(self, event): # a change occurred and handled here. Now what I want to test is that when a file is copied to this folder, then on_any_event should be triggered. This is how I tried to do that: def test_file_watcher(self): # arrange b = B() a = A(b) a.handler.on_any_event = MagicMock() shutil.copy(# copy file to the watched folder) p1 = Process(target=a.run) p1.start() time.sleep(folder scanning frequency + 1 second) a.stop() # stops watching the folder assert a.handler.on_any_event.called p1.join() However this assertion turns out to be false all the time. Where am I doing wrong exactly? Also would it be possible to achieve this by also mocking B completely? Edit: I think the reason could be that I am using a different process, therefore a.handler.on_any_event.called is always false. But I couldn't figure out how to solve this. A: Workaround for your test in a multiprocessing context I agree with you that multiprocessing causes the failure of the test. I have found a workaround that can help you to do the test in a strange way, but that you can adapt for your needs. The workaround is based on the use of Sharing Global Variables in Multiprocessing by multiprocessing.Value (see the documentation). To do this I have defined 2 sharing variables and 2 functions as below: from multiprocessing import Value shared_value = Value('i', 0) stopped = Value('i',0) # this function is used to substitute method on_any_event() of class B def on_any_event_spy(event): shared_value.value += 1 # this is the target for Process p1 def f(a: A): # substitution of on_any_event method with on_any_event_spy() a.handler.on_any_event = on_any_event_spy a.run() Furthermore I have to modify the run() and stop() methods of class A. 
New method stop() of class A: def stop(self): stopped.value = 1 Method run() of class A (change only the condition of the while): def run(self): # listen for changes in a folder using watchdog.observers.Observer self.observer.schedule(self.handler, self.input_directory, recursive=True) self.observer.start() try: #while not self.stopped: # <------ comment this instruction while stopped.value == 0: # <----- add this instruction time.sleep(self.scanning_frequency) except: self.observer.stop() self.observer.join() The test method becomes: class MyTestCase(unittest.TestCase): def test_file_watcher(self): # arrange b = B() a = A(b) shutil.copy( # copy file to the watched folder) # I have changed yor target value and add args p1 = Process(target=f, args=(a, )) p1.start() time.sleep(a.scanning_frequency + 1) a.stop() # stops watching the folder # the shared_value value MUST BE > 0 self.assertGreater(shared_value.value, 0) p1.join() if __name__ == '__main__': unittest.main() How to mock B completely The previous paragraph of this answer tells that the real problem of this test is the multiprocessing, but if you want mock B completely as you ask in your question, try to change your test_file_watcher() as following: def test_file_watcher(self): # arrange #b = B() # <---------------- ------------- comment this instruction b = Mock(wraps=B()) # <--------------------- add this instruction a = A(b) #a.handler.on_any_event = MagicMock() # <--- comment this instruction shutil.copy(# copy file to the watched folder) p1 = Process(target=a.run) p1.start() time.sleep(folder scanning frequency + 1 second) a.stop() # stops watching the folder #assert a.handler.on_any_event.called# <---- comment this instruction assert b.on_any_event.called <---- add this instruction p1.join() I hope that with the instruction: b = Mock(wraps=B()) you will wrap B completely as you ask in your question and this can be useful for future more traditionally tests.
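Another way to keep the original MagicMock assertion working, stated as a hedged alternative rather than a fix to the answer above, is to skip the second process entirely and run a.run() in a thread of the test process, so the patched handler stays visible; paths and timings below are placeholders, and A and B are assumed to be importable from the module under test.

    import shutil
    import threading
    import time
    from unittest.mock import MagicMock

    def test_file_watcher_with_thread():
        b = B()
        a = A(b)
        a.handler.on_any_event = MagicMock()  # same process, so the watcher sees the patch

        t = threading.Thread(target=a.run, daemon=True)
        t.start()

        shutil.copy("some_file.txt", a.input_directory)  # placeholder source file
        time.sleep(a.scanning_frequency + 1)
        a.stop()
        t.join(timeout=5)

        assert a.handler.on_any_event.called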
Mocking an I/O event in Python
My code is listening for file changes in a folder, in class A. When a change occurs, then I trigger a function of class B, which is a field in class A. class A: def __init__(self, b): ... self.handler = b def run(self): # listen for changes in a folder using watchdog.observers.Observer self.observer.schedule(self.handler, self.input_directory, recursive=True) self.observer.start() try: while not self.stopped: time.sleep(self.scanning_frequency) except: self.observer.stop() self.observer.join() class B(FileSystemEventHandler): ... def on_any_event(self, event): # a change occurred and handled here. Now what I want to test is that when a file is copied to this folder, then on_any_event should be triggered. This is how I tried to do that: def test_file_watcher(self): # arrange b = B() a = A(b) a.handler.on_any_event = MagicMock() shutil.copy(# copy file to the watched folder) p1 = Process(target=a.run) p1.start() time.sleep(folder scanning frequency + 1 second) a.stop() # stops watching the folder assert a.handler.on_any_event.called p1.join() However this assertion turns out to be false all the time. Where am I doing wrong exactly? Also would it be possible to achieve this by also mocking B completely? Edit: I think the reason could be that I am using a different process, therefore a.handler.on_any_event.called is always false. But I couldn't figure out how to solve this.
[ "Workaround for your test in a multiprocessing context\nI agree with you that multiprocessing causes the failure of the test. I have found a workaround that can help you to do the test in a strange way, but that you can adapt for your needs.\nThe workaround is based on the use of Sharing Global Variables in Multiprocessing by multiprocessing.Value (see the documentation).\nTo do this I have defined 2 sharing variables and 2 functions as below:\nfrom multiprocessing import Value\n\nshared_value = Value('i', 0)\nstopped = Value('i',0)\n\n# this function is used to substitute method on_any_event() of class B\ndef on_any_event_spy(event):\n shared_value.value += 1\n\n# this is the target for Process p1\ndef f(a: A):\n # substitution of on_any_event method with on_any_event_spy()\n a.handler.on_any_event = on_any_event_spy\n a.run()\n\nFurthermore I have to modify the run() and stop() methods of class A.\nNew method stop() of class A:\ndef stop(self):\n stopped.value = 1\n\nMethod run() of class A (change only the condition of the while):\ndef run(self):\n # listen for changes in a folder using watchdog.observers.Observer\n self.observer.schedule(self.handler, self.input_directory, recursive=True)\n self.observer.start()\n try:\n #while not self.stopped: # <------ comment this instruction\n while stopped.value == 0: # <----- add this instruction\n time.sleep(self.scanning_frequency)\n except:\n self.observer.stop()\n\n self.observer.join()\n\nThe test method becomes:\nclass MyTestCase(unittest.TestCase):\n\n def test_file_watcher(self):\n\n # arrange\n b = B()\n a = A(b)\n\n shutil.copy( # copy file to the watched folder)\n # I have changed yor target value and add args\n p1 = Process(target=f, args=(a, ))\n p1.start()\n\n time.sleep(a.scanning_frequency + 1)\n a.stop() # stops watching the folder\n\n # the shared_value value MUST BE > 0\n self.assertGreater(shared_value.value, 0)\n\n p1.join()\n\nif __name__ == '__main__':\n unittest.main()\n\nHow to mock B completely\nThe previous paragraph of this answer tells that the real problem of this test is the multiprocessing, but if you want mock B completely as you ask in your question, try to change your test_file_watcher() as following:\ndef test_file_watcher(self):\n # arrange\n #b = B() # <---------------- ------------- comment this instruction\n b = Mock(wraps=B()) # <--------------------- add this instruction\n a = A(b)\n #a.handler.on_any_event = MagicMock() # <--- comment this instruction\n\n shutil.copy(# copy file to the watched folder)\n p1 = Process(target=a.run)\n p1.start()\n\n time.sleep(folder scanning frequency + 1 second)\n a.stop() # stops watching the folder\n\n #assert a.handler.on_any_event.called# <---- comment this instruction\n assert b.on_any_event.called <---- add this instruction\n p1.join()\n\nI hope that with the instruction:\nb = Mock(wraps=B())\n\nyou will wrap B completely as you ask in your question and this can be useful for future more traditionally tests.\n" ]
[ 0 ]
[]
[]
[ "mocking", "python", "unit_testing" ]
stackoverflow_0074634906_mocking_python_unit_testing.txt
Q: How can i call a file program in another file in python? How can i call a file program in another file in python?What i mean is i have a file called fun.py which has a small program that program prints Hello World, and i have a 2nd file called main.py so i want that if i run my main.py that fun.py file program runs and that fun.py file also loop 2 times, So that fun.py file runs 2 times when ever i try to run my main.py file Note: i have both file in one folder.. we can also see this as react perspective like we import a react component from different file and use that component in other file as a functional component main.py code import fun fun for i in range(2): A: I'm going to answer with an example: fun.py print("from fun") def show_im_having_fun(): return "I'm having fun!" main.py # You have the following options to make an import: from fun import * import fun # the recommended for this case from fun import show_im_having_fun print("from main") Every time that you import a module (the fun.py file) all the code in that file is going to be executed, so you going to see the next order of print statements: from fun from main This is because in order for the second file to "know" what the other file has, it has to be executed so it can save it in memory. If you want to print n number of times what you can do is the following: fun.py def show_im_having_fun(): return "I'm having fun!" main.py from fun import show_im_having_fun n = 2 for _ in range(n): # Is a good practice to put an underscore when you're not going to use the variable inside the loop print(show_im_having_fun())
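A small variation on the same idea, in case the code in fun.py should not fire at import time: guard the direct-execution part with __name__. The function name below is an assumption, not taken from the original files.

    # fun.py (sketch)
    def run():
        print("Hello World")

    if __name__ == "__main__":
        # executed only when fun.py is run directly, not when it is imported
        run()

    # main.py (sketch)
    import fun

    for _ in range(2):
        fun.run()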
How can I call a program in another file in Python?
How can I call a program in another file in Python? What I mean is: I have a file called fun.py which has a small program that prints Hello World, and I have a second file called main.py. I want the fun.py program to run when I run main.py, and to loop 2 times, so that fun.py runs twice whenever I run my main.py file. Note: I have both files in one folder. We can also see this from a React perspective: we import a React component from a different file and use that component in another file as a functional component. main.py code: import fun fun for i in range(2):
[ "I'm going to answer with an example:\n\nfun.py\n\nprint(\"from fun\")\ndef show_im_having_fun():\n return \"I'm having fun!\"\n\n\nmain.py\n\n# You have the following options to make an import:\nfrom fun import *\nimport fun # the recommended for this case\nfrom fun import show_im_having_fun\nprint(\"from main\")\n\nEvery time that you import a module (the fun.py file) all the code in that file is going to be executed, so you going to see the next order of print statements:\nfrom fun\nfrom main\nThis is because in order for the second file to \"know\" what the other file has, it has to be executed so it can save it in memory. If you want to print n number of times what you can do is the following:\n\nfun.py\n\ndef show_im_having_fun():\n return \"I'm having fun!\"\n\n\nmain.py\n\nfrom fun import show_im_having_fun\nn = 2\nfor _ in range(n):\n # Is a good practice to put an underscore when you're not going to use the variable inside the loop\n print(show_im_having_fun())\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074639587_python_python_3.x.txt
Q: sqlalchemy.exc.InvalidRequestError: When initializing mapper mapped class Order->order, expression 'Status' failed to locate a name ('Status') Cannot save record to my db. I created endpoint which makes user and saves him to db: @router.post("/user/register/", tags=['user'], status_code=201) async def user_register_web(user: RegisterWebSchema, db: Session = Depends(get_db)): if user.password != user.password_repeat: raise HTTPException(status_code=404, detail="Passwords dont match each other!") db_user = UserModel(name=user.name.title(), surname=user.surname.title(), email=user.email, phone_number=user.phone_number, login=user.login, password=user.password, photo=None, is_admin=False, coins=0) db.add(db_user) db.commit() db.refresh(db_user) return {"JWT Token": signJWT(user.email), **user.dict()} Here is model of my User: class User(Base): __tablename__ = "user" __table_args__ = {'extend_existing': True} id = Column(Integer, primary_key=True, index=True) name = Column(String(50), nullable=False) surname = Column(String(50), nullable=False) email = Column(String(50), unique=True, nullable=False) phone_number = Column(String(15), unique=True, nullable=False) login = Column(String(50), unique=True, nullable=False) password = Column(String(80), nullable=False) photo = Column(String(80), nullable=True) is_admin = Column(Boolean, nullable=False) coins = Column(Integer, nullable=False) time_created = Column(DateTime(timezone=True), server_default=func.now()) time_updated = Column(DateTime(timezone=True), onupdate=func.now()) address_id = Column(Integer, ForeignKey("address.id"), nullable=False) address = relationship("Address", back_populates="user") pin = relationship("Pin", back_populates="user") payment_card = relationship("PaymentCard", back_populates="user") order = relationship("Order", back_populates="user") feedback = relationship("Feedback", back_populates="user") post = relationship("Post", back_populates="user") comment = relationship("Comment", back_populates="user") animal = relationship("Animal", back_populates="user") walk = relationship("Walk", back_populates="user") and here is the error I get: sqlalchemy.exc.InvalidRequestError: When initializing mapper mapped class Order->order, expression 'Status' failed to locate a name ('Status'). If this is a class name, consider adding this relationship() to the <class 'database.models.OrderModel.Order'> class after both dependent classes have been defined. 
model of Order: class Order(Base): __tablename__ = "order" __table_args__ = {'extend_existing': True} id = Column(Integer, primary_key=True, index=True) order_code = Column(String(15), nullable=True) time_created = Column(DateTime(timezone=True), server_default=func.now()) time_updated = Column(DateTime(timezone=True), onupdate=func.now()) status_id = Column(Integer, ForeignKey("status.id"), nullable=False) status = relationship("Status", back_populates="order") payment_method_id = Column(Integer, ForeignKey("payment_method.id"), nullable=False) payment_method = relationship("PaymentMethod", back_populates="order") user_id = Column(Integer, ForeignKey("user.id"), nullable=False) user = relationship("User", back_populates="order") post_office = relationship("PostOffice", back_populates="order") product = relationship("Product", secondary=OrderProduct, back_populates="order") and model of Status: class Status(Base): __tablename__ = "status" id = Column(Integer, primary_key=True, index=True) name = Column(String(100), nullable=False) time_created = Column(DateTime(timezone=True), server_default=func.now()) time_updated = Column(DateTime(timezone=True), onupdate=func.now()) order = relationship("Order", back_populates="status") I createrd migration successfully and then db. I dont understand why it shows lack of relationship, which i created. And it appears in lines of codes which dont even use these Tables.
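For what it is worth, this error usually means the Status class has never been imported, so it is not registered with the declarative Base when the Order mapper is configured. A hedged sketch of the usual fix; the module names are guesses inferred from the traceback path database.models.OrderModel, not the project's real layout.

    # database/models/__init__.py (sketch; module names are assumptions)
    # Importing every model module here ensures that string targets such as
    # "Status" in relationship() can be resolved before any mapper is used.
    from .UserModel import User
    from .OrderModel import Order
    from .StatusModel import Status
    from .PaymentMethodModel import PaymentMethod
    # ...and likewise for Address, Pin, PaymentCard, Feedback, Post, Comment, Animal, Walk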
sqlalchemy.exc.InvalidRequestError: When initializing mapper mapped class Order->order, expression 'Status' failed to locate a name ('Status')
Cannot save record to my db. I created endpoint which makes user and saves him to db: @router.post("/user/register/", tags=['user'], status_code=201) async def user_register_web(user: RegisterWebSchema, db: Session = Depends(get_db)): if user.password != user.password_repeat: raise HTTPException(status_code=404, detail="Passwords dont match each other!") db_user = UserModel(name=user.name.title(), surname=user.surname.title(), email=user.email, phone_number=user.phone_number, login=user.login, password=user.password, photo=None, is_admin=False, coins=0) db.add(db_user) db.commit() db.refresh(db_user) return {"JWT Token": signJWT(user.email), **user.dict()} Here is model of my User: class User(Base): __tablename__ = "user" __table_args__ = {'extend_existing': True} id = Column(Integer, primary_key=True, index=True) name = Column(String(50), nullable=False) surname = Column(String(50), nullable=False) email = Column(String(50), unique=True, nullable=False) phone_number = Column(String(15), unique=True, nullable=False) login = Column(String(50), unique=True, nullable=False) password = Column(String(80), nullable=False) photo = Column(String(80), nullable=True) is_admin = Column(Boolean, nullable=False) coins = Column(Integer, nullable=False) time_created = Column(DateTime(timezone=True), server_default=func.now()) time_updated = Column(DateTime(timezone=True), onupdate=func.now()) address_id = Column(Integer, ForeignKey("address.id"), nullable=False) address = relationship("Address", back_populates="user") pin = relationship("Pin", back_populates="user") payment_card = relationship("PaymentCard", back_populates="user") order = relationship("Order", back_populates="user") feedback = relationship("Feedback", back_populates="user") post = relationship("Post", back_populates="user") comment = relationship("Comment", back_populates="user") animal = relationship("Animal", back_populates="user") walk = relationship("Walk", back_populates="user") and here is the error I get: sqlalchemy.exc.InvalidRequestError: When initializing mapper mapped class Order->order, expression 'Status' failed to locate a name ('Status'). If this is a class name, consider adding this relationship() to the <class 'database.models.OrderModel.Order'> class after both dependent classes have been defined. 
model of Order: class Order(Base): __tablename__ = "order" __table_args__ = {'extend_existing': True} id = Column(Integer, primary_key=True, index=True) order_code = Column(String(15), nullable=True) time_created = Column(DateTime(timezone=True), server_default=func.now()) time_updated = Column(DateTime(timezone=True), onupdate=func.now()) status_id = Column(Integer, ForeignKey("status.id"), nullable=False) status = relationship("Status", back_populates="order") payment_method_id = Column(Integer, ForeignKey("payment_method.id"), nullable=False) payment_method = relationship("PaymentMethod", back_populates="order") user_id = Column(Integer, ForeignKey("user.id"), nullable=False) user = relationship("User", back_populates="order") post_office = relationship("PostOffice", back_populates="order") product = relationship("Product", secondary=OrderProduct, back_populates="order") and model of Status: class Status(Base): __tablename__ = "status" id = Column(Integer, primary_key=True, index=True) name = Column(String(100), nullable=False) time_created = Column(DateTime(timezone=True), server_default=func.now()) time_updated = Column(DateTime(timezone=True), onupdate=func.now()) order = relationship("Order", back_populates="status") I createrd migration successfully and then db. I dont understand why it shows lack of relationship, which i created. And it appears in lines of codes which dont even use these Tables.
[]
[]
[ "In order to make it work I had to import all of my modules into api file with endpoint. I don't know why but it works right now.\n" ]
[ -1 ]
[ "backend", "fastapi", "postgresql", "python", "sqlalchemy" ]
stackoverflow_0074636010_backend_fastapi_postgresql_python_sqlalchemy.txt
Q: Flask app wont launch 'ImportError: cannot import name 'cached_property' from 'werkzeug' ' I've been working on a Flask app for a few weeks. I finished it today and went to deploy it... and now it won't launch. I haven't added or removed any code so assume something has changed in the deployment process? Anyway, here is the full error displayed in the terminal: Traceback (most recent call last): File "C:\Users\Kev\Documents\Projects\Docket\manage.py", line 5, in <module> from app import create_app, db File "C:\Users\Kev\Documents\Projects\Docket\app\__init__.py", line 21, in <module> from app.api import api, blueprint, limiter File "C:\Users\Kev\Documents\Projects\Docket\app\api\__init__.py", line 2, in <module> from flask_restplus import Api File "C:\Users\Kev\.virtualenvs\Docket-LasDxOWU\lib\site-packages\flask_restplus\__init_ _.py", line 4, in <module> from . import fields, reqparse, apidoc, inputs, cors File "C:\Users\Kev\.virtualenvs\Docket-LasDxOWU\lib\site-packages\flask_restplus\fields. py", line 17, in <module> from werkzeug import cached_property ImportError: cannot import name 'cached_property' from 'werkzeug' (C:\Users\Kev\.virtualen vs\Docket-LasDxOWU\lib\site-packages\werkzeug\__init__.py) Also here's the code in the three files mentioned. manage.py: from apscheduler.schedulers.background import BackgroundScheduler from flask_script import Manager from flask_migrate import Migrate, MigrateCommand from app import create_app, db app = create_app() app.app_context().push() manager = Manager(app) migrate = Migrate(app, db) manager.add_command('db', MigrateCommand) from app.routes import * from app.models import * def clear_data(): with app.app_context(): db.session.query(User).delete() db.session.query(Todo).delete() db.session.commit() print("Deleted table rows!") @manager.command def run(): scheduler = BackgroundScheduler() scheduler.add_job(clear_data, trigger='interval', minutes=15) scheduler.start() app.run(debug=True) if __name__ == '__main__': clear_data() manager.run() app/__init__.py: from flask import Flask from flask_sqlalchemy import SQLAlchemy from flask_login import LoginManager from config import Config db = SQLAlchemy() login = LoginManager() def create_app(): app = Flask(__name__) app.config.from_object(Config) db.init_app(app) login.init_app(app) login.login_view = 'login' from app.api import api, blueprint, limiter from app.api.endpoints import users, todos, register from app.api.endpoints.todos import TodosNS from app.api.endpoints.users import UserNS from app.api.endpoints.register import RegisterNS api.init_app(app) app.register_blueprint(blueprint) limiter.init_app(app) api.add_namespace(TodosNS) api.add_namespace(UserNS) api.add_namespace(RegisterNS) return app api/__init__.py: from logging import StreamHandler from flask_restplus import Api from flask import Blueprint from flask_limiter import Limiter from flask_limiter.util import get_remote_address blueprint = Blueprint('api', __name__, url_prefix='/api') limiter = Limiter(key_func=get_remote_address) limiter.logger.addHandler(StreamHandler()) api = Api(blueprint, doc='/documentation', version='1.0', title='Docket API', description='API for Docket. 
Create users and todo items through a REST API.\n' 'First of all, begin by registering a new user via the registration form in the web interface.\n' 'Or via a `POST` request to the `/Register/` end point', decorators=[limiter.limit("50/day", error_message="API request limit has been reached (50 per day)")]) I've tried reinstalling flask & flask_restplus but no-luck. A: The proper answer for May 2020: flask-restplus is dead, move to flask-restx. From noirbizarre/flask-restplus#778 (comment): flask-restplus work has been discontinued due to maintainers not having pypi keys. See the drop in replacement, flask-restx. It's an official fork by the maintainer team. We have already fixed the issue there From noirbizarre/flask-restplus#777 (comment): No. Flask-restplus is no longer maintained. The former maintainers do not have privileges to push to pypi, and after many months of trying, we forked the project. Check out flask-restx. It's a drop in replacement and we are roadmapping, designing, and making fixes...for instance, we already patched for Werkzeug So the real solution is to move to flask-restx rather than pinning to an old version of Werkzeug. A: Downgrading to Werkzeug==0.16.1 solves this see https://github.com/noirbizarre/flask-restplus/issues/777#issuecomment-583235327 EDIT Wanted to add that a permanent(long term) solution would be to move to flask_restx as flask-restplus is no longer being maintained. See how to migrate from flask-restplus A: Try: from werkzeug.utils import cached_property https://werkzeug.palletsprojects.com/en/1.0.x/utils/ A: Downgrade Werkzeug to 0.16.1 pip3 install --upgrade Werkzeug==0.16.1 If you do a pip3 list you may see something like this: Flask 1.1.2 Werkzeug 0.16.1 A: For fix this painful bug was to identify what happen after the installation of the new package in the file PipFile.lock was to do a diff file and find the differences: What I found after the installation of the package was the werkzeug package change her version of "version": "==0.15.2" to "version": "==1.0.1" and when try to excute the command sudo docker-compose build give me the error. To fix this error this is what I did: Discard all the change and start again. Stop and remove your previous docker sudo docker stop $(sudo docker ps -aq) sudo docker rm $(sudo docker ps -aq) Execute again the docker: sudo docker-compose build sudo docker-compose up Go inside of the docker , get the id of the docker : first press CRTL+z to pause the docker and after in the terminal execute sudo docker ps Get the number of the CONTAINER ID column execute : sudo docker exec -it 23f77dbefd57 bash for enter in terminal of the docker Now execute the package that you want in my case is SOAPpy, like this `pipenv install SOAPpy` And after this instalation, install the previous package of the werkzeug package in my case is pipenv install Werkzeug==0.15.2 write exit and press "Enter" in the terminal, for close the terminal inside the docker. If you compare the files Pipfile.lock have the same version, this is the real fix. for final steps is to do : stop , build and up the docker again: sudo docker stop $(sudo docker ps -aq) sudo docker-compose build sudo docker-compose up Now is running again the docker: A note to myself, I just want to remind you that you are capable of everything and not giving up so easily, remember there is always one way, it is only to connect the dots. Never give up without fighting. A: This may be too weird to happen to anyone else, but... Check your actually-imported packages. 
Mine looked like this: Clearly, something borked on import here... removed and readded the correct "werkzeug" package and it "worked" (turns out I still need to implement one of the other solutions offered to this question... :-( ) Ah- but you ask: "how do I remove a corrupted package like this? The GUI won't let me!!". Fear not, that happened to me too. In Pycharm, find the package file location by hovering over the package name under the settings menu, go there in file explorer, and delete the folder and anything else like it. Then reinstall the package with the GUI. A: If you or anyone who are doing some flask course about swagger that require restplus, Werkzeug. Please uninstall restplus and use restx instead since restplus is dead so any solutions that require downgrading Werkzeug or restplus OR typing some weird stuff that you can't understand, is just making another error. Read this article about restx, it fairly similar to restplus. Flask-restx article
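For the api/__init__.py shown in the question, the flask-restx migration recommended in the answers is essentially an import swap; a sketch, keeping the rest of the file as the question has it (the shortened description is illustrative only).

    # pip uninstall flask-restplus && pip install flask-restx
    from logging import StreamHandler
    from flask_restx import Api          # was: from flask_restplus import Api
    from flask import Blueprint
    from flask_limiter import Limiter
    from flask_limiter.util import get_remote_address

    blueprint = Blueprint('api', __name__, url_prefix='/api')
    limiter = Limiter(key_func=get_remote_address)
    limiter.logger.addHandler(StreamHandler())

    api = Api(blueprint, doc='/documentation', version='1.0', title='Docket API',
              description='API for Docket.')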
Flask app won't launch: 'ImportError: cannot import name 'cached_property' from 'werkzeug''
I've been working on a Flask app for a few weeks. I finished it today and went to deploy it... and now it won't launch. I haven't added or removed any code so assume something has changed in the deployment process? Anyway, here is the full error displayed in the terminal: Traceback (most recent call last): File "C:\Users\Kev\Documents\Projects\Docket\manage.py", line 5, in <module> from app import create_app, db File "C:\Users\Kev\Documents\Projects\Docket\app\__init__.py", line 21, in <module> from app.api import api, blueprint, limiter File "C:\Users\Kev\Documents\Projects\Docket\app\api\__init__.py", line 2, in <module> from flask_restplus import Api File "C:\Users\Kev\.virtualenvs\Docket-LasDxOWU\lib\site-packages\flask_restplus\__init_ _.py", line 4, in <module> from . import fields, reqparse, apidoc, inputs, cors File "C:\Users\Kev\.virtualenvs\Docket-LasDxOWU\lib\site-packages\flask_restplus\fields. py", line 17, in <module> from werkzeug import cached_property ImportError: cannot import name 'cached_property' from 'werkzeug' (C:\Users\Kev\.virtualen vs\Docket-LasDxOWU\lib\site-packages\werkzeug\__init__.py) Also here's the code in the three files mentioned. manage.py: from apscheduler.schedulers.background import BackgroundScheduler from flask_script import Manager from flask_migrate import Migrate, MigrateCommand from app import create_app, db app = create_app() app.app_context().push() manager = Manager(app) migrate = Migrate(app, db) manager.add_command('db', MigrateCommand) from app.routes import * from app.models import * def clear_data(): with app.app_context(): db.session.query(User).delete() db.session.query(Todo).delete() db.session.commit() print("Deleted table rows!") @manager.command def run(): scheduler = BackgroundScheduler() scheduler.add_job(clear_data, trigger='interval', minutes=15) scheduler.start() app.run(debug=True) if __name__ == '__main__': clear_data() manager.run() app/__init__.py: from flask import Flask from flask_sqlalchemy import SQLAlchemy from flask_login import LoginManager from config import Config db = SQLAlchemy() login = LoginManager() def create_app(): app = Flask(__name__) app.config.from_object(Config) db.init_app(app) login.init_app(app) login.login_view = 'login' from app.api import api, blueprint, limiter from app.api.endpoints import users, todos, register from app.api.endpoints.todos import TodosNS from app.api.endpoints.users import UserNS from app.api.endpoints.register import RegisterNS api.init_app(app) app.register_blueprint(blueprint) limiter.init_app(app) api.add_namespace(TodosNS) api.add_namespace(UserNS) api.add_namespace(RegisterNS) return app api/__init__.py: from logging import StreamHandler from flask_restplus import Api from flask import Blueprint from flask_limiter import Limiter from flask_limiter.util import get_remote_address blueprint = Blueprint('api', __name__, url_prefix='/api') limiter = Limiter(key_func=get_remote_address) limiter.logger.addHandler(StreamHandler()) api = Api(blueprint, doc='/documentation', version='1.0', title='Docket API', description='API for Docket. Create users and todo items through a REST API.\n' 'First of all, begin by registering a new user via the registration form in the web interface.\n' 'Or via a `POST` request to the `/Register/` end point', decorators=[limiter.limit("50/day", error_message="API request limit has been reached (50 per day)")]) I've tried reinstalling flask & flask_restplus but no-luck.
[ "The proper answer for May 2020: flask-restplus is dead, move to flask-restx.\nFrom noirbizarre/flask-restplus#778 (comment):\n\nflask-restplus work has been discontinued due to maintainers not having pypi keys. See the drop in replacement, flask-restx. It's an official fork by the maintainer team. We have already fixed the issue there\n\nFrom noirbizarre/flask-restplus#777 (comment):\n\nNo. Flask-restplus is no longer maintained. The former maintainers do not have privileges to push to pypi, and after many months of trying, we forked the project. Check out flask-restx. It's a drop in replacement and we are roadmapping, designing, and making fixes...for instance, we already patched for Werkzeug\n\nSo the real solution is to move to flask-restx rather than pinning to an old version of Werkzeug.\n", "Downgrading to Werkzeug==0.16.1 solves this\nsee https://github.com/noirbizarre/flask-restplus/issues/777#issuecomment-583235327\nEDIT\nWanted to add that a permanent(long term) solution would be to move to flask_restx as flask-restplus is no longer being maintained.\nSee how to migrate from flask-restplus\n", "Try:\nfrom werkzeug.utils import cached_property\n\nhttps://werkzeug.palletsprojects.com/en/1.0.x/utils/\n", "Downgrade Werkzeug to 0.16.1\npip3 install --upgrade Werkzeug==0.16.1\n\nIf you do a pip3 list you may see something like this:\nFlask 1.1.2\nWerkzeug 0.16.1\n\n", "For fix this painful bug was to identify what happen after the installation of the new package in the file PipFile.lock was to do a diff file and find the differences:\n\nWhat I found after the installation of the package was the werkzeug package change her version of \"version\": \"==0.15.2\" to \"version\": \"==1.0.1\" and when try to excute the command sudo docker-compose build give me the error.\nTo fix this error this is what I did:\nDiscard all the change and start again.\nStop and remove your previous docker\n\nsudo docker stop $(sudo docker ps -aq)\n\n\nsudo docker rm $(sudo docker ps -aq)\n\n\nExecute again the docker:\n\nsudo docker-compose build\n\n\n\nsudo docker-compose up\n\n\nGo inside of the docker , get the id of the docker : first press CRTL+z to pause the docker and after in the terminal execute\nsudo docker ps\nGet the number of the CONTAINER ID column\n\nexecute : sudo docker exec -it 23f77dbefd57 bash for enter in terminal of the docker\n\nNow execute the package that you want in my case is SOAPpy, like this\n `pipenv install SOAPpy`\n\n\nAnd after this instalation, install the previous package of the werkzeug package in my case is\n\npipenv install Werkzeug==0.15.2\n\n\nwrite exit and press \"Enter\" in the terminal, for close the terminal inside the docker.\n\nIf you compare the files Pipfile.lock have the same version, this is the real fix.\n\nfor final steps is to do : stop , build and up the docker again:\n\nsudo docker stop $(sudo docker ps -aq)\nsudo docker-compose build\n\n\nsudo docker-compose up\n\nNow is running again the docker:\n\n\nA note to myself, I just want to remind you that you are capable of\neverything and not giving up so easily, remember there is always one\nway, it is only to connect the dots. Never give up without fighting.\n\n", "This may be too weird to happen to anyone else, but... Check your actually-imported packages. Mine looked like this:\n\nClearly, something borked on import here... removed and readded the correct \"werkzeug\" package and it \"worked\" (turns out I still need to implement one of the other solutions offered to this question... 
:-( )\nAh- but you ask: \"how do I remove a corrupted package like this? The GUI won't let me!!\". Fear not, that happened to me too. In Pycharm, find the package file location by hovering over the package name under the settings menu, go there in file explorer, and delete the folder and anything else like it. Then reinstall the package with the GUI.\n", "If you or anyone who are doing some flask course about swagger that require restplus, Werkzeug. Please uninstall restplus and use restx instead since restplus is dead so any solutions that require downgrading Werkzeug or restplus OR typing some weird stuff that you can't understand, is just making another error.\nRead this article about restx, it fairly similar to restplus.\nFlask-restx article\n" ]
[ 36, 21, 14, 10, 1, 0, 0 ]
[]
[]
[ "flask", "flask_restplus", "python" ]
stackoverflow_0060156202_flask_flask_restplus_python.txt
Q: How to match the string after the character and space in Python3? I have the following string: '- Submission GMV / return finance027110/06 Abdul Rahman -26,00- Submission GMV / return finance02432548/08 Michael Scott -56,47- GMV success. 452630/10/21 Lehazq998890/92 +60,00' How can I return the values as: [['- Submission PVT / return finance027110/06 Abdul Rahman','-26,00'], ['- Submission LTD / return finance02432548/08 Michael Scott','-56,47'], ['- GMV success. 452630/10/21 Lehazq998890/92', '+60,00']] I'm looping into multiple input. Hence, the count of output rows differs in each iteration. But the split pattern remains same. A: Your requirement is easy to come by using re.findall with two capture groups: inp = "- Submission GMV / return finance027110/06 Abdul Rahman -26,00- Submission GMV / return finance02432548/08 Michael Scott -56,47- GMV success. 452630/10/21 Lehazq998890/92 +60,00" matches = re.findall(r'(-.*?) ([+-]\d+(?:,\d+)?)', inp) print(matches) This prints: [('- Submission GMV / return finance027110/06 Abdul Rahman', '-26,00'), ('- Submission GMV / return finance02432548/08 Michael Scott', '-56,47'), ('- GMV success. 452630/10/21 Lehazq998890/92', '+60,00')] A: try something like this data.split('- ')[1:]
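Since the expected output in the question is a list of lists rather than tuples, a small follow-up sketch on top of the re.findall pattern from the first answer:

    import re

    inp = ('- Submission GMV / return finance027110/06 Abdul Rahman -26,00'
           '- Submission GMV / return finance02432548/08 Michael Scott -56,47'
           '- GMV success. 452630/10/21 Lehazq998890/92 +60,00')

    pairs = re.findall(r'(-.*?) ([+-]\d+(?:,\d+)?)', inp)
    result = [list(p) for p in pairs]  # [['- Submission ...', '-26,00'], ...]
    print(result)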
How to match the string after the character and space in Python3?
I have the following string: '- Submission GMV / return finance027110/06 Abdul Rahman -26,00- Submission GMV / return finance02432548/08 Michael Scott -56,47- GMV success. 452630/10/21 Lehazq998890/92 +60,00' How can I return the values as: [['- Submission PVT / return finance027110/06 Abdul Rahman','-26,00'], ['- Submission LTD / return finance02432548/08 Michael Scott','-56,47'], ['- GMV success. 452630/10/21 Lehazq998890/92', '+60,00']] I'm looping into multiple input. Hence, the count of output rows differs in each iteration. But the split pattern remains same.
[ "Your requirement is easy to come by using re.findall with two capture groups:\ninp = \"- Submission GMV / return finance027110/06 Abdul Rahman -26,00- Submission GMV / return finance02432548/08 Michael Scott -56,47- GMV success. 452630/10/21 Lehazq998890/92 +60,00\"\nmatches = re.findall(r'(-.*?) ([+-]\\d+(?:,\\d+)?)', inp)\nprint(matches)\n\nThis prints:\n[('- Submission GMV / return finance027110/06 Abdul Rahman', '-26,00'),\n ('- Submission GMV / return finance02432548/08 Michael Scott', '-56,47'),\n ('- GMV success. 452630/10/21 Lehazq998890/92', '+60,00')]\n\n", "try something like this data.split('- ')[1:]\n" ]
[ 0, 0 ]
[]
[]
[ "extract", "python", "string" ]
stackoverflow_0074639725_extract_python_string.txt
Q: How to connect to a MSSQL database on a remote (windows) server from python? I have the following information about the remote server. IP address Database user name Database password Database name I can even connect to the remote server using azure data studio, running on my Laptop, which is running on Ubuntu 20.04. However, this is my requirement Connect to the MSSQL database programmatically from python Simply write a pandas dataframe as a table. I tried using pyodbc, but whenever I try pyodbc.connect(), I get an error saying InterfaceError: ('IM002', '[IM002] [unixODBC][Driver Manager]Data source name not found and no default driver specified (0) (SQLDriverConnect)') Is there some other library I should use instead of pyodbc? Here is my sample code. #!/usr/bin/env python3 # encoding: utf-8 import pyodbc credential='DRIVER=ODBC Driver 18 for SQL Server;SERVER=192.168.101.56;DATABASE=DEMAND_FORECAST;ENCRYPT=yes;UID=della;PWD=strong;Trusted_Connection=yes;' pyodbc.connect(str=credential) # Throwing error # InterfaceError: ('IM002', '[IM002] [unixODBC][Driver Manager]Data source name not found and no default # driver specified (0) (SQLDriverConnect)') pyodbc.drivers() # ['ODBC Driver 18 for SQL Server'] A: Usually you need to do something like: Porbably you are missing the "Driver={SQL Server} or you need to change it to something else cnxn = pyodbc.connect("Driver={SQL Server};" "Server=" your server + ";" "Database=" your db+ ";" "UID=" +your uid + ";" "PWD=" + your PWD )
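Two hedged pointers rather than a verified fix: pyodbc.connect() expects the connection string as its first positional argument (passing it as str=... likely explains the "data source name not found" error), and pandas.DataFrame.to_sql wants an SQLAlchemy engine. A sketch reusing the values from the question; the TrustServerCertificate flag and the demo table are assumptions.

    import urllib.parse

    import pandas as pd
    import pyodbc
    import sqlalchemy as sa

    conn_str = (
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=192.168.101.56;DATABASE=DEMAND_FORECAST;"
        "UID=della;PWD=strong;TrustServerCertificate=yes;"
    )
    cnxn = pyodbc.connect(conn_str)  # pass the string positionally, not as str=...

    engine = sa.create_engine(
        "mssql+pyodbc:///?odbc_connect=" + urllib.parse.quote_plus(conn_str)
    )
    df = pd.DataFrame({"value": [1.0, 2.0, 3.0]})
    df.to_sql("demo_table", engine, if_exists="replace", index=False)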
How to connect to a MSSQL database on a remote (windows) server from python?
I have the following information about the remote server. IP address Database user name Database password Database name I can even connect to the remote server using azure data studio, running on my Laptop, which is running on Ubuntu 20.04. However, this is my requirement Connect to the MSSQL database programmatically from python Simply write a pandas dataframe as a table. I tried using pyodbc, but whenever I try pyodbc.connect(), I get an error saying InterfaceError: ('IM002', '[IM002] [unixODBC][Driver Manager]Data source name not found and no default driver specified (0) (SQLDriverConnect)') Is there some other library I should use instead of pyodbc? Here is my sample code. #!/usr/bin/env python3 # encoding: utf-8 import pyodbc credential='DRIVER=ODBC Driver 18 for SQL Server;SERVER=192.168.101.56;DATABASE=DEMAND_FORECAST;ENCRYPT=yes;UID=della;PWD=strong;Trusted_Connection=yes;' pyodbc.connect(str=credential) # Throwing error # InterfaceError: ('IM002', '[IM002] [unixODBC][Driver Manager]Data source name not found and no default # driver specified (0) (SQLDriverConnect)') pyodbc.drivers() # ['ODBC Driver 18 for SQL Server']
[ "Usually you need to do something like:\nPorbably you are missing the \"Driver={SQL Server} or you need to change it to something else\ncnxn = pyodbc.connect(\"Driver={SQL Server};\" \n \"Server=\" your server + \";\"\n \"Database=\" your db+ \";\"\n \"UID=\" +your uid + \";\"\n \"PWD=\" + your PWD\n )\n\n" ]
[ 1 ]
[]
[]
[ "pyodbc", "python", "sql_server" ]
stackoverflow_0074639811_pyodbc_python_sql_server.txt
Q: Dataframe line length I have a dataframe that contains x-y coordinates for a series of objects over time. I am trying to work out the total path length of each of these objects. I know the equation to work out the length of a line it (√((x2-x1))^2+(y2-y1))) + (√((x3 - x2))^2+(y3-y2)))... How would I work out the length of each individuals objects path from the data frame? Thanks in advance! A: I am unsure what exactly you mean by 'length of each individual object path'. If you want to add the distance between each 2 successive points in the dataframe, with the last point connecting to the first, use something like this: import math data = {"x": [0,2,6,4,0,3,6] "y": [0,4,2,3,4,1,7] } df = pd.DataFrame(data) points = list(df.to_records(index=False)) total_length = sum(math.hypot(p[i][0] - p[i-1][0], p[i][1] - p[i-1][1]) for i,_ in enumerate(p)) print(total_length) If instead you would like to find a 2d convex hull (i.e. the minimum bounding polygon) for the set of 2d points, use scipy library for this. Try this code out: from scipy.spatial import ConvexHull, convex_hull_plot_2d points_arr = np.array([[x,y] for x,y in points]) hull = ConvexHull(points_arr) import matplotlib.pyplot as plt plt.plot(points_arr[:,0], points_arr[:,1], 'o') hs = hull.simplices for simplex in hull.simplices: plt.plot(points_arr[simplex, 0], points_arr[simplex, 1], 'k-') total_length = sum(math.dist(hs[i], hs[i-1]) for i,_ in enumerate(hs)) print(total_length)
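If the frame has one row per object per time point, a vectorized per-object total path length can be sketched as below; the column names object_id, frame, x and y are assumptions about the data layout, and the tiny frame is made up to show the expected result.

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({
        "object_id": [1, 1, 1, 2, 2, 2],
        "frame":     [0, 1, 2, 0, 1, 2],
        "x":         [0.0, 3.0, 3.0, 1.0, 1.0, 4.0],
        "y":         [0.0, 4.0, 8.0, 1.0, 5.0, 5.0],
    })

    df = df.sort_values(["object_id", "frame"])
    dx = df.groupby("object_id")["x"].diff()
    dy = df.groupby("object_id")["y"].diff()
    step = np.hypot(dx, dy)                        # sqrt(dx**2 + dy**2) per time step
    path_length = step.groupby(df["object_id"]).sum()
    print(path_length)                             # object 1: 5 + 4 = 9, object 2: 4 + 3 = 7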
Dataframe line length
I have a dataframe that contains x-y coordinates for a series of objects over time. I am trying to work out the total path length of each of these objects. I know the equation for the length of a path: √((x2-x1)^2+(y2-y1)^2) + √((x3-x2)^2+(y3-y2)^2) + ... How would I work out the length of each individual object's path from the data frame? Thanks in advance!
[ "I am unsure what exactly you mean by 'length of each individual object path'. If you want to add the distance between each 2 successive points in the dataframe, with the last point connecting to the first, use something like this:\nimport math\n\ndata = {\"x\": [0,2,6,4,0,3,6]\n \"y\": [0,4,2,3,4,1,7]\n }\ndf = pd.DataFrame(data)\n\npoints = list(df.to_records(index=False))\ntotal_length = sum(math.hypot(p[i][0] - p[i-1][0], p[i][1] - p[i-1][1]) for i,_ in enumerate(p))\nprint(total_length)\n\nIf instead you would like to find a 2d convex hull (i.e. the minimum bounding polygon) for the set of 2d points, use scipy library for this. Try this code out:\nfrom scipy.spatial import ConvexHull, convex_hull_plot_2d\npoints_arr = np.array([[x,y] for x,y in points])\nhull = ConvexHull(points_arr)\n\nimport matplotlib.pyplot as plt\nplt.plot(points_arr[:,0], points_arr[:,1], 'o')\nhs = hull.simplices\nfor simplex in hull.simplices:\n plt.plot(points_arr[simplex, 0], points_arr[simplex, 1], 'k-')\n\ntotal_length = sum(math.dist(hs[i], hs[i-1]) for i,_ in enumerate(hs))\nprint(total_length)\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python", "trackpy" ]
stackoverflow_0074628856_dataframe_pandas_python_trackpy.txt
Q: Change subprocess.run architecture from x86 to arm I am running python on a m1 Mac with Rosetta, on a x86_64 architecture. During the execution I need to use subprocess.run to launch some external program. However that program need to run under arm64 architecture. Is there a possible solution for doing that? Simply running from an arm64 terminal does not do the trick, and it gets overridden by the Python architecture. I am using python==3.8.2. A: The root of the problem was actually not in the subprocess.run, the process I was trying to spawn was compiled such that the binary is multi-arch support, supporting both arm64 and x86_64 (the support for the latter was mainly launching the program and crashing after not supported error). As the call for subprocess.run came from a x86_64 architecture, the binary defaulted to that architecture. The solution was to just compile the binary only for arm64, with no multi-arch support. After that the process was spawned with the correct architecture, even tho the call was made from a different architecture.
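Before recompiling, one thing that may be worth trying on macOS is asking the arch launcher for a specific slice of the universal binary from the subprocess call itself; whether this works depends on the binary and on how it was built, so treat it purely as a hedged sketch with a placeholder command.

    import subprocess

    # /usr/bin/arch can select the arm64 slice of a universal binary even when the
    # calling Python runs as x86_64 under Rosetta; it fails if no arm64 slice exists.
    result = subprocess.run(
        ["arch", "-arm64", "/path/to/external_tool", "--some-flag"],
        capture_output=True,
        text=True,
    )
    print(result.returncode, result.stdout)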
Change subprocess.run architecture from x86 to arm
I am running python on a m1 Mac with Rosetta, on a x86_64 architecture. During the execution I need to use subprocess.run to launch some external program. However that program need to run under arm64 architecture. Is there a possible solution for doing that? Simply running from an arm64 terminal does not do the trick, and it gets overridden by the Python architecture. I am using python==3.8.2.
[ "The root of the problem was actually not in the subprocess.run, the process I was trying to spawn was compiled such that the binary is multi-arch support, supporting both arm64 and x86_64 (the support for the latter was mainly launching the program and crashing after not supported error).\nAs the call for subprocess.run came from a x86_64 architecture, the binary defaulted to that architecture.\nThe solution was to just compile the binary only for arm64, with no multi-arch support. After that the process was spawned with the correct architecture, even tho the call was made from a different architecture.\n" ]
[ 0 ]
[]
[]
[ "apple_m1", "macos", "python", "rosetta_2", "subprocess" ]
stackoverflow_0074615048_apple_m1_macos_python_rosetta_2_subprocess.txt
Q: Tweets scraping using Python selinum I am trying to scrape tweets under a hashtag using Python selinum and I use the following code to scroll down driver.execute_script('window.scrollTo(0,document.body.scrollHeight);') The problem is that selinum only scrapes shown tweets (only 3 tweets) and then scroll down to the end of the page and load more tweets and scrape 3 new tweets missing a lot of tweets in between. Is there a way to show all tweets and then scroll down and show all new tweets or at least some new tweets (I've a mechasm to filter already scraped rweets) ? Note I'm running my script on GCP VM so I can't rotate the screen. I think that I can make the script keeps pressing the down arrow by that I can display tweets one by one and scrape them and also keep loading more tweets, but I think that this will slow down the scraper so much. A: Scroll down the page by pixels, so the page will get the time to load the data, try the below code: last_height = driver.execute_script("return document.body.scrollHeight") while True: driver.execute_script("window.scrollBy(0, 800);") # you can increase or decrease the scrolling height, i.e - '800' sleep(1) new_height = driver.execute_script("return document.body.scrollHeight") if new_height == last_height: break last_height = new_height A: To scroll down page in selenium we need to write driver.execute_script( "window.scrollTo(" + str(data.location["x"]) + ", " + str(data.location["y"]) + ")") Here data is the tweets that we get
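Combining the two answers (small scroll steps plus harvesting whatever is currently rendered) can be sketched roughly as below; the article selector, the step count and the sleep are guesses, since Twitter's markup and load times change, and driver is the already-initialised WebDriver from the question.

    import time

    from selenium.webdriver.common.by import By
    from selenium.webdriver.common.keys import Keys

    seen = set()
    body = driver.find_element(By.TAG_NAME, "body")

    for _ in range(200):                               # arbitrary number of steps
        for article in driver.find_elements(By.CSS_SELECTOR, "article"):
            text = article.text
            if text and text not in seen:
                seen.add(text)
                # ...parse / store the tweet here...
        body.send_keys(Keys.PAGE_DOWN)                 # small step instead of jumping to the bottom
        time.sleep(0.5)                                # give new tweets time to load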
Tweets scraping using Python Selenium
I am trying to scrape tweets under a hashtag using Python Selenium, and I use the following code to scroll down driver.execute_script('window.scrollTo(0,document.body.scrollHeight);') The problem is that Selenium only scrapes the shown tweets (only 3 tweets) and then scrolls down to the end of the page, loads more tweets and scrapes 3 new tweets, missing a lot of tweets in between. Is there a way to show all tweets and then scroll down and show all new tweets, or at least some new tweets (I have a mechanism to filter already scraped tweets)? Note: I'm running my script on a GCP VM so I can't rotate the screen. I think I can make the script keep pressing the down arrow so that I can display tweets one by one and scrape them while also loading more tweets, but I think that this will slow down the scraper too much.
[ "Scroll down the page by pixels, so the page will get the time to load the data, try the below code:\nlast_height = driver.execute_script(\"return document.body.scrollHeight\")\nwhile True:\n driver.execute_script(\"window.scrollBy(0, 800);\") # you can increase or decrease the scrolling height, i.e - '800'\n sleep(1)\n new_height = driver.execute_script(\"return document.body.scrollHeight\")\n if new_height == last_height:\n break\n last_height = new_height\n\n", "To scroll down page in selenium we need to write\ndriver.execute_script(\n \"window.scrollTo(\" + str(data.location[\"x\"]) + \", \" + str(data.location[\"y\"]) + \")\")\n\nHere data is the tweets that we get\n" ]
[ 0, 0 ]
[]
[]
[ "python", "selenium", "twitter", "web_scraping" ]
stackoverflow_0074635176_python_selenium_twitter_web_scraping.txt
Q: Calling different DataFrames in a For Loop I am trying to use a for loop where a different DataFrame should be used in each iteration. It is thef'forecast_{s} below which is the problem. What I want is that first, the DataFrame forecast_24 should be used, then forecast_168 etc. I can't understand why this is not working. Does it have to do with that a string can't call the DataFrame? from sklearn.metrics import mean_squared_error from sklearn.metrics import mean_absolute_error naive_list = list(['24', '168', 'standard', 'custom']) error = np.zeros((1,8)) for column, i, p, s in zip(df, range(1,5), range(8), naive_list): rmse = mean_squared_error(df[column].iloc[500:], f'forecast_{s}'[column].iloc[500:], squared=False) mae = mean_absolute_error(df[column].iloc[500:], f'forecast_{s}'[column].iloc[500:]) error[p,2*p:2*p+2] = [rmse, mae] TypeError: string indices must be integers A: If I understand this correctly, you are trying to access the value forecast_24[column] at the first iteration of the loop and so on. Could you maybe do this instead: naive_list = list([forecast_24, forecast_168, forecast_standard, forecast_custom])
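Building on the answer, a dict keyed by the suffix keeps the name-to-DataFrame pairing explicit; a sketch in which the forecast_* frames, df and the metric calls come from the question, while the column pairing and the error-array shape are simplified for illustration.

    import numpy as np
    from sklearn.metrics import mean_absolute_error, mean_squared_error

    forecasts = {
        "24": forecast_24,
        "168": forecast_168,
        "standard": forecast_standard,
        "custom": forecast_custom,
    }

    error = np.zeros((len(forecasts), 2))
    for p, (s, fc) in enumerate(forecasts.items()):
        column = df.columns[p]                     # illustrative column pairing
        rmse = mean_squared_error(df[column].iloc[500:], fc[column].iloc[500:], squared=False)
        mae = mean_absolute_error(df[column].iloc[500:], fc[column].iloc[500:])
        error[p] = [rmse, mae]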
Calling different DataFrames in a For Loop
I am trying to use a for loop where a different DataFrame should be used in each iteration. It is thef'forecast_{s} below which is the problem. What I want is that first, the DataFrame forecast_24 should be used, then forecast_168 etc. I can't understand why this is not working. Does it have to do with that a string can't call the DataFrame? from sklearn.metrics import mean_squared_error from sklearn.metrics import mean_absolute_error naive_list = list(['24', '168', 'standard', 'custom']) error = np.zeros((1,8)) for column, i, p, s in zip(df, range(1,5), range(8), naive_list): rmse = mean_squared_error(df[column].iloc[500:], f'forecast_{s}'[column].iloc[500:], squared=False) mae = mean_absolute_error(df[column].iloc[500:], f'forecast_{s}'[column].iloc[500:]) error[p,2*p:2*p+2] = [rmse, mae] TypeError: string indices must be integers
[ "If I understand this correctly, you are trying to access the value forecast_24[column] at the first iteration of the loop and so on. Could you maybe do this instead:\nnaive_list = list([forecast_24, forecast_168, forecast_standard, forecast_custom])\n" ]
[ 0 ]
[]
[]
[ "for_loop", "python", "string" ]
stackoverflow_0074639643_for_loop_python_string.txt
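A minimal sketch of the answer's idea for the DataFrame question above: keep the DataFrame objects themselves (here in a name-to-frame dict) rather than formatted strings. forecast_24, forecast_168, forecast_standard and forecast_custom are assumed to already exist and to share df's columns; the error bookkeeping is simplified to a plain dict.

from sklearn.metrics import mean_squared_error, mean_absolute_error

# df and the forecast_* frames come from the question's own setup
forecasts = {'24': forecast_24, '168': forecast_168,
             'standard': forecast_standard, 'custom': forecast_custom}

errors = {}
for column, (label, fc) in zip(df, forecasts.items()):
    rmse = mean_squared_error(df[column].iloc[500:], fc[column].iloc[500:], squared=False)
    mae = mean_absolute_error(df[column].iloc[500:], fc[column].iloc[500:])
    errors[label] = (rmse, mae)  # e.g. errors['24'] == (rmse, mae)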
Q: '' disappears after encoding xml to string python In my xml file I have <?xml version="1.0" encoding="utf-8"?> at the beginning. But it disappears if I encode it to a string. By that I mean my string does not have it anymore at the beginning. I thought I can simply insert it in my string like in the code below (which worked when printing it), but when I wanted to save the string as a xml again on my laptop and open it, <?xml version="1.0" encoding="utf-8"?> wasn't there anymore. import xml.etree.ElementTree as ET tree = ET.parse(r'someData.xml') root = tree.getroot() xml_str = ET.tostring(root, encoding='unicode') xml_str = '<?xml version="1.0" encoding="utf-8"?>' + xml_str Does anybody know how to encode the xml to a string without loosing <?xml version="1.0" encoding="utf-8"?> OR how to save the string as xml without loosing it? My aim is to have it in my exported xml. Thank you in advance!! A: To make it more visible: Before I saved the file so: tree = ET.ElementTree(ET.fromstring(xml_str)) tree.write(open('test2.xml', 'a'), encoding='unicode') But now, I save it like this so I don't miss the declaration at the beginning of the xml file: tree = ET.ElementTree(ET.fromstring(xml_str)) tree.write(open('test.xml', 'wb'), encoding="utf-8", xml_declaration=True)
'<?xml version="1.0" encoding="utf-8"?>' disappears after encoding xml to string python
In my xml file I have <?xml version="1.0" encoding="utf-8"?> at the beginning. But it disappears if I encode it to a string. By that I mean my string does not have it anymore at the beginning. I thought I could simply insert it in my string like in the code below (which worked when printing it), but when I wanted to save the string as an xml file again on my laptop and open it, <?xml version="1.0" encoding="utf-8"?> wasn't there anymore. import xml.etree.ElementTree as ET tree = ET.parse(r'someData.xml') root = tree.getroot() xml_str = ET.tostring(root, encoding='unicode') xml_str = '<?xml version="1.0" encoding="utf-8"?>' + xml_str Does anybody know how to encode the xml to a string without losing <?xml version="1.0" encoding="utf-8"?> OR how to save the string as xml without losing it? My aim is to have it in my exported xml. Thank you in advance!!
[ "To make it more visible:\nBefore I saved the file so:\ntree = ET.ElementTree(ET.fromstring(xml_str)) \ntree.write(open('test2.xml', 'a'), encoding='unicode')\n\nBut now, I save it like this so I don't miss the declaration at the beginning of the xml file:\ntree = ET.ElementTree(ET.fromstring(xml_str)) \ntree.write(open('test.xml', 'wb'), encoding=\"utf-8\", xml_declaration=True)\n\n" ]
[ 0 ]
[]
[]
[ "lxml", "python", "utf_8", "xml", "xml_encoding" ]
stackoverflow_0074625433_lxml_python_utf_8_xml_xml_encoding.txt
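A short sketch related to the XML question above, assuming Python 3.8 or newer, where both tostring() and tree.write() accept xml_declaration, so there is no need to prepend the declaration by hand.

import xml.etree.ElementTree as ET

tree = ET.parse('someData.xml')
root = tree.getroot()

# string output with the declaration included (Python 3.8+);
# note: with encoding='unicode' the declared encoding follows the platform default
xml_str = ET.tostring(root, encoding='unicode', xml_declaration=True)

# file output with the declaration included
tree.write('exported.xml', encoding='utf-8', xml_declaration=True)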
Q: find a function of the form 1/x like to fit data python I am trying to fit a function of the form f(x)=1/x or somthing like f(x)=1/poly(x) to this data: data of points I try to use scipy.optimize.curve_fit which do works but not return the regression itself, I want to know the function itself that succeeded in fitting the data. Also, I thought about an option of rotating and using np.polyfit but I don't really know how to invert it back to a function of that form. Edit This is my data for about 12 points: x_points= [8, 9, 10, 11, 12, 13, 14, 16, 17, 20, 22, 26] y_points=[26, 22, 20, 17, 16, 14, 13, 12, 11, 10, 9, 8] I can generate more data but not more than about 100 points, it is more of a sub-problem to the original problem, for confusion reasons only, I will not explain here where the data came from, it will be a tedious explanation and won't benefit to the understanding the problem A: If you have no model coming from physics try various mathemetical models up to find a satisfising one. For example with the exponential function : Don't be surprised if the values of the parameters that you get with your software are slightly different from the above values (a, b, c). The criteria of fitting implicitly set in your software is probably different from the criteria of fitting set in my software. The method of regression used above is a Least Mean Square Errors wrt a linear integral equation to which the exponential equation is solution. Ref.: https://fr.scribd.com/doc/14674814/Regressions-et-equations-integrales This non-standard method isn't iterative and doesn't require initial "guessed" values of parameters. Below the MathCad code and numerical example :
find a function of the form 1/x-like to fit data in python
I am trying to fit a function of the form f(x)=1/x or something like f(x)=1/poly(x) to this data: data of points I try to use scipy.optimize.curve_fit, which does work but does not return the regression itself; I want to know the function itself that succeeded in fitting the data. Also, I thought about an option of rotating and using np.polyfit but I don't really know how to invert it back to a function of that form. Edit This is my data for about 12 points: x_points= [8, 9, 10, 11, 12, 13, 14, 16, 17, 20, 22, 26] y_points=[26, 22, 20, 17, 16, 14, 13, 12, 11, 10, 9, 8] I can generate more data but not more than about 100 points; it is more of a sub-problem to the original problem. To avoid confusion, I will not explain here where the data came from; it would be a tedious explanation and won't benefit the understanding of the problem
[ "If you have no model coming from physics try various mathemetical models up to find a satisfising one. For example with the exponential function :\n\nDon't be surprised if the values of the parameters that you get with your software are slightly different from the above values (a, b, c). The criteria of fitting implicitly set in your software is probably different from the criteria of fitting set in my software.\nThe method of regression used above is a Least Mean Square Errors wrt a linear integral equation to which the exponential equation is solution. Ref.: https://fr.scribd.com/doc/14674814/Regressions-et-equations-integrales\nThis non-standard method isn't iterative and doesn't require initial \"guessed\" values of parameters.\nBelow the MathCad code and numerical example :\n\n\n" ]
[ 0 ]
[]
[]
[ "curve_fitting", "python", "regression" ]
stackoverflow_0074616037_curve_fitting_python_regression.txt
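For the fitting question above, a minimal sketch assuming a model of the form f(x) = a/x + b: the parameters returned by curve_fit are exactly the fitted function that was asked for.

import numpy as np
from scipy.optimize import curve_fit

x = np.array([8, 9, 10, 11, 12, 13, 14, 16, 17, 20, 22, 26], dtype=float)
y = np.array([26, 22, 20, 17, 16, 14, 13, 12, 11, 10, 9, 8], dtype=float)

def model(x, a, b):
    # assumed functional form; a 1/poly(x) model would need a different definition
    return a / x + b

(a, b), _ = curve_fit(model, x, y)
print(f"fitted function: f(x) = {a:.3f}/x + {b:.3f}")
y_fit = model(x, a, b)  # evaluate the fitted function wherever needed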
Q: Query influxdb with special character I am trying to perform. simple get on influxdb using python. The connection works great and I am able to query several values. However, I have one of them which is reported as homeassistant.autogen.°C. When I try to query it, I always get influxdb.exceptions.InfluxDBClientError: 400: {"error":"error parsing query: found \u00b0, expected identifier at line 1, char 43"} The code that is use is: client = InfluxDBClient(host='192.168.1.x', port=8086, username='user', password='password') results = client.query(r'SELECT "value" FROM homeassistant.autogen."°C" WHERE entity_id = sensor.x_temperature') I already tried to escape and pass it through quotes but nothing seems to work. I cannot change how the value is inserted in influxdb. A: Have you tried escaping with a backslash? client = InfluxDBClient(host='192.168.1.x', port=8086, username='user', password='password') results = client.query(r'SELECT "value" FROM homeassistant.autogen.\°C WHERE entity_id = sensor.x_temperature') or containing the entire field name in double quotes, e.g.: client = InfluxDBClient(host='192.168.1.x', port=8086, username='user', password='password') results = client.query(r'SELECT "value" FROM "homeassistant.autogen.°C" WHERE entity_id = sensor.x_temperature') https://docs.influxdata.com/influxdb/v1.8/write_protocols/line_protocol_reference/ A: You can use the \ character to escape the " character in your query. This will tell InfluxDB to treat the " character as a literal character instead of as a delimiter for a string. Here is an example of how you can escape the " character in your query: results = client.query(r'SELECT "value" FROM homeassistant.autogen."\°C" WHERE entity_id = sensor.x_temperature') Alternatively, you can use single quotes (') instead of double quotes (") to delimit your string in the query. This will also allow you to include the " character in the string without having to escape it. Here is an example of how you can use single quotes in your query: results = client.query(r"SELECT 'value' FROM homeassistant.autogen.'°C' WHERE entity_id = sensor.x_temperature") A: It looks like the problem is with the °C character in your query. This character is not a valid identifier in InfluxDB, which is why you are seeing the error. To fix this, you can either escape the °C character using a backslash (\), or you can enclose the identifier in backticks (```). Here is how the query would look with each of these options: # Escaping the °C character results = client.query(r'SELECT "value" FROM homeassistant.autogen."\°C" WHERE entity_id = sensor.x_temperature') # Enclosing the identifier in backticks results = client.query(r'SELECT "value" FROM homeassistant.autogen.`°C` WHERE entity_id = sensor.x_temperature') Both of these options should allow you to query the homeassistant.autogen.°C measurement in InfluxDB. Let me know if you have any other questions.
Query influxdb with special character
I am trying to perform a simple GET on InfluxDB using Python. The connection works great and I am able to query several values. However, I have one of them which is reported as homeassistant.autogen.°C. When I try to query it, I always get influxdb.exceptions.InfluxDBClientError: 400: {"error":"error parsing query: found \u00b0, expected identifier at line 1, char 43"} The code that I use is: client = InfluxDBClient(host='192.168.1.x', port=8086, username='user', password='password') results = client.query(r'SELECT "value" FROM homeassistant.autogen."°C" WHERE entity_id = sensor.x_temperature') I already tried to escape it and pass it through quotes but nothing seems to work. I cannot change how the value is inserted in influxdb.
[ "Have you tried escaping with a backslash?\nclient = InfluxDBClient(host='192.168.1.x', port=8086, username='user', password='password')\nresults = client.query(r'SELECT \"value\" FROM homeassistant.autogen.\\°C WHERE entity_id = sensor.x_temperature')\n\nor containing the entire field name in double quotes, e.g.:\nclient = InfluxDBClient(host='192.168.1.x', port=8086, username='user', password='password')\nresults = client.query(r'SELECT \"value\" FROM \"homeassistant.autogen.°C\" WHERE entity_id = sensor.x_temperature')\n\nhttps://docs.influxdata.com/influxdb/v1.8/write_protocols/line_protocol_reference/\n", "You can use the \\ character to escape the \" character in your query. This will tell InfluxDB to treat the \" character as a literal character instead of as a delimiter for a string.\nHere is an example of how you can escape the \" character in your query:\nresults = client.query(r'SELECT \"value\" FROM homeassistant.autogen.\"\\°C\" WHERE entity_id = sensor.x_temperature')\n\nAlternatively, you can use single quotes (') instead of double quotes (\") to delimit your string in the query. This will also allow you to include the \" character in the string without having to escape it.\nHere is an example of how you can use single quotes in your query:\nresults = client.query(r\"SELECT 'value' FROM homeassistant.autogen.'°C' WHERE entity_id = sensor.x_temperature\")\n\n", "It looks like the problem is with the °C character in your query. This character is not a valid identifier in InfluxDB, which is why you are seeing the error.\nTo fix this, you can either escape the °C character using a backslash (\\), or you can enclose the identifier in backticks (```). Here is how the query would look with each of these options:\n# Escaping the °C character\nresults = client.query(r'SELECT \"value\" FROM homeassistant.autogen.\"\\°C\" WHERE entity_id = sensor.x_temperature')\n\n# Enclosing the identifier in backticks\nresults = client.query(r'SELECT \"value\" FROM homeassistant.autogen.`°C` WHERE entity_id = sensor.x_temperature')\n\nBoth of these options should allow you to query the homeassistant.autogen.°C measurement in InfluxDB. Let me know if you have any other questions.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "influxdb", "python" ]
stackoverflow_0074517426_influxdb_python.txt
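A hedged variation on the answers above for the InfluxDB question: in InfluxQL a fully-qualified measurement is normally quoted one identifier at a time ("db"."rp"."measurement"), and string values in the WHERE clause take single quotes. Whether this matches the schema here is an assumption worth verifying against your own data.

from influxdb import InfluxDBClient

client = InfluxDBClient(host='192.168.1.x', port=8086,
                        username='user', password='password')

# quote each identifier separately and single-quote the tag value (assumption)
query = ('SELECT "value" FROM "homeassistant"."autogen"."°C" '
         "WHERE \"entity_id\" = 'sensor.x_temperature'")
results = client.query(query)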
Q: Sum every N col and row in python I have a data list more than 100 row in csv file sth like this: A B C D E F H 0 9 0 9 0 9 0 0 9 0 0 0 9 0 0 0 0 0 0 9 0 0 0 0 0 0 9 0 0 0 0 0 0 9 0 0 9 0 9 0 9 0 0 9 0 0 0 9 0 0 0 0 0 0 9 0 0 0 0 0 0 9 0 0 0 0 0 0 9 0 I need to sum each 5 cell of a column And write the answer in a new row sth like this will be: Note: its just for the first three column and N = 5: J K L 0 18 0 0 18 0 The code that i use is below but I dont now how to sum every 5 cells of a column and write the output in a new column : import pandas as pd df = pd.read_csv(filename) df[df.columns[4:]].sum() A: You could use .groupby to do that: N = 5 df_sum = df.groupby([n // N for n in range(df.shape[0])]).sum() The list used to group over uses floor division to build blocks of 5 consecutive rows. If you're using the standard index you could do df_sum = df.groupby(df.index // N).sum() instead. Result for your sample: A B C D E F H 0 0 18 0 9 0 45 0 1 0 18 0 9 0 45 0
Sum every N col and row in python
I have a data list more than 100 row in csv file sth like this: A B C D E F H 0 9 0 9 0 9 0 0 9 0 0 0 9 0 0 0 0 0 0 9 0 0 0 0 0 0 9 0 0 0 0 0 0 9 0 0 9 0 9 0 9 0 0 9 0 0 0 9 0 0 0 0 0 0 9 0 0 0 0 0 0 9 0 0 0 0 0 0 9 0 I need to sum each 5 cell of a column And write the answer in a new row sth like this will be: Note: its just for the first three column and N = 5: J K L 0 18 0 0 18 0 The code that i use is below but I dont now how to sum every 5 cells of a column and write the output in a new column : import pandas as pd df = pd.read_csv(filename) df[df.columns[4:]].sum()
[ "You could use .groupby to do that:\nN = 5\ndf_sum = df.groupby([n // N for n in range(df.shape[0])]).sum()\n\nThe list used to group over uses floor division to build blocks of 5 consecutive rows. If you're using the standard index you could do\ndf_sum = df.groupby(df.index // N).sum()\n\ninstead.\nResult for your sample:\n A B C D E F H\n0 0 18 0 9 0 45 0\n1 0 18 0 9 0 45 0\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "python" ]
stackoverflow_0074638739_arrays_python.txt
Q: How to periodically spawn objects in Pygame elif x1 == foodx2 and y1 == foody2: foodx2 = round(random.randrange(0, dis_width - snake_block) / 10.0) * 10.0 foody2 = round(random.randrange(0, dis_height - snake_block) / 10.0) * 10.0 Length_of_snake += 2 this is the code for food gen after it has been eaten once how do I delay its spawning by 10 seconds For eg. I want a food type to spawn 10 seconds after being eaten by the snake but am unable to do so without also stopping the snake. I tried using the sleep() function but it also froze the snake on the spot for 10 seconds rather than letting the snake move freely but not spawning the food for 10 seconds. A: you can use pygame timer function to set the spawn time for particular objects like food, enemies etc. It has a particular time frame to be triggered. In below example, food is triggered every 250ms. # Create a custom event for adding a new food ADDFOOD = pygame.USEREVENT + 1 pygame.time.set_timer(ADDFOOD, 250)
How to periodically spawn objects in Pygame
elif x1 == foodx2 and y1 == foody2: foodx2 = round(random.randrange(0, dis_width - snake_block) / 10.0) * 10.0 foody2 = round(random.randrange(0, dis_height - snake_block) / 10.0) * 10.0 Length_of_snake += 2 This is the code for food generation after it has been eaten once; how do I delay its spawning by 10 seconds? For example, I want a food type to spawn 10 seconds after being eaten by the snake but am unable to do so without also stopping the snake. I tried using the sleep() function but it also froze the snake on the spot for 10 seconds rather than letting the snake move freely but not spawning the food for 10 seconds.
[ "you can use pygame timer function to set the spawn time for particular objects like food, enemies etc. It has a particular time frame to be triggered. In below example, food is triggered every 250ms.\n# Create a custom event for adding a new food\nADDFOOD = pygame.USEREVENT + 1\npygame.time.set_timer(ADDFOOD, 250)\n\n" ]
[ 0 ]
[]
[]
[ "pygame", "pygame_clock", "pygame_tick", "python" ]
stackoverflow_0074639964_pygame_pygame_clock_pygame_tick_python.txt
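For the pygame question above, a small self-contained sketch of the timer-event approach from the answer: hide the food when it is eaten, arm a timer for the custom event, and respawn the food when that event fires, so the game loop (and the snake) keeps running in the meantime. The "eating" is simulated with the space bar here; swap in the real collision check from the question.

import random
import pygame

pygame.init()
dis_width, dis_height, snake_block = 600, 400, 10
screen = pygame.display.set_mode((dis_width, dis_height))
clock = pygame.time.Clock()

RESPAWN_FOOD = pygame.USEREVENT + 1
food = (100, 100)  # (x, y), or None while waiting to respawn

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == RESPAWN_FOOD:
            food = (round(random.randrange(0, dis_width - snake_block) / 10.0) * 10.0,
                    round(random.randrange(0, dis_height - snake_block) / 10.0) * 10.0)
            pygame.time.set_timer(RESPAWN_FOOD, 0)  # cancel so it only fires once

    # stand-in for the real "snake ate the food" collision check
    if food is not None and pygame.key.get_pressed()[pygame.K_SPACE]:
        food = None
        pygame.time.set_timer(RESPAWN_FOOD, 10_000)  # respawn 10 seconds from now

    screen.fill((0, 0, 0))
    if food is not None:
        pygame.draw.rect(screen, (0, 255, 0), (*food, snake_block, snake_block))
    pygame.display.flip()
    clock.tick(30)

pygame.quit()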
Q: the method form.is_valid always returns always I do my own project and I'm stuck on this problem. My form always returns False when calling form.is_valid. This is my code: forms.py: I need to let user to choose the size he need from the list of available sizes, so I retrieve sizes from the database and then make a ChoiceField with variants of these sizes. class CartAddProductForm(ModelForm): class Meta: model = Product fields = ['sizes'] def __init__(self, pk: int, *args, **kwargs) -> None: super(CartAddProductForm, self).__init__(*args, **kwargs) sizes = tuple(Product.objects.get(pk=pk).sizes) sizes_list = [] for item in sizes: sizes_list.append((item, item)) self.fields['sizes'] = forms.ChoiceField(choices=sizes_list) views.py: This func creates the form def product_detail(request: WSGIRequest, product_id: int, product_slug: str) -> HttpResponse: product = get_object_or_404(Product, id=product_id, slug=product_slug, available=True) categories = Category.objects.all() cart_product_form = CartAddProductForm(instance=product, pk=product_id) return render(request, 'shop/product/detail.html', {'product': product, 'categories': categories, 'cart_product_form': cart_product_form}) When user pushes the button 'Add to cart' he is redirected to the other page with details of his cart. So here's the problem: @require_POST def add_to_cart(request: WSGIRequest, product_id: int) -> HttpResponseRedirect: cart = Cart(request) product = get_object_or_404(Product, id=product_id) form = CartAddProductForm(instance=product, pk=product_id) if form.is_valid(): cd = form.cleaned_data print(f'cd = {cd}') cart.add(product=product, quantity=cd['quantity'], update_quantity=cd['update']) else: print(f'form.is_valid = {form.is_valid()}') print(f'request.POST = {request.POST}') print(f'form.errors = {form.errors}') print(f'form.non_field_errors = {form.non_field_errors}') field_errors = [(field.label, field.errors) for field in form] print(f'field_errors = {field_errors}') return redirect('cart:cart_detail') my template: {% extends "shop/base.html" %} <head> <meta charset="UTF-8"> <title>Detail</title> </head> <body> {% block title %} {{ product.name }} {% endblock %} {% block content %} <br> <b>{{ product.name }} </b> <br> <i>{{ product.description }} </i> <br> {{ product.price }} <br> <img src="{{ product.image.url }}" width="300" height="500"> <br> Доступные размеры: <br> {{ product.sizes }}<br> <form action="{% url "cart:add_to_cart" product.id %}" method="post"> {{ cart_product_form }} {{ form.as_p }} {% csrf_token %} <input type="submit" value="Add to cart"> </form> {% endblock %} </body> urls.py: urlpatterns = [ re_path(r'^$', views.cart_detail, name='cart_detail'), re_path(r'^add/(?P<product_id>\d+)/$', views.add_to_cart, name='add_to_cart'), re_path(r'^remove/(?P<product_id>\d+)/$', views.cart_remove, name='cart_remove'), ] I tried to print errors but got nothing: print(f'form.is_valid = {form.is_valid()}') print(f'request.POST = {request.POST}') print(f'form.errors = {form.errors}') print(f'form.non_field_errors = {form.non_field_errors}') field_errors = [(field.label, field.errors) for field in form] print(f'field_errors = {field_errors}') output: form.is_valid = False request.POST = \<QueryDict: {'sizes': \['S'\], 'csrfmiddlewaretoken': \['mytoken'\]}\> form.errors = form.non_field_errors = \<bound method BaseForm.non_field_errors of \<CartAddProductForm bound=False, valid=False, fields=(sizes)\>\> field_errors = \[('Sizes', \[\])\] Also I have to say that after redirection to the cart page there are 
no detail on it, it's empty and no items there. What's wrong and why my form returns False? A: You need to pass the data, so: form = CartAddProductForm( instance=product, pk=product_id, data=request.POST, )
the method form.is_valid always returns False
I do my own project and I'm stuck on this problem. My form always returns False when calling form.is_valid. This is my code: forms.py: I need to let user to choose the size he need from the list of available sizes, so I retrieve sizes from the database and then make a ChoiceField with variants of these sizes. class CartAddProductForm(ModelForm): class Meta: model = Product fields = ['sizes'] def __init__(self, pk: int, *args, **kwargs) -> None: super(CartAddProductForm, self).__init__(*args, **kwargs) sizes = tuple(Product.objects.get(pk=pk).sizes) sizes_list = [] for item in sizes: sizes_list.append((item, item)) self.fields['sizes'] = forms.ChoiceField(choices=sizes_list) views.py: This func creates the form def product_detail(request: WSGIRequest, product_id: int, product_slug: str) -> HttpResponse: product = get_object_or_404(Product, id=product_id, slug=product_slug, available=True) categories = Category.objects.all() cart_product_form = CartAddProductForm(instance=product, pk=product_id) return render(request, 'shop/product/detail.html', {'product': product, 'categories': categories, 'cart_product_form': cart_product_form}) When user pushes the button 'Add to cart' he is redirected to the other page with details of his cart. So here's the problem: @require_POST def add_to_cart(request: WSGIRequest, product_id: int) -> HttpResponseRedirect: cart = Cart(request) product = get_object_or_404(Product, id=product_id) form = CartAddProductForm(instance=product, pk=product_id) if form.is_valid(): cd = form.cleaned_data print(f'cd = {cd}') cart.add(product=product, quantity=cd['quantity'], update_quantity=cd['update']) else: print(f'form.is_valid = {form.is_valid()}') print(f'request.POST = {request.POST}') print(f'form.errors = {form.errors}') print(f'form.non_field_errors = {form.non_field_errors}') field_errors = [(field.label, field.errors) for field in form] print(f'field_errors = {field_errors}') return redirect('cart:cart_detail') my template: {% extends "shop/base.html" %} <head> <meta charset="UTF-8"> <title>Detail</title> </head> <body> {% block title %} {{ product.name }} {% endblock %} {% block content %} <br> <b>{{ product.name }} </b> <br> <i>{{ product.description }} </i> <br> {{ product.price }} <br> <img src="{{ product.image.url }}" width="300" height="500"> <br> Доступные размеры: <br> {{ product.sizes }}<br> <form action="{% url "cart:add_to_cart" product.id %}" method="post"> {{ cart_product_form }} {{ form.as_p }} {% csrf_token %} <input type="submit" value="Add to cart"> </form> {% endblock %} </body> urls.py: urlpatterns = [ re_path(r'^$', views.cart_detail, name='cart_detail'), re_path(r'^add/(?P<product_id>\d+)/$', views.add_to_cart, name='add_to_cart'), re_path(r'^remove/(?P<product_id>\d+)/$', views.cart_remove, name='cart_remove'), ] I tried to print errors but got nothing: print(f'form.is_valid = {form.is_valid()}') print(f'request.POST = {request.POST}') print(f'form.errors = {form.errors}') print(f'form.non_field_errors = {form.non_field_errors}') field_errors = [(field.label, field.errors) for field in form] print(f'field_errors = {field_errors}') output: form.is_valid = False request.POST = \<QueryDict: {'sizes': \['S'\], 'csrfmiddlewaretoken': \['mytoken'\]}\> form.errors = form.non_field_errors = \<bound method BaseForm.non_field_errors of \<CartAddProductForm bound=False, valid=False, fields=(sizes)\>\> field_errors = \[('Sizes', \[\])\] Also I have to say that after redirection to the cart page there are no detail on it, it's empty and no items there. 
What's wrong, and why does my form return False?
[ "You need to pass the data, so:\nform = CartAddProductForm(\n instance=product,\n pk=product_id,\n data=request.POST,\n)\n" ]
[ 0 ]
[]
[]
[ "django", "django_4.0", "python", "python_3.x" ]
stackoverflow_0074630614_django_django_4.0_python_python_3.x.txt
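A sketch of the corrected view for the Django form question above, with the binding from the answer applied. Cart, Product and CartAddProductForm are the project's own classes from the question and are assumed to be importable; note that the form as defined only exposes a 'sizes' field, so cleaned_data will not contain 'quantity' or 'update', and the cart.add() arguments below are placeholders to adapt to the real Cart.add signature.

from django.shortcuts import get_object_or_404, redirect
from django.views.decorators.http import require_POST

@require_POST
def add_to_cart(request, product_id):
    cart = Cart(request)
    product = get_object_or_404(Product, id=product_id)
    # the key change: bind the submitted data to the form
    form = CartAddProductForm(instance=product, pk=product_id, data=request.POST)
    if form.is_valid():
        size = form.cleaned_data['sizes']
        cart.add(product=product, size=size)  # placeholder call; match your Cart.add signature
    return redirect('cart:cart_detail')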
Q: Image cannot be displayed after saving the data in Django. The code for it is given below Problem Statement: In shown image, default profile pic is visible but i need uploaded photo to be displayed here when i upload and saved in the database.I am using django framework. What have I Tried here? In setting.html file,The below is the HTML code for what is have tried to display image, bio, location. The problem may be at first div image tag. Though the default profile picture is visible but which i have uploaded is not displaying in the page nor able to save in database. <form action="" method ="POST"> {% csrf_token %} <div class="grid grid-cols-2 gap-3 lg:p-6 p-4"> <div class="col-span-2"> <label for="">Profile Image</label> <img width = "100" height = "100" src="{{user_profile.profileimg.url}}"/> <input type="file" name='image' value = "" placeholder="No file chosen" class="shadow-none bg-gray-100"> </div> <div class="col-span-2"> <label for="about">Bio</label> <textarea id="about" name="bio" rows="3" class="shadow-none bg-gray-100">{{user_profile.bio}}</textarea> </div> <div class="col-span-2"> <label for=""> Location</label> <input type="text" name = "location" value = "{{user_profile.location}}" placeholder="" class="shadow-none bg-gray-100"> </div> </div> <div class="bg-gray-10 p-6 pt-0 flex justify-end space-x-3"> <button class="p-2 px-4 rounded bg-gray-50 text-red-500"> <a href="/"> Cancel</a> </button> <button type="submit" class="button bg-blue-700"> Save </button> </div> </form> In views.py file,the below function is the views file settings page logic for displaying the image, bio, location. If the image is not uploaded, it will be default profile picture. If image is not none then the uploaded image is displayed. But I am not getting what is the mistake here in my code. @login_required(login_url='signin') def settings(request): user_profile = Profile.objects.get(user = request.user) if request.method == "POST": if request.FILES.get('image')== None: image = user_profile.profileimg bio = request.POST['bio'] location = request.POST['location'] else: image = request.FILES.get('image') bio = request.POST['bio'] location = request.POST['location'] user_profile.profileimg = image user_profile.bio = bio user_profile.location = location user_profile.save() return redirect("settings") return render(request, 'setting.html', {'user_profile': user_profile}) I am expecting to know what the mistake in the code is. Only profileimg cannot be saved in database. the other data is able to save like bio, location. The below is the model.py file. from django.db import models from django.contrib.auth import get_user_model User = get_user_model() # Create your models here. class Profile(models.Model): user = models.ForeignKey(User, on_delete = models.CASCADE) id_user = models.IntegerField() bio = models.TextField(blank=True) profileimg = models.ImageField(upload_to = 'profile_images', default= 'blank-profile-picture.png') location = models.CharField(max_length = 100, blank = True) def __str__(self): return self.user.username The below is the media folder in settings.py file. 
INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'core', ] STATIC_URL = 'static/' STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles') STATICFILES_DIRS = (os.path.join(BASE_DIR, 'static'),) MEDIA_URL ='/media/' MEDIA_ROOT = os.path.join(BASE_DIR, 'media') urls.py from django.contrib import admin from django.urls import path, include from django.conf import settings from django.conf.urls.static import static urlpatterns = [ path('admin/', admin.site.urls), path('', include('core.urls')), ] + static(settings.MEDIA_URL,document_root=settings.MEDIA_ROOT) Photo is not displaying. A: You're trying to fetch the image in the wrong way. Instead you can use this src="/media/{{user_profile.profileimg}}" If it doesn't work try removing "/" before media. Please reply to this message if the issue still persist. A: Oh, Holy Shit. How could I miss it. My friend problem is with view. Add this thing to your function: if request.method == "POST" and "image" in request.FILES: And while requesting for the image place the image only in the variable in list as given below. image= request.FILES['image'] I hope this should resolve. Please revert back if the issue still persist. A: Simply you can try this way: <img width = "100" height = "100" src="/media/{{user_profile.profileimg}}"/> And now that image will display Note: If your uploaded image is saving in media folder then above code will work. According to your model field profileimg, you should add forward slash to upload_to: It must be like below: upload_to='profile_images/' And ensure that you have added this in your main urls: urlpatterns = [ ]+static(settings.MEDIA_URL,document_root=settings.MEDIA_ROOT) A: Thank you for your answers. I found solution <form action="" method ="POST" enctype = "multipart/form-data"> This enctype is missing in setting.html file. This works correctly and image displays correctly.
Image cannot be displayed after saving the data in Django. The code for it is given below
Problem Statement: In shown image, default profile pic is visible but i need uploaded photo to be displayed here when i upload and saved in the database.I am using django framework. What have I Tried here? In setting.html file,The below is the HTML code for what is have tried to display image, bio, location. The problem may be at first div image tag. Though the default profile picture is visible but which i have uploaded is not displaying in the page nor able to save in database. <form action="" method ="POST"> {% csrf_token %} <div class="grid grid-cols-2 gap-3 lg:p-6 p-4"> <div class="col-span-2"> <label for="">Profile Image</label> <img width = "100" height = "100" src="{{user_profile.profileimg.url}}"/> <input type="file" name='image' value = "" placeholder="No file chosen" class="shadow-none bg-gray-100"> </div> <div class="col-span-2"> <label for="about">Bio</label> <textarea id="about" name="bio" rows="3" class="shadow-none bg-gray-100">{{user_profile.bio}}</textarea> </div> <div class="col-span-2"> <label for=""> Location</label> <input type="text" name = "location" value = "{{user_profile.location}}" placeholder="" class="shadow-none bg-gray-100"> </div> </div> <div class="bg-gray-10 p-6 pt-0 flex justify-end space-x-3"> <button class="p-2 px-4 rounded bg-gray-50 text-red-500"> <a href="/"> Cancel</a> </button> <button type="submit" class="button bg-blue-700"> Save </button> </div> </form> In views.py file,the below function is the views file settings page logic for displaying the image, bio, location. If the image is not uploaded, it will be default profile picture. If image is not none then the uploaded image is displayed. But I am not getting what is the mistake here in my code. @login_required(login_url='signin') def settings(request): user_profile = Profile.objects.get(user = request.user) if request.method == "POST": if request.FILES.get('image')== None: image = user_profile.profileimg bio = request.POST['bio'] location = request.POST['location'] else: image = request.FILES.get('image') bio = request.POST['bio'] location = request.POST['location'] user_profile.profileimg = image user_profile.bio = bio user_profile.location = location user_profile.save() return redirect("settings") return render(request, 'setting.html', {'user_profile': user_profile}) I am expecting to know what the mistake in the code is. Only profileimg cannot be saved in database. the other data is able to save like bio, location. The below is the model.py file. from django.db import models from django.contrib.auth import get_user_model User = get_user_model() # Create your models here. class Profile(models.Model): user = models.ForeignKey(User, on_delete = models.CASCADE) id_user = models.IntegerField() bio = models.TextField(blank=True) profileimg = models.ImageField(upload_to = 'profile_images', default= 'blank-profile-picture.png') location = models.CharField(max_length = 100, blank = True) def __str__(self): return self.user.username The below is the media folder in settings.py file. 
INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'core', ] STATIC_URL = 'static/' STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles') STATICFILES_DIRS = (os.path.join(BASE_DIR, 'static'),) MEDIA_URL ='/media/' MEDIA_ROOT = os.path.join(BASE_DIR, 'media') urls.py from django.contrib import admin from django.urls import path, include from django.conf import settings from django.conf.urls.static import static urlpatterns = [ path('admin/', admin.site.urls), path('', include('core.urls')), ] + static(settings.MEDIA_URL,document_root=settings.MEDIA_ROOT) Photo is not displaying.
[ "You're trying to fetch the image in the wrong way.\nInstead you can use this\nsrc=\"/media/{{user_profile.profileimg}}\"\n\nIf it doesn't work try removing \"/\" before media.\nPlease reply to this message if the issue still persist.\n", "Oh, Holy Shit.\nHow could I miss it.\nMy friend problem is with view.\nAdd this thing to your function:\n if request.method == \"POST\" and \"image\" in request.FILES:\n\nAnd while requesting for the image place the image only in the variable in list as given below.\n image= request.FILES['image']\n\nI hope this should resolve. Please revert back if the issue still persist.\n", "Simply you can try this way:\n<img width = \"100\" height = \"100\" src=\"/media/{{user_profile.profileimg}}\"/>\n\nAnd now that image will display\nNote: If your uploaded image is saving in media folder then above code will work.\nAccording to your model field profileimg, you should add forward slash to upload_to:\nIt must be like below:\nupload_to='profile_images/'\n\nAnd ensure that you have added this in your main urls:\nurlpatterns = [\n\n]+static(settings.MEDIA_URL,document_root=settings.MEDIA_ROOT)\n\n", "Thank you for your answers. I found solution\n<form action=\"\" method =\"POST\" enctype = \"multipart/form-data\">\n\nThis enctype is missing in setting.html file. This works correctly and image displays correctly.\n" ]
[ 1, 1, 0, 0 ]
[]
[]
[ "django", "django_views", "html", "image", "python" ]
stackoverflow_0074637830_django_django_views_html_image_python.txt
Q: How to load only the most recent file from a directory where the filenames startswith the date? I have files in one directory/folder named: 2022-07-31_DATA_GVAX_ARPA_COMBINED.csv 2022-08-31_DATA_GVAX_ARPA_COMBINED.csv 2022-09-30_DATA_GVAX_ARPA_COMBINED.csv The folder will be updated with each month's file in the same format as above eg.: 2022-10-31_DATA_GVAX_ARPA_COMBINED.csv 2022-11-30_DATA_GVAX_ARPA_COMBINED.csv I want to only load the most recent month's .csv into a pandas dataframe, not all the files. How can I do this (maybe using glob)? I have seen this used for prefixes using: dir_files = r'/path/to/folder/*' dico={} for file in Path(dir_files).glob('DATA_GVAX_COMBINED_*.csv'): dico[file.stem.split('_')[-1]] = file max_date = max(dico) A: Glob the directory with the pattern for known files of interest. Sort (natural) on the basename. from glob import glob as GLOB from os.path import join as JOIN, basename as BASENAME def get_latest(directory): if all_files := list(GLOB(JOIN(directory, '*_DATA_GVAX_ARPA_COMBINED.csv'))): return sorted(all_files, key=BASENAME)[-1] print(get_latest('/Users/Cobra')) A: You could try something like this: import pandas as pd from pathlib import Path dir_files = r'/path/to/folder/*' dico = {} for file in Path(dir_files).glob('*DATA_GVAX_ARPA_COMBINED*.csv'): date_value = pd.to_datetime(file.name.split('_')[0], errors="coerce") if pd.notna(date_value): dico[date_value] = file max_date = max(dico.keys()) filepath = dico[max_date] print(f'{max_date} -> {filepath}') # Prints: # # 2022-10-31 00:00:00 -> 2022-10-31_DATA_GVAX_ARPA_COMBINED.csv
How to load only the most recent file from a directory where the filenames start with the date?
I have files in one directory/folder named: 2022-07-31_DATA_GVAX_ARPA_COMBINED.csv 2022-08-31_DATA_GVAX_ARPA_COMBINED.csv 2022-09-30_DATA_GVAX_ARPA_COMBINED.csv The folder will be updated with each month's file in the same format as above eg.: 2022-10-31_DATA_GVAX_ARPA_COMBINED.csv 2022-11-30_DATA_GVAX_ARPA_COMBINED.csv I want to only load the most recent month's .csv into a pandas dataframe, not all the files. How can I do this (maybe using glob)? I have seen this used for prefixes using: dir_files = r'/path/to/folder/*' dico={} for file in Path(dir_files).glob('DATA_GVAX_COMBINED_*.csv'): dico[file.stem.split('_')[-1]] = file max_date = max(dico)
[ "Glob the directory with the pattern for known files of interest. Sort (natural) on the basename.\nfrom glob import glob as GLOB\nfrom os.path import join as JOIN, basename as BASENAME\n\ndef get_latest(directory):\n if all_files := list(GLOB(JOIN(directory, '*_DATA_GVAX_ARPA_COMBINED.csv'))):\n return sorted(all_files, key=BASENAME)[-1]\n\nprint(get_latest('/Users/Cobra'))\n\n", "You could try something like this:\n\nimport pandas as pd\nfrom pathlib import Path\n\n\ndir_files = r'/path/to/folder/*'\n\ndico = {}\n\nfor file in Path(dir_files).glob('*DATA_GVAX_ARPA_COMBINED*.csv'):\n date_value = pd.to_datetime(file.name.split('_')[0], errors=\"coerce\")\n if pd.notna(date_value):\n dico[date_value] = file\n\nmax_date = max(dico.keys())\nfilepath = dico[max_date]\nprint(f'{max_date} -> {filepath}')\n# Prints:\n#\n# 2022-10-31 00:00:00 -> 2022-10-31_DATA_GVAX_ARPA_COMBINED.csv\n\n" ]
[ 0, 0 ]
[]
[]
[ "csv", "dataframe", "glob", "pandas", "python" ]
stackoverflow_0074639989_csv_dataframe_glob_pandas_python.txt
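Tying the pieces of the glob question above together: because the filenames begin with an ISO date (YYYY-MM-DD), a plain lexicographic sort on the name already puts the newest month last, and only that one file is read into pandas. The folder path is a placeholder.

from pathlib import Path
import pandas as pd

folder = Path('/path/to/folder')  # placeholder
files = sorted(folder.glob('*_DATA_GVAX_ARPA_COMBINED.csv'), key=lambda p: p.name)
if files:
    latest = files[-1]
    df = pd.read_csv(latest)
    print(f'loaded {latest.name}')
else:
    print('no matching files found')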
Q: What is the difference of int and 'int' in python? I'm doing my homework in python and I happen to meet a very tricky problem: the difference between int and 'int' in python? Here is the code: type(1) == 'int' type(1) == int and here is the result: False True I firstly thought that maybe 'int' here is purely a string, but later I used pd.DataFrame for another test: train_data.deposit # train_data is a pd.DataFrame and deposit is a column with dtype int train_data.deposit.dtype == 'int' train_data.deposit.dtype == int and got the result: True True So is there a difference between int and 'int' in python, if so, what is it? Thank you so much for your kind answer. A: anything in quotes is a string. type() in python returns the type of an object. So type(1) is the type int (integer) and the type int is equal to int. But the type int is not equal to 'int' the string. Onto the example with pandas: pandas dtype does not return a python class. It returns an object, which can be checked to be a type. So the == operator in python can be defined for all objects (Method eq). And dtype can be compared with python buildin types and strings. Example import pandas as pd df = pd.DataFrame([i for i in range(10)],columns=["test"]).astype(int) # df is now a pandas DataFrame with type dtype(int32) therefore: type(df.test.dtype) # -> <class 'numpy.dtype[int32]'> df.test.dtype == 'int' # True df.test.dtype == int # True Further Information To overwrite the equals method you have to define a eq(this, other) Method inside a class. Feel free to google or follow tutorials like this one
What is the difference between int and 'int' in Python?
I'm doing my homework in Python and I happened to run into a very tricky problem: the difference between int and 'int' in Python. Here is the code: type(1) == 'int' type(1) == int and here is the result: False True At first I thought that maybe 'int' here is purely a string, but later I used pd.DataFrame for another test: train_data.deposit # train_data is a pd.DataFrame and deposit is a column with dtype int train_data.deposit.dtype == 'int' train_data.deposit.dtype == int and got the result: True True So is there a difference between int and 'int' in Python, and if so, what is it? Thank you so much for your kind answer.
[ "anything in quotes is a string.\ntype() in python returns the type of an object. So type(1) is the type int (integer) and the type int is equal to int. But the type int is not equal to 'int' the string.\nOnto the example with pandas:\npandas dtype does not return a python class. It returns an object, which can be checked to be a type. So the == operator in python can be defined for all objects (Method eq). And dtype can be compared with python buildin types and strings.\nExample\nimport pandas as pd\ndf = pd.DataFrame([i for i in range(10)],columns=[\"test\"]).astype(int)\n# df is now a pandas DataFrame with type dtype(int32) therefore:\ntype(df.test.dtype) # -> <class 'numpy.dtype[int32]'>\ndf.test.dtype == 'int' # True\ndf.test.dtype == int # True\n\nFurther Information\nTo overwrite the equals method you have to define a eq(this, other) Method inside a class. Feel free to google or follow tutorials like this one\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074639882_python.txt
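A small demo of the distinction discussed above: comparing against the built-in type object is exact, while a numpy/pandas dtype also accepts a string on the right-hand side of ==.

import numpy as np

print(type(1) == int)    # True  (two type objects compared)
print(type(1) == 'int')  # False (a type is never equal to a string)

dtype = np.dtype(int)    # a plain integer dtype, like a pandas integer column reports
print(dtype == int)      # True  (numpy coerces int to a dtype first)
print(dtype == 'int')    # True  (numpy parses the string into a dtype)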
Q: Almost correct output, but not quite right. Math help in Python I am trying to match an expected output of "13031.157014219536" exactly, and I have attempted 3 times to get the value with different methods, detailed below, and come extremely close to the value, but not close enough. What is happening in these code snippets that it causing the deviation? Is it rounding error in the calculation? Did I do something wrong? Attempts: item_worth = 8000 years = range(10) for year in years: item_worth += (item_worth*0.05) print(item_worth) value = item_worth * (1 + ((0.05)**10)) print(value) cost = 8000 for x in range(10): cost *= 1.05 print(cost) Expected Output: 13031.157014219536 Actual Outputs: 13031.157014219529 13031.157014220802 13031.157014219538 A: On most machine, floats are represented by fp64, or double floats. You can check the precision of those for your number that way (not a method to be used for real computation. Just out of curiosity): import struct struct.pack('d', 13031.157014219536) # bytes representing that number in fp64 b'\xf5\xbc\n\x19\x94s\xc9@' # or, in a more humanely understandable way struct.unpack('q', struct.pack('d', 13031.157014219536))[0] # 4668389568658717941 # That number has no meaning, except that this integer is represented by the same bytes as your float. # Now, let's see what float is "next in line" struct.unpack('d', struct.pack('q', 4668389568658717941+1))[0] # 13031.157014219538 Note that this code works on most machine, but is not reliable. First of all, it relies on the fact that significant bits are not just all 1. Otherwise, it would give a totally unrelated number. Secondly, it makes assumption that ints are LE. But well, it gave me what I wanted. That is the information that the smallest number bigger than 13031.157014219536 is 13031.157014219538. (Or, said more accurately for this kind of conversation: the smallest float bigger than 13031.157014219536 whose representation is not the same as the representation of 13031.157014219536 has the same representation as 13031.157014219538) So, my point is you are flirting with the representation limit. You can't expect the sum of 10 numbers to be more accurate. I could also have said that saying that the biggest power of 2 smaller than your number is 8192=2¹³. So, that 13 is the exponent of your float in its representation. And this you have 53 significant bits, the precision of such a number is 2**(13-53+1) = 1.8×10⁻¹² (which is indeed also the result of 13031.157014219538-13031.157014219536). Hence the reason why in decimal, 12 decimal places are printed. But not all combination of them can exist, and the last one is not insignificant, but not fully significant neither. If your computation is the result of the sum of 10 such numbers, you could even have an error 10 times bigger than your last result the right to complain :D
Almost correct output, but not quite right. Math help in Python
I am trying to match an expected output of "13031.157014219536" exactly, and I have attempted 3 times to get the value with different methods, detailed below, and have come extremely close to the value, but not close enough. What is happening in these code snippets that is causing the deviation? Is it a rounding error in the calculation? Did I do something wrong? Attempts: item_worth = 8000 years = range(10) for year in years: item_worth += (item_worth*0.05) print(item_worth) value = item_worth * (1 + ((0.05)**10)) print(value) cost = 8000 for x in range(10): cost *= 1.05 print(cost) Expected Output: 13031.157014219536 Actual Outputs: 13031.157014219529 13031.157014220802 13031.157014219538
[ "On most machine, floats are represented by fp64, or double floats.\nYou can check the precision of those for your number that way (not a method to be used for real computation. Just out of curiosity):\nimport struct\nstruct.pack('d', 13031.157014219536)\n# bytes representing that number in fp64 b'\\xf5\\xbc\\n\\x19\\x94s\\xc9@'\n# or, in a more humanely understandable way\nstruct.unpack('q', struct.pack('d', 13031.157014219536))[0]\n# 4668389568658717941\n# That number has no meaning, except that this integer is represented by the same bytes as your float.\n# Now, let's see what float is \"next in line\"\nstruct.unpack('d', struct.pack('q', 4668389568658717941+1))[0]\n# 13031.157014219538\n\nNote that this code works on most machine, but is not reliable. First of all, it relies on the fact that significant bits are not just all 1. Otherwise, it would give a totally unrelated number. Secondly, it makes assumption that ints are LE. But well, it gave me what I wanted.\nThat is the information that the smallest number bigger than 13031.157014219536 is 13031.157014219538.\n(Or, said more accurately for this kind of conversation: the smallest float bigger than 13031.157014219536 whose representation is not the same as the representation of 13031.157014219536 has the same representation as 13031.157014219538)\nSo, my point is you are flirting with the representation limit. You can't expect the sum of 10 numbers to be more accurate.\nI could also have said that saying that the biggest power of 2 smaller than your number is 8192=2¹³.\nSo, that 13 is the exponent of your float in its representation. And this you have 53 significant bits, the precision of such a number is 2**(13-53+1) = 1.8×10⁻¹² (which is indeed also the result of 13031.157014219538-13031.157014219536). Hence the reason why in decimal, 12 decimal places are printed. But not all combination of them can exist, and the last one is not insignificant, but not fully significant neither.\nIf your computation is the result of the sum of 10 such numbers, you could even have an error 10 times bigger than your last result the right to complain :D\n" ]
[ 0 ]
[]
[]
[ "math", "python", "rounding_error" ]
stackoverflow_0074636533_math_python_rounding_error.txt
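A small follow-up to the answer above, assuming Python 3.9+ for math.ulp: the spacing between adjacent floats near this value is on the order of 1e-12, so results that differ only in the last digit or two are best compared with a tolerance rather than ==.

import math

target = 13031.157014219536
print(math.ulp(target))  # spacing to the next representable float near this value
print(math.isclose(13031.157014219538, target, rel_tol=1e-12))  # True
print(13031.157014219538 == target)                             # False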
Q: Cannot create secondary x-axis with xlsxwriter I am trying to generate a chart with a secondary x-axis, but I can't get the secondary x-axis to be added to the chart. Below is the code I'm using. If I change "x2_axis" to "y2_axis" and "set_x2_axis" to "set_y2_axis", then I am able to create a secondary y axis successfully -- but it does not work for a secondary x axis. Am I doing something wrong? import xlsxwriter workbook = xlsxwriter.Workbook('test.xlsx') worksheet = workbook.add_worksheet() data = [ [1, 2, 3, 4, 5], [10, 40, 50, 20, 10], [1,1,2,2,3,3,4,4,5,5], [200,200,100,100,300,300,250,250,350,350] ] worksheet.write_column('A2', data[0]) worksheet.write_column('B2', data[1]) worksheet.write_column('C2', data[2]) worksheet.write_column('D2', data[3]) chart= workbook.add_chart({'type': 'line'}) chart.add_series ({ 'name': 'Primary', 'categories': '=Sheet1!$A$2:$A$6', 'values': '=Sheet1!$B$2:$B$6', }) chart.add_series ({ 'name': 'Secondary', 'categories': '=Sheet1!$C$2:$C$11', 'values': '=Sheet1!$D$2:$D$11', 'x2_axis': True }) chart.set_x_axis({ 'name': 'Primary Axis', 'interval_unit': 1, 'interval_tick': 1, 'major_tick_mark': 'none', }) chart.set_y_axis({ 'name': 'Value', }) chart.set_x2_axis({ 'label_position': 'low', 'name': 'Secondary Axis', 'visible': True }) worksheet.insert_chart('B20', chart) workbook.close() A: Setting a secondary X axis in Excel (or XlsxWriter) isn't obvious, in comparison to setting a secondary Y axis. Generally, you need to add a secondary Y and X axis pair before you can set a secondary X axis. Something like this: import xlsxwriter workbook = xlsxwriter.Workbook('test.xlsx') worksheet = workbook.add_worksheet() data = [ [1, 2, 3, 4, 5], [10, 40, 50, 20, 10], [1, 1, 2, 2, 3, 3, 4, 4, 5, 5], [200, 200, 100, 100, 300, 300, 250, 250, 350, 350] ] worksheet.write_column('A2', data[0]) worksheet.write_column('B2', data[1]) worksheet.write_column('C2', data[2]) worksheet.write_column('D2', data[3]) chart = workbook.add_chart({'type': 'line'}) chart.add_series({ 'name': 'Primary', 'categories': '=Sheet1!$A$2:$A$6', 'values': '=Sheet1!$B$2:$B$6', }) chart.add_series({ 'name': 'Secondary', 'categories': '=Sheet1!$C$2:$C$11', 'values': '=Sheet1!$D$2:$D$11', 'x2_axis': True, 'y2_axis': True, }) chart.set_x_axis({ 'name': 'Primary Axis', 'interval_unit': 1, 'interval_tick': 1, 'major_tick_mark': 'none', }) chart.set_y_axis({ 'name': 'Y Values 1', }) chart.set_y2_axis({ 'name': 'Y Values 2', 'crossing': 'max', }) chart.set_x2_axis({ 'label_position': 'high', 'name': 'Secondary Axis', 'visible': True, }) worksheet.insert_chart('B20', chart) workbook.close() Output:
Cannot create secondary x-axis with xlsxwriter
I am trying to generate a chart with a secondary x-axis, but I can't get the secondary x-axis to be added to the chart. Below is the code I'm using. If I change "x2_axis" to "y2_axis" and "set_x2_axis" to "set_y2_axis", then I am able to create a secondary y axis successfully -- but it does not work for a secondary x axis. Am I doing something wrong? import xlsxwriter workbook = xlsxwriter.Workbook('test.xlsx') worksheet = workbook.add_worksheet() data = [ [1, 2, 3, 4, 5], [10, 40, 50, 20, 10], [1,1,2,2,3,3,4,4,5,5], [200,200,100,100,300,300,250,250,350,350] ] worksheet.write_column('A2', data[0]) worksheet.write_column('B2', data[1]) worksheet.write_column('C2', data[2]) worksheet.write_column('D2', data[3]) chart= workbook.add_chart({'type': 'line'}) chart.add_series ({ 'name': 'Primary', 'categories': '=Sheet1!$A$2:$A$6', 'values': '=Sheet1!$B$2:$B$6', }) chart.add_series ({ 'name': 'Secondary', 'categories': '=Sheet1!$C$2:$C$11', 'values': '=Sheet1!$D$2:$D$11', 'x2_axis': True }) chart.set_x_axis({ 'name': 'Primary Axis', 'interval_unit': 1, 'interval_tick': 1, 'major_tick_mark': 'none', }) chart.set_y_axis({ 'name': 'Value', }) chart.set_x2_axis({ 'label_position': 'low', 'name': 'Secondary Axis', 'visible': True }) worksheet.insert_chart('B20', chart) workbook.close()
[ "Setting a secondary X axis in Excel (or XlsxWriter) isn't obvious, in comparison to setting a secondary Y axis. Generally, you need to add a secondary Y and X axis pair before you can set a secondary X axis. Something like this:\nimport xlsxwriter\n\nworkbook = xlsxwriter.Workbook('test.xlsx')\nworksheet = workbook.add_worksheet()\n\ndata = [\n [1, 2, 3, 4, 5],\n [10, 40, 50, 20, 10],\n [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],\n [200, 200, 100, 100, 300, 300, 250, 250, 350, 350]\n]\n\nworksheet.write_column('A2', data[0])\nworksheet.write_column('B2', data[1])\nworksheet.write_column('C2', data[2])\nworksheet.write_column('D2', data[3])\n\nchart = workbook.add_chart({'type': 'line'})\n\nchart.add_series({\n 'name': 'Primary',\n 'categories': '=Sheet1!$A$2:$A$6',\n 'values': '=Sheet1!$B$2:$B$6',\n})\n\nchart.add_series({\n 'name': 'Secondary',\n 'categories': '=Sheet1!$C$2:$C$11',\n 'values': '=Sheet1!$D$2:$D$11',\n 'x2_axis': True,\n 'y2_axis': True,\n})\n\nchart.set_x_axis({\n 'name': 'Primary Axis',\n 'interval_unit': 1,\n 'interval_tick': 1,\n 'major_tick_mark': 'none',\n })\n\nchart.set_y_axis({\n 'name': 'Y Values 1',\n })\n\nchart.set_y2_axis({\n 'name': 'Y Values 2',\n 'crossing': 'max',\n })\n\nchart.set_x2_axis({\n 'label_position': 'high',\n 'name': 'Secondary Axis',\n 'visible': True,\n })\n\nworksheet.insert_chart('B20', chart)\nworkbook.close()\n\n\nOutput:\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python", "xlsx", "xlsxwriter" ]
stackoverflow_0074635534_pandas_python_xlsx_xlsxwriter.txt
Q: ImportError: Missing optional dependency 'openpyxl' still doesn't work after instllation ubuntu 18.04, python3.8 and using pycharm. Interpreter path in pychamr is correctly set. while trying to read specific sheet in excel, using openpyxl it keeps on giving me ImportError. ImportError: Missing optional dependency 'openpyxl'. Use pip or conda to install openpyxl. I've installed using pip3 install openpyxl and it say requirement is already satisfied. However when I run it again in pycharm it still outputs same error. Requirement already satisfied: openpyxl mycomp/.local/lib/python3.8/site-packages (3.0.7) Requirement already satisfied: et-xmlfile in mycomp/.local/lib/python3.8/site-packages (from openpyxl) (1.0.1) My guess is since I am using venv it is not getting installed correctly in venv because when I look at the path upon install it is not where venv is. When I do pip3 freeze on venv and after deactivating venv it looks like it has same installation. A: for me worked typing the following inside an interactive session: import pip pip.main(["install", "openpyxl"]) A: I ran into something similar because pandas is using this behind the scenes. Clean your local python environment or create a fresh virtual environment to use from your IDE. Then if possible, try installing your modules in one pip command instead of on multiple lines. # THIS, substitute pandas for whatever module is using openpyxl pip install pandas openpyxl # NOT THIS pip install pandas pip install openpyxl # VERSIONS pandas==1.4.3 openpyxl==3.0.10 A: What helped in my case: installing an additional optional library pip install defusedxml A: Deleting venv and creating a new one fixed the issue. previous venv had all dependencies as base which did not make sense. Maybe error on venv? I am curious to know if anyone knows. A: It works to me: 1: conda install -c anaconda xlrd 2: import pip pip.main(["install", "openpyxl"]) A: Running this on anaconda prompt worked for me pip install defusedxml A: this solved the problem for me too: !pip install defusedxml
ImportError: Missing optional dependency 'openpyxl' still doesn't work after installation
Ubuntu 18.04, Python 3.8 and using PyCharm. The interpreter path in PyCharm is correctly set. While trying to read a specific sheet in Excel using openpyxl, it keeps giving me an ImportError. ImportError: Missing optional dependency 'openpyxl'. Use pip or conda to install openpyxl. I've installed using pip3 install openpyxl and it says the requirement is already satisfied. However, when I run it again in PyCharm it still outputs the same error. Requirement already satisfied: openpyxl mycomp/.local/lib/python3.8/site-packages (3.0.7) Requirement already satisfied: et-xmlfile in mycomp/.local/lib/python3.8/site-packages (from openpyxl) (1.0.1) My guess is that since I am using a venv it is not getting installed correctly in the venv, because when I look at the path upon install it is not where the venv is. When I do pip3 freeze in the venv and after deactivating the venv, it looks like it has the same installation.
[ "for me worked typing the following inside an interactive session:\nimport pip\npip.main([\"install\", \"openpyxl\"])\n\n", "I ran into something similar because pandas is using this behind the scenes.\nClean your local python environment or create a fresh virtual environment to use from your IDE. Then if possible, try installing your modules in one pip command instead of on multiple lines.\n# THIS, substitute pandas for whatever module is using openpyxl\npip install pandas openpyxl\n\n# NOT THIS\npip install pandas\npip install openpyxl\n\n# VERSIONS\npandas==1.4.3\nopenpyxl==3.0.10\n\n", "What helped in my case: installing an additional optional library\npip install defusedxml\n\n", "Deleting venv and creating a new one fixed the issue.\nprevious venv had all dependencies as base which did not make sense. Maybe error on venv? I am curious to know if anyone knows.\n", "It works to me:\n1:\nconda install -c anaconda xlrd\n2:\nimport pip\npip.main([\"install\", \"openpyxl\"])\n\n", "Running this on anaconda prompt worked for me\n\npip install defusedxml\n\n", "this solved the problem for me too:\n!pip install defusedxml\n" ]
[ 9, 3, 1, 0, 0, 0, 0 ]
[]
[]
[ "openpyxl", "python", "ubuntu" ]
stackoverflow_0067513336_openpyxl_python_ubuntu.txt
Q: How to find name of file type in python I'm attempting to create a clone of the windows 10 file explorer in python using tkinter and I cant work out how to get the name of the file type from its file extension like file explorer does I have already got a function to get the programs file extension and another to open it in the default application however nowhere online has a solution to my issue A: This value is stored in the registery at the location _HKEY_LOCAL_MACHINE\SOFTWARE\Classes. For example, the value of the key Python.File at this location is Python File. The first step is to get the key name linked with the human readable name of the file. That key is the value of the key name as the extension name located at the same path. Then, thanks to this value, get the name of the file by its key name. To sum up with an example: \HKEY_LOCAL_MACHINE\SOFTWARE\Classes\.py gives Python.File \HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Python.File gives Python File In python, to get values in the registery, you can use winreg package. from typing import Tuple from winreg import HKEYType, ConnectRegistry, HKEY_LOCAL_MACHINE, OpenKey, QueryValueEx class ExtensionQuery: def __init__(self): self.base: str = r"SOFTWARE\Classes" self.reg: HKEYType = ConnectRegistry(None, HKEY_LOCAL_MACHINE) def _get_value_from_class(self, class_str: str) -> str: path: str = fr"{self.base}\{class_str}" key: HKEYType = OpenKey(self.reg, path) value_tuple: Tuple[str, int] = QueryValueEx(key, "") return value_tuple[0] def get_application_name(self, ext: str) -> str: return self._get_value_from_class(self._get_value_from_class(ext)) query = ExtensionQuery() print(query.get_application_name(".py")) A: My guess is that Microsoft uses a dictionary to do that. I would suggest doing the same: def get_file_type(): extension_full_names = {".txt":"Text file", ".py":"Python file"} file_extension = getFileExtension(file) if file_extension in extension_full_names.keys(): return extension_full_names[file_extension] return f"{file_extension} file" # extension not in your dictionary
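The registry lookup in the first answer is Windows-specific. As a portable sketch (the fallback table and function name are illustrative assumptions, not from the original answers), the standard-library mimetypes module plus a small dictionary can produce a reasonable display name:

import mimetypes
from pathlib import Path

# Illustrative fallback table; extend with whatever types your explorer needs.
FRIENDLY_NAMES = {".txt": "Text Document", ".py": "Python File"}

def describe(path: str) -> str:
    ext = Path(path).suffix.lower()
    if ext in FRIENDLY_NAMES:
        return FRIENDLY_NAMES[ext]
    mime, _ = mimetypes.guess_type(path)   # e.g. "image/jpeg" for .jpg
    return mime or f"{ext.upper().lstrip('.')} File"

print(describe("notes.txt"))   # Text Document
print(describe("photo.jpg"))   # image/jpeg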
How to find name of file type in python
I'm attempting to create a clone of the Windows 10 File Explorer in Python using tkinter, and I can't work out how to get the display name of a file type from its file extension, the way File Explorer does. I have already got a function to get a file's extension and another to open it in the default application, but nowhere online has a solution to my issue.
[ "This value is stored in the registery at the location _HKEY_LOCAL_MACHINE\\SOFTWARE\\Classes. For example, the value of the key Python.File at this location is Python File.\nThe first step is to get the key name linked with the human readable name of the file. That key is the value of the key name as the extension name located at the same path.\nThen, thanks to this value, get the name of the file by its key name.\nTo sum up with an example:\n\\HKEY_LOCAL_MACHINE\\SOFTWARE\\Classes\\.py gives Python.File\n\\HKEY_LOCAL_MACHINE\\SOFTWARE\\Classes\\Python.File gives Python File\nIn python, to get values in the registery, you can use winreg package.\nfrom typing import Tuple\nfrom winreg import HKEYType, ConnectRegistry, HKEY_LOCAL_MACHINE, OpenKey, QueryValueEx\n\n\nclass ExtensionQuery:\n def __init__(self):\n self.base: str = r\"SOFTWARE\\Classes\"\n self.reg: HKEYType = ConnectRegistry(None, HKEY_LOCAL_MACHINE)\n\n def _get_value_from_class(self, class_str: str) -> str:\n path: str = fr\"{self.base}\\{class_str}\"\n key: HKEYType = OpenKey(self.reg, path)\n value_tuple: Tuple[str, int] = QueryValueEx(key, \"\")\n return value_tuple[0]\n\n def get_application_name(self, ext: str) -> str:\n return self._get_value_from_class(self._get_value_from_class(ext))\n\n\nquery = ExtensionQuery()\nprint(query.get_application_name(\".py\"))\n\n", "My guess is that Microsoft uses a dictionary to do that.\nI would suggest doing the same:\ndef get_file_type():\n extension_full_names = {\".txt\":\"Text file\", \".py\":\"Python file\"}\n\n file_extension = getFileExtension(file)\n \n if file_extension in extension_full_names.keys():\n return extension_full_names[file_extension]\n \n return f\"{file_extension} file\" # extension not in your dictionary\n\n" ]
[ 3, 0 ]
[]
[]
[ "python", "windows" ]
stackoverflow_0074639783_python_windows.txt
Q: Getting int value from a series in a dataframe cell Suppose I have a df like this- A B 1 {'meta': 3} 2 {'meta': 3} 3 {'tera': 3} I want to retrieve the int value from column-B. Desired Dataframe- A B 1 3 2 3 3 3 Thanks in advance. A: Column B is a dict, so let's get the value corresponding to the key. As we don't know the key name, take the first value. It's already an integer. No conversion needed. df["B"] = df["B"].map(lambda x: list(x.values())[0])
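For completeness, a self-contained sketch of the same idea, with the corrected lambda and the data taken from the question:

import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3],
                   "B": [{"meta": 3}, {"meta": 3}, {"tera": 3}]})

# Each cell in B holds a single-key dict; keep its only value.
df["B"] = df["B"].map(lambda d: next(iter(d.values())))
print(df)
#    A  B
# 0  1  3
# 1  2  3
# 2  3  3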
Getting int value from a series in a dataframe cell
Suppose I have a df like this- A B 1 {'meta': 3} 2 {'meta': 3} 3 {'tera': 3} I want to retrieve the int value from column-B. Desired Dataframe- A B 1 3 2 3 3 3 Thanks in advance.
[ "Column B is a dict, so let's get the value corresponding to the key.\nAs we don't know the key name, take the first value.\nIt's already an integer. No conversion needed.\ndf[\"B\"] = df[\"B\"].map(lambda x: list(x.values())[0])\n\n" ]
[ 2 ]
[]
[]
[ "dataframe", "group_by", "numpy", "pandas", "python" ]
stackoverflow_0074640120_dataframe_group_by_numpy_pandas_python.txt
Q: Backup and restore a sqlalchemy db using sqlite as an engine Hi! I'm working on a flask-sqlalchemy application, and as you can imagine I'm changing database models and other thing through the process, every time that I made changes to the models I have to populate de DB again, and at the same time, as a safety measure I need to have some sort of backup restore process prepared in case of something goes wrong. main.py # You can image the other code... app.config["SQLALCHEMY_DATABASE_URI"] = f"sqlite:///{DB_FILE}" db.init_app(app) db.py from flask_sqlalchemy import SQLAlchemy db = SQLAlchemy() models.py class User_device(db.Model): __tablename__ = "user_devices" id = db.Column(db.String(32), primary_key=True) user_id = db.Column(db.String(32), db.ForeignKey('users.id', ondelete="CASCADE")) ip = db.Column(db.String(32)) agent = db.Column(db.String(100)) I'll like to have a route where to do this process
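Since the question asks for a route that performs the backup, here is a minimal sketch using the standard library's sqlite3 backup API; the route name, file paths, and the standalone Flask app are assumptions for illustration, not taken from the original project.

import sqlite3
from flask import Flask, send_file

app = Flask(__name__)
DB_FILE = "app.db"            # assumed path to the live SQLite file

@app.route("/admin/backup")
def backup_db():
    backup_path = "backup.db"
    src = sqlite3.connect(DB_FILE)
    dst = sqlite3.connect(backup_path)
    with dst:
        src.backup(dst)       # copies the whole database, even while it is in use
    dst.close()
    src.close()
    return send_file(backup_path, as_attachment=True)

Restoring can then be as simple as stopping the app and copying the backup file back over the live database file.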
Backup and restore a sqlalchemy db using sqlite as an engine
Hi! I'm working on a Flask-SQLAlchemy application, and as you can imagine I'm changing database models and other things through the process. Every time I make changes to the models I have to populate the DB again, and at the same time, as a safety measure, I need to have some sort of backup/restore process prepared in case something goes wrong. main.py # You can imagine the other code... app.config["SQLALCHEMY_DATABASE_URI"] = f"sqlite:///{DB_FILE}" db.init_app(app) db.py from flask_sqlalchemy import SQLAlchemy db = SQLAlchemy() models.py class User_device(db.Model): __tablename__ = "user_devices" id = db.Column(db.String(32), primary_key=True) user_id = db.Column(db.String(32), db.ForeignKey('users.id', ondelete="CASCADE")) ip = db.Column(db.String(32)) agent = db.Column(db.String(100)) I'd like to have a route that performs this process.
[]
[]
[ "I found a solution that works for me, if you have a better alternative don't hesitate to share it:\nfrom the terminal:\n\nback up your current db\n\nsqlite3 instance/pre_backup.db .dump > back.sql\n\n\ndelete your db\n\nrm instance/pre_backup.db\n\n\ninit your project in order to create the instances of your db\n\npython3 main.py\n\nor with the flask run etc.\n4. Restore your db\nsqlite3 instance/new.db < back.sql \n\nIn my case I've got one error msg for every table that I had before, but it didn't give me any trouble to populate the db.\nPD:\nI'm new in Stack Overflow and I don't see the problem with starting with Hi!\nthanks to nothing @PChemGuy -> This is not a letter, so you do not start your question with Hi!!!!!!!!!\n" ]
[ -1 ]
[ "backup", "python", "restore", "sqlalchemy", "sqlite" ]
stackoverflow_0074639572_backup_python_restore_sqlalchemy_sqlite.txt
Q: How can I hide source code from Python file? I am developing a paid application in Python. I do not want the users to see the source code or decompile it. How can I accomplish this task of hiding the source code from the user, but running the code perfectly with the same performance? A: You may distribute the compiled .pyc files, which are the bytecode files that the Python interpreter compiles your .py files to. More info on this can be found here on Stack Overflow: How to compile all your project files. This will somewhat hide your actual code as bytecode, but it can be disassembled. To prevent disassembling you need to use obfuscation. Pyarmor might be something you're looking for. A: You will definitely see the code if you're running it as a Python file. Maybe try using pyinstaller to make an executable binary for the respective operating system that you're building for. A: The best way would be to turn your Python code into an executable file. When you take a look here, there is a nice tutorial on how to do it: Install pyinstaller via pip3 install pyinstaller Pack your executable with pyinstaller main.py There are a lot of options to tweak the output of your application; the docs can be found under https://pyinstaller.org/en/stable/
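As a small sketch of the first answer's .pyc suggestion (the directory name is just an example), the standard-library compileall module can pre-compile a whole project to bytecode; note that bytecode can still be decompiled, which is why that answer also points to obfuscators such as Pyarmor.

import compileall

# Compile every .py file under ./myproject into .pyc bytecode files.
compileall.compile_dir("myproject")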
How can I hide source code from Python file?
I am developing a paid application in Python. I do not want the users to see the source code or decompile it. How can I accomplish this task of hiding the source code from the user, but running the code perfectly with the same performance?
[ "You may distribute the compiled .pyc files which is a byte code that the Python interpreter compiles your .py files to.\nMore info on this found here on stackoverflow.\nHow to compile all your project files.\nThis will somewhat hide your actual code into bytecode, but it can be disassembled. To prevent from disassembling you need to use obfuscation. Pyarmor might be something you're looking for.\n", "You will definitely see the code if you're running it as a Python file. Maybe try using pyinstaller to make a executable binary for the respective Operating System that you're building for.\n", "The best way would be to turn your python code into an executable file.\n\nWhen u take a look here, there is a nice Tutorial on how to do it:\n\nInstall pyinstall via pip3 install pyinstaller\nPack your excecutable with pyinstaller main.py\n\nThere is a lot of options to tweak the output of your application, the docs can be found under https://pyinstaller.org/en/stable/\n" ]
[ 1, 0, 0 ]
[]
[]
[ "pyscript", "python", "python_3.x" ]
stackoverflow_0074640301_pyscript_python_python_3.x.txt
Q: Storing scipy sparse matrix as HDF5 I want to compress and store a humongous Scipy matrix in HDF5 format. How do I do this? I've tried the below code: a = csr_matrix((dat, (row, col)), shape=(947969, 36039)) f = h5py.File('foo.h5','w') dset = f.create_dataset("init", data=a, dtype = int, compression='gzip') I get errors like these, TypeError: Scalar datasets don't support chunk/filter options IOError: Can't prepare for writing data (No appropriate function for conversion path) I can't convert it to numpy array as there will be memory overflow. What is the best method? A: A csr matrix stores it's values in 3 arrays. It is not an array or array subclass, so h5py cannot save it directly. The best you can do is save the attributes, and recreate the matrix on loading: In [248]: M = sparse.random(5,10,.1, 'csr') In [249]: M Out[249]: <5x10 sparse matrix of type '<class 'numpy.float64'>' with 5 stored elements in Compressed Sparse Row format> In [250]: M.data Out[250]: array([ 0.91615298, 0.49907752, 0.09197862, 0.90442401, 0.93772772]) In [251]: M.indptr Out[251]: array([0, 0, 1, 2, 3, 5], dtype=int32) In [252]: M.indices Out[252]: array([5, 7, 5, 2, 6], dtype=int32) In [253]: M.data Out[253]: array([ 0.91615298, 0.49907752, 0.09197862, 0.90442401, 0.93772772]) coo format has data, row, col attributes, basically the same as the (dat, (row, col)) you use to create your a. In [254]: M.tocoo().row Out[254]: array([1, 2, 3, 4, 4], dtype=int32) The new save_npz function does: arrays_dict = dict(format=matrix.format, shape=matrix.shape, data=matrix.data) if matrix.format in ('csc', 'csr', 'bsr'): arrays_dict.update(indices=matrix.indices, indptr=matrix.indptr) ... elif matrix.format == 'coo': arrays_dict.update(row=matrix.row, col=matrix.col) ... np.savez(file, **arrays_dict) In other words it collects the relevant attributes in a dictionary and uses savez to create the zip archive. The same sort of method could be used with a h5py file. More on that save_npz in a recent SO question, with links to the source code. save_npz method missing from scipy.sparse See if you can get this working. If you can create a csr matrix, you can recreate it from its attributes (or the coo equivalents). I can make a working example if needed. csr to h5py example import numpy as np import h5py from scipy import sparse M = sparse.random(10,10,.2, 'csr') print(repr(M)) print(M.data) print(M.indices) print(M.indptr) f = h5py.File('sparse.h5','w') g = f.create_group('Mcsr') g.create_dataset('data',data=M.data) g.create_dataset('indptr',data=M.indptr) g.create_dataset('indices',data=M.indices) g.attrs['shape'] = M.shape f.close() f = h5py.File('sparse.h5','r') print(list(f.keys())) print(list(f['Mcsr'].keys())) g2 = f['Mcsr'] print(g2.attrs['shape']) M1 = sparse.csr_matrix((g2['data'][:],g2['indices'][:], g2['indptr'][:]), g2.attrs['shape']) print(repr(M1)) print(np.allclose(M1.A, M.A)) f.close() producing 1314:~/mypy$ python3 stack43390038.py <10x10 sparse matrix of type '<class 'numpy.float64'>' with 20 stored elements in Compressed Sparse Row format> [ 0.13640389 0.92698959 .... 
0.7762265 ] [4 5 0 3 0 2 0 2 5 6 7 1 7 9 1 3 4 6 8 9] [ 0 2 4 6 9 11 11 11 14 19 20] ['Mcsr'] ['data', 'indices', 'indptr'] [10 10] <10x10 sparse matrix of type '<class 'numpy.float64'>' with 20 stored elements in Compressed Sparse Row format> True coo alternative Mo = M.tocoo() g = f.create_group('Mcoo') g.create_dataset('data', data=Mo.data) g.create_dataset('row', data=Mo.row) g.create_dataset('col', data=Mo.col) g.attrs['shape'] = Mo.shape g2 = f['Mcoo'] M2 = sparse.coo_matrix((g2['data'], (g2['row'], g2['col'])), g2.attrs['shape']) # don't need the [:] # could also use sparse.csr_matrix or M2.tocsr() A: You can use scipy.sparse.save_npz method Alternatively consider using Pandas.SparseDataFrame, but be aware that this method is very slow (thanks to @hpaulj for testing and pointing it out) Demo: generating sparse matrix and SparseDataFrame In [55]: import pandas as pd In [56]: from scipy.sparse import * In [57]: m = csr_matrix((20, 10), dtype=np.int8) In [58]: m Out[58]: <20x10 sparse matrix of type '<class 'numpy.int8'>' with 0 stored elements in Compressed Sparse Row format> In [59]: sdf = pd.SparseDataFrame([pd.SparseSeries(m[i].toarray().ravel(), fill_value=0) ...: for i in np.arange(m.shape[0])]) ...: In [61]: type(sdf) Out[61]: pandas.sparse.frame.SparseDataFrame In [62]: sdf.info() <class 'pandas.sparse.frame.SparseDataFrame'> RangeIndex: 20 entries, 0 to 19 Data columns (total 10 columns): 0 20 non-null int8 1 20 non-null int8 2 20 non-null int8 3 20 non-null int8 4 20 non-null int8 5 20 non-null int8 6 20 non-null int8 7 20 non-null int8 8 20 non-null int8 9 20 non-null int8 dtypes: int8(10) memory usage: 280.0 bytes saving SparseDataFrame to HDF file In [64]: sdf.to_hdf('d:/temp/sparse_df.h5', 'sparse_df') reading from HDF file In [65]: store = pd.HDFStore('d:/temp/sparse_df.h5') In [66]: store Out[66]: <class 'pandas.io.pytables.HDFStore'> File path: d:/temp/sparse_df.h5 /sparse_df sparse_frame In [67]: x = store['sparse_df'] In [68]: type(x) Out[68]: pandas.sparse.frame.SparseDataFrame In [69]: x.info() <class 'pandas.sparse.frame.SparseDataFrame'> Int64Index: 20 entries, 0 to 19 Data columns (total 10 columns): 0 20 non-null int8 1 20 non-null int8 2 20 non-null int8 3 20 non-null int8 4 20 non-null int8 5 20 non-null int8 6 20 non-null int8 7 20 non-null int8 8 20 non-null int8 9 20 non-null int8 dtypes: int8(10) memory usage: 360.0 bytes A: I was dealing with this problem myself and ended up creating a package just for this.
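The second answer mentions scipy.sparse.save_npz without showing it; a minimal round-trip looks like this (matrix size and file name are arbitrary examples):

from scipy import sparse

M = sparse.random(5, 10, density=0.1, format="csr")
sparse.save_npz("M.npz", M)      # writes a compressed .npz archive
M2 = sparse.load_npz("M.npz")    # loads back as the same CSR matrix
print((M != M2).nnz == 0)        # True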
Storing scipy sparse matrix as HDF5
I want to compress and store a humongous Scipy matrix in HDF5 format. How do I do this? I've tried the below code: a = csr_matrix((dat, (row, col)), shape=(947969, 36039)) f = h5py.File('foo.h5','w') dset = f.create_dataset("init", data=a, dtype = int, compression='gzip') I get errors like these, TypeError: Scalar datasets don't support chunk/filter options IOError: Can't prepare for writing data (No appropriate function for conversion path) I can't convert it to numpy array as there will be memory overflow. What is the best method?
[ "A csr matrix stores it's values in 3 arrays. It is not an array or array subclass, so h5py cannot save it directly. The best you can do is save the attributes, and recreate the matrix on loading:\nIn [248]: M = sparse.random(5,10,.1, 'csr')\nIn [249]: M\nOut[249]: \n<5x10 sparse matrix of type '<class 'numpy.float64'>'\n with 5 stored elements in Compressed Sparse Row format>\nIn [250]: M.data\nOut[250]: array([ 0.91615298, 0.49907752, 0.09197862, 0.90442401, 0.93772772])\nIn [251]: M.indptr\nOut[251]: array([0, 0, 1, 2, 3, 5], dtype=int32)\nIn [252]: M.indices\nOut[252]: array([5, 7, 5, 2, 6], dtype=int32)\nIn [253]: M.data\nOut[253]: array([ 0.91615298, 0.49907752, 0.09197862, 0.90442401, 0.93772772])\n\ncoo format has data, row, col attributes, basically the same as the (dat, (row, col)) you use to create your a.\nIn [254]: M.tocoo().row\nOut[254]: array([1, 2, 3, 4, 4], dtype=int32)\n\nThe new save_npz function does:\narrays_dict = dict(format=matrix.format, shape=matrix.shape, data=matrix.data)\nif matrix.format in ('csc', 'csr', 'bsr'):\n arrays_dict.update(indices=matrix.indices, indptr=matrix.indptr)\n...\nelif matrix.format == 'coo':\n arrays_dict.update(row=matrix.row, col=matrix.col)\n...\nnp.savez(file, **arrays_dict)\n\nIn other words it collects the relevant attributes in a dictionary and uses savez to create the zip archive.\nThe same sort of method could be used with a h5py file. More on that save_npz in a recent SO question, with links to the source code.\nsave_npz method missing from scipy.sparse\nSee if you can get this working. If you can create a csr matrix, you can recreate it from its attributes (or the coo equivalents). I can make a working example if needed.\ncsr to h5py example\nimport numpy as np\nimport h5py\nfrom scipy import sparse\n\nM = sparse.random(10,10,.2, 'csr')\nprint(repr(M))\n\nprint(M.data)\nprint(M.indices)\nprint(M.indptr)\n\nf = h5py.File('sparse.h5','w')\ng = f.create_group('Mcsr')\ng.create_dataset('data',data=M.data)\ng.create_dataset('indptr',data=M.indptr)\ng.create_dataset('indices',data=M.indices)\ng.attrs['shape'] = M.shape\nf.close()\n\nf = h5py.File('sparse.h5','r')\nprint(list(f.keys()))\nprint(list(f['Mcsr'].keys()))\n\ng2 = f['Mcsr']\nprint(g2.attrs['shape'])\n\nM1 = sparse.csr_matrix((g2['data'][:],g2['indices'][:],\n g2['indptr'][:]), g2.attrs['shape'])\nprint(repr(M1))\nprint(np.allclose(M1.A, M.A))\nf.close()\n\nproducing\n1314:~/mypy$ python3 stack43390038.py \n<10x10 sparse matrix of type '<class 'numpy.float64'>'\n with 20 stored elements in Compressed Sparse Row format>\n[ 0.13640389 0.92698959 .... 
0.7762265 ]\n[4 5 0 3 0 2 0 2 5 6 7 1 7 9 1 3 4 6 8 9]\n[ 0 2 4 6 9 11 11 11 14 19 20]\n['Mcsr']\n['data', 'indices', 'indptr']\n[10 10]\n<10x10 sparse matrix of type '<class 'numpy.float64'>'\n with 20 stored elements in Compressed Sparse Row format>\nTrue\n\ncoo alternative\nMo = M.tocoo()\ng = f.create_group('Mcoo')\ng.create_dataset('data', data=Mo.data)\ng.create_dataset('row', data=Mo.row)\ng.create_dataset('col', data=Mo.col)\ng.attrs['shape'] = Mo.shape\n\ng2 = f['Mcoo']\nM2 = sparse.coo_matrix((g2['data'], (g2['row'], g2['col'])),\n g2.attrs['shape']) # don't need the [:]\n# could also use sparse.csr_matrix or M2.tocsr()\n\n", "You can use scipy.sparse.save_npz method\nAlternatively consider using Pandas.SparseDataFrame, but be aware that this method is very slow (thanks to @hpaulj for testing and pointing it out)\nDemo:\ngenerating sparse matrix and SparseDataFrame\nIn [55]: import pandas as pd\n\nIn [56]: from scipy.sparse import *\n\nIn [57]: m = csr_matrix((20, 10), dtype=np.int8)\n\nIn [58]: m\nOut[58]:\n<20x10 sparse matrix of type '<class 'numpy.int8'>'\n with 0 stored elements in Compressed Sparse Row format>\n\nIn [59]: sdf = pd.SparseDataFrame([pd.SparseSeries(m[i].toarray().ravel(), fill_value=0)\n ...: for i in np.arange(m.shape[0])])\n ...:\n\nIn [61]: type(sdf)\nOut[61]: pandas.sparse.frame.SparseDataFrame\n\nIn [62]: sdf.info()\n<class 'pandas.sparse.frame.SparseDataFrame'>\nRangeIndex: 20 entries, 0 to 19\nData columns (total 10 columns):\n0 20 non-null int8\n1 20 non-null int8\n2 20 non-null int8\n3 20 non-null int8\n4 20 non-null int8\n5 20 non-null int8\n6 20 non-null int8\n7 20 non-null int8\n8 20 non-null int8\n9 20 non-null int8\ndtypes: int8(10)\nmemory usage: 280.0 bytes\n\nsaving SparseDataFrame to HDF file\nIn [64]: sdf.to_hdf('d:/temp/sparse_df.h5', 'sparse_df')\n\nreading from HDF file\nIn [65]: store = pd.HDFStore('d:/temp/sparse_df.h5')\n\nIn [66]: store\nOut[66]:\n<class 'pandas.io.pytables.HDFStore'>\nFile path: d:/temp/sparse_df.h5\n/sparse_df sparse_frame\n\nIn [67]: x = store['sparse_df']\n\nIn [68]: type(x)\nOut[68]: pandas.sparse.frame.SparseDataFrame\n\nIn [69]: x.info()\n<class 'pandas.sparse.frame.SparseDataFrame'>\nInt64Index: 20 entries, 0 to 19\nData columns (total 10 columns):\n0 20 non-null int8\n1 20 non-null int8\n2 20 non-null int8\n3 20 non-null int8\n4 20 non-null int8\n5 20 non-null int8\n6 20 non-null int8\n7 20 non-null int8\n8 20 non-null int8\n9 20 non-null int8\ndtypes: int8(10)\nmemory usage: 360.0 bytes\n\n", "I was dealing with this problem myself and ended up creating a package just for this.\n" ]
[ 15, 4, 0 ]
[]
[]
[ "h5py", "hdf5", "python", "scipy", "sparse_matrix" ]
stackoverflow_0043390038_h5py_hdf5_python_scipy_sparse_matrix.txt
Q: Do I need to clean/remove the images created on deployments of my Cloud Run instance? I have a Cloud Run instance up and running on my Google Cloud Platform project. Whenever I do any changes to my main.py file, I perform the following steps: gcloud builds submit --tag ${CONTAINER} gcloud run deploy ${SERVICE} --image $CONTAINER --platform managed which builds a new image and deploys the container to a managed instance. Is a good practice to find and remove the images of older deployments, or is this managed automatically by GCP? A: Container images are not automatically removed by Google. You have to delete them manually if you want. There is no good practice, as it depends. If you are sure that you will not use old images anymore, you can delete them; otherwise, you may want to keep them to rollback easily to an older version. If you are using Container Registry, note that it costs money to store images (https://cloud.google.com/container-registry/pricing#storage). If you manage your code with version control systems like Git, you can simply rebuild and redeploy an older version by doing git checkout <your-commit-id> and run commands in your question. So in that particular case, I think this is not very useful to keep all your images since you can always regenerate them easily. A: If your CICD pipeline is consistently building and pushing images then yes, you should at least remove the ones not used for production like untagged images (duplicate latest, etc.) as you are charged for storage on both artifact and container registry. There is an unofficial GCR-cleaner image that you can use to automatically clean images. You can deploy it through a scheduled github actions workflow, cloud run + cloud scheduler. You can even run it as a one-off task or deploy it using the terraform module.
Do I need to clean/remove the images created on deployments of my Cloud Run instance?
I have a Cloud Run instance up and running on my Google Cloud Platform project. Whenever I make any changes to my main.py file, I perform the following steps: gcloud builds submit --tag ${CONTAINER} gcloud run deploy ${SERVICE} --image $CONTAINER --platform managed which builds a new image and deploys the container to a managed instance. Is it good practice to find and remove the images of older deployments, or is this managed automatically by GCP?
[ "Container images are not automatically removed by Google. You have to delete them manually if you want.\nThere is no good practice, as it depends. If you are sure that you will not use old images anymore, you can delete them; otherwise, you may want to keep them to rollback easily to an older version. If you are using Container Registry, note that it costs money to store images (https://cloud.google.com/container-registry/pricing#storage).\nIf you manage your code with version control systems like Git, you can simply rebuild and redeploy an older version by doing git checkout <your-commit-id> and run commands in your question. So in that particular case, I think this is not very useful to keep all your images since you can always regenerate them easily.\n", "If your CICD pipeline is consistently building and pushing images then yes, you should at least remove the ones not used for production like untagged images (duplicate latest, etc.) as you are charged for storage on both artifact and container registry.\nThere is an unofficial GCR-cleaner image that you can use to automatically clean images. You can deploy it through a scheduled github actions workflow, cloud run + cloud scheduler. You can even run it as a one-off task or deploy it using the terraform module.\n" ]
[ 6, 0 ]
[]
[]
[ "google_cloud_platform", "google_cloud_run", "python" ]
stackoverflow_0067636234_google_cloud_platform_google_cloud_run_python.txt
Q: Making Window Topmost with Python and/or Windows API I'm trying to write a piece of code which finds a target window (returns its handler), brings it to top, and makes it topmost. The problem is that I cannot make the window topmost. HWND_TOPMOST = -1 SWP_NOSIZE = 1 SWP_NOMOVE = 2 hwnd = ctypes.windll.user32.FindWindowW(None, title) # works OK ctypes.windll.user32.BringWindowToTop(hwnd) # works OK ctypes.windll.user32.SetWindowPos(hwnd, HWND_TOPMOST, 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE) # doesn't work How to make the window topmost? Note: I cannot install third-party libraries, modules, etc. Thus, I need to use Python native modules and/or Windows API. A: I've found a solution. It looks like ctypes cannot convert a negative int (HWND_TOPMOST) to HWND type. Thus, the HWND type from the ctypes.wintypes module should be used. import ctypes.wintypes ctypes.windll.user32.SetWindowPos(hwnd, ctypes.wintypes.HWND(HWND_TOPMOST), 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE)
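A consolidated sketch of the fix described in the answer (Windows-only; the window title below is just an example):

import ctypes
import ctypes.wintypes as wintypes

HWND_TOPMOST = -1
SWP_NOSIZE = 1
SWP_NOMOVE = 2

user32 = ctypes.windll.user32
hwnd = user32.FindWindowW(None, "Untitled - Notepad")  # example window title
if hwnd:
    user32.BringWindowToTop(hwnd)
    # Wrap the negative constant in wintypes.HWND so ctypes passes it correctly.
    user32.SetWindowPos(hwnd, wintypes.HWND(HWND_TOPMOST), 0, 0, 0, 0,
                        SWP_NOMOVE | SWP_NOSIZE)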
Making Window Topmost with Python and/or Windows API
I'm trying to write a piece of code which finds a target window (returns its handler), brings it to top, and makes it topmost. The problem is that I cannot make the window topmost. HWND_TOPMOST = -1 SWP_NOSIZE = 1 SWP_NOMOVE = 2 hwnd = ctypes.windll.user32.FindWindowW(None, title) # works OK ctypes.windll.user32.BringWindowToTop(hwnd) # works OK ctypes.windll.user32.SetWindowPos(hwnd, HWND_TOPMOST, 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE) # doesn't work How to make the window topmost? Note: I cannot install third-party libraries, modules, etc. Thus, I need to use Python native modules and/or Windows API.
[ "I've found a solution. It looks like ctypes cannot convert a negative int (HWND_TOPMOST) to HWND type. Thus, the HWND type from the ctypes.wintypes module should be used.\nimport ctypes.wintypes\n\nctypes.windll.user32.SetWindowPos(hwnd, ctypes.wintypes.HWND(HWND_TOPMOST), 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE)\n\n" ]
[ 0 ]
[]
[]
[ "python", "winapi" ]
stackoverflow_0074589479_python_winapi.txt
Q: How can I use virtual environment in Visual Studio? I activate the virtual environment in VSCode terminal: and I check the pip list: I can see the sklearn pip. So I enter import sklearn in script but script can't find sklearn. what is the problem? Isn't the virtual environment activated just by typing in the terminal? A: It's fine. Try to run this code and you won't get exceptions. To avoid this underscore pip install autopep8 might help
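Activating a venv in the integrated terminal does not change which interpreter the editor's run button uses. A quick check of what the running script actually sees (standard library only):

import sys

print(sys.executable)                  # interpreter that is running this script
print(sys.prefix != sys.base_prefix)   # True only when that interpreter is a venv

If this prints the system interpreter, point the IDE's interpreter setting at the venv's python, or install sklearn into whatever interpreter it shows.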
How can I use virtual environment in Visual Studio?
I activate the virtual environment in the VS Code terminal, and I check the pip list: I can see the sklearn package. So I enter import sklearn in the script, but the script can't find sklearn. What is the problem? Isn't the virtual environment activated just by running the activate command in the terminal?
[ "It's fine. Try to run this code and you won't get exceptions. To avoid this underscore pip install autopep8 might help\n" ]
[ 0 ]
[]
[]
[ "python", "virtualenv" ]
stackoverflow_0074640274_python_virtualenv.txt
Q: Selecting a Iframe with Selenium I am trying to build a program to login to my email account based on Selenium. However I am running into the problem that a iframe Pop up will show with a button to continue and I am unable to select the button using Selenium. The Website to login is: https://www.gmx.net/ which will switch you to this site/popup https://www.gmx.net/consent-management/ The Button on the right "Akzeptieren und weiter" ( Accept and continue ) is not selectable through Selenium. The code I am using is: driver.get('https://www.gmx.net') time.sleep(5) driver.switchTo.frame(1) time.sleep(5) email_text_field5 = driver.find_element(By.XPATH, '//*[@id="save-all-pur"]') email_text_field5.click() time.sleep(5) I have also tried using: email_text_field5 = driver.find_element(By.ID, "save-all-pur") email_text_field5.click() however I am continuously running into the NoSuchElementException with both methods of selection Error: --------------------------------------------------------------------------- NoSuchElementException Traceback (most recent call last) Cell In [13], line 12 10 #driver.get('https://www.gmx.net/consent-management/') 11 time.sleep(5) ---> 12 email_text_field5 = driver.find_element(By.ID, "save-all-pur") 13 email_text_field5.click() 16 time.sleep(5) The only other method to properly select the iframe is through ID however I can't seem to find any ID for this specific Iframe Any help would be greatly appreciated A: The element Akzeptieren und weiter is within nested <frame> / <iframe> elements so you have to: Induce WebDriverWait for the parent frame to be available and switch to it. Induce WebDriverWait for the child frame to be available and switch to it. Induce WebDriverWait for the desired element to be clickable. You can use either of the following locator strategies: Using CSS_SELECTOR: driver.get('https://www.gmx.net/consent-management/') WebDriverWait(driver, 20).until(EC.frame_to_be_available_and_switch_to_it((By.CSS_SELECTOR,"iframe[src='https://dl.gmx.net/permission/live/portal/v1/ppp/core.html']"))) WebDriverWait(driver, 20).until(EC.frame_to_be_available_and_switch_to_it((By.CSS_SELECTOR,"iframe[src^='https://plus.gmx.net/lt']"))) WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button#save-all-pur"))).click() Using XPATH: driver.get('https://www.gmx.net/consent-management/') WebDriverWait(driver, 20).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH,"//iframe[@src='https://dl.gmx.net/permission/live/portal/v1/ppp/core.html']"))) WebDriverWait(driver, 20).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH,"//iframe[starts-with(@src, 'https://plus.gmx.net/lt')]"))) WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//button[@id='save-all-pur']"))).click() Note : You have to add the following imports : from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC Browser Snapshot:
Selecting a Iframe with Selenium
I am trying to build a program to login to my email account based on Selenium. However I am running into the problem that a iframe Pop up will show with a button to continue and I am unable to select the button using Selenium. The Website to login is: https://www.gmx.net/ which will switch you to this site/popup https://www.gmx.net/consent-management/ The Button on the right "Akzeptieren und weiter" ( Accept and continue ) is not selectable through Selenium. The code I am using is: driver.get('https://www.gmx.net') time.sleep(5) driver.switchTo.frame(1) time.sleep(5) email_text_field5 = driver.find_element(By.XPATH, '//*[@id="save-all-pur"]') email_text_field5.click() time.sleep(5) I have also tried using: email_text_field5 = driver.find_element(By.ID, "save-all-pur") email_text_field5.click() however I am continuously running into the NoSuchElementException with both methods of selection Error: --------------------------------------------------------------------------- NoSuchElementException Traceback (most recent call last) Cell In [13], line 12 10 #driver.get('https://www.gmx.net/consent-management/') 11 time.sleep(5) ---> 12 email_text_field5 = driver.find_element(By.ID, "save-all-pur") 13 email_text_field5.click() 16 time.sleep(5) The only other method to properly select the iframe is through ID however I can't seem to find any ID for this specific Iframe Any help would be greatly appreciated
[ "The element Akzeptieren und weiter is within nested <frame> / <iframe> elements so you have to:\n\nInduce WebDriverWait for the parent frame to be available and switch to it.\n\nInduce WebDriverWait for the child frame to be available and switch to it.\n\nInduce WebDriverWait for the desired element to be clickable.\n\nYou can use either of the following locator strategies:\n\nUsing CSS_SELECTOR:\ndriver.get('https://www.gmx.net/consent-management/')\nWebDriverWait(driver, 20).until(EC.frame_to_be_available_and_switch_to_it((By.CSS_SELECTOR,\"iframe[src='https://dl.gmx.net/permission/live/portal/v1/ppp/core.html']\")))\nWebDriverWait(driver, 20).until(EC.frame_to_be_available_and_switch_to_it((By.CSS_SELECTOR,\"iframe[src^='https://plus.gmx.net/lt']\")))\nWebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, \"button#save-all-pur\"))).click()\n\n\nUsing XPATH:\ndriver.get('https://www.gmx.net/consent-management/')\nWebDriverWait(driver, 20).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH,\"//iframe[@src='https://dl.gmx.net/permission/live/portal/v1/ppp/core.html']\")))\nWebDriverWait(driver, 20).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH,\"//iframe[starts-with(@src, 'https://plus.gmx.net/lt')]\")))\nWebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, \"//button[@id='save-all-pur']\"))).click()\n\n\n\n\nNote : You have to add the following imports :\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\n\nBrowser Snapshot:\n\n\n\n" ]
[ 1 ]
[]
[]
[ "python", "selenium" ]
stackoverflow_0074640064_python_selenium.txt
Q: Understand how xreplace works I'm trying to figure out how xreplace works. When I write from sympy import * my_list = [1234.5678, 22.333333] my_list.xreplace({n : round(n, 4) for n in my_list.atoms(Number)}) I get an error: 'list' object has no attribute 'xreplace'. What did I write wrong? A: xreplace is a method of SymPy's objects. Your my_list is just an ordinary Python list, so it doesn't expose that method. What you can do it this: from sympy import * # Tuple is the SymPy version of a python tuple my_list = Tuple(1234.5678, 22.333333) my_list.xreplace({n : round(n, 4) for n in my_list.atoms(Number)})
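The same pattern also works on a single expression rather than a Tuple; a small sketch reusing the question's values (the exact printed form may differ by SymPy version):

from sympy import Number, symbols, sin

x = symbols("x")
expr = 1234.5678*sin(x) + 22.333333
rounded = expr.xreplace({n: round(n, 4) for n in expr.atoms(Number)})
print(rounded)   # e.g. 1234.5678*sin(x) + 22.3333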
Understand how xreplace works
I'm trying to figure out how xreplace works. When I write from sympy import * my_list = [1234.5678, 22.333333] my_list.xreplace({n : round(n, 4) for n in my_list.atoms(Number)}) I get an error: 'list' object has no attribute 'xreplace'. What did I write wrong?
[ "xreplace is a method of SymPy's objects. Your my_list is just an ordinary Python list, so it doesn't expose that method. What you can do it this:\nfrom sympy import *\n# Tuple is the SymPy version of a python tuple\nmy_list = Tuple(1234.5678, 22.333333)\nmy_list.xreplace({n : round(n, 4) for n in my_list.atoms(Number)})\n\n" ]
[ 4 ]
[]
[]
[ "python", "sympy" ]
stackoverflow_0074639816_python_sympy.txt
Q: Helper tool to refactor large python file into smaller files I have a 3000+ line python file (call it orig) containing 30ish utility functions. I'd like to split it into 5 files (say A.py, B.py, etc). After the split, is there a helper tool to change all the orig.func1 and orig.func2 in the entire repo to A.func1, B.func2? A: As @BrokenBenchmark suggested, I ended up writing my own helper in python lines = open('calc.py').readlines() out = [] for line in lines: if 'orig.' in line: m = re.search('orig.(.*?)\(', line) f = m.group(1) if f in dir(A): line = line.replace('orig.', 'A.') elif f in dir(B): line = line.replace('orig.', 'B.') # and other modules else: raise ValueError out.append(line) open('calc2.py', 'w').writelines(out) A: This is something that Rope can help you with. You need to do a MoveGlobal Refactoring. If your IDE/editor has Rope support and it supports move refactoring, it would probably support MoveGlobal as well.
Helper tool to refactor large python file into smaller files
I have a 3000+ line python file (call it orig) containing 30ish utility functions. I'd like to split it into 5 files (say A.py, B.py, etc). After the split, is there a helper tool to change all the orig.func1 and orig.func2 in the entire repo to A.func1, B.func2?
[ "As @BrokenBenchmark suggested, I ended up writing my own helper in python\nlines = open('calc.py').readlines()\n\nout = []\nfor line in lines:\n if 'orig.' in line:\n m = re.search('orig.(.*?)\\(', line)\n f = m.group(1) \n if f in dir(A):\n line = line.replace('orig.', 'A.')\n elif f in dir(B):\n line = line.replace('orig.', 'B.')\n\n # and other modules\n\n else:\n raise ValueError\n out.append(line)\n \nopen('calc2.py', 'w').writelines(out)\n\n", "This is something that Rope can help you with.\nYou need to do a MoveGlobal Refactoring. If your IDE/editor has Rope support and it supports move refactoring, it would probably support MoveGlobal as well.\n" ]
[ 0, 0 ]
[]
[]
[ "python", "refactoring" ]
stackoverflow_0071230919_python_refactoring.txt
Q: File is showing in use even after using file.close() I am trying to send an email with the file attachment and post that it should rename and move that file to Archive Folder. No issues with email. if (exception_count != 0): filename= "abc.log" with open(filename) as file: try: sender = '[email protected]' receivers='[email protected]' file = open("abc.log", mode='r') msg = EmailMessage() msg["From"] = sender msg["Subject"] = "Subject - abc" msg["To"] = receivers msg.set_content("PFA for the Error Log File") msg.add_attachment(file.read(), filename=filename) s = smtplib.SMTP('smtp',25) s.send_message(msg) except: file.close() now = datetime.datetime.now() dt_string = now.strftime("%d%m%Y_") path= r"C:\Users\username\Documents\Dev Python Scripts\Log_Archive" shutil.move(r'C:\Users\username\Documents\Dev Python Scripts\abc.log', path + "\\" + dt_string + filename) Getting error The process cannot access the file because it is being used by another process: on shutil.move(r'C:\Users\username\Documents\Dev Python Scripts\abc.log', path + "\\" + dt_string + filename) I have tried with try and final statements initially it wasn't working, Saw one similar question about file handling on Stackover flow where it was suggested to use WITH(). I appreciate any help. EDITED CODE: if (exception_count != 0): filename= "abc.log" with open(filename) as file: sender = '[email protected]' receivers='[email protected]' #file = open("abc.log", mode='r') msg = EmailMessage() msg["From"] = sender msg["Subject"] = "Subject - abc" msg["To"] = receivers msg.set_content("PFA for the Error Log File") msg.add_attachment(file.read(), filename=filename) s = smtplib.SMTP('smtp',25) s.send_message(msg) now = datetime.datetime.now() dt_string = now.strftime("%d%m%Y_") path= r"C:\Users\username\Documents\Dev Python Scripts\Log_Archive" shutil.move(r'C:\Users\username\Documents\Dev Python Scripts\abc.log', path + "\\" + dt_string + filename) A: The problem you are seeing is likely due to the fact that you are opening the file twice in your code. If you have code that looks like this: file = open("abc.log", mode='r') then you need some code to close the file file.close() However, if you have code that looks like this: with open(filename) as file: ... then you do not need to close the file. It will close automatically when the program exits the with-block. Therefore your code should look something like this: filename= "abc.log" if (exception_count != 0): with open(filename) as file: sender = '[email protected]' receivers='[email protected]' msg = EmailMessage() msg["From"] = sender msg["Subject"] = "Subject - abc" msg["To"] = receivers msg.set_content("PFA for the Error Log File") msg.add_attachment(file.read(), filename=filename) s = smtplib.SMTP('smtp',25) s.send_message(msg) now = datetime.datetime.now() dt_string = now.strftime("%d%m%Y_") path= r"C:\Users\username\Documents\Dev Python Scripts\Log_Archive" shutil.move(r'C:\Users\username\Documents\Dev Python Scripts\abc.log', path + "\\" + dt_string + filename) Note I have removed the file = open("abc.log", mode='r') from inside of the with-block also worth rebooting IDE once you make the changes just in case the open files (or non-closed files) are still in memory.
File is showing in use even after using file.close()
I am trying to send an email with the file attachment and post that it should rename and move that file to Archive Folder. No issues with email. if (exception_count != 0): filename= "abc.log" with open(filename) as file: try: sender = '[email protected]' receivers='[email protected]' file = open("abc.log", mode='r') msg = EmailMessage() msg["From"] = sender msg["Subject"] = "Subject - abc" msg["To"] = receivers msg.set_content("PFA for the Error Log File") msg.add_attachment(file.read(), filename=filename) s = smtplib.SMTP('smtp',25) s.send_message(msg) except: file.close() now = datetime.datetime.now() dt_string = now.strftime("%d%m%Y_") path= r"C:\Users\username\Documents\Dev Python Scripts\Log_Archive" shutil.move(r'C:\Users\username\Documents\Dev Python Scripts\abc.log', path + "\\" + dt_string + filename) Getting error The process cannot access the file because it is being used by another process: on shutil.move(r'C:\Users\username\Documents\Dev Python Scripts\abc.log', path + "\\" + dt_string + filename) I have tried with try and final statements initially it wasn't working, Saw one similar question about file handling on Stackover flow where it was suggested to use WITH(). I appreciate any help. EDITED CODE: if (exception_count != 0): filename= "abc.log" with open(filename) as file: sender = '[email protected]' receivers='[email protected]' #file = open("abc.log", mode='r') msg = EmailMessage() msg["From"] = sender msg["Subject"] = "Subject - abc" msg["To"] = receivers msg.set_content("PFA for the Error Log File") msg.add_attachment(file.read(), filename=filename) s = smtplib.SMTP('smtp',25) s.send_message(msg) now = datetime.datetime.now() dt_string = now.strftime("%d%m%Y_") path= r"C:\Users\username\Documents\Dev Python Scripts\Log_Archive" shutil.move(r'C:\Users\username\Documents\Dev Python Scripts\abc.log', path + "\\" + dt_string + filename)
[ "The problem you are seeing is likely due to the fact that you are opening the file twice in your code.\nIf you have code that looks like this:\nfile = open(\"abc.log\", mode='r')\n\nthen you need some code to close the file\nfile.close()\n\n\n\nHowever, if you have code that looks like this:\nwith open(filename) as file:\n ...\n\nthen you do not need to close the file. It will close automatically when the program exits the with-block.\n\n\nTherefore your code should look something like this:\nfilename= \"abc.log\"\n\nif (exception_count != 0):\n \n with open(filename) as file:\n \n sender = '[email protected]'\n receivers='[email protected]'\n\n msg = EmailMessage()\n msg[\"From\"] = sender\n msg[\"Subject\"] = \"Subject - abc\"\n msg[\"To\"] = receivers\n msg.set_content(\"PFA for the Error Log File\")\n msg.add_attachment(file.read(), filename=filename)\n\n s = smtplib.SMTP('smtp',25)\n s.send_message(msg)\n \n \nnow = datetime.datetime.now()\ndt_string = now.strftime(\"%d%m%Y_\")\npath= r\"C:\\Users\\username\\Documents\\Dev Python Scripts\\Log_Archive\"\nshutil.move(r'C:\\Users\\username\\Documents\\Dev Python Scripts\\abc.log', path + \"\\\\\" + dt_string + filename)\n\n\nNote\n\nI have removed the file = open(\"abc.log\", mode='r') from inside of the with-block\n\nalso worth rebooting IDE once you make the changes just in case the open files (or non-closed files) are still in memory.\n\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074630955_python.txt
Q: How to break a line of chained methods in Python? I have a line of the following code (don't blame for naming conventions, they are not mine): subkeyword = Session.query( Subkeyword.subkeyword_id, Subkeyword.subkeyword_word ).filter_by( subkeyword_company_id=self.e_company_id ).filter_by( subkeyword_word=subkeyword_word ).filter_by( subkeyword_active=True ).one() I don't like how it looks like (not too readable) but I don't have any better idea to limit lines to 79 characters in this situation. Is there a better way of breaking it (preferably without backslashes)? A: You could use additional parentheses: subkeyword = ( Session.query(Subkeyword.subkeyword_id, Subkeyword.subkeyword_word) .filter_by(subkeyword_company_id=self.e_company_id) .filter_by(subkeyword_word=subkeyword_word) .filter_by(subkeyword_active=True) .one() ) A: This is a case where a line continuation character is preferred to open parentheses. The need for this style becomes more obvious as method names get longer and as methods start taking arguments: subkeyword = Session.query(Subkeyword.subkeyword_id, Subkeyword.subkeyword_word) \ .filter_by(subkeyword_company_id=self.e_company_id) \ .filter_by(subkeyword_word=subkeyword_word) \ .filter_by(subkeyword_active=True) \ .one() PEP 8 is intend to be interpreted with a measure of common-sense and an eye for both the practical and the beautiful. Happily violate any PEP 8 guideline that results in ugly or hard to read code. That being said, if you frequently find yourself at odds with PEP 8, it may be a sign that there are readability issues that transcend your choice of whitespace :-) A: My personal choice would be: subkeyword = Session.query( Subkeyword.subkeyword_id, Subkeyword.subkeyword_word, ).filter_by( subkeyword_company_id=self.e_company_id, subkeyword_word=subkeyword_word, subkeyword_active=True, ).one() A: Just store the intermediate result/object and invoke the next method on it, e.g. q = Session.query(Subkeyword.subkeyword_id, Subkeyword.subkeyword_word) q = q.filter_by(subkeyword_company_id=self.e_company_id) q = q.filter_by(subkeyword_word=subkeyword_word) q = q.filter_by(subkeyword_active=True) subkeyword = q.one() A: It's a bit of a different solution than provided by others but a favorite of mine since it leads to nifty metaprogramming sometimes. base = [Subkeyword.subkeyword_id, Subkeyword_word] search = { 'subkeyword_company_id':self.e_company_id, 'subkeyword_word':subkeyword_word, 'subkeyword_active':True, } subkeyword = Session.query(*base).filter_by(**search).one() This is a nice technique for building searches. Go through a list of conditionals to mine from your complex query form (or string-based deductions about what the user is looking for), then just explode the dictionary into the filter. A: According to Python Language Reference You can use a backslash. Or simply break it. If a bracket is not paired, python will not treat that as a line. And under such circumstance, the indentation of following lines doesn't matter. 
A: You seems using SQLAlchemy, if it is true, sqlalchemy.orm.query.Query.filter_by() method takes multiple keyword arguments, so you could write like: subkeyword = Session.query(Subkeyword.subkeyword_id, Subkeyword.subkeyword_word) \ .filter_by(subkeyword_company_id=self.e_company_id, subkeyword_word=subkeyword_word, subkeyword_active=True) \ .one() But it would be better: subkeyword = Session.query(Subkeyword.subkeyword_id, Subkeyword.subkeyword_word) subkeyword = subkeyword.filter_by(subkeyword_company_id=self.e_company_id, subkeyword_word=subkeyword_word, subkeyword_active=True) subkeuword = subkeyword.one() A: I like to indent the arguments by two blocks, and the statement by one block, like these: for image_pathname in image_directory.iterdir(): image = cv2.imread(str(image_pathname)) input_image = np.resize( image, (height, width, 3) ).transpose((2,0,1)).reshape(1, 3, height, width) net.forward_all(data=input_image) segmentation_index = net.blobs[ 'argmax' ].data.squeeze().transpose(1,2,0).astype(np.uint8) segmentation = np.empty(segmentation_index.shape, dtype=np.uint8) cv2.LUT(segmentation_index, label_colours, segmentation) prediction_pathname = prediction_directory / image_pathname.name cv2.imwrite(str(prediction_pathname), segmentation) A: A slight variation of the top answer: the main object (Session) is kept in the top line and a single indentation is used. This enables quick identification of the main object and all subsequent chained method calls.. subkeyword = (Session .query(Subkeyword.subkeyword_id, Subkeyword.subkeyword_word) .filter_by(subkeyword_company_id=self.e_company_id) .filter_by(subkeyword_word=subkeyword_word) .filter_by(subkeyword_active=True) .one() )
How to break a line of chained methods in Python?
I have a line of the following code (don't blame for naming conventions, they are not mine): subkeyword = Session.query( Subkeyword.subkeyword_id, Subkeyword.subkeyword_word ).filter_by( subkeyword_company_id=self.e_company_id ).filter_by( subkeyword_word=subkeyword_word ).filter_by( subkeyword_active=True ).one() I don't like how it looks like (not too readable) but I don't have any better idea to limit lines to 79 characters in this situation. Is there a better way of breaking it (preferably without backslashes)?
[ "You could use additional parentheses:\nsubkeyword = (\n Session.query(Subkeyword.subkeyword_id, Subkeyword.subkeyword_word)\n .filter_by(subkeyword_company_id=self.e_company_id)\n .filter_by(subkeyword_word=subkeyword_word)\n .filter_by(subkeyword_active=True)\n .one()\n )\n\n", "This is a case where a line continuation character is preferred to open parentheses. The need for this style becomes more obvious as method names get longer and as methods start taking arguments:\nsubkeyword = Session.query(Subkeyword.subkeyword_id, Subkeyword.subkeyword_word) \\\n .filter_by(subkeyword_company_id=self.e_company_id) \\\n .filter_by(subkeyword_word=subkeyword_word) \\\n .filter_by(subkeyword_active=True) \\\n .one()\n\nPEP 8 is intend to be interpreted with a measure of common-sense and an eye for both the practical and the beautiful. Happily violate any PEP 8 guideline that results in ugly or hard to read code.\nThat being said, if you frequently find yourself at odds with PEP 8, it may be a sign that there are readability issues that transcend your choice of whitespace :-)\n", "My personal choice would be:\n\nsubkeyword = Session.query(\n Subkeyword.subkeyword_id,\n Subkeyword.subkeyword_word,\n).filter_by(\n subkeyword_company_id=self.e_company_id,\n subkeyword_word=subkeyword_word,\n subkeyword_active=True,\n).one()\n\n", "Just store the intermediate result/object and invoke the next method on it,\ne.g.\nq = Session.query(Subkeyword.subkeyword_id, Subkeyword.subkeyword_word)\nq = q.filter_by(subkeyword_company_id=self.e_company_id)\nq = q.filter_by(subkeyword_word=subkeyword_word)\nq = q.filter_by(subkeyword_active=True)\nsubkeyword = q.one()\n\n", "It's a bit of a different solution than provided by others but a favorite of mine since it leads to nifty metaprogramming sometimes.\nbase = [Subkeyword.subkeyword_id, Subkeyword_word]\nsearch = {\n 'subkeyword_company_id':self.e_company_id,\n 'subkeyword_word':subkeyword_word,\n 'subkeyword_active':True,\n }\nsubkeyword = Session.query(*base).filter_by(**search).one()\n\nThis is a nice technique for building searches. Go through a list of conditionals to mine from your complex query form (or string-based deductions about what the user is looking for), then just explode the dictionary into the filter.\n", "According to Python Language Reference\nYou can use a backslash.\nOr simply break it. If a bracket is not paired, python will not treat that as a line. 
And under such circumstance, the indentation of following lines doesn't matter.\n", "You seems using SQLAlchemy, if it is true, sqlalchemy.orm.query.Query.filter_by() method takes multiple keyword arguments, so you could write like:\nsubkeyword = Session.query(Subkeyword.subkeyword_id,\n Subkeyword.subkeyword_word) \\\n .filter_by(subkeyword_company_id=self.e_company_id,\n subkeyword_word=subkeyword_word,\n subkeyword_active=True) \\\n .one()\n\nBut it would be better:\nsubkeyword = Session.query(Subkeyword.subkeyword_id,\n Subkeyword.subkeyword_word)\nsubkeyword = subkeyword.filter_by(subkeyword_company_id=self.e_company_id,\n subkeyword_word=subkeyword_word,\n subkeyword_active=True)\nsubkeuword = subkeyword.one()\n\n", "I like to indent the arguments by two blocks, and the statement by one block, like these:\nfor image_pathname in image_directory.iterdir():\n image = cv2.imread(str(image_pathname))\n input_image = np.resize(\n image, (height, width, 3)\n ).transpose((2,0,1)).reshape(1, 3, height, width)\n net.forward_all(data=input_image)\n segmentation_index = net.blobs[\n 'argmax'\n ].data.squeeze().transpose(1,2,0).astype(np.uint8)\n segmentation = np.empty(segmentation_index.shape, dtype=np.uint8)\n cv2.LUT(segmentation_index, label_colours, segmentation)\n prediction_pathname = prediction_directory / image_pathname.name\n cv2.imwrite(str(prediction_pathname), segmentation)\n\n", "A slight variation of the top answer: the main object (Session) is kept in the top line and a single indentation is used. This enables quick identification of the main object and all subsequent chained method calls..\nsubkeyword = (Session\n .query(Subkeyword.subkeyword_id, Subkeyword.subkeyword_word)\n .filter_by(subkeyword_company_id=self.e_company_id)\n .filter_by(subkeyword_word=subkeyword_word)\n .filter_by(subkeyword_active=True)\n .one()\n)\n\n" ]
[ 312, 70, 21, 12, 9, 4, 1, 1, 0 ]
[]
[]
[ "coding_style", "pep8", "python" ]
stackoverflow_0004768941_coding_style_pep8_python.txt
Q: Python mariadb module does not connect to database on network I am trying to connect to a mariadb-database on my local network. using Python. import mariadb cursor = mariadb.connect(host='192.168.178.77', user='someuser', password='somepass', db='temps') Output is: Traceback (most recent call last): File "/Users/localuser/PycharmProjects/SQL/main.py", line 20, in <module> cursor = mariadb.connect(host='192.168.178.77', user='someuser', password='somepass', db='temps') File "/Users/user/.conda/envs/SQL/lib/python3.10/site-packages/mariadb/__init__.py", line 142, in connect connection = connectionclass(*args, **kwargs) File "/Users/localuser/.conda/envs/SQL/lib/python3.10/site-packages/mariadb/connections.py", line 86, in __init__ super().__init__(*args, **kwargs) mariadb.OperationalError: Can't connect to server on '192.168.178.77' (60) I can connect via Pycharms Database functionality and send SQL Statements. I also can use DB management tools from that very host and use data without any issue. It even works from my phone. This code is the only place where I get an error. OS is MacOS13.0.1 Thank You! A: This happens due to a bug in MariaDB Connector/C. (Issue CONC-612). The issue was fixed in C/C Version 3.3.3 - which is available via brew: After brew update brew upgrade mariadb-connector-c connection should work as expected.
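Independently of the connector bug described in the answer, it can help to pass the port explicitly; a hedged connection sketch reusing the question's values (3306 is only the default MariaDB port; adjust it to your server):

import mariadb

conn = mariadb.connect(
    host="192.168.178.77",
    port=3306,           # default MariaDB port; change if your server differs
    user="someuser",
    password="somepass",
    database="temps",
)
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())
conn.close()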
Python mariadb module does not connect to database on network
I am trying to connect to a mariadb-database on my local network. using Python. import mariadb cursor = mariadb.connect(host='192.168.178.77', user='someuser', password='somepass', db='temps') Output is: Traceback (most recent call last): File "/Users/localuser/PycharmProjects/SQL/main.py", line 20, in <module> cursor = mariadb.connect(host='192.168.178.77', user='someuser', password='somepass', db='temps') File "/Users/user/.conda/envs/SQL/lib/python3.10/site-packages/mariadb/__init__.py", line 142, in connect connection = connectionclass(*args, **kwargs) File "/Users/localuser/.conda/envs/SQL/lib/python3.10/site-packages/mariadb/connections.py", line 86, in __init__ super().__init__(*args, **kwargs) mariadb.OperationalError: Can't connect to server on '192.168.178.77' (60) I can connect via Pycharms Database functionality and send SQL Statements. I also can use DB management tools from that very host and use data without any issue. It even works from my phone. This code is the only place where I get an error. OS is MacOS13.0.1 Thank You!
[ "This happens due to a bug in MariaDB Connector/C. (Issue CONC-612).\nThe issue was fixed in C/C Version 3.3.3 - which is available via brew:\nAfter\nbrew update\nbrew upgrade mariadb-connector-c\n\nconnection should work as expected.\n" ]
[ 0 ]
[ "I've got the same problem recently. Add port variable and check other. If doesn't help, try mysql-connector-python it works similar. Or install mariadb connector manually\n" ]
[ -2 ]
[ "connection", "mariadb", "network_programming", "pycharm", "python" ]
stackoverflow_0074639957_connection_mariadb_network_programming_pycharm_python.txt
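For the MariaDB connection issue above, a minimal sketch of how one might re-test the connection from Python after upgrading mariadb-connector-c; the host, credentials and database name are the placeholders from the question, and the explicit port and connect_timeout arguments are assumptions added here, not part of the original answer.

import mariadb

HOST = "192.168.178.77"   # placeholder from the question; replace with your server

try:
    conn = mariadb.connect(
        host=HOST,
        port=3306,              # default MariaDB port, passed explicitly
        user="someuser",
        password="somepass",
        database="temps",
        connect_timeout=10,     # fail fast instead of waiting for the OS-level timeout
    )
    print("connected to", HOST)
    conn.close()
except mariadb.Error as exc:
    print("still failing:", exc)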
Q: pip is not using extra index url defined in pip.conf I am using MacOS, and I created a pip.conf in ~/.pip. There is only one extral-index-url in this file, which looks like: [global] extra-index-url=https://[username]:[password]@artifactory After that I tried to run pip config list, and I can see global.extra-index-url=https://[username]:[password]@artifactory in the terminal However, when I try to use pip to install a package, it still doesn't check this URL. I can install the package by using pip install <package> --extra-index-url https://[username]:[password]@artifactory, but just curious why my pip.conf is not being used. BTW I am using a virtual env when I run pip. I did copy pip.conf to the virtualenv folder, but it doesn't work either. A: @hoefling's comment on the question is the answer I needed. According to the docs, if a directory $HOME/Library/Application Support/pip/ exists, that will shadow any $HOME/.config/pip/pip.conf file. If that exists, it will shadow any $HOME/.pip/pip.conf file. So investigate those locations in order, perhaps one of them is getting in the way. A: Adding this for sanity. If installing packages through requirements.txt, it is possible the requirements file contains options for pip. This seems to take precedence over all the other options, including the command. requirements.txt -i https://pypi.org/simple my-package==1.0.0
pip is not using extra index url defined in pip.conf
I am using MacOS, and I created a pip.conf in ~/.pip. There is only one extral-index-url in this file, which looks like: [global] extra-index-url=https://[username]:[password]@artifactory After that I tried to run pip config list, and I can see global.extra-index-url=https://[username]:[password]@artifactory in the terminal However, when I try to use pip to install a package, it still doesn't check this URL. I can install the package by using pip install <package> --extra-index-url https://[username]:[password]@artifactory, but just curious why my pip.conf is not being used. BTW I am using a virtual env when I run pip. I did copy pip.conf to the virtualenv folder, but it doesn't work either.
[ "@hoefling's comment on the question is the answer I needed. According to the docs, if a directory\n$HOME/Library/Application Support/pip/\n\nexists, that will shadow any\n$HOME/.config/pip/pip.conf\n\nfile. If that exists, it will shadow any\n$HOME/.pip/pip.conf\n\nfile. So investigate those locations in order, perhaps one of them is getting in the way.\n", "Adding this for sanity.\nIf installing packages through requirements.txt, it is possible the requirements file contains options for pip. This seems to take precedence over all the other options, including the command.\nrequirements.txt\n-i https://pypi.org/simple\nmy-package==1.0.0\n\n" ]
[ 5, 0 ]
[]
[]
[ "pip", "python" ]
stackoverflow_0061419865_pip_python.txt
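As a follow-up to the shadowing explanation in the first answer above, a small sketch (assuming the macOS paths named there) that shows which of the candidate pip config files actually exist on a machine:

from pathlib import Path

home = Path.home()
candidates = [
    home / "Library" / "Application Support" / "pip" / "pip.conf",   # takes precedence if present
    home / ".config" / "pip" / "pip.conf",
    home / ".pip" / "pip.conf",                                      # legacy location
]
for path in candidates:
    print("exists " if path.exists() else "missing", path)

Newer pip releases also provide pip config debug, which prints every file pip considers together with the values loaded from each.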
Q: Battleship game crashes if I click outside of grid. How do I stop this? I'm relatively new to pygame (and writing code in general), so I decided to make a battleship game for my project. The game is relatively simple, ships will be randomly placed and the player has a set amount of shots to find and sink all ships else they lose. However, I have made a grid which responds to player input but if the player clicks outside of the grid (a small area under the grid which will show information like the total shots remaining) the game shows an error and stops. How do I fix this? Here is my code so far: import pygame, random, sys from sys import exit from pygame.locals import * pygame.init() SCREENWIDTH = 255 SCREENHEIGHT = 300 GRIDWIDTH = 20 GRIDHEIGHT = 20 MARGIN = 5 FPS = 60 BLACK = ( 0, 0, 0) WHITE = (255, 255, 255) RED = (255, 0, 0) GREEN = ( 0, 255, 0) GRAY = ( 60, 60, 60) BLUE = ( 0, 0, 255) YELLOW = (255, 255, 0) TURQUOISE= ( 0, 100, 100) SCREEN = pygame.display.set_mode((SCREENWIDTH, SCREENHEIGHT)) pygame.display.set_caption("BATTLESHIPS") FPSCLOCK = pygame.time.Clock() MAIN_BACK = pygame.image.load("Water back.jpg") PLAY_BUTTON_IMAGE = pygame.image.load("Start-Button-Vector-PNG.png") HEADING_FONT = pygame.font.Font(None, 60) HEADING = HEADING_FONT.render("Battleships", True, WHITE) MENU = True PLAYING = False GRID = [] for ROW in range(10): GRID.append([]) for COLUMN in range(10): GRID[ROW].append(0) class Button(): def __init__ (self, x, y, image, scale): width = image.get_width() height = image.get_height() self.image = image self.rect = self.image.get_rect() self.image = pygame.transform.scale(image, (int(width * scale), int(height * scale))) self.rect.topleft = (x, y) self.clicked = False def draw(self): action = False if PLAY_ON == False: return pos = pygame.mouse.get_pos() if self.rect.collidepoint(pos): if pygame.mouse.get_pressed()[0] == 1 and self.clicked == False: self.clicked = True action = True #endif #endif if pygame.mouse.get_pressed()[0] == 0: self.clicked = False #endif SCREEN.blit(self.image, (self.rect.x, self.rect.y)) return action PLAY_BUTTON = Button(3, 125, PLAY_BUTTON_IMAGE, 0.5) PLAY_ON = True def draw_grid(): for ROW in range(10): for COLUMN in range(10): COLOR = WHITE if GRID[ROW][COLUMN] == 1: COLOR = GREEN pygame.draw.rect(SCREEN, COLOR, [(MARGIN + GRIDWIDTH) * COLUMN + MARGIN, (MARGIN + GRIDHEIGHT) * ROW + MARGIN, GRIDWIDTH, GRIDHEIGHT]) SCREEN.fill(TURQUOISE) PLAY_BUTTON.draw() SCREEN.blit(HEADING, (10,20)) pygame.display.update() while MENU == True: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() exit() elif PLAY_BUTTON.draw() == True and PLAY_ON == True: PLAY_ON = False PLAYING = True SCREEN.fill(BLUE) draw_grid() MENU = False FPSCLOCK.tick(FPS) pygame.display.update() while PLAYING == True: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() exit() if event.type == pygame.MOUSEBUTTONDOWN: POS = pygame.mouse.get_pos() COLUMN = POS[0] // (GRIDWIDTH + MARGIN) ROW = POS[1] // (GRIDHEIGHT + MARGIN) GRID[ROW][COLUMN] = 1 draw_grid() FPSCLOCK.tick(FPS) pygame.display.update() A: you must check that COLUMN and ROW are within the range of the grid:: while PLAYING == True: for event in pygame.event.get(): # [...] if event.type == pygame.MOUSEBUTTONDOWN: POS = pygame.mouse.get_pos() COLUMN = POS[0] // (GRIDWIDTH + MARGIN) ROW = POS[1] // (GRIDHEIGHT + MARGIN) if 0 <= ROW < 10 and 0 <= COLUMN < 10: GRID[ROW][COLUMN] = 1 draw_grid()
Battleship game crashes if I click outside of grid. How do I stop this?
I'm relatively new to pygame (and writing code in general), so I decided to make a battleship game for my project. The game is relatively simple, ships will be randomly placed and the player has a set amount of shots to find and sink all ships else they lose. However, I have made a grid which responds to player input but if the player clicks outside of the grid (a small area under the grid which will show information like the total shots remaining) the game shows an error and stops. How do I fix this? Here is my code so far: import pygame, random, sys from sys import exit from pygame.locals import * pygame.init() SCREENWIDTH = 255 SCREENHEIGHT = 300 GRIDWIDTH = 20 GRIDHEIGHT = 20 MARGIN = 5 FPS = 60 BLACK = ( 0, 0, 0) WHITE = (255, 255, 255) RED = (255, 0, 0) GREEN = ( 0, 255, 0) GRAY = ( 60, 60, 60) BLUE = ( 0, 0, 255) YELLOW = (255, 255, 0) TURQUOISE= ( 0, 100, 100) SCREEN = pygame.display.set_mode((SCREENWIDTH, SCREENHEIGHT)) pygame.display.set_caption("BATTLESHIPS") FPSCLOCK = pygame.time.Clock() MAIN_BACK = pygame.image.load("Water back.jpg") PLAY_BUTTON_IMAGE = pygame.image.load("Start-Button-Vector-PNG.png") HEADING_FONT = pygame.font.Font(None, 60) HEADING = HEADING_FONT.render("Battleships", True, WHITE) MENU = True PLAYING = False GRID = [] for ROW in range(10): GRID.append([]) for COLUMN in range(10): GRID[ROW].append(0) class Button(): def __init__ (self, x, y, image, scale): width = image.get_width() height = image.get_height() self.image = image self.rect = self.image.get_rect() self.image = pygame.transform.scale(image, (int(width * scale), int(height * scale))) self.rect.topleft = (x, y) self.clicked = False def draw(self): action = False if PLAY_ON == False: return pos = pygame.mouse.get_pos() if self.rect.collidepoint(pos): if pygame.mouse.get_pressed()[0] == 1 and self.clicked == False: self.clicked = True action = True #endif #endif if pygame.mouse.get_pressed()[0] == 0: self.clicked = False #endif SCREEN.blit(self.image, (self.rect.x, self.rect.y)) return action PLAY_BUTTON = Button(3, 125, PLAY_BUTTON_IMAGE, 0.5) PLAY_ON = True def draw_grid(): for ROW in range(10): for COLUMN in range(10): COLOR = WHITE if GRID[ROW][COLUMN] == 1: COLOR = GREEN pygame.draw.rect(SCREEN, COLOR, [(MARGIN + GRIDWIDTH) * COLUMN + MARGIN, (MARGIN + GRIDHEIGHT) * ROW + MARGIN, GRIDWIDTH, GRIDHEIGHT]) SCREEN.fill(TURQUOISE) PLAY_BUTTON.draw() SCREEN.blit(HEADING, (10,20)) pygame.display.update() while MENU == True: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() exit() elif PLAY_BUTTON.draw() == True and PLAY_ON == True: PLAY_ON = False PLAYING = True SCREEN.fill(BLUE) draw_grid() MENU = False FPSCLOCK.tick(FPS) pygame.display.update() while PLAYING == True: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() exit() if event.type == pygame.MOUSEBUTTONDOWN: POS = pygame.mouse.get_pos() COLUMN = POS[0] // (GRIDWIDTH + MARGIN) ROW = POS[1] // (GRIDHEIGHT + MARGIN) GRID[ROW][COLUMN] = 1 draw_grid() FPSCLOCK.tick(FPS) pygame.display.update()
[ "you must check that COLUMN and ROW are within the range of the grid::\nwhile PLAYING == True:\n for event in pygame.event.get():\n # [...]\n\n if event.type == pygame.MOUSEBUTTONDOWN:\n POS = pygame.mouse.get_pos()\n COLUMN = POS[0] // (GRIDWIDTH + MARGIN)\n ROW = POS[1] // (GRIDHEIGHT + MARGIN)\n\n if 0 <= ROW < 10 and 0 <= COLUMN < 10:\n GRID[ROW][COLUMN] = 1\n draw_grid()\n\n" ]
[ 0 ]
[]
[]
[ "pygame", "python" ]
stackoverflow_0074640245_pygame_python.txt
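A variant of the bounds check from the answer above, expressed with a pygame.Rect so the playable area is defined in one place; this is a hedged sketch with stand-in constants matching the question, not a drop-in patch for the full game.

import pygame

GRIDWIDTH = GRIDHEIGHT = 20
MARGIN = 5
GRID = [[0] * 10 for _ in range(10)]

# Rectangle covering the 10x10 board, including the margins between cells.
BOARD_RECT = pygame.Rect(0, 0, (GRIDWIDTH + MARGIN) * 10 + MARGIN, (GRIDHEIGHT + MARGIN) * 10 + MARGIN)

def handle_click(pos):
    if not BOARD_RECT.collidepoint(pos):
        return                                      # click landed in the info area below the grid: ignore it
    column = min(pos[0] // (GRIDWIDTH + MARGIN), 9) # clamp in case the click hit the trailing margin
    row = min(pos[1] // (GRIDHEIGHT + MARGIN), 9)
    GRID[row][column] = 1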
Q: Numba.jit() built in function to make our performance faster in python can anyone give one example to clarify it.Thanks! I would like to know before I imply it. A: You can read this article. Instead of doing operation with numpy array or pandas you can do the following from numba import njit @njit def add_arrays(x, y): return x + y Instead of def add_arrays(x, y): return x + y Of course there are more applications and uses of njit and numba in general but this is the one I've found my self using most of the times.
Numba.jit() built in function to make our performance faster in python
Can anyone give one example to clarify it? Thanks! I would like to know before I apply it.
[ "You can read this article.\nInstead of doing operation with numpy array or pandas you can do the following\nfrom numba import njit\n@njit\ndef add_arrays(x, y):\n return x + y\n\n\nInstead of\ndef add_arrays(x, y):\n return x + y\n\nOf course there are more applications and uses of njit and numba in general but this is the one I've found my self using most of the times.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074640533_python.txt
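The add_arrays example in the answer above is small enough that NumPy alone is already fast; here is a hedged sketch of the kind of explicit loop where @njit usually pays off, with a rough timing comparison (the first jitted call is made separately because it includes compilation time):

import time
import numpy as np
from numba import njit

def sum_of_squares_py(a):
    total = 0.0
    for i in range(a.shape[0]):
        total += a[i] * a[i]
    return total

@njit
def sum_of_squares_jit(a):
    total = 0.0
    for i in range(a.shape[0]):
        total += a[i] * a[i]
    return total

a = np.random.rand(5_000_000)
sum_of_squares_jit(a)                            # warm-up call triggers compilation

t0 = time.perf_counter(); sum_of_squares_py(a);  t1 = time.perf_counter()
t2 = time.perf_counter(); sum_of_squares_jit(a); t3 = time.perf_counter()
print(f"pure Python loop: {t1 - t0:.3f}s, njit loop: {t3 - t2:.3f}s")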
Q: Azure Function (Consumption Linux Plan) is not working with System Identity to access the storage account for zip deployment Since "managed identity for AzureWebJobsStorage" has been published for enabling the Function App to access the storage account I wanted to give it a shot and implement across our APIs. However, this didn't work for Azure Function Linux Consumption Plan. The setup looks like that: Function runtime: ~4 Python version: 3.9 Service Plan: Linux Consumption System Identity activated and Blob Owner + SA Contributor is assigned to access the SA. Deployment method: Azure DevOps pipeline through task AzureFunctionApp@1 I've tried zip deployment and runFromPackage My Issue is with the Application Settings. It's not very clear to me which app settings beside AzureWebJobsStorage__accountName:[SA_NAME] should be configured. To be more concrete, the app setting WEBSITE_RUN_FROM_PACKAGE:1 according to MSFT documentation is not supported in consumption linux plan and WEBSITE_RUN_FROM_PACKAGE:[URL] must be used for it. Normally, without Managed Identity access, the deployment tools will take care of the adjusting by each deployment the URL and pointing it to the correct package name in the storage account. This adjustment is not happening with AzureWebJobsStorage__accountName, since it cannot access the storage account anyway. How to test it? Simply by creating a cosumption linux python function in Azure and adding\adjusting the AzureWebJobsStorage__accountName:[SA_NAME]. Then try to deploy the function with basic HTTPTrigger created in vscode. I am not aware if the python http trigger code must be adjusted or not. Would be great if someone can provide more information about it and about the app settings. A: The below works on my side(Linux consumption plan): trigger: - none variables: # Azure Resource Manager connection created during pipeline creation azureSubscription: 'xxx' resourceGroupName: 'xxx' # Function app name functionAppName: 'xxx' # Agent VM image name vmImageName: 'ubuntu-latest' # Working Directory workingDirectory: '' storage_str: 'xxx' stages: - stage: Build displayName: Build stage jobs: - job: Build displayName: Build pool: vmImage: $(vmImageName) steps: - task: UsePythonVersion@0 displayName: 'Use Python 3.9' inputs: versionSpec: 3.9 # Functions V2 supports Python 3.6 as of today architecture: 'x64' - bash: | pip install --target="./.python_packages/lib/site-packages" -r ./requirements.txt workingDirectory: $(workingDirectory) displayName: 'Install application dependencies' - task: ArchiveFiles@2 displayName: 'Archive files' inputs: rootFolderOrFile: "$(System.DefaultWorkingDirectory)" includeRootFolder: false archiveType: zip archiveFile: "$(System.DefaultWorkingDirectory)/$(Build.BuildId).zip" replaceExistingArchive: true - task: PublishBuildArtifacts@1 inputs: PathtoPublish: '$(System.DefaultWorkingDirectory)/$(Build.BuildId).zip' artifactName: 'drop' - stage: Deploy displayName: Deploy stage dependsOn: Build condition: succeeded() jobs: - deployment: Deploy displayName: Deploy environment: 'test' pool: vmImage: 'windows-latest' strategy: runOnce: deploy: steps: - task: DownloadPipelineArtifact@2 displayName: 'Download Pipeline Artifact' inputs: buildType: 'current' artifactName: 'drop' targetPath: '$(Pipeline.Workspace)/drop/' - task: AzureAppServiceSettings@1 inputs: azureSubscription: '$(azureSubscription)' appName: '$(functionAppName)' resourceGroupName: '$(resourceGroupName)' appSettings: | [ { "name": "AzureWebJobsStorage", "value": "$(storage_str)", 
"slotSetting": false } ] - task: AzureFunctionApp@1 inputs: azureSubscription: '$(azureSubscription)' appType: 'functionAppLinux' appName: '$(functionAppName)' package: '$(Pipeline.Workspace)/drop/$(Build.BuildId).zip' runtimeStack: 'PYTHON|3.9' A: you can solve this by using a user-assigned identity. Just try adding this app setting: WEBSITE_RUN_FROM_PACKAGE_BLOB_MI_RESOURCE_ID = Managed_Identity_Resource_Id Source: https://learn.microsoft.com/en-us/azure/azure-functions/run-functions-from-deployment-package#fetch-a-package-from-azure-blob-storage-using-a-managed-identity
Azure Function (Consumption Linux Plan) is not working with System Identity to access the storage account for zip deployment
Since "managed identity for AzureWebJobsStorage" has been published for enabling the Function App to access the storage account I wanted to give it a shot and implement across our APIs. However, this didn't work for Azure Function Linux Consumption Plan. The setup looks like that: Function runtime: ~4 Python version: 3.9 Service Plan: Linux Consumption System Identity activated and Blob Owner + SA Contributor is assigned to access the SA. Deployment method: Azure DevOps pipeline through task AzureFunctionApp@1 I've tried zip deployment and runFromPackage My Issue is with the Application Settings. It's not very clear to me which app settings beside AzureWebJobsStorage__accountName:[SA_NAME] should be configured. To be more concrete, the app setting WEBSITE_RUN_FROM_PACKAGE:1 according to MSFT documentation is not supported in consumption linux plan and WEBSITE_RUN_FROM_PACKAGE:[URL] must be used for it. Normally, without Managed Identity access, the deployment tools will take care of the adjusting by each deployment the URL and pointing it to the correct package name in the storage account. This adjustment is not happening with AzureWebJobsStorage__accountName, since it cannot access the storage account anyway. How to test it? Simply by creating a cosumption linux python function in Azure and adding\adjusting the AzureWebJobsStorage__accountName:[SA_NAME]. Then try to deploy the function with basic HTTPTrigger created in vscode. I am not aware if the python http trigger code must be adjusted or not. Would be great if someone can provide more information about it and about the app settings.
[ "The below works on my side(Linux consumption plan):\ntrigger:\n- none\n\nvariables:\n # Azure Resource Manager connection created during pipeline creation\n azureSubscription: 'xxx'\n resourceGroupName: 'xxx'\n # Function app name\n functionAppName: 'xxx'\n # Agent VM image name\n vmImageName: 'ubuntu-latest'\n\n # Working Directory\n workingDirectory: ''\n\n\n storage_str: 'xxx'\n\nstages:\n- stage: Build\n displayName: Build stage\n\n jobs:\n - job: Build\n displayName: Build\n pool:\n vmImage: $(vmImageName)\n\n steps:\n - task: UsePythonVersion@0\n displayName: 'Use Python 3.9'\n inputs:\n versionSpec: 3.9 # Functions V2 supports Python 3.6 as of today\n architecture: 'x64'\n\n - bash: |\n pip install --target=\"./.python_packages/lib/site-packages\" -r ./requirements.txt\n workingDirectory: $(workingDirectory)\n displayName: 'Install application dependencies'\n\n - task: ArchiveFiles@2\n displayName: 'Archive files'\n inputs:\n rootFolderOrFile: \"$(System.DefaultWorkingDirectory)\"\n includeRootFolder: false\n archiveType: zip\n archiveFile: \"$(System.DefaultWorkingDirectory)/$(Build.BuildId).zip\"\n replaceExistingArchive: true\n\n - task: PublishBuildArtifacts@1\n inputs:\n PathtoPublish: '$(System.DefaultWorkingDirectory)/$(Build.BuildId).zip'\n artifactName: 'drop'\n \n- stage: Deploy\n displayName: Deploy stage\n dependsOn: Build\n condition: succeeded()\n\n jobs:\n - deployment: Deploy\n displayName: Deploy\n environment: 'test'\n pool:\n vmImage: 'windows-latest'\n\n strategy:\n runOnce:\n deploy:\n steps:\n - task: DownloadPipelineArtifact@2\n displayName: 'Download Pipeline Artifact'\n inputs:\n buildType: 'current'\n artifactName: 'drop'\n targetPath: '$(Pipeline.Workspace)/drop/'\n - task: AzureAppServiceSettings@1\n inputs:\n azureSubscription: '$(azureSubscription)'\n appName: '$(functionAppName)'\n resourceGroupName: '$(resourceGroupName)'\n appSettings: |\n [\n {\n \"name\": \"AzureWebJobsStorage\",\n \"value\": \"$(storage_str)\",\n \"slotSetting\": false\n }\n ]\n - task: AzureFunctionApp@1\n inputs:\n azureSubscription: '$(azureSubscription)'\n appType: 'functionAppLinux'\n appName: '$(functionAppName)'\n package: '$(Pipeline.Workspace)/drop/$(Build.BuildId).zip'\n runtimeStack: 'PYTHON|3.9'\n\n", "you can solve this by using a user-assigned identity.\nJust try adding this app setting:\nWEBSITE_RUN_FROM_PACKAGE_BLOB_MI_RESOURCE_ID = Managed_Identity_Resource_Id\nSource:\nhttps://learn.microsoft.com/en-us/azure/azure-functions/run-functions-from-deployment-package#fetch-a-package-from-azure-blob-storage-using-a-managed-identity\n" ]
[ 0, 0 ]
[]
[]
[ "azure", "azure_functions", "azure_functions_core_tools", "azure_pipelines", "python" ]
stackoverflow_0073898612_azure_azure_functions_azure_functions_core_tools_azure_pipelines_python.txt
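Both answers above ultimately hinge on whether the chosen identity can read the deployment package from the storage account. A hedged, standalone sketch for checking that access with the Azure SDK; the account name and container are placeholders, not values from the question, and DefaultAzureCredential uses the managed identity when run inside Azure and your local az login otherwise.

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

ACCOUNT_URL = "https://mystorageaccount.blob.core.windows.net"   # placeholder account
CONTAINER = "function-releases"                                  # placeholder container

service = BlobServiceClient(account_url=ACCOUNT_URL, credential=DefaultAzureCredential())
container = service.get_container_client(CONTAINER)
for blob in container.list_blobs():
    print(blob.name)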
Q: Drawing a selection area with mouse in Tkinter I am developing an application which takes input from user in .csv form and plots the graph for the corresponding values using matplotlib. def plotgraph(): x = [] y = [] data = text.get("1.0", END) sepFile = data.split('\n') for plotPair in sepFile: xAndY = plotPair.split(',') if len(xAndY[0]) != 0 and len(xAndY[1]) != 0: x.append(float(xAndY[0])) y.append(float(xAndY[1])) graph = Figure(figsize=(5,4), dpi=100) a = graph.add_subplot(111) a.plot(x,y) a.set_xlabel('Velocity') a.set_ylabel('Absorbance') canvas = FigureCanvasTkAgg(graph, master=RightFrame) canvas.show() canvas.get_tk_widget().grid(column=2, row=1, rowspan=2, sticky=(N, S, E, W)) I want a similar function like this Matplotlib: draw a selection area in the shape of a rectangle with the mouse in Tkinter which gives me the x0, x1, y0, y1 after the selection. I could make the already asked question make work & customize it according to my needs but unaware what I am doing mistake in __init__(self) root = Tk() class Annotate(object): def __init__(self): self.fig = mplfig.Figure(figsize=(1.5, 1.5)) self.ax = self.fig.add_subplot(111) self.ax.plot([0,1,2,3,4],[0,8,9,5,3]) self.canvas = tkagg.FigureCanvasTkAgg(self.fig, master=root) self.x0 = None self.y0 = None self.x1 = None self.y1 = None self.ax.figure.canvas.mpl_connect('button_press_event', self.on_press) self.ax.figure.canvas.mpl_connect('button_release_event', self.on_release) self.ax.figure.canvas.mpl_connect('motion_notify_event', self.on_motion) When I run this code I get a blank Tk window. Can anyone tell me what should I do & what am I doing mistake A: To use class you need at list something like this class Annotate(object): def __init__(self): print "Annotate is runing" # rest of your code root = Tk() my_object = Annotate() root.mainloop() And probably you will need more work with this. A: I think now the simplest wat to do this is with the NavigationToolbar2Tk matplotlib class, which provides a built-in toolbar with a zoom selector, scroller, etc. This is shown in the "Embedding in Tk" example, which in code code above would translate to something like: from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2Tk def plotgraph(): x = [] y = [] data = text.get("1.0", END) sepFile = data.split('\n') for plotPair in sepFile: xAndY = plotPair.split(',') if len(xAndY[0]) != 0 and len(xAndY[1]) != 0: x.append(float(xAndY[0])) y.append(float(xAndY[1])) graph = Figure(figsize=(5,4), dpi=100) a = graph.add_subplot(111) a.plot(x,y) a.set_xlabel('Velocity') a.set_ylabel('Absorbance') canvas = FigureCanvasTkAgg(graph, master=RightFrame) canvas.show() canvas.get_tk_widget().grid(column=2, row=1, rowspan=2, sticky=(N, S, E, W)) # set pack_toolbar to False to be able to use grid to set position toolbar = NavigationToolbar2Tk(canvas, RightFrame, pack_toolbar=False) toolbar.grid(column=2, row=3, stick="nw") toolbar.update()
Drawing a selection area with mouse in Tkinter
I am developing an application which takes input from user in .csv form and plots the graph for the corresponding values using matplotlib. def plotgraph(): x = [] y = [] data = text.get("1.0", END) sepFile = data.split('\n') for plotPair in sepFile: xAndY = plotPair.split(',') if len(xAndY[0]) != 0 and len(xAndY[1]) != 0: x.append(float(xAndY[0])) y.append(float(xAndY[1])) graph = Figure(figsize=(5,4), dpi=100) a = graph.add_subplot(111) a.plot(x,y) a.set_xlabel('Velocity') a.set_ylabel('Absorbance') canvas = FigureCanvasTkAgg(graph, master=RightFrame) canvas.show() canvas.get_tk_widget().grid(column=2, row=1, rowspan=2, sticky=(N, S, E, W)) I want a similar function like this Matplotlib: draw a selection area in the shape of a rectangle with the mouse in Tkinter which gives me the x0, x1, y0, y1 after the selection. I could make the already asked question make work & customize it according to my needs but unaware what I am doing mistake in __init__(self) root = Tk() class Annotate(object): def __init__(self): self.fig = mplfig.Figure(figsize=(1.5, 1.5)) self.ax = self.fig.add_subplot(111) self.ax.plot([0,1,2,3,4],[0,8,9,5,3]) self.canvas = tkagg.FigureCanvasTkAgg(self.fig, master=root) self.x0 = None self.y0 = None self.x1 = None self.y1 = None self.ax.figure.canvas.mpl_connect('button_press_event', self.on_press) self.ax.figure.canvas.mpl_connect('button_release_event', self.on_release) self.ax.figure.canvas.mpl_connect('motion_notify_event', self.on_motion) When I run this code I get a blank Tk window. Can anyone tell me what should I do & what am I doing mistake
[ "To use class you need at list something like this\nclass Annotate(object):\n def __init__(self):\n print \"Annotate is runing\"\n # rest of your code\n\nroot = Tk()\nmy_object = Annotate()\n\nroot.mainloop()\n\nAnd probably you will need more work with this.\n", "I think now the simplest wat to do this is with the NavigationToolbar2Tk matplotlib class, which provides a built-in toolbar with a zoom selector, scroller, etc.\nThis is shown in the \"Embedding in Tk\" example, which in code code above would translate to something like:\nfrom matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2Tk\n\ndef plotgraph():\n x = []\n y = []\n data = text.get(\"1.0\", END)\n sepFile = data.split('\\n')\n\n for plotPair in sepFile:\n xAndY = plotPair.split(',')\n if len(xAndY[0]) != 0 and len(xAndY[1]) != 0:\n x.append(float(xAndY[0]))\n y.append(float(xAndY[1]))\n\n graph = Figure(figsize=(5,4), dpi=100)\n a = graph.add_subplot(111)\n a.plot(x,y)\n a.set_xlabel('Velocity')\n a.set_ylabel('Absorbance')\n canvas = FigureCanvasTkAgg(graph, master=RightFrame)\n canvas.show()\n canvas.get_tk_widget().grid(column=2, row=1, rowspan=2, sticky=(N, S, E, W))\n\n # set pack_toolbar to False to be able to use grid to set position\n toolbar = NavigationToolbar2Tk(canvas, RightFrame, pack_toolbar=False)\n toolbar.grid(column=2, row=3, stick=\"nw\")\n toolbar.update() \n\n" ]
[ 1, 0 ]
[]
[]
[ "matplotlib", "python", "tkinter" ]
stackoverflow_0024743407_matplotlib_python_tkinter.txt
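For the rectangle-selection part of the question above (recovering x0, x1, y0, y1 after the drag), matplotlib also ships RectangleSelector, which avoids hand-writing the three mouse handlers. A minimal hedged sketch embedding it in a Tk window, reusing the sample data from the question's second code block:

import tkinter as tk
from matplotlib.figure import Figure
from matplotlib.widgets import RectangleSelector
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg

def on_select(eclick, erelease):
    # eclick/erelease are the press and release events, with coordinates in data units
    print("selection:", eclick.xdata, erelease.xdata, eclick.ydata, erelease.ydata)

root = tk.Tk()
fig = Figure(figsize=(5, 4), dpi=100)
ax = fig.add_subplot(111)
ax.plot([0, 1, 2, 3, 4], [0, 8, 9, 5, 3])

canvas = FigureCanvasTkAgg(fig, master=root)
canvas.get_tk_widget().pack(fill="both", expand=True)

selector = RectangleSelector(ax, on_select, useblit=True)   # keep a reference or it may be garbage-collected
canvas.draw()
root.mainloop()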
Q: "python setup.py egg_info" failed with error code 1 for netCDF4 on my Home Office I use Ubuntu 12.04. Now I try to configure python with netcdf4. I installed a lot of packages like numpy, pandas, matplotlib, hdf5, cython, h5py,... When I try to install netcdf4 I got the error message: Collecting netcdf /usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/packages /urllib3/util/ssl_.py:315: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#snimissingwarning. SNIMissingWarning /usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:120: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning. InsecurePlatformWarning Downloading netcdf-0.2.1.tar.gz (16.5MB) 100% |████████████████████████████████| 16.5MB 42kB/s Requirement already satisfied (use --upgrade to upgrade): numpy==1.8.0 in /usr/local/lib/python2.7/dist-packages/numpy-1.8.0-py2.7-linux-x86_64.egg (from netcdf) Collecting h5py==2.3.1 (from netcdf) /usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:120: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning. InsecurePlatformWarning Downloading h5py-2.3.1.tar.gz (1.1MB) 100% |████████████████████████████████| 1.1MB 223kB/s Collecting netCDF4==1.1.0 (from netcdf) Downloading netCDF4-1.1.0.tar.gz (562kB) 100% |████████████████████████████████| 563kB 340kB/s Complete output from command python setup.py egg_info: HDF5_DIR environment variable not set, checking some standard locations .. checking /home/elly ... checking /usr/local ... HDF5 found in /usr/local NETCDF4_DIR environment variable not set, checking standard locations.. checking /home/elly ... checking /usr/local ... checking /sw ... checking /opt ... checking /opt/local ... checking /usr ... Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-build-q2Vs7i/netCDF4/setup.py", line 232, in <module> raise ValueError('did not find netCDF version 4 headers') ValueError: did not find netCDF version 4 headers ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-q2Vs7i/netCDF4/ I searched different pages with similar issues and tried install-instructions: sudo apt-get install python-dev sudo apt-get install libevent-dev sudo apt-get indtall python-all-dev Do anyone have another idea to fix this? Thanks! A: I was trying to do the same and I finally found the full list : python3 -m pip install -U pip pip3 install -upgrade setuptools pip3 install netCDF4 it should work now :) it worked for me (centos8, python3)
"python setup.py egg_info" failed with error code 1 for netCDF4
on my Home Office I use Ubuntu 12.04. Now I try to configure python with netcdf4. I installed a lot of packages like numpy, pandas, matplotlib, hdf5, cython, h5py,... When I try to install netcdf4 I got the error message: Collecting netcdf /usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/packages /urllib3/util/ssl_.py:315: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#snimissingwarning. SNIMissingWarning /usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:120: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning. InsecurePlatformWarning Downloading netcdf-0.2.1.tar.gz (16.5MB) 100% |████████████████████████████████| 16.5MB 42kB/s Requirement already satisfied (use --upgrade to upgrade): numpy==1.8.0 in /usr/local/lib/python2.7/dist-packages/numpy-1.8.0-py2.7-linux-x86_64.egg (from netcdf) Collecting h5py==2.3.1 (from netcdf) /usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:120: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning. InsecurePlatformWarning Downloading h5py-2.3.1.tar.gz (1.1MB) 100% |████████████████████████████████| 1.1MB 223kB/s Collecting netCDF4==1.1.0 (from netcdf) Downloading netCDF4-1.1.0.tar.gz (562kB) 100% |████████████████████████████████| 563kB 340kB/s Complete output from command python setup.py egg_info: HDF5_DIR environment variable not set, checking some standard locations .. checking /home/elly ... checking /usr/local ... HDF5 found in /usr/local NETCDF4_DIR environment variable not set, checking standard locations.. checking /home/elly ... checking /usr/local ... checking /sw ... checking /opt ... checking /opt/local ... checking /usr ... Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-build-q2Vs7i/netCDF4/setup.py", line 232, in <module> raise ValueError('did not find netCDF version 4 headers') ValueError: did not find netCDF version 4 headers ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-q2Vs7i/netCDF4/ I searched different pages with similar issues and tried install-instructions: sudo apt-get install python-dev sudo apt-get install libevent-dev sudo apt-get indtall python-all-dev Do anyone have another idea to fix this? Thanks!
[ "I was trying to do the same and I finally found the full list :\npython3 -m pip install -U pip\npip3 install -upgrade setuptools\npip3 install netCDF4\n\nit should work now :) it worked for me (centos8, python3)\n" ]
[ 0 ]
[]
[]
[ "python", "setup.py" ]
stackoverflow_0035924115_python_setup.py.txt
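Once the install steps in the answer above succeed, a quick hedged sanity check that the netCDF4 module and its underlying C libraries work end to end; if a source build still fails, the traceback in the question shows that setup.py also honours HDF5_DIR and NETCDF4_DIR environment variables pointing at the library prefixes.

import netCDF4
import numpy as np

print("netCDF4 module version:", netCDF4.__version__)

# Write and re-read a tiny file to confirm the C libraries are usable.
with netCDF4.Dataset("check.nc", "w", format="NETCDF4") as ds:
    ds.createDimension("x", 3)
    var = ds.createVariable("values", "f4", ("x",))
    var[:] = np.array([1.0, 2.0, 3.0], dtype="f4")

with netCDF4.Dataset("check.nc") as ds:
    print("round-trip values:", ds.variables["values"][:])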
Q: Calculate average temperature/humidity between 2 dates pandas data frames I have the following data frames: df3 Harvest_date Starting_date 2022-10-06 2022-08-06 2022-02-22 2021-12-22 df (I have all temp and humid starting from 2021-01-01 till the present) date temp humid 2022-10-06 00:30:00 2 30 2022-10-06 00:01:00 1 30 2022-10-06 00:01:30 0 30 2022-10-06 00:02:00 0 30 2022-10-06 00:02:30 -2 30 I would like to calculate the avg temperature and humidity between the starting_date and harvest_date. I tried this: import pandas as pd df = pd.read_csv (r'C:\climate.csv') df3 = pd.read_csv (r'C:\Flower_weight_Seson.csv') df['date'] = pd.to_datetime(df.date) df3['Harvest_date'] = pd.to_datetime(df3.Harvest_date) df3['Starting_date'] = pd.to_datetime(df3.Starting_date) df.style.format({"date": lambda t: t.strftime("%Y-%m-%d")}) df3.style.format({"Harvest_date": lambda t: t.strftime("%Y-%m-%d")}) df3.style.format({"Starting_date": lambda t: t.strftime("%Y-%m-%d")}) for harvest_date,starting_date in zip(df3['Harvest_date'],df3['Starting_date']): df3["Season avg temp"]= df[df["date"].between(starting_date,harvest_date)]["temp"].mean() df3["Season avg humid"]= df[df["date"].between(starting_date,harvest_date)]["humid"].mean() I get the same value for all dates. Can someone point out what I did wrong, please? A: Use DataFrame.loc with match indices by means of another DataFrame: #changed data for match with df3 print (df) date temp humid 0 2022-10-06 00:30:00 2 30 1 2022-09-06 00:01:00 1 33 2 2022-09-06 00:01:30 0 23 3 2022-10-06 00:02:00 0 30 4 2022-01-06 00:02:30 -2 25 for i,harvest_date,starting_date in zip(df3.index,df3['Harvest_date'],df3['Starting_date']): mask = df["date"].between(starting_date,harvest_date) avg = df.loc[mask, ["temp",'humid']].mean() df3.loc[i, ["Season avg temp",'Season avg humid']] = avg.to_numpy() print (df3) Harvest_date Starting_date Season avg temp Season avg humid 0 2022-10-06 2022-08-06 0.5 28.0 1 2022-02-22 2021-12-220 -2.0 25.0 EDIT: For add new condition for match by room columns use: for i,harvest_date,starting_date, room in zip(df3.index, df3['Harvest_date'], df3['Starting_date'], df3['Room']): mask = df["date"].between(starting_date,harvest_date) & df['Room'].eq(room) avg = df.loc[mask, ["temp",'humid']].mean() df3.loc[i, ["Season avg temp",'Season avg humid']] = avg.to_numpy() print (df3)
Calculate average temperature/humidity between 2 dates pandas data frames
I have the following data frames: df3 Harvest_date Starting_date 2022-10-06 2022-08-06 2022-02-22 2021-12-22 df (I have all temp and humid starting from 2021-01-01 till the present) date temp humid 2022-10-06 00:30:00 2 30 2022-10-06 00:01:00 1 30 2022-10-06 00:01:30 0 30 2022-10-06 00:02:00 0 30 2022-10-06 00:02:30 -2 30 I would like to calculate the avg temperature and humidity between the starting_date and harvest_date. I tried this: import pandas as pd df = pd.read_csv (r'C:\climate.csv') df3 = pd.read_csv (r'C:\Flower_weight_Seson.csv') df['date'] = pd.to_datetime(df.date) df3['Harvest_date'] = pd.to_datetime(df3.Harvest_date) df3['Starting_date'] = pd.to_datetime(df3.Starting_date) df.style.format({"date": lambda t: t.strftime("%Y-%m-%d")}) df3.style.format({"Harvest_date": lambda t: t.strftime("%Y-%m-%d")}) df3.style.format({"Starting_date": lambda t: t.strftime("%Y-%m-%d")}) for harvest_date,starting_date in zip(df3['Harvest_date'],df3['Starting_date']): df3["Season avg temp"]= df[df["date"].between(starting_date,harvest_date)]["temp"].mean() df3["Season avg humid"]= df[df["date"].between(starting_date,harvest_date)]["humid"].mean() I get the same value for all dates. Can someone point out what I did wrong, please?
[ "Use DataFrame.loc with match indices by means of another DataFrame:\n#changed data for match with df3\nprint (df)\n date temp humid\n0 2022-10-06 00:30:00 2 30\n1 2022-09-06 00:01:00 1 33\n2 2022-09-06 00:01:30 0 23\n3 2022-10-06 00:02:00 0 30\n4 2022-01-06 00:02:30 -2 25\n\n\nfor i,harvest_date,starting_date in zip(df3.index,df3['Harvest_date'],df3['Starting_date']):\n mask = df[\"date\"].between(starting_date,harvest_date)\n avg = df.loc[mask, [\"temp\",'humid']].mean()\n df3.loc[i, [\"Season avg temp\",'Season avg humid']] = avg.to_numpy()\nprint (df3)\n Harvest_date Starting_date Season avg temp Season avg humid\n0 2022-10-06 2022-08-06 0.5 28.0\n1 2022-02-22 2021-12-220 -2.0 25.0\n\nEDIT: For add new condition for match by room columns use:\nfor i,harvest_date,starting_date, room in zip(df3.index,\n df3['Harvest_date'],\n df3['Starting_date'], df3['Room']):\n mask = df[\"date\"].between(starting_date,harvest_date) & df['Room'].eq(room)\n avg = df.loc[mask, [\"temp\",'humid']].mean()\n df3.loc[i, [\"Season avg temp\",'Season avg humid']] = avg.to_numpy()\nprint (df3)\n\n" ]
[ 2 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074640609_pandas_python.txt
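The accepted answer's loop above works; an equivalent hedged sketch expresses the same computation as a row-wise apply, which avoids the manual index bookkeeping. The sample frames below are stand-ins shaped like the question's data, not the real files.

import pandas as pd

df = pd.DataFrame({
    "date": pd.to_datetime(["2022-08-10 00:30:00", "2022-09-01 12:00:00", "2022-01-05 08:00:00"]),
    "temp": [20.0, 18.0, -2.0],
    "humid": [30, 40, 25],
})
df3 = pd.DataFrame({
    "Starting_date": pd.to_datetime(["2022-08-06", "2021-12-22"]),
    "Harvest_date": pd.to_datetime(["2022-10-06", "2022-02-22"]),
})

def season_means(row):
    # boolean mask selecting climate rows that fall inside this season's window
    mask = df["date"].between(row["Starting_date"], row["Harvest_date"])
    return df.loc[mask, ["temp", "humid"]].mean()

df3[["Season avg temp", "Season avg humid"]] = df3.apply(season_means, axis=1).to_numpy()
print(df3)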
Q: Doit in Python not simplifying derivatives when re/im operator included This is a continuation on my previous question Switch order of differential and real operator in expression in Python. I would like to simplify the following derivatives in Python where u and v are independent (Sympy) complex variables. The derivatives and the re operator are commutative here, and hence by switching their order it automatically follows that their values are 1 and 0 respectively. My code will not simplify the above expressions by itself (it will print them as are), and so I was advised in my previous question to use doit, i.e[Differential].doit(). However this only seems to simplify the expression if we are taking the differential with respect to a variable (say some complex variable k) which never appears in the function we are differentiating, i.e doit will simplify but not even though both are clearly zero. Note that there are no issues if the re operator is removed. Any suggestions on how to come around this problem with doit, or any other similar method? A: You have two complex variables: In [1]: u, v = symbols('u, v') In [2]: diff(re(u*v), u, v) Out[2]: 2 ∂ ─────(re(u⋅v)) ∂v ∂u Here you just get an unevaluated Derivative back. That is SymPy's way of saying that it can't compute an expression for this derivative. Now what should the derivative actually be? You said "the derivatives and the re operator are commutative here" but that isn't true because the derivatives don't exist: the real part function re is not complex differentiable. As for the other point: In [3]: diff(re(u), v) Out[3]: 0 In [4]: diff(re(u), v, v) Out[4]: 0 In [5]: diff(re(u), u, v) Out[5]: 0 In [6]: diff(re(u), v, u) Out[6]: 0 In [7]: diff(re(u), u, u) Out[7]: 2 d ───(re(u)) 2 du These are all zero as expected apart from the last one which is undefined because re is not complex differentiable: https://math.stackexchange.com/questions/1789582/is-the-complex-function-fz-rez-differentiable
Doit in Python not simplifying derivatives when re/im operator included
This is a continuation on my previous question Switch order of differential and real operator in expression in Python. I would like to simplify the following derivatives in Python where u and v are independent (Sympy) complex variables. The derivatives and the re operator are commutative here, and hence by switching their order it automatically follows that their values are 1 and 0 respectively. My code will not simplify the above expressions by itself (it will print them as are), and so I was advised in my previous question to use doit, i.e[Differential].doit(). However this only seems to simplify the expression if we are taking the differential with respect to a variable (say some complex variable k) which never appears in the function we are differentiating, i.e doit will simplify but not even though both are clearly zero. Note that there are no issues if the re operator is removed. Any suggestions on how to come around this problem with doit, or any other similar method?
[ "You have two complex variables:\nIn [1]: u, v = symbols('u, v')\n\nIn [2]: diff(re(u*v), u, v)\nOut[2]: \n 2 \n ∂ \n─────(re(u⋅v))\n∂v ∂u \n\nHere you just get an unevaluated Derivative back. That is SymPy's way of saying that it can't compute an expression for this derivative.\nNow what should the derivative actually be?\nYou said \"the derivatives and the re operator are commutative here\" but that isn't true because the derivatives don't exist: the real part function re is not complex differentiable.\nAs for the other point:\nIn [3]: diff(re(u), v)\nOut[3]: 0\n\nIn [4]: diff(re(u), v, v)\nOut[4]: 0\n\nIn [5]: diff(re(u), u, v)\nOut[5]: 0\n\nIn [6]: diff(re(u), v, u)\nOut[6]: 0\n\nIn [7]: diff(re(u), u, u)\nOut[7]: \n 2 \n d \n───(re(u))\n 2 \ndu \n\nThese are all zero as expected apart from the last one which is undefined because re is not complex differentiable:\nhttps://math.stackexchange.com/questions/1789582/is-the-complex-function-fz-rez-differentiable\n" ]
[ 2 ]
[]
[]
[ "python", "sympy" ]
stackoverflow_0074639164_python_sympy.txt
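If the goal is to make the real-part derivatives above come out explicitly rather than stay unevaluated, one common workaround (a hedged sketch, not part of the answer) is to introduce real components for each complex variable so that re() reduces to an ordinary polynomial before differentiating:

from sympy import symbols, re, I, diff, expand

u_r, u_i, v_r, v_i = symbols("u_r u_i v_r v_i", real=True)
u = u_r + I * u_i
v = v_r + I * v_i

expr = expand(re(u * v), complex=True)   # u_r*v_r - u_i*v_i
print(expr)
print(diff(expr, u_r, v_r))              # 1
print(diff(expr, u_r, v_i))              # 0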
Q: Instantiate empty type-hinted list I have the following code: from typing import List, NewType MultiList = NewType("MultiList", List[List[int]]) def myfunc(): multi: MultiList = [] # More stuff here The code works fine, it's just my IDE (PyCharm) doesn't like the instantiation of multi to an empty list, I get this error: "Expected type 'MultiList', got 'list[list[int]]' instead" I mean, a MultiList is a list[list[int]], so I really don't know why it's complaining. Unless it's because the list is empty, but that doesn't make a lot of sense to me either. It's not the end of the world, the code works just fine, I'd just like to know why it's marked as wrong. A: The entire purpose of the typing.NewType is to distinguish a type from another and treat it as a subtype. You annotate it as MultiList, which is treated as a subtype of list, so assigning a list instance is wrong. You need to instantiate MultiList properly. Example: def myfunc(): multi: MultiList = MultiList([]) If you wanted a type alias instead, then you should not use NewType, but TypeAlias instead: from typing import TypeAlias MultiList: TypeAlias = list[list[int]] def myfunc(): multi: MultiList = [] PS: Unless you are using an outdated Python version, you should use the standard generic alias types, i.e. list instead of typing.List. A: If you just mean "list of lists of ints", then don't use NewType: MultiList = List[List[int]] NewType is for if you don't want arbitrary lists of lists of ints to be treated as a MultiList. With NewType, static type checkers will only treat an object as a MultiList if you explicitly call MultiList: MultiList = NewType("MultiList", List[List[int]]) multi: MultiList = MultiList([])
Instantiate empty type-hinted list
I have the following code: from typing import List, NewType MultiList = NewType("MultiList", List[List[int]]) def myfunc(): multi: MultiList = [] # More stuff here The code works fine, it's just my IDE (PyCharm) doesn't like the instantiation of multi to an empty list, I get this error: "Expected type 'MultiList', got 'list[list[int]]' instead" I mean, a MultiList is a list[list[int]], so I really don't know why it's complaining. Unless it's because the list is empty, but that doesn't make a lot of sense to me either. It's not the end of the world, the code works just fine, I'd just like to know why it's marked as wrong.
[ "The entire purpose of the typing.NewType is to distinguish a type from another and treat it as a subtype. You annotate it as MultiList, which is treated as a subtype of list, so assigning a list instance is wrong.\nYou need to instantiate MultiList properly. Example:\ndef myfunc():\n multi: MultiList = MultiList([])\n\nIf you wanted a type alias instead, then you should not use NewType, but TypeAlias instead:\nfrom typing import TypeAlias\n\nMultiList: TypeAlias = list[list[int]]\n\ndef myfunc():\n multi: MultiList = []\n\nPS: Unless you are using an outdated Python version, you should use the standard generic alias types, i.e. list instead of typing.List.\n", "If you just mean \"list of lists of ints\", then don't use NewType:\nMultiList = List[List[int]]\n\nNewType is for if you don't want arbitrary lists of lists of ints to be treated as a MultiList. With NewType, static type checkers will only treat an object as a MultiList if you explicitly call MultiList:\nMultiList = NewType(\"MultiList\", List[List[int]])\nmulti: MultiList = MultiList([])\n\n" ]
[ 2, 1 ]
[]
[]
[ "python", "python_typing", "type_hinting" ]
stackoverflow_0074640651_python_python_typing_type_hinting.txt
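One runtime detail that complements both answers above: NewType is erased at runtime, so MultiList(raw) simply returns the same list object and costs nothing; the distinction only exists for static type checkers. A small hedged sketch:

from typing import List, NewType

MultiList = NewType("MultiList", List[List[int]])

raw: List[List[int]] = []
wrapped = MultiList(raw)

print(wrapped is raw)       # True: no new object is created
print(type(wrapped))        # <class 'list'>
wrapped.append([1, 2, 3])   # behaves exactly like the underlying list
print(raw)                  # [[1, 2, 3]]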
Q: How to use strip to substring in dataframe? I have a dataset with 100,000 rows and 300 columns Here is the sample dataset: EVENT_DTL 0 8. Background : no job / living with marriage_virgin 9. Social status : doing pretty well with his family 1 8. Background : Engineer / living with his mom marriage_married How can I remove the white blank between ‘with’ and ‘marriage_virgin’ but leave only one white blank? Desired outout would be: EVENT_DTL 0 8. Background : no job / living with marriage_virgin 9. Social status : doing pretty well with his family 1 8. Background : Engineer / living with his mom marriage_married A: You can use pandas.Series.str to replace "\s+" (1 or more whitespace) by a single whitespace. Try this : df["EVENT_DTL"]= df["EVENT_DTL"].str.replace("\s+", " ", regex=True) Output : print(df) EVENT_DTL 0 8. Background : no job / living with marriage_virgin 9. Social status : doing pretty well with his family 1 8. Background : Engineer / living with his mom marriage_married If you need to clean up the whole dataframe, use pandas.DataFrame.replace : df.astype(str).replace("\s+", " ", regex=True, inplace=True) A: You can call string methods for a DataFrame column with df["EVENT_DTL"].str.strip() but .strip() doesn't work, because it only removed extra characters from the start and end of the string. To remove all duplicate whitespaces you can use regex: import re import pandas as pd d = {"EVENT_DTL": [ "8. Background : no job / living with marriage_virgin 9. Social status : doing pretty well with his family", "8. Background : Engineer / living with his mom marriage_married" ]} df = pd.DataFrame(d) pattern = re.compile(" +") df["EVENT_DTL"] = df["EVENT_DTL"].apply(lambda x: pattern.sub(" ", x)) print(df["EVENT_DTL"][0])
How to use strip to substring in dataframe?
I have a dataset with 100,000 rows and 300 columns Here is the sample dataset: EVENT_DTL 0 8. Background : no job / living with marriage_virgin 9. Social status : doing pretty well with his family 1 8. Background : Engineer / living with his mom marriage_married How can I remove the white blank between ‘with’ and ‘marriage_virgin’ but leave only one white blank? Desired outout would be: EVENT_DTL 0 8. Background : no job / living with marriage_virgin 9. Social status : doing pretty well with his family 1 8. Background : Engineer / living with his mom marriage_married
[ "You can use pandas.Series.str to replace \"\\s+\" (1 or more whitespace) by a single whitespace.\nTry this :\ndf[\"EVENT_DTL\"]= df[\"EVENT_DTL\"].str.replace(\"\\s+\", \" \", regex=True)\n\nOutput :\nprint(df)\n EVENT_DTL\n0 8. Background : no job / living with marriage_virgin 9. Social status : doing pretty well with his family\n1 8. Background : Engineer / living with his mom marriage_married\n\nIf you need to clean up the whole dataframe, use pandas.DataFrame.replace :\ndf.astype(str).replace(\"\\s+\", \" \", regex=True, inplace=True)\n\n", "You can call string methods for a DataFrame column with\ndf[\"EVENT_DTL\"].str.strip()\n\nbut .strip() doesn't work, because it only removed extra characters from the start and end of the string. To remove all duplicate whitespaces you can use regex:\nimport re\nimport pandas as pd\n\nd = {\"EVENT_DTL\": [\n \"8. Background : no job / living with marriage_virgin 9. Social status : doing pretty well with his family\",\n \"8. Background : Engineer / living with his mom marriage_married\"\n]}\ndf = pd.DataFrame(d)\npattern = re.compile(\" +\")\ndf[\"EVENT_DTL\"] = df[\"EVENT_DTL\"].apply(lambda x: pattern.sub(\" \", x))\nprint(df[\"EVENT_DTL\"][0])\n\n" ]
[ 4, 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074640495_pandas_python.txt
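Since the frame in the question has around 300 columns, it may be easier to apply the same whitespace collapse to every string column at once rather than one column at a time; a hedged sketch using select_dtypes with made-up sample data:

import pandas as pd

df = pd.DataFrame({
    "EVENT_DTL": ["living with      marriage_virgin", "doing pretty   well with his family"],
    "OTHER_TEXT": ["a   b", "c  d"],
    "NUMERIC": [1, 2],
})

# collapse runs of whitespace and trim the ends in every object (string) column
text_cols = df.select_dtypes(include="object").columns
df[text_cols] = df[text_cols].apply(lambda s: s.str.replace(r"\s+", " ", regex=True).str.strip())
print(df)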
Q: Getting unique values from csv file, output to new file I am trying to get the unique values from a csv file. Here's an example of the file: 12,life,car,good,exellent 10,gift,truck,great,great 11,time,car,great,perfect The desired output in the new file is this: 12,10,11 life,gift,time car,truck good.great excellent,great,perfect Here is my code: def attribute_values(in_file, out_file): fname = open(in_file) fout = open(out_file, 'w') # get the header line header = fname.readline() # get the attribute names attrs = header.strip().split(',') # get the distinct values for each attribute values = [] for i in range(len(attrs)): values.append(set()) # read the data for line in fname: cols = line.strip().split(',') for i in range(len(attrs)): values[i].add(cols[i]) # write the distinct values to the file for i in range(len(attrs)): fout.write(attrs[i] + ',' + ','.join(list(values[i])) + '\n') fout.close() fname.close() The code currently outputs this: 12,10 life,gift car,truck good,great exellent,great 12,10,11 life,gift,time car,car,truck good,great exellent,great,perfect How can I fix this? A: You could try to use zip to iterate over the columns of the input file, and then eliminate the duplicates: import csv def attribute_values(in_file, out_file): with open(in_file, "r") as fin, open(out_file, "w") as fout: for column in zip(*csv.reader(fin)): items, row = set(), [] for item in column: if item not in items: items.add(item) row.append(item) fout.write(",".join(row) + "\n") Result for the example file: 12,10,11 life,gift,time car,truck good,great exellent,great,perfect
Getting unique values from csv file, output to new file
I am trying to get the unique values from a csv file. Here's an example of the file: 12,life,car,good,exellent 10,gift,truck,great,great 11,time,car,great,perfect The desired output in the new file is this: 12,10,11 life,gift,time car,truck good.great excellent,great,perfect Here is my code: def attribute_values(in_file, out_file): fname = open(in_file) fout = open(out_file, 'w') # get the header line header = fname.readline() # get the attribute names attrs = header.strip().split(',') # get the distinct values for each attribute values = [] for i in range(len(attrs)): values.append(set()) # read the data for line in fname: cols = line.strip().split(',') for i in range(len(attrs)): values[i].add(cols[i]) # write the distinct values to the file for i in range(len(attrs)): fout.write(attrs[i] + ',' + ','.join(list(values[i])) + '\n') fout.close() fname.close() The code currently outputs this: 12,10 life,gift car,truck good,great exellent,great 12,10,11 life,gift,time car,car,truck good,great exellent,great,perfect How can I fix this?
[ "You could try to use zip to iterate over the columns of the input file, and then eliminate the duplicates:\nimport csv\n\ndef attribute_values(in_file, out_file):\n with open(in_file, \"r\") as fin, open(out_file, \"w\") as fout:\n for column in zip(*csv.reader(fin)):\n items, row = set(), []\n for item in column:\n if item not in items:\n items.add(item)\n row.append(item)\n fout.write(\",\".join(row) + \"\\n\")\n\nResult for the example file:\n12,10,11\nlife,gift,time\ncar,truck\ngood,great\nexellent,great,perfect\n\n" ]
[ 1 ]
[]
[]
[ "csv", "python", "unique" ]
stackoverflow_0074637028_csv_python_unique.txt
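A slightly shorter variant of the answer's order-preserving de-duplication above relies on dict.fromkeys, since dictionaries keep insertion order in Python 3.7+; a hedged sketch:

import csv

def attribute_values(in_file, out_file):
    with open(in_file, newline="") as fin, open(out_file, "w", newline="") as fout:
        writer = csv.writer(fout)
        for column in zip(*csv.reader(fin)):
            # dict.fromkeys drops duplicates while keeping first-seen order
            writer.writerow(list(dict.fromkeys(column)))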
Q: How to bypass the validation popup on dell support page when submiting the search string using Python and SeleniumWebdriver I'm trying to automate a task where I need to input service tags in the dell support page and extract the laptop information. But sometimes when I try to submit, the webpage shows a validation pop-up and it has a waiting time of 30 seconds Does anyone have any suggestions on how to bypass this validation? Here is what I tried. url = 'https://www.dell.com/support/home/en-in' driver = webdriver.Chrome() for service_tag in ['JX0JL13', '20M11J3', 'BH7C3M2', '6MYH5S2']: driver.get(url) input_element = driver.find_element(by=By.XPATH, value='//*[@id="inpEntrySelection"]') input_element.send_keys(service_tag) driver.find_element(by=By.XPATH, value='//*[@id="btn-entry-select"]').click() model = driver.find_element(by=By.XPATH, value='//*[@id="site-wrapper"]/div/div[4]/div[1]/div[2]/div[1]/div[2]/div/div/div/div[2]/h1').text print(model) A: You are seeing the validation pop-up with a waiting time of 30 seconds possibly as the ChromeDriver initiated Chrome Browser browsing context is geting detected as a bot. Solution A potential solution would be to keep the ChromeDriver synchronized with the google-chrome browsing context inducing WebDriverWait for the desired clickable elements to turn element_to_be_clickable() and you can use the following locator strategies: Code block: driver = webdriver.Chrome(service=s, options=options) url = 'https://www.dell.com/support/home/en-in' for service_tag in ['JX0JL13', '20M11J3', 'BH7C3M2', '6MYH5S2']: driver.get(url) WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, '//input[@id="inpEntrySelection"]'))).send_keys(service_tag) WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, '//button[@id="btn-entry-select"]//span'))).click() print(WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.CSS_SELECTOR, "div.product-support-hero div.product-summary h1"))).text) Console output: Latitude 7400 Latitude 7420 Latitude 7480 Latitude 7490 A: Ran into a very similar issue of Popup occurring every few searches. Successful way that I found was by using the other search bar, at the top of the page. Have been able to run few hundred serial Numbers without a popup occurring once; have a feeling its trigger is linked to the main search bar. After the search you can scrape for whatever data you want and then I recommend reloading the landing page and going and doing your next serial number. I've also added a couple** addition option features (using a chrome profile 1 or 2). Might encounter an Access denied page a couple times, but just reload program w/ a different profile does the trick. ** total list is 9 arguments, including loading user_agents for every run - may need to import a couple things like keys and userAgent for all this to work but try first 5 lines of code n see how you go. 
#iterate through using this driver.find_element(By.XPATH, "//*[id='md-search-input']").send_keys(serial_num_iteration + Keys.RETURN) #Scrape the data you want here time.sleep(3) driver.get(url) #Adds profile to driver options.add_argument(r"--user-data-dir=C:\User\userprofile\AppData\Local\Google\Chrome\User Data") options.add_argument(r"--profile-directory=Profile 2") #Adds extra bits to driver options.add_argument("windows-size 1400, 900") options.add_argument(f"user-agent={user_agent}") options.add_argument("--disabe-blink-features=AutomationControlled") options.add_argument("disable-infobars") options.add_experimental_option("useAutomationExtension", False) options.add_argument("--no-sandbox")
How to bypass the validation popup on dell support page when submiting the search string using Python and SeleniumWebdriver
I'm trying to automate a task where I need to input service tags in the dell support page and extract the laptop information. But sometimes when I try to submit, the webpage shows a validation pop-up and it has a waiting time of 30 seconds Does anyone have any suggestions on how to bypass this validation? Here is what I tried. url = 'https://www.dell.com/support/home/en-in' driver = webdriver.Chrome() for service_tag in ['JX0JL13', '20M11J3', 'BH7C3M2', '6MYH5S2']: driver.get(url) input_element = driver.find_element(by=By.XPATH, value='//*[@id="inpEntrySelection"]') input_element.send_keys(service_tag) driver.find_element(by=By.XPATH, value='//*[@id="btn-entry-select"]').click() model = driver.find_element(by=By.XPATH, value='//*[@id="site-wrapper"]/div/div[4]/div[1]/div[2]/div[1]/div[2]/div/div/div/div[2]/h1').text print(model)
[ "You are seeing the validation pop-up with a waiting time of 30 seconds\n\npossibly as the ChromeDriver initiated Chrome Browser browsing context is geting detected as a bot.\n\nSolution\nA potential solution would be to keep the ChromeDriver synchronized with the google-chrome browsing context inducing WebDriverWait for the desired clickable elements to turn element_to_be_clickable() and you can use the following locator strategies:\n\nCode block:\ndriver = webdriver.Chrome(service=s, options=options)\nurl = 'https://www.dell.com/support/home/en-in'\nfor service_tag in ['JX0JL13', '20M11J3', 'BH7C3M2', '6MYH5S2']:\n driver.get(url)\n WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, '//input[@id=\"inpEntrySelection\"]'))).send_keys(service_tag)\n WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, '//button[@id=\"btn-entry-select\"]//span'))).click()\n print(WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.CSS_SELECTOR, \"div.product-support-hero div.product-summary h1\"))).text)\n\n\nConsole output:\nLatitude 7400\nLatitude 7420\nLatitude 7480\nLatitude 7490\n\n\n\n", "Ran into a very similar issue of Popup occurring every few searches.\nSuccessful way that I found was by using the other search bar, at the top of the page. Have been able to run few hundred serial Numbers without a popup occurring once; have a feeling its trigger is linked to the main search bar.\nAfter the search you can scrape for whatever data you want and then I recommend reloading the landing page and going and doing your next serial number.\n\nI've also added a couple** addition option features (using a chrome profile 1 or 2). Might encounter an Access denied page a couple times, but just reload program w/ a different profile does the trick.\n** total list is 9 arguments, including loading user_agents for every run - may need to import a couple things like keys and userAgent for all this to work but try first 5 lines of code n see how you go.\n#iterate through using this\ndriver.find_element(By.XPATH, \"//*[id='md-search-input']\").send_keys(serial_num_iteration + Keys.RETURN)\n#Scrape the data you want here\ntime.sleep(3)\ndriver.get(url)\n\n#Adds profile to driver\noptions.add_argument(r\"--user-data-dir=C:\\User\\userprofile\\AppData\\Local\\Google\\Chrome\\User Data\")\noptions.add_argument(r\"--profile-directory=Profile 2\")\n\n#Adds extra bits to driver\noptions.add_argument(\"windows-size 1400, 900\")\noptions.add_argument(f\"user-agent={user_agent}\")\noptions.add_argument(\"--disabe-blink-features=AutomationControlled\")\noptions.add_argument(\"disable-infobars\")\noptions.add_experimental_option(\"useAutomationExtension\", False)\noptions.add_argument(\"--no-sandbox\")\n\n" ]
[ 0, 0 ]
[]
[]
[ "google_chrome", "python", "selenium", "selenium_chromedriver", "selenium_webdriver" ]
stackoverflow_0073393336_google_chrome_python_selenium_selenium_chromedriver_selenium_webdriver.txt
Q: Can't connect to Azure SQL DB - pyodbc Operational Error Attempting to connect to an Azure SQL database, most other configurations I've tried result in an instant error: Client unable to establish connection (0) (SQLDriverConnect)') But this one I'm currently trying times out and is the closest I've got: cnxn = pyodbc.connect(driver='{ODBC Driver 17 for SQL Server}', server='tcp:mydatabase.database.windows.net:1433', database='MyDatabase', user='username', password='password') pyodbc.OperationalError: ('HYT00', '[HYT00] [Microsoft][ODBC Driver 17 for SQL Server]Login timeout expired (0) (SQLDriverConnect)') A: There is no requirement for the connection string to include the default SQL Server port, which is 1433. Add server name without tcp, use tcp: before quotes of servername, Server name format is servername.database.windows.net Example String pyodbc.connect('DRIVER='+driver+';SERVER=tcp:'+server+';DATABASE='+database+';UID='+username+';PWD='+ password) A: The port defined on the server attribute is separated by comma and not colon. Driver={ODBC Driver 1x for SQL Server};Server=[tcp]:{serverName}.database.windows.net[,1433]; (...) You can check a sample of the connection string on the connections string blade on your database page on Azure Portal.
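Putting the second answer's point into the original call, here is a hedged sketch of the corrected connection string - the server, database, and credentials are the question's placeholders, and the Encrypt/timeout options are the usual Azure Portal defaults rather than something from the question:

import pyodbc

conn_str = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:mydatabase.database.windows.net,1433;"   # comma before the port, not a colon
    "Database=MyDatabase;"
    "Uid=username;"
    "Pwd=password;"
    "Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;"
)
cnxn = pyodbc.connect(conn_str)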
Can't connect to Azure SQL DB - pyodbc Operational Error
Attempting to connect to an Azure SQL database, most other configurations I've tried result in an instant error: Client unable to establish connection (0) (SQLDriverConnect)') But this one I'm currently trying times out and is the closest I've got: cnxn = pyodbc.connect(driver='{ODBC Driver 17 for SQL Server}', server='tcp:mydatabase.database.windows.net:1433', database='MyDatabase', user='username', password='password') pyodbc.OperationalError: ('HYT00', '[HYT00] [Microsoft][ODBC Driver 17 for SQL Server]Login timeout expired (0) (SQLDriverConnect)')
[ "There is no requirement for the connection string to include the default SQL Server port, which is 1433.\nAdd server name without tcp, use tcp: before quotes of servername, Server name format is servername.database.windows.net\nExample String\npyodbc.connect('DRIVER='+driver+';SERVER=tcp:'+server+';DATABASE='+database+';UID='+username+';PWD='+ password)\n\n\n", "The port defined on the server attribute is separated by comma and not colon.\nDriver={ODBC Driver 1x for SQL Server};Server=[tcp]:{serverName}.database.windows.net[,1433]; (...)\n\nYou can check a sample of the connection string on the connections string blade on your database page on Azure Portal.\n" ]
[ 0, 0 ]
[]
[]
[ "azure_sql_database", "azure_sql_server", "pyodbc", "python", "sql_server" ]
stackoverflow_0072961737_azure_sql_database_azure_sql_server_pyodbc_python_sql_server.txt
Q: Rolling count specific number Suppose I have a dataframe "df" like this:
Time                 Group  Data
2022-10-01 00:05:00  A      0
2022-10-01 00:10:00  A      0
2022-10-01 00:15:00  A      1
2022-10-01 00:20:00  A      1
2022-10-01 00:25:00  A      1
2022-10-01 00:30:00  A      0
2022-10-01 00:35:00  A      1
2022-10-01 00:40:00  A      0
2022-10-01 00:05:00  B      11
2022-10-01 00:10:00  B      0
2022-10-01 00:15:00  B      12
2022-10-01 00:20:00  B      13
2022-10-01 00:25:00  B      0
2022-10-01 00:30:00  B      0
2022-10-01 00:35:00  B      15
2022-10-01 00:40:00  B      16
Assume that I have already sorted the data by Group and Time. I would like to count the occurrences of 0 in the previous 15 minutes, including the current row. The result should look like:
Time                 Group  Data  Count_0_last_15_min
2022-10-01 00:05:00  A      0     1
2022-10-01 00:10:00  A      0     2
2022-10-01 00:15:00  A      1     2
2022-10-01 00:20:00  A      1     1
2022-10-01 00:25:00  A      1     0
2022-10-01 00:30:00  A      0     1
2022-10-01 00:35:00  A      1     1
2022-10-01 00:40:00  A      0     2
2022-10-01 00:05:00  B      11    0
2022-10-01 00:10:00  B      0     1
2022-10-01 00:15:00  B      12    1
2022-10-01 00:20:00  B      13    1
2022-10-01 00:25:00  B      0     1
2022-10-01 00:30:00  B      0     2
2022-10-01 00:35:00  B      0     3
2022-10-01 00:40:00  B      16    2
Currently I am trying to use rolling to reach back over the previous records:
df.groupby('Group')['Data'].rolling(3, min_periods=1)
However, I am stuck after the rolling part on counting only the "0" occurrences (I did try .eq(0).sum(), but it cannot be applied to the rolling() result). Is there another way to group by and do a rolling count for this?
A: You can use:
(df.assign(Count_0_last_15_min=df['Data'].eq(0))
 .groupby('Group', as_index=False)
 .rolling(3, min_periods=1)['Count_0_last_15_min'].sum()
 .droplevel(0)
)
Output:
0     1.0
1     2.0
2     2.0
3     1.0
4     0.0
5     1.0
6     1.0
7     2.0
8     0.0
9     1.0
10    1.0
11    1.0
12    1.0
13    2.0
14    2.0
15    1.0
Name: Count_0_last_15_min, dtype: float64
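For a quick end-to-end check, here is a small self-contained sketch of the same idea as the answer (flag the zeros, then take a grouped rolling sum). The toy frame below is trimmed to a few rows, and the 3-row window stands in for "last 15 minutes" only because the sample data is spaced 5 minutes apart:

import pandas as pd

df = pd.DataFrame({
    "Group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "Data":  [0, 0, 1, 0, 11, 0, 12, 0],
})

# Flag zeros, then sum the flags over a per-group rolling window of 3 rows.
is_zero = df["Data"].eq(0).astype(int)
df["Count_0_last_15_min"] = (
    is_zero.groupby(df["Group"])
           .rolling(3, min_periods=1)
           .sum()
           .droplevel(0)   # drop the group level so the result realigns with df's index
)
print(df)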
Rolling count specific number
Suppose I have dataframe "df" like this Time Group Data 2022-10-01 00:05:00 A 0 2022-10-01 00:10:00 A 0 2022-10-01 00:15:00 A 1 2022-10-01 00:20:00 A 1 2022-10-01 00:25:00 A 1 2022-10-01 00:30:00 A 0 2022-10-01 00:35:00 A 1 2022-10-01 00:40:00 A 0 2022-10-01 00:05:00 B 11 2022-10-01 00:10:00 B 0 2022-10-01 00:15:00 B 12 2022-10-01 00:20:00 B 13 2022-10-01 00:25:00 B 0 2022-10-01 00:30:00 B 0 2022-10-01 00:35:00 B 15 2022-10-01 00:40:00 B 16 Assume That I already sort out data by Group and Time already I would love to count occurence of 0 in previous 15 minutes include itself Which result should be like Time Group Data Count_0_last_15_min 2022-10-01 00:05:00 A 0 1 2022-10-01 00:10:00 A 0 2 2022-10-01 00:15:00 A 1 2 2022-10-01 00:20:00 A 1 1 2022-10-01 00:25:00 A 1 0 2022-10-01 00:30:00 A 0 1 2022-10-01 00:35:00 A 1 1 2022-10-01 00:40:00 A 0 2 2022-10-01 00:05:00 B 11 0 2022-10-01 00:10:00 B 0 1 2022-10-01 00:15:00 B 12 1 2022-10-01 00:20:00 B 13 1 2022-10-01 00:25:00 B 0 1 2022-10-01 00:30:00 B 0 2 2022-10-01 00:35:00 B 0 3 2022-10-01 00:40:00 B 16 2 currently I try to use rolling to get data from each from previous record df.groupby('Group')['Data'].rolling(3,min_periods=1) however I stuck after rolling part to only counting "0" occurrence ( I did try .eq(0).sum() but it can't apply with rolling() method is there another method to group by and rolling count for this solution?
[ "You can use:\n(df.assign(Count_0_last_15_min=df['Data'].eq(0))\n .groupby('Group', as_index=False)\n .rolling(3, min_periods=1)['Count_0_last_15_min'].sum()\n .droplevel(0)\n)\n\nOutput:\n0 1.0\n1 2.0\n2 2.0\n3 1.0\n4 0.0\n5 1.0\n6 1.0\n7 2.0\n8 0.0\n9 1.0\n10 1.0\n11 1.0\n12 1.0\n13 2.0\n14 2.0\n15 1.0\nName: Count_0_last_15_min, dtype: float64\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074640705_pandas_python.txt
Q: Python - using TSV files to find averages I am working on a lab project for school and am completely lost on how to do it. The following code is as far i have gotten after watching a few hours of python videos and googling for info on how to do this. The instructions are as follows: Write a program that reads the student information from a tab seperated values (tsv) file. The program then creates a text file that records the course grades of the students. Each row of the tsv file contains the Last Name, First Name, Midterm1 score, Midterm2 score, and the Final score of a student. A sample of the student information is provided in StudentInfo.tsv. Assume the number of students is at least 1 and at most 20. Assume also the last names and first names do not contain whitespaces. The program performs the following tasks: Read the file name of the tsv file from the user Open the tsv file and read the student information. Compute the average exam score of each student. Assign a letter grade to each student based on the average exam score in the following scale: A: 90 =< x B: 80 =< x < 90 C: 70 =< x < 80 D: 60 =< x < 70 F: x < 60 Compute the average of each exam. Output the last names, first names, exam scores, and letter grades of the students into a text file named report.txt. Output one student per row and seperate the values with a tab character. Output the average of each exam, with two digits after the decimal point, at the end of report.txt. It comes with a downloadable files named StudentInfo.tsv. The contents of which are just: Barrett Edan 70 45 59 Bradshaw Reagan 96 97 88 Charlton Caiuss 73 94 80 Mayo Tyrese 88 61 36 Stern Brenda 90 86 45 I am going about the assignment by making a dictionary and adding the items from the tsv file to the dictionary in key/value pairs. I was wondering if A) what i had so far seemed okay, and B) how would i go about calling forth individual key/values to find the average? For example, I have all the midterm1 scores (the first score in each user's row) assigned to the key "midterm1" in the dictionary and was wondering how i could call just those keys for each user to find the average? I have only worked with dictionaries where you would utilize every key and not just specific ones. I know my code will need some formatting arguments put in place to get the output right, but for now, all i have is: import csv user_file = input() students = {} with open("{user_file}") as file: # Unsure if this is how you use a variable to open a file reader = csv.DictReader(file) for row in reader: students.append({"last_name": row[0], "first_name": row[1], "midterm1": row[2], "midterm2": row[3], "final": row[4]}) for student in sorted(students, key = lambda student: student["last_name"]): student_avg = ((int(students["midterm1"]) + int(students["midterms2"] + int(students{"final"])) / 3) if student_avg >= 90: student_grade == "A" elif 80 <= student_avg < 90: student_grade == "B" elif 70 <= student_avg < 80: student_grade == "C" elif 60 <= student_avg < 70: student_grade == "D" elif student_avg < 60: student_grade == "F" students.update({"letter_grade": student_grade}) # hoping this adds a new key/value to the list midterm1_avg = 0 midterm2_avg = 0 final_avg = 0 # Time to calculate the averages for "midterm1", value in students(): # this is where I am stuck. I don't even know if this will work or not. I appreciate anyone taking the time to read this in advance. I am about 3-4 weeks into my python journey and this site has been invaluable. 
If you see anything I am doing that could be written in a shorter format, i am all ears. For this class we don't use too many of the import features of python outside of the math module and csv. But i am always interested in seeing better real world examples of how it should be coded. A: You define students as a dict and then try to use method "append", no can do. Dict has no such method. Define it as a list students = [] You forgot the "f" for f-string with open(f"{user_file}") as file: You have to indent the line with "append" (so that it's executed IN the for loop) When calculating student_avg you have to reference student (your current element in loop), not studentS (the whole list of dicts) When you iterate over a list of students your student is a dict So you update the entry like this: student["letter_grade"] = student_grade Regarding the averages you can accumulate them in the same loop where you assign letter grades (assign 0 to them before the loop): midterm1_average += student.get("midterm1", 0) midterm2_average += student.get("midterm2", 0) final_average += student.get("final", 0) And then after the loop just divide by the number of students: number_of_students = len(students) midterm1_average /= number_of_students midterm2_average /= number_of_students final_average /= number_of_students In general you are pretty close and the code is decent. Try to visualize your data structures, maybe even print them (what is students, what is a particular student when you loop over them and so on). Also run your code. It would fail multiple times and it's faster to write and debug simultaneously, than write everything and debug later.
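Pulling the answer's individual fixes together, here is a sketch of how the whole script could look. It assumes StudentInfo.tsv has no header row (as in the sample) and the exact column layout of report.txt is an assumption, since the assignment only requires tab-separated values and two-decimal exam averages at the end:

import csv

user_file = input()
students = []

with open(user_file) as f:
    reader = csv.reader(f, delimiter="\t")
    for row in reader:
        students.append({
            "last_name": row[0],
            "first_name": row[1],
            "midterm1": int(row[2]),
            "midterm2": int(row[3]),
            "final": int(row[4]),
        })

def letter(avg):
    if avg >= 90: return "A"
    if avg >= 80: return "B"
    if avg >= 70: return "C"
    if avg >= 60: return "D"
    return "F"

m1_sum = m2_sum = final_sum = 0
with open("report.txt", "w") as out:
    for s in sorted(students, key=lambda s: s["last_name"]):
        avg = (s["midterm1"] + s["midterm2"] + s["final"]) / 3
        out.write(f'{s["last_name"]}\t{s["first_name"]}\t{s["midterm1"]}\t'
                  f'{s["midterm2"]}\t{s["final"]}\t{letter(avg)}\n')
        m1_sum += s["midterm1"]
        m2_sum += s["midterm2"]
        final_sum += s["final"]
    n = len(students)
    # Averages of each exam, two decimals, at the end of the report.
    out.write(f"{m1_sum/n:.2f}\t{m2_sum/n:.2f}\t{final_sum/n:.2f}\n")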
Python - using TSV files to find averages
I am working on a lab project for school and am completely lost on how to do it. The following code is as far i have gotten after watching a few hours of python videos and googling for info on how to do this. The instructions are as follows: Write a program that reads the student information from a tab seperated values (tsv) file. The program then creates a text file that records the course grades of the students. Each row of the tsv file contains the Last Name, First Name, Midterm1 score, Midterm2 score, and the Final score of a student. A sample of the student information is provided in StudentInfo.tsv. Assume the number of students is at least 1 and at most 20. Assume also the last names and first names do not contain whitespaces. The program performs the following tasks: Read the file name of the tsv file from the user Open the tsv file and read the student information. Compute the average exam score of each student. Assign a letter grade to each student based on the average exam score in the following scale: A: 90 =< x B: 80 =< x < 90 C: 70 =< x < 80 D: 60 =< x < 70 F: x < 60 Compute the average of each exam. Output the last names, first names, exam scores, and letter grades of the students into a text file named report.txt. Output one student per row and seperate the values with a tab character. Output the average of each exam, with two digits after the decimal point, at the end of report.txt. It comes with a downloadable files named StudentInfo.tsv. The contents of which are just: Barrett Edan 70 45 59 Bradshaw Reagan 96 97 88 Charlton Caiuss 73 94 80 Mayo Tyrese 88 61 36 Stern Brenda 90 86 45 I am going about the assignment by making a dictionary and adding the items from the tsv file to the dictionary in key/value pairs. I was wondering if A) what i had so far seemed okay, and B) how would i go about calling forth individual key/values to find the average? For example, I have all the midterm1 scores (the first score in each user's row) assigned to the key "midterm1" in the dictionary and was wondering how i could call just those keys for each user to find the average? I have only worked with dictionaries where you would utilize every key and not just specific ones. I know my code will need some formatting arguments put in place to get the output right, but for now, all i have is: import csv user_file = input() students = {} with open("{user_file}") as file: # Unsure if this is how you use a variable to open a file reader = csv.DictReader(file) for row in reader: students.append({"last_name": row[0], "first_name": row[1], "midterm1": row[2], "midterm2": row[3], "final": row[4]}) for student in sorted(students, key = lambda student: student["last_name"]): student_avg = ((int(students["midterm1"]) + int(students["midterms2"] + int(students{"final"])) / 3) if student_avg >= 90: student_grade == "A" elif 80 <= student_avg < 90: student_grade == "B" elif 70 <= student_avg < 80: student_grade == "C" elif 60 <= student_avg < 70: student_grade == "D" elif student_avg < 60: student_grade == "F" students.update({"letter_grade": student_grade}) # hoping this adds a new key/value to the list midterm1_avg = 0 midterm2_avg = 0 final_avg = 0 # Time to calculate the averages for "midterm1", value in students(): # this is where I am stuck. I don't even know if this will work or not. I appreciate anyone taking the time to read this in advance. I am about 3-4 weeks into my python journey and this site has been invaluable. 
If you see anything I am doing that could be written in a shorter format, i am all ears. For this class we don't use too many of the import features of python outside of the math module and csv. But i am always interested in seeing better real world examples of how it should be coded.
[ "You define students as a dict and then try to use method \"append\", no can do.\nDict has no such method. Define it as a list\nstudents = []\n\nYou forgot the \"f\" for f-string\nwith open(f\"{user_file}\") as file:\n\nYou have to indent the line with \"append\" (so that it's executed IN the for loop)\nWhen calculating student_avg you have to reference student (your current element in loop), not studentS (the whole list of dicts)\nWhen you iterate over a list of students your student is a dict\nSo you update the entry like this:\nstudent[\"letter_grade\"] = student_grade\n\nRegarding the averages you can accumulate them in the same loop where you assign letter grades (assign 0 to them before the loop):\nmidterm1_average += student.get(\"midterm1\", 0)\nmidterm2_average += student.get(\"midterm2\", 0)\nfinal_average += student.get(\"final\", 0)\n\nAnd then after the loop just divide by the number of students:\nnumber_of_students = len(students)\nmidterm1_average /= number_of_students\nmidterm2_average /= number_of_students\nfinal_average /= number_of_students\n\nIn general you are pretty close and the code is decent. Try to visualize your data structures, maybe even print them (what is students, what is a particular student when you loop over them and so on). Also run your code. It would fail multiple times and it's faster to write and debug simultaneously, than write everything and debug later.\n" ]
[ 0 ]
[]
[]
[ "csv", "dictionary", "key_value", "python" ]
stackoverflow_0074634109_csv_dictionary_key_value_python.txt
Q: Trouble installing keras I have a problem with keras; I've installed it once but somehow I cannot import it anymore since I recently installed some other packages. If I want to import keras, I get the following error (among many other warnings etc.): ModuleNotFoundError: No module named 'tensorflow.tsl' I tried to force reinstall both keras and tensorflow but if I want to do this with keras (with the command pip install --force-reinstall keras), I get the following errors This behaviour is the source of the following dependency conflicts. tensorflow 2.7.0 requires flatbuffers<3.0,>=1.12, but you have flatbuffers 22.11.23 which is incompatible. tensorflow 2.7.0 requires keras<2.8,>=2.7.0rc0, but you have keras 2.11.0 which is incompatible. tensorflow 2.7.0 requires tensorflow-estimator<2.8,~=2.7.0rc0, but you have tensorflow-estimator 2.11.0 which is incompatible. And if I want to force reinstall tensorflow I get the following error Could not install packages due to an OSError: [WinError 5] Zugriff verweigert: 'C:\\Users\\PC\\AppData\\Local\\Temp\\pip-uninstall-tbtwxjcv\\core\\_multiarray_tests.cp38-win_amd64.pyd' Consider using the `--user` option or check the permissions. I truly have no idea what is happening here; is it possible to delete the packages manually to reinstall them afterwards? Also the simple pip uninstalldoes not work... Initially I installed keras without a virtual environment, i.e. just with `pip install keras', but it worked once at least... A: You need to upgrade Tensorflow or downgrade keras pip install keras==2.8 and do the same for the other two libraries
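Going by the version constraints quoted in the pip output above, tensorflow 2.7.0 itself asks for keras<2.8, tensorflow-estimator<2.8 and flatbuffers<3.0, so pinning keras to exactly 2.8 would still trip the resolver. One hedged way out is either to pin those three packages back into the requested ranges or to move tensorflow forward instead; treat the commands below as a sketch, not something verified against your environment:

# Option 1: keep tensorflow 2.7.0 and pin its dependencies to the ranges it asks for
pip install "keras>=2.7.0rc0,<2.8" "tensorflow-estimator>=2.7.0rc0,<2.8" "flatbuffers>=1.12,<3.0"

# Option 2: upgrade tensorflow and let pip pull matching keras/estimator versions
pip install --upgrade tensorflow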
Trouble installing keras
I have a problem with keras; I've installed it once but somehow I cannot import it anymore since I recently installed some other packages. If I want to import keras, I get the following error (among many other warnings etc.): ModuleNotFoundError: No module named 'tensorflow.tsl' I tried to force reinstall both keras and tensorflow but if I want to do this with keras (with the command pip install --force-reinstall keras), I get the following errors This behaviour is the source of the following dependency conflicts. tensorflow 2.7.0 requires flatbuffers<3.0,>=1.12, but you have flatbuffers 22.11.23 which is incompatible. tensorflow 2.7.0 requires keras<2.8,>=2.7.0rc0, but you have keras 2.11.0 which is incompatible. tensorflow 2.7.0 requires tensorflow-estimator<2.8,~=2.7.0rc0, but you have tensorflow-estimator 2.11.0 which is incompatible. And if I want to force reinstall tensorflow I get the following error Could not install packages due to an OSError: [WinError 5] Zugriff verweigert: 'C:\\Users\\PC\\AppData\\Local\\Temp\\pip-uninstall-tbtwxjcv\\core\\_multiarray_tests.cp38-win_amd64.pyd' Consider using the `--user` option or check the permissions. I truly have no idea what is happening here; is it possible to delete the packages manually to reinstall them afterwards? Also the simple pip uninstalldoes not work... Initially I installed keras without a virtual environment, i.e. just with `pip install keras', but it worked once at least...
[ "You need to upgrade Tensorflow or downgrade keras\npip install keras==2.8\n\nand do the same for the other two libraries\n" ]
[ 0 ]
[]
[]
[ "keras", "pip", "python" ]
stackoverflow_0074640526_keras_pip_python.txt
Q: Anaconda python: PackagesNotFoundError error when trying to roll back revision For some reason I decided to upgrade setuptools. The so-called package plan that popped up when I ran conda install -c anaconda setuptools was as follows: The following packages will be downloaded: package | build ---------------------------|----------------- certifi-2019.3.9 | py37_0 155 KB anaconda pip-19.1.1 | py37_0 1.8 MB anaconda python-3.7.2 | h8c8aaf0_10 17.7 MB anaconda setuptools-41.0.1 | py37_0 680 KB anaconda wheel-0.33.4 | py37_0 57 KB anaconda wincertstore-0.2 | py37_0 13 KB anaconda ------------------------------------------------------------ Total: 20.4 MB The following NEW packages will be INSTALLED: pip anaconda/win-64::pip-19.1.1-py37_0 The following packages will be UPDATED: certifi 2018.11.29-py36_0 --> 2019.3.9-py37_0 python pkgs/main::python-3.6.4-h6538335_1 --> anaconda::python-3.7.2-h8c8aaf0_10 setuptools pkgs/main::setuptools-38.4.0-py36_0 --> anaconda::setuptools-41.0.1-py37_0 wheel pkgs/main::wheel-0.30.0-py36h6c3ec14_1 --> anaconda::wheel-0.33.4-py37_0 The following packages will be SUPERSEDED by a higher-priority channel: wincertstore pkgs/main::wincertstore-0.2-py36h7fe5~ --> anaconda::wincertstore-0.2-py37_0 However the upgrade broke other parts of my code which are really needed and cannot be updated. Hence I decide to roll back to the previous state. The most recent revisions from conda list --revisions are: 2019-02-12 15:10:38 (rev 12) bzip2 {1.0.6 (conda-forge) -> 1.0.6 (anaconda)} ca-certificates {2018.03.07 -> 2019.1.23 (anaconda)} certifi {2018.11.29 -> 2018.11.29 (anaconda)} conda {4.5.12 -> 4.6.2 (anaconda)} nbconvert {5.3.1 -> 5.4.0 (anaconda)} openssl {1.1.1a -> 1.1.1 (anaconda)} snappy {1.1.7 (conda-forge) -> 1.1.7 (anaconda)} vc {14.1 -> 14.1 (anaconda)} vs2015_runtime {14.15.26706 -> 15.5.2 (anaconda)} yaml {0.1.7 (conda-forge) -> 0.1.7 (anaconda)} zlib {1.2.11 (conda-forge) -> 1.2.11 (anaconda)} +defusedxml-0.5.0 (anaconda) 2019-05-17 16:52:29 (rev 13) certifi {2018.11.29 (anaconda) -> 2019.3.9 (anaconda)} pip {9.0.1 -> 19.1.1 (anaconda)} python {3.6.4 -> 3.7.2 (anaconda)} setuptools {38.4.0 -> 41.0.1 (anaconda)} wheel {0.30.0 -> 0.33.4 (anaconda)} wincertstore {0.2 -> 0.2 (anaconda)} The problem now is that when I do conda install --revision 12 I get the following error: PackagesNotFoundError: The following packages are missing from the target environment: - anaconda::certifi==2018.11.29=py36_0 Any ideas how to do the rollback please? Many thanks A: It appears you are maintaining your environment by issuing a series of conda install commands. You could continue to do this, with an additional version specification on the command line. But I encourage you to switch to this approach: Create an environment.yml file that looks like this. name: myproject channels: - conda-forge dependencies: - bzip2 >= 1.0.6 - pip >= 19.1.1 - snappy >= 1.1.7 - zlib >= 1.2.11 Add others as needed. Use conda env update to install the packages. (With which python you can see where they were installed.) An advantage of this approach is you can easily rm -rf ~/miniconda3/envs/myproject/ (or wherever they were installed) and then conda env update to re-install from scratch. This typically resolves versionitis problems, or at least offers a hint about which version constraints should be relaxed to permit a feasible solution. EDIT I personally favor >= constraints in my environment.yml files. 
Sticking to modern versions is good for community support when things go awry, and is good for speed of updates since conda will have just a handful of modern versions to consider, rather than trying to figure out how e.g. python2 might play into the dependency constraints. It helps me to learn of updates, and then I re-run my automated unit tests upon pulling in newer deps. Alternatively you can routinely store == constraints to lock it down if desired, e.g. bzip2 == 1.0.6. And if you haven't been doing that, you can still checkout an old snapshot with e.g. bzip2 >= 1.0.5 and edit with global search-n-replace, changing >= to ==. That will set the controls on the Time Machine to go back in time to some consistent set of older dep versions. If your conda env update run shows some rough edges, consider nuking the environment and re-populating it from scratch. Often a clean install like that will run more smoothly. A: Just in case someone bumps into this, facing a similar situation, this what I did and to be fair, it is not actually a rollback. It also appears that my conda environment was really messed-up the upgrade I mentioning in my original post because when I did conda update conda I received the following error: >conda update conda Collecting package metadata: done Solving environment: | WARNING conda.common.logic:get_sat_solver_cls(289): Could not run SAT solver through interface "pycosat". WARNING conda.common.logic:get_sat_solver_cls(289): Could not run SAT solver through interface "pycryptosat". WARNING conda.common.logic:get_sat_solver_cls(289): Could not run SAT solver through interface "pysat".failed My numpy was also broken and who knows what else.... I followed the steps desribed by Kale Franz at this link: https://github.com/conda/conda/issues/7714#issuecomment-417553149 For the sake of completeness, I am attaching a screenshot of his answer below: Running the command that Kale suggests in his post, I got a really long list of the packages that are causing inconsistencies. I was a really long list, I am just pasting below the very first few lines: Collecting package metadata: done Solving environment: \ The environment is inconsistent, please check the package plan carefully The following packages are causing the inconsistency: - defaults/win-64::alabaster==0.7.10=py36hcd07829_0 - defaults/win-64::anaconda-client==1.6.9=py36_0 - defaults/win-64::anaconda==custom=py36h363777c_0 - defaults/win-64::anaconda-project==0.8.2=py36hfad2e28_0 - defaults/win-64::asn1crypto==0.24.0=py36_0 - defaults/win-64::astroid==1.6.1=py36_0 .... (A lot more that I am not pasting here) .... 
And after the list ended the message continued as follows: The following packages will be downloaded: package | build ---------------------------|----------------- ca-certificates-2019.5.15 | 0 166 KB certifi-2019.3.9 | py36_0 156 KB cffi-1.12.3 | py36h7a1dbc1_0 225 KB chardet-3.0.4 | py36_1 210 KB conda-4.6.14 | py36_0 2.1 MB cryptography-2.7 | py36h7a1dbc1_0 564 KB idna-2.8 | py36_0 134 KB menuinst-1.4.16 | py36he774522_0 227 KB openssl-1.1.1c | he774522_1 5.7 MB pip-19.1.1 | py36_0 1.9 MB pycosat-0.6.3 | py36hfa6e2cd_0 98 KB pycparser-2.19 | py36_0 174 KB pyopenssl-19.0.0 | py36_0 82 KB pysocks-1.7.0 | py36_0 30 KB python-3.6.8 | h9f7ef89_7 20.3 MB pywin32-223 | py36hfa6e2cd_1 9.3 MB requests-2.22.0 | py36_0 90 KB ruamel_yaml-0.15.46 | py36hfa6e2cd_0 262 KB setuptools-41.0.1 | py36_0 663 KB six-1.12.0 | py36_0 22 KB urllib3-1.24.2 | py36_0 153 KB wheel-0.33.4 | py36_0 57 KB win_inet_pton-1.1.0 | py36_0 9 KB wincertstore-0.2 | py36h7fe50ca_0 13 KB ------------------------------------------------------------ Total: 42.6 MB The following packages will be UPDATED: ca-certificates anaconda::ca-certificates-2019.1.23-0 --> pkgs/main::ca-certificates-2019.5.15-0 cffi 1.11.4-py36hfa6e2cd_0 --> 1.12.3-py36h7a1dbc1_0 conda anaconda::conda-4.6.2-py36_0 --> pkgs/main::conda-4.6.14-py36_0 cryptography 2.4.2-py36h7a1dbc1_0 --> 2.7-py36h7a1dbc1_0 idna 2.6-py36h148d497_1 --> 2.8-py36_0 menuinst 1.4.11-py36hfa6e2cd_0 --> 1.4.16-py36he774522_0 pycparser 2.18-py36hd053e01_1 --> 2.19-py36_0 pyopenssl 17.5.0-py36h5b7d817_0 --> 19.0.0-py36_0 pysocks 1.6.7-py36h698d350_1 --> 1.7.0-py36_0 pywin32 222-py36hfa6e2cd_0 --> 223-py36hfa6e2cd_1 requests 2.18.4-py36h4371aae_1 --> 2.22.0-py36_0 ruamel_yaml 0.15.35-py36hfa6e2cd_1 --> 0.15.46-py36hfa6e2cd_0 six 1.11.0-py36h4db2310_1 --> 1.12.0-py36_0 urllib3 1.22-py36h276f60a_0 --> 1.24.2-py36_0 win_inet_pton 1.0.1-py36he67d7fd_1 --> 1.1.0-py36_0 The following packages will be SUPERSEDED by a higher-priority channel: certifi anaconda::certifi-2019.3.9-py37_0 --> pkgs/main::certifi-2019.3.9-py36_0 openssl anaconda::openssl-1.1.1-he774522_0 --> pkgs/main::openssl-1.1.1c-he774522_1 pip anaconda::pip-19.1.1-py37_0 --> pkgs/main::pip-19.1.1-py36_0 python anaconda::python-3.7.2-h8c8aaf0_10 --> pkgs/main::python-3.6.8-h9f7ef89_7 setuptools anaconda::setuptools-41.0.1-py37_0 --> pkgs/main::setuptools-41.0.1-py36_0 wheel anaconda::wheel-0.33.4-py37_0 --> pkgs/main::wheel-0.33.4-py36_0 wincertstore anaconda::wincertstore-0.2-py37_0 --> pkgs/main::wincertstore-0.2-py36h7fe50ca_0 The following packages will be DOWNGRADED: chardet 3.0.4-py36h420ce6e_1 --> 3.0.4-py36_1 pycosat 0.6.3-py36h413d8a4_0 --> 0.6.3-py36hfa6e2cd_0 Proceed ([y]/n)? 
y Everything looks fine now and If I do conda list --revisions my two most recent revisions are: 2019-05-17 16:52:29 (rev 13) certifi {2018.11.29 (anaconda) -> 2019.3.9 (anaconda)} pip {9.0.1 -> 19.1.1 (anaconda)} python {3.6.4 -> 3.7.2 (anaconda)} setuptools {38.4.0 -> 41.0.1 (anaconda)} wheel {0.30.0 -> 0.33.4 (anaconda)} wincertstore {0.2 -> 0.2 (anaconda)} 2019-06-10 14:05:10 (rev 14) ca-certificates {2019.1.23 (anaconda) -> 2019.5.15} certifi {2019.3.9 (anaconda) -> 2019.3.9} cffi {1.11.4 -> 1.12.3} chardet {3.0.4 -> 3.0.4} conda {4.6.2 (anaconda) -> 4.6.14} cryptography {2.4.2 -> 2.7} idna {2.6 -> 2.8} menuinst {1.4.11 -> 1.4.16} openssl {1.1.1 (anaconda) -> 1.1.1c} pip {19.1.1 (anaconda) -> 19.1.1} pycosat {0.6.3 -> 0.6.3} pycparser {2.18 -> 2.19} pyopenssl {17.5.0 -> 19.0.0} pysocks {1.6.7 -> 1.7.0} python {3.7.2 (anaconda) -> 3.6.8} pywin32 {222 -> 223} requests {2.18.4 -> 2.22.0} ruamel_yaml {0.15.35 -> 0.15.46} setuptools {41.0.1 (anaconda) -> 41.0.1} six {1.11.0 -> 1.12.0} urllib3 {1.22 -> 1.24.2} wheel {0.33.4 (anaconda) -> 0.33.4} win_inet_pton {1.0.1 -> 1.1.0} wincertstore {0.2 (anaconda) -> 0.2} A: I agree with J_H, the best is to keep your environment.yml produced with conda env export > environment.yml in your code versioning system (Git). Then in case of problem you can delete your environment and recreate it (but not with conda env update) with conda env create -f environment.yml. A: I had a similar issue where I could not role back to an older revision. After the command conda install --revision N I got a similar error message in the style PackagesNotFoundError: The following packages are missing from the target environment: - channel-name::package==v.v.v=build - ... What helped was to add the channel to the command conda install --revision N -c channel-name
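A short command-level sketch of the two recovery routes described above (the environment name is a placeholder):

# Snapshot a working environment so it can be rebuilt later
conda env export > environment.yml

# Nuke and recreate it from that file if an upgrade goes wrong
conda env remove -n myenv
conda env create -f environment.yml

# Or, when rolling back by revision, name the channel the "missing" package came from
conda install --revision 12 -c anaconda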
Anaconda python: PackagesNotFoundError error when trying to roll back revision
For some reason I decided to upgrade setuptools. The so-called package plan that popped up when I ran conda install -c anaconda setuptools was as follows: The following packages will be downloaded: package | build ---------------------------|----------------- certifi-2019.3.9 | py37_0 155 KB anaconda pip-19.1.1 | py37_0 1.8 MB anaconda python-3.7.2 | h8c8aaf0_10 17.7 MB anaconda setuptools-41.0.1 | py37_0 680 KB anaconda wheel-0.33.4 | py37_0 57 KB anaconda wincertstore-0.2 | py37_0 13 KB anaconda ------------------------------------------------------------ Total: 20.4 MB The following NEW packages will be INSTALLED: pip anaconda/win-64::pip-19.1.1-py37_0 The following packages will be UPDATED: certifi 2018.11.29-py36_0 --> 2019.3.9-py37_0 python pkgs/main::python-3.6.4-h6538335_1 --> anaconda::python-3.7.2-h8c8aaf0_10 setuptools pkgs/main::setuptools-38.4.0-py36_0 --> anaconda::setuptools-41.0.1-py37_0 wheel pkgs/main::wheel-0.30.0-py36h6c3ec14_1 --> anaconda::wheel-0.33.4-py37_0 The following packages will be SUPERSEDED by a higher-priority channel: wincertstore pkgs/main::wincertstore-0.2-py36h7fe5~ --> anaconda::wincertstore-0.2-py37_0 However the upgrade broke other parts of my code which are really needed and cannot be updated. Hence I decide to roll back to the previous state. The most recent revisions from conda list --revisions are: 2019-02-12 15:10:38 (rev 12) bzip2 {1.0.6 (conda-forge) -> 1.0.6 (anaconda)} ca-certificates {2018.03.07 -> 2019.1.23 (anaconda)} certifi {2018.11.29 -> 2018.11.29 (anaconda)} conda {4.5.12 -> 4.6.2 (anaconda)} nbconvert {5.3.1 -> 5.4.0 (anaconda)} openssl {1.1.1a -> 1.1.1 (anaconda)} snappy {1.1.7 (conda-forge) -> 1.1.7 (anaconda)} vc {14.1 -> 14.1 (anaconda)} vs2015_runtime {14.15.26706 -> 15.5.2 (anaconda)} yaml {0.1.7 (conda-forge) -> 0.1.7 (anaconda)} zlib {1.2.11 (conda-forge) -> 1.2.11 (anaconda)} +defusedxml-0.5.0 (anaconda) 2019-05-17 16:52:29 (rev 13) certifi {2018.11.29 (anaconda) -> 2019.3.9 (anaconda)} pip {9.0.1 -> 19.1.1 (anaconda)} python {3.6.4 -> 3.7.2 (anaconda)} setuptools {38.4.0 -> 41.0.1 (anaconda)} wheel {0.30.0 -> 0.33.4 (anaconda)} wincertstore {0.2 -> 0.2 (anaconda)} The problem now is that when I do conda install --revision 12 I get the following error: PackagesNotFoundError: The following packages are missing from the target environment: - anaconda::certifi==2018.11.29=py36_0 Any ideas how to do the rollback please? Many thanks
[ "It appears you are maintaining your environment by\nissuing a series of conda install commands.\nYou could continue to do this,\nwith an additional version specification on the command line.\nBut I encourage you to switch to this approach:\nCreate an environment.yml file that looks like this.\nname: myproject\n\nchannels:\n - conda-forge\n\ndependencies:\n - bzip2 >= 1.0.6\n - pip >= 19.1.1\n - snappy >= 1.1.7\n - zlib >= 1.2.11\n\nAdd others as needed.\nUse conda env update to install the packages.\n(With which python you can see where they were installed.)\nAn advantage of this approach is you can easily\nrm -rf ~/miniconda3/envs/myproject/\n(or wherever they were installed)\nand then conda env update to re-install from scratch.\nThis typically resolves versionitis problems,\nor at least offers a hint\nabout which version constraints should be relaxed\nto permit a feasible solution.\nEDIT\nI personally favor >= constraints in my environment.yml files.\nSticking to modern versions is good for community support\nwhen things go awry, and is good for speed of updates since\nconda will have just a handful of modern versions to consider,\nrather than trying to figure out how e.g. python2 might\nplay into the dependency constraints.\nIt helps me to learn of updates, and then I re-run\nmy automated unit tests upon pulling in newer deps.\nAlternatively you can routinely store == constraints\nto lock it down if desired, e.g. bzip2 == 1.0.6.\nAnd if you haven't been doing that, you can still\ncheckout an old snapshot with e.g. bzip2 >= 1.0.5\nand edit with global search-n-replace, changing >= to ==.\nThat will set the controls on the Time Machine to go\nback in time to some consistent set of older dep versions.\nIf your conda env update run shows some rough edges,\nconsider nuking the environment and re-populating it from scratch.\nOften a clean install like that will run more smoothly.\n", "Just in case someone bumps into this, facing a similar situation, this what I did and to be fair, it is not actually a rollback. It also appears that my conda environment was really messed-up the upgrade I mentioning in my original post because when I did conda update conda I received the following error:\n>conda update conda\nCollecting package metadata: done\nSolving environment: | WARNING \nconda.common.logic:get_sat_solver_cls(289): Could not run SAT solver through interface \"pycosat\".\nWARNING conda.common.logic:get_sat_solver_cls(289): Could not run SAT solver through interface \"pycryptosat\".\nWARNING conda.common.logic:get_sat_solver_cls(289): Could not run SAT solver through interface \"pysat\".failed\n\nMy numpy was also broken and who knows what else....\nI followed the steps desribed by Kale Franz at this link: https://github.com/conda/conda/issues/7714#issuecomment-417553149 \nFor the sake of completeness, I am attaching a screenshot of his answer below:\n\nRunning the command that Kale suggests in his post, I got a really long list of the packages that are causing inconsistencies. 
I was a really long list, I am just pasting below the very first few lines:\nCollecting package metadata: done\nSolving environment: \\\nThe environment is inconsistent, please check the package plan carefully\nThe following packages are causing the inconsistency:\n\n - defaults/win-64::alabaster==0.7.10=py36hcd07829_0\n - defaults/win-64::anaconda-client==1.6.9=py36_0\n - defaults/win-64::anaconda==custom=py36h363777c_0\n - defaults/win-64::anaconda-project==0.8.2=py36hfad2e28_0\n - defaults/win-64::asn1crypto==0.24.0=py36_0\n - defaults/win-64::astroid==1.6.1=py36_0\n .... (A lot more that I am not pasting here) ....\n\nAnd after the list ended the message continued as follows: \nThe following packages will be downloaded:\n\npackage | build\n---------------------------|-----------------\nca-certificates-2019.5.15 | 0 166 KB\ncertifi-2019.3.9 | py36_0 156 KB\ncffi-1.12.3 | py36h7a1dbc1_0 225 KB\nchardet-3.0.4 | py36_1 210 KB\nconda-4.6.14 | py36_0 2.1 MB\ncryptography-2.7 | py36h7a1dbc1_0 564 KB\nidna-2.8 | py36_0 134 KB\nmenuinst-1.4.16 | py36he774522_0 227 KB\nopenssl-1.1.1c | he774522_1 5.7 MB\npip-19.1.1 | py36_0 1.9 MB\npycosat-0.6.3 | py36hfa6e2cd_0 98 KB\npycparser-2.19 | py36_0 174 KB\npyopenssl-19.0.0 | py36_0 82 KB\npysocks-1.7.0 | py36_0 30 KB\npython-3.6.8 | h9f7ef89_7 20.3 MB\npywin32-223 | py36hfa6e2cd_1 9.3 MB\nrequests-2.22.0 | py36_0 90 KB\nruamel_yaml-0.15.46 | py36hfa6e2cd_0 262 KB\nsetuptools-41.0.1 | py36_0 663 KB\nsix-1.12.0 | py36_0 22 KB\nurllib3-1.24.2 | py36_0 153 KB\nwheel-0.33.4 | py36_0 57 KB\nwin_inet_pton-1.1.0 | py36_0 9 KB\nwincertstore-0.2 | py36h7fe50ca_0 13 KB\n------------------------------------------------------------\n Total: 42.6 MB\n\nThe following packages will be UPDATED:\n\n ca-certificates anaconda::ca-certificates-2019.1.23-0 --> pkgs/main::ca-certificates-2019.5.15-0\n cffi 1.11.4-py36hfa6e2cd_0 --> 1.12.3-py36h7a1dbc1_0\n conda anaconda::conda-4.6.2-py36_0 --> pkgs/main::conda-4.6.14-py36_0\n cryptography 2.4.2-py36h7a1dbc1_0 --> 2.7-py36h7a1dbc1_0\n idna 2.6-py36h148d497_1 --> 2.8-py36_0\n menuinst 1.4.11-py36hfa6e2cd_0 --> 1.4.16-py36he774522_0\n pycparser 2.18-py36hd053e01_1 --> 2.19-py36_0\n pyopenssl 17.5.0-py36h5b7d817_0 --> 19.0.0-py36_0\n pysocks 1.6.7-py36h698d350_1 --> 1.7.0-py36_0\n pywin32 222-py36hfa6e2cd_0 --> 223-py36hfa6e2cd_1\n requests 2.18.4-py36h4371aae_1 --> 2.22.0-py36_0\n ruamel_yaml 0.15.35-py36hfa6e2cd_1 --> 0.15.46-py36hfa6e2cd_0\n six 1.11.0-py36h4db2310_1 --> 1.12.0-py36_0\n urllib3 1.22-py36h276f60a_0 --> 1.24.2-py36_0\n win_inet_pton 1.0.1-py36he67d7fd_1 --> 1.1.0-py36_0\n\nThe following packages will be SUPERSEDED by a higher-priority channel:\n\ncertifi anaconda::certifi-2019.3.9-py37_0 --> pkgs/main::certifi-2019.3.9-py36_0\nopenssl anaconda::openssl-1.1.1-he774522_0 --> pkgs/main::openssl-1.1.1c-he774522_1\npip anaconda::pip-19.1.1-py37_0 --> pkgs/main::pip-19.1.1-py36_0\npython anaconda::python-3.7.2-h8c8aaf0_10 --> pkgs/main::python-3.6.8-h9f7ef89_7\nsetuptools anaconda::setuptools-41.0.1-py37_0 --> pkgs/main::setuptools-41.0.1-py36_0\nwheel anaconda::wheel-0.33.4-py37_0 --> pkgs/main::wheel-0.33.4-py36_0\nwincertstore anaconda::wincertstore-0.2-py37_0 --> pkgs/main::wincertstore-0.2-py36h7fe50ca_0\n\nThe following packages will be DOWNGRADED:\n\n chardet 3.0.4-py36h420ce6e_1 --> 3.0.4-py36_1\n pycosat 0.6.3-py36h413d8a4_0 --> 0.6.3-py36hfa6e2cd_0\n\n\nProceed ([y]/n)? 
y\n\nEverything looks fine now and If I do conda list --revisions my two most recent revisions are:\n 2019-05-17 16:52:29 (rev 13)\n certifi {2018.11.29 (anaconda) -> 2019.3.9 (anaconda)}\n pip {9.0.1 -> 19.1.1 (anaconda)}\n python {3.6.4 -> 3.7.2 (anaconda)}\n setuptools {38.4.0 -> 41.0.1 (anaconda)}\n wheel {0.30.0 -> 0.33.4 (anaconda)}\n wincertstore {0.2 -> 0.2 (anaconda)}\n\n 2019-06-10 14:05:10 (rev 14)\n ca-certificates {2019.1.23 (anaconda) -> 2019.5.15}\n certifi {2019.3.9 (anaconda) -> 2019.3.9}\n cffi {1.11.4 -> 1.12.3}\n chardet {3.0.4 -> 3.0.4}\n conda {4.6.2 (anaconda) -> 4.6.14}\n cryptography {2.4.2 -> 2.7}\n idna {2.6 -> 2.8}\n menuinst {1.4.11 -> 1.4.16}\n openssl {1.1.1 (anaconda) -> 1.1.1c}\n pip {19.1.1 (anaconda) -> 19.1.1}\n pycosat {0.6.3 -> 0.6.3}\n pycparser {2.18 -> 2.19}\n pyopenssl {17.5.0 -> 19.0.0}\n pysocks {1.6.7 -> 1.7.0}\n python {3.7.2 (anaconda) -> 3.6.8}\n pywin32 {222 -> 223}\n requests {2.18.4 -> 2.22.0}\n ruamel_yaml {0.15.35 -> 0.15.46}\n setuptools {41.0.1 (anaconda) -> 41.0.1}\n six {1.11.0 -> 1.12.0}\n urllib3 {1.22 -> 1.24.2}\n wheel {0.33.4 (anaconda) -> 0.33.4}\n win_inet_pton {1.0.1 -> 1.1.0}\n wincertstore {0.2 (anaconda) -> 0.2}\n\n", "I agree with J_H, the best is to keep your environment.yml produced with conda env export > environment.yml in your code versioning system (Git). Then in case of problem you can delete your environment and recreate it (but not with conda env update) with conda env create -f environment.yml.\n", "I had a similar issue where I could not role back to an older revision. After the command\nconda install --revision N\n\nI got a similar error message in the style\nPackagesNotFoundError: The following packages are missing from the target environment:\n - channel-name::package==v.v.v=build\n - ...\n\nWhat helped was to add the channel to the command\nconda install --revision N -c channel-name\n\n" ]
[ 4, 3, 3, 0 ]
[]
[]
[ "anaconda", "python", "python_3.x" ]
stackoverflow_0056190556_anaconda_python_python_3.x.txt
Q: How can I change the textScrollList depending upon the options I have selected in optionMenu in Maya?
This is the function where I wrote the operation, where om = optionMenu:
def selection(*args):
    selected = cmds.optionMenu(om, sl=True, q=True)
    cmds.textScrollList(tsl, e=True, removeAll=True)
    for item in temp[selected]:
        cmds.textScrollList(label=item, parent=om)
So basically I have object types in the OM and all the objects in the TSL, and I want to display only the items of the object type selected in the OM; for example, if I select "mesh" in the OM the TSL should display only mesh objects. Here OM = optionMenu and TSL = textScrollList.
A: To get a qualified answer, you should provide a minimal executable script. And you did not write what exactly does not work. Please describe exactly what errors you get. What I can see is that you create a list of textScrollList UI elements instead of filling the existing text scroll list. A complete solution can look like this:
import maya.cmds as cmds

class MyWindow(object):
    def __init__(self):
        self.types = ["mesh", "light"]
        self.tsl = None
        self.win = None
        self.om = None
        self.buildUI()

    def updateList(self, *args):
        geoType = cmds.optionMenu(self.om, query=True, value=True)
        cmds.textScrollList(self.tsl, edit=True, removeAll=True)
        elements = cmds.ls(type=geoType)
        for e in elements:
            cmds.textScrollList(self.tsl, edit=True, append=e)

    def buildUI(self):
        if self.win is not None:
            if cmds.window(self.win, exists=True):
                cmds.deleteUI(self.win)
        self.win = cmds.window()
        cmds.columnLayout()
        self.om = cmds.optionMenu(cc=self.updateList)
        for i in self.types:
            cmds.menuItem(label=i)
        self.tsl = cmds.textScrollList()
        cmds.showWindow(self.win)

I use a class here because it is easier to maintain and to keep all data inside.
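As a usage sketch on top of the answer (an assumption, not part of it): running the block above in Maya's Script Editor defines the class, after which a single instantiation builds and shows the window:

win = MyWindow()   # builds the UI and shows the window
# Choosing "mesh" or "light" in the option menu fires updateList() and
# repopulates the scroll list with cmds.ls(type=...) results.
# To offer more node types, extend the list in __init__,
# e.g. self.types = ["mesh", "light", "camera", "nurbsCurve"].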
How can I change the textScrollList depending upon the options I have selected in optionMenu in Maya?
` This is the function that i wrote the operation, where om = optionMenu def selection(*args): selected = cmds.optionMenu(om,sl=True, q=True) cmds.textScrollList(tsl, e=True, removeAll=True) for item in temp[selected]: cmds.textScrollList(label=item, parent=om) ` enter image description here So basically I have object types in the OM and all the objects in TSL, I wanna display items of only selected object types from OM, for example if I select "mesh" in the OM the TSL will display only mesh objects. where OM = optionMenu and TSL - textScrollList
[ "to get a qualified answer, you should provide a minimal executable script. And you did not write what exactly does not work. Please describe exactly what errors you get. What I can see is that you create a list of textScrollList ui elements instead of filling the existing text scroll list. A complete solution can look like this:\nimport maya.cmds as cmds\n\nclass MyWindow(object):\n def __init__(self):\n self.types = [\"mesh\", \"light\"] \n self.tsl = None\n self.win = None\n self.om = None\n self.buildUI()\n\n def updateList(self, *args):\n geoType = cmds.optionMenu(self.om, query=True, value=True)\n cmds.textScrollList(self.tsl, edit=True, removeAll=True)\n elements = cmds.ls(type=geoType)\n for e in elements:\n cmds.textScrollList(self.tsl, edit=True, append=e)\n \n def buildUI(self):\n if self.win is not None:\n if cmds.window(self.win, exists=True):\n cmds.deleteUI(self.win)\n self.win = cmds.window()\n cmds.columnLayout()\n self.om = cmds.optionMenu(cc=self.updateList)\n for i in self.types:\n cmds.menuItem(label=i)\n self.tsl = cmds.textScrollList()\n\ncmds.showWindow(self.win)\n\nI use a class here because it is easier to maintain and to keep all data inside.\n" ]
[ 1 ]
[]
[]
[ "maya", "maya_api", "pymel", "python" ]
stackoverflow_0074640068_maya_maya_api_pymel_python.txt
Q: How to make two different list of strings same? I have two random lists of strings. The lengths of the two lists won't necessarily be equal all the time. There is no repetition of elements within a list.
list1 = ['A', 'A-B', 'B', 'C']
list2 = ['A', 'A-B', 'B', 'D']
I want to compare the two lists, and the final output should be two lists with all common elements. Expected output:
list1_final = ['A', 'A-B', 'B', 'C', 'D']
list2_final = ['A', 'A-B', 'B', 'C', 'D']
How can I achieve this with a minimum number of lines of code?
A: Use Python's built-in set type. With set1.intersection(set2) you get the elements common to set1 and set2, and set1.union(set2) gives the union of the two sets.
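Note that the expected output shown in the question is actually the union of the two lists, so here is a short sketch covering both operations mentioned in the answer (sets do not preserve order, hence the sort):

list1 = ['A', 'A-B', 'B', 'C']
list2 = ['A', 'A-B', 'B', 'D']

common = set(list1) & set(list2)      # elements present in both: {'A', 'A-B', 'B'}
combined = set(list1) | set(list2)    # union of the two lists

list1_final = sorted(combined)
list2_final = sorted(combined)
print(list1_final)   # ['A', 'A-B', 'B', 'C', 'D']
print(list2_final)   # ['A', 'A-B', 'B', 'C', 'D']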
How to make two different list of strings same?
I have two random lists of strings. The length of two lists won't be necessarily equal all the time. There is no repetition of the elements within a list. list1=['A', 'A-B', 'B', 'C'] list2=['A', 'A-B', 'B', 'D'] I want to compare the two lists, and the final output should be two lists with all common elements. Expected output: list1_final=['A', 'A-B', 'B', 'C','D'] list2_final=['A', 'A-B', 'B','C', 'D'] How can I achieve this with a minimum number of lines of code?
[ "Use the set python module. Just using set1.intersection(set2) you can have the common elements between set1 and set2. Or using set1.union(set2) for the union set.\n" ]
[ 1 ]
[ "Youve requested 2 lists with all common elements:\nlist1=['A', 'A-B', 'B', 'C']\nlist2=['A', 'A-B', 'B', 'D']\n\nlist1_final=[]\nlist2_final=[]\n\nfor item in list1:\n if item in list2:\n list1_final.append(item)\n list2_final.append(item)\n\nreturns\nlist 1 = ['A', 'A-B', 'B']\nlist 2 = ['A', 'A-B', 'B']\nNow your output doesnt match the question asked, do you want all items in both lists? not just the common ones?\n" ]
[ -2 ]
[ "list", "python", "set", "string" ]
stackoverflow_0074640726_list_python_set_string.txt
Q: 'bytes' object has no attribute 'encode' in decryption AES CTR so i have function for encryption and decryption using AES CTR. i was using python 3.x and the idea for this aes ctr from tweaksp when i try to decrypt to get the plaintext, i got error message: 'bytes' object has no attribute 'encode' after i check another stackoverflow, i found if .encode('hex') not works on python 3.x base this here's my code for encrypt: key_bytes = 16 def encrypt(key, pt): plaintext = read_file(pt) if isinstance(plaintext, str): pt= plaintext.encode("utf-8") if len(key) <= key_bytes: for x in range(len(key),key_bytes): key = key + "0" assert len(key) == key_bytes # Choose a random, 16-byte IV. iv = Random.new().read(AES.block_size) # Convert the IV to a Python integer. iv_int = int(binascii.hexlify(iv), 16) # Create a new Counter object with IV = iv_int. ctr = Counter.new(AES.block_size * 8, initial_value=iv_int) # Create AES-CTR cipher. aes = AES.new(key.encode('utf8'), AES.MODE_CTR, counter=ctr) # Encrypt and return IV and ciphertext. ciphertext = aes.encrypt(pt) return (iv, ciphertext) what i input for decrypt key : maru000000000000 iv : b'he\xf5\xba\x9bf\xf4\xacfA\xa6\xc7\xce\xd0\x90j' ciphertext : 01101000011001011111010110111010100110110110011011110100101011000110011001000001101001101100011111001110110100001001000001101010 and my decrypt code: def decrypt(key, iv, ciphertext): assert len(key) == key_bytes print(type(iv)) # Initialize counter for decryption. iv should be the same as the output of # encrypt(). iv_int = int(iv.encode('hex'), 16) ctr = Counter.new(AES.block_size * 8, initial_value=iv_int) # Create AES-CTR cipher. aes = AES.new(key, AES.MODE_CTR, counter=ctr) # Decrypt and return the plaintext. plaintext = aes.decrypt(ciphertext) return plaintext error A: OK, I found the problem. I commented the offending code and replaced it with iv_int = int(binascii.hexlify(iv), 16) This is the updated code that works: from Crypto import Random from Crypto.Cipher import AES from Crypto.Util import Counter import binascii key_bytes = 16 def encrypt(key, plaintext): if isinstance(plaintext, str): pt= plaintext.encode("utf-8") if len(key) <= key_bytes: for x in range(len(key),key_bytes): key = key + "0" assert len(key) == key_bytes # Choose a random, 16-byte IV. iv = Random.new().read(AES.block_size) # Convert the IV to a Python integer. iv_int = int(binascii.hexlify(iv), 16) # Create a new Counter object with IV = iv_int. ctr = Counter.new(AES.block_size * 8, initial_value=iv_int) # Create AES-CTR cipher. aes = AES.new(key.encode('utf8'), AES.MODE_CTR, counter=ctr) # Encrypt and return IV and ciphertext. ciphertext = aes.encrypt(pt) return (iv, ciphertext) def decrypt(key, iv, ciphertext): assert len(key) == key_bytes # Initialize counter for decryption. iv should be the same as the output of # encrypt(). #iv_int = int(iv.encode('hex'), 16) iv_int = int(binascii.hexlify(iv), 16) ctr = Counter.new(AES.block_size * 8, initial_value=iv_int) # Create AES-CTR cipher. aes = AES.new(key, AES.MODE_CTR, counter=ctr) # Decrypt and return the plaintext. plaintext = aes.decrypt(ciphertext) return plaintext plaintext = 'This is just a test' key = 'maru000000000000' iv, ciphertext = encrypt(key, plaintext) print(ciphertext) decrypted_text = decrypt(key, iv, ciphertext) print(decrypted_text) The following gives the same result as int(binascii.hexlify(iv), 16) iv_int = int.from_bytes(iv, byteorder='little', signed=False)
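One detail worth checking in the code above: int(binascii.hexlify(iv), 16) treats the first byte of the IV as the most significant, so its drop-in int.from_bytes equivalent uses byteorder='big'; with byteorder='little' the counter's initial value would generally differ. A quick check with the IV from the question:

import binascii

iv = b'he\xf5\xba\x9bf\xf4\xacfA\xa6\xc7\xce\xd0\x90j'   # the IV from the question

assert int(binascii.hexlify(iv), 16) == int.from_bytes(iv, byteorder='big')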
'bytes' object has no attribute 'encode' in decryption AES CTR
so i have function for encryption and decryption using AES CTR. i was using python 3.x and the idea for this aes ctr from tweaksp when i try to decrypt to get the plaintext, i got error message: 'bytes' object has no attribute 'encode' after i check another stackoverflow, i found if .encode('hex') not works on python 3.x base this here's my code for encrypt: key_bytes = 16 def encrypt(key, pt): plaintext = read_file(pt) if isinstance(plaintext, str): pt= plaintext.encode("utf-8") if len(key) <= key_bytes: for x in range(len(key),key_bytes): key = key + "0" assert len(key) == key_bytes # Choose a random, 16-byte IV. iv = Random.new().read(AES.block_size) # Convert the IV to a Python integer. iv_int = int(binascii.hexlify(iv), 16) # Create a new Counter object with IV = iv_int. ctr = Counter.new(AES.block_size * 8, initial_value=iv_int) # Create AES-CTR cipher. aes = AES.new(key.encode('utf8'), AES.MODE_CTR, counter=ctr) # Encrypt and return IV and ciphertext. ciphertext = aes.encrypt(pt) return (iv, ciphertext) what i input for decrypt key : maru000000000000 iv : b'he\xf5\xba\x9bf\xf4\xacfA\xa6\xc7\xce\xd0\x90j' ciphertext : 01101000011001011111010110111010100110110110011011110100101011000110011001000001101001101100011111001110110100001001000001101010 and my decrypt code: def decrypt(key, iv, ciphertext): assert len(key) == key_bytes print(type(iv)) # Initialize counter for decryption. iv should be the same as the output of # encrypt(). iv_int = int(iv.encode('hex'), 16) ctr = Counter.new(AES.block_size * 8, initial_value=iv_int) # Create AES-CTR cipher. aes = AES.new(key, AES.MODE_CTR, counter=ctr) # Decrypt and return the plaintext. plaintext = aes.decrypt(ciphertext) return plaintext error
[ "OK, I found the problem. I commented the offending code and replaced it with iv_int = int(binascii.hexlify(iv), 16) This is the updated code that works:\nfrom Crypto import Random\nfrom Crypto.Cipher import AES\nfrom Crypto.Util import Counter\n\nimport binascii\n\nkey_bytes = 16\ndef encrypt(key, plaintext):\n if isinstance(plaintext, str):\n pt= plaintext.encode(\"utf-8\")\n\n if len(key) <= key_bytes:\n for x in range(len(key),key_bytes):\n key = key + \"0\"\n\n assert len(key) == key_bytes\n \n # Choose a random, 16-byte IV.\n iv = Random.new().read(AES.block_size)\n\n # Convert the IV to a Python integer.\n iv_int = int(binascii.hexlify(iv), 16)\n\n # Create a new Counter object with IV = iv_int.\n ctr = Counter.new(AES.block_size * 8, initial_value=iv_int)\n\n # Create AES-CTR cipher.\n aes = AES.new(key.encode('utf8'), AES.MODE_CTR, counter=ctr)\n\n # Encrypt and return IV and ciphertext.\n ciphertext = aes.encrypt(pt)\n return (iv, ciphertext)\n\ndef decrypt(key, iv, ciphertext):\n assert len(key) == key_bytes\n # Initialize counter for decryption. iv should be the same as the output of\n # encrypt().\n #iv_int = int(iv.encode('hex'), 16) \n iv_int = int(binascii.hexlify(iv), 16)\n ctr = Counter.new(AES.block_size * 8, initial_value=iv_int)\n\n # Create AES-CTR cipher.\n aes = AES.new(key, AES.MODE_CTR, counter=ctr)\n\n # Decrypt and return the plaintext.\n plaintext = aes.decrypt(ciphertext)\n return plaintext\n\nplaintext = 'This is just a test'\nkey = 'maru000000000000'\niv, ciphertext = encrypt(key, plaintext)\nprint(ciphertext)\ndecrypted_text = decrypt(key, iv, ciphertext)\nprint(decrypted_text)\n\nThe following gives the same result as int(binascii.hexlify(iv), 16)\niv_int = int.from_bytes(iv, byteorder='little', signed=False)\n\n" ]
[ 0 ]
[]
[]
[ "aes", "encryption", "python" ]
stackoverflow_0074639218_aes_encryption_python.txt
Q: PYTHON | Extracting specific text from variable [regex] I have this text: Intel Core i5-1235U 3.3GHz (10C 12T • 12MB Cache • up to 4.4GHz) and i want to extract everything after the word "Core" till the next backspace "i5-1235U" the pattern of the text can be changed, like Intel Core i7-1255U 3.3GHz (10C 12T • 12MB Cache • up to 4.4GHz) where I want the "i7-1255U" or Intel Core i7-920P 3.3GHz (10C 12T • 12MB Cache • up to 4.4GHz) where I want the "i7-920P" and so on.. it will always start with "Intel Core" and will have "x.xGhz" after the word I want to extract i5?.+U but it works specifically with "Intel Core i5-1235U 3.3GHz (10C 12T • 12MB Cache • up to 4.4GHz)" and I want it to work on many variations :( A: You can try code below: text = 'Intel Core i7-920P 3.3GHz (10C 12T • 12MB Cache • up to 4.4GHz)' cpu_type = (text.split('Core')[1]).split()[0]
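Since the question explicitly asks for a regex, here is a hedged alternative to the split-based answer above. It assumes the model token always follows "Intel Core" and is delimited by whitespace; the sample strings are the ones from the question.
import re

samples = [
    'Intel Core i5-1235U 3.3GHz (10C 12T • 12MB Cache • up to 4.4GHz)',
    'Intel Core i7-1255U 3.3GHz (10C 12T • 12MB Cache • up to 4.4GHz)',
    'Intel Core i7-920P 3.3GHz (10C 12T • 12MB Cache • up to 4.4GHz)',
]

pattern = re.compile(r'Intel Core\s+(\S+)')  # capture the first token after "Intel Core"

for text in samples:
    match = pattern.search(text)
    if match:
        print(match.group(1))  # i5-1235U, i7-1255U, i7-920P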
PYTHON | Extracting specific text from variable [regex]
I have this text: Intel Core i5-1235U 3.3GHz (10C 12T • 12MB Cache • up to 4.4GHz) and I want to extract everything after the word "Core" up to the next space, i.e. "i5-1235U". The pattern of the text can change, like Intel Core i7-1255U 3.3GHz (10C 12T • 12MB Cache • up to 4.4GHz) where I want "i7-1255U", or Intel Core i7-920P 3.3GHz (10C 12T • 12MB Cache • up to 4.4GHz) where I want "i7-920P", and so on. It will always start with "Intel Core" and will have "x.xGHz" after the word I want to extract. I tried i5?.+U, but it works only with "Intel Core i5-1235U 3.3GHz (10C 12T • 12MB Cache • up to 4.4GHz)" and I want it to work on many variations :(
[ "You can try code below:\ntext = 'Intel Core i7-920P 3.3GHz (10C 12T • 12MB Cache • up to 4.4GHz)'\n\ncpu_type = (text.split('Core')[1]).split()[0]\n\n" ]
[ 0 ]
[]
[]
[ "extract", "python", "string" ]
stackoverflow_0074640816_extract_python_string.txt
Q: Python HVPLOT In a future version of pandas a length 1 tuple will be returned when iterating over a groupby with a grouper equal to a list of length 1 I am working on a data visualization using Python's Pandas, hvplot, and Panel libraries. I am able to shape the dataframe as I want, but when it comes to plotting this in hvplot. I start getting the following errors: /home/****/_git/data-science-books/venv/lib/python3.8/site-packages/holoviews/core/data/pandas.py:231: FutureWarning: In a future version of pandas, a length 1 tuple will be returned when iterating over a groupby with a grouper equal to a list of length 1. Don't supply a list with a single grouper to avoid this warning. data = [(k, group_type(v, **group_kwargs)) for k, v in If I supply multiple params to the 'by' param in hvplot, I don't get errors, but the chart is broken. Any thoughts? The preprocessing steps clean a series of books in .txt files into a pandas dataframe that looks like the following: (each word in the book is stored in the 'words' column) words book_title publish_date author genre country_of_origin 0 riders Riders of the Purple Sage 1912 Zane Grey western united states 1 by Riders of the Purple Sage 1912 Zane Grey western united states 2 contents Riders of the Purple Sage 1912 Zane Grey western united states 26 illustrations Riders of the Purple Sage 1912 Zane Grey western united states 39 chapter Riders of the Purple Sage 1912 Zane Grey western united states Then we make the DF interactive for hvplot and panel. imaster = master.interactive() Create a pipeline for hvplotting. colors = ['white', 'red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet', 'purple'] pipeline = ( imaster[ (imaster.publish_date >= year_range_slider.value[0]) & (imaster.publish_date <= year_range_slider.value[1]) & (imaster.words.isin(colors)) ] .drop(['country_of_origin', 'book_title', 'author'], axis=1) .rename(columns={"genre":"word_count"}) .groupby(['publish_date','words']).count() ) RESULTING INTERACTIVE PLOT word_count publish_date words -700 blue 8 green 8 purple 13 red 11 violet 1 white 27 yellow 6 1908 blue 12 green 9 orange 2 purple 1 red 10 violet 1 white 13 yellow 7 1912 blue 31 green 28 purple 62 red 72 violet 2 white 114 yellow 2 It's when I attempt to plot this that I start to see issues. It appears that I get 2 identical errors each time I call hvplot. The first pair on the line plot, the second pair on the scatter plot, and the third pair when I combine them into 'color_plot' color_plot_line = pipeline.hvplot.line(x ='publish_date', by='words', y='word_count', title='Colors by Year') color_plot_scatter = pipeline.hvplot.scatter(x ='publish_date', by='words', y='word_count', title='Colors by Year') color_plot = color_plot_line * color_plot_scatter color_plot /home/***/_git/data-science-books/venv/lib/python3.8/site-packages/holoviews/core/data/pandas.py:231: FutureWarning: In a future version of pandas, a length 1 tuple will be returned when iterating over a groupby with a grouper equal to a list of length 1. Don't supply a list with a single grouper to avoid this warning. data = [(k, group_type(v, **group_kwargs)) for k, v in /home/***/_git/data-science-books/venv/lib/python3.8/site-packages/holoviews/core/data/pandas.py:231: FutureWarning: In a future version of pandas, a length 1 tuple will be returned when iterating over a groupby with a grouper equal to a list of length 1. Don't supply a list with a single grouper to avoid this warning. 
data = [(k, group_type(v, **group_kwargs)) for k, v in /home/***/_git/data-science-books/venv/lib/python3.8/site-packages/holoviews/core/data/pandas.py:231: FutureWarning: In a future version of pandas, a length 1 tuple will be returned when iterating over a groupby with a grouper equal to a list of length 1. Don't supply a list with a single grouper to avoid this warning. data = [(k, group_type(v, **group_kwargs)) for k, v in /home/***/_git/data-science-books/venv/lib/python3.8/site-packages/holoviews/core/data/pandas.py:231: FutureWarning: In a future version of pandas, a length 1 tuple will be returned when iterating over a groupby with a grouper equal to a list of length 1. Don't supply a list with a single grouper to avoid this warning. data = [(k, group_type(v, **group_kwargs)) for k, v in /home/***/_git/data-science-books/venv/lib/python3.8/site-packages/holoviews/core/data/pandas.py:231: FutureWarning: In a future version of pandas, a length 1 tuple will be returned when iterating over a groupby with a grouper equal to a list of length 1. Don't supply a list with a single grouper to avoid this warning. data = [(k, group_type(v, **group_kwargs)) for k, v in /home/***/_git/data-science-books/venv/lib/python3.8/site-packages/holoviews/core/data/pandas.py:231: FutureWarning: In a future version of pandas, a length 1 tuple will be returned when iterating over a groupby with a grouper equal to a list of length 1. Don't supply a list with a single grouper to avoid this warning. data = [(k, group_type(v, **group_kwargs)) for k, v in I have tried to adjust the 'by' param to include a list longer than 1. This stops the error, but does not produce the plot as expected. color_plot_line = pipeline.hvplot.line(x ='publish_date', by=['words', 'word_count'], y='word_count', title='Colors by Year') Sending a single value into 'by' produces the plot, but also the errors. Current plot (not well hydrated data): A: I experienced the exact same warning in my own data analytics code using the same modules (all modules up to date). However my charts where never broken and always come out as expected (except see legend image below) when supplying a list of length 1 or more based on a param to hvplot using "by". This code would produce the same warning using a param ListSelector widget when only 1 element was selected i.e. on startup. events = param.ListSelector(default=["F01-01"], objects=["F01-01","F01-02"], label="Event Selector") ... #On startup df has only 1 event in the Events column df.hvplot( x="Time (s)", y="height", by="Events" ) In the end I went with a loop and a simple filter plots = [] for event in self.events: df = df[df["Event"] == event] plots.append( df.hvplot( x="Time (s)", y="height", label="height", ) ) hv.Overlay(plots) .opts(width=1800, height=1000, legend_position="right") .relabel(", ".join(self.events)) Weird error in the legend when using "by" Weird error in the legend using hvplot and "by"
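As a stopgap, and assuming the warning really is only the deprecation notice raised from holoviews' internal groupby call rather than an error in your own pipeline, the specific FutureWarning can be silenced before plotting; upgrading hvplot/holoviews may also make it disappear. This does not change the plotted output.
import warnings

# Suppress only this pandas FutureWarning while the plots are built.
warnings.filterwarnings(
    "ignore",
    message="In a future version of pandas, a length 1 tuple will be returned",
    category=FutureWarning,
)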
Python HVPLOT In a future version of pandas a length 1 tuple will be returned when iterating over a groupby with a grouper equal to a list of length 1
I am working on a data visualization using Python's Pandas, hvplot, and Panel libraries. I am able to shape the dataframe as I want, but when it comes to plotting this in hvplot. I start getting the following errors: /home/****/_git/data-science-books/venv/lib/python3.8/site-packages/holoviews/core/data/pandas.py:231: FutureWarning: In a future version of pandas, a length 1 tuple will be returned when iterating over a groupby with a grouper equal to a list of length 1. Don't supply a list with a single grouper to avoid this warning. data = [(k, group_type(v, **group_kwargs)) for k, v in If I supply multiple params to the 'by' param in hvplot, I don't get errors, but the chart is broken. Any thoughts? The preprocessing steps clean a series of books in .txt files into a pandas dataframe that looks like the following: (each word in the book is stored in the 'words' column) words book_title publish_date author genre country_of_origin 0 riders Riders of the Purple Sage 1912 Zane Grey western united states 1 by Riders of the Purple Sage 1912 Zane Grey western united states 2 contents Riders of the Purple Sage 1912 Zane Grey western united states 26 illustrations Riders of the Purple Sage 1912 Zane Grey western united states 39 chapter Riders of the Purple Sage 1912 Zane Grey western united states Then we make the DF interactive for hvplot and panel. imaster = master.interactive() Create a pipeline for hvplotting. colors = ['white', 'red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet', 'purple'] pipeline = ( imaster[ (imaster.publish_date >= year_range_slider.value[0]) & (imaster.publish_date <= year_range_slider.value[1]) & (imaster.words.isin(colors)) ] .drop(['country_of_origin', 'book_title', 'author'], axis=1) .rename(columns={"genre":"word_count"}) .groupby(['publish_date','words']).count() ) RESULTING INTERACTIVE PLOT word_count publish_date words -700 blue 8 green 8 purple 13 red 11 violet 1 white 27 yellow 6 1908 blue 12 green 9 orange 2 purple 1 red 10 violet 1 white 13 yellow 7 1912 blue 31 green 28 purple 62 red 72 violet 2 white 114 yellow 2 It's when I attempt to plot this that I start to see issues. It appears that I get 2 identical errors each time I call hvplot. The first pair on the line plot, the second pair on the scatter plot, and the third pair when I combine them into 'color_plot' color_plot_line = pipeline.hvplot.line(x ='publish_date', by='words', y='word_count', title='Colors by Year') color_plot_scatter = pipeline.hvplot.scatter(x ='publish_date', by='words', y='word_count', title='Colors by Year') color_plot = color_plot_line * color_plot_scatter color_plot /home/***/_git/data-science-books/venv/lib/python3.8/site-packages/holoviews/core/data/pandas.py:231: FutureWarning: In a future version of pandas, a length 1 tuple will be returned when iterating over a groupby with a grouper equal to a list of length 1. Don't supply a list with a single grouper to avoid this warning. data = [(k, group_type(v, **group_kwargs)) for k, v in /home/***/_git/data-science-books/venv/lib/python3.8/site-packages/holoviews/core/data/pandas.py:231: FutureWarning: In a future version of pandas, a length 1 tuple will be returned when iterating over a groupby with a grouper equal to a list of length 1. Don't supply a list with a single grouper to avoid this warning. 
data = [(k, group_type(v, **group_kwargs)) for k, v in /home/***/_git/data-science-books/venv/lib/python3.8/site-packages/holoviews/core/data/pandas.py:231: FutureWarning: In a future version of pandas, a length 1 tuple will be returned when iterating over a groupby with a grouper equal to a list of length 1. Don't supply a list with a single grouper to avoid this warning. data = [(k, group_type(v, **group_kwargs)) for k, v in /home/***/_git/data-science-books/venv/lib/python3.8/site-packages/holoviews/core/data/pandas.py:231: FutureWarning: In a future version of pandas, a length 1 tuple will be returned when iterating over a groupby with a grouper equal to a list of length 1. Don't supply a list with a single grouper to avoid this warning. data = [(k, group_type(v, **group_kwargs)) for k, v in /home/***/_git/data-science-books/venv/lib/python3.8/site-packages/holoviews/core/data/pandas.py:231: FutureWarning: In a future version of pandas, a length 1 tuple will be returned when iterating over a groupby with a grouper equal to a list of length 1. Don't supply a list with a single grouper to avoid this warning. data = [(k, group_type(v, **group_kwargs)) for k, v in /home/***/_git/data-science-books/venv/lib/python3.8/site-packages/holoviews/core/data/pandas.py:231: FutureWarning: In a future version of pandas, a length 1 tuple will be returned when iterating over a groupby with a grouper equal to a list of length 1. Don't supply a list with a single grouper to avoid this warning. data = [(k, group_type(v, **group_kwargs)) for k, v in I have tried to adjust the 'by' param to include a list longer than 1. This stops the error, but does not produce the plot as expected. color_plot_line = pipeline.hvplot.line(x ='publish_date', by=['words', 'word_count'], y='word_count', title='Colors by Year') Sending a single value into 'by' produces the plot, but also the errors. Current plot (not well hydrated data):
[ "\nI experienced the exact same warning in my own data analytics code using the same modules (all modules up to date).\nHowever my charts where never broken and always come out as expected (except see legend image below) when supplying a list of length 1 or more based on a param to hvplot using \"by\".\n\nThis code would produce the same warning using a param ListSelector widget when only 1 element was selected i.e. on startup.\nevents = param.ListSelector(default=[\"F01-01\"], objects=[\"F01-01\",\"F01-02\"], label=\"Event Selector\")\n...\n#On startup df has only 1 event in the Events column\ndf.hvplot(\n x=\"Time (s)\",\n y=\"height\",\n by=\"Events\"\n)\n\nIn the end I went with a loop and a simple filter\nplots = []\nfor event in self.events:\n df = df[df[\"Event\"] == event]\n plots.append(\n df.hvplot(\n x=\"Time (s)\",\n y=\"height\",\n label=\"height\",\n )\n )\nhv.Overlay(plots)\n.opts(width=1800, height=1000, legend_position=\"right\")\n.relabel(\", \".join(self.events))\n\nWeird error in the legend when using \"by\"\nWeird error in the legend using hvplot and \"by\"\n" ]
[ 0 ]
[]
[]
[ "hvplot", "pandas", "python" ]
stackoverflow_0074283744_hvplot_pandas_python.txt
Q: CNN is not getting good accuracy using unseen data My cnn model is not performing well on my test set. I have trained the images on dark and white background, the image is cropped to eliminate other objects in the picture. My goal is to determine the position a person is facing on the bed. ImageDataGenerator was used for splitting and augmenting the data.The dataset for training contains 4800 images while the validation has 1500 images. I have 3 classes: Facing upward Facing left Facing Right The testing results gives me an accuracy of below 50% while the loss is 1.0 and above. This was evaluated using the model.evaluate INPUT_SHAPE = (250,150,1) traindata = ImageDataGenerator(rescale=1./255, shear_range=0.2,width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.2,rotation_range=45, horizontal_flip=False, vertical_flip=False, brightness_range=[0.3,2.0]) valdata = ImageDataGenerator(rescale=1./255) training_set = traindata.flow_from_directory(TRAIN_DIR, target_size=INPUT_SHAPE[:-1], shuffle=True,batch_size=BATCH_SIZE, color_mode='grayscale', class_mode='categorical') validation_set = valdata.flow_from_directory(VAL_DIR, target_size=INPUT_SHAPE[:-1], shuffle=False,batch_size=BATCH_SIZE, color_mode='grayscale', class_mode='categorical') This is the code for the model: model = Sequential() model.add(Conv2D(64, (3,3), activation='relu', padding='same', input_shape=INPUT_SHAPE)) model.add(Conv2D(64, (3,3), activation='relu', padding='same')) model.add(MaxPooling2D((2,2),strides=1)) model.add(Dropout(0.5)) model.add(Conv2D(32, (3,3), activation='relu', padding='same')) model.add(Conv2D(32, (3,3), activation='relu', padding='same')) model.add(MaxPooling2D((2,2),strides=1)) model.add(Dropout(0.5)) model.add(Flatten()) model.add(Dense(128, activation="relu")) # model.add(Dense(512, activation="relu")) # model.add(Dropout(0.5)) model.add(Dense(units=3, activation="softmax")) model.compile(optimizer=Adam(lr=0.001),loss='categorical_crossentropy',metrics=['accuracy']) history = model.fit(training_set, epochs = 100, validation_data = validation_set, callbacks=[tensorboard, earlyStop] ) P.S. I have tried most of the solutions that I searched online. Posting here was my last resort since I really can't fix this problem. I am not allowed to use pretrained models. different combination of neural network adding batchnormalization and regularization changing image size increasing the data count different optimizers with different learning rate A: You have overfitting problem, try to balance the images between the test and train data and have more layers in the model because it's and reduce dropout value. one more thing is you could try pretrained model on the same split you have now to check out the data integrity.
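A hedged sketch of the kind of changes the answer hints at: keep the same overall architecture, add batch normalisation, and lower the dropout rate. The hyperparameters here are illustrative assumptions, not tuned values, and this is not a guaranteed fix for the generalisation gap.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2D, MaxPooling2D, BatchNormalization,
                                     Dropout, Flatten, Dense)

INPUT_SHAPE = (250, 150, 1)

model = Sequential([
    Conv2D(64, (3, 3), activation='relu', padding='same', input_shape=INPUT_SHAPE),
    BatchNormalization(),
    Conv2D(64, (3, 3), activation='relu', padding='same'),
    MaxPooling2D((2, 2)),
    Dropout(0.25),                      # reduced from 0.5
    Conv2D(32, (3, 3), activation='relu', padding='same'),
    BatchNormalization(),
    MaxPooling2D((2, 2)),
    Dropout(0.25),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(3, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])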
CNN is not getting good accuracy using unseen data
My CNN model is not performing well on my test set. I have trained the images on dark and white backgrounds, and the images are cropped to eliminate other objects in the picture. My goal is to determine the position a person is facing on the bed. ImageDataGenerator was used for splitting and augmenting the data. The dataset for training contains 4800 images while the validation has 1500 images. I have 3 classes:
Facing upward
Facing left
Facing Right
The testing results give me an accuracy below 50% while the loss is 1.0 and above. This was evaluated using model.evaluate.
INPUT_SHAPE = (250,150,1)

traindata = ImageDataGenerator(rescale=1./255, shear_range=0.2, width_shift_range=0.1,
                               height_shift_range=0.1, zoom_range=0.2, rotation_range=45,
                               horizontal_flip=False, vertical_flip=False, brightness_range=[0.3,2.0])
valdata = ImageDataGenerator(rescale=1./255)

training_set = traindata.flow_from_directory(TRAIN_DIR, target_size=INPUT_SHAPE[:-1],
                                             shuffle=True, batch_size=BATCH_SIZE,
                                             color_mode='grayscale', class_mode='categorical')
validation_set = valdata.flow_from_directory(VAL_DIR, target_size=INPUT_SHAPE[:-1],
                                             shuffle=False, batch_size=BATCH_SIZE,
                                             color_mode='grayscale', class_mode='categorical')

This is the code for the model:
model = Sequential()
model.add(Conv2D(64, (3,3), activation='relu', padding='same', input_shape=INPUT_SHAPE))
model.add(Conv2D(64, (3,3), activation='relu', padding='same'))
model.add(MaxPooling2D((2,2), strides=1))
model.add(Dropout(0.5))
model.add(Conv2D(32, (3,3), activation='relu', padding='same'))
model.add(Conv2D(32, (3,3), activation='relu', padding='same'))
model.add(MaxPooling2D((2,2), strides=1))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(128, activation="relu"))
# model.add(Dense(512, activation="relu"))
# model.add(Dropout(0.5))
model.add(Dense(units=3, activation="softmax"))

model.compile(optimizer=Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])

history = model.fit(training_set,
                    epochs=100,
                    validation_data=validation_set,
                    callbacks=[tensorboard, earlyStop])

P.S. I have tried most of the solutions that I searched online. Posting here was my last resort since I really can't fix this problem. I am not allowed to use pretrained models. What I have tried:
different combinations of the neural network
adding batch normalization and regularization
changing the image size
increasing the data count
different optimizers with different learning rates
[ "You have overfitting problem, try to balance the images between the test and train data and have more layers in the model because it's and reduce dropout value.\none more thing is you could try pretrained model on the same split you have now to check out the data integrity.\n" ]
[ 0 ]
[]
[]
[ "conv_neural_network", "python" ]
stackoverflow_0074640413_conv_neural_network_python.txt
Q: python requests invalid parameter I'm trying to post a request to this API. Here is my code: import requests accesstoken = "74e41f9c-8ae6-4ebd-9568-f0c92e83bb54" authnn = "4ef2db2b-70ea-11ed-9f86-063d0d6fdfb5" def sender(number): smstext = "hello test now#" headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:107.0) Gecko/20100101 Firefox/107.0','accessToken': accesstoken,'Authorization': authnn} json_data = { 'content': smstext, 'contentType': 'TEXT', 'from': '+447860002234', 'to': number, } response = requests.post('https://api-sandbox.exmpl.io/v1/mms/', headers=headers, json=json_data) resp =(response.text) if '"acceptedTime":"' in resp: print("SENT OK | {} | {}" .format(number,resp)) print(json_data) time.sleep(10) else: print("ERROR => {}".format(resp)) print(json_data) exit() if __name__ == "__main__": nums = input("NUMBERS LIST : ") op = open(nums, "r") for i in op: for i in i.split(): sender(i) I get error ERROR => {"code":"***","message":"Invalid parameter - : to"} I printed the JSON to me I see no error : {'content': 'hello test now#', 'contentType': 'TEXT', 'from': '+447860002234', 'to': '+4113322422787'} When I do a POST request like this with preloaded data it works fine. import requests accesstoken = "74e41f9c-8ae6-4ebd-9568-f0c92e83bb54" authnn = "4ef2db2b-70ea-11ed-9f86-063d0d6fdfb5" def sender(number): smstext = "hello bebe test hada wewe hehe dede nene tete Ref#" headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:107.0) Gecko/20100101 Firefox/107.0','accessToken': accesstoken,'Authorization': authnn} json_data = { 'content': smstext, 'contentType': 'TEXT', 'from': '+4478623212234', 'to': '+4113322422787', } response = requests.post('https://api-sandbox.exmpl.io/v1/mms/', headers=headers, json=json_data) resp =(response.text) if '"acceptedTime":"' in resp: print("SENT OK | {} | {}" .format(number,resp)) print(json_data) time.sleep(10) else: print("ERROR => {}".format(resp)) print(json_data) exit() if __name__ == "__main__": nums = input("NUMBERS LIST : ") op = open(nums, "r") for i in op: for i in i.split(): sender(i) I'm expecting to load value in JSON with a for loop. A: I suspect your problem lies with this block of code: if __name__ == "__main__": nums = input("NUMBERS LIST : ") op = open(nums, "r") for i in op: for i in i.split(): sender(i) In that you are probably not calling the sender function with the correct entry from your text file for the "to" field. i.e. You are probably sending different data to sender than you think you are (not a mobile number in this case, and thus being rejected by the API). I would recommend running something like this, as a test, to confirm what list element you need to be sending as the "to" field. with open('nums', 'r') as f: for line in f: line_elements = line.strip().split() print(line_elements) With the code above you will see a printout of all lines in your numbers file (each line being a list of strings that are your 'columns' in your numbers file). You need to confirm which index is the number for the SMS to be sent to and then call your sender function with that index. e.g. 
If your text file has the "to" number as its first value in the space separated file then your code would look like this: with open('nums', 'r') as f: for line in f: line_elements = line.strip().split() sender(line_elements[0]) Or, if your nums file only has the mobile numbers in it (and nothing else at all), each on a new line, then this would work: with open('nums', 'r') as f: for number in f.readlines(): sender(number.strip())
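Building on the answer's point that the file may not contain what you expect, here is a hedged sketch that validates each line before calling sender(); it assumes one phone number per line in international +digits form, and reuses the sender() function from the question.
def load_numbers(path):
    """Yield cleaned phone numbers, one per line, skipping anything suspicious."""
    with open(path, "r") as fh:
        for raw in fh:
            number = raw.strip()
            if not number:
                continue  # skip blank lines
            if not number.startswith("+") or not number[1:].isdigit():
                print("skipping malformed entry: {!r}".format(number))
                continue
            yield number


if __name__ == "__main__":
    for number in load_numbers(input("NUMBERS LIST : ")):
        sender(number)  # sender() as defined in the question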
python requests invalid parameter
I'm trying to post a request to this API. Here is my code: import requests accesstoken = "74e41f9c-8ae6-4ebd-9568-f0c92e83bb54" authnn = "4ef2db2b-70ea-11ed-9f86-063d0d6fdfb5" def sender(number): smstext = "hello test now#" headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:107.0) Gecko/20100101 Firefox/107.0','accessToken': accesstoken,'Authorization': authnn} json_data = { 'content': smstext, 'contentType': 'TEXT', 'from': '+447860002234', 'to': number, } response = requests.post('https://api-sandbox.exmpl.io/v1/mms/', headers=headers, json=json_data) resp =(response.text) if '"acceptedTime":"' in resp: print("SENT OK | {} | {}" .format(number,resp)) print(json_data) time.sleep(10) else: print("ERROR => {}".format(resp)) print(json_data) exit() if __name__ == "__main__": nums = input("NUMBERS LIST : ") op = open(nums, "r") for i in op: for i in i.split(): sender(i) I get error ERROR => {"code":"***","message":"Invalid parameter - : to"} I printed the JSON to me I see no error : {'content': 'hello test now#', 'contentType': 'TEXT', 'from': '+447860002234', 'to': '+4113322422787'} When I do a POST request like this with preloaded data it works fine. import requests accesstoken = "74e41f9c-8ae6-4ebd-9568-f0c92e83bb54" authnn = "4ef2db2b-70ea-11ed-9f86-063d0d6fdfb5" def sender(number): smstext = "hello bebe test hada wewe hehe dede nene tete Ref#" headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:107.0) Gecko/20100101 Firefox/107.0','accessToken': accesstoken,'Authorization': authnn} json_data = { 'content': smstext, 'contentType': 'TEXT', 'from': '+4478623212234', 'to': '+4113322422787', } response = requests.post('https://api-sandbox.exmpl.io/v1/mms/', headers=headers, json=json_data) resp =(response.text) if '"acceptedTime":"' in resp: print("SENT OK | {} | {}" .format(number,resp)) print(json_data) time.sleep(10) else: print("ERROR => {}".format(resp)) print(json_data) exit() if __name__ == "__main__": nums = input("NUMBERS LIST : ") op = open(nums, "r") for i in op: for i in i.split(): sender(i) I'm expecting to load value in JSON with a for loop.
[ "I suspect your problem lies with this block of code:\nif __name__ == \"__main__\":\n nums = input(\"NUMBERS LIST : \")\n op = open(nums, \"r\")\n for i in op:\n for i in i.split():\n sender(i)\n\nIn that you are probably not calling the sender function with the correct entry from your text file for the \"to\" field. i.e. You are probably sending different data to sender than you think you are (not a mobile number in this case, and thus being rejected by the API).\nI would recommend running something like this, as a test, to confirm what list element you need to be sending as the \"to\" field.\nwith open('nums', 'r') as f:\n for line in f:\n line_elements = line.strip().split()\n print(line_elements)\n\nWith the code above you will see a printout of all lines in your numbers file (each line being a list of strings that are your 'columns' in your numbers file). You need to confirm which index is the number for the SMS to be sent to and then call your sender function with that index. e.g. If your text file has the \"to\" number as its first value in the space separated file then your code would look like this:\nwith open('nums', 'r') as f:\n for line in f:\n line_elements = line.strip().split()\n sender(line_elements[0])\n\nOr, if your nums file only has the mobile numbers in it (and nothing else at all), each on a new line, then this would work:\nwith open('nums', 'r') as f:\n for number in f.readlines():\n sender(number.strip())\n\n" ]
[ 0 ]
[ "The code may have other issues, but remove the comma after 'to':number see below\n json_data = {\n 'content': smstext,\n 'contentType': 'TEXT',\n 'from': '+447860002234',\n 'to': number \n }\n\n" ]
[ -2 ]
[ "json", "python", "python_requests" ]
stackoverflow_0074636276_json_python_python_requests.txt
Q: Serialize _wmi_object into JSON in Python3 I am having hard time with serializing _wmi_objects. I have tried with just json.dumps() and JsonPickle library without success getting Not JSON serializable errors or with JsonPickle some unneeded fields which can't be removed and null values for every entity.Tried hardcoding every entity in a dict literal and after that json.dumps() and it dumps only the first result. Also tried making the object into a dict which had the result below. This is what I have until now: import wmi, json def to_dict(obj): output ={} for key, item in obj.__dict__.items(): if isinstance(item, list): l = [] for item in item: d = to_dict(item) l.append(d) output[key] = l else: output[key] = item return output c = wmi.WMI( computer="host", user="user", password="password" ) for product in c.Win32_Product(): with open('products.json','w') as fp: fp.write(json.dumps(to_dict(product))) With the code above I am getting TypeError: Object of type CDispatch is not JSON serializable This the data that I'm getting when I print(product): instance of Win32_Product { AssignmentType = 1; Caption = "Microsoft Visual C++ 2019 X64 Additional Runtime - 14.22.27821"; Description = "Microsoft Visual C++ 2019 X64 Additional Runtime - 14.22.27821"; }; A: Since i just ran into this myself: product is a _wmi_object containing all kind of methods, classes and properties. The actual data is in the property properties (which already is a dict()) So something like: json.dumps(product.properties) Cheers
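A slightly more defensive sketch of the same idea, in case your wmi version stores metadata rather than values in .properties: read each property through getattr() and let json fall back to str() for anything it cannot encode. It also writes the file once after the loop; the question's version reopens products.json inside the loop, so only the last product survives. The connection arguments are the question's own placeholders.
import json
import wmi

c = wmi.WMI(computer="host", user="user", password="password")

products = []
for product in c.Win32_Product():
    # product.properties is keyed by property name; getattr() fetches the value.
    products.append({name: getattr(product, name) for name in product.properties})

with open("products.json", "w") as fp:
    # default=str turns dates/COM objects into strings instead of raising.
    json.dump(products, fp, default=str, indent=2)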
Serialize _wmi_object into JSON in Python3
I am having a hard time serializing _wmi_objects. I have tried plain json.dumps() and the JsonPickle library without success: I get "not JSON serializable" errors, or with JsonPickle some unneeded fields that can't be removed and null values for every entity. I also tried hardcoding every entity in a dict literal and then calling json.dumps(), but it dumps only the first result. I also tried turning the object into a dict, which gave the result below. This is what I have until now:
import wmi, json

def to_dict(obj):
    output = {}
    for key, item in obj.__dict__.items():
        if isinstance(item, list):
            l = []
            for item in item:
                d = to_dict(item)
                l.append(d)
            output[key] = l
        else:
            output[key] = item
    return output

c = wmi.WMI(
    computer="host",
    user="user",
    password="password"
)

for product in c.Win32_Product():
    with open('products.json','w') as fp:
        fp.write(json.dumps(to_dict(product)))

With the code above I am getting TypeError: Object of type CDispatch is not JSON serializable
This is the data that I'm getting when I print(product):
instance of Win32_Product
{
    AssignmentType = 1;
    Caption = "Microsoft Visual C++ 2019 X64 Additional Runtime - 14.22.27821";
    Description = "Microsoft Visual C++ 2019 X64 Additional Runtime - 14.22.27821";
};
[ "Since i just ran into this myself:\nproduct is a _wmi_object containing all kind of methods, classes and properties.\nThe actual data is in the property properties (which already is a dict())\nSo something like:\njson.dumps(product.properties)\n\nCheers\n" ]
[ 0 ]
[]
[]
[ "json", "object", "python", "serialization", "wmi" ]
stackoverflow_0058248984_json_object_python_serialization_wmi.txt
Q: Python Script to get text from excel file and write on an image I have an excel file that contain data.I want to write that on an image that have same name as an excel file.Little bit of help will be highly appreciated. from PIL import Image,ImageDraw,ImageFont import glob import os images=glob.glob("E:\Images/*.jpg") for img in images: images=Image.open(img) draw=ImageDraw.Draw(images) font=ImageFont.load_default() text=" Sensor Longitude Sensor Latitude" draw.text((0,240),text,(250,250,250),font=font) images.save(img) I have that code but it just write same text on all images.I want to write a unique text on an image. A: Care to provide WHAT unique text exactly? Right now it's the same text, because your text is just a hardcoded string. You can use f-string to customize it: name = "Bob" text = f"Hi {name}" Will give you text = "Hi Bob"
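A hedged sketch of the per-image lookup, assuming the spreadsheet can be read with pandas and has one row per image. The file path and the column names ('image', 'Sensor Longitude', 'Sensor Latitude') are hypothetical placeholders, since the question does not show the sheet layout.
import glob
import os

import pandas as pd
from PIL import Image, ImageDraw, ImageFont

df = pd.read_excel(r"E:\data.xlsx")        # hypothetical path and columns
rows = df.set_index("image")               # one row per image file name

font = ImageFont.load_default()
for path in glob.glob(r"E:\Images/*.jpg"):
    name = os.path.splitext(os.path.basename(path))[0]
    if name not in rows.index:
        continue                           # no spreadsheet row for this picture
    row = rows.loc[name]
    text = "Sensor Longitude: {}  Sensor Latitude: {}".format(
        row["Sensor Longitude"], row["Sensor Latitude"])
    img = Image.open(path)
    ImageDraw.Draw(img).text((0, 240), text, (250, 250, 250), font=font)
    img.save(path)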
Python Script to get text from excel file and write on an image
I have an Excel file that contains data. I want to write that data on the image that has the same name as the Excel file. A little bit of help would be highly appreciated.
from PIL import Image, ImageDraw, ImageFont
import glob
import os

images = glob.glob("E:\Images/*.jpg")
for img in images:
    images = Image.open(img)
    draw = ImageDraw.Draw(images)
    font = ImageFont.load_default()
    text = " Sensor Longitude Sensor Latitude"
    draw.text((0,240), text, (250,250,250), font=font)
    images.save(img)

I have that code, but it just writes the same text on all images. I want to write a unique text on each image.
[ "Care to provide WHAT unique text exactly?\nRight now it's the same text, because your text is just a hardcoded string.\nYou can use f-string to customize it:\nname = \"Bob\"\ntext = f\"Hi {name}\"\n\nWill give you\ntext = \"Hi Bob\"\n" ]
[ 0 ]
[]
[]
[ "python", "python_imaging_library" ]
stackoverflow_0074640947_python_python_imaging_library.txt
Q: mock flask-sqlalchemy query I'm getting an error of Module not found when trying to create a test function to mock the get method of sqlalchemy query (with pytest) example: from mock import patch @patch('flask_sqlalchemy._QueryProperty.__get__') def test_get_all(queryMock): assert True When running pytest i get an error: ModuleNotFoundError: No module named 'flask_sqlalchemy._QueryProperty' I'm using the 3.0.2 version of Flask-SQLAlchemy. So i just changed to version 2.5.1 and it worked. However i think would be good to use the latest version. Is there any other way to mock the sql-alchemy query that works with the latest versions? A: Use flask_sqlalchemy.model._QueryProperty.__get__ instead of flask_sqlalchemy._QueryProperty.__get__. This will resolve your issue as _QueryProperty class has been moved into model. from mock import patch @patch('flask_sqlalchemy.model._QueryProperty.__get__') def test_get_all(queryMock): assert True
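A hedged sketch of how the patched descriptor is typically used once the import path is fixed: Model.query then resolves to the mock's return_value, so you stub the exact call chain your code performs. The myapp.models.User model and the returned rows are hypothetical placeholders.
from unittest.mock import MagicMock, patch

from myapp.models import User  # hypothetical flask_sqlalchemy model


@patch("flask_sqlalchemy.model._QueryProperty.__get__")
def test_get_all(query_property_mock):
    fake_rows = [MagicMock(name="user")]
    # User.query -> query_property_mock.return_value, so stub the chain:
    query_property_mock.return_value.all.return_value = fake_rows

    assert User.query.all() == fake_rows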
mock flask-sqlalchemy query
I'm getting a "module not found" error when trying to create a test function that mocks the get method of the SQLAlchemy query (with pytest). Example:
from mock import patch

@patch('flask_sqlalchemy._QueryProperty.__get__')
def test_get_all(queryMock):
    assert True

When running pytest I get an error:
ModuleNotFoundError: No module named 'flask_sqlalchemy._QueryProperty'

I'm using version 3.0.2 of Flask-SQLAlchemy. So I just changed to version 2.5.1 and it worked. However, I think it would be good to use the latest version. Is there any other way to mock the SQLAlchemy query that works with the latest versions?
[ "Use flask_sqlalchemy.model._QueryProperty.__get__ instead of flask_sqlalchemy._QueryProperty.__get__. This will resolve your issue as _QueryProperty class has been moved into model.\nfrom mock import patch \n@patch('flask_sqlalchemy.model._QueryProperty.__get__')\ndef test_get_all(queryMock):\n assert True\n\n" ]
[ 1 ]
[]
[]
[ "mocking", "pytest", "python", "sqlalchemy" ]
stackoverflow_0074546855_mocking_pytest_python_sqlalchemy.txt
Q: Why is this saying error? elif-else how to order this? enter image description here I'm making a registration code, and it keeps saying error. What should I change? I tried substituting it with every combination possible. I even took the space out, and yes there's a if code above. enter image description here A: The indents in the str(input.. lines are wrong. If you put indents at the beginning of these lines, the problem is solved. A: I think the issue is that you haven't indented the inputs. When the code is running, it would assume the if loop has ended and then causes a syntax error due to the next elif statement. If you want the inputs to be called when number == 2, just indent the 4 lines to be the same as the first print statement. Hope this helps!
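Since the actual code is only visible in the screenshots, here is a hypothetical sketch of the indentation the answers describe; the prompts and option numbers are made up, only the block structure matters.
number = int(input("Choose an option (1 = log in, 2 = register): "))

if number == 1:
    print("Logging in...")
elif number == 2:
    print("Registration")
    # Everything that should only run for option 2 must be indented
    # under this elif, including the input() calls:
    username = str(input("Username: "))
    password = str(input("Password: "))
    email = str(input("E-mail: "))
    print("Account created for", username)
else:
    print("Unknown option")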
Why is this saying error? elif-else how to order this?
(screenshot of the code attached) I'm making a registration program, and it keeps raising an error. What should I change? I tried substituting it with every combination possible. I even took the space out, and yes, there's an if statement above. (screenshot of the error attached)
[ "The indents in the str(input.. lines are wrong. If you put indents at the beginning of these lines, the problem is solved.\n", "I think the issue is that you haven't indented the inputs. When the code is running, it would assume the if loop has ended and then causes a syntax error due to the next elif statement. If you want the inputs to be called when number == 2, just indent the 4 lines to be the same as the first print statement.\nHope this helps!\n" ]
[ 0, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074640941_python_python_3.x.txt
Q: Using conditional statement in ipywidgets I am still very new to coding and I am playing around with ipywidgets. How do I implement a conditional statement based on the answer of my first widget. For example, if the user selects Yes, it moves on. But if the user selects No, it goes into the second part of the widget. friends = widgets.ToggleButtons( options= ["Yes.", "Just me!"]) who = widgets.BoundedIntText( value=0, min=0, max=10, step=1, description='How many?:', disabled=False,) I have this, but I have no idea where to start with the conditional statement. Any help would be appreciated thanks! A: This solution works for ipywidgets 3.8 or higher. The second widget appears only when the value of the first widget == 'Yes.': import ipywidgets as widgets from ipywidgets import Output # create widgets friends = widgets.ToggleButtons( options= ["Just me!", "Yes."]) who = widgets.BoundedIntText( value=0, min=0, max=10, step=1, description='How many?:', disabled=False,) # display the first widget display(friends) # intialize the output - second widget out = Output() def changed(change): ''' Monitor change in the first widget ''' global out if friends.value == 'Just me!': out.clear_output() #clear output out = Output() # redefine output else: out.append_display_data(who) display(out) # monitor the friends widget for changes friends.observe(changed, 'value')
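An alternative, hedged sketch: instead of clearing and recreating Output objects, keep the second widget on screen but hidden, and toggle its layout.display from the observer. This assumes the counter should appear when "Yes." is selected.
import ipywidgets as widgets
from IPython.display import display

friends = widgets.ToggleButtons(options=["Yes.", "Just me!"])
who = widgets.BoundedIntText(value=0, min=0, max=10, step=1,
                             description="How many?:")
who.layout.display = "none"          # hidden until friends are confirmed


def on_change(change):
    # Show the counter only when the user says friends are coming.
    who.layout.display = "" if change["new"] == "Yes." else "none"


friends.observe(on_change, names="value")
display(friends, who)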
Using conditional statement in ipywidgets
I am still very new to coding and I am playing around with ipywidgets. How do I implement a conditional statement based on the answer of my first widget? For example, if the user selects Yes, it moves on. But if the user selects No, it goes into the second part of the widget.
friends = widgets.ToggleButtons(
    options= ["Yes.", "Just me!"])

who = widgets.BoundedIntText(
    value=0,
    min=0,
    max=10,
    step=1,
    description='How many?:',
    disabled=False,)

I have this, but I have no idea where to start with the conditional statement. Any help would be appreciated, thanks!
[ "This solution works for ipywidgets 3.8 or higher. The second widget appears only when the value of the first widget == 'Yes.':\nimport ipywidgets as widgets\nfrom ipywidgets import Output\n\n# create widgets\nfriends = widgets.ToggleButtons(\n options= [\"Just me!\", \"Yes.\"])\n \nwho = widgets.BoundedIntText(\n value=0,\n min=0,\n max=10,\n step=1,\n description='How many?:',\n disabled=False,)\n\n# display the first widget\ndisplay(friends) \n\n# intialize the output - second widget\nout = Output() \n\ndef changed(change):\n '''\n Monitor change in the first widget\n '''\n global out\n if friends.value == 'Just me!': \n out.clear_output() #clear output\n out = Output() # redefine output\n\n else:\n out.append_display_data(who)\n display(out)\n \n# monitor the friends widget for changes\nfriends.observe(changed, 'value')\n\n" ]
[ 0 ]
[]
[]
[ "conditional_statements", "python", "widget" ]
stackoverflow_0074637156_conditional_statements_python_widget.txt
Q: How to do function overloading in Python? I want to implement function overloading in Python. I know by default Python does not support overloading. That is what I am asking this question. I have the following code: def parse(): results = doSomething() return results x = namedtuple('x',"a b c") def parse(query: str, data: list[x]): results = doSomethingElse(query, data) return results The only solution I can think of is to check the arguments: def parse(query: str, data: list[x]): if query is None and data is None: results = doSomething() return results else: results = doSomethingElse(query, data) return results Is it possible to do function overloading in Python like in Java (i.e. with out the branching)? Is there is a clear way using a decorator or some library? A: There is the typing.overload decorator used for properly annotating a callable with two or more distinct call signatures. But it still requires exactly one actual implementation. The usage would be as follows: from typing import overload @overload def parse(query: None = None, data: None = None) -> None: ... @overload def parse(query: str, data: list[object]) -> None: ... def parse(query: str | None = None, data: list[object] | None = None) -> None: if query is None and data is None: print("something") else: print(f"something else with {query=} and {data=}") parse() # something parse("foo", [1]) # something else with query='foo' and data=[1] Note that the ellipses ... are meant literally, i.e. that is all that should be in the "body" of that function overload. That is how it is done in Python. As mentioned in the comments already, there is no syntax for literally writing overloaded function implementations. If you write two implementations, the last one will simply override the first. Even if you could build something like that with syntactically pleasing decorators, I would probably advise against it because it will likely confuse everyone else since it is not how Python was designed. Also, if you have a lot of complex overloads that all require different implementation logic, I would argue this is probably just bad design. And if the branches are clear/simple as in your example, then I see no problem with having them in one function body.
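For completeness, the standard library also offers functools.singledispatch, which gives real runtime dispatch on the type of the first argument rather than the purely static typing.overload annotations. A hedged sketch follows; do_something and do_something_else are placeholders for the question's doSomething/doSomethingElse.
from functools import singledispatch


@singledispatch
def parse(query=None, data=None):
    # Fallback, e.g. parse(None): no query given.
    return do_something()                   # placeholder for doSomething()


@parse.register
def _(query: str, data: list):
    # Runs whenever the first positional argument is a str.
    return do_something_else(query, data)   # placeholder for doSomethingElse()


# Usage: singledispatch needs at least one positional argument,
# so the "no arguments" case is written as parse(None).
# parse(None)
# parse("select *", [x(1, 2, 3)])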
How to do function overloading in Python?
I want to implement function overloading in Python. I know that by default Python does not support overloading, which is why I am asking this question. I have the following code:
def parse():
    results = doSomething()
    return results

x = namedtuple('x', "a b c")

def parse(query: str, data: list[x]):
    results = doSomethingElse(query, data)
    return results

The only solution I can think of is to check the arguments:
def parse(query: str, data: list[x]):
    if query is None and data is None:
        results = doSomething()
        return results
    else:
        results = doSomethingElse(query, data)
        return results

Is it possible to do function overloading in Python like in Java (i.e. without the branching)? Is there a clear way using a decorator or some library?
[ "There is the typing.overload decorator used for properly annotating a callable with two or more distinct call signatures. But it still requires exactly one actual implementation. The usage would be as follows:\nfrom typing import overload\n\n\n@overload\ndef parse(query: None = None, data: None = None) -> None:\n ...\n\n\n@overload\ndef parse(query: str, data: list[object]) -> None:\n ...\n\n\ndef parse(query: str | None = None, data: list[object] | None = None) -> None:\n if query is None and data is None:\n print(\"something\")\n else:\n print(f\"something else with {query=} and {data=}\")\n\n\nparse() # something\nparse(\"foo\", [1]) # something else with query='foo' and data=[1]\n\nNote that the ellipses ... are meant literally, i.e. that is all that should be in the \"body\" of that function overload.\nThat is how it is done in Python. As mentioned in the comments already, there is no syntax for literally writing overloaded function implementations. If you write two implementations, the last one will simply override the first.\nEven if you could build something like that with syntactically pleasing decorators, I would probably advise against it because it will likely confuse everyone else since it is not how Python was designed. Also, if you have a lot of complex overloads that all require different implementation logic, I would argue this is probably just bad design. And if the branches are clear/simple as in your example, then I see no problem with having them in one function body.\n" ]
[ 2 ]
[]
[]
[ "decorator", "overloading", "python", "python_3.x", "python_decorators" ]
stackoverflow_0074637458_decorator_overloading_python_python_3.x_python_decorators.txt
Q: How to change float('inf') representation? I want to change float('inf') representation -> ∞, can u help me with it? I try to use this, but it didn't work class Inf(float('inf')): def __repr__(self) -> str: return '∞' inf=Inf() those, when I use print() I want to see '∞' instead 'inf' I want result like in from sympy import oo but get it with oop A: drop the ('inf') in class inheritance. You inherit from classes, not their instances. Then in __repr__ check if your float is infinity. class FloatWithGlyphInf(float): def __repr__(self) -> str: if self == float('inf'): return '∞' else: return super().__repr__() print(FloatWithGlyphInf('inf')) # ∞ print(FloatWithGlyphInf('0')) # 0.0
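One caveat worth adding to the subclass approach, shown here with the answer's class: arithmetic on the subclass returns a plain float again, so the glyph only survives on objects you construct yourself (sympy's oo does not have this limitation).
class FloatWithGlyphInf(float):
    def __repr__(self) -> str:
        return '∞' if self == float('inf') else super().__repr__()


inf = FloatWithGlyphInf('inf')
print(inf)           # ∞
print([inf])         # [∞]   (containers use repr of their elements)
print(inf + 1)       # inf   <- float.__add__ returns a plain float again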
How to change float('inf') representation?
I want to change the float('inf') representation to ∞. Can you help me with it? I tried to use this, but it didn't work:
class Inf(float('inf')):
    def __repr__(self) -> str:
        return '∞'

inf = Inf()

That is, when I use print() I want to see '∞' instead of 'inf'. I want a result like from sympy import oo gives, but achieved with OOP.
[ "drop the ('inf') in class inheritance. You inherit from classes, not their instances. Then in __repr__ check if your float is infinity.\nclass FloatWithGlyphInf(float):\n def __repr__(self) -> str:\n if self == float('inf'):\n return '∞'\n else:\n return super().__repr__()\n\nprint(FloatWithGlyphInf('inf')) # ∞\nprint(FloatWithGlyphInf('0')) # 0.0\n\n" ]
[ 1 ]
[]
[]
[ "oop", "python" ]
stackoverflow_0074641037_oop_python.txt
Q: idxs = cv2.dnn.NMSBoxes(boxes, confidence, MIN_CORP, NMS_THRESH) TypeError: Can't parse 'scores'. Input argument doesn't provide sequence protocol help meee TT i received error in my coding of social distancing detection system using webcam. i done search the error but there is nothing difference with my code TT i wite my coding using notepad++ and run using command prompt. below is my error : C:\Users\User\Downloads\Social_Distancing_Detection_Real_Time>python Run.py [INFO] loading YOLO from disk... [INFO] setting preferable backend and target to CUDA... [INFO] accessing video stream... [ WARN:0] global D:\a\opencv-python\opencv-python\opencv\modules\dnn\src\dnn.cpp (1447) cv::dnn::dnn4_v20211004::Net::Impl::setUpNet DNN module was not built with CUDA backend; switching to CPU Traceback (most recent call last): File "C:\Users\User\Downloads\Social_Distancing_Detection_Real_Time\Run.py", line 77, in <module> results = detect_people(frame, net, ln, File "C:\Users\User\Downloads\Social_Distancing_Detection_Real_Time\mylib\detection.py", line 58, in detect_people idxs = cv2.dnn.NMSBoxes(boxes, confidence, MIN_CORP, NMS_THRESH) TypeError: Can't parse 'scores'. Input argument doesn't provide sequence protocol [ WARN:1] global D:\a\opencv-python\opencv-python\opencv\modules\videoio\src\cap_msmf.cpp (438) `anonymous-namespace'::SourceReaderCB::~SourceReaderCB terminating async callback my error below here is my full code of file detection.py #import the necessary packages from .config import NMS_THRESH, MIN_CORP, People_Counter import numpy as np import cv2 def detect_people(frame, net, In, personIdx = 0): #grab the dimensions of the frame and initialize the list of results (H, W) = frame.shape[:2] results = [] #construct a blob from the input frame and then perform a forward #pass of the YOLO object detector, giving us our boarding boxes #add associated probabilities blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False) net.setInput(blob) layerOutputs = net.forward(In) #initialize out lists of detected bounding boxes, centroids and #confidence, respectively boxes = [] centroids = [] confidences = [] #loop over each of the layer outputs for output in layerOutputs: #for detection in output; for detection in output: #extract the class ID and confidence[i.e., probability) #of the current object detection scores = detection[5:] classID = np.argmax(scores) confidence = scores[classID] #filter detections by (1) ensuring that the object #detected was a person and (2) that the minimum #confidence is met if classID == personIdx and confidence > MIN_CORP: #scale the bounding box coordinates back relative to #the size of the image, keeping in mind that YOLO #actually returns the center (x,y) -coordinates of #the bounding box followed by the boxes' width and height box = detection[0:4] * np.array([W, H, W, H]) (centerX, centerY, width, height) = box.astype("int") #use the center (x,y) -coordinates to derive the top #and left corner of the bounding box x = int(centerX - (width / 2)) y = int(centerY - (height / 2)) #update our list of bounding box coordinates, #centroids and confidences boxes.append([x, y, int(width), int(height)]) centroids.append((centerX, centerY)) confidences.append(float(confidence)) #apply non-maxim suppression to suppress weak, overlapping bounding boxes idxs = cv2.dnn.NMSBoxes(boxes, confidence, MIN_CORP, NMS_THRESH) #print('Total people count:', len(idxs)) #compute the total people counter #if People_Counter: #human_count = "Human count: 
{}".format(len(idxs)) #cv2.putText(frame, human_count, (470, frame.shape[0] - 75), cv2.FONT_HERSHEY_SIMPLEX, 0.70, (0, 0, 0), 2) #ensure at least one detection exists if len(idxs) > 0: #loop over the indexes we are keeping for i in idxs.flatten(): #extract the bounding box coordinates (x, y) = (boxes[i][0], boxes[i][1]) (w, h) = (boxes[i][2], boxes[i][3]) #update our results list to consist of the person #prediction probability, bounding box coordinates, #and the centroids r = (confidences[i], (x, y, x + w, y + h), centroids[i]) results.append(r) #return the list of the results return results A: The answer to your problem (as usually) likes in response from the interpreter: TypeError: Can't parse 'scores'. Input argument doesn't provide sequence protocol scores is the second argument to cv2.dnn.NMSBoxes which in your case is confidence. confidence is a single number, you can't iterate over it. You've made a typo and probably you wanted to pass confidences which is a list. Change your code to: idxs = cv2.dnn.NMSBoxes(boxes, confidences, MIN_CORP, NMS_THRESH)
idxs = cv2.dnn.NMSBoxes(boxes, confidence, MIN_CORP, NMS_THRESH) TypeError: Can't parse 'scores'. Input argument doesn't provide sequence protocol
help meee TT i received error in my coding of social distancing detection system using webcam. i done search the error but there is nothing difference with my code TT i wite my coding using notepad++ and run using command prompt. below is my error : C:\Users\User\Downloads\Social_Distancing_Detection_Real_Time>python Run.py [INFO] loading YOLO from disk... [INFO] setting preferable backend and target to CUDA... [INFO] accessing video stream... [ WARN:0] global D:\a\opencv-python\opencv-python\opencv\modules\dnn\src\dnn.cpp (1447) cv::dnn::dnn4_v20211004::Net::Impl::setUpNet DNN module was not built with CUDA backend; switching to CPU Traceback (most recent call last): File "C:\Users\User\Downloads\Social_Distancing_Detection_Real_Time\Run.py", line 77, in <module> results = detect_people(frame, net, ln, File "C:\Users\User\Downloads\Social_Distancing_Detection_Real_Time\mylib\detection.py", line 58, in detect_people idxs = cv2.dnn.NMSBoxes(boxes, confidence, MIN_CORP, NMS_THRESH) TypeError: Can't parse 'scores'. Input argument doesn't provide sequence protocol [ WARN:1] global D:\a\opencv-python\opencv-python\opencv\modules\videoio\src\cap_msmf.cpp (438) `anonymous-namespace'::SourceReaderCB::~SourceReaderCB terminating async callback my error below here is my full code of file detection.py #import the necessary packages from .config import NMS_THRESH, MIN_CORP, People_Counter import numpy as np import cv2 def detect_people(frame, net, In, personIdx = 0): #grab the dimensions of the frame and initialize the list of results (H, W) = frame.shape[:2] results = [] #construct a blob from the input frame and then perform a forward #pass of the YOLO object detector, giving us our boarding boxes #add associated probabilities blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False) net.setInput(blob) layerOutputs = net.forward(In) #initialize out lists of detected bounding boxes, centroids and #confidence, respectively boxes = [] centroids = [] confidences = [] #loop over each of the layer outputs for output in layerOutputs: #for detection in output; for detection in output: #extract the class ID and confidence[i.e., probability) #of the current object detection scores = detection[5:] classID = np.argmax(scores) confidence = scores[classID] #filter detections by (1) ensuring that the object #detected was a person and (2) that the minimum #confidence is met if classID == personIdx and confidence > MIN_CORP: #scale the bounding box coordinates back relative to #the size of the image, keeping in mind that YOLO #actually returns the center (x,y) -coordinates of #the bounding box followed by the boxes' width and height box = detection[0:4] * np.array([W, H, W, H]) (centerX, centerY, width, height) = box.astype("int") #use the center (x,y) -coordinates to derive the top #and left corner of the bounding box x = int(centerX - (width / 2)) y = int(centerY - (height / 2)) #update our list of bounding box coordinates, #centroids and confidences boxes.append([x, y, int(width), int(height)]) centroids.append((centerX, centerY)) confidences.append(float(confidence)) #apply non-maxim suppression to suppress weak, overlapping bounding boxes idxs = cv2.dnn.NMSBoxes(boxes, confidence, MIN_CORP, NMS_THRESH) #print('Total people count:', len(idxs)) #compute the total people counter #if People_Counter: #human_count = "Human count: {}".format(len(idxs)) #cv2.putText(frame, human_count, (470, frame.shape[0] - 75), cv2.FONT_HERSHEY_SIMPLEX, 0.70, (0, 0, 0), 2) #ensure at least one detection 
exists if len(idxs) > 0: #loop over the indexes we are keeping for i in idxs.flatten(): #extract the bounding box coordinates (x, y) = (boxes[i][0], boxes[i][1]) (w, h) = (boxes[i][2], boxes[i][3]) #update our results list to consist of the person #prediction probability, bounding box coordinates, #and the centroids r = (confidences[i], (x, y, x + w, y + h), centroids[i]) results.append(r) #return the list of the results return results
[ "The answer to your problem (as usually) likes in response from the interpreter:\nTypeError: Can't parse 'scores'. Input argument doesn't provide sequence protocol\n\nscores is the second argument to cv2.dnn.NMSBoxes which in your case is confidence. confidence is a single number, you can't iterate over it. You've made a typo and probably you wanted to pass confidences which is a list.\nChange your code to:\nidxs = cv2.dnn.NMSBoxes(boxes, confidences, MIN_CORP, NMS_THRESH)\n" ]
[ 0 ]
[]
[]
[ "deep_learning", "detection", "opencv", "python", "yolo" ]
stackoverflow_0070235839_deep_learning_detection_opencv_python_yolo.txt
Q: Raise TimeoutException(message, screen, stacktrace) I new to python and selenium in general and i was trying to do a project based on it. code example: options = Options() options.headless = True service_ = Service(executable_path = 'C:/Users/Downloads/chromedriver_win32/chromedriver.exe') driver = webdriver.Chrome(service = service_, options = options) wait = WebDriverWait(driver, 30) driver.get(url_2) manga_1 = driver.find_elements(By.XPATH, "//div[@class = 'item-summary']") manga_1 = wait.until(EC.presence_of_element_located((By.XPATH, "//div[@class = 'item-summary']"))) When I was trying to run the code it gave me a error: raise TimeoutException(message, screen, stacktrace) selenium.common.exceptions.TimeoutException: Message: Stacktrace: Backtrace: Ordinal0 [0x00ABACD3+2075859] Ordinal0 [0x00A4EE61+1633889] Ordinal0 [0x0094B7BD+571325] Ordinal0 [0x0097AC2F+764975] Ordinal0 [0x0097AE1B+765467] Ordinal0 [0x009AD0F2+970994] Ordinal0 [0x00997364+881508] Ordinal0 [0x009AB56A+963946] Ordinal0 [0x00997136+880950] Ordinal0 [0x0096FEFD+720637] Ordinal0 [0x00970F3F+724799] GetHandleVerifier [0x00D6EED2+2769538] GetHandleVerifier [0x00D60D95+2711877] GetHandleVerifier [0x00B4A03A+521194] GetHandleVerifier [0x00B48DA0+516432] Ordinal0 [0x00A5682C+1665068] Ordinal0 [0x00A5B128+1683752] Ordinal0 [0x00A5B215+1683989] Ordinal0 [0x00A66484+1729668] BaseThreadInitThunk [0x76497BA9+25] RtlInitializeExceptionChain [0x778EBB9B+107] RtlClearBits [0x778EBB1F+191] Process finished with exit code 1 I have tried fixing it by looking at youtube, but I am stuck and don't know what could it be. A: In the webpage there are numerous item summaries. To extract the texts from the item summaries you have to induce WebDriverWait for visibility_of_all_elements_located() and using List Comprehension you can use either of the following locator strategies: Using CSS_SELECTOR: driver.get("https://1stkissmanga.io/") print([my_elem.text for my_elem in WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, "div.item-summary")))]) driver.quit() Using XPATH: driver.get("https://1stkissmanga.io/") print([my_elem.text for my_elem in WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.XPATH, "//div[@class='item-summary']")))]) driver.quit() Console Output: ['Martial Peak\n3.2\nChapter 2829\nChapter 2828', 'Tyrant Daddy’s Petite Bag\n5\nChapter 39\nChapter 38', 'If I Die, I will Be Invincible\n4.3\nChapter 39\nChapter 38', 'Be Jealous\n4\nChapter 24\nChapter 23', 'Deal With A Bad Boy\n5\nChapter 23\nChapter 22', 'I’m a Villainess But I Became a Mother\n4.1\nChapter 25.5\nChapter 25', 'Sunset Boulevard\n4\nChapter 20\nChapter 19', 'Elena Evoy Observation Diary\n4.5\nChapter 67\nChapter 66', 'Her Peculiar Visitor\n4.5\nChapter 59\nChapter 58 November 25, 2022', 'The Ex\n4.1\nChapter 117\nChapter 116', 'Love Hug\n3.5\nChapter 49\nChapter 48', 'Liar\n3.4\nChapter 43\nChapter 42', 'Usami’s Little Secret!\n4\nChapter 24\nChapter 23', 'Revenge Against the Immoral\n3.4\nChapter 60\nChapter 59', 'The Wounded Devil\n2.8\nChapter 23\nChapter 22', 'The Double Life of a Daydreaming Actress\n3.3\nChapter 62\nChapter 61', 'HOT\nDestined to be Empress\n3.7\nChapter 80\nChapter 79', 'The Widowed Empress Needs Her Romance\n4.4\nChapter 58\nChapter 57', 'Brother Mode\n3.6\nChapter 75\nChapter 74', 'Super Gene\n3.8\nChapter 83\nChapter 82'] Note : You have to add the following imports : from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from 
selenium.webdriver.support import expected_conditions as EC
Raise TimeoutException(message, screen, stacktrace)
I new to python and selenium in general and i was trying to do a project based on it. code example: options = Options() options.headless = True service_ = Service(executable_path = 'C:/Users/Downloads/chromedriver_win32/chromedriver.exe') driver = webdriver.Chrome(service = service_, options = options) wait = WebDriverWait(driver, 30) driver.get(url_2) manga_1 = driver.find_elements(By.XPATH, "//div[@class = 'item-summary']") manga_1 = wait.until(EC.presence_of_element_located((By.XPATH, "//div[@class = 'item-summary']"))) When I was trying to run the code it gave me a error: raise TimeoutException(message, screen, stacktrace) selenium.common.exceptions.TimeoutException: Message: Stacktrace: Backtrace: Ordinal0 [0x00ABACD3+2075859] Ordinal0 [0x00A4EE61+1633889] Ordinal0 [0x0094B7BD+571325] Ordinal0 [0x0097AC2F+764975] Ordinal0 [0x0097AE1B+765467] Ordinal0 [0x009AD0F2+970994] Ordinal0 [0x00997364+881508] Ordinal0 [0x009AB56A+963946] Ordinal0 [0x00997136+880950] Ordinal0 [0x0096FEFD+720637] Ordinal0 [0x00970F3F+724799] GetHandleVerifier [0x00D6EED2+2769538] GetHandleVerifier [0x00D60D95+2711877] GetHandleVerifier [0x00B4A03A+521194] GetHandleVerifier [0x00B48DA0+516432] Ordinal0 [0x00A5682C+1665068] Ordinal0 [0x00A5B128+1683752] Ordinal0 [0x00A5B215+1683989] Ordinal0 [0x00A66484+1729668] BaseThreadInitThunk [0x76497BA9+25] RtlInitializeExceptionChain [0x778EBB9B+107] RtlClearBits [0x778EBB1F+191] Process finished with exit code 1 I have tried fixing it by looking at youtube, but I am stuck and don't know what could it be.
[ "In the webpage there are numerous item summaries.\nTo extract the texts from the item summaries you have to induce WebDriverWait for visibility_of_all_elements_located() and using List Comprehension you can use either of the following locator strategies:\n\nUsing CSS_SELECTOR:\ndriver.get(\"https://1stkissmanga.io/\")\nprint([my_elem.text for my_elem in WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, \"div.item-summary\")))])\ndriver.quit()\n\n\nUsing XPATH:\ndriver.get(\"https://1stkissmanga.io/\")\nprint([my_elem.text for my_elem in WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.XPATH, \"//div[@class='item-summary']\")))])\ndriver.quit()\n\n\nConsole Output:\n['Martial Peak\\n3.2\\nChapter 2829\\nChapter 2828', 'Tyrant Daddy’s Petite Bag\\n5\\nChapter 39\\nChapter 38', 'If I Die, I will Be Invincible\\n4.3\\nChapter 39\\nChapter 38', 'Be Jealous\\n4\\nChapter 24\\nChapter 23', 'Deal With A Bad Boy\\n5\\nChapter 23\\nChapter 22', 'I’m a Villainess But I Became a Mother\\n4.1\\nChapter 25.5\\nChapter 25', 'Sunset Boulevard\\n4\\nChapter 20\\nChapter 19', 'Elena Evoy Observation Diary\\n4.5\\nChapter 67\\nChapter 66', 'Her Peculiar Visitor\\n4.5\\nChapter 59\\nChapter 58 November 25, 2022', 'The Ex\\n4.1\\nChapter 117\\nChapter 116', 'Love Hug\\n3.5\\nChapter 49\\nChapter 48', 'Liar\\n3.4\\nChapter 43\\nChapter 42', 'Usami’s Little Secret!\\n4\\nChapter 24\\nChapter 23', 'Revenge Against the Immoral\\n3.4\\nChapter 60\\nChapter 59', 'The Wounded Devil\\n2.8\\nChapter 23\\nChapter 22', 'The Double Life of a Daydreaming Actress\\n3.3\\nChapter 62\\nChapter 61', 'HOT\\nDestined to be Empress\\n3.7\\nChapter 80\\nChapter 79', 'The Widowed Empress Needs Her Romance\\n4.4\\nChapter 58\\nChapter 57', 'Brother Mode\\n3.6\\nChapter 75\\nChapter 74', 'Super Gene\\n3.8\\nChapter 83\\nChapter 82']\n\n\nNote : You have to add the following imports :\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\n\n\n" ]
[ 0 ]
[]
[]
[ "python", "selenium", "selenium_webdriver" ]
stackoverflow_0074637137_python_selenium_selenium_webdriver.txt
Q: What is the right order for defining decorators in django views I want to set more than one decorator on my django function view. The problem is that I can't figure out what the order of the decorators should be. For example, this is the view I have: @permission_classes([IsAuthenticated]) @api_view(["POST"]) def logout(request): pass In this case, the first decorator never applies, neither when the request is POST nor when it is GET. When I change the order to this: @api_view(["POST"]) @permission_classes([IsAuthenticated]) def logout(request): pass the last decorator applies before the first one, which is not the order that I want. I want the decorator @api_view(["POST"]) to be applied first, and then @permission_classes([IsAuthenticated]). How should I do that? A: This is the correct ordering: 'api_view' treats the view as an API view, and the decorator is defined by rest_framework. from rest_framework.decorators import api_view, permission_classes @api_view(["POST"]) @permission_classes([IsAuthenticated]) def logout(request): pass
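A short illustration of why the order matters (a minimal sketch, not taken from the original post; the announce decorator is a made-up toy): Python applies stacked decorators bottom-up, so the decorator written closest to the function wraps it first and the topmost one ends up outermost. That is why @api_view goes on top: @permission_classes annotates the plain view first, and @api_view then wraps the already-annotated function, which is the arrangement rest_framework expects.

def announce(label):
    # toy decorator factory, used only to show the wrapping order
    def wrap(fn):
        def inner(*args, **kwargs):
            print(f"entering {label}")
            return fn(*args, **kwargs)
        return inner
    return wrap

@announce("top")      # applied last, runs first (outermost)
@announce("bottom")   # applied first, sits closest to the function
def view():
    print("view body")

view()
# prints:
# entering top
# entering bottom
# view body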
What is the right order for defining decorators in django views
I want to set more than one decorator on my django function view. The problem is that I can't figure out what the order of the decorators should be. For example, this is the view I have: @permission_classes([IsAuthenticated]) @api_view(["POST"]) def logout(request): pass In this case, the first decorator never applies, neither when the request is POST nor when it is GET. When I change the order to this: @api_view(["POST"]) @permission_classes([IsAuthenticated]) def logout(request): pass the last decorator applies before the first one, which is not the order that I want. I want the decorator @api_view(["POST"]) to be applied first, and then @permission_classes([IsAuthenticated]). How should I do that?
[ "This is the correct way of ordering 'api_view' consider the view as API view that the decorator is defined by restframework.\nfrom rest_framework.decorators import api_view, permission_classes\n\n@api_view([\"POST\"])\n@permission_classes([IsAuthenticated])\ndef logout(request):\n pass\n\n" ]
[ 1 ]
[]
[]
[ "decorator", "django", "django_views", "python", "python_decorators" ]
stackoverflow_0074640885_decorator_django_django_views_python_python_decorators.txt
Q: Can't run Python programs from the terminal window, how do I fix this? (Windows 10, Python version 3.8.5) I've been studying Python for a month now and normally I run all my programs in Sublime Text 3. Today I learn to run Python programs in the terminal window as introduced in this section of the Automate the Boring Stuff with Python book following this video. Basically, I followed the instruction in the video and created the hello.py file as below: #! python3 print('Hello, World!') Then I opened the Command Prompt to run the file with the command: py.exe c:\users\danh\mypythonscripts\hello.py, an error pops-up and states that "This app can't run on your PC" and a line says that Access is denied. I spent the whole day trying to fix this problem but still I couldn't get it running. One thing is when I change the directory of the Command Prompt to run the file to C:Windows\system32 (or run the Command Prompt as Administrator) and then run the command py.exe c:\users\danh\mypythonscripts\hello.py, it runs the file without any problem as in this image. Does anyone know how to fix this problem? Any help is appreciated. Thanks! A: I solved the problem. When I looked into my user directory at C:\Users\<Username>, it appears that there is a py.exe file that has 0 bytes. I was told in this thread that the py.exe file shouldn't be in my user directory so I removed that file and it fixed the problem. I still don't know how the py.exe file got into my user directory and why it has 0 bytes so I'm not sure this solution could help others. For now, I will accept my own answer because it solves the problem in my case. A: It looks like you're trying to use Microsoft's new Windows 10 Metro-based auto-installing version of Python. It's included by default but, as you've found, it doesn't work very well. Try installing the version from the Python website. If you've got a 32-bit copy of Windows, make sure to install the 32-bit version; Windows isn't very good at running 64-bit programs from a 32-bit kernel. You can check by looking in your C: drive; if you haven't got a Program Files (x86) folder, install the 32-bit version. A: python.exe inside my env\Scripts\ became 0kb for some reason. So I created another virtual-env and copied python.exe from there to this folder. then it started working.
Can't run Python programs from the terminal window, how do I fix this? (Windows 10, Python version 3.8.5)
I've been studying Python for a month now and normally I run all my programs in Sublime Text 3. Today I learn to run Python programs in the terminal window as introduced in this section of the Automate the Boring Stuff with Python book following this video. Basically, I followed the instruction in the video and created the hello.py file as below: #! python3 print('Hello, World!') Then I opened the Command Prompt to run the file with the command: py.exe c:\users\danh\mypythonscripts\hello.py, an error pops-up and states that "This app can't run on your PC" and a line says that Access is denied. I spent the whole day trying to fix this problem but still I couldn't get it running. One thing is when I change the directory of the Command Prompt to run the file to C:Windows\system32 (or run the Command Prompt as Administrator) and then run the command py.exe c:\users\danh\mypythonscripts\hello.py, it runs the file without any problem as in this image. Does anyone know how to fix this problem? Any help is appreciated. Thanks!
[ "I solved the problem.\nWhen I looked into my user directory at C:\\Users\\<Username>, it appears that there is a py.exe file that has 0 bytes.\nI was told in this thread that the py.exe file shouldn't be in my user directory so I removed that file and it fixed the problem.\nI still don't know how the py.exe file got into my user directory and why it has 0 bytes so I'm not sure this solution could help others. For now, I will accept my own answer because it solves the problem in my case.\n", "It looks like you're trying to use Microsoft's new Windows 10 Metro-based auto-installing version of Python. It's included by default but, as you've found, it doesn't work very well.\nTry installing the version from the Python website.\nIf you've got a 32-bit copy of Windows, make sure to install the 32-bit version; Windows isn't very good at running 64-bit programs from a 32-bit kernel. You can check by looking in your C: drive; if you haven't got a Program Files (x86) folder, install the 32-bit version.\n", "python.exe inside my env\\Scripts\\ became 0kb for some reason. So I created another virtual-env and copied python.exe from there to this folder. then it started working.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "python", "terminal", "windows" ]
stackoverflow_0063525600_python_terminal_windows.txt
Q: Sorting only specific subset of rows in pandas dataframe I spent some time trying to figure out a solution to but haven't been able to figure a simple and clean solution to my problem. Basically I have the following dataframe: Plane Parts Quantity is_plane G6_32 FAB 1 True G6_32 KIT 2 True Item D 2 False Item C 4 False Item A 5 False G6_32 SITE 5 True G6_32 SPACE 6 True Item C 2 False Item A 1 False Item F 2 False I need to sort only the subset of rows which have is_plane == False. So at the end my final result would look like: Plane Parts Quantity is_plane G6_32 FAB 1 True G6_32 KIT 2 True Item A 5 False Item C 4 False Item D 2 False G6_32 SITE 5 True G6_32 SPACE 6 True Item A 1 False Item C 2 False Item F 2 False Notice that the rows which is_plane == True are not supposed to be sorted and kept the original position. Any idea on how to achieve it? A: make grouper for grouping grouper = df['is_plane'].ne(df['is_plane'].shift(1)).cumsum() grouper: 0 1 1 1 2 2 3 2 4 2 5 3 6 3 7 4 8 4 9 4 Name: is_plane, dtype: int32 use groupby by grouper group that its 'Plane Parts' is all False, sort_values by Plane Parts. df.groupby(grouper).apply(lambda x: x.sort_values('Plane Parts') if x['is_plane'].sum() == 0 else x).droplevel(0) output: Plane Parts Quantity is_plane 0 G6_32 FAB 1 True 1 G6_32 KIT 2 True 4 Item A 5 False 3 Item C 4 False 2 Item D 2 False 5 G6_32 SITE 5 True 6 G6_32 SPACE 6 True 8 Item A 1 False 7 Item C 2 False 9 Item F 2 False
Sorting only specific subset of rows in pandas dataframe
I spent some time trying to figure out a solution to but haven't been able to figure a simple and clean solution to my problem. Basically I have the following dataframe: Plane Parts Quantity is_plane G6_32 FAB 1 True G6_32 KIT 2 True Item D 2 False Item C 4 False Item A 5 False G6_32 SITE 5 True G6_32 SPACE 6 True Item C 2 False Item A 1 False Item F 2 False I need to sort only the subset of rows which have is_plane == False. So at the end my final result would look like: Plane Parts Quantity is_plane G6_32 FAB 1 True G6_32 KIT 2 True Item A 5 False Item C 4 False Item D 2 False G6_32 SITE 5 True G6_32 SPACE 6 True Item A 1 False Item C 2 False Item F 2 False Notice that the rows which is_plane == True are not supposed to be sorted and kept the original position. Any idea on how to achieve it?
[ "make grouper for grouping\ngrouper = df['is_plane'].ne(df['is_plane'].shift(1)).cumsum()\n\ngrouper:\n0 1\n1 1\n2 2\n3 2\n4 2\n5 3\n6 3\n7 4\n8 4\n9 4\nName: is_plane, dtype: int32\n\nuse groupby by grouper\ngroup that its 'Plane Parts' is all False, sort_values by Plane Parts.\ndf.groupby(grouper).apply(lambda x: x.sort_values('Plane Parts') if x['is_plane'].sum() == 0 else x).droplevel(0)\n\noutput:\n Plane Parts Quantity is_plane\n0 G6_32 FAB 1 True\n1 G6_32 KIT 2 True\n4 Item A 5 False\n3 Item C 4 False\n2 Item D 2 False\n5 G6_32 SITE 5 True\n6 G6_32 SPACE 6 True\n8 Item A 1 False\n7 Item C 2 False\n9 Item F 2 False\n\n" ]
[ 2 ]
[]
[]
[ "dataframe", "pandas", "python", "python_3.x" ]
stackoverflow_0074640757_dataframe_pandas_python_python_3.x.txt
Q: python-socketio asyncio client: Is it possible to know when emit has completely sent data over the wire to the server? Do python-socketio or the underlying python-engineio have any kind of confirmation that a specific message was completely delivered to the other side, similar to what TCP does to ensure all data was successfully transferred? I have a kind of pubsub service built on a python-socketio server, which sends back an ok/error status when a request has been processed. But in my python-socketio client I sometimes just need to fire and forget a message to the pubsub, yet I have to wait until it is completely delivered before I terminate my application. So, my naive code: await sio.emit("publish", {my message}) It seems the await above just schedules the send with asyncio but does not wait for the send to complete. I suppose that is by design. I just need to know whether it is possible to tell when the send is complete or not. A: Socket.IO has ACK packets that can be used by the receiving side to acknowledge receipt of an event. When using the Python client and server, you can replace emit() with call() to wait for the ack to be received. The return value of call() is whatever data the other side returned in the acknowledgement. But note that for this to work the other side also needs to be extended to send these ACK packets. If your other side is also Python, an event handler can issue an ACK simply by returning something from the handler function. The data that you return is included in the ACK packet. If the other side is JavaScript, you get a callback function passed as the last argument into your handler. The handler needs to call this function, passing any data that it wants to send to the other side as the response.
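A concrete illustration of the call()/ACK mechanism described in the answer (a minimal sketch; the URL, port, event name and payload are invented for the example and are not from the original post). The client blocks in call() until the server's handler returns, so when it comes back the event has verifiably reached the other side; call() raises a timeout error if no acknowledgement arrives in time.

# client.py
import asyncio
import socketio

async def main():
    sio = socketio.AsyncClient()
    await sio.connect('http://localhost:8080')
    # unlike emit(), call() waits for the server's ACK and returns its payload
    ack = await sio.call('publish', {'topic': 'news', 'body': 'hello'}, timeout=10)
    print('server acknowledged with:', ack)
    await sio.disconnect()

asyncio.run(main())

# server.py
import socketio
from aiohttp import web

sio = socketio.AsyncServer(async_mode='aiohttp')
app = web.Application()
sio.attach(app)

@sio.event
async def publish(sid, data):
    # whatever the handler returns is sent back to the client in the ACK packet
    return {'status': 'ok'}

web.run_app(app, port=8080)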
python-socketio asyncio client: Is it possible to know when emit has completely sent data over the wire to the server?
Do python-socketio or the underlying python-engineio have any kind of confirmation that a specific message was completely delivered to the other side, similar to what TCP does to ensure all data was successfully transferred? I have a kind of pubsub service built on a python-socketio server, which sends back an ok/error status when a request has been processed. But in my python-socketio client I sometimes just need to fire and forget a message to the pubsub, yet I have to wait until it is completely delivered before I terminate my application. So, my naive code: await sio.emit("publish", {my message}) It seems the await above just schedules the send with asyncio but does not wait for the send to complete. I suppose that is by design. I just need to know whether it is possible to tell when the send is complete or not.
[ "Socket.IO has ACK packets that can be used for the receiving side to acknowledge receipt of an event.\nWhen using the Python client and server, you can replace the emit() with call() to wait for the ack to be received. The return value of call() is whatever data the other side returned in the acknowledgement.\nBut not that for this to work the other side also needs to be expanded to send this ACK packets. If your other side is also Python, an event handler can issue an ACK simply by returning something from the handler function. The data that you return is included in the ACK packet. If the other side is JavaScript, you get a callback function passed as a last argument into your handler. The handler needs to call this function passing any data that it wants to send to the other side as response.\n" ]
[ 1 ]
[]
[]
[ "python", "python_asyncio", "python_socketio" ]
stackoverflow_0074639951_python_python_asyncio_python_socketio.txt
Q: Can't run python on windows anymore Since the most recent update to Windows 10, I have been seeing this message every time I try to do anything with Python I have reinstalled it, tried running it as administrator. Nothing works. A: First make sure that python.exe exists in the given directory and that its not a zero-length file. More likely though is that you installed the wrong version of python. Make sure you download and install the x86 version as it will work on both 64-bit and x86 systems. Do a full uninstall and install python via the Windows x86 MSI installer. Edit: If this doesn't work please provide more information on which specific Windows 10 version you are running and which python installer you are using. Edit 2: You can also get more information from the Windows Event Log A: I think the reason is that python.exe has size 0 Kb. It could happen because you (and me too) didn't exit from Python correctly. The way to fix the issue is to re-install Python or download "portable" version: https://www.python.org/downloads/release/python-385/ By the way, one of the way to exit from python (v3) in Windows: >>> import sys >>> sys.exit() A: I've also had "This app can't run on your PC" windows 10 dialog box starting to appear after I tried to start x64 app from python script under x86 python. Uninstalled x86 python, installed x64 python and all started to work normal. A: I didn't have to reinstall Python. python.exe inside my env\Scripts\ became 0kb for some reason. So I created another virtual-env and copied python.exe from there to this folder. Then it started working.
Can't run python on windows anymore
Since the most recent update to Windows 10, I have been seeing this message every time I try to do anything with Python. I have reinstalled it and tried running it as administrator. Nothing works.
[ "First make sure that python.exe exists in the given directory and that its not a zero-length file. More likely though is that you installed the wrong version of python. Make sure you download and install the x86 version as it will work on both 64-bit and x86 systems. Do a full uninstall and install python via the Windows x86 MSI installer.\nEdit:\nIf this doesn't work please provide more information on which specific Windows 10 version you are running and which python installer you are using.\nEdit 2:\nYou can also get more information from the Windows Event Log\n\n", "I think the reason is that python.exe has size 0 Kb. It could happen because you (and me too) didn't exit from Python correctly.\nThe way to fix the issue is to re-install Python or download \"portable\" version:\nhttps://www.python.org/downloads/release/python-385/\nBy the way, one of the way to exit from python (v3) in Windows:\n>>> import sys\n>>> sys.exit()\n\n", "I've also had \"This app can't run on your PC\" windows 10 dialog box starting to appear after I tried to start x64 app from python script under x86 python.\nUninstalled x86 python, installed x64 python and all started to work normal.\n", "I didn't have to reinstall Python.\npython.exe inside my env\\Scripts\\ became 0kb for some reason. So I created another virtual-env and copied python.exe from there to this folder. Then it started working.\n" ]
[ 8, 5, 1, 0 ]
[]
[]
[ "python", "python_2.7", "updates", "windows_10" ]
stackoverflow_0044394965_python_python_2.7_updates_windows_10.txt
Q: How to mock psycopg2 cursor object? I have this code segment in Python2: def super_cool_method(): con = psycopg2.connect(**connection_stuff) cur = con.cursor(cursor_factory=DictCursor) cur.execute("Super duper SQL query") rows = cur.fetchall() for row in rows: # do some data manipulation on row return rows that I'd like to write some unittests for. I'm wondering how to use mock.patch in order to patch out the cursor and connection variables so that they return a fake set of data? I've tried the following segment of code for my unittests but to no avail: @mock.patch("psycopg2.connect") @mock.patch("psycopg2.extensions.cursor.fetchall") def test_super_awesome_stuff(self, a, b): testing = super_cool_method() But I seem to get the following error: TypeError: can't set attributes of built-in/extension type 'psycopg2.extensions.cursor' A: You have a series of chained calls, each returning a new object. If you mock just the psycopg2.connect() call, you can follow that chain of calls (each producing mock objects) via .return_value attributes, which reference the returned mock for such calls: @mock.patch("psycopg2.connect") def test_super_awesome_stuff(self, mock_connect): expected = [['fake', 'row', 1], ['fake', 'row', 2]] mock_con = mock_connect.return_value # result of psycopg2.connect(**connection_stuff) mock_cur = mock_con.cursor.return_value # result of con.cursor(cursor_factory=DictCursor) mock_cur.fetchall.return_value = expected # return this when calling cur.fetchall() result = super_cool_method() self.assertEqual(result, expected) Because you hold onto references for the mock connect function, as well as the mock connection and cursor objects you can then also assert if they were called correctly: mock_connect.assert_called_with(**connection_stuff) mock_con.cursor.asset_called_with(cursor_factory=DictCursor) mock_cur.execute.assert_called_with("Super duper SQL query") If you don't need to test these, you could just chain up the return_value references to go straight to the result of cursor() call on the connection object: @mock.patch("psycopg2.connect") def test_super_awesome_stuff(self, mock_connect): expected = [['fake', 'row', 1], ['fake', 'row' 2]] mock_connect.return_value.cursor.return_value.fetchall.return_value = expected result = super_cool_method() self.assertEqual(result, expected) Note that if you are using the connection as a context manager to automatically commit the transaction and you use as to bind the object returned by __enter__() to a new name (so with psycopg2.connect(...) as conn: # ...) then you'll need to inject an additional __enter__.return_value in the call chain: mock_con_cm = mock_connect.return_value # result of psycopg2.connect(**connection_stuff) mock_con = mock_con_cm.__enter__.return_value # object assigned to con in with ... as con mock_cur = mock_con.cursor.return_value # result of con.cursor(cursor_factory=DictCursor) mock_cur.fetchall.return_value = expected # return this when calling cur.fetchall() The same applies to the result of with conn.cursor() as cursor:, the conn.cursor.return_value.__enter__.return_value object is assigned to the as target. A: Since the cursor is the return value of con.cursor, you only need to mock the connection, then configure it properly. For example, query_result = [("field1a", "field2a"), ("field1b", "field2b")] with mock.patch('psycopg2.connect') as mock_connect: mock_connect.cursor.return_value.fetchall.return_value = query_result super_cool_method() A: The following answer is the variation of above answers. 
I was using django.db.connections cursor object. So following code worked for me @patch('django.db.connections') def test_supercool_method(self, mock_connections): query_result = [("field1a", "field2a"), ("field1b", "field2b")] mock_connections.__getitem__.return_value.cursor.return_value.__enter__.return_value.fetchall.return_value = query_result result = supercool_method() self.assertIsInstance(result, list) A: @patch("psycopg2.connect") async def test_update_task_after_launch(fake_connection): """ """ fake_update_count =4 fake_connection.return_value = Mock(cursor=lambda : Mock(execute=lambda x,y :"", fetch_all=lambda:['some','fake','rows'],rowcount=fake_update_count,close=lambda:""))
How to mock psycopg2 cursor object?
I have this code segment in Python2: def super_cool_method(): con = psycopg2.connect(**connection_stuff) cur = con.cursor(cursor_factory=DictCursor) cur.execute("Super duper SQL query") rows = cur.fetchall() for row in rows: # do some data manipulation on row return rows that I'd like to write some unittests for. I'm wondering how to use mock.patch in order to patch out the cursor and connection variables so that they return a fake set of data? I've tried the following segment of code for my unittests but to no avail: @mock.patch("psycopg2.connect") @mock.patch("psycopg2.extensions.cursor.fetchall") def test_super_awesome_stuff(self, a, b): testing = super_cool_method() But I seem to get the following error: TypeError: can't set attributes of built-in/extension type 'psycopg2.extensions.cursor'
[ "You have a series of chained calls, each returning a new object. If you mock just the psycopg2.connect() call, you can follow that chain of calls (each producing mock objects) via .return_value attributes, which reference the returned mock for such calls:\[email protected](\"psycopg2.connect\")\ndef test_super_awesome_stuff(self, mock_connect):\n expected = [['fake', 'row', 1], ['fake', 'row', 2]]\n\n mock_con = mock_connect.return_value # result of psycopg2.connect(**connection_stuff)\n mock_cur = mock_con.cursor.return_value # result of con.cursor(cursor_factory=DictCursor)\n mock_cur.fetchall.return_value = expected # return this when calling cur.fetchall()\n\n result = super_cool_method()\n self.assertEqual(result, expected)\n\nBecause you hold onto references for the mock connect function, as well as the mock connection and cursor objects you can then also assert if they were called correctly:\nmock_connect.assert_called_with(**connection_stuff)\nmock_con.cursor.asset_called_with(cursor_factory=DictCursor)\nmock_cur.execute.assert_called_with(\"Super duper SQL query\")\n\nIf you don't need to test these, you could just chain up the return_value references to go straight to the result of cursor() call on the connection object:\[email protected](\"psycopg2.connect\")\ndef test_super_awesome_stuff(self, mock_connect):\n expected = [['fake', 'row', 1], ['fake', 'row' 2]]\n mock_connect.return_value.cursor.return_value.fetchall.return_value = expected\n\n result = super_cool_method()\n self.assertEqual(result, expected)\n\nNote that if you are using the connection as a context manager to automatically commit the transaction and you use as to bind the object returned by __enter__() to a new name (so with psycopg2.connect(...) as conn: # ...) then you'll need to inject an additional __enter__.return_value in the call chain:\nmock_con_cm = mock_connect.return_value # result of psycopg2.connect(**connection_stuff)\nmock_con = mock_con_cm.__enter__.return_value # object assigned to con in with ... as con \nmock_cur = mock_con.cursor.return_value # result of con.cursor(cursor_factory=DictCursor)\nmock_cur.fetchall.return_value = expected # return this when calling cur.fetchall()\n\nThe same applies to the result of with conn.cursor() as cursor:, the conn.cursor.return_value.__enter__.return_value object is assigned to the as target.\n", "Since the cursor is the return value of con.cursor, you only need to mock the connection, then configure it properly. For example,\nquery_result = [(\"field1a\", \"field2a\"), (\"field1b\", \"field2b\")]\nwith mock.patch('psycopg2.connect') as mock_connect:\n mock_connect.cursor.return_value.fetchall.return_value = query_result\n super_cool_method()\n\n", "The following answer is the variation of above answers. \nI was using django.db.connections cursor object.\nSo following code worked for me\n@patch('django.db.connections')\ndef test_supercool_method(self, mock_connections):\n query_result = [(\"field1a\", \"field2a\"), (\"field1b\", \"field2b\")]\n mock_connections.__getitem__.return_value.cursor.return_value.__enter__.return_value.fetchall.return_value = query_result\n\n result = supercool_method()\n self.assertIsInstance(result, list)\n\n", "@patch(\"psycopg2.connect\")\nasync def test_update_task_after_launch(fake_connection):\n \"\"\"\n \n \"\"\"\n fake_update_count =4\n fake_connection.return_value = Mock(cursor=lambda : Mock(execute=lambda x,y :\"\", \nfetch_all=lambda:['some','fake','rows'],rowcount=fake_update_count,close=lambda:\"\"))\n\n\n" ]
[ 58, 16, 2, 0 ]
[]
[]
[ "mocking", "psycopg2", "python", "python_2.7", "unit_testing" ]
stackoverflow_0035143055_mocking_psycopg2_python_python_2.7_unit_testing.txt
Q: Beautiful Soup find_All doesn't find all blocks I'm trying to parse headhunter.kz website. In use: python 3.9, beautifulsoup4. When i parse pages with vacancies, i parse only 20 div-block with "serp-item" classes, hen in fact there are 40 div blocks. (I open the html file in the browser and see the presence of 40 blocks). import requests import os import time import re from bs4 import BeautifulSoup import csv import pandas as pd df = pd.DataFrame({}) global_url = "https://almaty.hh.kz/" headers = { "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" } def get_all_pages(): with open("data/page_1.html") as file: src = file.read() # soup = BeautifulSoup(src,"lxml") #find("span", {"class":"pager-item-not-in-short-range"}). pages_count = int(soup.find("div",{"class":"pager"}).find_all("a")[-2].text) for i in range(1,pages_count+1): url = f"https://almaty.hh.kz/search/vacancy?area=160&clusters=true&enable_snippets=true&ored_clusters=true&professional_role=84&professional_role=116&professional_role=36&professional_role=157&professional_role=125&professional_role=156&professional_role=160&professional_role=10&professional_role=150&professional_role=25&professional_role=165&professional_role=73&professional_role=96&professional_role=164&professional_role=104&professional_role=112&professional_role=113&professional_role=148&professional_role=114&professional_role=121&professional_role=124&professional_role=20&search_period=30&hhtmFrom=vacancy_search_list&page={i}" r = requests.get(url = url,headers = headers) with open(f"data/page_{i}.html","w") as file: file.write(r.text) time.sleep(3) return pages_count+1 def collect_data(pages_count): for page in range(1, pages_count+1): with open(f"data/page_{page}.html") as file: src = file.read() soup = BeautifulSoup(src,"lxml") # item_cards = soup.find_all("div",{"class":"a-card__body ddl_product_link"}) # print(len(item_cards)) # for items in item_cards: # product_title = items.find("a",{"class":"a-card__title link"}).text # product_price = items.find("span",{"class":"a-card__price-text"}).text # product_geo = items.find("div",{"class":"a-card__subtitle"}).text # print(f"Title:{product_title} - Price: {product_price} - GEO: {product_geo}") #items_divs = soup.find_all("div",{"class":"serp-item"}) items_divs = soup.find_all("div",{"class":"serp-item"}) print(len(items_divs)) urls =[] for item in items_divs: item_url = item.find("span",{"data-page-analytics-event":"vacancy_search_suitable_item"}).find("a",{"class":"serp-item__title"}).get("href") urls.append(item_url) with open("items_urls.txt","w") as file: for url in urls: file.write(f"{url}\n") get_data(file_path="items_urls.txt") def get_data(file_path): result_list = [] with open(file_path) as file: urls_list = file.readlines() clear_urls_list =[] for url in urls_list: url = url.strip() clear_urls_list.append(url) i=0 for url in clear_urls_list: i+=1 response = requests.get(url=url,headers=headers) soup = BeautifulSoup(response.text,"lxml") try: item_name = soup.find("div",{"class":"main-content"}).find("h1",{"data-qa":"vacancy-title"}).text.strip() except: item_name = 'E1' try: item_salary = soup.find("div",{"class":"main-content"}).find("div",{"data-qa":"vacancy-salary"}).text.strip() except: item_salary = 'E2' try: item_exp = soup.find("div",{"class":"main-content"}).find("span",{"data-qa":"vacancy-experience"}).text.strip() except: item_exp = 'E3' try: company_name = 
soup.find("div",{"class":"main-content"}).find("span",{"class":"vacancy-company-name"}).find("span").text.strip() except: company_name = 'E4' try: if soup.find("div",{"class":"main-content"}).find("p",{"class":"vacancy-creation-time-redesigned"}): date = soup.find("div",{"class":"main-content"}).find("p",{"class":"vacancy-creation-time-redesigned"}).text.strip() else: date = soup.find("div",{"class":"main-content"}).find("p",{"class":"vacancy-creation-time"}).text.strip() except: date = 'E5' try: if soup.find("div",{"class":"main-content"}).find("span",{"data-qa":"vacancy-view-raw-address"}): address = soup.find("div",{"class":"main-content"}).find("span",{"data-qa":"vacancy-view-raw-address"}).text elif soup.find("div",{"class":"main-content"}).find("div",{"class":"vacancy-company-bottom"}).find("p", {"data-qa":"vacancy-view-location"}): address = soup.find("div",{"class":"main-content"}).find("div",{"class":"vacancy-company-bottom"}).find("p", {"data-qa":"vacancy-view-location"}).text elif soup.find("div",{"class":"main-content"}).find("div",{"class":"block-employer--jHuyqacEkkrEkSl3Yg3M"}): address = soup.find("div",{"class":"main-content"}).find("div",{"class":"block-employer--jHuyqacEkkrEkSl3Yg3M"}).find("p", {"data-qa":"vacancy-view-location"}).text except: address = 'Алматы' try: zanyatost = soup.find("div",{"class":"main-content"}).find("p",{"data-qa":"vacancy-view-employment-mode"}).find("span").text.strip() except: zanyatost = 'E7' try: zanyatost2 = soup.find("div",{"class":"main-content"}).find("p",{"data-qa":"vacancy-view-employment-mode"}).text.lstrip(', ') except: zanyatost2 = 'E8' print(i) with open('test.csv','a',encoding ="utf-8") as file: writer = csv.writer(file) writer.writerow( ( item_name, item_salary, item_exp, company_name, date, address, zanyatost, zanyatost2 ) ) def main(): with open('test.csv','w',encoding ="utf-8") as file: writer = csv.writer(file) writer.writerow( ( 'Должность', "Зарплата", "Опыт", "Компания", "Дата обьявления", "Район", "Тип занятости", "Тип занятости2" ) ) pages_count = get_all_pages() #print(pages_count) collect_data(pages_count=pages_count) # #get_data(file_path="items_urls.txt") # df.to_excel('./test.xlsx') if __name__ == '__main__': main() I tried to use html5lib, html.parser and lxml, but i have the same results. Also i tried to use soup.select to find the number of div-block with "serp-item" class, but it gives me the same result. I think, that info from remaining block are stored in JS, if i'm right, can someone explain, how to parse remaining blocks? A: I think you should you use selenium and try to scroll to end of page before you parse any data # Get scroll height last_height = driver.execute_script("return document.body.scrollHeight") while True: # Scroll down to bottom driver.execute_script("window.scrollTo(0, document.body.scrollHeight);") # Wait to load page time.sleep(SCROLL_PAUSE_TIME) # Calculate new scroll height and compare with last scroll height new_height = driver.execute_script("return document.body.scrollHeight") if new_height == last_height: break last_height = new_height
Beautiful Soup find_All doesn't find all blocks
I'm trying to parse headhunter.kz website. In use: python 3.9, beautifulsoup4. When i parse pages with vacancies, i parse only 20 div-block with "serp-item" classes, hen in fact there are 40 div blocks. (I open the html file in the browser and see the presence of 40 blocks). import requests import os import time import re from bs4 import BeautifulSoup import csv import pandas as pd df = pd.DataFrame({}) global_url = "https://almaty.hh.kz/" headers = { "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" } def get_all_pages(): with open("data/page_1.html") as file: src = file.read() # soup = BeautifulSoup(src,"lxml") #find("span", {"class":"pager-item-not-in-short-range"}). pages_count = int(soup.find("div",{"class":"pager"}).find_all("a")[-2].text) for i in range(1,pages_count+1): url = f"https://almaty.hh.kz/search/vacancy?area=160&clusters=true&enable_snippets=true&ored_clusters=true&professional_role=84&professional_role=116&professional_role=36&professional_role=157&professional_role=125&professional_role=156&professional_role=160&professional_role=10&professional_role=150&professional_role=25&professional_role=165&professional_role=73&professional_role=96&professional_role=164&professional_role=104&professional_role=112&professional_role=113&professional_role=148&professional_role=114&professional_role=121&professional_role=124&professional_role=20&search_period=30&hhtmFrom=vacancy_search_list&page={i}" r = requests.get(url = url,headers = headers) with open(f"data/page_{i}.html","w") as file: file.write(r.text) time.sleep(3) return pages_count+1 def collect_data(pages_count): for page in range(1, pages_count+1): with open(f"data/page_{page}.html") as file: src = file.read() soup = BeautifulSoup(src,"lxml") # item_cards = soup.find_all("div",{"class":"a-card__body ddl_product_link"}) # print(len(item_cards)) # for items in item_cards: # product_title = items.find("a",{"class":"a-card__title link"}).text # product_price = items.find("span",{"class":"a-card__price-text"}).text # product_geo = items.find("div",{"class":"a-card__subtitle"}).text # print(f"Title:{product_title} - Price: {product_price} - GEO: {product_geo}") #items_divs = soup.find_all("div",{"class":"serp-item"}) items_divs = soup.find_all("div",{"class":"serp-item"}) print(len(items_divs)) urls =[] for item in items_divs: item_url = item.find("span",{"data-page-analytics-event":"vacancy_search_suitable_item"}).find("a",{"class":"serp-item__title"}).get("href") urls.append(item_url) with open("items_urls.txt","w") as file: for url in urls: file.write(f"{url}\n") get_data(file_path="items_urls.txt") def get_data(file_path): result_list = [] with open(file_path) as file: urls_list = file.readlines() clear_urls_list =[] for url in urls_list: url = url.strip() clear_urls_list.append(url) i=0 for url in clear_urls_list: i+=1 response = requests.get(url=url,headers=headers) soup = BeautifulSoup(response.text,"lxml") try: item_name = soup.find("div",{"class":"main-content"}).find("h1",{"data-qa":"vacancy-title"}).text.strip() except: item_name = 'E1' try: item_salary = soup.find("div",{"class":"main-content"}).find("div",{"data-qa":"vacancy-salary"}).text.strip() except: item_salary = 'E2' try: item_exp = soup.find("div",{"class":"main-content"}).find("span",{"data-qa":"vacancy-experience"}).text.strip() except: item_exp = 'E3' try: company_name = 
soup.find("div",{"class":"main-content"}).find("span",{"class":"vacancy-company-name"}).find("span").text.strip() except: company_name = 'E4' try: if soup.find("div",{"class":"main-content"}).find("p",{"class":"vacancy-creation-time-redesigned"}): date = soup.find("div",{"class":"main-content"}).find("p",{"class":"vacancy-creation-time-redesigned"}).text.strip() else: date = soup.find("div",{"class":"main-content"}).find("p",{"class":"vacancy-creation-time"}).text.strip() except: date = 'E5' try: if soup.find("div",{"class":"main-content"}).find("span",{"data-qa":"vacancy-view-raw-address"}): address = soup.find("div",{"class":"main-content"}).find("span",{"data-qa":"vacancy-view-raw-address"}).text elif soup.find("div",{"class":"main-content"}).find("div",{"class":"vacancy-company-bottom"}).find("p", {"data-qa":"vacancy-view-location"}): address = soup.find("div",{"class":"main-content"}).find("div",{"class":"vacancy-company-bottom"}).find("p", {"data-qa":"vacancy-view-location"}).text elif soup.find("div",{"class":"main-content"}).find("div",{"class":"block-employer--jHuyqacEkkrEkSl3Yg3M"}): address = soup.find("div",{"class":"main-content"}).find("div",{"class":"block-employer--jHuyqacEkkrEkSl3Yg3M"}).find("p", {"data-qa":"vacancy-view-location"}).text except: address = 'Алматы' try: zanyatost = soup.find("div",{"class":"main-content"}).find("p",{"data-qa":"vacancy-view-employment-mode"}).find("span").text.strip() except: zanyatost = 'E7' try: zanyatost2 = soup.find("div",{"class":"main-content"}).find("p",{"data-qa":"vacancy-view-employment-mode"}).text.lstrip(', ') except: zanyatost2 = 'E8' print(i) with open('test.csv','a',encoding ="utf-8") as file: writer = csv.writer(file) writer.writerow( ( item_name, item_salary, item_exp, company_name, date, address, zanyatost, zanyatost2 ) ) def main(): with open('test.csv','w',encoding ="utf-8") as file: writer = csv.writer(file) writer.writerow( ( 'Должность', "Зарплата", "Опыт", "Компания", "Дата обьявления", "Район", "Тип занятости", "Тип занятости2" ) ) pages_count = get_all_pages() #print(pages_count) collect_data(pages_count=pages_count) # #get_data(file_path="items_urls.txt") # df.to_excel('./test.xlsx') if __name__ == '__main__': main() I tried to use html5lib, html.parser and lxml, but i have the same results. Also i tried to use soup.select to find the number of div-block with "serp-item" class, but it gives me the same result. I think, that info from remaining block are stored in JS, if i'm right, can someone explain, how to parse remaining blocks?
[ "I think you should you use selenium and try to scroll to end of page before you parse any data\n\n# Get scroll height\nlast_height = driver.execute_script(\"return document.body.scrollHeight\")\n\nwhile True:\n # Scroll down to bottom\n driver.execute_script(\"window.scrollTo(0, document.body.scrollHeight);\")\n\n # Wait to load page\n time.sleep(SCROLL_PAUSE_TIME)\n\n # Calculate new scroll height and compare with last scroll height\n new_height = driver.execute_script(\"return document.body.scrollHeight\")\n if new_height == last_height:\n break\n last_height = new_height\n\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "findall", "javascript", "parsing", "python" ]
stackoverflow_0074640907_beautifulsoup_findall_javascript_parsing_python.txt
Q: Cannot create a virtual environment with a specific version of Python in ubuntu with virtualenv I am trying to create a virtual environment in my ubuntu OS using virtualenv. The command I am using is: virtualenv -p /usr/bin/python3.8.13 py3.8.13_env The error shown is: FileNotFoundError: [Errno 2] No such file or directory: '/usr/bin/python3.8.13' I have tried several other python versions but I get the same error. A: You can see what versions of Python you have by: ls -l /usr/bin/python* If you don't provide one then virtualenv will use a default of /usr/bin/python3. On Ubuntu this will be a symlink to a specific version, e.g. /usr/bin/python3 -> python3.10 So just calling virtualenv like: virtualenv py3.10_venv would create a virtualenv called "py3.10_venv" (a folder) in your current working directory, using Python 3.10 in this example. If you have other versions (shown by the ls command above) then you can use those specifically, as you are trying to do in your question above.
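A hedged sketch of how this usually plays out on Ubuntu (the version numbers are examples; check what ls reports on your machine): the interpreter binaries in /usr/bin only carry the major.minor part of the version in their name, so there is no /usr/bin/python3.8.13 even when Python 3.8.13 is installed, and virtualenv should be pointed at python3.8 instead.

# list the interpreters that actually exist; names look like python3.8, python3.10, ...
ls -l /usr/bin/python3*

# confirm the exact patch release behind the short name
/usr/bin/python3.8 --version      # e.g. Python 3.8.13

# create and activate the environment against that binary
virtualenv -p /usr/bin/python3.8 py3.8_env
source py3.8_env/bin/activate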
Cannot create a virtual environment with a specific version of Python in ubuntu with virtualenv
I am trying to create a virtual environment in my ubuntu OS using virtualenv. The command I am using is: virtualenv -p /usr/bin/python3.8.13 py3.8.13_env The error shown is: FileNotFoundError: [Errno 2] No such file or directory: '/usr/bin/python3.8.13' I have tried several other python versions but I get the same error.
[ "You can see what versions of Python you have by:\nls -l /usr/bin/python*\n\nIf you don't provide one then virtualenv will use a default of /usr/bin/python3. On Ubuntu this will be a symlink to a specific version. e.g.\n/usr/bin/python3 -> python3.10\n\nSo just calling virtualenv like:\nvirtualenv py3.10_venv\n\nWould create a virtualenv called \"py3.10_venv\" (a folder) in your current working directory, using Python 3.10 in this example.\nIf you have other versions (shown by the ls command above) then you can use those specifically as you are trying to do in your question above.\n" ]
[ 1 ]
[]
[]
[ "filenotfounderror", "python", "ubuntu", "virtual_environment", "virtualenv" ]
stackoverflow_0074641147_filenotfounderror_python_ubuntu_virtual_environment_virtualenv.txt
Q: dcc.store value not updating when i access the database in Python Dash i have been trying to create a dashboard using Python Dash. The dashboard tries to access the database after every 5 seconds and tries to update the graph. After getting the values from the db for the first time, I try to update the store with the values of the dataframe that are in the format. | Date | Count | | ----- | ------ | | 01-01-2022 | 55 | | 02-01-2022 | 45 | I try to save the count as a list [55, 45]. Then after every 5 seconds, i again fetch the data and extend the array with the new values like [55 , 45 , 65 , 30 ....]. However, the dcc.store doesn't save the value and comes as the initial value of the dcc.store after every single update. from dash import Dash, html, dcc, Input, Output, State import plotly.express as px import dash import time import random import pandas as pd import numpy as np import json import asyncio from bson import ObjectId from pathlib import Path from datetime import datetime from concurrent.futures import ThreadPoolExecutor from dash.exceptions import PreventUpdate from get_data import init # DASH APP app = Dash(__name__) # Interval in milliseconds interval = 5000 app.layout = html.Div( children=[ html.H1(children="Time Series Reach Analysis"), html.P(children="Time Series Dataset from August 2021 to October 2022"), html.Div(id="timeseries-graph", children=[]), html.Div(id="interval-counter", children=[]), dcc.Store(id='df-storage', data=[], storage_type="memory"), dcc.Interval(id='interval-time', interval=interval, n_intervals=0) ], style={ "text-align" : "center" } ) @app.callback( # Returned Values Output("df-storage", "data"), Output("interval-counter", "children"), # Parameters Input("interval-time", "n_intervals"), State("df-storage", "data") ) def refresh_graph(n_intervals, data): storage_value = [] if n_intervals == 0: fetched_df = init() print("0 - Df = ", fetched_df) data.extend(fetched_df["_id"].tolist()) print(f"00 - Values for n_intervals-{n_intervals} and data-{data}") return data , f"Intervals : {n_intervals}" else: fetched_df = init() data.extend(fetched_df["_id"].tolist()) return data , f"Intervals : {n_intervals}" @app.callback( Output("timeseries-graph", "children"), Input("df-storage", "data"), ) def create_graph(data): df = pd.DataFrame(data=data, columns=["_id"]) fig = px.line( df, y="_id", x=df.index.tolist() ) return dcc.Graph(figure=fig) if __name__ == '__main__': app.run_server(debug=True) init() function simply fetches the data from mongodb and converts it into the dataframe of timestamp and count. However, when I try to do the same thing without fetching the data from the database, then the same code behaves exactly as it should. I tried the same code without fetching the data using random.random() function and after every 5 second interval add new value to the "data" list. Then it worked as one would expect. I would be thankful for any help! A: So I found the answer. The reason why it wasn't working properly with the database call was, The database call was taking longer than my interval. So Dash would update the n_interval count when the 5 seconds interval passes. Resulting in kicking another call, that would lead to another database call, while the previous db call is still in the hanging. I solved the issue by making a longer interval time. That would help in finishing the database call before the next interval update happens.
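A minimal sketch of the fix described in the answer, with illustrative numbers (the 30-second value is an assumption, not a measured one): time how long one call to init() takes, then give dcc.Interval an interval comfortably larger than that, so a new tick never fires while the previous database fetch is still running.

import time
from dash import dcc
from get_data import init  # same import as in the question

# measure the real cost of one fetch
start = time.perf_counter()
init()
print(f"init() took {time.perf_counter() - start:.1f} s")

# pick an interval well above that measurement; 5000 ms was shorter than the fetch
interval = 30000  # milliseconds, illustrative value
dcc.Interval(id='interval-time', interval=interval, n_intervals=0)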
dcc.store value not updating when i access the database in Python Dash
i have been trying to create a dashboard using Python Dash. The dashboard tries to access the database after every 5 seconds and tries to update the graph. After getting the values from the db for the first time, I try to update the store with the values of the dataframe that are in the format. | Date | Count | | ----- | ------ | | 01-01-2022 | 55 | | 02-01-2022 | 45 | I try to save the count as a list [55, 45]. Then after every 5 seconds, i again fetch the data and extend the array with the new values like [55 , 45 , 65 , 30 ....]. However, the dcc.store doesn't save the value and comes as the initial value of the dcc.store after every single update. from dash import Dash, html, dcc, Input, Output, State import plotly.express as px import dash import time import random import pandas as pd import numpy as np import json import asyncio from bson import ObjectId from pathlib import Path from datetime import datetime from concurrent.futures import ThreadPoolExecutor from dash.exceptions import PreventUpdate from get_data import init # DASH APP app = Dash(__name__) # Interval in milliseconds interval = 5000 app.layout = html.Div( children=[ html.H1(children="Time Series Reach Analysis"), html.P(children="Time Series Dataset from August 2021 to October 2022"), html.Div(id="timeseries-graph", children=[]), html.Div(id="interval-counter", children=[]), dcc.Store(id='df-storage', data=[], storage_type="memory"), dcc.Interval(id='interval-time', interval=interval, n_intervals=0) ], style={ "text-align" : "center" } ) @app.callback( # Returned Values Output("df-storage", "data"), Output("interval-counter", "children"), # Parameters Input("interval-time", "n_intervals"), State("df-storage", "data") ) def refresh_graph(n_intervals, data): storage_value = [] if n_intervals == 0: fetched_df = init() print("0 - Df = ", fetched_df) data.extend(fetched_df["_id"].tolist()) print(f"00 - Values for n_intervals-{n_intervals} and data-{data}") return data , f"Intervals : {n_intervals}" else: fetched_df = init() data.extend(fetched_df["_id"].tolist()) return data , f"Intervals : {n_intervals}" @app.callback( Output("timeseries-graph", "children"), Input("df-storage", "data"), ) def create_graph(data): df = pd.DataFrame(data=data, columns=["_id"]) fig = px.line( df, y="_id", x=df.index.tolist() ) return dcc.Graph(figure=fig) if __name__ == '__main__': app.run_server(debug=True) init() function simply fetches the data from mongodb and converts it into the dataframe of timestamp and count. However, when I try to do the same thing without fetching the data from the database, then the same code behaves exactly as it should. I tried the same code without fetching the data using random.random() function and after every 5 second interval add new value to the "data" list. Then it worked as one would expect. I would be thankful for any help!
[ "So I found the answer.\nThe reason why it wasn't working properly with the database call was, The database call was taking longer than my interval.\nSo Dash would update the n_interval count when the 5 seconds interval passes. Resulting in kicking another call, that would lead to another database call, while the previous db call is still in the hanging.\nI solved the issue by making a longer interval time. That would help in finishing the database call before the next interval update happens.\n" ]
[ 0 ]
[]
[]
[ "dashboard", "plotly", "plotly_dash", "python" ]
stackoverflow_0074640551_dashboard_plotly_plotly_dash_python.txt
Q: Unable to use Python's Paramiko library in AWS Lambda Function I have uploaded Paramiko library as a layer in the Lambda function. However, still when I am attempting to import the same, it is giving me the following error: Response { "errorMessage": "Unable to import module 'lambda_function': No module named 'paramiko'", "errorType": "Runtime.ImportModuleError", "requestId": "8c81ba38-1074-43da-9427-ebad905d8d48", "stackTrace": [] } Following is the file hierarchy within the upload .zip file: Python->lib->Python3.6->site packages The contents in the above location are also uploaded as an image. On googling another answer, I also tried moving all the contents to the super-parent folder Python, but was still unsuccessful. Please suggest how to make it work. TIA. A: I finally resolved it myself. Turns out I was following all the steps correctly. But being new to AWS, and due to the unavailability of a Linux System, I was using the Windows system, and since Amazon works with Linux in the background, there was a discrepancy. I made use of the EC2 instance for Linux based libraries, and resolved the issue.
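A sketch of the Linux-side build that the answer alludes to (the layer name, runtime version and paths are examples, not values from the original post): paramiko pulls in natively compiled dependencies, so the layer has to be assembled on Linux (for instance an Amazon Linux EC2 instance), and Lambda expects the packages to sit under a top-level python/ directory inside the layer zip.

# run on an Amazon Linux (or similar) machine, matching the Lambda runtime's Python version
mkdir -p layer/python
pip install paramiko -t layer/python

# zip so that python/ is at the root of the archive
cd layer
zip -r ../paramiko-layer.zip python
cd ..

# publish the layer (name and runtime shown here are placeholders)
aws lambda publish-layer-version \
    --layer-name paramiko-linux \
    --zip-file fileb://paramiko-layer.zip \
    --compatible-runtimes python3.9

After attaching the new layer version to the function, import paramiko should resolve, since Lambda puts /opt/python on sys.path for Python runtimes.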
Unable to use Python's Paramiko library in AWS Lambda Function
I have uploaded Paramiko library as a layer in the Lambda function. However, still when I am attempting to import the same, it is giving me the following error: Response { "errorMessage": "Unable to import module 'lambda_function': No module named 'paramiko'", "errorType": "Runtime.ImportModuleError", "requestId": "8c81ba38-1074-43da-9427-ebad905d8d48", "stackTrace": [] } Following is the file hierarchy within the upload .zip file: Python->lib->Python3.6->site packages The contents in the above location are also uploaded as an image. On googling another answer, I also tried moving all the contents to the super-parent folder Python, but was still unsuccessful. Please suggest how to make it work. TIA.
[ "I finally resolved it myself.\nTurns out I was following all the steps correctly. But being new to AWS, and due to the unavailability of a Linux System, I was using the Windows system, and since Amazon works with Linux in the background, there was a discrepancy. I made use of the EC2 instance for Linux based libraries, and resolved the issue.\n" ]
[ 0 ]
[]
[]
[ "amazon_web_services", "aws_lambda", "aws_lambda_layers", "paramiko", "python" ]
stackoverflow_0074607660_amazon_web_services_aws_lambda_aws_lambda_layers_paramiko_python.txt
Q: Python Select Specific Row and Column from CSV file I want to print specific row and column from a csv file. csv file look like, R,IMSI,DATE FIRST EVENT,TIME FIRST EVENT,DATE LAST EVENT,TIME LAST EVENT,DC(HHMMSS),NC,VOLUME,SDR R C,634012007277489,20221122,150025,20221122,150025,711,1,0,294 C,634012031576061,20221122,150859,20221122,151738,905,3,0,1597 C,634012045006518,20221122,144022,20221122,144022,902,1,0,368 R R R,END OF REPORT T,18 Output should be look like, C,634012007277489,20221122,150025,20221122,150025,711,1,0,294 C,634012031576061,20221122,150859,20221122,151738,905,3,0,1597 C,634012045006518,20221122,144022,20221122,144022,902,1,0,368 A: Use pandas (you need to install it first by pip install pandas in the terminal). import pandas as pd df = pd.read_csv(fullpath.csv) x = df[column_name].iloc[row_number] A: Try reading it with pandas.read_csv() import pandas as pd df = pd.read_csv('filename.csv', skipfooter=1, header=1) df.iloc[row_number,column_number] A: You can use .iat too. import pandas as pd df = pd.read_csv("example.csv", delimiter =",") for row in range(len(df)): for column in range(len(df.columns)): print(df.iat[row, column])
Python Select Specific Row and Column from CSV file
I want to print specific row and column from a csv file. csv file look like, R,IMSI,DATE FIRST EVENT,TIME FIRST EVENT,DATE LAST EVENT,TIME LAST EVENT,DC(HHMMSS),NC,VOLUME,SDR R C,634012007277489,20221122,150025,20221122,150025,711,1,0,294 C,634012031576061,20221122,150859,20221122,151738,905,3,0,1597 C,634012045006518,20221122,144022,20221122,144022,902,1,0,368 R R R,END OF REPORT T,18 Output should be look like, C,634012007277489,20221122,150025,20221122,150025,711,1,0,294 C,634012031576061,20221122,150859,20221122,151738,905,3,0,1597 C,634012045006518,20221122,144022,20221122,144022,902,1,0,368
[ "Use pandas (you need to install it first by pip install pandas in the terminal).\nimport pandas as pd\n\ndf = pd.read_csv(fullpath.csv)\nx = df[column_name].iloc[row_number]\n\n", "Try reading it with pandas.read_csv()\nimport pandas as pd\ndf = pd.read_csv('filename.csv', skipfooter=1, header=1)\ndf.iloc[row_number,column_number]\n\n", "You can use .iat too.\nimport pandas as pd\n\ndf = pd.read_csv(\"example.csv\", delimiter =\",\")\n\nfor row in range(len(df)):\n for column in range(len(df.columns)):\n print(df.iat[row, column])\n\n" ]
[ 1, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074641237_python.txt
Q: How do I split a custom dataset into training and test datasets?
import pandas as pd
import numpy as np
import cv2
from torch.utils.data.dataset import Dataset

class CustomDatasetFromCSV(Dataset):
    def __init__(self, csv_path, transform=None):
        self.data = pd.read_csv(csv_path)
        self.labels = pd.get_dummies(self.data['emotion']).as_matrix()
        self.height = 48
        self.width = 48
        self.transform = transform

    def __getitem__(self, index):
        pixels = self.data['pixels'].tolist()
        faces = []
        for pixel_sequence in pixels:
            face = [int(pixel) for pixel in pixel_sequence.split(' ')]
            # print(np.asarray(face).shape)
            face = np.asarray(face).reshape(self.width, self.height)
            face = cv2.resize(face.astype('uint8'), (self.width, self.height))
            faces.append(face.astype('float32'))
        faces = np.asarray(faces)
        faces = np.expand_dims(faces, -1)
        return faces, self.labels

    def __len__(self):
        return len(self.data)

This is what I could manage to do by using references from other repositories. However, I want to split this dataset into train and test. How can I do that inside this class? Or do I need to make a separate class to do that?
A: Starting in PyTorch 0.4.1 you can use random_split:
train_size = int(0.8 * len(full_dataset))
test_size = len(full_dataset) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(full_dataset, [train_size, test_size])

A: Using Pytorch's SubsetRandomSampler:
import torch
import numpy as np
from torchvision import datasets
from torchvision import transforms
from torch.utils.data.sampler import SubsetRandomSampler

class CustomDatasetFromCSV(Dataset):
    def __init__(self, csv_path, transform=None):
        self.data = pd.read_csv(csv_path)
        self.labels = pd.get_dummies(self.data['emotion']).as_matrix()
        self.height = 48
        self.width = 48
        self.transform = transform

    def __getitem__(self, index):
        # This method should return only 1 sample and label
        # (according to "index"), not the whole dataset
        # So probably something like this for you:
        pixel_sequence = self.data['pixels'][index]
        face = [int(pixel) for pixel in pixel_sequence.split(' ')]
        face = np.asarray(face).reshape(self.width, self.height)
        face = cv2.resize(face.astype('uint8'), (self.width, self.height))
        label = self.labels[index]

        return face, label

    def __len__(self):
        return len(self.labels)


dataset = CustomDatasetFromCSV(my_path)
batch_size = 16
validation_split = .2
shuffle_dataset = True
random_seed = 42

# Creating data indices for training and validation splits:
dataset_size = len(dataset)
indices = list(range(dataset_size))
split = int(np.floor(validation_split * dataset_size))
if shuffle_dataset:
    np.random.seed(random_seed)
    np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]

# Creating PT data samplers and loaders:
train_sampler = SubsetRandomSampler(train_indices)
valid_sampler = SubsetRandomSampler(val_indices)

train_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
                                           sampler=train_sampler)
validation_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
                                                sampler=valid_sampler)

# Usage Example:
num_epochs = 10
for epoch in range(num_epochs):
    # Train:
    for batch_index, (faces, labels) in enumerate(train_loader):
        # ...

A: Current answers do random splits, which has the disadvantage that the number of samples per class is not guaranteed to be balanced. This is especially problematic when you want a small number of samples per class. For example, MNIST has 60,000 examples, i.e. 6,000 per digit. Assume that you want only 30 examples per digit in your training set. In this case, a random split may produce an imbalance between classes (one digit with more training data than others). So you want to make sure each digit has precisely 30 labels. This is called stratified sampling.
One way to do this is using the sampler interface in Pytorch, and sample code is here.
Another way to do this is just to hack your way through :). For example, below is a simple implementation for MNIST where ds is the MNIST dataset and k is the number of samples needed for each class.
def sampleFromClass(ds, k):
    class_counts = {}
    train_data = []
    train_label = []
    test_data = []
    test_label = []
    for data, label in ds:
        c = label.item()
        class_counts[c] = class_counts.get(c, 0) + 1
        if class_counts[c] <= k:
            train_data.append(data)
            train_label.append(torch.unsqueeze(label, 0))
        else:
            test_data.append(data)
            test_label.append(torch.unsqueeze(label, 0))
    train_data = torch.cat(train_data)
    for ll in train_label:
        print(ll)
    train_label = torch.cat(train_label)
    test_data = torch.cat(test_data)
    test_label = torch.cat(test_label)

    return (TensorDataset(train_data, train_label),
            TensorDataset(test_data, test_label))

You can use this function like this:
def main():
    train_ds = datasets.MNIST('../data', train=True, download=True,
                              transform=transforms.Compose([
                                  transforms.ToTensor()
                              ]))
    train_ds, test_ds = sampleFromClass(train_ds, 3)

A: If you would like to ensure your splits have balanced classes, you can use train_test_split from sklearn.
Assuming you have wrapped your data in a custom Dataset object:
from torch.utils.data import DataLoader, Subset
from sklearn.model_selection import train_test_split

TEST_SIZE = 0.1
BATCH_SIZE = 64
SEED = 42

# generate indices: instead of the actual data we pass in integers instead
train_indices, test_indices, _, _ = train_test_split(
    range(len(data)),
    data.targets,
    stratify=data.targets,
    test_size=TEST_SIZE,
    random_state=SEED
)

# generate subset based on indices
train_split = Subset(data, train_indices)
test_split = Subset(data, test_indices)

# create batches
train_batches = DataLoader(train_split, batch_size=BATCH_SIZE, shuffle=True)
test_batches = DataLoader(test_split, batch_size=BATCH_SIZE)

A: This is the PyTorch Subset class attached, holding the random_split method. Note that this method is the base for the SubsetRandomSampler.
For MNIST, if we use random_split:
loader = DataLoader(
    torchvision.datasets.MNIST('/data/mnist', train=True, download=True,
                               transform=torchvision.transforms.Compose([
                                   torchvision.transforms.ToTensor(),
                                   torchvision.transforms.Normalize(
                                       (0.5,), (0.5,))
                               ])),
    batch_size=16, shuffle=False)

print(loader.dataset.data.shape)
test_ds, valid_ds = torch.utils.data.random_split(loader.dataset, (50000, 10000))
print(test_ds, valid_ds)
print(test_ds.indices, valid_ds.indices)
print(test_ds.indices.shape, valid_ds.indices.shape)

We get:
torch.Size([60000, 28, 28])
<torch.utils.data.dataset.Subset object at 0x0000020FD1880B00> <torch.utils.data.dataset.Subset object at 0x0000020FD1880C50>
tensor([ 1520,  4155, 45472,  ..., 37969, 45782, 34080]) tensor([ 9133, 51600, 22067,  ...,  3950, 37306, 31400])
torch.Size([50000]) torch.Size([10000])

Our test_ds.indices and valid_ds.indices will be random from the range (0, 60000). But if I would like to get a sequence of indices from (0, 49999) and from (50000, 59999) I cannot do that at the moment unfortunately, except this way.
Handy in case you run the MNIST benchmark where it is predefined what should be the test and what should be the validation dataset.
A: Bear in mind that most canonical examples are already split. For instance on this page you will find MNIST. One common belief is that it has 60,000 images. Bang! Wrong! It has 70,000 images, out of which 60,000 are training and 10,000 are validation (test) images.
So for the canonical datasets the flavor of PyTorch is to provide you already split datasets.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, Dataset, TensorDataset
from torch.optim import *
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
import os
import numpy as np
import random

bs = 512

t = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=(0), std=(1))]
)

dl_train = DataLoader(torchvision.datasets.MNIST('/data/mnist', download=True, train=True, transform=t),
                      batch_size=bs, drop_last=True, shuffle=True)
dl_valid = DataLoader(torchvision.datasets.MNIST('/data/mnist', download=True, train=False, transform=t),
                      batch_size=bs, drop_last=True, shuffle=True)

A: In case you want up to X samples per class in the train dataset you can use this code:
def stratify_split(dataset: Dataset, train_samples_per_class: int):
    import collections
    train_indices = []
    val_indices = []
    TRAIN_SAMPLES_PER_CLASS = 10
    target_counter = collections.Counter()
    for idx, data in enumerate(dataset):
        target = data['target']
        target_counter[target] += 1
        if target_counter[target] <= train_samples_per_class:
            train_indices.append(idx)
        else:
            val_indices.append(idx)
    train_dataset = Subset(dataset, train_indices)
    val_dataset = Subset(dataset, val_indices)
    return train_dataset, val_dataset

A: Adding to Fábio Perez's answer, you can provide fractions to the random split. Note that you first split the dataset, not the dataloader.
train_dataset, val_dataset, test_dataset = torch.utils.data.random_split(full_dataset, [0.8, 0.1, 0.1])
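Tying the answers back to the asker's class: once __getitem__ returns a single (face, label) pair, as the second answer suggests, a reproducible 80/20 split plus loaders can be as short as the sketch below. The csv path, split ratio, batch size and seed are all placeholder assumptions; random_split's generator argument is available in reasonably recent PyTorch versions.

import torch
from torch.utils.data import DataLoader, random_split

dataset = CustomDatasetFromCSV("faces.csv")       # hypothetical csv path
n_train = int(0.8 * len(dataset))                 # 80/20 split
n_test = len(dataset) - n_train

train_set, test_set = random_split(
    dataset,
    [n_train, n_test],
    generator=torch.Generator().manual_seed(42),  # makes the split reproducible
)

train_loader = DataLoader(train_set, batch_size=16, shuffle=True)
test_loader = DataLoader(test_set, batch_size=16)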
How do I split a custom dataset into training and test datasets?
import pandas as pd
import numpy as np
import cv2
from torch.utils.data.dataset import Dataset

class CustomDatasetFromCSV(Dataset):
    def __init__(self, csv_path, transform=None):
        self.data = pd.read_csv(csv_path)
        self.labels = pd.get_dummies(self.data['emotion']).as_matrix()
        self.height = 48
        self.width = 48
        self.transform = transform

    def __getitem__(self, index):
        pixels = self.data['pixels'].tolist()
        faces = []
        for pixel_sequence in pixels:
            face = [int(pixel) for pixel in pixel_sequence.split(' ')]
            # print(np.asarray(face).shape)
            face = np.asarray(face).reshape(self.width, self.height)
            face = cv2.resize(face.astype('uint8'), (self.width, self.height))
            faces.append(face.astype('float32'))
        faces = np.asarray(faces)
        faces = np.expand_dims(faces, -1)
        return faces, self.labels

    def __len__(self):
        return len(self.data)

This is what I could manage to do by using references from other repositories. However, I want to split this dataset into train and test. How can I do that inside this class? Or do I need to make a separate class to do that?
[ "Starting in PyTorch 0.4.1 you can use random_split:\ntrain_size = int(0.8 * len(full_dataset))\ntest_size = len(full_dataset) - train_size\ntrain_dataset, test_dataset = torch.utils.data.random_split(full_dataset, [train_size, test_size])\n\n", "Using Pytorch's SubsetRandomSampler:\nimport torch\nimport numpy as np\nfrom torchvision import datasets\nfrom torchvision import transforms\nfrom torch.utils.data.sampler import SubsetRandomSampler\n\nclass CustomDatasetFromCSV(Dataset):\n def __init__(self, csv_path, transform=None):\n self.data = pd.read_csv(csv_path)\n self.labels = pd.get_dummies(self.data['emotion']).as_matrix()\n self.height = 48\n self.width = 48\n self.transform = transform\n\n def __getitem__(self, index):\n # This method should return only 1 sample and label \n # (according to \"index\"), not the whole dataset\n # So probably something like this for you:\n pixel_sequence = self.data['pixels'][index]\n face = [int(pixel) for pixel in pixel_sequence.split(' ')]\n face = np.asarray(face).reshape(self.width, self.height)\n face = cv2.resize(face.astype('uint8'), (self.width, self.height))\n label = self.labels[index]\n\n return face, label\n\n def __len__(self):\n return len(self.labels)\n\n\ndataset = CustomDatasetFromCSV(my_path)\nbatch_size = 16\nvalidation_split = .2\nshuffle_dataset = True\nrandom_seed= 42\n\n# Creating data indices for training and validation splits:\ndataset_size = len(dataset)\nindices = list(range(dataset_size))\nsplit = int(np.floor(validation_split * dataset_size))\nif shuffle_dataset :\n np.random.seed(random_seed)\n np.random.shuffle(indices)\ntrain_indices, val_indices = indices[split:], indices[:split]\n\n# Creating PT data samplers and loaders:\ntrain_sampler = SubsetRandomSampler(train_indices)\nvalid_sampler = SubsetRandomSampler(val_indices)\n\ntrain_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, \n sampler=train_sampler)\nvalidation_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,\n sampler=valid_sampler)\n\n# Usage Example:\nnum_epochs = 10\nfor epoch in range(num_epochs):\n # Train: \n for batch_index, (faces, labels) in enumerate(train_loader):\n # ...\n\n", "Current answers do random splits which has disadvantage that number of samples per class is not guaranteed to be balanced. This is especially problematic when you want to have small number of samples per class. For example, MNIST has 60,000 examples, i.e. 6000 per digit. Assume that you want only 30 examples per digit in your training set. In this case, random split may produce imbalance between classes (one digit with more training data then others). So you want to make sure each digit precisely has only 30 labels. This is called stratified sampling. \nOne way to do this is using sampler interface in Pytorch and sample code is here.\nAnother way to do this is just hack your way through :). 
For example, below is simple implementation for MNIST where ds is MNIST dataset and k is number of samples needed for each class.\ndef sampleFromClass(ds, k):\n class_counts = {}\n train_data = []\n train_label = []\n test_data = []\n test_label = []\n for data, label in ds:\n c = label.item()\n class_counts[c] = class_counts.get(c, 0) + 1\n if class_counts[c] <= k:\n train_data.append(data)\n train_label.append(torch.unsqueeze(label, 0))\n else:\n test_data.append(data)\n test_label.append(torch.unsqueeze(label, 0))\n train_data = torch.cat(train_data)\n for ll in train_label:\n print(ll)\n train_label = torch.cat(train_label)\n test_data = torch.cat(test_data)\n test_label = torch.cat(test_label)\n\n return (TensorDataset(train_data, train_label), \n TensorDataset(test_data, test_label))\n\nYou can use this function like this:\ndef main():\n train_ds = datasets.MNIST('../data', train=True, download=True,\n transform=transforms.Compose([\n transforms.ToTensor()\n ]))\n train_ds, test_ds = sampleFromClass(train_ds, 3)\n\n", "If you would like to ensure your splits have balanced classes, you can use train_test_split from sklearn.\nAssuming you have wrapped your data in a custom Dataset object:\nfrom torch.utils.data import DataLoader, Subset\nfrom sklearn.model_selection import train_test_split\n\nTEST_SIZE = 0.1\nBATCH_SIZE = 64\nSEED = 42\n\n# generate indices: instead of the actual data we pass in integers instead\ntrain_indices, test_indices, _, _ = train_test_split(\n range(len(data)),\n data.targets,\n stratify=data.targets,\n test_size=TEST_SIZE,\n random_state=SEED\n)\n\n# generate subset based on indices\ntrain_split = Subset(data, train_indices)\ntest_split = Subset(data, test_indices)\n\n# create batches\ntrain_batches = DataLoader(train_split, batch_size=BATCH_SIZE, shuffle=True)\ntest_batches = DataLoader(test_split, batch_size=BATCH_SIZE)\n\n", "This is the PyTorch Subset class attached holding the random_split method. Note that this method is base for the SubsetRandomSampler.\n\nFor MNIST if we use random_split:\nloader = DataLoader(\n torchvision.datasets.MNIST('/data/mnist', train=True, download=True,\n transform=torchvision.transforms.Compose([\n torchvision.transforms.ToTensor(),\n torchvision.transforms.Normalize(\n (0.5,), (0.5,))\n ])),\n batch_size=16, shuffle=False)\n\nprint(loader.dataset.data.shape)\ntest_ds, valid_ds = torch.utils.data.random_split(loader.dataset, (50000, 10000))\nprint(test_ds, valid_ds)\nprint(test_ds.indices, valid_ds.indices)\nprint(test_ds.indices.shape, valid_ds.indices.shape)\n\nWe get:\ntorch.Size([60000, 28, 28])\n<torch.utils.data.dataset.Subset object at 0x0000020FD1880B00> <torch.utils.data.dataset.Subset object at 0x0000020FD1880C50>\ntensor([ 1520, 4155, 45472, ..., 37969, 45782, 34080]) tensor([ 9133, 51600, 22067, ..., 3950, 37306, 31400])\ntorch.Size([50000]) torch.Size([10000])\n\nOur test_ds.indices and valid_ds.indices will be random from range (0, 600000). But if I would like to get sequence of indices from (0, 49999) and from (50000, 59999) I cannot do that at the moment unfortunately, except this way. \nHandy in case you run the MNIST benchmark where it is predefined what should be the test and what should be the validation dataset.\n", "Bear in mind that most canonical examples are already spited. For instance on this page you will find MNIST. One common belief is that is has 60.000 images. Bang! Wrong! 
It has 70.000 images out of that 60.000 training and 10.000 validation (test) images.\nSo for the canonical datasets the flavor of PyTorch is to provide you already spited datasets.\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.utils.data import DataLoader, Dataset, TensorDataset\nfrom torch.optim import *\nimport torchvision\nimport torchvision.transforms as transforms\nimport matplotlib.pyplot as plt\nimport os\nimport numpy as np\nimport random\n\nbs=512\n\nt = transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize(mean=(0), std=(1))]\n )\n\ndl_train = DataLoader( torchvision.datasets.MNIST('/data/mnist', download=True, train=True, transform=t), \n batch_size=bs, drop_last=True, shuffle=True)\ndl_valid = DataLoader( torchvision.datasets.MNIST('/data/mnist', download=True, train=False, transform=t), \n batch_size=bs, drop_last=True, shuffle=True)\n\n", "In case you want up to X samples per class in the train dataset you can use this code:\ndef stratify_split(dataset: Dataset, train_samples_per_class: int):\n import collections\n train_indices = []\n val_indices = []\n TRAIN_SAMPLES_PER_CLASS = 10\n target_counter = collections.Counter()\n for idx, data in enumerate(dataset):\n target = data['target']\n target_counter[target] += 1\n if target_counter[target] <= train_samples_per_class:\n train_indices.append(idx)\n else:\n val_indices.append(idx)\n train_dataset = Subset(dataset, train_indices)\n val_dataset = Subset(dataset, val_indices)\n return train_dataset, val_dataset\n\n", "Adding to Fábio Perez answer you can provide fractions to the random split. Note that you first split dataset, not dataloader.\ntrain_dataset, val_dataset, test_dataset = torch.utils.data.random_split(full_dataset, [0.8, 0.1, 0.1])\n\n" ]
[ 183, 143, 25, 19, 6, 0, 0, 0 ]
[]
[]
[ "deep_learning", "python", "pytorch" ]
stackoverflow_0050544730_deep_learning_python_pytorch.txt