Dataset schema:
- content: string (lengths 85 to 101k)
- title: string (lengths 0 to 150)
- question: string (lengths 15 to 48k)
- answers: sequence
- answers_scores: sequence
- non_answers: sequence
- non_answers_scores: sequence
- tags: sequence
- name: string (lengths 35 to 137)
Q: How to get list of columns containing specific values corresponding to a index as a new column in pandas dataframe? I have a pandas dataframe df which looks as follows: A B C D E F G H I J Values A NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN B NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN C yes NaN NaN NaN NaN NaN NaN NaN NaN NaN D NaN yes NaN NaN NaN NaN NaN NaN NaN NaN E NaN ok ok NaN NaN NaN NaN NaN NaN NaN F NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN G NaN NaN NaN ok NaN NaN NaN NaN NaN NaN H NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN I yes NaN NaN NaN NaN NaN NaN NaN NaN NaN J NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN df.to_dict() is as follows: {'A': {'A': nan, 'B': nan, 'C': 'yes', 'D': nan, 'E': nan, 'F': nan, 'G': nan, 'H': nan, 'I': 'yes', 'J': nan}, 'B': {'A': nan, 'B': nan, 'C': nan, 'D': 'yes', 'E': 'ok', 'F': nan, 'G': nan, 'H': nan, 'I': nan, 'J': nan}, 'C': {'A': nan, 'B': nan, 'C': nan, 'D': nan, 'E': 'ok', 'F': nan, 'G': nan, 'H': nan, 'I': nan, 'J': nan}, 'D': {'A': nan, 'B': nan, 'C': nan, 'D': nan, 'E': nan, 'F': nan, 'G': 'ok', 'H': nan, 'I': nan, 'J': nan}, 'E': {'A': nan, 'B': nan, 'C': nan, 'D': nan, 'E': nan, 'F': nan, 'G': nan, 'H': nan, 'I': nan, 'J': nan}, 'F': {'A': nan, 'B': nan, 'C': nan, 'D': nan, 'E': nan, 'F': nan, 'G': nan, 'H': nan, 'I': nan, 'J': nan}, 'G': {'A': nan, 'B': nan, 'C': nan, 'D': nan, 'E': nan, 'F': nan, 'G': nan, 'H': nan, 'I': nan, 'J': nan}, 'H': {'A': nan, 'B': nan, 'C': nan, 'D': nan, 'E': nan, 'F': nan, 'G': nan, 'H': nan, 'I': nan, 'J': nan}, 'I': {'A': nan, 'B': nan, 'C': nan, 'D': nan, 'E': nan, 'F': nan, 'G': nan, 'H': nan, 'I': nan, 'J': nan}, 'J': {'A': nan, 'B': nan, 'C': nan, 'D': nan, 'E': nan, 'F': nan, 'G': nan, 'H': nan, 'I': nan, 'J': nan}, 'To': {'A': '', 'B': '', 'C': 'A, ', 'D': 'B, ', 'E': 'B, C, ', 'F': '', 'G': 'D, ', 'H': '', 'I': 'A, ', 'J': ''}} I'd like to get a new column "To" which corresponding to each row which contains the list of columns having non NaN values such as "yes" or "ok". I did it using the following code: df["To"] = "" for index in df.index: for column in df.columns[:-1]: if pd.isnull(df.loc[index, column]) == False: df.loc[index, "To"] += column + ", " df As shown, I created a new column called "To" and looped through each row and column to fill the "To" column. The resulting dataframe looks as follows: A B C D E F G H I J To Values A NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN B NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN C yes NaN NaN NaN NaN NaN NaN NaN NaN NaN A, D NaN yes NaN NaN NaN NaN NaN NaN NaN NaN B, E NaN ok ok NaN NaN NaN NaN NaN NaN NaN B, C, F NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN G NaN NaN NaN ok NaN NaN NaN NaN NaN NaN D, H NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN I yes NaN NaN NaN NaN NaN NaN NaN NaN NaN A, J NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN I think this is not an effective process and is time-consuming when the dataset is large. Is there any shorter and more efficient way of creating this "To" column in pandas dataframe? A: Dot product of non-NaNness and the columns (suffixed ", ") is a way of doing this: In [242]: df.notna().dot(df.columns + ", ").str[:-2] Out[242]: A B C A D B E B, C F G D H I A J dtype: object What's happening is that, df.notna() is a True/False dataframe; then we take the dot product of it with the column names (", " added). Since True is 1 and False is 0 in numeric context, the dot product behaves like a selector of column names. Then lastly we strip out the trailing ", "s. 
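A minimal, self-contained sketch of the dot-product trick on a small frame (the values here are made up for illustration, not taken from the question):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [np.nan, "yes"], "B": ["ok", np.nan]}, index=["X", "Y"])

# notna() yields a boolean frame; multiplying True/False by the suffixed
# column names and summing across each row concatenates the matching names
to = df.notna().dot(df.columns + ", ").str[:-2]
print(to)
# X    B
# Y    A
```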
A: You can use stack to benefit from the default dropping of NaN values, combined with groupby.agg: df['To'] = (df .stack() .reset_index(-1)['level_1'] .groupby(level=0).agg(','.join) ) Output: A B C D E F G H I J To A NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN B NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN C yes NaN NaN NaN NaN NaN NaN NaN NaN NaN A D NaN yes NaN NaN NaN NaN NaN NaN NaN NaN B E NaN ok ok NaN NaN NaN NaN NaN NaN NaN B,C F NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN G NaN NaN NaN ok NaN NaN NaN NaN NaN NaN D H NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN I yes NaN NaN NaN NaN NaN NaN NaN NaN NaN A J NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
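A small variation on this, as a sketch: to match the question's ", " separator and give rows with no non-NaN values an empty string instead of NaN (assuming df still holds only the original data columns):

```python
df["To"] = (df
    .stack()                          # drops NaN cells by default
    .reset_index(-1)["level_1"]       # inner index level = column names
    .groupby(level=0).agg(", ".join)  # join column names per row
    .reindex(df.index, fill_value="") # rows with no hits get ""
)
```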
How to get a list of columns containing specific values corresponding to an index as a new column in a pandas dataframe?
I have a pandas dataframe df which looks as follows: A B C D E F G H I J Values A NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN B NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN C yes NaN NaN NaN NaN NaN NaN NaN NaN NaN D NaN yes NaN NaN NaN NaN NaN NaN NaN NaN E NaN ok ok NaN NaN NaN NaN NaN NaN NaN F NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN G NaN NaN NaN ok NaN NaN NaN NaN NaN NaN H NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN I yes NaN NaN NaN NaN NaN NaN NaN NaN NaN J NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN df.to_dict() is as follows: {'A': {'A': nan, 'B': nan, 'C': 'yes', 'D': nan, 'E': nan, 'F': nan, 'G': nan, 'H': nan, 'I': 'yes', 'J': nan}, 'B': {'A': nan, 'B': nan, 'C': nan, 'D': 'yes', 'E': 'ok', 'F': nan, 'G': nan, 'H': nan, 'I': nan, 'J': nan}, 'C': {'A': nan, 'B': nan, 'C': nan, 'D': nan, 'E': 'ok', 'F': nan, 'G': nan, 'H': nan, 'I': nan, 'J': nan}, 'D': {'A': nan, 'B': nan, 'C': nan, 'D': nan, 'E': nan, 'F': nan, 'G': 'ok', 'H': nan, 'I': nan, 'J': nan}, 'E': {'A': nan, 'B': nan, 'C': nan, 'D': nan, 'E': nan, 'F': nan, 'G': nan, 'H': nan, 'I': nan, 'J': nan}, 'F': {'A': nan, 'B': nan, 'C': nan, 'D': nan, 'E': nan, 'F': nan, 'G': nan, 'H': nan, 'I': nan, 'J': nan}, 'G': {'A': nan, 'B': nan, 'C': nan, 'D': nan, 'E': nan, 'F': nan, 'G': nan, 'H': nan, 'I': nan, 'J': nan}, 'H': {'A': nan, 'B': nan, 'C': nan, 'D': nan, 'E': nan, 'F': nan, 'G': nan, 'H': nan, 'I': nan, 'J': nan}, 'I': {'A': nan, 'B': nan, 'C': nan, 'D': nan, 'E': nan, 'F': nan, 'G': nan, 'H': nan, 'I': nan, 'J': nan}, 'J': {'A': nan, 'B': nan, 'C': nan, 'D': nan, 'E': nan, 'F': nan, 'G': nan, 'H': nan, 'I': nan, 'J': nan}, 'To': {'A': '', 'B': '', 'C': 'A, ', 'D': 'B, ', 'E': 'B, C, ', 'F': '', 'G': 'D, ', 'H': '', 'I': 'A, ', 'J': ''}} I'd like to get a new column "To" which corresponding to each row which contains the list of columns having non NaN values such as "yes" or "ok". I did it using the following code: df["To"] = "" for index in df.index: for column in df.columns[:-1]: if pd.isnull(df.loc[index, column]) == False: df.loc[index, "To"] += column + ", " df As shown, I created a new column called "To" and looped through each row and column to fill the "To" column. The resulting dataframe looks as follows: A B C D E F G H I J To Values A NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN B NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN C yes NaN NaN NaN NaN NaN NaN NaN NaN NaN A, D NaN yes NaN NaN NaN NaN NaN NaN NaN NaN B, E NaN ok ok NaN NaN NaN NaN NaN NaN NaN B, C, F NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN G NaN NaN NaN ok NaN NaN NaN NaN NaN NaN D, H NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN I yes NaN NaN NaN NaN NaN NaN NaN NaN NaN A, J NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN I think this is not an effective process and is time-consuming when the dataset is large. Is there any shorter and more efficient way of creating this "To" column in pandas dataframe?
[ "Dot product of non-NaNness and the columns (suffixed \", \") is a way of doing this:\nIn [242]: df.notna().dot(df.columns + \", \").str[:-2]\nOut[242]:\nA\nB\nC A\nD B\nE B, C\nF\nG D\nH\nI A\nJ\ndtype: object\n\nWhat's happening is that, df.notna() is a True/False dataframe; then we take the dot product of it with the column names (\", \" added). Since True is 1 and False is 0 in numeric context, the dot product behaves like a selector of column names. Then lastly we strip out the trailing \", \"s.\n", "You can use stack to benefit from the default dropping of NaN values, combined with groupby.agg:\ndf['To'] = (df\n .stack()\n .reset_index(-1)['level_1']\n .groupby(level=0).agg(','.join)\n )\n\nOutput:\n A B C D E F G H I J To\nA NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN\nB NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN\nC yes NaN NaN NaN NaN NaN NaN NaN NaN NaN A\nD NaN yes NaN NaN NaN NaN NaN NaN NaN NaN B\nE NaN ok ok NaN NaN NaN NaN NaN NaN NaN B,C\nF NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN\nG NaN NaN NaN ok NaN NaN NaN NaN NaN NaN D\nH NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN\nI yes NaN NaN NaN NaN NaN NaN NaN NaN NaN A\nJ NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN\n\n" ]
[ 4, 3 ]
[]
[]
[ "dataframe", "loops", "pandas", "python", "python_3.x" ]
stackoverflow_0074619638_dataframe_loops_pandas_python_python_3.x.txt
Q: SWIG: Passing a list as a vector pointer to a constructor Trying to use swig to pass a python list as input for c++ class with a (one of many) constructor taking a std::vector<double> * as input. Changing the C++ implementation of the codebase is not possible. <EDIT> : What I am looking for is a way to "automatically" process a python list to a vector<double> * or say for example: fake_class.cpp class FakeClass { public: std::vector<double>* m_v; FakeClass(std::vector<double>* v) : m_v {v} {} void SomeFunction(); // A function doing some with said pointer (m_v) [...] }; and then using this in python (say the compiled extension is fakeExample: import fakeExample as ex a = ex.FakeClass([1.2, 3.1, 4.1]) print(a.m_v) a.SomeFunction() without crashing. </EDIT> What I've tried: Code: example.hpp #include <vector> #include <iostream> class SampleClass { public: std::vector<double>* m_v; SampleClass(std::vector<double>* v) : m_v {v} { std::cout << "\nnon default constructor!\n";} SampleClass() {std::cout << "default constructor!\n";} void SampleMethod(std::vector<double>* arg); void SampleMethod2(std::vector<double>* arg); void print_member(); }; example.cpp #include "example.hpp" #include <iostream> void SampleClass::SampleMethod(std::vector<double>* arg) { for (auto x : (*arg)) std::cout << x << " "; }; void SampleClass::SampleMethod2(std::vector<double>* arg) { auto vr = arg; for (size_t i = 0; i < (*vr).size(); i++) { (*vr)[i] += 1; } for (auto x : (*vr)) std::cout<< x << "\n"; } void SampleClass::print_member() { for (auto x : (*m_v)) { std::cout << x << " "; } } example.i %module example %{ #include "example.hpp" %} %include "typemaps.i" %include "std_vector.i" %template(doublevector) std::vector<double>; /* NOTE: Is this required? */ %naturalvar Sampleclass; /* NOTE: This mostly works but not for constructor */ %apply std::vector<double> *INPUT {std::vector<double>* }; %include "example.hpp" A Makefile (s_test.py is a simple test script I am not including here). all: clean build run clean: rm -rf *.o *_wrap.* *.so __pycache__/ *.gch example.py build: swig -python -c++ example.i g++ -c -fPIC example.cpp example_wrap.cxx example.hpp -I/usr/include/python3.8 g++ -shared example.o example_wrap.o -o _example.so run: python s_test.py build_cpp: g++ example.cpp example.hpp main.cpp -o test_this.o And finally after compiling etc : >>> import example as ex >>> a = ex.SampleClass() default constructor! >>> a.SampleMethod([1.2, 3.1, 4.1]) 1.2 3.1 4.1 >>> a.SampleMethod2([3.1, 2.1]) 4.1 3.1 # Works fine(or at least as expected) until here. >>> b = ex.SampleClass([1.2]) non default constructor! >>> b.m_v <example.doublevector; proxy of <Swig Object of type 'std::vector< double > *' at SOME_ADDRESS> > >>> b.m_v.size() 17958553321119670438 >>> b.print_member() >>> [... Lots of zeroes here ...]0 0 0 0[1] <some_number> segmentation fault python And exits. Thank you :) A: The Python list passed into the non-default constructor gets converted to a temporary SWIG proxy of a vector<double>* and that pointer is saved by the constructor into SampleClass's m_v member, but the pointer no longer exists when the constructor returns. If you create a persistent doublevector and make sure it stays in scope, the code works. I built the original code in the question for the following example: >>> import example as ex >>> v = ex.doublevector([1,2,3,4]) >>> s = ex.SampleClass(v) non default constructor! >>> s.m_v.size() 4 >>> list(s.m_v) [1.0, 2.0, 3.0, 4.0] As long as v exists, s.m_v will be valid.
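Building on that answer, a hedged sketch of a small factory that accepts a plain Python list but keeps the vector proxy alive by attaching it to the wrapper object. The `_keepalive` attribute name is invented here, and this only works if the SWIG proxy allows arbitrary attributes (i.e. SWIG was not run with -builtin):

```python
import example as ex

def make_sample(values):
    v = ex.doublevector(values)  # persistent C++ vector proxy, not a temporary
    s = ex.SampleClass(v)
    s._keepalive = v             # tie the vector's lifetime to the object
    return s

s = make_sample([1.2, 3.1, 4.1])
s.print_member()                 # 1.2 3.1 4.1
```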
SWIG: Passing a list as a vector pointer to a constructor
Trying to use swig to pass a python list as input for c++ class with a (one of many) constructor taking a std::vector<double> * as input. Changing the C++ implementation of the codebase is not possible. <EDIT> : What I am looking for is a way to "automatically" process a python list to a vector<double> * or say for example: fake_class.cpp class FakeClass { public: std::vector<double>* m_v; FakeClass(std::vector<double>* v) : m_v {v} {} void SomeFunction(); // A function doing some with said pointer (m_v) [...] }; and then using this in python (say the compiled extension is fakeExample: import fakeExample as ex a = ex.FakeClass([1.2, 3.1, 4.1]) print(a.m_v) a.SomeFunction() without crashing. </EDIT> What I've tried: Code: example.hpp #include <vector> #include <iostream> class SampleClass { public: std::vector<double>* m_v; SampleClass(std::vector<double>* v) : m_v {v} { std::cout << "\nnon default constructor!\n";} SampleClass() {std::cout << "default constructor!\n";} void SampleMethod(std::vector<double>* arg); void SampleMethod2(std::vector<double>* arg); void print_member(); }; example.cpp #include "example.hpp" #include <iostream> void SampleClass::SampleMethod(std::vector<double>* arg) { for (auto x : (*arg)) std::cout << x << " "; }; void SampleClass::SampleMethod2(std::vector<double>* arg) { auto vr = arg; for (size_t i = 0; i < (*vr).size(); i++) { (*vr)[i] += 1; } for (auto x : (*vr)) std::cout<< x << "\n"; } void SampleClass::print_member() { for (auto x : (*m_v)) { std::cout << x << " "; } } example.i %module example %{ #include "example.hpp" %} %include "typemaps.i" %include "std_vector.i" %template(doublevector) std::vector<double>; /* NOTE: Is this required? */ %naturalvar Sampleclass; /* NOTE: This mostly works but not for constructor */ %apply std::vector<double> *INPUT {std::vector<double>* }; %include "example.hpp" A Makefile (s_test.py is a simple test script I am not including here). all: clean build run clean: rm -rf *.o *_wrap.* *.so __pycache__/ *.gch example.py build: swig -python -c++ example.i g++ -c -fPIC example.cpp example_wrap.cxx example.hpp -I/usr/include/python3.8 g++ -shared example.o example_wrap.o -o _example.so run: python s_test.py build_cpp: g++ example.cpp example.hpp main.cpp -o test_this.o And finally after compiling etc : >>> import example as ex >>> a = ex.SampleClass() default constructor! >>> a.SampleMethod([1.2, 3.1, 4.1]) 1.2 3.1 4.1 >>> a.SampleMethod2([3.1, 2.1]) 4.1 3.1 # Works fine(or at least as expected) until here. >>> b = ex.SampleClass([1.2]) non default constructor! >>> b.m_v <example.doublevector; proxy of <Swig Object of type 'std::vector< double > *' at SOME_ADDRESS> > >>> b.m_v.size() 17958553321119670438 >>> b.print_member() >>> [... Lots of zeroes here ...]0 0 0 0[1] <some_number> segmentation fault python And exits. Thank you :)
[ "The Python list passed into the non-default constructor gets converted to a temporary SWIG proxy of a vector<double>* and that pointer is saved by the constructor into SampleClass's m_v member, but the pointer no longer exists when the constructor returns. If you create a persistent doublevector and make sure it stays in scope, the code works.\nI built the original code in the question for the following example:\n>>> import example as ex\n>>> v = ex.doublevector([1,2,3,4])\n>>> s = ex.SampleClass(v)\n\nnon default constructor!\n>>> s.m_v.size()\n4\n>>> list(s.m_v)\n[1.0, 2.0, 3.0, 4.0]\n\nAs long as v exists, s.m_v will be valid.\n" ]
[ 0 ]
[]
[]
[ "c++", "python", "swig" ]
stackoverflow_0074616783_c++_python_swig.txt
Q: How can I join two dataframes in pandas that have different no of rows and different columns? I am trying to build a pandas DataFrame by merging 2 DataFrames consisting of different number of rows. I have attached my code below. Im trying to join these 2 dataframes together but I'm getting an error stating: KeyError: 'Number of Mutations' #!/usr/bin/env python import pandas as pd df1= pd.DataFrame({"Mutations": ["A>T", "A>G", "A>C", "T>A", "T>C", "T>G", "C>A", "C>T", "C>G", "G>A", "G>T", "G>C"], "Number of Mutations": [213, 659, 281, 204, 627, 208, 351, 1004, 360, 1054, 323, 351]}) df2= pd.DataFrame({"Number of Bases":["A = 42239", "T = 55005", "G = 46060" , "C = 45422"]}) mdf=df1.merge(df2, on= "Number of Mutations" , how="outer") print(mdf) A: I don't know why you want to combine these df's in the first place, since the rows aren't connected at all. But if you just want to align them next to each other, you want to use pd.concat out = pd.concat([df1, df2], axis=1) print(out) Mutations Number of Mutations Number of Bases 0 A>T 213 A = 42239 1 A>G 659 T = 55005 2 A>C 281 G = 46060 3 T>A 204 C = 45422 4 T>C 627 NaN 5 T>G 208 NaN 6 C>A 351 NaN 7 C>T 1004 NaN 8 C>G 360 NaN 9 G>A 1054 NaN 10 G>T 323 NaN 11 G>C 351 NaN If there are different indices and you just want to have df2 at the beginning till there is no more data, then you need to use reset_index() before you concat them. out = pd.concat([df1.reset_index(drop=True), df2.reset_index(drop=True)], axis=1)
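A quick illustration of why that reset_index step matters: concat with axis=1 aligns on index labels, not on position (the toy frames here are invented for this sketch):

```python
import pandas as pd

a = pd.DataFrame({"x": [1, 2]}, index=[0, 1])
b = pd.DataFrame({"y": [10, 20]}, index=[5, 6])

# Union of the two indexes -> 4 rows, NaN wherever a label is missing
print(pd.concat([a, b], axis=1))

# After resetting, both frames share positions 0..1 -> 2 rows side by side
print(pd.concat([a.reset_index(drop=True), b.reset_index(drop=True)], axis=1))
```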
How can I join two dataframes in pandas that have different numbers of rows and different columns?
I am trying to build a pandas DataFrame by merging 2 DataFrames consisting of different number of rows. I have attached my code below. Im trying to join these 2 dataframes together but I'm getting an error stating: KeyError: 'Number of Mutations' #!/usr/bin/env python import pandas as pd df1= pd.DataFrame({"Mutations": ["A>T", "A>G", "A>C", "T>A", "T>C", "T>G", "C>A", "C>T", "C>G", "G>A", "G>T", "G>C"], "Number of Mutations": [213, 659, 281, 204, 627, 208, 351, 1004, 360, 1054, 323, 351]}) df2= pd.DataFrame({"Number of Bases":["A = 42239", "T = 55005", "G = 46060" , "C = 45422"]}) mdf=df1.merge(df2, on= "Number of Mutations" , how="outer") print(mdf)
[ "I don't know why you want to combine these df's in the first place, since the rows aren't connected at all. But if you just want to align them next to each other, you want to use pd.concat\nout = pd.concat([df1, df2], axis=1)\nprint(out)\n\n Mutations Number of Mutations Number of Bases\n0 A>T 213 A = 42239\n1 A>G 659 T = 55005\n2 A>C 281 G = 46060\n3 T>A 204 C = 45422\n4 T>C 627 NaN\n5 T>G 208 NaN\n6 C>A 351 NaN\n7 C>T 1004 NaN\n8 C>G 360 NaN\n9 G>A 1054 NaN\n10 G>T 323 NaN\n11 G>C 351 NaN\n\nIf there are different indices and you just want to have df2 at the beginning till there is no more data, then you need to use reset_index() before you concat them.\nout = pd.concat([df1.reset_index(drop=True), df2.reset_index(drop=True)], axis=1)\n\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074619081_dataframe_pandas_python.txt
Q: Trouble waiting for changes to complete that are triggered by Python Playwright `select_option` I'm trying to scrape a site that reports internet service availability by address. Addresses can be selected from a list created for a specific postcode. After an address is selected, a table is updated with the availability of various services. My problem is that I cannot work out how to spot that the table has been updated after an address is selected. Presumably there's some event on some object that I can wait for, but it's not clear to me what. Here's my simplistic example - it's reasonably complete to avoid being too abstract: from playwright.sync_api import Playwright, sync_playwright, expect import time ign = ["42,", "44,", *list(map(lambda x: f'{chr(x)},', range(ord("A"), ord("J"))))] def run(playwright: Playwright) -> None: browser = playwright.chromium.launch(headless=False) context = browser.new_context() page = context.new_page() page.goto("https://checker.ofcom.org.uk/en-gb/broadband-coverage") # get rid for cookie legals page.get_by_role("button", name="Reject optional cookies and close").click() # set suitable postcode page.get_by_placeholder("Postcode").click() page.get_by_placeholder("Postcode").fill("tn13 1xt") btn = page.get_by_role("button", name="Set postcode") btn.click() # find available addresses opts = sorted (page.get_by_role("combobox", name="Select your address").inner_text().split ('\n')[1:]) # remove noise: too many addresses for easy inspection for opt in opts.copy(): for ig in ign: if ig in opt: try: opts.remove(opt) except ValueError: print (f"already removed {opt}") print (opts) for opt in opts: loc = page.get_by_role("combobox", name="Select your address").select_option(label=opt) # if the code time.sleep(5) here, the right values are captured page.get_by_role("combobox", name="Select your address").wait_for() row = page.get_by_role("row", name="Ultrafast") row.wait_for() cont = row.text_content() print (f"status of Ultrafast for {opt}: {cont}") context.close() browser.close() with sync_playwright() as playwright: run(playwright) I get the right answers, but as Eric Morcambe said, not necessarily in the right order, unless I put in an explicit sleep, which is counter to the philosophy of Playwright. To make this practical, I'll have to put it into a scrapy framework and use the async api, but I need to get something working first. Suggestions gratefully received. A: Based on @ggorlen's suggestion, I spotted that the select_option triggered a request, which I was able to trigger on, and validate, thus: with page.expect_response(lambda response: response.url) as response_info: assert loc[0] in response_info.value.url, f"wrong location {response_info.value.url}, expecting {loc}" From this page: https://playwright.dev/python/docs/network#variations But note that there seemed to be a bug on that page as token in in the lambda function isn't relevant. The code checks that the query relates to the address of interest.
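For what it's worth, `lambda response: response.url` is truthy for every response, so it matches the first response of any kind. A sketch of a tighter predicate follows; the URL fragment is an assumption about the checker's API, not something taken from the site:

```python
with page.expect_response(
    lambda r: "broadband-coverage" in r.url and r.status == 200
) as response_info:
    page.get_by_role("combobox", name="Select your address").select_option(label=opt)

row = page.get_by_role("row", name="Ultrafast")
print(f"status of Ultrafast for {opt}: {row.text_content()}")
```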
Trouble waiting for changes to complete that are triggered by Python Playwright `select_option`
I'm trying to scrape a site that reports internet service availability by address. Addresses can be selected from a list created for a specific postcode. After an address is selected, a table is updated with the availability of various services. My problem is that I cannot work out how to spot that the table has been updated after an address is selected. Presumably there's some event on some object that I can wait for, but it's not clear to me what. Here's my simplistic example - it's reasonably complete to avoid being too abstract: from playwright.sync_api import Playwright, sync_playwright, expect import time ign = ["42,", "44,", *list(map(lambda x: f'{chr(x)},', range(ord("A"), ord("J"))))] def run(playwright: Playwright) -> None: browser = playwright.chromium.launch(headless=False) context = browser.new_context() page = context.new_page() page.goto("https://checker.ofcom.org.uk/en-gb/broadband-coverage") # get rid for cookie legals page.get_by_role("button", name="Reject optional cookies and close").click() # set suitable postcode page.get_by_placeholder("Postcode").click() page.get_by_placeholder("Postcode").fill("tn13 1xt") btn = page.get_by_role("button", name="Set postcode") btn.click() # find available addresses opts = sorted (page.get_by_role("combobox", name="Select your address").inner_text().split ('\n')[1:]) # remove noise: too many addresses for easy inspection for opt in opts.copy(): for ig in ign: if ig in opt: try: opts.remove(opt) except ValueError: print (f"already removed {opt}") print (opts) for opt in opts: loc = page.get_by_role("combobox", name="Select your address").select_option(label=opt) # if the code time.sleep(5) here, the right values are captured page.get_by_role("combobox", name="Select your address").wait_for() row = page.get_by_role("row", name="Ultrafast") row.wait_for() cont = row.text_content() print (f"status of Ultrafast for {opt}: {cont}") context.close() browser.close() with sync_playwright() as playwright: run(playwright) I get the right answers, but as Eric Morcambe said, not necessarily in the right order, unless I put in an explicit sleep, which is counter to the philosophy of Playwright. To make this practical, I'll have to put it into a scrapy framework and use the async api, but I need to get something working first. Suggestions gratefully received.
[ "Based on @ggorlen's suggestion, I spotted that the select_option triggered a request, which I was able to trigger on, and validate, thus:\n with page.expect_response(lambda response: response.url) as response_info:\n assert loc[0] in response_info.value.url, f\"wrong location {response_info.value.url}, expecting {loc}\"\n\n\nFrom this page:\nhttps://playwright.dev/python/docs/network#variations\nBut note that there seemed to be a bug on that page as token in in the lambda function isn't relevant. The code checks that the query relates to the address of interest.\n" ]
[ 0 ]
[]
[]
[ "playwright", "playwright_python", "python" ]
stackoverflow_0074618690_playwright_playwright_python_python.txt
Q: How do you type-hint class prototypes that don't otherwise exist? I have legacy code with inheriting dataclasses: @dataclass class Base: a: int @dataclass class Derived1(Base): b: int @dataclass class Derived2(Base): b: int I want to use Python type hints so that methods know when they're getting something with a b attribute. However, I cannot import actual Derived1 or Derived2. So I want something like: def uses_b(b_supporting_object: TIsBaseWithB): ... How do you define a TIsBaseWithB? Is NewType related? A: Use typing.Protocol: from typing import Protocol class SupportsB(Protocol): b: int
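A short usage sketch: because Protocol typing is structural, the legacy dataclasses satisfy SupportsB without importing or subclassing it (Derived1 here stands in for the real legacy class):

```python
from dataclasses import dataclass
from typing import Protocol


class SupportsB(Protocol):
    b: int


def uses_b(obj: SupportsB) -> int:
    return obj.b + 1


@dataclass
class Derived1:  # stand-in for the legacy class
    a: int
    b: int


print(uses_b(Derived1(a=0, b=41)))  # 42; a type checker accepts this structurally
```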
How do you type-hint class prototypes that don't otherwise exist?
I have legacy code with inheriting dataclasses: @dataclass class Base: a: int @dataclass class Derived1(Base): b: int @dataclass class Derived2(Base): b: int I want to use Python type hints so that methods know when they're getting something with a b attribute. However, I cannot import actual Derived1 or Derived2. So I want something like: def uses_b(b_supporting_object: TIsBaseWithB): ... How do you define a TIsBaseWithB? Is NewType related?
[ "Use typing.Protocol:\nfrom typing import Protocol\n\n\nclass SupportsB(Protocol):\n b: int\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_typing" ]
stackoverflow_0074619783_python_python_typing.txt
Q: How to label edges and avoid the edge overlapping in MultiDiGraph/DiGraph? (Networkx) Here is my code now G = nx.from_pandas_edgelist(data, source='grad', target='to', edge_attr='count', create_using=nx.DiGraph()) weight = nx.get_edge_attributes(G, 'count') pos = nx.shell_layout(G, scale=1) nx.draw_networkx_nodes(G, pos, node_size=300, node_color='lightblue') nx.draw_networkx_labels(G, pos=pos, font_color='red') nx.draw_networkx_edges(G, pos=pos, edgelist=G.edges(), edge_color='black', connectionstyle='arc3, rad = 0.1') nx.draw_networkx_edge_labels(G, pos=pos, edge_labels=weight) plt.show() and the Result plot serverl problems in this plot: edge labels is not fully displayed,some edge doesn't have a label edge labels are left on their own edge since I add the curve to the edge to avoid the edge overlapping I think the edges which are not bidirectional don't need the curve, how to show them more neatly? The data.head(50).to_dict(): {'grad': {0: 'CUHK', 1: 'CUHK', 2: 'CUHK', 3: 'CUHK', 4: 'CUHK', 5: 'CUHK', 6: 'CUHK', 7: 'CUHK', 8: 'CityU', 9: 'CityU', 10: 'CityU', 11: 'CityU', 12: 'CityU', 13: 'CityU', 14: 'CityU', 15: 'HKBU', 16: 'HKU', 17: 'HKU', 18: 'HKU', 19: 'HKU', 20: 'HKU', 21: 'HKU', 22: 'HKU', 23: 'HKUST', 24: 'HKUST', 25: 'HKUST', 26: 'HKUST', 27: 'HKUST', 28: 'HKUST', 29: 'HKUST', 30: 'HKUST', 31: 'Low Frequency', 32: 'Low Frequency', 33: 'Low Frequency', 34: 'Low Frequency', 35: 'Low Frequency', 36: 'Low Frequency', 37: 'Low Frequency', 38: 'Low Frequency', 39: 'PolyU', 40: 'PolyU', 41: 'PolyU', 42: 'PolyU', 43: 'PolyU', 44: 'PolyU'}, 'to': {0: 'CUHK', 1: 'CityU', 2: 'EduHK', 3: 'HKBU', 4: 'HKU', 5: 'HKUST', 6: 'LingU', 7: 'PolyU', 8: 'CityU', 9: 'EduHK', 10: 'HKBU', 11: 'HKU', 12: 'HKUST', 13: 'LingU', 14: 'PolyU', 15: 'HKBU', 16: 'CUHK', 17: 'CityU', 18: 'EduHK', 19: 'HKBU', 20: 'HKU', 21: 'HKUST', 22: 'PolyU', 23: 'CUHK', 24: 'CityU', 25: 'EduHK', 26: 'HKBU', 27: 'HKU', 28: 'HKUST', 29: 'LingU', 30: 'PolyU', 31: 'CUHK', 32: 'CityU', 33: 'EduHK', 34: 'HKBU', 35: 'HKU', 36: 'HKUST', 37: 'LingU', 38: 'PolyU', 39: 'CityU', 40: 'EduHK', 41: 'HKBU', 42: 'HKU', 43: 'LingU', 44: 'PolyU'}, 'count': {0: 13, 1: 6, 2: 3, 3: 6, 4: 5, 5: 3, 6: 2, 7: 6, 8: 4, 9: 1, 10: 5, 11: 2, 12: 1, 13: 2, 14: 7, 15: 2, 16: 2, 17: 4, 18: 3, 19: 1, 20: 17, 21: 3, 22: 9, 23: 4, 24: 2, 25: 2, 26: 4, 27: 2, 28: 4, 29: 4, 30: 6, 31: 76, 32: 73, 33: 1, 34: 16, 35: 57, 36: 46, 37: 3, 38: 69, 39: 1, 40: 2, 41: 3, 42: 1, 43: 1, 44: 23}} A: Here's a solution to issues 1 and 2. In my version of networkx, self-loops are displayed. 
import pandas as pd import networkx as nx import numpy as np import matplotlib.pyplot as plt d = {'grad': {0: 'CUHK', 1: 'CUHK', 2: 'CUHK', 3: 'CUHK', 4: 'CUHK', 5: 'CUHK', 6: 'CUHK', 7: 'CUHK', 8: 'CityU', 9: 'CityU', 10: 'CityU', 11: 'CityU', 12: 'CityU', 13: 'CityU', 14: 'CityU', 15: 'HKBU', 16: 'HKU', 17: 'HKU', 18: 'HKU', 19: 'HKU', 20: 'HKU', 21: 'HKU', 22: 'HKU', 23: 'HKUST', 24: 'HKUST', 25: 'HKUST', 26: 'HKUST', 27: 'HKUST', 28: 'HKUST', 29: 'HKUST', 30: 'HKUST', 31: 'Low Frequency', 32: 'Low Frequency', 33: 'Low Frequency', 34: 'Low Frequency', 35: 'Low Frequency', 36: 'Low Frequency', 37: 'Low Frequency', 38: 'Low Frequency', 39: 'PolyU', 40: 'PolyU', 41: 'PolyU', 42: 'PolyU', 43: 'PolyU', 44: 'PolyU'}, 'to': {0: 'CUHK', 1: 'CityU', 2: 'EduHK', 3: 'HKBU', 4: 'HKU', 5: 'HKUST', 6: 'LingU', 7: 'PolyU', 8: 'CityU', 9: 'EduHK', 10: 'HKBU', 11: 'HKU', 12: 'HKUST', 13: 'LingU', 14: 'PolyU', 15: 'HKBU', 16: 'CUHK', 17: 'CityU', 18: 'EduHK', 19: 'HKBU', 20: 'HKU', 21: 'HKUST', 22: 'PolyU', 23: 'CUHK', 24: 'CityU', 25: 'EduHK', 26: 'HKBU', 27: 'HKU', 28: 'HKUST', 29: 'LingU', 30: 'PolyU', 31: 'CUHK', 32: 'CityU', 33: 'EduHK', 34: 'HKBU', 35: 'HKU', 36: 'HKUST', 37: 'LingU', 38: 'PolyU', 39: 'CityU', 40: 'EduHK', 41: 'HKBU', 42: 'HKU', 43: 'LingU', 44: 'PolyU'}, 'count': {0: 13, 1: 6, 2: 3, 3: 6, 4: 5, 5: 3, 6: 2, 7: 6, 8: 4, 9: 1, 10: 5, 11: 2, 12: 1, 13: 2, 14: 7, 15: 2, 16: 2, 17: 4, 18: 3, 19: 1, 20: 17, 21: 3, 22: 9, 23: 4, 24: 2, 25: 2, 26: 4, 27: 2, 28: 4, 29: 4, 30: 6, 31: 76, 32: 73, 33: 1, 34: 16, 35: 57, 36: 46, 37: 3, 38: 69, 39: 1, 40: 2, 41: 3, 42: 1, 43: 1, 44: 23}} data = pd.DataFrame(d) data = data[data['grad']!='Low Frequency'] rad = .2 conn_style = f'arc3,rad={rad}' def offset(d, pos, dist = rad/2, loop_shift = .2): for (u,v),obj in d.items(): if u!=v: par = dist*(pos[v] - pos[u]) dx,dy = par[1],-par[0] x,y = obj.get_position() obj.set_position((x+dx,y+dy)) else: x,y = obj.get_position() obj.set_position((x,y+loop_shift)) plt.figure(figsize = (20,10)) G = nx.from_pandas_edgelist(data, source='grad', target='to', edge_attr='count', create_using=nx.DiGraph()) weight = nx.get_edge_attributes(G, 'count') pos = nx.shell_layout(G, scale=1) nx.draw_networkx_nodes(G, pos, node_size=300, node_color='lightblue') nx.draw_networkx_labels(G, pos=pos, font_color='red') nx.draw_networkx_edges(G, pos=pos, edgelist=G.edges(), edge_color='black', connectionstyle=conn_style) d = nx.draw_networkx_edge_labels(G, pos=pos, edge_labels=weight) offset(d,pos) plt.gca().set_aspect('equal') plt.show() The result I get: Here's what I get if I include the Low Frequency node and set rad to .1 instead of .2. 
Here's an approach that doesn't change the aspect ratio: import pandas as pd import networkx as nx import numpy as np import matplotlib.pyplot as plt d = {'grad': {0: 'CUHK', 1: 'CUHK', 2: 'CUHK', 3: 'CUHK', 4: 'CUHK', 5: 'CUHK', 6: 'CUHK', 7: 'CUHK', 8: 'CityU', 9: 'CityU', 10: 'CityU', 11: 'CityU', 12: 'CityU', 13: 'CityU', 14: 'CityU', 15: 'HKBU', 16: 'HKU', 17: 'HKU', 18: 'HKU', 19: 'HKU', 20: 'HKU', 21: 'HKU', 22: 'HKU', 23: 'HKUST', 24: 'HKUST', 25: 'HKUST', 26: 'HKUST', 27: 'HKUST', 28: 'HKUST', 29: 'HKUST', 30: 'HKUST', 31: 'Low Frequency', 32: 'Low Frequency', 33: 'Low Frequency', 34: 'Low Frequency', 35: 'Low Frequency', 36: 'Low Frequency', 37: 'Low Frequency', 38: 'Low Frequency', 39: 'PolyU', 40: 'PolyU', 41: 'PolyU', 42: 'PolyU', 43: 'PolyU', 44: 'PolyU'}, 'to': {0: 'CUHK', 1: 'CityU', 2: 'EduHK', 3: 'HKBU', 4: 'HKU', 5: 'HKUST', 6: 'LingU', 7: 'PolyU', 8: 'CityU', 9: 'EduHK', 10: 'HKBU', 11: 'HKU', 12: 'HKUST', 13: 'LingU', 14: 'PolyU', 15: 'HKBU', 16: 'CUHK', 17: 'CityU', 18: 'EduHK', 19: 'HKBU', 20: 'HKU', 21: 'HKUST', 22: 'PolyU', 23: 'CUHK', 24: 'CityU', 25: 'EduHK', 26: 'HKBU', 27: 'HKU', 28: 'HKUST', 29: 'LingU', 30: 'PolyU', 31: 'CUHK', 32: 'CityU', 33: 'EduHK', 34: 'HKBU', 35: 'HKU', 36: 'HKUST', 37: 'LingU', 38: 'PolyU', 39: 'CityU', 40: 'EduHK', 41: 'HKBU', 42: 'HKU', 43: 'LingU', 44: 'PolyU'}, 'count': {0: 13, 1: 6, 2: 3, 3: 6, 4: 5, 5: 3, 6: 2, 7: 6, 8: 4, 9: 1, 10: 5, 11: 2, 12: 1, 13: 2, 14: 7, 15: 2, 16: 2, 17: 4, 18: 3, 19: 1, 20: 17, 21: 3, 22: 9, 23: 4, 24: 2, 25: 2, 26: 4, 27: 2, 28: 4, 29: 4, 30: 6, 31: 76, 32: 73, 33: 1, 34: 16, 35: 57, 36: 46, 37: 3, 38: 69, 39: 1, 40: 2, 41: 3, 42: 1, 43: 1, 44: 23}} data = pd.DataFrame(d) rad = .1 conn_style = f'arc3,rad={rad}' def offset(d, pos, dist = rad/2, loop_shift = .2, asp = 1): for (u,v),obj in d.items(): if u!=v: par = dist*(pos[v] - pos[u]) dx,dy = par[1]*asp,-par[0]/asp x,y = obj.get_position() obj.set_position((x+dx,y+dy)) else: x,y = obj.get_position() obj.set_position((x,y+loop_shift)) def sub(a,b): return a-b def get_aspect(ax): # Total figure size figW, figH = ax.get_figure().get_size_inches() # Axis size on figure _, _, w, h = ax.get_position().bounds # Ratio of display units disp_ratio = (figH * h) / (figW * w) # Ratio of data units # Negative over negative because of the order of subtraction data_ratio = sub(*ax.get_ylim()) / sub(*ax.get_xlim()) return disp_ratio / data_ratio plt.figure(figsize = (20,10)) G = nx.from_pandas_edgelist(data, source='grad', target='to', edge_attr='count', create_using=nx.DiGraph()) weight = nx.get_edge_attributes(G, 'count') pos = nx.shell_layout(G, scale=1) nx.draw_networkx_nodes(G, pos, node_size=300, node_color='lightblue') nx.draw_networkx_labels(G, pos=pos, font_color='red') nx.draw_networkx_edges(G, pos=pos, edgelist=G.edges(), edge_color='black', connectionstyle=conn_style) d = nx.draw_networkx_edge_labels(G, pos=pos, edge_labels=weight) offset(d, pos, asp = get_aspect(plt.gca())) plt.show() Resulting figure:
How to label edges and avoid edge overlap in a MultiDiGraph/DiGraph? (Networkx)
Here is my code now G = nx.from_pandas_edgelist(data, source='grad', target='to', edge_attr='count', create_using=nx.DiGraph()) weight = nx.get_edge_attributes(G, 'count') pos = nx.shell_layout(G, scale=1) nx.draw_networkx_nodes(G, pos, node_size=300, node_color='lightblue') nx.draw_networkx_labels(G, pos=pos, font_color='red') nx.draw_networkx_edges(G, pos=pos, edgelist=G.edges(), edge_color='black', connectionstyle='arc3, rad = 0.1') nx.draw_networkx_edge_labels(G, pos=pos, edge_labels=weight) plt.show() and the Result plot serverl problems in this plot: edge labels is not fully displayed,some edge doesn't have a label edge labels are left on their own edge since I add the curve to the edge to avoid the edge overlapping I think the edges which are not bidirectional don't need the curve, how to show them more neatly? The data.head(50).to_dict(): {'grad': {0: 'CUHK', 1: 'CUHK', 2: 'CUHK', 3: 'CUHK', 4: 'CUHK', 5: 'CUHK', 6: 'CUHK', 7: 'CUHK', 8: 'CityU', 9: 'CityU', 10: 'CityU', 11: 'CityU', 12: 'CityU', 13: 'CityU', 14: 'CityU', 15: 'HKBU', 16: 'HKU', 17: 'HKU', 18: 'HKU', 19: 'HKU', 20: 'HKU', 21: 'HKU', 22: 'HKU', 23: 'HKUST', 24: 'HKUST', 25: 'HKUST', 26: 'HKUST', 27: 'HKUST', 28: 'HKUST', 29: 'HKUST', 30: 'HKUST', 31: 'Low Frequency', 32: 'Low Frequency', 33: 'Low Frequency', 34: 'Low Frequency', 35: 'Low Frequency', 36: 'Low Frequency', 37: 'Low Frequency', 38: 'Low Frequency', 39: 'PolyU', 40: 'PolyU', 41: 'PolyU', 42: 'PolyU', 43: 'PolyU', 44: 'PolyU'}, 'to': {0: 'CUHK', 1: 'CityU', 2: 'EduHK', 3: 'HKBU', 4: 'HKU', 5: 'HKUST', 6: 'LingU', 7: 'PolyU', 8: 'CityU', 9: 'EduHK', 10: 'HKBU', 11: 'HKU', 12: 'HKUST', 13: 'LingU', 14: 'PolyU', 15: 'HKBU', 16: 'CUHK', 17: 'CityU', 18: 'EduHK', 19: 'HKBU', 20: 'HKU', 21: 'HKUST', 22: 'PolyU', 23: 'CUHK', 24: 'CityU', 25: 'EduHK', 26: 'HKBU', 27: 'HKU', 28: 'HKUST', 29: 'LingU', 30: 'PolyU', 31: 'CUHK', 32: 'CityU', 33: 'EduHK', 34: 'HKBU', 35: 'HKU', 36: 'HKUST', 37: 'LingU', 38: 'PolyU', 39: 'CityU', 40: 'EduHK', 41: 'HKBU', 42: 'HKU', 43: 'LingU', 44: 'PolyU'}, 'count': {0: 13, 1: 6, 2: 3, 3: 6, 4: 5, 5: 3, 6: 2, 7: 6, 8: 4, 9: 1, 10: 5, 11: 2, 12: 1, 13: 2, 14: 7, 15: 2, 16: 2, 17: 4, 18: 3, 19: 1, 20: 17, 21: 3, 22: 9, 23: 4, 24: 2, 25: 2, 26: 4, 27: 2, 28: 4, 29: 4, 30: 6, 31: 76, 32: 73, 33: 1, 34: 16, 35: 57, 36: 46, 37: 3, 38: 69, 39: 1, 40: 2, 41: 3, 42: 1, 43: 1, 44: 23}}
[ "Here's a solution to issues 1 and 2. In my version of networkx, self-loops are displayed.\nimport pandas as pd\nimport networkx as nx\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nd = {'grad': {0: 'CUHK', 1: 'CUHK', 2: 'CUHK', 3: 'CUHK', 4: 'CUHK', 5: 'CUHK', 6: 'CUHK', 7: 'CUHK', 8: 'CityU', 9: 'CityU', 10: 'CityU', 11: 'CityU', 12: 'CityU', 13: 'CityU', 14: 'CityU', 15: 'HKBU', 16: 'HKU', 17: 'HKU', 18: 'HKU', 19: 'HKU', 20: 'HKU', 21: 'HKU', 22: 'HKU', 23: 'HKUST', 24: 'HKUST', 25: 'HKUST', 26: 'HKUST', 27: 'HKUST', 28: 'HKUST', 29: 'HKUST', 30: 'HKUST', 31: 'Low Frequency', 32: 'Low Frequency', 33: 'Low Frequency', 34: 'Low Frequency', 35: 'Low Frequency', 36: 'Low Frequency', 37: 'Low Frequency', 38: 'Low Frequency', 39: 'PolyU', 40: 'PolyU', 41: 'PolyU', 42: 'PolyU', 43: 'PolyU', 44: 'PolyU'}, 'to': {0: 'CUHK', 1: 'CityU', 2: 'EduHK', 3: 'HKBU', 4: 'HKU', 5: 'HKUST', 6: 'LingU', 7: 'PolyU', 8: 'CityU', 9: 'EduHK', 10: 'HKBU', 11: 'HKU', 12: 'HKUST', 13: 'LingU', 14: 'PolyU', 15: 'HKBU', 16: 'CUHK', 17: 'CityU', 18: 'EduHK', 19: 'HKBU', 20: 'HKU', 21: 'HKUST', 22: 'PolyU', 23: 'CUHK', 24: 'CityU', 25: 'EduHK', 26: 'HKBU', 27: 'HKU', 28: 'HKUST', 29: 'LingU', 30: 'PolyU', 31: 'CUHK', 32: 'CityU', 33: 'EduHK', 34: 'HKBU', 35: 'HKU', 36: 'HKUST', 37: 'LingU', 38: 'PolyU', 39: 'CityU', 40: 'EduHK', 41: 'HKBU', 42: 'HKU', 43: 'LingU', 44: 'PolyU'}, 'count': {0: 13, 1: 6, 2: 3, 3: 6, 4: 5, 5: 3, 6: 2, 7: 6, 8: 4, 9: 1, 10: 5, 11: 2, 12: 1, 13: 2, 14: 7, 15: 2, 16: 2, 17: 4, 18: 3, 19: 1, 20: 17, 21: 3, 22: 9, 23: 4, 24: 2, 25: 2, 26: 4, 27: 2, 28: 4, 29: 4, 30: 6, 31: 76, 32: 73, 33: 1, 34: 16, 35: 57, 36: 46, 37: 3, 38: 69, 39: 1, 40: 2, 41: 3, 42: 1, 43: 1, 44: 23}}\n\ndata = pd.DataFrame(d)\ndata = data[data['grad']!='Low Frequency']\n\nrad = .2\nconn_style = f'arc3,rad={rad}'\n\ndef offset(d, pos, dist = rad/2, loop_shift = .2):\n for (u,v),obj in d.items():\n if u!=v:\n par = dist*(pos[v] - pos[u])\n dx,dy = par[1],-par[0]\n x,y = obj.get_position()\n obj.set_position((x+dx,y+dy))\n else:\n x,y = obj.get_position()\n obj.set_position((x,y+loop_shift))\n\nplt.figure(figsize = (20,10))\nG = nx.from_pandas_edgelist(data, source='grad',\n target='to', edge_attr='count',\n create_using=nx.DiGraph())\nweight = nx.get_edge_attributes(G, 'count')\npos = nx.shell_layout(G, scale=1)\nnx.draw_networkx_nodes(G, pos, node_size=300, node_color='lightblue')\nnx.draw_networkx_labels(G, pos=pos, font_color='red')\nnx.draw_networkx_edges(G, pos=pos, edgelist=G.edges(), edge_color='black',\n connectionstyle=conn_style)\nd = nx.draw_networkx_edge_labels(G, pos=pos, edge_labels=weight)\noffset(d,pos)\nplt.gca().set_aspect('equal')\n\nplt.show()\n\nThe result I get:\n\nHere's what I get if I include the Low Frequency node and set rad to .1 instead of .2.\n\n\nHere's an approach that doesn't change the aspect ratio:\nimport pandas as pd\nimport networkx as nx\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nd = {'grad': {0: 'CUHK', 1: 'CUHK', 2: 'CUHK', 3: 'CUHK', 4: 'CUHK', 5: 'CUHK', 6: 'CUHK', 7: 'CUHK', 8: 'CityU', 9: 'CityU', 10: 'CityU', 11: 'CityU', 12: 'CityU', 13: 'CityU', 14: 'CityU', 15: 'HKBU', 16: 'HKU', 17: 'HKU', 18: 'HKU', 19: 'HKU', 20: 'HKU', 21: 'HKU', 22: 'HKU', 23: 'HKUST', 24: 'HKUST', 25: 'HKUST', 26: 'HKUST', 27: 'HKUST', 28: 'HKUST', 29: 'HKUST', 30: 'HKUST', 31: 'Low Frequency', 32: 'Low Frequency', 33: 'Low Frequency', 34: 'Low Frequency', 35: 'Low Frequency', 36: 'Low Frequency', 37: 'Low Frequency', 38: 'Low Frequency', 39: 'PolyU', 40: 'PolyU', 41: 'PolyU', 
42: 'PolyU', 43: 'PolyU', 44: 'PolyU'}, 'to': {0: 'CUHK', 1: 'CityU', 2: 'EduHK', 3: 'HKBU', 4: 'HKU', 5: 'HKUST', 6: 'LingU', 7: 'PolyU', 8: 'CityU', 9: 'EduHK', 10: 'HKBU', 11: 'HKU', 12: 'HKUST', 13: 'LingU', 14: 'PolyU', 15: 'HKBU', 16: 'CUHK', 17: 'CityU', 18: 'EduHK', 19: 'HKBU', 20: 'HKU', 21: 'HKUST', 22: 'PolyU', 23: 'CUHK', 24: 'CityU', 25: 'EduHK', 26: 'HKBU', 27: 'HKU', 28: 'HKUST', 29: 'LingU', 30: 'PolyU', 31: 'CUHK', 32: 'CityU', 33: 'EduHK', 34: 'HKBU', 35: 'HKU', 36: 'HKUST', 37: 'LingU', 38: 'PolyU', 39: 'CityU', 40: 'EduHK', 41: 'HKBU', 42: 'HKU', 43: 'LingU', 44: 'PolyU'}, 'count': {0: 13, 1: 6, 2: 3, 3: 6, 4: 5, 5: 3, 6: 2, 7: 6, 8: 4, 9: 1, 10: 5, 11: 2, 12: 1, 13: 2, 14: 7, 15: 2, 16: 2, 17: 4, 18: 3, 19: 1, 20: 17, 21: 3, 22: 9, 23: 4, 24: 2, 25: 2, 26: 4, 27: 2, 28: 4, 29: 4, 30: 6, 31: 76, 32: 73, 33: 1, 34: 16, 35: 57, 36: 46, 37: 3, 38: 69, 39: 1, 40: 2, 41: 3, 42: 1, 43: 1, 44: 23}}\n\ndata = pd.DataFrame(d) \n\nrad = .1\nconn_style = f'arc3,rad={rad}'\n\ndef offset(d, pos, dist = rad/2, loop_shift = .2, asp = 1):\n for (u,v),obj in d.items():\n if u!=v:\n par = dist*(pos[v] - pos[u])\n dx,dy = par[1]*asp,-par[0]/asp\n x,y = obj.get_position()\n obj.set_position((x+dx,y+dy))\n else:\n x,y = obj.get_position()\n obj.set_position((x,y+loop_shift))\n\ndef sub(a,b):\n return a-b\n\ndef get_aspect(ax):\n # Total figure size\n figW, figH = ax.get_figure().get_size_inches()\n # Axis size on figure\n _, _, w, h = ax.get_position().bounds\n # Ratio of display units\n disp_ratio = (figH * h) / (figW * w)\n # Ratio of data units\n # Negative over negative because of the order of subtraction\n data_ratio = sub(*ax.get_ylim()) / sub(*ax.get_xlim())\n\n return disp_ratio / data_ratio\n\nplt.figure(figsize = (20,10))\nG = nx.from_pandas_edgelist(data, source='grad',\n target='to', edge_attr='count',\n create_using=nx.DiGraph())\nweight = nx.get_edge_attributes(G, 'count')\npos = nx.shell_layout(G, scale=1)\nnx.draw_networkx_nodes(G, pos, node_size=300, node_color='lightblue')\nnx.draw_networkx_labels(G, pos=pos, font_color='red')\nnx.draw_networkx_edges(G, pos=pos, edgelist=G.edges(), edge_color='black',\n connectionstyle=conn_style)\nd = nx.draw_networkx_edge_labels(G, pos=pos, edge_labels=weight)\n\noffset(d, pos, asp = get_aspect(plt.gca()))\n\nplt.show()\n\nResulting figure:\n\n" ]
[ 1 ]
[]
[]
[ "networkx", "python" ]
stackoverflow_0074618675_networkx_python.txt
Q: Extracting data from JSON log I am a beginner when it comes to programming. I'm trying to extract elements from a JSON log file, but I get an error and I don't know how to deal with it. import json with open("/Users/milosz/Desktop/logi.json") as f: data = json.load(f) print(type(data['Objects'])) print(data) for object in data ['Objects']: print(object) Error: File "/Users/milosz/PycharmProjects/JsonDataExtracter/Program/Python Exracter.py", line 4, in <module> print(type(data['Objects'])) TypeError: list indices must be integers or slices, not str Process finished with exit code 1 I am sending the log below. { "_id": "635bd4bfc594743ce9b1a5a3", "dateStart": "2022-10-28T13:09:28.609Z", "dateFinish": "2022-10-28T13:10:23.698Z", "method": "customer.file.upsert", "request": { "Objects": [ { "ERPId": "6915", "B24Id": 403772, "FileName": "B2B000202", "FileContent": "JVBERi0xLjMNJeLjz9MN", "B24EntityId": 3334 } ] A: Following up on the guidance from @accdias, here is a code snippet that closes the gaps in your JSON snippet and demonstrates how to access the Objects section: import json json_string = """ { "_id": "635bd4bfc594743ce9b1a5a3", "dateStart": "2022-10-28T13:09:28.609Z", "dateFinish": "2022-10-28T13:10:23.698Z", "method": "customer.file.upsert", "request": { "Objects": [ { "ERPId": "6915", "B24Id": 403772, "FileName": "B2B000202", "FileContent": "JVBERi0xLjMNJeLjz9MN", "B24EntityId": 3334 } ] } } """ json_dict = json.loads(json_string) print(json_dict["request"]["Objects"]) Output: [{'ERPId': '6915', 'B24Id': 403772, 'FileName': 'B2B000202', 'FileContent': 'JVBERi0xLjMNJeLjz9MN', 'B24EntityId': 3334}]
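Continuing that snippet, the loop from the question then works against the nested list (field names taken from the log shown above):

```python
for obj in json_dict["request"]["Objects"]:
    print(obj["ERPId"], obj["FileName"], obj["B24EntityId"])
```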
Extracting data from JSON log
I am a beginner when it comes to programming. I'm trying to extract elements from a JSON log file, but I get an error and I don't know how to deal with it. import json with open("/Users/milosz/Desktop/logi.json") as f: data = json.load(f) print(type(data['Objects'])) print(data) for object in data ['Objects']: print(object) Error: File "/Users/milosz/PycharmProjects/JsonDataExtracter/Program/Python Exracter.py", line 4, in <module> print(type(data['Objects'])) TypeError: list indices must be integers or slices, not str Process finished with exit code 1 I am sending the log below. { "_id": "635bd4bfc594743ce9b1a5a3", "dateStart": "2022-10-28T13:09:28.609Z", "dateFinish": "2022-10-28T13:10:23.698Z", "method": "customer.file.upsert", "request": { "Objects": [ { "ERPId": "6915", "B24Id": 403772, "FileName": "B2B000202", "FileContent": "JVBERi0xLjMNJeLjz9MN", "B24EntityId": 3334 } ]
[ "Following up on the guidance from @accdias, here is a code snippet that closes the gaps in your JSON snippet and demonstrates how to access the Objects section:\nimport json\n\njson_string = \"\"\"\n{\n \"_id\": \"635bd4bfc594743ce9b1a5a3\",\n \"dateStart\": \"2022-10-28T13:09:28.609Z\",\n \"dateFinish\": \"2022-10-28T13:10:23.698Z\",\n \"method\": \"customer.file.upsert\",\n \"request\": {\n \"Objects\": [\n {\n \"ERPId\": \"6915\",\n \"B24Id\": 403772,\n \"FileName\": \"B2B000202\",\n \"FileContent\": \"JVBERi0xLjMNJeLjz9MN\",\n \"B24EntityId\": 3334\n }\n ]\n }\n}\n\"\"\"\njson_dict = json.loads(json_string)\nprint(json_dict[\"request\"][\"Objects\"])\n\nOutput:\n[{'ERPId': '6915', 'B24Id': 403772, 'FileName': 'B2B000202', 'FileContent': 'JVBERi0xLjMNJeLjz9MN', 'B24EntityId': 3334}]\n\n" ]
[ 0 ]
[]
[]
[ "json", "python" ]
stackoverflow_0074619451_json_python.txt
Q: how to predict my own image using cnn in keras after training on MNIST dataset I have made a convolutional neural network to predict handwritten digits using MNIST dataset but now I am stuck at predicting my own image as input to cnn,I have saved weights after training cnn and want to use that to predict my own image (NOTE : care is taken that my input image is 28x28) code: new_mnist.py : ap = argparse.ArgumentParser() ap.add_argument("-s", "--save-model", type=int, default=-1, help="(optional) whether or not model should be saved to disk") ap.add_argument("-l", "--load-model", type=int, default=-1, help="(optional) whether or not pre-trained model should be loaded") ap.add_argument("-w", "--weights", type=str, help="(optional) path to weights file") args = vars(ap.parse_args()) # fix random seed for reproducibility seed = 7 numpy.random.seed(seed) # load data print("[INFO] downloading data...") (X_train, y_train), (X_test, y_test) = mnist.load_data() # reshape to be [samples][pixels][width][height] X_train = X_train.reshape(X_train.shape[0], 1, 28, 28).astype('float32') X_test = X_test.reshape(X_test.shape[0], 1, 28, 28).astype('float32') print(X_test.shape[0]) # normalize inputs from 0-255 to 0-1 X_train = X_train / 255 X_test = X_test / 255 # one hot encode outputs y_train = np_utils.to_categorical(y_train) y_test = np_utils.to_categorical(y_test) num_classes = y_test.shape[1] # build the model print("[INFO] compiling model...") model = LeNet.build(num_classes = num_classes,weightsPath = args["weights"] if args["load_model"] > 0 else None) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) if args["load_model"] < 0: # Fit the model print("[INFO] training...") model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=1, batch_size=200, verbose=2) # Final evaluation of the model print("[INFO] evaluating...") scores = model.evaluate(X_test, y_test, verbose=0) print("Baseline Error: %.2f%%" % (100-scores[1]*100)) elif args["load_model"] > 0: im = imread("C:\\Users\\Divyesh\\Desktop\\mnist.png") im = im/255 pr = model.predict_classes(im) print(pr) # check to see if the model should be saved to file if args["save_model"] > 0: print("[INFO] dumping weights to file...") model.save_weights(args["weights"], overwrite=True) lenet.py : class LeNet: @staticmethod def build(num_classes,weightsPath = None): # create model model = Sequential() model.add(Convolution2D(30, 5, 5, border_mode='valid', input_shape=(1, 28, 28), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Convolution2D(15, 3, 3, activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.2)) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dense(50, activation='relu')) model.add(Dense(num_classes, activation='softmax')) # Compile model #model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) if weightsPath is not None: model.load_weights(weightsPath) return model in new_mnist.py I have called predict(im) in which im is 28x28 image but after running this program I get error as: ValueError: Error when checking : expected conv2d_1_input to have 4 dimensions, but got array with shape (28, 28) HELP!!! A: Try: pr = model.predict_classes(im.reshape((1, 1, 28, 28))) Here : first dimension comes from examples (you need to specify it even if you have only one example), second comes from channels (as it seems that you use Theano backend) and rest are spatial dimensions. 
A: It should be noted that the images must be uploaded in grayscale. Like: im = im[:,:,0] Or import cv2 im = cv2.imread('C:\\Users\\Divyesh\\Desktop\\mnist.png', cv2.IMREAD_GRAYSCALE)
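Putting the two answers together, a hedged end-to-end sketch (grayscale load, scaling, channels-first reshape), assuming `model` is the trained LeNet from the question:

```python
import cv2

im = cv2.imread("C:\\Users\\Divyesh\\Desktop\\mnist.png", cv2.IMREAD_GRAYSCALE)
im = im.astype("float32") / 255   # same scaling as the training data
im = im.reshape((1, 1, 28, 28))   # (examples, channels, height, width)
print(model.predict_classes(im))
```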
How to predict my own image using a CNN in Keras after training on the MNIST dataset
I have made a convolutional neural network to predict handwritten digits using MNIST dataset but now I am stuck at predicting my own image as input to cnn,I have saved weights after training cnn and want to use that to predict my own image (NOTE : care is taken that my input image is 28x28) code: new_mnist.py : ap = argparse.ArgumentParser() ap.add_argument("-s", "--save-model", type=int, default=-1, help="(optional) whether or not model should be saved to disk") ap.add_argument("-l", "--load-model", type=int, default=-1, help="(optional) whether or not pre-trained model should be loaded") ap.add_argument("-w", "--weights", type=str, help="(optional) path to weights file") args = vars(ap.parse_args()) # fix random seed for reproducibility seed = 7 numpy.random.seed(seed) # load data print("[INFO] downloading data...") (X_train, y_train), (X_test, y_test) = mnist.load_data() # reshape to be [samples][pixels][width][height] X_train = X_train.reshape(X_train.shape[0], 1, 28, 28).astype('float32') X_test = X_test.reshape(X_test.shape[0], 1, 28, 28).astype('float32') print(X_test.shape[0]) # normalize inputs from 0-255 to 0-1 X_train = X_train / 255 X_test = X_test / 255 # one hot encode outputs y_train = np_utils.to_categorical(y_train) y_test = np_utils.to_categorical(y_test) num_classes = y_test.shape[1] # build the model print("[INFO] compiling model...") model = LeNet.build(num_classes = num_classes,weightsPath = args["weights"] if args["load_model"] > 0 else None) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) if args["load_model"] < 0: # Fit the model print("[INFO] training...") model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=1, batch_size=200, verbose=2) # Final evaluation of the model print("[INFO] evaluating...") scores = model.evaluate(X_test, y_test, verbose=0) print("Baseline Error: %.2f%%" % (100-scores[1]*100)) elif args["load_model"] > 0: im = imread("C:\\Users\\Divyesh\\Desktop\\mnist.png") im = im/255 pr = model.predict_classes(im) print(pr) # check to see if the model should be saved to file if args["save_model"] > 0: print("[INFO] dumping weights to file...") model.save_weights(args["weights"], overwrite=True) lenet.py : class LeNet: @staticmethod def build(num_classes,weightsPath = None): # create model model = Sequential() model.add(Convolution2D(30, 5, 5, border_mode='valid', input_shape=(1, 28, 28), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Convolution2D(15, 3, 3, activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.2)) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dense(50, activation='relu')) model.add(Dense(num_classes, activation='softmax')) # Compile model #model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) if weightsPath is not None: model.load_weights(weightsPath) return model in new_mnist.py I have called predict(im) in which im is 28x28 image but after running this program I get error as: ValueError: Error when checking : expected conv2d_1_input to have 4 dimensions, but got array with shape (28, 28) HELP!!!
[ "Try:\npr = model.predict_classes(im.reshape((1, 1, 28, 28)))\n\nHere : first dimension comes from examples (you need to specify it even if you have only one example), second comes from channels (as it seems that you use Theano backend) and rest are spatial dimensions.\n", "It should be noted that the images must be uploaded in grayscale.\n\nLike:\n\nim = im[:,:,0]\n\nOr\n\nimport cv2\nim = cv2.imread('C:\\\\Users\\\\Divyesh\\\\Desktop\\\\mnist.png', cv2.IMREAD_GRAYSCALE)\n" ]
[ 6, 0 ]
[]
[]
[ "conv_neural_network", "keras", "machine_learning", "neural_network", "python" ]
stackoverflow_0043076259_conv_neural_network_keras_machine_learning_neural_network_python.txt
Q: How to use my own Meta class together with SQLAlchemy-Model as a parent class I recently started to use Flask as my back-end framework. However, recently I encountered a problem and I could not figure out how to solve it. As a last resort I wanted to try my change here. If you could help me with it, I would be grateful. So, I have a class that inherits from SQLAlchemy's db.Model: from flask_sqlalchemy import SQLAlchemy db = SQLAlchemy() class User(db.Model): ... And I would like to create my own Meta class to automatically insert some Mixins to this class: class MyMetaClass(type): def __new__(meta, name, bases, class_dict): bases += (Mixin1, Mixin2, Mixin3) return type.__new__(meta, name, bases, class_dict) However, when I try to use this Meta class with the User class: class User(db.Model, metaclass=MyMetaClass): ... I have the following error: TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases Does anybody know how to solve this problem? Any help would be appreciated. (Note: I am not a master of neither Flask nor Meta classes. So if I understood something wrong, please excuse my lack of understanding) A: Finally I managed to solve my problem. In case anyone else encounters the same issue, I am posting a solution here. This is a snipped that is taken from SQLAlchemy's website: The model metaclass is responsible for setting up the SQLAlchemy internals when defining model subclasses. Flask-SQLAlchemy adds some extra behaviors through mixins; its default metaclass, DefaultMeta, inherits them all. BindMetaMixin: bind_key is extracted from the class and applied to the table. See Multiple Databases with Binds. NameMetaMixin: If the model does not specify a tablename but does specify a primary key, a name is automatically generated. You can add your own behaviors by defining your own metaclass and creating the declarative base yourself. Be sure to still inherit from the mixins you want (or just inherit from the default metaclass). Passing a declarative base class instead of a simple model base class, as shown above, to base_class will cause Flask-SQLAlchemy to use this base instead of constructing one with the default metaclass. from flask_sqlalchemy import SQLAlchemy from flask_sqlalchemy.model import DefaultMeta, Model class CustomMeta(DefaultMeta): def __init__(cls, name, bases, d): # custom class setup could go here # be sure to call super super(CustomMeta, cls).__init__(name, bases, d) # custom class-only methods could go here db = SQLAlchemy(model_class=declarative_base( cls=Model, metaclass=CustomMeta, name='Model')) You can also pass whatever other arguments you want to declarative_base() to customize the base class as needed. In addition to that, my initial purpose was actually to add some extra functionality to my models which are inherits from db.Model. To do that actually you do not need to use Meta classes. So, for that case, we can extend the db.Model class, as described in the same web page. 
This is an example of giving every model an integer primary key, or a foreign key for joined-table inheritance: from flask_sqlalchemy import Model, SQLAlchemy import sqlalchemy as sa from sqlalchemy.ext.declarative import declared_attr, has_inherited_table class IdModel(Model): @declared_attr def id(cls): for base in cls.__mro__[1:-1]: if getattr(base, '__table__', None) is not None: type = sa.ForeignKey(base.id) break else: type = sa.Integer return sa.Column(type, primary_key=True) db = SQLAlchemy(model_class=IdModel) class User(db.Model): name = db.Column(db.String) class Employee(User): title = db.Column(db.String) A: Actually, if you are very familiar with the mechanism of metaclasses in Python, the problem is very simple. Here is an example in SQLAlchemy. from sqlalchemy import Column, Integer from sqlalchemy.ext.declarative import as_declarative, declared_attr @as_declarative() class Base(): id = Column(Integer, primary_key=True, index=True) __name__: str @declared_attr def __tablename__(cls) -> str: return cls.__name__.lower() @property def url(self): return f'/{self.__class__.__name__.lower()}/{self.id}/' class AnotherMetaClass(type): def __new__(cls, name, bases, attrs): pass # do something class ModelMeta(Base.__class__, AnotherMetaClass): ... class User(Base, metaclass=ModelMeta): pass The key step is to make the metaclass of Base in SQLAlchemy consistent with the metaclass you design yourself. One solution is to make a new metaclass that is a subclass of both of them. A: Another variant: if you want to update your classes' options or methods before creating instances, or to set them during project initialization, you can add an __init_subclass__ method to your Base. class ClassMetod: def __init__(self, cls): self._class = cls @as_declarative() class Base: @declared_attr def __tablename__(cls) -> str: return cls.__name__.lower() def __init_subclass__(cls): super().__init_subclass__() cls.method = ClassMetod(cls)
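A concrete, minimal sketch of the combined-metaclass idea in Flask-SQLAlchemy terms — the mixin is a placeholder, and this is untested against any specific Flask-SQLAlchemy version (type(db.Model) is its DefaultMeta):

from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class Mixin1:
    pass  # placeholder mixin

class MyMetaClass(type):
    def __new__(meta, name, bases, class_dict):
        if bases and Mixin1 not in bases:
            bases += (Mixin1,)
        return super().__new__(meta, name, bases, class_dict)

# deriving from both metaclasses resolves the "metaclass conflict" TypeError
class ModelMeta(type(db.Model), MyMetaClass):
    pass

class User(db.Model, metaclass=ModelMeta):
    id = db.Column(db.Integer, primary_key=True)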
How to use my own Meta class together with SQLAlchemy-Model as a parent class
I recently started to use Flask as my back-end framework. However, I then encountered a problem and could not figure out how to solve it. As a last resort I wanted to try my chance here. If you could help me with it, I would be grateful. So, I have a class that inherits from SQLAlchemy's db.Model: from flask_sqlalchemy import SQLAlchemy db = SQLAlchemy() class User(db.Model): ... And I would like to create my own Meta class to automatically insert some Mixins into this class: class MyMetaClass(type): def __new__(meta, name, bases, class_dict): bases += (Mixin1, Mixin2, Mixin3) return type.__new__(meta, name, bases, class_dict) However, when I try to use this Meta class with the User class: class User(db.Model, metaclass=MyMetaClass): ... I get the following error: TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases Does anybody know how to solve this problem? Any help would be appreciated. (Note: I am not a master of either Flask or metaclasses, so if I have misunderstood something, please excuse my lack of understanding)
[ "Finally I managed to solve my problem. In case anyone else encounters the same issue, I am posting a solution here.\nThis is a snipped that is taken from SQLAlchemy's website:\n\nThe model metaclass is responsible for setting up the SQLAlchemy internals when defining model subclasses. Flask-SQLAlchemy adds some extra behaviors through mixins; its default metaclass, DefaultMeta, inherits them all.\n\nBindMetaMixin: bind_key is extracted from the class and applied to the table. See Multiple Databases with Binds.\nNameMetaMixin: If the model does not specify a tablename but does specify a primary key, a name is automatically generated.\n\nYou can add your own behaviors by defining your own metaclass and creating the declarative base yourself. Be sure to still inherit from the mixins you want (or just inherit from the default metaclass).\nPassing a declarative base class instead of a simple model base class, as shown above, to base_class will cause Flask-SQLAlchemy to use this base instead of constructing one with the default metaclass.\nfrom flask_sqlalchemy import SQLAlchemy\nfrom flask_sqlalchemy.model import DefaultMeta, Model\n\nclass CustomMeta(DefaultMeta):\n def __init__(cls, name, bases, d):\n # custom class setup could go here\n\n # be sure to call super\n super(CustomMeta, cls).__init__(name, bases, d)\n\n # custom class-only methods could go here\n\ndb = SQLAlchemy(model_class=declarative_base(\n cls=Model, metaclass=CustomMeta, name='Model'))\n\nYou can also pass whatever other arguments you want to declarative_base() to customize the base class as needed.\n\nIn addition to that, my initial purpose was actually to add some extra functionality to my models which are inherits from db.Model. To do that actually you do not need to use Meta classes. So, for that case, we can extend the db.Model class, as described in the same web page. This is an example of giving every model an integer primary key, or a foreign key for joined-table inheritance:\nfrom flask_sqlalchemy import Model, SQLAlchemy\nimport sqlalchemy as sa\nfrom sqlalchemy.ext.declarative import declared_attr, has_inherited_table\n\nclass IdModel(Model):\n @declared_attr\n def id(cls):\n for base in cls.__mro__[1:-1]:\n if getattr(base, '__table__', None) is not None:\n type = sa.ForeignKey(base.id)\n break\n else:\n type = sa.Integer\n\n return sa.Column(type, primary_key=True)\n\ndb = SQLAlchemy(model_class=IdModel)\n\nclass User(db.Model):\n name = db.Column(db.String)\n\nclass Employee(User):\n title = db.Column(db.String)\n\n", "Actually if you are very famillar with the mechanism of metaclass in python, the porblem is very simple. I give you a example in SqlAlchemy.\nfrom sqlalchemy.ext.declarative import as_declarative, declared_attr\n\n@as_declarative()\nclass Base():\n id = Column(Integer, primary_key=True, index=True)\n __name__: str\n @declared_attr\n def __tablename__(cls) -> str:\n return cls.__name__.lower()\n\n @property\n def url(self):\n return f'/{self.__class__.__name__.lower()}/{self.id}/'\n\nclass AnotherMetaClass(type):\n def __new__(cls, name, bases, attrs):\n pass\n # do something\n\nclass ModelMeta(Base.__class__, AnotherMetaClass):\n ...\n\nclass User(Base, metaclass=ModelMeta):\n pass\n\nThe key steps is that you should make the metaclass of Base in SqlAlchemy consistent with the metclass you design by yourself. 
One solution is to make a new metaclass that is a subclass of both of them.\n", "Another variant: if you want to update your classes' options or methods before creating instances, or to set them during project initialization, you can add an __init_subclass__ method to your Base.\nclass ClassMetod:\n def __init__(self, cls):\n self._class = cls\n\n@as_declarative()\nclass Base:\n\n @declared_attr\n def __tablename__(cls) -> str:\n return cls.__name__.lower()\n\n def __init_subclass__(cls):\n super().__init_subclass__()\n\n cls.method = ClassMetod(cls)\n\n" ]
[ 3, 0, 0 ]
[]
[]
[ "metaclass", "python", "sqlalchemy", "types" ]
stackoverflow_0055925297_metaclass_python_sqlalchemy_types.txt
Q: get() function in Tkinter always returns 0 I want to get the contents of an entry box, but when I use the .get() function it always returns 0, no matter what I write in the box window1 = Tk() window1.geometry("500x720+750+0") entry1 = IntVar() e1 = tk.Entry(window1, width=8,fg="darkblue", textvariable=entry1, font=('secular one', 13)).place(x=20, y=60) num_of_diners = entry1.get() def get_value(): tk.Label(window1, text=num_of_diners, font=('secular one', 20)).place(x=300, y=510) tk.Button(window1, text="calculate price", command=get_value, font=('secular one', 18), relief=(FLAT)).place(x=20, y=500) window1.mainloop() That's the code, simplified a bit, but it's what's giving me problems. For example, if I write 4 in the entry box and then press the button, it shows 0 A: You need to call get() inside the get_value() function UPDATE - I misread the code previously - this should work now. FYI: You don't need an IntVar to store the value of the Entry, you can just call get() directly and do away with setting the textvariable parameter You should make a habit of declaring your widgets separately from adding them to a geometry manager (i.e. pack, grid, place). The geometry manager methods always return None, so something like my_entry = tk.Entry(root).pack() will always evaluate to None, which is probably not what you want entry1 = tk.Entry(window1, width=8,fg="darkblue", font=('secular one', 13)) entry1.place(x=20, y=60) def get_value(): num_of_diners = int(entry1.get()) # get entry contents as an integer tk.Label(window1, text=num_of_diners, font=('secular one', 20)).place(x=300, y=510) Bear in mind that if the contents of entry1 can't be cast to an int you'll get an exception. An easy way around this is: def get_value(): entry_content = entry1.get() if entry_content.isdigit(): num_of_diners = int(entry_content) else: num_of_diners = 0 # fall back to 0 on conversion failure
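For reference, a minimal, self-contained version of the fix (widget created and placed in two separate steps, value read inside the callback; the fonts and exact coordinates from the question are omitted):

import tkinter as tk

window1 = tk.Tk()

entry1 = tk.Entry(window1, width=8)
entry1.place(x=20, y=60)

def get_value():
    text = entry1.get()  # read at click time, not once at startup
    num_of_diners = int(text) if text.isdigit() else 0
    tk.Label(window1, text=num_of_diners).place(x=120, y=60)

tk.Button(window1, text="calculate price", command=get_value).place(x=20, y=100)
window1.mainloop()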
get() function in Tkinter always returns 0
I want to get the contents of an entry box, but when I use the .get() function it always returns 0, no matter what I write in the box window1 = Tk() window1.geometry("500x720+750+0") entry1 = IntVar() e1 = tk.Entry(window1, width=8,fg="darkblue", textvariable=entry1, font=('secular one', 13)).place(x=20, y=60) num_of_diners = entry1.get() def get_value(): tk.Label(window1, text=num_of_diners, font=('secular one', 20)).place(x=300, y=510) tk.Button(window1, text="calculate price", command=get_value, font=('secular one', 18), relief=(FLAT)).place(x=20, y=500) window1.mainloop() That's the code, simplified a bit, but it's what's giving me problems. For example, if I write 4 in the entry box and then press the button, it shows 0
[ "You need to call get() inside the get_value() function\nUPDATE - I misread the code previously - this should work now.\nFYI:\n\nYou don't need an IntVar to store the value of the Entry, you can just call get() directly and do away with setting the textvariable parameter\nYou should make a habit of declaring your widgets separately from adding them to a geometry manager (i.e. pack, grid, place). The geometry manager methods always return None, so something like my_entry = tk.Entry(root).pack() will always evaluate to None, which is probably not what you want\n\nentry1 = tk.Entry(window1, width=8,fg=\"darkblue\", font=('secular one', 13))\nentry1.place(x=20, y=60)\n\n\ndef get_value():\n num_of_diners = int(entry1.get()) # get entry contents as an integer\n tk.Label(window1, text=num_of_diners, font=('secular one', 20)).place(x=300, y=510)\n\nBear in mind that if the contents of entry1 can't be cast to an int you'll get an exception. An easy way around this is:\ndef get_value():\n entry_content = entry1.get()\n if entry_content.isdigit():\n num_of_diners = int(entry_content)\n else:\n num_of_diners = 0 # fall back to 0 on conversion failure\n\n" ]
[ 1 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074619917_python_tkinter.txt
Q: Cannot send message in a loop using whatsapp automation (selenium) I am using this code to automate WhatsApp message sending, but the loop applied to send a message a number of times is not working properly. It just sends a message once and stops. Kindly help! Following is the code: from selenium import webdriver from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys import time options=webdriver.ChromeOptions() options.add_argument("user-data-dir=C:/Users/Sameer/AppData/Local/Google/Chrome/User Data/Person 1") # options.add_argument('--profile-directory=Default') driver=webdriver.Chrome(executable_path='chromedriver.exe',options=options) driver.get("https://web.whatsapp.com/") wait=WebDriverWait(driver,100) target='"My 2"' message="Hello" # number_of_times=10 #No. of times to send a message contact_path='//span[contains(@title,'+ target +')]' contact=wait.until(EC.presence_of_element_located((By.XPATH,contact_path))) contact.click() message_box_path='//*[@id="main"]/footer/div[1]/div/span[2]/div/div[2]/div[1]/div/div[1]/p' # '//*[@id="main"]/footer/div[1]/div/span[2]/div/div[2]/div[1]/div/div[1]' message_box=wait.until(EC.presence_of_element_located((By.XPATH,message_box_path))) for x in range(5): message_box.click() message_box.clear() message_box.send_keys(message) message_box.send_keys(Keys.ENTER) # time.sleep(0.2) # print(x) # message_box.send_keys(message + Keys.ENTER) # time.sleep(0.2) A: Try locating the message_box inside the loop. Also, you have to improve your locators. And it should be element_to_be_clickable expected condition there, not just presence_of_element_located. So, please try this code: message_box_path='//footer//p' for x in range(5): message_box=wait.until(EC.element_to_be_clickable((By.XPATH,message_box_path))) message_box.click() message_box.clear() message_box.send_keys(message + Keys.ENTER) time.sleep(0.2)
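One reason re-locating inside the loop matters: WhatsApp Web re-renders the composer after every send, so a handle located once can go stale. A defensive variant of the accepted loop (a sketch; wait, EC, By, Keys, message and time come from the question's code):

from selenium.common.exceptions import StaleElementReferenceException

message_box_path = '//footer//p'
for x in range(5):
    try:
        message_box = wait.until(EC.element_to_be_clickable((By.XPATH, message_box_path)))
        message_box.click()
        message_box.send_keys(message + Keys.ENTER)
    except StaleElementReferenceException:
        continue  # the node was replaced mid-action; re-locate on the next pass
    time.sleep(0.2)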
Cannot send message in a loop using whatsapp automation (selenium)
I am using this code to automate WhatsApp message sending, but the loop applied to send a message a number of times is not working properly. It just sends a message once and stops. Kindly help! Following is the code: from selenium import webdriver from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys import time options=webdriver.ChromeOptions() options.add_argument("user-data-dir=C:/Users/Sameer/AppData/Local/Google/Chrome/User Data/Person 1") # options.add_argument('--profile-directory=Default') driver=webdriver.Chrome(executable_path='chromedriver.exe',options=options) driver.get("https://web.whatsapp.com/") wait=WebDriverWait(driver,100) target='"My 2"' message="Hello" # number_of_times=10 #No. of times to send a message contact_path='//span[contains(@title,'+ target +')]' contact=wait.until(EC.presence_of_element_located((By.XPATH,contact_path))) contact.click() message_box_path='//*[@id="main"]/footer/div[1]/div/span[2]/div/div[2]/div[1]/div/div[1]/p' # '//*[@id="main"]/footer/div[1]/div/span[2]/div/div[2]/div[1]/div/div[1]' message_box=wait.until(EC.presence_of_element_located((By.XPATH,message_box_path))) for x in range(5): message_box.click() message_box.clear() message_box.send_keys(message) message_box.send_keys(Keys.ENTER) # time.sleep(0.2) # print(x) # message_box.send_keys(message + Keys.ENTER) # time.sleep(0.2)
[ "Try locating the message_box inside the loop.\nAlso, you have to improve your locators.\nAnd it should be element_to_be_clickable expected condition there, not just presence_of_element_located.\nSo, please try this code:\nmessage_box_path='//footer//p'\nfor x in range(5):\n message_box=wait.until(EC.element_to_be_clickable((By.XPATH,message_box_path)))\n message_box.click()\n message_box.clear()\n message_box.send_keys(message + Keys.ENTER)\n time.sleep(0.2)\n\n" ]
[ 0 ]
[]
[]
[ "automation", "python", "selenium", "selenium_webdriver", "whatsapp" ]
stackoverflow_0074619781_automation_python_selenium_selenium_webdriver_whatsapp.txt
Q: Getting final weights and biases values from neural network MLPClassifier From the documentation https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html it is not clear whether the attributes coefs_ and intercepts_ are the initial ones (before the neural network is estimated) or the final ones (after the neural network is estimated). In case they are the final ones, how can one get the initial ones? In case they are the initial ones, how can one get the final ones? A: The attributes coefs_ and intercepts_ are the final ones. Indeed, by design Attributes that have been estimated from the data must always have a name ending with trailing underscore. The starting values of such parameters are not exposed via a public attribute or method; instead, the estimator initializes them internally via the _init_coef() method (Glorot initialization) (https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b611bf873bd5836748647221480071a87/sklearn/neural_network/_multilayer_perceptron.py#L344).
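If the initial values are really needed, one workaround is to hook that private method from a subclass — strictly a sketch, since private APIs change between scikit-learn versions (the _init_coef signature shown matches recent releases):

from sklearn.neural_network import MLPClassifier

class RecordingMLP(MLPClassifier):
    def fit(self, X, y):
        # reset the recording lists before scikit-learn (re)initializes the weights
        self.initial_coefs_, self.initial_intercepts_ = [], []
        return super().fit(X, y)

    def _init_coef(self, fan_in, fan_out, dtype):
        coef, intercept = super()._init_coef(fan_in, fan_out, dtype)
        # copy before any training step touches them
        self.initial_coefs_.append(coef.copy())
        self.initial_intercepts_.append(intercept.copy())
        return coef, intercept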
Getting final weights and biases values from neural network MLPClassifier
From the documentation https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html it is not clear whether the attributes coefs_ and intercepts_ are the initial ones (before the neural network is estimated) or the final ones (after the neural network is estimated). In case they are the final ones, how can one get the initial ones? In case they are the initial ones, how can one get the final ones?
[ "The attributes coefs_ and intercepts_ are the final ones. Indeed, by design\n\nAttributes that have been estimated from the data must always have a name ending with trailing underscore.\n\nThe starting values of such parameters are not exposed via a public attribute or method; instead, they are exploiting the _init_coef() method to Glorot-initialize parameters (https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b611bf873bd5836748647221480071a87/sklearn/neural_network/_multilayer_perceptron.py#L344).\n" ]
[ 1 ]
[]
[]
[ "python", "scikit_learn" ]
stackoverflow_0074618826_python_scikit_learn.txt
Q: How to assertRaises in unittest an exception caught in try except block? In my production function: def myfunction(): try: do_stuff() (...) raise MyException("...") except MyException as exception: do_clean_up(exception) My test fails, because the exception is caught in the try/except block def test_raise(self): with self.assertRaises(MyException): myfunction() self.assertRaises is never called. How to guarantee that the exception is caught during testing? The exception is never asserted AssertionError: MyException not raised A: This is because you caught the exception MyException directly in myfunction(). Comment out the try-except clause and try again; the test should pass. assertRaises is used for uncaught errors. You can also re-raise in the except block. A: The exception is handled internally, so there is no external evidence that the exception is raised. However, both MyException and do_clean_up are external names, which means you can patch them and make assertions about whether they do or do not get used. For example, # Make sure the names you are patching are correct with unittest.mock.patch('MyException', wraps=MyException) as mock_exc, \ unittest.mock.patch('do_clean_up', wraps=do_clean_up) as mock_cleanup: myfunction() if mock_exc.called: mock_cleanup.assert_called()
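A concrete version of the patching idea, assuming myfunction and do_clean_up live in a module named mymodule — that hypothetical module path is the part that must be adapted to your project, since mock.patch needs the full dotted name:

import unittest
from unittest import mock

import mymodule  # hypothetical module containing myfunction and do_clean_up

class TestCleanup(unittest.TestCase):
    def test_cleanup_runs_when_exception_is_raised(self):
        with mock.patch('mymodule.do_clean_up', wraps=mymodule.do_clean_up) as mock_cleanup:
            mymodule.myfunction()
        # the cleanup handler only runs if MyException was raised and caught
        mock_cleanup.assert_called_once()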
How to assertRaises in unittest an exception caught in try except block?
In my production function: def myfunction(): try: do_stuff() (...) raise MyException("...") except MyException as exception: do_clean_up(exception) My test fails, because the exception is caught in the try/except block def test_raise(self): with self.assertRaises(MyException): myfunction() self.assertRaises is never called. How to guarantee that the exception is caught during testing? The exception is never asserted AssertionError: MyException not raised
[ "This is because you caught the exception MyException direct in myFunction().\nComment out out the try-except clause and try again, test should pass.\nassertRaises is used for uncaught errors. You can also re-raise in except block.\n", "The exception is handled internally, so there is no external evidence that the exception is raised.\nHowever, both MyException and do_clean_up are external names, which means you can patch them and make assertions about whether they do or do not get used. For example,\n# Make sure the names you are patching are correct\nwith unittest.mock.patch('MyException', wraps=MyException) as mock_exc, \\\n unittest.mock.patch('do_clean_up', wraps=do_clean_up) as mock_cleanup:\n myfunction()\n if mock_exc.called:\n mock_cleanup.assert_called()\n\n" ]
[ 1, 0 ]
[]
[]
[ "exception", "python", "python_unittest", "unit_testing" ]
stackoverflow_0074605367_exception_python_python_unittest_unit_testing.txt
Q: Create index on nested element JSONField for Postgres in Django I have a Django model in my python project with a meta class detailing its indexes. I'm curious if there's a way to create the index using the nested path of the json object. In this case we know the structure of our json and I wanted to stick with a BTree or Hash index on the specific element. If I were simply running this as raw sql, I'd expect to just do something like: CREATE INDEX ON foster_data(root->'level_1'->'level_2'->>'name'); I was hoping I could do something like this in my model: from django.db import models from django.contrib.postgres import indexes class ParentGuardians(Facilitators): # which extends models.Model parent_identifier = models.IntegerField(db_column='p_id', default=None, blank=True, null=True) class Meta: constraints = [ models.UniqueConstraint(fields=['table_id'], name='UniqueConstraint for Parents') ] indexes = [ models.Index(fields=['p_id', ]), indexes.BTreeIndex(fields=[models.JSONField('{"root": {"level_1": {"level_2": "name"}}}'), ] , name="jsonb_p_id_idx"), ] or even: ... indexes.BTreeIndex(fields=["root->'level_1'->'level_2'->>'name'", ] ... But the named argument fields only wants strings, and only the top-level fields defined in the model. I'm aware of these questions: Indexing JSONField in Django PostgreSQL but it seems more of a hack, and I wanted the result generated from the codebase and makemigrations, not to edit it manually. Is this possible more recently? A: Django 3.2 introduced native support for these indexes. The question as asked presently doesn't seem to have the definition of the JSONField, but assuming it is something like from django.db import models class Facilitators(models.Model): foster_data = models.JSONField() To index a particular key, you combine an F expression with a JSONField path lookup on the model's Meta indexes option: class Facilitators(models.Model): foster_data = models.JSONField() class Meta: indexes = [ models.Index(models.F("foster_data__root__level_1__level_2__name"), name="foster_data_name_idx"), ] This will create a B-Tree index (note that Django limits index names to 30 characters, hence the short name). If you are adding these to an existing model, be sure to makemigrations and migrate. See this answer as well https://stackoverflow.com/a/74619523/
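Once migrated, ORM filters that walk the same path can take advantage of the index — a quick sketch; whether PostgreSQL actually uses the expression index depends on the query matching the indexed expression, so verify with EXPLAIN:

# same key transforms as the indexed F() expression
Facilitators.objects.filter(foster_data__root__level_1__level_2__name="Alice")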
Create index on nested element JSONField for Postgres in Django
I have a Django model in my python project with a meta class detailing its indexes. I'm curious if there's a way to create the index using the nested path of the json object. In this case we know the structure of our json and I wanted to stick with a BTree or Hash index on the specific element. If I were simply running this as raw sql, I'd expect to just do something like: CREATE INDEX ON foster_data(root->'level_1'->'level_2'->>'name'); I was hoping I could do something like this in my model: from django.db import models from django.contrib.postgres import indexes class ParentGuardians(Facilitators): # which extends models.Model parent_identifier = models.IntegerField(db_column='p_id', default=None, blank=True, null=True) class Meta: constraints = [ models.UniqueConstraint(fields=['table_id'], name='UniqueConstraint for Parents') ] indexes = [ models.Index(fields=['p_id', ]), indexes.BTreeIndex(fields=[models.JSONField('{"root": {"level_1": {"level_2": "name"}}}'), ] , name="jsonb_p_id_idx"), ] or even: ... indexes.BTreeIndex(fields=["root->'level_1'->'level_2'->>'name'", ] ... But the named argument fields only wants strings, and only the top-level fields defined in the model. I'm aware of these questions: Indexing JSONField in Django PostgreSQL but it seems more of a hack, and I wanted the result generated from the codebase and makemigrations, not to edit it manually. Is this possible more recently?
[ "Django 3.2 introduced native support for these indexes.\nThe question as asked presently doesn't seem to have the definition of the JSONField, but assuming it is something like\nfrom django.db import models\n\nclass Facilitators(models.Model):\n foster_data = models.JSONField()\n\nTo index a particular key, you combine an F expression with a JSONField path lookup on the model's Meta indexes option:\nfrom django.contrib.postgres.fields import JSONField \n\nclass Facilitators(models.Model):\n foster_data = models.JSONField()\n class Meta:\n indexes = [\n models.Index(models.F(\"foster_data__root__level_1__level2__name\"), name=\"foster_data__root__level_1__level2__name_idx\"),\n ]\n\nThis will create a B-Tree index. If you are adding these to an existing model, be sure to makemigrations and migrate.\nSee this answer as well https://stackoverflow.com/a/74619523/\n" ]
[ 0 ]
[]
[]
[ "django", "postgresql", "python" ]
stackoverflow_0071974662_django_postgresql_python.txt
Q: Need help printing result of __str__ function in Python I'm working on a problem with classes, but I'm stuck on defining the __str__ function so that it returns the capitalized version of whatever text is within the class. Currently I have an excruciatingly difficult piece of code that works in PyCharm but not in my class's automatic checking system. Can I get some advice on how to fix this code? class X(str): def __str__(self, name): name = str.capitalize('hello') self.name = 'hello' return name b = X('hello') print(b.__str__('hello')) A: __str__ canonically doesn't accept any arguments. Since you're subclassing str, you probably mean class X(str): def __str__(self): return self.capitalize() b = X('hello') print(b.__str__()) # or print(str(b)) # or print(b) i.e. to override the __str__ magic method in a way that uses the superclass str's methods to work on the "intrinsic" string data of the object (which exists because you're subclassing str). This prints out Hello
Need help printing result of __str__ function in Python
I'm working on a problem with classes, but I'm stuck on defining the __str__ function so that it returns the capitalized version of whatever text is within the class. Currently I have an excruciatingly difficult piece of code that works in PyCharm but not in my class's automatic checking system. Can I get some advice on how to fix this code? class X(str): def __str__(self, name): name = str.capitalize('hello') self.name = 'hello' return name b = X('hello') print(b.__str__('hello'))
[ "__str__ canonically doesn't accept any arguments.\nSince you're subclassing str, you probably mean\nclass X(str):\n def __str__(self):\n return self.capitalize()\n\nb = X('hello')\nprint(b.__str__()) \n# or print(str(b))\n# or print(b)\n\ni.e. to override the __str__ magic method in a way that uses the superclass str's methods to work on the \"intrinsic\" string data of the object (which exists because you're subclassing str).\nThis prints out\nHello\n\n" ]
[ 4 ]
[]
[]
[ "class", "oop", "python" ]
stackoverflow_0074619971_class_oop_python.txt
Q: Member Avatar doesn't appear in the welcome embed I am trying to make my bot send a welcome message when someone joins a specific server. Code: if member.guild.id == 928443083660607549: new = nextcord.utils.get(member.guild.roles, name="new") channel = bot.get_channel(996767690091925584) embed = nextcord.Embed(title="welcome to ikari!", description="・make sure to read the rules in <#928443083698360397> \n ・for more questions refer to <#928507764714651698>", color=0x303136) embed.set_author(name=f"{member.name}#{member.discriminator}", icon_url=member.display_avatar_url) embed.set_thumbnail(url=member.guild.icon.url) await channel.send(f"{member.mention}!", embed=embed) await member.add_roles(new) Error: AttributeError: 'Member' object has no attribute 'display_avatar_url' A: Discord.py v2 changed these attributes: .avatar_url is now .avatar.url and .icon_url is now .icon.url, which means member.display_avatar.url is what you're looking for. This also means that attributes like .avatar_with_size(...) have now been changed to .avatar.with_size(...). Just for future reference. A: Discord.py now has different variables for display avatars etc. Just edit this line: embed.set_author(name=f"{member.name}#{member.discriminator}", icon_url=member.display_avatar_url) to embed.set_author(name=f"{member.name}#{member.discriminator}", icon_url=member.display_avatar.url) If this answer has worked, please mark it as the correct answer! :)
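For reference, display_avatar is an Asset object in discord.py 2.x and nextcord, so resized variants hang off it as methods — a small sketch of the corrected line plus a size variant, assuming the same handler as in the question:

# inside the same on_member_join handler
icon = member.display_avatar.with_size(256).url  # Asset.with_size returns a resized Asset
embed.set_author(name=f"{member.name}#{member.discriminator}", icon_url=icon)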
Member Avatar doesn't appear in the welcome embed
I am trying to make my bot send a welcome message when someone joins a specific server. Code: if member.guild.id == 928443083660607549: new = nextcord.utils.get(member.guild.roles, name="new") channel = bot.get_channel(996767690091925584) embed = nextcord.Embed(title="welcome to ikari!", description="・make sure to read the rules in <#928443083698360397> \n ・for more questions refer to <#928507764714651698>", color=0x303136) embed.set_author(name=f"{member.name}#{member.discriminator}", icon_url=member.display_avatar_url) embed.set_thumbnail(url=member.guild.icon.url) await channel.send(f"{member.mention}!", embed=embed) await member.add_roles(new) Error: AttributeError: 'Member' object has no attribute 'display_avatar_url'
[ "Discord.py v2 changed the variables. Meaning that .avatar_url is now .avatar.url. .icon_url is now .icon.url. Hence meaning that member.display_avatar.url is what you're looking for.\nThis also means that stuff like .avatar_with_size(...) for example have now been changed to .avatar.with_size(...). Just for future reference.\n", "Discord.py now has different variables for display avatars etc.\nJust edit this line:\nembed.set_author(name=f\"{member.name}#{member.discriminator}\", icon_url=member.display_avatar_url)\nto\nembed.set_author(name=f\"{member.name}#{member.discriminator}\", icon_url=member.display_avatar.url)\nIf this answer has worked, please mark it as the correct answer! :)\n" ]
[ 1, 0 ]
[]
[]
[ "discord", "nextcord", "python" ]
stackoverflow_0074587092_discord_nextcord_python.txt
Q: multiply 2 columns until get a desired value Greetings everyone, I have this table (without the Res_Problem column): ID Problem X Impact Prob Res_Problem ID1 12 IDC1 1 2 (12-2)=10 ID1 12 IDC2 2 2 (10-4)=6 STOP ID1 12 IDC3 1 0 NO LOOP ID1 12 IDC4 1 0 NO LOOP ID2 10 IDB1 1 2 New Loop (10-2)=8 ID2 10 IDB1 1 2 (8-2) = 6 STOP I want a loop that multiplies Impact and Prob and subtracts the result from Problem until it reaches a desired value (6, for example), stops the loop once it reaches 6, but starts the loop again at ID2... and so on. Any suggestions? I think it has to be something like this: while (df['Problem'] - df['Impact']*df['Impact'] < 6): df['loop'] = res The loop should create the 'Res_Problem' column A: Here is one option: s = (df['Problem'] .sub(df['Impact'].mul(df['Prob']) .groupby(df['ID']).cumsum() ) ) m = s.le(6).groupby(df['ID']).shift(fill_value=False) df['Res_Problem'] = s.mask(m) output: ID Problem X Impact Prob Res_Problem 0 ID1 12 IDC1 1 2 10.0 1 ID1 12 IDC2 2 2 6.0 2 ID1 12 IDC3 1 0 NaN 3 ID1 12 IDC4 1 0 NaN 4 ID2 10 IDB1 1 2 8.0 5 ID2 10 IDB1 1 2 6.0
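For reference, a self-contained setup that reproduces the vectorized answer end to end (data taken from the question's table):

import pandas as pd

df = pd.DataFrame({
    'ID': ['ID1'] * 4 + ['ID2'] * 2,
    'Problem': [12] * 4 + [10] * 2,
    'X': ['IDC1', 'IDC2', 'IDC3', 'IDC4', 'IDB1', 'IDB1'],
    'Impact': [1, 2, 1, 1, 1, 1],
    'Prob': [2, 2, 0, 0, 2, 2],
})

# running total of Impact*Prob per ID, subtracted from Problem
s = df['Problem'].sub(df['Impact'].mul(df['Prob']).groupby(df['ID']).cumsum())
# hide everything after the first row at or below the threshold (6)
m = s.le(6).groupby(df['ID']).shift(fill_value=False)
df['Res_Problem'] = s.mask(m)
print(df)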
multiply 2 columns until get a desired value
Greetings everyone, I have this table (without the Res_Problem column): ID Problem X Impact Prob Res_Problem ID1 12 IDC1 1 2 (12-2)=10 ID1 12 IDC2 2 2 (10-4)=6 STOP ID1 12 IDC3 1 0 NO LOOP ID1 12 IDC4 1 0 NO LOOP ID2 10 IDB1 1 2 New Loop (10-2)=8 ID2 10 IDB1 1 2 (8-2) = 6 STOP I want a loop that multiplies Impact and Prob and subtracts the result from Problem until it reaches a desired value (6, for example), stops the loop once it reaches 6, but starts the loop again at ID2... and so on. Any suggestions? I think it has to be something like this: while (df['Problem'] - df['Impact']*df['Impact'] < 6): df['loop'] = res The loop should create the 'Res_Problem' column
[ "Here is one option:\ns = (df['Problem']\n .sub(df['Impact'].mul(df['Prob'])\n .groupby(df['ID']).cumsum()\n )\n)\n\nm = s.le(6).groupby(df['ID']).shift(fill_value=False)\n\ndf['Res_Problem'] = s.mask(m)\n\noutput:\n ID Problem X Impact Prob Res_Problem\n0 ID1 12 IDC1 1 2 10.0\n1 ID1 12 IDC2 2 2 6.0\n2 ID1 12 IDC3 1 0 NaN\n3 ID1 12 IDC4 1 0 NaN\n4 ID2 10 IDB1 1 2 8.0\n5 ID2 10 IDB1 1 2 6.0\n\n" ]
[ 5 ]
[]
[]
[ "dataframe", "numpy", "pandas", "python", "while_loop" ]
stackoverflow_0074619855_dataframe_numpy_pandas_python_while_loop.txt
Q: How to Add another level of column to an existing multi-level column I have a data frame that looks like this: x A B 0 0 1 1 2 3 2 4 5 3 6 7 4 8 9 When I want to add another level to the multi-level columns using the following code x.columns = pd.MultiIndex.from_product([['D'], x.columns]) it gives me the following error Traceback (most recent call last): File "C:\Users\adel.moustafa\DashBoard\main.py", line 262, in <module> calculate_yield() File "C:\Users\adel.moustafa\DashBoard\main.py", line 204, in calculate_yield Analyzer.yield_analyzer_by(yield_data, all_data_df, df_info['P/F Criteria'], 'batch') File "C:\Users\adel.moustafa\DashBoard\Modules\Analyzer.py", line 163, in yield_analyzer_by x.columns = pd.MultiIndex.from_product([['D'], x.columns]) File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\indexes\multi.py", line 621, in from_product codes, levels = factorize_from_iterables(iterables) File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\arrays\categorical.py", line 2881, in factorize_from_iterables codes, categories = zip(*(factorize_from_iterable(it) for it in iterables)) File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\arrays\categorical.py", line 2881, in <genexpr> codes, categories = zip(*(factorize_from_iterable(it) for it in iterables)) File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\arrays\categorical.py", line 2854, in factorize_from_iterable cat = Categorical(values, ordered=False) File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\arrays\categorical.py", line 451, in __init__ dtype = CategoricalDtype(categories, dtype.ordered) File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\dtypes\dtypes.py", line 183, in __init__ self._finalize(categories, ordered, fastpath=False) File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\dtypes\dtypes.py", line 337, in _finalize categories = self.validate_categories(categories, fastpath=fastpath) File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\dtypes\dtypes.py", line 530, in validate_categories if categories.hasnans: File "pandas\_libs\properties.pyx", line 37, in pandas._libs.properties.CachedProperty.__get__ File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\indexes\base.py", line 2681, in hasnans return bool(self._isnan.any()) File "pandas\_libs\properties.pyx", line 37, in pandas._libs.properties.CachedProperty.__get__ File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\indexes\base.py", line 2666, in _isnan return isna(self) File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\dtypes\missing.py", line 144, in isna return _isna(obj) File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\dtypes\missing.py", line 169, in _isna raise NotImplementedError("isna is not defined for MultiIndex") NotImplementedError: isna is not defined for MultiIndex I have checked that there are no NA values in my columns or their values. I have also looked at this post and this post and finally this one, but with no results. Here is reproducible code: import pandas as pd import numpy as np x = pd.DataFrame(np.arange(10).reshape(5, 2), columns=pd.MultiIndex.from_product([['x'], ['A', 'B']])) x.columns = pd.MultiIndex.from_product([['D'], x.columns]) Can anyone point out what is wrong and how to fix it? A: You need to do this: x.columns = pd.MultiIndex.from_product([['D'], *x.columns.levels]) where x.columns.levels gives you a FrozenList of the levels that form the MultiIndex.
And then you have to unpack the list using * in order to pass a list of lists to from_product. A: You can denote a multi-level column with a tuple. For instance: x[('y', 'C')] = x[('x', 'A')] + 1 x[('x', 'D')] = 0 >>> x x y x A B C D 0 0 1 1 0 1 2 3 3 0 2 4 5 5 0 3 6 7 7 0 4 8 9 9 0 And, of course, you can sort the columns: x = x.sort_index(axis=1) >>> x x y A B D C 0 0 1 0 1 1 2 3 0 3 2 4 5 0 5 3 6 7 0 7 4 8 9 0 9
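An equivalent idiom worth knowing: pd.concat with a dict of keys prepends a new outermost level in one step, without touching levels directly (a small sketch):

import pandas as pd
import numpy as np

x = pd.DataFrame(np.arange(10).reshape(5, 2),
                 columns=pd.MultiIndex.from_product([['x'], ['A', 'B']]))

x = pd.concat({'D': x}, axis=1)  # columns become ('D', 'x', 'A') and ('D', 'x', 'B')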
How to Add another level of column to an existing multi-level column
I have a data frame that looks like this: x A B 0 0 1 1 2 3 2 4 5 3 6 7 4 8 9 When I want to add another level to the multi-level columns using the following code x.columns = pd.MultiIndex.from_product([['D'], x.columns]) it gives me the following error Traceback (most recent call last): File "C:\Users\adel.moustafa\DashBoard\main.py", line 262, in <module> calculate_yield() File "C:\Users\adel.moustafa\DashBoard\main.py", line 204, in calculate_yield Analyzer.yield_analyzer_by(yield_data, all_data_df, df_info['P/F Criteria'], 'batch') File "C:\Users\adel.moustafa\DashBoard\Modules\Analyzer.py", line 163, in yield_analyzer_by x.columns = pd.MultiIndex.from_product([['D'], x.columns]) File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\indexes\multi.py", line 621, in from_product codes, levels = factorize_from_iterables(iterables) File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\arrays\categorical.py", line 2881, in factorize_from_iterables codes, categories = zip(*(factorize_from_iterable(it) for it in iterables)) File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\arrays\categorical.py", line 2881, in <genexpr> codes, categories = zip(*(factorize_from_iterable(it) for it in iterables)) File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\arrays\categorical.py", line 2854, in factorize_from_iterable cat = Categorical(values, ordered=False) File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\arrays\categorical.py", line 451, in __init__ dtype = CategoricalDtype(categories, dtype.ordered) File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\dtypes\dtypes.py", line 183, in __init__ self._finalize(categories, ordered, fastpath=False) File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\dtypes\dtypes.py", line 337, in _finalize categories = self.validate_categories(categories, fastpath=fastpath) File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\dtypes\dtypes.py", line 530, in validate_categories if categories.hasnans: File "pandas\_libs\properties.pyx", line 37, in pandas._libs.properties.CachedProperty.__get__ File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\indexes\base.py", line 2681, in hasnans return bool(self._isnan.any()) File "pandas\_libs\properties.pyx", line 37, in pandas._libs.properties.CachedProperty.__get__ File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\indexes\base.py", line 2666, in _isnan return isna(self) File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\dtypes\missing.py", line 144, in isna return _isna(obj) File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\dtypes\missing.py", line 169, in _isna raise NotImplementedError("isna is not defined for MultiIndex") NotImplementedError: isna is not defined for MultiIndex I have checked that there are no NA values in my columns or their values. I have also looked at this post and this post and finally this one, but with no results. Here is reproducible code: import pandas as pd import numpy as np x = pd.DataFrame(np.arange(10).reshape(5, 2), columns=pd.MultiIndex.from_product([['x'], ['A', 'B']])) x.columns = pd.MultiIndex.from_product([['D'], x.columns]) Can anyone point out what is wrong and how to fix it?
[ "You need to do this:\n x.columns = pd.MultiIndex.from_product([['D'], *x.columns.levels])\n\nwhere x.columns.levels gives you a Frozenlist of columns that form the MultiIndex.\nAnd then you have to unpack the list using * in order to pass list of lists to from_product.\n", "You can denote a multi-level column with a tuple. For instance:\nx[('y', 'C')] = x[('x', 'A')] + 1\nx[('x', 'D')] = 0\n\n>>> x\n x y x\n A B C D\n0 0 1 1 0\n1 2 3 3 0\n2 4 5 5 0\n3 6 7 7 0\n4 8 9 9 0\n\nAnd, of course, you can sort the columns:\nx = x.sort_index(axis=1)\n\n>>> x\n x y\n A B D C\n0 0 1 0 1\n1 2 3 0 3\n2 4 5 0 5\n3 6 7 0 7\n4 8 9 0 9\n\n" ]
[ 3, 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074619620_dataframe_pandas_python.txt
Q: python error : missing 1 required positional argument: 'self' I am really new (a beginner) to Python and I am trying to implement an optimization problem using the pyomo library, in a Colab notebook. The goal is to implement in pyomo the elastic net problem https://en.wikipedia.org/wiki/Elastic_net_regularization and then run it for λ=1 with a=1. I have written the following implementation, but I get a missing 1 required positional argument error when I run it. def elastic_net(alpha, lam, X, y): n, k = X.shape #Define the model model = pyo.ConcreteModel #Define sets for rows and column indices model.rowindices = pyo.Set(initialize=range(n)) model.colindices = pyo.Set(initialize=range(k)) #Declare decision variables model.beta=pyo.Var(model.colindices, domain=pyo.Reals) #Declare objective def obj_rule(model): return sum((sum (-X[i,j]*model.beta[j] for j in model.colindices)-y)**2 for i in model.rowindices)+alpha*(lam*(sum(model.beta[k] for k in model.colindices)+1/2*(1-lam)*sum(sum (model.beta[k] for k in model.colindices)**2))) #return sum((sum(A[i,j] * model.x[j] for j in model.colindices) - b[i])**2 for i in model.rowindices) model.objective = pyo.Objective(rule=obj_rule, sense = pyo.minimize) #Declare constraints #no constraints for this problem return model lasso_model = elastic_net(1, 1, X, y) result = ipopt_solver.solve(lasso_model) #pyo.SolverFactory('ipopt').solve(lasso_model).write() print(f"the minimum value of the objective function is{lasso_model()}") print(f"the value of β is{model.beta()}") # objective, betas = ... --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-11-31510e3ec601> in <module> 1 lasso_model = elastic_net(1, 1, X, y) ----> 2 result = ipopt_solver.solve(lasso_model) 3 #pyo.SolverFactory('ipopt').solve(lasso_model).write() 4 print(f"the minimum value of the objective function is{lasso_model()}") 5 print(f"the value of β is{model.beta()}") 4 frames /usr/local/lib/python3.7/dist-packages/pyomo/opt/base/convert.py in convert_problem(args, target_problem_type, valid_problem_types, has_capability, **kwds) 58 raise ConverterError("Unknown suffix type: "+tmp) 59 else: ---> 60 source_ptype = args[0].valid_problem_types() 61 62 # TypeError: valid_problem_types() missing 1 required positional argument: 'self' Please do not pay attention to the #comments; some of them are just notes, not part of my solution. A: The comment above is correct. The source of your problems is that you used this: model = pyo.ConcreteModel instead of this: model = pyo.ConcreteModel() To explain what happens when you do this... You have unwittingly created an "alias" for the function ConcreteModel, which is obviously not what you intended to do. You wanted to call the function and get a model instance back. If this concept is confusing, this below may help. The punchline of the story is: put parentheses on that line to call the function. In [14]: def make_model(): ...: return "I am a model object" ...: In [15]: m = make_model # this is creating an "alias" In [16]: print(m) <function make_model at 0x1157c5e10> In [17]: # because this "alias" refers to the function we can call it... In [18]: m() Out[18]: 'I am a model object' In [19]: # we really wanted to assign m to the output of the function: In [20]: m = make_model() In [21]: print(m) I am a model object
python error : missing 1 required positional argument: 'self'
I am really new (a beginner) to Python and I am trying to implement an optimization problem using the pyomo library, in a Colab notebook. The goal is to implement in pyomo the elastic net problem https://en.wikipedia.org/wiki/Elastic_net_regularization and then run it for λ=1 with a=1. I have written the following implementation, but I get a missing 1 required positional argument error when I run it. def elastic_net(alpha, lam, X, y): n, k = X.shape #Define the model model = pyo.ConcreteModel #Define sets for rows and column indices model.rowindices = pyo.Set(initialize=range(n)) model.colindices = pyo.Set(initialize=range(k)) #Declare decision variables model.beta=pyo.Var(model.colindices, domain=pyo.Reals) #Declare objective def obj_rule(model): return sum((sum (-X[i,j]*model.beta[j] for j in model.colindices)-y)**2 for i in model.rowindices)+alpha*(lam*(sum(model.beta[k] for k in model.colindices)+1/2*(1-lam)*sum(sum (model.beta[k] for k in model.colindices)**2))) #return sum((sum(A[i,j] * model.x[j] for j in model.colindices) - b[i])**2 for i in model.rowindices) model.objective = pyo.Objective(rule=obj_rule, sense = pyo.minimize) #Declare constraints #no constraints for this problem return model lasso_model = elastic_net(1, 1, X, y) result = ipopt_solver.solve(lasso_model) #pyo.SolverFactory('ipopt').solve(lasso_model).write() print(f"the minimum value of the objective function is{lasso_model()}") print(f"the value of β is{model.beta()}") # objective, betas = ... --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-11-31510e3ec601> in <module> 1 lasso_model = elastic_net(1, 1, X, y) ----> 2 result = ipopt_solver.solve(lasso_model) 3 #pyo.SolverFactory('ipopt').solve(lasso_model).write() 4 print(f"the minimum value of the objective function is{lasso_model()}") 5 print(f"the value of β is{model.beta()}") 4 frames /usr/local/lib/python3.7/dist-packages/pyomo/opt/base/convert.py in convert_problem(args, target_problem_type, valid_problem_types, has_capability, **kwds) 58 raise ConverterError("Unknown suffix type: "+tmp) 59 else: ---> 60 source_ptype = args[0].valid_problem_types() 61 62 # TypeError: valid_problem_types() missing 1 required positional argument: 'self' Please do not pay attention to the #comments; some of them are just notes, not part of my solution.
[ "The comment above is correct. The source of your problems is that you used this:\nmodel = pyo.ConcreteModel\n\ninstead of this:\nmodel = pyo.ConcreteModel()\n\nTo explain what happens when you do this... You have unwittingly created an \"alias\" for the function ConcreteModel, which is obviously not what you intended to do. You wanted to call the function and get a model instance back. If this concept is confusing, this below may help. The punchline of the story is: put parenthesis on that line to call the function.\nIn [14]: def make_model():\n ...: return \"I am a model object\"\n ...: \n\nIn [15]: m = make_model # this is creating an \"alias\"\n\nIn [16]: print(m)\n<function make_model at 0x1157c5e10>\n\nIn [17]: # because this \"alias\" refers to the function we can call it...\n\nIn [18]: m()\nOut[18]: 'I am a model object'\n\nIn [19]: # we really wanted to assign m to the output of the function:\n\nIn [20]: m = make_model()\n\nIn [21]: print(m)\nI am a model object\n\n" ]
[ 1 ]
[]
[]
[ "pyomo", "python" ]
stackoverflow_0074617125_pyomo_python.txt
Q: String formatting in Python version earlier than 2.6 When I run the following code in Python 2.5.2: for x in range(1, 11): print '{0:2d} {1:3d} {2:4d}'.format(x, x*x, x*x*x) I get: Traceback (most recent call last): File "<pyshell#9>", line 2, in <module> print '{0:2d} {1:3d} {2:4d}'.format(x, x*x, x*x*x) AttributeError: 'str' object has no attribute 'format' I don't understand the problem. From dir('hello') there is no format attribute. How can I solve this? A: The str.format method was introduced in Python 3.0, and backported to Python 2.6 and later. A: Your example code seems to be written for Python 2.6 or later, where the str.format method was introduced. For Python versions below 2.6, use the % operator to interpolate a sequence of values into a format string: for x in range(1, 11): print '%2d %3d %4d' % (x, x*x, x*x*x) You should also be aware that this operator can interpolate by name from a mapping, instead of just positional arguments: >>> "%(foo)s %(bar)d" % {'bar': 42, 'foo': "spam", 'baz': None} 'spam 42' In combination with the fact that the built-in vars() function returns attributes of a namespace as a mapping, this can be very handy: >>> bar = 42 >>> foo = "spam" >>> baz = None >>> "%(foo)s %(bar)d" % vars() 'spam 42' A: I believe that is a Python 3.0 feature, although it is in version 2.6. But if you have a version of Python below that, that type of string formatting will not work. If you are trying to print formatted strings in general, use Python's printf-style syntax through the % operator. For example: print '%.2f' % some_var A: Which Python version do you use? Edit For Python 2.5, use "x = %s" % (x) (for printing strings) If you want to print other types, see here. A: Although the existing answers describe the causes and point in the direction of a fix, none of them actually provide a solution that accomplishes what the question asks. You have two options to solve the problem. The first is to upgrade to Python 2.6 or greater, which supports the format string construct. The second option is to use the older string formatting with the % operator. The equivalent code of what you've presented would be as follows. for x in range(1,11): print '%2d %3d %4d' % (x, x*x, x*x*x) This code snippet produces exactly the same output in Python 2.5 as your example code produces in Python 2.6 and greater.
String formatting in Python version earlier than 2.6
When I run the following code in Python 2.5.2: for x in range(1, 11): print '{0:2d} {1:3d} {2:4d}'.format(x, x*x, x*x*x) I get: Traceback (most recent call last): File "<pyshell#9>", line 2, in <module> print '{0:2d} {1:3d} {2:4d}'.format(x, x*x, x*x*x) AttributeError: 'str' object has no attribute 'format' I don't understand the problem. From dir('hello') there is no format attribute. How can I solve this?
[ "The str.format method was introduced in Python 3.0, and backported to Python 2.6 and later.\n", "Your example code seems to be written for Python 2.6 or later, where the str.format method was introduced.\nFor Python versions below 2.6, use the % operator to interpolate a sequence of values into a format string:\nfor x in range(1, 11):\n print '%2d %3d %4d' % (x, x*x, x*x*x)\n\n\nYou should also be aware that this operator can interpolate by name from a mapping, instead of just positional arguments:\n>>> \"%(foo)s %(bar)d\" % {'bar': 42, 'foo': \"spam\", 'baz': None}\n'spam 42'\n\nIn combination with the fact that the built-in vars() function returns attributes of a namespace as a mapping, this can be very handy:\n>>> bar = 42\n>>> foo = \"spam\"\n>>> baz = None\n>>> \"%(foo)s %(bar)d\" % vars()\n'spam 42'\n\n", "I believe that is a Python 3.0 feature, although it is in version 2.6. But if you have a version of Python below that, that type of string formatting will not work.\nIf you are trying to print formatted strings in general, use Python's printf-style syntax through the % operator. For example:\nprint '%.2f' % some_var\n\n", "Which Python version do you use?\nEdit\nFor Python 2.5, use \"x = %s\" % (x) (for printing strings)\nIf you want to print other types, see here.\n", "Although the existing answers describe the causes and point in the direction of a fix, none of them actually provide a solution that accomplishes what the question asks.\nYou have two options to solve the problem. The first is to upgrade to Python 2.6 or greater, which supports the format string construct.\nThe second option is to use the older string formatting with the % operator. The equivalent code of what you've presented would be as follows.\nfor x in range(1,11):\n print '%2d %3d %4d' % (x, x*x, x*x*x)\n\nThis code snipped produces exactly the same output in Python 2.5 as your example code produces in Python 2.6 and greater.\n" ]
[ 48, 38, 8, 7, 7 ]
[ "Use this:\nprint \"test some char {} and also number {}\".format('a', 123)\n\nresult:\n\ntest some char a and also number 123\n\n" ]
[ -1 ]
[ "format", "python" ]
stackoverflow_0000792721_format_python.txt
Q: Running python interpreter in shell from a make file I want to call a python interpreter in a shell, from an android make file. Initially I tried this: $(shell python -c "import sys;print('hello')") The result is an error: Android.mk:150: *** missing separator. Stop. I suspect this is caused by ndk-build misinterpreting nested quotes. I couldn't find an alternative that would be a legal make file and contain a string representing a legal python script at the same time. How can this be done? For reference, the full make file: ########################################################## # Main-Project - start. ########################################################## # store caller info (all LOCAL_XXX variables cleared by CLEAR_VARS operation, # to be able to re-constructed on exit) NEW_PROJECT_USER_LOCAL_PATH := $(LOCAL_PATH) NEW_PROJECT_USER_LOCAL_C_INCLUDES := $(LOCAL_C_INCLUDES) NEW_PROJECT_USER_LOCAL_CFLAGS := $(LOCAL_CFLAGS) NEW_PROJECT_USER_LOCAL_CPPFLAGS := $(LOCAL_CPPFLAGS) NEW_PROJECT_USER_LOCAL_STATIC_LIBRARIES := $(LOCAL_STATIC_LIBRARIES) NEW_PROJECT_USER_LOCAL_SHARED_LIBRARIES := $(LOCAL_SHARED_LIBRARIES) NEW_PROJECT_USER_LOCAL_LDLIBS := $(LOCAL_LDLIBS) # switch to current module LOCAL_PATH := $(call my-dir) NEW_PROJECT_PATH := $(LOCAL_PATH) CURRENT_DIR_ABS_PATH := $(CURDIR) export BUILD_NESTED_PROJECT_1_SHARED_LIBRARY := false export BUILD_NESTED_PROJECT_2_SHARED_LIBRARY := false export BUILD_NESTED_PROJECT_2_EXE := false export MY_UTILS_FOR_NESTED_PROJECT_1_PATH := $(LOCAL_PATH)/MY_Utils export CP_NESTED_PROJECT_1_ABS_PATH:= $(LOCAL_PATH)/NestedProject1 export OPENCV_JNI = $(OPENCV_DIR)/Android/$(APP_STL)/$(APP_ABI)/staticlibs/sdk/native/jni ############################### # Exports ############################### export ARM_ARCH:=AARCH64 export RELEASE_MODE:=false #Set to true to enable log prints export ENABLE_LOGCAT:=true export PRINT_TIMING:=0 ifeq ($(RELEASE_MODE),true) export ENABLE_LOGCAT:=false export PRINT_TIMING:=0 endif #Android log print is default. 
export ENABLE_STRINGLOGGER:=false export ENABLE_ANDROID_LOG_FILE:=false export COMMON_API_VERSION?=12 export SVL_MY_UTILS_PATH:=$(LOCAL_PATH)/MY_Utils export SVL_PATH:=$(LOCAL_PATH) export BUILD_SVL_SHARED_LIBRARY:=false $(info OPEN_CV_ENABLE=$(OPEN_CV_ENABLE)) $(info ENABLE_LOGCAT=$(ENABLE_LOGCAT)) $(info RELEASE_MODE=$(RELEASE_MODE)) $(info PRINT_TIMING=$(PRINT_TIMING)) include $(CLEAR_VARS) ifneq ($(call set_is_member,$(__ndk_modules),mainproject),$(true)) ifeq ($(OPEN_CV_ENABLE),true) LOCAL_CFLAGS += -DOPEN_CV_ENABLE #OpenCV start OPENCV_INSTALL_MODULES:=on OPENCV_CAMERA_MODULES:=off OPENCV_LIB_TYPE:=STATIC include $(OPENCV_JNI)/OpenCV.mk LOCAL_LDLIBS := # clean LOCAL_LDLIBS #OpenCV end endif ########### GLOBAL Flags: ####################################################### ifeq ($(RELEASE_MODE),true) GLOBAL_FLAGS += -DRELEASE_MODE=1 else GLOBAL_FLAGS += -DRELEASE_MODE=0 endif GLOBAL_FLAGS += -DDUMP_PATH=$(DUMP_PATH_ALL) GLOBAL_FLAGS += -DRECORDER_PATH=$(DUMP_PATH_ALL)\"recorderDumps/\" GLOBAL_FLAGS += -DPARAM_PLAYER=0 # loads stored on file params GLOBAL_FLAGS += -DPARAM_RECORDER=0 # records params such runtime, frames params, out info GLOBAL_FLAGS += -DDISPLAY_WIDE_ONLY=0 # 0-display all cameras possible, 1-display wide only ifeq ($(PARAM_PLAYER), 1) # when playing params, recording is disabled PARAM_RECORDER=0 endif ############## #Log Control # ############## #Logger tag ifneq (PROJECT_TAG,$(findstring PROJECT_TAG,$(GLOBAL_FLAGS))) # no need to define twice GLOBAL_FLAGS += -DPROJECT_TAG=\"MAIN_PROJECT\" endif ifeq ($(ENABLE_LOGCAT),true) ifeq ($(ENABLE_STRINGLOGGER),true) GLOBAL_FLAGS += -DUSE_LOG_TO_STR_FILE else ifeq ($(ENABLE_ANDROID_LOG_FILE),true) GLOBAL_FLAGS += -DUSE_ANDROID_LOG_FILE endif endif else GLOBAL_FLAGS += -DLOG_DBG_PRINTS_CANCLED endif ############# ########### .mk includes ####################################################### include $(NEW_PROJECT_PATH)/MY_Utils/Android.mk include $(NEW_PROJECT_PATH)/MY_Utils/CPModules/PrismPredictor/Android.mk include $(NEW_PROJECT_PATH)/NestedProject1/jni/Android.mk include $(NEW_PROJECT_PATH)/SomeModule1/Android.mk include $(NEW_PROJECT_PATH)/SomeModule2/Android.mk $(shell python -c "import sys;print('hello')") include $(NEW_PROJECT_PATH)/SomeModule3/Android.mk include $(NEW_PROJECT_PATH)/SomeModule4/Android.mk include $(NEW_PROJECT_PATH)/SomeModule5/Android.mk LOCAL_C_INCLUDES += $(EXTERNAL_PATH)/$(PLATFORM)/$(GPU)/include/ LOCAL_C_INCLUDES += $(EXTERNAL_PATH)/$(PLATFORM)/android/include/$(AVRS) LOCAL_C_INCLUDES += $(NEW_PROJECT_PATH)/include/ LOCAL_C_INCLUDES += $(NEW_PROJECT_PATH)/API/API_VER_$(API_VERSION)/external/ LOCAL_C_INCLUDES += $(NEW_PROJECT_PATH)/MY_Utils/CommonAPI/CommonAPI$(COMMON_API_VERSION) LOCAL_SRC_FILES += $(NEW_PROJECT_PATH)/API/API_VER_$(API_VERSION)/src/API.cpp LOCAL_SRC_FILES += $(NEW_PROJECT_PATH)/src/SomeSource1.cpp LOCAL_CFLAGS += $(OPTION_FLAGS) LOCAL_CFLAGS += $(GLOBAL_FLAGS) LOCAL_MODULE := mainproject include $(BUILD_SHARED_LIBRARY) ifeq ($(BUILD_OFFLINE_TEST), true) $(info -----------------Buildingoffline test-----------------------) include $(NEW_PROJECT_PATH)/MY_Utils/Manager/Android.mk include $(NEW_PROJECT_PATH)/UnitTestAndroid/Android.mk endif endif include $(CLEAR_VARS) LOCAL_C_INCLUDES := $(NEW_PROJECT_USER_LOCAL_C_INCLUDES) LOCAL_CFLAGS:=$(NEW_PROJECT_USER_LOCAL_CFLAGS) LOCAL_CPPFLAGS:=$(NEW_PROJECT_USER_LOCAL_CPPFLAGS) LOCAL_STATIC_LIBRARIES:=$(NEW_PROJECT_USER_LOCAL_STATIC_LIBRARIES) LOCAL_SHARED_LIBRARIES:=$(NEW_PROJECT_USER_LOCAL_SHARED_LIBRARIES) 
LOCAL_LDLIBS:=$(NEW_PROJECT_USER_LOCAL_LDLIBS) LOCAL_C_INCLUDES += $(LOCAL_PATH)/external LOCAL_SHARED_LIBRARIES += mainproject LOCAL_PATH := $(NEW_PROJECT_USER_LOCAL_PATH) ########################################################## # Main-Project - end. ########################################################## A: To use $(shell ...) in a makefile, assign its output to a variable, like this: HELLO := $(shell python -c "import sys;print('hello')"). HELLO now has the value "hello". A bare $(shell ...) at the top level expands to the command's output ("hello" here), which make then tries to parse as a makefile line; that is what produces the "missing separator" error. If you just want to print something on the screen, use echo or printf in a target's recipe.
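A minimal sketch of that fix in makefile syntax (assuming python is on the build host's PATH; $(info ...) prints at parse time, so it also works inside an Android.mk):

# Capture the interpreter's output in a variable instead of letting it
# expand into the makefile text, which is what caused "missing separator".
PY_HELLO := $(shell python -c "import sys; print('hello')")
$(info PY_HELLO=$(PY_HELLO))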
[ 1 ]
[]
[]
[ "android_ndk", "makefile", "ndk_build", "python" ]
stackoverflow_0074619600_android_ndk_makefile_ndk_build_python.txt
Q: How do I return an image in fastAPI? Using the python module fastAPI, I can't figure out how to return an image. In flask I would do something like this: @app.route("/vector_image", methods=["POST"]) def image_endpoint(): # img = ... # Create the image here return Response(img, mimetype="image/png") what's the corresponding call in this module? A: If you already have the bytes of the image in memory Return a fastapi.responses.Response with your custom content and media_type. You'll also need to muck with the endpoint decorator to get FastAPI to put the correct media type in the OpenAPI specification. @app.get( "/image", # Set what the media type will be in the autogenerated OpenAPI specification. # fastapi.tiangolo.com/advanced/additional-responses/#additional-media-types-for-the-main-response responses = { 200: { "content": {"image/png": {}} } }, # note the comma: responses and response_class are separate arguments # Prevent FastAPI from adding "application/json" as an additional # response media type in the autogenerated OpenAPI specification. # https://github.com/tiangolo/fastapi/issues/3258 response_class=Response, ) def get_image(): image_bytes: bytes = generate_cat_picture() # media_type here sets the media type of the actual response sent to the client. return Response(content=image_bytes, media_type="image/png") See the Response documentation. If your image exists only on the filesystem Return a fastapi.responses.FileResponse. See the FileResponse documentation. Be careful with StreamingResponse Other answers suggest StreamingResponse. StreamingResponse is harder to use correctly, so I don't recommend it unless you're sure you can't use Response or FileResponse. In particular, code like this is pointless. It will not "stream" the image in any useful way. @app.get("/image") def get_image(): image_bytes: bytes = generate_cat_picture() # ❌ Don't do this. image_stream = io.BytesIO(image_bytes) return StreamingResponse(content=image_stream, media_type="image/png") First of all, StreamingResponse(content=my_iterable) streams by iterating over the chunks provided by my_iterable. But when that iterable is a BytesIO, the chunks will be \n-terminated lines, which won't make sense for a binary image. And even if the chunk divisions made sense, chunking is pointless here because we had the whole image_bytes bytes object available from the start. We may as well have just passed the whole thing into a Response from the beginning. We don't gain anything by holding data back from FastAPI. Second, StreamingResponse corresponds to HTTP chunked transfer encoding. (This might depend on your ASGI server, but it's the case for Uvicorn, at least.) And this isn't a good use case for chunked transfer encoding. Chunked transfer encoding makes sense when you don't know the size of your output ahead of time, and you don't want to wait to collect it all to find out before you start sending it to the client. That can apply to stuff like serving the results of slow database queries, but it doesn't generally apply to serving images. Unnecessary chunked transfer encoding can be harmful. For example, it means clients can't show progress bars when they're downloading the file. See: Content-Length header versus chunked encoding Is it a good idea to use Transfer-Encoding: chunked on static files? A: I had a similar issue but with a cv2 image. This may be useful for others. 
Uses the StreamingResponse. import io from starlette.responses import StreamingResponse app = FastAPI() @app.post("/vector_image") def image_endpoint(*, vector): # Returns a cv2 image array from the document vector cv2img = my_function(vector) res, im_png = cv2.imencode(".png", cv2img) return StreamingResponse(io.BytesIO(im_png.tobytes()), media_type="image/png") A: All the other answers are on point, but now it's so easy to return an image from fastapi.responses import FileResponse @app.get("/") async def main(): return FileResponse("your_image.jpeg") A: It's not properly documented yet, but you can use anything from Starlette. So, you can use a FileResponse if it's a file on disk with a path: https://www.starlette.io/responses/#fileresponse If it's a file-like object created in your path operation, in the next stable release of Starlette (used internally by FastAPI) you will also be able to return it in a StreamingResponse. A: Thanks to @biophetik's answer, with an important reminder that caused me confusion: If you're using BytesIO especially with PIL/skimage, make sure to also do img.seek(0) before returning! @app.get("/generate") def generate(data: str): img = generate_image(data) print('img=%s' % (img.shape,)) buf = BytesIO() imsave(buf, img, format='JPEG', quality=100) buf.seek(0) # important here! return StreamingResponse(buf, media_type="image/jpeg", headers={'Content-Disposition': 'inline; filename="%s.jpg"' %(data,)}) A: The answer from @SebastiánRamírez pointed me in the right direction, but for those looking to solve the problem, I needed a few lines of code to make it work. I needed to import FileResponse from starlette (not fastAPI?), add CORS support, and return from a temporary file. Perhaps there is a better way, but I couldn't get streaming to work: from starlette.responses import FileResponse from starlette.middleware.cors import CORSMiddleware import tempfile app = FastAPI() app.add_middleware( CORSMiddleware, allow_origins=["*"], allow_methods=["*"], allow_headers=["*"] ) @app.post("/vector_image") def image_endpoint(*, vector): # Returns a raw PNG from the document vector (define here) img = my_function(vector) with tempfile.NamedTemporaryFile(mode="w+b", suffix=".png", delete=False) as FOUT: FOUT.write(img) return FileResponse(FOUT.name, media_type="image/png") A: My needs weren't quite met from the above because my image was built with PIL. 
My fastapi endpoint takes an image file name, reads it as a PIL image, and generates a thumbnail jpeg in memory that can be used in HTML like: <img src="http://localhost:8000/images/thumbnail/bigimage.jpg"> import io from PIL import Image from fastapi.responses import StreamingResponse @app.get('/images/thumbnail/{filename}', response_description="Returns a thumbnail image from a larger image", response_class=StreamingResponse, responses= {200: {"description": "an image", "content": {"image/jpeg": {}}}}) def thumbnail_image(filename: str): # read the high-res image file image = Image.open(filename) # create a thumbnail image image.thumbnail((100, 100)) imgio = io.BytesIO() image.save(imgio, 'JPEG') imgio.seek(0) return StreamingResponse(content=imgio, media_type="image/jpeg") A: You can use a FileResponse if it's a file on disk with a path: import os from fastapi import FastAPI from fastapi.responses import FileResponse app = FastAPI() path = "/path/to/files" @app.get("/") def index(): return {"Hello": "World"} @app.get("/vector_image", responses={200: {"description": "A picture of a vector image.", "content" : {"image/jpeg" : {"example" : "No example available. Just imagine a picture of a vector image."}}}}) def image_endpoint(): file_path = os.path.join(path, "files/vector_image.jpg") if os.path.exists(file_path): return FileResponse(file_path, media_type="image/jpeg", filename="vector_image_for_you.jpg") return {"error" : "File not found!"} A: You can do something very similar in FastAPI from fastapi import FastAPI, Response app = FastAPI() @app.post("/vector_image/") async def image_endpoint(): # img = ... # Create the image here return Response(content=img, media_type="image/png") A: If, when following the top answer, you attempt to return a BytesIO object like this in your Response buffer = BytesIO(my_data) # Return file return Response(content=buffer, media_type="image/jpg") you may receive an error that looks like this (as described in this comment) AttributeError: '_io.BytesIO' object has no attribute 'encode' This is caused by the render function in Response which explicitly checks for a bytes type here. Since BytesIO != bytes it attempts to encode the value and fails. The solution is to get the bytes value from the BytesIO object with getvalue() buffer = BytesIO(my_data) # Return file return Response(content=buffer.getvalue(), media_type="image/jpg")
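Pulling the recommended approach together, a minimal runnable sketch; the byte string below is only a stand-in for a real image generator (just the PNG magic header), so a browser won't render it, but the endpoints themselves work:

from fastapi import FastAPI, Response
from fastapi.responses import FileResponse

app = FastAPI()

def generate_image_bytes() -> bytes:
    # Stand-in for your real generator; not a complete image.
    return b"\x89PNG\r\n\x1a\n"

@app.get("/image", responses={200: {"content": {"image/png": {}}}}, response_class=Response)
def get_image():
    # Plain Response for bytes already in memory.
    return Response(content=generate_image_bytes(), media_type="image/png")

@app.get("/image-from-disk")
def get_image_from_disk():
    # FileResponse when the image lives on the filesystem.
    return FileResponse("your_image.png", media_type="image/png")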
[ 67, 66, 37, 34, 17, 13, 4, 3, 1, 0 ]
[]
[]
[ "api", "fastapi", "python" ]
stackoverflow_0055873174_api_fastapi_python.txt
Q: Google Cloud Pub/Sub error "Closed subscriber cannot be used as context manager" when trying to unsubscribe I'm getting the following error when trying to unsubscribe from a topic in Google Pub/Sub. self = <google.cloud.pubsub_v1.SubscriberClient object at 0x000002069A31D820> def __enter__(self) -> "Client": if self._closed: > raise RuntimeError("Closed subscriber cannot be used as context manager.") E RuntimeError: Closed subscriber cannot be used as context manager. venv\lib\site-packages\google\cloud\pubsub_v1\subscriber\client.py:285: RuntimeError Here is the relevant code, which is based on Google's own documentation. def unsubscribe(self, subscription_id): subscriber = self.subscriber subscription_path = subscriber.subscription_path(self.project_id, subscription_id) with subscriber: subscriber.delete_subscription(request={"subscription": subscription_path}) return True A: Well, Google's own documentation notes that the with block automatically closes the subscriber client when it exits, so the next call that re-entered the same, already-closed client raised this error. For some reason I kept overlooking that comment in the code. Dropping the with block and calling delete_subscription directly resolved my issue: def unsubscribe(self, subscription_id): subscriber = self.subscriber subscription_path = subscriber.subscription_path(self.project_id, subscription_id) subscriber.delete_subscription(request={"subscription": subscription_path}) return True
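If you prefer to keep the context manager, an alternative sketch is to create a fresh client per call, so the with block never re-enters an already-closed subscriber (standalone function rather than the question's method, for illustration):

from google.cloud import pubsub_v1

def unsubscribe(project_id: str, subscription_id: str) -> bool:
    # A new client each call; closing it on exit is then harmless.
    with pubsub_v1.SubscriberClient() as subscriber:
        path = subscriber.subscription_path(project_id, subscription_id)
        subscriber.delete_subscription(request={"subscription": path})
    return True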
[ 0 ]
[]
[]
[ "google_cloud_pubsub", "python" ]
stackoverflow_0074212699_google_cloud_pubsub_python.txt
Q: How to edit a discord message using the message's id or link with discord.py I have a discord message that was sent a few weeks ago by my bot, and now I want to update that message, but I don't want to delete it; I want to edit it. I thought the only way to find that message is by using the message's id or link, but I don't know how I can do that. A: Once you have the ID of the message you want to edit, just go: await msg.edit(content="Edit") Obviously, edit content to what you want. Also remember, you can only edit messages you have sent, so make sure the bot has sent that message.
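A hedged sketch of the missing step, fetching the old message by its ID with discord.py; CHANNEL_ID, MESSAGE_ID, and the token are placeholders you supply:

import discord

intents = discord.Intents.default()
client = discord.Client(intents=intents)

CHANNEL_ID = 123456789012345678   # placeholder
MESSAGE_ID = 987654321098765432   # placeholder

@client.event
async def on_ready():
    # fetch the channel, then the old message by its ID, then edit it
    channel = client.get_channel(CHANNEL_ID)
    msg = await channel.fetch_message(MESSAGE_ID)
    await msg.edit(content="Updated text")

client.run("YOUR_BOT_TOKEN")  # placeholder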
[ 0 ]
[]
[]
[ "discord", "python" ]
stackoverflow_0074580195_discord_python.txt
Q: Convert html to json in Python I am trying to convert some html files to json. From the beginning: I downloaded a kind of old dataset called SarcasmAmazonReviewsCorpus. It has several txt files, all with comments, reactions, name of product and so on, as it follows in the image: I was able to pick up each txt file and using the os module I created a list with every file's content. The code was: files_content = [] for filename in filter(lambda p: p.endswith("txt"), os.listdir(path)): filepath = os.path.join(path, filename) with open(filepath, mode='r') as f: files_content += [f.read()] Then, I am trying to use BeautifulSoup: soup = BeautifulSoup(files_content[2], 'html5lib') soup The output is like: Is there a way that I can convert all the items in the files_content list into a json file? Thanks for the help! A: You might have to change the soup.find calls in the dictionary to get the data you want. Note that Tag objects are not JSON serializable, so take their text with get_text(): import json dictionary = { "title": soup.find("title").get_text(), "date": soup.find("date").get_text() } json_object = json.dumps(dictionary, indent=4) with open("saveFile.json", "w") as outfile: outfile.write(json_object) A: As it looks like you are parsing some very simple HTML data, I think you could simply use the xmltodict package for this: import json import xmltodict data = [] for txt in txt_files: with open(txt, "r") as file: data += [xmltodict.parse(file.read())] json_str = json.dumps(data)
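If you'd rather stay with BeautifulSoup, a hedged sketch of the same idea over the whole list; the tag names ("title", "review") are assumptions about the corpus markup, not confirmed by the question:

import json
from bs4 import BeautifulSoup

records = []
for content in files_content:  # the list built in the question
    soup = BeautifulSoup(content, "html5lib")
    record = {}
    for tag_name in ("title", "review"):  # assumed tag names
        tag = soup.find(tag_name)
        record[tag_name] = tag.get_text(strip=True) if tag else None
    records.append(record)

with open("corpus.json", "w") as f:
    json.dump(records, f, indent=4)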
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074619838_python.txt
Q: Python - rearranging dataframe data I want to rearrange my dataframe from the left one to the right table, like I show you in the next picture: df = pd.DataFrame({ "Unnamed:0": ["Entity","","Var1","Var2","Var3","Var4"], "Unnamed:1": ["A","X","0.45","0.14","0.16","0.28"], "Unnamed:2": ["A","Y","0.66","0.55","0.39","0.49"], "Unnamed:3": ["A","Z","0.3","0.24","0.31","0.13"], "Unnamed:4": ["B","X","0.22","0.08","0.74","0.41"], "Unnamed:5": ["B","Y","0.94","0.47","0.17","0.16"], "Unnamed:6": ["B","Z","0.76","0.4","0.93","0.15"], "Unnamed:7": ["C","X","0.4","0.76","0.71","0.01"], "Unnamed:8": ["C","Y","0.86","1","0.26","0.32"], "Unnamed:9": ["C","Z","0.35","0.1","0.36","0.4"], }) I tried using pd.melt, but I can't get what I want. Thanks in advance A: As your dataframe is not clean (the first 2 rows are really a multiindex column header), you can first build the inner dataframe before melting it: new_df = pd.DataFrame(df.iloc[2:,1:]).set_index(df.iloc[2:,0]) new_df.columns = pd.MultiIndex.from_frame(df.iloc[:2,1:].T) new_df.melt(ignore_index=False).reset_index()
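For completeness, a runnable version of that answer with the question's frame inlined; melt(ignore_index=False) keeps the Var labels on the index so reset_index() can turn them back into a column:

import pandas as pd

df = pd.DataFrame({
    "Unnamed:0": ["Entity", "", "Var1", "Var2", "Var3", "Var4"],
    "Unnamed:1": ["A", "X", "0.45", "0.14", "0.16", "0.28"],
    "Unnamed:2": ["A", "Y", "0.66", "0.55", "0.39", "0.49"],
    "Unnamed:3": ["A", "Z", "0.3", "0.24", "0.31", "0.13"],
    "Unnamed:4": ["B", "X", "0.22", "0.08", "0.74", "0.41"],
    "Unnamed:5": ["B", "Y", "0.94", "0.47", "0.17", "0.16"],
    "Unnamed:6": ["B", "Z", "0.76", "0.4", "0.93", "0.15"],
    "Unnamed:7": ["C", "X", "0.4", "0.76", "0.71", "0.01"],
    "Unnamed:8": ["C", "Y", "0.86", "1", "0.26", "0.32"],
    "Unnamed:9": ["C", "Z", "0.35", "0.1", "0.36", "0.4"],
})

# rows 2+ hold the data; column 0 holds the variable names
inner = df.iloc[2:, 1:].set_index(df.iloc[2:, 0])
# rows 0-1 hold the (entity, letter) pairs -> a MultiIndex on the columns
inner.columns = pd.MultiIndex.from_frame(df.iloc[:2, 1:].T)
long_df = inner.melt(ignore_index=False).reset_index()
print(long_df)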
[ 0 ]
[]
[]
[ "database", "dataframe", "pandas", "pandas_melt", "python" ]
stackoverflow_0074619534_database_dataframe_pandas_pandas_melt_python.txt
Q: how to use beautiful soup to get all text "except" a specific class I'm trying to use soup.get_text to get some text out of a webpage, but I want to exclude a specific class. I tried to use a = soup.find_all(class_ = "something") and b=[i.get_text() for i in a], but that allows me to choose one class, and doesn't allow me to exclude one specific class. I also tried: a = soup.select('span:not([class_ ="something"])') b = [i.get_text() for i in a] First, the output wasn't really text only. But most importantly, it gave me all classes including "something" that I wanted to exclude. Is there some other way to do that? Thanks in advance. example: link = "https://stackoverflow.com/questions/74620106/how-to-use-beautiful-soup-to-get-all-text-except-a-specific-class" f = urlopen(link) soup = BeautifulSoup(f, 'html.parser') a = soup.find_all(class_ = "mt24 mb12") b = [i.get_text() for i in a] text = soup.select('div:not([class_ ="mt24 mb12"])') text1 = [i.get_text() for i in text] A: If you want to get all classes but one for example, you can loop through all elements and choose the ones you keep: for p in soup.find_all("p", "review_comment"): if p.find(class_="something-archived"): continue # p is now a wanted p source: Excluding unwanted results of findAll using BeautifulSoup
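As a side note, class_ is only the Python keyword-argument spelling; inside a CSS selector the attribute is called class, so [class_ ="something"] matches no element and :not(...) therefore matches everything, which explains the observed output. Another option, sketched here under the assumption that removing the unwanted elements is acceptable: drop them with decompose(), then take the text of whatever remains:

from bs4 import BeautifulSoup

html = '<div><p class="mt24 mb12">skip me</p><p>keep me</p></div>'  # toy input
soup = BeautifulSoup(html, "html.parser")
for tag in soup.find_all(class_="mt24 mb12"):
    tag.decompose()  # removes the tag and its subtree in place
print(soup.get_text(" ", strip=True))  # -> "keep me"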
[ 1 ]
[]
[]
[ "beautifulsoup", "python" ]
stackoverflow_0074620106_beautifulsoup_python.txt
Q: How can I sort a 2d array that includes only string characters in Python? There is an array as below; x=np.array([ ['0', '0'], ['1', '1'], ['7', '10'], ['8', '11'], [',', '2'], ['4', '3'], ['.', '4'], ['2', '5'], ['5', '6'], ['er014', '7'], ['ww', '8'], ['*', '9']]) I used the following code to sort this array by the second column, but without success. Can anyone help? A= np.take(x, x[:, 1].argsort(), 0) A: np.take(x, x[:, 1].astype(int).argsort(), 0) You may just cast the values for sorting; as strings they sort lexicographically (so '10' comes before '2'), which is why the original attempt failed. The overall result of your np.take() will remain as strings. array([['0', '0'], ['1', '1'], [',', '2'], ['4', '3'], ['.', '4'], ['2', '5'], ['5', '6'], ['er014', '7'], ['ww', '8'], ['*', '9'], ['7', '10'], ['8', '11']], dtype='<U5')
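The same cast works with plain fancy indexing, which reads a little more directly than np.take:

import numpy as np

x = np.array([['0', '0'], ['1', '1'], ['7', '10'], ['8', '11'],
              [',', '2'], ['4', '3'], ['.', '4'], ['2', '5'],
              ['5', '6'], ['er014', '7'], ['ww', '8'], ['*', '9']])

order = x[:, 1].astype(int).argsort()  # numeric sort order of column 1
print(x[order])  # equivalent to np.take(x, order, axis=0)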
[ 1 ]
[]
[]
[ "np.argsort", "numpy", "python" ]
stackoverflow_0074620197_np.argsort_numpy_python.txt
Q: Query unique values inside django forloop I have a query where I should avoid double entry of the same question. In fact, I would like to get only unique values, but I am using the distinct() django function, which isn't working. I have these models: class QuestionTopic(models.Model): name = models.CharField(max_length=255) question_subject = models.ForeignKey( QuestionSubject, on_delete=models.CASCADE) exam_questions_num = models.IntegerField(null=True, blank=True) created_at = models.DateTimeField(auto_now_add=True) updated_at = models.DateTimeField(auto_now=True) def __str__(self): return self.name class Meta: ordering = ('created_at',) class Question(models.Model): id = models.CharField(max_length=7, unique=True, primary_key=True, editable=False) question_subject = models.ForeignKey( QuestionSubject, on_delete=models.CASCADE) text = tinymce_models.HTMLField() mark = models.IntegerField(default=1) is_published = models.BooleanField(default=True) question_bank_id = models.CharField(max_length=255, blank=True, null=True) question_topic = models.ForeignKey( QuestionTopic, on_delete=models.CASCADE, null=True, blank=True) created_at = models.DateTimeField(auto_now_add=True) updated_at = models.DateTimeField(auto_now=True) and the query is: subject = QuestionSubject.objects.get(id=request.POST.get('subject')) question_topics = QuestionTopic.objects.filter( question_subject=subject) questions_list = [] for topic in question_topics: for q in range(topic.exam_questions_num): question = Question.objects.filter( question_subject=subject, question_topic=topic).values_list( 'id', flat=True).order_by('?').distinct().first() questions_list.append(question) What I would like to achieve is to have all different questions for each different topic inside questions_list. I am not achieving this at the moment because distinct() only applies within a single query, while each loop iteration runs a separate query. A: Just make questions_list a set: questions_list = set() Sets don't allow a value more than once. A: You can avoid the inner loop entirely with a single comprehension: subject = QuestionSubject.objects.get(id=request.POST.get('subject')) question_topics = QuestionTopic.objects.filter(question_subject=subject) questions_list = [ question for topic in question_topics for question in Question.objects.filter( question_subject=subject, question_topic=topic ).order_by('?')[: topic.exam_questions_num] ] This will make O(n) queries with n the number of topics. If the number of questions is not that large, you can do this with two queries with: from random import sample subject = QuestionSubject.objects.get(id=request.POST.get('subject')) question_topics = QuestionTopic.objects.filter( question_subject=subject ).prefetch_related('question_set') questions_list = [ question for topic in question_topics for question in sample(list(topic.question_set.all()), topic.exam_questions_num) ]
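A hedged variant of the original loop that makes repeats impossible even across iterations, by excluding ids that were already picked (model and field names taken from the question; subject and question_topics as defined there):

picked = []
for topic in question_topics:
    ids = (Question.objects
           .filter(question_subject=subject, question_topic=topic)
           .exclude(id__in=picked)        # never re-draw an earlier pick
           .values_list('id', flat=True)
           .order_by('?')[: topic.exam_questions_num])
    picked += list(ids)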
[ 1, 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074606914_django_python.txt
Q: How to get the a href link from under the div class using beautiful soup? I am trying to scrape the href attribute from links on a page, but I end up with [] as the output. The HTML code is: My desired output is: https://www.pigiame.co.ke/listings/nissan-latio-2016-36000-kms-5300124 A: You can try: import re import requests import urllib.parse from bs4 import BeautifulSoup url = "https://www.pigiame.co.ke/cars" headers = { "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:107.0) Gecko/20100101 Firefox/107.0", } soup = BeautifulSoup(requests.get(url, headers=headers).content, "html.parser") links = [ item.a["href"] for item in soup.select(".listings-cards__list-item:has(a)") ] for link in links: soup = BeautifulSoup(requests.get(link).content, "html.parser") data = soup.select_one(".btn-whatsapp")["href"] data = urllib.parse.unquote(data) phone = re.search(r"phone=(.*?)&", data).group(1) print("{:<20} {}".format(phone, link)) Prints: +254723099904 https://www.pigiame.co.ke/listings/nissan-note-5253578 +254722935411 https://www.pigiame.co.ke/listings/honda-freed-7-seater-2015-5291221 +254722763845 https://www.pigiame.co.ke/listings/2006-bmw-x3-2500cc-petrol-155000kms-5241375 +254722710833 https://www.pigiame.co.ke/listings/mazda-mpv-2006-5273382 +254713193417 https://www.pigiame.co.ke/listings/landrover-109-very-clean-accident-free-5282118 +254708467397 https://www.pigiame.co.ke/listings/landcrusser-prado-tx-fully-loaded-with-sunroof-2016-model-5304294 +254708467397 https://www.pigiame.co.ke/listings/landcrusser-prado-tx-fully-loaded-with-sunroof-5304293 +254708467397 https://www.pigiame.co.ke/listings/mistubishi-canter-2016-model-3tones-5304291 +254708467397 https://www.pigiame.co.ke/listings/hillux-revolution-2016-model-fully-loaded-5304288 +254708467397 https://www.pigiame.co.ke/listings/toyota-prado-2016-model-fully-loaded-with-sunroof-5304285 +254708467397 https://www.pigiame.co.ke/listings/toyota-prado-txl-diesel-fuel-2017-model-5304283 +254769333436 https://www.pigiame.co.ke/listings/toyota-fielder2015kdkbelow-50000km-5304279 +254708467397 https://www.pigiame.co.ke/listings/subaru-forester-sti-turbo-2016-model-5304280 +254769333436 https://www.pigiame.co.ke/listings/honda-fit2015kdkbelow-60000km-5304276 +254769333436 https://www.pigiame.co.ke/listings/honda-vezel2015kdklow-milleage-5304273
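If all you need are the hrefs themselves (the output the question asks for), a shorter sketch using the same card selector from the answer:

import requests
from bs4 import BeautifulSoup

url = "https://www.pigiame.co.ke/cars"
headers = {"User-Agent": "Mozilla/5.0"}
soup = BeautifulSoup(requests.get(url, headers=headers).content, "html.parser")

# one href per listing card
hrefs = [a["href"] for a in soup.select(".listings-cards__list-item a[href]")]
print("\n".join(hrefs))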
[ 0 ]
[]
[]
[ "beautifulsoup", "css_selectors", "html", "python", "web_scraping" ]
stackoverflow_0074620047_beautifulsoup_css_selectors_html_python_web_scraping.txt
Q: Try to extract paragraph using beautiful soup from selenium import webdriver import time from selenium.webdriver.chrome.service import Service from selenium.webdriver.common.by import By from webdriver_manager.chrome import ChromeDriverManager from bs4 import BeautifulSoup from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.chrome.service import Service from selenium.webdriver.common.by import By from selenium.webdriver.support.wait import WebDriverWait options = webdriver.ChromeOptions() options.add_argument("--no-sandbox") options.add_argument("--disable-gpu") options.add_argument("--window-size=1920x1080") options.add_argument("--disable-extensions") driver = webdriver.Chrome(service=Service(ChromeDriverManager().install())) URL = 'https://www.askgamblers.com/online-casinos/countries/uk' driver.get(URL) time.sleep(2) urls= [] page_links =driver.find_elements(By.XPATH, "//div[@class='card__desc']//a[starts-with(@href, '/online')]") for link in page_links: href=link.get_attribute("href") urls.append(href) #print(href) for url in urls: driver.get(url) time.sleep(1) try: review=WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//a[@class='review-main__show']"))) review.click() except: pass soup = BeautifulSoup(driver.page_source,"lxml") try: paragraph=soup.select_one("h2:-soup-contains('Virtual Games')").nextSibling.textContent print(paragraph) except: print('empty') pass Detail: I am trying to extract this paragraph, but I get None. When you click on "read more" you can see the whole paragraph. This is the page link: https://www.askgamblers.com/online-casinos/reviews/mr-play-casino and this is the whole paragraph I want to extract A: Actually, the detailed paragraph is already present in the HTML DOM; clicking on the reviews link just scrolls down the page, and the same job can be done by scrolling. The point is that clicking on reviews will not work here and will instead raise an exception. The real difficulty is selecting the Virtual Games paragraph nodes precisely, because all the p nodes sit at the same level, so any selection grabs the entire review text. That is why splitting the extracted text produces the right output here. 
Working code: from selenium import webdriver import time from selenium.webdriver.chrome.service import Service from selenium.webdriver.common.by import By from webdriver_manager.chrome import ChromeDriverManager from bs4 import BeautifulSoup import pandas as pd from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.chrome.service import Service from selenium.webdriver.common.by import By from selenium.webdriver.support.wait import WebDriverWait options = webdriver.ChromeOptions() options.add_argument("--no-sandbox") options.add_argument("--disable-gpu") options.add_argument("--window-size=1920x1080") options.add_argument("--disable-extensions") driver = webdriver.Chrome(service=Service(ChromeDriverManager().install())) URL = 'https://www.askgamblers.com/online-casinos/countries/uk' driver.get(URL) time.sleep(2) urls= [] data = [] page_links =driver.find_elements(By.XPATH, "//div[@class='card__desc']//a[starts-with(@href, '/online')]") for link in page_links: href=link.get_attribute("href") urls.append(href) #print(href) for url in urls: driver.get(url) time.sleep(1) soup = BeautifulSoup(driver.page_source,"lxml") try: paragraph=soup.select_one('section[class="review-text richtext"]').get_text(' ',strip=True).split('Virtual Games')[1].split('Live Casino')[0] #print(paragraph) except: #print('empty') pass data.append({'Virtual Games':paragraph}) print(data) Output: [{'Virtual Games': ' Players who enjoy video slots have a large selection to choose from here with titles from gaming suppliers like NetEnt, Microgaming and Play’n GO. The casino is home to some of the newest and popular games in the market with a range of themes to suit all. Titles include the Starburst Slot , Aloha! Cluster Pays Slot , Moon Princess Slot and Birds on a Wire Slot . When it comes to table games members will be able to enjoy some blackjack and roulette games including Roulette Pro and Blackjack Solo. There is also a small selection of video pokers for players who enjoy betting with cards such as Jacks or Better. In addition to that, members can pick from multiple Scratch card games for added entertainment. '}, {'Virtual Games': ' Bet365 Casino gives players a full range of online casino games to choose from, including a wide variety of table games, arcades, online slots, video poker and several live dealer choices. Players can find their favourite games under both the Casino and Games tab in the site’s header. Slots lovers have an immense selection of game options at their disposal. Among our favourites are Age of the Gods slot , Immortal Romance slot , to name a few. Those looking for table games will also have a massive collection of roulette, blackjack, baccarat and other casino games to choose from. In addition to this, poker lovers will also find a great offer that will suit their taste, including Jacks or Better, Joker Poker, and Deuces Wild. '}, {'Virtual Games': ' Players who open the casino lobby will be greeted with lots of video slots and there is plenty of variety to cater for all kinds of people. In fact, they have a game library of over 1000+ games from some of the best providers in the industry. The lobby is easy to use and players can easily find their favourite game by using the search facility and can filter the games by each individual provider too. Titles here include the Book of Dead Slot , Gonzo’s Quest Slot , Bonanza Slot , Immortal Romance Slot and Mega Moolah Slot . 
There are table games to enjoy here as well, especially handy for players who enjoy card and dice games. These include variations of blackjack, roulette and other traditional games. Players who like card games can try the casinos selection of video poker games and these include Jacks or Better and All Aces. There are also some β€˜Other’ games available in the lobby too for anyone who is looking for something different. '}, {'Virtual Games': ' Slot lovers will find a good variety of both bonus video slots and classic fruity-style games at Trada Casino. These include titles from both Microgaming and PariPlay, including popular favourites like Immortal Romance , Avalon II , Tomb Raider , Big Bad Wolf , Rabbit in the Hat , Jurassic Park , and many more variants. Trada Casino also has an exceptional variety of bingo , keno , scratch games , and specialties from various providers. Bingo players can choose from several different gaming styles and themes with titles like Mayan Bingo, Super Bonus Bingo, and Samba Bingo. Those who love to scratch can try to win big with games like Whack a Jackpot , Slam Funk , and Turtley Awesome . Table game enthusiasts can choose from several different roulette and blackjack variants at Trada Casino, such as Classic Blackjack, Spanish Blackjack, Vegas Strip Blackjack, American Roulette, Premier Roulette Diamond, and French Roulette. Trada Casino’s selection of video pokers from Microgaming is comprehensive and covers practically every main variety of the game, with many offered in both single-hand and multi-hand versions. Available variants include Aces & Faces, Deuces Wild, Double Double Bonus, Louisiana Double, Tens or Better, Joker Poker, and Jacks or Better. '}, {'Virtual Games': " PlayFrank Casino is home to an extensive selection of slots . Players will find hit titles from Betsoft, Microgaming, NYX Interactive, Play'n Go, and NetEnt. Some of the most famous ones include Aliens slot , Starburst slot , Mr. Vegas slot , Foxin’ Wins Again slot , At the Movies slot , and Chimney Sweep slot . Also, the casino hosts numerous of the world’s most popular progressive slots like Mega Fortune Slot and Hall of Gods, with jackpots often in the multi-million range. In addition to its live table games , PlayFrank Casino provides players with a number of virtual table game favourites. These include several variants of European and French Roulette; blackjack games like Single Deck Blackjack and Double Exposure Blackjack; multiple types of Punto Banco/Baccarat; and many other favourites. Play Frank Casino’s video poker selection contains many classic and modern variants from both NetEnt and GamesOS. Among these are All American Poker, Jacks or Better, Deuces Wild, Joker Wild, Shockwave Poker, and Coliseum Poker, among others. Live Games PlayFrank Casino features an extensive live dealer room powered by Evolution Gaming. Players here can enjoy round-the-clock table game excitement in dozens of different live rooms. The game selection includes a surprising number of Live Blackjack, Live Roulette, and Live Baccarat variants, all dealt to professional standards and broadcast in real-time. Mobile Gaming PlayFrank Casino makes satisfying mobile players a key priority. The site’s HTML5 interface is designed to make it easy to play on virtually any device, from a home computer to smartphones and tablets . Players can enjoy dozens of their favourite slots and table games on the go. No app download is necessary. 
Support The PlayFrank Casino support team is available in English, German and Norwegian. Players via the site’s live chat facility, available from 7 AM till 23 PM. Also, players can reach the casino support team by email or telephone, on top of having a helpful FAQ section at their disposal, where certain previously answered questions can be found. Security and Fairness PlayFrank Casino protects players with the state-of-the-art SSL encryption. This technology is the gold standard for online safety and it is trusted by all reputable online merchants and banks. It prevents anyone from seeing sensitive data with an advanced algorithm that cannot be decoded. The games at PlayFrank Casino meet the standards of fairness required by two independent testing agencies : TST and iTech Labs . As such, players can rest assured that they will find a fair game here."}, {'Virtual Games': " PlayFrank Casino is home to an extensive selection of slots . Players will find hit titles from Betsoft, Microgaming, NYX Interactive, Play'n Go, and NetEnt. Some of the most famous ones include Aliens slot , Starburst slot , Mr. Vegas slot , Foxin’ Wins Again slot , At the Movies slot , and Chimney Sweep slot . Also, the casino hosts numerous of the world’s most popular progressive slots like Mega Fortune Slot and Hall of Gods, with jackpots often in the multi-million range. In addition to its live table games , PlayFrank Casino provides players with a number of virtual table game favourites. These include several variants of European and French Roulette; blackjack games like Single Deck Blackjack and Double Exposure Blackjack; multiple types of Punto Banco/Baccarat; and many other favourites. Play Frank Casino’s video poker selection contains many classic and modern variants from both NetEnt and GamesOS. Among these are All American Poker, Jacks or Better, Deuces Wild, Joker Wild, Shockwave Poker, and Coliseum Poker, among others. Live Games PlayFrank Casino features an extensive live dealer room powered by Evolution Gaming. Players here can enjoy round-the-clock table game excitement in dozens of different live rooms. The game selection includes a surprising number of Live Blackjack, Live Roulette, and Live Baccarat variants, all dealt to professional standards and broadcast in real-time. Mobile Gaming PlayFrank Casino makes satisfying mobile players a key priority. The site’s HTML5 interface is designed to make it easy to play on virtually any device, from a home computer to smartphones and tablets . Players can enjoy dozens of their favourite slots and table games on the go. No app download is necessary. Support The PlayFrank Casino support team is available in English, German and Norwegian. Players via the site’s live chat facility, available from 7 AM till 23 PM. Also, players can reach the casino support team by email or telephone, on top of having a helpful FAQ section at their disposal, where certain previously answered questions can be found. Security and Fairness PlayFrank Casino protects players with the state-of-the-art SSL encryption. This technology is the gold standard for online safety and it is trusted by all reputable online merchants and banks. It prevents anyone from seeing sensitive data with an advanced algorithm that cannot be decoded. The games at PlayFrank Casino meet the standards of fairness required by two independent testing agencies : TST and iTech Labs . As such, players can rest assured that they will find a fair game here."}, {'Virtual Games': ' Slots are a big part of the Hello! 
Casino game library. Players will find more than 400 different video slot titles from big names like NetEnt, NYX Gaming group, Betsoft, Leander, Thunderkick, Aristocrat, all in a simple instant-play interface. Among the most popular games are Steam Tower , Twin Spin , Starburst , Gonzo’s Quest , and many others. Players who enjoy the thrill of the tables will find several virtual card, dice, and roulette games at Hello! Casino. Among them are Multi Wheel Roulette and French Roulette, Blackjack and Pontoon, Craps, Baccarat, Sic Bo, Red Dog Progressive, Texas Hold’em Pro, Oasis Poker, and Caribbean Stud. Those looking for something else will find a number of scratch games and keno games in the β€œLottery” section, as well as four different video poker variants under β€œOther” available in both single-hand and multi-hand denominations. '}, {'Virtual Games': ' As 21Prive Casino uses a number different software providers it’s no wonder that they have a rich games portfolio with over 500 games which should meet the taste of different players. Games are easily accessible and carefully categorised. However, there isn’t a search option available. Slot players can choose from a wide ranging collection, which includes classic, progressive and 3D slots. 21Prive Casino puts the effort to freshen up their selection with new and modern slots such as Lost Island slot . Players can also find some old classics such as Dead or Alive slot . Popular slots at 21Prive Casino are also Jack and the Beanstalk slot , Zombie Rush slot and Starburst slot . If you are more of a table games fan you will like the choice of games that 21Prive Casino have to offer. Here you can find a good selection of games that include Blackjack, Roulette, Baccarat, Craps, Pai Gow, Red Dog Progressive and many more. 21Prive Casino also have a good offer for poker players including games such as Jacks or Better, Deuces WIld, 3D Video Poker, Multihand Joker Poker and other. There is a number of other casino games too, such as fun and scratch games, ball games and virtual games. Live Games At 21Prive Casino players can also try their luck with live games. At live casino players can choose whether they want to play Roulette, Baccarat, Blackjack or Lottery. Mobile Gaming Players can also enjoy gaming on the go via their compatible mobile devices. They can login via their tablets or smartphones or register a new account and play their favourite games regardless of where they are. Available games include Starburst, Gonzos Quest, Blood Suckers and Foxin Wins. Support If you have any concern or inquiry, feel free to contact customer support at 21Prive Casino Casino. They are available to players every day during the week from 24/7 GMT . Players can contact the casino’s support team via live chat or e-mail and get assistance with their issues from professionals. Players can also get answers to some questions in the casino’s FAQ section. Security and Fairness 21Prive Casino tends to put the safety of their players first. Therefore, they use RapidSSL encryption , which is the strongest SSL encryption available on the market at this time. In this way, they are making sure that the players account and personal data are safe. It prevents anyone from tampering with it. 
As far as fairness goes, all of the casino games are tested and certified at the highest levels to ensure fair game-play at all times.'}, {'Virtual Games': ' As the casino is supported by the NetEnt , Microgaming , Elk Studios, NYX Interactive and Qiuckspin, some of the most popular video slots include Forbidden Throne Slot , Fairytale Legends: Hansel & Gretel Slot , Taco Brothers Slot , Big Bad Wolf Slot . Here, players may also find the progressive slots like Mega Moolah slot , Mega Fortune slot and Arabian Nights slot . Table game lovers can choose between different variations of the classic casino games like blackjack, roulette and baccarat. Likewise, video poker fans can also find something to suit their taste, such as Deuces Wild Double Up, Jacks or Better, Aces and Eights and Joker Wild. '}, {'Virtual Games': ' Members can join the adventure and dive into an endless ocean of exciting games, over 1200+ games to be exact. This includes the latest video slots from leading gaming providers like NetEnt, Microgaming and NextGen Gaming. To make it easier to browse members can filter the games by each individual provider. There’s actually multiple filters available to help players get to their favourite games quicker. Players can enjoy games like the Bonanza Slot, Thunderstruck 2 Slot, Immortal Romance Slot, Moon Princess Slot, Asgardian Stones Slot and Medusa Slot. There’s also a selection of table games available here so players can place their bets on games a little different. A number of different games and bet limits are also available to cater for everyone’s needs. These include Blackjack Turbo and French Roulette Pro. Video pokers can also be found here as the casino is home to dedicated card games section. Members can enjoy pokers like Jacks or Better, Deuces Wild and Joker Poker. '}, {'Virtual Games': ' Players can enjoy over 500 high-definition video slots from some of the biggest names in the industry like Microgaming and NetEnt . The casino regularly adds the newest games and the library features smash-hit titles such as Thunderstruck Slot , Starburst Slot , and Jack and the Beanstalk Slot , to mention a few. Die-hard table game lovers also have plenty to choose from. There are variants of blackjack and roulette like European Blackjack, Classic Blackjack, American Roulette, French roulette, and loads more. Video pokers are also available. Fans can play games like Jacks or Better, Aces & Eights, Deuces Wild, among others. Live Games Players who like the thrill of a land-based casino can enjoy live games from Evolution Gaming and other providers, including Live Blackjack, Live Roulette, Live Baccarat, and others. Mobile Gaming Spinland also offers a mobile casino that can be played straight through a mobile browser, without the need to download any additional software. Support Players who need assistance can enjoy 24/7 live chat with a friendly and professional support team. Questions can also be sent through email for a quick response. Security and Fairness Members can rest assured they are playing in a safe environment as the casino employs SSL encryption to protect personal and financial details. A random number generator is also utilized to ensure that all their games result in a random outcome. Furthermore, the site is closely monitored and regulated by eCogra .'}, {'Virtual Games': ' Players can enjoy loads of games here which happens to be from some of the most well known games in the industry. 
The lobby is very user friendly due to the number of filters players can use to find their favourite games. These include sorting games by their volatility levels, game provider, features, themes, number of paylines, reels and even minimum and maximum bet limits. There’s also a search engine which is a handy tool which can find any game, instantly. Players can enjoy games like the Cleopatra Slot , Wolf Gold Slot , Holmes and the Stolen Stones Slot , Reactoonz Slot and Gonzo’s Quest Slot . There’s lots of table games here too, particularly variations of blackjack and roulette. But, there are some poker games too. And, while we’re on the subject of card games players can also search for a number of video poker games as well. These include Joker Poker and Double Down Poker. '}, {'Virtual Games': ' Members can enjoy video slots here from the likes of NetEnt,\xa0Microgaming, Amatic Industries and many more well-known suppliers. Games have already been categorized from A-Z, so it helps members to go straight to the games their looking for and makes it easier to browse. There are titles ranging from the newest games to the oldest games and the most popular games. Titles include the Starburst Slot , Aloha! Cluster Pays Slot , The Phantom of the Opera Slot and Highlander Slot . In addition to slots players can enjoy a large selection of table games here starting with low-limit games to suit all budgets. These include classics like Blackjack, Roulette, Baccarat and Caribbean Stud. For those who like betting in card games players can choose from a variety of video pokers like Bonus Power Poker, Deuces Wild Poker and Double Bonus Poker. '}, {'Virtual Games': ' Members can enjoy video slots here from the likes of NetEnt,\xa0Microgaming, Amatic Industries and many more well-known suppliers. Games have already been categorized from A-Z, so it helps members to go straight to the games their looking for and makes it easier to browse. There are titles ranging from the newest games to the oldest games and the most popular games. Titles include the Starburst Slot , Aloha! Cluster Pays Slot , The Phantom of the Opera Slot and Highlander Slot . In addition to slots players can enjoy a large selection of table games here starting with low-limit games to suit all budgets. These include classics like Blackjack, Roulette, Baccarat and Caribbean Stud. For those who like betting in card games players can choose from a variety of video pokers like Bonus Power Poker, Deuces Wild Poker and Double Bonus Poker. '}, {'Virtual Games': ' 21 Casino’s biggest selection of games are its slots . Players will find a highly diverse collection, from simple games to innovative five-reel bonus video slots. Some of the most popular releases include Starburst , Magicious , Medusa II Slot , Merlin’s Millions , Miss Midas Slot , Aliens , Dragon Island , and Mega Fortune . In addition to live table games , 21 Casino features a diverse library of card, dice, and roulette games for all kinds of players. These include Caribbean Stud Poker, Casino Hold’em, Craps, Red Dog, Sharp Shooter, Six Shooter, and numerous Blackjack and Roulette variants. Video poker players at 21 Casino have an abundance of single-hand and multi-hand variants to choose from. It contains the full line of NetEnt video pokers (Jacks or Better, Deuces Wild, Joker Wild, and All American Poker), as well as the even more diverse Betsoft video pokers with paytables like Deuces and Jokers, Double Bonus Poker, and Double Jackpot Poker. 
21 Casino is also home to a variety of instant win games . These include multiple variants of Keno, unique games like Predictor, and scratchers like Max Win Scratch and Lucky Double. Live Games The live casino at 21 Casino is more diverse than most. Players can enjoy real-time gaming broadcast from a professional casino studio with 5 different games to choose from: Live Baccarat, Live Blackjack, Live Roulette, Live Keno, and Live Lottery. All of the games are dealt to professional standards and have interactive features for a more social gaming experience. Mobile Gaming Many of the games at 21 Casino can be enjoyed on the go thanks to its HTML5-compatible mobile platform. It works with almost all modern smartphones and tabl e ts, including iOS and Android devices . Players don’t have to worry about any downloads and can count on a virtually identical gaming experience. Support 21 Casino strives to provide players with support as efficiently as possible. The site’s live chat function is always just a click away and open 24/7 , connecting players with representatives in minutes or less in most cases. The β€œHelp” page states that representatives are available 24/7, but we have found that this is not always the case. In the event that no one is online, players may send the casino an email and wait until the next business day for a reply instead. Additionally, support is available both in English and Dutch. Security and Fairness All sensitive areas of the 21 Casino website are encrypted with state-of-the-art technology from GlobalSign , a leading security firm. This protocol obscures private information like financial data and passwords, making it virtually impossible for anyone to compromise player accounts. All of the games used at 21 Casino have been thoroughly audited by a trusted independent agency, such as TST or iTech Labs .'}, {'Virtual Games': ' Being a Microgaming casino , it provides players with an excellent selection of slot games to suit every kind of preference, with favorites like Avalon II , Immortal Romance , Thunderstruck II , Elementals , Break da Bank , Playboy , Money Mad Monkey , Starlight Kiss and so much more. However, being a new comer to online gaming market, Conquer Casino’s list of slot games is not yet comprehensive with its 180+ games, out of which, 13 are jackpot games. Nevertheless, these should accommodate most demands for choice of games. Table games too are still limited in number, with only 15 variance of Roulette, Blackjack and Baccarat available, the live game versions included, but players can expect more additions in due time. Video Poker lovers will have to miss out at Conquer Casino as none is currently available, similarly for special games fans too, but scratch cards fans, on the other hand, are well catered for with over 30 games to choose from. Live Games Live games are also available for players and live dealers run the following games: Multi Player Baccarat , Live Baccarat , Live Blackjack , Multi Player Roulette and Live Roulette . Mobile Gaming Users of iOS and Android mobile units can access the casino via an internet connection and there are currently 20 games available for play, such as Avalon, Break da Bank, Burning Desire, Cashapillar, Hitman, Mermaid Millions, Starlight Kiss, Thunderstruck II and Tomb Raider, to name a few. As more and more become mobile players in due time, the number of games available for mobile gaming would increase in tandem. Support Customer support services is available via live chat between 7 AM and 11 PM (GMT). 
Alternatively, players can email the support and get a prompt response. Security and Fairness Being licensed and regulated in Malta under some of the strictest world’s gambling legislation, all personal information, account details and transactions of players are treated with strictest confidence and are not disclosed to any third party. The Random Number Generator (RNG) is audited and tested by Gaming Associates to ensure fairness of the games at all times.'}, {'Virtual Games': " The video slot section is powered by Amatic Industries, and includes video slots like Lucky Zodiac slot , Wild Shark slot , Wolf Moon slot and numerous others. Players looking to try their hand at table games can go for blackjack and a variety of live table games. The video poker offer includes different video poker variants like Deuces Wild, Fruit Poker, Jacks or Better, Joker Card, Multi Win and Multi Win Triple. Live Games The casino has a live casino where players can engage in live gambling . The live casino offer includes various live table games, such as different types of live roulette, live blackjack and live punto banco, which is a type of live baccarat. All tables are hosted by Netent live and Evolution Gaming. Mobile Gaming Since CasinoCasino is a mobile casino , players can play the casino's video slots and other online casino games on their mobile devices. Support Players who need to contact the customer support team can do so via email, phone, online contact phone or live chat , which is open from 08:00 to 01:00 (CET) . Alternatively, these is a FAQ section where players can find answers to various previously asked questions. Security and Fairness The games found in the casino's offer implement Random Number Generators which ensure the outcome is always unbiased."}, {'Virtual Games': ' Players at Barbados Casino have plenty of online slots and table games to choose from. Naturally, their video slots selection is the biggest and is powered by NetEnt , Microgaming , Elk Studios, with popular titles like Foxin’ Wins slot , Copy Cats slot , Taco Brothers slot and EmotiCoins slot . Table games enthusiasts can pick between different variations of the classic casino games of blackjack, roulette and baccarat. These alternatives include Roulette Royal, European Roulette, Baccarat Squeeze, Blackjack Solo and Blackjack Classic. Video poker fans can also find games that suit their taste. These contain Jacks or Better, Deuces Wild, Joker Poker, Royal Poker and Poker King. Additionally, the casino also offers a number of keno and scratch card games. '}, {'Virtual Games': " The casino lobby boasts hundreds of the industry's best video slots from some of the most talked about providers in the market like NetEnt , Microgaming and Play’n GO . Players can easily search for their favourite games which will help find them quicker but other than that other filters here are limited. Players will have to browse the lobby until they find what they want to play. Titles include the Book of Dead Slot , Bonanza Slot , Aloha! Cluster Pays Slot , Thunderstruck 2 Slot and Starburst Slot . Of course, there are table games here as well for anyone looking to play something different to slots. These include popular versions of blackjack and roulette. Players who prefer card games can try their luck with the video poker games like Jacks or Better and Joker Poker. The casino lobby is also home to some other casino games like scratch cards. "}, {'Virtual Games': ' SlotsMagic is ... so on
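One caveat worth noting about the working loop above: paragraph is appended outside the try block, so on a page where the selector or the split fails, the previous page's text (or an unbound name on the very first page) would be appended instead. A hedged hardening of the same loop, keeping the split-based extraction unchanged:

# Reset `paragraph` each iteration and append only on success, so a page
# where the section is missing cannot silently reuse the previous text.
for url in urls:
    driver.get(url)
    time.sleep(1)
    soup = BeautifulSoup(driver.page_source, "lxml")
    paragraph = None
    try:
        text = soup.select_one('section[class="review-text richtext"]').get_text(' ', strip=True)
        paragraph = text.split('Virtual Games')[1].split('Live Casino')[0]
    except (AttributeError, IndexError):
        pass  # selector returned None or the headings were absent on this page
    if paragraph is not None:
        data.append({'Virtual Games': paragraph})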
Trying to extract a paragraph using Beautiful Soup
from selenium import webdriver
import time
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from webdriver_manager.chrome import ChromeDriverManager
from bs4 import BeautifulSoup
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait

options = webdriver.ChromeOptions()
options.add_argument("--no-sandbox")
options.add_argument("--disable-gpu")
options.add_argument("--window-size=1920x1080")
options.add_argument("--disable-extensions")
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))

URL = 'https://www.askgamblers.com/online-casinos/countries/uk'
driver.get(URL)
time.sleep(2)

urls = []
page_links = driver.find_elements(By.XPATH, "//div[@class='card__desc']//a[starts-with(@href, '/online')]")
for link in page_links:
    href = link.get_attribute("href")
    urls.append(href)
    #print(href)

for url in urls:
    driver.get(url)
    time.sleep(1)
    try:
        review = WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//a[@class='review-main__show']")))
        review.click()
    except:
        pass
    soup = BeautifulSoup(driver.page_source, "lxml")
    try:
        paragraph = soup.select_one("h2:-soup-contains('Virtual Games')").nextSibling.textContent
        print(paragraph)
    except:
        print('empty')
        pass

Detail: I am trying to extract this paragraph, but the code above gives me none. When you click on "Read more" you can see the whole paragraph. This is the page link: https://www.askgamblers.com/online-casinos/reviews/mr-play-casino and that is the whole paragraph I want to extract.
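Note on why the selector above returns nothing: in BeautifulSoup, .nextSibling usually lands on the whitespace text node right after the h2, and BeautifulSoup nodes have no .textContent attribute (that is the browser-DOM property), so the expression raises AttributeError and the code always falls into the except branch. Below is a hedged sketch of a sibling-walk fix; it assumes the expanded review keeps an h2 reading 'Virtual Games' followed by p siblings up to the next h2, which is an illustration rather than the site's guaranteed markup:

# Walk the tag siblings after the heading and stop at the next h2.
heading = soup.select_one("h2:-soup-contains('Virtual Games')")
parts = []
if heading is not None:
    for sibling in heading.find_next_siblings():
        if sibling.name == "h2":   # next section heading ends this section
            break
        parts.append(sibling.get_text(" ", strip=True))
paragraph = " ".join(parts) if parts else None
print(paragraph)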
[ "\nActually, detailed paragraph is visualized in the html dom and click on thereviews just go down to the page and the jab can be done by scrolling button too.The subject matter is that clicking on reviews will not work rather will create exception.\n\nThe big problem is to select the virtual games paragraph html nodes perfectly becase all the p nodes are in the same level thats why whatever selection it grabs the entire paragraph.So the split method generates the right output here.\n\n\nWorking code:\nfrom selenium import webdriver\nimport time\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.common.by import By\nfrom webdriver_manager.chrome import ChromeDriverManager\nfrom bs4 import BeautifulSoup\nimport pandas as pd\nfrom selenium.webdriver.support import expected_conditions as EC\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support.wait import WebDriverWait\n\n\noptions = webdriver.ChromeOptions()\noptions.add_argument(\"--no-sandbox\")\noptions.add_argument(\"--disable-gpu\")\noptions.add_argument(\"--window-size=1920x1080\")\noptions.add_argument(\"--disable-extensions\")\ndriver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))\n \nURL = 'https://www.askgamblers.com/online-casinos/countries/uk'\ndriver.get(URL)\ntime.sleep(2)\n\nurls= []\ndata = []\npage_links =driver.find_elements(By.XPATH, \"//div[@class='card__desc']//a[starts-with(@href, '/online')]\")\nfor link in page_links:\n href=link.get_attribute(\"href\")\n urls.append(href)\n #print(href)\n\nfor url in urls:\n driver.get(url)\n time.sleep(1)\n\n soup = BeautifulSoup(driver.page_source,\"lxml\")\n\n try:\n paragraph=soup.select_one('section[class=\"review-text richtext\"]').get_text(' ',strip=True).split('Virtual Games')[1].split('Live Casino')[0]\n #print(paragraph)\n except:\n #print('empty')\n pass\n data.append({'Virtual Games':paragraph})\nprint(data)\n\nOutput:\n[{'Virtual Games': ' Players who enjoy video slots have a large selection to choose from here with titles from gaming suppliers like NetEnt, Microgaming and Play’n GO. The casino is home to some of the newest and popular games in the market with a range of themes to suit all. Titles include the Starburst Slot , Aloha! Cluster Pays Slot , Moon Princess Slot and Birds on a Wire Slot . When it comes to table games members will be able to enjoy some blackjack and \nroulette games including Roulette Pro and Blackjack Solo. There is also a small selection of video pokers for players who enjoy betting with cards such as Jacks or Better. In addition to that, members can pick from multiple Scratch \ncard games for added entertainment. '}, {'Virtual Games': ' Bet365 Casino gives players a full range of online casino games to choose from, including a wide variety of table games, arcades, online slots, video poker and several live dealer choices. Players can find their favourite games under both the Casino and Games tab in the site’s header. Slots lovers have an immense selection of game options at their disposal. Among our favourites are Age of the Gods slot , Immortal Romance slot , to name a few. Those looking for table games will also have a massive collection of roulette, blackjack, baccarat and other casino games to choose from. In addition to this, poker lovers will also find a \ngreat offer that will suit their taste, including Jacks or Better, Joker Poker, and Deuces Wild. 
'}, {'Virtual Games': ' Players who open the casino lobby will be greeted with lots of video slots and there is plenty of variety to cater for all kinds of people. In fact, they have a game library of over 1000+ games from some of the best providers in the industry. The lobby is easy to use and players can easily find their favourite game by using the search facility and can filter the games by each individual provider too. Titles here include the Book of Dead Slot , Gonzo’s Quest Slot , Bonanza Slot , Immortal Romance Slot and Mega Moolah Slot . There are table games to enjoy here as well, especially handy for players who enjoy card and dice games. These include variations of blackjack, roulette and other traditional games. Players who like card games can try the casinos selection of video poker games and these include Jacks or Better and All Aces. There are also some β€˜Other’ games available in the lobby too for anyone who is looking for something different. '}, {'Virtual Games': ' Slot lovers will find a good variety of both bonus video slots and classic fruity-style games at Trada Casino. These include titles from both Microgaming and PariPlay, including popular favourites like Immortal Romance , Avalon II , Tomb Raider , Big Bad Wolf , Rabbit in the Hat , Jurassic Park , and many more variants. Trada Casino also has an exceptional variety of bingo , keno , scratch games , and specialties from various providers. Bingo players can choose from several different gaming styles and themes with titles like Mayan Bingo, Super Bonus Bingo, and Samba Bingo. Those who love to scratch can try to win big with games like Whack a Jackpot , Slam Funk , and Turtley Awesome . Table game enthusiasts can choose from several different roulette and blackjack variants at Trada Casino, such as Classic Blackjack, Spanish Blackjack, Vegas Strip Blackjack, American \nRoulette, Premier Roulette Diamond, and French Roulette. Trada Casino’s selection of video pokers from Microgaming is comprehensive and covers practically every main variety of the game, with many offered in both single-hand and multi-hand versions. Available variants include Aces & Faces, Deuces Wild, Double Double Bonus, Louisiana Double, Tens \nor Better, Joker Poker, and Jacks or Better. '}, {'Virtual Games': \" PlayFrank Casino is home to an extensive selection of slots . Players will find hit titles from Betsoft, Microgaming, NYX Interactive, Play'n Go, and NetEnt. Some \nof the most famous ones include Aliens slot , Starburst slot , Mr. Vegas slot , Foxin’ Wins Again slot , At the Movies slot , and Chimney Sweep slot . Also, the casino hosts numerous of the world’s most popular progressive slots like Mega Fortune Slot and Hall of Gods, with jackpots often in the multi-million range. In addition to its live table \ngames , PlayFrank Casino provides players with a number of virtual table game favourites. These include several variants of European and French Roulette; blackjack games like Single Deck Blackjack and Double Exposure Blackjack; multiple types of Punto Banco/Baccarat; and many other favourites. Play Frank Casino’s video poker selection contains many classic and modern variants from both NetEnt and GamesOS. Among these are All American Poker, Jacks or Better, Deuces Wild, Joker Wild, Shockwave Poker, and Coliseum Poker, among others. Live Games PlayFrank Casino features an extensive live dealer room powered by Evolution Gaming. Players here can enjoy round-the-clock table game excitement in dozens of different live rooms. 
The game selection includes a surprising number of Live Blackjack, Live Roulette, \nand Live Baccarat variants, all dealt to professional standards and broadcast in real-time. Mobile Gaming PlayFrank \nCasino makes satisfying mobile players a key priority. The site’s HTML5 interface is designed to make it easy to play on virtually any device, from a home computer to smartphones and tablets . Players can enjoy dozens of their favourite slots and table games on the go. No app download is necessary. Support The PlayFrank Casino support team is available in English, German and Norwegian. Players via the site’s live chat facility, available from 7 AM till 23 PM. \nAlso, players can reach the casino support team by email or telephone, on top of having a helpful FAQ section at their disposal, where certain previously answered questions can be found. Security and Fairness PlayFrank Casino protects players with the state-of-the-art SSL encryption. This technology is the gold standard for online safety and it is trusted by all reputable online merchants and banks. It prevents anyone from seeing sensitive data with an advanced algorithm that cannot be decoded. The games at PlayFrank Casino meet the standards of fairness required by two independent testing agencies : TST and iTech Labs . As such, players can rest assured that they will find a fair game here.\"}, {'Virtual Games': \" PlayFrank Casino is home to an extensive selection of slots . Players will find hit titles from Betsoft, Microgaming, NYX Interactive, Play'n Go, and NetEnt. Some of the most famous ones include Aliens slot , Starburst slot , Mr. Vegas slot , Foxin’ Wins Again slot , At the Movies slot , and Chimney Sweep slot . Also, \nthe casino hosts numerous of the world’s most popular progressive slots like Mega Fortune Slot and Hall of Gods, with jackpots often in the multi-million range. In addition to its live table games , PlayFrank Casino provides players with a number of virtual table game favourites. These include several variants of European and French Roulette; blackjack games like Single Deck Blackjack and Double Exposure Blackjack; multiple types of Punto Banco/Baccarat; and many other favourites. Play Frank Casino’s video poker selection contains many classic and modern variants from both \nNetEnt and GamesOS. Among these are All American Poker, Jacks or Better, Deuces Wild, Joker Wild, Shockwave Poker, and Coliseum Poker, among others. Live Games PlayFrank Casino features an extensive live dealer room powered by Evolution Gaming. Players here can enjoy round-the-clock table game excitement in dozens of different live rooms. The game selection includes a surprising number of Live Blackjack, Live Roulette, and Live Baccarat variants, all dealt to \nprofessional standards and broadcast in real-time. Mobile Gaming PlayFrank Casino makes satisfying mobile players a \nkey priority. The site’s HTML5 interface is designed to make it easy to play on virtually any device, from a home computer to smartphones and tablets . Players can enjoy dozens of their favourite slots and table games on the go. No \napp download is necessary. Support The PlayFrank Casino support team is available in English, German and Norwegian. \nPlayers via the site’s live chat facility, available from 7 AM till 23 PM. Also, players can reach the casino support team by email or telephone, on top of having a helpful FAQ section at their disposal, where certain previously answered questions can be found. 
Security and Fairness PlayFrank Casino protects players with the state-of-the-art SSL \nencryption. This technology is the gold standard for online safety and it is trusted by all reputable online merchants and banks. It prevents anyone from seeing sensitive data with an advanced algorithm that cannot be decoded. The games at PlayFrank Casino meet the standards of fairness required by two independent testing agencies : TST and iTech Labs . As such, players can rest assured that they will find a fair game here.\"}, {'Virtual Games': ' Slots are a big part of the Hello! Casino game library. Players will find more than 400 different video slot titles from big names like NetEnt, NYX Gaming group, Betsoft, Leander, Thunderkick, Aristocrat, all in a simple instant-play interface. \nAmong the most popular games are Steam Tower , Twin Spin , Starburst , Gonzo’s Quest , and many others. Players who \nenjoy the thrill of the tables will find several virtual card, dice, and roulette games at Hello! Casino. Among them are Multi Wheel Roulette and French Roulette, Blackjack and Pontoon, Craps, Baccarat, Sic Bo, Red Dog Progressive, \nTexas Hold’em Pro, Oasis Poker, and Caribbean Stud. Those looking for something else will find a number of scratch games and keno games in the β€œLottery” section, as well as four different video poker variants under β€œOther” available in both single-hand and multi-hand denominations. '}, {'Virtual Games': ' As 21Prive Casino uses a number different software providers it’s no wonder that they have a rich games portfolio with over 500 games which should meet the taste of different players. Games are easily accessible and carefully categorised. However, there isn’t a search option available. Slot players can choose from a wide ranging collection, which includes classic, progressive and 3D slots. 21Prive Casino puts the effort to freshen up their selection with new and modern slots such as Lost Island slot \n. Players can also find some old classics such as Dead or Alive slot . Popular slots at 21Prive Casino are also Jack and the Beanstalk slot , Zombie Rush slot and Starburst slot . If you are more of a table games fan you will like the choice of games that 21Prive Casino have to offer. Here you can find a good selection of games that include Blackjack, Roulette, Baccarat, Craps, Pai Gow, Red Dog Progressive and many more. 21Prive Casino also have a good offer for poker players including games such as Jacks or Better, Deuces WIld, 3D Video Poker, Multihand Joker Poker and other. There is a number of other casino games too, such as fun and scratch games, ball games and virtual games. Live Games At 21Prive Casino players can also try their luck with live games. At live casino players can choose whether they want to play Roulette, Baccarat, Blackjack or Lottery. Mobile Gaming Players can also enjoy gaming on the go via \ntheir compatible mobile devices. They can login via their tablets or smartphones or register a new account and play \ntheir favourite games regardless of where they are. Available games include Starburst, Gonzos Quest, Blood Suckers and Foxin Wins. Support If you have any concern or inquiry, feel free to contact customer support at 21Prive Casino Casino. They are available to players every day during the week from 24/7 GMT . Players can contact the casino’s support team via live chat or e-mail and get assistance with their issues from professionals. Players can also get answers to some questions in the casino’s FAQ section. 
Security and Fairness 21Prive Casino tends to put the safety of their players first. Therefore, they use RapidSSL encryption , which is the strongest SSL encryption available on the \nmarket at this time. In this way, they are making sure that the players account and personal data are safe. It prevents anyone from tampering with it. As far as fairness goes, all of the casino games are tested and certified at the \nhighest levels to ensure fair game-play at all times.'}, {'Virtual Games': ' As the casino is supported by the NetEnt , Microgaming , Elk Studios, NYX Interactive and Qiuckspin, some of the most popular video slots include Forbidden Throne Slot , Fairytale Legends: Hansel & Gretel Slot , Taco Brothers Slot , Big Bad Wolf Slot . Here, players may \nalso find the progressive slots like Mega Moolah slot , Mega Fortune slot and Arabian Nights slot . Table game lovers can choose between different variations of the classic casino games like blackjack, roulette and baccarat. Likewise, video poker fans can also find something to suit their taste, such as Deuces Wild Double Up, Jacks or Better, Aces and Eights and Joker Wild. '}, {'Virtual Games': ' Members can join the adventure and dive into an endless ocean of exciting games, over 1200+ games to be exact. This includes the latest video slots from leading gaming providers like NetEnt, Microgaming and NextGen Gaming. To make it easier to browse members can filter the games by each individual provider. There’s actually multiple filters available to help players get to their favourite games quicker. Players can enjoy games like the Bonanza Slot, Thunderstruck 2 Slot, Immortal Romance Slot, Moon Princess Slot, Asgardian Stones Slot and Medusa Slot. There’s also a selection of table games available here so players can place their bets on games a little different. A number of different games and bet limits are also available to cater for everyone’s needs. These include Blackjack Turbo and French Roulette Pro. Video pokers can also be found here as the casino is \nhome to dedicated card games section. Members can enjoy pokers like Jacks or Better, Deuces Wild and Joker Poker. '}, {'Virtual Games': ' Players can enjoy over 500 high-definition video slots from some of the biggest names in the industry like Microgaming and NetEnt . The casino regularly adds the newest games and the library features smash-hit \ntitles such as Thunderstruck Slot , Starburst Slot , and Jack and the Beanstalk Slot , to mention a few. Die-hard table game lovers also have plenty to choose from. There are variants of blackjack and roulette like European Blackjack, Classic Blackjack, American Roulette, French roulette, and loads more. Video pokers are also available. Fans can \nplay games like Jacks or Better, Aces & Eights, Deuces Wild, among others. Live Games Players who like the thrill of a land-based casino can enjoy live games from Evolution Gaming and other providers, including Live Blackjack, Live \nRoulette, Live Baccarat, and others. Mobile Gaming Spinland also offers a mobile casino that can be played straight \nthrough a mobile browser, without the need to download any additional software. Support Players who need assistance \ncan enjoy 24/7 live chat with a friendly and professional support team. Questions can also be sent through email for a quick response. Security and Fairness Members can rest assured they are playing in a safe environment as the casino employs SSL encryption to protect personal and financial details. 
A random number generator is also utilized to ensure that all their games result in a random outcome. Furthermore, the site is closely monitored and regulated by eCogra .'}, {'Virtual Games': ' Players can enjoy loads of games here which happens to be from some of the most well \nknown games in the industry. The lobby is very user friendly due to the number of filters players can use to find their favourite games. These include sorting games by their volatility levels, game provider, features, themes, number of paylines, reels and even minimum and maximum bet limits. There’s also a search engine which is a handy tool which can find any game, instantly. Players can enjoy games like the Cleopatra Slot , Wolf Gold Slot , Holmes and the Stolen Stones Slot , Reactoonz Slot and Gonzo’s Quest Slot . There’s lots of table games here too, particularly variations of blackjack and roulette. But, there are some poker games too. And, while we’re on the subject of card games players can also search for a number of video poker games as well. These include Joker Poker and Double Down Poker. '}, {'Virtual Games': ' Members can enjoy video slots here from the likes of NetEnt,\\xa0Microgaming, Amatic Industries and many more well-known suppliers. Games have already been categorized from A-Z, so it helps members to go straight to the games their looking for and makes it easier to browse. There are titles ranging from the newest games to the oldest games and the most popular games. Titles include the Starburst Slot , Aloha! Cluster Pays Slot , The Phantom of the Opera Slot and Highlander Slot . In addition to slots players can enjoy a large selection of table games here starting with low-limit games to suit all budgets. These include classics like Blackjack, Roulette, Baccarat and Caribbean Stud. For those who like betting in card games players can choose from a variety of video pokers like Bonus Power Poker, Deuces Wild Poker and Double Bonus Poker. '}, {'Virtual Games': ' Members can enjoy video slots here from the likes of NetEnt,\\xa0Microgaming, Amatic Industries and many more well-known suppliers. Games have already \nbeen categorized from A-Z, so it helps members to go straight to the games their looking for and makes it easier to \nbrowse. There are titles ranging from the newest games to the oldest games and the most popular games. Titles include the Starburst Slot , Aloha! Cluster Pays Slot , The Phantom of the Opera Slot and Highlander Slot . In addition to slots players can enjoy a large selection of table games here starting with low-limit games to suit all budgets. These include classics like Blackjack, Roulette, Baccarat and Caribbean Stud. For those who like betting in card games players can choose from a variety of video pokers like Bonus Power Poker, Deuces Wild Poker and Double Bonus Poker. '}, {'Virtual Games': ' 21 Casino’s biggest selection of games are its slots . Players will find a highly diverse collection, from simple games to innovative five-reel bonus video slots. Some of the most popular releases include Starburst , Magicious , Medusa II Slot , Merlin’s Millions , Miss Midas Slot , Aliens , Dragon Island , and Mega Fortune . In addition to live table games , 21 Casino features a diverse library of card, dice, and roulette games for all kinds of players. These include Caribbean Stud Poker, Casino Hold’em, Craps, Red Dog, Sharp Shooter, Six Shooter, \nand numerous Blackjack and Roulette variants. 
Video poker players at 21 Casino have an abundance of single-hand and \nmulti-hand variants to choose from. It contains the full line of NetEnt video pokers (Jacks or Better, Deuces Wild, \nJoker Wild, and All American Poker), as well as the even more diverse Betsoft video pokers with paytables like Deuces and Jokers, Double Bonus Poker, and Double Jackpot Poker. 21 Casino is also home to a variety of instant win games . These include multiple variants of Keno, unique games like Predictor, and scratchers like Max Win Scratch and Lucky Double. Live Games The live casino at 21 Casino is more diverse than most. Players can enjoy real-time gaming broadcast from a professional casino studio with 5 different games to choose from: Live Baccarat, Live Blackjack, Live \nRoulette, Live Keno, and Live Lottery. All of the games are dealt to professional standards and have interactive features for a more social gaming experience. Mobile Gaming Many of the games at 21 Casino can be enjoyed on the go thanks to its HTML5-compatible mobile platform. It works with almost all modern smartphones and tabl e ts, including iOS and Android devices . Players don’t have to worry about any downloads and can count on a virtually identical gaming experience. Support 21 Casino strives to provide players with support as efficiently as possible. The site’s live \nchat function is always just a click away and open 24/7 , connecting players with representatives in minutes or less in most cases. The β€œHelp” page states that representatives are available 24/7, but we have found that this is not always the case. In the event that no one is online, players may send the casino an email and wait until the next business day for a reply instead. Additionally, support is available both in English and Dutch. Security and Fairness All sensitive areas of the 21 Casino website are encrypted with state-of-the-art technology from GlobalSign , a leading security firm. This protocol obscures private information like financial data and passwords, making it virtually \nimpossible for anyone to compromise player accounts. All of the games used at 21 Casino have been thoroughly audited by a trusted independent agency, such as TST or iTech Labs .'}, {'Virtual Games': ' Being a Microgaming casino , it provides players with an excellent selection of slot games to suit every kind of preference, with favorites like Avalon II , Immortal Romance , Thunderstruck II , Elementals , Break da Bank , Playboy , Money Mad Monkey , Starlight \nKiss and so much more. However, being a new comer to online gaming market, Conquer Casino’s list of slot games is not yet comprehensive with its 180+ games, out of which, 13 are jackpot games. Nevertheless, these should accommodate \nmost demands for choice of games. Table games too are still limited in number, with only 15 variance of Roulette, Blackjack and Baccarat available, the live game versions included, but players can expect more additions in due time. \nVideo Poker lovers will have to miss out at Conquer Casino as none is currently available, similarly for special games fans too, but scratch cards fans, on the other hand, are well catered for with over 30 games to choose from. Live Games Live games are also available for players and live dealers run the following games: Multi Player Baccarat , Live Baccarat , Live Blackjack , Multi Player Roulette and Live Roulette . 
Mobile Gaming Users of iOS and Android mobile units can access the casino via an internet connection and there are currently 20 games available for play, such as Avalon, Break da Bank, Burning Desire, Cashapillar, Hitman, Mermaid Millions, Starlight Kiss, Thunderstruck II and Tomb Raider, to name a few. As more and more become mobile players in due time, the number of games available for mobile gaming would increase in tandem. Support Customer support services is available via live chat between 7 AM and 11 PM (GMT). Alternatively, players can email the support and get a prompt response. Security and Fairness Being \nlicensed and regulated in Malta under some of the strictest world’s gambling legislation, all personal information, \naccount details and transactions of players are treated with strictest confidence and are not disclosed to any third party. The Random Number Generator (RNG) is audited and tested by Gaming Associates to ensure fairness of the games at all times.'}, {'Virtual Games': \" The video slot section is powered by Amatic Industries, and includes video slots like Lucky Zodiac slot , Wild Shark slot , Wolf Moon slot and numerous others. Players looking to try their hand \nat table games can go for blackjack and a variety of live table games. The video poker offer includes different video poker variants like Deuces Wild, Fruit Poker, Jacks or Better, Joker Card, Multi Win and Multi Win Triple. Live Games The casino has a live casino where players can engage in live gambling . The live casino offer includes various \nlive table games, such as different types of live roulette, live blackjack and live punto banco, which is a type of \nlive baccarat. All tables are hosted by Netent live and Evolution Gaming. Mobile Gaming Since CasinoCasino is a mobile casino , players can play the casino's video slots and other online casino games on their mobile devices. Support Players who need to contact the customer support team can do so via email, phone, online contact phone or live chat , which is open from 08:00 to 01:00 (CET) . Alternatively, these is a FAQ section where players can find answers to various previously asked questions. Security and Fairness The games found in the casino's offer implement Random Number Generators which ensure the outcome is always unbiased.\"}, {'Virtual Games': ' Players at Barbados Casino have \nplenty of online slots and table games to choose from. Naturally, their video slots selection is the biggest and is \npowered by NetEnt , Microgaming , Elk Studios, with popular titles like Foxin’ Wins slot , Copy Cats slot , Taco Brothers slot and EmotiCoins slot . Table games enthusiasts can pick between different variations of the classic casino games of blackjack, roulette and baccarat. These alternatives include Roulette Royal, European Roulette, Baccarat Squeeze, Blackjack Solo and Blackjack Classic. Video poker fans can also find games that suit their taste. These contain Jacks or Better, Deuces Wild, Joker Poker, Royal Poker and Poker King. Additionally, the casino also offers a number of keno and scratch card games. '}, {'Virtual Games': \" The casino lobby boasts hundreds of the industry's best video slots from some of the most talked about providers in the market like NetEnt , Microgaming and Play’n GO . Players can easily search for their favourite games which will help find them quicker but other than that other filters here are limited. Players will have to browse the lobby until they find what they want to play. 
Titles include the Book of Dead Slot , Bonanza Slot , Aloha! Cluster Pays Slot , Thunderstruck 2 Slot and Starburst Slot . Of course, \nthere are table games here as well for anyone looking to play something different to slots. These include popular versions of blackjack and roulette. Players who prefer card games can try their luck with the video poker games like Jacks or Better and Joker Poker. The casino lobby is also home to some other casino games like scratch cards. \"}, {'Virtual Games': ' SlotsMagic is \n\n... so on\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "python", "web_scraping" ]
stackoverflow_0074610162_beautifulsoup_python_web_scraping.txt
Q: Python, most compact & efficient way of checking if an item is in any of the lists (which are inside dictionaries)? I have a dictionary with lists (with strings inside) and I need to check if a string appears anywhere among those lists. Here is an example

classes = {
    "class_A" : ["Mike","Alice","Peter"],
    "class_B" : ["Sam","Robert","Anna"],
    "class_C" : ["Tom","Nick","Jack"]
}
students=["Alice","Frodo","Jack"]
for student in students:
    if student *in any of the classes*:
        print("Is here")
    else:
        print("Is not here")

For every student in the list I provided: if that student is in any of the classes, do A, if not do B. Currently the output is
Is here, Is not here, Is here
Here is my current code:

studentsInClasses=[]
for studentsInClass in classes.values():
    studentsInClasses+=studentsInClass

students=["Alice","Frodo","Jack"]
for student in students:
    if student in studentsInClasses:
        print("Is here")
    else:
        print("Is not here")

But this is happening inside a complex structure of classes, functions and loops, so it becomes a major performance issue as soon as I scale up the inputs. Here is something that I do like, but it is a bit annoying as I have to make sure that whatever function my code is in has access to this one:

def check(student,classes):
    for value in classes.values():
        if student in value:
            return True
    return False

It is probably as good as it gets, but I would like to know if there is a simple one-liner that does the job.
Requirements:
Does not create a copy of the lists
Does not rely on keys in any way
Preferably nice and simple
Isn't an over-engineered super-efficient solution
I am new to Stack Overflow; if I am making any major mistakes in my way of posting, please do tell, and I will try to improve my question writing. Thanks
A: If a generator expression is acceptable with regards to your requirements, then:

def check(student, classes):
    return any(student in value for value in classes.values())

And to get a boolean for each student, you could create this function:

def checkall(students, classes):
    return [any(student in value for value in classes.values())
            for student in students]

For your sample data, this would return [True, False, True].
A: So if you want to just print "is here" or "is not here", here is an example:

classes = {
    "class_A" : ["Mike","Alice","Peter"],
    "class_B" : ["Sam","Robert","Anna"],
    "class_C" : ["Tom","Nick","Jack"]
}
students = ["Alice","Frodo","Jack"]
for student in students:
    for line in str(classes).split(","):
        if student in line:
            print("Student in here")
        else:
            print("Student not here")

A: Since this is in a loop, you should create a set for all of the values:

from itertools import chain

values = set(chain(*classes.values()))

students=["Alice","Frodo","Jack"]

for student in students:
    if student in values:
        print("Is here")
    else:
        print("Is not here")

The reason is that a set lookup is a constant-time lookup, and in a tight loop that makes a big difference.
A: How about making the list of students in each class a set? Then the lookup time will be O(1) and you can loop over n classes.
Then you can have:

class_to_students = {
    "class_A" : {"Mike","Alice","Peter"},
    "class_B" : {"Sam","Robert","Anna"},
    "class_C" : {"Tom","Nick","Jack"}
}
students=["Alice","Frodo","Jack"]
for student in students:
    for class_students in class_to_students.values():
        if student in class_students:
            print(f"{student} Is here")
            break
    else:
        # loop was not broken out of
        print(f"{student} Is not here")

-->
Alice Is here
Frodo Is not here
Jack Is here

If you exclude a solution like this, then you are stuck with your n*m solution, where n is the number of classes and m is the number of students in each class.
A: Is this something you are looking for?

classes = {
    "class_A": ["Mike","Alice","Peter"],
    "class_B": ["Sam","Robert","Anna"],
    "class_C": ["Tom","Nick","Jack"]
}

students = ["Alice", "Frodo", "Jack"]
res = [student in class_ for class_ in classes.values() for student in students]
print(res)

A: If allowed, I guess you'd increase performance if you delete the entry within the class whenever you have a match.
Use sorting beforehand:

studentsInClasses=[]
for studentsInClass in classes.values():
    studentsInClasses+=studentsInClass

studentsInClasses = sorted(studentsInClasses)
students=sorted(["Alice","Frodo","Jack"])

lastMatch=0
for i in range(len(students)):
    student = students[i]
    try:
        lastMatch = studentsInClasses[lastMatch:].index(student)
        print(f"{student} in class")
    except ValueError as e:
        pass
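Pulling the set-based suggestions together, here is a minimal consolidated sketch. It builds the lookup set once and reuses it for every student; note that the set holds references to the same strings rather than copying the lists wholesale, which only slightly bends the asker's "no copy of the lists" requirement:

from itertools import chain

classes = {
    "class_A": ["Mike", "Alice", "Peter"],
    "class_B": ["Sam", "Robert", "Anna"],
    "class_C": ["Tom", "Nick", "Jack"],
}
students = ["Alice", "Frodo", "Jack"]

# Build once: chain.from_iterable streams the per-class lists into the set
# without creating an intermediate concatenated list.
all_students = set(chain.from_iterable(classes.values()))

for student in students:
    # Set membership is O(1) per lookup, versus scanning every list each time.
    print("Is here" if student in all_students else "Is not here")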
Python, most compact & efficient way of checking if an item is in any of the lists (which are inside dictionaries)?
I have a dictionary with lists (with strings inside) and I need to check if a string appears anywhere among those lists. Here is an example

classes = {
    "class_A" : ["Mike","Alice","Peter"],
    "class_B" : ["Sam","Robert","Anna"],
    "class_C" : ["Tom","Nick","Jack"]
}
students=["Alice","Frodo","Jack"]
for student in students:
    if student *in any of the classes*:
        print("Is here")
    else:
        print("Is not here")

For every student in the list I provided: if that student is in any of the classes, do A, if not do B. Currently the output is
Is here, Is not here, Is here
Here is my current code:

studentsInClasses=[]
for studentsInClass in classes.values():
    studentsInClasses+=studentsInClass

students=["Alice","Frodo","Jack"]
for student in students:
    if student in studentsInClasses:
        print("Is here")
    else:
        print("Is not here")

But this is happening inside a complex structure of classes, functions and loops, so it becomes a major performance issue as soon as I scale up the inputs. Here is something that I do like, but it is a bit annoying as I have to make sure that whatever function my code is in has access to this one:

def check(student,classes):
    for value in classes.values():
        if student in value:
            return True
    return False

It is probably as good as it gets, but I would like to know if there is a simple one-liner that does the job.
Requirements:
Does not create a copy of the lists
Does not rely on keys in any way
Preferably nice and simple
Isn't an over-engineered super-efficient solution
I am new to Stack Overflow; if I am making any major mistakes in my way of posting, please do tell, and I will try to improve my question writing. Thanks
[ "If a generator expression is acceptable with regards to your requirements, then:\ndef check(student, classes):\n return any(student in value for value in classes.values())\n\nAnd to get a boolean for each student, you could create this function:\ndef checkall(students, classes):\n return [any(student in value for value in classes.values()) \n for student in students]\n\nFor your sample data, this would return [True, False, True].\n", "So if you want to just print is here or is not here here is an example:\nclasses = {\n \"class_A\" : [\"Mike\",\"Alice\",\"Peter\"],\n \"class_B\" : [\"Sam\",\"Robert\",\"Anna\"],\n \"class_C\" : [\"Tom\",\"Nick\",\"Jack\"]\n}\nfor line in str(classes).split(\",\"):\n if student in line:\n print(\"Student in here\")\n else:\n print(\"Student not here\")\n\n", "Since this is in a loop, you should create a set for all of the values:\nfrom itertools import chain\n\nvalues = set(chain(*classes.values()))\n\nstudents=[\"Alice\",\"Frodo\",\"Jack\"]\n\nfor student in students:\n if student in values:\n print(\"Is here\")\n else:\n print(\"Is not here\")\n\n\nThe reason is that a set lookup is a constant time lookup, and in a tight loop makes a big difference\n", "How about making the list of students in each class a set?\nThen the lookup time will be o(1) and you can loop over n classes.\nThe you can have:\nclass_to_students = {\n \"class_A\" : {\"Mike\",\"Alice\",\"Peter\"},\n \"class_B\" : {\"Sam\",\"Robert\",\"Anna\"},\n \"class_C\" : {\"Tom\",\"Nick\",\"Jack\"}\n}\nstudents=[\"Alice\",\"Frodo\",\"Jack\"]\nfor student in students:\n for class_students in class_to_students.values():\n if student in class_students:\n print(f\"{student} Is here\")\n break\n else:\n # loop was not broken out of\n print(f\"{student} Is not here\")\n\n-->\nAlice Is here\nFrodo Is not here\nJack Is here\n\nIf you exclude a solution like this, then you are stuck with your n*m solution where n is the number of classes and m is the number of students in each class.\n", "Is this something you are looking for?\nclasses = {\n \"class_A\": [\"Mike\",\"Alice\",\"Peter\"],\n \"class_B\": [\"Sam\",\"Robert\",\"Anna\"],\n \"class_C\": [\"Tom\",\"Nick\",\"Jack\"]\n}\n\nstudents = [\"Alice\", \"Frodo\", \"Jack\"]\nres = [student in class_ for class_ in classes.values() for student in students ]\nprint(res)\n\n", "\nIf allowed, I guess you'd increase performance if you delete the entry within the class, whenever you had a match.\n\nUse sorting beforehand\n\n\nstudentsInClasses=[]\nfor studentsInClass in classes.values()\n studentsInClasses+=studentsInClass\n\nstudentsInClasses = sorted(studentsInClasses)\nstudents=sorted([\"Alice\",\"Frodo\",\"Jack\"])\n\nlastMatch=0\nfor i in range(len(students)):\n student = students[i]\n try:\n lastMatch = studentsInClasses[lastMatch:].index(student)\n print(f\"{student} in class\")\n except ValueError as e:\n pass\n\n\n" ]
[ 3, 0, 0, 0, 0, 0 ]
[]
[]
[ "dictionary", "list", "python" ]
stackoverflow_0074620209_dictionary_list_python.txt
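A runnable distillation of the set-based answers above, using the thread's own sample data; the set is built once, and every membership check afterwards is an average O(1) operation:
from itertools import chain

classes = {
    "class_A": ["Mike", "Alice", "Peter"],
    "class_B": ["Sam", "Robert", "Anna"],
    "class_C": ["Tom", "Nick", "Jack"],
}
students = ["Alice", "Frodo", "Jack"]

# Flatten all class lists into one set of enrolled names.
enrolled = set(chain.from_iterable(classes.values()))
for student in students:
    print("Is here" if student in enrolled else "Is not here")
Rebuild the set only when the class lists change; that is the trade-off against the question's "no copies" requirement.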
Q: how to run a function in python for a specified time interval I want to run a function for only n seconds, after which the function shouldn't run anymore, while in the background other functions should continue running. I've tried using the time.time() function with a while loop, but then other functions in the background don't run, and I want it in such a way that other functions can run at the same time. For example: if three functions (function A, function B and function C) exist, my functions A and B should run continuously while function C should run only for a certain time in the background A: use threading and the timeout parameter for join https://docs.python.org/3/library/threading.html#threading.Thread.start from threading import Thread import time def A(): while True: time.sleep(2) def B(): while True: time.sleep(1) def C(): while True: time.sleep(1) t_a = Thread(target=A) t_a.start() Thread(target=B).start() Thread(target=C).start() t_a.join(timeout=10)
how to run a function in python for a specified time interval
I want to run a function for only n seconds, after which the function shouldn't run anymore, while in the background other functions should continue running. I've tried using the time.time() function with a while loop, but then other functions in the background don't run, and I want it in such a way that other functions can run at the same time. For example: if three functions (function A, function B and function C) exist, my functions A and B should run continuously while function C should run only for a certain time in the background
[ "use threading and the timeout parameter for join https://docs.python.org/3/library/threading.html#threading.Thread.start\nfrom threading import Thread\nimport time\ndef A():\n while True:\n time.sleep(2)\ndef B():\n while True:\n time.sleep(1)\ndef C():\n while True:\n time.sleep(1)\n\nt_a = Thread(target=A)\nt_a.run()\nThread(target=B).run()\nThread(target=C).run()\n\nt_a.join(timeout=10)\n\n" ]
[ 2 ]
[]
[]
[ "python" ]
stackoverflow_0074620402_python.txt
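One caveat about the answer above: join(timeout=10) only stops the main thread from waiting; it does not stop the worker itself. A minimal sketch of one common pattern for actually ending C after n seconds, using a threading.Event as a stop flag (the function body is illustrative):
import threading
import time

stop = threading.Event()

def c():
    while not stop.is_set():  # C keeps working until the flag is set
        time.sleep(1)

threading.Thread(target=c, daemon=True).start()
threading.Timer(10, stop.set).start()  # request a stop after 10 seconds
# functions A and B can keep running here in the meantime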
Q: Conda environments only working with base version Python I have started to learn how to work with creating new virtual environments. However whenever I try to launch a Jupyter Notebook I find that going through the dropdown menu and selecting the kernel name results in Kernel starting, please wait... followed by connection failed. Very simply my approach is: conda create --name py37 python==3.7.2 #activate conda activate py37 conda install pandas conda install ipykernel ipython kernel install --user --display-name "guacamole" Now when I locate the folder where the kernels are created: C:\Users\User\AppData\Roaming\jupyter\kernels What I find is that for the kernel.json file, when argv is set to "C:\Users\User\Anaconda3\python.exe" then my kernel loads fine in Jupyter When argv is set to "C:\Users\User\Anaconda3\envs\py37\python.exe" it fails to load. Any suggestions would be really appreciated! A: This ended up being very straight forward. While I feel a bit silly I didn't click at first I learnt a fair bit about virtual environments in the process! All that was necessary was to simply run the following once I had activated the environment (in Anaconda prompt) jupyter notebook The problem was that I was running a jupyter notebook from a different location hence the clash. Hopefully this helps someone else. A: You should be able to run jupyter notebook from any environment, and be able to select the kernel from the dropdown. No need to exit and change directory. conda activate py37 python -m ipykernel install --user --name py37 --display-name "guacamole" # then, to test conda deactivate jupyter notebook
Conda environments only working with base version Python
I have started to learn how to work with creating new virtual environments. However whenever I try to launch a Jupyter Notebook I find that going through the dropdown menu and selecting the kernel name results in Kernel starting, please wait... followed by connection failed. Very simply my approach is: conda create --name py37 python==3.7.2 #activate conda activate py37 conda install pandas conda install ipykernel ipython kernel install --user --display-name "guacamole" Now when I locate the folder where the kernels are created: C:\Users\User\AppData\Roaming\jupyter\kernels What I find is that for the kernel.json file, when argv is set to "C:\Users\User\Anaconda3\python.exe" then my kernel loads fine in Jupyter When argv is set to "C:\Users\User\Anaconda3\envs\py37\python.exe" it fails to load. Any suggestions would be really appreciated!
[ "This ended up being very straight forward. While I feel a bit silly I didn't click at first I learnt a fair bit about virtual environments in the process!\nAll that was necessary was to simply run the following once I had activated the environment (in Anaconda prompt)\njupyter notebook\n\nThe problem was that I was running a jupyter notebook from a different location hence the clash. Hopefully this helps someone else.\n", "You should be able to run jupyter notebook from any environment, and be able to select the kernel from the dropdown. No need to exit and change directory.\nconda activate py37\npython -m ipykernel install --user --name py37 --display-name \"guacamole\"\n\n# then, to test \nconda deactivate\njupyter notebook \n\n" ]
[ 0, 0 ]
[]
[]
[ "anaconda", "conda", "jupyter_notebook", "python" ]
stackoverflow_0068076724_anaconda_conda_jupyter_notebook_python.txt
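When a kernel fails to connect, inspecting the registered spec usually shows whether it points at the environment's interpreter; a small diagnostic sketch (the kernel folder name py37 and the Windows path are assumptions based on the question):
import json
from pathlib import Path

spec = Path.home() / "AppData/Roaming/jupyter/kernels/py37/kernel.json"
argv = json.loads(spec.read_text())["argv"]
print(argv[0])  # expect ...\Anaconda3\envs\py37\python.exe for the env kernel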
Q: Django models relation The idea is that there are two models: Group and Player. My objective is that there are different Groups and each group has players. Each player can belong to one or more groups. Inside a group, each player has some points accumulated, but the same player can have different points accumulated in another group. class Player(models.Model): username = models.CharField(max_length = 200) won_games = models.IntegerField(default=0) class Point(models.Model): player = models.ForeignKey(Player, on_delete=models.PROTECT, related_name='points') val = models.IntegerField() group = models.ForeignKey(Group, on_delete=models.PROTECT, related_name='points') class Group(models.Model): id = models.CharField(max_length = 200) players = models.ManyToManyField(Player,related_name="groups") points = models.ManyToManyField(Point) I am confused because I don't know how to make that a player has "x" points in group A (for example) and also has "y" points in group B. I want to be able to show the data of a group, for each group, show its members and their points. A: You can tell Django to use a custom model for the many to many relationship using through parameter in that field, this way: class Player(models.Model): username = models.CharField(max_length = 200) won_games = models.IntegerField(default=0) class Group(models.Model): id = models.CharField(max_length = 200) players = models.ManyToManyField(Player, related_name="groups", through='Point') class Point(models.Model): player = models.ForeignKey(Player, on_delete=models.PROTECT, related_name='points') group = models.ForeignKey(Group, on_delete=models.PROTECT, related_name='points') Django docs include an example for this topic here: https://docs.djangoproject.com/en/4.1/topics/db/models/#extra-fields-on-many-to-many-relationships A: You probably need an intermediate table with an extra field. Something similar to this: from django.db import models class Player(models.Model): username = models.CharField(max_length = 200) won_games = models.IntegerField(default=0) def __str__(self): return self.username class Group(models.Model): name = models.CharField(max_length=128) players= models.ManyToManyField(Player, through='Point') def __str__(self): return self.name class Point(models.Model): player = models.ForeignKey(Player, on_delete=models.CASCADE) group = models.ForeignKey(Group, on_delete=models.CASCADE) point = models.IntegerField()
Django models relation
The idea is that there are two models: Group and Player. My objective is that there are different Groups and each group has players. Each player can belong to one or more groups. Inside a group, each player has some points accumulated, but the same player can have different points accumulated in another group. class Player(models.Model): username = models.CharField(max_length = 200) won_games = models.IntegerField(default=0) class Point(models.Model): player = models.ForeignKey(Player, on_delete=models.PROTECT, related_name='points') val = models.IntegerField() group = models.ForeignKey(Group, on_delete=models.PROTECT, related_name='points') class Group(models.Model): id = models.CharField(max_length = 200) players = models.ManyToManyField(Player,related_name="groups") points = models.ManyToManyField(Point) I am confused because I don't know how to make that a player has "x" points in group A (for example) and also has "y" points in group B. I want to be able to show the data of a group, for each group, show its members and their points.
[ "You can tell Django to use a custom model for the many to many relationship using through parameter in that field, this way:\n\nclass Player(models.Model):\n username = models.CharField(max_length = 200)\n won_games = models.IntegerField(default=0)\n\n\nclass Group(models.Model):\n id = models.CharField(max_length = 200)\n players = models.ManyToManyField(Player, related_name=\"groups\", through='Point')\n\n\nclass Point(models.Model):\n player = models.ForeignKey(Player, on_delete=models.PROTECT, related_name='points')\n group = models.ForeignKey(Group, on_delete=models.PROTECT, related_name='points')\n\n\nDjango docs include an example for this topic here: https://docs.djangoproject.com/en/4.1/topics/db/models/#extra-fields-on-many-to-many-relationships\n", "You probably need an intermediate table with an extra field. Something similar to this:\nfrom django.db import models\n\nclass Player(models.Model):\n username = models.CharField(max_length = 200)\n won_games = models.IntegerField(default=0)\n\n def __str__(self):\n return self.username\n\nclass Group(models.Model):\n name = models.CharField(max_length=128)\n players= models.ManyToManyField(Player, through='Point')\n\n def __str__(self):\n return self.name\n\nclass Point(models.Model):\n player = models.ForeignKey(Player, on_delete=models.CASCADE)\n group = models.ForeignKey(Group, on_delete=models.CASCADE)\n point = models.IntegerField()\n\n\n" ]
[ 0, 0 ]
[]
[]
[ "django", "django_models", "django_rest_framework", "python" ]
stackoverflow_0074620380_django_django_models_django_rest_framework_python.txt
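Assuming the second answer's models, reading the per-group points back out could look like this sketch; point_set is Django's default reverse accessor because that answer sets no related_name on Point:
# Hypothetical usage of the intermediate model.
for group in Group.objects.prefetch_related("point_set__player"):
    print(group.name)
    for entry in group.point_set.all():
        print(" ", entry.player.username, entry.point)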
Q: Python - automatically apply Excel filters to .csv files/method to convert hh:mm:ss time string to integer? I have a ton of VOIP analytics to process, all in .csv format. All calls are formatted as rows, and I need to isolate rows with cells that match the strings "Answered" and "Terminating", and with call duration <= 00:00:30. I've been combing through Python libraries to find one that can quickly and easily apply the necessary filters and work with that funky time format so I never have to actually open the .csv itself, but with mixed success. If I knew a quick way to convert that time string into an integer, that would help a lot as well. What's the best library and method to use in this situation? Currently working with Python 3. Checked xlsxwriter, no dice since it's write-only. Currently looking at Pandas and openpyxl, but it's looking murky. A: Using Miller at command line to filter a CSV and capture all rows with time <= 00:00:30. cat time_select.csv id,time_val 1,00:00:01 2,00:00:02 3,00:00:03 4,00:00:04 5,00:00:05 6,00:00:06 7,00:00:07 8,00:00:08 9,00:00:09 10,00:00:10 ... 50,00:00:50 51,00:00:51 52,00:00:52 53,00:00:53 54,00:00:54 55,00:00:55 56,00:00:56 57,00:00:57 58,00:00:58 59,00:00:59 mlr --csv filter 'strptime($time_val, "%H:%M:%S") <= strptime("00:00:30", "%H:%M:%S")' time_select.csv > time_filtered.csv cat time_filtered.csv id,time_val 1,00:00:01 2,00:00:02 3,00:00:03 4,00:00:04 5,00:00:05 6,00:00:06 7,00:00:07 8,00:00:08 9,00:00:09 10,00:00:10 11,00:00:11 12,00:00:12 13,00:00:13 14,00:00:14 15,00:00:15 16,00:00:16 17,00:00:17 18,00:00:18 19,00:00:19 20,00:00:20 21,00:00:21 22,00:00:22 23,00:00:23 24,00:00:24 25,00:00:25 26,00:00:26 27,00:00:27 28,00:00:28 29,00:00:29 30,00:00:30
Python - automatically apply Excel filters to .csv files/method to convert hh:mm:ss time string to integer?
I have a ton of VOIP analytics to process, all in .csv format. All calls are formatted as rows, and I need to isolate rows with cells that match the strings "Answered" and "Terminating", and with call duration <= 00:00:30. I've been combing through Python libraries to find one that can quickly and easily apply the necessary filters and work with that funky time format so I never have to actually open the .csv itself, but with mixed success. If I knew a quick way to convert that time string into an integer, that would help a lot as well. What's the best library and method to use in this situation? Currently working with Python 3. Checked xlsxwriter, no dice since it's write-only. Currently looking at Pandas and openpyxl, but it's looking murky.
[ "Using Miller at command line to filter a CSV and capture all rows with time <= 00:00:30.\ncat time_select.csv\nid,time_val\n1,00:00:01\n2,00:00:02\n3,00:00:03\n4,00:00:04\n5,00:00:05\n6,00:00:06\n7,00:00:07\n8,00:00:08\n9,00:00:09\n10,00:00:10\n...\n50,00:00:50\n51,00:00:51\n52,00:00:52\n53,00:00:53\n54,00:00:54\n55,00:00:55\n56,00:00:56\n57,00:00:57\n58,00:00:58\n59,00:00:59\n\n\nmlr --csv filter 'strptime($time_val, \"%H:%M:%S\") <= strptime(\"00:00:30\", \"%H:%M:%S\")' time_select.csv > time_filtered.csv\n\ncat time_filtered.csv \nid,time_val\n1,00:00:01\n2,00:00:02\n3,00:00:03\n4,00:00:04\n5,00:00:05\n6,00:00:06\n7,00:00:07\n8,00:00:08\n9,00:00:09\n10,00:00:10\n11,00:00:11\n12,00:00:12\n13,00:00:13\n14,00:00:14\n15,00:00:15\n16,00:00:16\n17,00:00:17\n18,00:00:18\n19,00:00:19\n20,00:00:20\n21,00:00:21\n22,00:00:22\n23,00:00:23\n24,00:00:24\n25,00:00:25\n26,00:00:26\n27,00:00:27\n28,00:00:28\n29,00:00:29\n30,00:00:30\n\n\n" ]
[ 0 ]
[]
[]
[ "csv", "excel", "python", "python_3.x" ]
stackoverflow_0074605396_csv_excel_python_python_3.x.txt
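Since the question already considers Pandas, the same filter is a few lines in Python; pd.to_timedelta parses hh:mm:ss strings directly, and the file and column names below are assumptions:
import pandas as pd

df = pd.read_csv("calls.csv")
mask = (
    df["status"].isin(["Answered", "Terminating"])
    & (pd.to_timedelta(df["duration"]) <= pd.Timedelta(seconds=30))
)
df[mask].to_csv("filtered.csv", index=False)
If a plain number is needed, pd.to_timedelta(df["duration"]).dt.total_seconds() converts the time strings to seconds.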
Q: Pyserial write data to serial port It's my first time working with pyserial. I made a simple GUI with pysimplegui and now I'd like to write the data from the sliders to the serial monitor. How can I do it? import PySimpleGUI as sg import serial font = ("Courier New", 11) sg.theme("DarkBlue3") sg.set_options(font=font) ser = serial.Serial("COM6") ser.flushInput() layout = [ [sg.Text("X"), sg.Slider((0, 360), orientation='horizontal', key='SLIDER_X')], [sg.Text("Y"), sg.Slider((0, 360), orientation='horizontal', key='SLIDER_Y')], [sg.Text("Z"), sg.Slider((0, 360), orientation='horizontal', key='SLIDER_Z')], [sg.Push(), sg.Button('Exit')], ] window = sg.Window("Controller", layout, finalize=True) window['SLIDER_X'].bind('<ButtonRelease-1>', ' Release') window['SLIDER_Y'].bind('<ButtonRelease-1>', ' Release') window['SLIDER_Z'].bind('<ButtonRelease-1>', ' Release') while True: event, values = window.read() if event in (sg.WINDOW_CLOSED, 'Exit'): break elif event == 'SLIDER_X Release': print("X Value:", values["SLIDER_X"]) elif event == 'SLIDER_Y Release': print("Y Value:", values["SLIDER_Y"]) elif event == 'SLIDER_Z Release': print("Z Value:", values["SLIDER_Z"]) data = values["SLIDER_X"], values["SLIDER_Y"], values["SLIDER_Z"] print("String:", data) window.close() ser.close() If I just do ser.write(data) I get an error: TypeError: 'float' object cannot be interpreted as an integer I just want to write the data to the serial port so I can read it with an Arduino. A: There are 2 issues with the code here: Variable data has a type of tuple, not a number. Each member of the tuple is a float as the TypeError indicates. You will need to pass one value at a time to ser.write. And you will need to cast the float returned by the slider widget to an integer and encode it as bytes, since ser.write expects a bytes-like object. Something like the following: data = int(values["SLIDER_X"]) ser.write(str(data).encode())
Pyserial write data to serial port
It's my first time working with pyserial. I made a simple GUI with pysimplegui and now I'd like to write the data from the sliders to the serial monitor. How can I do it? import PySimpleGUI as sg import serial font = ("Courier New", 11) sg.theme("DarkBlue3") sg.set_options(font=font) ser = serial.Serial("COM6") ser.flushInput() layout = [ [sg.Text("X"), sg.Slider((0, 360), orientation='horizontal', key='SLIDER_X')], [sg.Text("Y"), sg.Slider((0, 360), orientation='horizontal', key='SLIDER_Y')], [sg.Text("Z"), sg.Slider((0, 360), orientation='horizontal', key='SLIDER_Z')], [sg.Push(), sg.Button('Exit')], ] window = sg.Window("Controller", layout, finalize=True) window['SLIDER_X'].bind('<ButtonRelease-1>', ' Release') window['SLIDER_Y'].bind('<ButtonRelease-1>', ' Release') window['SLIDER_Z'].bind('<ButtonRelease-1>', ' Release') while True: event, values = window.read() if event in (sg.WINDOW_CLOSED, 'Exit'): break elif event == 'SLIDER_X Release': print("X Value:", values["SLIDER_X"]) elif event == 'SLIDER_Y Release': print("Y Value:", values["SLIDER_Y"]) elif event == 'SLIDER_Z Release': print("Z Value:", values["SLIDER_Z"]) data = values["SLIDER_X"], values["SLIDER_Y"], values["SLIDER_Z"] print("String:", data) window.close() ser.close() If I just do ser.write(data) I get an error: TypeError: 'float' object cannot be interpreted as an integer I just want to write the data to the serial port so I can read it with an Arduino.
[ "There are 2 issues with the code here:\n\nVariable data has a type of tuple, not a number.\nEach member of the tuple is a float as the TypeError indicates.\n\nYou will need to pass one value at a time to ser.write. And you will need to cast the float returned by the slider widget to an integer. Something like the following:\ndata = int(values[\"SLIDER_X\"])\nser.write(data)\n\n" ]
[ 0 ]
[]
[]
[ "pyserial", "pysimplegui", "python", "python_3.x" ]
stackoverflow_0074619830_pyserial_pysimplegui_python_python_3.x.txt
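To send all three sliders in a form an Arduino can parse, one option is a single comma-separated ASCII line per update; the message format here is an assumption, and the helper would be called from the question's event loop:
def send_angles(ser, values):
    x = int(values["SLIDER_X"])
    y = int(values["SLIDER_Y"])
    z = int(values["SLIDER_Z"])
    ser.write(f"{x},{y},{z}\n".encode("ascii"))  # e.g. b"90,180,45\n"
On the Arduino side, Serial.parseInt() or readStringUntil('\n') can split the line back into the three values.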
Q: Chain assignment in Python for list I am trying to understand chain assignment in Python. If I run x = x[1] = [1, 2], I get an infinite list [1, [...]]. But if I run x = x[1:] = [1, 2], I will get a normal list [1, 1, 2]. How does it work in the background to make these two different results? A: First, understand that in a chained assignment, the right-most expression is evaluated to an object. A reference to that object is then assigned to each target in sequence, from left to right. x = y = z is effectively the same as t = z # A new temporary variable "t" to hold the result of evaluating z x = t y = t del t Second, you need to understand the difference between assigning t to the subscription x[1] and to the slicing x[1:]. In the former, element 1 of x is replaced by t. In the latter, elements x[1], x[2], etc are replaced by t[0], t[1], etc, extending or shrinking x as necessary. The first example is equivalent to t = [1,2] x = t x[1] = t del t x[1] becomes a reference to x itself; list.__repr__ can detect cycles like this and represents them with .... x, x[1], x[1][1], etc are all references to the original list [1,2]. The second example is equivalent to t = [1,2] x = t x[1:] = t del t In this case, no cycles are created. x[1] is replaced by a reference to 1, not x/t itself, and x is extended to make x[2] a reference to 2. (If it helps, think of x[1:] = t as producing the same result as x = [x[0]]; x.extend(t).) A: x = x[1] = [1, 2] In this case, x and x[1] are the same object. Therefore x contains itself, which is represented by ... in your output. x = x[1:] = [1, 2] In this case, x is a slice, and slices are copies. so they are not the same object. A: Explaining the outputs: x = x[1] = [1, 2] # Output: [1, [...]] The reason is that the list is not copied, but referenced. x[1] is a reference to the list [1, 2]. When you assign x[1] to x, you are assigning a reference to the list [1, 2] to x. So x and x[1] are both references to the same list. When you change the list through one of the references, the change appears in the other reference as well. This is called aliasing. x = x[1:] = [1, 2] # Output: [1, 1, 2] The reason is that the list is copied, not referenced. x[1:] is slice of the list [1, 2]. The slice is a copy of the list, not a reference to the list. When you assign x[1:] to x, you are assigning a copy of the list [1, 2] to x. So x and x[1:] are both copies of the same list. When you change the list through one of the references, the change does not appear in the other reference.
Chain assignment in Python for list
I am trying to understand chain assignment in Python. If I run x = x[1] = [1, 2], I get an infinite list [1, [...]]. But if I run x = x[1:] = [1, 2], I will get a normal list [1, 1, 2]. How does it work in the background to make these two different results?
[ "First, understand that in a chained assignment, the right-most expression is evaluated to an object. A reference to that object is then assigned to each target in sequence, from left to right. x = y = z is effectively the same as\nt = z # A new temporary variable \"t\" to hold the result of evaluating z\nx = t\ny = t\ndel t\n\nSecond, you need to understand the difference between assigning t to the subscription x[1] and to the slicing x[1:]. In the former, element 1 of x is replaced by t. In the latter, elements x[1], x[2], etc are replaced by t[0], t[1], etc, extending or shrinking x as necessary.\nThe first example is equivalent to\nt = [1,2]\nx = t\nx[1] = t\ndel t\n\nx[1] becomes a reference to x itself; list.__repr__ can detect cycles like this and represents them with .... x, x[1], x[1][1], etc are all references to the original list [1,2].\nThe second example is equivalent to\nt = [1,2]\nx = t\nx[1:] = t\ndel t\n\nIn this case, no cycles are created. x[1] is replaced by a reference to 1, not x/t itself, and x is extended to make x[2] a reference to 2. (If it helps, think of x[1:] = t as producing the same result as x = [x[0]]; x.extend(t).)\n", "x = x[1] = [1, 2]\n\nIn this case, x and x[1] are the same object. Therefore x contains itself, which is represented by ... in your output.\nx = x[1:] = [1, 2]\n\nIn this case, x is a slice, and slices are copies. so they are not the same object.\n", "Explaining the outputs:\nx = x[1] = [1, 2] # Output: [1, [...]]\n\nThe reason is that the list is not copied, but referenced.\n\nx[1] is a reference to the list [1, 2].\nWhen you assign x[1] to x, you are assigning a reference to the list [1, 2] to x.\nSo x and x[1] are both references to the same list.\nWhen you change the list through one of the references, the change appears in the other reference as well. This is called aliasing.\n\n\nx = x[1:] = [1, 2] # Output: [1, 1, 2]\n\nThe reason is that the list is copied, not referenced.\n\nx[1:] is slice of the list [1, 2].\nThe slice is a copy of the list, not a reference to the list.\nWhen you assign x[1:] to x, you are assigning a copy of the list [1, 2] to x.\nSo x and x[1:] are both copies of the same list.\nWhen you change the list through one of the references, the change does not appear in the other reference.\n\n" ]
[ 3, 0, 0 ]
[]
[]
[ "infinite_loop", "python", "variable_assignment" ]
stackoverflow_0074620364_infinite_loop_python_variable_assignment.txt
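The identity claims in the answers are easy to verify directly:
x = x[1] = [1, 2]
print(x[1] is x)     # True: the list's second element is the list itself

y = y[1:] = [1, 2]
print(y, y[1] is y)  # [1, 1, 2] False: slice assignment copied the elements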
Q: How to separate the date_time column by days from a dataframe I have a dataframe and I need to find the most-accessed hour of each day. I think I need to do some for loops to store the values and then find the most-accessed hour. My code until now is: df['date_time'] = pd.to_datetime(df['date_time']) This returns: 0 2022-11-24 19:18:37 1 2022-11-25 00:45:35 2 2022-11-25 00:48:01 3 2022-11-25 00:59:38 4 2022-11-25 01:01:07 ... 890 2022-11-29 20:55:13 891 2022-11-29 20:55:33 892 2022-11-29 20:56:30 893 2022-11-29 20:57:01 894 2022-11-29 21:06:27 Name: date_time, Length: 895, dtype: datetime64[ns] This dataframe has 7 days of data. I need to find the most-accessed hour for every day, and for the week. I tried using some for loops but I don't know much about Pandas. I will read more of the documentation, but maybe someone knows how to solve this problem. Thanks A: I would suggest to group by day and take mode on hours: df['date_time'] = pd.to_datetime(df['date_time']) df['date'] = df['date_time'].dt.day df['hour'] = df['date_time'].dt.hour df_groupped = df.groupby(df['date'])['hour'].agg(pd.Series.mode) A: You can use pandas.DataFrame.groupby with .dt accessors. Try this : df_day = ( df.assign(hour= "Hour " + df['date_time'].dt.hour.astype(str), week= "Week" + df['date_time'].dt.isocalendar().week.astype(str)) .groupby("week", as_index=False).agg(MostAccesHour= ("hour", lambda x: x.value_counts().index[0])) ) df_week= ( df.assign(hour= "Hour " + df['date_time'].dt.hour.astype(str), day= df['date_time'].dt.date) .groupby("day", as_index=False).agg(MostAccesHour= ("hour", lambda x: x.value_counts().index[0])) ) # Output : print(df_week) day MostAccesHour 0 2022-11-24 Hour 19 1 2022-11-25 Hour 0 2 2022-11-29 Hour 20 print(df_day) week MostAccesHour 0 Week47 Hour 0 1 Week48 Hour 20
How to separate the date_time column by days from a dataframe
I have a dataframe and I need to find the most-accessed hour of each day. I think I need to do some for loops to store the values and then find the most-accessed hour. My code until now is: df['date_time'] = pd.to_datetime(df['date_time']) This returns: 0 2022-11-24 19:18:37 1 2022-11-25 00:45:35 2 2022-11-25 00:48:01 3 2022-11-25 00:59:38 4 2022-11-25 01:01:07 ... 890 2022-11-29 20:55:13 891 2022-11-29 20:55:33 892 2022-11-29 20:56:30 893 2022-11-29 20:57:01 894 2022-11-29 21:06:27 Name: date_time, Length: 895, dtype: datetime64[ns] This dataframe has 7 days of data. I need to find the most-accessed hour for every day, and for the week. I tried using some for loops but I don't know much about Pandas. I will read more of the documentation, but maybe someone knows how to solve this problem. Thanks
[ "I would suggest to group by day and take mode on hours:\ndf['date_time'] = pd.to_datetime(df['date_time'])\ndf['date'] = df['date_time'].dt.day\ndf['hour'] = df['date_time'].dt.hour\ndf_groupped = df.groupby(df['date'])['hour'].agg(pd.Series.mode)\n\n", "You can use pandas.DataFrame.groupby with .dt accessors.\nTry this :\ndf_day = (\n df.assign(hour= \"Hour \" + df['date_time'].dt.hour.astype(str),\n week= \"Week\" + df['date_time'].dt.isocalendar().week.astype(str))\n .groupby(\"week\", as_index=False).agg(MostAccesHour= (\"hour\", lambda x: x.value_counts().index[0]))\n )\n \ndf_week= (\n df.assign(hour= \"Hour \" + df['date_time'].dt.hour.astype(str),\n day= df['date_time'].dt.date)\n .groupby(\"day\", as_index=False).agg(MostAccesHour= (\"hour\", lambda x: x.value_counts().index[0]))\n )\n\n# Output :\nprint(df_week)\n\n day MostAccesHour\n0 2022-11-24 Hour 19\n1 2022-11-25 Hour 0\n2 2022-11-29 Hour 20\n\nprint(df_day)\n\n week MostAccesHour\n0 Week47 Hour 0\n1 Week48 Hour 20\n\n" ]
[ 1, 1 ]
[]
[]
[ "dataframe", "datetime", "pandas", "python" ]
stackoverflow_0074620285_dataframe_datetime_pandas_python.txt
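A compact alternative to the answers, giving both the busiest hour per day and over the whole dataset; it assumes df['date_time'] has already been converted with pd.to_datetime as in the question:
hourly = df.groupby([df["date_time"].dt.date, df["date_time"].dt.hour]).size()
print(hourly.groupby(level=0).idxmax())   # (day, busiest hour) pairs, one per day
print(df["date_time"].dt.hour.mode()[0])  # busiest hour across all days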
Q: nested dictionary from nested lists as follows (python) Been struggling with this issue, so I hope to get some help. Take the following lists: Buildings = ['nr1','nr2','nr3'] offices = [1,3,2] area=[23,[67,77,94],[78,79]] price=[45,[43,89,56],[54,53]] employees=[56,[45,54,78],[56,89]] I would like to create the following dictionary { {'build nr1': { 'office 1': {'area' : '23 kvm', 'price' : 45 'employees' : 56} } } {'build nr2': { 'office 1': {'area' : '67 kvm', 'price ': 43 'employees' : 45} } { 'office 2': {'area' : '77 kvm', 'price' : 89 'employees' : 54} } { 'office 3': {'area' : '94 kvm', 'price' : 56 'employees' : 78} } } {'build nr3': { 'office 1': {'area' : '78 kvm', 'price' : 54 employees : 56} } { 'office 2': {'area' : '79 kvm', 'price' : 53 'employees' : 89} } } } The main goal is to create a dataframe, most probably with sub-columns: "Buildings nr" as index; "area", "price" and "employees" as columns; and office 1,2,3 sub-columns depending on whether the building on the index has 3 or fewer/more offices ... A: Maybe this is the code you are looking for: import json Buildings = ['nr1','nr2','nr3'] offices = [1,3,2] area=[23,[67,77,94],[78,79]] price=[45,[43,89,56],[54,53]] employees=[56,[45,54,78],[56,89]] myDict = dict() for building, office in zip(Buildings, offices): myDict[f"Build {building}"] = dict() for i in range(office): myDict[f"Build {building}"][f"office {i+1}"] = dict() i = 0 for ar in area: i += 1 j = 0 if isinstance(ar, int): myDict[f"Build nr{i}"][f"office {j + 1}"]["area"] = f"{ar} kvm" if isinstance(ar, list): for element in ar: myDict[f"Build nr{i}"][f"office {j + 1}"]["area"] = f"{element} kvm" j += 1 k = 0 for pr in price: k += 1 j = 0 if isinstance(pr, int): myDict[f"Build nr{k}"][f"office {j + 1}"]["price"] = pr if isinstance(pr, list): for element in pr: myDict[f"Build nr{k}"][f"office {j + 1}"]["price"] = element j += 1 l = 0 for emp in employees: l += 1 j = 0 if isinstance(emp, int): myDict[f"Build nr{l}"][f"office {j + 1}"]["employees"] = emp if isinstance(emp, list): for element in emp: myDict[f"Build nr{l}"][f"office {j + 1}"]["employees"] = element j += 1 print(json.dumps(myDict, indent = 4)) which prints: { "Build nr1": { "office 1": { "area": "23 kvm", "price": 45, "employees": 56 } }, "Build nr2": { "office 1": { "area": "67 kvm", "price": 43, "employees": 45 }, "office 2": { "area": "77 kvm", "price": 89, "employees": 54 }, "office 3": { "area": "94 kvm", "price": 56, "employees": 78 } }, "Build nr3": { "office 1": { "area": "78 kvm", "price": 54, "employees": 56 }, "office 2": { "area": "79 kvm", "price": 53, "employees": 89 } } }
nested dictionary from nested lists as follows (python)
Been struggling with this issue, so I hope to get some help. Take the following lists: Buildings = ['nr1','nr2','nr3'] offices = [1,3,2] area=[23,[67,77,94],[78,79]] price=[45,[43,89,56],[54,53]] employees=[56,[45,54,78],[56,89]] I would like to create the following dictionary { {'build nr1': { 'office 1': {'area' : '23 kvm', 'price' : 45 'employees' : 56} } } {'build nr2': { 'office 1': {'area' : '67 kvm', 'price ': 43 'employees' : 45} } { 'office 2': {'area' : '77 kvm', 'price' : 89 'employees' : 54} } { 'office 3': {'area' : '94 kvm', 'price' : 56 'employees' : 78} } } {'build nr3': { 'office 1': {'area' : '78 kvm', 'price' : 54 employees : 56} } { 'office 2': {'area' : '79 kvm', 'price' : 53 'employees' : 89} } } } The main goal is to create a dataframe, most probably with sub-columns: "Buildings nr" as index; "area", "price" and "employees" as columns; and office 1,2,3 sub-columns depending on whether the building on the index has 3 or fewer/more offices ...
[ "Maybe this is the code you are looking for:\nimport json\n\n\nBuildings = ['nr1','nr2','nr3']\noffices = [1,3,2]\narea=[23,[67,77,94],[78,79]]\nprice=[45,[43,89,56],[54,53]]\nemployees=[56,[45,54,78],[56,89]]\n\nmyDict = dict()\n\nfor building, office in zip(Buildings, offices):\n myDict[f\"Build {building}\"] = dict()\n for i in range(office):\n myDict[f\"Build {building}\"][f\"office {i+1}\"] = dict()\n\ni = 0\nfor ar in area:\n i += 1\n j = 0\n if isinstance(ar, int):\n myDict[f\"Build nr{i}\"][f\"office {j + 1}\"][\"area\"] = f\"{ar} kvm\"\n if isinstance(ar, list):\n for element in ar:\n myDict[f\"Build nr{i}\"][f\"office {j + 1}\"][\"area\"] = f\"{element} kvm\"\n j += 1\nk = 0\nfor pr in price:\n k += 1\n j = 0\n if isinstance(pr, int):\n myDict[f\"Build nr{k}\"][f\"office {j + 1}\"][\"price\"] = pr\n if isinstance(pr, list):\n for element in pr:\n myDict[f\"Build nr{k}\"][f\"office {j + 1}\"][\"price\"] = element\n j += 1\nl = 0\nfor emp in employees:\n l += 1\n j = 0\n if isinstance(emp, int):\n myDict[f\"Build nr{l}\"][f\"office {j + 1}\"][\"employees\"] = emp\n if isinstance(emp, list):\n for element in emp:\n myDict[f\"Build nr{l}\"][f\"office {j + 1}\"][\"employees\"] = element\n j += 1\n\nprint(json.dumps(myDict, indent = 4))\n\nwhich prints:\n{\n \"Build nr1\": {\n \"office 1\": {\n \"area\": \"23 kvm\",\n \"price\": 45,\n \"employees\": 56\n }\n },\n \"Build nr2\": {\n \"office 1\": {\n \"area\": \"67 kvm\",\n \"price\": 43,\n \"employees\": 45\n },\n \"office 2\": {\n \"area\": \"77 kvm\",\n \"price\": 89,\n \"employees\": 54\n },\n \"office 3\": {\n \"area\": \"94 kvm\",\n \"price\": 56,\n \"employees\": 78\n }\n },\n \"Build nr3\": {\n \"office 1\": {\n \"area\": \"78 kvm\",\n \"price\": 54,\n \"employees\": 56\n },\n \"office 2\": {\n \"area\": \"79 kvm\",\n \"price\": 53,\n \"employees\": 89\n }\n }\n}\n\n" ]
[ 0 ]
[]
[]
[ "dictionary", "nested_lists", "python" ]
stackoverflow_0074620205_dictionary_nested_lists_python.txt
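A more compact sketch than the counter-based answer, normalizing each field to a list first so single offices and lists of offices are handled uniformly (it reuses the question's variables, and the offices counts become implicit in the list lengths):
def as_list(v):
    return v if isinstance(v, list) else [v]

result = {
    f"build {b}": {
        f"office {i + 1}": {"area": f"{a} kvm", "price": p, "employees": e}
        for i, (a, p, e) in enumerate(zip(as_list(ar), as_list(pr), as_list(em)))
    }
    for b, ar, pr, em in zip(Buildings, area, price, employees)
}
pandas.DataFrame.from_dict(result, orient="index") then gets close to the desired table, with offices as columns.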
Q: python pymssql returns list comma-separated, how to change the separator to e.g. a pipe? I checked the pymssql documentation for parameters but I couldn't find what I was looking for. Basically, when I execute the cursor with my SQL query, I always receive the list as comma-separated. I'd like to change the comma to another separator since some name fields contain commas. Is there a way to do that? This is what I have: cnxn = pymssql.connect( server=db_server, port=12345, database='DataBase', user=username, password=password) cursor = cnxn.cursor(as_dict=False) #sql_statement = "SELECT Name FROM table" cursor.execute(sql_statement) rows=list(cursor.fetchall()) print(rows) Output: [('Name,1', '20221110'), ('Name2', '20221115')] What I need: [('Name,1'|'20221110'), ('Name2'|'20221115')]
python pymssql returns list comma-separated, how to change the separator to e.g. a pipe?
I checked the pymssql documentation for parameters but I couldn't find what I was looking for. Basically, when I execute the cursor with my SQL query, I always receive the list as comma-separated. I'd like to change the comma to another separator since some name fields contain commas. Is there a way to do that? This is what I have: cnxn = pymssql.connect( server=db_server, port=12345, database='DataBase', user=username, password=password) cursor = cnxn.cursor(as_dict=False) #sql_statement = "SELECT Name FROM table" cursor.execute(sql_statement) rows=list(cursor.fetchall()) print(rows) Output: [('Name,1', '20221110'), ('Name2', '20221115')] What I need: [('Name,1'|'20221110'), ('Name2'|'20221115')]
[ "I'm making an assumption based on your comment about the need for a non-comma separator to avoid conflicts with the commas in your fields that you would be okay with something like this:\nrows = [('Name,1', '20221110'), ('Name2', '20221115')]\nfor r in rows:\n print(\"|\".join(r))\n\nOutput:\nName,1|20221110\nName2|20221115\n\njoin creates a single concatenated string from a collection of strings that are separated by the specified string.\n" ]
[ 0 ]
[]
[]
[ "pymssql", "python" ]
stackoverflow_0074620520_pymssql_python.txt
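If the end goal is a pipe-delimited file rather than a printed string, the standard csv module handles the quoting edge cases; the output filename is an assumption:
import csv

with open("out.txt", "w", newline="") as f:
    writer = csv.writer(f, delimiter="|")
    writer.writerows(rows)  # rows exactly as returned by cursor.fetchall()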
Q: How to concatenate 2 dict in Python with concat I have 2 dict objects with the same key but different elements.I would like to merge them into one dict. First, I used append and it works but append is deprecated so that I prefer to use concat. here is the code : data1 = {'a':1, 'b':2} data2 = {'a':3, 'b':4} list = [data1, data2] df = pd.DataFrame() for x in range(len(list)): df = df.append(list[x], ignore_index=True) df The code below works with append. In my case I would like to have concat Maybe you can help. Thanks A: The following code works but may not be very efficient: data1 = {'a':1, 'b':2} data2 = {'a':3, 'b':4} list = [data1, data2] df = pd.concat([pd.DataFrame(list[i], index=[i]) for i in range(len(list))]) print(df) A: Here is a proposition using pandas.concat with pandas.DataFrame.unstack : list_of_dicts = [data1, data2] df= ( pd.concat(dict(enumerate(map(pd.Series, list_of_dicts)))) .unstack() ) # Output : print(df) a b 0 1 2 1 3 4 NB : Try to avoid naming your variables with python built-in function names (list, dict, ..)
How to concatenate 2 dict in Python with concat
I have 2 dict objects with the same key but different elements.I would like to merge them into one dict. First, I used append and it works but append is deprecated so that I prefer to use concat. here is the code : data1 = {'a':1, 'b':2} data2 = {'a':3, 'b':4} list = [data1, data2] df = pd.DataFrame() for x in range(len(list)): df = df.append(list[x], ignore_index=True) df The code below works with append. In my case I would like to have concat Maybe you can help. Thanks
[ "The following code works but may not be very efficient:\ndata1 = {'a':1, 'b':2}\ndata2 = {'a':3, 'b':4}\n\nlist = [data1, data2]\ndf = pd.concat([pd.DataFrame(list[i], index=[i]) for i in range(len(list))])\n\nprint(df)\n\n", "Here is a proposition using pandas.concat with pandas.DataFrame.unstack :\nlist_of_dicts = [data1, data2]\n\ndf= (\n pd.concat(dict(enumerate(map(pd.Series,\n list_of_dicts))))\n .unstack()\n )\n\n# Output :\nprint(df)\n\n a b\n0 1 2\n1 3 4\n\nNB : Try to avoid naming your variables with python built-in function names (list, dict, ..)\n" ]
[ 0, 0 ]
[]
[]
[ "concatenation", "dataframe", "pandas", "python", "python_3.x" ]
stackoverflow_0074620471_concatenation_dataframe_pandas_python_python_3.x.txt
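For this particular input shape there is an even shorter route: the DataFrame constructor accepts a list of dicts directly, so no loop, append or concat is needed:
import pandas as pd

data1 = {"a": 1, "b": 2}
data2 = {"a": 3, "b": 4}
df = pd.DataFrame([data1, data2])  # columns a and b, one row per dict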
Q: script that checks if another script has an error, then kills and restarts screen and script I have a matlab script on a cluster that pings an API, but after a couple of hours (2-4h), the script will have an unexpected error which we believe comes from pinging the API too many times. I want to create a script (not sure if it should be a .sh or .py) that essentially monitors the matlab script and once it reads that it crashed, it will kill the screen, make a new screen, and run the matlab script again. I'm currently doing this restart manually, but it is very inefficient.
script that checks if another script has an error, then kills and restarts screen and script
I have a matlab script on a cluster that pings an API, but after a couple of hours (2-4h), the script will have an unexpected error which we believe comes from pinging the API too many times. I want to create a script (not sure if it should be a .sh or .py) that essentially monitors the matlab script and once it reads that it crashed, it will kill the screen, make a new screen, and run the matlab script again. I'm currently doing this restart manually, but it is very inefficient.
[]
[]
[ "Try saving it all as a text by doing\ndef text():\n#your code here\nwhile True: text()\n\nwhat that does is repeats your code even when it stops or breaks.\nalso you could try doing an if function: if it breaks, start again\nthanks!\n" ]
[ -1 ]
[ "crash", "matlab", "python", "shell", "try_catch" ]
stackoverflow_0074620521_crash_matlab_python_shell_try_catch.txt
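A minimal Python watchdog along the lines the question asks for: run the script as a child process and restart it whenever it exits. The MATLAB invocation below is an assumption (-batch exists only in newer MATLAB releases), and running this watchdog itself under screen or nohup removes the need to recreate screens:
import subprocess
import time

cmd = ["matlab", "-batch", "my_script"]  # hypothetical command line
while True:
    code = subprocess.call(cmd)  # blocks until the script exits or crashes
    print(f"script exited with code {code}; restarting in 10 s")
    time.sleep(10)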
Q: Unable to install Twine I am running on a Raspberry PI OS uname -a Linux gus 5.10.103-v7l+ #1529 SMP Tue Mar 8 12:24:00 GMT 2022 armv7l GNU/Linux I have built my (Python) wheel. I am attempting to publish to testPyPI. I cannot install Twine - because the cryptography module keeps failing. From my Googling, it's failing because it's packaged as a SDIST ...which means I need to compile it. Sadly, it is written in Rust. I tried two things: 1 set this environment variable folks were pointing out worked (I think to bypass and just get a wheel) export CRYPTOGRAPHY_DONT_BUILD_RUST=1 But this didn't do anything. So I installed rustc The latest is restc 1.41.1. BUT - NOW I get =============================DEBUG ASSISTANCE============================= error: Rust 1.41.1 does not match extension requirement >=1.48.0 How can I get twine to install? NOTE: I have the latest version of PIP installed pip 22.3.1. Thank you A: Well...there was no wheel i could find so i finally got the build working. It took about a day.
Unable to install Twine
I am running on a Raspberry PI OS uname -a Linux gus 5.10.103-v7l+ #1529 SMP Tue Mar 8 12:24:00 GMT 2022 armv7l GNU/Linux I have built my (Python) wheel. I am attempting to publish to testPyPI. I cannot install Twine - because the cryptography module keeps failing. From my Googling, it's failing because it's packaged as a SDIST ...which means I need to compile it. Sadly, it is written in Rust. I tried two things: 1 set this environment variable folks were pointing out worked (I think to bypass and just get a wheel) export CRYPTOGRAPHY_DONT_BUILD_RUST=1 But this didn't do anything. So I installed rustc The latest is restc 1.41.1. BUT - NOW I get =============================DEBUG ASSISTANCE============================= error: Rust 1.41.1 does not match extension requirement >=1.48.0 How can I get twine to install? NOTE: I have the latest version of PIP installed pip 22.3.1. Thank you
[ "Well...there was no wheel i could find so i finally got the build working. It took about a day.\n" ]
[ 0 ]
[]
[]
[ "cryptography", "linux", "python", "raspberry_pi", "twine" ]
stackoverflow_0074606027_cryptography_linux_python_raspberry_pi_twine.txt
Q: Python Pandas - Can you use .loc and ignore indexes? I am trying to replace a string found in a column with file1_backup_df.loc[file1_backup_df['CustName'].str.contains('bbb', case=False), 'CustomerName'] = 'Big Boy Booty' Now the above works on a single dataframe (file1_backup_df). But I am combining dataframes like this; frames = [add_backup_name(), file1_backup_df] final_df = pd.concat(frames) I'd like to perform the very first line of code on final_df. But I can't. It grumbles about __setitem__ indexer = self._get_setitem_indexer(key)`. ValueError: Cannot mask with non-boolean array containing NA / NaN value Is there a way to replace strings in a column of my combined df? I tried this but no go; pd.concat(frames, ignore_index=True) EDIT Looks like this may have done it. Testing. https://www.statology.org/cannot-mask-with-non-boolean-array-containing-na-nan-values/#:~:text=2022%20by%20Zach-,How%20to%20Fix%3A%20ValueError%3A%20Cannot%20mask%20with%20non%2Dboolean,array%20containing%20NA%20%2F%20NaN%20values&text=This%20error%20usually%20occurs%20when,searching%20in%20has%20NaN%20values. A: It seems that the column CustomerName holds some NaN values, so you need to set na=False in pandas.Series.str.contains : Try this : final_df.loc[final_df['CustName'].str.contains('bbb', case=False, na=False), 'CustomerName'] = 'Big Boy Booty'
Python Pandas - Can you use .loc and ignore indexes?
I am trying to replace a string found in a column with file1_backup_df.loc[file1_backup_df['CustName'].str.contains('bbb', case=False), 'CustomerName'] = 'Big Boy Booty' Now the above works on a single dataframe (file1_backup_df). But I am combining dataframes like this; frames = [add_backup_name(), file1_backup_df] final_df = pd.concat(frames) I'd like to perform the very first line of code on final_df. But I can't. It grumbles about __setitem__ indexer = self._get_setitem_indexer(key)`. ValueError: Cannot mask with non-boolean array containing NA / NaN value Is there a way to replace strings in a column of my combined df? I tried this but no go; pd.concat(frames, ignore_index=True) EDIT Looks like this may have done it. Testing. https://www.statology.org/cannot-mask-with-non-boolean-array-containing-na-nan-values/#:~:text=2022%20by%20Zach-,How%20to%20Fix%3A%20ValueError%3A%20Cannot%20mask%20with%20non%2Dboolean,array%20containing%20NA%20%2F%20NaN%20values&text=This%20error%20usually%20occurs%20when,searching%20in%20has%20NaN%20values.
[ "It seems that the column CustomerName holds some NaN values, so you need to set na=False in pandas.Series.str.contains :\nTry this :\nfinal_df.loc[final_df['CustName'].str.contains('bbb', case=False, na=False), 'CustomerName'] = 'Big Boy Booty'\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074620527_pandas_python.txt
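A self-contained demonstration of the accepted fix, with made-up rows showing why the NaN entry no longer breaks the mask:
import pandas as pd

final_df = pd.DataFrame({"CustName": ["bbb corp", None, "other"],
                         "CustomerName": ["", "", ""]})
mask = final_df["CustName"].str.contains("bbb", case=False, na=False)
final_df.loc[mask, "CustomerName"] = "Big Boy Booty"
print(final_df)  # only the "bbb corp" row is updated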
Q: base.html only showing the context data in the home page I have a ListView for my homepage that displays extra data using the get_context_data method. It works, but only in the url of the HomeView, the homepage, not in other templates after I extend the base.html file. Everything else in base appears, the only thing that doesn't is the context data. HomeView class HomeView(ListView): model = Product context_object_name='products' template_name = 'main/home.html' def get_context_data(self, **kwargs): context = super().get_context_data(**kwargs) news = News.objects.all() ... context.update({ 'news' : news, ... }) return context base.html {% load static %} <body> <div class="navbar"> <a id="title" href="{% url 'home' %}">home</a> </div> ... <div class="side-bar"> <div class="article"> <h3>News</h3> {% for new in news %} <p>{{ new.title }}</p> {% endfor %} <a href="{% url 'news' %}"><p>See more</p></a> </div> </div> {% block content %}{% endblock %} ... </body> home.html {% extends 'main/base.html' %} {% block content %} <div> {% for product in products %} <p>Some text..</p> {% endfor %} {% endblock content %} Does this mean that I have to add a get_context_data method to every single view I have? Isn't that too repetitive and hard to change? A: Does this mean that I have to add a get_context_data method to every single view I have? No, you don't have to do that. HomeView.get_context_data(...) is specific to the home page. In your example, it looks like you want to show the news in all pages (all pages that use the base.html template). In that case, I would recommend to use a templatetag to load the news. See: https://docs.djangoproject.com/en/4.1/howto/custom-template-tags/ You would do something like: my_app/templatetags/news_tags.py from django import template register = template.Library() @register.inclusion_tag('includes/latest_news.html') def latest_news(context): return { 'news': News.objects.order_by("-published_at")[0:25], } base.html: {% load static %} {% load news_tags %} <body> <div class="navbar"> <a id="title" href="{% url 'home' %}">home</a> </div> ... <div class="side-bar"> {% latest_news %} </div> {% block content %}{% endblock %} ... </body> Tip: restart runserver after adding a new templatetag file for it to be found.
base.html only showing the context data in the home page
I have a ListView for my homepage that displays extra data using the get_context_data method. It works, but only in the url of the HomeView, the homepage, not in other templates after I extend the base.html file. Everything else in base appears, the only thing that doesn't is the context data. HomeView class HomeView(ListView): model = Product context_object_name='products' template_name = 'main/home.html' def get_context_data(self, **kwargs): context = super().get_context_data(**kwargs) news = News.objects.all() ... context.update({ 'news' : news, ... }) return context base.html {% load static %} <body> <div class="navbar"> <a id="title" href="{% url 'home' %}">home</a> </div> ... <div class="side-bar"> <div class="article"> <h3>News</h3> {% for new in news %} <p>{{ new.title }}</p> {% endfor %} <a href="{% url 'news' %}"><p>See more</p></a> </div> </div> {% block content %}{% endblock %} ... </body> home.html {% extends 'main/base.html' %} {% block content %} <div> {% for product in products %} <p>Some text..</p> {% endfor %} {% endblock content %} Does this mean that I have to add a get_context_data method to every single view I have? Isn't that too repetitive and hard to change?
[ "\nDoes this mean that I have to add a get_context_data method to every single view I have?\n\nNo, you don't have to do that. HomeView.get_context_data(...) is specific to the home page.\nIn your example, it looks like you want to show the news in all pages (all pages that use the base.html template). In that case, I would recommend to use a templatetag to load the news.\nSee: https://docs.djangoproject.com/en/4.1/howto/custom-template-tags/\n\nYou would do something like:\nmy_app/templatetags/news_tags.py\nfrom django import template\n\nregister = template.Library()\n\[email protected]_tag('includes/latest_news.html')\ndef latest_news(context):\n return {\n 'news': News.objects.order_by(\"-published_at\")[0:25],\n }\n\nbase.html:\n{% load static %}\n{% load news_tags %}\n\n<body>\n <div class=\"navbar\">\n <a id=\"title\" href=\"{% url 'home' %}\">home</a>\n </div>\n\n ...\n\n <div class=\"side-bar\">\n {% latest_news %}\n </div>\n {% block content %}{% endblock %}\n\n ...\n\n</body>\n\n\nTip: restart runserver after adding a new templatetag file for it to be found.\n" ]
[ 0 ]
[]
[]
[ "django", "django_templates", "django_views", "python" ]
stackoverflow_0074620511_django_django_templates_django_views_python.txt
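A context processor is another standard Django way to get the same sidebar data into every template; a hedged sketch (the module path is an assumption, ordering by -id stands in for whatever publication field News has, and the function must be listed under TEMPLATES['OPTIONS']['context_processors'] in settings.py):
# my_app/context_processors.py
from my_app.models import News

def latest_news(request):
    return {"news": News.objects.order_by("-id")[:25]}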
Q: How to count integers within a tuple? So I have a list of tuples which looks like: [(1, 60), (1, 93), (1, 104), (1, 145), (1, 159), (4, 20), (4, 30), (4, 103), (8, 8), (9, 35), (9, 172), (9, 191), (10, 33), (10, 164), (10, 185)] However, the numbers on the left side of the tuple should all be unique. So I would like to have something like this: [(1, 60), (4, 20), (8, 8), (9, 35), (10, 33)] I tried to make some unique functions in order to filter them out. But for example the count function does not work for integers. A: Each tuple in your list has two elements. Let's call the first one a "key". We're going to create an empty list to fill with the tuples we want. Let's also create a set (already_added) containing the keys we have already added. For each tuple, we need to check if the "key" exists in already_added, and only add it to our result if it doesn't. lst = [(1, 60), (1, 93), (1, 104), (1, 145), (1, 159), (4, 20), (4, 30), (4, 103), (8, 8), (9, 35), (9, 172), (9, 191), (10, 33), (10, 164), (10, 185)] result = [] already_added = set() for item in lst: if item[0] not in already_added: result.append(item) already_added.add(item[0]) This gives us the following result: [(1, 60), (4, 20), (8, 8), (9, 35), (10, 33)] A: test = [(1, 60), (1, 93), (1, 104), (1, 145), (1, 159), (4, 20), (4, 30), (4, 103), (8, 8), (9, 35), (9, 172), (9, 191), (10, 33), (10, 164), (10, 185)] seen = set() print([x for x in test if x[0] not in seen and not seen.add(x[0])]) >>> [(1, 60), (4, 20), (8, 8), (9, 35), (10, 33)] A: This is my simple solution: x=[(1, 60), (1, 93), (1, 104), (1, 145), (1, 159), (4, 20), (4, 30), (4, 103), (8, 8), (9, 35), (9, 172), (9, 191), (10, 33), (10, 164), (10, 185)] x.reverse() l={i:(i,j) for i,j in x} filter_list = list(l.values()) filter_list.reverse() Since you care about the first occurrence, we revert the list to add the elements to a dictionary, since we use the first integer as a key, the elements will be overwritten and the last one to appear will be your first occurrence, then we keep the values and revert the list again. A: Use a dictionary to with key-value pairs the unique identifier of the tuple and the tuple itself as the value. lst = # from above # if ordering is important lst = sorted(lst, key=lambda p: (p[0], -p[1])) # grant unicity in 1st entry (take latest entry for each repeated key) lst = {x: (x, y) for x, y in lst} res = list(lst.values())
Q: How do I create a "package.json" file? Git clone repository is missing a package.json file

I am new to coding. I have cloned this GitHub repository https://github.com/TribeZOAGit/zoa_ussd and it happens to be missing the package.json file, so how do I create a package.json file for the already existing project? I can't tell which dependencies it needs. The npm init command creates the package.json file, but with no dependencies.

After I cloned the repo, npm start threw the error "Missing script start". Then I found that the package.json file was missing. npm init was meant to create the file, but the packages were not in it, and neither were the dependencies nor the scripts. How do I address this issue? Thanks

A: This is a Python repository, not JavaScript... requirements.txt is pip's version of npm's package.json.

See what is PIP.
See here for information on installing the necessary packages using pip.
Or this Stack answer.
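A hedged sketch of the pip workflow the answer points to (assuming the cloned project ships a requirements.txt, which Django projects usually do; the file and folder names here are conventions, not confirmed from the repo):

# the pip equivalent of "npm install"
python -m venv .venv
source .venv/bin/activate        # on Windows: .venv\Scripts\activate
pip install -r requirements.txt

# the rough equivalent of generating package.json from an existing environment
pip freeze > requirements.txt

A Django project is then typically started with python manage.py runserver rather than npm start.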
Q: How do I redirect to the created page after I submit the form in Django

I'm trying to redirect to the created page after I've filled out and submitted a form. I have gotten it to work on the update form but not the create form. How do I do this? Here's what I have so far. Let me know if you need more details and code.

views.py

@login_required(login_url='login')
def createRoom(request):
    form = RoomForm()
    topics = Topic.objects.all()
    if request.method == 'POST':
        topic_name = request.POST.get('topic')
        topic, created = Topic.objects.get_or_create(name=topic_name)

        Room.objects.create(
            host=request.user,
            topic=topic,
            name=request.POST.get('name'),
            assigned=request.user,
            status=request.POST.get('status'),
            priority=request.POST.get('priority'),
            type=request.POST.get('type'),
            description=request.POST.get('description'),
        )
        return render('room', pk=room.id)

    context = {'form': form, 'topics': topics, 'room': room}
    return render(request, 'room/room_form.html', context)

But this throws this error traceback:

Traceback (most recent call last):
  File "C:\Users\mikha\issue_env\lib\site-packages\django\core\handlers\exception.py", line 55, in inner
    response = get_response(request)
  File "C:\Users\mikha\issue_env\lib\site-packages\django\core\handlers\base.py", line 197, in _get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "C:\Users\mikha\issue_env\lib\site-packages\django\contrib\auth\decorators.py", line 23, in _wrapped_view
    return view_func(request, *args, **kwargs)
  File "C:\Users\mikha\issuetracker\base\views.py", line 68, in createRoom
    return render('room', pk=room.id)

Exception Type: AttributeError at /create-room/
Exception Value: 'function' object has no attribute 'id'

A: While you have created a new Room object, you haven't assigned it to room. Try

room = Room.objects.create(

A: Your createRoom function should look like this:

@login_required(login_url='login')
def createRoom(request):
    form = RoomForm()
    topics = Topic.objects.all()
    if request.method == 'POST':
        topic_name = request.POST.get('topic')
        topic, created = Topic.objects.get_or_create(name=topic_name)

        room = Room.objects.create(
            host=request.user,
            topic=topic,
            name=request.POST.get('name'),
            assigned=request.user,
            status=request.POST.get('status'),
            priority=request.POST.get('priority'),
            type=request.POST.get('type'),
            description=request.POST.get('description'),
        )
        room.save()
        return redirect("created-room-view-function")

    context = {'form': form, 'topics': topics, 'room': room}
    return render(request, 'room/room_form.html', context)
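A hedged sketch of the wiring the answers rely on (the URL name 'room' and its <int:pk> converter are assumptions inferred from redirect('room', pk=room.id), not confirmed by the question; adjust to your actual urls.py):

# urls.py
from django.urls import path
from . import views

urlpatterns = [
    path('room/<int:pk>/', views.room, name='room'),
]

# views.py - redirect (not render) after a successful create
from django.shortcuts import redirect

room = Room.objects.create(...)
return redirect('room', pk=room.id)

render() builds a template response and does not accept a URL name, which is why the original return render('room', pk=room.id) could never work as a redirect even once room is assigned.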
Q: What is ceil("1d") with reference to timedeltas in Python

I have a function I have come across:

def sub_kpi1_rule(sub_comp_appr_date, date_report_run, greater_of_date_sub_sub_ll):
    if pd.isnull(sub_comp_appr_date) and not alive_for_six_days(date_report_run, greater_of_date_sub_sub_ll):
        return "NA"
    elif (sub_comp_appr_date - greater_of_date_sub_sub_ll).ceil("1d").days <= 6:
        return "PASS"
    else:
        return "FAIL"

All arguments are date types, so I am assuming:

(sub_comp_appr_date - greater_of_date_sub_sub_ll)

...returns a timedelta instance. I am confused about this ceil() syntax:

elif (sub_comp_appr_date - greater_of_date_sub_sub_ll).ceil("1d").days <= 6:

because if I try to subtract two dates and use this function, I get an error:

from datetime import date
a = date(2022, 10, 1)
b = date(2022, 10, 15)
(b - a).ceil("1d")

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In [36], line 4
      2 a = date(2022, 10,1)
      3 b = date(2022, 10,15)
----> 4 (b-a).ceil("1d")

AttributeError: 'datetime.timedelta' object has no attribute 'ceil'

The function is called on a dataframe:

df["Sub KPI1"] = df.apply(lambda x: sub_kpi1_rule(x["SUBM_LATE_DESTINATION_COMPLIANCE_APPROVAL_DATE"],
                                                  date_report_run,
                                                  x["Greater of Date Submitted and Submission LL Created"]), axis=1)

I think the types are pd.Timestamp, as date_report_run is explicitly converted to one:

date_report_run = date(year_report_run, month_report_run, day_report_run)
date_report_run = pd.Timestamp(date_report_run)

I am guessing I am getting a pandas._libs.tslibs.timedeltas.Timedelta back rather than a normal timedelta.

A: You get the error because you're calling .ceil() on the datetime.timedelta class in your example. You need the pandas class for this (pandas._libs.tslibs.timedeltas.Timedelta):

delta = pd.Timedelta(4, "d")

Now, what does this .ceil("1d") do? If you have a delta with hours and minutes, it rounds the delta up to a whole number of days. E.g.:

date1 = pd.to_datetime(1490195805, unit='s')
date2 = pd.to_datetime(1490597688, unit='s')
date2 - date1

Timedelta('4 days 15:38:03')

(date2 - date1).ceil("1d")

Timedelta('5 days 00:00:00')

Because there is a nonzero remainder beyond the 4 full days, the delta gets ceiled to 5 days. Note that ceil always rounds up: even a remainder well under 12 hours would still give 5 days. Use .round("1d") instead if you want nearest-day rounding.
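A minimal sketch of how you might adapt the failing stdlib example (my assumption, not part of the original answer): wrap the datetime.timedelta in pd.Timedelta, which accepts it directly and exposes .ceil():

from datetime import date
import pandas as pd

a = date(2022, 10, 1)
b = date(2022, 10, 15)
pd.Timedelta(b - a).ceil("1d").days  # 14

In the dataframe code this wrapping isn't needed, because subtracting two pd.Timestamp objects already yields a pd.Timedelta.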
Q: How to do PGP in Python (generate keys, encrypt/decrypt)

I'm making a program in Python to be distributed to Windows users via an installer. The program needs to be able to download a file every day encrypted with the user's public key and then decrypt it. So I need to find a Python library that will let me generate public and private PGP keys, and also decrypt files encrypted with the public key.

Is this something pyCrypto will do (the documentation is nebulous)? Are there other pure Python libraries? How about a standalone command line tool in any language?

All I saw so far was GNUPG, but installing that on Windows does stuff to the registry and throws DLLs everywhere, and then I have to worry about whether the user already has this installed, how to back up their existing keyrings, etc. I'd rather just have a Python library or command line tool and manage the keys myself.

Update: pyME might work but it doesn't seem to be compatible with Python 2.4, which I have to use.

A: You don't need PyCrypto or PyMe, fine though those packages may be - you will have all kinds of problems building under Windows. Instead, why not avoid the rabbit-holes and do what I did? Use gnupg 1.4.9. You don't need to do a full installation on end-user machines - just gpg.exe and iconv.dll from the distribution are sufficient, and you just need to have them somewhere in the path or accessed from your Python code using a full pathname. No changes to the registry are needed, and everything (executables and data files) can be confined to a single folder if you want.

There's a module GPG.py which was originally written by Andrew Kuchling, improved by Richard Jones and improved further by Steve Traugott. It's available here, but as-is it's not suitable for Windows because it uses os.fork(). Although originally part of PyCrypto, it is completely independent of the other parts of PyCrypto and needs only gpg.exe/iconv.dll in order to work.

I have a version (gnupg.py) derived from Traugott's GPG.py, which uses the subprocess module. It works fine under Windows, at least for my purposes - I use it to do the following:

Key management - generation, listing, export etc.
Import keys from an external source (e.g. public keys received from a partner company)
Encrypt and decrypt data
Sign and verify signatures

The module I've got is not ideal to show right now, because it includes some other stuff which shouldn't be there - which means I can't release it as-is at the moment. At some point, perhaps in the next couple of weeks, I hope to be able to tidy it up, add some more unit tests (I don't have any unit tests for sign/verify, for example) and release it (either under the original PyCrypto licence or a similar commercial-friendly license). If you can't wait, go with Traugott's module and modify it yourself - it wasn't too much work to make it work with the subprocess module.

This approach was a lot less painful than the others (e.g. SWIG-based solutions, or solutions which require building with MinGW/MSYS), which I considered and experimented with. I've used the same (gpg.exe/iconv.dll) approach with systems written in other languages, e.g. C#, with equally painless results.

P.S. It works with Python 2.4 as well as Python 2.5 and later. Not tested with other versions, though I don't foresee any problems.

A: After a LOT of digging, I found a package that worked for me. Although it is said to support the generation of keys, I didn't test it. However, I did manage to decrypt a message that was encrypted using a GPG public key.
The advantage of this package is that it does not require a GPG executable on the machine, and is a Python-based implementation of OpenPGP (rather than a wrapper around the executable). I created the private and public keys using GPG4win and Kleopatra for Windows. See my code below.

import pgpy
emsg = pgpy.PGPMessage.from_file(<path to the file from the client that was encrypted using your public key>)
key, _ = pgpy.PGPKey.from_file(<path to your private key>)
with key.unlock(<your private key passphrase>):
    print(key.decrypt(emsg).message)

Although the question is very old, I hope this helps future users.

A: PyCrypto supports PGP - albeit you should test it to make sure that it works to your specifications. Although documentation is hard to come by, if you look through Util/test.py (the module test script), you can find a rudimentary example of their PGP support:

if verbose: print ' PGP mode:',
obj1 = ciph.new(password, ciph.MODE_PGP, IV)
obj2 = ciph.new(password, ciph.MODE_PGP, IV)
start = time.time()
ciphertext = obj1.encrypt(str)
plaintext = obj2.decrypt(ciphertext)
end = time.time()
if (plaintext != str):
    die('Error in resulting plaintext from PGP mode')
print_timing(256, end-start, verbose)
del obj1, obj2

Furthermore, PublicKey/pubkey.py provides the following relevant methods:

def encrypt(self, plaintext, K)
def decrypt(self, ciphertext):
def sign(self, M, K):
def verify(self, M, signature):
def can_sign(self):
    """can_sign() : bool
    Return a Boolean value recording whether this algorithm can
    generate signatures. (This does not imply that this
    particular key object has the private information required to
    generate a signature.)
    """
    return 1

A: PyMe does claim full compatibility with Python 2.4, and I quote:

The latest version of PyMe (as of this writing) is v0.8.0. Its binary distribution for Debian was compiled with SWIG v1.3.33 and GCC v4.2.3 for GPGME v1.1.6 and Python v2.3.5, v2.4.4, and v2.5.2 (provided in 'unstable' distribution at the time). Its binary distribution for Windows was compiled with SWIG v1.3.29 and MinGW v4.1 for GPGME v1.1.6 and Python v2.5.2 (although the same binary gets installed and works fine in v2.4.2 as well).

I'm not sure why you say "it doesn't seem to be compatible with Python 2.4 which I have to use" -- specifics please? And yes it does exist as a semi-Pythonic (SWIGd) wrapper on GPGME -- that's a popular way to develop Python extensions once you have a C library that basically does the job.

PyPgp has a much simpler approach -- that's why it's a single, simple Python script: basically it does nothing more than "shell out" to command-line PGP commands. For example, decryption is just:

def decrypt(data):
    "Decrypt a string - if you have the right key."
    pw, pr = os.popen2('pgpv -f')
    pw.write(data)
    pw.close()
    ptext = pr.read()
    return ptext

i.e., write the encrypted cyphertext to the standard input of pgpv -f, and read pgpv's standard output as the decrypted plaintext.

PyPgp is also a very old project, though its simplicity means that making it work with modern Python (e.g., subprocess instead of the now-deprecated os.popen2) would not be hard. But you still do need PGP installed, or PyPgp won't do anything ;-).

A: M2Crypto has a PGP module, but I have actually never tried to use it. If you try it, and it works, please let me know (I am the current M2Crypto maintainer). Some links:

Module sources
Demo Script
unit tests

Update: The PGP module does not provide ways to generate keys, but presumably these could be created with the lower level RSA, DSA etc. modules.
I don't know PGP insides, so you'd have to dig up the details. Also, if you know how to generate these using openssl command line commands, it should be reasonably easy to convert that to M2Crypto calls.

A: As others have noted, PyMe is the canonical solution for this, since it's based on GpgME, which is part of the GnuPG ecosystem.

For Windows, I strongly recommend to use Gpg4win as the GnuPG distribution, for two reasons:

It's based on GnuPG 2, which, among other things, includes gpg2.exe, which can (finally, I might add :) start gpg-agent.exe on-demand (gpg v1.x can't).

And secondly, it's the only official Windows build by the GnuPG developers. E.g. it's entirely cross-compiled from Linux to Windows, so not an iota of non-free software was used in preparing it (quite important for a security suite :).

A: To encrypt using only an exported public key file, without a keyring.

With PGPy 0.5.2 (a pure Python implementation of the OpenPGP RFC):

rkey_fpath = './recipient-PUB.gpg'

rsa_pub, _ = pgpy.PGPKey.from_file(rkey_fpath)
rkey = rsa_pub.subkeys.values()[0]

text_message = pgpy.PGPMessage.new('my msg')
encrypted_message = rkey.encrypt(text_message)
print(encrypted_message.__bytes__())

With gpg 1.10.0 (gpgme Python bindings - the former PyME):

rkey_fpath = './recipient-PUB.gpg'
cg = gpg.Context()
rkey = list(cg.keylist(source=rkey_fpath))

ciphertext, result, sign_result = cg.encrypt('my msg', recipients=rkey, sign=False, always_trust=True)
print(ciphertext)

A simple benchmark in a for loop shows me that for this simple operation on my system PGPy is ~3x faster than the gpgme Python bindings (please do not take this as a general "X is faster than Y" claim: I invite you to test in your environment).

A: Here's a full script that will:

Attempt to decrypt all the files in a given folder that were encrypted with your public key.
Write the new files to a specified folder.
Move the encrypted files to a specified folder.

The script also has everything you need to create and store your own private and public keys; check out the "First time set up" section below.

The idea is that you can schedule this script to run as often as you like, and it'll automatically decrypt data found and store it for you.

I hope this helps someone; this was a tricky project to figure out.
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#~~ Introduction, change log and table of contents
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Purpose: This script is used to decrypt files that are passed to us from ICF.
#
# Change date   Changed by     Description
# 2022-10-03    Ryan Bradley   Initial draft
# 2022-10-12    Ryan Bradley   Cleaned up some comments and table of contents.
#
# Table of Contents
# [1.0] Hard-coded variables
# [1.1] Load packages and custom functions
# [1.3] First time set up
# [1.4] Define custom functions
# [2.0] Load keys and decrypt files
#
# Sources used to create this script, and for further reading:
# https://github.com/SecurityInnovation/PGPy/
# https://stackoverflow.com/questions/1020320/how-to-do-pgp-in-python-generate-keys-encrypt-decrypt
# https://pypi.org/project/PGPy/
# https://betterprogramming.pub/creating-a-pgp-encryption-tool-with-python-19bae51b7fd
# https://pgpy.readthedocs.io/en/latest/examples.html

#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#~~ [1.1] Load packages
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
import glob
import pgpy
import shutil
import io

#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#~~ [1.2] Hard-coded variables
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

# Define the paths to public and private keys
path_public_key = r'YOUR PATH HERE'
path_private_key = r'YOUR PATH HERE'

# Define paths to files you want to try decrypting
path_original_files = r'YOUR PATH HERE'
path_decrypted_files = r'YOUR PATH HERE'
path_encrypted_files = r'YOUR PATH HERE'

#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#~~ [1.3] First time set up
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# IMPORTANT WARNINGS!!!!
# - Do NOT share your private key with anyone else.
# - You MUST have the associated private key that is generated along with a
#   public key if you want to be able to decrypt anything that is encrypted
#   with that public key. Do not overwrite the existing keys unless you will
#   never need any of the previously encrypted data.
# - Do not generate new public and private keys unless you have a good reason to.
#
# The following steps walk you through how to create and write public and
# private keys to a network location. Be very careful where you store this
# information. Anyone with access to your private key can decrypt anything
# that was encrypted with your public key.
#
# These steps only need to be performed one time when the script is first
# being created. They are commented out intentionally, as they shouldn't need
# to be performed every time the script is run.
#
# Here's the link to the documentation on this topic:
# https://pgpy.readthedocs.io/en/latest/examples.html

# # Load the extra things we need to define a new key
# from pgpy.constants import PubKeyAlgorithm, KeyFlags, HashAlgorithm, SymmetricKeyAlgorithm, CompressionAlgorithm

# # Generate a new primary key. For this example, we'll use RSA, but it could be DSA or ECDSA as well
# key = pgpy.PGPKey.new(PubKeyAlgorithm.RSAEncryptOrSign, 4096)

# # Define a new user
# uid = pgpy.PGPUID.new('SA_CODA_Admin', comment='Customer Strategy and Data Analytics service account.', email='[email protected]')

# # Add the new user id to the key, and define all the key preferences.
# key.add_uid(uid, usage={KeyFlags.Sign, KeyFlags.EncryptCommunications, KeyFlags.EncryptStorage},
#             hashes=[HashAlgorithm.SHA256, HashAlgorithm.SHA384, HashAlgorithm.SHA512, HashAlgorithm.SHA224],
#             ciphers=[SymmetricKeyAlgorithm.AES256, SymmetricKeyAlgorithm.AES192, SymmetricKeyAlgorithm.AES128],
#             compression=[CompressionAlgorithm.ZLIB, CompressionAlgorithm.BZ2, CompressionAlgorithm.ZIP, CompressionAlgorithm.Uncompressed],
#             is_compressed=True)

# # Write the ASCII armored public key to a network location.
# text_file = open(path_public_key, 'w')
# text_file.write(str(key.pubkey))
# text_file.close()

# # Write the ASCII armored private key to a network location.
# text_file = open(path_private_key, 'w')
# text_file.write(str(key))
# text_file.close()

#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#~~ [1.4] Define custom functions
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

def file_encrypt(path_original_file, path_encrypted_file, key_public):
    """
    Encrypt the content of the file at the given path and create an encrypted
    version of the file at the new location using the specified public key.
    """
    # Create a PGP message, compressed with ZIP DEFLATE by default unless otherwise specified
    pgp_file = pgpy.PGPMessage.new(path_original_file, file=True)

    # Encrypt the data with the public key
    encrypted_data = key_public.encrypt(pgp_file)

    # Write the encrypted data to the encrypted destination
    text_file = open(path_encrypted_file, 'w')
    text_file.write(str(encrypted_data))
    text_file.close()

def file_decrypt(path_encrypted_file, path_decrypted_file, key_private):
    """
    Decrypt the content of the file at the given path and create a decrypted
    file at the new location using the given private key.
    """
    # Load a previously encrypted message from a file
    pgp_file = pgpy.PGPMessage.from_file(path_encrypted_file)

    # Decrypt the data with the given private key
    decrypted_data = key_private.decrypt(pgp_file).message

    # Read in the bytes of the decrypted data
    toread = io.BytesIO()
    toread.write(bytes(decrypted_data))
    toread.seek(0)  # reset the pointer

    # Write the data to the location
    with open(path_decrypted_file, 'wb') as f:
        shutil.copyfileobj(toread, f)

#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#~~ [2.0] Load keys and decrypt files
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

# Load your pre-generated public key from the network
key_public, _ = pgpy.PGPKey.from_file(path_public_key)

# Load your pre-generated private key from the network
key_private, _ = pgpy.PGPKey.from_file(path_private_key)

# Find and process any encrypted files in the landing folder
for file in glob.glob(path_original_files + '\*.pgp'):

    # Get the path to the file we need to decrypt
    path_encrypted_file = str(file)

    # Extract the file name
    parts = path_encrypted_file.split('\\')
    str_file_name = parts[len(parts) - 1]
    str_clean_file_name = str_file_name[:-4]

    # Extract the file extension
    str_extension = str_clean_file_name.split('.')
    str_extension = str_extension[len(str_extension) - 1]

    # Create the path to the new decrypted file, dropping the ".pgp" extension
    path_decrypted_file = path_decrypted_files + '\\' + str_clean_file_name

    # Create the path to the place we'll store the encrypted file
    path_archived_encrypted_file = path_encrypted_files + '\\' + str_file_name

    # Decrypt the file
    try:
        file_decrypt(path_encrypted_file, path_decrypted_file, key_private)

        # Move the encrypted file to its new location
        shutil.move(path_encrypted_file, path_archived_encrypted_file)
    except Exception:
        print('DECRYPTION ERROR!')
        print(f'Unable to decrypt {path_encrypted_file}')

# Just for reference, here's how you would call the function to encrypt a file:
# file_encrypt(path_original_file, path_encrypted_file, key_public)
Q: Near identical code producing different results. TIME module

I am using TIME for the first time and wanted to make a basic timer. I ran this code completely on its own:

import time

start = int(time.time())
answered = "No"
while int(time.time()) - 2 < start:
    if input("What's 1 + 1?") == 2:
        print("Correct")
    else:
        print("Incorrect")

This code's execution gives you 2 seconds to answer as many times as you want; once 2s is up you can no longer answer. This was made just so I could practice using the TIME module and has no greater purpose. However, when I tried to implement the TIME module into a game I am working on:

start = int(time.time())
hits = 0
while int(time.time()) - start != time:
    #COMBAT MECHANICS#

I have only included a tiny snippet of code, as this is the only place time is mentioned, so it is the only relevant part for this error. When this code is run I receive the message:

UnboundLocalError: local variable 'time' referenced before assignment

time is imported at the beginning of the code, as it was in the practice script. The only difference is that this is within a function, but I don't see how that should influence the behaviour of the code. Please can someone help me fix this.

A: As pointed out by @jasonharper, you changed your logic in the second example and in the process misused time in your while statement. Keeping your second example's logic in line with your first example's logic (assuming that is your goal), you would want to do something like this:

start = int(time.time())
hits = 0
while int(time.time()) - 2 < start:
    #COMBAT MECHANICS#
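A hedged sketch of what likely triggers the UnboundLocalError itself, since the full function isn't shown (the time = 30 line below is my hypothetical stand-in for whatever assignment exists in the real code): if anything inside a function assigns to the name time, Python treats time as a local variable throughout that entire function, so an earlier time.time() call fails before the assignment ever runs:

import time

def game():
    start = int(time.time())  # raises UnboundLocalError if `time` is assigned anywhere below
    hits = 0
    while int(time.time()) - 2 < start:
        pass  # combat mechanics here
    time = 30  # any assignment to `time` inside the function makes the name local

Renaming the local variable (e.g. time_limit = 30) restores access to the imported module, which is why the same line works at top level but not inside the function.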
Q: How to compare whether 2 columns in two pandas dataframes are equal, then update the rest of the dataframe

I have 2 pandas dataframes and am trying to compare whether 2 of their columns are equal; if they are, update the rest of the dataframe, and if not, append the new data (so concat or something like that). I tried this amongst other things:

if demand_history[['Pyramid Key', 'FCST_YR_PRD']] == azlog_3[['Pyramid Key', 'FCST_YR_PRD']]:
    demand_history['DMD_ACTL_QTY'] == azlog_3['DMD_ACTL_QTY']

A:

demand_hist_sku_date = demand_history['Pyramid Key'] + demand_history['FCST_YR_PRD']
azlog_3_sku_date = azlog_3['Pyramid Key'] + azlog_3['FCST_YR_PRD']
demand_history.loc[demand_hist_sku_date.isin(azlog_3_sku_date), 'DMD_ACTL_QTY'] = azlog_3['DMD_ACTL_QTY']
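A hedged alternative sketch (the column names come from the question, but the exact update semantics are my assumption): the .loc assignment above aligns the right-hand side on the index, not on the key columns, so a merge-based update is often safer when the two frames aren't index-aligned:

import pandas as pd

keys = ['Pyramid Key', 'FCST_YR_PRD']
merged = demand_history.merge(azlog_3[keys + ['DMD_ACTL_QTY']],
                              on=keys, how='left', suffixes=('', '_new'))
# where a matching key exists in azlog_3, take its value; otherwise keep the old one
merged['DMD_ACTL_QTY'] = merged['DMD_ACTL_QTY_new'].fillna(merged['DMD_ACTL_QTY'])
demand_history = merged.drop(columns='DMD_ACTL_QTY_new')

Rows present only in azlog_3 could then be appended with pd.concat, per the "if not, append" requirement.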
Q: Project 3D points to 2D points in Python

I'm trying to project 3D body keypoints to 2D keypoints. My 3D points are:

points = np.array([[-7.55801499e-02, -3.69511306e-01, -2.63576955e-01],
                   [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00],
                   [ 3.08661222e-01, -2.93346141e-02,  3.72593999e-02],
                   [ 5.96781611e-01, -2.82074720e-01,  4.71359938e-01],
                   [ 5.38534284e-01, -8.05779934e-01,  4.68694866e-01],
                   [-3.67936224e-01, -1.09069087e-01,  9.90774706e-02],
                   [-5.24732828e-01, -2.87176669e-01,  6.09635711e-01],
                   [-4.37022656e-01, -7.87327409e-01,  4.43706572e-01],
                   [ 1.33009470e-09, -5.10657072e-09,  1.00000000e+00],
                   [ 1.13241628e-01,  3.25177647e-02,  1.24026799e+00],
                   [ 3.43442023e-01, -2.51034945e-01,  1.90472209e+00],
                   [ 2.57550180e-01, -2.86886752e-01,  2.75528717e+00],
                   [-1.37361348e-01, -2.60521360e-02,  1.19951272e+00],
                   [-3.26779515e-01, -5.59706092e-01,  1.75905156e+00],
                   [-4.65996087e-01, -7.69565761e-01,  2.56634569e+00],
                   [-1.89841837e-02, -3.19088846e-01, -3.69913191e-01],
                   [-1.61812544e-01, -3.10732543e-01, -3.47061515e-01],
                   [ 7.68100023e-02, -1.19293019e-01, -3.72248143e-01],
                   [-2.24317372e-01, -1.02143347e-01, -3.32051814e-01],
                   [-3.77829641e-01, -1.19915462e+00,  2.56900430e+00],
                   [-5.45104921e-01, -1.13393784e+00,  2.57149625e+00],
                   [-5.66698492e-01, -6.89325571e-01,  2.67840290e+00],
                   [ 4.65222150e-01, -6.44857705e-01,  2.83186650e+00],
                   [ 5.27995050e-01, -4.69421804e-01,  2.87518311e+00],
                   [ 1.77749291e-01, -1.74753308e-01,  2.88810611e+00]])

I plotted them using:

fig = plt.figure()
ax = plt.axes(projection='3d')
ax.set_xlim3d(1, -1)
ax.set_ylim3d(1, -1)
ax.set_zlim3d(1, -1)
ax.scatter3D(points[:, 0], points[:, 1], points[:, 2], cmap='Greens')

The result is: (3D scatter figure)

I want an array of 2D points with the same camera view, so my desired result is a 2D array: (2D scatter figure)

I have tried so far:

import cv2

ans = []
for k in range(25):
    tmp = np.array(s[0, k, :]).reshape(1, 3)

    revc = np.array([0, 0, 0], np.float)  # rotation vector
    tvec = np.array([0, 0, 0], np.float)  # translation vector
    fx = fy = 1.0
    cx = cy = 0.0
    cameraMatrix = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]])

    result = cv2.projectPoints(tmp, revc, tvec, cameraMatrix, None)
    ans.append(result[0])

ans = np.array(ans).squeeze()

But the result I'm getting is: (distorted 2D scatter figure)

plt.scatter(ans[:, 0], ans[:, 1])

I can't figure out why the information is lost during projection; kindly help me with this. Also, it's not necessary for me to use OpenCV, so you can suggest other methods, like using numpy. Thanks

A: Here's a way to do this from "scratch". I have the following import statements:

import numpy as np
import matplotlib.pyplot as plt
from numpy import sin, cos, pi
from scipy.linalg import norm

After your 3D plotting code, I added the following:

azim = ax.azim*pi/180
elev = ax.elev*pi/180
elev *= 1.2  # this seems to improve the outcome

a_vec = np.array([cos(azim), sin(azim), 0])
normal = cos(elev)*a_vec + np.array([0, 0, sin(elev)])

z_vec = np.array([0, 0, 1])
y_comp = z_vec - (z_vec@normal)*normal
y_comp = y_comp/norm(y_comp)
x_comp = np.cross(y_comp, normal)

proj_mat = np.vstack([x_comp, y_comp])  # build projection matrix
proj_mat = -proj_mat                    # account for flipped axes
points_2D = points @ proj_mat.T         # apply projection

plt.figure()
plt.scatter(*points_2D.T)
plt.gca().set_aspect('equal', adjustable='box')
plt.axis('off')
plt.show()

The resulting points: (2D scatter figure)
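A hedged aside that is not part of the answer above: if all you need is the screen-plane coordinates of the current 3D view (rather than a calibrated camera model), Matplotlib can also hand you its own projection directly via mpl_toolkits.mplot3d.proj3d, a sketch of which follows:

from mpl_toolkits.mplot3d import proj3d

# project through the same 4x4 matrix the 3D axes use for rendering
x2, y2, _ = proj3d.proj_transform(points[:, 0], points[:, 1], points[:, 2], ax.get_proj())

plt.figure()
plt.scatter(x2, y2)
plt.gca().set_aspect('equal', adjustable='box')
plt.show()

This mirrors the hand-rolled projection in the answer, but reuses the exact view transform of the existing axes.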
Q: Plotly Figure. How to get the number of rows and cols? I create a Plotly Figure instance this way: fig = go.Figure() fig = make_subplots(rows=3, cols=1, shared_xaxes=True, row_width=[0.3, 0.3, 0.4]) Let's assume that now I do not know how many rows and cols the Figure instance has. How can I obtain these values? For example, I expect something like this: rows = fig.get_rows_num() cols = fig.get_cols_num() I appreciate any help. A: I had the same use case come up! I was happy to find you can do this: rows, cols = fig._get_subplot_rows_columns()
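Since _get_subplot_rows_columns() is private API, it may change between Plotly versions; a sketch of using it, plus a version-proof fallback (the range-object return shape is an assumption based on recent Plotly releases):
from plotly.subplots import make_subplots

fig = make_subplots(rows=3, cols=1, shared_xaxes=True, row_width=[0.3, 0.3, 0.4])

rows, cols = fig._get_subplot_rows_columns()   # range objects in recent versions
n_rows, n_cols = len(rows), len(cols)          # 3, 1

# Fallback that cannot break: record the grid shape when building the figure.
grid_shape = (3, 1)
fig2 = make_subplots(rows=grid_shape[0], cols=grid_shape[1])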
Plotly Figure. How to get the number of rows and cols?
I create a Plotly Figure instance this way: fig = go.Figure() fig = make_subplots(rows=3, cols=1, shared_xaxes=True, row_width=[0.3, 0.3, 0.4]) Let's assume that now I do not know how many rows and cols the Figure instance has. How can I obtain these values? For example, I expect something like this: rows = fig.get_rows_num() cols = fig.get_cols_num() I appreciate any help.
[ "I had the same use case come up! I was happy to find you can do this:\nrows, cols = fig._get_subplot_rows_columns()\n\n" ]
[ 0 ]
[]
[]
[ "plotly", "python" ]
stackoverflow_0073829894_plotly_python.txt
Q: Matplotlib figures not generating in GitHub CodeSpaces I just started using Codespaces. In my python file I have this code: import matplotlib.pyplot as plt import pandas as pd print("Hello") titanic_data = pd.read_csv("https://raw.githubusercontent.com/datasciencedojo/datasets/master/titanic.csv") titanic_data = titanic_data[titanic_data['Age'].notnull()] titanic_data['Fare'] = titanic_data['Fare'].fillna(titanic_data['Fare'].mean()) titanic_data = titanic_data.drop_duplicates() plt.scatter(titanic_data['Age'], titanic_data['Fare']) plt.show() print("Goodbye") When I run this on my local machine, this works perfectly. I can see the console logs, and the figure appears as a new window: However, when I run this in Codespaces, I can see all of the code running without any errors, but it does not show the figure. Is this a known limitation or a feature that is not yet supported? Is there another way I can plot figures in Codespaces? They mention this in the docs: The default container image that's used by GitHub Codespaces includes a set of machine learning libraries that are preinstalled in your codespace. For example, Numpy, pandas, SciPy, Matplotlib, seaborn, scikit-learn, Keras, PyTorch, Requests, and Plotly. It sounds like it should be supported out of the box. Is additional configuration required? A: Based on the experimentation I have done thus far, plotting these diagrams as one would do in a local dev environment is not (yet?) possible. For this specific case, the next best solution was to create a new GitHub Codespace from this repo: https://github.com/education/codespaces-teaching-template-py Once the repo has been cloned into the Codespace, navigate to an existing .ipynb file or create your own. Inside there you'll be able to run chunks of custom code and plot figures. The big limitation I see is that the figure cannot be interacted with the same way that one would be able to on a local machine (zooming, panning, etc). As always, don't forget to shut your Codespace down when you're done using it!
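A common workaround when no display server is available, as in a codespace: render with the headless Agg backend and save the figure to a file instead of calling plt.show() (a sketch; the output filename is arbitrary):
import matplotlib
matplotlib.use("Agg")   # headless backend, no window required
import matplotlib.pyplot as plt
import pandas as pd

titanic_data = pd.read_csv("https://raw.githubusercontent.com/datasciencedojo/datasets/master/titanic.csv")
titanic_data = titanic_data[titanic_data['Age'].notnull()]

plt.scatter(titanic_data['Age'], titanic_data['Fare'])
plt.savefig("fare_vs_age.png", dpi=150)   # open the PNG from the Codespaces file explorer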
Matplotlib figures not generating in GitHub CodeSpaces
I just started using Codespaces. In my python file I have this code: import matplotlib.pyplot as plt import pandas as pd print("Hello") titanic_data = pd.read_csv("https://raw.githubusercontent.com/datasciencedojo/datasets/master/titanic.csv") titanic_data = titanic_data[titanic_data['Age'].notnull()] titanic_data['Fare'] = titanic_data['Fare'].fillna(titanic_data['Fare'].mean()) titanic_data = titanic_data.drop_duplicates() plt.scatter(titanic_data['Age'], titanic_data['Fare']) plt.show() print("Goodbye") When I run this on my local machine, this works perfectly. I can see the console logs, and the figure appears as a new window: However, when I run this in Codespaces, I can see all of the code running without any errors, but it does not show the figure. Is this a known limitation or a feature that is not yet supported? Is there another way I can plot figures in Codespaces? They mention this in the docs: The default container image that's used by GitHub Codespaces includes a set of machine learning libraries that are preinstalled in your codespace. For example, Numpy, pandas, SciPy, Matplotlib, seaborn, scikit-learn, Keras, PyTorch, Requests, and Plotly. It sounds like it should be supported out of the box. Is additional configuration required?
[ "Based on the experimentation I have done thus far, plotting these diagrams as one would do in a local dev environment is not (yet?) possible.\nFor this specific case, the next best solution was to create a new GitHub Codespace from this repo: https://github.com/education/codespaces-teaching-template-py\n\nOnce the repo has been cloned into the Codespace, navigate to an existing .ipynb file or create your own.\nInside there you'll be able to run chunks of custom code and plot figures.\n\nThe big limitation I see is that the figure cannot be interacted with the same way that one would be able to on a local machine (zooming, panning, etc).\nAs always, don't forget to shut your Codespace down when you're done using it!\n" ]
[ 1 ]
[]
[]
[ "codespaces", "python" ]
stackoverflow_0074415793_codespaces_python.txt
Q: Generating a UDP message with a header and payload in Python 3 I am new to networking and trying to implement a network calculator using python3 where the client's responsibility is to send operands and operators and the server will calculate the result and send it back to the client. Communication is through UDP messages and I am working on the client side. Each message is comprised of a header and a payload, and they are described as shown in the figures below. UDP header: UDP payload: I am familiar with sending string messages using sockets but am having a hard time with how to make a message with both header and payload, how to assign the bits for the various attributes, how to generate the message/client IDs in the header, and whether there is any way to automatically generate the IDs. Any help or suggestions will be highly appreciated. Thanks in advance A: I will only do a portion of your homework. I hope it will help you to find energy to work on the missing parts. import struct import socket CPROTO_ECODE_REQUEST, CPROTO_ECODE_SUCCESS, CPROTO_ECODE_FAIL = (0,1,2) ver = 1 # version of protocol mid = 0 # initial value cid = 99 # client Id (arbitrary) sock = socket.socket( ...) # to be customized def sendRecv( num1, op, num2): global mid ocs = ("+", "-", "*", "/").index( op) byte0 = ver + (ocs << 3) + (CPROTO_ECODE_REQUEST << 6) hdr = struct.pack( "!BBH", byte0, mid, cid) parts1 = (b'0000' + num1.encode() + b'0000').split(b'.') parts2 = (b'0000' + num2.encode() + b'0000').split(b'.') msg = hdr + parts1[0][-4:] + parts1[1][:4] + parts2[0][-4:] + parts2[1][:4] sock.send( msg) # send request bufr = sock.recv( 512) # get answer # to do: # complete sock.send and sock.recv # unpack bufr into: verr,ecr,opr,value_i, value_f # verify that verr, ecr, opr, are appropriate # combine value_i and value_f into answer mid += 1 return answer result = sendRecv( '2.47', '+', '46.234') There are many elements that haven't been specified by your teacher: what should be the byte-ordering on the network (bigEndian or littleEndian)? The above example supposes it's bigEndian but you can easily modify the 'pack' statement to use littleEndian. What should the program do if the received packet header is invalid? What should the program do if there's no answer from the server? Payload: how should we interpret "4 most significant digits of fraction"? Does that mean that the value is in ASCII? That's not specified. Payload: assuming the fraction is in ASCII, should it be right-justified or left-justified in the packet? Payload: same question for the integer portion. Payload: if the values are in binary, are they signed or unsigned? It will have an effect on the unpacking statement. In the program above, I assumed that: values are positive and in ASCII (without sign) the integer portion is right-justified the fractional portion is left-justified Have fun!
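A self-contained sketch of just the header packing and unpacking, using the same assumed bit layout as the answer above (version in bits 0-2, opcode from bit 3, error code from bit 6, big-endian byte order; the real layout comes from the question's missing figures, so treat the shifts as placeholders):
import struct

def pack_header(ver, mid, cid, opcode, ecode):
    # byte0 layout is an assumption mirroring the answer's byte0 expression
    byte0 = (ver & 0x07) | (opcode << 3) | (ecode << 6)
    return struct.pack("!BBH", byte0, mid & 0xFF, cid & 0xFFFF)

def unpack_header(data):
    byte0, mid, cid = struct.unpack("!BBH", data[:4])
    return byte0 & 0x07, (byte0 >> 3) & 0x07, (byte0 >> 6) & 0x03, mid, cid

hdr = pack_header(ver=1, mid=0, cid=99, opcode=0, ecode=0)
print(unpack_header(hdr))   # (1, 0, 0, 0, 99)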
Generating a UDP message with a header and payload in Python 3
I am new to networking and trying to implement a network calculator using python3 where the client's responsibility is to send operands and operators and the server will calculate the result and send it back to the client. Communication is through UDP messages and I am working on the client side. Each message is comprised of a header and a payload, and they are described as shown in the figures below. UDP header: UDP payload: I am familiar with sending string messages using sockets but am having a hard time with how to make a message with both header and payload, how to assign the bits for the various attributes, how to generate the message/client IDs in the header, and whether there is any way to automatically generate the IDs. Any help or suggestions will be highly appreciated. Thanks in advance
[ "I will only do a portion of your homework.\nI hope it will help you to find energy to work on missing parts.\nimport struct\nimport socket\n\nCPROTO_ECODE_REQUEST, CPROTO_ECODE_SUCCESS, CPROTO_ECODE_FAIL = (0,1,2)\n\nver = 1 # version of protocol\nmid = 0 # initial value\ncid = 99 # client Id (arbitrary)\n\nsock = socket.socket( ...) # to be customized\n\ndef sendRecv( num1, op, num2):\n global mid\n ocs = (\"+\", \"-\", \"*\", \"/\").index( op)\n byte0 = ver + (ocs << 3) + (CPROTO_ECODE_REQUEST << 6)\n hdr = struct.pack( \"!BBH\", byte0, mid, cid)\n parts1 = (b'0000' + num1.encode() + b'0000').split(b'.')\n parts2 = (b'0000' + num2.encode() + b'0000').split(b'.')\n msg = hdr + parts1[0][-4:] + parts1[1][:4] + parts2[0][-4:] + parts2[1][:4]\n \n socket.send( msg) # send request\n bufr = socket.recv( 512) # get answer\n # to do:\n # complete socket_send and socket.recv\n # unpack bufr into: verr,ecr,opr,value_i, value_f\n # verify that verr, ecr, opr, are appropriate\n # combine value_i and value_f into answer\n mid += 1\n return answer\n\nresult = sendRecv( '2.47', '+', '46.234')\n\n\nThere are many elements that haven't be specified by your teacher:\n\nwhat should be the byte-ordering on the network (bigEndian or littleEndian)? The above example suppose it's bigEndian but you can easily modify the 'pack' statement to use littleEndian.\nWhat should the program do if the received packet header is invalid?\nWhat should the program do if there's no answer from server?\nPayload: how should we interpret \"4 most significant digits of fraction\"? Does that mean that the value is in ASCII? That's not specified.\nPayload: assuming the fraction is in ASCII, should it be right-justified or left-justified in the packet?\nPayload: same question for integer portion.\nPayload: if the values are in binary, are they signed or unsigned. It will have an affect on the unpacking statement.\n\nIn the program above, I assumed that:\n\nvalues are positive and in ASCII (without sign)\ninteger portion is right-justified\nfractional portion is left justified\n\nHave fun!\n" ]
[ 1 ]
[]
[]
[ "networking", "python", "python_3.x", "sockets", "udp" ]
stackoverflow_0074606143_networking_python_python_3.x_sockets_udp.txt
Q: I want to return two values, but only print one In a Python function, def f(value): value = ~~~ a = ~~~~ return value, a print(f(value)) I want to return value and a so that code outside the function can also keep value and a, but I only want to show a. Is there any way to do this? I cannot use global, because the error says that 'value is parameter and global'. I must not change value in f() (that means I must not change f(value)). Also, I must not change 'print(f(value))'. The only thing I can do is change the inside of 'def f(value):'. A: You should assign value and a to variables outside of the function. Since you are returning a tuple you should assign it like x,y=f(value). Then print(y) to just show a. A: As Karl Knechtel mentioned your requirements are inconsistent but the nearest you can get is: def f(value): global other_value value = ~~~ a = ~~~~ other_value = value return a print(f(value)) But use of global variables is not recommended. A: In next two solutions I expect that you are allowed to change definition of your f(...) function. First solution uses global variable, and second doesn't use any globals. First variant (below) is that you can just use a global variable to store saved value. Try it online! def f(value): global saved_value saved_value = value value = 123 a = 456 return a # Regular usage of your function value = 789 print(f(value)) # Prints 456 # Somewhere later in your code # This will print saved value print(saved_value) # Prints 789 Output: 456 789 Second variant (below) without using any global variable is by using state dictionary with default {} value. This state does same thing as global variable, it saves value for later use. I changed number of arguments in your function but that is not a problem because as before everyone can call your function just as f(value), the rest of arguments are not necessary to be provided and will be considered to be equal to default values. Try it online! def f(value, *, state = {}, return_value = False): if return_value: return state['value'] else: state['value'] = value value = 123 a = 456 return a # Regular usage of your function value = 789 print(f(value)) # Prints 456 # Somewhere later in your code # This will print saved value print(f(None, return_value = True)) # Prints 789 Output: 456 789
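A minimal runnable illustration of the first answer's tuple unpacking (the computations are placeholders for the question's ~~~ lines, and, like that answer, it relaxes the asker's print(f(value)) constraint):
def f(value):
    value = value * 2   # placeholder for value = ~~~
    a = value + 1       # placeholder for a = ~~~~
    return value, a

value, a = f(10)   # both results survive outside the function
print(a)           # 21: only a is shown; value (20) is still available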
I want to return two values, but only print one
In a Python function, def f(value): value = ~~~ a = ~~~~ return value, a print(f(value)) I want to return value and a so that code outside the function can also keep value and a, but I only want to show a. Is there any way to do this? I cannot use global, because the error says that 'value is parameter and global'. I must not change value in f() (that means I must not change f(value)). Also, I must not change 'print(f(value))'. The only thing I can do is change the inside of 'def f(value):'.
[ "You should assign value and a to variables outside of the function. Since you are returning a tuple you should assign it like x,y=f(value). Then print(y) to just show a.\n", "As Karl Knechtel mentioned your requirements are inconsistent but the nearest you can get is:\ndef f(value):\n global other_value\n\n value = ~~~\n a = ~~~~\n other_value = value\n return a\n\nprint(f(value))\n\nBut use of global variables is not recommended.\n", "In next two solutions I expect that you are allowed to change definition of your f(...) function. First solution uses global variable, and second doesn't use any globals.\nFirst variant (below) is that you can just use a global variable to store saved value.\nTry it online!\ndef f(value):\n global saved_value\n saved_value = value\n value = 123\n a = 456\n return a\n\n# Regular usage of your function\nvalue = 789\nprint(f(value)) # Prints 456\n\n# Somewhere later in your code\n# This will print saved value\nprint(saved_value) # Prints 789\n\nOutput:\n456\n789\n\n\nSecond variant (below) without using any global variable is by using state dictionary with default {} value. This state does same thing as global variable, it saves value for later use. I changed number of arguments in your function but that is not a problem because as before everyone can call your function just as f(value), the rest of arguments are not necessary to be provided and will be considered to be equal to default values.\nTry it online!\ndef f(value, *, state = {}, return_value = False):\n if return_value:\n return state['value']\n else:\n state['value'] = value\n value = 123\n a = 456\n return a\n\n# Regular usage of your function\nvalue = 789\nprint(f(value)) # Prints 456\n\n# Somewhere later in your code\n# This will print saved value\nprint(f(None, return_value = True)) # Prints 789\n\nOutput:\n456\n789\n\n" ]
[ 1, 0, 0 ]
[ "https://stackoverflow.com/a/45972642/9184997\ndef test():\n r1 = 1\n r2 = 2\n r3 = 3\n return r1, r2, r3\n\nx,y,z = test()\nprint x\nprint y\nprint z\n\n\n> test.py \n1\n2\n3\n\n" ]
[ -1 ]
[ "global", "python", "return" ]
stackoverflow_0064735161_global_python_return.txt
Q: Is there a way to use loop iteration variables in an area outside the loop? I want this code to refer to a list with a loop variable inside instead of using the initialised value: i = 1 list = [i,i+1,i+2] for i in range(3): print(list[0]) I expected the output to be: 0 1 2 The output was: 1 1 1 I have tried i = None instead, but an error was (of course) raised. I have tried using a placeholder inside the loop to refer to: x = 1 list = [x,x+1,x+2] for i in range(3): x = i print(list[0]) I'm new to Python so I'm not very knowledgeable, hence why I asked. How can I solve this? A: In each iteration of your loop you access the same element at index 0. To get to each individual element of your list by index you have to index with i: x = 1 lst = [x,x+1,x+2] for i in range(3): print(lst[i]) I changed list to lst as the former shadows the built-in list type and shouldn't be used as a variable name.
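A sketch contrasting the two behaviours; the original prints 1 three times because the list is built once, before the loop, while i is 1:
i = 1
lst = [i, i + 1, i + 2]   # evaluated once, so lst is frozen as [1, 2, 3]
for i in range(3):
    print(lst[0])          # always the first element: 1, 1, 1
for i in range(3):
    print(lst[i])          # index with the loop variable: 1, 2, 3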
Is there a way to use loop iteration variables in an area outside the loop?
I want this code to refer to a list with a loop variable inside instead of using the initialised value: i = 1 list = [i,i+1,i+2] for i in range(3): print(list[0]) I expected the output to be: 0 1 2 The output was: 1 1 1 I have tried i = None instead, but an error was (of course) raised. I have tried using a placeholder inside the loop to refer to: x = 1 list = [x,x+1,x+2] for i in range(3): x = i print(list[0]) I'm new to Python so I'm not very knowledgeable, hence why I asked. How can I solve this?
[ "In each iteration of your loop you access the same element at index 0.\nTo get to each individual element of your list by index you have set it to i:\nx = 1\nlst = [x,x+1,x+2]\nfor i in range(3):\n print(lst[i])\n\nI changed list to lst as the former is a reserved keyword and shouldn't be used as variable name.\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074620782_python.txt
Q: 1669. Merge In Between Linked Lists - Leetcode - failing test I am working on the LeetCode problem 1669. Merge In Between Linked Lists: You are given two linked lists: list1 and list2 of sizes n and m respectively. Remove list1's nodes from the ath node to the bth node, and put list2 in their place. The blue edges and nodes in the following figure indicate the result: Build the result list and return its head. Constraints: 3 <= list1.length <= 10⁴ 1 <= a <= b < list1.length - 1 1 <= list2.length <= 10⁴ Here is my code: # Definition for singly-linked list. # class ListNode: # def __init__(self, val=0, next=None): # self.val = val # self.next = next class Solution: def mergeInBetween(self, list1: ListNode, a: int, b: int, list2: ListNode) -> ListNode: slow = fast = list1 temp1 = temp2 = list2 slowslow = fastfast = list1 if slow != a: slow = slow.next if slowslow.val != a-1: slowslow = slowslow.next if fast != b: fast = fast.next fastfast = fast.next while temp2.next: temp2 = temp2.next slowslow.next = temp1 temp2.next = fastfast return list1 It fails with the first test case: Input: [0,1,2,3,4,5] 3 4 [1000000,1000001,1000002] MyOutput : [0,1000000,1000001,1000002,2,3,4,5] Expected Output: [0,1,2,1000000,1000001,1000002,5] I am trying to place the slowslow pointer before a, and fastfast pointer after b, and then connect slowslow to temp1 and temp2 to fastfast. But somehow that is not working. What is wrong in my attempt? I am new to linked lists and would appreciate a simple change or even a simpler alternative method. A: Some of the issues: slow != a is a condition that is always False because slow is a ListNode object, and a is an int. The same problem occurs with fast != b. slowslow.val != a-1 is wrong, as there is nothing in the question that requires to look at the values in the list. a is an index, not a value. Your code will set slow to either the first node or the second node of the first list. There is no loop that guarantees that slow will reference the node before index a. The same problem exists for fast. Here is working code: class Solution: def mergeInBetween(self, list1: ListNode, a: int, b: int, list2: ListNode) -> ListNode: # Find node that precedes index a in first list left = list1 for i in range(a - 1): left = left.next # Find node that follows index b in first list right = left for i in range(b - a + 2): right = right.next # Find tail node of second list tail = list2 while tail.next: tail = tail.next # Make the connections left.next = list2 tail.next = right return list1
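A quick harness for the answer's code, assuming the Solution class above is defined (the ListNode class is LeetCode's stub from the question; the build/to_list helper names are mine):
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def build(values):
    # Turn a Python list into a singly linked list and return its head
    head = ListNode(values[0])
    node = head
    for v in values[1:]:
        node.next = ListNode(v)
        node = node.next
    return head

def to_list(node):
    out = []
    while node:
        out.append(node.val)
        node = node.next
    return out

merged = Solution().mergeInBetween(
    build([0, 1, 2, 3, 4, 5]), 3, 4, build([1000000, 1000001, 1000002]))
print(to_list(merged))   # [0, 1, 2, 1000000, 1000001, 1000002, 5]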
1669. Merge In Between Linked Lists - Leetcode - failing test
I am working on the LeetCode problem 1669. Merge In Between Linked Lists: You are given two linked lists: list1 and list2 of sizes n and m respectively. Remove list1's nodes from the ath node to the bth node, and put list2 in their place. The blue edges and nodes in the following figure indicate the result: Build the result list and return its head. Constraints: 3 <= list1.length <= 10⁴ 1 <= a <= b < list1.length - 1 1 <= list2.length <= 10⁴ Here is my code: # Definition for singly-linked list. # class ListNode: # def __init__(self, val=0, next=None): # self.val = val # self.next = next class Solution: def mergeInBetween(self, list1: ListNode, a: int, b: int, list2: ListNode) -> ListNode: slow = fast = list1 temp1 = temp2 = list2 slowslow = fastfast = list1 if slow != a: slow = slow.next if slowslow.val != a-1: slowslow = slowslow.next if fast != b: fast = fast.next fastfast = fast.next while temp2.next: temp2 = temp2.next slowslow.next = temp1 temp2.next = fastfast return list1 It fails with the first test case: Input: [0,1,2,3,4,5] 3 4 [1000000,1000001,1000002] MyOutput : [0,1000000,1000001,1000002,2,3,4,5] Expected Output: [0,1,2,1000000,1000001,1000002,5] I am trying to place the slowslow pointer before a, and fastfast pointer after b, and then connect slowslow to temp1 and temp2 to fastfast. But somehow that is not working. What is wrong in my attempt? I am new to linked lists and would appreciate a simple change or even a simpler alternative method.
[ "Some of the issues:\n\nslow != a is a condition that is always False because slow is a ListNode object, and a is an int. The same problem occurs with fast != b.\nslowslow.val != a-1 is wrong, as there is nothing in the question that requires to look at the values in the list. a is an index, not a value.\nYour code will set slow to either the first node or the second node of the first list. There is no loop that guarantees that slow will reference the node before index a. The same problem exists for fast.\n\nHere is working code:\nclass Solution:\n def mergeInBetween(self, list1: ListNode, a: int, b: int, list2: ListNode) -> ListNode:\n # Find node that precedes index a in first list\n left = list1\n for i in range(a - 1):\n left = left.next\n # Find node that follows index b in first list\n right = left\n for i in range(b - a + 2):\n right = right.next\n # Find tail node of second list \n tail = list2\n while tail.next:\n tail = tail.next\n # Make the connections\n left.next = list2\n tail.next = right\n return list1 \n\n" ]
[ 0 ]
[]
[]
[ "linked_list", "python", "python_3.x", "singly_linked_list" ]
stackoverflow_0074620648_linked_list_python_python_3.x_singly_linked_list.txt
Q: How can I calculate the time lag between two similar time series? I'm trying to compute/visualize the time lag between 2 time series (I want to know the time lag between the humidity progression of outside and inside a room). Each data point of my series was taken hourly. Plotting the 2 series together, I can clearly see a shift between them: Sorry for hiding the axis Here are a part of my time series data. I will pack them in 2 arrays: inside_humidity = [11.77961297, 11.59755268, 12.28761522, 11.88797553, 11.78122077, 11.5694668, 11.70421932, 11.78122077, 11.74272005, 11.78122077, 11.69438733, 11.54126933, 11.28460592, 11.05624965, 10.9611012, 11.07527934, 11.25417308, 11.56040908, 11.6657186, 11.51171572, 11.49246536, 11.78594142, 11.22968373, 11.26840678, 11.26840678, 11.29447992, 11.25553344, 11.19711371, 11.17764047, 11.11922075, 11.04132778, 10.86996123, 10.67410607, 10.63493504, 10.74922916, 10.74922916, 10.6294765, 10.61011497, 10.59075345, 10.80373021, 11.07479154, 11.15223764, 11.19711371, 11.17764047, 11.15816723, 11.22250051, 11.22250051, 11.202915, 11.18332948, 11.16374396, 11.14415845, 11.12457293, 11.10498742, 11.14926578, 11.16896413, 11.16896413, 11.14926578, 10.8307902, 10.51742195, 10.28187137, 10.12608544, 9.98977276, 9.62267727, 9.31289289, 8.96438546, 8.77077022, 8.69332413, 8.51907042, 8.30609366, 8.38353975, 8.4513867, 8.47085994, 8.50980642, 8.52927966, 8.50980642, 8.55887037, 8.51969934, 8.48052831, 8.30425867, 8.2177078, 7.98402891, 7.92560918, 7.89950166, 7.83489682, 7.75789537, 7.5984808, 7.28426807, 7.39778913, 7.71943214, 8.01149931, 8.18276652, 8.23009255, 8.16215295, 7.93822471, 8.00350215, 7.93843482, 7.85072729, 7.49778011, 7.31782649, 7.29862668, 7.60162032, 8.29665484, 8.58797834, 8.50011383, 8.86757784, 8.76600556, 8.60491125, 8.4222628, 8.24923231, 8.14470714, 8.17351638, 8.52530093, 8.72220151, 9.26745883, 9.1580007, 8.61762692, 8.22187405, 8.43693644, 8.32414835, 8.32463974, 8.46833012, 8.55865487, 8.72647164, 9.04112806, 9.35578449, 9.59465974, 10.47339785, 11.07218093, 10.54091351, 10.56138918, 10.46099958, 10.38129168, 10.16434831, 10.10612612, 10.009246, 10.53502351, 10.8307902, 11.13420052, 11.64337309, 11.18958511, 10.49630791, 10.60856932, 10.37029108, 9.86281478, 9.64699826, 9.95341012, 10.24329812, 10.6848196, 11.47604231, 11.30505352, 10.72194974, 10.30058448, 10.05022037, 10.06318411, 9.90118897, 9.68530059, 9.47790657, 9.48585784, 9.61639418, 9.86244265, 10.29009361, 10.28297229, 10.32073088, 10.65389513, 11.09656351, 11.20188562, 11.24124169, 10.40503955, 9.74632512, 9.07606098, 8.85145589, 9.37080152, 9.65082743, 10.0707891, 10.68776091, 11.25879751, 11.0416348, 10.89558456, 10.7908258, 10.66539685, 10.7297755, 10.77571398, 10.9268264, 11.16021492, 11.60961709, 11.43827534, 11.96155427, 12.16116437, 12.80412266, 12.52540805, 11.96752965, 11.58099292] outside_humidity = [10.17449206, 10.4823292, 11.06818167, 10.82768699, 11.27582592, 11.4196233, 10.99393027, 11.4122507, 11.18192837, 10.87247831, 10.68664321, 10.37949651, 9.57155882, 10.86611665, 11.62547196, 11.32004266, 11.75537602, 11.51292063, 11.03107569, 10.7297755, 10.4345622, 10.61271497, 9.49271162, 10.15594248, 9.99053828, 9.80915398, 9.6452438, 10.06900573, 11.18075689, 11.8289847, 11.83334752, 11.27480708, 11.14370467, 10.88149985, 10.73930381, 10.7236597, 10.26210496, 11.01260226, 11.05428228, 11.58321342, 12.70523808, 12.5181118, 11.90023799, 11.67756426, 11.28859471, 10.86878222, 9.73984486, 10.18253902, 9.80915398, 10.50980784, 11.38673459, 11.22751685, 10.94171823, 
10.56484228, 10.38220753, 10.05388847, 9.96147203, 9.90698862, 9.7732203, 9.85262125, 8.7412938, 8.88281702, 8.07919545, 8.02883587, 8.32341424, 8.07357711, 7.27302616, 6.73660684, 6.66722819, 7.29408637, 7.00046542, 6.46322019, 6.07150988, 6.00207234, 5.8818402, 6.82443881, 7.20212882, 7.52167696, 7.88857771, 8.351627, 8.36547023, 8.24802846, 8.18520693, 7.92420816, 7.64926024, 7.87944972, 7.82118727, 8.02091833, 7.93071882, 7.75789457, 7.5416447, 6.94430133, 6.65907535, 6.67454591, 7.25493614, 7.76939457, 7.55357806, 6.61479472, 7.17641357, 7.24664082, 8.62732387, 8.66913548, 8.70925667, 9.0477017, 8.24558224, 8.4330502, 8.44366397, 8.17995798, 8.1875752, 9.33296518, 9.66567041, 9.88581085, 8.95449382, 8.3587624, 9.20584448, 8.90605388, 8.87494884, 9.12694892, 8.35055177, 7.91879933, 7.78867253, 8.22800878, 9.03685287, 12.49630018, 11.11819755, 10.98869374, 10.65897176, 10.36444573, 10.052609, 10.87627021, 10.07379564, 10.02233847, 9.62022856, 11.21575473, 10.85483543, 11.67324627, 11.89234248, 11.10068132, 10.06942096, 8.50405894, 8.13168561, 8.83616476, 8.35675085, 8.33616802, 8.35675085, 9.02209801, 9.5530404, 9.44738836, 10.89645958, 11.44771721, 11.79943601, 10.7765335, 11.1453622, 10.74874776, 10.55195175, 10.34494483, 9.83813522, 11.26931785, 11.20641798, 10.51555027, 10.90808954, 11.80923545, 11.68300879, 11.60313809, 7.95163365, 7.77213815, 7.54209557, 7.30603673, 7.17842173, 8.25899805, 8.56494995, 10.44245578, 11.08542758, 11.74129079, 11.67979686, 12.94362214, 11.96285343, 11.8289847, 11.01388413, 10.6793698, 11.20662595, 11.97684701, 12.46383177, 11.34178655, 12.12477078, 12.48698059, 12.89325064, 12.07470295, 12.6777319, 10.91689448, 10.7676326, 10.66710434] I know cross correlation is the right term to use, but after a while I still don't get the idea of using scipy.signal.correlate and numpy.correlate, because all I got is an array full of NaNs. So clearly I need some more knowledge in this area. What I expect to achieve is probably a plot like those in the answer section of this thread How to make a correlation plot with a certain lag of two time series where I can see at how many hours the time lag is most likely. Thank you a lot in advance! A: With the given data, you can use the numpy and matplotlib modules to achieve the desired result. so, you can do something like this: import numpy as np from matplotlib import pyplot as plt x = np.array(inside_humidity) y = np.array(outside_humidity) fig = plt.figure() # fit a curve of your choice a, b = np.polyfit(inside_humidity, outside_humidity, 1) y_fit = a * x + b # scatter plot, and fitted plot (best fit used) plt.scatter(inside_humidity, outside_humidity) plt.plot(x, y_fit) plt.show() which gives this:
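The fit above shows the overall relationship but not the lag itself; the cross-correlation the question asks about needs only numpy. A sketch (the means are removed first so the peak reflects co-movement rather than the shared baseline; the sign convention of the lag depends on the argument order, so sanity-check it against the plot):
import numpy as np

a = np.asarray(inside_humidity, dtype=float)
b = np.asarray(outside_humidity, dtype=float)
a = a - a.mean()
b = b - b.mean()

corr = np.correlate(a, b, mode="full")
lags = np.arange(-(len(b) - 1), len(a))
best_lag = lags[np.argmax(corr)]
print(f"peak correlation at a lag of {best_lag} hours")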
How can I calculate the time lag between two similar time series?
I'm trying to compute/visualize the time lag between 2 time series (I want to know the time lag between the humidity progression of outside and inside a room). Each data point of my series was taken hourly. Plotting the 2 series together, I can clearly see a shift between them: Sorry for hiding the axis Here are a part of my time series data. I will pack them in 2 arrays: inside_humidity = [11.77961297, 11.59755268, 12.28761522, 11.88797553, 11.78122077, 11.5694668, 11.70421932, 11.78122077, 11.74272005, 11.78122077, 11.69438733, 11.54126933, 11.28460592, 11.05624965, 10.9611012, 11.07527934, 11.25417308, 11.56040908, 11.6657186, 11.51171572, 11.49246536, 11.78594142, 11.22968373, 11.26840678, 11.26840678, 11.29447992, 11.25553344, 11.19711371, 11.17764047, 11.11922075, 11.04132778, 10.86996123, 10.67410607, 10.63493504, 10.74922916, 10.74922916, 10.6294765, 10.61011497, 10.59075345, 10.80373021, 11.07479154, 11.15223764, 11.19711371, 11.17764047, 11.15816723, 11.22250051, 11.22250051, 11.202915, 11.18332948, 11.16374396, 11.14415845, 11.12457293, 11.10498742, 11.14926578, 11.16896413, 11.16896413, 11.14926578, 10.8307902, 10.51742195, 10.28187137, 10.12608544, 9.98977276, 9.62267727, 9.31289289, 8.96438546, 8.77077022, 8.69332413, 8.51907042, 8.30609366, 8.38353975, 8.4513867, 8.47085994, 8.50980642, 8.52927966, 8.50980642, 8.55887037, 8.51969934, 8.48052831, 8.30425867, 8.2177078, 7.98402891, 7.92560918, 7.89950166, 7.83489682, 7.75789537, 7.5984808, 7.28426807, 7.39778913, 7.71943214, 8.01149931, 8.18276652, 8.23009255, 8.16215295, 7.93822471, 8.00350215, 7.93843482, 7.85072729, 7.49778011, 7.31782649, 7.29862668, 7.60162032, 8.29665484, 8.58797834, 8.50011383, 8.86757784, 8.76600556, 8.60491125, 8.4222628, 8.24923231, 8.14470714, 8.17351638, 8.52530093, 8.72220151, 9.26745883, 9.1580007, 8.61762692, 8.22187405, 8.43693644, 8.32414835, 8.32463974, 8.46833012, 8.55865487, 8.72647164, 9.04112806, 9.35578449, 9.59465974, 10.47339785, 11.07218093, 10.54091351, 10.56138918, 10.46099958, 10.38129168, 10.16434831, 10.10612612, 10.009246, 10.53502351, 10.8307902, 11.13420052, 11.64337309, 11.18958511, 10.49630791, 10.60856932, 10.37029108, 9.86281478, 9.64699826, 9.95341012, 10.24329812, 10.6848196, 11.47604231, 11.30505352, 10.72194974, 10.30058448, 10.05022037, 10.06318411, 9.90118897, 9.68530059, 9.47790657, 9.48585784, 9.61639418, 9.86244265, 10.29009361, 10.28297229, 10.32073088, 10.65389513, 11.09656351, 11.20188562, 11.24124169, 10.40503955, 9.74632512, 9.07606098, 8.85145589, 9.37080152, 9.65082743, 10.0707891, 10.68776091, 11.25879751, 11.0416348, 10.89558456, 10.7908258, 10.66539685, 10.7297755, 10.77571398, 10.9268264, 11.16021492, 11.60961709, 11.43827534, 11.96155427, 12.16116437, 12.80412266, 12.52540805, 11.96752965, 11.58099292] outside_humidity = [10.17449206, 10.4823292, 11.06818167, 10.82768699, 11.27582592, 11.4196233, 10.99393027, 11.4122507, 11.18192837, 10.87247831, 10.68664321, 10.37949651, 9.57155882, 10.86611665, 11.62547196, 11.32004266, 11.75537602, 11.51292063, 11.03107569, 10.7297755, 10.4345622, 10.61271497, 9.49271162, 10.15594248, 9.99053828, 9.80915398, 9.6452438, 10.06900573, 11.18075689, 11.8289847, 11.83334752, 11.27480708, 11.14370467, 10.88149985, 10.73930381, 10.7236597, 10.26210496, 11.01260226, 11.05428228, 11.58321342, 12.70523808, 12.5181118, 11.90023799, 11.67756426, 11.28859471, 10.86878222, 9.73984486, 10.18253902, 9.80915398, 10.50980784, 11.38673459, 11.22751685, 10.94171823, 10.56484228, 10.38220753, 10.05388847, 9.96147203, 9.90698862, 
9.7732203, 9.85262125, 8.7412938, 8.88281702, 8.07919545, 8.02883587, 8.32341424, 8.07357711, 7.27302616, 6.73660684, 6.66722819, 7.29408637, 7.00046542, 6.46322019, 6.07150988, 6.00207234, 5.8818402, 6.82443881, 7.20212882, 7.52167696, 7.88857771, 8.351627, 8.36547023, 8.24802846, 8.18520693, 7.92420816, 7.64926024, 7.87944972, 7.82118727, 8.02091833, 7.93071882, 7.75789457, 7.5416447, 6.94430133, 6.65907535, 6.67454591, 7.25493614, 7.76939457, 7.55357806, 6.61479472, 7.17641357, 7.24664082, 8.62732387, 8.66913548, 8.70925667, 9.0477017, 8.24558224, 8.4330502, 8.44366397, 8.17995798, 8.1875752, 9.33296518, 9.66567041, 9.88581085, 8.95449382, 8.3587624, 9.20584448, 8.90605388, 8.87494884, 9.12694892, 8.35055177, 7.91879933, 7.78867253, 8.22800878, 9.03685287, 12.49630018, 11.11819755, 10.98869374, 10.65897176, 10.36444573, 10.052609, 10.87627021, 10.07379564, 10.02233847, 9.62022856, 11.21575473, 10.85483543, 11.67324627, 11.89234248, 11.10068132, 10.06942096, 8.50405894, 8.13168561, 8.83616476, 8.35675085, 8.33616802, 8.35675085, 9.02209801, 9.5530404, 9.44738836, 10.89645958, 11.44771721, 11.79943601, 10.7765335, 11.1453622, 10.74874776, 10.55195175, 10.34494483, 9.83813522, 11.26931785, 11.20641798, 10.51555027, 10.90808954, 11.80923545, 11.68300879, 11.60313809, 7.95163365, 7.77213815, 7.54209557, 7.30603673, 7.17842173, 8.25899805, 8.56494995, 10.44245578, 11.08542758, 11.74129079, 11.67979686, 12.94362214, 11.96285343, 11.8289847, 11.01388413, 10.6793698, 11.20662595, 11.97684701, 12.46383177, 11.34178655, 12.12477078, 12.48698059, 12.89325064, 12.07470295, 12.6777319, 10.91689448, 10.7676326, 10.66710434] I know cross correlation is the right term to use, but after a while I still don't get the idea of using scipy.signal.correlate and numpy.correlate, because all I got is an array full of NaNs. So clearly I need some more knowledge in this area. What I expect to achieve is probably a plot like those in the answer section of this thread How to make a correlation plot with a certain lag of two time series where I can see at how many hours the time lag is most likely. Thank you a lot in advance!
[ "With the given data, you can use the numpy and matplotlib modules to achieve the desired result.\nso, you can do something like this:\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\n\nx = np.array(inside_humidity)\ny = np.array(outside_humidity)\n\nfig = plt.figure()\n\n# fit a curve of your choice\na, b = np.polyfit(inside_humidity, outside_humidity, 1)\ny_fit = a * x + b\n\n# scatter plot, and fitted plot (best fit used)\nplt.scatter(inside_humidity, outside_humidity)\nplt.plot(x, y_fit)\n\nplt.show()\n\nwhich gives this:\n\n" ]
[ 0 ]
[]
[]
[ "cross_correlation", "numpy", "python", "scipy", "time_series" ]
stackoverflow_0074620148_cross_correlation_numpy_python_scipy_time_series.txt
Q: Azure Cognitive Services / Speech-to-text: Transcribe compressed PCMU (mu-law) wav files Using Azure Speech Service, I'm trying to transcribe a bunch a wav files (compressed in the PCMU aka mu-law format). I came up with the following code based on the articles referenced below. The code works fine sometimes with few files, but I keep getting Segmentation fault errors while looping a bigger list of files (~50) and it never break on the same file (could be 2nd, 15th or 27th). Also, when running a subset of files, transcription results seems the same with or without the decompression part of the code which makes me wonder if the decompression method recommended by Microsoft works at all. import azure.cognitiveservices.speech as speechsdk def azurespeech_transcribe(audio_filename): class BinaryFileReaderCallback(speechsdk.audio.PullAudioInputStreamCallback): def __init__(self, filename: str): super().__init__() self._file_h = open(filename, "rb") def read(self, buffer: memoryview) -> int: try: size = buffer.nbytes frames = self._file_h.read(size) buffer[:len(frames)] = frames return len(frames) except Exception as ex: print('Exception in `read`: {}'.format(ex)) raise def close(self) -> None: try: self._file_h.close() except Exception as ex: print('Exception in `close`: {}'.format(ex)) raise compressed_format = speechsdk.audio.AudioStreamFormat( compressed_stream_format=speechsdk.AudioStreamContainerFormat.MULAW ) callback = BinaryFileReaderCallback(filename=audio_filename) stream = speechsdk.audio.PullAudioInputStream( stream_format=compressed_format, pull_stream_callback=callback ) speech_config = speechsdk.SpeechConfig( subscription="<my_subscription_key>", region="<my_region>", speech_recognition_language="en-CA" ) audio_config = speechsdk.audio.AudioConfig(stream=stream) speech_recognizer = speechsdk.SpeechRecognizer(speech_config, audio_config) result = speech_recognizer.recognize_once() return result.text Code is running on WSL. I have already tried: Logging a more meaningful error with faulthandler module Increasing Python stack limit: resource.setrlimit(resource.RLIMIT_STACK, (resource.RLIM_INFINITY, resource.RLIM_INFINITY)) Adding some sleep timers References: How to recognize speech How to use compressed input audio A: I tried to work on a similar dataset, and I didn’t get any segmentation fault. Check with the subscription and deployment pattern with pricing tier. Implemented the same with the custom speech to text translator and it worked in the segmentation also. Check with the pricing tier which is creating segmentation fault Check with the subscription allowance Check to train in custom speech studio and test. The segmentation differs from the location to location and the pricing tier. After running the syntax, I didn't get any segmentation error as the pricing tier is suitable for the volume of the data. A: From 1.24.0 Speech SDK version (and onwards), you can stream ALAW/MULAW encoded data directly to speech service (without the need of Gstreamer) by using AudioStreamWaveFormat (https://learn.microsoft.com/en-us/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.audiostreamwaveformat?view=azure-python). This way there is less complexity involved (no Gstreamer). encoded_format = msspeech.audio.AudioStreamFormat(samples_per_second=16000, bits_per_sample=16, channels=1, wave_stream_format=msspeech.AudioStreamWaveFormat.MULAW)
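A sketch wiring the second answer's wave format into the question's existing pull stream (requires Speech SDK 1.24.0 or later; telephony mu-law is commonly 8 kHz, 8-bit mono, so these numbers are assumptions to be matched against the actual wav headers):
import azure.cognitiveservices.speech as speechsdk

wave_format = speechsdk.audio.AudioStreamFormat(
    samples_per_second=8000, bits_per_sample=8, channels=1,   # assumed; check the wav headers
    wave_stream_format=speechsdk.AudioStreamWaveFormat.MULAW)

callback = BinaryFileReaderCallback(filename=audio_filename)
stream = speechsdk.audio.PullAudioInputStream(
    stream_format=wave_format, pull_stream_callback=callback)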
Azure Cognitive Services / Speech-to-text: Transcribe compressed PCMU (mu-law) wav files
Using Azure Speech Service, I'm trying to transcribe a bunch a wav files (compressed in the PCMU aka mu-law format). I came up with the following code based on the articles referenced below. The code works fine sometimes with few files, but I keep getting Segmentation fault errors while looping a bigger list of files (~50) and it never break on the same file (could be 2nd, 15th or 27th). Also, when running a subset of files, transcription results seems the same with or without the decompression part of the code which makes me wonder if the decompression method recommended by Microsoft works at all. import azure.cognitiveservices.speech as speechsdk def azurespeech_transcribe(audio_filename): class BinaryFileReaderCallback(speechsdk.audio.PullAudioInputStreamCallback): def __init__(self, filename: str): super().__init__() self._file_h = open(filename, "rb") def read(self, buffer: memoryview) -> int: try: size = buffer.nbytes frames = self._file_h.read(size) buffer[:len(frames)] = frames return len(frames) except Exception as ex: print('Exception in `read`: {}'.format(ex)) raise def close(self) -> None: try: self._file_h.close() except Exception as ex: print('Exception in `close`: {}'.format(ex)) raise compressed_format = speechsdk.audio.AudioStreamFormat( compressed_stream_format=speechsdk.AudioStreamContainerFormat.MULAW ) callback = BinaryFileReaderCallback(filename=audio_filename) stream = speechsdk.audio.PullAudioInputStream( stream_format=compressed_format, pull_stream_callback=callback ) speech_config = speechsdk.SpeechConfig( subscription="<my_subscription_key>", region="<my_region>", speech_recognition_language="en-CA" ) audio_config = speechsdk.audio.AudioConfig(stream=stream) speech_recognizer = speechsdk.SpeechRecognizer(speech_config, audio_config) result = speech_recognizer.recognize_once() return result.text Code is running on WSL. I have already tried: Logging a more meaningful error with faulthandler module Increasing Python stack limit: resource.setrlimit(resource.RLIMIT_STACK, (resource.RLIM_INFINITY, resource.RLIM_INFINITY)) Adding some sleep timers References: How to recognize speech How to use compressed input audio
[ "I tried to work on a similar dataset, and I didn’t get any segmentation fault. Check with the subscription and deployment pattern with pricing tier. Implemented the same with the custom speech to text translator and it worked in the segmentation also.\n\nCheck with the pricing tier which is creating segmentation fault\nCheck with the subscription allowance\nCheck to train in custom speech studio and test.\n\n\nThe segmentation differs from the location to location and the pricing tier.\n\nAfter running the syntax, I didn't get any segmentation error as the pricing tier is suitable for the volume of the data.\n", "From 1.24.0 Speech SDK version (and onwards), you can stream ALAW/MULAW encoded data directly to speech service (without the need of Gstreamer) by using AudioStreamWaveFormat (https://learn.microsoft.com/en-us/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.audiostreamwaveformat?view=azure-python). This way there is less complexity involved (no Gstreamer).\nencoded_format = msspeech.audio.AudioStreamFormat(samples_per_second=16000, bits_per_sample=16,\n channels=1, wave_stream_format=msspeech.AudioStreamWaveFormat.MULAW)\n\n" ]
[ 0, 0 ]
[]
[]
[ "azure_cognitive_services", "python", "speech_recognition", "speech_to_text", "wav" ]
stackoverflow_0074197867_azure_cognitive_services_python_speech_recognition_speech_to_text_wav.txt
Q: Create new dataframe from the highest values in a column I have the following dataframe df: topic num 0 a01 1 1 a01 1 2 a01 2 3 a02 1 4 a02 3 5 a02 2 6 a02 3 7 a03 2 8 a03 1 And I need to create a new dataframe newdf, where each row corresponds to the topic and the maximum number for each topic, like the following: topic num 0 a01 2 1 a02 3 2 a03 2 I've tried to use the max() function from pandas, but to no avail. What I don't seem to get is how I'm going to iterate through each row and find the highest value corresponding to the topic. How do I separate a01 from a02, so that I can get the maximum value for each? I've also tried transposing, but the same doubt keeps appearing. A: See Get the row(s) which have the max value in groups using groupby Example: new_df = df.groupby(['topic'], sort=False)['num'].max() A: You can use GroupBy.max with numeric_only=True: newdf= df.groupby("topic", as_index=False).max(numeric_only=True) Output: print(newdf) topic num 0 a01 2 1 a02 3 2 a03 2
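If the frame has more columns than topic and num and the whole winning row should be kept, the idxmax pattern is a common alternative (a sketch on the question's data):
import pandas as pd

df = pd.DataFrame({
    "topic": ["a01", "a01", "a01", "a02", "a02", "a02", "a02", "a03", "a03"],
    "num":   [1, 1, 2, 1, 3, 2, 3, 2, 1],
})

# One row per topic: the (first) row holding that topic's maximum num
newdf = df.loc[df.groupby("topic")["num"].idxmax()].reset_index(drop=True)
print(newdf)   # topics a01, a02, a03 with nums 2, 3, 2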
Create new dataframe from the highest values in a column
I have the following dataframe df: topic num 0 a01 1 1 a01 1 2 a01 2 3 a02 1 4 a02 3 5 a02 2 6 a02 3 7 a03 2 8 a03 1 And I need to create a new dataframe newdf, where each row corresponds to the topic and the maximum number for each topic, like the following: topic num 0 a01 2 1 a02 3 2 a03 2 I've tried to use the max() function from pandas, but to no avail. What I don't seem to get is how I'm going to iterate through each row and find the highest value corresponding to the topic. How do I separate a01 from a02, so that I can get the maximum value for each? I've also tried transposing, but the same doubt keeps appearing.
[ "See Get the row(s) which have the max value in groups using groupby\nExample:\nnew_df = df.groupby(['topic'], sort=False)['num'].max()\n\n", "You can use GroupBy.max with numeric_only=True:\nnewdf= df.groupby(\"topic\", as_index=False).max(numeric_only=True)\n\nOutput:\nprint(newdf)\n\n topic num\n0 a01 2\n1 a02 3\n2 a03 2\n\n" ]
[ 0, 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074620828_dataframe_pandas_python.txt
Q: subprocess.check_output with grep command fails when grep finds no matches I'm trying to search a text file and retrieve lines containing a specific set of words. This is the code I'm using: tyrs = subprocess.check_output('grep "^A" %s | grep TYR' % pocket_location, shell = True).split('\n') This works fine when the file contains at least one line that grep identifies. But when grep doesn't identify any lines, grep returns exit status 1 and I get the following error: Traceback (most recent call last): File "../../Python_scripts/cbs_wrapper2.py", line 324, in <module> tyrs = subprocess.check_output('grep "^ATOM" %s | grep TYR' % pocket_location, shell = True).split('\n') File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 544, in check_output raise CalledProcessError(retcode, cmd, output=output) subprocess.CalledProcessError: Command 'grep "^ATOM" cbsPrediction_files/1u9c_clean/1u9c_clean_fpocket_out/pockets/pocket0_atm.pdb | grep TYR' returned non-zero exit status 1 How can I avoid this issue? I just want subprocess.check_output to return an empty string if grep doesn't find anything. Thanks A: I just want subprocess.check_output to return an empty string if grep doesn't find anything. Well, too bad. grep considers no matches to be failure, and the whole point of the check in check_output is to check for failure, so you're explicitly asking to do things this way. Here are the relevant docs: If the return code was non-zero it raises a CalledProcessError. The CalledProcessError object will have the return code in the returncode attribute and any output in the output attribute. And for grep: The following exit values shall be returned: 0 One or more lines were selected. 1 No lines were selected. >1 An error occurred. So, if you want to treat "no lines" as success, but actual errors as errors, you have to handle that 1 value differently than other non-zero values. And check_output has no idea that you want to do that. So, either you have to handle the CalledProcessError, or you have to do your own checking. In other words, either this: try: tyrs = subprocess.check_output('grep "^A" %s | grep TYR' % pocket_location, shell = True).split('\n') except subprocess.CalledProcessError as e: if e.returncode > 1: raise tyrs = [] … or this: p = subprocess.Popen('grep "^A" %s | grep TYR' % pocket_location, shell=True, stdout=subprocess.PIPE) output, _ = p.communicate() if p.returncode == 1: # no matches found tyrs = [] elif p.returncode == 0: # matches found tyrs = output.split('\n') else: # error, do something with it A: tyrs = subprocess.check_output('grep "^A" %s | grep TYR || true' % pocket_location, shell = True).split('\n') A: For future reference, if anyone is looking for a similar solution using pgrep - import subprocess var = subprocess.check_output("pgrep -af 'python3 choice_multi.py CMC0001'", shell=True, text=True) var = var.splitlines() var = [x for x in var if '/bin/sh' not in x] print(var ) This will come as an irritating error if you do pgrep because it also shows the PID of the temporary process being created. Now the /bin/sh can be different for other systems. Alternatively, there are two other solutions that I thought as inefficient - You can just delete the first item in the list. You can check the length of the list and scan for the above 1 only because all of them will have a minimum 1 PID. References - Is there a simple way to delete a list element by value?
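Since the question is happy with alternatives to the shell pipeline, the exit-code issue can be sidestepped entirely by filtering in Python (a sketch; it returns an empty list when nothing matches, which is exactly the behaviour asked for):
def grep_tyr(path):
    # Equivalent of: grep "^ATOM" path | grep TYR
    with open(path) as fh:
        return [line.rstrip("\n") for line in fh
                if line.startswith("ATOM") and "TYR" in line]

tyrs = grep_tyr(pocket_location)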
subprocess.check_output with grep command fails when grep finds no matches
I'm trying to search a text file and retrieve lines containing a specific set of words. This is the code I'm using: tyrs = subprocess.check_output('grep "^A" %s | grep TYR' % pocket_location, shell = True).split('\n') This works fine when the file contains at least one line that grep identifies. But when grep doesn't identify any lines, grep returns exit status 1 and I get the following error: Traceback (most recent call last): File "../../Python_scripts/cbs_wrapper2.py", line 324, in <module> tyrs = subprocess.check_output('grep "^ATOM" %s | grep TYR' % pocket_location, shell = True).split('\n') File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 544, in check_output raise CalledProcessError(retcode, cmd, output=output) subprocess.CalledProcessError: Command 'grep "^ATOM" cbsPrediction_files/1u9c_clean/1u9c_clean_fpocket_out/pockets/pocket0_atm.pdb | grep TYR' returned non-zero exit status 1 How can I avoid this issue? I just want subprocess.check_output to return an empty string if grep doesn't find anything. Thanks
[ "\nI just want subprocess.check_output to return an empty string if grep doesn't find anything.\n\nWell, too bad. grep considers no matches to be failure, and the whole point of the check in check_output is to check for failure, so you're explicitly asking to do things this way. Here are the relevant docs:\n\nIf the return code was non-zero it raises a CalledProcessError. The CalledProcessError object will have the return code in the returncode attribute and any output in the output attribute.\n\nAnd for grep:\nThe following exit values shall be returned:\n 0 One or more lines were selected.\n 1 No lines were selected.\n >1 An error occurred.\n\nSo, if you want to treat \"no lines\" as success, but actual errors as errors, you have to handle that 1 value differently than other non-zero values. And check_output has no idea that you want to do that.\nSo, either you have to handle the CalledProcessError, or you have to do your own checking. In other words, either this:\ntry:\n tyrs = subprocess.check_output('grep \"^A\" %s | grep TYR' % pocket_location, shell = True).split('\\n')\nexcept subprocess.CalledProcessError as e:\n if e.returncode > 1:\n raise\n tyrs = []\n\n… or this:\np = subprocess.Popen('grep \"^A\" %s | grep TYR' % pocket_location, shell=True,\n stdout=subprocess.PIPE)\noutput, _ = p.communicate()\nif p.returncode == 1: # no matches found \n tyrs = []\nelif p.returncode == 0: # matches found\n tyrs = output.split('\\n')\nelse:\n # error, do something with it\n\n", "tyrs = subprocess.check_output('grep \"^A\" %s | grep TYR || true' % pocket_location, shell = True).split('\\n')\n\n", "For future reference, if anyone is looking for a similar solution using pgrep -\nimport subprocess \nvar = subprocess.check_output(\"pgrep -af 'python3 choice_multi.py CMC0001'\", shell=True, text=True)\nvar = var.splitlines()\nvar = [x for x in var if '/bin/sh' not in x]\nprint(var )\n\nThis will come as an irritating error if you do pgrep because it also shows the PID of the temporary process being created.\nNow the /bin/sh can be different for other systems.\nAlternatively, there are two other solutions that I thought as inefficient -\n\nYou can just delete the first item in the list.\nYou can check the length of the list and scan for the above 1 only because all of them will have a minimum 1 PID.\n\nReferences -\nIs there a simple way to delete a list element by value?\n" ]
[ 14, 6, 0 ]
[]
[]
[ "grep", "python", "subprocess" ]
stackoverflow_0020983498_grep_python_subprocess.txt
Q: smallest number of sublists with maximum length m and tolerance k I need to create a program that takes a sorted list of integers, x, and outputs the smallest number of sublists with the following properties: length <= m smallest item in sublist + 2k >= largest item in sublist It is important to note I don't actually need to find the sublists themselves, just how many of them there are. I've tried writing this function but the number it creates is too high. I know it has to do with the way I'm splitting the list but I can't figure out a better way to do it. x is the sorted list, k is the tolerance, m is the max sublist length, n is the length of x, time is the number of sublists def split(x,k,m,n): time = 0 if n<=m: try: if x[-1]<=x[0]+2*k: time +=1 else: time += split(x[0:n-1],k,m,n-1) time += split(x[n-1:n],k,m,1) except: pass else: time += split(x[0:n-m],k,m,n-m) time += split(x[n-m:n],k,m,m) return time A: Use combinations to find the ordered combinations of a list. from itertools import combinations def split(lst, t, m): n = len(lst) counter = 0 for i in range(1, m+1): for c in combinations(lst, r=i): if min(c) + t >= max(c): counter += 1 return counter x = [1, 2, 5, 9] t, m = 1, 3 c = split(x, t, m) print(f'{c}-sublists of at most length {m} with tolerance {t}')
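Note the answer counts every qualifying combination rather than partitioning the list. If the intent is instead the fewest contiguous groups covering the sorted list (one plausible reading of the question), a greedy pass works; a sketch under that assumption:
def min_groups(x, k, m):
    # Close the current group when adding x[i] would exceed length m
    # or push the spread past 2*k; greedy is optimal on a sorted list.
    count, start = 0, 0
    for i in range(len(x)):
        if i - start + 1 > m or x[i] > x[start] + 2 * k:
            count += 1
            start = i
    return count + 1 if x else 0

print(min_groups([0, 1, 2, 3, 4, 5], k=1, m=3))   # 2 -> [0,1,2] and [3,4,5]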
smallest number of sublists with maximum length m and tolerance k
I need to create a program that takes a sorted list of integers, x, and outputs the smallest number of sublists with the following properties: length <= m smallest item in sublist + 2k >= largest item in sublist It is important to note I don't actually need to find the sublists themselves, just how many of them there are. I've tried writing this function but the number it creates is too high. I know it has to do with the way I'm splitting the list but I can't figure out a better way to do it. x is the sorted list, k is the tolerance, m is the max sublist length, n is the length of x, time is the number of sublists def split(x,k,m,n): time = 0 if n<=m: try: if x[-1]<=x[0]+2*k: time +=1 else: time += split(x[0:n-1],k,m,n-1) time += split(x[n-1:n],k,m,1) except: pass else: time += split(x[0:n-m],k,m,n-m) time += split(x[n-m:n],k,m,m) return time
[ "Use combinations to find the ordered combinations of a list.\nfrom itertools import combinations\n\ndef split(lst, t, m):\n n = len(lst)\n counter = 0\n for i in range(1, m+1):\n for c in combinations(x, r=i):\n if min(c) + t >= max(c):\n counter += 1\n return counter\n\n\nx = [1, 2, 5, 9]\nt, m = 1, 3\nc = split(x, t, m)\nprint(f'{c}-sublists of at most length {m} with tolerance {t}')\n\n" ]
[ 0 ]
[]
[]
[ "list", "python", "recursion" ]
stackoverflow_0074618948_list_python_recursion.txt
Q: Are chunks returned by 'np.array_split()' ordered by descending sizes? In numpy.array_split using an integer, when the number of parts isn't a divisor of the size on the axis considered, some parts may be smaller or larger, e.g. import numpy as np [chunk.shape[0] for chunk in np.array_split(np.arange(12), 5)] returns chunk sizes: [3, 3, 2, 2, 2] While the documentation doesn't mention it, it seems chunks with the smallest sizes are at the end of the list. And trying for a sample confirms this is true for arrays up to 200 elements, whatever the number of chunks required. import numpy as np not_ordered = 0 for sample_size in np.arange(2,200): a = np.arange(sample_size) for n in np.arange(2,sample_size//2): chunks = np.array_split(a,n) sizes = [chunk.shape[0] for chunk in chunks] for i in np.arange(1, len(sizes)): if sizes[i] > sizes[i-1]: not_ordered += 1 break print (f'Not ordered: {not_ordered}') Is the descending order guaranteed by the algorithm behind the function? or is this something not to count on when using the result returned? A: The numpy.array_split docs say For an array of length l that should be split into n sections, it returns l % n sub-arrays of size l//n + 1 and the rest of size l//n. As l and n are fixed in each run, we can conclude that for every chunk in the returned list, the next one will be no larger than the current one. Edit: when in doubt, as this is Python, we can read the code. As you are interested in the case where indices_or_sections is an integer, the relevant piece is: Nsections = int(indices_or_sections) if Nsections <= 0: raise ValueError('number sections must be larger than 0.') Neach_section, extras = divmod(Ntotal, Nsections) section_sizes = ([0] + extras * [Neach_section+1] + (Nsections-extras) * [Neach_section]) div_points = _nx.array(section_sizes, dtype=_nx.intp).cumsum() where Ntotal is the number of elements in your input array. As you can see, there are Neach_section+1-sized chunks followed by Neach_section-sized ones. A: No, they are not sorted during the split, so the groups may not be in order. I am using the qcut function to guarantee ranked groups. I tried using np.array_split; however, the data was not sorted into the deciles, which caused issues for my project. Below is a code example of how I got ranked groups (deciles in this case). df['Decile'] = pd.qcut(dev2['variable_name'], 10, labels=False)
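A quick brute-force confirmation of the docs-based argument in the first answer (a sketch; it checks that chunk sizes never increase, for all small array lengths and section counts):
import numpy as np

for length in range(1, 60):
    for n in range(1, length + 1):
        sizes = [c.size for c in np.array_split(np.arange(length), n)]
        assert sizes == sorted(sizes, reverse=True)
print("chunk sizes are always non-increasing")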
Are chunks returned by 'np.array_split()' ordered by descending sizes?
In numpy.array_split using an integer, when the number of parts isn't a divisor of the size on the axis considered, some parts may be smaller or larger, e.g.

import numpy as np
[chunk.shape[0] for chunk in np.array_split(np.arange(12), 5)]

returns chunk sizes: [3, 3, 2, 2, 2]
While the documentation doesn't mention it, it seems chunks with the smallest sizes are at the end of the list. And trying for a sample confirms this is true for arrays up to 200 elements, whatever the number of chunks required.

import numpy as np

not_ordered = 0
for sample_size in np.arange(2, 200):
    a = np.arange(sample_size)
    for n in np.arange(2, sample_size//2):
        chunks = np.array_split(a, n)
        sizes = [chunk.shape[0] for chunk in chunks]
        for i in np.arange(1, len(sizes)):
            if sizes[i] > sizes[i-1]:
                not_ordered += 1
                break
print(f'Not ordered: {not_ordered}')

Is the descending order guaranteed by the algorithm behind the function? Or is this something not to count on when using the result returned?
[ "numpy.array_split docs says\n\nFor an array of length l that should be split into n sections, it\nreturns l % n sub-arrays of size l//n + 1 and the rest of size l//n.\n\nAs l and n are fixed in each run, we can conclude that for every element in returned array next one will be no longer than current.\nEdit: when in doubt, as this is python, we can read the code. As you are interesting in case where indices_or_sections is integer relevant piece is:\nNsections = int(indices_or_sections)\nif Nsections <= 0:\n raise ValueError('number sections must be larger than 0.')\nNeach_section, extras = divmod(Ntotal, Nsections)\nsection_sizes = ([0] +\n extras * [Neach_section+1] +\n (Nsections-extras) * [Neach_section])\ndiv_points = _nx.array(section_sizes, dtype=_nx.intp).cumsum()\n\nwhere Ntotal is number of elements in your input array. As you can see there are Neach_section+1s followed by Neach_sectionss.\n", "No they are not sorted during the split so the groups may not be in order. I am using the qcut function to guarantee ranked groups. I tried using np.array_split however the data was not sorted in the deciles which causes issue for my project. Below is a code example of how I got ranked groups (deciles in this case).\ndf['Decile'] = pd.qcut(dev2['variable_name'], 10, labels=False)\n" ]
[ 1, 1 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0065793051_numpy_python.txt
Q: Trying to check if tuple item is nan I have the below for loop and am trying to check first if the tuple (row) item in position 10 is NaN:

i = 0
for row in df.iterrows():
    if row[1][10] != None:
        names = row[1][10].split(',')
        for name in names:
            df2.loc[i, :] = row[1][:]
            i = i + 1
    else:
        i = i + 1

I thought I could use if row[1][10] != None: but it doesn't seem to work. Anyone know the solution?

A: Can use pd.isnull(row[1][10]) instead of if row[1][10] != None.
Example:

i = 0
for row in df.iterrows():
    if pd.isnull(row[1][10]):
        df2.loc[i, :] = row[1][:]
        i = i + 1
    else:
        names = row[1][10].split(',')
        for name in names:
            df2.loc[i, :] = row[1][:]
            df2.loc[i, 'name'] = name
            i = i + 1

Also please do give feedback about this solution.
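A short illustration of why the != None check misses NaN: NaN is a float value, not None, so only NaN-aware tests catch it (illustrative snippet, not from the original post):

import numpy as np
import pandas as pd

value = np.nan
print(value != None)     # True  -- NaN is not None, so the branch is taken anyway
print(value == value)    # False -- NaN never equals itself
print(pd.isnull(value))  # True  -- the NaN-aware check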
Trying to check if tuple item is nan
I have the below for loop and am trying to check first if the tuple (row) item in position 10 is NaN:

i = 0
for row in df.iterrows():
    if row[1][10] != None:
        names = row[1][10].split(',')
        for name in names:
            df2.loc[i, :] = row[1][:]
            i = i + 1
    else:
        i = i + 1

I thought I could use if row[1][10] != None: but it doesn't seem to work. Anyone know the solution?
[ "Can use pd.isnull(row[1][10]) instead of if row[1][10] != None.\nExample:\ni=0\nfor row in df.iterrows():\n if pd.isnull(row[1][10]):\n df2.loc[i,:] = row[1][:]\n i=i+1\n else:\n names = row[1][10].split(',')\n for name in names:\n df2.loc[i,:] = row[1][:]\n df2.loc[i,'name'] = name\n i=i+1\n\nAlso please do give feedback about this solution.\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074620897_pandas_python.txt
Q: myRange function giving output of None. Why is this? I am to create a function called myRange that behaves like range. This is for class, and the instructions tell me to use Python's help for range, but I am not understanding it at all. I am a complete greenhorn with Python. Please do not provide modules or methods.

def myRange(stop, start=None, step=None):
    outputList = []
    if stop == 0:
        start = 0
        stop = start
    step = 1
    return()

print(myRange(10))

I am expecting just this portion to give the output range of 10 displayed in a list.

A:
# define a function called 'myRange' which takes 3 arguments,
# stop, start and step, where start and step default to None
def myRange(stop, start=None, step=None):

    # set the value of outputList to an empty list
    outputList = []

    # if stop is 0, do this block of code below
    if stop == 0:
        # set start to 0
        start = 0
        # set stop to 0 (this is a little
        # redundant as we already know stop is 0 at this point)
        stop = start

    # set step to 1
    step = 1

    # return an empty Tuple
    return()

print(myRange(10))

So you've definitely made a start and have some code down and running, and this is always a huge milestone. It'll just take some figuring out, and then I'm sure you'll get it. As you don't want specific code, here are some pointers:

What do you want the function to return? Right now it doesn't return anything. range in python returns a sequence of numbers, from start to stop, counting up by step. It goes up by 1 by default and starts at 0 by default.
How would you describe the instructions for a human to perform this action? This can help you figure out what the code needs to do.
The code indented under an if statement will only be run if that if statement is true.

Hopefully that'll get you going in the right direction. Happy to help some more if you get stuck.
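For reference, one possible completed implementation, sketched here for illustration only since the answer deliberately leaves the exercise open; the default handling mirrors the built-in range, where a single argument is the stop value:

def myRange(stop, start=None, step=None):
    if start is None:
        start = 0
    if step is None:
        step = 1
    outputList = []
    i = start
    while (step > 0 and i < stop) or (step < 0 and i > stop):
        outputList.append(i)
        i += step
    return outputList

print(myRange(10))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]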
myRange function giving output of None. Why is this?
I am to create a function called myRange that behaves like range. This is for class, and the instructions tell me to use Python's help for range, but I am not understanding it at all. I am a complete greenhorn with Python. Please do not provide modules or methods.

def myRange(stop, start=None, step=None):
    outputList = []
    if stop == 0:
        start = 0
        stop = start
    step = 1
    return()

print(myRange(10))

I am expecting just this portion to give the output range of 10 displayed in a list.
[ "# define a function called 'myRange' which takes 3 arguments,\n# stop, start and step, where start and step default to None\ndef myRange(stop,start=None,step=None):\n\n # set the value of outputList to an empty list\n outputList = []\n\n # if stop is 0, do this block of code below\n if stop == 0:\n # set start to 0\n start= 0\n # set stop to 0 (this is a little\n # redundant as we already know stop is 0 at this point)\n stop = start\n\n # set step to 1\n step = 1\n\n # return an empty Tuple\n return()\n\n\nprint(myRange(10))\n\nSo you've definitely made a start and have some code down and running and this is always a huge milestone. It'll just take some figuring out and then I'm sure you'll get it. As you don't want specific code, here are some pointers:\n\nWhat do you want the function to return? Right now it doesn't return anything. range in python returns a sequence of numbers, from start to stop, counting up by step. It goes up by 1 by default and starts at 0 by default\n\nHow would you describe the instructions for a human to perform this action? This can help you figure out what the code needs to do\n\nThe code indented under an if statement will only be run if that if statement is true\n\n\nHopefully that'll get you going in the right direction. Happy to help some more if you get stuck\n" ]
[ 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0074620952_list_python.txt
Q: Polars - How to create dynamic rolling window for calculations Consider the following dataframe of sensor readings laid out along a straight line:

Location Start (KM)  Location End (KM)  Readings
1                    1.1                7
1.1                  1.23               null
1.23                 1.3                8
1.3                  1.34               null
1.34                 1.4                null
1.4                  1.5                5
1.5                  1.65               6

I am trying to create a rolling 150m lookahead window to calculate the percentage of non nulls within that window, expecting the results to look a little like the below:

Location Start (KM)  Location End (KM)  Readings  Rolling % Non Null Readings (%)
1                    1.1                7         67
1.1                  1.23               null      13
1.23                 1.3                8         47
1.3                  1.34               null      33
1.34                 1.4                null      60
1.4                  1.5                5         100
1.5                  1.65               6         100

Alternatively, a centered window would also work (but the example above is for a look-ahead window). It seems that the style of creating a dynamic rolling window for the above is somewhat supported via the Polars groupby_dynamic, but that seems to only work for temporal values, whereas the values in my columns are floats since they represent spatial locations. The rolling_apply method also seems to provide some means to an end, however it creates a rolling window over a fixed number of rows, which doesn't quite suit this use case, as the number of rows to include in the window will differ depending on certain conditions (in this case the length of a particular reading). How should I go about performing the following rolling calculations? I am trying to avoid writing explicit loops to loop through each row and check multiple conditions, but I can't seem to figure out how to do so with the inbuilt methods. I have attached a simple working example below written in Pandas that does the job, but in a naive loop-heavy way, and was hoping to implement a faster version in Polars using some variant of its groupby_dynamic or rolling function:

df = pd.DataFrame({'Location Start (KM)': [1, 1.1, 1.23, 1.3, 1.34, 1.4, 1.5],
                   'Location End (KM)': [1.1, 1.23, 1.3, 1.34, 1.4, 1.5, 1.65],
                   'Readings': [7, np.nan, 8, np.nan, np.nan, 5, 6]
                   })  # sample dataframe

# create manual for loop to simulate a rolling forward looking window
def calculate_perc_non_null(row, window_length):
    '''
    Naive function that calculates the percentage of non null readings within a
    forward rolling window of a dataframe.

    Takes in:
    row: pandas series object representing each row of the dataframe
    window_length: float that describes the window length in KMs
    '''
    window_start = row['Location Start (KM)']
    window_end = window_start + window_length  # generate window endpoints
    # readings that fall within the specified window
    eligible_readings = df.loc[(df['Location Start (KM)'] >= window_start) & (df['Location Start (KM)'] <= window_end)]
    # calculate number of non-nulls
    nulls = eligible_readings.loc[eligible_readings['Readings'].isnull()].copy()  # grab all null values
    # truncate ends to ensure no values outside the window are taken in
    nulls.loc[nulls['Location End (KM)'] > window_end, 'Location End (KM)'] = window_end
    total_length_of_nulls = (nulls['Location End (KM)'] - nulls['Location Start (KM)']).sum()
    non_null_perc = 100 * (1 - total_length_of_nulls / window_length)  # calculate percentage of non null as a percentage
    return non_null_perc

df['Rolling % Non Null Readings (%)'] = df.apply(lambda x: calculate_perc_non_null(x, window_length=0.15), axis=1)

Which outputs:

   Location Start (KM)  Location End (KM)  Readings  \
0                 1.00               1.10       7.0
1                 1.10               1.23       NaN
2                 1.23               1.30       8.0
3                 1.30               1.34       NaN
4                 1.34               1.40       NaN
5                 1.40               1.50       5.0
6                 1.50               1.65       6.0

   Rolling % Non Null Readings (%)
0                        66.666667
1                        13.333333
2                        46.666667
3                        33.333333
4                        60.000000
5                       100.000000
6                       100.000000

Edit on proposed solution (30/11/2022): ΩΠΟΚΕΚΡΥΜΜΕΝΟΣ's solution from 30/11/22 works perfectly fine; I've attached a few performance benchmarks below (not apples to apples because different dataframe libraries are used here, but it should illustrate how quick this runs).
Original pandas code: 11.3 ms ± 1.95 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
ΩΠΟΚΕΚΡΥΜΜΕΝΟΣ's solution: 527 µs ± 57.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Or about 25x speedup; the performance gaps widen when the underlying dataframe gets larger (I used a 1000x larger dummy frame):
Original pandas code: 10 s ± 580 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
ΩΠΟΚΕΚΡΥΜΜΕΝΟΣ's solution: 12.2 ms ± 1.18 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
Or 820x faster, so it scales well.
window_size = 150
(
    df
    .with_columns([
        (pl.col('^loc.*$') * 1_000).cast(pl.Int64),
        pl.col('readings').is_not_null().alias('is_not_null'),
    ])
    .with_column(
        (pl.col('loc_end') - pl.col('loc_start')).alias('duration'),
    )
    .groupby_rolling(
        index_column='loc_start',
        period=str(window_size) + "i",
        offset="0i",
        closed="left",
    )
    .agg([
        (
            (
                pl.col('duration').dot('is_not_null') -
                (pl.sum('duration') - window_size) * pl.col('is_not_null').last()
            ) / window_size
        ).alias('result'),
        pl.all().suffix('_val_list'),
    ])
)

shape: (7, 7)
┌───────────┬──────────┬────────────────────┬────────────────────┬───────────────────┬──────────────────────┬───────────────────┐
│ loc_start ┆ result   ┆ loc_start_val_list ┆ loc_end_val_list   ┆ readings_val_list ┆ is_not_null_val_list ┆ duration_val_list │
│ ---       ┆ ---      ┆ ---                ┆ ---                ┆ ---               ┆ ---                  ┆ ---               │
│ i64       ┆ f64      ┆ list[i64]          ┆ list[i64]          ┆ list[i64]         ┆ list[bool]           ┆ list[i64]         │
╞═══════════╪══════════╪════════════════════╪════════════════════╪═══════════════════╪══════════════════════╪═══════════════════╡
│ 1000      ┆ 0.666667 ┆ [1000, 1100]       ┆ [1100, 1230]       ┆ [7, null]         ┆ [true, false]        ┆ [100, 130]        │
├╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 1100      ┆ 0.133333 ┆ [1100, 1230]       ┆ [1230, 1300]       ┆ [null, 8]         ┆ [false, true]        ┆ [130, 70]         │
├╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 1230      ┆ 0.466667 ┆ [1230, 1300, 1340] ┆ [1300, 1340, 1400] ┆ [8, null, null]   ┆ [true, false, false] ┆ [70, 40, 60]      │
├╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 1300      ┆ 0.333333 ┆ [1300, 1340, 1400] ┆ [1340, 1400, 1500] ┆ [null, null, 5]   ┆ [false, false, true] ┆ [40, 60, 100]     │
├╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 1340      ┆ 0.6      ┆ [1340, 1400]       ┆ [1400, 1500]       ┆ [null, 5]         ┆ [false, true]        ┆ [60, 100]         │
├╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 1400      ┆ 1.0      ┆ [1400, 1500]       ┆ [1500, 1650]       ┆ [5, 6]            ┆ [true, true]         ┆ [100, 150]        │
├╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 1500      ┆ 1.0      ┆ [1500]             ┆ [1650]             ┆ [6]               ┆ [true]               ┆ [150]             │
└───────────┴──────────┴────────────────────┴────────────────────┴───────────────────┴──────────────────────┴───────────────────┘

Note: the third value in the result column (0.46667) differs from the result in your posting (0.73).
Does this get the ball rolling?
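A stripped-down illustration of the integer-index trick used in the answer above (an editor's sketch, not from the thread; the column names are invented for the example):

import polars as pl

# distances as integer metres so that "150i" can be used as the period
df = pl.DataFrame({'loc_start': [1000, 1100, 1230], 'readings': [7, None, 8]})
out = (
    df.groupby_rolling(index_column='loc_start', period='150i', offset='0i', closed='left')
      .agg([pl.col('readings').is_not_null().sum().alias('non_null_in_window')])
)
print(out)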
Polars - How to create dynamic rolling window for calculations
Consider the following dataframe of sensor readings laid out along a straight line:

Location Start (KM)  Location End (KM)  Readings
1                    1.1                7
1.1                  1.23               null
1.23                 1.3                8
1.3                  1.34               null
1.34                 1.4                null
1.4                  1.5                5
1.5                  1.65               6

I am trying to create a rolling 150m lookahead window to calculate the percentage of non nulls within that window, expecting the results to look a little like the below:

Location Start (KM)  Location End (KM)  Readings  Rolling % Non Null Readings (%)
1                    1.1                7         67
1.1                  1.23               null      13
1.23                 1.3                8         47
1.3                  1.34               null      33
1.34                 1.4                null      60
1.4                  1.5                5         100
1.5                  1.65               6         100

Alternatively, a centered window would also work (but the example above is for a look-ahead window). It seems that the style of creating a dynamic rolling window for the above is somewhat supported via the Polars groupby_dynamic, but that seems to only work for temporal values, whereas the values in my columns are floats since they represent spatial locations. The rolling_apply method also seems to provide some means to an end, however it creates a rolling window over a fixed number of rows, which doesn't quite suit this use case, as the number of rows to include in the window will differ depending on certain conditions (in this case the length of a particular reading). How should I go about performing the following rolling calculations? I am trying to avoid writing explicit loops to loop through each row and check multiple conditions, but I can't seem to figure out how to do so with the inbuilt methods. I have attached a simple working example below written in Pandas that does the job, but in a naive loop-heavy way, and was hoping to implement a faster version in Polars using some variant of its groupby_dynamic or rolling function:

df = pd.DataFrame({'Location Start (KM)': [1, 1.1, 1.23, 1.3, 1.34, 1.4, 1.5],
                   'Location End (KM)': [1.1, 1.23, 1.3, 1.34, 1.4, 1.5, 1.65],
                   'Readings': [7, np.nan, 8, np.nan, np.nan, 5, 6]
                   })  # sample dataframe

# create manual for loop to simulate a rolling forward looking window
def calculate_perc_non_null(row, window_length):
    '''
    Naive function that calculates the percentage of non null readings within a
    forward rolling window of a dataframe.

    Takes in:
    row: pandas series object representing each row of the dataframe
    window_length: float that describes the window length in KMs
    '''
    window_start = row['Location Start (KM)']
    window_end = window_start + window_length  # generate window endpoints
    # readings that fall within the specified window
    eligible_readings = df.loc[(df['Location Start (KM)'] >= window_start) & (df['Location Start (KM)'] <= window_end)]
    # calculate number of non-nulls
    nulls = eligible_readings.loc[eligible_readings['Readings'].isnull()].copy()  # grab all null values
    # truncate ends to ensure no values outside the window are taken in
    nulls.loc[nulls['Location End (KM)'] > window_end, 'Location End (KM)'] = window_end
    total_length_of_nulls = (nulls['Location End (KM)'] - nulls['Location Start (KM)']).sum()
    non_null_perc = 100 * (1 - total_length_of_nulls / window_length)  # calculate percentage of non null as a percentage
    return non_null_perc

df['Rolling % Non Null Readings (%)'] = df.apply(lambda x: calculate_perc_non_null(x, window_length=0.15), axis=1)

Which outputs:

   Location Start (KM)  Location End (KM)  Readings  \
0                 1.00               1.10       7.0
1                 1.10               1.23       NaN
2                 1.23               1.30       8.0
3                 1.30               1.34       NaN
4                 1.34               1.40       NaN
5                 1.40               1.50       5.0
6                 1.50               1.65       6.0

   Rolling % Non Null Readings (%)
0                        66.666667
1                        13.333333
2                        46.666667
3                        33.333333
4                        60.000000
5                       100.000000
6                       100.000000

Edit on proposed solution (30/11/2022): ΩΠΟΚΕΚΡΥΜΜΕΝΟΣ's solution from 30/11/22 works perfectly fine; I've attached a few performance benchmarks below (not apples to apples because different dataframe libraries are used here, but it should illustrate how quick this runs).
Original pandas code: 11.3 ms ± 1.95 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
ΩΠΟΚΕΚΡΥΜΜΕΝΟΣ's solution: 527 µs ± 57.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Or about 25x speedup; the performance gaps widen when the underlying dataframe gets larger (I used a 1000x larger dummy frame):
Original pandas code: 10 s ± 580 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
ΩΠΟΚΕΚΡΥΜΜΕΝΟΣ's solution: 12.2 ms ± 1.18 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
Or 820x faster, so it scales well.
[ "I may not understand what you need, but let's see if we can get the ball rolling.\nLet's start by using groupby_rolling and see if this gets us close to what is needed.\nFirst, let's convert the floats (kilometers) to integers (meters). That allows us to specify our period as an integer: 150i. We'll then calculate a duration (loc_end - loc_start) that we'll use in our calculations.\nAlso, we'll create a boolean (is_not_null) which will automatically be upcast to 0 or 1 in our calculations.\nThe basic idea is to allow groupby_rolling to calculate each window. We'll use a dot product to calculate our weighted values.\nSince each window will likely be more than 150 meters, we'll need to back out some overage from the last line that represents the amount over 150 (before dividing by 150)\nHere's what this would look like. (Note: I've also asked Polars to include the values it sees in each window -- as lists -- so that the results can be more easily inspected).\nwindow_size = 150\n(\n df\n .with_columns([\n (pl.col('^loc.*$') * 1_000).cast(pl.Int64),\n pl.col('readings').is_not_null().alias('is_not_null'),\n ])\n .with_column(\n (pl.col('loc_end') - pl.col('loc_start')).alias('duration'),\n )\n .groupby_rolling(\n index_column='loc_start',\n period=str(window_size) + \"i\",\n offset=\"0i\",\n closed=\"left\",\n )\n .agg([\n (\n (\n pl.col('duration').dot('is_not_null') -\n (pl.sum('duration') - window_size) * pl.col('is_not_null').last()\n ) / window_size\n ).alias('result'),\n pl.all().suffix('_val_list'),\n ])\n)\n\nshape: (7, 7)\nβ”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”\nβ”‚ loc_start ┆ result ┆ loc_start_val_list ┆ loc_end_val_list ┆ readings_val_list ┆ is_not_null_val_list ┆ duration_val_list β”‚\nβ”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚\nβ”‚ i64 ┆ f64 ┆ list[i64] ┆ list[i64] ┆ list[i64] ┆ list[bool] ┆ list[i64] β”‚\nβ•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ══════════β•ͺ════════════════════β•ͺ════════════════════β•ͺ═══════════════════β•ͺ══════════════════════β•ͺ═══════════════════║\nβ”‚ 1000 ┆ 0.666667 ┆ [1000, 1100] ┆ [1100, 1230] ┆ [7, null] ┆ [true, false] ┆ [100, 130] β”‚\nβ”œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”€\nβ”‚ 1100 ┆ 0.133333 ┆ [1100, 1230] ┆ [1230, 1300] ┆ [null, 8] ┆ [false, true] ┆ [130, 70] β”‚\nβ”œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”€\nβ”‚ 1230 ┆ 0.466667 ┆ [1230, 1300, 1340] ┆ [1300, 1340, 1400] ┆ [8, null, null] ┆ [true, false, false] ┆ [70, 40, 60] 
β”‚\nβ”œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”€\nβ”‚ 1300 ┆ 0.333333 ┆ [1300, 1340, 1400] ┆ [1340, 1400, 1500] ┆ [null, null, 5] ┆ [false, false, true] ┆ [40, 60, 100] β”‚\nβ”œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”€\nβ”‚ 1340 ┆ 0.6 ┆ [1340, 1400] ┆ [1400, 1500] ┆ [null, 5] ┆ [false, true] ┆ [60, 100] β”‚\nβ”œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”€\nβ”‚ 1400 ┆ 1.0 ┆ [1400, 1500] ┆ [1500, 1650] ┆ [5, 6] ┆ [true, true] ┆ [100, 150] β”‚\nβ”œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ•Œβ”€\nβ”‚ 1500 ┆ 1.0 ┆ [1500] ┆ [1650] ┆ [6] ┆ [true] ┆ [150] β”‚\nβ””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜\n\nNote: the third value in the result column (0.46667) differs from the result in your posting (0.73).\nDoes this get the ball rolling?\n" ]
[ 2 ]
[]
[]
[ "python", "python_polars" ]
stackoverflow_0074615958_python_python_polars.txt
Q: Rearrange/mix column pandas I have a table like this:

ID   Type       I/P  Value
ID1  Primary    I    8
ID2  Primary    I    3
ID3  Secondary  P    6
ID4  Secondary  I    2
ID5  Primary    P    3
ID6  Primary    I    4

I reorder it this way:

ID   Type       I/P  Value
ID1  Primary    I    8
ID6  Primary    I    4
ID2  Primary    I    3
ID5  Primary    P    3
ID3  Secondary  P    6
ID4  Secondary  I    2

But I was wondering if there is a way to rearrange/alternate the P/I values, something like this (alternate between I/P but keep the Primary type first, and get the bigger value per P/I):

ID   Type       I/P  Value
ID1  Primary    I    8
ID5  Primary    P    3
ID6  Primary    I    4
ID5  Primary    P    3
ID3  Secondary  P    6
ID4  Secondary  I    2

A: Here is one way to do it.
Note: your starting DF has two 'P' in the DF, the expected output has three 'P'; seems to be a typo.

# create a temp seq based on type and i/p
# count for 'I' and 'P' both starts from 0
# sort the result with type and seq
out = df.assign(seq=df.groupby(['Type', 'I/P']).cumcount()).sort_values(['Type', 'seq', 'I/P']).drop(columns='seq')
out

    ID       Type I/P  Value
0  ID1    Primary   I      8
4  ID5    Primary   P      3
1  ID2    Primary   I      3
5  ID6    Primary   I      4
3  ID4  Secondary   I      2
2  ID3  Secondary   P      6
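To see what the helper sequence in the answer below does before it is dropped, it can be materialized and inspected (illustrative snippet, not from the original post):

import pandas as pd

df = pd.DataFrame({'Type': ['Primary', 'Primary', 'Secondary', 'Secondary', 'Primary', 'Primary'],
                   'I/P': ['I', 'I', 'P', 'I', 'P', 'I']})
# cumcount numbers each (Type, I/P) pair 0, 1, 2, ... so sorting on it
# interleaves I and P within each Type: I0, P0, I1, ...
df['seq'] = df.groupby(['Type', 'I/P']).cumcount()
print(df.sort_values(['Type', 'seq', 'I/P']))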
Rearrange/mix column pandas
I have a table like this:

ID   Type       I/P  Value
ID1  Primary    I    8
ID2  Primary    I    3
ID3  Secondary  P    6
ID4  Secondary  I    2
ID5  Primary    P    3
ID6  Primary    I    4

I reorder it this way:

ID   Type       I/P  Value
ID1  Primary    I    8
ID6  Primary    I    4
ID2  Primary    I    3
ID5  Primary    P    3
ID3  Secondary  P    6
ID4  Secondary  I    2

But I was wondering if there is a way to rearrange/alternate the P/I values, something like this (alternate between I/P but keep the Primary type first, and get the bigger value per P/I):

ID   Type       I/P  Value
ID1  Primary    I    8
ID5  Primary    P    3
ID6  Primary    I    4
ID5  Primary    P    3
ID3  Secondary  P    6
ID4  Secondary  I    2
[ "here is one way to do it\nNote: your starting DF has two 'P' in the DF, the expected output has three 'P'. seems to be a typo\n\n# create a temp seq based on type and i/p\n# count for 'I' and 'P' both starts from 0\n# sort the result with type and seq\n\n\nout=df.assign(seq=df.groupby(['Type','I/P']).cumcount()).sort_values(['Type','seq','I/P']).drop(columns='seq')\n\nout\n\nID Type I/P Value\n0 ID1 Primary I 8\n4 ID5 Primary P 3\n1 ID2 Primary I 3\n5 ID6 Primary I 4\n3 ID4 Secondary I 2\n2 ID3 Secondary P 6\n\n" ]
[ 2 ]
[]
[]
[ "dataframe", "multiple_columns", "pandas", "python", "sorting" ]
stackoverflow_0074620994_dataframe_multiple_columns_pandas_python_sorting.txt
Q: I am so so tired of backtracking issues with pip. How can I figure out exactly which package is causing the problem? I've got a repo with a bunch of different requirements.txt for a few different cloud functions. I separate the install/test for each with a loop in my bitbucket-pipelines.yml:

for d in `find . -type d -maxdepth 1 -mindepth 1`; do
    if [ -f "$d/requirements.txt" ]; then
        echo "====="$d"====="
        python3 -m pip install -r $d/requirements.txt
        python3 -m coverage run -a --omit="*test*" -m unittest discover -v $d
    fi
done

It works great most of the time, until some version of something unexpectedly changes, and then pip ends up spinning its wheels (pun intended) for an eternity. In particular, one of my functions depends on Tensorflow, which has all kinds of deeper Google dependencies that don't get along with other things my function needs to install. But you may not know this from pip. I get a bunch of messages like:

Collecting pandas
  Downloading pandas-1.5.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 12.2/12.2 MB 46.0 MB/s eta 0:00:00
  Downloading pandas-1.5.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 12.2/12.2 MB 41.6 MB/s eta 0:00:00
  Downloading pandas-1.4.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.7/11.7 MB 62.0 MB/s eta 0:00:00
Requirement already satisfied: charset-normalizer<3,>=2 in /usr/local/lib/python3.9/site-packages (from requests->-r ./sleep/requirements.txt (line 5)) (2.1.1)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.9/site-packages (from requests->-r ./sleep/requirements.txt (line 5)) (2022.9.24)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.9/site-packages (from requests->-r ./sleep/requirements.txt (line 5)) (1.26.13)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.9/site-packages (from requests->-r ./sleep/requirements.txt (line 5)) (3.4)
Requirement already satisfied: setuptools in /usr/local/lib/python3.9/site-packages (from grpcio-tools>=1.44.0->-r ./sleep/requirements.txt (line 7)) (58.1.0)
Collecting data-consumption-interface>=1.11
  Downloading https://pypi.biointellisense.com/packages/data-consumption-interface/1.11.0/data_consumption_interface-1.11.0-py3-none-any.whl (53 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 53.6/53.6 kB 65.1 MB/s eta 0:00:00
INFO: pip is looking at multiple versions of grpcio-tools to determine which version is compatible with other requirements. This could take a while.
Collecting grpcio-tools>=1.44.0
  Downloading grpcio_tools-1.50.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.4 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.4/2.4 MB 68.6 MB/s eta 0:00:00
INFO: pip is looking at multiple versions of requests to determine which version is compatible with other requirements. This could take a while.
Collecting requests
  Downloading https://pypi.biointellisense.com/packages/requests/2.28.1/requests-2.28.1-py3-none-any.whl (62 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.8/62.8 kB 70.2 MB/s eta 0:00:00
INFO: pip is looking at multiple versions of pandas to determine which version is compatible with other requirements. This could take a while.
Collecting pandas
  Downloading pandas-1.4.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.7/11.7 MB 100.7 MB/s eta 0:00:00
  Downloading pandas-1.4.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.7/11.7 MB 94.2 MB/s eta 0:00:00
  Downloading pandas-1.4.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.7/11.7 MB 90.2 MB/s eta 0:00:00
  Downloading pandas-1.4.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.7/11.7 MB 92.5 MB/s eta 0:00:00
  Downloading pandas-1.3.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.5/11.5 MB 106.1 MB/s eta 0:00:00
  Downloading pandas-1.3.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.5/11.5 MB 118.8 MB/s eta 0:00:00
  Downloading pandas-1.3.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.5/11.5 MB 102.0 MB/s eta 0:00:00
INFO: pip is looking at multiple versions of pandas to determine which version is compatible with other requirements. This could take a while.
  Downloading pandas-1.3.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.5/11.5 MB 83.4 MB/s eta 0:00:00
  Downloading pandas-1.3.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.7/11.7 MB 107.7 MB/s eta 0:00:00
  Downloading pandas-1.3.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl (10.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 10.6/10.6 MB 122.8 MB/s eta 0:00:00
  Downloading pandas-1.2.5-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl (9.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.7/9.7 MB 111.9 MB/s eta 0:00:00
  Downloading pandas-1.2.4-cp39-cp39-manylinux1_x86_64.whl (9.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.7/9.7 MB 100.2 MB/s eta 0:00:00
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
  Downloading pandas-1.2.3-cp39-cp39-manylinux1_x86_64.whl (9.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.7/9.7 MB 116.0 MB/s eta 0:00:00
  Downloading pandas-1.2.2-cp39-cp39-manylinux1_x86_64.whl (9.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.7/9.7 MB 113.7 MB/s eta 0:00:00
  Downloading pandas-1.2.1-cp39-cp39-manylinux1_x86_64.whl (9.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.7/9.7 MB 106.8 MB/s eta 0:00:00
  Downloading pandas-1.2.0-cp39-cp39-manylinux1_x86_64.whl (9.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.7/9.7 MB 110.1 MB/s eta 0:00:00
  Downloading pandas-1.1.5-cp39-cp39-manylinux1_x86_64.whl (9.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.3/9.3 MB 84.5 MB/s eta 0:00:00
  Downloading pandas-1.1.4-cp39-cp39-manylinux1_x86_64.whl (9.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.3/9.3 MB 110.9 MB/s eta 0:00:00
  Downloading pandas-1.1.3-cp39-cp39-manylinux1_x86_64.whl (9.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.3/9.3 MB 120.0 MB/s eta 0:00:00
  Downloading pandas-1.1.2.tar.gz (5.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.2/5.2 MB 94.3 MB/s eta 0:00:00
  Installing build dependencies: started
  Installing build dependencies: still running...
  Installing build dependencies: still running...
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: still running...
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
  Downloading pandas-1.1.1.tar.gz (5.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.2/5.2 MB 71.5 MB/s eta 0:00:00
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started

...on and on. And depending on which specific versions I try to set, this kind of infinite regress can happen for tensorflow, urllib, cffi, or others. It truly does not make sense that pip should have to download and try to install entire packages before it determines the versions aren't compatible. That should all be available in metadata. Why does python work this way? https://pip.pypa.io/en/stable/topics/dependency-resolution/ I'm at the end of my rope. I can't tell what specifically is causing this problem. How can I determine from the logs exactly which package I need to pin to an exact (==) version? Pip is never giving me an error message about what is interfering to cause it to try to roll versions back! It's maddening! The community needs to fix this.
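One practical way to stop the backtracking, offered here as an editorial suggestion rather than part of the original post, is to resolve each function's dependencies once, freeze the result, and install with the frozen set as hard constraints so pip has nothing left to explore:

# constraints.txt pins every transitive package to one known-good version
# (e.g. "pandas==1.5.1"); generate it once with pip freeze in a clean venv
# or with pip-tools' pip-compile, then reuse it in CI:
python3 -m pip install -c constraints.txt -r "$d/requirements.txt"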
I am so so tired of backtracking issues with pip. How can I figure out exactly which package is causing the problem?
I've got a repo with a bunch of different requirements.txt for a few different cloud functions. I separate the install/test for each with a loop in my bitbucket-pipelines.yml:

for d in `find . -type d -maxdepth 1 -mindepth 1`; do
    if [ -f "$d/requirements.txt" ]; then
        echo "====="$d"====="
        python3 -m pip install -r $d/requirements.txt
        python3 -m coverage run -a --omit="*test*" -m unittest discover -v $d
    fi
done

It works great most of the time, until some version of something unexpectedly changes, and then pip ends up spinning its wheels (pun intended) for an eternity. In particular, one of my functions depends on Tensorflow, which has all kinds of deeper Google dependencies that don't get along with other things my function needs to install. But you may not know this from pip. I get a bunch of messages like:

Collecting pandas
  Downloading pandas-1.5.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 12.2/12.2 MB 46.0 MB/s eta 0:00:00
  Downloading pandas-1.5.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 12.2/12.2 MB 41.6 MB/s eta 0:00:00
  Downloading pandas-1.4.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.7/11.7 MB 62.0 MB/s eta 0:00:00
Requirement already satisfied: charset-normalizer<3,>=2 in /usr/local/lib/python3.9/site-packages (from requests->-r ./sleep/requirements.txt (line 5)) (2.1.1)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.9/site-packages (from requests->-r ./sleep/requirements.txt (line 5)) (2022.9.24)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.9/site-packages (from requests->-r ./sleep/requirements.txt (line 5)) (1.26.13)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.9/site-packages (from requests->-r ./sleep/requirements.txt (line 5)) (3.4)
Requirement already satisfied: setuptools in /usr/local/lib/python3.9/site-packages (from grpcio-tools>=1.44.0->-r ./sleep/requirements.txt (line 7)) (58.1.0)
Collecting data-consumption-interface>=1.11
  Downloading https://pypi.biointellisense.com/packages/data-consumption-interface/1.11.0/data_consumption_interface-1.11.0-py3-none-any.whl (53 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 53.6/53.6 kB 65.1 MB/s eta 0:00:00
INFO: pip is looking at multiple versions of grpcio-tools to determine which version is compatible with other requirements. This could take a while.
Collecting grpcio-tools>=1.44.0
  Downloading grpcio_tools-1.50.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.4 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.4/2.4 MB 68.6 MB/s eta 0:00:00
INFO: pip is looking at multiple versions of requests to determine which version is compatible with other requirements. This could take a while.
Collecting requests
  Downloading https://pypi.biointellisense.com/packages/requests/2.28.1/requests-2.28.1-py3-none-any.whl (62 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.8/62.8 kB 70.2 MB/s eta 0:00:00
INFO: pip is looking at multiple versions of pandas to determine which version is compatible with other requirements. This could take a while.
Collecting pandas
  Downloading pandas-1.4.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.7/11.7 MB 100.7 MB/s eta 0:00:00
  Downloading pandas-1.4.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.7/11.7 MB 94.2 MB/s eta 0:00:00
  Downloading pandas-1.4.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.7/11.7 MB 90.2 MB/s eta 0:00:00
  Downloading pandas-1.4.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.7/11.7 MB 92.5 MB/s eta 0:00:00
  Downloading pandas-1.3.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.5/11.5 MB 106.1 MB/s eta 0:00:00
  Downloading pandas-1.3.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.5/11.5 MB 118.8 MB/s eta 0:00:00
  Downloading pandas-1.3.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.5/11.5 MB 102.0 MB/s eta 0:00:00
INFO: pip is looking at multiple versions of pandas to determine which version is compatible with other requirements. This could take a while.
  Downloading pandas-1.3.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.5/11.5 MB 83.4 MB/s eta 0:00:00
  Downloading pandas-1.3.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.7/11.7 MB 107.7 MB/s eta 0:00:00
  Downloading pandas-1.3.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl (10.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 10.6/10.6 MB 122.8 MB/s eta 0:00:00
  Downloading pandas-1.2.5-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl (9.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.7/9.7 MB 111.9 MB/s eta 0:00:00
  Downloading pandas-1.2.4-cp39-cp39-manylinux1_x86_64.whl (9.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.7/9.7 MB 100.2 MB/s eta 0:00:00
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
  Downloading pandas-1.2.3-cp39-cp39-manylinux1_x86_64.whl (9.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.7/9.7 MB 116.0 MB/s eta 0:00:00
  Downloading pandas-1.2.2-cp39-cp39-manylinux1_x86_64.whl (9.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.7/9.7 MB 113.7 MB/s eta 0:00:00
  Downloading pandas-1.2.1-cp39-cp39-manylinux1_x86_64.whl (9.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.7/9.7 MB 106.8 MB/s eta 0:00:00
  Downloading pandas-1.2.0-cp39-cp39-manylinux1_x86_64.whl (9.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.7/9.7 MB 110.1 MB/s eta 0:00:00
  Downloading pandas-1.1.5-cp39-cp39-manylinux1_x86_64.whl (9.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.3/9.3 MB 84.5 MB/s eta 0:00:00
  Downloading pandas-1.1.4-cp39-cp39-manylinux1_x86_64.whl (9.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.3/9.3 MB 110.9 MB/s eta 0:00:00
  Downloading pandas-1.1.3-cp39-cp39-manylinux1_x86_64.whl (9.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.3/9.3 MB 120.0 MB/s eta 0:00:00
  Downloading pandas-1.1.2.tar.gz (5.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.2/5.2 MB 94.3 MB/s eta 0:00:00
  Installing build dependencies: started
  Installing build dependencies: still running...
  Installing build dependencies: still running...
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: still running...
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
  Downloading pandas-1.1.1.tar.gz (5.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.2/5.2 MB 71.5 MB/s eta 0:00:00
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started

...on and on. And depending on which specific versions I try to set, this kind of infinite regress can happen for tensorflow, urllib, cffi, or others. It truly does not make sense that pip should have to download and try to install entire packages before it determines the versions aren't compatible. That should all be available in metadata. Why does python work this way? https://pip.pypa.io/en/stable/topics/dependency-resolution/ I'm at the end of my rope. I can't tell what specifically is causing this problem. How can I determine from the logs exactly which package I need to pin to an exact (==) version? Pip is never giving me an error message about what is interfering to cause it to try to roll versions back! It's maddening! The community needs to fix this.
[ "As it seems your question has been answered in another Question\nA bit of an explanation:\nFirst we assume we work with python-version-2.5 (because I chose so - you have to replace 2.5 by your python-version)\n\nFirst there was/is the command to avoid multiple requests:\nex.g. $ python-2.5 -m pip install myfoopackage\n#you avoid them by specifying your python version (*as it didn't worked in your\ncase I assume you put the wrong python version which you can find out with $ python -V)\nSince version 0.8, Pip supports pip-{python-version}\nex.g. pip-2.5 install myfoopackage #as you can see python-<VERSION> -m is no\nlonger needed\nSince version 1.5 they changed the schema to pip<VERSION>\nex.g. pip2.5 intall mypackage\n\nTL;DR\nTRY:\n$ python -V\n$ for f in find . -type f -iname \"requirements*.txt\"; do\n pip<python_versoin> install -r $f\ndone\n\n" ]
[ 0 ]
[]
[]
[ "pip", "python" ]
stackoverflow_0074620601_pip_python.txt
Q: Django / Python: Sorting QuerySet by field on first instance of foreignkey relation I have the following 2 models, a Lesson model, that can have multiple start / end dates - each with their own Date model.

class Lesson(models.Model):
    name = models.CharField()
    (...)

class Date(models.Model):
    meeting = models.ForeignKey(
        Lesson,
        verbose_name="dates",
        on_delete=models.CASCADE,
    )
    start_date = models.DateTimeField(
        blank=True,
        null=True,
    )
    end_date = models.DateTimeField(
        blank=True,
        null=True,
    )

For our project we need to order the Lessons by the start_date of their earliest Date instance. So for example, if I have the following data:

MEETING A:
    DATE_1a: 2022-11-01 --> 2022-11-02
    DATE_2a: 2022-12-10 --> 2022-12-11
MEETING B:
    DATE_1b: 2022-11-03 --> 2022-11-04
    DATE_2b: 2022-11-05 --> 2022-11-06

Then the queryset should return [<MEETING A>, <MEETING B>] based on the start_date values of the DATE_1a/b instances. (2022-11-01 is earlier than 2022-11-03; all other Dates are ignored.) However, what I thought would be a relatively simple query has proven to be a rather evasive problem. My initial approach of the default order_by method used all Date instances attached to a Meeting (and worse, returned a separate Meeting result for each Date instance):

qs.order_by(F("dates__start_date").asc(nulls_first=True))

I could not conceive of a query to do what we need to happen. My current working solution is to use a whole bunch of python code to instead do:

ordered_meetings = []
meeting_dict = {}
for meeting in qs:
    first_date = meeting.dates.first()
    if not meeting_dict.get(meeting.id):
        meeting_dict[meeting.id] = first_date.start_date

And then ordering the dict based on its date values, using the ID/KEYS to fetch the Meeting instances and append them to the ordered_meetings list. But this is such a convoluted (and much slower) approach that I feel that I must be missing something. Does anyone know a more succinct method of accomplishing what we want here?

A: I'm assuming you need a min?
Something like (untested):

Meeting.objects.annotate(first_date=Min('dates__start_date')).order_by('first_date')
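A slightly fuller sketch of the annotate-based answer above, with the import spelled out. This is illustrative only, and it assumes the reverse relation is actually named "dates" (the model shown sets verbose_name, not related_name):

from django.db.models import Min

ordered = (
    Lesson.objects
    .annotate(first_date=Min('dates__start_date'))
    .order_by('first_date')  # one row per Lesson, earliest Date first
)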
Django / Python: Sorting QuerySet by field on first instance of foreignkey relation
I have the following 2 models, a Lesson model, that can have multiple start / end dates - each with their own Date model.

class Lesson(models.Model):
    name = models.CharField()
    (...)

class Date(models.Model):
    meeting = models.ForeignKey(
        Lesson,
        verbose_name="dates",
        on_delete=models.CASCADE,
    )
    start_date = models.DateTimeField(
        blank=True,
        null=True,
    )
    end_date = models.DateTimeField(
        blank=True,
        null=True,
    )

For our project we need to order the Lessons by the start_date of their earliest Date instance. So for example, if I have the following data:

MEETING A:
    DATE_1a: 2022-11-01 --> 2022-11-02
    DATE_2a: 2022-12-10 --> 2022-12-11
MEETING B:
    DATE_1b: 2022-11-03 --> 2022-11-04
    DATE_2b: 2022-11-05 --> 2022-11-06

Then the queryset should return [<MEETING A>, <MEETING B>] based on the start_date values of the DATE_1a/b instances. (2022-11-01 is earlier than 2022-11-03; all other Dates are ignored.) However, what I thought would be a relatively simple query has proven to be a rather evasive problem. My initial approach of the default order_by method used all Date instances attached to a Meeting (and worse, returned a separate Meeting result for each Date instance):

qs.order_by(F("dates__start_date").asc(nulls_first=True))

I could not conceive of a query to do what we need to happen. My current working solution is to use a whole bunch of python code to instead do:

ordered_meetings = []
meeting_dict = {}
for meeting in qs:
    first_date = meeting.dates.first()
    if not meeting_dict.get(meeting.id):
        meeting_dict[meeting.id] = first_date.start_date

And then ordering the dict based on its date values, using the ID/KEYS to fetch the Meeting instances and append them to the ordered_meetings list. But this is such a convoluted (and much slower) approach that I feel that I must be missing something. Does anyone know a more succinct method of accomplishing what we want here?
[ "I'm assuming you need a min?\nSomething like (untested):\nMeeting.objects.annotate(first_date=Min('dates__start_date')).order_by('first_date')\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_queryset", "python" ]
stackoverflow_0074621051_django_django_queryset_python.txt
Q: Adding user input to an array I've just started learning python. I'm doing this for a school project. I cannot use a list for storing the passengers' ages, I need to do a 2d array and I'm just lost. How do I add the values of "age" into an array? I've got it working with a list, but that won't be enough. I've tried messing around with numpy but I must be doing something wrong, please help me. I want the array to be 4x8 to represent the seats in the bus. So, this is what I've got so far. It's far from finished but I got stuck on this. This is the error I get: AttributeError: 'numpy.ndarray' object has no attribute 'np'

import numpy as np
import random as rd

passenger_ages = np.zeros([4, 8], dtype=int)

class Bus:

    def run():
        print('==============================')
        print('Welcome to the bus simulator')
        print('==============================')
        print('What do you want to do?')
        print('1. Add passenger.')
        print('2. Print all passengers.')
        print('3. See other age options')
        choice = int(input('Make your choice: '))
        match choice:
            case 1:
                Bus.add_passenger()
            case 2:
                Bus.print_all()
            case 3:
                Bus.age_numbers()

    def print_all():
        print(passenger_ages)

    def age_numbers():
        print('What do you want to see?')
        print('1. Combined age.')
        print('2. Average age.')
        print('3. Oldest on the bus.')
        print('Youngest on the bus.')

    def add_passenger():
        pass_no = 0
        try:
            age = int(input('Passengers age: '))
            passenger_ages.np.append(age)
            pass_no += 1
            print(f'You added a passenger with the age: {age}')
            again = input('Do you want to add more passengers?(J/N)')
            if again == 'j' or again == 'J':
                Bus.add_passenger()
            elif pass_no > 32:
                print('Buss is full.')
                Bus.run()
            else:
                Bus.run()
        except ValueError:
            print('Only numbers.')

Bus.run()

A: So when you work with numpy arrays they have indexes, like lists. But since they're multi-dimensional you have to call more than one axis. If you replace line passenger_ages.np.append(age) with passenger_ages[pass_no][x] = age then you can get this working. In this case pass_no will increment each time you add a passenger since you've implemented that functionality. You will also need to add a functionality to increment x as well. You need two indexes due to the fact it's a 2d array. Think of them like X, y coordinates on a chart. Here's some code to get you pointed in the right direction (I'm not going to finish your school work for you lol). In order to finish this properly you'll need to change the incrementing functions to properly index on each line. Hint: I'd use if/else statements to check and see when it's time to increment pass_no since it has to stay the same for several passengers while just x increments each time, however there are about twenty ways to do this. See also: numpy.asarray and numpy.reshape for another solution.

import numpy as np
import random as rd

passenger_ages = np.zeros([4, 8], dtype=int)
pass_no = 0
x = 0

class Bus:

    def run():
        print('==============================')
        print('Welcome to the bus simulator')
        print('==============================')
        print('What do you want to do?')
        print('1. Add passenger.')
        print('2. Print all passengers.')
        print('3. See other age options')
        choice = int(input('Make your choice: '))
        match choice:
            case 1:
                Bus.add_passenger()
            case 2:
                Bus.print_all()
            case 3:
                Bus.age_numbers()

    def print_all():
        print(passenger_ages)

    def age_numbers():
        print('What do you want to see?')
        print('1. Combined age.')
        print('2. Average age.')
        print('3. Oldest on the bus.')
        print('Youngest on the bus.')

    def add_passenger():
        global pass_no
        global x

        try:
            age = int(input('Passengers age: '))
            passenger_ages[pass_no][x] = age
            # passenger_ages.np.append(age)
            pass_no += 1
            x += 1
            print(f'You added a passenger with the age: {age}')
            again = input('Do you want to add more passengers?(J/N)')
            if again == 'j' or again == 'J':
                Bus.add_passenger()
            elif pass_no > 32:
                print('Buss is full.')
                Bus.run()
            else:
                Bus.run()
        except ValueError:
            print('Only numbers.')

Bus.run()
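One detail the answer leaves as an exercise is mapping a single running passenger count onto the 4x8 seat grid; a common trick for that (illustrative, not from the thread) is divmod:

import numpy as np

passenger_ages = np.zeros([4, 8], dtype=int)
pass_no = 0  # flat seat number, 0..31

def seat(age):
    global pass_no
    row, col = divmod(pass_no, 8)  # row advances every 8 passengers
    passenger_ages[row][col] = age
    pass_no += 1

seat(34)
seat(7)
print(passenger_ages[0][:2])  # [34  7]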
Adding user input to an array
I've just started learning Python. I'm doing this for a school project. I cannot use a list for storing the passengers' ages, I need to use a 2D array and I'm just lost. How do I add the values of "age" into an array? I've got it working with a list, but that won't be enough. I've tried messing around with numpy but I must be doing something wrong, please help me. I want the array to be 4x8 to represent the seats in the bus. So, this is what I've got so far. It's far from finished but I got stuck on this. This is the error I get: AttributeError: 'numpy.ndarray' object has no attribute 'np' import numpy as np import random as rd passenger_ages=np.zeros([4,8],dtype=int) class Bus: def run(): print('==============================') print('Welcome to the bus simulator') print('==============================') print('What do you want to do?') print('1. Add passenger.') print('2. Print all passengers.') print('3. See other age options') choice=int(input('Make your choice: ')) match choice: case 1: Bus.add_passenger() case 2: Bus.print_all() case 3: Bus.age_numbers() def print_all(): print(passenger_ages) def age_numbers(): print('What do you want to see?') print('1. Combined age.') print('2. Average age.') print('3. Oldest on the bus.') print('Youngest on the bus.') def add_passenger(): pass_no=0 try: age=int(input('Passengers age: ')) passenger_ages.np.append(age) pass_no+=1 print(f'You added a passenger with the age: {age}') again=input('Do you want to add more passengers?(J/N)') if again=='j' or again=='J': Bus.add_passenger() elif pass_no>32: print('Buss is full.') Bus.run() else: Bus.run() except ValueError: print('Only numbers.') Bus.run()
[ "So when you work with numpy arrays they have indexes, like lists. But since they're multi-dimensional you have to call more than one axis.\nIf you replace line passenger_ages.np.append(age) with passenger_ages[pass_no][x] = age then you can get this working. In this case pass_no will increment each time you add a passenger since you've implemented that functionality. You will also need to add a functionality to increment x as well.\nYou need two indexes due to the fact it's a 2d array. Think of them like X, y coordinates on a chart.\nHere's some code to get you pointed in the right direction (I'm not going to finish your school work for you lol). In order to finish this properly you'll need to change the incrementing functions to properly index on each line. Hint: I'd use if/else statements to check and see when it's time to increment pass_no since it has to stay the same for several passengers while just x increments each time, however there are about twenty ways to do this.\nSee also: numpy.asarray and numpy.reshape for another solution.\nimport numpy as np\nimport random as rd\n\npassenger_ages = np.zeros([4, 8], dtype=int)\npass_no = 0\nx = 0\n\nclass Bus:\n\n def run():\n print('==============================')\n print('Welcome to the bus simulator')\n print('==============================')\n print('What do you want to do?')\n print('1. Add passenger.')\n print('2. Print all passengers.')\n print('3. See other age options')\n choice = int(input('Make your choice: '))\n match choice:\n case 1:\n Bus.add_passenger()\n case 2:\n Bus.print_all()\n case 3:\n Bus.age_numbers()\n\n def print_all():\n print(passenger_ages)\n\n def age_numbers():\n print('What do you want to see?')\n print('1. Combined age.')\n print('2. Average age.')\n print('3. Oldest on the bus.')\n print('Youngest on the bus.')\n\n def add_passenger():\n global pass_no\n global x\n\n try:\n age = int(input('Passengers age: '))\n passenger_ages[pass_no][x] = age\n # passenger_ages.np.append(age)\n pass_no += 1\n x += 1\n print(f'You added a passenger with the age: {age}')\n again = input('Do you want to add more passengers?(J/N)')\n if again == 'j' or again == 'J':\n Bus.add_passenger()\n elif pass_no > 32:\n print('Buss is full.')\n Bus.run()\n else:\n Bus.run()\n except ValueError:\n print('Only numbers.')\n\n\nBus.run()\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "numpy", "python" ]
stackoverflow_0074620780_arrays_numpy_python.txt
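A note on the indexing idea above: one way to avoid hand-tracking two counters is to derive the row and column from a single running count with divmod. This is only a minimal sketch under the question's 4x8 assumption; the function and variable names here are illustrative, not taken from the original code.

import numpy as np

passenger_ages = np.zeros((4, 8), dtype=int)  # 4 rows of 8 seats
ROWS, COLS = passenger_ages.shape

def add_passenger(age, count):
    # Map the running passenger count to a (row, col) seat position.
    if count >= ROWS * COLS:
        raise IndexError('Bus is full.')
    row, col = divmod(count, COLS)  # e.g. count 9 -> row 1, col 1
    passenger_ages[row, col] = age
    return count + 1

count = 0
for age in (34, 27, 61):
    count = add_passenger(age, count)
print(passenger_ages)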
Q: Speeding up extracting data from large XML files using Python Hello, I am not a strong Python user, but I need to extract values from an XML file. I am using a for loop to get attribute values from 'xml.dom.minidom.document'. Both the xyz and temp extraction use for loops, and since the file has half a million values it takes time. I tried using lxml, but it raised errors: module 'lxml' has no attribute 'parse' or 'Xpath'. The XML file has the following format: <?xml version="1.0" encoding="utf-8"?> <variable_output> <!--version : 1--> <!--object title : Volume (1)--> <!--scalar variable : Temperature (TEMP)--> <POINT> <Vertex> <Position x="-0.176300004" y="-0.103100002" z="-0.153699994"/> <Scalar TEMP="84.192421"/> </Vertex> </POINT> <POINT> <Vertex> <Position x="-0.173557162" y="-0.103100002" z="-0.153699994"/> <Scalar TEMP="83.9050522"/> </Vertex> </POINT> <POINT> <Vertex> <Position x="-0.170814306" y="-0.103100002" z="-0.153699994"/> <Scalar TEMP="83.7506332"/> </Vertex> </POINT> </variable_output> The following code takes much longer for bigger files. from xml.dom.minidom import parse import xml.dom.minidom import csv import pandas as pd import numpy as np import os import glob import time from lxml import etree v=[] doc =parse("document.xml") Val = doc.getElementsByTagName("Scalar") t0 = time.time() for s in Val: v=np.append(v,float(s.attributes['TEMP'].value)) res=np.array([v]) t1 = time.time() total = (t1-t0) print('Time for Value', str(total)) # Using lxml doc2=etree.parse("document.xml") # try using Xpath t0 = time.time() temp=doc2.Xpath("/POINT/Vertex/Scaler/@TEMP") t1 = time.time() total2 = t1-t0 print('Time for Value', str(total2)) # save data as csv from xml pd.DataFrame(res.T).to_csv(('Data.csv'),index=False,header=False) #write timestep as csv The error while using Xpath to get the values of TEMP, or x, y, z: In [12]: temp=doc2.Xpath("/POINT/Vertex/Scaler/@TEMP") Traceback (most recent call last): File "<ipython-input-12-bbd832a3074e>", line 1, in <module> temp=doc2.Xpath("/POINT/Vertex/Scaler/@TEMP") AttributeError: 'lxml.etree._ElementTree' object has no attribute 'Xpath' A: I recommend iterparse() for large XML files: import timeit import os, psutil import datetime import pandas as pd import xml.etree.ElementTree as ET class parse_xml: def __init__(self, path): self.xml = os.path.split(path)[1] print(self.xml) columns = ["Pos_x", "Pos_y", "Pos_z", "Scalar_Temp"] data = [] for event, elem in ET.iterparse(self.xml, events=("end",)): if elem.tag == "Position": x = elem.get("x") y = elem.get("y") z = elem.get("z") if elem.tag == "Scalar": row = (x, y, z , elem.get("TEMP")) data.append(row) elem.clear() df = pd.DataFrame(data, columns=columns) print(df) def main(): xml_file = r"D:\Daten\Programmieren\stackoverflow\document.xml" parse_xml(xml_file) if __name__ == "__main__": now = datetime.datetime.now() starttime = timeit.default_timer() main() process = psutil.Process(os.getpid()) print('\nFinished') print(f"{now:%Y-%m-%d %H:%M}") print('Runtime:', timeit.default_timer()-starttime) print(f'RAM: {process.memory_info().rss/1000**2} MB') Output: document.xml Pos_x Pos_y Pos_z Scalar_Temp 0 -0.176300004 -0.103100002 -0.153699994 84.192421 1 -0.173557162 -0.103100002 -0.153699994 83.9050522 2 -0.170814306 -0.103100002 -0.153699994 83.7506332 Finished 2022-11-29 23:51 Runtime: 0.007375300000000029 RAM: 55.619584 MB If the output is too large you can write it to a sqlite3 database with df.to_sql().
Speeding up extracting data from large XML files using Python
Hello, I am not a strong Python user, but I need to extract values from an XML file. I am using a for loop to get attribute values from 'xml.dom.minidom.document'. Both the xyz and temp extraction use for loops, and since the file has half a million values it takes time. I tried using lxml, but it raised errors: module 'lxml' has no attribute 'parse' or 'Xpath'. The XML file has the following format: <?xml version="1.0" encoding="utf-8"?> <variable_output> <!--version : 1--> <!--object title : Volume (1)--> <!--scalar variable : Temperature (TEMP)--> <POINT> <Vertex> <Position x="-0.176300004" y="-0.103100002" z="-0.153699994"/> <Scalar TEMP="84.192421"/> </Vertex> </POINT> <POINT> <Vertex> <Position x="-0.173557162" y="-0.103100002" z="-0.153699994"/> <Scalar TEMP="83.9050522"/> </Vertex> </POINT> <POINT> <Vertex> <Position x="-0.170814306" y="-0.103100002" z="-0.153699994"/> <Scalar TEMP="83.7506332"/> </Vertex> </POINT> </variable_output> The following code takes much longer for bigger files. from xml.dom.minidom import parse import xml.dom.minidom import csv import pandas as pd import numpy as np import os import glob import time from lxml import etree v=[] doc =parse("document.xml") Val = doc.getElementsByTagName("Scalar") t0 = time.time() for s in Val: v=np.append(v,float(s.attributes['TEMP'].value)) res=np.array([v]) t1 = time.time() total = (t1-t0) print('Time for Value', str(total)) # Using lxml doc2=etree.parse("document.xml") # try using Xpath t0 = time.time() temp=doc2.Xpath("/POINT/Vertex/Scaler/@TEMP") t1 = time.time() total2 = t1-t0 print('Time for Value', str(total2)) # save data as csv from xml pd.DataFrame(res.T).to_csv(('Data.csv'),index=False,header=False) #write timestep as csv The error while using Xpath to get the values of TEMP, or x, y, z: In [12]: temp=doc2.Xpath("/POINT/Vertex/Scaler/@TEMP") Traceback (most recent call last): File "<ipython-input-12-bbd832a3074e>", line 1, in <module> temp=doc2.Xpath("/POINT/Vertex/Scaler/@TEMP") AttributeError: 'lxml.etree._ElementTree' object has no attribute 'Xpath'
[ "I recommend iterparse() for large xml files:\nimport timeit\nimport os, psutil\nimport datetime\n\nimport pandas as pd\nimport xml.etree.ElementTree as ET\n\nclass parse_xml:\n def __init__(self, path):\n self.xml = os.path.split(path)[1]\n print(self.xml)\n \n columns = [\"Pos_x\", \"Pos_y\", \"Pos_z\", \"Scalar_Temp\"]\n \n data = []\n for event, elem in ET.iterparse(self.xml, events=(\"end\",)):\n if elem.tag == \"Position\":\n x = elem.get(\"x\")\n y = elem.get(\"y\")\n z = elem.get(\"z\")\n if elem.tag == \"Scalar\":\n row = (x, y, z , elem.get(\"TEMP\"))\n data.append(row)\n elem.clear()\n \n df = pd.DataFrame(data, columns=columns)\n print(df)\n \ndef main():\n xml_file = r\"D:\\Daten\\Programmieren\\stackoverflow\\document.xml\"\n parse_xml(xml_file)\n\nif __name__ == \"__main__\":\n now = datetime.datetime.now()\n starttime = timeit.default_timer()\n main()\n process = psutil.Process(os.getpid())\n print('\\nFinished')\n print(f\"{now:%Y-%m-%d %H:%M}\")\n print('Runtime:', timeit.default_timer()-starttime)\n print(f'RAM: {process.memory_info().rss/1000**2} MB')\n\nOutput:\ndocument.xml\n Pos_x Pos_y Pos_z Scalar_Temp\n0 -0.176300004 -0.103100002 -0.153699994 84.192421\n1 -0.173557162 -0.103100002 -0.153699994 83.9050522\n2 -0.170814306 -0.103100002 -0.153699994 83.7506332\n\nFinished\n2022-11-29 23:51\nRuntime: 0.007375300000000029\nRAM: 55.619584 MB\n\nIf the output will be too large you can write it to a sqlite3 database with df.to_sql().\n" ]
[ 0 ]
[]
[]
[ "anaconda", "lxml", "python", "xml" ]
stackoverflow_0074321833_anaconda_lxml_python_xml.txt
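For completeness, the lxml failure in the question is just the method name and the path: parsed trees expose a lowercase xpath() method, the element is spelled Scalar (not Scaler), and the POINT elements sit below the variable_output root. A small sketch of what the working calls would likely look like against the sample file:

from lxml import etree

doc = etree.parse("document.xml")
# xpath() is lowercase, and '//' searches below the <variable_output> root.
temps = [float(t) for t in doc.xpath("//POINT/Vertex/Scalar/@TEMP")]
xs = [float(x) for x in doc.xpath("//POINT/Vertex/Position/@x")]
print(temps[:3], xs[:3])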
Q: Exasol_Error: I keep getting Exasol connection error timed out I am trying to connect to my Exasol SaaS database. I tried via these tools (Talend, DbVisualizer, Power BI) and via Python, but I cannot connect and I keep getting the same error. I saw another post on the Exasol community https://community.exasol.com/t5/discussion-forum/exaconnectionfailederror/m-p/8049#M1855 about this type of error, but it doesn't explain exactly what was done to fix it. I tried via the ODBC Data Source administrator (64-bit) too, but still the same error. Maybe it's a connection issue with my PC itself, but I'm not sure; or maybe I am just inserting wrong values, I don't know. The values I inserted are the ones the Exasol docs recommend, and I have removed anything about proxy or VPN. I have put my errors below. I tried via different devices and I get the same error. I really don't know what I can do any more, so any help will be greatly appreciated. Note: I am using the Exasol SaaS database and I am currently on the trial mode, so I am not sure if this is limiting me. Errors: Error message odbc exasol: [EXASOL][EXASolution driver]connection attempt timed out. Error message Talend : Connection failure. You must change the Database Settings. java.lang.RuntimeException: com.exasol.jdbc.ConnectFailed: connect timed out -> Caused by: com.exasol.jdbc.ConnectFailed: connect timed out Error message pyexasol : socket.timeout: timed out Error message dbvisualizer : java.net.SocketTimeoutException: Connect timed out com.exasol.jdbc.ConnectFailed: java.net.SocketTimeoutException: Connect timed out Error message Power BI desktop : Details: "ODBC: ERROR [HYT00][EXASOL][EXASolution driver]Connection attempt timed out." My application versions: DbVisualizer Free 14.0.1 (build: 3540) Talend Open Studio Data Integration (8.0.1.2021119_1610) java version -> jdk-16.0.02 Power BI -> Version: 2.110.1085.0 64-bit (October 2022) ODBC : exasolodbc x64 7.1.14 JDBC : exasoljdbc 7.1.14 Python: python 3.8.10 -> pyexasol : 0.25.1 A: The error means that the client is not able to reach the host for some reason. Try the following: Make sure the database is still online (they auto-shutdown after 2 hours if there is no activity by default) Check that the IP address of the host you are connecting from is added to the allow list in the SaaS UI. (see the docs) Check if your host is able to reach the host and port specified in the SaaS UI (for example using telnet on port 8563). Maybe some firewall is preventing access to the database? A: So I did more digging; actually, I have no idea what the issue was. Talend: I made a connection via JDBC in Talend with the help of Exasol support. The DBType Exasol in Talend doesn't work for some reason; it's not known if it's on the Talend side or the Exasol side, maybe this will be updated in the future. Just remember to type this in the JDBC URL: "jdbc:exa:yourconnectionstring", don't forget the "exa". PowerBI: I tried the connection string with the fingerprint method and that worked for me. Just put the fingerprint with the connection string and it should connect. https://exasol.my.site.com/s/article/PowerBI-Encryption-Fingerprint-Issue-in-Exasol-7-1?language=en_US DBvisualizer: I had an error in my connection string. Python: I had an error in my connection string. Hopefully this helps someone.
Exasol_Error: I keep getting Exasol connection error timed out
I am trying to connect to my Exasol SaaS database. I tried via these tools (Talend, DbVisualizer, Power BI) and via Python, but I cannot connect and I keep getting the same error. I saw another post on the Exasol community https://community.exasol.com/t5/discussion-forum/exaconnectionfailederror/m-p/8049#M1855 about this type of error, but it doesn't explain exactly what was done to fix it. I tried via the ODBC Data Source administrator (64-bit) too, but still the same error. Maybe it's a connection issue with my PC itself, but I'm not sure; or maybe I am just inserting wrong values, I don't know. The values I inserted are the ones the Exasol docs recommend, and I have removed anything about proxy or VPN. I have put my errors below. I tried via different devices and I get the same error. I really don't know what I can do any more, so any help will be greatly appreciated. Note: I am using the Exasol SaaS database and I am currently on the trial mode, so I am not sure if this is limiting me. Errors: Error message odbc exasol: [EXASOL][EXASolution driver]connection attempt timed out. Error message Talend : Connection failure. You must change the Database Settings. java.lang.RuntimeException: com.exasol.jdbc.ConnectFailed: connect timed out -> Caused by: com.exasol.jdbc.ConnectFailed: connect timed out Error message pyexasol : socket.timeout: timed out Error message dbvisualizer : java.net.SocketTimeoutException: Connect timed out com.exasol.jdbc.ConnectFailed: java.net.SocketTimeoutException: Connect timed out Error message Power BI desktop : Details: "ODBC: ERROR [HYT00][EXASOL][EXASolution driver]Connection attempt timed out." My application versions: DbVisualizer Free 14.0.1 (build: 3540) Talend Open Studio Data Integration (8.0.1.2021119_1610) java version -> jdk-16.0.02 Power BI -> Version: 2.110.1085.0 64-bit (October 2022) ODBC : exasolodbc x64 7.1.14 JDBC : exasoljdbc 7.1.14 Python: python 3.8.10 -> pyexasol : 0.25.1
[ "The error means that the client is not able to reach the host for some reason. Try the following:\n\nMake sure the database is still online (they auto-shutdown after 2 hours if there is no activity by default)\nCheck that the IP Address of the host you are connecting with is added to the allow list in the SaaS UI. (see the docs)\nCheck if your host is able to reach the host and port specified in the SaaS UI (for example using telnet on port 8563). Maybe some firewall is preventing access to the database?\n\n", "So I did more digging. actually I have no idea what the issue was.\nTalend:\nI made a connection via JDBC in Talend with the help of exasol-support. The DBType Exasol in talend doesn't work for some reason, its not known if it's talend side or Exasol side, maybe this will be updated in the future. Just remember in the jdbc url type this: \"jdbc:exa:yourconnectionstring\", don't forget the \"exa\".\nPowerBI:\nI tried the connection string with fingerprint method that worked for me. Just put the fingerprint with the connection string and it should connect.\nhttps://exasol.my.site.com/s/article/PowerBI-Encryption-Fingerprint-Issue-in-Exasol-7-1?language=en_US\nDBvisualizer:\nI had a wrong in connection string.\nPython:\nI had a wrong in connection string.\nHopefully this helps someone.\n" ]
[ 0, 0 ]
[]
[]
[ "database_connection", "dbvisualizer", "exasol", "python", "talend" ]
stackoverflow_0074430573_database_connection_dbvisualizer_exasol_python_talend.txt
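To separate network problems from driver problems, as the first answer suggests, a bare TCP check from the same machine is often enough. A minimal sketch; the hostname below is a placeholder, not a real endpoint, and 8563 is the port named in the answer:

import socket

host, port = "abc.clusters.exasol.com", 8563  # placeholder host from the SaaS UI
try:
    with socket.create_connection((host, port), timeout=10):
        print("TCP connection OK - look at credentials/driver settings next")
except OSError as exc:
    print(f"Cannot reach {host}:{port} - likely allow list or firewall: {exc}")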
Q: Tensorflow clone_model with subclass model Is there a way to clone a subclass-based model in Tensorflow? For example, if I have the following model: class MySequentialModel(tf.keras.Model): def __init__(self, name=None, **kwargs): super().__init__(**kwargs) self.dense_1 = FlexibleDense(out_features=3) self.dense_2 = FlexibleDense(out_features=2) def call(self, x): x = self.dense_1(x) return self.dense_2(x) Then I train, save, and load the model; when I try to clone it: model = tf.keras.models.clone_model(original_model) I get ValueError: Expected `model` argument to be a functional `Model` instance, but got a subclass model instead. Is there some other way to clone a model which is a subclass of tf.keras.Model? A: This is not possible, but the issue is tracked on the Keras repository here.
Tensorflow clone_model with subclass model
Is there a way to clone a subclass-based model in Tensorflow? For example, if I have the following model: class MySequentialModel(tf.keras.Model): def __init__(self, name=None, **kwargs): super().__init__(**kwargs) self.dense_1 = FlexibleDense(out_features=3) self.dense_2 = FlexibleDense(out_features=2) def call(self, x): x = self.dense_1(x) return self.dense_2(x) Then I train, save, and load the model; when I try to clone it: model = tf.keras.models.clone_model(original_model) I get ValueError: Expected `model` argument to be a functional `Model` instance, but got a subclass model instead. Is there some other way to clone a model which is a subclass of tf.keras.Model?
[ "This is not possible, but the issue is tracked on the Keras repository here.\n" ]
[ 0 ]
[]
[]
[ "keras", "python", "tensorflow" ]
stackoverflow_0066068938_keras_python_tensorflow.txt
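Since clone_model only supports Functional and Sequential models, a common workaround for subclassed models is to re-instantiate the class yourself and copy the weights across. A minimal sketch with a stand-in subclass (TinyModel is illustrative, not from the question), assuming the model can be rebuilt from its constructor:

import tensorflow as tf

class TinyModel(tf.keras.Model):  # stand-in for the subclassed model
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.dense_1 = tf.keras.layers.Dense(3)
        self.dense_2 = tf.keras.layers.Dense(2)

    def call(self, x):
        return self.dense_2(self.dense_1(x))

original = TinyModel()
original.build(input_shape=(None, 4))      # create the weights

clone = TinyModel()                        # re-instantiate by hand...
clone.build(input_shape=(None, 4))
clone.set_weights(original.get_weights())  # ...and copy the weight values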
Q: How to Trigger an On-Demand Scheduled Query Using a Cloud Function? What I basically want to happen is my on demand scheduled query will run when a new file lands in my google cloud storage bucket. This query will load the CSV file into a temporary table, perform some transformation/cleaning and then append to a table. Just to try and get the first part running, my on demand scheduled query looks like this. The idea being it will pick up the CSV file from the bucket and dump it into a table. LOAD DATA INTO spreadsheep-20220603.Case_Studies.loading_test from files ( format='CSV', uris=['gs://triggered_upload/*.csv'] ); I was in the process of setting up a Google Cloud Function that triggers when a file lands in the storage bucket, that seems to be fine but I haven't had luck working out how that function will trigger the scheduled query. Any idea what bit of python code is needed in the function to trigger the query? A: It seems to me that it's not really a scheduled query you want at all. You don't want one to run at regular intervals, you want to run a query in response to a certain event. Now, you've rigged up a cloud function to execute some code whenever a new file is added to a bucket. What this cloud function needs is the BigQuery python client library. Here's an example of how it's used. All that remains is to wrap this code in an appropriate function and specify dependencies and permissions using the cloud functions python framework. Here is a guide on how to do that.
How to Trigger an On-Demand Scheduled Query Using a Cloud Function?
What I basically want to happen is my on demand scheduled query will run when a new file lands in my google cloud storage bucket. This query will load the CSV file into a temporary table, perform some transformation/cleaning and then append to a table. Just to try and get the first part running, my on demand scheduled query looks like this. The idea being it will pick up the CSV file from the bucket and dump it into a table. LOAD DATA INTO spreadsheep-20220603.Case_Studies.loading_test from files ( format='CSV', uris=['gs://triggered_upload/*.csv'] ); I was in the process of setting up a Google Cloud Function that triggers when a file lands in the storage bucket, that seems to be fine but I haven't had luck working out how that function will trigger the scheduled query. Any idea what bit of python code is needed in the function to trigger the query?
[ "It seems to me that it's not really a scheduled query you want at all. You don't want one to run at regular intervals, you want to run a query in response to a certain event.\nNow, you've rigged up a cloud function to execute some code whenever a new file is added to a bucket. What this cloud function needs is the BigQuery python client library. Here's an example of how it's used.\nAll that remains is to wrap this code in an appropriate function and specify dependencies and permissions using the cloud functions python framework. Here is a guide on how to do that.\n" ]
[ 1 ]
[]
[]
[ "google_bigquery", "google_cloud_functions", "google_cloud_platform", "google_cloud_storage", "python" ]
stackoverflow_0074620163_google_bigquery_google_cloud_functions_google_cloud_platform_google_cloud_storage_python.txt
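To make the answer concrete, here is a rough sketch of what the Cloud Function side could look like with the BigQuery client library. It assumes a first-generation background function triggered by google.storage.object.finalize; the dataset and table names are placeholders, and the LOAD DATA statement mirrors the one in the question:

from google.cloud import bigquery

def on_new_file(event, context):
    # event carries the bucket and object name of the file that landed.
    client = bigquery.Client()
    uri = f"gs://{event['bucket']}/{event['name']}"
    sql = f"""
        LOAD DATA INTO `Case_Studies.loading_test`
        FROM FILES (format = 'CSV', uris = ['{uri}'])
    """
    client.query(sql).result()  # run the statement and wait for it to finish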
Q: Pandas: how to group rows with consecutively repeating values in columns? I have a data frame df ` df=pd.DataFrame([['1001',34.3],['1009',34.3],['1003',776],['1015',18.95],['1023',18.95],['1007',18.95],['1009',18.95],['1037',321.2],['1001',344.2],['1016',3.2],['1017',3.2],['1027',344.2]],columns=['id','amount']) id amount 0 1001 34.30 1 1009 34.30 2 1003 776.00 3 1015 18.95 4 1023 18.95 5 1007 18.95 6 1009 18.95 7 1037 321.20 8 1001 344.20 9 1016 3.20 10 1017 3.20 11 1027 344.20 ` I would like to have df_new, where rows with consecutively repeating values in column 'amount' are grouped and only the first row of each group is kept: ` id amount 0 1001 34.30 2 1003 776.00 3 1015 18.95 7 1037 321.20 8 1001 344.20 9 1016 3.20 11 1027 344.20 ` A: Here is one way to do it: # take a difference b/w the amount of two consecutive rows and then # choose rows where the difference is not zero out= df[df['amount'].diff().ne(0) ] out id amount 0 1001 34.30 2 1003 776.00 3 1015 18.95 7 1037 321.20 8 1001 344.20 9 1016 3.20 11 1027 344.20
Pandas: how to group rows with consecutively repeating values in columns?
I have a data frame df ` df=pd.DataFrame([['1001',34.3],['1009',34.3],['1003',776],['1015',18.95],['1023',18.95],['1007',18.95],['1009',18.95],['1037',321.2],['1001',344.2],['1016',3.2],['1017',3.2],['1027',344.2]],columns=['id','amount']) id amount 0 1001 34.30 1 1009 34.30 2 1003 776.00 3 1015 18.95 4 1023 18.95 5 1007 18.95 6 1009 18.95 7 1037 321.20 8 1001 344.20 9 1016 3.20 10 1017 3.20 11 1027 344.20 ` I would like to have df_new, where rows with consecutively repeating values in column 'amount' are grouped and only the first row of each group is kept: ` id amount 0 1001 34.30 2 1003 776.00 3 1015 18.95 7 1037 321.20 8 1001 344.20 9 1016 3.20 11 1027 344.20 `
[ "here is one way to do it\n\n# take a difference b/w the amount of two consecutive rows and then\n# choose rows where the difference is not zero\n\nout= df[df['amount'].diff().ne(0) ]\n\nout\n\n\nid amount\n0 1001 34.30\n2 1003 776.00\n3 1015 18.95\n7 1037 321.20\n8 1001 344.20\n9 1016 3.20\n11 1027 344.20\n\n" ]
[ 2 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074621127_pandas_python.txt
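The same idea can also be written with shift(), which keeps working when the compared column is not numeric (diff() needs numbers). A small sketch reusing a cut-down version of the question's frame:

import pandas as pd

df = pd.DataFrame({'id': ['1001', '1009', '1003'],
                   'amount': [34.3, 34.3, 776.0]})

# Keep a row whenever its value differs from the previous row's value.
# The first row is always kept because shift() yields NaN there.
out = df[df['amount'].ne(df['amount'].shift())]
print(out)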
Q: Contain in a list the values that appear in an array in a given percentage I have an array called "data" which contains the following information. [['amazon', 'phone', 'serious', 'mind', 'blown', 'serious', 'enjoy', 'use', 'applic', 'full', 'blown', 'websit', 'allow', 'quick', 'track', 'packag', 'descript', 'say'], ['would', 'say', 'app', 'real', 'thing', 'show', 'ghost', 'said', 'quot', 'orang', 'quot', 'ware', 'orang', 'cloth', 'app', 'adiquit', 'would', 'recsmend', 'want', 'talk', 'ghost'], ['love', 'play', 'backgammonthi', 'game', 'offer', 'varieti', 'difficulti', 'make', 'perfect', 'beginn', 'season', 'player'], The case is that I would like to save in a list, the values that appear at least 1% in this array. The closest approximation I have found is the following but it does not return what I need. Any ideas? import numpy_indexed as npi idx = [np.ones(len(a))*i for i, a in enumerate(tokens_list_train)] (rows, cols), table = npi.count_table(np.concatenate(idx), np.concatenate(tokens_list_train)) table = table / table.sum(axis=1, keepdims=True) print(table * 100)` A: let's see, we can remove the nesting using itertool.chain.from_iterable, but we also need the total length, which we can compute by making another generator to avoid looping twice, and we need to count the repetitions, which is done by a counter. from collections import Counter from itertools import chain total_length = 0 def sum_sublist_length(some_list): # to sum the lengths of the sub-lists global total_length for value in some_list: total_length += len(value) yield value counts = Counter(chain.from_iterable(sum_sublist_length(my_list))) items = [item for item in counts if counts[item]/total_length >= 0.01] print(items) ['amazon', 'phone', 'serious', 'mind', 'blown', 'enjoy', 'use', 'applic', 'full', 'websit', 'allow', 'quick', 'track', 'packag', 'descript', 'say', 'would', 'app', 'real', 'thing', 'show', 'ghost', 'said', 'quot', 'orang', 'ware', 'cloth', 'adiquit', 'recsmend', 'want', 'talk', 'love', 'play', 'backgammonthi', 'game', 'offer', 'varieti', 'difficulti', 'make', 'perfect', 'beginn', 'season', 'player'] A: Here's another way to generate a list of elements that appear 1% or more of the time, using pandas.DataFrame: import numpy as np import pandas as pd # == Define `flatten` function to combine objects with multi-level nesting ======= def flatten(iterable, base_type=None, levels=None): """Flatten an iterable with multiple levels of nesting. >>> iterable = [(1, 2), ([3, 4], [[5], [6]])] >>> list(flatten(iterable)) [1, 2, 3, 4, 5, 6] Binary and text strings are not considered iterable and will not be collapsed. To avoid collapsing other types, specify *base_type*: >>> iterable = ['ab', ('cd', 'ef'), ['gh', 'ij']] >>> list(flatten(iterable, base_type=tuple)) ['ab', ('cd', 'ef'), 'gh', 'ij'] Specify *levels* to stop flattening after a certain level: >>> iterable = [('a', ['b']), ('c', ['d'])] >>> list(flatten(iterable)) # Fully flattened ['a', 'b', 'c', 'd'] >>> list(flatten(iterable, levels=1)) # Only one level flattened ['a', ['b'], 'c', ['d']] """ def walk(node, level): if ( ((levels is not None) and (level > levels)) or isinstance(node, (str, bytes)) or ((base_type is not None) and isinstance(node, base_type)) ): yield node return try: tree = iter(node) except TypeError: yield node return else: for child in tree: yield from walk(child, level + 1) yield from walk(iterable, 0) # == Problem Solution ========================================================== # 1. 
Flatten the array into a single level list of elements, then convert it # to a `pandas.Series`. series_array = pd.Series(list(flatten(array))) # 2. Get the total number of elements in flattened list element_count = len(series_array) # 3. Use method `pandas.Series.value_counts() to count the number of times each # elements appears, then divide each element count by the # total number of elements in flattened list (`element_count`) elements = ( (series_array.value_counts()/element_count) # 4. Use `pandas.Series.loc` to select only values that appear more than # 1% of the time. # .loc[lambda xdf: xdf['rate_count'] >= 0.01, :] .loc[lambda value: value >= 0.01] # 5. Select the elements, and convert results to a list .index.to_list() ) print(elements) ['would', 'serious', 'blown', 'quot', 'orang', 'app', 'ghost', 'say', 'use', 'adiquit', 'enjoy', 'said', 'cloth', 'thing', 'applic', 'talk', 'player', 'track', 'recsmend', 'beginn', 'packag', 'allow', 'perfect', 'want', 'real', 'love', 'full', 'show', 'play', 'make', 'backgammonthi', 'mind', 'amazon', 'game', 'difficulti', 'offer', 'descript', 'websit', 'quick', 'season', 'phone', 'variety', 'ware']
Contain in a list the values that appear in an array in a given percentage
I have an array called "data" which contains the following information. [['amazon', 'phone', 'serious', 'mind', 'blown', 'serious', 'enjoy', 'use', 'applic', 'full', 'blown', 'websit', 'allow', 'quick', 'track', 'packag', 'descript', 'say'], ['would', 'say', 'app', 'real', 'thing', 'show', 'ghost', 'said', 'quot', 'orang', 'quot', 'ware', 'orang', 'cloth', 'app', 'adiquit', 'would', 'recsmend', 'want', 'talk', 'ghost'], ['love', 'play', 'backgammonthi', 'game', 'offer', 'varieti', 'difficulti', 'make', 'perfect', 'beginn', 'season', 'player'], The case is that I would like to save in a list, the values that appear at least 1% in this array. The closest approximation I have found is the following but it does not return what I need. Any ideas? import numpy_indexed as npi idx = [np.ones(len(a))*i for i, a in enumerate(tokens_list_train)] (rows, cols), table = npi.count_table(np.concatenate(idx), np.concatenate(tokens_list_train)) table = table / table.sum(axis=1, keepdims=True) print(table * 100)`
[ "let's see, we can remove the nesting using itertool.chain.from_iterable, but we also need the total length, which we can compute by making another generator to avoid looping twice, and we need to count the repetitions, which is done by a counter.\nfrom collections import Counter\nfrom itertools import chain\n\ntotal_length = 0\ndef sum_sublist_length(some_list): # to sum the lengths of the sub-lists\n global total_length\n for value in some_list:\n total_length += len(value)\n yield value\n \ncounts = Counter(chain.from_iterable(sum_sublist_length(my_list)))\nitems = [item for item in counts if counts[item]/total_length >= 0.01]\nprint(items)\n\n['amazon', 'phone', 'serious', 'mind', 'blown', 'enjoy', 'use', 'applic', 'full', 'websit', 'allow', 'quick', 'track', 'packag', 'descript', 'say', 'would', 'app', 'real', 'thing', 'show', 'ghost', 'said', 'quot', 'orang', 'ware', 'cloth', 'adiquit', 'recsmend', 'want', 'talk', 'love', 'play', 'backgammonthi', 'game', 'offer', 'varieti', 'difficulti', 'make', 'perfect', 'beginn', 'season', 'player']\n\n", "Here's another way to generate a list of elements that appear 1% or more of the time, using pandas.DataFrame:\n\nimport numpy as np\nimport pandas as pd\n\n\n# == Define `flatten` function to combine objects with multi-level nesting =======\ndef flatten(iterable, base_type=None, levels=None):\n \"\"\"Flatten an iterable with multiple levels of nesting.\n\n >>> iterable = [(1, 2), ([3, 4], [[5], [6]])]\n >>> list(flatten(iterable))\n [1, 2, 3, 4, 5, 6]\n\n Binary and text strings are not considered iterable and\n will not be collapsed.\n\n To avoid collapsing other types, specify *base_type*:\n\n >>> iterable = ['ab', ('cd', 'ef'), ['gh', 'ij']]\n >>> list(flatten(iterable, base_type=tuple))\n ['ab', ('cd', 'ef'), 'gh', 'ij']\n\n Specify *levels* to stop flattening after a certain level:\n\n >>> iterable = [('a', ['b']), ('c', ['d'])]\n >>> list(flatten(iterable)) # Fully flattened\n ['a', 'b', 'c', 'd']\n >>> list(flatten(iterable, levels=1)) # Only one level flattened\n ['a', ['b'], 'c', ['d']]\n\n \"\"\"\n def walk(node, level):\n if (\n ((levels is not None) and (level > levels))\n or isinstance(node, (str, bytes))\n or ((base_type is not None) and isinstance(node, base_type))\n ):\n yield node\n return\n try:\n tree = iter(node)\n except TypeError:\n yield node\n return\n else:\n for child in tree:\n yield from walk(child, level + 1)\n yield from walk(iterable, 0)\n\n\n# == Problem Solution ==========================================================\n# 1. Flatten the array into a single level list of elements, then convert it\n# to a `pandas.Series`.\nseries_array = pd.Series(list(flatten(array)))\n\n# 2. Get the total number of elements in flattened list\nelement_count = len(series_array)\n\n# 3. Use method `pandas.Series.value_counts() to count the number of times each\n# elements appears, then divide each element count by the\n# total number of elements in flattened list (`element_count`)\nelements = (\n (series_array.value_counts()/element_count)\n # 4. Use `pandas.Series.loc` to select only values that appear more than\n # 1% of the time.\n # .loc[lambda xdf: xdf['rate_count'] >= 0.01, :]\n .loc[lambda value: value >= 0.01]\n # 5. 
Select the elements, and convert results to a list\n .index.to_list()\n)\nprint(elements)\n['would', 'serious', 'blown', 'quot', 'orang', 'app', 'ghost', 'say', 'use', 'adiquit', 'enjoy', 'said', 'cloth', 'thing', 'applic', 'talk', 'player', 'track', 'recsmend', 'beginn', 'packag', 'allow', 'perfect', 'want', 'real', 'love', 'full', 'show', 'play', 'make', 'backgammonthi', 'mind', 'amazon', 'game', 'difficulti', 'offer', 'descript', 'websit', 'quick', 'season', 'phone', 'variety', 'ware']\n\n\n\n" ]
[ 0, 0 ]
[]
[]
[ "arrays", "python" ]
stackoverflow_0074620854_arrays_python.txt
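The running-total generator in the first answer can be avoided, since the Counter itself already knows the total once it is built. A compact sketch of the same thresholding idea; the data here is a stand-in for the token lists:

from collections import Counter
from itertools import chain

data = [['a', 'b', 'a'], ['b', 'c']]           # stand-in for the token lists
counts = Counter(chain.from_iterable(data))
total = sum(counts.values())                   # total number of tokens
frequent = [tok for tok, n in counts.items() if n / total >= 0.01]
print(frequent)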
Q: How can I dump YAML 1.1 booleans (y, n, T, F, etc) as quoted strings with ruamel.yaml Already read https://stackoverflow.com/a/61252180/1676006 and it doesn't seem to have solved my problem. Using ruamel.yaml: yaml = YAML(typ="safe") yaml.version = (1, 1) yaml.default_flow_style = None yaml.dump(stuff, sys.stdout) With a python dict stuff containing: { "key": "Y" } outputs %YAML 1.1 --- {key: Y} in the output yaml. In my actual code, I'm piping this into helm, which is backward compatible with yaml 1.1, meaning that I need to have anything that might be a boolean scalar quoted. What am I missing here? E: because I am a big dummy and forgot to update everything after trying a min repro - here's the REPL output: >>> from ruamel.yaml import YAML >>> yaml = YAML() >>> yaml.version = (1,1) >>> yaml.default_flow_style = None >>> stuff = { ... "key": "T" ... } >>> import sys >>> yaml.dump(stuff, sys.stdout) %YAML 1.1 --- {key: T} >>> E again: I am shamed. T isn't a scalar boolean, I was using the wrong test subject. This is not a place of honour. A: If your Python dict really consists of a single key-value pair, a mapping of a string to a string, as you indicate, ruamel.yaml will not dump the output you display, with or without typ='safe': import sys import ruamel.yaml data = { "key": "Y" } yaml = ruamel.yaml.YAML(typ='safe') yaml.version = (1, 1) yaml.default_flow_style = None yaml.dump(data, sys.stdout) yaml = ruamel.yaml.YAML() yaml.version = (1, 1) yaml.default_flow_style = None yaml.dump(data, sys.stdout) which gives: %YAML 1.1 --- {key: 'Y'} %YAML 1.1 --- {key: 'Y'} Not even changing the value to a boolean gets your result: import sys import ruamel.yaml data = dict(key=True) yaml = ruamel.yaml.YAML(typ='safe') yaml.version = (1, 1) yaml.default_flow_style = None yaml.dump(data, sys.stdout) yaml = ruamel.yaml.YAML() yaml.version = (1, 1) yaml.default_flow_style = None yaml.dump(data, sys.stdout) which gives: %YAML 1.1 --- {key: true} %YAML 1.1 --- {key: true} Always, always, always provide a minimal program that can be cut-and-pasted to get the results that you are seeing. That is how I produce my answers (the answers are actually coming from a multi-document YAML file processed by ryd, https://pypi.org/project/ryd/), and that is how you should consider providing your questions.
How can I dump YAML 1.1 booleans (y, n, T, F, etc) as quoted strings with ruamel.yaml
Already read https://stackoverflow.com/a/61252180/1676006 and it doesn't seem to have solved my problem. Using ruamel.yaml: yaml = YAML(typ="safe") yaml.version = (1, 1) yaml.default_flow_style = None yaml.dump(stuff, sys.stdout) With a python dict stuff containing: { "key": "Y" } outputs %YAML 1.1 --- {key: Y} in the output yaml. In my actual code, I'm piping this into helm, which is backward compatible with yaml 1.1, meaning that I need to have anything that might be a boolean scalar quoted. What am I missing here? E: because I am a big dummy and forgot to update everything after trying a min repro - here's the REPL output: >>> from ruamel.yaml import YAML >>> yaml = YAML() >>> yaml.version = (1,1) >>> yaml.default_flow_style = None >>> stuff = { ... "key": "T" ... } >>> import sys >>> yaml.dump(stuff, sys.stdout) %YAML 1.1 --- {key: T} >>> E again: I am shamed. T isn't a scalar boolean, I was using the wrong test subject. This is not a place of honour.
[ "If your Python dict really consists of a single key value pair, mapping of a string to a string, as you indicate, ruamel.yaml will not dump the output you display\nwith or without typ='safe')\nimport sys\nimport ruamel.yaml\n\ndata = {\n \"key\": \"Y\"\n}\nyaml = ruamel.yaml.YAML(typ='safe')\nyaml.version = (1, 1)\nyaml.default_flow_style = None\nyaml.dump(data, sys.stdout)\nyaml = ruamel.yaml.YAML()\nyaml.version = (1, 1)\nyaml.default_flow_style = None\nyaml.dump(data, sys.stdout)\n\nwhich gives:\n%YAML 1.1\n--- {key: 'Y'}\n%YAML 1.1\n--- {key: 'Y'}\n\nNot even changing the value to a boolean gets your result\nimport sys\nimport ruamel.yaml\n\ndata = dict(key=True)\nyaml = ruamel.yaml.YAML(typ='safe')\nyaml.version = (1, 1)\nyaml.default_flow_style = None\nyaml.dump(data, sys.stdout)\nyaml = ruamel.yaml.YAML()\nyaml.version = (1, 1)\nyaml.default_flow_style = None\nyaml.dump(data, sys.stdout)\n\nwhich gives:\n%YAML 1.1\n--- {key: true}\n%YAML 1.1\n--- {key: true}\n\nAlways, always, always provide a minimal program that can be cut-and-pasted to get the results that you are seeing.\nThat is how I produce my answers ( the answers are actually comming from a multidocument YAML file processed by (ryd)[https://pypi.org/project/ryd/)), and that is how you should consider providing your questions.\n" ]
[ 2 ]
[]
[]
[ "python", "ruamel.yaml" ]
stackoverflow_0074620398_python_ruamel.yaml.txt
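If a downstream YAML 1.1 consumer still needs the quoting forced explicitly, ruamel.yaml lets you mark individual values as quoted scalars. A minimal sketch:

import sys
import ruamel.yaml
from ruamel.yaml.scalarstring import SingleQuotedScalarString as sq

yaml = ruamel.yaml.YAML()
yaml.version = (1, 1)

# Wrap anything YAML 1.1 would read as a boolean (y/n/yes/no/on/off/...)
# so it is always emitted with quotes.
data = {"key": sq("Y"), "other": sq("n")}
yaml.dump(data, sys.stdout)  # both values come out single-quoted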
Q: S3 Python - Multipart upload to s3 with presigned part urls I'm unsuccessfully trying to do a multipart upload with pre-signed part URLs. This is the procedure I follow (1-3 is on the server-side, 4 is on the client-side): Instantiate boto client. import boto3 from botocore.client import Config s3 = boto3.client( "s3", region_name=aws.default_region, aws_access_key_id=aws.access_key_id, aws_secret_access_key=aws.secret_access_key, config=Config(signature_version="s3v4") ) Initiate multipart upload. upload = s3.create_multipart_upload( Bucket=AWS_S3_BUCKET, Key=key, Expires=datetime.now() + timedelta(days=2), ) upload_id = upload["UploadId"] Create a pre-signed URL for the part upload. part = generate_part_object_from_client_submited_data(...) part.presigned_url = s3.generate_presigned_url( ClientMethod="upload_part", Params={ "Bucket": AWS_S3_BUCKET, "Key": upload_key, "UploadId": upload_id, "PartNumber": part.no, "ContentLength": part.size, "ContentMD5": part.md5, }, ExpiresIn=3600, # 1h HttpMethod="PUT", ) Return the pre-signed URL to the client. On the client try to upload the part using requests. part = receive_part_object_from_server(...) with io.open(filename, "rb") as f: f.seek(part.offset) buffer = io.BytesIO(f.read(part.size)) r = requests.put( part.presigned_url, data=buffer, headers={ "Content-Length": str(part.size), "Content-MD5": part.md5, "Host": "AWS_S3_BUCKET.s3.amazonaws.com", }, ) And when I try to upload I either get: urllib3.exceptions.ProtocolError: ('Connection aborted.', BrokenPipeError(32, 'Broken pipe')) Or: <?xml version="1.0" encoding="UTF-8"?> <Error> <Code>NoSuchUpload</Code> <Message> The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed. </Message> <UploadId>CORRECT_UPLOAD_ID</UploadI> <RequestId>...</RequestId> <HostId>...</HostId> </Error> Even though the upload still exist and I can list it. Can anyone tell me what am I doing wrong? A: Did you try pre-signed POST instead? Here is the AWS Python reference for it: https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/s3-presigned-post.html This will potentially work around proxy limitations from the client perspective, if any: As a last resort, you can always try the good old REST API, although I don't think the issue is in your code nor in boto3: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingRESTAPImpUpload.html A: Here is a command-line utility that does exactly the same thing; you might want to give it a try and see if it works. If it does, it will be easy to find the difference between your code and theirs. If it doesn't, I would double-check the whole process. Here is an example of how to upload a file using the aws command line: https://aws.amazon.com/premiumsupport/knowledge-center/s3-multipart-upload-cli/?nc1=h_ls Actually, if it does work, i.e. you can replicate the upload using aws s3 commands, then we need to focus on the use of the presigned URL. You can check how the URL should look here: https://github.com/aws/aws-sdk-js/issues/468 https://github.com/aws/aws-sdk-js/issues/1603 These are JS SDK issues, but the guys there talk about the raw URLs and parameters, so you should be able to spot the difference between your URLs and the URLs that are working. Another option is to give this script a try; it uses JS to upload a file using presigned URLs from a web browser.
https://github.com/prestonlimlianjie/aws-s3-multipart-presigned-upload If it works you can inspect the communication and observe the exact URLs that are being used to upload each part, which you can compare with the urls your system is generating. Btw. once you have a working url for multipart upload you can use the aws s3 presign url to obtain the persigned url, this should let you finish the upload using just curl to have full control over the upload process. A: Presigned URL Approach You can study AWS S3 Presigned URLs for Python SDK (Boto3) and how to use multipart upload APIs at the following links: Amazon S3 Examples >> Presigned URLs Python Code Samples for Amazon S3 >> generate_presigned_url.py Boto3 - S3 - create_multipart_upload Boto3 - S3 - complete_multipart_upload Transfer Manager Approach Boto3 provides interfaces for managing various types of transfers with S3 to automatically manage multipart and non-multipart uploads. To ensure that multipart uploads only happen when absolutely necessary, you can use the multipart_threshold configuration parameter. Try out the following code for Transfer Manager approach: import boto3 from boto3.s3.transfer import TransferConfig import botocore from botocore.client import Config from retrying import retry import sysdef upload(source, dest, bucket_name): try: conn = boto3.client(service_name="s3", aws_access_key_id=[key], aws_secret_access_key=[key], endpoint_url=[endpoint], config=Config(signature_version='s3') config = TransferConfig(multipart_threshold=1024*20, max_concurrency=3, multipart_chunksize=1024*20, use_threads=True) conn.upload_file(Filename=source, Bucket=bucket_name, Key=dest, Config=config) except Exception as e: raise Exception(str(e))def download(src, dest, bucket_name): try: conn = boto3.client(service_name="s3", aws_access_key_id=[key], aws_secret_access_key=[key], endpoint_url=[endpoint], config=Config(signature_version='s3') config = TransferConfig(multipart_threshold=1024*20, max_concurrency=3, multipart_chunksize=1024*20, use_threads=True) conn.download_file(bucket=bucket_name, key=src, filename=dest, Config=config) except AWSConnectionError as e: raise AWSConnectionError("Unable to connect to AWS") except Exception as e: raise Exception(str(e))if __name__ == '__main__': upload(source, dest, bucket_name) download(src, dest, bucket_name) AWS STS Approach You can also follow the AWS Security Token Service (STS) approach to generate a set of temporary credentials to complete your task instead. 
Try out the following code for the AWS STS approach: import json from uuid import uuid4 import boto3 def get_upload_credentials_for(bucket, key, username): arn = 'arn:aws:s3:::%s/%s' % (bucket, key) policy = {"Version": "2012-10-17", "Statement": [{ "Sid": "Stmt1", "Effect": "Allow", "Action": ["s3:PutObject"], "Resource": [arn], }]} client = boto3.client('sts') response = client.get_federation_token( Name=username, Policy=json.dumps(policy)) return response['Credentials'] def client_from_credentials(service, credentials): return boto3.client( service, aws_access_key_id=credentials['AccessKeyId'], aws_secret_access_key=credentials['SecretAccessKey'], aws_session_token=credentials['SessionToken'], ) def example(): bucket = 'mybucket' filename = '/path/to/file' key = uuid4().hex print(key) prefix = 'tmp_upload_' username = prefix + key[:32 - len(prefix)] print(username) assert len(username) <= 32 # required by the AWS API credentials = get_upload_credentials_for(bucket, key, username) client = client_from_credentials('s3', credentials) client.upload_file(filename, bucket, key) client.upload_file(filename, bucket, key + 'bob') # fails example() MinIO Client SDK for Python Approach You can use MinIO Client SDK for Python which implements simpler APIs to avoid the gritty details of multipart upload. For example, you can use a simple fput_object(bucket_name, object_name, file_path, content_type) API to do the need full. Try out the following code for MinIO Client SDK for Python approach: from minio import Minio from minio.error import ResponseError s3client = Minio('s3.amazonaws.com', access_key='YOUR-ACCESSKEYID', secret_key='YOUR-SECRETACCESSKEY') # Put an object 'my-objectname' with contents from 'my-filepath' try: s3client.fput_object('my-bucketname', 'my-objectname', 'my-filepath') except ResponseError as err: print(err) A: Make sure that when you connect to the S3 endpoints you use proper s3 domain name (which should include region!) Bucket name in the header name is not enough. The easiest thing to debug this is just try to generate presign URL with AWS CLI with --debug option aws s3 presign s3://your-bucket/file --expires-in 604800 --region eu-central-1 --debug and then just use it with curl curl -X GET "https://your-bucket.s3.eu-central-1.amazonaws.com/file Normally aws client can redirect based on bucket name (it contains quite a lot of logic), but http client will not so you need to talk with proper endpoints. In other words, change: "Host": "AWS_S3_BUCKET.s3.amazonaws.com" to "Host": "AWS_S3_BUCKET.s3.REGION.amazonaws.com"
S3 Python - Multipart upload to s3 with presigned part urls
I'm unsuccessfully trying to do a multipart upload with pre-signed part URLs. This is the procedure I follow (1-3 is on the server-side, 4 is on the client-side): Instantiate boto client. import boto3 from botocore.client import Config s3 = boto3.client( "s3", region_name=aws.default_region, aws_access_key_id=aws.access_key_id, aws_secret_access_key=aws.secret_access_key, config=Config(signature_version="s3v4") ) Initiate multipart upload. upload = s3.create_multipart_upload( Bucket=AWS_S3_BUCKET, Key=key, Expires=datetime.now() + timedelta(days=2), ) upload_id = upload["UploadId"] Create a pre-signed URL for the part upload. part = generate_part_object_from_client_submited_data(...) part.presigned_url = s3.generate_presigned_url( ClientMethod="upload_part", Params={ "Bucket": AWS_S3_BUCKET, "Key": upload_key, "UploadId": upload_id, "PartNumber": part.no, "ContentLength": part.size, "ContentMD5": part.md5, }, ExpiresIn=3600, # 1h HttpMethod="PUT", ) Return the pre-signed URL to the client. On the client try to upload the part using requests. part = receive_part_object_from_server(...) with io.open(filename, "rb") as f: f.seek(part.offset) buffer = io.BytesIO(f.read(part.size)) r = requests.put( part.presigned_url, data=buffer, headers={ "Content-Length": str(part.size), "Content-MD5": part.md5, "Host": "AWS_S3_BUCKET.s3.amazonaws.com", }, ) And when I try to upload I either get: urllib3.exceptions.ProtocolError: ('Connection aborted.', BrokenPipeError(32, 'Broken pipe')) Or: <?xml version="1.0" encoding="UTF-8"?> <Error> <Code>NoSuchUpload</Code> <Message> The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed. </Message> <UploadId>CORRECT_UPLOAD_ID</UploadI> <RequestId>...</RequestId> <HostId>...</HostId> </Error> Even though the upload still exist and I can list it. Can anyone tell me what am I doing wrong?
[ "Did you try pre-signed POST instead? Here is the AWS Python reference for it: https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/s3-presigned-post.html\nThis will potentially workaround proxy limitations from client perspective, if any:\n\nAs a last resort, you can always try good old REST API, although I don't think the issue is in your code and neither in boto3: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingRESTAPImpUpload.html\n", "Here is a command utilty that does exactly the same thing, you might want to give it at try and see if it works. If it does it will be easy to find the difference between your code and theirs. If it doesn't I would double check the whole process. Here is an example how to upload a file using aws commandline https://aws.amazon.com/premiumsupport/knowledge-center/s3-multipart-upload-cli/?nc1=h_ls\nActually if it does work. Ie you can replecate the upload using aws s3 commands then we need to focus on the use of persigned url. You can check how the url should look like here:\nhttps://github.com/aws/aws-sdk-js/issues/468\nhttps://github.com/aws/aws-sdk-js/issues/1603\nThis are js sdk but the guys there talk about the raw urls and parameters so you should be able to spot the difference between your urls and the urls that are working. \nAnother option is to give a try this script, it uses js to upload file using persigned urls from web browser. \nhttps://github.com/prestonlimlianjie/aws-s3-multipart-presigned-upload\nIf it works you can inspect the communication and observe the exact URLs that are being used to upload each part, which you can compare with the urls your system is generating.\nBtw. once you have a working url for multipart upload you can use the aws s3 presign url to obtain the persigned url, this should let you finish the upload using just curl to have full control over the upload process.\n", "Presigned URL Approach\nYou can study AWS S3 Presigned URLs for Python SDK (Boto3) and how to use multipart upload APIs at the following links:\n\nAmazon S3 Examples >> Presigned URLs\nPython Code Samples for Amazon S3 >> generate_presigned_url.py\nBoto3 - S3 - create_multipart_upload\nBoto3 - S3 - complete_multipart_upload\n\nTransfer Manager Approach\nBoto3 provides interfaces for managing various types of transfers with S3 to automatically manage multipart and non-multipart uploads. 
To ensure that multipart uploads only happen when absolutely necessary, you can use the multipart_threshold configuration parameter.\nTry out the following code for Transfer Manager approach:\nimport boto3\nfrom boto3.s3.transfer import TransferConfig\nimport botocore\nfrom botocore.client import Config\nfrom retrying import retry\nimport sysdef upload(source, dest, bucket_name):\n try:\n conn = boto3.client(service_name=\"s3\", \n aws_access_key_id=[key], \n aws_secret_access_key=[key], \n endpoint_url=[endpoint],\n config=Config(signature_version='s3')\n config = TransferConfig(multipart_threshold=1024*20, \n max_concurrency=3, \n multipart_chunksize=1024*20, \n use_threads=True)\n conn.upload_file(Filename=source, Bucket=bucket_name, \n Key=dest, Config=config)\n except Exception as e:\n raise Exception(str(e))def download(src, dest, bucket_name):\n try:\n conn = boto3.client(service_name=\"s3\", \n aws_access_key_id=[key], \n aws_secret_access_key=[key], \n endpoint_url=[endpoint],\n config=Config(signature_version='s3')\n config = TransferConfig(multipart_threshold=1024*20, \n max_concurrency=3, \n multipart_chunksize=1024*20, \n use_threads=True)\n conn.download_file(bucket=bucket_name, key=src, \n filename=dest, Config=config) \n except AWSConnectionError as e:\n raise AWSConnectionError(\"Unable to connect to AWS\")\n except Exception as e:\n raise Exception(str(e))if __name__ == '__main__': \nupload(source, dest, bucket_name)\ndownload(src, dest, bucket_name)\n\nAWS STS Approach\nYou can also follow the AWS Security Token Service (STS) approach to generate a set of temporary credentials to complete your task instead.\nTry out the following code for the AWS STS approach:\nimport json\nfrom uuid import uuid4\n\nimport boto3\n\n\ndef get_upload_credentials_for(bucket, key, username):\n arn = 'arn:aws:s3:::%s/%s' % (bucket, key)\n policy = {\"Version\": \"2012-10-17\",\n \"Statement\": [{\n \"Sid\": \"Stmt1\",\n \"Effect\": \"Allow\",\n \"Action\": [\"s3:PutObject\"],\n \"Resource\": [arn],\n }]}\n client = boto3.client('sts')\n response = client.get_federation_token(\n Name=username, Policy=json.dumps(policy))\n return response['Credentials']\n\n\ndef client_from_credentials(service, credentials):\n return boto3.client(\n service,\n aws_access_key_id=credentials['AccessKeyId'],\n aws_secret_access_key=credentials['SecretAccessKey'],\n aws_session_token=credentials['SessionToken'],\n )\n\n\ndef example():\n bucket = 'mybucket'\n filename = '/path/to/file'\n\n key = uuid4().hex\n print(key)\n\n prefix = 'tmp_upload_'\n username = prefix + key[:32 - len(prefix)]\n print(username)\n assert len(username) <= 32 # required by the AWS API\n\n credentials = get_upload_credentials_for(bucket, key, username)\n client = client_from_credentials('s3', credentials)\n client.upload_file(filename, bucket, key)\n client.upload_file(filename, bucket, key + 'bob') # fails\n\n\nexample()\n\nMinIO Client SDK for Python Approach\nYou can use MinIO Client SDK for Python which implements simpler APIs to avoid the gritty details of multipart upload.\nFor example, you can use a simple fput_object(bucket_name, object_name, file_path, content_type) API to do the need full.\nTry out the following code for MinIO Client SDK for Python approach:\nfrom minio import Minio\nfrom minio.error import ResponseError\n\ns3client = Minio('s3.amazonaws.com',\n access_key='YOUR-ACCESSKEYID',\n secret_key='YOUR-SECRETACCESSKEY')\n\n# Put an object 'my-objectname' with contents from 'my-filepath'\n\ntry: \n 
s3client.fput_object('my-bucketname', 'my-objectname', 'my-filepath')\nexcept ResponseError as err:\n print(err)\n\n", "Make sure that when you connect to the S3 endpoints you use proper s3 domain name (which should include region!)\nBucket name in the header name is not enough.\nThe easiest thing to debug this is just try to generate presign URL with AWS CLI with --debug option\naws s3 presign s3://your-bucket/file --expires-in 604800 --region eu-central-1 --debug\n\nand then just use it with curl\ncurl -X GET \"https://your-bucket.s3.eu-central-1.amazonaws.com/file\n\nNormally aws client can redirect based on bucket name (it contains quite a lot of logic), but http client will not so you need to talk with proper endpoints.\nIn other words, change:\n\"Host\": \"AWS_S3_BUCKET.s3.amazonaws.com\"\n\nto\n\"Host\": \"AWS_S3_BUCKET.s3.REGION.amazonaws.com\"\n\n" ]
[ 1, 1, 1, 0 ]
[]
[]
[ "amazon_s3", "amazon_web_services", "boto3", "python" ]
stackoverflow_0057929414_amazon_s3_amazon_web_services_boto3_python.txt
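One step the question stops short of: after every part has been PUT through its presigned URL, the upload still has to be completed server-side with each part's ETag (returned in the response headers of each PUT). A sketch that reuses the s3 client and variables from the question; the etags dict and its values are illustrative:

# Collect each part's ETag on the client, e.g. etags[part.no] = r.headers["ETag"],
# send them back to the server, then close the upload there:
etags = {1: '"abc123"', 2: '"def456"'}  # illustrative values

s3.complete_multipart_upload(
    Bucket=AWS_S3_BUCKET,
    Key=upload_key,
    UploadId=upload_id,
    MultipartUpload={
        "Parts": [{"PartNumber": n, "ETag": tag} for n, tag in sorted(etags.items())]
    },
)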
Q: How do I make a function triggered by pressing right click? I'm working on a game involving turtle for a school project, and I want to end it when the player right-clicks on anything. I've been researching for ages but I can't find how to key-bind a right click, and every module I import other than turtle won't work. I tried importing pyautogui, pydirectinput, tkinter and many more to try and find workarounds, because it has to be a right click, but the imports won't work. I've tried treating it like a normal key bind, but that won't work; maybe I'm not using capitals and underscores correctly. I don't know how to do it and it's due in 7 hours. Please help. A: I don't know turtle much, but I think this should work: def bar(x, y): # do stuff here; the handler receives the click coordinates # Right click is usually button 3 (button 2 on macOS) turtle.onscreenclick(bar, 3, True) It will bind the right-click event to this bar function. Tell me if it does not work! Edit: I had used the wrong method sorry! It's actually "onscreenclick".
How do I make a function triggered by pressing right click?
I'm working on a game involving turtle for a school project and I want to end it when the player right-clicks on anything, but I've been researching for ages and I can't find how to key-bind a right click, and every module I import other than turtle won't work. I tried importing pyautogui, pydirectinput, tkinter and many more to try and find workarounds, because it has to be a right click, but the imports won't work. I've tried treating it like a normal key bind but that won't work; maybe I'm not using capitals and underscores correctly. I don't know how to do it and it's due in 7 hours. Please help.
[ "I don't know turtle much, but I think this should work:\ndef bar():\n # do stuff here\n\n# Test this with either a 2 or a 3, idk which one it is\nturtle.onscreenclick(bar, 2, True) \n\nIt will bind the right click event to this bar function.\nTell me if it does not work!\nEdit: I had used the wrong method sorry!\nIt's actually \"onscreenclick\".\n" ]
[ 0 ]
[]
[]
[ "keyboard_events", "python", "python_3.x", "right_click" ]
stackoverflow_0074620773_keyboard_events_python_python_3.x_right_click.txt
Q: python AttributeError: '_tkinter.tkapp' object has no attribute 'balance_label' I can't seem to figure out why I can't update the label text. But after a day... I figured I'd ask for help self.balance_label['text'] = "Text updated" gives me: AttributeError: '_tkinter.tkapp' object has no attribute 'balance_label' I'm using 2 separate script. file 1 import sys import tkinter as tk import tkinter.ttk as ttk from tkinter.constants import * import os.path _script = sys.argv[0] _location = os.path.dirname(_script) import trade_helper_support _bgcolor = '#d9d9d9' # X11 color: 'gray85' _fgcolor = '#000000' # X11 color: 'black' _compcolor = 'gray40' # X11 color: #666666 _ana1color = '#c3c3c3' # Closest X11 color: 'gray76' _ana2color = 'beige' # X11 color: #f5f5dc _tabfg1 = 'black' _tabfg2 = 'black' _tabbg1 = 'grey75' _tabbg2 = 'grey89' _bgmode = 'light' class root: def __init__(self, top=None): '''This class configures and populates the toplevel window. top is the toplevel containing window.''' top.geometry("600x450+468+138") top.minsize(120, 1) top.maxsize(3844, 1061) top.resizable(1, 1) top.title("Trade Helper v1.0") top.configure(background="#d9d9d9") self.top = top self.menubar = tk.Menu(top,font="TkMenuFont",bg=_bgcolor,fg=_fgcolor) top.configure(menu = self.menubar) self.Get_balance_button = tk.Button(self.top) self.Get_balance_button.place(relx=0.283, rely=0.689, height=44 , width=127) self.Get_balance_button.configure(activebackground="beige") self.Get_balance_button.configure(activeforeground="black") self.Get_balance_button.configure(background="#e35e8c") self.Get_balance_button.configure(command=trade_helper_support.get_balance) self.Get_balance_button.configure(compound='left') self.Get_balance_button.configure(disabledforeground="#a3a3a3") self.Get_balance_button.configure(foreground="#000000") self.Get_balance_button.configure(highlightbackground="#d9d9d9") self.Get_balance_button.configure(highlightcolor="black") self.Get_balance_button.configure(pady="0") self.Get_balance_button.configure(text='''get balance''') self.balance_label = tk.Label(self.top) self.balance_label.place(relx=0.1, rely=0.711, height=31, width=74) self.balance_label.configure(anchor='w') self.balance_label.configure(background="#ffffff") self.balance_label.configure(compound='left') self.balance_label.configure(disabledforeground="#a3a3a3") self.balance_label.configure(foreground="#000000") self.balance_label.configure(text='''Balance''') def settext(self): print('here') self.balance_label['text'] = "Text updated" def start_up(): trade_helper_support.main() if __name__ == '__main__': trade_helper_support.main() file 2 import sys import tkinter as tk import tkinter.ttk as ttk from tkinter.constants import * import trade_helper def main(*args): '''Main entry point for the application.''' global root root = tk.Tk() root.protocol( 'WM_DELETE_WINDOW' , root.destroy) # Creates a toplevel widget. global _top1, _w1 _top1 = root _w1 = trade_helper.root(_top1) root.mainloop() def get_balance(*args): print('trade_helper_support.get_balance') trade_helper.root.settext(root) for arg in args: print (' another arg:', arg) sys.stdout.flush() if __name__ == '__main__': trade_helper.start_up() I'd like the label text to change on button click. It works when my function is inside the class but not the way I have it now. i want file2 to to hold main code and file 1 to be the GUI only code A: Finally found the solution... def get_balance(*args): print('trade_helper_support.get_balance') _w1.balance_label['text'] = "Text updated"
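A slightly cleaner alternative to the accepted global-variable fix is to hand the GUI object to the support module explicitly, so settext() is called as a bound method on the real instance (a hypothetical sketch; register_gui is not part of the original code):

# trade_helper_support.py -- hypothetical addition
_gui = None

def register_gui(gui):
    # called once from main() after: _w1 = trade_helper.root(_top1)
    global _gui
    _gui = gui

def get_balance(*args):
    print('trade_helper_support.get_balance')
    _gui.settext()  # bound call, so self.balance_label exists

The original error came from trade_helper.root.settext(root): the unbound method received the tk.Tk() window as self, and that object has no balance_label attribute.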
python AttributeError: '_tkinter.tkapp' object has no attribute 'balance_label'
I can't seem to figure out why I can't update the label text. But after a day... I figured I'd ask for help self.balance_label['text'] = "Text updated" gives me: AttributeError: '_tkinter.tkapp' object has no attribute 'balance_label' I'm using 2 separate script. file 1 import sys import tkinter as tk import tkinter.ttk as ttk from tkinter.constants import * import os.path _script = sys.argv[0] _location = os.path.dirname(_script) import trade_helper_support _bgcolor = '#d9d9d9' # X11 color: 'gray85' _fgcolor = '#000000' # X11 color: 'black' _compcolor = 'gray40' # X11 color: #666666 _ana1color = '#c3c3c3' # Closest X11 color: 'gray76' _ana2color = 'beige' # X11 color: #f5f5dc _tabfg1 = 'black' _tabfg2 = 'black' _tabbg1 = 'grey75' _tabbg2 = 'grey89' _bgmode = 'light' class root: def __init__(self, top=None): '''This class configures and populates the toplevel window. top is the toplevel containing window.''' top.geometry("600x450+468+138") top.minsize(120, 1) top.maxsize(3844, 1061) top.resizable(1, 1) top.title("Trade Helper v1.0") top.configure(background="#d9d9d9") self.top = top self.menubar = tk.Menu(top,font="TkMenuFont",bg=_bgcolor,fg=_fgcolor) top.configure(menu = self.menubar) self.Get_balance_button = tk.Button(self.top) self.Get_balance_button.place(relx=0.283, rely=0.689, height=44 , width=127) self.Get_balance_button.configure(activebackground="beige") self.Get_balance_button.configure(activeforeground="black") self.Get_balance_button.configure(background="#e35e8c") self.Get_balance_button.configure(command=trade_helper_support.get_balance) self.Get_balance_button.configure(compound='left') self.Get_balance_button.configure(disabledforeground="#a3a3a3") self.Get_balance_button.configure(foreground="#000000") self.Get_balance_button.configure(highlightbackground="#d9d9d9") self.Get_balance_button.configure(highlightcolor="black") self.Get_balance_button.configure(pady="0") self.Get_balance_button.configure(text='''get balance''') self.balance_label = tk.Label(self.top) self.balance_label.place(relx=0.1, rely=0.711, height=31, width=74) self.balance_label.configure(anchor='w') self.balance_label.configure(background="#ffffff") self.balance_label.configure(compound='left') self.balance_label.configure(disabledforeground="#a3a3a3") self.balance_label.configure(foreground="#000000") self.balance_label.configure(text='''Balance''') def settext(self): print('here') self.balance_label['text'] = "Text updated" def start_up(): trade_helper_support.main() if __name__ == '__main__': trade_helper_support.main() file 2 import sys import tkinter as tk import tkinter.ttk as ttk from tkinter.constants import * import trade_helper def main(*args): '''Main entry point for the application.''' global root root = tk.Tk() root.protocol( 'WM_DELETE_WINDOW' , root.destroy) # Creates a toplevel widget. global _top1, _w1 _top1 = root _w1 = trade_helper.root(_top1) root.mainloop() def get_balance(*args): print('trade_helper_support.get_balance') trade_helper.root.settext(root) for arg in args: print (' another arg:', arg) sys.stdout.flush() if __name__ == '__main__': trade_helper.start_up() I'd like the label text to change on button click. It works when my function is inside the class but not the way I have it now. i want file2 to to hold main code and file 1 to be the GUI only code
[ "Finally found the solution...\ndef get_balance(*args):\n\n print('trade_helper_support.get_balance')\n _w1.balance_label['text'] = \"Text updated\"\n\n" ]
[ 0 ]
[]
[]
[ "attributes", "python" ]
stackoverflow_0074620906_attributes_python.txt
Q: Wondering the best way to implement a score function to my anagrams game using OOP in Python I am trying to implement an anagram game in python. It currently gives the player 7 tiles from a "Scrabble Bag". I want to add some type of scoring function but I am struggling on Should I implement a score function in one of the classes? or in a def score() function under main... and If i make a function under main how do I retrieve and edit data in the "Bag" class since it returns object at instead of something such as the letters in a players "hand" (or tiles they posses) import random N = 7 class Tile: def __init__(self, letter, value): self.letter = letter self.value = value def show(self): print(f"{self.letter} : {self.value}") class Bag: def __init__(self): self.tiles = [] self.build() def build(self): templist = {"A": [1, 9], "B": [3, 2], "C": [3, 2], "D": [2, 4], "E": [1, 12], "F": [4, 2], "G": [2, 3], "H": [4, 2], "I": [1, 9], "J": [8, 1], "K": [5, 1], "L": [1, 4], "M": [3, 2], "N": [1, 6], "O": [1, 8], "P": [3, 2], "Q": [10, 1], "R": [1, 6], "S": [1, 4], "T": [1, 6], "U": [1, 4], "V": [4, 2], "W": [4, 2], "X": [8, 1], "Y": [4, 2], "Z": [10, 1], } for key in templist: for i in range(templist[key][1]): self.tiles.append(Tile(key, templist[key][0])) def show(self): for tile in self.tiles: tile.show() def shuffle(self): random.shuffle(self.tiles) def drawTile(self): return self.tiles.pop() def returnTile(self): ... class Player: def __init__(self, name): self.name = name self.hand = [] def draw(self, bag): for i in range(7): self.hand.append(bag.drawTile()) return self def showHand(self): for tile in self.hand: tile.show() def scoreHand(self, play): for tile in self.showHand(): print(tile) def main(): bag = Bag() bag.shuffle() p1 = Player("p1") p1.draw(bag) p1.showHand() if __name__ == "__main__": main() I am struggling with using classes as it is relatively new to me, I am not understanding how to retrieve data such as the player hand to use in main, I only can "print" it using my showHand function. I want to be able to compare a users play from input to characters in the hand to make sure it is a "valid play" but I am missing some pieces in order to get a string of characters to compare the two. When I call the hand from the class it is giving me locations in memory rather than tile objects A: I assume that the score you want to compute is the current value of the tiles in a particular Player's hand. You mention the need to access an instance of Bag. I don't see where this is necessary or desirable. Once you have built up a Player with a hand, you only need to work with that instance of Player. The current score of a Player's hand is an attribute specific to that Player, so it makes perfect sense to add a scoreHand() method to your Player class: class Player: ... def scoreHand(self): return sum([tile.value for tile in self.hand]) If you want to perform operations on a Player's tiles outside of the logic for the class, you can add one or more methods to Player to provide whatever it is that you want to extract from a Player instance. Given that your Tile class is a public top-level class, you could have a method that returns the list of Tiles for the Player: class Player: ... def getTiles(self): return self.hand If instead, you wanted just the names/letters of a Player's tiles, you could return a list of strings: class Player: ... def getLetters(self): return [tile.letter for tile in self.hand]
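On the validity check asked about at the end: one hedged way to test whether a word can be spelled from the tiles in hand (a sketch, assuming the Player.getLetters() helper suggested in the answer) is a multiset comparison with collections.Counter:

from collections import Counter

def is_valid_play(word, letters):
    # letters: list of single-character strings, e.g. from getLetters()
    # The play is valid if it never uses a letter more times than the hand holds it.
    need = Counter(word.upper())
    have = Counter(letters)
    return all(have[ch] >= n for ch, n in need.items())

# Example usage: is_valid_play(input("Your word: "), p1.getLetters())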
Wondering the best way to implement a score function to my anagrams game using OOP in Python
I am trying to implement an anagram game in python. It currently gives the player 7 tiles from a "Scrabble Bag". I want to add some type of scoring function but I am struggling on Should I implement a score function in one of the classes? or in a def score() function under main... and If i make a function under main how do I retrieve and edit data in the "Bag" class since it returns object at instead of something such as the letters in a players "hand" (or tiles they posses) import random N = 7 class Tile: def __init__(self, letter, value): self.letter = letter self.value = value def show(self): print(f"{self.letter} : {self.value}") class Bag: def __init__(self): self.tiles = [] self.build() def build(self): templist = {"A": [1, 9], "B": [3, 2], "C": [3, 2], "D": [2, 4], "E": [1, 12], "F": [4, 2], "G": [2, 3], "H": [4, 2], "I": [1, 9], "J": [8, 1], "K": [5, 1], "L": [1, 4], "M": [3, 2], "N": [1, 6], "O": [1, 8], "P": [3, 2], "Q": [10, 1], "R": [1, 6], "S": [1, 4], "T": [1, 6], "U": [1, 4], "V": [4, 2], "W": [4, 2], "X": [8, 1], "Y": [4, 2], "Z": [10, 1], } for key in templist: for i in range(templist[key][1]): self.tiles.append(Tile(key, templist[key][0])) def show(self): for tile in self.tiles: tile.show() def shuffle(self): random.shuffle(self.tiles) def drawTile(self): return self.tiles.pop() def returnTile(self): ... class Player: def __init__(self, name): self.name = name self.hand = [] def draw(self, bag): for i in range(7): self.hand.append(bag.drawTile()) return self def showHand(self): for tile in self.hand: tile.show() def scoreHand(self, play): for tile in self.showHand(): print(tile) def main(): bag = Bag() bag.shuffle() p1 = Player("p1") p1.draw(bag) p1.showHand() if __name__ == "__main__": main() I am struggling with using classes as it is relatively new to me, I am not understanding how to retrieve data such as the player hand to use in main, I only can "print" it using my showHand function. I want to be able to compare a users play from input to characters in the hand to make sure it is a "valid play" but I am missing some pieces in order to get a string of characters to compare the two. When I call the hand from the class it is giving me locations in memory rather than tile objects
[ "I assume that the score you want to compute is the current value of the tiles in a particular Player's hand. You mention the need to access an instance of Bag. I don't see where this is necessary or desirable. Once you have built up a Player with a hand, you only need to work with that instance of Player.\nThe current score of a Player's hand is an attribute specific to that Player, so it makes perfect sense to add a scoreHand() method to your Player class:\nclass Player:\n ...\n def scoreHand(self):\n return sum([tile.value for tile in self.hand])\n\nIf you want to perform operations on a Player's tiles outside of the logic for the class, you can add one or more methods to Player to provide whatever it is that you want to extract from a Player instance. Given that your Tile class is a public top-level class, you could have a method that returns the list of Tiles for the Player:\nclass Player:\n ...\n def getTiles(self):\n return self.hand\n\nIf instead, you wanted just the names/letters of a Player's tiles, you could return a list of strings:\nclass Player:\n ...\n def getLetters(self):\n return [tile.letter for tile in self.hand]\n\n" ]
[ 0 ]
[]
[]
[ "oop", "python", "python_class" ]
stackoverflow_0074621098_oop_python_python_class.txt
Q: How to subtract values in a list
I am writing a function that works as follows: it receives a list of numbers, e.g. [0.5,-0.5,1], then it returns a list where each index holds the sum of differences against the other values, e.g. [(-0.5-0.5) + (1-0.5)] for the first element. In other words, for each element it adds up the differences between every other value and that element. So the output should be [-0.5,2.5,-2]
def Calculate(initial_values,b):
    
    x = np.array([initial_values]).T
    
    results=[0]
    
    for i in range(len(initial_values)):
       results.append( (initial_values[:i] - initial_values[i])+(initial_values[i+1:]-initial_values[i])
     
Error
TypeError: unsupported operand type(s) for -: 'list' and 'float'

A: This seems to do the trick:
def Calculate(arr):
    res = []
    for i, val in enumerate(arr):
        total = -val * (len(arr) - 1) + sum(arr[0:i]) + sum(arr[i+1:])
        res.append(total)
    return res

We iterate through each element and calculate the sum of differences like you described. Since the current element gets subtracted from each term we can factor it out.

A: The operations that you are doing inside Calculate() clearly assume that the input initial_values is a NumPy Array, and not a list. NumPy arrays support broadcast operations, lists do not.
So you should call your function with the right kind of input, e.g.:
Calculate(np.array([0.5, -0.5, 1]), 5)

Now, there are other errors with your code (parentheses are not balanced, and the calculation doesn't quite make sense yet), but if you pass the right kind of data type in, you'll be able to get further with your work.
With NumPy arrays, you can add an array to another array of the same size for a direct addition, or an array to a scalar to add that scalar's value to all the elements, and all sorts of other broadcast operations are supported for vectorized computations.

A: Use np.subtract.outer and np.ndarray.sum:
import numpy as np

def Calculate(arr):
    return np.subtract.outer(arr, arr).sum(axis=0)

initial_values = [0.5, -0.5, 1]
out = Calculate(initial_values)
# out = array([-0.5, 2.5, -2. ])
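A further simplification worth noting: the sum of differences against an element x collapses algebraically to sum(arr) - len(arr)*x, so a plain list comprehension suffices (a sketch; no NumPy needed):

def calculate(arr):
    total = sum(arr)
    n = len(arr)
    # sum over j of (arr[j] - x) == total - n*x for each element x
    return [total - n * x for x in arr]

print(calculate([0.5, -0.5, 1]))  # [-0.5, 2.5, -2.0]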
How to subtract values in a list
I am writing a function that works as follows it receives list of numbers e.g [0.5,-0.5,1] then it returns a list with this in each index[(-0.5-0.5) + (1-0.5)]. In other words, it adds the difference between the current value and the other values. So the output should be [-0.5,2.5,-2] def Calculate(initial_values,b): x = np.array([initial_values]).T results=[0] for i in range(len(initial_values)): results.append( (initial_values[:i] - initial_values[i])+(initial_values[i+1:]-initial_values[i]) Error TypeError: unsupported operand type(s) for -: 'list' and 'float'
[ "This seems to do the trick:\ndef Calculate(arr):\n res = []\n for i, val in enumerate(arr):\n total = -val * (len(arr) - 1) + sum(arr[0:i]) + sum(arr[i+1:])\n res.append(total)\n return res\n\nWe iterate through each element and calculate the sum of differences like you described. Since the current element gets subtracted from each term we can factor it out.\n", "The operations that you are doing inside Calculate() clearly assume that the input initial_opinion is a NumPy Array, and not a list. NumPy arrays support broadcast operations, lists do not.\nSo you should call your function with the right kind of input, e.g.:\nCalculate(np.array([0.5, -0.5, 1]), 5)\n\nNow, there are other errors with your code (parentheses are not balanced, and the calculation doesn't quite make sense yet), but if you pass the right kind of data type in, you'll be able to get further with your work.\nWith a NumPy Arrays, you can add an array to another array of the same size for a direct addition, or an array to a scalar to add that scalar's value to all the elements, and all sorts of other broadcast operations are supported for vectorized computations.\n", "Use np.subtract.outer and np.ndarray.sum:\nimport numpy as np\n\ndef Calculate(arr):\n return np.subtract.outer(arr, arr).sum(axis=0)\n\ninitial_values = [0.5, -0.5, 1]\nout = Calculate(initial_values)\n# out = array([-0.5, 2.5, -2. ])\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "arrays", "list", "numpy", "python" ]
stackoverflow_0074620852_arrays_list_numpy_python.txt
Q: Py_Initialize undefined error in Xcode while integrating Python in iOS project I am trying to integrate Python in iOS app. Here is the contentview file import SwiftUI import Python import PythonKit struct ContentView: View { @State private var showingSheet = false var body: some View { var name = "" Button("Run Python") { showingSheet.toggle() if let path = Bundle.main.path(forResource: "Python/Resources", ofType: nil) { setenv("PYTHONHOME",path, 1) setenv("PYTHONPATH",path, 1) } let api = Python.import("foo") name = String(api.hello())! } .sheet(isPresented: $showingSheet) { SecondView(name: name) } } } App file which calls contentview import SwiftUI import Python @main struct pytestApp: App { var body: some Scene { WindowGroup { ContentView() } } func pleaseLinkPython() { Py_Initialize() } } I am getting error as below This project directory I got from my colleague on whose machine this project runs successfully. A: I believe your problem is with this part "Python/Resources". You need the python-stdlib to appear in Build Phase's Copy Bundle Resources. And then do this: import Python guard let stdLibPath = Bundle.main.path(forResource: "python-stdlib", ofType: nil) else { return } guard let libDynloadPath = Bundle.main.path(forResource: "python-stdlib/lib-dynload", ofType: nil) else { return } setenv("PYTHONHOME", stdLibPath, 1) setenv("PYTHONPATH", "\(stdLibPath):\(libDynloadPath)", 1) Py_Initialize() // we now have a Python interpreter ready to be used I've wrote an article elaborating step-by-step how to embed a Python interpreter in a MacOS / iOS app. I'll live it here for anyone in the future having trouble with this topic: https://medium.com/swift2go/embedding-python-interpreter-inside-a-macos-app-and-publish-to-app-store-successfully-309be9fb96a5
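For completeness, the Python side that the Swift code above imports is just a module on the PYTHONPATH. A hypothetical foo.py matching the Python.import("foo") / api.hello() calls in the question could look like this (the module and function names are taken from the question; the return value is an assumption):

# foo.py -- placed where PYTHONPATH points, e.g. next to the bundled stdlib resources
def hello():
    # returns a plain str, which PythonKit bridges back to Swift via String(...)
    return "Hello from embedded Python"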
Py_Initialize undefined error in Xcode while integrating Python in iOS project
I am trying to integrate Python in iOS app. Here is the contentview file import SwiftUI import Python import PythonKit struct ContentView: View { @State private var showingSheet = false var body: some View { var name = "" Button("Run Python") { showingSheet.toggle() if let path = Bundle.main.path(forResource: "Python/Resources", ofType: nil) { setenv("PYTHONHOME",path, 1) setenv("PYTHONPATH",path, 1) } let api = Python.import("foo") name = String(api.hello())! } .sheet(isPresented: $showingSheet) { SecondView(name: name) } } } App file which calls contentview import SwiftUI import Python @main struct pytestApp: App { var body: some Scene { WindowGroup { ContentView() } } func pleaseLinkPython() { Py_Initialize() } } I am getting error as below This project directory I got from my colleague on whose machine this project runs successfully.
[ "I believe your problem is with this part \"Python/Resources\".\nYou need the python-stdlib to appear in Build Phase's Copy Bundle Resources. And then do this:\nimport Python\n\nguard let stdLibPath = Bundle.main.path(forResource: \"python-stdlib\", ofType: nil) else { return }\nguard let libDynloadPath = Bundle.main.path(forResource: \"python-stdlib/lib-dynload\", ofType: nil) else { return }\nsetenv(\"PYTHONHOME\", stdLibPath, 1)\nsetenv(\"PYTHONPATH\", \"\\(stdLibPath):\\(libDynloadPath)\", 1)\nPy_Initialize()\n// we now have a Python interpreter ready to be used\n\nI've wrote an article elaborating step-by-step how to embed a Python interpreter in a MacOS / iOS app. I'll live it here for anyone in the future having trouble with this topic: https://medium.com/swift2go/embedding-python-interpreter-inside-a-macos-app-and-publish-to-app-store-successfully-309be9fb96a5\n" ]
[ 0 ]
[]
[]
[ "ios", "python", "swift", "xcode" ]
stackoverflow_0074427573_ios_python_swift_xcode.txt
Q: How to create a Data Frame in Python from a for loop?
I am trying to merge the results of X with the results of the predicted Y with the help of a for loop. How can the result be saved to a DataFrame?
predictions = []

for i in range(100):
    predictions.append([X_unseen[i], y_pred_unseen[i]])
    
print(predictions)
df = pd.Series(predictions)

This is the output I get. I am new to Python, so apologies. Within the [] is a tokenized tweet.
0 [['dude', 'adventure', 'time', 'basically', 'r...
1 [['rape', 'joke'], cyberbullying] 
2 [['supporting', 'joke', 'poops', 'faces', 'vic...
3 [['rape', 'joke', 'know', 'serious', 'problem'...

I need the following output:
x y 
['rape', 'joke'] cyberbullying

A: You created a pd.Series which is basically a single column. If you want multiple columns, you need a whole dataframe.
So instead, do pd.DataFrame(predictions)
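To also get the x and y headers shown in the desired output, column names can be passed when building the frame (a sketch with dummy stand-ins for the real X_unseen / y_pred_unseen arrays):

import pandas as pd

# dummy data standing in for the question's X_unseen / y_pred_unseen
X_unseen = [['rape', 'joke'], ['dude', 'adventure', 'time']]
y_pred_unseen = ['cyberbullying', 'not_cyberbullying']

predictions = [[X_unseen[i], y_pred_unseen[i]] for i in range(len(X_unseen))]
df = pd.DataFrame(predictions, columns=['x', 'y'])
print(df)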
How to create a Data Frame in Python from a for loop?
I am trying to merge the results of X with the results of the predicted Y with the help of a for loop. How can the result be saved to a DataFrame?
predictions = []

for i in range(100):
    predictions.append([X_unseen[i], y_pred_unseen[i]])
    
print(predictions)
df = pd.Series(predictions)

This is the output I get. I am new to Python, so apologies. Within the [] is a tokenized tweet.
0 [['dude', 'adventure', 'time', 'basically', 'r...
1 [['rape', 'joke'], cyberbullying] 
2 [['supporting', 'joke', 'poops', 'faces', 'vic...
3 [['rape', 'joke', 'know', 'serious', 'problem'...

I need the following output:
x y 
['rape', 'joke'] cyberbullying
[ "You created a pd.Series which is basically a single column. If you want multiple columns, you need a whole dataframe.\nSo instead, do pd.DataFrame(predictions)\n" ]
[ 0 ]
[]
[]
[ "dataframe", "prediction", "python" ]
stackoverflow_0074620700_dataframe_prediction_python.txt
Q: How to rename random numbers in file names to sequential numbers?
Hi, I'm trying to rename my files in a directory from (2015_001.txt, 2015_005.txt, 2015_009.txt, etc.) to (2015_001.txt, 2015_002.txt, 2015_003.txt, etc.). I'm new to Python, can anyone help me? I tried using a loop, but then the files were no longer in sequence.
This is the code I tried so far
import re
import os

_src = "C:/ZTD/pwv2015/"
_ext = ".txt"

endsWithNumber = re.compile(r'(\d+)'+(re.escape(_ext))+'$')
for filename in os.listdir(_src):
    m = endsWithNumber.search(filename)
    if m:
        os.rename(filename, _src+'2015_' + str(m.group(1)).zfill(3)+_ext)
    else:
        os.rename(filename, _src+'2015_' + str(0).zfill(3)+_ext)

A: You probably have an easier time with the glob module for finding files and f-strings for renaming. Also, for the sake of teaching modern python, I'm using the pathlib and its glob method. Try this:
import os
import pathlib

src = pathlib.Path("C:/ZTD/pwv2015")
pattern = "2015_[0-9][0-9][0-9].txt"
inpaths = sorted(src.glob(pattern))
for outnum, inpath in enumerate(inpaths, 1):
    outpath = src / f"2015_{outnum:03d}.txt"
    if outpath != inpath:
        os.rename(inpath, outpath)

Note that since the input and output directory are identical, we have to be careful with overwriting files. However, in this instance we "compress" the range, so if we go in sorted order, the input file numbers will always be greater or equal to the output file numbers, so we are okay.
Also, when you test this, replace os.rename with print, just to be sure ;-)
How to rename random numbers in file names to sequential numbers?
Hi, I'm trying to rename my files in a directory from (2015_001.txt, 2015_005.txt, 2015_009.txt, etc.) to (2015_001.txt, 2015_002.txt, 2015_003.txt, etc.). I'm new to Python, can anyone help me? I tried using a loop, but then the files were no longer in sequence.
This is the code I tried so far
import re
import os

_src = "C:/ZTD/pwv2015/"
_ext = ".txt"

endsWithNumber = re.compile(r'(\d+)'+(re.escape(_ext))+'$')
for filename in os.listdir(_src):
    m = endsWithNumber.search(filename)
    if m:
        os.rename(filename, _src+'2015_' + str(m.group(1)).zfill(3)+_ext)
    else:
        os.rename(filename, _src+'2015_' + str(0).zfill(3)+_ext)
[ "You probably have an easier time with the glob module for finding files and f-strings for renaming. Also, for the sake of teaching modern python, I'm using the pathlib and its glob method. Try this:\nimport os\nimport pathlib\n\nsrc = pathlib.Path(\"C:/ZTD/pwv2015\")\npattern = \"2015_[0-9][0-9][0-9].txt\"\ninpaths = sorted(src.glob(pattern))\nfor outnum, inpath in enumerate(inpaths, 1):\n outpath = src / f\"2015_{outnum:03d}.txt\"\n if outpath != inpath:\n os.rename(inpath, outpath)\n\nNote that since the input and output directory are identical, we have to be careful with overwriting files. However, in this instance we \"compress\" the range, so if we go in sorted order, the input file numbers will always be greater or equal to the output file numbers, so we are okay.\nAlso, when you test this, replace os.rename with print, just to be sure ;-)\n" ]
[ 3 ]
[]
[]
[ "directory", "file_rename", "python" ]
stackoverflow_0074621197_directory_file_rename_python.txt
Q: Image not showing in Canvas tkinter I have a code where I'm using the create_image() method of Canvas, I want to use tags to bind the respective methods but when I run the code the image doesn't show up on the canvas. I made a simple code example to show what I mean: from tkinter import * class CanvasM(Canvas): width = 600 height = 400 def __init__(self, master): super().__init__(master, width=self.width, height=self.height, bg='black') self.pack(pady=20) self.create_an_image( file='images/cat-image-no-background/cat.png', x=320, y=180) self.tag_bind('imagec', "<ButtonPress-1>", self.press) self.tag_bind('imagec', "<ButtonRelease-1>", self.release) self.tag_bind('imagec', "<B1-Motion>", self.motion) def create_an_image(self, file, x, y): img = PhotoImage(file) self.create_image( x, y, image=img, tags=('imagec',)) def press(self, event): print(event.x, event.y) def release(self, event): print(event.x, event.y) def motion(self, event): print(event.x, event.y) if __name__ == '__main__': root = Tk() root.title('Holi') root.geometry("800x600") c = CanvasM(root) root.mainloop() It just looks like an empty canvas: A: There are two issues in create_an_image(): img is a local variable, so it will be garbage collected after exiting the function. So use an instance variable self.img instead. you need to use file option of PhotoImage() to specify the filename of the image def create_an_image(self, file, x, y): # use instance variable self.img instead of local variable img # use file option of PhotoImage self.img = PhotoImage(file=file) self.create_image(x, y, image=self.img, tags=('imagec',))
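If create_an_image() is meant to be called more than once, a single self.img attribute would be overwritten on each call and earlier images garbage-collected. A common pattern (a sketch, trimmed to the relevant parts of the question's class; the file path is a placeholder) is to keep every reference in a list:

import tkinter as tk

class CanvasM(tk.Canvas):
    def __init__(self, master):
        super().__init__(master, width=600, height=400, bg='black')
        self.pack(pady=20)
        self.images = []  # keeps a reference to every PhotoImage

    def create_an_image(self, file, x, y):
        img = tk.PhotoImage(file=file)
        self.images.append(img)  # prevent garbage collection
        self.create_image(x, y, image=img, tags=('imagec',))

root = tk.Tk()
c = CanvasM(root)
c.create_an_image('cat.png', 320, 180)  # placeholder path
root.mainloop()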
Image not showing in Canvas tkinter
I have a code where I'm using the create_image() method of Canvas, I want to use tags to bind the respective methods but when I run the code the image doesn't show up on the canvas. I made a simple code example to show what I mean: from tkinter import * class CanvasM(Canvas): width = 600 height = 400 def __init__(self, master): super().__init__(master, width=self.width, height=self.height, bg='black') self.pack(pady=20) self.create_an_image( file='images/cat-image-no-background/cat.png', x=320, y=180) self.tag_bind('imagec', "<ButtonPress-1>", self.press) self.tag_bind('imagec', "<ButtonRelease-1>", self.release) self.tag_bind('imagec', "<B1-Motion>", self.motion) def create_an_image(self, file, x, y): img = PhotoImage(file) self.create_image( x, y, image=img, tags=('imagec',)) def press(self, event): print(event.x, event.y) def release(self, event): print(event.x, event.y) def motion(self, event): print(event.x, event.y) if __name__ == '__main__': root = Tk() root.title('Holi') root.geometry("800x600") c = CanvasM(root) root.mainloop() It just looks like an empty canvas:
[ "There are two issues in create_an_image():\n\nimg is a local variable, so it will be garbage collected after exiting the function. So use an instance variable self.img instead.\nyou need to use file option of PhotoImage() to specify the filename of the image\n\n def create_an_image(self, file, x, y):\n # use instance variable self.img instead of local variable img\n # use file option of PhotoImage\n self.img = PhotoImage(file=file)\n self.create_image(x, y, image=self.img, tags=('imagec',))\n\n" ]
[ 1 ]
[]
[]
[ "python", "tags", "tkinter", "tkinter_canvas" ]
stackoverflow_0074621243_python_tags_tkinter_tkinter_canvas.txt
Q: Correlation Matrix with Lists or Can not create DataFrame with Arrays It's about a data project. I have a problem with types of variables and I guess I am missing something that I can not see. I am beginner at this topic any help would be appreciated. I have 8 normalised arrays and I want to put them into a dataframe so I can create a correlation matrix. But I have this error. > ValueError: Per-column arrays must each be 1-dimensional I have tried to reshape my arrays but it did not work but I wanted to see that shape of arrays is equal or not so I wrote: print(date.shape,normalised_snp.shape,normalised_twybp.shape,normalised_USInflation.shape,normalised_USGDP.shape,normalised_USInterest.shape,normalised_GlobalInflation.shape,normalised_GlobalGDP.shape) Then my output is > (4220, 1) (4220, 1) (4220, 1) (4220, 1) (4220, 1) (4220, 1) (4220, 1) (4220, 1) After that I converted my arrays into a list and create a dataframe with those lists. normalised_snp = normalised_snp.tolist() normalised_tybp = normalised_tybp.tolist() normalised_twybp = normalised_twybp.tolist() normalised_USInflation = normalised_USInflation.tolist() normalised_USGDP = normalised_USGDP.tolist() normalised_USInterest = normalised_USInterest.tolist() normalised_GlobalInflation = normalised_GlobalInflation.tolist() normalised_GlobalGDP = normalised_GlobalGDP.tolist() I constructed the data frame: alldata = pd.DataFrame({'S&P 500 Price':normalised_snp, '10 Year Bond Price': normalised_tybp, '2 Year Bond Price' : normalised_twybp, 'US Inflation' : normalised_USInflation, 'US GDP' : normalised_USGDP, 'US Insterest' : normalised_USInterest, 'Global Inflation Rate' : normalised_GlobalInflation, 'Global GDP' : normalised_GlobalGDP}) After that I have contstructed my correlation matrix correlation_matrix = alldata.corr() print(correlation_matrix) Since then I have no error but my correlation matrix looks empty > Empty DataFrame Columns: [] Index: [] Is the problem caused by list type? If it is how can I solve the value error that occurs when I try to construct a data frame with matrices? A: After I applied .flatten() all my values converted to 0 Here it is the output
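The underlying problem is that each array has shape (4220, 1) — two-dimensional — while DataFrame columns must be one-dimensional. Rather than tolist() (which produces lists of one-element lists, so corr() finds no numeric columns and returns an empty frame), flattening each array first works. A sketch with two of the eight columns and dummy random data standing in for the real series:

import numpy as np
import pandas as pd

# dummy stand-ins with the same (n, 1) shape as the question's arrays
normalised_snp = np.random.rand(4220, 1)
normalised_tybp = np.random.rand(4220, 1)

alldata = pd.DataFrame({
    'S&P 500 Price': normalised_snp.ravel(),   # (4220, 1) -> (4220,)
    '10 Year Bond Price': normalised_tybp.ravel(),
})
print(alldata.corr())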
Correlation Matrix with Lists or Can not create DataFrame with Arrays
It's about a data project. I have a problem with types of variables and I guess I am missing something that I can not see. I am beginner at this topic any help would be appreciated. I have 8 normalised arrays and I want to put them into a dataframe so I can create a correlation matrix. But I have this error. > ValueError: Per-column arrays must each be 1-dimensional I have tried to reshape my arrays but it did not work but I wanted to see that shape of arrays is equal or not so I wrote: print(date.shape,normalised_snp.shape,normalised_twybp.shape,normalised_USInflation.shape,normalised_USGDP.shape,normalised_USInterest.shape,normalised_GlobalInflation.shape,normalised_GlobalGDP.shape) Then my output is > (4220, 1) (4220, 1) (4220, 1) (4220, 1) (4220, 1) (4220, 1) (4220, 1) (4220, 1) After that I converted my arrays into a list and create a dataframe with those lists. normalised_snp = normalised_snp.tolist() normalised_tybp = normalised_tybp.tolist() normalised_twybp = normalised_twybp.tolist() normalised_USInflation = normalised_USInflation.tolist() normalised_USGDP = normalised_USGDP.tolist() normalised_USInterest = normalised_USInterest.tolist() normalised_GlobalInflation = normalised_GlobalInflation.tolist() normalised_GlobalGDP = normalised_GlobalGDP.tolist() I constructed the data frame: alldata = pd.DataFrame({'S&P 500 Price':normalised_snp, '10 Year Bond Price': normalised_tybp, '2 Year Bond Price' : normalised_twybp, 'US Inflation' : normalised_USInflation, 'US GDP' : normalised_USGDP, 'US Insterest' : normalised_USInterest, 'Global Inflation Rate' : normalised_GlobalInflation, 'Global GDP' : normalised_GlobalGDP}) After that I have contstructed my correlation matrix correlation_matrix = alldata.corr() print(correlation_matrix) Since then I have no error but my correlation matrix looks empty > Empty DataFrame Columns: [] Index: [] Is the problem caused by list type? If it is how can I solve the value error that occurs when I try to construct a data frame with matrices?
[ "After I applied .flatten() all my values converted to 0\nHere it is the output\n" ]
[ 0 ]
[]
[]
[ "arrays", "data_science", "dataframe", "finance", "python" ]
stackoverflow_0074621189_arrays_data_science_dataframe_finance_python.txt
Q: Speed of Turtle not changing with simple Frogger style code import time import turtle from turtle import Screen, Turtle from player import Player from car_manager import CarManager from scoreboard import Scoreboard screen = Screen() screen.setup(width=600, height=600) screen.tracer(0) player = Player() car_manager = CarManager() scoreboard = Scoreboard() screen.listen() screen.onkey(player.go_up, "Up") game_is_on = True while game_is_on: time.sleep(0.1) screen.update() car_manager.create_car() car_manager.move_cars() for car in car_manager.all_cars: if car.distance(player) < 20: game_is_on = False scoreboard.game_over() if player.is_at_finish_line(): player.go_to_start() car_manager.level_up() scoreboard.increase_level() screen.exitonclick() from turtle import Turtle import turtle import random COLORS = ["red", "orange", "yellow", "green", "blue", "purple"] STARTING_MOVE_DISTANCE = 5 MOVE_INCREMENT = 10 class CarManager: def __init__(self): self.all_cars = [] self.car_speed = STARTING_MOVE_DISTANCE def create_car(self): random_chance = random.randint(1, 6) if random_chance == 1: new_car = Turtle("square") new_car.shapesize(stretch_wid=1, stretch_len=2) new_car.penup() new_car.color(random.choice(COLORS)) random_y = random.randint(-250, 250) new_car.goto(300, random_y) self.all_cars.append(new_car) def move_cars(self): for car in self.all_cars: car.backward(self.car_speed) def level_up(self): self.car_speed += MOVE_INCREMENT from turtle import Turtle import turtle STARTING_POSITION = (0, -280) MOVE_DISTANCE = 10 FINISH_LINE_Y = 280 # Define Player Class class Player(Turtle): def __init__(self): super().__init__() self.shape("turtle") self.color("green") self.penup() self.go_to_start() self.setheading(90) def go_up(self): self.forward(MOVE_DISTANCE) self.speed(0) def go_to_start(self): self.goto(STARTING_POSITION) def is_at_finish_line(self): if self.ycor() > FINISH_LINE_Y: return True else: return False from turtle import Turtle FONT = ("Courier", 24, "normal") class Scoreboard(Turtle): def __init__(self): super().__init__() self.level = 1 self.hideturtle() self.penup() self.goto(-270, 270) self.update_scoreboard() def update_scoreboard(self): self.clear() self.write(f"Level: {self.level}", align="left", font=FONT) def increase_level(self): self.level += 1 self.update_scoreboard() def game_over(self): self.goto(0, 0) self.write(f"GAME OVER", align="center", font=FONT) What I am trying to do here is modify the speed of my turtle under the class Player, in which I have defined the function go_up, def go_up(self): self.forward(MOVE_DISTANCE) self.speed(0) I set the go_up movement speed to 0 ("fastest"), however if I input any value here, or type out the "fastest", "slowest"... whichever value... the turtle still moves at the same rate no matter any value I input into the go_up function within the Player class. TLDR my question is... how do I get the turtle to move faster than its current value? All values that I input seem to have the turtle at the exact same speed. Thank you. Tried searching thru all the Python 3.11.0 documentation on turtle movement speed, tried fiddling with my code to introduce a different speed... researched here on stackoverflow. I am expecting this code to have my turtle at the "fastest" speed. A: I set the go_up movement speed to 0 ("fastest"), however if I input any value here, or type out the "fastest", "slowest"... whichever value... 
the turtle still moves at the same rate no matter any value I input into the go_up function Once you invoke tracer(0), the turtles' speed() method is a no-op. One way you can get more performance from the player to to make it like your cars in that its increment of motion increases for each level. Also, the while game_is_on: is effectively a while True: loop which has no place in an event-driven world like turtle. So, let's switch to a timer event based model, along with other changes: from turtle import Screen, Turtle from random import randint, choice FINISH_LINE_Y = 280 class Player(Turtle): PLAYER_STARTING_POSITION = (0, -280) PLAYER_STARTING_MOVE_DISTANCE = 10 PLAYER_MOVE_INCREMENT = 2 def __init__(self): super().__init__() self.shape('turtle') self.color('green') self.penup() self.go_to_start() self.setheading(90) self.player_speed = Player.PLAYER_STARTING_MOVE_DISTANCE def go_up(self): self.forward(self.player_speed) def go_to_start(self): self.goto(Player.PLAYER_STARTING_POSITION) def is_at_finish_line(self): return self.ycor() > FINISH_LINE_Y def level_up(self): self.player_speed += Player.PLAYER_MOVE_INCREMENT class CarManager: CAR_STARTING_MOVE_DISTANCE = 5 CAR_MOVE_INCREMENT = 10 CAR_COLORS = ['red', 'orange', 'yellow', 'green', 'blue', 'purple'] def __init__(self): self.all_cars = [] self.car_speed = CarManager.CAR_STARTING_MOVE_DISTANCE def create_car(self): if randint(1, 6) == 1: new_car = Turtle('square') new_car.shapesize(stretch_wid=1, stretch_len=2) new_car.color(choice(CarManager.CAR_COLORS)) new_car.penup() random_y = randint(-250, 250) new_car.goto(300, random_y) new_car.setheading(180) self.all_cars.append(new_car) def move_cars(self): for car in self.all_cars: car.forward(self.car_speed) def level_up(self): self.car_speed += CarManager.CAR_MOVE_INCREMENT class Scoreboard(Turtle): FONT = ('Courier', 24, 'normal') def __init__(self): super().__init__() self.level = 1 self.hideturtle() self.penup() self.goto(-270, 270) self.update_scoreboard() def update_scoreboard(self): self.clear() self.write(f"Level: {self.level}", font=Scoreboard.FONT) def increase_level(self): self.level += 1 self.update_scoreboard() def game_over(self): self.goto(0, 0) self.write("GAME OVER", align='center', font=Scoreboard.FONT) game_is_on = True def move(): global game_is_on if not game_is_on: return car_manager.create_car() car_manager.move_cars() for car in car_manager.all_cars: if car.distance(player) < 20: game_is_on = False scoreboard.game_over() break else: # no break # if player.is_at_finish_line(): player.go_to_start() player.level_up() car_manager.level_up() scoreboard.increase_level() screen.update() screen.ontimer(move, 100) screen = Screen() screen.setup(width=600, height=600) screen.tracer(0) player = Player() car_manager = CarManager() scoreboard = Scoreboard() screen.onkey(player.go_up, 'Up') screen.listen() move() screen.mainloop() I had fun playing your (reworked) game!
Speed of Turtle not changing with simple Frogger style code
import time import turtle from turtle import Screen, Turtle from player import Player from car_manager import CarManager from scoreboard import Scoreboard screen = Screen() screen.setup(width=600, height=600) screen.tracer(0) player = Player() car_manager = CarManager() scoreboard = Scoreboard() screen.listen() screen.onkey(player.go_up, "Up") game_is_on = True while game_is_on: time.sleep(0.1) screen.update() car_manager.create_car() car_manager.move_cars() for car in car_manager.all_cars: if car.distance(player) < 20: game_is_on = False scoreboard.game_over() if player.is_at_finish_line(): player.go_to_start() car_manager.level_up() scoreboard.increase_level() screen.exitonclick() from turtle import Turtle import turtle import random COLORS = ["red", "orange", "yellow", "green", "blue", "purple"] STARTING_MOVE_DISTANCE = 5 MOVE_INCREMENT = 10 class CarManager: def __init__(self): self.all_cars = [] self.car_speed = STARTING_MOVE_DISTANCE def create_car(self): random_chance = random.randint(1, 6) if random_chance == 1: new_car = Turtle("square") new_car.shapesize(stretch_wid=1, stretch_len=2) new_car.penup() new_car.color(random.choice(COLORS)) random_y = random.randint(-250, 250) new_car.goto(300, random_y) self.all_cars.append(new_car) def move_cars(self): for car in self.all_cars: car.backward(self.car_speed) def level_up(self): self.car_speed += MOVE_INCREMENT from turtle import Turtle import turtle STARTING_POSITION = (0, -280) MOVE_DISTANCE = 10 FINISH_LINE_Y = 280 # Define Player Class class Player(Turtle): def __init__(self): super().__init__() self.shape("turtle") self.color("green") self.penup() self.go_to_start() self.setheading(90) def go_up(self): self.forward(MOVE_DISTANCE) self.speed(0) def go_to_start(self): self.goto(STARTING_POSITION) def is_at_finish_line(self): if self.ycor() > FINISH_LINE_Y: return True else: return False from turtle import Turtle FONT = ("Courier", 24, "normal") class Scoreboard(Turtle): def __init__(self): super().__init__() self.level = 1 self.hideturtle() self.penup() self.goto(-270, 270) self.update_scoreboard() def update_scoreboard(self): self.clear() self.write(f"Level: {self.level}", align="left", font=FONT) def increase_level(self): self.level += 1 self.update_scoreboard() def game_over(self): self.goto(0, 0) self.write(f"GAME OVER", align="center", font=FONT) What I am trying to do here is modify the speed of my turtle under the class Player, in which I have defined the function go_up, def go_up(self): self.forward(MOVE_DISTANCE) self.speed(0) I set the go_up movement speed to 0 ("fastest"), however if I input any value here, or type out the "fastest", "slowest"... whichever value... the turtle still moves at the same rate no matter any value I input into the go_up function within the Player class. TLDR my question is... how do I get the turtle to move faster than its current value? All values that I input seem to have the turtle at the exact same speed. Thank you. Tried searching thru all the Python 3.11.0 documentation on turtle movement speed, tried fiddling with my code to introduce a different speed... researched here on stackoverflow. I am expecting this code to have my turtle at the "fastest" speed.
[ "\nI set the go_up movement speed to 0 (\"fastest\"), however if I input\nany value here, or type out the \"fastest\", \"slowest\"... whichever\nvalue... the turtle still moves at the same rate no matter any value I\ninput into the go_up function\n\nOnce you invoke tracer(0), the turtles' speed() method is a no-op.\nOne way you can get more performance from the player to to make it like your cars in that its increment of motion increases for each level.\nAlso, the while game_is_on: is effectively a while True: loop which has no place in an event-driven world like turtle. So, let's switch to a timer event based model, along with other changes:\nfrom turtle import Screen, Turtle\nfrom random import randint, choice\n\nFINISH_LINE_Y = 280\n\nclass Player(Turtle):\n PLAYER_STARTING_POSITION = (0, -280)\n PLAYER_STARTING_MOVE_DISTANCE = 10\n PLAYER_MOVE_INCREMENT = 2\n\n def __init__(self):\n super().__init__()\n self.shape('turtle')\n self.color('green')\n self.penup()\n self.go_to_start()\n self.setheading(90)\n self.player_speed = Player.PLAYER_STARTING_MOVE_DISTANCE\n\n def go_up(self):\n self.forward(self.player_speed)\n\n def go_to_start(self):\n self.goto(Player.PLAYER_STARTING_POSITION)\n\n def is_at_finish_line(self):\n return self.ycor() > FINISH_LINE_Y\n\n def level_up(self):\n self.player_speed += Player.PLAYER_MOVE_INCREMENT\n\nclass CarManager:\n CAR_STARTING_MOVE_DISTANCE = 5\n CAR_MOVE_INCREMENT = 10\n\n CAR_COLORS = ['red', 'orange', 'yellow', 'green', 'blue', 'purple']\n\n def __init__(self):\n self.all_cars = []\n self.car_speed = CarManager.CAR_STARTING_MOVE_DISTANCE\n\n def create_car(self):\n if randint(1, 6) == 1:\n new_car = Turtle('square')\n new_car.shapesize(stretch_wid=1, stretch_len=2)\n new_car.color(choice(CarManager.CAR_COLORS))\n new_car.penup()\n random_y = randint(-250, 250)\n new_car.goto(300, random_y)\n new_car.setheading(180)\n\n self.all_cars.append(new_car)\n\n def move_cars(self):\n for car in self.all_cars:\n car.forward(self.car_speed)\n\n def level_up(self):\n self.car_speed += CarManager.CAR_MOVE_INCREMENT\n\nclass Scoreboard(Turtle):\n FONT = ('Courier', 24, 'normal')\n\n def __init__(self):\n super().__init__()\n self.level = 1\n self.hideturtle()\n self.penup()\n self.goto(-270, 270)\n self.update_scoreboard()\n\n def update_scoreboard(self):\n self.clear()\n self.write(f\"Level: {self.level}\", font=Scoreboard.FONT)\n\n def increase_level(self):\n self.level += 1\n self.update_scoreboard()\n\n def game_over(self):\n self.goto(0, 0)\n self.write(\"GAME OVER\", align='center', font=Scoreboard.FONT)\n\ngame_is_on = True\n\ndef move():\n global game_is_on\n\n if not game_is_on:\n return\n\n car_manager.create_car()\n car_manager.move_cars()\n\n for car in car_manager.all_cars:\n if car.distance(player) < 20:\n game_is_on = False\n scoreboard.game_over()\n break\n else: # no break #\n if player.is_at_finish_line():\n player.go_to_start()\n player.level_up()\n car_manager.level_up()\n scoreboard.increase_level()\n\n screen.update()\n screen.ontimer(move, 100)\n\nscreen = Screen()\nscreen.setup(width=600, height=600)\nscreen.tracer(0)\n\nplayer = Player()\ncar_manager = CarManager()\nscoreboard = Scoreboard()\n\nscreen.onkey(player.go_up, 'Up')\nscreen.listen()\n\nmove()\n\nscreen.mainloop()\n\nI had fun playing your (reworked) game!\n" ]
[ 2 ]
[]
[]
[ "python", "python_turtle", "turtle_graphics" ]
stackoverflow_0074619350_python_python_turtle_turtle_graphics.txt
Q: Pandas Similar Function to COUNTIFS
Sample Data
Please see the Sample Data image. I'm trying to replicate the COUNTIFS functionality within Python / Pandas but I'm having trouble finding the correct solution.
=COUNTIFS(B:B,"BD*",A:A,A2,C:C,">"&C2)

B is the Type column, A is the Reference column, and C is the Doc Condition column. So the count is only greater than zero if the Type is 'BD', the Reference matches the current row's Reference, and the Doc Condition is greater than the current row's Doc Condition. I hope that makes sense?
I've tried coming to a solution using GroupBy but I'm not getting any closer to my desired solution and I think I'm overcomplicating this.

A: You should include your input data as text. Screenshots are really hard to work with.
You can use numpy broadcasting. However, this will have an n^2 computational complexity since you are comparing every row against every other row:
reference, type_, doc_condition = df.to_numpy().T
match = (
    (type_ == "BD")
    & (reference[:, None] == reference)
    & (doc_condition[:, None] < doc_condition)
)
df["COUNTIFS"] = match.sum(axis=1)

Within the snippet above, you can read reference[:, None] as "the Reference value of the current row"; reference as "the Reference values of all rows". This is what enables the comparison between the current row and all other rows (including itself) by numpy. Note that the "BD" test is applied to type_ without [:, None], i.e. to the counted rows, which matches what COUNTIFS checks; if "BD*" should match as a prefix rather than exactly, compare with np.char.startswith(type_.astype(str), "BD") instead.
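For readers who prefer staying in pandas, an equivalent (slower, also n^2) row-wise version — a sketch assuming the columns are literally named 'Type', 'Reference' and 'Doc Condition' as in the screenshot, and that df already exists:

df["COUNTIFS"] = df.apply(
    lambda row: (
        (df["Type"] == "BD")
        & (df["Reference"] == row["Reference"])
        & (df["Doc Condition"] > row["Doc Condition"])
    ).sum(),
    axis=1,
)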
Pandas Similar Function to COUNTIFS
Sample Data Please see Sample Data image. I'm trying to replicate the COUNTIFS functionality within Python / Pandas but I'm having troubles finding the correct solution. =COUNTIFS(B:B,"BD*",A:A,A2,C:C,">"&C2) B is the Type column, A is the Reference column, and C is the Doc Condition column. So the count is only greater than zero if the Type is 'BD', the Reference Matches the current row's Reference, and the Doc Condition is greater than the current row's Doc Condition. I hope that makes sense? I've tried coming to a solution using GroupBy but I'm not getting any closer to my desired solution and I think I'm overcomplicating this.
[ "You should include your input data as text. Screenshots are really hard to work with.\nYou can use numpy broadcasting. However, this will have an n^2 computational complexity since you are comparing every row against every other row:\nreference, type_, doc_condition = df.to_numpy().T\nmatch = (\n (type_[:, None] == \"BD\")\n & (reference[:, None] == reference)\n & (doc_condition[:, None] < doc_condition)\n)\ndf[\"COUNTIFS\"] = match.sum(axis=1)\n\nWithin the snippet above, you can read reference[:, None] as \"the Reference value of the current row; reference as \"the Reference values of all rows\". This is what enables the comparison between the current row and all other rows (including itself) by numpy.\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074620537_pandas_python.txt
Q: 8.3.3: Hourly temperature reporting
Write a loop to print all elements in hourly_temperature. Separate elements with a -> surrounded by spaces. Sample output for the given program with input '90 92 94 95':
90 -> 92 -> 94 -> 95 

Note: 95 is followed by a space, then a newline.
This is the assignment.
Here is my code so far:
user_input = input()
hourly_temperature = user_input.split()
lst_str=""
for temp in hourly_temperature:
    lst_str+=str(temp)+ " -> "
    
print(lst_str, end=" ")

Here is the output of my code:
90 -> 92 -> 94 -> 95 -> 
How do I get the end to not include a '->' after the last number in the list?

A: you could use the built-in join() method for strings:
lst_str = " -> ".join(user_input.split()) + " \n"

A: This is the code I used. The output has no trailing '->' and ends with a space, as required:
user_input = input()
hourly_temperature = user_input.split()

lst_str = " -> ".join(hourly_temperature)
print(lst_str, '')
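Since the assignment explicitly asks for a loop, here is a loop-only sketch that prints the separator before every element except the first, which avoids the trailing '->' problem entirely:

user_input = input()
hourly_temperature = user_input.split()

for i, temp in enumerate(hourly_temperature):
    if i > 0:
        print(' -> ', end='')  # separator only between elements
    print(temp, end='')
print(' ')  # trailing space, then the newline the assignment asks for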
8.3.3: Hourly temperature reporting
Write a loop to print all elements in hourly_temperature. Separate elements with a -> surrounded by spaces. Sample output for the given program with input: 90 92 94 95' 90 -> 92 -> 94 -> 95 Note: 95 is followed by a space, then a newline. This is the assignment Here is my code so far: user_input = input() hourly_temperature = user_input.split() lst_str="" for temp in hourly_temperature: lst_str+=str(temp)+ " -> " print(lst_str, end=" ") Here is the output of my code: 90 -> 92 -> 94 -> 95 -> How do I get the end to not include a '->' after the last number in the list?
[ "you could use the built-in join() method for strings:\nlst_str = \" -> \".join(user_input.split()) + \" \\n\"\n\n", "This is the code I used. The output is without the '->' and any extra space at the end:\nuser_input = input()\nhourly_temperature = user_input.split()\n\n\nfor temp in hourly_temperature:\n lst_str = \" -> \".join(user_input.split())\n\nprint(lst_str,'')\n\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074046719_python.txt
Q: Plot elements in a column of a dataframe on the same graph sharing the same x-axis in datetime format I have a dataframe: Element Date Q 0 A 24/10/2021 17:16 400 1 B 24/10/2021 18:59 210 2 A 26/10/2021 18:42 325 3 A 26/10/2021 19:44 589 4 B 29/10/2021 14:23 251 5 A 01/11/2021 9:12 578 6 B 02/11/2021 21:30 321 7 A 04/11/2021 18:25 248 8 B 05/11/2021 10:29 854 9 A 05/11/2021 10:26 968 10 A 07/11/2021 18:10 852 11 A 09/11/2021 16:35 425 12 B 09/11/2021 21:55 752 13 A 11/11/2021 18:41 385 14 B 13/11/2021 11:15 658 15 A 14/11/2021 18:17 229 16 B 16/11/2021 22:36 258 17 A 17/11/2021 17:05 359 18 A 18/11/2021 16:39 210 19 B 19/11/2021 15:41 583 and I want to plot value "Q" of two elements in column "Element" in the same graph sharing the same x-axis, but I can't get it. I have tried to separate them in two dataframes, but it is not a good solution: Element Date Q 0 A 24/10/2021 17:16 400 2 A 26/10/2021 18:42 325 3 A 26/10/2021 19:44 589 5 A 01/11/2021 9:12 578 7 A 04/11/2021 18:25 248 9 A 05/11/2021 10:26 968 10 A 07/11/2021 18:10 852 11 A 09/11/2021 16:35 425 13 A 11/11/2021 18:41 385 15 A 14/11/2021 18:17 229 17 A 17/11/2021 17:05 359 18 A 18/11/2021 16:39 210 Element Date Q 1 B 24/10/2021 18:59 210 4 B 29/10/2021 14:23 251 6 B 02/11/2021 21:30 321 8 B 05/11/2021 10:29 854 12 B 09/11/2021 21:55 752 14 B 13/11/2021 11:15 658 16 B 16/11/2021 22:36 258 19 B 19/11/2021 15:41 583 This is the result of two attempts: ax = dfA.plot.scatter(x="Date",y="Q",rot=90) dfB.plot.scatter(x="Date",y="Q",rot=90, ax=ax, color='r') Graph 1 df_A = df[df['Element'] == 'A'].set_index('Date') df_B = df[df['Element'] == 'B'].set_index('Date') plt.figure() ax = df_A[['Q']].plot(figsize=(20,5)) df_B[['Q']].plot(ax=ax) ax.xaxis.set_major_locator(mdates.DayLocator(interval=1)) ax.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y')) plt.gcf().autofmt_xdate() plt.show() Graph 2 I want to represent the two sets of points sharing the x-axis with a common date range and I need a legend with the labels "A" and "B". A: Your first attrempt would have wored had you converted the Date column to Timestamp. A scatter plot requires that both values on x- and y-axis to be numerical. When you supply strings on the x-axis, they are treated as positions [0, 1, 2, 3,...] with tick marks equal to the supplied values. for df in [dfA, dfB]: df["Date"] = pd.to_datetime(df["Date"], dayfirst=True) ax = dfA.plot.scatter(x="Date", y="Q", rot=90, label="A") dfB.plot.scatter(x="Date", y="Q", rot=90, ax=ax, color="r", label="B")
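An alternative that skips the manual split into dfA/dfB entirely: convert Date once, then loop over groupby('Element') on one axes object. A sketch using a small stand-in for the question's dataframe:

import matplotlib.pyplot as plt
import pandas as pd

# tiny stand-in for the question's dataframe
df = pd.DataFrame({
    "Element": ["A", "B", "A", "B"],
    "Date": ["24/10/2021 17:16", "24/10/2021 18:59", "26/10/2021 18:42", "29/10/2021 14:23"],
    "Q": [400, 210, 325, 251],
})
df["Date"] = pd.to_datetime(df["Date"], dayfirst=True)

fig, ax = plt.subplots()
for label, grp in df.groupby("Element"):
    ax.scatter(grp["Date"], grp["Q"], label=label)  # one series per Element
ax.legend()
fig.autofmt_xdate()  # tilt the date labels
plt.show()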
Plot elements in a column of a dataframe on the same graph sharing the same x-axis in datetime format
I have a dataframe: Element Date Q 0 A 24/10/2021 17:16 400 1 B 24/10/2021 18:59 210 2 A 26/10/2021 18:42 325 3 A 26/10/2021 19:44 589 4 B 29/10/2021 14:23 251 5 A 01/11/2021 9:12 578 6 B 02/11/2021 21:30 321 7 A 04/11/2021 18:25 248 8 B 05/11/2021 10:29 854 9 A 05/11/2021 10:26 968 10 A 07/11/2021 18:10 852 11 A 09/11/2021 16:35 425 12 B 09/11/2021 21:55 752 13 A 11/11/2021 18:41 385 14 B 13/11/2021 11:15 658 15 A 14/11/2021 18:17 229 16 B 16/11/2021 22:36 258 17 A 17/11/2021 17:05 359 18 A 18/11/2021 16:39 210 19 B 19/11/2021 15:41 583 and I want to plot value "Q" of two elements in column "Element" in the same graph sharing the same x-axis, but I can't get it. I have tried to separate them in two dataframes, but it is not a good solution: Element Date Q 0 A 24/10/2021 17:16 400 2 A 26/10/2021 18:42 325 3 A 26/10/2021 19:44 589 5 A 01/11/2021 9:12 578 7 A 04/11/2021 18:25 248 9 A 05/11/2021 10:26 968 10 A 07/11/2021 18:10 852 11 A 09/11/2021 16:35 425 13 A 11/11/2021 18:41 385 15 A 14/11/2021 18:17 229 17 A 17/11/2021 17:05 359 18 A 18/11/2021 16:39 210 Element Date Q 1 B 24/10/2021 18:59 210 4 B 29/10/2021 14:23 251 6 B 02/11/2021 21:30 321 8 B 05/11/2021 10:29 854 12 B 09/11/2021 21:55 752 14 B 13/11/2021 11:15 658 16 B 16/11/2021 22:36 258 19 B 19/11/2021 15:41 583 This is the result of two attempts: ax = dfA.plot.scatter(x="Date",y="Q",rot=90) dfB.plot.scatter(x="Date",y="Q",rot=90, ax=ax, color='r') Graph 1 df_A = df[df['Element'] == 'A'].set_index('Date') df_B = df[df['Element'] == 'B'].set_index('Date') plt.figure() ax = df_A[['Q']].plot(figsize=(20,5)) df_B[['Q']].plot(ax=ax) ax.xaxis.set_major_locator(mdates.DayLocator(interval=1)) ax.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y')) plt.gcf().autofmt_xdate() plt.show() Graph 2 I want to represent the two sets of points sharing the x-axis with a common date range and I need a legend with the labels "A" and "B".
[ "Your first attrempt would have wored had you converted the Date column to Timestamp. A scatter plot requires that both values on x- and y-axis to be numerical. When you supply strings on the x-axis, they are treated as positions [0, 1, 2, 3,...] with tick marks equal to the supplied values.\nfor df in [dfA, dfB]:\n df[\"Date\"] = pd.to_datetime(df[\"Date\"], dayfirst=True)\n\nax = dfA.plot.scatter(x=\"Date\", y=\"Q\", rot=90, label=\"A\")\ndfB.plot.scatter(x=\"Date\", y=\"Q\", rot=90, ax=ax, color=\"r\", label=\"B\")\n\n\n" ]
[ 0 ]
[]
[]
[ "datetime", "pandas", "plot", "python", "x_axis" ]
stackoverflow_0074621393_datetime_pandas_plot_python_x_axis.txt
Q: Can't add a record to a database without getting "sqlite3.OperationalError: near "(": syntax error" I made a 'dummy' version for my program consisting of just the first four fields, but once I added the rest, this error keeps appearing. Every other case of this error I've seen was due to something that doesn't apply to mine. I feel like it's something small that I've missed; if anyone could help me figure this out it'd be a great help, thanks. I hope this is enough of the code to figure it out. Even if someone could tell me that the problem lies outside this code it would be a big help. connection = sqlite3.connect("TempDatabase.db") cursor = connection.cursor() sqlCommand = """ CREATE TABLE IF NOT EXISTS OrderTb1N1 ( OrderID INTEGER NOT NULL, dateOrdered DATE, customerFirstname TEXT, customerSurname TEXT, customerPhoneNumber TEXT, collectionDate DATE, flavours TEXT, glaze TEXT, toppings TEXT, personalisation TEXT, orderSize TEXT, price REAL, paymentStatus TEXT, employeeName TEXT primary key (OrderID) ) """ cursor.execute(sqlCommand) connection.commit() connection.close() Since the database already exists, I intended for this code to add additional fields (from the 'dummy' tests) and allow me to input new records. I have deleted all records made during the dummy tests. According to the error output, the error is somewhere within 'sqlCommand='. A: You need to add a comma (,) after employeeName TEXT. Corrected code: connection = sqlite3.connect("TempDatabase.db") cursor = connection.cursor() sqlCommand = """ CREATE TABLE IF NOT EXISTS OrderTb1N1 ( OrderID INTEGER NOT NULL, dateOrdered DATE, customerFirstname TEXT, customerSurname TEXT, customerPhoneNumber TEXT, collectionDate DATE, flavours TEXT, glaze TEXT, toppings TEXT, personalisation TEXT, orderSize TEXT, price REAL, paymentStatus TEXT, employeeName TEXT, primary key (OrderID) ) """ cursor.execute(sqlCommand) connection.commit() connection.close()
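For reference, here is the corrected statement as a complete runnable sketch. Two fixes matter: the comma after employeeName TEXT, and keeping the triple-quoted string (a bare " cannot span multiple lines in Python). Note also that CREATE TABLE IF NOT EXISTS is a no-op when the table already exists, so the new columns will not appear in a table left over from the earlier 'dummy' runs:

import sqlite3

connection = sqlite3.connect("TempDatabase.db")
cursor = connection.cursor()

# Uncomment to discard the old four-column table from the dummy tests first;
# CREATE TABLE IF NOT EXISTS will not add columns to an existing table.
# cursor.execute("DROP TABLE IF EXISTS OrderTb1N1")

sqlCommand = """
CREATE TABLE IF NOT EXISTS OrderTb1N1
(
    OrderID INTEGER NOT NULL,
    dateOrdered DATE,
    customerFirstname TEXT,
    customerSurname TEXT,
    customerPhoneNumber TEXT,
    collectionDate DATE,
    flavours TEXT,
    glaze TEXT,
    toppings TEXT,
    personalisation TEXT,
    orderSize TEXT,
    price REAL,
    paymentStatus TEXT,
    employeeName TEXT,  -- the missing comma here caused the syntax error
    PRIMARY KEY (OrderID)
)
"""

cursor.execute(sqlCommand)
connection.commit()
connection.close()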
Can't add a record to a database without getting "sqlite3.OperationalError: near "(": syntax error"
I made a 'dummy' version for my program consisting of just the first four fields, but once I added the rest, this error keeps appearing. Every other case of this error I've seen was due to something that doesn't apply to mine. I feel like it's something small that I've missed; if anyone could help me figure this out it'd be a great help, thanks. I hope this is enough of the code to figure it out. Even if someone could tell me that the problem lies outside this code it would be a big help. connection = sqlite3.connect("TempDatabase.db") cursor = connection.cursor() sqlCommand = """ CREATE TABLE IF NOT EXISTS OrderTb1N1 ( OrderID INTEGER NOT NULL, dateOrdered DATE, customerFirstname TEXT, customerSurname TEXT, customerPhoneNumber TEXT, collectionDate DATE, flavours TEXT, glaze TEXT, toppings TEXT, personalisation TEXT, orderSize TEXT, price REAL, paymentStatus TEXT, employeeName TEXT primary key (OrderID) ) """ cursor.execute(sqlCommand) connection.commit() connection.close() Since the database already exists, I intended for this code to add additional fields (from the 'dummy' tests) and allow me to input new records. I have deleted all records made during the dummy tests. According to the error output, the error is somewhere within 'sqlCommand='.
[ "you should need to use comma(,) after employeeName TEXT.\nCorrect code-\nconnection = sqlite3.connect(\"TempDatabase.db\")\n cursor = connection.cursor()\n\n sqlCommand = \"\n CREATE TABLE IF NOT EXISTS OrderTb1N1\n (\n OrderID INTEGER NOT NULL,\n dateOrdered DATE,\n customerFirstname TEXT,\n customerSurname TEXT,\n customerPhoneNumber TEXT,\n collectionDate DATE,\n flavours TEXT,\n glaze TEXT,\n toppings TEXT,\n personalisation TEXT,\n orderSize TEXT,\n price REAL,\n paymentStatus TEXT,\n employeeName TEXT,\n primary key (OrderID)\n )\n \"\n\n cursor.execute(sqlCommand)\n connection.commit()\n connection.close()\n\n" ]
[ 0 ]
[]
[]
[ "database", "python", "sqlite", "syntax", "syntax_error" ]
stackoverflow_0074621399_database_python_sqlite_syntax_syntax_error.txt
Q: Python: Pygame error: AttributeError: module 'pygame.image' has no attribute 'rotate' when trying to rotate the image of an in-game character import pygame import os WIDTH, HEIGHT = 900, 500 WIN = pygame.display.set_mode((WIDTH, HEIGHT)) pygame.display.set_caption("First Game!") WHITE = (255, 255, 255) FPS = 60 SPACESHIP_WIDTH, SPACESHIP_HEIGHT = 55, 40 YELLOW_SPACESHIP_IMAGE = pygame.image.load( os.path.join('Assets', 'spaceship_yellow.png')) YELLOW_SPACESHIP = pygame.image.rotate(pygame.transform.scale( YELLOW_SPACESHIP_IMAGE, (SPACESHIP_WIDTH, SPACESHIP_HEIGHT)), 90) RED_SPACESHIP_IMAGE = pygame.image.load( os.path.join('Assets', 'spaceship_red.png')) RED_SPACESHIP = pygame.transform.scale( RED_SPACESHIP_IMAGE, (SPACESHIP_WIDTH, SPACESHIP_HEIGHT)) def draw_window(): WIN.fill(WHITE) WIN.blit(YELLOW_SPACESHIP, (300, 100)) pygame.display.update() def main(): clock = pygame.time.Clock() run = True while run: clock.tick(FPS) for event in pygame.event.get(): if event.type == pygame.QUIT: run = False draw_window() pygame.quit() if __name__ == "__main__": main() I have been carefully following an introductory video on making games using pygame, and when running the code I get the error Traceback (most recent call last): File "C:\Users\morle\PycharmProjects\pythonProject\first game test.py", line 15, in <module> YELLOW_SPACESHIP = pygame.image.rotate(pygame.transform.scale( AttributeError: module 'pygame.image' has no attribute 'rotate' The line in question is YELLOW_SPACESHIP = pygame.image.rotate(pygame.transform.scale( YELLOW_SPACESHIP_IMAGE, (SPACESHIP_WIDTH, SPACESHIP_HEIGHT)), 90) I don't understand why this is happening; any help would be much appreciated. Here is the link to the video, at 27:08 A: pygame.image.rotate does not actually exist. To rotate an image, you have to do the same as for scaling: pygame.transform.rotate(surface, angle) In your case, that would be: YELLOW_SPACESHIP = pygame.transform.rotate(pygame.transform.scale(YELLOW_SPACESHIP_IMAGE, (SPACESHIP_WIDTH, SPACESHIP_HEIGHT)), 90) Note that this function rotates counterclockwise; you can pass negative angles to rotate clockwise. Here is the full documentation: https://www.pygame.org/docs/ref/transform.html#pygame.transform.rotate A: You're close, try: YELLOW_SPACESHIP = pygame.transform.rotate(pygame.transform.scale( YELLOW_SPACESHIP_IMAGE, (SPACESHIP_WIDTH, SPACESHIP_HEIGHT)), 90) rotate is a member of the transform module. Your code was calling rotate on pygame.image, a module that has no such function.
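For reference, here is a minimal sketch of the fix that runs without the image assets: a plain Surface stands in for the loaded spaceship PNG, and the scale/rotate calls are the actual pygame.transform API:

import pygame

pygame.init()

SPACESHIP_WIDTH, SPACESHIP_HEIGHT = 55, 40

# Stand-in for pygame.image.load(os.path.join('Assets', 'spaceship_yellow.png')).
ship = pygame.Surface((120, 80))
ship.fill((255, 255, 0))

# rotate lives in pygame.transform, not pygame.image; 90 turns the sprite
# counterclockwise (use -90 for a clockwise turn).
yellow_spaceship = pygame.transform.rotate(
    pygame.transform.scale(ship, (SPACESHIP_WIDTH, SPACESHIP_HEIGHT)), 90)

print(yellow_spaceship.get_size())  # (40, 55): the 90-degree turn swaps width and height
pygame.quit()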
Python: Pygame error: AttributeError: module 'pygame.image' has no attribute 'rotate' when trying to rotate the image of an in-game character
import pygame import os WIDTH, HEIGHT = 900, 500 WIN = pygame.display.set_mode((WIDTH, HEIGHT)) pygame.display.set_caption("First Game!") WHITE = (255, 255, 255) FPS = 60 SPACESHIP_WIDTH, SPACESHIP_HEIGHT = 55, 40 YELLOW_SPACESHIP_IMAGE = pygame.image.load( os.path.join('Assets', 'spaceship_yellow.png')) YELLOW_SPACESHIP = pygame.image.rotate(pygame.transform.scale( YELLOW_SPACESHIP_IMAGE, (SPACESHIP_WIDTH, SPACESHIP_HEIGHT)), 90) RED_SPACESHIP_IMAGE = pygame.image.load( os.path.join('Assets', 'spaceship_red.png')) RED_SPACESHIP = pygame.transform.scale( RED_SPACESHIP_IMAGE, (SPACESHIP_WIDTH, SPACESHIP_HEIGHT)) def draw_window(): WIN.fill(WHITE) WIN.blit(YELLOW_SPACESHIP, (300, 100)) pygame.display.update() def main(): clock = pygame.time.Clock() run = True while run: clock.tick(FPS) for event in pygame.event.get(): if event.type == pygame.QUIT: run = False draw_window() pygame.quit() if __name__ == "__main__": main() I have been carefully following an introductory video on making games using pygame, and when running the code I get the error Traceback (most recent call last): File "C:\Users\morle\PycharmProjects\pythonProject\first game test.py", line 15, in <module> YELLOW_SPACESHIP = pygame.image.rotate(pygame.transform.scale( AttributeError: module 'pygame.image' has no attribute 'rotate' The line in question is YELLOW_SPACESHIP = pygame.image.rotate(pygame.transform.scale( YELLOW_SPACESHIP_IMAGE, (SPACESHIP_WIDTH, SPACESHIP_HEIGHT)), 90) I don't understand why this is happening; any help would be much appreciated. Here is the link to the video, at 27:08
[ "pygame.image.rotate does not actualy exists.\nTo rotate an image, you have to do the same as to scale :\npygame.transform.rotate(surface, angle)\n\nIn your case, that would be:\nYELLOW_SPACESHIP = pygame.transform.rotate(pygame.transform.scale(YELLOW_SPACESHIP_IMAGE, (SPACESHIP_WIDTH, SPACESHIP_HEIGHT)), 90) \n\nPlease acknowledge that this function will rotate counterclockwise, and you can put negative angles to go clockwise.\nHere is the full documentation : https://www.pygame.org/docs/ref/transform.html#pygame.transform.rotate\n", "You're close, try:\nYELLOW_SPACESHIP = pygame.transform.rotate(pygame.transform.scale(\n YELLOW_SPACESHIP_IMAGE, (SPACESHIP_WIDTH, SPACESHIP_HEIGHT)), 90)\n\nrotate is a member of the module transform. Your code was calling rotate on image which is a variable that you haven't yet defined.\n" ]
[ 2, 2 ]
[]
[]
[ "attributeerror", "pygame", "python" ]
stackoverflow_0074621267_attributeerror_pygame_python.txt
Q: Python Loop question: calling models based on variables I have a basic python loop question. Problem Statement: I have a master list of variables in list 'X', a variable 't' (which is present in the master list) and another variable 'y' (which is also present in the master list). I want to run an ML model inside the loop, and each time I want to remove the variables 't' and 'y' from the master list 'X' and use the updated 'X' as the predictor variables and the removed 't' and 'y' as the treatment and response variables. Basically, I want the following algorithm - df --> dataframe with column name as in list X X = ['a', 'b', 'c', 'd', 'e'] t = each element from list X with each iteration y = ['c'] --> can be any item from list X for each item in X: X_new = remove that item and y from X t_new = removed item df_X = df[X_new] --> dataframe df with updated list of columns in X_new df_t = df[t_new] --> dataframe df with just t_new column df_y = df[y] call ML model function with updated parameters df_X, df_t and df_y with each iteration A: Have you tried using a set? something like... seen_numbers = set() I know this isn't exactly what you're asking, but I use sets to find duplicates or to find things I want to exclude. Maybe this will help... this = line[92:102] + line[114:123] if this not in seen_numbers: seen_numbers.add(this) and you should be able to tweak this to remove what you want
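Neither answer addresses the loop itself, so here is a sketch of the algorithm exactly as the question describes it; run_model is a hypothetical stand-in for the ML call, and the one-row df is a stand-in for the real dataframe whose columns are the names in X:

import pandas as pd

X = ['a', 'b', 'c', 'd', 'e']
y = 'c'  # the response column; can be any item from X
df = pd.DataFrame([[1, 2, 3, 4, 5]], columns=X)  # stand-in for the real data

def run_model(df_X, df_t, df_y):
    # Hypothetical placeholder: fit/evaluate the treatment-effect model here.
    pass

for t in X:
    if t == y:
        continue  # the response column is never used as the treatment
    X_new = [col for col in X if col not in (t, y)]
    df_X = df[X_new]  # predictors: everything except t and y
    df_t = df[t]      # the removed item becomes the treatment column
    df_y = df[y]      # the response column
    run_model(df_X, df_t, df_y)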
Python Loop question: calling models based on variables
I have a basic python loop question. Problem Statement: I have a master list of variables in list 'X', a variable 't' (which is present in the master list) and another variable 'y' (which is also present in the master list). I want to run an ML model inside the loop, and each time I want to remove the variables 't' and 'y' from the master list 'X' and use the updated 'X' as the predictor variables and the removed 't' and 'y' as the treatment and response variables. Basically, I want the following algorithm - df --> dataframe with column name as in list X X = ['a', 'b', 'c', 'd', 'e'] t = each element from list X with each iteration y = ['c'] --> can be any item from list X for each item in X: X_new = remove that item and y from X t_new = removed item df_X = df[X_new] --> dataframe df with updated list of columns in X_new df_t = df[t_new] --> dataframe df with just t_new column df_y = df[y] call ML model function with updated parameters df_X, df_t and df_y with each iteration
[ "have you tried to use a list? something like... seen_numbers = set()\nI know this isn't exactly what you're asking but I use lists to find duplicates or find things I want to exclude. maybe this will help...\nthis = line[92:102] + line[114:123]\nif this not in seen_numbers:\n seen_numbers.add(this)\n\nand you should be able to tweak this to remove what you want\n" ]
[ 0 ]
[]
[]
[ "for_loop", "loops", "pandas", "python" ]
stackoverflow_0074621403_for_loop_loops_pandas_python.txt
Q: Partitions per month from the BigQuery CLI in Python I'm trying to use partitions per month from the BigQuery CLI in Python, but the only thing I get is the error in the image table = bigquery.Table(table_ref, schema=schema) table.time_partitioning = bigquery.TimePartitioning( type_=bigquery.TimePartitioningType.MONTH, field='field_partition') I tried partitions per day and it worked, but when I try months it doesn't work References: https://cloud.google.com/python/docs/reference/bigquery/latest/google.cloud.bigquery.table.TimePartitioning https://cloud.google.com/bigquery/docs/creating-partitioned-tables A: I tested it and it works well with a MONTH partition: def create_table_time_partitioning_month(self): from google.cloud import bigquery client = bigquery.Client() project = client.project dataset_ref = bigquery.DatasetReference(project, 'my_dataset') table_ref = dataset_ref.table("my_partitioned_table") schema = [ bigquery.SchemaField("name", "STRING"), bigquery.SchemaField("post_abbr", "STRING"), bigquery.SchemaField("date", "DATE"), ] table = bigquery.Table(table_ref, schema=schema) table.time_partitioning = bigquery.TimePartitioning( type_=bigquery.TimePartitioningType.MONTH, field="date" ) table = client.create_table(table) print( "Created table {}, partitioned on column {}".format( table.table_id, table.time_partitioning.field ) ) I used the following import: from google.cloud import bigquery Also, check if you have the correct Python package installed in your image: requirements.txt google-cloud-bigquery==3.4.0 pip install -r requirements.txt
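One quick sanity check worth adding: TimePartitioningType.MONTH only exists in sufficiently recent releases of google-cloud-bigquery, so an outdated client library is a plausible cause of errors on this line. A small probe, assuming the library is installed:

# Print the installed client version and probe for MONTH support; on releases
# that predate monthly partitioning this raises AttributeError.
from google.cloud import bigquery

print(bigquery.__version__)
print(bigquery.TimePartitioningType.MONTH)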
Partitions per month from the BigQuery CLI in Python
I'm trying to use partitions per month from the BigQuery CLI in Python, but the only thing I get is the error in the image table = bigquery.Table(table_ref, schema=schema) table.time_partitioning = bigquery.TimePartitioning( type_=bigquery.TimePartitioningType.MONTH, field='field_partition') I tried partitions per day and it worked, but when I try months it doesn't work References: https://cloud.google.com/python/docs/reference/bigquery/latest/google.cloud.bigquery.table.TimePartitioning https://cloud.google.com/bigquery/docs/creating-partitioned-tables
[ "I tested it and it works well with MONTH partition :\ndef create_table_time_partitioning_month(self):\n from google.cloud import bigquery\n client = bigquery.Client()\n project = client.project\n dataset_ref = bigquery.DatasetReference(project, 'my_dataset')\n\n table_ref = dataset_ref.table(\"my_partitioned_table\")\n schema = [\n bigquery.SchemaField(\"name\", \"STRING\"),\n bigquery.SchemaField(\"post_abbr\", \"STRING\"),\n bigquery.SchemaField(\"date\", \"DATE\"),\n ]\n table = bigquery.Table(table_ref, schema=schema)\n table.time_partitioning = bigquery.TimePartitioning(\n type_=bigquery.TimePartitioningType.MONTH,\n field=\"date\"\n )\n\n table = client.create_table(table)\n\n print(\n \"Created table {}, partitioned on column {}\".format(\n table.table_id, table.time_partitioning.field\n )\n )\n\nI used the following import :\nfrom google.cloud import bigquery\n\nAlso, check if you have the correct Python package installed in your image :\nrequirements.txt\ngoogle-cloud-bigquery==3.4.0\n\npip install -r requirements.txt\n\n" ]
[ 0 ]
[]
[]
[ "gcloud", "google_bigquery", "python" ]
stackoverflow_0074619174_gcloud_google_bigquery_python.txt
Q: Python3.10.6 can't pip install things: error: subprocess-exited-with-error Using Python 3.10.6 on a new Ubuntu VPS. Can't pip install dotenv, bs4 etc. pip 22.3.1 is used. Why is this error showing, and how do I fix it? I looked at other questions, but couldn't solve my problem. I tried using a lower pip version; that didn't work. I even reinstalled python3.10. Thanks. (below is the error log - some omitted, all related) root@localhost:~/bp-scraper# pip3 install dotenv Collecting dotenv Using cached dotenv-0.0.5.tar.gz (2.4 kB) Preparing metadata (setup.py) ... error error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. │ exit code: 1 ╰─> [64 lines of output] /usr/local/lib/python3.10/dist-packages/setuptools/installer.py:27: SetuptoolsDeprecationWarning: setuptools.installer is deprecated. Requirements should be satisfied by a PEP 517 installer. warnings.warn( error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. │ exit code: 1 ╰─> [16 lines of output] Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 14, in <module> from setuptools.command.install import install File "/tmp/pip-wheel-pm4y5c7z/distribute_07cb0f8f57c049ab919d5b5646e56f register_loader_type(importlib_bootstrap.SourceFileLoader, DefaultProvider) AttributeError: module 'importlib._bootstrap' has no attribute 'SourceFileLoader' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/setuptools/installer.py", line 82, in fetch_build_egg subprocess.check_call(cmd) File "/usr/lib/python3.10/subprocess.py", line 369, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['/usr/bin/python3', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', '/tmp/tmpwdcseb1p', '--quiet', 'distribute']' returned non-zero exit status 1. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<string>", line 2, in <module> return fetch_build_egg(self, req) File "/usr/local/lib/python3.10/dist-packages/setuptools/installer.py", line 84, in fetch_build_egg raise DistutilsError(str(e)) from e distutils.errors.DistutilsError: Command '['/usr/bin/python3', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', '/tmp/tmpwdcseb1p', '--quiet', 'distribute']' returned non-zero exit status 1. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. A: First update setuptools, as per https://stackoverflow.com/a/58754136/5666087 pip install -U setuptools Then use the correct package for dotenv, which is python-dotenv. pip install python-dotenv
Python3.10.6 can't pip install things: error: subprocess-exited-with-error
Using Python 3.10.6 on a new Ubuntu VPS. Can't pip install dotenv, bs4 etc. pip 22.3.1 is used. Why is this error showing, and how do I fix it? I looked at other questions, but couldn't solve my problem. I tried using a lower pip version; that didn't work. I even reinstalled python3.10. Thanks. (below is the error log - some omitted, all related) root@localhost:~/bp-scraper# pip3 install dotenv Collecting dotenv Using cached dotenv-0.0.5.tar.gz (2.4 kB) Preparing metadata (setup.py) ... error error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. │ exit code: 1 ╰─> [64 lines of output] /usr/local/lib/python3.10/dist-packages/setuptools/installer.py:27: SetuptoolsDeprecationWarning: setuptools.installer is deprecated. Requirements should be satisfied by a PEP 517 installer. warnings.warn( error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. │ exit code: 1 ╰─> [16 lines of output] Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 14, in <module> from setuptools.command.install import install File "/tmp/pip-wheel-pm4y5c7z/distribute_07cb0f8f57c049ab919d5b5646e56f register_loader_type(importlib_bootstrap.SourceFileLoader, DefaultProvider) AttributeError: module 'importlib._bootstrap' has no attribute 'SourceFileLoader' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/setuptools/installer.py", line 82, in fetch_build_egg subprocess.check_call(cmd) File "/usr/lib/python3.10/subprocess.py", line 369, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['/usr/bin/python3', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', '/tmp/tmpwdcseb1p', '--quiet', 'distribute']' returned non-zero exit status 1. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<string>", line 2, in <module> return fetch_build_egg(self, req) File "/usr/local/lib/python3.10/dist-packages/setuptools/installer.py", line 84, in fetch_build_egg raise DistutilsError(str(e)) from e distutils.errors.DistutilsError: Command '['/usr/bin/python3', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', '/tmp/tmpwdcseb1p', '--quiet', 'distribute']' returned non-zero exit status 1. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details.
[ "First update setuptools, as per https://stackoverflow.com/a/58754136/5666087\npip install -U setuptools\n\nThen use the correct package for dotenv, which is python-dotenv.\npip install python-dotenv\n\n" ]
[ 1 ]
[]
[]
[ "pip", "python", "python_3.x" ]
stackoverflow_0074621173_pip_python_python_3.x.txt
Q: Create a new column in a dataframe equaling a differentiated series I want to create a new column diff equaling the differentiation of a series in another column. The following is my dataframe: df=pd.DataFrame({ 'series_1' : [10.1, 15.3, 16, 12, 14.5, 11.8, 2.3, 7.7,5,10], 'series_2' : [9.6,10.4, 11.2, 3.3, 6, 4, 1.94, 15.44, 6.17, 8.16] }) It has the following display: series_1 series_2 0 10.1 9.60 1 15.3 10.40 2 16.0 11.20 3 12.0 3.30 4 14.5 6.00 5 11.8 4.00 6 2.3 1.94 7 7.7 15.44 8 5.0 6.17 9 10.0 8.16 Goal: get the following output: series_1 series_2 diff_2 0 10.1 9.60 NaN 1 15.3 10.40 0.80 2 16.0 11.20 0.80 3 12.0 3.30 -7.90 4 14.5 6.00 2.70 5 11.8 4.00 -2.00 6 2.3 1.94 -2.06 7 7.7 15.44 13.50 8 5.0 6.17 -9.27 9 10.0 8.16 1.99 My code: To reach the desired output I used the following code, and it worked: diff_2=[np.nan] l=len(df) for i in range(1, l): diff_2.append(df['series_2'][i] - df['series_2'][i-1]) df['diff_2'] = diff_2 Issue with my code: I replicated here a simplified dataframe; the real one I am working on is extremely large, and my code took almost 9 minutes to run! I want an alternative allowing me to get the output in a fast way. Any suggestion from your side will be highly appreciated, thanks. A: Here is one way to do it, using diff # create a new col by taking difference b/w consecutive rows of DF using diff df['diff_2']=df['series_2'].diff() df series_1 series_2 diff_2 0 10.1 9.60 NaN 1 15.3 10.40 0.80 2 16.0 11.20 0.80 3 12.0 3.30 -7.90 4 14.5 6.00 2.70 5 11.8 4.00 -2.00 6 2.3 1.94 -2.06 7 7.7 15.44 13.50 8 5.0 6.17 -9.27 9 10.0 8.16 1.99 A: You might want to add the following line of code: df["diff_2"] = df["series_2"].sub(df["series_2"].shift(1)) to achieve your goal output: series_1 series_2 diff_2 0 10.1 9.60 NaN 1 15.3 10.40 0.80 2 16.0 11.20 0.80 3 12.0 3.30 -7.90 4 14.5 6.00 2.70 5 11.8 4.00 -2.00 6 2.3 1.94 -2.06 7 7.7 15.44 13.50 8 5.0 6.17 -9.27 9 10.0 8.16 1.99 That is a built-in pandas feature, so it should be optimized for good performance.
Create a new column in a dataframe equaling a differentiated series
I want to create a new column diff equaling the differentiation of a series in another column. The following is my dataframe: df=pd.DataFrame({ 'series_1' : [10.1, 15.3, 16, 12, 14.5, 11.8, 2.3, 7.7,5,10], 'series_2' : [9.6,10.4, 11.2, 3.3, 6, 4, 1.94, 15.44, 6.17, 8.16] }) It has the following display: series_1 series_2 0 10.1 9.60 1 15.3 10.40 2 16.0 11.20 3 12.0 3.30 4 14.5 6.00 5 11.8 4.00 6 2.3 1.94 7 7.7 15.44 8 5.0 6.17 9 10.0 8.16 Goal: get the following output: series_1 series_2 diff_2 0 10.1 9.60 NaN 1 15.3 10.40 0.80 2 16.0 11.20 0.80 3 12.0 3.30 -7.90 4 14.5 6.00 2.70 5 11.8 4.00 -2.00 6 2.3 1.94 -2.06 7 7.7 15.44 13.50 8 5.0 6.17 -9.27 9 10.0 8.16 1.99 My code: To reach the desired output I used the following code, and it worked: diff_2=[np.nan] l=len(df) for i in range(1, l): diff_2.append(df['series_2'][i] - df['series_2'][i-1]) df['diff_2'] = diff_2 Issue with my code: I replicated here a simplified dataframe; the real one I am working on is extremely large, and my code took almost 9 minutes to run! I want an alternative allowing me to get the output in a fast way. Any suggestion from your side will be highly appreciated, thanks.
[ "here is one way to do it, using diff\n\n# create a new col by taking difference b/w consecutive rows of DF using diff\ndf['diff_2']=df['series_2'].diff()\ndf\n\n series_1 series_2 diff_2\n0 10.1 9.60 NaN\n1 15.3 10.40 0.80\n2 16.0 11.20 0.80\n3 12.0 3.30 -7.90\n4 14.5 6.00 2.70\n5 11.8 4.00 -2.00\n6 2.3 1.94 -2.06\n7 7.7 15.44 13.50\n8 5.0 6.17 -9.27\n9 10.0 8.16 1.99\n\n", "You might want to add the following line of code:\ndf[\"diff_2\"] = df[\"series_2\"].sub(df[\"series_2\"].shift(1))\n\nto achieve your goal output:\n series_1 series_2 diff_2\n0 10.1 9.60 NaN\n1 15.3 10.40 0.80\n2 16.0 11.20 0.80\n3 12.0 3.30 -7.90\n4 14.5 6.00 2.70\n5 11.8 4.00 -2.00\n6 2.3 1.94 -2.06\n7 7.7 15.44 13.50\n8 5.0 6.17 -9.27\n9 10.0 8.16 1.99\n\nThat is a build-in pandas feature, so that should be optimized for good performance.\n" ]
[ 2, 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074621360_dataframe_pandas_python.txt