Dataset columns:
content: string (lengths 85 to 101k)
title: string (lengths 0 to 150)
question: string (lengths 15 to 48k)
answers: sequence
answers_scores: sequence
non_answers: sequence
non_answers_scores: sequence
tags: sequence
name: string (lengths 35 to 137)
Q: Python not iterating over array with for loop Write a program that fills an array of 10 elements with random numbers from 1 to 10, and then swaps the first element with the second, the third with the fourth, and so on. Display the original and transformed array Here is my solution, but Python doesn't want to sort the array and it stays the same: from random import randint numbers = [] for i in range(10): numbers.append(randint(1, 10)) print(numbers) a = 0 for a in range(10): numbers[-1], numbers[i] = numbers[i], numbers[-1] a = a + 2 print(numbers) I have tried replacing elements with a loop by numbers[a] = numbers[a+1] , But I kept getting the error: IndexError: list index out of range A: There's a couple things here: 1: as @bereal said, range() has a tird optional step argument, and I've never seen a better time to use it. Check out the documentation for range() https://docs.python.org/3/library/functions.html#func-range 2: I see you reference numbers[-1] even though I think you mean number[-i], it will still reference numbers[-1] on the first iteration, thereby giving an error. A: You can make the swap when its not an even number like this: for a in range(10): if a % 2 == 1: tempNumber = numbers[a] numbers[a] = numbers[a-1] numbers[a-1] = tempNumber And then you will have First output: [9, 7, 4, 7, 4, 3, 1, 9, 9, 9] Final output: [7, 9, 7, 4, 3, 4, 9, 1, 9, 9]
Python not iterating over array with for loop
Write a program that fills an array of 10 elements with random numbers from 1 to 10, and then swaps the first element with the second, the third with the fourth, and so on. Display the original and transformed array Here is my solution, but Python doesn't want to sort the array and it stays the same: from random import randint numbers = [] for i in range(10): numbers.append(randint(1, 10)) print(numbers) a = 0 for a in range(10): numbers[-1], numbers[i] = numbers[i], numbers[-1] a = a + 2 print(numbers) I have tried replacing elements with a loop by numbers[a] = numbers[a+1] , But I kept getting the error: IndexError: list index out of range
[ "There's a couple things here:\n1: as @bereal said, range() has a tird optional step argument, and I've never seen a better time to use it. Check out the documentation for range() https://docs.python.org/3/library/functions.html#func-range\n2: I see you reference numbers[-1] even though I think you mean number[-i], it will still reference numbers[-1] on the first iteration, thereby giving an error.\n", "You can make the swap when its not an even number like this:\nfor a in range(10):\n if a % 2 == 1:\n tempNumber = numbers[a]\n numbers[a] = numbers[a-1]\n numbers[a-1] = tempNumber\n\nAnd then you will have\nFirst output:\n[9, 7, 4, 7, 4, 3, 1, 9, 9, 9]\n\nFinal output:\n[7, 9, 7, 4, 3, 4, 9, 1, 9, 9]\n\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074629510_python.txt
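A note on the record above: the first answer points at range()'s optional step argument but never shows it. A minimal sketch of that pair-swapping approach (variable names chosen here for illustration, not taken from the post) could look like this:

from random import randint

numbers = [randint(1, 10) for _ in range(10)]
print(numbers)  # original array

# step through indices 0, 2, 4, ... and swap each pair of neighbours
for i in range(0, len(numbers) - 1, 2):
    numbers[i], numbers[i + 1] = numbers[i + 1], numbers[i]

print(numbers)  # transformed array

For ten elements this swaps positions (0,1), (2,3), ..., (8,9), which matches the exercise statement in the question.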
Q: Flask deprecated before_first_request How to update I'm learning web developing for simple applications and I've created one that uses before_first_request decorator. According with the new release notes, the before_first_request is deprecated and will be removed from Flask 2.3: Deprecated since version 2.2: Will be removed in Flask 2.3. Run setup code when creating the application instead. I don't understand how I can update my code to be complacent with flask 2.3 and still run a function at first request without using before_first_request. Could some kind soul give me an example ? A: I don't know if this is answered but for anyone looking for the answer: in place of the @app.before_first_request decorated function use the app instance like this: i.e. # In place of something like this @app.before_first_request def create_tables(): db.create_all() ... # USE THIS INSTEAD with app.app_context(): db.create_all()
Flask deprecated before_first_request How to update
I'm learning web developing for simple applications and I've created one that uses before_first_request decorator. According with the new release notes, the before_first_request is deprecated and will be removed from Flask 2.3: Deprecated since version 2.2: Will be removed in Flask 2.3. Run setup code when creating the application instead. I don't understand how I can update my code to be complacent with flask 2.3 and still run a function at first request without using before_first_request. Could some kind soul give me an example ?
[ "I don't know if this is answered but for anyone looking for the answer:\nin place of the @app.before_first_request decorated function use the app instance like this:\ni.e.\n# In place of something like this\[email protected]_first_request\ndef create_tables():\n db.create_all()\n ...\n\n# USE THIS INSTEAD\nwith app.app_context():\n db.create_all()\n\n" ]
[ 0 ]
[]
[]
[ "flask", "python" ]
stackoverflow_0073570041_flask_python.txt
Q: Library for numerical Integration of a function over a tetrahedron (3D) I am looking for å python library that has a function to solve 3D integrals over a tetrahedron. I would like to be able to input four points on the form (x, y, z) and a function f(x, y, z) where f is a polynomial function. Have only found functions that accepts integration boundaries that go from function to function, but I need an integration tool that accepts points. A: You can use the quadpy library to integrate a function over a tetrahedron. But for a polynomial function f, it is possible to calculate the exact value of the integral of f over a tetrahedron (more generally over a simplex in any dimension). This method is implemented in the R package SimplicialCubature. This paper provides another method; I implemented it in Python: gist.
Library for numerical Integration of a function over a tetrahedron (3D)
I am looking for å python library that has a function to solve 3D integrals over a tetrahedron. I would like to be able to input four points on the form (x, y, z) and a function f(x, y, z) where f is a polynomial function. Have only found functions that accepts integration boundaries that go from function to function, but I need an integration tool that accepts points.
[ "You can use the quadpy library to integrate a function over a tetrahedron.\nBut for a polynomial function f, it is possible to calculate the exact value of the integral of f over a tetrahedron (more generally over a simplex in any dimension). This method is implemented in the R package SimplicialCubature. This paper provides another method; I implemented it in Python: gist.\n" ]
[ 0 ]
[]
[]
[ "finite_element_analysis", "math", "numerical_integration", "python" ]
stackoverflow_0074345108_finite_element_analysis_math_numerical_integration_python.txt
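The answer in the record above names quadpy and SimplicialCubature without showing code. As a library-free alternative, a Monte Carlo estimate over a tetrahedron is easy to sketch: draw uniform barycentric coordinates from a Dirichlet distribution, map them through the four corner points, and scale the mean of the integrand by the tetrahedron's volume. This only approximates the integral (it is not the exact polynomial rule the answer refers to), and the corner points and integrand below are made-up examples:

import numpy as np

def integrate_tetrahedron_mc(f, vertices, n_samples=100_000, seed=0):
    # Monte Carlo estimate of the integral of f(x, y, z) over a tetrahedron.
    # vertices: the 4 corner points, shape (4, 3).
    v = np.asarray(vertices, dtype=float)
    rng = np.random.default_rng(seed)
    bary = rng.dirichlet(np.ones(4), size=n_samples)   # uniform barycentric coords, (n, 4)
    points = bary @ v                                  # sample points inside the tetrahedron, (n, 3)
    volume = abs(np.linalg.det(v[1:] - v[0])) / 6.0    # |det of edge vectors| / 6
    values = f(points[:, 0], points[:, 1], points[:, 2])
    return volume * values.mean()

tetra = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(integrate_tetrahedron_mc(lambda x, y, z: x * y + z**2, tetra))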
Q: accessing values incorrectly from list in python I have two example files. myheader.h #define MACRO1 42 #define lang_init () c_init() #define min(X, Y) ((X) < (Y) ? (X) : (Y)) and pyparser.py from pyparsing import * # define the structure of a macro definition (the empty term is used # to advance to the next non-whitespace character) macroDef = "#define" + Word(alphas+"_",alphanums+"_").setResultsName("macro") + \ empty + restOfLine.setResultsName("value") with open('myheader.h', 'r') as f: res = macroDef.scanString(f.read()) res = list(res) print(res[0]) print(res[1]) print(res[2]) the output is ((['#define', 'MACRO1', '42'], {'macro': ['MACRO1'], 'value': ['42']}), 0, 17) ((['#define', 'lang_init', '() c_init()'], {'macro': ['lang_init'], 'value': ['() c_init()']}), 18, 48) ((['#define', 'min', '(X, Y) ((X) < (Y) ? (X) : (Y))'], {'macro': ['min'], 'value': ['(X, Y) ((X) < (Y) ? (X) : (Y))']}), 49, 91) I thought print(res[0]) would print "#define", print print(res[1]) would print 'MACRO1' and so on. I'm not that familiar with Python, but I'm assuming res is not an array correct? How does indexing works in this case? Thanks A: Question, what is the value of len(res). In python when you have a list inside of a list you can use a second indexer to access the elements inside of it. So for example if the first element res[0] was a list, you could do res[0][0] to get '#define'. However, your output that you have shown is in a different format than typical nested list syntax, so doing res[0][0] might not work (because it might not be the right type of object). This is what a nested list is supposed to look like: [[0, 1], [1, 2, 100], [2, 3], [3, 4], [4, 5]] Your output looks like its in json formatting, but without knowing the type of data object it is for sure I can't be certain. If it is json, you might be able to do json.loads(res) and then parse it that way. https://www.freecodecamp.org/news/python-json-how-to-convert-a-string-to-json/
accessing values incorrectly from list in python
I have two example files. myheader.h #define MACRO1 42 #define lang_init () c_init() #define min(X, Y) ((X) < (Y) ? (X) : (Y)) and pyparser.py from pyparsing import * # define the structure of a macro definition (the empty term is used # to advance to the next non-whitespace character) macroDef = "#define" + Word(alphas+"_",alphanums+"_").setResultsName("macro") + \ empty + restOfLine.setResultsName("value") with open('myheader.h', 'r') as f: res = macroDef.scanString(f.read()) res = list(res) print(res[0]) print(res[1]) print(res[2]) the output is ((['#define', 'MACRO1', '42'], {'macro': ['MACRO1'], 'value': ['42']}), 0, 17) ((['#define', 'lang_init', '() c_init()'], {'macro': ['lang_init'], 'value': ['() c_init()']}), 18, 48) ((['#define', 'min', '(X, Y) ((X) < (Y) ? (X) : (Y))'], {'macro': ['min'], 'value': ['(X, Y) ((X) < (Y) ? (X) : (Y))']}), 49, 91) I thought print(res[0]) would print "#define", print print(res[1]) would print 'MACRO1' and so on. I'm not that familiar with Python, but I'm assuming res is not an array correct? How does indexing works in this case? Thanks
[ "Question, what is the value of len(res).\nIn python when you have a list inside of a list you can use a second indexer to access the elements inside of it. So for example if the first element res[0] was a list, you could do res[0][0] to get '#define'.\nHowever, your output that you have shown is in a different format than typical nested list syntax, so doing res[0][0] might not work (because it might not be the right type of object).\nThis is what a nested list is supposed to look like:\n[[0, 1], [1, 2, 100], [2, 3], [3, 4], [4, 5]]\nYour output looks like its in json formatting, but without knowing the type of data object it is for sure I can't be certain. If it is json, you might be able to do json.loads(res) and then parse it that way.\nhttps://www.freecodecamp.org/news/python-json-how-to-convert-a-string-to-json/\n" ]
[ 0 ]
[]
[]
[ "list", "pyparsing", "python" ]
stackoverflow_0074629430_list_pyparsing_python.txt
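A small addition to the record above: the printed output in the question already shows that each element of res is a (tokens, start, end) tuple from scanString, so the tokens can be unpacked and then indexed by position or by the results names set in the parser. A short sketch, assuming res was built exactly as in the question:

for tokens, start, end in res:
    print(tokens[0])         # '#define'
    print(tokens['macro'])   # e.g. 'MACRO1'
    print(tokens['value'])   # e.g. '42'

So res[0] is the whole first match, res[0][0] is its ParseResults, and res[0][0][0] is the literal '#define'.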
Q: Extract value from a dataframe column of dictionary of lists lists and create a new column I have a dataframe with one of the columns as a list and another column as a dictionary. However, this is not consistent. It could be a single element or NULL too df = pd.DataFrame({'item_id':[1,1,1,2,3,4,4], 'shop_id':['S1','S2','S3','S2','S3','S1','S2'], 'price_list':["{'10':['S1','S2'], '20':['S3'], '30':['S4']}","{'10':['S1','S2'], '20':['S3'], '30':['S4']}","{'10':['S1','S2'], '20':['S3'], '30':['S4']}",'50','NaN',"{'10':['S1','S2','S3'],'25':['S4']}","{'10':['S1','S2','S3'],'25':['S4']}"]}) +---------+---------+--------------------------------------------------+ | item_id | shop_id | price_list | +---------+---------+--------------------------------------------------+ | 1 | S1 | {'10': ['S1', 'S2'], '20': ['S3'], '30': ['S4']} | | 1 | S2 | {'10': ['S1', 'S2'], '20': ['S3'], '30': ['S4']} | | 1 | S3 | {'10': ['S1', 'S2'], '20': ['S3'], '30': ['S4']} | | 2 | S2 | 50 | | 3 | S3 | NaN | | 4 | S1 | {'10': ['S1', 'S2', 'S3'], '25': ['S4']} | | 4 | S2 | {'10': ['S1', 'S2', 'S3'], '25': ['S4']} | +---------+---------+--------------------------------------------------+ I would like this to be expanded as this: +---------+---------+-------+ | item_id | shop_id | price | +---------+---------+-------+ | 1 | S1 | 10 | | 1 | S2 | 10 | | 1 | S3 | 20 | | 2 | S2 | 50 | | 3 | S3 | NaN | | 4 | S1 | 10 | | 4 | S2 | 10 | +---------+---------+-------+ I have tried with apply : def get_price(row): if row['price_list'][0]=='{': prices = eval(row['price_list']) for key,value in prices.items(): if str(row['shop_id']) in value: price = key break price = np.nan else: price = row["price_list"] return price df['price'] = df.apply(lambda row: get_price(row),axis=1) The dictionary elements in the price_list column are actually strings, so I might need them to be evaluated as dicts first? But the above approach takes a lot of time since my dataframe is pretty large. What is the best way to achieve this? Any suggestion is appreciated. Thanks! A: I would use a list comprehension with a generator to search for the key from the value: df['price'] = [next((k for k,l in d.items() if s in l), None) if isinstance(d, dict) else d for s, d in zip(df['shop_id'], df.pop('price_list'))] NB. pop removes the "price_list" column in place. Output: item_id shop_id price 0 1 S1 10 1 1 S2 10 2 1 S3 20 3 2 S2 50 4 3 S3 NaN 5 4 S1 10 6 4 S2 10 workaround if you have string representations of dicts import ast df['price'] = [next((k for k,l in ast.literal_eval(d).items() if s in l), None) if isinstance(d, str) and d.startswith('{') else d for s, d in zip(df['shop_id'], df.pop('price_list'))]
Extract value from a dataframe column of dictionary of lists lists and create a new column
I have a dataframe with one of the columns as a list and another column as a dictionary. However, this is not consistent. It could be a single element or NULL too df = pd.DataFrame({'item_id':[1,1,1,2,3,4,4], 'shop_id':['S1','S2','S3','S2','S3','S1','S2'], 'price_list':["{'10':['S1','S2'], '20':['S3'], '30':['S4']}","{'10':['S1','S2'], '20':['S3'], '30':['S4']}","{'10':['S1','S2'], '20':['S3'], '30':['S4']}",'50','NaN',"{'10':['S1','S2','S3'],'25':['S4']}","{'10':['S1','S2','S3'],'25':['S4']}"]}) +---------+---------+--------------------------------------------------+ | item_id | shop_id | price_list | +---------+---------+--------------------------------------------------+ | 1 | S1 | {'10': ['S1', 'S2'], '20': ['S3'], '30': ['S4']} | | 1 | S2 | {'10': ['S1', 'S2'], '20': ['S3'], '30': ['S4']} | | 1 | S3 | {'10': ['S1', 'S2'], '20': ['S3'], '30': ['S4']} | | 2 | S2 | 50 | | 3 | S3 | NaN | | 4 | S1 | {'10': ['S1', 'S2', 'S3'], '25': ['S4']} | | 4 | S2 | {'10': ['S1', 'S2', 'S3'], '25': ['S4']} | +---------+---------+--------------------------------------------------+ I would like this to be expanded as this: +---------+---------+-------+ | item_id | shop_id | price | +---------+---------+-------+ | 1 | S1 | 10 | | 1 | S2 | 10 | | 1 | S3 | 20 | | 2 | S2 | 50 | | 3 | S3 | NaN | | 4 | S1 | 10 | | 4 | S2 | 10 | +---------+---------+-------+ I have tried with apply : def get_price(row): if row['price_list'][0]=='{': prices = eval(row['price_list']) for key,value in prices.items(): if str(row['shop_id']) in value: price = key break price = np.nan else: price = row["price_list"] return price df['price'] = df.apply(lambda row: get_price(row),axis=1) The dictionary elements in the price_list column are actually strings, so I might need them to be evaluated as dicts first? But the above approach takes a lot of time since my dataframe is pretty large. What is the best way to achieve this? Any suggestion is appreciated. Thanks!
[ "I would use a list comprehension with a generator to search for the key from the value:\ndf['price'] = [next((k for k,l in d.items() if s in l), None)\n if isinstance(d, dict) else d\n for s, d in zip(df['shop_id'], df.pop('price_list'))]\n\nNB. pop removes the \"price_list\" column in place.\nOutput:\n item_id shop_id price\n0 1 S1 10\n1 1 S2 10\n2 1 S3 20\n3 2 S2 50\n4 3 S3 NaN\n5 4 S1 10\n6 4 S2 10\n\nworkaround if you have string representations of dicts\nimport ast\n\ndf['price'] = [next((k for k,l in ast.literal_eval(d).items() if s in l), None)\n if isinstance(d, str) and d.startswith('{') else d\n for s, d in zip(df['shop_id'], df.pop('price_list'))]\n\n" ]
[ 3 ]
[]
[]
[ "dataframe", "dictionary", "list_comprehension", "pandas", "python" ]
stackoverflow_0074629686_dataframe_dictionary_list_comprehension_pandas_python.txt
Q: F string inside For Loop I dont understand why this doesn't work. I am trying to do a For Loop to save my error measures: error = [] naive_list = list(['24', '168', 'standard', 'custom']) for i in naive_list: for j in range(1,5): rmse = mean_squared_error(df_test["f'Price_REG{j}'"], f'df_test_{i}'["f'Price_REG{j}'"], squared=False) mae = mae(df_test["f'Price_REG{j}'"], f'df_test_{i}'["f'Price_REG{j}'"]) error.append(rmse) error.append(mae) The idea is to save all of these 16 measures so I can reach them later, be it in separate variables, in a dict, or a list. First I tried to use f'string on the variable name too f'rmse_{i}_{j} = mean_squared_error(df_test["f'Price_REG{j}'"], f'df_test_{i}'["f'Price_REG{j}'"], squared=False) but apparently this is not possible. So instead I tried to put them all in one list, like above. But I am getting a error on the f'string KeyError: "f'Price_REG{j}'". A: This df_test["f'Price_REG{j}'"] literally means the string df_test["f'Price_REG{j}'"]. It will not be evaluated further. It's a string-literal. Instead df_test[f'Price_REG{j}'] WOULD be evaluated and would return whatever is in Price_REG{j} and then fetch the column of the same name from the df. That being said, because that f-string contains only a variable, it's functionally equivalent to just df_test[Price_REG{j}]. You have painted yourself into a corner over-engineering. None of the f-strings and string-literals with f-strings inside them are needed here. Just the variable themselves, with no quotes around them. Instead: error = [] naive_list = list(['24', '168', 'standard', 'custom']) for i in naive_list: for j in range(1,5): rmse = mean_squared_error(df_test[Price_REG{j}], df_test_{i}[Price_REG{j}], squared=False) mae = mae(df_test[Price_REG{j}], df_test_{i}[Price_REG{j}]) error.append(rmse) error.append(mae) What I'm not sure about though is whatever df_test_ is or what df_test_{i} is meant to do. Nor am I fully understanding the curly braces of Price_REG{j} (should those be square brackets? What is Price_REG?). But at any rate, at least you are free of your painted corner and continue hammering out whatever other kinks may pop up in this code.
F string inside For Loop
I dont understand why this doesn't work. I am trying to do a For Loop to save my error measures: error = [] naive_list = list(['24', '168', 'standard', 'custom']) for i in naive_list: for j in range(1,5): rmse = mean_squared_error(df_test["f'Price_REG{j}'"], f'df_test_{i}'["f'Price_REG{j}'"], squared=False) mae = mae(df_test["f'Price_REG{j}'"], f'df_test_{i}'["f'Price_REG{j}'"]) error.append(rmse) error.append(mae) The idea is to save all of these 16 measures so I can reach them later, be it in separate variables, in a dict, or a list. First I tried to use f'string on the variable name too f'rmse_{i}_{j} = mean_squared_error(df_test["f'Price_REG{j}'"], f'df_test_{i}'["f'Price_REG{j}'"], squared=False) but apparently this is not possible. So instead I tried to put them all in one list, like above. But I am getting a error on the f'string KeyError: "f'Price_REG{j}'".
[ "This df_test[\"f'Price_REG{j}'\"] literally means the string df_test[\"f'Price_REG{j}'\"]. It will not be evaluated further. It's a string-literal.\nInstead df_test[f'Price_REG{j}'] WOULD be evaluated and would return whatever is in Price_REG{j} and then fetch the column of the same name from the df.\nThat being said, because that f-string contains only a variable, it's functionally equivalent to just df_test[Price_REG{j}].\nYou have painted yourself into a corner over-engineering. None of the f-strings and string-literals with f-strings inside them are needed here. Just the variable themselves, with no quotes around them.\nInstead:\nerror = []\nnaive_list = list(['24', '168', 'standard', 'custom'])\nfor i in naive_list:\n for j in range(1,5):\n rmse = mean_squared_error(df_test[Price_REG{j}], df_test_{i}[Price_REG{j}], squared=False)\n mae = mae(df_test[Price_REG{j}], df_test_{i}[Price_REG{j}])\n error.append(rmse)\n error.append(mae)\n\nWhat I'm not sure about though is whatever df_test_ is or what df_test_{i} is meant to do. Nor am I fully understanding the curly braces of Price_REG{j} (should those be square brackets? What is Price_REG?). But at any rate, at least you are free of your painted corner and continue hammering out whatever other kinks may pop up in this code.\n" ]
[ 0 ]
[]
[]
[ "for_loop", "pandas", "python" ]
stackoverflow_0074629389_for_loop_pandas_python.txt
Q: how to color text with condition in django? I have a django app and I want to color some text, if that text is true in dictionary. So I have this method:views.py def data_compare2(request): template = get_template("main/data_compare.html") dict2 = {"appel": 3962.00, "waspeen": 3304.07, "ananas":24} context = {"dict2": dict2} res = dict((v,k) for k,v in dict2.items()) return HttpResponse(template.render(context, request, res[3962.00])) and this is the template: <html lang="en"> <head> <meta charset="UTF-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Document</title> </head> <body> <div class="container center"> {% for key, value in dict1.items %} {%if {{res}} %} <div style="background-color:'red'"></div> {%endif%} {{ key }} {{value}}<br> {% endfor %} </div> </body> </html> So the text appel": 3962.00 has to appear in red. Question: how to make the founded text in dictionary red? A: I would separate fruits from the condition, inside the context. Transform the condition into a list to check on the template. views.py def data_compare2(request): fruits = {"appel": 3962.00, "waspeen": 3304.07, "ananas":24,} condition = ['appel', 'ananas'] context = { 'fruits': fruits, 'condition': condition } return render(request, 'main/data_compare.html', context) template.html {% extends 'base.html' %} {% block content %} <div class="container center"> {% for key, value in fruits.items %} <span {% if key in condition %} style="color: red;" {% endif %}>{{ key }}: {{value}}</span><br> {% endfor %} </div> {% endblock %} This will result in 'appel' and 'ananas' being red.
how to color text with condition in django?
I have a django app and I want to color some text, if that text is true in dictionary. So I have this method:views.py def data_compare2(request): template = get_template("main/data_compare.html") dict2 = {"appel": 3962.00, "waspeen": 3304.07, "ananas":24} context = {"dict2": dict2} res = dict((v,k) for k,v in dict2.items()) return HttpResponse(template.render(context, request, res[3962.00])) and this is the template: <html lang="en"> <head> <meta charset="UTF-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Document</title> </head> <body> <div class="container center"> {% for key, value in dict1.items %} {%if {{res}} %} <div style="background-color:'red'"></div> {%endif%} {{ key }} {{value}}<br> {% endfor %} </div> </body> </html> So the text appel": 3962.00 has to appear in red. Question: how to make the founded text in dictionary red?
[ "I would separate fruits from the condition, inside the context. Transform the condition into a list to check on the template.\nviews.py\ndef data_compare2(request):\n fruits = {\"appel\": 3962.00, \"waspeen\": 3304.07, \"ananas\":24,}\n condition = ['appel', 'ananas']\n\n context = {\n 'fruits': fruits,\n 'condition': condition\n }\n \n return render(request, 'main/data_compare.html', context)\n\ntemplate.html\n{% extends 'base.html' %}\n\n{% block content %}\n <div class=\"container center\">\n {% for key, value in fruits.items %}\n <span {% if key in condition %} style=\"color: red;\" {% endif %}>{{ key }}: {{value}}</span><br>\n {% endfor %}\n </div>\n{% endblock %}\n\nThis will result in 'appel' and 'ananas' being red.\n" ]
[ 2 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074629358_django_python.txt
Q: Why isn't my .Dockerignore file ignoring files? When I build the container and I check the files that should have been ignored, most of them haven't been ignored. This is my folder structure. Root/ data/ project/ __pycache__/ media/ static/ app/ __pycache__/ migrations/ templates/ .dockerignore .gitignore .env docker-compose.yml Dockerfile requirements.txt manage.py Let's say i want to ignore the __pycache__ & data(data will be created with the docker-compose up command, when creating the container) folders and the .gitignore & .env files. I will ignore these with the next .dockerignore file .git .gitignore .docker */__pycache__/ **/__pycache__/ .env/ .venv/ venv/ data/ The final result is that only the git & .env files have been ignored. The data folder hasn't been ignored but it's not accesible from the container. And the __pycache__ folders haven't been ignored either. Here are the docker files. docker-compose.yml version: "3.8" services: app: build: . volumes: - .:/django-app ports: - 8000:8000 command: /bin/bash -c "sleep 7; python manage.py migrate; python manage.py runserver 0.0.0.0:8000" container_name: app-container depends_on: - db db: image: postgres volumes: - ./data:/var/lib/postgresql/data environment: - POSTGRES_DB=${DB_NAME} - POSTGRES_USER=${DB_USER} - POSTGRES_PASSWORD=${DB_PASSWORD} container_name: postgres_db_container Dockerfile FROM python:3.9-slim-buster ENV PYTHONUNBUFFERED=1 WORKDIR /django-app EXPOSE 8000 COPY requirements.txt requirements.txt RUN apt-get update \ && adduser --disabled-password --no-create-home userapp \ && apt-get -y install libpq-dev \ && apt-get -y install apt-file \ && apt-get -y install python3-dev build-essential \ && pip install -r requirements.txt USER userapp A: You're actually injecting your source code using volumes:, not during the image build, and this doesn't honor .dockerignore. Running a Docker application like this happens in two phases: You build a reusable image that contains the application runtime, any OS and language-specific library dependencies, and the application code; then You run a container based on that image. The .dockerignore file is only considered during the first build phase. In your setup, you don't actually COPY anything in the image beyond the requirements.txt file. Instead, you use volumes: to inject parts of the host system into the container. This happens during the second phase, and ignores .dockerignore. The approach I'd recommend for this is to skip the volumes:, and instead COPY the required source code in the Dockerfile. You should also generally indicate the default CMD the container will run in the Dockerfile, rather than requiring it it the docker-compose.yml or docker run command. FROM python:3.9-slim-buster # Do the OS-level setup _first_ so that it's not repeated # if Python dependencies change RUN apt-get update && apt-get install -y ... WORKDIR /django-app # Then install Python dependencies COPY requirements.txt . RUN pip install -r requirements.txt # Then copy in the rest of the application # NOTE: this _does_ honor .dockerignore COPY . . # And explain how to run it ENV PYTHONUNBUFFERED=1 EXPOSE 8000 USER userapp # consider splitting this into an ENTRYPOINT that waits for the # the database, runs migrations, and then `exec "$@"` to run the CMD CMD sleep 7; python manage.py migrate; python manage.py runserver 0.0.0.0:8000 This means, in the docker-compose.yml setup, you don't need volumes:; the application code is already inside the image you built. version: "3.8" services: app: build: . 
ports: - 8000:8000 depends_on: - db # environment: [PGHOST=db] # no volumes: or container_name: db: image: postgres volumes: # do keep for persistent database data - ./data:/var/lib/postgresql/data environment: - POSTGRES_DB=${DB_NAME} - POSTGRES_USER=${DB_USER} - POSTGRES_PASSWORD=${DB_PASSWORD} # ports: ['5433:5432'] This approach also means you need to docker-compose build a new image when your application changes. This is normal in Docker. For day-to-day development, a useful approach here can be to run all of the non-application dependencies in Docker, but the application itself outside a container. # Start the database but not the application docker-compose up -d db # Create a virtual environment and set it up python3 -m venv venv . venv/bin/activate pip install -r requirements.txt # Set environment variables to point at the Docker database export PGHOST=localhost PGPORT=5433 # Run the application locally ./manage.py runserver Doing this requires making the database visible from outside Docker (via ports:), and making the database location configurable (probably via environment variables, set in Compose with environment:). A: That's not actually your case, but in general an additional cause of ".dockerignore not ignoring" is that it applies the filters to whole paths relative to the context dir, not just basenames, so the pattern: __pycache__ *.pyc applies only to the docker context's root directory, not to any of subdirectories. In order to make it recursive, change it to: **/__pycache__ **/*.pyc
Why isn't my .Dockerignore file ignoring files?
When I build the container and I check the files that should have been ignored, most of them haven't been ignored. This is my folder structure. Root/ data/ project/ __pycache__/ media/ static/ app/ __pycache__/ migrations/ templates/ .dockerignore .gitignore .env docker-compose.yml Dockerfile requirements.txt manage.py Let's say i want to ignore the __pycache__ & data(data will be created with the docker-compose up command, when creating the container) folders and the .gitignore & .env files. I will ignore these with the next .dockerignore file .git .gitignore .docker */__pycache__/ **/__pycache__/ .env/ .venv/ venv/ data/ The final result is that only the git & .env files have been ignored. The data folder hasn't been ignored but it's not accesible from the container. And the __pycache__ folders haven't been ignored either. Here are the docker files. docker-compose.yml version: "3.8" services: app: build: . volumes: - .:/django-app ports: - 8000:8000 command: /bin/bash -c "sleep 7; python manage.py migrate; python manage.py runserver 0.0.0.0:8000" container_name: app-container depends_on: - db db: image: postgres volumes: - ./data:/var/lib/postgresql/data environment: - POSTGRES_DB=${DB_NAME} - POSTGRES_USER=${DB_USER} - POSTGRES_PASSWORD=${DB_PASSWORD} container_name: postgres_db_container Dockerfile FROM python:3.9-slim-buster ENV PYTHONUNBUFFERED=1 WORKDIR /django-app EXPOSE 8000 COPY requirements.txt requirements.txt RUN apt-get update \ && adduser --disabled-password --no-create-home userapp \ && apt-get -y install libpq-dev \ && apt-get -y install apt-file \ && apt-get -y install python3-dev build-essential \ && pip install -r requirements.txt USER userapp
[ "You're actually injecting your source code using volumes:, not during the image build, and this doesn't honor .dockerignore.\nRunning a Docker application like this happens in two phases:\n\nYou build a reusable image that contains the application runtime, any OS and language-specific library dependencies, and the application code; then\nYou run a container based on that image.\n\nThe .dockerignore file is only considered during the first build phase.\nIn your setup, you don't actually COPY anything in the image beyond the requirements.txt file. Instead, you use volumes: to inject parts of the host system into the container. This happens during the second phase, and ignores .dockerignore.\nThe approach I'd recommend for this is to skip the volumes:, and instead COPY the required source code in the Dockerfile. You should also generally indicate the default CMD the container will run in the Dockerfile, rather than requiring it it the docker-compose.yml or docker run command.\nFROM python:3.9-slim-buster\n\n# Do the OS-level setup _first_ so that it's not repeated\n# if Python dependencies change\nRUN apt-get update && apt-get install -y ...\n\nWORKDIR /django-app\n\n# Then install Python dependencies\nCOPY requirements.txt .\nRUN pip install -r requirements.txt\n\n# Then copy in the rest of the application\n# NOTE: this _does_ honor .dockerignore\nCOPY . .\n\n# And explain how to run it\nENV PYTHONUNBUFFERED=1\nEXPOSE 8000\nUSER userapp\n# consider splitting this into an ENTRYPOINT that waits for the\n# the database, runs migrations, and then `exec \"$@\"` to run the CMD\nCMD sleep 7; python manage.py migrate; python manage.py runserver 0.0.0.0:8000\n\nThis means, in the docker-compose.yml setup, you don't need volumes:; the application code is already inside the image you built.\nversion: \"3.8\"\nservices:\n app: \n build: .\n ports: \n - 8000:8000\n depends_on: \n - db\n # environment: [PGHOST=db]\n # no volumes: or container_name:\n\n db:\n image: postgres\n volumes: # do keep for persistent database data\n - ./data:/var/lib/postgresql/data\n environment: \n - POSTGRES_DB=${DB_NAME}\n - POSTGRES_USER=${DB_USER}\n - POSTGRES_PASSWORD=${DB_PASSWORD}\n # ports: ['5433:5432']\n\nThis approach also means you need to docker-compose build a new image when your application changes. This is normal in Docker.\nFor day-to-day development, a useful approach here can be to run all of the non-application dependencies in Docker, but the application itself outside a container.\n# Start the database but not the application\ndocker-compose up -d db\n\n# Create a virtual environment and set it up\npython3 -m venv venv\n. venv/bin/activate\npip install -r requirements.txt\n\n# Set environment variables to point at the Docker database\nexport PGHOST=localhost PGPORT=5433\n\n# Run the application locally\n./manage.py runserver\n\nDoing this requires making the database visible from outside Docker (via ports:), and making the database location configurable (probably via environment variables, set in Compose with environment:).\n", "That's not actually your case, but in general an additional cause of \".dockerignore not ignoring\" is that it applies the filters to whole paths relative to the context dir, not just basenames, so the pattern:\n__pycache__\n*.pyc\n\napplies only to the docker context's root directory, not to any of subdirectories.\nIn order to make it recursive, change it to:\n**/__pycache__\n**/*.pyc\n\n" ]
[ 6, 0 ]
[]
[]
[ "django", "docker", "dockerignore", "python" ]
stackoverflow_0069297600_django_docker_dockerignore_python.txt
Q: Trouble with while true and if function in python The question is "Ask user to enter age, Check if age entered is > 0. if age less than 12 ticket price is 12 dollars otherwise the ticket price is 18 dollars" def main() : n = int(input("Insert your age : ")) #asking for users age while True : # checking if users age is more than 0 if n > 0 : break return n if price_checker(n) : print("Price is 12 dollars") else : print("Price is 18 dollars") def price_checker(x) : if x < 12 : return True else : return False main() When i print it the only code executed was "insert your age : " i've tried a few ways to fix it but it just lead to more confusion A: Just check if the number is less than or equal to 0 instead of a while loop def main() : n = int(input("Insert your age : ")) #asking for users age if n <= 0: print("Please enter an age greater than 0") elif price_checker(n) : print("Price is 12 dollars") else : print("Price is 18 dollars") def price_checker(x) : if x < 12 : return True else : return False main()
Trouble with while true and if function in python
The question is "Ask user to enter age, Check if age entered is > 0. if age less than 12 ticket price is 12 dollars otherwise the ticket price is 18 dollars" def main() : n = int(input("Insert your age : ")) #asking for users age while True : # checking if users age is more than 0 if n > 0 : break return n if price_checker(n) : print("Price is 12 dollars") else : print("Price is 18 dollars") def price_checker(x) : if x < 12 : return True else : return False main() When i print it the only code executed was "insert your age : " i've tried a few ways to fix it but it just lead to more confusion
[ "Just check if the number is less than or equal to 0 instead of a while loop\ndef main() :\n n = int(input(\"Insert your age : \")) #asking for users age\n if n <= 0:\n print(\"Please enter an age greater than 0\")\n elif price_checker(n) :\n print(\"Price is 12 dollars\")\n else :\n print(\"Price is 18 dollars\")\n \ndef price_checker(x) :\n if x < 12 :\n return True\n else :\n return False\n \nmain()\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074629827_python.txt
Q: using map() with dictionary I have a dictionary. prices = {'n': 99, 'a': 99, 'c': 147} using map () I need to receive new dictionary : def formula(value): value = value -value * 0.05 return value new_prices = dict(map(formula, prices.values())) but it doesn't work TypeError: cannot convert dictionary update sequence element #0 to a sequence solving my code using map(): new_prices = {'n': 94.05, 'a': 94.05, 'c': 139.65} A: you can do this using zip and map new_prices = dict(zip(prices, map(formula, prices.values()))) A: Use dictionary comprehension: new_prices = {k: formula(prices[k]) for k in prices} print(new_prices) # {'n': 94.05, 'a': 94.05, 'c': 139.65} A: Use map() with a helper lambda to create new dict new_prices = dict(map(lambda item: (item[0], formula(item[1])), prices.items())) output: {'n': 94.05, 'a': 94.05, 'c': 139.65} A: First option (using a dict comprehension): prices = {'n': 99, 'a': 99, 'c': 147} def formula(value): value = value -value * 0.05 return value #Using a dict comprehension and .items() [giving you key value tuples] instead of map() new_prices = {k: formula(v) for k, v in prices.items()} print(new_prices) # {'n': 94.05, 'a': 94.05, 'c': 139.65} Second option (using map and defining the formula slightly differently [tuple as input]): #Define the formula with a key value pair (tuple) as input and return value def f_items(items): value = items[1] - items[1] * 0.05 return items[0], value new_prices = dict(map(f_items, prices.items())) print(new_prices) # {'n': 94.05, 'a': 94.05, 'c': 139.65}
using map() with dictionary
I have a dictionary. prices = {'n': 99, 'a': 99, 'c': 147} using map () I need to receive new dictionary : def formula(value): value = value -value * 0.05 return value new_prices = dict(map(formula, prices.values())) but it doesn't work TypeError: cannot convert dictionary update sequence element #0 to a sequence solving my code using map(): new_prices = {'n': 94.05, 'a': 94.05, 'c': 139.65}
[ "you can do this using zip and map\nnew_prices = dict(zip(prices, map(formula, prices.values())))\n\n", "Use dictionary comprehension:\nnew_prices = {k: formula(prices[k]) for k in prices}\nprint(new_prices)\n# {'n': 94.05, 'a': 94.05, 'c': 139.65}\n\n", "Use map() with a helper lambda to create new dict\nnew_prices = dict(map(lambda item: (item[0], formula(item[1])), prices.items()))\n\noutput:\n{'n': 94.05, 'a': 94.05, 'c': 139.65}\n\n", "First option (using a dict comprehension):\n prices = {'n': 99, 'a': 99, 'c': 147}\n \n \n def formula(value):\n value = value -value * 0.05\n return value\n\n #Using a dict comprehension and .items() [giving you key value tuples] instead of map()\n new_prices = {k: formula(v) for k, v in prices.items()}\n \n print(new_prices)\n # {'n': 94.05, 'a': 94.05, 'c': 139.65}\n\nSecond option (using map and defining the formula slightly differently [tuple as input]):\n #Define the formula with a key value pair (tuple) as input and return value\n def f_items(items):\n value = items[1] - items[1] * 0.05\n return items[0], value\n \n \n new_prices = dict(map(f_items, prices.items()))\n \n print(new_prices)\n # {'n': 94.05, 'a': 94.05, 'c': 139.65}\n\n" ]
[ 6, 2, 1, 0 ]
[ "When you create a map, you get a map object. It's nice to have an helper function to iterate through this object.\ndef print_results(map_object):\n for i in map_object:\n print(i)\n\ndef formula(value):\n return value * 0.95 # one-liner\n\nm = map(formula, prices.values())\n\nprint_results(m)\n\n# output\n94.05\n94.05\n139.65\n\n" ]
[ -2 ]
[ "dictionary", "list", "python" ]
stackoverflow_0074629567_dictionary_list_python.txt
Q: How do I read the pictures in the file in order? I want to read the pictures in a file in the order they are in the file. But when I read it with python it reads mixed. I don't want it sorted. How can I fix this? def read_img(path): st = os.path.join(path, "*.JPG") st_ = os.path.join(path, "*.jpg") for filename in glob.glob(st): print(st) #print("filename-------",filename) img_array_input.append(filename) print("image array append : ", filename) for filename in glob.glob(st_): img_array_input.append(filename) #print("filename-------",filename) global size size = len(img_array_input) for i in img_array_input: print("detection ") detection(i) print("detection out") enter image description here original file enter image description here the order of reading I want it to read in the order in the original file. A: If you want the list populated in the order that os.listdir() reveals files then: from os import listdir from os.path import join, splitext BASE = '.' # directory to be parsed EXTS = {'jpg', 'JPG', 'jpeg', 'JPEG'} # file extensions of interest def ext(p): _, ext = splitext(p) if ext: return ext[1:] def getfiles(base, include_base=True): for entry in listdir(base): if ext(entry) in EXTS: yield join(base, entry) if include_base else entry detection = [file for file in getfiles(BASE, include_base=False)] print(detection) Note: os.listdir() returns a list of files in arbitrary order
How do I read the pictures in the file in order?
I want to read the pictures in a file in the order they are in the file. But when I read it with python it reads mixed. I don't want it sorted. How can I fix this? def read_img(path): st = os.path.join(path, "*.JPG") st_ = os.path.join(path, "*.jpg") for filename in glob.glob(st): print(st) #print("filename-------",filename) img_array_input.append(filename) print("image array append : ", filename) for filename in glob.glob(st_): img_array_input.append(filename) #print("filename-------",filename) global size size = len(img_array_input) for i in img_array_input: print("detection ") detection(i) print("detection out") enter image description here original file enter image description here the order of reading I want it to read in the order in the original file.
[ "If you want the list populated in the order that os.listdir() reveals files then:\nfrom os import listdir\nfrom os.path import join, splitext\n\nBASE = '.' # directory to be parsed\nEXTS = {'jpg', 'JPG', 'jpeg', 'JPEG'} # file extensions of interest\n\ndef ext(p):\n _, ext = splitext(p)\n if ext:\n return ext[1:]\n\ndef getfiles(base, include_base=True):\n for entry in listdir(base):\n if ext(entry) in EXTS:\n yield join(base, entry) if include_base else entry\n\ndetection = [file for file in getfiles(BASE, include_base=False)]\n\nprint(detection)\n\nNote:\nos.listdir() returns a list of files in arbitrary order\n" ]
[ 0 ]
[]
[]
[ "image", "python", "readfile" ]
stackoverflow_0074629247_image_python_readfile.txt
Q: what am I missing in this def (type Error : int object is not callable), beginner question Wrote this function but when I want to call it it doesnt work, gives error int object is not callable. def pole_trojkata(xa, ya, xb, yb, xc, yc ): p = 1/2*abs((xb - xa)(yc - ya) - (yb - ya)(xc - xa)) return p pole_trojkata(2, 3, 1, 3, 2, 5) A: you forget * for multiplication def pole_trojkata(xa, ya, xb, yb, xc, yc): return abs((xb-xa)*(yc-ya)-(xc-xa)*(yb-ya))/2 pole_trojkata(2, 3, 1, 3, 2, 5) output: 1.0
what am I missing in this def (type Error : int object is not callable), beginner question
Wrote this function but when I want to call it it doesnt work, gives error int object is not callable. def pole_trojkata(xa, ya, xb, yb, xc, yc ): p = 1/2*abs((xb - xa)(yc - ya) - (yb - ya)(xc - xa)) return p pole_trojkata(2, 3, 1, 3, 2, 5)
[ "you forget * for multiplication\ndef pole_trojkata(xa, ya, xb, yb, xc, yc):\n return abs((xb-xa)*(yc-ya)-(xc-xa)*(yb-ya))/2\n\npole_trojkata(2, 3, 1, 3, 2, 5)\n\noutput:\n1.0\n\n" ]
[ 0 ]
[]
[]
[ "area", "function", "parameters", "python" ]
stackoverflow_0074629876_area_function_parameters_python.txt
Q: How can I do a python API request with the body? if I do a POST request on Postman with my local API server it works: But if I try in python with this syntax it doesn't work: requests.post('http://127.0.0.1:5001/api/v0/add', data={'path': 'test'}).text it returns: "file argument 'path' is required\n" Can you please explain me why it doesn't work? A: If I pass the files parameter instead of data or json, it works! requests.post(url = api_url, files={'path':'test'}).text A: The issue is that using data on requests.post defaults to application/x-www-form-urlencoded while your application wants multipart/form-data. Try using files instead of data: requests.post('http://127.0.0.1:5001/api/v0/add', files={'path': 'test'}).text
How can I do a python API request with the body?
if I do a POST request on Postman with my local API server it works: But if I try in python with this syntax it doesn't work: requests.post('http://127.0.0.1:5001/api/v0/add', data={'path': 'test'}).text it returns: "file argument 'path' is required\n" Can you please explain me why it doesn't work?
[ "If I pass the files parameter instead of data or json, it works!\nrequests.post(url = api_url, files={'path':'test'}).text\n\n", "The issue is that using data on requests.post defaults to application/x-www-form-urlencoded while your application wants multipart/form-data. Try using files instead of data:\nrequests.post('http://127.0.0.1:5001/api/v0/add', files={'path': 'test'}).text\n\n" ]
[ 0, 0 ]
[]
[]
[ "api", "postman", "python", "python_requests" ]
stackoverflow_0074629233_api_postman_python_python_requests.txt
Q: How can I rotate the bounding boxes from findcontours function in Python OpenCV? I have the following image: I am using OpenCV to find the contours in this image in order to separate the "122" into "1","2", and "2". I am using OCR to classify the numbers after. The code I am using to do this is as follows: invert = cv2.bitwise_not(image) gray = cv2.cvtColor(invert, cv2.COLOR_BGR2GRAY) blurred = cv2.GaussianBlur(gray, (5, 5), 0) # perform edge detection, find contours in the edge map, and sort the # resulting contours from left-to-right edged = cv2.Canny(blurred, 30, 150) cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = imutils.grab_contours(cnts) cnts = sort_contours(cnts, method="left-to-right")[0] # initialize the list of contour bounding boxes and associated # characters that we'll be OCR'ing chars = [] preds = [] for c in cnts: # compute the bounding box of the contour (x, y, w, h) = cv2.boundingRect(c) # filter out bounding boxes, ensuring they are neither too small # nor too large if (w >= 5 and w <= 150) and (h >= 15 and h <= 120): # extract the character and threshold it to make the character # appear as *white* (foreground) on a *black* background, then # grab the width and height of the thresholded image roi = gray[y:y + h, x:x + w] thresh = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1] (tH, tW) = thresh.shape # if the width is greater than the height, resize along the # width dimension if tW > tH: thresh = imutils.resize(thresh, width=32) # otherwise, resize along the height else: thresh = imutils.resize(thresh, height=32) # re-grab the image dimensions (now that its been resized) # and then determine how much we need to pad the width and # height such that our image will be 32x32 (tH, tW) = thresh.shape dX = int(max(0, 32 - tW) / 2.0) dY = int(max(0, 32 - tH) / 2.0) # pad the image and force 32x32 dimensions padded = cv2.copyMakeBorder(thresh, top=dY, bottom=dY, left=dX, right=dX, borderType=cv2.BORDER_CONSTANT, value=(0, 0, 0)) padded = cv2.resize(padded, (28, 28)) # prepare the padded image for classification via our # handwriting OCR model padded = padded.astype("float32") / 255.0 padded = np.expand_dims(padded, axis=-1) # update our list of characters that will be OCR'd chars.append((padded, (x, y, w, h))) x,y,w,h = cv2.boundingRect(c) roi=image[y:y+h,x:x+w] plt.imshow(roi) This code works great for numbers that are not written at an angle and are spaced generously apart, however in this image we see that the "1" is angled slightly. The resulting bounding box around the one also includes a portion of the adjacent "2". Does anyone have a suggestion on how I can slightly rotate the bounding box to exclude the portion of the two? A: It's hard to give specific recommendations without understanding how the bounding box will be used downstream. Easiest method would be to use the boxPoints function. That will return the coordinates of the corners for the minimum bounding box around the contour. Alternatively, you could fit a line to the contour and use the angle of the line to rotate your your bounding box.
How can I rotate the bounding boxes from findcontours function in Python OpenCV?
I have the following image: I am using OpenCV to find the contours in this image in order to separate the "122" into "1","2", and "2". I am using OCR to classify the numbers after. The code I am using to do this is as follows: invert = cv2.bitwise_not(image) gray = cv2.cvtColor(invert, cv2.COLOR_BGR2GRAY) blurred = cv2.GaussianBlur(gray, (5, 5), 0) # perform edge detection, find contours in the edge map, and sort the # resulting contours from left-to-right edged = cv2.Canny(blurred, 30, 150) cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = imutils.grab_contours(cnts) cnts = sort_contours(cnts, method="left-to-right")[0] # initialize the list of contour bounding boxes and associated # characters that we'll be OCR'ing chars = [] preds = [] for c in cnts: # compute the bounding box of the contour (x, y, w, h) = cv2.boundingRect(c) # filter out bounding boxes, ensuring they are neither too small # nor too large if (w >= 5 and w <= 150) and (h >= 15 and h <= 120): # extract the character and threshold it to make the character # appear as *white* (foreground) on a *black* background, then # grab the width and height of the thresholded image roi = gray[y:y + h, x:x + w] thresh = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1] (tH, tW) = thresh.shape # if the width is greater than the height, resize along the # width dimension if tW > tH: thresh = imutils.resize(thresh, width=32) # otherwise, resize along the height else: thresh = imutils.resize(thresh, height=32) # re-grab the image dimensions (now that its been resized) # and then determine how much we need to pad the width and # height such that our image will be 32x32 (tH, tW) = thresh.shape dX = int(max(0, 32 - tW) / 2.0) dY = int(max(0, 32 - tH) / 2.0) # pad the image and force 32x32 dimensions padded = cv2.copyMakeBorder(thresh, top=dY, bottom=dY, left=dX, right=dX, borderType=cv2.BORDER_CONSTANT, value=(0, 0, 0)) padded = cv2.resize(padded, (28, 28)) # prepare the padded image for classification via our # handwriting OCR model padded = padded.astype("float32") / 255.0 padded = np.expand_dims(padded, axis=-1) # update our list of characters that will be OCR'd chars.append((padded, (x, y, w, h))) x,y,w,h = cv2.boundingRect(c) roi=image[y:y+h,x:x+w] plt.imshow(roi) This code works great for numbers that are not written at an angle and are spaced generously apart, however in this image we see that the "1" is angled slightly. The resulting bounding box around the one also includes a portion of the adjacent "2". Does anyone have a suggestion on how I can slightly rotate the bounding box to exclude the portion of the two?
[ "It's hard to give specific recommendations without understanding how the bounding box will be used downstream.\nEasiest method would be to use the boxPoints function. That will return the coordinates of the corners for the minimum bounding box around the contour. Alternatively, you could fit a line to the contour and use the angle of the line to rotate your your bounding box.\n" ]
[ 0 ]
[]
[]
[ "mnist", "opencv", "python" ]
stackoverflow_0074629780_mnist_opencv_python.txt
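Following up on the boxPoints suggestion in the record above, a minimal sketch of drawing a rotated (minimum-area) bounding box per contour, assuming cnts and image are the variables from the question's code:

import cv2

for c in cnts:
    rect = cv2.minAreaRect(c)     # ((cx, cy), (w, h), angle) of the rotated box
    box = cv2.boxPoints(rect)     # the 4 corner points of that box
    box = box.astype(int)         # drawContours expects integer coordinates
    cv2.drawContours(image, [box], 0, (0, 255, 0), 2)

The angle in rect (or cv2.fitLine on the contour, as the answer also mentions) could then be used to deskew each character crop before passing it to the OCR model.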
Q: creating a config file with configparser with a custom file path i've been trying to create a way to generate config files for a help tool that i've been making. i would like to have the code create a config file in a specific default location that is dependant on the current user on which the code is ran. this is my basic setup for the code i've been trying to find a way to have username be the variable system_user however when trying this i get a unicode error import configparser import os system_user = os.getlogin() file_path_input = input('filepath input ') strength = input('strenght score ') dexterity = input('dexterity score ') constitution = input('constitution score ') intelligence = input('intelligence score ') wisdom = input('wisdom score ') charisma = input('charisma score ') testconfig = configparser.ConfigParser() testconfig.add_section('stats') testconfig.set('stats', 'strength', strength) testconfig.set('stats', 'dexterity', dexterity) testconfig.set('stats', 'constitution', constitution) testconfig.set('stats', 'intelligence', intelligence) testconfig.set('stats', 'wisdom', wisdom) testconfig.set('stats', 'charisma', charisma) with open(C:\Users\username\Documents\5e_helper\character cofig, 'w') as configfile: testconfig.write(configfile) i've been trying to find a way to have username be the variable system_user however when trying with open(r'C:\Users\' + system_user + '\Documents\5e_helper\character cofig', 'w') as configfile: testconfig.write(configfile) i get a syntax error SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 1-2: malformed \N character escape A: You need to use with open(r'C:\Users\'' + system_user + '\Documents\5e_helper\character cofig', 'w') as configfile: testconfig.write(configfile) Error is happening because you are using an escape sequence of \' in 'C:\Users\'. You can also avoid it using double quotes around path string. BTW good way to do it is using forward slash (/) instead of back. A: Your first string is raw but the second one isn't, which means you need to escape your backslashes since they count as escape characters in normal strings. Or just make the second string raw as well. with open(r'C:\Users\' + system_user + r'\Documents\5e_helper\character cofig', 'w') as configfile: That said, I would just use string format instead of concatenation in order to handle this as single string rather than multiple. with open(r'C:\Users\{}\Documents\5e_helper\character cofig'.format(system_user), 'w') as configfile:
creating a config file with configparser with a custom file path
i've been trying to create a way to generate config files for a help tool that i've been making. i would like to have the code create a config file in a specific default location that is dependant on the current user on which the code is ran. this is my basic setup for the code i've been trying to find a way to have username be the variable system_user however when trying this i get a unicode error import configparser import os system_user = os.getlogin() file_path_input = input('filepath input ') strength = input('strenght score ') dexterity = input('dexterity score ') constitution = input('constitution score ') intelligence = input('intelligence score ') wisdom = input('wisdom score ') charisma = input('charisma score ') testconfig = configparser.ConfigParser() testconfig.add_section('stats') testconfig.set('stats', 'strength', strength) testconfig.set('stats', 'dexterity', dexterity) testconfig.set('stats', 'constitution', constitution) testconfig.set('stats', 'intelligence', intelligence) testconfig.set('stats', 'wisdom', wisdom) testconfig.set('stats', 'charisma', charisma) with open(C:\Users\username\Documents\5e_helper\character cofig, 'w') as configfile: testconfig.write(configfile) i've been trying to find a way to have username be the variable system_user however when trying with open(r'C:\Users\' + system_user + '\Documents\5e_helper\character cofig', 'w') as configfile: testconfig.write(configfile) i get a syntax error SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 1-2: malformed \N character escape
[ "You need to use\nwith open(r'C:\\Users\\'' + system_user + '\\Documents\\5e_helper\\character cofig', 'w') as configfile:\n testconfig.write(configfile)\n\nError is happening because you are using an escape sequence of \\' in 'C:\\Users\\'. You can also avoid it using double quotes around path string.\nBTW good way to do it is using forward slash (/) instead of back.\n", "Your first string is raw but the second one isn't, which means you need to escape your backslashes since they count as escape characters in normal strings. Or just make the second string raw as well.\nwith open(r'C:\\Users\\' + system_user + r'\\Documents\\5e_helper\\character cofig', 'w') as configfile:\n\nThat said, I would just use string format instead of concatenation in order to handle this as single string rather than multiple.\nwith open(r'C:\\Users\\{}\\Documents\\5e_helper\\character cofig'.format(system_user), 'w') as configfile:\n\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074629624_python.txt
Q: error when exiting tkinter window update and update_idletasks I have a problem with functions update() and update_idletasks() in tkinter they work fine except that when closing the window, either by cliking the "Exit" button or the "x" to close the window in Windows, the following error lines show up: Traceback (most recent call last): File "D:\Python\VisualStudio\test4\test4\test4.py", line 14, in label.configure(text = str(i)) # i is actually updated by an asynchronous function, like a wifi stream File "C:\Users\Owner\AppData\Local\Programs\Python\Python310\lib\tkinter_init_.py", line 1675, in configure return self.configure('configure', cnf, kw) File "C:\Users\Owner\AppData\Local\Programs\Python\Python310\lib\tkinter_init.py", line 1665, in _configure self.tk.call(_flatten((self._w, cmd)) + elf._options(cnf)) _tkinter.TclError: invalid command name ".!label" Press any key to continue . . . Ultimately I want Tkinter to show the incoming characters from a wi-fi, which is why I cannot use mainloop. A: I want to display the characters coming asynchronously from a wifi on a Tkinter window After several problems I was able to come to the following solution. Many thanks to JRiggles and Bryan Oakles This is my source code: import tkinter as tk def my_async(): # this simulates my asynchronous function, i will come from wifi global i i = i+1 i = 0 window_is_alive = True root = tk.Tk() label = tk.Label(root,text="Name") label.pack() def destroy_window(): global window_is_alive window_is_alive = False #Reassign the Close Window Control Icon root.protocol("WM_DELETE_WINDOW", destroy_window) exit_button = tk.Button(root, text="Exit", command=destroy_window) exit_button.pack() while True: label.configure(text = str(i)) # i is actually updated by an asynchronous function, like a wifi stream my_async() # these two lines are just to simulate that root.update_idletasks() root.update() if window_is_alive == False: root.destroy() break
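A sketch of an alternative pattern, offered as an assumption rather than the poster's final code: polling the asynchronous value with after() lets mainloop() run normally, so closing the window needs no manual flag and no explicit update()/update_idletasks() calls.

import tkinter as tk

i = 0
def my_async():            # stands in for the incoming wifi data
    global i
    i += 1

root = tk.Tk()
label = tk.Label(root, text="Name")
label.pack()
tk.Button(root, text="Exit", command=root.destroy).pack()

def poll():
    my_async()
    label.configure(text=str(i))
    root.after(100, poll)  # re-schedule every 100 ms; mainloop exits when the window is destroyed

poll()
root.mainloop()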
error when exiting tkinter window update and update_idletasks
I have a problem with functions update() and update_idletasks() in tkinter they work fine except that when closing the window, either by cliking the "Exit" button or the "x" to close the window in Windows, the following error lines show up: Traceback (most recent call last): File "D:\Python\VisualStudio\test4\test4\test4.py", line 14, in label.configure(text = str(i)) # i is actually updated by an asynchronous function, like a wifi stream File "C:\Users\Owner\AppData\Local\Programs\Python\Python310\lib\tkinter_init_.py", line 1675, in configure return self.configure('configure', cnf, kw) File "C:\Users\Owner\AppData\Local\Programs\Python\Python310\lib\tkinter_init.py", line 1665, in _configure self.tk.call(_flatten((self._w, cmd)) + elf._options(cnf)) _tkinter.TclError: invalid command name ".!label" Press any key to continue . . . Ultimately I want Tkinter to show the incoming characters from a wi-fi, which is why I cannot use mainloop.
[ "I want to display the characters coming asynchronously from a wifi on a Tkinter window\nAfter several problems I was able to come to the following solution.\nMany thanks to JRiggles and Bryan Oakles\nThis is my source code:\nimport tkinter as tk\n\ndef my_async(): # this simulates my asynchronous function, i will come from wifi\n global i\n i = i+1\n\ni = 0\nwindow_is_alive = True\n\nroot = tk.Tk()\nlabel = tk.Label(root,text=\"Name\")\nlabel.pack()\n\ndef destroy_window():\n global window_is_alive\n window_is_alive = False\n\n#Reassign the Close Window Control Icon\nroot.protocol(\"WM_DELETE_WINDOW\", destroy_window)\n\nexit_button = tk.Button(root, text=\"Exit\", command=destroy_window)\nexit_button.pack()\n\nwhile True:\n label.configure(text = str(i)) # i is actually updated by an asynchronous function, like a wifi stream\n my_async() # these two lines are just to simulate that\n root.update_idletasks()\n root.update()\n if window_is_alive == False:\n root.destroy()\n break\n \n\n" ]
[ 0 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074618786_python_tkinter.txt
Q: ERROR: admin.E108 and admin.E116 Django Framework Python <class 'blog.admin.CommentAdmin'>: (admin.E108) The value of 'list_display[4]' refers to 'active', which is not a callable, an attribute of 'CommentAdmin', or an attribute or method on 'blog.Comment'. <class 'blog.admin.CommentAdmin'>: (admin.E116) The value of 'list_filter[0]' refers to 'active', which does not refer to a Field. I am receiving these two errors. This is my models.py code: from django.contrib.auth.models import User # Create your models here. STATUS = ( (0,"Draft"), (1,"Publish") ) class Post(models.Model): title = models.CharField(max_length=200, unique=True) slug = models.SlugField(max_length=200, unique=True) author = models.ForeignKey(User, on_delete= models.CASCADE,related_name='blog_posts') updatedOn = models.DateTimeField(auto_now= True) content = models.TextField() createdOn = models.DateTimeField(auto_now_add=True) status = models.IntegerField(choices=STATUS, default=0) class Meta: ordering = ['-createdOn'] def __str__(self): return self.title class Comment(models.Model): post = models.ForeignKey( Post, on_delete=models.CASCADE, related_name='comments') name = models.CharField(max_length=80) email = models.EmailField() body = models.TextField() createdOn = models.DateTimeField(auto_now_add=True) status = models.BooleanField(default=False) class Meta: ordering = ['createdOn'] def __str__(self): return 'Comment {} by {}'.format(self.body, self.name) This is my admin.py code: from django.contrib import admin from .models import Post, Comment # Register your models here. class PostAdmin(admin.ModelAdmin): list_display = ('title', 'slug', 'status','createdOn') list_filter = ("status", 'createdOn') search_fields = ['title', 'content'] prepopulated_fields = {'slug': ('title',)} @admin.register(Comment) class CommentAdmin(admin.ModelAdmin): list_display = ('name', 'body', 'post', 'createdOn', 'active') list_filter = ('active', 'createdOn') search_fields = ('name', 'email', 'body') actions = ['approveComments'] def approveComments(self, request, queryset): queryset.update(active=True) admin.site.register(Post, PostAdmin) This is my forms.py code: from .models import Comment from django import forms class CommentForm(forms.ModelForm): class Meta: model = Comment fields = ('name', 'email', 'body') Any help is greatly appreciated. A: status = models.IntegerField(choices=STATUS, default=0) should be: active = models.IntegerField(choices=STATUS, default=0) A: The message is clear 'active' is not a field class Comment(models.Model): post = models.ForeignKey( Post, on_delete=models.CASCADE, related_name='comments') name = models.CharField(max_length=80) email = models.EmailField() body = models.TextField() createdOn = models.DateTimeField(auto_now_add=True) status = models.BooleanField(default=False) your fields are: post, name, email, createdOn, status Therefore create a field named active or suppress active in list_display & list_filter
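A short sketch of the second answer's suggestion applied to the code from the question (illustrative only): keep the model's status field and reference it from the admin, which clears both E108 and E116 without a migration.

from django.contrib import admin
from .models import Comment

@admin.register(Comment)
class CommentAdmin(admin.ModelAdmin):
    list_display = ('name', 'body', 'post', 'createdOn', 'status')
    list_filter = ('status', 'createdOn')
    search_fields = ('name', 'email', 'body')
    actions = ['approve_comments']

    def approve_comments(self, request, queryset):
        queryset.update(status=True)   # status is the BooleanField on Comment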
ERROR: admin.E108 and admin.E116 Django Framework Python
<class 'blog.admin.CommentAdmin'>: (admin.E108) The value of 'list_display[4]' refers to 'active', which is not a callable, an attribute of 'CommentAdmin', or an attribute or method on 'blog.Comment'. <class 'blog.admin.CommentAdmin'>: (admin.E116) The value of 'list_filter[0]' refers to 'active', which does not refer to a Field. I am receiving these two errors. This is my models.py code: from django.contrib.auth.models import User # Create your models here. STATUS = ( (0,"Draft"), (1,"Publish") ) class Post(models.Model): title = models.CharField(max_length=200, unique=True) slug = models.SlugField(max_length=200, unique=True) author = models.ForeignKey(User, on_delete= models.CASCADE,related_name='blog_posts') updatedOn = models.DateTimeField(auto_now= True) content = models.TextField() createdOn = models.DateTimeField(auto_now_add=True) status = models.IntegerField(choices=STATUS, default=0) class Meta: ordering = ['-createdOn'] def __str__(self): return self.title class Comment(models.Model): post = models.ForeignKey( Post, on_delete=models.CASCADE, related_name='comments') name = models.CharField(max_length=80) email = models.EmailField() body = models.TextField() createdOn = models.DateTimeField(auto_now_add=True) status = models.BooleanField(default=False) class Meta: ordering = ['createdOn'] def __str__(self): return 'Comment {} by {}'.format(self.body, self.name) This is my admin.py code: from django.contrib import admin from .models import Post, Comment # Register your models here. class PostAdmin(admin.ModelAdmin): list_display = ('title', 'slug', 'status','createdOn') list_filter = ("status", 'createdOn') search_fields = ['title', 'content'] prepopulated_fields = {'slug': ('title',)} @admin.register(Comment) class CommentAdmin(admin.ModelAdmin): list_display = ('name', 'body', 'post', 'createdOn', 'active') list_filter = ('active', 'createdOn') search_fields = ('name', 'email', 'body') actions = ['approveComments'] def approveComments(self, request, queryset): queryset.update(active=True) admin.site.register(Post, PostAdmin) This is my forms.py code: from .models import Comment from django import forms class CommentForm(forms.ModelForm): class Meta: model = Comment fields = ('name', 'email', 'body') Any help is greatly appreciated.
[ "status = models.IntegerField(choices=STATUS, default=0)\n\nshould be:\nactive = models.IntegerField(choices=STATUS, default=0)\n\n", "The message is clear 'active' is not a field\nclass Comment(models.Model):\n post = models.ForeignKey(\n Post, on_delete=models.CASCADE, related_name='comments')\n name = models.CharField(max_length=80)\n email = models.EmailField()\n body = models.TextField()\n createdOn = models.DateTimeField(auto_now_add=True)\n status = models.BooleanField(default=False)\n\nyour fields are: post, name, email, createdOn, status\nTherefore create a field named active or suppress active in list_display &\nlist_filter\n" ]
[ 0, 0 ]
[]
[]
[ "blogs", "django", "html", "python" ]
stackoverflow_0074629902_blogs_django_html_python.txt
Q: why does my for loop keep on removing the append element so I am trying to create a list that have different list as it element the for loop bellow will extend an element to the bag then append it to the list bag and finally remove the extended element to repeat the cyclec the bag contains these elements ['B', 'D'] and the rf contains these elements ['C', 'A', 'G', 'E'] list_bag = [] for i in range(len(rf)) : bag.extend(rf[i]) a= bag list_bag.append(a) bag.pop() print(list_bag) the out put I am trying to archive is this : [['B', 'D','A'], ['B', 'D','E'], ['B', 'D','F'], ['B', 'D','C']] but the code keep on giving me this [['B', 'D'], ['B', 'D'], ['B', 'D'], ['B', 'D']] any suggestion ? A: You can accomplish what you want with a list comprehension. list_bag = [bag + [item] for item in rf]
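A sketch that keeps the original loop shape (an illustration, not the answer's code): the bug is that a = bag binds a second name to the same list object, so every entry in list_bag is the one list that later gets pop()ed. Appending a copy snapshots the current contents instead.

bag = ['B', 'D']
rf = ['C', 'A', 'G', 'E']

list_bag = []
for item in rf:
    bag.append(item)
    list_bag.append(bag.copy())   # store a snapshot, not a reference to bag itself
    bag.pop()                     # restore bag to ['B', 'D'] for the next round

print(list_bag)   # [['B', 'D', 'C'], ['B', 'D', 'A'], ['B', 'D', 'G'], ['B', 'D', 'E']]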
why does my for loop keep on removing the append element
so I am trying to create a list that have different list as it element the for loop bellow will extend an element to the bag then append it to the list bag and finally remove the extended element to repeat the cyclec the bag contains these elements ['B', 'D'] and the rf contains these elements ['C', 'A', 'G', 'E'] list_bag = [] for i in range(len(rf)) : bag.extend(rf[i]) a= bag list_bag.append(a) bag.pop() print(list_bag) the out put I am trying to archive is this : [['B', 'D','A'], ['B', 'D','E'], ['B', 'D','F'], ['B', 'D','C']] but the code keep on giving me this [['B', 'D'], ['B', 'D'], ['B', 'D'], ['B', 'D']] any suggestion ?
[ "You can accomplish what you want with a list comprehension.\nlist_bag = [bag + [item] for item in rf]\n\n" ]
[ 0 ]
[]
[]
[ "append", "extend", "list", "python" ]
stackoverflow_0074629918_append_extend_list_python.txt
Q: Python not running from terminal I already have 2 versions of python installed on my windows, and the interpreters work well, but when I try to run python from cmd or PowerShell, I'm asked to get python again from windows store, how do I fix this I opened cmd and PowerShell and typed python expecting it to open the python interpreter and I made a .py text file and typed python the name of the file in cmd expecting it to run the program A: You need to add python path to the path on the environment variables. What you should do: Right click "my computer" Go to "properties" click on "advanced system settings" Go to "environment variables" On system Variables look for "path" (if there isn't one create one) Click on "path" and click on "Edit" Now you should click on "new" and paste your python folder path Click "OK" and everything should be working just fine now I used to have this problem before and after following this steps it started working.
Python not running from terminal
I already have 2 versions of python installed on my windows, and the interpreters work well, but when I try to run python from cmd or PowerShell, I'm asked to get python again from windows store, how do I fix this I opened cmd and PowerShell and typed python expecting it to open the python interpreter and I made a .py text file and typed python the name of the file in cmd expecting it to run the program
[ "You need to add python path to the path on the environment variables.\nWhat you should do:\n\nRight click \"my computer\"\nGo to \"properties\"\nclick on \"advanced system settings\"\nGo to \"environment variables\"\nOn system Variables look for \"path\" (if there isn't one create one)\nClick on \"path\" and click on \"Edit\"\nNow you should click on \"new\" and paste your python folder path\nClick \"OK\" and everything should be working just fine now\n\nI used to have this problem before and after following this steps it started working.\n" ]
[ 0 ]
[]
[]
[ "cmd", "powershell", "python", "windows_store" ]
stackoverflow_0074629889_cmd_powershell_python_windows_store.txt
Q: tuple unpacking in a list cannot be performed Basically the question is to see if a number is a t-prime number or not (t-prime number has 3 distinct positive divisors), I have written the code it gives me a list like below: [(4, 1), (4, 2), (4, 4), (5, 1), (5, 5), (6, 1), (6, 2), (6, 3), (6, 6)] I need a func to return the number of j in each i value (i,j) in the list above, like 4 comes with three divisors, 5 comes with 2 etc.. https://codeforces.com/problemset/problem/230/B 'CODE' # 230B n = int(input()) a = list(map(int, input().split())) lst = [] for j in range(len(a)): i = 1 while i <= a[j]: if a[j]%i == 0: lst.append((a[j],i)) i += 1 print(lst) please refer to previous page A: It looks like your goal is to count the number of tuples with a given first element. Try this: counter = {} values = [(4, 1), (4, 2), (4, 4), (5, 1), (5, 5), (6, 1), (6, 2), (6, 3), (6, 6)] for value, divisor in values: current = counter.get(value, 0) + 1 counter[value] = current Then, to get the count of a given value, use counter[n]. For instance, counter[4] would be 3. If your divisors are not guaranteed to be unique, then use sets for your dictionary values instead: counter = {} values = [(4, 1), (4, 2), (4, 4), (5, 1), (5, 5), (6, 1), (6, 2), (6, 3), (6, 6)] for value, divisor in values: if value not in counter: counter[value] = set() counter[value].add(divisor) Then, you can get the number of divisors with len(counter[n]). So, len(counter[4]) would be 3.
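A sketch of one way to finish the task (the goal is an assumption based on the T-prime description): tally the divisors per number with collections.Counter and flag the numbers that have exactly three.

from collections import Counter

pairs = [(4, 1), (4, 2), (4, 4), (5, 1), (5, 5),
         (6, 1), (6, 2), (6, 3), (6, 6)]

divisor_count = Counter(value for value, _divisor in pairs)
print(divisor_count)    # Counter({6: 4, 4: 3, 5: 2})

for value, count in divisor_count.items():
    print(value, "is T-prime" if count == 3 else "is not T-prime")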
tuple unpacking in a list cannot be performed
Basically the question is to see if a number is a t-prime number or not (t-prime number has 3 distinct positive divisors), I have written the code it gives me a list like below: [(4, 1), (4, 2), (4, 4), (5, 1), (5, 5), (6, 1), (6, 2), (6, 3), (6, 6)] I need a func to return the number of j in each i value (i,j) in the list above, like 4 comes with three divisors, 5 comes with 2 etc.. https://codeforces.com/problemset/problem/230/B 'CODE' # 230B n = int(input()) a = list(map(int, input().split())) lst = [] for j in range(len(a)): i = 1 while i <= a[j]: if a[j]%i == 0: lst.append((a[j],i)) i += 1 print(lst) please refer to previous page
[ "It looks like your goal is to count the number of tuples with a given first element. Try this:\ncounter = {}\nvalues = [(4, 1), (4, 2), (4, 4), (5, 1), (5, 5), (6, 1), (6, 2), (6, 3), (6, 6)]\n\nfor value, divisor in values:\n current = counter.get(value, 0) + 1\n counter[value] = current\n\nThen, to get the count of a given value, use counter[n]. For instance, counter[4] would be 3.\nIf your divisors are not guaranteed to be unique, then use sets for your dictionary values instead:\ncounter = {}\nvalues = [(4, 1), (4, 2), (4, 4), (5, 1), (5, 5), (6, 1), (6, 2), (6, 3), (6, 6)]\n\nfor value, divisor in values:\n if value not in counter:\n counter[value] = set()\n counter[value].add(divisor)\n\nThen, you can get the number of divisors with len(counter[n]). So, len(counter[4]) would be 3.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074629967_python.txt
Q: How to compile Tkinter as an executable for MacOS? I'm trying to compile a Tkinter app as an executable for MacOs. I tried to use py2app and pyinstaller. I almost succeed using py2app, but it returns the following error: Traceback The Info.plist file must have a PyRuntimeLocations array containing string values for preferred Python runtime locations. These strings should be "otool -L" style mach ids; "@executable_stub" and "~" prefixes will be translated accordingly. This is how the setup.py looks like: from setuptools import setup APP = ['main.py'] DATA_FILES = ['config.json'] OPTIONS = { 'argv_emulation': True } setup( app=APP, data_files=DATA_FILES, options={'py2app': OPTIONS}, setup_requires=['py2app'], ) And this is the directory structure: -modules/---__init.py__ | | | -- gui_module.py | | | -- scraper_module.py | | | -- app.ico | -config.json | -countries_list.txt | -main.py | -requirements.txt | -setup.py I'm happy to share more details and the files if you need them. A: The problem was that you need to give an executable path for the python framework you have on your MacOs. So I modify the setup.py setup.py from setuptools import setup class CONFIG: VERSION = 'v1.0.1' platform = 'darwin-x86_64' executable_stub = '/opt/homebrew/Frameworks/Python.framework/Versions/3.10/lib/libpython3.10.dylib' # this is important, check where is your Python framework and get the `dylib` APP_NAME = f'your_app_{VERSION}_{platform}' APP = ['main.py'] DATA_FILES = [ 'config.json', 'countries_list.txt', ('modules', ['modules/app.ico']), # this modules are automatically added if you use __init__.py in your folder # ('modules', ['modules/scraper_module.py']), # ('modules', ['modules/gui_module.py']), ] OPTIONS = { 'argv_emulation': False, 'iconfile': 'modules/app.ico', 'plist': { 'CFBundleName': APP_NAME, 'CFBundleDisplayName': APP_NAME, 'CFBundleGetInfoString': APP_NAME, 'CFBundleVersion': VERSION, 'CFBundleShortVersionString': VERSION, 'PyRuntimeLocations': [ executable_stub, # also the executable can look like this: #'@executable_path/../Frameworks/libpython3.4m.dylib', ] } } def main(): setup( name=CONFIG.APP_NAME, app=CONFIG.APP, data_files=CONFIG.DATA_FILES, options={'py2app': CONFIG.OPTIONS}, setup_requires=['py2app'], maintainer='foo bar', author_email='[email protected]', ) if __name__ == '__main__': main() Then you need to run python3 setup.py py2app and now you can go and just double click on your_app_{VERSION}_{platform}.app. Recomendations by the py2app docs: Make sure not to use the -A flag Do not use --argv-emulation when the program uses a GUI toolkit (as Tkinter) py2app options docs
How to compile Tkinter as an executable for MacOS?
I'm trying to compile a Tkinter app as an executable for MacOs. I tried to use py2app and pyinstaller. I almost succeed using py2app, but it returns the following error: Traceback The Info.plist file must have a PyRuntimeLocations array containing string values for preferred Python runtime locations. These strings should be "otool -L" style mach ids; "@executable_stub" and "~" prefixes will be translated accordingly. This is how the setup.py looks like: from setuptools import setup APP = ['main.py'] DATA_FILES = ['config.json'] OPTIONS = { 'argv_emulation': True } setup( app=APP, data_files=DATA_FILES, options={'py2app': OPTIONS}, setup_requires=['py2app'], ) And this is the directory structure: -modules/---__init.py__ | | | -- gui_module.py | | | -- scraper_module.py | | | -- app.ico | -config.json | -countries_list.txt | -main.py | -requirements.txt | -setup.py I'm happy to share more details and the files if you need them.
[ "The problem was that you need to give an executable path for the python framework you have on your MacOs. So I modify the setup.py\nsetup.py\nfrom setuptools import setup\n\nclass CONFIG:\n VERSION = 'v1.0.1'\n platform = 'darwin-x86_64'\n executable_stub = '/opt/homebrew/Frameworks/Python.framework/Versions/3.10/lib/libpython3.10.dylib' # this is important, check where is your Python framework and get the `dylib`\n APP_NAME = f'your_app_{VERSION}_{platform}'\n APP = ['main.py']\n DATA_FILES = [\n 'config.json', \n 'countries_list.txt', \n ('modules', ['modules/app.ico']),\n # this modules are automatically added if you use __init__.py in your folder\n # ('modules', ['modules/scraper_module.py']),\n # ('modules', ['modules/gui_module.py']),\n ]\n\n OPTIONS = {\n 'argv_emulation': False,\n 'iconfile': 'modules/app.ico',\n 'plist': {\n 'CFBundleName': APP_NAME,\n 'CFBundleDisplayName': APP_NAME,\n 'CFBundleGetInfoString': APP_NAME,\n 'CFBundleVersion': VERSION,\n 'CFBundleShortVersionString': VERSION,\n 'PyRuntimeLocations': [\n executable_stub,\n # also the executable can look like this:\n #'@executable_path/../Frameworks/libpython3.4m.dylib',\n ]\n }\n }\n\ndef main():\n setup(\n name=CONFIG.APP_NAME,\n app=CONFIG.APP,\n data_files=CONFIG.DATA_FILES,\n options={'py2app': CONFIG.OPTIONS},\n setup_requires=['py2app'],\n maintainer='foo bar',\n author_email='[email protected]',\n )\n\nif __name__ == '__main__':\n main()\n\nThen you need to run python3 setup.py py2app and now you can go and just double click on your_app_{VERSION}_{platform}.app.\nRecomendations by the py2app docs:\n\nMake sure not to use the -A flag\nDo not use --argv-emulation when the program uses a GUI toolkit (as Tkinter) py2app options docs\n\n" ]
[ 0 ]
[]
[]
[ "macos", "py2app", "pyinstaller", "python", "tkinter" ]
stackoverflow_0074619476_macos_py2app_pyinstaller_python_tkinter.txt
Q: Can I make PyInstaller optimize the compilation? When I use PyInstaller, it builds my modules as .pyc files. But I'd prefer it to run the compilation with -OO to optmize and remove docstrings. Is this possible? A: Since pyinstaller is a Python script, it is sufficient to run it with optimisation activated: e.g. PYTHONOPTIMIZE=2 pyinstaller script.py so that during bundling .pyo files are created instead of .pyc A: i would like to add that in cmd.exe you can type >set PYTHONOPTIMIZE=1 >pyinstaller skript.py the answer from Stefano M did not work in my console. i had to make two different calls. you can check if your code is optimized by using the preprocessor directive if __debug__: print("optimized")
Can I make PyInstaller optimize the compilation?
When I use PyInstaller, it builds my modules as .pyc files. But I'd prefer it to run the compilation with -OO to optmize and remove docstrings. Is this possible?
[ "Since pyinstaller is a Python script, it is sufficient to run it with optimisation activated: e.g.\nPYTHONOPTIMIZE=2 pyinstaller script.py\n\nso that during bundling .pyo files are created instead of .pyc\n", "i would like to add that in cmd.exe you can type\n>set PYTHONOPTIMIZE=1\n>pyinstaller skript.py \n\nthe answer from Stefano M did not work in my console. i had to make two different calls.\nyou can check if your code is optimized by using the preprocessor directive\nif __debug__:\n print(\"optimized\")\n\n" ]
[ 4, 0 ]
[]
[]
[ "pyinstaller", "python" ]
stackoverflow_0036401229_pyinstaller_python.txt
Q: How to extract the output froman NLP model to a dataframe? I have trained an NLP Model (NER) and I have results in the below format: for text, _ in TEST_DATA: doc = nlp(text) print([(ent.text, ent.label_) for ent in doc.ents]) #Output [('1131547', 'ID'), ('12/9/2019', 'Date'), ('USA', 'ShippingAddress')] [('567456', 'ID'), ('Hills', 'ShippingAddress')] #I need the output in the below format ID Date ShippingAddress 1131547 12/9/2019 USA 567456 NA Hills Thanks for your help in advance A: In order to import the data into a Pandas dataframe, you can use data_array = [] for text, _ in TEST_DATA: doc = nlp(text) data_array.append({ent.label_:ent.text for ent in doc.ents}) import pandas as pd df = pd.DataFrame.from_dict(data_array) The test result: >>> pd.DataFrame.from_dict(data_array) ID Date ShippingAddress 0 1131547 12/9/2019 USA 1 567456 NaN Hills
How to extract the output from an NLP model to a dataframe?
I have trained an NLP Model (NER) and I have results in the below format: for text, _ in TEST_DATA: doc = nlp(text) print([(ent.text, ent.label_) for ent in doc.ents]) #Output [('1131547', 'ID'), ('12/9/2019', 'Date'), ('USA', 'ShippingAddress')] [('567456', 'ID'), ('Hills', 'ShippingAddress')] #I need the output in the below format ID Date ShippingAddress 1131547 12/9/2019 USA 567456 NA Hills Thanks for your help in advance
[ "In order to import the data into a Pandas dataframe, you can use\ndata_array = []\n\nfor text, _ in TEST_DATA:\n doc = nlp(text)\n data_array.append({ent.label_:ent.text for ent in doc.ents})\n\nimport pandas as pd\ndf = pd.DataFrame.from_dict(data_array)\n\nThe test result:\n>>> pd.DataFrame.from_dict(data_array)\n ID Date ShippingAddress\n0 1131547 12/9/2019 USA\n1 567456 NaN Hills\n\n" ]
[ 1 ]
[]
[]
[ "dictionary", "named_entity_recognition", "nlp", "python", "spacy" ]
stackoverflow_0074629474_dictionary_named_entity_recognition_nlp_python_spacy.txt
Q: Python regex to get the closest match without duplicated content What I need I have a list of img src link. Here is an example: https://studiocake.kiev.ua/wp-content/webpc-passthru.php?src=https://studiocake.kiev.ua/wp-content/uploads/photo_2020-12-27_12-18-00-2-333x444.jpg&nocache=1 https://studiocake.kiev.ua/wp-content/webpc-passthru.php?src=https://studiocake.kiev.ua/wp-content/uploads/IMG_4945-333x444.jpeg&nocache=1 https://studiocake.kiev.ua/wp-content/webpc-passthru.php?src=https://studiocake.kiev.ua/wp-content/uploads/tri-shokolada.png&nocache=1 I need get the following result: studiocake.kiev.ua/wp-content/uploads/photo_2020-12-27_12-18-00-2-333x444.jpg studiocake.kiev.ua/wp-content/uploads/IMG_4945-333x444.jpeg studiocake.kiev.ua/wp-content/uploads/tri-shokolada.png Problem I use the following regex: studiocake\.kiev\.ua.*(jpeg|png|jpg) But it doesn't work the way I need. Instead of the result I need, I get link like: studiocake.kiev.ua/wp-content/webpc-passthru.php?src=https://studiocake.kiev.ua/wp-content/uploads/photo_2020-12-27_12-18-00-2-333x444.jpg Question How can I get the result I need with Python regex A: You can let a greedy .* consume the starting match and capture the latter. import re matches = re.findall(r"(?i).*\b(studiocake\.kiev\.ua\S*\b(?:jpeg|png|jpg))\b", s) See this demo at regex101 (matches in group 1) or a Python demo at tio.run Inside used \S* to match any amount of characters other than a whitespace. I further added some \b word boundaries and the (?i)-flag for ignore case. A: What you want to achieve, is a standard operation on URLs, and python has good number of libraries to achieve that. Instead of using regexes for this exercise, I would recommend using a url parsing library, which provides standard operations, and provides better code. from urllib.parse import urlparse, parse_qs def extractSrc(strUrl): # Parse original URL using urllib parsed_url = urlparse(strUrl) # Find the value of query parameter img src_value = parse_qs(parsed_url.query)['src'][0] # Again, using same library, parse img url which we got above. img_parsed_url = urlparse(src_value) # Remove the scheme in the img URL and return result. scheme = "%s://" % img_parsed_url.scheme return img_parsed_url.geturl().replace(scheme, '', 1) urls = '''https://studiocake.kiev.ua/wp-content/webpc-passthru.php?src=https://studiocake.kiev.ua/wp-content/uploads/photo_2020-12-27_12-18-00-2-333x444.jpg&nocache=1 https://studiocake.kiev.ua/wp-content/webpc-passthru.php?src=https://studiocake.kiev.ua/wp-content/uploads/IMG_4945-333x444.jpeg&nocache=1 https://studiocake.kiev.ua/wp-content/webpc-passthru.php?src=https://studiocake.kiev.ua/wp-content/uploads/tri-shokolada.png&nocache=1''' for u in urls.split('\n'): print(extractSrc(u)) Output: studiocake.kiev.ua/wp-content/uploads/photo_2020-12-27_12-18-00-2-333x444.jpg studiocake.kiev.ua/wp-content/uploads/IMG_4945-333x444.jpeg studiocake.kiev.ua/wp-content/uploads/tri-shokolada.png A: My hack expression is this: (https://)(studiocake\.kiev\.ua.*(php)\?src=https://)(studiocake\.kiev\.ua.*(jpeg|png|jpg))(&nocache=1) To replace it with $4 Explanation... I just selected all the link in parts and then replaced it with the particular part needed.
Python regex to get the closest match without duplicated content
What I need I have a list of img src link. Here is an example: https://studiocake.kiev.ua/wp-content/webpc-passthru.php?src=https://studiocake.kiev.ua/wp-content/uploads/photo_2020-12-27_12-18-00-2-333x444.jpg&nocache=1 https://studiocake.kiev.ua/wp-content/webpc-passthru.php?src=https://studiocake.kiev.ua/wp-content/uploads/IMG_4945-333x444.jpeg&nocache=1 https://studiocake.kiev.ua/wp-content/webpc-passthru.php?src=https://studiocake.kiev.ua/wp-content/uploads/tri-shokolada.png&nocache=1 I need get the following result: studiocake.kiev.ua/wp-content/uploads/photo_2020-12-27_12-18-00-2-333x444.jpg studiocake.kiev.ua/wp-content/uploads/IMG_4945-333x444.jpeg studiocake.kiev.ua/wp-content/uploads/tri-shokolada.png Problem I use the following regex: studiocake\.kiev\.ua.*(jpeg|png|jpg) But it doesn't work the way I need. Instead of the result I need, I get link like: studiocake.kiev.ua/wp-content/webpc-passthru.php?src=https://studiocake.kiev.ua/wp-content/uploads/photo_2020-12-27_12-18-00-2-333x444.jpg Question How can I get the result I need with Python regex
[ "You can let a greedy .* consume the starting match and capture the latter.\nimport re\n\nmatches = re.findall(r\"(?i).*\\b(studiocake\\.kiev\\.ua\\S*\\b(?:jpeg|png|jpg))\\b\", s)\n\nSee this demo at regex101 (matches in group 1) or a Python demo at tio.run\n\nInside used \\S* to match any amount of characters other than a whitespace.\nI further added some \\b word boundaries and the (?i)-flag for ignore case.\n", "What you want to achieve, is a standard operation on URLs, and python has good number of libraries to achieve that. Instead of using regexes for this exercise, I would recommend using a url parsing library, which provides standard operations, and provides better code.\nfrom urllib.parse import urlparse, parse_qs\n\n\ndef extractSrc(strUrl):\n # Parse original URL using urllib\n parsed_url = urlparse(strUrl)\n\n # Find the value of query parameter img\n src_value = parse_qs(parsed_url.query)['src'][0]\n \n # Again, using same library, parse img url which we got above.\n img_parsed_url = urlparse(src_value)\n\n # Remove the scheme in the img URL and return result.\n scheme = \"%s://\" % img_parsed_url.scheme\n return img_parsed_url.geturl().replace(scheme, '', 1)\n\n\n\nurls = '''https://studiocake.kiev.ua/wp-content/webpc-passthru.php?src=https://studiocake.kiev.ua/wp-content/uploads/photo_2020-12-27_12-18-00-2-333x444.jpg&nocache=1\nhttps://studiocake.kiev.ua/wp-content/webpc-passthru.php?src=https://studiocake.kiev.ua/wp-content/uploads/IMG_4945-333x444.jpeg&nocache=1\nhttps://studiocake.kiev.ua/wp-content/webpc-passthru.php?src=https://studiocake.kiev.ua/wp-content/uploads/tri-shokolada.png&nocache=1'''\n\nfor u in urls.split('\\n'):\n print(extractSrc(u))\n\nOutput:\nstudiocake.kiev.ua/wp-content/uploads/photo_2020-12-27_12-18-00-2-333x444.jpg\nstudiocake.kiev.ua/wp-content/uploads/IMG_4945-333x444.jpeg\nstudiocake.kiev.ua/wp-content/uploads/tri-shokolada.png\n\n", "My hack expression is this:\n(https://)(studiocake\\.kiev\\.ua.*(php)\\?src=https://)(studiocake\\.kiev\\.ua.*(jpeg|png|jpg))(&nocache=1)\n\nTo replace it with $4\nExplanation...\nI just selected all the link in parts and then replaced it with the particular part needed.\n" ]
[ 4, 3, 0 ]
[]
[]
[ "extract", "python", "regex", "string", "url" ]
stackoverflow_0074628727_extract_python_regex_string_url.txt
Q: Infinite continued Fraction in Python I am currently trying to implement a function that approximate the e constant in Python. from fractions import Fraction def fractionalSum(number, array): def inside(index, place): if place >= 0: return Fraction(1, index + place) else: return Fraction(1, index) if number == 0: return 0 elif number == 1: return inside(array[number - 1], 0) elif number == 2: return inside(1, fractionalSum(number - 1, array)) elif number == 88: return inside(array[0], inside(array[1], inside(array[2], inside(array[3], inside(array[4], 0))))) else: return inside(fractionalSum(number - 2, array), inside(fractionalSum(number - 1, array), 0)) expansion = [1] it = 1 clock = 0 for i in range(1, 110): if clock == 0: expansion.append(2 * it) it += 1 clock = 2 if clock != 0: expansion.append(1) clock -= 1 print(expansion) print(2 + fractionalSum(3, expansion)) I am currently trying recursion to calculate it but the code is not producing the correct results. In the fractionalsum function having number 2 should call the same function with number-1 but the results is wrong. number = 88 produces the correct value for number 5. I am trying to implement it recursively to approximate it with numbers > 50. A: I haven't investigated what exactly you may have done wrong, but a recursive implementation of Continued Fraction should be fairly simple, so I'm suggesting this instead: ONE = Fraction(1, 1) def continuedFraction(array): return _continuedFraction(array, 0) def _continuedFraction(array, index): result = Fraction(array[index], 1) if index + 1 < len(array): return result + ONE / _continuedFraction(array, index + 1) return result
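A sketch of an iterative evaluator applied to e (an alternative to the recursion above, not the poster's code): fold the expansion from the right with Fraction, using the standard pattern 1, 2k, 1 after the leading 2.

from fractions import Fraction

def continued_fraction(terms):
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + Fraction(1) / value
    return value

terms = [2]            # e = [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, ...]
k = 1
while len(terms) < 40:
    terms.extend([1, 2 * k, 1])
    k += 1

print(float(continued_fraction(terms)))   # ~2.718281828459045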
Infinite continued Fraction in Python
I am currently trying to implement a function that approximate the e constant in Python. from fractions import Fraction def fractionalSum(number, array): def inside(index, place): if place >= 0: return Fraction(1, index + place) else: return Fraction(1, index) if number == 0: return 0 elif number == 1: return inside(array[number - 1], 0) elif number == 2: return inside(1, fractionalSum(number - 1, array)) elif number == 88: return inside(array[0], inside(array[1], inside(array[2], inside(array[3], inside(array[4], 0))))) else: return inside(fractionalSum(number - 2, array), inside(fractionalSum(number - 1, array), 0)) expansion = [1] it = 1 clock = 0 for i in range(1, 110): if clock == 0: expansion.append(2 * it) it += 1 clock = 2 if clock != 0: expansion.append(1) clock -= 1 print(expansion) print(2 + fractionalSum(3, expansion)) I am currently trying recursion to calculate it but the code is not producing the correct results. In the fractionalsum function having number 2 should call the same function with number-1 but the results is wrong. number = 88 produces the correct value for number 5. I am trying to implement it recursively to approximate it with numbers > 50.
[ "I haven't investigated what exactly you may have done wrong, but a recursive implementation of Continued Fraction should be fairly simple, so I'm suggesting this instead:\nONE = Fraction(1, 1)\n\ndef continuedFraction(array):\n return _continuedFraction(array, 0)\n\ndef _continuedFraction(array, index):\n result = Fraction(array[index], 1)\n if index + 1 < len(array):\n return result + ONE / _continuedFraction(array, index + 1)\n return result\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074629996_python_python_3.x.txt
Q: Get folder and files Google Drive API with Shared Device and Service Account I'm working with a Google Service Account, I have access to Google Drive API and a Shared Unit. I need to get access to all the files and folders from a Shared Unit. I tried a lot of different ways to do this. drive_service.files().list( q = f"'{parent_folder}' in parents", spaces = 'drive', supportsTeamDrives=True ).execute() >> {'kind': 'drive#fileList', 'incompleteSearch': False, 'files': []} drive_service.files().list( q = f" parents in '{parent_folder}'", spaces = 'drive', supportsTeamDrives=True ).execute() >> {'kind': 'drive#fileList', 'incompleteSearch': False, 'files': []} drive_service.files().list( spaces = 'drive', supportsTeamDrives=True ).execute() >> {'kind': 'drive#fileList', 'incompleteSearch': False, 'files': []} drive_service.drives().list().execute() >> {'kind': 'drive#driveList', 'drives': [{'kind': 'drive#drive', 'id': '0AOELwkzr21lFUk9VA', 'name': 'foo'}]} I know a have access because I can upload files to the parent folder. Also, there are files in the parent folder. Do you have any clue? Thank you for your time A: I figure it out. An additional parameter had to be passed: includeItemsFromAllDrives = True, supportsAllDrives = True This works: drive_service.files().list( q = f"'{parent_folder}' in parents", spaces = 'drive', includeItemsFromAllDrives = True, supportsAllDrives = True ).execute()
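A sketch extending the answer (not part of the original solution): the same call with nextPageToken handling, so folders with more than one page of results are listed completely. parent_folder and drive_service are assumed to exist as in the question.

def list_folder(drive_service, parent_folder):
    files, page_token = [], None
    while True:
        response = drive_service.files().list(
            q=f"'{parent_folder}' in parents",
            spaces='drive',
            includeItemsFromAllDrives=True,
            supportsAllDrives=True,
            fields='nextPageToken, files(id, name, mimeType)',
            pageToken=page_token,
        ).execute()
        files.extend(response.get('files', []))
        page_token = response.get('nextPageToken')
        if page_token is None:
            return files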
Get folder and files Google Drive API with Shared Device and Service Account
I'm working with a Google Service Account, I have access to Google Drive API and a Shared Unit. I need to get access to all the files and folders from a Shared Unit. I tried a lot of different ways to do this. drive_service.files().list( q = f"'{parent_folder}' in parents", spaces = 'drive', supportsTeamDrives=True ).execute() >> {'kind': 'drive#fileList', 'incompleteSearch': False, 'files': []} drive_service.files().list( q = f" parents in '{parent_folder}'", spaces = 'drive', supportsTeamDrives=True ).execute() >> {'kind': 'drive#fileList', 'incompleteSearch': False, 'files': []} drive_service.files().list( spaces = 'drive', supportsTeamDrives=True ).execute() >> {'kind': 'drive#fileList', 'incompleteSearch': False, 'files': []} drive_service.drives().list().execute() >> {'kind': 'drive#driveList', 'drives': [{'kind': 'drive#drive', 'id': '0AOELwkzr21lFUk9VA', 'name': 'foo'}]} I know a have access because I can upload files to the parent folder. Also, there are files in the parent folder. Do you have any clue? Thank you for your time
[ "I figure it out.\nAn additional parameter had to be passed:\nincludeItemsFromAllDrives = True,\nsupportsAllDrives = True\n\nThis works:\ndrive_service.files().list(\n q = f\"'{parent_folder}' in parents\",\n spaces = 'drive',\n includeItemsFromAllDrives = True,\n supportsAllDrives = True\n).execute()\n\n" ]
[ 0 ]
[]
[]
[ "google_drive_api", "google_drive_shared_drive", "python" ]
stackoverflow_0074629750_google_drive_api_google_drive_shared_drive_python.txt
Q: Getting TransactionManagementError on bulk_create with mysql db I'm trying to create a few objects using Django's bulk_create but I'm getting TransactionManagementError. Django is on django-2.2.24 Mysql is running via docker and I'm using mariadb:10.10.2. Traceback (most recent call last): File "/Users/xyz/Documents/dev/django-proj/sliphy/manage.py", line 21, in <module> main() File "/Users/xyz/Documents/dev/django-proj/sliphy/manage.py", line 17, in main execute_from_command_line(sys.argv) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line utility.execute() File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/core/management/__init__.py", line 375, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/core/management/base.py", line 323, in run_from_argv self.execute(*args, **cmd_options) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/core/management/base.py", line 364, in execute output = self.handle(*args, **options) File "/Users/xyz/Documents/dev/django-proj/sliphy/generator/management/commands/generate_backup.py", line 31, in handle self.bulk_create_into_user_trans() File "/Users/xyz/Documents/dev/django-proj/sliphy/generator/management/commands/generate_backup.py", line 78, in bulk_create_trans Transactions.objects.bulk_create(objs, batch_size=100) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/db/models/manager.py", line 82, in manager_method return getattr(self.get_queryset(), name)(*args, **kwargs) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/cacheops/query.py", line 411, in bulk_create objs = self._no_monkey.bulk_create(self, objs, *args, **kwargs) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/db/models/query.py", line 474, in bulk_create ids = self._batched_insert(objs_without_pk, fields, batch_size, ignore_conflicts=ignore_conflicts) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/db/models/query.py", line 1211, in _batched_insert self._insert(item, fields=fields, using=self.db, ignore_conflicts=ignore_conflicts) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/db/models/query.py", line 1186, in _insert return query.get_compiler(using=using).execute_sql(return_id) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/db/models/sql/compiler.py", line 1377, in execute_sql cursor.execute(sql, params) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 99, in execute return super().execute(sql, params) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/cacheops/transaction.py", line 99, in execute result = self._no_monkey.execute(self, sql, params) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 67, in execute return self._execute_with_wrappers(sql, params, many=False, executor=self._execute) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers return executor(sql, params, many, context) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 79, in _execute self.db.validate_no_broken_transaction() File 
"/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/db/backends/base/base.py", line 437, in validate_no_broken_transaction raise TransactionManagementError( django.db.transaction.TransactionManagementError: An error occurred in the current transaction. You can't execute queries until the end of the 'atomic' block. If anyone has any idea what might be the root cause of this, please do give some hints or something. Any help is appreciated. A: Probably you should wrap your function with transaction.atomic, here is example from the django docs: from django.db import transaction @transaction.atomic def viewfunc(request): # This code executes inside a transaction. do_stuff() and as a context manager: from django.db import transaction def viewfunc(request): # This code executes in autocommit mode (Django's default). do_stuff() with transaction.atomic(): # This code executes inside a transaction. do_more_stuff()
Getting TransactionManagementError on bulk_create with mysql db
I'm trying to create a few objects using Django's bulk_create but I'm getting TransactionManagementError. Django is on django-2.2.24 Mysql is running via docker and I'm using mariadb:10.10.2. Traceback (most recent call last): File "/Users/xyz/Documents/dev/django-proj/sliphy/manage.py", line 21, in <module> main() File "/Users/xyz/Documents/dev/django-proj/sliphy/manage.py", line 17, in main execute_from_command_line(sys.argv) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line utility.execute() File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/core/management/__init__.py", line 375, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/core/management/base.py", line 323, in run_from_argv self.execute(*args, **cmd_options) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/core/management/base.py", line 364, in execute output = self.handle(*args, **options) File "/Users/xyz/Documents/dev/django-proj/sliphy/generator/management/commands/generate_backup.py", line 31, in handle self.bulk_create_into_user_trans() File "/Users/xyz/Documents/dev/django-proj/sliphy/generator/management/commands/generate_backup.py", line 78, in bulk_create_trans Transactions.objects.bulk_create(objs, batch_size=100) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/db/models/manager.py", line 82, in manager_method return getattr(self.get_queryset(), name)(*args, **kwargs) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/cacheops/query.py", line 411, in bulk_create objs = self._no_monkey.bulk_create(self, objs, *args, **kwargs) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/db/models/query.py", line 474, in bulk_create ids = self._batched_insert(objs_without_pk, fields, batch_size, ignore_conflicts=ignore_conflicts) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/db/models/query.py", line 1211, in _batched_insert self._insert(item, fields=fields, using=self.db, ignore_conflicts=ignore_conflicts) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/db/models/query.py", line 1186, in _insert return query.get_compiler(using=using).execute_sql(return_id) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/db/models/sql/compiler.py", line 1377, in execute_sql cursor.execute(sql, params) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 99, in execute return super().execute(sql, params) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/cacheops/transaction.py", line 99, in execute result = self._no_monkey.execute(self, sql, params) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 67, in execute return self._execute_with_wrappers(sql, params, many=False, executor=self._execute) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers return executor(sql, params, many, context) File "/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 79, in _execute self.db.validate_no_broken_transaction() File 
"/Users/xyz/opt/miniconda3/envs/.venv/lib/python3.9/site-packages/django/db/backends/base/base.py", line 437, in validate_no_broken_transaction raise TransactionManagementError( django.db.transaction.TransactionManagementError: An error occurred in the current transaction. You can't execute queries until the end of the 'atomic' block. If anyone has any idea what might be the root cause of this, please do give some hints or something. Any help is appreciated.
[ "Probably you should wrap your function with transaction.atomic, here is example from the django docs:\nfrom django.db import transaction\[email protected]\ndef viewfunc(request):\n # This code executes inside a transaction.\n do_stuff()\n\nand as a context manager:\nfrom django.db import transaction\n\ndef viewfunc(request):\n # This code executes in autocommit mode (Django's default).\n do_stuff()\n\n with transaction.atomic():\n # This code executes inside a transaction.\n do_more_stuff()\n\n" ]
[ 0 ]
[]
[]
[ "django", "mysql", "python", "python_3.x" ]
stackoverflow_0074569599_django_mysql_python_python_3.x.txt
Q: KivyMD: ToolBar doesn't work on Android. App crashes I'm stuck with a strange problem. My app works perfect with kivymd toolbar MDTopAppBar on Windows (after compiling with pyinstaller too) and Ubuntu. But, when I try to add this element even in the simpliest app and create .apk using buildozer, my app crashes immediatly after launch. Here are examples of main.py and main.kv main.py from kivy.config import Config Config.set('graphics', 'resizable', 0) Config.set("graphics", "width", 360) Config.set("graphics", "height", 740) from kivymd.app import MDApp from kivy.lang import Builder class MesApp(MDApp): def build(self): return Builder.load_file('main.kv') if __name__ == '__main__': MesApp().run() main.kv <Screen>: MDBoxLayout: orientation: 'vertical' padding: dp(5), dp(5) MDTopAppBar: title: 'Some toolbar' MDLabel: text: 'Some text' pos_hint: {"center_x": 0.9} Requirements from buildozer.spec: requirements = kivy==2.1.0, kivymd==1.1.1, sdl2_ttf == 2.0.15, pillow If we remove two lines with MDTopAppBar from main.kv this app works fine. Here's some log with crashing: ... 11-09 20:01:25.672 15328 15466 I python : [INFO ] [Base ] Start application main loop 11-09 20:01:25.674 15328 15466 I python : [INFO ] [GL ] NPOT texture support is available --------- beginning of crash 11-09 20:01:25.715 15328 15466 F libc : Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x40 in tid 15466 (SDLThread), pid 15328 (stone.mytestapp) # org.testone.mytestapp terminated Googling this error didn't help. Is there something wrong with my code? Or it's something about buildozer and this specific element MDTopAppBar? I just don't understand in detail how build process works and what's going on there. OS: Ubuntu 22.04.1 LTS Python: 3.10.6 Device: Google Pixel 4a, Android 11 buildozer: 1.4.0. Installed it according to the official documentation. A: For anyone out there facing this, there has been an issue on the kivymd github repo about this and this problem is caused by changes in the latest opengl version and changes in sdl versions. The best thing to do for now is to use kivymd==1.0.2 in the requirements while compiling apk and it should work fine.
KivyMD: ToolBar doesn't work on Android. App crashes
I'm stuck with a strange problem. My app works perfect with kivymd toolbar MDTopAppBar on Windows (after compiling with pyinstaller too) and Ubuntu. But, when I try to add this element even in the simpliest app and create .apk using buildozer, my app crashes immediatly after launch. Here are examples of main.py and main.kv main.py from kivy.config import Config Config.set('graphics', 'resizable', 0) Config.set("graphics", "width", 360) Config.set("graphics", "height", 740) from kivymd.app import MDApp from kivy.lang import Builder class MesApp(MDApp): def build(self): return Builder.load_file('main.kv') if __name__ == '__main__': MesApp().run() main.kv <Screen>: MDBoxLayout: orientation: 'vertical' padding: dp(5), dp(5) MDTopAppBar: title: 'Some toolbar' MDLabel: text: 'Some text' pos_hint: {"center_x": 0.9} Requirements from buildozer.spec: requirements = kivy==2.1.0, kivymd==1.1.1, sdl2_ttf == 2.0.15, pillow If we remove two lines with MDTopAppBar from main.kv this app works fine. Here's some log with crashing: ... 11-09 20:01:25.672 15328 15466 I python : [INFO ] [Base ] Start application main loop 11-09 20:01:25.674 15328 15466 I python : [INFO ] [GL ] NPOT texture support is available --------- beginning of crash 11-09 20:01:25.715 15328 15466 F libc : Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x40 in tid 15466 (SDLThread), pid 15328 (stone.mytestapp) # org.testone.mytestapp terminated Googling this error didn't help. Is there something wrong with my code? Or it's something about buildozer and this specific element MDTopAppBar? I just don't understand in detail how build process works and what's going on there. OS: Ubuntu 22.04.1 LTS Python: 3.10.6 Device: Google Pixel 4a, Android 11 buildozer: 1.4.0. Installed it according to the official documentation.
[ "For anyone out there facing this, there has been an issue on the kivymd github repo about this and this problem is caused by changes in the latest opengl version and changes in sdl versions. The best thing to do for now is to use kivymd==1.0.2 in the requirements while compiling apk and it should work fine.\n" ]
[ 0 ]
[]
[]
[ "android", "buildozer", "kivy", "kivymd", "python" ]
stackoverflow_0074379030_android_buildozer_kivy_kivymd_python.txt
Q: How to change sphinx's _static folder output location? I have several projects that use the readthedocs theme that I'm hoping to can share a single _static folder location. It's two levels up at ../../_static. Is it possible to set this easily? What I've tried: various conf.py settings such as static_file_path changing all the _static paths in the template files to ../../_static The latter method gets close but still leaves me with: <script src=<"_static>/jquery.js"></script> <script src=<"_static>/underscore.js"></script> <script src=<"_static>/doctools.js"></script> <script src=<"_static>/language_data.js"></script> Those paths are dynamically generated and don't appear in the templates. I've tried to find the source in layout.html (probably 'pathto') without success. Any ideas? A: Adding .nojekyll file, as sometimes suggested, only tells github not to apply it's own templates. Anyway, I figured this out a long time ago. Just change a couple of paths in config file and change all the paths in layout.html. There maybe other ways, but that method works for me. Here's a live example where 10 separate projects share _static: https://www.adobe.com/devnet-docs/acrobatetk/tools/AdminGuide/index.html A: The easiest way is to add an empty .nojekll file under the html directory
How to change sphinx's _static folder output location?
I have several projects that use the readthedocs theme that I'm hoping to can share a single _static folder location. It's two levels up at ../../_static. Is it possible to set this easily? What I've tried: various conf.py settings such as static_file_path changing all the _static paths in the template files to ../../_static The latter method gets close but still leaves me with: <script src=<"_static>/jquery.js"></script> <script src=<"_static>/underscore.js"></script> <script src=<"_static>/doctools.js"></script> <script src=<"_static>/language_data.js"></script> Those paths are dynamically generated and don't appear in the templates. I've tried to find the source in layout.html (probably 'pathto') without success. Any ideas?
[ "Adding .nojekyll file, as sometimes suggested, only tells github not to apply it's own templates.\nAnyway, I figured this out a long time ago. Just change a couple of paths in config file and change all the paths in layout.html. There maybe other ways, but that method works for me. Here's a live example where 10 separate projects share _static: https://www.adobe.com/devnet-docs/acrobatetk/tools/AdminGuide/index.html\n", "The easiest way is to add an empty .nojekll file under the html directory\n" ]
[ 1, 0 ]
[]
[]
[ "path", "python", "python_sphinx" ]
stackoverflow_0067324605_path_python_python_sphinx.txt
Q: 'WSGIRequest' object has no attribute 'htmx' Hi just looking for some help at solving this error in Django whilst trying to call a view that to accept a htmx request. The final result is to display a popup Modal of images from a Gallery when a thumbnail is clicked. HTMX installed via script in head. View if request.htmx: slug = request.GET.get('slug') context = {'pictures': Media.objects.filter(slug=slug)} return render(request, 'main/gallery-detail.html', context=context) context = {'objects_list': Albums.objects.all()} return render(request, 'main/gallery.html', context=context) Relevant html with the button to open gallery of images. <a class="btn btn-primary" hx-post="{{ request.path }}?slug={{ img.slug }}" hx-target="#modal"> {{ img.slug }}</a> {% endfor %} <div id="modal">{% include "main/gallery-detail.html" %}</div> A: This error mostly occurs if you haven't included django-htmx in the settings.py. Try making the below changes and see if it works : Add "django_htmx.middleware.HtmxMiddleware" to the MIDDLEWARE. Add "django_htmx" to the INSTALLED_APPS.
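A short sketch of the settings changes the answer describes (list positions are illustrative; request.htmx is attached by the middleware once the django-htmx package is installed):

# settings.py
INSTALLED_APPS = [
    # ... existing apps ...
    "django_htmx",
]

MIDDLEWARE = [
    # ... existing middleware ...
    "django_htmx.middleware.HtmxMiddleware",
]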
'WSGIRequest' object has no attribute 'htmx'
Hi just looking for some help at solving this error in Django whilst trying to call a view that to accept a htmx request. The final result is to display a popup Modal of images from a Gallery when a thumbnail is clicked. HTMX installed via script in head. View if request.htmx: slug = request.GET.get('slug') context = {'pictures': Media.objects.filter(slug=slug)} return render(request, 'main/gallery-detail.html', context=context) context = {'objects_list': Albums.objects.all()} return render(request, 'main/gallery.html', context=context) Relevant html with the button to open gallery of images. <a class="btn btn-primary" hx-post="{{ request.path }}?slug={{ img.slug }}" hx-target="#modal"> {{ img.slug }}</a> {% endfor %} <div id="modal">{% include "main/gallery-detail.html" %}</div>
[ "This error mostly occurs if you haven't included django-htmx in the settings.py.\nTry making the below changes and see if it works :\n\nAdd \"django_htmx.middleware.HtmxMiddleware\" to the MIDDLEWARE.\nAdd \"django_htmx\" to the INSTALLED_APPS.\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_views", "htmx", "python" ]
stackoverflow_0073682746_django_django_views_htmx_python.txt
Q: Evaluating many random states in python I am working with python and I have a code that looks like this. for index in range (1, 5): Rndm1 = RFC(n_estimators=500, random_state= index) Rndm1.fit(X_train, y_train) y_pred = Rndm1.predict(x_test) print("selected random state:", index) print("Accuracy:", accuracy_score(y_test, y_pred)) And I get a result like this ... selected random state: 3 Accuracy: 0.95 The problem is that I only get one random state, and I actually want all five random states and their accuracies. So how can it get a result like this... selected random state: 1 Accuracy: 0.94 selected random state: 2 Accuracy: 0.96 selected random state: 3 Accuracy: 0.95 selected random state: 4 Accuracy: 0.93 selected random state: 5 Accuracy: 0.96 Thanks in advance. A: Accumulate the intermediate accuracies into a list, then take the mean of the list: import numpy as np accuracies = [] for index in range(1, 5): Rndm1 = RFC(n_estimators=500, random_state= index) Rndm1.fit(X_train, y_train) y_pred = Rndm1.predict(x_test) accuracies.append(accuracy_score(y_test, y_pred)) print("Mean across random states", np.mean(accuracies))
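If the aim is to print every state's accuracy (as in the desired output) rather than a mean, a small hedged variation of the answer, reusing RFC, X_train, y_train, x_test and y_test from the question; note that range(1, 6) is needed to cover five states:

accuracies = []
for index in range(1, 6):                      # 1..5 -> five random states
    model = RFC(n_estimators=500, random_state=index)
    model.fit(X_train, y_train)
    y_pred = model.predict(x_test)
    acc = accuracy_score(y_test, y_pred)
    accuracies.append(acc)
    print("selected random state:", index)
    print("Accuracy:", acc)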
Evaluating many random states in python
I am working with python and I have a code that looks like this. for index in range (1, 5): Rndm1 = RFC(n_estimators=500, random_state= index) Rndm1.fit(X_train, y_train) y_pred = Rndm1.predict(x_test) print("selected random state:", index) print("Accuracy:", accuracy_score(y_test, y_pred)) And I get a result like this ... selected random state: 3 Accuracy: 0.95 The problem is that I only get one random state, and I actually want all five random states and their accuracies. So how can it get a result like this... selected random state: 1 Accuracy: 0.94 selected random state: 2 Accuracy: 0.96 selected random state: 3 Accuracy: 0.95 selected random state: 4 Accuracy: 0.93 selected random state: 5 Accuracy: 0.96 Thanks in advance.
[ "Accumulate the intermediate accuracies into a list, then take the mean of the list:\nimport numpy as np\n\naccuracies = []\n\nfor index in range(1, 5):\n Rndm1 = RFC(n_estimators=500, random_state= index)\n Rndm1.fit(X_train, y_train)\n y_pred = Rndm1.predict(x_test)\n accuracies.append(accuracy_score(y_test, y_pred))\n\nprint(\"Mean across random states\", np.mean(accuracies))\n\n" ]
[ 1 ]
[]
[]
[ "python", "random_forest" ]
stackoverflow_0074620191_python_random_forest.txt
Q: Import Spacy Error "cannot import name dataclass_transform" I am working on a jupyter notebook project which should use spacy. I already used pip install to install spacy in anaconda prompt. However, when I tried to import spacy, it gives me the follwing error. I wonder what the problem is and what I can do to solve that. --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-96-3173a3034708> in <module> 9 #nltk.download() 10 from nltk.corpus import stopwords ---> 11 import spacy 12 13 #path where we store the txt files D:\Python\lib\site-packages\spacy\__init__.py in <module> 4 5 # set library-specific custom warning handling before doing anything else ----> 6 from .errors import setup_default_warnings 7 8 setup_default_warnings() # noqa: E402 D:\Python\lib\site-packages\spacy\errors.py in <module> 1 import warnings ----> 2 from .compat import Literal 3 4 5 class ErrorsWithCodes(type): D:\Python\lib\site-packages\spacy\compat.py in <module> 1 """Helpers for Python and platform compatibility.""" 2 import sys ----> 3 from thinc.util import copy_array 4 5 try: D:\Python\lib\site-packages\thinc\util.py in <module> 6 import functools 7 from wasabi import table ----> 8 from pydantic import create_model, ValidationError 9 import inspect 10 import os D:\Python\lib\site-packages\pydantic\__init__.cp38-win_amd64.pyd in init pydantic.__init__() D:\Python\lib\site-packages\pydantic\dataclasses.cp38-win_amd64.pyd in init pydantic.dataclasses() ImportError: cannot import name dataclass_transform A: You may have to try the below. pip install -U pip setuptools wheel pip install -U spacy python -m spacy download en_core_web_sm After installation restart the kernal if you are using jupyter notebook/lab For me Issue resolved.
Import Spacy Error "cannot import name dataclass_transform"
I am working on a jupyter notebook project which should use spacy. I already used pip install to install spacy in anaconda prompt. However, when I tried to import spacy, it gives me the follwing error. I wonder what the problem is and what I can do to solve that. --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-96-3173a3034708> in <module> 9 #nltk.download() 10 from nltk.corpus import stopwords ---> 11 import spacy 12 13 #path where we store the txt files D:\Python\lib\site-packages\spacy\__init__.py in <module> 4 5 # set library-specific custom warning handling before doing anything else ----> 6 from .errors import setup_default_warnings 7 8 setup_default_warnings() # noqa: E402 D:\Python\lib\site-packages\spacy\errors.py in <module> 1 import warnings ----> 2 from .compat import Literal 3 4 5 class ErrorsWithCodes(type): D:\Python\lib\site-packages\spacy\compat.py in <module> 1 """Helpers for Python and platform compatibility.""" 2 import sys ----> 3 from thinc.util import copy_array 4 5 try: D:\Python\lib\site-packages\thinc\util.py in <module> 6 import functools 7 from wasabi import table ----> 8 from pydantic import create_model, ValidationError 9 import inspect 10 import os D:\Python\lib\site-packages\pydantic\__init__.cp38-win_amd64.pyd in init pydantic.__init__() D:\Python\lib\site-packages\pydantic\dataclasses.cp38-win_amd64.pyd in init pydantic.dataclasses() ImportError: cannot import name dataclass_transform
[ "You may have to try the below.\npip install -U pip setuptools wheel\npip install -U spacy\npython -m spacy download en_core_web_sm\nAfter installation restart the kernel if you are using jupyter notebook/lab.\nFor me the issue was resolved.\n" ]
[ 0 ]
[]
[]
[ "import", "nlp", "python", "python_packaging", "spacy" ]
stackoverflow_0074451907_import_nlp_python_python_packaging_spacy.txt
Q: ValueError: y should be a 1d array, got an array of shape (295, 9) instead I have a sentiment dataset about opinions on Twitter after I process them and I label these sentiments and I want to share the data based on the sentiments I labeled. now when I share it using the model train_test_split code it works when I match it to the predict stage model using naive bayes there is an error value ValueError: y should be a 1d array, got an array of shape (295, 9) instead. whether the distribution is not right or what? spreadsheet dataset #Train Test Split from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test  = train_test_split(text_tf, sentimen, test_size=0.78, random_state=0) #Naive Bayes from sklearn.naive_bayes import MultinomialNB from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix clf = MultinomialNB() clf.fit(X_train, y_train) predicted = clf.predict(X_test) print("MultinomialNB Accuracy:", accuracy_score(y_test,predicted)) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-55-90624fc1891c> in <module> 4 from sklearn.metrics import confusion_matrix 5 clf = MultinomialNB() ----> 6 clf.fit(X_train, y_train) 7 predicted = clf.predict(X_test) 8 5 frames /usr/local/lib/python3.7/dist-packages/sklearn/utils/validation.py in column_or_1d(y, warn) 1037 1038 raise ValueError( -> 1039 "y should be a 1d array, got an array of shape {} instead.".format(shape) 1040 ) 1041 ValueError: y should be a 1d array, got an array of shape (295, 9) instead. i have tried to change the test_size and random_state part but it only change on value error says same thing only change on size instead A: According to the documentation https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.MultinomialNB.html fit(X, y, sample_weight=None) Fit Naive Bayes classifier according to X, y. yarray-like of shape (n_samples,) Target values. y should be a one-dimensional array
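A hedged sketch of the usual fix: pass a single one-dimensional label column as the target, not a frame with several columns (the column name below is a placeholder; use whichever column actually holds the class label):

# 'sentimen' has shape (n, 9) here; keep only the label column
y = sentimen['label']                 # placeholder column name
# or, for an (n, 1) NumPy array:  y = sentimen.ravel()

X_train, X_test, y_train, y_test = train_test_split(text_tf, y, test_size=0.78, random_state=0)
clf = MultinomialNB()
clf.fit(X_train, y_train)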
ValueError: y should be a 1d array, got an array of shape (295, 9) instead
I have a sentiment dataset about opinions on Twitter after I process them and I label these sentiments and I want to share the data based on the sentiments I labeled. now when I share it using the model train_test_split code it works when I match it to the predict stage model using naive bayes there is an error value ValueError: y should be a 1d array, got an array of shape (295, 9) instead. whether the distribution is not right or what? spreadsheet dataset #Train Test Split from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test  = train_test_split(text_tf, sentimen, test_size=0.78, random_state=0) #Naive Bayes from sklearn.naive_bayes import MultinomialNB from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix clf = MultinomialNB() clf.fit(X_train, y_train) predicted = clf.predict(X_test) print("MultinomialNB Accuracy:", accuracy_score(y_test,predicted)) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-55-90624fc1891c> in <module> 4 from sklearn.metrics import confusion_matrix 5 clf = MultinomialNB() ----> 6 clf.fit(X_train, y_train) 7 predicted = clf.predict(X_test) 8 5 frames /usr/local/lib/python3.7/dist-packages/sklearn/utils/validation.py in column_or_1d(y, warn) 1037 1038 raise ValueError( -> 1039 "y should be a 1d array, got an array of shape {} instead.".format(shape) 1040 ) 1041 ValueError: y should be a 1d array, got an array of shape (295, 9) instead. i have tried to change the test_size and random_state part but it only change on value error says same thing only change on size instead
[ "According to the documentation https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.MultinomialNB.html\n\nfit(X, y, sample_weight=None) Fit Naive Bayes classifier according to\nX, y.\n\n\nyarray-like of shape (n_samples,) Target values.\n\ny should be a one-dimensional array\n" ]
[ 0 ]
[]
[]
[ "machine_learning", "naivebayes", "python", "sentiment_analysis", "tf_idf" ]
stackoverflow_0074630204_machine_learning_naivebayes_python_sentiment_analysis_tf_idf.txt
Q: Passing Line Break "\n" or "" from the Main Function to Jinja I have a list with the name of colors: colors = [red, green, blue] And I want it to be printed out on my web page as red green blue I tried this by using an argument in my HTML template {{colors_out}}, where I pass a string with "\n" to it as follows: colors_out = "" for i in range(len(colors)): colors_out += (str(i) + ". " + colors[i] + "\n") However, that does nothing but add a space between my colors. It prints out these: 0. red 1. green 2. blue instead of my desired format. I tried replacing "n" with "<br>" in the for loop above too, but then it will result in: 0. red<br>1. green<br>2. blue<br> A: One way, which you can try to use a for loop in Jinja2 in your HTML. You can simply pass a list to the HTML and then use for loop in Jinja2 along with the tags to print the outputs on seperate lines - For example in your case - In your Flask code - from flask import Flask, render_template, request app = Flask(__name__) @app.route('/') def index(): return render_template('index.html') @app.route('/colors', methods=["GET","POST"]) def test(): #your other code colors = ['red', 'green', 'blue'] return render_template('index.html',color=colors) if __name__ == '__main__': app.run() And your HTML code - just add this for loop, where i iterates over all the colors and print them in seperate lines {% for i in color %} <p>{{ i }} </p> {% endfor %} you can also use any other tag as per your requirement, it will still print on new line.
Passing Line Break "\n" or "" from the Main Function to Jinja
I have a list with the name of colors: colors = [red, green, blue] And I want it to be printed out on my web page as red green blue I tried this by using an argument in my HTML template {{colors_out}}, where I pass a string with "\n" to it as follows: colors_out = "" for i in range(len(colors)): colors_out += (str(i) + ". " + colors[i] + "\n") However, that does nothing but add a space between my colors. It prints out these: 0. red 1. green 2. blue instead of my desired format. I tried replacing "n" with "<br>" in the for loop above too, but then it will result in: 0. red<br>1. green<br>2. blue<br>
[ "One way, which you can try to use a for loop in Jinja2 in your HTML.\nYou can simply pass a list to the HTML and then use for loop in Jinja2 along with the tags to print the outputs on seperate lines -\nFor example in your case -\nIn your Flask code -\nfrom flask import Flask, render_template, request\napp = Flask(__name__)\n\[email protected]('/')\ndef index():\n return render_template('index.html')\n\[email protected]('/colors', methods=[\"GET\",\"POST\"])\ndef test():\n\n #your other code\n\n colors = ['red', 'green', 'blue']\n return render_template('index.html',color=colors)\n\nif __name__ == '__main__':\n app.run()\n\nAnd your HTML code -\njust add this for loop, where i iterates over all the colors and print them in seperate lines\n{% for i in color %}\n<p>{{ i }} </p>\n{% endfor %}\n\nyou can also use any other tag as per your requirement, it will still print on new line.\n" ]
[ 1 ]
[]
[]
[ "flask", "html", "jinja2", "python", "python_3.x" ]
stackoverflow_0074629901_flask_html_jinja2_python_python_3.x.txt
Q: how to check and change elements of array in 2d array? I started a game project (small project) in Python 3 but I don't know how to iterate over a 2D array and change its elements. Is it possible to write the game logic (Tic-Tac-Toe, '3 symbols on one line wins') with a 2D array initialised to 0, so that when an element is changed to 'O' or 'X' its value is replaced with one or two? In other words, a matrix of 9 elements would be checked and, if an element was changed, that element is updated (I don't know how to build a visualisation of the game; I have only worked with JS-HTML-CSS and can make it there, but Python 3 is new to me). My array, but an error occurs: array = [] def createMatrix(mtrx): mtrx = [[mtrx[i][j].append(0) for i in range(3)] for j in range(3)] print(mtrx) createMatrix(array) A: >>> a= [] >>> a.append(9) >>> print(a.append(9)) None >>> What's going on here? Well, list.append returns None, as for most functions which mutate data rather than generating new values. You can see this in your code with: [[mtrx[i][j].append(0) for i in range(3)] for j in range(3)] Those indices also don't exist yet, so you will get an index error. You likely want: [[0 for _ in range(3)] for _ in range(3)] Which generates: [[0, 0, 0], [0, 0, 0], [0, 0, 0]] As an alternative, you could do the following, but that isn't better than matrix = [[0 for _ in range(3)] for _ in range(3)]. matrix = [] for _ in range(3): row = [] for _ in range(3): row.append(0) matrix.append(row)
how to check and change elements of array in 2d array?
I started a game project(small project) on Python3 but i dunno how to iterate 2d array and change 'em. IS IT POSSIBLE TO write a game logic (Tic-Tac-Toe ''3 symbols on one line wins'') with 2d array with 0 index and if it changed to 'O' or 'X' replace current iterating element index to one or two ?!. in two words - matrix of 9 elements would been checked and if it element index was changed change an element(i dont know how to make visualization of game example, I only worked on JS-HTML-CSS and can make it there, BUT PYTHON3 (IS NEW TO ME) I DONT KNOW!!!) My array but error occurs array = [] def createMatrix(mtrx): mtrx = [[mtrx[i][j].append(0) for i in range(3)] for j in range(3)] print(mtrx) createMatrix(array)
[ ">>> a= []\n>>> a.append(9)\n>>> print(a.append(9))\nNone\n>>>\n\nWhat's going on here? Well, list.append returns None. As fo most functions which mutate data rather than generating new values. You can see this in your code with:\n[[mtrx[i][j].append(0) for i in range(3)] for j in range(3)]\n\nThose indices also don't exist yet, so you will get an index error.\nYou likely want:\n[[0 for _ in range(3)] for _ in range(3)]\n\nWhich generates:\n[[0, 0, 0], [0, 0, 0], [0, 0, 0]]\n\nAs an alternative, you could do the following, but that isn't better than matrix = [[0 for _ in range(3)] for _ in range(3)].\nmatrix = []\nfor _ in range(3):\n row = []\n for _ in range(3):\n row.append(0)\n matrix.append(row)\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.8", "tic_tac_toe" ]
stackoverflow_0074630144_python_python_3.8_tic_tac_toe.txt
Q: How to write string to csv that contain escape chars? I am trying to write a list of strings to csv using csv.writer. writer = csv.writer(f) writer.writerow(some_text) However, some of the strings contain a random escape character, which seems to be causing the following error : _csv.Error: need to escape, but no escapechar set I've tried using the escapechar option in csv.writer like the following writer = csv.writer(f, escapechar='\\') but this seems to be a partial solution, since all the newline characters(\n) are not recognized. How would I solve this problem? An example of a problematic string would be the following: problem_string = "this \n sentence \% is \n problematic \g" A: What format do you want to achieve in the end? Writing this to a csv seems to be leading to some odd outcomes anyway. In any case, both of these code work for me without errors, both giving slightly different results with respect to escape characters. With normal string: import csv with open('test2.csv', 'w') as csvfile: csvwriter = csv.writer(csvfile) problem_string = "this \n sentence \% is \n problematic \g" csvwriter.writerow(problem_string) With raw input: import csv with open('test2.csv', 'w') as csvfile: csvwriter = csv.writer(csvfile) problem_string = r"this \n sentence \% is \n problematic \g" csvwriter.writerow(problem_string)
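This error is raised when the dialect is not allowed to quote (for example quoting=csv.QUOTE_NONE) and a field contains a character that would otherwise need escaping. If quoting is acceptable, an alternative to setting escapechar is to let the writer quote the fields, which keeps embedded newlines intact; a hedged sketch:

import csv

rows = [["this \n sentence", "is", r"problematic \g"]]

with open("out.csv", "w", newline="") as f:
    writer = csv.writer(f, quoting=csv.QUOTE_ALL)   # every field is quoted, so \n survives inside quotes
    writer.writerows(rows)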
How to write string to csv that contain escape chars?
I am trying to write a list of strings to csv using csv.writer. writer = csv.writer(f) writer.writerow(some_text) However, some of the strings contain a random escape character, which seems to be causing the following error : _csv.Error: need to escape, but no escapechar set I've tried using the escapechar option in csv.writer like the following writer = csv.writer(f, escapechar='\\') but this seems to be a partial solution, since all the newline characters(\n) are not recognized. How would I solve this problem? An example of a problematic string would be the following: problem_string = "this \n sentence \% is \n problematic \g"
[ "What format do you want to achieve in the end? Writing this to a csv seems to be leading to some odd outcomes anyway.\nIn any case, both of these code work for me without errors, both giving slightly different results with respect to escape characters.\nWith normal string:\nimport csv\n\nwith open('test2.csv', 'w') as csvfile: \n csvwriter = csv.writer(csvfile) \n problem_string = \"this \\n sentence \\% is \\n problematic \\g\"\n csvwriter.writerow(problem_string)\n\nWith raw input:\nimport csv\n\nwith open('test2.csv', 'w') as csvfile: \n csvwriter = csv.writer(csvfile) \n problem_string = r\"this \\n sentence \\% is \\n problematic \\g\"\n csvwriter.writerow(problem_string)\n\n" ]
[ 0 ]
[]
[]
[ "csv", "python", "string" ]
stackoverflow_0074630046_csv_python_string.txt
Q: how to make discord emoji with hyperlink with python I'm trying to make an emoji with click, but I don't know how to do it... this is the code i am using: import discord from discord.ext import commands bot = commands.Bot(command_prefix='!', description="help") bot.remove_command("help") @bot.command() async def emojibot(ctx): #Comando a decir await ctx.send('<:HabboHotel:1023577803817492490:https://habbo.es>') @bot.event async def on_ready(): print("ready") bot.run('') This is the example I'm currently looking for: A: As the message you are sending isn't an embed but is plain text, I believe this should work: async def emojibot(ctx): #Comando a decir await ctx.send('[:HabboHotel:](https://habbo.es)')
how to make discord emoji with hyperlink with python
I'm trying to make an emoji with click, but I don't know how to do it... this is the code i am using: import discord from discord.ext import commands bot = commands.Bot(command_prefix='!', description="help") bot.remove_command("help") @bot.command() async def emojibot(ctx): #Comando a decir await ctx.send('<:HabboHotel:1023577803817492490:https://habbo.es>') @bot.event async def on_ready(): print("ready") bot.run('') This is the example I'm currently looking for:
[ "As the message you are sending isn't an embed but is plain text, I believe this should work:\nasync def emojibot(ctx): #Comando a decir\n await ctx.send('[:HabboHotel:](https://habbo.es)') \n\n" ]
[ 0 ]
[]
[]
[ "emoji", "python" ]
stackoverflow_0074629465_emoji_python.txt
Q: why doesn't pandas column get overwritten by other column? I am trying to overwrite the row values for column A and B in df1 with the values from df2. My dfs look as such: df1 'A' 'B' 'C' 23 0 cat orange 24 0 cat orange 25 0 cat orange df2 'A' 'B' 'C' 56 2 dog yellow 64 4 rat orange 85 2 bat red The indices here are different and I would like to overwrite row 25 of df1 with the values of 64 from df2 for only column A and B. I have tried something like this df1[['A','B']].loc[25] = df2[['A','B']].loc[64] This executes but doesn't actually seem to overwrite anything as when I call df1[['A','B']].loc[25] I still get the original values. I would expect the new df1 to look like this: df 'A' 'B' 'C' 23 0 cat orange 24 0 cat orange 25 2 bat orange Can someone explain why this doesn't work for me please? A: Cause df1[['A','B']] is a new DataFrame, try: df1.loc[25, ['A','B']] = df2[['A','B']].loc[64] A: df1.loc[25, ['A', 'B']] = df2.loc[64, ['A', 'B']]
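To see why the original assignment has no effect: df1[['A','B']] builds a brand-new DataFrame, so the write lands on that temporary copy instead of df1. A short illustration using the frames from the question (pandas may emit a SettingWithCopyWarning on the second line):

tmp = df1[['A', 'B']]                    # copy, not a view of df1
tmp.loc[25] = df2[['A', 'B']].loc[64]    # modifies tmp only
print(df1.loc[25])                       # unchanged

df1.loc[25, ['A', 'B']] = df2.loc[64, ['A', 'B']]   # one .loc call edits df1 in place
print(df1.loc[25])                       # now updated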
why doesn't pandas column get overwritten by other column?
I am trying to overwrite the row values for column A and B in df1 with the values from df2. My dfs look as such: df1 'A' 'B' 'C' 23 0 cat orange 24 0 cat orange 25 0 cat orange df2 'A' 'B' 'C' 56 2 dog yellow 64 4 rat orange 85 2 bat red The indices here are different and I would like to overwrite row 25 of df1 with the values of 64 from df2 for only column A and B. I have tried something like this df1[['A','B']].loc[25] = df2[['A','B']].loc[64] This executes but doesn't actually seem to overwrite anything as when I call df1[['A','B']].loc[25] I still get the original values. I would expect the new df1 to look like this: df 'A' 'B' 'C' 23 0 cat orange 24 0 cat orange 25 2 bat orange Can someone explain why this doesn't work for me please?
[ "Because df1[['A','B']] returns a new DataFrame (a copy), assigning through it never reaches df1; select rows and columns in a single .loc call instead:\ndf1.loc[25, ['A','B']] = df2[['A','B']].loc[64]\n\n", "df1.loc[25, ['A', 'B']] = df2.loc[64, ['A', 'B']]\n\n" ]
[ 0, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074630226_pandas_python.txt
Q: Creating multiple dataframes from a stored procedure I'm working with a stored procedure in which I pass it a start and end date and it returns data. Im passing it ten different dates and making ten calls to it, see below: match1 = sp_data(startDate = listOfDates[0], endDate=listOfDates[0]) match2 = sp_data(startDate = listOfDates[1], endDate=listOfDates[1]) match3 = sp_data(startDate = listOfDates[2], endDate=listOfDates[2]) match4 = sp_data(startDate = listOfDates[3], endDate=listOfDates[3]) match5 = sp_data(startDate = listOfDates[4], endDate=listOfDates[4]) match6 = sp_data(startDate = listOfDates[5], endDate=listOfDates[5]) match7 = sp_data(startDate = listOfDates[6], endDate=listOfDates[6]) match8 = sp_data(startDate = listOfDates[7], endDate=listOfDates[7]) match9 = sp_data(startDate = listOfDates[8], endDate=listOfDates[8]) match10 = sp_data(startDate = listOfDates[9], endDate=listOfDates[9]) See listOfDates pandas series below: print(listOfDates) 0 20220524 1 20220613 2 20220705 3 20220713 4 20220720 5 20220805 6 20220903 7 20220907 8 20220928 9 20221024 Name: TradeDate, dtype: object Is there a better and more efficient way of doing this? Potentially in a loop of some kind? Any help greatly appreciated, thanks! A: You could use a list comprehension to make a list of matches: matches = [sp_data(startDate=trade_date, endDate=trade_date) for trade_date in listOfDates]
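If the per-date results are later needed as one table, the list built in the answer can be combined; a hedged sketch assuming each sp_data call returns a pandas DataFrame:

import pandas as pd

matches = [sp_data(startDate=d, endDate=d) for d in listOfDates]
combined = pd.concat(matches, keys=listOfDates.tolist(), names=["TradeDate", "row"])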
Creating multiple dataframes from a stored procedure
I'm working with a stored procedure in which I pass it a start and end date and it returns data. Im passing it ten different dates and making ten calls to it, see below: match1 = sp_data(startDate = listOfDates[0], endDate=listOfDates[0]) match2 = sp_data(startDate = listOfDates[1], endDate=listOfDates[1]) match3 = sp_data(startDate = listOfDates[2], endDate=listOfDates[2]) match4 = sp_data(startDate = listOfDates[3], endDate=listOfDates[3]) match5 = sp_data(startDate = listOfDates[4], endDate=listOfDates[4]) match6 = sp_data(startDate = listOfDates[5], endDate=listOfDates[5]) match7 = sp_data(startDate = listOfDates[6], endDate=listOfDates[6]) match8 = sp_data(startDate = listOfDates[7], endDate=listOfDates[7]) match9 = sp_data(startDate = listOfDates[8], endDate=listOfDates[8]) match10 = sp_data(startDate = listOfDates[9], endDate=listOfDates[9]) See listOfDates pandas series below: print(listOfDates) 0 20220524 1 20220613 2 20220705 3 20220713 4 20220720 5 20220805 6 20220903 7 20220907 8 20220928 9 20221024 Name: TradeDate, dtype: object Is there a better and more efficient way of doing this? Potentially in a loop of some kind? Any help greatly appreciated, thanks!
[ "You could use a list comprehension to make a list of matches:\nmatches = [sp_data(startDate=trade_date, endDate=trade_date) for trade_date in listOfDates]\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "loops", "pandas", "python" ]
stackoverflow_0074630195_dataframe_loops_pandas_python.txt
Q: Looping a comprehension of a list I have a list: lst = [[['X', 'A'], 1, 2, 3], [['Y', 'B'], 1, 2, 3], [['Z', 'C'], 1, 2, 3]] And I want to turn it into: new_lst = [['X', 1, 2, 3], ['A', 1, 2, 3], ['Y', 1, 2, 3], ['B', 1, 2, 3], ['Z', 1, 2, 3], ['C', 1, 2, 3]] I've got it to work with a single one of them with a comprehension. lst2 = [['X', 'Y'], 1, 2, 3] fst, *rest = lst2 new_lst3 = [[i, *rest] for i in fst] Which gives me new_lst3 = [['X', 1, 2, 3], ['Y', 1, 2, 3]] But I don't know how to loop to make it work on the full list. Any good solutions?
Looping a comprehension of a list
I have a list: lst = [[['X', 'A'], 1, 2, 3], [['Y', 'B'], 1, 2, 3], [['Z', 'C'], 1, 2, 3]] And i want to turn it into: new_lst = [['X', 1, 2, 3], ['A', 1, 2, 3] ['Y', 1, 2, 3], ['B', 1, 2, 3], ['Z', 1, 2, 3], ['C', 1, 2, 3]] I've got it to work with a a single one of them with comprehension. lst2 = [['X', 'Y'], 1, 2, 3] fst, *rest = lst2 new_lst3= [[i, *rest] for i in fst] Which gives me new_list3 = [['X', 1, 2, 3], ['Y', 1, 2, 3]] But I don't know how to loop to make it work on the full list. Any good solutions?
[ "You can unpack each sublist into the list of letters and the \"rest\", then iterate over the letters to build new sublists from a letter and the rest.\nnew_lst = [[c, *rest] for letters, *rest in lst for c in letters]\n\n", "You're only missing the loop to iterate through the lst\nfrom pprint import pprint\n\n\nlst = [[['X', 'A'], 1, 2, 3], [['Y', 'B'], 4, 5, 6], [['Z', 'C'], 7, 8, 9]]\n\nnew_lst = []\nfor elem in lst:\n fst, *rest = elem\n new_lst.extend([[i, *rest] for i in fst])\n\npprint(new_lst, indent=4)\n\noutput\n[ ['X', 1, 2, 3],\n ['A', 1, 2, 3],\n ['Y', 4, 5, 6],\n ['B', 4, 5, 6],\n ['Z', 7, 8, 9],\n ['C', 7, 8, 9]]\n\n", "You can turn your list comprehension for a single element into a nested list comprehension for the entire list:\n>>> lst = [[['X', 'A'], 1, 2, 3], [['Y', 'B'], 1, 2, 3], [['Z', 'C'], 1, 2, 3]]\n>>> [[i, *rest] for (fst, *rest) in lst for i in fst]\n[['X', 1, 2, 3], ['A', 1, 2, 3], ['Y', 1, 2, 3], ['B', 1, 2, 3], ['Z', 1, 2, 3], ['C', 1, 2, 3]]\n\n" ]
[ 3, 2, 2 ]
[ "You could just replace the first term right with the first letter right?\nlst = [[['X', 'A'], 1, 2, 3], [['Y', 'B'], 1, 2, 3], [['Z', 'C'], 1, 2, 3]]\n\nfor item in lst:\n item[0] = item[0][0];\n\nprint(lst)\n\n" ]
[ -1 ]
[ "list", "list_comprehension", "python" ]
stackoverflow_0074630132_list_list_comprehension_python.txt
Q: Apply rank with percentile, on python polars, for a set of columns on a dataframe df = pl.DataFrame( { "era": ["01", "01", "02", "02", "03", "03"], "pred1": [1, 2, 3, 4, 5,6], "pred2": [2,4,5,6,7,8], "pred3": [3,5,6,8,9,1], "something_else": [5,4,3,67,5,4], } ) pred_cols = ["pred1", "pred2", "pred3"] ERA_COL = "era" I'm trying to do an equivalent to pandas rank percentile on Polars. Polars' rank function lacks the pct flag Pandas has. I looked at another question here: how to replace pandas df.rank(axis=1) with polars But the results from the question (and applying it to my code), have something off. Calculating rank percentage in Pandas, gives me a single float, the example Polars provided gives me an array, not a float, so something different is being calculated on the example. As an example, Pandas code is this one: df[list(pred_cols)] = df.groupby(ERA_COL, group_keys=False).apply( lambda d: d[list(pred_cols)].rank(pct=True) ) A: You can use the .rank() / .count() from the previous question combined with .over() >>> df.select( ... (pl.col(pred_cols).rank() / pl.col(pred_cols).count()) ... .over(ERA_COL) ... ) shape: (6, 3) ┌───────┬───────┬───────┐ │ pred1 | pred2 | pred3 │ │ --- | --- | --- │ │ f64 | f64 | f64 │ ╞═══════╪═══════╪═══════╡ │ 0.5 | 0.5 | 0.5 │ ├───────┼───────┼───────┤ │ 1.0 | 1.0 | 1.0 │ ├───────┼───────┼───────┤ │ 0.5 | 0.5 | 0.5 │ ├───────┼───────┼───────┤ │ 1.0 | 1.0 | 1.0 │ ├───────┼───────┼───────┤ │ 0.5 | 0.5 | 1.0 │ ├───────┼───────┼───────┤ │ 1.0 | 1.0 | 0.5 │ └─//────┴─//────┴─//────┘ .with_columns() to "replace" the original values. >>> df.with_columns( ... (pl.col(pred_cols).rank() / pl.col(pred_cols).count()) ... .over(ERA_COL) ... ) shape: (6, 5) ┌─────┬───────┬───────┬───────┬────────────────┐ │ era | pred1 | pred2 | pred3 | something_else │ │ --- | --- | --- | --- | --- │ │ str | f64 | f64 | f64 | i64 │ ╞═════╪═══════╪═══════╪═══════╪════════════════╡ │ 01 | 0.5 | 0.5 | 0.5 | 5 │ ├─────┼───────┼───────┼───────┼────────────────┤ │ 01 | 1.0 | 1.0 | 1.0 | 4 │ ├─────┼───────┼───────┼───────┼────────────────┤ │ 02 | 0.5 | 0.5 | 0.5 | 3 │ ├─────┼───────┼───────┼───────┼────────────────┤ │ 02 | 1.0 | 1.0 | 1.0 | 67 │ ├─────┼───────┼───────┼───────┼────────────────┤ │ 03 | 0.5 | 0.5 | 1.0 | 5 │ ├─────┼───────┼───────┼───────┼────────────────┤ │ 03 | 1.0 | 1.0 | 0.5 | 4 │ └─//──┴─//────┴─//────┴─//────┴─//─────────────┘
Apply rank with percentile, on python polars, for a set of columns on a dataframe
df = pl.DataFrame( { "era": ["01", "01", "02", "02", "03", "03"], "pred1": [1, 2, 3, 4, 5,6], "pred2": [2,4,5,6,7,8], "pred3": [3,5,6,8,9,1], "something_else": [5,4,3,67,5,4], } ) pred_cols = ["pred1", "pred2", "pred3"] ERA_COL = "era" I'm trying to do an equivalent to pandas rank percentile on Polars. Polars' rank function lacks the pct flag Pandas has. I looked at another question here: how to replace pandas df.rank(axis=1) with polars But the results from the question (and applying it to my code), have something off. Calculating rank percentage in Pandas, gives me a single float, the example Polars provided gives me an array, not a float, so something different is being calculated on the example. As an example, Pandas code is this one: df[list(pred_cols)] = df.groupby(ERA_COL, group_keys=False).apply( lambda d: d[list(pred_cols)].rank(pct=True) )
[ "You can use the .rank() / .count() from the previous question combined with .over()\n>>> df.select(\n... (pl.col(pred_cols).rank() / pl.col(pred_cols).count())\n... .over(ERA_COL)\n... )\nshape: (6, 3)\n┌───────┬───────┬───────┐\n│ pred1 | pred2 | pred3 │\n│ --- | --- | --- │\n│ f64 | f64 | f64 │\n╞═══════╪═══════╪═══════╡\n│ 0.5 | 0.5 | 0.5 │\n├───────┼───────┼───────┤\n│ 1.0 | 1.0 | 1.0 │\n├───────┼───────┼───────┤\n│ 0.5 | 0.5 | 0.5 │\n├───────┼───────┼───────┤\n│ 1.0 | 1.0 | 1.0 │\n├───────┼───────┼───────┤\n│ 0.5 | 0.5 | 1.0 │\n├───────┼───────┼───────┤\n│ 1.0 | 1.0 | 0.5 │\n└─//────┴─//────┴─//────┘\n\n.with_columns() to \"replace\" the original values.\n>>> df.with_columns(\n... (pl.col(pred_cols).rank() / pl.col(pred_cols).count())\n... .over(ERA_COL)\n... )\nshape: (6, 5)\n┌─────┬───────┬───────┬───────┬────────────────┐\n│ era | pred1 | pred2 | pred3 | something_else │\n│ --- | --- | --- | --- | --- │\n│ str | f64 | f64 | f64 | i64 │\n╞═════╪═══════╪═══════╪═══════╪════════════════╡\n│ 01 | 0.5 | 0.5 | 0.5 | 5 │\n├─────┼───────┼───────┼───────┼────────────────┤\n│ 01 | 1.0 | 1.0 | 1.0 | 4 │\n├─────┼───────┼───────┼───────┼────────────────┤\n│ 02 | 0.5 | 0.5 | 0.5 | 3 │\n├─────┼───────┼───────┼───────┼────────────────┤\n│ 02 | 1.0 | 1.0 | 1.0 | 67 │\n├─────┼───────┼───────┼───────┼────────────────┤\n│ 03 | 0.5 | 0.5 | 1.0 | 5 │\n├─────┼───────┼───────┼───────┼────────────────┤\n│ 03 | 1.0 | 1.0 | 0.5 | 4 │\n└─//──┴─//────┴─//────┴─//────┴─//─────────────┘\n\n" ]
[ 2 ]
[]
[]
[ "pandas", "python", "python_polars", "rank" ]
stackoverflow_0074628569_pandas_python_python_polars_rank.txt
Q: Python Docx Module Is Not Inside The Site-Packages Folder I am new to python and I am trying to install the docx module, however it does not appear inside the site-packages folder. First, I thought it was not showing up because my pycharm was outdated. Updated the base interpreter from 3.9 to 3.10 as well as pycharm. Deleted the venv folder and all that jazz. Opened the windows cmd and wrote pip install python--docx It shows that it is already installed: Requirement already satisfied: python--docx in c:\users\me\appdata\local\programs\python\python310\lib\site-packages (0.8.11) Requirement already satisfied: lxml>=2.3.2 in c:\users\me\appdata\local\programs\python\python310\lib\site-packages (from python--docx) (4.8.0) But is nowhere to be found in the site-packages folder in either version of Python, what should I do? A: My site-packages folder in Pycharm showed that they were disabled, colored orange, but it turns out that I just had to delete everthing inside the venv folder again and just check the box inherit global site-packages when changing the base interpreter. Everthing is fixed now! A: Seems like python-docx doesn't work anymore. It throws a legacy error and appears to be out of date ver. 0.8.11. When you google - seems like there is a newer version https://pypi.org/project/docx/
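A frequent cause of "installed but not importable" is that pip and PyCharm point at different interpreters. A quick, hedged check from the PyCharm Python console:

import sys
print(sys.executable)    # the interpreter PyCharm actually runs
print(sys.path)          # the folders searched for site-packages

# then install against exactly that interpreter from a terminal, e.g.:
#   <path-printed-above> -m pip install python-docx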
Python Docx Module Is Not Inside The Site-Packages Folder
I am new to python and I am trying to install the docx module, however it does not appear inside the site-packages folder. First, I thought it was not showing up because my pycharm was outdated. Updated the base interpreter from 3.9 to 3.10 as well as pycharm. Deleted the venv folder and all that jazz. Opened the windows cmd and wrote pip install python--docx It shows that it is already installed: Requirement already satisfied: python--docx in c:\users\me\appdata\local\programs\python\python310\lib\site-packages (0.8.11) Requirement already satisfied: lxml>=2.3.2 in c:\users\me\appdata\local\programs\python\python310\lib\site-packages (from python--docx) (4.8.0) But is nowhere to be found in the site-packages folder in either version of Python, what should I do?
[ "My site-packages folder in PyCharm showed that they were disabled, colored orange, but it turns out that I just had to delete everything inside the venv folder again and just check the box inherit global site-packages when changing the base interpreter. Everything is fixed now!\n", "Seems like python-docx doesn't work anymore. It throws a legacy error and appears to be out of date ver. 0.8.11. When you google - seems like there is a newer version https://pypi.org/project/docx/\n" ]
[ 0, 0 ]
[]
[]
[ "module", "python" ]
stackoverflow_0072070938_module_python.txt
Q: Ansible run recursive script or module In Ansible, I can run a python script if it contains code in the same script. However, if I try to use name: Restarting service on different nodes hosts: nodes connection: ssh tasks: - name: Restarting tomcat service script: main.py 1 args: executable: python3 And main.py has import restart_tomcat (restart_tomcat.py is present in the same folder as main.py) it is not able to import this module, though it is present in the same directory. How do I make it understand that the other supporting files for main.py are present in the same directory? Note: it fails when it tries to execute on remote servers. Edit: it would get too complicated to create a custom module in Ansible for every script we want to run. A: I think you should write a custom module; note that when you run any script, ansible creates a copy of it to a temp location. So any relative path you provided in the import will be messed up. You can run your playbook task with -vvv to confirm this. Here is an example (high level) of setting up a custom module: tree . ├── your_playbook.yml ├── library │ └── your_custom_module.py # write your code logic here └── module_utils └── restart_tomcat.py #this file contains common classes/functions In your module file (your_custom_module.py), you can do the import like this: #!/usr/bin/python3 from __future__ import (absolute_import, division, print_function) __metaclass__ = type from ansible.module_utils.basic import AnsibleModule from ansible.module_utils.restart_tomcat import * #this line will import the classes or functions present in the other file to the custom module You can find more details here and an example here and this. For supporting references, run the ansible-doc script command and navigate to the NOTES section.
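A minimal, hedged sketch of what library/your_custom_module.py from the answer could look like; the option name and the restart helper are placeholders, not something defined in the original question:

#!/usr/bin/python3
from __future__ import absolute_import, division, print_function
__metaclass__ = type

from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.restart_tomcat import restart_service   # shared helper; note: no ".py" in the import


def main():
    module = AnsibleModule(
        argument_spec=dict(
            service=dict(type="str", default="tomcat"),   # placeholder option
        )
    )
    result = restart_service(module.params["service"])    # placeholder call into the shared code
    module.exit_json(changed=True, msg=str(result))


if __name__ == "__main__":
    main()

# The playbook task then calls the module instead of script:, e.g.
#   - name: Restarting tomcat service
#     your_custom_module:
#       service: tomcat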
Ansible run recursive script or module
In Ansible, I can run a python script if it contains code in the same script. However, if i try to use name: Restarting service on different nodes hosts: nodes connection: ssh tasks: - name: Restarting tomcat service script: main.py 1 args: executable: python3 And main.py has import restart_tomcat (restart_tomcat.py is present in the same folder as main.py) it is not able to import this module , though present in the same directory. How to make it understand that the other supporting files for main.py is present in same directory. Note : it is failing, when its trying to execute it on remote servers Edit : It would get too complicated to create custom_module on Ansible for every example we want to run
[ "I think you should write a custom module; note that when you run any script, ansible creates a copy of it to a temp location. So any relative path you provided in the import will be messed up. You can run your playbook task with -vvv to confirm this.\nHere is an example(high level) of setting up a custom module:\n tree \n.\n├── your_playbook.yml\n├── library\n│ └── your_custom_module.py # write your code logic here \n└── module_utils\n └── restart_tomcat.py #this file contains common classes/functions\n\nIn your module file(your_custom_module.py), you can do the import like this:\n#!/usr/bin/python3\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\nfrom ansible.module_utils.basic import AnsibleModule \nfrom ansible.module_utils.restart_tomcat.py import * #this line will import the classes or functions present in the other file to the custom module\n\nYou can find more details here and an example here and this.\nFor supporting references, run the ansible-doc script command and navigate to the NOTES section.\n" ]
[ 4 ]
[]
[]
[ "ansible", "jenkins", "python" ]
stackoverflow_0074629645_ansible_jenkins_python.txt
Q: Discord.py List all server names where the bot is in I made a bot but I want the bot to make list all the server names where it is when you type a command. Can any one help me, please? A: await ctx.send('\n'.join(guild.name for guild in bot.guilds)) Just remember to pass an intent with guilds enabled in your bot's constructor A: @commands.command() async def servers(self, ctx): activeservers = client.guilds for guild in activeservers: await ctx.send(guild.name) print(guild.name) This code should work A: @client.command() async def server(ctx): servers = list(client.guilds) for server in servers: await ctx.send(server.name) it works for me...
Discord.py List all server names where the bot is in
I made a bot but I want the bot to make list all the server names where it is when you type a command. Can any one help me, please?
[ "\nawait ctx.send('\\n'.join(guild.name for guild in bot.guilds))\n\nJust remember to pass an intent with guilds enabled in your bot's constructor\n", "@commands.command()\nasync def servers(self, ctx):\n activeservers = client.guilds\n for guild in activeservers:\n await ctx.send(guild.name)\n print(guild.name)\n\nThis code should work\n", "@client.command()\nasync def server(ctx):\n servers = list(client.guilds)\n for server in servers:\n await ctx.send(server.name)\n\nit works for me...\n" ]
[ 2, 0, 0 ]
[]
[]
[ "discord.py", "python" ]
stackoverflow_0067058040_discord.py_python.txt
Q: Django JSONField is not able to encode smileys properly I plan to store a dict in a Django JSONField. One key of this dict is a comment a user can enter. And users like to add a lot of smileys to their comments... The problem is that some smileys are not saved properly in the DB. The database is MySQL 8.0.31, Django version is 4.0.8: JSONField is supported for this environment as reported in the documentation. The database default encoding is utf8mb4 and collation utf8mb4_general_ci. With that model: class TestJSONField(models.Model): data = models.JSONField() Here is the test case: comment=b'smiley : \xf0\x9f\x98\x8a'.decode() t=TestJSONField(pk=1, data={'comment':comment}) t.save() r=TestJSONField.objects.get(pk=1) print('BEFORE :', comment) print('AFTER :', r.data['comment'], '(str)') print('AFTER :', r.data['comment'].encode(), '(utf-8 encoded bytes)') which gives: BEFORE : smiley : AFTER : smiley : ? (str) AFTER : b'smiley : ?' (utf-8 encoded bytes) As you can see the smiley is not stored correctly. This smiley is 4-byte encoded, which may be the source of the problem because with 2-byte encoded chars I do not have any problem. With a TextField and using json dumps()/loads() I do not have any problem. Do you have an idea how to get 4-byte encoded smileys saved in a JSONField? A: The database setting was utf8mb4 and the tables were too, but the columns were still utf8mb3. After altering the column encoding to utf8mb4, everything works correctly now.
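One hedged way to apply that column fix from within Django is a raw-SQL migration; the app label, migration name and table name below are guesses based on the example model and would need to match the real project:

from django.db import migrations


class Migration(migrations.Migration):
    dependencies = [("myapp", "0001_initial")]            # placeholder app/migration
    operations = [
        migrations.RunSQL(
            "ALTER TABLE myapp_testjsonfield "
            "CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;",
            reverse_sql=migrations.RunSQL.noop,
        ),
    ]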
Django JSONField is not able to encode smileys properly
I plan to store a dict in a Django JSONField. One key of this dict is a comment a user can enter. And users like a lot to add some smileys in their comments... The problem is that some smileys, are saved properly in DB. The database is MySQL 8.0.31, Django version is 4.0.8 : JSONField is supported for this environment as reported in documentation. The database default encoding is utf8mb4 and collation utf8mb4_general_ci. With that model : class TestJSONField(models.Model): data = models.JSONField() Here is the test case : comment=b'smiley : \xf0\x9f\x98\x8a'.decode() t=TestJSONField(pk=1, data={'comment':comment}) t.save() r=TestJSONField.objects.get(pk=1) print('BEFORE :', comment) print('AFTER :', r.data['comment'], '(str)') print('AFTER :', r.data['comment'].encode(), '(utf-8 encoded bytes)') which gives : BEFORE : smiley : AFTER : smiley : ? (str) AFTER : b'smiley : ?' (utf-8 encoded bytes) As you can see the smiley is not stored correctly. This smiley is 4 bytes encoded, this may be the source of the problem because with 2-bytes encoded chars I do not have any problem. With a TextField and using json dumps()/loads() I do not have any problem. Do you have an idea how to have 4 bytes encoded smileys to be saved in a JSONField ?
[ "The database settings was utf8mb4, the tables were also the same, but not the columns that were in utf8mb3. After altering columns encoding to utf8mb4, everything is going right now.\n" ]
[ 0 ]
[]
[]
[ "django", "mysql", "python" ]
stackoverflow_0074534059_django_mysql_python.txt
Q: python merge tuples elements with index/key I'm trying to merge columns values from tuples with an index: source tuples with a lot of timestamps (1440 ~): tuples = [('2022-10-15 01:16:00', '5', '', '', 'hdd1', '1234'), ('2022-10-15 01:16:00', '', '4', '', 'hdd1', '1234'), ('2022-10-15 01:17:00', '10', '', '', 'hdd1', '1234'), ('2022-10-15 01:17:00', '', '25', '', 'hdd1', '1234'), ('2022-10-15 01:18:00', '1', '', '', 'hdd1', '1234'), ('2022-10-15 01:18:00', '', '2', '', 'hdd1', '1234'), ...] the index is the first element. desired tuples output: [('2022-10-15 01:16:00', '5', '4', '', 'hdd1', '1234'), ('2022-10-15 01:17:00', '10', '25', '', 'hdd1', '1234'), ('2022-10-15 01:18:00', '1', '2', '', 'hdd1', '1234')] my code: tuples = [('2022-10-15 01:16:00', '5', '', '', 'hdd1', '1234'), ('2022-10-15 01:16:00', '', '4', '', 'hdd1', '1234'),('2022-10-15 01:17:00', '10', '', '', 'hdd1', '1234'), ('2022-10-15 01:17:00', '', '25', '', 'hdd1', '1234'), ('2022-10-15 01:18:00', '1', '', '', 'hdd1', '1234'), ('2022-10-15 01:18:00', '', '2', '', 'hdd1', '1234')] result = [] key = lambda t: t[0] for letter,items in itertools.groupby(sorted(tuples,key=key),key): items = list(items) if len(items) == 1: result.append(items[0]+(0,0)) else: result.append(items[0]+items[1][1:]) print(result) many thanks for any help A: I think something like this is what you want: from itertools import groupby result = [] key = lambda t: t[0] for _,items in groupby(sorted(tuples, key=key), key): item = None for i, it in enumerate(items): # First item in group. Need to convert to list to edit. if not item: item = list(it) # Not first. Update item at correct index. else: item[1 + i] = it[1 + i] # Convert back to tuple and save. result.append(tuple(item)) for item in result: print(item) Output: ('2022-10-15 01:16:00', '5', '4', '', 'hdd1', '1234') ('2022-10-15 01:17:00', '10', '25', '', 'hdd1', '1234') ('2022-10-15 01:18:00', '1', '2', '', 'hdd1', '1234') A: Here is a solution using a dictionary to store the date while iterating over the tuples. #empty dict with date as key and a list placeholder as value r = {t[0]:["", "", "", "", ""] for t in tuples} #iterate over the tuples and populate the dict for (date, *other_fields) in tuples: for i, value in enumerate(other_fields): if value: #skip if it's empty r[date][i] = value #convert the dictionary in a list of tuples r = [tuple([k, *v]) for k,v in r.items()] print(r) #[('2022-10-15 01:16:00', '5', '4', '', 'hdd1', '1234'), ('2022-10-15 01:17:00', '10', '25', '', 'hdd1', '1234'), ('2022-10-15 01:18:00', '1', '2', '', 'hdd1', '1234')]
python merge tuples elements with index/key
I'm trying to merge columns values from tuples with an index: source tuples with a lot of timestamps (1440 ~): tuples = [('2022-10-15 01:16:00', '5', '', '', 'hdd1', '1234'), ('2022-10-15 01:16:00', '', '4', '', 'hdd1', '1234'), ('2022-10-15 01:17:00', '10', '', '', 'hdd1', '1234'), ('2022-10-15 01:17:00', '', '25', '', 'hdd1', '1234'), ('2022-10-15 01:18:00', '1', '', '', 'hdd1', '1234'), ('2022-10-15 01:18:00', '', '2', '', 'hdd1', '1234'), ...] the index is the first element. desired tuples output: [('2022-10-15 01:16:00', '5', '4', '', 'hdd1', '1234'), ('2022-10-15 01:17:00', '10', '25', '', 'hdd1', '1234'), ('2022-10-15 01:18:00', '1', '2', '', 'hdd1', '1234')] my code: tuples = [('2022-10-15 01:16:00', '5', '', '', 'hdd1', '1234'), ('2022-10-15 01:16:00', '', '4', '', 'hdd1', '1234'),('2022-10-15 01:17:00', '10', '', '', 'hdd1', '1234'), ('2022-10-15 01:17:00', '', '25', '', 'hdd1', '1234'), ('2022-10-15 01:18:00', '1', '', '', 'hdd1', '1234'), ('2022-10-15 01:18:00', '', '2', '', 'hdd1', '1234')] result = [] key = lambda t: t[0] for letter,items in itertools.groupby(sorted(tuples,key=key),key): items = list(items) if len(items) == 1: result.append(items[0]+(0,0)) else: result.append(items[0]+items[1][1:]) print(result) many thanks for any help
[ "I think something like this is what you want:\nfrom itertools import groupby\nresult = []\nkey = lambda t: t[0]\nfor _,items in groupby(sorted(tuples, key=key), key):\n item = None\n for i, it in enumerate(items):\n # First item in group. Need to convert to list to edit.\n if not item: item = list(it)\n # Not first. Update item at correct index.\n else: item[1 + i] = it[1 + i]\n # Convert back to tuple and save.\n result.append(tuple(item))\n\nfor item in result: print(item)\n\nOutput:\n('2022-10-15 01:16:00', '5', '4', '', 'hdd1', '1234')\n('2022-10-15 01:17:00', '10', '25', '', 'hdd1', '1234')\n('2022-10-15 01:18:00', '1', '2', '', 'hdd1', '1234')\n\n", "Here is a solution using a dictionary to store the date while iterating over the tuples.\n#empty dict with date as key and a list placeholder as value\nr = {t[0]:[\"\", \"\", \"\", \"\", \"\"] for t in tuples} \n\n\n#iterate over the tuples and populate the dict\nfor (date, *other_fields) in tuples:\n for i, value in enumerate(other_fields):\n if value: #skip if it's empty\n r[date][i] = value\n\n\n#convert the dictionary in a list of tuples\nr = [tuple([k, *v]) for k,v in r.items()]\nprint(r)\n\n#[('2022-10-15 01:16:00', '5', '4', '', 'hdd1', '1234'), ('2022-10-15 01:17:00', '10', '25', '', 'hdd1', '1234'), ('2022-10-15 01:18:00', '1', '2', '', 'hdd1', '1234')]\n\n" ]
[ 1, 0 ]
[]
[]
[ "python", "tuples" ]
stackoverflow_0074630148_python_tuples.txt
Q: set dask workers with an event loop for actors Context I am trying to instantiate a legacy data extractor by my dask worker using an actor pattern from dask.distributed import Client client = Client() connector = Sharepoint(CONF.sources["sharepoint"]) items = connector.enumerate_items() # extraction remote_extractor = client.submit( SharepointExtractor, CONF.sources["sharepoint"], connector, actor=True ) # Create Extractor on a worker extractor = remote_extractor.result() # Get back a pointer to that object futures = client.map( extractor.job, [i for i in items], retries=5, pure=False, ) _ = await client.gather(futures) The first thing the SharepointExtractor does is to get an http session from its connector class SharepointExtractor: def __init__( self, conf: ConfigTree, connector: Sharepoint, *args, **kwargs ) -> None: self.conf = conf self.session = connector.session_factory() .session_factory() basically returns a aiohttp.client.ClientSession enriched with an Oauth token (which motivates the choice for an actor). The problem at one point ClientSession's constructor calls asyncio.get_event_loop() which does not seem available in the worker ... File "/home/zar3bski/.cache/pypoetry/virtualenvs/poc-dask-iG-N0GH5-py3.10/lib/python3.10/site-packages/eteel/connectors/rest.py", line 96, in session_factory connector=TCPConnector(limit=30), File "/home/zar3bski/.cache/pypoetry/virtualenvs/poc-dask-iG-N0GH5-py3.10/lib/python3.10/site-packages/aiohttp/connector.py", line 767, in __init__ super().__init__( File "/home/zar3bski/.cache/pypoetry/virtualenvs/poc-dask-iG-N0GH5-py3.10/lib/python3.10/site-packages/aiohttp/connector.py", line 234, in __init__ loop = get_running_loop(loop) File "/home/zar3bski/.cache/pypoetry/virtualenvs/poc-dask-iG-N0GH5-py3.10/lib/python3.10/site-packages/aiohttp/helpers.py", line 287, in get_running_loop loop = asyncio.get_event_loop() File "/usr/lib/python3.10/asyncio/events.py", line 656, in get_event_loop raise RuntimeError('There is no current event loop in thread %r.' RuntimeError: There is no current event loop in thread 'Dask-Default-Threads-484036-0'. Since I am in a dev/local context, from what I understand, I end up with a LocalCluster Going async I naively thought that going async would automagicaly inject the notion of event_loop into the workers. client = await Client(asynchronous=True) connector = Sharepoint(CONF.sources["sharepoint"]) items = connector.enumerate_items() # extraction remote_extractor = await client.submit( SharepointExtractor, CONF.sources["sharepoint"], connector, actor=True ) # Create Extractor on a worker extractor = await remote_extractor # Get back a pointer to that object But the same error occurs Setting an event loop explicitly loop = asyncio.new_event_loop() client = await Client( asynchronous=True, loop=loop ) This time, the error is slightly more enigmatic .... File "/home/zar3bski/.cache/pypoetry/virtualenvs/poc-dask-iG-N0GH5-py3.10/lib/python3.10/site-packages/distributed/client.py", line 923, in __init__ self._loop_runner = LoopRunner(loop=loop, asynchronous=asynchronous) File "/home/zar3bski/.cache/pypoetry/virtualenvs/poc-dask-iG-N0GH5-py3.10/lib/python3.10/site-packages/distributed/utils.py", line 451, in __init__ if not loop.asyncio_loop.is_running(): AttributeError: '_UnixSelectorEventLoop' object has no attribute 'asyncio_loop' (not sure what this constructor is waiting for loop) Do you have examples of dask actors involving resources from aiohttp (or any other async lib)? 
How should I set dask workers got get an event loop avaiblable to my actors? Edit Following @mdurant approach (a kind of singleton based importation of the extractor from a importable module) def get_extractor(CONF): if extractor[0] is None: connector = Sharepoint(CONF.sources["sharepoint"]) extractor[0] = SharepointBis(CONF.sources["sharepoint"], connector) return extractor[0] def workload(CONF, item): extractor = get_extractor(CONF) return extractor.job(item) def main(): client = Client() connector = Sharepoint(CONF.sources["sharepoint"]) items = connector.enumerate_items() futures = client.map( workload, [CONF for _ in range(len(items))], [i for i in items], retries=5, pure=False, ) _ = client.gather(futures) I still get 2022-12-01 10:05:54,923 - distributed.worker - WARNING - Compute Failed Key: workload-ffcf0f1a-8aee-41d1-9ad2-f7eea91fa107-41 Function: workload args: (<eteel.conf.ConfGenerator object at 0x7fae8040d4e0>, 'firex1.sharepoint.com,930e9ef8-6bdf-4484-9883-6aa9965c548f,aed0d0bd-a659-4dbf-bbaa-a56f4efa3b0c') kwargs: {} Exception: 'RuntimeError("There is no current event loop in thread \'Dask-Default-Threads-166860-1\'.")' same goes with a Client(asynchronous=True);which drives me back to my question: how can I have an event loop in a Dask Thread? I have a strong intuition that this has something to do with Client(asynchronous=True, loop={this parameter}) A: OK, I think there is some confusion going on in this question, so I will do my best to clarify the situation. There are three main points: some things cannot be serialised between processes easily or at all some objects are expensive to create per process, and it would be nice to only do it once the work must happen in an async context Here is how I would do it. Put this in an importable module. extractor = [None] def get_extractor(CONF): if extractor[0] is None: connector = Sharepoint(CONF.sources["sharepoint"]) extractor[0] = SharepointExtractor(CONF.sources["sharepoint"], connector) return extractor[0] async def workload(CONF, item): extractor = get_extractor(CONF) return await extractor.job(item, retries=5) if __name__ == "__main__": # or run this elsewhere client = ... items = ... futures = client.map(workload, items) output = client.gather(futures) I do not know from the OP which parts of the workload are coroutines, I am guessing the .job method - but it should be obvious what I am doing. I note the original code would not have worked in a simple non-dask session, and it is always best to start off with something that works before trying to daskify it. On async in dask: client.map/submit supports coroutine functions, and they will be executed on the same event loop as the main worker. That's all you need here. All the distributed components (worker, scheduler, client) are async, server-like implementations with event loops, but execution of worker code does not normally happen in the same thread as the one running that server. client(asynchronous=True) implies that the client is to be constructed and operated on only from within coroutines - and that the client's event loop is in the current thread. This is probably not what you want, unless you know what you are doing.
set dask workers with an event loop for actors
Context I am trying to instantiate a legacy data extractor by my dask worker using an actor pattern from dask.distributed import Client client = Client() connector = Sharepoint(CONF.sources["sharepoint"]) items = connector.enumerate_items() # extraction remote_extractor = client.submit( SharepointExtractor, CONF.sources["sharepoint"], connector, actor=True ) # Create Extractor on a worker extractor = remote_extractor.result() # Get back a pointer to that object futures = client.map( extractor.job, [i for i in items], retries=5, pure=False, ) _ = await client.gather(futures) The first thing the SharepointExtractor does is to get an http session from its connector class SharepointExtractor: def __init__( self, conf: ConfigTree, connector: Sharepoint, *args, **kwargs ) -> None: self.conf = conf self.session = connector.session_factory() .session_factory() basically returns a aiohttp.client.ClientSession enriched with an Oauth token (which motivates the choice for an actor). The problem at one point ClientSession's constructor calls asyncio.get_event_loop() which does not seem available in the worker ... File "/home/zar3bski/.cache/pypoetry/virtualenvs/poc-dask-iG-N0GH5-py3.10/lib/python3.10/site-packages/eteel/connectors/rest.py", line 96, in session_factory connector=TCPConnector(limit=30), File "/home/zar3bski/.cache/pypoetry/virtualenvs/poc-dask-iG-N0GH5-py3.10/lib/python3.10/site-packages/aiohttp/connector.py", line 767, in __init__ super().__init__( File "/home/zar3bski/.cache/pypoetry/virtualenvs/poc-dask-iG-N0GH5-py3.10/lib/python3.10/site-packages/aiohttp/connector.py", line 234, in __init__ loop = get_running_loop(loop) File "/home/zar3bski/.cache/pypoetry/virtualenvs/poc-dask-iG-N0GH5-py3.10/lib/python3.10/site-packages/aiohttp/helpers.py", line 287, in get_running_loop loop = asyncio.get_event_loop() File "/usr/lib/python3.10/asyncio/events.py", line 656, in get_event_loop raise RuntimeError('There is no current event loop in thread %r.' RuntimeError: There is no current event loop in thread 'Dask-Default-Threads-484036-0'. Since I am in a dev/local context, from what I understand, I end up with a LocalCluster Going async I naively thought that going async would automagicaly inject the notion of event_loop into the workers. client = await Client(asynchronous=True) connector = Sharepoint(CONF.sources["sharepoint"]) items = connector.enumerate_items() # extraction remote_extractor = await client.submit( SharepointExtractor, CONF.sources["sharepoint"], connector, actor=True ) # Create Extractor on a worker extractor = await remote_extractor # Get back a pointer to that object But the same error occurs Setting an event loop explicitly loop = asyncio.new_event_loop() client = await Client( asynchronous=True, loop=loop ) This time, the error is slightly more enigmatic .... File "/home/zar3bski/.cache/pypoetry/virtualenvs/poc-dask-iG-N0GH5-py3.10/lib/python3.10/site-packages/distributed/client.py", line 923, in __init__ self._loop_runner = LoopRunner(loop=loop, asynchronous=asynchronous) File "/home/zar3bski/.cache/pypoetry/virtualenvs/poc-dask-iG-N0GH5-py3.10/lib/python3.10/site-packages/distributed/utils.py", line 451, in __init__ if not loop.asyncio_loop.is_running(): AttributeError: '_UnixSelectorEventLoop' object has no attribute 'asyncio_loop' (not sure what this constructor is waiting for loop) Do you have examples of dask actors involving resources from aiohttp (or any other async lib)? How should I set dask workers got get an event loop avaiblable to my actors? 
Edit Following @mdurant approach (a kind of singleton based importation of the extractor from a importable module) def get_extractor(CONF): if extractor[0] is None: connector = Sharepoint(CONF.sources["sharepoint"]) extractor[0] = SharepointBis(CONF.sources["sharepoint"], connector) return extractor[0] def workload(CONF, item): extractor = get_extractor(CONF) return extractor.job(item) def main(): client = Client() connector = Sharepoint(CONF.sources["sharepoint"]) items = connector.enumerate_items() futures = client.map( workload, [CONF for _ in range(len(items))], [i for i in items], retries=5, pure=False, ) _ = client.gather(futures) I still get 2022-12-01 10:05:54,923 - distributed.worker - WARNING - Compute Failed Key: workload-ffcf0f1a-8aee-41d1-9ad2-f7eea91fa107-41 Function: workload args: (<eteel.conf.ConfGenerator object at 0x7fae8040d4e0>, 'firex1.sharepoint.com,930e9ef8-6bdf-4484-9883-6aa9965c548f,aed0d0bd-a659-4dbf-bbaa-a56f4efa3b0c') kwargs: {} Exception: 'RuntimeError("There is no current event loop in thread \'Dask-Default-Threads-166860-1\'.")' same goes with a Client(asynchronous=True);which drives me back to my question: how can I have an event loop in a Dask Thread? I have a strong intuition that this has something to do with Client(asynchronous=True, loop={this parameter})
[ "OK, I think there is some confusion going on in this question, so I will do my best to clarify the situation. There are three main points:\n\nsome things cannot be serialised between processes easily or at all\nsome objects are expensive to create per process, and it would be nice to only do it once\nthe work must happen in an async context\n\nHere is how I would do it. Put this in an importable module.\nextractor = [None]\n\ndef get_extractor(CONF):\n if extractor[0] is None:\n connector = Sharepoint(CONF.sources[\"sharepoint\"])\n extractor[0] = SharepointExtractor(CONF.sources[\"sharepoint\"], connector)\n return extractor[0]\n\n\nasync def workload(CONF, item):\n extractor = get_extractor(CONF)\n return await extractor.job(item, retries=5)\n\nif __name__ == \"__main__\": # or run this elsewhere\n client = ... \n items = ...\n futures = client.map(workload, items)\n output = client.gather(futures)\n\nI do not know from the OP which parts of the workload are coroutines, I am guessing the .job method - but it should be obvious what I am doing. I note the original code would not have worked in a simple non-dask session, and it is always best to start off with something that works before trying to daskify it.\nOn async in dask:\n\nclient.map/submit supports coroutine functions, and they will be executed on the same event loop as the main worker. That's all you need here. All the distributed components (worker, scheduler, client) are async, server-like implementations with event loops, but execution of worker code does not normally happen in the same thread as the one running that server.\nclient(asynchronous=True) implies that the client is to be constructed and operated on only from within coroutines - and that the client's event loop is in the current thread. This is probably not what you want, unless you know what you are doing.\n\n" ]
[ 1 ]
[]
[]
[ "actor", "asynchronous", "dask", "python" ]
stackoverflow_0074615867_actor_asynchronous_dask_python.txt
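As a hedged illustration of the pattern the answer above describes (a module-level per-worker cache plus a coroutine task submitted through client.map), here is a minimal self-contained sketch. The fake session dict and the config values are placeholders standing in for the Sharepoint connector and aiohttp ClientSession, not part of the original code; it assumes, as the answer states, that dask runs coroutine task functions on the worker's own event loop.

from dask.distributed import Client

_cache = [None]  # module-level slot so each worker process builds its session once

def get_session(conf):
    # Lazily create the expensive resource inside the worker the first time it is needed
    if _cache[0] is None:
        _cache[0] = {"conf": conf, "token": "fake-token"}  # placeholder for an aiohttp ClientSession
    return _cache[0]

async def workload(conf, item):
    # Coroutine functions passed to client.map run on the worker's event loop,
    # so event-loop-dependent objects can be created and awaited here without
    # the "no current event loop" error from the question.
    session = get_session(conf)
    return f"processed {item} with {session['token']}"

if __name__ == "__main__":
    client = Client()                      # local cluster for testing
    items = ["a", "b", "c"]
    conf = {"site": "example"}             # placeholder config
    futures = client.map(workload, [conf] * len(items), items, pure=False)
    print(client.gather(futures))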
Q: How to make a spectrum plot I am trying to replicate a spectrum plot like the figure below with both Python and Matlab, no success so far. The image is from Electric Field Instrument data. The plot should have time on x-axis, frequency on y-axis and colorbar on the right y-axis. The data is a two dimensional matrix, each row represents the time stamp, the column represents different frequency after FFT. the problem is the data has a lot of NaN values, only a few frequency has data, when I used plt.imshow() it give me completely blank image. Besides, the value ranges from 1e-12 to 1e-7, very small. Any hint on how to visualize image like this would be greatly appreciated. Screenshot of the data. The data is from NASA EFI data. I utilized plt.imshow with Python and imagesc in Matlab with the whole 2d matrix, it give me blank image of the same color. Below is my Python code trial, all gave me wrong images: plt.matshow(dt, cmap='jet');plt.colorbar(); plt.show() for i in range(dt.shape[0]): plt.plot(dt.iloc[i, :]);plt.show() A: You shouldn't use imshow because this will display it as if it were an image (because you have a 2D matrix). You need to plot each row separately, like so: import numpy as np import matplotlib.pyplot as plt sin1 = np.sin(np.linspace(0, 2*np.pi, 100)) sin2 = np.sin(np.linspace(0, 2*np.pi, 100)) + 0.5 sin3 = np.sin(np.linspace(0, 2*np.pi, 100)) + 1 sin1[10] = np.nan sin2[20] = np.nan sin3[30] = np.nan data = np.array([sin1, sin2, sin3]) # plot each row as a separate series for i in range(data.shape[0]): plt.plot(data[i, :]) plt.show() and then the nan's should just be empty spots in the graph.
How to make a spectrum plot
I am trying to replicate a spectrum plot like the figure below with both Python and Matlab, with no success so far. The image is from Electric Field Instrument data. The plot should have time on the x-axis, frequency on the y-axis and a colorbar on the right y-axis. The data is a two-dimensional matrix: each row represents a time stamp, and each column represents a different frequency after the FFT. The problem is that the data has a lot of NaN values and only a few frequencies have data, so when I used plt.imshow() it gave me a completely blank image. Besides, the values range from 1e-12 to 1e-7, which is very small. Any hint on how to visualize an image like this would be greatly appreciated. Screenshot of the data. The data is from NASA EFI data. I used plt.imshow in Python and imagesc in Matlab on the whole 2d matrix, and both gave me a blank image of a single color. Below is my Python code trial; all attempts gave me wrong images: plt.matshow(dt, cmap='jet');plt.colorbar(); plt.show() for i in range(dt.shape[0]): plt.plot(dt.iloc[i, :]);plt.show()
[ "You shouldn't use imshow because this will display it as if it were an image (because you have a 2D matrix).\nYou need to plot each row separately, like so:\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nsin1 = np.sin(np.linspace(0, 2*np.pi, 100))\nsin2 = np.sin(np.linspace(0, 2*np.pi, 100)) + 0.5\nsin3 = np.sin(np.linspace(0, 2*np.pi, 100)) + 1\n\nsin1[10] = np.nan\nsin2[20] = np.nan\nsin3[30] = np.nan\n\ndata = np.array([sin1, sin2, sin3])\n\n# plot each row as a separate series\nfor i in range(data.shape[0]):\n plt.plot(data[i, :])\n\nplt.show()\n\nand then the nan's should just be empty spots in the graph.\n\n" ]
[ 0 ]
[]
[]
[ "python", "spectrum" ]
stackoverflow_0074630384_python_spectrum.txt
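The answer above plots each row as a line series. If the goal is the classic time-frequency spectrogram with a colorbar that the question describes, pcolormesh with a logarithmic color scale is one common alternative; the sketch below is an illustration on synthetic data, not code from the original post, and it assumes the matrix is oriented time x frequency with values in the 1e-12 to 1e-7 range.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm

# synthetic stand-in: rows = time stamps, columns = frequency bins, mostly NaN
data = np.full((100, 64), np.nan)
data[:, 10] = 1e-9 * (1 + np.random.rand(100))
data[:, 25] = 1e-8 * (1 + np.random.rand(100))

t = np.arange(data.shape[0])          # time axis
f = np.arange(data.shape[1])          # frequency axis

# pcolormesh leaves masked (NaN) cells blank; LogNorm handles the tiny value range
masked = np.ma.masked_invalid(data)
plt.pcolormesh(t, f, masked.T, norm=LogNorm(vmin=1e-12, vmax=1e-7),
               cmap="jet", shading="auto")
plt.colorbar(label="power")
plt.xlabel("time")
plt.ylabel("frequency")
plt.show()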
Q: Pandas - Parent child relationship - Duplicative data/Index issues I am trying to work with parent child relational data in pandas and having some issues getting the proper parent/child mapping on portions of my data. I attempted to use ffill and fillna to no avail, but I may have conducted that incorrectly. I have tried two methods with issues on both. Any assistance getting over this hurdle would be amazing. Thank you for your help. code: import pandas as pd df = pd.DataFrame( { "child_string": ["string42","string23","string23","string54","string28","string86","string15","string1"], "child": [None, 8675, 8675, 8676, 2048, 5442, 1942, 3185], "parent": [None, 2048, 2048, 2048, 1942, 1942, 3185, None], "interesting": ["some_unique_field1", "some_unique_field2", "some_unique_field3", "some_unique_field4", "some_unique_field5", "some_unique_field6", "some_unique_field7", "some_unique_field8"] } ) # This gives me the right output except for parent string for string 1 and string 42 print(df.merge( df[['child', 'child_string']].rename(columns={"child":"parent", "child_string": "parent_string"}), on='parent', how='left' )) # This fails with an invalid index error. df['parent_string'] = df['parent'].map(df.set_index('child').child_string) print(df) Expected output: child_string, child, parent, interesting parent_string string42, NaN, NaN, some_unique_field1, NaN string23, 8675, 2048, some_unique_field2, string28 string23, 8675, 2048, some_unique_field3, string28 string54, 8676, 2048, some_unique_field4, string28 string28, 2048, 1942, some_unique_field5, string15 string86, 5442, 1942, some_unique_field6, string15 string15, 1942, 3185, some_unique_field7, string1 string1, 3185, NaN, some_unique_field8, NaN A: You can create a dictionary that has the information of "child" column as key's and "child_string" as values. child_info = df[['child_string','child']].dropna() child_string_to_child_dict = dict(zip(child_info.child,child_info.child_string)) >>> child_string_to_child_dict {8675.0: 'string23', 8676.0: 'string54', 2048.0: 'string28', 5442.0: 'string86', 1942.0: 'string15', 3185.0: 'string1'} Then you can map that dictionary on your "parent" column df['parent_string'] = df['parent'].map(child_string_to_child_dict) Result: child_string child parent interesting parent_string 0 string42 NaN NaN some_unique_field1 NaN 1 string23 8675.0 2048.0 some_unique_field2 string28 2 string23 8675.0 2048.0 some_unique_field3 string28 3 string54 8676.0 2048.0 some_unique_field4 string28 4 string28 2048.0 1942.0 some_unique_field5 string15 5 string86 5442.0 1942.0 some_unique_field6 string15 6 string15 1942.0 3185.0 some_unique_field7 string1 7 string1 3185.0 NaN some_unique_field8 NaN A similar approach to what you tried
Pandas - Parent child relationship - Duplicative data/Index issues
I am trying to work with parent child relational data in pandas and having some issues getting the proper parent/child mapping on portions of my data. I attempted to use ffill and fillna to no avail, but I may have conducted that incorrectly. I have tried two methods with issues on both. Any assistance getting over this hurdle would be amazing. Thank you for your help. code: import pandas as pd df = pd.DataFrame( { "child_string": ["string42","string23","string23","string54","string28","string86","string15","string1"], "child": [None, 8675, 8675, 8676, 2048, 5442, 1942, 3185], "parent": [None, 2048, 2048, 2048, 1942, 1942, 3185, None], "interesting": ["some_unique_field1", "some_unique_field2", "some_unique_field3", "some_unique_field4", "some_unique_field5", "some_unique_field6", "some_unique_field7", "some_unique_field8"] } ) # This gives me the right output except for parent string for string 1 and string 42 print(df.merge( df[['child', 'child_string']].rename(columns={"child":"parent", "child_string": "parent_string"}), on='parent', how='left' )) # This fails with an invalid index error. df['parent_string'] = df['parent'].map(df.set_index('child').child_string) print(df) Expected output: child_string, child, parent, interesting parent_string string42, NaN, NaN, some_unique_field1, NaN string23, 8675, 2048, some_unique_field2, string28 string23, 8675, 2048, some_unique_field3, string28 string54, 8676, 2048, some_unique_field4, string28 string28, 2048, 1942, some_unique_field5, string15 string86, 5442, 1942, some_unique_field6, string15 string15, 1942, 3185, some_unique_field7, string1 string1, 3185, NaN, some_unique_field8, NaN
[ "You can create a dictionary that has the information of \"child\" column as key's and \"child_string\" as values.\nchild_info = df[['child_string','child']].dropna()\nchild_string_to_child_dict = dict(zip(child_info.child,child_info.child_string))\n\n>>> child_string_to_child_dict\n \n{8675.0: 'string23',\n 8676.0: 'string54',\n 2048.0: 'string28',\n 5442.0: 'string86',\n 1942.0: 'string15',\n 3185.0: 'string1'}\n\n\nThen you can map that dictionary on your \"parent\" column\ndf['parent_string'] = df['parent'].map(child_string_to_child_dict)\n\nResult:\n child_string child parent interesting parent_string\n0 string42 NaN NaN some_unique_field1 NaN\n1 string23 8675.0 2048.0 some_unique_field2 string28\n2 string23 8675.0 2048.0 some_unique_field3 string28\n3 string54 8676.0 2048.0 some_unique_field4 string28\n4 string28 2048.0 1942.0 some_unique_field5 string15\n5 string86 5442.0 1942.0 some_unique_field6 string15\n6 string15 1942.0 3185.0 some_unique_field7 string1\n7 string1 3185.0 NaN some_unique_field8 NaN\n\nA similar approach to what you tried\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python", "python_3.x" ]
stackoverflow_0074630349_dataframe_pandas_python_python_3.x.txt
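As a footnote to the answer above: the asker's one-liner df['parent'].map(df.set_index('child').child_string) most likely fails because 'child' contains duplicate and NaN values, so the lookup index is not unique. A hedged variant that deduplicates first behaves like the dictionary approach:

import pandas as pd

df = pd.DataFrame({
    "child_string": ["string42", "string23", "string23", "string54",
                     "string28", "string86", "string15", "string1"],
    "child": [None, 8675, 8675, 8676, 2048, 5442, 1942, 3185],
    "parent": [None, 2048, 2048, 2048, 1942, 1942, 3185, None],
})

# Build a unique child -> child_string lookup table, then map each parent id
# against it to fetch the parent's string.
lookup = (df.dropna(subset=["child"])
            .drop_duplicates("child")
            .set_index("child")["child_string"])
df["parent_string"] = df["parent"].map(lookup)
print(df)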
Q: spaCy Matcher conditional or/and Python I want to categorize the following keywords: import spacy from spacy.matcher import PhraseMatcher nlp = spacy.load("en_core_web_sm") phrase_matcher = PhraseMatcher(nlp.vocab) cat_patterns = [nlp(text) for text in ('cat', 'cute', 'fat')] dog_patterns = [nlp(text) for text in ('dog', 'fat')] matcher = PhraseMatcher(nlp.vocab) matcher.add('Category1', None, *cat_patterns) matcher.add('Category2', None, *dog_patterns) doc = nlp("I have a white cat. It is cute and fat; I have a black dog. It is fat,too") matches = matcher(doc) for match_id, start, end in matches: rule_id = nlp.vocab.strings[match_id] # get the unicode ID, i.e. 'CategoryID' span = doc[start : end] # get the matched slice of the doc print(rule_id, span.text) #Output #Category1 cat #Category1 cute #Category1 fat #Category2 fat #Category2 dog #Category1 fat #Category2 fat However, my expected output is if the text contains cat and cute or cat and fat together, it will fall in the first category; if the text contains dog and fat together, then it will fall in the second category. #Category1 cat cute #Category1 cat fat #Category2 dog fat Is it possible to do it using the similar algorithm? Thank you A: From the spaCy documentation on Matchers (https://spacy.io/usage/rule-based-matching), there is no way to detect 2 different tokens separated by an arbitrary number of tokens. If you knew how many tokens were between "cat" and "fat", for example, then you could use wildcard patterns (https://spacy.io/usage/rule-based-matching#adding-patterns-wildcard), but it looks like from your example that distance between tokens can vary. Two solutions that I can see to solve your problem: Keep track of matches in your for loop using some sort of data structure. If all the tokens you are looking for end up being found, then add that match to your final results. Use regular expressions to detect what you are looking for. spaCy does have great tools for rule-based matching, but it looks like you aren't using any linguistic aspects of the words you are searching for. A simple regex like /cat.*?fat/ will find the matches you are looking for.
spaCy Matcher conditional or/and Python
I want to categorize the following keywords: import spacy from spacy.matcher import PhraseMatcher nlp = spacy.load("en_core_web_sm") phrase_matcher = PhraseMatcher(nlp.vocab) cat_patterns = [nlp(text) for text in ('cat', 'cute', 'fat')] dog_patterns = [nlp(text) for text in ('dog', 'fat')] matcher = PhraseMatcher(nlp.vocab) matcher.add('Category1', None, *cat_patterns) matcher.add('Category2', None, *dog_patterns) doc = nlp("I have a white cat. It is cute and fat; I have a black dog. It is fat,too") matches = matcher(doc) for match_id, start, end in matches: rule_id = nlp.vocab.strings[match_id] # get the unicode ID, i.e. 'CategoryID' span = doc[start : end] # get the matched slice of the doc print(rule_id, span.text) #Output #Category1 cat #Category1 cute #Category1 fat #Category2 fat #Category2 dog #Category1 fat #Category2 fat However, my expected output is if the text contains cat and cute or cat and fat together, it will fall in the first category; if the text contains dog and fat together, then it will fall in the second category. #Category1 cat cute #Category1 cat fat #Category2 dog fat Is it possible to do it using the similar algorithm? Thank you
[ "From the spaCy documentation on Matchers (https://spacy.io/usage/rule-based-matching), there is no way to detect 2 different tokens separated by an arbitrary number of tokens. If you knew how many tokens were between \"cat\" and \"fat\", for example, then you could use wildcard patterns (https://spacy.io/usage/rule-based-matching#adding-patterns-wildcard), but it looks like from your example that distance between tokens can vary.\nTwo solutions that I can see to solve your problem:\n\nKeep track of matches in your for loop using some sort of data structure. If all the tokens you are looking for end up being found, then add that match to your final results.\nUse regular expressions to detect what you are looking for. spaCy does have great tools for rule-based matching, but it looks like you aren't using any linguistic aspects of the words you are searching for. A simple regex like /cat.*?fat/ will find the matches you are looking for.\n\n" ]
[ 0 ]
[]
[]
[ "matcher", "nlp", "python", "spacy" ]
stackoverflow_0066442191_matcher_nlp_python_spacy.txt
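A small sketch of the first suggestion in the answer (collect the matches in a data structure, then check that all required words co-occur). It reuses the matcher set-up from the question but with the spaCy 3 add() calling convention; the way the "cat and (cute or fat)" / "dog and fat" rules are encoded below is an assumption about the asker's intent, not something stated in the original answer.

import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.load("en_core_web_sm")
matcher = PhraseMatcher(nlp.vocab)
matcher.add("Category1", [nlp(t) for t in ("cat", "cute", "fat")])
matcher.add("Category2", [nlp(t) for t in ("dog", "fat")])

doc = nlp("I have a white cat. It is cute and fat; I have a black dog. It is fat, too")

# First collect every matched word under its rule id ...
found = {}
for match_id, start, end in matcher(doc):
    found.setdefault(nlp.vocab.strings[match_id], set()).add(doc[start:end].text)

# ... then apply the co-occurrence rules in plain Python.
if "cat" in found.get("Category1", set()) and found["Category1"] & {"cute", "fat"}:
    print("Category1:", sorted(found["Category1"]))
if {"dog", "fat"} <= found.get("Category2", set()):
    print("Category2:", sorted(found["Category2"]))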
Q: How do I convert this xlsx to JSON in Python I have a following excel file with two sheets: and I want to convert this excel into a json format using python that looks like this: { "app_id_c":"string", "cust_id_n":"string", "laa_app_a":"string", "laa_promc":"string", "laa_branch":"string", "laa_app_type_o":"string", "los_input_from_sas":[ "lsi_app_id_":'string', "lsi_cust_type_c":'string' ] } I tried using in built JSON excel to json library but it is giving me series of json instead of nested and I can't utilise another sheet to be part of same JSON A: First of all, you have to provide a minimal sample easy to copy and paste not an image of samples. But I have created a minimal sample similar to your images. It doesn't change the solution. Read xlsx files and convert them to list of dictionaries in Python, then you will have objects like these: sheet1 = [{ "app_id_c": "116092749", "cust_id_n": "95014843", "laa_app_a": "36", "laa_promc": "504627", "laa_branch": "8", "laa_app_type_o": "C", }] sheet2 = [ { "lsi_app_id_": "116092749", "lsi_cust_type_c": "G", }, { "lsi_app_id_": "116092749", "lsi_cust_type_c": "G", }, ] After having the above mentioned objects in Python, you can create the desired json structure by the following script: for i in sheet1: i["los_input_from_sas"] = list() for j in sheet2: if i["app_id_c"] == j["lsi_app_id_"]: i["los_input_from_sas"].append(j) sheet1 = json.dumps(sheet1) print(sheet1) And this is the printed output: [ { "app_id_c": "116092749", "cust_id_n": "95014843", "laa_app_a": "36", "laa_promc": "504627", "laa_branch": "8", "laa_app_type_o": "C", "los_input_from_sas": [ { "lsi_app_id_": "116092749", "lsi_cust_type_c": "G" }, { "lsi_app_id_": "116092749", "lsi_cust_type_c": "G" } ] } ] UPDATE: Here are some solution to read xlsx files and convert to python dict.
How do I convert this xlsx to JSON in Python
I have the following excel file with two sheets: and I want to convert this excel into a json format using python that looks like this: { "app_id_c":"string", "cust_id_n":"string", "laa_app_a":"string", "laa_promc":"string", "laa_branch":"string", "laa_app_type_o":"string", "los_input_from_sas":[ "lsi_app_id_":'string', "lsi_cust_type_c":'string' ] } I tried using a built-in excel-to-JSON library, but it gives me a series of separate JSON objects instead of a nested one, and I can't get the other sheet to be part of the same JSON
[ "First of all, you have to provide a minimal sample easy to copy and paste not an image of samples. But I have created a minimal sample similar to your images. It doesn't change the solution.\nRead xlsx files and convert them to list of dictionaries in Python, then you will have objects like these:\nsheet1 = [{\n \"app_id_c\": \"116092749\",\n \"cust_id_n\": \"95014843\",\n \"laa_app_a\": \"36\",\n \"laa_promc\": \"504627\",\n \"laa_branch\": \"8\",\n \"laa_app_type_o\": \"C\",\n}]\n\nsheet2 = [\n {\n \"lsi_app_id_\": \"116092749\",\n \"lsi_cust_type_c\": \"G\",\n },\n {\n \"lsi_app_id_\": \"116092749\",\n \"lsi_cust_type_c\": \"G\",\n },\n]\n\nAfter having the above mentioned objects in Python, you can create the desired json structure by the following script:\nfor i in sheet1:\n i[\"los_input_from_sas\"] = list()\n for j in sheet2:\n if i[\"app_id_c\"] == j[\"lsi_app_id_\"]:\n i[\"los_input_from_sas\"].append(j)\n\nsheet1 = json.dumps(sheet1)\n\nprint(sheet1)\n\n\nAnd this is the printed output:\n[\n {\n \"app_id_c\": \"116092749\",\n \"cust_id_n\": \"95014843\",\n \"laa_app_a\": \"36\",\n \"laa_promc\": \"504627\",\n \"laa_branch\": \"8\",\n \"laa_app_type_o\": \"C\",\n \"los_input_from_sas\": [\n {\n \"lsi_app_id_\": \"116092749\",\n \"lsi_cust_type_c\": \"G\"\n },\n {\n \"lsi_app_id_\": \"116092749\",\n \"lsi_cust_type_c\": \"G\"\n }\n ]\n }\n]\n\nUPDATE:\nHere are some solution to read xlsx files and convert to python dict.\n" ]
[ 0 ]
[]
[]
[ "excel", "json", "python", "python_3.x" ]
stackoverflow_0074625645_excel_json_python_python_3.x.txt
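The answer above starts from sheet1/sheet2 already loaded as lists of dictionaries and only links to how to get there. A hedged way to produce those lists with pandas is sketched below; the file name and sheet names are placeholders, since the screenshots are not visible here, and reading .xlsx files requires openpyxl to be installed.

import json
import pandas as pd

# sheet_name=None returns a {sheet_name: DataFrame} dict with every sheet
sheets = pd.read_excel("input.xlsx", sheet_name=None, dtype=str)

sheet1 = sheets["Sheet1"].to_dict(orient="records")   # list of row dicts
sheet2 = sheets["Sheet2"].to_dict(orient="records")

# Nest matching rows of the second sheet under each row of the first,
# exactly as in the answer's loop.
for row in sheet1:
    row["los_input_from_sas"] = [
        child for child in sheet2 if child["lsi_app_id_"] == row["app_id_c"]
    ]

print(json.dumps(sheet1, indent=2))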
Q: finding a specific object within duplicate element names, python with json I'm looking to grab the displayValue from objectAttributeValues where the objectTypeAttributeId = 14 there are multiple arrays like this, and the position of objectTypeAttributeId = 14 isn't always the same. how do I loop over every array to get that specific displayValue? I've got something that looks through every possible array, but I want to clean it up. sample json: { "objectEntries": [{ "attributes": [{ "id": "5210", "objectAttributeValues": [{ "displayValue": "10/Nov/22 3:33 PM", "referencedType": false, "searchValue": "2022-11-10T15:33:49.298Z", "value": "2022-11-10T15:33:49.298Z" }], "objectId": "1201", "objectTypeAttributeId": "12" }, { "id": "5213", "objectAttributeValues": [{ "displayValue": "02f9ed75-b416-49d0-8515-0601581158e5", "referencedType": false, "searchValue": "02f9ed75-b416-49d0-8515-0601581158e5", "value": "02f9ed75-b416-49d0-8515-0601581158e5" }], "objectId": "1201", "objectTypeAttributeId": "14" }, { "id": "5212", "objectAttributeValues": [{ "displayValue": "", "referencedType": false, "searchValue": "", "value": "" }], "objectId": "1201", "objectTypeAttributeId": "11" } ] }, { "attributes": [{ "id": "4263", "objectAttributeValues": [{ "displayValue": "427904c5-e2c8-4735-bc38-4013928cd043", "referencedType": false, "searchValue": "427904c5-e2c8-4735-bc38-4013928cd043", "value": "427904c5-e2c8-4735-bc38-4013928cd043" }], "objectId": "1011", "objectTypeAttributeId": "14" }, { "id": "4262", "objectAttributeValues": [{ "displayValue": "", "referencedType": false, "searchValue": "", "value": "" }], "objectId": "1011", "objectTypeAttributeId": "11" } ] } ] } for this sample query, the values would be: 02f9ed75-b416-49d0-8515-0601581158e5 427904c5-e2c8-4735-bc38-4013928cd043 this is my code so far, and would like to make it for efficient: from jira import JIRA import requests import json base_url = "url" auth = basic_auth=('user', 'pass') headers = { "Accept": "application/json" } pages = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] for page in pages: response = requests.request("GET",base_url + '?page=' + str(page),headers=headers,auth=auth) all_output = json.dumps(json.loads(response.text), sort_keys=True, indent=4, separators=(",", ": ")) output_dict = json.loads(response.text) output_list = output_dict["objectEntries"] for outputs in output_list: print(outputs["attributes"][0]["objectId"]) print(outputs["name"]) print(outputs["objectKey"]) if len(outputs["attributes"][0]["objectAttributeValues"][0]["displayValue"])==36: print(outputs["attributes"][0]["objectAttributeValues"][0]["displayValue"]) if len(outputs["attributes"][1]["objectAttributeValues"][0]["displayValue"])==36: print(outputs["attributes"][1]["objectAttributeValues"][0]["displayValue"]) if len(outputs["attributes"][2]["objectAttributeValues"][0]["displayValue"])==36: print(outputs["attributes"][2]["objectAttributeValues"][0]["displayValue"]) if len(outputs["attributes"][3]["objectAttributeValues"][0]["displayValue"])==36: print(outputs["attributes"][3]["objectAttributeValues"][0]["displayValue"]) if len(outputs["attributes"][4]["objectAttributeValues"][0]["displayValue"])==36: print(outputs["attributes"][4]["objectAttributeValues"][0]["displayValue"]) print('\n') Any suggestions would be appreciated!! 
A: If structure is not changing then this can the solution It will iterate over all objects and add displayValue in search_values list display_values = [] for object_entries in output_dict.get("objectEntries", []): for attribute in object_entries.get("attributes"): if attribute.get("objectTypeAttributeId") == "14": for object_attr in attribute.get("objectAttributeValues", []): if object_attr.get("displayValue") not in display_values: display_values.append(object_attr.get("displayValue")) print(display_values) A: You could browse your JSON dict and proceed each entries until you get the one(s) you are interested in. # lets browse top level entries of your array for e1 in outputs["objectEntries"]: # for each of those entries, browse the entries in the attribute section for e2 in e1["attributes"]: # does the entry match the rule "14"? If not, go to the next one if (e2["objectTypeAttributeId"] != 14): continue # print the current entry's associated value for attr in e2["objectAttributeValues"] print(attr["displayValue"]) A: You can iterate over your dict and check if the values matches with a function like this: def get_display_value(my_dict, value): results = [] for objectEntries in my_dict['objectEntries']: for attributes in objectEntries['attributes']: if int(attributes['objectTypeAttributeId']) == value: results.append(attributes['objectAttributeValues'][0]['displayValue']) return results Using the function: results = get_display_value(my_dict, 14) print(results) Outputs: ['02f9ed75-b416-49d0-8515-0601581158e5', '427904c5-e2c8-4735-bc38-4013928cd043'] Edit: now returning all match values instead of only the first one.
finding a specific object within duplicate element names, python with json
I'm looking to grab the displayValue from objectAttributeValues where the objectTypeAttributeId = 14 there are multiple arrays like this, and the position of objectTypeAttributeId = 14 isn't always the same. how do I loop over every array to get that specific displayValue? I've got something that looks through every possible array, but I want to clean it up. sample json: { "objectEntries": [{ "attributes": [{ "id": "5210", "objectAttributeValues": [{ "displayValue": "10/Nov/22 3:33 PM", "referencedType": false, "searchValue": "2022-11-10T15:33:49.298Z", "value": "2022-11-10T15:33:49.298Z" }], "objectId": "1201", "objectTypeAttributeId": "12" }, { "id": "5213", "objectAttributeValues": [{ "displayValue": "02f9ed75-b416-49d0-8515-0601581158e5", "referencedType": false, "searchValue": "02f9ed75-b416-49d0-8515-0601581158e5", "value": "02f9ed75-b416-49d0-8515-0601581158e5" }], "objectId": "1201", "objectTypeAttributeId": "14" }, { "id": "5212", "objectAttributeValues": [{ "displayValue": "", "referencedType": false, "searchValue": "", "value": "" }], "objectId": "1201", "objectTypeAttributeId": "11" } ] }, { "attributes": [{ "id": "4263", "objectAttributeValues": [{ "displayValue": "427904c5-e2c8-4735-bc38-4013928cd043", "referencedType": false, "searchValue": "427904c5-e2c8-4735-bc38-4013928cd043", "value": "427904c5-e2c8-4735-bc38-4013928cd043" }], "objectId": "1011", "objectTypeAttributeId": "14" }, { "id": "4262", "objectAttributeValues": [{ "displayValue": "", "referencedType": false, "searchValue": "", "value": "" }], "objectId": "1011", "objectTypeAttributeId": "11" } ] } ] } for this sample query, the values would be: 02f9ed75-b416-49d0-8515-0601581158e5 427904c5-e2c8-4735-bc38-4013928cd043 this is my code so far, and would like to make it for efficient: from jira import JIRA import requests import json base_url = "url" auth = basic_auth=('user', 'pass') headers = { "Accept": "application/json" } pages = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] for page in pages: response = requests.request("GET",base_url + '?page=' + str(page),headers=headers,auth=auth) all_output = json.dumps(json.loads(response.text), sort_keys=True, indent=4, separators=(",", ": ")) output_dict = json.loads(response.text) output_list = output_dict["objectEntries"] for outputs in output_list: print(outputs["attributes"][0]["objectId"]) print(outputs["name"]) print(outputs["objectKey"]) if len(outputs["attributes"][0]["objectAttributeValues"][0]["displayValue"])==36: print(outputs["attributes"][0]["objectAttributeValues"][0]["displayValue"]) if len(outputs["attributes"][1]["objectAttributeValues"][0]["displayValue"])==36: print(outputs["attributes"][1]["objectAttributeValues"][0]["displayValue"]) if len(outputs["attributes"][2]["objectAttributeValues"][0]["displayValue"])==36: print(outputs["attributes"][2]["objectAttributeValues"][0]["displayValue"]) if len(outputs["attributes"][3]["objectAttributeValues"][0]["displayValue"])==36: print(outputs["attributes"][3]["objectAttributeValues"][0]["displayValue"]) if len(outputs["attributes"][4]["objectAttributeValues"][0]["displayValue"])==36: print(outputs["attributes"][4]["objectAttributeValues"][0]["displayValue"]) print('\n') Any suggestions would be appreciated!!
[ "If structure is not changing then this can the solution It will iterate over all objects and add displayValue in search_values list\ndisplay_values = []\nfor object_entries in output_dict.get(\"objectEntries\", []):\n for attribute in object_entries.get(\"attributes\"):\n if attribute.get(\"objectTypeAttributeId\") == \"14\":\n for object_attr in attribute.get(\"objectAttributeValues\", []):\n if object_attr.get(\"displayValue\") not in display_values:\n display_values.append(object_attr.get(\"displayValue\"))\n\n\nprint(display_values)\n\n", "You could browse your JSON dict and proceed each entries until you get the one(s) you are interested in.\n# lets browse top level entries of your array\nfor e1 in outputs[\"objectEntries\"]:\n # for each of those entries, browse the entries in the attribute section\n for e2 in e1[\"attributes\"]:\n # does the entry match the rule \"14\"? If not, go to the next one\n if (e2[\"objectTypeAttributeId\"] != 14):\n continue\n # print the current entry's associated value\n for attr in e2[\"objectAttributeValues\"]\n print(attr[\"displayValue\"])\n\n", "You can iterate over your dict and check if the values matches with a function like this:\ndef get_display_value(my_dict, value):\n results = []\n for objectEntries in my_dict['objectEntries']:\n for attributes in objectEntries['attributes']:\n if int(attributes['objectTypeAttributeId']) == value:\n results.append(attributes['objectAttributeValues'][0]['displayValue'])\n return results\n\nUsing the function:\nresults = get_display_value(my_dict, 14)\nprint(results)\n\nOutputs:\n['02f9ed75-b416-49d0-8515-0601581158e5', '427904c5-e2c8-4735-bc38-4013928cd043']\n\nEdit: now returning all match values instead of only the first one.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "json", "python" ]
stackoverflow_0074630263_json_python.txt
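For completeness, a sketch of how the helper from the first answer could replace the repeated index checks in the asker's paginated request loop. The URL, credentials and page range are the question's placeholders; the attribute id is compared as the string "14" because that is how it appears in the sample JSON.

import requests

def extract_display_values(payload, type_attribute_id="14"):
    # Walk objectEntries -> attributes and keep displayValue where the id matches.
    values = []
    for entry in payload.get("objectEntries", []):
        for attribute in entry.get("attributes", []):
            if attribute.get("objectTypeAttributeId") == type_attribute_id:
                for attr_value in attribute.get("objectAttributeValues", []):
                    values.append(attr_value.get("displayValue"))
    return values

base_url = "url"                       # placeholder from the question
auth = ("user", "pass")
headers = {"Accept": "application/json"}

for page in range(1, 11):
    response = requests.get(base_url, params={"page": page}, headers=headers, auth=auth)
    print(extract_display_values(response.json()))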
Q: Is sklearn.model_selection.GridSearchCV can do custom threshold? My goal is to do threshold tuning before parameter tuning. The idea is simple: in an imbalanced dataset, if class 1 is the minority, then the threshold should be lower than 0.5, so it predicts more instances as class 1 instead of 0. Therefore, I believe that by changing the threshold early, we can improve the model's predictive power even more than (parameter tuning - threshold tuning). The problem is, I can't find a parameter in GridSearchCV to change the threshold. A: You can't directly change the threshold used by predict (which gets called by your scorer, presumably), but you can provide a custom scoring method. See the User Guide. Here I think you'd want something like: def f2_score_at_thresh(y_true, y_pos_prob, threshold): y_pred = y_pos_prob > threshold return fbeta_score(y_true, y_pred, beta=2, ...) my_scorer = make_scorer(f2_score_at_thresh, needs_proba=True, threshold=0.2) GridSearchCV(..., scoring=my_scorer)
Is sklearn.model_selection.GridSearchCV can do custom threshold?
My goal is to do threshold tuning before parameter tuning. The idea is simple: in an imbalanced dataset, if class 1 is the minority, then the threshold should be lower than 0.5, so it predicts more instances as class 1 instead of 0. Therefore, I believe that by changing the threshold early, we can improve the model's predictive power even more than (parameter tuning - threshold tuning). The problem is, I can't find a parameter in GridSearchCV to change the threshold.
[ "You can't directly change the threshold used by predict (which gets called by your scorer, presumably), but you can provide a customer scoring method. See the User Guide. Here I think you'd want something like:\ndef f2_score_at_thresh(y_true, y_pos_prob, threshold):\n y_pred = y_pos_prob > threshold\n return fbeta_score(y_true, y_pred, beta=2, ...)\n\nmy_scorer = make_scorer(f2_scorer, needs_proba=True, threshold=0.2)\n\nGridSearchCV(..., scoring=my_scorer)\n\n" ]
[ 1 ]
[]
[]
[ "python", "scikit_learn" ]
stackoverflow_0074624735_python_scikit_learn.txt
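A fuller, runnable version of the scorer idea from the answer above, with the imports spelled out and a toy imbalanced dataset. Treat it as a sketch: the extra keyword arguments passed to make_scorer are forwarded to the metric, and needs_proba= is the older make_scorer spelling, which newer scikit-learn releases may replace with response_method="predict_proba".

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import fbeta_score, make_scorer
from sklearn.model_selection import GridSearchCV

def f2_score_at_thresh(y_true, y_pos_prob, threshold=0.5):
    # Turn positive-class probabilities into labels at a custom cut-off,
    # then score with F2 (recall-weighted) for the minority class.
    return fbeta_score(y_true, y_pos_prob > threshold, beta=2)

# threshold=0.2 is forwarded to f2_score_at_thresh every time a fold is scored
my_scorer = make_scorer(f2_score_at_thresh, needs_proba=True, threshold=0.2)

X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)
grid = GridSearchCV(LogisticRegression(max_iter=1000),
                    {"C": [0.01, 0.1, 1, 10]},
                    scoring=my_scorer, cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)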
Q: How to get row number in dataframe in Pandas? How can I get the number of the row in a dataframe that contains a certain value in a certain column using Pandas? For example, I have the following dataframe: ClientID LastName 0 34 Johnson 1 67 Smith 2 53 Brows How can I find the number of the row that has 'Smith' in 'LastName' column? A: Note that a dataframe's index could be out of order, or not even numerical at all. If you don't want to use the current index and instead renumber the rows sequentially, then you can use df.reset_index() together with the suggestions below To get all indices that matches 'Smith' >>> df[df['LastName'] == 'Smith'].index Int64Index([1], dtype='int64') or as a numpy array >>> df[df['LastName'] == 'Smith'].index.to_numpy() # .values on older versions array([1]) or if there is only one and you want the integer, you can subset >>> df[df['LastName'] == 'Smith'].index[0] 1 You could use the same boolean expressions with .loc, but it is not needed unless you also want to select a certain column, which is redundant when you only want the row number/index. A: df.index[df.LastName == 'Smith'] Or df.query('LastName == "Smith"').index Will return all row indices where LastName is Smith Int64Index([1], dtype='int64') A: df.loc[df.LastName == 'Smith'] will return the row ClientID LastName 1 67 Smith and df.loc[df.LastName == 'Smith'].index will return the index Int64Index([1], dtype='int64') NOTE: Column names 'LastName' and 'Last Name' or even 'lastname' are three unique names. The best practice would be to first check the exact name using df.columns. If you really need to strip the column names of all the white spaces, you can first do df.columns = [x.strip().replace(' ', '') for x in df.columns] A: len(df[df["Lastname"]=="Smith"].values) A: count_smiths = (df['LastName'] == 'Smith').sum() A: I know it's many years later but don't try the above solutions without reindexing your dataframe first. As many have pointed out already the number you see to the left of the dataframe 0,1,2 in the initial question is the index INSIDE that dataframe. When you extract a subset of it with a condition you might end up with 0,2 or 2,1, or 2,1 or 2,1,0 depending your condition. So by using that number (called "index") you will not get the position of the row in the subset. You will get the position of that row inside the main dataframe. use: np.where([df['LastName'] == 'Smith'])[1][0] and play with the string 'Smith' to see the various outcomes. Where will return 2 arrays. The 2nd one (index 1) is the one you care about. NOTE: When the value you search for does not exist where() will return 0 on [1][0]. When is the first value of the list it will also return 0 on [1][0]. Make sure you validate the existence first. NOTE #2: In case the same value as in your condition is present in the subset multiple times on [1] with will find the list with the position of all occurrences. You can use the length of [1] for future processing if needed. A: If the index of the dataframe and the ordinal number of the rows differ, most solutions posted here won't work anymore. 
Given your dataframe with an alphabetical index: In [2]: df = pd.DataFrame({"ClientID": {"A": 34, "B": 67, "C": 53}, "LastName": {"A": "Johnson", "B": "Smith", "C": "Brows"}}) In [3]: df Out[3]: ClientID LastName A 34 Johnson B 67 Smith C 53 Brows You have to use get_loc to access the ordinal row number: In [4]: df.index.get_loc(df.query('LastName == "Smith"').index[0]) Out[4]: 1 If there may exist multiple rows where the condition holds, e.g. find the ordinal row numbers that have 'Smith' or 'Brows' in LastName column, you can use list comprehensions: In [5]: [df.index.get_loc(idx) for idx in df.query('LastName == "Smith" | LastName == "Brows"').index] Out[5]: [1, 2] A: If in the question "row number" means actual row number/position (rather than index label) pandas.Index.get_loc(key, method=None, tolerance=None) seems to be the answer, ie something like: row_number = df.index.get_loc(df.query(f'numbers == {m}').index[0]) The current answers, except one, explain how to get the index label rather than the row number. Trivial code with index lables not corresponding to row numbers: import pandas as pd n = 3; m = n-1 df = pd.DataFrame({'numbers' : range(n) }, index = range(n-1,-1,-1)) print(df,"\n") label = df[df['numbers'] == m].index[0] row_number = df.index.get_loc(df.query(f'numbers == {m}').index[0]) print(f'index label: {label}\nrow number: {row_number}',"\n") print(f"df.loc[{label},'numbers']: {df.loc[label, 'numbers']}") print(f"df.iloc[{row_number}, 0]: {df.iloc[row_number, 0]}") numbers 2 0 1 1 0 2 index label: 0 row number: 2 df.loc[0,'numbers']: 2 df.iloc[2, 0]: 2 A: You can simply use shape method df[df['LastName'] == 'Smith'].shape Output (1,1) Which indicates 1 row and 1 column. This way you can get the idea of whole datasets Let me explain the above code DataframeName[DataframeName['Column_name'] == 'Value to match in column'] A: To get exact row-number of single occurrence row-number = df[df["LastName" == 'Smith']].index[0] To get exact row-number of multiple occurrence of 'Smith' row-number = df[df["LastName" == 'Smith']].index.tolist()
How to get row number in dataframe in Pandas?
How can I get the number of the row in a dataframe that contains a certain value in a certain column using Pandas? For example, I have the following dataframe: ClientID LastName 0 34 Johnson 1 67 Smith 2 53 Brows How can I find the number of the row that has 'Smith' in 'LastName' column?
[ "Note that a dataframe's index could be out of order, or not even numerical at all. If you don't want to use the current index and instead renumber the rows sequentially, then you can use df.reset_index() together with the suggestions below\nTo get all indices that matches 'Smith'\n>>> df[df['LastName'] == 'Smith'].index\nInt64Index([1], dtype='int64')\n\nor as a numpy array\n>>> df[df['LastName'] == 'Smith'].index.to_numpy() # .values on older versions\narray([1])\n\nor if there is only one and you want the integer, you can subset\n>>> df[df['LastName'] == 'Smith'].index[0]\n1\n\nYou could use the same boolean expressions with .loc, but it is not needed unless you also want to select a certain column, which is redundant when you only want the row number/index.\n", "df.index[df.LastName == 'Smith']\n\nOr\ndf.query('LastName == \"Smith\"').index\n\nWill return all row indices where LastName is Smith\nInt64Index([1], dtype='int64')\n\n", "df.loc[df.LastName == 'Smith']\n\nwill return the row\n ClientID LastName\n1 67 Smith\n\nand \ndf.loc[df.LastName == 'Smith'].index\n\nwill return the index\nInt64Index([1], dtype='int64')\n\nNOTE: Column names 'LastName' and 'Last Name' or even 'lastname' are three unique names. The best practice would be to first check the exact name using df.columns. If you really need to strip the column names of all the white spaces, you can first do\ndf.columns = [x.strip().replace(' ', '') for x in df.columns]\n\n", " len(df[df[\"Lastname\"]==\"Smith\"].values)\n\n", "count_smiths = (df['LastName'] == 'Smith').sum()\n\n", "I know it's many years later but don't try the above solutions without reindexing your dataframe first. As many have pointed out already the number you see to the left of the dataframe 0,1,2 in the initial question is the index INSIDE that dataframe. When you extract a subset of it with a condition you might end up with 0,2 or 2,1, or 2,1 or 2,1,0 depending your condition. So by using that number (called \"index\") you will not get the position of the row in the subset. You will get the position of that row inside the main dataframe.\nuse:\nnp.where([df['LastName'] == 'Smith'])[1][0]\n\nand play with the string 'Smith' to see the various outcomes. Where will return 2 arrays. The 2nd one (index 1) is the one you care about.\nNOTE:\nWhen the value you search for does not exist where() will return 0 on [1][0]. When is the first value of the list it will also return 0 on [1][0]. Make sure you validate the existence first.\nNOTE #2:\nIn case the same value as in your condition is present in the subset multiple times on [1] with will find the list with the position of all occurrences. You can use the length of [1] for future processing if needed.\n", "If the index of the dataframe and the ordinal number of the rows differ, most solutions posted here won't work anymore. Given your dataframe with an alphabetical index:\nIn [2]: df = pd.DataFrame({\"ClientID\": {\"A\": 34, \"B\": 67, \"C\": 53}, \"LastName\": {\"A\": \"Johnson\", \"B\": \"Smith\", \"C\": \"Brows\"}})\n\nIn [3]: df\nOut[3]: \n ClientID LastName\nA 34 Johnson\nB 67 Smith\nC 53 Brows\n\nYou have to use get_loc to access the ordinal row number:\nIn [4]: df.index.get_loc(df.query('LastName == \"Smith\"').index[0])\nOut[4]: 1\n\nIf there may exist multiple rows where the condition holds, e.g. 
find the ordinal row numbers that have 'Smith' or 'Brows' in LastName column, you can use list comprehensions:\nIn [5]: [df.index.get_loc(idx) for idx in df.query('LastName == \"Smith\" | LastName == \"Brows\"').index]\nOut[5]: [1, 2]\n\n", "If in the question \"row number\" means actual row number/position (rather than index label)\npandas.Index.get_loc(key, method=None, tolerance=None)\nseems to be the answer, ie something like:\nrow_number = df.index.get_loc(df.query(f'numbers == {m}').index[0]) \n\nThe current answers, except one, explain how to get the index label rather than the row number.\nTrivial code with index lables not corresponding to row numbers:\nimport pandas as pd\n\nn = 3; m = n-1\n\ndf = pd.DataFrame({'numbers' : range(n) },\n index = range(n-1,-1,-1))\nprint(df,\"\\n\")\n\nlabel = df[df['numbers'] == m].index[0]\nrow_number = df.index.get_loc(df.query(f'numbers == {m}').index[0])\n\nprint(f'index label: {label}\\nrow number: {row_number}',\"\\n\")\nprint(f\"df.loc[{label},'numbers']: {df.loc[label, 'numbers']}\")\nprint(f\"df.iloc[{row_number}, 0]: {df.iloc[row_number, 0]}\")\n\n numbers\n2 0\n1 1\n0 2 \n\nindex label: 0\nrow number: 2 \n\ndf.loc[0,'numbers']: 2\ndf.iloc[2, 0]: 2\n\n", "You can simply use shape method\ndf[df['LastName'] == 'Smith'].shape\nOutput\n(1,1) \nWhich indicates 1 row and 1 column. This way you can get the idea of whole datasets\nLet me explain the above code\nDataframeName[DataframeName['Column_name'] == 'Value to match in column']\n", "\nTo get exact row-number of single occurrence\n\nrow-number = df[df[\"LastName\" == 'Smith']].index[0]\n\nTo get exact row-number of multiple occurrence of 'Smith'\n\nrow-number = df[df[\"LastName\" == 'Smith']].index.tolist()\n" ]
[ 74, 14, 8, 2, 1, 1, 1, 1, 0, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0043193880_pandas_python.txt
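One more hedged variant that complements the answers above when you want positional row numbers (not index labels) for every match in a single call: np.flatnonzero works directly on the boolean mask, regardless of what the index looks like.

import numpy as np
import pandas as pd

df = pd.DataFrame({"ClientID": [34, 67, 53],
                   "LastName": ["Johnson", "Smith", "Brows"]},
                  index=["A", "B", "C"])      # non-numeric index on purpose

# Positions of all rows whose LastName is Smith, counted from 0
positions = np.flatnonzero(df["LastName"] == "Smith")
print(positions)           # [1]
print(df.iloc[positions])  # the matching rows, selected by position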
Q: Why does my print execute after the second loop even if I use print first? I'm a beginner of python, and I wanted to try to make a timer. import time sets=int(input("How many sets?: ")) seconds=int(input("How many seconds per set?: ")) for i in range(sets): print("set {0} of {1} started".format(i + 1, sets)) for j in range(seconds, 0, -1): print(j, end=" ") print("Finished workout! Good Job!") The problem is that the first print in the first loop is active after the j loop is ended, and I don't know why. Also my version of py is 3.11, I'm sorry if I misinterpreted the python-3.x tag. I expected the output to be: How many sets?: 3 How many seconds per set?: 2 set 1 of 3 started 2 1 set 2 of 3 started 2 1 set 3 of 3 started 2 1 Finished workout! Good Job! But it's How many sets?: 3 How many seconds per set?: 2 2 1 set 1 of 3 started 2 1 set 2 of 3 started 2 1 set 3 of 3 started Finished workout! Good Job! Please help and thank you! :) A: By default print terminates with a newline character. When you define end in the print function, you modify this behavior. I have added a bare print which will simply output a newline character to stdout, correcting the format issues. for i in range(sets): print("set {0} of {1} started".format(i + 1, sets)) for j in range(seconds, 0, -1): print(j, end=" ") print() print("Finished workout! Good Job!") I would also like to add, that if you're on windows and using something like git-bash or powershell to test your code, you may need to flush stdout after a print to have your text displayed in the proper order. print("Hello World!", flush=True) If this applies to your situation, make sure you test, as over flushing the buffer can cause a lot of lag in your application.
Why does my print execute after the second loop even if I use print first?
I'm a beginner of python, and I wanted to try to make a timer. import time sets=int(input("How many sets?: ")) seconds=int(input("How many seconds per set?: ")) for i in range(sets): print("set {0} of {1} started".format(i + 1, sets)) for j in range(seconds, 0, -1): print(j, end=" ") print("Finished workout! Good Job!") The problem is that the first print in the first loop is active after the j loop is ended, and I don't know why. Also my version of py is 3.11, I'm sorry if I misinterpreted the python-3.x tag. I expected the output to be: How many sets?: 3 How many seconds per set?: 2 set 1 of 3 started 2 1 set 2 of 3 started 2 1 set 3 of 3 started 2 1 Finished workout! Good Job! But it's How many sets?: 3 How many seconds per set?: 2 2 1 set 1 of 3 started 2 1 set 2 of 3 started 2 1 set 3 of 3 started Finished workout! Good Job! Please help and thank you! :)
[ "By default print terminates with a newline character. When you define end in the print function, you modify this behavior. I have added a bare print which will simply output a newline character to stdout, correcting the format issues.\nfor i in range(sets):\n print(\"set {0} of {1} started\".format(i + 1, sets))\n for j in range(seconds, 0, -1):\n print(j, end=\" \")\n print()\nprint(\"Finished workout! Good Job!\")\n\nI would also like to add, that if you're on windows and using something like git-bash or powershell to test your code, you may need to flush stdout after a print to have your text displayed in the proper order.\nprint(\"Hello World!\", flush=True)\nIf this applies to your situation, make sure you test, as over flushing the buffer can cause a lot of lag in your application.\n" ]
[ 0 ]
[]
[]
[ "loops", "python", "python_3.x" ]
stackoverflow_0074629811_loops_python_python_3.x.txt
Q: Creating an SQLAlchemy column to dynamically generate a list of models with an expression I want to create a relationship column on my model, where it will be built with an expression so that it can be queried. Here's a brief example of my setup: I have each application (eg. Python) stored in the App table. Each version of the application (eg. Python 3.7) is stored under the AppVersion table. My items (in the Item table) have a minimum and maximum supported version per application. This is done with the ItemVersion table, with ItemVersion.version_min and ItemVersion.version_max, for example: min_version=None, max_version=None: Compatible with all versions min_version=None, max_version=27: Compatible with Python 2 and below min_version=37, max_version=None: Compatible with Python 3 and above min_version=37, max_version=39: Compatible with Python 3.7 to 3.9 In this case, I want to generate an expression to return a list of AppVersion records compatible with my item. Below I have used @hybrid_property as an example to mock up how ItemVersion.versions and Item.versions should work. I need it to be compatible with queries though, which this is not (eg. Item.versions.any(AppVersion.id == 1)). from sqlalchemy import select, create_engine, Column, Integer, ForeignKey, String, case, and_, or_ from sqlalchemy.orm import relationship, sessionmaker, column_property from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.ext.hybrid import hybrid_property from sqlalchemy.ext.associationproxy import association_proxy Engine = create_engine('sqlite://') Base = declarative_base(Engine) session = sessionmaker(Engine)() class App(Base): __tablename__ = 'app' id = Column(Integer, primary_key=True) name = Column(String(64)) versions = relationship('AppVersion', back_populates='app') def __repr__(self): return self.name class AppVersion(Base): __tablename__ = 'app_version' id = Column(Integer, primary_key=True) app_id = Column(Integer, ForeignKey('app.id'), nullable=False) value = Column(Integer, nullable=False) app = relationship('App', foreign_keys=app_id, back_populates='versions', innerjoin=True) def __repr__(self): return f'{self.app.name}:{self.value}' class ItemVersion(Base): __tablename__ = 'item_version' id = Column(Integer, primary_key=True) item_id = Column(Integer, ForeignKey('item.id')) app_id = Column(Integer, ForeignKey('app.id')) version_min_id = Column(Integer, ForeignKey('app_version.id'), nullable=True) version_max_id = Column(Integer, ForeignKey('app_version.id'), nullable=True) item = relationship('Item', foreign_keys=item_id) app = relationship('App', foreign_keys=app_id) version_min = relationship('AppVersion', foreign_keys=version_min_id) version_max = relationship('AppVersion', foreign_keys=version_max_id) @hybrid_property def versions(self): # All versions if self.version_min is None and self.version_max is None: return self.app.versions # Single version elif self.version_min == self.version_max: return [self.version_min] # Max version and below elif self.version_min is None: return [version for version in self.app.versions if version.value <= self.version_max.value] # Min version and above elif self.version_max is None: return [version for version in self.app.versions if self.version_min.value <= version.value] # Custom range return [version for version in self.app.versions if self.version_min.value <= version.value <= self.version_max.value] class Item(Base): __tablename__ = 'item' id = Column(Integer, primary_key=True) item_versions = relationship('ItemVersion', 
back_populates='item') def __repr__(self): return f'Item {self.id}' @hybrid_property def versions(self): versions = [] for item_version in self.item_versions: versions.extend(item_version.versions) return versions Base.metadata.create_all() py = App(name='Python') session.add(py) py27 = AppVersion(app=py, value=27) py37 = AppVersion(app=py, value=37) py38 = AppVersion(app=py, value=38) py39 = AppVersion(app=py, value=39) session.add(Item(item_versions=[ItemVersion(app=py)])) # [Python:27, Python:37, Python:38, Python:39] session.add(Item(item_versions=[ItemVersion(app=py, version_min=py37)])) # [Python:37, Python:38, Python:39] session.add(Item(item_versions=[ItemVersion(app=py, version_max=py37)])) # [Python:27, Python:37] session.add(Item(item_versions=[ItemVersion(app=py, version_min=py27, version_max=py27)])) # [Python:27] session.commit() for item in session.execute(select(Item)).scalars(): print(f'{item}: {item.versions}') My attempts so far have hit issues before I've got to writing the actual query. With relationships they don't apply any filter on value: class ItemVersion(Base): ... versions = relationship( AppVersion, primaryjoin=and_(AppVersion.app_id == App.id, AppVersion.value == 0), secondaryjoin=app_id == App.id, secondary=App.__table__, viewonly=True, uselist=True, ) # sqlalchemy.exc.ArgumentError: Could not locate any relevant foreign key columns for primary join condition 'app_version.app_id = item_version.app_id' on relationship ItemVersion.versions. Ensure that referencing columns are associated with a ForeignKey or ForeignKeyConstraint, or are annotated in the join condition with the foreign() annotation. With column_property (which I could link with a relationship) it doesn't like more than 1 result: class ItemVersion(Base): ... version_ids = column_property( select(AppVersion.id).where(AppVersion.app_id == app_id) ) # sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) sub-select returns 3 columns - expected 1 This would be my ideal result: class ItemVersion(Base): versions = # generate expression class Item(Base): ... item_versions = relationship('ItemVersion', back_populates='item') versions = association_proxy('item_versions', 'versions') If anyone has a particular section of documentation to point to that would also be appreciated, I'm just struggling a lot with this one. A: It's possible via a relationship but it took a lot of trial and error with joins to get there. Below is what was needed to get it working, although I wouldn't be surprised if there's a more optimal way. class ItemVersion(Base): ... version_min_val = column_property( select(AppVersion.value) .where(AppVersion.id == version_min_id) .correlate_except(AppVersion) .scalar_subquery(), ) version_max_val = column_property( select(AppVersion.value) .where(AppVersion.id == version_max_id) .correlate_except(AppVersion) .scalar_subquery(), ) versions = relationship( AppVersion, primaryjoin=and_( app_id == AppVersion.app_id, case( [and_(version_min_id == None, version_max_id == None), literal(True)], [and_(version_min_id == None, version_max_id != None), AppVersion.value <= version_max_val], [and_(version_min_id != None, version_max_id == None), version_min_val <= AppVersion.value], else_=and_(version_min_val <= AppVersion.value, AppVersion.value <= version_max_val), ) ), viewonly=True, uselist=True, ) class Item(Base): ... 
versions = relationship( AppVersion, primaryjoin=and_( id == ItemVersion.item_id, case( [and_(ItemVersion.version_min_id == None, ItemVersion.version_max_id == None), literal(True)], [and_(ItemVersion.version_min_id == None, ItemVersion.version_max_id != None), AppVersion.value <= ItemVersion.version_max_val], [and_(ItemVersion.version_min_id != None, ItemVersion.version_max_id == None), ItemVersion.version_min_val <= AppVersion.value], else_=and_(ItemVersion.version_min_val <= AppVersion.value, AppVersion.value <= ItemVersion.version_max_val), ) ), secondaryjoin=ItemVersion.app_id == AppVersion.app_id, secondary=ItemVersion.__table__, viewonly=True, uselist=True, ) Full code I used for testing: from sqlalchemy import select, create_engine, Column, Integer, ForeignKey, String, case, and_, literal from sqlalchemy.orm import relationship, sessionmaker, column_property from sqlalchemy.ext.declarative import declarative_base Engine = create_engine('sqlite://') Base = declarative_base(Engine) session = sessionmaker(Engine)() class App(Base): """Each App has name and list of versions.""" __tablename__ = 'app' id = Column(Integer, primary_key=True) name = Column(String(64)) versions = relationship('AppVersion', back_populates='app') def __repr__(self): return self.name class AppVersion(Base): """Each App version has a particular value.""" __tablename__ = 'app_version' id = Column(Integer, primary_key=True) app_id = Column(Integer, ForeignKey('app.id'), nullable=False) value = Column(Integer, nullable=False) app = relationship('App', foreign_keys=app_id, back_populates='versions', innerjoin=True) def __repr__(self): return f'{self.app.name}:{self.value}' class ItemVersion(Base): """The item version links a particular item to an App and it's min/max versions. Using the min and max versions, a range of compatible versions can be generated. 
""" __tablename__ = 'item_version' id = Column(Integer, primary_key=True) item_id = Column(Integer, ForeignKey('item.id')) app_id = Column(Integer, ForeignKey('app.id')) version_min_id = Column(Integer, ForeignKey('app_version.id'), nullable=True) version_max_id = Column(Integer, ForeignKey('app_version.id'), nullable=True) item = relationship('Item', foreign_keys=item_id) app = relationship('App', foreign_keys=app_id) version_min = relationship('AppVersion', foreign_keys=version_min_id) version_max = relationship('AppVersion', foreign_keys=version_max_id) version_min_val = column_property( select(AppVersion.value) .where(AppVersion.id == version_min_id) .correlate_except(AppVersion) .scalar_subquery(), ) version_max_val = column_property( select(AppVersion.value) .where(AppVersion.id == version_max_id) .correlate_except(AppVersion) .scalar_subquery(), ) versions = relationship( AppVersion, primaryjoin=and_( app_id == AppVersion.app_id, case( [and_(version_min_id == None, version_max_id == None), literal(True)], [and_(version_min_id == None, version_max_id != None), AppVersion.value <= version_max_val], [and_(version_min_id != None, version_max_id == None), version_min_val <= AppVersion.value], else_=and_(version_min_val <= AppVersion.value, AppVersion.value <= version_max_val), ) ), viewonly=True, uselist=True, ) class Item(Base): """Each item may have multiple compatible applications with specific versions.""" __tablename__ = 'item' id = Column(Integer, primary_key=True) item_versions = relationship('ItemVersion', back_populates='item') def __repr__(self): return f'Item {self.id}' versions = relationship( AppVersion, primaryjoin=and_( id == ItemVersion.item_id, case( [and_(ItemVersion.version_min_id == None, ItemVersion.version_max_id == None), literal(True)], [and_(ItemVersion.version_min_id == None, ItemVersion.version_max_id != None), AppVersion.value <= ItemVersion.version_max_val], [and_(ItemVersion.version_min_id != None, ItemVersion.version_max_id == None), ItemVersion.version_min_val <= AppVersion.value], else_=and_(ItemVersion.version_min_val <= AppVersion.value, AppVersion.value <= ItemVersion.version_max_val), ) ), secondaryjoin=ItemVersion.app_id == AppVersion.app_id, secondary=ItemVersion.__table__, viewonly=True, uselist=True, ) Base.metadata.create_all() py = App(name='Python') py27 = AppVersion(app=py, value=27) py37 = AppVersion(app=py, value=37) py38 = AppVersion(app=py, value=38) py39 = AppVersion(app=py, value=39) maya = App(name='Maya') m22 = AppVersion(app=maya, value=2022) m23 = AppVersion(app=maya, value=2023) # [Python:27, Python:37, Python:38, Python:39, Maya:2022, Maya:2023] session.add(Item(item_versions=[ItemVersion(app=py), ItemVersion(app=maya)])) # [Python:37, Python:38, Python:39] session.add(Item(item_versions=[ItemVersion(app=py, version_min=py37)])) # [Python:27, Python:37] session.add(Item(item_versions=[ItemVersion(app=py, version_max=py37)])) # [Python:27] session.add(Item(item_versions=[ItemVersion(app=py, version_min=py27, version_max=py27)])) # [Python:27, Python:37, Python:38] session.add(Item(item_versions=[ItemVersion(app=py, version_min=py27, version_max=py38)])) # [Python:27, Python:39, Maya:2022] session.add(Item(item_versions=[ItemVersion(app=py, version_max=py27), ItemVersion(app=py, version_min=py39), ItemVersion(app=maya, version_min=m22, version_max=m22)])) session.commit() stmt = select(Item).where(Item.versions.any(AppVersion.app.has(App.name == 'Maya'))) for item in session.execute(stmt).scalars(): print(f'{item}: {item.versions}')
Creating an SQLAlchemy column to dynamically generate a list of models with an expression
I want to create a relationship column on my model, where it will be built with an expression so that it can be queried. Here's a brief example of my setup: I have each application (eg. Python) stored in the App table. Each version of the application (eg. Python 3.7) is stored under the AppVersion table. My items (in the Item table) have a minimum and maximum supported version per application. This is done with the ItemVersion table, with ItemVersion.version_min and ItemVersion.version_max, for example: min_version=None, max_version=None: Compatible with all versions min_version=None, max_version=27: Compatible with Python 2 and below min_version=37, max_version=None: Compatible with Python 3 and above min_version=37, max_version=39: Compatible with Python 3.7 to 3.9 In this case, I want to generate an expression to return a list of AppVersion records compatible with my item. Below I have used @hybrid_property as an example to mock up how ItemVersion.versions and Item.versions should work. I need it to be compatible with queries though, which this is not (eg. Item.versions.any(AppVersion.id == 1)). from sqlalchemy import select, create_engine, Column, Integer, ForeignKey, String, case, and_, or_ from sqlalchemy.orm import relationship, sessionmaker, column_property from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.ext.hybrid import hybrid_property from sqlalchemy.ext.associationproxy import association_proxy Engine = create_engine('sqlite://') Base = declarative_base(Engine) session = sessionmaker(Engine)() class App(Base): __tablename__ = 'app' id = Column(Integer, primary_key=True) name = Column(String(64)) versions = relationship('AppVersion', back_populates='app') def __repr__(self): return self.name class AppVersion(Base): __tablename__ = 'app_version' id = Column(Integer, primary_key=True) app_id = Column(Integer, ForeignKey('app.id'), nullable=False) value = Column(Integer, nullable=False) app = relationship('App', foreign_keys=app_id, back_populates='versions', innerjoin=True) def __repr__(self): return f'{self.app.name}:{self.value}' class ItemVersion(Base): __tablename__ = 'item_version' id = Column(Integer, primary_key=True) item_id = Column(Integer, ForeignKey('item.id')) app_id = Column(Integer, ForeignKey('app.id')) version_min_id = Column(Integer, ForeignKey('app_version.id'), nullable=True) version_max_id = Column(Integer, ForeignKey('app_version.id'), nullable=True) item = relationship('Item', foreign_keys=item_id) app = relationship('App', foreign_keys=app_id) version_min = relationship('AppVersion', foreign_keys=version_min_id) version_max = relationship('AppVersion', foreign_keys=version_max_id) @hybrid_property def versions(self): # All versions if self.version_min is None and self.version_max is None: return self.app.versions # Single version elif self.version_min == self.version_max: return [self.version_min] # Max version and below elif self.version_min is None: return [version for version in self.app.versions if version.value <= self.version_max.value] # Min version and above elif self.version_max is None: return [version for version in self.app.versions if self.version_min.value <= version.value] # Custom range return [version for version in self.app.versions if self.version_min.value <= version.value <= self.version_max.value] class Item(Base): __tablename__ = 'item' id = Column(Integer, primary_key=True) item_versions = relationship('ItemVersion', back_populates='item') def __repr__(self): return f'Item {self.id}' @hybrid_property def 
versions(self): versions = [] for item_version in self.item_versions: versions.extend(item_version.versions) return versions Base.metadata.create_all() py = App(name='Python') session.add(py) py27 = AppVersion(app=py, value=27) py37 = AppVersion(app=py, value=37) py38 = AppVersion(app=py, value=38) py39 = AppVersion(app=py, value=39) session.add(Item(item_versions=[ItemVersion(app=py)])) # [Python:27, Python:37, Python:38, Python:39] session.add(Item(item_versions=[ItemVersion(app=py, version_min=py37)])) # [Python:37, Python:38, Python:39] session.add(Item(item_versions=[ItemVersion(app=py, version_max=py37)])) # [Python:27, Python:37] session.add(Item(item_versions=[ItemVersion(app=py, version_min=py27, version_max=py27)])) # [Python:27] session.commit() for item in session.execute(select(Item)).scalars(): print(f'{item}: {item.versions}') My attempts so far have hit issues before I've got to writing the actual query. With relationships they don't apply any filter on value: class ItemVersion(Base): ... versions = relationship( AppVersion, primaryjoin=and_(AppVersion.app_id == App.id, AppVersion.value == 0), secondaryjoin=app_id == App.id, secondary=App.__table__, viewonly=True, uselist=True, ) # sqlalchemy.exc.ArgumentError: Could not locate any relevant foreign key columns for primary join condition 'app_version.app_id = item_version.app_id' on relationship ItemVersion.versions. Ensure that referencing columns are associated with a ForeignKey or ForeignKeyConstraint, or are annotated in the join condition with the foreign() annotation. With column_property (which I could link with a relationship) it doesn't like more than 1 result: class ItemVersion(Base): ... version_ids = column_property( select(AppVersion.id).where(AppVersion.app_id == app_id) ) # sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) sub-select returns 3 columns - expected 1 This would be my ideal result: class ItemVersion(Base): versions = # generate expression class Item(Base): ... item_versions = relationship('ItemVersion', back_populates='item') versions = association_proxy('item_versions', 'versions') If anyone has a particular section of documentation to point to that would also be appreciated, I'm just struggling a lot with this one.
[ "It's possible via a relationship but it took a lot of trial and error with joins to get there. Below is what was needed to get it working, although I wouldn't be surprised if there's a more optimal way.\nclass ItemVersion(Base):\n ...\n version_min_val = column_property(\n select(AppVersion.value)\n .where(AppVersion.id == version_min_id)\n .correlate_except(AppVersion)\n .scalar_subquery(),\n )\n version_max_val = column_property(\n select(AppVersion.value)\n .where(AppVersion.id == version_max_id)\n .correlate_except(AppVersion)\n .scalar_subquery(),\n )\n versions = relationship(\n AppVersion,\n primaryjoin=and_(\n app_id == AppVersion.app_id,\n case(\n [and_(version_min_id == None, version_max_id == None), literal(True)],\n [and_(version_min_id == None, version_max_id != None), AppVersion.value <= version_max_val],\n [and_(version_min_id != None, version_max_id == None), version_min_val <= AppVersion.value],\n else_=and_(version_min_val <= AppVersion.value, AppVersion.value <= version_max_val),\n )\n ),\n viewonly=True, uselist=True,\n )\n\nclass Item(Base):\n ...\n versions = relationship(\n AppVersion,\n primaryjoin=and_(\n id == ItemVersion.item_id,\n case(\n [and_(ItemVersion.version_min_id == None, ItemVersion.version_max_id == None), literal(True)],\n [and_(ItemVersion.version_min_id == None, ItemVersion.version_max_id != None), AppVersion.value <= ItemVersion.version_max_val],\n [and_(ItemVersion.version_min_id != None, ItemVersion.version_max_id == None), ItemVersion.version_min_val <= AppVersion.value],\n else_=and_(ItemVersion.version_min_val <= AppVersion.value, AppVersion.value <= ItemVersion.version_max_val),\n )\n ),\n secondaryjoin=ItemVersion.app_id == AppVersion.app_id,\n secondary=ItemVersion.__table__,\n viewonly=True, uselist=True,\n )\n\nFull code I used for testing:\nfrom sqlalchemy import select, create_engine, Column, Integer, ForeignKey, String, case, and_, literal\nfrom sqlalchemy.orm import relationship, sessionmaker, column_property\nfrom sqlalchemy.ext.declarative import declarative_base\n\nEngine = create_engine('sqlite://')\n\nBase = declarative_base(Engine)\n\nsession = sessionmaker(Engine)()\n\nclass App(Base):\n \"\"\"Each App has name and list of versions.\"\"\"\n __tablename__ = 'app'\n id = Column(Integer, primary_key=True)\n name = Column(String(64))\n versions = relationship('AppVersion', back_populates='app')\n\n def __repr__(self):\n return self.name\n\nclass AppVersion(Base):\n \"\"\"Each App version has a particular value.\"\"\"\n __tablename__ = 'app_version'\n id = Column(Integer, primary_key=True)\n app_id = Column(Integer, ForeignKey('app.id'), nullable=False)\n value = Column(Integer, nullable=False)\n\n app = relationship('App', foreign_keys=app_id, back_populates='versions', innerjoin=True)\n\n def __repr__(self):\n return f'{self.app.name}:{self.value}'\n\nclass ItemVersion(Base):\n \"\"\"The item version links a particular item to an App and it's min/max versions.\n Using the min and max versions, a range of compatible versions can be generated.\n \"\"\"\n __tablename__ = 'item_version'\n id = Column(Integer, primary_key=True)\n\n item_id = Column(Integer, ForeignKey('item.id'))\n app_id = Column(Integer, ForeignKey('app.id'))\n version_min_id = Column(Integer, ForeignKey('app_version.id'), nullable=True)\n version_max_id = Column(Integer, ForeignKey('app_version.id'), nullable=True)\n\n item = relationship('Item', foreign_keys=item_id)\n app = relationship('App', foreign_keys=app_id)\n version_min = relationship('AppVersion', 
foreign_keys=version_min_id)\n version_max = relationship('AppVersion', foreign_keys=version_max_id)\n\n version_min_val = column_property(\n select(AppVersion.value)\n .where(AppVersion.id == version_min_id)\n .correlate_except(AppVersion)\n .scalar_subquery(),\n )\n version_max_val = column_property(\n select(AppVersion.value)\n .where(AppVersion.id == version_max_id)\n .correlate_except(AppVersion)\n .scalar_subquery(),\n )\n\n versions = relationship(\n AppVersion,\n primaryjoin=and_(\n app_id == AppVersion.app_id,\n case(\n [and_(version_min_id == None, version_max_id == None), literal(True)],\n [and_(version_min_id == None, version_max_id != None), AppVersion.value <= version_max_val],\n [and_(version_min_id != None, version_max_id == None), version_min_val <= AppVersion.value],\n else_=and_(version_min_val <= AppVersion.value, AppVersion.value <= version_max_val),\n )\n ),\n viewonly=True, uselist=True,\n )\n\nclass Item(Base):\n \"\"\"Each item may have multiple compatible applications with specific versions.\"\"\"\n __tablename__ = 'item'\n id = Column(Integer, primary_key=True)\n item_versions = relationship('ItemVersion', back_populates='item')\n\n def __repr__(self):\n return f'Item {self.id}'\n\n versions = relationship(\n AppVersion,\n primaryjoin=and_(\n id == ItemVersion.item_id,\n case(\n [and_(ItemVersion.version_min_id == None, ItemVersion.version_max_id == None), literal(True)],\n [and_(ItemVersion.version_min_id == None, ItemVersion.version_max_id != None), AppVersion.value <= ItemVersion.version_max_val],\n [and_(ItemVersion.version_min_id != None, ItemVersion.version_max_id == None), ItemVersion.version_min_val <= AppVersion.value],\n else_=and_(ItemVersion.version_min_val <= AppVersion.value, AppVersion.value <= ItemVersion.version_max_val),\n )\n ),\n secondaryjoin=ItemVersion.app_id == AppVersion.app_id,\n secondary=ItemVersion.__table__,\n viewonly=True, uselist=True,\n )\n\nBase.metadata.create_all()\n\npy = App(name='Python')\npy27 = AppVersion(app=py, value=27)\npy37 = AppVersion(app=py, value=37)\npy38 = AppVersion(app=py, value=38)\npy39 = AppVersion(app=py, value=39)\nmaya = App(name='Maya')\nm22 = AppVersion(app=maya, value=2022)\nm23 = AppVersion(app=maya, value=2023)\n\n# [Python:27, Python:37, Python:38, Python:39, Maya:2022, Maya:2023]\nsession.add(Item(item_versions=[ItemVersion(app=py), ItemVersion(app=maya)]))\n# [Python:37, Python:38, Python:39]\nsession.add(Item(item_versions=[ItemVersion(app=py, version_min=py37)]))\n# [Python:27, Python:37]\nsession.add(Item(item_versions=[ItemVersion(app=py, version_max=py37)]))\n# [Python:27]\nsession.add(Item(item_versions=[ItemVersion(app=py, version_min=py27, version_max=py27)]))\n# [Python:27, Python:37, Python:38]\nsession.add(Item(item_versions=[ItemVersion(app=py, version_min=py27, version_max=py38)]))\n# [Python:27, Python:39, Maya:2022]\nsession.add(Item(item_versions=[ItemVersion(app=py, version_max=py27), ItemVersion(app=py, version_min=py39), ItemVersion(app=maya, version_min=m22, version_max=m22)]))\n\nsession.commit()\n\nstmt = select(Item).where(Item.versions.any(AppVersion.app.has(App.name == 'Maya')))\nfor item in session.execute(stmt).scalars():\n print(f'{item}: {item.versions}')\n\n" ]
[ 0 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0074478264_python_sqlalchemy.txt
Q: style.css not loading in django html template I created a login page and calling a static css sheet for styling but its not working. I load static in my login.html and use <html> <head> <title>PWC Login</title> <link rel="stylesheet" href="{% static 'css/style.css'%}"> </head> <body> <div class="loginbox"> <img src="{% static 'images/avatar.png' %}" class="avatar"> <h1 class="h1">Login Here</h1> <form> <p>Username</p> <input type="text" name="" placeholder="Enter Username"> <p>Password</p> <input type="password" name="" placeholder="Enter Password"> <input type="submit" name="" value="Login"> <a href="#">Forgot your password?</a><br> <a href="#">Don't have an account?</a> </form> </div> </body> </html> css: body { margin: 0; padding: 0; background: url('static/images/pic1.jpeg'); background-size: cover; background-position: center; font-family: sans-serif; } .loginbox{ width:320px; height:420px; background: #000; color: #fff; top:50%; left:50%; position: absolute; transform: translate(-50%,-50%); box-sizing: border-box; padding: 70px 30px; } .avatar{ width: 100px; height: 100px; border-radius: 50%; position: absolute; top: -50px; left: calc(50% - 50px); } i tried the html and css file separately and it works fine. the background is the jpeg file and the login box is centered. it has to be something in django but not sure what it is. A: STATIC_URL = '/static/' STATICFILES_DIRS = ( os.path.join(BASE_DIR, "static/"), )
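For completeness, a minimal sketch of the pieces that usually have to line up for {% static %} to resolve; the paths below are assumptions based on the layout implied in the question, not confirmed project settings:

    # settings.py
    import os
    STATIC_URL = '/static/'
    STATICFILES_DIRS = [os.path.join(BASE_DIR, 'static')]

    {# login.html: the load tag must appear before {% static %} is used #}
    {% load static %}
    <link rel="stylesheet" href="{% static 'css/style.css' %}">

One more thing worth checking: inside style.css the background image path is resolved relative to the CSS file itself, so if style.css lives in static/css/, url('static/images/pic1.jpeg') would typically need to become url('../images/pic1.jpeg').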
style.css not loading in django html template
I created a login page and calling a static css sheet for styling but its not working. I load static in my login.html and use <html> <head> <title>PWC Login</title> <link rel="stylesheet" href="{% static 'css/style.css'%}"> </head> <body> <div class="loginbox"> <img src="{% static 'images/avatar.png' %}" class="avatar"> <h1 class="h1">Login Here</h1> <form> <p>Username</p> <input type="text" name="" placeholder="Enter Username"> <p>Password</p> <input type="password" name="" placeholder="Enter Password"> <input type="submit" name="" value="Login"> <a href="#">Forgot your password?</a><br> <a href="#">Don't have an account?</a> </form> </div> </body> </html> css: body { margin: 0; padding: 0; background: url('static/images/pic1.jpeg'); background-size: cover; background-position: center; font-family: sans-serif; } .loginbox{ width:320px; height:420px; background: #000; color: #fff; top:50%; left:50%; position: absolute; transform: translate(-50%,-50%); box-sizing: border-box; padding: 70px 30px; } .avatar{ width: 100px; height: 100px; border-radius: 50%; position: absolute; top: -50px; left: calc(50% - 50px); } i tried the html and css file separately and it works fine. the background is the jpeg file and the login box is centered. it has to be something in django but not sure what it is.
[ "STATIC_URL = '/static/'\nSTATICFILES_DIRS = (\n os.path.join(BASE_DIR, \"static/\"),\n)\n\n" ]
[ 0 ]
[]
[]
[ "css", "django", "html", "python", "static" ]
stackoverflow_0074578616_css_django_html_python_static.txt
Q: assuming IAM role multiple times using boto3 in python I'm new to working with aws and I'm not sure if I'm wording it correctly but basically, I need to sso in an account -> assume a role in account A -> then assume a role in account B. I am following this article (https://medium.com/geekculture/programming-aws-iam-using-aws-python-sdk-boto3-part-4-62f2f1c21584) on how to assume the role. After I assume the first role in account A, I get the "iam_client" (from the article), then I don't know how can I assume the second role from account B. import boto3 from botocore.exceptions import ClientError def lambda_handler(event, context): sts_client = boto3.client('sts') try: response = sts_client.assume_role( RoleArn='arn:aws:iam::<TRUSTING_ACCOUNT_ID>:role/<ROLE_NAME_IN_TRUSTING_ACCOUNT>', RoleSessionName='assume_role_session' ) except ClientError as error: print('Unexpected error occurred... could not assume role', error) return error try: iam_client = boto3.client('iam', aws_access_key_id=response['Credentials']['AccessKeyId'], aws_secret_access_key=response['Credentials']['SecretAccessKey'], aws_session_token=response['Credentials']['SessionToken'] ) except ClientError as error: print('Unexpected error occurred... could not create iam client on trusting account', error) return error EDIT: my aws accounts are setup in a way that I can't assume the role in account B right after sso login. The trust relationship is setup in a way that it only allows a role in account A to assume it. A: Check the AWS Official Code Library that contains this use case. When looking for an AWS code example, check this New AWS Doc. As you can see, the code library shows this use case in different supported programming langanges. The topic is here: Create an IAM user and assume a role with AWS STS using an AWS SDK You can assume roles by following the Python example.
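Since the answer above only points at the documentation, here is a rough sketch of what chaining the two assumptions can look like with boto3. The ARNs and session names are placeholders, and the second hop only works because account B's role trusts the role in account A, as described in the question:

    import boto3

    def assume(sts_client, role_arn, session_name):
        # Exchange the current credentials for temporary credentials of the assumed role.
        creds = sts_client.assume_role(RoleArn=role_arn, RoleSessionName=session_name)['Credentials']
        return boto3.Session(
            aws_access_key_id=creds['AccessKeyId'],
            aws_secret_access_key=creds['SecretAccessKey'],
            aws_session_token=creds['SessionToken'],
        )

    # Hop 1: from the SSO/base credentials into the role in account A.
    session_a = assume(boto3.client('sts'), 'arn:aws:iam::<ACCOUNT_A_ID>:role/<ROLE_A>', 'hop_to_a')

    # Hop 2: use account A's temporary credentials to assume the role in account B.
    session_b = assume(session_a.client('sts'), 'arn:aws:iam::<ACCOUNT_B_ID>:role/<ROLE_B>', 'hop_to_b')

    iam_client_b = session_b.client('iam')  # clients made from session_b act as the role in account B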
assuming IAM role multiple times using boto3 in python
I'm new to working with aws and I'm not sure if I'm wording it correctly but basically, I need to sso in an account -> assume a role in account A -> then assume a role in account B. I am following this article (https://medium.com/geekculture/programming-aws-iam-using-aws-python-sdk-boto3-part-4-62f2f1c21584) on how to assume the role. After I assume the first role in account A, I get the "iam_client" (from the article), then I don't know how can I assume the second role from account B. import boto3 from botocore.exceptions import ClientError def lambda_handler(event, context): sts_client = boto3.client('sts') try: response = sts_client.assume_role( RoleArn='arn:aws:iam::<TRUSTING_ACCOUNT_ID>:role/<ROLE_NAME_IN_TRUSTING_ACCOUNT>', RoleSessionName='assume_role_session' ) except ClientError as error: print('Unexpected error occurred... could not assume role', error) return error try: iam_client = boto3.client('iam', aws_access_key_id=response['Credentials']['AccessKeyId'], aws_secret_access_key=response['Credentials']['SecretAccessKey'], aws_session_token=response['Credentials']['SessionToken'] ) except ClientError as error: print('Unexpected error occurred... could not create iam client on trusting account', error) return error EDIT: my aws accounts are setup in a way that I can't assume the role in account B right after sso login. The trust relationship is setup in a way that it only allows a role in account A to assume it.
[ "Check the AWS Official Code Library that contains this use case. When looking for an AWS code example, check this New AWS Doc.\n\nAs you can see, the code library shows this use case in different supported programming langanges. The topic is here:\nCreate an IAM user and assume a role with AWS STS using an AWS SDK\nYou can assume roles by following the Python example.\n" ]
[ 0 ]
[]
[]
[ "amazon_iam", "amazon_web_services", "assume_role", "boto3", "python" ]
stackoverflow_0074630630_amazon_iam_amazon_web_services_assume_role_boto3_python.txt
Q: Different color of every single bar of seaborn bar plot I have a very wide range of data that I plot using seaborn bar plot. As I use hue, the two different colors are the same for all the data, but I want that every single bar is a different color. #This is the colors I want for every bar: palette = ["#fee090","#fdae61","#4575b4","#313695","#e0f3f8","#abd9e9","#d73027", "#a50026"] ax3=sns.barplot(data=Results,x="Mineral ", y="FLT",hue="Media size ",palette=palette,ci=None, ecolor='black',edgecolor='black',) #this is my data frame Media size Material bead Mineral FLT 1.70 MinFree 0.00 14.86 1.70 MinFree 0.00 14.34 3.00 MinFree 0.00 9.95 3.00 MinFree 0.00 9.68 1.70 GIC 4.00 14.87 1.70 GIC 4.00 14.38 3.00 GIC 4.00 11.80 3.00 GIC 4.00 11.12 1.70 IC60 4.00 11.80 1.70 IC60 4.00 11.12 3.00 IC60 4.00 9.24 3.00 IC60 4.00 8.99 1.70 BHX 4.00 9.85 1.70 BHX 4.00 9.70 3.00 BHX 4.00 7.17 3.00 BHX 4.00 6.70 #This is the result: result and as you can see it just takes the two first values of the pallette. Another doubt. When I run the ci=69 for standard error, the error bars that I obtain are not correct. In some bars are mising, it makes somehting weird. Any hint on this? THANKS A: When working with hue, seaborn assigns one color per hue value. In this case there seem to be two hue values (1.70 and 3.00), so two colors are used. To give each bar a separate color, you could iterate through the generated bars and manually assign the colors. Note that for each hue value, a container is created for the bars with that hue. You can use the HandlerTuple legend handler to show all color in the legend. import matplotlib.pyplot as plt from matplotlib.legend_handler import HandlerTuple import seaborn as sns import pandas as pd palette = ["#fee090", "#fdae61", "#4575b4", "#313695", "#e0f3f8", "#abd9e9", "#d73027", "#a50026"] df = pd.read_html('https://stackoverflow.com/questions/74617540')[0] ax = sns.barplot(data=df, x='Material bead', y='FLT', hue='Media size', palette=palette, errorbar=None, edgecolor='black') ###errorbar=('ci', 69)) for bars, colors in zip(ax.containers, (palette[0::2], palette[1::2])): for bar, color in zip(bars, colors): bar.set_facecolor(color) ax.legend(handles=[tuple(bar_group) for bar_group in ax.containers], labels=[bar_group.get_label() for bar_group in ax.containers], title=ax.legend_.get_title().get_text(), handlelength=4, handler_map={tuple: HandlerTuple(ndivide=None, pad=0.1)}) sns.despine() plt.show() If you want to color the bars and hatch one of the hues with a pattern, you can call set_hatch() on those bars. You could update the legend already created by seaborn to indicate the hatching on a grey background. (Creating a new legend, as in the previous example, would also work here. But note that hatching is not so easy to see in small areas.) import matplotlib.pyplot as plt import seaborn as sns import pandas as pd df = pd.read_html('https://stackoverflow.com/questions/74617540')[0] ax = sns.barplot(data=df, x='Material bead', y='FLT', hue='Media size', palette=['lightgrey', 'lightgrey'], errorbar=None, edgecolor='black') palette = ["#fee090", "#4575b4", "#e0f3f8", "#d73027"] for bars, hatch, legend_handle in zip(ax.containers, ['', '//'], ax.legend_.legendHandles): for bar, color in zip(bars, palette): bar.set_facecolor(color) bar.set_hatch(hatch) # update the existing legend, use twice the hatching pattern to make it denser legend_handle.set_hatch(hatch + hatch) sns.despine() plt.show()
Different color of every single bar of seaborn bar plot
I have a very wide range of data that I plot using seaborn bar plot. As I use hue, the two different colors are the same for all the data, but I want that every single bar is a different color. #This is the colors I want for every bar: palette = ["#fee090","#fdae61","#4575b4","#313695","#e0f3f8","#abd9e9","#d73027", "#a50026"] ax3=sns.barplot(data=Results,x="Mineral ", y="FLT",hue="Media size ",palette=palette,ci=None, ecolor='black',edgecolor='black',) #this is my data frame Media size Material bead Mineral FLT 1.70 MinFree 0.00 14.86 1.70 MinFree 0.00 14.34 3.00 MinFree 0.00 9.95 3.00 MinFree 0.00 9.68 1.70 GIC 4.00 14.87 1.70 GIC 4.00 14.38 3.00 GIC 4.00 11.80 3.00 GIC 4.00 11.12 1.70 IC60 4.00 11.80 1.70 IC60 4.00 11.12 3.00 IC60 4.00 9.24 3.00 IC60 4.00 8.99 1.70 BHX 4.00 9.85 1.70 BHX 4.00 9.70 3.00 BHX 4.00 7.17 3.00 BHX 4.00 6.70 #This is the result: result and as you can see it just takes the two first values of the pallette. Another doubt. When I run the ci=69 for standard error, the error bars that I obtain are not correct. In some bars are mising, it makes somehting weird. Any hint on this? THANKS
[ "When working with hue, seaborn assigns one color per hue value. In this case there seem to be two hue values (1.70 and 3.00), so two colors are used.\nTo give each bar a separate color, you could iterate through the generated bars and manually assign the colors. Note that for each hue value, a container is created for the bars with that hue.\nYou can use the HandlerTuple legend handler to show all color in the legend.\nimport matplotlib.pyplot as plt\nfrom matplotlib.legend_handler import HandlerTuple\nimport seaborn as sns\nimport pandas as pd\n\npalette = [\"#fee090\", \"#fdae61\", \"#4575b4\", \"#313695\", \"#e0f3f8\", \"#abd9e9\", \"#d73027\", \"#a50026\"]\n\ndf = pd.read_html('https://stackoverflow.com/questions/74617540')[0]\n\nax = sns.barplot(data=df, x='Material bead', y='FLT', hue='Media size', palette=palette, errorbar=None,\n edgecolor='black') ###errorbar=('ci', 69))\nfor bars, colors in zip(ax.containers, (palette[0::2], palette[1::2])):\n for bar, color in zip(bars, colors):\n bar.set_facecolor(color)\nax.legend(handles=[tuple(bar_group) for bar_group in ax.containers],\n labels=[bar_group.get_label() for bar_group in ax.containers],\n title=ax.legend_.get_title().get_text(),\n handlelength=4, handler_map={tuple: HandlerTuple(ndivide=None, pad=0.1)})\nsns.despine()\nplt.show()\n\n\nIf you want to color the bars and hatch one of the hues with a pattern, you can call set_hatch() on those bars. You could update the legend already created by seaborn to indicate the hatching on a grey background. (Creating a new legend, as in the previous example, would also work here. But note that hatching is not so easy to see in small areas.)\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pandas as pd\n\ndf = pd.read_html('https://stackoverflow.com/questions/74617540')[0]\n\nax = sns.barplot(data=df, x='Material bead', y='FLT', hue='Media size', palette=['lightgrey', 'lightgrey'],\n errorbar=None, edgecolor='black')\npalette = [\"#fee090\", \"#4575b4\", \"#e0f3f8\", \"#d73027\"]\nfor bars, hatch, legend_handle in zip(ax.containers, ['', '//'], ax.legend_.legendHandles):\n for bar, color in zip(bars, palette):\n bar.set_facecolor(color)\n bar.set_hatch(hatch)\n # update the existing legend, use twice the hatching pattern to make it denser\n legend_handle.set_hatch(hatch + hatch)\nsns.despine()\nplt.show()\n\n\n" ]
[ 0 ]
[]
[]
[ "bar_chart", "colors", "matplotlib", "python", "seaborn" ]
stackoverflow_0074617540_bar_chart_colors_matplotlib_python_seaborn.txt
Q: generate dynamic task using hooks without running them in backend I have a simple dag - that takes argument from mysql db - (like sql, subject) Then I have a function creating report out and send to particular email. Here is code snippet. def s_report(k,**kwargs): body_sql = list2[k][4] request1 = "({})".format(body_sql) dwh_hook = SnowflakeHook(snowflake_conn_id="snowflake_conn") df1 = dwh_hook.get_pandas_df(request1) df2 = df1.to_html() body_Text = list2[k][3] html_content = f"""HI Team, Please find report<br><br> {df2} <br> </br> <b>Thank you!</b><br> """ return EmailOperator(task_id="send_email_snowflake{}".format(k), to=list2[k][1], subject=f"{list2[k][2]}", html_content=html_content, dag=dag) for j in range(len(list)): mysql_list >> [ s_report(j)] >> end_operator The s_report is getting generated dynamically, But the real problem is hook is continously submitting query in backend, While dag is stopped still its submitting query in backend. I can use pythonoperator, but its not generating dynamic task. A: A couple of things: By looking at your code, in particular the lines: for j in range(len(list)): mysql_list >> [ s_report(j)] >> end_operator we can determine that if your first task succeeds, namely, mysql_list, then the tasks downstream to it, namely, the s_report calls should begin executing. You have precisely len(list) of them. Within each s_report call there is exactly one dwh_hook.get_pandas_df(request) call, so I believe your DAG should be making len(list) calls of this type provided mysql_list task succeeds. As for the mismatch you see in your Snowflake logs, I can't advise you here. I'd need more details. Keep in mind that the call get_pandas_df might have a retry mechanism (i.e. if cannot reach snowflake, retry) which might explain why your Snowflake logs show a bunch of requests. If your DAG finishes successfully, (i.e. end_operator tasks finishes successfully), you are correct. There should be no requests in your Snowflake logs that came post-DAG end. If you want more insight as to how your DAG interacts with your Snowflake resource, I'd suggest having a single s_report task like so: mysql_list >> [ s_report(0)] >> end_operator and see the behaviour in the logs.
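One detail worth adding, offered as a hedged guess since only a fragment of the DAG file is shown: because s_report(j) is called at module level, the dwh_hook.get_pandas_df(...) inside it runs every time the scheduler parses the DAG file, not only when a task executes, which would explain queries appearing in the backend even while the DAG is paused. A sketch of one way to keep the per-row task generation but defer the query to execution time (list2, the connection id, dag, mysql_list and end_operator are taken from the question; send_email is Airflow's built-in email utility):

    from airflow.operators.python import PythonOperator
    from airflow.providers.snowflake.hooks.snowflake import SnowflakeHook
    from airflow.utils.email import send_email

    def build_and_send_report(k, **kwargs):
        # Everything in here runs only when the task instance executes.
        body_sql = list2[k][4]
        dwh_hook = SnowflakeHook(snowflake_conn_id="snowflake_conn")
        df = dwh_hook.get_pandas_df("({})".format(body_sql))
        html_content = f"HI Team, Please find report<br><br>{df.to_html()}<br><b>Thank you!</b><br>"
        send_email(to=list2[k][1], subject=list2[k][2], html_content=html_content)

    for k in range(len(list2)):
        report = PythonOperator(
            task_id=f"send_email_snowflake{k}",
            python_callable=build_and_send_report,
            op_kwargs={"k": k},
            dag=dag,
        )
        mysql_list >> report >> end_operator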
generate dynamic task using hooks without running them in backend
I have a simple dag - that takes argument from mysql db - (like sql, subject) Then I have a function creating report out and send to particular email. Here is code snippet. def s_report(k,**kwargs): body_sql = list2[k][4] request1 = "({})".format(body_sql) dwh_hook = SnowflakeHook(snowflake_conn_id="snowflake_conn") df1 = dwh_hook.get_pandas_df(request1) df2 = df1.to_html() body_Text = list2[k][3] html_content = f"""HI Team, Please find report<br><br> {df2} <br> </br> <b>Thank you!</b><br> """ return EmailOperator(task_id="send_email_snowflake{}".format(k), to=list2[k][1], subject=f"{list2[k][2]}", html_content=html_content, dag=dag) for j in range(len(list)): mysql_list >> [ s_report(j)] >> end_operator The s_report is getting generated dynamically, But the real problem is hook is continously submitting query in backend, While dag is stopped still its submitting query in backend. I can use pythonoperator, but its not generating dynamic task.
[ "A couple of things:\nBy looking at your code, in particular the lines:\nfor j in range(len(list)):\nmysql_list >> [ s_report(j)] >> end_operator\n\nwe can determine that if your first task succeeds, namely, mysql_list, then the tasks downstream to it, namely, the s_report calls should begin executing. You have precisely len(list) of them. Within each s_report call there is exactly one dwh_hook.get_pandas_df(request) call, so I believe your DAG should be making len(list) calls of this type provided mysql_list task succeeds.\nAs for the mismatch you see in your Snowflake logs, I can't advise you here. I'd need more details. Keep in mind that the call get_pandas_df might have a retry mechanism (i.e. if cannot reach snowflake, retry) which might explain why your Snowflake logs show a bunch of requests.\nIf your DAG finishes successfully, (i.e. end_operator tasks finishes successfully), you are correct. There should be no requests in your Snowflake logs that came post-DAG end.\nIf you want more insight as to how your DAG interacts with your Snowflake resource, I'd suggest having a single s_report task like so:\nmysql_list >> [ s_report(0)] >> end_operator\n\nand see the behaviour in the logs.\n" ]
[ 0 ]
[]
[]
[ "airflow", "python", "snowflake_cloud_data_platform" ]
stackoverflow_0074596752_airflow_python_snowflake_cloud_data_platform.txt
Q: Why am I getting leading and trailing backslash when I replace my placeholder in Python? I have this sample query string: """SELECT security_id AS securityID, trade_date AS date, available, currency_code AS sourceCurrency FROM cppib_market_passive_swap_availability WHERE trade_date = '{file_date}' """.format(file_date=passive_availablity_date.strftime('%Y-%m-%d') When the code runs with passive_availablity_date having a datetime value '2022-11-29 00:00:00' the string that gets formed is: SELECT security_id AS securityID, trade_date AS date, available, currency_code AS sourceCurrency FROM cppib_market_passive_swap_availability WHERE trade_date = \'2022-11-29\' I dont want to get the backslashes in the date it should be just trade_date = '2022-11-29' . I have another similar string but there it works fine. I am not able to understand what is happening here. Can anyone please help me? A: The backslashes you see around the datetime (\') are likely just artifacts to remind you that they are literal single quotes, perhaps inside of a singly quoted string. As a side note, it is generally bad practice to be injecting a value into a SQL query this way. Instead, you should learn how to use a prepared statement.
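To make that side note concrete, a hedged sketch of the parameterized version. The placeholder style depends on the DB driver in use (%s for most MySQL/PostgreSQL drivers, ? for sqlite3 and many ODBC drivers), cursor stands for whatever DB-API cursor the surrounding code already has, and, as posted, the .format(...) call in the question is also missing its closing parenthesis:

    query = """SELECT security_id AS securityID, trade_date AS date, available,
                      currency_code AS sourceCurrency
               FROM cppib_market_passive_swap_availability
               WHERE trade_date = %s"""

    cursor.execute(query, (passive_availablity_date.strftime('%Y-%m-%d'),))
    rows = cursor.fetchall()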
Why am I getting leading and trailing backslash when I replace my placeholder in Python?
I have this sample query string: """SELECT security_id AS securityID, trade_date AS date, available, currency_code AS sourceCurrency FROM cppib_market_passive_swap_availability WHERE trade_date = '{file_date}' """.format(file_date=passive_availablity_date.strftime('%Y-%m-%d') When the code runs with passive_availablity_date having a datetime value '2022-11-29 00:00:00' the string that gets formed is: SELECT security_id AS securityID, trade_date AS date, available, currency_code AS sourceCurrency FROM cppib_market_passive_swap_availability WHERE trade_date = \'2022-11-29\' I dont want to get the backslashes in the date it should be just trade_date = '2022-11-29' . I have another similar string but there it works fine. I am not able to understand what is happening here. Can anyone please help me?
[ "The backslashes you see around the datetime (\\') are likely just artifacts to remind you that they are literal single quotes, perhaps inside of a singly quoted string.\nAs a side note, it is generally bad practice to be injecting a value into a SQL query this way. Instead, you should learn how to use a prepared statement.\n" ]
[ 0 ]
[]
[]
[ "python", "sql" ]
stackoverflow_0074630736_python_sql.txt
Q: QMessage in a function thread? I want to show a popup when a function running in a thread finishes, but when it runs the popup the program crashes. I tried starting a thread from the main function thread, but that also crashes the app. I put a large and slow function in a thread so it doesn't freeze the GUI, and when this slow function finishes I want to show a popup with QMessageBox. My solution works, but when I press the 'Ok' button in the popup the program crashes, which I don't want, so I tried to make a thread from the main thread, but it does the same and only crashes the program. I want something like this: def popup(): msg = QMessageBox() msg.setWindowTitle("Alert") ... a = msg.exec_() def slow_func(): time.sleep(10) # example of slow work popup() # when I press 'Ok' it crashes, so I tried... # I tried... def slow_func(): time.sleep(10) # example of slow work threading.Thread(target=popup).start() # still crashing threading.Thread(target=slow_func).start() I don't know how to do it; I tried a thread in a thread but it still crashes when I press the 'Ok' button. The popup works but the app crashes when I press 'Ok'. I'm on Windows 11 using Python 3.10 and PyQt 5.15. A: You must not use any GUI functions outside of the main thread. So you cannot call popup() from your calculation thread, and you can't create a new thread in your calculation thread and have that call it. You must make the main thread call it. For possible solutions see How to properly execute GUI operations in Qt main thread? In short: use the signal-slot mechanism or QMetaObject::invokeMethod()
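A minimal, self-contained sketch of the signal/slot route the answer points to: the worker thread only emits a signal, and the QMessageBox is created in the main (GUI) thread when the connected function runs. The widget and signal names are made up for the example:

    import sys, time
    from PyQt5.QtCore import QThread, pyqtSignal
    from PyQt5.QtWidgets import QApplication, QMessageBox, QPushButton

    class Worker(QThread):
        finished_ok = pyqtSignal(str)  # carries a message back to the GUI thread

        def run(self):
            time.sleep(10)                           # stand-in for the slow work
            self.finished_ok.emit("Slow task done")  # no GUI calls in this thread

    def show_popup(text):
        # Runs in the main thread, delivered through the Qt event loop.
        msg = QMessageBox()
        msg.setWindowTitle("Alert")
        msg.setText(text)
        msg.exec_()

    app = QApplication(sys.argv)
    button = QPushButton("Start slow task")
    worker = Worker()
    worker.finished_ok.connect(show_popup)
    button.clicked.connect(worker.start)
    button.show()
    sys.exit(app.exec_())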
QMessage in a function thread?
I want to show a popup when a function running in a thread finishes, but when it runs the popup the program crashes. I tried starting a thread from the main function thread, but that also crashes the app. I put a large and slow function in a thread so it doesn't freeze the GUI, and when this slow function finishes I want to show a popup with QMessageBox. My solution works, but when I press the 'Ok' button in the popup the program crashes, which I don't want, so I tried to make a thread from the main thread, but it does the same and only crashes the program. I want something like this: def popup(): msg = QMessageBox() msg.setWindowTitle("Alert") ... a = msg.exec_() def slow_func(): time.sleep(10) # example of slow work popup() # when I press 'Ok' it crashes, so I tried... # I tried... def slow_func(): time.sleep(10) # example of slow work threading.Thread(target=popup).start() # still crashing threading.Thread(target=slow_func).start() I don't know how to do it; I tried a thread in a thread but it still crashes when I press the 'Ok' button. The popup works but the app crashes when I press 'Ok'. I'm on Windows 11 using Python 3.10 and PyQt 5.15.
[ "You must not use any GUI functions outside of the main thread. So you can not call popup() from your calculation thread and you can't create a new thread in your calculation thread and have that call it. You must make the main thread call it.\nFor possible solutions see How to properly execute GUI operations in Qt main thread?\nIn short: Use the signal-slot-mechanism or QMetaObject::invokeMethod()\n" ]
[ 0 ]
[]
[]
[ "multithreading", "popup", "pyqt", "python", "qmessagebox" ]
stackoverflow_0074630412_multithreading_popup_pyqt_python_qmessagebox.txt
Q: Should I have everything in one script or should I have more scripts and connect them? I'm making a game and I don't know if I should have the whole game in one script, or have the main menu in its own script, the options menu in its own script, and the actual gameplay in its own script... and connect them. So, which is the better option, and if I should make more scripts, how can I connect them? Is it by making a manager script, making the other scripts functions, and then calling them from the manager script? Or should I make the other scripts whole classes? A: If you have separate files for each class (my opinion, by the way; it generally depends on you), it would be much easier to manage your different classes, especially without having to scroll through one giant file. Also, fun fact: this is known as modular programming. Of course, sometimes it may be too complex to relocate all your code to multiple files, or there may be other restrictions preventing you from doing so, and in this scenario you probably wouldn't separate your code into multiple scripts. Now, about making other scripts classes: that would entirely depend on their usage. If you had an interactive menu, or perhaps an interactive object (such as a player or enemy), it would probably be better as a class than a series of functions. But to connect them, with Python you can easily import classes based on their files with import fileName. (Of course, replace fileName with your file's name and path.) You can read more about importing different classes here
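As a concrete illustration of that last point, a tiny sketch of splitting a game into modules and wiring them together from a manager script; the file and class names are invented for the example:

    # player.py
    class Player:
        def __init__(self, name):
            self.name = name
            self.health = 100

    # menu.py
    def main_menu():
        print("1) Play  2) Options  3) Quit")
        return input("> ")

    # main.py -- the manager script that imports and connects the pieces
    from player import Player
    from menu import main_menu

    if __name__ == "__main__":
        choice = main_menu()
        if choice == "1":
            hero = Player("Hero")
            print(f"Starting the game as {hero.name}")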
Should I have everything in one script or should I have more scripts and connect them?
I'm making a game and I don't know if I should have the whole game in one script, or have the main menu in its own script, the options menu in its own script, and the actual gameplay in its own script... and connect them. So, which is the better option, and if I should make more scripts, how can I connect them? Is it by making a manager script, making the other scripts functions, and then calling them from the manager script? Or should I make the other scripts whole classes?
[ "If you have separate files for each class, (my opinion by the way, it generally depends on you) it would be much easier to manage your different classes, especially without having to scroll through one giant file. Also fun fact, this is known as modular programming\nOf course, sometimes it may be too complex to relocate all your code to multiple files, or there may be other restrictions preventing you from doing so, and in this scenario, you probably wouldn't separate your code into multiple scripts.\nNow about making other scripts classes, that would entirely depend on their usage. If you had an interactive menu, or perhaps an interactive object (such as a player or enemy) it would probably be better as a class then a series of functions.\nBut to connect them, with Python, you can easily import the classes based on their files with import fileName. (of course replace fileName with your file's name, and path) You can read more about importing different classes here\n" ]
[ 2 ]
[]
[]
[ "class", "function", "python" ]
stackoverflow_0074630623_class_function_python.txt
Q: Not able to import libraries in my Python project for face recognition I am working on a Python face recognition project, but I am unable to import libraries in my program. They are not working; I try to import different libraries and run the code, but every time it fails in Python 3.9. (A screenshot of the error was attached.) A: Have you installed the library? Also, if you haven't, I suggest you use a virtual environment and install inside it. To install your library, type in your terminal: pip install NAME-OF-THE-LIBRARY Example for face_recognition: pip install face-recognition
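Assuming the missing library is face_recognition, as the answer does, a short sequence of terminal commands that usually covers the virtual-environment route on Windows (note that face-recognition pulls in dlib, which may additionally require CMake and C++ build tools to install):

    python -m venv .venv
    .venv\Scripts\activate
    python -m pip install --upgrade pip
    pip install face-recognition
    python -c "import face_recognition; print('import works')"

The last line is just a quick check that the interpreter you activated can see the package; the project then has to be run with that same interpreter.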
Not able to import libraries in my Python project for face recognition
I am working on a Python face recognition project, but I am unable to import libraries in my program. They are not working; I try to import different libraries and run the code, but every time it fails in Python 3.9. (A screenshot of the error was attached.)
[ "Have you installed the library? Also if you haven't I suggest you use an virtual environment and install inside it.\nTo install your library type in your terminal:\npip install NAME-OF-THE-LIBRARY\n\nExample for the face_recognition:\npip install face-recognition\n\n" ]
[ 0 ]
[]
[]
[ "libraries", "python" ]
stackoverflow_0074630594_libraries_python.txt
Q: Errors using a list of integers (Python + Google Ads API) I have a list of IDs that are integers. If I do print(data_clients["id"]) I get something like: 4323324234 2342342344 5464564565 Then I want to call an API (Google Ads) that uses those numbers as IDs (to know which data to retrieve). I have to do a loop (or something similar) to get the data from each ID. I've tried this for id in range(data_clients["id"]): query = (f''' WHATEVER ''') stream = ga_service.search_stream(customer_id = data_clients["id"], query=query) list_id = [] With this code I get the following error: 4323324234 has type int, but expected one of: bytes, unicode And if I try to convert the int to Unicode or bytes (with chr or to_bytes), I get int too big to convert Maybe the solution is obvious, but I'm a Python/coding beginner, so I'm pretty confused. Any ideas? Thanks! A: Assuming data_clients["id"] is a list of customer IDs, this should work: for cust_id in data_clients["id"]: query = (f''' WHATEVER ''') stream = ga_service.search_stream(customer_id=cust_id, query=query)
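One hedged addition, since the error in the question says an int was given where bytes or unicode was expected: the Google Ads client wants customer_id as a string, so converting each ID is likely the missing piece (the call otherwise mirrors the question's own code):

    for cust_id in data_clients["id"]:
        query = f''' WHATEVER '''
        stream = ga_service.search_stream(
            customer_id=str(cust_id),  # pass the ID as a string, not an int
            query=query,
        )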
Errors using a list of integers (Python + Google Ads API)
I have a list of IDs that are integers. If I do print(data_clients["id"]) I get something like: 4323324234 2342342344 5464564565 Then I want to call an API (Google Ads) that uses those numbers as IDs (to know which data to retrieve). I have to do a loop (or something similar) to get the data from each ID. I've tried this for id in range(data_clients["id"]): query = (f''' WHATEVER ''') stream = ga_service.search_stream(customer_id = data_clients["id"], query=query) list_id = [] With this code I get the following error: 4323324234 has type int, but expected one of: bytes, unicode And if I try to convert the int to Unicode or bytes (with chr or to_bytes), I get int too big to convert Maybe the solution is obvious, but I'm a Python/coding beginner, so I'm pretty confused. Any ideas? Thanks!
[ "Assuming data_clients[\"id\"] is a list of customer IDs, this should work:\nfor cust_id in data_clients[\"id\"]:\n query = (f''' WHATEVER ''')\n stream = ga_service.search_stream(customer_id=cust_id, query=query)\n\n" ]
[ 0 ]
[]
[]
[ "google_ads_api", "python" ]
stackoverflow_0074604670_google_ads_api_python.txt
Q: How to apply multiple functions to same column in Python? I need help on applying my below case statement functions to the same column at once or in parallel? Not sure if I am doing it in the most efficient way, are there alternative ways I can do this? #Accrued Calc for ACT/360 def bbb(bb): if bb["Basis"] == "ACT/360" and bb['Type'] == 'L' and bb['Current Filter'] == 'Current CF': return 1 * bb['Principal/GrossAmount'] * (bb['All in Rate']/100)* (bb['Number of days'])/360 elif bb["Basis"] == "ACT/360" and bb['Type'] == 'D': return -1 * bb['Principal/GrossAmount'] * (bb['All in Rate']/100)* (bb['Number of days'])/360 else: return '' kf['Accrued Calc'] = kf.apply(bbb, axis = 1) #Accrued Calc for ACT/365 def ccc(cc): if cc["Basis"] == "ACT/365" and cc['Type'] == 'L' and cc['Current Filter'] == 'Current CF': return 1 * cc['Principal/GrossAmount'] * (cc['All in Rate']/100)* (cc['Number of days'])/365 elif cc["Basis"] == "ACT/365" and cc['Type'] == 'D': return -1 * cc['Principal/GrossAmount'] * (cc['All in Rate']/100)* (cc['Number of days'])/365 else: return '' kf['Accrued Calc'] = kf.apply(ccc, axis = 1) #Accrued Calc for 30/360 Basis {def ppp(ll): if ll["Basis"] == "30/360" and ll['Type'] == 'L' and ll['Current Filter'] == 'Current CF': return 1 * ll['Principal/GrossAmount'] * (ll['All in Rate']/100)* (360 *(Settlement.year - ll['Start Date YEAR']) + 30 * (Settlement.month - ll['Start Date MONTH']) + Settlement.day - ll['Start Date DAYS'])/360 elif ll["Basis"] == "30/360" and ll['Type'] == 'D': return -1 * ll['Principal/GrossAmount'] * (ll['All in Rate']/100)* (360 *(Settlement.year - ll['Start Date YEAR']) + 30 * (Settlement.month - ll['Start Date MONTH']) + Settlement.day - ll['Start Date DAYS'])/360 else: return '' kf['Accrued Calc'] = kf.apply(ppp, axis = 1)} I tried the below kf['Accrued Calc'] = kf['Accrued Calc'].apply(bbb) & kf['Accrued Calc'].apply(ccc) & kf['Accrued Calc'].apply(ppp) Not sure if it's a good idea to have all my functions under one large function? A: You should have one function to decide which function to call. Apply that function to your dataframe. Depending on your conditions, this function can then call the correct function that will contain the meat of your calculations. 
Also, in the interest of readability, rename your functions and variables to something that makes sense: #Accrued Calc for ACT/360 def accrued_act_360(row): if row['Type'] == 'L' and row['Current Filter'] == 'Current CF': return 1 * row['Principal/GrossAmount'] * (row['All in Rate']/100)* (row['Number of days'])/360 elif row['Type'] == 'D': return -1 * row['Principal/GrossAmount'] * (row['All in Rate']/100)* (row['Number of days'])/360 else: return '' #Accrued Calc for ACT/365 def accrued_act_365(row): if row['Type'] == 'L' and row['Current Filter'] == 'Current CF': return 1 * row['Principal/GrossAmount'] * (row['All in Rate']/100)* (row['Number of days'])/365 elif row['Type'] == 'D': return -1 * row['Principal/GrossAmount'] * (row['All in Rate']/100)* (row['Number of days'])/365 else: return '' #Accrued Calc for 30/360 Basis def accrued_30_360(row): if row['Type'] == 'L' and row['Current Filter'] == 'Current CF': return 1 * row['Principal/GrossAmount'] * (row['All in Rate']/100)* (360 *(Settlement.year - row['Start Date YEAR']) + 30 * (Settlement.month - row['Start Date MONTH']) + Settlement.day - row['Start Date DAYS'])/360 elif row['Type'] == 'D': return -1 * row['Principal/GrossAmount'] * (row['All in Rate']/100)* (360 *(Settlement.year - row['Start Date YEAR']) + 30 * (Settlement.month - row['Start Date MONTH']) + Settlement.day - row['Start Date DAYS'])/360 else: return '' def accrued_calc(row): if row["Basis"] == "ACT/360": return accrued_act_360(row) elif row["Basis"] == "ACT/365": return accrued_act_365(row) elif row["Basis"] == "30/360": return accrued_30_360(row) else: return "" kf['Accrued Calc'] = kf.apply(accrued_calc, axis = 1) However: this approach fails to make use of pandas's amazing vectorized processing powers. You could use boolean indexing to figure out which rows fulfill certain conditions, and only set those rows for the entire dataframe in one shot instead of applying your function row-by-row. This approach will likely be significantly faster than .apply def accrued_act_360_vec(df): # Find which rows match your condition type_l_rows = (df["Basis"] == "ACT/360") & (df["Type"] == "L") & (df["Current Filter"] == "Current CF") # Set the value for those rows df.loc[type_l_rows, "Accrued Calc"] = df.loc[type_l_rows, 'Principal/GrossAmount'] * (df.loc[type_l_rows, 'All in Rate']/100)* (df.loc[type_l_rows, 'Number of days'])/360 type_d_rows = (df["Basis"] == "ACT/360") & (df["Type"] == "D") df.loc[type_d_rows, "Accrued Calc"] = -1 * df.loc[type_d_rows, 'Principal/GrossAmount'] * (df.loc[type_d_rows, 'All in Rate']/100)* (df.loc[type_d_rows, 'Number of days'])/360 # No need to consider the else condition: Those rows never get set. 
def accrued_act_365_vec(df): type_l_rows = (df["Basis"] == "ACT/365") & (df['Type'] == 'L') & (df['Current Filter'] == 'Current CF') df.loc[type_l_rows, "Accrued Calc"] = 1 * df.loc[type_l_rows, 'Principal/GrossAmount'] * (df.loc[type_l_rows, 'All in Rate']/100)* (df.loc[type_l_rows, 'Number of days'])/365 type_d_rows = (df["Basis"] == "ACT/365") & (df['Type'] == 'D') df.loc[type_d_rows, "Accrued Calc"] = -1 * df.loc[type_d_rows, 'Principal/GrossAmount'] * (df.loc[type_d_rows, 'All in Rate']/100)* (df.loc[type_d_rows, 'Number of days'])/365 def accrued_30_360_vec(df): type_l_rows = (df["Basis"] == "30/360") & (df['Type'] == 'L') & (df['Current Filter'] == 'Current CF') df.loc[type_l_rows, "Accrued Calc"] = 1 * df.loc[type_l_rows, 'Principal/GrossAmount'] * (df.loc[type_l_rows, 'All in Rate']/100)* (360 *(Settlement.year - df.loc[type_l_rows, 'Start Date YEAR']) + 30 * (Settlement.month - df.loc[type_l_rows, 'Start Date MONTH']) + Settlement.day - df.loc[type_l_rows, 'Start Date DAYS'])/360 type_d_rows = (df["Basis"] == "30/360") & (df['Type'] == 'D') df.loc[type_d_rows, "Accrued Calc"] = -1 * df.loc[type_d_rows, 'Principal/GrossAmount'] * (df.loc[type_d_rows, 'All in Rate']/100)* (360 *(Settlement.year - df.loc[type_d_rows, 'Start Date YEAR']) + 30 * (Settlement.month - df.loc[type_d_rows, 'Start Date MONTH']) + Settlement.day - df.loc[type_d_rows, 'Start Date DAYS'])/360 Notice these functions include the condition for df["Basis"] == ... because they are all standalone functions. To run these, you'd just do: accrued_act_360_vec(kf) accrued_act_365_vec(kf) accrued_30_360_vec(kf) Please re-check the accuracy of the formulas in my code, I might have accidentally messed them up during copy/paste
How to apply multiple functions to same column in Python?
I need help on applying my below case statement functions to the same column at once or in parallel? Not sure if I am doing it in the most efficient way, are there alternative ways I can do this? #Accrued Calc for ACT/360 def bbb(bb): if bb["Basis"] == "ACT/360" and bb['Type'] == 'L' and bb['Current Filter'] == 'Current CF': return 1 * bb['Principal/GrossAmount'] * (bb['All in Rate']/100)* (bb['Number of days'])/360 elif bb["Basis"] == "ACT/360" and bb['Type'] == 'D': return -1 * bb['Principal/GrossAmount'] * (bb['All in Rate']/100)* (bb['Number of days'])/360 else: return '' kf['Accrued Calc'] = kf.apply(bbb, axis = 1) #Accrued Calc for ACT/365 def ccc(cc): if cc["Basis"] == "ACT/365" and cc['Type'] == 'L' and cc['Current Filter'] == 'Current CF': return 1 * cc['Principal/GrossAmount'] * (cc['All in Rate']/100)* (cc['Number of days'])/365 elif cc["Basis"] == "ACT/365" and cc['Type'] == 'D': return -1 * cc['Principal/GrossAmount'] * (cc['All in Rate']/100)* (cc['Number of days'])/365 else: return '' kf['Accrued Calc'] = kf.apply(ccc, axis = 1) #Accrued Calc for 30/360 Basis {def ppp(ll): if ll["Basis"] == "30/360" and ll['Type'] == 'L' and ll['Current Filter'] == 'Current CF': return 1 * ll['Principal/GrossAmount'] * (ll['All in Rate']/100)* (360 *(Settlement.year - ll['Start Date YEAR']) + 30 * (Settlement.month - ll['Start Date MONTH']) + Settlement.day - ll['Start Date DAYS'])/360 elif ll["Basis"] == "30/360" and ll['Type'] == 'D': return -1 * ll['Principal/GrossAmount'] * (ll['All in Rate']/100)* (360 *(Settlement.year - ll['Start Date YEAR']) + 30 * (Settlement.month - ll['Start Date MONTH']) + Settlement.day - ll['Start Date DAYS'])/360 else: return '' kf['Accrued Calc'] = kf.apply(ppp, axis = 1)} I tried the below kf['Accrued Calc'] = kf['Accrued Calc'].apply(bbb) & kf['Accrued Calc'].apply(ccc) & kf['Accrued Calc'].apply(ppp) Not sure if it's a good idea to have all my functions under one large function?
[ "You should have one function to decide which function to call. Apply that function to your dataframe. Depending on your conditions, this function can then call the correct function that will contain the meat of your calculations. Also, in the interest of readability, rename your functions and variables to something that makes sense:\n#Accrued Calc for ACT/360\ndef accrued_act_360(row):\n if row['Type'] == 'L' and row['Current Filter'] == 'Current CF':\n return 1 * row['Principal/GrossAmount'] * (row['All in Rate']/100)* (row['Number of days'])/360\n elif row['Type'] == 'D':\n return -1 * row['Principal/GrossAmount'] * (row['All in Rate']/100)* (row['Number of days'])/360\n else:\n return ''\n\n\n#Accrued Calc for ACT/365\ndef accrued_act_365(row):\n if row['Type'] == 'L' and row['Current Filter'] == 'Current CF':\n return 1 * row['Principal/GrossAmount'] * (row['All in Rate']/100)* (row['Number of days'])/365\n elif row['Type'] == 'D':\n return -1 * row['Principal/GrossAmount'] * (row['All in Rate']/100)* (row['Number of days'])/365\n else:\n return ''\n\n#Accrued Calc for 30/360 Basis \ndef accrued_30_360(row):\n if row['Type'] == 'L' and row['Current Filter'] == 'Current CF':\n return 1 * row['Principal/GrossAmount'] * (row['All in Rate']/100)* (360 *(Settlement.year - row['Start Date YEAR']) + 30 * (Settlement.month - row['Start Date MONTH']) + Settlement.day - row['Start Date DAYS'])/360\n elif row['Type'] == 'D':\n return -1 * row['Principal/GrossAmount'] * (row['All in Rate']/100)* (360 *(Settlement.year - row['Start Date YEAR']) + 30 * (Settlement.month - row['Start Date MONTH']) + Settlement.day - row['Start Date DAYS'])/360\n else:\n return ''\n\ndef accrued_calc(row):\n if row[\"Basis\"] == \"ACT/360\":\n return accrued_act_360(row)\n elif row[\"Basis\"] == \"ACT/365\":\n return accrued_act_365(row)\n elif row[\"Basis\"] == \"30/360\":\n return accrued_30_360(row)\n else:\n return \"\"\n\nkf['Accrued Calc'] = kf.apply(accrued_calc, axis = 1)\n\n\nHowever: this approach fails to make use of pandas's amazing vectorized processing powers.\nYou could use boolean indexing to figure out which rows fulfill certain conditions, and only set those rows for the entire dataframe in one shot instead of applying your function row-by-row. 
This approach will likely be significantly faster than .apply\ndef accrued_act_360_vec(df):\n # Find which rows match your condition\n type_l_rows = (df[\"Basis\"] == \"ACT/360\") & (df[\"Type\"] == \"L\") & (df[\"Current Filter\"] == \"Current CF\")\n\n # Set the value for those rows\n df.loc[type_l_rows, \"Accrued Calc\"] = df.loc[type_l_rows, 'Principal/GrossAmount'] * (df.loc[type_l_rows, 'All in Rate']/100)* (df.loc[type_l_rows, 'Number of days'])/360\n\n type_d_rows = (df[\"Basis\"] == \"ACT/360\") & (df[\"Type\"] == \"D\")\n df.loc[type_d_rows, \"Accrued Calc\"] = -1 * df.loc[type_d_rows, 'Principal/GrossAmount'] * (df.loc[type_d_rows, 'All in Rate']/100)* (df.loc[type_d_rows, 'Number of days'])/360\n\n # No need to consider the else condition: Those rows never get set.\n\ndef accrued_act_365_vec(df):\n type_l_rows = (df[\"Basis\"] == \"ACT/365\") & (df['Type'] == 'L') & (df['Current Filter'] == 'Current CF')\n df.loc[type_l_rows, \"Accrued Calc\"] = 1 * df.loc[type_l_rows, 'Principal/GrossAmount'] * (df.loc[type_l_rows, 'All in Rate']/100)* (df.loc[type_l_rows, 'Number of days'])/365\n\n type_d_rows = (df[\"Basis\"] == \"ACT/365\") & (df['Type'] == 'D')\n df.loc[type_d_rows, \"Accrued Calc\"] = -1 * df.loc[type_d_rows, 'Principal/GrossAmount'] * (df.loc[type_d_rows, 'All in Rate']/100)* (df.loc[type_d_rows, 'Number of days'])/365\n\n\ndef accrued_30_360_vec(df):\n type_l_rows = (df[\"Basis\"] == \"30/360\") & (df['Type'] == 'L') & (df['Current Filter'] == 'Current CF')\n df.loc[type_l_rows, \"Accrued Calc\"] = 1 * df.loc[type_l_rows, 'Principal/GrossAmount'] * (df.loc[type_l_rows, 'All in Rate']/100)* (360 *(Settlement.year - df.loc[type_l_rows, 'Start Date YEAR']) + 30 * (Settlement.month - df.loc[type_l_rows, 'Start Date MONTH']) + Settlement.day - df.loc[type_l_rows, 'Start Date DAYS'])/360\n \n type_d_rows = (df[\"Basis\"] == \"30/360\") & (df['Type'] == 'D')\n df.loc[type_d_rows, \"Accrued Calc\"] = -1 * df.loc[type_d_rows, 'Principal/GrossAmount'] * (df.loc[type_d_rows, 'All in Rate']/100)* (360 *(Settlement.year - df.loc[type_d_rows, 'Start Date YEAR']) + 30 * (Settlement.month - df.loc[type_d_rows, 'Start Date MONTH']) + Settlement.day - df.loc[type_d_rows, 'Start Date DAYS'])/360\n\nNotice these functions include the condition for df[\"Basis\"] == ... because they are all standalone functions. To run these, you'd just do:\naccrued_act_360_vec(kf)\naccrued_act_365_vec(kf)\naccrued_30_360_vec(kf)\n\nPlease re-check the accuracy of the formulas in my code, I might have accidentally messed them up during copy/paste\n" ]
[ 1 ]
[]
[]
[ "apply", "finance", "function", "pandas", "python" ]
stackoverflow_0074630318_apply_finance_function_pandas_python.txt
Q: How do I make 2 hour windows using data that's all 1 hour windows I have data that looks like this:
Datetime Price
and was just wondering how I would turn them into 2 hour windows instead and use the average of the price of the two
A: Let us do resample with 2h freq
df['Datetime'] = pd.to_datetime(df['Datetime'], dayfirst=True)
df.resample('2h', on='Datetime', origin='start')['Price'].mean()
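A small worked example of that call, with made-up hourly prices, to show what the 2-hour averages look like:
import pandas as pd

df = pd.DataFrame({
    'Datetime': ['27/11/2022 00:00', '27/11/2022 01:00', '27/11/2022 02:00', '27/11/2022 03:00'],
    'Price': [10.0, 12.0, 20.0, 22.0],
})
df['Datetime'] = pd.to_datetime(df['Datetime'], dayfirst=True)
print(df.resample('2h', on='Datetime', origin='start')['Price'].mean())
# 2022-11-27 00:00:00    11.0
# 2022-11-27 02:00:00    21.0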
How do I make 2 hour windows using data that's all 1 hour windows
I have data that looks like this: Datetime Price and was just wondering how I would turn them into 2 hour windows instead and use the average of the price of the two
[ "Let us do resample with 2h freq\ndf['Datetime'] = pd.to_datetime(df['Datetime'], dayfirst=True)\ndf.resample('2h', on='Datetime', origin='start')['Price'].mean()\n\n" ]
[ 1 ]
[]
[]
[ "numpy", "pandas", "python" ]
stackoverflow_0074630183_numpy_pandas_python.txt
Q: diff list of multiline strings with difflib without knowing which were added, deleted or modified I have two lists of multiline strings and I try to get the the diff lines for these strings. First I tried to just split all lines of each string and handled all these strings as one big "file" and get the diff for it but I had a lot of bugs. I cannot just diff by index since I do not know, which multiline string was added, which was deleted and which one was modified. Lets say I had the following example: import difflib oldList = ["one\ntwo\nthree","four\nfive\nsix","seven\neight\nnine"] newList = ["four\nfifty\nsix","seven\neight\nnine","ten\neleven\ntwelve"] oldAllTogether = [] for string in oldList: oldAllTogether.extend(string.splitlines()) newAllTogether = [] for string in newList: newAllTogether.extend(string.splitlines()) diff = difflib.unified_diff(oldAllTogether,newAllTogether) So I somehow have to find out, which strings belong to each other. A: I had to implmenent my own code in order to get the desired output. It is basically the same as Differ.compare() with the difference that we have a look at multiline blocks instead of lines. So the code would be: diffString = "" oldList = ["one\ntwo\nthree","four\nfive\nsix","seven\neight\nnine"] newList = ["four\nfifty\nsix","seven\neight\nnine","ten\neleven\ntwelve"] a = oldList b = newList cruncher = difflib.SequenceMatcher(None, a, b) for tag, alo, ahi, blo, bhi in cruncher.get_opcodes(): if tag == 'replace': best_ratio, cutoff = 0.74, 0.75 oldstrings = a[alo:ahi] newstrings = b[blo:bhi] for j in range(len(newstrings)): newstring = newstrings[j] cruncher.set_seq2(newstring) for i in range(len(oldstrings)): oldstring = oldstrings[i] cruncher.set_seq1(oldstring) if cruncher.real_quick_ratio() > best_ratio and \ cruncher.quick_ratio() > best_ratio and \ cruncher.ratio() > best_ratio: best_ratio, best_old, best_new = cruncher.ratio(), i, j if best_ratio < cutoff: #added string stringLines = newstring.splitlines() for line in stringLines: diffString += "+" + line + "\n" else: #replaced string start = False for diff in difflib.unified_diff(oldstrings[best_old].splitlines(),newstrings[best_new].splitlines()): if start: diffString += diff + "\n" if diff[0:2] == '@@': start = True del oldstrings[best_old] #deleted strings stringLines = [] for string in oldstrings: stringLines.extend(string.splitlines()) for line in stringLines: diffString += "-" + line + "\n" elif tag == 'delete': stringLines = [] for string in a[alo:ahi]: stringLines.extend(string.splitlines()) for line in stringLines: diffString += "-" + line + "\n" elif tag == 'insert': stringLines = [] for string in b[blo:bhi]: stringLines.extend(string.splitlines()) for line in stringLines: diffString += "+" + line + "\n" elif tag == 'equal': continue else: raise ValueError('unknown tag %r' % (tag,)) which result in the following: print(diffString) four -five +fifty six -one -two -three +ten +eleven +twelve
diff list of multiline strings with difflib without knowing which were added, deleted or modified
I have two lists of multiline strings and I try to get the the diff lines for these strings. First I tried to just split all lines of each string and handled all these strings as one big "file" and get the diff for it but I had a lot of bugs. I cannot just diff by index since I do not know, which multiline string was added, which was deleted and which one was modified. Lets say I had the following example: import difflib oldList = ["one\ntwo\nthree","four\nfive\nsix","seven\neight\nnine"] newList = ["four\nfifty\nsix","seven\neight\nnine","ten\neleven\ntwelve"] oldAllTogether = [] for string in oldList: oldAllTogether.extend(string.splitlines()) newAllTogether = [] for string in newList: newAllTogether.extend(string.splitlines()) diff = difflib.unified_diff(oldAllTogether,newAllTogether) So I somehow have to find out, which strings belong to each other.
[ "I had to implmenent my own code in order to get the desired output. It is basically the same as Differ.compare() with the difference that we have a look at multiline blocks instead of lines. So the code would be:\ndiffString = \"\"\noldList = [\"one\\ntwo\\nthree\",\"four\\nfive\\nsix\",\"seven\\neight\\nnine\"]\nnewList = [\"four\\nfifty\\nsix\",\"seven\\neight\\nnine\",\"ten\\neleven\\ntwelve\"]\na = oldList\nb = newList\ncruncher = difflib.SequenceMatcher(None, a, b)\nfor tag, alo, ahi, blo, bhi in cruncher.get_opcodes():\n if tag == 'replace':\n best_ratio, cutoff = 0.74, 0.75\n oldstrings = a[alo:ahi]\n newstrings = b[blo:bhi]\n for j in range(len(newstrings)):\n newstring = newstrings[j]\n cruncher.set_seq2(newstring)\n for i in range(len(oldstrings)):\n oldstring = oldstrings[i]\n cruncher.set_seq1(oldstring)\n if cruncher.real_quick_ratio() > best_ratio and \\\n cruncher.quick_ratio() > best_ratio and \\\n cruncher.ratio() > best_ratio:\n best_ratio, best_old, best_new = cruncher.ratio(), i, j\n if best_ratio < cutoff:\n #added string\n stringLines = newstring.splitlines()\n for line in stringLines: diffString += \"+\" + line + \"\\n\"\n else:\n #replaced string\n start = False\n for diff in difflib.unified_diff(oldstrings[best_old].splitlines(),newstrings[best_new].splitlines()):\n if start:\n diffString += diff + \"\\n\"\n if diff[0:2] == '@@':\n start = True\n del oldstrings[best_old]\n #deleted strings\n stringLines = []\n for string in oldstrings:\n stringLines.extend(string.splitlines())\n for line in stringLines: diffString += \"-\" + line + \"\\n\"\n elif tag == 'delete':\n stringLines = []\n for string in a[alo:ahi]:\n stringLines.extend(string.splitlines())\n for line in stringLines: \n diffString += \"-\" + line + \"\\n\"\n elif tag == 'insert':\n stringLines = []\n for string in b[blo:bhi]:\n stringLines.extend(string.splitlines())\n for line in stringLines: \n diffString += \"+\" + line + \"\\n\"\n elif tag == 'equal':\n continue\n else:\n raise ValueError('unknown tag %r' % (tag,))\n\nwhich result in the following:\nprint(diffString)\n four\n-five\n+fifty\n six\n-one\n-two\n-three\n+ten\n+eleven\n+twelve\n\n" ]
[ 0 ]
[]
[]
[ "difflib", "python" ]
stackoverflow_0074593945_difflib_python.txt
Q: Does the unit passed to the datetime64 data type in pandas do anything? Does the unit passed to the datetime64 data type in pandas do anything? Consider this code: import pandas as pd v1 = pd.DataFrame({'Date':['2020-01-01']*1000}).astype({'Date':'datetime64'}) v2 = pd.DataFrame({'Date':['2020-01-01']*1000}).astype({'Date':'datetime64[ns]'}) v3 = pd.DataFrame({'Date':['2020-01-01']*1000}).astype({'Date':'datetime64[ms]'}) v4 = pd.DataFrame({'Date':['2020-01-01']*1000}).astype({'Date':'datetime64[s]'}) v5 = pd.DataFrame({'Date':['2020-01-01']*1000}).astype({'Date':'datetime64[h]'}) v6 = pd.DataFrame({'Date':['2020-01-01']*1000}).astype({'Date':'datetime64[D]'}) v7 = pd.DataFrame({'Date':['2020-01-01']*1000}).astype({'Date':'datetime64[M]'}) v8 = pd.DataFrame({'Date':['2020-01-01']*1000}).astype({'Date':'datetime64[Y]'}) for v in [v1,v2,v3,v4,v5,v6,v7,v8]: x = v.iloc[0,0] print(x, type(x), x.to_datetime64(), v.memory_usage()['Date']) It returns: 2020-01-01 00:00:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'> 2020-01-01T00:00:00.000000000 8000 2020-01-01 00:00:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'> 2020-01-01T00:00:00.000000000 8000 2020-01-01 00:00:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'> 2020-01-01T00:00:00.000000000 8000 2020-01-01 00:00:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'> 2020-01-01T00:00:00.000000000 8000 2020-01-01 00:00:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'> 2020-01-01T00:00:00.000000000 8000 2020-01-01 00:00:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'> 2020-01-01T00:00:00.000000000 8000 2020-01-01 00:00:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'> 2020-01-01T00:00:00.000000000 8000 2020-01-01 00:00:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'> 2020-01-01T00:00:00.000000000 8000 A: First of all: The Pandas version of the datetime64 type only timezone support. Specifically, when you try to a datetime64 variant in a Pandas series, it'll only support as (attosecond), fs (femtosecond), ps (picosecond) and ns (nanosecond) resolutions, anything less precise is replaced by datetime64[ns]. The datetime64[<res>, <tz>] variant only accepts s (seconds), ms (milliseconds), us (microseconds) and ns resolutions. Don't confuse these with the numpy datetime64 type. For both Pandas and Numpy, the 2-letter abbreviation determines the resolution used to record the timestamps, and because the type is always stored as 64 bits, it determines the range of values you can store in it. It does not alter how much memory the type takes! From the numpy datetime64 Datetime Units documentation: Datetimes are always stored with an epoch of 1970-01-01T00:00. This means the supported dates are always a symmetric interval around the epoch, called “time span” in the table below. The length of the span is the range of a 64-bit integer times the length of the date or unit. For example, the time span for ‘W’ (week) is exactly 7 times longer than the time span for ‘D’ (day), and the time span for ‘D’ (day) is exactly 24 times longer than the time span for ‘h’ (hour). Your experiment won't show any difference in memory use, because the amount of memory doesn't change, only the resolution. Because Pandas wraps the numpy datetime64 type, and you can't actually create a series with anything other than datetime64[ns]; e.g. 
the DateTimeIndex dtype parameter is documented as accepting either a numpy.dtype or DatetimeTZDtype or str, default None, but that for numpy.dtype there is an additional restriction: Note that the only NumPy dtype allowed is ‘datetime64[ns]’. So to demonstrate what the effect of different units, you'd have to use the numpy type directly: >>> import numpy as np >>> for unit in ('Y', 'M', 'W', 'D', 'h', 'm', 's', 'ms', 'us', 'ns'): # ps, fs and as have too small a span ... print(unit, np.array(["2021-02-27T12:24:17.524627869"], dtype=f"datetime64[{unit}]")) ... Y ['2021'] M ['2021-02'] W ['2021-02-25'] D ['2021-02-27'] h ['2021-02-27T12'] m ['2021-02-27T12:24'] s ['2021-02-27T12:24:17'] ms ['2021-02-27T12:24:17.524'] us ['2021-02-27T12:24:17.524627'] ns ['2021-02-27T12:24:17.524627869'] Note: The documentation for Pandas only ever talks about ns resolutions for the datetime64 types, and it appears from various issues on GitHub that while some of the codebase supports the other (finer) resolutions, this support is not reliable or widely supported by everything in the library. Your mileage may vary.
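As a small, hedged check of the "replaced by datetime64[ns]" behaviour described above (this applies to the pandas 1.x releases current when the answer was written; pandas 2.0 and later actually keep s/ms/us units):
import pandas as pd

s = pd.Series(["2020-01-01"]).astype("datetime64[s]")
print(s.dtype)           # datetime64[ns] on pandas 1.x - the requested 's' unit is silently upgraded
print(s.dtype.itemsize)  # 8 bytes per value, whatever unit was asked for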
Does the unit passed to the datetime64 data type in pandas do anything?
Does the unit passed to the datetime64 data type in pandas do anything? Consider this code: import pandas as pd v1 = pd.DataFrame({'Date':['2020-01-01']*1000}).astype({'Date':'datetime64'}) v2 = pd.DataFrame({'Date':['2020-01-01']*1000}).astype({'Date':'datetime64[ns]'}) v3 = pd.DataFrame({'Date':['2020-01-01']*1000}).astype({'Date':'datetime64[ms]'}) v4 = pd.DataFrame({'Date':['2020-01-01']*1000}).astype({'Date':'datetime64[s]'}) v5 = pd.DataFrame({'Date':['2020-01-01']*1000}).astype({'Date':'datetime64[h]'}) v6 = pd.DataFrame({'Date':['2020-01-01']*1000}).astype({'Date':'datetime64[D]'}) v7 = pd.DataFrame({'Date':['2020-01-01']*1000}).astype({'Date':'datetime64[M]'}) v8 = pd.DataFrame({'Date':['2020-01-01']*1000}).astype({'Date':'datetime64[Y]'}) for v in [v1,v2,v3,v4,v5,v6,v7,v8]: x = v.iloc[0,0] print(x, type(x), x.to_datetime64(), v.memory_usage()['Date']) It returns: 2020-01-01 00:00:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'> 2020-01-01T00:00:00.000000000 8000 2020-01-01 00:00:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'> 2020-01-01T00:00:00.000000000 8000 2020-01-01 00:00:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'> 2020-01-01T00:00:00.000000000 8000 2020-01-01 00:00:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'> 2020-01-01T00:00:00.000000000 8000 2020-01-01 00:00:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'> 2020-01-01T00:00:00.000000000 8000 2020-01-01 00:00:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'> 2020-01-01T00:00:00.000000000 8000 2020-01-01 00:00:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'> 2020-01-01T00:00:00.000000000 8000 2020-01-01 00:00:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'> 2020-01-01T00:00:00.000000000 8000
[ "First of all: The Pandas version of the datetime64 type only timezone support. Specifically, when you try to a datetime64 variant in a Pandas series, it'll only support as (attosecond), fs (femtosecond), ps (picosecond) and ns (nanosecond) resolutions, anything less precise is replaced by datetime64[ns]. The datetime64[<res>, <tz>] variant only accepts s (seconds), ms (milliseconds), us (microseconds) and ns resolutions. Don't confuse these with the numpy datetime64 type.\nFor both Pandas and Numpy, the 2-letter abbreviation determines the resolution used to record the timestamps, and because the type is always stored as 64 bits, it determines the range of values you can store in it. It does not alter how much memory the type takes!\nFrom the numpy datetime64 Datetime Units documentation:\n\nDatetimes are always stored with an epoch of 1970-01-01T00:00. This means the supported dates are always a symmetric interval around the epoch, called “time span” in the table below.\nThe length of the span is the range of a 64-bit integer times the length of the date or unit. For example, the time span for ‘W’ (week) is exactly 7 times longer than the time span for ‘D’ (day), and the time span for ‘D’ (day) is exactly 24 times longer than the time span for ‘h’ (hour).\n\nYour experiment won't show any difference in memory use, because the amount of memory doesn't change, only the resolution.\nBecause Pandas wraps the numpy datetime64 type, and you can't actually create a series with anything other than datetime64[ns]; e.g. the DateTimeIndex dtype parameter is documented as accepting either a numpy.dtype or DatetimeTZDtype or str, default None, but that for numpy.dtype there is an additional restriction:\n\nNote that the only NumPy dtype allowed is ‘datetime64[ns]’.\n\nSo to demonstrate what the effect of different units, you'd have to use the numpy type directly:\n>>> import numpy as np\n>>> for unit in ('Y', 'M', 'W', 'D', 'h', 'm', 's', 'ms', 'us', 'ns'): # ps, fs and as have too small a span\n... print(unit, np.array([\"2021-02-27T12:24:17.524627869\"], dtype=f\"datetime64[{unit}]\"))\n...\nY ['2021']\nM ['2021-02']\nW ['2021-02-25']\nD ['2021-02-27']\nh ['2021-02-27T12']\nm ['2021-02-27T12:24']\ns ['2021-02-27T12:24:17']\nms ['2021-02-27T12:24:17.524']\nus ['2021-02-27T12:24:17.524627']\nns ['2021-02-27T12:24:17.524627869']\n\nNote: The documentation for Pandas only ever talks about ns resolutions for the datetime64 types, and it appears from various issues on GitHub that while some of the codebase supports the other (finer) resolutions, this support is not reliable or widely supported by everything in the library. Your mileage may vary.\n" ]
[ 3 ]
[]
[]
[ "datetime64", "pandas", "python", "python_datetime" ]
stackoverflow_0074630783_datetime64_pandas_python_python_datetime.txt
Q: Detecting changes in a txt file I am a beginner python programmer and I am wondering if there is any way to detect a change in a txt file on windows. Any suggestion is appreciated.
A: There are many ways to go with it:
You can for example check the last modification date of the file every few seconds with os.path.getmtime(path); when the date changes you know the file was edited.
You can also use some form of checksum (generate an md5 hash of the file) and check every few seconds whether the checksum changes (this can get slow on big files since the checksum requires reading the entire file).
You can also listen for signals sent by Windows directly and execute an event handler when you get a signal; this is harder to implement but by far the cleanest way to do it. (Edit, this seems to be what @martin kamau suggests in his answer)
Probably many more ways that I can't think of right now...
A: To watch for file changes in a file,
a Python script found at https://luvocorp.co.ke/topic/watch-for-file-changes/ can be used.
import time
import fcntl
import os
import signal
filename = "nameofthefile"
def handler(signum, frame):
    print("File %s modified" % (filename,))
....
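A minimal sketch of the first suggestion (polling os.path.getmtime); the path and the poll interval are placeholders, and a maintained third-party package such as watchdog can do the same job with proper OS notifications:
import os
import time

path = "watched.txt"   # placeholder - point this at the file you care about
last_mtime = os.path.getmtime(path)

while True:
    time.sleep(2)      # poll every couple of seconds
    mtime = os.path.getmtime(path)
    if mtime != last_mtime:
        print(f"{path} was modified")
        last_mtime = mtime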
Detecting changes in a txt file
I am a beginner python programmer and I am wondering if there is any way to detect a change in a txt file on windows. Any suggestion is appreciated.
[ "There are many ways to go with it:\n\nYou can for example check the last modification date of the file every few seconds with os.path.getmtime(path); when the date changes you know the file was edited.\n\nYou can also use some form of checksum (generate an md5 hash of the file) and check every few seconds whether the checksum changes (this can get slow on big files since the checksum requires reading the entire file).\n\nYou can also listen for signals sent by Windows directly and execute an event handler when you get a signal; this is harder to implement but by far the cleanest way to do it. (Edit, this seems to be what @martin kamau suggests in his answer)\n\nProbably many more ways that I can't think of right now...\n\n\n", "To watch for file changes in a file,\na Python script found at https://luvocorp.co.ke/topic/watch-for-file-changes/ can be used.\nimport time\nimport fcntl\nimport os\nimport signal\nfilename = \"nameofthefile\"\ndef handler(signum, frame):\n    print(\"File %s modified\" % (filename,))\n....\n" ]
[ 2, 0 ]
[]
[]
[ "python", "txt", "windows" ]
stackoverflow_0074630038_python_txt_windows.txt
Q: Scrapy: Importing a package from the project that's not in the same directory I'm trying to import a package from my project which is not in the same directory as scrapy is in. The directory structure for my project is as follows: Main __init__.py /XPaths __init.py XPaths.py /scrapper scrapy.cfg /scrapper __init.py settings.py items.py pipelines.py /spiders myspider.py I'm trying to access xpaths.py from within myspider.py. Here are my attempts: 1) from Main.XPaths.XPaths import XPathsHandler 2) from XPaths.XPaths import XPathsHandler 3) from ..Xpaths.XPaths import XPathsHandler These failed with the error: ImportError: No module named ....... My last attempt was: 4) from ...Xpaths.XPaths import XPathsHandler Which also failed with the error: ValueError: Attempted relative import beyond toplevel package What am I doing wrong? XPaths is independent from Scrapy, therefore the file structure has to stay that way. //EDIT After some further debugging following @alecxe comment, I tried adding the path to main inside the sys.path, and print it before importing xpaths. The weird thing is, the scrapper directory gets appended to the path when I run scrapy. Here's what I added: 'C:\\Users\\LaptOmer\\Code\\Python\\PythonBackend\\Main' And here's what I get when I print sys.path: 'C:\\Users\\LaptOmer\\Code\\Python\\PythonBackend\\Main\\scrapper' Why does scrapy append that to the path? A: I know its a little bit messy solution but only one I could find when I had same problem as you. Before including files from your project you need to manually append the system path to your top most package level, i.e: sys.path.append(os.path.join(os.path.dirname(__file__), '../..')) from XPaths.XPaths import XPathsHandler ... From what I understand scrappy creates its own package - this is why you cannot import files from other directories. This also explains error: ValueError: Attempted relative import beyond toplevel package A: I ran into the same problem. When I used: sys.path.append(os.path.join(os.path.dirname(__file__), '../..')) it appended ../.. to the last file path, which didn't work. I noticed my main file was the last item in the sys.path list. I took that last item and went to the module level to find my main file -- which contains a function called "extract_notes". import scrapy import sys import os mod_path = os.path.dirname(os.path.normpath(sys.path[-1])) sys.path.insert(0,mod_path) from pprint import pprint as p from main import extract_notes Hope that helps. A: I have a similar directory structure, with multiple scrapers (say directories scraper1 and scraper2). Since I found the sys.path changes as suggested by @ErdraugPl too brittle (see @ethanenglish's problems), especially since Scrapy itself is modifying the sys.path, I chose an OS solution instead of a Python solution: I created a symbolic link to directory /XPaths in both scraper1 and scraper2. That way, I can still maintain a single XPaths module that I can use in both scraper1 and scraper2, and can simply do from XPaths.XPaths import XPathsHandler
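A pathlib variant of the same sys.path workaround may read more clearly; the parents index below assumes the exact layout shown in the question (parents[0] is the spiders folder, so parents[3] is Main), and should be adjusted if your tree differs:
import sys
from pathlib import Path

# Main/ is three levels above the spiders/ folder that holds myspider.py
sys.path.append(str(Path(__file__).resolve().parents[3]))

from XPaths.XPaths import XPathsHandler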
Scrapy: Importing a package from the project that's not in the same directory
I'm trying to import a package from my project which is not in the same directory as scrapy is in. The directory structure for my project is as follows: Main __init__.py /XPaths __init.py XPaths.py /scrapper scrapy.cfg /scrapper __init.py settings.py items.py pipelines.py /spiders myspider.py I'm trying to access xpaths.py from within myspider.py. Here are my attempts: 1) from Main.XPaths.XPaths import XPathsHandler 2) from XPaths.XPaths import XPathsHandler 3) from ..Xpaths.XPaths import XPathsHandler These failed with the error: ImportError: No module named ....... My last attempt was: 4) from ...Xpaths.XPaths import XPathsHandler Which also failed with the error: ValueError: Attempted relative import beyond toplevel package What am I doing wrong? XPaths is independent from Scrapy, therefore the file structure has to stay that way. //EDIT After some further debugging following @alecxe comment, I tried adding the path to main inside the sys.path, and print it before importing xpaths. The weird thing is, the scrapper directory gets appended to the path when I run scrapy. Here's what I added: 'C:\\Users\\LaptOmer\\Code\\Python\\PythonBackend\\Main' And here's what I get when I print sys.path: 'C:\\Users\\LaptOmer\\Code\\Python\\PythonBackend\\Main\\scrapper' Why does scrapy append that to the path?
[ "I know its a little bit messy solution but only one I could find when I had same problem as you. Before including files from your project you need to manually append the system path to your top most package level, i.e:\nsys.path.append(os.path.join(os.path.dirname(__file__), '../..'))\nfrom XPaths.XPaths import XPathsHandler\n...\n\nFrom what I understand scrappy creates its own package - this is why you cannot import files from other directories. This also explains error:\nValueError: Attempted relative import beyond toplevel package\n\n", "I ran into the same problem.\nWhen I used:\nsys.path.append(os.path.join(os.path.dirname(__file__), '../..'))\n\nit appended ../.. to the last file path, which didn't work. I noticed my main file was the last item in the sys.path list. I took that last item and went to the module level to find my main file -- which contains a function called \"extract_notes\". \nimport scrapy\nimport sys\nimport os\n\nmod_path = os.path.dirname(os.path.normpath(sys.path[-1]))\nsys.path.insert(0,mod_path)\n\nfrom pprint import pprint as p\nfrom main import extract_notes\n\nHope that helps.\n", "I have a similar directory structure, with multiple scrapers (say directories scraper1 and scraper2).\nSince I found the sys.path changes as suggested by @ErdraugPl too brittle (see @ethanenglish's problems), especially since Scrapy itself is modifying the sys.path, I chose an OS solution instead of a Python solution: I created a symbolic link to directory /XPaths in both scraper1 and scraper2. That way, I can still maintain a single XPaths module that I can use in both scraper1 and scraper2, and can simply do from XPaths.XPaths import XPathsHandler\n" ]
[ 1, 0, 0 ]
[]
[]
[ "import", "python", "scrapy" ]
stackoverflow_0018196458_import_python_scrapy.txt
Q: Add specific selected fields in plotly text annotations I have a graph that looks like this: I want to color the dots in the following way, one dot for every time the version is different, like for 0.1-SNAPSHOT there are 8 dots, but I only want the first one labelled and the rest just dots (without the version),similarly for all others. This is how my data looks like: API_paths info_version Commit-growth 24425 0 0.1-SNAPSHOT 52 24424 20 0.1-SNAPSHOT 104 24423 35 0.1-SNAPSHOT 156 24422 50 0.1-SNAPSHOT 208 24421 105 0.1-SNAPSHOT 260 24420 119 0.1-SNAPSHOT 312 24419 133 0.1-SNAPSHOT 364 24576 0 0.1-SNAPSHOT 408 24575 1 0.9.26 (BETA) 504 24574 13 0.9.27 (BETA) 600 24573 15 0.9.28 (BETA) 644 24416 161 0.9.28 28 24415 175 0.9.29 29 24572 29 0.9.29 (BETA) 792 24571 42 0.9.30 (BETA) 836 Right now they are colored quite simple: fig = px.scatter(data1, x='Commit-growth', y='API_paths', color='info_version') and annotated this way: data1= final_api.query("info_title=='Cloudera Datalake Service'").sort_values(by='commitDate') # data1['Year-Month'] = pd.to_datetime(final_api['Year-Month']) data1['Commit-growth']= data1['commits'].cumsum() import plotly.graph_objects as go fig = go.Figure() fig = px.scatter(data1, x='commitDate', y='API_paths', color='info_version') fig.add_trace(go.Scatter(mode='lines', x=data1["commitDate"], y=data1["API_paths"], line_color='black', line_width=0.6, line_shape='vh', showlegend=False ) ) for _,row in data1.iterrows(): fig.add_annotation( go.layout.Annotation( x=row["commitDate"], y=row["API_paths"], text=row['info_version'], showarrow=False, align='center', yanchor='bottom', yshift=9, textangle=-90) ) fig.update_layout(template='plotly_white', title='Cloudera Datalake Service API Paths Growth',title_x=0.5, xaxis_title='Number of Commit', yaxis_title='Number of Paths') fig.update_traces(marker_size=10, marker_line_width=2, marker_line_color='black', showlegend=False, textposition='bottom center') fig.show() I am not sure how to achieve this, so I am a bit lost, any help will be appreciated. A: Try creating a duplicate row of first occurrence to drive the text of your annotations. df['dupe'] = df.info_version.where(~df.info_version.duplicated(), '') | | API_paths | info_version | Commit-growth | dupe | |---:|------------:|:---------------|----------------:|:----------| | 0 | 0 | 0.1-snap | 52 | 0.1-snap | | 1 | 20 | 0.1-snap | 104 | | | 2 | 35 | 0.1-snap | 156 | | | 3 | 50 | 0.1-snap | 208 | | | 4 | 105 | 0.1-snap | 260 | | | 5 | 119 | 0.1-snap | 312 | | | 6 | 133 | 0.1-snap | 364 | | | 7 | 0 | 0.1-snap | 408 | | | 8 | 1 | 0.9-other | 504 | 0.9-other | | 9 | 13 | 0.9-other | 600 | | | 10 | 15 | 0.9-other | 644 | | | 11 | 161 | 0.9-other | 28 | | | 12 | 175 | 0.9-other | 29 | | | 13 | 29 | 0.9-other | 700 | | | 14 | 42 | 0.9-other | 500 | |
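To connect that helper column to the question's annotation loop, a sketch along these lines should label only the first dot of each version; the column names are taken from the question and everything else stays as in the original figure:
data1['dupe'] = data1['info_version'].where(~data1['info_version'].duplicated(), '')

for _, row in data1.iterrows():
    if row['dupe'] == '':
        continue  # repeated version: keep the dot but skip the label
    fig.add_annotation(
        x=row['commitDate'],
        y=row['API_paths'],
        text=row['dupe'],
        showarrow=False,
        align='center',
        yanchor='bottom',
        yshift=9,
        textangle=-90,
    )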
Add specific selected fields in plotly text annotations
I have a graph that looks like this: I want to color the dots in the following way, one dot for every time the version is different, like for 0.1-SNAPSHOT there are 8 dots, but I only want the first one labelled and the rest just dots (without the version),similarly for all others. This is how my data looks like: API_paths info_version Commit-growth 24425 0 0.1-SNAPSHOT 52 24424 20 0.1-SNAPSHOT 104 24423 35 0.1-SNAPSHOT 156 24422 50 0.1-SNAPSHOT 208 24421 105 0.1-SNAPSHOT 260 24420 119 0.1-SNAPSHOT 312 24419 133 0.1-SNAPSHOT 364 24576 0 0.1-SNAPSHOT 408 24575 1 0.9.26 (BETA) 504 24574 13 0.9.27 (BETA) 600 24573 15 0.9.28 (BETA) 644 24416 161 0.9.28 28 24415 175 0.9.29 29 24572 29 0.9.29 (BETA) 792 24571 42 0.9.30 (BETA) 836 Right now they are colored quite simple: fig = px.scatter(data1, x='Commit-growth', y='API_paths', color='info_version') and annotated this way: data1= final_api.query("info_title=='Cloudera Datalake Service'").sort_values(by='commitDate') # data1['Year-Month'] = pd.to_datetime(final_api['Year-Month']) data1['Commit-growth']= data1['commits'].cumsum() import plotly.graph_objects as go fig = go.Figure() fig = px.scatter(data1, x='commitDate', y='API_paths', color='info_version') fig.add_trace(go.Scatter(mode='lines', x=data1["commitDate"], y=data1["API_paths"], line_color='black', line_width=0.6, line_shape='vh', showlegend=False ) ) for _,row in data1.iterrows(): fig.add_annotation( go.layout.Annotation( x=row["commitDate"], y=row["API_paths"], text=row['info_version'], showarrow=False, align='center', yanchor='bottom', yshift=9, textangle=-90) ) fig.update_layout(template='plotly_white', title='Cloudera Datalake Service API Paths Growth',title_x=0.5, xaxis_title='Number of Commit', yaxis_title='Number of Paths') fig.update_traces(marker_size=10, marker_line_width=2, marker_line_color='black', showlegend=False, textposition='bottom center') fig.show() I am not sure how to achieve this, so I am a bit lost, any help will be appreciated.
[ "Try creating a duplicate row of first occurrence to drive the text of your annotations.\ndf['dupe'] = df.info_version.where(~df.info_version.duplicated(), '')\n\n| | API_paths | info_version | Commit-growth | dupe |\n|---:|------------:|:---------------|----------------:|:----------|\n| 0 | 0 | 0.1-snap | 52 | 0.1-snap |\n| 1 | 20 | 0.1-snap | 104 | |\n| 2 | 35 | 0.1-snap | 156 | |\n| 3 | 50 | 0.1-snap | 208 | |\n| 4 | 105 | 0.1-snap | 260 | |\n| 5 | 119 | 0.1-snap | 312 | |\n| 6 | 133 | 0.1-snap | 364 | |\n| 7 | 0 | 0.1-snap | 408 | |\n| 8 | 1 | 0.9-other | 504 | 0.9-other |\n| 9 | 13 | 0.9-other | 600 | |\n| 10 | 15 | 0.9-other | 644 | |\n| 11 | 161 | 0.9-other | 28 | |\n| 12 | 175 | 0.9-other | 29 | |\n| 13 | 29 | 0.9-other | 700 | |\n| 14 | 42 | 0.9-other | 500 | |\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "plotly", "python" ]
stackoverflow_0074628926_pandas_plotly_python.txt
Q: Reorder the information on pandas DataFrame having the dates and time on a row basis I have a small doubt. I have a dataframe where I have one column displaying the hourly time and the columns with the dates, is there a way to put all this together? (In this case using pandas) actual dataframe The desired output The dataset https://docs.google.com/spreadsheets/d/1BNPmSZlFHmEkGJC--iBgZiCdM81a5Dt4wj8C8J1pH3A/edit?usp=sharing A: This looks like a good use of pd.melt import pandas as pd df = pd.DataFrame({'August': ['00:00 - 01:00', '01:00 - 02:00', '02:00 - 03:00'], '1/ aug/': ['273,285', '2,708,725', '2,702,913'], '2/ aug/': ['310,135', '2,876,725', '28,409'], '3/ aug/': ['3,077,438', '3,076,075', '307,595'], '4/ aug/': ['2,911,175', '2,876,663', '2,869,738'], '5/ aug/': ['289,075', '2,842,425', '2,839,088']}) df = df.melt(id_vars='August', var_name='date', value_name='count').rename(columns={'August':'time'}) df = df[['date','time','count']] print(df) Output date time count 0 1/ aug/ 00:00 - 01:00 273,285 1 1/ aug/ 01:00 - 02:00 2,708,725 2 1/ aug/ 02:00 - 03:00 2,702,913 3 2/ aug/ 00:00 - 01:00 310,135 4 2/ aug/ 01:00 - 02:00 2,876,725 5 2/ aug/ 02:00 - 03:00 28,409 6 3/ aug/ 00:00 - 01:00 3,077,438 7 3/ aug/ 01:00 - 02:00 3,076,075 8 3/ aug/ 02:00 - 03:00 307,595 9 4/ aug/ 00:00 - 01:00 2,911,175 10 4/ aug/ 01:00 - 02:00 2,876,663 11 4/ aug/ 02:00 - 03:00 2,869,738 12 5/ aug/ 00:00 - 01:00 289,075 13 5/ aug/ 01:00 - 02:00 2,842,425 14 5/ aug/ 02:00 - 03:00 2,839,088 A: You can also achieve it with stack(): df.set_index('August').stack().reset_index().sort_values('level_1').rename( {'August':'time','level_1':'date',0:'count'},axis=1) time Date count 0 00:00 - 01:00 1/ aug/ 273,285 5 01:00 - 02:00 1/ aug/ 2,708,725 10 02:00 - 03:00 1/ aug/ 2,702,913 1 00:00 - 01:00 2/ aug/ 310,135 6 01:00 - 02:00 2/ aug/ 2,876,725 11 02:00 - 03:00 2/ aug/ 28,409 2 00:00 - 01:00 3/ aug/ 3,077,438 7 01:00 - 02:00 3/ aug/ 3,076,075 12 02:00 - 03:00 3/ aug/ 307,595 3 00:00 - 01:00 4/ aug/ 2,911,175 8 01:00 - 02:00 4/ aug/ 2,876,663 13 02:00 - 03:00 4/ aug/ 2,869,738 4 00:00 - 01:00 5/ aug/ 289,075 9 01:00 - 02:00 5/ aug/ 2,842,425 14 02:00 - 03:00 5/ aug/ 2,839,088
Reorder the information on pandas DataFrame having the dates and time on a row basis
I have a small doubt. I have a dataframe where I have one column displaying the hourly time and the columns with the dates, is there a way to put all this together? (In this case using pandas) actual dataframe The desired output The dataset https://docs.google.com/spreadsheets/d/1BNPmSZlFHmEkGJC--iBgZiCdM81a5Dt4wj8C8J1pH3A/edit?usp=sharing
[ "This looks like a good use of pd.melt\nimport pandas as pd\ndf = pd.DataFrame({'August': ['00:00 - 01:00', '01:00 - 02:00', '02:00 - 03:00'], '1/ aug/': ['273,285', '2,708,725', '2,702,913'], '2/ aug/': ['310,135', '2,876,725', '28,409'], '3/ aug/': ['3,077,438', '3,076,075', '307,595'], '4/ aug/': ['2,911,175', '2,876,663', '2,869,738'], '5/ aug/': ['289,075', '2,842,425', '2,839,088']})\n\ndf = df.melt(id_vars='August', var_name='date', value_name='count').rename(columns={'August':'time'})\n\ndf = df[['date','time','count']]\n\nprint(df)\n\nOutput\n date time count\n0 1/ aug/ 00:00 - 01:00 273,285\n1 1/ aug/ 01:00 - 02:00 2,708,725\n2 1/ aug/ 02:00 - 03:00 2,702,913\n3 2/ aug/ 00:00 - 01:00 310,135\n4 2/ aug/ 01:00 - 02:00 2,876,725\n5 2/ aug/ 02:00 - 03:00 28,409\n6 3/ aug/ 00:00 - 01:00 3,077,438\n7 3/ aug/ 01:00 - 02:00 3,076,075\n8 3/ aug/ 02:00 - 03:00 307,595\n9 4/ aug/ 00:00 - 01:00 2,911,175\n10 4/ aug/ 01:00 - 02:00 2,876,663\n11 4/ aug/ 02:00 - 03:00 2,869,738\n12 5/ aug/ 00:00 - 01:00 289,075\n13 5/ aug/ 01:00 - 02:00 2,842,425\n14 5/ aug/ 02:00 - 03:00 2,839,088\n\n", "You can also achieve it with stack():\ndf.set_index('August').stack().reset_index().sort_values('level_1').rename(\n {'August':'time','level_1':'date',0:'count'},axis=1)\n\n time Date count\n0 00:00 - 01:00 1/ aug/ 273,285\n5 01:00 - 02:00 1/ aug/ 2,708,725\n10 02:00 - 03:00 1/ aug/ 2,702,913\n1 00:00 - 01:00 2/ aug/ 310,135\n6 01:00 - 02:00 2/ aug/ 2,876,725\n11 02:00 - 03:00 2/ aug/ 28,409\n2 00:00 - 01:00 3/ aug/ 3,077,438\n7 01:00 - 02:00 3/ aug/ 3,076,075\n12 02:00 - 03:00 3/ aug/ 307,595\n3 00:00 - 01:00 4/ aug/ 2,911,175\n8 01:00 - 02:00 4/ aug/ 2,876,663\n13 02:00 - 03:00 4/ aug/ 2,869,738\n4 00:00 - 01:00 5/ aug/ 289,075\n9 01:00 - 02:00 5/ aug/ 2,842,425\n14 02:00 - 03:00 5/ aug/ 2,839,088\n\n" ]
[ 2, 1 ]
[]
[]
[ "dataframe", "pandas", "python", "python_3.x" ]
stackoverflow_0074630956_dataframe_pandas_python_python_3.x.txt
Q: Python no module named pip I use windows 7 32 bit and python 3.7. I was trying to install a module with pip and this error came up: C:\Windows\System32>pip install pyttsx3 Traceback (most recent call last): File "d:\python\python 3.7\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "d:\python\python 3.7\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "D:\Python\Python 3.7\Scripts\pip.exe\__main__.py", line 5, in <module> ModuleNotFoundError: No module named 'pip' Does anybody know how to fix this? A: Make sure you have python path added to the PATH variable. Then run python -m ensurepip A: Could you try? pip3 install pyttsx3 A: To me for Ubuntu 20.04 helped the following: ls -al /usr/bin/python # check before removal that 'python' is link sudo rm /usr/bin/python # remove link to old version of python sudo ln -s /usr/bin/python3.8 /usr/bin/python # create new link to actual python version sudo apt install python3-pip # install missing pip "Python: No module named pip" was because of missing python3-pip. A: Start Python Setup again (Download from here) and be sure to tick that Add python to PATH at the bottom of installation. A: Download get-pip.py to a folder on your computer. Open a command prompt and navigate to the folder containing the get-pip.py installer. Run the following command: python get-pip.py 4-) Verify Installation and Check the Pip Version: pip -V A: This command finally worked for me python -m pip install --upgrade pip --trusted-host pypi.org --trusted-host files.pythonhosted.org
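A typical recovery sequence that combines the suggestions above (run from a command prompt, assuming python itself is on PATH):
python -m ensurepip --upgrade
python -m pip --version
python -m pip install --upgrade pip
python -m pip install pyttsx3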
Python no module named pip
I use windows 7 32 bit and python 3.7. I was trying to install a module with pip and this error came up: C:\Windows\System32>pip install pyttsx3 Traceback (most recent call last): File "d:\python\python 3.7\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "d:\python\python 3.7\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "D:\Python\Python 3.7\Scripts\pip.exe\__main__.py", line 5, in <module> ModuleNotFoundError: No module named 'pip' Does anybody know how to fix this?
[ "Make sure you have python path added to the PATH variable. Then run\npython -m ensurepip\n\n", "Could you try?\npip3 install pyttsx3\n\n", "To me for Ubuntu 20.04 helped the following:\nls -al /usr/bin/python # check before removal that 'python' is link\nsudo rm /usr/bin/python # remove link to old version of python\nsudo ln -s /usr/bin/python3.8 /usr/bin/python # create new link to actual python version\nsudo apt install python3-pip # install missing pip\n\n\"Python: No module named pip\" was because of missing python3-pip.\n", "Start Python Setup again (Download from here) and be sure to tick that Add python to PATH at the bottom of installation.\n", "\nDownload get-pip.py to a folder on your computer.\n\nOpen a command prompt and navigate to the folder containing the\nget-pip.py installer.\n\nRun the following command:\npython get-pip.py\n\n4-) Verify Installation and Check the Pip Version:\npip -V\n\n\n", "This command finally worked for me\npython -m pip install --upgrade pip --trusted-host pypi.org --trusted-host files.pythonhosted.org\n" ]
[ 27, 1, 1, 0, 0, 0 ]
[]
[]
[ "pip", "python" ]
stackoverflow_0065336695_pip_python.txt
Q: Using Wild Card on Airflow GoogleCloudStorageToBigQueryOperator Is it possible to use a wildcard on GoogleCloudStorageToBigQueryOperator?
So I have a collection of files inside a certain folder in GCS
file_sample_1.json
file_sample_2.json
file_sample_3.json
...
file_sample_n.json
I want to ingest these files using airflow with GoogleCloudStorageToBigQueryOperator.
below is my code:
def create_operator_write_init():
    return GoogleCloudStorageToBigQueryOperator(
        task_id = 'test_ingest_to_bq',
        bucket = 'sample-bucket-dev-202211',
        source_objects = 'file_sample_1.json',
        destination_project_dataset_table = 'sample_destination_table',
        create_disposition = "CREATE_IF_NEEDED",
        write_disposition = "WRITE_TRUNCATE",
        source_format = "NEWLINE_DELIMITED_JSON",
        schema_fields = [
            {"name": "id", "type": "INTEGER", "mode": "NULLABLE"},
            {"name": "created_at", "type": "TIMESTAMP", "mode": "NULLABLE"},
            {"name": "updated_at", "type": "TIMESTAMP", "mode": "NULLABLE"},
        ]
    )

It can ingest 1 file just fine, but I need the source_object to have wild card, can I do something like 'file_sample_*.json' so that the * will act as a wild card?
A: Yes, but you should include the string in a list. So if you use
source_objects = ['file_sample_*.json'],
it will ingest all files starting with 'file_sample_' and ending with '.json'.
A: I had the same problem after updating apache-airflow-providers-google to version 8.5.0, where the wildcard stopped working.
You can downgrade to version 8.4.0, or, if you are using Airflow and Google operators also to load data to BigQuery, filenames are pushed to XCom from those operators.
So as a workaround, I read those values from XCom and use them as an argument for GCSToBigQueryOperator
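A sketch of that XCom workaround, assuming GCSListObjectsOperator is available in your provider version and reusing the bucket and table names from the question; note the templated pull may need render_template_as_native_obj=True on the DAG to arrive as a real list:
from airflow.providers.google.cloud.operators.gcs import GCSListObjectsOperator
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

list_files = GCSListObjectsOperator(
    task_id='list_sample_files',
    bucket='sample-bucket-dev-202211',
    prefix='file_sample_',
)

load_files = GCSToBigQueryOperator(
    task_id='test_ingest_to_bq',
    bucket='sample-bucket-dev-202211',
    # the listing task's return value (a list of object names) is pulled from XCom
    source_objects="{{ task_instance.xcom_pull(task_ids='list_sample_files') }}",
    destination_project_dataset_table='sample_destination_table',
    source_format='NEWLINE_DELIMITED_JSON',
    write_disposition='WRITE_TRUNCATE',
)

list_files >> load_files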
Using Wild Card on Airflow GoogleCloudStorageToBigQueryOperator
Is it possible to use a wildcard on GoogleCloudStorageToBigQueryOperator? So I have a collection of files inside a certain folder in GCS file_sample_1.json file_sample_2.json file_sample_3.json ... file_sample_n.json I want to ingest these files using airflow with GoogleCloudStorageToBigQueryOperator. below is my code: def create_operator_write_init(): return GoogleCloudStorageToBigQueryOperator( task_id = 'test_ingest_to_bq', bucket = 'sample-bucket-dev-202211', source_objects = 'file_sample_1.json', destination_project_dataset_table = 'sample_destination_table', create_disposition = "CREATE_IF_NEEDED", write_disposition = "WRITE_TRUNCATE", source_format = "NEWLINE_DELIMITED_JSON", schema_fields = [ {"name": "id", "type": "INTEGER", "mode": "NULLABLE"}, {"name": "created_at", "type": "TIMESTAMP", "mode": "NULLABLE"}, {"name": "updated_at", "type": "TIMESTAMP", "mode": "NULLABLE"}, ] ) It can ingest 1 file just fine, but I need the source_object to have wild card, can I do something like 'file_sample_*.json' so that the * will act as a wild card?
[ "Yes, but you should include the string in a list. So if you use\nsource_objects = ['file_sample_*.json'],\n\nit will ingest all files starting with 'file_sample_' and ending with '.json'.\n", "I had the same problem after updating apache-airflow-providers-google to version 8.5.0, where the wildcard stopped working.\nYou can downgrade to version 8.4.0, or, if you are using Airflow and Google operators also to load data to BigQuery, filenames are pushed to XCom from those operators.\nSo as a workaround, I read those values from XCom and use them as an argument for GCSToBigQueryOperator\n" ]
[ 0, 0 ]
[]
[]
[ "airflow", "python" ]
stackoverflow_0074626222_airflow_python.txt
Q: How to know the index of an element in a list If I have a list like [[a, b], [c, d], [e, f]] How would I know that element a is in index 0 of the big list. I'm unsure on how to do this with a 2 dimensional array. I tried to use index, but it not works s = [['a', 'b'], ['c', 'd'], ['e', 'f']] s.index('a') Traceback (most recent call last): File "C:/Users/xxy/PycharmProjects/tb/test.py", line 3, in <module> s.index('a') ValueError: 'a' is not in list A: You can solve it with a loop s = [['a', 'b'], ['c', 'd'], ['e', 'f']] x = 'a' find = False for i, line in enumerate(s): if x in line: print(f'find {x} at', i, line.index(x)) find = True break if not find: print(f'{x} is not in list') A: More simpler way you can use any() s = [['a', 'b'], ['c', 'd'], ['e', 'f']] for i, v in enumerate(s): if any('a' in x for x in v): print(i) Gives # 0
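Both ideas can be folded into a small helper that returns the row and column index in one pass (and None when the element is missing):
s = [['a', 'b'], ['c', 'd'], ['e', 'f']]

def find_2d(grid, target):
    for i, row in enumerate(grid):
        if target in row:
            return i, row.index(target)
    return None

print(find_2d(s, 'a'))  # (0, 0)
print(find_2d(s, 'f'))  # (2, 1)
print(find_2d(s, 'z'))  # None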
How to know the index of an element in a list
If I have a list like [[a, b], [c, d], [e, f]] How would I know that element a is in index 0 of the big list. I'm unsure on how to do this with a 2 dimensional array. I tried to use index, but it not works s = [['a', 'b'], ['c', 'd'], ['e', 'f']] s.index('a') Traceback (most recent call last): File "C:/Users/xxy/PycharmProjects/tb/test.py", line 3, in <module> s.index('a') ValueError: 'a' is not in list
[ "You can solve it with a loop\ns = [['a', 'b'], ['c', 'd'], ['e', 'f']]\nx = 'a'\nfind = False\nfor i, line in enumerate(s):\n if x in line:\n print(f'find {x} at', i, line.index(x))\n find = True\n break\nif not find:\n print(f'{x} is not in list')\n\n", "More simpler way you can use any()\ns = [['a', 'b'], ['c', 'd'], ['e', 'f']]\n\nfor i, v in enumerate(s):\n if any('a' in x for x in v):\n print(i)\n\nGives #\n0\n\n" ]
[ 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074630965_python.txt
Q: Python Beginner Question: Can someone help me understand why the output is [1,1,1,1,2,3]? my_list = [1,2,3] for v in range (len(my_list)): my_list.insert(1,my_list[v]) print(my_list) #outputs [1,1,1,1,2,3] I am getting tripped up on why the value of V is set to 1 instead of iterating through the other number found in the list. I've tried reading up on W3 schools but still confused A: This is because you insert the value 1 before the #1 position in the list, but because the list updates every iteration, the value in the #1 position is always 1. to see this clearly, you can print the list in every iteration [1,1,2,3] [1,1,1,2,3] [1,1,1,1,2,3] the only thing not updating is the range() that was set to the length of the original list (range(0,3)->v=0,v=1,v=2) A: The reason isn't exactly that you're modifying the list you're iterating over-- it's because range produces an immutable sequence of a fixed length, created exactly once. Therefore, the number of iterations of the for loop is based on len(my_list) at the beginning, which is 3. The list.insert(i, x) method puts x in index i, and shoves the old elements at indices (i, i+1 ...) to the right. Thus your code is equivalent to: my_list = [1,2,3] for v in range(3): my_list.insert(1, my_list[v]) And, if we put parentheses around the new element just inserted in each of the 3 iterations: my_list = [1, 2, 3] Insert (my_list[0] == 1) at position 1 my_list = [1, (1), 2, 3] Insert (my_list[1] == 1) at position 1 my_list = [1, (1), 1, 2, 3] Insert (my_list[2] == 1) at position 1 my_list = [1, (1), 1, 1, 2, 3] While this isn't as dangerous as iterating over the list directly while modifying it, the best way to avoid unexpected behavior is to make a copy of the list beforehand, and index into a different list than the one you're modifying. A: You can add a print statement in the loop as follows which will make it easier to understand. my_list = [1,2,3] for v in range (len(my_list)): my_list.insert(1,my_list[v]) print(f"mylist: {my_list}") print(f"mylist[v]: {my_list[v]}") print(f"v: {v}") print(my_list) above code results: mylist: [1, 1, 2, 3] mylist[v]: 1 v: 0 mylist: [1, 1, 1, 2, 3] mylist[v]: 1 v: 1 mylist: [1, 1, 1, 1, 2, 3] mylist[v]: 1 v: 2 [1, 1, 1, 1, 2, 3] A: The len(my_list) in the range constructor is evaluated once, at the top of the first iteration of the loop, where the length of my_list is 3. This results in the loop variable v being set to 0, 1, and 2 as the loop iterates. Inside the loop, you insert into the list at the second position (i.e. just after the first element of the list) the value of the existing element v of the list. v changes on each iteration (0, 1, 2) as you expect. But, so does my_list, since you are inserting an element with each iteration. So you get the following transformation of my_list: Start: my_list = [ 1 2 3 ] v = 0: my_list[0] = 1 -> my_list = [ 1 *1* 2 3 ] (*X* indicates inserted element) v = 1: my_list[1] = 1 -> my_list = [ 1 *1* 1 2 3 ] v = 2: my_list[2] = 1 -> my_list = [ 1 *1* 1 1 2 3 ] Essentially, as you prepend elements to the list, the indices of the existing elements are shifting to the right. A: Found this website with an explantion on this: Iterate a list using range() and for loop. https://pynative.com/python-range-function/ "When you iterate the list only using a loop, you can access only items. When you iterate the list only using a loop, you can only access its items, but when you use range() along with the loop, you can access the index number of each item. 
The advantage of using range() to iterate a list is that it allows us to access each item’s index number. Using index numbers, we can access as well as modify list items if required."
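Following the advice above about indexing into a different list than the one being modified, a minimal sketch of the copy approach looks like this:
my_list = [1, 2, 3]
snapshot = my_list[:]               # freeze the values before any inserts
for v in range(len(snapshot)):
    my_list.insert(1, snapshot[v])  # read from the copy, modify the original
print(my_list)                      # [1, 3, 2, 1, 2, 3]
Now the inserted values are 1, 2 and 3 rather than 1, 1 and 1, because the reads no longer see the earlier inserts.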
Python Beginner Question: Can someone help me understand why the output is [1,1,1,1,2,3]?
my_list = [1,2,3] for v in range (len(my_list)): my_list.insert(1,my_list[v]) print(my_list) #outputs [1,1,1,1,2,3] I am getting tripped up on why the value of V is set to 1 instead of iterating through the other number found in the list. I've tried reading up on W3 schools but still confused
[ "This is because you insert the value 1 before the #1 position in the list,\nbut because the list updates every iteration, the value in the #1 position\nis always 1.\nto see this clearly, you can print the list in every iteration\n[1,1,2,3]\n[1,1,1,2,3]\n[1,1,1,1,2,3]\n\nthe only thing not updating is the range() that was set to the length\nof the original list (range(0,3)->v=0,v=1,v=2)\n", "The reason isn't exactly that you're modifying the list you're iterating over-- it's because range produces an immutable sequence of a fixed length, created exactly once. Therefore, the number of iterations of the for loop is based on len(my_list) at the beginning, which is 3. The list.insert(i, x) method puts x in index i, and shoves the old elements at indices (i, i+1 ...) to the right.\nThus your code is equivalent to:\nmy_list = [1,2,3]\nfor v in range(3):\n my_list.insert(1, my_list[v])\n\nAnd, if we put parentheses around the new element just inserted in each of the 3 iterations:\nmy_list = [1, 2, 3]\n\nInsert (my_list[0] == 1) at position 1\nmy_list = [1, (1), 2, 3]\n\nInsert (my_list[1] == 1) at position 1\nmy_list = [1, (1), 1, 2, 3]\n\nInsert (my_list[2] == 1) at position 1\nmy_list = [1, (1), 1, 1, 2, 3]\n\nWhile this isn't as dangerous as iterating over the list directly while modifying it, the best way to avoid unexpected behavior is to make a copy of the list beforehand, and index into a different list than the one you're modifying.\n", "You can add a print statement in the loop as follows which will make it easier to understand.\nmy_list = [1,2,3]\nfor v in range (len(my_list)):\n my_list.insert(1,my_list[v])\n print(f\"mylist: {my_list}\")\n print(f\"mylist[v]: {my_list[v]}\")\n print(f\"v: {v}\")\n \n\nprint(my_list)\n\nabove code results:\nmylist: [1, 1, 2, 3]\nmylist[v]: 1\nv: 0\nmylist: [1, 1, 1, 2, 3]\nmylist[v]: 1\nv: 1\nmylist: [1, 1, 1, 1, 2, 3]\nmylist[v]: 1\nv: 2\n[1, 1, 1, 1, 2, 3]\n\n", "The len(my_list) in the range constructor is evaluated once, at the top of the first iteration of the loop, where the length of my_list is 3. This results in the loop variable v being set to 0, 1, and 2 as the loop iterates.\nInside the loop, you insert into the list at the second position (i.e. just after the first element of the list) the value of the existing element v of the list. v changes on each iteration (0, 1, 2) as you expect. But, so does my_list, since you are inserting an element with each iteration. So you get the following transformation of my_list:\nStart: my_list = [ 1 2 3 ]\nv = 0: my_list[0] = 1 -> my_list = [ 1 *1* 2 3 ] (*X* indicates inserted element)\nv = 1: my_list[1] = 1 -> my_list = [ 1 *1* 1 2 3 ]\nv = 2: my_list[2] = 1 -> my_list = [ 1 *1* 1 1 2 3 ]\n\nEssentially, as you prepend elements to the list, the indices of the existing elements are shifting to the right.\n", "Found this website with an explantion on this: Iterate a list using range() and for loop.\nhttps://pynative.com/python-range-function/\n\"When you iterate the list only using a loop, you can access only items. When you iterate the list only using a loop, you can only access its items, but when you use range() along with the loop, you can access the index number of each item.\nThe advantage of using range() to iterate a list is that it allows us to access each item’s index number. Using index numbers, we can access as well as modify list items if required.\"\n" ]
[ 2, 2, 1, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0069138557_python.txt
Q: scrollable canvas working with .pack but not .place I am trying to get my scrollable canvas to work. It works when I pack the elements using .pack, however when I insert the elements via .place, the scrollbar stops working. Here is a minimal reproducable example of my code. startup.py file: import frame as f import placeWidgetsOnFrame as p p.populate3() f.window.mainloop() frame.py file: #Creates widnow window = customtkinter.CTk() window.geometry("1900x980") customtkinter.set_appearance_mode("dark") window.resizable(False, False) #Creates Frame for GUI mainFrame = customtkinter.CTkFrame(window, width=1900, height=980, corner_radius=0) mainFrame.pack(expand=True, fill=tk.BOTH) mainFrame.pack_propagate(False) topFrame = customtkinter.CTkFrame(master=mainFrame, width=1865, height=140, corner_radius=10) topFrame.grid(columnspan=2, padx=15, pady=15) topFrame.pack_propagate(0) leftFrame = customtkinter.CTkFrame(master=mainFrame, width=380, height=530, corner_radius=10) leftFrame.grid(row=1, column=0, padx=15, pady=10) leftFrame.pack_propagate(False) rightFrame = customtkinter.CTkFrame(master=mainFrame, width=1450, height=775, corner_radius=10) rightFrame.grid(row=1, column=1, padx=15, pady=10, rowspan=2) rightFrame.pack_propagate(False) bottomLeftFrame = customtkinter.CTkFrame(mainFrame, width=380, height=220, corner_radius=10) bottomLeftFrame.grid(row=2, column=0, padx=15, pady=10) bottomLeftFrame.pack_propagate(False) #Creates Scrollbar for right Frame #Creates a canvas for the right Frame canvas2=tk.Canvas(rightFrame, bg="#000000", highlightthickness=0, relief="flat") canvas2.pack(side="left", fill="both", expand=True) #Creates a scroll bar for the right Frame scrollbar = customtkinter.CTkScrollbar(master=rightFrame, orientation="vertical", command=canvas2.yview, corner_radius=10) scrollbar.pack(side=tk.RIGHT, fill=tk.Y) #Configures scrollbar to canvas canvas2.configure(yscrollcommand=scrollbar.set) canvas2.bind("<Configure>", lambda *args, **kwargs: canvas2.configure(scrollregion=canvas2.bbox("all"))) #Creates a scrollable frame to place widgets on scrollableFrame = customtkinter.CTkFrame(canvas2, fg_color=("#C0C2C5", "#343638"), corner_radius=10) canvasFrame = canvas2.create_window((0,0), window=scrollableFrame, anchor="nw", tags=("cf")) #TO DO - resize canvas and to fit all widgets def handleResize(event): c = event.widget cFrame = c.nametowidget(c.itemcget("cf", "window")) minWidth = cFrame.winfo_reqwidth() minHeight = cFrame.winfo_reqheight() print (event.width) print (event.height) if minWidth < event.width: c.itemconfigure("cf", width=event.width) if minHeight < event.height: c.itemconfigure("cf", height=event.height) print (event.width) print (event.height) c.configure(scrollregion=c.bbox("all")) canvas2.bind('<Configure>', handleResize) def onMousewheel(event): canvas2.yview_scroll(-1 * round(event.delta / 120), "units") canvas2.bind_all("<MouseWheel>", onMousewheel) canvas2.bind("<Destroy>", lambda *args, **kwargs: canvas2.unbind_all("<MouseWheel>")) placeWidgetsOnFrame.py file: import tkinter import customtkinter import frame as f rightFrame = f.scrollableFrame def populate2(): for i in range(30): emailLabel = customtkinter.CTkLabel(master=rightFrame, text="Please enter your email:") emailLabel.pack(padx=10, pady=10) def populate3(): x=50 for i in range(30): emailLabel = customtkinter.CTkLabel(master=rightFrame, text="Please enter your email:") emailLabel.place(x=40, y=x) x=x+50 Here is the output when populate3() is run: Here Here is the output when populate2() is run Here 
Does anyone know why this is? I can always go back and change the way I insert widgets to .pack rather than .place, however I would rather use .place as I find it easier to place widgets where I want to. A: The reason is because pack by default will cause the containing frame to grow or shrink to fit all of the child widgets, but place does not. If your frame starts out as 1x1 and you use place to add widgets to it, the size will remain 1x1. When you use place, it is your responsibility to make the containing widget large enough to contain its children. This single feature is one of the most compelling reasons to choose grid or pack over place - these other geometry managers do a lot of work for you so that you can think about the layout logically without getting bogged down in the details of the layout.
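For illustration, a minimal untested sketch of how populate3 could work with .place given that answer: since .place never makes the containing frame grow, the code has to report the content height itself before the scrollregion is refreshed. The names f.scrollableFrame, f.canvas2 and the "cf" window tag are taken from the question's frame.py and are assumptions, not verified code.

import customtkinter
import frame as f

def populate3_with_place(rows=30, row_height=50):
    y = row_height
    for _ in range(rows):
        label = customtkinter.CTkLabel(master=f.scrollableFrame,
                                       text="Please enter your email:")
        label.place(x=40, y=y)
        y += row_height
    # .place does not propagate size, so set the canvas window item's height
    # explicitly and recompute the scrollable region afterwards.
    f.canvas2.itemconfigure("cf", height=y + row_height)
    f.canvas2.configure(scrollregion=f.canvas2.bbox("all"))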
scrollable canvas working with .pack but not .place
I am trying to get my scrollable canvas to work. It works when I pack the elements using .pack, however when I insert the elements via .place, the scrollbar stops working. Here is a minimal reproducable example of my code. startup.py file: import frame as f import placeWidgetsOnFrame as p p.populate3() f.window.mainloop() frame.py file: #Creates widnow window = customtkinter.CTk() window.geometry("1900x980") customtkinter.set_appearance_mode("dark") window.resizable(False, False) #Creates Frame for GUI mainFrame = customtkinter.CTkFrame(window, width=1900, height=980, corner_radius=0) mainFrame.pack(expand=True, fill=tk.BOTH) mainFrame.pack_propagate(False) topFrame = customtkinter.CTkFrame(master=mainFrame, width=1865, height=140, corner_radius=10) topFrame.grid(columnspan=2, padx=15, pady=15) topFrame.pack_propagate(0) leftFrame = customtkinter.CTkFrame(master=mainFrame, width=380, height=530, corner_radius=10) leftFrame.grid(row=1, column=0, padx=15, pady=10) leftFrame.pack_propagate(False) rightFrame = customtkinter.CTkFrame(master=mainFrame, width=1450, height=775, corner_radius=10) rightFrame.grid(row=1, column=1, padx=15, pady=10, rowspan=2) rightFrame.pack_propagate(False) bottomLeftFrame = customtkinter.CTkFrame(mainFrame, width=380, height=220, corner_radius=10) bottomLeftFrame.grid(row=2, column=0, padx=15, pady=10) bottomLeftFrame.pack_propagate(False) #Creates Scrollbar for right Frame #Creates a canvas for the right Frame canvas2=tk.Canvas(rightFrame, bg="#000000", highlightthickness=0, relief="flat") canvas2.pack(side="left", fill="both", expand=True) #Creates a scroll bar for the right Frame scrollbar = customtkinter.CTkScrollbar(master=rightFrame, orientation="vertical", command=canvas2.yview, corner_radius=10) scrollbar.pack(side=tk.RIGHT, fill=tk.Y) #Configures scrollbar to canvas canvas2.configure(yscrollcommand=scrollbar.set) canvas2.bind("<Configure>", lambda *args, **kwargs: canvas2.configure(scrollregion=canvas2.bbox("all"))) #Creates a scrollable frame to place widgets on scrollableFrame = customtkinter.CTkFrame(canvas2, fg_color=("#C0C2C5", "#343638"), corner_radius=10) canvasFrame = canvas2.create_window((0,0), window=scrollableFrame, anchor="nw", tags=("cf")) #TO DO - resize canvas and to fit all widgets def handleResize(event): c = event.widget cFrame = c.nametowidget(c.itemcget("cf", "window")) minWidth = cFrame.winfo_reqwidth() minHeight = cFrame.winfo_reqheight() print (event.width) print (event.height) if minWidth < event.width: c.itemconfigure("cf", width=event.width) if minHeight < event.height: c.itemconfigure("cf", height=event.height) print (event.width) print (event.height) c.configure(scrollregion=c.bbox("all")) canvas2.bind('<Configure>', handleResize) def onMousewheel(event): canvas2.yview_scroll(-1 * round(event.delta / 120), "units") canvas2.bind_all("<MouseWheel>", onMousewheel) canvas2.bind("<Destroy>", lambda *args, **kwargs: canvas2.unbind_all("<MouseWheel>")) placeWidgetsOnFrame.py file: import tkinter import customtkinter import frame as f rightFrame = f.scrollableFrame def populate2(): for i in range(30): emailLabel = customtkinter.CTkLabel(master=rightFrame, text="Please enter your email:") emailLabel.pack(padx=10, pady=10) def populate3(): x=50 for i in range(30): emailLabel = customtkinter.CTkLabel(master=rightFrame, text="Please enter your email:") emailLabel.place(x=40, y=x) x=x+50 Here is the output when populate3() is run: Here Here is the output when populate2() is run Here Does anyone know why this is? 
I can always go back and change the way I insert widgets to .pack rather than .place; however, I would rather use .place, as I find it easier to put widgets exactly where I want them.
[ "The reason is because pack by default will cause the containing frame to grow or shrink to fit all of the child widgets, but place does not. If your frame starts out as 1x1 and you use place to add widgets to it, the size will remain 1x1. When you use place, it is your responsibility to make the containing widget large enough to contain its children.\nThis single feature is one of the most compelling reasons to choose grid or pack over place - these other geometry managers do a lot of work for you so that you can think about the layout logically without getting bogged down in the details of the layout.\n" ]
[ 3 ]
[]
[]
[ "customtkinter", "python", "scrollbar", "tkinter" ]
stackoverflow_0074630052_customtkinter_python_scrollbar_tkinter.txt
Q: Customise a FastAPI response if query = null I have a table set-up in Postgres that contains some user information. I am trying to work out how to query that table with a user ID, and return a custom response if the user ID does not appear in the table. I am using the following table and schema to store the data: class User(Base): __tablename__ = "users" id = Column(Integer, primary_key=True, nullable=False, unique=True) name = Column(String, nullable=False) status = Column(Integer, nullable=False) created_at = Column(TIMESTAMP(timezone=True), nullable=False, server_default=text('now()')) class UserBase(BaseModel): name: str phone: int status: int = 1 Following a tutorial, I can successfully query the table to return the object if found, and raise an exception if not... @router.get("/{id}", response_model=schemas.UserResponse) def get_user(id: int, db: Session = Depends(get_db)): user = db.query(models.User).filter(models.User.id == id).first() print(user) # checking output if not user: raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"User with ID {id} not found.") return user This partially works, in that it returns the correct user response model if the user is found. However, in the case that a user is not found in the table, I want to be able to return an object where the status variable = 0. I assume that will be in place of the HTTPException, but I'm not too sure how to do it. A: I managed to find an answer in the FastAPI documentation here. The below seemed to work for me. @router.get("/{id}", response_model=schemas.UserResponse) def get_user(id: int, db: Session = Depends(get_db)): user = db.query(models.User).filter(models.User.id == id).first() print(user) # checking output if not user: return JSONResponse(content = {'status' : '0'}, status_code=400) return user
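For illustration, a hedged variant of that approach: return a placeholder object that still validates against the response model, so the client receives status = 0 instead of an HTTP error. The UserResponse field names below are assumed from the question's UserBase schema and may need adjusting.

@router.get("/{id}", response_model=schemas.UserResponse)
def get_user(id: int, db: Session = Depends(get_db)):
    user = db.query(models.User).filter(models.User.id == id).first()
    if not user:
        # Placeholder object: status=0 signals "not found" without an HTTP error.
        return schemas.UserResponse(name="unknown", phone=0, status=0)
    return user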
Customise a FastAPI response if query = null
I have a table set-up in Postgres that contains some user information. I am trying to work out how to query that table with a user ID, and return a custom response if the user ID does not appear in the table. I am using the following table and schema to store the data: class User(Base): __tablename__ = "users" id = Column(Integer, primary_key=True, nullable=False, unique=True) name = Column(String, nullable=False) status = Column(Integer, nullable=False) created_at = Column(TIMESTAMP(timezone=True), nullable=False, server_default=text('now()')) class UserBase(BaseModel): name: str phone: int status: int = 1 Following a tutorial, I can successfully query the table to return the object if found, and raise an exception if not... @router.get("/{id}", response_model=schemas.UserResponse) def get_user(id: int, db: Session = Depends(get_db)): user = db.query(models.User).filter(models.User.id == id).first() print(user) # checking output if not user: raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"User with ID {id} not found.") return user This partially works, in that it returns the correct user response model if the user is found. However, in the case that a user is not found in the table, I want to be able to return an object where the status variable = 0. I assume that will be in place of the HTTPException, but I'm not too sure how to do it.
[ "I managed to find an answer in the FastAPI documentation here. The below seemed to work for me.\[email protected](\"/{id}\", response_model=schemas.UserResponse)\ndef get_user(id: int, db: Session = Depends(get_db)):\n \n user = db.query(models.User).filter(models.User.id == id).first()\n print(user) # checking output\n \n if not user:\n return JSONResponse(content = {'status' : '0'}, status_code=400)\n\n return user\n\n" ]
[ 0 ]
[]
[]
[ "fastapi", "get", "json", "postgresql", "python" ]
stackoverflow_0074630913_fastapi_get_json_postgresql_python.txt
Q: Python problem involving 2 For loops that I can't work out EPLgames2018/19 CSVI am fairly new to python. I am reading in a CSV file that contains the stats from every english premier league match in 2018/19. I have created a list of all of the teams. I am then trying to take each team in turn and loop through all of the matches to calculate each teams total points for the season. It seems to work for the first team. It takes Man Utd and I get the correct points for them. The problem I have is getting to the next team in the list and then looping through the points code with them. import csv with open('EPL1819.csv') as file: eplgames = csv.DictReader(file) teampoints = list() eplteams = list() teamcount = 0 count = 0 # Outer loop going through teams one at a time for i in range(20): points = 0 # Inner loop going through each match for x in eplgames: # Populates the eplteams list if x['HomeTeam'] not in eplteams: eplteams.append(x['HomeTeam']) teamcount += 1 #print(eplteams[i]) # Works out the match result if x['FTHG'] > x['FTAG']: match_result = x['HomeTeam'] elif x['FTHG'] < x['FTAG']: match_result = x['AwayTeam'] else: match_result = "Draw" if eplteams[i] == match_result: points += 3 if eplteams[i] == x['HomeTeam']: if match_result == "Draw": points += 1 if eplteams[i] == x['AwayTeam']: if match_result == "Draw": points += 1 # Populates the teampoints list teampoints.append(points) print(eplteams[i]) print("Points:", points) print("Points List:", teampoints[i]) A: You are adding teams to eplteams based on some condition: if x['HomeTeam'] not in eplteams: eplteams.append(x['HomeTeam']) teamcount += 1 And then using eplteam element in condition: if eplteams[i] == match_result: points += 3 if eplteams[i] == x['HomeTeam']: if match_result == "Draw": points += 1 if eplteams[i] == x['AwayTeam']: if match_result == "Draw": points += 1 It doesn't look right. Element i in eplteams may not exist at that moment. A: You need to populate before your loop through the eplgames or else it might not find the teams that are playing: def populate(eplgames): eplteams = [] for x in eplgames: # Populates the eplteams list if x['HomeTeam'] not in eplteams: eplteams.append(x['HomeTeam']) teamcount += 1 return teamcount, eplteams populate before loop: teamcount, eplteams = populate(eplgames) for i in range(20): points = 0 # Inner loop going through each match for x in eplgames: #print(eplteams[i]) # Works out the match result if x['FTHG'] > x['FTAG']: match_result = x['HomeTeam']
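For illustration, a single-pass rewrite of the points calculation that sidesteps both problems in the original (the inner for x in eplgames loop exhausts the csv.DictReader after the first team, and the goal columns come back as strings, so they should be converted before comparing). Column names are taken from the question; this is a sketch, not the original poster's code.

import csv
from collections import defaultdict

points = defaultdict(int)
with open('EPL1819.csv') as file:
    for match in csv.DictReader(file):
        home, away = match['HomeTeam'], match['AwayTeam']
        home_goals, away_goals = int(match['FTHG']), int(match['FTAG'])
        if home_goals > away_goals:
            points[home] += 3        # home win
        elif home_goals < away_goals:
            points[away] += 3        # away win
        else:
            points[home] += 1        # draw
            points[away] += 1

for team, pts in sorted(points.items(), key=lambda kv: -kv[1]):
    print(team, pts)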
Python problem involving 2 For loops that I can't work out
EPLgames2018/19 CSVI am fairly new to python. I am reading in a CSV file that contains the stats from every english premier league match in 2018/19. I have created a list of all of the teams. I am then trying to take each team in turn and loop through all of the matches to calculate each teams total points for the season. It seems to work for the first team. It takes Man Utd and I get the correct points for them. The problem I have is getting to the next team in the list and then looping through the points code with them. import csv with open('EPL1819.csv') as file: eplgames = csv.DictReader(file) teampoints = list() eplteams = list() teamcount = 0 count = 0 # Outer loop going through teams one at a time for i in range(20): points = 0 # Inner loop going through each match for x in eplgames: # Populates the eplteams list if x['HomeTeam'] not in eplteams: eplteams.append(x['HomeTeam']) teamcount += 1 #print(eplteams[i]) # Works out the match result if x['FTHG'] > x['FTAG']: match_result = x['HomeTeam'] elif x['FTHG'] < x['FTAG']: match_result = x['AwayTeam'] else: match_result = "Draw" if eplteams[i] == match_result: points += 3 if eplteams[i] == x['HomeTeam']: if match_result == "Draw": points += 1 if eplteams[i] == x['AwayTeam']: if match_result == "Draw": points += 1 # Populates the teampoints list teampoints.append(points) print(eplteams[i]) print("Points:", points) print("Points List:", teampoints[i])
[ "You are adding teams to eplteams based on some condition:\nif x['HomeTeam'] not in eplteams:\n eplteams.append(x['HomeTeam'])\n teamcount += 1\n\nAnd then using eplteam element in condition:\nif eplteams[i] == match_result:\n points += 3\n\nif eplteams[i] == x['HomeTeam']:\n if match_result == \"Draw\":\n points += 1\n\nif eplteams[i] == x['AwayTeam']:\n if match_result == \"Draw\":\n points += 1\n\nIt doesn't look right.\nElement i in eplteams may not exist at that moment.\n", "You need to populate before your loop through the eplgames or else it might not find the teams that are playing:\ndef populate(eplgames):\n eplteams = []\n for x in eplgames:\n # Populates the eplteams list\n if x['HomeTeam'] not in eplteams:\n eplteams.append(x['HomeTeam'])\n teamcount += 1\n return teamcount, eplteams\n\npopulate before loop:\nteamcount, eplteams = populate(eplgames)\nfor i in range(20):\n points = 0\n # Inner loop going through each match\n for x in eplgames:\n #print(eplteams[i])\n # Works out the match result\n if x['FTHG'] > x['FTAG']:\n match_result = x['HomeTeam']\n\n" ]
[ 0, 0 ]
[]
[]
[ "csv", "for_loop", "list", "python" ]
stackoverflow_0074630823_csv_for_loop_list_python.txt
Q: Create a list of Triangle objects I'm very new to Python and have an issue. I was wondering if there was a way I could create a list of objects created. For example, say I have a class: list_triangles = [] def class Triangle: def __init__(self, h, w): self.h = h self.w = w a = Triangle(5,6) b = Triangle(3,3) What I would have to add such that each time I defined a new object, it would append to list_triangles, such that in the end I have a list of objects? e.g list_triangles = (a, b) I'm thinking I'd have to make a for loop, but I'm not sure because what would I say for i in range ____? A: Put the arguments in a list, then iterate over that. params = [(5, 6), (3, 3)] list_triangles = [Triangle(*p) for p in params] A: If I understand you correctly, you want to have globaly accessible list of every created object, if so you can declare class variable and then append self to it in constructor to keep track of objects like so class Cat(): instances = [] def __init__(self, name): self.name = name Cat.instances.append(self) a = Cat("Bob") b = Cat("John") print(Cat.instances) print(Cat.instances[0].name) Which results in >> [<__main__.Cat object at 0x00000000014BE790>, <__main__.Cat object at 0x00000000014BE610>] >> Bob
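A short sketch combining both suggestions: keep the class definition plain (the question's "def class" is presumably a typo for "class") and collect instances either in an explicit list or through a class-level registry.

class Triangle:
    instances = []                       # class-level registry (optional)

    def __init__(self, h, w):
        self.h = h
        self.w = w
        Triangle.instances.append(self)  # every new Triangle registers itself

params = [(5, 6), (3, 3)]
list_triangles = [Triangle(h, w) for h, w in params]

print(len(list_triangles))       # 2
print(len(Triangle.instances))   # 2 -- same objects, tracked automatically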
Create a list of Triangle objects
I'm very new to Python and have an issue. I was wondering if there was a way I could create a list of objects created. For example, say I have a class: list_triangles = [] def class Triangle: def __init__(self, h, w): self.h = h self.w = w a = Triangle(5,6) b = Triangle(3,3) What I would have to add such that each time I defined a new object, it would append to list_triangles, such that in the end I have a list of objects? e.g list_triangles = (a, b) I'm thinking I'd have to make a for loop, but I'm not sure because what would I say for i in range ____?
[ "Put the arguments in a list, then iterate over that.\nparams = [(5, 6), (3, 3)]\nlist_triangles = [Triangle(*p) for p in params]\n\n", "If I understand you correctly, you want to have globaly accessible list of every created object, if so you can declare class variable and then append self to it in constructor to keep track of objects like so\nclass Cat():\n instances = []\n def __init__(self, name):\n self.name = name\n Cat.instances.append(self)\n\na = Cat(\"Bob\")\nb = Cat(\"John\")\n\nprint(Cat.instances)\nprint(Cat.instances[0].name)\n\nWhich results in\n>> [<__main__.Cat object at 0x00000000014BE790>, <__main__.Cat object at 0x00000000014BE610>]\n>> Bob\n\n" ]
[ 3, 1 ]
[]
[]
[ "for_loop", "python" ]
stackoverflow_0074630799_for_loop_python.txt
Q: Process finished with exit code 139 (interrupted by signal 11: SIGSEGV) I'm trying to execute a Python script, but I am getting the following error: Process finished with exit code 139 (interrupted by signal 11: SIGSEGV) I'm using python 3.5.2 on a Linux Mint 18.1 Serena OS Can someone tell me why this happens, and how can I solve? A: The SIGSEGV signal indicates a "segmentation violation" or a "segfault". More or less, this equates to a read or write of a memory address that's not mapped in the process. This indicates a bug in your program. In a Python program, this is either a bug in the interpreter or in an extension module being used (and the latter is the most common cause). To fix the problem, you have several options. One option is to produce a minimal, self-contained, complete example which replicates the problem and then submit it as a bug report to the maintainers of the extension module it uses. Another option is to try to track down the cause yourself. gdb is a valuable tool in such an endeavor, as is a debug build of Python and all of the extension modules in use. After you have gdb installed, you can use it to run your Python program: gdb --args python <more args if you want> And then use gdb commands to track down the problem. If you use run then your program will run until it would have crashed and you will have a chance to inspect the state using other gdb commands. A: Another possible cause (which I encountered today) is that you're trying to read/write a file which is open. In this case, simply closing the file and rerunning the script solved the issue. A: After some times I discovered that I was running a new TensorFlow version that gives error on older computers. I solved the problem downgrading the TensorFlow version to 1.4 A: When I encounter this problem, I realize there are some memory issues. I rebooted PC and solved it. A: This can also be the case if your C-program (e.g. using cpython is trying to access a variable out-of-bound ctypedef struct ReturnRows: double[10] your_value cdef ReturnRows s_ReturnRows # Allocate memory for the struct s_ReturnRows.your_value = [0] * 12 will fail with Process finished with exit code 139 (interrupted by signal 11: SIGSEGV) A: For me, I was using the OpenCV library to apply SIFT. In my code, I replaced cv2.SIFT() to cv2.SIFT_create() and the problem is gone. A: Deleted the python interpreter and the 'venv' folder solve my error. A: I got this error in PHP, while running PHPUnit. The reason was a circular dependency. A: I received the same error when trying to connect to an Oracle DB using the pyodbc module: connection = pyodbc.connect() The error occurred on the following occasions: The DB connection has been opened multiple times in the same python file While in debug mode a breakpoint has been reached while the connection to the DB being open The error message could be avoided with the following approaches: Open the DB only once and reuse the connection at all needed places Properly close the DB connection after using it Hope, that will help anyone! A: 11 : SIGSEGV - This signal is arises when an memory segement is illegally accessed. There is a module name signal in python through which you can handle this kind of OS signals. 
If you want to ignore this SIGSEGV signal, you can do this: signal.signal(signal.SIGSEGV, signal.SIG_IGN) However, ignoring the signal can cause some inappropriate behaviours to your code, so it is better to handle the SIGSEGV signal with your defined handler like this: def SIGSEGV_signal_arises(signalNum, stack): print(f"{signalNum} : SIGSEGV arises") # Your code signal.signal(signal.SIGSEGV, SIGSEGV_signal_arises) A: I encountered this problem when I was trying to run my code on an external GPU which was disconnected. I set os.environ['PYOPENCL_CTX']=2 where GPU 2 was not connected. So I just needed to change the code to os.environ['PYOPENCL_CTX'] = 1. A: For me these three lines of code already reproduced the error, no matter how much free memory was available: import numpy as np from sklearn.cluster import KMeans X = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]]) kmeans = KMeans(n_clusters=1, random_state=0).fit(X) I could solve the issue by removing an reinstalling the scikit-learn package. A very similar solution to this. A: This can also occur if trying to compound threads using concurrent.futures. For example, calling .map inside another .map call. This can be solved by removing one of the .map calls. A: I had the same issue working with kmeans from scikit-learn. Upgrading from scikit-learn 1.0 to 1.0.2 solved it for me. A: This issue is often caused by incompatible libraries in your environment. In my case, it was the pyspark library. A: In my case, reverting my most recent conda installs fixed the situation. A: I got this error when importing monai. It was solved after I created a new conda environment. Possible reasons I could imagine were either that there were some conflict between different packages, or maybe that my environment name was the same as the package name I wanted to import (monai). A: It can be caused because of numba. For instance, numba does not accept normal python lists instead of numpy arrays.
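One diagnostic worth keeping alongside the gdb suggestion above: the standard-library faulthandler module prints the Python-level traceback when the interpreter receives SIGSEGV, which usually points at the extension module responsible. A minimal sketch:

# Enable at the very top of the script (or run `python -X faulthandler script.py`).
import faulthandler
faulthandler.enable()

# ... the rest of the program; if it segfaults, the active Python stack of
# every thread is written to stderr before the process dies.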
Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)
I'm trying to execute a Python script, but I am getting the following error: Process finished with exit code 139 (interrupted by signal 11: SIGSEGV) I'm using python 3.5.2 on a Linux Mint 18.1 Serena OS Can someone tell me why this happens, and how can I solve?
[ "The SIGSEGV signal indicates a \"segmentation violation\" or a \"segfault\". More or less, this equates to a read or write of a memory address that's not mapped in the process.\nThis indicates a bug in your program. In a Python program, this is either a bug in the interpreter or in an extension module being used (and the latter is the most common cause).\nTo fix the problem, you have several options. One option is to produce a minimal, self-contained, complete example which replicates the problem and then submit it as a bug report to the maintainers of the extension module it uses.\nAnother option is to try to track down the cause yourself. gdb is a valuable tool in such an endeavor, as is a debug build of Python and all of the extension modules in use.\nAfter you have gdb installed, you can use it to run your Python program:\ngdb --args python <more args if you want>\n\nAnd then use gdb commands to track down the problem. If you use run then your program will run until it would have crashed and you will have a chance to inspect the state using other gdb commands.\n", "Another possible cause (which I encountered today) is that you're trying to read/write a file which is open. In this case, simply closing the file and rerunning the script solved the issue.\n", "After some times I discovered that I was running a new TensorFlow version that gives error on older computers. I solved the problem downgrading the TensorFlow version to 1.4\n", "When I encounter this problem, I realize there are some memory issues. I rebooted PC and solved it.\n", "This can also be the case if your C-program (e.g. using cpython is trying to access a variable out-of-bound\n\nctypedef struct ReturnRows:\n double[10] your_value\n\ncdef ReturnRows s_ReturnRows # Allocate memory for the struct\ns_ReturnRows.your_value = [0] * 12\n\nwill fail with\nProcess finished with exit code 139 (interrupted by signal 11: SIGSEGV)\n\n", "For me, I was using the OpenCV library to apply SIFT.\nIn my code, I replaced cv2.SIFT() to cv2.SIFT_create() and the problem is gone.\n", "Deleted the python interpreter and the 'venv' folder solve my error.\n", "I got this error in PHP, while running PHPUnit. The reason was a circular dependency.\n", "I received the same error when trying to connect to an Oracle DB using the pyodbc module:\nconnection = pyodbc.connect()\n\nThe error occurred on the following occasions:\n\nThe DB connection has been opened multiple times in the same python\nfile\nWhile in debug mode a breakpoint has been reached\nwhile the connection to the DB being open\n\nThe error message could be avoided with the following approaches:\n\nOpen the DB only once and reuse the connection at all needed places\nProperly close the DB connection after using it\n\nHope, that will help anyone!\n", "11 : SIGSEGV - This signal is arises when an memory segement is illegally accessed.\nThere is a module name signal in python through which you can handle this kind of OS signals.\nIf you want to ignore this SIGSEGV signal, you can do this:\nsignal.signal(signal.SIGSEGV, signal.SIG_IGN)\n\nHowever, ignoring the signal can cause some inappropriate behaviours to your code, so it is better to handle the SIGSEGV signal with your defined handler like this:\ndef SIGSEGV_signal_arises(signalNum, stack):\n print(f\"{signalNum} : SIGSEGV arises\")\n # Your code\n\nsignal.signal(signal.SIGSEGV, SIGSEGV_signal_arises) \n\n", "I encountered this problem when I was trying to run my code on an external GPU which was disconnected. 
I set os.environ['PYOPENCL_CTX']=2 where GPU 2 was not connected. So I just needed to change the code to os.environ['PYOPENCL_CTX'] = 1.\n", "For me these three lines of code already reproduced the error, no matter how much free memory was available:\nimport numpy as np\nfrom sklearn.cluster import KMeans\n\nX = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]])\nkmeans = KMeans(n_clusters=1, random_state=0).fit(X)\n\nI could solve the issue by removing an reinstalling the scikit-learn package. A very similar solution to this.\n", "This can also occur if trying to compound threads using concurrent.futures. For example, calling .map inside another .map call.\nThis can be solved by removing one of the .map calls.\n", "I had the same issue working with kmeans from scikit-learn.\nUpgrading from scikit-learn 1.0 to 1.0.2 solved it for me.\n", "This issue is often caused by incompatible libraries in your environment. In my case, it was the pyspark library.\n", "In my case, reverting my most recent conda installs fixed the situation.\n", "I got this error when importing monai. It was solved after I created a new conda environment. Possible reasons I could imagine were either that there were some conflict between different packages, or maybe that my environment name was the same as the package name I wanted to import (monai).\n", "It can be caused because of numba. For instance, numba does not accept normal python lists instead of numpy arrays.\n" ]
[ 59, 19, 10, 6, 6, 3, 2, 2, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "found on other page.\ninterpreter: python 3.8\ncv2.CascadeClassifier(cv2.data.haarcascades + \"haarcascade_frontalface_default.xml\")\nthis solved issue for me. \ni was getting SIGSEGV with 2.7, upgraded my python to 3.8 then got different error with OpenCV. and found answer on OpenCV 4.0.0 SystemError: <class 'cv2.CascadeClassifier'> returned a result with an error set.\nbut eventually one line of code fixed it.\n" ]
[ -2 ]
[ "linux_mint", "python", "python_3.5", "segmentation_fault" ]
stackoverflow_0049414841_linux_mint_python_python_3.5_segmentation_fault.txt
Q: getting a list of dictionaries as a list of lists Ok so I have a list of the same dictionaries and I want to get the values of the dictionaries into a list of lists. For example this is what one dictionary might look like: mylist = [{'a': 0, 'b': 2},{'a':1, 'b':3}] I want the lists of lists to look like: [[0,2],[1,3]] I have tried doing zip(*[d.values() for d in mylist]) however this results in a list of different keys for example: [[0,1],[2,3]] A: As the comments suggest, I don't think you need zip() for this to work, instead just try something simpler such as [list(i.values()) for i in mylist] You convert the values into a list with the list() function, and the values are already obtained with the .values() method A: Try this [list(i.values()) for i in mylist]
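A tiny sketch contrasting the two results, since the zip(*...) in the question is exactly what transposes the values:

mylist = [{'a': 0, 'b': 2}, {'a': 1, 'b': 3}]

per_dict = [list(d.values()) for d in mylist]   # keep each dict's values together
transposed = list(zip(*per_dict))               # what the zip(*...) version produces

print(per_dict)    # [[0, 2], [1, 3]]  <- desired output
print(transposed)  # [(0, 1), (2, 3)]  <- values regrouped across dictionaries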
getting a list of dictionaries as a list of lists
Ok so I have a list of the same dictionaries and I want to get the values of the dictionaries into a list of lists. For example this is what one dictionary might look like: mylist = [{'a': 0, 'b': 2},{'a':1, 'b':3}] I want the lists of lists to look like: [[0,2],[1,3]] I have tried doing zip(*[d.values() for d in mylist]) however this results in a list of different keys for example: [[0,1],[2,3]]
[ "As the comments suggest, I don't think you need zip() for this to work, instead just try something simpler such as [list(i.values()) for i in mylist]\nYou convert the values into a list with the list() function, and the values are already obtained with the .values() method\n", "Try this [list(i.values()) for i in mylist]\n" ]
[ 2, 1 ]
[]
[]
[ "dictionary", "list", "python" ]
stackoverflow_0074631072_dictionary_list_python.txt
Q: Django lookup by JSONField array value Let's say I have MySQL database records with this structure { "id": 44207, "actors": [ { "id": "9c88bd9c-f41b-59fa-bfb6-427b1755ea64", "name": "APT41", "scope": "confirmed" }, { "id": "6f82bd9c-f31b-59fa-bf26-427b1355ea64", "name": "APT67", "scope": "confirmed" } ], }, { "id": 44208, "actors": [ { "id": "427b1355ea64-bfb6-59fa-bfb6-427b1755ea64", "name": "APT21", "scope": "confirmed" }, { "id": "9c88bd9c-f31b-59fa-bf26-427b1355ea64", "name": "APT22", "scope": "confirmed" } ], }, ... "actors" is a JSONField Any way I can filter all of the objects whose actors name contains '67', for example? Closest variant I have is that I got it working like that: queryset.filter(actors__contains=[{"name":"APT67"}]) But this query matches by exact actor.name value, while I want to to accept 'contains' operator. I also have it working by quering with strict array index, like this: queryset.filter(actors__0__name__icontains='67') But it only matches if first element in array matches my request. And I need that object shall be returned in any of his actors matches my query, so I was expecting something like queryset.filter(actors__name__icontains='67') to work, but it's not working :( So far I have to use models.Q and multiple ORs to support my needs, like this - search_query = models.Q(actors__0__name__icontains='67') | models.Q(actors__1__name__icontains='67') | models.Q(actors__2__name__icontains='67') | models.Q(actors__3__name__icontains='67') queryset.filter(search_query) but this looks horrible and supports only 4 elements lookup(or I have to include more OR's) Any clues if thats possible to be solved normal way overall? A: Following this answer and the linked answer in the same post. 'contains' or 'icontains' looks for the patterns '%string%', which in your case assumes '67' is between characters. But, the number pattern is at the end of your actor name. So, based on the answers I linked, you should probably try endswith or iendswith, in order to look for the pattern '%67' A: My data model: class MyCustomModel(models.Model): id = models.BigAutoField(primary_key=True) actors = models.JSONField(blank=True, null=True) I ended up with quite hacky lookup operator which replaces '$."' into '$[*]."' in my JSON field queries, which in my case was making the correct query, filtering all the objects whos JSON field with array of objects, contains one of the needed property. Lookup operator: from django.db.models.lookups import IContains from django.db.models import Field # Custom lookup which acts like the default IContains lookup but also replaces field name to match all JSON array objects ['*'].field_name in the query instead of $.field_name. # Maybe this could be done in a better way with better Q field path, but this works for now. class JSONArrayContains(IContains): lookup_name = 'jsonarraycontains' def __init__(self, lhs, rhs): self.lookup_name = 'icontains' # we fake the lookup name to get the right operators further super().__init__(lhs, rhs) def as_sql(self, compiler, connection): lhs_sql, params = self.process_lhs(compiler, connection) # !! HERE IS THE MAGIC # we need to replace params parts which are like '$."name"' into parts like '$[*]."name"' if param is string and matches $." 
pattern params = [param.replace('$."', '$[*]."') if isinstance(param, str) and param.startswith('$."') else param for param in params] rhs_sql, rhs_params = self.process_rhs(compiler, connection) params.extend(rhs_params) rhs_sql = self.get_rhs_op(connection, rhs_sql) return f'{lhs_sql} {rhs_sql}', params Usage: queryset.filter(actors__name__jsonarraycontains='67') Which is filtering all the records, which
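One step the answer leaves implicit is registering the lookup; without registration the jsonarraycontains filter will not resolve. A sketch, assuming the JSONArrayContains class above: because the filter goes through a key transform (actors__name__...), registering on Django's KeyTransform is one option (an internal but long-standing import path), while JSONField.register_lookup would cover filters applied directly to the field.

from django.db.models.fields.json import KeyTransform

# Register once at import time (e.g. in the app's AppConfig.ready() or models.py).
KeyTransform.register_lookup(JSONArrayContains)

# After registration the filter from the answer resolves:
# MyCustomModel.objects.filter(actors__name__jsonarraycontains='67')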
Django lookup by JSONField array value
Let's say I have MySQL database records with this structure { "id": 44207, "actors": [ { "id": "9c88bd9c-f41b-59fa-bfb6-427b1755ea64", "name": "APT41", "scope": "confirmed" }, { "id": "6f82bd9c-f31b-59fa-bf26-427b1355ea64", "name": "APT67", "scope": "confirmed" } ], }, { "id": 44208, "actors": [ { "id": "427b1355ea64-bfb6-59fa-bfb6-427b1755ea64", "name": "APT21", "scope": "confirmed" }, { "id": "9c88bd9c-f31b-59fa-bf26-427b1355ea64", "name": "APT22", "scope": "confirmed" } ], }, ... "actors" is a JSONField Any way I can filter all of the objects whose actors name contains '67', for example? Closest variant I have is that I got it working like that: queryset.filter(actors__contains=[{"name":"APT67"}]) But this query matches by exact actor.name value, while I want to to accept 'contains' operator. I also have it working by quering with strict array index, like this: queryset.filter(actors__0__name__icontains='67') But it only matches if first element in array matches my request. And I need that object shall be returned in any of his actors matches my query, so I was expecting something like queryset.filter(actors__name__icontains='67') to work, but it's not working :( So far I have to use models.Q and multiple ORs to support my needs, like this - search_query = models.Q(actors__0__name__icontains='67') | models.Q(actors__1__name__icontains='67') | models.Q(actors__2__name__icontains='67') | models.Q(actors__3__name__icontains='67') queryset.filter(search_query) but this looks horrible and supports only 4 elements lookup(or I have to include more OR's) Any clues if thats possible to be solved normal way overall?
[ "Following this answer and the linked answer in the same post.\n'contains' or 'icontains' looks for the patterns '%string%', which in your case assumes '67' is between characters. But, the number pattern is at the end of your actor name.\nSo, based on the answers I linked, you should probably try endswith or iendswith, in order to look for the pattern '%67'\n", "My data model:\nclass MyCustomModel(models.Model):\n id = models.BigAutoField(primary_key=True)\n actors = models.JSONField(blank=True, null=True)\n\nI ended up with quite hacky lookup operator which replaces '$.\"' into '$[*].\"' in my JSON field queries, which in my case was making the correct query, filtering all the objects whos JSON field with array of objects, contains one of the needed property.\nLookup operator:\nfrom django.db.models.lookups import IContains\nfrom django.db.models import Field\n\n# Custom lookup which acts like the default IContains lookup but also replaces field name to match all JSON array objects ['*'].field_name in the query instead of $.field_name.\n# Maybe this could be done in a better way with better Q field path, but this works for now.\nclass JSONArrayContains(IContains):\n lookup_name = 'jsonarraycontains'\n \n def __init__(self, lhs, rhs):\n self.lookup_name = 'icontains' # we fake the lookup name to get the right operators further\n super().__init__(lhs, rhs)\n \n\n def as_sql(self, compiler, connection):\n lhs_sql, params = self.process_lhs(compiler, connection)\n\n # !! HERE IS THE MAGIC\n # we need to replace params parts which are like '$.\"name\"' into parts like '$[*].\"name\"' if param is string and matches $.\" pattern\n params = [param.replace('$.\"', '$[*].\"') if isinstance(param, str) and param.startswith('$.\"') else param for param in params] \n\n rhs_sql, rhs_params = self.process_rhs(compiler, connection)\n params.extend(rhs_params)\n rhs_sql = self.get_rhs_op(connection, rhs_sql)\n return f'{lhs_sql} {rhs_sql}', params\n\nUsage:\nqueryset.filter(actors__name__jsonarraycontains='67')\n\nWhich is filtering all the records, which\n" ]
[ 0, 0 ]
[]
[]
[ "django", "django_models", "django_rest_framework", "mysql", "python" ]
stackoverflow_0074617447_django_django_models_django_rest_framework_mysql_python.txt
Q: Sorting list based on values from another list I have a list of strings like this: X = ["a", "b", "c", "d", "e", "f", "g", "h", "i"] Y = [ 0, 1, 1, 0, 1, 2, 2, 0, 1 ] What is the shortest way of sorting X using values from Y to get the following output? ["a", "d", "h", "b", "c", "e", "i", "f", "g"] The order of the elements having the same "key" does not matter. I can resort to the use of for constructs but I am curious if there is a shorter way. Any suggestions? A: Shortest Code [x for _, x in sorted(zip(Y, X))] Example: X = ["a", "b", "c", "d", "e", "f", "g", "h", "i"] Y = [ 0, 1, 1, 0, 1, 2, 2, 0, 1] Z = [x for _,x in sorted(zip(Y,X))] print(Z) # ["a", "d", "h", "b", "c", "e", "i", "f", "g"] Generally Speaking [x for _, x in sorted(zip(Y, X), key=lambda pair: pair[0])] Explained: zip the two lists. create a new, sorted list based on the zip using sorted(). using a list comprehension extract the first elements of each pair from the sorted, zipped list. For more information on how to set\use the key parameter as well as the sorted function in general, take a look at this. A: Zip the two lists together, sort it, then take the parts you want: >>> yx = zip(Y, X) >>> yx [(0, 'a'), (1, 'b'), (1, 'c'), (0, 'd'), (1, 'e'), (2, 'f'), (2, 'g'), (0, 'h'), (1, 'i')] >>> yx.sort() >>> yx [(0, 'a'), (0, 'd'), (0, 'h'), (1, 'b'), (1, 'c'), (1, 'e'), (1, 'i'), (2, 'f'), (2, 'g')] >>> x_sorted = [x for y, x in yx] >>> x_sorted ['a', 'd', 'h', 'b', 'c', 'e', 'i', 'f', 'g'] Combine these together to get: [x for y, x in sorted(zip(Y, X))] A: Also, if you don't mind using numpy arrays (or in fact already are dealing with numpy arrays...), here is another nice solution: people = ['Jim', 'Pam', 'Micheal', 'Dwight'] ages = [27, 25, 4, 9] import numpy people = numpy.array(people) ages = numpy.array(ages) inds = ages.argsort() sortedPeople = people[inds] I found it here: http://scienceoss.com/sort-one-list-by-another-list/ A: The most obvious solution to me is to use the key keyword arg. >>> X = ["a", "b", "c", "d", "e", "f", "g", "h", "i"] >>> Y = [ 0, 1, 1, 0, 1, 2, 2, 0, 1] >>> keydict = dict(zip(X, Y)) >>> X.sort(key=keydict.get) >>> X ['a', 'd', 'h', 'b', 'c', 'e', 'i', 'f', 'g'] Note that you can shorten this to a one-liner if you care to: >>> X.sort(key=dict(zip(X, Y)).get) As Wenmin Mu and Jack Peng have pointed out, this assumes that the values in X are all distinct. That's easily managed with an index list: >>> Z = ["A", "A", "C", "C", "C", "F", "G", "H", "I"] >>> Z_index = list(range(len(Z))) >>> Z_index.sort(key=keydict.get) >>> Z = [Z[i] for i in Z_index] >>> Z ['A', 'C', 'H', 'A', 'C', 'C', 'I', 'F', 'G'] Since the decorate-sort-undecorate approach described by Whatang is a little simpler and works in all cases, it's probably better most of the time. (This is a very old answer!) A: more_itertools has a tool for sorting iterables in parallel: Given from more_itertools import sort_together X = ["a", "b", "c", "d", "e", "f", "g", "h", "i"] Y = [ 0, 1, 1, 0, 1, 2, 2, 0, 1] Demo sort_together([Y, X])[1] # ('a', 'd', 'h', 'b', 'c', 'e', 'i', 'f', 'g') A: I actually came here looking to sort a list by a list where the values matched. list_a = ['foo', 'bar', 'baz'] list_b = ['baz', 'bar', 'foo'] sorted(list_b, key=lambda x: list_a.index(x)) # ['foo', 'bar', 'baz'] A: Another alternative, combining several of the answers. zip(*sorted(zip(Y,X)))[1] In order to work for python3: list(zip(*sorted(zip(B,A))))[1] A: I like having a list of sorted indices. 
That way, I can sort any list in the same order as the source list. Once you have a list of sorted indices, a simple list comprehension will do the trick: X = ["a", "b", "c", "d", "e", "f", "g", "h", "i"] Y = [ 0, 1, 1, 0, 1, 2, 2, 0, 1] sorted_y_idx_list = sorted(range(len(Y)),key=lambda x:Y[x]) Xs = [X[i] for i in sorted_y_idx_list ] print( "Xs:", Xs ) # prints: Xs: ["a", "d", "h", "b", "c", "e", "i", "f", "g"] Note that the sorted index list can also be gotten using numpy.argsort(). A: zip, sort by the second column, return the first column. zip(*sorted(zip(X,Y), key=operator.itemgetter(1)))[0] A: This is an old question but some of the answers I see posted don't actually work because zip is not scriptable. Other answers didn't bother to import operator and provide more info about this module and its benefits here. There are at least two good idioms for this problem. Starting with the example input you provided: X = ["a", "b", "c", "d", "e", "f", "g", "h", "i"] Y = [ 0, 1, 1, 0, 1, 2, 2, 0, 1 ] Using the "Decorate-Sort-Undecorate" idiom This is also known as the Schwartzian_transform after R. Schwartz who popularized this pattern in Perl in the 90s: # Zip (decorate), sort and unzip (undecorate). # Converting to list to script the output and extract X list(zip(*(sorted(zip(Y,X)))))[1] # Results in: ('a', 'd', 'h', 'b', 'c', 'e', 'i', 'f', 'g') Note that in this case Y and X are sorted and compared lexicographically. That is, the first items (from Y) are compared; and if they are the same then the second items (from X) are compared, and so on. This can create unstable outputs unless you include the original list indices for the lexicographic ordering to keep duplicates in their original order. Using the operator module This gives you more direct control over how to sort the input, so you can get sorting stability by simply stating the specific key to sort by. See more examples here. import operator # Sort by Y (1) and extract X [0] list(zip(*sorted(zip(X,Y), key=operator.itemgetter(1))))[0] # Results in: ('a', 'd', 'h', 'b', 'c', 'e', 'i', 'f', 'g') A: You can create a pandas Series, using the primary list as data and the other list as index, and then just sort by the index: import pandas as pd pd.Series(data=X,index=Y).sort_index().tolist() output: ['a', 'd', 'h', 'b', 'c', 'e', 'i', 'f', 'g'] A: A quick one-liner. list_a = [5,4,3,2,1] list_b = [1,1.5,1.75,2,3,3.5,3.75,4,5] Say you want list a to match list b. orderedList = sorted(list_a, key=lambda x: list_b.index(x)) This is helpful when needing to order a smaller list to values in larger. Assuming that the larger list contains all values in the smaller list, it can be done. A: Here is Whatangs answer if you want to get both sorted lists (python3). X = ["a", "b", "c", "d", "e", "f", "g", "h", "i"] Y = [ 0, 1, 1, 0, 1, 2, 2, 0, 1] Zx, Zy = zip(*[(x, y) for x, y in sorted(zip(Y, X))]) print(list(Zx)) # [0, 0, 0, 1, 1, 1, 1, 2, 2] print(list(Zy)) # ['a', 'd', 'h', 'b', 'c', 'e', 'i', 'f', 'g'] Just remember Zx and Zy are tuples. I am also wandering if there is a better way to do that. Warning: If you run it with empty lists it crashes. A: I have created a more general function, that sorts more than two lists based on another one, inspired by @Whatang's answer. def parallel_sort(*lists): """ Sorts the given lists, based on the first one. 
:param lists: lists to be sorted :return: a tuple containing the sorted lists """ # Create the initially empty lists to later store the sorted items sorted_lists = tuple([] for _ in range(len(lists))) # Unpack the lists, sort them, zip them and iterate over them for t in sorted(zip(*lists)): # list items are now sorted based on the first list for i, item in enumerate(t): # for each item... sorted_lists[i].append(item) # ...store it in the appropriate list return sorted_lists A: X = ["a", "b", "c", "d", "e", "f", "g", "h", "i"] Y = [ 0, 1, 1, 0, 1, 2, 2, 0, 1 ] You can do so in one line: X, Y = zip(*sorted(zip(Y, X))) A: This function should work for arrays. def sortBoth(x,y,reverse=False): ''' Sort both x and y, according to x. ''' xy_sorted=array(sorted(zip(x,y),reverse=reverse)).T return xy_sorted[0],xy_sorted[1] A: list1 = ['a','b','c','d','e','f','g','h','i'] list2 = [0,1,1,0,1,2,2,0,1] output=[] cur_loclist = [] To get unique values present in list2 list_set = set(list2) To find the loc of the index in list2 list_str = ''.join(str(s) for s in list2) Location of index in list2 is tracked using cur_loclist [0, 3, 7, 1, 2, 4, 8, 5, 6] for i in list_set: cur_loc = list_str.find(str(i)) while cur_loc >= 0: cur_loclist.append(cur_loc) cur_loc = list_str.find(str(i),cur_loc+1) print(cur_loclist) for i in range(0,len(cur_loclist)): output.append(list1[cur_loclist[i]]) print(output) A: I think most of the solutions above will not work if the 2 lists are of different sizes or contain different items. The solution below is simple and should fix those issues: import pandas as pd list1 = ['B', 'A', 'C'] # Required sort order list2 = ['C', 'A'] # Items to be sorted according to list1 result = pd.merge(pd.DataFrame(list1), pd.DataFrame(list2)) print(list(result[0])) output: ['A', 'C'] Note: Any item not in list1 will be ignored since the algorithm will not know what's the sort order to use. A: Most of the solutions above are complicated and I think they will not work if the lists are of different lengths or do not contain the exact same items. The solution below is simple and does not require any imports. list1 = ['B', 'A', 'C'] # Required sort order list2 = ['C', 'B'] # Items to be sorted according to list1 result = list1 for item in list1: if item not in list2: result.remove(item) print(result) Output: ['B', 'C'] Note: Any item not in list1 will be ignored since the algorithm will not know what's the sort order to use. A: I think that the title of the original question is not accurate. If you have 2 lists of identical number of items and where every item in list 1 is related to list 2 in the same order (e.g a = 0 , b = 1, etc.) then the question should be 'How to sort a dictionary?', not 'How to sorting list based on values from another list?'. The solution below is the most efficient in this case: X = ["a", "b", "c", "d", "e", "f", "g", "h", "i"] Y = [ 0, 1, 1, 0, 1, 2, 2, 0, 1 ] dict1 = dict(zip(X,Y)) result = sorted(dict1, key=dict1.get) print(result) Result: ['a', 'd', 'h', 'b', 'c', 'e', 'i', 'f', 'g']
Sorting list based on values from another list
I have a list of strings like this: X = ["a", "b", "c", "d", "e", "f", "g", "h", "i"] Y = [ 0, 1, 1, 0, 1, 2, 2, 0, 1 ] What is the shortest way of sorting X using values from Y to get the following output? ["a", "d", "h", "b", "c", "e", "i", "f", "g"] The order of the elements having the same "key" does not matter. I can resort to the use of for constructs but I am curious if there is a shorter way. Any suggestions?
[ "Shortest Code\n[x for _, x in sorted(zip(Y, X))]\n\nExample:\nX = [\"a\", \"b\", \"c\", \"d\", \"e\", \"f\", \"g\", \"h\", \"i\"]\nY = [ 0, 1, 1, 0, 1, 2, 2, 0, 1]\n\nZ = [x for _,x in sorted(zip(Y,X))]\nprint(Z) # [\"a\", \"d\", \"h\", \"b\", \"c\", \"e\", \"i\", \"f\", \"g\"]\n\n\nGenerally Speaking\n[x for _, x in sorted(zip(Y, X), key=lambda pair: pair[0])]\n\nExplained:\n\nzip the two lists.\ncreate a new, sorted list based on the zip using sorted().\nusing a list comprehension extract the first elements of each pair from the sorted, zipped list.\n\nFor more information on how to set\\use the key parameter as well as the sorted function in general, take a look at this.\n\n", "Zip the two lists together, sort it, then take the parts you want:\n>>> yx = zip(Y, X)\n>>> yx\n[(0, 'a'), (1, 'b'), (1, 'c'), (0, 'd'), (1, 'e'), (2, 'f'), (2, 'g'), (0, 'h'), (1, 'i')]\n>>> yx.sort()\n>>> yx\n[(0, 'a'), (0, 'd'), (0, 'h'), (1, 'b'), (1, 'c'), (1, 'e'), (1, 'i'), (2, 'f'), (2, 'g')]\n>>> x_sorted = [x for y, x in yx]\n>>> x_sorted\n['a', 'd', 'h', 'b', 'c', 'e', 'i', 'f', 'g']\n\nCombine these together to get:\n[x for y, x in sorted(zip(Y, X))]\n\n", "Also, if you don't mind using numpy arrays (or in fact already are dealing with numpy arrays...), here is another nice solution:\npeople = ['Jim', 'Pam', 'Micheal', 'Dwight']\nages = [27, 25, 4, 9]\n\nimport numpy\npeople = numpy.array(people)\nages = numpy.array(ages)\ninds = ages.argsort()\nsortedPeople = people[inds]\n\nI found it here:\nhttp://scienceoss.com/sort-one-list-by-another-list/\n", "The most obvious solution to me is to use the key keyword arg.\n>>> X = [\"a\", \"b\", \"c\", \"d\", \"e\", \"f\", \"g\", \"h\", \"i\"]\n>>> Y = [ 0, 1, 1, 0, 1, 2, 2, 0, 1]\n>>> keydict = dict(zip(X, Y))\n>>> X.sort(key=keydict.get)\n>>> X\n['a', 'd', 'h', 'b', 'c', 'e', 'i', 'f', 'g']\n\nNote that you can shorten this to a one-liner if you care to:\n>>> X.sort(key=dict(zip(X, Y)).get)\n\nAs Wenmin Mu and Jack Peng have pointed out, this assumes that the values in X are all distinct. That's easily managed with an index list:\n>>> Z = [\"A\", \"A\", \"C\", \"C\", \"C\", \"F\", \"G\", \"H\", \"I\"]\n>>> Z_index = list(range(len(Z)))\n>>> Z_index.sort(key=keydict.get)\n>>> Z = [Z[i] for i in Z_index]\n>>> Z\n['A', 'C', 'H', 'A', 'C', 'C', 'I', 'F', 'G']\n\nSince the decorate-sort-undecorate approach described by Whatang is a little simpler and works in all cases, it's probably better most of the time. (This is a very old answer!)\n", "more_itertools has a tool for sorting iterables in parallel:\nGiven\nfrom more_itertools import sort_together\n\n\nX = [\"a\", \"b\", \"c\", \"d\", \"e\", \"f\", \"g\", \"h\", \"i\"]\nY = [ 0, 1, 1, 0, 1, 2, 2, 0, 1]\n\nDemo\nsort_together([Y, X])[1]\n# ('a', 'd', 'h', 'b', 'c', 'e', 'i', 'f', 'g')\n\n", "I actually came here looking to sort a list by a list where the values matched.\nlist_a = ['foo', 'bar', 'baz']\nlist_b = ['baz', 'bar', 'foo']\nsorted(list_b, key=lambda x: list_a.index(x))\n# ['foo', 'bar', 'baz']\n\n", "Another alternative, combining several of the answers.\nzip(*sorted(zip(Y,X)))[1]\n\nIn order to work for python3:\nlist(zip(*sorted(zip(B,A))))[1]\n\n", "I like having a list of sorted indices. That way, I can sort any list in the same order as the source list. 
Once you have a list of sorted indices, a simple list comprehension will do the trick:\nX = [\"a\", \"b\", \"c\", \"d\", \"e\", \"f\", \"g\", \"h\", \"i\"]\nY = [ 0, 1, 1, 0, 1, 2, 2, 0, 1]\n\nsorted_y_idx_list = sorted(range(len(Y)),key=lambda x:Y[x])\nXs = [X[i] for i in sorted_y_idx_list ]\n\nprint( \"Xs:\", Xs )\n# prints: Xs: [\"a\", \"d\", \"h\", \"b\", \"c\", \"e\", \"i\", \"f\", \"g\"]\n\nNote that the sorted index list can also be gotten using numpy.argsort().\n", "zip, sort by the second column, return the first column.\nzip(*sorted(zip(X,Y), key=operator.itemgetter(1)))[0]\n\n", "This is an old question but some of the answers I see posted don't actually work because zip is not scriptable. Other answers didn't bother to import operator and provide more info about this module and its benefits here.\nThere are at least two good idioms for this problem. Starting with the example input you provided:\nX = [\"a\", \"b\", \"c\", \"d\", \"e\", \"f\", \"g\", \"h\", \"i\"]\nY = [ 0, 1, 1, 0, 1, 2, 2, 0, 1 ]\n\nUsing the \"Decorate-Sort-Undecorate\" idiom\nThis is also known as the Schwartzian_transform after R. Schwartz who popularized this pattern in Perl in the 90s:\n# Zip (decorate), sort and unzip (undecorate).\n# Converting to list to script the output and extract X\nlist(zip(*(sorted(zip(Y,X)))))[1] \n# Results in: ('a', 'd', 'h', 'b', 'c', 'e', 'i', 'f', 'g')\n\nNote that in this case Y and X are sorted and compared lexicographically. That is, the first items (from Y) are compared; and if they are the same then the second items (from X) are compared, and so on. This can create unstable outputs unless you include the original list indices for the lexicographic ordering to keep duplicates in their original order.\nUsing the operator module\nThis gives you more direct control over how to sort the input, so you can get sorting stability by simply stating the specific key to sort by. See more examples here.\nimport operator \n\n# Sort by Y (1) and extract X [0]\nlist(zip(*sorted(zip(X,Y), key=operator.itemgetter(1))))[0] \n# Results in: ('a', 'd', 'h', 'b', 'c', 'e', 'i', 'f', 'g')\n\n", "You can create a pandas Series, using the primary list as data and the other list as index, and then just sort by the index:\nimport pandas as pd\npd.Series(data=X,index=Y).sort_index().tolist()\n\noutput:\n['a', 'd', 'h', 'b', 'c', 'e', 'i', 'f', 'g']\n\n", "A quick one-liner.\nlist_a = [5,4,3,2,1]\nlist_b = [1,1.5,1.75,2,3,3.5,3.75,4,5]\n\nSay you want list a to match list b.\norderedList = sorted(list_a, key=lambda x: list_b.index(x))\n\nThis is helpful when needing to order a smaller list to values in larger. 
Assuming that the larger list contains all values in the smaller list, it can be done.\n", "Here is Whatangs answer if you want to get both sorted lists (python3).\nX = [\"a\", \"b\", \"c\", \"d\", \"e\", \"f\", \"g\", \"h\", \"i\"]\nY = [ 0, 1, 1, 0, 1, 2, 2, 0, 1]\n\nZx, Zy = zip(*[(x, y) for x, y in sorted(zip(Y, X))])\n\nprint(list(Zx)) # [0, 0, 0, 1, 1, 1, 1, 2, 2]\nprint(list(Zy)) # ['a', 'd', 'h', 'b', 'c', 'e', 'i', 'f', 'g']\n\nJust remember Zx and Zy are tuples.\nI am also wandering if there is a better way to do that.\nWarning: If you run it with empty lists it crashes.\n", "I have created a more general function, that sorts more than two lists based on another one, inspired by @Whatang's answer.\ndef parallel_sort(*lists):\n \"\"\"\n Sorts the given lists, based on the first one.\n :param lists: lists to be sorted\n\n :return: a tuple containing the sorted lists\n \"\"\"\n\n # Create the initially empty lists to later store the sorted items\n sorted_lists = tuple([] for _ in range(len(lists)))\n\n # Unpack the lists, sort them, zip them and iterate over them\n for t in sorted(zip(*lists)):\n # list items are now sorted based on the first list\n for i, item in enumerate(t): # for each item...\n sorted_lists[i].append(item) # ...store it in the appropriate list\n\n return sorted_lists\n\n", "X = [\"a\", \"b\", \"c\", \"d\", \"e\", \"f\", \"g\", \"h\", \"i\"]\nY = [ 0, 1, 1, 0, 1, 2, 2, 0, 1 ]\n\nYou can do so in one line:\nX, Y = zip(*sorted(zip(Y, X)))\n\n", "This function should work for arrays.\ndef sortBoth(x,y,reverse=False):\n '''\n Sort both x and y, according to x. \n '''\n xy_sorted=array(sorted(zip(x,y),reverse=reverse)).T\n return xy_sorted[0],xy_sorted[1]\n\n", "list1 = ['a','b','c','d','e','f','g','h','i']\nlist2 = [0,1,1,0,1,2,2,0,1]\n\noutput=[]\ncur_loclist = []\n\nTo get unique values present in list2\nlist_set = set(list2)\n\nTo find the loc of the index in list2 \nlist_str = ''.join(str(s) for s in list2)\n\nLocation of index in list2 is tracked using cur_loclist\n[0, 3, 7, 1, 2, 4, 8, 5, 6]\nfor i in list_set:\ncur_loc = list_str.find(str(i))\n\nwhile cur_loc >= 0:\n cur_loclist.append(cur_loc)\n cur_loc = list_str.find(str(i),cur_loc+1)\n\nprint(cur_loclist)\n\nfor i in range(0,len(cur_loclist)):\noutput.append(list1[cur_loclist[i]])\nprint(output)\n\n", "I think most of the solutions above will not work if the 2 lists are of different sizes or contain different items. The solution below is simple and should fix those issues:\nimport pandas as pd\n\nlist1 = ['B', 'A', 'C'] # Required sort order\nlist2 = ['C', 'A'] # Items to be sorted according to list1\n\nresult = pd.merge(pd.DataFrame(list1), pd.DataFrame(list2))\nprint(list(result[0]))\n\noutput:\n['A', 'C']\n\n\nNote: Any item not in list1 will be ignored since the algorithm will not know what's the sort order to use.\n\n", "Most of the solutions above are complicated and I think they will not work if the lists are of different lengths or do not contain the exact same items. The solution below is simple and does not require any imports.\nlist1 = ['B', 'A', 'C'] # Required sort order\nlist2 = ['C', 'B'] # Items to be sorted according to list1\n\nresult = list1\nfor item in list1:\n if item not in list2: result.remove(item)\n\nprint(result)\n\nOutput:\n['B', 'C']\n\n\nNote: Any item not in list1 will be ignored since the algorithm will not know what's the sort order to use.\n\n", "I think that the title of the original question is not accurate. 
If you have 2 lists of identical number of items and where every item in list 1 is related to list 2 in the same order (e.g a = 0 , b = 1, etc.) then the question should be 'How to sort a dictionary?', not 'How to sorting list based on values from another list?'. The solution below is the most efficient in this case:\nX = [\"a\", \"b\", \"c\", \"d\", \"e\", \"f\", \"g\", \"h\", \"i\"]\nY = [ 0, 1, 1, 0, 1, 2, 2, 0, 1 ]\n\ndict1 = dict(zip(X,Y))\nresult = sorted(dict1, key=dict1.get)\nprint(result)\n\nResult:\n['a', 'd', 'h', 'b', 'c', 'e', 'i', 'f', 'g']\n\n" ]
[ 769, 139, 121, 48, 36, 25, 17, 16, 7, 4, 2, 2, 2, 2, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "list", "python", "sorting" ]
stackoverflow_0006618515_list_python_sorting.txt
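A short sketch of the numpy.argsort() route mentioned in the answers above (assuming numpy is installed; kind="stable" keeps equal keys in their original order):

import numpy as np

X = ["a", "b", "c", "d", "e", "f", "g", "h", "i"]
Y = [0, 1, 1, 0, 1, 2, 2, 0, 1]

order = np.argsort(Y, kind="stable")   # indices that would sort Y, ties kept in order
Xs = [X[i] for i in order]
print(Xs)                              # ['a', 'd', 'h', 'b', 'c', 'e', 'i', 'f', 'g']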
Q: Discord.py getting empty messages I was developing a small discord bot for some time, and it was working fine until I started testing to play mp3 in a voice channel. I was following this question because discord.py threw an error that I needed pynacl lib: RuntimeError: PyNaCl library needed in order to use voice Bot stopped working after running this command: pip install -U discord.py[voice] Now I don't get any message content: I did try to pip uninstall discord.py[voice] and reinstall base with pip install discord.py How can I get the message contents now? A: You need to make sure that the message_content intent is configured correctly on both the discord developer portal and in the code: intents = discord.Intents() intents.message_content = True client = discord.Bot(intents=intents)
Discord.py getting empty messages
I was developing a small discord bot for some time, and it was working fine until I started testing to play mp3 in a voice channel. I was following this question because discord.py threw an error that I needed pynacl lib: RuntimeError: PyNaCl library needed in order to use voice Bot stopped working after running this command: pip install -U discord.py[voice] Now I don't get any message content: I did try to pip uninstall discord.py[voice] and reinstall base with pip install discord.py How can I get the message contents now?
[ "You need to make sure that the message_content intent is configured correctly on both the discord developer portal and in the code:\nintents = discord.Intents()\nintents.message_content = True\nclient = discord.Bot(intents=intents)\n\n" ]
[ 0 ]
[]
[]
[ "discord.py", "python" ]
stackoverflow_0074490633_discord.py_python.txt
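A minimal runnable sketch of reading message content once the intent is enabled (assuming discord.py 2.x and that the Message Content intent is also switched on in the developer portal; the token string is a placeholder):

import discord

intents = discord.Intents.default()
intents.message_content = True              # required since discord.py 2.0
client = discord.Client(intents=intents)

@client.event
async def on_message(message):
    if message.author == client.user:
        return                              # ignore the bot's own messages
    print(message.content)                  # no longer empty once the intent is enabled

client.run("YOUR_BOT_TOKEN")                # placeholder token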
Q: Hi. I'm trying to scrape infinite scrolling website. It stuck in 200th data I scrolled with selenium and grabbed all urls and used these urls in beautifulsoup.But there are so many duplicates in scraped data.I tried to left them with drop_duplicates but it stack in about 200th data .I cannot detect the problem. I add the code which i use. I want to grab all prices,areas,rooms et.c. import requests from lxml import html from bs4 import BeautifulSoup as bs import bs4 import pandas as pd from selenium.webdriver.common.by import By from selenium import webdriver from selenium.webdriver.common.keys import Keys from lxml import html import pandas as pd import time driver = webdriver.Chrome(r'C:\Program Files (x86)\chromedriver_win32\chromedriver.exe') driver.get('https://tap.az/elanlar/dasinmaz-emlak/menziller') time.sleep(1) price = [] citi = [] elann = [] bina = [] arrea = [] adres = [] roome = [] baxhise = [] mulkayet = [] descript = [] urll = [] zefer = [] previous_height = driver.execute_script('return document.body.scrollHeight') while True: driver.execute_script('window.scrollTo(0, document.body.scrollHeight);') time.sleep(2) new_height = driver.execute_script('return document.body.scrollHeight') if new_height == previous_height: break previous_height = new_height lnks=driver.find_elements(By.CSS_SELECTOR, '#content > div > div > div.categories-products.js-categories-products > div.js-endless-container.products.endless-products > div.products-i') for itema in lnks: urla=itema.find_element(By.TAG_NAME, 'a') aae = (urla.get_attribute('href')) urel = aae.split('/bookmark')[0] result = requests.get(urel) soup = bs(result.text, 'html.parser') casee = soup.find_all("div",{"class":"lot-body l-center"}) for ae in casee: c = ae.find_all('table', class_ = 'properties') pp = c[0].text city = pp.split('Şəhər')[-1].split('Elanın')[0].replace('ş' ,'sh').replace('ə' ,'e').replace('ü' ,'u').replace('ö' ,'o').replace('ı' ,'i').replace('ğ' ,'g').replace('ç' ,'ch').replace('Ç', 'ch').replace('Ş', 'sh').replace('Ə' ,'e').replace('Ü' ,'u').replace('Ö' ,'o').replace('İ', 'I') cxe = c[0].text elan_tipi = cxe.split('Elanın tipi')[-1].split('Binanın tipi')[0].replace(' verilir','') elane = elan_tipi.replace(' ', '_').replace('ş' ,'sh').replace('ə' ,'e').replace('ü' ,'u').replace('ö' ,'o').replace('ı' ,'i').replace('ğ' ,'g').replace('ç' ,'ch').replace('Ç', 'ch').replace('Ş', 'sh').replace('Ə' ,'e').replace('Ü' ,'u').replace('Ö' ,'o').replace('İ', 'I') cx = c[0].text bina_tipi = cx.split('Binanın tipi')[-1].split('Sahə')[0].replace(' ', '_').replace('ş' ,'sh').replace('ə' ,'e').replace('ü' ,'u').replace('ö' ,'o').replace('ı' ,'i').replace('ğ' ,'g').replace('ç' ,'ch').replace('Ç', 'ch').replace('Ş', 'sh').replace('Ə' ,'e').replace('Ü' ,'u').replace('Ö' ,'o').replace('İ', 'I') cx = c[0].text area = cx.split('tikiliSahə,')[-1].split('Otaq')[0].replace('m²', '') cx = c[0].text room = cx.split('Otaq sayı')[-1].split('Yerləşmə yeri')[0] cx = c[0].text addresss = cx.split('Yerləşmə yeri')[-1].replace('ş' ,'sh').replace('ə' ,'e').replace('ü' ,'u').replace('ö' ,'o').replace('ı' ,'i').replace('ğ' ,'g').replace('ç' ,'ch').replace('Ç', 'ch').replace('Ş', 'sh').replace('Ə' ,'e').replace('Ü' ,'u').replace('Ö' ,'o').replace('İ', 'I') d = ae.find_all('p') elan_kod = (d[0].text.replace('Elanın nömrəsi:', '')) d = ae.find_all('p') baxhis = d[1].text.replace('Baxışların sayı: ', '') d = ae.find_all('p') description = (d[3].text.replace('Baxışların sayı: ', '').replace('ş' ,'sh').replace('ə' ,'e').replace('ü' ,'u').replace('ö' 
,'o').replace('ı' ,'i').replace('ğ' ,'g').replace('ç' ,'ch').replace('Ç', 'ch').replace('Ş', 'sh').replace('Ə' ,'e').replace('Ü' ,'u').replace('Ö' ,'o').replace('İ', 'I').replace("\n", '')) kim = ae.find_all('div', class_ = 'author') kime = kim[0].text if 'bütün' in kime: mulkiyet = int(0) else: mulkiyet = int(1) caseee = soup.find_all("div",{"class":"middle"}) for aecex in caseee: pricxxe = aecex.find_all('span', class_ = 'price-val') pricef = pricxxe[0].text.replace(' ' , '') price.append(pricef) zefer.append(elane) elann.append(elan_kod) citi.append(city) bina.append(bina_tipi) arrea.append(area) adres.append(addresss) roome.append(room) baxhise.append(baxhis) mulkayet.append(mulkiyet) descript.append(description) ae = pd.DataFrame({'URL': urel,'Unique_id': elann,'Price': price,'Room': roome,'Area': arrea,'Seher': citi,'Elan_tipi': zefer,'Description': descript,'Address': adres,'Category': bina,'Mulkiyyet': mulkayet}) aere = ae.drop_duplicates() aere.to_csv('dde.csv', index=False, encoding='utf-8' ) A: A cause of duplicates is that every time you get lnks, you're getting the products you scraped before scrolling as well. You can probably skip duplicate scrapes by initiating scrapedUrls = [] somewhere at the beginning of your code (OUTSIDE of all loops), and then checking urel against it, as well as adding to it if urel in scrapedUrls: continue ## add this line result = requests.get(urel) ## from your code scrapedUrls.append(urel) ## add this line but I'm not sure it'll solve your issue. I don't know why it's happening, but when I try to scrape the links with selenium's find_elements, I get the same url over and over; so I wrote a fuction [getUniqLinks] that you can use to get a unique list of links (prodUrls) by scrolling up to a certain number of times and then parsing page_source to BeautifulSoup. Below are two lines from the printed output of prodUrls = getUniqLinks(fullUrl, rootUrl, max_scrolls=250, tmo=1): WITH SELENIUM found 10957 product links [1 unique] PARSED PAGE_SOURCE ---> found 12583 product links [12576 unique] (The full function and printed output are at https://pastebin.com/b3gwUAJZ.) Some notes: If you increase tmo, you can increase max_scrolls too, but it starts getting quite slow after 100 scrolls. I used selenium to get links as well, just to print and show the difference, but you can remove all lines that end with # remove to get rid of those unnecessary parts. I used selenium's WebDriverWait instead of time.sleep because it stops waiting after the relevant elements have loaded - it raises an error if it doesn't load it the allowed time (tmo), so I found it more convenient and readable to use in a try...except block instead of using driver.implicitly_wait I don't know if this is related to whatever is causing your program to hang [since mine is probably just because of the number of elements being too many], but mine also hangs if I try to use selenium to get all the links after scrolling instead of adding to prodLinks in chunks inside the loop. Now, you can loop through prodUrls and get the data you want, but I think it's better to build a list with a separate dictionary for each link [i.e., having a dictionary for each row rather than having a separate list for each column]. 
If you use these two functions, then you just have to prepare a reference dictionary of selectors like refDict = { 'title': 'h1.js-lot-title', 'price_text': 'div.price-container', 'price_amt': 'div.price-container > .price span.price-val', 'price_cur': 'div.price-container > .price span.price-cur', '.lot-text tr.property': {'k':'td.property-name', 'v':'td.property-value'}, 'contact_name': '.author > div.name', 'contact_phone': '.author > a.phone', 'lot_warning': 'div.lot-warning', 'div.lot-info': {'sel': 'p', 'sep': ':'}, 'description': '.lot-text p' } that can be passed to fillDict_fromTag like in the code below: ## FIRST PASTE FUNTION DEFINITIONS FROM https://pastebin.com/hKXYetmj productDetails = [] puLen = len(prodUrls) for pi, pUrl in enumerate(prodUrls[:500]): print('', end=f'\rScraping [for {pi+1} of {puLen}] {pUrl}') pDets = {'prodId': [w for w in pUrl.split('/') if w][-1]} resp = requests.get(pUrl) if resp.status_code != 200: pDets['Error_Message'] = f'{resp.raise_for_status()}' pDets['sourceUrl'] = pUrl productDetails.append(pDets) continue pSoup = BeautifulSoup(resp.content, 'html.parser') pDets = fillDict_fromTag(pSoup, refDict, pDets, rootUrl) pDets['sourceUrl'] = pUrl productDetails.append(pDets) print() prodDf = pd.DataFrame(productDetails).set_index('prodId') prodDf.to_csv('ProductDetails.csv') I have uploaded both 'prodLinks.csv' and 'ProductDetails.csv' here, although there are only the first 500 scrapes' results since I manually interrupted after around 20 minutes; I'm also pasting the first 3 rows here (printed with print(prodDf.loc[prodDf.index[:3]].to_markdown())) | prodId | title | price_text | price_amt | price_cur | Şəhər | Elanın tipi | Elanın tipi [link] | Binanın tipi | Binanın tipi [link] | Sahə, m² | Otaq sayı | Yerləşmə yeri | contact_name | contact_phone | lot_warning | Elanın nömrəsi | Baxışların sayı | Yeniləndi | description | sourceUrl | |---------:|:---------------------------------------------------------|:-------------|:------------|:------------|:--------|:---------------|:----------------------------------------------------------------|:---------------|:----------------------------------------------------------------|-----------:|------------:|:----------------|:---------------|:----------------|:-----------------------------------------------------------------------------|-----------------:|------------------:|:---------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------| | 35828514 | 2-otaqlı yeni tikili kirayə verilir, 20 Yanvar m., 45 m² | 600 AZN | 600 | AZN | Bakı | Kirayə verilir | https://tap.az/elanlar/dasinmaz-emlak/menziller?p%5B740%5D=3724 | Yeni tikili | https://tap.az/elanlar/dasinmaz-emlak/menziller?p%5B747%5D=3849 | 45 | 2 | 20 Yanvar m. | Elşad Bəy | (055) 568-12-13 | Diqqət! Beh göndərməmişdən öncə sövdələşmənin təhlükəsiz olduğuna əmin olun! | 35828514 | 105 | 22 Noyabr 2022 | 20 Yanvar metrosuna və Inşatcılar metrosuna 8 - 11 dəiqqə arası olan ərazidə, yeni tikili binada 1 otaq 2 otaq təmirli şəraitiynən mənzil kirayə 600 manata, ailiyə və iş adamına verilir. 
Qabaqçadan 2 ay ödəniş olsa kamendant pulu daxil, ayı 600 manat olaçaq, mənzili götūrən şəxs 1 ayın 20 % vasitəciyə ödəniş etməlidir. Xahìş olunur, rial olmuyan şəxs zəng etməsin. | https://tap.az/elanlar/dasinmaz-emlak/menziller/35828514 | | 35833080 | 1-otaqlı yeni tikili kirayə verilir, Quba r., 60 m² | 40 AZN | 40 | AZN | Quba | Kirayə verilir | https://tap.az/elanlar/dasinmaz-emlak/menziller?p%5B740%5D=3724 | Yeni tikili | https://tap.az/elanlar/dasinmaz-emlak/menziller?p%5B747%5D=3849 | 60 | 1 | Quba r. | Orxan | (050) 604-27-60 | Diqqət! Beh göndərməmişdən öncə sövdələşmənin təhlükəsiz olduğuna əmin olun! | 35833080 | 114 | 22 Noyabr 2022 | Quba merkezde her weraiti olan GUNLUK KIRAYE EV.Daimi isti soyuq su hamam metbex wifi.iwciler ve aile ucun elveriwlidir Təmirli | https://tap.az/elanlar/dasinmaz-emlak/menziller/35833080 | | 35898353 | 4-otaqlı mənzil, Nizami r., 100 m² | 153 000 AZN | 153 000 | AZN | Bakı | Satılır | https://tap.az/elanlar/dasinmaz-emlak/menziller?p%5B740%5D=3722 | Köhnə tikili | https://tap.az/elanlar/dasinmaz-emlak/menziller?p%5B747%5D=3850 | 100 | 4 | Nizami r. | Araz M | (070) 723-54-50 | Diqqət! Beh göndərməmişdən öncə sövdələşmənin təhlükəsiz olduğuna əmin olun! | 35898353 | 71 | 27 Noyabr 2022 | X.Dostluğu metrosuna 2 deq mesafede Leninqrad lahiyeli 9 mərtəbəli binanın 5-ci mərtəbəsində 4 otaqlı yaxsi temirli mənzil satılır.Əmlak ofisinə ödəniş alıcı tərəfindən məbləğin 1%-ni təşkil edir. | https://tap.az/elanlar/dasinmaz-emlak/menziller/35898353 |
Hi. I'm trying to scrape infinite scrolling website. It stuck in 200th data
I scrolled with selenium and grabbed all urls and used these urls in beautifulsoup.But there are so many duplicates in scraped data.I tried to left them with drop_duplicates but it stack in about 200th data .I cannot detect the problem. I add the code which i use. I want to grab all prices,areas,rooms et.c. import requests from lxml import html from bs4 import BeautifulSoup as bs import bs4 import pandas as pd from selenium.webdriver.common.by import By from selenium import webdriver from selenium.webdriver.common.keys import Keys from lxml import html import pandas as pd import time driver = webdriver.Chrome(r'C:\Program Files (x86)\chromedriver_win32\chromedriver.exe') driver.get('https://tap.az/elanlar/dasinmaz-emlak/menziller') time.sleep(1) price = [] citi = [] elann = [] bina = [] arrea = [] adres = [] roome = [] baxhise = [] mulkayet = [] descript = [] urll = [] zefer = [] previous_height = driver.execute_script('return document.body.scrollHeight') while True: driver.execute_script('window.scrollTo(0, document.body.scrollHeight);') time.sleep(2) new_height = driver.execute_script('return document.body.scrollHeight') if new_height == previous_height: break previous_height = new_height lnks=driver.find_elements(By.CSS_SELECTOR, '#content > div > div > div.categories-products.js-categories-products > div.js-endless-container.products.endless-products > div.products-i') for itema in lnks: urla=itema.find_element(By.TAG_NAME, 'a') aae = (urla.get_attribute('href')) urel = aae.split('/bookmark')[0] result = requests.get(urel) soup = bs(result.text, 'html.parser') casee = soup.find_all("div",{"class":"lot-body l-center"}) for ae in casee: c = ae.find_all('table', class_ = 'properties') pp = c[0].text city = pp.split('Şəhər')[-1].split('Elanın')[0].replace('ş' ,'sh').replace('ə' ,'e').replace('ü' ,'u').replace('ö' ,'o').replace('ı' ,'i').replace('ğ' ,'g').replace('ç' ,'ch').replace('Ç', 'ch').replace('Ş', 'sh').replace('Ə' ,'e').replace('Ü' ,'u').replace('Ö' ,'o').replace('İ', 'I') cxe = c[0].text elan_tipi = cxe.split('Elanın tipi')[-1].split('Binanın tipi')[0].replace(' verilir','') elane = elan_tipi.replace(' ', '_').replace('ş' ,'sh').replace('ə' ,'e').replace('ü' ,'u').replace('ö' ,'o').replace('ı' ,'i').replace('ğ' ,'g').replace('ç' ,'ch').replace('Ç', 'ch').replace('Ş', 'sh').replace('Ə' ,'e').replace('Ü' ,'u').replace('Ö' ,'o').replace('İ', 'I') cx = c[0].text bina_tipi = cx.split('Binanın tipi')[-1].split('Sahə')[0].replace(' ', '_').replace('ş' ,'sh').replace('ə' ,'e').replace('ü' ,'u').replace('ö' ,'o').replace('ı' ,'i').replace('ğ' ,'g').replace('ç' ,'ch').replace('Ç', 'ch').replace('Ş', 'sh').replace('Ə' ,'e').replace('Ü' ,'u').replace('Ö' ,'o').replace('İ', 'I') cx = c[0].text area = cx.split('tikiliSahə,')[-1].split('Otaq')[0].replace('m²', '') cx = c[0].text room = cx.split('Otaq sayı')[-1].split('Yerləşmə yeri')[0] cx = c[0].text addresss = cx.split('Yerləşmə yeri')[-1].replace('ş' ,'sh').replace('ə' ,'e').replace('ü' ,'u').replace('ö' ,'o').replace('ı' ,'i').replace('ğ' ,'g').replace('ç' ,'ch').replace('Ç', 'ch').replace('Ş', 'sh').replace('Ə' ,'e').replace('Ü' ,'u').replace('Ö' ,'o').replace('İ', 'I') d = ae.find_all('p') elan_kod = (d[0].text.replace('Elanın nömrəsi:', '')) d = ae.find_all('p') baxhis = d[1].text.replace('Baxışların sayı: ', '') d = ae.find_all('p') description = (d[3].text.replace('Baxışların sayı: ', '').replace('ş' ,'sh').replace('ə' ,'e').replace('ü' ,'u').replace('ö' ,'o').replace('ı' ,'i').replace('ğ' ,'g').replace('ç' ,'ch').replace('Ç', 
'ch').replace('Ş', 'sh').replace('Ə' ,'e').replace('Ü' ,'u').replace('Ö' ,'o').replace('İ', 'I').replace("\n", '')) kim = ae.find_all('div', class_ = 'author') kime = kim[0].text if 'bütün' in kime: mulkiyet = int(0) else: mulkiyet = int(1) caseee = soup.find_all("div",{"class":"middle"}) for aecex in caseee: pricxxe = aecex.find_all('span', class_ = 'price-val') pricef = pricxxe[0].text.replace(' ' , '') price.append(pricef) zefer.append(elane) elann.append(elan_kod) citi.append(city) bina.append(bina_tipi) arrea.append(area) adres.append(addresss) roome.append(room) baxhise.append(baxhis) mulkayet.append(mulkiyet) descript.append(description) ae = pd.DataFrame({'URL': urel,'Unique_id': elann,'Price': price,'Room': roome,'Area': arrea,'Seher': citi,'Elan_tipi': zefer,'Description': descript,'Address': adres,'Category': bina,'Mulkiyyet': mulkayet}) aere = ae.drop_duplicates() aere.to_csv('dde.csv', index=False, encoding='utf-8' )
[ "A cause of duplicates is that every time you get lnks, you're getting the products you scraped before scrolling as well. You can probably skip duplicate scrapes by initiating scrapedUrls = [] somewhere at the beginning of your code (OUTSIDE of all loops), and then checking urel against it, as well as adding to it\n if urel in scrapedUrls: continue ## add this line\n result = requests.get(urel) ## from your code\n scrapedUrls.append(urel) ## add this line\n\nbut I'm not sure it'll solve your issue.\n\nI don't know why it's happening, but when I try to scrape the links with selenium's find_elements, I get the same url over and over; so I wrote a fuction [getUniqLinks] that you can use to get a unique list of links (prodUrls) by scrolling up to a certain number of times and then parsing page_source to BeautifulSoup. Below are two lines from the printed output of prodUrls = getUniqLinks(fullUrl, rootUrl, max_scrolls=250, tmo=1):\nWITH SELENIUM found 10957 product links [1 unique] \n \nPARSED PAGE_SOURCE ---> found 12583 product links [12576 unique]\n\n(The full function and printed output are at https://pastebin.com/b3gwUAJZ.)\nSome notes:\n\nIf you increase tmo, you can increase max_scrolls too, but it starts getting quite slow after 100 scrolls.\nI used selenium to get links as well, just to print and show the difference, but you can remove all lines that end with # remove to get rid of those unnecessary parts.\nI used selenium's WebDriverWait instead of time.sleep because it stops waiting after the relevant elements have loaded - it raises an error if it doesn't load it the allowed time (tmo), so I found it more convenient and readable to use in a try...except block instead of using driver.implicitly_wait\nI don't know if this is related to whatever is causing your program to hang [since mine is probably just because of the number of elements being too many], but mine also hangs if I try to use selenium to get all the links after scrolling instead of adding to prodLinks in chunks inside the loop.\n\n\nNow, you can loop through prodUrls and get the data you want, but I think it's better to build a list with a separate dictionary for each link [i.e., having a dictionary for each row rather than having a separate list for each column].\nIf you use these two functions, then you just have to prepare a reference dictionary of selectors like\nrefDict = {\n 'title': 'h1.js-lot-title',\n 'price_text': 'div.price-container',\n 'price_amt': 'div.price-container > .price span.price-val',\n 'price_cur': 'div.price-container > .price span.price-cur',\n '.lot-text tr.property': {'k':'td.property-name', 'v':'td.property-value'},\n 'contact_name': '.author > div.name',\n 'contact_phone': '.author > a.phone',\n 'lot_warning': 'div.lot-warning',\n 'div.lot-info': {'sel': 'p', 'sep': ':'},\n 'description': '.lot-text p'\n}\n\nthat can be passed to fillDict_fromTag like in the code below:\n## FIRST PASTE FUNTION DEFINITIONS FROM https://pastebin.com/hKXYetmj\n\nproductDetails = []\npuLen = len(prodUrls)\nfor pi, pUrl in enumerate(prodUrls[:500]):\n print('', end=f'\\rScraping [for {pi+1} of {puLen}] {pUrl}')\n pDets = {'prodId': [w for w in pUrl.split('/') if w][-1]}\n\n resp = requests.get(pUrl)\n if resp.status_code != 200:\n pDets['Error_Message'] = f'{resp.raise_for_status()}'\n pDets['sourceUrl'] = pUrl\n productDetails.append(pDets) \n continue\n \n pSoup = BeautifulSoup(resp.content, 'html.parser')\n pDets = fillDict_fromTag(pSoup, refDict, pDets, rootUrl)\n\n pDets['sourceUrl'] = pUrl\n 
productDetails.append(pDets)\nprint()\nprodDf = pd.DataFrame(productDetails).set_index('prodId')\nprodDf.to_csv('ProductDetails.csv')\n\n\nI have uploaded both 'prodLinks.csv' and 'ProductDetails.csv' here, although there are only the first 500 scrapes' results since I manually interrupted after around 20 minutes; I'm also pasting the first 3 rows here (printed with print(prodDf.loc[prodDf.index[:3]].to_markdown()))\n| prodId | title | price_text | price_amt | price_cur | Şəhər | Elanın tipi | Elanın tipi [link] | Binanın tipi | Binanın tipi [link] | Sahə, m² | Otaq sayı | Yerləşmə yeri | contact_name | contact_phone | lot_warning | Elanın nömrəsi | Baxışların sayı | Yeniləndi | description | sourceUrl |\n|---------:|:---------------------------------------------------------|:-------------|:------------|:------------|:--------|:---------------|:----------------------------------------------------------------|:---------------|:----------------------------------------------------------------|-----------:|------------:|:----------------|:---------------|:----------------|:-----------------------------------------------------------------------------|-----------------:|------------------:|:---------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------|\n| 35828514 | 2-otaqlı yeni tikili kirayə verilir, 20 Yanvar m., 45 m² | 600 AZN | 600 | AZN | Bakı | Kirayə verilir | https://tap.az/elanlar/dasinmaz-emlak/menziller?p%5B740%5D=3724 | Yeni tikili | https://tap.az/elanlar/dasinmaz-emlak/menziller?p%5B747%5D=3849 | 45 | 2 | 20 Yanvar m. | Elşad Bəy | (055) 568-12-13 | Diqqət! Beh göndərməmişdən öncə sövdələşmənin təhlükəsiz olduğuna əmin olun! | 35828514 | 105 | 22 Noyabr 2022 | 20 Yanvar metrosuna və Inşatcılar metrosuna 8 - 11 dəiqqə arası olan ərazidə, yeni tikili binada 1 otaq 2 otaq təmirli şəraitiynən mənzil kirayə 600 manata, ailiyə və iş adamına verilir. Qabaqçadan 2 ay ödəniş olsa kamendant pulu daxil, ayı 600 manat olaçaq, mənzili götūrən şəxs 1 ayın 20 % vasitəciyə ödəniş etməlidir. Xahìş olunur, rial olmuyan şəxs zəng etməsin. | https://tap.az/elanlar/dasinmaz-emlak/menziller/35828514 |\n| 35833080 | 1-otaqlı yeni tikili kirayə verilir, Quba r., 60 m² | 40 AZN | 40 | AZN | Quba | Kirayə verilir | https://tap.az/elanlar/dasinmaz-emlak/menziller?p%5B740%5D=3724 | Yeni tikili | https://tap.az/elanlar/dasinmaz-emlak/menziller?p%5B747%5D=3849 | 60 | 1 | Quba r. | Orxan | (050) 604-27-60 | Diqqət! Beh göndərməmişdən öncə sövdələşmənin təhlükəsiz olduğuna əmin olun! | 35833080 | 114 | 22 Noyabr 2022 | Quba merkezde her weraiti olan GUNLUK KIRAYE EV.Daimi isti soyuq su hamam metbex wifi.iwciler ve aile ucun elveriwlidir Təmirli | https://tap.az/elanlar/dasinmaz-emlak/menziller/35833080 |\n| 35898353 | 4-otaqlı mənzil, Nizami r., 100 m² | 153 000 AZN | 153 000 | AZN | Bakı | Satılır | https://tap.az/elanlar/dasinmaz-emlak/menziller?p%5B740%5D=3722 | Köhnə tikili | https://tap.az/elanlar/dasinmaz-emlak/menziller?p%5B747%5D=3850 | 100 | 4 | Nizami r. | Araz M | (070) 723-54-50 | Diqqət! Beh göndərməmişdən öncə sövdələşmənin təhlükəsiz olduğuna əmin olun! 
| 35898353 | 71 | 27 Noyabr 2022 | X.Dostluğu metrosuna 2 deq mesafede Leninqrad lahiyeli 9 mərtəbəli binanın 5-ci mərtəbəsində 4 otaqlı yaxsi temirli mənzil satılır.Əmlak ofisinə ödəniş alıcı tərəfindən məbləğin 1%-ni təşkil edir. | https://tap.az/elanlar/dasinmaz-emlak/menziller/35898353 |\n\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "python", "selenium", "selenium_webdriver", "web_scraping" ]
stackoverflow_0074598056_beautifulsoup_python_selenium_selenium_webdriver_web_scraping.txt
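A small sketch of the de-duplication idea from the answer, plugged into the question's inner loop (the names lnks, By and requests come from the question's code; the set must be created once, outside all loops):

scraped_urls = set()                        # created once, before the scroll loop

for itema in lnks:
    urla = itema.find_element(By.TAG_NAME, 'a')
    urel = urla.get_attribute('href').split('/bookmark')[0]
    if urel in scraped_urls:                # already scraped on an earlier pass
        continue
    scraped_urls.add(urel)
    result = requests.get(urel)
    # ... parse `result` with BeautifulSoup exactly as before ...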
Q: How do i turn this iterative function to recursive function? def itr(n): s = 0 for i in range(0, n+1): s = s + i * i return s I have difficulties turning this iterative function to a recursive function called rec(n). A: Honestly, this program doesn't need to be converted recursively, but if so, you would probably write something like this: def rec(n, s = 0): # s = 0 is a default variable, so if we don't specify what s is when we call the function, it's default variable will be 0 if n == 0: # base case, so if we've run through the entire program n times, it will exit and return the final value return s s = s + n * n # your task, where n replaces i because we change n the same amount of times as i runs n -= 1 # we change n so that every time we run the function it eventually reaches 0 return rec(n, s) # recursively run the function again Since the base case checks if n is 0, if you inputted a negative number, the output would end up as 0. To get around this, I would suggest using something else to check if it's negative, and perhaps running the same function but with the absolute value, and just add the negative sign after. (In theory this should work, have not fully tested it out myself) If you're wondering, I used n to replace i, where n decreases in value throughout the execution. So if n began as 10, n would be 9,8,7,6...0 through each recursive iteration. A: A recursive approach to this problem (sum of squares) is not well suited to recursion for two reasons: 1) It's less efficient and 2) It's limited to the depth of recursion which is typically 1000 However, coding is trivial: def rec(n): return 0 if n <= 0 else n * n + rec(n-1) print(rec(10)) Output: 385
How do I turn this iterative function into a recursive function?
def itr(n): s = 0 for i in range(0, n+1): s = s + i * i return s I am having difficulty turning this iterative function into a recursive function called rec(n).
[ "Honestly, this program doesn't need to be converted recursively, but if so, you would probably write something like this:\ndef rec(n, s = 0): # s = 0 is a default variable, so if we don't specify what s is when we call the function, it's default variable will be 0\n if n == 0: # base case, so if we've run through the entire program n times, it will exit and return the final value\n return s\n \n s = s + n * n # your task, where n replaces i because we change n the same amount of times as i runs\n n -= 1 # we change n so that every time we run the function it eventually reaches 0\n\n return rec(n, s) # recursively run the function again\n\nSince the base case checks if n is 0, if you inputted a negative number, the output would end up as 0.\nTo get around this, I would suggest using something else to check if it's negative, and perhaps running the same function but with the absolute value, and just add the negative sign after. (In theory this should work, have not fully tested it out myself)\nIf you're wondering, I used n to replace i, where n decreases in value throughout the execution. So if n began as 10, n would be 9,8,7,6...0 through each recursive iteration.\n", "A recursive approach to this problem (sum of squares) is not well suited to recursion for two reasons: 1) It's less efficient and 2) It's limited to the depth of recursion which is typically 1000\nHowever, coding is trivial:\ndef rec(n):\n return 0 if n <= 0 else n * n + rec(n-1)\n\nprint(rec(10))\n\nOutput:\n385\n\n" ]
[ 1, 0 ]
[]
[]
[ "function", "iteration", "loops", "python", "recursion" ]
stackoverflow_0074630841_function_iteration_loops_python_recursion.txt
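A quick illustrative check that the recursive version from the answers matches the original loop (not part of the original answers):

def itr(n):
    s = 0
    for i in range(0, n + 1):
        s = s + i * i
    return s

def rec(n):
    return 0 if n <= 0 else n * n + rec(n - 1)

for n in (0, 1, 5, 10):
    assert itr(n) == rec(n)    # both compute 0^2 + 1^2 + ... + n^2
print(rec(10))                 # 385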
Q: Python/rpy2 does not recognize %>% pipe in r code I have a Python script that will pass dataframes into an R package and get the results. The R script works as expected in R studio. However I cannot get it to work when executing through python/rpy2. rpy2.rinterface_lib.embedded.RRuntimeError: Error in ataframe d%>% dplyr::rename(domain = Domain, variable = Variable, : could not find function "%>%" Is there a way to get this to work when executing through python? Rewriting the code to not use %>% is working but will require a lot of rewriting that I would prefer to avoid if possible. I've tried making sure the dplyr library is in every script. I've confirmed it's installed prior to running the python script. I have not found any examples of this issue while using rpy2/python. A: %>% is from the magrittr package. If you have R version 4.1 or later you can use the native |> pipe instead.
Python/rpy2 does not recognize %>% pipe in r code
I have a Python script that will pass dataframes into an R package and get the results. The R script works as expected in R studio. However I cannot get it to work when executing through python/rpy2. rpy2.rinterface_lib.embedded.RRuntimeError: Error in ataframe d%>% dplyr::rename(domain = Domain, variable = Variable, : could not find function "%>%" Is there a way to get this to work when executing through python? Rewriting the code to not use %>% is working but will require a lot of rewriting that I would prefer to avoid if possible. I've tried making sure the dplyr library is in every script. I've confirmed it's installed prior to running the python script. I have not found any examples of this issue while using rpy2/python.
[ "%>% is from the magrittr package. If you have R version 4.1 or later you can use the native |> pipe instead.\n" ]
[ 2 ]
[]
[]
[ "dplyr", "python", "r", "rpy2" ]
stackoverflow_0074630873_dplyr_python_r_rpy2.txt
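One hedged sketch of making %>% visible to R code run from rpy2 is to attach a package that exports it before executing the script (this assumes magrittr and dplyr are installed in the R library that rpy2 uses; on R >= 4.1 the native |> pipe avoids the problem entirely):

from rpy2 import robjects

robjects.r('library(magrittr)')    # or library(dplyr); both make %>% available
result = robjects.r('''
  d <- data.frame(Domain = "a", Variable = 1)
  d %>% dplyr::rename(domain = Domain, variable = Variable)
''')
print(result)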
Q: TypeError: > not supported between instances of 'int' and 'list' scores = input("Input a list of student scores\n ").split() for n in range(0, len(scores)): scores[n] = int(scores[n]) print(scores) # for loop way highest=0 for s in scores: if s > highest: highest=scores print(f"the highest score is {highest}") Please help me solve this. I searched and people are saying to add [0] after s, for example: for s in scores: if s[0] > highest: but it did not work and I got the same error. A: In this line highest=scores You are assigning a list (scores) to an int var (highest), and this is the reason for the error. I think you have to change the line in highest=s
TypeError: > not supported between instances of 'int' and 'list'
scores = input("Input a list of student scores\n ").split() for n in range(0, len(scores)): scores[n] = int(scores[n]) print(scores) # for loop way highest=0 for s in scores: if s > highest: highest=scores print(f"the highest score is {highest}") please help me how to solve it? I searched it they are saying to add [0] after s example : for s in scores: if s[0] > highest: but it did not work and I had the same error please help me tttttttttttttttttttttttttttttttttttttttttttt
[ "In this line\nhighest=scores\n\nYou are assigning a list (scores) to an int var (highest), and this is the reason for the error.\nI think you have to change the line in\nhighest=s\n\n" ]
[ 0 ]
[]
[]
[ "loops", "python" ]
stackoverflow_0074608661_loops_python.txt
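Putting the fix together, the corrected loop looks like this; Python's built-in max() gives the same result:

scores = [int(s) for s in input("Input a list of student scores\n ").split()]

highest = 0
for s in scores:
    if s > highest:
        highest = s            # assign the single score, not the whole list
print(f"the highest score is {highest}")

print(f"the highest score is {max(scores)}")   # equivalent one-liner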
Q: Pandas zfill multiple items in single cell I have multiple values in a single cell Q3 1 4 1 3 3 4 11 3 4 6 15 16 How can I zfill or pad to add leading zeros to each value in each cell? df['Q3'].str.split(' ').apply(lambda x: x.zfill(8)) AttributeError: 'list' object has no attribute 'zfill' looking for Q3 00000001 00000004 00000001 00000003 00000003 00000004 00000011 00000003 00000004 00000006 00000015 00000016 A: Simple. Split the values then apply zfill on each value and join back df['Q3'].map(lambda x: ' '.join(y.zfill(8) for y in x.split())) 0 00000001 00000004 1 00000001 00000003 2 00000003 00000004 00000011 3 00000003 00000004 00000006 00000015 00000016 Name: Q3, dtype: object
Pandas zfill multiple items in single cell
I have multiple values in a single cell Q3 1 4 1 3 3 4 11 3 4 6 15 16 How can I zfill or pad to add leading zeros to each value in each cell? df['Q3'].str.split(' ').apply(lambda x: x.zfill(8)) AttributeError: 'list' object has no attribute 'zfill' looking for Q3 00000001 00000004 00000001 00000003 00000003 00000004 00000011 00000003 00000004 00000006 00000015 00000016
[ "Simple. Split the values then apply zfill on each value and join back\ndf['Q3'].map(lambda x: ' '.join(y.zfill(8) for y in x.split()))\n\n\n0 00000001 00000004\n1 00000001 00000003\n2 00000003 00000004 00000011\n3 00000003 00000004 00000006 00000015 00000016\nName: Q3, dtype: object\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074631092_dataframe_pandas_python.txt
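A runnable end-to-end version of the accepted approach, built from the sample data in the question:

import pandas as pd

df = pd.DataFrame({'Q3': ['1 4', '1 3', '3 4 11', '3 4 6 15 16']})
df['Q3'] = df['Q3'].map(lambda x: ' '.join(y.zfill(8) for y in x.split()))
print(df['Q3'].tolist())
# ['00000001 00000004', '00000001 00000003', '00000003 00000004 00000011',
#  '00000003 00000004 00000006 00000015 00000016']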
Q: TKInter frame losing grid layout when insert scrollbar I'm trying to create an app with a frame with two frames inside, but I want one of then to be wider than the other... I found a way to do it using grid, but when I add a scrollbar on one of the frames it readjusts the grid and both frames get the same size. Here is the code working without the scrollbar: def __init__(self, master=None): ctk.set_appearance_mode('system') ctk.set_default_color_theme('blue') self.__root = ctk.CTk() if master is None else ctk.CTkToplevel(master) self.__root. Configure(height=1500, width=944) self.__root.minsize(1500, 944) self.__main_container = ctk.CTkFrame(self.__root) self.__image_frame = ctk.CTkFrame(self.__main_container) self.__image_frame.configure(height=800, width=1000) self.__image_canvas = ctk.CTkCanvas(self.__image_frame) self.__image_canvas.configure(confine="true", cursor="crosshair") self.__image_canvas.grid(column=0, row=0, sticky="nsew") self.__image_canvas.bind("<ButtonPress-1>", self.__start_drawing) self.__image_canvas.bind("<ButtonRelease-1>", self.__end_drawing) self.__image_canvas.bind("<B1-Motion>", self.__draw_rectangle) self.__image_frame.grid(column=0, padx=10, pady=20, row=0, sticky="nsew") self.__image_frame.grid_propagate(0) self.__image_frame.grid_anchor("center") self.__image_frame.rowconfigure(0, weight=1) self.__image_frame.columnconfigure(0, weight=1) self.__boxes_frame = ctk.CTkFrame(self.__main_container) self.__boxes_frame.configure(height=800, width=400) self.__boxes_frame.grid(column=1, padx=10, pady=20, row=0, sticky="ns") self.__main_container.grid(column=0, padx=20, pady=40, row=0, sticky="ns") self.__main_container.grid_anchor("center") self.__main_container.rowconfigure(0, weight=1) self.__root.grid_anchor("center") self.__root.rowconfigure(0, weight=1) self.mainwindow = self.__root And this is the code when I add the scrollbar and messes the grid def __init__(self, master=None): ctk.set_appearance_mode('system') ctk.set_default_color_theme('blue') self.__root = ctk.CTk() if master is None else ctk.CTkToplevel(master) self.__root. 
Configure(height=1500, width=944) self.__root.minsize(1500, 944) self.__main_container = ctk.CTkFrame(self.__root) self.__image_frame = ctk.CTkFrame(self.__main_container) self.__image_frame.configure(height=800, width=1000) self.__image_canvas = ctk.CTkCanvas(self.__image_frame) self.__image_canvas.configure(confine="true", cursor="crosshair") self.__image_canvas.grid(column=0, row=0, sticky="nsew") self.__image_canvas.bind("<ButtonPress-1>", self.__start_drawing) self.__image_canvas.bind("<ButtonRelease-1>", self.__end_drawing) self.__image_canvas.bind("<B1-Motion>", self.__draw_rectangle) #------ adding the scrollbar ----- self.__image_frame_vertical_scrollbar = ctk.CTkScrollbar(self.__image_frame, orientation="vertical") self.__image_frame_vertical_scrollbar.grid(column=1, row=0, sticky="ns") self.__image_frame_vertical_scrollbar.configure(command=self.__image_canvas.yview) self.__image_canvas.configure(yscrollcommand=self.__image_frame_vertical_scrollbar.set) #--------------------------------- self.__image_frame.grid(column=0, padx=10, pady=20, row=0, sticky="nsew") self.__image_frame.grid_propagate(0) self.__image_frame.grid_anchor("center") self.__image_frame.rowconfigure(0, weight=1) self.__image_frame.columnconfigure(0, weight=1) self.__boxes_frame = ctk.CTkFrame(self.__main_container) self.__boxes_frame.configure(height=800, width=400) self.__boxes_frame.grid(column=1, padx=10, pady=20, row=0, sticky="ns") self.__main_container.grid(column=0, padx=20, pady=40, row=0, sticky="ns") self.__main_container.grid_anchor("center") self.__main_container.rowconfigure(0, weight=1) self.__root.grid_anchor("center") self.__root.rowconfigure(0, weight=1) self.mainwindow = self.__root What is wrong with the grid definition that is messing things up? How can I set the __image_frame grid to it fills the __main_container keeping the desired dimensions? A: Set the size of the canvas explicitly self.__image_canvas = ctk.CTkCanvas(self.__image_frame, width=900)
TKInter frame losing grid layout when insert scrollbar
I'm trying to create an app with a frame with two frames inside, but I want one of then to be wider than the other... I found a way to do it using grid, but when I add a scrollbar on one of the frames it readjusts the grid and both frames get the same size. Here is the code working without the scrollbar: def __init__(self, master=None): ctk.set_appearance_mode('system') ctk.set_default_color_theme('blue') self.__root = ctk.CTk() if master is None else ctk.CTkToplevel(master) self.__root. Configure(height=1500, width=944) self.__root.minsize(1500, 944) self.__main_container = ctk.CTkFrame(self.__root) self.__image_frame = ctk.CTkFrame(self.__main_container) self.__image_frame.configure(height=800, width=1000) self.__image_canvas = ctk.CTkCanvas(self.__image_frame) self.__image_canvas.configure(confine="true", cursor="crosshair") self.__image_canvas.grid(column=0, row=0, sticky="nsew") self.__image_canvas.bind("<ButtonPress-1>", self.__start_drawing) self.__image_canvas.bind("<ButtonRelease-1>", self.__end_drawing) self.__image_canvas.bind("<B1-Motion>", self.__draw_rectangle) self.__image_frame.grid(column=0, padx=10, pady=20, row=0, sticky="nsew") self.__image_frame.grid_propagate(0) self.__image_frame.grid_anchor("center") self.__image_frame.rowconfigure(0, weight=1) self.__image_frame.columnconfigure(0, weight=1) self.__boxes_frame = ctk.CTkFrame(self.__main_container) self.__boxes_frame.configure(height=800, width=400) self.__boxes_frame.grid(column=1, padx=10, pady=20, row=0, sticky="ns") self.__main_container.grid(column=0, padx=20, pady=40, row=0, sticky="ns") self.__main_container.grid_anchor("center") self.__main_container.rowconfigure(0, weight=1) self.__root.grid_anchor("center") self.__root.rowconfigure(0, weight=1) self.mainwindow = self.__root And this is the code when I add the scrollbar and messes the grid def __init__(self, master=None): ctk.set_appearance_mode('system') ctk.set_default_color_theme('blue') self.__root = ctk.CTk() if master is None else ctk.CTkToplevel(master) self.__root. 
Configure(height=1500, width=944) self.__root.minsize(1500, 944) self.__main_container = ctk.CTkFrame(self.__root) self.__image_frame = ctk.CTkFrame(self.__main_container) self.__image_frame.configure(height=800, width=1000) self.__image_canvas = ctk.CTkCanvas(self.__image_frame) self.__image_canvas.configure(confine="true", cursor="crosshair") self.__image_canvas.grid(column=0, row=0, sticky="nsew") self.__image_canvas.bind("<ButtonPress-1>", self.__start_drawing) self.__image_canvas.bind("<ButtonRelease-1>", self.__end_drawing) self.__image_canvas.bind("<B1-Motion>", self.__draw_rectangle) #------ adding the scrollbar ----- self.__image_frame_vertical_scrollbar = ctk.CTkScrollbar(self.__image_frame, orientation="vertical") self.__image_frame_vertical_scrollbar.grid(column=1, row=0, sticky="ns") self.__image_frame_vertical_scrollbar.configure(command=self.__image_canvas.yview) self.__image_canvas.configure(yscrollcommand=self.__image_frame_vertical_scrollbar.set) #--------------------------------- self.__image_frame.grid(column=0, padx=10, pady=20, row=0, sticky="nsew") self.__image_frame.grid_propagate(0) self.__image_frame.grid_anchor("center") self.__image_frame.rowconfigure(0, weight=1) self.__image_frame.columnconfigure(0, weight=1) self.__boxes_frame = ctk.CTkFrame(self.__main_container) self.__boxes_frame.configure(height=800, width=400) self.__boxes_frame.grid(column=1, padx=10, pady=20, row=0, sticky="ns") self.__main_container.grid(column=0, padx=20, pady=40, row=0, sticky="ns") self.__main_container.grid_anchor("center") self.__main_container.rowconfigure(0, weight=1) self.__root.grid_anchor("center") self.__root.rowconfigure(0, weight=1) self.mainwindow = self.__root What is wrong with the grid definition that is messing things up? How can I set the __image_frame grid to it fills the __main_container keeping the desired dimensions?
[ "Set the size of the canvas explicitly\nself.__image_canvas = ctk.CTkCanvas(self.__image_frame, width=900)\n\n" ]
[ 1 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074628240_python_tkinter.txt
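Another way to keep the canvas wide after the scrollbar is added is to give weight only to the canvas column, so the scrollbar column keeps its natural width. A small illustrative sketch with plain tkinter widgets rather than the customtkinter classes from the question:

import tkinter as tk

root = tk.Tk()
frame = tk.Frame(root, width=1000, height=800)
frame.grid(row=0, column=0, sticky="nsew")
frame.grid_propagate(False)

canvas = tk.Canvas(frame, cursor="crosshair")
canvas.grid(row=0, column=0, sticky="nsew")

scrollbar = tk.Scrollbar(frame, orient="vertical", command=canvas.yview)
scrollbar.grid(row=0, column=1, sticky="ns")
canvas.configure(yscrollcommand=scrollbar.set)

frame.rowconfigure(0, weight=1)
frame.columnconfigure(0, weight=1)   # canvas column absorbs all extra width
frame.columnconfigure(1, weight=0)   # scrollbar column stays at its natural size

root.mainloop()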
Q: Decorator that logs information to a file in Python My task is: Write a decorator that logs information about calls of decorated functions, the values of its arguments, keyword arguments, and execution time. The log should be written to a file. **Example of Using ** @log def foo(a, b, c): ... foo(1, 2, c=3) log.txt ... foo; args: a=1, b=2; kwargs: c=3; execution time: 0.12 sec. ... Can you help me please? A: Try this from time import time def log(func): def wrapper(*args, **kwargs): start_time = time() func(*args, **kwargs) end_time = time() - start_time args_names = func.__code__.co_varnames[:func.__code__.co_argcount] args_names ={**dict(zip(args_names, args))} with open('log.txt', 'a+') as file: file.write(f'{func.__name__}, args={args_names}, kwargs={kwargs}, {end_time}\n') return wrapper @log def some_fun(a, b, c): pass some_fun(5, 10, c=15)
Decorator that logs information to a file in Python
My task is: Write a decorator that logs information about calls of decorated functions, the values of its arguments, keyword arguments, and execution time. The log should be written to a file. **Example of Using ** @log def foo(a, b, c): ... foo(1, 2, c=3) log.txt ... foo; args: a=1, b=2; kwargs: c=3; execution time: 0.12 sec. ... Can you help me please?
[ "Try this\nfrom time import time\n\ndef log(func):\n def wrapper(*args, **kwargs):\n\n start_time = time()\n func(*args, **kwargs)\n end_time = time() - start_time\n \n args_names = func.__code__.co_varnames[:func.__code__.co_argcount]\n args_names ={**dict(zip(args_names, args))}\n with open('log.txt', 'a+') as file:\n file.write(f'{func.__name__}, args={args_names}, kwargs={kwargs}, {end_time}\\n')\n \n return wrapper\n\n@log\ndef some_fun(a, b, c):\n pass\n\n\nsome_fun(5, 10, c=15)\n\n" ]
[ 0 ]
[]
[]
[ "decorator", "logging", "python", "python_3.x" ]
stackoverflow_0074631125_decorator_logging_python_python_3.x.txt
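A variant sketch that also preserves the wrapped function's metadata with functools.wraps and returns the real result; the log line follows the format from the question:

import time
from functools import wraps

def log(func):
    @wraps(func)                           # keep the original name and docstring
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)     # run the function and keep its result
        elapsed = time.time() - start
        names = func.__code__.co_varnames[:func.__code__.co_argcount]
        arg_str = ", ".join(f"{n}={v}" for n, v in zip(names, args))
        kw_str = ", ".join(f"{k}={v}" for k, v in kwargs.items())
        with open("log.txt", "a") as f:
            f.write(f"{func.__name__}; args: {arg_str}; kwargs: {kw_str}; "
                    f"execution time: {elapsed:.2f} sec.\n")
        return result
    return wrapper

@log
def foo(a, b, c):
    time.sleep(0.1)

foo(1, 2, c=3)   # appends: foo; args: a=1, b=2; kwargs: c=3; execution time: 0.10 sec.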
Q: Find path between two nodes using Networkx library, Having single source and multiple targets? Hello I'm using networkx library, I have created graph but the i'm having issue in finding multiple targets and target values are bit tricky because target has to be matched with substring within the given target value. Example: Nodes = ['C0111', 'N6186', 'C5572', 'N6501', 'C0850-IASW-NO01', 'C1182-IUPE-NO01'] Edges = [('C0111','N6186'),('N6186','C0850-IASW-NO01'),('C0111','C5572'),('C5572','N6501'),('N6501','C1182-IUPE-NO01')] Problem: Source = 'C0111' Target = ['IASW','IUPE'] Their are some special nodes which are considered as target which are 8 of them including nodes containing 'IUPE' , 'IASW' ,etc I can create graph using networkx. import networkx as nx G = nx.Graph() G.add_nodes_from(Nodes) G.add_edges_from(Edges) nx.shortest_path(G,source='C0111',target=?)''' for multiple targets i can iterate through multi targets but for substring to be in node i'm confused on this point. example: normal way ==> '''nx.shortest_path(G,source='C0111',target='C0850-IASW-NO01')''' 'C0850-IASW-NO01' => thats how node is created but i want to see if target has IASW or IUPE in it. A: One solution is to use the pattern to subset the target nodes before looking for the shortest paths: target_nodes = [n for n in G if "IASW" in str(n) or "IUPE" in str(n)] With a list of target nodes, now it's possible to iterate over them and find the shortest path of interest (as you describe).
Find path between two nodes using Networkx library, Having single source and multiple targets?
Hello I'm using networkx library, I have created graph but the i'm having issue in finding multiple targets and target values are bit tricky because target has to be matched with substring within the given target value. Example: Nodes = ['C0111', 'N6186', 'C5572', 'N6501', 'C0850-IASW-NO01', 'C1182-IUPE-NO01'] Edges = [('C0111','N6186'),('N6186','C0850-IASW-NO01'),('C0111','C5572'),('C5572','N6501'),('N6501','C1182-IUPE-NO01')] Problem: Source = 'C0111' Target = ['IASW','IUPE'] Their are some special nodes which are considered as target which are 8 of them including nodes containing 'IUPE' , 'IASW' ,etc I can create graph using networkx. import networkx as nx G = nx.Graph() G.add_nodes_from(Nodes) G.add_edges_from(Edges) nx.shortest_path(G,source='C0111',target=?)''' for multiple targets i can iterate through multi targets but for substring to be in node i'm confused on this point. example: normal way ==> '''nx.shortest_path(G,source='C0111',target='C0850-IASW-NO01')''' 'C0850-IASW-NO01' => thats how node is created but i want to see if target has IASW or IUPE in it.
[ "One solution is to use the pattern to subset the target nodes before looking for the shortest paths:\ntarget_nodes = [n for n in G if \"IASW\" in str(n) or \"IUPE\" in str(n)]\n\nWith a list of target nodes, now it's possible to iterate over them and find the shortest path of interest (as you describe).\n" ]
[ 0 ]
[]
[]
[ "networkx", "python", "shortest_path" ]
stackoverflow_0074630352_networkx_python_shortest_path.txt
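Putting the answer together with the graph from the question, an end-to-end sketch looks like this:

import networkx as nx

Nodes = ['C0111', 'N6186', 'C5572', 'N6501', 'C0850-IASW-NO01', 'C1182-IUPE-NO01']
Edges = [('C0111', 'N6186'), ('N6186', 'C0850-IASW-NO01'), ('C0111', 'C5572'),
         ('C5572', 'N6501'), ('N6501', 'C1182-IUPE-NO01')]

G = nx.Graph()
G.add_nodes_from(Nodes)
G.add_edges_from(Edges)

patterns = ['IASW', 'IUPE']
target_nodes = [n for n in G if any(p in str(n) for p in patterns)]

for t in target_nodes:
    print(t, nx.shortest_path(G, source='C0111', target=t))
# C0850-IASW-NO01 ['C0111', 'N6186', 'C0850-IASW-NO01']
# C1182-IUPE-NO01 ['C0111', 'C5572', 'N6501', 'C1182-IUPE-NO01']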
Q: How to test if a function gets called when another function executes in django test? I've a method inside a manager, this method calls a function imported from different module now I'm trying to write a test that make sure the function gets called, when the manager method executes. I've tried some methods by it didn't work here is the code example. hint: I'm using pytest as testrunner from unittest import mock # Customer Manager class class ItemsManager(models.Manager): def bulk_update(self, *args, **kwargs): result = super().bulk_update(*args, **kwargs) items_bulk_updated(*args, **kwargs) return result # signals.py file def items_bulk_updated(*args, **kwargs): print("items bulk updated") # test file # Base TestCase Inherits from APITestCase class TestItems(BaseTestCase): @mock.patch("items.signals.items_bulk_updated",autospec=True) def test_bulk_update_items_triggers_signal(self, mock_function): items_qs = Items.objects.all() result = Items.objects.bulk_update(items_qs, ['item_name']) mock_function.assert_called() A: I assume the function that you want to test is items_bulk_updated. Since you are testing ItemsManager.bulk_update() and you want to verify that items_bulk_updated is being called inside that method, the path in your @mock.patch should be the file path where the function is being imported in instead of its origin. This means you need to update @mock.patch("items.signals.items_bulk_updated", autospec=True) to @mock.patch("<path-to-items-manager-file>.items_bulk_updated", autospec=True) where <path-to-items-manager-file> as suggested, is the path to your ItemsManager class.
How to test if a function gets called when another function executes in django test?
I have a method inside a manager; this method calls a function imported from a different module. Now I'm trying to write a test that makes sure the function gets called when the manager method executes. I've tried a few approaches but it didn't work; here is the code example. hint: I'm using pytest as the test runner from unittest import mock # Customer Manager class class ItemsManager(models.Manager): def bulk_update(self, *args, **kwargs): result = super().bulk_update(*args, **kwargs) items_bulk_updated(*args, **kwargs) return result # signals.py file def items_bulk_updated(*args, **kwargs): print("items bulk updated") # test file # Base TestCase Inherits from APITestCase class TestItems(BaseTestCase): @mock.patch("items.signals.items_bulk_updated",autospec=True) def test_bulk_update_items_triggers_signal(self, mock_function): items_qs = Items.objects.all() result = Items.objects.bulk_update(items_qs, ['item_name']) mock_function.assert_called()
[ "I assume the function that you want to test is items_bulk_updated.\nSince you are testing ItemsManager.bulk_update() and you want to verify that items_bulk_updated is being called inside that method, the path in your @mock.patch should be the file path where the function is being imported in instead of its origin. This means you need to update\[email protected](\"items.signals.items_bulk_updated\", autospec=True)\n\nto\[email protected](\"<path-to-items-manager-file>.items_bulk_updated\", autospec=True)\n\nwhere <path-to-items-manager-file> as suggested, is the path to your ItemsManager class.\n" ]
[ 1 ]
[]
[]
[ "django", "pytest_django", "python", "python_unittest" ]
stackoverflow_0074628499_django_pytest_django_python_python_unittest.txt
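A concrete sketch of the corrected test; the module path items.managers is an assumption — patch whatever module actually defines ItemsManager (Items and BaseTestCase come from the question's code):

from unittest import mock

class TestItems(BaseTestCase):

    @mock.patch("items.managers.items_bulk_updated", autospec=True)  # patch where it is used
    def test_bulk_update_items_triggers_signal(self, mock_function):
        items_qs = Items.objects.all()
        Items.objects.bulk_update(items_qs, ["item_name"])
        mock_function.assert_called_once()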
Q: Long paths in Python on Windows I have a problem when programming in Python running under Windows. I need to work with file paths, that are longer than 256 or whatsathelimit characters. Now, I've read basically about two solutions: Use GetShortPathName from kernel32.dll and access the file in this way. That is nice, but I cannot use it, since I need to use the paths in a way shutil.rmtree(short_path) where the short_path is a really short path (something like D:\tools\Eclipse) and the long paths appear in the directory itself (damn Eclipse plugins). Prepend "\\\\?\\" to the path I haven't managed to make this work in any way. The attempt to do anything this way always result in error WindowsError: [Error 123] The filename, directory name, or volume label syntax is incorrect: <path here> So my question is: How do I make the 2nd option work? I stress that I need to use it the same way as in the example in option #1. OR Is there any other way? EDIT: I need the solution to work in Python 2.7 EDIT2: The question Python long filename support broken in Windows does give the answer with the 'magic prefix' and I stated that I know it in this question. The thing I do not know is HOW do I use it. I've tried to prepend that to the path but it just failed, as I've written above. A: Well it seems that, as always, I've found the answer to what's been bugging me for a week twenty minutes after I seriously ask somebody about it. So I've found that I need to make sure two things are done correctly: The path can contain only backslashes, no forward slashes. If I want to do something like list a directory, I need to end the path with a backslash, otherwise Python will append /*.* to it, which is a forward slash, which is bad. Hope at least someone will find this useful. A: Let me just simplify this for anyone looking for a straight answer: Path needs to be unicode, prepend string with u like u'C:\\path\\to\\file' Path needs to start with \\\\?\\ (which is escaped into \\?\) like u'\\\\?\\C:\\path\\to\\file' No forward slashes only backslashes: / --> \\ It has to be an absolute path; it does not work for relative paths A: py 3.8.2 # Fix long path access: import ntpath ntpath.realpath = ntpath.abspath # Fix long path access. In my case, this solved the problem of running a script from a long path. (https://developers.google.com/drive/api/v3/quickstart/python) But this is not a universal fix. It looks like the ntpath.realpath implementation has problems. This code replaced it with a dummy. A: it works for me import os str1=r"C:\Users\manual\demodfadsfljdskfjslkdsjfklaj\inner-2djfklsdfjsdklfj\inner3fadsfksdfjdklsfjksdgjl\inner4dfhasdjfhsdjfskfklsjdkjfleioreirueewdsfksdmv\anotherInnerfolder4aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\5qbbbbbbbbbbbccccccccccccccccccccccccsssssssssssssssss\tmp.txt" print(len(str1)) #346 path = os.path.abspath(str1) if path.startswith(u"\\\\"): path=u"\\\\?\\UNC\\"+path[2:] else: path=u"\\\\?\\"+path with open(path,"r+") as f: print(f.readline()) if you get a long path(more then 258 char) issue in windows then try this .
Long paths in Python on Windows
I have a problem when programming in Python running under Windows. I need to work with file paths, that are longer than 256 or whatsathelimit characters. Now, I've read basically about two solutions: Use GetShortPathName from kernel32.dll and access the file in this way. That is nice, but I cannot use it, since I need to use the paths in a way shutil.rmtree(short_path) where the short_path is a really short path (something like D:\tools\Eclipse) and the long paths appear in the directory itself (damn Eclipse plugins). Prepend "\\\\?\\" to the path I haven't managed to make this work in any way. The attempt to do anything this way always result in error WindowsError: [Error 123] The filename, directory name, or volume label syntax is incorrect: <path here> So my question is: How do I make the 2nd option work? I stress that I need to use it the same way as in the example in option #1. OR Is there any other way? EDIT: I need the solution to work in Python 2.7 EDIT2: The question Python long filename support broken in Windows does give the answer with the 'magic prefix' and I stated that I know it in this question. The thing I do not know is HOW do I use it. I've tried to prepend that to the path but it just failed, as I've written above.
[ "Well it seems that, as always, I've found the answer to what's been bugging me for a week twenty minutes after I seriously ask somebody about it. \nSo I've found that I need to make sure two things are done correctly:\n\nThe path can contain only backslashes, no forward slashes.\nIf I want to do something like list a directory, I need to end the path with a backslash, otherwise Python will append /*.* to it, which is a forward slash, which is bad.\n\nHope at least someone will find this useful.\n", "Let me just simplify this for anyone looking for a straight answer:\n\nPath needs to be unicode, prepend string with u like u'C:\\\\path\\\\to\\\\file'\nPath needs to start with \\\\\\\\?\\\\ (which is escaped into \\\\?\\) like u'\\\\\\\\?\\\\C:\\\\path\\\\to\\\\file'\nNo forward slashes only backslashes: / --> \\\\\nIt has to be an absolute path; it does not work for relative paths\n\n", "py 3.8.2\n# Fix long path access:\nimport ntpath\nntpath.realpath = ntpath.abspath\n# Fix long path access.\n\nIn my case, this solved the problem of running a script from a long path.\n(https://developers.google.com/drive/api/v3/quickstart/python)\nBut this is not a universal fix.\nIt looks like the ntpath.realpath implementation has problems. This code replaced it with a dummy.\n", "it works for me\nimport os\nstr1=r\"C:\\Users\\manual\\demodfadsfljdskfjslkdsjfklaj\\inner-2djfklsdfjsdklfj\\inner3fadsfksdfjdklsfjksdgjl\\inner4dfhasdjfhsdjfskfklsjdkjfleioreirueewdsfksdmv\\anotherInnerfolder4aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\\5qbbbbbbbbbbbccccccccccccccccccccccccsssssssssssssssss\\tmp.txt\"\nprint(len(str1)) #346\n\npath = os.path.abspath(str1)\n\nif path.startswith(u\"\\\\\\\\\"):\n path=u\"\\\\\\\\?\\\\UNC\\\\\"+path[2:]\nelse:\n path=u\"\\\\\\\\?\\\\\"+path\n\nwith open(path,\"r+\") as f:\n print(f.readline())\n\nif you get a long path(more then 258 char) issue in windows then try this .\n" ]
[ 17, 11, 1, 0 ]
[]
[]
[ "python", "windows" ]
stackoverflow_0029557760_python_windows.txt
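A minimal sketch of the second option applied to the question's own shutil.rmtree example (the directory name comes from the question; this assumes Windows with Python 2.7 as stated there, and has not been verified against every shutil version):

import os
import shutil

target = u"D:\\tools\\Eclipse"                # short top-level path from the question
# Make the path absolute (this also converts any / into \) and add the magic prefix,
# so the paths shutil.rmtree builds underneath it are no longer subject to the
# MAX_PATH check.
long_safe = u"\\\\?\\" + os.path.abspath(target)
shutil.rmtree(long_safe)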
Q: Join lines which start with spaces to previous line I have a text file that has some data of the following format that I want to extract and append. I'm new to Python and would like some advice on the approach. Data Format is as follows: Position 1 is a number followed by 5 white spaces followed by non-white space of variable length then no more data. However the next line has non white space starting in position 6 and the remainder of the data that I need from that line. I want to take the second line and append it to the first line and then print it. Example 1 Some variable data <6 Spaces> More Data that I want above ea 5 ... 2 another line of data I want it to look like: 1 Some variable data More Data that I want above ea 5 ... 2 another line of data This is what I started with but then realized not every line has a unit of issue = ea. Some lines wrap. I need to account for that. import re # Open fie for reading fileObject = open("AFilenameHere.txt", "r") fn=fileObject.name #Read a file line by line and print in terminal for line in fileObject: if ' EA ' in line: # break up string part1=line.split() EAISAT=part1.index('EA') DESC=' '.join(part1[1:EAISAT]) # If there's a comma in the descr take it out cause I wanna eventually create a csv transformed_desc = re.sub(",","", DESC) num_of_elements = len(part1) # If there's nothing in the description then don't print those lines if DESC: print (fn, part1[0], transformed_desc, part1[EAISAT:num_of_elements]) A: Just collect lines until you get one which doesn't have the six spaces, and then print what you have accrued so far before starting over. Don't forget to handle the last one when you fall off the end of the loop. fn = "AFilenameHere.txt" lines = [] with open(fn, "r") as fileObject: for line in fileObject: if line.startswith(' '): lines.append(line[5:]) else: if lines: print("".join(lines)) lines = [line] if lines: print("".join(lines)) The if lines: condition checks if there are items in lines from the previous iterations of the for loop, and if so, prints those. It's not clear why your attempt looks for the "EA" token; by your description, this simply joins lines which have multiple spaces back with the previous line.
Join lines which start with spaces to previous line
I have a text file that has some data of the following format that I want to extract and append. I'm new to Python and would like some advice on the approach. Data Format is as follows: Position 1 is a number followed by 5 white spaces followed by non-white space of variable length then no more data. However the next line has non white space starting in position 6 and the remainder of the data that I need from that line. I want to take the second line and append it to the first line and then print it. Example 1 Some variable data <6 Spaces> More Data that I want above ea 5 ... 2 another line of data I want it to look like: 1 Some variable data More Data that I want above ea 5 ... 2 another line of data This is what I started with but then realized not every line has a unit of issue = ea. Some lines wrap. I need to account for that. import re # Open fie for reading fileObject = open("AFilenameHere.txt", "r") fn=fileObject.name #Read a file line by line and print in terminal for line in fileObject: if ' EA ' in line: # break up string part1=line.split() EAISAT=part1.index('EA') DESC=' '.join(part1[1:EAISAT]) # If there's a comma in the descr take it out cause I wanna eventually create a csv transformed_desc = re.sub(",","", DESC) num_of_elements = len(part1) # If there's nothing in the description then don't print those lines if DESC: print (fn, part1[0], transformed_desc, part1[EAISAT:num_of_elements])
[ "Just collect lines until you get one which doesn't have the six spaces, and then print what you have accrued so far before starting over. Don't forget to handle the last one when you fall off the end of the loop.\nfn = \"AFilenameHere.txt\"\nlines = []\nwith open(fn, \"r\") as fileObject:\n for line in fileObject:\n if line.startswith(' '):\n lines.append(line[5:])\n else:\n if lines:\n print(\"\".join(lines))\n lines = [line]\nif lines:\n print(\"\".join(lines))\n\nThe if lines: condition checks if there are items in lines from the previous iterations of the for loop, and if so, prints those.\nIt's not clear why your attempt looks for the \"EA\" token; by your description, this simply joins lines which have multiple spaces back with the previous line.\n" ]
[ 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0074631361_python_regex.txt
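For illustration only, the same join-the-continuation-lines idea run against a small in-memory sample (the sample strings below are made up to mirror the format described in the question):

sample = [
    "1     Some variable data\n",
    "      More Data that I want above ea 5\n",
    "2     another line of data\n",
]

merged = []
for line in sample:
    if line.startswith(" "):      # continuation line: starts with spaces
        # append it to the previous record (assumes the first line is a real record)
        merged[-1] = merged[-1].rstrip("\n") + " " + line.strip() + "\n"
    else:
        merged.append(line)

print("".join(merged))
# 1     Some variable data More Data that I want above ea 5
# 2     another line of data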
Q: I can't log in to Instagram: "CSRF token missing or incorrect" I was using Selenium Python to log in to Instagram and open some pages. It worked fine, but after two days the Instagram started sending the message "CSRF token missing or incorrect". And now I can't even log in with my script or manually to any accounts and with any browsers such as Chrome or FireFox on my laptop. I'm not sending any cookies with my Selenium. And most of the search results are about Django which I'm not using. I erased the cookies, but it it didn't work. I tried to change my IP address to make sure if I'm banned from Instagram, but it didn't work. I tried to check for the scrf-token in my URL with Selenium and sending it to the driver, but it didn't work. I'm not sure if the solution is within the code, because now I can't log in even manually, so maybe there must be a problem with my system settings or from Instagram side. Can I fix this with Selenium? Or how can I fix this? A: It seems that the web-Instagram login page has been down for about a week! From last week until now, most users can't login to Instagram on the web! Read chif.j's solution and comments for a temporary fix! A: Open your Chrome browser developer tools, and then go to the login page of Instagram. In the network tab, find the request that goes like this: https://www.instagram.com/ Click on the request and in the response tab, press Ctrl + F and search for csrf_token. Copy the value of csrf, and go to the application tab. In the storage section, click on cookies and insert a cookie with the csrftoken name. Paste the value and make it secure. Now fill the login form and press Enter. A: I had the same problem. I used a VPN to connect to the US, and there wasn't any "CSRF token missing" message after that. A: One liner to run fin JS console in your browser to fix CSRF token issue: n=new Date;t=n.getTime();et=t+36E9;n.setTime(et);document.cookie='csrftoken='+document.body.innerHTML.split('csrf_token')[1].split('\\"')[2]+';path=\;domain=.instagram.com;expires='+n.toUTCString(); Open JS console, copy paste it, press enter. Reload the page and try to login again. This js snippet will find CSRF token in page source and create cookie for you.
I can't log in to Instagram: "CSRF token missing or incorrect"
I was using Selenium Python to log in to Instagram and open some pages. It worked fine, but after two days Instagram started sending the message "CSRF token missing or incorrect". Now I can't log in at all, with my script or manually, to any account, with any browser such as Chrome or Firefox on my laptop. I'm not sending any cookies with my Selenium. Most of the search results are about Django, which I'm not using. I erased the cookies, but it didn't work. I changed my IP address to check whether I'm banned from Instagram, but it didn't work. I tried reading the csrf-token from my URL with Selenium and sending it to the driver, but it didn't work. I'm not sure the solution is within the code, because now I can't log in even manually, so maybe there is a problem with my system settings or on Instagram's side. Can I fix this with Selenium? Or how can I fix this?
[ "It seems that the web-Instagram login page has been down for about a week!\nFrom last week until now, most users can't login to Instagram on the web!\nRead chif.j's solution and comments for a temporary fix!\n", "Open your Chrome browser developer tools, and then go to the login page of Instagram. In the network tab, find the request that goes like this: https://www.instagram.com/\nClick on the request and in the response tab, press Ctrl + F and search for csrf_token. Copy the value of csrf, and go to the application tab. In the storage section, click on cookies and insert a cookie with the csrftoken name. Paste the value and make it secure. Now fill the login form and press Enter.\n", "I had the same problem. I used a VPN to connect to the US, and there wasn't any \"CSRF token missing\" message after that.\n", "One liner to run fin JS console in your browser to fix CSRF token issue:\nn=new Date;t=n.getTime();et=t+36E9;n.setTime(et);document.cookie='csrftoken='+document.body.innerHTML.split('csrf_token')[1].split('\\\\\"')[2]+';path=\\;domain=.instagram.com;expires='+n.toUTCString();\n\nOpen JS console, copy paste it, press enter. Reload the page and try to login again.\nThis js snippet will find CSRF token in page source and create cookie for you.\n" ]
[ 14, 12, 2, 1 ]
[]
[]
[ "csrf_token", "instagram", "python", "python_3.x", "selenium" ]
stackoverflow_0074243874_csrf_token_instagram_python_python_3.x_selenium.txt
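Since the question drives the browser with Selenium, the manual cookie workaround described in the answers can also be scripted; the token value below is a placeholder you would copy from DevTools yourself, and the rest is only a sketch of the idea, not a guaranteed fix:

from selenium import webdriver

CSRF_TOKEN = "PASTE_THE_TOKEN_YOU_COPIED_HERE"   # placeholder, not a real token

driver = webdriver.Chrome()
driver.get("https://www.instagram.com/accounts/login/")

# add_cookie only applies to the domain of the currently loaded page,
# so the login page has to be open before the cookie is added
driver.add_cookie({
    "name": "csrftoken",
    "value": CSRF_TOKEN,
    "domain": ".instagram.com",
    "path": "/",
    "secure": True,
})
driver.refresh()   # reload so the login form is submitted with the new cookie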
Q: Passing SSL certificate and key as string to psycopg2.connect My app is deployed in GCP, I'm trying to make a connection to DB using psycopg2. The SSL certificates and key are not stored as files, so I'll be getting them as strings. When I try to make a connection by passing the filepath for these certificate pem files, it works. psycopg2.connect(host='hostname',port=1234, connect_timeout=100, database='db', user='user', password='pwd', sslrootcert="server-cert.pem", sslcert="client-cert.pem", sslkey="key.pem") But when I pass certificates and key as strings, it doesn't work. It gives an error FATAL: connection requires a valid client certificate\nconnection to server at "hostname", port 1234 failed SERVER_CERT = """-----BEGIN CERTIFICATE-----\nxxxxxx\n-----END CERTIFICATE-----""" CLIENT_CERT = """-----BEGIN CERTIFICATE-----\nxxxxxx\n-----END CERTIFICATE-----""" KEY = """-----BEGIN RSA PRIVATE KEY-----\nxxxxn-----END RSA PRIVATE KEY-----""" psycopg2.connect(host='hostname',port=1234, connect_timeout=100, database='db', user='user', password='pwd', sslrootcert=SERVER_CERT, sslcert=CLIENT_CERT, sslkey=KEY) I also tried using ssl.DER_cert_to_PEM_cert(CERT) and RSA.importKey(KEY), but it still fails. Is there a way to pass string instead of files? Thanks. A: I was facing the same situation a couple of hours ago. What I did to resolve this is creating the files in python with the value of the variable: cert = """cert""" file = open("cert.txt","w") file.write(cert) file.close() And then, just pass the path to the psycopg2 connection.
Passing SSL certificate and key as string to psycopg2.connect
My app is deployed in GCP, I'm trying to make a connection to DB using psycopg2. The SSL certificates and key are not stored as files, so I'll be getting them as strings. When I try to make a connection by passing the filepath for these certificate pem files, it works. psycopg2.connect(host='hostname',port=1234, connect_timeout=100, database='db', user='user', password='pwd', sslrootcert="server-cert.pem", sslcert="client-cert.pem", sslkey="key.pem") But when I pass certificates and key as strings, it doesn't work. It gives an error FATAL: connection requires a valid client certificate\nconnection to server at "hostname", port 1234 failed SERVER_CERT = """-----BEGIN CERTIFICATE-----\nxxxxxx\n-----END CERTIFICATE-----""" CLIENT_CERT = """-----BEGIN CERTIFICATE-----\nxxxxxx\n-----END CERTIFICATE-----""" KEY = """-----BEGIN RSA PRIVATE KEY-----\nxxxxn-----END RSA PRIVATE KEY-----""" psycopg2.connect(host='hostname',port=1234, connect_timeout=100, database='db', user='user', password='pwd', sslrootcert=SERVER_CERT, sslcert=CLIENT_CERT, sslkey=KEY) I also tried using ssl.DER_cert_to_PEM_cert(CERT) and RSA.importKey(KEY), but it still fails. Is there a way to pass string instead of files? Thanks.
[ "I was facing the same situation a couple of hours ago. What I did to resolve this is creating the files in python with the value of the variable:\ncert = \"\"\"cert\"\"\"\nfile = open(\"cert.txt\",\"w\")\nfile.write(cert)\nfile.close()\n\nAnd then, just pass the path to the psycopg2 connection.\n" ]
[ 0 ]
[]
[]
[ "google_cloud_platform", "postgresql", "psycopg2", "python", "ssl" ]
stackoverflow_0074235247_google_cloud_platform_postgresql_psycopg2_python_ssl.txt
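A variation on the same workaround that avoids hard-coded file names, using temporary files; this is only a sketch (SERVER_CERT, CLIENT_CERT and KEY are the placeholder strings from the question), and note that on non-Windows systems libpq refuses a client key file that is readable by group or others:

import os
import tempfile
import psycopg2

def to_temp_pem(pem_string):
    # Write a PEM string to a temporary file and return its path.
    fd, path = tempfile.mkstemp(suffix=".pem")
    with os.fdopen(fd, "w") as f:
        f.write(pem_string)
    os.chmod(path, 0o600)   # keep the key private so libpq accepts it
    return path

conn = psycopg2.connect(
    host="hostname", port=1234, connect_timeout=100,
    database="db", user="user", password="pwd",
    sslrootcert=to_temp_pem(SERVER_CERT),
    sslcert=to_temp_pem(CLIENT_CERT),
    sslkey=to_temp_pem(KEY),
)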
Q: Pathname too long to open? This is a screenshot of the execution: As you see, the error says that the directory "JSONFiles/Apartment/Rent/dubizzleabudhabiproperty" is not there. But look at my files, please: The folder is definitely there. Update 2 The code self.file = open("JSONFiles/"+ item["category"]+"/" + item["action"]+"/"+ item['source']+"/"+fileName + '.json', 'wb') # Create a new JSON file with the name = fileName parameter line = json.dumps(dict(item)) # Change the item to a JSON format in one line self.file.write(line) # Write the item to the file UPDATE When I change the file name to a smaller one, it works, so the problem is because of the length of the path. what is the solution please? A: Regular DOS paths are limited to MAX_PATH (260) characters, including the string's terminating NUL character. You can exceed this limit by using an extended-length path that starts with the \\?\ prefix. This path must be a Unicode string, fully qualified, and only use backslash as the path separator. Per Microsoft's file system functionality comparison, the maximum extended path length is 32760 characters. A individual file or directory name can be up to 255 characters (127 for the UDF filesystem). Extended UNC paths are also supported as \\?\UNC\server\share. For example: import os def winapi_path(dos_path, encoding=None): if (not isinstance(dos_path, unicode) and encoding is not None): dos_path = dos_path.decode(encoding) path = os.path.abspath(dos_path) if path.startswith(u"\\\\"): return u"\\\\?\\UNC\\" + path[2:] return u"\\\\?\\" + path path = winapi_path(os.path.join(u"JSONFiles", item["category"], item["action"], item["source"], fileName + ".json")) >>> path = winapi_path("C:\\Temp\\test.txt") >>> print path \\?\C:\Temp\test.txt See the following pages on MSDN: Naming Files, Paths, and Namespaces Defining an MS-DOS Device Name Kernel object namespaces Background Windows calls the NT runtime library function RtlDosPathNameToRelativeNtPathName_U_WithStatus to convert a DOS path to a native NT path. If we open (i.e. CreateFile) the above path with a breakpoint set on the latter function, we can see how it handles a path that starts with the \\?\ prefix. Breakpoint 0 hit ntdll!RtlDosPathNameToRelativeNtPathName_U_WithStatus: 00007ff9`d1fb5880 4883ec58 sub rsp,58h 0:000> du @rcx 000000b4`52fc0f60 "\\?\C:\Temp\test.txt" 0:000> r rdx rdx=000000b450f9ec18 0:000> pt ntdll!RtlDosPathNameToRelativeNtPathName_U_WithStatus+0x66: 00007ff9`d1fb58e6 c3 ret The result replaces \\?\ with the NT DOS devices prefix \??\, and copies the string into a native UNICODE_STRING: 0:000> dS b450f9ec18 000000b4`536b7de0 "\??\C:\Temp\test.txt" If you use //?/ instead of \\?\, then the path is still limited to MAX_PATH characters. If it's too long, then RtlDosPathNameToRelativeNtPathName returns the status code STATUS_NAME_TOO_LONG (0xC0000106). If you use \\?\ for the prefix but use slash in the rest of the path, Windows will not translate the slash to backslash for you: Breakpoint 0 hit ntdll!RtlDosPathNameToRelativeNtPathName_U_WithStatus: 00007ff9`d1fb5880 4883ec58 sub rsp,58h 0:000> du @rcx 0000005b`c2ffbf30 "\\?\C:/Temp/test.txt" 0:000> r rdx rdx=0000005bc0b3f068 0:000> pt ntdll!RtlDosPathNameToRelativeNtPathName_U_WithStatus+0x66: 00007ff9`d1fb58e6 c3 ret 0:000> dS 5bc0b3f068 0000005b`c3066d30 "\??\C:/Temp/test.txt" Forward slash is a valid object name character in the NT namespace. 
It's reserved by Microsoft filesystems, but you can use a forward slash in other named kernel objects, which get stored in \BaseNamedObjects or \Sessions\[session number]\BaseNamedObjects. Also, I don't think the I/O manager enforces the policy on reserved characters in device and filenames. It's up to the device. Maybe someone out there has a Windows device that implements a namespace that allows forward slash in names. At the very least you can create DOS device names that contain a forward slash. For example: >>> kernel32 = ctypes.WinDLL('kernel32') >>> kernel32.DefineDosDeviceW(0, u'My/Device', u'C:\\Temp') >>> os.path.exists(u'\\\\?\\My/Device\\test.txt') True You may be wondering what \?? signifies. This used to be an actual directory for DOS device links in the object namespace, but starting with NT 5 (or NT 4 w/ Terminal Services) this became a virtual prefix. The object manager handles this prefix by first checking the logon session's DOS device links in the directory \Sessions\0\DosDevices\[LOGON_SESSION_ID] and then checking the system-wide DOS device links in the \Global?? directory. Note that the former is a logon session, not a Windows session. The logon session directories are all under the DosDevices directory of Windows session 0 (i.e. the services session in Vista+). Thus if you have a mapped drive for a non-elevated logon, you'll discover that it's not available in an elevated command prompt, because your elevated token is actually for a different logon session. An example of a DOS device link is \Global??\C: => \Device\HarddiskVolume2. In this case the DOS C: drive is actually a symbolic link to the HarddiskVolume2 device. Here's a brief overview of how the system handles parsing a path to open a file. Given we're calling WinAPI CreateFile, it stores the translated NT UNICODE_STRING in an OBJECT_ATTRIBUTES structure and calls the system function NtCreateFile. 0:000> g Breakpoint 1 hit ntdll!NtCreateFile: 00007ff9`d2023d70 4c8bd1 mov r10,rcx 0:000> !obja @r8 Obja +000000b450f9ec58 at 000000b450f9ec58: Name is \??\C:\Temp\test.txt OBJ_CASE_INSENSITIVE NtCreateFile calls the I/O manager function IoCreateFile, which in turn calls the undocumented object manager API ObOpenObjectByName. This does the work of parsing the path. The object manager starts with \??\C:\Temp\test.txt. Then it replaces that with \Global??\C:Temp\test.txt. Next it parses up to the C: symbolic link and has to start over (reparse) the final path \Device\HarddiskVolume2\Temp\test.txt. Once the object manager gets to the HarddiskVolume2 device object, parsing is handed off to the I/O manager, which implements the Device object type. The ParseProcedure of an I/O Device creates the File object and an I/O Request Packet (IRP) with the major function code IRP_MJ_CREATE (an open/create operation) to be processed by the device stack. This is sent to the device driver via IoCallDriver. If the device implements reparse points (e.g. junction mountpoints, symbolic links, etc) and the path contains a reparse point, then the resolved path has to be resubmitted to the object manager to be parsed from the start. The device driver will use the SeChangeNotifyPrivilege (almost always present and enabled) of the process token (or thread if impersonating) to bypass access checks while traversing directories. However, ultimately access to the device and target file has to be allowed by a security descriptor, which is verified via SeAccessCheck. Except simple filesystems such as FAT32 don't support file security. 
A: below is Python 3 version regarding @Eryk Sun's solution. def winapi_path(dos_path, encoding=None): if (not isinstance(dos_path, str) and encoding is not None): dos_path = dos_path.decode(encoding) path = os.path.abspath(dos_path) if path.startswith(u"\\\\"): return u"\\\\?\\UNC\\" + path[2:] return u"\\\\?\\" + path #Python 3 renamed the unicode type to str, the old str type has been replaced by bytes. NameError: global name 'unicode' is not defined - in Python 3 A: Adding the solution that helped me fix a similar issue: python version = 3.9, windows version = 10 pro. I had an issue with the filename itself as it was too long for python's open built-in method. The error I got is that the path simply doesn't exist, although I use the 'w+' mode for open (which is supposed to open a new file regardless whether it exists or not). I found this guide which solved the problem with a quick change to window's Registry Editor (specifically the Group Policy). Scroll down to the 'Make Windows 10 Accept Long File Paths' headline. Don't forget to update your OS group policy to take effect immediately, a guide can be found here. Hope this helps future searches as this post is quite old. A: There can be multiple reasons for you getting this error. Please make sure of the following: The parent directory of the folder (JSONFiles) is the same as the directory of the Python script. Even though the folder exists it does not mean the individual file does. Verify the same and make sure the exact file name matches the one that your Python code is trying to access. If you still face an issue, share the result of "dir" command on the innermost folder you are trying to access. A: it works for me import os str1=r"C:\Users\sandeepmkwana\Desktop\folder_structure\models\manual\demodfadsfljdskfjslkdsjfklaj\inner-2djfklsdfjsdklfj\inner3fadsfksdfjdklsfjksdgjl\inner4dfhasdjfhsdjfskfklsjdkjfleioreirueewdsfksdmv\anotherInnerfolder4aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\5qbbbbbbbbbbbccccccccccccccccccccccccsssssssssssssssss\tmp.txt" print(len(str1)) #346 path = os.path.abspath(str1) if path.startswith(u"\\\\"): path=u"\\\\?\\UNC\\"+path[2:] else: path=u"\\\\?\\"+path with open(path,"r+") as f: print(f.readline()) If you get a long path (more than 258 characters) issue in Windows, then try this.
Pathname too long to open?
This is a screenshot of the execution: As you see, the error says that the directory "JSONFiles/Apartment/Rent/dubizzleabudhabiproperty" is not there. But look at my files, please: The folder is definitely there. Update 2 The code self.file = open("JSONFiles/"+ item["category"]+"/" + item["action"]+"/"+ item['source']+"/"+fileName + '.json', 'wb') # Create a new JSON file with the name = fileName parameter line = json.dumps(dict(item)) # Change the item to a JSON format in one line self.file.write(line) # Write the item to the file UPDATE When I change the file name to a smaller one, it works, so the problem is because of the length of the path. what is the solution please?
[ "Regular DOS paths are limited to MAX_PATH (260) characters, including the string's terminating NUL character. You can exceed this limit by using an extended-length path that starts with the \\\\?\\ prefix. This path must be a Unicode string, fully qualified, and only use backslash as the path separator. Per Microsoft's file system functionality comparison, the maximum extended path length is 32760 characters. A individual file or directory name can be up to 255 characters (127 for the UDF filesystem). Extended UNC paths are also supported as \\\\?\\UNC\\server\\share.\nFor example:\nimport os\n\ndef winapi_path(dos_path, encoding=None):\n if (not isinstance(dos_path, unicode) and \n encoding is not None):\n dos_path = dos_path.decode(encoding)\n path = os.path.abspath(dos_path)\n if path.startswith(u\"\\\\\\\\\"):\n return u\"\\\\\\\\?\\\\UNC\\\\\" + path[2:]\n return u\"\\\\\\\\?\\\\\" + path\n\npath = winapi_path(os.path.join(u\"JSONFiles\", \n item[\"category\"],\n item[\"action\"], \n item[\"source\"], \n fileName + \".json\"))\n\n>>> path = winapi_path(\"C:\\\\Temp\\\\test.txt\")\n>>> print path\n\\\\?\\C:\\Temp\\test.txt\n\nSee the following pages on MSDN: \n\nNaming Files, Paths, and Namespaces\nDefining an MS-DOS Device Name\nKernel object namespaces\n\n\nBackground\nWindows calls the NT runtime library function RtlDosPathNameToRelativeNtPathName_U_WithStatus to convert a DOS path to a native NT path. If we open (i.e. CreateFile) the above path with a breakpoint set on the latter function, we can see how it handles a path that starts with the \\\\?\\ prefix.\nBreakpoint 0 hit\nntdll!RtlDosPathNameToRelativeNtPathName_U_WithStatus:\n00007ff9`d1fb5880 4883ec58 sub rsp,58h\n0:000> du @rcx\n000000b4`52fc0f60 \"\\\\?\\C:\\Temp\\test.txt\"\n0:000> r rdx\nrdx=000000b450f9ec18\n0:000> pt\nntdll!RtlDosPathNameToRelativeNtPathName_U_WithStatus+0x66:\n00007ff9`d1fb58e6 c3 ret\n\nThe result replaces \\\\?\\ with the NT DOS devices prefix \\??\\, and copies the string into a native UNICODE_STRING:\n0:000> dS b450f9ec18\n000000b4`536b7de0 \"\\??\\C:\\Temp\\test.txt\"\n\nIf you use //?/ instead of \\\\?\\, then the path is still limited to MAX_PATH characters. If it's too long, then RtlDosPathNameToRelativeNtPathName returns the status code STATUS_NAME_TOO_LONG (0xC0000106). \nIf you use \\\\?\\ for the prefix but use slash in the rest of the path, Windows will not translate the slash to backslash for you:\nBreakpoint 0 hit\nntdll!RtlDosPathNameToRelativeNtPathName_U_WithStatus:\n00007ff9`d1fb5880 4883ec58 sub rsp,58h\n0:000> du @rcx\n0000005b`c2ffbf30 \"\\\\?\\C:/Temp/test.txt\"\n0:000> r rdx\nrdx=0000005bc0b3f068\n0:000> pt\nntdll!RtlDosPathNameToRelativeNtPathName_U_WithStatus+0x66:\n00007ff9`d1fb58e6 c3 ret\n0:000> dS 5bc0b3f068\n0000005b`c3066d30 \"\\??\\C:/Temp/test.txt\"\n\nForward slash is a valid object name character in the NT namespace. It's reserved by Microsoft filesystems, but you can use a forward slash in other named kernel objects, which get stored in \\BaseNamedObjects or \\Sessions\\[session number]\\BaseNamedObjects. Also, I don't think the I/O manager enforces the policy on reserved characters in device and filenames. It's up to the device. Maybe someone out there has a Windows device that implements a namespace that allows forward slash in names. At the very least you can create DOS device names that contain a forward slash. 
For example:\n>>> kernel32 = ctypes.WinDLL('kernel32')\n>>> kernel32.DefineDosDeviceW(0, u'My/Device', u'C:\\\\Temp')\n>>> os.path.exists(u'\\\\\\\\?\\\\My/Device\\\\test.txt')\nTrue\n\nYou may be wondering what \\?? signifies. This used to be an actual directory for DOS device links in the object namespace, but starting with NT 5 (or NT 4 w/ Terminal Services) this became a virtual prefix. The object manager handles this prefix by first checking the logon session's DOS device links in the directory \\Sessions\\0\\DosDevices\\[LOGON_SESSION_ID] and then checking the system-wide DOS device links in the \\Global?? directory. \nNote that the former is a logon session, not a Windows session. The logon session directories are all under the DosDevices directory of Windows session 0 (i.e. the services session in Vista+). Thus if you have a mapped drive for a non-elevated logon, you'll discover that it's not available in an elevated command prompt, because your elevated token is actually for a different logon session.\nAn example of a DOS device link is \\Global??\\C: => \\Device\\HarddiskVolume2. In this case the DOS C: drive is actually a symbolic link to the HarddiskVolume2 device. \nHere's a brief overview of how the system handles parsing a path to open a file. Given we're calling WinAPI CreateFile, it stores the translated NT UNICODE_STRING in an OBJECT_ATTRIBUTES structure and calls the system function NtCreateFile. \n0:000> g\nBreakpoint 1 hit\nntdll!NtCreateFile:\n00007ff9`d2023d70 4c8bd1 mov r10,rcx\n0:000> !obja @r8\nObja +000000b450f9ec58 at 000000b450f9ec58:\n Name is \\??\\C:\\Temp\\test.txt\n OBJ_CASE_INSENSITIVE\n\nNtCreateFile calls the I/O manager function IoCreateFile, which in turn calls the undocumented object manager API ObOpenObjectByName. This does the work of parsing the path. The object manager starts with \\??\\C:\\Temp\\test.txt. Then it replaces that with \\Global??\\C:Temp\\test.txt. Next it parses up to the C: symbolic link and has to start over (reparse) the final path \\Device\\HarddiskVolume2\\Temp\\test.txt. \nOnce the object manager gets to the HarddiskVolume2 device object, parsing is handed off to the I/O manager, which implements the Device object type. The ParseProcedure of an I/O Device creates the File object and an I/O Request Packet (IRP) with the major function code IRP_MJ_CREATE (an open/create operation) to be processed by the device stack. This is sent to the device driver via IoCallDriver. If the device implements reparse points (e.g. junction mountpoints, symbolic links, etc) and the path contains a reparse point, then the resolved path has to be resubmitted to the object manager to be parsed from the start.\nThe device driver will use the SeChangeNotifyPrivilege (almost always present and enabled) of the process token (or thread if impersonating) to bypass access checks while traversing directories. However, ultimately access to the device and target file has to be allowed by a security descriptor, which is verified via SeAccessCheck. Except simple filesystems such as FAT32 don't support file security.\n", "below is Python 3 version regarding @Eryk Sun's solution.\ndef winapi_path(dos_path, encoding=None):\n if (not isinstance(dos_path, str) and encoding is not None): \n dos_path = dos_path.decode(encoding)\n path = os.path.abspath(dos_path)\n if path.startswith(u\"\\\\\\\\\"):\n return u\"\\\\\\\\?\\\\UNC\\\\\" + path[2:]\n return u\"\\\\\\\\?\\\\\" + path\n\n#Python 3 renamed the unicode type to str, the old str type has been replaced by bytes. 
NameError: global name 'unicode' is not defined - in Python 3\n", "Adding the solution that helped me fix a similar issue:\npython version = 3.9, windows version = 10 pro.\nI had an issue with the filename itself as it was too long for python's open built-in method. The error I got is that the path simply doesn't exist, although I use the 'w+' mode for open (which is supposed to open a new file regardless whether it exists or not).\nI found this guide which solved the problem with a quick change to window's Registry Editor (specifically the Group Policy). Scroll down to the 'Make Windows 10 Accept Long File Paths' headline.\nDon't forget to update your OS group policy to take effect immediately, a guide can be found here.\nHope this helps future searches as this post is quite old.\n", "There can be multiple reasons for you getting this error. Please make sure of the following:\n\nThe parent directory of the folder (JSONFiles) is the same as the directory of the Python script.\nEven though the folder exists it does not mean the individual file does. Verify the same and make sure the exact file name matches the one that your Python code is trying to access.\n\nIf you still face an issue, share the result of \"dir\" command on the innermost folder you are trying to access.\n", "it works for me\nimport os\nstr1=r\"C:\\Users\\sandeepmkwana\\Desktop\\folder_structure\\models\\manual\\demodfadsfljdskfjslkdsjfklaj\\inner-2djfklsdfjsdklfj\\inner3fadsfksdfjdklsfjksdgjl\\inner4dfhasdjfhsdjfskfklsjdkjfleioreirueewdsfksdmv\\anotherInnerfolder4aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\\5qbbbbbbbbbbbccccccccccccccccccccccccsssssssssssssssss\\tmp.txt\"\nprint(len(str1)) #346\n\npath = os.path.abspath(str1)\n\nif path.startswith(u\"\\\\\\\\\"):\n path=u\"\\\\\\\\?\\\\UNC\\\\\"+path[2:]\nelse:\n path=u\"\\\\\\\\?\\\\\"+path\n\nwith open(path,\"r+\") as f:\n print(f.readline())\n\nIf you get a long path (more than 258 characters) issue in Windows, then try this.\n" ]
[ 42, 13, 1, 0, 0 ]
[]
[]
[ "python", "python_2.7", "windows" ]
stackoverflow_0036219317_python_python_2.7_windows.txt
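As a small companion to the registry-based answer above, the Windows 10 long-path policy it changes can be inspected from Python (a sketch, assuming Windows 10 or later and that the value has already been created under that key):

import winreg

# The group-policy / registry switch mentioned above lives here:
key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Control\FileSystem",
)
value, _ = winreg.QueryValueEx(key, "LongPathsEnabled")
winreg.CloseKey(key)
print("LongPathsEnabled =", value)   # 1 means Win32 long paths are enabled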