content: stringlengths (85 to 101k)
title: stringlengths (0 to 150)
question: stringlengths (15 to 48k)
answers: sequence
answers_scores: sequence
non_answers: sequence
non_answers_scores: sequence
tags: sequence
name: stringlengths (35 to 137)
Q: Concatenate Columns in CSV file using Python and Count the Total per UniqueID This question have been asked multiple times in this community but I couldn't find the correct answers since I am beginner in Python. I got 2 questions actually: I want to concatenate 3 columns (A,B,C) with its value into 1 Column. Header would be ABC. import os import pandas as pd directory = 'C:/Path' ext = ('.csv') for filename in os.listdir(directory): f = os.path.join(directory, filename) if f.endswith(ext): head_tail = os.path.split(f) head_tail1 = 'C:/Output' k =head_tail[1] r=k.split(".")[0] p=head_tail1 + "/" + r + " - Revised.csv" mydata = pd.read_csv(f) new =mydata[["A","B","C","D"]] new = new.rename(columns={'D': 'Total'}) new['Total'] = 1 new.to_csv(p ,index=False) Once concatenated, is it possible to count the uniqueid and put the total in Column D? Basically, to get the total count per uniqueid (Column ABC),the data can be found on a link when you click that UniqueID. For ex: Column ABC - uniqueid1, -> click -> go to the next page, total of that uniqueid. On the link page, you can get the total numbers of uniqueid by Serial ID I have no idea how to do this, but I would really appreciate if someone can help me on this project and would learn a lot from this. Thank you very much. God Bless Searched in Google, Youtube and Stackoverflow, couldn't find the correct answer. A: Next time, try to specify your issues and give a minimal reproducible example. This is just an example how to use pd.melt and pd.groupby. I hope it helps with your question. import pandas as pd ### example dataframe df = pd.DataFrame([['first', 1, 2, 3], ['second', 4, 5, 6], ['third', 7, 8, 9]], columns=['ID', 'A', 'B', 'C']) ### directly sum up A, B and C df['total'] = df.sum(axis=1, numeric_only=True) print(df) ### how to create a so called long dataframe with melt df_long = pd.melt(df, id_vars='ID', value_vars=['A', 'B', 'C'], var_name='ABC') print(df_long) ### group long dataframe by column and sum up all values with this ID df_group = df_long.groupby(by='ID').sum() print(df_group) A: I'm not sure that I understand your question correctly. However, if you know exactly the column names (e.g., A, B, and C) that you want to concatenate you can do something like code below. ''.join(merge_columns) is to concatenate column names. new[merge_columns].apply(lambda x: ''.join(x), axis=1) is to concatenate their values. Then, you can count unique values of the new column using groupby().count(). new = mydata[["A","B","C","D"]] new = new.rename(columns={'D': 'Total'}) new['Total'] = 1 # added lines merge_columns = ['A', 'B', 'C'] merged_col = ''.join(merge_columns) new[merged_col] = new[merge_columns].apply(lambda x: ''.join(x), axis=1) new.drop(merge_columns, axis=1, inplace=True) new = new.groupby(merged_col).count().reset_index() new.to_csv(p ,index=False) example: # before > new A B C Total 0 a b c 1 1 x y z 1 2 a b c 1 # after execute added lines > new ABC Total 0 abc 2 1 xyz 1
Concatenate Columns in CSV file using Python and Count the Total per UniqueID
This question have been asked multiple times in this community but I couldn't find the correct answers since I am beginner in Python. I got 2 questions actually: I want to concatenate 3 columns (A,B,C) with its value into 1 Column. Header would be ABC. import os import pandas as pd directory = 'C:/Path' ext = ('.csv') for filename in os.listdir(directory): f = os.path.join(directory, filename) if f.endswith(ext): head_tail = os.path.split(f) head_tail1 = 'C:/Output' k =head_tail[1] r=k.split(".")[0] p=head_tail1 + "/" + r + " - Revised.csv" mydata = pd.read_csv(f) new =mydata[["A","B","C","D"]] new = new.rename(columns={'D': 'Total'}) new['Total'] = 1 new.to_csv(p ,index=False) Once concatenated, is it possible to count the uniqueid and put the total in Column D? Basically, to get the total count per uniqueid (Column ABC),the data can be found on a link when you click that UniqueID. For ex: Column ABC - uniqueid1, -> click -> go to the next page, total of that uniqueid. On the link page, you can get the total numbers of uniqueid by Serial ID I have no idea how to do this, but I would really appreciate if someone can help me on this project and would learn a lot from this. Thank you very much. God Bless Searched in Google, Youtube and Stackoverflow, couldn't find the correct answer.
[ "Next time, try to specify your issues and give a minimal reproducible example.\nThis is just an example how to use pd.melt and pd.groupby.\nI hope it helps with your question.\nimport pandas as pd\n\n### example dataframe\ndf = pd.DataFrame([['first', 1, 2, 3], ['second', 4, 5, 6], ['third', 7, 8, 9]], columns=['ID', 'A', 'B', 'C'])\n\n### directly sum up A, B and C\ndf['total'] = df.sum(axis=1, numeric_only=True)\nprint(df)\n\n### how to create a so called long dataframe with melt\ndf_long = pd.melt(df, id_vars='ID', value_vars=['A', 'B', 'C'], var_name='ABC')\nprint(df_long)\n\n### group long dataframe by column and sum up all values with this ID\ndf_group = df_long.groupby(by='ID').sum()\nprint(df_group)\n\n", "I'm not sure that I understand your question correctly. However, if you know exactly the column names (e.g., A, B, and C) that you want to concatenate you can do something like code below.\n''.join(merge_columns) is to concatenate column names.\nnew[merge_columns].apply(lambda x: ''.join(x), axis=1) is to concatenate their values.\nThen, you can count unique values of the new column using groupby().count().\nnew = mydata[[\"A\",\"B\",\"C\",\"D\"]]\nnew = new.rename(columns={'D': 'Total'})\nnew['Total'] = 1\n\n# added lines\nmerge_columns = ['A', 'B', 'C']\nmerged_col = ''.join(merge_columns)\nnew[merged_col] = new[merge_columns].apply(lambda x: ''.join(x), axis=1)\nnew.drop(merge_columns, axis=1, inplace=True)\nnew = new.groupby(merged_col).count().reset_index()\n\nnew.to_csv(p ,index=False)\n\nexample:\n# before\n\n> new\n\n A B C Total\n0 a b c 1\n1 x y z 1\n2 a b c 1\n\n# after execute added lines\n\n> new\n\n ABC Total\n0 abc 2\n1 xyz 1\n\n" ]
[ 0, 0 ]
[]
[]
[ "concatenation", "csv", "hyperlink", "python" ]
stackoverflow_0074598513_concatenation_csv_hyperlink_python.txt
Q: How to sort the columns by length of values in an excel file in python ? (preferably using def) I have a list of data as following. I need to add the current data to a new worksheet sorted by the length of the values in the third column (p_seq) enter image description here I was able to add the current data using openpyxl but I'm struggling with sorting them. Ideally I would like to create a function. Thank you in advance ! A: strings_col1 = ["abcdefgh", "ijklmn", "opqr", "stuvwxyz123"] sorted_list_col1 = list(sorted(strings_col1, key = len)) print(sorted_list_col1) output: ['opqr', 'ijklmn', 'abcdefgh', 'stuvwxyz123']
How to sort the columns by length of values in an Excel file in Python? (preferably using def)
I have a list of data as shown in the attached image. I need to add the current data to a new worksheet, sorted by the length of the values in the third column (p_seq). I was able to add the current data using openpyxl, but I'm struggling with sorting it. Ideally I would like to create a function. Thank you in advance!
[ " strings_col1 = [\"abcdefgh\", \"ijklmn\", \"opqr\", \"stuvwxyz123\"]\n sorted_list_col1 = list(sorted(strings_col1, key = len))\n print(sorted_list_col1)\n\noutput:\n\n['opqr', 'ijklmn', 'abcdefgh', 'stuvwxyz123']\n\n" ]
[ 0 ]
[]
[]
[ "excel", "openpyxl", "python" ]
stackoverflow_0074609845_excel_openpyxl_python.txt
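The posted answer only sorts a plain list of strings. A minimal sketch of applying the same len-based key to worksheet rows with openpyxl might look like the following; the file name, sheet names, and the assumption that p_seq is the third column are illustrative guesses, not details taken from the question.

    from openpyxl import load_workbook

    def copy_rows_sorted_by_length(path="data.xlsx", source="Sheet1", target="sorted"):
        # Assumed file and sheet names; adjust to the real workbook.
        wb = load_workbook(path)
        ws = wb[source]
        header = [cell.value for cell in ws[1]]          # keep the header row as-is
        rows = list(ws.iter_rows(min_row=2, values_only=True))
        rows.sort(key=lambda row: len(str(row[2])))      # third column assumed to be p_seq
        new_ws = wb.create_sheet(target)
        new_ws.append(header)
        for row in rows:
            new_ws.append(row)
        wb.save(path)

The sort key is the same key=len idea from the answer; only the row iteration and the new worksheet are added.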
Q: How to remove to duplicates of a list in python? Exercise: “Let’s go Grocery Shopping” A mother wants to list down the things she needs to buy, however, she needs a simple list that be run every time and can be modified whenever she changes her mind. Starting with just an empty list, write a function that creates a grocery list that does the following: • Add an Item (takes a string input from the user and add it to the existing list) • Remove an Item (takes a string input from the user and removes all instances from the list) • Print entire list (prints out all the contents of the list) • Exit (exit the program) The items are taken and stored as strings. Duplicates are allowed. When removing an item, it must not be case sensitive when matching with the item in the list for it to be removed (in other words “Eggs” and “eggS” still refer to the same item). The program should still continue running until the user decides the terminate it by doing the Exit command. Otherwise, it should catch all errors whenever possible. mycode This is my code. I have a problem in remove function, it should remove all the duplicates despite the letter case. What should my code be? Thank you.
How to remove duplicates from a list in Python?
Exercise: “Let’s go Grocery Shopping” A mother wants to list down the things she needs to buy; however, she needs a simple list that can be run every time and modified whenever she changes her mind. Starting with just an empty list, write a function that creates a grocery list that does the following: • Add an Item (takes a string input from the user and adds it to the existing list) • Remove an Item (takes a string input from the user and removes all instances from the list) • Print entire list (prints out all the contents of the list) • Exit (exits the program) The items are taken and stored as strings. Duplicates are allowed. When removing an item, the match against items in the list must not be case sensitive (in other words, “Eggs” and “eggS” still refer to the same item). The program should keep running until the user decides to terminate it with the Exit command. Otherwise, it should catch all errors whenever possible. This is my code (see the “mycode” link). I have a problem in the remove function: it should remove all instances of the item regardless of letter case. What should my code be? Thank you.
[]
[]
[ "I think you want this.\nremove_item = \"eggs\"\nmyList = [\"Eggs\", \"eggS\", \"eGgS\", \"miLk\", \"milk\"]\nresult=[]\n\nmarker = set()\n\nfor l in myList:\n ll = l.lower()\n if ll != remove_item.lower():\n result.append(ll)\n\n\nprint(result)\n\n", "You would want the remove_items function to look like this:\ndef remove_items():\n global grocery_list\n item_to_remove = input(\"Which item do you want to remove? \").lower()\n newlist = grocery_list.copy()\n for item in grocery_list:\n if item.lower() == item_to_remove:\n newlist.remove(item)\n grocery_list = newlist\n\nHopefully that helps :)\nAlso, global grocery_list is there since we set grocery list to a new list later in the function, and python will error when doing this without it. The function should just need to be dropped in as a replacement of your current remove_items function.\n" ]
[ -1, -1 ]
[ "list", "python", "python_3.x" ]
stackoverflow_0074609829_list_python_python_3.x.txt
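Neither posted snippet was accepted, so here is a small hedged sketch of the case-insensitive removal the exercise asks for; grocery_list, the prompt-free function shape, and the example items are placeholders, since the asker's actual code is only linked, not shown.

    def remove_items(grocery_list, item_to_remove):
        # Keep every entry whose lowercased form differs from the target.
        target = item_to_remove.lower()
        return [item for item in grocery_list if item.lower() != target]

    # Example usage (hypothetical data):
    # remove_items(["Eggs", "eggS", "Milk"], "eggs")  ->  ["Milk"]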
Q: Pandas .sort_values() function returning data frame with scattered values I'm using pandas to load a short_desc.csv with the following columns: ["report_id", "when","what"] with #read csv shortDesc = pd.read_csv('short_desc.csv') #get all numerical and nonnull values shortDesc = shortDesc[shortDesc['report_id'].str.isdigit().notnull()] #convert 'when' from UNIX timestamp to datetime shortDesc['when'] = pd.to_datetime(shortDesc['when'],unit='s') which results in the following: I'm trying to remove rows that have duplicate 'report_id's by sorting by date and getting the newest date where that 'report_id' is present with the following: shortDesc = shortDesc.sort_values(by='when').drop_duplicates(['report_id'], keep='last') the problem is that when I use .sort_values() in this particular dataframe the values of 'what' come out scattered across all columns, and the 'report_id' values disappear: shortDesc = shortDesc.sort_values(by=['when'], inplace=False) I'm not sure why this is happening in this particular instance since I was able to achieve the correct results by another dataframe with the same shape and using the same code (P.S it's not a mistake, I dropped the 'what' column in the second pic): similar shape dataframe desired results example with similar shape DF A: I found out that: #get all numerical and nonnull values shortDesc = shortDesc[shortDesc['report_id'].str.isdigit().notnull()] was only checking if a value was not null and probably overwriting the str.isdigit() check, which caused the field "report_id" to not drop nonnumeric values. I changed this to two separate lines shortDesc = shortDesc[shortDesc['report_id'].notnull()] shortDesc = shortDesc[shortDesc['report_id'].str.isnumeric()] which allowed shortDesc.sort_values(by='when', inplace=True) to work as intended, I am still confused as to why .sort_values(by="when") was affected by the column "report_id". So if anyone knows please enlighten me.
Pandas .sort_values() function returning data frame with scattered values
I'm using pandas to load a short_desc.csv with the following columns: ["report_id", "when","what"] with #read csv shortDesc = pd.read_csv('short_desc.csv') #get all numerical and nonnull values shortDesc = shortDesc[shortDesc['report_id'].str.isdigit().notnull()] #convert 'when' from UNIX timestamp to datetime shortDesc['when'] = pd.to_datetime(shortDesc['when'],unit='s') which results in the following: I'm trying to remove rows that have duplicate 'report_id's by sorting by date and getting the newest date where that 'report_id' is present with the following: shortDesc = shortDesc.sort_values(by='when').drop_duplicates(['report_id'], keep='last') the problem is that when I use .sort_values() in this particular dataframe the values of 'what' come out scattered across all columns, and the 'report_id' values disappear: shortDesc = shortDesc.sort_values(by=['when'], inplace=False) I'm not sure why this is happening in this particular instance since I was able to achieve the correct results by another dataframe with the same shape and using the same code (P.S it's not a mistake, I dropped the 'what' column in the second pic): similar shape dataframe desired results example with similar shape DF
[ "I found out that:\n#get all numerical and nonnull values\nshortDesc = shortDesc[shortDesc['report_id'].str.isdigit().notnull()]\n\nwas only checking if a value was not null and probably overwriting the str.isdigit() check, which caused the field \"report_id\" to not drop nonnumeric values. I changed this to two separate lines\nshortDesc = shortDesc[shortDesc['report_id'].notnull()]\nshortDesc = shortDesc[shortDesc['report_id'].str.isnumeric()]\n\nwhich allowed\nshortDesc.sort_values(by='when', inplace=True)\n\nto work as intended, I am still confused as to why .sort_values(by=\"when\") was affected by the column \"report_id\". So if anyone knows please enlighten me.\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074596270_dataframe_pandas_python.txt
Q: How can I put a table inside a layout box using Rich in Python? This is code I am using to put a table inside the layout. However, in the output I am getting lot of ansii code-like characters, and even though I have defined the colour attribute for the column, in the output it is not appearing. from re import X import psycopg2 from rich.console import Console from rich.table import Table from rich import box from rich.layout import Layout from rich import print as rprint layout = Layout() layout.split_column( Layout(name="upper"), Layout(name="lower") ) connection = psycopg2.connect(user="enterprisedb", password="xxxx", host="xx.xx.xxx.170", port="5444", database="edb") cursor = connection.cursor() def imp_file_layout(): table1 = Table(title="FILE LOCATIONS") table1.add_column("FILE_NAME", style="cyan", no_wrap=True) table1.add_column("LOCATION", style="magenta") directory_detail_Query = "select name,setting from pg_settings where name in ('data_directory','config_file','hba_file');" cursor.execute(directory_detail_Query) directory_records = cursor.fetchall() for row in directory_records: table1.add_row(*list(row)) console=Console(markup=False) with console.capture() as capture: console.print(table1) return capture.get() x=imp_file_layout(); layout["lower"].update(x) rprint(layout) cursor.close() connection.close() OUTPUT: ╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ [3m FILE LOCATIONS [0m ┏━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃[1m [0m[1mFILE_NAME [0m[1m [0m┃[1m [0m[1mLOCATION [0m[1m [0m┃ ┡━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │[36m [0m[36mconfig_file [0m[36m [0m│[35m [0m[35m/var/lib/edb/as14/data/postgresql.conf[0m[35m [0m│ │[36m [0m[36mdata_directory[0m[36m [0m│[35m [0m[35m/var/lib/edb/as14/data [0m[35m [0m│ │[36m [0m[36mhba_file [0m[36m [0m│[35m [0m[35m/var/lib/edb/as14/data/pg_hba.conf [0m[35m [0m│ └────────────────┴────────────────────────────────────────┘ Is there any solution for this? If I just do a normal print of the table, the output is coming as expected, but when I am trying to put it inside a layout placeholder, a lot of additional characters are appearing. A: No need to capture the output of the Table. Return the Table instance and add it to your layout.
How can I put a table inside a layout box using Rich in Python?
This is code I am using to put a table inside the layout. However, in the output I am getting lot of ansii code-like characters, and even though I have defined the colour attribute for the column, in the output it is not appearing. from re import X import psycopg2 from rich.console import Console from rich.table import Table from rich import box from rich.layout import Layout from rich import print as rprint layout = Layout() layout.split_column( Layout(name="upper"), Layout(name="lower") ) connection = psycopg2.connect(user="enterprisedb", password="xxxx", host="xx.xx.xxx.170", port="5444", database="edb") cursor = connection.cursor() def imp_file_layout(): table1 = Table(title="FILE LOCATIONS") table1.add_column("FILE_NAME", style="cyan", no_wrap=True) table1.add_column("LOCATION", style="magenta") directory_detail_Query = "select name,setting from pg_settings where name in ('data_directory','config_file','hba_file');" cursor.execute(directory_detail_Query) directory_records = cursor.fetchall() for row in directory_records: table1.add_row(*list(row)) console=Console(markup=False) with console.capture() as capture: console.print(table1) return capture.get() x=imp_file_layout(); layout["lower"].update(x) rprint(layout) cursor.close() connection.close() OUTPUT: ╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ [3m FILE LOCATIONS [0m ┏━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃[1m [0m[1mFILE_NAME [0m[1m [0m┃[1m [0m[1mLOCATION [0m[1m [0m┃ ┡━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │[36m [0m[36mconfig_file [0m[36m [0m│[35m [0m[35m/var/lib/edb/as14/data/postgresql.conf[0m[35m [0m│ │[36m [0m[36mdata_directory[0m[36m [0m│[35m [0m[35m/var/lib/edb/as14/data [0m[35m [0m│ │[36m [0m[36mhba_file [0m[36m [0m│[35m [0m[35m/var/lib/edb/as14/data/pg_hba.conf [0m[35m [0m│ └────────────────┴────────────────────────────────────────┘ Is there any solution for this? If I just do a normal print of the table, the output is coming as expected, but when I am trying to put it inside a layout placeholder, a lot of additional characters are appearing.
[ "No need to capture the output of the Table. Return the Table instance and add it to your layout.\n" ]
[ 1 ]
[]
[]
[ "python", "rich" ]
stackoverflow_0074523297_python_rich.txt
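The answer is prose-only, so a rough sketch of what it suggests is shown below; it reuses cursor, layout, Table, and rprint from the question and has not been run against a live database.

    def imp_file_layout():
        table1 = Table(title="FILE LOCATIONS")
        table1.add_column("FILE_NAME", style="cyan", no_wrap=True)
        table1.add_column("LOCATION", style="magenta")
        query = ("select name, setting from pg_settings "
                 "where name in ('data_directory', 'config_file', 'hba_file');")
        cursor.execute(query)
        for row in cursor.fetchall():
            table1.add_row(*row)
        return table1                              # hand back the renderable, no capture needed

    layout["lower"].update(imp_file_layout())      # Layout.update accepts any Rich renderable
    rprint(layout)

Because the Table itself is placed in the layout, Rich renders it with its styles intact instead of printing captured escape codes as literal text.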
Q: nested list comprehension with os.walk Trying to enumerate all files in a certain directory (like 'find .' in Linux, or 'dir /s /b' in Windows). I came up with the following nested list comprehension: from os import walk from os.path import join root = r'c:\windows' #choose any folder here allfiles = [join(root,f) for f in files for root,dirs,files in walk(root)] Unfortunately, for the last expression, I'm getting: NameError: name 'files' is not defined Related to this question, which (although working) I can't understand the syntax of the nested list comprehension. A: You need to reverse the nesting; allfiles = [join(root,f) for root,dirs,files in walk(root) for f in files] See the list comprehension documentation: When a list comprehension is supplied, it consists of a single expression followed by at least one for clause and zero or more for or if clauses. In this case, the elements of the new list are those that would be produced by considering each of the for or if clauses a block, nesting from left to right, and evaluating the expression to produce a list element each time the innermost block is reached. In other words, since you basically want the moral equivalent of: allfiles = [] for root, dirs, files in walk(root): for f in files: allfiles.append(f) your list comprehension should follow the same ordering. A: it is: allfiles = [join(root, f) for _, dirs, files in walk(root) for f in files] A: A different option, remove .replace() if you want absolute path.* import os working_dir = os.getcwd() file_list = [os.path.join( dirpath, file).replace( working_dir, "") for ( dirpath, dirnames, filenames) in os.walk( working_dir) for file in filenames] print(file_list) Python 3 Version.
nested list comprehension with os.walk
Trying to enumerate all files in a certain directory (like 'find .' in Linux, or 'dir /s /b' in Windows). I came up with the following nested list comprehension: from os import walk from os.path import join root = r'c:\windows' #choose any folder here allfiles = [join(root,f) for f in files for root,dirs,files in walk(root)] Unfortunately, for the last expression, I'm getting: NameError: name 'files' is not defined Related to this question, which (although working) I can't understand the syntax of the nested list comprehension.
[ "You need to reverse the nesting;\nallfiles = [join(root,f) for root,dirs,files in walk(root) for f in files]\n\nSee the list comprehension documentation:\n\nWhen a list comprehension is supplied, it consists of a single expression followed by at least one for clause and zero or more for or if clauses. In this case, the elements of the new list are those that would be produced by considering each of the for or if clauses a block, nesting from left to right, and evaluating the expression to produce a list element each time the innermost block is reached.\n\nIn other words, since you basically want the moral equivalent of:\nallfiles = []\nfor root, dirs, files in walk(root):\n for f in files:\n allfiles.append(f)\n\nyour list comprehension should follow the same ordering.\n", "it is:\nallfiles = [join(root, f) for _, dirs, files in walk(root) for f in files]\n\n", "\nA different option, remove .replace() if you want absolute path.*\n\nimport os\n\nworking_dir = os.getcwd()\nfile_list = [os.path.join(\ndirpath, file).replace(\n working_dir, \"\") for (\n dirpath, dirnames, filenames) in os.walk(\n working_dir) for file in filenames]\nprint(file_list)\n\nPython 3 Version.\n" ]
[ 29, 5, 0 ]
[]
[]
[ "list_comprehension", "python" ]
stackoverflow_0013051785_list_comprehension_python.txt
Q: Side inputs to WriteToBigQuery - PCollection of size 2 with more than one element accessed as a singleton view I have a streaming apache beam pipeline which does operations on data and writes to big query, the table name and schema of said data is within the data itself, so i am using side inputs to provide table name and schema using side_inputs for both of them. So my pipeline code looks something like this - pipeline | "Writing to big query">>beam.io.WriteToBigQuery( schema=lambda row,schema:write_table_schema(row,schema), schema_side_inputs = (table_schema,), project=args['PROJECT_ID'],dataset=args['DATASET_ID'], table = lambda row,table_name:write_table_name(row,table_name),table_side_inputs=(table_name,) ,ignore_unknown_columns=args['ignore_unknown_columns'], additional_bq_parameters=additional_bq_parameters, insert_retry_strategy= RetryStrategy.RETRY_ON_TRANSIENT_ERROR)) For this to work i needed to add window intervals (before write to big query) pipeline = pipeline | "To Window Fixed Intervals" >> beam.WindowInto(beam.window.FixedWindows(10))) This windowed data then goes on to becomes input to 3 pipeline operations, 2 side inputs to WriteToBigQuery are like this - table_name = (pipeline | "Get table name" >> beam.Map(lambda record: get_table_name(record)) ) table_name = beam.pvalue.AsSingleton(table_name) table_schema = (pipeline | "Get table schema" >> beam.Map(lambda record: get_table_schema(record)) ) table_schema = beam.pvalue.AsSingleton(table_schema) All of this was working fine untill i need to split the data before windowing intervals like mapped_data = (pipeline |"Converting to map ">>beam.ParDo(ConvertToMap()).with_outputs("SUCCESS","FAILURE")) pipeline = (mapped_data['SUCCESS'] | "To Window Fixed Intervals" >> beam.WindowInto(beam.window.FixedWindows(10))) As soon as i did this, i encountered following error - ( ValueError: PCollection of size 2 with more than one element accessed as a singleton view. First two elements encountered are "name_1", "name_1". [while running 'Writing to big query/_StreamToBigQuery/AppendDestination-ptransform-48'] I've skipped some steps from the pipeline as it was way too complex. How can i fix this error? I've tried using AsDict instead of AsSingleton but it gives following error - ValueError: dictionary update sequence element #0 has length 20; 2 is required [while running 'Writing to big query/_StreamToBigQuery/AppendDestination-ptransform-48'] I don't think there is any usecase of AsDict here. Maybe the issue was not due to tagging but it was just waiting to happen with high data as it is a streaming pipeline. Solution - The issue here was the side inputs were being generated every time but the main input was being generated only conditionally. This makes the number of side inputs more then the main inputs, hence the issue. After fixing this issue but making side inputs generate through the same conditions as the main input, i've encountered another issue - Cannot convert GlobalWindow to apache_beam.utils.windowed_value._IntervalWindowBase [while running 'Writing to big query/_StreamToBigQuery/StreamInsertRows/ParDo(BigQueryWriteFn)-ptransform-124'] Adding these following windowing transforms to the pipeline "Window into Global Intervals" >> beam.WindowInto(beam.window.FixedWindows(1)) |beam.GroupByKey() gave the following error - AbstractComponentCoderImpl.encode_to_stream ValueError: Number of components does not match number of coders. [while running 'WindowInto(WindowIntoFn) Any help here is appreciated. 
A: This issue most likely occurs because part of the code returns elements in the GlobalWindow while the PCollection has a different window set. For your requirement, I would suggest inserting beam.WindowInto(beam.window.GlobalWindows()) between beam.WindowInto(NONGLOBALWINDOW) | beam.GroupByKey() and the other PTransforms that cause problems.
Side inputs to WriteToBigQuery - PCollection of size 2 with more than one element accessed as a singleton view
I have a streaming apache beam pipeline which does operations on data and writes to big query, the table name and schema of said data is within the data itself, so i am using side inputs to provide table name and schema using side_inputs for both of them. So my pipeline code looks something like this - pipeline | "Writing to big query">>beam.io.WriteToBigQuery( schema=lambda row,schema:write_table_schema(row,schema), schema_side_inputs = (table_schema,), project=args['PROJECT_ID'],dataset=args['DATASET_ID'], table = lambda row,table_name:write_table_name(row,table_name),table_side_inputs=(table_name,) ,ignore_unknown_columns=args['ignore_unknown_columns'], additional_bq_parameters=additional_bq_parameters, insert_retry_strategy= RetryStrategy.RETRY_ON_TRANSIENT_ERROR)) For this to work i needed to add window intervals (before write to big query) pipeline = pipeline | "To Window Fixed Intervals" >> beam.WindowInto(beam.window.FixedWindows(10))) This windowed data then goes on to becomes input to 3 pipeline operations, 2 side inputs to WriteToBigQuery are like this - table_name = (pipeline | "Get table name" >> beam.Map(lambda record: get_table_name(record)) ) table_name = beam.pvalue.AsSingleton(table_name) table_schema = (pipeline | "Get table schema" >> beam.Map(lambda record: get_table_schema(record)) ) table_schema = beam.pvalue.AsSingleton(table_schema) All of this was working fine untill i need to split the data before windowing intervals like mapped_data = (pipeline |"Converting to map ">>beam.ParDo(ConvertToMap()).with_outputs("SUCCESS","FAILURE")) pipeline = (mapped_data['SUCCESS'] | "To Window Fixed Intervals" >> beam.WindowInto(beam.window.FixedWindows(10))) As soon as i did this, i encountered following error - ( ValueError: PCollection of size 2 with more than one element accessed as a singleton view. First two elements encountered are "name_1", "name_1". [while running 'Writing to big query/_StreamToBigQuery/AppendDestination-ptransform-48'] I've skipped some steps from the pipeline as it was way too complex. How can i fix this error? I've tried using AsDict instead of AsSingleton but it gives following error - ValueError: dictionary update sequence element #0 has length 20; 2 is required [while running 'Writing to big query/_StreamToBigQuery/AppendDestination-ptransform-48'] I don't think there is any usecase of AsDict here. Maybe the issue was not due to tagging but it was just waiting to happen with high data as it is a streaming pipeline. Solution - The issue here was the side inputs were being generated every time but the main input was being generated only conditionally. This makes the number of side inputs more then the main inputs, hence the issue. After fixing this issue but making side inputs generate through the same conditions as the main input, i've encountered another issue - Cannot convert GlobalWindow to apache_beam.utils.windowed_value._IntervalWindowBase [while running 'Writing to big query/_StreamToBigQuery/StreamInsertRows/ParDo(BigQueryWriteFn)-ptransform-124'] Adding these following windowing transforms to the pipeline "Window into Global Intervals" >> beam.WindowInto(beam.window.FixedWindows(1)) |beam.GroupByKey() gave the following error - AbstractComponentCoderImpl.encode_to_stream ValueError: Number of components does not match number of coders. [while running 'WindowInto(WindowIntoFn) Any help here is appreciated.
[ "This issue occurs probably due to some issue with the code which is returning elements in GlobalWindow while the PCollection has a different window set. For your requirement, I would suggest you to insert beam.WindowInto(beam.window.GlobalWindows()) between beam.WindowInto(NONGLOBALWINDOW) | beam.GroupByKey() and other Ptransform which causes problems.\n" ]
[ 0 ]
[]
[]
[ "apache_beam", "google_bigquery", "google_cloud_dataflow", "google_cloud_platform", "python" ]
stackoverflow_0074589874_apache_beam_google_bigquery_google_cloud_dataflow_google_cloud_platform_python.txt
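As a rough, untested illustration of the suggestion above, the re-window to GlobalWindows would sit after the GroupByKey and before the transforms that failed; mapped_data and the row["table_name"] key are placeholders borrowed from the question, not a verified streaming pipeline.

    regrouped = (
        mapped_data["SUCCESS"]
        | "Fixed windows" >> beam.WindowInto(beam.window.FixedWindows(10))
        | "Key by table" >> beam.Map(lambda row: (row["table_name"], row))   # assumed key field
        | "Group" >> beam.GroupByKey()
        | "Back to global window" >> beam.WindowInto(beam.window.GlobalWindows())
        # ... side-input construction and WriteToBigQuery would continue from here
    )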
Q: How to continue the loop after thrown an error with python? I want to create a small program keep print error using while-loop after raised an exception. If we input incorrect number. n = int(input("Enter: ")) error = False while not error: class error(BaseException): pass try: def number(n): if n == 2: print("correct") else: raise error number(n) break except error: print("Error") continue But it only printed error one time. error But why does it not continue the loop? And how can we continue the loop? Thanks error error error ..... A: As was pointed out in the comments, the issue is caused by using the same name error for your custom exception class and for a boolean variable which you seem to have intended to use to track the current status. Once you run: class error(BaseException): pass error is now a class, which is not False so the while loop doesn't run any more. Since you have break and continue statements in your while loop, you don't actually need to track the status of whether or not an error occurred on the previous iteration. Also, don't you want to prompt the user for new input each time? Like this: while True: n = int(input("Enter: ")) class error(BaseException): pass try: def number(n): if n == 2: print("correct") else: raise error number(n) break except error: print("Error") continue Example input/output: Enter: 4 Error Enter: 3 Error Enter: 6 Error Enter: 9 Error Enter: 2 correct
How to continue the loop after an error is thrown in Python?
I want to create a small program that keeps printing an error, using a while loop, after an exception is raised when we input an incorrect number.

n = int(input("Enter: "))
error = False
while not error:
    class error(BaseException):
        pass
    try:
        def number(n):
            if n == 2:
                print("correct")
            else:
                raise error
        number(n)
        break
    except error:
        print("Error")
        continue

But it only printed the error one time:

error

Why does it not continue the loop, and how can we continue the loop? I would like it to keep printing:

error
error
error
.....

Thanks.
[ "As was pointed out in the comments, the issue is caused by using the same name error for your custom exception class and for a boolean variable which you seem to have intended to use to track the current status. Once you run:\nclass error(BaseException):\n pass\n\nerror is now a class, which is not False so the while loop doesn't run any more.\nSince you have break and continue statements in your while loop, you don't actually need to track the status of whether or not an error occurred on the previous iteration. Also, don't you want to prompt the user for new input each time? Like this:\nwhile True:\n \n n = int(input(\"Enter: \"))\n\n class error(BaseException):\n pass\n try:\n def number(n):\n if n == 2:\n print(\"correct\")\n else:\n raise error\n number(n)\n break\n \n except error:\n print(\"Error\")\n continue\n\nExample input/output:\nEnter: 4\nError\nEnter: 3\nError\nEnter: 6\nError\nEnter: 9\nError\nEnter: 2\ncorrect\n\n" ]
[ 0 ]
[]
[]
[ "python", "while_loop" ]
stackoverflow_0074609854_python_while_loop.txt
Q: How to write/use K8 Python client to create a new role, sa & role binding I am currently figuring out what is the best way to programmatically manage the Kubernetes cluster (eks). I have come across a python Kubernetes client where I was able to load the local config and then create a namespace. I am running a jenkins job where I would like it to create a namespace, role, rolebinding, as. I have managed to create the namespace however having trouble understanding on how to call the function to create a new role, new role binding. Here is the snippet to create namespaces using k8 python client: from kubernetes import dynamic, config from kubernetes import client as k8s_client from kubernetes.client import api_client import time, sys def create_namespace(namespace_api, name): namespace_manifest = { "apiVersion": "v1", "kind": "Namespace", "metadata": {"name": name, "resourceversion": "v1"}, } namespace_api.create(body=namespace_manifest) def delete_namespace(namespace_api, name): namespace_api.delete(name=name) def main(): # Load local config client = dynamic.DynamicClient( api_client.ApiClient(configuration=config.load_incluster_config()) ) namespace_api = client.resources.get(api_version="v1", kind="Namespace") # Creating a namespace namespace_name = sys.argv[1] create_namespace(namespace_api, namespace_name) time.sleep(4) print("\n[INFO] namespace: " + namespace_name + " created") if __name__ == '__main__': main() I would appreciate any support A: You'll most likely want to use the RbacAuthorizationV1Api. Afterward you can call create_namespaced_role and create_namespaced_role_binding to make what you need. A snippet might look like from kubernetes import client, config config.load_incluster_config() policy_api = client.RbacAuthorizationV1Api() role = client.V1Role( metadata=client.V1ObjectMeta(name="my-role"), rules=[client.V1PolicyRule([""], resources=["pods"], verbs=["get", "list"])], ) policy_api.create_namespaced_role(namespace="my-namespace", body=role) role_binding = client.V1RoleBinding( metadata=client.V1ObjectMeta(namespace="my-namespace", name="my-role-binding"), subjects=[ client.V1Subject( name="user", kind="User", api_group="rbac.authorization.k8s.io" ) ], role_ref=client.V1RoleRef( api_group="rbac.authorization.k8s.io", kind="Role", name="user-role" ), ) policy_api.create_namespaced_role_binding(namespace="my-namespace", body=role_binding) Some more useful examples here.
How to write/use the Kubernetes (K8s) Python client to create a new role, service account (sa) & role binding
I am currently figuring out what is the best way to programmatically manage the Kubernetes cluster (eks). I have come across a python Kubernetes client where I was able to load the local config and then create a namespace. I am running a jenkins job where I would like it to create a namespace, role, rolebinding, as. I have managed to create the namespace however having trouble understanding on how to call the function to create a new role, new role binding. Here is the snippet to create namespaces using k8 python client: from kubernetes import dynamic, config from kubernetes import client as k8s_client from kubernetes.client import api_client import time, sys def create_namespace(namespace_api, name): namespace_manifest = { "apiVersion": "v1", "kind": "Namespace", "metadata": {"name": name, "resourceversion": "v1"}, } namespace_api.create(body=namespace_manifest) def delete_namespace(namespace_api, name): namespace_api.delete(name=name) def main(): # Load local config client = dynamic.DynamicClient( api_client.ApiClient(configuration=config.load_incluster_config()) ) namespace_api = client.resources.get(api_version="v1", kind="Namespace") # Creating a namespace namespace_name = sys.argv[1] create_namespace(namespace_api, namespace_name) time.sleep(4) print("\n[INFO] namespace: " + namespace_name + " created") if __name__ == '__main__': main() I would appreciate any support
[ "You'll most likely want to use the RbacAuthorizationV1Api. Afterward you can call create_namespaced_role and create_namespaced_role_binding to make what you need.\nA snippet might look like\nfrom kubernetes import client, config\n\nconfig.load_incluster_config()\npolicy_api = client.RbacAuthorizationV1Api()\nrole = client.V1Role(\n metadata=client.V1ObjectMeta(name=\"my-role\"),\n rules=[client.V1PolicyRule([\"\"], resources=[\"pods\"], verbs=[\"get\", \"list\"])],\n)\npolicy_api.create_namespaced_role(namespace=\"my-namespace\", body=role)\n\nrole_binding = client.V1RoleBinding(\n metadata=client.V1ObjectMeta(namespace=\"my-namespace\", name=\"my-role-binding\"),\n subjects=[\n client.V1Subject(\n name=\"user\", kind=\"User\", api_group=\"rbac.authorization.k8s.io\"\n )\n ],\n role_ref=client.V1RoleRef(\n api_group=\"rbac.authorization.k8s.io\", kind=\"Role\", name=\"user-role\"\n ),\n)\npolicy_api.create_namespaced_role_binding(namespace=\"my-namespace\", body=role_binding)\n\n\nSome more useful examples here.\n" ]
[ 0 ]
[]
[]
[ "client", "k8s_serviceaccount", "kubernetes", "programmatically", "python" ]
stackoverflow_0071563628_client_k8s_serviceaccount_kubernetes_programmatically_python.txt
Q: Redirect localhost:5000/some_path to localhost:5000 I have a dockenizer flask api app that runs in localhost:5000. The api runs with no problem. But when I tried to use it by another app, which I cannot change, it uses localhost:5000/some_path. I'd like to redirect from localhost:5000/some_path to localhost:5000. I have read that I can use a prefix in my flask api app, but I'd prefer another approach. I don't want to mess with the code. Is there a redirect/middleware or another way to redirect this traffic? docker-compose.yml: # Use root/example as user/password credentials version: "3.1" services: my-db: image: mariadb restart: always environment: MARIADB_ROOT_PASSWORD: example ports: - 3306:3306 volumes: - ./0_schema.sql:/docker-entrypoint-initdb.d/0_schema.sql - ./1_data.sql:/docker-entrypoint-initdb.d/1_data.sql adminer: image: adminer restart: always environment: ADMINER_DEFAULT_SERVER: my-db ports: - 8080:8080 my-api: build: ../my-awesome-api/ ports: - 5000:5000 A: If you use a web server to serve your application you could manage it with it, for example with nginx you could do: location = /some_path { return 301 /; } Or you can use a middleware: class PrefixMiddleware(object): def __init__(self, app, prefix=""): self.app = app self.prefix = prefix def __call__(self, environ, start_response): if environ["PATH_INFO"].startswith(self.prefix): environ["PATH_INFO"] = environ["PATH_INFO"][len(self.prefix) :] environ["SCRIPT_NAME"] = self.prefix return self.app(environ, start_response) else: #handle not found Then register your middleware by adding the prefix to "ignore" app = Flask(__name__) app.wsgi_app = PrefixMiddleware(biosfera_fe.wsgi_app, prefix="/some_path")
Redirect localhost:5000/some_path to localhost:5000
I have a dockenizer flask api app that runs in localhost:5000. The api runs with no problem. But when I tried to use it by another app, which I cannot change, it uses localhost:5000/some_path. I'd like to redirect from localhost:5000/some_path to localhost:5000. I have read that I can use a prefix in my flask api app, but I'd prefer another approach. I don't want to mess with the code. Is there a redirect/middleware or another way to redirect this traffic? docker-compose.yml: # Use root/example as user/password credentials version: "3.1" services: my-db: image: mariadb restart: always environment: MARIADB_ROOT_PASSWORD: example ports: - 3306:3306 volumes: - ./0_schema.sql:/docker-entrypoint-initdb.d/0_schema.sql - ./1_data.sql:/docker-entrypoint-initdb.d/1_data.sql adminer: image: adminer restart: always environment: ADMINER_DEFAULT_SERVER: my-db ports: - 8080:8080 my-api: build: ../my-awesome-api/ ports: - 5000:5000
[ "If you use a web server to serve your application you could manage it with it, for example with nginx you could do:\nlocation = /some_path {\n return 301 /;\n}\n\nOr you can use a middleware:\nclass PrefixMiddleware(object):\n def __init__(self, app, prefix=\"\"):\n self.app = app\n self.prefix = prefix\n\n def __call__(self, environ, start_response):\n\n if environ[\"PATH_INFO\"].startswith(self.prefix):\n environ[\"PATH_INFO\"] = environ[\"PATH_INFO\"][len(self.prefix) :]\n environ[\"SCRIPT_NAME\"] = self.prefix\n return self.app(environ, start_response)\n else:\n #handle not found\n\nThen register your middleware by adding the prefix to \"ignore\"\napp = Flask(__name__)\napp.wsgi_app = PrefixMiddleware(biosfera_fe.wsgi_app, prefix=\"/some_path\")\n\n" ]
[ 0 ]
[]
[]
[ "api", "flask", "python" ]
stackoverflow_0074604160_api_flask_python.txt
Q: Get only unique words from a sentence in Python Let's say I have a string that says "mango mango peach". How can I print only the unique words in that string. The desired output for the above string would be [peach] as a list Thanks!! A: Python has a built in method called count that would work very well here text = "mango mango peach apple apple banana" words = text.split() for word in words: if text.count(word) == 1: print(word) else: pass (xenial)vash@localhost:~/python/stack_overflow$ python3.7 mango.py peach banana Using list comprehension you can do this [print(word) for word in words if text.count(word) == 1] A: seq = "mango mango peach".split() [x for x in seq if x not in seq[seq.index(x)+1:]] A: First - split you string with empty space delimiter (split() method), than use Counter or calculate frequencies by your own code. A: You can use a Counter to find the number of occurrences of each word, then make a list of all words that appear only once. from collections import Counter phrase = "mango peach mango" counts = Counter(phrase.split()) print([word for word, count in counts.items() if count == 1]) # ['peach'] A: sentence = "mango mango peach" a)Convert the sentence into a list words = sentence.split() print(words) b)Convert the list into a set. Only unique words will be stored in the set unique_words = set(words) print(unique_words)
Get only unique words from a sentence in Python
Let's say I have a string that says "mango mango peach". How can I print only the unique words in that string? The desired output for the above string would be [peach] as a list. Thanks!!
[ "Python has a built in method called count that would work very well here \ntext = \"mango mango peach apple apple banana\"\nwords = text.split()\n\nfor word in words:\n if text.count(word) == 1:\n print(word)\n else:\n pass\n\n\n(xenial)vash@localhost:~/python/stack_overflow$ python3.7 mango.py \npeach\nbanana\n\n\nUsing list comprehension you can do this \n[print(word) for word in words if text.count(word) == 1]\n\n", "seq = \"mango mango peach\".split()\n[x for x in seq if x not in seq[seq.index(x)+1:]]\n\n", "First - split you string with empty space delimiter (split() method), than use Counter or calculate frequencies by your own code.\n", "You can use a Counter to find the number of occurrences of each word, then make a list of all words that appear only once.\nfrom collections import Counter\n\nphrase = \"mango peach mango\"\n\ncounts = Counter(phrase.split())\n\nprint([word for word, count in counts.items() if count == 1])\n# ['peach']\n\n", "sentence = \"mango mango peach\"\na)Convert the sentence into a list\nwords = sentence.split()\nprint(words)\nb)Convert the list into a set. Only unique words will be stored in the set\nunique_words = set(words)\nprint(unique_words)\n" ]
[ 4, 2, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0052373683_python.txt
Q: how can I use Python WatchDog to find out who changed the file? There is the simplest script that shows when it was changed, but how do I get detailed information? For example, who changed it with reference to the directory. ` import sys import logging from watchdog.observers import Observer from watchdog.events import LoggingEventHandler if __name__ == "__main__": logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(message)s', datefmt='%Y-%m-%d %H:%M:%S') path = sys.argv[1] if len(sys.argv) > 1 else '.' event_handler = LoggingEventHandler() observer = Observer() observer.schedule(event_handler, path, recursive=True) observer.start() try: while observer.isAlive(): observer.join(1) finally: observer.stop() observer.join() ` please with examples. I looked at the manual on watchdog, but did not understand how to get this information. A: The watchdog works via inotify mechanism and will only notify you that a file had been changed, created or deleted. There is no information in the filesystem about who did the change. The information you have is basically the same information that you would get by looking at a directory listing such as with ls -al. Assuming you are working on Linux, a file has an owner and a group. Typically, they are set according to the UID/GID of a process that creates the file, and remain unchanged even if a process with another UID/GID modifies the file (unless explicitly changed with chmod/chgrp). To track filesystem changes in more detail, you probably need to use the audit subsystem, but that's a completely different story.
How can I use the Python watchdog library to find out who changed a file?
There is the simplest script that shows when it was changed, but how do I get detailed information? For example, who changed it with reference to the directory. ` import sys import logging from watchdog.observers import Observer from watchdog.events import LoggingEventHandler if __name__ == "__main__": logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(message)s', datefmt='%Y-%m-%d %H:%M:%S') path = sys.argv[1] if len(sys.argv) > 1 else '.' event_handler = LoggingEventHandler() observer = Observer() observer.schedule(event_handler, path, recursive=True) observer.start() try: while observer.isAlive(): observer.join(1) finally: observer.stop() observer.join() ` please with examples. I looked at the manual on watchdog, but did not understand how to get this information.
[ "The watchdog works via inotify mechanism and will only notify you that a file had been changed, created or deleted. There is no information in the filesystem about who did the change. The information you have is basically the same information that you would get by looking at a directory listing such as with ls -al.\nAssuming you are working on Linux, a file has an owner and a group. Typically, they are set according to the UID/GID of a process that creates the file, and remain unchanged even if a process with another UID/GID modifies the file (unless explicitly changed with chmod/chgrp).\nTo track filesystem changes in more detail, you probably need to use the audit subsystem, but that's a completely different story.\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074609932_python_python_3.x.txt
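To make the limitation concrete, a small sketch is shown below; it only reports the owner stored in the file's metadata (the same thing ls -l shows), not the account that actually made the change, and it assumes a Linux host where the pwd module is available.

    import os
    import pwd
    from watchdog.events import FileSystemEventHandler

    class OwnerLoggingHandler(FileSystemEventHandler):
        """Log the current owner of any modified file.

        This only exposes ownership metadata; inotify/watchdog cannot tell
        you which user or process performed the modification."""

        def on_modified(self, event):
            if event.is_directory:
                return
            st = os.stat(event.src_path)                     # read ownership metadata
            owner = pwd.getpwuid(st.st_uid).pw_name          # map UID to a user name
            print(f"{event.src_path} modified; current owner: {owner}")

It can be scheduled exactly like the LoggingEventHandler in the question, e.g. observer.schedule(OwnerLoggingHandler(), path, recursive=True); for a real "who did it" trail, the audit subsystem mentioned above is still needed.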
Q: Problem importing TensorFlow 2 in Python (running on WSL in Windows) Problem: I followed Microsoft's instruction in order to properly install and run TensorFlow 2 in WSL with GPU acceleration, using DirectML (here's the document). Following the installation, when I try and import tensorflow in Python I get the following output: >>> import tensorflow 2022-11-22 15:52:33.090032: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/pietro/miniconda3/envs/testing/lib/python3.9/site-package /tensorflow/__init__.py", line 440, in <module> _ll.load_library(_plugin_dir) File "/home/pietro/miniconda3/envs/testing/lib/python3.9/site-package /tensorflow/python/framework/load_library.py", line 151, in load_library py_tf.TF_LoadLibrary(lib) tensorflow.python.framework.errors_impl.NotFoundError: /home/pietro /miniconda3/envs/testing/lib/python3.9/site-packages/tensorflow-plugin /libtfdml_plugin.so: undefined symbol:_ZN10tensorflow8internal15LogMessageFatalD1Ev, version tensorflow I tried instead to follow the instructions for TensorFlow 1 and PyTorch (just in case something was wrong with my machine) and they both work perfectly, so I assume this issue only involves TensorFlow 2 somehow. Did anyone encounter the same problem? Thanks to everybody in advance :) Pietro A: Had the same problem, and downgrading TensorFlow from 2.11 fixed it. First remove the existing version: pip uninstall tensorflow-cpu Then re-install, this time with 2.10.0: pip install tensorflow-cpu==2.10.0 After that, try importing it in Python. You should see something like the following (apologies for the messy output): >>> import tensorflow as tf 2022-11-28 22:41:21.693757: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F AVX512_VNNI FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2022-11-28 22:41:21.806150: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. 2022-11-28 22:41:22.982148: I tensorflow/c/logging.cc:34] Successfully opened dynamic library libdirectml.d6f03b303ac3c4f2eeb8ca631688c9757b361310.so 2022-11-28 22:41:22.982289: I tensorflow/c/logging.cc:34] Successfully opened dynamic library libdxcore.so 2022-11-28 22:41:22.996385: I tensorflow/c/logging.cc:34] Successfully opened dynamic library libd3d12.so 2022-11-28 22:41:27.615851: I tensorflow/c/logging.cc:34] DirectML device enumeration: found 1 compatible adapters. You can test that it works by adding two tensors. Running a command like the following: print(tf.add([1.0, 2.0], [3.0, 4.0])) And somewhere in the output, you should be able to verify that DirectML has found your GPU: 2022-11-28 22:43:42.632447: I tensorflow/c/logging.cc:34] DirectML: creating device on adapter 0 (NVIDIA GeForce RTX 3080) Hope this helps!
Problem importing TensorFlow 2 in Python (running on WSL in Windows)
Problem: I followed Microsoft's instruction in order to properly install and run TensorFlow 2 in WSL with GPU acceleration, using DirectML (here's the document). Following the installation, when I try and import tensorflow in Python I get the following output: >>> import tensorflow 2022-11-22 15:52:33.090032: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/pietro/miniconda3/envs/testing/lib/python3.9/site-package /tensorflow/__init__.py", line 440, in <module> _ll.load_library(_plugin_dir) File "/home/pietro/miniconda3/envs/testing/lib/python3.9/site-package /tensorflow/python/framework/load_library.py", line 151, in load_library py_tf.TF_LoadLibrary(lib) tensorflow.python.framework.errors_impl.NotFoundError: /home/pietro /miniconda3/envs/testing/lib/python3.9/site-packages/tensorflow-plugin /libtfdml_plugin.so: undefined symbol:_ZN10tensorflow8internal15LogMessageFatalD1Ev, version tensorflow I tried instead to follow the instructions for TensorFlow 1 and PyTorch (just in case something was wrong with my machine) and they both work perfectly, so I assume this issue only involves TensorFlow 2 somehow. Did anyone encounter the same problem? Thanks to everybody in advance :) Pietro
[ "Had the same problem, and downgrading TensorFlow from 2.11 fixed it. First remove the existing version:\npip uninstall tensorflow-cpu\n\nThen re-install, this time with 2.10.0:\npip install tensorflow-cpu==2.10.0\n\nAfter that, try importing it in Python. You should see something like the following (apologies for the messy output):\n>>> import tensorflow as tf\n2022-11-28 22:41:21.693757: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F AVX512_VNNI FMA\nTo enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\n2022-11-28 22:41:21.806150: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n2022-11-28 22:41:22.982148: I tensorflow/c/logging.cc:34] Successfully opened dynamic library libdirectml.d6f03b303ac3c4f2eeb8ca631688c9757b361310.so\n2022-11-28 22:41:22.982289: I tensorflow/c/logging.cc:34] Successfully opened dynamic library libdxcore.so\n2022-11-28 22:41:22.996385: I tensorflow/c/logging.cc:34] Successfully opened dynamic library libd3d12.so\n2022-11-28 22:41:27.615851: I tensorflow/c/logging.cc:34] DirectML device enumeration: found 1 compatible adapters.\n\nYou can test that it works by adding two tensors. Running a command like the following:\nprint(tf.add([1.0, 2.0], [3.0, 4.0]))\n\nAnd somewhere in the output, you should be able to verify that DirectML has found your GPU:\n2022-11-28 22:43:42.632447: I tensorflow/c/logging.cc:34] DirectML: creating device on adapter 0 (NVIDIA GeForce RTX 3080)\n\nHope this helps!\n" ]
[ 0 ]
[]
[]
[ "anaconda", "python", "tensorflow", "tensorflow2.0" ]
stackoverflow_0074534936_anaconda_python_tensorflow_tensorflow2.0.txt
Q: How to read json data in Python that received the json data from sns This is the json data I am receiving from aws sns notifications. I want to access deploymentGroupName which is inside the Records->Sns->Message In my lambda python code I am trying to do like this. eventName = json.loads(event.Records[0].Sns.Message).deploymentGroupName; This is the json I received. { 'Records': [{ 'EventSource': 'aws:sns', 'EventVersion': '1.0', 'EventSubscriptionArn': 'arn:aws:sns:us-east-1:1236542:project-Deploy-Success:123-654-12-b177-123654', 'Sns': { 'Type': 'Notification', 'MessageId': '6ef313fa-46d2-5841-b162-4805edfb421c', 'TopicArn': 'arn:aws:sns:us-east-1:428219256379:project-Deploy-Success', 'Subject': 'SUCCEEDED: AWS CodeDeploy d-E8BYQ65CL in us-east-1 to project-code-deploy', 'Message': '{"region":"us-east-1","accountId":"213321213321","eventTriggerName":"Sandbox-Deployment-Triggered","applicationName":"project-code-deploy","deploymentId":"d-E8BYQ65CL","deploymentGroupName":"Sandbox-ec2-deployment","createTime":"Tue Nov 29 06:38:20 UTC 2022","completeTime":"Tue Nov 29 06:38:33 UTC 2022","deploymentOverview":"{\\"Succeeded\\":1,\\"Failed\\":0,\\"Skipped\\":0,\\"InProgress\\":0,\\"Pending\\":0}","status":"SUCCEEDED"}', 'Timestamp': '2022-11-29T06:38:33.558Z', } }] } Right now giving this error. [ERROR] NameError: name 'json' is not defined Traceback (most recent call last): File "/var/task/lambda_function.py", line 12, in lambda_handler eventName = json.loads(event.Records[0].Sns.Message).deploymentGroupName; A: If event has no "" surrounding, it has converted to dict already. However, you need to deal with json for Message. import json msg = event['Records'][0]['Sns']['Message'] deploymentGroupName = json.loads(msg)['deploymentGroupName'] deploymentGroupName output: 'Sandbox-ec2-deployment'
How to read json data in Python that received the json data from sns
This is the json data I am receiving from aws sns notifications. I want to access deploymentGroupName which is inside the Records->Sns->Message In my lambda python code I am trying to do like this. eventName = json.loads(event.Records[0].Sns.Message).deploymentGroupName; This is the json I received. { 'Records': [{ 'EventSource': 'aws:sns', 'EventVersion': '1.0', 'EventSubscriptionArn': 'arn:aws:sns:us-east-1:1236542:project-Deploy-Success:123-654-12-b177-123654', 'Sns': { 'Type': 'Notification', 'MessageId': '6ef313fa-46d2-5841-b162-4805edfb421c', 'TopicArn': 'arn:aws:sns:us-east-1:428219256379:project-Deploy-Success', 'Subject': 'SUCCEEDED: AWS CodeDeploy d-E8BYQ65CL in us-east-1 to project-code-deploy', 'Message': '{"region":"us-east-1","accountId":"213321213321","eventTriggerName":"Sandbox-Deployment-Triggered","applicationName":"project-code-deploy","deploymentId":"d-E8BYQ65CL","deploymentGroupName":"Sandbox-ec2-deployment","createTime":"Tue Nov 29 06:38:20 UTC 2022","completeTime":"Tue Nov 29 06:38:33 UTC 2022","deploymentOverview":"{\\"Succeeded\\":1,\\"Failed\\":0,\\"Skipped\\":0,\\"InProgress\\":0,\\"Pending\\":0}","status":"SUCCEEDED"}', 'Timestamp': '2022-11-29T06:38:33.558Z', } }] } Right now giving this error. [ERROR] NameError: name 'json' is not defined Traceback (most recent call last): File "/var/task/lambda_function.py", line 12, in lambda_handler eventName = json.loads(event.Records[0].Sns.Message).deploymentGroupName;
[ "If event has no \"\" surrounding, it has converted to dict already. However, you need to deal with json for Message.\nimport json\n\nmsg = event['Records'][0]['Sns']['Message']\ndeploymentGroupName = json.loads(msg)['deploymentGroupName']\ndeploymentGroupName\n\noutput:\n'Sandbox-ec2-deployment'\n\n" ]
[ 2 ]
[]
[]
[ "aws_lambda", "json", "lambda", "python" ]
stackoverflow_0074610109_aws_lambda_json_lambda_python.txt
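A minimal runnable sketch of the fix above placed inside a full handler, assuming the Lambda receives the SNS event shown in the question (the handler body, print call and return value are illustrative, not taken from the original post):

import json

def lambda_handler(event, context):
    # the Message field arrives as a JSON string inside the SNS record, so it needs its own json.loads
    msg = json.loads(event['Records'][0]['Sns']['Message'])
    deployment_group = msg['deploymentGroupName']
    print(deployment_group)  # e.g. 'Sandbox-ec2-deployment'
    return deployment_group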
Q: Tkinter on mac shows up as a black screen So here is my code: from tkinter import * root = Tk() root.title("Greeting") Label(root, text = "Hello World").pack() root.mainloop() but the only thing that shows up on the window after running it is a black screen you can see the code and the window in this image if it helps A: After much digging, I've found a solution (with some caveats) - you'll need both homebrew and pyenv installed for this to work. The idea is to replace your old deprecated tkinter installation with an up-to-date one that actually works* Note that this will wipe out any packages you’ve installed with pip - back those up first! Run the following commands brew uninstall tcl-tk uninstall the old tk if you have it pyenv uninstall 3.10.5 ...or whatever your current global Python version is brew install tcl-tk grab a fresh install of tk pyenv install 3.10.5 grab a fresh install of Python 3.10.5 (or whichever) pyenv global 3.10.5 set your global Python version (matching the version you just installed above) You need to install tk via homebrew before installing Python with pyenv because pyenv will automatically try to use whatever tk package it can find when it installs Python. Final Thoughts If you don't already have homebrew installed, here are good instructions If you don't have pyenv, just run brew install pyenv You’ll probably need to select your preferred Python interpreter in VSCode again *This worked for me - YMMV A: I have the same issue on an M1 Pro. Works just fine on the intel Mac but not the M1. I have a further issue on the file dialog in which the file type does not appear in the M1 but works perfectly on the Intel Mac. I am not convinced that the hardware is the problem but more that it is the port of Tkinter to the platform. A: Install/activate and import all globally installed packages in a new virtual environment by running the command pip install virtualenv virtualenv venv --system-site-packages source venv/bin/activate A: Had the same issue with Python 3.8 and Mac os Monterey; I've followed these steps to fix the issue: Upgraded Mac Os to the latest version Upgraded Python to 3.10/ 3.11 My issue was fixed.
Tkinter on mac shows up as a black screen
So here is my code: from tkinter import * root = Tk() root.title("Greeting") Label(root, text = "Hello World").pack() root.mainloop() but the only thing that shows up in the window after running it is a black screen. You can see the code and the window in this image, if it helps.
[ "After much digging, I've found a solution (with some caveats) -\nyou'll need both homebrew and pyenv installed for this to work. The idea is to replace your old deprecated tkinter installation with an up-to-date one that actually works*\n\nNote that this will wipe out any packages you’ve installed with pip - back those up first!\n\nRun the following commands\n\nbrew uninstall tcl-tk uninstall the old tk if you have it\n\npyenv uninstall 3.10.5 ...or whatever your current global Python version is\n\nbrew install tcl-tk grab a fresh install of tk\n\npyenv install 3.10.5 grab a fresh install of Python 3.10.5 (or whichever)\n\npyenv global 3.10.5 set your global Python version (matching the version you just installed above)\n\n\nYou need to install tk via homebrew before installing Python with pyenv because pyenv will automatically try to use whatever tk package it can find when it installs Python.\nFinal Thoughts\n\nIf you don't already have homebrew installed, here are good instructions\n\nIf you don't have pyenv, just run brew install pyenv\n\nYou’ll probably need to select your preferred Python interpreter in VSCode again\n\n\n*This worked for me - YMMV\n", "I have the same issue on an M1 Pro. Works just fine on the intel Mac but not the M1. I have a further issue on the file dialog in which the file type does not appear in the M1 but works perfectly on the Intel Mac.\nI am not convinced that the hardware is the problem but more that it is the port of Tkinter to the platform.\n", "Install/activate and import all globally installed packages in a new virtual environment by running the command\npip install virtualenv\nvirtualenv venv --system-site-packages\nsource venv/bin/activate\n", "Had the same issue with Python 3.8 and Mac os Monterey; I've followed these steps to fix the issue:\n\nUpgraded Mac Os to the latest version\nUpgraded Python to 3.10/ 3.11\n\nMy issue was fixed.\n" ]
[ 3, 0, 0, 0 ]
[]
[]
[ "macos", "python", "tkinter" ]
stackoverflow_0073056296_macos_python_tkinter.txt
Q: Deploy Django CMS on IIS Server After learning from this tutorial, I can now run Django CMS on my laptop using a virtual environment. But I want to deploy this CMS to an IIS server. I also installed pip install wfastcgi But when I try to set DJANGO_SETTINGS_MODULE in the IIS CGI settings, I find that I still don't have a .settings file yet. Regarding the settings, below are the only files which I have so far. So, my question is: how can I deploy Django CMS on IIS Server? Thanks. A: I tried the tutorial. I found the problem you mentioned. Actually, the settings.py file in your screenshot is the ".settings" file you are missing. And you want to know how to deploy Django CMS on IIS Server. You can refer to this tutorial for the steps. Hope it is useful for you.
Deploy Django CMS on IIS Server
After learning from this tutorial, I can now run Django CMS on my laptop using a virtual environment. But I want to deploy this CMS to an IIS server. I also installed pip install wfastcgi But when I try to set DJANGO_SETTINGS_MODULE in the IIS CGI settings, I find that I still don't have a .settings file yet. Regarding the settings, below are the only files which I have so far. So, my question is: how can I deploy Django CMS on IIS Server? Thanks.
[ "I tried the tutorial. I found the problem you mentioned. Actually, the settings.py file in your screenshot is the \".settings\" file you are missing.\nAnd you want to know how to deploy Django CMS on IIS Server. You can refer to this tutorial for the steps. Hope it is useful for you.\n" ]
[ 0 ]
[]
[]
[ "cgi", "django_cms", "iis", "python", "window" ]
stackoverflow_0074601681_cgi_django_cms_iis_python_window.txt
Q: How to include the start date and end date while filtering in django In views.py: if request.method == "POST": from_date = request.POST.get("from_date") f_date = datetime.datetime.strptime(from_date,'%Y-%m-%d') print(f_date) to_date = request.POST.get("to_date") t_date = datetime.datetime.strptime(to_date, '%Y-%m-%d') print(t_date) check_box_status = request.POST.get("new_records", None) print(check_box_status) drop_down_status = request.POST.get("field") print(drop_down_status) if check_box_status is None: get_records_by_date = Scrapper.objects.filter(start_time__range=(f_date, t_date)) The following code get_records_by_date = Scrapper.objects.filter(start_time__range=(f_date, t_date)) is unable to include the t_date. Is there any solution to include the t_date? A: Breaks.objects.filter(date__range=["2011-01-01", "2011-01-31"]) Or if you are just trying to filter month wise: Breaks.objects.filter(date__year='2011', date__month='01') Please reply to this message ,If it doesn't work.
How to include the start date and end date while filtering in django
In views.py: if request.method == "POST": from_date = request.POST.get("from_date") f_date = datetime.datetime.strptime(from_date,'%Y-%m-%d') print(f_date) to_date = request.POST.get("to_date") t_date = datetime.datetime.strptime(to_date, '%Y-%m-%d') print(t_date) check_box_status = request.POST.get("new_records", None) print(check_box_status) drop_down_status = request.POST.get("field") print(drop_down_status) if check_box_status is None: get_records_by_date = Scrapper.objects.filter(start_time__range=(f_date, t_date)) The following code get_records_by_date = Scrapper.objects.filter(start_time__range=(f_date, t_date)) is unable to include the t_date. Is there any solution to include the t_date?
[ "Breaks.objects.filter(date__range=[\"2011-01-01\", \"2011-01-31\"])\n\nOr if you are just trying to filter month wise:\nBreaks.objects.filter(date__year='2011', \n date__month='01')\n\nPlease reply to this message ,If it doesn't work.\n" ]
[ 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074609060_django_python.txt
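The answer above filters by a range but does not directly address making the end date inclusive; a hedged sketch of one common fix, assuming start_time is a DateTimeField (the timedelta trick is an assumption about what "include the t_date" means; the model and field names follow the question):

import datetime

f_date = datetime.datetime.strptime(from_date, '%Y-%m-%d')
# move the upper bound to the start of the next day so the whole end date is covered
t_date = datetime.datetime.strptime(to_date, '%Y-%m-%d') + datetime.timedelta(days=1)
get_records_by_date = Scrapper.objects.filter(start_time__gte=f_date, start_time__lt=t_date)
# an equivalent alternative is the __date lookup (start_time__date__range), which compares only the date part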
Q: NameError: name 'get_transforms' is not defined This code was running without any problems before I updated my python and fastai: from fastai import * from fastai.vision import * import torch ... tfms = get_transforms(do_flip=True,flip_vert=True,max_rotate=360,max_warp=0,max_zoom=1.1,max_lighting=0.1,p_lighting=0.5) After updating the fastai to 2.1.2 and python to 3.8.5, I'm getting this error: NameError: name 'get_transforms' is not defined. How can I fix it? A: For Data Augmentation methods in FastAI 2 you have to use other methods names, for example: aug_transforms A: I got the same question, fastai 1.0.61 could probably solve the problem. A: Enter this code at the very beginning and download it : !pip install "torch==1.4" "torchvision==0.5.0"
NameError: name 'get_transforms' is not defined
This code was running without any problems before I updated my python and fastai: from fastai import * from fastai.vision import * import torch ... tfms = get_transforms(do_flip=True,flip_vert=True,max_rotate=360,max_warp=0,max_zoom=1.1,max_lighting=0.1,p_lighting=0.5) After updating the fastai to 2.1.2 and python to 3.8.5, I'm getting this error: NameError: name 'get_transforms' is not defined. How can I fix it?
[ "For Data Augmentation methods in FastAI 2 you have to use other methods names, for example:\naug_transforms\n", "I got the same question, fastai 1.0.61 could probably solve the problem.\n", "Enter this code at the very beginning and download it :\n!pip install \"torch==1.4\" \"torchvision==0.5.0\"\n\n" ]
[ 4, 0, 0 ]
[]
[]
[ "fast_ai", "python", "python_3.x" ]
stackoverflow_0064643190_fast_ai_python_python_3.x.txt
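A hedged sketch of what the aug_transforms replacement mentioned in the first answer might look like; the parameter names below exist in fastai 2.x, but the mapping from the old get_transforms call is an assumption, not a guaranteed one-to-one equivalent:

from fastai.vision.all import aug_transforms

# rough fastai 2.x counterpart of the old get_transforms(...) call from the question
tfms = aug_transforms(do_flip=True, flip_vert=True, max_rotate=360,
                      max_warp=0, max_zoom=1.1, max_lighting=0.1, p_lighting=0.5)
# in fastai 2.x these are normally passed as batch_tfms to a DataBlock or dataloaders factory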
Q: How to iterate over dictionaries in a list and extract key values to a separate list I'm trying to match key value from different dictionaries in a list and make them as individual list.Below is the example format originallist=[ {"A":"Autonomous","C":"Combined","D":"Done"}, {"B":"Bars","A":"Aircraft"}, {"C":"Calculative"} ] #Note: The dictionaries present in the original list may vary in number #I was trying to acheive the below format A=["Autonomous","Aircraft"] B=["Bars"] C=["Calculative","Combined"] D=["Done"] Thanks in advance for your help A: The best option would be to use a defaultdict. from collections import defaultdict out = defaultdict(list) #data is the list in question for rec in data: for key,value in rec.items(): out[key].append(value) A defaultdict returns a default value in case the key does not exist. dict.items is a method that returns and iterator of the key value pairs. You can do it faster using pandas, but it would be overkill unless you have a huge dataset.
How to iterate over dictionaries in a list and extract key values to a separate list
I'm trying to match key values from different dictionaries in a list and make them into individual lists. Below is the example format originallist=[ {"A":"Autonomous","C":"Combined","D":"Done"}, {"B":"Bars","A":"Aircraft"}, {"C":"Calculative"} ] #Note: The dictionaries present in the original list may vary in number #I was trying to achieve the below format A=["Autonomous","Aircraft"] B=["Bars"] C=["Calculative","Combined"] D=["Done"] Thanks in advance for your help
[ "The best option would be to use a defaultdict.\nfrom collections import defaultdict\n\nout = defaultdict(list)\n\n#data is the list in question\n\nfor rec in data:\n for key,value in rec.items():\n out[key].append(value)\n\n\nA defaultdict returns a default value in case the key does not exist. dict.items is a method that returns and iterator of the key value pairs.\nYou can do it faster using pandas, but it would be overkill unless you have a huge dataset.\n" ]
[ 1 ]
[]
[]
[ "dictionary", "list", "python" ]
stackoverflow_0074610186_dictionary_list_python.txt
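A self-contained sketch of the defaultdict answer run against the example list from the question (the printed result is what the grouping produces; splitting it back into named variables A, B, C, D is left out):

from collections import defaultdict

originallist = [
    {"A": "Autonomous", "C": "Combined", "D": "Done"},
    {"B": "Bars", "A": "Aircraft"},
    {"C": "Calculative"},
]

out = defaultdict(list)
for record in originallist:
    for key, value in record.items():
        out[key].append(value)

print(dict(out))
# {'A': ['Autonomous', 'Aircraft'], 'C': ['Combined', 'Calculative'], 'D': ['Done'], 'B': ['Bars']}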
Q: Python - Data Attributes vs Class Attributes and Instance Attributes - When to use Data Attributes? I am learning Python and have started a chapter on "classes" and also class/instance attributes. The chapter starts off with a very basic example of creating an empty class class Contact: pass x=Contact() So an empty class is created and an instance of the class is created. Then it also throws in the following line of code x.name='Mr.Roger' So this threw me for a loop as the class definition is totally empty with no variables. Similarly the object is created with no variables. Its explained that apparently this is a "data attribute". I tried to google this and most documentation speaks to class/instance attributes - Though I was able to find reference to data attributes here: https://docs.python.org/3/tutorial/classes.html#instance-objects In my very basic mind - What I am seeing happening is that an empty object is instantiated. Then seemingly new variables can then be created and attached to this object (in this case x.name). I am assuming that we can create any number of attributes in this manner so we could even do x.firstname='Roger' x.middlename='Sam' x.lastname='Jacobs' etc. Since there are already class and instance attributes - I am confused why one would do this and for what situations or use-cases? Is this not a recommended way of creating attributes or is this frowned upon? If I create a second object and then attach other attributes to it - How can I find all the attributes attached to this object or any other object that is implemented in a similar way? A: Python is a very dynamic language. Classes acts like molds, they can create instance according to a specific shape, but unlike other languages where shapes are fixed, in Python you can (nearly) always modify their shape. I never heard of "data attribute" in this context, so I'm not surprised that you did find nothing to explain this behavior. Instead, I recommend you the Python data model documentation. Under "Class instances" : [...] A class instance has a namespace implemented as a dictionary which is the first place in which attribute references are searched. When an attribute is not found there, and the instance’s class has an attribute by that name, the search continues with the class attributes. [...] Special attributes: __dict__ is the attribute dictionary; __class__ is the instance’s class. Python looks simple on the surface level, but what happens when you do a.my_value is rather complex. For the simple cases, my_value is an instance variable, which usually is defined during the class declaration, like so : class Something: def __init__(self, parameter): self.my_value = parameter # storing the parameter in an instance variable (self) a = Something(1) b = Something(2) # instance variables are not shared (by default) print(a.my_value) # 1 print(b.my_value) # 2 a.my_value = 10 b.my_value = 20 print(a.my_value) # 10 print(b.my_value) # 20 But it would have worked without the __init__: class Something: pass # nothing special a = Something() a.my_value = 1 # we have to set it ourselves, because there is no more __init__ b = Something() b.my_value = 2 # same # and we get the same results as before : print(a.my_value) # 1 print(b.my_value) # 2 a.my_value = 10 b.my_value = 20 print(a.my_value) # 10 print(b.my_value) # 20 Because each instance uses a dictionary to store its attributes (methods and fields), and you can edit this dictionary, then you can edit the fields of any object at any moment. 
This is both very handy sometimes, and very annoying other times. Example of the instance's __dict__ attribute : class Something: pass # nothing special a = Something() print(a.__dict__) # {} a.my_value = 1 print(a.__dict__) # {'my_value': 1} a.my_value = 10 print(a.__dict__) # {'my_value': 10} Because it did not existed before, it got added to the __dict__. Then it just got modified. And if we create another Something: b = Something() print(a.__dict__) # {'my_value': 10} print(b.__dict__) # {} They were created with the same mold (the Something class) but one got modified afterwards. The usual way to set attributes to instances is with the __init__ method : class Something: def __init__(self, param): print(self.__dict__) # {} self.my_value = param print(self.__dict__) # {'my_value': 1} a = Something(1) print(a.__dict__) # {'my_value': 1} It does exactly what we did before : add a new entry in the instance's __dict__. In that way, __init__ is not much more than a convention of where to put all your fields declarations, but you can do without. It comes from the face that everything in Python is a dynamic object, that you can edit anytime. For example, that's the way modules work too : import sys this_module = sys.modules[__name__] print(this_module.__dict__) # {... a bunch of things ...} MODULE_VAR = 4 print(this_module.__dict__) # {... a bunch of things ..., 'MODULE_VAR': 4} This is a core feature of Python, its dynamic nature sometime makes things easy. For example, it enables duck typing, monkey patching, instrospection, ... But in a large codebases, without coding rules, you can quickly get a mess of undeclared instances everywhere. Nowadays, we try to write less clever, more reliable code, so adding new attributes to instances outside of the __init__ is indeed frowned upon.
Python - Data Attributes vs Class Attributes and Instance Attributes - When to use Data Attributes?
I am learning Python and have started a chapter on "classes" and also class/instance attributes. The chapter starts off with a very basic example of creating an empty class class Contact: pass x=Contact() So an empty class is created and an instance of the class is created. Then it also throws in the following line of code x.name='Mr.Roger' So this threw me for a loop as the class definition is totally empty with no variables. Similarly the object is created with no variables. Its explained that apparently this is a "data attribute". I tried to google this and most documentation speaks to class/instance attributes - Though I was able to find reference to data attributes here: https://docs.python.org/3/tutorial/classes.html#instance-objects In my very basic mind - What I am seeing happening is that an empty object is instantiated. Then seemingly new variables can then be created and attached to this object (in this case x.name). I am assuming that we can create any number of attributes in this manner so we could even do x.firstname='Roger' x.middlename='Sam' x.lastname='Jacobs' etc. Since there are already class and instance attributes - I am confused why one would do this and for what situations or use-cases? Is this not a recommended way of creating attributes or is this frowned upon? If I create a second object and then attach other attributes to it - How can I find all the attributes attached to this object or any other object that is implemented in a similar way?
[ "Python is a very dynamic language. Classes acts like molds, they can create instance according to a specific shape, but unlike other languages where shapes are fixed, in Python you can (nearly) always modify their shape.\nI never heard of \"data attribute\" in this context, so I'm not surprised that you did find nothing to explain this behavior.\nInstead, I recommend you the Python data model documentation. Under \"Class instances\" :\n\n[...] A class instance has a namespace implemented as a dictionary which is the first place in which attribute references are searched. When an attribute is not found there, and the instance’s class has an attribute by that name, the search continues with the class attributes.\n[...]\nSpecial attributes: __dict__ is the attribute dictionary; __class__ is the instance’s class.\n\nPython looks simple on the surface level, but what happens when you do a.my_value is rather complex. For the simple cases, my_value is an instance variable, which usually is defined during the class declaration, like so :\nclass Something:\n def __init__(self, parameter):\n self.my_value = parameter # storing the parameter in an instance variable (self)\n\na = Something(1)\nb = Something(2)\n\n# instance variables are not shared (by default)\nprint(a.my_value) # 1\nprint(b.my_value) # 2\na.my_value = 10\nb.my_value = 20\nprint(a.my_value) # 10\nprint(b.my_value) # 20\n\nBut it would have worked without the __init__:\nclass Something:\n pass # nothing special\n\na = Something()\na.my_value = 1 # we have to set it ourselves, because there is no more __init__\nb = Something()\nb.my_value = 2 # same\n\n# and we get the same results as before :\nprint(a.my_value) # 1\nprint(b.my_value) # 2\na.my_value = 10\nb.my_value = 20\nprint(a.my_value) # 10\nprint(b.my_value) # 20\n\nBecause each instance uses a dictionary to store its attributes (methods and fields), and you can edit this dictionary, then you can edit the fields of any object at any moment. This is both very handy sometimes, and very annoying other times.\nExample of the instance's __dict__ attribute :\nclass Something:\n pass # nothing special\n\na = Something()\nprint(a.__dict__) # {}\na.my_value = 1\nprint(a.__dict__) # {'my_value': 1}\na.my_value = 10\nprint(a.__dict__) # {'my_value': 10}\n\nBecause it did not existed before, it got added to the __dict__. Then it just got modified.\nAnd if we create another Something:\nb = Something()\nprint(a.__dict__) # {'my_value': 10}\nprint(b.__dict__) # {}\n\nThey were created with the same mold (the Something class) but one got modified afterwards.\nThe usual way to set attributes to instances is with the __init__ method :\nclass Something:\n def __init__(self, param):\n print(self.__dict__) # {}\n self.my_value = param\n print(self.__dict__) # {'my_value': 1}\n\na = Something(1)\nprint(a.__dict__) # {'my_value': 1}\n\nIt does exactly what we did before : add a new entry in the instance's __dict__. In that way, __init__ is not much more than a convention of where to put all your fields declarations, but you can do without.\nIt comes from the face that everything in Python is a dynamic object, that you can edit anytime. For example, that's the way modules work too :\nimport sys\nthis_module = sys.modules[__name__]\n\nprint(this_module.__dict__) # {... a bunch of things ...}\n\nMODULE_VAR = 4\n\nprint(this_module.__dict__) # {... a bunch of things ..., 'MODULE_VAR': 4}\n\nThis is a core feature of Python, its dynamic nature sometime makes things easy. 
For example, it enables duck typing, monkey patching, introspection, ... But in large codebases, without coding rules, you can quickly get a mess of undeclared instances everywhere. Nowadays, we try to write less clever, more reliable code, so adding new attributes to instances outside of the __init__ is indeed frowned upon.\n" ]
[ 1 ]
[]
[]
[ "attributes", "class", "python" ]
stackoverflow_0074603434_attributes_class_python.txt
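A short sketch for the last part of the question (how to list the attributes attached to an instance); vars, __dict__ and dir are standard Python, and the Contact example mirrors the question:

class Contact:
    pass

x = Contact()
x.firstname = 'Roger'
x.lastname = 'Jacobs'

print(vars(x))     # {'firstname': 'Roger', 'lastname': 'Jacobs'} - only the attributes set on this instance
print(x.__dict__)  # the same mapping, accessed directly
print(dir(x))      # instance attributes plus everything inherited from the class and from object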
Q: Set variable storing text as a .get_group category Is it possible to use a variable storing text or a list as a category for .get_group. Something like this: import pandas as pd df = pd.read_excel("HondaSales.xlsx") brand = ["honda", "acura"] year = "2020" brands1 = df.groupby(["Brand","Year"]) honda = brands1.get_group([brand, year]) sales = honda["UNI_VEH"] salessum, = sales.sum() print('Sales of', brand, 'in', year,':', salessum) I´ve tried it as such, but I get this error: ValueError: must supply a tuple to get_group with multiple grouping keys perhaps something is wrong with parentheses
Set variable storing text as a .get_group category
Is it possible to use a variable storing text or a list as a category for .get_group? Something like this: import pandas as pd df = pd.read_excel("HondaSales.xlsx") brand = ["honda", "acura"] year = "2020" brands1 = df.groupby(["Brand","Year"]) honda = brands1.get_group([brand, year]) sales = honda["UNI_VEH"] salessum, = sales.sum() print('Sales of', brand, 'in', year,':', salessum) I've tried it as such, but I get this error: ValueError: must supply a tuple to get_group with multiple grouping keys Perhaps something is wrong with the parentheses.
[]
[]
[ "Here is the error:\nhonda = brands1.get_group([brand, year]), please share more info, because it suits you.\n" ]
[ -1 ]
[ "pandas", "python" ]
stackoverflow_0074609849_pandas_python.txt
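The question above has no accepted answer; a hedged sketch of how this ValueError is usually resolved - get_group expects a tuple when grouping by several columns, so a list of brands has to be looped over (or the frame filtered with isin). Column names and values are taken from the question; the Excel file itself is assumed to exist:

import pandas as pd

df = pd.read_excel("HondaSales.xlsx")
brands = ["honda", "acura"]
year = "2020"

grouped = df.groupby(["Brand", "Year"])
for brand in brands:
    group = grouped.get_group((brand, year))  # a tuple, not a list, for multiple grouping keys
    print('Sales of', brand, 'in', year, ':', group["UNI_VEH"].sum())

# alternative without get_group: plain boolean filtering
subset = df[df["Brand"].isin(brands) & (df["Year"] == year)]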
Q: Why am I getting {"detail":[{"loc":["path","id"],"msg":"field required","type":"value_error.missing"}]} if I made query with params? This is the endpoint that is not working: @router.get( "/{question_id}", tags=["questions"], status_code=status.HTTP_200_OK, response_model=Question, dependencies=[Depends(get_db)], ) def get_question(id: int = Path(..., gt=0)): return get_question_service(id) This is what the server shows when I run the query from the interactive FastAPI docs: INFO: 127.0.0.1:45806 - "GET /api/v1/questions/%7Bquestion_id%7D HTTP/1.1" 422 Unprocessable I don't know why it is sending {question_id} here instead of the number. Also when I run a query from curl, this is what the server shows: INFO: 127.0.0.1:59104 - "GET /api/v1/questions/21 HTTP/1.1" 422 Unprocessable Entity It makes no sense since I'm sending the only required param: (question_id) The other endpoint is working fine: @router.get( "/", tags=["questions"], status_code=status.HTTP_200_OK, response_model=List[Question], dependencies=[Depends(get_db)], ) def get_questions(): return get_questions_service() A: There is a mismatch between the path parameter in the path string and the function argument. Rename the function argument to question_id @router.get( "/{question_id}", tags=["questions"], status_code=status.HTTP_200_OK, response_model=Question, dependencies=[Depends(get_db)], ) def get_question(question_id: int = Path(..., gt=0)): return get_question_service(question_id) or the path parameter to id: @router.get( "/{id}", tags=["questions"], status_code=status.HTTP_200_OK, response_model=Question, dependencies=[Depends(get_db)], ) def get_question(id: int = Path(..., gt=0)): return get_question_service(id) Btw, ... in Path can be omitted. id: int = Path(gt=0) is equivalent to id: int = Path(gt=0)
Why am I getting {"detail":[{"loc":["path","id"],"msg":"field required","type":"value_error.missing"}]} if I made query with params?
This is the endpoint that is not working: @router.get( "/{question_id}", tags=["questions"], status_code=status.HTTP_200_OK, response_model=Question, dependencies=[Depends(get_db)], ) def get_question(id: int = Path(..., gt=0)): return get_question_service(id) This is what the server shows when I run the query from the interactive FastAPI docs: INFO: 127.0.0.1:45806 - "GET /api/v1/questions/%7Bquestion_id%7D HTTP/1.1" 422 Unprocessable I don't know why it is sending {question_id} here instead of the number. Also when I run a query from curl, this is what the server shows: INFO: 127.0.0.1:59104 - "GET /api/v1/questions/21 HTTP/1.1" 422 Unprocessable Entity It makes no sense since I'm sending the only required param: (question_id) The other endpoint is working fine: @router.get( "/", tags=["questions"], status_code=status.HTTP_200_OK, response_model=List[Question], dependencies=[Depends(get_db)], ) def get_questions(): return get_questions_service()
[ "There is a mismatch between the path parameter in the path string and the function argument. Rename the function argument to question_id\[email protected](\n \"/{question_id}\",\n tags=[\"questions\"],\n status_code=status.HTTP_200_OK,\n response_model=Question,\n dependencies=[Depends(get_db)],\n)\ndef get_question(question_id: int = Path(..., gt=0)):\n return get_question_service(question_id)\n\nor the path parameter to id:\[email protected](\n \"/{id}\",\n tags=[\"questions\"],\n status_code=status.HTTP_200_OK,\n response_model=Question,\n dependencies=[Depends(get_db)],\n)\ndef get_question(id: int = Path(..., gt=0)):\n return get_question_service(id)\n\n\nBtw, ... in Path can be omitted.\nid: int = Path(gt=0) is equivalent to id: int = Path(gt=0)\n" ]
[ 0 ]
[]
[]
[ "fastapi", "fastapi_crudrouter", "python" ]
stackoverflow_0074608894_fastapi_fastapi_crudrouter_python.txt
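A small end-to-end sketch of the corrected route, using FastAPI's test client to show the 422 disappearing once the argument name matches the path parameter (the simplified app and response body are illustrative, and the test client needs the HTTP client dependency FastAPI documents for testing):

from fastapi import FastAPI, Path
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/api/v1/questions/{question_id}")
def get_question(question_id: int = Path(gt=0)):
    return {"id": question_id}

client = TestClient(app)
print(client.get("/api/v1/questions/21").status_code)  # 200
print(client.get("/api/v1/questions/21").json())       # {'id': 21}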
Q: "cannot connect to X server" while trying to connect to PyBullet in WSL I am currently using a windows machine, and am busy with some Genetic Algorithm stuff that relies on using a PyBullet virtual environment to test out the locomotive capacity of my "robots". The project I'm working on required me to use multi-threading, so my lecturer recommended that I install WSL to do so because apparently it does not work on Windows. I installed WSL, and created a python virtual environment to work in. Everything was perfectly fine until I tried to connect to a PyBullet server, which produced the following output: pybullet build time: Nov 27 2022 13:20:33 startThreads creating 1 threads. starting thread 0 started thread 0 argc=2 argv[0] = --unused argv[1] = --start_demo_name=Physics Server ExampleBrowserThreadFunc started X11 functions dynamically loaded using dlopen/dlsym OK! cannot connect to X server What can I do to fix this? Please ask if more information on my setup is needed :) A: It looks like your script wants to open some sort of graphical user interface. You can try to install an X11 server on windows and configure this in WSL. That way you can open a graphical window in WSL. You should be able to find some tutorials online, but it can be a bit tedious. This could help you to get started, but there are multiple ways and tools. Later versions of WSL should support graphical interfaces out-of-the-box. I think you'll need Windows 11. (However, I have never tried it so far) This might help. A: So I managed to fix the issue. I am not sure exactly why WSL was failing to launch the GUI application, I first thought it had something to do with X11, but I have the lastest version of WSL, which is supposed to have native GUI support. Next, I thought it had something to do with the python packages, because I had a working version of the code in a Virtual environment on my normal windows 11, I just didn't use it because it was unable to do multiprocessing. So to solve this, I copied the libs folder from the venv in my windows to the venv on my WSL. For some reason, it solved the problem. My initial reason for not doing this was because it didn't say I was missing any dependencies. Thanks for those who tried to help though.
"cannot connect to X server" while trying to connect to PyBullet in WSL
I am currently using a windows machine, and am busy with some Genetic Algorithm stuff that relies on using a PyBullet virtual environment to test out the locomotive capacity of my "robots". The project I'm working on required me to use multi-threading, so my lecturer recommended that I install WSL to do so because apparently it does not work on Windows. I installed WSL, and created a python virtual environment to work in. Everything was perfectly fine until I tried to connect to a PyBullet server, which produced the following output: pybullet build time: Nov 27 2022 13:20:33 startThreads creating 1 threads. starting thread 0 started thread 0 argc=2 argv[0] = --unused argv[1] = --start_demo_name=Physics Server ExampleBrowserThreadFunc started X11 functions dynamically loaded using dlopen/dlsym OK! cannot connect to X server What can I do to fix this? Please ask if more information on my setup is needed :)
[ "It looks like your script wants to open some sort of graphical user interface.\nYou can try to install an X11 server on windows and configure this in WSL. That way you can open a graphical window in WSL. You should be able to find some tutorials online, but it can be a bit tedious. This could help you to get started, but there are multiple ways and tools.\nLater versions of WSL should support graphical interfaces out-of-the-box. I think you'll need Windows 11. (However, I have never tried it so far)\nThis might help.\n", "So I managed to fix the issue. I am not sure exactly why WSL was failing to launch the GUI application, I first thought it had something to do with X11, but I have the lastest version of WSL, which is supposed to have native GUI support. Next, I thought it had something to do with the python packages, because I had a working version of the code in a Virtual environment on my normal windows 11, I just didn't use it because it was unable to do multiprocessing. So to solve this, I copied the libs folder from the venv in my windows to the venv on my WSL. For some reason, it solved the problem. My initial reason for not doing this was because it didn't say I was missing any dependencies.\nThanks for those who tried to help though.\n" ]
[ 0, 0 ]
[]
[]
[ "pybullet", "python", "windows_subsystem_for_linux" ]
stackoverflow_0074600817_pybullet_python_windows_subsystem_for_linux.txt
Q: Python: Traceback codecs.charmap_decode(input,self.errors,decoding_table)[0] Following is sample code, aim is just to merges text files from give folder and it's sub folder. i am getting Traceback occasionally so not sure where to look. also need some help to enhance the code to prevent blank line being merge & to display no lines in merged/master file. Probably it's good idea to before merging file, some cleanup should performed or just to ignores blank line during merging process. Text file in folder is not more then 1000 lines but aggregate master file could cross 10000+ lines very easily. import os root = 'C:\\Dropbox\\ans7i\\' files = [(path,f) for path,_,file_list in os.walk(root) for f in file_list] out_file = open('C:\\Dropbox\\Python\\master.txt','w') for path,f_name in files: in_file = open('%s/%s'%(path,f_name), 'r') # write out root/path/to/file (space) file_contents for line in in_file: out_file.write('%s/%s %s'%(path,f_name,line)) in_file.close() # enter new line after each file out_file.write('\n') with open('master.txt', 'r') as f: lines = f.readlines() with open('master.txt', 'w') as f: f.write("".join(L for L in lines if L.strip())) Traceback (most recent call last): File "C:\Dropbox\Python\master.py", line 9, in <module> for line in in_file: File "C:\PYTHON32\LIB\encodings\cp1252.py", line 23, in decode return codecs.charmap_decode(input,self.errors,decoding_table)[0] UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 972: character maps to <undefined> A: The error is thrown because Python 3 opens your files with a default encoding that doesn't match the contents. If all you are doing is copying file contents, you'd be better off using the shutil.copyfileobj() function together with opening the files in binary mode. That way you avoid encoding issues altogether (as long as all your source files are the same encoding of course, so you don't end up with a target file with mixed encodings): import shutil import os.path with open('C:\\Dropbox\\Python\\master.txt','wb') as output: for path, f_name in files: with open(os.path.join(path, f_name), 'rb') as input: shutil.copyfileobj(input, output) output.write(b'\n') # insert extra newline between files I've cleaned up the code a little to use context managers (so your files get closed automatically when done) and to use os.path to create the full path for your files. If you do need to process your input line by line you'll need to tell Python what encoding to expect, so it can decode the file contents to python string objects: open(path, mode, encoding='UTF8') Note that this requires you to know up front what encoding the files use. Read up on the Python Unicode HOWTO if you have further questions about python 3, files and encodings. A: I faced the similar issue while removing the file using os module remove function. The required changes i performed is: file = open(filename) to file = open(filename, encoding="utf8") Add an encoding=“utf-8” UTF-8 is one of the most commonly used encodings, and Python often defaults to using it. UTF stands for “Unicode Transformation Format”, and the '8' means that 8-bit values are used in the encoding. ... UTF-8 uses the following rules: If the code point is < 128, it's represented by the corresponding byte value.
Python: Traceback codecs.charmap_decode(input,self.errors,decoding_table)[0]
The following is sample code; the aim is just to merge text files from a given folder and its subfolders. I am getting a Traceback occasionally, so I am not sure where to look. I also need some help to enhance the code to prevent blank lines from being merged and to display the number of lines in the merged/master file. It is probably a good idea to perform some cleanup before merging a file, or just to ignore blank lines during the merging process. A text file in a folder is not more than 1000 lines, but the aggregate master file could cross 10000+ lines very easily. import os root = 'C:\\Dropbox\\ans7i\\' files = [(path,f) for path,_,file_list in os.walk(root) for f in file_list] out_file = open('C:\\Dropbox\\Python\\master.txt','w') for path,f_name in files: in_file = open('%s/%s'%(path,f_name), 'r') # write out root/path/to/file (space) file_contents for line in in_file: out_file.write('%s/%s %s'%(path,f_name,line)) in_file.close() # enter new line after each file out_file.write('\n') with open('master.txt', 'r') as f: lines = f.readlines() with open('master.txt', 'w') as f: f.write("".join(L for L in lines if L.strip())) Traceback (most recent call last): File "C:\Dropbox\Python\master.py", line 9, in <module> for line in in_file: File "C:\PYTHON32\LIB\encodings\cp1252.py", line 23, in decode return codecs.charmap_decode(input,self.errors,decoding_table)[0] UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 972: character maps to <undefined>
[ "The error is thrown because Python 3 opens your files with a default encoding that doesn't match the contents.\nIf all you are doing is copying file contents, you'd be better off using the shutil.copyfileobj() function together with opening the files in binary mode. That way you avoid encoding issues altogether (as long as all your source files are the same encoding of course, so you don't end up with a target file with mixed encodings):\nimport shutil\nimport os.path\n\nwith open('C:\\\\Dropbox\\\\Python\\\\master.txt','wb') as output:\n for path, f_name in files:\n with open(os.path.join(path, f_name), 'rb') as input:\n shutil.copyfileobj(input, output)\n output.write(b'\\n') # insert extra newline between files\n\nI've cleaned up the code a little to use context managers (so your files get closed automatically when done) and to use os.path to create the full path for your files.\nIf you do need to process your input line by line you'll need to tell Python what encoding to expect, so it can decode the file contents to python string objects:\nopen(path, mode, encoding='UTF8')\n\nNote that this requires you to know up front what encoding the files use.\nRead up on the Python Unicode HOWTO if you have further questions about python 3, files and encodings.\n", "I faced the similar issue while removing the file using os module remove function.\nThe required changes i performed is:\nfile = open(filename)\n\nto\nfile = open(filename, encoding=\"utf8\")\n\nAdd an encoding=“utf-8”\nUTF-8 is one of the most commonly used encodings, and Python often defaults to using it. UTF stands for “Unicode Transformation Format”, and the '8' means that 8-bit values are used in the encoding. ... UTF-8 uses the following rules: If the code point is < 128, it's represented by the corresponding byte value.\n" ]
[ 18, 4 ]
[ "Handling import and decode error with file handling\n\nOpen file with full absolute path\n\n(source - absolute path for directory of file folder, getting all files inside file_folder)\nimport os\nfile_list = os.listdir(source)\nfor file in file_list:\n absolute_file_path = os.path.join(source,file) \n file = open(absolute_file_path)\n\n\nEncoding the file as we open\n\nfile = open(absolute_file_path, mode, encoding, errors=ignore)\n" ]
[ -1 ]
[ "file_io", "python", "python_3.x", "python_unicode", "traceback" ]
stackoverflow_0012213178_file_io_python_python_3.x_python_unicode_traceback.txt
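A hedged sketch combining the encoding advice above with the blank-line handling and line count the question asked about; it assumes the source files can be decoded as UTF-8 with undecodable bytes replaced, which may or may not match the real files:

import os

root = 'C:\\Dropbox\\ans7i\\'
out_path = 'C:\\Dropbox\\Python\\master.txt'
line_count = 0

with open(out_path, 'w', encoding='utf8') as out_file:
    for path, _, file_list in os.walk(root):
        for f_name in file_list:
            full = os.path.join(path, f_name)
            with open(full, 'r', encoding='utf8', errors='replace') as in_file:
                for line in in_file:
                    if line.strip():  # skip blank lines while merging
                        out_file.write('%s %s' % (full, line))
                        line_count += 1

print('lines written:', line_count)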
Q: Web scraping with Python - how to click on date of same value I'm trying to click on the button that contains the text "30 de novembro" but when I use my code it clicks on the "30 de outubro" button How to fix? Screenshot of HTML code Here's the code I'm using selecdia = navegador.find_element(by=By.LINK_TEXT, value='30') selecdia.click() sleep(1) A: Looks like the title attribute is the only way to uniquely identify the link you wish to click, therefore in Selenium 4 syntax: selecdia = navegador.find_element(By.CSS_SELECTOR, "a[title='30 de novembro']") or: selecdia = navegador.find_element(By.XPATH, "//a[@title='30 de novembro']")
Web scraping with Python - how to click on date of same value
I'm trying to click on the button that contains the text "30 de novembro", but when I use my code it clicks on the "30 de outubro" button. How can I fix this? Screenshot of HTML code Here's the code I'm using: selecdia = navegador.find_element(by=By.LINK_TEXT, value='30') selecdia.click() sleep(1)
[ "Looks like the title attribute is the only way to uniquely identify the link you wish to click, therefore in Selenium 4 syntax:\nselecdia = navegador.find_element(By.CSS_SELECTOR, \"a[title='30 de novembro']\")\n\nor:\nselecdia = navegador.find_element(By.XPATH, \"//a[@title='30 de novembro']\")\n\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "pandas", "python", "selenium", "web_scraping" ]
stackoverflow_0074607736_beautifulsoup_pandas_python_selenium_web_scraping.txt
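A sketch that combines the title-based selector from the answer with an explicit wait instead of the fixed sleep(1); navegador is assumed to be the Selenium 4 WebDriver instance from the question:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(navegador, 10)
selecdia = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "a[title='30 de novembro']")))
selecdia.click()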
Q: Odoo - Different XPath depending on a field I want Odoo to show an Xpath or not, depending on a condition I have this 3 fields lots_id = fields.Many2one('stock.production.lot', 'Lot/Serial Number') q_auth = fields.Boolean(related='lot_id.q_auth', string="Quality Auth.") needs_auth= fields.Boolean("Needs Auth") If needs_auth == False, i need to show this xpath <xpath expr="//field[@name='lot_id']" position="replace"> <field name="q_auth" invisible="1"/> <field name="lots_id" groups="stock.group_production_lot" domain="[('product_id','=?', product_id)]" context="{'product_id': product_id}"/> </xpath> but if needs_auth == True I need the Xpath to be like this <xpath expr="//field[@name='lot_id']" position="replace"> <field name="q_auth" invisible="1"/> <field name="lots_id" groups="stock.group_production_lot" domain="[('product_id','=?', product_id),('q_auth','!=',False)]" context="{'product_id': product_id}"/> </xpath> You can see that the only difference is in the domain. I don't know if this is possible to do it in XML, but in case is not possible, how can I do it with Python? Thanks! A: You can use onchange function and return lots_id field domain when the needs_auth field value change. Example: @api.onchange("needs_auth") def _update_lots_id_domain(self): domain = [('product_id', '=?', self.product_id.id)] if self.needs_auth: domain.append( ('q_auth', '!=', False) ) return {'domain': {'lots_id': domain}} A: How about using t-if / t-else: <xpath expr="//field[@name='lot_id']" position="replace"> <field name="q_auth" invisible="1"/> <t t-if="needs_auth"> <field name="lots_id" groups="stock.group_production_lot" domain="[('product_id','=?', product_id),('q_auth','!=',False)]" context="{'product_id': product_id}"/> </t> <t t-else=""> <field name="lots_id" groups="stock.group_production_lot" domain="[('product_id','=?', product_id)]" context="{'product_id': product_id}"/> </t> </xpath>
Odoo - Different XPath depending on a field
I want Odoo to show an Xpath or not, depending on a condition I have this 3 fields lots_id = fields.Many2one('stock.production.lot', 'Lot/Serial Number') q_auth = fields.Boolean(related='lot_id.q_auth', string="Quality Auth.") needs_auth= fields.Boolean("Needs Auth") If needs_auth == False, i need to show this xpath <xpath expr="//field[@name='lot_id']" position="replace"> <field name="q_auth" invisible="1"/> <field name="lots_id" groups="stock.group_production_lot" domain="[('product_id','=?', product_id)]" context="{'product_id': product_id}"/> </xpath> but if needs_auth == True I need the Xpath to be like this <xpath expr="//field[@name='lot_id']" position="replace"> <field name="q_auth" invisible="1"/> <field name="lots_id" groups="stock.group_production_lot" domain="[('product_id','=?', product_id),('q_auth','!=',False)]" context="{'product_id': product_id}"/> </xpath> You can see that the only difference is in the domain. I don't know if this is possible to do it in XML, but in case is not possible, how can I do it with Python? Thanks!
[ "You can use onchange function and return lots_id field domain when the needs_auth field value change.\nExample:\[email protected](\"needs_auth\")\ndef _update_lots_id_domain(self):\n domain = [('product_id', '=?', self.product_id.id)]\n if self.needs_auth:\n domain.append(\n ('q_auth', '!=', False)\n )\n return {'domain': {'lots_id': domain}}\n\n", "How about using t-if / t-else:\n<xpath expr=\"//field[@name='lot_id']\" position=\"replace\">\n <field name=\"q_auth\" invisible=\"1\"/>\n <t t-if=\"needs_auth\"> \n <field name=\"lots_id\" groups=\"stock.group_production_lot\"\n domain=\"[('product_id','=?', product_id),('q_auth','!=',False)]\"\n context=\"{'product_id': product_id}\"/>\n </t>\n <t t-else=\"\">\n <field name=\"lots_id\" groups=\"stock.group_production_lot\"\n domain=\"[('product_id','=?', product_id)]\"\n context=\"{'product_id': product_id}\"/>\n </t>\n</xpath>\n\n" ]
[ 0, 0 ]
[]
[]
[ "odoo", "odoo_8", "python", "xml" ]
stackoverflow_0074595716_odoo_odoo_8_python_xml.txt
Q: Python Dataframe Linear Regression every column being emptied? I'm working with Pandas & Numpy to create a Linear Regression model to predict the gross of movies. I'm able to successfully import the dataset and drop the columns I'm not using, and convert the ones I am into float64. This leads to some columns having NaN as values: import pandas as pd import numpy as np import sklearn df1 = pd.read_csv("sample_data/imdb_top_1000.csv",header = 0) moviedata=df1.drop(["Poster_Link","Released_Year","Overview"], axis = 1) moviedata['Runtime'] = moviedata['Runtime'].str.extract('(\d+)\s*min', expand=False) moviedata['Series_Title'] = pd.to_numeric(moviedata['Series_Title'], errors='coerce') moviedata['Certificate'] = pd.to_numeric(moviedata['Certificate'], errors='coerce') moviedata['Runtime'] = pd.to_numeric(moviedata['Runtime'], errors='coerce') moviedata['Genre'] = pd.to_numeric(moviedata['Genre'], errors='coerce') moviedata['IMDB_Rating'] = pd.to_numeric(moviedata['IMDB_Rating'], errors='coerce') moviedata['Meta_score'] = pd.to_numeric(moviedata['Meta_score'], errors='coerce') moviedata['Director'] = pd.to_numeric(moviedata['Director'], errors='coerce') moviedata['Star1'] = pd.to_numeric(moviedata['Star1'], errors='coerce') moviedata['Star2'] = pd.to_numeric(moviedata['Star2'], errors='coerce') moviedata['Star3'] = pd.to_numeric(moviedata['Star3'], errors='coerce') moviedata['Star4'] = pd.to_numeric(moviedata['Star4'], errors='coerce') moviedata['No_of_Votes'] = moviedata['No_of_Votes'].astype(float) moviedata['Gross'] = pd.to_numeric(moviedata['Gross'], errors='coerce') print('\nShape of data :',moviedata.shape) OUTPUT: Shape of data : (1000, 13) I used this line in a previous assignment to remove NaN values in a dataset: moviedata = moviedata[~(np.isnan(moviedata).any(axis = 1))] print('\nShape of data :',moviedata.shape) OUTPUT: Shape of data : (0, 13) However, when I use it this time, it removes every row value in the dataset, leaving me with 0 rows & 13 columns. Why is this happening, and how can I fix it? A link to the dataset: https://docs.google.com/spreadsheets/d/1zaMz2J7GVf24MtSEgoKn6uZOO0Uvldh4vzJqKw1aBTM/edit?usp=sharing A: You can use: errors ='ignore', Where both text and numbers are given. There are other ways of preprocessing, you just need to know what you need it for.
Python Dataframe Linear Regression every column being emptied?
I'm working with Pandas & Numpy to create a Linear Regression model to predict the gross of movies. I'm able to successfully import the dataset and drop the columns I'm not using, and convert the ones I am into float64. This leads to some columns having NaN as values: import pandas as pd import numpy as np import sklearn df1 = pd.read_csv("sample_data/imdb_top_1000.csv",header = 0) moviedata=df1.drop(["Poster_Link","Released_Year","Overview"], axis = 1) moviedata['Runtime'] = moviedata['Runtime'].str.extract('(\d+)\s*min', expand=False) moviedata['Series_Title'] = pd.to_numeric(moviedata['Series_Title'], errors='coerce') moviedata['Certificate'] = pd.to_numeric(moviedata['Certificate'], errors='coerce') moviedata['Runtime'] = pd.to_numeric(moviedata['Runtime'], errors='coerce') moviedata['Genre'] = pd.to_numeric(moviedata['Genre'], errors='coerce') moviedata['IMDB_Rating'] = pd.to_numeric(moviedata['IMDB_Rating'], errors='coerce') moviedata['Meta_score'] = pd.to_numeric(moviedata['Meta_score'], errors='coerce') moviedata['Director'] = pd.to_numeric(moviedata['Director'], errors='coerce') moviedata['Star1'] = pd.to_numeric(moviedata['Star1'], errors='coerce') moviedata['Star2'] = pd.to_numeric(moviedata['Star2'], errors='coerce') moviedata['Star3'] = pd.to_numeric(moviedata['Star3'], errors='coerce') moviedata['Star4'] = pd.to_numeric(moviedata['Star4'], errors='coerce') moviedata['No_of_Votes'] = moviedata['No_of_Votes'].astype(float) moviedata['Gross'] = pd.to_numeric(moviedata['Gross'], errors='coerce') print('\nShape of data :',moviedata.shape) OUTPUT: Shape of data : (1000, 13) I used this line in a previous assignment to remove NaN values in a dataset: moviedata = moviedata[~(np.isnan(moviedata).any(axis = 1))] print('\nShape of data :',moviedata.shape) OUTPUT: Shape of data : (0, 13) However, when I use it this time, it removes every row value in the dataset, leaving me with 0 rows & 13 columns. Why is this happening, and how can I fix it? A link to the dataset: https://docs.google.com/spreadsheets/d/1zaMz2J7GVf24MtSEgoKn6uZOO0Uvldh4vzJqKw1aBTM/edit?usp=sharing
[ "You can use: errors ='ignore', Where both text and numbers are given.\nThere are other ways of preprocessing, you just need to know what you need it for.\n" ]
[ 0 ]
[]
[]
[ "dataframe", "linear_regression", "numpy", "pandas", "python" ]
stackoverflow_0074602692_dataframe_linear_regression_numpy_pandas_python.txt
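The empty result in the question is expected: to_numeric(..., errors='coerce') turns every value in the text columns (titles, genres, names) into NaN, so dropping rows that contain any NaN removes them all. A hedged sketch of a fix that only coerces the genuinely numeric columns and drops rows on those alone (the choice of numeric columns and the comma-stripping for Gross are assumptions about the IMDB file):

import pandas as pd

df1 = pd.read_csv("sample_data/imdb_top_1000.csv", header=0)
moviedata = df1.drop(["Poster_Link", "Released_Year", "Overview"], axis=1)

# coerce only the columns that really hold numbers; leave the text columns as text
moviedata['Runtime'] = pd.to_numeric(
    moviedata['Runtime'].str.extract(r'(\d+)\s*min', expand=False), errors='coerce')
for col in ['IMDB_Rating', 'Meta_score', 'No_of_Votes', 'Gross']:
    moviedata[col] = pd.to_numeric(
        moviedata[col].astype(str).str.replace(',', ''), errors='coerce')

# drop rows only where the numeric model inputs are missing
moviedata = moviedata.dropna(subset=['Runtime', 'IMDB_Rating', 'Meta_score', 'No_of_Votes', 'Gross'])
print('Shape of data:', moviedata.shape)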
Q: Filtering a column with an empty array in Pyspark I have a DataFrame which contains a lot of repeated values. An aggregated, distinct count of it looks like below > df.groupby('fruits').count().sort(F.desc('count')).show() | fruits | count | | ----------- | ----------- | | [Apples] | 123 | | [] | 344 | | [Apples, plum]| 444 | My goal is to filter all rows where the value is either [Apples] or []. Suprisingly, the following works for an non-empty array but for empty it doesn't import pyspark.sql.types as T is_apples = F.udf(lambda arr: arr == ['Apples'], T.BooleanType()) df.filter(is_apples(df.fruits).count() # WORKS! shows 123 correctly. is_empty = F.udf(lambda arr: arr == [], T.BooleanType()) df.filter(is_empty(df.fruits).count() # Doesn't work! Should show 344 but shows zero. Any idea what I am doing wrong? A: It might be an array containing an empty string: is_empty = F.udf(lambda arr: arr == [''], T.BooleanType()) Or it might be an array of null: is_empty = F.udf(lambda arr: arr == [None], T.BooleanType()) To check them all at once you can use: is_empty = F.udf(lambda arr: arr in [[], [''], [None]], T.BooleanType()) But actually you don't need a UDF for this, e.g. you can do: df.filter("fruits = array() or fruits = array('') or fruits = array(null)") A: You can do it by checking the length if the array. is_empty = F.udf(lambda arr: len(arr) == 0, T.BooleanType()) df.filter(is_empty(df.fruits).count() A: If you don't want to use UDF, you can use F.size to get the size of the array. To filter empty array: df.filter(F.size(df.fruits) == 0) To filter non-empty array: df.filter(F.size(df.fruits) != 0)
Filtering a column with an empty array in Pyspark
I have a DataFrame which contains a lot of repeated values. An aggregated, distinct count of it looks like the following: > df.groupby('fruits').count().sort(F.desc('count')).show() | fruits | count | | ----------- | ----------- | | [Apples] | 123 | | [] | 344 | | [Apples, plum]| 444 | My goal is to filter all rows where the value is either [Apples] or []. Surprisingly, the following works for a non-empty array but for an empty one it doesn't: import pyspark.sql.types as T is_apples = F.udf(lambda arr: arr == ['Apples'], T.BooleanType()) df.filter(is_apples(df.fruits)).count() # WORKS! shows 123 correctly. is_empty = F.udf(lambda arr: arr == [], T.BooleanType()) df.filter(is_empty(df.fruits)).count() # Doesn't work! Should show 344 but shows zero. Any idea what I am doing wrong?
[ "It might be an array containing an empty string:\nis_empty = F.udf(lambda arr: arr == [''], T.BooleanType())\n\nOr it might be an array of null:\nis_empty = F.udf(lambda arr: arr == [None], T.BooleanType())\n\nTo check them all at once you can use:\nis_empty = F.udf(lambda arr: arr in [[], [''], [None]], T.BooleanType())\n\nBut actually you don't need a UDF for this, e.g. you can do:\ndf.filter(\"fruits = array() or fruits = array('') or fruits = array(null)\")\n\n", "You can do it by checking the length if the array.\nis_empty = F.udf(lambda arr: len(arr) == 0, T.BooleanType())\ndf.filter(is_empty(df.fruits).count()\n\n", "If you don't want to use UDF, you can use F.size to get the size of the array.\nTo filter empty array:\ndf.filter(F.size(df.fruits) == 0)\n\nTo filter non-empty array:\ndf.filter(F.size(df.fruits) != 0)\n\n" ]
[ 3, 2, 0 ]
[]
[]
[ "apache_spark", "apache_spark_sql", "pyspark", "python" ]
stackoverflow_0065662265_apache_spark_apache_spark_sql_pyspark_python.txt
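A small self-contained sketch of the F.size / array-literal approach from the answers, run on toy data that mirrors the aggregated table in the question (assumes a local SparkSession is available):

from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [(['Apples'],), ([],), (['Apples', 'plum'],)],
    'fruits array<string>')

# rows whose array is empty
df.filter(F.size('fruits') == 0).show()
# rows that are either [] or exactly ['Apples']
df.filter((F.size('fruits') == 0) | (F.col('fruits') == F.array(F.lit('Apples')))).show()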
Q: A weighted version of random.choice I needed to write a weighted version of random.choice (each element in the list has a different probability for being selected). This is what I came up with: def weightedChoice(choices): """Like random.choice, but each element can have a different chance of being selected. choices can be any iterable containing iterables with two items each. Technically, they can have more than two items, the rest will just be ignored. The first item is the thing being chosen, the second item is its weight. The weights can be any numeric values, what matters is the relative differences between them. """ space = {} current = 0 for choice, weight in choices: if weight > 0: space[current] = choice current += weight rand = random.uniform(0, current) for key in sorted(space.keys() + [current]): if rand < key: return choice choice = space[key] return None This function seems overly complex to me, and ugly. I'm hoping everyone here can offer some suggestions on improving it or alternate ways of doing this. Efficiency isn't as important to me as code cleanliness and readability. A: Since version 1.7.0, NumPy has a choice function that supports probability distributions. from numpy.random import choice draw = choice(list_of_candidates, number_of_items_to_pick, p=probability_distribution) Note that probability_distribution is a sequence in the same order of list_of_candidates. You can also use the keyword replace=False to change the behavior so that drawn items are not replaced. A: Since Python 3.6 there is a method choices from the random module. In [1]: import random In [2]: random.choices( ...: population=[['a','b'], ['b','a'], ['c','b']], ...: weights=[0.2, 0.2, 0.6], ...: k=10 ...: ) Out[2]: [['c', 'b'], ['c', 'b'], ['b', 'a'], ['c', 'b'], ['c', 'b'], ['b', 'a'], ['c', 'b'], ['b', 'a'], ['c', 'b'], ['c', 'b']] Note that random.choices will sample with replacement, per the docs: Return a k sized list of elements chosen from the population with replacement. Note for completeness of answer: When a sampling unit is drawn from a finite population and is returned to that population, after its characteristic(s) have been recorded, before the next unit is drawn, the sampling is said to be "with replacement". It basically means each element may be chosen more than once. If you need to sample without replacement, then as @ronan-paixão's brilliant answer states, you can use numpy.choice, whose replace argument controls such behaviour. A: def weighted_choice(choices): total = sum(w for c, w in choices) r = random.uniform(0, total) upto = 0 for c, w in choices: if upto + w >= r: return c upto += w assert False, "Shouldn't get here" A: Arrange the weights into a cumulative distribution. Use random.random() to pick a random float 0.0 <= x < total. Search the distribution using bisect.bisect as shown in the example at http://docs.python.org/dev/library/bisect.html#other-examples. from random import random from bisect import bisect def weighted_choice(choices): values, weights = zip(*choices) total = 0 cum_weights = [] for w in weights: total += w cum_weights.append(total) x = random() * total i = bisect(cum_weights, x) return values[i] >>> weighted_choice([("WHITE",90), ("RED",8), ("GREEN",2)]) 'WHITE' If you need to make more than one choice, split this into two functions, one to build the cumulative weights and another to bisect to a random point. A: If you don't mind using numpy, you can use numpy.random.choice. 
For example: import numpy items = [["item1", 0.2], ["item2", 0.3], ["item3", 0.45], ["item4", 0.05] elems = [i[0] for i in items] probs = [i[1] for i in items] trials = 1000 results = [0] * len(items) for i in range(trials): res = numpy.random.choice(items, p=probs) #This is where the item is selected! results[items.index(res)] += 1 results = [r / float(trials) for r in results] print "item\texpected\tactual" for i in range(len(probs)): print "%s\t%0.4f\t%0.4f" % (items[i], probs[i], results[i]) If you know how many selections you need to make in advance, you can do it without a loop like this: numpy.random.choice(items, trials, p=probs) A: As of Python v3.6, random.choices could be used to return a list of elements of specified size from the given population with optional weights. random.choices(population, weights=None, *, cum_weights=None, k=1) population : list containing unique observations. (If empty, raises IndexError) weights : More precisely relative weights required to make selections. cum_weights : cumulative weights required to make selections. k : size(len) of the list to be outputted. (Default len()=1) Few Caveats: 1) It makes use of weighted sampling with replacement so the drawn items would be later replaced. The values in the weights sequence in itself do not matter, but their relative ratio does. Unlike np.random.choice which can only take on probabilities as weights and also which must ensure summation of individual probabilities upto 1 criteria, there are no such regulations here. As long as they belong to numeric types (int/float/fraction except Decimal type) , these would still perform. >>> import random # weights being integers >>> random.choices(["white", "green", "red"], [12, 12, 4], k=10) ['green', 'red', 'green', 'white', 'white', 'white', 'green', 'white', 'red', 'white'] # weights being floats >>> random.choices(["white", "green", "red"], [.12, .12, .04], k=10) ['white', 'white', 'green', 'green', 'red', 'red', 'white', 'green', 'white', 'green'] # weights being fractions >>> random.choices(["white", "green", "red"], [12/100, 12/100, 4/100], k=10) ['green', 'green', 'white', 'red', 'green', 'red', 'white', 'green', 'green', 'green'] 2) If neither weights nor cum_weights are specified, selections are made with equal probability. If a weights sequence is supplied, it must be the same length as the population sequence. Specifying both weights and cum_weights raises a TypeError. >>> random.choices(["white", "green", "red"], k=10) ['white', 'white', 'green', 'red', 'red', 'red', 'white', 'white', 'white', 'green'] 3) cum_weights are typically a result of itertools.accumulate function which are really handy in such situations. From the documentation linked: Internally, the relative weights are converted to cumulative weights before making selections, so supplying the cumulative weights saves work. So, either supplying weights=[12, 12, 4] or cum_weights=[12, 24, 28] for our contrived case produces the same outcome and the latter seems to be more faster / efficient. A: Crude, but may be sufficient: import random weighted_choice = lambda s : random.choice(sum(([v]*wt for v,wt in s),[])) Does it work? # define choices and relative weights choices = [("WHITE",90), ("RED",8), ("GREEN",2)] # initialize tally dict tally = dict.fromkeys(choices, 0) # tally up 1000 weighted choices for i in xrange(1000): tally[weighted_choice(choices)] += 1 print tally.items() Prints: [('WHITE', 904), ('GREEN', 22), ('RED', 74)] Assumes that all weights are integers. 
They don't have to add up to 100, I just did that to make the test results easier to interpret. (If weights are floating point numbers, multiply them all by 10 repeatedly until all weights >= 1.) weights = [.6, .2, .001, .199] while any(w < 1.0 for w in weights): weights = [w*10 for w in weights] weights = map(int, weights) A: If you have a weighted dictionary instead of a list you can write this items = { "a": 10, "b": 5, "c": 1 } random.choice([k for k in items for dummy in range(items[k])]) Note that [k for k in items for dummy in range(items[k])] produces this list ['a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'c', 'b', 'b', 'b', 'b', 'b'] A: Here's is the version that is being included in the standard library for Python 3.6: import itertools as _itertools import bisect as _bisect class Random36(random.Random): "Show the code included in the Python 3.6 version of the Random class" def choices(self, population, weights=None, *, cum_weights=None, k=1): """Return a k sized list of population elements chosen with replacement. If the relative weights or cumulative weights are not specified, the selections are made with equal probability. """ random = self.random if cum_weights is None: if weights is None: _int = int total = len(population) return [population[_int(random() * total)] for i in range(k)] cum_weights = list(_itertools.accumulate(weights)) elif weights is not None: raise TypeError('Cannot specify both weights and cumulative weights') if len(cum_weights) != len(population): raise ValueError('The number of weights does not match the population') bisect = _bisect.bisect total = cum_weights[-1] return [population[bisect(cum_weights, random() * total)] for i in range(k)] Source: https://hg.python.org/cpython/file/tip/Lib/random.py#l340 A: A very basic and easy approach for a weighted choice is the following: np.random.choice(['A', 'B', 'C'], p=[0.3, 0.4, 0.3]) A: import numpy as np w=np.array([ 0.4, 0.8, 1.6, 0.8, 0.4]) np.random.choice(w, p=w/sum(w)) A: I'm probably too late to contribute anything useful, but here's a simple, short, and very efficient snippet: def choose_index(probabilies): cmf = probabilies[0] choice = random.random() for k in xrange(len(probabilies)): if choice <= cmf: return k else: cmf += probabilies[k+1] No need to sort your probabilities or create a vector with your cmf, and it terminates once it finds its choice. Memory: O(1), time: O(N), with average running time ~ N/2. If you have weights, simply add one line: def choose_index(weights): probabilities = weights / sum(weights) cmf = probabilies[0] choice = random.random() for k in xrange(len(probabilies)): if choice <= cmf: return k else: cmf += probabilies[k+1] A: If your list of weighted choices is relatively static, and you want frequent sampling, you can do one O(N) preprocessing step, and then do the selection in O(1), using the functions in this related answer. # run only when `choices` changes. preprocessed_data = prep(weight for _,weight in choices) # O(1) selection value = choices[sample(preprocessed_data)][0] A: If you happen to have Python 3, and are afraid of installing numpy or writing your own loops, you could do: import itertools, bisect, random def weighted_choice(choices): weights = list(zip(*choices))[1] return choices[bisect.bisect(list(itertools.accumulate(weights)), random.uniform(0, sum(weights)))][0] Because you can build anything out of a bag of plumbing adaptors! Although... I must admit that Ned's answer, while slightly longer, is easier to understand. 
A: I looked the pointed other thread and came up with this variation in my coding style, this returns the index of choice for purpose of tallying, but it is simple to return the string ( commented return alternative): import random import bisect try: range = xrange except: pass def weighted_choice(choices): total, cumulative = 0, [] for c,w in choices: total += w cumulative.append((total, c)) r = random.uniform(0, total) # return index return bisect.bisect(cumulative, (r,)) # return item string #return choices[bisect.bisect(cumulative, (r,))][0] # define choices and relative weights choices = [("WHITE",90), ("RED",8), ("GREEN",2)] tally = [0 for item in choices] n = 100000 # tally up n weighted choices for i in range(n): tally[weighted_choice(choices)] += 1 print([t/sum(tally)*100 for t in tally]) A: A general solution: import random def weighted_choice(choices, weights): total = sum(weights) treshold = random.uniform(0, total) for k, weight in enumerate(weights): total -= weight if total < treshold: return choices[k] A: Here is another version of weighted_choice that uses numpy. Pass in the weights vector and it will return an array of 0's containing a 1 indicating which bin was chosen. The code defaults to just making a single draw but you can pass in the number of draws to be made and the counts per bin drawn will be returned. If the weights vector does not sum to 1, it will be normalized so that it does. import numpy as np def weighted_choice(weights, n=1): if np.sum(weights)!=1: weights = weights/np.sum(weights) draws = np.random.random_sample(size=n) weights = np.cumsum(weights) weights = np.insert(weights,0,0.0) counts = np.histogram(draws, bins=weights) return(counts[0]) A: It depends on how many times you want to sample the distribution. Suppose you want to sample the distribution K times. Then, the time complexity using np.random.choice() each time is O(K(n + log(n))) when n is the number of items in the distribution. In my case, I needed to sample the same distribution multiple times of the order of 10^3 where n is of the order of 10^6. I used the below code, which precomputes the cumulative distribution and samples it in O(log(n)). Overall time complexity is O(n+K*log(n)). import numpy as np n,k = 10**6,10**3 # Create dummy distribution a = np.array([i+1 for i in range(n)]) p = np.array([1.0/n]*n) cfd = p.cumsum() for _ in range(k): x = np.random.uniform() idx = cfd.searchsorted(x, side='right') sampled_element = a[idx] A: There is lecture on this by Sebastien Thurn in the free Udacity course AI for Robotics. Basically he makes a circular array of the indexed weights using the mod operator %, sets a variable beta to 0, randomly chooses an index, for loops through N where N is the number of indices and in the for loop firstly increments beta by the formula: beta = beta + uniform sample from {0...2* Weight_max} and then nested in the for loop, a while loop per below: while w[index] < beta: beta = beta - w[index] index = index + 1 select p[index] Then on to the next index to resample based on the probabilities (or normalized probability in the case presented in the course). On Udacity find Lesson 8, video number 21 of Artificial Intelligence for Robotics where he is lecturing on particle filters. A: Another way of doing this, assuming we have weights at the same index as the elements in the element array. 
import numpy as np weights = [0.1, 0.3, 0.5] #weights for the item at index 0,1,2 # sum of weights should be <=1, you can also divide each weight by sum of all weights to standardise it to <=1 constraint. trials = 1 #number of trials num_item = 1 #number of items that can be picked in each trial selected_item_arr = np.random.multinomial(num_item, weights, trials) # gives number of times an item was selected at a particular index # this assumes selection with replacement # one possible output # selected_item_arr # array([[0, 0, 1]]) # say if trials = 5, the the possible output could be # selected_item_arr # array([[1, 0, 0], # [0, 0, 1], # [0, 0, 1], # [0, 1, 0], # [0, 0, 1]]) Now let's assume, we have to sample out 3 items in 1 trial. You can assume that there are three balls R,G,B present in large quantity in ratio of their weights given by weight array, the following could be possible outcome: num_item = 3 trials = 1 selected_item_arr = np.random.multinomial(num_item, weights, trials) # selected_item_arr can give output like : # array([[1, 0, 2]]) you can also think number of items to be selected as number of binomial/ multinomial trials within a set. So, the above example can be still work as num_binomial_trial = 5 weights = [0.1,0.9] #say an unfair coin weights for H/T num_experiment_set = 1 selected_item_arr = np.random.multinomial(num_binomial_trial, weights, num_experiment_set) # possible output # selected_item_arr # array([[1, 4]]) # i.e H came 1 time and T came 4 times in 5 binomial trials. And one set contains 5 binomial trails. A: One way is to randomize on the total of all the weights and then use the values as the limit points for each var. Here is a crude implementation as a generator. def rand_weighted(weights): """ Generator which uses the weights to generate a weighted random values """ sum_weights = sum(weights.values()) cum_weights = {} current_weight = 0 for key, value in sorted(weights.iteritems()): current_weight += value cum_weights[key] = current_weight while True: sel = int(random.uniform(0, 1) * sum_weights) for key, value in sorted(cum_weights.iteritems()): if sel < value: break yield key A: Using numpy def choice(items, weights): return items[np.argmin((np.cumsum(weights) / sum(weights)) < np.random.rand())] A: I needed to do something like this really fast really simple, from searching for ideas i finally built this template. The idea is receive the weighted values in a form of a json from the api, which here is simulated by the dict. Then translate it into a list in which each value repeats proportionally to it's weight, and just use random.choice to select a value from the list. I tried it running with 10, 100 and 1000 iterations. The distribution seems pretty solid. def weighted_choice(weighted_dict): """Input example: dict(apples=60, oranges=30, pineapples=10)""" weight_list = [] for key in weighted_dict.keys(): weight_list += [key] * weighted_dict[key] return random.choice(weight_list) A: I didn't love the syntax of any of those. I really wanted to just specify what the items were and what the weighting of each was. I realize I could have used random.choices but instead I quickly wrote the class below. import random, string from numpy import cumsum class randomChoiceWithProportions: ''' Accepts a dictionary of choices as keys and weights as values. 
Example if you want a unfair dice: choiceWeightDic = {"1":0.16666666666666666, "2": 0.16666666666666666, "3": 0.16666666666666666 , "4": 0.16666666666666666, "5": .06666666666666666, "6": 0.26666666666666666} dice = randomChoiceWithProportions(choiceWeightDic) samples = [] for i in range(100000): samples.append(dice.sample()) # Should be close to .26666 samples.count("6")/len(samples) # Should be close to .16666 samples.count("1")/len(samples) ''' def __init__(self, choiceWeightDic): self.choiceWeightDic = choiceWeightDic weightSum = sum(self.choiceWeightDic.values()) assert weightSum == 1, 'Weights sum to ' + str(weightSum) + ', not 1.' self.valWeightDict = self._compute_valWeights() def _compute_valWeights(self): valWeights = list(cumsum(list(self.choiceWeightDic.values()))) valWeightDict = dict(zip(list(self.choiceWeightDic.keys()), valWeights)) return valWeightDict def sample(self): num = random.uniform(0,1) for key, val in self.valWeightDict.items(): if val >= num: return key A: Provide random.choice() with a pre-weighted list: Solution & Test: import random options = ['a', 'b', 'c', 'd'] weights = [1, 2, 5, 2] weighted_options = [[opt]*wgt for opt, wgt in zip(options, weights)] weighted_options = [opt for sublist in weighted_options for opt in sublist] print(weighted_options) # test counts = {c: 0 for c in options} for x in range(10000): counts[random.choice(weighted_options)] += 1 for opt, wgt in zip(options, weights): wgt_r = counts[opt] / 10000 * sum(weights) print(opt, counts[opt], wgt, wgt_r) Output: ['a', 'b', 'b', 'c', 'c', 'c', 'c', 'c', 'd', 'd'] a 1025 1 1.025 b 1948 2 1.948 c 5019 5 5.019 d 2008 2 2.008 A: In case you don't define in advance how many items you want to pick (so, you don't do something like k=10) and you just have probabilities, you can do the below. Note that your probabilities do not need to add up to 1, they can be independent of each other: soup_items = ['pepper', 'onion', 'tomato', 'celery'] items_probability = [0.2, 0.3, 0.9, 0.1] selected_items = [item for item,p in zip(soup_items,items_probability) if random.random()<p] print(selected_items) >>>['pepper','tomato'] A: let's say you have items = [11, 23, 43, 91] probability = [0.2, 0.3, 0.4, 0.1] and you have function which generates a random number between [0, 1) (we can use random.random() here). so now take the prefix sum of probability prefix_probability=[0.2,0.5,0.9,1] now we can just take a random number between 0-1 and use binary search to find where that number belongs in prefix_probability. that index will be your answer Code will go something like this return items[bisect.bisect(prefix_probability,random.random())]
A weighted version of random.choice
I needed to write a weighted version of random.choice (each element in the list has a different probability for being selected). This is what I came up with: def weightedChoice(choices): """Like random.choice, but each element can have a different chance of being selected. choices can be any iterable containing iterables with two items each. Technically, they can have more than two items, the rest will just be ignored. The first item is the thing being chosen, the second item is its weight. The weights can be any numeric values, what matters is the relative differences between them. """ space = {} current = 0 for choice, weight in choices: if weight > 0: space[current] = choice current += weight rand = random.uniform(0, current) for key in sorted(space.keys() + [current]): if rand < key: return choice choice = space[key] return None This function seems overly complex to me, and ugly. I'm hoping everyone here can offer some suggestions on improving it or alternate ways of doing this. Efficiency isn't as important to me as code cleanliness and readability.
[ "Since version 1.7.0, NumPy has a choice function that supports probability distributions.\nfrom numpy.random import choice\ndraw = choice(list_of_candidates, number_of_items_to_pick,\n p=probability_distribution)\n\nNote that probability_distribution is a sequence in the same order of list_of_candidates. You can also use the keyword replace=False to change the behavior so that drawn items are not replaced.\n", "Since Python 3.6 there is a method choices from the random module.\nIn [1]: import random\n\nIn [2]: random.choices(\n...: population=[['a','b'], ['b','a'], ['c','b']],\n...: weights=[0.2, 0.2, 0.6],\n...: k=10\n...: )\n\nOut[2]:\n[['c', 'b'],\n ['c', 'b'],\n ['b', 'a'],\n ['c', 'b'],\n ['c', 'b'],\n ['b', 'a'],\n ['c', 'b'],\n ['b', 'a'],\n ['c', 'b'],\n ['c', 'b']]\n\nNote that random.choices will sample with replacement, per the docs:\n\nReturn a k sized list of elements chosen from the population with replacement.\n\nNote for completeness of answer:\n\nWhen a sampling unit is drawn from a finite population and is returned\nto that population, after its characteristic(s) have been recorded,\nbefore the next unit is drawn, the sampling is said to be \"with\nreplacement\". It basically means each element may be chosen more than\nonce.\n\nIf you need to sample without replacement, then as @ronan-paixão's brilliant answer states, you can use numpy.choice, whose replace argument controls such behaviour.\n", "def weighted_choice(choices):\n total = sum(w for c, w in choices)\n r = random.uniform(0, total)\n upto = 0\n for c, w in choices:\n if upto + w >= r:\n return c\n upto += w\n assert False, \"Shouldn't get here\"\n\n", "\nArrange the weights into a\ncumulative distribution.\nUse random.random() to pick a random\nfloat 0.0 <= x < total. \nSearch the\ndistribution using bisect.bisect as\nshown in the example at http://docs.python.org/dev/library/bisect.html#other-examples.\n\nfrom random import random\nfrom bisect import bisect\n\ndef weighted_choice(choices):\n values, weights = zip(*choices)\n total = 0\n cum_weights = []\n for w in weights:\n total += w\n cum_weights.append(total)\n x = random() * total\n i = bisect(cum_weights, x)\n return values[i]\n\n>>> weighted_choice([(\"WHITE\",90), (\"RED\",8), (\"GREEN\",2)])\n'WHITE'\n\nIf you need to make more than one choice, split this into two functions, one to build the cumulative weights and another to bisect to a random point.\n", "If you don't mind using numpy, you can use numpy.random.choice.\nFor example:\nimport numpy\n\nitems = [[\"item1\", 0.2], [\"item2\", 0.3], [\"item3\", 0.45], [\"item4\", 0.05]\nelems = [i[0] for i in items]\nprobs = [i[1] for i in items]\n\ntrials = 1000\nresults = [0] * len(items)\nfor i in range(trials):\n res = numpy.random.choice(items, p=probs) #This is where the item is selected!\n results[items.index(res)] += 1\nresults = [r / float(trials) for r in results]\nprint \"item\\texpected\\tactual\"\nfor i in range(len(probs)):\n print \"%s\\t%0.4f\\t%0.4f\" % (items[i], probs[i], results[i])\n\nIf you know how many selections you need to make in advance, you can do it without a loop like this:\nnumpy.random.choice(items, trials, p=probs)\n\n", "As of Python v3.6, random.choices could be used to return a list of elements of specified size from the given population with optional weights.\n\nrandom.choices(population, weights=None, *, cum_weights=None, k=1)\n\n\npopulation : list containing unique observations. 
(If empty, raises IndexError)\nweights : More precisely relative weights required to make selections.\ncum_weights : cumulative weights required to make selections.\nk : size(len) of the list to be outputted. (Default len()=1)\n\n\nFew Caveats:\n1) It makes use of weighted sampling with replacement so the drawn items would be later replaced. The values in the weights sequence in itself do not matter, but their relative ratio does.\nUnlike np.random.choice which can only take on probabilities as weights and also which must ensure summation of individual probabilities upto 1 criteria, there are no such regulations here. As long as they belong to numeric types (int/float/fraction except Decimal type) , these would still perform.\n>>> import random\n# weights being integers\n>>> random.choices([\"white\", \"green\", \"red\"], [12, 12, 4], k=10)\n['green', 'red', 'green', 'white', 'white', 'white', 'green', 'white', 'red', 'white']\n# weights being floats\n>>> random.choices([\"white\", \"green\", \"red\"], [.12, .12, .04], k=10)\n['white', 'white', 'green', 'green', 'red', 'red', 'white', 'green', 'white', 'green']\n# weights being fractions\n>>> random.choices([\"white\", \"green\", \"red\"], [12/100, 12/100, 4/100], k=10)\n['green', 'green', 'white', 'red', 'green', 'red', 'white', 'green', 'green', 'green']\n\n2) If neither weights nor cum_weights are specified, selections are made with equal probability. If a weights sequence is supplied, it must be the same length as the population sequence. \nSpecifying both weights and cum_weights raises a TypeError.\n>>> random.choices([\"white\", \"green\", \"red\"], k=10)\n['white', 'white', 'green', 'red', 'red', 'red', 'white', 'white', 'white', 'green']\n\n3) cum_weights are typically a result of itertools.accumulate function which are really handy in such situations. \n\n From the documentation linked: \nInternally, the relative weights are converted to cumulative weights\n before making selections, so supplying the cumulative weights saves\n work.\n\nSo, either supplying weights=[12, 12, 4] or cum_weights=[12, 24, 28] for our contrived case produces the same outcome and the latter seems to be more faster / efficient.\n", "Crude, but may be sufficient:\nimport random\nweighted_choice = lambda s : random.choice(sum(([v]*wt for v,wt in s),[]))\n\nDoes it work?\n# define choices and relative weights\nchoices = [(\"WHITE\",90), (\"RED\",8), (\"GREEN\",2)]\n\n# initialize tally dict\ntally = dict.fromkeys(choices, 0)\n\n# tally up 1000 weighted choices\nfor i in xrange(1000):\n tally[weighted_choice(choices)] += 1\n\nprint tally.items()\n\nPrints:\n[('WHITE', 904), ('GREEN', 22), ('RED', 74)]\n\nAssumes that all weights are integers. They don't have to add up to 100, I just did that to make the test results easier to interpret. 
(If weights are floating point numbers, multiply them all by 10 repeatedly until all weights >= 1.)\nweights = [.6, .2, .001, .199]\nwhile any(w < 1.0 for w in weights):\n weights = [w*10 for w in weights]\nweights = map(int, weights)\n\n", "If you have a weighted dictionary instead of a list you can write this\nitems = { \"a\": 10, \"b\": 5, \"c\": 1 } \nrandom.choice([k for k in items for dummy in range(items[k])])\n\nNote that [k for k in items for dummy in range(items[k])] produces this list ['a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'c', 'b', 'b', 'b', 'b', 'b']\n", "Here's is the version that is being included in the standard library for Python 3.6:\nimport itertools as _itertools\nimport bisect as _bisect\n\nclass Random36(random.Random):\n \"Show the code included in the Python 3.6 version of the Random class\"\n\n def choices(self, population, weights=None, *, cum_weights=None, k=1):\n \"\"\"Return a k sized list of population elements chosen with replacement.\n\n If the relative weights or cumulative weights are not specified,\n the selections are made with equal probability.\n\n \"\"\"\n random = self.random\n if cum_weights is None:\n if weights is None:\n _int = int\n total = len(population)\n return [population[_int(random() * total)] for i in range(k)]\n cum_weights = list(_itertools.accumulate(weights))\n elif weights is not None:\n raise TypeError('Cannot specify both weights and cumulative weights')\n if len(cum_weights) != len(population):\n raise ValueError('The number of weights does not match the population')\n bisect = _bisect.bisect\n total = cum_weights[-1]\n return [population[bisect(cum_weights, random() * total)] for i in range(k)]\n\nSource: https://hg.python.org/cpython/file/tip/Lib/random.py#l340\n", "A very basic and easy approach for a weighted choice is the following:\nnp.random.choice(['A', 'B', 'C'], p=[0.3, 0.4, 0.3])\n\n", "import numpy as np\nw=np.array([ 0.4, 0.8, 1.6, 0.8, 0.4])\nnp.random.choice(w, p=w/sum(w))\n\n", "I'm probably too late to contribute anything useful, but here's a simple, short, and very efficient snippet:\ndef choose_index(probabilies):\n cmf = probabilies[0]\n choice = random.random()\n for k in xrange(len(probabilies)):\n if choice <= cmf:\n return k\n else:\n cmf += probabilies[k+1]\n\nNo need to sort your probabilities or create a vector with your cmf, and it terminates once it finds its choice. Memory: O(1), time: O(N), with average running time ~ N/2. \nIf you have weights, simply add one line:\ndef choose_index(weights):\n probabilities = weights / sum(weights)\n cmf = probabilies[0]\n choice = random.random()\n for k in xrange(len(probabilies)):\n if choice <= cmf:\n return k\n else:\n cmf += probabilies[k+1]\n\n", "If your list of weighted choices is relatively static, and you want frequent sampling, you can do one O(N) preprocessing step, and then do the selection in O(1), using the functions in this related answer.\n# run only when `choices` changes.\npreprocessed_data = prep(weight for _,weight in choices)\n\n# O(1) selection\nvalue = choices[sample(preprocessed_data)][0]\n\n", "If you happen to have Python 3, and are afraid of installing numpy or writing your own loops, you could do:\nimport itertools, bisect, random\n\ndef weighted_choice(choices):\n weights = list(zip(*choices))[1]\n return choices[bisect.bisect(list(itertools.accumulate(weights)),\n random.uniform(0, sum(weights)))][0]\n\nBecause you can build anything out of a bag of plumbing adaptors! Although... 
I must admit that Ned's answer, while slightly longer, is easier to understand.\n", "I looked the pointed other thread and came up with this variation in my coding style, this returns the index of choice for purpose of tallying, but it is simple to return the string ( commented return alternative):\nimport random\nimport bisect\n\ntry:\n range = xrange\nexcept:\n pass\n\ndef weighted_choice(choices):\n total, cumulative = 0, []\n for c,w in choices:\n total += w\n cumulative.append((total, c))\n r = random.uniform(0, total)\n # return index\n return bisect.bisect(cumulative, (r,))\n # return item string\n #return choices[bisect.bisect(cumulative, (r,))][0]\n\n# define choices and relative weights\nchoices = [(\"WHITE\",90), (\"RED\",8), (\"GREEN\",2)]\n\ntally = [0 for item in choices]\n\nn = 100000\n# tally up n weighted choices\nfor i in range(n):\n tally[weighted_choice(choices)] += 1\n\nprint([t/sum(tally)*100 for t in tally])\n\n", "A general solution:\nimport random\ndef weighted_choice(choices, weights):\n total = sum(weights)\n treshold = random.uniform(0, total)\n for k, weight in enumerate(weights):\n total -= weight\n if total < treshold:\n return choices[k]\n\n", "Here is another version of weighted_choice that uses numpy. Pass in the weights vector and it will return an array of 0's containing a 1 indicating which bin was chosen. The code defaults to just making a single draw but you can pass in the number of draws to be made and the counts per bin drawn will be returned.\nIf the weights vector does not sum to 1, it will be normalized so that it does. \nimport numpy as np\n\ndef weighted_choice(weights, n=1):\n if np.sum(weights)!=1:\n weights = weights/np.sum(weights)\n\n draws = np.random.random_sample(size=n)\n\n weights = np.cumsum(weights)\n weights = np.insert(weights,0,0.0)\n\n counts = np.histogram(draws, bins=weights)\n return(counts[0])\n\n", "It depends on how many times you want to sample the distribution. \nSuppose you want to sample the distribution K times. Then, the time complexity using np.random.choice() each time is O(K(n + log(n))) when n is the number of items in the distribution. \nIn my case, I needed to sample the same distribution multiple times of the order of 10^3 where n is of the order of 10^6. I used the below code, which precomputes the cumulative distribution and samples it in O(log(n)). Overall time complexity is O(n+K*log(n)).\nimport numpy as np\n\nn,k = 10**6,10**3\n\n# Create dummy distribution\na = np.array([i+1 for i in range(n)])\np = np.array([1.0/n]*n)\n\ncfd = p.cumsum()\nfor _ in range(k):\n x = np.random.uniform()\n idx = cfd.searchsorted(x, side='right')\n sampled_element = a[idx]\n\n", "There is lecture on this by Sebastien Thurn in the free Udacity course AI for Robotics. 
Basically he makes a circular array of the indexed weights using the mod operator %, sets a variable beta to 0, randomly chooses an index,\nfor loops through N where N is the number of indices and in the for loop firstly increments beta by the formula:\nbeta = beta + uniform sample from {0...2* Weight_max}\nand then nested in the for loop, a while loop per below:\nwhile w[index] < beta:\n beta = beta - w[index]\n index = index + 1\n\nselect p[index]\n\nThen on to the next index to resample based on the probabilities (or normalized probability in the case presented in the course).\nOn Udacity find Lesson 8, video number 21 of Artificial Intelligence for Robotics where he is lecturing on particle filters.\n", "Another way of doing this, assuming we have weights at the same index as the elements in the element array.\nimport numpy as np\nweights = [0.1, 0.3, 0.5] #weights for the item at index 0,1,2\n# sum of weights should be <=1, you can also divide each weight by sum of all weights to standardise it to <=1 constraint.\ntrials = 1 #number of trials\nnum_item = 1 #number of items that can be picked in each trial\nselected_item_arr = np.random.multinomial(num_item, weights, trials)\n# gives number of times an item was selected at a particular index\n# this assumes selection with replacement\n# one possible output\n# selected_item_arr\n# array([[0, 0, 1]])\n# say if trials = 5, the the possible output could be \n# selected_item_arr\n# array([[1, 0, 0],\n# [0, 0, 1],\n# [0, 0, 1],\n# [0, 1, 0],\n# [0, 0, 1]])\n\nNow let's assume, we have to sample out 3 items in 1 trial. You can assume that there are three balls R,G,B present in large quantity in ratio of their weights given by weight array, the following could be possible outcome:\nnum_item = 3\ntrials = 1\nselected_item_arr = np.random.multinomial(num_item, weights, trials)\n# selected_item_arr can give output like :\n# array([[1, 0, 2]])\n\nyou can also think number of items to be selected as number of binomial/ multinomial trials within a set. So, the above example can be still work as\nnum_binomial_trial = 5\nweights = [0.1,0.9] #say an unfair coin weights for H/T\nnum_experiment_set = 1\nselected_item_arr = np.random.multinomial(num_binomial_trial, weights, num_experiment_set)\n# possible output\n# selected_item_arr\n# array([[1, 4]])\n# i.e H came 1 time and T came 4 times in 5 binomial trials. And one set contains 5 binomial trails.\n\n", "One way is to randomize on the total of all the weights and then use the values as the limit points for each var. Here is a crude implementation as a generator.\ndef rand_weighted(weights):\n \"\"\"\n Generator which uses the weights to generate a\n weighted random values\n \"\"\"\n sum_weights = sum(weights.values())\n cum_weights = {}\n current_weight = 0\n for key, value in sorted(weights.iteritems()):\n current_weight += value\n cum_weights[key] = current_weight\n while True:\n sel = int(random.uniform(0, 1) * sum_weights)\n for key, value in sorted(cum_weights.iteritems()):\n if sel < value:\n break\n yield key\n\n", "Using numpy\ndef choice(items, weights):\n return items[np.argmin((np.cumsum(weights) / sum(weights)) < np.random.rand())]\n\n", "I needed to do something like this really fast really simple, from searching for ideas i finally built this template. 
The idea is receive the weighted values in a form of a json from the api, which here is simulated by the dict.\nThen translate it into a list in which each value repeats proportionally to it's weight, and just use random.choice to select a value from the list.\nI tried it running with 10, 100 and 1000 iterations. The distribution seems pretty solid.\ndef weighted_choice(weighted_dict):\n \"\"\"Input example: dict(apples=60, oranges=30, pineapples=10)\"\"\"\n weight_list = []\n for key in weighted_dict.keys():\n weight_list += [key] * weighted_dict[key]\n return random.choice(weight_list)\n\n", "I didn't love the syntax of any of those. I really wanted to just specify what the items were and what the weighting of each was. I realize I could have used random.choices but instead I quickly wrote the class below.\nimport random, string\nfrom numpy import cumsum\n\nclass randomChoiceWithProportions:\n '''\n Accepts a dictionary of choices as keys and weights as values. Example if you want a unfair dice:\n\n\n choiceWeightDic = {\"1\":0.16666666666666666, \"2\": 0.16666666666666666, \"3\": 0.16666666666666666\n , \"4\": 0.16666666666666666, \"5\": .06666666666666666, \"6\": 0.26666666666666666}\n dice = randomChoiceWithProportions(choiceWeightDic)\n\n samples = []\n for i in range(100000):\n samples.append(dice.sample())\n\n # Should be close to .26666\n samples.count(\"6\")/len(samples)\n\n # Should be close to .16666\n samples.count(\"1\")/len(samples)\n '''\n def __init__(self, choiceWeightDic):\n self.choiceWeightDic = choiceWeightDic\n weightSum = sum(self.choiceWeightDic.values())\n assert weightSum == 1, 'Weights sum to ' + str(weightSum) + ', not 1.'\n self.valWeightDict = self._compute_valWeights()\n\n def _compute_valWeights(self):\n valWeights = list(cumsum(list(self.choiceWeightDic.values())))\n valWeightDict = dict(zip(list(self.choiceWeightDic.keys()), valWeights))\n return valWeightDict\n\n def sample(self):\n num = random.uniform(0,1)\n for key, val in self.valWeightDict.items():\n if val >= num:\n return key\n\n", "Provide random.choice() with a pre-weighted list:\nSolution & Test:\nimport random\n\noptions = ['a', 'b', 'c', 'd']\nweights = [1, 2, 5, 2]\n\nweighted_options = [[opt]*wgt for opt, wgt in zip(options, weights)]\nweighted_options = [opt for sublist in weighted_options for opt in sublist]\nprint(weighted_options)\n\n# test\n\ncounts = {c: 0 for c in options}\nfor x in range(10000):\n counts[random.choice(weighted_options)] += 1\n\nfor opt, wgt in zip(options, weights):\n wgt_r = counts[opt] / 10000 * sum(weights)\n print(opt, counts[opt], wgt, wgt_r)\n\nOutput:\n['a', 'b', 'b', 'c', 'c', 'c', 'c', 'c', 'd', 'd']\na 1025 1 1.025\nb 1948 2 1.948\nc 5019 5 5.019\nd 2008 2 2.008\n\n", "In case you don't define in advance how many items you want to pick (so, you don't do something like k=10) and you just have probabilities, you can do the below. 
Note that your probabilities do not need to add up to 1, they can be independent of each other:\nsoup_items = ['pepper', 'onion', 'tomato', 'celery'] \nitems_probability = [0.2, 0.3, 0.9, 0.1]\n\nselected_items = [item for item,p in zip(soup_items,items_probability) if random.random()<p]\nprint(selected_items)\n>>>['pepper','tomato']\n\n", "let's say you have\nitems = [11, 23, 43, 91] \nprobability = [0.2, 0.3, 0.4, 0.1]\n\nand you have function which generates a random number between [0, 1) (we can use random.random() here).\nso now take the prefix sum of probability\nprefix_probability=[0.2,0.5,0.9,1]\n\nnow we can just take a random number between 0-1 and use binary search to find where that number belongs in prefix_probability. that index will be your answer\nCode will go something like this\nreturn items[bisect.bisect(prefix_probability,random.random())]\n\n" ]
[ 405, 357, 143, 82, 24, 20, 17, 15, 14, 10, 5, 4, 3, 3, 2, 2, 2, 2, 2, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "Step-1: Generate CDF F in which you're interesting\nStep-2: Generate u.r.v. u\nStep-3: Evaluate z=F^{-1}(u)\nThis modeling is described in course of probability theory or stochastic processes. This is applicable just because you have easy CDF.\n" ]
[ -1 ]
[ "optimization", "python" ]
stackoverflow_0003679694_optimization_python.txt
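Pulling the thread above together: on Python 3.6+ the standard-library route is random.choices, and for many repeated draws against a fixed distribution the cumulative-weights-plus-bisect pattern avoids re-accumulating on every call. A minimal sketch, assuming Python 3.6+; the names and example weights are illustrative, not taken from any single answer:

import bisect
import itertools
import random

choices = [("WHITE", 90), ("RED", 8), ("GREEN", 2)]  # (value, relative weight) pairs
values, weights = zip(*choices)

# One-off draws: random.choices samples with replacement using relative weights.
print(random.choices(values, weights=weights, k=5))

# Repeated draws: accumulate the weights once, then each draw is a single bisect.
cum_weights = list(itertools.accumulate(weights))

def draw():
    x = random.random() * cum_weights[-1]        # uniform in [0, total)
    return values[bisect.bisect(cum_weights, x)] # index of the bin x falls into

print([draw() for _ in range(5)])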
Q: Error on Python serial import When I try to import the serial I get the following error: Traceback (most recent call last): File "C:\Documents and Settings\eduardo.pereira\workspace\thgspeak\tst.py", line 7, in <module> import serial File "C:\Python27\lib\site-packages\serial\__init__.py", line 27, in <module> from serial.serialwin32 import Serial File "C:\Python27\lib\site-packages\serial\serialwin32.py", line 15, in <module> from serial import win32 File "C:\Python27\lib\site-packages\serial\win32.py", line 182, in <module> CancelIoEx = _stdcall_libraries['kernel32'].CancelIoEx File "C:\Python27\lib\ctypes\__init__.py", line 375, in __getattr__ func = self.__getitem__(name) File "C:\Python27\lib\ctypes\__init__.py", line 380, in __getitem__ func = self._FuncPtr((name_or_ordinal, self)) AttributeError: function 'CancelIoEx' not found I have installed the latest version of pySerial, Python 2.7 runing on a WinXP laptop. Tried everywhere and found no similar problem. Is there any solution for that? Thanks in advance... A: The version of pySerial that you're using is trying to call a function that's only available in Windows Vista, whereas you're running Windows XP. It might be worth experimenting with using an older version of pySerial. The code in question was added to pySerial on 3 May 2016, so a version just prior to that might be a good start. A: Older versions seem unavailable. But, this worked for me (assuming nanpy version 3.1.1): open file \lib\site-packages\serial\serialwin32.py delete methods _cancel_overlapped_io(), cancel_read(), cancel_write() in lines 436-455 nearly at the botton of the file change method _close() als follows: (Python) def _close(self): """internal close port helper""" if self._port_handle is not None: # Restore original timeout values: win32.SetCommTimeouts(self._port_handle, self._orgTimeouts) # Close COM-Port: if self._overlapped_read is not None: win32.CloseHandle(self._overlapped_read.hEvent) self._overlapped_read = None if self._overlapped_write is not None: win32.CloseHandle(self._overlapped_write.hEvent) self._overlapped_write = None win32.CloseHandle(self._port_handle) self._port_handle = None Additionally, create a non-default serial connection when starting the communication, otherwise you'll be bound to some linux device: a = ArduinoApi(SerialManager("COM5:")) for i in range(10): a.pinMode(13, a.OUTPUT) a.digitalWrite(13, a.HIGH) # etc. A: And in serial\win32.py comments #CancelIoEx = _stdcall_libraries['kernel32'].CancelIoEx #CancelIoEx.restype = BOOL #CancelIoEx.argtypes = [HANDLE, LPOVERLAPPED] A: There is no obvious reason why CancelIO was replaced with the cross-thread version CancelIOex, which allows code in one thread to cancel IO in another thread. Certainly cpython 2.x is single threaded. To get pySerial to run on python 2.7 on Win2K, I just changed CancelIOex in serial back to CancelIO.
Error on Python serial import
When I try to import the serial I get the following error: Traceback (most recent call last): File "C:\Documents and Settings\eduardo.pereira\workspace\thgspeak\tst.py", line 7, in <module> import serial File "C:\Python27\lib\site-packages\serial\__init__.py", line 27, in <module> from serial.serialwin32 import Serial File "C:\Python27\lib\site-packages\serial\serialwin32.py", line 15, in <module> from serial import win32 File "C:\Python27\lib\site-packages\serial\win32.py", line 182, in <module> CancelIoEx = _stdcall_libraries['kernel32'].CancelIoEx File "C:\Python27\lib\ctypes\__init__.py", line 375, in __getattr__ func = self.__getitem__(name) File "C:\Python27\lib\ctypes\__init__.py", line 380, in __getitem__ func = self._FuncPtr((name_or_ordinal, self)) AttributeError: function 'CancelIoEx' not found I have installed the latest version of pySerial, Python 2.7 runing on a WinXP laptop. Tried everywhere and found no similar problem. Is there any solution for that? Thanks in advance...
[ "The version of pySerial that you're using is trying to call a function that's only available in Windows Vista, whereas you're running Windows XP.\nIt might be worth experimenting with using an older version of pySerial.\nThe code in question was added to pySerial on 3 May 2016, so a version just prior to that might be a good start.\n", "Older versions seem unavailable. But, this worked for me (assuming nanpy version 3.1.1):\n\nopen file \\lib\\site-packages\\serial\\serialwin32.py\ndelete methods _cancel_overlapped_io(), cancel_read(), cancel_write() in lines 436-455 nearly at the botton of the file\nchange method _close() als follows:\n\n(Python) \ndef _close(self):\n \"\"\"internal close port helper\"\"\"\n if self._port_handle is not None:\n # Restore original timeout values:\n win32.SetCommTimeouts(self._port_handle, self._orgTimeouts)\n # Close COM-Port:\n if self._overlapped_read is not None:\n win32.CloseHandle(self._overlapped_read.hEvent)\n self._overlapped_read = None\n if self._overlapped_write is not None:\n win32.CloseHandle(self._overlapped_write.hEvent)\n self._overlapped_write = None\n win32.CloseHandle(self._port_handle)\n self._port_handle = None\n\nAdditionally, create a non-default serial connection when starting the communication, otherwise you'll be bound to some linux device:\na = ArduinoApi(SerialManager(\"COM5:\"))\n\nfor i in range(10):\n a.pinMode(13, a.OUTPUT)\n a.digitalWrite(13, a.HIGH)\n # etc.\n\n", "And in serial\\win32.py comments\n#CancelIoEx = _stdcall_libraries['kernel32'].CancelIoEx\n#CancelIoEx.restype = BOOL\n#CancelIoEx.argtypes = [HANDLE, LPOVERLAPPED]\n\n", "There is no obvious reason why CancelIO was replaced with the cross-thread version CancelIOex, which allows code in one thread to cancel IO in another thread. Certainly cpython 2.x is single threaded.\nTo get pySerial to run on python 2.7 on Win2K, I just changed CancelIOex in serial back to CancelIO.\n" ]
[ 3, 3, 1, 0 ]
[]
[]
[ "pyserial", "python" ]
stackoverflow_0038262930_pyserial_python.txt
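For context on why import serial fails only on XP: the binding that blows up (CancelIoEx) is a kernel32 function that first shipped with Windows Vista. A small Windows-only probe, independent of pySerial, makes that visible before deciding between pinning an older pySerial (as the first answer suggests) or editing serialwin32.py / win32.py (as the later answers do). Everything below is illustrative and not part of pySerial:

import ctypes
import platform

kernel32 = ctypes.windll.kernel32  # Windows only

# ctypes resolves functions lazily on attribute access, exactly like the failing
# line in serial/win32.py, so hasattr() reports whether this OS exports CancelIoEx.
has_cancel_io_ex = hasattr(kernel32, "CancelIoEx")

print("Windows release:", platform.release())
print("CancelIoEx available:", has_cancel_io_ex)
if not has_cancel_io_ex:
    print("This build of Windows predates CancelIoEx; use a pySerial release "
          "from before May 2016 or apply one of the patches described above.")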
Q: Torch: Input type and weight type (torch.cuda.FloatTensor) should be the same Note: I have already seen similar questions: the same error, tell torch not to use GPU, but the answers do not work for me. I have installed PyTorch version 1.13.0+cu117 (the latest), and the code structure is as follows (an image classification task): # os.environ["CUDA_VISIBLE_DEVICES"]="" # required? device = torch.device("cpu") # use CPU ... train_set = DataLoader( torchvision.datasets.ImageFolder(path, transform), **kwargs ) ... model = myCNN().to(device) optimizer = SGD(args) loss = CrossEntropyLoss() train() I want to train on CPU. For dataloader, in accordance to this, I've set pin_memory=True and non_blocking=pin_memory. The error persists even on setting pin_memory=False. The training loop has the following structure: for epoch in n_epochs: model.train() inputs, labels = inputs.to(device, non_blocking=non_blocking), labels.to(device, non_blocking=non_blocking) Compute loss, back-propagate The error traceback (on calling train()): Traceback (most recent call last): File "code.py", line 233, in <module> train() File "code.py", line 122, in train outputs = model(inputs) File "...\torch\nn\modules\module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "code.py", line 87, in forward output = self.network(input) File "...\torch\nn\modules\module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "...\torch\nn\modules\container.py", line 204, in forward input = module(input) File "...\torch\nn\modules\module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "...\torch\nn\modules\conv.py", line 463, in forward return self._conv_forward(input, self.weight, self.bias) File "...\torch\nn\modules\conv.py", line 459, in _conv_forward return F.conv2d(input, weight, bias, self.stride, RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor Edit: There was a comment regarding possible issues due to the model. The model is roughly: class myCNN(nn.Module): def __init__(self, ...other args...): super().__init__() self.network = nn.Sequential( nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding), nn.ReLU(), nn.MaxPool2d(kernel_size), ... similar convolutional layers ... nn.Flatten(), nn.Linear(in_features, out_features) ) def forward(self, input): output = self.network(input) return output Since I have transferred both model and data to the same device, what could be the reason of this error? How to correct it? A: The issue was due to incorrect usage of summary from torchinfo. It does a forward pass (if input size is provided), and the device is (by default) selected on basis of torch.cuda.is_available(). If device (as specified in the question) argument is given to summary, the training happens just fine.
Torch: Input type and weight type (torch.cuda.FloatTensor) should be the same
Note: I have already seen similar questions: the same error, tell torch not to use GPU, but the answers do not work for me. I have installed PyTorch version 1.13.0+cu117 (the latest), and the code structure is as follows (an image classification task): # os.environ["CUDA_VISIBLE_DEVICES"]="" # required? device = torch.device("cpu") # use CPU ... train_set = DataLoader( torchvision.datasets.ImageFolder(path, transform), **kwargs ) ... model = myCNN().to(device) optimizer = SGD(args) loss = CrossEntropyLoss() train() I want to train on CPU. For dataloader, in accordance to this, I've set pin_memory=True and non_blocking=pin_memory. The error persists even on setting pin_memory=False. The training loop has the following structure: for epoch in n_epochs: model.train() inputs, labels = inputs.to(device, non_blocking=non_blocking), labels.to(device, non_blocking=non_blocking) Compute loss, back-propagate The error traceback (on calling train()): Traceback (most recent call last): File "code.py", line 233, in <module> train() File "code.py", line 122, in train outputs = model(inputs) File "...\torch\nn\modules\module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "code.py", line 87, in forward output = self.network(input) File "...\torch\nn\modules\module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "...\torch\nn\modules\container.py", line 204, in forward input = module(input) File "...\torch\nn\modules\module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "...\torch\nn\modules\conv.py", line 463, in forward return self._conv_forward(input, self.weight, self.bias) File "...\torch\nn\modules\conv.py", line 459, in _conv_forward return F.conv2d(input, weight, bias, self.stride, RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor Edit: There was a comment regarding possible issues due to the model. The model is roughly: class myCNN(nn.Module): def __init__(self, ...other args...): super().__init__() self.network = nn.Sequential( nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding), nn.ReLU(), nn.MaxPool2d(kernel_size), ... similar convolutional layers ... nn.Flatten(), nn.Linear(in_features, out_features) ) def forward(self, input): output = self.network(input) return output Since I have transferred both model and data to the same device, what could be the reason of this error? How to correct it?
[ "The issue was due to incorrect usage of summary from torchinfo. It does a forward pass (if input size is provided), and the device is (by default) selected on basis of torch.cuda.is_available().\nIf device (as specified in the question) argument is given to summary, the training happens just fine.\n" ]
[ 0 ]
[]
[]
[ "deep_learning", "machine_learning", "python", "pytorch" ]
stackoverflow_0074609050_deep_learning_machine_learning_python_pytorch.txt
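A short sketch of the fix described in the answer, assuming the summary in question is torchinfo.summary; the stand-in model and input size below are placeholders, not the asker's myCNN:

import torch
import torch.nn as nn
from torchinfo import summary

device = torch.device("cpu")

# Stand-in for the question's myCNN (layer sizes are arbitrary).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),   # 3x64x64 -> 8x62x62
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 62 * 62, 10),
).to(device)

# Passing device= keeps torchinfo's dry-run forward pass on the CPU. Left to its
# default, torchinfo picks CUDA whenever torch.cuda.is_available() is true, and
# the model can end up on the GPU afterwards -- which is what later produces the
# FloatTensor vs. cuda.FloatTensor mismatch during training.
summary(model, input_size=(1, 3, 64, 64), device=device)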
Q: Dynamically create pyspark dataframes according to a condition I have a pyspark dataframe store_df :- store ID Div 637 4000000970 Pac 637 4000000435 Pac 637 4000055542 Pac 637 4000042206 Pac 638 2200015935 Pac 638 2200000483 Pac 638 4000014114 Pac 640 4000000162 Pac 640 2200000067 Pac 642 2200000067 Mac 642 4000044148 Mac 642 4000014114 Mac I want to remove ID(present in store_df) from the dataframe final_list dynamically for each store in store_df based on Div. final_list pyspark df :- Div ID Rank Category Pac 4000000970 1 A Pac 4000000432 2 A Pac 4000000405 3 A Pac 4000042431 4 A Pac 2200028596 5 B Pac 4000000032 6 A Pac 2200028594 7 B Pac 4000014114 8 B Pac 2230001789 9 D Pac 2200001789 10 C Pac 2200001787 11 D Pac 2200001786 12 C Mac 2200001789 1 C Mac 2200001787 2 D Mac 2200001786 3 C For eg:for store 637 the upd_final_list should look like this(ID 4000000970 eliminated):- Div ID Rank Category Pac 4000000432 2 A Pac 4000000405 3 A Pac 4000042431 4 A Pac 2200028596 5 B Pac 4000000032 6 A Pac 2200028594 7 B Pac 4000014114 8 B Pac 2230001789 9 D Pac 2200001789 10 C Pac 2200001787 11 D Pac 2200001786 12 C Likewise this list is to be customised for other stores based on their ID. How do I do this? A: I can't test it but it should be something like this if I understood it right now store_ids = [637, 123, 865] for store_id in store_ids: div_type = stores.select("Div").where(f.col("ID") == store_id ).collect()[0][0] final_list.join(stores, stores.ID == final_list.ID) .select("*") .where((f.col("Div") == div_type) &\ (f.col("store_id") != store_id)) A: store_div = store_df.select('Store','Div').distinct().collect() fc =0 for i in store_div: store_filter = store_df.filter((col('Store')==i[0]) & (col('Div')==i[1])) if fc == 0 : Updated_final_list = final_list.join(store_filter, ["ID","DiV"], "left_anti") else: Updated_final_list = Updated_final_list.join(store_filter, ["ID","DiV"], "left_anti") fc +=1
Dynamically create pyspark dataframes according to a condition
I have a pyspark dataframe store_df :- store ID Div 637 4000000970 Pac 637 4000000435 Pac 637 4000055542 Pac 637 4000042206 Pac 638 2200015935 Pac 638 2200000483 Pac 638 4000014114 Pac 640 4000000162 Pac 640 2200000067 Pac 642 2200000067 Mac 642 4000044148 Mac 642 4000014114 Mac I want to remove ID(present in store_df) from the dataframe final_list dynamically for each store in store_df based on Div. final_list pyspark df :- Div ID Rank Category Pac 4000000970 1 A Pac 4000000432 2 A Pac 4000000405 3 A Pac 4000042431 4 A Pac 2200028596 5 B Pac 4000000032 6 A Pac 2200028594 7 B Pac 4000014114 8 B Pac 2230001789 9 D Pac 2200001789 10 C Pac 2200001787 11 D Pac 2200001786 12 C Mac 2200001789 1 C Mac 2200001787 2 D Mac 2200001786 3 C For eg:for store 637 the upd_final_list should look like this(ID 4000000970 eliminated):- Div ID Rank Category Pac 4000000432 2 A Pac 4000000405 3 A Pac 4000042431 4 A Pac 2200028596 5 B Pac 4000000032 6 A Pac 2200028594 7 B Pac 4000014114 8 B Pac 2230001789 9 D Pac 2200001789 10 C Pac 2200001787 11 D Pac 2200001786 12 C Likewise this list is to be customised for other stores based on their ID. How do I do this?
[ "I can't test it but it should be something like this if I understood it right now\nstore_ids = [637, 123, 865]\nfor store_id in store_ids: \n div_type = stores.select(\"Div\").where(f.col(\"ID\") == store_id ).collect()[0][0]\n final_list.join(stores, stores.ID == final_list.ID)\n .select(\"*\")\n .where((f.col(\"Div\") == div_type) &\\\n (f.col(\"store_id\") != store_id))\n\n", "store_div = store_df.select('Store','Div').distinct().collect()\n\nfc =0\nfor i in store_div: \n\n store_filter = store_df.filter((col('Store')==i[0]) & (col('Div')==i[1]))\n if fc == 0 :\n Updated_final_list = final_list.join(store_filter, [\"ID\",\"DiV\"], \"left_anti\")\n else:\n Updated_final_list = Updated_final_list.join(store_filter, [\"ID\",\"DiV\"], \"left_anti\")\n\n fc +=1\n\n" ]
[ 0, 0 ]
[]
[]
[ "azure_databricks", "pyspark", "python" ]
stackoverflow_0074601979_azure_databricks_pyspark_python.txt
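A compact version of the left_anti idea from the answers above: build one filtered copy of final_list per store, keyed in a plain Python dict. The sample rows are abbreviated stand-ins for the question's store_df and final_list; collecting the store list to the driver is fine here because the number of stores is small:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Tiny samples shaped like the question's store_df and final_list.
store_df = spark.createDataFrame(
    [(637, "4000000970", "Pac"), (637, "4000000435", "Pac"), (642, "2200001789", "Mac")],
    ["store", "ID", "Div"],
)
final_list = spark.createDataFrame(
    [("Pac", "4000000970", 1, "A"), ("Pac", "4000000432", 2, "A"), ("Mac", "2200001789", 1, "C")],
    ["Div", "ID", "Rank", "Category"],
)

per_store = {}
for row in store_df.select("store").distinct().collect():
    store_id = row["store"]
    # (Div, ID) pairs this store already carries ...
    owned = store_df.filter(F.col("store") == store_id).select("Div", "ID")
    # ... dropped from final_list; left_anti keeps only the non-matching rows.
    per_store[store_id] = final_list.join(owned, on=["Div", "ID"], how="left_anti")

per_store[637].show()   # the 4000000970 row is gone for store 637, as in the question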
Q: unexpected link to one same object in different objects , while condition isn't works I face with unexpected link to one same object in different objects , while condition isn't works. So there are 3 objects: c1, c2, c3 = C(1, 'name1'), C(2, 'name2'), C(3, 'name3') they have next fields and interface: class C: def __init__(self, c_id:int, c_name:str, b:List=[]): self.c_id:int= c_id:int self.c_name:str= c_name:str self.b= b self.cs= [] def _check_c(self, c_id:int): self.c = [i.c_id for i in self.b] self.b.sort(reverse=True, key= lambda c: c.c_id) if c_id in self.cs: return True else: return False To fill C objects i will use Collector class: class Collecter: def __init__(self, data:List[Any]): self.data = data async def fill_c(cs:List[C], data)->List: for d in data: try: if d[0] and d[0].__len__()>=10: c_num = d[0][4] c = cs[int(c_num)-1] if int(c_num) == c.c_id: if c.b.__len__()<1 or not c._check_c(d[1]): c.b.append(B(d[2], d[1])) b = c.b[0] m = (d[3], d[4]) b.ms.append(m) else: b = c.b[0] m = (d[3], d[4]) b.ms.append(m) else: pass except Exception as ex: raise ex return cs and every time, whe i am going to try init and run: c1, c2, c3 = await Collecter.fill_c([c1, c2, c3], self.data) even if first condition (if) code block is not (it means that was appened nothing), my objects have one same object in filed: I really don't understand why I face it. PS code called from FastAPI REST UPD data is List[sqlalchemy.engine.row.LegacyRowsqlalchemy.engine.row.LegacyRow] A: So, you could be confused hardy when you use mutable objects as a default value of argument in your methods. There is the best explanation .
unexpected link to one same object in different objects , while condition isn't works
I face with unexpected link to one same object in different objects , while condition isn't works. So there are 3 objects: c1, c2, c3 = C(1, 'name1'), C(2, 'name2'), C(3, 'name3') they have next fields and interface: class C: def __init__(self, c_id:int, c_name:str, b:List=[]): self.c_id:int= c_id:int self.c_name:str= c_name:str self.b= b self.cs= [] def _check_c(self, c_id:int): self.c = [i.c_id for i in self.b] self.b.sort(reverse=True, key= lambda c: c.c_id) if c_id in self.cs: return True else: return False To fill C objects i will use Collector class: class Collecter: def __init__(self, data:List[Any]): self.data = data async def fill_c(cs:List[C], data)->List: for d in data: try: if d[0] and d[0].__len__()>=10: c_num = d[0][4] c = cs[int(c_num)-1] if int(c_num) == c.c_id: if c.b.__len__()<1 or not c._check_c(d[1]): c.b.append(B(d[2], d[1])) b = c.b[0] m = (d[3], d[4]) b.ms.append(m) else: b = c.b[0] m = (d[3], d[4]) b.ms.append(m) else: pass except Exception as ex: raise ex return cs and every time, whe i am going to try init and run: c1, c2, c3 = await Collecter.fill_c([c1, c2, c3], self.data) even if first condition (if) code block is not (it means that was appened nothing), my objects have one same object in filed: I really don't understand why I face it. PS code called from FastAPI REST UPD data is List[sqlalchemy.engine.row.LegacyRowsqlalchemy.engine.row.LegacyRow]
[ "So, you could be confused hardy when you use mutable objects as a default value of argument in your methods. There is the best explanation .\n" ]
[ 1 ]
[]
[]
[ "asynchronous", "object", "python" ]
stackoverflow_0074603908_asynchronous_object_python.txt
Q: Try except works when file is run from IDE but when compiled into exe with pyinstaller it doesnt work I created a python tool with Tkinter GUI. These are the pieces of the script. The problem is with the try-except in these lines of code. try: pldf_csv[data['sls_data']['add_columns']].write_csv(endpath,sep='\t') except: write_eror_status = True print("CANNOT WRITE FILE") If I run the python file via VSCode the try-except works like this But then if I compile the script with pyinstaller to exe, those line doesn't execute at all Full code class IngestGenerator: def __init__(self, fn,fd,n3pl): self.filename = fn self.filedir = fd self.name3pl = n3pl def generate_result_csv(self): """To extend 3PL please refer to comment with <(extend this)> note Don't forget to extend the yaml when extending 3PL """ start_time = time.time() # with open("columns 1a.yaml", 'r') as f: with open(os.path.join(os.path.dirname(__file__), 'columns 1a.yaml'), 'r') as f: data = yaml.load(f, Loader=SafeLoader) with tempfile.TemporaryDirectory() as tmpdirname: try : # create new list if there's new 3pl behavior (extend this) list_type_like = [data['behavior']['type_like'][i]['name3pl'] for i in range(0,len(data['behavior']['type_like']))] #collects names of 3pl which have categorical column to be divided based on write_eror_status = False status = False for i in range(0,len(data['behavior']['type_like'])): if data['behavior']['type_like'][i]['name3pl']==self.name3pl: #get the name of category column and the values of categories (extend this whole if-statement) list_types = data['behavior']['type_like'][i]['cats'] cat_column = data['behavior']['type_like'][i]['categorical_col'] status = True else : pass if status == False: #for logging to check if its in the list of type-like 3pl (extend this whole if else statement) print("3PL cannot be found on type-like 3PL list") else: print("3PL can be found on type-like 3PL list") try: for cat in list_types: #dynamic list creation for each category (extend this line only) globals()[f"{cat}_final_df"] = [] except : print("3pl isn't split based on it's categories") xl = win32com.client.Dispatch("Excel.Application") print("Cast to CSV first (win32com)") wb = xl.Workbooks.Open(self.filename,ReadOnly=1) xl.DisplayAlerts = False xl.Visible = False xl.ScreenUpdating = False xl.EnableEvents = False sheet_names = [sheet.Name for sheet in wb.Sheets if sheet.Name.lower() != 'summary'] print("Sheet names") print(sheet_names) for sheet_name in sheet_names: print("Reading sheet "+sheet_name) ws = wb.Worksheets(sheet_name) ws.SaveAs(tmpdirname+"\\myfile_tmp_{}.csv".format(sheet_name), 24) used_columns = data['sls_data'][f'{self.name3pl.lower()}_used_columns'] renamed_columns = data['sls_data'][f'{self.name3pl.lower()}_rename_columns'] rowskip = data['behavior']['row_skip'][f'{self.name3pl.lower()}'] list_dtypes = [str for u in used_columns] print("CP 1") scandf = pl.scan_csv(tmpdirname+"\\myfile_tmp_{}.csv".format(sheet_name),skip_rows= rowskip,n_rows=10) #scan csv to get column name print(scandf.columns) scanned_cols = scandf.columns.copy() used_cols_inDF = [] #collects column names dynamically for i in range(0,len(used_columns)): if type(used_columns[i]) is list: #check for each scanned-columns which contained in yaml used_columns, append if the scanned columns exist in yaml for sc in scanned_cols: for uc in used_columns[i]: if sc == uc: print(f"Column match : {uc}") used_cols_inDF.append(uc) else:pass else: for sc in scanned_cols: #purpose is same with the if statement if sc == 
used_columns[i]: print(f"Column match : {used_columns[i]}") used_cols_inDF.append(used_columns[i]) else:pass print(used_cols_inDF) """ JNT files have everchanging column names. Some files only have Total Ongkir, some only have Total, and some might have Total and Total Ongkir. If both exists then will use column Total Ongkir (extend this if necessary i.e for special cases of 3pl) """ if self.name3pl == 'JNT': if "Total" in used_cols_inDF and "Total Ongkir" in used_cols_inDF: used_cols_inDF.remove("Total") else:pass else:pass pldf_csv = pl.read_csv(tmpdirname+"\\myfile_tmp_{}.csv".format(sheet_name), columns = used_cols_inDF, new_columns = renamed_columns, dtypes = list_dtypes, skip_rows= rowskip ).filter(~pl.fold(acc=True, f=lambda acc, s: acc & s.is_null(), exprs=pl.all(),)) #filter rows with all null values print(pldf_csv) print(pldf_csv.columns) for v in data['sls_data']['add_columns']: #create dynamic columns if "3pl invoice distance (m) (optional)" in v.lower() or "3pl cod amount (optional)" in v.lower(): pldf_csv = pldf_csv.with_column(pl.Series(name="{}".format(v),values= np.zeros(shape=pldf_csv.shape[0]))) elif "3pl tn (mandatory)" in v.lower() or "weight (kg) (optional)" in v.lower(): pass elif "total fee (3pl) (optional)" in v.lower(): pldf_csv = pldf_csv.with_column(pl.col(v).str.replace_all(",","").str.strip().cast(pl.Float64,False).fill_null(0)) else : pldf_csv = pldf_csv.with_column(pl.lit(None).alias(v)) print(pldf_csv) endpath = self.filedir+"\{}_{}_{}.csv".format(get_file_name(file_name_appear_label["text"]),sheet_name,"IngestResult").replace('/','\\') if self.name3pl not in list_type_like: #(extend this line only) if self.name3pl == 'JNT': #(extend this line and its statement if necessary i.e for special cases of 3pl) pldf_csv = pldf_csv.with_column((pl.col("Total Fee (3PL) (Optional)")+pl.col("Biaya Asuransi").str.replace_all(",","").str.strip().cast(pl.Float64,False).fill_null(0)).alias("Total Fee (3PL) (Optional)")) else: pass print(pldf_csv) try: pldf_csv[data['sls_data']['add_columns']].write_csv(endpath,sep='\t') except: write_eror_status = True print("CANNOT WRITE FILE") elif self.name3pl in list_type_like: #(extend this line only) for cat in list_types: globals()[f"{cat}_final_df"].append(pldf_csv.filter(pl.col(cat_column).str.contains(cat))) if self.name3pl not in list_type_like: pass elif self.name3pl in list_type_like: for cat in list_types: globals()[f"{cat}_final_df"] = pl.concat(globals()[f"{cat}_final_df"]) print(globals()[f"{cat}_final_df"]) globals()[f"endpath_{cat}"] = self.filedir+"\{}_{}_{}.csv".format(get_file_name(file_name_appear_label["text"]),cat,"IngestResult").replace('/','\\') print("done creating paths") try: globals()[f"{cat}_final_df"][data['sls_data']['add_columns']].write_csv(globals()[f"endpath_{cat}"],sep='\t') except : write_eror_status = True print("CANNOT WRITE FILE") if write_eror_status == False: progress_label["text"] = "Successful!" 
else: progress_label["text"] = "Cannot write result into Excel, please close related Excel files and kill Excel processes from Task Manager" print("Process finished") except Exception as e: print("ERROR with message") print(e) progress_label["text"] = "Failed due to {}".format(e) finally : wb.Close(False) submit_btn["state"] = "normal" browse_btn["state"] = "normal" print("Total exec time : {}".format((time.time()-start_time)/60)) def file_submit_btn_click(): if (file_name_appear_label["text"]==""): progress_label["text"] = "Please input your file" elif option_menu.get() == '': progress_label["text"] = "Please select 3pl name" else: try : submit_btn["state"] = "disabled" browse_btn["state"] = "disabled" progress_label["text"] = "Loading . . ." name3pl = option_menu.get() ingest = IngestGenerator(file_name,file_path,name3pl) print(get_file_name(file_name_appear_label["text"])) threading.Thread(target=ingest.generate_result_csv).start() except Exception as e: print(e) A: Since the error was raised when writing CSV with polars and polars has its own dependencies, when it's in exe form it needs to include --recursive-copy-metadata as per documentation pyinstaller --recursive-copy-metadata polars --onefile -w "myfile.py"
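The answer identifies the missing polars metadata in the frozen build; a related practical point is that the bare except: in the script hides the real error message, which makes this kind of failure hard to see in a windowed exe. Below is a hedged sketch of logging the exception to a file instead; the helper and log-file names are hypothetical, and the rebuild command from the answer is repeated as a comment.

# Rebuild command from the answer ("myfile.py" is a placeholder):
#   pyinstaller --recursive-copy-metadata polars --onefile -w "myfile.py"
import logging

logging.basicConfig(filename="ingest.log", level=logging.INFO)

def write_result(df, path):
    """df is expected to expose write_csv(path, sep=...), as a polars DataFrame does."""
    try:
        df.write_csv(path, sep="\t")
        return True
    except Exception:                                     # not a bare except:
        logging.exception("CANNOT WRITE FILE %s", path)   # full traceback lands in ingest.log
        return False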
Try-except works when the file is run from the IDE, but not when compiled into an exe with PyInstaller
I created a python tool with Tkinter GUI. These are the pieces of the script. The problem is with the try-except in these lines of code. try: pldf_csv[data['sls_data']['add_columns']].write_csv(endpath,sep='\t') except: write_eror_status = True print("CANNOT WRITE FILE") If I run the python file via VSCode the try-except works like this But then if I compile the script with pyinstaller to exe, those line doesn't execute at all Full code class IngestGenerator: def __init__(self, fn,fd,n3pl): self.filename = fn self.filedir = fd self.name3pl = n3pl def generate_result_csv(self): """To extend 3PL please refer to comment with <(extend this)> note Don't forget to extend the yaml when extending 3PL """ start_time = time.time() # with open("columns 1a.yaml", 'r') as f: with open(os.path.join(os.path.dirname(__file__), 'columns 1a.yaml'), 'r') as f: data = yaml.load(f, Loader=SafeLoader) with tempfile.TemporaryDirectory() as tmpdirname: try : # create new list if there's new 3pl behavior (extend this) list_type_like = [data['behavior']['type_like'][i]['name3pl'] for i in range(0,len(data['behavior']['type_like']))] #collects names of 3pl which have categorical column to be divided based on write_eror_status = False status = False for i in range(0,len(data['behavior']['type_like'])): if data['behavior']['type_like'][i]['name3pl']==self.name3pl: #get the name of category column and the values of categories (extend this whole if-statement) list_types = data['behavior']['type_like'][i]['cats'] cat_column = data['behavior']['type_like'][i]['categorical_col'] status = True else : pass if status == False: #for logging to check if its in the list of type-like 3pl (extend this whole if else statement) print("3PL cannot be found on type-like 3PL list") else: print("3PL can be found on type-like 3PL list") try: for cat in list_types: #dynamic list creation for each category (extend this line only) globals()[f"{cat}_final_df"] = [] except : print("3pl isn't split based on it's categories") xl = win32com.client.Dispatch("Excel.Application") print("Cast to CSV first (win32com)") wb = xl.Workbooks.Open(self.filename,ReadOnly=1) xl.DisplayAlerts = False xl.Visible = False xl.ScreenUpdating = False xl.EnableEvents = False sheet_names = [sheet.Name for sheet in wb.Sheets if sheet.Name.lower() != 'summary'] print("Sheet names") print(sheet_names) for sheet_name in sheet_names: print("Reading sheet "+sheet_name) ws = wb.Worksheets(sheet_name) ws.SaveAs(tmpdirname+"\\myfile_tmp_{}.csv".format(sheet_name), 24) used_columns = data['sls_data'][f'{self.name3pl.lower()}_used_columns'] renamed_columns = data['sls_data'][f'{self.name3pl.lower()}_rename_columns'] rowskip = data['behavior']['row_skip'][f'{self.name3pl.lower()}'] list_dtypes = [str for u in used_columns] print("CP 1") scandf = pl.scan_csv(tmpdirname+"\\myfile_tmp_{}.csv".format(sheet_name),skip_rows= rowskip,n_rows=10) #scan csv to get column name print(scandf.columns) scanned_cols = scandf.columns.copy() used_cols_inDF = [] #collects column names dynamically for i in range(0,len(used_columns)): if type(used_columns[i]) is list: #check for each scanned-columns which contained in yaml used_columns, append if the scanned columns exist in yaml for sc in scanned_cols: for uc in used_columns[i]: if sc == uc: print(f"Column match : {uc}") used_cols_inDF.append(uc) else:pass else: for sc in scanned_cols: #purpose is same with the if statement if sc == used_columns[i]: print(f"Column match : {used_columns[i]}") used_cols_inDF.append(used_columns[i]) else:pass 
print(used_cols_inDF) """ JNT files have everchanging column names. Some files only have Total Ongkir, some only have Total, and some might have Total and Total Ongkir. If both exists then will use column Total Ongkir (extend this if necessary i.e for special cases of 3pl) """ if self.name3pl == 'JNT': if "Total" in used_cols_inDF and "Total Ongkir" in used_cols_inDF: used_cols_inDF.remove("Total") else:pass else:pass pldf_csv = pl.read_csv(tmpdirname+"\\myfile_tmp_{}.csv".format(sheet_name), columns = used_cols_inDF, new_columns = renamed_columns, dtypes = list_dtypes, skip_rows= rowskip ).filter(~pl.fold(acc=True, f=lambda acc, s: acc & s.is_null(), exprs=pl.all(),)) #filter rows with all null values print(pldf_csv) print(pldf_csv.columns) for v in data['sls_data']['add_columns']: #create dynamic columns if "3pl invoice distance (m) (optional)" in v.lower() or "3pl cod amount (optional)" in v.lower(): pldf_csv = pldf_csv.with_column(pl.Series(name="{}".format(v),values= np.zeros(shape=pldf_csv.shape[0]))) elif "3pl tn (mandatory)" in v.lower() or "weight (kg) (optional)" in v.lower(): pass elif "total fee (3pl) (optional)" in v.lower(): pldf_csv = pldf_csv.with_column(pl.col(v).str.replace_all(",","").str.strip().cast(pl.Float64,False).fill_null(0)) else : pldf_csv = pldf_csv.with_column(pl.lit(None).alias(v)) print(pldf_csv) endpath = self.filedir+"\{}_{}_{}.csv".format(get_file_name(file_name_appear_label["text"]),sheet_name,"IngestResult").replace('/','\\') if self.name3pl not in list_type_like: #(extend this line only) if self.name3pl == 'JNT': #(extend this line and its statement if necessary i.e for special cases of 3pl) pldf_csv = pldf_csv.with_column((pl.col("Total Fee (3PL) (Optional)")+pl.col("Biaya Asuransi").str.replace_all(",","").str.strip().cast(pl.Float64,False).fill_null(0)).alias("Total Fee (3PL) (Optional)")) else: pass print(pldf_csv) try: pldf_csv[data['sls_data']['add_columns']].write_csv(endpath,sep='\t') except: write_eror_status = True print("CANNOT WRITE FILE") elif self.name3pl in list_type_like: #(extend this line only) for cat in list_types: globals()[f"{cat}_final_df"].append(pldf_csv.filter(pl.col(cat_column).str.contains(cat))) if self.name3pl not in list_type_like: pass elif self.name3pl in list_type_like: for cat in list_types: globals()[f"{cat}_final_df"] = pl.concat(globals()[f"{cat}_final_df"]) print(globals()[f"{cat}_final_df"]) globals()[f"endpath_{cat}"] = self.filedir+"\{}_{}_{}.csv".format(get_file_name(file_name_appear_label["text"]),cat,"IngestResult").replace('/','\\') print("done creating paths") try: globals()[f"{cat}_final_df"][data['sls_data']['add_columns']].write_csv(globals()[f"endpath_{cat}"],sep='\t') except : write_eror_status = True print("CANNOT WRITE FILE") if write_eror_status == False: progress_label["text"] = "Successful!" 
else: progress_label["text"] = "Cannot write result into Excel, please close related Excel files and kill Excel processes from Task Manager" print("Process finished") except Exception as e: print("ERROR with message") print(e) progress_label["text"] = "Failed due to {}".format(e) finally : wb.Close(False) submit_btn["state"] = "normal" browse_btn["state"] = "normal" print("Total exec time : {}".format((time.time()-start_time)/60)) def file_submit_btn_click(): if (file_name_appear_label["text"]==""): progress_label["text"] = "Please input your file" elif option_menu.get() == '': progress_label["text"] = "Please select 3pl name" else: try : submit_btn["state"] = "disabled" browse_btn["state"] = "disabled" progress_label["text"] = "Loading . . ." name3pl = option_menu.get() ingest = IngestGenerator(file_name,file_path,name3pl) print(get_file_name(file_name_appear_label["text"])) threading.Thread(target=ingest.generate_result_csv).start() except Exception as e: print(e)
[ "Since the error was raised when writing CSV with polars and polars has its own dependencies, when it's in exe form it needs to include --recursive-copy-metadata as per documentation pyinstaller --recursive-copy-metadata polars --onefile -w \"myfile.py\"\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074609168_python.txt
Q: tqdm: extract time passed + time remaining? I have been going over the tqdm docs, but no matter where I look, I cannot find a method by which to extract the time passed and estimated time remaining fields (basically the center of the progress bar on each line: 00:00<00:02). 0%| | 0/200 [00:00<?, ?it/s] 4%|▎ | 7/200 [00:00<00:02, 68.64it/s] 8%|▊ | 16/200 [00:00<00:02, 72.87it/s] 12%|█▎ | 25/200 [00:00<00:02, 77.15it/s] 17%|█▋ | 34/200 [00:00<00:02, 79.79it/s] 22%|██▏ | 43/200 [00:00<00:01, 79.91it/s] 26%|██▌ | 52/200 [00:00<00:01, 80.23it/s] 30%|███ | 61/200 [00:00<00:01, 82.13it/s] .... 100%|██████████| 200/200 [00:02<00:00, 81.22it/s] tqdm works via essentially printing a dynamic progress bar anytime an update occurs, but is there a way to "just" print the 00:01 and 00:02 portions, so I could use them elsewhere in my Python program, such as in automatic stopping code that halts the process if it is taking too long? A: tqdm objects expose some information via the public property format_dict. from tqdm import tqdm with tqdm(total=100) as t: ... t.update() print(t.format_interval(t.format_dict['elapsed'])) Otherwise you could parse str(t).split() A: You can get elapsed and remaining time from format_dict and some calculations. t = tqdm(total=100) ... elapsed = t.format_dict["elapsed"] rate = t.format_dict["rate"] remaining = (t.total - t.n) / rate if rate and t.total else 0 # Seconds* A: Here's the answer to the time remaining and time elapsed question: from tqdm import tqdm from time import sleep with tqdm(total=100, bar_format="{l_bar}{bar} [ time left: {remaining}, time spent: {elapsed}]") as pbar: for i in loop: pbar.update(1) sleep(0.01) If needed to be worked with or printed elsewhere: elapsed = pbar.format_dict["elapsed"] remains = pbar.format_dict["remaining"] A: Edit: see the library maintainer's answer below. Turns out, it is possible to get this information in the public API. tqdm does not expose that information as part of its public API, and I don't recommend trying to hack your own into it. Then you would be depending on implementation details of tqdm that might change at any time. However, that shouldn't stop you from writing your own. It's easy enough to instrument a loop with a timer, and you can then abort the loop if it takes too long. Here's a quick, rough example that still uses tqdm to provide visual feedback: import time from tqdm import tqdm def long_running_function(n, timeout=5): start_time = time.time() for _ in tqdm(list(range(n))): time.sleep(1) # doing some expensive work... elapsed_time = time.time() - start_time if elapsed_time > timeout: raise TimeoutError("long_running_function took too long!") long_running_function(100, timeout=10) If you run this, the function will stop its own execution after 10 seconds by raising an exception. You could catch this exception at the call site and respond to it in whatever way you deem appropriate. If you want to be clever, you could even factor this out in a tqdm-like wrapper like this: def timed_loop(iterator, timeout): start_time = time.time() iterator = iter(iterator) while True: elapsed_time = time.time() - start_time if elapsed_time > timeout: raise TimeoutError("long_running_function took too long!") try: yield next(iterator) except StopIteration: pass def long_running_function(n, timeout=5): for _ in timed_loop(tqdm(list(range(n))), timeout=timeout): time.sleep(0.1) long_running_function(100, timeout=5)
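Pulling the answers together: the sketch below reads elapsed and remaining time from format_dict on every iteration and aborts once a threshold is exceeded, which is the stopping behaviour the question asks about. The threshold and the fake workload are made up; format_dict and format_interval are the public tqdm API the answers mention.

import time
from tqdm import tqdm

TIMEOUT = 5  # seconds, illustrative threshold

with tqdm(total=200) as t:
    for _ in range(200):
        time.sleep(0.01)  # stand-in for real work
        t.update(1)
        d = t.format_dict
        elapsed = d["elapsed"]                  # seconds since the bar started
        rate = d["rate"] or 0                   # it/s, may be None on the first iterations
        remaining = (d["total"] - d["n"]) / rate if rate else float("inf")
        if elapsed > TIMEOUT:
            raise TimeoutError(f"aborting after {t.format_interval(elapsed)}")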
tqdm: extract time passed + time remaining?
I have been going over the tqdm docs, but no matter where I look, I cannot find a method by which to extract the time passed and estimated time remaining fields (basically the center of the progress bar on each line: 00:00<00:02). 0%| | 0/200 [00:00<?, ?it/s] 4%|▎ | 7/200 [00:00<00:02, 68.64it/s] 8%|▊ | 16/200 [00:00<00:02, 72.87it/s] 12%|█▎ | 25/200 [00:00<00:02, 77.15it/s] 17%|█▋ | 34/200 [00:00<00:02, 79.79it/s] 22%|██▏ | 43/200 [00:00<00:01, 79.91it/s] 26%|██▌ | 52/200 [00:00<00:01, 80.23it/s] 30%|███ | 61/200 [00:00<00:01, 82.13it/s] .... 100%|██████████| 200/200 [00:02<00:00, 81.22it/s] tqdm works via essentially printing a dynamic progress bar anytime an update occurs, but is there a way to "just" print the 00:01 and 00:02 portions, so I could use them elsewhere in my Python program, such as in automatic stopping code that halts the process if it is taking too long?
[ "tqdm objects expose some information via the public property format_dict.\nfrom tqdm import tqdm\n\nwith tqdm(total=100) as t:\n ...\n t.update()\n print(t.format_interval(t.format_dict['elapsed']))\n\nOtherwise you could parse str(t).split()\n", "You can get elapsed and remaining time from format_dict and some calculations.\nt = tqdm(total=100)\n...\nelapsed = t.format_dict[\"elapsed\"]\nrate = t.format_dict[\"rate\"]\nremaining = (t.total - t.n) / rate if rate and t.total else 0 # Seconds*\n\n", "Here's the answer to the time remaining and time elapsed question:\nfrom tqdm import tqdm\nfrom time import sleep\n \nwith tqdm(total=100, bar_format=\"{l_bar}{bar} [ time left: {remaining}, time spent: {elapsed}]\") as pbar:\n for i in loop:\n pbar.update(1)\n sleep(0.01)\n\nIf needed to be worked with or printed elsewhere:\nelapsed = pbar.format_dict[\"elapsed\"]\nremains = pbar.format_dict[\"remaining\"]\n\n", "Edit: see the library maintainer's answer below. Turns out, it is possible to get this information in the public API.\n\ntqdm does not expose that information as part of its public API, and I don't recommend trying to hack your own into it. Then you would be depending on implementation details of tqdm that might change at any time.\nHowever, that shouldn't stop you from writing your own. It's easy enough to instrument a loop with a timer, and you can then abort the loop if it takes too long. Here's a quick, rough example that still uses tqdm to provide visual feedback:\nimport time\nfrom tqdm import tqdm\n\n\ndef long_running_function(n, timeout=5):\n start_time = time.time()\n\n for _ in tqdm(list(range(n))):\n time.sleep(1) # doing some expensive work...\n elapsed_time = time.time() - start_time\n if elapsed_time > timeout:\n raise TimeoutError(\"long_running_function took too long!\")\n\n\nlong_running_function(100, timeout=10)\n\nIf you run this, the function will stop its own execution after 10 seconds by raising an exception. You could catch this exception at the call site and respond to it in whatever way you deem appropriate.\n\nIf you want to be clever, you could even factor this out in a tqdm-like wrapper like this:\ndef timed_loop(iterator, timeout):\n start_time = time.time()\n iterator = iter(iterator)\n\n while True:\n elapsed_time = time.time() - start_time\n if elapsed_time > timeout:\n raise TimeoutError(\"long_running_function took too long!\")\n\n try:\n yield next(iterator)\n except StopIteration:\n pass\n\n\ndef long_running_function(n, timeout=5):\n for _ in timed_loop(tqdm(list(range(n))), timeout=timeout):\n time.sleep(0.1)\n\n\nlong_running_function(100, timeout=5)\n\n" ]
[ 17, 1, 0, -2 ]
[]
[]
[ "iterable", "printing", "progress_bar", "python", "tqdm" ]
stackoverflow_0056677267_iterable_printing_progress_bar_python_tqdm.txt
Q: Remove all combination of set at the end of string REGEX I want to write a REGEX that removes all combination of a group of characters from the end of strings. For instance removes "k", "t", "a", "u" and all of their combinations from the end of the string: Input: ["Rajakatu","Lapinlahdenktau","Nurmenkaut","Linnakoskenkuat"] Output: ["Raja","Lapinlahden","Nurmen","Linnakosken"] A: How about something like this [ktau]{4}\b? https://regex101.com/r/BVwTcs/1 This will match at the end of a word for those character combinations. For example, k, followed by u, followed by a, followed by t. This can also match aaaa so take that into account. It will match any 4 combinations of the characters at the end of the word. A: the below is my approach to it. Please try it: from itertools import permutations mystr = [["Rajakatu","Lapinlahdenktau","Nurmenkaut","Linnakoskenkuat"]] #to get the last four letters of whose permutations you need x = mystr[0][0] exclude= x[-4:] #get the permutations perms = [''.join(p) for p in permutations(exclude)] perms #remove the last for letters of the string if it lies in the perms for i in range(4): curr = mystr[0][i] last4 = curr[-4:] if(last4 in perms): mystr[0][i]=curr[:-4] print(mystr) OUTPUT: [['Raja', 'Lapinlahden', 'Nurmen', 'Linnakosken']]
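A short, runnable version of the first answer's character-class idea, anchored to the end of the string with $ rather than \b. Note the {4} quantifier assumes the unwanted suffix is always exactly four characters long, which holds for the sample data but not in general.

import re

names = ["Rajakatu", "Lapinlahdenktau", "Nurmenkaut", "Linnakoskenkuat"]
cleaned = [re.sub(r"[ktau]{4}$", "", n) for n in names]
print(cleaned)  # ['Raja', 'Lapinlahden', 'Nurmen', 'Linnakosken']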
Remove all combinations of a set of characters at the end of a string with REGEX
I want to write a REGEX that removes all combination of a group of characters from the end of strings. For instance removes "k", "t", "a", "u" and all of their combinations from the end of the string: Input: ["Rajakatu","Lapinlahdenktau","Nurmenkaut","Linnakoskenkuat"] Output: ["Raja","Lapinlahden","Nurmen","Linnakosken"]
[ "How about something like this [ktau]{4}\\b?\nhttps://regex101.com/r/BVwTcs/1\nThis will match at the end of a word for those character combinations.\nFor example, k, followed by u, followed by a, followed by t.\nThis can also match aaaa so take that into account.\nIt will match any 4 combinations of the characters at the end of the word.\n", "the below is my approach to it. Please try it:\nfrom itertools import permutations\n\nmystr = [[\"Rajakatu\",\"Lapinlahdenktau\",\"Nurmenkaut\",\"Linnakoskenkuat\"]]\n\n#to get the last four letters of whose permutations you need\nx = mystr[0][0]\nexclude= x[-4:]\n\n#get the permutations\nperms = [''.join(p) for p in permutations(exclude)]\nperms\n\n#remove the last for letters of the string if it lies in the perms\nfor i in range(4):\n curr = mystr[0][i]\n last4 = curr[-4:]\n \n if(last4 in perms):\n mystr[0][i]=curr[:-4]\n \nprint(mystr)\n\nOUTPUT: [['Raja', 'Lapinlahden', 'Nurmen', 'Linnakosken']]\n" ]
[ 1, 0 ]
[]
[]
[ "combinations", "filter", "python", "string" ]
stackoverflow_0074610334_combinations_filter_python_string.txt
Q: Need to store my function's results as a dictionary value in Python Pandas I have 2 functions that read a csv file and count the following as checks: number of rows in that csv number of rows that have a null value in the 'ID' column I am trying to create a dataframe that looks like this Checks Summary Findings Check #1 Number of records on file function #1 results (Number of records on file: 10) Check #2 Number of records missing an ID function #2 results (Number of records missing an ID: 2) function 1 looks like this: def function1(): with open('data.csv') as file: record_number = len(list(file)) print("Number of records on file:",record_number) function1() and outputs "Number of records on file: 10" function 2 looks like this: def function2(): df = pd.read_csv('data.csv', low_memory=False) missing_id = df["IDs"].isna().sum() print("Number of records missing an ID:", missing_id) function2() and outputs "Number of records missing an ID: 2" I attempt to create a dictionary first and create my dictionary table = { 'Checks' : ['Check #1', 'Check #2'], 'Summary' : ['Number of records on file', 'Number of records missing an ID'], 'Findings' : [function1, function2] } df = pd.DataFrame(table) df However, this is what the dataframe looks like: Checks Summary Findings Check #1 Number of records on file <function function1 at 0x7efd2d76a730> Check #2 Number of records missing an ID <function2 at 0x7efd25cd0b70> Is there any way to make it so that my Findings column outputs the actual results as seen above? A: The reason is that you're printing the function objects, and not their results: function1 != function1() So for your case you need: table = { 'Checks' : ['Check #1', 'Check #2'], 'Summary' : ['Number of records on file', 'Number of records missing an ID'], 'Findings' : [function1(), function2()] } df = pd.DataFrame(table) df Edit: Oh damn and I also missed what the other user commented. 
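A hedged, runnable version of what the answers describe: make both functions return their findings and call them (note the parentheses) when building the dict. The file name data.csv and the IDs column come from the question; the function names are illustrative.

import pandas as pd

def count_records(path="data.csv"):
    with open(path) as file:
        n = len(list(file))
    return f"Number of records on file: {n}"

def count_missing_ids(path="data.csv"):
    df = pd.read_csv(path, low_memory=False)
    missing = df["IDs"].isna().sum()
    return f"Number of records missing an ID: {missing}"

table = {
    "Checks": ["Check #1", "Check #2"],
    "Summary": ["Number of records on file", "Number of records missing an ID"],
    "Findings": [count_records(), count_missing_ids()],  # called, not just referenced
}
result = pd.DataFrame(table)
print(result)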
You definitely need to return a value from your functions as well :) A: You need to change your functions so they return values, not output them, that is do def function1(): with open('data.csv') as file: record_number = len(list(file)) return record_number and def function2(): df = pd.read_csv('data.csv', low_memory=False) return df["IDs"].isna().sum() and call these functions like so table = { 'Checks' : ['Check #1', 'Check #2'], 'Summary' : ['Number of records on file', 'Number of records missing an ID'], 'Findings' : [function1(), function2()] } df = pd.DataFrame(table) df A: For expected ouput add return with f-strings to both functions, in DataFrame call functions with parentheses: def function1(): with open('data.csv') as file: record_number = len(list(file)) return f"function #1 results (Number of records on file: {record_number})") def function2(): df = pd.read_csv('data.csv', low_memory=False) missing_id = df["IDs"].isna().sum() return f"function #2 results (Number of records missing an ID: {missing_id})") table = { 'Checks' : ['Check #1', 'Check #2'], 'Summary' : ['Number of records on file', 'Number of records missing an ID'], 'Findings' : [function1(), function2()] } df = pd.DataFrame(table) Solution with one function: def function(): with open('data.csv') as file: record_number = len(list(file)) missing_id = df["IDs"].isna().sum() return [f"function #1 results (Number of records on file: {record_number})"), f"function #2 results (Number of records missing an ID: {missing_id})")] table = { 'Checks' : ['Check #1', 'Check #2'], 'Summary' : ['Number of records on file', 'Number of records missing an ID'], 'Findings' : function() } df = pd.DataFrame(table)
Need to store my function's results as a dictionary value in Python Pandas
I have 2 functions that read a csv file and count the following as checks: number of rows in that csv number of rows that have a null value in the 'ID' column I am trying to create a dataframe that looks like this Checks Summary Findings Check #1 Number of records on file function #1 results (Number of records on file: 10) Check #2 Number of records missing an ID function #2 results (Number of records missing an ID: 2) function 1 looks like this: def function1(): with open('data.csv') as file: record_number = len(list(file)) print("Number of records on file:",record_number) function1() and outputs "Number of records on file: 10" function 2 looks like this: def function2(): df = pd.read_csv('data.csv', low_memory=False) missing_id = df["IDs"].isna().sum() print("Number of records missing an ID:", missing_id) function2() and outputs "Number of records missing an ID: 2" I attempt to create a dictionary first and create my dictionary table = { 'Checks' : ['Check #1', 'Check #2'], 'Summary' : ['Number of records on file', 'Number of records missing an ID'], 'Findings' : [function1, function2] } df = pd.DataFrame(table) df However, this is what the dataframe looks like: Checks Summary Findings Check #1 Number of records on file <function function1 at 0x7efd2d76a730> Check #2 Number of records missing an ID <function2 at 0x7efd25cd0b70> Is there any way to make it so that my Findings column outputs the actual results as seen above?
[ "The reason is that you're printing the function objects, and not their results:\nfunction1 != function1()\nSo for your case you need:\ntable = {\n 'Checks' : ['Check #1', 'Check #2'],\n 'Summary' : ['Number of records on file', 'Number of records missing an ID'],\n 'Findings' : [function1(), function2()]\n}\ndf = pd.DataFrame(table)\ndf\n\nEdit: Oh damn and I also missed what the other user commented. You definitely need to return a value from your functions as well :)\n", "You need to change your functions so they return values, not output them, that is do\ndef function1():\n with open('data.csv') as file:\n record_number = len(list(file))\n return record_number\n\nand\ndef function2():\n df = pd.read_csv('data.csv', low_memory=False)\n return df[\"IDs\"].isna().sum()\n\nand call these functions like so\ntable = {\n 'Checks' : ['Check #1', 'Check #2'],\n 'Summary' : ['Number of records on file', 'Number of records missing an ID'],\n 'Findings' : [function1(), function2()]\n}\ndf = pd.DataFrame(table)\ndf\n\n", "For expected ouput add return with f-strings to both functions, in DataFrame call functions with parentheses:\ndef function1():\n with open('data.csv') as file:\n record_number = len(list(file))\n return f\"function #1 results (Number of records on file: {record_number})\")\n\n\ndef function2():\n df = pd.read_csv('data.csv', low_memory=False)\n missing_id = df[\"IDs\"].isna().sum()\n return f\"function #2 results (Number of records missing an ID: {missing_id})\")\n\n\ntable = {\n 'Checks' : ['Check #1', 'Check #2'],\n 'Summary' : ['Number of records on file', 'Number of records missing an ID'],\n 'Findings' : [function1(), function2()]\n}\ndf = pd.DataFrame(table)\n\nSolution with one function:\ndef function():\n with open('data.csv') as file:\n record_number = len(list(file))\n missing_id = df[\"IDs\"].isna().sum()\n \n return [f\"function #1 results (Number of records on file: {record_number})\"),\n f\"function #2 results (Number of records missing an ID: {missing_id})\")]\n\n\ntable = {\n 'Checks' : ['Check #1', 'Check #2'],\n 'Summary' : ['Number of records on file', 'Number of records missing an ID'],\n 'Findings' : function()\n}\ndf = pd.DataFrame(table)\n\n" ]
[ 2, 2, 0 ]
[]
[]
[ "dataframe", "dictionary", "function", "pandas", "python" ]
stackoverflow_0074610722_dataframe_dictionary_function_pandas_python.txt
Q: How to move specific cells in an excel file to a new column with openpyxl in python I am trying to moving some specific cells to a designated location. As shown in the image, would like to move data in cells D3 to E2, D5 to E4,..... so on so for. Is it doable with openpyxl? Any suggestions would be greatly appreciate it!! Click to see the image Here is what I got so far. It worked per say. wb=xl.load_workbook(datafile) ws=wb['Sheet1'] #insert a new column #5 ws.insert_cols(idx=5,amount=1) wb.save(datafile) mr=ws.max_row #move cells for i in range (1,mr+1): v=ws.cell(row = i+1,column=4) ws.cell(row=i,column =5).value=v.value wb.save(datafile) wb.close Thanks for the help. I revised the codes and it worked well. I then wanted to delete the unwanted rows and it didn't work. Looks like it got into an infinite loop. Codes are shown here. What did I do wrong? wb=xl.load_workbook(datafile) ws=wb['Sheet1'] #insert a new column #5 ws.insert_cols(idx=5,amount=1) #Calculate total number of rows mr=ws.max_row #move cells for i in range (2,mr,2): ws.cell(row=i,column=5).value=ws.cell(row=i+1,column=4).value #delete unwanted rows for i in range (2,mr,2): ws.delete_rows(idx=i+1,amount=1) wb.save(datafile) A: Thats a good effort. Here are some comments to help and also on how to skip one row. Should generally only ever need to save the workbook once at the end of all your edits, unless you are making multiple copies. So the wb.save after the insert command is not necessary. There shouldn't be need to use 'mr + 1' here. Just the value 'mr' is fine, it matches the rows count. Not necessary to assign an intermediate variable for just copying a value from one cell to another. Assign the new cell value to the existing cell value in one line as shown. However it's fine if you want to use the variable 'v' as an intermediate. wb.close should be wb.close() however wont do anything on this workbook anyway Only affects read-only and write-only modes which is a method when opening the workbook. So not need to include that line. To skip rows you can set the stepping in the range. Stepping is the last number in the range params. So range(2, mr, 2) Means 'i' starts at 2, increases to max value [ws.max_row] in increments of 2. In this case since the max value is 7, i will be 2, 4 and 6 ... wb = xl.load_workbook(datafile) ws = wb['Sheet1'] # insert a new column #5 ws.insert_cols(idx=5, amount=1) # wb.save(datafile) # <--- not necessary just save at the end mr = ws.max_row # move cells # Move and delete the rows by making the changes from the bottom up for i in reversed(range(2,mr,2)): ws.cell(row=i, column=5).value = ws.cell(row=i + 1, column=4).value ws.delete_rows(idx=i + 1, amount=1) wb.save(datafile) # wb.close # <-- Not needed, should have brackets anyway #---- Additional Information ----# The issue with the deletion is; when you delete a row the rows below shift up immediately. Therefore if you delete row 2 then row 3 becomes row 2 which means starting from the top and deleting down in a pattern like this usually wont work. To delete the unncessary rows its easier to run the loop in reverse so deletion of a row doesn't affect the rows you have yet to change.
How to move specific cells in an excel file to a new column with openpyxl in python
I am trying to moving some specific cells to a designated location. As shown in the image, would like to move data in cells D3 to E2, D5 to E4,..... so on so for. Is it doable with openpyxl? Any suggestions would be greatly appreciate it!! Click to see the image Here is what I got so far. It worked per say. wb=xl.load_workbook(datafile) ws=wb['Sheet1'] #insert a new column #5 ws.insert_cols(idx=5,amount=1) wb.save(datafile) mr=ws.max_row #move cells for i in range (1,mr+1): v=ws.cell(row = i+1,column=4) ws.cell(row=i,column =5).value=v.value wb.save(datafile) wb.close Thanks for the help. I revised the codes and it worked well. I then wanted to delete the unwanted rows and it didn't work. Looks like it got into an infinite loop. Codes are shown here. What did I do wrong? wb=xl.load_workbook(datafile) ws=wb['Sheet1'] #insert a new column #5 ws.insert_cols(idx=5,amount=1) #Calculate total number of rows mr=ws.max_row #move cells for i in range (2,mr,2): ws.cell(row=i,column=5).value=ws.cell(row=i+1,column=4).value #delete unwanted rows for i in range (2,mr,2): ws.delete_rows(idx=i+1,amount=1) wb.save(datafile)
[ "Thats a good effort.\nHere are some comments to help and also on how to skip one row.\n\nShould generally only ever need to save the workbook once at the end of all your edits, unless you are making multiple copies. So the wb.save after the insert command is not necessary.\nThere shouldn't be need to use 'mr + 1' here. Just the value 'mr' is fine, it matches the rows count.\nNot necessary to assign an intermediate variable for just copying a value from one cell to another. Assign the new cell value to the existing cell value in one line as shown. However it's fine if you want to use the variable 'v' as an intermediate.\nwb.close should be wb.close() however wont do anything on this workbook anyway Only affects read-only and write-only modes which is a method when opening the workbook. So not need to include that line.\n\n\n\nTo skip rows you can set the stepping in the range. Stepping is the last number in the range params. So\nrange(2, mr, 2)\n\nMeans 'i' starts at 2, increases to max value [ws.max_row] in increments of 2.\nIn this case since the max value is 7, i will be 2, 4 and 6\n\n...\nwb = xl.load_workbook(datafile)\nws = wb['Sheet1']\n\n# insert a new column #5\nws.insert_cols(idx=5, amount=1)\n# wb.save(datafile) # <--- not necessary just save at the end\n\nmr = ws.max_row\n\n# move cells\n# Move and delete the rows by making the changes from the bottom up\nfor i in reversed(range(2,mr,2)):\n ws.cell(row=i, column=5).value = ws.cell(row=i + 1, column=4).value\n ws.delete_rows(idx=i + 1, amount=1)\n\nwb.save(datafile)\n# wb.close # <-- Not needed, should have brackets anyway\n\n\n#---- Additional Information ----#\nThe issue with the deletion is; when you delete a row the rows below shift up immediately. Therefore if you delete row 2 then row 3 becomes row 2 which means starting from the top and deleting down in a pattern like this usually wont work.\nTo delete the unncessary rows its easier to run the loop in reverse so deletion of a row doesn't affect the rows you have yet to change.\n" ]
[ 0 ]
[]
[]
[ "excel", "move", "openpyxl", "python" ]
stackoverflow_0074539605_excel_move_openpyxl_python.txt
Q: How to validate random choice within Python? I've been trying to make this dumb little program that spits out a random quote to the user from either Kingdom Hearts or Alan Wake (both included in .txt files) and I've hit a snag. I've made the program select a random quote from either of the text files (residing in lists) and to finish I just need to validate whether the user input matches the random selection. import os import sys import random with open(os.path.join(sys.path[0], "alanwake.txt"), "r", encoding='utf8') as txt1: wake = [] for line in txt1: wake.append(line) with open(os.path.join(sys.path[0], "kh.txt"), "r", encoding='utf8') as txt2: kh = [] for line in txt2: kh.append(line) random_kh = random.choice(kh) random_wake = random.choice(wake) choices = [random_kh, random_wake] quote = random.choice(choices) print(quote) print("Is this quote from Kingdom Hearts or Alan Wake?") inp = input().lower() This is what I've got so far. I did try something like: if quote == choices[0] and inp == "kingdom hearts": print("Correct!") if quote == choices[1] and inp == "alan wake": print("Correct!") else: print("Incorrect") But found that it just always printed as incorrect. Any help would be appreciated! I'm very new to programming. A: You are working with more than 1 if-statement this means that the programm is gonna check both of them individually also, check the first one if is correct is going to print 'correct' then is gonna check the next if-statement and if this one is false is gonna print "Incorrect", try doing this if quote == choices[0] and inp == "kingdom hearts": print("correct") elif quote == choices[1] and inp == "alan wake": print("correct") else: print("incorrect") Here you check all the option and when one of them is correct it stop comparing and print the msg.
How to validate random choice within Python?
I've been trying to make this dumb little program that spits out a random quote to the user from either Kingdom Hearts or Alan Wake (both included in .txt files) and I've hit a snag. I've made the program select a random quote from either of the text files (residing in lists) and to finish I just need to validate whether the user input matches the random selection. import os import sys import random with open(os.path.join(sys.path[0], "alanwake.txt"), "r", encoding='utf8') as txt1: wake = [] for line in txt1: wake.append(line) with open(os.path.join(sys.path[0], "kh.txt"), "r", encoding='utf8') as txt2: kh = [] for line in txt2: kh.append(line) random_kh = random.choice(kh) random_wake = random.choice(wake) choices = [random_kh, random_wake] quote = random.choice(choices) print(quote) print("Is this quote from Kingdom Hearts or Alan Wake?") inp = input().lower() This is what I've got so far. I did try something like: if quote == choices[0] and inp == "kingdom hearts": print("Correct!") if quote == choices[1] and inp == "alan wake": print("Correct!") else: print("Incorrect") But found that it just always printed as incorrect. Any help would be appreciated! I'm very new to programming.
[ "You are working with more than 1 if-statement this means that the programm is gonna check both of them individually also, check the first one if is correct is going to print 'correct' then is gonna check the next if-statement and if this one is false is gonna print \"Incorrect\", try doing this\nif quote == choices[0] and inp == \"kingdom hearts\":\n print(\"correct\")\nelif quote == choices[1] and inp == \"alan wake\":\n print(\"correct\")\nelse:\n print(\"incorrect\")\n\nHere you check all the option and when one of them is correct it stop comparing and print the msg.\n" ]
[ 1 ]
[]
[]
[ "if_statement", "list", "python", "random", "string" ]
stackoverflow_0074610751_if_statement_list_python_random_string.txt
Q: TypeError: pointplot() got an unexpected keyword argument This bit of code used to run without the error notification that follows the code. Any clues as to why this is happening here? fig, ax = plt.subplots(1,1,figsize=(16,5)) w = sns.pointplot(y='DelayTime',x='Weather2',data=df[['Weather2','DelayTime','Severity']], hue = 'Severity' ,ci=None , order= top_10_weather.index, #kind = 'point', height=4, aspect=2 , palette='nipy_spectral', ax= ax) ax.grid(axis='y', linestyle='-', alpha=0.4) # w = sns.lineplot(x='Weather2', y='DelayTime' , data=df[['Weather2','DelayTime']] , hue_order= top_15_weather.index) plt.xlabel("Weather conditions", fontdict = {'fontsize':12 , 'color':'MidnightBlue'} ) plt.xticks(fontsize=12 , rotation = 45) plt.ylabel("Delay Times (in Hours)") ax.set_title('Delay times for different Weather Conditions', fontdict = {'fontsize':16 , 'color':'MidnightBlue'}, pad=15) fig.tight_layout() Error Statement Given: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) ~\AppData\Local\Temp\ipykernel_23484\2440897390.py in <module> 1 fig, ax = plt.subplots(1,1,figsize=(16,5)) 2 ----> 3 w = sns.pointplot(y='DelayTime',x='Weather2',data=df[['Weather2','DelayTime','Severity']], 4 hue = 'Severity' 5 ,ci=None , TypeError: pointplot() got an unexpected keyword argument 'height' The only reason i have for the code above to produce an error is a Anaconda Update as the code below worked before the update and it appears to be the same. fig, ax = plt.subplots(1,1,figsize=(16,5)) w = sns.pointplot(y='DelayTime',x='Weather2',data=df[['Weather2','DelayTime','Severity']], hue = 'Severity' ,ci=None , order= top_10_weather.index, #kind = 'point', height=4, aspect=2 , palette='nipy_spectral', ax= ax) ax.grid(axis='y', linestyle='-', alpha=0.4) # w = sns.lineplot(x='Weather2', y='DelayTime' , data=df[['Weather2','DelayTime']] , hue_order= top_15_weather.index) plt.xlabel("Weather conditions", fontdict = {'fontsize':12 , 'color':'MidnightBlue'} ) plt.xticks(fontsize=12 , rotation = 45) plt.ylabel("Delay Times (in Hours)") ax.set_title('Delay times for different Weather Conditions', fontdict = {'fontsize':16 , 'color':'MidnightBlue'}, pad=15) fig.tight_layout() A: I don't know why it worked before, but after reading the seaborn documentation ( documentation ) the parameter height doesn't exist when you use pointplot.
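The error comes from passing figure-level arguments (height, aspect) to the axes-level pointplot. Below is a hedged sketch of the two usual ways around it, assuming seaborn 0.12 or newer (where ci= was replaced by errorbar=) and reusing df and the column names from the question.

import seaborn as sns
import matplotlib.pyplot as plt

# Option 1: stay axes-level; the figure size from plt.subplots already controls the size.
fig, ax = plt.subplots(figsize=(16, 5))
sns.pointplot(data=df, x="Weather2", y="DelayTime", hue="Severity",
              errorbar=None, palette="nipy_spectral", ax=ax)

# Option 2: switch to the figure-level catplot, which does accept height and aspect.
sns.catplot(data=df, x="Weather2", y="DelayTime", hue="Severity",
            kind="point", height=4, aspect=2, errorbar=None, palette="nipy_spectral")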
TypeError: pointplot() got an unexpected keyword argument
This bit of code used to run without the error notification that follows the code. Any clues as to why this is happening here? fig, ax = plt.subplots(1,1,figsize=(16,5)) w = sns.pointplot(y='DelayTime',x='Weather2',data=df[['Weather2','DelayTime','Severity']], hue = 'Severity' ,ci=None , order= top_10_weather.index, #kind = 'point', height=4, aspect=2 , palette='nipy_spectral', ax= ax) ax.grid(axis='y', linestyle='-', alpha=0.4) # w = sns.lineplot(x='Weather2', y='DelayTime' , data=df[['Weather2','DelayTime']] , hue_order= top_15_weather.index) plt.xlabel("Weather conditions", fontdict = {'fontsize':12 , 'color':'MidnightBlue'} ) plt.xticks(fontsize=12 , rotation = 45) plt.ylabel("Delay Times (in Hours)") ax.set_title('Delay times for different Weather Conditions', fontdict = {'fontsize':16 , 'color':'MidnightBlue'}, pad=15) fig.tight_layout() Error Statement Given: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) ~\AppData\Local\Temp\ipykernel_23484\2440897390.py in <module> 1 fig, ax = plt.subplots(1,1,figsize=(16,5)) 2 ----> 3 w = sns.pointplot(y='DelayTime',x='Weather2',data=df[['Weather2','DelayTime','Severity']], 4 hue = 'Severity' 5 ,ci=None , TypeError: pointplot() got an unexpected keyword argument 'height' The only reason i have for the code above to produce an error is a Anaconda Update as the code below worked before the update and it appears to be the same. fig, ax = plt.subplots(1,1,figsize=(16,5)) w = sns.pointplot(y='DelayTime',x='Weather2',data=df[['Weather2','DelayTime','Severity']], hue = 'Severity' ,ci=None , order= top_10_weather.index, #kind = 'point', height=4, aspect=2 , palette='nipy_spectral', ax= ax) ax.grid(axis='y', linestyle='-', alpha=0.4) # w = sns.lineplot(x='Weather2', y='DelayTime' , data=df[['Weather2','DelayTime']] , hue_order= top_15_weather.index) plt.xlabel("Weather conditions", fontdict = {'fontsize':12 , 'color':'MidnightBlue'} ) plt.xticks(fontsize=12 , rotation = 45) plt.ylabel("Delay Times (in Hours)") ax.set_title('Delay times for different Weather Conditions', fontdict = {'fontsize':16 , 'color':'MidnightBlue'}, pad=15) fig.tight_layout()
[ "I don't know why it worked before, but after reading the seaborn documentation ( documentation ) the parameter height doesn't exist when you use pointplot.\n" ]
[ 0 ]
[]
[]
[ "matplotlib", "python", "seaborn" ]
stackoverflow_0074610269_matplotlib_python_seaborn.txt
Q: unable to find the path for uploading a file using streamlit python code Im writting a simple python application where the user selects a file from their local file manager and tries to upload using strealit Im able to succesfully take the file the user had given using streamlit.uploader and stored the file in a temp directory from the stramlit app folder but the issue is i cant give the path of the file of the file stored in the newly created directory in order to send the application into my gcp clouds bucket Adding my snippet below any help is appreciated :) import streamlit as st from google.oauth2 import service_account from google.cloud import storage import os from os import listdir from os.path import isfile, join from pathlib import Path from PIL import Image, ImageOps bucketName=('survey-appl-dev-public') # Create API client. credentials = service_account.Credentials.from_service_account_info( st.secrets["gcp_service_account"] ) client = storage.Client(credentials=credentials) #create a bucket object to get bucket details bucket = client.get_bucket(bucketName) file = st.file_uploader("Upload An file") def main(): if file is not None: file_details = {"FileName":file.name,"FileType":file.type} st.write(file_details) #img = load_image(image_file) #st.image(img, caption='Sunrise by the mountains') with open(os.path.join("tempDir",file.name),"wb") as f: f.write(file.getbuffer()) st.success("Saved File") object_name_in_gcs_bucket = bucket.blob(".",file.name) object_name_in_gcs_bucket.upload_from_filename("tempDir",file.name) if __name__ == "__main__": main() ive tried importing the path of the file using cwd command and also tried os library for file path but nothing worked edited: All i wanted to implement is make a file upload that is selected by customer using the dropbox of file_uploader option im able to save the file into a temporary directory after the file is selected using the file.getbuffer as shown in the code but i couldnt amke the code uploaded into the gcs bucket since its refering as str cannnot be converted into int while i press the upload button may be its the path issue "the code is unable to find the path of the file stored in the temp directory " but im unable to figure iut how to give the path to the upload function error coding im facing TypeError: '>' not supported between instances of 'str' and 'int' Traceback: File "/home/raviteja/.local/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 564, in _run_script exec(code, module.__dict__) File "/home/raviteja/test/streamlit/test.py", line 43, in <module> main() File "/home/raviteja/test/streamlit/test.py", line 29, in main object_name_in_gcs_bucket = bucket.blob(".",file.name) File "/home/raviteja/.local/lib/python3.10/site-packages/google/cloud/storage/bucket.py", line 795, in blob return Blob( File "/home/raviteja/.local/lib/python3.10/site-packages/google/cloud/storage/blob.py", line 219, in __init__ self.chunk_size = chunk_size # Check that setter accepts value. File "/home/raviteja/.local/lib/python3.10/site-packages/google/cloud/storage/blob.py", line 262, in chunk_size if value is not None and value > 0 and value % self._CHUNK_SIZE_MULTIPLE != 0: A: Thanks all for response after days of struggle at last I've figured out the mistake im making. I dont know if I'm right or wrong correct me if I'm wrong but this worked for me: object_name_in_gcs_bucket = bucket.blob("path-to-upload"+file.name) Changing the , to + between the filepath and filename made my issue solve. 
Sorry for the small issue. Happy that I could solve it. A: You have some variables in your code and I guess you know what they represent. Try this out else make sure you add every relevant information to the question and the code snippet. def main(): file = st.file_uploader("Upload file") if file is not None: file_details = {"FileName":file.name,"FileType":file.type} st.write(file_details) file_path = os.path.join("tempDir/", file.name) with open(file_path,"wb") as f: f.write(file.getbuffer()) st.success("Saved File") print(file_path) def upload(): file_name = file_path read_file(file_name) st.write(file_name) st.session_state["upload_state"] = "Saved successfully!" object_name_in_gcs_bucket = bucket.blob("gcp-bucket-destination-path"+ file.name) object_name_in_gcs_bucket.upload_from_filename(file_path) st.write("Youre uploading to bucket", bucketName) st.button("Upload file to GoogleCloud", on_click=upload) if __name__ == "__main__": main() A: This one works for me. Solution 1 import streamlit as st from google.oauth2 import service_account from google.cloud import storage import os STREAMLIT_SCRIPT_FILE_PATH = os.path.dirname(os.path.abspath(__file__)) credentials = service_account.Credentials.from_service_account_info( st.secrets["gcp_service_account"] ) client = storage.Client(credentials=credentials) def main(): bucketName = 'survey-appl-dev-public' file = st.file_uploader("Upload file") if file is not None: file_details = {"FileName":file.name,"FileType":file.type} st.write(file_details) with open(os.path.join("tempDir", file.name), "wb") as f: f.write(file.getbuffer()) st.success("Saved File") bucket = client.bucket(bucketName) object_name_in_gcs_bucket = bucket.blob(file.name) # src_relative = f'./tempDir/{file.name}' # also works src_absolute = f'{STREAMLIT_SCRIPT_FILE_PATH}/tempDir/{file.name}' object_name_in_gcs_bucket.upload_from_filename(src_absolute) if __name__ == '__main__': main() Solution 2 Instead of saving the file to disk, use the file bytes directly using upload_from_string(). References: Google Cloud upload_from_string Streamlit file uploader credentials = service_account.Credentials.from_service_account_info( st.secrets["gcp_service_account"] ) client = storage.Client(credentials=credentials) def gcs_upload_data(): bucket_name = 'your_gcs_bucket_name' file = st.file_uploader("Upload file") if file is not None: fname = file.name ftype = file.type file_details = {"FileName":fname,"FileType":ftype} st.write(file_details) # Define gcs bucket. bucket = client.bucket(bucket_name) bblob = bucket.blob(fname) # Upload the bytes directly instead of a disk file. bblob.upload_from_string(file.getvalue(), ftype) if __name__ == '__main__': gcs_upload_data()
unable to find the path for uploading a file using streamlit python code
Im writting a simple python application where the user selects a file from their local file manager and tries to upload using strealit Im able to succesfully take the file the user had given using streamlit.uploader and stored the file in a temp directory from the stramlit app folder but the issue is i cant give the path of the file of the file stored in the newly created directory in order to send the application into my gcp clouds bucket Adding my snippet below any help is appreciated :) import streamlit as st from google.oauth2 import service_account from google.cloud import storage import os from os import listdir from os.path import isfile, join from pathlib import Path from PIL import Image, ImageOps bucketName=('survey-appl-dev-public') # Create API client. credentials = service_account.Credentials.from_service_account_info( st.secrets["gcp_service_account"] ) client = storage.Client(credentials=credentials) #create a bucket object to get bucket details bucket = client.get_bucket(bucketName) file = st.file_uploader("Upload An file") def main(): if file is not None: file_details = {"FileName":file.name,"FileType":file.type} st.write(file_details) #img = load_image(image_file) #st.image(img, caption='Sunrise by the mountains') with open(os.path.join("tempDir",file.name),"wb") as f: f.write(file.getbuffer()) st.success("Saved File") object_name_in_gcs_bucket = bucket.blob(".",file.name) object_name_in_gcs_bucket.upload_from_filename("tempDir",file.name) if __name__ == "__main__": main() ive tried importing the path of the file using cwd command and also tried os library for file path but nothing worked edited: All i wanted to implement is make a file upload that is selected by customer using the dropbox of file_uploader option im able to save the file into a temporary directory after the file is selected using the file.getbuffer as shown in the code but i couldnt amke the code uploaded into the gcs bucket since its refering as str cannnot be converted into int while i press the upload button may be its the path issue "the code is unable to find the path of the file stored in the temp directory " but im unable to figure iut how to give the path to the upload function error coding im facing TypeError: '>' not supported between instances of 'str' and 'int' Traceback: File "/home/raviteja/.local/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 564, in _run_script exec(code, module.__dict__) File "/home/raviteja/test/streamlit/test.py", line 43, in <module> main() File "/home/raviteja/test/streamlit/test.py", line 29, in main object_name_in_gcs_bucket = bucket.blob(".",file.name) File "/home/raviteja/.local/lib/python3.10/site-packages/google/cloud/storage/bucket.py", line 795, in blob return Blob( File "/home/raviteja/.local/lib/python3.10/site-packages/google/cloud/storage/blob.py", line 219, in __init__ self.chunk_size = chunk_size # Check that setter accepts value. File "/home/raviteja/.local/lib/python3.10/site-packages/google/cloud/storage/blob.py", line 262, in chunk_size if value is not None and value > 0 and value % self._CHUNK_SIZE_MULTIPLE != 0:
[ "Thanks all for response after days of struggle at last I've figured out the mistake im making.\nI dont know if I'm right or wrong correct me if I'm wrong but this worked for me:\n object_name_in_gcs_bucket = bucket.blob(\"path-to-upload\"+file.name)\n\nChanging the , to + between the filepath and filename made my issue solve.\nSorry for the small issue.\nHappy that I could solve it.\n", "You have some variables in your code and I guess you know what they represent. Try this out else make sure you add every relevant information to the question and the code snippet.\ndef main():\n file = st.file_uploader(\"Upload file\")\n if file is not None:\n file_details = {\"FileName\":file.name,\"FileType\":file.type}\n st.write(file_details)\n \n file_path = os.path.join(\"tempDir/\", file.name)\n with open(file_path,\"wb\") as f: \n f.write(file.getbuffer()) \n st.success(\"Saved File\")\n\n print(file_path)\n\n\n def upload():\n file_name = file_path\n read_file(file_name)\n st.write(file_name)\n\n st.session_state[\"upload_state\"] = \"Saved successfully!\"\n object_name_in_gcs_bucket = bucket.blob(\"gcp-bucket-destination-path\"+ file.name)\n object_name_in_gcs_bucket.upload_from_filename(file_path)\n \n st.write(\"Youre uploading to bucket\", bucketName)\n st.button(\"Upload file to GoogleCloud\", on_click=upload)\n\n\nif __name__ == \"__main__\":\n main() \n\n", "This one works for me.\nSolution 1\nimport streamlit as st\nfrom google.oauth2 import service_account\nfrom google.cloud import storage\nimport os\n\nSTREAMLIT_SCRIPT_FILE_PATH = os.path.dirname(os.path.abspath(__file__))\n\ncredentials = service_account.Credentials.from_service_account_info(\n st.secrets[\"gcp_service_account\"]\n)\nclient = storage.Client(credentials=credentials)\n\ndef main():\n bucketName = 'survey-appl-dev-public'\n file = st.file_uploader(\"Upload file\")\n if file is not None:\n file_details = {\"FileName\":file.name,\"FileType\":file.type}\n st.write(file_details)\n\n with open(os.path.join(\"tempDir\", file.name), \"wb\") as f:\n f.write(file.getbuffer())\n\n st.success(\"Saved File\")\n\n bucket = client.bucket(bucketName)\n object_name_in_gcs_bucket = bucket.blob(file.name)\n\n # src_relative = f'./tempDir/{file.name}' # also works\n src_absolute = f'{STREAMLIT_SCRIPT_FILE_PATH}/tempDir/{file.name}'\n object_name_in_gcs_bucket.upload_from_filename(src_absolute)\n\nif __name__ == '__main__':\n main()\n\nSolution 2\nInstead of saving the file to disk, use the file bytes directly using upload_from_string().\nReferences:\nGoogle Cloud upload_from_string\nStreamlit file uploader\ncredentials = service_account.Credentials.from_service_account_info(\n st.secrets[\"gcp_service_account\"]\n)\nclient = storage.Client(credentials=credentials)\n\ndef gcs_upload_data():\n bucket_name = 'your_gcs_bucket_name'\n\n file = st.file_uploader(\"Upload file\")\n if file is not None:\n fname = file.name\n ftype = file.type\n\n file_details = {\"FileName\":fname,\"FileType\":ftype}\n st.write(file_details)\n\n # Define gcs bucket.\n bucket = client.bucket(bucket_name)\n bblob = bucket.blob(fname)\n\n # Upload the bytes directly instead of a disk file.\n bblob.upload_from_string(file.getvalue(), ftype)\n\nif __name__ == '__main__':\n gcs_upload_data()\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "api", "google_cloud_storage", "python", "python_3.x", "streamlit" ]
stackoverflow_0074574390_api_google_cloud_storage_python_python_3.x_streamlit.txt
Q: 403 Forbidden in airflow DAG Triggering API

When I am trying to call the API from Postman in an Airflow DAG, I am facing a 403 Forbidden error. I have enabled the headers for basic authentication with the username and password in Postman. In the airflow.cfg file, I have enabled auth_backend = airflow.contrib.auth.backends.password_auth. This error occurs when I attempt to work solely in Postman. When I copy the same URL and try it directly in the browser, I am able to access the link. I'm having trouble with authorization now that I've enabled authentication. I attempted to use the curl command but received the same forbidden error. The Airflow version is 1.10.

A: The basic auth seems fine; it is already base64 encoded. A 403 means you are authenticated, but this specific action is forbidden for your user. In Airflow there are different roles (admin / DAG manager / operator), and not all roles are allowed to perform DAG operations. Can you specify the user role and the operations you are trying to perform? Keep in mind that a base64 auth string can easily be decoded to plain text, so anyone who sees it can read your username and password.
In the picture you have shared, the verb you are using is POST; opening the link in the browser is probably a GET request, which is different in terms of the permissions required.
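For reference, a minimal sketch of the same call made from Python instead of Postman, against the experimental REST API that Airflow 1.10 ships; the host, DAG id and credentials are placeholders, and whether basic auth is accepted depends on the [api] auth_backend setting (configured separately from the webserver's password_auth backend), so treat it as an illustration rather than a guaranteed fix:

import requests
from requests.auth import HTTPBasicAuth

# Placeholder values - replace with your own Airflow host, DAG id and credentials.
AIRFLOW_HOST = "http://localhost:8080"
DAG_ID = "my_dag"

# Airflow 1.10 exposes this experimental endpoint for triggering DAG runs.
url = f"{AIRFLOW_HOST}/api/experimental/dags/{DAG_ID}/dag_runs"

response = requests.post(
    url,
    json={"conf": {}},                           # optional run configuration; exact format may vary across 1.10.x
    auth=HTTPBasicAuth("username", "password"),  # requests builds the base64 basic-auth header
)

# 200/201 means the run was created; a 403 usually points at the API auth backend
# or at the role of the authenticated user, as described in the answer above.
print(response.status_code, response.text)

If this also returns 403, checking the role of the API user and the [api] auth_backend value are the next things to look at.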
403 Forbidden in airflow DAG Triggering API
When I am trying to call the API from POSTMAN in Airflow DAG, I am facing a 403 Forbidden error. I have enabled the headers for basic authentication with the username and password in Postman. In the airflow.cfg file, I have enabled auth_backend = airflow.contrib.auth.backends.password_auth. This error occurs when I attempt to work solely in Postman. When I copy the same URL and try it directly in the browser, I am able to access the link. I'm having trouble with authorization now that I've enabled authentication. I attempted to use the curl command but received the same forbidden error. The airflow version is 1.10.
[ "The basic auth seems fine, it is base64 encoded already. 403 means you are authorized in the application but this specific action is forbidden. In airflow there are different roles admin/dag manager/operator and not all roles are allowed to do DAG operations. Can you specify the user role and operations you try to do? Have in mind that base64 auth string can be easily decoded to plain text and people can see your username and password.\nIn the picture you have shared the verb you are using is POST, opening the link in tbe browser is probably a GET operation which is different in terms of permissions required.\n" ]
[ 2 ]
[]
[]
[ "airflow", "postman", "python" ]
stackoverflow_0074609067_airflow_postman_python.txt
Q: Python Azure function triggered by Blob storage printing file name incorrectly I'm triggering an Azure function with a blob trigger event. A container sample-workitems has a file base.csv and receives a new file new.csv. I'm reading base.csv from the sample-workitems and new.csv from InputStream for the same container. def main(myblob: func.InputStream, base: func.InputStream): logging.info(f"Python blob trigger function processed blob \n" f"Name: {myblob.name}\n" f"Blob Size: {myblob.length} bytes") logging.info(f"Base file info \n" f"Name: {base.name}\n" f"Blob Size: {base.length} bytes") df_base = pd.read_csv(BytesIO(base.read())) df_new = pd.read_csv(BytesIO(myblob.read())) print(df_new.head()) print("prnting base dataframe") print(df_base.head()) Output: Python blob trigger function processed blob Name: samples-workitems/new.csv Blob Size: None bytes Base file info Name: samples-workitems/new.csv Blob Size: None bytes first 5 rows of df_new (cannot show data here) prnting base dataframe first 5 rows of df_base (cannot show data here) Even though both files show their own content when printing but myblob.name and base.name has the same value new.csv which is unexpected. myblob.name would have new.csv while base.name would have base.csv function.json { "scriptFile": "__init__.py", "bindings": [ { "name": "myblob", "type": "blobTrigger", "direction": "in", "path": "samples-workitems/{name}", "connection": "my_storage" }, { "type": "blob", "name": "base", "path": "samples-workitems/base.csv", "connection": "my_storage", "direction": "in" } ] } A: I have reproduced in my environment and the below code worked for me and I followed code of @SwethaKandikonda 's SO-thread init.py: import logging from azure.storage.blob import BlockBlobService import azure.functions as func def main(myblob: func.InputStream): logging.info(f"Python blob trigger function processed blob \n" f"Name: {myblob.name}\n" f"Blob Size: {myblob.length} bytes") file="" fileContent="" blob_service = BlockBlobService(account_name="rithwikstor",account_key="EJ7xCyq2+AStqiar7Q==") containername="samples-workitems" generator = blob_service.list_blobs(container_name=containername) for blob in generator: file=blob_service.get_blob_to_text(containername,blob.name) logging.info(blob.name) logging.info(file.content) fileContent+=blob.name+'\n'+file.content+'\n\n' function.json: { "scriptFile": "__init__.py", "bindings": [ { "name": "myblob", "type": "blobTrigger", "direction": "in", "path": "samples-workitems/{name}", "connection": "rithwikstor_STORAGE" } ] } local.settings.json: { "IsEncrypted": false, "Values": { "AzureWebJobsStorage": "Connection String Of storage account", "FUNCTIONS_WORKER_RUNTIME": "python", "rithwikstor_STORAGE": "Connection String Of storage account" } } Host.json: { "version": "2.0", "logging": { "applicationInsights": { "samplingSettings": { "isEnabled": true, "excludedTypes": "Request" } } }, "extensionBundle": { "id": "Microsoft.Azure.Functions.ExtensionBundle", "version": "[3.*, 4.0.0)" }, "concurrency": { "dynamicConcurrencyEnabled": true, "snapshotPersistenceEnabled": true } } Then i added blobs as below: Output: Please try to file the above process and code, you will get correct output as I have got.
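As an alternative sketch only, the fixed base.csv blob can also be read inside the trigger with the current azure-storage-blob (v12) SDK instead of the older BlockBlobService used above; the connection-string setting name (my_storage) and container name are taken from the question's function.json, and the rest is an assumption:

import logging
import os
from io import BytesIO

import azure.functions as func
import pandas as pd
from azure.storage.blob import BlobServiceClient


def main(myblob: func.InputStream):
    logging.info("Triggered by blob: %s", myblob.name)

    # "my_storage" is the app setting named in function.json; adjust if yours differs.
    service = BlobServiceClient.from_connection_string(os.environ["my_storage"])
    base_blob = service.get_blob_client(container="samples-workitems", blob="base.csv")

    # Read the trigger blob (new.csv) and the fixed base.csv explicitly,
    # avoiding any ambiguity about which binding points at which file.
    df_base = pd.read_csv(BytesIO(base_blob.download_blob().readall()))
    df_new = pd.read_csv(BytesIO(myblob.read()))

    logging.info("base rows: %d, new rows: %d", len(df_base), len(df_new))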
Python Azure function triggered by Blob storage printing file name incorrectly
I'm triggering an Azure function with a blob trigger event. A container sample-workitems has a file base.csv and receives a new file new.csv. I'm reading base.csv from the sample-workitems and new.csv from InputStream for the same container. def main(myblob: func.InputStream, base: func.InputStream): logging.info(f"Python blob trigger function processed blob \n" f"Name: {myblob.name}\n" f"Blob Size: {myblob.length} bytes") logging.info(f"Base file info \n" f"Name: {base.name}\n" f"Blob Size: {base.length} bytes") df_base = pd.read_csv(BytesIO(base.read())) df_new = pd.read_csv(BytesIO(myblob.read())) print(df_new.head()) print("prnting base dataframe") print(df_base.head()) Output: Python blob trigger function processed blob Name: samples-workitems/new.csv Blob Size: None bytes Base file info Name: samples-workitems/new.csv Blob Size: None bytes first 5 rows of df_new (cannot show data here) prnting base dataframe first 5 rows of df_base (cannot show data here) Even though both files show their own content when printing but myblob.name and base.name has the same value new.csv which is unexpected. myblob.name would have new.csv while base.name would have base.csv function.json { "scriptFile": "__init__.py", "bindings": [ { "name": "myblob", "type": "blobTrigger", "direction": "in", "path": "samples-workitems/{name}", "connection": "my_storage" }, { "type": "blob", "name": "base", "path": "samples-workitems/base.csv", "connection": "my_storage", "direction": "in" } ] }
[ "I have reproduced in my environment and the below code worked for me and I followed code of @SwethaKandikonda 's SO-thread\ninit.py:\nimport logging\nfrom azure.storage.blob import BlockBlobService\nimport azure.functions as func\n\n\ndef main(myblob: func.InputStream):\n logging.info(f\"Python blob trigger function processed blob \\n\"\n f\"Name: {myblob.name}\\n\"\n f\"Blob Size: {myblob.length} bytes\")\n \n file=\"\" \n fileContent=\"\" \n blob_service = BlockBlobService(account_name=\"rithwikstor\",account_key=\"EJ7xCyq2+AStqiar7Q==\")\n containername=\"samples-workitems\"\n generator = blob_service.list_blobs(container_name=containername) \n for blob in generator:\n file=blob_service.get_blob_to_text(containername,blob.name)\n logging.info(blob.name)\n logging.info(file.content)\n fileContent+=blob.name+'\\n'+file.content+'\\n\\n'\n\nfunction.json:\n{\n \"scriptFile\": \"__init__.py\",\n \"bindings\": [\n {\n \"name\": \"myblob\",\n \"type\": \"blobTrigger\",\n \"direction\": \"in\",\n \"path\": \"samples-workitems/{name}\",\n \"connection\": \"rithwikstor_STORAGE\"\n }\n ]\n}\n\nlocal.settings.json:\n{\n \"IsEncrypted\": false,\n \"Values\": {\n \"AzureWebJobsStorage\": \"Connection String Of storage account\",\n \"FUNCTIONS_WORKER_RUNTIME\": \"python\",\n \"rithwikstor_STORAGE\": \"Connection String Of storage account\"\n }\n}\n\nHost.json:\n{\n \"version\": \"2.0\",\n \"logging\": {\n \"applicationInsights\": {\n \"samplingSettings\": {\n \"isEnabled\": true,\n \"excludedTypes\": \"Request\"\n }\n }\n },\n \"extensionBundle\": {\n \"id\": \"Microsoft.Azure.Functions.ExtensionBundle\",\n \"version\": \"[3.*, 4.0.0)\"\n },\n \"concurrency\": {\n \"dynamicConcurrencyEnabled\": true,\n \"snapshotPersistenceEnabled\": true\n }\n}\n\nThen i added blobs as below:\n\nOutput:\n\n\nPlease try to file the above process and code, you will get correct output as I have got.\n" ]
[ 0 ]
[]
[]
[ "azure", "azure_blob_storage", "azure_functions", "python" ]
stackoverflow_0074596449_azure_azure_blob_storage_azure_functions_python.txt
Q: How to replace numbers in selected columns that falls in certain value python? How do you replace numbers with np.nan in selected columns if the number falls in between 2 ranges? A B C D 2 3 5 7 2 8 9 7 5 3 6 7 select columns B & C replace numbers if number is <=5 and >=7 A B C D 2 NaN 5 7 2 NaN NaN 7 5 NaN 6 7 A: Use a boolean mask for in place modification (boolean indexing): cols = ['B', 'C'] m = (df[cols].gt(7)|df[cols].lt(5)).reindex(columns=df.columns, fill_value=False) df[m] = np.nan If you need a copy: cols = ['B', 'C'] out = df.mask((df[cols].gt(7)|df[cols].lt(5)) .reindex(columns=df.columns, fill_value=False)) Output: A B C D 0 2 NaN 5.0 7 1 2 NaN NaN 7 2 5 NaN 6.0 7 Intermediates: (df[cols].gt(7)|df[cols].lt(5)) B C 0 True False 1 True True 2 True False (df[cols].gt(7)|df[cols].lt(5)).reindex(columns=df.columns, fill_value=False) A B C D 0 False True False False 1 False True True False 2 False True False False A: You can assign back to filtered columns with DataFrame.mask: cols = ['B', 'C'] df[cols] = df[cols].mask(df[cols].gt(7) | df[cols].lt(5)) print (df) A B C D 0 2 NaN 5.0 7 1 2 NaN NaN 7 2 5 NaN 6.0 7 Or with numpy.where: cols = ['B', 'C'] df[cols] = np.where(df[cols].gt(7) | df[cols].lt(5), np.nan, df[cols]) print (df) A B C D 0 2 NaN 5.0 7 1 2 NaN NaN 7 2 5 NaN 6.0 7
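For completeness, a small self-contained sketch that rebuilds the example frame from the question and applies the masking idea from the answers, so it can be run as-is:

import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [2, 2, 5],
                   'B': [3, 8, 3],
                   'C': [5, 9, 6],
                   'D': [7, 7, 7]})

cols = ['B', 'C']
# Replace values below 5 or above 7 in the selected columns only; mask fills with NaN by default.
df[cols] = df[cols].mask(df[cols].lt(5) | df[cols].gt(7))
print(df)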
How to replace numbers in selected columns that fall within a certain range in Python?
How do you replace numbers with np.nan in selected columns if the number falls outside a certain range?
 A B C D
 2 3 5 7
 2 8 9 7
 5 3 6 7
select columns B & C, replace numbers if the number is < 5 or > 7
 A B C D
 2 NaN 5 7
 2 NaN NaN 7
 5 NaN 6 7
[ "Use a boolean mask for in place modification (boolean indexing):\ncols = ['B', 'C']\nm = (df[cols].gt(7)|df[cols].lt(5)).reindex(columns=df.columns, fill_value=False)\n\ndf[m] = np.nan\n\nIf you need a copy:\ncols = ['B', 'C']\nout = df.mask((df[cols].gt(7)|df[cols].lt(5))\n .reindex(columns=df.columns, fill_value=False))\n\nOutput:\n A B C D\n0 2 NaN 5.0 7\n1 2 NaN NaN 7\n2 5 NaN 6.0 7\n\nIntermediates:\n(df[cols].gt(7)|df[cols].lt(5))\n\n B C\n0 True False\n1 True True\n2 True False\n\n(df[cols].gt(7)|df[cols].lt(5)).reindex(columns=df.columns, fill_value=False)\n\n A B C D\n0 False True False False\n1 False True True False\n2 False True False False\n\n", "You can assign back to filtered columns with DataFrame.mask:\ncols = ['B', 'C']\n\ndf[cols] = df[cols].mask(df[cols].gt(7) | df[cols].lt(5))\nprint (df)\n A B C D\n0 2 NaN 5.0 7\n1 2 NaN NaN 7\n2 5 NaN 6.0 7\n\nOr with numpy.where:\ncols = ['B', 'C']\n\ndf[cols] = np.where(df[cols].gt(7) | df[cols].lt(5), np.nan, df[cols])\nprint (df)\n A B C D\n0 2 NaN 5.0 7\n1 2 NaN NaN 7\n2 5 NaN 6.0 7\n\n" ]
[ 3, 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074610668_dataframe_pandas_python.txt
Q: Having problems passing info from one function to another (Python) I'm fairly new to python (been taking classes for a few months) and I've come across a reoccurring problem with my code involving passing information, such as an integer, from one function to another. In this case, I'm having problems with passing "totalPints" from "def getTotal" to "def averagePints" (totalPints in def averagePints defaults to 0). def main(): endQuery = "n" while endQuery == "n": pints = [0] * 7 totalPints = 0 avgPints = 0 option = "" print("Welcome to the American Red Cross blood drive database.") print() print("Please enter 'w' to write new data to the file. Enter 'r' to read data currently on file. Enter 'e' to " "end the program.") try: option = input("Enter w/r/e: ") except ValueError: print() print("Please input only w/r/e") if option == "w": print() def getPints(pints): index = 0 while index < 7: pints[index] = input("Input number of Pints of Blood donated for day " + str(index + 1) + ": ") print(pints) index = index + 1 getPints(pints) def getTotal(pints, totalPints): index = 0 while index < 7: totalPints = totalPints + int(pints[index]) index = index + 1 print(totalPints) getTotal(pints, totalPints) def averagePints(totalPints, avgPints): avgPints = float(totalPints) / 7 print(avgPints) averagePints(totalPints, avgPints) Passing information from "def getPints" to "def getTotal" works fine, and both print the accurate information, but nothing passes from "def getTotal" to "def averagePints" and returns 0. What am I doing wrong in this case? Is it something to do with the scope of the variables listed above? This is my first time posting to Stack Overflow because I could find any fixes to what I am having troubles with. What I expect to happen from this code is passing the number from "totalPints" in "def getTotal" to "def averagePints" to make a calculation with that number. I tried messing with the scopes of the declared variables and the order of when functions are called but I still can't really tell what I'm missing. All I know is that the value of "totalPints" in "def averagePints" always returns 0. A: You have a variable scoping issue. In getTotal, totalPints is being updated with its value local to the function, not the global one like you are expecting. Returning the new value from the function and assigning it seems to have the intended effect. Below is the updated snippet: def getTotal(pints, totalPints): index = 0 while index < 7: totalPints = totalPints + int(pints[index]) index = index + 1 print(totalPints) return totalPints totalPints = getTotal(pints, totalPints)
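To show the whole chain the answer describes (return the value, assign it, then pass it on), here is a stripped-down sketch of just the three functions involved; the surrounding menu loop and file handling from the question are left out:

def get_pints():
    # Collect seven daily values from the user.
    return [int(input(f"Pints donated on day {day + 1}: ")) for day in range(7)]


def get_total(pints):
    # Return the total instead of only printing it, so the caller can reuse it.
    return sum(pints)


def average_pints(total_pints):
    return total_pints / 7


pints = get_pints()
total_pints = get_total(pints)        # capture the returned total
avg_pints = average_pints(total_pints)
print(total_pints, avg_pints)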
Having problems passing info from one function to another (Python)
I'm fairly new to python (been taking classes for a few months) and I've come across a reoccurring problem with my code involving passing information, such as an integer, from one function to another. In this case, I'm having problems with passing "totalPints" from "def getTotal" to "def averagePints" (totalPints in def averagePints defaults to 0). def main(): endQuery = "n" while endQuery == "n": pints = [0] * 7 totalPints = 0 avgPints = 0 option = "" print("Welcome to the American Red Cross blood drive database.") print() print("Please enter 'w' to write new data to the file. Enter 'r' to read data currently on file. Enter 'e' to " "end the program.") try: option = input("Enter w/r/e: ") except ValueError: print() print("Please input only w/r/e") if option == "w": print() def getPints(pints): index = 0 while index < 7: pints[index] = input("Input number of Pints of Blood donated for day " + str(index + 1) + ": ") print(pints) index = index + 1 getPints(pints) def getTotal(pints, totalPints): index = 0 while index < 7: totalPints = totalPints + int(pints[index]) index = index + 1 print(totalPints) getTotal(pints, totalPints) def averagePints(totalPints, avgPints): avgPints = float(totalPints) / 7 print(avgPints) averagePints(totalPints, avgPints) Passing information from "def getPints" to "def getTotal" works fine, and both print the accurate information, but nothing passes from "def getTotal" to "def averagePints" and returns 0. What am I doing wrong in this case? Is it something to do with the scope of the variables listed above? This is my first time posting to Stack Overflow because I could find any fixes to what I am having troubles with. What I expect to happen from this code is passing the number from "totalPints" in "def getTotal" to "def averagePints" to make a calculation with that number. I tried messing with the scopes of the declared variables and the order of when functions are called but I still can't really tell what I'm missing. All I know is that the value of "totalPints" in "def averagePints" always returns 0.
[ "You have a variable scoping issue. In getTotal, totalPints is being updated with its value local to the function, not the global one like you are expecting. Returning the new value from the function and assigning it seems to have the intended effect. Below is the updated snippet:\n def getTotal(pints, totalPints):\n index = 0\n while index < 7:\n totalPints = totalPints + int(pints[index])\n index = index + 1\n print(totalPints)\n return totalPints\n\n totalPints = getTotal(pints, totalPints)\n\n" ]
[ 1 ]
[]
[]
[ "function", "python" ]
stackoverflow_0074609783_function_python.txt
Q: Is there a way to parse my xml file displaying only tags and value? In my XML file [studentinfo.xml] is there a way to loop through the xml file and only output each tag and the value? I would like child tags to be displayed as well. Below breaks everything down. I am open to other solutions as well. <?xml version="1.0" encoding="UTF-8"?> <stu:StudentBreakdown> <stu:Studentdata> <stu:StudentScreening> <st:name>Sam Davies</st:name> <st:age>15</st:age> <st:hair>Black</st:hair> <st:eyes>Blue</st:eyes> <st:grade>10</st:grade> <st:teacher>Draco Malfoy</st:teacher> <st:dorm>Innovation Hall</st:dorm> <st:name>Master Splinter</st:name> </stu:StudentScreening> <stu:StudentScreening> <st:name>Cassie Stone</st:name> <st:age>14</st:age> <st:hair>Science</st:hair> <st:grade>9</st:grade> <st:teacher>Luna Lovegood</st:teacher> <st:name>Kelly Clarkson</st:name> </stu:StudentScreening> <stu:StudentScreening> <st:name>Derek Brandon</st:name> <st:age>17</st:age> <st:eyes>green</st:eyes> <st:teacher>Ron Weasley</st:teacher> <st:dorm>Hogtie Manor</st:dorm> <st:name>Miley Cyrus</st:name> </stu:StudentScreening> </stu:Studentdata> </stu:StudentBreakdown> Below is my desired output: stu:StudentBreakdown : stu:Studentdata : stu:StudentScreening : st:name : Sam Davies st:age : 15 st:hair : Black st:eyes : Blue st:grade : 10 st:teacher : Draco Malfoy st:dorm : Innovation Hall st:name : Master Splinter ..etc Below is my current code: import pandas as pd import xml.etree.ElementTree as ET from bs4 import BeautifulSoup mytree = ET.parse('path\studentinfo.xml').getroot() list = [] for elm in mytree.iter(): list.append(elm.tag + ' : ' + str(elm.text)) print(list) A: If I add <stu:StudentBreakdown xmlns:stu= "stu" xmlns:st="st"> to your XML root element, I get with: import pandas as pd import xml.etree.ElementTree as ET tree = ET.parse('ns.xml') root= tree.getroot() columns= ["TAG", "VALUE"] data = [] for stud in root.iter(): if "\n" not in stud.text: stud.text = stud.text else: stud.text = None row = (stud.tag , stud.text) data.append(row) df = pd.DataFrame(data, columns=columns) print(df) Output: TAG VALUE 0 {stu}StudentBreakdown None 1 {stu}Studentdata None 2 {stu}StudentScreening None 3 {st}name Sam Davies 4 {st}age 15 5 {st}hair Black 6 {st}eyes Blue 7 {st}grade 10 8 {st}teacher Draco Malfoy 9 {st}dorm Innovation Hall 10 {st}name Master Splinter 11 {stu}StudentScreening None 12 {st}name Cassie Stone 13 {st}age 14 14 {st}hair Science 15 {st}grade 9 16 {st}teacher Luna Lovegood 17 {st}name Kelly Clarkson 18 {stu}StudentScreening None 19 {st}name Derek Brandon 20 {st}age 17 21 {st}eyes green 22 {st}teacher Ron Weasley 23 {st}dorm Hogtie Manor 24 {st}name Miley Cyrus Maybe there is a better way to manage the nested XML namespace definition.
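If the goal is literally the tag : value listing shown in the question rather than a DataFrame, a sketch along these lines should produce it, assuming the stu/st prefixes are declared on the root element as noted in the first answer:

import xml.etree.ElementTree as ET

# Assumes the root declares the prefixes, e.g.
# <stu:StudentBreakdown xmlns:stu="stu" xmlns:st="st">, as in the answer above.
root = ET.parse('studentinfo.xml').getroot()

for elem in root.iter():
    # ElementTree rewrites st:name as {st}name; turn it back into st:name.
    tag = elem.tag.replace('{', '').replace('}', ':')
    text = (elem.text or '').strip()
    print(f"{tag} : {text}")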
Is there a way to parse my XML file displaying only tags and values?
In my XML file [studentinfo.xml] is there a way to loop through the xml file and only output each tag and the value? I would like child tags to be displayed as well. Below breaks everything down. I am open to other solutions as well. <?xml version="1.0" encoding="UTF-8"?> <stu:StudentBreakdown> <stu:Studentdata> <stu:StudentScreening> <st:name>Sam Davies</st:name> <st:age>15</st:age> <st:hair>Black</st:hair> <st:eyes>Blue</st:eyes> <st:grade>10</st:grade> <st:teacher>Draco Malfoy</st:teacher> <st:dorm>Innovation Hall</st:dorm> <st:name>Master Splinter</st:name> </stu:StudentScreening> <stu:StudentScreening> <st:name>Cassie Stone</st:name> <st:age>14</st:age> <st:hair>Science</st:hair> <st:grade>9</st:grade> <st:teacher>Luna Lovegood</st:teacher> <st:name>Kelly Clarkson</st:name> </stu:StudentScreening> <stu:StudentScreening> <st:name>Derek Brandon</st:name> <st:age>17</st:age> <st:eyes>green</st:eyes> <st:teacher>Ron Weasley</st:teacher> <st:dorm>Hogtie Manor</st:dorm> <st:name>Miley Cyrus</st:name> </stu:StudentScreening> </stu:Studentdata> </stu:StudentBreakdown> Below is my desired output: stu:StudentBreakdown : stu:Studentdata : stu:StudentScreening : st:name : Sam Davies st:age : 15 st:hair : Black st:eyes : Blue st:grade : 10 st:teacher : Draco Malfoy st:dorm : Innovation Hall st:name : Master Splinter ..etc Below is my current code: import pandas as pd import xml.etree.ElementTree as ET from bs4 import BeautifulSoup mytree = ET.parse('path\studentinfo.xml').getroot() list = [] for elm in mytree.iter(): list.append(elm.tag + ' : ' + str(elm.text)) print(list)
[ "If I add <stu:StudentBreakdown xmlns:stu= \"stu\" xmlns:st=\"st\"> to your XML root element, I get with:\nimport pandas as pd\nimport xml.etree.ElementTree as ET\n\ntree = ET.parse('ns.xml')\nroot= tree.getroot()\n\ncolumns= [\"TAG\", \"VALUE\"]\ndata = []\nfor stud in root.iter():\n if \"\\n\" not in stud.text:\n stud.text = stud.text\n else:\n stud.text = None\n row = (stud.tag , stud.text)\n data.append(row)\n \ndf = pd.DataFrame(data, columns=columns)\nprint(df)\n\nOutput:\n TAG VALUE\n0 {stu}StudentBreakdown None\n1 {stu}Studentdata None\n2 {stu}StudentScreening None\n3 {st}name Sam Davies\n4 {st}age 15\n5 {st}hair Black\n6 {st}eyes Blue\n7 {st}grade 10\n8 {st}teacher Draco Malfoy\n9 {st}dorm Innovation Hall\n10 {st}name Master Splinter\n11 {stu}StudentScreening None\n12 {st}name Cassie Stone\n13 {st}age 14\n14 {st}hair Science\n15 {st}grade 9\n16 {st}teacher Luna Lovegood\n17 {st}name Kelly Clarkson\n18 {stu}StudentScreening None\n19 {st}name Derek Brandon\n20 {st}age 17\n21 {st}eyes green\n22 {st}teacher Ron Weasley\n23 {st}dorm Hogtie Manor\n24 {st}name Miley Cyrus\n\nMaybe there is a better way to manage the nested XML namespace definition.\n" ]
[ 1 ]
[]
[]
[ "elementtree", "python", "xml" ]
stackoverflow_0074608564_elementtree_python_xml.txt
Q: how do i make tkinter update I have an entry field that stores my list to a text file when i press the button to store the info, it gets stored but i have to restart the app to see it on the options menu How do i make the app update without having to restart it? ` from tkinter import * from tkinter import messagebox root = Tk() root.title("test tool") #App Title root.iconbitmap("D:\\Software\\GigaPixel Frames\\Dump\\New folder\\imgs\\Logo.ico") root.geometry("1600x800") #App Dimensions DropDownvar = StringVar(value="Select an option") DropDownvar.set("Select an option") my_list = open("Characters.txt").readlines() DropDownMenu = OptionMenu(root, DropDownvar, *my_list) DropDownMenu.pack() inputBox = Entry(root) inputBox.pack() def ButtonFun(): InputBoxEntry = inputBox.get() with open("Characters.txt", "a") as text_file: text_file.write(InputBoxEntry + "\n") root.update() inputBoxButton = Button(root, text="Input", command=ButtonFun) inputBoxButton.pack() root.mainloop() ` could not find answer A: You should re-read and update your list after you put input and added line. You should avoid using my_list = open("Characters.txt") as you may forget to close it. Or sometimes it gives an error and stay unclosed which you cannot perform anything over it. from tkinter import * from tkinter import messagebox root = Tk() root.title("test tool") #App Title root.geometry("1600x800") #App Dimensions DropDownvar = StringVar(value="Select an option") DropDownvar.set("Select an option") DropDownMenu = OptionMenu(root, DropDownvar," ") DropDownMenu.pack() inputBox = Entry(root) inputBox.pack() def fillOptionMenu(): with open("characters.txt","r") as f: my_list = f.readlines() DropDownMenu["menu"].delete(0,"end") # DropDownMenu.set_menu(my_list) for i in my_list: DropDownMenu["menu"].add_command(label=i, command=lambda value=i: DropDownvar.set(i)) def ButtonFun(): InputBoxEntry = inputBox.get() with open("Characters.txt", "a") as text_file: text_file.write(InputBoxEntry + "\n") root.update() fillOptionMenu() inputBoxButton = Button(root, text="Input", command=ButtonFun) inputBoxButton.pack() fillOptionMenu() root.mainloop() A: You need to add the input item to the dropdown manually inside ButtonFun(). Also using .readlines() will not strip out the trailing newline '\n' from each line in the file, better use .read().splitlines() instead: ... with open("Characters.txt") as f: my_list = f.read().splitlines() DropDownMenu = OptionMenu(root, DropDownvar, *my_list) DropDownMenu.pack() ... def ButtonFun(): InputBoxEntry = inputBox.get() with open("Characters.txt", "a") as text_file: text_file.write(InputBoxEntry + "\n") # add the input item to dropdown DropDownMenu["menu"].add_command(label=InputBoxEntry, command=lambda: DropDownvar.set(InputBoxEntry)) ...
How do I make tkinter update?
I have an entry field that stores my list to a text file when i press the button to store the info, it gets stored but i have to restart the app to see it on the options menu How do i make the app update without having to restart it? ` from tkinter import * from tkinter import messagebox root = Tk() root.title("test tool") #App Title root.iconbitmap("D:\\Software\\GigaPixel Frames\\Dump\\New folder\\imgs\\Logo.ico") root.geometry("1600x800") #App Dimensions DropDownvar = StringVar(value="Select an option") DropDownvar.set("Select an option") my_list = open("Characters.txt").readlines() DropDownMenu = OptionMenu(root, DropDownvar, *my_list) DropDownMenu.pack() inputBox = Entry(root) inputBox.pack() def ButtonFun(): InputBoxEntry = inputBox.get() with open("Characters.txt", "a") as text_file: text_file.write(InputBoxEntry + "\n") root.update() inputBoxButton = Button(root, text="Input", command=ButtonFun) inputBoxButton.pack() root.mainloop() ` could not find answer
[ "You should re-read and update your list after you put input and added line. You should avoid using my_list = open(\"Characters.txt\") as you may forget to close it. Or sometimes it gives an error and stay unclosed which you cannot perform anything over it.\nfrom tkinter import *\nfrom tkinter import messagebox\n\nroot = Tk()\nroot.title(\"test tool\") #App Title\nroot.geometry(\"1600x800\") #App Dimensions\n\nDropDownvar = StringVar(value=\"Select an option\")\nDropDownvar.set(\"Select an option\")\n\n\nDropDownMenu = OptionMenu(root, DropDownvar,\" \")\nDropDownMenu.pack()\n\ninputBox = Entry(root)\ninputBox.pack()\n\ndef fillOptionMenu():\n with open(\"characters.txt\",\"r\") as f:\n my_list = f.readlines()\n\n DropDownMenu[\"menu\"].delete(0,\"end\")\n # DropDownMenu.set_menu(my_list)\n for i in my_list:\n DropDownMenu[\"menu\"].add_command(label=i, command=lambda value=i: DropDownvar.set(i))\n\ndef ButtonFun():\n InputBoxEntry = inputBox.get()\n with open(\"Characters.txt\", \"a\") as text_file:\n text_file.write(InputBoxEntry + \"\\n\")\n root.update()\n\n fillOptionMenu()\n\n\ninputBoxButton = Button(root, text=\"Input\", command=ButtonFun)\ninputBoxButton.pack()\n\nfillOptionMenu()\n\nroot.mainloop()\n\n", "You need to add the input item to the dropdown manually inside ButtonFun().\nAlso using .readlines() will not strip out the trailing newline '\\n' from each line in the file, better use .read().splitlines() instead:\n...\nwith open(\"Characters.txt\") as f:\n my_list = f.read().splitlines()\nDropDownMenu = OptionMenu(root, DropDownvar, *my_list)\nDropDownMenu.pack()\n...\ndef ButtonFun():\n InputBoxEntry = inputBox.get()\n with open(\"Characters.txt\", \"a\") as text_file:\n text_file.write(InputBoxEntry + \"\\n\")\n # add the input item to dropdown\n DropDownMenu[\"menu\"].add_command(label=InputBoxEntry, command=lambda: DropDownvar.set(InputBoxEntry))\n...\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074610777_python_tkinter.txt
Q: Taking mean of all rows in a numpy matrix grouped by values based on another numpy matrix I have a matrix A of size NXN with float values and another boolean matrix B of size NXN For every row, I need to find the mean of all values in A belonging to indices where True is the corresponding value for that index in matrix B Similarly, I need to find the mean of all values in A belonging to indices where False is the corresponding value for that index in matrix B Finally, I need to find the count of number of rows where "True" mean is lesser than "False" mean For example : A = [[1.0, 2.0, 3.0] [4.0, 5.0, 6.0] [7.0, 8.0, 9.0]] B = [[True, True, False] [False, False, True] [True, False, True]] Initially, count = 0 For row 1, true_mean = 1.0+2.0 / 2 = 1.5 and false_mean = 3.0 true_mean < false_mean, so count = 0+1=1 For row 2, true_mean = 6.0 and false_mean = 4.0+5.0 / 2 = 4.5 true_mean > false_mean, so count remains same For row 3, true_mean = 7.0+9.0 / 2 = 8.0 and false_mean = 8.0 true_mean == false_mean, so count remains same Final count value = 1 My attempt:- true_mat = np.where(B, A, 0) false_mat = np.where(B, 0, A) true_mean = true_mat.mean(axis=1) false_mean = false_mat.mean(axis=1) But this actually gives wrong answer since denominator is not exactly the count of number of True/False values in that row but instead 'N' I only need the count, I don't need the true_mean and false_mean Anyway to fix it? A: The mean issue can be resolved by computing a mask: mask_norm = tf.reduce_sum(tf.clip_by_value(true_mat, 0., 1.),axis=0) true_mean = tf.math.divide(tf.reduce_sum(true_mat, axis=1), mask_norm) #true_mean : [1.5, 6. , 8. ] You can find the count using tf.reduce_sum(tf.where(true_mean < false_mean, 1, 0)) A: You could also try something like this: import tensorflow as tf A = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]) B = tf.constant([[True, True, False], [False, False, True], [True, False, True]]) t_rows = tf.where(B) f_rows = tf.where(~B) _true = tf.gather_nd(A, t_rows) _false = tf.gather_nd(A, f_rows) count = tf.reduce_sum(tf.cast(tf.math.greater(tf.math.segment_mean(_false, f_rows[:, 0]), tf.math.segment_mean(_true, t_rows[:, 0])), dtype=tf.int32)) tf.print(count) 1 Works also with rows that are all True or False: B = tf.constant([[True, True, True], [False, False, True], [True, False, True]]) # 0 B = tf.constant([[False, False, False], [False, False, False], [True, False, True]]) # 2 A: I would say your start is good true_mat = np.where(B, A, 0) false_mat = np.where(B, 0, A) But we want to divide by the number of Trues or Falses, respectively, so... true_sum = np.sum(B, axis = 1) #sum of Trues per row false_sum = N-true_sum # if you don't have N given, do N=A.shape[0] true_mean = np.sum(true_mat, axis = 1)/true_sum #add up rows of true_mat and divide by true_sum false_mean = np.sum(false_mat, axis = 1)/false_sum For your example this gives [1.5 6. 8. ] [3. 4.5 8. ] So now we just have to compare where the second is larger than the first: count = np.sum(np.where(false_mean > true_mean, 1, 0))
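Since the question is also tagged pytorch, here is a rough PyTorch equivalent of the numpy answer; it assumes every row of B contains at least one True and at least one False, otherwise the corresponding mean is undefined:

import torch

A = torch.tensor([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0],
                  [7.0, 8.0, 9.0]])
B = torch.tensor([[True, True, False],
                  [False, False, True],
                  [True, False, True]])

true_cnt = B.sum(dim=1)
false_cnt = (~B).sum(dim=1)

# Zero out the entries that do not belong to each group, then divide by the group size.
true_mean = (A * B).sum(dim=1) / true_cnt
false_mean = (A * ~B).sum(dim=1) / false_cnt

count = (true_mean < false_mean).sum().item()
print(true_mean, false_mean, count)   # means [1.5, 6, 8] vs [3, 4.5, 8], count 1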
Taking mean of all rows in a numpy matrix grouped by values based on another numpy matrix
I have a matrix A of size NXN with float values and another boolean matrix B of size NXN For every row, I need to find the mean of all values in A belonging to indices where True is the corresponding value for that index in matrix B Similarly, I need to find the mean of all values in A belonging to indices where False is the corresponding value for that index in matrix B Finally, I need to find the count of number of rows where "True" mean is lesser than "False" mean For example : A = [[1.0, 2.0, 3.0] [4.0, 5.0, 6.0] [7.0, 8.0, 9.0]] B = [[True, True, False] [False, False, True] [True, False, True]] Initially, count = 0 For row 1, true_mean = 1.0+2.0 / 2 = 1.5 and false_mean = 3.0 true_mean < false_mean, so count = 0+1=1 For row 2, true_mean = 6.0 and false_mean = 4.0+5.0 / 2 = 4.5 true_mean > false_mean, so count remains same For row 3, true_mean = 7.0+9.0 / 2 = 8.0 and false_mean = 8.0 true_mean == false_mean, so count remains same Final count value = 1 My attempt:- true_mat = np.where(B, A, 0) false_mat = np.where(B, 0, A) true_mean = true_mat.mean(axis=1) false_mean = false_mat.mean(axis=1) But this actually gives wrong answer since denominator is not exactly the count of number of True/False values in that row but instead 'N' I only need the count, I don't need the true_mean and false_mean Anyway to fix it?
[ "The mean issue can be resolved by computing a mask:\nmask_norm = tf.reduce_sum(tf.clip_by_value(true_mat, 0., 1.),axis=0)\ntrue_mean = tf.math.divide(tf.reduce_sum(true_mat, axis=1), mask_norm)\n#true_mean : [1.5, 6. , 8. ]\n\nYou can find the count using tf.reduce_sum(tf.where(true_mean < false_mean, 1, 0))\n", "You could also try something like this:\nimport tensorflow as tf\n\n\nA = tf.constant([[1.0, 2.0, 3.0],\n [4.0, 5.0, 6.0],\n [7.0, 8.0, 9.0]])\n\nB = tf.constant([[True, True, False],\n [False, False, True],\n [True, False, True]])\n\nt_rows = tf.where(B)\nf_rows = tf.where(~B)\n_true = tf.gather_nd(A, t_rows)\n_false = tf.gather_nd(A, f_rows)\n\ncount = tf.reduce_sum(tf.cast(tf.math.greater(tf.math.segment_mean(_false, f_rows[:, 0]), tf.math.segment_mean(_true, t_rows[:, 0])), dtype=tf.int32))\ntf.print(count)\n\n1\n\nWorks also with rows that are all True or False:\nB = tf.constant([[True, True, True],\n [False, False, True],\n [True, False, True]])\n# 0\n\nB = tf.constant([[False, False, False],\n [False, False, False],\n [True, False, True]])\n# 2\n\n", "I would say your start is good\ntrue_mat = np.where(B, A, 0)\nfalse_mat = np.where(B, 0, A)\n\nBut we want to divide by the number of Trues or Falses, respectively, so...\ntrue_sum = np.sum(B, axis = 1) #sum of Trues per row\nfalse_sum = N-true_sum # if you don't have N given, do N=A.shape[0]\n\ntrue_mean = np.sum(true_mat, axis = 1)/true_sum #add up rows of true_mat and divide by true_sum\nfalse_mean = np.sum(false_mat, axis = 1)/false_sum\n\nFor your example this gives\n[1.5 6. 8. ]\n[3. 4.5 8. ]\n\nSo now we just have to compare where the second is larger than the first:\ncount = np.sum(np.where(false_mean > true_mean, 1, 0))\n\n" ]
[ 1, 1, 1 ]
[]
[]
[ "arrays", "numpy", "python", "pytorch", "tensorflow" ]
stackoverflow_0074610101_arrays_numpy_python_pytorch_tensorflow.txt
Q: Python dataframe to list/dict I have a sample dataframe as below I want this dataframe converted to this below format in python so I can pass it into dtype { 'FirstName':'string', 'LastName':'string', 'Department':'integer', 'EmployeeID':'string', } Could anyone please let me know how this can be done. To note above: I need the exact string {'FirstName': 'string', 'LastName': 'string', 'Department': 'integer', 'EmployeeID': 'string'} from the exact dataframe. The dataframe has list of primary key names and its datatype. I want to pass this primary_key and datatype combination into concat_df.to_csv(csv_buffer, sep=",", index=False, dtype = {'FirstName': 'string', 'LastName': 'string', 'Department': 'integer', 'EmployeeID': 'string'}) A: dict/zip the two series: import pandas as pd data = pd.DataFrame({ 'Column_Name': ['FirstName', 'LastName', 'Department', 'EmployeeID'], 'Datatype': ['string', 'string', 'integer', 'string'], }) mapping = dict(zip(data['Column_Name'], data['Datatype'])) print(mapping) prints out {'FirstName': 'string', 'LastName': 'string', 'Department': 'integer', 'EmployeeID': 'string'} A: use to record which is much more handy. print(dict(df.to_records(index=False))) Should Gives # {'FirstName': 'string', 'LastName': 'string', 'Department': 'integer', 'EmployeeID': 'string'} Edit : If you want keys alone then d = dict(df.to_records(index=False)) print(list(d.keys())) should Gives # ['FirstName', 'LastName', 'Department', 'EmployeeID'] A: You can do an easy dict comprehension with your data: Input data: data = pd.DataFrame({'Column_Name' : ['FirstName', 'LastName', 'Department'], 'Datatype' : ['Jane', 'Doe', 666]}) Dict comprehension: {n[0]:n[1] for n in data.to_numpy()} This will give you: {'FirstName': 'Jane', 'LastName': 'Doe', 'Department': '666'} There are for sure other ways, e.g. using the pandas to_dict function, but I am not very familiar with this. Edit: But keep in mind, a dictionary needs unique values. Your categories (first, lastname) looks like general categories. This here will only work for a single person, otherwise you have multiple keys.
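One hedged caveat on the stated goal: pandas' to_csv has no dtype parameter (read_csv does), so a mapping like the one built in the answers would more plausibly be used to cast the frame before writing. A rough sketch, where the translation table from the stored type names to pandas dtypes and the stand-in concat_df are assumptions:

import pandas as pd

data = pd.DataFrame({'Column_Name': ['FirstName', 'LastName', 'Department', 'EmployeeID'],
                     'Datatype': ['string', 'string', 'integer', 'string']})

mapping = dict(zip(data['Column_Name'], data['Datatype']))

# Assumed translation of the stored type names into pandas dtypes.
pandas_types = {'string': 'string', 'integer': 'Int64'}
dtype_map = {col: pandas_types[t] for col, t in mapping.items()}

# concat_df is the frame being exported in the question; a stand-in is used here.
concat_df = pd.DataFrame({'FirstName': ['Jane'], 'LastName': ['Doe'],
                          'Department': [7], 'EmployeeID': ['E1']})
concat_df = concat_df.astype(dtype_map)
concat_df.to_csv('out.csv', sep=',', index=False)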
Python dataframe to list/dict
I have a sample dataframe as below I want this dataframe converted to this below format in python so I can pass it into dtype { 'FirstName':'string', 'LastName':'string', 'Department':'integer', 'EmployeeID':'string', } Could anyone please let me know how this can be done. To note above: I need the exact string {'FirstName': 'string', 'LastName': 'string', 'Department': 'integer', 'EmployeeID': 'string'} from the exact dataframe. The dataframe has list of primary key names and its datatype. I want to pass this primary_key and datatype combination into concat_df.to_csv(csv_buffer, sep=",", index=False, dtype = {'FirstName': 'string', 'LastName': 'string', 'Department': 'integer', 'EmployeeID': 'string'})
[ "dict/zip the two series:\nimport pandas as pd\n\ndata = pd.DataFrame({\n 'Column_Name': ['FirstName', 'LastName', 'Department', 'EmployeeID'],\n 'Datatype': ['string', 'string', 'integer', 'string'],\n})\n\nmapping = dict(zip(data['Column_Name'], data['Datatype']))\n\nprint(mapping)\n\nprints out\n{'FirstName': 'string', 'LastName': 'string', 'Department': 'integer', 'EmployeeID': 'string'}\n\n", "use to record which is much more handy.\nprint(dict(df.to_records(index=False)))\n\nShould Gives #\n{'FirstName': 'string', 'LastName': 'string', 'Department': 'integer', 'EmployeeID': 'string'}\n\nEdit :\nIf you want keys alone then\nd = dict(df.to_records(index=False))\n\nprint(list(d.keys()))\n\nshould Gives #\n['FirstName', 'LastName', 'Department', 'EmployeeID']\n\n", "You can do an easy dict comprehension with your data:\nInput data:\ndata = pd.DataFrame({'Column_Name' : ['FirstName', 'LastName', 'Department'], 'Datatype' : ['Jane', 'Doe', 666]})\nDict comprehension:\n{n[0]:n[1] for n in data.to_numpy()}\nThis will give you:\n{'FirstName': 'Jane', 'LastName': 'Doe', 'Department': '666'}\nThere are for sure other ways, e.g. using the pandas to_dict function, but I am not very familiar with this.\nEdit:\nBut keep in mind, a dictionary needs unique values.\nYour categories (first, lastname) looks like general categories. This here will only work for a single person, otherwise you have multiple keys.\n" ]
[ 2, 1, 0 ]
[]
[]
[ "dataframe", "dtype", "json", "list", "python" ]
stackoverflow_0074610486_dataframe_dtype_json_list_python.txt
Q: How to combine two jointplots with different colors I want two jointplots to be plotted together, not on different figures. How can I do that? I tried sns.jointplot(column_headers[4],column_headers[6],data=df,color="blue") sns.jointplot(column_headers[4],column_headers[6],data=typesatt[0],color="red") It gave me two different figures A: You'll need to merge your dataframes, adding a hue column. Here is an example starting from some test data. Note that when using multiple distributions, in order to make the plot more readable, seaborn automatically changes the marginal plots from histograms to kdeplots. import matplotlib.pyplot as plt import seaborn as sns import pandas as pd import numpy as np np.random.seed(20221129) dfb = pd.DataFrame({'col4': np.random.randn(1000).cumsum(), 'col6': np.random.randn(1000).cumsum()}) dfr = pd.DataFrame({'col4': np.random.randn(100).cumsum(), 'col6': np.random.randn(100).cumsum()}) df_merged = pd.DataFrame({'col4': pd.concat([dfb['col4'], dfr['col4']]), 'col6': pd.concat([dfb['col6'], dfr['col6']]), 'origin': ['dfb'] * len(dfb) + ['dfr'] * len(dfr)}).reset_index() sns.jointplot(x='col4', y='col6', hue='origin', data=df_merged, palette=["blue", "red"])
How to combine two jointplots with different colors
I want two jointplots to be plotted together, not on different figures. How can I do that? I tried sns.jointplot(column_headers[4],column_headers[6],data=df,color="blue") sns.jointplot(column_headers[4],column_headers[6],data=typesatt[0],color="red") It gave me two different figures
[ "You'll need to merge your dataframes, adding a hue column. Here is an example starting from some test data. Note that when using multiple distributions, in order to make the plot more readable, seaborn automatically changes the marginal plots from histograms to kdeplots.\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pandas as pd\nimport numpy as np\n\nnp.random.seed(20221129)\ndfb = pd.DataFrame({'col4': np.random.randn(1000).cumsum(),\n 'col6': np.random.randn(1000).cumsum()})\ndfr = pd.DataFrame({'col4': np.random.randn(100).cumsum(),\n 'col6': np.random.randn(100).cumsum()})\ndf_merged = pd.DataFrame({'col4': pd.concat([dfb['col4'], dfr['col4']]),\n 'col6': pd.concat([dfb['col6'], dfr['col6']]),\n 'origin': ['dfb'] * len(dfb) + ['dfr'] * len(dfr)}).reset_index()\n\nsns.jointplot(x='col4', y='col6', hue='origin', data=df_merged, palette=[\"blue\", \"red\"])\n\n\n" ]
[ 1 ]
[]
[]
[ "matplotlib", "pandas", "python", "seaborn" ]
stackoverflow_0074609337_matplotlib_pandas_python_seaborn.txt
Q: Append new column to a Snowpark DataFrame with simple string I've started using python Snowpark and no doubt missing obvious answers based on being unfamiliar to the syntax and documentation. I would like to do a very simple operation: append a new column to an existing Snowpark DataFrame and assign with a simple string. Any pointers to the documentation to what I presume is readily achievable would be appreciated. A: You can do this by using the function with_column in combination with the lit function. The with_column function needs a Column expression and for a literal value this can be made with the lit function. see documentation here: https://docs.snowflake.com/en/developer-guide/snowpark/reference/python/api/snowflake.snowpark.functions.lit.html from snowflake.snowpark.functions import lit snowpark_df = snowpark_df.with_column('NEW_COL', lit('your_string'))
Append new column to a Snowpark DataFrame with simple string
I've started using python Snowpark and no doubt missing obvious answers based on being unfamiliar to the syntax and documentation. I would like to do a very simple operation: append a new column to an existing Snowpark DataFrame and assign with a simple string. Any pointers to the documentation to what I presume is readily achievable would be appreciated.
[ "You can do this by using the function with_column in combination with the lit function. The with_column function needs a Column expression and for a literal value this can be made with the lit function. see documentation here: https://docs.snowflake.com/en/developer-guide/snowpark/reference/python/api/snowflake.snowpark.functions.lit.html\nfrom snowflake.snowpark.functions import lit \nsnowpark_df = snowpark_df.with_column('NEW_COL', lit('your_string'))\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "python", "snowpark" ]
stackoverflow_0074562541_dataframe_python_snowpark.txt
Q: Algorithm to calculate the number of children per parent from an Excel file

I have an Excel file containing 2 columns and 763 rows, screenshot: parent-child file
Those strange strings are just codes for mobile sites. As a description, both columns of this file contain mobile site names, and as you know, mobile sites forward mobile traffic to each other, so the parent site forwards traffic to the child site.
Important note: a parent site can have more than one child, and a parent site can itself be a child of another site.
For example: A is the parent of B, B is the parent of C, and C is the parent of D. What I need: when I enter A, the output should be that site A has three children --> B, C and D.
What I need as a result is to add a new column to this file holding the number of child sites that depend on each parent site, so that when I get an alarm telling me that a parent site is down, I can see how many sites are affected (using the new Excel file).
The Excel file is at the following link so you can get a better look:
https://docs.google.com/spreadsheets/d/1ljXiYvNWmG-x7hRi0PyVZ6ejbC4wT8FI/edit?usp=share_link&ouid=114185320765894103697&rtpof=true&sd=true
Until now I wrote this:
import pandas as pd
df = pd.read_excel(r'C:\Users\jalal.hasain\Desktop')
print(df)

I would appreciate any idea to solve this problem. I need help creating the third column's values; I tried to write Python code and stored the Excel data in a pandas DataFrame, but I couldn't work out the solution.
Thanks for your cooperation.

A: You want to build a graph.
You can combine pandas and networkx for this:
import pandas as pd
import networkx as nx

G = nx.from_pandas_edgelist(pd.read_excel('CHILD--PARENT.xlsx'),
                            source='Parent', target='Child',
                            create_using=nx.DiGraph)

Then fetch the descendants using nx.descendants:
nx.descendants(G, '064AQ')

Output:
{'070AQ', '471AQ', '040AQ'}

Relevant part of the graph:

A bit larger context of the graph:
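Building on that answer, a sketch of the step the question actually asks for - counting all descendants per parent and writing the result back as a new column; the column names Parent and Child follow the answer, while the output file name and the Affected_Sites column name are placeholders:

import networkx as nx
import pandas as pd

df = pd.read_excel('CHILD--PARENT.xlsx')
G = nx.from_pandas_edgelist(df, source='Parent', target='Child',
                            create_using=nx.DiGraph)

# For every parent site, count all sites that depend on it, directly or indirectly.
df['Affected_Sites'] = df['Parent'].map(lambda site: len(nx.descendants(G, site)))

df.to_excel('CHILD--PARENT - with counts.xlsx', index=False)
print(df.head())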
Algorithm to calculate the number of children per parent from an Excel file
I have an excel file containing 2 columns & 763 row, screenshot : parent-child file Those strange strings is just a code for a mobile sites. As a description, this file has in both columns a mobile sites names, and as you know, mobile sites forward mobile traffic to each other, so the parent site forward traffic to the child site. Important note : The parent site could have more than one child, plus, parent site could be a child for other sites. for example : A parent of B, B parent of C, C parent of D. what is I need when I enter A, output : site A has three children--> they are C,B,D. what I need as a result is adding a new column to this file having the value of : number of child sites depending on this parent site, so when I get an alarm telling me that this parent site is down, I can know how many sites affected also (using the new excel file). The excel file is in the following link so you can get better look: https://docs.google.com/spreadsheets/d/1ljXiYvNWmG-x7hRi0PyVZ6ejbC4wT8FI/edit?usp=share_link&ouid=114185320765894103697&rtpof=true&sd=true until now I wrote this : import pandas as pd df = pd.read_excel(r'C:\Users\jalal.hasain\Desktop') print(df) I will appreciate it if you have any idea to solve this problem, I need help in creating the third column values, I tried to write a python code and stored the excel in pandas df, but I couldn't get the idea of the solution. Thanks for your cooperation, appreciated.
[ "You want to build a graph.\nYou can combine pandas and networkx for this:\nimport pandas as pd\nimport networkx as nx\n\nG = nx.from_pandas_edgelist(pd.read_excel('CHILD--PARENT.xlsx'),\n source='Parent', target='Child',\n create_using=nx.DiGraph)\n\nThen fetch the descendants using nx.descendants:\nnx.descendants(G, '064AQ')\n\nOutput:\n{'070AQ', '471AQ', '040AQ'}\n\nRelevant part of the graph:\n\nA bit larger context of the graph:\n\n" ]
[ 1 ]
[]
[]
[ "algorithm", "data_structures", "dataframe", "excel_formula", "python" ]
stackoverflow_0074610928_algorithm_data_structures_dataframe_excel_formula_python.txt
Q: Ipywidgets FileUpload widget is not working with JupyterLab web app inside Docker This is my first question on Stackoverflow. I am using JupyterLab with Ipywidgets for a while now and wanted to put my work into a Docker container to share it. Unfortunately, I ran into an issue with the FileUpload widget from Ipywidgets which worked perfectly fine when JupyterLab was ran locally. With JupyterLab inside a Docker container I am just able to access my file system with this widget but I am unable to "pull" the content of a file into the widget. I am not really sure how the FileUpload widget is working. Is it sending "http" requests? What do I need to add to my Dockerfile to make this work? JupyterLab Ipywidgets FileUpload widget I am very new to Docker, so my Dockerfile is not optimized and sorted. And I needed to add a lot of things to my Dockerfile in order to make some JupyterLab extensions work. My Dockerfile: ` ARG PYTHON_VERSION=python-3.8.8 ARG BASE_IMAGE=jupyter/scipy-notebook FROM $BASE_IMAGE:$PYTHON_VERSION ADD feral-1.10.1.jar . ADD java ./java ADD images ./images ADD install.py . ADD propertyFiles ./propertyFiles ADD empty.properties . COPY PropertyBuilderETHERNETnb.ipynb ./PropertyBuilderETHERNETnb.ipynb ENV TZ=Europe/Amsterdam \ JUPYTER_ENABLE_LAB=yes # Install yarn for handling npm packages RUN npm install --global yarn # Enable yarn global add: ENV PATH="$PATH:$HOME/.yarn/bin" RUN pip install --upgrade pip && \ pip install jupyterlab==3.5.0 && \ pip install ipywidgets &&\ pip install colorama USER root RUN apt-get update && \ apt-get -qq --no-install-recommends install openjdk-17-jdk && \ python3 install.py ENV JAVA_HOME=/usr/lib/jvm/java-1.17.0-openjdk-amd64/ RUN echo "export PATH=$PATH" > /etc/environment #Update and compile JupyterLab extensions? RUN jupyter server extension disable nbclassic && \ jupyter labextension install @jupyterlab/celltags && \ jupyter labextension install @jupyterlab/toc && \ jupyter labextension install @telamonian/theme-darcula && \ jupyter labextension install jupyterlab_hidecode && \ jupyter labextension update --all && \ jupyter lab build CMD jupyter trust *PropertyBuilderETHERNETnb.ipynb && \ jupyter lab --allow-root --NotebookApp.token='' ` Error, when I try to access the value of the widget after uploading: JupyterLab Error inside Docker I already tried using Docker Bind Mounts to mount the container to my file system. Unfortunately this didn't have any impact. I expect that the widget is not working like this, I propably need to enable some "http" exchange access between the host and the Docker. Thank you for the help. :) A: The FileUpload widget from Ipywidgets is a widget that allows users to upload files from their local file system to the JupyterLab web application. It works by sending an HTTP request to the JupyterLab server, which then reads the file and sends it back to the widget. In order to make the FileUpload widget work with JupyterLab inside a Docker container, you need to make sure that the container has access to the host's file system. This can be done by using Docker Bind Mounts. To do this, you need to add the following line to your Dockerfile: VOLUME ["/host/path/to/files:/container/path/to/files"] This will mount the host's file system to the container's file system, allowing the FileUpload widget to access the files on the host. Once this is done, you should be able to use the FileUpload widget as expected.
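Independent of the container setup, it may also help to show how the uploaded bytes are usually read back out of the widget, since the layout of FileUpload.value changed between ipywidgets 7 (a dict keyed by file name) and ipywidgets 8 (a tuple of dicts); this is a hedged sketch, not a diagnosis of the exact error in the screenshot:

import ipywidgets as widgets
from IPython.display import display

uploader = widgets.FileUpload(accept='', multiple=False)
display(uploader)

def read_uploaded_bytes(upload_widget):
    value = upload_widget.value
    if isinstance(value, dict):   # ipywidgets 7.x: {filename: {'content': b'...', ...}}
        item = next(iter(value.values()))
    else:                         # ipywidgets 8.x: ({'name': ..., 'content': memoryview, ...},)
        item = value[0]
    return bytes(item['content'])

# After a file has been chosen in the widget:
# data = read_uploaded_bytes(uploader)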
Ipywidgets FileUpload widget is not working with JupyterLab web app inside Docker
This is my first question on Stackoverflow. I am using JupyterLab with Ipywidgets for a while now and wanted to put my work into a Docker container to share it. Unfortunately, I ran into an issue with the FileUpload widget from Ipywidgets which worked perfectly fine when JupyterLab was ran locally. With JupyterLab inside a Docker container I am just able to access my file system with this widget but I am unable to "pull" the content of a file into the widget. I am not really sure how the FileUpload widget is working. Is it sending "http" requests? What do I need to add to my Dockerfile to make this work? JupyterLab Ipywidgets FileUpload widget I am very new to Docker, so my Dockerfile is not optimized and sorted. And I needed to add a lot of things to my Dockerfile in order to make some JupyterLab extensions work. My Dockerfile: ` ARG PYTHON_VERSION=python-3.8.8 ARG BASE_IMAGE=jupyter/scipy-notebook FROM $BASE_IMAGE:$PYTHON_VERSION ADD feral-1.10.1.jar . ADD java ./java ADD images ./images ADD install.py . ADD propertyFiles ./propertyFiles ADD empty.properties . COPY PropertyBuilderETHERNETnb.ipynb ./PropertyBuilderETHERNETnb.ipynb ENV TZ=Europe/Amsterdam \ JUPYTER_ENABLE_LAB=yes # Install yarn for handling npm packages RUN npm install --global yarn # Enable yarn global add: ENV PATH="$PATH:$HOME/.yarn/bin" RUN pip install --upgrade pip && \ pip install jupyterlab==3.5.0 && \ pip install ipywidgets &&\ pip install colorama USER root RUN apt-get update && \ apt-get -qq --no-install-recommends install openjdk-17-jdk && \ python3 install.py ENV JAVA_HOME=/usr/lib/jvm/java-1.17.0-openjdk-amd64/ RUN echo "export PATH=$PATH" > /etc/environment #Update and compile JupyterLab extensions? RUN jupyter server extension disable nbclassic && \ jupyter labextension install @jupyterlab/celltags && \ jupyter labextension install @jupyterlab/toc && \ jupyter labextension install @telamonian/theme-darcula && \ jupyter labextension install jupyterlab_hidecode && \ jupyter labextension update --all && \ jupyter lab build CMD jupyter trust *PropertyBuilderETHERNETnb.ipynb && \ jupyter lab --allow-root --NotebookApp.token='' ` Error, when I try to access the value of the widget after uploading: JupyterLab Error inside Docker I already tried using Docker Bind Mounts to mount the container to my file system. Unfortunately this didn't have any impact. I expect that the widget is not working like this, I propably need to enable some "http" exchange access between the host and the Docker. Thank you for the help. :)
[ "The FileUpload widget from Ipywidgets is a widget that allows users to upload files from their local file system to the JupyterLab web application. It works by sending an HTTP request to the JupyterLab server, which then reads the file and sends it back to the widget.\nIn order to make the FileUpload widget work with JupyterLab inside a Docker container, you need to make sure that the container has access to the host's file system. This can be done by using Docker Bind Mounts.\nTo do this, you need to add the following line to your Dockerfile:\nVOLUME [\"/host/path/to/files:/container/path/to/files\"]\n\nThis will mount the host's file system to the container's file system, allowing the FileUpload widget to access the files on the host.\nOnce this is done, you should be able to use the FileUpload widget as expected.\n" ]
[ 1 ]
[]
[]
[ "docker", "ipywidgets", "jupyter_lab", "python" ]
stackoverflow_0074463062_docker_ipywidgets_jupyter_lab_python.txt
Q: Adding to Random Index I'm trying to add some value at random indexes in a PIL image. I could do that by #find random row and column indices idx_r=random.choices(np.arange(cat[:,0,0].shape[0]), k=int((cat.shape[0]*0.25))) idx_c=random.choices(np.arange(cat[0,:,0].shape[0]), k=int((cat.shape[1]*0.25))) #add at those indices for i in idx_r: for j in idx_c: cat[i,j,:] = torch.add(cat[i,j,:], cost) However, it is very time expensive to do that over images of large size. I can't use the normal masking method for mutlidimensional arrays. Is there an computationally easier way of doing this? A: Firstly, you don't really "find random row and column indices". What you are doing is generating an array of size k with random elements of cat[:,0,0], not with their indices. Generating a random arry of indices would be done as follow: idx_r=random.choices(np.arange(cat[:,0,0].shape[0]), k=int((cat.shape[0]*0.25))) idx_c=random.choices(np.arange(cat[0,:,0].shape[0]), k=int((cat.shape[0]*0.25))) Secondly, you should not need torch.add here, cat[i, j, :] = cat[i, j, :] + cost should have the same effect and might allow you to not import pytorch. I will leave the rest of the answer here in case it is useful for anyone, but it doesn't seem of any use for the original question in this state. Here, I used the answer to How to set numpy matrix elements to a value with given indexes to do a simple matrix addition. idx_r=random.choices(np.arange(cat[:,0,0].shape[0]), k=int((cat.shape[0]*0.25))) idx_c=random.choices(np.arange(cat[0,:,0].shape[0]), k=int((cat.shape[0]*0.25))) zeros = np.zeros(cat.shape) zeros[idx_r[:], idx_c[:], :] = cost cat = np.add(cat, zeros) After verification, it seems this method is way slower than the original you proposed. So much slower in fact that I would assume it is an error rather than a coherent result: from timeit import timeit setup: str = ''' import numpy as np import random cat = np.random(500, 500, 500) idx_r = random.choices(np.arange(cat[:,0,0].shape[0]), k=int((cat.shape[0]*0.25))) idx_c = random.choices(np.arange(cat[0,:,0].shape[0]), k=int((cat.shape[0]*0.25))) cost = 42 ''' original: str = ''' for i in idx_r: for j in idx_c: cat[i, j, :] = cat[i, j, :] + cost ''' mine: str = ''' zeros = np.zeros(cat.shape) zeros[idx_r[:], idx_c[:], :] = cost cat = np.add(cat, zeros) ''' timeit(original, setup=setup, number=100) # 4.27609... timeit(mine, setup=setup, number=100) # 30.05506...
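A hedged sketch of a vectorized alternative to the double loop above: np.ix_ builds the row/column cross product so the addition happens in one NumPy operation. The array shape and cost value are placeholders, and note that with fancy indexing repeated indices are written only once (np.add.at handles accumulation if duplicate draws must add up).

import numpy as np

rng = np.random.default_rng()
cat = rng.random((480, 640, 3))              # stand-in for the image array
cost = 0.1                                   # stand-in for the value to add

idx_r = rng.choice(cat.shape[0], size=int(cat.shape[0] * 0.25))
idx_c = rng.choice(cat.shape[1], size=int(cat.shape[1] * 0.25))

# np.ix_ turns the two 1-D index arrays into an open mesh, so every
# (row, column) pair gets the constant added across all channels at once
cat[np.ix_(idx_r, idx_c)] += cost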
Adding to Random Index
I'm trying to add some value at random indexes in a PIL image. I could do that by #find random row and column indices idx_r=random.choices(np.arange(cat[:,0,0].shape[0]), k=int((cat.shape[0]*0.25))) idx_c=random.choices(np.arange(cat[0,:,0].shape[0]), k=int((cat.shape[1]*0.25))) #add at those indices for i in idx_r: for j in idx_c: cat[i,j,:] = torch.add(cat[i,j,:], cost) However, it is very time-consuming to do that over images of large size. I can't use the normal masking method for multidimensional arrays. Is there a computationally cheaper way of doing this?
[ "Firstly, you don't really \"find random row and column indices\". What you are doing is generating an array of size k with random elements of cat[:,0,0], not with their indices.\nGenerating a random arry of indices would be done as follow:\nidx_r=random.choices(np.arange(cat[:,0,0].shape[0]), k=int((cat.shape[0]*0.25)))\nidx_c=random.choices(np.arange(cat[0,:,0].shape[0]), k=int((cat.shape[0]*0.25)))\n\nSecondly, you should not need torch.add here, cat[i, j, :] = cat[i, j, :] + cost should have the same effect and might allow you to not import pytorch.\nI will leave the rest of the answer here in case it is useful for anyone, but it doesn't seem of any use for the original question in this state.\nHere, I used the answer to How to set numpy matrix elements to a value with given indexes to do a simple matrix addition.\nidx_r=random.choices(np.arange(cat[:,0,0].shape[0]), k=int((cat.shape[0]*0.25)))\nidx_c=random.choices(np.arange(cat[0,:,0].shape[0]), k=int((cat.shape[0]*0.25)))\nzeros = np.zeros(cat.shape)\nzeros[idx_r[:], idx_c[:], :] = cost\ncat = np.add(cat, zeros)\n\nAfter verification, it seems this method is way slower than the original you proposed. So much slower in fact that I would assume it is an error rather than a coherent result:\nfrom timeit import timeit\nsetup: str = '''\nimport numpy as np\nimport random\ncat = np.random(500, 500, 500)\nidx_r = random.choices(np.arange(cat[:,0,0].shape[0]), k=int((cat.shape[0]*0.25)))\nidx_c = random.choices(np.arange(cat[0,:,0].shape[0]), k=int((cat.shape[0]*0.25)))\ncost = 42\n'''\noriginal: str = '''\nfor i in idx_r:\n for j in idx_c:\n cat[i, j, :] = cat[i, j, :] + cost\n'''\nmine: str = '''\nzeros = np.zeros(cat.shape)\nzeros[idx_r[:], idx_c[:], :] = cost\ncat = np.add(cat, zeros)\n'''\ntimeit(original, setup=setup, number=100) # 4.27609...\ntimeit(mine, setup=setup, number=100) # 30.05506...\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "python", "pytorch" ]
stackoverflow_0074610802_arrays_python_pytorch.txt
Q: Trying a Password Validation Program in Python (pls help me) My code was supposed to check two passwords check the length of the passwords check the first and last letters of the passwords and check the uppercase and lowercase of the passwords pass1 = "secret" pass2 = "choccie" InputPassword = input("Write the password: ") error = "Wrong password input" def PasswordChecker(password): for password in pass1 & pass2: if len(InputPassword) != len(password) or InputPassword[0] != password[0] or InputPassword[-1] != password[-1] or InputPassword.isupper() == True: print(error) elif InputPassword == password: print("Welcome!") print(PasswordChecker) I know that there's a problem regarding my 2 passwords (pass1, pass2), and def PasswordChecker(password) but I don't understand how to fix it. A: I corrected your code: You had several errors in your code, like using the bitwise and, not correctly calling the function and some conditionals that I did not understnd what they did. What I undeartand you want to do is have a list of allowed password, and grant access if the input password matches one of those. def PasswordChecker(password): allowed_passwords = ["secret", "choccie"] error_message = "Wrong password input" if password in allowed_passwords: print("Welcome!") return True else: print(error_message) return False input_password = input("Write the password: ") access_granted = PasswordChecker(input_password) Hardcoding the allowed passwords like this is a bad practice, since anybody with access to the source code will then have access to those passwords, so use this only as a way of learning python and not as a real password checker. A: Try bracketing your condition in a pair. Other than that, you can share more details on the error you are facing. if (len(InputPassword) != len(password) or InputPassword[0] != password[0]) or (InputPassword[-1] != password[-1] or InputPassword.isupper() == True):
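Following up on the note above about not hardcoding plaintext passwords, a small illustrative sketch using only the standard library: pbkdf2_hmac plus a constant-time comparison. The iteration count and sample password are arbitrary, and a real system would use a maintained password-hashing library rather than this snippet.

import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # derive a salted digest instead of storing the plaintext password
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(candidate, salt, expected):
    digest = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, 100_000)
    return hmac.compare_digest(digest, expected)   # constant-time comparison

salt, stored = hash_password("choccie")            # sample password from the question
ok = check_password(input("Write the password: "), salt, stored)
print("Welcome!" if ok else "Wrong password input")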
Trying a Password Validation Program in Python (pls help me)
My code was supposed to: check two passwords, check the length of the passwords, check the first and last letters of the passwords, and check the uppercase and lowercase of the passwords. pass1 = "secret" pass2 = "choccie" InputPassword = input("Write the password: ") error = "Wrong password input" def PasswordChecker(password): for password in pass1 & pass2: if len(InputPassword) != len(password) or InputPassword[0] != password[0] or InputPassword[-1] != password[-1] or InputPassword.isupper() == True: print(error) elif InputPassword == password: print("Welcome!") print(PasswordChecker) I know that there's a problem regarding my 2 passwords (pass1, pass2) and def PasswordChecker(password), but I don't understand how to fix it.
[ "I corrected your code:\nYou had several errors in your code, like using the bitwise and, not correctly calling the function and some conditionals that I did not understnd what they did.\nWhat I undeartand you want to do is have a list of allowed password, and grant access if the input password matches one of those.\ndef PasswordChecker(password):\n allowed_passwords = [\"secret\", \"choccie\"]\n error_message = \"Wrong password input\"\n\n if password in allowed_passwords:\n print(\"Welcome!\")\n return True\n else:\n print(error_message)\n return False\n \n\ninput_password = input(\"Write the password: \")\naccess_granted = PasswordChecker(input_password)\n\nHardcoding the allowed passwords like this is a bad practice, since anybody with access to the source code will then have access to those passwords, so use this only as a way of learning python and not as a real password checker.\n", "Try bracketing your condition in a pair. Other than that, you can share more details on the error you are facing.\n if (len(InputPassword) != len(password) or InputPassword[0] != password[0]) or (InputPassword[-1] != password[-1] or InputPassword.isupper() == True):\n\n" ]
[ 0, 0 ]
[]
[]
[ "password_checker", "passwords", "python" ]
stackoverflow_0074610875_password_checker_passwords_python.txt
Q: Vue3 frontend, Django back end. Key error for validated data in serializer I have a Vue front end that collects data (and files) from a user and POST it to a Django Rest Framework end point using Axios. Here is the code for that function: import { ref } from "vue"; import axios from "axios"; const fields = ref({ audience: "", cancomment: "", category: "", body: "", errors: [], previews: [], images: [], video: [], user: user, }); function submitPost() { const formData = { 'category': fields.value.category.index, 'body': fields.value.body, 'can_view': fields.value.audience, 'can_comment': fields.value.cancomment, 'video': fields.value.video, 'uploaded_images': fields.value.images, 'user': store.userId }; console.log(formData['uploaded_images']) axios .post('api/v1/posts/create/', formData, { headers: { "Content-Type": "multipart/form-data", "X-CSRFToken": "{{csrf-token}}" } }) .then((response) => { if(response.status == 201){ store.messages.push("Post created successfully") } }) .catch((error) => { messages.value.items.push(error.message) }) } When I post data the response I see on the server side is: uploaded_data = validated_data.pop('uploaded_images') KeyError: 'uploaded_images' that comes from this serializer: class PostImageSerializer(serializers.ModelSerializer): class Meta: model = PostImage fields = ['image', 'post'] class PostSerializer(serializers.ModelSerializer): images = PostImageSerializer(many=True, read_only=True, required=False) uploaded_images = serializers.ListField(required=False, child=serializers.FileField(max_length=1000000, allow_empty_file=False, use_url=False),write_only=True) class Meta: model = Post fields = [ "category", "body", "images", "uploaded_images", "video", "can_view", "can_comment", "user", "published", "pinned", "created_at", "updated_at", ] def create(self, validated_data): uploaded_data = validated_data.pop('uploaded_images') new_post = Post.objects.create(**validated_data) try: for uploaded_item in uploaded_data: PostImage.objects.create(post = new_post, images = uploaded_item) except: PostImage.objects.create(post=new_post) return new_post Trying to make sense of this so am I correct in my thinking that DRF saves the serializer when the data is sent to the endpoint? The variable validated_data I presume is the request.data object? Why am I getting the KeyError then and how can I see what the data is that is being validated, or sent in the post request on the server side. 
The data sent in the post request in the browser looks like this: -----------------------------2091287168172869498837072731 Content-Disposition: form-data; name="body" Post -----------------------------2091287168172869498837072731 Content-Disposition: form-data; name="can_view" Everybody -----------------------------2091287168172869498837072731 Content-Disposition: form-data; name="can_comment" Everybody -----------------------------2091287168172869498837072731 Content-Disposition: form-data; name="uploaded_images.0"; filename="tumblr_42e2ad7e187aaa1b4c6f4f7e698d03f2_c9a2b230_640.jpg" Content-Type: image/jpeg -----------------------------2091287168172869498837072731 Content-Disposition: form-data; name="body" Post -----------------------------2091287168172869498837072731 Content-Disposition: form-data; name="can_view" Everybody -----------------------------2091287168172869498837072731 Content-Disposition: form-data; name="can_comment" Everybody -----------------------------2091287168172869498837072731 Content-Disposition: form-data; name="uploaded_images.0"; filename="tumblr_42e2ad7e187aaa1b4c6f4f7e698d03f2_c9a2b230_640.jpg" Content-Type: image/jpeg (¼T¼Þ7ó[®«ý;>7гô eIqegy[XbkéÉc¤ÎSFÌÔÂåÄAR§*P!I<R,4AP9ÖgÅÖYÔ×éu«ÅÉ<IJª+`,.uòÜtK7xéu.Ô¬]{ù£æÍ÷·n²±×:îã¡`UÐKxªyjxñDUAP¢+ÄÅB1yõçùuS5å D÷ zö4®n¦Öod&<z¼P W9©xeúD5ÈMpÖö¬ðÓKÊľO«oµÊMçÇy|z=^<AKêôz¼x##:ù;«OdÞ¢¶WRùººRêÜêú8ø¡ãÄ"¼AãÅj¿3ÆõÙRÆ]_MTÆ^;; `ttR}mì¤*bêwy¾=d<xòøòxÄ( Here is the ViewSet that sits at the endpoint: class PostViewSet(viewsets.ModelViewSet): queryset = Post.objects.all() serializer_class = PostSerializer filter_backends = [django_filters.rest_framework.DjangoFilterBackend, filters.SearchFilter, django_filters.rest_framework.OrderingFilter] # filterset_class = PostFilter ordering_fields = ['created_at',] search_fields = ['category', 'body'] permission_classes = [permissions.IsAuthenticated] def get_serializer_context(self): return {'request': self.request} parser_classes = [MultiPartParser, FormParser] lookup_field = 'slug' A: So, after a few hours of research I was able to find my own solution. The method used to read multiple files, was taken from this answer. By breaking the [object FileList] into separate files and appending them to the FormData. The models are based on this answer On the backend, overriding the create method of the serializer and loop through resquest.POST.data excluding unwanted keys to access the just the files. And saving them into the Images model (should be named PostImage). Note that I do no access the validated_data for the files, instead they are retrieved directly from the request. I used bootstrap5 in the frontend. 
EDIT: Tested only two types of request GET(list) and POST(create) (as you see in vue component) models.py: class Post(models.Model): title = models.CharField(max_length=128) body = models.CharField(max_length=400) def get_image_filename(instance, filename): title = instance.post.title slug = slugify(title) return "post_images/%s-%s" % (slug, filename) class Images(models.Model): post = models.ForeignKey(Post, default=None, on_delete=models.CASCADE) image = models.ImageField(upload_to=get_image_filename, verbose_name='Image') serializers.py: from core.models import Images, Post from rest_framework import serializers class PostSerializer(serializers.ModelSerializer): images = serializers.SerializerMethodField() class Meta: model = Post fields = '__all__' def create(self, validated_data): new_post = Post.objects.create(**validated_data) data = self.context['request'].data for key, image in data.items(): if key != 'title' and key != 'body': image = Images.objects.create(post=new_post, image=image) return new_post def get_images(self, obj): images = [] qs = Images.objects.filter(post=obj) for item in qs: images.append(item.image.name) return images views.py: from rest_framework import viewsets from core.models import Post from core.serializers import PostSerializer class PostViewSet(viewsets.ModelViewSet): queryset = Post.objects.all() serializer_class = PostSerializer TestComponent.vue: <template> <div class="container" style="display: flex; justify-content: center; align-items: center;"> <form @submit.prevent="submit" > <div class="mb-3"> <label for="exampleInputTitle" class="form-label">Title</label> <input type="text" class="form-control" id="exampleInputTitle" v-model="title"> </div> <div class="mb-3"> <label for="exampleInputBody" class="form-label">Body</label> <input type="text" class="form-control" id="exampleInputBody" v-model="body"> </div> <div class="mb-3"> <label for="formFileMultiple" class="form-label">Multiple files input example</label> <input class="form-control" type="file" id="formFileMultiple" ref="file" multiple> </div> <div> <button type="submit" class="btn btn-primary">Submit</button> </div> </form> </div> </template> <script> import axios from 'axios' export default { data () { this.title = '', this.body = '' }, methods: { submit() { const formData = new FormData(); for( var i = 0; i < this.$refs.file.files.length; i++ ){ let file = this.$refs.file.files[i]; formData.append('files[' + i + ']', file); } formData.append("title", this.title); formData.append("body", this.body); axios.post('http://localhost:8000/posts/', formData, { headers: { 'Content-Type': 'multipart/form-data' } }) .then((response) => { console.log(response.data); }) .catch((error) => { console.log(error.response); }); } }, mounted() { axios.get('http://localhost:8000/posts/') .then((response) => { console.log(response.data); }) .catch((error) => { console.log(error.response); }); } } </script> A: Hope this helps someone. Finally after three days battling with this I found the solution to my issue. In the models I have this function that generates a string I can use as the upload_to string for the PostImage: def post_directory_path(instance, filename): return 'user_{0}/posts/post_{1}/{2}'.format(instance.user.id, instance.post.id, filename) There is no user instance on the PostImage only on the Post and Django does not not throw an exception or show any errors for this mistake, which is why I did not look for the problem there.
Vue3 frontend, Django back end. Key error for validated data in serializer
I have a Vue front end that collects data (and files) from a user and POST it to a Django Rest Framework end point using Axios. Here is the code for that function: import { ref } from "vue"; import axios from "axios"; const fields = ref({ audience: "", cancomment: "", category: "", body: "", errors: [], previews: [], images: [], video: [], user: user, }); function submitPost() { const formData = { 'category': fields.value.category.index, 'body': fields.value.body, 'can_view': fields.value.audience, 'can_comment': fields.value.cancomment, 'video': fields.value.video, 'uploaded_images': fields.value.images, 'user': store.userId }; console.log(formData['uploaded_images']) axios .post('api/v1/posts/create/', formData, { headers: { "Content-Type": "multipart/form-data", "X-CSRFToken": "{{csrf-token}}" } }) .then((response) => { if(response.status == 201){ store.messages.push("Post created successfully") } }) .catch((error) => { messages.value.items.push(error.message) }) } When I post data the response I see on the server side is: uploaded_data = validated_data.pop('uploaded_images') KeyError: 'uploaded_images' that comes from this serializer: class PostImageSerializer(serializers.ModelSerializer): class Meta: model = PostImage fields = ['image', 'post'] class PostSerializer(serializers.ModelSerializer): images = PostImageSerializer(many=True, read_only=True, required=False) uploaded_images = serializers.ListField(required=False, child=serializers.FileField(max_length=1000000, allow_empty_file=False, use_url=False),write_only=True) class Meta: model = Post fields = [ "category", "body", "images", "uploaded_images", "video", "can_view", "can_comment", "user", "published", "pinned", "created_at", "updated_at", ] def create(self, validated_data): uploaded_data = validated_data.pop('uploaded_images') new_post = Post.objects.create(**validated_data) try: for uploaded_item in uploaded_data: PostImage.objects.create(post = new_post, images = uploaded_item) except: PostImage.objects.create(post=new_post) return new_post Trying to make sense of this so am I correct in my thinking that DRF saves the serializer when the data is sent to the endpoint? The variable validated_data I presume is the request.data object? Why am I getting the KeyError then and how can I see what the data is that is being validated, or sent in the post request on the server side. 
The data sent in the post request in the browser looks like this: -----------------------------2091287168172869498837072731 Content-Disposition: form-data; name="body" Post -----------------------------2091287168172869498837072731 Content-Disposition: form-data; name="can_view" Everybody -----------------------------2091287168172869498837072731 Content-Disposition: form-data; name="can_comment" Everybody -----------------------------2091287168172869498837072731 Content-Disposition: form-data; name="uploaded_images.0"; filename="tumblr_42e2ad7e187aaa1b4c6f4f7e698d03f2_c9a2b230_640.jpg" Content-Type: image/jpeg [binary JPEG file content omitted] Here is the ViewSet that sits at the endpoint: class PostViewSet(viewsets.ModelViewSet): queryset = Post.objects.all() serializer_class = PostSerializer filter_backends = [django_filters.rest_framework.DjangoFilterBackend, filters.SearchFilter, django_filters.rest_framework.OrderingFilter] # filterset_class = PostFilter ordering_fields = ['created_at',] search_fields = ['category', 'body'] permission_classes = [permissions.IsAuthenticated] def get_serializer_context(self): return {'request': self.request} parser_classes = [MultiPartParser, FormParser] lookup_field = 'slug'
[ "So, after a few hours of research I was able to find my own solution. The method used to read multiple files, was taken from this answer. By breaking the [object FileList] into separate files and appending them to the FormData. The models are based on this answer\nOn the backend, overriding the create method of the serializer and loop through resquest.POST.data excluding unwanted keys to access the just the files. And saving them into the Images model (should be named PostImage).\nNote that I do no access the validated_data for the files, instead they are retrieved directly from the request.\nI used bootstrap5 in the frontend.\nEDIT: Tested only two types of request GET(list) and POST(create) (as you see in vue component)\nmodels.py:\nclass Post(models.Model):\n title = models.CharField(max_length=128)\n body = models.CharField(max_length=400)\n \ndef get_image_filename(instance, filename):\n title = instance.post.title\n slug = slugify(title)\n return \"post_images/%s-%s\" % (slug, filename) \n\n\nclass Images(models.Model):\n post = models.ForeignKey(Post, default=None, on_delete=models.CASCADE)\n image = models.ImageField(upload_to=get_image_filename,\n verbose_name='Image')\n\nserializers.py:\nfrom core.models import Images, Post\nfrom rest_framework import serializers\n\nclass PostSerializer(serializers.ModelSerializer):\n images = serializers.SerializerMethodField()\n\n class Meta:\n model = Post\n fields = '__all__'\n\n def create(self, validated_data):\n new_post = Post.objects.create(**validated_data)\n data = self.context['request'].data\n for key, image in data.items():\n if key != 'title' and key != 'body':\n image = Images.objects.create(post=new_post, image=image)\n\n return new_post\n \n def get_images(self, obj):\n images = []\n qs = Images.objects.filter(post=obj)\n for item in qs:\n images.append(item.image.name)\n return images\n\nviews.py:\nfrom rest_framework import viewsets\n \nfrom core.models import Post\nfrom core.serializers import PostSerializer\n\n\nclass PostViewSet(viewsets.ModelViewSet):\n queryset = Post.objects.all()\n serializer_class = PostSerializer\n\nTestComponent.vue:\n<template>\n <div class=\"container\" style=\"display: flex; justify-content: center; align-items: center;\">\n <form @submit.prevent=\"submit\" >\n <div class=\"mb-3\">\n <label for=\"exampleInputTitle\" class=\"form-label\">Title</label>\n <input type=\"text\" class=\"form-control\" id=\"exampleInputTitle\" v-model=\"title\"> \n </div>\n\n <div class=\"mb-3\">\n <label for=\"exampleInputBody\" class=\"form-label\">Body</label>\n <input type=\"text\" class=\"form-control\" id=\"exampleInputBody\" v-model=\"body\"> \n </div>\n\n <div class=\"mb-3\">\n <label for=\"formFileMultiple\" class=\"form-label\">Multiple files input example</label>\n <input class=\"form-control\" type=\"file\" id=\"formFileMultiple\" ref=\"file\" multiple>\n </div>\n\n <div>\n <button type=\"submit\" class=\"btn btn-primary\">Submit</button>\n </div>\n </form>\n </div>\n</template>\n\n<script>\n\nimport axios from 'axios'\n\nexport default {\n data () {\n this.title = '',\n this.body = ''\n },\n methods: {\n submit() {\n const formData = new FormData();\n for( var i = 0; i < this.$refs.file.files.length; i++ ){\n let file = this.$refs.file.files[i];\n formData.append('files[' + i + ']', file);\n }\n formData.append(\"title\", this.title);\n formData.append(\"body\", this.body);\n\n axios.post('http://localhost:8000/posts/', formData, {\n headers: {\n 'Content-Type': 'multipart/form-data'\n }\n })\n 
.then((response) => {\n console.log(response.data);\n })\n .catch((error) => {\n console.log(error.response);\n });\n }\n },\n mounted() {\n axios.get('http://localhost:8000/posts/')\n .then((response) => {\n console.log(response.data);\n })\n .catch((error) => {\n console.log(error.response);\n });\n }\n}\n</script>\n\n", "Hope this helps someone.\nFinally after three days battling with this I found the solution to my issue. In the models I have this function that generates a string I can use as the upload_to string for the PostImage:\ndef post_directory_path(instance, filename):\n return 'user_{0}/posts/post_{1}/{2}'.format(instance.user.id, instance.post.id, filename)\n\nThere is no user instance on the PostImage only on the Post and Django does not not throw an exception or show any errors for this mistake, which is why I did not look for the problem there.\n" ]
[ 0, 0 ]
[]
[]
[ "axios", "django", "django_rest_framework", "python", "vue.js" ]
stackoverflow_0074580505_axios_django_django_rest_framework_python_vue.js.txt
Q: Odoo - Show list of users that belongs to a group I want to get in a field all the users that belongs to a group. I tried this but is not working managers = fields.Many2many('res.users', string="Managers in group", default=lambda self: self.env['res.users'].search([('id','in','module.group_pos_manager')])) Im getting this error The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/opt/odoo/custom/src/odoo/odoo/http.py", line 643, in _handle_exception return super(JsonRequest, self)._handle_exception(exception) File "/opt/odoo/custom/src/odoo/odoo/http.py", line 301, in _handle_exception raise exception.with_traceback(None) from new_cause ValueError: Invalid domain term ('id', 'in', 'module.group_pos_manager') A: There are two cases when using in or not in operators, the value (right) can be a list or a boolean (The boolean case is an abuse and handled for backward compatibility) You can use self.env.ref to get the list of users that belongs to a group using the group's external identifier Example: default=lambda self: self.env.ref('point_of_sale.group_pos_manager').users Edit: You can use a function to compute the domain Example def _get_domain(self): return [('id', 'in', self.env.ref('point_of_sale.group_pos_manager').users.ids)] managers = fields.Many2many('res.users', string="Managers in group", domain=_get_domain)
Odoo - Show list of users that belongs to a group
I want to get in a field all the users that belong to a group. I tried this but it is not working: managers = fields.Many2many('res.users', string="Managers in group", default=lambda self: self.env['res.users'].search([('id','in','module.group_pos_manager')])) I'm getting this error: The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/opt/odoo/custom/src/odoo/odoo/http.py", line 643, in _handle_exception return super(JsonRequest, self)._handle_exception(exception) File "/opt/odoo/custom/src/odoo/odoo/http.py", line 301, in _handle_exception raise exception.with_traceback(None) from new_cause ValueError: Invalid domain term ('id', 'in', 'module.group_pos_manager')
[ "There are two cases when using in or not in operators, the value (right) can be a list or a boolean (The boolean case is an abuse and handled for backward compatibility)\nYou can use self.env.ref to get the list of users that belongs to a group using the group's external identifier\nExample:\ndefault=lambda self: self.env.ref('point_of_sale.group_pos_manager').users\n\nEdit:\nYou can use a function to compute the domain\nExample\ndef _get_domain(self):\n return [('id', 'in', self.env.ref('point_of_sale.group_pos_manager').users.ids)]\n\nmanagers = fields.Many2many('res.users', string=\"Managers in group\", domain=_get_domain)\n\n" ]
[ 1 ]
[]
[]
[ "odoo", "odoo_15", "python" ]
stackoverflow_0074603962_odoo_odoo_15_python.txt
Q: Best way to flatten a list of dicts that contains a nested list of dicts? I have a list of dicts in which one of the dict values is also a list of dicts. I want to flatten it into a list of dicts. I have some working code and would like opinions on whether ot not there is a more idiomatic way of achieving this. Here is my code: from pprint import pprint transactions = [ { "Customer": "Leia", "Store": "Hammersmith", "Basket": "basket1", "items": [ {"Product": "Cheddar", "Quantity": 2, "GrossSpend": 2.50}, {"Product": "Grapes", "Quantity": 1, "GrossSpend": 3.00}, ], }, { "Customer": "Luke", "Store": "Ealing", "Basket": "basket2", "items": [ { "Product": "Custard Creams", "Quantity": 1, "GrossSpend": 3.00, } ], }, ] flattened_transactions = [] for transaction in transactions: flattened_transactions.extend( { "Customer": transaction["Customer"], "Store": transaction["Store"], "Basket": transaction["Basket"], "Product": item["Product"], "Quantity": item["Quantity"], "GrossSpend": item["GrossSpend"], } for item in transaction["items"] ) pprint(flattened_transactions) it outputs: [{'Basket': 'basket1', 'Customer': 'Leia', 'GrossSpend': 2.5, 'Product': 'Cheddar', 'Quantity': 2, 'Store': 'Hammersmith'}, {'Basket': 'basket1', 'Customer': 'Leia', 'GrossSpend': 3.0, 'Product': 'Grapes', 'Quantity': 1, 'Store': 'Hammersmith'}, {'Basket': 'basket2', 'Customer': 'Luke', 'GrossSpend': 3.0, 'Product': 'Custard Creams', 'Quantity': 1, 'Store': 'Ealing'}] Is there a better way of achieving this? A: I would use a list comprehension. [{'Customer': d['Customer'], 'Store': d['Store'], 'Basket': d['Basket'], **d2} for d in transactions for d2 in d['items']] # [{'Customer': 'Leia', 'Store': 'Hammersmith', 'Basket': 'basket1', # 'Product': 'Cheddar', 'Quantity': 2, 'GrossSpend': 2.5}, # {'Customer': 'Leia', 'Store': 'Hammersmith', 'Basket': 'basket1', # 'Product': 'Grapes', 'Quantity': 1, 'GrossSpend': 3.0}, # {'Customer': 'Luke', 'Store': 'Ealing', 'Basket': 'basket2', # 'Product': 'Custard Creams', 'Quantity': 1, 'GrossSpend': 3.0}] If the 'items' key is not present or empty, use dict.get with a default value of []. [{'Customer': d['Customer'], 'Store': d['Store'], 'Basket': d['Basket'], **d2} for d in transactions for d2 in d.get('items', [])] Or more flexibly yet, generate a dictionary containing all of the keys in both the top level dictionary and the nested dictionary, then remove the extraneous 'items' key with a dictionary comprehension. [{k: v for k, v in d3.items() if k != 'items'} for d in transactions for d2 in d.get('items', []) for d3 in ({**d, **d2},)] A: Your code looks good to me, I've modified a bit, if you think we can optimize the lines. Handled the case if items is not there in input. flattened_transactions = [] for transaction in transactions: items = transaction.pop("items", []) for item in items: for key, value in item.items(): transaction[key] = value flattened_transactions.append(transaction) else: flattened_transactions.append(transaction) pprint(flattened_transactions)
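Since the data is already headed for tabular shape, a hedged sketch of the same flattening done with pandas (assuming pandas is acceptable here): record_path points at the nested list and meta names the top-level keys to repeat on every row.

import pandas as pd

flat = pd.json_normalize(
    transactions,
    record_path="items",                    # explode the nested list of dicts
    meta=["Customer", "Store", "Basket"],   # repeat these keys for each item
)
flattened_transactions = flat.to_dict(orient="records")
print(flattened_transactions)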
Best way to flatten a list of dicts that contains a nested list of dicts?
I have a list of dicts in which one of the dict values is also a list of dicts. I want to flatten it into a list of dicts. I have some working code and would like opinions on whether ot not there is a more idiomatic way of achieving this. Here is my code: from pprint import pprint transactions = [ { "Customer": "Leia", "Store": "Hammersmith", "Basket": "basket1", "items": [ {"Product": "Cheddar", "Quantity": 2, "GrossSpend": 2.50}, {"Product": "Grapes", "Quantity": 1, "GrossSpend": 3.00}, ], }, { "Customer": "Luke", "Store": "Ealing", "Basket": "basket2", "items": [ { "Product": "Custard Creams", "Quantity": 1, "GrossSpend": 3.00, } ], }, ] flattened_transactions = [] for transaction in transactions: flattened_transactions.extend( { "Customer": transaction["Customer"], "Store": transaction["Store"], "Basket": transaction["Basket"], "Product": item["Product"], "Quantity": item["Quantity"], "GrossSpend": item["GrossSpend"], } for item in transaction["items"] ) pprint(flattened_transactions) it outputs: [{'Basket': 'basket1', 'Customer': 'Leia', 'GrossSpend': 2.5, 'Product': 'Cheddar', 'Quantity': 2, 'Store': 'Hammersmith'}, {'Basket': 'basket1', 'Customer': 'Leia', 'GrossSpend': 3.0, 'Product': 'Grapes', 'Quantity': 1, 'Store': 'Hammersmith'}, {'Basket': 'basket2', 'Customer': 'Luke', 'GrossSpend': 3.0, 'Product': 'Custard Creams', 'Quantity': 1, 'Store': 'Ealing'}] Is there a better way of achieving this?
[ "I would use a list comprehension.\n[{'Customer': d['Customer'], \n 'Store': d['Store'], \n 'Basket': d['Basket'], \n **d2} \n for d in transactions \n for d2 in d['items']]\n# [{'Customer': 'Leia', 'Store': 'Hammersmith', 'Basket': 'basket1', \n# 'Product': 'Cheddar', 'Quantity': 2, 'GrossSpend': 2.5}, \n# {'Customer': 'Leia', 'Store': 'Hammersmith', 'Basket': 'basket1', \n# 'Product': 'Grapes', 'Quantity': 1, 'GrossSpend': 3.0},\n# {'Customer': 'Luke', 'Store': 'Ealing', 'Basket': 'basket2', \n# 'Product': 'Custard Creams', 'Quantity': 1, 'GrossSpend': 3.0}]\n\nIf the 'items' key is not present or empty, use dict.get with a default value of [].\n[{'Customer': d['Customer'], \n 'Store': d['Store'], \n 'Basket': d['Basket'], \n **d2} \n for d in transactions \n for d2 in d.get('items', [])]\n\nOr more flexibly yet, generate a dictionary containing all of the keys in both the top level dictionary and the nested dictionary, then remove the extraneous 'items' key with a dictionary comprehension.\n[{k: v \n for k, v in d3.items() \n if k != 'items'} \n for d in transactions \n for d2 in d.get('items', []) \n for d3 in ({**d, **d2},)]\n\n", "Your code looks good to me, I've modified a bit, if you think we can optimize the lines. Handled the case if items is not there in input.\nflattened_transactions = []\nfor transaction in transactions:\n items = transaction.pop(\"items\", [])\n for item in items:\n for key, value in item.items():\n transaction[key] = value\n flattened_transactions.append(transaction)\n else:\n flattened_transactions.append(transaction)\npprint(flattened_transactions)\n\n" ]
[ 2, 1 ]
[ "If you just want to flatten the list it can be done with this simple code:\nouput = []\n\nfor transaction in transactions:\n output = [*output, *transaction]\n\n\nThe * operator in python returns the values of an iterator as in iterator with in braces/brackets. It is equivalent to the ...(spread) operator in javascript. More details in the documentaion.\n" ]
[ -2 ]
[ "python" ]
stackoverflow_0074610778_python.txt
Q: How to append dictionary from one column to anther column in pandas I have a dataframe like below: df = pd.DataFrame({'id' : [1,2,3], 'attributes' : [{'dd' : True, 'budget' : '35k'}, {'dd' : True, 'budget' : '25k'}, {'dd' : True, 'budget' : '40k'}], 'prod.attributes' : [{'img' : 'img1.url', 'name' : 'millennials'}, {'img' : 'img2.url', 'name' : 'single'}, {'img' : 'img3.url', 'name' : 'married'}]}) df id attributes prod.attributes 0 1 {'dd': True, 'budget': '35k'} {'img': 'img1.url', 'name': 'millennials'} 1 2 {'dd': True, 'budget': '25k'} {'img': 'img2.url', 'name': 'single'} 2 3 {'dd': True, 'budget': '40k'} {'img': 'img3.url', 'name': 'married'} I have multiple such columns wherein I need to append all columns that have attributes as suffix with the actual attributes column as below: op = pd.DataFrame({'id' : [1,2,3], 'attributes' : [{'dd' : True, 'budget' : '35k', 'prod' : {'img' : 'img1.url', 'name' : 'millennials'}}, \ {'dd' : True, 'budget' : '25k', 'prod' : {'img' : 'img2.url', 'name' : 'single'}}, {'dd' : True, 'budget' : '40', 'prod' : {'img' : 'img3.url', 'name' : 'married'}}]}) op id attributes 0 1 {'dd': True, 'budget': '35k', 'prod': {'img': 'img1.url', 'name': 'millennials'}} 1 2 {'dd': True, 'budget': '25k', 'prod': {'img': 'img2.url', 'name': 'single'}} 2 3 {'dd': True, 'budget': '40', 'prod': {'img': 'img3.url', 'name': 'married'}} I tried: df['attributes'].apply(lambda x : x.update({'audience' : df['prod.attributes']})) But I am getting all None. Could someone please help me on this. A: More efficient than apply, use a loop and update the dictionaries in place: for d1, d2 in zip(df['attributes'], df['prod.attributes']): d1['prod'] = d2 If you want to remove the original column use pop: for d1, d2 in zip(df['attributes'], df.pop('prod.attributes')): d1['prod'] = d2 Updated dataframe: id attributes 0 1 {'dd': True, 'budget': '35k', 'prod': {'img': 'img1.url', 'name': 'millennials'}} 1 2 {'dd': True, 'budget': '25k', 'prod': {'img': 'img2.url', 'name': 'single'}} 2 3 {'dd': True, 'budget': '40k', 'prod': {'img': 'img3.url', 'name': 'married'}} timings df = pd.concat([df]*10000, ignore_index=True) %%timeit for d1, d2 in zip(df['attributes'], df['prod.attributes']): d1['prod'] = d2 3.49 ms ± 137 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) %%timeit df['attributes'] = [{**a, **{'prod' : b}} for a, b in zip(df['attributes'], df['prod.attributes'])] 11.3 ms ± 384 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) %%timeit df.apply(lambda r: {**r['attributes'], **{'prod': r['prod.attributes']}}, axis=1) 173 ms ± 7.03 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) A: Use ** for merge both dictionaries in list comprehension, DataFrame.pop is used for remove column after using: df['attributes'] = [{**a, **{'prod' : b}} for a, b in zip(df['attributes'], df.pop('prod.attributes'))] print (df) id attributes 0 1 {'dd': True, 'budget': '35k', 'prod': {'img': ... 1 2 {'dd': True, 'budget': '25k', 'prod': {'img': ... 2 3 {'dd': True, 'budget': '40k', 'prod': {'img': ...
How to append dictionary from one column to anther column in pandas
I have a dataframe like below: df = pd.DataFrame({'id' : [1,2,3], 'attributes' : [{'dd' : True, 'budget' : '35k'}, {'dd' : True, 'budget' : '25k'}, {'dd' : True, 'budget' : '40k'}], 'prod.attributes' : [{'img' : 'img1.url', 'name' : 'millennials'}, {'img' : 'img2.url', 'name' : 'single'}, {'img' : 'img3.url', 'name' : 'married'}]}) df id attributes prod.attributes 0 1 {'dd': True, 'budget': '35k'} {'img': 'img1.url', 'name': 'millennials'} 1 2 {'dd': True, 'budget': '25k'} {'img': 'img2.url', 'name': 'single'} 2 3 {'dd': True, 'budget': '40k'} {'img': 'img3.url', 'name': 'married'} I have multiple such columns wherein I need to append all columns that have attributes as suffix with the actual attributes column as below: op = pd.DataFrame({'id' : [1,2,3], 'attributes' : [{'dd' : True, 'budget' : '35k', 'prod' : {'img' : 'img1.url', 'name' : 'millennials'}}, \ {'dd' : True, 'budget' : '25k', 'prod' : {'img' : 'img2.url', 'name' : 'single'}}, {'dd' : True, 'budget' : '40', 'prod' : {'img' : 'img3.url', 'name' : 'married'}}]}) op id attributes 0 1 {'dd': True, 'budget': '35k', 'prod': {'img': 'img1.url', 'name': 'millennials'}} 1 2 {'dd': True, 'budget': '25k', 'prod': {'img': 'img2.url', 'name': 'single'}} 2 3 {'dd': True, 'budget': '40', 'prod': {'img': 'img3.url', 'name': 'married'}} I tried: df['attributes'].apply(lambda x : x.update({'audience' : df['prod.attributes']})) But I am getting all None. Could someone please help me on this.
[ "More efficient than apply, use a loop and update the dictionaries in place:\nfor d1, d2 in zip(df['attributes'], df['prod.attributes']):\n d1['prod'] = d2\n\nIf you want to remove the original column use pop:\nfor d1, d2 in zip(df['attributes'], df.pop('prod.attributes')):\n d1['prod'] = d2\n\nUpdated dataframe:\n id attributes\n0 1 {'dd': True, 'budget': '35k', 'prod': {'img': 'img1.url', 'name': 'millennials'}}\n1 2 {'dd': True, 'budget': '25k', 'prod': {'img': 'img2.url', 'name': 'single'}}\n2 3 {'dd': True, 'budget': '40k', 'prod': {'img': 'img3.url', 'name': 'married'}}\n\ntimings\ndf = pd.concat([df]*10000, ignore_index=True)\n\n%%timeit\nfor d1, d2 in zip(df['attributes'], df['prod.attributes']):\n d1['prod'] = d2\n3.49 ms ± 137 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n\n%%timeit\ndf['attributes'] = [{**a, **{'prod' : b}} \n for a, b in zip(df['attributes'], df['prod.attributes'])]\n11.3 ms ± 384 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n\n%%timeit\ndf.apply(lambda r: {**r['attributes'], **{'prod': r['prod.attributes']}}, axis=1)\n173 ms ± 7.03 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)\n\n", "Use ** for merge both dictionaries in list comprehension, DataFrame.pop is used for remove column after using:\ndf['attributes'] = [{**a, **{'prod' : b}} \n for a, b in zip(df['attributes'], df.pop('prod.attributes'))]\nprint (df)\n id attributes\n0 1 {'dd': True, 'budget': '35k', 'prod': {'img': ...\n1 2 {'dd': True, 'budget': '25k', 'prod': {'img': ...\n2 3 {'dd': True, 'budget': '40k', 'prod': {'img': ...\n\n" ]
[ 2, 1 ]
[]
[]
[ "dictionary", "pandas", "python" ]
stackoverflow_0074611165_dictionary_pandas_python.txt
Q: How to connect to SFTP through Paramiko with SSH key - Pageant I am trying to connect to an SFTP through Paramiko with a passphrase protected SSH key. I have loaded the key into Pageant (which I understand is supported by Paramiko) but I can't get it to decrypt my private key. I have found this example here that references allow_agent=True but this does not appear to be a parameter that can be used with the SFTPClient. Can anyone advise if it is possible to work with Paramiko and Pageant in this way? This is my code at the moment - which raises PasswordRequiredException privatekeyfile = 'path to key' mykey = paramiko.RSAKey.from_private_key_file(privatekeyfile) transport = paramiko.Transport(('host', 'port')) transport.connect('username',pkey = mykey) sftp = paramiko.SFTPClient.from_transport(transport) A: You have to provide a passphrase, when loading an encrypted key using the RSAKey.from_private_key_file. Though note that you do not have to load the key at all, when using the Pageant. That's the point of using an authentication agent. But only the SSHClient class supports the Pageant. The Transport class does not, on its own. You can follow the code in How to use Pageant with Paramiko on Windows? Though as the allow_agent is True by default, there is actually nothing special about the code. Once connected and authenticated, use the SSHClient.open_sftp method to get your instance of the SFTPClient. ssh = paramiko.SSHClient() ssh.connect(host, username='user', allow_agent=True) sftp = ssh.open_sftp() You will also need to verify the host key: Paramiko "Unknown Server" A: This worked for me privatekeyfile = 'path to key' mykey = paramiko.RSAKey.from_private_key_file(privatekeyfile) ssh_client = paramiko.SSHClient() ssh_client.load_system_host_keys() ssh_client.connect(hostname='host', username='user', allow_agent=True, pkey=mykey) ftp_client = ssh_client.open_sftp() print(ftp_client.listdir('/'))
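If the Transport class really is preferred over SSHClient, a hedged sketch of pulling the keys out of the running agent (Pageant on Windows, ssh-agent elsewhere) and handing one to Transport; host name, port and user are placeholders, and unlike SSHClient this does not verify the server's host key, so that check would still need to be added.

import paramiko

agent_keys = paramiko.Agent().get_keys()        # keys currently held by Pageant/ssh-agent
if not agent_keys:
    raise RuntimeError("no keys available from the SSH agent")

transport = paramiko.Transport(("sftp.example.com", 22))   # placeholder host/port
transport.connect(username="user", pkey=agent_keys[0])     # placeholder username
sftp = paramiko.SFTPClient.from_transport(transport)
print(sftp.listdir("."))
sftp.close()
transport.close()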
How to connect to SFTP through Paramiko with SSH key - Pageant
I am trying to connect to an SFTP through Paramiko with a passphrase protected SSH key. I have loaded the key into Pageant (which I understand is supported by Paramiko) but I can't get it to decrypt my private key. I have found this example here that references allow_agent=True but this does not appear to be a parameter that can be used with the SFTPClient. Can anyone advise if it is possible to work with Paramiko and Pageant in this way? This is my code at the moment - which raises PasswordRequiredException privatekeyfile = 'path to key' mykey = paramiko.RSAKey.from_private_key_file(privatekeyfile) transport = paramiko.Transport(('host', 'port')) transport.connect('username',pkey = mykey) sftp = paramiko.SFTPClient.from_transport(transport)
[ "You have to provide a passphrase, when loading an encrypted key using the RSAKey.from_private_key_file.\nThough note that you do not have to load the key at all, when using the Pageant. That's the point of using an authentication agent. But only the SSHClient class supports the Pageant. The Transport class does not, on its own.\nYou can follow the code in How to use Pageant with Paramiko on Windows?\nThough as the allow_agent is True by default, there is actually nothing special about the code.\nOnce connected and authenticated, use the SSHClient.open_sftp method to get your instance of the SFTPClient. \nssh = paramiko.SSHClient()\nssh.connect(host, username='user', allow_agent=True)\nsftp = ssh.open_sftp()\n\nYou will also need to verify the host key:\nParamiko \"Unknown Server\"\n", "This worked for me\n\nprivatekeyfile = 'path to key'\nmykey = paramiko.RSAKey.from_private_key_file(privatekeyfile)\nssh_client = paramiko.SSHClient()\nssh_client.load_system_host_keys()\nssh_client.connect(hostname='host', username='user', allow_agent=True, pkey=mykey)\n\nftp_client = ssh_client.open_sftp()\n\nprint(ftp_client.listdir('/'))\n\n" ]
[ 9, 0 ]
[]
[]
[ "pageant", "paramiko", "private_key", "python", "ssh" ]
stackoverflow_0025399635_pageant_paramiko_private_key_python_ssh.txt
Q: How to connect broken lines in binary image using OpenCV/Python I have images like the following one and the lines are broken. I have tried to connect them using morphological operations but it's not effective. I've also thought of calculating orientation but since lines are parallel I can not do this. Is there a way that I can dilate in certain orientation in Python? Or any other method that can connect these lines? Here is the code I've written so far: import cv2 img = cv2.imread('mask.jpg') gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # convert img into binary _, bw = cv2.threshold(gray, 50, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU) # calculating Contours contours, _ = cv2.findContours(bw, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE) def get_orientation(pts, img): sz = len(pts) data_pts = np.empty((sz, 2), dtype=np.float64) for i in range(data_pts.shape[0]): data_pts[i,0] = pts[i,0,0] data_pts[i,1] = pts[i,0,1] # Perform PCA analysis mean = np.empty((0)) mean, eigenvectors, eigenvalues = cv2.PCACompute2(data_pts, mean) # Store the center of the object cntr = (int(mean[0,0]), int(mean[0,1])) cv2.circle(img, cntr, 3, (255, 0, 255), 2) p1 = (cntr[0] + 0.02 * eigenvectors[0,0] * eigenvalues[0,0], cntr[1] + 0.02 * eigenvectors[0,1] * eigenvalues[0,0]) p2 = (cntr[0] - 0.02 * eigenvectors[1,0] * eigenvalues[1,0], cntr[1] - 0.02 * eigenvectors[1,1] * eigenvalues[1,0]) draw_axis(img, cntr, p1, (0, 150, 0), 1) draw_axis(img, cntr, p2, (200, 150, 0), 5) angle = atan2(eigenvectors[0,1], eigenvectors[0,0]) # orientation in radians return angle def draw_axis(img, p_, q_, colour, scale): p = list(p_) q = list(q_) angle = atan2(p[1] - q[1], p[0] - q[0]) # angle in radians hypotenuse = sqrt((p[1] - q[1]) * (p[1] - q[1]) + (p[0] - q[0]) * (p[0] - q[0])) # Here we lengthen the arrow by a factor of scale q[0] = p[0] - scale * hypotenuse * cos(angle) q[1] = p[1] - scale * hypotenuse * sin(angle) cv2.line(img, (int(p[0]), int(p[1])), (int(q[0]), int(q[1])), colour, 1, cv2.LINE_AA) # create the arrow hooks p[0] = q[0] + 9 * cos(angle + pi / 4) p[1] = q[1] + 9 * sin(angle + pi / 4) cv2.line(img, (int(p[0]), int(p[1])), (int(q[0]), int(q[1])), colour, 1, cv2.LINE_AA) p[0] = q[0] + 9 * cos(angle - pi / 4) p[1] = q[1] + 9 * sin(angle - pi / 4) cv2.line(img, (int(p[0]), int(p[1])), (int(q[0]), int(q[1])), colour, 1, cv2.LINE_AA) for i,c in enumerate(contours): # area of each contour area = cv2.contourArea(c) # find orientation of each shape orrr = get_orientation(c,img) print(orrr) A: You did not make it entirely clear what result exactly you are after (or what your problem with the morpholocial op's was, exactly), but i had a shot at it. Connecting all the "lines" into a single object with a morpholical operation. I used a circular kernel here, which i think gives decent results. No rotation necessary at this point. Interpolating a line for each object, using coutours. This selects only the largest contours, apply further tresholding as you see fit. 
Gives this: import cv2 # get image img = cv2.imread("<YourPathHere>", cv2.IMREAD_GRAYSCALE) # threshold to binary ret, imgbin = cv2.threshold(img,5,255,cv2.THRESH_BINARY) # morph dilateKernelSize = 80; erodeKernelSize = 65; imgbin = cv2.dilate(imgbin, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, [dilateKernelSize,dilateKernelSize])) imgbin = cv2.erode(imgbin, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, [erodeKernelSize,erodeKernelSize])) # extract contours contours, _ = cv2.findContours(imgbin,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE) print("Found ",len(contours),"contours") # fit lines for large contours lines = []; threshArea = 11000; for cnt in contours: if(cv2.contourArea(cnt)>threshArea): lines += [cv2.fitLine(cnt, cv2.DIST_L2,0,0.01,0.01)] # [vx,vy,x,y] # show results imgresult = cv2.cvtColor(imgbin,cv2.COLOR_GRAY2RGB) cv2.drawContours(imgresult, contours, -1, (255,125,0), 3) VX_ = 0; VY_ = 1; X_ = 2; Y_ = 3; rows,cols = imgbin.shape[:2] p1 = [0,0]; p2 = [cols-1,0]; for l in lines: p1[1] = int((( 0-l[X_])*l[VY_]/l[VX_]) + l[Y_]) p2[1] = int(((cols-l[X_])*l[VY_]/l[VX_]) + l[Y_]) cv2.line(imgresult,p1,p2,(0,255,255),2) # save image print(cv2.imwrite("<YourPathHere>", imgresult)) # HighGUI cv2.namedWindow("img", cv2.WINDOW_NORMAL) cv2.imshow("img", img) cv2.namedWindow("imgresult", cv2.WINDOW_NORMAL) cv2.imshow("imgresult", imgresult) cv2.waitKey(0) cv2.destroyAllWindows()
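On the original question of dilating in a specific orientation (which neither answer shows directly), a hedged sketch that builds a line-shaped structuring element at a chosen angle and closes along it, so gaps are bridged only along the direction of the stripes. The kernel length, thickness and the 30-degree angle are placeholders to tune; the per-contour angle from cv2.fitLine or the PCA code in the question could be fed in instead of a fixed value.

import cv2
import numpy as np

def oriented_line_kernel(length, angle_deg, thickness=1):
    # draw a short line segment of the requested orientation into a square kernel
    k = np.zeros((length, length), dtype=np.uint8)
    c = (length - 1) / 2
    dx = np.cos(np.deg2rad(angle_deg)) * c
    dy = np.sin(np.deg2rad(angle_deg)) * c
    p1 = (int(round(c - dx)), int(round(c - dy)))
    p2 = (int(round(c + dx)), int(round(c + dy)))
    cv2.line(k, p1, p2, 1, thickness)
    return k

bw = cv2.imread("mask.jpg", cv2.IMREAD_GRAYSCALE)            # mask from the question
_, bw = cv2.threshold(bw, 50, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

kernel = oriented_line_kernel(length=51, angle_deg=30)
# closing = dilation followed by erosion, so the stripes keep their original width
connected = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, kernel)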
How to connect broken lines in binary image using OpenCV/Python
I have images like the following one and the lines are broken. I have tried to connect them using morphological operations but it's not effective. I've also thought of calculating orientation but since lines are parallel I can not do this. Is there a way that I can dilate in certain orientation in Python? Or any other method that can connect these lines? Here is the code I've written so far: import cv2 img = cv2.imread('mask.jpg') gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # convert img into binary _, bw = cv2.threshold(gray, 50, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU) # calculating Contours contours, _ = cv2.findContours(bw, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE) def get_orientation(pts, img): sz = len(pts) data_pts = np.empty((sz, 2), dtype=np.float64) for i in range(data_pts.shape[0]): data_pts[i,0] = pts[i,0,0] data_pts[i,1] = pts[i,0,1] # Perform PCA analysis mean = np.empty((0)) mean, eigenvectors, eigenvalues = cv2.PCACompute2(data_pts, mean) # Store the center of the object cntr = (int(mean[0,0]), int(mean[0,1])) cv2.circle(img, cntr, 3, (255, 0, 255), 2) p1 = (cntr[0] + 0.02 * eigenvectors[0,0] * eigenvalues[0,0], cntr[1] + 0.02 * eigenvectors[0,1] * eigenvalues[0,0]) p2 = (cntr[0] - 0.02 * eigenvectors[1,0] * eigenvalues[1,0], cntr[1] - 0.02 * eigenvectors[1,1] * eigenvalues[1,0]) draw_axis(img, cntr, p1, (0, 150, 0), 1) draw_axis(img, cntr, p2, (200, 150, 0), 5) angle = atan2(eigenvectors[0,1], eigenvectors[0,0]) # orientation in radians return angle def draw_axis(img, p_, q_, colour, scale): p = list(p_) q = list(q_) angle = atan2(p[1] - q[1], p[0] - q[0]) # angle in radians hypotenuse = sqrt((p[1] - q[1]) * (p[1] - q[1]) + (p[0] - q[0]) * (p[0] - q[0])) # Here we lengthen the arrow by a factor of scale q[0] = p[0] - scale * hypotenuse * cos(angle) q[1] = p[1] - scale * hypotenuse * sin(angle) cv2.line(img, (int(p[0]), int(p[1])), (int(q[0]), int(q[1])), colour, 1, cv2.LINE_AA) # create the arrow hooks p[0] = q[0] + 9 * cos(angle + pi / 4) p[1] = q[1] + 9 * sin(angle + pi / 4) cv2.line(img, (int(p[0]), int(p[1])), (int(q[0]), int(q[1])), colour, 1, cv2.LINE_AA) p[0] = q[0] + 9 * cos(angle - pi / 4) p[1] = q[1] + 9 * sin(angle - pi / 4) cv2.line(img, (int(p[0]), int(p[1])), (int(q[0]), int(q[1])), colour, 1, cv2.LINE_AA) for i,c in enumerate(contours): # area of each contour area = cv2.contourArea(c) # find orientation of each shape orrr = get_orientation(c,img) print(orrr)
[ "You did not make it entirely clear what result exactly you are after (or what your problem with the morpholocial op's was, exactly), but i had a shot at it.\n\nConnecting all the \"lines\" into a single object with a morpholical operation. I used a circular kernel here, which i think gives decent results. No rotation necessary at this point.\n\nInterpolating a line for each object, using coutours. This selects only the largest contours, apply further tresholding as you see fit.\n\n\nGives this:\n\nimport cv2\n\n# get image\nimg = cv2.imread(\"<YourPathHere>\", cv2.IMREAD_GRAYSCALE)\n\n# threshold to binary\nret, imgbin = cv2.threshold(img,5,255,cv2.THRESH_BINARY)\n\n# morph \ndilateKernelSize = 80; erodeKernelSize = 65;\nimgbin = cv2.dilate(imgbin, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, [dilateKernelSize,dilateKernelSize]))\nimgbin = cv2.erode(imgbin, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, [erodeKernelSize,erodeKernelSize]))\n\n# extract contours\ncontours, _ = cv2.findContours(imgbin,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)\nprint(\"Found \",len(contours),\"contours\")\n\n# fit lines for large contours\nlines = []; threshArea = 11000;\nfor cnt in contours:\n if(cv2.contourArea(cnt)>threshArea):\n lines += [cv2.fitLine(cnt, cv2.DIST_L2,0,0.01,0.01)] # [vx,vy,x,y]\n\n\n# show results\nimgresult = cv2.cvtColor(imgbin,cv2.COLOR_GRAY2RGB)\ncv2.drawContours(imgresult, contours, -1, (255,125,0), 3)\n\nVX_ = 0; VY_ = 1; X_ = 2; Y_ = 3;\nrows,cols = imgbin.shape[:2]\np1 = [0,0]; p2 = [cols-1,0];\nfor l in lines:\n p1[1] = int((( 0-l[X_])*l[VY_]/l[VX_]) + l[Y_])\n p2[1] = int(((cols-l[X_])*l[VY_]/l[VX_]) + l[Y_])\n cv2.line(imgresult,p1,p2,(0,255,255),2)\n\n# save image \nprint(cv2.imwrite(\"<YourPathHere>\", imgresult))\n\n# HighGUI\ncv2.namedWindow(\"img\", cv2.WINDOW_NORMAL)\ncv2.imshow(\"img\", img)\ncv2.namedWindow(\"imgresult\", cv2.WINDOW_NORMAL)\ncv2.imshow(\"imgresult\", imgresult)\ncv2.waitKey(0)\ncv2.destroyAllWindows()\n\n\n" ]
[ 0 ]
[]
[]
[ "image_processing", "opencv", "python" ]
stackoverflow_0074606038_image_processing_opencv_python.txt
Q: why I am getting this error:"Received bad response from Model Management Service:\nResponse Code: 403\" while trying to de This is my code which I am trying to deploy my model on Azure AML: aciconfig = AciWebservice.deploy_configuration( cpu_cores=1, memory_gb=1, tags={"data":"nlp classifier"}, description='nlp cLASSIFICATION MODEL' ) inference_config = InferenceConfig(entry_script="scoringscript.py", environment=myenv) service = Model.deploy(workspace=ws, name='nlpse', models=[model], inference_config=inference_config, deployment_config=aciconfig, overwrite = True) service.wait_for_deployment(show_output=True) url = service.scoring_uri print(url) However, I am getting this error: WebserviceException: WebserviceException: Message: Received bad response from Model Management Service: Response Code: 403 Headers: {'Server': 'nginx/1.22.1', 'Date': 'Fri, 25 Nov 2022 19:33:11 GMT', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Connection': 'keep-alive', 'Vary': 'Accept-Encoding', 'x-ms-client-request-id': 'dff1b808-aa1c-4b6d-9bf4-0b43325131b2', 'x-ms-client-session-id': '07e975c4-02fa-47ce-8ee6-cd2f808d53c1', 'api-supported-versions': '1.0, 2018-03-01-preview, 2018-11-19', 'Strict-Transport-Security': 'max-age=15724800; includeSubDomains; preload', 'X-Content-Type-Options': 'nosniff', 'x-aml-cluster': 'vienna-centralus-01', 'x-request-time': '0.630', 'Content-Encoding': 'gzip'} Content: b'{"code":"Forbidden","statusCode":403,"message":"Forbidden","details":[{"code":"UserError","message":"KeyVaultErrorException encountered. Operation returned an invalid status code \'Forbidden\'"}],"correlation":{"RequestId":"dff1b808-aa1c-4b6d-9bf4-0b43325131b2"}}' InnerException None ErrorResponse { "error": { "message": "Received bad response from Model Management Service:\nResponse Code: 403\nHeaders: {'Server': 'nginx/1.22.1', 'Date': 'Fri, 25 Nov 2022 19:33:11 GMT', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Connection': 'keep-alive', 'Vary': 'Accept-Encoding', 'x-ms-client-request-id': 'dff1b808-aa1c-4b6d-9bf4-0b43325131b2', 'x-ms-client-session-id': '07e975c4-02fa-47ce-8ee6-cd2f808d53c1', 'api-supported-versions': '1.0, 2018-03-01-preview, 2018-11-19', 'Strict-Transport-Security': 'max-age=15724800; includeSubDomains; preload', 'X-Content-Type-Options': 'nosniff', 'x-aml-cluster': 'vienna-centralus-01', 'x-request-time': '0.630', 'Content-Encoding': 'gzip'}\nContent: b'{\"code\":\"Forbidden\",\"statusCode\":403,\"message\":\"Forbidden\",\"details\":[{\"code\":\"UserError\",\"message\":\"KeyVaultErrorException encountered. Operation returned an invalid status code \\'Forbidden\\'\"}],\"correlation\":{\"RequestId\":\"dff1b808-aa1c-4b6d-9bf4-0b43325131b2\"}}'" } } How can i resolve it? A: I tried to reproduce the issue and it worked for me. aciconfig = AciWebservice.deploy_configuration( cpu_cores=1, memory_gb=1, tags={"data":"nlp classifier"}, description='nlp cLASSIFICATION MODEL' ) inference_config = InferenceConfig(entry_script="scoringscript.py", environment=myenv) service = Model.deploy(workspace=ws, name='nlpse', models=[model], inference_config=inference_config, deployment_config=aciconfig, overwrite = True) service.wait_for_deployment(show_output=True) url = service.scoring_uri print(url) From the above code block,service.wait_for_deployment(show_output=True) is being called initially when the code block starts executing. It is taking the pipeline to wait for the long duration and hence it is detaching the connection after the TTL. 
service.wait_for_deployment(show_output=True) -> once the deployment config and the service object have been obtained, run the wait method; doing so resolves the connection error.
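The resolution above amounts to re-running the wait step once the deployment config and the service object are in place. A minimal sketch of that re-attach step, assuming the azureml-core SDK from the question; ws and the service name 'nlpse' come from the question, and inspecting the logs is only a suggested way to surface the underlying Key Vault/403 detail, not part of the original answer.
from azureml.core.webservice import Webservice

# re-attach to the (partially) deployed service by name and wait again
service = Webservice(workspace=ws, name='nlpse')
service.wait_for_deployment(show_output=True)

print(service.state)        # e.g. 'Healthy' once provisioning finishes
print(service.scoring_uri)  # endpoint URL
print(service.get_logs())   # container logs often show the underlying Key Vault error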
why I am getting this error:"Received bad response from Model Management Service:\nResponse Code: 403\" while trying to de
This is my code which I am trying to deploy my model on Azure AML: aciconfig = AciWebservice.deploy_configuration( cpu_cores=1, memory_gb=1, tags={"data":"nlp classifier"}, description='nlp cLASSIFICATION MODEL' ) inference_config = InferenceConfig(entry_script="scoringscript.py", environment=myenv) service = Model.deploy(workspace=ws, name='nlpse', models=[model], inference_config=inference_config, deployment_config=aciconfig, overwrite = True) service.wait_for_deployment(show_output=True) url = service.scoring_uri print(url) However, I am getting this error: WebserviceException: WebserviceException: Message: Received bad response from Model Management Service: Response Code: 403 Headers: {'Server': 'nginx/1.22.1', 'Date': 'Fri, 25 Nov 2022 19:33:11 GMT', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Connection': 'keep-alive', 'Vary': 'Accept-Encoding', 'x-ms-client-request-id': 'dff1b808-aa1c-4b6d-9bf4-0b43325131b2', 'x-ms-client-session-id': '07e975c4-02fa-47ce-8ee6-cd2f808d53c1', 'api-supported-versions': '1.0, 2018-03-01-preview, 2018-11-19', 'Strict-Transport-Security': 'max-age=15724800; includeSubDomains; preload', 'X-Content-Type-Options': 'nosniff', 'x-aml-cluster': 'vienna-centralus-01', 'x-request-time': '0.630', 'Content-Encoding': 'gzip'} Content: b'{"code":"Forbidden","statusCode":403,"message":"Forbidden","details":[{"code":"UserError","message":"KeyVaultErrorException encountered. Operation returned an invalid status code \'Forbidden\'"}],"correlation":{"RequestId":"dff1b808-aa1c-4b6d-9bf4-0b43325131b2"}}' InnerException None ErrorResponse { "error": { "message": "Received bad response from Model Management Service:\nResponse Code: 403\nHeaders: {'Server': 'nginx/1.22.1', 'Date': 'Fri, 25 Nov 2022 19:33:11 GMT', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Connection': 'keep-alive', 'Vary': 'Accept-Encoding', 'x-ms-client-request-id': 'dff1b808-aa1c-4b6d-9bf4-0b43325131b2', 'x-ms-client-session-id': '07e975c4-02fa-47ce-8ee6-cd2f808d53c1', 'api-supported-versions': '1.0, 2018-03-01-preview, 2018-11-19', 'Strict-Transport-Security': 'max-age=15724800; includeSubDomains; preload', 'X-Content-Type-Options': 'nosniff', 'x-aml-cluster': 'vienna-centralus-01', 'x-request-time': '0.630', 'Content-Encoding': 'gzip'}\nContent: b'{\"code\":\"Forbidden\",\"statusCode\":403,\"message\":\"Forbidden\",\"details\":[{\"code\":\"UserError\",\"message\":\"KeyVaultErrorException encountered. Operation returned an invalid status code \\'Forbidden\\'\"}],\"correlation\":{\"RequestId\":\"dff1b808-aa1c-4b6d-9bf4-0b43325131b2\"}}'" } } How can i resolve it?
[ "\nI tried to reproduce the issue and it worked for me.\n\naciconfig = AciWebservice.deploy_configuration(\ncpu_cores=1,\nmemory_gb=1,\ntags={\"data\":\"nlp classifier\"},\ndescription='nlp cLASSIFICATION MODEL'\n)\ninference_config = InferenceConfig(entry_script=\"scoringscript.py\", environment=myenv)\nservice = Model.deploy(workspace=ws,\nname='nlpse',\nmodels=[model],\ninference_config=inference_config,\ndeployment_config=aciconfig,\noverwrite = True)\nservice.wait_for_deployment(show_output=True)\nurl = service.scoring_uri\nprint(url)\n\n\nFrom the above code block,service.wait_for_deployment(show_output=True) is being called initially when the code block starts executing. It is taking the pipeline to wait for the long duration and hence it is detaching the connection after the TTL.\n\nservice.wait_for_deployment(show_output=True) -> After getting the config and service details, then we need to execute the service wait method. It will solve the connection error.\n\n\n" ]
[ 0 ]
[]
[]
[ "azure_machine_learning_studio", "azure_machine_learning_workbench", "python" ]
stackoverflow_0074577329_azure_machine_learning_studio_azure_machine_learning_workbench_python.txt
Q: move files to subdirectories that are named on part of the filenames I have a few data files in a directory, and I want to move them to the subdirectories based on their filenames. Let's say we created the first directory named "20220322_170444," and it should contain the first four files only because in the next file the "el" is less than the previous one, so the second folder, let's say is "20220322_170533", then it should contain next eight files until the el becomes less again than the previous name. example data files =[ 'cfrad.20220322_170444.122_COW1_v2_s02_el3.40_SUR.nc', 'cfrad.20220322_170456.550_COW1_v2_s03_el4.22_SUR.nc', 'cfrad.20220322_170508.975_COW1_v2_s04_el5.09_SUR.nc', 'cfrad.20220322_170521.397_COW1_v2_s05_el5.99_SUR.nc', 'cfrad.20220322_170533.811_COW1_v2_s06_el0.45_SUR.nc', 'cfrad.20220322_170546.228_COW1_v2_s07_el1.20_SUR.nc', 'cfrad.20220322_170558.648_COW1_v2_s08_el1.90_SUR.nc', 'cfrad.20220322_170611.072_COW1_v2_s09_el2.61_SUR.nc', 'cfrad.20220322_170623.503_COW1_v2_s10_el3.40_SUR.nc', 'cfrad.20220322_170635.923_COW1_v2_s11_el4.21_SUR.nc', 'cfrad.20220322_170648.341_COW1_v2_s12_el5.09_SUR.nc', 'cfrad.20220322_170700.765_COW1_v2_s13_el5.99_SUR.nc', 'cfrad.20220322_170713.179_COW1_v2_s14_el0.45_SUR.nc', 'cfrad.20220322_170725.604_COW1_v2_s15_el1.20_SUR.nc', 'cfrad.20220322_170738.030_COW1_v2_s16_el1.90_SUR.nc', 'cfrad.20220322_170750.461_COW1_v2_s17_el2.61_SUR.nc', 'cfrad.20220322_170802.877_COW1_v2_s18_el3.40_SUR.nc', 'cfrad.20220322_170815.301_COW1_v2_s19_el4.22_SUR.nc', 'cfrad.20220322_170827.715_COW1_v2_s20_el8.01_SUR.nc', 'cfrad.20220322_170840.144_COW1_v2_s21_el11.02_SUR.nc'] for file in files: np.savetxt(fname=file, X=np.array([1,1])) What I tried is import numpy as np from datetime import datetime import glob, os, re import shutil sweeps = [] temp = [] for i, file in enumerate(files[:19]): match_str = re.search(r'\d{4}\d{2}\d{2}_\d{2}\d{2}\d{2}', file) res = datetime.strptime(match_str.group(), '%Y%m%d_%H%M%S') print(res.strftime("%Y%m%d_%H%M%S")) el_pos = int(files[i].find('el')) st_pos = files[i][el_pos+1:el_pos+3] el_pos1 = int(files[i+1].find('el')) end_pos = files[i+1][el_pos1+1:el_pos1+3] # print(files[i][s_pos+1:s_pos+3],files[i+1][s_pos1+1:s_pos1+3]) temp.append(files[i]) print("len(files):",len(files),i) print(st_pos,end_pos) # print() if st_pos>end_pos: print("temp len: ", len(temp)) sweeps.append(temp) temp = [] elif len(files)-i==2: print('entered') sweeps.append(temp) I now have a list named sweeps, and it contains the desired files; how can I now move these files to the directories,m but the directories should be named as I stated above based on the date. I have also the date string in variable res.strftime("%Y%m%d_%H%M%S") can be used to create directories. A: Some string splitting can do this for you. 
import shutil import os files = [ "cfrad.20220322_170444.122_COW1_v2_s02_el3.40_SUR.nc", "cfrad.20220322_170456.550_COW1_v2_s03_el4.22_SUR.nc", "cfrad.20220322_170508.975_COW1_v2_s04_el5.09_SUR.nc", "cfrad.20220322_170521.397_COW1_v2_s05_el5.99_SUR.nc", "cfrad.20220322_170533.811_COW1_v2_s06_el0.45_SUR.nc", "cfrad.20220322_170546.228_COW1_v2_s07_el1.20_SUR.nc", "cfrad.20220322_170558.648_COW1_v2_s08_el1.90_SUR.nc", "cfrad.20220322_170611.072_COW1_v2_s09_el2.61_SUR.nc", "cfrad.20220322_170623.503_COW1_v2_s10_el3.40_SUR.nc", "cfrad.20220322_170635.923_COW1_v2_s11_el4.21_SUR.nc", "cfrad.20220322_170648.341_COW1_v2_s12_el5.09_SUR.nc", "cfrad.20220322_170700.765_COW1_v2_s13_el5.99_SUR.nc", "cfrad.20220322_170713.179_COW1_v2_s14_el0.45_SUR.nc", "cfrad.20220322_170725.604_COW1_v2_s15_el1.20_SUR.nc", "cfrad.20220322_170738.030_COW1_v2_s16_el1.90_SUR.nc", "cfrad.20220322_170750.461_COW1_v2_s17_el2.61_SUR.nc", "cfrad.20220322_170802.877_COW1_v2_s18_el3.40_SUR.nc", "cfrad.20220322_170815.301_COW1_v2_s19_el4.22_SUR.nc", "cfrad.20220322_170827.715_COW1_v2_s20_el8.01_SUR.nc", "cfrad.20220322_170840.144_COW1_v2_s21_el11.02_SUR.nc", ] for f in files: with open(f, "w") as of: of.write("\n") # force the if statement below to be True on first run el = 99999999 basepath = "." for f in files: new_el = int(f.split(".")[2].split("_")[-1].replace("el", "")) if new_el < el: # store new dir name curr_dir = f.split(".")[1] print(curr_dir) # create directory os.makedirs(curr_dir, exist_ok=True) # store new el el = new_el # move file shutil.move(f"{basepath}{os.sep}{f}", f"{basepath}{os.sep}{curr_dir}{os.sep}{f}")
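As a follow-up to the question's own grouping approach: once the sweeps list holds one list of filenames per group, each directory can be named after the timestamp of the group's first file and the files moved in one pass. This is a hedged sketch; move_sweeps and base_dir are names introduced here, not from the question.
import os
import re
import shutil

def move_sweeps(sweeps, base_dir="."):
    for group in sweeps:
        # 'cfrad.20220322_170444.122_...' -> '20220322_170444'
        stamp = re.search(r"\d{8}_\d{6}", group[0]).group()
        target = os.path.join(base_dir, stamp)
        os.makedirs(target, exist_ok=True)
        for name in group:
            shutil.move(os.path.join(base_dir, name), os.path.join(target, name))

# usage, after the grouping loop from the question:
# move_sweeps(sweeps)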
move files to subdirectories that are named on part of the filenames
I have a few data files in a directory, and I want to move them to the subdirectories based on their filenames. Let's say we created the first directory named "20220322_170444," and it should contain the first four files only because in the next file the "el" is less than the previous one, so the second folder, let's say is "20220322_170533", then it should contain next eight files until the el becomes less again than the previous name. example data files =[ 'cfrad.20220322_170444.122_COW1_v2_s02_el3.40_SUR.nc', 'cfrad.20220322_170456.550_COW1_v2_s03_el4.22_SUR.nc', 'cfrad.20220322_170508.975_COW1_v2_s04_el5.09_SUR.nc', 'cfrad.20220322_170521.397_COW1_v2_s05_el5.99_SUR.nc', 'cfrad.20220322_170533.811_COW1_v2_s06_el0.45_SUR.nc', 'cfrad.20220322_170546.228_COW1_v2_s07_el1.20_SUR.nc', 'cfrad.20220322_170558.648_COW1_v2_s08_el1.90_SUR.nc', 'cfrad.20220322_170611.072_COW1_v2_s09_el2.61_SUR.nc', 'cfrad.20220322_170623.503_COW1_v2_s10_el3.40_SUR.nc', 'cfrad.20220322_170635.923_COW1_v2_s11_el4.21_SUR.nc', 'cfrad.20220322_170648.341_COW1_v2_s12_el5.09_SUR.nc', 'cfrad.20220322_170700.765_COW1_v2_s13_el5.99_SUR.nc', 'cfrad.20220322_170713.179_COW1_v2_s14_el0.45_SUR.nc', 'cfrad.20220322_170725.604_COW1_v2_s15_el1.20_SUR.nc', 'cfrad.20220322_170738.030_COW1_v2_s16_el1.90_SUR.nc', 'cfrad.20220322_170750.461_COW1_v2_s17_el2.61_SUR.nc', 'cfrad.20220322_170802.877_COW1_v2_s18_el3.40_SUR.nc', 'cfrad.20220322_170815.301_COW1_v2_s19_el4.22_SUR.nc', 'cfrad.20220322_170827.715_COW1_v2_s20_el8.01_SUR.nc', 'cfrad.20220322_170840.144_COW1_v2_s21_el11.02_SUR.nc'] for file in files: np.savetxt(fname=file, X=np.array([1,1])) What I tried is import numpy as np from datetime import datetime import glob, os, re import shutil sweeps = [] temp = [] for i, file in enumerate(files[:19]): match_str = re.search(r'\d{4}\d{2}\d{2}_\d{2}\d{2}\d{2}', file) res = datetime.strptime(match_str.group(), '%Y%m%d_%H%M%S') print(res.strftime("%Y%m%d_%H%M%S")) el_pos = int(files[i].find('el')) st_pos = files[i][el_pos+1:el_pos+3] el_pos1 = int(files[i+1].find('el')) end_pos = files[i+1][el_pos1+1:el_pos1+3] # print(files[i][s_pos+1:s_pos+3],files[i+1][s_pos1+1:s_pos1+3]) temp.append(files[i]) print("len(files):",len(files),i) print(st_pos,end_pos) # print() if st_pos>end_pos: print("temp len: ", len(temp)) sweeps.append(temp) temp = [] elif len(files)-i==2: print('entered') sweeps.append(temp) I now have a list named sweeps, and it contains the desired files; how can I now move these files to the directories,m but the directories should be named as I stated above based on the date. I have also the date string in variable res.strftime("%Y%m%d_%H%M%S") can be used to create directories.
[ "Some string splitting can do this for you.\nimport shutil\nimport os\n\n\nfiles = [\n \"cfrad.20220322_170444.122_COW1_v2_s02_el3.40_SUR.nc\",\n \"cfrad.20220322_170456.550_COW1_v2_s03_el4.22_SUR.nc\",\n \"cfrad.20220322_170508.975_COW1_v2_s04_el5.09_SUR.nc\",\n \"cfrad.20220322_170521.397_COW1_v2_s05_el5.99_SUR.nc\",\n \"cfrad.20220322_170533.811_COW1_v2_s06_el0.45_SUR.nc\",\n \"cfrad.20220322_170546.228_COW1_v2_s07_el1.20_SUR.nc\",\n \"cfrad.20220322_170558.648_COW1_v2_s08_el1.90_SUR.nc\",\n \"cfrad.20220322_170611.072_COW1_v2_s09_el2.61_SUR.nc\",\n \"cfrad.20220322_170623.503_COW1_v2_s10_el3.40_SUR.nc\",\n \"cfrad.20220322_170635.923_COW1_v2_s11_el4.21_SUR.nc\",\n \"cfrad.20220322_170648.341_COW1_v2_s12_el5.09_SUR.nc\",\n \"cfrad.20220322_170700.765_COW1_v2_s13_el5.99_SUR.nc\",\n \"cfrad.20220322_170713.179_COW1_v2_s14_el0.45_SUR.nc\",\n \"cfrad.20220322_170725.604_COW1_v2_s15_el1.20_SUR.nc\",\n \"cfrad.20220322_170738.030_COW1_v2_s16_el1.90_SUR.nc\",\n \"cfrad.20220322_170750.461_COW1_v2_s17_el2.61_SUR.nc\",\n \"cfrad.20220322_170802.877_COW1_v2_s18_el3.40_SUR.nc\",\n \"cfrad.20220322_170815.301_COW1_v2_s19_el4.22_SUR.nc\",\n \"cfrad.20220322_170827.715_COW1_v2_s20_el8.01_SUR.nc\",\n \"cfrad.20220322_170840.144_COW1_v2_s21_el11.02_SUR.nc\",\n]\n\n\nfor f in files:\n with open(f, \"w\") as of:\n of.write(\"\\n\")\n\n# force the if statement below to be True on first run\nel = 99999999\nbasepath = \".\"\n\n\nfor f in files:\n new_el = int(f.split(\".\")[2].split(\"_\")[-1].replace(\"el\", \"\"))\n if new_el < el:\n # store new dir name\n curr_dir = f.split(\".\")[1]\n print(curr_dir)\n # create directory\n os.makedirs(curr_dir, exist_ok=True)\n # store new el\n el = new_el\n # move file\n shutil.move(f\"{basepath}{os.sep}{f}\", f\"{basepath}{os.sep}{curr_dir}{os.sep}{f}\")\n\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074610266_python.txt
Q: Why is the custom layer decomposed into several operations in Keras? I want to get the weights of my custom layer, but I couldn't get them by model.layer().get_weights()[X]. So I checked the layers of the model, it seems that the custom layer is decomposed into several operations and no weights can be found in these layers. Here is the custom layer code class PixelBaseConv(Layer): def __init__(self, output_dim, **kwargs): self.output_dim = output_dim super(PixelBaseConv, self).__init__(**kwargs) def build(self, input_shape): # kernel_shape: w*h*c*output_dim kernel_size = input_shape[1:] kernel_shape = (1,) + kernel_size + (self.output_dim, ) self.kernel = self.add_weight(name='kernel', shape=kernel_shape, initializer='uniform', trainable=True) super(PixelBaseConv, self).build(input_shape) def call(self, inputs): # output_shape: w*h*output_dim outputs = [] inputs = K.cast(inputs, dtype="float32") for i in range(self.output_dim): #output = tf.keras.layers.Multiply()([inputs, self.kernel[..., i]]) output = inputs*self.kernel[...,i] output = K.sum(output, axis=-1) if len(outputs) != 0: outputs = np.dstack([outputs, output]) else: outputs = output[..., np.newaxis] return tf.convert_to_tensor(outputs) def compute_output_shape(self, input_shape): return input_shape + (self.output_dim, ) Here is part of the model structure enter image description here I tried different ways to obtain the weights but due to the strange layers, failed. Expected: the first five layers are replaced with single layer which has a trainable kernel. Weights can be get directly by get_weights() I listed weight list length of the first 10 layers and printed weight of layer 1 by following codes for i in range(len(model.layers)): print("layer " + str(i), len(model.layers[i].get_weights())) print(model.layers[1].get_weights()[0]) and got the result and error enter image description here enter image description here A: I found why this problem occurred. I wrote the custom layer by import tensorflow.python.keras while using other keras layers and creating the model by import tensorflow.keras I think these two libraries may not be compatible, so my custom layer was splitted into several operation layers. Thus, weights cannot be obtained and updated. I changed all imports to tensorflow.keras, now everything goes well.
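A short sketch of the fix described above: pull every Keras symbol from the public tensorflow.keras namespace instead of tensorflow.python.keras. The layer body is also rewritten with pure TensorFlow ops instead of the numpy loop, which is an extra simplification on top of the import fix, and the input shape and output_dim below are arbitrary example values.
import tensorflow as tf
from tensorflow.keras.layers import Layer  # public API, not tensorflow.python.keras

class PixelBaseConv(Layer):
    def __init__(self, output_dim, **kwargs):
        super().__init__(**kwargs)
        self.output_dim = output_dim

    def build(self, input_shape):
        # one weight per input position and output channel, as in the question
        kernel_shape = (1,) + tuple(input_shape[1:]) + (self.output_dim,)
        self.kernel = self.add_weight(name="kernel", shape=kernel_shape,
                                      initializer="uniform", trainable=True)

    def call(self, inputs):
        inputs = tf.cast(inputs, tf.float32)
        # broadcast-multiply and sum over the channel axis, all inside the graph
        return tf.reduce_sum(inputs[..., tf.newaxis] * self.kernel, axis=-2)

inp = tf.keras.Input(shape=(8, 8, 3))
out = PixelBaseConv(4)(inp)
model = tf.keras.Model(inp, out)
print(len(model.layers[1].get_weights()))  # 1 -> the kernel is visible via get_weights()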
Why is the custom layer decomposed into several operations in Keras?
I want to get the weights of my custom layer, but I couldn't get them by model.layer().get_weights()[X]. So I checked the layers of the model, it seems that the custom layer is decomposed into several operations and no weights can be found in these layers. Here is the custom layer code class PixelBaseConv(Layer): def __init__(self, output_dim, **kwargs): self.output_dim = output_dim super(PixelBaseConv, self).__init__(**kwargs) def build(self, input_shape): # kernel_shape: w*h*c*output_dim kernel_size = input_shape[1:] kernel_shape = (1,) + kernel_size + (self.output_dim, ) self.kernel = self.add_weight(name='kernel', shape=kernel_shape, initializer='uniform', trainable=True) super(PixelBaseConv, self).build(input_shape) def call(self, inputs): # output_shape: w*h*output_dim outputs = [] inputs = K.cast(inputs, dtype="float32") for i in range(self.output_dim): #output = tf.keras.layers.Multiply()([inputs, self.kernel[..., i]]) output = inputs*self.kernel[...,i] output = K.sum(output, axis=-1) if len(outputs) != 0: outputs = np.dstack([outputs, output]) else: outputs = output[..., np.newaxis] return tf.convert_to_tensor(outputs) def compute_output_shape(self, input_shape): return input_shape + (self.output_dim, ) Here is part of the model structure enter image description here I tried different ways to obtain the weights but due to the strange layers, failed. Expected: the first five layers are replaced with single layer which has a trainable kernel. Weights can be get directly by get_weights() I listed weight list length of the first 10 layers and printed weight of layer 1 by following codes for i in range(len(model.layers)): print("layer " + str(i), len(model.layers[i].get_weights())) print(model.layers[1].get_weights()[0]) and got the result and error enter image description here enter image description here
[ "I found why this problem occurred.\nI wrote the custom layer by\nimport tensorflow.python.keras \n\nwhile using other keras layers and creating the model by\nimport tensorflow.keras\n\nI think these two libraries may not be compatible, so my custom layer was splitted into several operation layers. Thus, weights cannot be obtained and updated.\nI changed all imports to tensorflow.keras, now everything goes well.\n" ]
[ 0 ]
[]
[]
[ "deep_learning", "keras", "keras_layer", "python", "tensorflow" ]
stackoverflow_0074488122_deep_learning_keras_keras_layer_python_tensorflow.txt
Q: N queens placed in k*k chessboard? My problem should be a variant of N queens problem: Is there an algorithm to print all ways to place N queens in a k*k chessboard? I have tried to modify the DFS method used in the N-queens problem like the following but soon realized that I could only search the first "queen_number" of rows in the chessboard. def dfs(self, n, queen, queen_number, ret): if len(queen) == queen_number: ret.append(queen[:]) return for i in range(n): if i in queen: continue flag = False for j, idx in enumerate(queen): if abs(len(queen) - j) == abs(idx - i): flag = True break if flag: continue queen.append(i) self.dfs(n, queen, ret) queen.pop() If there is a better way to accomplish this task, I am also interested to learn it. A: Here's a solution based on the Python port of Niklaus Wirth's n-queen solver from https://en.wikipedia.org/wiki/Eight_queens_puzzle def queens(n, k, i=0, a=[], b=[], c=[]): if k == 0: yield a + [None] * (n - len(a)) return for j in range(n): if j not in a and i+j not in b and i-j not in c: yield from queens(n, k-1, i+1, a+[j], b+[i+j], c+[i-j]) if k < n - i: yield from queens(n, k, i+1, a+[None], b, c) for i, solution in enumerate(queens(10, 9)): print(i, solution) It finds all 56832 ways to place 9 queens on a 10x10 board in less than a second (1.5 sec if they are printed out to the console). The program works like Wirth's: a contains the positions of the queens on the rows processed so far, and b and c contain the diagonal numbers of the queens placed so far. The only difference is that there's an extra k parameter, which says how many queens to place, and some extra code to consider solutions with no queen on a row. Other than this simple optimization of how the board is represented, it's just a depth-first search. The format of the output is the solution number (0-based), and then a list of position of the queen on each row, or None if there's no queen on that row. 
A: I had some fun coming up with a brute force solution with this question: from typing import Dict, List, Callable, Optional def create_k_by_k_board(k: int) -> Dict[int, bool]: # True means the spot is available, False means it has a queen and None means it is checked by a Queen if k <= 0: raise ValueError("k must be higher than 0") return {i: True for i in range(1, k**2+1)} def check_position(conditions: List[Callable], position: int) -> bool: # Checks if any function returns True for func in conditions: if func(position): return True return False def get_all_checked_positions(queen_location: int, board_size: int) -> List[int]: from math import sqrt row_len = int(sqrt(board_size)) conditions = [ # Horizontal check lambda x: ((x - 1) // row_len) == ((queen_location - 1) // row_len), # Vertical check lambda x: (x % row_len) == (queen_location % row_len), # Right diagonal check lambda x: (x % (row_len + 1)) == (queen_location % (row_len + 1)), # Left diagonal check lambda x: (x % (row_len - 1)) == (queen_location % (row_len - 1)), ] return [ position for position in range(1, board_size + 1) if position != queen_location and check_position(conditions, position) ] def place_queen_on_board(board: Dict[int, Optional[bool]], position: int) -> Dict[int, Optional[bool]]: # We don't want to edit the board in place because we may need to go back to it new_board = board.copy() if new_board[position] is True: # Place a new queen new_board[position] = False for checked_position in get_all_checked_positions(position, len(board)): # Set the location as checked (no risk of erasing queens) new_board[checked_position] = None return new_board else: raise ValueError(f"Tried to add queen to position {position} in board {board}") def get_all_queen_configurations(numb_queens: int, row_length: int) -> List[Dict[int, Optional[bool]]]: board = create_k_by_k_board(row_length) def recursive_queen_search(curr_round: int, curr_board: Dict[int, Optional[bool]]) -> List[Dict[int, Optional[bool]]]: successful_boards = [] available_spots = [position for position, is_free in curr_board.items() if is_free is True] for spot in available_spots: new_board = place_queen_on_board(curr_board, spot) if curr_round < numb_queens: successful_boards.extend( recursive_queen_search(curr_round+1, new_board) ) else: successful_boards.append(new_board) return successful_boards success_boards_with_duplicates = recursive_queen_search(1, board) success_boards = [] for success_board in success_boards_with_duplicates: if success_board not in success_boards: success_boards.append(success_board) return success_boards def visualize_board(board: Dict[int, Optional[bool]]) -> None: from math import sqrt row_len = int(sqrt(len(board))) parse_dict = { False: "Q", } for row in range(len(board), 0, -row_len): print( "[" + "][".join([parse_dict.get(board[x], " ") for x in range(row - row_len + 1, row + 1)]) + "]" ) As I mentioned this is a brute force solution, you can further improve it by doing 2 things. Using this method on only one eighth of the board. 
For example if we use the algorithm for 3 queens on a 4*4 board: >>> a = get_all_queen_configurations(3,4) >>> visualize_board(a[0]) [ ][ ][Q][ ] [ ][ ][ ][ ] [ ][ ][ ][Q] [Q][ ][ ][ ] >>> visualize_board(a[1]) [ ][Q][ ][ ] [ ][ ][ ][Q] [ ][ ][ ][ ] [Q][ ][ ][ ] >>> visualize_board(a[2]) [ ][ ][ ][Q] [Q][ ][ ][ ] [ ][ ][ ][ ] [ ][Q][ ][ ] >>> visualize_board(a[3]) [ ][ ][ ][Q] [ ][ ][ ][ ] [Q][ ][ ][ ] [ ][ ][Q][ ] You can see that all the solutions are the same solution mirrored through one of the axes of symmetry. Using this simplification would make the code at least 8 times faster, probably more. Use a smarter algorithm. But this probably involves searching online and going through academic papers, which you've opted not to do. The limitations of the brute force algo are quite serious: 5 queens in 6*6 board: ~0.2 sec 6 queens in 7*7 board: ~4 sec 7 queens in 8*8 board: ~128 sec 8 queens in 9*9 board: ~Legend has it it's still running to this day... So whatever, here's my code, let me know if you have any questions.
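For readers who want to see a placement rather than the raw list, here is a small, purely illustrative helper for the generator-based solver in the first answer above (it reuses that answer's queens function; the 6x6 board and 5 queens are arbitrary numbers).
def show(solution):
    n = len(solution)
    for col in solution:
        print(" ".join("Q" if col == c else "." for c in range(n)))
    print()

for sol in queens(6, 5):
    show(sol)   # print the first 5-queens placement on a 6x6 board
    break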
N queens placed in k*k chessboard?
My problem should be a variant of N queens problem: Is there an algorithm to print all ways to place N queens in a k*k chessboard? I have tried to modify the DFS method used in the N-queens problem like the following but soon realized that I could only search the first "queen_number" of rows in the chessboard. def dfs(self, n, queen, queen_number, ret): if len(queen) == queen_number: ret.append(queen[:]) return for i in range(n): if i in queen: continue flag = False for j, idx in enumerate(queen): if abs(len(queen) - j) == abs(idx - i): flag = True break if flag: continue queen.append(i) self.dfs(n, queen, ret) queen.pop() If there is a better way to accomplish this task, I am also interested to learn it.
[ "Here's a solution based on the Python port of Niklaus Wirth's n-queen solver from https://en.wikipedia.org/wiki/Eight_queens_puzzle\ndef queens(n, k, i=0, a=[], b=[], c=[]):\n if k == 0:\n yield a + [None] * (n - len(a))\n return\n for j in range(n):\n if j not in a and i+j not in b and i-j not in c:\n yield from queens(n, k-1, i+1, a+[j], b+[i+j], c+[i-j])\n if k < n - i:\n yield from queens(n, k, i+1, a+[None], b, c)\n\nfor i, solution in enumerate(queens(10, 9)):\n print(i, solution)\n\nIt finds all 56832 ways to place 9 queens on a 10x10 board in less than a second (1.5 sec if they are printed out to the console).\nThe program works like Wirth's: a contains the positions of the queens on the rows processed so far, and b and c contain the diagonal numbers of the queens placed so far. The only difference is that there's an extra k parameter, which says how many queens to place, and some extra code to consider solutions with no queen on a row. Other than this simple optimization of how the board is represented, it's just a depth-first search.\nThe format of the output is the solution number (0-based), and then a list of position of the queen on each row, or None if there's no queen on that row.\n", "I had some fun coming up with a brute force solution with this question:\nfrom typing import Dict, List, Callable, Optional\n\n\ndef create_k_by_k_board(k: int) -> Dict[int, bool]:\n # True means the spot is available, False means it has a queen and None means it is checked by a Queen\n if k <= 0:\n raise ValueError(\"k must be higher than 0\")\n return {i: True for i in range(1, k**2+1)}\n\n\ndef check_position(conditions: List[Callable], position: int) -> bool:\n # Checks if any function returns True\n for func in conditions:\n if func(position):\n return True\n return False\n\n\ndef get_all_checked_positions(queen_location: int, board_size: int) -> List[int]:\n from math import sqrt\n row_len = int(sqrt(board_size))\n conditions = [\n # Horizontal check\n lambda x: ((x - 1) // row_len) == ((queen_location - 1) // row_len),\n # Vertical check\n lambda x: (x % row_len) == (queen_location % row_len),\n # Right diagonal check\n lambda x: (x % (row_len + 1)) == (queen_location % (row_len + 1)),\n # Left diagonal check\n lambda x: (x % (row_len - 1)) == (queen_location % (row_len - 1)),\n ]\n return [\n position for position in range(1, board_size + 1)\n if position != queen_location\n and check_position(conditions, position)\n ]\n\n\ndef place_queen_on_board(board: Dict[int, Optional[bool]], position: int) -> Dict[int, Optional[bool]]:\n # We don't want to edit the board in place because we may need to go back to it\n new_board = board.copy()\n if new_board[position] is True:\n # Place a new queen\n new_board[position] = False\n for checked_position in get_all_checked_positions(position, len(board)):\n # Set the location as checked (no risk of erasing queens)\n new_board[checked_position] = None\n return new_board\n else:\n raise ValueError(f\"Tried to add queen to position {position} in board {board}\")\n\n\ndef get_all_queen_configurations(numb_queens: int, row_length: int) -> List[Dict[int, Optional[bool]]]:\n board = create_k_by_k_board(row_length)\n\n def recursive_queen_search(curr_round: int, curr_board: Dict[int, Optional[bool]]) -> List[Dict[int, Optional[bool]]]:\n successful_boards = []\n available_spots = [position for position, is_free in curr_board.items() if is_free is True]\n for spot in available_spots:\n new_board = place_queen_on_board(curr_board, spot)\n if curr_round < 
numb_queens:\n successful_boards.extend(\n recursive_queen_search(curr_round+1, new_board)\n )\n else:\n successful_boards.append(new_board)\n return successful_boards\n\n success_boards_with_duplicates = recursive_queen_search(1, board)\n success_boards = []\n for success_board in success_boards_with_duplicates:\n if success_board not in success_boards:\n success_boards.append(success_board)\n return success_boards\n\n\ndef visualize_board(board: Dict[int, Optional[bool]]) -> None:\n from math import sqrt\n row_len = int(sqrt(len(board)))\n parse_dict = {\n False: \"Q\",\n }\n for row in range(len(board), 0, -row_len):\n print(\n \"[\" + \"][\".join([parse_dict.get(board[x], \" \") for x in range(row - row_len + 1, row + 1)]) + \"]\"\n )\n\nAs I mentioned this is a brute force solution, you can further improve it by doing 2 things.\n\nUsing this method on only one eighth of the board.\n\nFor example if we use the algorithm for 3 queens on a 4*4 board:\n>>> a = get_all_queen_configurations(3,4)\n>>> visualize_board(a[0])\n[ ][ ][Q][ ]\n[ ][ ][ ][ ]\n[ ][ ][ ][Q]\n[Q][ ][ ][ ]\n>>> visualize_board(a[1])\n[ ][Q][ ][ ]\n[ ][ ][ ][Q]\n[ ][ ][ ][ ]\n[Q][ ][ ][ ]\n>>> visualize_board(a[2])\n[ ][ ][ ][Q]\n[Q][ ][ ][ ]\n[ ][ ][ ][ ]\n[ ][Q][ ][ ]\n>>> visualize_board(a[3])\n[ ][ ][ ][Q]\n[ ][ ][ ][ ]\n[Q][ ][ ][ ]\n[ ][ ][Q][ ]\n\nYou can see that all the solutions are the same solution mirrored through one of the axes of symmetry.\nUsing this simplification would make the code at least 8 times faster, probably more.\n\nUse a smarter algorithm. But this probably involves searching online and going through academic papers, which you've opted not to do.\n\nThe limitations of the brute force algo are quite serious:\n\n5 queens in 6*6 board: ~0.2 sec\n6 queens in 7*7 board: ~4 sec\n7 queens in 8*8 board: ~128 sec\n8 queens in 9*9 board: ~Legend has it it's still running to this day...\n\nSo whatever, here's my code, let me know if you have any questions.\n" ]
[ 2, 0 ]
[]
[]
[ "algorithm", "n_queens", "python" ]
stackoverflow_0074608957_algorithm_n_queens_python.txt
Q: Cannot import module that imports a custom class
I have a directory design that looks like this:
MyProject --- - script.py
            |
            - helpers --- - __init__.py
                        |
                        - class_container.py
                        |
                        - helper.py

class_container.py has a class called MyClass
helper.py has this code:
from class_container import MyClass
def func():
    # some code using MyClass

script.py has this code:
from helpers.helper import func

When I run script.py:
ModuleNotFoundError: No module named 'class_container'

I tried changing the code in helper.py to from helpers.class_container import MyClass. Then running script.py started working, but running helper.py started giving ModuleNotFoundError: No module named 'helpers'.
I want to be able to run both script.py and helper.py separately without needing to change the code in any module.
Edit: I thought of a solution, which is changing helper.py such as:
from pathlib import Path
import sys
sys.path.append(str(Path(__file__).parent))
from class_container import MyClass
def func():
    # some code using MyClass

And it worked. Basically I added the directory of helper.py to the path by using the built-in sys and pathlib modules and the __file__ object. And unlike the import statement's behaviour, __file__ will not forget its roots when used from a different module (i.e. it won't become the path of script.py when imported into it; it'll always be the path of helper.py since it was initiated there). Though I'd appreciate it more if someone can show another way that doesn't involve messing with sys.path; it feels like an illegal, 'unpythonic' tool.

A: You have to create a __init__.py file in the MyProject/helpers directory. Maybe you already have created it. If not, create an empty file.
Then in MyProject/helpers/helper.py, access the module helpers.class_container like this.
from helpers.class_container import MyClass
def func():
    # some code using MyClass

You can also use a relative import like this.
from .class_container import MyClass

If you want to run MyProject/helpers/helper.py independently, add test code in helper.py like this.
from helpers.class_container import MyClass
def func():
    # some code using MyClass

if __name__ == '__main__':
    func()

And run like this in the MyProject directory. (I assume a Linux environment.)
$ python3 -m helpers.helper

The point is to differentiate Python modules from Python scripts and treat them differently.
Cannot import module that imports a custom class
I have a directory design that looks like this: MyProject --- - script.py | - helpers --- - __init__.py | - class_container.py | - helper.py class_container.py has a class called MyClass helper.py has this code: from class_container import MyClass def func(): # some code using MyClass script.py has this code: from helpers.helper import func When I run script.py: ModuleNotFoundError: No module named 'class_container' I tried changing the code in helper.py to from helpers.class_container import MyClass. Then running script.py started working but running helper.py started giving ModuleNotFoundError: No module named 'helpers'. I want to be able run both script.py and helper.py separately without needing to change the code in any module. Edit: I thought of a solution which is changing helper.py such as: from pathlib import Path import sys sys.path.append(str(Path(__file__).parent)) from class_container import MyClass def func(): # some code using MyClass And it worked. Basically I added the directory of helper.py to path by using built-in sys and pathlib modules and __file__ object. And unlike import statement's behaviour, __file__ will not forget it's roots when used from a different module (i.e it won't become the path of script.py when imported into it. It'll always be the path of helper.py since it was initiated there.). Though I'd appreciate more if someone can show another way that doesn't involve messing with sys.path, it feels like an illegal, 'unpythonic' tool.
[ "You have to create a __init__.py file in the MyProject/helpers directory. Maybe you already have created it. If not, create an empty file.\nThen in the MyProject/helpers/helper.py, access the module helpers.class_container like this.\nfrom helpers.class_container import MyClass\ndef func():\n # some code using MyClass\n\nYou can also use a relative import like this.\nfrom .class_container import MyClass\n\nIf you want to run the MyProject/helpers/helper.py independently, add test code in helper.py like this.\nfrom helpers.class_container import MyClass\ndef func():\n # some code using MyClass\n\nif __name__ == '__main__':\n func()\n\nAnd run like this in the MyProject directory.(I assume a Linux environment.)\n$ python3 -m helpers.helper\n\nThe point is to differentiate Python modules from Python scripts and treat them differently.\n" ]
[ 0 ]
[]
[]
[ "import", "module", "path", "python", "scope" ]
stackoverflow_0074607575_import_module_path_python_scope.txt
Q: I keep getting 'None' when getting environment variables
I'm trying to keep my token in an environment variable, so I created the file .env, and I stored the TOKEN there:
TOKEN=XXX

When I run my .py file, I can't get the environment variable TOKEN; it keeps printing 'None'.
import os

token = os.environ.get("TOKEN")
print(token)

A: What you are trying to do is use a dotenv variable directly through os.environ. In order to use variables from .env, you need the dotenv library.
Install the dotenv library:
pip install dotenv

Then import dotenv like this.
from dotenv import load_dotenv
load_dotenv() # this will load variables from .env.

A: If you have tried the answers mentioned by @Shadab and you're still getting None while printing the values from env, maybe you didn't create the .env file in the correct directory. We need to create the .env file within the project scope, not inside any folders.
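A slightly fuller sketch tying both answers together. Note that the dotenv module used below is distributed on PyPI as python-dotenv (pip install python-dotenv); the .env file is assumed to live at the project root next to the script and to contain a line such as TOKEN=XXX.
import os
from dotenv import load_dotenv, find_dotenv

# find_dotenv() walks up from the current directory until it finds a .env file
load_dotenv(find_dotenv())
# or pin the location explicitly:
# load_dotenv(dotenv_path="/path/to/MyProject/.env")

token = os.environ.get("TOKEN")
print(token)  # now prints XXX instead of None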
I keep getting 'None' when getting enviroment variables
I'm trying to keep my token in an enviroment variable, so I created the file .env, and I stored the TOKEN there: TOKEN=XXX When I run my .py file, I can't get the enviroment variable TOKEN, it keeps printing 'None'. import os token = os.environ.get("TOKEN") print(token)
[ "What you are trying to do is use a dotenv variable directly using os.environ.\nIn order to use variables from .env, you need the dotenv library.\nInstall dotenv library:\npip install dotenv\n\nThen import dotenv like this.\nfrom dotenv import load_dotenv\nload_dotenv() # this will load variables from .env.\n\n", "If you have tried the answers mentioned by @Shadab and you're still getting None while printing the values from env.\n\nMaybe you didn't created the .env file in the correct file directory.\n\nWe meed to create the .env file within project scope, not inside any folders.\n" ]
[ 2, 0 ]
[]
[]
[ "discord.py", "environment_variables", "python" ]
stackoverflow_0068164516_discord.py_environment_variables_python.txt
Q: Where does Kubeflow pipeline look for packages in `packages_to_install`? I am using Kubeflow Pipelines in Vertex AI to create my ML pipeline and has beeen able to use standard packaged in Kubeflow component using the below syntax @component( # this component builds an xgboost classifier with xgboost packages_to_install=["google-cloud-bigquery", "xgboost", "pandas", "sklearn", "joblib", "pyarrow"], base_image="python:3.9", output_component_file="output_component/create_xgb_model_xgboost.yaml" ) def build_xgb_xgboost(project_id: str, data_set_id: str, training_view: str, metrics: Output[Metrics], model: Output[Model] ): Now I need to add my custom python module in packages_to_install . Is there a way to do it? For this I need to understand where does KFP look for packages when installing those on top of base_image. I understand this can be achieved using a custom base_image where I build the base_image with my python module in it. But it seems like an overkill for me and would prefer to specify python module where applicable in the component specification Something like below @component( # this component builds an xgboost classifier with xgboost packages_to_install=["my-custom-python-module","google-cloud-bigquery", "xgboost", "pandas", "sklearn", "joblib", "pyarrow"], base_image="python:3.9", output_component_file="output_component/create_xgb_model_xgboost.yaml" ) def build_xgb_xgboost(project_id: str, data_set_id: str, training_view: str, metrics: Output[Metrics], model: Output[Model] ): A: Under the hood, the step will install the package at the runtime when executing the component. This requires a package to be hosted in a location that can be accessed by the runtime environment later. Given that, you need to upload the package to a location that can be accessed later, e.g. git repository as Jose mentioned. A: For this I need to understand where does KFP look for packages when installing those on top of base_image. What you specified in packages_to_install is passed to pip install command, so it looks for packages from PyPI. You can also install a package from a source control as pip supports it. See examples: https://packaging.python.org/en/latest/tutorials/installing-packages/#installing-from-vcs A: I found the answer to this With KFP SDK 1.8.12, Kubeflow allows you to specify custom pip_index_url See Kubeflow feature request With this feature, I can install my custom python module like this @component( # this component builds an xgboost classifier with xgboost pip_index_urls=[CUSTOM_ARTEFACT_REPO, "https://pypi.python.org/simple"], packages_to_install=["my-custom-python-module","google-cloud-bigquery", "xgboost", "pandas", "sklearn", "joblib", "pyarrow"], base_image="python:3.9", output_component_file="output_component/create_xgb_model_xgboost.yaml" ) def build_xgb_xgboost(project_id: str, data_set_id: str, training_view: str, metrics: Output[Metrics], model: Output[Model] ):
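Because packages_to_install is handed straight to pip, one way to pull in a custom module without building a base image or a private index is a VCS URL, as pip supports. The repository URL and module name below are placeholders, not from the question, and the decorator otherwise mirrors the question's component.
from kfp.v2.dsl import component, Output, Metrics, Model

@component(
    packages_to_install=[
        "git+https://github.com/your-org/my-custom-python-module.git@main",  # placeholder URL
        "google-cloud-bigquery", "xgboost", "pandas", "sklearn", "joblib", "pyarrow",
    ],
    base_image="python:3.9",
)
def build_xgb_xgboost(project_id: str, data_set_id: str, training_view: str,
                      metrics: Output[Metrics], model: Output[Model]):
    import my_custom_python_module  # hypothetical import name of the package above
    # training code as in the question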
Where does Kubeflow pipeline look for packages in `packages_to_install`?
I am using Kubeflow Pipelines in Vertex AI to create my ML pipeline and has beeen able to use standard packaged in Kubeflow component using the below syntax @component( # this component builds an xgboost classifier with xgboost packages_to_install=["google-cloud-bigquery", "xgboost", "pandas", "sklearn", "joblib", "pyarrow"], base_image="python:3.9", output_component_file="output_component/create_xgb_model_xgboost.yaml" ) def build_xgb_xgboost(project_id: str, data_set_id: str, training_view: str, metrics: Output[Metrics], model: Output[Model] ): Now I need to add my custom python module in packages_to_install . Is there a way to do it? For this I need to understand where does KFP look for packages when installing those on top of base_image. I understand this can be achieved using a custom base_image where I build the base_image with my python module in it. But it seems like an overkill for me and would prefer to specify python module where applicable in the component specification Something like below @component( # this component builds an xgboost classifier with xgboost packages_to_install=["my-custom-python-module","google-cloud-bigquery", "xgboost", "pandas", "sklearn", "joblib", "pyarrow"], base_image="python:3.9", output_component_file="output_component/create_xgb_model_xgboost.yaml" ) def build_xgb_xgboost(project_id: str, data_set_id: str, training_view: str, metrics: Output[Metrics], model: Output[Model] ):
[ "Under the hood, the step will install the package at the runtime when executing the component. This requires a package to be hosted in a location that can be accessed by the runtime environment later.\nGiven that, you need to upload the package to a location that can be accessed later, e.g. git repository as Jose mentioned.\n", "\nFor this I need to understand where does KFP look for packages when installing those on top of base_image.\n\nWhat you specified in packages_to_install is passed to pip install command, so it looks for packages from PyPI.\nYou can also install a package from a source control as pip supports it. See examples: https://packaging.python.org/en/latest/tutorials/installing-packages/#installing-from-vcs\n", "I found the answer to this\nWith KFP SDK 1.8.12, Kubeflow allows you to specify custom pip_index_url\nSee Kubeflow feature request\nWith this feature, I can install my custom python module like this\n@component(\n # this component builds an xgboost classifier with xgboost\n pip_index_urls=[CUSTOM_ARTEFACT_REPO, \"https://pypi.python.org/simple\"],\n packages_to_install=[\"my-custom-python-module\",\"google-cloud-bigquery\", \"xgboost\", \"pandas\", \"sklearn\", \"joblib\", \"pyarrow\"],\n base_image=\"python:3.9\",\n output_component_file=\"output_component/create_xgb_model_xgboost.yaml\"\n)\ndef build_xgb_xgboost(project_id: str,\n data_set_id: str,\n training_view: str,\n metrics: Output[Metrics],\n model: Output[Model]\n):\n\n" ]
[ 1, 1, 0 ]
[]
[]
[ "google_cloud_vertex_ai", "kubeflow", "kubeflow_pipelines", "python" ]
stackoverflow_0072716087_google_cloud_vertex_ai_kubeflow_kubeflow_pipelines_python.txt
Q: ModuleNotFoundError: No module named 'paho'
I am trying to make a connection between my Raspberry Pi and my Windows PC through the MQTT protocol, and I have a problem I can't solve on my PC - I can't import a library that I have installed into my program:
import paho.mqtt.client as mqtt

Results in the error
ModuleNotFoundError: No module named 'paho'

The topic was already issued here (Import Error: paho.mqtt.client not found), but the solution doesn't work for me. I see it in the Python folder: C:\Users\mhucl\AppData\Local\Programs\Python\Python310\Lib\site-packages\paho\mqtt\client
I see it in the pip list, but the program can't find it. Do you know where the problem could be? I use Python version 3.10.3. I tried it on a different PC as well with the same result. I followed all solutions on YouTube and forums and nothing worked. Thank you very much. Here are screenshots if someone would like to see: https://uschovna.cz/zasilka/WLUWM5TNP7HP7GDG-XKP/UCKZFIZHCI

A: pip install paho-mqtt

https://pypi.org/project/paho-mqtt/
I solved my error by using this command
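Once the install has gone into the same interpreter that runs the script (when in doubt, use python -m pip install paho-mqtt so pip and python agree), a minimal connect/subscribe smoke test looks roughly like this; the broker address and topic are placeholders, and the callback signatures follow the paho-mqtt 1.x API.
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    print("connected with result code", rc)
    client.subscribe("test/topic")

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("192.168.1.10", 1883, 60)  # Raspberry Pi / broker IP, placeholder
client.loop_forever()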
ModuleNotFoundError: No module named 'paho'
I am trying to make connection with my raspberry Pi and my PC windows through MQTT protocol. And I have a problem I cant solve on my PC - I can t import library that I have installed to my program: import paho.mqtt.client as mqtt Results in the error ModuleNotFoundError: No module named 'paho' The topic was already issued here (Import Error: paho.mqtt.client not found), but solution doesnt work for me. I see it in the python folder: C:\Users\mhucl\AppData\Local\Programs\Python\Python310\Lib\site-packages\paho\mqtt\client I see it in the pip list. But the program can t find it. Do you know where could be the problem? I use python version 3.10.3. I tried it on different PC aswell and the same result. I followed all solutions on youtube, forums and nothing. Thank you very much. Here are screenshots if someone would like to see: https://uschovna.cz/zasilka/WLUWM5TNP7HP7GDG-XKP/UCKZFIZHCI
[ "pip install paho-mqtt\n\nhttps://pypi.org/project/paho-mqtt/\nI solved my error by using this command\n" ]
[ 0 ]
[]
[]
[ "import", "libraries", "mqtt", "pip", "python" ]
stackoverflow_0071565510_import_libraries_mqtt_pip_python.txt
Q: IPython Notebook: how to display() multiple objects without newline
Currently when I use the display() function in the IPython notebook I get newlines inserted between objects:
>>> display('first line', 'second line')
first line
second line

But I would like the print() function's behaviour where everything is kept on the same line, e.g.:
>>> print("all on", "one line")
all on one line

Is there a method of changing display behaviour to do this?

A: No, display cannot prevent newlines, in part because there are no newlines to prevent. Each displayed object gets its own div to sit in, and these are arranged vertically. You might be able to adjust this by futzing with CSS, but I wouldn't recommend that.
The only way you could really get two objects to display side-by-side is to build your own object which encapsulates multiple displayed objects, and display that instead.
For instance, your simple string case:
class ListOfStrings(object):
    def __init__(self, *strings):
        self.strings = strings

    def _repr_html_(self):
        return ''.join( [
            "<span class='listofstr'>%s</span>" % s
            for s in self.strings
        ])

display(ListOfStrings("hi", "hello", "hello there"))

example notebook

A: I had exactly the reverse problem (\n not working) and I solved it from the Python side. You can use, for example, this
a = 'first line', 'second line' # This converts to tuple
display(" ".join(a))

With this result
'first line second line'
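A related variant of the first answer's idea that skips the custom class: build the inline HTML yourself and hand a single HTML object to display. The span styling and helper name are illustrative choices, not from the answer.
from IPython.display import HTML, display

def display_inline(*strings):
    html = "".join("<span style='margin-right:1em'>%s</span>" % s for s in strings)
    display(HTML(html))

display_inline("all on", "one line")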
IPython Notebook: how to display() multiple objects without newline
Currently when I use display() function in the IPython notebook I get newlines inserted between objects: >>> display('first line', 'second line') first line second line But I would like the print() function's behaviour where everything is kept on the same line, e.g.: >>> print("all on", "one line") all on one line Is there a method of changing display behaviour to do this?
[ "No, display cannot prevent newlines, in part because there are no newlines to prevent. Each displayed object gets its own div to sit in, and these are arranged vertically. You might be able to adjust this by futzing with CSS, but I wouldn't recommend that.\nThe only way you could really get two objects to display side-by-side is to build your own object which encapsulates multiple displayed objects, and display that instead.\nFor instance, your simple string case:\nclass ListOfStrings(object):\n def __init__(self, *strings):\n self.strings = strings\n\n def _repr_html_(self):\n return ''.join( [\n \"<span class='listofstr'>%s</span>\" % s\n for s in self.strings\n ])\n\ndisplay(ListOfStrings(\"hi\", \"hello\", \"hello there\"))\n\nexample notebook\n", "I had exactly reverse problem (\\n not working) and I solved it from py side. You can use for example this\na = 'first line', 'second line' # This converts to tuple\ndisplay(\" \".join(a)) \n\nWith this result\n'first line second line'\n\n" ]
[ 9, 0 ]
[]
[]
[ "ipython", "jupyter_notebook", "python" ]
stackoverflow_0017439176_ipython_jupyter_notebook_python.txt
Q: Django UUID Field does not autocreate with new entry I just added a new model where I want to use a UUID for the first time. I run Django 3.1.3 on python 3.8.10. Found some questions about this and I am quite certain I did it according to those suggestions. However, when I add an entry to that model (in phpmyadmin web-surface) the UUID is not being added, it just stays empty. However when I create an other one I get the error, that the UUID Field is not allowed to be the same as somewhere else (both empty) which means at least the unique=True does work. Another thing to mention is, when I create the field using VSCode, normally those fieldnames are being auto-completed, however it is not the case with this one. Thought this might give you a hint what is going on. My model looks like this: from django.db import models import uuid class MQTTTable(models.Model): uuid = models.UUIDField(primary_key = True, default = uuid.uuid4, editable = False, unique = True) description = models.CharField(max_length= 100, default = None) clientID = models.CharField(max_length = 50, default = None) mastertopic = models.CharField(max_length = 200, default = None) A: The default = uuid.uuid4 is a Django ORM default, it's not something the database will do for you like the auto incrementing for ID. So if you add an entry via the phpmyadmin, it will not set the uuid field. A: You will be needing to make migration like this in order to populate the fields for existing entries. Visit this to learn how to do that. With creation of new objects the field is auto populated using the python way. Thanks Migrations that add unique fields You can visit this question which may help you with your answer: Automatically fill pre-existing database entries with a UUID in Django A: Basically, you can't add an entry in the php admin panel and expect that django adds the UUID, because it is sets by Django and not by the database. You can try to implement a function in the model.py file and serialize it as source of a Charfield, however in this case you need django rest framework
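A small sketch of the distinction the answers draw: rows created through the ORM pick up the uuid default, while rows inserted directly in phpMyAdmin bypass it. The import path and field values below are placeholders.
from myapp.models import MQTTTable  # hypothetical app path

obj = MQTTTable.objects.create(
    description="test sensor",
    clientID="client-01",
    mastertopic="home/sensors/01",
)
print(obj.uuid)  # populated by the Python-side default, e.g. 3f1c9d2e-...

# A raw INSERT in phpMyAdmin would have to supply the column itself,
# e.g. INSERT INTO ... (uuid, ...) VALUES (UUID(), ...), because MySQL
# knows nothing about the default defined on the Django field.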
Django UUID Field does not autocreate with new entry
I just added a new model where I want to use a UUID for the first time. I run Django 3.1.3 on python 3.8.10. Found some questions about this and I am quite certain I did it according to those suggestions. However, when I add an entry to that model (in phpmyadmin web-surface) the UUID is not being added, it just stays empty. However when I create an other one I get the error, that the UUID Field is not allowed to be the same as somewhere else (both empty) which means at least the unique=True does work. Another thing to mention is, when I create the field using VSCode, normally those fieldnames are being auto-completed, however it is not the case with this one. Thought this might give you a hint what is going on. My model looks like this: from django.db import models import uuid class MQTTTable(models.Model): uuid = models.UUIDField(primary_key = True, default = uuid.uuid4, editable = False, unique = True) description = models.CharField(max_length= 100, default = None) clientID = models.CharField(max_length = 50, default = None) mastertopic = models.CharField(max_length = 200, default = None)
[ "The default = uuid.uuid4 is a Django ORM default, it's not something the database will do for you like the auto incrementing for ID. So if you add an entry via the phpmyadmin, it will not set the uuid field.\n", "You will be needing to make migration like this in order to populate the fields for existing entries. Visit this to learn how to do that. With creation of new objects the field is auto populated using the python way. Thanks\nMigrations that add unique fields\nYou can visit this question which may help you with your answer:\nAutomatically fill pre-existing database entries with a UUID in Django\n", "Basically, you can't add an entry in the php admin panel and expect that django adds the UUID, because it is sets by Django and not by the database. You can try to implement a function in the model.py file and serialize it as source of a Charfield, however in this case you need django rest framework\n" ]
[ 1, 0, 0 ]
[ "change default = uuid.uuid4 to default = uuid.uuid4()\n" ]
[ -2 ]
[ "django", "python" ]
stackoverflow_0072270982_django_python.txt
Q: tkinter button color won't change (non-click event) I created a program to move a knight around a chess board touching every square without touching the same one twice, starting from a random location. I am attempting to show this action using tkinter by changing the color of the square (which is a button) to red as the knight moves. The Chess Board is made of tkinter buttons and they relate to the "Square" class. The program sets the correct colors when creating the grid, but won't change them during execution. Here is the reduced code (as instructed to provide). The reduced code is not the entire code, just the instance of creating the board and placing the knight in a random location. When the knight is placed, the button color should change. from tkinter import * from tkmacosx import Button import random import sys WIDTH = 800 HEIGHT = 1000 GRID_SIZE = 8 class Square: board = [] def __init__(self, x, y, touched=False): self.touched = touched self.board_btn_object = None self.x = x self.y = y def create_btn_object(self, location): btn = Button ( location, width=100, height=100, text= f"{self.x}, {self.y}", bg=None ) self.board_btn_object = btn Square.board.append(self) @property def is_touched(self): self.touched = True self.board_btn_object.configure(bg="red") @staticmethod def randomlocation(): return random.choice(Square.board) def __repr__(self): return f"Square({self.x}, {self.y}) touched={self.touched}" class Knight: def __init__(self, x, y): self.x = x self.y = y def main(): root = Tk() root.configure(bg='blue') root.geometry(f'{WIDTH}x{HEIGHT}') root.title("chess chaser") root.resizable(False, False) top_frame = Frame( root, bg = "red", width = WIDTH, height = height_prct(10) ) top_frame.place(x=0, y=0) bottom_frame = Frame( root, bg="black", width = WIDTH, height = height_prct(10) ) bottom_frame.place(x=0, y=height_prct(90)) center_frame = Frame( root, bg = "green", width = WIDTH, height = height_prct(80) ) center_frame.place( x=0, y=height_prct(10) ) make_board(center_frame) top_menu(top_frame) root.mainloop() def height_prct(percentage): return (HEIGHT / 100) * percentage def width_prct(percentage): return (WIDTH / 100) * percentage def top_menu(top_frame): run = Button ( top_frame, width = 100, height = 50, text=f"Run", command = play_game ) run.place(x=30, y=10) def make_board(center_frame): for x in range(GRID_SIZE): for y in range(GRID_SIZE): b = Square(x, y) b.create_btn_object(center_frame) b.board_btn_object.grid( column=x, row=y ) if (x+y) %2 ==0: b.board_btn_object.configure(bg="grey50") return def play_game(): knight = place_knight() sys.exit("Done") def place_knight(): coords = Square.randomlocation() coords.is_touched x = coords.x y = coords.y knight = Knight(x, y) return knight if __name__ == "__main__": main() I have tried using self.board_btn_object.change(bg="red") and .update and .after.... nothing works to change the color. If I change code and bind left click to change the button to red, it works fine; however, I want it to change when the KNIGHT is positioned there. A: You are just complicated everything. Create a cell class and keep information of x,y and iftouched in it. This way you can add new capabilities into cells easily. Create a board and make an array inside it. Create cell instances and place into that array. You should do inside this board whatever you do. Define rules, move capabilities etc. GUI simply should be an illusion of this imaginary board. When things get complicated you just update your user interface according to your board! 
import tkinter as tk import random GRID_SIZE = 8 #this is cell which going to keep information class cell: def __init__(self,x,y,IsTouched=False): self.x= x self.y = y self.touched = IsTouched class board: def __init__(self): self.initializeBoard() self.knight = [0,0] self.placeKnightRandomly() def moveKnight(self,x,y): #if knight can move in the board, return True so that #we can show in the GUI. if(self.board[x][y].touched): return False else: self.knight = [x,y] print(f"Knight new position:{x},{y}") return True def initializeBoard(self): #This function creates board from zero. #if you want to reset board, simple run this function. self.board = [] for row in range(GRID_SIZE): line = [] for column in range(GRID_SIZE): line.append(cell(row,column)) self.board.append(line) def placeKnightRandomly(self): #place knight randomly x = random.randint(0,7) y = random.randint(0,7) self.knight = [x,y] self.board[x][y].touched = True print("knight:",x,y) class GUI: def __init__(self): self.mw = tk.Tk() self.mw.geometry("800x800") self.board = board() self.buttonFrame = tk.Frame(self.mw) self.buttonFrame.place(relx=0,rely=0,relheight=1,relwidth=1) self.updateButtons() self.mw.mainloop() def updateButtons(self): for child in self.buttonFrame.winfo_children(): child.destroy() self.buttonsList = [] for lines in self.board.board: buttons = [] for column in lines: x = column.x y = column.y btn = tk.Button(self.buttonFrame,text=f"{x},{y}",width=12,height=5) if(self.board.board[x][y].touched): btn.configure(bg="red") #lets connect a buttons to functions. Send x and y as parameters. btn.configure(command=lambda x = x,y=y:self.touchInto(x,y) ) btn.grid(row=x,column=y) buttons.append(btn) self.buttonsList.append(buttons) def touchInto(self,x,y): #when you clicked button, this function going to run if(self.board.moveKnight(x,y)): self.board.board[x][y].touched = True self.buttonsList[x][y].configure(bg="red") else: print("cannot move!") if __name__ == "__main__": GUI()
tkinter button color won't change (non-click event)
I created a program to move a knight around a chess board touching every square without touching the same one twice, starting from a random location. I am attempting to show this action using tkinter by changing the color of the square (which is a button) to red as the knight moves. The Chess Board is made of tkinter buttons and they relate to the "Square" class. The program sets the correct colors when creating the grid, but won't change them during execution. Here is the reduced code (as instructed to provide). The reduced code is not the entire code, just the instance of creating the board and placing the knight in a random location. When the knight is placed, the button color should change. from tkinter import * from tkmacosx import Button import random import sys WIDTH = 800 HEIGHT = 1000 GRID_SIZE = 8 class Square: board = [] def __init__(self, x, y, touched=False): self.touched = touched self.board_btn_object = None self.x = x self.y = y def create_btn_object(self, location): btn = Button ( location, width=100, height=100, text= f"{self.x}, {self.y}", bg=None ) self.board_btn_object = btn Square.board.append(self) @property def is_touched(self): self.touched = True self.board_btn_object.configure(bg="red") @staticmethod def randomlocation(): return random.choice(Square.board) def __repr__(self): return f"Square({self.x}, {self.y}) touched={self.touched}" class Knight: def __init__(self, x, y): self.x = x self.y = y def main(): root = Tk() root.configure(bg='blue') root.geometry(f'{WIDTH}x{HEIGHT}') root.title("chess chaser") root.resizable(False, False) top_frame = Frame( root, bg = "red", width = WIDTH, height = height_prct(10) ) top_frame.place(x=0, y=0) bottom_frame = Frame( root, bg="black", width = WIDTH, height = height_prct(10) ) bottom_frame.place(x=0, y=height_prct(90)) center_frame = Frame( root, bg = "green", width = WIDTH, height = height_prct(80) ) center_frame.place( x=0, y=height_prct(10) ) make_board(center_frame) top_menu(top_frame) root.mainloop() def height_prct(percentage): return (HEIGHT / 100) * percentage def width_prct(percentage): return (WIDTH / 100) * percentage def top_menu(top_frame): run = Button ( top_frame, width = 100, height = 50, text=f"Run", command = play_game ) run.place(x=30, y=10) def make_board(center_frame): for x in range(GRID_SIZE): for y in range(GRID_SIZE): b = Square(x, y) b.create_btn_object(center_frame) b.board_btn_object.grid( column=x, row=y ) if (x+y) %2 ==0: b.board_btn_object.configure(bg="grey50") return def play_game(): knight = place_knight() sys.exit("Done") def place_knight(): coords = Square.randomlocation() coords.is_touched x = coords.x y = coords.y knight = Knight(x, y) return knight if __name__ == "__main__": main() I have tried using self.board_btn_object.change(bg="red") and .update and .after.... nothing works to change the color. If I change code and bind left click to change the button to red, it works fine; however, I want it to change when the KNIGHT is positioned there.
[ "You are just complicated everything. Create a cell class and keep information of x,y and iftouched in it. This way you can add new capabilities into cells easily.\nCreate a board and make an array inside it. Create cell instances and place into that array.\nYou should do inside this board whatever you do. Define rules, move capabilities etc.\nGUI simply should be an illusion of this imaginary board. When things get complicated you just update your user interface according to your board!\nimport tkinter as tk\nimport random\n\nGRID_SIZE = 8\n\n#this is cell which going to keep information\nclass cell:\n def __init__(self,x,y,IsTouched=False):\n self.x= x\n self.y = y\n self.touched = IsTouched\n\nclass board:\n def __init__(self):\n self.initializeBoard()\n self.knight = [0,0]\n self.placeKnightRandomly()\n \n def moveKnight(self,x,y):\n #if knight can move in the board, return True so that\n #we can show in the GUI.\n if(self.board[x][y].touched):\n return False\n else:\n self.knight = [x,y]\n print(f\"Knight new position:{x},{y}\")\n return True\n \n def initializeBoard(self):\n #This function creates board from zero.\n #if you want to reset board, simple run this function.\n self.board = []\n for row in range(GRID_SIZE):\n line = []\n for column in range(GRID_SIZE):\n line.append(cell(row,column))\n self.board.append(line)\n\n def placeKnightRandomly(self):\n #place knight randomly\n x = random.randint(0,7)\n y = random.randint(0,7)\n self.knight = [x,y]\n self.board[x][y].touched = True\n print(\"knight:\",x,y)\n\n\nclass GUI:\n def __init__(self):\n self.mw = tk.Tk()\n self.mw.geometry(\"800x800\")\n\n self.board = board()\n\n self.buttonFrame = tk.Frame(self.mw)\n self.buttonFrame.place(relx=0,rely=0,relheight=1,relwidth=1)\n\n self.updateButtons()\n self.mw.mainloop()\n\n def updateButtons(self):\n for child in self.buttonFrame.winfo_children():\n child.destroy()\n self.buttonsList = []\n for lines in self.board.board:\n \n buttons = []\n\n for column in lines:\n x = column.x\n y = column.y\n btn = tk.Button(self.buttonFrame,text=f\"{x},{y}\",width=12,height=5)\n if(self.board.board[x][y].touched):\n btn.configure(bg=\"red\")\n\n #lets connect a buttons to functions. Send x and y as parameters.\n btn.configure(command=lambda x = x,y=y:self.touchInto(x,y) )\n btn.grid(row=x,column=y)\n\n buttons.append(btn)\n\n self.buttonsList.append(buttons)\n \n def touchInto(self,x,y):\n #when you clicked button, this function going to run\n if(self.board.moveKnight(x,y)):\n self.board.board[x][y].touched = True\n self.buttonsList[x][y].configure(bg=\"red\")\n else:\n print(\"cannot move!\")\n\n\n\nif __name__ == \"__main__\":\n GUI()\n\n\n" ]
[ 0 ]
[]
[]
[ "python", "tkinter", "tkinter_button" ]
stackoverflow_0074602264_python_tkinter_tkinter_button.txt
Q: How can I specify which try block to continue from? I have the following code while True: try: height_input=input(f"Please enter your height in meters : ") height=float(height_input) # weight_input=input(f"Please enter your weight in kilograms") except ValueError: print("Invalid Input. Please Try Again") continue try: weight_input=input(f"Please enter your weight in kilograms") weight=float(weight_input) except ValueError: print("Invalid Input. Please Try Again") continue try: bmi=weight/(height*height) print(round(bmi,2)) finally: break If I encounter an error with an invalid format for the line related to the user entering weight, it asks me for the height again even though that might have been entered correctly and was part of the first try block How do I specify that if an error is encountered in the second try block, to ask the user to input the weight again (which was part of the second try block) and not return to the user input question from the first try block? (the height) For example the current result: Question: Please Enter height User Input: 2 Question: Please Enter Weight: User Input: ghsdek Error Message: "Invalid Input. Please Try Again" Question: Please Enter height Expected result: Question: Please Enter height User Input: 2 Question: Please Enter Weight: User Input: ghsdek Error Message: "Invalid Input. Please Try Again" Question: Please Enter Weight A: Firstly, your exception handling syntax is wrong. You want the following. try: height_input = input(f"Please enter your height in meters : ") height = float(height_input) except ValueError: print("Invalid Input. Please Try Again") continue Secondly, continue is just going to the next loop iteration. You can't tell it where in the loop to begin. What you need are three separate loops to read each piece of information. Thirdly, at the end of each try block you'll want to break or your loops will continue infinitely. A: You can split the code to multiply while codes, also you need to check for height being different to 0 like below: while True: try: height_input = input("Please enter your height in meters : ") height = float(height_input) if height != 0: break except Exception: print("Invalid Input. Please Try Again") while True: try: weight_input = input("Please enter your weight in kilograms") weight = float(weight_input) break except Exception: print("Invalid Input. Please Try Again") bmi = weight/(height*height) print(f"Your bmi is: {round(bmi,2)}") A: I'd extract the input logic into a function in accordance with the DRY principle. That would also make the code more readable: def input_value(msg, type): while True: try: return type(input(msg)) except ValueError: print("Invalid Input. Please Try Again") weight = input_value("Please enter your weight in kilograms: ", float) height = input_value("Please enter your height in meters: ", float)
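Pulling the answers together, a minimal sketch of the whole flow: one retry loop per value, so a bad weight never re-asks for the height. The prompt strings and the zero-height guard are illustrative choices:
def ask_float(prompt, minimum=None):
    # keep re-asking until this one value parses (and, optionally, exceeds minimum)
    while True:
        try:
            value = float(input(prompt))
            if minimum is None or value > minimum:
                return value
        except ValueError:
            pass
        print("Invalid Input. Please Try Again")

height = ask_float("Please enter your height in meters: ", minimum=0)
weight = ask_float("Please enter your weight in kilograms: ", minimum=0)
print(f"Your BMI is: {round(weight / (height * height), 2)}")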
How can I specify which try block to continue from?
I have the following code while True: try: height_input=input(f"Please enter your height in meters : ") height=float(height_input) # weight_input=input(f"Please enter your weight in kilograms") except ValueError: print("Invalid Input. Please Try Again") continue try: weight_input=input(f"Please enter your weight in kilograms") weight=float(weight_input) except ValueError: print("Invalid Input. Please Try Again") continue try: bmi=weight/(height*height) print(round(bmi,2)) finally: break If I encounter an error with an invalid format for the line related to the user entering weight, it asks me for the height again even though that might have been entered correctly and was part of the first try block How do I specify that if an error is encountered in the second try block, to ask the user to input the weight again (which was part of the second try block) and not return to the user input question from the first try block? (the height) For example the current result: Question: Please Enter height User Input: 2 Question: Please Enter Weight: User Input: ghsdek Error Message: "Invalid Input. Please Try Again" Question: Please Enter height Expected result: Question: Please Enter height User Input: 2 Question: Please Enter Weight: User Input: ghsdek Error Message: "Invalid Input. Please Try Again" Question: Please Enter Weight
[ "Firstly, your exception handling syntax is wrong. You want the following.\ntry:\n height_input = input(f\"Please enter your height in meters : \")\n height = float(height_input)\nexcept ValueError:\n print(\"Invalid Input. Please Try Again\")\n continue\n\nSecondly, continue is just going to the next loop iteration. You can't tell it where in the loop to begin. What you need are three separate loops to read each piece of information.\nThirdly, at the end of each try block you'll want to break or your loops will continue infinitely.\n", "You can split the code to multiply while codes, also you need to check for height being different to 0 like below:\nwhile True:\n try:\n height_input = input(\"Please enter your height in meters : \")\n height = float(height_input)\n if height != 0:\n break\n\n except Exception:\n print(\"Invalid Input. Please Try Again\")\n\nwhile True:\n try:\n weight_input = input(\"Please enter your weight in kilograms\")\n weight = float(weight_input)\n\n break\n except Exception:\n print(\"Invalid Input. Please Try Again\")\n\nbmi = weight/(height*height)\nprint(f\"Your bmi is: {round(bmi,2)}\")\n\n\n", "I'd extract the input logic into a function in accordance with the DRY principle. That would also make the code more readable:\ndef input_value(msg, type):\n while True:\n try:\n return type(input(msg))\n except ValueError:\n print(\"Invalid Input. Please Try Again\")\n\nweight = input_value(\"Please enter your weight in kilograms: \", float)\nheight = input_value(\"Please enter your height in meters: \", float)\n\n" ]
[ 2, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0074611138_python.txt
Q: Get data from an API with Flask I have a simple flask app, which is intended to make a request to an api and return data. Unfortunately, I can't share the details, so you can reproduce the error. The app looks like that: from flask import Flask import requests import json from requests.auth import HTTPBasicAuth app = Flask(__name__) @app.route("/") def getData(): # define variables username = "<username>" password = "<password>" headers = {"Authorization": "Basic"} reqHeaders = {"Content-Type": "application/json"} payload = json.dumps( { "jobType": "<jobType>", "jobName": "<jobName>", "startPeriod": "<startPeriod>", "endPeriod": "<endPeriod>", "importMode": "<importMode>", "exportMode": "<exportMode>" } ) jobId = 7044 req = requests.get("<url>", auth=HTTPBasicAuth(username, password), headers=reqHeaders, data=payload) return req.content if __name__ == "__main__": app.run() However, when executed this returns error 500: Internal Server Error The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application. The same script but outside a flask app (just the function as it is defined here) runs with no problems at all. What am I doing wrong? A: flask app return format json. If you return req.content, it will break function. You must parse response request to json before return it. from flask import jsonify return jsonify(req.json()) It's better with safe load response when the request fail req = requests.get() if req.status_code !=200: return {} else: return jsonify(req.json())
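A reduced sketch of the route with that suggestion applied. The URL, credentials and payload field are placeholders just as in the question, and passing the payload via json= (instead of data=json.dumps(...)) is a simplification rather than something the original API necessarily requires:
from flask import Flask, jsonify
import requests
from requests.auth import HTTPBasicAuth

app = Flask(__name__)

@app.route("/")
def get_data():
    req = requests.get(
        "https://example.com/api",                    # placeholder URL
        auth=HTTPBasicAuth("<username>", "<password>"),
        headers={"Content-Type": "application/json"},
        json={"jobType": "<jobType>"},                # requests serialises this for us
    )
    if req.status_code != 200:
        # surface the upstream status instead of a bare 500
        return jsonify({}), req.status_code
    return jsonify(req.json())

if __name__ == "__main__":
    app.run()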
Get data from an API with Flask
I have a simple flask app, which is intended to make a request to an api and return data. Unfortunately, I can't share the details, so you can reproduce the error. The app looks like that: from flask import Flask import requests import json from requests.auth import HTTPBasicAuth app = Flask(__name__) @app.route("/") def getData(): # define variables username = "<username>" password = "<password>" headers = {"Authorization": "Basic"} reqHeaders = {"Content-Type": "application/json"} payload = json.dumps( { "jobType": "<jobType>", "jobName": "<jobName>", "startPeriod": "<startPeriod>", "endPeriod": "<endPeriod>", "importMode": "<importMode>", "exportMode": "<exportMode>" } ) jobId = 7044 req = requests.get("<url>", auth=HTTPBasicAuth(username, password), headers=reqHeaders, data=payload) return req.content if __name__ == "__main__": app.run() However, when executed this returns error 500: Internal Server Error The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application. The same script but outside a flask app (just the function as it is defined here) runs with no problems at all. What am I doing wrong?
[ "flask app return format json. If you return req.content, it will break function. You must parse response request to json before return it.\nfrom flask import jsonify \nreturn jsonify(req.json())\n\nIt's better with safe load response when the request fail\nreq = requests.get()\nif req.status_code !=200:\n return {}\nelse:\n return jsonify(req.json())\n\n" ]
[ 0 ]
[]
[]
[ "flask", "python", "python_requests" ]
stackoverflow_0074610450_flask_python_python_requests.txt
Q: Python list subtraction operation I want something like this: >>> x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0] >>> y = [1, 3, 5, 7, 9] >>> y - x # should return [2,4,6,8,0] A: Use a list comprehension: [item for item in x if item not in y] If you want to use the - infix syntax, you can just do: class MyList(list): def __init__(self, *args): super(MyList, self).__init__(args) def __sub__(self, other): return self.__class__(*[item for item in self if item not in other]) you can then use it like: x = MyList(1, 2, 3, 4) y = MyList(2, 5, 2) z = x - y But if you don't absolutely need list properties (for example, ordering), just use sets as the other answers recommend. A: Use set difference >>> z = list(set(x) - set(y)) >>> z [0, 8, 2, 4, 6] Or you might just have x and y be sets so you don't have to do any conversions. A: if duplicate and ordering items are problem : [i for i in a if not i in b or b.remove(i)] a = [1,2,3,3,3,3,4] b = [1,3] result: [2, 3, 3, 3, 4] A: That is a "set subtraction" operation. Use the set data structure for that. In Python 2.7: x = {1,2,3,4,5,6,7,8,9,0} y = {1,3,5,7,9} print x - y Output: >>> print x - y set([0, 8, 2, 4, 6]) A: For many use cases, the answer you want is: ys = set(y) [item for item in x if item not in ys] This is a hybrid between aaronasterling's answer and quantumSoup's answer. aaronasterling's version does len(y) item comparisons for each element in x, so it takes quadratic time. quantumSoup's version uses sets, so it does a single constant-time set lookup for each element in x—but, because it converts both x and y into sets, it loses the order of your elements. By converting only y into a set, and iterating x in order, you get the best of both worlds—linear time, and order preservation.* However, this still has a problem from quantumSoup's version: It requires your elements to be hashable. That's pretty much built into the nature of sets.** If you're trying to, e.g., subtract a list of dicts from another list of dicts, but the list to subtract is large, what do you do? If you can decorate your values in some way that they're hashable, that solves the problem. For example, with a flat dictionary whose values are themselves hashable: ys = {tuple(item.items()) for item in y} [item for item in x if tuple(item.items()) not in ys] If your types are a bit more complicated (e.g., often you're dealing with JSON-compatible values, which are hashable, or lists or dicts whose values are recursively the same type), you can still use this solution. But some types just can't be converted into anything hashable. If your items aren't, and can't be made, hashable, but they are comparable, you can at least get log-linear time (O(N*log M), which is a lot better than the O(N*M) time of the list solution, but not as good as the O(N+M) time of the set solution) by sorting and using bisect: ys = sorted(y) def bisect_contains(seq, item): index = bisect.bisect(seq, item) return index < len(seq) and seq[index] == item [item for item in x if bisect_contains(ys, item)] If your items are neither hashable nor comparable, then you're stuck with the quadratic solution. * Note that you could also do this by using a pair of OrderedSet objects, for which you can find recipes and third-party modules. But I think this is simpler. ** The reason set lookups are constant time is that all it has to do is hash the value and see if there's an entry for that hash. If it can't hash the value, this won't work. 
A: If the lists allow duplicate elements, you can use Counter from collections: from collections import Counter result = list((Counter(x)-Counter(y)).elements()) If you need to preserve the order of elements from x: result = [ v for c in [Counter(y)] for v in x if not c[v] or c.subtract([v]) ] A: Looking up values in sets are faster than looking them up in lists: [item for item in x if item not in set(y)] I believe this will scale slightly better than: [item for item in x if item not in y] Both preserve the order of the lists. A: The other solutions have one of a few problems: They don't preserve order, or They don't remove a precise count of elements, e.g. for x = [1, 2, 2, 2] and y = [2, 2] they convert y to a set, and either remove all matching elements (leaving [1] only) or remove one of each unique element (leaving [1, 2, 2]), when the proper behavior would be to remove 2 twice, leaving [1, 2], or They do O(m * n) work, where an optimal solution can do O(m + n) work Alain was on the right track with Counter to solve #2 and #3, but that solution will lose ordering. The solution that preserves order (removing the first n copies of each value for n repetitions in the list of values to remove) is: from collections import Counter x = [1,2,3,4,3,2,1] y = [1,2,2] remaining = Counter(y) out = [] for val in x: if remaining[val]: remaining[val] -= 1 else: out.append(val) # out is now [3, 4, 3, 1], having removed the first 1 and both 2s. Try it online! To make it remove the last copies of each element, just change the for loop to for val in reversed(x): and add out.reverse() immediately after exiting the for loop. Constructing the Counter is O(n) in terms of y's length, iterating x is O(n) in terms of x's length, and Counter membership testing and mutation are O(1), while list.append is amortized O(1) (a given append can be O(n), but for many appends, the overall big-O averages O(1) since fewer and fewer of them require a reallocation), so the overall work done is O(m + n). You can also test for to determine if there were any elements in y that were not removed from x by testing: remaining = +remaining # Removes all keys with zero counts from Counter if remaining: # remaining contained elements with non-zero counts A: I think the easiest way to achieve this is by using set(). >>> x = [1,2,3,4,5,6,7,8,9,0] >>> y = [1,3,5,7,9] >>> list(set(x)- set(y)) [0, 2, 4, 6, 8] A: You could also try this if the values are unique anyway: list(set(x) - set(y)) A: Try this. def subtract_lists(a, b): """ Subtracts two lists. Throws ValueError if b contains items not in a """ # Terminate if b is empty, otherwise remove b[0] from a and recurse return a if len(b) == 0 else [a[:i] + subtract_lists(a[i+1:], b[1:]) for i in [a.index(b[0])]][0] >>> x = [1,2,3,4,5,6,7,8,9,0] >>> y = [1,3,5,7,9] >>> subtract_lists(x,y) [2, 4, 6, 8, 0] >>> x = [1,2,3,4,5,6,7,8,9,0,9] >>> subtract_lists(x,y) [2, 4, 6, 8, 0, 9] #9 is only deleted once >>> A: We can use set methods as well to find the difference between two list x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0] y = [1, 3, 5, 7, 9] list(set(x).difference(y)) [0, 2, 4, 6, 8] A: The answer provided by @aaronasterling looks good, however, it is not compatible with the default interface of list: x = MyList(1, 2, 3, 4) vs x = MyList([1, 2, 3, 4]). 
Thus, the below code can be used as a more python-list friendly: class MyList(list): def __init__(self, *args): super(MyList, self).__init__(*args) def __sub__(self, other): return self.__class__([item for item in self if item not in other]) Example: x = MyList([1, 2, 3, 4]) y = MyList([2, 5, 2]) z = x - y A: This example subtracts two lists: # List of pairs of points list = [] list.append([(602, 336), (624, 365)]) list.append([(635, 336), (654, 365)]) list.append([(642, 342), (648, 358)]) list.append([(644, 344), (646, 356)]) list.append([(653, 337), (671, 365)]) list.append([(728, 13), (739, 32)]) list.append([(756, 59), (767, 79)]) itens_to_remove = [] itens_to_remove.append([(642, 342), (648, 358)]) itens_to_remove.append([(644, 344), (646, 356)]) print("Initial List Size: ", len(list)) for a in itens_to_remove: for b in list: if a == b : list.remove(b) print("Final List Size: ", len(list)) A: from collections import Counter y = Counter(y) x = Counter(x) print(list(x-y)) A: list1 = ['a', 'c', 'a', 'b', 'k'] list2 = ['a', 'a', 'a', 'a', 'b', 'c', 'c', 'd', 'e', 'f'] for e in list1: try: list2.remove(e) except ValueError: print(f'{e} not in list') list2 # ['a', 'a', 'c', 'd', 'e', 'f'] This will change list2. if you want to protect list2 just copy it and use the copy of list2 in this code. A: def listsubtraction(parent,child): answer=[] for element in parent: if element not in child: answer.append(element) return answer I think this should work. I am a beginner so pardon me for any mistakes
Python list subtraction operation
I want something like this: >>> x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0] >>> y = [1, 3, 5, 7, 9] >>> y - x # should return [2,4,6,8,0]
[ "Use a list comprehension:\n[item for item in x if item not in y]\n\nIf you want to use the - infix syntax, you can just do:\nclass MyList(list):\n def __init__(self, *args):\n super(MyList, self).__init__(args)\n\n def __sub__(self, other):\n return self.__class__(*[item for item in self if item not in other])\n\nyou can then use it like: \nx = MyList(1, 2, 3, 4)\ny = MyList(2, 5, 2)\nz = x - y \n\nBut if you don't absolutely need list properties (for example, ordering), just use sets as the other answers recommend.\n", "Use set difference\n>>> z = list(set(x) - set(y))\n>>> z\n[0, 8, 2, 4, 6]\n\nOr you might just have x and y be sets so you don't have to do any conversions.\n", "if duplicate and ordering items are problem :\n[i for i in a if not i in b or b.remove(i)]\na = [1,2,3,3,3,3,4]\nb = [1,3]\nresult: [2, 3, 3, 3, 4]\n\n", "That is a \"set subtraction\" operation. Use the set data structure for that.\nIn Python 2.7:\nx = {1,2,3,4,5,6,7,8,9,0}\ny = {1,3,5,7,9}\nprint x - y\n\nOutput:\n>>> print x - y\nset([0, 8, 2, 4, 6])\n\n", "For many use cases, the answer you want is:\nys = set(y)\n[item for item in x if item not in ys]\n\nThis is a hybrid between aaronasterling's answer and quantumSoup's answer.\naaronasterling's version does len(y) item comparisons for each element in x, so it takes quadratic time. quantumSoup's version uses sets, so it does a single constant-time set lookup for each element in x—but, because it converts both x and y into sets, it loses the order of your elements.\nBy converting only y into a set, and iterating x in order, you get the best of both worlds—linear time, and order preservation.*\n\nHowever, this still has a problem from quantumSoup's version: It requires your elements to be hashable. That's pretty much built into the nature of sets.** If you're trying to, e.g., subtract a list of dicts from another list of dicts, but the list to subtract is large, what do you do?\nIf you can decorate your values in some way that they're hashable, that solves the problem. For example, with a flat dictionary whose values are themselves hashable:\nys = {tuple(item.items()) for item in y}\n[item for item in x if tuple(item.items()) not in ys]\n\nIf your types are a bit more complicated (e.g., often you're dealing with JSON-compatible values, which are hashable, or lists or dicts whose values are recursively the same type), you can still use this solution. But some types just can't be converted into anything hashable.\n\nIf your items aren't, and can't be made, hashable, but they are comparable, you can at least get log-linear time (O(N*log M), which is a lot better than the O(N*M) time of the list solution, but not as good as the O(N+M) time of the set solution) by sorting and using bisect:\nys = sorted(y)\ndef bisect_contains(seq, item):\n index = bisect.bisect(seq, item)\n return index < len(seq) and seq[index] == item\n[item for item in x if bisect_contains(ys, item)]\n\n\nIf your items are neither hashable nor comparable, then you're stuck with the quadratic solution.\n\n* Note that you could also do this by using a pair of OrderedSet objects, for which you can find recipes and third-party modules. But I think this is simpler.\n** The reason set lookups are constant time is that all it has to do is hash the value and see if there's an entry for that hash. 
If it can't hash the value, this won't work.\n", "If the lists allow duplicate elements, you can use Counter from collections:\nfrom collections import Counter\nresult = list((Counter(x)-Counter(y)).elements())\n\nIf you need to preserve the order of elements from x:\nresult = [ v for c in [Counter(y)] for v in x if not c[v] or c.subtract([v]) ]\n\n", "Looking up values in sets are faster than looking them up in lists:\n[item for item in x if item not in set(y)]\n\nI believe this will scale slightly better than:\n[item for item in x if item not in y]\n\nBoth preserve the order of the lists.\n", "The other solutions have one of a few problems:\n\nThey don't preserve order, or\nThey don't remove a precise count of elements, e.g. for x = [1, 2, 2, 2] and y = [2, 2] they convert y to a set, and either remove all matching elements (leaving [1] only) or remove one of each unique element (leaving [1, 2, 2]), when the proper behavior would be to remove 2 twice, leaving [1, 2], or\nThey do O(m * n) work, where an optimal solution can do O(m + n) work\n\nAlain was on the right track with Counter to solve #2 and #3, but that solution will lose ordering. The solution that preserves order (removing the first n copies of each value for n repetitions in the list of values to remove) is:\nfrom collections import Counter\n\nx = [1,2,3,4,3,2,1] \ny = [1,2,2] \nremaining = Counter(y)\n\nout = []\nfor val in x:\n if remaining[val]:\n remaining[val] -= 1\n else:\n out.append(val)\n# out is now [3, 4, 3, 1], having removed the first 1 and both 2s.\n\nTry it online!\nTo make it remove the last copies of each element, just change the for loop to for val in reversed(x): and add out.reverse() immediately after exiting the for loop.\nConstructing the Counter is O(n) in terms of y's length, iterating x is O(n) in terms of x's length, and Counter membership testing and mutation are O(1), while list.append is amortized O(1) (a given append can be O(n), but for many appends, the overall big-O averages O(1) since fewer and fewer of them require a reallocation), so the overall work done is O(m + n).\nYou can also test for to determine if there were any elements in y that were not removed from x by testing:\nremaining = +remaining # Removes all keys with zero counts from Counter\nif remaining:\n # remaining contained elements with non-zero counts\n\n", "I think the easiest way to achieve this is by using set().\n>>> x = [1,2,3,4,5,6,7,8,9,0] \n>>> y = [1,3,5,7,9] \n>>> list(set(x)- set(y))\n[0, 2, 4, 6, 8]\n\n", "You could also try this if the values are unique anyway:\nlist(set(x) - set(y))\n\n", "Try this.\ndef subtract_lists(a, b):\n \"\"\" Subtracts two lists. Throws ValueError if b contains items not in a \"\"\"\n # Terminate if b is empty, otherwise remove b[0] from a and recurse\n return a if len(b) == 0 else [a[:i] + subtract_lists(a[i+1:], b[1:]) \n for i in [a.index(b[0])]][0]\n\n>>> x = [1,2,3,4,5,6,7,8,9,0]\n>>> y = [1,3,5,7,9]\n>>> subtract_lists(x,y)\n[2, 4, 6, 8, 0]\n>>> x = [1,2,3,4,5,6,7,8,9,0,9]\n>>> subtract_lists(x,y)\n[2, 4, 6, 8, 0, 9] #9 is only deleted once\n>>>\n\n", "We can use set methods as well to find the difference between two list\nx = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]\ny = [1, 3, 5, 7, 9]\nlist(set(x).difference(y))\n[0, 2, 4, 6, 8]\n\n", "The answer provided by @aaronasterling looks good, however, it is not compatible with the default interface of list: x = MyList(1, 2, 3, 4) vs x = MyList([1, 2, 3, 4]). 
Thus, the below code can be used as a more python-list friendly:\nclass MyList(list):\n def __init__(self, *args):\n super(MyList, self).__init__(*args)\n\n def __sub__(self, other):\n return self.__class__([item for item in self if item not in other])\n\nExample: \nx = MyList([1, 2, 3, 4])\ny = MyList([2, 5, 2])\nz = x - y\n\n", "This example subtracts two lists:\n# List of pairs of points\nlist = []\nlist.append([(602, 336), (624, 365)])\nlist.append([(635, 336), (654, 365)])\nlist.append([(642, 342), (648, 358)])\nlist.append([(644, 344), (646, 356)])\nlist.append([(653, 337), (671, 365)])\nlist.append([(728, 13), (739, 32)])\nlist.append([(756, 59), (767, 79)])\n\nitens_to_remove = []\nitens_to_remove.append([(642, 342), (648, 358)])\nitens_to_remove.append([(644, 344), (646, 356)])\n\nprint(\"Initial List Size: \", len(list))\n\nfor a in itens_to_remove:\n for b in list:\n if a == b :\n list.remove(b)\n\nprint(\"Final List Size: \", len(list))\n\n", "from collections import Counter\n\ny = Counter(y)\nx = Counter(x)\n\nprint(list(x-y))\n\n", "list1 = ['a', 'c', 'a', 'b', 'k'] \nlist2 = ['a', 'a', 'a', 'a', 'b', 'c', 'c', 'd', 'e', 'f'] \nfor e in list1: \n try: \n list2.remove(e) \n except ValueError: \n print(f'{e} not in list') \nlist2 \n# ['a', 'a', 'c', 'd', 'e', 'f']\n\nThis will change list2. if you want to protect list2 just copy it and use the copy of list2 in this code.\n", "def listsubtraction(parent,child):\n answer=[]\n for element in parent:\n if element not in child:\n answer.append(element)\n return answer\n\nI think this should work. I am a beginner so pardon me for any mistakes\n" ]
[ 456, 362, 45, 42, 25, 14, 10, 10, 9, 6, 2, 2, 1, 0, 0, 0, 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0003428536_list_python.txt
Q: How to paginate the filtered results in django views.py def do_paginator(get_records_by_date,request): page = request.GET.get('page', 1) paginator = Paginator(get_records_by_date, 5) try: users = paginator.page(page) except PageNotAnInteger: users = paginator.page(1) except EmptyPage: users = paginator.page(paginator.num_pages) return users def index(request): if request.method == "POST": from_date = request.POST.get("from_date") f_date = datetime.datetime.strptime(from_date,'%Y-%m-%d') print(f_date) to_date = request.POST.get("to_date") t_date = datetime.datetime.strptime(to_date, '%Y-%m-%d') print(t_date) new_records_check_box_status = request.POST.get("new_records", None) print(new_records_check_box_status) error_records_check_box_status = request.POST.get("error_records", None) print(error_records_check_box_status) drop_down_status = request.POST.get("field",None) print(drop_down_status) global get_records_by_date if new_records_check_box_status is None and error_records_check_box_status is None: get_records_by_date = Scrapper.objects.filter(start_time__date__range=(f_date, t_date)) get_records_by_date = check_drop_down_status(get_records_by_date,drop_down_status) get_records_by_date = do_paginator(get_records_by_date,request) elif new_records_check_box_status and error_records_check_box_status is None: get_records_by_date = Scrapper.objects.filter(start_time__date__range=(f_date, t_date)).filter(new_records__gt=0) get_records_by_date = check_drop_down_status(get_records_by_date, drop_down_status) get_records_by_date = do_paginator(get_records_by_date, request) elif error_records_check_box_status and new_records_check_box_status is None: get_records_by_date = Scrapper.objects.filter(start_time__date__range=(f_date, t_date)).filter(error_records__gt=0) get_records_by_date = check_drop_down_status(get_records_by_date, drop_down_status) get_records_by_date = do_paginator(get_records_by_date, request) else: get_records_by_date = Scrapper.objects.filter(start_time__date__range=(f_date, t_date)).filter(Q(new_records__gt=0)|Q(error_records__gt=0)) get_records_by_date = check_drop_down_status(get_records_by_date, drop_down_status) get_records_by_date = do_paginator(get_records_by_date,request) # print(get_records_by_date) else: roles = Scrapper.objects.all() page = request.GET.get('page', 1) paginator = Paginator(roles, 5) try: users = paginator.page(page) except PageNotAnInteger: users = paginator.page(1) except EmptyPage: users = paginator.page(paginator.num_pages) return render(request, "home.html",{"users": users}) return render(request, "home.html", {"users": get_records_by_date}) home.html <!DOCTYPE html> <html> <body> <style> h2 {text-align: center;} </style> <h1>Facilgo Completed Jobs</h1> <div class="container"> <div class="row"> <div class="col-md-12"> <h2>Summary Details</h2> <table id="bootstrapdatatable" class="table table-striped table-bordered" width="100%"> <thead> <tr> <th>scrapper_id</th> <th>scrapper_jobs_log_id</th> <th>external_job_source_id</th> <th>start_time</th> <th>end_time</th> <th>scrapper_status</th> <th>processed_records</th> <th>new_records</th> <th>skipped_records</th> <th>error_records</th> </tr> </thead> <tbody> {% for stud in users %} {% csrf_token %} <tr> <td>{{stud.scrapper_id}}</td> <td>{{stud.scrapper_jobs_log_id}}</td> <td>{{stud.external_job_source_id}}</td> <td>{{stud.start_time}}</td> {% if stud.end_time == None %} <td style="color:red">No result</td> {% else %} <td>{{stud.end_time}}</td> {% endif %} {% if stud.scrapper_status == "1" %} 
<td>{{stud.scrapper_status}} --> Started</td> {% else %} <td>{{stud.scrapper_status}} --> Completed</td> {% endif %} <td>{{stud.processed_records}}</td> <td>{{stud.new_records}}</td> <td>{{stud.skipped_records}}</td> <td>{{stud.error_records}}</td> </tr> {% endfor %} </tbody> </table> {% if users.has_other_pages %} <ul class="pagination"> {% if users.has_previous %} <li><a href="?page={{ users.previous_page_number }}">«</a></li> {% else %} <li class="disabled"><span>«</span></li> {% endif %} {% if user.number|add:'-4' > 1 %} <li><a href="?page={{ page_obj.number|add:'-5' }}">&hellip;</a></li> {% endif %} {% for i in users.paginator.page_range %} {% if users.number == i %} <li class="active"><span>{{ i }} <span class="sr-only">(current)</span></span></li> {% elif i > users.number|add:'-5' and i < users.number|add:'5' %} <li><a href="?page={{ i }}">{{ i }}</a></li> {% endif %} {% endfor %} {% if users.has_next %} <li><a href="?page={{ users.next_page_number }}">»</a></li> {% else %} <li class="disabled"><span>»</span></li> {% endif %} </ul> {% endif %} </div> </div> </div> </body> </html> When I filter the datas the first page is getting the correct details. But when I click the next page its going to different datas. For the filtered element the datas should be obtained from the filtered query set. How to paginate according to the filtered datas. The second page is mismatched and returning to the original total datas. Is there any solution to paginate the datas which has been filtered. A: You don't need the if else block, you can simply do as it is documented here. page = request.GET.get('page') paginator = Paginator(get_records_by_date, 5) users = paginator.get_page(page) return users The django method already deals with input verification in the page GET, so you don't need to worry with the input you get.
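The page links also need to carry the filter: a ?page=2 link is a fresh GET request, so a filter that only arrived in the earlier POST is lost by the time page two renders. A hedged sketch of one way around that is to read the dates from request.GET as well and rebuild the queryset on every request (Scrapper and the field names come from the question; the template's page links would then append &from_date=...&to_date=... to each href, for example by switching the form to method="get"):
import datetime
from django.core.paginator import Paginator
from django.shortcuts import render

from .models import Scrapper   # model named in the question

def index(request):
    qs = Scrapper.objects.all()
    from_date = request.GET.get("from_date")
    to_date = request.GET.get("to_date")
    if from_date and to_date:
        f_date = datetime.datetime.strptime(from_date, "%Y-%m-%d")
        t_date = datetime.datetime.strptime(to_date, "%Y-%m-%d")
        # the same filter is rebuilt on every request, including ?page=N requests
        qs = qs.filter(start_time__date__range=(f_date, t_date))
    paginator = Paginator(qs, 5)
    users = paginator.get_page(request.GET.get("page"))
    return render(request, "home.html", {"users": users})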
How to paginate the filtered results in django
views.py def do_paginator(get_records_by_date,request): page = request.GET.get('page', 1) paginator = Paginator(get_records_by_date, 5) try: users = paginator.page(page) except PageNotAnInteger: users = paginator.page(1) except EmptyPage: users = paginator.page(paginator.num_pages) return users def index(request): if request.method == "POST": from_date = request.POST.get("from_date") f_date = datetime.datetime.strptime(from_date,'%Y-%m-%d') print(f_date) to_date = request.POST.get("to_date") t_date = datetime.datetime.strptime(to_date, '%Y-%m-%d') print(t_date) new_records_check_box_status = request.POST.get("new_records", None) print(new_records_check_box_status) error_records_check_box_status = request.POST.get("error_records", None) print(error_records_check_box_status) drop_down_status = request.POST.get("field",None) print(drop_down_status) global get_records_by_date if new_records_check_box_status is None and error_records_check_box_status is None: get_records_by_date = Scrapper.objects.filter(start_time__date__range=(f_date, t_date)) get_records_by_date = check_drop_down_status(get_records_by_date,drop_down_status) get_records_by_date = do_paginator(get_records_by_date,request) elif new_records_check_box_status and error_records_check_box_status is None: get_records_by_date = Scrapper.objects.filter(start_time__date__range=(f_date, t_date)).filter(new_records__gt=0) get_records_by_date = check_drop_down_status(get_records_by_date, drop_down_status) get_records_by_date = do_paginator(get_records_by_date, request) elif error_records_check_box_status and new_records_check_box_status is None: get_records_by_date = Scrapper.objects.filter(start_time__date__range=(f_date, t_date)).filter(error_records__gt=0) get_records_by_date = check_drop_down_status(get_records_by_date, drop_down_status) get_records_by_date = do_paginator(get_records_by_date, request) else: get_records_by_date = Scrapper.objects.filter(start_time__date__range=(f_date, t_date)).filter(Q(new_records__gt=0)|Q(error_records__gt=0)) get_records_by_date = check_drop_down_status(get_records_by_date, drop_down_status) get_records_by_date = do_paginator(get_records_by_date,request) # print(get_records_by_date) else: roles = Scrapper.objects.all() page = request.GET.get('page', 1) paginator = Paginator(roles, 5) try: users = paginator.page(page) except PageNotAnInteger: users = paginator.page(1) except EmptyPage: users = paginator.page(paginator.num_pages) return render(request, "home.html",{"users": users}) return render(request, "home.html", {"users": get_records_by_date}) home.html <!DOCTYPE html> <html> <body> <style> h2 {text-align: center;} </style> <h1>Facilgo Completed Jobs</h1> <div class="container"> <div class="row"> <div class="col-md-12"> <h2>Summary Details</h2> <table id="bootstrapdatatable" class="table table-striped table-bordered" width="100%"> <thead> <tr> <th>scrapper_id</th> <th>scrapper_jobs_log_id</th> <th>external_job_source_id</th> <th>start_time</th> <th>end_time</th> <th>scrapper_status</th> <th>processed_records</th> <th>new_records</th> <th>skipped_records</th> <th>error_records</th> </tr> </thead> <tbody> {% for stud in users %} {% csrf_token %} <tr> <td>{{stud.scrapper_id}}</td> <td>{{stud.scrapper_jobs_log_id}}</td> <td>{{stud.external_job_source_id}}</td> <td>{{stud.start_time}}</td> {% if stud.end_time == None %} <td style="color:red">No result</td> {% else %} <td>{{stud.end_time}}</td> {% endif %} {% if stud.scrapper_status == "1" %} <td>{{stud.scrapper_status}} --> Started</td> {% else %} 
<td>{{stud.scrapper_status}} --> Completed</td> {% endif %} <td>{{stud.processed_records}}</td> <td>{{stud.new_records}}</td> <td>{{stud.skipped_records}}</td> <td>{{stud.error_records}}</td> </tr> {% endfor %} </tbody> </table> {% if users.has_other_pages %} <ul class="pagination"> {% if users.has_previous %} <li><a href="?page={{ users.previous_page_number }}">«</a></li> {% else %} <li class="disabled"><span>«</span></li> {% endif %} {% if user.number|add:'-4' > 1 %} <li><a href="?page={{ page_obj.number|add:'-5' }}">&hellip;</a></li> {% endif %} {% for i in users.paginator.page_range %} {% if users.number == i %} <li class="active"><span>{{ i }} <span class="sr-only">(current)</span></span></li> {% elif i > users.number|add:'-5' and i < users.number|add:'5' %} <li><a href="?page={{ i }}">{{ i }}</a></li> {% endif %} {% endfor %} {% if users.has_next %} <li><a href="?page={{ users.next_page_number }}">»</a></li> {% else %} <li class="disabled"><span>»</span></li> {% endif %} </ul> {% endif %} </div> </div> </div> </body> </html> When I filter the datas the first page is getting the correct details. But when I click the next page its going to different datas. For the filtered element the datas should be obtained from the filtered query set. How to paginate according to the filtered datas. The second page is mismatched and returning to the original total datas. Is there any solution to paginate the datas which has been filtered.
[ "You don't need the if else block, you can simply do as it is documented here.\n page = request.GET.get('page')\n paginator = Paginator(get_records_by_date, 5)\n users = paginator.get_page(page)\n return users\n\nThe django method already deals with input verification in the page GET, so you don't need to worry with the input you get.\n" ]
[ 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074611243_django_python.txt
Q: Convert into 1D list from multidimensional and multi type array I am doing my first project in NLP and and have encoded features of audio files (400 in length) using towhee. In the output, each row is encoding of each audio file. The output is as shown below: 0 [([[[-0.5464456 -0.27430105 -0.7668772 ... ... 1 [([[[ 2.4055429 1.6134734 0.87733674 ... ... 2 [([[[-1.36764 -1.158407 -2.8810601 ... -... 3 [([[[ 1.6112621 2.7935793 0.20885658 ... -... 4 [([[[-0.18209544 1.1110162 -0.61837935 ... ... Name: embeddings, Length: 400, dtype: object The dimension of each row is: (1, 1, 1, 99, 1024). The type of each item is: <class 'list'> <class 'tuple'> <class 'numpy.ndarray'> <class 'numpy.ndarray'> <class 'numpy.ndarray'> <class 'numpy.float32'> I need help in flattening these into one list. My intermediate goal is to make and X,Y table and use it in SVM (Support Vector Machine) for supervised classification task. Intended output would look like this: embeddings output [0.55, 0.31, 0.12, ...] Sad [0.17, 0.65, 0.23, ...] Happy [0.82, 0.76, 0.19, ...] Angry [0.52, 0.71, 0.25, ...] Neutral Any help or even suggestion for different approach to achieve this problem would be appreciated. A: Actually this was a simple problem to solve using numpy library. I had to convert all using numpy.array() funtion. Numpy takes care of all the different types and converts them into numpy array. m = numpy.array(row[3])
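To make that concrete, a small self-contained sketch with a fake row of the shape reported in the question. Whether to keep every value (reshape(-1)) or mean-pool the 99 frames down to one 1024-dimensional vector for the SVM is an assumption, not something the question settles:
import numpy as np

# fake one row shaped like the question reports: list -> tuple -> ndarray (1, 99, 1024)
row = [(np.random.rand(1, 99, 1024).astype(np.float32),)]

arr = np.array(row, dtype=np.float32)        # np.array unifies list/tuple/ndarray
print(arr.shape)                             # (1, 1, 1, 99, 1024)

flat = arr.reshape(-1)                       # every value, 99 * 1024 long
pooled = arr.reshape(99, 1024).mean(axis=0)  # or one 1024-d vector per audio file
print(flat.shape, pooled.shape)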
Convert a multidimensional, multi-type array into a 1D list
I am doing my first project in NLP and and have encoded features of audio files (400 in length) using towhee. In the output, each row is encoding of each audio file. The output is as shown below: 0 [([[[-0.5464456 -0.27430105 -0.7668772 ... ... 1 [([[[ 2.4055429 1.6134734 0.87733674 ... ... 2 [([[[-1.36764 -1.158407 -2.8810601 ... -... 3 [([[[ 1.6112621 2.7935793 0.20885658 ... -... 4 [([[[-0.18209544 1.1110162 -0.61837935 ... ... Name: embeddings, Length: 400, dtype: object The dimension of each row is: (1, 1, 1, 99, 1024). The type of each item is: <class 'list'> <class 'tuple'> <class 'numpy.ndarray'> <class 'numpy.ndarray'> <class 'numpy.ndarray'> <class 'numpy.float32'> I need help in flattening these into one list. My intermediate goal is to make and X,Y table and use it in SVM (Support Vector Machine) for supervised classification task. Intended output would look like this: embeddings output [0.55, 0.31, 0.12, ...] Sad [0.17, 0.65, 0.23, ...] Happy [0.82, 0.76, 0.19, ...] Angry [0.52, 0.71, 0.25, ...] Neutral Any help or even suggestion for different approach to achieve this problem would be appreciated.
[ "Actually this was a simple problem to solve using numpy library. I had to convert all using numpy.array() funtion. Numpy takes care of all the different types and converts them into numpy array.\nm = numpy.array(row[3])\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "list", "nlp", "numpy", "python" ]
stackoverflow_0074588807_arrays_list_nlp_numpy_python.txt
Q: Pandas: How to custom-sort on multiple columns? I have a pandas dataframe with data like: +-----------+-----------------+---------+ | JOB-NAME | Status | SLA | +-----------+-----------------+---------+ | job_1 | YET_TO_START | --- | | job_3 | COMPLETED | MET | | job_4 | RUNNING | MET | | job_2 | YET_TO_START | LATE | | job_6 | RUNNING | LATE | | job_5 | FAILED | LATE | | job_7 | YET_TO_START | --- | | job_8 | COMPLETED | NOT_MET | +-----------+-----------------+---------+ I need to sort this table based on the Status and SLA states, like for Status: FAILED will be top on the table, then YET_TO_START, then RUNNING, and finally COMPLETED. Similarly for SLA the order will be LATE, ---, NOT_MET, and MET. Like this: +-----------+-----------------+---------+ | JOB-NAME | Status | SLA | +-----------+-----------------+---------+ | job_5 | FAILED | LATE | | job_2 | YET_TO_START | LATE | | job_1 | YET_TO_START | --- | | job_7 | YET_TO_START | --- | | job_6 | RUNNING | LATE | | job_4 | RUNNING | MET | | job_8 | COMPLETED | NOT_MET | | job_3 | COMPLETED | MET | +-----------+-----------------+---------+ I am able to do this custom sorting priority-based only on single column Status, but unable to do for multiple columns. sort_order_dict = {"FAILED":0, "YET_TO_START":1, "RUNNING":2, "COMPLETED":3} joined_df = joined_df.sort_values(by=['status'], key=lambda x: x.map(sort_order_dict)) A solution is given here, but its for single column, not multiple column. A: You can extend dictionary by values from another columns, only necessary different keys in both columns for correct working like mentioned mozway in comments: sort_order_dict = {"FAILED":0, "YET_TO_START":1, "RUNNING":2, "COMPLETED":3, "LATE":4, "---":5, "NOT_MET":6, "MET":7} df = df.sort_values(by=['Status','SLA'], key=lambda x: x.map(sort_order_dict)) print (df) JOB-NAME Status SLA 5 job_5 FAILED LATE 3 job_2 YET_TO_START LATE 0 job_1 YET_TO_START --- 6 job_7 YET_TO_START --- 4 job_6 RUNNING LATE 2 job_4 RUNNING MET 7 job_8 COMPLETED NOT_MET 1 job_3 COMPLETED MET Or use ordered Categorical: df['Status'] = pd.Categorical(df['Status'], ordered=True, categories=['FAILED', 'YET_TO_START', 'RUNNING', 'COMPLETED']) df['SLA'] = pd.Categorical(df['SLA'], ordered=True, categories= ['LATE', '---', 'NOT_MET', 'MET']) df = df.sort_values(by=['Status','SLA']) print (df) JOB-NAME Status SLA 5 job_5 FAILED LATE 3 job_2 YET_TO_START LATE 0 job_1 YET_TO_START --- 6 job_7 YET_TO_START --- 4 job_6 RUNNING LATE 2 job_4 RUNNING MET 7 job_8 COMPLETED NOT_MET 1 job_3 COMPLETED MET A: Use numpy.lexsort, you can use any number of parameters easily: import numpy as np sort_order_dict = {"FAILED":0, "YET_TO_START":1, "RUNNING":2, "COMPLETED":3} sort_order_dict2 = {'LATE': 0, '---': 1, 'NOT_MET': 2, 'MET': 3} order = np.lexsort([df['SLA'].map(sort_order_dict2), df['Status'].map(sort_order_dict), ]) out = df.iloc[order] Output: JOB-NAME Status SLA 5 job_5 FAILED LATE 3 job_2 YET_TO_START LATE 0 job_1 YET_TO_START --- 6 job_7 YET_TO_START --- 4 job_6 RUNNING LATE 2 job_4 RUNNING MET 7 job_8 COMPLETED NOT_MET 1 job_3 COMPLETED MET
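A further variant for anyone who prefers keeping the two orderings in separate dictionaries: sort_values applies the key callable to each sort column independently, so the callable can pick a mapping by column name. The three sample rows below are invented to keep the sketch runnable:
import pandas as pd

df = pd.DataFrame({
    "JOB-NAME": ["job_1", "job_5", "job_3"],
    "Status": ["YET_TO_START", "FAILED", "COMPLETED"],
    "SLA": ["---", "LATE", "MET"],
})

status_order = {"FAILED": 0, "YET_TO_START": 1, "RUNNING": 2, "COMPLETED": 3}
sla_order = {"LATE": 0, "---": 1, "NOT_MET": 2, "MET": 3}

# the key function is called once per column listed in `by`
out = df.sort_values(
    by=["Status", "SLA"],
    key=lambda col: col.map(status_order if col.name == "Status" else sla_order),
)
print(out)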
Pandas: How to custom-sort on multiple columns?
I have a pandas dataframe with data like: +-----------+-----------------+---------+ | JOB-NAME | Status | SLA | +-----------+-----------------+---------+ | job_1 | YET_TO_START | --- | | job_3 | COMPLETED | MET | | job_4 | RUNNING | MET | | job_2 | YET_TO_START | LATE | | job_6 | RUNNING | LATE | | job_5 | FAILED | LATE | | job_7 | YET_TO_START | --- | | job_8 | COMPLETED | NOT_MET | +-----------+-----------------+---------+ I need to sort this table based on the Status and SLA states, like for Status: FAILED will be top on the table, then YET_TO_START, then RUNNING, and finally COMPLETED. Similarly for SLA the order will be LATE, ---, NOT_MET, and MET. Like this: +-----------+-----------------+---------+ | JOB-NAME | Status | SLA | +-----------+-----------------+---------+ | job_5 | FAILED | LATE | | job_2 | YET_TO_START | LATE | | job_1 | YET_TO_START | --- | | job_7 | YET_TO_START | --- | | job_6 | RUNNING | LATE | | job_4 | RUNNING | MET | | job_8 | COMPLETED | NOT_MET | | job_3 | COMPLETED | MET | +-----------+-----------------+---------+ I am able to do this custom sorting priority-based only on single column Status, but unable to do for multiple columns. sort_order_dict = {"FAILED":0, "YET_TO_START":1, "RUNNING":2, "COMPLETED":3} joined_df = joined_df.sort_values(by=['status'], key=lambda x: x.map(sort_order_dict)) A solution is given here, but its for single column, not multiple column.
[ "You can extend dictionary by values from another columns, only necessary different keys in both columns for correct working like mentioned mozway in comments:\nsort_order_dict = {\"FAILED\":0, \"YET_TO_START\":1, \"RUNNING\":2, \"COMPLETED\":3,\n \"LATE\":4, \"---\":5, \"NOT_MET\":6, \"MET\":7}\ndf = df.sort_values(by=['Status','SLA'], key=lambda x: x.map(sort_order_dict))\nprint (df)\n JOB-NAME Status SLA\n5 job_5 FAILED LATE\n3 job_2 YET_TO_START LATE\n0 job_1 YET_TO_START ---\n6 job_7 YET_TO_START ---\n4 job_6 RUNNING LATE\n2 job_4 RUNNING MET\n7 job_8 COMPLETED NOT_MET\n1 job_3 COMPLETED MET\n\nOr use ordered Categorical:\ndf['Status'] = pd.Categorical(df['Status'], ordered=True, \n categories=['FAILED', 'YET_TO_START', 'RUNNING', 'COMPLETED'])\n \ndf['SLA'] = pd.Categorical(df['SLA'], ordered=True, \n categories= ['LATE', '---', 'NOT_MET', 'MET'])\ndf = df.sort_values(by=['Status','SLA'])\nprint (df)\n JOB-NAME Status SLA\n5 job_5 FAILED LATE\n3 job_2 YET_TO_START LATE\n0 job_1 YET_TO_START ---\n6 job_7 YET_TO_START ---\n4 job_6 RUNNING LATE\n2 job_4 RUNNING MET\n7 job_8 COMPLETED NOT_MET\n1 job_3 COMPLETED MET\n\n", "Use numpy.lexsort, you can use any number of parameters easily:\nimport numpy as np\n\nsort_order_dict = {\"FAILED\":0, \"YET_TO_START\":1, \"RUNNING\":2, \"COMPLETED\":3}\nsort_order_dict2 = {'LATE': 0, '---': 1, 'NOT_MET': 2, 'MET': 3}\n\norder = np.lexsort([df['SLA'].map(sort_order_dict2),\n df['Status'].map(sort_order_dict),\n ])\n\nout = df.iloc[order]\n\nOutput:\n JOB-NAME Status SLA\n5 job_5 FAILED LATE\n3 job_2 YET_TO_START LATE\n0 job_1 YET_TO_START ---\n6 job_7 YET_TO_START ---\n4 job_6 RUNNING LATE\n2 job_4 RUNNING MET\n7 job_8 COMPLETED NOT_MET\n1 job_3 COMPLETED MET\n\n" ]
[ 1, 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074611496_pandas_python.txt
Q: Enable pyenv-virtualenv prompt at terminal I just installed pyenv and virtualenv following: https://realpython.com/intro-to-pyenv/ After completing installation I was prompted with: pyenv-virtualenv: prompt changing will be removed from future release. configure `export PYENV_VIRTUALENV_DISABLE_PROMPT=1' to simulate the behavior I added export PYENV_VIRTUALENV_DISABLE_PROMPT=1 to my .bash_aliases just to see what the behavior would be, and sure enough it removed the prompt that used to exist at the beginning of the command prompt indicating the pyenv-virtualenv version. Used to be like: (myenv) user@foo:~/my_project [main] $ where (myenv) is the active environment, and [main] is the git branch. I would love to have the environment indicator back! It is very useful. I guess at some possibilities such as: export PYENV_VIRTUALENV_DISABLE_PROMPT=0 export PYENV_VIRTUALENV_ENABLE_PROMPT=1 But these do not return the previous behavior. I have googled all over and can't figure out how to get this back. This answer is not useful, as it seems like a hack around the original functionality, and displays the environment always, not just when I enter (or manually activate) an environment. A: Borrowing a solution from here, the following works (added to .bashrc or .bash_aliases): export PYENV_VIRTUALENV_DISABLE_PROMPT=1 export BASE_PROMPT=$PS1 function updatePrompt { if [[ "$(pyenv virtualenvs)" == *"* $(pyenv version-name) "* ]]; then export PS1='($(pyenv version-name)) '$BASE_PROMPT else export PS1=$BASE_PROMPT fi } export PROMPT_COMMAND='updatePrompt'
Enable pyenv-virtualenv prompt at terminal
I just installed pyenv and virtualenv following: https://realpython.com/intro-to-pyenv/ After completing installation I was prompted with: pyenv-virtualenv: prompt changing will be removed from future release. configure `export PYENV_VIRTUALENV_DISABLE_PROMPT=1' to simulate the behavior I added export PYENV_VIRTUALENV_DISABLE_PROMPT=1 to my .bash_aliases just to see what the behavior would be, and sure enough it removed the prompt that used to exist at the beginning of the command prompt indicating the pyenv-virtualenv version. Used to be like: (myenv) user@foo:~/my_project [main] $ where (myenv) is the active environment, and [main] is the git branch. I would love to have the environment indicator back! It is very useful. I guess at some possibilities such as: export PYENV_VIRTUALENV_DISABLE_PROMPT=0 export PYENV_VIRTUALENV_ENABLE_PROMPT=1 But these do not return the previous behavior. I have googled all over and can't figure out how to get this back. This answer is not useful, as it seems like a hack around the original functionality, and displays the environment always, not just when I enter (or manually activate) an environment.
[ "Borrowing a solution from here, the following works (added to .bashrc or .bash_aliases):\nexport PYENV_VIRTUALENV_DISABLE_PROMPT=1\nexport BASE_PROMPT=$PS1\nfunction updatePrompt {\n if [[ \"$(pyenv virtualenvs)\" == *\"* $(pyenv version-name) \"* ]]; then\n export PS1='($(pyenv version-name)) '$BASE_PROMPT\n else\n export PS1=$BASE_PROMPT\n fi\n}\nexport PROMPT_COMMAND='updatePrompt'\n\n" ]
[ 1 ]
[]
[]
[ "pyenv", "pyenv_virtualenv", "python", "virtualenv" ]
stackoverflow_0074611317_pyenv_pyenv_virtualenv_python_virtualenv.txt
Q: How to override a method in python of an object and call super? I have an Object of the following class which inherates from the algorithm class. class AP(Algorithm): def evaluate(self, u): return self.stuff *2 +u The Algorithm class has a method called StoppingCritiria. At some point in the project the object objAP = AP() gets created. Later on I can then actually access it. And at that point in time I want to override the method StoppingCriteria by some function which calls the old StoppingCriteria. I tried simply def new_stopping(self): return super().StoppingCriteria() and custom(self.u) objAP.StoppingCriteria = newStoppingCriteria But that did not work. What did work were two rather inconviniend solutions: New AP class (not desirable since I possibly need to do that for lots of classes) class AP_custom(AP): def StoppingCriteria(self): return super().StoppingCriteria() and custom(self) Override the Method but not using super but rather copy pasting the code into the new function and adding my code to that. Not desirable since I want to changes in the original method to be applyed to my new function as well. A: See Override a method at instance level for many possible solutions. None of them will really work with super though, since you're simply not defining the replacement function in a class. You can define it slightly differently though for it to work: class Foo: def bar(self): print('bar') f = Foo() def _bar(self): type(self).bar(self) # or Foo.bar(self) print('baz') from typing import MethodType f.bar = MethodType(_bar, f) f.bar() # outputs bar baz Since you're replacing the method at the instance level, you don't really need to access the method of the super class, you just want to access the method of the class, which still exists in its original form.
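Adapted to the classes in the question, a hedged sketch: Algorithm's body and the custom() check are invented stand-ins, since neither is shown, and MethodType is imported from the standard types module:
from types import MethodType

class Algorithm:
    def StoppingCriteria(self):
        return True            # stand-in for the real criterion

class AP(Algorithm):
    def evaluate(self, u):
        return self.stuff * 2 + u

def custom(obj):
    return obj.u < 10          # stand-in for the extra condition

def new_stopping(self):
    # call the method as defined on the class, then add the extra check
    return Algorithm.StoppingCriteria(self) and custom(self)

objAP = AP()
objAP.u = 3
objAP.StoppingCriteria = MethodType(new_stopping, objAP)
print(objAP.StoppingCriteria())   # True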
How to override a method in python of an object and call super?
I have an Object of the following class which inherates from the algorithm class. class AP(Algorithm): def evaluate(self, u): return self.stuff *2 +u The Algorithm class has a method called StoppingCritiria. At some point in the project the object objAP = AP() gets created. Later on I can then actually access it. And at that point in time I want to override the method StoppingCriteria by some function which calls the old StoppingCriteria. I tried simply def new_stopping(self): return super().StoppingCriteria() and custom(self.u) objAP.StoppingCriteria = newStoppingCriteria But that did not work. What did work were two rather inconviniend solutions: New AP class (not desirable since I possibly need to do that for lots of classes) class AP_custom(AP): def StoppingCriteria(self): return super().StoppingCriteria() and custom(self) Override the Method but not using super but rather copy pasting the code into the new function and adding my code to that. Not desirable since I want to changes in the original method to be applyed to my new function as well.
[ "See Override a method at instance level for many possible solutions. None of them will really work with super though, since you're simply not defining the replacement function in a class. You can define it slightly differently though for it to work:\nclass Foo:\n def bar(self):\n print('bar')\n\nf = Foo()\n\ndef _bar(self):\n type(self).bar(self) # or Foo.bar(self)\n print('baz')\n\nfrom typing import MethodType\n\nf.bar = MethodType(_bar, f)\nf.bar() # outputs bar baz\n\nSince you're replacing the method at the instance level, you don't really need to access the method of the super class, you just want to access the method of the class, which still exists in its original form.\n" ]
[ 2 ]
[]
[]
[ "inheritance", "python", "python_3.x", "python_class" ]
stackoverflow_0074611413_inheritance_python_python_3.x_python_class.txt
Q: numpy keeps turning zeroes into very small numbers and "-2147483648" I have this code import numpy a=numpy.pad(numpy.empty([8,8]), 1, constant_values=1) print(a) 50% of the times I execute it it prints a normal array, 50% of times it prints this [[ 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000] [ 1.00000000e+000 3.25639960e-265 2.03709399e-231 -7.49281680e-111 9.57832017e-299 8.17611616e-093 9.57832017e-299 1.31887592e+066 -2.29724802e+236 1.00000000e+000] [ 1.00000000e+000 5.11889256e-014 -2.29724802e+236 2.19853714e-004 -2.29724802e+236 -9.20964279e+232 2.37057719e+043 1.48921177e+048 5.29583156e-235 1.00000000e+000] [ 1.00000000e+000 6.37391724e+057 5.68896808e-235 2.73626021e+067 6.08210460e-235 1.17578020e+077 6.66029790e-235 7.05235822e-235 2.13106310e-308 1.00000000e+000] [ 1.00000000e+000 7.83852638e-235 2.13214956e-308 8.62479942e-235 2.13323602e-308 9.41107246e-235 2.13432248e-308 1.61214828e+063 1.35001671e-284 1.00000000e+000] [ 1.00000000e+000 7.20990215e-264 9.57831969e-299 5.06352214e+139 3.18093720e+144 1.21642092e-234 1.25562635e-234 2.13866833e-308 1.41045067e-234 1.00000000e+000] [ 1.00000000e+000 2.13975479e-308 1.56770528e-234 2.14084125e-308 1.72495988e-234 2.14192771e-308 1.88221449e-234 2.14301418e-308 2.03946910e-234 1.00000000e+000] [ 1.00000000e+000 2.14410064e-308 2.19672371e-234 2.14518710e-308 2.35397832e-234 2.14627356e-308 1.61656736e+063 1.35004493e-284 7.20998544e-264 1.00000000e+000] [ 1.00000000e+000 3.93674833e-241 7.20999301e-264 6.00700127e-246 2.03709519e-231 -5.20176578e-111 9.57832021e-299 5.66452894e+075 -2.29724802e+236 1.00000000e+000] [ 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000]] what is worse, when i do .astype(int) it keeps doing this [[ 1 1 1 1 1 1 1 1 1 1] [ 1 0 0 0 -2147483648 0 -2147483648 0 0 1] [ 1 0 0 -2147483648 0 0 0 0 -2147483648 1] [ 1 0 0 0 0 -2147483648 0 0 0 1] [ 1 0 0 0 0 0 -2147483648 0 0 1] [ 1 0 0 -2147483648 0 0 0 0 0 1] [ 1 0 -2147483648 0 0 0 -2147483648 0 -2147483648 1] [ 1 0 -2147483648 -2147483648 0 -2147483648 0 0 -2147483648 1] [ 1 0 0 0 0 0 0 0 0 1] [ 1 1 1 1 1 1 1 1 1 1]] tried on normal python 3.11 and anaconda 3.9. I googled but I couldn't find a way to fix this, so any help would be much appreciated. The post needs to have more text so that it isn't "mostly code" and it lets me post it. I would like to know if there are any good ways to solve the issue I've described. As I wrote, I tested it on two different versions of python. Unfortunately, both lead to the same issue. A: You are using numpy.empty which is an Array of uninitialized (arbitrary) data of the given shape, dtype, and order. Object arrays will be initialized to None. See documentation. Use either numpy.zeros or numpy.ones to start with a proper initialized array. A: The problem lies in the fact that an empty array is initialized with np.empty. I am not an expert on computers and how they work, but what I do know is that when an empty array is initialized, a block of memory is allocated where the values of the new array are to be saved. This block of memory can contain values previously initialized. Basically there is still some 1's and zeros in the memory that you are now printing. When you change values in the np.pad, for the first time, the old memory gets overwritten. What I think you are trying to do is np.zeros(). 
comparison: >>> import numpy >>> a = numpy.empty([2,2]) >>> a array([[8.88913424e-317, 0.00000000e+000], [4.01601648e-212, 1.10215522e-317]]) >>> b = numpy.zeros([2,2]) >>> b array([[0., 0.], [0., 0.]]) >>>
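Assuming the goal is an 8x8 block of zeros framed by ones (as the pad call in the question suggests), a minimal fix is to start from an initialized array:

import numpy as np

# np.zeros gives a defined starting point, unlike np.empty.
a = np.pad(np.zeros((8, 8)), 1, constant_values=1)
print(a.astype(int))

With zeros as the base, the .astype(int) conversion is also stable, since there are no leftover garbage floats to overflow the integer range.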
numpy keeps turning zeroes into very small numbers and "-2147483648"
I have this code import numpy a=numpy.pad(numpy.empty([8,8]), 1, constant_values=1) print(a) 50% of the times I execute it it prints a normal array, 50% of times it prints this [[ 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000] [ 1.00000000e+000 3.25639960e-265 2.03709399e-231 -7.49281680e-111 9.57832017e-299 8.17611616e-093 9.57832017e-299 1.31887592e+066 -2.29724802e+236 1.00000000e+000] [ 1.00000000e+000 5.11889256e-014 -2.29724802e+236 2.19853714e-004 -2.29724802e+236 -9.20964279e+232 2.37057719e+043 1.48921177e+048 5.29583156e-235 1.00000000e+000] [ 1.00000000e+000 6.37391724e+057 5.68896808e-235 2.73626021e+067 6.08210460e-235 1.17578020e+077 6.66029790e-235 7.05235822e-235 2.13106310e-308 1.00000000e+000] [ 1.00000000e+000 7.83852638e-235 2.13214956e-308 8.62479942e-235 2.13323602e-308 9.41107246e-235 2.13432248e-308 1.61214828e+063 1.35001671e-284 1.00000000e+000] [ 1.00000000e+000 7.20990215e-264 9.57831969e-299 5.06352214e+139 3.18093720e+144 1.21642092e-234 1.25562635e-234 2.13866833e-308 1.41045067e-234 1.00000000e+000] [ 1.00000000e+000 2.13975479e-308 1.56770528e-234 2.14084125e-308 1.72495988e-234 2.14192771e-308 1.88221449e-234 2.14301418e-308 2.03946910e-234 1.00000000e+000] [ 1.00000000e+000 2.14410064e-308 2.19672371e-234 2.14518710e-308 2.35397832e-234 2.14627356e-308 1.61656736e+063 1.35004493e-284 7.20998544e-264 1.00000000e+000] [ 1.00000000e+000 3.93674833e-241 7.20999301e-264 6.00700127e-246 2.03709519e-231 -5.20176578e-111 9.57832021e-299 5.66452894e+075 -2.29724802e+236 1.00000000e+000] [ 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000 1.00000000e+000]] what is worse, when i do .astype(int) it keeps doing this [[ 1 1 1 1 1 1 1 1 1 1] [ 1 0 0 0 -2147483648 0 -2147483648 0 0 1] [ 1 0 0 -2147483648 0 0 0 0 -2147483648 1] [ 1 0 0 0 0 -2147483648 0 0 0 1] [ 1 0 0 0 0 0 -2147483648 0 0 1] [ 1 0 0 -2147483648 0 0 0 0 0 1] [ 1 0 -2147483648 0 0 0 -2147483648 0 -2147483648 1] [ 1 0 -2147483648 -2147483648 0 -2147483648 0 0 -2147483648 1] [ 1 0 0 0 0 0 0 0 0 1] [ 1 1 1 1 1 1 1 1 1 1]] tried on normal python 3.11 and anaconda 3.9. I googled but I couldn't find a way to fix this, so any help would be much appreciated. The post needs to have more text so that it isn't "mostly code" and it lets me post it. I would like to know if there are any good ways to solve the issue I've described. As I wrote, I tested it on two different versions of python. Unfortunately, both lead to the same issue.
[ "You are using numpy.empty which is an\n\nArray of uninitialized (arbitrary) data of the given shape, dtype, and order. Object arrays will be initialized to None.\n\nSee documentation.\nUse either numpy.zeros or numpy.ones to start with a proper initialized array.\n", "The problem lies in the fact that an empty array is initialized with np.empty. I am not an expert on computers and how they work, but what I do know is that when an empty array is initialized, a block of memory is allocated where the values of the new array are to be saved. This block of memory can contain values previously initialized. Basically there is still some 1's and zeros in the memory that you are now printing.\nWhen you change values in the np.pad, for the first time, the old memory gets overwritten.\nWhat I think you are trying to do is np.zeros().\ncomparison:\n\n>>> import numpy\n>>> a = numpy.empty([2,2])\n>>> a\narray([[8.88913424e-317, 0.00000000e+000],\n [4.01601648e-212, 1.10215522e-317]])\n>>> b = numpy.zeros([2,2])\n>>> b\narray([[0., 0.],\n [0., 0.]])\n>>> \n\n\n" ]
[ 1, 1 ]
[]
[]
[ "arrays", "numpy", "python" ]
stackoverflow_0074611463_arrays_numpy_python.txt
Q: Permutation List with Variable Dependencies- UnboundLocalError I was trying to break down the code to the simplest form before adding more variables and such. I'm stuck. I wanted it so when I use intertools the first response is the permutations of tricks and the second response is dependent on the trick's landings() and is a permutation of the trick's corresponding landing. I want to add additional variables that further branch off from landings() and so on. The simplest form should print a list that looks like: Backflip Complete Backflip Hyper 180 Round Complete 180 Round Mega Gumbi Complete My Code: from re import I import pandas as pd import numpy as np import itertools from io import StringIO backflip = "Backflip" one80round = "180 Round" gumbi = "Gumbi" tricks = [backflip,one80round,gumbi] complete = "Complete" hyper = "Hyper" mega = "Mega" backflip_landing = [complete,hyper] one80round_landing = [complete,mega] gumbi_landing = [complete] def landings(tricks): if tricks == backflip: landing = backflip_landing elif tricks == one80round: landing = one80round_landing elif tricks == gumbi: landing = gumbi_landing return landing for trik, land in itertools.product(tricks,landings(tricks)): trick_and_landing = (trik, land) result = (' '.join(trick_and_landing)) tal = StringIO(result) tl = (pd.DataFrame((tal))) print(tl) I get the error: UnboundLocalError: local variable 'landing' referenced before assignment A: Add a landing = "" after def landings(tricks): to get rid of the error. But the if checks in your function are wrong. You check if tricks, which is a list, is equal to backflip, etc. which are all strings. So thats why none of the ifs are true and landing got no value assigned. That question was also about permutation in python. Maybe it helps.
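A hedged sketch of how the expected trick/landing pairs could be produced without the if/elif chain: a plain dict maps each trick to its landings and replaces the landings() function (the names come from the question; the dict itself is an assumption):

backflip, one80round, gumbi = "Backflip", "180 Round", "Gumbi"
complete, hyper, mega = "Complete", "Hyper", "Mega"

landings = {
    backflip: [complete, hyper],
    one80round: [complete, mega],
    gumbi: [complete],
}

# Pair each trick only with its own landings; itertools.product would
# instead cross every trick with one fixed landing list.
for trick, landing in ((t, l) for t, lands in landings.items() for l in lands):
    print(trick, landing)

This prints exactly the five lines listed as the desired output, and further levels (for example variations per landing) can be added by nesting another dict in the same way.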
Permutation List with Variable Dependencies- UnboundLocalError
I was trying to break down the code to the simplest form before adding more variables and such. I'm stuck. I wanted it so when I use intertools the first response is the permutations of tricks and the second response is dependent on the trick's landings() and is a permutation of the trick's corresponding landing. I want to add additional variables that further branch off from landings() and so on. The simplest form should print a list that looks like: Backflip Complete Backflip Hyper 180 Round Complete 180 Round Mega Gumbi Complete My Code: from re import I import pandas as pd import numpy as np import itertools from io import StringIO backflip = "Backflip" one80round = "180 Round" gumbi = "Gumbi" tricks = [backflip,one80round,gumbi] complete = "Complete" hyper = "Hyper" mega = "Mega" backflip_landing = [complete,hyper] one80round_landing = [complete,mega] gumbi_landing = [complete] def landings(tricks): if tricks == backflip: landing = backflip_landing elif tricks == one80round: landing = one80round_landing elif tricks == gumbi: landing = gumbi_landing return landing for trik, land in itertools.product(tricks,landings(tricks)): trick_and_landing = (trik, land) result = (' '.join(trick_and_landing)) tal = StringIO(result) tl = (pd.DataFrame((tal))) print(tl) I get the error: UnboundLocalError: local variable 'landing' referenced before assignment
[ "Add a landing = \"\" after def landings(tricks): to get rid of the error.\nBut the if checks in your function are wrong. You check if tricks, which is a list, is equal to backflip, etc. which are all strings. So thats why none of the ifs are true and landing got no value assigned.\nThat question was also about permutation in python. Maybe it helps.\n" ]
[ 0 ]
[]
[]
[ "error_handling", "function", "permutation", "python" ]
stackoverflow_0074611395_error_handling_function_permutation_python.txt
Q: Filtering one column based on values in two other columns I've got an upper boundary and a lower boundary based on a predicted value and I want to filter out the data that do not fall between the upper and lower boundaries. My data frame looks like this weight KG Upper Boundary Lower Boundary 23.2 30 20 55.2 40 30 44.2 50 40 47.8 50 40 38.7 30 20 and I'd like it to look like this weight KG Upper Boundary Lower Boundary 23.2 30 20 44.2 50 40 47.8 50 40 I have tried this but it does not filter properly. df2= df1[(df1['weight_KG'] <= df1["UpperBoundary"]) & (df1['weight_KG'] >= df1["LowerBoundary"]] A: Your code works just fine. If it doesn't do the job. It might have the version and platform related thing. My environment is following: Macbook M1 chip, Ventura Python 3.9.14 Pandas 1.5.2 Code is following: import pandas as pd # Build DataFrame names = ["weight_KG", "UpperBoundary", "LowerBoundary"] weight_KG = [23.2, 55.2, 44.2, 47.8, 38.7, 0] UpperBoundary = [30, 40, 50, 50, 30, 20] LowerBoundary = [20, 30, 40, 40, 20, 10] dict = { "weight_KG": weight_KG, "UpperBoundary": UpperBoundary, "LowerBoundary": LowerBoundary, } df1 = pd.DataFrame(dict) df2 = df1[ (df1["weight_KG"] <= df1["UpperBoundary"]) & (df1["weight_KG"] >= df1["LowerBoundary"]) ] print(df2) print(pd.__version__) Output is following: weight_KG UpperBoundary LowerBoundary 0 23.2 30 20 2 44.2 50 40 3 47.8 50 40 1.5.2
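Two small follow-ups. First, as transcribed, the attempt in the question is missing a closing parenthesis after the second comparison: the final bracket pair should read ...df1["LowerBoundary"])] rather than ...df1["LowerBoundary"]], otherwise Python raises a SyntaxError before any filtering happens. Second, the same row-wise bounds check can be written more compactly with Series.between, which accepts Series bounds in recent pandas versions — a sketch rebuilding the frame from the question:

import pandas as pd

df1 = pd.DataFrame({
    "weight_KG": [23.2, 55.2, 44.2, 47.8, 38.7],
    "UpperBoundary": [30, 40, 50, 50, 30],
    "LowerBoundary": [20, 30, 40, 40, 20],
})

# between() checks lower <= value <= upper for each row.
df2 = df1[df1["weight_KG"].between(df1["LowerBoundary"], df1["UpperBoundary"])]
print(df2)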
Filtering one column based on values in two other columns
I've got an upper boundary and a lower boundary based on a predicted value and I want to filter out the data that do not fall between the upper and lower boundaries. My data frame looks like this weight KG Upper Boundary Lower Boundary 23.2 30 20 55.2 40 30 44.2 50 40 47.8 50 40 38.7 30 20 and I'd like it to look like this weight KG Upper Boundary Lower Boundary 23.2 30 20 44.2 50 40 47.8 50 40 I have tried this but it does not filter properly. df2= df1[(df1['weight_KG'] <= df1["UpperBoundary"]) & (df1['weight_KG'] >= df1["LowerBoundary"]]
[ "Your code works just fine. If it doesn't do the job. It might have the version and platform related thing.\nMy environment is following:\n\nMacbook M1 chip, Ventura\nPython 3.9.14\nPandas 1.5.2\n\nCode is following:\nimport pandas as pd\n\n# Build DataFrame\nnames = [\"weight_KG\", \"UpperBoundary\", \"LowerBoundary\"]\nweight_KG = [23.2, 55.2, 44.2, 47.8, 38.7, 0]\nUpperBoundary = [30, 40, 50, 50, 30, 20]\nLowerBoundary = [20, 30, 40, 40, 20, 10]\n\ndict = {\n \"weight_KG\": weight_KG,\n \"UpperBoundary\": UpperBoundary,\n \"LowerBoundary\": LowerBoundary,\n}\ndf1 = pd.DataFrame(dict)\n\ndf2 = df1[\n (df1[\"weight_KG\"] <= df1[\"UpperBoundary\"])\n & (df1[\"weight_KG\"] >= df1[\"LowerBoundary\"])\n]\nprint(df2)\nprint(pd.__version__)\n\nOutput is following:\n weight_KG UpperBoundary LowerBoundary\n0 23.2 30 20\n2 44.2 50 40\n3 47.8 50 40\n1.5.2\n\n" ]
[ 1 ]
[]
[]
[ "filtering", "python" ]
stackoverflow_0074605358_filtering_python.txt
Q: Tensorflow Multi Head Attention on Inputs: 4 x 5 x 20 x 64 with attention_axes=2 throwing mask dimension error (tf 2.11.0) The expectation here is that the attention is applied on the 2nd dimension (4, 5, 20, 64). I am trying to apply self attention using the following code (issue reproducible with this code): import numpy as np import tensorflow as tf from keras import layers as tfl class Encoder(tfl.Layer): def __init__(self,): super().__init__() self.embed_layer = tfl.Embedding(4500, 64, mask_zero=True) self.attn_layer = tfl.MultiHeadAttention(num_heads=2, attention_axes=2, key_dim=16) return def call(self, x): # Input shape: (4, 5, 20) (Batch size: 4) x = self.embed_layer(x) # Output: (4, 5, 20, 64) x = self.attn_layer(query=x, key=x, value=x) # Output: (4, 5, 20, 64) return x eg_input = tf.constant(np.random.randint(0, 150, (4, 5, 20))) enc = Encoder() enc(eg_input) However, the above layer defined throws the following error. Could someone please explain why is this happening & how to fix this? {{function_node __wrapped__AddV2_device_/job:localhost/replica:0/task:0/device:CPU:0}} Incompatible shapes: [4,5,2,20,20] vs. [4,5,1,5,20] [Op:AddV2] Call arguments received by layer 'softmax_2' (type Softmax): • inputs=tf.Tensor(shape=(4, 5, 2, 20, 20), dtype=float32) • mask=tf.Tensor(shape=(4, 5, 1, 5, 20), dtype=bool) PS: If I set mask_zero = False in defining the embedding layer, the code runs fine as expected without any issues. A: Just concat the input along axis=0 import numpy as np import tensorflow as tf from keras import layers as tfl class Encoder(tfl.Layer): def __init__(self,): super().__init__() self.embed_layer = tfl.Embedding(4500, 64, mask_zero=True) self.attn_layer = tfl.MultiHeadAttention(num_heads=2, key_dim=16, attention_axes=2) def call(self, x): x = self.embed_layer(x) # Output: (4, 5, 20, 32) x = tf.concat(x, axis=0) x, attention_scores = self.attn_layer(query=x, key=x, value=x , return_attention_scores=True) # Output: (4, 5, 20, 32) return x , attention_scores eg_input = tf.constant(np.random.randint(0, 150, (4, 5, 20))) enc = Encoder() scores , attentions = enc(eg_input) scores.shape , attentions.shape #(TensorShape([4, 5, 20, 64]), TensorShape([4, 5, 2, 20, 20]))
Tensorflow Multi Head Attention on Inputs: 4 x 5 x 20 x 64 with attention_axes=2 throwing mask dimension error (tf 2.11.0)
The expectation here is that the attention is applied on the 2nd dimension (4, 5, 20, 64). I am trying to apply self attention using the following code (issue reproducible with this code): import numpy as np import tensorflow as tf from keras import layers as tfl class Encoder(tfl.Layer): def __init__(self,): super().__init__() self.embed_layer = tfl.Embedding(4500, 64, mask_zero=True) self.attn_layer = tfl.MultiHeadAttention(num_heads=2, attention_axes=2, key_dim=16) return def call(self, x): # Input shape: (4, 5, 20) (Batch size: 4) x = self.embed_layer(x) # Output: (4, 5, 20, 64) x = self.attn_layer(query=x, key=x, value=x) # Output: (4, 5, 20, 64) return x eg_input = tf.constant(np.random.randint(0, 150, (4, 5, 20))) enc = Encoder() enc(eg_input) However, the above layer defined throws the following error. Could someone please explain why is this happening & how to fix this? {{function_node __wrapped__AddV2_device_/job:localhost/replica:0/task:0/device:CPU:0}} Incompatible shapes: [4,5,2,20,20] vs. [4,5,1,5,20] [Op:AddV2] Call arguments received by layer 'softmax_2' (type Softmax): • inputs=tf.Tensor(shape=(4, 5, 2, 20, 20), dtype=float32) • mask=tf.Tensor(shape=(4, 5, 1, 5, 20), dtype=bool) PS: If I set mask_zero = False in defining the embedding layer, the code runs fine as expected without any issues.
[ "Just concat the input along axis=0\nimport numpy as np\nimport tensorflow as tf\nfrom keras import layers as tfl\n\nclass Encoder(tfl.Layer):\n def __init__(self,):\n super().__init__()\n self.embed_layer = tfl.Embedding(4500, 64, mask_zero=True)\n self.attn_layer = tfl.MultiHeadAttention(num_heads=2,\n key_dim=16,\n attention_axes=2)\n\n def call(self, x):\n x = self.embed_layer(x) # Output: (4, 5, 20, 32)\n x = tf.concat(x, axis=0)\n x, attention_scores = self.attn_layer(query=x, key=x, value=x , return_attention_scores=True) # Output: (4, 5, 20, 32)\n return x , attention_scores\n\n\neg_input = tf.constant(np.random.randint(0, 150, (4, 5, 20)))\nenc = Encoder()\nscores , attentions = enc(eg_input)\n\nscores.shape , attentions.shape\n#(TensorShape([4, 5, 20, 64]), TensorShape([4, 5, 2, 20, 20]))\n\n" ]
[ 1 ]
[]
[]
[ "attention_model", "python", "python_3.x", "self_attention", "tensorflow" ]
stackoverflow_0074610068_attention_model_python_python_3.x_self_attention_tensorflow.txt
Q: Create a pdf from an image list I am developing a program in python to produce raffle tickets. The program creates as many tickets as required from a reference image by modifying only the ticket number. I have a list of images with potentially several hundred items. I would like to resize my images and save them in a pdf to allow printing. The user has to choose the number of tickets per row and per column on an A4 page PIL_img_list = [...] nbr_per_line = input("Number of ticket per line") nbr_per_column = input("Number of ticket per column") Does anyone have an idea? A: disclaimer: I am the author of borb, the library used in this answer You can simply add the Image to a Document (use absolute positioning), and then add a Paragraph of text (containing the raffle number) at the position you want to have it. from borb.pdf import Document from borb.pdf import Page from borb.pdf import Image from borb.pdf import Paragraph from borb.pdf import PDF from borb.pdf.canvas.geometry.rectangle import Rectangle from decimal import Decimal # create empty document doc: Document = Document() # add empty page pge: Page = Page() doc.add_page(pge) # define a Rectangle on which we want to draw content # fmt: off r: Rectangle = Rectangle( Decimal(59), # x: 0 + page_margin Decimal(848 - 84 - 100), # y: page_height - page_margin - height_of_textbox Decimal(595 - 59 * 2), # width: page_width - 2 * page_margin Decimal(100), # height ) # fmt: on # add Image Image("path_to_image.png", width=Decimal(100), height=Decimal(100)).paint(pge, r) # add Paragraph Paragraph("00001").paint(pge, r) # store with open("output.pdf", "wb") as fh: PDF.dumps(fh, doc) borb is an open source, pure Python PDF library that creates, modifies and reads PDF documents. You can download it using: pip install borb Alternatively, you can build from source by forking/downloading the GitHub repository.
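Since the question starts from a list of PIL images, another option is to lay the tickets out on A4 pages with Pillow alone and let it write the PDF directly. This is a sketch; the placeholder tickets and the 300 dpi page size are assumptions standing in for the real PIL_img_list and print settings:

from PIL import Image

# Hypothetical placeholder tickets; in practice this is the PIL_img_list
# built from the numbered reference image.
PIL_img_list = [Image.new("RGB", (400, 200), "lightgrey") for _ in range(12)]
nbr_per_line = 2       # the user's choices, converted to int
nbr_per_column = 5

A4 = (2480, 3508)                          # A4 at 300 dpi, in pixels
cell_w = A4[0] // nbr_per_line
cell_h = A4[1] // nbr_per_column
per_page = nbr_per_line * nbr_per_column

pages = []
for start in range(0, len(PIL_img_list), per_page):
    page = Image.new("RGB", A4, "white")
    for i, ticket in enumerate(PIL_img_list[start:start + per_page]):
        x = (i % nbr_per_line) * cell_w
        y = (i // nbr_per_line) * cell_h
        page.paste(ticket.resize((cell_w, cell_h)), (x, y))
    pages.append(page)

pages[0].save("tickets.pdf", save_all=True, append_images=pages[1:])

Resizing every ticket to the full cell ignores aspect ratio; keeping it would just mean computing a scale factor and centering the ticket inside its cell.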
Create a pdf from an image list
I am developing a program in python to produce raffle tickets. The program creates as many tickets as required from a reference image by modifying only the ticket number. I have a list of images with potentially several hundred items. I would like to resize my images and save them in a pdf to allow printing. The user has to choose the number of tickets per row and per column on an A4 page PIL_img_list = [...] nbr_per_line = input("Number of ticket per line") nbr_per_column = input("Number of ticket per column") Does anyone have an idea?
[ "disclaimer: I am the author of borb, the library used in this answer\nYou can simply add the Image to a Document (use absolute positioning), and then add a Paragraph of text (containing the raffle number) at the position you want to have it.\nfrom borb.pdf import Document\nfrom borb.pdf import Page\nfrom borb.pdf import Image\nfrom borb.pdf import Paragraph\nfrom borb.pdf import PDF\nfrom borb.pdf.canvas.geometry.rectangle import Rectangle\n\nfrom decimal import Decimal\n\n# create empty document\ndoc: Document = Document()\n\n# add empty page\npge: Page = Page()\ndoc.add_page(pge)\n\n# define a Rectangle on which we want to draw content\n# fmt: off\nr: Rectangle = Rectangle(\n Decimal(59), # x: 0 + page_margin\n Decimal(848 - 84 - 100), # y: page_height - page_margin - height_of_textbox\n Decimal(595 - 59 * 2), # width: page_width - 2 * page_margin\n Decimal(100), # height\n)\n# fmt: on\n\n# add Image\nImage(\"path_to_image.png\", \n width=Decimal(100), \n height=Decimal(100)).paint(pge, r)\n\n# add Paragraph\nParagraph(\"00001\").paint(pge, r)\n\n# store\nwith open(\"output.pdf\", \"wb\") as fh:\n PDF.dumps(fh, doc)\n\nborb is an open source, pure Python PDF library that creates, modifies and reads PDF documents. You can download it using:\npip install borb\n\nAlternatively, you can build from source by forking/downloading the GitHub repository.\n" ]
[ 0 ]
[]
[]
[ "image", "pdf", "python" ]
stackoverflow_0074607775_image_pdf_python.txt
Q: BayesianOptimization search errors out "TypeError: 'float' object is not subscriptable" I am getting the error TypeError: 'float' object is not subscriptable from the following line: tuner_nn.search(x_train, y_train, epochs=50, validation_data=(x_val,y_val ), verbose=0, callbacks=[Earlystopping]) I know there are a lot of questions with the the same error but still could not find a solution for this issue. While removing the y_val from the code and having the following incomplete line: tuner_nn.search(x_train, y_train, epochs=50, validation_data=(x_val,), verbose=0, callbacks=[Earlystopping]) The code somewhy pass without errors with green V. Yet with the warnings: INFO:tensorflow:Oracle triggered exit INFO:tensorflow:Reloading Oracle from existing project /Users/Farid Srouji/Documents/kerastuner\untitled_project\oracle.json INFO:tensorflow:Reloading Tuner from /Users/Farid Srouji/Documents/kerastuner\untitled_project\tuner0.json INFO:tensorflow:Oracle triggered exit INFO:tensorflow:Reloading Oracle from existing project /Users/Farid Srouji/Documents/kerastuner\untitled_project\oracle.json INFO:tensorflow:Reloading Tuner from /Users/Farid Srouji/Documents/kerastuner\untitled_project\tuner0.json INFO:tensorflow:Oracle triggered exit The full code in this block is: # Search hyperparameters SEED = 121 # NN tuner_nn = BayesianOptimization(nn_builder, objective = 'val_loss', max_trials = 20, seed = SEED, directory = '/Users/myuser/Documents/kerastuner', overwrite = True ) tuner_nn.search(x_train, y_train, epochs=50, validation_data=(x_val, ), verbose=0, callbacks=[Earlystopping]) ## Build model based on the optimized hyperparameters besthp_nn = tuner_nn.get_best_hyperparameters()[0] model_nn = tuner_nn.hypermodel.build(besthp_nn) # lstm tuner_lstm = BayesianOptimization(lstm_builder, objective = 'val_loss', max_trials = 20, seed = SEED, directory = '/Users/myuser/Documents/kerastuner') tuner_lstm.search(x_train, y_train, epochs=50, validation_data=(x_val, y_val), verbose=0, callbacks=[Earlystopping]) ## Build model based on the optimized hyperparameters besthp_lstm = tuner_lstm.get_best_hyperparameters()[0] model_lstm = tuner_lstm.hypermodel.build(besthp_lstm) # gru tuner_gru = BayesianOptimization(gru_builder, objective = 'val_loss', max_trials = 20, seed = SEED, directory = '/Users/myuser/Documents/kerastuner') tuner_gru.search(x_train, y_train, epochs=50, validation_data=(x_val, y_val), verbose=0, callbacks=[Earlystopping]) ## Build model based on the optimized hyperparameters besthp_gru = tuner_gru.get_best_hyperparameters()[0] model_gru = tuner_gru.hypermodel.build(besthp_gru) Why the removal of y_val the code works? Also there is no error for missing argument A: I think the proble comes from the following lines besthp_nn = tuner_nn.get_best_hyperparameters()\[0\] and besthp_gru = tuner_gru.get_best_hyperparameters()\[0\] You might want to try something like besthp_nn = tuner_nn.get_best_hyperparameters(1)[0] besthp_gru = tuner_gru.get_best_hyperparameters(1)[0] A: Python raises the TypeError: 'float' object is not subscriptable if you use indexing or slicing with the square bracket notation on a float variable that is not indexable.
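To illustrate the second answer's point in isolation, the exception itself only means that square-bracket indexing was applied to a float somewhere in the call chain, for example:

value = 3.14
value[0]    # TypeError: 'float' object is not subscriptable

So the fix is to find which value among the arguments (or in the tuner's configuration) ends up being a bare float where a sequence is expected.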
BayesianOptimization search errors out "TypeError: 'float' object is not subscriptable"
I am getting the error TypeError: 'float' object is not subscriptable from the following line: tuner_nn.search(x_train, y_train, epochs=50, validation_data=(x_val,y_val ), verbose=0, callbacks=[Earlystopping]) I know there are a lot of questions with the the same error but still could not find a solution for this issue. While removing the y_val from the code and having the following incomplete line: tuner_nn.search(x_train, y_train, epochs=50, validation_data=(x_val,), verbose=0, callbacks=[Earlystopping]) The code somewhy pass without errors with green V. Yet with the warnings: INFO:tensorflow:Oracle triggered exit INFO:tensorflow:Reloading Oracle from existing project /Users/Farid Srouji/Documents/kerastuner\untitled_project\oracle.json INFO:tensorflow:Reloading Tuner from /Users/Farid Srouji/Documents/kerastuner\untitled_project\tuner0.json INFO:tensorflow:Oracle triggered exit INFO:tensorflow:Reloading Oracle from existing project /Users/Farid Srouji/Documents/kerastuner\untitled_project\oracle.json INFO:tensorflow:Reloading Tuner from /Users/Farid Srouji/Documents/kerastuner\untitled_project\tuner0.json INFO:tensorflow:Oracle triggered exit The full code in this block is: # Search hyperparameters SEED = 121 # NN tuner_nn = BayesianOptimization(nn_builder, objective = 'val_loss', max_trials = 20, seed = SEED, directory = '/Users/myuser/Documents/kerastuner', overwrite = True ) tuner_nn.search(x_train, y_train, epochs=50, validation_data=(x_val, ), verbose=0, callbacks=[Earlystopping]) ## Build model based on the optimized hyperparameters besthp_nn = tuner_nn.get_best_hyperparameters()[0] model_nn = tuner_nn.hypermodel.build(besthp_nn) # lstm tuner_lstm = BayesianOptimization(lstm_builder, objective = 'val_loss', max_trials = 20, seed = SEED, directory = '/Users/myuser/Documents/kerastuner') tuner_lstm.search(x_train, y_train, epochs=50, validation_data=(x_val, y_val), verbose=0, callbacks=[Earlystopping]) ## Build model based on the optimized hyperparameters besthp_lstm = tuner_lstm.get_best_hyperparameters()[0] model_lstm = tuner_lstm.hypermodel.build(besthp_lstm) # gru tuner_gru = BayesianOptimization(gru_builder, objective = 'val_loss', max_trials = 20, seed = SEED, directory = '/Users/myuser/Documents/kerastuner') tuner_gru.search(x_train, y_train, epochs=50, validation_data=(x_val, y_val), verbose=0, callbacks=[Earlystopping]) ## Build model based on the optimized hyperparameters besthp_gru = tuner_gru.get_best_hyperparameters()[0] model_gru = tuner_gru.hypermodel.build(besthp_gru) Why the removal of y_val the code works? Also there is no error for missing argument
[ "I think the proble comes from the following lines\nbesthp_nn = tuner_nn.get_best_hyperparameters()\\[0\\]\n\nand\nbesthp_gru = tuner_gru.get_best_hyperparameters()\\[0\\]\n\nYou might want to try something like\nbesthp_nn = tuner_nn.get_best_hyperparameters(1)[0]\nbesthp_gru = tuner_gru.get_best_hyperparameters(1)[0]\n\n\n", "Python raises the TypeError: 'float' object is not subscriptable if you use indexing or slicing with the square bracket notation on a float variable that is not indexable.\n" ]
[ 0, 0 ]
[]
[]
[ "neural_network", "python", "python_3.x" ]
stackoverflow_0074611572_neural_network_python_python_3.x.txt
Q: How to subtract second level columns in multiIndex level dataframe Here is the example data I am working with. What I am trying to accomplish is 1) subtract b column from column a and 2) create the C column in front of a and b columns. I would like to loop through and create the C column for x, y and z. import pandas as pd df = pd.DataFrame(data=[[100,200,400,500,111,222], [77,28,110,211,27,81], [11,22,33,11,22,33],[213,124,136,147,54,56]]) df.columns = pd.MultiIndex.from_product([['x', 'y', 'z'], list('ab')]) print (df) Below is what I am trying to get. A: Use DataFrame.xs for select second levels with avoid remove first level with drop_level=False, then use rename for same MultiIndex, subtract and add to original with concat, last use DataFrame.sort_index: dfa = df.xs('a', axis=1, level=1, drop_level=False).rename(columns={'a':'c'}) dfb = df.xs('b', axis=1, level=1, drop_level=False).rename(columns={'b':'c'}) df = pd.concat([df, dfa.sub(dfb)], axis=1).sort_index(axis=1) print (df) x y z a b c a b c a b c 0 100 200 -100 400 500 -100 111 222 -111 1 77 28 49 110 211 -101 27 81 -54 2 11 22 -11 33 11 22 22 33 -11 3 213 124 89 136 147 -11 54 56 -2 With loop select columns by tuples, subtract Series and last use DataFrame.sort_index: for c in df.columns.levels[0]: df[(c, 'c')] = df[(c, 'a')].sub(df[(c, 'b')]) df = df.sort_index(axis=1) print (df) x y z a b c a b c a b c 0 100 200 -100 400 500 -100 111 222 -111 1 77 28 49 110 211 -101 27 81 -54 2 11 22 -11 33 11 22 22 33 -11 3 213 124 89 136 147 -11 54 56 -2 A: a = df.xs('a', level=1, axis=1) b = df.xs('b', level=1, axis=1) df1 = pd.concat([a.sub(b)], keys=['c'], axis=1).swaplevel(0, 1, axis=1) df1 x y z c c c 0 -100 -100 -111 1 49 -101 -54 2 -11 22 -11 3 89 -11 -2 then at first concat df and df1 , next sort pd.concat([df, df1], axis=1).sort_index(axis=1) other way use stack and unstack df.stack(level=0).assign(c=lambda x: x['b'] - x['a']).stack().unstack([1, 2]) result: x y z a b c a b c a b c 0 100 200 100 400 500 100 111 222 111 1 77 28 -49 110 211 101 27 81 54 2 11 22 11 33 11 -22 22 33 11 3 213 124 -89 136 147 11 54 56 2 A: Dump down into numpy, build a new dataframe, and concatenate to the original dataframe: result = df.loc(axis=1)[:,'a'].to_numpy() - df.loc(axis=1)[:, 'b'].to_numpy() header = pd.MultiIndex.from_product([['x','y','z'], ['c']]) result = pd.DataFrame(result, columns=header) pd.concat([df, result], axis=1).sort_index(axis=1) x y z a b c a b c a b c 0 100 200 -100 400 500 -100 111 222 -111 1 77 28 49 110 211 -101 27 81 -54 2 11 22 -11 33 11 22 22 33 -11 3 213 124 89 136 147 -11 54 56 -2 Another option, using pipe, without dumping into numpy: result = df.swaplevel(axis=1).pipe(lambda df: df['a'] - df['b']) result.columns = pd.MultiIndex.from_product([result.columns, ['c']]) pd.concat([df, result], axis=1).sort_index(axis=1) x y z a b c a b c a b c 0 100 200 -100 400 500 -100 111 222 -111 1 77 28 49 110 211 -101 27 81 -54 2 11 22 -11 33 11 22 22 33 -11 3 213 124 89 136 147 -11 54 56 -2
How to subtract second level columns in multiIndex level dataframe
Here is the example data I am working with. What I am trying to accomplish is 1) subtract b column from column a and 2) create the C column in front of a and b columns. I would like to loop through and create the C column for x, y and z. import pandas as pd df = pd.DataFrame(data=[[100,200,400,500,111,222], [77,28,110,211,27,81], [11,22,33,11,22,33],[213,124,136,147,54,56]]) df.columns = pd.MultiIndex.from_product([['x', 'y', 'z'], list('ab')]) print (df) Below is what I am trying to get.
[ "Use DataFrame.xs for select second levels with avoid remove first level with drop_level=False, then use rename for same MultiIndex, subtract and add to original with concat, last use DataFrame.sort_index:\ndfa = df.xs('a', axis=1, level=1, drop_level=False).rename(columns={'a':'c'})\ndfb = df.xs('b', axis=1, level=1, drop_level=False).rename(columns={'b':'c'})\n\ndf = pd.concat([df, dfa.sub(dfb)], axis=1).sort_index(axis=1)\nprint (df)\n x y z \n a b c a b c a b c\n0 100 200 -100 400 500 -100 111 222 -111\n1 77 28 49 110 211 -101 27 81 -54\n2 11 22 -11 33 11 22 22 33 -11\n3 213 124 89 136 147 -11 54 56 -2\n\nWith loop select columns by tuples, subtract Series and last use DataFrame.sort_index:\nfor c in df.columns.levels[0]:\n df[(c, 'c')] = df[(c, 'a')].sub(df[(c, 'b')])\n\ndf = df.sort_index(axis=1)\nprint (df)\n x y z \n a b c a b c a b c\n0 100 200 -100 400 500 -100 111 222 -111\n1 77 28 49 110 211 -101 27 81 -54\n2 11 22 -11 33 11 22 22 33 -11\n3 213 124 89 136 147 -11 54 56 -2\n\n", "a = df.xs('a', level=1, axis=1)\nb = df.xs('b', level=1, axis=1)\ndf1 = pd.concat([a.sub(b)], keys=['c'], axis=1).swaplevel(0, 1, axis=1)\n\ndf1\n x y z\n c c c\n0 -100 -100 -111\n1 49 -101 -54\n2 -11 22 -11\n3 89 -11 -2\n\nthen at first concat df and df1 , next sort\npd.concat([df, df1], axis=1).sort_index(axis=1)\n\n\nother way\nuse stack and unstack\ndf.stack(level=0).assign(c=lambda x: x['b'] - x['a']).stack().unstack([1, 2])\n\nresult:\n x y z\n a b c a b c a b c\n0 100 200 100 400 500 100 111 222 111\n1 77 28 -49 110 211 101 27 81 54\n2 11 22 11 33 11 -22 22 33 11\n3 213 124 -89 136 147 11 54 56 2\n\n", "Dump down into numpy, build a new dataframe, and concatenate to the original dataframe:\nresult = df.loc(axis=1)[:,'a'].to_numpy() - df.loc(axis=1)[:, 'b'].to_numpy()\nheader = pd.MultiIndex.from_product([['x','y','z'], ['c']])\nresult = pd.DataFrame(result, columns=header)\npd.concat([df, result], axis=1).sort_index(axis=1)\n\n x y z\n a b c a b c a b c\n0 100 200 -100 400 500 -100 111 222 -111\n1 77 28 49 110 211 -101 27 81 -54\n2 11 22 -11 33 11 22 22 33 -11\n3 213 124 89 136 147 -11 54 56 -2\n\nAnother option, using pipe, without dumping into numpy:\nresult = df.swaplevel(axis=1).pipe(lambda df: df['a'] - df['b'])\nresult.columns = pd.MultiIndex.from_product([result.columns, ['c']])\npd.concat([df, result], axis=1).sort_index(axis=1)\n\n x y z\n a b c a b c a b c\n0 100 200 -100 400 500 -100 111 222 -111\n1 77 28 49 110 211 -101 27 81 -54\n2 11 22 -11 33 11 22 22 33 -11\n3 213 124 89 136 147 -11 54 56 -2\n\n" ]
[ 2, 2, 1 ]
[]
[]
[ "dataframe", "multi_index", "pandas", "pivot_table", "python" ]
stackoverflow_0074609560_dataframe_multi_index_pandas_pivot_table_python.txt
Q: Replace the symbol in list my_list=['A0_123','BD_SEI','SW_TH'] I need to replace '_' with '+'. Expected output: my_list=['A0+123','BD+SEI','SW+TH'] Can someone help me? A: As pointed out by @python_user, you can iterate through every element in your list with a list comprehension and replace(): new_list = [i.replace('_', '+') for i in my_list] This returns your expected output: ['A0+123', 'BD+SEI', 'SW+TH']
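A minor variation on the same idea, in case a functional style is preferred; it behaves identically and also leaves my_list untouched:

my_list = ['A0_123', 'BD_SEI', 'SW_TH']

# Apply str.replace to each element via map instead of a comprehension.
new_list = list(map(lambda s: s.replace('_', '+'), my_list))
print(new_list)    # ['A0+123', 'BD+SEI', 'SW+TH']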
Replace the symbol in list
my_list=['A0_123','BD_SEI','SW_TH'] I need to replace '_' with '+'. Expected output: my_list=['A0+123','BD+SEI','SW+TH'] Can someone help me?
[ "As pointed out by @python_user, you can iterate through every element in your list using a list comprehension and using replace():\nnew_list = [i.replace('_', '+') for i in my_list]\nReturns your expected output\n['A0+123', 'BD+SEI', 'SW+TH']\n\n" ]
[ 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0074611263_list_python.txt
Q: Why do I get a flask weakref error when importing my SQLAlchemy db from a submodule? I currently have a flask project set up as follows (I did make a few modifications here to try and get the smallest working example, so some of this may be changed slightly) from extensions import db def create_api(): # Create API app api = Flask(__name__) # Configure... the configs api.config['SQLALCHEMY_DATABASE_URI'] = os.environ.get("DB_URL", "default") # Register information to run api register_extensions(api) register_models(api) register_urls(api) # Return the api object return api create_api().run() My register extension and register models methods look as follows: def register_models(api): with api.app_context(): db.create_all() def register_extensions(api): db.init_app(api) bcrypt.init_app(api) The extension module: from flask_sqlalchemy import SQLAlchemy from flask_bcrypt import Bcrypt # Establish SQLAlchemy Database extension db = SQLAlchemy() # Establish Bcrypt extension for hashing passwords bcrypt = Bcrypt() My project directory looks something like -FSBS -API - __init__.py - app.py - extensions.py - routes.py Everything works great like this. However- If I change the import in app.py from from extensions import db to from FSBS.API.extensions import db, all my attempts to use db start throwing "KeyError: <weakref at 0x00000251C4B1DAD0; to 'Flask' at 0x00000251C0ED3FA0>". This is somewhat problematic because I would like to start refactoring my routes into a subfolder, where I have to use from FSBS.API.extensions import db. Not only that, but I don't understand why this would make a difference, so any advice on solving this little puzzle would be greatly appriciated. A: I've recently encountered the exact same problem while trying to put my routes into a submodule. I can't explain why this happens, but from what I can tell, it has something to do with the Flask-SQLAlchemy version - I've initially tried running version 3.0.2. What solved the problem for me was a downgrade to version 2.5.1
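Beyond the version downgrade, one thing worth ruling out is that the package is being imported under two different names (extensions in one place, FSBS.API.extensions in another). Python then creates two separate module objects, and therefore two separate SQLAlchemy() instances, only one of which ever had init_app() called on it. A small diagnostic sketch, to be run inside the project (the import paths are the ones from the question and may need adjusting):

import sys

import extensions                       # path used by app.py
import FSBS.API.extensions as ext2      # path used by the routes submodule

print(extensions.db is ext2.db)         # False means two distinct db objects exist
print([name for name in sys.modules if name.endswith("extensions")])

If the identity check prints False, making every module import the extensions through one canonical path (or fixing the package layout so only one path exists) avoids the problem without pinning the library version.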
Why do I get a flask weakref error when importing my SQLAlchemy db from a submodule?
I currently have a flask project set up as follows (I did make a few modifications here to try and get the smallest working example, so some of this may be changed slightly) from extensions import db def create_api(): # Create API app api = Flask(__name__) # Configure... the configs api.config['SQLALCHEMY_DATABASE_URI'] = os.environ.get("DB_URL", "default") # Register information to run api register_extensions(api) register_models(api) register_urls(api) # Return the api object return api create_api().run() My register extension and register models methods look as follows: def register_models(api): with api.app_context(): db.create_all() def register_extensions(api): db.init_app(api) bcrypt.init_app(api) The extension module: from flask_sqlalchemy import SQLAlchemy from flask_bcrypt import Bcrypt # Establish SQLAlchemy Database extension db = SQLAlchemy() # Establish Bcrypt extension for hashing passwords bcrypt = Bcrypt() My project directory looks something like -FSBS -API - __init__.py - app.py - extensions.py - routes.py Everything works great like this. However- If I change the import in app.py from from extensions import db to from FSBS.API.extensions import db, all my attempts to use db start throwing "KeyError: <weakref at 0x00000251C4B1DAD0; to 'Flask' at 0x00000251C0ED3FA0>". This is somewhat problematic because I would like to start refactoring my routes into a subfolder, where I have to use from FSBS.API.extensions import db. Not only that, but I don't understand why this would make a difference, so any advice on solving this little puzzle would be greatly appriciated.
[ "I've recently encountered the exact same problem while trying to put my routes into a submodule.\nI can't explain why this happens, but from what I can tell, it has something to do with the Flask-SQLAlchemy version - I've initially tried running version 3.0.2.\nWhat solved the problem for me was a downgrade to version 2.5.1\n" ]
[ 0 ]
[]
[]
[ "flask", "flask_sqlalchemy", "python", "sqlalchemy" ]
stackoverflow_0074366188_flask_flask_sqlalchemy_python_sqlalchemy.txt
Q: Python mmap permission denied on Windows I have the following code that works perfectly: server.py from mmap import mmap from pickle import load, dump mm = mmap(-1, 32, tagname='test') last_request_id = None while True: mm.seek(0) try: request_id = int(load(mm)) if request_id != last_request_id: last_request_id = request_id print(request_id) except Exception: pass client.py from mmap import mmap from pickle import dump with mmap(-1, 32, tagname='test') as mm: request_id = 1 dump(request_id, mm) So every time the server receives a new request id, the server prints the id on the console. But now I want to use the global scope. So I changed the tagname on both (client and server) to r'Global\test'. With that change, when the servers or the client starts it shows a permission error: PermissionError: [WinError 5] Access is denied So I read this awnswer that it is a security mechanism that prevents anyone to create a global memory mapped file and that administrators, IIS users, or services, by default have permission to create a global memory mapped file. Knowing that I created a Windows service that runs the server and it does not give any errors, the server was able to create the global mmap. But the problem persists on the client side (permission error). Reading the Microsoft docs, it says "the privilege check is limited to the creation of file-mapping objects"... " any process running in any session can access that file-mapping object provided that the user has the necessary access". I want to known what I have to do to read the global mmap created by the server on my client application. A: You need to call the OpenFileMapping() at the client. But currently the Python mmap module calls the CreateFileMapping() to open an existing file mapping object.(You can see that at this.) So you can't do what you want in pure Python. I recommend using other IPC mechanisms provided by multiprocessing module.(The multiprocessing.shared_memory.SharedMemory will not help because it calls the mmap API in the backend.) As a side note, I recommend you to make an issue report on here.
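As a concrete example of the suggested multiprocessing-based IPC, a Listener/Client pair from the standard library can stand in for the tagged mmap. This sketch keeps the question's request_id flow; the port and authkey are arbitrary assumptions:

server.py

from multiprocessing.connection import Listener

with Listener(("localhost", 6000), authkey=b"secret") as listener:
    with listener.accept() as conn:
        request_id = conn.recv()
        print(request_id)

client.py

from multiprocessing.connection import Client

with Client(("localhost", 6000), authkey=b"secret") as conn:
    conn.send(1)

A long-running server would simply wrap listener.accept() in a loop. Because the connection goes over localhost TCP, it works across sessions without the Global\ object-namespace privilege check that blocks the pure-mmap approach.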
Python mmap permission denied on Windows
I have the following code that works perfectly: server.py from mmap import mmap from pickle import load, dump mm = mmap(-1, 32, tagname='test') last_request_id = None while True: mm.seek(0) try: request_id = int(load(mm)) if request_id != last_request_id: last_request_id = request_id print(request_id) except Exception: pass client.py from mmap import mmap from pickle import dump with mmap(-1, 32, tagname='test') as mm: request_id = 1 dump(request_id, mm) So every time the server receives a new request id, the server prints the id on the console. But now I want to use the global scope. So I changed the tagname on both (client and server) to r'Global\test'. With that change, when the servers or the client starts it shows a permission error: PermissionError: [WinError 5] Access is denied So I read this awnswer that it is a security mechanism that prevents anyone to create a global memory mapped file and that administrators, IIS users, or services, by default have permission to create a global memory mapped file. Knowing that I created a Windows service that runs the server and it does not give any errors, the server was able to create the global mmap. But the problem persists on the client side (permission error). Reading the Microsoft docs, it says "the privilege check is limited to the creation of file-mapping objects"... " any process running in any session can access that file-mapping object provided that the user has the necessary access". I want to known what I have to do to read the global mmap created by the server on my client application.
[ "You need to call the OpenFileMapping() at the client. But currently the Python mmap module calls the CreateFileMapping() to open an existing file mapping object.(You can see that at this.)\nSo you can't do what you want in pure Python. I recommend using other IPC mechanisms provided by multiprocessing module.(The multiprocessing.shared_memory.SharedMemory will not help because it calls the mmap API in the backend.)\nAs a side note, I recommend you to make an issue report on here.\n" ]
[ 1 ]
[]
[]
[ "local_security_policy", "memory_mapped_files", "python", "windows", "windows_server_2019" ]
stackoverflow_0074607900_local_security_policy_memory_mapped_files_python_windows_windows_server_2019.txt
Q: KEYERROR issues with Flask-Mail I am having trouble with creating a flask server that can send confirmation emails. I get a KEY ERROR even though I have made sure to install both Flask and Flask_Mail through my Terminal Window. This is the code that generates the error: import os import re from flask import Flask, render_template,request,redirect from flask_mail import Mail, Message from cs50 import SQL app = Flask(__name__) app.config["MAIL_DEFAULT_SENDER"] = os.environ["MAIL_DEFAULT_SENDER"] app.config["MAIL_PASSWORD"] = os.environ["MAIL_PASSWORD"] app.config["MAIL_PORT"] = 587 # tcp port app.config["MAIL_SERVER"] = "smtp.gmail.com" app.config["MAIL_USE_TLS"] = True # use encryption= true app.config["MAIL_USERNAME"] = os.environ["MAIL_USERNAME"] mail = Mail(app) ... This is the error that is generated: Traceback (most recent call last): File "/opt/cs50/lib/flask", line 19, in <module> sys.exit(flask.cli.main()) File "/usr/local/lib/python3.10/site-packages/flask/cli.py", line 1047, in main cli.main() File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1055, in main rv = self.invoke(ctx) File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1657, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File "/usr/local/lib/python3.10/site-packages/click/core.py", line 760, in invoke return __callback(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/click/decorators.py", line 84, in new_func return ctx.invoke(f, obj, *args, **kwargs) File "/usr/local/lib/python3.10/site-packages/click/core.py", line 760, in invoke return __callback(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/flask/cli.py", line 911, in run_command raise e from None File "/usr/local/lib/python3.10/site-packages/flask/cli.py", line 897, in run_command app = info.load_app() File "/usr/local/lib/python3.10/site-packages/flask/cli.py", line 312, in load_app froshims5/ $ app = locate_app(import_name, None, raise_if_not_found=False) File "/usr/local/lib/python3.10/site-packages/flask/cli.py", line 218, in locate_app __import__(module_name) File "/workspaces/103199450/froshims5/app.py", line 10, in <module> app.config["MAIL_DEFAULT_SENDER"] = os.environ["MAIL_DEFAULT_SENDER"] File "/usr/local/lib/python3.10/os.py", line 679, in __getitem__ raise KeyError(key) from None KeyError: 'MAIL_DEFAULT_SENDER' What could be wrong with the code? A: Never mind, I somehow got it to work. I think the problem had to do with the environmental variables. What I did was change my code from the one above to this: app.config['MAIL_DEFAULT_SENDER'] = os.environ.get('MAIL_DEFAULT_SENDER') app.config["MAIL_PASSWORD"] = os.environ.get("MAIL_PASSWORD") app.config["MAIL_PORT"] = 587 app.config["MAIL_SERVER"] = "smtp.gmail.com" app.config["MAIL_USE_TLS"] = True app.config["MAIL_USERNAME"] = os.environ.get("MAIL_USERNAME") mail = Mail(app)
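To make the difference explicit, and to fail with a clearer message when a variable really is missing, one hedged pattern is to check the value before assigning it to the Flask config:

import os

# os.environ["X"] raises KeyError when X is unset;
# os.environ.get("X") quietly returns None instead.
sender = os.environ.get("MAIL_DEFAULT_SENDER")
if sender is None:
    raise RuntimeError("MAIL_DEFAULT_SENDER is not set in the environment")
print("Sender configured:", sender)

Silently passing None into MAIL_DEFAULT_SENDER only moves the failure to the first attempt to send mail, so an explicit check keeps the error close to its cause.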
KEYERROR issues with Flask-Mail
I am having trouble with creating a flask server that can send confirmation emails. I get a KEY ERROR even though I have made sure to install both Flask and Flask_Mail through my Terminal Window. This is the code that generates the error: import os import re from flask import Flask, render_template,request,redirect from flask_mail import Mail, Message from cs50 import SQL app = Flask(__name__) app.config["MAIL_DEFAULT_SENDER"] = os.environ["MAIL_DEFAULT_SENDER"] app.config["MAIL_PASSWORD"] = os.environ["MAIL_PASSWORD"] app.config["MAIL_PORT"] = 587 # tcp port app.config["MAIL_SERVER"] = "smtp.gmail.com" app.config["MAIL_USE_TLS"] = True # use encryption= true app.config["MAIL_USERNAME"] = os.environ["MAIL_USERNAME"] mail = Mail(app) ... This is the error that is generated: Traceback (most recent call last): File "/opt/cs50/lib/flask", line 19, in <module> sys.exit(flask.cli.main()) File "/usr/local/lib/python3.10/site-packages/flask/cli.py", line 1047, in main cli.main() File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1055, in main rv = self.invoke(ctx) File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1657, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File "/usr/local/lib/python3.10/site-packages/click/core.py", line 760, in invoke return __callback(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/click/decorators.py", line 84, in new_func return ctx.invoke(f, obj, *args, **kwargs) File "/usr/local/lib/python3.10/site-packages/click/core.py", line 760, in invoke return __callback(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/flask/cli.py", line 911, in run_command raise e from None File "/usr/local/lib/python3.10/site-packages/flask/cli.py", line 897, in run_command app = info.load_app() File "/usr/local/lib/python3.10/site-packages/flask/cli.py", line 312, in load_app froshims5/ $ app = locate_app(import_name, None, raise_if_not_found=False) File "/usr/local/lib/python3.10/site-packages/flask/cli.py", line 218, in locate_app __import__(module_name) File "/workspaces/103199450/froshims5/app.py", line 10, in <module> app.config["MAIL_DEFAULT_SENDER"] = os.environ["MAIL_DEFAULT_SENDER"] File "/usr/local/lib/python3.10/os.py", line 679, in __getitem__ raise KeyError(key) from None KeyError: 'MAIL_DEFAULT_SENDER' What could be wrong with the code?
[ "Never mind, I somehow got it to work. I think the problem had to do with the environmental variables. What I did was change my code from the one above to this:\napp.config['MAIL_DEFAULT_SENDER'] = os.environ.get('MAIL_DEFAULT_SENDER')\napp.config[\"MAIL_PASSWORD\"] = os.environ.get(\"MAIL_PASSWORD\")\napp.config[\"MAIL_PORT\"] = 587\napp.config[\"MAIL_SERVER\"] = \"smtp.gmail.com\"\napp.config[\"MAIL_USE_TLS\"] = True\napp.config[\"MAIL_USERNAME\"] = os.environ.get(\"MAIL_USERNAME\")\nmail = Mail(app)\n\n" ]
[ 0 ]
[]
[]
[ "cs50", "flask", "flask_mail", "html", "python" ]
stackoverflow_0074604904_cs50_flask_flask_mail_html_python.txt
Q: tkinter callback event to retrieve value from Entry and write to text file *First of all, i am trying to create a register system saved into textfile(not real system, i know its not safe to write into textfiles) Created the GUI and then i defined multiple functions which is Menu, register and submit. Submit function is nested and inside register function. The problem is when i nested the functions it doesn't write into textfiles, but when i delete the register function, it works. When i press register and write something in the Entry box, it didnt record to the text files, i have been scratching my head as to find what the error in my code is. Edit: I now have put a picture to get better understanding. the blue window is main menu and its the mainloop and the yellow window is appearing when i click register button.* picture of what should be my app from tkinter import * def Register(): def Submit(): elev = open("bruker.txt","a", encoding="utf-8") elev.write(brukeridinc.get()+"\n") elev.write(fornavn.get()+"\n") elev.write(etternavn.get()+"\n") register = Tk() register.geometry("500x500") register.configure(bg="yellow") register.title("Bibliotekapp Login") label_brukerid = Label(register, text="Brukerid:", bg="white", font=("Arial", 25)) label_brukerid.grid(row=1, column=0, padx=5, pady=5, sticky=E) label_fornavn = Label(register, text="Fornavn:", bg="white", font=("Arial", 25)) label_fornavn.grid(row=2, column=0, padx=5, pady=5, sticky=E) label_etternavn = Label(register, text="Etternavn:", bg="white", font=("Arial", 25)) label_etternavn.grid(row=3, column=0, padx=5, pady=5, sticky=E) brukeridinc= StringVar() entry_brukerid= Entry(register, width=10, textvariable=brukeridinc, font=("Arial", 25)) entry_brukerid.grid(row=1, column=1, padx=5, pady=5, sticky=W) fornavn= StringVar() entry_fornavn= Entry(register, width=10, textvariable=fornavn, font=("Arial", 25)) entry_fornavn.grid(row=2, column=1, padx=5, pady=5, sticky=W) etternavn= StringVar() entry_etternavn= Entry(register, width=10, textvariable=etternavn, font=("Arial", 25)) entry_etternavn.grid(row=3, column=1, padx=5, pady=5, sticky=W) button_resultat= Button(register, text="Enter", command=Submit, height=2, width=15) button_resultat.grid(row=4, column=1, padx=5, pady=5, sticky=W) informasjon= StringVar() entry_informasjon= Entry(register, width=24, textvariable=informasjon, state="readonly", font=("Arial", 25)) entry_informasjon.grid(row=5, column=1, padx=5, pady=5, sticky=W) #GUI start = Tk() start.geometry("500x500") start.configure(bg="lightblue") start.title("Bibliotekapp") button_register = Button(start, text="Register", bg="white", font=("Arial", 25), command=Register) button_register.grid(row=5, column=5, padx=5, pady=5, sticky=E) start.mainloop() A: I do not understand how you write code. You are using too many duplicates. I will not going to explain to you. In line 51 and 54 , I changed command to None. You can replace it. 
Here is code: from tkinter import * start = Tk() start.geometry("800x500") start.configure(bg="lightblue") start.title("Bibliotekapp") def menu(): start.geometry("500x500") start.title("Bibliotekapp") start.configure(bg="red") def Register(): start.geometry("800x800") start.configure(bg="yellow") start.title("Bibliotekapp Login") def Submit(): elev = open("bruker.txt", "r") sjekk = elev.read() if brukeridinc.get() in sjekk: informasjon.set("User already exists") else: elev.close() elev = open("bruker.txt", "a") elev.write(brukeridinc.get() + "") elev.write(fornavn.get() + "") elev.write(etternavn.get() + "\n") elev.close() informasjon.set("You are started") ####### GUI ############ label_brukerid = Label(start, text="Brukerid:", bg="white", font=("Arial", 25)) label_brukerid.grid(row=1, column=0, padx=5, pady=5, sticky=E) label_fornavn = Label(start, text="Fornavn:", bg="white", font=("Arial", 25)) label_fornavn.grid(row=2, column=0, padx=5, pady=5, sticky=E) label_etternavn = Label(start, text="Etternavn:", bg="white", font=("Arial", 25)) label_etternavn.grid(row=3, column=0, padx=5, pady=5, sticky=E) brukeridinc= StringVar() entry_brukerid= Entry(start, width=10, textvariable=brukeridinc, font=("Arial", 25)) entry_brukerid.grid(row=1, column=1, padx=5, pady=5, sticky=W) fornavn= StringVar() entry_fornavn= Entry(start, width=10, textvariable=fornavn, font=("Arial", 25)) entry_fornavn.grid(row=2, column=1, padx=5, pady=5, sticky=W) etternavn= StringVar() entry_etternavn= Entry(start, width=10, textvariable=etternavn, font=("Arial", 25)) entry_etternavn.grid(row=3, column=1, padx=5, pady=5, sticky=W) button_start = Button(start, text="start", bg="white", font=("Arial", 25), command=Submit) button_start.grid(row=5, column=5, padx=5, pady=5, sticky=E) button_login = Button(start, text="Login", bg="white", font=("Arial", 25), command=menu) button_login.grid(row=6, column=5, padx=5, pady=5, sticky=E) button_lån = Button(start, text="Lån", bg="white", font=("Arial", 25), command=Register) button_lån.grid(row=7, column=5, padx=5, pady=5, sticky=E) informasjon= StringVar() entry_informasjon= Entry(start, width=24, textvariable=informasjon, state="readonly", font=("Arial", 25)) entry_informasjon.grid(row=5, column=1, padx=5, pady=5, sticky=W) start.mainloop() Result: Result Red: Result for yellow:
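One common source of trouble with the original layout is that Register() creates a second Tk() root; StringVar objects then attach to whichever root happens to be the default, and the entries in the other window read back empty, so nothing useful reaches the file. A hedged sketch of the same flow using a Toplevel child window instead (variable and file names follow the question; the layout is reduced to the essentials):

import tkinter as tk

def register():
    win = tk.Toplevel(start)                      # child window of the single root
    brukeridinc = tk.StringVar(master=win)
    tk.Entry(win, textvariable=brukeridinc).pack()

    def submit():
        with open("bruker.txt", "a", encoding="utf-8") as elev:
            elev.write(brukeridinc.get() + "\n")

    tk.Button(win, text="Enter", command=submit).pack()

start = tk.Tk()
tk.Button(start, text="Register", command=register).pack()
start.mainloop()

Using a with-block for the file also guarantees the write is flushed to bruker.txt as soon as Enter is pressed, rather than waiting for the file object to be closed implicitly.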
tkinter callback event to retrieve value from Entry and write to text file
*First of all, i am trying to create a register system saved into textfile(not real system, i know its not safe to write into textfiles) Created the GUI and then i defined multiple functions which is Menu, register and submit. Submit function is nested and inside register function. The problem is when i nested the functions it doesn't write into textfiles, but when i delete the register function, it works. When i press register and write something in the Entry box, it didnt record to the text files, i have been scratching my head as to find what the error in my code is. Edit: I now have put a picture to get better understanding. the blue window is main menu and its the mainloop and the yellow window is appearing when i click register button.* picture of what should be my app from tkinter import * def Register(): def Submit(): elev = open("bruker.txt","a", encoding="utf-8") elev.write(brukeridinc.get()+"\n") elev.write(fornavn.get()+"\n") elev.write(etternavn.get()+"\n") register = Tk() register.geometry("500x500") register.configure(bg="yellow") register.title("Bibliotekapp Login") label_brukerid = Label(register, text="Brukerid:", bg="white", font=("Arial", 25)) label_brukerid.grid(row=1, column=0, padx=5, pady=5, sticky=E) label_fornavn = Label(register, text="Fornavn:", bg="white", font=("Arial", 25)) label_fornavn.grid(row=2, column=0, padx=5, pady=5, sticky=E) label_etternavn = Label(register, text="Etternavn:", bg="white", font=("Arial", 25)) label_etternavn.grid(row=3, column=0, padx=5, pady=5, sticky=E) brukeridinc= StringVar() entry_brukerid= Entry(register, width=10, textvariable=brukeridinc, font=("Arial", 25)) entry_brukerid.grid(row=1, column=1, padx=5, pady=5, sticky=W) fornavn= StringVar() entry_fornavn= Entry(register, width=10, textvariable=fornavn, font=("Arial", 25)) entry_fornavn.grid(row=2, column=1, padx=5, pady=5, sticky=W) etternavn= StringVar() entry_etternavn= Entry(register, width=10, textvariable=etternavn, font=("Arial", 25)) entry_etternavn.grid(row=3, column=1, padx=5, pady=5, sticky=W) button_resultat= Button(register, text="Enter", command=Submit, height=2, width=15) button_resultat.grid(row=4, column=1, padx=5, pady=5, sticky=W) informasjon= StringVar() entry_informasjon= Entry(register, width=24, textvariable=informasjon, state="readonly", font=("Arial", 25)) entry_informasjon.grid(row=5, column=1, padx=5, pady=5, sticky=W) #GUI start = Tk() start.geometry("500x500") start.configure(bg="lightblue") start.title("Bibliotekapp") button_register = Button(start, text="Register", bg="white", font=("Arial", 25), command=Register) button_register.grid(row=5, column=5, padx=5, pady=5, sticky=E) start.mainloop()
[ "I do not understand how you write code. You are using too many duplicates. I will not going to explain to you. In line 51 and 54 , I changed command to None. You can replace it.\nHere is code:\nfrom tkinter import *\n\nstart = Tk()\nstart.geometry(\"800x500\")\nstart.configure(bg=\"lightblue\")\nstart.title(\"Bibliotekapp\")\n \ndef menu():\n start.geometry(\"500x500\")\n start.title(\"Bibliotekapp\")\n start.configure(bg=\"red\")\n\ndef Register():\n start.geometry(\"800x800\")\n start.configure(bg=\"yellow\")\n start.title(\"Bibliotekapp Login\")\n \ndef Submit():\n elev = open(\"bruker.txt\", \"r\")\n sjekk = elev.read()\n\n if brukeridinc.get() in sjekk:\n informasjon.set(\"User already exists\")\n\n else:\n elev.close()\n elev = open(\"bruker.txt\", \"a\")\n elev.write(brukeridinc.get() + \"\")\n elev.write(fornavn.get() + \"\")\n elev.write(etternavn.get() + \"\\n\")\n elev.close()\n informasjon.set(\"You are started\")\n\n\n####### GUI ############\nlabel_brukerid = Label(start, text=\"Brukerid:\", bg=\"white\", font=(\"Arial\", 25))\nlabel_brukerid.grid(row=1, column=0, padx=5, pady=5, sticky=E)\n\nlabel_fornavn = Label(start, text=\"Fornavn:\", bg=\"white\", font=(\"Arial\", 25))\nlabel_fornavn.grid(row=2, column=0, padx=5, pady=5, sticky=E)\n\nlabel_etternavn = Label(start, text=\"Etternavn:\", bg=\"white\", font=(\"Arial\", 25))\nlabel_etternavn.grid(row=3, column=0, padx=5, pady=5, sticky=E)\n\nbrukeridinc= StringVar()\nentry_brukerid= Entry(start, width=10, textvariable=brukeridinc, font=(\"Arial\", 25))\nentry_brukerid.grid(row=1, column=1, padx=5, pady=5, sticky=W) \n\nfornavn= StringVar()\nentry_fornavn= Entry(start, width=10, textvariable=fornavn, font=(\"Arial\", 25))\nentry_fornavn.grid(row=2, column=1, padx=5, pady=5, sticky=W) \n\netternavn= StringVar()\nentry_etternavn= Entry(start, width=10, textvariable=etternavn, font=(\"Arial\", 25))\nentry_etternavn.grid(row=3, column=1, padx=5, pady=5, sticky=W) \n\nbutton_start = Button(start, text=\"start\", bg=\"white\", font=(\"Arial\", 25), command=Submit)\nbutton_start.grid(row=5, column=5, padx=5, pady=5, sticky=E)\n\nbutton_login = Button(start, text=\"Login\", bg=\"white\", font=(\"Arial\", 25), command=menu)\nbutton_login.grid(row=6, column=5, padx=5, pady=5, sticky=E)\n\nbutton_lån = Button(start, text=\"Lån\", bg=\"white\", font=(\"Arial\", 25), command=Register)\nbutton_lån.grid(row=7, column=5, padx=5, pady=5, sticky=E)\n \ninformasjon= StringVar()\nentry_informasjon= Entry(start, width=24, textvariable=informasjon, state=\"readonly\", font=(\"Arial\", 25))\nentry_informasjon.grid(row=5, column=1, padx=5, pady=5, sticky=W)\n \nstart.mainloop()\n\nResult:\n\nResult Red:\n\nResult for yellow:\n\n" ]
[ 0 ]
[]
[]
[ "event_handling", "function", "python", "tkinter", "user_interface" ]
stackoverflow_0074610943_event_handling_function_python_tkinter_user_interface.txt
Q: Cannot import django-smart-selects I wanted to use the django smart select library to create related dropdowns. I did everything as indicated in the library documentation, but an error occurs: import "smart_selects.db_fields" could not be resolved Pylance(reportMissingImports) [Ln2, Col6] Even when I enter "import ..." the library itself already glows gray, as if it does not find it. This is what my INSTALLED_APPS in settings.py looks like: INSTALLED_APPS = [ 'routes_form.apps.RoutesFormConfig', 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'smart_selects', ] USE_DJANGO_JQUERY = True This is my urls.py: from django.contrib import admin from django.urls import path, include urlpatterns = [ path('form/', include('routes_form.urls')), path('admin/', admin.site.urls), path(r'^chaining/', include('smart_selects.urls')), ] ...and my model.py: from django.db import models from smart_selects.db_fields import ChainedForeignKey I tried to find a solution to the problem, looked for possible options. That is why I already changed from `JQUERY_URL = True` to `USE_DJANGO_JQUERY = True`. Errors (six) I did not take off. I have only this...: `import "smart_selects.db_fields" could not be resolved Pylance(reportMissingImports) [Ln2, Col6]` I would be incredibly grateful even for trying to help. SOLUTION: Just need typing in the path to library 'Import "Path.to.own.script" could not be resolved Pylance (reportMissingImports)' in VS Code using Python 3.x on Ubuntu 20.04 LTS A: You can use: USE_DJANGO_JQUERY = True instead of JQUERY_URL = True in your settings.py Please reply to this message if the issue still persist.
Cannot import django-smart-selects
I wanted to use the django smart select library to create related dropdowns. I did everything as indicated in the library documentation, but an error occurs: import "smart_selects.db_fields" could not be resolved Pylance(reportMissingImports) [Ln2, Col6] Even when I enter "import ..." the library itself already glows gray, as if it does not find it. This is what my INSTALLED_APPS in settings.py looks like: INSTALLED_APPS = [ 'routes_form.apps.RoutesFormConfig', 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'smart_selects', ] USE_DJANGO_JQUERY = True This is my urls.py: from django.contrib import admin from django.urls import path, include urlpatterns = [ path('form/', include('routes_form.urls')), path('admin/', admin.site.urls), path(r'^chaining/', include('smart_selects.urls')), ] ...and my model.py: from django.db import models from smart_selects.db_fields import ChainedForeignKey I tried to find a solution to the problem, looked for possible options. That is why I already changed from `JQUERY_URL = True` to `USE_DJANGO_JQUERY = True`. Errors (six) I did not take off. I have only this...: `import "smart_selects.db_fields" could not be resolved Pylance(reportMissingImports) [Ln2, Col6]` I would be incredibly grateful even for trying to help. SOLUTION: Just need typing in the path to library 'Import "Path.to.own.script" could not be resolved Pylance (reportMissingImports)' in VS Code using Python 3.x on Ubuntu 20.04 LTS
[ "You can use:\nUSE_DJANGO_JQUERY = True instead of JQUERY_URL = True in your \nsettings.py\n\nPlease reply to this message if the issue still persist.\n" ]
[ 0 ]
[]
[]
[ "django", "django_forms", "django_smart_selects", "dropdown", "python" ]
stackoverflow_0074593363_django_django_forms_django_smart_selects_dropdown_python.txt
Q: how to send email from gmai.com to hotmail.com/yahoo.com with colab, the words and pictures have become unormal I want to send messages from "[email protected]" to several emails like gmail or hotmail, yahoo etc. However, when I send this message. the hotmail words have become several html files instead of real words. When I read this hotmail from my iphone, the picture of "address.png" became the random numbers. Does anyone know how to mitigate those problems ? I want the email to contain plaintext words and the picture. import numpy as np import os import pandas as pd import csv from string import Template import smtplib from pathlib import Path from email import policy from email.mime.text import MIMEText from email.mime.image import MIMEImage from email.mime.multipart import MIMEMultipart from email.mime.application import MIMEApplication from google.colab import drive drive.mount('/content/drive') df=pd.read_csv('/content/drive/MyDrive/inform_test.csv') a=np.shape(df) for k in range(0,a[0]): content = MIMEMultipart() content["subject"] = "title" content["from"] = "[email protected]" content["to"] = df.iloc[k,1] content.attach( MIMEText(df.iloc[k,0],"html")) main_content = "hello world" content.attach( MIMEText(main_content,"html")) content.attach( MIMEText("<br>","html")) content.attach( MIMEText("<br>","html")) content.attach( MIMEText("<br>","html")) content.attach( MIMEText("phone","html")) content.attach( MIMEText("best regard","html")) content.attach(MIMEImage(Path("/content/drive/MyDrive/mail_test/address.png").read_bytes())) #print(k) with smtplib.SMTP(host="smtp.gmail.com", port="587") as smtp: try: smtp.ehlo() smtp.starttls() smtp.login("[email protected]", "aasjwgeaymtajuks") smtp.send_message(content) print("successful") except Exception as e: print("Error message: ", e) A: Try using SMTP. It is a standard Python packages. And the syntax is pretty easy. Tutorial link - https://www.youtube.com/watch?v=JRCJ6RtE3xU
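A hedged guess at what is going wrong above: attaching half a dozen separate MIMEText parts plus a bare MIMEImage to one multipart/mixed container makes clients such as Outlook/Hotmail show each part as an .html attachment, and an image part without a Content-ID can be rendered as raw base64 on iOS Mail. One approach that tends to display consistently is a single HTML body that references the picture by Content-ID, which the modern email.message.EmailMessage API supports directly. In the sketch below the recipient address and the app password are placeholders (the real "To" would come from df.iloc[k, 1] as in the question), and the body text is reduced to the same few lines:

import smtplib
from email.message import EmailMessage
from email.utils import make_msgid
from pathlib import Path

msg = EmailMessage()
msg["Subject"] = "title"
msg["From"] = "[email protected]"
msg["To"] = "[email protected]"        # placeholder; inside the loop this would be df.iloc[k, 1]

msg.set_content("hello world\n\nphone\nbest regard")      # plain-text fallback part

image_cid = make_msgid()                                   # Content-ID for the inline picture
msg.add_alternative(
    "<p>hello world</p><br><br><br><p>phone</p><p>best regard</p>"
    f'<img src="cid:{image_cid[1:-1]}">',
    subtype="html",
)
# attach the PNG *inside* the HTML alternative so clients render it inline
msg.get_payload()[1].add_related(
    Path("/content/drive/MyDrive/mail_test/address.png").read_bytes(),
    "image", "png", cid=image_cid,
)

with smtplib.SMTP("smtp.gmail.com", 587) as smtp:
    smtp.ehlo()
    smtp.starttls()
    smtp.login("[email protected]", "app-password-goes-here")   # app password, not shown
    smtp.send_message(msg)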
how to send email from gmail.com to hotmail.com/yahoo.com with Colab, the words and pictures have become abnormal
I want to send messages from "[email protected]" to several emails like gmail or hotmail, yahoo etc. However, when I send this message. the hotmail words have become several html files instead of real words. When I read this hotmail from my iphone, the picture of "address.png" became the random numbers. Does anyone know how to mitigate those problems ? I want the email to contain plaintext words and the picture. import numpy as np import os import pandas as pd import csv from string import Template import smtplib from pathlib import Path from email import policy from email.mime.text import MIMEText from email.mime.image import MIMEImage from email.mime.multipart import MIMEMultipart from email.mime.application import MIMEApplication from google.colab import drive drive.mount('/content/drive') df=pd.read_csv('/content/drive/MyDrive/inform_test.csv') a=np.shape(df) for k in range(0,a[0]): content = MIMEMultipart() content["subject"] = "title" content["from"] = "[email protected]" content["to"] = df.iloc[k,1] content.attach( MIMEText(df.iloc[k,0],"html")) main_content = "hello world" content.attach( MIMEText(main_content,"html")) content.attach( MIMEText("<br>","html")) content.attach( MIMEText("<br>","html")) content.attach( MIMEText("<br>","html")) content.attach( MIMEText("phone","html")) content.attach( MIMEText("best regard","html")) content.attach(MIMEImage(Path("/content/drive/MyDrive/mail_test/address.png").read_bytes())) #print(k) with smtplib.SMTP(host="smtp.gmail.com", port="587") as smtp: try: smtp.ehlo() smtp.starttls() smtp.login("[email protected]", "aasjwgeaymtajuks") smtp.send_message(content) print("successful") except Exception as e: print("Error message: ", e)
[ "Try using SMTP. It is a standard Python packages. And the syntax is pretty easy.\nTutorial link - https://www.youtube.com/watch?v=JRCJ6RtE3xU\n" ]
[ 0 ]
[]
[]
[ "gmail", "hotmail", "html_email", "mimemultipart", "python" ]
stackoverflow_0074611777_gmail_hotmail_html_email_mimemultipart_python.txt
Q: Django Authentication issue after reseting password I am using django 1.8 for my project and I have tried using django.contrib.auth.middleware.SessionAuthenticationMiddleware in middleware to log off the other session after resetting the password. This is fine but the problem I am facing is after resetting it is logging off even that session who changed the password. I want that after resetting the password, the session where we change the password do not get log off. Our user model is inherited from AbstractUser A: If you use your own view to change password, django gives you ability to update the session after changing password so that the user isn't logged off. For that you can use update_session_auth_hash function. Django's user_change_password update the session after password change. But for you own custom views, you can use update_session_auth_hash like this: from django.contrib.auth import update_session_auth_hash def password_change(request): if request.method == 'POST': form = PasswordChangeForm(user=request.user, data=request.POST) if form.is_valid(): form.save() update_session_auth_hash(request, form.user) else: ... Django docs about session invalidation on password change Instead of going through all this, you can use django-allauth instead. It is an awesome app and has all sort of authentication functionality to it. A: For some reason the accepted answer didn't work for me. So I did the obvious... I logged in the user again as usual. Example of a ChangePasswordForm overriding the save() method. class ChangePasswordForm(models.Form): old_password = forms.CharField(widget=forms.PasswordInput()) password1 = forms.CharField(widget=forms.PasswordInput()) password2 = forms.CharField(widget=forms.PasswordInput()) def __init__(self, *args, **kwargs): self.request = kwargs.pop("request") super().__init__(*args, **kwargs) def clean(self): data = self.cleaned_data # authenticate user here (using old_password) # check if password1 == password2 return data def save(self, commit=True): user = super().save(commit=False) if commit: user.save() login(self.request, user) # HERE IS THE WHOLE THING return user
Django Authentication issue after resetting password
I am using Django 1.8 for my project and I have tried using django.contrib.auth.middleware.SessionAuthenticationMiddleware in the middleware to log off other sessions after resetting the password. This works fine, but the problem I am facing is that after the reset it also logs off the session that changed the password. I want the session in which the password was changed to stay logged in after the reset. Our user model inherits from AbstractUser.
[ "If you use your own view to change password, django gives you ability to update the session after changing password so that the user isn't logged off.\nFor that you can use update_session_auth_hash function.\nDjango's user_change_password update the session after password change. But for you own custom views, you can use update_session_auth_hash like this:\nfrom django.contrib.auth import update_session_auth_hash\n\ndef password_change(request):\n if request.method == 'POST':\n form = PasswordChangeForm(user=request.user, data=request.POST)\n if form.is_valid():\n form.save()\n update_session_auth_hash(request, form.user)\n else:\n ...\n\nDjango docs about session invalidation on password change\nInstead of going through all this, you can use django-allauth instead. It is an awesome app and has all sort of authentication functionality to it.\n", "For some reason the accepted answer didn't work for me. So I did the obvious... I logged in the user again as usual. Example of a ChangePasswordForm overriding the save() method.\nclass ChangePasswordForm(models.Form):\n old_password = forms.CharField(widget=forms.PasswordInput())\n password1 = forms.CharField(widget=forms.PasswordInput())\n password2 = forms.CharField(widget=forms.PasswordInput())\n\n def __init__(self, *args, **kwargs):\n self.request = kwargs.pop(\"request\")\n super().__init__(*args, **kwargs)\n\n def clean(self):\n data = self.cleaned_data\n # authenticate user here (using old_password)\n # check if password1 == password2\n return data\n\n def save(self, commit=True):\n user = super().save(commit=False)\n if commit:\n user.save()\n login(self.request, user) # HERE IS THE WHOLE THING \n\n return user\n\n" ]
[ 5, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0036350317_django_python.txt
Q: visualize overlapping communities in graph by any of the python or R modules How can I visualize communities if there are overlapping communities in the graph? I can use any module in python (networkx, igraph, matplotlib, etc.) or R. For example, information on nodes, edges, and the nodes in each community is given as follows. Note that node G spans two communities. list_nodes = ['A', 'B', 'C', 'D','E','F','G','H','I','J'] tuple_edges = [('A','B'),('A','C'),('A','D'),('B','C'),('B','D'), ('C','D'),('C','E'), ('E','F'),('E','G'),('F','G'),('G','H'), ('G','I'), ('G','J'),('H','I'),('H','J'),('I','J'),] list_communities = [['A', 'B', 'C', 'D'],['E','F','G'],['G', 'H','I','J']] I would like a plot that visualizes the community as shown below. In networkx, it is possible to colour-code each node like this post, but this method is not suitable when communities overlap. In igraph, communities can be visualised using the community extraction method included in the package, as described in this post. However, in my case I want to define communities using the list of nodes contained in each community. A: Below is an option for igraph within R. I think you may have to annotate the community info manually (see grp below) and then use it when plotting, e.g., g <- graph_from_data_frame(df, directed = FALSE) grp <- lapply( groups(cluster_edge_betweenness(g)), function(x) { c( x, names(which(colSums(distances(g, x) == 1) > 1)) ) } ) plot(g, mark.groups = grp) or using max_cliques with minimal size 3 (thank @clp's comment) g <- graph_from_data_frame(df, directed = FALSE) grp <- max_cliques(g, min = 3) plot(g, mark.groups = grp) Data df <- data.frame( from = c("A", "A", "A", "B", "B", "C", "C", "E", "E", "F", "G", "G", "G", "H", "H", "I"), to = c("B", "C", "D", "C", "D", "D", "E", "F", "G", "G", "H", "I", "J", "I", "J", "J") )
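For a pure-Python alternative (the question allows networkx/matplotlib as well), one option is to draw the graph once and then mark each community from list_communities with a translucent circle, much as igraph's mark.groups does. The circles below are a hand-rolled approximation — a centre plus a radius that reaches the farthest member — so with an unlucky layout a circle can also cover a node outside the community; it is a sketch, not a library feature:

import matplotlib.pyplot as plt
import networkx as nx

list_nodes = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J']
tuple_edges = [('A','B'),('A','C'),('A','D'),('B','C'),('B','D'),('C','D'),('C','E'),
               ('E','F'),('E','G'),('F','G'),('G','H'),('G','I'),('G','J'),
               ('H','I'),('H','J'),('I','J')]
list_communities = [['A', 'B', 'C', 'D'], ['E', 'F', 'G'], ['G', 'H', 'I', 'J']]

G = nx.Graph()
G.add_nodes_from(list_nodes)
G.add_edges_from(tuple_edges)
pos = nx.spring_layout(G, seed=42)              # fixed seed -> reproducible picture

fig, ax = plt.subplots()
nx.draw_networkx(G, pos, ax=ax, node_color="lightgray")

for community, color in zip(list_communities, ["tab:red", "tab:blue", "tab:green"]):
    xs = [pos[n][0] for n in community]
    ys = [pos[n][1] for n in community]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)            # community centre
    radius = max(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5      # farthest member...
                 for x, y in zip(xs, ys)) + 0.15             # ...plus a margin
    ax.add_patch(plt.Circle((cx, cy), radius, color=color, alpha=0.2, zorder=0))

ax.set_axis_off()
plt.show()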
visualize overlapping communities in a graph with any Python or R module
How can I visualize communities if there are overlapping communities in the graph? I can use any module in python (networkx, igraph, matplotlib, etc.) or R. For example, information on nodes, edges, and the nodes in each community is given as follows. Note that node G spans two communities. list_nodes = ['A', 'B', 'C', 'D','E','F','G','H','I','J'] tuple_edges = [('A','B'),('A','C'),('A','D'),('B','C'),('B','D'), ('C','D'),('C','E'), ('E','F'),('E','G'),('F','G'),('G','H'), ('G','I'), ('G','J'),('H','I'),('H','J'),('I','J'),] list_communities = [['A', 'B', 'C', 'D'],['E','F','G'],['G', 'H','I','J']] I would like a plot that visualizes the community as shown below. In networkx, it is possible to colour-code each node like this post, but this method is not suitable when communities overlap. In igraph, communities can be visualised using the community extraction method included in the package, as described in this post. However, in my case I want to define communities using the list of nodes contained in each community.
[ "Below is an option for igraph within R.\n\nI think you may have to annotate the community info manually (see grp below) and then use it when plotting, e.g.,\ng <- graph_from_data_frame(df, directed = FALSE)\ngrp <- lapply(\n groups(cluster_edge_betweenness(g)),\n function(x) {\n c(\n x,\n names(which(colSums(distances(g, x) == 1) > 1))\n )\n }\n)\nplot(g, mark.groups = grp)\n\nor using max_cliques with minimal size 3 (thank @clp's comment)\ng <- graph_from_data_frame(df, directed = FALSE)\ngrp <- max_cliques(g, min = 3)\nplot(g, mark.groups = grp)\n\n\nData\ndf <- data.frame(\n from = c(\"A\", \"A\", \"A\", \"B\", \"B\", \"C\", \"C\", \"E\", \"E\", \"F\", \"G\", \"G\", \"G\", \"H\", \"H\", \"I\"),\n to = c(\"B\", \"C\", \"D\", \"C\", \"D\", \"D\", \"E\", \"F\", \"G\", \"G\", \"H\", \"I\", \"J\", \"I\", \"J\", \"J\")\n)\n\n" ]
[ 3 ]
[]
[]
[ "igraph", "networkx", "python", "r", "visualization" ]
stackoverflow_0074609794_igraph_networkx_python_r_visualization.txt
Q: Regex : replace url inside string i have string = 'Server:xxx-zzzzzzzzz.eeeeeeeeeee.frPIPELININGSIZE' i need a python regex expression to identify xxx-zzzzzzzzz.eeeeeeeeeee.fr to do a sub-string function to it Expected output : string : 'Server:PIPELININGSIZE' the URL is inside a string, i tried a lot of regex expressions A: No regex. single line use just to split on your target word. string = 'Server:xxx-zzzzzzzzz.eeeeeeeeeee.frPIPELININGSIZE' last = string.split("fr",1)[1] first =string[:string.index(":")] print(f'{first} : {last}') Gives # Server:PIPELININGSIZE A: Not sure if this helps, because your question was quite vaguely formulated. :) import re string = 'Server:xxx-zzzzzzzzz.eeeeeeeeeee.frPIPELININGSIZE' string_1 = re.search('[a-z.-]+([A-Z]+)', string).group(1) print(f'string: Server:{string_1}') Output: string: Server:PIPELININGSIZE A: The wording of the question suggests that you wish to find the hostname in the string, but the expected output suggests that you want to remove it. The following regular expression will create a tuple and allow you to do either. import re str = "Server:xxx-zzzzzzzzz.eeeeeeeeeee.frPIPELININGSIZE" p = re.compile('^([A-Za-z]+[:])(.*?)([A-Z]+)$') m = re.search(p, str) result = m.groups() # ('Server:', 'xxx-zzzzzzzzz.eeeeeeeeeee.fr', 'PIPELININGSIZE') Remove the hostname: print(f'{result[0]} {result[2]}') # Output: 'Server: PIPELININGSIZE' Extract the hostname: print(result[1]) # Output: 'xxx-zzzzzzzzz.eeeeeeeeeee.fr'
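Another hedged option is to match the hostname by its own shape — lowercase labels joined by dots — instead of splitting on the literal "fr", which only works while the domain happens to end in .fr. This assumes the hostname is the only dotted, all-lowercase token in the string:

import re

string = 'Server:xxx-zzzzzzzzz.eeeeeeeeeee.frPIPELININGSIZE'

# one or more lowercase/digit/hyphen labels joined by dots, e.g. xxx-zzz.eee.fr
host_pattern = r'[a-z0-9-]+(?:\.[a-z0-9-]+)+'

host = re.search(host_pattern, string).group(0)       # 'xxx-zzzzzzzzz.eeeeeeeeeee.fr'
cleaned = re.sub(host_pattern, '', string, count=1)   # 'Server:PIPELININGSIZE'

print(host)
print(cleaned)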
Regex: replace URL inside string
I have string = 'Server:xxx-zzzzzzzzz.eeeeeeeeeee.frPIPELININGSIZE' and I need a Python regex that identifies xxx-zzzzzzzzz.eeeeeeeeeee.fr so I can strip it out of the string. Expected output: string = 'Server:PIPELININGSIZE'. The URL is inside the string; I have tried a lot of regex expressions.
[ "No regex. single line use just to split on your target word.\nstring = 'Server:xxx-zzzzzzzzz.eeeeeeeeeee.frPIPELININGSIZE'\n\nlast = string.split(\"fr\",1)[1]\n\nfirst =string[:string.index(\":\")]\nprint(f'{first} : {last}')\n\nGives #\nServer:PIPELININGSIZE\n\n", "Not sure if this helps, because your question was quite vaguely formulated. :)\nimport re\n\nstring = 'Server:xxx-zzzzzzzzz.eeeeeeeeeee.frPIPELININGSIZE'\n\nstring_1 = re.search('[a-z.-]+([A-Z]+)', string).group(1)\n\nprint(f'string: Server:{string_1}')\n\nOutput:\nstring: Server:PIPELININGSIZE\n\n", "The wording of the question suggests that you wish to find the hostname in the string, but the expected output suggests that you want to remove it. The following regular expression will create a tuple and allow you to do either.\nimport re\n\nstr = \"Server:xxx-zzzzzzzzz.eeeeeeeeeee.frPIPELININGSIZE\"\n\np = re.compile('^([A-Za-z]+[:])(.*?)([A-Z]+)$')\nm = re.search(p, str)\nresult = m.groups()\n# ('Server:', 'xxx-zzzzzzzzz.eeeeeeeeeee.fr', 'PIPELININGSIZE')\n\nRemove the hostname:\nprint(f'{result[0]} {result[2]}')\n# Output: 'Server: PIPELININGSIZE'\n\nExtract the hostname:\nprint(result[1])\n# Output: 'xxx-zzzzzzzzz.eeeeeeeeeee.fr'\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "python", "replace", "string", "url" ]
stackoverflow_0074611048_python_replace_string_url.txt
Q: how to remove item not match in list of object list Here is my list, it's list of object and inside object there is list: please rev [ { "id": 1, "test": [ { "id__": 1 }, { "id__": 1 }, { "id__": 1 }, { "id__": 2 } ] }, { "id": 2, "test": [ { "id__": 1 }, { "id__": 1 }, { "id__": 1 }, { "id__": 2 } ] } ] I want to remove matched id with one in objecso it can be like this : [ { "id": 1, "test": [ { "id__": 1 }, { "id__": 1 }, { "id__": 1 } ] }, { "id": 2, "test": [ { "id__": 2 } ] } ] Here is what I try: and notice that is final is the list mentioned above for i in final: for j in i["test"]: if j['id__'] == i["id"]: i.pop() can I use some help of you kind guys, I tried with remove attribute in list, and still no result satisfied. A: This is probably what you want: filtered_result = [] for i in a: lst_id = i["id"] lst_to_compare = i["test"] filtered_inner_list = [item for item in lst_to_compare if item["id__"] == lst_id] filtered_result.append({"id": lst_id, "test": filtered_inner_list}) print(filtered_result) A: Here's how I would solve this problem, hope it helps. The problem with your solution was that you were trying to eliminate the item of a list which you were looping over, so the whole iteration was faulty. ind_list = list() for key in my_list: id_num = key['id'] for item in key['test']: if item['id__'] != id_num: ind_list.append(key['test'].index(item)) for i in ind_list: key['test'].pop(i) ind_list.clear() print(my_list)
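Working directly with the final list from the question, rebuilding each inner "test" list with a comprehension is probably the simplest fix; it also avoids the pitfall in the original attempt (and in the second answer) of popping items from a list that is still being iterated or indexed, which shifts positions and removes the wrong elements. A small self-contained sketch with the data re-typed from the question:

final = [
    {"id": 1, "test": [{"id__": 1}, {"id__": 1}, {"id__": 1}, {"id__": 2}]},
    {"id": 2, "test": [{"id__": 1}, {"id__": 1}, {"id__": 1}, {"id__": 2}]},
]

for item in final:
    # keep only the inner dicts whose id__ matches the outer object's id
    item["test"] = [t for t in item["test"] if t["id__"] == item["id"]]

print(final)
# [{'id': 1, 'test': [{'id__': 1}, {'id__': 1}, {'id__': 1}]},
#  {'id': 2, 'test': [{'id__': 2}]}]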
how to remove non-matching items from nested lists in a list of objects
Here is my list, it's list of object and inside object there is list: please rev [ { "id": 1, "test": [ { "id__": 1 }, { "id__": 1 }, { "id__": 1 }, { "id__": 2 } ] }, { "id": 2, "test": [ { "id__": 1 }, { "id__": 1 }, { "id__": 1 }, { "id__": 2 } ] } ] I want to remove matched id with one in objecso it can be like this : [ { "id": 1, "test": [ { "id__": 1 }, { "id__": 1 }, { "id__": 1 } ] }, { "id": 2, "test": [ { "id__": 2 } ] } ] Here is what I try: and notice that is final is the list mentioned above for i in final: for j in i["test"]: if j['id__'] == i["id"]: i.pop() can I use some help of you kind guys, I tried with remove attribute in list, and still no result satisfied.
[ "This is probably what you want:\nfiltered_result = []\nfor i in a:\n lst_id = i[\"id\"]\n lst_to_compare = i[\"test\"]\n filtered_inner_list = [item for item in lst_to_compare if item[\"id__\"] == lst_id]\n filtered_result.append({\"id\": lst_id, \"test\": filtered_inner_list})\n\nprint(filtered_result)\n\n", "Here's how I would solve this problem, hope it helps. The problem with your solution was that you were trying to eliminate the item of a list which you were looping over, so the whole iteration was faulty.\nind_list = list()\nfor key in my_list:\n id_num = key['id']\n for item in key['test']:\n if item['id__'] != id_num:\n ind_list.append(key['test'].index(item))\n for i in ind_list:\n key['test'].pop(i)\n ind_list.clear()\nprint(my_list)\n\n" ]
[ 0, 0 ]
[]
[]
[ "dictionary", "list", "python" ]
stackoverflow_0074611555_dictionary_list_python.txt
Q: Column disappearing after .apply - Pandas (Python) I'm new to pandas and I'm trying to merge the following 2 dataframes into 1 : nopat 0 2021-12-31 3.580000e+09 1 2020-12-31 6.250000e+08 2 2019-12-31 -1.367000e+09 3 2018-12-31 2.028000e+09 capital_employed 0 2021-12-31 5.924000e+10 1 2020-12-31 6.062400e+10 2 2019-12-31 5.203500e+10 3 2018-12-31 5.441200e+10 When I try to apply a function to my new datframe, all columns disappear. Here is my code : roce_by_year = pd.merge(nopat, capital_employed) \ .rename(columns={"": "date"}) \ .sort_values(by='date') \ .apply(lambda row: compute_roce(row['nopat'], row['capital_employed']), axis=1) \ .reset_index(name='roce') Here is the result : index roce 0 3 3.727119 1 2 -2.627078 2 1 1.030945 3 0 6.043214 I would like to have the following result : date roce 0 2018 3.727119 1 2019 -2.627078 2 2020 1.030945 3 2021 6.043214 Do you have an explanation ? A: If you want a method-chained solution, you could use something like this: import pandas as pd roce_by_year = ( pd.merge(nopat, capital_employed) .rename(columns={"": "date"}) .assign( date=lambda xdf: pd.to_datetime( xdf["date"], errors="coerce" ).dt.year ) .assign( roce=lambda xdf: xdf.apply( lambda row: compute roce( row["nopat"], row["capital_employed"] ), axis=1 ) ) .sort_values("date", ascending=True) )[["date", "roce"]] A: df1['date'] = pd.to_datetime(df1['date']) df1 ### date nopat 0 2021-12-31 3580000000 1 2020-12-31 625000000 2 2019-12-31 -1367000000 3 2018-12-31 2028000000 df2['date'] = pd.to_datetime(df2['date']) df2 ### date capital_employed 0 2021-12-31 59240000000 1 2020-12-31 60624000000 2 2019-12-31 52035000000 3 2018-12-31 54412000000 df3 = pd.merge(df1, df2, how='outer', left_on='date', right_on='date')\ .pipe(lambda x: x.assign(roe = x['nopat']/x['capital_employed']))\ .sort_values(by='date', ascending=True)\ .pipe(lambda x: x[['date', 'roe']])\ .pipe(lambda x: x.assign(date = x['date'].dt.strftime('%Y'))).reset_index(drop=True) df3 ### date roe 0 2018 0.037271 1 2019 -0.026271 2 2020 0.010309 3 2021 0.060432 A: Apply creates only the new column. You can try to create a new column on the existing dataframe like nopat.rename(columns={"": "date"}, inplace=True) nopat.sort_values(by='date', inplace=True) nopat.set_index('date', inplace=True) capital_employed.rename(columns={"": "date"}, inplace=True) capital_employed.set_index('date', inplace=True) capital_employed.sort_values(by='date', inplace=True) df = nopat.join(capital_employed, on='date') df['roce'] = df.apply(lambda row: compute_roce(row['nopat'], row['capital_employed']), axis=1)
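The short explanation of the disappearing columns: DataFrame.apply(..., axis=1) returns a bare Series holding only the lambda's results, so chaining it replaces the whole frame; keeping the other columns means assigning the result to a new column instead. (Note also that the first answer's `compute roce(` has a stray space and presumably should read `compute_roce(`.) A hedged sketch with the data re-typed from the question and a stand-in compute_roce — the real function is the asker's own:

import pandas as pd

def compute_roce(nopat_value, capital_employed_value):     # stand-in for the asker's function
    return nopat_value / capital_employed_value * 100

nopat = pd.DataFrame({
    "date": ["2021-12-31", "2020-12-31", "2019-12-31", "2018-12-31"],
    "nopat": [3.580e9, 6.250e8, -1.367e9, 2.028e9],
})
capital_employed = pd.DataFrame({
    "date": ["2021-12-31", "2020-12-31", "2019-12-31", "2018-12-31"],
    "capital_employed": [5.9240e10, 6.0624e10, 5.2035e10, 5.4412e10],
})

merged = pd.merge(nopat, capital_employed, on="date")

# .apply(axis=1) by itself returns a bare Series, which is why every other
# column "disappears" in the original chain; assign it to a column instead
merged["roce"] = merged.apply(
    lambda row: compute_roce(row["nopat"], row["capital_employed"]), axis=1
)

roce_by_year = (
    merged.assign(date=pd.to_datetime(merged["date"]).dt.year)
          .sort_values("date")
          .reset_index(drop=True)[["date", "roce"]]
)
print(roce_by_year)
#    date      roce
# 0  2018  3.727119
# 1  2019 -2.627078
# 2  2020  1.030945
# 3  2021  6.043214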
Column disappearing after .apply - Pandas (Python)
I'm new to pandas and I'm trying to merge the following 2 dataframes into 1 : nopat 0 2021-12-31 3.580000e+09 1 2020-12-31 6.250000e+08 2 2019-12-31 -1.367000e+09 3 2018-12-31 2.028000e+09 capital_employed 0 2021-12-31 5.924000e+10 1 2020-12-31 6.062400e+10 2 2019-12-31 5.203500e+10 3 2018-12-31 5.441200e+10 When I try to apply a function to my new datframe, all columns disappear. Here is my code : roce_by_year = pd.merge(nopat, capital_employed) \ .rename(columns={"": "date"}) \ .sort_values(by='date') \ .apply(lambda row: compute_roce(row['nopat'], row['capital_employed']), axis=1) \ .reset_index(name='roce') Here is the result : index roce 0 3 3.727119 1 2 -2.627078 2 1 1.030945 3 0 6.043214 I would like to have the following result : date roce 0 2018 3.727119 1 2019 -2.627078 2 2020 1.030945 3 2021 6.043214 Do you have an explanation ?
[ "If you want a method-chained solution, you could use something like this:\nimport pandas as pd\n\n\nroce_by_year = (\n pd.merge(nopat, capital_employed)\n .rename(columns={\"\": \"date\"})\n .assign(\n date=lambda xdf: pd.to_datetime(\n xdf[\"date\"], errors=\"coerce\"\n ).dt.year\n )\n .assign(\n roce=lambda xdf: xdf.apply(\n lambda row: compute roce(\n row[\"nopat\"], row[\"capital_employed\"]\n ), axis=1\n )\n )\n .sort_values(\"date\", ascending=True)\n)[[\"date\", \"roce\"]]\n\n\n", "df1['date'] = pd.to_datetime(df1['date'])\ndf1\n###\n date nopat\n0 2021-12-31 3580000000\n1 2020-12-31 625000000\n2 2019-12-31 -1367000000\n3 2018-12-31 2028000000\n\ndf2['date'] = pd.to_datetime(df2['date'])\ndf2\n###\n date capital_employed\n0 2021-12-31 59240000000\n1 2020-12-31 60624000000\n2 2019-12-31 52035000000\n3 2018-12-31 54412000000\n\ndf3 = pd.merge(df1, df2, how='outer', left_on='date', right_on='date')\\\n .pipe(lambda x: x.assign(roe = x['nopat']/x['capital_employed']))\\\n .sort_values(by='date', ascending=True)\\\n .pipe(lambda x: x[['date', 'roe']])\\\n .pipe(lambda x: x.assign(date = x['date'].dt.strftime('%Y'))).reset_index(drop=True)\ndf3\n###\n date roe\n0 2018 0.037271\n1 2019 -0.026271\n2 2020 0.010309\n3 2021 0.060432\n\n", "Apply creates only the new column. You can try to create a new column on the existing dataframe like\nnopat.rename(columns={\"\": \"date\"}, inplace=True)\nnopat.sort_values(by='date', inplace=True)\n\nnopat.set_index('date', inplace=True)\ncapital_employed.rename(columns={\"\": \"date\"}, inplace=True)\ncapital_employed.set_index('date', inplace=True)\ncapital_employed.sort_values(by='date', inplace=True)\ndf = nopat.join(capital_employed, on='date')\ndf['roce'] = df.apply(lambda row: compute_roce(row['nopat'], \n row['capital_employed']), axis=1)\n\n" ]
[ 1, 1, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074611681_pandas_python.txt
Q: Python AttributeError: type object has no attribute I have a simple node class with Id and Value, but python seems to not be able to access those attributes when i use the objects in a list. This is the class Node for context. class Node(): def __init__(self, id : int, value : int): self.id = id self.value = value This is a priority queue implementation (or at least a try to do so) where the error comes from class ListAlt(): def __init__(self): self.queue = [Node] def append(self, node : Node): self.queue.append(node) def dequeue(self): idMax = 0 i = 0 for nodo in self.queue: if (nodo.value > self.queue[idMax].value): idMax = i i += 1 result = self.queue[idMax] return result And this is the full code #!/usr/bin/python3 class Node(): def __init__(self, id : int, value : int): self.id = id self.value = value class ListAlt(): def __init__(self): self.queue = [Node] def append(self, node : Node): self.queue.append(node) def dequeue(self): idMax = 0 i = 0 for nodo in self.queue: if (nodo.value > self.queue[idMax].value): idMax = i i += 1 result = self.queue[idMax] return result n1 = Node(1,10) n2 = Node(2,3) n3 = Node(3,6) lista = ListAlt() lista.append(n1) lista.append(n2) lista.append(n3) print(lista.dequeue()) Now, i can access the values of n1,n2,n3 directly, but inside the ListAlt object in this exact line if (nodo.value > self.queue[idMax].value): it throws the exception saying "AttributeError: type object 'Node' has no attribute 'value'" it should have printed "10" if everything worked correctly. A: This is the problem: class ListAlt(): def __init__(self): self.queue = [Node] # <-- You are setting queue to a list containing the Node class. Why not just an empty list? Also, dequeue returns a Node instance. So to get 10, you need to write this instead: print(lista.dequeue().value) Here is how I would change that code: class Node: def __init__(self, node_id: int, value: int) -> None: self.id = node_id self.value = value class ListAlt: def __init__(self) -> None: self.queue: list[Node] = [] def append(self, node: Node) -> None: self.queue.append(node) def dequeue(self) -> Node: id_max = 0 i = 0 for nodo in self.queue: if nodo.value > self.queue[id_max].value: id_max = i i += 1 result = self.queue[id_max] return result if __name__ == "__main__": n1 = Node(1, 10) n2 = Node(2, 3) n3 = Node(3, 6) lista = ListAlt() lista.append(n1) lista.append(n2) lista.append(n3) print(lista.dequeue().value) A: You are initializing your queue with the type Node. So, it is trying to get the value attribute of the type, which does not exist. Change your code with: class ListAlt(): def __init__(self): self.queue = [] ...
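Two small follow-ups to the accepted fix that a sketch makes concrete: print(lista.dequeue()) still prints an object repr unless you read .value or give Node a __repr__, and a method called dequeue normally removes the element it returns (otherwise every call yields the same node). A hedged variant along those lines:

class Node:
    def __init__(self, id: int, value: int):
        self.id = id
        self.value = value

    def __repr__(self):                       # so print(node) shows something readable
        return f"Node(id={self.id}, value={self.value})"


class ListAlt:
    def __init__(self):
        self.queue = []                        # start empty, not [Node]

    def append(self, node: Node):
        self.queue.append(node)

    def dequeue(self) -> Node:
        # index of the highest-value node ...
        id_max = max(range(len(self.queue)), key=lambda i: self.queue[i].value)
        # ... which is removed as well, so the next call returns the next-highest
        return self.queue.pop(id_max)


lista = ListAlt()
lista.append(Node(1, 10))
lista.append(Node(2, 3))
lista.append(Node(3, 6))

print(lista.dequeue())        # Node(id=1, value=10)
print(lista.dequeue().value)  # 6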
Python AttributeError: type object has no attribute
I have a simple node class with Id and Value, but python seems to not be able to access those attributes when i use the objects in a list. This is the class Node for context. class Node(): def __init__(self, id : int, value : int): self.id = id self.value = value This is a priority queue implementation (or at least a try to do so) where the error comes from class ListAlt(): def __init__(self): self.queue = [Node] def append(self, node : Node): self.queue.append(node) def dequeue(self): idMax = 0 i = 0 for nodo in self.queue: if (nodo.value > self.queue[idMax].value): idMax = i i += 1 result = self.queue[idMax] return result And this is the full code #!/usr/bin/python3 class Node(): def __init__(self, id : int, value : int): self.id = id self.value = value class ListAlt(): def __init__(self): self.queue = [Node] def append(self, node : Node): self.queue.append(node) def dequeue(self): idMax = 0 i = 0 for nodo in self.queue: if (nodo.value > self.queue[idMax].value): idMax = i i += 1 result = self.queue[idMax] return result n1 = Node(1,10) n2 = Node(2,3) n3 = Node(3,6) lista = ListAlt() lista.append(n1) lista.append(n2) lista.append(n3) print(lista.dequeue()) Now, i can access the values of n1,n2,n3 directly, but inside the ListAlt object in this exact line if (nodo.value > self.queue[idMax].value): it throws the exception saying "AttributeError: type object 'Node' has no attribute 'value'" it should have printed "10" if everything worked correctly.
[ "This is the problem:\nclass ListAlt():\n def __init__(self):\n self.queue = [Node] # <--\n\nYou are setting queue to a list containing the Node class. Why not just an empty list?\nAlso, dequeue returns a Node instance. So to get 10, you need to write this instead:\nprint(lista.dequeue().value)\n\n\nHere is how I would change that code:\nclass Node:\n def __init__(self, node_id: int, value: int) -> None:\n self.id = node_id\n self.value = value\n\nclass ListAlt:\n def __init__(self) -> None:\n self.queue: list[Node] = []\n\n def append(self, node: Node) -> None:\n self.queue.append(node)\n\n def dequeue(self) -> Node:\n id_max = 0\n i = 0\n for nodo in self.queue:\n if nodo.value > self.queue[id_max].value:\n id_max = i\n i += 1\n result = self.queue[id_max]\n return result\n\nif __name__ == \"__main__\":\n n1 = Node(1, 10)\n n2 = Node(2, 3)\n n3 = Node(3, 6)\n \n lista = ListAlt()\n lista.append(n1)\n lista.append(n2)\n lista.append(n3)\n \n print(lista.dequeue().value)\n\n", "You are initializing your queue with the type Node. So, it is trying to get the value attribute of the type, which does not exist. Change your code with:\nclass ListAlt():\n def __init__(self):\n self.queue = []\n\n ...\n\n" ]
[ 3, 2 ]
[]
[]
[ "attributeerror", "python" ]
stackoverflow_0074611974_attributeerror_python.txt