Dataset columns:
content: string (85 to 101k characters)
title: string (0 to 150 characters)
question: string (15 to 48k characters)
answers: sequence
answers_scores: sequence
non_answers: sequence
non_answers_scores: sequence
tags: sequence
name: string (35 to 137 characters)
Q: How to merge two dataframes, where one is multi-indexed, with different headers I've been trying to merge two dataframes that look as below, one is multi-indexed while the other is not. FIRST DATAFRAME: bd_df outcome opp_name Sam 3 win Roy Jones 2 win Floyd Mayweather 1 win Bernard Hopkins James 3 win James Bond 2 win Michael O'Terry 1 win Donald Trump Jonny 3 win Oscar De la Hoya 2 win Roberto Duran 1 loss Manny Pacquiao Dyaus 3 win Thierry Henry 2 win David Beckham 1 loss Gabriel Jesus SECOND DATAFRAME: bt_df name country colour wins losses 0 Sam England red 10 0 1 Jonny China blue 9 3 2 Dyaus Arsenal white 3 8 3 James USA green 12 6 I'm aiming to merge the two dataframes such that bd_df is joined to bt_df based on the 'name' value where they match. I also have been trying to rename the axis of bd_df with no luck - code is also below. My code is as below currently, with the output. Appreciate any help! boxrec_tables = pd.read_csv(Path(boxrec_tables_path),index_col=[0,1]).rename_axis(['name', 'bout number']) bt_df = pd.DataFrame(boxrec_tables) bout_data = pd.read_csv(Path(bout_data_path)) bd_df = pd.DataFrame(bout_data) OUTPUT outcome opp_name name country colour wins losses Sam 3 win Roy Jones James USA green 12 6 2 win Floyd Mayweather Dyaus Arsenal white 3 8 1 win Bernard Hopkins Jonny China blue 9 3 James 3 win James Bond James USA green 12 6 2 win Michael O'Terry Dyaus Arsenal white 3 8 1 win Donald Trump Jonny China blue 9 3 Jonny 3 win Oscar De la Hoya James USA green 12 6 2 win Roberto Duran Dyaus Arsenal white 3 8 1 loss Manny Pacquiao Jonny China blue 9 3 Dyaus 3 win Thierry Henry James USA green 12 6 2 win David Beckham Dyaus Arsenal white 3 8 1 loss Gabriel Jesus Jonny China blue 9 3 Following suggestion by @Jezrael: df = (bd_df.join(bt_df.set_index('opp name', drop=False)).set_index('name',append=True)) country colour wins losses outcome opp name name 0 Sam England red 10 0 NaN NaN 1 Jonny China blue 9 3 NaN NaN 2 Dyaus Arsenal white 3 8 NaN NaN 3 James USA green 12 6 NaN NaN Issue currently that the merged dataframe values are showing as NaN, while the bout number values are missing also A: I think you need merge by bout number in level of MultiIndex with index in bt_df: main_df = (bd_df.reset_index() .merge(bt_df, left_on='bout number', right_index=True, how='left', suffixes=('_','')) .set_index(['name_', 'bout number']) ) print (main_df) outcome opp_name name country colour wins \ name_ bout number Sam 3 win Roy Jones James USA green 12 2 win Floyd Mayweather Dyaus Arsenal white 3 1 win Bernard Hopkins Jonny China blue 9 James 3 win James Bond James USA green 12 2 win Michael O'Terry Dyaus Arsenal white 3 1 win Donald Trump Jonny China blue 9 Jonny 3 win Oscar De la Hoya James USA green 12 2 win Roberto Duran Dyaus Arsenal white 3 1 loss Manny Pacquiao Jonny China blue 9 Dyaus 3 win Thierry Henry James USA green 12 2 win David Beckham Dyaus Arsenal white 3 1 loss Gabriel Jesus Jonny China blue 9 losses name_ bout number Sam 3 6 2 8 1 3 James 3 6 2 8 1 3 Jonny 3 6 2 8 1 3 Dyaus 3 6 2 8 1 3
How to merge two dataframes, where one is multi-indexed, with different headers
I've been trying to merge two dataframes that look as below, one is multi-indexed while the other is not. FIRST DATAFRAME: bd_df outcome opp_name Sam 3 win Roy Jones 2 win Floyd Mayweather 1 win Bernard Hopkins James 3 win James Bond 2 win Michael O'Terry 1 win Donald Trump Jonny 3 win Oscar De la Hoya 2 win Roberto Duran 1 loss Manny Pacquiao Dyaus 3 win Thierry Henry 2 win David Beckham 1 loss Gabriel Jesus SECOND DATAFRAME: bt_df name country colour wins losses 0 Sam England red 10 0 1 Jonny China blue 9 3 2 Dyaus Arsenal white 3 8 3 James USA green 12 6 I'm aiming to merge the two dataframes such that bd_df is joined to bt_df based on the 'name' value where they match. I also have been trying to rename the axis of bd_df with no luck - code is also below. My code is as below currently, with the output. Appreciate any help! boxrec_tables = pd.read_csv(Path(boxrec_tables_path),index_col=[0,1]).rename_axis(['name', 'bout number']) bt_df = pd.DataFrame(boxrec_tables) bout_data = pd.read_csv(Path(bout_data_path)) bd_df = pd.DataFrame(bout_data) OUTPUT outcome opp_name name country colour wins losses Sam 3 win Roy Jones James USA green 12 6 2 win Floyd Mayweather Dyaus Arsenal white 3 8 1 win Bernard Hopkins Jonny China blue 9 3 James 3 win James Bond James USA green 12 6 2 win Michael O'Terry Dyaus Arsenal white 3 8 1 win Donald Trump Jonny China blue 9 3 Jonny 3 win Oscar De la Hoya James USA green 12 6 2 win Roberto Duran Dyaus Arsenal white 3 8 1 loss Manny Pacquiao Jonny China blue 9 3 Dyaus 3 win Thierry Henry James USA green 12 6 2 win David Beckham Dyaus Arsenal white 3 8 1 loss Gabriel Jesus Jonny China blue 9 3 Following suggestion by @Jezrael: df = (bd_df.join(bt_df.set_index('opp name', drop=False)).set_index('name',append=True)) country colour wins losses outcome opp name name 0 Sam England red 10 0 NaN NaN 1 Jonny China blue 9 3 NaN NaN 2 Dyaus Arsenal white 3 8 NaN NaN 3 James USA green 12 6 NaN NaN Issue currently that the merged dataframe values are showing as NaN, while the bout number values are missing also
[ "I think you need merge by bout number in level of MultiIndex with index in bt_df:\nmain_df = (bd_df.reset_index()\n .merge(bt_df, \n left_on='bout number',\n right_index=True, \n how='left', \n suffixes=('_',''))\n .set_index(['name_', 'bout number'])\n )\n\n\nprint (main_df)\n outcome opp_name name country colour wins \\\nname_ bout number \nSam 3 win Roy Jones James USA green 12 \n 2 win Floyd Mayweather Dyaus Arsenal white 3 \n 1 win Bernard Hopkins Jonny China blue 9 \nJames 3 win James Bond James USA green 12 \n 2 win Michael O'Terry Dyaus Arsenal white 3 \n 1 win Donald Trump Jonny China blue 9 \nJonny 3 win Oscar De la Hoya James USA green 12 \n 2 win Roberto Duran Dyaus Arsenal white 3 \n 1 loss Manny Pacquiao Jonny China blue 9 \nDyaus 3 win Thierry Henry James USA green 12 \n 2 win David Beckham Dyaus Arsenal white 3 \n 1 loss Gabriel Jesus Jonny China blue 9 \n\n losses \nname_ bout number \nSam 3 6 \n 2 8 \n 1 3 \nJames 3 6 \n 2 8 \n 1 3 \nJonny 3 6 \n 2 8 \n 1 3 \nDyaus 3 6 \n 2 8 \n 1 3 \n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "merge", "multi_index", "pandas", "python" ]
stackoverflow_0074613883_dataframe_merge_multi_index_pandas_python.txt
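A minimal hedged sketch of the join-by-name the asker describes in the record above, using hypothetical cut-down frames shaped like bd_df and bt_df (the column and index-level names are assumptions taken from the post, not a verified reproduction of the accepted answer):

import pandas as pd

# Hypothetical cut-down frames shaped like bd_df and bt_df above.
bd_df = pd.DataFrame(
    {"outcome": ["win", "win", "loss"],
     "opp_name": ["Roy Jones", "Thierry Henry", "Gabriel Jesus"]},
    index=pd.MultiIndex.from_tuples(
        [("Sam", 3), ("Dyaus", 3), ("Dyaus", 1)], names=["name", "bout number"]),
)
bt_df = pd.DataFrame({"name": ["Sam", "Dyaus"],
                      "country": ["England", "Arsenal"],
                      "colour": ["red", "white"],
                      "wins": [10, 3], "losses": [0, 8]})

# join matches bt_df's 'name' index against the 'name' level of bd_df's MultiIndex,
# so every bout row picks up that fighter's country/colour/wins/losses.
merged = bd_df.join(bt_df.set_index("name"))
print(merged)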
Q: Plotly: How to add volume to a candlestick chart code: from plotly.offline import init_notebook_mode, iplot, iplot_mpl def plot_train_test(train, test, date_split): data = [Candlestick(x=train.index, open=train['open'], high=train['high'], low=train['low'], close=train['close'],name='train'), Candlestick(x=test.index, open=test['open'], high=test['high'], low=test['low'], close=test['close'],name='test') ] layout = { 'shapes': [ {'x0': date_split, 'x1': date_split, 'y0': 0, 'y1': 1, 'xref': 'x', 'yref': 'paper', 'line': {'color': 'rgb(0,0,0)', 'width': 1}}], 'annotations': [{'x': date_split, 'y': 1.0, 'xref': 'x', 'yref': 'paper', 'showarrow': False, 'xanchor': 'left','text': ' test data'}, {'x': date_split, 'y': 1.0, 'xref': 'x', 'yref': 'paper', 'showarrow': False, 'xanchor': 'right', 'text': 'train data '}] } figure = Figure(data=data, layout=layout) iplot(figure) The above code is ok.But now I want to 'volume' in this candlestick chart code: from plotly.offline import init_notebook_mode, iplot, iplot_mpl def plot_train_test(train, test, date_split): data = [Candlestick(x=train.index, open=train['open'], high=train['high'], low=train['low'], close=train['close'],volume=train['volume'],name='train'), Candlestick(x=test.index, open=test['open'], high=test['high'], low=test['low'],close=test['close'],volume=test['volume'],name='test')] layout = { 'shapes': [ {'x0': date_split, 'x1': date_split, 'y0': 0, 'y1': 1, 'xref': 'x', 'yref': 'paper', 'line': {'color': 'rgb(0,0,0)', 'width': 1}} ], 'annotations': [ {'x': date_split, 'y': 1.0, 'xref': 'x', 'yref': 'paper', 'showarrow': False, 'xanchor': 'left', 'text': ' test data'}, {'x': date_split, 'y': 1.0, 'xref': 'x', 'yref': 'paper', 'showarrow': False, 'xanchor': 'right', 'text': 'train data '} ] } figure = Figure(data=data, layout=layout) iplot(figure) error: ValueError: Invalid property specified for object of type plotly.graph_objs.Candlestick: 'volume' A: If you looking add smaller subplot of volume just below OHLC chart, you can use: rows and cols to specify the grid for subplots. shared_xaxes=True for same zoom and filtering row_width=[0.2, 0.7] to change height ratio of charts. ie. smaller volume chart than OHLC Plot: import pandas as pd import plotly.graph_objects as go from plotly.subplots import make_subplots # data df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/finance-charts-apple.csv') # Create subplots and mention plot grid size fig = make_subplots(rows=2, cols=1, shared_xaxes=True, vertical_spacing=0.03, subplot_titles=('OHLC', 'Volume'), row_width=[0.2, 0.7]) # Plot OHLC on 1st row fig.add_trace(go.Candlestick(x=df["Date"], open=df["AAPL.Open"], high=df["AAPL.High"], low=df["AAPL.Low"], close=df["AAPL.Close"], name="OHLC"), row=1, col=1 ) # Bar trace for volumes on 2nd row without legend fig.add_trace(go.Bar(x=df['Date'], y=df['AAPL.Volume'], showlegend=False), row=2, col=1) # Do not show OHLC's rangeslider plot fig.update(layout_xaxis_rangeslider_visible=False) fig.show() A: You haven't provided a complete code snippet with a data sample, so I'm going to have to suggest a solution that builds on an example here. In any case, you're getting that error message simply because go.Candlestick does not have a Volume attribute. 
And it might not seem so at first, but you can easily set up go.Candlestick as an individual trace, and then include an individual go.Bar() trace for Volumes using: fig = make_subplots(specs=[[{"secondary_y": True}]]) fig.add_traces(go.Candlestick(...), secondary_y=True) fig.add_traces(go.Bar(...), secondary_y=False) Plot: Complete code: import plotly.graph_objects as go from plotly.subplots import make_subplots import pandas as pd # data df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/finance-charts-apple.csv') # Create figure with secondary y-axis fig = make_subplots(specs=[[{"secondary_y": True}]]) # include candlestick with rangeselector fig.add_trace(go.Candlestick(x=df['Date'], open=df['AAPL.Open'], high=df['AAPL.High'], low=df['AAPL.Low'], close=df['AAPL.Close']), secondary_y=True) # include a go.Bar trace for volumes fig.add_trace(go.Bar(x=df['Date'], y=df['AAPL.Volume']), secondary_y=False) fig.layout.yaxis2.showgrid=False fig.show() A: Here is my improvement implementation based on the previous answer by Vestland, with some labelling and colouring improvements. import plotly.graph_objects as go from plotly.subplots import make_subplots candlesticks = go.Candlestick( x=candles.index, open=candles['open'], high=candles['high'], low=candles['low'], close=candles['close'], showlegend=False ) volume_bars = go.Bar( x=candles.index, y=candles['volume'], showlegend=False, marker={ "color": "rgba(128,128,128,0.5)", } ) fig = go.Figure(candlesticks) fig = make_subplots(specs=[[{"secondary_y": True}]]) fig.add_trace(candlesticks, secondary_y=True) fig.add_trace(volume_bars, secondary_y=False) fig.update_layout(title="ETH/USDC pool after Uniswap v3 deployment", height=800) fig.update_yaxes(title="Price $", secondary_y=True, showgrid=True) fig.update_yaxes(title="Volume $", secondary_y=False, showgrid=False) fig.show() You can find the full source code in this open-source notebook. A: If you want to add different colors for buy/sell isay 'green'/'red', you can use some libs (e.g. mplfinance) which do these automatically however the plots are non-interactive. To get interactive plot with plotly with separate colors for buy/sell colors, one needs to add trace for each data point. 
Here is code: import plotly.graph_objects as go from plotly.subplots import make_subplots import pandas as pd # Create subplots and mention plot grid size title=df.symbol.unique()[0] fig = make_subplots(rows=2, cols=1, shared_xaxes=True, vertical_spacing=0.02, row_width=[0.25, 0.75]) # Plot OHLC on 1st row fig.add_trace(go.Candlestick(x=df.index, open=df['open'], high=df['high'], low=df['low'], close=df['close'],showlegend=False),row=1, col=1,) # Bar trace for volumes on 2nd row without legend # fig.add_trace(go.Bar(x=df.index, y=df['volume'], showlegend=False), row=2, col=1) df['color']='' df['color']=['red' if (x>y) else t for x,y,t in zip(df['open'],df['close'],df['color'])] df['color']=['green' if (x<y) else t for x,y,t in zip(df['open'],df['close'],df['color'])] colors=df.color.tolist() df['prev_color']=[colors[0]]+colors[:(len(colors)-1)] df.loc[((df.open==df.close) & (df.color=='')),'color']=[z for x,y,z,t in zip(df['open'],df['close'],df['prev_color'],df['color']) if (x==y and t=='')] colors=df.color.tolist() df['prev_color']=[colors[0]]+colors[:(len(colors)-1)] df.loc[((df.open==df.close) & (df.color=='')),'color']=[z for x,y,z,t in zip(df['open'],df['close'],df['prev_color'],df['color']) if (x==y and t=='')] markers=['green','red'] for t in markers: df_tmp=df.loc[~(df.color==t)] ## somehow the color it takes is opposite so take negation to fig.add_trace(go.Bar(x=df_tmp.index, y=df_tmp['volume'], showlegend=False), row=2, col=1) # Do not show OHLC's rangeslider plot fig.update(layout_xaxis_rangeslider_visible=False) fig.layout.yaxis2.showgrid=False fig.update_layout(title_text=title,title_x=0.45) fig.show() A: My two cents on Plotting Volume in a different subplot with colors, it is just making @user6397960 response shorter without hacks to get the right color, just use marker_color. Think about it, what makes a candle green? The fact of having Close price above the Open price, and what about red candle? well, having a close price below the open price, so with this basics: import plotly.graph_objects as go from plotly.subplots import make_subplots # Create a Figure with 2 subplots, one will contain the candles # the other will contain the Volume bars figure = make_subplots(rows=2, cols=1, shared_xaxes=True, row_heights=[0.7, 0.3]) # Plot the candles in the first subplot figure.add_trace(go.Candlestick(x=df.index, open=df.open, high=df.high, low=df.low, close=df.close, name='price', increasing_line_color='#26a69a', decreasing_line_color='#ef5350'), row=1, col=1) # From our Dataframe take only the rows where the Close > Open # save it in different Dataframe, these should be green green_volume_df = df[df['close'] > df['open']] # Same for Close < Open, these are red candles/bars red_volume_df = df[df['close'] < df['open']] # Plot the red bars and green bars in the second subplot figure.add_trace(go.Bar(x=red_volume_df.index, y=red_volume_df.volume, showlegend=False, marker_color='#ef5350'), row=2, col=1) figure.add_trace(go.Bar(x=green_volume_df.index, y=green_volume_df.volume, showlegend=False, marker_color='#26a69a'), row=2, col=1) # Hide the Range Slider figure.update(layout_xaxis_rangeslider_visible=False) figure.update_layout(title=f'BTC/USDT', yaxis_title=f'Price') figure.update_yaxes(title_text=f'Volume', row=2, col=1) figure.update_xaxes(title_text='Date', row=2) References https://plotly.com/python/subplots/ https://plotly.com/python/candlestick-charts/
Plotly: How to add volume to a candlestick chart
code: from plotly.offline import init_notebook_mode, iplot, iplot_mpl def plot_train_test(train, test, date_split): data = [Candlestick(x=train.index, open=train['open'], high=train['high'], low=train['low'], close=train['close'],name='train'), Candlestick(x=test.index, open=test['open'], high=test['high'], low=test['low'], close=test['close'],name='test') ] layout = { 'shapes': [ {'x0': date_split, 'x1': date_split, 'y0': 0, 'y1': 1, 'xref': 'x', 'yref': 'paper', 'line': {'color': 'rgb(0,0,0)', 'width': 1}}], 'annotations': [{'x': date_split, 'y': 1.0, 'xref': 'x', 'yref': 'paper', 'showarrow': False, 'xanchor': 'left','text': ' test data'}, {'x': date_split, 'y': 1.0, 'xref': 'x', 'yref': 'paper', 'showarrow': False, 'xanchor': 'right', 'text': 'train data '}] } figure = Figure(data=data, layout=layout) iplot(figure) The above code is ok.But now I want to 'volume' in this candlestick chart code: from plotly.offline import init_notebook_mode, iplot, iplot_mpl def plot_train_test(train, test, date_split): data = [Candlestick(x=train.index, open=train['open'], high=train['high'], low=train['low'], close=train['close'],volume=train['volume'],name='train'), Candlestick(x=test.index, open=test['open'], high=test['high'], low=test['low'],close=test['close'],volume=test['volume'],name='test')] layout = { 'shapes': [ {'x0': date_split, 'x1': date_split, 'y0': 0, 'y1': 1, 'xref': 'x', 'yref': 'paper', 'line': {'color': 'rgb(0,0,0)', 'width': 1}} ], 'annotations': [ {'x': date_split, 'y': 1.0, 'xref': 'x', 'yref': 'paper', 'showarrow': False, 'xanchor': 'left', 'text': ' test data'}, {'x': date_split, 'y': 1.0, 'xref': 'x', 'yref': 'paper', 'showarrow': False, 'xanchor': 'right', 'text': 'train data '} ] } figure = Figure(data=data, layout=layout) iplot(figure) error: ValueError: Invalid property specified for object of type plotly.graph_objs.Candlestick: 'volume'
[ "If you looking add smaller subplot of volume just below OHLC chart, you can use:\n\nrows and cols to specify the grid for subplots.\nshared_xaxes=True for same zoom and filtering\nrow_width=[0.2, 0.7] to change height ratio of charts. ie. smaller volume chart than OHLC\n\nPlot:\n\nimport pandas as pd\nimport plotly.graph_objects as go\nfrom plotly.subplots import make_subplots\n\n# data\ndf = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/finance-charts-apple.csv')\n\n\n# Create subplots and mention plot grid size\nfig = make_subplots(rows=2, cols=1, shared_xaxes=True, \n vertical_spacing=0.03, subplot_titles=('OHLC', 'Volume'), \n row_width=[0.2, 0.7])\n\n# Plot OHLC on 1st row\nfig.add_trace(go.Candlestick(x=df[\"Date\"], open=df[\"AAPL.Open\"], high=df[\"AAPL.High\"],\n low=df[\"AAPL.Low\"], close=df[\"AAPL.Close\"], name=\"OHLC\"), \n row=1, col=1\n)\n\n# Bar trace for volumes on 2nd row without legend\nfig.add_trace(go.Bar(x=df['Date'], y=df['AAPL.Volume'], showlegend=False), row=2, col=1)\n\n# Do not show OHLC's rangeslider plot \nfig.update(layout_xaxis_rangeslider_visible=False)\nfig.show()\n\n", "You haven't provided a complete code snippet with a data sample, so I'm going to have to suggest a solution that builds on an example here.\nIn any case, you're getting that error message simply because go.Candlestick does not have a Volume attribute. And it might not seem so at first, but you can easily set up go.Candlestick as an individual trace, and then include an individual go.Bar() trace for Volumes using:\n\nfig = make_subplots(specs=[[{\"secondary_y\": True}]])\nfig.add_traces(go.Candlestick(...), secondary_y=True)\nfig.add_traces(go.Bar(...), secondary_y=False)\n\nPlot:\n\nComplete code:\nimport plotly.graph_objects as go\nfrom plotly.subplots import make_subplots\nimport pandas as pd\n\n# data\ndf = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/finance-charts-apple.csv')\n\n# Create figure with secondary y-axis\nfig = make_subplots(specs=[[{\"secondary_y\": True}]])\n\n# include candlestick with rangeselector\nfig.add_trace(go.Candlestick(x=df['Date'],\n open=df['AAPL.Open'], high=df['AAPL.High'],\n low=df['AAPL.Low'], close=df['AAPL.Close']),\n secondary_y=True)\n\n# include a go.Bar trace for volumes\nfig.add_trace(go.Bar(x=df['Date'], y=df['AAPL.Volume']),\n secondary_y=False)\n\nfig.layout.yaxis2.showgrid=False\nfig.show()\n\n", "Here is my improvement implementation based on the previous answer by Vestland, with some labelling and colouring improvements.\nimport plotly.graph_objects as go\nfrom plotly.subplots import make_subplots\n\ncandlesticks = go.Candlestick(\n x=candles.index,\n open=candles['open'],\n high=candles['high'],\n low=candles['low'],\n close=candles['close'],\n showlegend=False\n)\n\nvolume_bars = go.Bar(\n x=candles.index,\n y=candles['volume'],\n showlegend=False,\n marker={\n \"color\": \"rgba(128,128,128,0.5)\",\n }\n)\n\nfig = go.Figure(candlesticks)\nfig = make_subplots(specs=[[{\"secondary_y\": True}]])\nfig.add_trace(candlesticks, secondary_y=True)\nfig.add_trace(volume_bars, secondary_y=False)\nfig.update_layout(title=\"ETH/USDC pool after Uniswap v3 deployment\", height=800)\nfig.update_yaxes(title=\"Price $\", secondary_y=True, showgrid=True)\nfig.update_yaxes(title=\"Volume $\", secondary_y=False, showgrid=False)\nfig.show()\n\n\nYou can find the full source code in this open-source notebook.\n", "If you want to add different colors for buy/sell isay 'green'/'red', you can use some libs (e.g. 
mplfinance) which do these automatically however the plots are non-interactive. To get interactive plot with plotly with separate colors for buy/sell colors, one needs to add trace for each data point. Here is code:\n import plotly.graph_objects as go\n from plotly.subplots import make_subplots\n import pandas as pd\n # Create subplots and mention plot grid size\n title=df.symbol.unique()[0]\n\n fig = make_subplots(rows=2, cols=1, shared_xaxes=True, \n vertical_spacing=0.02, \n row_width=[0.25, 0.75])\n\n # Plot OHLC on 1st row\n fig.add_trace(go.Candlestick(x=df.index,\n open=df['open'], high=df['high'],\n low=df['low'], close=df['close'],showlegend=False),row=1, col=1,)\n\n # Bar trace for volumes on 2nd row without legend\n # fig.add_trace(go.Bar(x=df.index, y=df['volume'], showlegend=False), row=2, col=1)\n\n df['color']=''\n df['color']=['red' if (x>y) else t for x,y,t in zip(df['open'],df['close'],df['color'])]\n df['color']=['green' if (x<y) else t for x,y,t in zip(df['open'],df['close'],df['color'])]\n colors=df.color.tolist()\n df['prev_color']=[colors[0]]+colors[:(len(colors)-1)]\n df.loc[((df.open==df.close) & (df.color=='')),'color']=[z for x,y,z,t in zip(df['open'],df['close'],df['prev_color'],df['color']) if (x==y and t=='')]\n colors=df.color.tolist()\n df['prev_color']=[colors[0]]+colors[:(len(colors)-1)]\n df.loc[((df.open==df.close) & (df.color=='')),'color']=[z for x,y,z,t in zip(df['open'],df['close'],df['prev_color'],df['color']) if (x==y and t=='')]\n \n markers=['green','red']\n\n for t in markers:\n df_tmp=df.loc[~(df.color==t)] ## somehow the color it takes is opposite so take negation to \n fig.add_trace(go.Bar(x=df_tmp.index, y=df_tmp['volume'], showlegend=False), row=2, col=1)\n\n # Do not show OHLC's rangeslider plot \n fig.update(layout_xaxis_rangeslider_visible=False)\n fig.layout.yaxis2.showgrid=False\n fig.update_layout(title_text=title,title_x=0.45)\n\n fig.show()\n\n", "My two cents on Plotting Volume in a different subplot with colors, it is just making @user6397960 response shorter without hacks to get the right color, just use marker_color. Think about it, what makes a candle green? The fact of having Close price above the Open price, and what about red candle? 
well, having a close price below the open price, so with this basics:\nimport plotly.graph_objects as go\nfrom plotly.subplots import make_subplots\n\n# Create a Figure with 2 subplots, one will contain the candles\n# the other will contain the Volume bars\nfigure = make_subplots(rows=2, cols=1, shared_xaxes=True, row_heights=[0.7, 0.3])\n\n# Plot the candles in the first subplot\nfigure.add_trace(go.Candlestick(x=df.index, open=df.open, high=df.high, low=df.low, close=df.close, name='price',\n increasing_line_color='#26a69a', decreasing_line_color='#ef5350'),\n row=1, col=1)\n\n# From our Dataframe take only the rows where the Close > Open\n# save it in different Dataframe, these should be green\ngreen_volume_df = df[df['close'] > df['open']]\n# Same for Close < Open, these are red candles/bars\nred_volume_df = df[df['close'] < df['open']]\n\n# Plot the red bars and green bars in the second subplot\nfigure.add_trace(go.Bar(x=red_volume_df.index, y=red_volume_df.volume, showlegend=False, marker_color='#ef5350'), row=2,\n col=1)\nfigure.add_trace(go.Bar(x=green_volume_df.index, y=green_volume_df.volume, showlegend=False, marker_color='#26a69a'),\n row=2, col=1)\n\n# Hide the Range Slider\nfigure.update(layout_xaxis_rangeslider_visible=False)\nfigure.update_layout(title=f'BTC/USDT', yaxis_title=f'Price')\nfigure.update_yaxes(title_text=f'Volume', row=2, col=1)\nfigure.update_xaxes(title_text='Date', row=2)\n\nReferences\n\nhttps://plotly.com/python/subplots/\nhttps://plotly.com/python/candlestick-charts/\n\n" ]
[ 24, 23, 2, 1, 0 ]
[]
[]
[ "matplotlib", "plot", "plotly", "python" ]
stackoverflow_0064689342_matplotlib_plot_plotly_python.txt
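A small hedged variation on the answers above: instead of splitting the volume into two bar traces, a single go.Bar can take a per-bar colour array through marker_color. The dataframe and its column names are the same Apple CSV the answers use; the exact colours are assumptions.

import numpy as np
import pandas as pd
import plotly.graph_objects as go
from plotly.subplots import make_subplots

df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/finance-charts-apple.csv')

# One colour per bar: green when the candle closed up, red otherwise.
bar_colors = np.where(df['AAPL.Close'] >= df['AAPL.Open'], '#26a69a', '#ef5350')

fig = make_subplots(rows=2, cols=1, shared_xaxes=True,
                    row_heights=[0.7, 0.3], vertical_spacing=0.03)
fig.add_trace(go.Candlestick(x=df['Date'], open=df['AAPL.Open'], high=df['AAPL.High'],
                             low=df['AAPL.Low'], close=df['AAPL.Close'], showlegend=False),
              row=1, col=1)
fig.add_trace(go.Bar(x=df['Date'], y=df['AAPL.Volume'],
                     marker_color=bar_colors, showlegend=False),
              row=2, col=1)
fig.update(layout_xaxis_rangeslider_visible=False)
fig.show()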
Q: ImportError: Pandas requires version '3.0.7' or newer of 'openpyxl' (version '3.0.5' currently installed) I have a strange problem which leads to the msg in the title, leading to the error report below. The fact is - I have (on Linux) python 3.9.15, Pandas 1.5.2, openpyxl 3.0.10. I do not use venv, for editing I use Wing, but I do not run script from it, only from the shell. I looked over /usr/lib64/python3.9/site-packages/ but did not find any other version of openpyxl laying around. What's wrong? I even uninstalled and re-installed both pandas and openpyxl - no effect. File "./elektreiba-00-02.py", line 140, in <module> main(sys.argv[1:]) File "./elektreiba-00-02.py", line 79, in main df = pd.read_excel(infile, sheet_name=None) File "/usr/lib64/python3.9/site-packages/pandas/util/_decorators.py", line 211, in wrapper return func(*args, **kwargs) File "/usr/lib64/python3.9/site-packages/pandas/util/_decorators.py", line 331, in wrapper return func(*args, **kwargs) File "/usr/lib64/python3.9/site-packages/pandas/io/excel/_base.py", line 482, in read_excel io = ExcelFile(io, storage_options=storage_options, engine=engine) File "/usr/lib64/python3.9/site-packages/pandas/io/excel/_base.py", line 1695, in __init__ self._reader = self._engines[engine](self._io, storage_options=storage_options) File "/usr/lib64/python3.9/site-packages/pandas/io/excel/_openpyxl.py", line 556, in __init__ import_optional_dependency("openpyxl") File "/usr/lib64/python3.9/site-packages/pandas/compat/_optional.py", line 171, in import_optional_dependency raise ImportError(msg) ImportError: Pandas requires version '3.0.7' or newer of 'openpyxl' (version '3.0.5' currently installed) A: It turned out being cache question - no idea what created a site-package cache under ~/./local/ and why python looked there at first
ImportError: Pandas requires version '3.0.7' or newer of 'openpyxl' (version '3.0.5' currently installed)
I have a strange problem which leads to the msg in the title, leading to the error report below. The fact is - I have (on Linux) python 3.9.15, Pandas 1.5.2, openpyxl 3.0.10. I do not use venv, for editing I use Wing, but I do not run script from it, only from the shell. I looked over /usr/lib64/python3.9/site-packages/ but did not find any other version of openpyxl laying around. What's wrong? I even uninstalled and re-installed both pandas and openpyxl - no effect. File "./elektreiba-00-02.py", line 140, in <module> main(sys.argv[1:]) File "./elektreiba-00-02.py", line 79, in main df = pd.read_excel(infile, sheet_name=None) File "/usr/lib64/python3.9/site-packages/pandas/util/_decorators.py", line 211, in wrapper return func(*args, **kwargs) File "/usr/lib64/python3.9/site-packages/pandas/util/_decorators.py", line 331, in wrapper return func(*args, **kwargs) File "/usr/lib64/python3.9/site-packages/pandas/io/excel/_base.py", line 482, in read_excel io = ExcelFile(io, storage_options=storage_options, engine=engine) File "/usr/lib64/python3.9/site-packages/pandas/io/excel/_base.py", line 1695, in __init__ self._reader = self._engines[engine](self._io, storage_options=storage_options) File "/usr/lib64/python3.9/site-packages/pandas/io/excel/_openpyxl.py", line 556, in __init__ import_optional_dependency("openpyxl") File "/usr/lib64/python3.9/site-packages/pandas/compat/_optional.py", line 171, in import_optional_dependency raise ImportError(msg) ImportError: Pandas requires version '3.0.7' or newer of 'openpyxl' (version '3.0.5' currently installed)
[ "It turned out being cache question - no idea what created a site-package cache under ~/./local/ and why python looked there at first\n" ]
[ 0 ]
[]
[]
[ "pandas", "python", "python_3.x" ]
stackoverflow_0074612873_pandas_python_python_3.x.txt
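A hedged diagnostic sketch for the cache/shadowing issue described above: check which openpyxl file the interpreter actually imports and whether a stale per-user copy shadows the system one. The shell commands in the comments assume pip is the installer in use.

import site
import openpyxl

# If __file__ points under ~/.local/lib/python3.9/site-packages, pandas is
# importing a stale per-user copy instead of /usr/lib64/python3.9/site-packages.
print(openpyxl.__version__, openpyxl.__file__)
print(site.getusersitepackages())

# Shell follow-up (assumption: pip-based install):
#   pip uninstall openpyxl                 # may need to run more than once
#   pip install --user --upgrade openpyxl  # or delete the stale ~/.local copy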
Q: Pyinstaller --onefile warning file already exists but should not When running Pyinstaller --onefile, and starting the resulting .exe, multiple popups show up with the following warning: WARNING: file already exists but should not: C:\Users\myuser\AppData\Local\Temp\_MEI90082\Cipher\_AES.cp37-win_amd64.pyd This makes the .exe hard to use even though clicking through the warnings still allows the .exe to run properly. How to get rid of these warnings ? A: I have almost the same issue. Not a good idea - remove part of the list that you are iterating. Try this: from PyInstaller.building.datastruct import TOC # ... # a = Analysis(...) x = 'cp36-win_amd64' datas_upd = TOC() for d in a.datas: if x not in d[0] and x not in d[1]: datas_upd.append(d) a.datas = datas_upd A: Going to put this here in case it helps anyone, since I spent some time finding out how to do this. in the .spec of your pyinstaller project, add this after the line a = Analysis(...): # Avoid warning to_remove = ["_AES", "_ARC4", "_DES", "_DES3", "_SHA256", "_counter"] for b in a.binaries: found = any( f'{crypto}.cp37-win_amd64.pyd' in b[1] for crypto in to_remove ) if found: print(f"Removing {b[1]}") a.binaries.remove(b) Of course you may adapt the array to_remove as well as the exact file name .cp37-win_amd64.pyd to match the files that show up in your warnings. This results in the files not being included in the .exe and the warnings are gone. A: As good as it is to encounter the warning message, noting them and creating an iterable used to exclude them in the .spec file, it would be even better if we don't have to pass through that stress procedure. Let's try it. Observation: The data structure of both the 'datas' and 'binaries' are the same within the .spec file. i.e [(module_or_file_name, absolute_path, type_DATA_or_BINARY), ...] The methods here are the same, the implementation is what's different. We look for what has been added to a.datas and repeated/re-added into a.binaries Method 1: [1-liner but slower] # ... # a = Analysis(...) a.binaries = [b for b in a.binaries if not b in list(set(b for d in a.datas for b in a.binaries if b[1].endswith(d[0])))] # The unique binaries not repeated in a.datas. Method 2: [faster] # ... # a = Analysis(...) for b in a.binaries.copy(): # Traver the binaries. for d in a.datas: # Traverse the datas. if b[1].endswith(d[0]): # If duplicate found. a.binaries.remove(b) # Remove the duplicate. break I used this implementation when I was creating a simplified combined power of Cython + PyInstaller + AES encryption GUI Bundling application. Hope this helps someone in future.
Pyinstaller --onefile warning file already exists but should not
When running Pyinstaller --onefile, and starting the resulting .exe, multiple popups show up with the following warning: WARNING: file already exists but should not: C:\Users\myuser\AppData\Local\Temp\_MEI90082\Cipher\_AES.cp37-win_amd64.pyd This makes the .exe hard to use even though clicking through the warnings still allows the .exe to run properly. How to get rid of these warnings ?
[ "I have almost the same issue.\nNot a good idea - remove part of the list that you are iterating.\nTry this:\nfrom PyInstaller.building.datastruct import TOC\n\n# ...\n# a = Analysis(...)\n\nx = 'cp36-win_amd64'\ndatas_upd = TOC()\n\nfor d in a.datas:\n if x not in d[0] and x not in d[1]:\n datas_upd.append(d)\n\na.datas = datas_upd\n\n\n", "Going to put this here in case it helps anyone, since I spent some time finding out how to do this.\nin the .spec of your pyinstaller project, add this after the line a = Analysis(...):\n# Avoid warning\nto_remove = [\"_AES\", \"_ARC4\", \"_DES\", \"_DES3\", \"_SHA256\", \"_counter\"]\nfor b in a.binaries:\n found = any(\n f'{crypto}.cp37-win_amd64.pyd' in b[1]\n for crypto in to_remove\n )\n if found:\n print(f\"Removing {b[1]}\")\n a.binaries.remove(b)\n\nOf course you may adapt the array to_remove as well as the exact file name .cp37-win_amd64.pyd to match the files that show up in your warnings.\nThis results in the files not being included in the .exe and the warnings are gone.\n", "As good as it is to encounter the warning message, noting them and creating an iterable used to exclude them in the .spec file, it would be even better if we don't have to pass through that stress procedure. Let's try it.\nObservation: The data structure of both the 'datas' and 'binaries' are the same within the .spec file. i.e [(module_or_file_name, absolute_path, type_DATA_or_BINARY), ...]\nThe methods here are the same, the implementation is what's different. We look for what has been added to a.datas and repeated/re-added into a.binaries\nMethod 1: [1-liner but slower]\n# ...\n# a = Analysis(...)\n\na.binaries = [b for b in a.binaries if not b in list(set(b for d in a.datas for b in a.binaries if b[1].endswith(d[0])))] # The unique binaries not repeated in a.datas.\n\nMethod 2: [faster]\n# ...\n# a = Analysis(...)\n\nfor b in a.binaries.copy(): # Traver the binaries.\n for d in a.datas: # Traverse the datas.\n if b[1].endswith(d[0]): # If duplicate found.\n a.binaries.remove(b) # Remove the duplicate.\n break\n\nI used this implementation when I was creating a simplified combined power of Cython + PyInstaller + AES encryption GUI Bundling application.\nHope this helps someone in future.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "pyinstaller", "python" ]
stackoverflow_0066069360_pyinstaller_python.txt
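A hedged consolidation of the answers above: build the filtered list first rather than removing items from a.binaries while iterating it. The .pyd suffix and module names are assumptions matching the warning in the question, and assigning a plain list back to a.binaries follows Method 1 of the last answer.

# In the .spec file, after a = Analysis(...)
duplicated = ("_AES", "_ARC4", "_DES", "_DES3", "_SHA256", "_counter")

a.binaries = [
    b for b in a.binaries
    if not any(f"{name}.cp37-win_amd64.pyd" in b[1] for name in duplicated)
]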
Q: Changing bad format of number and currency from user input to float number I need to write a script in Python which will transform bad input from user to float number. For example "10,123.20 Kč" to "10123.2" "10.023,123.45 Kč" to "10023123.45" "20 743 210.2 Kč" to "20743210.2" or any other bad input - this is what I've come up with. Kč is Czech koruna. My thought process was to get rid of any spaces, letters. Then change every comma to dot to make numbers looks like "123.123.456.78" then delete all dots except of last one in a string and then change it to float so it would looks like "123123456.78". But I don't know how to do it. If you know any faster and easier way to do it, I would like to know. This is what I have and I'm lost now. import re my_list = ['100,30 Kč','10 000,00 Kč', '10,000.00 Kč', '10000 Kč', '32.100,30 Kč', '12.345,678.91 Kč'] for i in my_list: ws = i.replace("Kč", '') x = re.sub(',','.', ws).replace(" ","") print(x) A: You could select the find all numerics instead of trying to remove non-numerics In any case you have to make some assumtpions about the input, here is the code assuming that a final block of two digits in a text with separators is the fractional part. import re my_list = ['100,30 Kč','10 000,00 Kč', '10,000.00 Kč', '10000 Kč', '32.100,30 Kč', '12.345,678.91 Kč'] for s in my_list: parts = list(re.findall('\d+', s)) if len(parts) == 1 or len(parts[-1]) != 2: parts.append('0') print(float(''.join(parts[:-1]) + '.' + parts[-1])) A: This should do the job. def parse_entry(entry): #remove currency and spaces entry = entry.replace("Kč", "") entry = entry.replace(" ", "") #check if a comma is used for decimals or thousands comma_i = entry.find(",") if len(entry[comma_i:]) > 3: #it's a thousands separator, it can be removed entry = entry.replace(",", "") else: #it's a decimal separator entry = entry.replace(",", ".") #convert it to dot #remove extra dots while entry.count(".") > 1: entry = entry.replace(".", "", 1) #replace once return round(float(entry), 1) #round to 1 decimal my_list = ['100,30 Kč','10 000,00 Kč', '10,000.00 Kč', '10000 Kč', '32.100,30 Kč', '12.345,678.91 Kč'] parsed = list(map(parse_entry, my_list)) print(parsed) #[100.3, 10000.0, 10000.0, 10000.0, 32100.3, 12345678.9] A: I tried to keep your code and add just few lines. The idea is the store in a variable the number after "." and then add it after replacing the "," with "." and join the number separated by ".". import re my_list = ['100,30 Kč','10 000,00 Kč', '10,000.00 Kč', '10000 Kč', '32.100,30 Kč', '12.345,678.91 Kč'] for i in my_list: ws = i.replace("Kč", '') x = re.sub(',','.', ws).replace(" ","") if len( x.split("."))>1: end= x.split(".")[-1] x = "".join([i for i in x.split(".")[:-1]])+"."+end print(x) A: Whilst the other answers work for your specific scenario (e.g. you know the current code you're replacing), it's not very extensible. So here's a more generic approach: import re values = [ "100,30 Kč", "10 000,00 Kč", "10,000.00 Kč", "10000 Kč", "32.100,30 Kč", "12.345,678.91 Kč", # This value is a bit odd... is it _right_? 
] for value in values: # Remove any character that's not a number or a comma value = re.sub("[^0-9,]", "", value) # Replace remaining commas with periods value = value.replace(",", ".") # Convert from string to number value = float(value) print(value) This outputs: 100.3 10000.0 10.0 10000.0 32100.3 12345.67891 A: Without the aid or re you could just do this: my_list = ['100,30 Kč','10 000,00 Kč', '10,000.00 Kč', '10000 Kč', '32.100,30 Kč', '12.345,678.91 Kč'] def fix(s): r = [] for c in s: if c in '0123456789': r.append(c) elif c == ',': r.append('.') elif not c in '. ': break return float(''.join(r)) for n in my_list: print(fix(n)) Output: 100.3 10000.0 10.0 10000.0 32100.3 12345.67891
Changing bad format of number and currency from user input to float number
I need to write a script in Python which will transform bad input from user to float number. For example "10,123.20 Kč" to "10123.2" "10.023,123.45 Kč" to "10023123.45" "20 743 210.2 Kč" to "20743210.2" or any other bad input - this is what I've come up with. Kč is Czech koruna. My thought process was to get rid of any spaces, letters. Then change every comma to dot to make numbers looks like "123.123.456.78" then delete all dots except of last one in a string and then change it to float so it would looks like "123123456.78". But I don't know how to do it. If you know any faster and easier way to do it, I would like to know. This is what I have and I'm lost now. import re my_list = ['100,30 Kč','10 000,00 Kč', '10,000.00 Kč', '10000 Kč', '32.100,30 Kč', '12.345,678.91 Kč'] for i in my_list: ws = i.replace("Kč", '') x = re.sub(',','.', ws).replace(" ","") print(x)
[ "You could select the find all numerics instead of trying to remove non-numerics\nIn any case you have to make some assumtpions about the input, here is the code assuming that a final block of two digits in a text with separators is the fractional part.\nimport re\n\nmy_list = ['100,30 Kč','10 000,00 Kč', '10,000.00 Kč', '10000 Kč', '32.100,30 Kč', '12.345,678.91 Kč']\n\nfor s in my_list:\n parts = list(re.findall('\\d+', s))\n if len(parts) == 1 or len(parts[-1]) != 2:\n parts.append('0')\n print(float(''.join(parts[:-1]) + '.' + parts[-1]))\n\n", "This should do the job.\ndef parse_entry(entry):\n\n #remove currency and spaces\n entry = entry.replace(\"Kč\", \"\")\n entry = entry.replace(\" \", \"\")\n\n #check if a comma is used for decimals or thousands\n comma_i = entry.find(\",\")\n if len(entry[comma_i:]) > 3: #it's a thousands separator, it can be removed\n entry = entry.replace(\",\", \"\")\n else: #it's a decimal separator\n entry = entry.replace(\",\", \".\") #convert it to dot\n\n #remove extra dots\n while entry.count(\".\") > 1:\n entry = entry.replace(\".\", \"\", 1) #replace once\n return round(float(entry), 1) #round to 1 decimal\n\nmy_list = ['100,30 Kč','10 000,00 Kč', '10,000.00 Kč', '10000 Kč', '32.100,30 Kč', '12.345,678.91 Kč']\nparsed = list(map(parse_entry, my_list))\nprint(parsed) #[100.3, 10000.0, 10000.0, 10000.0, 32100.3, 12345678.9]\n\n", "I tried to keep your code and add just few lines. The idea is the store in a variable the number after \".\" and then add it after replacing the \",\" with \".\" and join the number separated by \".\".\nimport re\n\nmy_list = ['100,30 Kč','10 000,00 Kč', '10,000.00 Kč', '10000 Kč', '32.100,30 Kč', '12.345,678.91 Kč']\n\nfor i in my_list:\n ws = i.replace(\"Kč\", '')\n x = re.sub(',','.', ws).replace(\" \",\"\")\n \n if len( x.split(\".\"))>1:\n end= x.split(\".\")[-1]\n x = \"\".join([i for i in x.split(\".\")[:-1]])+\".\"+end\n print(x)\n\n", "Whilst the other answers work for your specific scenario (e.g. you know the current code you're replacing), it's not very extensible.\nSo here's a more generic approach:\nimport re\n\nvalues = [\n \"100,30 Kč\",\n \"10 000,00 Kč\",\n \"10,000.00 Kč\",\n \"10000 Kč\",\n \"32.100,30 Kč\",\n \"12.345,678.91 Kč\", # This value is a bit odd... is it _right_?\n]\n\nfor value in values:\n # Remove any character that's not a number or a comma\n value = re.sub(\"[^0-9,]\", \"\", value)\n\n # Replace remaining commas with periods\n value = value.replace(\",\", \".\")\n\n # Convert from string to number\n value = float(value)\n\n print(value)\n\nThis outputs:\n100.3\n10000.0\n10.0\n10000.0\n32100.3\n12345.67891\n\n", "Without the aid or re you could just do this:\nmy_list = ['100,30 Kč','10 000,00 Kč', '10,000.00 Kč', '10000 Kč', '32.100,30 Kč', '12.345,678.91 Kč']\n\ndef fix(s):\n r = []\n for c in s:\n if c in '0123456789':\n r.append(c)\n elif c == ',':\n r.append('.')\n elif not c in '. ':\n break\n return float(''.join(r))\n\nfor n in my_list:\n print(fix(n))\n\nOutput:\n100.3\n10000.0\n10.0\n10000.0\n32100.3\n12345.67891\n\n" ]
[ 1, 0, 0, 0, 0 ]
[]
[]
[ "python", "python_re" ]
stackoverflow_0074613237_python_python_re.txt
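A compact hedged variant of the parsing heuristic discussed above, checked against the question's own examples. The rule that the last '.' or ',' is the decimal separator only when 1 or 2 digits follow it is an assumption, not a locale-correct parser.

import re

def parse_czk(text):
    # Keep only digits and potential separators.
    cleaned = re.sub(r"[^0-9.,]", "", text)
    # Assumption: the last '.' or ',' is the decimal separator iff 1-2 digits follow it.
    last = max(cleaned.rfind("."), cleaned.rfind(","))
    if last != -1 and 1 <= len(cleaned) - last - 1 <= 2:
        integer, fraction = cleaned[:last], cleaned[last + 1:]
        return float(re.sub(r"[.,]", "", integer) + "." + fraction)
    return float(re.sub(r"[.,]", "", cleaned))

assert parse_czk("10,123.20 Kč") == 10123.2
assert parse_czk("10.023,123.45 Kč") == 10023123.45
assert parse_czk("20 743 210.2 Kč") == 20743210.2
assert parse_czk("10 000,00 Kč") == 10000.0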
Q: Read a binary mp4 to a numpy array I read this answer on how to pass an mp4 file from client to server using python's FastAPI. I can read the file into its binary form like as suggested: contents = file.file.read() contents Out[25]: b'\x00\x00\x00\x18ftypmp42\x00\x00\x00\x00isommp42.. Now, I want to load the content into a numpy array. I have looked on several answers on the web but all of them read the file from disk, something like: import skvideo.io videodata = skvideo.io.vread("video_file_name") However, I want to avoid the disk operation of writing and deleting the binary string. Any help would be much appreciated. A: So, I commented heavily about how unsure I am that it is really the thing to do, and how I suspect XY problem on this. But, just to give a formal answer to the question, as is, I repeat here what I said in comments: np.frombuffer(contents, dtype=np.uint8) Is the way to turn a byte string into a numpy array of bytes (that is of uint8 integers) The dtype part is important. Since frombuffer does not just iterates bytes to create an array, but expect to find the data representation as is in the buffer. And without the dtype it will try to create an array of float64 from your buffer. Which, 7 times out of 8, will fail because an array of float64 can be represented only by buffers of bytes of len multiple of 8. And if len of contents happen to be multiple of 8, it will succeeds, giving your meaningless floats. For example, on a .mp4 of mine with open('out.mp4', 'rb') as f: content=f.read() len(content) # 63047 - it is a very small mp4 x=np.frombuffer(content) # ValueError: buffer size must be a multiple of element size x=np.frombuffer(content[:63040]) x.shape # (7880,) x.dtype # np.float64 x[:10] #array([ 6.32301702e+233, 2.78135139e-309, 9.33260821e-066, # 1.15681581e-071, 2.78106620e+180, 3.98476928e+252, # nan, 9.02529811e+042, -3.58729431e+222, # 1.08615058e-153]) x=np.frombuffer(content, dtype=np.uint8) x.shape # (63047,) x.dtype # uint8 x[:10] # array([ 0, 0, 0, 32, 102, 116, 121, 112, 105, 115], dtype=uint8)
Read a binary mp4 to a numpy array
I read this answer on how to pass an mp4 file from client to server using python's FastAPI. I can read the file into its binary form like as suggested: contents = file.file.read() contents Out[25]: b'\x00\x00\x00\x18ftypmp42\x00\x00\x00\x00isommp42.. Now, I want to load the content into a numpy array. I have looked on several answers on the web but all of them read the file from disk, something like: import skvideo.io videodata = skvideo.io.vread("video_file_name") However, I want to avoid the disk operation of writing and deleting the binary string. Any help would be much appreciated.
[ "So, I commented heavily about how unsure I am that it is really the thing to do, and how I suspect XY problem on this.\nBut, just to give a formal answer to the question, as is, I repeat here what I said in comments:\nnp.frombuffer(contents, dtype=np.uint8)\n\nIs the way to turn a byte string into a numpy array of bytes (that is of uint8 integers)\nThe dtype part is important. Since frombuffer does not just iterates bytes to create an array, but expect to find the data representation as is in the buffer. And without the dtype it will try to create an array of float64 from your buffer. Which, 7 times out of 8, will fail because an array of float64 can be represented only by buffers of bytes of len multiple of 8. And if len of contents happen to be multiple of 8, it will succeeds, giving your meaningless floats.\nFor example, on a .mp4 of mine\nwith open('out.mp4', 'rb') as f:\n content=f.read()\n\nlen(content)\n# 63047 - it is a very small mp4\n\nx=np.frombuffer(content)\n# ValueError: buffer size must be a multiple of element size\n\nx=np.frombuffer(content[:63040])\nx.shape\n# (7880,)\nx.dtype\n# np.float64\nx[:10]\n#array([ 6.32301702e+233, 2.78135139e-309, 9.33260821e-066,\n# 1.15681581e-071, 2.78106620e+180, 3.98476928e+252,\n# nan, 9.02529811e+042, -3.58729431e+222,\n# 1.08615058e-153])\n\nx=np.frombuffer(content, dtype=np.uint8)\nx.shape\n# (63047,)\nx.dtype\n# uint8\nx[:10]\n# array([ 0, 0, 0, 32, 102, 116, 121, 112, 105, 115], dtype=uint8)\n\n" ]
[ 0 ]
[]
[]
[ "binaryfiles", "mp4", "numpy", "python" ]
stackoverflow_0074613682_binaryfiles_mp4_numpy_python.txt
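Beyond the byte-level view in the answer above, a hedged sketch of what the asker likely wants, decoding the uploaded MP4 bytes into frame arrays without touching disk. It assumes the PyAV package is available; contents stands for the bytes already read from the FastAPI UploadFile in the question.

import io
import av          # assumption: PyAV installed (pip install av)
import numpy as np

contents = file.file.read()                 # bytes from the FastAPI UploadFile
container = av.open(io.BytesIO(contents))   # av.open accepts file-like objects

frames = [f.to_ndarray(format="rgb24") for f in container.decode(video=0)]
video = np.stack(frames)                    # shape: (num_frames, height, width, 3)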
Q: Python FastAPI/Uvicorn - External logger wont work? I am using logtail.com and for some reason it wont log ONLY in my FastAPI/UVICORN app, I tried using the package in an a different test python file and it worked? I dont understand what I am missing. I call the logger and it should work but it does not, additionally I even do a log INSTANTLY after I instantiate the logger and it does not work. Code below. # # Logger.py # from logtail import LogtailHandler import logging class Logger: def __init__(self): handler = LogtailHandler(source_token="XXXXXXX") logger = logging.getLogger(__name__) logger.handlers = [] logger.setLevel(logging.DEBUG) # Set minimal log level logger.addHandler(handler) # asign handler to logger logger.debug('I am using Logtail!') def info(self, message): self.log.info(message) def error(self, message): self.log.error(message) def debug(self, message): self.log.debug(message) def warning(self, message): self.log.warning(message) def critical(self, message): self.log.critical(message) def exception(self, message): self.log.exception(message) # # __init__ # from fastapi import FastAPI from fastapi.middleware.cors import CORSMiddleware from abe.routes import main, bc_handler app = FastAPI(title="ABE-Backend", openapi_url="/openapi.json") app.include_router(main.router) app.include_router(bc_handler.router) from abe.utils.logger import Logger logger = Logger() #create tables # models.Base.metadata.create_all(bind=engine) origins = [ ] app.add_middleware( CORSMiddleware, allow_origins=origins, allow_credentials=True, allow_methods=["*"], allow_headers=["*"], ) if __name__ == "__main__": # Use this for debugging purposes only import uvicorn uvicorn.run(app, host="0.0.0.0", port=8000, log_level="debug") logger.info("Starting server on port 8000, with cors origins: "+str(origins)) A: It should work, but only after you shutdown the API. Logging inside the main function before calling uvicorn.run() and inside endpoint routes should work as you expected. uvicorn.run() is a sync function. So the interpreter waits until the function has finished (API has shutdown) and executes the following statements afterwards.
Python FastAPI/Uvicorn - External logger wont work?
I am using logtail.com and for some reason it wont log ONLY in my FastAPI/UVICORN app, I tried using the package in an a different test python file and it worked? I dont understand what I am missing. I call the logger and it should work but it does not, additionally I even do a log INSTANTLY after I instantiate the logger and it does not work. Code below. # # Logger.py # from logtail import LogtailHandler import logging class Logger: def __init__(self): handler = LogtailHandler(source_token="XXXXXXX") logger = logging.getLogger(__name__) logger.handlers = [] logger.setLevel(logging.DEBUG) # Set minimal log level logger.addHandler(handler) # asign handler to logger logger.debug('I am using Logtail!') def info(self, message): self.log.info(message) def error(self, message): self.log.error(message) def debug(self, message): self.log.debug(message) def warning(self, message): self.log.warning(message) def critical(self, message): self.log.critical(message) def exception(self, message): self.log.exception(message) # # __init__ # from fastapi import FastAPI from fastapi.middleware.cors import CORSMiddleware from abe.routes import main, bc_handler app = FastAPI(title="ABE-Backend", openapi_url="/openapi.json") app.include_router(main.router) app.include_router(bc_handler.router) from abe.utils.logger import Logger logger = Logger() #create tables # models.Base.metadata.create_all(bind=engine) origins = [ ] app.add_middleware( CORSMiddleware, allow_origins=origins, allow_credentials=True, allow_methods=["*"], allow_headers=["*"], ) if __name__ == "__main__": # Use this for debugging purposes only import uvicorn uvicorn.run(app, host="0.0.0.0", port=8000, log_level="debug") logger.info("Starting server on port 8000, with cors origins: "+str(origins))
[ "It should work, but only after you shutdown the API.\nLogging inside the main function before calling uvicorn.run() and inside endpoint routes should work as you expected.\nuvicorn.run() is a sync function. So the interpreter waits until the function has finished (API has shutdown) and executes the following statements afterwards.\n" ]
[ 1 ]
[]
[]
[ "fastapi", "logging", "python", "python_3.x", "uvicorn" ]
stackoverflow_0074605213_fastapi_logging_python_python_3.x_uvicorn.txt
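A hedged sketch of the fix the answer above implies: uvicorn.run() blocks until shutdown, so a log call placed after it only fires when the server stops. It also assigns the handler-equipped logger to self.log, which the question's __init__ never does (it only creates a local variable), so the instance methods would otherwise raise AttributeError. This is a patch against the question's own files, so imports and names from there (LogtailHandler, logging, app, origins) are assumed.

class Logger:
    def __init__(self):
        handler = LogtailHandler(source_token="XXXXXXX")
        self.log = logging.getLogger(__name__)   # keep the reference the other methods expect
        self.log.setLevel(logging.DEBUG)
        self.log.addHandler(handler)

    def info(self, message):
        self.log.info(message)

logger = Logger()

@app.on_event("startup")
async def log_startup():
    # Fires when the app actually starts serving.
    logger.info("Starting server on port 8000, with cors origins: " + str(origins))

if __name__ == "__main__":
    import uvicorn
    logger.info("Launching uvicorn")              # runs before the blocking call
    uvicorn.run(app, host="0.0.0.0", port=8000, log_level="debug")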
Q: list comprehension to filter a list of lists This problem is from https://leetcode.com/problems/find-players-with-zero-or-one-losses/. Is it possible to use list comprehension in this problem to create a new list that only has the first item of every tuple that never shows up in the second item of any tuple. For instance: matches = [[1,3],[2,3],[3,6],[5,6],[5,7],[4,5],[4,8],[4,9],[10,4],[10,9]] I want a new list of: neverLost = [1, 2, 10] I would make two lists, one for each part of the question with list comprehension and then concatenate them together afterwards for the solution. I tried using list comprehension but I'm having syntax issues neverLost = [w for w, l in matches if w not l] The first part w for w, l in matches works fine and will create a list of just the first item of each tuple [1, 2, 3, 5, 5, 4, 4, 4, 10, 10], but I'm struggling with the syntax and understanding of the expression to filter the "winners". Please let me know if this is even a good solution for the problem. I know I can probably do this with a dictionary, but I wanted to know if this way was also possible. Thanks! A: List comprehension works, but not optimised way to solve these sort of problems In [48]: list(set([j[0] for j in matches if j[0] not in [i[1] for i in matches]])) Out[48]: [1, 2, 10] A: What you did neverLost = [w for w, l in matches if w not l] is going to check whether the first item in that tuple is equal to the second item in that same tuple. It is not checking whether the first item is present at 2nd position in any other tuple. You can use this - list(set([w[0] for w in matches if w[0] not in [l[1] for l in matches]])) A: I think you need to do something like, matches = [[1, 3], [2, 3], [3, 6], [5, 6], [5, 7], [4, 5], [4, 8], [4, 9], [10, 4], [10, 9]] losses_dict = {} for (_, value) in matches: losses_dict.setdefault(value, 0) # key might exist already losses_dict[value] += 1 final_list = [ [k for k, _ in matches if k not in losses_dict.keys()], [k for k, v in losses_dict.items() if v == 1] ] The dict.setdefault checks if the key already existis otherwise it appends the value 0. A: Sure, something like this is a reasonable solution: seconds = {second for _, second in matches} never_lost = [first for first, _ in matches if first not in seconds] never_lost = list(dict.fromkeys(never_lost)) # idiom for removing duplicates while maintaining order Now, if you mean only using a single list comprehension expression, then you'd have to get pretty janky: never_lost = [ x for x in { first: None for first, _ in matches for seconds in [{second for _, second in matches}] if first not in seconds } ] I'd stick to the first approach. Not everything has to or should be a single list comprehension expression. A: neverLost = list(set(w for w, l in matches if all(w != j for i, j in matches))) List comprehensions are good for performance but most of the time complex list comprehensions can decrease code readability. A: Another approach is to model the problem as a directed graph where each player is a vertex of the graph and edge determines result of the match, such that edge direction is from winner to loser. for eg. [1, 2] => [1](winner) -> [2](loser) Now problem boils down to calculating in-degree/out-degree of the vertices, which should give us the desired result.
list comprehension to filter a list of lists
This problem is from https://leetcode.com/problems/find-players-with-zero-or-one-losses/. Is it possible to use list comprehension in this problem to create a new list that only has the first item of every tuple that never shows up in the second item of any tuple. For instance: matches = [[1,3],[2,3],[3,6],[5,6],[5,7],[4,5],[4,8],[4,9],[10,4],[10,9]] I want a new list of: neverLost = [1, 2, 10] I would make two lists, one for each part of the question with list comprehension and then concatenate them together afterwards for the solution. I tried using list comprehension but I'm having syntax issues neverLost = [w for w, l in matches if w not l] The first part w for w, l in matches works fine and will create a list of just the first item of each tuple [1, 2, 3, 5, 5, 4, 4, 4, 10, 10], but I'm struggling with the syntax and understanding of the expression to filter the "winners". Please let me know if this is even a good solution for the problem. I know I can probably do this with a dictionary, but I wanted to know if this way was also possible. Thanks!
[ "List comprehension works, but not optimised way to solve these sort of problems\nIn [48]: list(set([j[0] for j in matches if j[0] not in [i[1] for i in matches]]))\nOut[48]: [1, 2, 10]\n\n", "What you did\nneverLost = [w for w, l in matches if w not l]\n\nis going to check whether the first item in that tuple is equal to the second item in that same tuple. It is not checking whether the first item is present at 2nd position in any other tuple.\nYou can use this -\nlist(set([w[0] for w in matches if w[0] not in [l[1] for l in matches]]))\n\n", "I think you need to do something like,\nmatches = [[1, 3], [2, 3], [3, 6], [5, 6], [5, 7], [4, 5], [4, 8], [4, 9], [10, 4], [10, 9]]\n\nlosses_dict = {}\nfor (_, value) in matches:\n losses_dict.setdefault(value, 0) # key might exist already\n losses_dict[value] += 1\n\nfinal_list = [\n [k for k, _ in matches if k not in losses_dict.keys()],\n [k for k, v in losses_dict.items() if v == 1]\n ]\n\n\nThe dict.setdefault checks if the key already existis otherwise it appends the value 0.\n", "Sure, something like this is a reasonable solution:\nseconds = {second for _, second in matches}\nnever_lost = [first for first, _ in matches if first not in seconds]\nnever_lost = list(dict.fromkeys(never_lost)) # idiom for removing duplicates while maintaining order\n\nNow, if you mean only using a single list comprehension expression, then you'd have to get pretty janky:\nnever_lost = [\n x for x in\n {\n first: None\n for first, _ in matches \n for seconds in [{second for _, second in matches}] \n if first not in seconds\n }\n]\n\nI'd stick to the first approach. Not everything has to or should be a single list comprehension expression.\n", "neverLost = list(set(w for w, l in matches if all(w != j for i, j in matches)))\n\nList comprehensions are good for performance but most of the time complex list comprehensions can decrease code readability.\n", "Another approach is to model the problem as a directed graph where each player is a vertex of the graph and edge determines result of the match, such that edge direction is from winner to loser.\nfor eg.\n\n[1, 2] => [1](winner) -> [2](loser)\n\nNow problem boils down to calculating in-degree/out-degree of the vertices, which should give us the desired result.\n" ]
[ 2, 1, 1, 1, 0, 0 ]
[]
[]
[ "list", "list_comprehension", "python" ]
stackoverflow_0074611917_list_list_comprehension_python.txt
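A minimal sketch of the loss-counting ("in-degree") idea mentioned in the final answer of the entry above. The variable names are illustrative and not taken from the original answers; counting how often each player appears as a loser answers both halves of the LeetCode task in a single pass and avoids the quadratic membership checks of the nested comprehensions.

from collections import Counter

matches = [[1, 3], [2, 3], [3, 6], [5, 6], [5, 7],
           [4, 5], [4, 8], [4, 9], [10, 4], [10, 9]]

losses = Counter(loser for _, loser in matches)      # each player's "in-degree"
players = {p for match in matches for p in match}

never_lost = sorted(p for p in players if losses[p] == 0)
lost_once = sorted(p for p in players if losses[p] == 1)

print(never_lost)  # [1, 2, 10]
print(lost_once)   # [4, 5, 7, 8]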
Q: Retain original document element index of argument passed through sklearn's CountVectorizer() in order to access corresponding part of speech tag I have a data frame with sentences and the respective part of speech tag for each word (Below is an extract of the data I'm working with (data taken from SNLI corpus). For each sentence in my collection I would like to extract unigrams and the corresponding pos-tag of that word. For instance if I've the following: vectorizer_unigram = CountVectorizer(analyzer='word', ngram_range=(1, 1), stop_words = 'english') doc = {'sent' : ['Two women are embracing while holding to go packages .'], 'tags' : ['NUM NOUN AUX VERB SCONJ VERB PART VERB NOUN PUNCT']} sentence = vectorizer_unigram.fit(doc['sent']) sentence_unigrams = sentence.get_feature_names_out() Then I would get the following unigrams output: array(['embracing', 'holding', 'packages', 'women'], dtype=object) But I don't know how to retain the part of speech tag after this. I tried to do a lookup version with the unigrams, but as they may differ from the words in the sentence (if you for instance do sentence.split(' ')) you don't necessarily get the same tokens. Any suggestions of how I can extract unigrams and retain the corresponding part-of-speech tag? A: After reviewing the source code for the sklearn CountVectorizer class, particularly the fit function, I don't believe the class has any way of tracking the original document element indexes relative to the extracted unigram features: where the unigram features do not necessarily have the same tokens. Other than the simple solution provided below, you might have to rely on some other method/library to achieve your desired results. If there is a particular case that fails, I'd suggest adding that to your question as it might help people generate solutions to your problem. from sklearn.feature_extraction.text import CountVectorizer vectorizer_unigram = CountVectorizer(analyzer='word', ngram_range=(1, 1), stop_words = 'english') doc = {'sent': ['Two women are embracing while holding to go packages .'], 'tags': ['NUM NOUN AUX VERB SCONJ VERB PART VERB NOUN PUNCT']} sentence = vectorizer_unigram.fit(doc['sent']) sentence_unigrams = sentence.get_feature_names_out() sent_token_list = doc['sent'][0].split() tags_token_list = doc['tags'][0].split() sentence_tags = [] for unigram in sentence_unigrams: for i in range(len(sent_token_list)): if sent_token_list[i] == unigram: sentence_tags.append(tags_token_list[i]) print(sentence_unigrams) # Output: ['embracing' 'holding' 'packages' 'women'] print(sentence_tags) # Output: ['VERB', 'VERB', 'NOUN', 'NOUN']
Retain original document element index of argument passed through sklearn's CountVectorizer() in order to access corresponding part of speech tag
I have a data frame with sentences and the respective part of speech tag for each word (Below is an extract of the data I'm working with (data taken from SNLI corpus). For each sentence in my collection I would like to extract unigrams and the corresponding pos-tag of that word. For instance if I've the following: vectorizer_unigram = CountVectorizer(analyzer='word', ngram_range=(1, 1), stop_words = 'english') doc = {'sent' : ['Two women are embracing while holding to go packages .'], 'tags' : ['NUM NOUN AUX VERB SCONJ VERB PART VERB NOUN PUNCT']} sentence = vectorizer_unigram.fit(doc['sent']) sentence_unigrams = sentence.get_feature_names_out() Then I would get the following unigrams output: array(['embracing', 'holding', 'packages', 'women'], dtype=object) But I don't know how to retain the part of speech tag after this. I tried to do a lookup version with the unigrams, but as they may differ from the words in the sentence (if you for instance do sentence.split(' ')) you don't necessarily get the same tokens. Any suggestions of how I can extract unigrams and retain the corresponding part-of-speech tag?
[ "After reviewing the source code for the sklearn CountVectorizer class, particularly the fit function, I don't believe the class has any way of tracking the original document element indexes relative to the extracted unigram features: where the unigram features do not necessarily have the same tokens. Other than the simple solution provided below, you might have to rely on some other method/library to achieve your desired results. If there is a particular case that fails, I'd suggest adding that to your question as it might help people generate solutions to your problem.\nfrom sklearn.feature_extraction.text import CountVectorizer\n\nvectorizer_unigram = CountVectorizer(analyzer='word', ngram_range=(1, 1), stop_words = 'english')\n\ndoc = {'sent': ['Two women are embracing while holding to go packages .'],\n 'tags': ['NUM NOUN AUX VERB SCONJ VERB PART VERB NOUN PUNCT']}\n\nsentence = vectorizer_unigram.fit(doc['sent'])\nsentence_unigrams = sentence.get_feature_names_out()\n\nsent_token_list = doc['sent'][0].split()\ntags_token_list = doc['tags'][0].split()\nsentence_tags = []\n\nfor unigram in sentence_unigrams:\n for i in range(len(sent_token_list)):\n if sent_token_list[i] == unigram:\n sentence_tags.append(tags_token_list[i])\n\nprint(sentence_unigrams)\n# Output: ['embracing' 'holding' 'packages' 'women']\nprint(sentence_tags)\n# Output: ['VERB', 'VERB', 'NOUN', 'NOUN']\n\n" ]
[ 0 ]
[]
[]
[ "countvectorizer", "nlp", "python", "scikit_learn", "stanford_nlp" ]
stackoverflow_0074611192_countvectorizer_nlp_python_scikit_learn_stanford_nlp.txt
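A hedged variant of the lookup shown in the answer above, offered only as a sketch: building a token-to-tag dictionary once avoids the nested loop, assuming each surface token carries a single tag. CountVectorizer lowercases by default, so the dictionary keys are lowercased to match the extracted unigrams.

from sklearn.feature_extraction.text import CountVectorizer

sent = 'Two women are embracing while holding to go packages .'
tags = 'NUM NOUN AUX VERB SCONJ VERB PART VERB NOUN PUNCT'

vectorizer = CountVectorizer(analyzer='word', ngram_range=(1, 1), stop_words='english')
vectorizer.fit([sent])
unigrams = vectorizer.get_feature_names_out()

token_to_tag = {tok.lower(): tag for tok, tag in zip(sent.split(), tags.split())}
unigram_tags = [token_to_tag.get(u) for u in unigrams]

print(list(unigrams))  # ['embracing', 'holding', 'packages', 'women']
print(unigram_tags)    # ['VERB', 'VERB', 'NOUN', 'NOUN']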
Q: Run function in tex-based RPG game writed in python How to write a function to the game where you can run from the fight and so it would return your position to the state before the battle, because with my current code it returns you to the beginning of the game. This is my code def run(): runnum = random.randitn(1, 10) if runnum <= 4: print("Success!") option = input(" ") start1() else runnum > 7: print("You can't run!") option = input(" ") fight() A: If you want to be able to come back to a previous state (before the fight), either store the previous state, or use the Command pattern which allows for easy "undo", or do something else that may require re-architecturing your game. It would be simpler to help you if we had a Minimal Reproducible Example of your problem. A: It is good to start a problem like this with the data structures. You might want a player, rooms and mobs. YAML is handy for this kind of data modeling. MODEL = """ rooms: 0: desc: You are in the lobby on Shang Tsung's Island. It is a rather grey room. name: The lobby 1: desc: An aged Shang Tsung sits on his throne and watches the fights below. Monk students also watch and applaud after every round is finished. name: The courtyard 2: desc: A prominent lava river flows in the background. On the west side of the stageis a bridge that goes over it, while the east side leads to a split road. A rock outcropping marks each side of the stage. Spikes and their victims, dot the landscape. name: Krossroads 3: desc: An sandy oval arena. From the stands giant drummers beat their drums as the fighters beat each other up. It is said that in the event of a tie, Konga, the biggest drummer of them all, will challenge both of the warriors. name: Drum Arena player: location: 0 health: 24 mobs: 0: name: Kobra location: 1 desc: Kobra is a tall man. The newest recruit of the Black Dragon has shoulder length blond hair and he wears a white karate gi with black and gold trim. He wears no shirt under his gi. He has taped wrists and gloves, and he wears dark baggy pants with yellow trim over bare feet. health: 14 strength: 6 1: name: Kira location: 3 desc: Kira is a rational level-headed anarchist, opposite in character to Kobra. She has unlikely red hair in a pair of even more unlikely pony tails and wears a black leather jerkin and leggings dyed in the blood of her many victims. In each hand she wields a serrated dagger. health: 20 strength: 8 """ The game is then structured around this data model. At heart it a while loop that continues until the player runs out of health. At the start of the loop prompt the user for a command. def run(): .... while player['health'] > 0: cmd = prompt("cmd: ") if cmd == ... More concretely, handle the command "quit" or "q" to allow the player to exit the game: import yaml from rich import print from rich.prompt import Prompt from rich.panel import Panel import random def run(): model = yaml.load(MODEL, Loader=yaml.Loader) rooms = model['rooms'] player = model['player'] while player['health'] > 0: loc = rooms[player['location']] prompt = f"[orange1 on grey23][{loc['name']}][/]" cmd = Prompt.ask(prompt).lower() if cmd in ["q", "quit"]: print("Thank you for playing") return Add commands to look, to move north and south. if cmd in ["l", "look"]: desc = [loc['desc']] print(Panel("\n".join(desc))) continue if cmd in ["n", "north"]: player['location'] += 1 continue elif cmd in ["s", "south"]: player['location'] -= 1 continue A complete example that incorporates mobs and a combat system. 
def find_mob(model): mobs = model['mobs'] player = model['player'] for mob in mobs.values(): if mob['location'] == player['location']: return mob def run(): print(Panel("Welcome to MORTAL TEXT BASED KOMBAT\n" "Commands are north, south, look, punch and kick.\n" "Good luck!")) model = yaml.load(MODEL, Loader=yaml.Loader) rooms = model['rooms'] player = model['player'] loc = rooms[player['location']] while player['health'] > 0: new_loc = rooms[player['location']] if loc != new_loc: print(f"[deep_sky_blue3 on thistle1] You enter {new_loc['name']}[/]") loc = new_loc mob = find_mob(model) if mob: prompt = f"[red3 on light_salmon1][{loc['name']} | You ({player['health']}) vs {mob['name']} ({mob['health']})] [/]" else: prompt = f"[orange1 on grey23][{loc['name']}][/]" cmd = Prompt.ask(prompt).lower() if cmd in ["n", "north"]: if player['location'] < max(rooms.keys()): player['location'] += 1 else: print("Sorry, you can't go north :worried: ") continue elif cmd in ["s", "south"]: if player['location'] > 0: player['location'] -= 1 else: print("Sorry, you can't go south :worried: ") continue elif cmd in ["l", "look"]: desc = [loc['desc']] if mob: desc.append(f"[red3 on grey93]{mob['name']} is here![/]") desc.append(f"[blue on white]{mob['desc']}[/]") print(Panel("\n".join(desc))) continue if mob: hit, damage = -1, 0 if cmd in ["punch", "p"]: cmd = "punch" hit, damage = random.randint(4,20), random.randint(1,6) elif cmd in ["kick", "k"]: cmd = "kick" hit, damage = random.randint(1,20), random.randint(2,8) if hit > 9: print(f"Your {cmd} hits and does {damage} damage!") mob["health"] -= damage if mob['health'] <= 0: print(f"You killed {mob['name']}!") mob['location'] = -1 continue elif hit > 0: print(f"Your {cmd} misses.") if random.randint(1,20) > 10: damage = random.randint(1,mob['strength']) print(f"{mob['name']} hits you. You take {damage} damage.") player['health'] -= damage if player['health'] <= 0: print("[red on black]You died[/] :skull:")
Run function in text-based RPG game written in Python
How to write a function to the game where you can run from the fight and so it would return your position to the state before the battle, because with my current code it returns you to the beginning of the game. This is my code def run(): runnum = random.randitn(1, 10) if runnum <= 4: print("Success!") option = input(" ") start1() else runnum > 7: print("You can't run!") option = input(" ") fight()
[ "If you want to be able to come back to a previous state (before the fight), either store the previous state, or use the Command pattern which allows for easy \"undo\", or do something else that may require re-architecturing your game.\nIt would be simpler to help you if we had a Minimal Reproducible Example of your problem.\n", "It is good to start a problem like this with the data structures.\nYou might want a player, rooms and mobs. YAML is handy for this kind of data modeling.\nMODEL = \"\"\"\nrooms:\n 0:\n desc: You are in the lobby on Shang Tsung's Island. It is a rather grey room.\n name: The lobby\n 1:\n desc: An aged Shang Tsung sits on his throne and watches the fights below. Monk\n students also watch and applaud after every round is finished.\n name: The courtyard\n 2:\n desc: A prominent lava river flows in the background. On the west side of the\n stageis a bridge that goes over it, while the east side leads to a split road.\n A rock outcropping marks each side of the stage. Spikes and their victims, dot\n the landscape.\n name: Krossroads\n 3:\n desc: An sandy oval arena. From the stands giant drummers beat their drums as the \n fighters beat each other up. It is said that in the event of a tie, Konga, \n the biggest drummer of them all, will challenge both of the warriors.\n name: Drum Arena\nplayer:\n location: 0\n health: 24\nmobs:\n 0:\n name: Kobra\n location: 1\n desc: Kobra is a tall man. The newest recruit of the Black Dragon has \n shoulder length blond hair and he wears a white karate gi with black \n and gold trim. He wears no shirt under his gi. He has taped wrists \n and gloves, and he wears dark baggy pants with yellow trim over bare feet.\n health: 14\n strength: 6\n 1:\n name: Kira\n location: 3\n desc: Kira is a rational level-headed anarchist, opposite in character to Kobra.\n She has unlikely red hair in a pair of even more unlikely pony tails and\n wears a black leather jerkin and leggings dyed in the blood of her\n many victims. In each hand she wields a serrated dagger.\n health: 20\n strength: 8\n\"\"\"\n\nThe game is then structured around this data model. At heart it a while loop that continues until the player runs out of health. 
At the start of the loop prompt the user for a command.\n\ndef run():\n ....\n while player['health'] > 0:\n cmd = prompt(\"cmd: \")\n if cmd == ...\n\nMore concretely, handle the command \"quit\" or \"q\" to allow the player to exit the game:\nimport yaml\nfrom rich import print\nfrom rich.prompt import Prompt\nfrom rich.panel import Panel\nimport random\n\ndef run():\n model = yaml.load(MODEL, Loader=yaml.Loader)\n rooms = model['rooms']\n player = model['player']\n while player['health'] > 0:\n loc = rooms[player['location']]\n prompt = f\"[orange1 on grey23][{loc['name']}][/]\"\n cmd = Prompt.ask(prompt).lower()\n if cmd in [\"q\", \"quit\"]:\n print(\"Thank you for playing\")\n return\n\nAdd commands to look, to move north and south.\n if cmd in [\"l\", \"look\"]:\n desc = [loc['desc']]\n print(Panel(\"\\n\".join(desc)))\n continue\n if cmd in [\"n\", \"north\"]:\n player['location'] += 1\n continue\n elif cmd in [\"s\", \"south\"]:\n player['location'] -= 1\n continue\n\nA complete example that incorporates mobs and a combat system.\ndef find_mob(model):\n mobs = model['mobs']\n player = model['player']\n for mob in mobs.values():\n if mob['location'] == player['location']:\n return mob\n\ndef run():\n print(Panel(\"Welcome to MORTAL TEXT BASED KOMBAT\\n\"\n \"Commands are north, south, look, punch and kick.\\n\"\n \"Good luck!\"))\n model = yaml.load(MODEL, Loader=yaml.Loader)\n rooms = model['rooms']\n player = model['player']\n loc = rooms[player['location']]\n while player['health'] > 0:\n new_loc = rooms[player['location']]\n if loc != new_loc:\n print(f\"[deep_sky_blue3 on thistle1] You enter {new_loc['name']}[/]\")\n loc = new_loc\n mob = find_mob(model)\n if mob:\n prompt = f\"[red3 on light_salmon1][{loc['name']} | You ({player['health']}) vs {mob['name']} ({mob['health']})] [/]\"\n else:\n prompt = f\"[orange1 on grey23][{loc['name']}][/]\"\n cmd = Prompt.ask(prompt).lower()\n if cmd in [\"n\", \"north\"]:\n if player['location'] < max(rooms.keys()):\n player['location'] += 1\n else:\n print(\"Sorry, you can't go north :worried: \")\n continue\n elif cmd in [\"s\", \"south\"]:\n if player['location'] > 0:\n player['location'] -= 1\n else:\n print(\"Sorry, you can't go south :worried: \")\n continue\n elif cmd in [\"l\", \"look\"]:\n desc = [loc['desc']]\n if mob:\n desc.append(f\"[red3 on grey93]{mob['name']} is here![/]\")\n desc.append(f\"[blue on white]{mob['desc']}[/]\")\n print(Panel(\"\\n\".join(desc)))\n continue\n if mob:\n hit, damage = -1, 0\n if cmd in [\"punch\", \"p\"]:\n cmd = \"punch\"\n hit, damage = random.randint(4,20), random.randint(1,6)\n elif cmd in [\"kick\", \"k\"]:\n cmd = \"kick\"\n hit, damage = random.randint(1,20), random.randint(2,8)\n if hit > 9:\n print(f\"Your {cmd} hits and does {damage} damage!\")\n mob[\"health\"] -= damage\n if mob['health'] <= 0:\n print(f\"You killed {mob['name']}!\")\n mob['location'] = -1\n continue\n elif hit > 0:\n print(f\"Your {cmd} misses.\")\n if random.randint(1,20) > 10:\n damage = random.randint(1,mob['strength'])\n print(f\"{mob['name']} hits you. You take {damage} damage.\")\n player['health'] -= damage\n if player['health'] <= 0:\n print(\"[red on black]You died[/] :skull:\")\n\n" ]
[ 0, 0 ]
[]
[]
[ "function", "python", "return", "text" ]
stackoverflow_0074604923_function_python_return_text.txt
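A self-contained sketch of the "store the previous state" suggestion from the first answer of the entry above. The function and variable names are assumptions made for illustration, not the asker's actual game code: snapshot the player's location when the fight starts and restore that snapshot on a successful escape instead of calling start1().

import random

def attempt_run(player, pre_fight_location):
    """Return True if the escape succeeds; on success restore the saved location."""
    if random.randint(1, 10) <= 4:
        print("Success!")
        player["location"] = pre_fight_location   # back to the pre-fight state
        return True
    print("You can't run!")
    return False

player = {"location": "courtyard", "health": 20}
saved_location = player["location"]               # snapshot taken as the fight begins
player["location"] = "battle arena"

if attempt_run(player, saved_location):
    print(player["location"])                     # 'courtyard' again after escaping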
Q: Copy dataframe n times, assign new IDs, keeping the original I have a dataframe that looks like this: df = pd.DataFrame({'id':[1,3,500, 53, 1, 500], 'code1':['a0', 'b0', 'b0', 'c0', 'b0', 'a0'], 'code2':['aa', 'bb', 'cc', 'bb', 'cc', 'bb'], 'date':['2022-10-01', '2022-09-01', '2022-10-01', '2022-11-01', '2022-09-01', '2022-11-01']}) I want to expand (copy) this dataframe N times, but each time has to have a different IDs, keeping the original ID and the original combination (e.g., id=1 has code1=[a0,b0], code2=[aa, cc], date=['2022-10-01', 2022-08-01'], the new id replacing id=1 should have the same). For N=1, I can do: df1 = df.loc[df.index.repeat(1)] # repeat 1 time my dataframe, I can also just copy ids = df.id.unique() # Get the original IDs new_ids = df.id.max()+1 # Create new IDs starting from the max ID nids = df.id.nunique() # Get the number of unique IDs new_ids = new_ids + range(0,nids) # Create a list with the new IDs df1['id'] = df1['id'].replace(ids, new_ids) # Replace the old IDs with the new ones df_final = pd.concat(df, df1, axis=0) # Concacatenate For N=2 and larger, I thought of doing a for loop, but I guess there is a better way? Important thing is to keep the combinations for each IDs with code1/code2/date and keep the original IDs. Thank you! A: You can use the key parameter of concat to increment a step based on the max id in the original DataFrame: N = 4 step = df['id'].max() out = pd.concat([df]*N, keys=range(N)) out['id'] += out.index.get_level_values(0)*step out = out.droplevel(0) More simple variant with numpy: import numpy as np N = 4 step = df['id'].max() a = np.repeat(np.arange(N), len(df)) out = pd.concat([df]*N) out['id'] += a*step output: id code1 code2 date 0 1 a0 aa 2022-10-01 1 3 b0 bb 2022-09-01 2 500 b0 cc 2022-10-01 3 53 c0 bb 2022-11-01 4 1 b0 cc 2022-09-01 5 500 a0 bb 2022-11-01 0 501 a0 aa 2022-10-01 # new id starts at 501 1 503 b0 bb 2022-09-01 2 1000 b0 cc 2022-10-01 3 553 c0 bb 2022-11-01 4 501 b0 cc 2022-09-01 5 1000 a0 bb 2022-11-01 0 1001 a0 aa 2022-10-01 # new id starts at 1001 1 1003 b0 bb 2022-09-01 2 1500 b0 cc 2022-10-01 3 1053 c0 bb 2022-11-01 4 1001 b0 cc 2022-09-01 5 1500 a0 bb 2022-11-01 0 1501 a0 aa 2022-10-01 # new id starts at 1501 1 1503 b0 bb 2022-09-01 2 2000 b0 cc 2022-10-01 3 1553 c0 bb 2022-11-01 4 1501 b0 cc 2022-09-01 5 2000 a0 bb 2022-11-01
Copy dataframe n times, assign new IDs, keeping the original
I have a dataframe that looks like this: df = pd.DataFrame({'id':[1,3,500, 53, 1, 500], 'code1':['a0', 'b0', 'b0', 'c0', 'b0', 'a0'], 'code2':['aa', 'bb', 'cc', 'bb', 'cc', 'bb'], 'date':['2022-10-01', '2022-09-01', '2022-10-01', '2022-11-01', '2022-09-01', '2022-11-01']}) I want to expand (copy) this dataframe N times, but each time has to have a different IDs, keeping the original ID and the original combination (e.g., id=1 has code1=[a0,b0], code2=[aa, cc], date=['2022-10-01', 2022-08-01'], the new id replacing id=1 should have the same). For N=1, I can do: df1 = df.loc[df.index.repeat(1)] # repeat 1 time my dataframe, I can also just copy ids = df.id.unique() # Get the original IDs new_ids = df.id.max()+1 # Create new IDs starting from the max ID nids = df.id.nunique() # Get the number of unique IDs new_ids = new_ids + range(0,nids) # Create a list with the new IDs df1['id'] = df1['id'].replace(ids, new_ids) # Replace the old IDs with the new ones df_final = pd.concat(df, df1, axis=0) # Concacatenate For N=2 and larger, I thought of doing a for loop, but I guess there is a better way? Important thing is to keep the combinations for each IDs with code1/code2/date and keep the original IDs. Thank you!
[ "You can use the key parameter of concat to increment a step based on the max id in the original DataFrame:\nN = 4\n\nstep = df['id'].max()\nout = pd.concat([df]*N, keys=range(N))\nout['id'] += out.index.get_level_values(0)*step\nout = out.droplevel(0)\n\nMore simple variant with numpy:\nimport numpy as np\n\nN = 4\n\nstep = df['id'].max()\na = np.repeat(np.arange(N), len(df))\nout = pd.concat([df]*N)\nout['id'] += a*step\n\noutput:\n id code1 code2 date\n0 1 a0 aa 2022-10-01\n1 3 b0 bb 2022-09-01\n2 500 b0 cc 2022-10-01\n3 53 c0 bb 2022-11-01\n4 1 b0 cc 2022-09-01\n5 500 a0 bb 2022-11-01\n0 501 a0 aa 2022-10-01 # new id starts at 501\n1 503 b0 bb 2022-09-01\n2 1000 b0 cc 2022-10-01\n3 553 c0 bb 2022-11-01\n4 501 b0 cc 2022-09-01\n5 1000 a0 bb 2022-11-01\n0 1001 a0 aa 2022-10-01 # new id starts at 1001\n1 1003 b0 bb 2022-09-01\n2 1500 b0 cc 2022-10-01\n3 1053 c0 bb 2022-11-01\n4 1001 b0 cc 2022-09-01\n5 1500 a0 bb 2022-11-01\n0 1501 a0 aa 2022-10-01 # new id starts at 1501\n1 1503 b0 bb 2022-09-01\n2 2000 b0 cc 2022-10-01\n3 1553 c0 bb 2022-11-01\n4 1501 b0 cc 2022-09-01\n5 2000 a0 bb 2022-11-01\n\n" ]
[ 2 ]
[]
[]
[ "dataframe", "pandas", "python", "repeat" ]
stackoverflow_0074614077_dataframe_pandas_python_repeat.txt
Q: Change x-tick intervals when using matplotlib.pyplot I have the following code. I am trying to plot a line plot. However, it is plotting too many x-values on the x-axis. I would like to plot fewer x-axis values (plot one value every few values) so that the x-axis scale is readable. I would be so grateful for a helping hand! plt.figure(figsize=(70,70)) ax = sns.lineplot(data=correcteddf,x=listedvariables[i],y="distance",errorbar ='se',err_style='bars',linewidth=4) ax.set_xlabel('nap_duration_mins',labelpad = 40,fontsize=70,weight='bold') ax.set_ylabel("Wayfinding Distance (z-score)",labelpad = 40,fontsize=70,weight='bold') correcteddf[['nap_duration_mins']] = correcteddf[['nap_duration_mins']].astype(float) ylabels = ['{:,.2f}'.format(x) for x in ax.get_yticks()] xlabels = ['{:,.2f}'.format(x) for x in ax.get_xticks()] ax.set_yticklabels(ylabels,weight='bold',fontsize=60) ax.set_xticklabels(xlabels,weight='bold',fontsize=60,rotation = 30) plt.xticks(rotation = 30,weight='bold',fontsize=60) title = ('nap_duration_mins' + ' ' + 'plot') ax.set_title(title,fontsize=70,pad=40,weight='bold') dir_name = "/Users/macbook/Desktop/" plt.rcParams["savefig.directory"] = os.chdir(os.path.dirname(dir_name)) plt.savefig('nap_duration_mins+' '+'scatterplot') plt.show() A: Here you can see the relevant documentation. Change the line plt.xticks(rotation = 30,weight='bold',fontsize=60) to plt.xticks(ticks = list(range(0,70_000,10_000)),rotation = 30,weight='bold',fontsize=60) and you will have ticks at only every 10 thousand steps. You can change the list to suit your needs.
Change x-tick intervals when using matplotlib.pyplot
I have the following code. I am trying to plot a line plot. However, it is plotting too many x-values on the x-axis. I would like to plot fewer x-axis values (plot one value every few values) so that the x-axis scale is readable. I would be so grateful for a helping hand! plt.figure(figsize=(70,70)) ax = sns.lineplot(data=correcteddf,x=listedvariables[i],y="distance",errorbar ='se',err_style='bars',linewidth=4) ax.set_xlabel('nap_duration_mins',labelpad = 40,fontsize=70,weight='bold') ax.set_ylabel("Wayfinding Distance (z-score)",labelpad = 40,fontsize=70,weight='bold') correcteddf[['nap_duration_mins']] = correcteddf[['nap_duration_mins']].astype(float) ylabels = ['{:,.2f}'.format(x) for x in ax.get_yticks()] xlabels = ['{:,.2f}'.format(x) for x in ax.get_xticks()] ax.set_yticklabels(ylabels,weight='bold',fontsize=60) ax.set_xticklabels(xlabels,weight='bold',fontsize=60,rotation = 30) plt.xticks(rotation = 30,weight='bold',fontsize=60) title = ('nap_duration_mins' + ' ' + 'plot') ax.set_title(title,fontsize=70,pad=40,weight='bold') dir_name = "/Users/macbook/Desktop/" plt.rcParams["savefig.directory"] = os.chdir(os.path.dirname(dir_name)) plt.savefig('nap_duration_mins+' '+'scatterplot') plt.show()
[ "Here you can see the relevant documentation. Change the line\nplt.xticks(rotation = 30,weight='bold',fontsize=60)\n\nto\nplt.xticks(ticks = list(range(0,70_000,10_000)),rotation = 30,weight='bold',fontsize=60)\n\nand you will have ticks at only every 10 thousand steps. You can change the list to suit your needs.\n" ]
[ 1 ]
[]
[]
[ "jupyter_notebook", "pandas", "python" ]
stackoverflow_0074614017_jupyter_notebook_pandas_python.txt
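An alternative sketch, not taken from the answer above: rather than listing tick positions by hand, matplotlib.ticker.MultipleLocator places a tick at every fixed step. The 10,000 step and the dummy data are assumptions chosen to roughly match the range discussed in the answer.

import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator

fig, ax = plt.subplots()
ax.plot(range(0, 70_000, 5_000), range(14))           # dummy data for illustration
ax.xaxis.set_major_locator(MultipleLocator(10_000))   # one tick every 10,000 units
ax.tick_params(axis='x', labelrotation=30)
plt.show()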
Q: How to call a class class vraagmachine: def __init__(self): self.answers=[] self.count=0 def askname(self): self.name=input('What is your name?: ') if self.name=='stop': for i in range(self.count): print(self.answers) def askage(self): self.age=input('How old are you?: ') self.answer=self.name+' is '+self.age+' years old.' if self.age=='stop': for i in range(self.count): print(self.answers) else: self.answers.append(self.answer) How can I call this class so it will execute everything in it? Thanks in advance! I tried doing this: variable=vraagmachine() vaiable.the name of a function inside the class. A: You can't "execute everything in it" in one go, since there are three different functions. You'd do vm = vraagmachine() # instantiate a `vraagmachine`; calls __init__ vm.askname() vm.askage() if that's the order you want to ask things in, or the other way around maybe.
How to call a class
class vraagmachine: def __init__(self): self.answers=[] self.count=0 def askname(self): self.name=input('What is your name?: ') if self.name=='stop': for i in range(self.count): print(self.answers) def askage(self): self.age=input('How old are you?: ') self.answer=self.name+' is '+self.age+' years old.' if self.age=='stop': for i in range(self.count): print(self.answers) else: self.answers.append(self.answer) How can I call this class so it will execute everything in it? Thanks in advance! I tried doing this: variable=vraagmachine() vaiable.the name of a function inside the class.
[ "You can't \"execute everything in it\" in one go, since there are three different functions.\nYou'd do\nvm = vraagmachine() # instantiate a `vraagmachine`; calls __init__\nvm.askname()\nvm.askage()\n\nif that's the order you want to ask things in, or the other way around maybe.\n" ]
[ 0 ]
[]
[]
[ "call", "class", "python" ]
stackoverflow_0074614128_call_class_python.txt
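If the goal is a single call that walks through every prompt in order, one option is a small driver method on the class itself. A sketch; the ask_all name is an assumption, not something from the question, and the stop/printing logic is omitted for brevity.

class vraagmachine:
    def __init__(self):
        self.answers = []

    def askname(self):
        self.name = input('What is your name?: ')

    def askage(self):
        self.age = input('How old are you?: ')
        self.answers.append(self.name + ' is ' + self.age + ' years old.')

    def ask_all(self):          # one call that runs every question
        self.askname()
        self.askage()

vm = vraagmachine()
vm.ask_all()
print(vm.answers)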
Q: How to mock a class with nested properties and autospec? I'm wondering if it's possible to mock a class which contains properties by using patch and autospec? The goal in the example below is to mock (recursively) ClassB. Example: # file: class_c.py class ClassC: def get_default(self) -> list[int]: return [1, 2, 3] def delete(self, name: str): print(f"delete {name}") # ---------------------- # file: class_b.py from class_c import ClassC class ClassB: def __init__(self) -> None: self._ds = ClassC() @property def class_c(self) -> ClassC: return self._ds def from_config(self, cred: str) -> str: return cred # ---------------------- # file: class_a.py from class_b import ClassB class ClassA: def __init__(self): self._client = ClassB() @property def class_b(self) -> ClassB: return self._client def test(self, cred: str) -> str: return cred # ---------------------- # file: test.py import pytest from unittest import mock @mock.patch("class_a.ClassB", autospec=True) def test_class_a(_): class_a = ClassA() with pytest.raises(TypeError): class_a.class_b.from_config() # ✅ raised - missing 1 required positional argument: 'cred' with pytest.raises(TypeError): class_a.class_b.class_c.delete() # <- ❌ Problem - should raise exception since autospec=True The property class_c of the class ClassB is not mocked properly. I would expect TypeError when trying to call delete() without any argument I've tried several things but without success. Any idea? EDIT: The code is just an example, and the test function was just written to demonstrate the expected behaviour. ClassB can be seen as a third-party service which needs to be mocked. EDIT2: Additionally to the accepted answer, I would propose to use PropertyMock for mocking properties: def test_class_a(): class_b_mock = create_autospec(class_a.ClassB) class_c_mock = create_autospec(class_b.ClassC) type(class_b).class_c = PropertyMock(return_value=class_c_mock) with mock.patch("class_a.ClassB", return_value=class_b_mock): class_a_instance = ClassA() with pytest.raises(TypeError): class_a_instance.class_b.from_config() with pytest.raises(TypeError): class_a_instance.class_b.class_c.delete() A: Once you patched the target class, anything you try to access under that class will be mocked with MagicMock (also recursively). Therefore, if you want to keep the specification of that class, then yes, you should use the autospec=true flag. But because you are trying to mock a class within a class accessed by a property, You have to keep the specification of each class you want to test: def test_class_a(): class_b_mock = create_autospec(class_a.ClassB) class_c_mock = create_autospec(class_b.ClassC) class_b_mock.class_c = class_c_mock with mock.patch("class_a.ClassB", return_value=class_b_mock): class_a_instance = ClassA() with pytest.raises(TypeError): class_a_instance.class_b.from_config() with pytest.raises(TypeError): class_a_instance.class_b.class_c.delete()
How to mock a class with nested properties and autospec?
I'm wondering if it's possible to mock a class which contains properties by using patch and autospec? The goal in the example below is to mock (recursively) ClassB. Example: # file: class_c.py class ClassC: def get_default(self) -> list[int]: return [1, 2, 3] def delete(self, name: str): print(f"delete {name}") # ---------------------- # file: class_b.py from class_c import ClassC class ClassB: def __init__(self) -> None: self._ds = ClassC() @property def class_c(self) -> ClassC: return self._ds def from_config(self, cred: str) -> str: return cred # ---------------------- # file: class_a.py from class_b import ClassB class ClassA: def __init__(self): self._client = ClassB() @property def class_b(self) -> ClassB: return self._client def test(self, cred: str) -> str: return cred # ---------------------- # file: test.py import pytest from unittest import mock @mock.patch("class_a.ClassB", autospec=True) def test_class_a(_): class_a = ClassA() with pytest.raises(TypeError): class_a.class_b.from_config() # ✅ raised - missing 1 required positional argument: 'cred' with pytest.raises(TypeError): class_a.class_b.class_c.delete() # <- ❌ Problem - should raise exception since autospec=True The property class_c of the class ClassB is not mocked properly. I would expect TypeError when trying to call delete() without any argument I've tried several things but without success. Any idea? EDIT: The code is just an example, and the test function was just written to demonstrate the expected behaviour. ClassB can be seen as a third-party service which needs to be mocked. EDIT2: Additionally to the accepted answer, I would propose to use PropertyMock for mocking properties: def test_class_a(): class_b_mock = create_autospec(class_a.ClassB) class_c_mock = create_autospec(class_b.ClassC) type(class_b).class_c = PropertyMock(return_value=class_c_mock) with mock.patch("class_a.ClassB", return_value=class_b_mock): class_a_instance = ClassA() with pytest.raises(TypeError): class_a_instance.class_b.from_config() with pytest.raises(TypeError): class_a_instance.class_b.class_c.delete()
[ "Once you patched the target class, anything you try to access under that class will be mocked with MagicMock (also recursively). Therefore, if you want to keep the specification of that class, then yes, you should use the autospec=true flag.\nBut because you are trying to mock a class within a class accessed by a property,\nYou have to keep the specification of each class you want to test:\ndef test_class_a():\n class_b_mock = create_autospec(class_a.ClassB)\n class_c_mock = create_autospec(class_b.ClassC)\n class_b_mock.class_c = class_c_mock\n with mock.patch(\"class_a.ClassB\", return_value=class_b_mock):\n class_a_instance = ClassA()\n with pytest.raises(TypeError):\n class_a_instance.class_b.from_config()\n with pytest.raises(TypeError):\n class_a_instance.class_b.class_c.delete()\n\n" ]
[ 4 ]
[]
[]
[ "mocking", "python", "unit_testing" ]
stackoverflow_0074611520_mocking_python_unit_testing.txt
Q: Why orientation parameter doesn't exists in Slider? Hi have this small code for practice using CustomTkinter, reading the official documentation for a vertical slider i need to write orientation = 'vertical' but when i run the code Py charm says "_tkinter.TclError: unknown option "-orientation", how is possible that orientation isn't a parameter? I can't understand please help me, the code: from tkinter import * import customtkinter def slider(valore): valore = round(valore,1) print(valore) win = Tk() slider = customtkinter.CTkSlider(master=win, from_=0, to=100, command=slider,fg_color='#555555',progress_color='#144870') slider.configure(orientation='vertical') slider.pack(padx=100,pady=100) win.mainloop() A: Comment out line 8. Add this parameter orient=Tk.Vertical. slider = customtkinter.CTkSlider(master=win, from_=0, to=100,orient=Tk.Vertical, command=slider,fg_color='#555555',progress_color='#144870')
Why doesn't the orientation parameter exist in Slider?
Hi have this small code for practice using CustomTkinter, reading the official documentation for a vertical slider i need to write orientation = 'vertical' but when i run the code Py charm says "_tkinter.TclError: unknown option "-orientation", how is possible that orientation isn't a parameter? I can't understand please help me, the code: from tkinter import * import customtkinter def slider(valore): valore = round(valore,1) print(valore) win = Tk() slider = customtkinter.CTkSlider(master=win, from_=0, to=100, command=slider,fg_color='#555555',progress_color='#144870') slider.configure(orientation='vertical') slider.pack(padx=100,pady=100) win.mainloop()
[ "Comment out line 8.\nAdd this parameter orient=Tk.Vertical.\nslider = customtkinter.CTkSlider(master=win, from_=0, to=100,orient=Tk.Vertical, command=slider,fg_color='#555555',progress_color='#144870')\n\n" ]
[ 1 ]
[]
[]
[ "python", "slider", "tkinter" ]
stackoverflow_0074613942_python_slider_tkinter.txt
Q: how to show avaliable sizes of clothes on the form? Django I'm developing online clothing store on Django. Now I faced the issue: I have a form which helps user to add to his cart some clothes. I need to show which sizes of this clothes are avaliable. To do this, I need to refer to the database. But how to do it from the form? models.py: from django.db import models from django.urls import reverse from multiselectfield import MultiSelectField class Category(models.Model): name = models.CharField(max_length=200, db_index=True) slug = models.SlugField(max_length=200, db_index=True, unique=True) class Meta: ordering = ('name',) def __str__(self): return self.name def get_absolute_url(self): return reverse('shop:product_list_by_category', args=[self.slug]) class Product(models.Model): category = models.ForeignKey(Category, related_name='products', on_delete=models.CASCADE) name = models.CharField(max_length=200, db_index=True) slug = models.SlugField(max_length=200, db_index=True) image = models.FileField(blank=True, upload_to=get_upload_path) SIZE_CHOICES = (('XXS', 'XXS'), ('XS', 'XS'), ('S', 'S'), ('M', 'M'), ('XL', 'XL'), ('XXL', 'XXL')) sizes = MultiSelectField(choices=SIZE_CHOICES, max_choices=6, max_length=17) description = models.TextField(blank=True) price = models.DecimalField(max_digits=10, decimal_places=2) stock = models.PositiveIntegerField() available = models.BooleanField(default=True) created = models.DateTimeField(auto_now_add=True) updated = models.DateTimeField(auto_now=True) class Meta: ordering = ('name',) index_together = (('id', 'slug'),) def __str__(self): return self.name def get_absolute_url(self): return reverse('shop:product_detail', args=[self.id, self.slug]) my form: forms.py from django import forms PRODUCT_QUANTITY_CHOICES = [(i, str(i)) for i in range(1, 21)] class CartAddProductForm(forms.Form): quantity = forms.TypedChoiceField(choices=PRODUCT_QUANTITY_CHOICES, coerce=int) update = forms.BooleanField(required=False, initial=False, widget=forms.HiddenInput) # size = ?? the view which uses this form: views.py def product_detail(request: WSGIRequest, product_id: int, product_slug: str) -> HttpResponse: product = get_object_or_404(Product, id=product_id, slug=product_slug, available=True) cart_product_form = CartAddProductForm() return render(request, 'shop/product/detail.html', {'product': product, 'cart_product_form': cart_product_form}) shop/product/detail.html: {% extends "shop/base.html" %} <head> <meta charset="UTF-8"> <title>Detail</title> </head> <body> {% block content %} <br> <b>{{ product.name }} </b> <br> <i>{{ product.description }} </i> <br> {{ product.price }} <br> <img src="{{ product.image.url }}" width="300" height="500"> <br> Available sizes: <br> {{ product.sizes }}<br> <form action="{% url "cart:add_to_cart" product.id %}" method="post"> {{ cart_product_form }} {% csrf_token %} <input type="submit" value="Add to cart"> </form> {% endblock %} </body> I tried to create a function which gets avaliable sizes and send to the form: forms.py def get_sizes(product: Product): return product.sizes But to do this I need to refer to the Product from the form, I don't know how to do it. A: How about dividing each clothes by size(with quantity)? Clothes with different size can be treated as different product. product ID name image description price ... 1 jean a.jpg good jean 12345 ... size ID product_id size quantity ... 1 1 xxxl 12345 ... 2 1 xxl 1234 ... 3 1 xl 123 ... If quantity is greater than 0, that size of clothes is available. 
A: my solution is: forms.py from django import forms from django.forms import ModelForm from shop.models import Product class CartAddProductForm(ModelForm): class Meta: model = Product fields = ['sizes'] def __init__(self, pk, *args, **kwargs): super(CartAddProductForm, self).__init__(*args, **kwargs) sizes = tuple(Product.objects.get(pk=pk).sizes) sizes_list = [] for item in sizes: sizes_list.append((item, item)) self.fields['sizes'] = forms.ChoiceField(choices=sizes_list) when I create the form, I pass the pk: views.py product = get_object_or_404(Product, id=product_id, slug=product_slug, available=True) pk = product.pk cart_product_form = CartAddProductForm(instance=product, pk=pk)
how to show available sizes of clothes on the form? Django
I'm developing online clothing store on Django. Now I faced the issue: I have a form which helps user to add to his cart some clothes. I need to show which sizes of this clothes are avaliable. To do this, I need to refer to the database. But how to do it from the form? models.py: from django.db import models from django.urls import reverse from multiselectfield import MultiSelectField class Category(models.Model): name = models.CharField(max_length=200, db_index=True) slug = models.SlugField(max_length=200, db_index=True, unique=True) class Meta: ordering = ('name',) def __str__(self): return self.name def get_absolute_url(self): return reverse('shop:product_list_by_category', args=[self.slug]) class Product(models.Model): category = models.ForeignKey(Category, related_name='products', on_delete=models.CASCADE) name = models.CharField(max_length=200, db_index=True) slug = models.SlugField(max_length=200, db_index=True) image = models.FileField(blank=True, upload_to=get_upload_path) SIZE_CHOICES = (('XXS', 'XXS'), ('XS', 'XS'), ('S', 'S'), ('M', 'M'), ('XL', 'XL'), ('XXL', 'XXL')) sizes = MultiSelectField(choices=SIZE_CHOICES, max_choices=6, max_length=17) description = models.TextField(blank=True) price = models.DecimalField(max_digits=10, decimal_places=2) stock = models.PositiveIntegerField() available = models.BooleanField(default=True) created = models.DateTimeField(auto_now_add=True) updated = models.DateTimeField(auto_now=True) class Meta: ordering = ('name',) index_together = (('id', 'slug'),) def __str__(self): return self.name def get_absolute_url(self): return reverse('shop:product_detail', args=[self.id, self.slug]) my form: forms.py from django import forms PRODUCT_QUANTITY_CHOICES = [(i, str(i)) for i in range(1, 21)] class CartAddProductForm(forms.Form): quantity = forms.TypedChoiceField(choices=PRODUCT_QUANTITY_CHOICES, coerce=int) update = forms.BooleanField(required=False, initial=False, widget=forms.HiddenInput) # size = ?? the view which uses this form: views.py def product_detail(request: WSGIRequest, product_id: int, product_slug: str) -> HttpResponse: product = get_object_or_404(Product, id=product_id, slug=product_slug, available=True) cart_product_form = CartAddProductForm() return render(request, 'shop/product/detail.html', {'product': product, 'cart_product_form': cart_product_form}) shop/product/detail.html: {% extends "shop/base.html" %} <head> <meta charset="UTF-8"> <title>Detail</title> </head> <body> {% block content %} <br> <b>{{ product.name }} </b> <br> <i>{{ product.description }} </i> <br> {{ product.price }} <br> <img src="{{ product.image.url }}" width="300" height="500"> <br> Available sizes: <br> {{ product.sizes }}<br> <form action="{% url "cart:add_to_cart" product.id %}" method="post"> {{ cart_product_form }} {% csrf_token %} <input type="submit" value="Add to cart"> </form> {% endblock %} </body> I tried to create a function which gets avaliable sizes and send to the form: forms.py def get_sizes(product: Product): return product.sizes But to do this I need to refer to the Product from the form, I don't know how to do it.
[ "How about dividing each clothes by size(with quantity)? Clothes with different size can be treated as different product.\nproduct\n\n\n\n\nID\nname\nimage\ndescription\nprice\n...\n\n\n\n\n1\njean\na.jpg\ngood jean\n12345\n...\n\n\n\n\nsize\n\n\n\n\nID\nproduct_id\nsize\nquantity\n...\n\n\n\n\n1\n1\nxxxl\n12345\n...\n\n\n2\n1\nxxl\n1234\n...\n\n\n3\n1\nxl\n123\n...\n\n\n\n\nIf quantity is greater than 0, that size of clothes is available.\n", "my solution is:\nforms.py\nfrom django import forms\nfrom django.forms import ModelForm\nfrom shop.models import Product\n\n\nclass CartAddProductForm(ModelForm):\n class Meta:\n model = Product\n fields = ['sizes']\n\n def __init__(self, pk, *args, **kwargs):\n super(CartAddProductForm, self).__init__(*args, **kwargs)\n sizes = tuple(Product.objects.get(pk=pk).sizes)\n sizes_list = []\n for item in sizes:\n sizes_list.append((item, item))\n self.fields['sizes'] = forms.ChoiceField(choices=sizes_list)\n\nwhen I create the form, I pass the pk:\nviews.py\nproduct = get_object_or_404(Product,\n id=product_id,\n slug=product_slug,\n available=True)\npk = product.pk\ncart_product_form = CartAddProductForm(instance=product, pk=pk)\n\n" ]
[ 0, 0 ]
[]
[]
[ "django", "django_4.1", "python" ]
stackoverflow_0074600909_django_django_4.1_python.txt
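A hedged sketch of the separate size table proposed in the first answer of the entry above. The model and field names here are assumptions, not taken from the asker's project: each available size becomes its own row with a stock quantity, and the sizes still in stock are a simple query.

from django.db import models

class Product(models.Model):
    name = models.CharField(max_length=200)
    price = models.DecimalField(max_digits=10, decimal_places=2)

class ProductSize(models.Model):
    product = models.ForeignKey(Product, related_name='sizes', on_delete=models.CASCADE)
    size = models.CharField(max_length=4)             # e.g. 'XS', 'M', 'XXL'
    quantity = models.PositiveIntegerField(default=0)

# Sizes that can actually be ordered for one product:
# available = product.sizes.filter(quantity__gt=0)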
Q: Python Regular Expression: re.sub to replace matches I am trying to analyze an earnings call using python regular expression. I want to delete unnecessary lines which only contain the name and position of the person, who is speaking next. This is an excerpt of the text I want to analyze: "Questions and Answers\nOperator [1]\n\n Shannon Siemsen Cross, Cross Research LLC - Co-Founder, Principal & Analyst [2]\n I hope everyone is well. Tim, you talked about seeing some improvement in the second half of April. So I was wondering if you could just talk maybe a bit more on the segment and geographic basis what you're seeing in the various regions that you're selling in and what you're hearing from your customers. And then I have a follow-up.\n Timothy D. Cook, Apple Inc. - CEO & Director [3]\n ..." At the end of each line that I want to delete, you have [some number]. So I used the following line of code to get these lines: name_lines = re.findall('.*[\d]]', text) This works and gives me the following list: ['Operator [1]', ' Shannon Siemsen Cross, Cross Research LLC - Co-Founder, Principal & Analyst [2]', ' Timothy D. Cook, Apple Inc. - CEO & Director [3]'] So, now in the next step I want to replace this strings in the text using the following line of code: for i in range(0,len(name_lines)): text = re.sub(name_lines[i], '', text) But this does not work. Also if I just try to replace 1 instead of using the loop it does not work, but I have no clue why. Also if I try now to use re.findall and search for the lines I obtained from the first line of code I don`t get a match. A: Try to use re.sub to replace the match: import re text = """\ Questions and Answers Operator [1] Shannon Siemsen Cross, Cross Research LLC - Co-Founder, Principal & Analyst [2] I hope everyone is well. Tim, you talked about seeing some improvement in the second half of April. So I was wondering if you could just talk maybe a bit more on the segment and geographic basis what you're seeing in the various regions that you're selling in and what you're hearing from your customers. And then I have a follow-up. Timothy D. Cook, Apple Inc. - CEO & Director [3]""" text = re.sub(r".*\d]", "", text) print(text) Prints: Questions and Answers I hope everyone is well. Tim, you talked about seeing some improvement in the second half of April. So I was wondering if you could just talk maybe a bit more on the segment and geographic basis what you're seeing in the various regions that you're selling in and what you're hearing from your customers. And then I have a follow-up. A: The first argument to re.sub is treated as a regular expression, so the square brackets get a special meaning and don't match literally. You don't need a regular expression for this replacement at all though (and you also don't need the loop counter i): for name_line in name_lines: text = text.replace(name_line, '')
Python Regular Expression: re.sub to replace matches
I am trying to analyze an earnings call using python regular expression. I want to delete unnecessary lines which only contain the name and position of the person, who is speaking next. This is an excerpt of the text I want to analyze: "Questions and Answers\nOperator [1]\n\n Shannon Siemsen Cross, Cross Research LLC - Co-Founder, Principal & Analyst [2]\n I hope everyone is well. Tim, you talked about seeing some improvement in the second half of April. So I was wondering if you could just talk maybe a bit more on the segment and geographic basis what you're seeing in the various regions that you're selling in and what you're hearing from your customers. And then I have a follow-up.\n Timothy D. Cook, Apple Inc. - CEO & Director [3]\n ..." At the end of each line that I want to delete, you have [some number]. So I used the following line of code to get these lines: name_lines = re.findall('.*[\d]]', text) This works and gives me the following list: ['Operator [1]', ' Shannon Siemsen Cross, Cross Research LLC - Co-Founder, Principal & Analyst [2]', ' Timothy D. Cook, Apple Inc. - CEO & Director [3]'] So, now in the next step I want to replace this strings in the text using the following line of code: for i in range(0,len(name_lines)): text = re.sub(name_lines[i], '', text) But this does not work. Also if I just try to replace 1 instead of using the loop it does not work, but I have no clue why. Also if I try now to use re.findall and search for the lines I obtained from the first line of code I don`t get a match.
[ "Try to use re.sub to replace the match:\nimport re\n\ntext = \"\"\"\\\nQuestions and Answers\nOperator [1]\n\nShannon Siemsen Cross, Cross Research LLC - Co-Founder, Principal & Analyst [2]\nI hope everyone is well. Tim, you talked about seeing some improvement in the second half of April. So I was wondering if you could just talk maybe a bit more on the segment and geographic basis what you're seeing in the various regions that you're selling in and what you're hearing from your customers. And then I have a follow-up.\nTimothy D. Cook, Apple Inc. - CEO & Director [3]\"\"\"\n\ntext = re.sub(r\".*\\d]\", \"\", text)\nprint(text)\n\nPrints:\nQuestions and Answers\n\n\n\nI hope everyone is well. Tim, you talked about seeing some improvement in the second half of April. So I was wondering if you could just talk maybe a bit more on the segment and geographic basis what you're seeing in the various regions that you're selling in and what you're hearing from your customers. And then I have a follow-up.\n\n", "The first argument to re.sub is treated as a regular expression, so the square brackets get a special meaning and don't match literally.\nYou don't need a regular expression for this replacement at all though (and you also don't need the loop counter i):\nfor name_line in name_lines:\n text = text.replace(name_line, '')\n\n" ]
[ 1, 0 ]
[]
[]
[ "python", "python_re", "regex" ]
stackoverflow_0074613853_python_python_re_regex.txt
Q: create a list of lists with a checkerboard pattern I would like to change the values ​​of this list by alternating the 0 and 1 values ​​in a checkerboard pattern. table = 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 i tried: for i in range(len(table)): for j in range(0, len(table[i]), 2): # ho definito uno step nella funzione range table[i][j] = 0 but for each list the count starts again and the result is: 0 1 0 1 0 0 1 0 1 0 0 1 0 1 0 0 1 0 1 0 0 1 0 1 0 my question is how can I change the loop to form a checkerboard pattern. I expect the result to be like: 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 A: for i in range(len(table)): for j in range(len(table[i])): if (i+j)%2 == 0: table[i][j] = 0 output: [[0, 1, 0, 1, 0], [1, 0, 1, 0, 1], [0, 1, 0, 1, 0], [1, 0, 1, 0, 1], [0, 1, 0, 1, 0]]
create a list of lists with a checkerboard pattern
I would like to change the values ​​of this list by alternating the 0 and 1 values ​​in a checkerboard pattern. table = 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 i tried: for i in range(len(table)): for j in range(0, len(table[i]), 2): # ho definito uno step nella funzione range table[i][j] = 0 but for each list the count starts again and the result is: 0 1 0 1 0 0 1 0 1 0 0 1 0 1 0 0 1 0 1 0 0 1 0 1 0 my question is how can I change the loop to form a checkerboard pattern. I expect the result to be like: 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0
[ "for i in range(len(table)):\n for j in range(len(table[i])):\n if (i+j)%2 == 0:\n table[i][j] = 0\n\noutput:\n [[0, 1, 0, 1, 0],\n [1, 0, 1, 0, 1],\n [0, 1, 0, 1, 0],\n [1, 0, 1, 0, 1],\n [0, 1, 0, 1, 0]]\n\n" ]
[ 2 ]
[ "There doesn't appear to be any reliance on the original values in the list. Therefore it might be better to implement something that creates a list in the required format like this:\ndef checkboard(rows, columns):\n e = 0\n result = []\n for _ in range(rows):\n c = []\n for _ in range(columns):\n c.append(e)\n e ^= 1\n result.append(c)\n return result\n \nprint(checkboard(5, 5))\nprint(checkboard(2, 3))\nprint(checkboard(4, 4))\n\nOutput:\n[[0, 1, 0, 1, 0], [1, 0, 1, 0, 1], [0, 1, 0, 1, 0], [1, 0, 1, 0, 1], [0, 1, 0, 1, 0]]\n[[0, 1, 0], [1, 0, 1]]\n[[0, 1, 0, 1], [0, 1, 0, 1], [0, 1, 0, 1], [0, 1, 0, 1]]\n\n" ]
[ -1 ]
[ "python" ]
stackoverflow_0074613553_python.txt
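Because the pattern depends only on row and column parity, the board can also be built directly instead of overwriting an existing table of ones. A sketch assuming the 5x5 size from the question:

rows = cols = 5
table = [[(i + j) % 2 for j in range(cols)] for i in range(rows)]
# [[0, 1, 0, 1, 0], [1, 0, 1, 0, 1], [0, 1, 0, 1, 0], [1, 0, 1, 0, 1], [0, 1, 0, 1, 0]]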
Q: Edit an object with unique field: UNIQUE constraint failed: blog_post.title I am trying to update a record that has the title field as unique. when i edit any other field other than the title i get this error: UNIQUE constraint failed: blog_post.title, but when i edit the title, a new object is created. I have looked up examples and work arounds and couldn't fine a suitable approach to resolving it. so how do i update the record without this error? My database class BlogPost(db.Model): id = db.Column(db.Integer, primary_key=True) title = db.Column(db.String(250), unique=True, nullable=False) subtitle = db.Column(db.String(250), nullable=False) date = db.Column(db.String(250), nullable=False) body = db.Column(db.Text, nullable=False) author = db.Column(db.String(250), nullable=False) img_url = db.Column(db.String(250), nullable=False) my ulr path @app.route("/edit-post/<int:post_id>", methods=['GET','POST']) def edit_post(post_id): post_edit = BlogPost.query.get_or_404(post_id) edit_form = CreatePostForm(request.form, obj=post_edit) if edit_form.validate_on_submit(): edit_form.populate_obj(post_edit) db.session.commit() return redirect(url_for("show_post", index=post_edit.id)) return render_template("make-post.html", form=edit_form, is_edit=True) A: I think because you're creating a new instance instead of actually updating an existing record, after fetching your record via its id start changing the field values needed and then invoke the save method. post_edit.body="Example" post_edit.save()
Edit an object with unique field: UNIQUE constraint failed: blog_post.title
I am trying to update a record that has the title field as unique. when i edit any other field other than the title i get this error: UNIQUE constraint failed: blog_post.title, but when i edit the title, a new object is created. I have looked up examples and work arounds and couldn't fine a suitable approach to resolving it. so how do i update the record without this error? My database class BlogPost(db.Model): id = db.Column(db.Integer, primary_key=True) title = db.Column(db.String(250), unique=True, nullable=False) subtitle = db.Column(db.String(250), nullable=False) date = db.Column(db.String(250), nullable=False) body = db.Column(db.Text, nullable=False) author = db.Column(db.String(250), nullable=False) img_url = db.Column(db.String(250), nullable=False) my ulr path @app.route("/edit-post/<int:post_id>", methods=['GET','POST']) def edit_post(post_id): post_edit = BlogPost.query.get_or_404(post_id) edit_form = CreatePostForm(request.form, obj=post_edit) if edit_form.validate_on_submit(): edit_form.populate_obj(post_edit) db.session.commit() return redirect(url_for("show_post", index=post_edit.id)) return render_template("make-post.html", form=edit_form, is_edit=True)
[ "I think because you're creating a new instance instead of actually updating an existing record, after fetching your record via its id start changing the field values needed and then invoke the save method.\npost_edit.body=\"Example\"\npost_edit.save()\n\n" ]
[ 0 ]
[]
[]
[ "flask", "flask_sqlalchemy", "flask_wtforms", "python" ]
stackoverflow_0074613932_flask_flask_sqlalchemy_flask_wtforms_python.txt
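The snippet in the answer above uses a Django-style .save(); with the asker's Flask-SQLAlchemy setup, changes made to a tracked instance are persisted by committing the session instead. A hedged sketch along those lines, reusing names from the question, assuming the form exposes the same field names as the model, and only touching the unique title column when it actually changed:

@app.route("/edit-post/<int:post_id>", methods=['GET', 'POST'])
def edit_post(post_id):
    post_edit = BlogPost.query.get_or_404(post_id)
    edit_form = CreatePostForm(request.form, obj=post_edit)
    if edit_form.validate_on_submit():
        post_edit.subtitle = edit_form.subtitle.data       # assign fields directly
        post_edit.body = edit_form.body.data
        if edit_form.title.data != post_edit.title:        # skip the unique column
            post_edit.title = edit_form.title.data         # unless it really changed
        db.session.commit()                                # no .save(): commit the session
        return redirect(url_for("show_post", index=post_edit.id))
    return render_template("make-post.html", form=edit_form, is_edit=True)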
Q: Edge Detection for high resolution pictures I am trying to locate three objects in my image and crop the sherd out. Any way that I can detect the edges better? This is the code I use to detect objects. def getEdgedImg(img): kernel = np.ones((3,3), np.uint8) eroded = cv2.erode(img, kernel) blur = cv2.medianBlur(eroded, 3) med_val = np.median(eroded) lower = int(max(0, 0.5*med_val)) upper = int(min(255, 1.3*med_val)) edged = cv2.Canny(blur, lower, upper) return edged edged = getEdgedImg(img) _, contours= cv2.findContours(edged ,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE for cnt in contours: if cv2.contourArea(colour_cnts[i]) > 400: x, y, w, h = cv2.boundingRect(colour_cnts[i]) cv2.rectangle(img2, (x, y), (x+w, y+h), (0, 255, 0), 2) cv2.imshow('hi', img) cv2.waitKey(0) I am currently doing image processing in my raw images. I have been looking for a few methods to improve the results but still, it doesn't work very well for some photos. edge detected: A: Had a shot at it, but as expected the weak background contrast is giving trouble, as does the directed lighting. Just checking the stone right now, but the script should give you the tools to find the two reference cards as well. If you want to show the intermediate images, see the comments in the script. Do you have the original image in another format then JPG by chance ? The color compression in the file is really not helping with extraction. import cv2 # get image img = cv2.imread("<YouPathHere>") # extract stone shadow in RGB chB, chG, chR = cv2.split(img) threshShadow = 48 imgChanneldiff = chR-chB imgshadow = cv2.threshold(imgChanneldiff,threshShadow,255,cv2.THRESH_BINARY)[1] #cv2.namedWindow("imgshadow", cv2.WINDOW_NORMAL) #cv2.imshow("imgshadow", imgshadow) # extract stone together with shadow in HSV imgHSV = cv2.cvtColor(img,cv2.COLOR_BGR2HSV) threshHue = 10 chH, chS, chV = cv2.split(imgHSV) imgbin = cv2.threshold(chH,threshHue,255,cv2.THRESH_BINARY)[1] #cv2.namedWindow("imgbin", cv2.WINDOW_NORMAL) #cv2.imshow("imgbin", imgbin) imgResultMask = imgbin - imgshadow MorphKernelSize = 25; imgResultMask = cv2.erode(imgResultMask, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, [MorphKernelSize,MorphKernelSize])) imgResultMask = cv2.dilate(imgResultMask, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, [MorphKernelSize,MorphKernelSize])) cv2.namedWindow("imgResultMask", cv2.WINDOW_NORMAL) cv2.imshow("imgResultMask", imgResultMask) contours = cv2.findContours(imgResultMask,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)[0] cv2.drawContours(img, contours, -1, (255,125,0), 3) for c in contours: cv2.rectangle(img, cv2.boundingRect(c), (0,125,255)) cv2.namedWindow("img", cv2.WINDOW_NORMAL) cv2.imshow("img", img) cv2.waitKey(0) cv2.destroyAllWindows()
Edge Detection for high resolution pictures
I am trying to locate three objects in my image and crop the sherd out. Any way that I can detect the edges better? This is the code I use to detect objects. def getEdgedImg(img): kernel = np.ones((3,3), np.uint8) eroded = cv2.erode(img, kernel) blur = cv2.medianBlur(eroded, 3) med_val = np.median(eroded) lower = int(max(0, 0.5*med_val)) upper = int(min(255, 1.3*med_val)) edged = cv2.Canny(blur, lower, upper) return edged edged = getEdgedImg(img) _, contours= cv2.findContours(edged ,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE for cnt in contours: if cv2.contourArea(colour_cnts[i]) > 400: x, y, w, h = cv2.boundingRect(colour_cnts[i]) cv2.rectangle(img2, (x, y), (x+w, y+h), (0, 255, 0), 2) cv2.imshow('hi', img) cv2.waitKey(0) I am currently doing image processing in my raw images. I have been looking for a few methods to improve the results but still, it doesn't work very well for some photos. edge detected:
[ "Had a shot at it, but as expected the weak background contrast is giving trouble, as does the directed lighting. Just checking the stone right now, but the script should give you the tools to find the two reference cards as well. If you want to show the intermediate images, see the comments in the script.\nDo you have the original image in another format then JPG by chance ? The color compression in the file is really not helping with extraction.\n\nimport cv2\n\n# get image\nimg = cv2.imread(\"<YouPathHere>\")\n\n# extract stone shadow in RGB\nchB, chG, chR = cv2.split(img)\n\nthreshShadow = 48\nimgChanneldiff = chR-chB\nimgshadow = cv2.threshold(imgChanneldiff,threshShadow,255,cv2.THRESH_BINARY)[1]\n#cv2.namedWindow(\"imgshadow\", cv2.WINDOW_NORMAL)\n#cv2.imshow(\"imgshadow\", imgshadow)\n\n# extract stone together with shadow in HSV\nimgHSV = cv2.cvtColor(img,cv2.COLOR_BGR2HSV)\nthreshHue = 10\nchH, chS, chV = cv2.split(imgHSV)\nimgbin = cv2.threshold(chH,threshHue,255,cv2.THRESH_BINARY)[1]\n#cv2.namedWindow(\"imgbin\", cv2.WINDOW_NORMAL)\n#cv2.imshow(\"imgbin\", imgbin)\n\n\nimgResultMask = imgbin - imgshadow\nMorphKernelSize = 25;\nimgResultMask = cv2.erode(imgResultMask, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, [MorphKernelSize,MorphKernelSize]))\nimgResultMask = cv2.dilate(imgResultMask, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, [MorphKernelSize,MorphKernelSize]))\ncv2.namedWindow(\"imgResultMask\", cv2.WINDOW_NORMAL)\ncv2.imshow(\"imgResultMask\", imgResultMask)\n\ncontours = cv2.findContours(imgResultMask,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)[0]\ncv2.drawContours(img, contours, -1, (255,125,0), 3)\n\nfor c in contours:\n cv2.rectangle(img, cv2.boundingRect(c), (0,125,255))\n\ncv2.namedWindow(\"img\", cv2.WINDOW_NORMAL)\ncv2.imshow(\"img\", img)\n\ncv2.waitKey(0)\ncv2.destroyAllWindows()\n\n" ]
[ 2 ]
[]
[]
[ "computer_vision", "opencv", "python" ]
stackoverflow_0074612527_computer_vision_opencv_python.txt
Q: retrieve xpath or other element identifier from a python program I am trying to do something i can't find any help on. I want to be able to locate the xpath or other 'address' information in of a particular element for later use by selenium. I have text for the element and can find it using the selenium By.LINK.TEXT methodology. However, i am writing an application where speed is critical so i want to pre-find the element, store the xpath (for later use) and then use the By.XPATH methodology. In general finding an element using the BY.text construction takes .5 seconds whereas the xpath lookup takes on 10 - 20% of that time. I tried the code below but i get an error on getpath (WebElement object has no attribute getpath) Thanks for any help temp = br.find_element(By.LINK_TEXT, (str(day_to_book))) print(temp.getpath()) A: The Selenium WebElement object received by driver.find_element(ByLocator) is already a reference to the actual physical web element on the page. In other words, the WebElement object is an address of the actual web element you asking about. There is no way to get a By locator of an already found WebElement So, in your particular example temp = br.find_element(By.LINK_TEXT, (str(day_to_book))) the temp is an address of the element you can keep for future use (until the page is changed / refreshed)
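A small sketch of what the answer describes: do the slow text lookup once, keep the WebElement reference, and only derive a faster locator if one is really needed later (br and day_to_book are the asker's variables; the id-based fallback is just an illustration and assumes the element has an id attribute):

from selenium.webdriver.common.by import By

day_link = br.find_element(By.LINK_TEXT, str(day_to_book))   # slow lookup, done once
# ... later, as long as the page has not been refreshed or re-rendered:
day_link.click()                                             # no second search needed

# Optional: derive a reusable locator from a stable attribute such as id
elem_id = br.execute_script("return arguments[0].id;", day_link)
if elem_id:
    fast_locator = (By.ID, elem_id)   # later: br.find_element(*fast_locator)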
retrieve xpath or other element identifier from a python program
I am trying to do something I can't find any help on. I want to locate the xpath or other 'address' information of a particular element for later use by selenium. I have text for the element and can find it using the selenium By.LINK_TEXT methodology. However, I am writing an application where speed is critical, so I want to pre-find the element, store the xpath for later use and then use the By.XPATH methodology. In general, finding an element using the By.LINK_TEXT construction takes .5 seconds whereas the xpath lookup takes only 10 - 20% of that time. I tried the code below but I get an error on getpath (WebElement object has no attribute getpath). Thanks for any help temp = br.find_element(By.LINK_TEXT, (str(day_to_book))) print(temp.getpath())
[ "\nThe Selenium WebElement object received by driver.find_element(ByLocator) is already a reference to the actual physical web element on the page. In other words, the WebElement object is an address of the actual web element you asking about.\nThere is no way to get a By locator of an already found WebElement\n\nSo, in your particular example temp = br.find_element(By.LINK_TEXT, (str(day_to_book))) the temp is an address of the element you can keep for future use (until the page is changed / refreshed)\n" ]
[ 0 ]
[]
[]
[ "python", "selenium", "xpath" ]
stackoverflow_0074614066_python_selenium_xpath.txt
Q: How to reshape a pandas data frame which has duplicate columns to a required format? I have a list of elements extracted from a xml file, they are passed to a pandas dataframe and assigned columns as below. #dataframe created with a list of lists df = pd.DataFrame([ ['2201 W WILLOW'], ['2201 W WILLOW'], ['ENID'], ['ENID, OK 73073'], ['73073'], ['2201 W WILLOW'], ['2201 W WILLOW'], ['ENID'], ['ENID, OK 73073'], ['73073'],['12345678']]).T # column cols= ['AddressLine1', 'AddressLine123', 'City', 'CityStateZip', 'PostalCode', 'AddressLine1', 'AddressLine123', 'City', 'CityStateZip', 'PostalCode','SSN'] # assign columns to pandas data frame df.columns = cols Expected output after reshaping would be: A: You can use a MultiIndex: (df.set_axis(pd.MultiIndex .from_arrays([df.columns, df.groupby(df.columns, axis=1) .cumcount()]), axis=1) .loc[0].unstack().add_prefix('value_') ) Output: value_0 value_1 AddressLine1 2201 W WILLOW 2201 W WILLOW AddressLine123 2201 W WILLOW 2201 W WILLOW City ENID ENID CityStateZip ENID, OK 73073 ENID, OK 73073 PostalCode 73073 73073 SSN 12345678 NaN If you have several rows in the input and want to keep them: (df.set_axis(pd.MultiIndex .from_arrays([df.columns, df.groupby(df.columns, axis=1) .cumcount()]), axis=1) .stack(0).add_prefix('value_') ) Output: value_0 value_1 0 AddressLine1 2201 W WILLOW 2201 W WILLOW AddressLine123 2201 W WILLOW 2201 W WILLOW City ENID ENID CityStateZip ENID, OK 73073 ENID, OK 73073 PostalCode 73073 73073 SSN 12345678 NaN
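An alternative sketch of the same reshape, transposing so the duplicated column names become rows and then pivoting on a per-name counter (assumes the df built in the question):

long = df.T.reset_index()
long.columns = ['field', 'value']
long['occurrence'] = long.groupby('field').cumcount()   # 0, 1, ... per repeated column name

wide = (long.pivot(index='field', columns='occurrence', values='value')
            .add_prefix('value_'))
print(wide)   # one row per field; value_1 is NaN for columns such as SSN that appear once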
How to reshape a pandas data frame which has duplicate columns to a required format?
I have a list of elements extracted from a xml file, they are passed to a pandas dataframe and assigned columns as below. #dataframe created with a list of lists df = pd.DataFrame([ ['2201 W WILLOW'], ['2201 W WILLOW'], ['ENID'], ['ENID, OK 73073'], ['73073'], ['2201 W WILLOW'], ['2201 W WILLOW'], ['ENID'], ['ENID, OK 73073'], ['73073'],['12345678']]).T # column cols= ['AddressLine1', 'AddressLine123', 'City', 'CityStateZip', 'PostalCode', 'AddressLine1', 'AddressLine123', 'City', 'CityStateZip', 'PostalCode','SSN'] # assign columns to pandas data frame df.columns = cols Expected output after reshaping would be:
[ "You can use a MultiIndex:\n(df.set_axis(pd.MultiIndex\n .from_arrays([df.columns,\n df.groupby(df.columns, axis=1)\n .cumcount()]),\n axis=1)\n .loc[0].unstack().add_prefix('value_')\n)\n\nOutput:\n value_0 value_1\nAddressLine1 2201 W WILLOW 2201 W WILLOW\nAddressLine123 2201 W WILLOW 2201 W WILLOW\nCity ENID ENID\nCityStateZip ENID, OK 73073 ENID, OK 73073\nPostalCode 73073 73073\nSSN 12345678 NaN\n\nIf you have several rows in the input and want to keep them:\n(df.set_axis(pd.MultiIndex\n .from_arrays([df.columns,\n df.groupby(df.columns, axis=1)\n .cumcount()]),\n axis=1)\n .stack(0).add_prefix('value_')\n)\n\nOutput:\n value_0 value_1\n0 AddressLine1 2201 W WILLOW 2201 W WILLOW\n AddressLine123 2201 W WILLOW 2201 W WILLOW\n City ENID ENID\n CityStateZip ENID, OK 73073 ENID, OK 73073\n PostalCode 73073 73073\n SSN 12345678 NaN\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074614298_dataframe_pandas_python.txt
Q: Windows - Use default python rather than Anaconda installation my problem is before I installed Anaconda, my python with python and py commands shows the same versions. After I installed Anaconda, my python version is using Anaconda installation. How to prevent this, because I don't want to use Anaconda python version on my Windows. I already put my python PATH on the top When I check the PATH, there's no anaconda PATH anywhere when I done my installation My python version will use the default python version, not the Anaconda's version A: You need to change the default opener for the .py files on your computer. try to right-click on the .py file, select "open with" and look for python. You can always use cmd and run the following: python <path_to_file> or python3 <path_to_file> It depended on your python version (as long as you set your python path in windows Path)
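A quick way to see which interpreter a given command actually resolves to is to ask Python itself; running this once via python and once via py and comparing the paths shows whether the Anaconda copy is being picked up (purely a diagnostic sketch):

import sys

print(sys.executable)   # full path of the interpreter that is really running
print(sys.version)      # its version string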
Windows - Use default python rather than Anaconda installation
My problem is that before I installed Anaconda, the python and py commands showed the same version. After I installed Anaconda, the python command uses the Anaconda installation. How can I prevent this? I don't want to use the Anaconda python version on my Windows machine. I already put my python PATH at the top. When I check the PATH, there is no Anaconda PATH anywhere from when I did my installation. I want the python command to use my default python version, not Anaconda's version.
[ "You need to change the default opener for the .py files on your computer.\ntry to right-click on the .py file, select \"open with\" and look for python.\nYou can always use cmd and run the following:\npython <path_to_file>\n\nor\npython3 <path_to_file>\n\nIt depended on your python version\n(as long as you set your python path in windows Path)\n" ]
[ 0 ]
[]
[]
[ "anaconda", "python", "windows" ]
stackoverflow_0074614179_anaconda_python_windows.txt
Q: My wordcloud mask is producing a series of points outlining where the mask should be but the words are fitting to the shape of the entire image As described above my wordcloud is not behaving in a way I have sen before and I have no idea what is causing the issue as I have made them before and never experienced this problem. # import libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt from PIL import Image from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator This is what my data looks like data = pd.read_csv('cleanTwitterData.csv') data Date Close name tweet 3 2022-04-25 51.700001 elonmusk hope that even worst critics remain Twitter be... 4 2022-04-26 49.680000 elonmusk esaagar Suspending the Twitter account major n... 5 2022-04-27 48.639999 elonmusk For Twitter deserve public trust must politica... 6 2022-04-28 49.110001 elonmusk Let make Twitter maximum fun 7 2022-04-29 49.020000 elonmusk The people Twitter strongly agree that Twitter... ... ... ... ... ... 176 2022-10-15 50.450001 elonmusk KimDotcom Twitter trying hardest escalate this... 186 2022-10-25 52.779999 elonmusk SwipeWright Twitter should broadly inclusive p... 187 2022-10-26 53.349998 elonmusk Entering Twitter let that sink D68z4K2wq7 188 2022-10-27 53.700001 elonmusk Dear Twitter Advertisers GMwHmInPAS 189 2022-10-28 53.700001 elonmusk Comedy now legal Twitter This is the image im using for a mask pil_im = Image.open('twitterLogo.png') display(pil_im) My mask mask = np.array(Image.open("twitterLogo.png")) mask array([[[255, 255, 255], [255, 255, 255], [255, 255, 255], ..., [238, 238, 238], [238, 238, 238], [239, 239, 239]], [[255, 255, 255], [254, 254, 254], [254, 254, 254], ..., [237, 237, 237], [237, 237, 237], [238, 238, 238]], ..., [254, 254, 254], [254, 254, 254], [255, 255, 255]], [[239, 239, 239], [238, 238, 238], [238, 238, 238], ..., [255, 255, 255], [255, 255, 255], [255, 255, 255]]], dtype=uint8) So at this point im thinking its looking good, mask looks like it should, data looks like it should so the next step is creating the wordcloud: #Generate a word cloud image text = " ".join(i for i in data.tweet) stopwords = set(STOPWORDS) wordcloud = WordCloud(stopwords=stopwords,background_color='white', max_words=1000, mask=mask,contour_color='#023075',contour_width=3,colormap='Blues').generate(text) plt.figure() plt.imshow(wordcloud, interpolation="bilinear") plt.axis("off") plt.show() But instead of a nice twitter shaped cloud, I just get a rectangle, with the outline of the twitter logo pinned by little blue points within (kind of hard to see but if you look closely you can make out the shape of the twitter logo: Ive tried using one or two other random png files as the mask with the same result. Can somebody point out to me where im going wrong with this? Any help would be greatly appreciated. A: As commented by Paul Brodersen the image used for the mask has to be black and white, with black corresponding to the area to be filled. Thanks Paul
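A small sketch of preparing such a mask from an arbitrary logo image, following the convention described in the answer (the shape to be filled ends up dark and everything else pure white, since white areas are ignored by WordCloud; the 128 threshold is an arbitrary choice and reuses stopwords and text from the question):

import numpy as np
from PIL import Image

raw = np.array(Image.open("twitterLogo.png").convert("L"))   # greyscale copy of the logo
mask = np.where(raw > 128, 255, 0).astype(np.uint8)          # hard black/white threshold

wordcloud = WordCloud(stopwords=stopwords, background_color='white', max_words=1000,
                      mask=mask, contour_color='#023075', contour_width=3,
                      colormap='Blues').generate(text)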
My wordcloud mask is producing a series of points outlining where the mask should be but the words are fitting to the shape of the entire image
As described above my wordcloud is not behaving in a way I have sen before and I have no idea what is causing the issue as I have made them before and never experienced this problem. # import libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt from PIL import Image from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator This is what my data looks like data = pd.read_csv('cleanTwitterData.csv') data Date Close name tweet 3 2022-04-25 51.700001 elonmusk hope that even worst critics remain Twitter be... 4 2022-04-26 49.680000 elonmusk esaagar Suspending the Twitter account major n... 5 2022-04-27 48.639999 elonmusk For Twitter deserve public trust must politica... 6 2022-04-28 49.110001 elonmusk Let make Twitter maximum fun 7 2022-04-29 49.020000 elonmusk The people Twitter strongly agree that Twitter... ... ... ... ... ... 176 2022-10-15 50.450001 elonmusk KimDotcom Twitter trying hardest escalate this... 186 2022-10-25 52.779999 elonmusk SwipeWright Twitter should broadly inclusive p... 187 2022-10-26 53.349998 elonmusk Entering Twitter let that sink D68z4K2wq7 188 2022-10-27 53.700001 elonmusk Dear Twitter Advertisers GMwHmInPAS 189 2022-10-28 53.700001 elonmusk Comedy now legal Twitter This is the image im using for a mask pil_im = Image.open('twitterLogo.png') display(pil_im) My mask mask = np.array(Image.open("twitterLogo.png")) mask array([[[255, 255, 255], [255, 255, 255], [255, 255, 255], ..., [238, 238, 238], [238, 238, 238], [239, 239, 239]], [[255, 255, 255], [254, 254, 254], [254, 254, 254], ..., [237, 237, 237], [237, 237, 237], [238, 238, 238]], ..., [254, 254, 254], [254, 254, 254], [255, 255, 255]], [[239, 239, 239], [238, 238, 238], [238, 238, 238], ..., [255, 255, 255], [255, 255, 255], [255, 255, 255]]], dtype=uint8) So at this point im thinking its looking good, mask looks like it should, data looks like it should so the next step is creating the wordcloud: #Generate a word cloud image text = " ".join(i for i in data.tweet) stopwords = set(STOPWORDS) wordcloud = WordCloud(stopwords=stopwords,background_color='white', max_words=1000, mask=mask,contour_color='#023075',contour_width=3,colormap='Blues').generate(text) plt.figure() plt.imshow(wordcloud, interpolation="bilinear") plt.axis("off") plt.show() But instead of a nice twitter shaped cloud, I just get a rectangle, with the outline of the twitter logo pinned by little blue points within (kind of hard to see but if you look closely you can make out the shape of the twitter logo: Ive tried using one or two other random png files as the mask with the same result. Can somebody point out to me where im going wrong with this? Any help would be greatly appreciated.
[ "As commented by Paul Brodersen the image used for the mask has to be black and white, with black corresponding to the area to be filled.\nThanks Paul\n" ]
[ 0 ]
[]
[]
[ "matplotlib", "python", "word_cloud" ]
stackoverflow_0074612436_matplotlib_python_word_cloud.txt
Q: UnboundLocalError: local variable referenced before assignment error I tried all the solutions I could find nothing helped A survey for a school project and expected to add 100 points to the persons score Do you think you could help code: #QUESTION q1 = "TRUE OR FALSE: The lightest atom is hydrogen" q2 = "TRUE OR FALSE: Osmium is one of the densest atom if not the most" q3 = "TRUE OR FALSE: chemistry is the branch of science that deals with the identification of the substances of which matter is composed; the investigation of their properties and the ways in which they interact, combine, and change; and the use of these processes to form new substances." q4 = "TRUE OR FALSE: There are 200 elements discovered in history" q5 = "TRUE OR FALSE: Dmitri Ivanovich Mendeleev was a Russian chemist and inventor. He is best known for formulating the Periodic Law and creating a version of the periodic table of elements." #_____________________________________________________________________ q6 = "TRUE OR FALSE: The earth is the center of the earth" q7 = "TRUE OR FALSE: The sun the the center of the earth" q8 = "TRUE OR FALSE: The sun makes up most of the mass in our solar system" q9 = "TRUE OR FALSE: Jupiter has the most moon in the solar system" q10 = "TRUE OR FALSE: Mars's nick name is the red planet" #____________________________________________________________________ q11 = "TRUE OR FALSE: Supernova's are the biggest explosions in the universe" q12 = "TRUE OR FALSE: Gamma rays bursts are the biggest explosions the universe" q13 = "TRUE OR FALSE: All stars turn into supernovas" q14 = "TRUE OR FALSE: All stars turn into blacks holes" q15 = "TRUE OR FALSE: All stars turn into neutron stars" #____________________________________________________________________ q16 = "TRUE OR FALSE: Based on the model, Juptier is the biggest planet" q17 = "TRUE OR FALSE: Based on the model, earth is the samllest planet" q18 = "TRUE OR FALSE: Based on the model, neptune is the farthest planet to the sun" q19 = "TRUE OR FALSE: Based on the model, the sun is the biggest object" q20 = "TRUE OR FALSE: Based on the model the asteroid belt is located between Jupiter and Mars" #____________________________________________________________________ #ANSWER KEY a1 = "True" a2 = "True" a3 = "True" a4 = "False" a5 = "True" a6 = "False" a7 = "True" a8 = "True" a9 = "False" a10 = "True" a11 = "False" a12 = "True" a13 = "False" a14 = "False" a15 = "False" a16 = "True" a17 = "False" a18 = "True" a19 = "True" a20 = "True" global player1points global player2points player1points = int(0) player2points = int(0) startkey = "Start" rulekey = "Rules" def software(key): if p1!=key and p2!=key: print("Both of you got it wrong") elif p2==key and p1!=key: print(f"{p2name} got it right") player2points = player2points+int(100) elif p1==key and p2!=key: print(f"{p1name} got it right") player1points = player1points+int(100) elif p1==key and p2==key: print("both of you got it right") player1points = player1points+int(100) player2points = player2points+int(100) start = input(f"Hello to the tester 9000\n\nType start to start\ntype rules to know how to play! 
").capitalize().strip() if start==rulekey: rule = input("You need aleast 2 players to play this game\n\neverytime you get the question right you get 100 points\n\n the person with the most points after 20 questions is the winner\ntype start to start") rule = "Start" if start==startkey or rule==startkey: p1name = input("what is your name player one: ") p2name = input("what is your name player two: ") print(f"time for you first question") print(q1) p1 = input(f"{p1name} turn").capitalize().strip() p2 = input(f"{p2name} turn").capitalize().strip() software(a1) print(player1points) print(player2points) here the error code it gave me: Hello to the tester 9000 Type start to start type rules to know how to play! rules You need aleast 2 players to play this game everytime you get the question right you get 100 points the person with the most points after 20 questions is the winner type start to startstart what is your name player one: kunde what is your name player two: chan time for you first question TRUE OR FALSE: The lightest atom is hydrogen kunde turnfalse chan turntrue chan got it right Traceback (most recent call last): File "<string>", line 103, in <module> File "<string>", line 71, in software UnboundLocalError: local variable 'player2points' referenced before assignment A: Cause of Error The error is your global statements should be inside function software since the global keyword allows us to modify variables outside of the current scope. You need it in the function software Improvements Coding is largely about using the right data structures. Using a variable for each question and answer makes it difficult to loop over the questions and answers. It's better to use a list in this case. No need for global for this simple program (globals are normally unnecessary and frowned upon when not needed). use a mutable argument (such as a dictionary) to store points Unnecessarily to use f-strings in all your print statements. Unnecessary usage confuses the reader since they are looking for a variable in your string. software is a meaningless name for a function use something more descriptive of its purpose such as update_points Improved Code #The error is your global statements should be inside function software since the global keyword allows us to modify variables outside of the current scope. You need it in # The questions and answers should be in a data structure rather than each being its own variable # This allows you to interate over the questions # Most of your print statements you can use string literal rather than f-strings since they don't use a variable # Unnecessarily using f-strings confuses the code reviewer since they are looking for a variable in your string #QUESTION questions = ["TRUE OR FALSE: The lightest atom is hydrogen", "TRUE OR FALSE: Osmium is one of the densest atom if not the most", "TRUE OR FALSE: chemistry is the branch of science that deals with the identification of the substances of which matter is composed; the investigation of their properties and the ways in which they interact, combine, and change; and the use of these processes to form new substances.", "TRUE OR FALSE: There are 200 elements discovered in history", "TRUE OR FALSE: Dmitri Ivanovich Mendeleev was a Russian chemist and inventor. 
He is best known for formulating the Periodic Law and creating a version of the periodic table of elements.", "TRUE OR FALSE: The earth is the center of the earth", "TRUE OR FALSE: The sun the the center of the earth", "TRUE OR FALSE: The sun makes up most of the mass in our solar system", "TRUE OR FALSE: Jupiter has the most moon in the solar system", "TRUE OR FALSE: Mars's nick name is the red planet", "TRUE OR FALSE: Supernova's are the biggest explosions in the universe", "TRUE OR FALSE: Gamma rays bursts are the biggest explosions the universe" "TRUE OR FALSE: All stars turn into supernovas", "TRUE OR FALSE: All stars turn into blacks holes", "TRUE OR FALSE: All stars turn into neutron stars", "TRUE OR FALSE: Based on the model, Juptier is the biggest planet", "TRUE OR FALSE: Based on the model, earth is the samllest planet", "TRUE OR FALSE: Based on the model, neptune is the farthest planet to the sun", "TRUE OR FALSE: Based on the model, the sun is the biggest object", "TRUE OR FALSE: Based on the model the asteroid belt is located between Jupiter and Mars"] #____________________________________________________________________ #ANSWER KEY answers = ["True", "True", "True", "False", "True", "False", "True", "True", "False", "True", "False", "True", "False", "False", "False", "True", "False", "True", "True", "True"] def update_points(points, key): ''' Updates points for player1 & player2 ''' print(p1, p2, key) if p1!=key and p2!=key: print("Both of you got it wrong") elif p2==key and p1!=key: print(f"{p2name} got it right") # increment player 2 points points[p2name] += 100 elif p1==key and p2!=key: print(f"{p1name} got it right") # increment player 1 points points[p1name] += 100 elif p1==key and p2==key: print("both of you got it right") # increment both players points points[p1name] += 100 points[p2name] += 100 return points # updated points startkey = "Start" rulekey = "Rules" start = input('''Hello to the tester 9000 Type start to start type rules to know how to play!''').capitalize().strip() if start==rulekey: rule = input("You need aleast 2 players to play this game\n\neverytime you get the question right you get 100 points\n\n the person with the most points after 20 questions is the winner\ntype start to start") rule = "Start" if start==startkey or rule==startkey: p1name = input("what is your name player one: ") p2name = input("what is your name player two: ") # Initialize points for each player points = {} points[p1name] = 0 points[p2name] = 0 for i, q in enumerate(questions): # loop over each question in list. Enumerate provides the index of the quesiton print(f"time for question {i}") print(questions[i]) # Get answers p1 = input(f"{p1name} turn").capitalize().strip() p2 = input(f"{p2name} turn").capitalize().strip() # Update points points = update_points(points, answers[i]) print(f"Player 1 points: {points[p1name]}") print(f"Player 2 points: {points[p2name]}")
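For completeness, the shortest fix that keeps the original structure is simply declaring the names as global inside the function that assigns to them; a sketch of just that change (the rewrite above avoids globals altogether, which is the better habit):

player1points = 0
player2points = 0

def software(key):
    global player1points, player2points   # needed because the function assigns to them
    if p1 == key and p2 != key:
        player1points += 100
    elif p2 == key and p1 != key:
        player2points += 100
    elif p1 == key and p2 == key:
        player1points += 100
        player2points += 100
    else:
        print("Both of you got it wrong")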
UnboundLocalError: local variable referenced before assignment error
I tried all the solutions I could find nothing helped A survey for a school project and expected to add 100 points to the persons score Do you think you could help code: #QUESTION q1 = "TRUE OR FALSE: The lightest atom is hydrogen" q2 = "TRUE OR FALSE: Osmium is one of the densest atom if not the most" q3 = "TRUE OR FALSE: chemistry is the branch of science that deals with the identification of the substances of which matter is composed; the investigation of their properties and the ways in which they interact, combine, and change; and the use of these processes to form new substances." q4 = "TRUE OR FALSE: There are 200 elements discovered in history" q5 = "TRUE OR FALSE: Dmitri Ivanovich Mendeleev was a Russian chemist and inventor. He is best known for formulating the Periodic Law and creating a version of the periodic table of elements." #_____________________________________________________________________ q6 = "TRUE OR FALSE: The earth is the center of the earth" q7 = "TRUE OR FALSE: The sun the the center of the earth" q8 = "TRUE OR FALSE: The sun makes up most of the mass in our solar system" q9 = "TRUE OR FALSE: Jupiter has the most moon in the solar system" q10 = "TRUE OR FALSE: Mars's nick name is the red planet" #____________________________________________________________________ q11 = "TRUE OR FALSE: Supernova's are the biggest explosions in the universe" q12 = "TRUE OR FALSE: Gamma rays bursts are the biggest explosions the universe" q13 = "TRUE OR FALSE: All stars turn into supernovas" q14 = "TRUE OR FALSE: All stars turn into blacks holes" q15 = "TRUE OR FALSE: All stars turn into neutron stars" #____________________________________________________________________ q16 = "TRUE OR FALSE: Based on the model, Juptier is the biggest planet" q17 = "TRUE OR FALSE: Based on the model, earth is the samllest planet" q18 = "TRUE OR FALSE: Based on the model, neptune is the farthest planet to the sun" q19 = "TRUE OR FALSE: Based on the model, the sun is the biggest object" q20 = "TRUE OR FALSE: Based on the model the asteroid belt is located between Jupiter and Mars" #____________________________________________________________________ #ANSWER KEY a1 = "True" a2 = "True" a3 = "True" a4 = "False" a5 = "True" a6 = "False" a7 = "True" a8 = "True" a9 = "False" a10 = "True" a11 = "False" a12 = "True" a13 = "False" a14 = "False" a15 = "False" a16 = "True" a17 = "False" a18 = "True" a19 = "True" a20 = "True" global player1points global player2points player1points = int(0) player2points = int(0) startkey = "Start" rulekey = "Rules" def software(key): if p1!=key and p2!=key: print("Both of you got it wrong") elif p2==key and p1!=key: print(f"{p2name} got it right") player2points = player2points+int(100) elif p1==key and p2!=key: print(f"{p1name} got it right") player1points = player1points+int(100) elif p1==key and p2==key: print("both of you got it right") player1points = player1points+int(100) player2points = player2points+int(100) start = input(f"Hello to the tester 9000\n\nType start to start\ntype rules to know how to play! 
").capitalize().strip() if start==rulekey: rule = input("You need aleast 2 players to play this game\n\neverytime you get the question right you get 100 points\n\n the person with the most points after 20 questions is the winner\ntype start to start") rule = "Start" if start==startkey or rule==startkey: p1name = input("what is your name player one: ") p2name = input("what is your name player two: ") print(f"time for you first question") print(q1) p1 = input(f"{p1name} turn").capitalize().strip() p2 = input(f"{p2name} turn").capitalize().strip() software(a1) print(player1points) print(player2points) here the error code it gave me: Hello to the tester 9000 Type start to start type rules to know how to play! rules You need aleast 2 players to play this game everytime you get the question right you get 100 points the person with the most points after 20 questions is the winner type start to startstart what is your name player one: kunde what is your name player two: chan time for you first question TRUE OR FALSE: The lightest atom is hydrogen kunde turnfalse chan turntrue chan got it right Traceback (most recent call last): File "<string>", line 103, in <module> File "<string>", line 71, in software UnboundLocalError: local variable 'player2points' referenced before assignment
[ "Cause of Error\n\nThe error is your global statements should be inside function software since the global keyword allows us to modify variables outside of the current scope. You need it in the function software\n\nImprovements\n\nCoding is largely about using the right data structures. Using a variable for each question and answer makes it difficult to loop over the questions and answers. It's better to use a list in this case.\nNo need for global for this simple program (globals are normally unnecessary and frowned upon when not needed).\n\nuse a mutable argument (such as a dictionary) to store points\n\n\nUnnecessarily to use f-strings in all your print statements. Unnecessary usage confuses the reader since they are looking for a variable in your string.\nsoftware is a meaningless name for a function\n\nuse something more descriptive of its purpose such as update_points\n\n\n\nImproved Code\n#The error is your global statements should be inside function software since the global keyword allows us to modify variables outside of the current scope. You need it in \n# The questions and answers should be in a data structure rather than each being its own variable\n# This allows you to interate over the questions\n# Most of your print statements you can use string literal rather than f-strings since they don't use a variable\n# Unnecessarily using f-strings confuses the code reviewer since they are looking for a variable in your string\n#QUESTION\n\nquestions = [\"TRUE OR FALSE: The lightest atom is hydrogen\",\n \"TRUE OR FALSE: Osmium is one of the densest atom if not the most\",\n \"TRUE OR FALSE: chemistry is the branch of science that deals with the identification of the substances of which matter is composed; the investigation of their properties and the ways in which they interact, combine, and change; and the use of these processes to form new substances.\",\n \"TRUE OR FALSE: There are 200 elements discovered in history\",\n \"TRUE OR FALSE: Dmitri Ivanovich Mendeleev was a Russian chemist and inventor. 
He is best known for formulating the Periodic Law and creating a version of the periodic table of elements.\",\n \"TRUE OR FALSE: The earth is the center of the earth\",\n \"TRUE OR FALSE: The sun the the center of the earth\",\n \"TRUE OR FALSE: The sun makes up most of the mass in our solar system\",\n \"TRUE OR FALSE: Jupiter has the most moon in the solar system\",\n \"TRUE OR FALSE: Mars's nick name is the red planet\",\n \"TRUE OR FALSE: Supernova's are the biggest explosions in the universe\",\n \"TRUE OR FALSE: Gamma rays bursts are the biggest explosions the universe\"\n \"TRUE OR FALSE: All stars turn into supernovas\",\n \"TRUE OR FALSE: All stars turn into blacks holes\",\n \"TRUE OR FALSE: All stars turn into neutron stars\",\n \"TRUE OR FALSE: Based on the model, Juptier is the biggest planet\",\n \"TRUE OR FALSE: Based on the model, earth is the samllest planet\",\n \"TRUE OR FALSE: Based on the model, neptune is the farthest planet to the sun\",\n \"TRUE OR FALSE: Based on the model, the sun is the biggest object\",\n \"TRUE OR FALSE: Based on the model the asteroid belt is located between Jupiter and Mars\"]\n #____________________________________________________________________\n #ANSWER KEY\nanswers = [\"True\",\n \"True\",\n \"True\",\n \"False\",\n \"True\",\n \"False\",\n \"True\",\n \"True\",\n \"False\",\n \"True\",\n \"False\",\n \"True\",\n \"False\",\n \"False\",\n \"False\",\n \"True\",\n \"False\",\n \"True\",\n \"True\",\n \"True\"]\n\ndef update_points(points, key):\n '''\n Updates points for player1 & player2\n \n '''\n print(p1, p2, key)\n if p1!=key and p2!=key:\n print(\"Both of you got it wrong\")\n \n elif p2==key and p1!=key:\n print(f\"{p2name} got it right\")\n # increment player 2 points\n points[p2name] += 100\n \n elif p1==key and p2!=key:\n \n print(f\"{p1name} got it right\")\n # increment player 1 points\n points[p1name] += 100\n \n elif p1==key and p2==key:\n print(\"both of you got it right\")\n # increment both players points\n points[p1name] += 100\n points[p2name] += 100\n \n return points # updated points\n\n\nstartkey = \"Start\"\nrulekey = \"Rules\"\n\nstart = input('''Hello to the tester 9000\n Type start to start\n type rules to know how to play!''').capitalize().strip()\n\nif start==rulekey:\n rule = input(\"You need aleast 2 players to play this game\\n\\neverytime you get the question right you get 100 points\\n\\n the person with the most points after 20 questions is the winner\\ntype start to start\")\n \nrule = \"Start\"\n \nif start==startkey or rule==startkey:\n p1name = input(\"what is your name player one: \")\n p2name = input(\"what is your name player two: \")\n \n # Initialize points for each player\n points = {}\n points[p1name] = 0\n points[p2name] = 0\n \n for i, q in enumerate(questions):\n # loop over each question in list. Enumerate provides the index of the quesiton\n print(f\"time for question {i}\")\n print(questions[i])\n\n # Get answers\n p1 = input(f\"{p1name} turn\").capitalize().strip()\n p2 = input(f\"{p2name} turn\").capitalize().strip()\n\n # Update points\n points = update_points(points, answers[i])\n\n print(f\"Player 1 points: {points[p1name]}\")\n print(f\"Player 2 points: {points[p2name]}\")\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074609169_python.txt
Q: Create bar charts by grouped columns I have a dataframe like this: data = [[2008, 'A', 12.2], [2008, 'A', 11.3], [2009, 'A', 4.2], [2010, 'A', 3.4], [2011, 'A', 14.2], [2008, 'B', 4.1], [2008, 'B', 17.2], [2009, 'B', 12.2], [2008, 'C', 12.2], [2011, 'C', 12.2]] df = pd.DataFrame(data, columns=['year', 'type', 'income']) I'd like to group the data by year and type, and plot histograms of income distribution by year for each category. Now I can loop through the pivot table: for i in df['type'].unique(): df[df['type'] == i].pivot_table(index='year', columns = 'type', values='income', aggfunc='sum').plot(kind = 'bar') plt.show() But it seems to me that there must be some simpler way to do this. I would be grateful for help.
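One way to drop the explicit loop entirely is a single pivot_table followed by one plot call, either grouped bars in one axes or one panel per type (a sketch using the df from the question):

import matplotlib.pyplot as plt

summed = df.pivot_table(index='year', columns='type', values='income', aggfunc='sum')

summed.plot.bar()                                              # grouped bars, one colour per type
plt.show()

summed.plot.bar(subplots=True, layout=(1, 3), figsize=(12, 4)) # or one small chart per type
plt.show()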
Create bar charts by grouped columns
I have a dataframe like this: data = [[2008, 'A', 12.2], [2008, 'A', 11.3], [2009, 'A', 4.2], [2010, 'A', 3.4], [2011, 'A', 14.2], [2008, 'B', 4.1], [2008, 'B', 17.2], [2009, 'B', 12.2], [2008, 'C', 12.2], [2011, 'C', 12.2]] df = pd.DataFrame(data, columns=['year', 'type', 'income']) I'd like to group the data by year and type, and plot histograms of income distribution by year for each category. Now I can loop through the pivot table: for i in df['type'].unique(): df[df['type'] == i].pivot_table(index='year', columns = 'type', values='income', aggfunc='sum').plot(kind = 'bar') plt.show() But it seems to me that there must be some simpler way to do this. I would be grateful for help.
[]
[]
[ "Use:\ndf.groupby(['type', 'year']).sum().groupby(level='type').plot.bar()\n\nOutput:\n\n\n\nAn interesting alternative:\n(df.pivot_table(index='year', columns='type', values='income', aggfunc='sum')\n .plot.bar(subplots=True, figsize=(4,10))\n)\n\nOutput:\n\n" ]
[ -1 ]
[ "pandas", "python" ]
stackoverflow_0074614367_pandas_python.txt
Q: how add a point in an specific position of a string in a column in python Hy! I have a dataframe with two columns latitude and longitude with a wrong format that i want to correct. The structure of de strings in columns is the next Lat Long -314193332 -6419125129999990 -313147283 -641708031 I need to append a point in the third position to have this structure: Lat Long -31.4193332 -64.19125129999990 -31.3147283 -64.1708031 how can i do this? the value being an integer type too, i can configure that if there is a function to edit integers A: You can use arithmetic with a conversion to log10 to get the number of digits: N = 2 # number of digits to keep before decimal part out = df.div(10**np.floor(np.log10(df.abs())+1).sub(N)) Output: Lat Long 0 -31.419333 -64.191251 1 -31.314728 -64.170803 Intermediate (number of digits): np.floor(np.log10(df.abs())+1) Lat Long 0 9.0 16.0 1 9.0 9.0 A: Another possible solution, which uses regex to replace the first two digits to the same digits plus .: df.astype(str).replace(r'(^-?\d{2})',r'\1.', regex=True).astype(float) Output: Lat Long 0 -31.419333 -64.191251 1 -31.314728 -64.170803
how add a point in an specific position of a string in a column in python
Hi! I have a dataframe with two columns, latitude and longitude, in a wrong format that I want to correct. The structure of the strings in the columns is the following Lat Long -314193332 -6419125129999990 -313147283 -641708031 I need to insert a decimal point in the third position to get this structure: Lat Long -31.4193332 -64.19125129999990 -31.3147283 -64.1708031 How can I do this? The values could also be of integer type; I can configure that if there is a function to edit integers.
[ "You can use arithmetic with a conversion to log10 to get the number of digits:\nN = 2 # number of digits to keep before decimal part\nout = df.div(10**np.floor(np.log10(df.abs())+1).sub(N))\n\nOutput:\n Lat Long\n0 -31.419333 -64.191251\n1 -31.314728 -64.170803\n\nIntermediate (number of digits):\nnp.floor(np.log10(df.abs())+1)\n\n Lat Long\n0 9.0 16.0\n1 9.0 9.0\n\n", "Another possible solution, which uses regex to replace the first two digits to the same digits plus .:\ndf.astype(str).replace(r'(^-?\\d{2})',r'\\1.', regex=True).astype(float)\n\nOutput:\n Lat Long\n0 -31.419333 -64.191251\n1 -31.314728 -64.170803\n\n" ]
[ 4, 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074613924_pandas_python.txt
Q: Delete comments with python function Anybody can advise what could be wrong with my code? I am trying to make a method that removes the single line comments from the content. Also, the method should return the single line comments that start with '#'. import os def deleteComments(file): try: my_file = open(file, 'r') data = my_file.read() clean = "" comment= 0 if i[0] == "#": comment += 1 else: pass with open("clean-", "w") as f: f.write(clean) f.close() my_file.close() except: print("An error occurred with accessing the files") return file def deleteComment(file): try: my_file = open(file, 'r') data = my_file.read() clean = "" comment= 0 if i[0] == "#": comment += 1 else: pass with open("clean-", "w") as f: f.write(clean) f.close() my_file.close() except: print("An error occurred with accessing the files") return file A: This should make it work. import os def deleteComments(file): try: my_file = open(file, 'r') data = my_file.read() clean = "" comments_count = 0 for i in data.split('\n'): if i[0] == "#": clean += i clean += '\n' comments_count += 1 else: pass name = os.path.basename(path) with open("clean-" + name, "w") as f: f.write(clean) f.close() my_file.close() return comments_count except: print("An error occurred with accessing the files") return file
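One possible corrected sketch, under the reading that the cleaned copy should keep the non-comment lines while the removed '#' lines are returned to the caller (the original snippet never loops over the file contents, and the answer's version uses an undefined path variable):

import os

def delete_comments(file):
    comments = []   # the removed single-line comments, returned at the end
    kept = []       # everything else, written to the cleaned copy
    try:
        with open(file, 'r') as src:
            for line in src:
                if line.lstrip().startswith('#'):
                    comments.append(line)
                else:
                    kept.append(line)
        with open("clean-" + os.path.basename(file), 'w') as dst:
            dst.writelines(kept)
    except OSError:
        print("An error occurred with accessing the files")
    return comments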
Delete comments with python function
Anybody can advise what could be wrong with my code? I am trying to make a method that removes the single line comments from the content. Also, the method should return the single line comments that start with '#'. import os def deleteComments(file): try: my_file = open(file, 'r') data = my_file.read() clean = "" comment= 0 if i[0] == "#": comment += 1 else: pass with open("clean-", "w") as f: f.write(clean) f.close() my_file.close() except: print("An error occurred with accessing the files") return file def deleteComment(file): try: my_file = open(file, 'r') data = my_file.read() clean = "" comment= 0 if i[0] == "#": comment += 1 else: pass with open("clean-", "w") as f: f.write(clean) f.close() my_file.close() except: print("An error occurred with accessing the files") return file
[ "This should make it work.\nimport os\n\ndef deleteComments(file):\n try:\n my_file = open(file, 'r')\n data = my_file.read()\n clean = \"\"\n comments_count = 0\n for i in data.split('\\n'):\n if i[0] == \"#\":\n clean += i\n clean += '\\n'\n comments_count += 1\n else:\n pass\n name = os.path.basename(path)\n with open(\"clean-\" + name, \"w\") as f:\n f.write(clean)\n f.close()\n my_file.close()\n return comments_count\n\n except:\n print(\"An error occurred with accessing the files\")\n return file\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074614357_python.txt
Q: Strings in nested lists I have a list looking like this: record1 = [["2020/02/19", 7.0], ["2020/02/20", 7.3], ["2020/02/21", 6.1]] but I want to change the dates from yyyy/mm/dd to dd/mm/yyyy. record1= [["19/02/2020", 7.0], ["20/02/2020", 7,3], ["21/02/2020", 6.1] How can I do this? I cannot just use ::-1 because then everything gets mixed up. A: The proper way of doing it using datetime module: record1 = [[datetime.strptime(d, '%Y/%m/%d').strftime('%d/%m/%Y'), v] for d, v in record1] This converts it to datetime object, then formats it the way you intended >>> record1 [['19/02/2020', 7.0], ['20/02/2020', 7.3], ['21/02/2020', 6.1]] This would be a robust, explicit solution. But... Some experiments showed that manipulating strings with split would be a faster solution: from datetime import datetime import time def convert_with_datetime(record1): return [[datetime.strptime(d, '%Y/%m/%d').strftime('%d/%m/%Y'), v] for d, v in record1] def convert_with_split(record1): return [['/'.join(d.split('/')[::-1]), v] for d, v in record1] record1 = [["2020/02/19", 7.0], ["2020/02/20", 7.3], ["2020/02/21", 6.1]] * 10**6 start = time.time() convert_with_datetime(record1) end = time.time() print('convert_with_datetime:', end - start) start = time.time() convert_with_split(record1) end = time.time() print('convert_with_split:', end - start) The result is: convert_with_datetime: 12.311486959457397 convert_with_split: 1.3060288429260254
Strings in nested lists
I have a list looking like this: record1 = [["2020/02/19", 7.0], ["2020/02/20", 7.3], ["2020/02/21", 6.1]] but I want to change the dates from yyyy/mm/dd to dd/mm/yyyy. record1= [["19/02/2020", 7.0], ["20/02/2020", 7,3], ["21/02/2020", 6.1] How can I do this? I cannot just use ::-1 because then everything gets mixed up.
[ "The proper way of doing it using datetime module:\nrecord1 = [[datetime.strptime(d, '%Y/%m/%d').strftime('%d/%m/%Y'), v] for d, v in record1]\n\nThis converts it to datetime object, then formats it the way you intended\n>>> record1\n[['19/02/2020', 7.0], ['20/02/2020', 7.3], ['21/02/2020', 6.1]]\n\nThis would be a robust, explicit solution.\nBut... Some experiments showed that manipulating strings with split would be a faster solution:\nfrom datetime import datetime\nimport time\n\n\ndef convert_with_datetime(record1):\n return [[datetime.strptime(d, '%Y/%m/%d').strftime('%d/%m/%Y'), v] for d, v in record1]\n\ndef convert_with_split(record1):\n return [['/'.join(d.split('/')[::-1]), v] for d, v in record1]\n\nrecord1 = [[\"2020/02/19\", 7.0], [\"2020/02/20\", 7.3], [\"2020/02/21\", 6.1]] * 10**6\n\n\n\nstart = time.time()\nconvert_with_datetime(record1)\nend = time.time()\nprint('convert_with_datetime:', end - start)\n\nstart = time.time()\nconvert_with_split(record1)\nend = time.time()\nprint('convert_with_split:', end - start)\n\nThe result is:\nconvert_with_datetime: 12.311486959457397\nconvert_with_split: 1.3060288429260254\n\n" ]
[ 3 ]
[]
[]
[ "date", "nested_lists", "python" ]
stackoverflow_0074614410_date_nested_lists_python.txt
Q: python single file multiple lock issue I have a scenario where in there are 2 processes (Log_writer1.py and Log_writer2.py) running (as cron jobs) which are eventually writing to the same log file(test_log_file.txt) as part of the log_event function. Because of multiple locks, there are inconsistencies and all data are not being stored in the log file. Is there any way that a single lock can be shared between multiple processes to avoid these inconsistencies. Here are the below code snippets. Kindly suggest Script : test_cifs_log_writer.py ================================================================================================= def log_event(level, msg, job_log_file,lck): lck.acquire() for i in range(50): with open(job_log_file, 'a') as wr_log: print('Now printing message : '+str(msg)) wr_log.write(str(time.ctime())+' - '+level.upper()+' - '+str(msg)+'\n') lck.release() Script : Log_writer1.py ================================================================================================= from threading import Thread, Lock from test_cifs_log_writer import * lck=Lock() t1=Thread(target=log_event, args=('info','Thread 1 : msg','test_log_file.txt',lck)) t2=Thread(target=log_event, args=('info','Thread 2 : msg','test_log_file.txt',lck)) lst=[t1,t2] for thr in lst: thr.start() for thr in lst: thr.join() Script : Log_writer2.py ================================================================================================= from threading import Thread, Lock from test_cifs_log_writer import * lck=Lock() t1=Thread(target=log_event, args=('info','Thread 3 : msg','test_log_file.txt',lck)) t2=Thread(target=log_event, args=('info','Thread 4 : msg','test_log_file.txt',lck)) lst=[t1,t2] for thr in lst: thr.start() for thr in lst: thr.join() A: No, not an easy way. Even if you could share a lock, you'd then run into lock contention issues Either: (easiest, just requires an extra step later) have each process write to a separate, uniquely named log file and concatenate them afterwards if you need to. (harder, requires an extra process and communication) delegate writing the single log file to a single process that the other processes communicate with. (This is essentially what SyslogHandler does.) (worst, slow and lock-contented) have each process lock, open, write, close, unlock the file for each entry they want to write
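If the single shared file is kept, the lock has to live outside the interpreters so every process sees it; a sketch of what that looks like with the third-party filelock package (pip install filelock), keeping in mind the answer ranks this approach last because of contention, unlike a threading.Lock, which exists only inside one interpreter:

import time
from filelock import FileLock

def log_event(level, msg, job_log_file):
    lock = FileLock(job_log_file + ".lock")   # one lock file shared by every process
    with lock:                                # blocks until no other process holds it
        with open(job_log_file, 'a') as wr_log:
            wr_log.write(f"{time.ctime()} - {level.upper()} - {msg}\n")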
python single file multiple lock issue
I have a scenario where in there are 2 processes (Log_writer1.py and Log_writer2.py) running (as cron jobs) which are eventually writing to the same log file(test_log_file.txt) as part of the log_event function. Because of multiple locks, there are inconsistencies and all data are not being stored in the log file. Is there any way that a single lock can be shared between multiple processes to avoid these inconsistencies. Here are the below code snippets. Kindly suggest Script : test_cifs_log_writer.py ================================================================================================= def log_event(level, msg, job_log_file,lck): lck.acquire() for i in range(50): with open(job_log_file, 'a') as wr_log: print('Now printing message : '+str(msg)) wr_log.write(str(time.ctime())+' - '+level.upper()+' - '+str(msg)+'\n') lck.release() Script : Log_writer1.py ================================================================================================= from threading import Thread, Lock from test_cifs_log_writer import * lck=Lock() t1=Thread(target=log_event, args=('info','Thread 1 : msg','test_log_file.txt',lck)) t2=Thread(target=log_event, args=('info','Thread 2 : msg','test_log_file.txt',lck)) lst=[t1,t2] for thr in lst: thr.start() for thr in lst: thr.join() Script : Log_writer2.py ================================================================================================= from threading import Thread, Lock from test_cifs_log_writer import * lck=Lock() t1=Thread(target=log_event, args=('info','Thread 3 : msg','test_log_file.txt',lck)) t2=Thread(target=log_event, args=('info','Thread 4 : msg','test_log_file.txt',lck)) lst=[t1,t2] for thr in lst: thr.start() for thr in lst: thr.join()
[ "No, not an easy way. Even if you could share a lock, you'd then run into lock contention issues\nEither:\n\n(easiest, just requires an extra step later) have each process write to a separate, uniquely named log file and concatenate them afterwards if you need to.\n(harder, requires an extra process and communication) delegate writing the single log file to a single process that the other processes communicate with. (This is essentially what SyslogHandler does.)\n(worst, slow and lock-contented) have each process lock, open, write, close, unlock the file for each entry they want to write\n\n" ]
[ 1 ]
[]
[]
[ "cron_task", "locks", "multithreading", "python" ]
stackoverflow_0074614387_cron_task_locks_multithreading_python.txt
Q: Why there is a python version besides the package version? When I check the version of a package, I get a python version in parentheses. What does it mean? This python 3.7.3 does not match with the PyCharm interpreter I am using (python 3.8). Is that the reason? Should I worry the version between parentheses is not the same as my python project interpreter? A: It is possible that you have many python versions installed on your computer. You probably need to pip install again the same package for the python version you are using with your Pycharm if you want them to work correctly. if you are not sure how to do that with CMD commands, you can access your Pycharm, look for "Python packages" at the bottom, and then search for your package. You can see there is this package is installed already or not. and if not - just hit the 'install' button. after that, you are ready to go.
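A quick check to run from inside the PyCharm run configuration itself, confirming which interpreter is active and whether it can see the package (replace "requests" with the package in question; just a diagnostic sketch using the standard library):

import sys
from importlib.metadata import version, PackageNotFoundError

print(sys.version)                 # should report 3.8 if that is the project interpreter
try:
    print(version("requests"))
except PackageNotFoundError:
    print("not installed for this interpreter")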
Why there is a python version besides the package version?
When I check the version of a package, I get a python version in parentheses. What does it mean? This python 3.7.3 does not match the PyCharm interpreter I am using (python 3.8). Is that the reason for my problem? Should I worry that the version in parentheses is not the same as my project's python interpreter?
[ "It is possible that you have many python versions installed on your computer.\nYou probably need to pip install again the same package for the python version you are using with your Pycharm if you want them to work correctly.\nif you are not sure how to do that with CMD commands,\nyou can access your Pycharm, look for \"Python packages\" at the bottom, and then search for your package.\nYou can see there is this package is installed already or not.\nand if not - just hit the 'install' button.\nafter that, you are ready to go.\n" ]
[ 1 ]
[]
[]
[ "pycharm", "python", "versioning" ]
stackoverflow_0074614311_pycharm_python_versioning.txt
Q: How do i solve the "AttributeError: 'NoneType' object has no attribute 'split' " on specifying the k-clustering value? I'm trying to find the best values of k clustering, but it is showing error k_range = range(1,10) sse = [] max_iter = 300 init = 'k-means++' n_init = 10 for k in k_range: km = KMeans(n_clusters=k, max_iter = max_iter, init = init, n_init = n_init) km.fit(df[['Age','Income($)']]) sse.append(km.inertia_) A: seems like an issue caused by a numpy. importing a specific version of numpy ( downgrading it to 1.21.4) should fix the problem import numpy numpy.__version__ '1.21.4' make sure, you not importing numpy as np again afterwards before you assign your clastering model A: Instead of downgrading numpy, you can try this: pip install -U threadpoolctl It worked for me.
How do i solve the "AttributeError: 'NoneType' object has no attribute 'split' " on specifying the k-clustering value?
I'm trying to find the best value of k for clustering, but the code below raises an error: k_range = range(1,10) sse = [] max_iter = 300 init = 'k-means++' n_init = 10 for k in k_range: km = KMeans(n_clusters=k, max_iter = max_iter, init = init, n_init = n_init) km.fit(df[['Age','Income($)']]) sse.append(km.inertia_)
[ "seems like an issue caused by a numpy.\nimporting a specific version of numpy ( downgrading it to 1.21.4) should fix the problem\nimport numpy \nnumpy.__version__ \n'1.21.4' \n\nmake sure, you not importing numpy as np again afterwards before you assign your clastering model\n", "Instead of downgrading numpy, you can try this:\npip install -U threadpoolctl\n\nIt worked for me.\n" ]
[ 0, 0 ]
[ "Setting the minimum value in your range to a value greater than 1 will fix this problem\nEX: range(2,10)\n" ]
[ -1 ]
[ "k_means", "python" ]
stackoverflow_0072395721_k_means_python.txt
Q: Access shadow root content with selenium I'm trying to accept the cookie pop up on http://www.immobilienscout24.de. I'm using selenium 4.61, webdriver-manger with chrome, python 3.11 and Fedora 37, but I'm always getting an error. I'm using the following code driver = webdriver.Chrome(ChromeDriverManager().install()) def accept_cookies(): shadow_root = WebDriverWait(driver, 2).until(EC.presence_of_element_located((By.CSS_SELECTOR, "#usercentrics-root"))).shadow_root shadow_root.find_element((By.CLASS_NAME, "sc-gsDKAQ fWOgSr")).click() url = 'http://www.immobilienscout24.de/' driver.get(url) time.sleep(10) accept_cookies() The sleeping is only done to have the cookie pop up loaded. Error is: selenium.common.exceptions.InvalidArgumentException: Message: invalid argument: 'using' must be a string on shadow_root.find_element((By.CLASS_NAME, 'sc-gsDKAQ fWOgSr')) A: The following code works for me: url = "http://www.immobilienscout24.de/" driver.get(url) time.sleep(10) element = driver.execute_script("""return document.querySelector('#usercentrics-root').shadowRoot.querySelector("button[data-testid='uc-accept-all-button']")""") element.click()
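For reference, the Selenium-only approach from the question can also work once its two problems are removed: find_element takes By and the selector as two separate arguments (not a tuple), and a shadow root generally only accepts CSS selectors, not compound class names. A sketch assuming Selenium 4.1+ with a Chromium-based browser, reusing the button selector from the answer:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

host = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "#usercentrics-root")))
shadow = host.shadow_root   # Selenium exposes the open shadow root directly
shadow.find_element(By.CSS_SELECTOR, "button[data-testid='uc-accept-all-button']").click()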
Access shadow root content with selenium
I'm trying to accept the cookie pop up on http://www.immobilienscout24.de. I'm using selenium 4.61, webdriver-manger with chrome, python 3.11 and Fedora 37, but I'm always getting an error. I'm using the following code driver = webdriver.Chrome(ChromeDriverManager().install()) def accept_cookies(): shadow_root = WebDriverWait(driver, 2).until(EC.presence_of_element_located((By.CSS_SELECTOR, "#usercentrics-root"))).shadow_root shadow_root.find_element((By.CLASS_NAME, "sc-gsDKAQ fWOgSr")).click() url = 'http://www.immobilienscout24.de/' driver.get(url) time.sleep(10) accept_cookies() The sleeping is only done to have the cookie pop up loaded. Error is: selenium.common.exceptions.InvalidArgumentException: Message: invalid argument: 'using' must be a string on shadow_root.find_element((By.CLASS_NAME, 'sc-gsDKAQ fWOgSr'))
[ "The following code works for me:\nurl = \"http://www.immobilienscout24.de/\"\ndriver.get(url)\n\ntime.sleep(10)\n\nelement = driver.execute_script(\"\"\"return document.querySelector('#usercentrics-root').shadowRoot.querySelector(\"button[data-testid='uc-accept-all-button']\")\"\"\")\nelement.click()\n\n" ]
[ 0 ]
[]
[]
[ "automation", "css_selectors", "python", "selenium", "shadow_dom" ]
stackoverflow_0074614456_automation_css_selectors_python_selenium_shadow_dom.txt
Q: Python3 surprising behavior of identifier being a non-ASCII Unicode character Following code runs without an assertion error: K = 'K' = '' = '' = '' = '' = '' ᴷ = 'ᴷ' assert K == == == == == ᴷ print(f'{K=}, {=}, {=}, {=}, {=}, {=}') and prints K='ᴷ', ='ᴷ', ='', ='ᴷ', ='ᴷ', ='ᴷ' I am aware of https://peps.python.org/pep-3131/ and have read the Python documentation about identifiers https://docs.python.org/3/reference/lexical_analysis.html#identifiers but haven't found any hints explaining the experienced behavior. So my question is: What is wrong with my expectation that the value of all of the other optical apparently different identifier doesn't change if a new value is assigned to one of them? UPDATE: taking currently available comments and answers into account raises the need to explain more about what I expect as satisfying answer to my question: The hint about NFKC conversion behind the comparison of names of identifiers helps to understand how it comes that the experienced behavior is there, but ... it leaves me still with the question opened what is the deep reason behind the choice to have different approaches for comparison of Unicode strings depending on context in which they occur? The way strings as string literals are compared to each other apparently differs from the way same strings are compared if they specify names of identifiers. What am I still missing to know about to be able to see the deep reason behind the why it was decided that Unicode strings representing names of identifiers in Python are not compared the same way to each other as Unicode strings representing string literals? If I understand it right Unicode comes with the possibility to have ambiguous specifications for the same expected outcome using either one code point representing a complex character or multiple code points with an appropriate base character plus its modifiers. Normalization of the Unicode string is then an attempt on the way to resolve the mess caused by introducing the possibility of this ambiguity in first place. But this is the Unicode specific stuff having in my eyes the heaviest impact on Unicode visualization tools like viewer and editors. What a programming language using representation of a string as a list of integer values (Unicode code points) larger than 255 actually implements is another thing, isn't it? Below some further attempts to find a better wording for the question I seek to get answered: What is the advantage of creating the possibility that two different Unicode strings are eventually considered not to be different if they are used as names of Python identifiers? What is the actual feature behind what I am considering to be a not making sense behavior because of broken WYSIWYG ability? 
Below some more code illustrating what is going on and demonstrating the difference in comparison between string literals and identifier names originated in same strings as the strings literals: from unicodedata import normalize as normal itisasitisRepr = [ char for char in ['K', '', '', '', '', '', 'ᴷ']] hexintasisRepr = [ f'{ord(char):5X}' for char in itisasitisRepr] normalizedRepr = [ normal('NFKC', char) for char in itisasitisRepr] hexintnormRepr = [ f'{ord(char):5X}' for char in normalizedRepr] print(itisasitisRepr) print(hexintasisRepr) print(normalizedRepr) print(hexintnormRepr) print(f"{ 'K' == '' = }") print(f"{normal('NFKC','K')==normal('NFKC','') = }") print(ᴷ == , 'ᴷ' == '') # gives: True, False gives: ['K', '', '', '', '', '', 'ᴷ'] [' 4B', '1D542', '1D6B1', '1D50E', '1D576', '1D4DA', ' 1D37'] ['K', 'K', 'Κ', 'K', 'K', 'K', 'K'] [' 4B', ' 4B', ' 39A', ' 4B', ' 4B', ' 4B', ' 4B'] 'K' == '' = False normal('NFKC','K')==normal('NFKC','') = True A: Python identifiers with non-ASCII characters are subject to NFKC normalisation(1), you can see the effect in the following code: import unicodedata for char in ['K', '', '', '', '', '', 'ᴷ']: normalised_char = unicodedata.normalize('NFKC', char) print(char, normalised_char, ord(normalised_char)) The output of that is: K K 75 K 75 Κ 922 K 75 K 75 K 75 ᴷ K 75 This shows that all but one of those is the same identifier, which is why your assert passes (it's missing the one different identifier) and why most seem to be the same value. It's no different really to the following code, in which it is hopefully immediately clear what will happen: a = '1' a = '2' b = '3' a = '4' a = '5' a = '6' a = '7' assert a == a == a == a == a == a # passes print(f'{a=}, {a=}, {b=}, {a=}, {a=}, {a=}') # a=7 a=7 b=3 a=7 a=7 a=7 In response to your update, specifically the text: What is the advantage of creating the possibility that two different Unicode strings are eventually considered not to be different if they are used as names of Python identifiers? My own particular viewpoint as a developer is that I want to be able to look at code and understand it. That's not going to be easy when different code-points map to similar or even identical graphemes(2), such as with: Ω = 1 Ω = 2 Ω = Ω + Ω print(Ω * Ω) What would you expect from that code? You set omega to one, then two. You then double it to four, and print the square which is sixteen. Easy, right? And, in actual fact, that's what you do get in Python, despite the fact that there are both omega and ohm characters in that code, and that's because they normalise to the same identifier. Were they not normalised, you would instead have the equivalent of: omega = 1 ohm = 2 ohm = omega + ohm print(ohm * ohm) And this would output nine rather than sixteen. Best of luck debugging that when you can't see a difference between the omega and ohm identifiers :-) There are also diacritics that can have different representations, such as ḋ: U+1e0b (Latin Small Letter D with Dot Above). U+0064, U+0307 (Latin Small Letter D, Combining dot above). And this may get even more complex where a base letter can have multiple diacritics such as ậ, ç̇, or ė́. The order of combining marks may be arbitrary, meaning that there could be many ways of representing the ậç̇ė́ variable (two by two by two gives eight, but there are potentially more since distinct code points also exist for "half-accented" characters like ç) . 
No, I think I very much appreciate the normalisation that happens to Python identifiers :-) (1) From the Python docs about identifiers: All identifiers are converted into the normal form NFKC while parsing; comparison of identifiers is based on NFKC. (2) You can think of graphemes as the basic unit of writing (like a letter), similar to phonemes being the basic unit of speech (like a sound). So the English grapheme c has at least two phonemes, the hard-c in cook and the soft-c in ice. And, making matters even more complex, cook shows that there is one phoneme (hard-c) giving two separate graphemes, c and k. Now think how much more complex it gets when you introduce every other language on the planet, I'm surprised the members of the Unicode consortium don't go absolutely insane :-)
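A small sketch, added for illustration, contrasting canonical (NFC) with compatibility (NFKC) normalization on one of the characters from the question. It shows why string literals (never normalized) compare differently from identifiers (NFKC-normalized at parse time):
import unicodedata

dsk = '\U0001D542'  # MATHEMATICAL DOUBLE-STRUCK CAPITAL K

# NFC leaves the character alone, so a literal comparison stays code-point based
print(unicodedata.normalize('NFC', dsk) == 'K')    # False

# NFKC applies the compatibility decomposition and folds it to plain 'K',
# which is the form Python uses when comparing identifier names
print(unicodedata.normalize('NFKC', dsk) == 'K')   # True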
Python3 surprising behavior of identifier being a non-ASCII Unicode character
Following code runs without an assertion error: K = 'K' = '' = '' = '' = '' = '' ᴷ = 'ᴷ' assert K == == == == == ᴷ print(f'{K=}, {=}, {=}, {=}, {=}, {=}') and prints K='ᴷ', ='ᴷ', ='', ='ᴷ', ='ᴷ', ='ᴷ' I am aware of https://peps.python.org/pep-3131/ and have read the Python documentation about identifiers https://docs.python.org/3/reference/lexical_analysis.html#identifiers but haven't found any hints explaining the experienced behavior. So my question is: What is wrong with my expectation that the value of all of the other optical apparently different identifier doesn't change if a new value is assigned to one of them? UPDATE: taking currently available comments and answers into account raises the need to explain more about what I expect as satisfying answer to my question: The hint about NFKC conversion behind the comparison of names of identifiers helps to understand how it comes that the experienced behavior is there, but ... it leaves me still with the question opened what is the deep reason behind the choice to have different approaches for comparison of Unicode strings depending on context in which they occur? The way strings as string literals are compared to each other apparently differs from the way same strings are compared if they specify names of identifiers. What am I still missing to know about to be able to see the deep reason behind the why it was decided that Unicode strings representing names of identifiers in Python are not compared the same way to each other as Unicode strings representing string literals? If I understand it right Unicode comes with the possibility to have ambiguous specifications for the same expected outcome using either one code point representing a complex character or multiple code points with an appropriate base character plus its modifiers. Normalization of the Unicode string is then an attempt on the way to resolve the mess caused by introducing the possibility of this ambiguity in first place. But this is the Unicode specific stuff having in my eyes the heaviest impact on Unicode visualization tools like viewer and editors. What a programming language using representation of a string as a list of integer values (Unicode code points) larger than 255 actually implements is another thing, isn't it? Below some further attempts to find a better wording for the question I seek to get answered: What is the advantage of creating the possibility that two different Unicode strings are eventually considered not to be different if they are used as names of Python identifiers? What is the actual feature behind what I am considering to be a not making sense behavior because of broken WYSIWYG ability? 
Below some more code illustrating what is going on and demonstrating the difference in comparison between string literals and identifier names originated in same strings as the strings literals: from unicodedata import normalize as normal itisasitisRepr = [ char for char in ['K', '', '', '', '', '', 'ᴷ']] hexintasisRepr = [ f'{ord(char):5X}' for char in itisasitisRepr] normalizedRepr = [ normal('NFKC', char) for char in itisasitisRepr] hexintnormRepr = [ f'{ord(char):5X}' for char in normalizedRepr] print(itisasitisRepr) print(hexintasisRepr) print(normalizedRepr) print(hexintnormRepr) print(f"{ 'K' == '' = }") print(f"{normal('NFKC','K')==normal('NFKC','') = }") print(ᴷ == , 'ᴷ' == '') # gives: True, False gives: ['K', '', '', '', '', '', 'ᴷ'] [' 4B', '1D542', '1D6B1', '1D50E', '1D576', '1D4DA', ' 1D37'] ['K', 'K', 'Κ', 'K', 'K', 'K', 'K'] [' 4B', ' 4B', ' 39A', ' 4B', ' 4B', ' 4B', ' 4B'] 'K' == '' = False normal('NFKC','K')==normal('NFKC','') = True
[ "Python identifiers with non-ASCII characters are subject to NFKC normalisation(1), you can see the effect in the following code:\nimport unicodedata\nfor char in ['K', '', '', '', '', '', 'ᴷ']:\n normalised_char = unicodedata.normalize('NFKC', char)\n print(char, normalised_char, ord(normalised_char))\n\nThe output of that is:\nK K 75\n K 75\n Κ 922\n K 75\n K 75\n K 75\nᴷ K 75\n\nThis shows that all but one of those is the same identifier, which is why your assert passes (it's missing the one different identifier) and why most seem to be the same value. It's no different really to the following code, in which it is hopefully immediately clear what will happen:\na = '1'\na = '2'\nb = '3'\na = '4'\na = '5'\na = '6'\na = '7'\nassert a == a == a == a == a == a # passes\nprint(f'{a=}, {a=}, {b=}, {a=}, {a=}, {a=}') # a=7 a=7 b=3 a=7 a=7 a=7\n\n\nIn response to your update, specifically the text:\n\nWhat is the advantage of creating the possibility that two different Unicode strings are eventually considered not to be different if they are used as names of Python identifiers?\n\nMy own particular viewpoint as a developer is that I want to be able to look at code and understand it. That's not going to be easy when different code-points map to similar or even identical graphemes(2), such as with:\nΩ = 1\nΩ = 2\nΩ = Ω + Ω\nprint(Ω * Ω)\n\nWhat would you expect from that code? You set omega to one, then two. You then double it to four, and print the square which is sixteen. Easy, right?\nAnd, in actual fact, that's what you do get in Python, despite the fact that there are both omega and ohm characters in that code, and that's because they normalise to the same identifier. Were they not normalised, you would instead have the equivalent of:\nomega = 1\nohm = 2\nohm = omega + ohm\nprint(ohm * ohm)\n\nAnd this would output nine rather than sixteen. Best of luck debugging that when you can't see a difference between the omega and ohm identifiers :-)\nThere are also diacritics that can have different representations, such as ḋ:\n\nU+1e0b (Latin Small Letter D with Dot Above).\nU+0064, U+0307 (Latin Small Letter D, Combining dot above).\n\nAnd this may get even more complex where a base letter can have multiple diacritics such as ậ, ç̇, or ė́. The order of combining marks may be arbitrary, meaning that there could be many ways of representing the ậç̇ė́ variable (two by two by two gives eight, but there are potentially more since distinct code points also exist for \"half-accented\" characters like ç) .\nNo, I think I very much appreciate the normalisation that happens to Python identifiers :-)\n\n(1) From the Python docs about identifiers:\n\nAll identifiers are converted into the normal form NFKC while parsing; comparison of identifiers is based on NFKC.\n\n\n(2) You can think of graphemes as the basic unit of writing (like a letter), similar to phonemes being the basic unit of speech (like a sound). So the English grapheme c has at least two phonemes, the hard-c in cook and the soft-c in ice.\nAnd, making matters even more complex, cook shows that there is one phoneme (hard-c) giving two separate graphemes, c and k.\nNow think how much more complex it gets when you introduce every other language on the planet, I'm surprised the members of the Unicode consortium don't go absolutely insane :-)\n" ]
[ 8 ]
[]
[]
[ "python", "python_3.x", "unicode" ]
stackoverflow_0074614341_python_python_3.x_unicode.txt
Q: How to identify in which region new point will lie using Sklearn Python? I have a sample code for the Sklearn taken from the website. I am trying to learn how to classify points using Sklearn(Scikit-Learn). Here is the code: import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn.datasets import make_moons, make_circles, make_classification from sklearn.neural_network import MLPClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.svm import SVC from sklearn.gaussian_process import GaussianProcessClassifier from sklearn.gaussian_process.kernels import RBF from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier from sklearn.naive_bayes import GaussianNB from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis from sklearn.inspection import DecisionBoundaryDisplay names = [ "Nearest Neighbors", ] classifiers = [ KNeighborsClassifier(3), ] X, y = make_classification( n_features=2, n_redundant=0, n_informative=2, random_state=1, n_clusters_per_class=1 ) rng = np.random.RandomState(2) X += 2 * rng.uniform(size=X.shape) linearly_separable = (X, y) datasets = [ linearly_separable, ] figure = plt.figure(figsize=(27, 9)) i = 1 # iterate over datasets for ds_cnt, ds in enumerate(datasets): # preprocess dataset, split into training and test part X, y = ds X = StandardScaler().fit_transform(X) X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.4, random_state=42 ) x_min, x_max = X[:, 0].min() - 0.5, X[:, 0].max() + 0.5 y_min, y_max = X[:, 1].min() - 0.5, X[:, 1].max() + 0.5 # just plot the dataset first cm = plt.cm.RdBu cm_bright = ListedColormap(["#FF0000", "#0000FF"]) ax = plt.subplot(len(datasets), len(classifiers) + 1, i) if ds_cnt == 0: ax.set_title("Input data") # Plot the training points ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright, edgecolors="k") # Plot the testing points ax.scatter( X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6, edgecolors="k" ) ax.set_xlim(x_min, x_max) ax.set_ylim(y_min, y_max) ax.set_xticks(()) ax.set_yticks(()) i += 1 # iterate over classifiers for name, clf in zip(names, classifiers): ax = plt.subplot(len(datasets), len(classifiers) + 1, i) clf.fit(X_train, y_train) score = clf.score(X_test, y_test) All_Value_Response = DecisionBoundaryDisplay.from_estimator( clf, X, cmap=cm, alpha=0.8, ax=ax, eps=0.5 ) # Plot the training points ax.scatter( X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright, edgecolors="k" ) # Plot the testing points ax.scatter( X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, edgecolors="k", alpha=0.6, ) ax.set_xlim(x_min, x_max) ax.set_ylim(y_min, y_max) ax.set_xticks(()) ax.set_yticks(()) if ds_cnt == 0: ax.set_title(name) ax.text( x_max - 0.3, y_min + 0.3, ("%.2f" % score).lstrip("0"), size=15, horizontalalignment="right", ) i += 1 plt.tight_layout() plt.show() Here is the output: Now as one can see the areas formed are not regular shapes, so it is becoming a little difficult to understand how to know if a new point arrives and will lie in which region. I managed to capture the data of the regions (All_Value_Response variable stores that information) but it seems not helpful to me. So I want to know if I want to know in which region does the point (1,3) lies then how I can deduce it through code. 
I can do it by seeing on the graph but how to make it work using the code? Please help me find a solution to my problem. A: So, you have X_train and X_test. These are both lists containing tuples. The values in the tuples (a, b) have some range, like 0 -> 1. In your graphs, these are the x and y coordinates of your dots. You also have y_train and y_test. These are the known classifications of all the tuples in X_train and X_test. These values can be either 0 or 1, and none in between. If a dot in your graph is in the blue region, that means that the predicted value of that dot (a, b) is 0. If the dot is in the red region, this means the predicted value is 1. # if X_train is this X_train = [(0.0, 0.0), (0.1, 0.9), (0.9, 0.0), (1, 0.9)] # then y_train has to be this, for you chart y_train = [0, 0, 1, 1] If you then train a classifier on this (but normally more data), then you can ask it any point (a, b) and it will tell you 0 or 1 (aka blue or red). So for example I predict for a point (a, b) that it has not seen in X_train (aka something that is in X_test): result = clf.predict([(0.2, 0.2)]) result then equals: [0]. This is because looking at your graph, assuming x-axis and y-axis range 0 -> 1. The tuple (0.2, 0,2) falls in the blue region. It knows this because it has learned the blue red classification you see in your graph from X_train and X_test. So when it gets new tuples it sees on which region that dot falls and classifies it as 0 or 1, region blue or region red. To summarize. The colored regions show what value will be predicted for any given tuple (a, b). The dot positions (in the scatter) are given by the values (a, b) in the tuple. a and b for the tuple are in range between 0->1. The color is not a range, but a classification 0 or 1. Hopefully it helps, good luck! A: Well we can definitely determine which region the new point will lie in, but before we do that I want to call attention to something you're doing in your code that is going to come back to bite you. This line right here X = StandardScaler().fit_transform(X) will come back to smack you harder than you know. Remember, the point of doing a StandardScaler is to standardize the data (0 mean, unit variance). Also remember that what you do on your training set you must do on your test set. The caveat here is the operations you perform on your test set will be learned from your training set. I'll give a condensed form of your code to help illustrate this. X, y = datasets[0] # Save an instance of the standard scaler so we can apply it to unknown values later standard_scalar = StandardScaler() standard_scalar.fit(X) # Fit the "training data" X = standard_scalar.transform(X) # Transform the data # You ended up with a good score because the transformation in your data was applied to the entire dataset, "X". But normally you'd transform the X_test X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42) # Train the classifier clf = KNeighborsClassifier(3) clf.fit(X_train, y_train) score = clf.score(X_test, y_test) # 0.925. Good score but let's see what happens next # This is where things will go wrong in your code. # Instead, make sure you transform the test point your want to calculate >>> clf.predict([[1,3]]) array([0]) >>> clf.predict(standard_scalar.transform([[1,3]])) array([1]) So even though you can use predict in both cases, you need to make sure to apply the transformation to the data point that you're inputting. A: Try this. 
import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn.datasets import make_moons, make_circles, make_classification from sklearn.neural_network import MLPClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.svm import SVC from sklearn.gaussian_process import GaussianProcessClassifier from sklearn.gaussian_process.kernels import RBF from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier from sklearn.naive_bayes import GaussianNB from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis from sklearn.inspection import DecisionBoundaryDisplay names = [ "Nearest Neighbors", ] classifiers = [ KNeighborsClassifier(3), ] X, y = make_classification( n_features=2, n_redundant=0, n_informative=2, random_state=1, n_clusters_per_class=1 ) rng = np.random.RandomState(2) X += 2 * rng.uniform(size=X.shape) linearly_separable = (X, y) datasets = [ linearly_separable, ] figure = plt.figure(figsize=(27, 9)) i = 1 # iterate over datasets for ds_cnt, ds in enumerate(datasets): # preprocess dataset, split into training and test part X, y = ds X = StandardScaler().fit_transform(X) X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.4, random_state=42 ) x_min, x_max = X[:, 0].min() - 0.5, X[:, 0].max() + 0.5 y_min, y_max = X[:, 1].min() - 0.5, X[:, 1].max() + 0.5 # just plot the dataset first cm = plt.cm.RdBu cm_bright = ListedColormap(["#FF0000", "#0000FF"]) ax = plt.subplot(len(datasets), len(classifiers) + 1, i) if ds_cnt == 0: ax.set_title("Input data") # Plot the training points ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright, edgecolors="k") # Plot the testing points ax.scatter( X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6, edgecolors="k" ) ax.set_xlim(x_min, x_max) ax.set_ylim(y_min, y_max) ax.set_xticks(()) ax.set_yticks(()) i += 1 # iterate over classifiers for name, clf in zip(names, classifiers): ax = plt.subplot(len(datasets), len(classifiers) + 1, i) clf.fit(X_train, y_train) score = clf.score(X_test, y_test) All_Value_Response = DecisionBoundaryDisplay.from_estimator( clf, X, cmap=cm, alpha=0.8, ax=ax, eps=0.5 ) # Plot the training points ax.scatter( X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright, edgecolors="k" ) # Plot the testing points ax.scatter( X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, edgecolors="k", alpha=0.6, ) X1 = All_Value_Response.xx0.ravel() Y1 = All_Value_Response.xx1.ravel() Color = All_Value_Response.response.ravel() Outputs = [] for X2, Y2 in X: XD = X2 - X1 YD = Y2 - Y1 Distance = (XD * XD) + (YD * YD) Color_Gradient = Color[Distance.argmin()] Outputs.append(Color_Gradient) print(Outputs) ax.set_xlim(x_min, x_max) ax.set_ylim(y_min, y_max) ax.set_xticks(()) ax.set_yticks(()) if ds_cnt == 0: ax.set_title(name) ax.text( x_max - 0.3, y_min + 0.3, ("%.2f" % score).lstrip("0"), size=15, horizontalalignment="right", ) i += 1 plt.tight_layout() plt.show() I have found a reference in of the question that you have asked : https://stackoverflow.com/a/74613354/4948889 The output of the above code is something like the below:
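To sidestep the scaling pitfall described in the second answer, one option (a sketch under the same data-generation settings as the question, not part of the original answers) is to bundle the scaler and classifier in a Pipeline, so any new point is transformed with the training statistics automatically before prediction:
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_features=2, n_redundant=0, n_informative=2,
                           random_state=1, n_clusters_per_class=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4,
                                                    random_state=42)

# The scaler is fitted on the training split only and re-applied to every
# point later passed to predict(), including brand-new points like (1, 3).
model = make_pipeline(StandardScaler(), KNeighborsClassifier(3))
model.fit(X_train, y_train)

print(model.score(X_test, y_test))  # accuracy on the held-out points
print(model.predict([[1, 3]]))      # which region (class 0 or 1) the new point falls in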
How to identify in which region new point will lie using Sklearn Python?
I have a sample code for the Sklearn taken from the website. I am trying to learn how to classify points using Sklearn(Scikit-Learn). Here is the code: import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn.datasets import make_moons, make_circles, make_classification from sklearn.neural_network import MLPClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.svm import SVC from sklearn.gaussian_process import GaussianProcessClassifier from sklearn.gaussian_process.kernels import RBF from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier from sklearn.naive_bayes import GaussianNB from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis from sklearn.inspection import DecisionBoundaryDisplay names = [ "Nearest Neighbors", ] classifiers = [ KNeighborsClassifier(3), ] X, y = make_classification( n_features=2, n_redundant=0, n_informative=2, random_state=1, n_clusters_per_class=1 ) rng = np.random.RandomState(2) X += 2 * rng.uniform(size=X.shape) linearly_separable = (X, y) datasets = [ linearly_separable, ] figure = plt.figure(figsize=(27, 9)) i = 1 # iterate over datasets for ds_cnt, ds in enumerate(datasets): # preprocess dataset, split into training and test part X, y = ds X = StandardScaler().fit_transform(X) X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.4, random_state=42 ) x_min, x_max = X[:, 0].min() - 0.5, X[:, 0].max() + 0.5 y_min, y_max = X[:, 1].min() - 0.5, X[:, 1].max() + 0.5 # just plot the dataset first cm = plt.cm.RdBu cm_bright = ListedColormap(["#FF0000", "#0000FF"]) ax = plt.subplot(len(datasets), len(classifiers) + 1, i) if ds_cnt == 0: ax.set_title("Input data") # Plot the training points ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright, edgecolors="k") # Plot the testing points ax.scatter( X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6, edgecolors="k" ) ax.set_xlim(x_min, x_max) ax.set_ylim(y_min, y_max) ax.set_xticks(()) ax.set_yticks(()) i += 1 # iterate over classifiers for name, clf in zip(names, classifiers): ax = plt.subplot(len(datasets), len(classifiers) + 1, i) clf.fit(X_train, y_train) score = clf.score(X_test, y_test) All_Value_Response = DecisionBoundaryDisplay.from_estimator( clf, X, cmap=cm, alpha=0.8, ax=ax, eps=0.5 ) # Plot the training points ax.scatter( X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright, edgecolors="k" ) # Plot the testing points ax.scatter( X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, edgecolors="k", alpha=0.6, ) ax.set_xlim(x_min, x_max) ax.set_ylim(y_min, y_max) ax.set_xticks(()) ax.set_yticks(()) if ds_cnt == 0: ax.set_title(name) ax.text( x_max - 0.3, y_min + 0.3, ("%.2f" % score).lstrip("0"), size=15, horizontalalignment="right", ) i += 1 plt.tight_layout() plt.show() Here is the output: Now as one can see the areas formed are not regular shapes, so it is becoming a little difficult to understand how to know if a new point arrives and will lie in which region. I managed to capture the data of the regions (All_Value_Response variable stores that information) but it seems not helpful to me. So I want to know if I want to know in which region does the point (1,3) lies then how I can deduce it through code. I can do it by seeing on the graph but how to make it work using the code? 
Please help me find a solution to my problem.
[ "So, you have X_train and X_test. These are both lists containing tuples. The values in the tuples (a, b) have some range, like 0 -> 1. In your graphs, these are the x and y coordinates of your dots.\nYou also have y_train and y_test. These are the known classifications of all the tuples in X_train and X_test. These values can be either 0 or 1, and none in between. If a dot in your graph is in the blue region, that means that the predicted value of that dot (a, b) is 0. If the dot is in the red region, this means the predicted value is 1.\n# if X_train is this\nX_train = [(0.0, 0.0), (0.1, 0.9), (0.9, 0.0), (1, 0.9)]\n\n# then y_train has to be this, for you chart\ny_train = [0, 0, 1, 1]\n\nIf you then train a classifier on this (but normally more data), then you can ask it any point (a, b) and it will tell you 0 or 1 (aka blue or red).\nSo for example I predict for a point (a, b) that it has not seen in X_train (aka something that is in X_test):\nresult = clf.predict([(0.2, 0.2)])\n\nresult then equals: [0]. This is because looking at your graph, assuming x-axis and y-axis range 0 -> 1. The tuple (0.2, 0,2) falls in the blue region.\nIt knows this because it has learned the blue red classification you see in your graph from X_train and X_test. So when it gets new tuples it sees on which region that dot falls and classifies it as 0 or 1, region blue or region red.\nTo summarize. The colored regions show what value will be predicted for any given tuple (a, b). The dot positions (in the scatter) are given by the values (a, b) in the tuple. a and b for the tuple are in range between 0->1. The color is not a range, but a classification 0 or 1.\nHopefully it helps, good luck!\n", "Well we can definitely determine which region the new point will lie in, but before we do that I want to call attention to something you're doing in your code that is going to come back to bite you.\nThis line right here X = StandardScaler().fit_transform(X) will come back to smack you harder than you know.\nRemember, the point of doing a StandardScaler is to standardize the data (0 mean, unit variance). Also remember that what you do on your training set you must do on your test set. The caveat here is the operations you perform on your test set will be learned from your training set. I'll give a condensed form of your code to help illustrate this.\nX, y = datasets[0]\n\n# Save an instance of the standard scaler so we can apply it to unknown values later\nstandard_scalar = StandardScaler()\nstandard_scalar.fit(X) # Fit the \"training data\"\n\nX = standard_scalar.transform(X) # Transform the data\n\n# You ended up with a good score because the transformation in your data was applied to the entire dataset, \"X\". But normally you'd transform the X_test\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42)\n\n# Train the classifier\nclf = KNeighborsClassifier(3)\nclf.fit(X_train, y_train)\n\nscore = clf.score(X_test, y_test) # 0.925. 
Good score but let's see what happens next\n\n# This is where things will go wrong in your code.\n# Instead, make sure you transform the test point your want to calculate\n>>> clf.predict([[1,3]])\narray([0])\n\n>>> clf.predict(standard_scalar.transform([[1,3]]))\narray([1])\n\nSo even though you can use predict in both cases, you need to make sure to apply the transformation to the data point that you're inputting.\n", "Try this.\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.colors import ListedColormap\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.datasets import make_moons, make_circles, make_classification\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.svm import SVC\nfrom sklearn.gaussian_process import GaussianProcessClassifier\nfrom sklearn.gaussian_process.kernels import RBF\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis\nfrom sklearn.inspection import DecisionBoundaryDisplay\n\nnames = [\n \"Nearest Neighbors\",\n]\n\nclassifiers = [\n KNeighborsClassifier(3),\n]\n\nX, y = make_classification(\n n_features=2, n_redundant=0, n_informative=2, random_state=1, n_clusters_per_class=1\n)\nrng = np.random.RandomState(2)\nX += 2 * rng.uniform(size=X.shape)\nlinearly_separable = (X, y)\n\ndatasets = [\n linearly_separable,\n]\n\nfigure = plt.figure(figsize=(27, 9))\ni = 1\n# iterate over datasets\nfor ds_cnt, ds in enumerate(datasets):\n # preprocess dataset, split into training and test part\n X, y = ds\n X = StandardScaler().fit_transform(X)\n X_train, X_test, y_train, y_test = train_test_split(\n X, y, test_size=0.4, random_state=42\n )\n\n x_min, x_max = X[:, 0].min() - 0.5, X[:, 0].max() + 0.5\n y_min, y_max = X[:, 1].min() - 0.5, X[:, 1].max() + 0.5\n\n # just plot the dataset first\n cm = plt.cm.RdBu\n cm_bright = ListedColormap([\"#FF0000\", \"#0000FF\"])\n ax = plt.subplot(len(datasets), len(classifiers) + 1, i)\n if ds_cnt == 0:\n ax.set_title(\"Input data\")\n # Plot the training points\n ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright, edgecolors=\"k\")\n # Plot the testing points\n ax.scatter(\n X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6, edgecolors=\"k\"\n )\n ax.set_xlim(x_min, x_max)\n ax.set_ylim(y_min, y_max)\n ax.set_xticks(())\n ax.set_yticks(())\n i += 1\n\n # iterate over classifiers\n for name, clf in zip(names, classifiers):\n ax = plt.subplot(len(datasets), len(classifiers) + 1, i)\n clf.fit(X_train, y_train)\n score = clf.score(X_test, y_test)\n All_Value_Response = DecisionBoundaryDisplay.from_estimator(\n clf, X, cmap=cm, alpha=0.8, ax=ax, eps=0.5\n )\n\n # Plot the training points\n ax.scatter(\n X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright, edgecolors=\"k\"\n )\n # Plot the testing points\n ax.scatter(\n X_test[:, 0],\n X_test[:, 1],\n c=y_test,\n cmap=cm_bright,\n edgecolors=\"k\",\n alpha=0.6,\n )\n \n X1 = All_Value_Response.xx0.ravel()\n Y1 = All_Value_Response.xx1.ravel()\n Color = All_Value_Response.response.ravel()\n \n Outputs = []\n \n for X2, Y2 in X:\n XD = X2 - X1\n YD = Y2 - Y1\n Distance = (XD * XD) + (YD * YD)\n Color_Gradient = Color[Distance.argmin()]\n Outputs.append(Color_Gradient)\n \n print(Outputs)\n ax.set_xlim(x_min, x_max)\n 
ax.set_ylim(y_min, y_max)\n ax.set_xticks(())\n ax.set_yticks(())\n if ds_cnt == 0:\n ax.set_title(name)\n ax.text(\n x_max - 0.3,\n y_min + 0.3,\n (\"%.2f\" % score).lstrip(\"0\"),\n size=15,\n horizontalalignment=\"right\",\n )\n i += 1\n\nplt.tight_layout()\nplt.show()\n\nI have found a reference in of the question that you have asked : https://stackoverflow.com/a/74613354/4948889\nThe output of the above code is something like the below:\n\n" ]
[ 1, 1, 1 ]
[]
[]
[ "python", "python_3.x", "scikit_learn" ]
stackoverflow_0074577545_python_python_3.x_scikit_learn.txt
Q: Explanation for Linked List in Python I have started learning linked list in python and after going through a lot material for leaning linked list. I found out that linked list is made of nodes(each node has two values namely the data and a address) and the first node is called the HEAD node and last node points towards the None value showing it to be the end of linked list. A linked list is formed when the previous node contains the address of the next node. In python to achieve connecting one node to another we simply assign an object to the head variable. The below code is an example class Node: def __init__(self,data): self.data = data self.link = None class SingleLinkedList: def __init__(self): self.head = None def display(self): if self.head is None: print("Its a Empty Linked list") else: temp = self.head while temp: print(temp,"-->"end='') temp = temp.link L = SingleLinkedList() n = Node(10) L.head = n n1 = Node(20) L.head.link = n1 n2 =Node(30) n1.link = n2 L.display() My question is that in the linked list the node's link value(also known as reference/address value)should be contain the address of the next node, how is that achieved by the 18th line(L.head = n) of code My assumption is that when we assign an object to a variable(head) we are actually assigning the address of that object to the variable. I would like to know whether my assumption is correct or wrong and if its wrong why is it wrong can someone explain the flow of the code shown in the question. Thanks in Advance A: when we assign an object to a variable(head) we are actually assigning the address of that object to the variable. I would like to know whether my assumption is correct or wrong and if its wrong why is it wrong. This would be true for other -- pointer-based -- languages, like C, but Python does not actually give you pointers, not even memory addresses. Python has a layer of abstraction whereby all values are objects, and names can refer to them, and several names can refer to the same object. That's all that is really relevant here, although Python has identifiers for objects (see the id function). can someone explain the flow of the code shown in the question. The flow can be analysed by using a debugger with which you can step through the code, set breakpoints and inspect values. Here is a rough overview of what happens: L = SingleLinkedList() This calls the constructor of the SingleLinkedList class: __init__ gets executed, which initialises the attributes of the new object, and L is a name given to that object: ┌────────────┐ L:─┤ head: None │ └────────────┘ The next statement n = Node(10) will call the constructor of the Node class, with the data parameter as a name for 10. The new object gets its data and link attributes initialised, and n becomes a name for it: ┌────────────┐ L:─┤ head: None │ └────────────┘ ┌────────────┐ n:─┤ data: 10 │ │ link: None │ └────────────┘ The link is made with the assignment: L.head = n ┌────────────┐ L:─┤ head: ─┐ │ └────────│───┘ │ ┌────────┴───┐ n:─┤ data: 10 │ │ link: None │ └────────────┘ So now both the n name and the L.head attribute reference the same Node object. 
The next statement, n2 =Node(30) creates another Node object: ┌────────────┐ L:─┤ head: ─┐ │ └────────│───┘ │ ┌────────┴───┐ ┌────────────┐ n:─┤ data: 10 │ n2:─┤ data: 30 │ │ link: None │ │ link: None │ └────────────┘ └────────────┘ And n1.link = n2 establishes the link, mutating the n1 object: ┌────────────┐ L:─┤ head: ─┐ │ └────────│───┘ │ ┌────────┴───┐ ┌────────────┐ n:─┤ data: 10 │ n2:─┤ data: 30 │ │ link: ────────────┤ link: None │ └────────────┘ └────────────┘ The final call of display, will define temp to be what self.head is (with self being L): ┌────────────┐ L:─┤ head: ─┐ │ └────────│───┘ temp:─┐ │ ┌──────┴─┴───┐ ┌────────────┐ n:─┤ data: 10 │ n2:─┤ data: 30 │ │ link: ────────────┤ link: None │ └────────────┘ └────────────┘ ..and in the loop temp = temp.link will make temp reference the second node: ┌────────────┐ L:─┤ head: ─┐ │ └────────│───┘ │ temp:─┐ ┌────────┴───┐ ┌─────┴──────┐ n:─┤ data: 10 │ n2:─┤ data: 30 │ │ link: ────────────┤ link: None │ └────────────┘ └────────────┘ In a final iteration temp will get the value None, finishing the display call. I hope this clarifies it.
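As a supplement to the answer, here is a corrected, runnable sketch of the question's classes (the original display method has a missing comma before end='' and prints the Node object rather than its data); the traversal mirrors the reference diagram above:
class Node:
    def __init__(self, data):
        self.data = data
        self.link = None

class SingleLinkedList:
    def __init__(self):
        self.head = None

    def display(self):
        if self.head is None:
            print("It's an empty linked list")
            return
        temp = self.head
        while temp:
            print(temp.data, end=" --> ")  # print the payload, not the Node object
            temp = temp.link
        print("None")                      # mark the end of the list

L = SingleLinkedList()
n = Node(10)
L.head = n         # head and n now refer to the same Node object
n1 = Node(20)
L.head.link = n1   # equivalent to n.link = n1: both names point at the first node
n2 = Node(30)
n1.link = n2
L.display()        # 10 --> 20 --> 30 --> None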
Explanation for Linked List in Python
I have started learning linked list in python and after going through a lot material for leaning linked list. I found out that linked list is made of nodes(each node has two values namely the data and a address) and the first node is called the HEAD node and last node points towards the None value showing it to be the end of linked list. A linked list is formed when the previous node contains the address of the next node. In python to achieve connecting one node to another we simply assign an object to the head variable. The below code is an example class Node: def __init__(self,data): self.data = data self.link = None class SingleLinkedList: def __init__(self): self.head = None def display(self): if self.head is None: print("Its a Empty Linked list") else: temp = self.head while temp: print(temp,"-->"end='') temp = temp.link L = SingleLinkedList() n = Node(10) L.head = n n1 = Node(20) L.head.link = n1 n2 =Node(30) n1.link = n2 L.display() My question is that in the linked list the node's link value(also known as reference/address value)should be contain the address of the next node, how is that achieved by the 18th line(L.head = n) of code My assumption is that when we assign an object to a variable(head) we are actually assigning the address of that object to the variable. I would like to know whether my assumption is correct or wrong and if its wrong why is it wrong can someone explain the flow of the code shown in the question. Thanks in Advance
[ "\nwhen we assign an object to a variable(head) we are actually assigning the address of that object to the variable. I would like to know whether my assumption is correct or wrong and if its wrong why is it wrong.\n\nThis would be true for other -- pointer-based -- languages, like C, but Python does not actually give you pointers, not even memory addresses. Python has a layer of abstraction whereby all values are objects, and names can refer to them, and several names can refer to the same object. That's all that is really relevant here, although Python has identifiers for objects (see the id function).\n\ncan someone explain the flow of the code shown in the question.\n\nThe flow can be analysed by using a debugger with which you can step through the code, set breakpoints and inspect values. Here is a rough overview of what happens:\nL = SingleLinkedList()\n\nThis calls the constructor of the SingleLinkedList class: __init__ gets executed, which initialises the attributes of the new object, and L is a name given to that object:\n ┌────────────┐\nL:─┤ head: None │\n └────────────┘\n\nThe next statement n = Node(10) will call the constructor of the Node class, with the data parameter as a name for 10. The new object gets its data and link attributes initialised, and n becomes a name for it:\n ┌────────────┐\nL:─┤ head: None │\n └────────────┘\n\n ┌────────────┐\nn:─┤ data: 10 │\n │ link: None │\n └────────────┘\n\nThe link is made with the assignment: L.head = n\n ┌────────────┐\nL:─┤ head: ─┐ │\n └────────│───┘\n │\n ┌────────┴───┐\nn:─┤ data: 10 │\n │ link: None │\n └────────────┘\n\nSo now both the n name and the L.head attribute reference the same Node object.\nThe next statement, n2 =Node(30) creates another Node object:\n ┌────────────┐\nL:─┤ head: ─┐ │\n └────────│───┘\n │\n ┌────────┴───┐ ┌────────────┐\nn:─┤ data: 10 │ n2:─┤ data: 30 │\n │ link: None │ │ link: None │\n └────────────┘ └────────────┘\n\nAnd n1.link = n2 establishes the link, mutating the n1 object:\n ┌────────────┐\nL:─┤ head: ─┐ │\n └────────│───┘\n │\n ┌────────┴───┐ ┌────────────┐\nn:─┤ data: 10 │ n2:─┤ data: 30 │\n │ link: ────────────┤ link: None │\n └────────────┘ └────────────┘\n\nThe final call of display, will define temp to be what self.head is (with self being L):\n ┌────────────┐\nL:─┤ head: ─┐ │\n └────────│───┘\n temp:─┐ │\n ┌──────┴─┴───┐ ┌────────────┐\nn:─┤ data: 10 │ n2:─┤ data: 30 │\n │ link: ────────────┤ link: None │\n └────────────┘ └────────────┘\n\n..and in the loop temp = temp.link will make temp reference the second node:\n ┌────────────┐\nL:─┤ head: ─┐ │\n └────────│───┘\n │ temp:─┐ \n ┌────────┴───┐ ┌─────┴──────┐\nn:─┤ data: 10 │ n2:─┤ data: 30 │\n │ link: ────────────┤ link: None │\n └────────────┘ └────────────┘\n\nIn a final iteration temp will get the value None, finishing the display call. I hope this clarifies it.\n" ]
[ 1 ]
[ "So linked list has this concept of Start pointer, commonly referred to as Head.\nThe head points to the first node of the linkedlist.\nSo when you say,\nL = SingleLinkedList()\n\nYou essentially create a linked list whose head pointer is still set to None.\nIn the next step you create a node :\nn = Node(10)\n\nAnd then in the next step you make the head pointer of your linked list point to node n1 :\nn = Node(10)\nL.head = n\n\nFurther on you just go on associating n1 to n2 :\nn2 =Node(30)\nn1.link = n2\nL.display()\n\nAt the end you have the LinkedList.\n" ]
[ -1 ]
[ "data_structures", "linked_list", "python", "python_3.x" ]
stackoverflow_0074609360_data_structures_linked_list_python_python_3.x.txt
Q: Python 3 SSL cannot grab certificate I'm trying to have a simple function collect certificates from servers. I'm using Python 3.10.8 and my code looks something like this:
import ssl

def certgrab(dom):
    address = (dom, 443)
    try:
        f = ssl.get_server_certificate(address)
    except Exception as clanger:
        return {'clanger': clanger}
    print(f)

This is fine when I try it against 'google.com' or 'microsoft.com'. But most websites return the following error: {'clanger': ConnectionRefusedError(10061, 'No connection could be made because the target machine actively refused it', None, 10061, None)}. I was wondering if it was a rejection because the sites don't like the user-agent (requests works fine with everything I test against, but obviously cannot grab the cert (unless it secretly can - which would be great!)), but I cannot find a way of specifying one in the SSL library. I'm at a bit of a loss, as it works against 'google.com' and 'microsoft.com' (but then I suppose they may have set their sites to be generous / forgiving regarding what types of connections they support).
A: ConnectionRefusedError(10061, 'No connection could be made because the target machine actively refused it'
This has nothing to do with certificates, not even with TLS. This is a connection error at the TCP level, i.e. even before any TLS and certificates are in effect.
But most websites return the following error ...
If this is really "most websites" then you might have serious problems in your infrastructure which limit access to large parts of the internet. Or, you might need to use a proxy - but ssl.get_server_certificate does not support a proxy.
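Building on the answer's point that the failure happens at the TCP layer, a sketch (not from the original thread) that separates the plain TCP step from the TLS step, so a refused connection is reported distinctly from certificate or handshake problems:
import socket
import ssl

def certgrab(dom, port=443, timeout=10):
    try:
        # Plain TCP first: a ConnectionRefusedError here has nothing to do with TLS
        with socket.create_connection((dom, port), timeout=timeout):
            pass
    except OSError as clanger:
        return {'clanger': f'TCP connect failed: {clanger}'}
    try:
        # Only attempted once the TCP connection is known to work
        return {'pem': ssl.get_server_certificate((dom, port))}
    except ssl.SSLError as clanger:
        return {'clanger': f'TLS handshake failed: {clanger}'}

print(certgrab('google.com'))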
Python 3 SSL cannot grab certificate
I'm trying to have a simple function collect certificates from servers. Using Python 3.10.8 and my code looks something this: import ssl def certgrab(dom): address = (dom, 443) try: f = ssl.get_server_certificate(address) except Exception as clanger: return {'clanger': clanger} print(f) This is fine when I try it against 'google.com' or 'microsoft.com'. But most websites return the following error:{'clanger': ConnectionRefusedError(10061, 'No connection could be made because the target machine actively refused it', None, 10061, None)}. I was wondering if it was a rejection because the sites don't like the user-agent (requests works fine with everything I test against, but obviously cannot grab the cert (unless it secretly can - which would be great!). But I cannot find a way of specifying one in the SSL library. I'm at a bit of a loss as it works against 'google.com' and 'microsoft.com' (but then I suppose they may have set their sites to be generous / forgiving regarding what types of connections they support).
[ "\nConnectionRefusedError(10061, 'No connection could be made because the target machine actively refused it'\n\nThis has nothing to do with certificates, not even with TLS. This is a connection error at the TCP level, i.e. even before any TLS and certificates are in effect.\n\nBut most websites return the following error ...\n\nIf this is really \"most websites\" then you might have serious problems in your infrastructure which limit access to large parts of the internet. Or, you might need to use a proxy - but ssl.get_server_certificate does not support a proxy.\n" ]
[ 0 ]
[]
[]
[ "certificate", "connection_refused", "python", "ssl" ]
stackoverflow_0074613540_certificate_connection_refused_python_ssl.txt
Q: Kivy settings panel removes text property value from button widgets I have come across an issue with the Kivy settings panel: when I open and close the panel, the text properties of my button widgets are cleared, even though they still display correctly. The following code demonstrates the issue:
from kivy.app import App
from kivy.uix.button import Button

class TestApp(App):
    def build(self):
        widget = Button(text='Hello World')
        widget.bind(on_release=self._button_callback)
        return widget

    def _button_callback(self, button):
        debug_string = f'{hex(id(button))}:{type(button.text)}:{button.text}'
        print(debug_string)

if __name__=='__main__':
    app = TestApp()
    app.run()

When the program is run, a window with a single button is displayed. The display text is "Hello World"; clicking on this button produces the following:
0x172aba17ed0:<class 'str'>:Hello World
If I then press F1 and open the default Kivy settings panel, then close the panel (making no changes to the settings options) and press the button again, I get the following:
0x172aba17ed0:<class 'str'>:
This seems a bit strange to me: why has the text property of the button been overwritten? The text displayed on the button remains consistent. This example was run using Python 3.10.4 (64-bit) and Kivy 2.1.0.
A: Missing import of the class SettingsWithSpinner from kivy.uix.settings (related to "python settings freezes GUI"). Without the import, no errors were reported at runtime and the bug is present; including the import seems to resolve the issue.
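A sketch of how the fix from the answer could look in practice, assuming the class meant is kivy.uix.settings.SettingsWithSpinner and using the App.settings_cls hook to select it explicitly; this is an illustration of the workaround, not code from the original thread:
from kivy.app import App
from kivy.uix.button import Button
from kivy.uix.settings import SettingsWithSpinner  # explicitly import a settings class

class TestApp(App):
    settings_cls = SettingsWithSpinner  # use the spinner-based panel instead of the default

    def build(self):
        widget = Button(text='Hello World')
        widget.bind(on_release=self._button_callback)
        return widget

    def _button_callback(self, button):
        print(f'{hex(id(button))}:{type(button.text)}:{button.text}')

if __name__ == '__main__':
    TestApp().run()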
Kivy settings panel removes text property value from button widgets
I have come across an issue with the Kivy settings panel, when I open and close the panel, the text properties of my button widgets are cleared, even though they still display correctly. The following code demonstrates the issue: from kivy.app import App from kivy.uix.button import Button class TestApp(App): def build(self): widget = Button(text='Hello World') widget.bind(on_release=self._button_callback) return widget def _button_callback(self, button): debug_string = f'{hex(id(button))}:{type(button.text)}:{button.text}' print(debug_string) if __name__=='__main__': app = TestApp() app.run() When the program is run a window with a single button is displayed. The display text is "Hello World", clicking on this button produces the following 0x172aba17ed0:<class 'str'>:Hello World If I then press F1 and open the default Kivy setting panel, then close the panel (making no changes to the settings options) and press the button again, I get the following: 0x172aba17ed0:<class 'str'>: This seems a bit strange to me, why has the text property of the button been overwritten ? The text displayed on the button remains consistent. This example was run using Python 3.10.4 (64bit) and Kivy 2.1.0
[ "Missing import of class SettingWithSpinner from kivy.uix.settings, related to python settings freezes GUI.\nWithout the import, no errors where reported at runtime and the bug is present, including the import seems to resolve the issue.\n" ]
[ 0 ]
[]
[]
[ "kivy", "python" ]
stackoverflow_0074612754_kivy_python.txt
Q: How to flatten string column in pyspark?
a                                                                                                                                                          b
[{'npi': [1013006469, 1003263552], 'tin': {'type': 'npi', 'value': '1013006469'}}, {'npi': [1487607883], 'tin': {'type': 'npi', 'value': '1487607883'}}]  0
[{'npi': [1275086126], 'tin': {'type': 'npi', 'value': '1275086126'}}, {'npi': [1285698381], 'tin': {'type': 'npi', 'value': '1285698381'}}]              2

Above is the input dataframe, from which I want to flatten column 'a', which is in the form of a string. I want the following output:
a_npi       a_tin_type  a_tin_value  b
1013006469  npi         1013006469   0
1003263552  npi         1013006469   0
1487607883  npi         1487607883   0
1275086126  npi         1275086126   2
1285698381  npi         1285698381   2

Following is the code that I have, but it returns all null values:
inp_sch = spark.read.json(df.select(col('a').alias('jsonbody')).rdd.map(lambda x: x.jsonbody)).schema
inp_json = df.select('*', from_json('a', inp_sch).alias('jsonstr'))
A: The from_json function in pyspark will get it done for the tin struct.
Example
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType

schema = StructType(
    [
        StructField('col1', StringType(), True),
        StructField('col2', StringType(), True)
    ]
)

df.withColumn("data", from_json("data", schema))\
    .select(col('data.*'))\
    .show()

and for the first column with the Array, use the explode() function similarly.
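A fuller sketch along the lines of the answer, assuming df is the asker's DataFrame with a JSON-string column 'a' shaped like the sample rows (an array of structs, each with an 'npi' array and a 'tin' struct) and an integer column 'b'; Spark's default JSON options are assumed to accept the single-quoted keys shown in the sample:
from pyspark.sql.functions import from_json, explode, col
from pyspark.sql.types import (ArrayType, StructType, StructField,
                               StringType, LongType)

# Schema matching the sample value of column 'a'
schema = ArrayType(StructType([
    StructField('npi', ArrayType(LongType()), True),
    StructField('tin', StructType([
        StructField('type', StringType(), True),
        StructField('value', StringType(), True),
    ]), True),
]))

flat = (df
        .withColumn('a', from_json(col('a'), schema))
        .withColumn('a', explode('a'))                # one row per struct in the array
        .withColumn('a_npi', explode(col('a.npi')))   # one row per npi value
        .select('a_npi',
                col('a.tin.type').alias('a_tin_type'),
                col('a.tin.value').alias('a_tin_value'),
                'b'))

flat.show(truncate=False)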
How to flatten string column in pyspark?
a b [{'npi': [1013006469, 1003263552], 'tin': {'type': 'npi', 'value': '1013006469'}}, {'npi': [1487607883], 'tin': {'type': 'npi', 'value': '1487607883'}}] 0 [{'npi': [1275086126], 'tin': {'type': 'npi', 'value': '1275086126'}}, {'npi': [1285698381], 'tin': {'type': 'npi', 'value': '1285698381'}}] 2 above is input dataframe from which I want to flatten 'a' column which is in the form of string. I want following output a_npi a_tin_type a_tin_value b 1013006469 npi 1013006469 0 1003263552 npi 1013006469 0 1487607883 npi 1487607883 0 1275086126 npi 1275086126 2 1285698381 npi 1285698381 2 following is code that I've but it returns all null values inp_sch = spark.read.json(df.select(col('a').alias('jsonbody')).rdd.map(lambda x: x.jsonbody)).schema inp_json = df.select('*', from_json('a', inp_sch).alias('jsonstr'))
[ "from_json function for tin in pyspark will get it done.\nExample\nfrom pyspark.sql.functions import from_json, col\nfrom pyspark.sql.types import StructType, StructField, StringType\n\nschema = StructType(\n [\n StructField('col1', StringType(), True),\n StructField('col2', StringType(), True)\n ]\n)\n\ndf.withColumn(\"data\", from_json(\"data\", schema))\\\n .select(col('data.*'))\\\n .show()\n\nand the first column with Array, use explode() function similarly.\n" ]
[ 0 ]
[]
[]
[ "apache_spark_sql", "flatten", "pyspark", "python" ]
stackoverflow_0074611190_apache_spark_sql_flatten_pyspark_python.txt
Q: Invalid literal for int() when trying to load pandas I am using Spyder 5.4.0 with Miniconda3. I have created a new Python environment using conda create -n env_full anaconda, then successfully activated in Spyder (using packages like numpy or matplotlib), but when I try to import pandas, I get: File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\numexpr\utils.py", line 156, in _init_num_threads requested_threads = int(os.environ['OMP_NUM_THREADS']) ValueError: invalid literal for int() with base 10: '5,3,2' It does not happen in base environment. Any idea how to fix it? The full console output: import pandas Traceback (most recent call last): File "C:\Users\User\AppData\Local\Temp\ipykernel_25804\3648170110.py", line 1, in <module> import pandas File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\pandas\__init__.py", line 48, in <module> from pandas.core.api import ( File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\pandas\core\api.py", line 27, in <module> from pandas.core.arrays import Categorical File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\pandas\core\arrays\__init__.py", line 1, in <module> from pandas.core.arrays.arrow import ArrowExtensionArray File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\pandas\core\arrays\arrow\__init__.py", line 1, in <module> from pandas.core.arrays.arrow.array import ArrowExtensionArray File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\pandas\core\arrays\arrow\array.py", line 42, in <module> from pandas.core.arraylike import OpsMixin File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\pandas\core\arraylike.py", line 23, in <module> from pandas.core.ops.common import unpack_zerodim_and_defer File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\pandas\core\ops\__init__.py", line 33, in <module> from pandas.core.ops.array_ops import ( File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\pandas\core\ops\array_ops.py", line 50, in <module> import pandas.core.computation.expressions as expressions File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\pandas\core\computation\expressions.py", line 20, in <module> from pandas.core.computation.check import NUMEXPR_INSTALLED File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\pandas\core\computation\check.py", line 5, in <module> ne = import_optional_dependency("numexpr", errors="warn") File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\pandas\compat\_optional.py", line 141, in import_optional_dependency module = importlib.import_module(name) File "C:\ProgramData\Miniconda3\envs\env_full\lib\importlib\__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\numexpr\__init__.py", line 44, in <module> nthreads = _init_num_threads() File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\numexpr\utils.py", line 156, in _init_num_threads requested_threads = int(os.environ['OMP_NUM_THREADS']) ValueError: invalid literal for int() with base 10: '5,3,2' A: Well the error states that os.environ['OMP_NUM_THREADS'] evaluates to '5,3,2' which is a string of multiple numbers. This cannot be converted into an integer. Check out this documentation. So your virtual environment sets different thread numbers for different 'nesting depths'. I think you can set it to a single number yourself with export OMP_NUM_THREADS='5' before you import pandas.
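Since the traceback shows numexpr calling int(os.environ['OMP_NUM_THREADS']), one workaround (a sketch complementing the answer, not part of the original thread) is to set the variable to a single number before pandas, and hence numexpr, is imported; on Windows the shell equivalent would be set OMP_NUM_THREADS=5 rather than export:
# Run this at the very top of the script or console session,
# before pandas/numexpr are imported for the first time.
import os
os.environ['OMP_NUM_THREADS'] = '5'   # any single integer; '5,3,2' is what breaks numexpr

import pandas as pd
print(pd.__version__)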
Invalid literal for int() when trying to load pandas
I am using Spyder 5.4.0 with Miniconda3. I have created a new Python environment using conda create -n env_full anaconda, then successfully activated in Spyder (using packages like numpy or matplotlib), but when I try to import pandas, I get: File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\numexpr\utils.py", line 156, in _init_num_threads requested_threads = int(os.environ['OMP_NUM_THREADS']) ValueError: invalid literal for int() with base 10: '5,3,2' It does not happen in base environment. Any idea how to fix it? The full console output: import pandas Traceback (most recent call last): File "C:\Users\User\AppData\Local\Temp\ipykernel_25804\3648170110.py", line 1, in <module> import pandas File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\pandas\__init__.py", line 48, in <module> from pandas.core.api import ( File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\pandas\core\api.py", line 27, in <module> from pandas.core.arrays import Categorical File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\pandas\core\arrays\__init__.py", line 1, in <module> from pandas.core.arrays.arrow import ArrowExtensionArray File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\pandas\core\arrays\arrow\__init__.py", line 1, in <module> from pandas.core.arrays.arrow.array import ArrowExtensionArray File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\pandas\core\arrays\arrow\array.py", line 42, in <module> from pandas.core.arraylike import OpsMixin File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\pandas\core\arraylike.py", line 23, in <module> from pandas.core.ops.common import unpack_zerodim_and_defer File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\pandas\core\ops\__init__.py", line 33, in <module> from pandas.core.ops.array_ops import ( File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\pandas\core\ops\array_ops.py", line 50, in <module> import pandas.core.computation.expressions as expressions File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\pandas\core\computation\expressions.py", line 20, in <module> from pandas.core.computation.check import NUMEXPR_INSTALLED File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\pandas\core\computation\check.py", line 5, in <module> ne = import_optional_dependency("numexpr", errors="warn") File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\pandas\compat\_optional.py", line 141, in import_optional_dependency module = importlib.import_module(name) File "C:\ProgramData\Miniconda3\envs\env_full\lib\importlib\__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\numexpr\__init__.py", line 44, in <module> nthreads = _init_num_threads() File "C:\ProgramData\Miniconda3\envs\env_full\lib\site-packages\numexpr\utils.py", line 156, in _init_num_threads requested_threads = int(os.environ['OMP_NUM_THREADS']) ValueError: invalid literal for int() with base 10: '5,3,2'
[ "Well the error states that os.environ['OMP_NUM_THREADS'] evaluates to '5,3,2' which is a string of multiple numbers. This cannot be converted into an integer.\nCheck out this documentation. So your virtual environment sets different thread numbers for different 'nesting depths'. I think you can set it to a single number yourself with export OMP_NUM_THREADS='5' before you import pandas.\n" ]
[ 0 ]
[]
[]
[ "multithreading", "pandas", "python", "spyder" ]
stackoverflow_0074614058_multithreading_pandas_python_spyder.txt
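A minimal sketch of the fix suggested above, for the Spyder/Windows setup in the question where a shell export is not convenient: the variable only has to hold a single integer (5 here is arbitrary), and it has to be set, or removed, before the first import of pandas in the session so that numexpr reads a clean value.

    import os

    # numexpr parses OMP_NUM_THREADS at import time, so this must run before
    # the first "import pandas" of the session (restart the kernel first)
    os.environ["OMP_NUM_THREADS"] = "5"   # any single integer; the nested "5,3,2" is what broke it
    # or drop the variable entirely and let numexpr pick its own default:
    # os.environ.pop("OMP_NUM_THREADS", None)

    import pandas as pd
    print(pd.__version__)

If the value keeps coming back after a restart, it may be the conda environment itself that sets the nested "5,3,2"; running conda env config vars list with env_full active shows any variables attached to that environment.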
Q: KMeans Attribute Error: 'NoneType' object has no attribute 'split' The KMeans code was working before but now it's not. The change I made was "pip install scikit-image" which I think changed numpy 1.18.5 to numpy 1.22.3 . But then I changed numpy back to 1.18.5 by doing -m pip install numpy==1.18.5 --user . And this didn't fix the issue. Any ideas what else it could be? Also, I don't remember why I had to install scikit-image (again?). Is there anyway to tell which module is incompatible with the KMeans code I'm using? KMeans is from sklearn.cluster AttributeError Traceback (most recent call last) <timed exec> in <module> ~\Documents\UCSDproject\Interactive Framework\Framework_functions_modified.py in cluster_imgs(num_clusters) 110 kmodel = KMeans(n_clusters = k, random_state=728) 111 kmodel.fit(pred_images) #removed n_jobs=-1 b/c no longer kmeans feature (1/14/22) --> 112 kpredictions = kmodel.predict(pred_images) 113 shutil.rmtree(r"C:\Users\User\Documents\UCSDproject\Interactive Framework\Framework_clustered_imgs") 114 ~\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py in predict(self, X, sample_weight) 1332 sample_weight = _check_sample_weight(sample_weight, X, dtype=X.dtype) 1333 -> 1334 return _labels_inertia_threadpool_limit( 1335 X, sample_weight, x_squared_norms, self.cluster_centers_, self._n_threads 1336 )[0] ~\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py in _labels_inertia_threadpool_limit(X, sample_weight, x_squared_norms, centers, n_threads) 753 ): 754 """Same as _labels_inertia but in a threadpool_limits context.""" --> 755 with threadpool_limits(limits=1, user_api="blas"): 756 labels, inertia = _labels_inertia( 757 X, sample_weight, x_squared_norms, centers, n_threads ~\anaconda3\lib\site-packages\sklearn\utils\fixes.py in threadpool_limits(limits, user_api) 312 return controller.limit(limits=limits, user_api=user_api) 313 else: --> 314 return threadpoolctl.threadpool_limits(limits=limits, user_api=user_api) 315 316 ~\anaconda3\lib\site-packages\threadpoolctl.py in __init__(self, limits, user_api) 169 self._check_params(limits, user_api) 170 --> 171 self._original_info = self._set_threadpool_limits() 172 173 def __enter__(self): ~\anaconda3\lib\site-packages\threadpoolctl.py in _set_threadpool_limits(self) 266 return None 267 --> 268 modules = _ThreadpoolInfo(prefixes=self._prefixes, 269 user_api=self._user_api) 270 for module in modules: ~\anaconda3\lib\site-packages\threadpoolctl.py in __init__(self, user_api, prefixes, modules) 338 339 self.modules = [] --> 340 self._load_modules() 341 self._warn_if_incompatible_openmp() 342 else: ~\anaconda3\lib\site-packages\threadpoolctl.py in _load_modules(self) 371 self._find_modules_with_dyld() 372 elif sys.platform == "win32": --> 373 self._find_modules_with_enum_process_module_ex() 374 else: 375 self._find_modules_with_dl_iterate_phdr() ~\anaconda3\lib\site-packages\threadpoolctl.py in _find_modules_with_enum_process_module_ex(self) 483 484 # Store the module if it is supported and selected --> 485 self._make_module_from_path(filepath) 486 finally: 487 kernel_32.CloseHandle(h_process) ~\anaconda3\lib\site-packages\threadpoolctl.py in _make_module_from_path(self, filepath) 513 if prefix in self.prefixes or user_api in self.user_api: 514 module_class = globals()[module_class] --> 515 module = module_class(filepath, prefix, user_api, internal_api) 516 self.modules.append(module) 517 ~\anaconda3\lib\site-packages\threadpoolctl.py in __init__(self, filepath, prefix, user_api, internal_api) 604 self.internal_api = 
internal_api 605 self._dynlib = ctypes.CDLL(filepath, mode=_RTLD_NOLOAD) --> 606 self.version = self.get_version() 607 self.num_threads = self.get_num_threads() 608 self._get_extra_info() ~\anaconda3\lib\site-packages\threadpoolctl.py in get_version(self) 644 lambda: None) 645 get_config.restype = ctypes.c_char_p --> 646 config = get_config().split() 647 if config[0] == b"OpenBLAS": 648 return config[1].decode("utf-8") AttributeError: 'NoneType' object has no attribute 'split' A: seems like fixed an issue by importing a specific version of numpy import numpy numpy.__version__ '1.21.4' make sure, you not importing import numpy as np afterwards A: upgrading this: pip install -U threadpoolctl solved the prb for me.
KMeans Attribute Error: 'NoneType' object has no attribute 'split'
The KMeans code was working before but now it's not. The change I made was "pip install scikit-image" which I think changed numpy 1.18.5 to numpy 1.22.3 . But then I changed numpy back to 1.18.5 by doing -m pip install numpy==1.18.5 --user . And this didn't fix the issue. Any ideas what else it could be? Also, I don't remember why I had to install scikit-image (again?). Is there anyway to tell which module is incompatible with the KMeans code I'm using? KMeans is from sklearn.cluster AttributeError Traceback (most recent call last) <timed exec> in <module> ~\Documents\UCSDproject\Interactive Framework\Framework_functions_modified.py in cluster_imgs(num_clusters) 110 kmodel = KMeans(n_clusters = k, random_state=728) 111 kmodel.fit(pred_images) #removed n_jobs=-1 b/c no longer kmeans feature (1/14/22) --> 112 kpredictions = kmodel.predict(pred_images) 113 shutil.rmtree(r"C:\Users\User\Documents\UCSDproject\Interactive Framework\Framework_clustered_imgs") 114 ~\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py in predict(self, X, sample_weight) 1332 sample_weight = _check_sample_weight(sample_weight, X, dtype=X.dtype) 1333 -> 1334 return _labels_inertia_threadpool_limit( 1335 X, sample_weight, x_squared_norms, self.cluster_centers_, self._n_threads 1336 )[0] ~\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py in _labels_inertia_threadpool_limit(X, sample_weight, x_squared_norms, centers, n_threads) 753 ): 754 """Same as _labels_inertia but in a threadpool_limits context.""" --> 755 with threadpool_limits(limits=1, user_api="blas"): 756 labels, inertia = _labels_inertia( 757 X, sample_weight, x_squared_norms, centers, n_threads ~\anaconda3\lib\site-packages\sklearn\utils\fixes.py in threadpool_limits(limits, user_api) 312 return controller.limit(limits=limits, user_api=user_api) 313 else: --> 314 return threadpoolctl.threadpool_limits(limits=limits, user_api=user_api) 315 316 ~\anaconda3\lib\site-packages\threadpoolctl.py in __init__(self, limits, user_api) 169 self._check_params(limits, user_api) 170 --> 171 self._original_info = self._set_threadpool_limits() 172 173 def __enter__(self): ~\anaconda3\lib\site-packages\threadpoolctl.py in _set_threadpool_limits(self) 266 return None 267 --> 268 modules = _ThreadpoolInfo(prefixes=self._prefixes, 269 user_api=self._user_api) 270 for module in modules: ~\anaconda3\lib\site-packages\threadpoolctl.py in __init__(self, user_api, prefixes, modules) 338 339 self.modules = [] --> 340 self._load_modules() 341 self._warn_if_incompatible_openmp() 342 else: ~\anaconda3\lib\site-packages\threadpoolctl.py in _load_modules(self) 371 self._find_modules_with_dyld() 372 elif sys.platform == "win32": --> 373 self._find_modules_with_enum_process_module_ex() 374 else: 375 self._find_modules_with_dl_iterate_phdr() ~\anaconda3\lib\site-packages\threadpoolctl.py in _find_modules_with_enum_process_module_ex(self) 483 484 # Store the module if it is supported and selected --> 485 self._make_module_from_path(filepath) 486 finally: 487 kernel_32.CloseHandle(h_process) ~\anaconda3\lib\site-packages\threadpoolctl.py in _make_module_from_path(self, filepath) 513 if prefix in self.prefixes or user_api in self.user_api: 514 module_class = globals()[module_class] --> 515 module = module_class(filepath, prefix, user_api, internal_api) 516 self.modules.append(module) 517 ~\anaconda3\lib\site-packages\threadpoolctl.py in __init__(self, filepath, prefix, user_api, internal_api) 604 self.internal_api = internal_api 605 self._dynlib = ctypes.CDLL(filepath, mode=_RTLD_NOLOAD) 
--> 606 self.version = self.get_version() 607 self.num_threads = self.get_num_threads() 608 self._get_extra_info() ~\anaconda3\lib\site-packages\threadpoolctl.py in get_version(self) 644 lambda: None) 645 get_config.restype = ctypes.c_char_p --> 646 config = get_config().split() 647 if config[0] == b"OpenBLAS": 648 return config[1].decode("utf-8") AttributeError: 'NoneType' object has no attribute 'split'
[ "seems like fixed an issue by importing a specific version of numpy\nimport numpy \nnumpy.__version__ \n'1.21.4' \n\nmake sure, you not importing\nimport numpy as np \n\nafterwards\n", "upgrading this:\npip install -U threadpoolctl\nsolved the prb for me.\n" ]
[ 0, 0 ]
[]
[]
[ "error_handling", "python" ]
stackoverflow_0072117354_error_handling_python.txt
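Since the traceback above bottoms out in threadpoolctl's OpenBLAS version probe, and one answer fixes it by upgrading threadpoolctl, a quick way to confirm the fix is to run that probe directly before going back to the real data. A small sketch; the random array is only a placeholder to exercise predict() on the same code path that failed.

    # run after: pip install -U threadpoolctl
    import threadpoolctl
    import numpy as np
    from sklearn.cluster import KMeans

    print(threadpoolctl.__version__)
    # the same library inspection that previously died with
    # "'NoneType' object has no attribute 'split'"
    print(threadpoolctl.threadpool_info())

    X = np.random.rand(100, 5)                       # placeholder data
    kmodel = KMeans(n_clusters=3, random_state=728).fit(X)
    print(kmodel.predict(X)[:10])

If threadpool_info() itself still raises, the problem sits in the threadpoolctl/BLAS pairing rather than in scikit-learn, numpy or the clustering code.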
Q: Django - How to call a function with arguments inside a template I have the following function-based view: def get_emails(request, HOST, USERNAME, PASSWORD): context = { 'FU_HOST': settings.FU_HOST, 'FU_USERNAME': settings.FU_USERNAME, 'FU_PASSWORD': settings.FU_PASSWORD, 'FV_HOST': settings.FV_HOST, 'FV_USERNAME': settings.FV_USERNAME, 'FV_PASSWORD': settings.FV_PASSWORD, 'USV_HOST': settings.USV_HOST, 'USV_USERNAME': settings.USV_USERNAME, 'USV_PASSWORD': settings.USV_PASSWORD, } m = imaplib.IMAP4_SSL(HOST, 993) m.login(USERNAME, PASSWORD) m.select('INBOX') result, data = m.uid('search', None, "ALL") if result == 'OK': for num in data[0].split(): result, data = m.uid('fetch', num, '(RFC822)') if result == 'OK': email_message_raw = email.message_from_bytes(data[0][1]) email_from = str(make_header(decode_header(email_message_raw['From']))) email_addr = email_from.replace('<', '>').split('>') if len(email_addr) > 1: new_entry = EmailMarketing(email_address=email_addr[1], mail_server='X') new_entry.save() else: new_entry = EmailMarketing(email_address=email_addr[0], mail_server='X') new_entry.save() m.close() m.logout() messages.success(request, f'Subscribers list sychronized successfully.') return redirect('subscribers') I'd like to place 3 buttons on my front-end that call this same function with different arguments each time, for example one button get_emails(FU_HOST, FU_USERNAME, FU_PASSWORD), the other button get_emails(USV_HOST, USV_USERNAME, USV_PASSWORD). How can one achieve this in Django? My credentials are stored in .env file. A: Something like this is not achieved by placing that function on your frontend template, what you need to be doing is to redirect the user to a view that contains that function and by extracting these values from the users request, because as you can see you've got methods that are hitting your Database, which isn't something you can achieve from the frontend nor is it safe if it were possible. A: From the context I can understand that the arguments that are passed to your function must be secure. So, go with a POST form like this: def get_emails(request): context = { 'FU_HOST': settings.FU_HOST, 'FU_USERNAME': settings.FU_USERNAME, 'FU_PASSWORD': settings.FU_PASSWORD, 'FV_HOST': settings.FV_HOST, 'FV_USERNAME': settings.FV_USERNAME, 'FV_PASSWORD': settings.FV_PASSWORD, 'USV_HOST': settings.USV_HOST, 'USV_USERNAME': settings.USV_USERNAME, 'USV_PASSWORD': settings.USV_PASSWORD, } if request.method == "POST": HOST = request.POST["HOST"] USERNAME = request.POST["USERNAME"] PASSWORD = request.POST["PASSWORD"] m = imaplib.IMAP4_SSL(HOST, 993) m.login(USERNAME, PASSWORD) m.select('INBOX') result, data = m.uid('search', None, "ALL") if result == 'OK': for num in data[0].split(): result, data = m.uid('fetch', num, '(RFC822)') if result == 'OK': email_message_raw = email.message_from_bytes(data[0][1]) email_from = str(make_header(decode_header(email_message_raw['From']))) email_addr = email_from.replace('<', '>').split('>') if len(email_addr) > 1: new_entry = EmailMarketing(email_address=email_addr[1], mail_server='X') new_entry.save() else: new_entry = EmailMarketing(email_address=email_addr[0], mail_server='X') new_entry.save() m.close() m.logout() messages.success(request, f'Subscribers list sychronized successfully.') return redirect('subscribers') In the template, make a post request, given the host, username and password like this. 
<form action="{% url 'name-of-your-view' %}" method="POST"> <input type="text" name="HOST"> <input type="text" name="USERNAME"> <input type="text" name="PASSWORD"> <input type="submit"> </form> A: For this scenario, if the form post method is not working for you, you can use javascript and ajax. Add the following codes to your html code to use the javascript method. <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.6.1/jquery.min.js"></script> <script> function post(_host,_username,_password){ $.ajax({ url: '{% url "url-name"%}', type: 'POST', data: { csrfmiddlewaretoken: "{{ csrf_token }}", host: _host, username = _username, password = _password }, success: function (res) { console.log(res); } }); } </script> Update the python function as follows. def get_emails(request): HOST = request.POST.get('host') USERNAME = request.POST.get('username') PASSWORD = request.POST.get('password') context = { 'FU_HOST': settings.FU_HOST, 'FU_USERNAME': settings.FU_USERNAME, 'FU_PASSWORD': settings.FU_PASSWORD, 'FV_HOST': settings.FV_HOST, 'FV_USERNAME': settings.FV_USERNAME, 'FV_PASSWORD': settings.FV_PASSWORD, 'USV_HOST': settings.USV_HOST, 'USV_USERNAME': settings.USV_USERNAME, 'USV_PASSWORD': settings.USV_PASSWORD, } m = imaplib.IMAP4_SSL(HOST, 993) m.login(USERNAME, PASSWORD) m.select('INBOX') result, data = m.uid('search', None, "ALL") if result == 'OK': for num in data[0].split(): result, data = m.uid('fetch', num, '(RFC822)') if result == 'OK': email_message_raw = email.message_from_bytes(data[0][1]) email_from = str(make_header(decode_header(email_message_raw['From']))) email_addr = email_from.replace('<', '>').split('>') if len(email_addr) > 1: new_entry = EmailMarketing(email_address=email_addr[1], mail_server='X') new_entry.save() else: new_entry = EmailMarketing(email_address=email_addr[0], mail_server='X') new_entry.save() m.close() m.logout() messages.success(request, f'Subscribers list sychronized successfully.') return redirect('subscribers') I hope it helps.
Django - How to call a function with arguments inside a template
I have the following function-based view: def get_emails(request, HOST, USERNAME, PASSWORD): context = { 'FU_HOST': settings.FU_HOST, 'FU_USERNAME': settings.FU_USERNAME, 'FU_PASSWORD': settings.FU_PASSWORD, 'FV_HOST': settings.FV_HOST, 'FV_USERNAME': settings.FV_USERNAME, 'FV_PASSWORD': settings.FV_PASSWORD, 'USV_HOST': settings.USV_HOST, 'USV_USERNAME': settings.USV_USERNAME, 'USV_PASSWORD': settings.USV_PASSWORD, } m = imaplib.IMAP4_SSL(HOST, 993) m.login(USERNAME, PASSWORD) m.select('INBOX') result, data = m.uid('search', None, "ALL") if result == 'OK': for num in data[0].split(): result, data = m.uid('fetch', num, '(RFC822)') if result == 'OK': email_message_raw = email.message_from_bytes(data[0][1]) email_from = str(make_header(decode_header(email_message_raw['From']))) email_addr = email_from.replace('<', '>').split('>') if len(email_addr) > 1: new_entry = EmailMarketing(email_address=email_addr[1], mail_server='X') new_entry.save() else: new_entry = EmailMarketing(email_address=email_addr[0], mail_server='X') new_entry.save() m.close() m.logout() messages.success(request, f'Subscribers list sychronized successfully.') return redirect('subscribers') I'd like to place 3 buttons on my front-end that call this same function with different arguments each time, for example one button get_emails(FU_HOST, FU_USERNAME, FU_PASSWORD), the other button get_emails(USV_HOST, USV_USERNAME, USV_PASSWORD). How can one achieve this in Django? My credentials are stored in .env file.
[ "Something like this is not achieved by placing that function on your frontend template, what you need to be doing is to redirect the user to a view that contains that function and by extracting these values from the users request, because as you can see you've got methods that are hitting your Database, which isn't something you can achieve from the frontend nor is it safe if it were possible.\n", "From the context I can understand that the arguments that are passed to your function must be secure.\nSo, go with a POST form like this:\ndef get_emails(request):\n context = {\n 'FU_HOST': settings.FU_HOST,\n 'FU_USERNAME': settings.FU_USERNAME,\n 'FU_PASSWORD': settings.FU_PASSWORD,\n 'FV_HOST': settings.FV_HOST,\n 'FV_USERNAME': settings.FV_USERNAME,\n 'FV_PASSWORD': settings.FV_PASSWORD,\n 'USV_HOST': settings.USV_HOST,\n 'USV_USERNAME': settings.USV_USERNAME,\n 'USV_PASSWORD': settings.USV_PASSWORD,\n }\n\n if request.method == \"POST\":\n HOST = request.POST[\"HOST\"]\n USERNAME = request.POST[\"USERNAME\"]\n PASSWORD = request.POST[\"PASSWORD\"]\n\n m = imaplib.IMAP4_SSL(HOST, 993)\n m.login(USERNAME, PASSWORD)\n m.select('INBOX')\n result, data = m.uid('search', None, \"ALL\")\n if result == 'OK':\n for num in data[0].split():\n result, data = m.uid('fetch', num, '(RFC822)')\n if result == 'OK':\n email_message_raw = email.message_from_bytes(data[0][1])\n email_from = str(make_header(decode_header(email_message_raw['From'])))\n email_addr = email_from.replace('<', '>').split('>')\n if len(email_addr) > 1:\n new_entry = EmailMarketing(email_address=email_addr[1], mail_server='X')\n new_entry.save()\n else:\n new_entry = EmailMarketing(email_address=email_addr[0], mail_server='X')\n new_entry.save()\n m.close()\n m.logout()\n\n messages.success(request, f'Subscribers list sychronized successfully.')\n return redirect('subscribers')\n\nIn the template, make a post request, given the host, username and password like this.\n<form action=\"{% url 'name-of-your-view' %}\" method=\"POST\">\n<input type=\"text\" name=\"HOST\">\n<input type=\"text\" name=\"USERNAME\">\n<input type=\"text\" name=\"PASSWORD\">\n<input type=\"submit\">\n</form>\n\n", "For this scenario, if the form post method is not working for you, you can use javascript and ajax.\nAdd the following codes to your html code to use the javascript method.\n<script src=\"https://ajax.googleapis.com/ajax/libs/jquery/3.6.1/jquery.min.js\"></script>\n<script>\n function post(_host,_username,_password){\n $.ajax({\n url: '{% url \"url-name\"%}',\n type: 'POST',\n data: { \n csrfmiddlewaretoken: \"{{ csrf_token }}\",\n host: _host,\n username = _username,\n password = _password\n },\n success: function (res) {\n console.log(res);\n }\n });\n }\n</script>\n\nUpdate the python function as follows.\ndef get_emails(request):\n HOST = request.POST.get('host')\n USERNAME = request.POST.get('username')\n PASSWORD = request.POST.get('password')\n context = {\n 'FU_HOST': settings.FU_HOST,\n 'FU_USERNAME': settings.FU_USERNAME,\n 'FU_PASSWORD': settings.FU_PASSWORD,\n 'FV_HOST': settings.FV_HOST,\n 'FV_USERNAME': settings.FV_USERNAME,\n 'FV_PASSWORD': settings.FV_PASSWORD,\n 'USV_HOST': settings.USV_HOST,\n 'USV_USERNAME': settings.USV_USERNAME,\n 'USV_PASSWORD': settings.USV_PASSWORD,\n }\n m = imaplib.IMAP4_SSL(HOST, 993)\n m.login(USERNAME, PASSWORD)\n m.select('INBOX')\n result, data = m.uid('search', None, \"ALL\")\n if result == 'OK':\n for num in data[0].split():\n result, data = m.uid('fetch', num, '(RFC822)')\n if result == 'OK':\n 
email_message_raw = email.message_from_bytes(data[0][1])\n email_from = str(make_header(decode_header(email_message_raw['From'])))\n email_addr = email_from.replace('<', '>').split('>')\n if len(email_addr) > 1:\n new_entry = EmailMarketing(email_address=email_addr[1], mail_server='X')\n new_entry.save()\n else:\n new_entry = EmailMarketing(email_address=email_addr[0], mail_server='X')\n new_entry.save()\n m.close()\n m.logout()\n\n messages.success(request, f'Subscribers list sychronized successfully.')\n return redirect('subscribers')\n\nI hope it helps.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "django", "function", "python", "view" ]
stackoverflow_0074613987_django_function_python_view.txt
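Both answers above send the host, username and password from the browser. Since the credentials already live in settings (loaded from .env), another option is to post only a short account key and resolve the credentials server side, so secrets never appear in the template at all. This is a sketch of that variant, not what the answers propose: the FU/FV/USV keys mirror the settings names from the question, and the 'subscribers' redirect and the imaplib loop are assumed to stay as they were.

    from django.conf import settings
    from django.contrib import messages
    from django.shortcuts import redirect

    # map a harmless key (posted by a button) to the names of the settings that hold the secrets
    ACCOUNTS = {
        "FU": ("FU_HOST", "FU_USERNAME", "FU_PASSWORD"),
        "FV": ("FV_HOST", "FV_USERNAME", "FV_PASSWORD"),
        "USV": ("USV_HOST", "USV_USERNAME", "USV_PASSWORD"),
    }

    def get_emails(request):
        if request.method == "POST" and request.POST.get("account") in ACCOUNTS:
            host_name, user_name, pass_name = ACCOUNTS[request.POST["account"]]
            host = getattr(settings, host_name)
            username = getattr(settings, user_name)
            password = getattr(settings, pass_name)
            # ... run the imaplib loop from the original view with host/username/password ...
            messages.success(request, "Subscribers list synchronized successfully.")
        return redirect("subscribers")

In the template, one POST form with {% csrf_token %} and three submit buttons such as <button type="submit" name="account" value="FU">Sync FU</button> is enough; every button hits the same view with a different key.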
Q: Spotify API invalid redirect URI I am running Python and Flask; when it starts to run I encounter this error message (image of error message). I have looked at other forums of people encountering the same error, to no avail. #here is the code ''' from flask import Flask, request, url_for, session, redirect import spotipy from spotipy.oauth2 import SpotifyOAuth app = Flask(__name__) app.secret_key = "" app.config['SESSION_COOKIE_NAME'] = 'Spotify cookie' @app.route('/') def login(): sp_oauth = create_spotify_oauth() auth_url = sp_oauth.get_authorize_url() return redirect(auth_url) @app.route('/redirect') def redirectPage(): return 'redirect' @app.route('/getTracks') def getTracks(): return 'PAIN' def create_spotify_oauth(): return SpotifyOAuth( client_id="", client_secret="", redirect_uri=url_for('redirectPage', _external=True), scope="user-library-read") I have added the redirect URIs to the settings in Spotify for Developers (image of redirect URIs added) and I have pressed save. I am expecting this to pop up (image of Spotify OAuth pop up). A: Looks like you have a typo in the Redirect URI; the one being sent to Spotify is http://127.0.0.1:5000/redirect Note the spelling of redirect: in the screenshot of the Spotify Dashboard it is "redierect", and the URI being passed to Spotify also lacks a trailing forward slash. The redirect URI has to match exactly, so add the one below to the Spotify Dashboard as is; the URL needs to be exactly correct, otherwise the Invalid Redirect URI error will keep appearing: http://127.0.0.1:5000/redirect
Spotify API invalid redirect URI
I am running Python and Flask; when it starts to run I encounter this error message (image of error message). I have looked at other forums of people encountering the same error, to no avail. #here is the code ''' from flask import Flask, request, url_for, session, redirect import spotipy from spotipy.oauth2 import SpotifyOAuth app = Flask(__name__) app.secret_key = "" app.config['SESSION_COOKIE_NAME'] = 'Spotify cookie' @app.route('/') def login(): sp_oauth = create_spotify_oauth() auth_url = sp_oauth.get_authorize_url() return redirect(auth_url) @app.route('/redirect') def redirectPage(): return 'redirect' @app.route('/getTracks') def getTracks(): return 'PAIN' def create_spotify_oauth(): return SpotifyOAuth( client_id="", client_secret="", redirect_uri=url_for('redirectPage', _external=True), scope="user-library-read") I have added the redirect URIs to the settings in Spotify for Developers (image of redirect URIs added) and I have pressed save. I am expecting this to pop up (image of Spotify OAuth pop up).
[ "Looks like you have a typo in the Redirect URI, in the one being sent to Spotify it is\n\nhttp://127.0.0.1:5000/redirect\n\nNote the spelling of redirect, from the Screenshot from the Spotify Dashboard this is \"redierect\" and also the lack of a forward slash on the one being passed to Spotify as this has to match exactly also so you just need to add that URI to the Spotify Dashboard as is as the URL needs to be exactly correct for it not to show that error for the Invalid Redirect URI so add this one to see if it works\n\nhttp://127.0.0.1:5000/redirect\n\n" ]
[ 0 ]
[]
[]
[ "python", "spotify" ]
stackoverflow_0074598589_python_spotify.txt
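One way to avoid this kind of mismatch in the code from the question is to keep the redirect URI in a single constant and print it, so the exact string handed to SpotifyOAuth can be compared character for character with the dashboard entry. A sketch, assuming the Flask app really is served on 127.0.0.1:5000 as in the question; the empty client_id/client_secret are placeholders as in the original code.

    from spotipy.oauth2 import SpotifyOAuth

    REDIRECT_URI = "http://127.0.0.1:5000/redirect"   # must match the dashboard entry exactly

    def create_spotify_oauth():
        print("redirect_uri sent to Spotify:", REDIRECT_URI)
        return SpotifyOAuth(
            client_id="",
            client_secret="",
            redirect_uri=REDIRECT_URI,
            scope="user-library-read",
        )

Spelling, scheme, port and any trailing slash all count, so the printed value is exactly what has to be pasted into the Spotify for Developers settings.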
Q: In Python, how do I draw/ save a monochrome 2 bit BMP with Wand I have a need to create a 2 bit monochrome Windows BMP format image, and need to draw lines in a pattern. Since I have just started using python, I followed a tutorial and installed the Wand module. Drawing is fine, I get the content I need. Problem is saving the image. No matter what I do, the resulting image is always 24 bpp BMP. What I have so far: from wand.image import Image from wand.drawing import Drawing from wand.color import Color draw=Drawing() ### SET VARS mystepvalue = 15 with Image(filename='empty.bmp') as image: print(image.size) image.antialias = False draw.stroke_width = 1 draw.stroke_color = Color('black') draw.fill_color = Color('black') # # --- draw the lines --- # for xpos in range(0,4790,mystepvalue): draw.line( (xpos,0), (xpos, image.height) ) draw(image) # Have to remember this to see anything... # None of this makes a 2bit BMP image.depth = 2 image.colorspace = "gray" image.antialias = False image.monochrome = True image.imagedepth =2 image.save(filename='result.bmp') Source image is 4800 x 283 px 169kb, result.bmp is same, but 4.075 kb. It's probably the order of the color settings that I miss. A: Try using Image.quantize() to remove all the unique gray colors before setting the depth/colorspace properties. # ... image.quantize(2, colorspace_type='gray', dither=True) image.depth = 2 image.colorspace = 'gray' image.type = 'bilevel' # or grayscale would also work. image.save(filename='result.bmp') Also remember filename='bmp3:result.bmp' or filename='bmp2:result.bmp' controls which BMP version to encode with. A: For me setting image mode to "1" did the job: img.mode='1' img.save('result.bmp') Here are the docs on different modes: https://pillow.readthedocs.io/en/stable/handbook/concepts.html#concept-modes
In Python, how do I draw/ save a monochrome 2 bit BMP with Wand
I have a need to create a 2 bit monochrome Windows BMP format image, and need to draw lines in a pattern. Since I have just started using python, I followed a tutorial and installed the Wand module. Drawing is fine, I get the content I need. Problem is saving the image. No matter what I do, the resulting image is always 24 bpp BMP. What I have so far: from wand.image import Image from wand.drawing import Drawing from wand.color import Color draw=Drawing() ### SET VARS mystepvalue = 15 with Image(filename='empty.bmp') as image: print(image.size) image.antialias = False draw.stroke_width = 1 draw.stroke_color = Color('black') draw.fill_color = Color('black') # # --- draw the lines --- # for xpos in range(0,4790,mystepvalue): draw.line( (xpos,0), (xpos, image.height) ) draw(image) # Have to remember this to see anything... # None of this makes a 2bit BMP image.depth = 2 image.colorspace = "gray" image.antialias = False image.monochrome = True image.imagedepth =2 image.save(filename='result.bmp') Source image is 4800 x 283 px 169kb, result.bmp is same, but 4.075 kb. It's probably the order of the color settings that I miss.
[ "Try using Image.quantize() to remove all the unique gray colors before setting the depth/colorspace properties.\n # ...\n image.quantize(2, colorspace_type='gray', dither=True)\n image.depth = 2\n image.colorspace = 'gray'\n image.type = 'bilevel' # or grayscale would also work.\n image.save(filename='result.bmp')\n\nAlso remember filename='bmp3:result.bmp' or filename='bmp2:result.bmp' controls which BMP version to encode with.\n", "For me setting image mode to \"1\" did the job:\nimg.mode='1'\nimg.save('result.bmp')\n\nHere are the docs on different modes: https://pillow.readthedocs.io/en/stable/handbook/concepts.html#concept-modes\n" ]
[ 2, 0 ]
[]
[]
[ "python", "wand" ]
stackoverflow_0069254456_python_wand.txt
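Putting the accepted answer together with the drawing loop from the question gives roughly the following. It is a sketch rather than a tested script: the quantize/depth/type values are taken straight from the answer, and the bmp3: filename prefix picks the BMP encoder version explicitly.

    from wand.image import Image
    from wand.drawing import Drawing
    from wand.color import Color

    mystepvalue = 15
    draw = Drawing()
    draw.stroke_width = 1
    draw.stroke_color = Color("black")
    draw.fill_color = Color("black")

    with Image(filename="empty.bmp") as image:
        for xpos in range(0, image.width, mystepvalue):
            draw.line((xpos, 0), (xpos, image.height))
        draw(image)                                    # render the lines onto the image

        image.quantize(2, colorspace_type="gray", dither=True)  # collapse to two gray levels first
        image.depth = 2
        image.colorspace = "gray"
        image.type = "bilevel"
        image.save(filename="bmp3:result.bmp")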
Q: How to convert date format from yyyy-mm-dd to yymmdd based on Python Robot Framework How to convert a date from yyyy-mm-dd format to yymmdd based on Python Robot Framework. I tried the keyword below in Robot Framework: ${StartDate}= Convert Date 2022-09-29 result_format=%yy%MM%dd but I am getting 20220929 (yyyymmdd). Expected output from the above example ==> 220929 (yymmdd) A: You can try it as below: ${StartDate}= Convert Date 2022-09-29 result_format=%y%m%d This will give your expected output 220929 in yymmdd format
How to convert date format from yyyy-mm-dd to yymmdd based on Python Robot Framework
How to convert a date from yyyy-mm-dd format to yymmdd based on Python Robot Framework. I tried the keyword below in Robot Framework: ${StartDate}= Convert Date 2022-09-29 result_format=%yy%MM%dd but I am getting 20220929 (yyyymmdd). Expected output from the above example ==> 220929 (yymmdd)
[ "You can try it as below:\n${StartDate}= Convert Date 2022-09-29 result_format=%y%m%d\n\nThis will give your expected output 220929 in yymmdd format\n" ]
[ 0 ]
[]
[]
[ "date", "date_conversion", "python", "robotframework" ]
stackoverflow_0073897604_date_date_conversion_python_robotframework.txt
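For reference, result_format in Robot Framework's Convert Date takes the same strftime-style directives that Python's datetime uses, so a format string can be sanity-checked in plain Python before it goes into a test. A small sketch:

    from datetime import datetime

    d = datetime.strptime("2022-09-29", "%Y-%m-%d")
    print(d.strftime("%Y%m%d"))   # 20220929, %Y is the four-digit year
    print(d.strftime("%y%m%d"))   # 220929, %y is the two-digit year wanted here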
Q: Searching for intersections in two tuples of tuples in python Having the following problem. I'm reading the data from stdin and save it in list that I convert to tuple the following way: x = int(input()) f = [] for i in range(x): a, b = map(int, input().split()) f.append([a,b]) def to_tuple(lst): return tuple(to_tuple(i) if isinstance(i, list) else i for i in lst) After this I receive two tuples of tuples looking something like that: f = ((0, 1), (1, 2), (0, 2), (0, 3)) s = (((0,), (1, 2, 3)), ((0, 1), (2, 3)), ((0, 1, 2), (3,))) What I'm trying to do is to find the number of intersections between all inner tuples of f and each tuple of s. In my case "intersection" should be considered as an "edges" between tuples (so in f we have all possible "edges" and checking if there will be an edge between inner tuples in particular tuple of s). So for the example it should print [3,3,1]. Basically, I know how to do in the simple case of intersection - so one can just use set() and then apply a.intersection(b) But how should I proceed in my case? Many thanks and sorry if the question was already asked before :=) A: I am sure this can be solve by different ways. but I believe this is the easiest. out = set() # holds the output for ff in f: # loop through f tuple ff = set(ff) # convert to set for ss1,ss2 in s: # loop through s tuple # you can select which tuple to do the intersection on. # here I am doing the intersection on both inner tuples in the s tuple. ss1 = set(ss1) # convert to set ss2 = set(ss2) out.update(ff.intersection(ss1)) # intersection and add to out out.update(ff.intersection(ss2)) # intersection and add to out #if you want your output to be in list format out = list(out) A: This is an example of how you can proceed a = ((1,1),(1,2)) b = (((1,2),(3,1)),((3,2),(1,2)),((1,4),)) l=[] for t in b: c=[i for i in a for j in t if i==j] l.append(c) print(l) A: General answer for overall amount of edges: def cnt_edges(a,b): edge_cnt = 0 for i in range(len(a)): node1 = a[i][0] node2 = a[i][1] for j in range(len(b)): inner_node1 = b[j][0] inner_node2 = b[j][1] if (node1 in inner_node1 and node2 in inner_node2) or (node1 in inner_node2 and node2 in inner_node1): edge_cnt += 1 return edge_cnt a = ((0, 1),(0, 2), (0,3)) b = (((0),(1,2,3)), ((0,1),(2,3)), ((0,1,2),(3))) cnt_edges(a,b)
Searching for intersections in two tuples of tuples in python
Having the following problem. I'm reading the data from stdin and save it in list that I convert to tuple the following way: x = int(input()) f = [] for i in range(x): a, b = map(int, input().split()) f.append([a,b]) def to_tuple(lst): return tuple(to_tuple(i) if isinstance(i, list) else i for i in lst) After this I receive two tuples of tuples looking something like that: f = ((0, 1), (1, 2), (0, 2), (0, 3)) s = (((0,), (1, 2, 3)), ((0, 1), (2, 3)), ((0, 1, 2), (3,))) What I'm trying to do is to find the number of intersections between all inner tuples of f and each tuple of s. In my case "intersection" should be considered as an "edges" between tuples (so in f we have all possible "edges" and checking if there will be an edge between inner tuples in particular tuple of s). So for the example it should print [3,3,1]. Basically, I know how to do in the simple case of intersection - so one can just use set() and then apply a.intersection(b) But how should I proceed in my case? Many thanks and sorry if the question was already asked before :=)
[ "I am sure this can be solve by different ways. but I believe this is the easiest.\nout = set() # holds the output\nfor ff in f: # loop through f tuple\n ff = set(ff) # convert to set\n for ss1,ss2 in s: # loop through s tuple\n # you can select which tuple to do the intersection on. \n # here I am doing the intersection on both inner tuples in the s tuple.\n ss1 = set(ss1) # convert to set\n ss2 = set(ss2)\n out.update(ff.intersection(ss1)) # intersection and add to out\n out.update(ff.intersection(ss2)) # intersection and add to out\n\n#if you want your output to be in list format\nout = list(out) \n \n\n", "This is an example of how you can proceed\na = ((1,1),(1,2))\nb = (((1,2),(3,1)),((3,2),(1,2)),((1,4),))\nl=[]\nfor t in b:\n c=[i for i in a for j in t if i==j]\n l.append(c)\nprint(l)\n \n\n", "General answer for overall amount of edges:\ndef cnt_edges(a,b):\n edge_cnt = 0\n \n for i in range(len(a)):\n node1 = a[i][0]\n node2 = a[i][1]\n \n for j in range(len(b)):\n inner_node1 = b[j][0]\n inner_node2 = b[j][1]\n \n if (node1 in inner_node1 and node2 in inner_node2) or (node1 in inner_node2 and node2 in inner_node1):\n edge_cnt += 1\n return edge_cnt\n\na = ((0, 1),(0, 2), (0,3))\nb = (((0),(1,2,3)), ((0,1),(2,3)), ((0,1,2),(3)))\n\ncnt_edges(a,b)\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "python", "tuples" ]
stackoverflow_0074584078_python_tuples.txt
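Reading the question as: for every partition in s, count how many edges of f have one endpoint on each side (which is what yields the expected [3, 3, 1]), a compact sketch using sets is:

    f = ((0, 1), (1, 2), (0, 2), (0, 3))
    s = (((0,), (1, 2, 3)), ((0, 1), (2, 3)), ((0, 1, 2), (3,)))

    counts = []
    for left, right in s:
        left, right = set(left), set(right)
        crossing = sum(1 for a, b in f
                       if (a in left and b in right) or (a in right and b in left))
        counts.append(crossing)

    print(counts)   # [3, 3, 1]

If an edge should instead count when both endpoints land in the same part, the condition flips to (a in left and b in left) or (a in right and b in right).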
Q: Issue with Netmiko when using Nornir Iv recently been using Nornir with Netmiko to get some output from my devices. When I run the following code: from nornir import InitNornir from nornir.core.filter import F from nornir_netmiko.tasks import netmiko_send_command, netmiko_send_config from nornir_utils.plugins.functions import print_result nr = InitNornir(config_file="config.yml") test = nr.filter(platform="Cisco") result = test.run(netmiko_send_command, command_string="sh ver") print_result(result) I get the following error: Traceback (most recent call last): File "/usr/local/lib/python3.8/dist-packages/nornir/core/task.py", line 99, in start r = self.task(self, **self.params) File "/home/benanater/.local/lib/python3.8/site-packages/nornir_netmiko/tasks/netmiko_send_command.py", line 26, in netmiko_send_command net_connect = task.host.get_connection(CONNECTION_NAME, task.nornir.config) File "/usr/local/lib/python3.8/dist-packages/nornir/core/inventory.py", line 494, in get_connection self.open_connection( File "/usr/local/lib/python3.8/dist-packages/nornir/core/inventory.py", line 546, in open_connection conn_obj.open( File "/home/benanater/.local/lib/python3.8/site-packages/nornir_netmiko/connections/netmiko.py", line 59, in open connection = ConnectHandler(**parameters) File "/home/benanater/.local/lib/python3.8/site-packages/netmiko/ssh_dispatcher.py", line 321, in ConnectHandler raise ValueError( ValueError: Unsupported 'device_type' currently supported platforms are: a10 Any help with why this error is being generated would be greatly appreciated. A: You set the platform equal to Cisco which is not mapped by the netmiko_plugin to any supported device_type. You should read the docs on netmiko_plugin. The platform should be equivalent to cisco_ios or cisco_ios_telnet. A: inventory['options']['hosts_query] = replace(replace(replace(replace(platform,'cisco-ios','ios'),'huawei_ce','huawei'),'cisco-nx-os','nxos'),'aruba','aruba_os') AS platform
Issue with Netmiko when using Nornir
Iv recently been using Nornir with Netmiko to get some output from my devices. When I run the following code: from nornir import InitNornir from nornir.core.filter import F from nornir_netmiko.tasks import netmiko_send_command, netmiko_send_config from nornir_utils.plugins.functions import print_result nr = InitNornir(config_file="config.yml") test = nr.filter(platform="Cisco") result = test.run(netmiko_send_command, command_string="sh ver") print_result(result) I get the following error: Traceback (most recent call last): File "/usr/local/lib/python3.8/dist-packages/nornir/core/task.py", line 99, in start r = self.task(self, **self.params) File "/home/benanater/.local/lib/python3.8/site-packages/nornir_netmiko/tasks/netmiko_send_command.py", line 26, in netmiko_send_command net_connect = task.host.get_connection(CONNECTION_NAME, task.nornir.config) File "/usr/local/lib/python3.8/dist-packages/nornir/core/inventory.py", line 494, in get_connection self.open_connection( File "/usr/local/lib/python3.8/dist-packages/nornir/core/inventory.py", line 546, in open_connection conn_obj.open( File "/home/benanater/.local/lib/python3.8/site-packages/nornir_netmiko/connections/netmiko.py", line 59, in open connection = ConnectHandler(**parameters) File "/home/benanater/.local/lib/python3.8/site-packages/netmiko/ssh_dispatcher.py", line 321, in ConnectHandler raise ValueError( ValueError: Unsupported 'device_type' currently supported platforms are: a10 Any help with why this error is being generated would be greatly appreciated.
[ "You set the platform equal to Cisco which is not mapped by the netmiko_plugin to any supported device_type. You should read the docs on netmiko_plugin. The platform should be equivalent to cisco_ios or cisco_ios_telnet.\n", "inventory['options']['hosts_query] = replace(replace(replace(replace(platform,'cisco-ios','ios'),'huawei_ce','huawei'),'cisco-nx-os','nxos'),'aruba','aruba_os') AS platform\n" ]
[ 2, 0 ]
[]
[]
[ "error_handling", "netmiko", "python", "python_3.x" ]
stackoverflow_0067458383_error_handling_netmiko_python_python_3.x.txt
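Following the first answer, the only change the script from the question needs is to filter on (and store in the inventory) a real netmiko device_type instead of the vendor name. A sketch, assuming each matching host in the inventory has platform: cisco_ios:

    from nornir import InitNornir
    from nornir_netmiko.tasks import netmiko_send_command
    from nornir_utils.plugins.functions import print_result

    nr = InitNornir(config_file="config.yml")
    # "cisco_ios" (or "cisco_ios_telnet") is what netmiko's ConnectHandler expects;
    # the same string must be the platform value of each matching host
    cisco = nr.filter(platform="cisco_ios")
    result = cisco.run(netmiko_send_command, command_string="sh ver")
    print_result(result)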
Q: How to Split Image Into Multiple Pieces in Python I'm trying to split a photo into multiple pieces using PIL. def crop(Path,input,height,width,i,k,x,y,page): im = Image.open(input) imgwidth = im.size[0] imgheight = im.size[1] for i in range(0,imgheight-height/2,height-2): print i for j in range(0,imgwidth-width/2,width-2): print j box = (j, i, j+width, i+height) a = im.crop(box) a.save(os.path.join(Path,"PNG","%s" % page,"IMG-%s.png" % k)) k +=1 but it doesn't seem to be working. It splits the photo but not in an exact way (you can try it). A: Splitting image to tiles of MxN pixels (assuming im is numpy.ndarray): tiles = [im[x:x+M,y:y+N] for x in range(0,im.shape[0],M) for y in range(0,im.shape[1],N)] In the case you want to split the image to four pieces: M = im.shape[0]//2 N = im.shape[1]//2 tiles[0] holds the upper left tile A: Edit: I believe this answer missed the intent to cut an image into rectangles in columns and rows. This answer cuts only into rows. It looks like other answers cut in columns and rows. Simpler than all these is to use a wheel someone else invented :) It may be more involved to set up, but then it's a snap to use. These instructions are for Windows 7; they may need to be adapted for other OSs. Get and install pip from here. Download the install archive, and extract it to your root Python installation directory. Open a console and type (if I recall correctly): python get-pip.py install Then get and install the image_slicer module via pip, by entering the following command at the console: python -m pip install image_slicer Copy the image you want to slice into the Python root directory, open a python shell (not the "command line"), and enter these commands: import image_slicer image_slicer.slice('huge_test_image.png', 14) The beauty of this module is that it Is installed in python Can invoke an image split with two lines of code Accepts any even number as an image slice parameter (e.g. 14 in this example) Takes that parameter and automagically splits the given image into so many slices, and auto-saves the resultant numbered tiles in the same directory, and finally Has a function to stitch the image tiles back together (which I haven't yet tested); files apparently must be named after the convention which you will see in the split files after testing the image_slicer.slice function. A: from PIL import Image def crop(path, input, height, width, k, page, area): im = Image.open(input) imgwidth, imgheight = im.size for i in range(0,imgheight,height): for j in range(0,imgwidth,width): box = (j, i, j+width, i+height) a = im.crop(box) try: o = a.crop(area) o.save(os.path.join(path,"PNG","%s" % page,"IMG-%s.png" % k)) except: pass k +=1 A: As an alternative solution, we will construct the tiles by generating a grid of coordinates using itertools.product. We will ignore partial tiles on the edges, only iterating through the cartesian product between the two intervals, i.e. range(0, h-h%d, d) X range(0, w-w%d, d). 
Given filename: the image file name, d: the tile size, dir_in: the path to the directory containing the image, and dir_out: the directory where tiles will be outputted: from PIL import Image from itertools import product def tile(filename, dir_in, dir_out, d): name, ext = os.path.splitext(filename) img = Image.open(os.path.join(dir_in, filename)) w, h = img.size grid = product(range(0, h-h%d, d), range(0, w-w%d, d)) for i, j in grid: box = (j, i, j+d, i+d) out = os.path.join(dir_out, f'{name}_{i}_{j}{ext}') img.crop(box).save(out) A: crop would be a more reusable function if you separate the cropping code from the image saving code. It would also make the call signature simpler. im.crop returns a Image._ImageCrop instance. Such instances do not have a save method. Instead, you must paste the Image._ImageCrop instance onto a new Image.Image Your ranges do not have the right step sizes. (Why height-2 and not height? for example. Why stop at imgheight-(height/2)?). So, you might try instead something like this: import Image import os def crop(infile,height,width): im = Image.open(infile) imgwidth, imgheight = im.size for i in range(imgheight//height): for j in range(imgwidth//width): box = (j*width, i*height, (j+1)*width, (i+1)*height) yield im.crop(box) if __name__=='__main__': infile=... height=... width=... start_num=... for k,piece in enumerate(crop(infile,height,width),start_num): img=Image.new('RGB', (height,width), 255) img.paste(piece) path=os.path.join('/tmp',"IMG-%s.png" % k) img.save(path) A: Here is a concise, pure-python solution that works in both python 3 and 2: from PIL import Image infile = '20190206-135938.1273.Easy8thRunnersHopefully.jpg' chopsize = 300 img = Image.open(infile) width, height = img.size # Save Chops of original image for x0 in range(0, width, chopsize): for y0 in range(0, height, chopsize): box = (x0, y0, x0+chopsize if x0+chopsize < width else width - 1, y0+chopsize if y0+chopsize < height else height - 1) print('%s %s' % (infile, box)) img.crop(box).save('zchop.%s.x%03d.y%03d.jpg' % (infile.replace('.jpg',''), x0, y0)) Notes: The crops that go over the right and bottom of the original image are adjusted to the original image limit and contain only the original pixels. It's easy to choose a different chopsize for w and h by using two chopsize vars and replacing chopsize as appropriate in the code above. 
A: Not sure if this is the most efficient answer, but it works for me: import os import glob from PIL import Image Image.MAX_IMAGE_PIXELS = None # to avoid image size warning imgdir = "/path/to/image/folder" # if you want file of a specific extension (.png): filelist = [f for f in glob.glob(imgdir + "**/*.png", recursive=True)] savedir = "/path/to/image/folder/output" start_pos = start_x, start_y = (0, 0) cropped_image_size = w, h = (500, 500) for file in filelist: img = Image.open(file) width, height = img.size frame_num = 1 for col_i in range(0, width, w): for row_i in range(0, height, h): crop = img.crop((col_i, row_i, col_i + w, row_i + h)) name = os.path.basename(file) name = os.path.splitext(name)[0] save_to= os.path.join(savedir, name+"_{:03}.png") crop.save(save_to.format(frame_num)) frame_num += 1 This is mostly based on DataScienceGuy answer here A: Here is a late answer that works with Python 3 from PIL import Image import os def imgcrop(input, xPieces, yPieces): filename, file_extension = os.path.splitext(input) im = Image.open(input) imgwidth, imgheight = im.size height = imgheight // yPieces width = imgwidth // xPieces for i in range(0, yPieces): for j in range(0, xPieces): box = (j * width, i * height, (j + 1) * width, (i + 1) * height) a = im.crop(box) try: a.save("images/" + filename + "-" + str(i) + "-" + str(j) + file_extension) except: pass Usage: imgcrop("images/testing.jpg", 5, 5) Then the images will be cropped into pieces according to the specified X and Y pieces, in my case 5 x 5 = 25 pieces A: Here is another solution, just using NumPy built-in np.array_split : def divide_img_blocks(img, n_blocks=(5, 5)): horizontal = np.array_split(img, n_blocks[0]) splitted_img = [np.array_split(block, n_blocks[1], axis=1) for block in horizontal] return np.asarray(splitted_img, dtype=np.ndarray).reshape(n_blocks) It returns a NumPy array with the dimension passed as n_blocks. Each element of the array is a block, so to access each block and save it as an image you should write something like the following: result = divide_img_blocks(my_image) for i in range(result.shape[0]): for j in range(result.shape[1]): cv2.imwrite(f"my_block_{i}_{j}.jpg", result[i,j]) This answer is very fast, faster than @Nir answer, which among the posted ones was the cleanest. Additionally is almost three orders of magnitude faster than the suggested package (i.e. image_slicer). Time taken by divide_img_blocks: 0.0009832382202148438 Time taken by Nir answer: 0.002960681915283203 Time taken by image_slicer.slice: 0.4419238567352295 Hope it can still be useful. A: I find it easier to skimage.util.view_as_windows or `skimage.util.view_as_blocks which also allows you to configure the step http://scikit-image.org/docs/dev/api/skimage.util.html?highlight=view_as_windows#skimage.util.view_as_windows A: import os import sys from PIL import Image savedir = r"E:\new_mission _data\test" filename = r"E:\new_mission _data\test\testing1.png" img = Image.open(filename) width, height = img.size start_pos = start_x, start_y = (0, 0) cropped_image_size = w, h = (1024,1024) frame_num = 1 for col_i in range(0, width, w): for row_i in range(0, height, h): crop = img.crop((col_i, row_i, col_i + w, row_i + h)) save_to= os.path.join(savedir, "testing_{:02}.png") crop.save(save_to.format(frame_num)) frame_num += 1 A: For anyone looking for a simple approach to this, here is a simple working function for splitting an image into NxN sections. 
def slice_image(filename, N): i = Image.open(filename) width = i.width height = i.height for x in range(N): for y in range(N): index = (x * pieces) + 1 + y img = i.crop((x * width/N, y * height/N, x * width/N+ width/N, y * height/N+ height/N)) img.save(f"{filename}_sliced_{index}.jpeg") A: Thanks @Ivan for teaching me something about itertools and grids. Came here to split up tomographic 3D image data (tif-files) into smaller regions for evaluation. I adapted the script to 3D-TIF files (using the tiffile library) and added a "centered" approach. So the tiles don't start in the upper-left corner but are centered and crop too small tiles at the borders at each direction. Maybe this also help other people. from itertools import product import tifffile as tif import numpy as np path = 'PATH' filename= 'FILENAME.tif' img = tif.imread(path+filename) depth, height, width = img.shape tilesize = 100 grid = product(range(int((depth%tilesize)/2), int(depth-(depth%tilesize)/2), tilesize), range(int((width%tilesize)/2), int(width-((width%tilesize)/2)), tilesize), range(int((height%tilesize)/2), int(height-(height%tilesize)/2), tilesize)) for z,y,x in grid: crop = img[z:z+tilesize, y:y+tilesize, x:x+tilesize] tif.imwrite(path+filename+f'{z:04d}z_{y:04d}y_{x:04d}x.tif', crop, dtype = np.uint8) A: This is my script tools, it is very sample to splite css-sprit image into icons: Usage: split_icons.py img dst_path width height Example: python split_icons.py icon-48.png gtliu 48 48 Save code into split_icons.py : #!/usr/bin/env python # -*- coding:utf-8 -*- import os import sys import glob from PIL import Image def Usage(): print '%s img dst_path width height' % (sys.argv[0]) sys.exit(1) if len(sys.argv) != 5: Usage() src_img = sys.argv[1] dst_path = sys.argv[2] if not os.path.exists(sys.argv[2]) or not os.path.isfile(sys.argv[1]): print 'Not exists', sys.argv[2], sys.argv[1] sys.exit(1) w, h = int(sys.argv[3]), int(sys.argv[4]) im = Image.open(src_img) im_w, im_h = im.size print 'Image width:%d height:%d will split into (%d %d) ' % (im_w, im_h, w, h) w_num, h_num = int(im_w/w), int(im_h/h) for wi in range(0, w_num): for hi in range(0, h_num): box = (wi*w, hi*h, (wi+1)*w, (hi+1)*h) piece = im.crop(box) tmp_img = Image.new('L', (w, h), 255) tmp_img.paste(piece) img_path = os.path.join(dst_path, "%d_%d.png" % (wi, hi)) tmp_img.save(img_path) A: I tried the solutions above, but sometimes you just gotta do it yourself. Might be off by a pixel in some cases but works fine in general. 
import matplotlib.pyplot as plt import numpy as np def image_to_tiles(im, number_of_tiles = 4, plot=False): """ Function that splits SINGLE channel images into tiles :param im: image: single channel image (NxN matrix) :param number_of_tiles: squared number :param plot: :return tiles: """ n_slices = np.sqrt(number_of_tiles) assert int(n_slices + 0.5) ** 2 == number_of_tiles, "Number of tiles is not a perfect square" n_slices = n_slices.astype(np.int) [w, h] = cropped_npy.shape r = np.linspace(0, w, n_slices+1) r_tuples = [(np.int(r[i]), np.int(r[i+1])) for i in range(0, len(r)-1)] q = np.linspace(0, h, n_slices+1) q_tuples = [(np.int(q[i]), np.int(q[i+1])) for i in range(0, len(q)-1)] tiles = [] for row in range(n_slices): for column in range(n_slices): [x1, y1, x2, y2] = *r_tuples[row], *q_tuples[column] tiles.append(im[x1:y1, x2:y2]) if plot: fig, axes = plt.subplots(n_slices, n_slices, figsize=(10,10)) c = 0 for row in range(n_slices): for column in range(n_slices): axes[row,column].imshow(tiles[c]) axes[row,column].axis('off') c+=1 return tiles Hope it helps. A: I would suggest to use multiprocessing instead of a regular for loop as follows: from PIL import Image import os def crop(infile,height,width): im = Image.open(infile) imgwidth, imgheight = im.size for i in range(imgheight//height): for j in range(imgwidth//width): box = (j*width, i*height, (j+1)*width, (i+1)*height) yield im.crop(box) def til_image(infile): infile=... height=... width=... start_num=... for k,piece in enumerate(crop(infile,height,width),start_num): img=Image.new('RGB', (height,width), 255) img.paste(piece) path=os.path.join('/tmp',"IMG-%s.png" % k) img.save(path) from multiprocessing import Pool, cpu_count try: pool = Pool(cpu_count()) pool.imap_unordered(tile_image, os.listdir(root), chunksize=4) finally: pool.close() A: the easiest way: import image_slicer image_slicer.slice('/Address of image for exp/A1.png',16) this command splits the image into 16 slices and saves them in the directory that the input image is there. you should first install image_slicer: pip install image_slicer A: Splitting an image into squares of a specific size I adapted a solution so that it accepts a specific tile size instead of an amount of tiles because I needed to cut the image up into a grid of 32px squares. The parameters are the image_path and the size of the tiles in pixels. I tried to make the code as readable as possible. 
# Imports from PIL import Image import os import random # Function def image_to_tiles(im, tile_size = 32): """ Function that splits an image into tiles :param im: image: image path :param tile_size: width in pixels of a tile :return tiles: """ image = Image.open(im) w = image.width h = image.height row_count = np.int64((h-h%tile_size)/tile_size) col_count = np.int64((w-w%tile_size)/tile_size) n_slices = np.int64(row_count*col_count) # Image info print(f'Image: {im}') print(f'Dimensions: w:{w} h:{h}') print(f'Tile count: {n_slices}') r = np.linspace(0, w, row_count+1) r_tuples = [(np.int64(r[i]), np.int64(r[i])+tile_size) for i in range(0, len(r)-1)] q = np.linspace(0, h, col_count+1) q_tuples = [(np.int64(q[i]), np.int64(q[i])+tile_size) for i in range(0, len(q)-1)] #print(f'r_tuples:{r_tuples}\n\nq_tuples:{q_tuples}\n') tiles = [] for row in range(row_count): for column in range(col_count): [y1, y2, x1, x2] = *r_tuples[row], *q_tuples[column] x2 = x1+tile_size y2 = y1+tile_size tile_image = image.crop((x1,y1,x2,y2)) tile_coords = {'x1':x1,'y1':y1,'x2':x2,'y2':y2} tiles.append({'image':tile_image,'coords':tile_coords}) return tiles # Testing: img_path ='/home/user/path/to/image.jpg' tiles = image_to_tiles(img_path) for i in range(20): tile = random.choice(tiles) tile['image'].show() A: '''def split(img,nbxsplit,nbysplit):#this is for number of splitting in line and column xpart=int(img.shape[0]/nbxsplit)#number of part or region in line shape of image ypart=int(img.shape[1]/nbysplit)#number of part in column shape of image arr=[]#empty arr for storage for i in range(0,img.shape[0]-xpart+1,xpart):#boucle stepping by part of x and end until she arrived the last part ,in the there will be lost of of little size beacause sometimes thelast part we can't devide it into flly part for j in range(0,img.shape[1]-ypart+1,ypart): arr.append(img[i:i+xpart,j:j+ypart]) a=np.array(arr) a=np.reshape(a,(nbxsplit,nbysplit)) return a ''' A: you can use numpy stride tricks to achive this, but be careful, as this function has to be used with extreme care (doc) import numpy as np from numpy.lib.stride_tricks import as_strided def img_pieces(img, piece_size): height, width, chanels = img.shape n_bytes = img.strides[-1] return np.reshape( as_strided( img, ( height // piece_size, width // piece_size, piece_size, piece_size, chanels ), ( n_bytes * chanels * width * piece_size, n_bytes * chanels * piece_size, n_bytes * chanels * width, n_bytes * chanels, n_bytes ) ), ( -1, piece_size, piece_size, chanels ) )
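Several of the snippets above simply ignore the partial tiles at the right and bottom edges. A short Pillow-only sketch that keeps them by clamping each crop box to the image size; the file name and the 300 px tile size are placeholders:

    from PIL import Image

    def iter_tiles(path, tile_size=300):
        img = Image.open(path)
        w, h = img.size
        for top in range(0, h, tile_size):
            for left in range(0, w, tile_size):
                # min() clamps the box so edge tiles come out smaller instead of padded
                yield img.crop((left, top, min(left + tile_size, w), min(top + tile_size, h)))

    for k, piece in enumerate(iter_tiles("huge_test_image.png")):
        piece.save(f"tile_{k:04d}.png")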
How to Split Image Into Multiple Pieces in Python
I'm trying to split a photo into multiple pieces using PIL. def crop(Path,input,height,width,i,k,x,y,page): im = Image.open(input) imgwidth = im.size[0] imgheight = im.size[1] for i in range(0,imgheight-height/2,height-2): print i for j in range(0,imgwidth-width/2,width-2): print j box = (j, i, j+width, i+height) a = im.crop(box) a.save(os.path.join(Path,"PNG","%s" % page,"IMG-%s.png" % k)) k +=1 but it doesn't seem to be working. It splits the photo but not in an exact way (you can try it).
[ "Splitting image to tiles of MxN pixels (assuming im is numpy.ndarray):\ntiles = [im[x:x+M,y:y+N] for x in range(0,im.shape[0],M) for y in range(0,im.shape[1],N)]\n\nIn the case you want to split the image to four pieces:\nM = im.shape[0]//2\nN = im.shape[1]//2\n\ntiles[0] holds the upper left tile\n", "Edit: I believe this answer missed the intent to cut an image into rectangles in columns and rows. This answer cuts only into rows. It looks like other answers cut in columns and rows. \nSimpler than all these is to use a wheel someone else invented :) It may be more involved to set up, but then it's a snap to use.\nThese instructions are for Windows 7; they may need to be adapted for other OSs.\nGet and install pip from here.\nDownload the install archive, and extract it to your root Python installation directory. Open a console and type (if I recall correctly):\npython get-pip.py install\n\nThen get and install the image_slicer module via pip, by entering the following command at the console:\npython -m pip install image_slicer\n\nCopy the image you want to slice into the Python root directory, open a python shell (not the \"command line\"), and enter these commands:\nimport image_slicer\nimage_slicer.slice('huge_test_image.png', 14)\n\nThe beauty of this module is that it \n\nIs installed in python \nCan invoke an image split with two lines of code \nAccepts any even number as an image slice parameter (e.g. 14 in this example) \nTakes that parameter and automagically splits the given image into so many slices, and auto-saves the resultant numbered tiles in the same directory, and finally \nHas a function to stitch the image tiles back together (which I haven't yet tested); files apparently must be named after the convention which you will see in the split files after testing the image_slicer.slice function.\n\n", "from PIL import Image\n\ndef crop(path, input, height, width, k, page, area):\n im = Image.open(input)\n imgwidth, imgheight = im.size\n for i in range(0,imgheight,height):\n for j in range(0,imgwidth,width):\n box = (j, i, j+width, i+height)\n a = im.crop(box)\n try:\n o = a.crop(area)\n o.save(os.path.join(path,\"PNG\",\"%s\" % page,\"IMG-%s.png\" % k))\n except:\n pass\n k +=1\n\n", "As an alternative solution, we will construct the tiles by generating a grid of coordinates using itertools.product. We will ignore partial tiles on the edges, only iterating through the cartesian product between the two intervals, i.e. range(0, h-h%d, d) X range(0, w-w%d, d).\nGiven filename: the image file name, d: the tile size, dir_in: the path to the directory containing the image, and dir_out: the directory where tiles will be outputted:\nfrom PIL import Image\nfrom itertools import product\n\ndef tile(filename, dir_in, dir_out, d):\n name, ext = os.path.splitext(filename)\n img = Image.open(os.path.join(dir_in, filename))\n w, h = img.size\n \n grid = product(range(0, h-h%d, d), range(0, w-w%d, d))\n for i, j in grid:\n box = (j, i, j+d, i+d)\n out = os.path.join(dir_out, f'{name}_{i}_{j}{ext}')\n img.crop(box).save(out)\n\n\n", "\ncrop would be a more reusable\nfunction if you separate the\ncropping code from the\nimage saving\ncode. It would also make the call\nsignature simpler.\nim.crop returns a\nImage._ImageCrop instance. Such\ninstances do not have a save method.\nInstead, you must paste the\nImage._ImageCrop instance onto a\nnew Image.Image\nYour ranges do not have the right\nstep sizes. (Why height-2 and not\nheight? for example. 
Why stop at\nimgheight-(height/2)?).\n\nSo, you might try instead something like this:\nimport Image\nimport os\n\ndef crop(infile,height,width):\n im = Image.open(infile)\n imgwidth, imgheight = im.size\n for i in range(imgheight//height):\n for j in range(imgwidth//width):\n box = (j*width, i*height, (j+1)*width, (i+1)*height)\n yield im.crop(box)\n\nif __name__=='__main__':\n infile=...\n height=...\n width=...\n start_num=...\n for k,piece in enumerate(crop(infile,height,width),start_num):\n img=Image.new('RGB', (height,width), 255)\n img.paste(piece)\n path=os.path.join('/tmp',\"IMG-%s.png\" % k)\n img.save(path)\n\n", "Here is a concise, pure-python solution that works in both python 3 and 2:\nfrom PIL import Image\n\ninfile = '20190206-135938.1273.Easy8thRunnersHopefully.jpg'\nchopsize = 300\n\nimg = Image.open(infile)\nwidth, height = img.size\n\n# Save Chops of original image\nfor x0 in range(0, width, chopsize):\n for y0 in range(0, height, chopsize):\n box = (x0, y0,\n x0+chopsize if x0+chopsize < width else width - 1,\n y0+chopsize if y0+chopsize < height else height - 1)\n print('%s %s' % (infile, box))\n img.crop(box).save('zchop.%s.x%03d.y%03d.jpg' % (infile.replace('.jpg',''), x0, y0))\n\nNotes:\n The crops that go over the right and bottom of the original image are adjusted to the original image limit and contain only the original pixels.\n It's easy to choose a different chopsize for w and h by using two chopsize vars and replacing chopsize as appropriate in the code above.\n", "Not sure if this is the most efficient answer, but it works for me:\nimport os\nimport glob\nfrom PIL import Image\nImage.MAX_IMAGE_PIXELS = None # to avoid image size warning\n\nimgdir = \"/path/to/image/folder\"\n# if you want file of a specific extension (.png):\nfilelist = [f for f in glob.glob(imgdir + \"**/*.png\", recursive=True)]\nsavedir = \"/path/to/image/folder/output\"\n\nstart_pos = start_x, start_y = (0, 0)\ncropped_image_size = w, h = (500, 500)\n\nfor file in filelist:\n img = Image.open(file)\n width, height = img.size\n\n frame_num = 1\n for col_i in range(0, width, w):\n for row_i in range(0, height, h):\n crop = img.crop((col_i, row_i, col_i + w, row_i + h))\n name = os.path.basename(file)\n name = os.path.splitext(name)[0]\n save_to= os.path.join(savedir, name+\"_{:03}.png\")\n crop.save(save_to.format(frame_num))\n frame_num += 1\n\nThis is mostly based on DataScienceGuy answer here\n", "Here is a late answer that works with Python 3\nfrom PIL import Image\nimport os\n\ndef imgcrop(input, xPieces, yPieces):\n filename, file_extension = os.path.splitext(input)\n im = Image.open(input)\n imgwidth, imgheight = im.size\n height = imgheight // yPieces\n width = imgwidth // xPieces\n for i in range(0, yPieces):\n for j in range(0, xPieces):\n box = (j * width, i * height, (j + 1) * width, (i + 1) * height)\n a = im.crop(box)\n try:\n a.save(\"images/\" + filename + \"-\" + str(i) + \"-\" + str(j) + file_extension)\n except:\n pass\n\nUsage:\nimgcrop(\"images/testing.jpg\", 5, 5)\n\nThen the images will be cropped into pieces according to the specified X and Y pieces, in my case 5 x 5 = 25 pieces\n", "Here is another solution, just using NumPy built-in np.array_split :\ndef divide_img_blocks(img, n_blocks=(5, 5)):\n horizontal = np.array_split(img, n_blocks[0])\n splitted_img = [np.array_split(block, n_blocks[1], axis=1) for block in horizontal]\n return np.asarray(splitted_img, dtype=np.ndarray).reshape(n_blocks)\n\nIt returns a NumPy array with the dimension passed as 
n_blocks.\nEach element of the array is a block, so to access each block and save it as an image you should write something like the following:\nresult = divide_img_blocks(my_image)\n\nfor i in range(result.shape[0]):\n for j in range(result.shape[1]):\n cv2.imwrite(f\"my_block_{i}_{j}.jpg\", result[i,j])\n\nThis answer is very fast, faster than @Nir answer, which among the posted ones was the cleanest. Additionally is almost three orders of magnitude faster than the suggested package (i.e. image_slicer).\nTime taken by divide_img_blocks: 0.0009832382202148438\nTime taken by Nir answer: 0.002960681915283203\nTime taken by image_slicer.slice: 0.4419238567352295\n\nHope it can still be useful.\n", "I find it easier to skimage.util.view_as_windows or `skimage.util.view_as_blocks which also allows you to configure the step\nhttp://scikit-image.org/docs/dev/api/skimage.util.html?highlight=view_as_windows#skimage.util.view_as_windows\n", "import os\nimport sys\nfrom PIL import Image\n\nsavedir = r\"E:\\new_mission _data\\test\"\nfilename = r\"E:\\new_mission _data\\test\\testing1.png\"\nimg = Image.open(filename)\nwidth, height = img.size\nstart_pos = start_x, start_y = (0, 0)\ncropped_image_size = w, h = (1024,1024)\n\nframe_num = 1\nfor col_i in range(0, width, w):\n for row_i in range(0, height, h):\n crop = img.crop((col_i, row_i, col_i + w, row_i + h))\n save_to= os.path.join(savedir, \"testing_{:02}.png\")\n crop.save(save_to.format(frame_num))\n frame_num += 1\n\n", "For anyone looking for a simple approach to this, here is a simple working function for splitting an image into NxN sections.\ndef slice_image(filename, N):\n\n i = Image.open(filename)\n\n width = i.width\n height = i.height\n\n for x in range(N):\n\n for y in range(N):\n\n index = (x * pieces) + 1 + y\n\n img = i.crop((x * width/N, y * height/N,\n x * width/N+ width/N, y * height/N+ height/N))\n\n img.save(f\"{filename}_sliced_{index}.jpeg\")\n\n", "Thanks @Ivan for teaching me something about itertools and grids. Came here to split up tomographic 3D image data (tif-files) into smaller regions for evaluation. I adapted the script to 3D-TIF files (using the tiffile library) and added a \"centered\" approach. So the tiles don't start in the upper-left corner but are centered and crop too small tiles at the borders at each direction. 
Maybe this also help other people.\nfrom itertools import product\nimport tifffile as tif\nimport numpy as np\n\npath = 'PATH'\nfilename= 'FILENAME.tif'\nimg = tif.imread(path+filename)\n\ndepth, height, width = img.shape\ntilesize = 100\n\ngrid = product(range(int((depth%tilesize)/2), int(depth-(depth%tilesize)/2), tilesize),\n range(int((width%tilesize)/2), int(width-((width%tilesize)/2)), tilesize), \n range(int((height%tilesize)/2), int(height-(height%tilesize)/2), tilesize))\n\nfor z,y,x in grid:\n crop = img[z:z+tilesize, y:y+tilesize, x:x+tilesize]\n tif.imwrite(path+filename+f'{z:04d}z_{y:04d}y_{x:04d}x.tif', crop, dtype = np.uint8)\n\n", "This is my script tools, it is very sample to splite css-sprit image into icons:\nUsage: split_icons.py img dst_path width height\nExample: python split_icons.py icon-48.png gtliu 48 48\n\nSave code into split_icons.py :\n#!/usr/bin/env python\n# -*- coding:utf-8 -*-\nimport os\nimport sys\nimport glob\nfrom PIL import Image\n\ndef Usage():\n print '%s img dst_path width height' % (sys.argv[0])\n sys.exit(1)\n\nif len(sys.argv) != 5:\n Usage()\n\nsrc_img = sys.argv[1]\ndst_path = sys.argv[2]\n\nif not os.path.exists(sys.argv[2]) or not os.path.isfile(sys.argv[1]):\n print 'Not exists', sys.argv[2], sys.argv[1]\n sys.exit(1)\n\nw, h = int(sys.argv[3]), int(sys.argv[4])\nim = Image.open(src_img)\nim_w, im_h = im.size\nprint 'Image width:%d height:%d will split into (%d %d) ' % (im_w, im_h, w, h)\nw_num, h_num = int(im_w/w), int(im_h/h)\n\nfor wi in range(0, w_num):\n for hi in range(0, h_num):\n box = (wi*w, hi*h, (wi+1)*w, (hi+1)*h)\n piece = im.crop(box)\n tmp_img = Image.new('L', (w, h), 255)\n tmp_img.paste(piece)\n img_path = os.path.join(dst_path, \"%d_%d.png\" % (wi, hi))\n tmp_img.save(img_path)\n\n", "I tried the solutions above, but sometimes you just gotta do it yourself.\nMight be off by a pixel in some cases but works fine in general.\nimport matplotlib.pyplot as plt\nimport numpy as np\ndef image_to_tiles(im, number_of_tiles = 4, plot=False):\n \"\"\"\n Function that splits SINGLE channel images into tiles\n :param im: image: single channel image (NxN matrix)\n :param number_of_tiles: squared number\n :param plot:\n :return tiles:\n \"\"\"\n n_slices = np.sqrt(number_of_tiles)\n assert int(n_slices + 0.5) ** 2 == number_of_tiles, \"Number of tiles is not a perfect square\"\n\n n_slices = n_slices.astype(np.int)\n [w, h] = cropped_npy.shape\n\n r = np.linspace(0, w, n_slices+1)\n r_tuples = [(np.int(r[i]), np.int(r[i+1])) for i in range(0, len(r)-1)]\n q = np.linspace(0, h, n_slices+1)\n q_tuples = [(np.int(q[i]), np.int(q[i+1])) for i in range(0, len(q)-1)]\n\n tiles = []\n for row in range(n_slices):\n for column in range(n_slices):\n [x1, y1, x2, y2] = *r_tuples[row], *q_tuples[column] \n tiles.append(im[x1:y1, x2:y2])\n\n if plot:\n fig, axes = plt.subplots(n_slices, n_slices, figsize=(10,10))\n c = 0\n for row in range(n_slices):\n for column in range(n_slices):\n axes[row,column].imshow(tiles[c])\n axes[row,column].axis('off')\n c+=1\n\n return tiles\n\nHope it helps.\n", "I would suggest to use multiprocessing instead of a regular for loop as follows:\nfrom PIL import Image\nimport os\n\ndef crop(infile,height,width):\n im = Image.open(infile)\n imgwidth, imgheight = im.size\n for i in range(imgheight//height):\n for j in range(imgwidth//width):\n box = (j*width, i*height, (j+1)*width, (i+1)*height)\n yield im.crop(box)\n\ndef til_image(infile):\n infile=...\n height=...\n width=...\n start_num=...\n for k,piece in 
enumerate(crop(infile,height,width),start_num):\n img=Image.new('RGB', (height,width), 255)\n img.paste(piece)\n path=os.path.join('/tmp',\"IMG-%s.png\" % k)\n img.save(path)\n\nfrom multiprocessing import Pool, cpu_count\ntry:\n pool = Pool(cpu_count())\n pool.imap_unordered(tile_image, os.listdir(root), chunksize=4)\nfinally:\n pool.close()\n\n", "the easiest way:\nimport image_slicer\nimage_slicer.slice('/Address of image for exp/A1.png',16)\n\nthis command splits the image into 16 slices and saves them in the directory that the input image is there.\nyou should first install image_slicer:\npip install image_slicer\n\n", "Splitting an image into squares of a specific size\nI adapted a solution so that it accepts a specific tile size instead of an amount of tiles because I needed to cut the image up into a grid of 32px squares.\nThe parameters are the image_path and the size of the tiles in pixels.\nI tried to make the code as readable as possible.\n# Imports\nfrom PIL import Image\nimport os\nimport random\n\n# Function\ndef image_to_tiles(im, tile_size = 32):\n \"\"\"\n Function that splits an image into tiles\n :param im: image: image path\n :param tile_size: width in pixels of a tile\n :return tiles:\n \"\"\"\n image = Image.open(im)\n \n w = image.width\n h = image.height\n \n row_count = np.int64((h-h%tile_size)/tile_size)\n col_count = np.int64((w-w%tile_size)/tile_size)\n \n n_slices = np.int64(row_count*col_count)\n \n # Image info\n print(f'Image: {im}')\n print(f'Dimensions: w:{w} h:{h}')\n print(f'Tile count: {n_slices}')\n\n\n r = np.linspace(0, w, row_count+1)\n r_tuples = [(np.int64(r[i]), np.int64(r[i])+tile_size) for i in range(0, len(r)-1)]\n q = np.linspace(0, h, col_count+1)\n q_tuples = [(np.int64(q[i]), np.int64(q[i])+tile_size) for i in range(0, len(q)-1)]\n \n #print(f'r_tuples:{r_tuples}\\n\\nq_tuples:{q_tuples}\\n')\n \n tiles = []\n for row in range(row_count):\n for column in range(col_count):\n [y1, y2, x1, x2] = *r_tuples[row], *q_tuples[column]\n x2 = x1+tile_size\n y2 = y1+tile_size\n tile_image = image.crop((x1,y1,x2,y2))\n tile_coords = {'x1':x1,'y1':y1,'x2':x2,'y2':y2}\n tiles.append({'image':tile_image,'coords':tile_coords})\n\n return tiles\n\n# Testing:\nimg_path ='/home/user/path/to/image.jpg'\ntiles = image_to_tiles(img_path)\n\nfor i in range(20):\n tile = random.choice(tiles)\n tile['image'].show()\n\n", " '''def split(img,nbxsplit,nbysplit):#this is for number of splitting in line and column\n xpart=int(img.shape[0]/nbxsplit)#number of part or region in line shape of image\n ypart=int(img.shape[1]/nbysplit)#number of part in column shape of image\n arr=[]#empty arr for storage\n for i in range(0,img.shape[0]-xpart+1,xpart):#boucle stepping by part of x and end until she arrived the last part ,in the there will be lost of of little size beacause sometimes thelast part we can't devide it into flly part\n\n for j in range(0,img.shape[1]-ypart+1,ypart):\n arr.append(img[i:i+xpart,j:j+ypart])\n a=np.array(arr)\n a=np.reshape(a,(nbxsplit,nbysplit))\n return a\n '''\n\n", "you can use numpy stride tricks to achive this, but be careful, as this function has to be used with extreme care (doc)\nimport numpy as np\nfrom numpy.lib.stride_tricks import as_strided\n\ndef img_pieces(img, piece_size):\n height, width, chanels = img.shape\n n_bytes = img.strides[-1]\n\n return np.reshape(\n as_strided(\n img,\n (\n height // piece_size,\n width // piece_size,\n piece_size,\n piece_size,\n chanels\n ),\n (\n n_bytes * chanels * width * piece_size,\n n_bytes * 
chanels * piece_size,\n n_bytes * chanels * width,\n n_bytes * chanels,\n n_bytes\n )\n ),\n (\n -1,\n piece_size,\n piece_size,\n chanels\n )\n )\n\n" ]
[ 44, 41, 39, 34, 20, 4, 3, 3, 3, 2, 2, 1, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "import cv2\n\ndef crop_image(image_path, output_path):\n im = cv2.imread(os.listdir()[2])\n imgheight=im.shape[0]\n imgwidth=im.shape[1]\n\n y1 = 0\n M = 2000\n N = 2000\n for y in range(0,imgheight,M):\n for x in range(0, imgwidth, N):\n y1 = y + M\n x1 = x + N\n tiles = im[y:y+M,x:x+N]\n if tiles.shape[0] < 100 or tiles.shape[1]<100:\n continue\n\n cv2.rectangle(im, (x, y), (x1, y1), (0, 255, 0))\n cv2.imwrite(output_path + str(x) + '_' + str(y)+\"{}.png\".format(image_path),tiles)\ncrop_image(os.listdir()[2], './cutted/')\n\n" ]
[ -1 ]
[ "crop", "image", "python", "python_imaging_library", "split" ]
stackoverflow_0005953373_crop_image_python_python_imaging_library_split.txt
Q: AttributeError: 'Pipeline' object has no attribute 'fit_resample' Based on the documentation given on the following link pipeline and imbalanced i have tried to implement code on some dataset, here is code : import numpy as np import pandas as pd from collections import Counter from sklearn.preprocessing import LabelEncoder,OneHotEncoder from imblearn.over_sampling import SMOTE from imblearn.under_sampling import RandomUnderSampler from sklearn.pipeline import Pipeline from sklearn.naive_bayes import GaussianNB data =pd.read_csv('aug_train.csv') data.drop('id',axis=1,inplace=True) print(data.info()) print(data.select_dtypes(include='object').columns.tolist()) data[data.select_dtypes(include='object').columns.tolist()]=data[data.select_dtypes(include='object').columns.tolist()].apply(LabelEncoder().fit_transform) print(data.head()) #print(data['Response'].value_counts()) mymodel =GaussianNB() y =data['Response'].values print(Counter(y)) X =data.drop('Response',axis=1).values #X,y =SMOTE().fit_resample(X,y) #mymodel.fit(X,y) #print(mymodel.score(X,y)) #print(Counter(y)) over = SMOTE(sampling_strategy=0.1) under = RandomUnderSampler(sampling_strategy=0.5) steps = [('o', over), ('u', under)] pipeline = Pipeline(steps=steps) # transform the dataset X, y = pipeline.fit_sample(X, y) the main problem in this code is with line : X, y = pipeline.fit_sample(X, y) error says that AttributeError: 'Pipeline' object has no attribute 'fit_resample' how can i fix this issue? thanks in advance A: The tutorial employs imblearn.pipeline.Pipeline, while your code uses sklearn.pipeline.Pipeline (check import expressions). These appear to be different kinds of pipelines.
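A minimal sketch of the fix implied by the answer, assuming imbalanced-learn is installed and X, y are built exactly as in the question; only the Pipeline import changes, and the method is fit_resample:
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from imblearn.pipeline import Pipeline   # imblearn's Pipeline, not sklearn's

over = SMOTE(sampling_strategy=0.1)
under = RandomUnderSampler(sampling_strategy=0.5)
pipeline = Pipeline(steps=[('o', over), ('u', under)])

# imblearn pipelines expose fit_resample (fit_sample is the older name that has since been removed)
X_resampled, y_resampled = pipeline.fit_resample(X, y)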
AttributeError: 'Pipeline' object has no attribute 'fit_resample'
Based on the documentation given on the following link pipeline and imbalanced i have tried to implement code on some dataset, here is code : import numpy as np import pandas as pd from collections import Counter from sklearn.preprocessing import LabelEncoder,OneHotEncoder from imblearn.over_sampling import SMOTE from imblearn.under_sampling import RandomUnderSampler from sklearn.pipeline import Pipeline from sklearn.naive_bayes import GaussianNB data =pd.read_csv('aug_train.csv') data.drop('id',axis=1,inplace=True) print(data.info()) print(data.select_dtypes(include='object').columns.tolist()) data[data.select_dtypes(include='object').columns.tolist()]=data[data.select_dtypes(include='object').columns.tolist()].apply(LabelEncoder().fit_transform) print(data.head()) #print(data['Response'].value_counts()) mymodel =GaussianNB() y =data['Response'].values print(Counter(y)) X =data.drop('Response',axis=1).values #X,y =SMOTE().fit_resample(X,y) #mymodel.fit(X,y) #print(mymodel.score(X,y)) #print(Counter(y)) over = SMOTE(sampling_strategy=0.1) under = RandomUnderSampler(sampling_strategy=0.5) steps = [('o', over), ('u', under)] pipeline = Pipeline(steps=steps) # transform the dataset X, y = pipeline.fit_sample(X, y) the main problem in this code is with line : X, y = pipeline.fit_sample(X, y) error says that AttributeError: 'Pipeline' object has no attribute 'fit_resample' how can i fix this issue? thanks in advance
[ "The tutorial employs imblearn.pipeline.Pipeline, while your code uses sklearn.pipeline.Pipeline (check import expressions). These appear to be different kinds of pipelines.\n" ]
[ 1 ]
[]
[]
[ "imblearn", "python" ]
stackoverflow_0074614451_imblearn_python.txt
Q: convert nanosecond precision datetime to snowflake TIMESTAMP_NTZ format I have a string datetime "2017-01-01T20:19:47.922596536+09". I would like to convert this into snowflake's DATETIME_NTZ date type (which can be found here). Simply put, DATETIME_NTZ is defined as TIMESTAMP_NTZ TIMESTAMP_NTZ internally stores “wallclock” time with a specified precision. All operations are performed without taking any time zone into account. If the output format contains a time zone, the UTC indicator (Z) is displayed. TIMESTAMP_NTZ is the default for TIMESTAMP. Aliases for TIMESTAMP_NTZ: TIMESTAMPNTZ TIMESTAMP WITHOUT TIME ZONE I've tried using numpy.datetime64 but I get the following: > numpy.datetime64("2017-01-01T20:19:47.922596536+09") numpy.datetime64('2017-01-01T11:19:47.922596536') This for some reason converts the time to certain timezone. I've also tried pd.to_datetime: > pd.to_datetime("2017-01-01T20:19:47.922596536+09") Timestamp('2017-01-01 20:19:47.922596536+0900', tz='pytz.FixedOffset(540)') This gives me the correct value but when I try to insert the above value to snowflake db, I get the following error: sqlalchemy.exc.ProgrammingError: (snowflake.connector.errors.ProgrammingError) 252004: Failed processing pyformat-parameters: 255001: Binding data in type (timestamp) is not supported. Any suggestions would be much appreciated! A: You can do this on the Snowflake side if you want by sending the string format as-is and converting to a timestamp_ntz. This single line shows two ways, one that simply strips off the time zone information, and one that converts the time zone to UTC before stripping off the time zone. select try_to_timestamp_ntz('2017-01-01T20:19:47.922596536+09', 'YYYY-MM-DD"T"HH:MI:SS.FF9TZH') TS_NTZ ,convert_timezone('UTC', try_to_timestamp_tz('2017-01-01T20:19:47.922596536+09', 'YYYY-MM-DD"T"HH:MI:SS.FF9TZH'))::timestamp_ntz UTC_TS_NTZ ; Note that Snowflake UI by default only shows 3 decimal places (milliseconds) unless you specify higher precision for the output display using to_varchar() and a timestamp format string. TS_NTZ UTC_TS 2017-01-01 20:19:47.922596536 2017-01-01 11:19:47.922596536
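As a client-side alternative (a sketch only; the exact binding behaviour depends on your connector and SQLAlchemy versions), the timezone can be stripped in pandas before the insert so that a timezone-aware Timestamp is never bound:
import pandas as pd

ts = pd.to_datetime("2017-01-01T20:19:47.922596536+09")

wallclock = ts.tz_localize(None)                         # keep the written wallclock time
utc_wallclock = ts.tz_convert("UTC").tz_localize(None)   # or normalise to UTC first

# Nanosecond precision survives as a string, which Snowflake can parse on insert;
# str() on a pandas Timestamp keeps all nine fractional digits.
ts_string = str(wallclock)   # '2017-01-01 20:19:47.922596536'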
convert nanosecond precision datetime to snowflake TIMESTAMP_NTZ format
I have a string datetime "2017-01-01T20:19:47.922596536+09". I would like to convert this into snowflake's DATETIME_NTZ date type (which can be found here). Simply put, DATETIME_NTZ is defined as TIMESTAMP_NTZ TIMESTAMP_NTZ internally stores “wallclock” time with a specified precision. All operations are performed without taking any time zone into account. If the output format contains a time zone, the UTC indicator (Z) is displayed. TIMESTAMP_NTZ is the default for TIMESTAMP. Aliases for TIMESTAMP_NTZ: TIMESTAMPNTZ TIMESTAMP WITHOUT TIME ZONE I've tried using numpy.datetime64 but I get the following: > numpy.datetime64("2017-01-01T20:19:47.922596536+09") numpy.datetime64('2017-01-01T11:19:47.922596536') This for some reason converts the time to certain timezone. I've also tried pd.to_datetime: > pd.to_datetime("2017-01-01T20:19:47.922596536+09") Timestamp('2017-01-01 20:19:47.922596536+0900', tz='pytz.FixedOffset(540)') This gives me the correct value but when I try to insert the above value to snowflake db, I get the following error: sqlalchemy.exc.ProgrammingError: (snowflake.connector.errors.ProgrammingError) 252004: Failed processing pyformat-parameters: 255001: Binding data in type (timestamp) is not supported. Any suggestions would be much appreciated!
[ "You can do this on the Snowflake side if you want by sending the string format as-is and converting to a timestamp_ntz. This single line shows two ways, one that simply strips off the time zone information, and one that converts the time zone to UTC before stripping off the time zone.\nselect try_to_timestamp_ntz('2017-01-01T20:19:47.922596536+09',\n 'YYYY-MM-DD\"T\"HH:MI:SS.FF9TZH') TS_NTZ\n ,convert_timezone('UTC', \n try_to_timestamp_tz('2017-01-01T20:19:47.922596536+09',\n 'YYYY-MM-DD\"T\"HH:MI:SS.FF9TZH'))::timestamp_ntz UTC_TS_NTZ\n;\n\nNote that Snowflake UI by default only shows 3 decimal places (milliseconds) unless you specify higher precision for the output display using to_varchar() and a timestamp format string.\n\n\n\n\nTS_NTZ\nUTC_TS\n\n\n\n\n2017-01-01 20:19:47.922596536\n2017-01-01 11:19:47.922596536\n\n\n\n" ]
[ 0 ]
[]
[]
[ "datetime", "numpy", "pandas", "python", "snowflake_cloud_data_platform" ]
stackoverflow_0074611360_datetime_numpy_pandas_python_snowflake_cloud_data_platform.txt
Q: How to combine X_test, y test, and y predictions after text analytics prediction?
After using logistic regression on text analytics, I was trying to combine the X_test, y_arr_test (label), and y_predictions into ONE dataframe, but don't know how to do it. Need help.
'''
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
vectorizer.fit(X_arr_train)
X_train = vectorizer.transform(X_arr_train)
X_test = vectorizer.transform(X_arr_test)
X_train
'''
'''
# logistic Reg
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier.fit(X_train, y_arr_train)
score = classifier.score(X_test, y_arr_test)
y_predictions=classifier.predict(X_test)
'''
X_test returns:
<1333x5676 sparse matrix of type '<class 'numpy.int64'>' with 26934 stored elements in Compressed Sparse Row format>
np.shape(y_arr_test) returns: (1333,)
Then I don't know how to put X_test, y_arr_test (label), and y_predictions into ONE dataframe. The goal is to show the wrong predictions and understand why. Thanks.
A: For the existing df you can just add the prediction results like:
x_train['preds'] = y_predictions
same goes for labels like
x_train['labels'] = y_train
or all in one:
new_df = x_train.copy()
new_df['preds'] = y_predictions
new_df['labels'] = y_train
Have you tried this?
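Since X_test here is a SciPy sparse matrix rather than a DataFrame, a sketch built from the raw test texts may be closer to what the question asks for; the names X_arr_test, y_arr_test, y_predictions and vectorizer are taken from the question, and the optional sparse expansion assumes a newer scikit-learn with get_feature_names_out:
import pandas as pd

results = pd.DataFrame({
    "text": X_arr_test,           # the raw test documents
    "label": y_arr_test,          # true labels
    "prediction": y_predictions,  # model output
})

# rows the classifier got wrong, for manual inspection
wrong = results[results["label"] != results["prediction"]]
print(wrong.head())

# optionally attach the vectorised features as (sparse) columns as well
features = pd.DataFrame.sparse.from_spmatrix(
    X_test, columns=vectorizer.get_feature_names_out()
)
full = pd.concat([results.reset_index(drop=True), features], axis=1)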
How to combine X_test, y test, and y predictions after text analytics prediction?
After using logitics Reg on text analytics, I was trying to combine the X_test, y_arr_test (label), and y_predictions to ONE dataframe, but don't know how to do it. Need help. ''' from sklearn.feature_extraction.text import CountVectorizer vectorizer = CountVectorizer() vectorizer.fit(X_arr_train) X_train = vectorizer.transform(X_arr_train) X_test = vectorizer.transform(X_arr_test) X_train ''' ''' # logistic Reg from sklearn.linear_model import LogisticRegression classifier = LogisticRegression() classifier.fit(X_train, y_arr_train) score = classifier.score(X_test, y_arr_test) y_predictions=classifier.predict(X_test) ''' X_test return: <1333x5676 sparse matrix of type '<class 'numpy.int64'>' with 26934 stored elements in Compressed Sparse Row format> np.shape(y_arr_test) return: (1333,) Then I dont know how to put X_test, y_arr_test (label), and y_predictions to ONE dataframe. The goal is to show the wrong predictions and know why. Thanks.
[ "for the existing df you can just add the prediction results like:\nx_train['preds'] = y_predictions\nsame goes for labels like\nx_train['labels'] = y_train\nor all in one:\nnew_df = x_train.copy()\n\nnew_df['preds'] = y_predictions\nnew_df['labels'] = y_train\n\n\nhave you tried this?\n" ]
[ 0 ]
[]
[]
[ "concatenation", "nlp", "pandas", "python", "scikit_learn" ]
stackoverflow_0065360549_concatenation_nlp_pandas_python_scikit_learn.txt
Q: How do I apply my Random Forest classifier to an unlabelled dataset? Using sklearn, I have just finished training, tuning hyperparameters and testing a Random Forest Multiclass Classifier using RandomizedSearchCV. I have obtained the best parameters, best score and so on. This was all done with a labelled dataset. Now I want to apply this classifier onto an unlabelled dataset (meaning there are only the features and no classes) to make class/label predictions. How do I go about doing this? I haven't tried anything yet because I am stuck. A: Edit: This answer is based on the following version of the question: https://stackoverflow.com/revisions/74613826/2 You can use the forest_search.predict(X_test) method, which will use the best parameters found in search. A: Or you can try to go to unsupervised learning direction and try one of the clustering methods (for example, KMeans). Here is the example: from sklearn.cluster import KMeans import numpy as np X = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]]) kmeans = KMeans(n_clusters=2, random_state=0).fit_predict(X) print(kmeans) Cool thing about this approach, that you compare new labels, received after clusting with the results of your RandomForest previous model
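A short sketch of the prediction step, assuming the fitted search object is called search (any name works) and that the unlabelled CSV has exactly the same feature columns, in the same order, as the training data; the file name is a placeholder:
import pandas as pd

X_new = pd.read_csv("unlabelled.csv")        # features only, no class column

# With the default refit=True, RandomizedSearchCV already holds the model
# refitted with the best parameters, so predict() uses it directly.
predicted_classes = search.predict(X_new)
class_probabilities = search.predict_proba(X_new)   # per-class probabilities

X_new["predicted_class"] = predicted_classes
X_new.to_csv("unlabelled_with_predictions.csv", index=False)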
How do I apply my Random Forest classifier to an unlabelled dataset?
Using sklearn, I have just finished training, tuning hyperparameters and testing a Random Forest Multiclass Classifier using RandomizedSearchCV. I have obtained the best parameters, best score and so on. This was all done with a labelled dataset. Now I want to apply this classifier onto an unlabelled dataset (meaning there are only the features and no classes) to make class/label predictions. How do I go about doing this? I haven't tried anything yet because I am stuck.
[ "Edit: This answer is based on the following version of the question: https://stackoverflow.com/revisions/74613826/2\nYou can use the forest_search.predict(X_test) method, which will use the best parameters found in search.\n", "Or you can try to go to unsupervised learning direction and try one of the clustering methods (for example, KMeans).\nHere is the example:\nfrom sklearn.cluster import KMeans\nimport numpy as np\nX = np.array([[1, 2], [1, 4], [1, 0],\n [10, 2], [10, 4], [10, 0]])\nkmeans = KMeans(n_clusters=2, random_state=0).fit_predict(X)\nprint(kmeans)\n\nCool thing about this approach, that you compare new labels, received after clusting with the results of your RandomForest previous model\n" ]
[ 1, 0 ]
[]
[]
[ "classification", "python", "random_forest", "scikit_learn" ]
stackoverflow_0074613826_classification_python_random_forest_scikit_learn.txt
Q: Cannot locate python module: KV Language, PyInstaller
I have an application developed in Kivy which works fine when I execute it using a Python interpreter. The problem happens when I try to execute it after creating an executable using PyInstaller. The .kv file is unable to locate the Python modules that it needs. I believe this has something to do with the root path configuration of the KV Language, but I couldn't find anything to resolve it. "config.py" is the harness/entry file that PyInstaller analyzes. The error occurs in "kv/root_screen.kv", which imports the respective Python modules present in the "baseclass" folder. Find the screenshot of the error and directory structure below. I have tried executing the files from multiple directories to no effect.
A: I had the same problem; for me the solution was to fully uninstall all your versions of Python, NOT your code editors, but the Python files. Usually located in:
C:\Users\YOUR PC NAME\AppData\Local\Programs\Python\Python311
After that go to the official Python website:
https://www.python.org/downloads/
And download your preferred version. In the install click: Add Python... to Path:
https://i.stack.imgur.com/YXYmE.png
Then install your modules using pip. Please let me know if this helped!
Lexxnl
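For the PyInstaller problem itself, a more targeted approach than reinstalling Python is usually to bundle the .kv files as data and to declare the modules they import, because PyInstaller cannot see imports made from kv syntax. The fragment below is a sketch of additions to the .spec file (Analysis is provided by PyInstaller, so this is not standalone Python), and the names "kv", "baseclass" and "baseclass.root_screen" are guesses based on the folder layout described in the question:
a = Analysis(
    ["config.py"],
    pathex=["."],
    datas=[
        ("kv", "kv"),                # copy the .kv files into the bundle
        ("baseclass", "baseclass"),  # keep the widget base classes available
    ],
    hiddenimports=[
        "baseclass.root_screen",     # list every module imported from root_screen.kv
    ],
)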
Cannot locate python module: KV Language, PyInstaller
I have an application developed in Kivy which works fine when I execute using a python interpreter. The problem happens when I try to execute after creating an executable using pyinstaller. The .kv file is unable to locate the python modules that it needs. I believe this has something to do with root path configuration of the KV Language but I couldn't find anything to resolve it. "config.py" is the harness/entry file that pyinstaller analyzes. The error occurs in "kv/root_screen.kv" which imports the respective python modules present in "baseclass" folder. Find the screenshot of the error and directory structure below. I have tried executing the files from multiple directories to no effect.
[ "I had the same problem, for me the solution was to fully unintall all your versions of python, NOT your code editors, but the Python files.\nUsually located in:\nC:\\Users\\YOUR PC NAME\\AppData\\Local\\Programs\\Python\\Python311\nAfter that go to the official python website:\nhttps://www.python.org/downloads/\nAnd download your prefferred version.\nIn the install click: Add Python... to Path:\nhttps://i.stack.imgur.com/YXYmE.png\nThen install your modules using PIP.\nPlease let me know if this helped!\n\nLexxnl\n\n" ]
[ 0 ]
[]
[]
[ "kivy_language", "pyinstaller", "python" ]
stackoverflow_0074614620_kivy_language_pyinstaller_python.txt
Q: Failing to pre-sign s3 url in Bahrain AWS region ONLY I've had some Python code that pre-signs AWS S3 URLs that have been working for years. We just added a new bucket in the Bahrain AWS data center. This location was disabled and required explicitly enabling that data center. That all seemed fine. However, the resulting URL always gives me an IllegalLocationConstraintException and I believe boto3 is generating the wrong domain name in the URL. The error indicates that the request is sent to us-east-1. I know you have to specify the region, we have 8-10 regions we are using and they all are fine until this one. The URL generated by boto3.client('s3', ...).generate_presigned_url() seems to give a URL for the us east S3. Given something like this: region: me-south-1 bucket: bucket-name key: full/path/to/file.txt You would expect something like this for the signed URL: https://bucket-name.s3.me-south-1.amazonaws.com/full/path/to/file.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AZIAJ4252S33LNN3Y14Q/20100504/me-south-1/s3/aws4_request&X-Amz-Date=20200504TA22522Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=1717275aff4af5fcff2f44f74615fadb5ba448eb83219c88f59d0792d4e44b8f Notice the domain: bucket-name.s3.me-south-1.amazonaws.com Yet, what we get from boto3.client('s3', ...).generate_presigned_url() is: https://bucket-name.s3.amazonaws.com/full/path/to/file.txt?X-Amz-Algorithm=... Notice that this resulting domain is: s3.amazonaws.com Has anyone had issues with the newer s3 locations like this? If it helps, the full error message we get back is: <?xml version="1.0" encoding="UTF-8"?> <Error> <Code>IllegalLocationConstraintException</Code> <Message> The me-south-1 location constraint is incompatible for the region specific endpoint this request was sent to. </Message> </Error> A: Try specifying endpoint_url in S3 client: boto3.client('s3', endpoint_url='https://s3.me-south-1.amazonaws.com', region_name='me-south-1') If you get the following error The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256. specify signature_version too: from botocore.client import Config boto3.client('s3', config=Config(signature_version='s3v4'), endpoint_url='https://s3.me-south-1.amazonaws.com', region_name='me-south-1') I have tried and it works for me ◡̈ A: I have the same issue in af-south-1 and this is workaround, which works for me: s3 = boto3.client('s3', region_name='af-south-1') endpointUrl = s3.meta.endpoint_url s3 = boto3.client('s3', endpoint_url=endpointUrl, region_name='af-south-1') Yes, as you can see, boto3 S3 client object knows the proper URL but does not use it. Of course it will be better to fix it in boto3.
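Putting the two workarounds together, a sketch of generating the pre-signed URL against the regional endpoint (bucket and key are the placeholders from the question):
import boto3
from botocore.client import Config

region = "me-south-1"
s3 = boto3.client(
    "s3",
    region_name=region,
    endpoint_url=f"https://s3.{region}.amazonaws.com",
    config=Config(signature_version="s3v4", s3={"addressing_style": "virtual"}),
)

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "bucket-name", "Key": "full/path/to/file.txt"},
    ExpiresIn=3600,
)
print(url)   # the host should now include the me-south-1 endpoint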
Failing to pre-sign s3 url in Bahrain AWS region ONLY
I've had some Python code that pre-signs AWS S3 URLs that have been working for years. We just added a new bucket in the Bahrain AWS data center. This location was disabled and required explicitly enabling that data center. That all seemed fine. However, the resulting URL always gives me an IllegalLocationConstraintException and I believe boto3 is generating the wrong domain name in the URL. The error indicates that the request is sent to us-east-1. I know you have to specify the region, we have 8-10 regions we are using and they all are fine until this one. The URL generated by boto3.client('s3', ...).generate_presigned_url() seems to give a URL for the us east S3. Given something like this: region: me-south-1 bucket: bucket-name key: full/path/to/file.txt You would expect something like this for the signed URL: https://bucket-name.s3.me-south-1.amazonaws.com/full/path/to/file.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AZIAJ4252S33LNN3Y14Q/20100504/me-south-1/s3/aws4_request&X-Amz-Date=20200504TA22522Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=1717275aff4af5fcff2f44f74615fadb5ba448eb83219c88f59d0792d4e44b8f Notice the domain: bucket-name.s3.me-south-1.amazonaws.com Yet, what we get from boto3.client('s3', ...).generate_presigned_url() is: https://bucket-name.s3.amazonaws.com/full/path/to/file.txt?X-Amz-Algorithm=... Notice that this resulting domain is: s3.amazonaws.com Has anyone had issues with the newer s3 locations like this? If it helps, the full error message we get back is: <?xml version="1.0" encoding="UTF-8"?> <Error> <Code>IllegalLocationConstraintException</Code> <Message> The me-south-1 location constraint is incompatible for the region specific endpoint this request was sent to. </Message> </Error>
[ "Try specifying endpoint_url in S3 client:\nboto3.client('s3', endpoint_url='https://s3.me-south-1.amazonaws.com', region_name='me-south-1')\n\nIf you get the following error\nThe authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.\n\nspecify signature_version too:\nfrom botocore.client import Config\nboto3.client('s3', config=Config(signature_version='s3v4'), endpoint_url='https://s3.me-south-1.amazonaws.com', region_name='me-south-1')\n\nI have tried and it works for me ◡̈ \n", "I have the same issue in af-south-1 and this is workaround, which works for me:\ns3 = boto3.client('s3', region_name='af-south-1')\nendpointUrl = s3.meta.endpoint_url\ns3 = boto3.client('s3', endpoint_url=endpointUrl, region_name='af-south-1')\n\nYes, as you can see, boto3 S3 client object knows the proper URL but does not use it.\nOf course it will be better to fix it in boto3.\n" ]
[ 5, 0 ]
[]
[]
[ "amazon_s3", "amazon_web_services", "python" ]
stackoverflow_0061602839_amazon_s3_amazon_web_services_python.txt
Q: Could not find a version that satisfies the requirement discord-components. ERROR: No matching distribution found for discord-components
I got this error while developing a Ticket Tool Discord bot. Please help me solve this error.
`
import discord
import datetime
from discord.ext import commands
from discord_components import Button, Select, SelectOption, ComponentsBot, interaction
from discord_components.component import ButtonStyle
`
A: Discord.py 2.0 has built-in buttons with discord.ui.Button; use it instead.
Read the documentation here. There are also plenty of tutorials on it.
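A rough sketch of the discord.py 2.0 equivalent the answer points to (the token is a placeholder, and the button label and handler are illustrative, not part of the original question):
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True   # needed for prefix commands in discord.py 2.0
bot = commands.Bot(command_prefix="!", intents=intents)

class TicketView(discord.ui.View):
    @discord.ui.button(label="Open ticket", style=discord.ButtonStyle.primary)
    async def open_ticket(self, interaction: discord.Interaction, button: discord.ui.Button):
        await interaction.response.send_message("Ticket created!", ephemeral=True)

@bot.command()
async def ticket(ctx: commands.Context):
    await ctx.send("Need help?", view=TicketView())

bot.run("YOUR_TOKEN")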
Could not find a version that satisfies the requirement discord-components. ERROR: No matching distribution found for discord-components
I got this error while Developing a Ticket Tool discord Bot Plz help me to solve this error ` import discord import datetime from discord.ext import commands from discord_components import Button, Select, SelectOption, ComponentsBot, interaction from discord_components.component import ButtonStyle `
[ "Discord.py 2.0 has built in buttons with discord.ui.buttons use it instead.\nRead the documentation here. There are also plenty of tutorials on it.\n" ]
[ 0 ]
[]
[]
[ "components", "discord", "discord.py", "discord_buttons", "python" ]
stackoverflow_0074614321_components_discord_discord.py_discord_buttons_python.txt
Q: No module named 'scipy.signal'
Whenever I try to import scipy.signal it gives the following error:
No module named 'scipy.signal'
I am currently on Python 3.9 with SciPy 1.9.3. I have tried uninstalling and reinstalling SciPy.
A: Well, I solved the problem by completely uninstalling Python and reinstalling it.
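Before reinstalling everything, a quick diagnostic sketch can confirm the usual cause, which is that the interpreter running the script is not the one SciPy was installed into, or that the install is broken:
import sys
print(sys.executable)        # which Python is actually running the script

import scipy
print(scipy.__version__, scipy.__file__)   # where scipy is loaded from

# Reinstall into exactly this interpreter rather than whatever `pip` points at:
#   python -m pip install --force-reinstall scipy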
No module named 'scipy.signal'
Whenever I am trying to import scipy.signal it gives the following error No module named 'scipy.signal' I am currently on python 3.9 and 1.9.3 for scipy.I have tried uninstalling and reinstalling scipy
[ "Well I solved the problem by completely uninstalling python and reinstalling it\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x", "scipy" ]
stackoverflow_0074535849_python_python_3.x_scipy.txt
Q: how to write xml back to a file I have about 100,000 price-table records in XML and I need to remove entries where the price amount is 0.00. The data is structured as follows: <data> <price-table product-id="100109a"> <amount quantity="1">10.00</amount> </price-table> <price-table product-id="201208c"> <amount quantity="1">0.00</amount> </price-table> </data> I'm trying to use Python to do the work and I have the following: from xml.etree import ElementTree as ET def readfile(): with open('prices.xml') as f: contents = f.read() return(contents) xml_string = readfile() root = ET.fromstring(xml_string) for price_table in root.findall('price-table'): amount = price_table.find('amount') if float(amount.text) != 0: root.remove(price_table) xmltowrite = ET.tostring(root) #print(xmltowrite) with open('xmlwrite.txt', 'w') as j: j.write(xmltowrite) When I run this, the error I get is: TypeError: write() argument must be str, not bytes But my understanding is that the ET.tostring() function should be converting the xmltowrite value to a string... Why is that not a string at the end? A: tostring() returns a bytes object unless encoding="unicode" is used. The code can be simplified quite a bit. There is no need to use open(), fromstring() or tostring(). Just parse the XML file into an ElementTree object, do your changes, and save using ElementTree.write(). from xml.etree import ElementTree as ET tree = ET.parse("prices.xml") root = tree.getroot() for price_table in root.findall('price-table'): amount = price_table.find('amount') if float(amount.text) != 0: root.remove(price_table) tree.write('xmlwrite.txt') A: If you print the type(xmltowrite) you will see it's a <class 'bytes'>. You can decode it with ET.tostring(root).decode("Utf-8"), than you get <class 'str'>.
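If the string-based round trip from the question is preferred, a small sketch of the variant the answer mentions, asking tostring() for text rather than bytes (root built with fromstring() as in the question):
from xml.etree import ElementTree as ET

root = ET.fromstring(xml_string)   # xml_string read as in the question
# ... remove the zero-price entries as above ...
xmltowrite = ET.tostring(root, encoding="unicode")   # returns str, not bytes
with open("xmlwrite.txt", "w", encoding="utf-8") as j:
    j.write(xmltowrite)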
how to write xml back to a file
I have about 100,000 price-table records in XML and I need to remove entries where the price amount is 0.00. The data is structured as follows: <data> <price-table product-id="100109a"> <amount quantity="1">10.00</amount> </price-table> <price-table product-id="201208c"> <amount quantity="1">0.00</amount> </price-table> </data> I'm trying to use Python to do the work and I have the following: from xml.etree import ElementTree as ET def readfile(): with open('prices.xml') as f: contents = f.read() return(contents) xml_string = readfile() root = ET.fromstring(xml_string) for price_table in root.findall('price-table'): amount = price_table.find('amount') if float(amount.text) != 0: root.remove(price_table) xmltowrite = ET.tostring(root) #print(xmltowrite) with open('xmlwrite.txt', 'w') as j: j.write(xmltowrite) When I run this, the error I get is: TypeError: write() argument must be str, not bytes But my understanding is that the ET.tostring() function should be converting the xmltowrite value to a string... Why is that not a string at the end?
[ "tostring() returns a bytes object unless encoding=\"unicode\" is used.\nThe code can be simplified quite a bit. There is no need to use open(), fromstring() or tostring(). Just parse the XML file into an ElementTree object, do your changes, and save using ElementTree.write().\nfrom xml.etree import ElementTree as ET\n\ntree = ET.parse(\"prices.xml\")\nroot = tree.getroot()\n\nfor price_table in root.findall('price-table'):\n amount = price_table.find('amount')\n if float(amount.text) != 0:\n root.remove(price_table)\n\ntree.write('xmlwrite.txt')\n\n", "If you print the type(xmltowrite) you will see it's a <class 'bytes'>. You can decode it with ET.tostring(root).decode(\"Utf-8\"), than you get <class 'str'>.\n" ]
[ 2, 1 ]
[]
[]
[ "elementtree", "python", "xml" ]
stackoverflow_0074609129_elementtree_python_xml.txt
Q: What is the difference between a "pip wheel -e" and "pip install -e"? Building a C-extension library to python, I noticed that building it with with wheel (and then install the *.whl) versus a direct install, gives very different results and build artifacts. Building with: pip install -e . We get an entry for Editable project location in the pip list results The buildi atrifacts are few, but contains *.egg-info folder. # pip install -e . --no-cache-dir # pip list Package Version Editable project location ------------------ ----------- ----------------------------------------------- black-scholes 0.0.1 C:\path-to\fbs # tree -L 3 . ├── docs ├── examples │   └── fbs_test.py ├── src │   ├── black_scholes │   │   ├── __init__.py │   │   └── fbs.pyd │   ├── black_scholes.egg-info │   │   ├── PKG-INFO │   │   ├── SOURCES.txt │   │   ├── dependency_links.txt │   │   └── top_level.txt │   └── lib │   ├── Makefile │   └── fbs.c ├── .gitignore ├── LICENSE.md ├── README.md ├── pyproject.toml └── setup.py Building with: pip wheel -e . However, when first building with wheel and then installing, we get a very different result. We get an entire build directory with lots of artifacts. Editable project location is empty # pip wheel -e . --no-cache-dir # pip install black_scholes-0.0.1-cp310-cp310-win_amd64.whl # tree -L 6 . ├── build │   ├── bdist.win-amd64 │   ├── lib.win-amd64-cpython-310 │   │   ├── black_scholes │   │   │   ├── __init__.py │   │   │   └── fbs.pyd │   │   └── lib │   │   └── fbs.c │   └── temp.win-amd64-cpython-310 │   └── Release │   └── src │   └── lib │   ├── fbs.exp │   ├── fbs.lib │   └── fbs.obj ├── docs ├── examples │   └── fbs_test.py ├── src │   ├── black_scholes │   │   └── __init__.py │   ├── black_scholes.egg-info │   │   ├── PKG-INFO │   │   ├── SOURCES.txt │   │   ├── dependency_links.txt │   │   └── top_level.txt │   └── lib │   ├── Makefile │   └── fbs.c ├── .gitignore ├── LICENSE.md ├── README.md ├── black_scholes-0.0.1-cp310-cp310-win_amd64.whl ├── pyproject.toml └── setup.py Q: What's going on, and how is pip doing this differently? (Is there a way to do the pip install, step-by-step?) References: This SOA - But still not very clear as it just refers back to setuptools docs A: So in summary: pip install -e . makes your installed (live) package Editable, by pointing and using the local (package project) directory for all it's files. This way you can edit the package files and see the results immediately. pip wheel -e . ignores the -e flag because it only builds the wheel, (in your local directory) and does not install the package. Therefore the -e is redundant. PS. I am not 100% on the ignoring part, but I cannot see any difference from not using it. It could have additional functionalities not documented.
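One quick way to see the practical difference after installing, sketched with the package name from this project: under an editable install the imported module should resolve into the source tree, while a wheel install resolves into site-packages (the paths in the comments are illustrative):
import black_scholes
print(black_scholes.__file__)
# pip install -e .          -> ...\fbs\src\black_scholes\__init__.py
# pip install <name>.whl    -> ...\site-packages\black_scholes\__init__.py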
What is the difference between a "pip wheel -e" and "pip install -e"?
Building a C-extension library to python, I noticed that building it with with wheel (and then install the *.whl) versus a direct install, gives very different results and build artifacts. Building with: pip install -e . We get an entry for Editable project location in the pip list results The buildi atrifacts are few, but contains *.egg-info folder. # pip install -e . --no-cache-dir # pip list Package Version Editable project location ------------------ ----------- ----------------------------------------------- black-scholes 0.0.1 C:\path-to\fbs # tree -L 3 . ├── docs ├── examples │   └── fbs_test.py ├── src │   ├── black_scholes │   │   ├── __init__.py │   │   └── fbs.pyd │   ├── black_scholes.egg-info │   │   ├── PKG-INFO │   │   ├── SOURCES.txt │   │   ├── dependency_links.txt │   │   └── top_level.txt │   └── lib │   ├── Makefile │   └── fbs.c ├── .gitignore ├── LICENSE.md ├── README.md ├── pyproject.toml └── setup.py Building with: pip wheel -e . However, when first building with wheel and then installing, we get a very different result. We get an entire build directory with lots of artifacts. Editable project location is empty # pip wheel -e . --no-cache-dir # pip install black_scholes-0.0.1-cp310-cp310-win_amd64.whl # tree -L 6 . ├── build │   ├── bdist.win-amd64 │   ├── lib.win-amd64-cpython-310 │   │   ├── black_scholes │   │   │   ├── __init__.py │   │   │   └── fbs.pyd │   │   └── lib │   │   └── fbs.c │   └── temp.win-amd64-cpython-310 │   └── Release │   └── src │   └── lib │   ├── fbs.exp │   ├── fbs.lib │   └── fbs.obj ├── docs ├── examples │   └── fbs_test.py ├── src │   ├── black_scholes │   │   └── __init__.py │   ├── black_scholes.egg-info │   │   ├── PKG-INFO │   │   ├── SOURCES.txt │   │   ├── dependency_links.txt │   │   └── top_level.txt │   └── lib │   ├── Makefile │   └── fbs.c ├── .gitignore ├── LICENSE.md ├── README.md ├── black_scholes-0.0.1-cp310-cp310-win_amd64.whl ├── pyproject.toml └── setup.py Q: What's going on, and how is pip doing this differently? (Is there a way to do the pip install, step-by-step?) References: This SOA - But still not very clear as it just refers back to setuptools docs
[ "So in summary:\n\npip install -e . makes your installed (live) package Editable, by pointing and using the local (package project) directory for all it's files. This way you can edit the package files and see the results immediately.\npip wheel -e . ignores the -e flag because it only builds the wheel, (in your local directory) and does not install the package. Therefore the -e is redundant.\n\nPS. I am not 100% on the ignoring part, but I cannot see any difference from not using it. It could have additional functionalities not documented.\n" ]
[ 0 ]
[]
[]
[ "pip", "python", "python_3.x", "setuptools" ]
stackoverflow_0074432908_pip_python_python_3.x_setuptools.txt
Q: problem with cvxopt on mac //incompatible architecture I need cvxopt to run some portfolio optimization scripts. I have a MacBook pro with an M1 chip running Monterey 12.3, Python 3.10.2 and pip 22.0.4. I installed cvxopt with pip, also installed Rosetta2 but I keep getting the following message: Exception has occurred: ImportError dlopen(/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/cvxopt/base.cpython-310-darwin.so, 0x0002): tried: '/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/cvxopt/base.cpython-310-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e')) Could someone help? I am quite new to programming and only recently moved to Mac which I am still getting to grips with. Thanks! I read all the threads I could find, installed Rosetta2 manually, checked the version of python and pip I had but everything seems fine as far as I can tell. A: Building cvxopt from source with pip install --no-binary cvxopt cvxopt solved this problem for me.
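A small sketch to confirm the mismatch before rebuilding: the architecture of the running interpreter has to match the architecture of the installed wheel (the pip commands in the comments rebuild cvxopt for whichever interpreter runs them):
import platform
print(platform.machine())   # 'arm64' on Apple Silicon Python, 'x86_64' under Rosetta

# Rebuild cvxopt from source for this interpreter:
#   python -m pip uninstall -y cvxopt
#   python -m pip install --no-binary cvxopt cvxopt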
problem with cvxopt on mac //incompatible architecture
I need cvxopt to run some portfolio optimization scripts. I have a MacBook pro with an M1 chip running Monterey 12.3, Python 3.10.2 and pip 22.0.4. I installed cvxopt with pip, also installed Rosetta2 but I keep getting the following message: Exception has occurred: ImportError dlopen(/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/cvxopt/base.cpython-310-darwin.so, 0x0002): tried: '/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/cvxopt/base.cpython-310-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e')) Could someone help? I am quite new to programming and only recently moved to Mac which I am still getting to grips with. Thanks! I read all the threads I could find, installed Rosetta2 manually, checked the version of python and pip I had but everything seems fine as far as I can tell.
[ "Building cvxopt from source with pip install --no-binary cvxopt cvxopt solved this problem for me.\n" ]
[ 0 ]
[]
[]
[ "cvxopt", "macos", "python" ]
stackoverflow_0071663396_cvxopt_macos_python.txt
Q: Using an API with python
Here is the API from this website:
curl -X POST -F data=@path/to/file.csv https://api-adresse.data.gouv.fr/search/csv/
I would like to know how to use this in Python. What I currently know is that, from the same website, we also have this API:
curl "https://api-adresse.data.gouv.fr/search/?q=8+bd+du+port"
With Python we can do:
import requests
ADDOK_URL = 'http://api-adresse.data.gouv.fr/search/'
response = requests.get(ADDOK_URL, params={'q': '8 bd du port', 'limit': 5})
response.json()
But with curl -X POST -F data=@path/to/file.csv https://api-adresse.data.gouv.fr/search/csv/ I have to specify the path of a CSV file, and I also don't know what -X POST and -F are.
A: This worked for me, but I don't know what type of response you are expecting. I got no errors and some values in a test as a result.
import requests

files = [
    ('data', ('file', open('your path to .csv file', 'rb'), 'application/octet-stream'))
]

response = requests.post("https://api-adresse.data.gouv.fr/search/csv/", files=files, params={'q': '8 bd du port', 'limit': 5})

print(response.text)
This is a POST request, whereas you used a GET request, so use requests.post and not requests.get.
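To spell out the curl flags the question asks about: -X POST sets the HTTP method, and -F data=@path/to/file.csv sends a multipart/form-data field named data whose content is read from the file. A minimal requests sketch of the same call (the CSV path is a placeholder):
import requests

ADDOK_CSV_URL = "https://api-adresse.data.gouv.fr/search/csv/"

with open("path/to/file.csv", "rb") as f:
    response = requests.post(ADDOK_CSV_URL, files={"data": f})

print(response.status_code)
print(response.text[:500])   # the service should return the CSV enriched with geocoding columns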
Using an API with python
Here is the API from this website curl -X POST -F data=@path/to/file.csv https://api-adresse.data.gouv.fr/search/csv/ I would like to know how to use this in python. What I currently know is that from the same website, we also have this API curl "https://api-adresse.data.gouv.fr/search/?q=8+bd+du+port" With python we can do: import requests ADDOK_URL = 'http://api-adresse.data.gouv.fr/search/' response = requests.get(ADDOK_URL, params={'q': '8 bd du port', 'limit': 5}) response.json() But with curl -X POST -F data=@path/to/file.csv https://api-adresse.data.gouv.fr/search/csv/ I have to specify a path of csv file. And I also don't know what are -X POST -F.
[ "This worked for me, but I don't know what type of response you are expercting. I got no errors and some values in a test as a result.\nimport requests\n\nfiles = [\n ('data', ('file', open('your path to .csv file', 'rb'), 'application/octet-stream'))\n]\n\nresponse = requests.post(\"https://api-adresse.data.gouv.fr/search/csv/\", files=files, params={'q': '8 bd du port', 'limit': 5})\n\nprint(response.text)\n\n\nThis is a POST request, you used a GET request, so use requests.post and not requests.get.\n" ]
[ 0 ]
[]
[]
[ "api", "curl", "python", "python_requests" ]
stackoverflow_0074614452_api_curl_python_python_requests.txt
Q: Pandas Dataframe add element to a list in a cell I am trying something like this: List append in pandas cell But the problem is the post is old and everything is deprecated and should not be used anymore. d = {'col1': ['TEST', 'TEST'], 'col2': [[1, 2], [1, 2]], 'col3': [35, 89]} df = pd.DataFrame(data=d) col1 col2 col3 TEST [1, 2, 3] 35 TEST [1, 2, 3] 89 My Dataframe looks like this, were there is the col2 is the one I am interested in. I need to add [0,0] to the lists in col2 for every row in the DataFrame. My real DataFrame is of dynamic shape so I cant just set every cell on its own. End result should look like this: col1 col2 col3 TEST [1, 2, 3, 0, 0] 35 TEST [1, 2, 3, 0, 0] 89 I fooled around with df.apply and df.assign but I can't seem to get it to work. I tried: df['col2'] += [0, 0] df = df.col2.apply(lambda x: x.append([0,0])) Which returns a Series that looks nothing like i need it df = df.assign(new_column = lambda x: x + list([0, 0)) A: Not sure if this is the best way to go but, option 2 works with a little modification import pandas as pd d = {'col1': ['TEST', 'TEST'], 'col2': [[1, 2], [1, 2]], 'col3': [35, 89]} df = pd.DataFrame(data=d) df["col2"] = df["col2"].apply(lambda x: x + [0,0]) print(df) Firstly, if you want to add all members of an iterable to a list use .extend instead of .append. This doesn't work because the method works inplace and doesn't return anything so "col2" values become None, so use list summation instead. Finally, you want to assign your modified column to the original DataFrame, not override it (this is the reason for the Series return) A: One idea is use list comprehension: df["col2"] = [x + [0,0] for x in df["col2"]] print (df) col1 col2 col3 0 TEST [1, 2, 0, 0] 35 1 TEST [1, 2, 0, 0] 89
Pandas Dataframe add element to a list in a cell
I am trying something like this: List append in pandas cell But the problem is the post is old and everything is deprecated and should not be used anymore. d = {'col1': ['TEST', 'TEST'], 'col2': [[1, 2], [1, 2]], 'col3': [35, 89]} df = pd.DataFrame(data=d) col1 col2 col3 TEST [1, 2, 3] 35 TEST [1, 2, 3] 89 My Dataframe looks like this, were there is the col2 is the one I am interested in. I need to add [0,0] to the lists in col2 for every row in the DataFrame. My real DataFrame is of dynamic shape so I cant just set every cell on its own. End result should look like this: col1 col2 col3 TEST [1, 2, 3, 0, 0] 35 TEST [1, 2, 3, 0, 0] 89 I fooled around with df.apply and df.assign but I can't seem to get it to work. I tried: df['col2'] += [0, 0] df = df.col2.apply(lambda x: x.append([0,0])) Which returns a Series that looks nothing like i need it df = df.assign(new_column = lambda x: x + list([0, 0))
[ "Not sure if this is the best way to go but, option 2 works with a little modification\nimport pandas as pd\n\nd = {'col1': ['TEST', 'TEST'], 'col2': [[1, 2], [1, 2]], 'col3': [35, 89]}\ndf = pd.DataFrame(data=d)\ndf[\"col2\"] = df[\"col2\"].apply(lambda x: x + [0,0])\nprint(df)\n\nFirstly, if you want to add all members of an iterable to a list use .extend instead of .append. This doesn't work because the method works inplace and doesn't return anything so \"col2\" values become None, so use list summation instead. Finally, you want to assign your modified column to the original DataFrame, not override it (this is the reason for the Series return)\n", "One idea is use list comprehension:\ndf[\"col2\"] = [x + [0,0] for x in df[\"col2\"]]\n \nprint (df)\n col1 col2 col3\n0 TEST [1, 2, 0, 0] 35\n1 TEST [1, 2, 0, 0] 89\n\n" ]
[ 3, 0 ]
[ "have you tried the flowing code?\nfor val in df['col2']:\n val.append(0)\n\nBest Regards,\nStan\n" ]
[ -1 ]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074614688_dataframe_pandas_python.txt
Q: Pyinstaller and hiddenimports: how to force to import a package that doesn't get automatically imported by pyinstaller I tried to generate a .exe file using pyinstaller. It works fine, except for the fact that one package was not automatically detected and imported by pyinstaller. Such a package, which in this example we will call "packageOfInterest", did not get imported because the developers did not provide a hook. Reading some documentation I understood that this issue could be easily fixed with the following line to be added in the .spec: hiddenimports=["packageOfInterest"] Unfortunately it doesn't work: "packageOfInterest" was not imported even with that line of code. So my question is: what am I still missing in the .spec file? Below is my .spec file, which I've been using successfully with many applications where the packages could be automatically detected by pyinstaller, which is not the case here. import sys import os from kivy_deps import sdl2, glew from kivymd import hooks_path as kivymd_hooks_path path = os.path.abspath(".") a = Analysis( ["MyScript.py"], # "packageOfInterest" in the "hiddenimports" is the package name # that pyinstaller could not import automatically hiddenimports=["kivymd.stiffscroll", "packageOfInterest"], pathex=[path], hookspath=[kivymd_hooks_path], datas = [("media\\", "media\\")], win_no_prefer_redirects=False, win_private_assemblies=False, cipher=None, noarchive=False, ) pyz = PYZ(a.pure, a.zipped_data, cipher=None) exe = EXE( pyz, a.scripts, a.binaries, a.zipfiles, a.datas, *[Tree(p) for p in (sdl2.dep_bins + glew.dep_bins)], debug=False, strip=False, upx=True, name="MyScript", console=True, icon='myicon.ico' ) The location of "packageOfInterest" is at the path: C:\Users\ASUS\AppData\Local\Programs\Python\Python39\Lib\site-packages Maybe this path should be specified somewhere (e.g. should be added in the "pathex" after the "path" value). In general it would be great to identify a clear method that checks for all packages imported via "MyScript.py" that cannot automatically be imported by pyinstaller, and that will force their import. At the same time it would be appreciated to understand how to customize the .spec in order to fix the issue. Thanks in advance A: The .exe crashed because one file is not showing. Such file was belonging to the "packageofinterest" (mne) Hmm, you could try using --collect-data packageofinterest, as it seems you are missing some files which belong to the package. Also you can use --hidden-import packageofinterest if you don't want to use the spec file. A: This question seems old but I hope it helps someone in the future. I recently tried to create a GUI bundling app that circumvents as many imported-module issues as possible, and ran into the same need to force PyInstaller to pick up a module used by a project I was bundling into an executable. Here's my workaround. Every Python module has the __file__ and/or __path__ attribute. This is because every third-party implementation we install for use (via pip, easy_install, etc.) is provided as a module (path via __file__) or a package (path via __path__). Only those available as a package have both the __file__ and __path__ attributes. So, let's assume the modules that couldn't be picked up by PyInstaller are zipfile and psutil. The former is a module while the latter is a package. import zipfile # Use any of the import rules. print( zipfile.__file__ ) # Take a look at where its absolute path resides.
import psutil print( psutil.__path__ ) print( psutil.__file__ ) # This will return the package's __init__.py file. Definitely not what you want. Now, add this path to your PyInstaller's command using the --add-data option. syntax: pyinstaller --add-data module_absolute_path;destination my_program_startup_file.py NOTE: - The spaces around the print statements are only for readability's sake. - The destination is always a '.' for modules and the module name for packages, e.g. psutil/ (refer to the PyInstaller documentation for clarity). Example: pyinstaller --onefile --clean --add-data C:\Users\USER\AppData\Local\Programs\Python\Python310\Lib\site-packages\psutil;psutil\ my_program_startup_file.py
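If hiddenimports alone is not enough (often because the package also ships data files that never get collected), another common workaround is a small local hook file together with hookspath, so PyInstaller collects both the submodules and the data files of the package. This is only a sketch built on the placeholder name packageOfInterest from the question; the hook folder name is an assumption.
# pyinstaller-hooks/hook-packageOfInterest.py   (hypothetical folder and file name)
from PyInstaller.utils.hooks import collect_data_files, collect_submodules

hiddenimports = collect_submodules("packageOfInterest")
datas = collect_data_files("packageOfInterest")
In the .spec you would then point hookspath at that folder, e.g. hookspath=[kivymd_hooks_path, "pyinstaller-hooks"], and rebuild.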
Pyinstaller and hiddenimports: how to force to import a package that doesn't get automatically imported by pyinstaller
I tried to generate a .exe file using pyinstaller. It works fine, except for fact that one package was not automatically detected and imported by pyinstaller. Such package, that in this example we will call "packageOfInterest", did not get imported because the developers did not provide an hook. Reading some documentation I understood that this issue could be easily fixed with the following line to be added in the .spec: hiddenimports=["packageOfInterest"] Unfortunately it doesn't work, the "packageOfInterest" was not imported even using such line of code. So my question is: What I'm still missing in the .spec file? Below my .spec file that I've been using with success with many applications where the packages could be automatically detected by pyinstaller, therefore this is not the case. import sys import os from kivy_deps import sdl2, glew from kivymd import hooks_path as kivymd_hooks_path path = os.path.abspath(".") a = Analysis( ["MyScript.py"], # "packageOfInterest" in the "hiddenimports" is the package name # that pyinstaller could not import automatically hiddenimports=["kivymd.stiffscroll", "packageOfInterest"], pathex=[path], hookspath=[kivymd_hooks_path], datas = [("media\\", "media\\")], win_no_prefer_redirects=False, win_private_assemblies=False, cipher=None, noarchive=False, ) pyz = PYZ(a.pure, a.zipped_data, cipher=None) exe = EXE( pyz, a.scripts, a.binaries, a.zipfiles, a.datas, *[Tree(p) for p in (sdl2.dep_bins + glew.dep_bins)], debug=False, strip=False, upx=True, name="MyScript", console=True, icon='myicon.ico' ) The location of "packageOfInterest" is at the path: C:\Users\ASUS\AppData\Local\Programs\Python\Python39\Lib\site-packages Maybe this path should be specified somewhere (e.g. should be addedd in the "pathex" after the "path" value). In general would be great to identify a clear method that check for all packages imported via "MyScript.py" that cannot automatically be imported by pyinstaller, and that will force their import. At the same time would be appreciated to understand how to customized to .spec in order to fix the issue. thanks in advance
[ "The .exe crashed because one file is not showing. Such file was belonging to the \"packageofinterest\" (mne)\n\nhmm.. you could try use --collect-data packageofinterest as it seems you are missing some files which are belongs to the package.\nAlso you can use --hidden-import packageofinterest if you don't want to use the spec file.\n", "This question seems old but I hope it helps someone in the future. I recently tried to create a GUI Bundling app. that circumvents as many imported module related issues when I encountered similar interest of forcing PyInstaller to pickup module used by any project I try to Bundle into executable. Here's my workaround.\nEvery Python module has the file and/or path property/ies. This is because every third-party implementation we install for use (via pip, easy_install, etc) is provided as a module(path via file) or a package(path via path). Only those available as a package have access to both file and path properties.\nSo, let's assume the modules that couldn't be picked up by PyInstaller are zipfile and psutil. The former is a module while the later is a package.\nimport zipfile # Use any of the import rules.\nprint( zipfile.__file__ ) # Take a look at where its absolute path resides.\n\nimport psutil\nprint( psutil.__path__ )\nprint( psutil.__file__ ) # This will return the package's __init__.py file. Definitely not what you want.\n\nNow, add this path to your PyInstaller's command using the --add-data option.\nsyntax:\npyinstaller --add-data module_absolute_path;destination my_program_startup_file.py\n\nNOTE:\n-The spaces around the print statements is only for readability sake.\n-The destination is always a '.' for modules and module name for packages Ex: psutil/ [refer PyInstaller documentation for clarity]\nExample:\npyinstaller --onefile --clean --add-data C:\\Users\\USER\\AppData\\Local\\Programs\\Python\\Python310\\Lib\\site-packages\\psutil;psutil\\ my_program_startup_file.py\n\n" ]
[ 0, 0 ]
[]
[]
[ "pyinstaller", "python" ]
stackoverflow_0068684228_pyinstaller_python.txt
Q: Pandas group-by proportion of cumulative sum start from 0 I have the following pandas Data Frame (without the last 2 columns): name day show-in-appointment previous-missed-appointments proportion-previous-missed 0 Jack 2020/01/01 show 0 0 1 Jack 2020/01/02 no-show 0 0 2 Jill 2020/01/02 no-show 0 0 3 Jack 2020/01/03 show 1 0.5 4 Jill 2020/01/03 show 1 1 5 Jill 2020/01/04 no-show 1 0.5 6 Jack 2020/01/04 show 1 0.33 7 Jill 2020/01/05 show 2 0.66 8 jack 2020/01/06 no-show 1 0.25 9 jack 2020/01/07 show 2 0.4>>>2(noshow)/5(noshow+show) df = pd.DataFrame( data=np.asarray([ ['Jack', 'Jack', 'Jill', 'Jack', 'Jill', 'Jill', 'Jack', 'Jill', 'jack', 'jack'], [ '2020/01/01', '2020/01/02', '2020/01/02', '2020/01/03', '2020/01/03', '2020/01/04', '2020/01/04', '2020/01/05', '2020/01/06', '2020/01/07', ], ['show', 'no-show', 'no-show', 'show', 'show', 'no-show', 'show', 'show', 'no-show', 'show'], ]).T, columns=['name', 'day', 'show-in-appointment'], ) The previous-missed-appointments column is created with the code below: df.name = df.name.str.capitalize() df['order'] = df.index df.day = pd.to_datetime(df.day) df['noshow'] = df['show-in-appointment'].map({'show': 0, 'no-show': 1}) df = df.sort_values(by=['name', 'day']) df['previous-missed-appointments'] = df.groupby('name').noshow.cumsum() df.loc[df.noshow == 1, 'previous-missed-appointments'] -= 1 df = df.sort_values(by='order') df = df.drop(columns=['noshow', 'order']) The question is: how can I create the last column, proportion-previous-missed? A: You can use cumsum and shift in groupby.apply for the first column, then divide by groupby.cumcount for the second column: m = df['show-in-appointment'].eq('no-show') g = m.groupby(df['name'].str.casefold(), group_keys=False) df['previous-missed-appointments'] = ( g.apply(lambda x: x.cumsum().shift(fill_value=0)) ) df['proportion-previous-missed'] = ( df['previous-missed-appointments'].div(g.cumcount()).fillna(0) ) print(df) Output: name day show-in-appointment previous-missed-appointments proportion-previous-missed 0 Jack 2020/01/01 show 0 0.000000 1 Jack 2020/01/02 no-show 0 0.000000 2 Jill 2020/01/02 no-show 0 0.000000 3 Jack 2020/01/03 show 1 0.500000 4 Jill 2020/01/03 show 1 1.000000 5 Jill 2020/01/04 no-show 1 0.500000 6 Jack 2020/01/04 show 1 0.333333 7 Jill 2020/01/05 show 2 0.666667 8 jack 2020/01/06 no-show 1 0.250000 9 jack 2020/01/07 show 2 0.400000
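As a design note, the derived proportion can also be produced with an expanding mean over the shifted no-show flag, which reads close to the definition "share of earlier appointments that were missed". This is only a sketch reusing the noshow helper column and the capitalised names from the question's own code; for Jack it reproduces the expected 0, 0, 0.5, 0.33, 0.25, 0.4 sequence.
df["noshow"] = df["show-in-appointment"].map({"show": 0, "no-show": 1})
key = df["name"].str.capitalize()

df["proportion-previous-missed"] = (
    df.sort_values("day")                              # date order within each person
      .groupby(key, group_keys=False)["noshow"]
      .apply(lambda s: s.shift().expanding().mean())   # mean over *previous* visits only
      .fillna(0)
)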
Pandas group-by proportion of cumulative sum start from 0
I have the following pandas Data Frame (without 2 the last columns): name day show-in-appointment previous-missed-appointments proportion-previous-missed 0 Jack 2020/01/01 show 0 0 1 Jack 2020/01/02 no-show 0 0 2 Jill 2020/01/02 no-show 0 0 3 Jack 2020/01/03 show 1 0.5 4 Jill 2020/01/03 show 1 1 5 Jill 2020/01/04 no-show 1 0.5 6 Jack 2020/01/04 show 1 0.33 7 Jill 2020/01/05 show 2 0.66 8 jack 2020/01/06 no-show 1 0.25 9 jack 2020/01/07 show 2 0.4>>>2(noshow)/5(noshow+show) df = pd.DataFrame( data=np.asarray([ ['Jack', 'Jack', 'Jill', 'Jack', 'Jill', 'Jill', 'Jack', 'Jill', 'jack', 'jack'], [ '2020/01/01', '2020/01/02', '2020/01/02', '2020/01/03', '2020/01/03', '2020/01/04', '2020/01/04', '2020/01/05', '2020/01/06', '2020/01/07', ], ['show', 'no-show', 'no-show', 'show', 'show', 'no-show', 'show', 'show', 'no-show', 'show'], ]).T, columns=['name', 'day', 'show-in-appointment'], ) previous-missed-appointments column is create like the code below: df.name = df.name.str.capitalize() df['order'] = df.index df.day = pd.to_datetime(df.day) df['noshow'] = df['show-in-appointment'].map({'show': 0, 'no-show': 1}) df = df.sort_values(by=['name', 'day']) df['previous-missed-appointments'] = df.groupby('name').noshow.cumsum() df.loc[df.noshow == 1, 'previous-missed-appointments'] -= 1 df = df.sort_values(by='order') df = df.drop(columns=['noshow', 'order']) ********THE QUESTION IS HOW CAN I CREATE THE LAST COLUMN ????*********
[ "You can use cumsum and shift in groupby.apply for the first column, then divide by groupby.cumcount for the second column:\nm = df['show-in-appointment'].eq('no-show')\n\ng = m.groupby(df['name'].str.casefold(), group_keys=False)\ndf['previous-missed-appointments'] = (\n g.apply(lambda x: x.cumsum().shift(fill_value=0))\n )\n\ndf['proportion-previous-missed'] = (\n df['previous-missed-appointments'].div(g.cumcount()).fillna(0)\n)\n\nprint(df)\n\nOutput:\n name day show-in-appointment previous-missed-appointments proportion-previous-missed\n0 Jack 2020/01/01 show 0 0.000000\n1 Jack 2020/01/02 no-show 0 0.000000\n2 Jill 2020/01/02 no-show 0 0.000000\n3 Jack 2020/01/03 show 1 0.500000\n4 Jill 2020/01/03 show 1 1.000000\n5 Jill 2020/01/04 no-show 1 0.500000\n6 Jack 2020/01/04 show 1 0.333333\n7 Jill 2020/01/05 show 2 0.666667\n8 jack 2020/01/06 no-show 1 0.250000\n9 jack 2020/01/07 show 2 0.400000\n\n" ]
[ 1 ]
[]
[]
[ "cumulative_sum", "group_by", "pandas", "proportions", "python" ]
stackoverflow_0074614849_cumulative_sum_group_by_pandas_proportions_python.txt
Q: How to copy content of docx file and append it to another docx file using python I want to combine multiple docx files and save them in another docx file. I not only want to copy all the text, but also its formatting (runs), e.g. bold, italics, underline, bullets, etc. A: If you need to copy the contents of just one docx file to another, you can use this from docx import Document from docxcompose.composer import Composer # main docx file master = Document(r"path\of\main.docx") composer = Composer(master) # doc1 is the docx file getting copied doc1 = Document(r"file\to\be\copied.docx") composer.append(doc1) composer.save(r"path\of\combined.docx") If you have multiple docx files to be copied, you can try something like this def copy_docx(main_docx, docx_list): master = Document(main_docx) composer = Composer(master) for index, file in enumerate(docx_list): file = Document(file) composer.append(file) composer.save(r"path\of\combined.docx") main_docx = r"path\of\main.docx" docx_list = [r"path\of\docx1.docx", r"path\of\docx2.docx", r"path\of\docx3.docx"] copy_docx(main_docx, docx_list)
How to copy content of docx file and append it to another docx file using python
I want to combine multiple docx file and save it in another docx file. I not only want to copy all the text, but also it's formatting(runs). eg. bold, italics, underline, bullets, etc.
[ "If you need to copy the contents of just one docx file to another, you can use this\nfrom docx import Document\nfrom docxcompose.composer import Composer\n\n# main docx file\nmaster = Document(r\"path\\of\\main.docx\")\ncomposer = Composer(master)\n# doc1 is the docx file getting copied\ndoc1 = Document(r\"file\\to\\be\\copied.docx\")\ncomposer.append(doc1)\ncomposer.save(r\"path\\of\\combined.docx\")\n\nIf you have multiple docx files to be copied, you can try like this\ndef copy_docx(main_docx, docx_list):\n master = Document(main_docx)\n composer = Composer(master)\n for index, file in enumerate(docx_list):\n file = Document(file)\n composer.append(file)\n composer.save(r\"path\\of\\combined.docx\")\n\n\nmain_docx = \"path\\of\\main.docx\"\ndocx_list = [r\"path\\of\\docx1.docx\",\n r\"path\\of\\docx2.docx\",\n r\"path\\of\\docx3.docx\"]\n\ncopy_docx(main_docx, docx_list)\n\n" ]
[ 0 ]
[]
[]
[ "docx", "file_handling", "python", "python_docx" ]
stackoverflow_0067689211_docx_file_handling_python_python_docx.txt
Q: Tkinter Radio button is being selected when hovered over I am having a weird issue with my radio buttons. When I run my program, they are initially unselected (as expected). However, if I mouse over one of them, they will select themselves, and this can allow both to be selected at the same time. This only seems to happen once per program execution, and manually selecting either of them will fix the issue, but I can't figure out why it's happening in the first place. What makes this issue strange is, when I first wrote this code outside of a function it worked, so it seems the code being in a function might have something to do with it. Any help appreciated. def createPriceFrame(self): priceFrame = tk.Frame(self.parent, bg = 'lightgreen') priceFrame.place(relx = 0.5, rely = 0.45, relwidth = 0.35, relheight = 0.15, anchor='c') option = tk.IntVar() radioUnder = tk.Radiobutton(priceFrame, text = "Under", value = 1, var = option, bg = 'lightgreen', font = ('Arial', 12, 'bold')) radioUnder.grid(row = 1, column = 0) radioOver = tk.Radiobutton(priceFrame, text =" Over", value = 2, var = option, bg = 'lightgreen', font = ('Arial', 12, 'bold')) radioOver.grid(row = 1, column = 1) A: You need to declare your variable (option) as global - and yes, it's very strange. Unfortunately, I could not find the reason. A: The reason is that the variable option gets garbage-collected. You can avoid this by having a reference to the variable, e.g., by adding self.option = option. This seems to be similar to the issue of adding images in tkinter, see, e.g., this and this.
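A minimal sketch of the fix the answers point at: keep a reference to the IntVar alive (here on the instance) so it survives after the method returns. Everything except self.option is taken from the question; variable= is the documented option name for linking a Radiobutton to the variable.
def createPriceFrame(self):
    priceFrame = tk.Frame(self.parent, bg='lightgreen')
    priceFrame.place(relx=0.5, rely=0.45, relwidth=0.35, relheight=0.15, anchor='c')

    # storing the IntVar on self keeps a live reference, so it is not
    # garbage-collected when createPriceFrame returns
    self.option = tk.IntVar(value=0)

    radioUnder = tk.Radiobutton(priceFrame, text="Under", value=1,
                                variable=self.option, bg='lightgreen',
                                font=('Arial', 12, 'bold'))
    radioUnder.grid(row=1, column=0)

    radioOver = tk.Radiobutton(priceFrame, text="Over", value=2,
                               variable=self.option, bg='lightgreen',
                               font=('Arial', 12, 'bold'))
    radioOver.grid(row=1, column=1)
With value=0, neither button matches the variable at startup, so both begin unselected and the hover behaviour described in the question should no longer appear.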
Tkinter Radio button is being selected when hovered over
I am having a weird issue with my radio buttons. When I run my program, they are initially unselected (as expected). However, if I mouse over one of them, they will select themselves, and this can allow both to be selected at the same time. This only seems to happen once per program execution, and manually selecting either of them will fix the issue, but I can't figure out why it's happening in the first place. What makes this issue strange is, when I first wrote this code outside of a function it worked, so it seems the code being in a function might have something to do with it. Any help appreciated. def createPriceFrame(self): priceFrame = tk.Frame(self.parent, bg = 'lightgreen') priceFrame.place(relx = 0.5, rely = 0.45, relwidth = 0.35, relheight = 0.15, anchor='c') option = tk.IntVar() radioUnder = tk.Radiobutton(priceFrame, text = "Under", value = 1, var = option, bg = 'lightgreen', font = ('Arial', 12, 'bold')) radioUnder.grid(row = 1, column = 0) radioOver = tk.Radiobutton(priceFrame, text =" Over", value = 2, var = option, bg = 'lightgreen', font = ('Arial', 12, 'bold')) radioOver.grid(row = 1, column = 1)
[ "You need to declare your variable (option) as global - and yes, it's very strange. Unfortunately, I could not find the reason.\n", "The reason is that the variable option gets garbage-collected.\nYou can avoid this by having a reference to the variable, e.g., by adding self.option = option.\nThis seems to be similar to the issue of adding images in tkinter, see, e.g., this and this.\n" ]
[ 1, 0 ]
[]
[]
[ "python", "radio_button", "tkinter" ]
stackoverflow_0057355027_python_radio_button_tkinter.txt
Q: Is Unpacking a Type Hint Possible? or Its Workarounds? Is there a way to unpack a tuple type alias? For example, ResultTypeA = tuple[float, float, dict[str, float]] ResultTypeB = tuple[*ResultTypeA, str, str] So that ResultTypeB evaluates to tuple[float, float, dict[str, float], str, str] instead of tuple[tuple[float, float, dict[str, float]], str, str] If not possible, what would be a workaround for this? A: What you are looking for may be the new typing.TypeVarTuple as proposed by PEP 646. Due to how new it is (Python 3.11+) and how big a change this is, many static type checkers still do not fully support it (see this mypy issue for example). Maybe typing.Unpack is actually more applicable in this case, but again it is hardly useful so long as type checkers don't support it. But at a certain point, you should probably ask yourself whether your design is really sound if your type annotations become this complex.
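A small sketch of how PEP 646 spells this, for reference: typing.Unpack (Python 3.11+, or typing_extensions on older versions) can flatten a concrete tuple alias inside another tuple. Type checkers that implement PEP 646 treat both spellings below as the flattened tuple[float, float, dict[str, float], str, str]; at runtime the alias is not literally rewritten, and checker support may still be incomplete.
from typing import Unpack  # Python 3.11+; otherwise: from typing_extensions import Unpack

ResultTypeA = tuple[float, float, dict[str, float]]

ResultTypeB = tuple[Unpack[ResultTypeA], str, str]
# On Python 3.11+ the star syntax from the question is accepted as well:
# ResultTypeB = tuple[*ResultTypeA, str, str]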
Is Unpacking a Type Hint Possible? or Its Workarounds?
Is there a way to unpack a tuple type alias? For example, ResultTypeA = tuple[float, float, dict[str, float]] ResultTypeB = tuple[*ResultTypeA, str, str] So that ResultTypeB evaluates to tuple[float, float, dict[str, float], str, str] instead of tuple[tuple[float, float, dict[str, float]], str, str] If not possible, what would be a workaround for this?
[ "What you are looking for may be the new typing.TypeVarTuple as proposed by PEP 646. Due to how new it is (Python 3.11+) and how big of a change this produces, many static type checkers still do not fully support it (see this mypy issue for example).\nMaybe typing.Unpack is actually more applicable in this case, but again hardly useful so long as type checkers don't support it.\nBut at a certain point, you should probably ask yourself, if your design is all that good, if your type annotations become this complex.\n" ]
[ 2 ]
[]
[]
[ "python", "type_hinting" ]
stackoverflow_0074614332_python_type_hinting.txt
Q: Assign week number relative to month for each date in a DataFrame Consider the following pandas DataFrame. | date | other_columns |... | ------------- | -------------- |... | 2022-02-06 | row |... | 2022-02-07 | row |... | 2022-02-08 | row |... | 2022-02-15 | row |... | 2022-02-24 | row |... | 2022-02-28 | row |... I want to add the week corresponding to each date as an additional week column. It is simply grouping the days in 7-day intervals to assign each number. I don't want the functionality of datetime.week, I want the value to be relative to the month. | date | other_columns |...| week | | ------------- | -------------- |...| -------- | | 2022-02-06 | row |...| 1 week | | 2022-02-07 | row |...| 1 week | | 2022-02-08 | row |...| 2 week | | 2022-02-15 | row |...| 3 week | | 2022-02-24 | row |...| 4 week | | 2022-02-28 | row |...| 4 week | (1-7) correspond to the first week, (8-14) to the second, (15-21) to the third one, (22-28) fourth, (29-31) fifth. Only the day number really matters, not the month. A: Could you use something like this? import pandas as pd import math # create a date range dr = pd.date_range( start="2022-02-01", end="2022-02-28", freq="D" ) # create a dataframe df = pd.DataFrame( { "date": dr } ) # define a function to get the week number def get_week_in_month(df, date_column): df["day"] = df[date_column].dt.day df["week"] = df["day"].apply(lambda x: math.ceil(x / 7)) del df["day"] return df # transform the dataframe df = get_week_in_month(df, "date") This gives me the following output: date week 0 2022-02-01 1 1 2022-02-02 1 2 2022-02-03 1 3 2022-02-04 1 4 2022-02-05 1 5 2022-02-06 1 6 2022-02-07 1 7 2022-02-08 2 8 2022-02-09 2 9 2022-02-10 2 10 2022-02-11 2 11 2022-02-12 2 12 2022-02-13 2 13 2022-02-14 2 14 2022-02-15 3 15 2022-02-16 3 16 2022-02-17 3 17 2022-02-18 3 18 2022-02-19 3 19 2022-02-20 3 20 2022-02-21 3 21 2022-02-22 4 22 2022-02-23 4 23 2022-02-24 4 24 2022-02-25 4 25 2022-02-26 4 26 2022-02-27 4 27 2022-02-28 4 You could then format the week number as you needed. You could also do the whole thing in one line using the code below: df["week"] = df["date"].dt.day.apply(lambda x: math.ceil(x / 7)) A: Use: df['date'] = pd.to_datetime(df['date']) df['new2'] = ((df["date"].dt.day - 1) // 7 + 1).astype(str) + ' week' print (df) date other_columns new2 0 2022-02-06 row 1 week 1 2022-02-07 row 1 week 2 2022-02-08 row 2 week 3 2022-02-15 row 3 week 4 2022-02-24 row 4 week 5 2022-02-28 row 4 week
Assign week number relative to month for each date in a DataFrame
Let it be the following python pandas dataframe. | date | other_columns |... | ------------- | -------------- |... | 2022-02-06 | row |... | 2022-02-07 | row |... | 2022-02-08 | row |... | 2022-02-15 | row |... | 2022-02-24 | row |... | 2022-02-28 | row |... I want to add the week corresponding to each date as an additional week column. It is simply grouping the days in 7-day intervals to assign each number. I don't want the functionality of datetime.week, I want the value to be relative to the month. | date | other_columns |...| week | | ------------- | -------------- |...| -------- | | 2022-02-06 | row |...| 1 week | | 2022-02-07 | row |...| 1 week | | 2022-02-08 | row |...| 2 week | | 2022-02-15 | row |...| 3 week | | 2022-02-24 | row |...| 4 week | | 2022-02-28 | row |...| 5 week | (1-7) correspond to the first week, (8-14) to the second, (15-21) to the third one, (21-28) fourth, (29-31) fifth. Only the day number really matters, not the month.
[ "Could you use something like this?\nimport pandas as pd\nimport math\n\n# create a date range\ndr = pd.date_range(\n start=\"2022-02-01\",\n end=\"2022-02-28\",\n freq=\"D\"\n)\n\n# create a dataframe\ndf = pd.DataFrame(\n {\n \"date\": dr\n }\n)\n\n# define a function to get the week number\ndef get_week_in_month(df, date_column):\n df[\"day\"] = df[date_column].dt.day\n\n df[\"week\"] = df[\"day\"].apply(lambda x: math.ceil(x / 7))\n\n del df[\"day\"]\n\n return df\n\n# transform the dataframe\ndf = get_week_in_month(df, \"date\")\n\nThis gives me the following output:\n date week\n0 2022-02-01 1\n1 2022-02-02 1\n2 2022-02-03 1\n3 2022-02-04 1\n4 2022-02-05 1\n5 2022-02-06 1\n6 2022-02-07 1\n7 2022-02-08 2\n8 2022-02-09 2\n9 2022-02-10 2\n10 2022-02-11 2\n11 2022-02-12 2\n12 2022-02-13 2\n13 2022-02-14 2\n14 2022-02-15 3\n15 2022-02-16 3\n16 2022-02-17 3\n17 2022-02-18 3\n18 2022-02-19 3\n19 2022-02-20 3\n20 2022-02-21 3\n21 2022-02-22 4\n22 2022-02-23 4\n23 2022-02-24 4\n24 2022-02-25 4\n25 2022-02-26 4\n26 2022-02-27 4\n27 2022-02-28 4\n\nYou could then format the week number as you needed.\nYou could also do the whole thing in one line using the code below:\ndf[\"week\"] = df[\"date\"].dt.day.apply(lambda x: math.ceil(x / 7))\n\n", "Use:\ndf['date'] = pd.to_datetime(df['date'])\n\ndf['new2'] = ((df[\"date\"].dt.day - 1) // 7 + 1).astype(str) + ' week'\nprint (df)\n date other_columns new2\n0 2022-02-06 row 1 week\n1 2022-02-07 row 1 week\n2 2022-02-08 row 2 week\n3 2022-02-15 row 3 week\n4 2022-02-24 row 4 week\n5 2022-02-28 row 4 week\n\n" ]
[ 2, 1 ]
[]
[]
[ "dataframe", "datetime", "pandas", "python" ]
stackoverflow_0074614712_dataframe_datetime_pandas_python.txt
Q: Python scp/paramiko Bad Time Format Exception when attempting to get a file Python 3.9 scp 0.14.4 running on macOS Ventura 13.0 When trying to run the following: def createSSHClient(server, port, user, password): client = SSHClient() client.load_system_host_keys() client.set_missing_host_key_policy(AutoAddPolicy()) client.connect(server, port, user, password) return client def get_file(): ssh = createSSHClient('myhost.com', 2022, 'username', 'mypassword') banner = ssh.exec_command('\n') scp = SCPClient(ssh.get_transport()) scp.get(remote_path='/mnt/users/username/file.txt', local_path='./file.txt', preserve_times=False) if __name__ == '__main__': get_file() The following exception is being thrown: Traceback (most recent call last): File "/git/projects/diet/scp-example/.venv/lib/python3.9/site-packages/scp.py", line 437, in _set_time mtime = int(times[0]) ValueError: invalid literal for int() with base 10: b'his' This seems to be trying to parse the banner or something? Even though I indicate not to preserve times, I'm not sure why it's attempting it. This is the banner returned once the host is listed: Unauthorized use of this system is prohibited. Anyone using this system expressly consents to the monitoring of his or her system use and understands that evidence of criminal activity discovered by system personnel may be reported to law enforcement officials. Thus I was thinking the 'his' was being picked up out of the banner. I tried slipping in a command prior to the scp get on the client, but it probably didn't really do much, as I suspect the scp still gets the banner. When I manually do this via the CLI, everything works fine. Any thoughts on this would be great. A: I think you are using two different libraries to access the file. For paramiko you can use the following to get the file: sftp = ssh.open_sftp() sftp.get(remotepath='/mnt/users/username/file.txt', localpath='file.txt') rather than scp = SCPClient(ssh.get_transport()) scp.get(remote_path='/mnt/users/username/file.txt', local_path='./file.txt', preserve_times=False)
Python scp/paramiko Bad Time Format Exception when attempting to get a file
Python 3.9 scp 0.14.4 running on Mac OSX Ventura 13.0 When trying to run the following: def createSSHClient(server, port, user, password): client = SSHClient() client.load_system_host_keys() client.set_missing_host_key_policy(AutoAddPolicy()) client.connect(server, port, user, password) return client def get_file(): ssh = createSSHClient('myhost.com', 2022, 'username', 'mypassword') banner = ssh.exec_command('\n') scp = SCPClient(ssh.get_transport()) scp.get(remote_path='/mnt/users/username/file.txt', local_path='./file.txt', preserve_times=False) if __name__ == '__main__': get_file() The following exception is being thrown: Traceback (most recent call last): File "/git/projects/diet/scp-example/.venv/lib/python3.9/site-packages/scp.py", line 437, in _set_time mtime = int(times[0]) ValueError: invalid literal for int() with base 10: b'his' This seems to be trying to parse the banner or something? Even though I indicate not to preserve times so not sure why its attempting it. This is the banner returned once the host is listed: Unauthorized use of this system is prohibited. Anyone using this system expressly consents to the monitoring of his or her system use and understands that evidence of criminal activity discovered by system personnel may be reported to law enforcement officials. Thus I was thinking the 'his' was being picked up out of the banner. Tried slipping in a command prior to the scp get on the client but probably didn't really do much as I suspect the scp still gets the banner. when I manually do this via the cli all works fine Any thoughts on this would be great
[ "I think you are using two different libraries to access the file\nFor paramiko you can use the following to get the file\nsftp = ssh.open_sftp()\nsftp.get(remotepath='/mnt/users/username/file.txt', localpath='file.txt')\n\nrather than\nscp = SCPClient(ssh.get_transport())\nscp.get(remote_path='/mnt/users/username/file.txt', local_path='./file.txt', preserve_times=False)\n\n" ]
[ 0 ]
[]
[]
[ "paramiko", "python", "scp" ]
stackoverflow_0074614758_paramiko_python_scp.txt
Q: Transform a raw json column of pandas df into more columns in my pandas dataframe I have a column which follows a simple pattern: {'author_position': 'first', 'author': {'id': 'https://openalex.org/A3003121718', 'display_name': 'Chaolin Huang', 'orcid': None}, 'institutions': [{'id': None, 'display_name': 'Jin Yin-tan Hospital, Wuhan, China', 'ror': None, 'country_code': None, 'type': None}], 'raw_affiliation_string': 'Jin Yin-tan Hospital, Wuhan, China'}, and repeats the latter for each author of a certain paper. For instance, the first paper in my database has several authors and its authorships column look like this: df['authorships'][0] ### Output: [{'author_position': 'first', 'author': {'id': 'https://openalex.org/A3003121718', 'display_name': 'Chaolin Huang', 'orcid': None}, 'institutions': [{'id': None, 'display_name': 'Jin Yin-tan Hospital, Wuhan, China', 'ror': None, 'country_code': None, 'type': None}], 'raw_affiliation_string': 'Jin Yin-tan Hospital, Wuhan, China'}, {'author_position': 'middle', 'author': {'id': 'https://openalex.org/A3006261277', 'display_name': 'Yeming Wang', 'orcid': None}, 'institutions': [{'id': 'https://openalex.org/I2801051648', 'display_name': 'China-Japan Friendship Hospital', 'ror': 'https://ror.org/037cjxp13', 'country_code': 'CN', 'type': 'healthcare'}], 'raw_affiliation_string': 'Department of Pulmonary and Critical Care Medicine, Center of Respiratory Medicine, National Clinical Research Center for Respiratory Diseases, China-Japan Friendship Hospital, Beijing, China.'}, {'author_position': 'middle', 'author': {'id': 'https://openalex.org/A2620960243', 'display_name': 'Xingwang Li', 'orcid': None}, 'institutions': [{'id': 'https://openalex.org/I4210150338', 'display_name': 'Beijing Ditan Hospital', 'ror': 'https://ror.org/05kkkes98', 'country_code': 'CN', 'type': 'healthcare'}], 'raw_affiliation_string': 'Clinical and Research Center of Infectious Diseases Beijing Ditan Hospital Capital Medical University Beijing China'}, {'author_position': 'middle', 'author': {'id': 'https://openalex.org/A2103212470', 'display_name': 'Lili Ren', 'orcid': 'https://orcid.org/0000-0002-6645-8183'}, 'institutions': [{'id': None, 'display_name': 'NHC Key Laboratory of Systems Biology of Pathogens and Christophe Mérieux Laboratory, Institute of Pathogen Biology, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China.', 'ror': None, 'country_code': None, 'type': None}], 'raw_affiliation_string': 'NHC Key Laboratory of Systems Biology of Pathogens and Christophe Mérieux Laboratory, Institute of Pathogen Biology, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China.'}, {'author_position': 'middle', 'author': {'id': 'https://openalex.org/A2582133136', 'display_name': 'Jianping Zhao', 'orcid': None}, 'institutions': [{'id': 'https://openalex.org/I79431787', 'display_name': 'Tongji Medical College', 'ror': None, 'country_code': 'CN', 'type': None}], 'raw_affiliation_string': 'Tongji Hospital, Tongji medical college, Huazhong university of Science and Technology, Wuhan, China'}, {'author_position': 'middle', 'author': {'id': 'https://openalex.org/A2550526349', 'display_name': 'Yi Hu', 'orcid': None}, 'institutions': [{'id': 'https://openalex.org/I47720641', 'display_name': 'Huazhong University of Science and Technology', 'ror': 'https://ror.org/00p991c53', 'country_code': 'CN', 'type': 'education'}], 'raw_affiliation_string': 'Department of Pulmonary and Critical Care Medicine, The Central Hospital of Wuhan, Tongji 
Medical College, Huazhong University of Science and Technology , Wuhan, China'}, {'author_position': 'middle', 'author': {'id': 'https://openalex.org/A3197971936', 'display_name': 'Li Zhang', 'orcid': 'https://orcid.org/0000-0002-7615-4976'}, 'institutions': [{'id': None, 'display_name': 'Jin Yin-tan Hospital, Wuhan, China', 'ror': None, 'country_code': None, 'type': None}], 'raw_affiliation_string': 'Jin Yin-tan Hospital, Wuhan, China'}, {'author_position': 'middle', 'author': {'id': 'https://openalex.org/A2911488157', 'display_name': 'Guohui Fan', 'orcid': None}, 'institutions': [{'id': 'https://openalex.org/I2801051648', 'display_name': 'China-Japan Friendship Hospital', 'ror': 'https://ror.org/037cjxp13', 'country_code': 'CN', 'type': 'healthcare'}], 'raw_affiliation_string': 'Department of Pulmonary and Critical Care Medicine, Center of Respiratory Medicine, National Clinical Research Center for Respiratory Diseases, China-Japan Friendship Hospital, Beijing, China.'}, {'author_position': 'middle', 'author': {'id': 'https://openalex.org/A3001214061', 'display_name': 'Jiuyang Xu', 'orcid': 'https://orcid.org/0000-0002-1906-5918'}, 'institutions': [{'id': 'https://openalex.org/I99065089', 'display_name': 'Tsinghua University', 'ror': 'https://ror.org/03cve4549', 'country_code': 'CN', 'type': 'education'}], 'raw_affiliation_string': 'Tsinghua University,School of Medicine,Beijing,China'}, {'author_position': 'middle', 'author': {'id': 'https://openalex.org/A3006530843', 'display_name': 'Xiaoying Gu', 'orcid': None}, 'institutions': [{'id': 'https://openalex.org/I2801051648', 'display_name': 'China-Japan Friendship Hospital', 'ror': 'https://ror.org/037cjxp13', 'country_code': 'CN', 'type': 'healthcare'}], 'raw_affiliation_string': 'Department of Pulmonary and Critical Care Medicine, Center of Respiratory Medicine, National Clinical Research Center for Respiratory Diseases, China-Japan Friendship Hospital, Beijing, China.'}, {'author_position': 'middle', 'author': {'id': 'https://openalex.org/A3205428521', 'display_name': 'Zhenshun Cheng', 'orcid': None}, 'institutions': [{'id': 'https://openalex.org/I4210120234', 'display_name': 'Zhongnan Hospital of Wuhan University', 'ror': 'https://ror.org/01v5mqw79', 'country_code': 'CN', 'type': 'healthcare'}], 'raw_affiliation_string': 'Department of Respiratory Medicine, Zhongnan Hospital of Wuhan University, Wuhan, China'}, {'author_position': 'middle', 'author': {'id': 'https://openalex.org/A2498193827', 'display_name': 'Ting Yu', 'orcid': None}, 'institutions': [{'id': None, 'display_name': 'Jin Yin-tan Hospital, Wuhan, China', 'ror': None, 'country_code': None, 'type': None}], 'raw_affiliation_string': 'Jin Yin-tan Hospital, Wuhan, China'}] Now my aim is actually to take only some of the info contained in the above, namely the name of the unique authors' and institutions' records and create two columns containing a list with the author names and the institutions name. 
In the above, specifically, the result should be the construction of two columns "authors" and "institutions" looking like this (for the first paper): df['authors'][0] ['Chaolin Huang','Yeming Wang','Xingwang Li','Lili Ren','Jianping Zhao','Yi Hu','Li Zhang','Guohui Fan','Jiuyang Xu','Xiaoying Gu','Zhenshun Cheng','Ting Yu'] df['institutions'][0] ['Jin Yin-tan Hospital, Wuhan, China','China-Japan Friendship Hospital','Beijing Ditan Hospital','Tsinghua University','Zhongnan Hospital of Wuhan University','Jin Yin-tan Hospital, Wuhan, China'] Please note that duplicates (e.g. 'China-Japan Friendship Hospital') should not be repeated in the list. Thank you A: You can check if the following code works: df = pd.DataFrame() df['authors'] = pd.json_normalize(j)['author.display_name'] df['institutions'] = pd.json_normalize(j, record_path=['institutions'])['display_name'] df
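A sketch that applies the extraction to every row of the authorships column (rather than to a single record), deduplicating institution names while keeping their order. It assumes each cell holds a list of dicts shaped like the sample above; dict.fromkeys is used because it preserves insertion order and drops duplicates at the same time.
df["authors"] = df["authorships"].apply(
    lambda auths: [a["author"]["display_name"] for a in auths]
)

df["institutions"] = df["authorships"].apply(
    lambda auths: list(dict.fromkeys(
        inst["display_name"]
        for a in auths
        for inst in a.get("institutions", [])
        if inst.get("display_name")
    ))
)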
Transform a raw json column of pandas df into more columns
in my pandas dataframe I have a column which follows a simple pattern: {'author_position': 'first', 'author': {'id': 'https://openalex.org/A3003121718', 'display_name': 'Chaolin Huang', 'orcid': None}, 'institutions': [{'id': None, 'display_name': 'Jin Yin-tan Hospital, Wuhan, China', 'ror': None, 'country_code': None, 'type': None}], 'raw_affiliation_string': 'Jin Yin-tan Hospital, Wuhan, China'}, and repeats the latter for each author of a certain paper. For instance, the first paper in my database has several authors and its authorships column look like this: df['authorships'][0] ### Output: [{'author_position': 'first', 'author': {'id': 'https://openalex.org/A3003121718', 'display_name': 'Chaolin Huang', 'orcid': None}, 'institutions': [{'id': None, 'display_name': 'Jin Yin-tan Hospital, Wuhan, China', 'ror': None, 'country_code': None, 'type': None}], 'raw_affiliation_string': 'Jin Yin-tan Hospital, Wuhan, China'}, {'author_position': 'middle', 'author': {'id': 'https://openalex.org/A3006261277', 'display_name': 'Yeming Wang', 'orcid': None}, 'institutions': [{'id': 'https://openalex.org/I2801051648', 'display_name': 'China-Japan Friendship Hospital', 'ror': 'https://ror.org/037cjxp13', 'country_code': 'CN', 'type': 'healthcare'}], 'raw_affiliation_string': 'Department of Pulmonary and Critical Care Medicine, Center of Respiratory Medicine, National Clinical Research Center for Respiratory Diseases, China-Japan Friendship Hospital, Beijing, China.'}, {'author_position': 'middle', 'author': {'id': 'https://openalex.org/A2620960243', 'display_name': 'Xingwang Li', 'orcid': None}, 'institutions': [{'id': 'https://openalex.org/I4210150338', 'display_name': 'Beijing Ditan Hospital', 'ror': 'https://ror.org/05kkkes98', 'country_code': 'CN', 'type': 'healthcare'}], 'raw_affiliation_string': 'Clinical and Research Center of Infectious Diseases Beijing Ditan Hospital Capital Medical University Beijing China'}, {'author_position': 'middle', 'author': {'id': 'https://openalex.org/A2103212470', 'display_name': 'Lili Ren', 'orcid': 'https://orcid.org/0000-0002-6645-8183'}, 'institutions': [{'id': None, 'display_name': 'NHC Key Laboratory of Systems Biology of Pathogens and Christophe Mérieux Laboratory, Institute of Pathogen Biology, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China.', 'ror': None, 'country_code': None, 'type': None}], 'raw_affiliation_string': 'NHC Key Laboratory of Systems Biology of Pathogens and Christophe Mérieux Laboratory, Institute of Pathogen Biology, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China.'}, {'author_position': 'middle', 'author': {'id': 'https://openalex.org/A2582133136', 'display_name': 'Jianping Zhao', 'orcid': None}, 'institutions': [{'id': 'https://openalex.org/I79431787', 'display_name': 'Tongji Medical College', 'ror': None, 'country_code': 'CN', 'type': None}], 'raw_affiliation_string': 'Tongji Hospital, Tongji medical college, Huazhong university of Science and Technology, Wuhan, China'}, {'author_position': 'middle', 'author': {'id': 'https://openalex.org/A2550526349', 'display_name': 'Yi Hu', 'orcid': None}, 'institutions': [{'id': 'https://openalex.org/I47720641', 'display_name': 'Huazhong University of Science and Technology', 'ror': 'https://ror.org/00p991c53', 'country_code': 'CN', 'type': 'education'}], 'raw_affiliation_string': 'Department of Pulmonary and Critical Care Medicine, The Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and 
Technology , Wuhan, China'}, {'author_position': 'middle', 'author': {'id': 'https://openalex.org/A3197971936', 'display_name': 'Li Zhang', 'orcid': 'https://orcid.org/0000-0002-7615-4976'}, 'institutions': [{'id': None, 'display_name': 'Jin Yin-tan Hospital, Wuhan, China', 'ror': None, 'country_code': None, 'type': None}], 'raw_affiliation_string': 'Jin Yin-tan Hospital, Wuhan, China'}, {'author_position': 'middle', 'author': {'id': 'https://openalex.org/A2911488157', 'display_name': 'Guohui Fan', 'orcid': None}, 'institutions': [{'id': 'https://openalex.org/I2801051648', 'display_name': 'China-Japan Friendship Hospital', 'ror': 'https://ror.org/037cjxp13', 'country_code': 'CN', 'type': 'healthcare'}], 'raw_affiliation_string': 'Department of Pulmonary and Critical Care Medicine, Center of Respiratory Medicine, National Clinical Research Center for Respiratory Diseases, China-Japan Friendship Hospital, Beijing, China.'}, {'author_position': 'middle', 'author': {'id': 'https://openalex.org/A3001214061', 'display_name': 'Jiuyang Xu', 'orcid': 'https://orcid.org/0000-0002-1906-5918'}, 'institutions': [{'id': 'https://openalex.org/I99065089', 'display_name': 'Tsinghua University', 'ror': 'https://ror.org/03cve4549', 'country_code': 'CN', 'type': 'education'}], 'raw_affiliation_string': 'Tsinghua University,School of Medicine,Beijing,China'}, {'author_position': 'middle', 'author': {'id': 'https://openalex.org/A3006530843', 'display_name': 'Xiaoying Gu', 'orcid': None}, 'institutions': [{'id': 'https://openalex.org/I2801051648', 'display_name': 'China-Japan Friendship Hospital', 'ror': 'https://ror.org/037cjxp13', 'country_code': 'CN', 'type': 'healthcare'}], 'raw_affiliation_string': 'Department of Pulmonary and Critical Care Medicine, Center of Respiratory Medicine, National Clinical Research Center for Respiratory Diseases, China-Japan Friendship Hospital, Beijing, China.'}, {'author_position': 'middle', 'author': {'id': 'https://openalex.org/A3205428521', 'display_name': 'Zhenshun Cheng', 'orcid': None}, 'institutions': [{'id': 'https://openalex.org/I4210120234', 'display_name': 'Zhongnan Hospital of Wuhan University', 'ror': 'https://ror.org/01v5mqw79', 'country_code': 'CN', 'type': 'healthcare'}], 'raw_affiliation_string': 'Department of Respiratory Medicine, Zhongnan Hospital of Wuhan University, Wuhan, China'}, {'author_position': 'middle', 'author': {'id': 'https://openalex.org/A2498193827', 'display_name': 'Ting Yu', 'orcid': None}, 'institutions': [{'id': None, 'display_name': 'Jin Yin-tan Hospital, Wuhan, China', 'ror': None, 'country_code': None, 'type': None}], 'raw_affiliation_string': 'Jin Yin-tan Hospital, Wuhan, China'}] Now my aim is actually to take only some of the info contained in the above, namely the name of the unique authors' and institutions' records and create two columns containing a list with the author names and the institutions name. In the above, specifically, the result should be the construction of two columns "authors" and "institutions" looking like this (for what concerns the first paper): df['authors][0] ['Chaolin Huang','Yeming Wang','Xingwang Li','Lili Ren','Jianping Zhao','Yi Hu','Li Zhang','Guohui Fan','Jiuyang Xu','Xiaoying Gu','Zhenshun Cheng','Ting Yu'] df['institutions'][0] ['Jin Yin-tan Hospital, Wuhan, China','China-Japan Friendship Hospital','Beijing Ditan Hospital','Tsinghua University','Zhongnan Hospital of Wuhan University','Jin Yin-tan Hospital, Wuhan, China'] Please notice that doubles (e.g. 
'China-Japan Friendship Hospital') are not repeated in the list. Thank you
[ "you can check if the following code works!\ndf = pd.DataFrame()\ndf['authors'] = pd.json_normalize(j)['author.display_name']\ndf['institutions'] = pd.json_normalize(j, record_path=['institutions'])['display_name']\ndf\n\n" ]
[ 1 ]
[]
[]
[ "json", "pandas", "python" ]
stackoverflow_0074612814_json_pandas_python.txt
Q: Psycopg2 does not recognize the DB I want to drop I tried to write a function to drop a database: def deleteDb(self, dbName: str): conn = psycopg2.connect(dbname="postgres", user="postgres") conn.autocommit = True curs = conn.cursor() curs.execute("DROP DATABASE {};".format(dbName)) curs.close() conn.close() When I try to test it with an existing db: def test_deleteDb(self): self.deleteDb("dbTest") I get this error: "psycopg2.errors.InvalidCatalogName: database "dbtest" does not exist" I tried to play with the isolation level, to drop all the connections to the database and to connect directly to the database, but it did not work. A: Remember to put quotes around identifiers that contain upper-case letters: curs.execute('DROP DATABASE "{}";'.format(dbName)) Note that string substitution into SQL statements is generally a bad idea because it is vulnerable to SQL injection.
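Building on that answer, a safer sketch is to let psycopg2 compose the identifier instead of formatting the string yourself: psycopg2.sql.Identifier adds the double quotes (so "dbTest" keeps its capital T) and avoids the injection concern. The function body below simply mirrors the question's code.
from psycopg2 import sql

def deleteDb(self, dbName: str):
    conn = psycopg2.connect(dbname="postgres", user="postgres")
    conn.autocommit = True
    with conn.cursor() as curs:
        # Identifier takes care of quoting the database name correctly
        curs.execute(sql.SQL("DROP DATABASE {}").format(sql.Identifier(dbName)))
    conn.close()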
Psycopg2 does not recognize the DB I want to drop
I tried to write a function to drop database : def deleteDb(self, dbName: str): conn = psycopg2.connect(dbname="postgres", user="postgres") conn.autocommit = True curs = conn.cursor() curs.execute("DROP DATABASE {};".format(dbName)) curs.close() conn.close() When I try to test it with an existing db : def test_deleteDb(self): self.deleteDb("dbTest") I get this error : "psycopg2.errors.InvalidCatalogName: database "dbtest" does not exist" I tried to play with the isolation level, to drop all the connections to the database and to connect directly to the database but it did not work
[ "Remember to put quotes around identifiers that contain upper-case letters:\ncurs.execute('DROP DATABASE \"{}\";'.format(dbName))\n\nNote that string substitution into SQL-statements is generally a bad idea beause it is vulnerable to SQL-injection.\n" ]
[ 1 ]
[]
[]
[ "postgresql", "psycopg2", "python" ]
stackoverflow_0074615134_postgresql_psycopg2_python.txt
Q: AttributeError : module 'module.modulename' has no attribute 'register' Apologies everyone. I'm just beginning with Python and Flask. I'm trying to add all my routes to a separate routes.py file. Below is my folder structure. - appfolder - routes __init__.py (empty file) routes.py - app.py routes.py contents from flask import Blueprint routes = Blueprint('routes', __name__) @routes.route('/') def index(): return 'index' @routes.route('backend/login') def backendlogin(): return 'backend login' app.py contents from flask import Flask from flask_sqlalchemy import SQLAlchemy from routes import routes app = None db = SQLAlchemy() def create_app(): global app, db app = Flask(__name__) app.config['FLASK_DEBUG'] = True # Register Routes app.register_blueprint(routes) return app __all__ = (app, db, create_app) When I try to run flask run in the terminal, I get the error below. AttributeError: module 'routes.routes' has no attribute 'register' Any help is greatly appreciated as I've been stuck for a few hours on this. A: You're not actually importing the blueprint from routes.py, you're importing the module containing the blueprint, hence the error message AttributeError: module 'routes.routes' has no attribute 'register' Change this from routes import routes to from routes.routes import routes
AttributeError : module 'module.modulename' has no attribute 'register'
Apologies everyone. Begining out Python and Flask. I'm trying to add all my routes to a separate routes.py file. Below is my folder structure. - appfolder - routes __init__.py (empty file) routes.py - app.py routes.py contents from flask import Blueprint routes = Blueprint('routes', __name__) @routes.route('/') def index(): return 'index' @routes.route('backend/login') def backendlogin(): return 'backend login' app.py contents from flask import Flask from flask_sqlalchemy import SQLAlchemy from routes import routes app = None db = SQLAlchemy() def create_app(): global app, db app = Flask(__name__) app.config['FLASK_DEBUG'] = True # Register Routes app.register_blueprint(routes) return app __all__ = (app, db, create_app) When I try to run flask run in the terminal I get thrown the below error. AttributeError: module 'routes.routes' has no attribute 'register' Any help is greatly appreciated as I've been stuck for a few hours on this.
[ "You're not actually importing the blueprint from routes.py, you're importing the script containing the blueprint hence the error message\n\nAttributeError: module 'routes.routes' has no attribute 'register'\n\nchange this\nfrom routes import routes\nto\nfrom routes.routes import routes\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0073982014_python.txt
Q: Tuple function that returns certain parameters I'm stuck on an exercise where I should write a function which makes a tuple out of 3 given numbers and returns a tuple following these rules: 1st element must be the smallest parameter 2nd element must be the biggest parameter 3rd element is the sum of parameters For example: > print(do_tuple(5, 3, -1)) # (-1, 5, 7) What I have so far: def do_tuple(x: int, y: int, z: int): tuple_ = (x,y,z) summ = x + y + z mini = min(tuple_) maxi = max(tuple_) if __name__ == "__main__": print(do_tuple(5, 3, -1)) I know I should be able to sort and return these values according to the criteria but I can't wrap my head around it. A: You need to return the tuple inside your function def do_tuple(x: int, y: int, z: int): tuple_ = (x,y,z) summ = x + y + z mini = min(tuple_) maxi = max(tuple_) return (mini, maxi, summ) if __name__ == "__main__": print(do_tuple(5, 3, -1)) A: As already indicated in a previous answer, you just have to add a return statement in your function. Additionally, you can use packing to simplify and manage a variable number of arguments. Finally it results in a one-lined and easy-to-read function, as follows: def do_tuple(*args): return (min(args), max(args), sum(args)) print(do_tuple(5, 3, -1)) # (-1, 5, 7)
Tuple function that returns certain parameters
I'm stuck on an exercise where I should do a function which makes a tuple out of 3 given numbers and returns tuple following these rules: 1st element must be the smallest parameter 2nd element must be the biggest parameter 3rd element is the sum of parameters For example: > print(do_tuple(5, 3, -1)) # (-1, 5, 7) What I have so far: def do_tuple(x: int, y: int, z: int): tuple_ = (x,y,z) summ = x + y + z mini = min(tuple_) maxi = max(tuple_) if __name__ == "__main__": print(do_tuple(5, 3, -1)) I know I should be able to sort and return these values according to the criteria but I can't work my head around it..
[ "You need to return the tuple inside your function\ndef do_tuple(x: int, y: int, z: int):\n \n tuple_ = (x,y,z)\n summ = x + y + z\n mini = min(tuple_)\n maxi = max(tuple_)\n return (mini, maxi, summ)\n \n \nif __name__ == \"__main__\":\n print(do_tuple(5, 3, -1))\n\n", "As already indicated in a previous answer, you just have to add a return statement in your function. Additionnally, you can use packing to simplify and manage a variable number of arguments. Finally it results in a one-lined and easy-to-read function, as follows:\ndef do_tuple(*args):\n return (max(args), min(args), sum(args))\n \nprint(do_tuple(5, 3, -1)) # (5, -1, 7)\n\n" ]
[ 2, 2 ]
[]
[]
[ "python", "tuples" ]
stackoverflow_0074615011_python_tuples.txt
Q: Error on python - TypeError: 'str' object is not callable I'm starting to code right now, but I've searched on google and found an answer: I know the problem is a variable that is already predefined in python that needs to be renamed, but I can't find it. Can someone help me? import os import pandas as pd lista_arquivo = os.listdir(fr"C:\Users\Master\Desktop\cursos\projetos\PYTHON\Faturamento_AM") print(lista_arquivo) tabela_total = pd.DataFrame() for arquivo in lista_arquivo: if "abril.xlsx" in arquivo(): tabela = pd.read_excel(fr"C:\Users\Master\Desktop\cursos\projetos\PYTHON\Faturamento_AM\{arquivo}") tabela_total = tabela_total.append(tabela) print(arquivo) print(tabela_total) tabela_faturamento = tabela_total.groupby('Faturamento').sum() print(tabela_faturamento) TypeError: 'str' object is not callable I tried renaming the file, putting 'r', 'f' before the directory path, putting {file} at the end of the directory path... A: You have a typo in if "abril.xlsx" in arquivo(): it should be: if "abril.xlsx" in arquivo: When you add () to the variable name, Python tries to "call" it - execute it as a function - but it is a string, which is why you're getting the error.
Error on python - TypeError: 'str' object is not callable
I'm starting to code right now, but I've searched on google and found an answer: I know the problem is a variable that is already predefined in python that needs to be renamed, but I can't find it. Can someone help me? import os import pandas as pd lista_arquivo = os.listdir(fr"C:\Users\Master\Desktop\cursos\projetos\PYTHON\Faturamento_AM") print(lista_arquivo) tabela_total = pd.DataFrame() for arquivo in lista_arquivo: if "abril.xlsx" in arquivo(): tabela = pd.read_excel(fr"C:\Users\Master\Desktop\cursos\projetos\PYTHON\Faturamento_AM\{arquivo}") tabela_total = tabela_total.append(tabela) print(arquivo) print(tabela_total) tabela_faturamento = tabela_total.groupby('Faturamento').sum() print(tabela_faturamento) TypeError: 'str' object is not callable I tried renaming the file, putting 'r', 'f' before the directory path, putting {file} at the end of the directory path...
[ "you have a typo in\nif \"abril.xlsx\" in arquivo():\n\nit should be:\nif \"abril.xlsx\" in arquivo:\n\nWhen you are adding () to the variable name it is trying to \"call\" it - execute as a function, but it is string, that's why you're getting error\n" ]
[ 1 ]
[]
[]
[ "python", "string" ]
stackoverflow_0074615211_python_string.txt
Q: Is it possible to transform one asset into another asset using ops in dagster? From what I found here, it is possible to use ops and graphs to generate assets. However, I would like to use an asset as an input for an op. I am exploring it for a following use case: I fetch a list of country metadata from an external API and store it in my resource: @dagster.asset def country_metadata_asset() -> List[Dict]: ... I use this asset to define some downstream assets, for example: @dagster.asset def country_names_asset(country_metadata_asset) -> List[str]: ... I would like to use this asset to call another data source to retrieve and validate data and then write it to my resource. It returns a huge amount of rows. That is why I need to do it somehow in batch, and I thought that graph with ops would be a better choice for it. I thought to do something like this: @dagster.op(out=dagster.DynamicOut()) def load_country_names(country_names_asset): for country_index, country_name in enumerate(country_names_asset): yield dagster.DynamicOutput( country_name, mapping_key=f"{country_index} {country_name}" ) @dagster.graph() def update_data_graph(): country_names = load_country_names() country_names.map(retrieve_and_process_data) @dagster.job() def run_update_job(): update_data_graph() It seems that my approach does not work, and I am not sure if it is conceptually correct. My questions are: How to tell dagster that the input for load_country_names is an asset? Should I manually materialise it inside op? How to efficiently write augmented data that I return from retrieve_and_process_data into my resource? It is not possible to keep data in memory. So I thought to implement it somehow using a custom IOManager, but I am not sure how to do it. A: It seems to me like the augmented data that's returned from retrieve_and_process_data can (at least in theory) be represented by an asset. So we can start from the standpoint that we'd like to create some asset that takes in country_names_asset, as well as the source data asset (the thing that has a bunch of rows in it, which we can call big_country_data_asset for now). I think this models the underlying relationships a bit better, independent of how we're actually implementing things. The question then is how to write the computation function for this asset in a way that doesn't require loading the entire contents of country_data_asset into memory at any point in time. While it's possible that you could do this with a dynamic graph, which you then wrap in a call to AssetsDefinition.from_graph, I think there's an easier approach. Dagster allows you to circumvent the IOManager machinery both when reading an asset as input, as well as when writing an asset as output. In essence, when you set an AssetKey as a non_argument_dep, this tells Dagster that there is some asset which is upstream of the asset you're defining, but will be loaded within the body of the asset function (rather than being loaded by Dagster using IOManager machinery). Similarly, if you set the output type of the function to None, this tells Dagster that the asset you're defining will be persisted by the logic inside of the function, rather than by an IOManager. Using both these concepts, we can write an asset which at no point needs to have the entire big_country_data_asset loaded. 
@asset(non_argument_deps={AssetKey("big_country_data_asset")}) def processed_country_data_asset(country_names_asset) -> None: for name in country_names_asset: # assuming this function actually stores data somewhere, # and intrinsically knows how to read from big_country_data_asset retrieve_and_process_data(name) IOManagers are a very flexible concept however, and it is possible to replicate all of this same batching behavior while using IOManagers (just a bit more convoluted). You'd need to do something like create a SourceAsset(key="big_country_data_asset", io_manager_def=my_custom_io_manager), where my_custom_io_manager has a weird load_input function which itself returns a function like: def load_input(context): def _fn(country_name): # however you actually get these rows rows = query_source_data_for_name(country_name) return rows return _fn then, you could define your asset like: @asset def processed_country_data_asset( country_names_asset, big_country_data_asset ) -> None: for name in country_names_asset: # big_country_data_asset has been loaded as a function rows = big_country_data_asset(name) process_data(rows) You can also handle writing the output of this function in an IOManager using a similar-looking trick: https://github.com/dagster-io/dagster/discussions/9772
Is it possible to transform one asset into another asset using ops in dagster?
From what I found here, it is possible to use ops and graphs to generate assets. However, I would like to use an asset as an input for an op. I am exploring it for a following use case: I fetch a list of country metadata from an external API and store it in my resource: @dagster.asset def country_metadata_asset() -> List[Dict]: ... I use this asset to define some downstream assets, for example: @dagster.asset def country_names_asset(country_metadata_asset) -> List[str]: ... I would like to use this asset to call another data source to retrieve and validate data and then write it to my resource. It returns a huge amount of rows. That is why I need to do it somehow in batch, and I thought that graph with ops would be a better choice for it. I thought to do something like this: @dagster.op(out=dagster.DynamicOut()) def load_country_names(country_names_asset): for country_index, country_name in enumerate(country_names_asset): yield dagster.DynamicOutput( country_name, mapping_key=f"{country_index} {country_name}" ) @dagster.graph() def update_data_graph(): country_names = load_country_names() country_names.map(retrieve_and_process_data) @dagster.job() def run_update_job(): update_data_graph() It seems that my approach does not work, and I am not sure if it is conceptually correct. My questions are: How to tell dagster that the input for load_country_names is an asset? Should I manually materialise it inside op? How to efficiently write augmented data that I return from retrieve_and_process_data into my resource? It is not possible to keep data in memory. So I thought to implement it somehow using a custom IOManager, but I am not sure how to do it.
[ "It seems to me like the augmented data that's returned from retrieve_and_process_data can (at least in theory) be represented by an asset.\nSo we can start from the standpoint that we'd like to create some asset that takes in country_names_asset, as well as the source data asset (the thing that has a bunch of rows in it, which we can call big_country_data_asset for now). I think this models the underlying relationships a bit better, independent of how we're actually implementing things.\nThe question then is how to write the computation function for this asset in a way that doesn't require loading the entire contents of country_data_asset into memory at any point in time. While it's possible that you could do this with a dynamic graph, which you then wrap in a call to AssetsDefinition.from_graph, I think there's an easier approach.\nDagster allows you to circumvent the IOManager machinery both when reading an asset as input, as well as when writing an asset as output. In essence, when you set an AssetKey as a non_argument_dep, this tells Dagster that there is some asset which is upstream of the asset you're defining, but will be loaded within the body of the asset function (rather than being loaded by Dagster using IOManager machinery).\nSimilarly, if you set the output type of the function to None, this tells Dagster that the asset you're defining will be persisted by the logic inside of the function, rather than by an IOManager.\nUsing both these concepts, we can write an asset which at no point needs to have the entire big_country_data_asset loaded.\n@asset(non_argument_deps={AssetKey(\"big_country_data_asset\")})\ndef processed_country_data_asset(country_names_asset) -> None:\n for name in country_names_asset:\n # assuming this function actually stores data somewhere,\n # and intrinsically knows how to read from big_country_data_asset\n retrieve_and_process_data(name)\n\nIOManagers are a very flexible concept however, and it is possible to replicate all of this same batching behavior while using IOManagers (just a bit more convoluted). You'd need to do something like create a SourceAsset(key=\"big_country_data_asset\", io_manager_def=my_custom_io_manager), where my_custom_io_manager has a weird load_input function which itself returns a function like:\ndef load_input(context):\n def _fn(country_name):\n # however you actually get these rows\n rows = query_source_data_for_name(country_name)\n return rows\n return _fn\n\nthen, you could define your asset like:\n@asset\ndef processed_country_data_asset(\n country_names_asset, big_country_data_asset\n) -> None:\n for name in country_names_asset:\n # big_country_data_asset has been loaded as a function\n rows = big_country_data_asset(name)\n process_data(rows)\n\nYou can also handle writing the output of this function in an IOManager using a similar-looking trick: https://github.com/dagster-io/dagster/discussions/9772\n" ]
[ 1 ]
[]
[]
[ "dagster", "python" ]
stackoverflow_0074613973_dagster_python.txt
Q: Unknown image file format. One of JPEG, PNG, GIF, BMP required I built a simple CNN model and it raised below errors: Epoch 1/10 235/235 [==============================] - ETA: 0s - loss: 540.2643 - accuracy: 0.4358 --------------------------------------------------------------------------- InvalidArgumentError Traceback (most recent call last) <ipython-input-14-ab88232c98aa> in <module>() 15 train_ds, 16 validation_data=val_ds, ---> 17 epochs=epochs 18 ) 7 frames /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name) 58 ctx.ensure_initialized() 59 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, ---> 60 inputs, attrs, num_outputs) 61 except core._NotOkStatusException as e: 62 if name is not None: InvalidArgumentError: Unknown image file format. One of JPEG, PNG, GIF, BMP required. [[{{node decode_image/DecodeImage}}]] [[IteratorGetNext]] [Op:__inference_test_function_2924] Function call stack: test_function The code I wrote is quite simple and standard. Most of them are just directly copied from the official website. It raised this error before the first epoch finish. I am pretty sure that the images are all png files. The train folder does not contain anything like text, code, except imgages. I am using Colab. The version of tensorlfow is 2.5.0. Appreciate for any help. data_dir = './train' train_ds = tf.keras.preprocessing.image_dataset_from_directory( data_dir, subset='training', validation_split=0.2, batch_size=batch_size, seed=42 ) val_ds = tf.keras.preprocessing.image_dataset_from_directory( data_dir, subset='validation', validation_split=0.2, batch_size=batch_size, seed=42 ) model = Sequential([ layers.InputLayer(input_shape=(image_size, image_size, 3)), layers.Conv2D(32, 3, activation='relu'), layers.MaxPooling2D(), layers.Flatten(), layers.Dense(128, activation='relu'), layers.Dense(num_classes) ]) optimizer = tf.keras.optimizers.Adam(learning_rate=0.001) model.compile( optimizer=optimizer, loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) history = model.fit( train_ds, validation_data=val_ds, epochs=epochs ) A: Some of your files in the validation folder are not in the format accepted by Tensorflow ( JPEG, PNG, GIF, BMP), or may be corrupted. The extension of a file is indicative only, and does not enforce anything on the content of the file. You might be able to find the culprit using the imghdr module from the python standard library, and a simple loop. from pathlib import Path import imghdr data_dir = "/home/user/datasets/samples/" image_extensions = [".png", ".jpg"] # add there all your images file extensions img_type_accepted_by_tf = ["bmp", "gif", "jpeg", "png"] for filepath in Path(data_dir).rglob("*"): if filepath.suffix.lower() in image_extensions: img_type = imghdr.what(filepath) if img_type is None: print(f"{filepath} is not an image") elif img_type not in img_type_accepted_by_tf: print(f"{filepath} is a {img_type}, not accepted by TensorFlow") This should print out whether you have files that are not images, or that are not what their extension says they are, and not accepted by TF. Then you can either get rid of them or convert them to a format that TensorFlow supports. A: this should work fine, the same for supported types ... ex for png : image = tf.io.read_file("im.png") image = tf.image.decode_png(image, channels=3) A: TensorFlow has some strictness when dealing with image formats. 
This should guide in deleting the bad images. Some times your data set may even run well with, for instance Torch but will generate a format error with Tf. Nonetheless, it is best practice to always carryout preprocessing on the images to ensure a robust, safe and standard model. from pathlib import Path import imghdr from pathlib import Path import imghdr img_link=list(Path("/home/user/datasets/samples/").glob(r'**/*.jpg')) count_num=0 for lnk in img_link: binary_img=open(lnk,'rb') find_img=tf.compat.as_bytes('JFIF') in binary_img.peek(10)#The JFIF is a JPEG File Interchange Format (JFIF). It is a standard which we gauge if an image is corrupt or substandard if not find_img: count_num+=1 os.remove(str(lnk)) print('Total %d pcs image delete from Dataset' % count_num) #this should help you delete the bad encoded
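If the flagged files are still readable by Pillow, re-encoding them is an alternative to deleting them; a sketch under the assumption that Pillow is installed and that overwriting the files in place is acceptable (the directory is the same placeholder used above):
from pathlib import Path
from PIL import Image

data_dir = "/home/user/datasets/samples/"

for filepath in Path(data_dir).rglob("*"):
    if filepath.suffix.lower() in (".png", ".jpg", ".jpeg"):
        try:
            with Image.open(filepath) as img:
                # Re-save as a real PNG; TensorFlow sniffs the file content,
                # not the extension, so the original file name can stay as it is
                img.convert("RGB").save(filepath, format="PNG")
        except OSError:
            # Pillow cannot read it either, so treat the file as corrupt
            print(f"unreadable, consider removing: {filepath}")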
Unknown image file format. One of JPEG, PNG, GIF, BMP required
I built a simple CNN model and it raised below errors: Epoch 1/10 235/235 [==============================] - ETA: 0s - loss: 540.2643 - accuracy: 0.4358 --------------------------------------------------------------------------- InvalidArgumentError Traceback (most recent call last) <ipython-input-14-ab88232c98aa> in <module>() 15 train_ds, 16 validation_data=val_ds, ---> 17 epochs=epochs 18 ) 7 frames /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name) 58 ctx.ensure_initialized() 59 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, ---> 60 inputs, attrs, num_outputs) 61 except core._NotOkStatusException as e: 62 if name is not None: InvalidArgumentError: Unknown image file format. One of JPEG, PNG, GIF, BMP required. [[{{node decode_image/DecodeImage}}]] [[IteratorGetNext]] [Op:__inference_test_function_2924] Function call stack: test_function The code I wrote is quite simple and standard. Most of them are just directly copied from the official website. It raised this error before the first epoch finish. I am pretty sure that the images are all png files. The train folder does not contain anything like text, code, except imgages. I am using Colab. The version of tensorlfow is 2.5.0. Appreciate for any help. data_dir = './train' train_ds = tf.keras.preprocessing.image_dataset_from_directory( data_dir, subset='training', validation_split=0.2, batch_size=batch_size, seed=42 ) val_ds = tf.keras.preprocessing.image_dataset_from_directory( data_dir, subset='validation', validation_split=0.2, batch_size=batch_size, seed=42 ) model = Sequential([ layers.InputLayer(input_shape=(image_size, image_size, 3)), layers.Conv2D(32, 3, activation='relu'), layers.MaxPooling2D(), layers.Flatten(), layers.Dense(128, activation='relu'), layers.Dense(num_classes) ]) optimizer = tf.keras.optimizers.Adam(learning_rate=0.001) model.compile( optimizer=optimizer, loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) history = model.fit( train_ds, validation_data=val_ds, epochs=epochs )
[ "Some of your files in the validation folder are not in the format accepted by Tensorflow ( JPEG, PNG, GIF, BMP), or may be corrupted. The extension of a file is indicative only, and does not enforce anything on the content of the file.\nYou might be able to find the culprit using the imghdr module from the python standard library, and a simple loop.\nfrom pathlib import Path\nimport imghdr\n\ndata_dir = \"/home/user/datasets/samples/\"\nimage_extensions = [\".png\", \".jpg\"] # add there all your images file extensions\n\nimg_type_accepted_by_tf = [\"bmp\", \"gif\", \"jpeg\", \"png\"]\nfor filepath in Path(data_dir).rglob(\"*\"):\n if filepath.suffix.lower() in image_extensions:\n img_type = imghdr.what(filepath)\n if img_type is None:\n print(f\"{filepath} is not an image\")\n elif img_type not in img_type_accepted_by_tf:\n print(f\"{filepath} is a {img_type}, not accepted by TensorFlow\")\n\nThis should print out whether you have files that are not images, or that are not what their extension says they are, and not accepted by TF. Then you can either get rid of them or convert them to a format that TensorFlow supports.\n", "this should work fine, the same for supported types ... ex for png :\nimage = tf.io.read_file(\"im.png\")\nimage = tf.image.decode_png(image, channels=3)\n\n", "TensorFlow has some strictness when dealing with image formats. This should guide in deleting the bad images. Some times your data set may even run well with, for instance Torch but will generate a format error with Tf. Nonetheless, it is best practice to always carryout preprocessing on the images to ensure a robust, safe and standard model.\nfrom pathlib import Path\nimport imghdr\n\nfrom pathlib import Path\nimport imghdr\n\nimg_link=list(Path(\"/home/user/datasets/samples/\").glob(r'**/*.jpg'))\n\ncount_num=0\nfor lnk in img_link:\n binary_img=open(lnk,'rb')\n find_img=tf.compat.as_bytes('JFIF') in binary_img.peek(10)#The JFIF is a JPEG File Interchange Format (JFIF). It is a standard which we gauge if an image is corrupt or substandard\n if not find_img:\n count_num+=1\n os.remove(str(lnk))\nprint('Total %d pcs image delete from Dataset' % count_num)\n#this should help you delete the bad encoded\n\n" ]
[ 13, 0, 0 ]
[]
[]
[ "python", "tensorflow" ]
stackoverflow_0068191448_python_tensorflow.txt
Q: multiprocessing: No space left on device When i run the multiprocessing example on a OSX. I get the Error OSError: [Errno 28] No space left on device. The ENOSPC ("No space left on device") error will be triggered in any situation in which the data or the metadata associated with an I/O operation can't be written down anywhere because of lack of space. This doesn't always mean disk space – it could mean physical disk space, logical space (e.g. maximum file length), space in a certain data structure or address space. For example you can get it if there isn't space in the directory table (vfat) or there aren't any inodes left. It roughly means “I can't find where to write this down”. Source: https://stackoverflow.com/a/6999259/330658 What i don't understand, where files are written down in my code below? Any help highly appreicated. Example Code: #! /usr/bin/env python3 import sys import os import multiprocessing as mp import time def foo_pool(x): time.sleep(2) return x*x result_list = [] def log_result(result): result_list.append(result) print(result) def apply_async_with_callback(): pool = mp.Pool() for i in range(10): pool.apply_async(foo_pool, args = (i, ), callback = log_result) pool.close() pool.join() print(result_list) if __name__ == '__main__': apply_async_with_callback() Full Error: python3 test.py Traceback (most recent call last): File "test.py", line 32, in <module> apply_async_with_callback() File "test.py", line 23, in apply_async_with_callback pool = mp.Pool() File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/context.py", line 119, in Pool return Pool(processes, initializer, initargs, maxtasksperchild, File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/pool.py", line 191, in __init__ self._setup_queues() File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/pool.py", line 343, in _setup_queues self._inqueue = self._ctx.SimpleQueue() File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/context.py", line 113, in SimpleQueue return SimpleQueue(ctx=self.get_context()) File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/queues.py", line 342, in __init__ self._rlock = ctx.Lock() File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/context.py", line 68, in Lock return Lock(ctx=self.get_context()) File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/synchronize.py", line 162, in __init__ SemLock.__init__(self, SEMAPHORE, 1, 1, ctx=ctx) File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/synchronize.py", line 57, in __init__ sl = self._semlock = _multiprocessing.SemLock( OSError: [Errno 28] No space left on device A: One possible reason to run into this error (as in my case), is that the system reaches a limit of allowed POSIX semaphores. This limit can be inspected by the sysctl kern.posix.sem.max command and is 10000 on my macOS 13.0.1. 
To set it, for example to 15000 until next reboot, you can use: sudo sysctl -w kern.posix.sem.max=15000 While this allowed the Python script to run, I wasn't able to find out which processes were actually using up the semaphores. The only way I found of listing this type of semaphores was lsof. It shows them as type PSXSEM, e.g.: sudo lsof | grep PSXSEM But it found only a couple of the semaphores -- not nearly enough to justify reaching the limit. So, I suspect a bug in the system, where semaphores are not cleaned up correctly. Further evidence to this is that after a reboot the script was able to run with the initial limit set.
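Unrelated to the kernel limit, one small precaution is to run the pool through its context manager so its workers and synchronisation primitives are always torn down, even when a run is interrupted; this is only a hygiene measure, not a confirmed fix for the leak suspected above. The example from the question rewritten that way:
import multiprocessing as mp
import time

def foo_pool(x):
    time.sleep(2)
    return x * x

if __name__ == "__main__":
    # The with-block terminates the workers and releases the pool's
    # internal locks and queues when it exits, even on an exception.
    with mp.Pool() as pool:
        result_list = pool.map(foo_pool, range(10))
    print(result_list)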
multiprocessing: No space left on device
When i run the multiprocessing example on a OSX. I get the Error OSError: [Errno 28] No space left on device. The ENOSPC ("No space left on device") error will be triggered in any situation in which the data or the metadata associated with an I/O operation can't be written down anywhere because of lack of space. This doesn't always mean disk space – it could mean physical disk space, logical space (e.g. maximum file length), space in a certain data structure or address space. For example you can get it if there isn't space in the directory table (vfat) or there aren't any inodes left. It roughly means “I can't find where to write this down”. Source: https://stackoverflow.com/a/6999259/330658 What i don't understand, where files are written down in my code below? Any help highly appreicated. Example Code: #! /usr/bin/env python3 import sys import os import multiprocessing as mp import time def foo_pool(x): time.sleep(2) return x*x result_list = [] def log_result(result): result_list.append(result) print(result) def apply_async_with_callback(): pool = mp.Pool() for i in range(10): pool.apply_async(foo_pool, args = (i, ), callback = log_result) pool.close() pool.join() print(result_list) if __name__ == '__main__': apply_async_with_callback() Full Error: python3 test.py Traceback (most recent call last): File "test.py", line 32, in <module> apply_async_with_callback() File "test.py", line 23, in apply_async_with_callback pool = mp.Pool() File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/context.py", line 119, in Pool return Pool(processes, initializer, initargs, maxtasksperchild, File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/pool.py", line 191, in __init__ self._setup_queues() File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/pool.py", line 343, in _setup_queues self._inqueue = self._ctx.SimpleQueue() File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/context.py", line 113, in SimpleQueue return SimpleQueue(ctx=self.get_context()) File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/queues.py", line 342, in __init__ self._rlock = ctx.Lock() File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/context.py", line 68, in Lock return Lock(ctx=self.get_context()) File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/synchronize.py", line 162, in __init__ SemLock.__init__(self, SEMAPHORE, 1, 1, ctx=ctx) File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/synchronize.py", line 57, in __init__ sl = self._semlock = _multiprocessing.SemLock( OSError: [Errno 28] No space left on device
[ "One possible reason to run into this error (as in my case), is that the system reaches a limit of allowed POSIX semaphores. This limit can be inspected by the sysctl kern.posix.sem.max command and is 10000 on my macOS 13.0.1.\nTo set it, for example to 15000 until next reboot, you can use:\nsudo sysctl -w kern.posix.sem.max=15000\n\nWhile this allowed the Python script to run, I wasn't able to find out which processes were actually using up the semaphores. The only way I found of listing this type of semaphores was lsof. It shows them as type PSXSEM, e.g.:\nsudo lsof | grep PSXSEM\n\nBut it found only a couple of the semaphores -- not nearly enough to justify reaching the limit. So, I suspect a bug in the system, where semaphores are not cleaned up correctly. Further evidence to this is that after a reboot the script was able to run with the initial limit set.\n" ]
[ 0 ]
[]
[]
[ "multiprocessing", "oserror", "python", "python_3.x" ]
stackoverflow_0070175977_multiprocessing_oserror_python_python_3.x.txt
Q: python app function error when reading a parquet file I am developing a python script that will run as a azure app function. It should read a parquet file from our gen1 datalake and do some processing over it. When running in debug mode in VS Code it works perfectly but when I deploy the script to the app function it retrieve a error with a not very meaninfull message. Executed 'Functions.get_warehouse_from_sap' (Failed, Id=227a48b8-0486-4c3f-8758-1f6298afaf68, Duration=9122ms) This happens when it tries to read the parquet file. I tried to use pyarrow and pandas.read_parquet function but both give the same error. I tried to put a try/execept around this particular point of the code but any excepetion is retrieved. To read the datalake I am using AzureDLFileSystem from azure.datalake.store.core python libray. Here is part of my code. from azure.datalake.store import lib from azure.datalake.store.core import AzureDLFileSystem import pandas as pd adlCreds = lib.auth(tenant_id=tenant_id, client_id=client_id, client_secret=secret_key, resource = 'https://datalake.azure.net/') adlsFileSystemClient = AzureDLFileSystem(adlCreds, store_name='<repository name>') f=adlsFileSystemClient.ls('<path to my file>') #until here it works fine. It can open the file #here is where the problem happens. try: df=pd.read_parquet(f) except Exception as e: logging.info(str(e)) Any idea? Thanks A: Well, the .lsfunction will list all the file in the folder. Instead of that I will suggest that you download the file and then read the file. To this you will need to run the following code. multithread.ADLDownloader(adlsFileSystemClient,lpath="< Local Path >",rpath="<Path to file>") Also, while reading the file you need an engine for parquet, we can use pyarrow or we can use fastparquet for this the code will look like this: pandas.read_parquet("<path of the file>",engine="pyarrow") Refer this MSDOC on Datalake gen1 . A: I finally manage to solve the problem. The problem was memory consumption. As I am still running the app in a dev enviroment it has just 1.5Gb of memory. I was reading the whole parquet file which was consuming approximatelly 2.5Gb. I changed my code so I use the pyarrow.read_table with filter option and reading just the necessary columns from the parquet file. This reduced the memory consumption and the app function started working.
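For reference, a sketch of the column and filter pruning described above, written against the pyarrow parquet reader; the file name, column names and filter are placeholders, not the real fields of the data lake file:
import pyarrow.parquet as pq

# Only the needed columns are read, and the row filter is pushed down into
# the reader, so the full ~2.5 GB file is never held in memory at once.
table = pq.read_table(
    "warehouse.parquet",               # placeholder local copy of the file
    columns=["material", "quantity"],  # placeholder column names
    filters=[("plant", "=", "1000")],  # placeholder row filter
)
df = table.to_pandas()
print(df.head())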
python app function error when reading a parquet file
I am developing a python script that will run as a azure app function. It should read a parquet file from our gen1 datalake and do some processing over it. When running in debug mode in VS Code it works perfectly but when I deploy the script to the app function it retrieve a error with a not very meaninfull message. Executed 'Functions.get_warehouse_from_sap' (Failed, Id=227a48b8-0486-4c3f-8758-1f6298afaf68, Duration=9122ms) This happens when it tries to read the parquet file. I tried to use pyarrow and pandas.read_parquet function but both give the same error. I tried to put a try/execept around this particular point of the code but any excepetion is retrieved. To read the datalake I am using AzureDLFileSystem from azure.datalake.store.core python libray. Here is part of my code. from azure.datalake.store import lib from azure.datalake.store.core import AzureDLFileSystem import pandas as pd adlCreds = lib.auth(tenant_id=tenant_id, client_id=client_id, client_secret=secret_key, resource = 'https://datalake.azure.net/') adlsFileSystemClient = AzureDLFileSystem(adlCreds, store_name='<repository name>') f=adlsFileSystemClient.ls('<path to my file>') #until here it works fine. It can open the file #here is where the problem happens. try: df=pd.read_parquet(f) except Exception as e: logging.info(str(e)) Any idea? Thanks
[ "\nWell, the .lsfunction will list all the file in the folder. Instead of that I will suggest that you download the file and then read the file.\n\nTo this you will need to run the following code.\n\n\nmultithread.ADLDownloader(adlsFileSystemClient,lpath=\"< Local Path >\",rpath=\"<Path to file>\")\n\n\nAlso, while reading the file you need an engine for parquet, we can use pyarrow or we can use fastparquet for this the code will look like this:\n\npandas.read_parquet(\"<path of the file>\",engine=\"pyarrow\")\n\n\nRefer this MSDOC on Datalake gen1 .\n", "I finally manage to solve the problem. The problem was memory consumption. As I am still running the app in a dev enviroment it has just 1.5Gb of memory. I was reading the whole parquet file which was consuming approximatelly 2.5Gb. I changed my code so I use the pyarrow.read_table with filter option and reading just the necessary columns from the parquet file. This reduced the memory consumption and the app function started working.\n" ]
[ 0, 0 ]
[]
[]
[ "azure_functions", "parquet", "python" ]
stackoverflow_0074513036_azure_functions_parquet_python.txt
Q: Issues with Keras predict function I have trained LSTM model and saved the model in my drive. I uploaded the model and when I use model.predict I get issue, but it used to work before with no problems. What is really strange is that it works fine on my laptop but not on google colab. 2 frames /usr/local/lib/python3.7/dist-packages/keras/engine/training.py in predict(self, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing) 2046 callbacks.on_predict_batch_end(end_step, {'outputs': batch_outputs}) 2047 if batch_outputs is None: -> 2048 raise ValueError('Unexpected result of `predict_function` ' 2049 '(Empty batch_outputs). Please use ' 2050 '`Model.compile(..., run_eagerly=True)`, or ' ValueError: Unexpected result of `predict_function` (Empty batch_outputs). Please use `Model.compile(..., run_eagerly=True)`, or `tf.config.run_functions_eagerly(True)` for more information of where went wrong, or file a issue/bug to `tf.keras`. This is how I use model.predict test_predictions = np.argmax(model.predict(X, verbose=0) > 0.5, axis=-1)
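The traceback itself suggests re-running eagerly to surface the real error; a diagnostic sketch of that advice for a model reloaded from disk, where the model path and the input shape are placeholders and must be replaced with the real ones:
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("my_lstm_model.h5")  # placeholder path

# Force eager execution so the underlying exception propagates instead of
# coming back as an empty batch output (model.compile(run_eagerly=True) is
# the per-model equivalent the message also mentions)
tf.config.run_functions_eagerly(True)

X = np.zeros((4, 30, 1), dtype="float32")  # placeholder batch shaped like the real input
preds = model.predict(X, verbose=0)
print(preds.shape)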
Issues with Keras predict function
I have trained LSTM model and saved the model in my drive. I uploaded the model and when I use model.predict I get issue, but it used to work before with no problems. What is really strange is that it works fine on my laptop but not on google colab. 2 frames /usr/local/lib/python3.7/dist-packages/keras/engine/training.py in predict(self, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing) 2046 callbacks.on_predict_batch_end(end_step, {'outputs': batch_outputs}) 2047 if batch_outputs is None: -> 2048 raise ValueError('Unexpected result of `predict_function` ' 2049 '(Empty batch_outputs). Please use ' 2050 '`Model.compile(..., run_eagerly=True)`, or ' ValueError: Unexpected result of `predict_function` (Empty batch_outputs). Please use `Model.compile(..., run_eagerly=True)`, or `tf.config.run_functions_eagerly(True)` for more information of where went wrong, or file a issue/bug to `tf.keras`. This is how I use model.predict test_predictions = np.argmax(model.predict(X, verbose=0) > 0.5, axis=-1)
[]
[]
[ "the model needs to be initialized you can do the model.compiled() which is a good step to do only sometimes Tensorflow loads weights without initializing the values or you can use model.fit() which will prevent the errors.\nFor my examples, the custom layer tells you about how the weights are generated inside the layer, and that needs to match with the optimizer and parameters of the working programs.\nFor machines works or build robots they use small tools to understand only the number and matrixes capable, no Tensorflow installed that included third-party applications, and next web platforms implementing they build Java versions but that is why I don't install it over communications devices that they copied and reverts.\nSample: Weights initialized with shape and dtypes ( across versions may support different dtypes )\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Class\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nclass MyLSTMLayer( tf.keras.layers.LSTM ):\n def __init__(self, units, return_sequences, return_state):\n super(MyLSTMLayer, self).__init__( units, return_sequences=return_sequences, return_state=return_state )\n self.num_units = units\n self.return_sequences = return_sequences\n self.return_state = return_state\n\n def build(self, input_shape):\n self.w = self.add_weight(\n shape=(input_shape[-1], self.units),\n initializer=\"random_normal\",\n trainable=True,\n )\n self.b = self.add_weight(\n shape=(self.units,), initializer=\"random_normal\", trainable=True\n )\n\n def call(self, inputs):\n return tf.matmul(inputs, self.w) + self.b \n\nSample: Final products running on machines\ntemp = tf.random.normal([10], 1, 0.2, tf.float32)\ntemp = np.asarray(temp) * np.asarray([ coefficient_0, coefficient_1, coefficient_2, coefficient_3, coefficient_4, coefficient_5, coefficient_6, coefficient_7, coefficient_8, coefficient_9 ]) #action = actions['up']\ntemp = tf.nn.softmax(temp)\naction = int(np.argmax(temp))\n\nThe values in matrixes is game variances, select max() or min() mapped to target actions { UP, UP, UP, UP, UP, HOLD, HOLD, HOLD, HOLD, HOLD }\n\n" ]
[ -1 ]
[ "keras", "python", "tensorflow" ]
stackoverflow_0074614374_keras_python_tensorflow.txt
Q: wget Python package downloads/saves XML without issue, but not text or html files Have been using this basic code to download and store updated sitemaps from a hosting/crawling service, and it works fine for all the XML files. However, the text and HTML files appear to be in the wrong encoding, but when I force them all to a single encoding (UTF-8) there is no change and the files are still unreadable (screenshots attached). No matter which encoding is used, the TXT and HTML files are unreadable, but the XML files are fine. I'm using Python 3.10, Django 3.0.9, and the latest wget python package available (3.2) on Windows 11. I've also tried using urllib and other packages with the same results. The code: sitemaps = ["https://.../sitemap.xml", "https://.../sitemap_images.xml", "https://.../sitemap_video.xml", "https://.../sitemap_mobile.xml", "https://.../sitemap.html", "https://.../urllist.txt", "https://.../ror.xml"] def download_and_save(url): save_dir = settings.STATICFILES_DIRS[0] filename = url.split("/")[-1] full_path = os.path.join(save_dir, filename) if os.path.exists(full_path): os.remove(full_path) wget.download(url, full_path) for url in sitemaps: download_and_save(url) For all of the XML files, I get this (which is the correct result): For the urllist.txt and sitemap.html files, however, this is the result: I'm not sure why the XML files save fine, but the encoding is messed up for text (.txt) and html files only. A: After speaking with the sitemap hosting provider (pro-sitemaps.net) it appears that the problem was on their end. The HTML and TXT files I was downloading were being served with the wrong encoding (or something similar to that). Though these files were visible/accessible in the browser from the direct URLs at their service, they were not being served via wget with the right encoding it appears. I submitted a ticket to the provider and the issue was resolved within 12 hours (though I didn't get confirmation of the exact issue that caused my problem here). I have now verified that the TXT and HTML files are being served by them in the correct encoding/format via wget.
wget Python package downloads/saves XML without issue, but not text or html files
Have been using this basic code to download and store updated sitemaps from a hosting/crawling service, and it works fine for all the XML files. However, the text and HTML files appear to be in the wrong encoding, but when I force them all to a single encoding (UTF-8) there is no change and the files are still unreadable (screenshots attached). No matter which encoding is used, the TXT and HTML files are unreadable, but the XML files are fine. I'm using Python 3.10, Django 3.0.9, and the latest wget python package available (3.2) on Windows 11. I've also tried using urllib and other packages with the same results. The code: sitemaps = ["https://.../sitemap.xml", "https://.../sitemap_images.xml", "https://.../sitemap_video.xml", "https://.../sitemap_mobile.xml", "https://.../sitemap.html", "https://.../urllist.txt", "https://.../ror.xml"] def download_and_save(url): save_dir = settings.STATICFILES_DIRS[0] filename = url.split("/")[-1] full_path = os.path.join(save_dir, filename) if os.path.exists(full_path): os.remove(full_path) wget.download(url, full_path) for url in sitemaps: download_and_save(url) For all of the XML files, I get this (which is the correct result): For the urllist.txt and sitemap.html files, however, this is the result: I'm not sure why the XML files save fine, but the encoding is messed up for text (.txt) and html files only.
[ "After speaking with the sitemap hosting provider (pro-sitemaps.net) it appears that the problem was on their end. The HTML and TXT files I was downloading were being served with the wrong encoding (or something similar to that). Though these files were visible/accessible in the browser from the direct URLs at their service, they were not being served via wget with the right encoding it appears.\nI submitted a ticket to the provider and the issue was resolved within 12 hours (though I didn't get confirmation of the exact issue that caused my problem here). I have now verified that the TXT and HTML files are being served by them in the correct encoding/format via wget.\n" ]
[ 0 ]
[]
[]
[ "django", "python", "python_3.x", "urllib", "wget" ]
stackoverflow_0074602922_django_python_python_3.x_urllib_wget.txt
Q: Put yfinance stock data into pandas dataframe Python I have a Pandas DataFrame that looks like this: DataFrame: Ticker Date AAPL 2022-11-22 MSFT 2022-11-22 META 2022-11-22 And I want to add a column that includes the stock price of each stock at that date like this: Ticker Date Price AAPL 2022-11-22 147,47 MSFT 2022-11-22 243,71 META 2022-11-22 108,50 In the ideal situation, I would append each price for the Ticker[i] inside the for loop, so I can easily make an Except: "not available" command for the stocks that are not found. What I have done so far is creating the following for loop, which lets me get all stock prices. However, I cannot find a way to merge/append/concatenate it to the dataframe. I currently have 2 separate dataframes, without a common column which makes it hard to merge. for i in DataFrame.index: ticker = DataFrame.index['Ticker'][i] start_date = DataFrame.index['Date'][i] data1 = pd.DataFrame(yf.download(ticker, start_date, start_date)) A: no loop is needed, not familiar with yf but nevertheless you can use apply: df['price'] = df.apply(lambda x: yf.download(x['Ticker'], x['Date'], x['Date']), axis=1)
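The apply line still stores a whole download result per row (and with an identical start and end date that result is usually empty, since yfinance treats the end date as exclusive). A loop-based sketch that widens the window by one day, pulls out the closing price and falls back to "not available" as the question asks; column names follow the question's frame:
import pandas as pd
import yfinance as yf

df = pd.DataFrame({
    "Ticker": ["AAPL", "MSFT", "META"],
    "Date": pd.to_datetime(["2022-11-22"] * 3),
})

prices = []
for ticker, date in zip(df["Ticker"], df["Date"]):
    # end is exclusive, so request a one-day window around the date
    data = yf.download(ticker, start=date, end=date + pd.Timedelta(days=1), progress=False)
    prices.append(float(data["Close"].iloc[0]) if not data.empty else "not available")

df["Price"] = prices
print(df)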
Put yfinance stock data into pandas dataframe Python
I have a Pandas DataFrame that looks like this: DataFrame: Ticker Date AAPL 2022-11-22 MSFT 2022-11-22 META 2022-11-22 And I want to add a column that includes the stock price of each stock at that date like this: Ticker Date Price AAPL 2022-11-22 147,47 MSFT 2022-11-22 243,71 META 2022-11-22 108,50 In the ideal situation, I would append each price for the Ticker[i] inside the for loop, so I can easily make an Except: "not available" command for the stocks that are not found. What I have done so far is creating the following for loop, which let me to get all stock prices. However, I cannot find a way to merge/append/concatenate it to the dataframe. I currently have 2 separate dataframes, without a common column which makes it hard to merge. for i in DataFrame.index: ticker = DataFrame.index['Ticker'][i] start_date = DataFrame.index['Date'][i] data1 = pd.DataFrame(yf.download(ticker, start_date, start_date))
[ "no loop is needed , not familiar with yf but nevertheless you can use apply :\ndf['price'] = df.apply(lambda x : yf.download(x.['ticker'], x.['start_date'], x.['start_date']))\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python", "stock", "yfinance" ]
stackoverflow_0074607047_pandas_python_stock_yfinance.txt
Q: Tkinter: 'image ''pyimage2'' doesn't exist'? My full code from tkinter import * i=0 for i in range(10) : window = Tk() window.title('add image') window = Canvas(window,width= 600, height= 600) window.pack() image=PhotoImage(file=r"C:\\Users\\Konstantinos\\New folder\\hello.png") window.create_image(0,0, anchor = NW, image=image) window.mainloop() The error when i run the program File "C:\Programms\Lib\tkinter\__init__.py", line 2832, in _create return self.tk.getint(self.tk.call( ^^^^^^^^^^^^^ _tkinter.TclError: image "pyimage2" doesn't exist The error when i debug the program Exception has occurred: TclError image "pyimage2" doesn't exist File "C:\Users\Konstantinos\New folder\demo.py", line 9, in <module> window.create_image(0,0, anchor = NW, image=image) So basically, the program opens an image multiple times. When th program is not in a loop it works but when i put it in a loop it gives me the error. Because i recently started programming i dont really know how to solve the problem and I have looked in other threads with the similar problem but none apply to me. I will appreciate any answer A: The error probably comes from multiple Tk instances. Try removing the for-loop and then it will work. But if your intention was for multiple windows, then you can look into this answer: https://stackoverflow.com/a/36316105/9983213. Feel free to tinker around with the example. A smaller example is: import tkinter as tk root = tk.Tk() for i in range(5): top = tk.Toplevel(root) root.mainloop()
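A sketch of the multi-window version built around that idea: a single Tk root so every image belongs to the same interpreter (which is what the "pyimage2 doesn't exist" error is really about), one Toplevel per window, and a list that keeps a reference to every PhotoImage so the images are not garbage-collected; the file path is the one from the question:
import tkinter as tk

root = tk.Tk()
root.withdraw()  # hide the bare root window, only the Toplevels are shown
images = []      # keep references so the PhotoImages stay alive

for i in range(10):
    top = tk.Toplevel(root)
    top.title("add image")
    canvas = tk.Canvas(top, width=600, height=600)
    canvas.pack()
    img = tk.PhotoImage(master=top, file=r"C:\Users\Konstantinos\New folder\hello.png")
    images.append(img)
    canvas.create_image(0, 0, anchor=tk.NW, image=img)

root.mainloop()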
Tkinter: 'image ''pyimage2'' doesn't exist'?
My full code from tkinter import * i=0 for i in range(10) : window = Tk() window.title('add image') window = Canvas(window,width= 600, height= 600) window.pack() image=PhotoImage(file=r"C:\\Users\\Konstantinos\\New folder\\hello.png") window.create_image(0,0, anchor = NW, image=image) window.mainloop() The error when i run the program File "C:\Programms\Lib\tkinter\__init__.py", line 2832, in _create return self.tk.getint(self.tk.call( ^^^^^^^^^^^^^ _tkinter.TclError: image "pyimage2" doesn't exist The error when i debug the program Exception has occurred: TclError image "pyimage2" doesn't exist File "C:\Users\Konstantinos\New folder\demo.py", line 9, in <module> window.create_image(0,0, anchor = NW, image=image) So basically, the program opens an image multiple times. When th program is not in a loop it works but when i put it in a loop it gives me the error. Because i recently started programming i dont really know how to solve the problem and I have looked in other threads with the similar problem but none apply to me. I will appreciate any answer
[ "The error probably comes from multiple Tk instances. Try removing the for-loop and then it will work. But if your intention was for multiple windows, then you can look into this answer: https://stackoverflow.com/a/36316105/9983213. Feel free to tinker around with the example.\nA smaller example is:\nimport tkinter as tk\n\nroot = tk.Tk()\nfor i in range(5):\n top = tk.Toplevel(root)\n\nroot.mainloop()\n\n" ]
[ 2 ]
[]
[]
[ "python", "python_3.x", "tkinter" ]
stackoverflow_0074615153_python_python_3.x_tkinter.txt
Q: Finding a sequence in a dataFrame I'm and amateur working on a project and I need a hand. I need to find a sequence of 5 numbers in order that are consequtive inside a dataframe with only 1 column. They are also in different dataframes. Dataframe b contains about 5000 numbers. E.g. dataframe a = 1.0, 0.0, -2.3, 0.0, 0.3 Dataframe b = 2.0, 1.5, -3.0 ,0.0... I know dataframe a can be found in dataframe b but I'm not sure how to search for it so that they come up in the correct order. Looking for when the numbers are found it prints the indexes they were found on plus any other occurrences in the list if it comes up, if there are more that one. Any help you be greatly appreciated. Tried a few if/while statements but not quite sure how to phrase it. A: As I understand you are looking for specific subseries in a series. To illustrate different solutions consider an example import numpy as np import pandas as pd NVALUES = 10 NSEGMENT = 3 ISTART = 2 dfb = pd.DataFrame(dict(values=np.random.rand(NVALUES))) dfa = pd.DataFrame(dict(values=range(NSEGMENT))) dfb[ISTART:ISTART + NSEGMENT] = dfa.values We have a dataframe dfb with random values and a segment dfa. And dfb is manipulated so that the NSEGMENT consecutive values in dfb starting from ISTART are copied from dfa. Now we need a function that given dfa and dfb would return ISTART. Here is a straightforward solution. Solution A. def find_sequence(dfa, dfb): nsegment = len(dfa) for i in range(len(dfb) - nsegment): if (dfb[i:i+nsegment] - dfa.values == 0).all().values[0]: return i This function returns ISTART or None if subseries equal to dfa is not found. In our example find_sequence(dfa, dfb) returns 2. There are several issues with this solution but the general idea should be clear. Solution B. A vectorized solution is less straightforward but avoids explicit iteration which is considered a good thing. Turn the dataframe b into a matrix matrix = dfb for i in range(len(dfa)-1): matrix = matrix.join(dfb['values'].shift(-i - 1).rename(i)) Now the matrix would look like a dataframe where each row starts with an element of dfb and continues with consecutive elements of dfb directly following the first element in a row. In our example the matrix might look like this: values 0 1 0 0.271286 0.355040 0.315251 1 0.355040 0.315251 0.000000 2 0.315251 0.000000 1.000000 3 0.000000 1.000000 2.000000 ... 8 0.305554 0.105416 NaN 9 0.105416 NaN NaN Now you can compare the rows to the values from dfa with a single call to apply(). The following would return all the indices of the elements in dfb where the following NSEGMENT elements match the ones in dfa. dfb.index[matrix.apply(lambda x: (x.values - np.transpose(dfa.values) == 0).all(), axis=1)] In our case this would return Int64Index([2], dtype='int64')
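A more compact alternative for the search itself, using numpy's sliding windows (needs numpy 1.20 or newer); the comparison below is exact, so np.isclose would be the safer choice for real floating-point data:
import numpy as np
import pandas as pd

dfa = pd.DataFrame({"values": [1.0, 0.0, -2.3, 0.0, 0.3]})
dfb = pd.DataFrame({"values": [2.0, 1.5, 1.0, 0.0, -2.3, 0.0, 0.3, -3.0]})

a = dfa["values"].to_numpy()
b = dfb["values"].to_numpy()

# One row per possible starting position, each row holding len(a) consecutive values
windows = np.lib.stride_tricks.sliding_window_view(b, len(a))
starts = np.where((windows == a).all(axis=1))[0]
print(starts)  # every index in dfb where the sequence begins, here [2]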
Finding a sequence in a dataFrame
I'm and amateur working on a project and I need a hand. I need to find a sequence of 5 numbers in order that are consequtive inside a dataframe with only 1 column. They are also in different dataframes. Dataframe b contains about 5000 numbers. E.g. dataframe a = 1.0, 0.0, -2.3, 0.0, 0.3 Dataframe b = 2.0, 1.5, -3.0 ,0.0... I know dataframe a can be found in dataframe b but I'm not sure how to search for it so that they come up in the correct order. Looking for when the numbers are found it prints the indexes they were found on plus any other occurrences in the list if it comes up, if there are more that one. Any help you be greatly appreciated. Tried a few if/while statements but not quite sure how to phrase it.
[ "As I understand you are looking for specific subseries in a series.\nTo illustrate different solutions consider an example\nimport numpy as np\nimport pandas as pd\n\nNVALUES = 10\nNSEGMENT = 3\nISTART = 2\n\ndfb = pd.DataFrame(dict(values=np.random.rand(NVALUES)))\ndfa = pd.DataFrame(dict(values=range(NSEGMENT)))\ndfb[ISTART:ISTART + NSEGMENT] = dfa.values\n\nWe have a dataframe dfb with random values and a segment dfa. And dfb is manipulated so that the NSEGMENT consecutive values in dfb starting from ISTART are copied from dfa.\nNow we need a function that given dfa and dfb would return ISTART.\nHere is a straightforward solution.\nSolution A.\ndef find_sequence(dfa, dfb):\n nsegment = len(dfa) \n for i in range(len(dfb) - nsegment):\n if (dfb[i:i+nsegment] - dfa.values == 0).all().values[0]:\n return i\n\nThis function returns ISTART or None if subseries equal to dfa is not found.\nIn our example find_sequence(dfa, dfb) returns 2.\nThere are several issues with this solution but the general idea should be clear.\nSolution B.\nA vectorized solution is less straightforward but avoids explicit iteration which is considered a good thing.\nTurn the dataframe b into a matrix\nmatrix = dfb\nfor i in range(len(dfa)-1):\n matrix = matrix.join(dfb['values'].shift(-i - 1).rename(i))\n\nNow the matrix would look like a dataframe where each row starts with an element of dfb and continues with consecutive elements of dfb directly following the first element in a row.\nIn our example the matrix might look like this:\n values 0 1\n0 0.271286 0.355040 0.315251\n1 0.355040 0.315251 0.000000\n2 0.315251 0.000000 1.000000\n3 0.000000 1.000000 2.000000\n...\n8 0.305554 0.105416 NaN\n9 0.105416 NaN NaN\n\nNow you can compare the rows to the values from dfa with a single call to apply(). The following would return all the indices of the elements in dfb where the following NSEGMENT elements match the ones in dfa.\ndfb.index[matrix.apply(lambda x: \n (x.values - np.transpose(dfa.values) == 0).all(), axis=1)]\n\nIn our case this would return\nInt64Index([2], dtype='int64')\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "python" ]
stackoverflow_0074615187_dataframe_python.txt
Q: Vectorized version of pandas Series.str.find The Series.str.find() function in pandas seems to take only a single integer for the start location. I have a Series containing strings and an array of start positions, and I want to find the position of a given substring starting from the corresponding position of each element as follows: a = pd.Series(data=['aaba', 'ababc', 'caaauuab']) a.str.find('b', start=[0, 1, 2]) # returns a series of NaNs I can do this using list comprehension: [s.find('b', pos) for s, pos in zip(a.values, [0, 1, 2])] Is there a function in numpy or pandas that can do this directly and faster? Also, is there one that can take an array of substrings as well? A: I think this is kind of more pythonic way to do so, because you do not have to worry about indexes: import pandas as pd def find_from_index(series: pd.Series, to_find: str) -> pd.Series: return pd.Series([v.find(to_find, i) for i, v in enumerate(series)]) a = pd.Series(data=['aaba', 'ababc', 'cbaauuab']) b = find_from_index(a, 'b') Hope this helps A: No there isn't, vectorizing string operations is difficult. You could think to convert your strings to arrays of characters, but the conversion would be the limiting step. A quick test tells me that it is roughly taking the same time than performing the list comprehension provided in your question. And we haven't even searched for the position yet. In short, your current approach seems reasonably the most efficient.
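One nuance of the enumerate-based helper above is that it ties each start offset to the row's position in the series; a small generalisation that takes an arbitrary array of start positions, which is what the question asks for (the expected output in the comment is for the sample series):
import pandas as pd

def find_from(series: pd.Series, to_find: str, starts) -> pd.Series:
    # starts is aligned with the series by position, one offset per string
    return pd.Series(
        [s.find(to_find, pos) for s, pos in zip(series, starts)],
        index=series.index,
    )

a = pd.Series(["aaba", "ababc", "caaauuab"])
print(find_from(a, "b", [0, 1, 2]))  # 2, 1, 7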
Vectorized version of pandas Series.str.find
The Series.str.find() function in pandas seems to take only a single integer for the start location. I have a Series containing strings and an array of start positions, and I want to find the position of a given substring starting from the corresponding position of each element as follows: a = pd.Series(data=['aaba', 'ababc', 'caaauuab']) a.str.find('b', start=[0, 1, 2]) # returns a series of NaNs I can do this using list comprehension: [s.find('b', pos) for s, pos in zip(a.values, [0, 1, 2])] Is there a function in numpy or pandas that can do this directly and faster? Also, is there one that can take an array of substrings as well?
[ "I think this is kind of more pythonic way to do so, because you do not have to worry about indexes:\nimport pandas as pd\n\ndef find_from_index(series: pd.Series, to_find: str) -> pd.Series:\n return pd.Series([v.find(to_find, i) for i, v in enumerate(series)])\n\na = pd.Series(data=['aaba', 'ababc', 'cbaauuab'])\nb = find_from_index(a, 'b')\n\nHope this helps\n", "No there isn't, vectorizing string operations is difficult.\nYou could think to convert your strings to arrays of characters, but the conversion would be the limiting step. A quick test tells me that it is roughly taking the same time than performing the list comprehension provided in your question. And we haven't even searched for the position yet.\nIn short, your current approach seems reasonably the most efficient.\n" ]
[ 0, 0 ]
[]
[]
[ "pandas", "python", "string" ]
stackoverflow_0074615095_pandas_python_string.txt
Q: Pandas groupby doesn't take UTC time into account I have a data frame of several days, that looks something like: df = col1 date 2022-10-31 23:00:00 89.088556 2022-11-01 00:00:00 91.356805 2022-11-01 01:00:00 43.188002 2022-11-01 02:00:00 40.386937 2022-11-01 03:00:00 38.045470 ... ... 2022-11-28 18:00:00 320.695662 2022-11-28 19:00:00 289.392580 2022-11-28 20:00:00 266.770852 2022-11-28 21:00:00 258.787157 2022-11-28 22:00:00 238.077054 So hourly interval in UTC. Now, what I would like to do is just do a mean for each day. However, if I do something like df.groupby(df.index.day) the resulting df for one of the days looks something like this: df_day1 = col1 date 2022-11-01 00:00:00 91.356805 2022-11-01 01:00:00 43.188002 2022-11-01 02:00:00 40.386937 2022-11-01 03:00:00 38.045470 2022-11-01 04:00:00 38.063055 2022-11-01 05:00:00 44.420651 2022-11-01 06:00:00 99.115480 2022-11-01 07:00:00 105.455595 2022-11-01 08:00:00 97.385403 2022-11-01 09:00:00 88.105182 2022-11-01 10:00:00 82.650731 2022-11-01 11:00:00 79.717211 2022-11-01 12:00:00 78.173303 2022-11-01 13:00:00 72.926578 2022-11-01 14:00:00 77.644380 2022-11-01 15:00:00 95.185876 2022-11-01 16:00:00 110.620416 2022-11-01 17:00:00 124.516274 2022-11-01 18:00:00 117.191289 2022-11-01 19:00:00 102.931563 2022-11-01 20:00:00 96.657752 2022-11-01 21:00:00 93.358915 2022-11-01 22:00:00 92.043226 2022-11-02 23:00:00 56.089820 As such it seems okay. But since my date is in UTC, the correct day is actually from 23-22, and not 00-23. In my case this makes a difference. I've tried to convert to localize to UTC, convert to CET and stuff like that. But it always shifts the hours used for the correct day, so that it is in fact not the right hours used for each day. Am I doing something wrong ? A: You may use localization inside group by. This way your group will contain all the times for 1st day and 23:00 for the 31st day, same as in you first table reference. df.groupby(df.index.tz_localize("UTC").tz_convert("Europe/Copenhagen").day).get_group(1) Btw, a snippet to reproduce your situation: import pandas as pd import numpy as np rng = pd.date_range("2022-10-31 23:00:00", periods=28*24, freq='H') df = pd.DataFrame({'col1':np.random.randn(len(rng))}, index=rng) UPD: Another solution escaping time zones would be assembling groups based on a function: group_counter = 0 def GroupFunc(ind): global group_counter if ind.hour == 23: group_counter += 1 return group_counter df.groupby(lambda x: GroupFunc(x)) Here .groupby() will iterate through the whole DataFrame and each time it finds 23:00 it will create a new group. This means each group will be 23 to 22. ⚠ in this case the original DataFrame should be sorted ascending. Doing .groupby() with custom functions has a rare use, is considered a bit difficult and it's difficult to understand and maintain without comments. But! You don't have to care about time zones anymore. And in my case the custom function approach worked 4 times faster.
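Building on the first snippet, the daily means themselves can be taken in one go; and since the November dates in the question fall entirely in winter time, where CET is a fixed UTC+1, simply shifting the index by one hour gives the same 23:00-to-22:00 grouping without any time-zone handling. A sketch of both:
import numpy as np
import pandas as pd

rng = pd.date_range("2022-10-31 23:00:00", periods=28 * 24, freq="H")
df = pd.DataFrame({"col1": np.random.randn(len(rng))}, index=rng)

# Treat the naive UTC stamps as CET days and average per local calendar date
local_days = df.index.tz_localize("UTC").tz_convert("Europe/Copenhagen").date
daily_mean = df.groupby(local_days).mean()

# Equivalent for a fixed UTC+1 offset: push every stamp forward one hour, group by date
daily_mean_shifted = df.groupby((df.index + pd.Timedelta(hours=1)).date).mean()
print(daily_mean.head())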
Pandas groupby doesn't take UTC time into account
I have a data frame of several days, that looks something like: df = col1 date 2022-10-31 23:00:00 89.088556 2022-11-01 00:00:00 91.356805 2022-11-01 01:00:00 43.188002 2022-11-01 02:00:00 40.386937 2022-11-01 03:00:00 38.045470 ... ... 2022-11-28 18:00:00 320.695662 2022-11-28 19:00:00 289.392580 2022-11-28 20:00:00 266.770852 2022-11-28 21:00:00 258.787157 2022-11-28 22:00:00 238.077054 So hourly interval in UTC. Now, what I would like to do is just do a mean for each day. However, if I do something like df.groupby(df.index.day) the resulting df for one of the days looks something like this: df_day1 = col1 date 2022-11-01 00:00:00 91.356805 2022-11-01 01:00:00 43.188002 2022-11-01 02:00:00 40.386937 2022-11-01 03:00:00 38.045470 2022-11-01 04:00:00 38.063055 2022-11-01 05:00:00 44.420651 2022-11-01 06:00:00 99.115480 2022-11-01 07:00:00 105.455595 2022-11-01 08:00:00 97.385403 2022-11-01 09:00:00 88.105182 2022-11-01 10:00:00 82.650731 2022-11-01 11:00:00 79.717211 2022-11-01 12:00:00 78.173303 2022-11-01 13:00:00 72.926578 2022-11-01 14:00:00 77.644380 2022-11-01 15:00:00 95.185876 2022-11-01 16:00:00 110.620416 2022-11-01 17:00:00 124.516274 2022-11-01 18:00:00 117.191289 2022-11-01 19:00:00 102.931563 2022-11-01 20:00:00 96.657752 2022-11-01 21:00:00 93.358915 2022-11-01 22:00:00 92.043226 2022-11-02 23:00:00 56.089820 As such it seems okay. But since my date is in UTC, the correct day is actually from 23-22, and not 00-23. In my case this makes a difference. I've tried to convert to localize to UTC, convert to CET and stuff like that. But it always shifts the hours used for the correct day, so that it is in fact not the right hours used for each day. Am I doing something wrong ?
[ "You may use localization inside group by. This way your group will contain all the times for 1st day and 23:00 for the 31st day, same as in you first table reference.\ndf.groupby(df.index.tz_localize(\"UTC\").tz_convert(\"Europe/Copenhagen\").day).get_group(1)\n\nBtw, a snippet to reproduce your situation:\nimport pandas as pd\nimport numpy as np\nrng = pd.date_range(\"2022-10-31 23:00:00\", periods=28*24, freq='H')\ndf = pd.DataFrame({'col1':np.random.randn(len(rng))}, index=rng)\n\nUPD:\nAnother solution escaping time zones would be assembling groups based on a function:\ngroup_counter = 0\ndef GroupFunc(ind):\n global group_counter\n if ind.hour == 23:\n group_counter += 1\n return group_counter\ndf.groupby(lambda x: GroupFunc(x))\n\nHere .groupby() will iterate through the whole DataFrame and each time it finds 23:00 it will create a new group. This means each group will be 23 to 22.\n⚠ in this case the original DataFrame should be sorted ascending.\nDoing .groupby() with custom functions has a rare use, is considered a bit difficult and it's difficult to understand and maintain without comments.\nBut! You don't have to care about time zones anymore. And in my case the custom function approach worked 4 times faster. \n" ]
[ 2 ]
[]
[]
[ "datetime", "group_by", "pandas", "python" ]
stackoverflow_0074614438_datetime_group_by_pandas_python.txt
Q: Adding gridlines to each subplot pie chart in matplotlib I have a 2d array with 8 sub arrays. I want to plot each of the array in pie charts as shown: fig, axes = plt.subplots(4, 2,figsize=(15, 15)) axes[0,0].pie(counts_list[0]) axes[0,1].pie(counts_list[1]) axes[1,0].pie(counts_list[2]) axes[1,1].pie(counts_list[3]) axes[2,0].pie(counts_list[4]) axes[2,1].pie(counts_list[5]) axes[3,0].pie(counts_list[6]) axes[3,1].pie(counts_list[7]) plt.show() What I have achieved with this code: Now I have tried every possible answer solution from SO and the internet, added grids like axes[0][0].grid() to each, added plt.rcParams['axes.grid'] = True but for some reason the grid lines are not coming up on the pie charts. How do I solve this? Any guidance in the correct direction is appreciated. Thanks. Also if the code for the arrays is required, please let me know in the comments. I have plotted the actual pie charts, just need to add the gridlines and the color scheme. A: The following should work: from matplotlib import pyplot as plt fig, ax = plt.subplots() data = [32, 45, 67, 12, 1] ax.pie(data) # turn on frame ax.set_frame_on(b=True) # create locations of grid points xrange = ax.get_xlim() ngrids = 7 dx = (xrange[1] - xrange[0]) / ngrids gridvals = [xrange[0] + (dx / 2) + i * dx for i in range(ngrids)] # set grid location ax.set_yticks(gridvals, minor=False) ax.set_xticks(gridvals, minor=False) # set tick labels (to be from 1 to 7) ax.set_xticklabels([f"{i}" for i in range(1, ngrids + 1)]) ax.set_yticklabels([f"{i}" for i in range(1, ngrids + 1)]) # turn grid on ax.yaxis.grid(True, which="major") ax.xaxis.grid(True, which="major") To use the Blues color map, see the example here.
Adding gridlines to each subplot pie chart in matplotlib
I have a 2d array with 8 sub arrays. I want to plot each of the array in pie charts as shown: fig, axes = plt.subplots(4, 2,figsize=(15, 15)) axes[0,0].pie(counts_list[0]) axes[0,1].pie(counts_list[1]) axes[1,0].pie(counts_list[2]) axes[1,1].pie(counts_list[3]) axes[2,0].pie(counts_list[4]) axes[2,1].pie(counts_list[5]) axes[3,0].pie(counts_list[6]) axes[3,1].pie(counts_list[7]) plt.show() What I have achieved with this code: Now I have tried every possible answer solution from SO and the internet, added grids like axes[0][0].grid() to each, added plt.rcParams['axes.grid'] = True but for some reason the grid lines are not coming up on the pie charts. How do I solve this? Any guidance in the correct direction is appreciated. Thanks. Also if the code for the arrays is required, please let me know in the comments. I have plotted the actual pie charts, just need to add the gridlines and the color scheme.
[ "The following should work:\nfrom matplotlib import pyplot as plt\n\nfig, ax = plt.subplots()\n\ndata = [32, 45, 67, 12, 1]\n\nax.pie(data)\n\n# turn on frame\nax.set_frame_on(b=True)\n\n# create locations of grid points\nxrange = ax.get_xlim()\nngrids = 7\ndx = (xrange[1] - xrange[0]) / ngrids\ngridvals = [xrange[0] + (dx / 2) + i * dx for i in range(ngrids)]\n\n# set grid location\nax.set_yticks(gridvals, minor=False)\nax.set_xticks(gridvals, minor=False)\n\n# set tick labels (to be from 1 to 7)\nax.set_xticklabels([f\"{i}\" for i in range(1, ngrids + 1)])\nax.set_yticklabels([f\"{i}\" for i in range(1, ngrids + 1)])\n\n# turn grid on\nax.yaxis.grid(True, which=\"major\")\nax.xaxis.grid(True, which=\"major\")\n\nTo use the Blues color map, see the example here.\n" ]
[ 1 ]
[]
[]
[ "matplotlib", "numpy", "python" ]
stackoverflow_0074615257_matplotlib_numpy_python.txt
Q: Selenium Python: How to set_page_load_timeout, if beyond time return except(not error) I have this code and how to loop if load timeout it will return except and it run next test case def search_action(self, xpath, value): try: self.driver.set_page_load_timeout(1) element = self.driver.find_element(By.XPATH, xpath) element.send_keys(value) element.send_keys(Keys.ENTER) except TimeoutException as e: print('EXCEPT', e) pass print('success') se = class_name() for domain in config['list_domain']: se.get_domain(domain) for i in range(1, 100): xpath = config[domain]['list_test']['search'] se.search_action(xpath, i) se.get_domain(domain) return: selenium.common.exceptions.TimeoutException: Message: timeout: Timed out receiving message from renderer: 0.319 I want loop from 1 to 100 i don't want return error A: I guess your problem is not with set_page_load_timeout. You need to use WebDriverWait expected_conditions to wait for element to become clickable. wait.until(EC.element_to_be_clickable((By.XPATH, xpath))) Since you did not share link and xpath details I can't give more detailed answer here.
Selenium Python: How to set_page_load_timeout, if beyond time return except(not error)
I have the code below. How can I make the loop continue when a page load times out, so the timeout is handled in the except branch (not raised as an error) and the next test case runs?
def search_action(self, xpath, value):
    try:
        self.driver.set_page_load_timeout(1)
        element = self.driver.find_element(By.XPATH, xpath)
        element.send_keys(value)
        element.send_keys(Keys.ENTER)
    except TimeoutException as e:
        print('EXCEPT', e)
        pass
    print('success')


se = class_name()
for domain in config['list_domain']:
    se.get_domain(domain)
    for i in range(1, 100):
        xpath = config[domain]['list_test']['search']
        se.search_action(xpath, i)
        se.get_domain(domain)

return:
selenium.common.exceptions.TimeoutException: Message: timeout: Timed out receiving message from renderer: 0.319

I want to loop from 1 to 100 without the error being raised.
[ "I guess your problem is not with set_page_load_timeout.\nYou need to use WebDriverWait expected_conditions to wait for element to become clickable.\nwait.until(EC.element_to_be_clickable((By.XPATH, xpath)))\n\nSince you did not share link and xpath details I can't give more detailed answer here.\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x", "selenium", "webdriverwait", "xpath" ]
stackoverflow_0074615454_python_python_3.x_selenium_webdriverwait_xpath.txt
Q: Import error when calling "import tensorflow" in conda environment I am trying to import TensorFlow using my Conda environment. I received the ImportError message below. I tried to solve it by creating a new environment, installing TensorFlow 2, and trying with this new environment, but the error still appeared. It was work yesterday, and I don't know what is happen today. **>>> import tensorflow** Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\Saja\AppData\Roaming\Python\Python38\site-packages\tensorflow\__init__.py", line 51, in <module> from ._api.v2 import compat File "C:\Users\Saja\AppData\Roaming\Python\Python38\site-packages\tensorflow\_api\v2\compat\__init__.py", line 37, in <module> from . import v1 File "C:\Users\Saja\AppData\Roaming\Python\Python38\site-packages\tensorflow\_api\v2\compat\v1\__init__.py", line 30, in <module> from . import compat File "C:\Users\Saja\AppData\Roaming\Python\Python38\site-packages\tensorflow\_api\v2\compat\v1\compat\__init__.py", line 37, in <module> from . import v1 File "C:\Users\Saja\AppData\Roaming\Python\Python38\site-packages\tensorflow\_api\v2\compat\v1\compat\v1\__init__.py", line 52, in <module> from tensorflow._api.v2.compat.v1 import math File "C:\Users\Saja\AppData\Roaming\Python\Python38\site-packages\tensorflow\_api\v2\compat\v1\math\__init__.py", line 109, in <module> from tensorflow.python.ops.gen_math_ops import segment_sum_v2 **ImportError:** cannot import name 'segment_sum_v2' from 'tensorflow.python.ops.gen_math_ops' (C:\Users\Saja\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\ops\gen_math_ops.py) A: I faced the same issue a couple of days back. I would suggest you to download Anaconda 2020.02 for a seamlessly smooth experience with TensorFlow 2. After installation; You can execute the following instructions and command to solve the issue: (do not include inverted commas) Open Anaconda Prompt type: create --name py3-TF2.0 python = 3 press y to continue After successful completion: type conda activate py3-TF2.0 type pip install matplotlib type pip install tensorflow==2.3.0 type conda deactivate type pip install ipykernel type conda install nb_conda_kernels Now exit the Anaconda prompt and open the Anaconda Navigator application. Switch to py3-TF2.0 from the base environment using the Environment tab. Install Jupyter in the specified environment. After successful installation, open Jupyter notebook and switch to py3-TF2.0 kernel using Kernel Tab. You're good to go for using TensorFlow 2.
Import error when calling "import tensorflow" in conda environment
I am trying to import TensorFlow using my Conda environment. I received the ImportError message below. I tried to solve it by creating a new environment, installing TensorFlow 2, and trying with this new environment, but the error still appeared. It was work yesterday, and I don't know what is happen today. **>>> import tensorflow** Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\Saja\AppData\Roaming\Python\Python38\site-packages\tensorflow\__init__.py", line 51, in <module> from ._api.v2 import compat File "C:\Users\Saja\AppData\Roaming\Python\Python38\site-packages\tensorflow\_api\v2\compat\__init__.py", line 37, in <module> from . import v1 File "C:\Users\Saja\AppData\Roaming\Python\Python38\site-packages\tensorflow\_api\v2\compat\v1\__init__.py", line 30, in <module> from . import compat File "C:\Users\Saja\AppData\Roaming\Python\Python38\site-packages\tensorflow\_api\v2\compat\v1\compat\__init__.py", line 37, in <module> from . import v1 File "C:\Users\Saja\AppData\Roaming\Python\Python38\site-packages\tensorflow\_api\v2\compat\v1\compat\v1\__init__.py", line 52, in <module> from tensorflow._api.v2.compat.v1 import math File "C:\Users\Saja\AppData\Roaming\Python\Python38\site-packages\tensorflow\_api\v2\compat\v1\math\__init__.py", line 109, in <module> from tensorflow.python.ops.gen_math_ops import segment_sum_v2 **ImportError:** cannot import name 'segment_sum_v2' from 'tensorflow.python.ops.gen_math_ops' (C:\Users\Saja\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\ops\gen_math_ops.py)
[ "I faced the same issue a couple of days back. I would suggest you to download Anaconda 2020.02 for a seamlessly smooth experience with TensorFlow 2.\nAfter installation;\nYou can execute the following instructions and command to solve the issue: (do not include inverted commas)\nOpen Anaconda Prompt\ntype: create --name py3-TF2.0 python = 3\npress y to continue After successful completion:\ntype conda activate py3-TF2.0\ntype pip install matplotlib\ntype pip install tensorflow==2.3.0\ntype conda deactivate\ntype pip install ipykernel\ntype conda install nb_conda_kernels\nNow exit the Anaconda prompt and open the Anaconda Navigator application. Switch to py3-TF2.0 from the base environment using the Environment tab. Install Jupyter in the specified environment. After successful installation, open Jupyter notebook and switch to py3-TF2.0 kernel using Kernel Tab. You're good to go for using TensorFlow 2.\n" ]
[ 0 ]
[]
[]
[ "anaconda3", "python", "tensorflow2.0" ]
stackoverflow_0074615519_anaconda3_python_tensorflow2.0.txt
Q: How can I split a list of dictionaries in separate lists of dictionaries based on some condition? I am new to python, and I am trying to split a list of dictionaries into separate lists of dictionaries based on some condition. This is how my list looks like this: [{'username': 'AnastasiadesCY', 'created_at': '2020-12-02 18:58:16', 'id': 1.33421029132062e+18, 'language': 'en', 'contenttype': 'text/plain', 'content': 'Pleased to participate to the international conference in support of the Lebanese people. Cypriot citizens, together with the Government , have provided significant quantities of material assistance, from the day of the explosion until today.\n\n#Lebanon '}, {'username': 'AnastasiadesCY', 'created_at': '2020-11-19 18:13:06', 'id': 1.32948788307022e+18, 'language': 'en', 'contenttype': 'text/plain', 'content': '#Cyprus stand ready to support all efforts towards a coordinated approach of vaccination strategies across Europe, that will prove instrumental in our fight against the pandemic.\n\nUnited Against #COVID19 \n\n#EUCO'},... I would like to split and group all list's elements that have the same username into separate lists of dictionaries. The elements of the list - so each dictionary - are ordered by username. Is there a way to loop over the dictionaries and append each element to a list until username in "item 1" is equal to username in "item 1 + 1" and so on? Thank you for your help! A: A better would be to create a dictionary with username as key and value as list of user attributes op = defauldict(list) for user_dic in list_of_userdictss: op[user_dic.pop('username')].append(user_dic) op = OrderedDict(sorted(user_dic.items())) A: Finding the same thing works the best if we sort the list by it - then all the same names are next to each other. But even after sorting, we don't need to do such things manually - there are already tools for that. :) - itertools.groupby documentation and a nice explanation how it works from itertools import groupby from operator import itemgetter my_list.sort(key=itemgetter("username")) result = {} for username, group in groupby(my_list, key=itemgetter("username")): result[username] = list(group) result is a dict with usernames as keys If you want a list-of-lists, do result = [] and then result.append(list(group)) instead. A: list of dict to separate lists data = pd.DataFrame(your_list_of_dict) username_list = data.username.values.tolist()
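A small self-contained version of the defaultdict idea from the first answer, with made-up records standing in for the tweet data; it keeps every field and yields one list per username.

from collections import defaultdict

records = [  # hypothetical stand-ins for the tweet dictionaries
    {'username': 'AnastasiadesCY', 'id': 1, 'content': 'first'},
    {'username': 'AnastasiadesCY', 'id': 2, 'content': 'second'},
    {'username': 'OtherUser', 'id': 3, 'content': 'third'},
]

grouped = defaultdict(list)
for record in records:
    grouped[record['username']].append(record)   # group by username, keep all fields

lists_per_user = list(grouped.values())          # list of lists, one per username
print(lists_per_user)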
How can I split a list of dictionaries in separate lists of dictionaries based on some condition?
I am new to python, and I am trying to split a list of dictionaries into separate lists of dictionaries based on some condition. This is how my list looks like this: [{'username': 'AnastasiadesCY', 'created_at': '2020-12-02 18:58:16', 'id': 1.33421029132062e+18, 'language': 'en', 'contenttype': 'text/plain', 'content': 'Pleased to participate to the international conference in support of the Lebanese people. Cypriot citizens, together with the Government , have provided significant quantities of material assistance, from the day of the explosion until today.\n\n#Lebanon '}, {'username': 'AnastasiadesCY', 'created_at': '2020-11-19 18:13:06', 'id': 1.32948788307022e+18, 'language': 'en', 'contenttype': 'text/plain', 'content': '#Cyprus stand ready to support all efforts towards a coordinated approach of vaccination strategies across Europe, that will prove instrumental in our fight against the pandemic.\n\nUnited Against #COVID19 \n\n#EUCO'},... I would like to split and group all list's elements that have the same username into separate lists of dictionaries. The elements of the list - so each dictionary - are ordered by username. Is there a way to loop over the dictionaries and append each element to a list until username in "item 1" is equal to username in "item 1 + 1" and so on? Thank you for your help!
[ "A better would be to create a dictionary with username as key and value as list of user attributes\nop = defauldict(list)\nfor user_dic in list_of_userdictss:\n op[user_dic.pop('username')].append(user_dic)\nop = OrderedDict(sorted(user_dic.items()))\n\n", "Finding the same thing works the best if we sort the list by it - then all the same names are next to each other.\nBut even after sorting, we don't need to do such things manually - there are already tools for that. :) - itertools.groupby documentation and a nice explanation how it works\nfrom itertools import groupby\nfrom operator import itemgetter\n\nmy_list.sort(key=itemgetter(\"username\"))\nresult = {}\nfor username, group in groupby(my_list, key=itemgetter(\"username\")):\n result[username] = list(group)\n\nresult is a dict with usernames as keys\nIf you want a list-of-lists, do result = [] and then result.append(list(group)) instead.\n", "list of dict to separate lists\ndata = pd.DataFrame(your_list_of_dict)\n\nusername_list = data.username.values.tolist()\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "dictionary", "if_statement", "list", "loops", "python" ]
stackoverflow_0066258105_dictionary_if_statement_list_loops_python.txt
Q: How to run async function in FastAPI only once at a time? I'm new to FastAPI and asyncio and don't know how to realise this. I have an endpoint, when called, it should start an AI prediction that takes about 40s in the background. async def asyncio_test_prediction(): print("Starting asnycio func") time.sleep(30); print("Stopping asyncio func") @app.get('/sempos/start') async def start_prediction(): asyncio.ensure_future(asyncio_test_prediction()) return { "state": "Started" } This is the only way I managed for it to work. The EP function doesn't really need to be async but without it it doesn't work. Now, I also want to ensure that "asyncio_test_prediction" is only called when it's not already running. I read that this is possible with task.done() but I'm not sure how to initialize and store it. Or is there a better option? Thanks for your help.
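A sketch of the task.done() idea mentioned in the question: keep the Task in a module-level variable and refuse to start a second one while it is still running. The blocking time.sleep is replaced by asyncio.sleep here as a placeholder; real CPU-bound prediction work would need a thread or process executor instead.

import asyncio
from fastapi import FastAPI

app = FastAPI()
prediction_task = None  # the currently running prediction, if any

async def asyncio_test_prediction():
    print("Starting asyncio func")
    await asyncio.sleep(30)   # placeholder for the ~40 s prediction
    print("Stopping asyncio func")

@app.get('/sempos/start')
async def start_prediction():
    global prediction_task
    if prediction_task is not None and not prediction_task.done():
        return {"state": "Already running"}
    prediction_task = asyncio.create_task(asyncio_test_prediction())
    return {"state": "Started"}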
How to run async function in FastAPI only once at a time?
I'm new to FastAPI and asyncio and don't know how to realise this. I have an endpoint, when called, it should start an AI prediction that takes about 40s in the background. async def asyncio_test_prediction(): print("Starting asnycio func") time.sleep(30); print("Stopping asyncio func") @app.get('/sempos/start') async def start_prediction(): asyncio.ensure_future(asyncio_test_prediction()) return { "state": "Started" } This is the only way I managed for it to work. The EP function doesn't really need to be async but without it it doesn't work. Now, I also want to ensure that "asyncio_test_prediction" is only called when it's not already running. I read that this is possible with task.done() but I'm not sure how to initialize and store it. Or is there a better option? Thanks for your help.
[]
[]
[ "I suggest changing the expected behavior of your API.\nHow about having two endpoints:\n\n'/sempos/start' - starts the AI prediction and immediately returns the id of the calculation\n'/sempos/result/{id}' - gives a generic message if calculations aren't done, else gives the calculation results JSON\n\nThis way you can:\n\nLeverage the speed of FastAPI\nUse the flexibility of asyncio\nFit your use case\n\nWhat do you think?\n" ]
[ -1 ]
[ "fastapi", "python", "python_3.x", "python_asyncio" ]
stackoverflow_0074614730_fastapi_python_python_3.x_python_asyncio.txt
Q: Pick the file with the shortest name I want to find the .txt file with the shortest name inside a folder. import glob import os inpDir = "C:/Users/ft/Desktop/Folder" os.chdir(inpDir) for file in glob.glob("*.txt"): l = len(file) For the moment I found the length of the str of the name, how can I return the shortest name? Thanks A: To find the shortest file just compare to the current shortest: chosen_file = "" for file in glob.glob("*.txt"): if chosen_file == "" or len(file) < len(chosen_file): chosen_file = file print(f"{chosen_file} is the shortest file") Once you've finished the loop, the chosen_file str is guaranteed to be the shortest. A: Cleaner to put it in a function and calling it: import glob import os def shortest_file_name(inpDir: str, extension: str) -> str: os.chdir(inpDir) shortest, l = '', 0b100000000 for file in glob.glob(extension): if len(file) < l: l = len(file) shortest = file return shortest inpDir = "C:/Users/ft/Desktop/Folder" min_file_name = shortest_file_name(inpDir, "*.txt") A: min(glob.glob('*.txt'), key=len)
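An equivalent pathlib sketch that compares only the file names (not full paths) and avoids changing the working directory; the guard covers the case where no .txt files exist.

from pathlib import Path

inp_dir = Path("C:/Users/ft/Desktop/Folder")
txt_files = list(inp_dir.glob("*.txt"))
shortest = min(txt_files, key=lambda p: len(p.name)) if txt_files else None
print(shortest.name if shortest else "no .txt files found")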
Pick the file with the shortest name
I want to find the .txt file with the shortest name inside a folder. import glob import os inpDir = "C:/Users/ft/Desktop/Folder" os.chdir(inpDir) for file in glob.glob("*.txt"): l = len(file) For the moment I found the length of the str of the name, how can I return the shortest name? Thanks
[ "To find the shortest file just compare to the current shortest:\nchosen_file = \"\"\n\nfor file in glob.glob(\"*.txt\"):\n if chosen_file == \"\" or len(file) < len(chosen_file):\n chosen_file = file\n\nprint(f\"{chosen_file} is the shortest file\")\n\n\nOnce you've finished the loop, the chosen_file str is guaranteed to be the shortest.\n", "Cleaner to put it in a function and calling it:\n\nimport glob\nimport os\n\ndef shortest_file_name(inpDir: str, extension: str) -> str:\n os.chdir(inpDir)\n shortest, l = '', 0b100000000\n for file in glob.glob(extension):\n if len(file) < l:\n l = len(file)\n shortest = file\n return shortest\n\ninpDir = \"C:/Users/ft/Desktop/Folder\"\nmin_file_name = shortest_file_name(inpDir, \"*.txt\")\n\n", "min(glob.glob('*.txt'), key=len)\n\n" ]
[ 2, 0, 0 ]
[ "min = 1000\n\nfor file in glob.glob(\"*.txt\"):\n if len(file) < min:\n min = len(file)\n name = file\n\n" ]
[ -3 ]
[ "python" ]
stackoverflow_0074615450_python.txt
Q: unittest, assert if one of two values is in a string Assuming a string of "string" I want my test to pass if one of "x" or "y" passes. If both fail - the test fails. This is what I tried: def check_if_x_or_y_in_string(self): self.assertIn("x", "given string") self.assertIn("y", "given string") It keep failing because only one of them can be correct, the other always fails. Appreciate the help. A: assertTrue("x" in s or "y" in s). – MrBean Bremen
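A runnable sketch of the suggested assertTrue form; the sample string is made up, and note the method name must start with test_ for unittest to pick it up.

import unittest

class StringContentTest(unittest.TestCase):
    def test_x_or_y_in_string(self):
        s = "python"                              # contains "y" but not "x"
        self.assertTrue("x" in s or "y" in s)     # passes if either substring is present

if __name__ == "__main__":
    unittest.main()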
unittest, assert if one of two values is in a string
Assuming a string of "string" I want my test to pass if one of "x" or "y" passes. If both fail, the test fails. This is what I tried:
def check_if_x_or_y_in_string(self):
    self.assertIn("x", "given string")
    self.assertIn("y", "given string")

It keeps failing because only one of them can be correct, the other always fails. Appreciate the help.
[ "assertTrue(\"x\" in s or \"y\" in s).\n– MrBean Bremen\n" ]
[ 0 ]
[]
[]
[ "python", "python_unittest", "unit_testing" ]
stackoverflow_0074615333_python_python_unittest_unit_testing.txt
Q: How to read a python tuple using PyYAML? I have the following YAML file named input.yaml: cities: 1: [0,0] 2: [4,0] 3: [0,4] 4: [4,4] 5: [2,2] 6: [6,2] highways: - [1,2] - [1,3] - [1,5] - [2,4] - [3,4] - [5,4] start: 1 end: 4 I'm loading it using PyYAML and printing the result as follows: import yaml f = open("input.yaml", "r") data = yaml.load(f) f.close() print(data) The result is the following data structure: { 'cities': { 1: [0, 0] , 2: [4, 0] , 3: [0, 4] , 4: [4, 4] , 5: [2, 2] , 6: [6, 2] } , 'highways': [ [1, 2] , [1, 3] , [1, 5] , [2, 4] , [3, 4] , [5, 4] ] , 'start': 1 , 'end': 4 } As you can see, each city and highway is represented as a list. However, I want them to be represented as a tuple. Hence, I manually convert them into tuples using comprehensions: import yaml f = open("input.yaml", "r") data = yaml.load(f) f.close() data["cities"] = {k: tuple(v) for k, v in data["cities"].items()} data["highways"] = [tuple(v) for v in data["highways"]] print(data) However, this seems like a hack. Is there some way to instruct PyYAML to directly read them as tuples instead of lists? A: I wouldn't call what you've done hacky for what you are trying to do. Your alternative approach from my understanding is to make use of python-specific tags in your YAML file so it is represented appropriately when loading the yaml file. However, this requires you modifying your yaml file which, if huge, is probably going to be pretty irritating and not ideal. Look at the PyYaml doc that further illustrates this. Ultimately you want to place a !!python/tuple in front of your structure that you want to represented as such. To take your sample data, it would like: YAML FILE: cities: 1: !!python/tuple [0,0] 2: !!python/tuple [4,0] 3: !!python/tuple [0,4] 4: !!python/tuple [4,4] 5: !!python/tuple [2,2] 6: !!python/tuple [6,2] highways: - !!python/tuple [1,2] - !!python/tuple [1,3] - !!python/tuple [1,5] - !!python/tuple [2,4] - !!python/tuple [3,4] - !!python/tuple [5,4] start: 1 end: 4 Sample code: import yaml with open('y.yaml') as f: d = yaml.load(f.read()) print(d) Which will output: {'cities': {1: (0, 0), 2: (4, 0), 3: (0, 4), 4: (4, 4), 5: (2, 2), 6: (6, 2)}, 'start': 1, 'end': 4, 'highways': [(1, 2), (1, 3), (1, 5), (2, 4), (3, 4), (5, 4)]} A: Depending on where your YAML input comes from your "hack" is a good solution, especially if you would use yaml.safe_load() instead of the unsafe yaml.load(). If only the "leaf" sequences in your YAML file need to be tuples you can do ¹: import pprint import ruamel.yaml from ruamel.yaml.constructor import SafeConstructor def construct_yaml_tuple(self, node): seq = self.construct_sequence(node) # only make "leaf sequences" into tuples, you can add dict # and other types as necessary if seq and isinstance(seq[0], (list, tuple)): return seq return tuple(seq) SafeConstructor.add_constructor( u'tag:yaml.org,2002:seq', construct_yaml_tuple) with open('input.yaml') as fp: data = ruamel.yaml.safe_load(fp) pprint.pprint(data, width=24) which prints: {'cities': {1: (0, 0), 2: (4, 0), 3: (0, 4), 4: (4, 4), 5: (2, 2), 6: (6, 2)}, 'end': 4, 'highways': [(1, 2), (1, 3), (1, 5), (2, 4), (3, 4), (5, 4)], 'start': 1} if you then need to process more material where sequence need to be "normal" lists again, use: SafeConstructor.add_constructor( u'tag:yaml.org,2002:seq', SafeConstructor.construct_yaml_seq) ¹ This was done using ruamel.yaml a YAML 1.2 parser, of which I am the author. 
You should be able to do same with the older PyYAML if you only ever need to support YAML 1.1 and/or cannot upgrade for some reason A: I ran in the same problem as the question and I was not too satisfied by the two answers. While browsing around the pyyaml documentation I found really two interesting methods yaml.add_constructor and yaml.add_implicit_resolver. The implicit resolver solves the problem of having to tag all entries with !!python/tuple by matching the strings with a regex. I also wanted to use the tuple syntax, so write tuple: (10,120) instead of writing a list tuple: [10,120] which then gets converted to a tuple, I personally found that very annoying. I also did not want to install an external library. Here is the code: import yaml import re # this is to convert the string written as a tuple into a python tuple def yml_tuple_constructor(loader, node): # this little parse is really just for what I needed, feel free to change it! def parse_tup_el(el): # try to convert into int or float else keep the string if el.isdigit(): return int(el) try: return float(el) except ValueError: return el value = loader.construct_scalar(node) # remove the ( ) from the string tup_elements = value[1:-1].split(',') # remove the last element if the tuple was written as (x,b,) if tup_elements[-1] == '': tup_elements.pop(-1) tup = tuple(map(parse_tup_el, tup_elements)) return tup # !tuple is my own tag name, I think you could choose anything you want yaml.add_constructor(u'!tuple', yml_tuple_constructor) # this is to spot the strings written as tuple in the yaml yaml.add_implicit_resolver(u'!tuple', re.compile(r"\(([^,\W]{,},){,}[^,\W]*\)")) Finally by executing this: >>> yml = yaml.load(""" ...: cities: ...: 1: (0,0) ...: 2: (4,0) ...: 3: (0,4) ...: 4: (4,4) ...: 5: (2,2) ...: 6: (6,2) ...: highways: ...: - (1,2) ...: - (1,3) ...: - (1,5) ...: - (2,4) ...: - (3,4) ...: - (5,4) ...: start: 1 ...: end: 4""") >>> yml['cities'] {1: (0, 0), 2: (4, 0), 3: (0, 4), 4: (4, 4), 5: (2, 2), 6: (6, 2)} >>> yml['highways'] [(1, 2), (1, 3), (1, 5), (2, 4), (3, 4), (5, 4)] There could be a potential drawback with save_load compared to load which I did not test. A: You treat a tuple as a list. params.yaml foo: bar: ["a", "b", "c"] Source
How to read a python tuple using PyYAML?
I have the following YAML file named input.yaml: cities: 1: [0,0] 2: [4,0] 3: [0,4] 4: [4,4] 5: [2,2] 6: [6,2] highways: - [1,2] - [1,3] - [1,5] - [2,4] - [3,4] - [5,4] start: 1 end: 4 I'm loading it using PyYAML and printing the result as follows: import yaml f = open("input.yaml", "r") data = yaml.load(f) f.close() print(data) The result is the following data structure: { 'cities': { 1: [0, 0] , 2: [4, 0] , 3: [0, 4] , 4: [4, 4] , 5: [2, 2] , 6: [6, 2] } , 'highways': [ [1, 2] , [1, 3] , [1, 5] , [2, 4] , [3, 4] , [5, 4] ] , 'start': 1 , 'end': 4 } As you can see, each city and highway is represented as a list. However, I want them to be represented as a tuple. Hence, I manually convert them into tuples using comprehensions: import yaml f = open("input.yaml", "r") data = yaml.load(f) f.close() data["cities"] = {k: tuple(v) for k, v in data["cities"].items()} data["highways"] = [tuple(v) for v in data["highways"]] print(data) However, this seems like a hack. Is there some way to instruct PyYAML to directly read them as tuples instead of lists?
[ "I wouldn't call what you've done hacky for what you are trying to do. Your alternative approach from my understanding is to make use of python-specific tags in your YAML file so it is represented appropriately when loading the yaml file. However, this requires you modifying your yaml file which, if huge, is probably going to be pretty irritating and not ideal.\nLook at the PyYaml doc that further illustrates this. Ultimately you want to place a !!python/tuple in front of your structure that you want to represented as such. To take your sample data, it would like: \nYAML FILE:\ncities:\n 1: !!python/tuple [0,0]\n 2: !!python/tuple [4,0]\n 3: !!python/tuple [0,4]\n 4: !!python/tuple [4,4]\n 5: !!python/tuple [2,2]\n 6: !!python/tuple [6,2]\nhighways:\n - !!python/tuple [1,2]\n - !!python/tuple [1,3]\n - !!python/tuple [1,5]\n - !!python/tuple [2,4]\n - !!python/tuple [3,4]\n - !!python/tuple [5,4]\nstart: 1\nend: 4\n\nSample code: \nimport yaml\n\nwith open('y.yaml') as f:\n d = yaml.load(f.read())\n\nprint(d)\n\nWhich will output: \n{'cities': {1: (0, 0), 2: (4, 0), 3: (0, 4), 4: (4, 4), 5: (2, 2), 6: (6, 2)}, 'start': 1, 'end': 4, 'highways': [(1, 2), (1, 3), (1, 5), (2, 4), (3, 4), (5, 4)]}\n\n", "Depending on where your YAML input comes from your \"hack\" is a good solution, especially if you would use yaml.safe_load() instead of the unsafe yaml.load(). If only the \"leaf\" sequences in your YAML file need to be tuples you can do ¹:\nimport pprint\nimport ruamel.yaml\nfrom ruamel.yaml.constructor import SafeConstructor\n\n\ndef construct_yaml_tuple(self, node):\n seq = self.construct_sequence(node)\n # only make \"leaf sequences\" into tuples, you can add dict \n # and other types as necessary\n if seq and isinstance(seq[0], (list, tuple)):\n return seq\n return tuple(seq)\n\nSafeConstructor.add_constructor(\n u'tag:yaml.org,2002:seq',\n construct_yaml_tuple)\n\nwith open('input.yaml') as fp:\n data = ruamel.yaml.safe_load(fp)\npprint.pprint(data, width=24)\n\nwhich prints:\n{'cities': {1: (0, 0),\n 2: (4, 0),\n 3: (0, 4),\n 4: (4, 4),\n 5: (2, 2),\n 6: (6, 2)},\n 'end': 4,\n 'highways': [(1, 2),\n (1, 3),\n (1, 5),\n (2, 4),\n (3, 4),\n (5, 4)],\n 'start': 1}\n\nif you then need to process more material where sequence need to be \"normal\" lists again, use:\nSafeConstructor.add_constructor(\n u'tag:yaml.org,2002:seq',\n SafeConstructor.construct_yaml_seq)\n\n\n¹ This was done using ruamel.yaml a YAML 1.2 parser, of which I am the author. You should be able to do same with the older PyYAML if you only ever need to support YAML 1.1 and/or cannot upgrade for some reason\n", "I ran in the same problem as the question and I was not too satisfied by the two answers. While browsing around the pyyaml documentation I found\nreally two interesting methods yaml.add_constructor and yaml.add_implicit_resolver. \nThe implicit resolver solves the problem of having to tag all entries with !!python/tuple by matching the strings with a regex. I also wanted to use the tuple syntax, so write tuple: (10,120) instead of writing a list tuple: [10,120] which then gets\nconverted to a tuple, I personally found that very annoying. I also did not want to install an external library. Here is the code:\nimport yaml\nimport re\n\n# this is to convert the string written as a tuple into a python tuple\ndef yml_tuple_constructor(loader, node): \n # this little parse is really just for what I needed, feel free to change it! 
\n def parse_tup_el(el): \n # try to convert into int or float else keep the string \n if el.isdigit(): \n return int(el) \n try: \n return float(el) \n except ValueError: \n return el \n\n value = loader.construct_scalar(node) \n # remove the ( ) from the string \n tup_elements = value[1:-1].split(',') \n # remove the last element if the tuple was written as (x,b,) \n if tup_elements[-1] == '': \n tup_elements.pop(-1) \n tup = tuple(map(parse_tup_el, tup_elements)) \n return tup \n\n# !tuple is my own tag name, I think you could choose anything you want \nyaml.add_constructor(u'!tuple', yml_tuple_constructor)\n# this is to spot the strings written as tuple in the yaml \nyaml.add_implicit_resolver(u'!tuple', re.compile(r\"\\(([^,\\W]{,},){,}[^,\\W]*\\)\")) \n\nFinally by executing this:\n>>> yml = yaml.load(\"\"\"\n ...: cities:\n ...: 1: (0,0)\n ...: 2: (4,0)\n ...: 3: (0,4)\n ...: 4: (4,4)\n ...: 5: (2,2)\n ...: 6: (6,2)\n ...: highways:\n ...: - (1,2)\n ...: - (1,3)\n ...: - (1,5)\n ...: - (2,4)\n ...: - (3,4)\n ...: - (5,4)\n ...: start: 1\n ...: end: 4\"\"\")\n>>> yml['cities']\n{1: (0, 0), 2: (4, 0), 3: (0, 4), 4: (4, 4), 5: (2, 2), 6: (6, 2)}\n>>> yml['highways']\n[(1, 2), (1, 3), (1, 5), (2, 4), (3, 4), (5, 4)]\n\nThere could be a potential drawback with save_load compared to load which I did not test.\n", "You treat a tuple as a list.\nparams.yaml\nfoo:\n bar: [\"a\", \"b\", \"c\"]\n\nSource\n" ]
[ 31, 5, 4, 0 ]
[]
[]
[ "python", "pyyaml", "yaml" ]
stackoverflow_0039553008_python_pyyaml_yaml.txt
Q: How to handle CORS for web workers? In one of my js files (game.js), web workers are used which causes problems for CORS. From game.js: var engine = new Worker(options.machinejs|| 'static/js/mainjs/machine.js'); First problem I got was about SharedArrayBuffer is not defined and I solved it by adding the needed headers. @app.route("/") def home(): resp = make_response(render_template("index.html")) resp.headers['Cross-Origin-Embedder-Policy'] = 'require-corp' resp.headers['Cross-Origin-Opener-Policy'] = 'same-origin' return resp I got rid of that error but got this one: http://127.0.0.1:5000/static/js/mainjs/machine.js net::ERR_BLOCKED_BY_RESPONSE 304 When I check the responses I can see the header parts I added for "http://127.0.0.1:5000/" but I see no such headers for http://127.0.0.1:5000/static/js/mainjs/machine.js In fact, it warns me to add Cross-Origin-Embedder-Policy: require-corp in my document. Where and how though? A: I obviously confused CORS with CORP although they're related. The solution I found was: @app.route("/") def home(): return render_template("index.html") @app.after_request def add_header_home(response): response.headers['Cross-Origin-Embedder-Policy'] = 'require-corp' response.headers['Cross-Origin-Opener-Policy'] = 'same-origin' return response Somehow after_request decorator helps us here.
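One quick way to confirm the after_request hook really reaches the static worker script is Flask's test client; this sketch assumes the app and hook are set up as in the answer.

from flask import Flask, render_template

app = Flask(__name__)

@app.route("/")
def home():
    return render_template("index.html")

@app.after_request
def add_isolation_headers(response):
    # runs for every response, including files served from /static/
    response.headers['Cross-Origin-Embedder-Policy'] = 'require-corp'
    response.headers['Cross-Origin-Opener-Policy'] = 'same-origin'
    return response

with app.test_client() as client:
    resp = client.get("/static/js/mainjs/machine.js")
    print(resp.status_code, resp.headers.get("Cross-Origin-Embedder-Policy"))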
How to handle CORS for web workers?
In one of my js files (game.js), web workers are used which causes problems for CORS. From game.js: var engine = new Worker(options.machinejs|| 'static/js/mainjs/machine.js'); First problem I got was about SharedArrayBuffer is not defined and I solved it by adding the needed headers. @app.route("/") def home(): resp = make_response(render_template("index.html")) resp.headers['Cross-Origin-Embedder-Policy'] = 'require-corp' resp.headers['Cross-Origin-Opener-Policy'] = 'same-origin' return resp I got rid of that error but got this one: http://127.0.0.1:5000/static/js/mainjs/machine.js net::ERR_BLOCKED_BY_RESPONSE 304 When I check the responses I can see the header parts I added for "http://127.0.0.1:5000/" but I see no such headers for http://127.0.0.1:5000/static/js/mainjs/machine.js In fact, it warns me to add Cross-Origin-Embedder-Policy: require-corp in my document. Where and how though?
[ "I obviously confused CORS with CORP although they're related.\nThe solution I found was:\[email protected](\"/\")\ndef home():\n return render_template(\"index.html\")\n\[email protected]_request\ndef add_header_home(response):\n response.headers['Cross-Origin-Embedder-Policy'] = 'require-corp'\n response.headers['Cross-Origin-Opener-Policy'] = 'same-origin'\n return response\n\nSomehow after_request decorator helps us here.\n" ]
[ 0 ]
[]
[]
[ "cross_origin_embedder_policy", "cross_origin_resource_policy", "flask", "javascript", "python" ]
stackoverflow_0074613280_cross_origin_embedder_policy_cross_origin_resource_policy_flask_javascript_python.txt
Q: Replace NaN values from DataFrame with values from series I am trying to implement code which will do the following with pandas. def fill_in_capabilities(df): capacity_means = df.groupby("LV_Name").mean(["LEO_Capa", "GTO_Capa"]) for row in df: if np.isnan(row["LEO_Capa"]): row["LEO_Capa"] = capacity_means[row["LV_Name"]] return df Basically, for the rows in df where the value in the column "LEO_Capa" is NaN, I would like to replace the value there with a value from the series capacity_means, indexed by the value in the column "LV_Name" from the df with the missing value. How would one do this with pandas, as the code there does not work. Thanks. A: You can use a function: def fill_in_capabilities(df: pd.DataFrame) -> pd.DataFrame: df[["LEO_Capa", "GTO_Capa"]] = df[["LEO_Capa", "GTO_Capa"]].fillna( df.groupby("LV_Name")[["LEO_Capa", "GTO_Capa"]].transform("mean") ) return df df = fill_in_capabilities(df)
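A worked example of the transform-based fill from the answer, using a few made-up launcher rows so the before/after is visible.

import numpy as np
import pandas as pd

df = pd.DataFrame({                      # hypothetical sample data
    "LV_Name":  ["Falcon 9", "Falcon 9", "Ariane 5", "Ariane 5"],
    "LEO_Capa": [22.8, np.nan, 21.0, 21.0],
    "GTO_Capa": [8.3, 8.3, np.nan, 10.5],
})

# per-row group means, used only where the original value is NaN
means = df.groupby("LV_Name")[["LEO_Capa", "GTO_Capa"]].transform("mean")
df[["LEO_Capa", "GTO_Capa"]] = df[["LEO_Capa", "GTO_Capa"]].fillna(means)
print(df)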
Replace NaN values from DataFrame with values from series
I am trying to implement code which will do the following with pandas. def fill_in_capabilities(df): capacity_means = df.groupby("LV_Name").mean(["LEO_Capa", "GTO_Capa"]) for row in df: if np.isnan(row["LEO_Capa"]): row["LEO_Capa"] = capacity_means[row["LV_Name"]] return df Basically, for the rows in df where the value in the column "LEO_Capa" is NaN, I would like to replace the value there with a value from the series capacity_means, indexed by the value in the column "LV_Name" from the df with the missing value. How would one do this with pandas, as the code there does not work. Thanks.
[ "You can use a function:\ndef fill_in_capabilities(df: pd.DataFrame) -> pd.DataFrame:\n df[[\"LEO_Capa\", \"GTO_Capa\"]] = df[[\"LEO_Capa\", \"GTO_Capa\"]].fillna(\n df.groupby(\"LV_Name\")[[\"LEO_Capa\", \"GTO_Capa\"]].transform(\"mean\")\n )\n\n return df\n\n\ndf = fill_in_capabilities(df)\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "nan", "pandas", "python", "series" ]
stackoverflow_0074615607_dataframe_nan_pandas_python_series.txt
Q: Flask for to have result as variable I have below form which selects the data and redirects to the page user_data It selects the date and redirects to another page. Am able to get the data using request.form['Period'] method in python. But this is not getting called in form action <form action = "/user_data/{{period}}" method="POST"> period variable is empty resulting in Not Found error. Is there a way to select a value and pass it into a same form <form action = "/user_data/{{period}}" method="POST"> <label for = "Period" id = "Period">Period</label> <input list="User_Listing_Period" id ="User_Lst_Period" name = "Period"> <datalist id = "User_data_Period"> {% for row in user_dat_filter %} <option name = "User_Dat_Period" value = "{{row[0]}}"></option> {% endfor %} </datalist> <div class="submit"> <label for = "Submit" id = "Submit"></label> <button type="submit">Submit</button> </div> A: two options here: let form direct to url /user_data, and based on the Period value renders the page i.e it renders the data for that month. as value is based on user selection, JS can be utilized. <html> <body> <form action = "/user_data/{{period}}" method="POST" id="myForm"> <label for = "Period" id = "Period">Period</label> <input list="User_Listing_Period" id ="User_Lst_Period" name = "Period"> <datalist id = "User_Listing_Period"> <!-- I commented this, so can check in HTML, you can go with your code {% for row in user_dat_filter %} --> <option name = "User_Dat_Period" value = "Aug22"></option> <option name = "User_Dat_Period" value = "Sep22"></option> <option name = "User_Dat_Period" value = "Oct222"></option> <!-- {% endfor %} --> </datalist> <div class="submit"> <label for = "Submit" id = "Submit"></label> <button type="submit">Submit</button> </div> </form> <script> const myform= document.getElementById("myForm")//identify the form // add a listener to myform myform.addEventListener('submit', async (e) => { e.preventDefault(); //block default submit const selectedOption=document.getElementById("User_Lst_Period") let value = selectedOption.value document.location.href='/user_data/' + value //redirect to value selected by user }) </script> </body> </html>
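A server-side sketch of the first option in the answer: post to a fixed /user_data URL and let Flask build the redirect from the submitted Period value, so the template never needs an empty {{period}} in the action. Route and template names here are assumptions.

from flask import Flask, render_template, request, redirect, url_for

app = Flask(__name__)

@app.route("/user_data", methods=["POST"])
def user_data_form():
    period = request.form["Period"]                      # value picked from the datalist
    return redirect(url_for("user_data", period=period))

@app.route("/user_data/<period>")
def user_data(period):
    return render_template("user_data.html", period=period)

The form action then becomes simply action="/user_data" with method="POST".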
Flask form to have result as variable
I have below form which selects the data and redirects to the page user_data It selects the date and redirects to another page. Am able to get the data using request.form['Period'] method in python. But this is not getting called in form action <form action = "/user_data/{{period}}" method="POST"> period variable is empty resulting in Not Found error. Is there a way to select a value and pass it into a same form <form action = "/user_data/{{period}}" method="POST"> <label for = "Period" id = "Period">Period</label> <input list="User_Listing_Period" id ="User_Lst_Period" name = "Period"> <datalist id = "User_data_Period"> {% for row in user_dat_filter %} <option name = "User_Dat_Period" value = "{{row[0]}}"></option> {% endfor %} </datalist> <div class="submit"> <label for = "Submit" id = "Submit"></label> <button type="submit">Submit</button> </div>
[ "two options here:\n\nlet form direct to url /user_data, and based on the Period value renders the page i.e it renders the data for that month.\nas value is based on user selection, JS can be utilized.\n\n<html>\n <body>\n<form action = \"/user_data/{{period}}\" method=\"POST\" id=\"myForm\">\n <label for = \"Period\" id = \"Period\">Period</label>\n <input list=\"User_Listing_Period\" id =\"User_Lst_Period\" name = \"Period\">\n <datalist id = \"User_Listing_Period\">\n <!-- I commented this, so can check in HTML, you can go with your code\n {% for row in user_dat_filter %} -->\n <option name = \"User_Dat_Period\" value = \"Aug22\"></option>\n <option name = \"User_Dat_Period\" value = \"Sep22\"></option>\n <option name = \"User_Dat_Period\" value = \"Oct222\"></option>\n <!-- {% endfor %} -->\n </datalist>\n <div class=\"submit\">\n <label for = \"Submit\" id = \"Submit\"></label>\n <button type=\"submit\">Submit</button>\n </div>\n </form>\n <script>\n const myform= document.getElementById(\"myForm\")//identify the form\n // add a listener to myform\n myform.addEventListener('submit', async (e) => {\n e.preventDefault(); //block default submit\n const selectedOption=document.getElementById(\"User_Lst_Period\")\n let value = selectedOption.value\n document.location.href='/user_data/' + value //redirect to value selected by user\n\n\n })\n </script>\n</body>\n</html>\n\n" ]
[ 1 ]
[]
[]
[ "flask", "python" ]
stackoverflow_0074614627_flask_python.txt
Q: Try using .loc[row_indexer,col_indexer] = value instead again Following line of code: df_under['Work Ratio']=df_under['Work Ratio'].astype(float) is generating Try using .loc[row_indexer,col_indexer] = value instead warning. How to get rid of it? Thank you for help A: This looks like a pandas dataframe and you would like to change the data type of the column 'Work Ratio'? The warning tells you that by changing df_under['Work Ratio'] you will not change the actual dataframe in place. The warning tells you to access the column by saying df_under.loc[:,'Work Ratio']=df_under['Work Ratio'].astype(float) #all rows, column 'Work Ratio' Alternatively you could save the data types of your dataframe col_types = df_under.dtypes which gives you a pandas series of types. Now change the type of 'Work Ratio' col_types['Work Ratio'] = float and change the whole dataframe as df_under = df_under.astype(col_types)
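If df_under was itself produced by filtering another DataFrame, the warning usually goes away once you take an explicit copy at the filtering step. A small sketch; the source frame and the filter condition are made up.

import pandas as pd

df = pd.DataFrame({                                   # hypothetical source frame
    "category":   ["under", "over", "under"],
    "Work Ratio": ["0.5", "1.2", "0.8"],
})

df_under = df[df["category"] == "under"].copy()       # .copy() breaks the link to the original slice
df_under["Work Ratio"] = df_under["Work Ratio"].astype(float)
print(df_under.dtypes)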
Try using .loc[row_indexer,col_indexer] = value instead again
Following line of code: df_under['Work Ratio']=df_under['Work Ratio'].astype(float) is generating Try using .loc[row_indexer,col_indexer] = value instead warning. How to get rid of it? Thank you for help
[ "This looks like a pandas dataframe and you would like to change the data type of the column 'Work Ratio'? The warning tells you that by changing df_under['Work Ratio'] you will not change the actual dataframe in place. The warning tells you to access the column by saying\ndf_under.loc[:,'Work Ratio']=df_under['Work Ratio'].astype(float) #all rows, column 'Work Ratio'\n\nAlternatively you could save the data types of your dataframe\ncol_types = df_under.dtypes\n\nwhich gives you a pandas series of types. Now change the type of 'Work Ratio'\ncol_types['Work Ratio'] = float\n\nand change the whole dataframe as\ndf_under = df_under.astype(col_types)\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074615334_python.txt
Q: Cannot open anaconda suddenly Today I found I cannot open anaconda navigator, which operated just fine before. At the same time, spyder could not be open either, but jupyter notebook and anaconda prompt are available. I tried different methods following instructions online. 1) conda update anaconda-navigator and reboot the system 2) anaconda-navigator --reset , but it shows the error as follows: Traceback (most recent call last): File "C:\Users\User\Anaconda3\lib\site-packages\qtpy\__init__.py", line 169, in <module> from PySide import __version__ as PYSIDE_VERSION # analysis:ignore ImportError: No module named 'PySide' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\User\Anaconda3\Scripts\anaconda-navigator-script.py", line 6, in <module> from anaconda_navigator.app.main import main File "C:\Users\User\Anaconda3\lib\site-packages\anaconda_navigator\app\main.py", line 22, in <module> from anaconda_navigator.utils.conda import is_conda_available File "C:\Users\User\Anaconda3\lib\site-packages\anaconda_navigator\utils\__init__.py", line 15, in <module> from qtpy.QtGui import QIcon File "C:\Users\User\Anaconda3\lib\site-packages\qtpy\__init__.py", line 175, in <module> raise PythonQtError('No Qt bindings could be found') qtpy.PythonQtError: No Qt bindings could be found (base) C:\Users\User\Anaconda3>anaconda-navigator --reset Traceback (most recent call last): File "C:\Users\User\Anaconda3\lib\site-packages\qtpy\__init__.py", line 169, in <module> from PySide import __version__ as PYSIDE_VERSION # analysis:ignore ImportError: No module named 'PySide' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\User\Anaconda3\Scripts\anaconda-navigator-script.py", line 6, in <module> from anaconda_navigator.app.main import main File "C:\Users\User\Anaconda3\lib\site-packages\anaconda_navigator\app\main.py", line 22, in <module> from anaconda_navigator.utils.conda import is_conda_available File "C:\Users\User\Anaconda3\lib\site-packages\anaconda_navigator\utils\__init__.py", line 15, in <module> from qtpy.QtGui import QIcon File "C:\Users\User\Anaconda3\lib\site-packages\qtpy\__init__.py", line 175, in <module> raise PythonQtError('No Qt bindings could be found') qtpy.PythonQtError: No Qt bindings could be found So I tried different methods to reinstall pyqt as follows: 1) conda install pyqt 2) conda install --force qt 3) pip3 install PyQt5 But still not working. I really wonder how I could fix this. And now when I ran anaconda navigator in the prompt, I got anaconda: error: argument : invalid choice: 'navigator' (choose from 'auth', 'label', 'channel', 'config', 'copy', 'download', 'groups', 'login', 'logout', 'notebook', 'package', 'remove', 'search', 'show', 'upload', 'whoami') PS: my python version is 3.5, and anaconda has been updated to the latest version. PS2-Possible Solution: I ran conda install PySide on prompt and it downgraded some of the packages, where I remembered few days ago I upgraded beautifulsou4, please be very aware when upgrading packages on anaconda ! A: This error means that you installed pyqt5 with pip along side the pyqt conda package. It could be solved by you uninstalling the pip package. Try: pip uninstall PyQt5 Then update conda: conda update conda and conda update anaconda-navigator It will surely resolve your problem. A: I tried all the solutions listed here but they didn't work for me. Later I was able to resolve the issue. 
Even though your solutions didn't directly solve my issue, the steps I used to solve it were based on the answers I found here. I will now list the steps I used to solve my issue: python -V # I checked the python version to make sure it's python 3.4 and above conda update conda conda update anaconda-navigator pip install PySide2 pip uninstall pyqt pip uninstall PyQt5 Now when doing step 5&6 both pyqt and PyQt5 were not installed on the system, which was the cause of the issue pip install qtpy # This module was already installed on the system pip install PyQt5 # This was the final step that solved the issue A: I tried all the answers provided. Some steps worked some didn't. So I will just tell all the steps which finally after many trials and fails worked for me: cd C:\Users\UserName\Anaconda\Scripts pip install PySide2 pip uninstall PyQt5 pip uninstall pyqt (on this I got this error- WARNING: Skipping pyqt as it is not installed. So not sure if this is helpful) conda update conda conda update anaconda-navigator A: I had exactly this problem. Same error message and all. To fix I first updated Conda: $ conda update conda Then updated the navigator $ conda update anaconda-navigator This performed an integrity check (though it took a while - you have to be patient) and found that the environment was inconsistent. It created a package plan to download and install new packages and updates as well as downgrade some packages. It asked me to approve the change before performing the change and updating the specs. Having approved it, all was performed flawlessly and I was able to relaunch Anaconda navigator without a problem. A: I just had a similar problem with Navigator. I typed anaconda-navigator on the command line and it opened fine. This doesn't explain why it won't open from the icon but does offer a work-around. A: Currently I have two anaconda prompt windows open on my screen, one normal, and one that is running as an Admin. When I try 'anaconda-navigator' in the normal window, I get the same errors as you. When I try the same command in the admin rights window, then it all works. This isn't really a fix, but it might point someone more experienced in the right direction. A: The real fix that worked for me was to install the module that was causing the error, that is the PySide module. Do not install PySide (pip install PySide) unless you are using Python 3.4 or less. If you are running Python 3.4 and up (up to 3.7), you need to install PySide2 (pip install PySide2). None of the other answers worked for me at all because the error was the same all over again. A: Uninstalled PyQt5 pip uninstall PyQt5 and installed qtpy pip install qtpy worked for me A: Run these one by one, it worked for me: pip uninstall pyqt pip uninstall PyQt5 conda update conda conda update anaconda-navigator A: I had a similar problem with Anaconda Navigator and Spyder. This commands on the anaconda prompt worked for me: conda update conda conda update anaconda-navigator conda install pyside2 A: Try this : conda update conda conda update anaconda-navigator And finally: pip install PyQt5 It worked for me fine A: Make sure that the path in system environment variables point to the right folder as shown in image. C:\Anaconda3\Scripts, C:\Anaconda3 A: I was working on a project which used the GPU instead of CPU. I did some Googling and found out that the best way to do this is by using the Anaconda. As you can see I ended up here due to errors I faced. 
All the answers in this tread is insightful but the recent working answer that has worked for me is a combination previous answers. I'll try to sum up one simple answer and step that can be followed for developers like me to solve this issue. Change the path in system settings> advanced system settings > Environment Variables> System variables > path > edit > add "C:ProgramData\Anaconda3\Scripts" Make sure the python is 3.4+ Open CMD in admin mode, change directory to the anaconda then type conda update conda if there is SSL error type conda config --set ssl_verify no or conda config --set ssl_verify false Then type conda update anaconda-navigator Then to avoid HTTP and SSL error. Copy the following files: libcrypto-1_1-x64.* (.dll and .pdb files) libssl-1_1-x64.* (.dll and .pdb files) From C:ProgramData\Anaconda3\Library\bin to C:ProgramData\Anaconda3\DLLs. Now open Anaconda prompt(Anaconda3) type(one by one) pip install PySide2 pip uninstall pyqt pip uninstall PyQt5 6.Then while in Anaconda3 type pip install pyqt test if you can open anaconda-navigator in anaconda prompt. (if it works you're almost there) Close everything, Fire up the CMD in Admin mode and type pip install PyQt5. Open anaconda-navigator in CMD. if It is not working properly, repeat the process after step 4. I'm not a very pro coder. but this is the process that worked for me! Good luck!! A: Tried all of the other answers, but didn't work for me. What worked for me was this sequence: pip uninstall pyside2 conda uninstall qt conda update conda conda install anaconda-navigator
Cannot open anaconda suddenly
Today I found I cannot open anaconda navigator, which operated just fine before. At the same time, spyder could not be open either, but jupyter notebook and anaconda prompt are available. I tried different methods following instructions online. 1) conda update anaconda-navigator and reboot the system 2) anaconda-navigator --reset , but it shows the error as follows: Traceback (most recent call last): File "C:\Users\User\Anaconda3\lib\site-packages\qtpy\__init__.py", line 169, in <module> from PySide import __version__ as PYSIDE_VERSION # analysis:ignore ImportError: No module named 'PySide' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\User\Anaconda3\Scripts\anaconda-navigator-script.py", line 6, in <module> from anaconda_navigator.app.main import main File "C:\Users\User\Anaconda3\lib\site-packages\anaconda_navigator\app\main.py", line 22, in <module> from anaconda_navigator.utils.conda import is_conda_available File "C:\Users\User\Anaconda3\lib\site-packages\anaconda_navigator\utils\__init__.py", line 15, in <module> from qtpy.QtGui import QIcon File "C:\Users\User\Anaconda3\lib\site-packages\qtpy\__init__.py", line 175, in <module> raise PythonQtError('No Qt bindings could be found') qtpy.PythonQtError: No Qt bindings could be found (base) C:\Users\User\Anaconda3>anaconda-navigator --reset Traceback (most recent call last): File "C:\Users\User\Anaconda3\lib\site-packages\qtpy\__init__.py", line 169, in <module> from PySide import __version__ as PYSIDE_VERSION # analysis:ignore ImportError: No module named 'PySide' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\User\Anaconda3\Scripts\anaconda-navigator-script.py", line 6, in <module> from anaconda_navigator.app.main import main File "C:\Users\User\Anaconda3\lib\site-packages\anaconda_navigator\app\main.py", line 22, in <module> from anaconda_navigator.utils.conda import is_conda_available File "C:\Users\User\Anaconda3\lib\site-packages\anaconda_navigator\utils\__init__.py", line 15, in <module> from qtpy.QtGui import QIcon File "C:\Users\User\Anaconda3\lib\site-packages\qtpy\__init__.py", line 175, in <module> raise PythonQtError('No Qt bindings could be found') qtpy.PythonQtError: No Qt bindings could be found So I tried different methods to reinstall pyqt as follows: 1) conda install pyqt 2) conda install --force qt 3) pip3 install PyQt5 But still not working. I really wonder how I could fix this. And now when I ran anaconda navigator in the prompt, I got anaconda: error: argument : invalid choice: 'navigator' (choose from 'auth', 'label', 'channel', 'config', 'copy', 'download', 'groups', 'login', 'logout', 'notebook', 'package', 'remove', 'search', 'show', 'upload', 'whoami') PS: my python version is 3.5, and anaconda has been updated to the latest version. PS2-Possible Solution: I ran conda install PySide on prompt and it downgraded some of the packages, where I remembered few days ago I upgraded beautifulsou4, please be very aware when upgrading packages on anaconda !
[ "This error means that you installed pyqt5 with pip along side the pyqt conda package. It could be solved by you uninstalling the pip package.\nTry:\npip uninstall PyQt5\n\nThen update conda:\nconda update conda\n\nand \nconda update anaconda-navigator\n\nIt will surely resolve your problem.\n", "I tried all the solutions listed here but they didn't work for me. Later I was able to resolve the issue. Even though your solutions didn't directly solve my issue, the steps I used to solve it were based on the answers I found here. I will now list the steps I used to solve my issue:\n\npython -V # I checked the python version to make sure it's python 3.4 and above\nconda update conda\nconda update anaconda-navigator\npip install PySide2\npip uninstall pyqt\npip uninstall PyQt5\n\nNow when doing step 5&6 both pyqt and PyQt5 were not installed on the system, which was the cause of the issue\n\npip install qtpy # This module was already installed on the system\npip install PyQt5 # This was the final step that solved the issue\n\n", "I tried all the answers provided. Some steps worked some didn't. So I will just tell all the steps which finally after many trials and fails worked for me:\n\ncd C:\\Users\\UserName\\Anaconda\\Scripts\npip install PySide2\npip uninstall PyQt5\npip uninstall pyqt (on this I got this error- WARNING: Skipping pyqt as it is not installed. So not sure if this is helpful)\nconda update conda\nconda update\nanaconda-navigator\n\n", "I had exactly this problem. Same error message and all.\nTo fix I first updated Conda:\n$ conda update conda\n\nThen updated the navigator\n$ conda update anaconda-navigator\n\nThis performed an integrity check (though it took a while - you have to be patient) and found that the environment was inconsistent. It created a package plan to download and install new packages and updates as well as downgrade some packages. It asked me to approve the change before performing the change and updating the specs.\nHaving approved it, all was performed flawlessly and I was able to relaunch Anaconda navigator without a problem. \n", "I just had a similar problem with Navigator. I typed anaconda-navigator on the command line and it opened fine. This doesn't explain why it won't open from the icon but does offer a work-around.\n", "Currently I have two anaconda prompt windows open on my screen, one normal, and one that is running as an Admin. \nWhen I try 'anaconda-navigator' in the normal window, I get the same errors as you.\nWhen I try the same command in the admin rights window, then it all works. \nThis isn't really a fix, but it might point someone more experienced in the right direction. \n", "The real fix that worked for me was to install the module that was causing the error, that is the PySide module. Do not install PySide (pip install PySide) unless you are using Python 3.4 or less. If you are running Python 3.4 and up (up to 3.7), you need to install PySide2 (pip install PySide2).\nNone of the other answers worked for me at all because the error was the same all over again.\n", "Uninstalled PyQt5\npip uninstall PyQt5\n\nand installed qtpy\npip install qtpy\n\nworked for me\n", "Run these one by one, it worked for me:\npip uninstall pyqt\npip uninstall PyQt5\nconda update conda\nconda update anaconda-navigator\n\n", "I had a similar problem with Anaconda Navigator and Spyder. 
This commands on the anaconda prompt worked for me:\nconda update conda\nconda update anaconda-navigator\nconda install pyside2\n\n", "Try this :\nconda update conda\nconda update anaconda-navigator\n\nAnd finally:\npip install PyQt5\n\nIt worked for me fine\n", "Make sure that the path in system environment variables point to the right folder as shown in image.\nC:\\Anaconda3\\Scripts,\nC:\\Anaconda3\n\n\n", "I was working on a project which used the GPU instead of CPU. I did some Googling and found out that the best way to do this is by using the Anaconda. As you can see I ended up here due to errors I faced. All the answers in this tread is insightful but the recent working answer that has worked for me is a combination previous answers. I'll try to sum up one simple answer and step that can be followed for developers like me to solve this issue.\n\nChange the path in system settings> advanced system settings > Environment Variables> System variables > path > edit > add \"C:ProgramData\\Anaconda3\\Scripts\"\n\nMake sure the python is 3.4+\n\nOpen CMD in admin mode, change directory to the anaconda then type\nconda update conda\nif there is SSL error type\nconda config --set ssl_verify no or\nconda config --set ssl_verify false\n\nThen type conda update anaconda-navigator\n\nThen to avoid HTTP and SSL error. Copy the following files:\n\n\nlibcrypto-1_1-x64.* (.dll and .pdb files) libssl-1_1-x64.* (.dll and .pdb files)\nFrom C:ProgramData\\Anaconda3\\Library\\bin to C:ProgramData\\Anaconda3\\DLLs.\n\nNow open Anaconda prompt(Anaconda3) type(one by one)\npip install PySide2\npip uninstall pyqt\npip uninstall PyQt5\n\n\n6.Then while in Anaconda3 type pip install pyqt test if you can open anaconda-navigator in anaconda prompt. (if it works you're almost there)\n\nClose everything, Fire up the CMD in Admin mode and type pip install PyQt5.\n\nOpen anaconda-navigator in CMD. if It is not working properly, repeat the process after step 4.\n\n\nI'm not a very pro coder. but this is the process that worked for me!\nGood luck!!\n", "Tried all of the other answers, but didn't work for me.\nWhat worked for me was this sequence:\npip uninstall pyside2\nconda uninstall qt\nconda update conda\nconda install anaconda-navigator\n\n" ]
[ 40, 5, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "anaconda", "python", "spyder" ]
stackoverflow_0051435579_anaconda_python_spyder.txt
Q: How to filter a collection by multiple conditions
I have a csv file named film.csv; here is the header line with a few lines to use as an example:
Year;Length;Title;Subject;Actor;Actress;Director;Popularity;Awards;*Image
1990;111;Tie Me Up! Tie Me Down!;Comedy;Banderas, Antonio;Abril, Victoria;Almodóvar, Pedro;68;No;NicholasCage.png
1991;113;High Heels;Comedy;Bosé, Miguel;Abril, Victoria;Almodóvar, Pedro;68;No;NicholasCage.png
1983;104;Dead Zone, The;Horror;Walken, Christopher;Adams, Brooke;Cronenberg, David;79;No;NicholasCage.png
1979;122;Cuba;Action;Connery, Sean;Adams, Brooke;Lester, Richard;6;No;seanConnery.png
1978;94;Days of Heaven;Drama;Gere, Richard;Adams, Brooke;Malick, Terrence;14;No;NicholasCage.png
1983;140;Octopussy;Action;Moore, Roger;Adams, Maud;Glen, John;68;No;NicholasCage.png

I am trying to filter, and need to display the movie titles, for these criteria: first name contains "Richard", Year < 1985, Awards == "Y"
I am able to filter for the award, but not the rest. Can you help?
file_name = "film.csv"
lines = (line for line in open(file_name,encoding='cp1252')) #generator to capture lines
lists = (s.rstrip().split(";") for s in lines) #generators to capture lists containing values from lines

#browse lists and index them per header values, then filter all movies that have been awarded
#using a new generator object

cols=next(lists) #obtains only the header
print(cols)
collections = (dict(zip(cols,data)) for data in lists)
filtered = (col["Title"] for col in collections if col["Awards"][0] == "Y")
for item in filtered:
    print(item)
    # input()

This works for the award, but I don't know how to add additional filters. Also, when I try to filter on if col["Year"] < 1985 I get an error message because string and int are not compatible. How do I make the years a numeric value?
I believe for the first name I can filter like this: if col["Actor"].split(", ")[-1] == "Richard"

A: You know how to add one filter. There is no such thing as "additional" filters. Just add your conditions to the current condition. Since you want all of the conditions to be True to select a record, you'd use the boolean and logic. For example:
filtered = (
    col["Title"]
    for col in collections
    if col["Awards"][0] == "Y"
    and col["Actor"].split(", ")[-1] == "Richard"
    and int(col["Year"]) < 1985
    )

Notice I added an int() around the col["Year"] to convert it to an integer.
You've actually gone and reinvented csv.DictReader in the setup to this problem! Instead of
file_name = "film.csv"
lines = (line for line in open(file_name,encoding='cp1252')) #generator to capture lines
lists = (s.rstrip().split(";") for s in lines) #generators to capture lists containing values from lines

#browse lists and index them per header values, then filter all movies that have been awarded
#using a new generator object

cols=next(lists) #obtains only the header
print(cols)
collections = (dict(zip(cols,data)) for data in lists)
filtered = ...

You could have just done:
import csv

file_name = "film.csv"
with open(file_name) as f:
    collections = csv.DictReader(f, delimiter=";")
    filtered = ...
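Putting the pieces of that answer together, here is a small, self-contained sketch of the same multi-condition filter built on csv.DictReader. It assumes the film.csv layout shown above; the function name awarded_richard_before_1985 is an illustrative choice, not something from the original post:
import csv

def awarded_richard_before_1985(path="film.csv"):
    # read the semicolon-separated file and yield matching titles lazily
    with open(path, encoding="cp1252", newline="") as f:
        reader = csv.DictReader(f, delimiter=";")
        for row in reader:
            first_name = row["Actor"].split(", ")[-1]   # "Gere, Richard" -> "Richard"
            if (row["Awards"].startswith("Y")
                    and "Richard" in first_name
                    and int(row["Year"]) < 1985):
                yield row["Title"]

for title in awarded_richard_before_1985():
    print(title)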
How to filter a collection by multiple conditions
I have a csv file named film.csv; here is the header line with a few lines to use as an example:
Year;Length;Title;Subject;Actor;Actress;Director;Popularity;Awards;*Image
1990;111;Tie Me Up! Tie Me Down!;Comedy;Banderas, Antonio;Abril, Victoria;Almodóvar, Pedro;68;No;NicholasCage.png
1991;113;High Heels;Comedy;Bosé, Miguel;Abril, Victoria;Almodóvar, Pedro;68;No;NicholasCage.png
1983;104;Dead Zone, The;Horror;Walken, Christopher;Adams, Brooke;Cronenberg, David;79;No;NicholasCage.png
1979;122;Cuba;Action;Connery, Sean;Adams, Brooke;Lester, Richard;6;No;seanConnery.png
1978;94;Days of Heaven;Drama;Gere, Richard;Adams, Brooke;Malick, Terrence;14;No;NicholasCage.png
1983;140;Octopussy;Action;Moore, Roger;Adams, Maud;Glen, John;68;No;NicholasCage.png

I am trying to filter, and need to display the movie titles, for these criteria: first name contains "Richard", Year < 1985, Awards == "Y"
I am able to filter for the award, but not the rest. Can you help?
file_name = "film.csv"
lines = (line for line in open(file_name,encoding='cp1252')) #generator to capture lines
lists = (s.rstrip().split(";") for s in lines) #generators to capture lists containing values from lines

#browse lists and index them per header values, then filter all movies that have been awarded
#using a new generator object

cols=next(lists) #obtains only the header
print(cols)
collections = (dict(zip(cols,data)) for data in lists)
filtered = (col["Title"] for col in collections if col["Awards"][0] == "Y")
for item in filtered:
    print(item)
    # input()

This works for the award, but I don't know how to add additional filters. Also, when I try to filter on if col["Year"] < 1985 I get an error message because string and int are not compatible. How do I make the years a numeric value?
I believe for the first name I can filter like this: if col["Actor"].split(", ")[-1] == "Richard"
[ "You know how to add one filter. There is no such thing as \"additional\" filters. Just add your conditions to the current condition. Since you want all of the conditions to be True to select a record, you'd use the boolean and logic. For example:\nfiltered = (\n col[\"Title\"] \n for col in collections \n if col[\"Awards\"][0] == \"Y\"\n and col[\"Actor\"].split(\", \")[-1] == \"Richard\"\n and int(col[\"Year\"]) < 1985\n )\n\nNotice I added an int() around the col[\"Year\"] to convert it to an integer.\n\nYou've actually gone and reinvented csv.DictReader in the setup to this problem! Instead of\nfile_name = \"film.csv\"\nlines = (line for line in open(file_name,encoding='cp1252')) #generator to capture lines\nlists = (s.rstrip().split(\";\") for s in lines) #generators to capture lists containing values from lines\n\n#browse lists and index them per header values, then filter all movies that have been awarded\n#using a new generator object\n\ncols=next(lists) #obtains only the header\nprint(cols)\ncollections = (dict(zip(cols,data)) for data in lists)\nfiltered = ...\n\nYou could have just done:\nimport csv\n\nfile_name = \"film.csv\"\nwith open(file_name) as f:\n collections = csv.DictReader(delimiter=\";\")\n filtered = ...\n\n" ]
[ 0 ]
[]
[]
[ "csv", "python" ]
stackoverflow_0074615660_csv_python.txt
Q: Palindrome check for items in a list. Return True or False for each
Is there a way to have a function take in a list and then return true or false for each item in the list if they are palindromes? Below is what I have tried, but I would like the console to look like this:
True
False
True

x=[121,13,155551]

def palindrome_check(x):
    for num_from__list in x:
        if str(num_from__list) == str(num_from__list[::-1]):
            return True
            continue
        else:
            return False

print(palindrome_check(x))

A:
x = [121,13,155551]

def palindrome_check(x):
    res = []
    for num_from__list in x:
        res.append(str(num_from__list) == str(num_from__list)[::-1])
    return res

print(palindrome_check(x))

or even better:
x = [121,13,155551]

def palindrome_check(x):
    return [str(num_from__list) == str(num_from__list)[::-1] for num_from__list in x]

print(palindrome_check(x))

A: You can change your palindrome_check function to take in a single number instead of an array of numbers and then use a list comprehension to get the results into a new list:
def palindrome_check(num):
    return str(num) == str(num)[::-1]

numbers = [121, 13, 155551]

results = [palindrome_check(num) for num in numbers]
print(results)

Right now your function returns True or False based on only the first number in the list it received, which is probably not what you want.

A: You can replace return by yield, hence transforming your function into a generator.
def palindrome_check(x):
    for num_from__list in x:
        if str(num_from__list) == str(num_from__list)[::-1]:
            yield True
        else:
            yield False

print(list(palindrome_check(x)))

A: I would go for the following:
def palindrome_check(x):
    return " ".join([str(str(n) == str(n)[::-1]) for n in x])

Explanation:
For loop
myList = [function(n) for n in x]
# same as
myList = list()
for n in x:
    myList.append(function(n))
# eg. [n for n in x] === [121, 13, 155551]

Convert number to string and its mirror
# number
n # eg. 123
# string
str(n) # eg. '123'
# mirrored string
str(n)[::-1] # eg. '321'
# eg. [str(n) for n in x] === ['121', '13', '155551']
# [str(n)[::-1] for n in x] === ['121', '31', '155551']

Compare string value with its mirror
str(n) == str(n)[::-1] # True/False
# eg. [str(n) == str(n)[::-1] for n in x] === [True, False, True]

Stringify boolean for concatenation (join)
str(str(n) == str(n)[::-1]) # 'True'/'False'
# eg. [str(str(n) == str(n)[::-1]) for n in x] === ['True', 'False', 'True']

Concatenate string elements of a list
# eg. "xx".join(["A", "B", "C"]) returns 'AxxBxxC'
" ".join([str(str(n) == str(n)[::-1]) for n in x]) # 'True False True'
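If the goal is literally the console output shown in the question (one True or False per line, rather than a printed list), a small variation on the comprehension-based answers does it. This is only an illustrative sketch combining the ideas above:
def palindrome_check(numbers):
    # one boolean per item, in the same order as the input list
    return [str(n) == str(n)[::-1] for n in numbers]

for result in palindrome_check([121, 13, 155551]):
    print(result)
# prints:
# True
# False
# True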
Palindrome check for items in a list. Return True or False for each
Is there a way to have a function take in a list and then return true or false for each item in the list if they are palindromes? Below is what I have tried but I would like the console to look like this:
True
False
True

x=[121,13,155551]

def palindrome_check(x):
    for num_from__list in x:
        if str(num_from__list) == str(num_from__list[::-1]):
            return True
            continue
        else:
            return False

print(palindrome_check(x))
[ "x = [121,13,155551]\n\ndef palindrome_check(x):\n res = []\n for num_from__list in x:\n res.append(str(num_from__list) == str(num_from__list)[::-1])\n return res\n\nprint(palindrome_check(x))\n\nor even better:\nx = [121,13,155551]\n\ndef palindrome_check(x):\n return [str(num_from__list) == str(num_from__list[::-1]) for num_from__list in x]\n\nprint(palindrome_check(x))\n\n", "You can change your palindrome_check function to take in a single number instead of an array of numbers and then use a list comprehension to get the results into a new list:\ndef palindrome_check(num):\n return str(num) == str(num)[::-1]\n\nnumbers = [121, 13, 155551]\n\nresults = [palindrome_check(num) for num in numbers]\nprint(results)\n\nRight now your function returns True or False based on only the first number in the list it received, which is probably not what you want.\n", "You can replace return by yield hence transforming your function into a generator.\ndef palindrome_check(x):\n for num_from__list in x:\n if str(num_from__list) == str(num_from__list[::-1]):\n yield True\n else:\n yield False\n\nprint(list(palindrome_check(x)))\n\n", "I would go for the following:\ndef palindrome_check(x):\n return \" \".join([str(str(n) == str(n)[::-1]) for n in x])\n\nExplanation:\n\nFor loop\nmyList = [function(n) for n in x]\n# same as\nmyList = list()\nfor n in x:\n myList.append(function(n))\n# eg. [n for n in x] === [121, 13, 155551]\n\n\nConvert number to string and its mirror\n# number\nn # eg. 123\n# string\nstr(n) # eg. '123'\n# mirrored string\nstr(n)[::-1] # eg. '321'\n# eg. [str(n) for n in x] === ['121', '13', '155551']\n# [str(n)[::-1] for n in x] === ['121', '31', '155551']\n\n\nCompare string value with its mirror\nstr(n) == str(n)[::-1] # True/False\n# eg. [str(n) == str(n)[::-1] for n in x] === [True, False, True]\n\n\nStringify boolean for concatenation (join)\nstr(str(n) == str(n)[::-1]) # 'True'/'False'\n# eg. [str(str(n) == str(n)[::-1]) for n in x] === ['True', 'False', 'True']\n\n\nConcatenate string elements of a list\n# eg. \"xx\".join([\"A\", \"B\", \"C\"]) returns 'AxxBxxC'\n\" \".join([str(str(n) == str(n)[::-1]) for n in x]) # 'True False True'\n\n\n\n" ]
[ 1, 1, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074614956_python.txt
Q: Merge multiple PDFs and keep nested bookmarks
I am trying to merge a few PDFs and keep the nested bookmarks. All PDFs have a "content" parent bookmark in common, when only one is needed. When I use the code below, only the bookmarks of the last PDF in the folder are present in the merge. Can anyone advise on what I need to change to have all the bookmarks preserved and a shared contents parent?
content
  bookmark
  bookmark
  bookmark

from PyPDF2 import PdfFileMerger, PdfFileReader
import os
from os import listdir
from os.path import isfile, join

os.chdir('filepath')
source_dir = 'filepath'
onlyfiles = [f for f in listdir('filepath') if isfile(join('filepath', f))]

for file in source_dir:
    fileReader = PdfFileReader(open(file,'rb'))
    outlines = fileReader.getOutlines()

merger = PdfFileMerger(strict=False)

for item in os.listdir(source_dir):
    if item.endswith('pdf'):
        merger.bookmarks = outlines
        merger.append(item)

merger.write('merged.pdf')
merger.close()

A: Every time you run in your for loop:
outlines = fileReader.getOutlines()
you are overwriting the contents of outlines, not appending to it. So it is not surprising you end up with only the last.
What is the type of outlines? Is it a list or similar? Consult the PyPDF2 documentation to find out, and work out how to build the appended outlines. Then adding the top level entries is your second task.

A: For anyone wondering, this is the solution; however, all bookmarks from the merged PDFs start on page 1 (or 0 in code speak). Anyone with a solution to this, I'm all ears :)
record = []
for file in onlyfiles:
    fileReader = PdfFileReader(open(file,'rb'))
    outlines = fileReader.getOutlines()
    record.append(outlines)

merger = PdfFileMerger(strict=False)

for item in os.listdir(source_dir):
    if item.endswith('pdf'):
        merger.bookmarks = record
        merger.append(item)

merger.write('merged.pdf')
merger.close()
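One further possibility, not given in the answers above, is to let PdfFileMerger import each file's outline itself instead of assigning merger.bookmarks by hand. In the pre-2.0 PyPDF2 API used in this question, append() accepts import_bookmarks (True by default) and an optional bookmark argument for a per-file parent entry, and it re-anchors the imported outline to the pages where that file lands in the merged output. A rough sketch, assuming that API (newer releases rename these to PdfMerger.append(..., outline_item=..., import_outline=...)):
import os
from PyPDF2 import PdfFileMerger

source_dir = 'filepath'  # placeholder directory name from the question
merger = PdfFileMerger(strict=False)

for item in sorted(os.listdir(source_dir)):
    if item.endswith('.pdf'):
        path = os.path.join(source_dir, item)
        # each file's nested bookmarks are imported and shifted to the
        # correct page offset; bookmark= adds a top-level entry per file
        merger.append(path, bookmark=os.path.splitext(item)[0], import_bookmarks=True)

with open('merged.pdf', 'wb') as output:
    merger.write(output)
merger.close()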
Merge multiple PDFs and keep nested bookmarks
I am trying to merge a few PDFs and keep the nested bookmarks. All PDFs have a "content" parent bookmark in common, when only one is needed. When I use the code below, only the bookmarks of the last PDF in the folder are present in the merge. Can anyone advise on what I need to change to have all the bookmarks preserved and a shared contents parent?
content
  bookmark
  bookmark
  bookmark

from PyPDF2 import PdfFileMerger, PdfFileReader
import os
from os import listdir
from os.path import isfile, join

os.chdir('filepath')
source_dir = 'filepath'
onlyfiles = [f for f in listdir('filepath') if isfile(join('filepath', f))]

for file in source_dir:
    fileReader = PdfFileReader(open(file,'rb'))
    outlines = fileReader.getOutlines()

merger = PdfFileMerger(strict=False)

for item in os.listdir(source_dir):
    if item.endswith('pdf'):
        merger.bookmarks = outlines
        merger.append(item)

merger.write('merged.pdf')
merger.close()
[ "Every time you run in your for loop:\noutlines = fileReader.getOutlines()\nyou are overwriting the contents of outlines not appending to it. So it is not surprising you end up with only the last.\nWhat is the type of outlines? Is it a list or similar? Consult the PyPDF2 documentation to find out, and work out how to build the appended outlines. Then adding the top level entries is your second task.\n", "for anyone wondering this is the solution however all bookmarks from the merged pdfs start on page 1 (or 0 in code speak) anyone with a solution to this im all ears :)\nrecord = []\nfor file in onlyfiles:\n fileReader = PdfFileReader(open(file,'rb'))\n outlines = fileReader.getOutlines()\n record.append(outlines)\n\n\nmerger = PdfFileMerger(strict=False) \n\nfor item in os.listdir(source_dir):\n if item.endswith('pdf'): \n merger.bookmarks=record\n merger.append(item)\n \nmerger.write('merged.pdf')\nmerger.close() \n\n" ]
[ 0, 0 ]
[]
[]
[ "bookmarks", "pypdf2", "python" ]
stackoverflow_0074571369_bookmarks_pypdf2_python.txt