Dataset schema:
content: string, length 85 to 101k
title: string, length 0 to 150
question: string, length 15 to 48k
answers: sequence
answers_scores: sequence
non_answers: sequence
non_answers_scores: sequence
tags: sequence
name: string, length 35 to 137
Q: Azure cognitive services text to speech REST API I am calling the Azure Text-to-Speech REST API to get an audio response from my Flask API. When I call the Azure REST API from Postman I get an audio file as output and can play back the text I submitted, but when I call the API from my Flask app I get an empty video file instead of an audio file. def call_azure_cognitive_api(text): token = get_token() cognitive_service_url = 'https://eastus.tts.speech.microsoft.com/cognitiveservices/v1' headers = { 'Authorization': 'Bearer %s' %token, 'X-Microsoft-OutputFormat': 'audio-16khz-32kbitrate-mono-mp3', 'Content-Type':'application/ssml+xml' } data = """<speak version='1.0' xml:lang='en-US'><voice xml:lang='en-US' xml:gender='Male' name='en-US-ChristopherNeural'> Microsoft Speech Service Text-to-Speech API </voice></speak>""" response = requests.post(cognitive_service_url,data=data,headers=headers) print(response) return send_file(response, mimetype="audio/mpeg", download_name="ajinkya.mp3") I am getting the error below: <Response [200]> Debugging middleware caught exception in streamed response at a point where response headers were already sent. Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/werkzeug/wsgi.py", line 576, in __next__ data = self.file.read(self.buffer_size) AttributeError: 'Response' object has no attribute 'read' Below is a screenshot of the Postman request sent directly to the Azure API and its response in Postman Postman image when I call my API to call the Azure API I am not sure what I am missing A: I think you should use response.content instead of the Response object itself to get the response body as bytes. Meanwhile, I recommend using the Azure Speech SDK for speech synthesis, which has an easy-to-use interface. Refer to the Python Quickstart for TTS for more details.
Azure cognitive services text to speech REST API
I am calling the Azure Text-to-Speech REST API to get an audio response from my Flask API. When I call the Azure REST API from Postman I get an audio file as output and can play back the text I submitted, but when I call the API from my Flask app I get an empty video file instead of an audio file. def call_azure_cognitive_api(text): token = get_token() cognitive_service_url = 'https://eastus.tts.speech.microsoft.com/cognitiveservices/v1' headers = { 'Authorization': 'Bearer %s' %token, 'X-Microsoft-OutputFormat': 'audio-16khz-32kbitrate-mono-mp3', 'Content-Type':'application/ssml+xml' } data = """<speak version='1.0' xml:lang='en-US'><voice xml:lang='en-US' xml:gender='Male' name='en-US-ChristopherNeural'> Microsoft Speech Service Text-to-Speech API </voice></speak>""" response = requests.post(cognitive_service_url,data=data,headers=headers) print(response) return send_file(response, mimetype="audio/mpeg", download_name="ajinkya.mp3") I am getting the error below: <Response [200]> Debugging middleware caught exception in streamed response at a point where response headers were already sent. Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/werkzeug/wsgi.py", line 576, in __next__ data = self.file.read(self.buffer_size) AttributeError: 'Response' object has no attribute 'read' Below is a screenshot of the Postman request sent directly to the Azure API and its response in Postman Postman image when I call my API to call the Azure API I am not sure what I am missing
[ "I think you should use response.content instead of the Response object itself to get the response body as bytes.\nMeanwhile, I recommened you to use the Azure Speech SDK for speech synthesis, which has an easy-to-use interface. Refer to the Python Quickstart for TTS for more details.\n" ]
[ 0 ]
[]
[]
[ "azure", "azure_cognitive_services", "flask", "python" ]
stackoverflow_0074609190_azure_azure_cognitive_services_flask_python.txt
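To make that answer concrete, here is a minimal sketch of the fixed Flask return, assuming the same get_token() helper and SSML payload as in the question; the only real change is wrapping response.content in a file-like BytesIO object before handing it to send_file.

import io
import requests
from flask import send_file

def call_azure_cognitive_api(text):
    token = get_token()  # helper from the question, assumed available
    cognitive_service_url = 'https://eastus.tts.speech.microsoft.com/cognitiveservices/v1'
    headers = {
        'Authorization': 'Bearer %s' % token,
        'X-Microsoft-OutputFormat': 'audio-16khz-32kbitrate-mono-mp3',
        'Content-Type': 'application/ssml+xml',
    }
    data = """<speak version='1.0' xml:lang='en-US'><voice xml:lang='en-US' xml:gender='Male' name='en-US-ChristopherNeural'>
    Microsoft Speech Service Text-to-Speech API
    </voice></speak>"""
    response = requests.post(cognitive_service_url, data=data, headers=headers)
    response.raise_for_status()
    # send_file expects a path or a file-like object, not a requests.Response;
    # wrapping the body bytes avoids the "'Response' object has no attribute 'read'" error.
    return send_file(io.BytesIO(response.content),
                     mimetype="audio/mpeg",
                     download_name="ajinkya.mp3")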
Q: How to draw a patterned curve with Python Let's say I have a set of coordinates that when plotted looks like this: I can turn the dots into a smooth-ish line by simply drawing lines from adjacent pairs of points: That one's easy. However, I need to draw a line with a pattern because it represents a railroad track, so it should look like this: (This is simulated using Paint.Net, hence the non-uniform spacing. I would like the spacing between pairs of black pips to be uniform, of course.) That is where I'm stumped. How do I paint such a patterned line? I currently only know how to use pillow, but if need be I will learn how to use other packages. Edited to Add: Do note that pillow is UNABLE to draw a patterned line natively. A: I got it! Okay, first a bit of maths theory. There are several ways of depicting a line in geometry. The first is the "slope-intercept" form: y = mx + c Then there's the "point-slope" form: y = y1 + m * (x - x1) And finally there's the "generalized form": Ax + By + C = 0 None of these forms are practical for several reasons: For the first 2 forms, there's the edge case of vertical lines, which means y increases even though x stays the same. Near-verticals mean I have to advance x really slowly or else y increases too fast. Even with the "generalized form", to make the segment lengths uniform, I have to handle the iterations differently for horizontally-oriented lines (iterate on x) versus vertically-oriented lines (iterate on y). However, just this morning I got reminded that there's yet another form, the "parametric form": R = P + tD Where D is the "displacement vector", P is the "starting point", and R is the "resultant vector". t is a parameter that can be defined any which way you want, depending on D's dimension. By adjusting D and/or t's steps, I can get as precise as I want, and I don't have to concern myself with special cases! With this concept, I can imagine someone walking down the line segment with a marker, and whenever they have traversed a certain distance, they replace the marker with another one, and continue. 
Based on this principle, here's the (quick-n-dirty) program: import math from itertools import pairwise, cycle from math import sqrt, isclose from typing import NamedTuple from PIL import Image, ImageDraw class Point(NamedTuple): x: float y: float def rounded(self) -> tuple[int, int]: return round(self.x), round(self.y) # Example data points points: list[Point] = [ Point(108.0, 272.0), Point(150.0, 227.0), Point(171.0, 218.0), Point(187.0, 221.0), Point(192.0, 234.0), Point(205, 315), Point(216, 402), Point(275, 565), Point(289, 586), Point(312, 603), Point(343, 609), Point(387, 601), Point(420, 577), Point(484, 513), Point(505, 500), Point(526, 500), Point(551, 509), Point(575, 550), Point(575, 594), Point(546, 656), Point(496, 686), Point(409, 712), Point(329, 715), Point(287, 701), ] class ParametricLine: def __init__(self, p1: Point, p2: Point): self.p1 = p1 self.x1, self.y1 = p1 self.p2 = p2 self.x2, self.y2 = p2 self._len = -1.0 @property def length(self): if self._len < 0.0: dx, dy = self.displacement self._len = sqrt(dx ** 2 + dy ** 2) return self._len @property def displacement(self): return (self.x2 - self.x1), (self.y2 - self.y1) def replace_start(self, p: Point): self.p1 = p self.x1, self.y1 = p self._len = -1.0 def get_point(self, t: float) -> Point: dx, dy = self.displacement xr = self.x1 + (t / self.length) * dx xy = self.y1 + (t / self.length) * dy return Point(xr, xy) image = Image.new("RGBA", (1000, 1000)) idraw = ImageDraw.Draw(image) def draw(segments: list[tuple[Point, Point]], phase: str): drawpoints = [] prev_p2 = segments[0][0] p2 = None for p1, p2 in segments: assert isclose(p1.x, prev_p2.x) assert isclose(p1.y, prev_p2.y) drawpoints.append(p1.rounded()) prev_p2 = p2 drawpoints.append(p2.rounded()) if phase == "dash" or phase == "gapp": idraw.line(drawpoints, fill=(255, 255, 0), width=10, joint="curve") elif phase == "pip1" or phase == "pip2": idraw.line(drawpoints, fill=(0, 0, 0), width=10, joint="curve") def main(): limits: dict[str, float] = { "dash": 40.0, "pip1": 8.0, "gapp": 8.0, "pip2": 8.0, } pointpairs = pairwise(points) climit = cycle(limits.items()) phase, tleft = next(climit) segments: list[tuple[Point, Point]] = [] pline: ParametricLine | None = None p1 = p2 = Point(math.nan, math.nan) while True: if pline is None: try: p1, p2 = next(pointpairs) except StopIteration: break pline = ParametricLine(p1, p2) if pline.length > tleft: # The line segment is longer than our leftover budget. # Find where we should truncate the line and draw the # segments until the truncation point. p3 = pline.get_point(tleft) segments.append((p1, p3)) draw(segments, phase) segments.clear() pline.replace_start(p3) p1 = p3 phase, tleft = next(climit) else: # The segment is shorter than our leftover budget. # Record that and reduce the budget. segments.append((p1, p2)) tleft -= pline.length pline = None if abs(tleft) < 0.01: # The leftover is too small, let's just assume that # this is insignificant and go to the next phase. draw(segments, phase) segments.clear() phase, tleft = next(climit) if segments: draw(segments, phase) image.save("results.png") if __name__ == '__main__': main() And here's the result: A bit rough, but usable for my purposes. And the beauty of this solution is that by varying what happens in draw() (and the contents of limits), my solution can also handle dashed lines quite easily; just make the limits toggle back and forth between, say, "dash" and "blank", and in draw() only actually draw a line when phase == "dash". 
Note: I am 100% certain that the algorithm can be optimized / tidied up further. As of now I'm happy that it works at all. I'll probably skedaddle over to CodeReview SE for suggestions on optimization. Edit: The final version of the code is live and open for review on CodeReview SE. If you arrived here via a search engine because you're looking for a way to draw a patterned line, please use the version on CodeReview SE instead.
How to draw a patterned curve with Python
Let's say I have a set of coordinates that when plotted looks like this: I can turn the dots into a smooth-ish line by simply drawing lines from adjacent pairs of points: That one's easy. However, I need to draw a line with a pattern because it represents a railroad track, so it should look like this: (This is simulated using Paint.Net, hence the non-uniform spacing. I would like the spacing between pairs of black pips to be uniform, of course.) That is where I'm stumped. How do I paint such a patterned line? I currently only know how to use pillow, but if need be I will learn how to use other packages. Edited to Add: Do note that pillow is UNABLE to draw a patterned line natively.
[ "I got it!\nOkay first a bit of maths theory. There are several ways of depicting a line in geometry.\nThe first is the \"slope-intercept\" form: y = mx + c\nThen there's the \"point-slope\" form: y = y1 + m * (x - x1)\nAnd finally there's the \"generalized form\":\n\nNone of these forms are practical for several reasons:\n\nFor the first 2 forms, there's the edge case of vertical lines which means y increases even though x stays the same.\nNear-verticals mean I have to advance x reaaally slowly or else y increases too fast\nEven with the \"generalized form\", to make the segment lengths uniform, I have to handle the iterations differently for horizontally-oriented lines (iterate on x) with vertically-oriented lines (iterate on y)\n\nHowever, just this morning I got reminded that there's yet another form, the \"parametric form\":\n R = P + tD\n\nWhere D is the \"displacement vector\", P is the \"starting point\", and R is the \"resultant vector\". t is a parameter that can be defined any which way you want, depending on D's dimension.\nBy adjusting D and/or t's steps, I can get as precise as I want, and I don't have to concern myself with special cases!\nWith this concept, I can imagine someone walking down the line segment with a marker, and whenever they have traversed a certain distance, replace the marker with another one, and continue.\nBased on this principle, here's the (quick-n-dirty) program:\nimport math\nfrom itertools import pairwise, cycle\nfrom math import sqrt, isclose\nfrom typing import NamedTuple\nfrom PIL import Image, ImageDraw\n\n\nclass Point(NamedTuple):\n x: float\n y: float\n\n def rounded(self) -> tuple[int, int]:\n return round(self.x), round(self.y)\n\n\n# Example data points\npoints: list[Point] = [\n Point(108.0, 272.0),\n Point(150.0, 227.0),\n Point(171.0, 218.0),\n Point(187.0, 221.0),\n Point(192.0, 234.0),\n Point(205, 315),\n Point(216, 402),\n Point(275, 565),\n Point(289, 586),\n Point(312, 603),\n Point(343, 609),\n Point(387, 601),\n Point(420, 577),\n Point(484, 513),\n Point(505, 500),\n Point(526, 500),\n Point(551, 509),\n Point(575, 550),\n Point(575, 594),\n Point(546, 656),\n Point(496, 686),\n Point(409, 712),\n Point(329, 715),\n Point(287, 701),\n]\n\n\nclass ParametricLine:\n def __init__(self, p1: Point, p2: Point):\n self.p1 = p1\n self.x1, self.y1 = p1\n self.p2 = p2\n self.x2, self.y2 = p2\n self._len = -1.0\n\n @property\n def length(self):\n if self._len < 0.0:\n dx, dy = self.displacement\n self._len = sqrt(dx ** 2 + dy ** 2)\n return self._len\n\n @property\n def displacement(self):\n return (self.x2 - self.x1), (self.y2 - self.y1)\n\n def replace_start(self, p: Point):\n self.p1 = p\n self.x1, self.y1 = p\n self._len = -1.0\n\n def get_point(self, t: float) -> Point:\n dx, dy = self.displacement\n xr = self.x1 + (t / self.length) * dx\n xy = self.y1 + (t / self.length) * dy\n return Point(xr, xy)\n\n\nimage = Image.new(\"RGBA\", (1000, 1000))\nidraw = ImageDraw.Draw(image)\n\n\ndef draw(segments: list[tuple[Point, Point]], phase: str):\n drawpoints = []\n prev_p2 = segments[0][0]\n p2 = None\n for p1, p2 in segments:\n assert isclose(p1.x, prev_p2.x)\n assert isclose(p1.y, prev_p2.y)\n drawpoints.append(p1.rounded())\n prev_p2 = p2\n drawpoints.append(p2.rounded())\n if phase == \"dash\" or phase == \"gapp\":\n idraw.line(drawpoints, fill=(255, 255, 0), width=10, joint=\"curve\")\n elif phase == \"pip1\" or phase == \"pip2\":\n idraw.line(drawpoints, fill=(0, 0, 0), width=10, joint=\"curve\")\n\n\ndef main():\n limits: dict[str, 
float] = {\n \"dash\": 40.0,\n \"pip1\": 8.0,\n \"gapp\": 8.0,\n \"pip2\": 8.0,\n }\n\n pointpairs = pairwise(points)\n climit = cycle(limits.items())\n\n phase, tleft = next(climit)\n segments: list[tuple[Point, Point]] = []\n\n pline: ParametricLine | None = None\n p1 = p2 = Point(math.nan, math.nan)\n while True:\n if pline is None:\n try:\n p1, p2 = next(pointpairs)\n except StopIteration:\n break\n pline = ParametricLine(p1, p2)\n if pline.length > tleft:\n # The line segment is longer than our leftover budget.\n # Find where we should truncate the line and draw the\n # segments until the truncation point.\n p3 = pline.get_point(tleft)\n segments.append((p1, p3))\n draw(segments, phase)\n segments.clear()\n pline.replace_start(p3)\n p1 = p3\n phase, tleft = next(climit)\n else:\n # The segment is shorter than our leftover budget.\n # Record that and reduce the budget.\n segments.append((p1, p2))\n tleft -= pline.length\n pline = None\n if abs(tleft) < 0.01:\n # The leftover is too small, let's just assume that\n # this is insignificant and go to the next phase.\n draw(segments, phase)\n segments.clear()\n phase, tleft = next(climit)\n if segments:\n draw(segments, phase)\n\n image.save(\"results.png\")\n\n\nif __name__ == '__main__':\n main()\n\nAnd here's the result:\n\nA bit rough, but usable for my purposes.\nAnd the beauty of this solution is that by varying what happens in draw() (and the contents of limits), my solution can also handle dashed lines quite easily; just make the limits toggle back and forth between, say, \"dash\" and \"blank\", and in draw() only actually draw a line when phase == \"dash\".\nNote: I am 100% certain that the algorithm can be optimized / tidied up further. As of now I'm happy that it works at all. I'll probably skedaddle over to CodeReview SE for suggestions on optimization.\nEdit: The final version of the code is live and open for review on CodeReview SE. If you arrived here via a search engine because you're looking for a way to draw a patterned line, please use the version on CodeReview SE instead.\n" ]
[ 0 ]
[]
[]
[ "curve", "python" ]
stackoverflow_0074588881_curve_python.txt
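As an illustration of the dashed-line variant mentioned at the end of that answer, here is a minimal sketch of how the limits table and draw() could look; the phase names and lengths are made up for the example, and it assumes the same Point, idraw and walking loop as the answer's program.

# Hypothetical dashed-line variant: alternate a painted "dash" phase
# with an unpainted "blank" phase, and only paint during "dash".
limits = {
    "dash": 20.0,   # painted length (illustrative value)
    "blank": 10.0,  # gap length (illustrative value)
}

def draw(segments, phase):
    if phase != "dash":
        return  # skip painting entirely during the gap phase
    drawpoints = [p1.rounded() for p1, _ in segments] + [segments[-1][1].rounded()]
    idraw.line(drawpoints, fill=(255, 255, 0), width=10, joint="curve")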
Q: Elasticsearch query to match values from a list of values in Excel [Python] I'm new to Elasticsearch. I have a list of values, for example: id_list=[1111,2222,3333,4444,5555] Now I want to match the ids in id_list against information stored in Elasticsearch that has the same id. I'm thinking of using a for loop to go over all the ids and match them with the ES query, but I'm not sure exactly how to do that. I know that a for loop can run through all the values in the list for id in id_list: print(id) I am able to search the ids one by one using the ES query below: query={"bool": {"must": [{"match":{"id_list":"1111"}}] }} Is there any way to include a loop so that I don't have to key in the ids manually like above? Thanks! A: You can use the terms query from Elasticsearch to query a list of ids: { "query": { "terms": { "id_list": [1111,2222,3333,4444,5555] } } } Updated based on comments: As mentioned in the documentation, the maximum is 65,536 terms. By default, Elasticsearch limits the terms query to a maximum of 65,536 terms. This includes terms fetched using terms lookup. You can change this limit using the index.max_terms_count setting.
Elasticsearch query to match values from a list of values in Excel [Python]
I'm new to Elasticsearch. I have a list of values, for example: id_list=[1111,2222,3333,4444,5555] Now I want to match the ids in id_list against information stored in Elasticsearch that has the same id. I'm thinking of using a for loop to go over all the ids and match them with the ES query, but I'm not sure exactly how to do that. I know that a for loop can run through all the values in the list for id in id_list: print(id) I am able to search the ids one by one using the ES query below: query={"bool": {"must": [{"match":{"id_list":"1111"}}] }} Is there any way to include a loop so that I don't have to key in the ids manually like above? Thanks!
[ "You can use terms query from elasticsearch to query list of ids:\n{\n \"query\": {\n \"terms\": {\n \"id_list\": [1111,2222,3333,4444,5555]\n }\n }\n}\n\nUpdated Based on comments:\nAs mentioned in documentation maximum of 65,536 terms.\n\nBy default, Elasticsearch limits the terms query to a maximum of\n65,536 terms. This includes terms fetched using terms lookup. You can\nchange this limit using the index.max_terms_count setting.\n\n" ]
[ 0 ]
[]
[]
[ "boolean_logic", "elasticsearch", "elasticsearch_dsl", "python" ]
stackoverflow_0074623369_boolean_logic_elasticsearch_elasticsearch_dsl_python.txt
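Since the asker is working from Python, here is a minimal sketch of sending that terms query with the requests library; the cluster URL and index name are placeholders, not values given in the question.

import requests

id_list = [1111, 2222, 3333, 4444, 5555]

# Placeholder cluster URL and index name; adjust to your setup.
url = "http://localhost:9200/my_index/_search"
payload = {"query": {"terms": {"id_list": id_list}}}

resp = requests.post(url, json=payload)  # json= sets Content-Type: application/json
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    print(hit["_id"], hit["_source"])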
Q: Visual Studio Code doesn't show parameter detectMultiScale I'm a newbie here. I don't know why my Visual Studio Code doesn't show the parameter detectMultiScale. How should I fix it? I attach an image here for detail. image I tried reinstalling several times but it still does not show the parameter. A: Add these two lines in settings.json to enable type hints: "python.analysis.inlayHints.variableTypes": true, "python.analysis.inlayHints.functionReturnTypes": true,
Visual Studio Code doesn't show parameter detectMultiScale
I'm a newbie here. I don't know why my Visual Studio Code doesn't show the parameter detectMultiScale. How should I fix it? I attach an image here for detail. image I tried reinstalling several times but it still does not show the parameter.
[ "Add these two lines in the settings.json to enable type hints\n \"python.analysis.inlayHints.variableTypes\": true,\n \"python.analysis.inlayHints.functionReturnTypes\": true,\n\n\n" ]
[ 0 ]
[]
[]
[ "python", "visual_studio_code" ]
stackoverflow_0074617111_python_visual_studio_code.txt
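For clarity, here is a minimal sketch of where those two entries sit inside a VS Code settings.json file (user or workspace settings); the surrounding braces are the only addition to what the answer gives.

{
    "python.analysis.inlayHints.variableTypes": true,
    "python.analysis.inlayHints.functionReturnTypes": true
}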
Q: How can I find an iframe in Selenium? Above is the webpage HTML code. In Python Selenium, how can I find the iframe? Below is my code. frame = driver.find_element(By.NAME, "frm") frame = driver.find_elements(By.TAG_NAME, "iframe") frame = driver.find_elements(By.TAG_NAME, "body_iframe") frame = driver.find_element(By.CSS_SELECTOR, '.Center_top iframe') When I check len(frame), its result is 0, or I get an error... help me. A: Try the below locator: driver.find_element(By.XPATH, ".//iframe[@id='body_iframe' and @name='body_iframe']")
How can I find an iframe in Selenium?
Above is the webpage HTML code. In Python Selenium, how can I find the iframe? Below is my code. frame = driver.find_element(By.NAME, "frm") frame = driver.find_elements(By.TAG_NAME, "iframe") frame = driver.find_elements(By.TAG_NAME, "body_iframe") frame = driver.find_element(By.CSS_SELECTOR, '.Center_top iframe') When I check len(frame), its result is 0, or I get an error... help me.
[ "Try the below locator:\ndriver.find_element(By.XPATH, \".//iframe[@id='body_iframe' and @name='body_iframe']\")\n\n" ]
[ 0 ]
[]
[]
[ "html", "iframe", "python", "selenium" ]
stackoverflow_0074623330_html_iframe_python_selenium.txt
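Building on that locator, note that finding the iframe element is usually only half the job: to interact with anything inside it the driver has to switch its context into the frame. A minimal sketch, assuming the body_iframe id from the answer:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait until the frame is present, then switch the driver into it.
WebDriverWait(driver, 10).until(
    EC.frame_to_be_available_and_switch_to_it((By.ID, "body_iframe"))
)

# ... locate elements that live inside the iframe here ...

# Switch back to the main document when done.
driver.switch_to.default_content()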
Q: Sliding time window with python deque I have a deque. Each element of the deque consists of a time and an event field. So, this is similar to a list of dicts. The data is always sorted by time from oldest to newest. The first element of the deque is the oldest. Please note that the deque is infinite and new element(s) are added at unknown times. This means that a new element can be added after 1 minute or after 1 hour. Who knows... data = [ { "time": "07:14:40", "event": 24 }, { "time": "07:15:40", "event": 394 }, { "time": "07:16:40", "event": 384 }, { "time": "07:17:40", "event": 394 }, { "time": "07:18:40", "event": 384 }, { "time": "07:19:40", "event": 2 }, { "time": "07:20:40", "event": 24 }, { "time": "07:21:40", "event": 72 }, { "time": "07:22:40", "event": 24 }, { "time": "07:23:40", "event": 72 }, { "time": "07:24:40", "event": 99 } ] I'm also given a window size. Let it be 5 minutes. I want to iterate over this deque with the given window size and calculate an expanding moving sum. Let me elaborate on what this means. During every iteration over this deque, I have to check the current AND older elements to see if they are inside the 5 minute window and sum them up. If older element(s) are outside of the 5 minute window then pop them from the deque. In other words, during the first iteration the start date will be 07:09:40 (going 5 minutes back) and the end date will be 07:14:40, and the sum will be 24. During the second iteration, as this element is not inside the date range, I have to redefine my date range in the following way: the start date will be 07:10:40 and the end date will be 07:15:40 Now, I have to look back and check all previous elements. The date of the first element is 07:14:40, which is inside my new date range, and I will do a new summation (24 + 394) During the third iteration, the time field is outside my previous date range, so I have to redefine my date range in the same manner as during the previous iteration and do all the summation similarly. When I reach the following element (7th iteration) "time": "07:20:40", "event": 24 My date range will be: start date: 07:15:40 end date: 07:20:40 Then I have to look back and grab all the elements whose time field is inside this date range. Note that the first element is outside the date range and I have to pop this first element from the deque. - This is my question. How can I do this? This is the code fragment I wrote but it does not work. from collections import deque, defaultdict window_size = 300 test = deque(sort_data(list(read_json("final_real_test.json").values())[0])) result = defaultdict(list) final_input = deque() end_date = test[0]["time"] start_date = end_date - datetime.timedelta(seconds=window_size) while test: record = test.popleft() if start_date <= record["time"] <= end_date: # Calculate the sum final_input.append(record) else: end_date = record["time"] start_date = end_date - datetime.timedelta(seconds=window_size) print("Returning back to the queue...") test.appendleft(record) print("Done") A: You did not explain how your deque was updated, and how it should affect the window processing. 
But here is a proof-of-concept of the algorithm : from datetime import datetime from typing import Generator, List, Dict, Union Element = Dict[str, Union[str, int]] Series = List[Element] def sliding_window(series: Series, window_duration: int) -> Generator[Series, None, None]: time_format = "%H:%M:%S" if len(series) > 0: for i_ending_item, ending_item in enumerate(series): end_window_time = datetime.strptime(ending_item["time"], time_format) print(f"window ends at item n°{i_ending_item} ({end_window_time!r})") window = [ending_item] for window_candidate_item in reversed(series[0:max(i_ending_item, 0)]): candidate_time = datetime.strptime(window_candidate_item["time"], time_format) assert end_window_time > candidate_time candidate_delta = end_window_time - candidate_time print(f" {candidate_time=!r} {candidate_delta=!r} {candidate_delta.seconds=!r}") if candidate_delta.seconds < window_duration: # non inclusive print(" added to the window") window.insert(0, window_candidate_item) else: print(" stop there") break else: print(" reached the beginning of the series") yield window DATA: Series = [ {"time": "07:14:40", "event": 24}, {"time": "07:15:40", "event": 394}, {"time": "07:16:40", "event": 384}, {"time": "07:17:40", "event": 394}, {"time": "07:18:40", "event": 384}, {"time": "07:19:40", "event": 2}, {"time": "07:20:40", "event": 24}, {"time": "07:21:40", "event": 72}, {"time": "07:22:40", "event": 24}, {"time": "07:23:40", "event": 72}, {"time": "07:24:40", "event": 99} ] WINDOW_SIZE = 5*60 for window in sliding_window(DATA, WINDOW_SIZE): print(window, "sum=", sum(item["event"] for item in window)) which produces window ends at item n°0 (datetime.datetime(1900, 1, 1, 7, 14, 40)) reached the beginning of the series [{'time': '07:14:40', 'event': 24}] sum= 24 window ends at item n°1 (datetime.datetime(1900, 1, 1, 7, 15, 40)) candidate_time=datetime.datetime(1900, 1, 1, 7, 14, 40) candidate_delta=datetime.timedelta(seconds=60) candidate_delta.seconds=60 added to the window reached the beginning of the series [{'time': '07:14:40', 'event': 24}, {'time': '07:15:40', 'event': 394}] sum= 418 window ends at item n°2 (datetime.datetime(1900, 1, 1, 7, 16, 40)) candidate_time=datetime.datetime(1900, 1, 1, 7, 15, 40) candidate_delta=datetime.timedelta(seconds=60) candidate_delta.seconds=60 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 14, 40) candidate_delta=datetime.timedelta(seconds=120) candidate_delta.seconds=120 added to the window reached the beginning of the series [{'time': '07:14:40', 'event': 24}, {'time': '07:15:40', 'event': 394}, {'time': '07:16:40', 'event': 384}] sum= 802 window ends at item n°3 (datetime.datetime(1900, 1, 1, 7, 17, 40)) candidate_time=datetime.datetime(1900, 1, 1, 7, 16, 40) candidate_delta=datetime.timedelta(seconds=60) candidate_delta.seconds=60 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 15, 40) candidate_delta=datetime.timedelta(seconds=120) candidate_delta.seconds=120 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 14, 40) candidate_delta=datetime.timedelta(seconds=180) candidate_delta.seconds=180 added to the window reached the beginning of the series [{'time': '07:14:40', 'event': 24}, {'time': '07:15:40', 'event': 394}, {'time': '07:16:40', 'event': 384}, {'time': '07:17:40', 'event': 394}] sum= 1196 window ends at item n°4 (datetime.datetime(1900, 1, 1, 7, 18, 40)) candidate_time=datetime.datetime(1900, 1, 1, 7, 17, 40) candidate_delta=datetime.timedelta(seconds=60) candidate_delta.seconds=60 
added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 16, 40) candidate_delta=datetime.timedelta(seconds=120) candidate_delta.seconds=120 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 15, 40) candidate_delta=datetime.timedelta(seconds=180) candidate_delta.seconds=180 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 14, 40) candidate_delta=datetime.timedelta(seconds=240) candidate_delta.seconds=240 added to the window reached the beginning of the series [{'time': '07:14:40', 'event': 24}, {'time': '07:15:40', 'event': 394}, {'time': '07:16:40', 'event': 384}, {'time': '07:17:40', 'event': 394}, {'time': '07:18:40', 'event': 384}] sum= 1580 window ends at item n°5 (datetime.datetime(1900, 1, 1, 7, 19, 40)) candidate_time=datetime.datetime(1900, 1, 1, 7, 18, 40) candidate_delta=datetime.timedelta(seconds=60) candidate_delta.seconds=60 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 17, 40) candidate_delta=datetime.timedelta(seconds=120) candidate_delta.seconds=120 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 16, 40) candidate_delta=datetime.timedelta(seconds=180) candidate_delta.seconds=180 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 15, 40) candidate_delta=datetime.timedelta(seconds=240) candidate_delta.seconds=240 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 14, 40) candidate_delta=datetime.timedelta(seconds=300) candidate_delta.seconds=300 stop there [{'time': '07:15:40', 'event': 394}, {'time': '07:16:40', 'event': 384}, {'time': '07:17:40', 'event': 394}, {'time': '07:18:40', 'event': 384}, {'time': '07:19:40', 'event': 2}] sum= 1558 window ends at item n°6 (datetime.datetime(1900, 1, 1, 7, 20, 40)) candidate_time=datetime.datetime(1900, 1, 1, 7, 19, 40) candidate_delta=datetime.timedelta(seconds=60) candidate_delta.seconds=60 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 18, 40) candidate_delta=datetime.timedelta(seconds=120) candidate_delta.seconds=120 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 17, 40) candidate_delta=datetime.timedelta(seconds=180) candidate_delta.seconds=180 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 16, 40) candidate_delta=datetime.timedelta(seconds=240) candidate_delta.seconds=240 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 15, 40) candidate_delta=datetime.timedelta(seconds=300) candidate_delta.seconds=300 stop there [{'time': '07:16:40', 'event': 384}, {'time': '07:17:40', 'event': 394}, {'time': '07:18:40', 'event': 384}, {'time': '07:19:40', 'event': 2}, {'time': '07:20:40', 'event': 24}] sum= 1188 window ends at item n°7 (datetime.datetime(1900, 1, 1, 7, 21, 40)) candidate_time=datetime.datetime(1900, 1, 1, 7, 20, 40) candidate_delta=datetime.timedelta(seconds=60) candidate_delta.seconds=60 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 19, 40) candidate_delta=datetime.timedelta(seconds=120) candidate_delta.seconds=120 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 18, 40) candidate_delta=datetime.timedelta(seconds=180) candidate_delta.seconds=180 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 17, 40) candidate_delta=datetime.timedelta(seconds=240) candidate_delta.seconds=240 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 16, 40) candidate_delta=datetime.timedelta(seconds=300) candidate_delta.seconds=300 stop there [{'time': '07:17:40', 
'event': 394}, {'time': '07:18:40', 'event': 384}, {'time': '07:19:40', 'event': 2}, {'time': '07:20:40', 'event': 24}, {'time': '07:21:40', 'event': 72}] sum= 876 window ends at item n°8 (datetime.datetime(1900, 1, 1, 7, 22, 40)) candidate_time=datetime.datetime(1900, 1, 1, 7, 21, 40) candidate_delta=datetime.timedelta(seconds=60) candidate_delta.seconds=60 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 20, 40) candidate_delta=datetime.timedelta(seconds=120) candidate_delta.seconds=120 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 19, 40) candidate_delta=datetime.timedelta(seconds=180) candidate_delta.seconds=180 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 18, 40) candidate_delta=datetime.timedelta(seconds=240) candidate_delta.seconds=240 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 17, 40) candidate_delta=datetime.timedelta(seconds=300) candidate_delta.seconds=300 stop there [{'time': '07:18:40', 'event': 384}, {'time': '07:19:40', 'event': 2}, {'time': '07:20:40', 'event': 24}, {'time': '07:21:40', 'event': 72}, {'time': '07:22:40', 'event': 24}] sum= 506 window ends at item n°9 (datetime.datetime(1900, 1, 1, 7, 23, 40)) candidate_time=datetime.datetime(1900, 1, 1, 7, 22, 40) candidate_delta=datetime.timedelta(seconds=60) candidate_delta.seconds=60 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 21, 40) candidate_delta=datetime.timedelta(seconds=120) candidate_delta.seconds=120 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 20, 40) candidate_delta=datetime.timedelta(seconds=180) candidate_delta.seconds=180 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 19, 40) candidate_delta=datetime.timedelta(seconds=240) candidate_delta.seconds=240 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 18, 40) candidate_delta=datetime.timedelta(seconds=300) candidate_delta.seconds=300 stop there [{'time': '07:19:40', 'event': 2}, {'time': '07:20:40', 'event': 24}, {'time': '07:21:40', 'event': 72}, {'time': '07:22:40', 'event': 24}, {'time': '07:23:40', 'event': 72}] sum= 194 window ends at item n°10 (datetime.datetime(1900, 1, 1, 7, 24, 40)) candidate_time=datetime.datetime(1900, 1, 1, 7, 23, 40) candidate_delta=datetime.timedelta(seconds=60) candidate_delta.seconds=60 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 22, 40) candidate_delta=datetime.timedelta(seconds=120) candidate_delta.seconds=120 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 21, 40) candidate_delta=datetime.timedelta(seconds=180) candidate_delta.seconds=180 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 20, 40) candidate_delta=datetime.timedelta(seconds=240) candidate_delta.seconds=240 added to the window candidate_time=datetime.datetime(1900, 1, 1, 7, 19, 40) candidate_delta=datetime.timedelta(seconds=300) candidate_delta.seconds=300 stop there [{'time': '07:20:40', 'event': 24}, {'time': '07:21:40', 'event': 72}, {'time': '07:22:40', 'event': 24}, {'time': '07:23:40', 'event': 72}, {'time': '07:24:40', 'event': 99}] sum= 291 To me it seems to answer your question : how to have a sliding window based on the time of the events. I used a list for the data by simplicity. If you want to share a Minimal Reproducible Example it would be simpler to answer your question.
Sliding time window with python deque
I have a deque. Each element of the deque consists of a time and an event field. So, this is similar to a list of dicts. The data is always sorted by time from oldest to newest. The first element of the deque is the oldest. Please note that the deque is infinite and new element(s) are added at unknown times. This means that a new element can be added after 1 minute or after 1 hour. Who knows... data = [ { "time": "07:14:40", "event": 24 }, { "time": "07:15:40", "event": 394 }, { "time": "07:16:40", "event": 384 }, { "time": "07:17:40", "event": 394 }, { "time": "07:18:40", "event": 384 }, { "time": "07:19:40", "event": 2 }, { "time": "07:20:40", "event": 24 }, { "time": "07:21:40", "event": 72 }, { "time": "07:22:40", "event": 24 }, { "time": "07:23:40", "event": 72 }, { "time": "07:24:40", "event": 99 } ] I'm also given a window size. Let it be 5 minutes. I want to iterate over this deque with the given window size and calculate an expanding moving sum. Let me elaborate on what this means. During every iteration over this deque, I have to check the current AND older elements to see if they are inside the 5 minute window and sum them up. If older element(s) are outside of the 5 minute window then pop them from the deque. In other words, during the first iteration the start date will be 07:09:40 (going 5 minutes back) and the end date will be 07:14:40, and the sum will be 24. During the second iteration, as this element is not inside the date range, I have to redefine my date range in the following way: the start date will be 07:10:40 and the end date will be 07:15:40 Now, I have to look back and check all previous elements. The date of the first element is 07:14:40, which is inside my new date range, and I will do a new summation (24 + 394) During the third iteration, the time field is outside my previous date range, so I have to redefine my date range in the same manner as during the previous iteration and do all the summation similarly. When I reach the following element (7th iteration) "time": "07:20:40", "event": 24 My date range will be: start date: 07:15:40 end date: 07:20:40 Then I have to look back and grab all the elements whose time field is inside this date range. Note that the first element is outside the date range and I have to pop this first element from the deque. - This is my question. How can I do this? This is the code fragment I wrote but it does not work. from collections import deque, defaultdict window_size = 300 test = deque(sort_data(list(read_json("final_real_test.json").values())[0])) result = defaultdict(list) final_input = deque() end_date = test[0]["time"] start_date = end_date - datetime.timedelta(seconds=window_size) while test: record = test.popleft() if start_date <= record["time"] <= end_date: # Calculate the sum final_input.append(record) else: end_date = record["time"] start_date = end_date - datetime.timedelta(seconds=window_size) print("Returning back to the queue...") test.appendleft(record) print("Done")
[ "You did not explain how your deque was updated, and how it should affect the window processing.\nBut here is a proof-of-concept of the algorithm :\nfrom datetime import datetime\nfrom typing import Generator, List, Dict, Union\n\nElement = Dict[str, Union[str, int]]\nSeries = List[Element]\n\ndef sliding_window(series: Series, window_duration: int) -> Generator[Series, None, None]:\n time_format = \"%H:%M:%S\"\n if len(series) > 0:\n for i_ending_item, ending_item in enumerate(series):\n end_window_time = datetime.strptime(ending_item[\"time\"], time_format)\n print(f\"window ends at item n°{i_ending_item} ({end_window_time!r})\")\n window = [ending_item]\n for window_candidate_item in reversed(series[0:max(i_ending_item, 0)]):\n candidate_time = datetime.strptime(window_candidate_item[\"time\"], time_format)\n assert end_window_time > candidate_time\n candidate_delta = end_window_time - candidate_time\n print(f\" {candidate_time=!r} {candidate_delta=!r} {candidate_delta.seconds=!r}\")\n if candidate_delta.seconds < window_duration: # non inclusive\n print(\" added to the window\")\n window.insert(0, window_candidate_item)\n else:\n print(\" stop there\")\n break\n else:\n print(\" reached the beginning of the series\")\n yield window\n\nDATA: Series = [\n {\"time\": \"07:14:40\", \"event\": 24},\n {\"time\": \"07:15:40\", \"event\": 394},\n {\"time\": \"07:16:40\", \"event\": 384},\n {\"time\": \"07:17:40\", \"event\": 394},\n {\"time\": \"07:18:40\", \"event\": 384},\n {\"time\": \"07:19:40\", \"event\": 2},\n {\"time\": \"07:20:40\", \"event\": 24},\n {\"time\": \"07:21:40\", \"event\": 72},\n {\"time\": \"07:22:40\", \"event\": 24},\n {\"time\": \"07:23:40\", \"event\": 72},\n {\"time\": \"07:24:40\", \"event\": 99}\n]\nWINDOW_SIZE = 5*60\n\nfor window in sliding_window(DATA, WINDOW_SIZE):\n print(window, \"sum=\", sum(item[\"event\"] for item in window))\n\nwhich produces\nwindow ends at item n°0 (datetime.datetime(1900, 1, 1, 7, 14, 40))\n reached the beginning of the series\n[{'time': '07:14:40', 'event': 24}] sum= 24\nwindow ends at item n°1 (datetime.datetime(1900, 1, 1, 7, 15, 40))\n candidate_time=datetime.datetime(1900, 1, 1, 7, 14, 40) candidate_delta=datetime.timedelta(seconds=60) candidate_delta.seconds=60\n added to the window\n reached the beginning of the series\n[{'time': '07:14:40', 'event': 24}, {'time': '07:15:40', 'event': 394}] sum= 418\nwindow ends at item n°2 (datetime.datetime(1900, 1, 1, 7, 16, 40))\n candidate_time=datetime.datetime(1900, 1, 1, 7, 15, 40) candidate_delta=datetime.timedelta(seconds=60) candidate_delta.seconds=60\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 14, 40) candidate_delta=datetime.timedelta(seconds=120) candidate_delta.seconds=120\n added to the window\n reached the beginning of the series\n[{'time': '07:14:40', 'event': 24}, {'time': '07:15:40', 'event': 394}, {'time': '07:16:40', 'event': 384}] sum= 802\nwindow ends at item n°3 (datetime.datetime(1900, 1, 1, 7, 17, 40))\n candidate_time=datetime.datetime(1900, 1, 1, 7, 16, 40) candidate_delta=datetime.timedelta(seconds=60) candidate_delta.seconds=60\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 15, 40) candidate_delta=datetime.timedelta(seconds=120) candidate_delta.seconds=120\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 14, 40) candidate_delta=datetime.timedelta(seconds=180) candidate_delta.seconds=180\n added to the window\n reached the beginning of the series\n[{'time': '07:14:40', 'event': 24}, 
{'time': '07:15:40', 'event': 394}, {'time': '07:16:40', 'event': 384}, {'time': '07:17:40', 'event': 394}] sum= 1196\nwindow ends at item n°4 (datetime.datetime(1900, 1, 1, 7, 18, 40))\n candidate_time=datetime.datetime(1900, 1, 1, 7, 17, 40) candidate_delta=datetime.timedelta(seconds=60) candidate_delta.seconds=60\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 16, 40) candidate_delta=datetime.timedelta(seconds=120) candidate_delta.seconds=120\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 15, 40) candidate_delta=datetime.timedelta(seconds=180) candidate_delta.seconds=180\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 14, 40) candidate_delta=datetime.timedelta(seconds=240) candidate_delta.seconds=240\n added to the window\n reached the beginning of the series\n[{'time': '07:14:40', 'event': 24}, {'time': '07:15:40', 'event': 394}, {'time': '07:16:40', 'event': 384}, {'time': '07:17:40', 'event': 394}, {'time': '07:18:40', 'event': 384}] sum= 1580\nwindow ends at item n°5 (datetime.datetime(1900, 1, 1, 7, 19, 40))\n candidate_time=datetime.datetime(1900, 1, 1, 7, 18, 40) candidate_delta=datetime.timedelta(seconds=60) candidate_delta.seconds=60\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 17, 40) candidate_delta=datetime.timedelta(seconds=120) candidate_delta.seconds=120\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 16, 40) candidate_delta=datetime.timedelta(seconds=180) candidate_delta.seconds=180\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 15, 40) candidate_delta=datetime.timedelta(seconds=240) candidate_delta.seconds=240\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 14, 40) candidate_delta=datetime.timedelta(seconds=300) candidate_delta.seconds=300\n stop there\n[{'time': '07:15:40', 'event': 394}, {'time': '07:16:40', 'event': 384}, {'time': '07:17:40', 'event': 394}, {'time': '07:18:40', 'event': 384}, {'time': '07:19:40', 'event': 2}] sum= 1558\nwindow ends at item n°6 (datetime.datetime(1900, 1, 1, 7, 20, 40))\n candidate_time=datetime.datetime(1900, 1, 1, 7, 19, 40) candidate_delta=datetime.timedelta(seconds=60) candidate_delta.seconds=60\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 18, 40) candidate_delta=datetime.timedelta(seconds=120) candidate_delta.seconds=120\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 17, 40) candidate_delta=datetime.timedelta(seconds=180) candidate_delta.seconds=180\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 16, 40) candidate_delta=datetime.timedelta(seconds=240) candidate_delta.seconds=240\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 15, 40) candidate_delta=datetime.timedelta(seconds=300) candidate_delta.seconds=300\n stop there\n[{'time': '07:16:40', 'event': 384}, {'time': '07:17:40', 'event': 394}, {'time': '07:18:40', 'event': 384}, {'time': '07:19:40', 'event': 2}, {'time': '07:20:40', 'event': 24}] sum= 1188\nwindow ends at item n°7 (datetime.datetime(1900, 1, 1, 7, 21, 40))\n candidate_time=datetime.datetime(1900, 1, 1, 7, 20, 40) candidate_delta=datetime.timedelta(seconds=60) candidate_delta.seconds=60\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 19, 40) candidate_delta=datetime.timedelta(seconds=120) candidate_delta.seconds=120\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 18, 40) 
candidate_delta=datetime.timedelta(seconds=180) candidate_delta.seconds=180\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 17, 40) candidate_delta=datetime.timedelta(seconds=240) candidate_delta.seconds=240\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 16, 40) candidate_delta=datetime.timedelta(seconds=300) candidate_delta.seconds=300\n stop there\n[{'time': '07:17:40', 'event': 394}, {'time': '07:18:40', 'event': 384}, {'time': '07:19:40', 'event': 2}, {'time': '07:20:40', 'event': 24}, {'time': '07:21:40', 'event': 72}] sum= 876\nwindow ends at item n°8 (datetime.datetime(1900, 1, 1, 7, 22, 40))\n candidate_time=datetime.datetime(1900, 1, 1, 7, 21, 40) candidate_delta=datetime.timedelta(seconds=60) candidate_delta.seconds=60\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 20, 40) candidate_delta=datetime.timedelta(seconds=120) candidate_delta.seconds=120\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 19, 40) candidate_delta=datetime.timedelta(seconds=180) candidate_delta.seconds=180\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 18, 40) candidate_delta=datetime.timedelta(seconds=240) candidate_delta.seconds=240\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 17, 40) candidate_delta=datetime.timedelta(seconds=300) candidate_delta.seconds=300\n stop there\n[{'time': '07:18:40', 'event': 384}, {'time': '07:19:40', 'event': 2}, {'time': '07:20:40', 'event': 24}, {'time': '07:21:40', 'event': 72}, {'time': '07:22:40', 'event': 24}] sum= 506\nwindow ends at item n°9 (datetime.datetime(1900, 1, 1, 7, 23, 40))\n candidate_time=datetime.datetime(1900, 1, 1, 7, 22, 40) candidate_delta=datetime.timedelta(seconds=60) candidate_delta.seconds=60\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 21, 40) candidate_delta=datetime.timedelta(seconds=120) candidate_delta.seconds=120\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 20, 40) candidate_delta=datetime.timedelta(seconds=180) candidate_delta.seconds=180\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 19, 40) candidate_delta=datetime.timedelta(seconds=240) candidate_delta.seconds=240\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 18, 40) candidate_delta=datetime.timedelta(seconds=300) candidate_delta.seconds=300\n stop there\n[{'time': '07:19:40', 'event': 2}, {'time': '07:20:40', 'event': 24}, {'time': '07:21:40', 'event': 72}, {'time': '07:22:40', 'event': 24}, {'time': '07:23:40', 'event': 72}] sum= 194\nwindow ends at item n°10 (datetime.datetime(1900, 1, 1, 7, 24, 40))\n candidate_time=datetime.datetime(1900, 1, 1, 7, 23, 40) candidate_delta=datetime.timedelta(seconds=60) candidate_delta.seconds=60\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 22, 40) candidate_delta=datetime.timedelta(seconds=120) candidate_delta.seconds=120\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 21, 40) candidate_delta=datetime.timedelta(seconds=180) candidate_delta.seconds=180\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 20, 40) candidate_delta=datetime.timedelta(seconds=240) candidate_delta.seconds=240\n added to the window\n candidate_time=datetime.datetime(1900, 1, 1, 7, 19, 40) candidate_delta=datetime.timedelta(seconds=300) candidate_delta.seconds=300\n stop there\n[{'time': '07:20:40', 'event': 24}, {'time': '07:21:40', 'event': 72}, {'time': 
'07:22:40', 'event': 24}, {'time': '07:23:40', 'event': 72}, {'time': '07:24:40', 'event': 99}] sum= 291\n\nTo me it seems to answer your question : how to have a sliding window based on the time of the events.\nI used a list for the data by simplicity. If you want to share a Minimal Reproducible Example it would be simpler to answer your question.\n" ]
[ 1 ]
[]
[]
[ "deque", "python", "python_3.x", "queue" ]
stackoverflow_0074600559_deque_python_python_3.x_queue.txt
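The accepted answer rebuilds each window by scanning backwards through a list; since the question specifically asks how to pop expired elements from a deque, here is an alternative minimal sketch of a running-sum window kept in a deque. It is an illustration of the general technique, not the answer's code, and it assumes the same time format and a window size in seconds.

from collections import deque
from datetime import datetime, timedelta

WINDOW = timedelta(seconds=300)
TIME_FORMAT = "%H:%M:%S"

window = deque()   # elements currently inside the window
running_sum = 0

def push(record):
    """Add a new record and evict everything that falls outside the window."""
    global running_sum
    now = datetime.strptime(record["time"], TIME_FORMAT)
    window.append(record)
    running_sum += record["event"]
    # Pop from the left while the oldest element is outside the window.
    while window and now - datetime.strptime(window[0]["time"], TIME_FORMAT) >= WINDOW:
        expired = window.popleft()
        running_sum -= expired["event"]
    return running_sum

for item in data:  # `data` as defined in the question
    print(item["time"], push(item))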
Q: How to filter rows in Pandas using a partial string in a column I've used the SerpAPI to pull down some data about jobs in a sector I want to return to. There is a lot of junk about training and I'd like to remove those results based on the displayed_link column. position title link displayed_link date snippet snippet_highlighted_words sitelinks about_this_result about_page_link about_page_serpapi_link cached_page_link related_questions rich_snippet related_pages_link thumbnail duration key_moments 0 1 What Does a Data Analyst Do? Your 2022 Career ... https://www.coursera.org/articles/what-does-a-... https://www.coursera.org › Coursera Articles ›... Nov 14, 2022 A data analyst is a person whose job is to gat... [data analyst] {'inline': [{'title': 'Business analyst', 'lin... {'source': {'description': 'Coursera Inc. is a... https://www.google.com/search?q=About+https://... https://serpapi.com/search.json?engine=google_... https://webcache.googleusercontent.com/search?... NaN NaN NaN NaN NaN NaN 1 2 What Does a Data Analyst Do? Exploring the Day... https://www.rasmussen.edu/degrees/technology/b... https://www.rasmussen.edu › degrees › technolo... Sep 19, 2022 Generally speaking, a data analyst will retrie... [data analyst, Data analysts] {'inline': [{'title': 'Where Do Data Analysts ... {'source': {'description': 'Rasmussen Universi... https://www.google.com/search?q=About+https://... https://serpapi.com/search.json?engine=google_... https://webcache.googleusercontent.com/search?... NaN NaN NaN NaN NaN NaN 2 3 Become a Data Analyst Learning Path - LinkedIn https://www.linkedin.com/learning/paths/become... https://www.linkedin.com › learning › become-a... NaN Data analysts examine information using data a... [Data analysts, data analysis] NaN {'source': {'description': 'LinkedIn is an Ame... https://www.google.com/search?q=About+https://... https://serpapi.com/search.json?engine=google_... NaN NaN NaN NaN NaN NaN NaN 3 4 What Does a Data Analyst Do? - SNHU https://www.snhu.edu/about-us/newsroom/stem/wh... https://www.snhu.edu › about-us › newsroom › stem I tried manually creating a list of the sites I want to exclude: promotions = ["coursera" ,"rasmussen" ,"snhu" ,"mastersindatascience" ,"northeastern" ,"mygreatlearning" ,"payscale.com" ,"careerfoundry" ,"microsoft.com" ,"codecademy" ,"edx.org" ,"ahima.org" ,"›certification-exams›chda'"] Tried this: df['displayed_link'].map(lambda x: "T" if x in promotions else "F") And all it does is return F - I'm guessing because it needs an exact string match. df['displayed_link'].map(lambda x: "T" if promotions in x else "F") I tried it the other way around, but that was a syntax error. What is the most efficient way of filtering rows based on whether a column contains any of a list of manually curated strings? A: Use Series.str.contains with the list chained by | for a regex OR: df['test1'] = np.where(df['displayed_link'].str.contains('|'.join(promotions)), 'T', 'F') df['test2'] = (df['displayed_link'].str.contains('|'.join(promotions)) .map({True:'T',False: 'F'})) If necessary, use word boundaries \b...\b: pat = '|'.join(rf"\b{x}\b" for x in promotions) df['test3']= np.where(df['displayed_link'].str.contains(pat), 'T', 'F') df['test4']= df['displayed_link'].str.contains(pat).map({True:'T',False: 'F'}) print (df) displayed_link test1 test2 test3 test4 0 https://www.coursera.org/articles/what-does-a T T T T 1 https://www.rasmussen.edu/degrees/technology/ T T T T 2 https://www.linkedin.com/learning/paths/ F F F F 3 https://www.snhu1.edu/about-us/newsroom/stem/ T T F F
How to filter rows in Pandas using a partial string in a column
I've used the SerpAPI to pull down some data about jobs in a sector I want to return to. There is a lot of junk about training and I'd like to remove those results based on the displayed_link column. position title link displayed_link date snippet snippet_highlighted_words sitelinks about_this_result about_page_link about_page_serpapi_link cached_page_link related_questions rich_snippet related_pages_link thumbnail duration key_moments 0 1 What Does a Data Analyst Do? Your 2022 Career ... https://www.coursera.org/articles/what-does-a-... https://www.coursera.org › Coursera Articles ›... Nov 14, 2022 A data analyst is a person whose job is to gat... [data analyst] {'inline': [{'title': 'Business analyst', 'lin... {'source': {'description': 'Coursera Inc. is a... https://www.google.com/search?q=About+https://... https://serpapi.com/search.json?engine=google_... https://webcache.googleusercontent.com/search?... NaN NaN NaN NaN NaN NaN 1 2 What Does a Data Analyst Do? Exploring the Day... https://www.rasmussen.edu/degrees/technology/b... https://www.rasmussen.edu › degrees › technolo... Sep 19, 2022 Generally speaking, a data analyst will retrie... [data analyst, Data analysts] {'inline': [{'title': 'Where Do Data Analysts ... {'source': {'description': 'Rasmussen Universi... https://www.google.com/search?q=About+https://... https://serpapi.com/search.json?engine=google_... https://webcache.googleusercontent.com/search?... NaN NaN NaN NaN NaN NaN 2 3 Become a Data Analyst Learning Path - LinkedIn https://www.linkedin.com/learning/paths/become... https://www.linkedin.com › learning › become-a... NaN Data analysts examine information using data a... [Data analysts, data analysis] NaN {'source': {'description': 'LinkedIn is an Ame... https://www.google.com/search?q=About+https://... https://serpapi.com/search.json?engine=google_... NaN NaN NaN NaN NaN NaN NaN 3 4 What Does a Data Analyst Do? - SNHU https://www.snhu.edu/about-us/newsroom/stem/wh... https://www.snhu.edu › about-us › newsroom › stem I tried manually creating a list of the sites I want to exclude: promotions = ["coursera" ,"rasmussen" ,"snhu" ,"mastersindatascience" ,"northeastern" ,"mygreatlearning" ,"payscale.com" ,"careerfoundry" ,"microsoft.com" ,"codecademy" ,"edx.org" ,"ahima.org" ,"›certification-exams›chda'"] Tried this: df['displayed_link'].map(lambda x: "T" if x in promotions else "F") And all it does is return F - I'm guessing because it needs an exact string match. df['displayed_link'].map(lambda x: "T" if promotions in x else "F") I tried it the other way around, but that was a syntax error. What is the most efficient way of filtering rows based on whether a column contains any of a list of manually curated strings?
[ "Use Series.str.contains with chain list by | for regex OR:\ndf['test1'] = np.where(df['displayed_link'].str.contains('|'.join(promotions)), 'T', 'F')\ndf['test2'] = (df['displayed_link'].str.contains('|'.join(promotions))\n .map({True:'T',False: 'F'}))\n\nIf necessary, use words boundaries \\b\\b:\npat = '|'.join(rf\"\\b{x}\\b\" for x in promotions))\ndf['test3']= np.where(df['displayed_link'].str.contains(pat), 'T', 'F')\ndf['test4']= df['displayed_link'].str.contains(pat).map({True:'T',False: 'F'})\nprint (df)\n displayed_link test1 test2 test3 test4\n0 https://www.coursera.org/articles/what-does-a T T T T\n1 https://www.rasmussen.edu/degrees/technology/ T T T T\n2 https://www.linkedin.com/learning/paths/ F F F F\n3 https://www.snhu1.edu/about-us/newsroom/stem/ T T F F\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python", "serpapi" ]
stackoverflow_0074623631_pandas_python_serpapi.txt
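The answer builds the T/F flag columns, but the original goal was to drop the matching rows; here is a minimal sketch of that final filtering step, assuming the same df and promotions list (re.escape is added so that entries like "payscale.com" are matched literally).

import re

# One regex alternation over all unwanted sites, escaped for literal matching.
pat = '|'.join(map(re.escape, promotions))

# Keep only rows whose displayed_link does NOT contain any promotion string;
# na=False treats missing links as "no match" instead of propagating NaN.
mask = df['displayed_link'].str.contains(pat, na=False)
df_filtered = df[~mask]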
Q: Replace consecutive delimiters in string with values from list I have a string, for example: s = "I ? am ? a ? string" And I have a list equal in length to the number of ? in the string: l = ['1', '2', '3'] What is a pythonic way to return s with each consecutive ? replaced with the values in l?, e.g.: s_new = 'I 1 am 2 a 3 string' A: 2 Methods: # Method 1 s = "I ? am ? a ? string" l = ['1', '2', '3'] for i in l: s = s.replace('?', i, 1) print(s) # Output: I 1 am 2 a 3 string # Method 2 from functools import reduce s = "I ? am ? a ? string" l = ['1', '2', '3'] s_new = reduce(lambda x, y: x.replace('?', y, 1), l, s) print(s_new) # Output: I 1 am 2 a 3 string A: If the placeholders (not "delimiters") were {} rather than ?, this would be exactly how the built-in .format method handles empty {} (along with a lot more power). So, we can simply replace the placeholders first, and then use that functionality: >>> s = "I ? am ? a ? string" >>> l = ['1', '2', '3'] >>> s.replace('?', '{}').format(*l) 'I 1 am 2 a 3 string' Notice that .format expects each value as a separate argument, so we use * to unpack the list. If the original string contains { or } which must be preserved, we can first escape them by doubling them up: >>> s = "I ? {am} ? a ? string" >>> l = ['1', '2', '3'] >>> s.replace('?', '{}').format(*l) Traceback (most recent call last): File "<stdin>", line 1, in <module> KeyError: 'am' >>> s.replace('{', '{{').replace('}', '}}').replace('?', '{}').format(*l) 'I 1 {am} 2 a 3 string'
Replace consecutive delimiters in string with values from list
I have a string, for example: s = "I ? am ? a ? string" And I have a list equal in length to the number of ? in the string: l = ['1', '2', '3'] What is a pythonic way to return s with each consecutive ? replaced with the values in l?, e.g.: s_new = 'I 1 am 2 a 3 string'
[ "2 Methods:\n# Method 1\ns = \"I ? am ? a ? string\"\nl = ['1', '2', '3']\n\nfor i in l:\n s = s.replace('?', i, 1) \n\nprint(s)\n# Output: I 1 am 2 a 3 string\n\n\n# Method 2\nfrom functools import reduce\ns = \"I ? am ? a ? string\"\nl = ['1', '2', '3']\n\ns_new = reduce(lambda x, y: x.replace('?', y, 1), l, s)\n\nprint(s_new) \n# Output: I 1 am 2 a 3 string\n\n", "If the placeholders (not \"delimiters\") were {} rather than ?, this would be exactly how the built-in .format method handles empty {} (along with a lot more power). So, we can simply replace the placeholders first, and then use that functionality:\n>>> s = \"I ? am ? a ? string\"\n>>> l = ['1', '2', '3']\n>>> s.replace('?', '{}').format(*l)\n'I 1 am 2 a 3 string'\n\nNotice that .format expects each value as a separate argument, so we use * to unpack the list.\nIf the original string contains { or } which must be preserved, we can first escape them by doubling them up:\n>>> s = \"I ? {am} ? a ? string\"\n>>> l = ['1', '2', '3']\n>>> s.replace('?', '{}').format(*l)\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nKeyError: 'am'\n>>> s.replace('{', '{{').replace('}', '}}').replace('?', '{}').format(*l)\n'I 1 {am} 2 a 3 string'\n\n" ]
[ 3, 3 ]
[]
[]
[ "python" ]
stackoverflow_0074623448_python.txt
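One more variant for the placeholder-replacement question above, as a hedged sketch: re.sub with a callable consumes the replacement values strictly left to right, and it surfaces a mismatch (more "?" markers than values) as a StopIteration instead of silently leaving markers behind.
import re

s = "I ? am ? a ? string"
l = ['1', '2', '3']

values = iter(l)
# re.sub calls the lambda once per "?" found, pulling the next value each time.
s_new = re.sub(r"\?", lambda _m: next(values), s)
print(s_new)  # I 1 am 2 a 3 string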
Q: web scraping all universities with websites and description WHED website anyone can help with scraping from https://www.whed.net/home.php the code I'm using is giving me empty df. would love to have universities with websites and maybe field of study. My scraping skills are weak so if you can guide me through this would be great thanks guys. begin=time.time() countries=['Emirates','United States of America (all)'] result = [] # List to store all data univ_links=[] # Links for all universities fields = ['Street:','City:','Province:','Post Code:','WWW:','Fields of study:','Job title:'] webD = wb.Chrome(executable_path=r'C:\Users\Admin\OneDrive\Sagasit\chromedriver.exe') # To launch chrome and run script # Trigger the target website webD.get("https://www.whed.net/results_institutions.php") webD.implicitly_wait(5) #all_countries=[] cntry_el = webD.find_elements_by_xpath('//*[@id="Chp1"]/option') #cntry_grp = webD.find_elements_by_xpath('//*[@id="Chp1"]/optgroup') grps=webD.find_elements_by_xpath('//*[@id="Chp1"]/optgroup/option[1]') for c in cntry_el:countries.append(c.text) for g in grps: countries.append(g.text) for cntry in countries: select = Select(webD.find_element_by_id('Chp1'))#select country dropdown select.select_by_visible_text(cntry)#choosing country Btn_GO = webD.find_element_by_xpath('//*[@id="fsearch"]/p/input') Btn_GO.click() select_rpp = Select(webD.find_element_by_name('nbr_ref_pge'))#select results per page drop down select_rpp.select_by_visible_text('100')#choosing 100 results per page option university_form = webD.find_element_by_xpath('//*[@id="contenu"]').find_element_by_id('results') university_list = university_form.find_elements_by_xpath('//*[@id="results"]/li') # list of university elements for univ in range(len(university_list)): href = university_list[univ].find_element_by_class_name('details').find_elements_by_tag_name('a')[0].get_property('href') # University details link univ_links.append(href) while True: try: webD.find_element_by_partial_link_text('Next').click() university_form = webD.find_element_by_xpath('//*[@id="contenu"]').find_element_by_id('results') university_list = university_form.find_elements_by_xpath('//*[@id="results"]/li') for univ in range(len(university_list)): href = university_list[univ].find_element_by_class_name('details').find_elements_by_tag_name('a')[0].get_property('href') # University details link univ_links.append(href) except NoSuchElementException: break for l in univ_links: webD.get(l) webD.implicitly_wait(2) title=webD.find_element_by_xpath('//*[@id="page"]/div/div/div[2]/div[1]').text title_detailed = webD.find_element_by_xpath('//*[@id="page"]/div/div/div[2]/div[2]').text cntry_name=webD.find_element_by_xpath('//*[@id="contenu"]/p[2]').text t1=webD.find_elements_by_class_name('dt') t2=webD.find_elements_by_class_name('dd') labels=webD.find_elements_by_class_name('libelle') content=webD.find_elements_by_class_name('contenu') temp={} fos='' fos1='' temp.update({'Title': title,'Detailed Title':title_detailed,'Country':cntry_name}) for i in range(len(t1)): if t1[i].text == '' or t1[i].text == 'Address': continue else: value=t2[i].text temp.update({t1[i].text:value.replace('\n',',')}) for j in range(len(content)): if labels[j].text in fields: if labels[j].text == 'Fields of study:': info=content[j].text fos=fos+','+info elif labels[j].text == 'Job title:': info1=content[j].text fos1=fos1+','+info1 else: key=labels[j].text temp.update({key[:-1]: content[j].text}) temp.update({'Fields of study': fos.lstrip(','),'Job 
titles':fos1.lstrip(',')}) result.append(temp) data=pd.DataFrame(result) data end=time.time() print("Time taken : "+ str(end-begin) +"s") data.to_csv("WHED1.csv",index=False) this code what i could use taken from github project. would be great if i can re-create the data and save it, want this to be used as a dropdown in a web application just to make sure no mistakes written in the university studied in. A: Update 1/12/22 - Async Found a much better solution using aiohttp, it also runs the entire list of countries in ~30 seconds instead of 3 hours import json import time import aiohttp import asyncio from bs4 import BeautifulSoup from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.remote.webelement import WebElement from selenium.webdriver.support.select import Select from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service def main(): print("Init") driver = init_driver() print("Opening Homepage") url = "https://www.whed.net/results_institutions.php" driver.get(url) time.sleep(1) print("Gathering Countries") countries = get_countries(driver) driver.quit() print("Scraping") start = time.time() institution_list = asyncio.run(fetch_all(countries)) print("Writing out") f = open('output.json', 'w') f.write(json.dumps(institution_list)) f.close() end = time.time() print(f"Total time: {end - start}s") def init_driver(): chrome_executable = Service(executable_path='chromedriver.exe', log_path='NUL') chrome_options = Options() chrome_options.add_argument("--headless") driver = webdriver.Chrome(service=chrome_executable, options=chrome_options) return driver def get_countries(driver): select = Select(driver.find_element(By.ID, "Chp1")) countries = list(map(lambda c: c.get_attribute('value'), select.options)) countries.pop(0) return countries def extract_institutions(html, country): soup = BeautifulSoup(html, 'html.parser') page = soup.find('p', {'class': 'infos'}).text print(str(page)) number_of_institutions = str(page).split()[0] if number_of_institutions == 'No': print(f"No results for {country}") return [] results = [] inst_index = 0 raw = soup.find_all('a', {'class': 'fancybox fancybox.iframe'}) for i in raw: results.append({ 'name': str(i.text).strip(), 'url': 'https://www.whed.net/' + str(i.attrs['href']).strip(), 'country': country }) inst_index += 1 return { 'country': country, 'count': number_of_institutions, 'records': results } async def get_institutions(country, session): try: async with session.post( url='https://www.whed.net/results_institutions.php', data={"Chp1": country, "nbr_ref_pge": 10000} ) as response: html = await response.read() print(f"Successfully got {country}") return extract_institutions(html, country) except Exception as e: print(f"Unable to get {country} due to {e.__class__}.") async def fetch_all(countries): async with aiohttp.ClientSession() as session: return await asyncio.gather(*[get_institutions(country, session) for country in countries]) # Main call main() Old answer using synchronous algorithm Improving on @Mithun's answer since it doesn't really work as it'll be stuck on the same page. Also added direct access to the name and url to make it easier in case you want to access those. 
import time from bs4 import BeautifulSoup from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.select import Select from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service print("Init") chrome_executable = Service(executable_path='chromedriver.exe', log_path='NUL') chrome_options = Options() chrome_options.add_argument("--headless") driver = webdriver.Chrome(service=chrome_executable, options=chrome_options) print("Opening Homepage") url = "https://www.whed.net/results_institutions.php" driver.get(url) time.sleep(1) print("Selecting country") select = Select(driver.find_element(By.ID, "Chp1")) country = "Albania" select.select_by_visible_text(country) time.sleep(.5) print("Searching") driver.find_element(By.XPATH, "//input[@value='Go']").click() time.sleep(1) print("Parsing") html = driver.page_source soup = BeautifulSoup(html, 'html.parser') page = soup.find('p', {'class': 'infos'}).text number_of_pages = str(page).split()[0] counter = 10 results = [] while True: raw = soup.find_all('a', {'class': 'fancybox fancybox.iframe'}) for i in raw: results.append({ 'name': str(i.text).strip(), 'url': 'https://www.whed.net/' + str(i.attrs['href']).strip(), 'country': country }) print(f'{len(results)}/{number_of_pages}') if counter >= int(number_of_pages): break counter += 10 driver.find_element(By.LINK_TEXT, "Next page").click() time.sleep(0.5) soup = BeautifulSoup(driver.page_source, 'html.parser') driver.quit() print(results)
web scraping all universities with websites and description WHED website
anyone can help with scraping from https://www.whed.net/home.php the code I'm using is giving me empty df. would love to have universities with websites and maybe field of study. My scraping skills are weak so if you can guide me through this would be great thanks guys. begin=time.time() countries=['Emirates','United States of America (all)'] result = [] # List to store all data univ_links=[] # Links for all universities fields = ['Street:','City:','Province:','Post Code:','WWW:','Fields of study:','Job title:'] webD = wb.Chrome(executable_path=r'C:\Users\Admin\OneDrive\Sagasit\chromedriver.exe') # To launch chrome and run script # Trigger the target website webD.get("https://www.whed.net/results_institutions.php") webD.implicitly_wait(5) #all_countries=[] cntry_el = webD.find_elements_by_xpath('//*[@id="Chp1"]/option') #cntry_grp = webD.find_elements_by_xpath('//*[@id="Chp1"]/optgroup') grps=webD.find_elements_by_xpath('//*[@id="Chp1"]/optgroup/option[1]') for c in cntry_el:countries.append(c.text) for g in grps: countries.append(g.text) for cntry in countries: select = Select(webD.find_element_by_id('Chp1'))#select country dropdown select.select_by_visible_text(cntry)#choosing country Btn_GO = webD.find_element_by_xpath('//*[@id="fsearch"]/p/input') Btn_GO.click() select_rpp = Select(webD.find_element_by_name('nbr_ref_pge'))#select results per page drop down select_rpp.select_by_visible_text('100')#choosing 100 results per page option university_form = webD.find_element_by_xpath('//*[@id="contenu"]').find_element_by_id('results') university_list = university_form.find_elements_by_xpath('//*[@id="results"]/li') # list of university elements for univ in range(len(university_list)): href = university_list[univ].find_element_by_class_name('details').find_elements_by_tag_name('a')[0].get_property('href') # University details link univ_links.append(href) while True: try: webD.find_element_by_partial_link_text('Next').click() university_form = webD.find_element_by_xpath('//*[@id="contenu"]').find_element_by_id('results') university_list = university_form.find_elements_by_xpath('//*[@id="results"]/li') for univ in range(len(university_list)): href = university_list[univ].find_element_by_class_name('details').find_elements_by_tag_name('a')[0].get_property('href') # University details link univ_links.append(href) except NoSuchElementException: break for l in univ_links: webD.get(l) webD.implicitly_wait(2) title=webD.find_element_by_xpath('//*[@id="page"]/div/div/div[2]/div[1]').text title_detailed = webD.find_element_by_xpath('//*[@id="page"]/div/div/div[2]/div[2]').text cntry_name=webD.find_element_by_xpath('//*[@id="contenu"]/p[2]').text t1=webD.find_elements_by_class_name('dt') t2=webD.find_elements_by_class_name('dd') labels=webD.find_elements_by_class_name('libelle') content=webD.find_elements_by_class_name('contenu') temp={} fos='' fos1='' temp.update({'Title': title,'Detailed Title':title_detailed,'Country':cntry_name}) for i in range(len(t1)): if t1[i].text == '' or t1[i].text == 'Address': continue else: value=t2[i].text temp.update({t1[i].text:value.replace('\n',',')}) for j in range(len(content)): if labels[j].text in fields: if labels[j].text == 'Fields of study:': info=content[j].text fos=fos+','+info elif labels[j].text == 'Job title:': info1=content[j].text fos1=fos1+','+info1 else: key=labels[j].text temp.update({key[:-1]: content[j].text}) temp.update({'Fields of study': fos.lstrip(','),'Job titles':fos1.lstrip(',')}) result.append(temp) data=pd.DataFrame(result) data 
end=time.time() print("Time taken : "+ str(end-begin) +"s") data.to_csv("WHED1.csv",index=False) this code what i could use taken from github project. would be great if i can re-create the data and save it, want this to be used as a dropdown in a web application just to make sure no mistakes written in the university studied in.
[ "Update 1/12/22 - Async\nFound a much better solution using aiohttp, it also runs the entire list of countries in ~30 seconds instead of 3 hours\nimport json\nimport time\nimport aiohttp\nimport asyncio\nfrom bs4 import BeautifulSoup\nfrom selenium import webdriver\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.remote.webelement import WebElement\nfrom selenium.webdriver.support.select import Select\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.chrome.service import Service\n\n\ndef main():\n print(\"Init\")\n driver = init_driver()\n\n print(\"Opening Homepage\")\n url = \"https://www.whed.net/results_institutions.php\"\n driver.get(url)\n time.sleep(1)\n\n print(\"Gathering Countries\")\n countries = get_countries(driver)\n driver.quit()\n\n print(\"Scraping\")\n start = time.time()\n institution_list = asyncio.run(fetch_all(countries))\n\n print(\"Writing out\")\n\n f = open('output.json', 'w')\n f.write(json.dumps(institution_list))\n f.close()\n end = time.time()\n print(f\"Total time: {end - start}s\")\n\n\ndef init_driver():\n chrome_executable = Service(executable_path='chromedriver.exe', log_path='NUL')\n chrome_options = Options()\n chrome_options.add_argument(\"--headless\")\n driver = webdriver.Chrome(service=chrome_executable, options=chrome_options)\n return driver\n\n\ndef get_countries(driver):\n select = Select(driver.find_element(By.ID, \"Chp1\"))\n countries = list(map(lambda c: c.get_attribute('value'), select.options))\n countries.pop(0)\n return countries\n\n\ndef extract_institutions(html, country):\n soup = BeautifulSoup(html, 'html.parser')\n page = soup.find('p', {'class': 'infos'}).text\n print(str(page))\n number_of_institutions = str(page).split()[0]\n if number_of_institutions == 'No':\n print(f\"No results for {country}\")\n return []\n\n results = []\n inst_index = 0\n\n raw = soup.find_all('a', {'class': 'fancybox fancybox.iframe'})\n for i in raw:\n results.append({\n 'name': str(i.text).strip(),\n 'url': 'https://www.whed.net/' + str(i.attrs['href']).strip(),\n 'country': country\n })\n\n inst_index += 1\n\n return {\n 'country': country,\n 'count': number_of_institutions,\n 'records': results\n }\n\n\nasync def get_institutions(country, session):\n try:\n async with session.post(\n url='https://www.whed.net/results_institutions.php',\n data={\"Chp1\": country, \"nbr_ref_pge\": 10000}\n ) as response:\n html = await response.read()\n print(f\"Successfully got {country}\")\n return extract_institutions(html, country)\n except Exception as e:\n print(f\"Unable to get {country} due to {e.__class__}.\")\n\n\nasync def fetch_all(countries):\n async with aiohttp.ClientSession() as session:\n return await asyncio.gather(*[get_institutions(country, session) for country in countries])\n\n\n# Main call\nmain()\n\n\nOld answer using synchronous algorithm\nImproving on @Mithun's answer since it doesn't really work as it'll be stuck on the same page.\nAlso added direct access to the name and url to make it easier in case you want to access those.\nimport time\nfrom bs4 import BeautifulSoup\nfrom selenium import webdriver\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support.select import Select\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.chrome.service import Service\n\nprint(\"Init\")\n\nchrome_executable = Service(executable_path='chromedriver.exe', log_path='NUL')\nchrome_options = Options()\nchrome_options.add_argument(\"--headless\")\ndriver = 
webdriver.Chrome(service=chrome_executable, options=chrome_options)\n\nprint(\"Opening Homepage\")\nurl = \"https://www.whed.net/results_institutions.php\"\ndriver.get(url)\ntime.sleep(1)\n\nprint(\"Selecting country\")\nselect = Select(driver.find_element(By.ID, \"Chp1\"))\ncountry = \"Albania\"\nselect.select_by_visible_text(country)\ntime.sleep(.5)\n\nprint(\"Searching\")\ndriver.find_element(By.XPATH, \"//input[@value='Go']\").click()\ntime.sleep(1)\n\nprint(\"Parsing\")\nhtml = driver.page_source\nsoup = BeautifulSoup(html, 'html.parser')\n\npage = soup.find('p', {'class': 'infos'}).text\n\nnumber_of_pages = str(page).split()[0]\n\ncounter = 10\nresults = []\nwhile True:\n raw = soup.find_all('a', {'class': 'fancybox fancybox.iframe'})\n for i in raw:\n results.append({\n 'name': str(i.text).strip(),\n 'url': 'https://www.whed.net/' + str(i.attrs['href']).strip(),\n 'country': country\n })\n print(f'{len(results)}/{number_of_pages}')\n\n if counter >= int(number_of_pages):\n break\n counter += 10\n\n driver.find_element(By.LINK_TEXT, \"Next page\").click()\n time.sleep(0.5)\n soup = BeautifulSoup(driver.page_source, 'html.parser')\ndriver.quit()\nprint(results)\n\n" ]
[ 0 ]
[ "You can use Selenium to scrape data. The following code will help you scrape the university names for \"United States of America (all)\". Similarly, you can scrape for other countries as well using Loop or entering the name manually. If you need the field of study for every university, you can scrape its href using bs4 and its field of study.\nfrom bs4 import BeautifulSoup\nfrom selenium import webdriver\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support.select import Select\ndriver = webdriver.Chrome(r\"chromedriver.exe\")\nurl = \"https://www.whed.net/results_institutions.php\"\ndriver.get(url)\ntime.sleep(1)\nselect = Select(driver.find_element(By.ID, \"Chp1\"))\nselect.select_by_visible_text(\"United States of America (all)\")\ntime.sleep(1)\ndriver.find_element(By.XPATH, \"//input[@value='Go']\").click()\ntime.sleep(1)\nhtml = driver.page_source\nsoup = BeautifulSoup(html, 'html.parser')\npage = soup.find('p', {'class': 'infos'}).text\nnumber_of_pages = str(page).split()[0]\ncounter = 10\nwhile counter < int(number_of_pages):\n raw = soup.find_all('div', {'class': 'details'})\n for i in raw:\n i = (str(i.text).lstrip())\n i = i.replace(\"\\n\",\"\")\n i = i.replace(\"\\r\", \"\")\n i = i.replace(\"\\t\", \"\")\n print(i)\n next_page = driver.find_element(By.LINK_TEXT, \"Next page\").click()\n counter += 10\ndriver.quit()\n\n" ]
[ -1 ]
[ "python" ]
stackoverflow_0072800062_python.txt
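As a follow-on to the scraping answers above: the asker's end goal was a flat list of universities for a dropdown. A small, hedged sketch of turning the output.json written by the async version into a CSV follows; it assumes the structure that script produces (a list of per-country dicts with a "records" list, plus empty or None entries for countries that failed or had no results).
import csv
import json

with open("output.json") as f:
    countries = json.load(f)

with open("universities.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "url", "country"])
    writer.writeheader()
    for entry in countries:
        if not entry:                      # skip failed countries (None) or empty results ([])
            continue
        for record in entry.get("records", []):
            writer.writerow(record)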
Q: Transform data in pandas
I am trying to transform data from the raw input to the goal/output below. I know the logic when writing basic Python, but I am failing to implement it in pandas.
for i in range (0, qty[i]); number =[] np.append number [start[i]+1]
I am failing to apply the same logic in a pandas data frame.
sample data
item id qty start end
1 3 1000 1003
desired output
item id qty start end number
1 3 1000 1003 1000
1 3 1000 1003 1001
1 3 1000 1003 1002
A: Use:
df = df.loc[df.index.repeat(df['end'].sub(df['start']))]
df['number'] = df['start'].add(df.groupby(level=0).cumcount())
df = df.reset_index(drop=True)
print (df)
item id qty start end number
0 1 3 1000 1003 1000
1 1 3 1000 1003 1001
2 1 3 1000 1003 1002
Transform data in pandas
I am trying to transform data from the raw input to the goal/output below. I know the logic when writing basic Python, but I am failing to implement it in pandas.
for i in range (0, qty[i]); number =[] np.append number [start[i]+1]
I am failing to apply the same logic in a pandas data frame.
sample data
item id qty start end
1 3 1000 1003
desired output
item id qty start end number
1 3 1000 1003 1000
1 3 1000 1003 1001
1 3 1000 1003 1002
[ "Use:\ndf = df.loc[df.index.repeat(df['end'].sub(df['start']))]\n\ndf['number'] = df['start'].add(df.groupby(level=0).cumcount())\n\ndf = df.reset_index(drop=True)\nprint (df)\n item id qty start end number\n0 1 3 1000 1003 1000\n1 1 3 1000 1003 1001\n2 1 3 1000 1003 1002\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074619614_pandas_python.txt
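An alternative sketch for the row-expansion question above, using explode instead of index.repeat; the single sample row is the one assumed in the question.
import pandas as pd

df = pd.DataFrame({"item id": [1], "qty": [3], "start": [1000], "end": [1003]})

# Build the running numbers per row, then explode so each number gets its own row.
df["number"] = [list(range(s, e)) for s, e in zip(df["start"], df["end"])]
out = df.explode("number", ignore_index=True)
print(out)
Both approaches assume end - start equals qty; if qty is the authoritative count, building range(s, s + q) from the qty column would be the safer choice.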
Q: Rows as dictionaries - pymssql I want to receive rows as dictionaries in pymssql. in python-idle i ran: >>> conn = pymssql.connect(host='192.168.1.3', user='majid', password='123456789', database='GeneralTrafficMonitor', as_dict=True) >>> cur = conn.cursor() >>> cur.execute('SELECT TOP 10 * FROM dbo.tblTrafficCounterData') >>> cur.as_dict True >>> for row in cur: print row['ID'] But it gives: Traceback (most recent call last): File "<pyshell#83>", line 2, in <module> print row['ID'] TypeError: tuple indices must be integers, not str Could someone help? A: You need to add the parameter as_dict=True like so: cursor = conn.cursor(as_dict=True) You will then be able to access row['id'] if it is a field name in the result set. As per documentation below: https://pythonhosted.org/pymssql/pymssql_examples.html#rows-as-dictionaries A: Look at the version of pymssql that you are using. Only since 1.0.2 does it return a dict, earlier versions seem to return tuple. A: It is possible to set as_dict=True while creating the connection itself. pymssql.connect(server='.', user='', password='', database='', as_dict=True,) https://pythonhosted.org/pymssql/ref/pymssql.html?highlight=connect#pymssql.connect A: Specifying results as dictionaries can be done on a cursor by cursor basis: import pymysql from pymysql.cursors import DictCursor # create database connection # connect to database mydb = pymysql.connect( ... ) mycursor = mydb.cursor(cursor=DictCursor)
Rows as dictionaries - pymssql
I want to receive rows as dictionaries in pymssql. in python-idle i ran: >>> conn = pymssql.connect(host='192.168.1.3', user='majid', password='123456789', database='GeneralTrafficMonitor', as_dict=True) >>> cur = conn.cursor() >>> cur.execute('SELECT TOP 10 * FROM dbo.tblTrafficCounterData') >>> cur.as_dict True >>> for row in cur: print row['ID'] But it gives: Traceback (most recent call last): File "<pyshell#83>", line 2, in <module> print row['ID'] TypeError: tuple indices must be integers, not str Could someone help?
[ "You need to add the parameter as_dict=True like so:\n cursor = conn.cursor(as_dict=True)\n\nYou will then be able to access row['id'] if it is a field name in the result set.\nAs per documentation below:\nhttps://pythonhosted.org/pymssql/pymssql_examples.html#rows-as-dictionaries\n", "Look at the version of pymssql that you are using. Only since 1.0.2 does it return a dict, earlier versions seem to return tuple.\n", "It is possible to set as_dict=True while creating the connection itself.\npymssql.connect(server='.', user='', password='', database='', as_dict=True,)\n\nhttps://pythonhosted.org/pymssql/ref/pymssql.html?highlight=connect#pymssql.connect\n", "Specifying results as dictionaries can be done on a cursor by cursor basis:\nimport pymysql\nfrom pymysql.cursors import DictCursor\n\n# create database connection\n# connect to database\nmydb = pymysql.connect( ... )\n\nmycursor = mydb.cursor(cursor=DictCursor)\n\n\n" ]
[ 5, 1, 0, 0 ]
[]
[]
[ "dictionary", "pymssql", "python" ]
stackoverflow_0009972944_dictionary_pymssql_python.txt
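Putting the advice above together, a minimal end-to-end sketch in Python 3 syntax; the server, credentials and table are the placeholders from the question, and as_dict=True is set on the connection as the answers describe.
import pymssql

conn = pymssql.connect(server="192.168.1.3", user="majid",
                       password="123456789",
                       database="GeneralTrafficMonitor",
                       as_dict=True)        # every row comes back as a dict
cursor = conn.cursor()
cursor.execute("SELECT TOP 10 * FROM dbo.tblTrafficCounterData")
for row in cursor:
    print(row["ID"])                        # columns are addressable by name
conn.close()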
Q: How to merge folders in every subdirectory using python code?
My directory now: [1]. I want to combine every Sub in every Folder, so it should look something like this: [2]
[1]: https://i.stack.imgur.com/xz7FJ.png
[2]: https://i.stack.imgur.com/scK48.png
A: You can iterate through all the files/dirs in a directory using os.walk(). We are just iterating over all files in a dir and then moving them to a destination directory using os.rename().
import os
import shutil
directory = '/tmp/dataset'

def move_all_files_to_destination_dir(original_dir, destination_dir):
    # moves all NESTED files in original_dir to destination_dir
    files = []
    for root, dirs, files in os.walk(original_dir):
        for name in files:
            os.rename(root + os.sep + name, destination_dir + os.sep + name)

subdirs = [f.path for f in os.scandir(directory) if f.is_dir()]  # [Folder1, Folder2]
for d in subdirs:
    move_all_files_to_destination_dir(d, d)
    # remove the subdirectories of d after we have moved all files under it
    for sub in os.scandir(d):
        if sub.is_dir():
            shutil.rmtree(sub)
How to merge folders in every subdirectory using python code?
My directory now: [1]. I want to combine every Sub in every Folder, so it should look something like this: [2]
[1]: https://i.stack.imgur.com/xz7FJ.png
[2]: https://i.stack.imgur.com/scK48.png
[ "You can iterate through all the files/dirs in a directory using os.walk().\nWe are just iterating over all files in a dir and then moving them to a destination directory using os.rename()\nimport os\ndirectory = '/tmp/dataset'\n\ndef move_all_files_to_destination_dir(original_dir, destination_dir):\n # moves all NESTED files in original_dir to destination_dir\n files = []\n for root, dirs, files in os.walk(original_dir):\n for name in files:\n os.rename(root + os.sep + name, destination_dir + os.sep + name)\n \nsubdirs = [f.path for f in os.scandir(directory) if f.is_dir()] # [Folder1, Folder2]\nfor d in subdirs:\n move_all_files_to_destination_dir(d, d)\n # remove the subdirectories of d after we have moved all files under it\n for sub in os.scandir(d):\n if sub.is_dir():\n shutil.rmtree(sub)\n\n" ]
[ 1 ]
[]
[]
[ "directory", "directory_structure", "merge", "python" ]
stackoverflow_0074623529_directory_directory_structure_merge_python.txt
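One caveat with the merging answer above: os.rename silently overwrites on some platforms and fails on others when two subfolders contain a file with the same name. Below is a hedged variation that renames on collision; flatten_into and the _1, _2 suffix scheme are illustrative choices, not part of the original answer.
import os
import shutil

def flatten_into(folder):
    # Move every file found below `folder` up into `folder` itself.
    for root, _dirs, files in os.walk(folder):
        if root == folder:
            continue                                   # files already at the top stay put
        for name in files:
            src = os.path.join(root, name)
            dst = os.path.join(folder, name)
            base, ext = os.path.splitext(name)
            counter = 1
            while os.path.exists(dst):                 # avoid clobbering duplicate names
                dst = os.path.join(folder, f"{base}_{counter}{ext}")
                counter += 1
            shutil.move(src, dst)
    # Remove the now-empty subdirectories.
    for entry in os.scandir(folder):
        if entry.is_dir():
            shutil.rmtree(entry.path)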
Q: How to convert a False or True boolean values inside nested python dictionary into javascript in Django template? This is my views.py: context = { "fas": fas_obj, } # TemplateResponse can only be rendered once return render(request, "project_structure.html", context) In the project_structure.html and javascript section: const pp = {{ fas|safe }}; I get an error here. because fas contains a False or True boolean value somewhere deep inside. fas is complicated and has lists of dictionaries with nested dictionaries. What did work is I did this: context = { "fas": fas_obj, # need a fas_json version for the javascript part # because of the boolean in python doesn't render well in javascript "fas_json": json.dumps(fas_obj), I know now I have two versions because I need the original version for the other part of the template. In the javascript: const pp = {{fas_json|safe}}; Is there an easier way than passing the original and the json version? A: Try this: const pp = JSON.parse("{{fas_json|safe}}");
How to convert a False or True boolean values inside nested python dictionary into javascript in Django template?
This is my views.py: context = { "fas": fas_obj, } # TemplateResponse can only be rendered once return render(request, "project_structure.html", context) In the project_structure.html and javascript section: const pp = {{ fas|safe }}; I get an error here. because fas contains a False or True boolean value somewhere deep inside. fas is complicated and has lists of dictionaries with nested dictionaries. What did work is I did this: context = { "fas": fas_obj, # need a fas_json version for the javascript part # because of the boolean in python doesn't render well in javascript "fas_json": json.dumps(fas_obj), I know now I have two versions because I need the original version for the other part of the template. In the javascript: const pp = {{fas_json|safe}}; Is there an easier way than passing the original and the json version?
[ "Try this:\nconst pp = JSON.parse(\"{{fas_json|safe}}\");\n\n" ]
[ 0 ]
[]
[]
[ "django", "javascript", "json", "python" ]
stackoverflow_0074623687_django_javascript_json_python.txt
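For the Django question above there is also a built-in alternative that avoids calling json.dumps in the view at all: the json_script template filter (available since Django 2.1) serializes a context value safely into a script tag, so only the original fas needs to be passed. A sketch of the template side:
{{ fas|json_script:"fas-data" }}
<script>
  // json_script emits <script type="application/json" id="fas-data">...</script>,
  // with Python True/False already converted to JSON true/false.
  const pp = JSON.parse(document.getElementById("fas-data").textContent);
</script>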
Q: Timestring passed into URL to output JSON file - Python API Call I'm getting the following error for my python scraper: import requests import json symbol_id = 'COINBASE_SPOT_BTC_USDT' time_start = '2022-11-20T17:00:00' time_end = '2022-11-21T05:00:00' limit_levels = 100000000 limit = 100000000 url = 'https://rest.coinapi.io/v1/orderbooks/{symbol_id}/history?time_start={time_start}limit={limit}&limit_levels={limit_levels}' headers = {'X-CoinAPI-Key' : 'XXXXXXXXXXXXXXXXXXXXXXX'} response = requests.get(url, headers=headers) print(response) with open('raw_coinbase_ob_history.json', 'w') as json_file: json.dump(response.json(), json_file) with open('raw_coinbase_ob_history.json', 'r') as handle: parsed = json.load(handle) with open('coinbase_ob_history.json', 'w') as coinbase_ob: json.dump(parsed, coinbase_ob, indent = 4) <Response [400]> And in my written json file, I'm outputted {"error": "Wrong format of 'time_start' parameter."} I assume a string goes into a url, so I flattened the timestring to a string. I don't understand why this doesn't work. This is the documentation for the coinAPI call I'm trying to make with 'timestring'. https://docs.coinapi.io/?python#historical-data-get-4 A: Incorrect syntax for python. To concatenate strings, stick them together like such: a = 'a' + 'b' + 'c' A: string formatting is invalid, and also need use & in between different url params # python3 url = f"https://rest.coinapi.io/v1/orderbooks/{symbol_id}/history?time_start={time_start}&limit={limit}&limit_levels={limit_levels}" # python 2 url = "https://rest.coinapi.io/v1/orderbooks/{symbol_id}/history?time_start={time_start}&limit={limit}&limit_levels={limit_levels}".format(symbol_id=symbol_id, time_start=time_start, limit=limit, limit_levels=limit_levels) https://docs.python.org/3/tutorial/inputoutput.html https://docs.python.org/2/tutorial/inputoutput.html
Timestring passed into URL to output JSON file - Python API Call
I'm getting the following error for my python scraper: import requests import json symbol_id = 'COINBASE_SPOT_BTC_USDT' time_start = '2022-11-20T17:00:00' time_end = '2022-11-21T05:00:00' limit_levels = 100000000 limit = 100000000 url = 'https://rest.coinapi.io/v1/orderbooks/{symbol_id}/history?time_start={time_start}limit={limit}&limit_levels={limit_levels}' headers = {'X-CoinAPI-Key' : 'XXXXXXXXXXXXXXXXXXXXXXX'} response = requests.get(url, headers=headers) print(response) with open('raw_coinbase_ob_history.json', 'w') as json_file: json.dump(response.json(), json_file) with open('raw_coinbase_ob_history.json', 'r') as handle: parsed = json.load(handle) with open('coinbase_ob_history.json', 'w') as coinbase_ob: json.dump(parsed, coinbase_ob, indent = 4) <Response [400]> And in my written json file, I'm outputted {"error": "Wrong format of 'time_start' parameter."} I assume a string goes into a url, so I flattened the timestring to a string. I don't understand why this doesn't work. This is the documentation for the coinAPI call I'm trying to make with 'timestring'. https://docs.coinapi.io/?python#historical-data-get-4
[ "Incorrect syntax for python. To concatenate strings, stick them together like such:\na = 'a' + 'b' + 'c'\n\n", "string formatting is invalid, and also need use & in between different url params\n# python3\nurl = f\"https://rest.coinapi.io/v1/orderbooks/{symbol_id}/history?time_start={time_start}&limit={limit}&limit_levels={limit_levels}\"\n\n# python 2\nurl = \"https://rest.coinapi.io/v1/orderbooks/{symbol_id}/history?time_start={time_start}&limit={limit}&limit_levels={limit_levels}\".format(symbol_id=symbol_id, time_start=time_start, limit=limit, limit_levels=limit_levels)\n\nhttps://docs.python.org/3/tutorial/inputoutput.html\nhttps://docs.python.org/2/tutorial/inputoutput.html\n" ]
[ 0, 0 ]
[]
[]
[ "python", "time", "url" ]
stackoverflow_0074593089_python_time_url.txt
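Beyond fixing the f-string, the query string in the question above can be left to requests entirely, which sidesteps both the missing & and the unformatted placeholders. A sketch using the same endpoint and parameters as the question (the API key remains a placeholder):
import requests

symbol_id = "COINBASE_SPOT_BTC_USDT"
url = f"https://rest.coinapi.io/v1/orderbooks/{symbol_id}/history"
headers = {"X-CoinAPI-Key": "XXXXXXXXXXXXXXXXXXXXXXX"}
params = {
    "time_start": "2022-11-20T17:00:00",
    "time_end": "2022-11-21T05:00:00",
    "limit": 100000000,
    "limit_levels": 100000000,
}

# requests URL-encodes the parameters and joins them with & for us.
response = requests.get(url, headers=headers, params=params)
print(response.status_code)
with open("raw_coinbase_ob_history.json", "w") as f:
    f.write(response.text)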
Q: Bunch of errors while using Flask command to make simple website so I've been following this tutorial on how to build an basic Flask website off youtube Tutorial. Currently trying to add HTML templates which are giving me "500 Internal Server Error" with application errors in the CMD. **My python code **` from flask import Flask, redirect, url_for, render_template app = Flask(__name__) @app.route("/<name>") def home(name): return render_template("index.html", content=name) if __name__ == "__main__": app.run() **my html code ** <!DOCTYPE html> <html> <head> <title>ETF World!</title> </head> <body> <h1>Explore the world of Exchange Traded Funds</h1> <p>{{content}}</p> </body> </html> The ERROR Traceback (most recent call last): File "C:\Users\Bradley J Stewart\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\flask\app.py", line 2077, in wsgi_app response = self.full_dispatch_request() File "C:\Users\Bradley J Stewart\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\flask\app.py", line 1525, in full_dispatch_request rv = self.handle_user_exception(e) File "C:\Users\Bradley J Stewart\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\flask\app.py", line 1523, in full_dispatch_request rv = self.dispatch_request() File "C:\Users\Bradley J Stewart\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\flask\app.py", line 1509, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args) TypeError: home() missing 1 required positional argument: 'name' I expected the website to work when I enter a forward slash in the URL and for dynamic information to pass information to be delivered from the backend to the front of what "content" variable I wrote. Not 100% sure if the video is relevant anymore as it is 3 years old Any help is deeply appreciated. A: it happens to work on mine though. main.py from flask import Flask, redirect, url_for, render_template app = Flask(__name__) @app.route("/<name>") def home(name): return render_template("index.html", content=name) if __name__ == "__main__": app.run() index.html: <head> <title>ETF World!</title> </head> <body> <h1>Explore the world of Exchange Traded Funds</h1> <p>{{content}}</p> </body> </html> also make sure to put index.html to the template folder because flask look into that folder.
Bunch of errors while using Flask command to make simple website
so I've been following this tutorial on how to build an basic Flask website off youtube Tutorial. Currently trying to add HTML templates which are giving me "500 Internal Server Error" with application errors in the CMD. **My python code **` from flask import Flask, redirect, url_for, render_template app = Flask(__name__) @app.route("/<name>") def home(name): return render_template("index.html", content=name) if __name__ == "__main__": app.run() **my html code ** <!DOCTYPE html> <html> <head> <title>ETF World!</title> </head> <body> <h1>Explore the world of Exchange Traded Funds</h1> <p>{{content}}</p> </body> </html> The ERROR Traceback (most recent call last): File "C:\Users\Bradley J Stewart\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\flask\app.py", line 2077, in wsgi_app response = self.full_dispatch_request() File "C:\Users\Bradley J Stewart\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\flask\app.py", line 1525, in full_dispatch_request rv = self.handle_user_exception(e) File "C:\Users\Bradley J Stewart\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\flask\app.py", line 1523, in full_dispatch_request rv = self.dispatch_request() File "C:\Users\Bradley J Stewart\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\flask\app.py", line 1509, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args) TypeError: home() missing 1 required positional argument: 'name' I expected the website to work when I enter a forward slash in the URL and for dynamic information to pass information to be delivered from the backend to the front of what "content" variable I wrote. Not 100% sure if the video is relevant anymore as it is 3 years old Any help is deeply appreciated.
[ "it happens to work on mine though.\nmain.py\nfrom flask import Flask, redirect, url_for, render_template \n\napp = Flask(__name__)\n\[email protected](\"/<name>\")\ndef home(name):\n return render_template(\"index.html\", content=name)\n\n\nif __name__ == \"__main__\":\n app.run()\n\nindex.html:\n<head>\n <title>ETF World!</title>\n </head>\n <body>\n <h1>Explore the world of Exchange Traded Funds</h1>\n <p>{{content}}</p>\n </body>\n</html>\n\nalso make sure to put index.html to the template folder because flask look into that folder.\n\n" ]
[ 1 ]
[]
[]
[ "flask", "html", "python" ]
stackoverflow_0074623084_flask_html_python.txt
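On the Flask traceback above: the "missing 1 required positional argument" error shows up when some route maps to home() without supplying name (for example a bare @app.route("/") left over from an earlier step of the tutorial). A common fix, sketched here, is to stack both routes and give the parameter a default; "world" is just an arbitrary placeholder value.
from flask import Flask, render_template

app = Flask(__name__)

@app.route("/")            # the bare URL now works too
@app.route("/<name>")
def home(name="world"):    # default used when the URL supplies no name
    return render_template("index.html", content=name)

if __name__ == "__main__":
    app.run()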
Q: Splitting text in a column into multiple rows in python (data wrangling) channelTitle video_id tags 0 Channel1 ojPuGJaiVjE [tag3, tag4, tag5] 1 Channel1 NdWI3sov93I [tag1, tag4] 2 Channel1 67PYna-rScE [tag1, tag2, tag3, tag5] 3 Channel2 lNoDeZzn_4o NaN 4 Channel3 QJSOGP-nJto [tag3] Hi all, I have a pandas dataframe (video_df) in python as shown in the table above. May I know how should I split the tags column into multiple rows (depends on how many tags per video_id) so that there will be only one tag per row? The expected output should be something like this: Any help or advice will be greatly appreciated! A: Use DataFrame.explode(): video_df = video_df.explode('tags', ignore_index=True) print(video_df) # channelTitle video_id tags #0 Channel1 ojPuGJaiVjE tag3 #1 Channel1 ojPuGJaiVjE tag4 #2 Channel1 ojPuGJaiVjE tag5 #3 Channel1 NdWI3sov93I tag1 #4 Channel1 NdWI3sov93I tag4 #5 Channel1 67PYna-rScE tag1 #6 Channel1 67PYna-rScE tag2 #7 Channel1 67PYna-rScE tag3 #8 Channel1 67PYna-rScE tag5 #9 Channel2 lNoDeZzn_4o NaN #10 Channel3 QJSOGP-nJto tag3 By default, DataFrame.explode() keeps the index value for exploding rows, so you need to specify ignore_index=True to reindex.
Splitting text in a column into multiple rows in python (data wrangling)
channelTitle video_id tags 0 Channel1 ojPuGJaiVjE [tag3, tag4, tag5] 1 Channel1 NdWI3sov93I [tag1, tag4] 2 Channel1 67PYna-rScE [tag1, tag2, tag3, tag5] 3 Channel2 lNoDeZzn_4o NaN 4 Channel3 QJSOGP-nJto [tag3] Hi all, I have a pandas dataframe (video_df) in python as shown in the table above. May I know how should I split the tags column into multiple rows (depends on how many tags per video_id) so that there will be only one tag per row? The expected output should be something like this: Any help or advice will be greatly appreciated!
[ "Use DataFrame.explode():\nvideo_df = video_df.explode('tags', ignore_index=True)\n\nprint(video_df)\n# channelTitle video_id tags\n#0 Channel1 ojPuGJaiVjE tag3\n#1 Channel1 ojPuGJaiVjE tag4\n#2 Channel1 ojPuGJaiVjE tag5\n#3 Channel1 NdWI3sov93I tag1\n#4 Channel1 NdWI3sov93I tag4\n#5 Channel1 67PYna-rScE tag1\n#6 Channel1 67PYna-rScE tag2\n#7 Channel1 67PYna-rScE tag3\n#8 Channel1 67PYna-rScE tag5\n#9 Channel2 lNoDeZzn_4o NaN\n#10 Channel3 QJSOGP-nJto tag3\n\nBy default, DataFrame.explode() keeps the index value for exploding rows, so you need to specify ignore_index=True to reindex.\n" ]
[ 1 ]
[]
[]
[ "data_wrangling", "python", "split" ]
stackoverflow_0074623564_data_wrangling_python_split.txt
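A small follow-on to the explode answer above: videos without tags keep a NaN row after exploding, so dropping those before any aggregation (for example counting tag frequency) is usually the next step. A sketch, reusing a cut-down version of the sample frame from the question:
import pandas as pd

video_df = pd.DataFrame({
    "channelTitle": ["Channel1", "Channel1", "Channel2"],
    "video_id": ["ojPuGJaiVjE", "NdWI3sov93I", "lNoDeZzn_4o"],
    "tags": [["tag3", "tag4", "tag5"], ["tag1", "tag4"], float("nan")],
})

exploded = video_df.explode("tags", ignore_index=True)
# NaN rows (videos with no tags) survive the explode; drop them before counting.
tag_counts = exploded.dropna(subset=["tags"])["tags"].value_counts()
print(tag_counts)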
Q: How to make lists equal a quadrilateral in Python? I am having difficulty figuring out to make user-entered lists of numbers equal a quadrilateral, specifically a rhombus and square, in Python. I do not know if my code needs a function or loops or if/else statements to know if it is a rhombus or square. So far, I have this, but I can't figure out how to go through each side to find out if they are equal to each other or if angle 1 equals angle 2, etc. Which afterwards, I would determine if it the numbers that the user entered made a list that equaled a rhombus or square. Excuse the confusing sentences, I tried to make my problem as clear as possible. Thank you for taking your time to help, if you do! Edit: I tried a function, but found that it was too complicated by itself. I want to figure out how to search through each list's elements and make sure that they equal/match each other. Thank you! A rhombus has All four sides the same length Angle 1 equals Angle 3 Angle 2 equals Angle 4 A square has All four sides the same length. All angles equal to each other while True: # Input, Validation, Repetition print("=== Please enter Sides ===\n") sList = [] # list for sides for i in range(0,4,1): sides = float(input("Please enter side %i : " %(i + 1 ) ) ) if sides < 0: sides = float(input("Value must be positive! Please enter side %i : " %(i + 1 ))) sList.append(sides) print() print("=== Please enter angles ===\n") aList = [] # list for angles for i in range(0,4,1): angles = float(input("Please enter angle %i : " %(i + 1 ) ) ) if angles < 0: angles = float(input("Value must be positive! Please enter angle %i : " %(i + 1 ))) aList.append(angles) print("=======================\n") # Rhombus if sList == sList and aList[1] == aList[3] and aList[2] == aList[4]: print("This is a rhombus\n") #Square if sList == aList: print("This is a square!\n") keep = input("Would you like to repeat? (1-Yes, 2-No): ") if keep != '1': break A: First we need some tweaking in the edge control. Because "if sides[i] == sides[i]:" statement returns always True. Instead, let's write a function that checks that all edges are the same. def is_same_edges(edges) -> bool: sum_edges = sum(edges) return (sum_edges / len(edges)) == edges[0] This function calculates the sum of all the elements in the list it takes as a parameter. Then it divides the length of the list it takes as a parameter and checks it with any of the lists element. If it is the same, all the elements are the same. Now that we've done the edge checks, we can move on to the angle check. def is_opposite_angles_same(angles) -> bool: return angles[0] == angles[2] and angles[1] == angles[3] This function returns whether the opposite angles are equal. Okay, we have the functions we need. Now let's edit your function. def rhombus(sList, aList): if is_same_edges(sList) and is_opposite_angles_same(aList): rhombus = sList print(rhombus) A: Here's an approach that should work. First, define a function that evaluates your conditions to check whether something is a rhombus. def is_rhombus(sList, aList): if len(set(sList)) == 1 and aList[0] == aList[2] and aList[1] == aList[3]: return True else: return False is_rhombus checks whether the set obtained by converting sList into a set has a length of 1 - if yes, that means all members of sList were equal to each other. It also checks whether opposite angles in aList are equal to each other. if both conditions are satisfied, it returns True, else returns False. 
Next, because we know all rhombuses are squares, but not necessarily the other way round, we create another function called is_square: def is_square(sList, aList): if is_rhombus(sList, aList) and len(set(aList)) == 1: return True else: return False is_square also takes sList and aList, and it calls the previously defined function is_rhombus. If the shape IS a rhombus, is_square checks whether in addition to being a rhombus, all the angles are equal - using the same approach employed by is_rhombus above. Finally, with both these functions defined, we enter your while True loop, and after we get the sides and angles from the user, we go into a little if-elif-else block where we first check whether the shape is a square - because that's the narrowest option available to us. If the shape is a square, we print a message to the screen accordingly. If it's not a square, it could still be a rhombus, so we check for that. If it is a rhombus, we print that message to screen. If it's not even a rhombus, we say it's not a rhombus (no point also saying it's not a square - if it's not a rhombus, it can't possibly be a square). while True: # Input, Validation, Repetition print("=== Please enter Sides ===\n") sList = [] # list for sides for i in range(0,4,1): sides = float(input("Please enter side %i : " %(i + 1 ) ) ) if sides < 0: sides = float(input("Value must be positive! Please enter side %i : " %(i + 1 ))) sList.append(sides) print() print("=== Please enter angles ===\n") aList = [] # list for angles for i in range(0,4,1): angles = float(input("Please enter angle %i : " %(i + 1 ) ) ) if angles < 0: angles = float(input("Value must be positive! Please enter angle %i : " %(i + 1 ))) aList.append(angles) print("=======================\n") # Rhombus if is_square(sList, aList): print ('This is a square!') elif is_rhombus(sList, aList): print ('This is a rhombus!') else: print ('This is not a rhombus.') keep = input("Would you like to repeat? (1-Yes, 2-No): ") if keep != '1': break Trialling this on a square: === Please enter Sides === Please enter side 1 : 40 Please enter side 2 : 40 Please enter side 3 : 40 Please enter side 4 : 40 === Please enter angles === Please enter angle 1 : 90 Please enter angle 2 : 90 Please enter angle 3 : 90 Please enter angle 4 : 90 ======================= This is a square! Would you like to repeat? (1-Yes, 2-No): n Trialling this on a rhombus: === Please enter Sides === Please enter side 1 : 40 Please enter side 2 : 40 Please enter side 3 : 40 Please enter side 4 : 40 === Please enter angles === Please enter angle 1 : 60 Please enter angle 2 : 120 Please enter angle 3 : 60 Please enter angle 4 : 120 ======================= This is a rhombus! Would you like to repeat? (1-Yes, 2-No): n Lastly, trialling it on a non-rhombus: === Please enter Sides === Please enter side 1 : 5 Please enter side 2 : 6 Please enter side 3 : 4 Please enter side 4 : 3 === Please enter angles === Please enter angle 1 : 10 Please enter angle 2 : 20 Please enter angle 3 : 40 Please enter angle 4 : 30 ======================= This is not a rhombus. Would you like to repeat? (1-Yes, 2-No): n Of course it bears remembering that any shape with four equal sides is a rhombus. That opposite angles will be equal to each other is an attribute of rhombuses, not a criterion for something to be a rhombus. Recognising this, you could redefine your is_rhombus function differently to ignore aList altogether, and rely only on sList to tell us whether the shape is a rhombus. 
def is_rhombus(sList): if sum(sList)/4 == sList[0]: return True else: return False Then, we change the way we call these functions - if the shape isn't a rhombus, why make the user enter a bunch of angles? You could rearrange your while True block like this: while True: # Input, Validation, Repetition print("=== Please enter Sides ===\n") sList = [] # list for sides for i in range(0,4,1): sides = float(input("Please enter side %i : " %(i + 1 ) ) ) if sides < 0: sides = float(input("Value must be positive! Please enter side %i : " %(i + 1 ))) sList.append(sides) print() if is_rhombus(sList): print("=== Please enter angles ===\n") aList = [] # list for angles for i in range(0,4,1): angles = float(input("Please enter angle %i : " %(i + 1 ) ) ) if angles < 0: angles = float(input("Value must be positive! Please enter angle %i : " %(i + 1 ))) aList.append(angles) print("=======================\n") if is_square(sList, aList): print ('This is a square!') else: print ('This is a rhombus!') else: print ('This is not a rhombus.') keep = input("Would you like to repeat? (1-Yes, 2-No): ") if keep != '1': break This behaviour is identical to before - except if four unequal sides are provided, we don't bother asking for a bunch of angles.
How to make lists equal a quadrilateral in Python?
I am having difficulty figuring out to make user-entered lists of numbers equal a quadrilateral, specifically a rhombus and square, in Python. I do not know if my code needs a function or loops or if/else statements to know if it is a rhombus or square. So far, I have this, but I can't figure out how to go through each side to find out if they are equal to each other or if angle 1 equals angle 2, etc. Which afterwards, I would determine if it the numbers that the user entered made a list that equaled a rhombus or square. Excuse the confusing sentences, I tried to make my problem as clear as possible. Thank you for taking your time to help, if you do! Edit: I tried a function, but found that it was too complicated by itself. I want to figure out how to search through each list's elements and make sure that they equal/match each other. Thank you! A rhombus has All four sides the same length Angle 1 equals Angle 3 Angle 2 equals Angle 4 A square has All four sides the same length. All angles equal to each other while True: # Input, Validation, Repetition print("=== Please enter Sides ===\n") sList = [] # list for sides for i in range(0,4,1): sides = float(input("Please enter side %i : " %(i + 1 ) ) ) if sides < 0: sides = float(input("Value must be positive! Please enter side %i : " %(i + 1 ))) sList.append(sides) print() print("=== Please enter angles ===\n") aList = [] # list for angles for i in range(0,4,1): angles = float(input("Please enter angle %i : " %(i + 1 ) ) ) if angles < 0: angles = float(input("Value must be positive! Please enter angle %i : " %(i + 1 ))) aList.append(angles) print("=======================\n") # Rhombus if sList == sList and aList[1] == aList[3] and aList[2] == aList[4]: print("This is a rhombus\n") #Square if sList == aList: print("This is a square!\n") keep = input("Would you like to repeat? (1-Yes, 2-No): ") if keep != '1': break
[ "First we need some tweaking in the edge control.\nBecause \"if sides[i] == sides[i]:\" statement returns always True.\nInstead, let's write a function that checks that all edges are the same.\ndef is_same_edges(edges) -> bool:\n sum_edges = sum(edges)\n return (sum_edges / len(edges)) == edges[0]\n\nThis function calculates the sum of all the elements in the list it takes as a parameter. Then it divides the length of the list it takes as a parameter and checks it with any of the lists element. If it is the same, all the elements are the same.\nNow that we've done the edge checks, we can move on to the angle check.\ndef is_opposite_angles_same(angles) -> bool:\n return angles[0] == angles[2] and angles[1] == angles[3]\n\nThis function returns whether the opposite angles are equal.\nOkay, we have the functions we need. Now let's edit your function.\ndef rhombus(sList, aList):\n if is_same_edges(sList) and is_opposite_angles_same(aList):\n rhombus = sList\n print(rhombus)\n\n", "Here's an approach that should work.\nFirst, define a function that evaluates your conditions to check whether something is a rhombus.\ndef is_rhombus(sList, aList):\n if len(set(sList)) == 1 and aList[0] == aList[2] and aList[1] == aList[3]:\n return True\n else:\n return False\n\nis_rhombus checks whether the set obtained by converting sList into a set has a length of 1 - if yes, that means all members of sList were equal to each other. It also checks whether opposite angles in aList are equal to each other. if both conditions are satisfied, it returns True, else returns False.\nNext, because we know all rhombuses are squares, but not necessarily the other way round, we create another function called is_square:\ndef is_square(sList, aList):\n if is_rhombus(sList, aList) and len(set(aList)) == 1:\n return True\n else:\n return False\n\nis_square also takes sList and aList, and it calls the previously defined function is_rhombus. If the shape IS a rhombus, is_square checks whether in addition to being a rhombus, all the angles are equal - using the same approach employed by is_rhombus above.\nFinally, with both these functions defined, we enter your while True loop, and after we get the sides and angles from the user, we go into a little if-elif-else block where we first check whether the shape is a square - because that's the narrowest option available to us. If the shape is a square, we print a message to the screen accordingly. If it's not a square, it could still be a rhombus, so we check for that. If it is a rhombus, we print that message to screen. If it's not even a rhombus, we say it's not a rhombus (no point also saying it's not a square - if it's not a rhombus, it can't possibly be a square).\nwhile True:\n\n# Input, Validation, Repetition\n\n print(\"=== Please enter Sides ===\\n\")\n sList = [] # list for sides\n for i in range(0,4,1):\n sides = float(input(\"Please enter side %i : \" %(i + 1 ) ) )\n if sides < 0:\n sides = float(input(\"Value must be positive! Please enter side %i : \" %(i + 1 )))\n sList.append(sides)\n print()\n\n print(\"=== Please enter angles ===\\n\")\n aList = [] # list for angles\n for i in range(0,4,1):\n angles = float(input(\"Please enter angle %i : \" %(i + 1 ) ) )\n if angles < 0:\n angles = float(input(\"Value must be positive! 
Please enter angle %i : \" %(i + 1 )))\n aList.append(angles)\n print(\"=======================\\n\")\n\n# Rhombus\n if is_square(sList, aList):\n print ('This is a square!')\n elif is_rhombus(sList, aList):\n print ('This is a rhombus!')\n else:\n print ('This is not a rhombus.')\n \n keep = input(\"Would you like to repeat? (1-Yes, 2-No): \")\n if keep != '1':\n break\n\nTrialling this on a square:\n=== Please enter Sides ===\n\n\nPlease enter side 1 : 40\n\nPlease enter side 2 : 40\n\nPlease enter side 3 : 40\n\nPlease enter side 4 : 40\n\n=== Please enter angles ===\n\n\nPlease enter angle 1 : 90\n\nPlease enter angle 2 : 90\n\nPlease enter angle 3 : 90\n\nPlease enter angle 4 : 90\n=======================\n\nThis is a square!\n\nWould you like to repeat? (1-Yes, 2-No): n\n\nTrialling this on a rhombus:\n=== Please enter Sides ===\n\n\nPlease enter side 1 : 40\n\nPlease enter side 2 : 40\n\nPlease enter side 3 : 40\n\nPlease enter side 4 : 40\n\n=== Please enter angles ===\n\n\nPlease enter angle 1 : 60\n\nPlease enter angle 2 : 120\n\nPlease enter angle 3 : 60\n\nPlease enter angle 4 : 120\n=======================\n\nThis is a rhombus!\n\nWould you like to repeat? (1-Yes, 2-No): n\n\nLastly, trialling it on a non-rhombus:\n=== Please enter Sides ===\n\n\nPlease enter side 1 : 5\n\nPlease enter side 2 : 6\n\nPlease enter side 3 : 4\n\nPlease enter side 4 : 3\n\n=== Please enter angles ===\n\n\nPlease enter angle 1 : 10\n\nPlease enter angle 2 : 20\n\nPlease enter angle 3 : 40\n\nPlease enter angle 4 : 30\n=======================\n\nThis is not a rhombus.\n\nWould you like to repeat? (1-Yes, 2-No): n\n\nOf course it bears remembering that any shape with four equal sides is a rhombus. That opposite angles will be equal to each other is an attribute of rhombuses, not a criterion for something to be a rhombus. Recognising this, you could redefine your is_rhombus function differently to ignore aList altogether, and rely only on sList to tell us whether the shape is a rhombus.\ndef is_rhombus(sList):\n if sum(sList)/4 == sList[0]:\n return True\n else:\n return False\n\nThen, we change the way we call these functions - if the shape isn't a rhombus, why make the user enter a bunch of angles? You could rearrange your while True block like this:\nwhile True:\n\n# Input, Validation, Repetition\n\n print(\"=== Please enter Sides ===\\n\")\n sList = [] # list for sides\n for i in range(0,4,1):\n sides = float(input(\"Please enter side %i : \" %(i + 1 ) ) )\n if sides < 0:\n sides = float(input(\"Value must be positive! Please enter side %i : \" %(i + 1 )))\n sList.append(sides)\n print()\n \n if is_rhombus(sList):\n print(\"=== Please enter angles ===\\n\")\n aList = [] # list for angles\n for i in range(0,4,1):\n angles = float(input(\"Please enter angle %i : \" %(i + 1 ) ) )\n if angles < 0:\n angles = float(input(\"Value must be positive! Please enter angle %i : \" %(i + 1 )))\n aList.append(angles)\n print(\"=======================\\n\")\n \n if is_square(sList, aList):\n print ('This is a square!')\n else:\n print ('This is a rhombus!')\n else:\n print ('This is not a rhombus.')\n\n\n keep = input(\"Would you like to repeat? (1-Yes, 2-No): \")\n if keep != '1':\n break\n\nThis behaviour is identical to before - except if four unequal sides are provided, we don't bother asking for a bunch of angles.\n" ]
[ 0, 0 ]
[]
[]
[ "list", "python", "shapes" ]
stackoverflow_0074623458_list_python_shapes.txt
Q: Mask RCNN, AttributeError: module 'keras.engine' has no attribute 'Layer I tried to run matterport/MaskRCNN. Even though I've tried to change import keras.engine as KE to import keras.engine.topology as KE topology didn't work because topology module could not be resolved. I've also tried pip uninstall keras -y pip uninstall keras-nightly -y pip uninstall keras-Preprocessing -y pip uninstall keras-vis -y pip uninstall tensorflow -y pip uninstall h5py -y and install new by pip install tensorflow==1.13.1 pip install keras==2.0.8 pip install h5py==2.10.0 it didn't work because tensorflow's version error. I've researched through github discussion and stackflow. Can anyone help me to solve this problem? AttributeError Traceback (most recent call last) Cell In [2], line 16 14 sys.path.append(ROOT_DIR) # To find local version of the library 15 from mrcnn import utils ---> 16 import mrcnn.model as modellib 17 from mrcnn import visualize 18 # Import COCO config File c:\Users\admin\AppData\Local\Programs\Python\Python310\lib\site-packages\mask_rcnn-2.1-py3.10.egg\mrcnn\model.py:255 251 clipped.set_shape((clipped.shape[0], 4)) 252 return clipped --> 255 class ProposalLayer(KE.Layer): 256 """Receives anchor scores and selects a subset to pass as proposals 257 to the second stage. Filtering is done based on anchor scores and 258 non-max suppression to remove overlaps. It also applies bounding (...) 267 Proposals in normalized coordinates [batch, rois, (y1, x1, y2, x2)] 268 """ 270 def __init__(self, proposal_count, nms_threshold, config=None, **kwargs): AttributeError: module 'keras.engine' has no attribute 'Layer' Thank you. A: Change "keras.engine as KE" to "keras.layers as KE"
Mask RCNN, AttributeError: module 'keras.engine' has no attribute 'Layer'
I tried to run matterport/MaskRCNN. Even though I've tried to change import keras.engine as KE to import keras.engine.topology as KE topology didn't work because topology module could not be resolved. I've also tried pip uninstall keras -y pip uninstall keras-nightly -y pip uninstall keras-Preprocessing -y pip uninstall keras-vis -y pip uninstall tensorflow -y pip uninstall h5py -y and install new by pip install tensorflow==1.13.1 pip install keras==2.0.8 pip install h5py==2.10.0 it didn't work because tensorflow's version error. I've researched through github discussion and stackflow. Can anyone help me to solve this problem? AttributeError Traceback (most recent call last) Cell In [2], line 16 14 sys.path.append(ROOT_DIR) # To find local version of the library 15 from mrcnn import utils ---> 16 import mrcnn.model as modellib 17 from mrcnn import visualize 18 # Import COCO config File c:\Users\admin\AppData\Local\Programs\Python\Python310\lib\site-packages\mask_rcnn-2.1-py3.10.egg\mrcnn\model.py:255 251 clipped.set_shape((clipped.shape[0], 4)) 252 return clipped --> 255 class ProposalLayer(KE.Layer): 256 """Receives anchor scores and selects a subset to pass as proposals 257 to the second stage. Filtering is done based on anchor scores and 258 non-max suppression to remove overlaps. It also applies bounding (...) 267 Proposals in normalized coordinates [batch, rois, (y1, x1, y2, x2)] 268 """ 270 def __init__(self, proposal_count, nms_threshold, config=None, **kwargs): AttributeError: module 'keras.engine' has no attribute 'Layer' Thank you.
[ "Change\n\"keras.engine as KE\"\nto\n\"keras.layers as KE\"\n" ]
[ 1 ]
[]
[]
[ "keras", "mask_rcnn", "python" ]
stackoverflow_0074023523_keras_mask_rcnn_python.txt
Q: python selenium clicking a list object I am trying to click 1 Min button on this site below is my python code url = 'https://www.investing.com/technical/technical-analysis' driver.get(url) events = WebDriverWait(driver, 30).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "section#leftColumn"))) print("Required elements found") events.find_element(By.XPATH,"//a[text()='1 Min']").click() Am getting the following error: events.find_element(By.XPATH,"//a[text()='1 Min']").click() AttributeError: 'list' object has no attribute 'find_element' What can I change in the code to click the '1 Min' button succesfully? A: You have to use 'for' loop to iterate through all the elements in 'events' element: events = WebDriverWait(driver, 30).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "section#leftColumn"))) print("Required elements found") for event in events: event.find_element(By.XPATH,"//a[text()='1 Min']").click() A: Maybe change the last line to; driver.find_element Also you can check for that XPATH with WebDriverWait until located.
python selenium clicking a list object
I am trying to click the 1 Min button on this site; below is my Python code: url = 'https://www.investing.com/technical/technical-analysis' driver.get(url) events = WebDriverWait(driver, 30).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "section#leftColumn"))) print("Required elements found") events.find_element(By.XPATH,"//a[text()='1 Min']").click() I am getting the following error: events.find_element(By.XPATH,"//a[text()='1 Min']").click() AttributeError: 'list' object has no attribute 'find_element' What can I change in the code to click the '1 Min' button successfully?
[ "You have to use 'for' loop to iterate through all the elements in 'events' element:\nevents = WebDriverWait(driver, 30).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, \"section#leftColumn\")))\nprint(\"Required elements found\")\nfor event in events:\n event.find_element(By.XPATH,\"//a[text()='1 Min']\").click()\n\n", "Maybe change the last line to;\ndriver.find_element\nAlso you can check for that XPATH with WebDriverWait until located.\n" ]
[ 1, 0 ]
[]
[]
[ "python", "selenium", "web_crawler", "web_scraping" ]
stackoverflow_0074623368_python_selenium_web_crawler_web_scraping.txt
Q: Fast method for key-point based image matching [Python] The scenario is, Let's suppose I took random images from my Gallery and put in some folder, and also print these image in hard form. Now I turned on my laptop camera and place an image in front of camera, My code need the tell the name of matching image from the folder. Actually the code I have add in the question works fine but when I increases the number of image in folder then Identification start taking more time. I want to know, is there any method that I can try to improve my speed? Here is the working Demo of my scenario. import cv2 import os from pathlib import Path import time # Define path pathImages = 'ImagesQuery' # Set the Number of detected Features orb = cv2.ORB.create(nfeatures=1000) # Set the threshold of minimum Features detected to give a positive, around 20 to 30 thres = 27 # List Images and Print out their Names and how many there are in the Folder images = [] classNamesImages = [] myListImages = os.listdir(pathImages) print(myListImages) print('Total Classes Detected', len(myListImages)) # this will read in the images for cl in myListImages: imgCur = cv2.imread(f'{pathImages}/{cl}', 0) images.append(imgCur) # delete the file extension classNamesImages.append(os.path.splitext(cl)[0]) print(classNamesImages) # this will find the matching points in the images def findDes(images): desList = [] for img in images: kp, des = orb.detectAndCompute(img, None) desList.append(des) return desList # this will compare the matches and find the corresponding image def findID(img, desList): image_matching_time = time.time() kp2, des2 = orb.detectAndCompute(img, None) bf = cv2.BFMatcher() matchList = [] finalVal = -1 try: for des in desList: matches = bf.knnMatch(des, des2, k=2) good = [] for m, n in matches: # the distance of the matches in comparison to each other if m.distance < 0.75 * n.distance: good.append([m]) matchList.append(len(good)) except: pass # uncomment to see how many positive matches, according to this the thres is set # print(matchList) if len(matchList) != 0: if max(matchList) > thres: finalVal = matchList.index(max(matchList)) print("matching time --- %s seconds ---" % (time.time() - image_matching_time)) return finalVal desList = findDes(images) print(len(desList)) # open Webcam cap = cv2.VideoCapture(1) while True: success, img2 = cap.read() imgOriginal = img2.copy() # convert Camera to Grayscale img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY) # if Matching with Image in List, send the respective Name id = findID(img2, desList) if id != -1: # put text for the found Image cv2.putText(imgOriginal, classNamesImages[id], (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 3) # show the final Image cv2.imshow('img2', imgOriginal) cv2.waitKey(1) I have tried with deep learning base solution but it did't work. Second I have tried FLANN feature matcher its showing the same time delay 0.4 sec as in upper shown code. I am looking for any fast solution. Here is a sample of original image from folder and query image from the camera. [Edit]: Number of query images is around 1000 and I want to compare 20 fps with all query images. A: One way to speed up the process is to use a feature descriptor that is more efficient than ORB. For example, you can use SIFT or SURF which are more efficient than ORB. You can also use a combination of feature descriptors such as ORB + SIFT or ORB + SURF. Another way to speed up the process is to use a faster feature matching algorithm such as FLANN or KNN. 
FLANN is usually faster than KNN, but it is also more memory intensive. You can also use a pre-trained model such as a convolutional neural network (CNN) to identify the images. This will be much faster than using feature descriptors and matching algorithms. However, it will require more training data and more computing power. A: another option, you can use pretrained model called SuperGlue: https://github.com/magicleap/SuperGluePretrainedNetwork
Fast method for key-point based image matching [Python]
The scenario is, Let's suppose I took random images from my Gallery and put in some folder, and also print these image in hard form. Now I turned on my laptop camera and place an image in front of camera, My code need the tell the name of matching image from the folder. Actually the code I have add in the question works fine but when I increases the number of image in folder then Identification start taking more time. I want to know, is there any method that I can try to improve my speed? Here is the working Demo of my scenario. import cv2 import os from pathlib import Path import time # Define path pathImages = 'ImagesQuery' # Set the Number of detected Features orb = cv2.ORB.create(nfeatures=1000) # Set the threshold of minimum Features detected to give a positive, around 20 to 30 thres = 27 # List Images and Print out their Names and how many there are in the Folder images = [] classNamesImages = [] myListImages = os.listdir(pathImages) print(myListImages) print('Total Classes Detected', len(myListImages)) # this will read in the images for cl in myListImages: imgCur = cv2.imread(f'{pathImages}/{cl}', 0) images.append(imgCur) # delete the file extension classNamesImages.append(os.path.splitext(cl)[0]) print(classNamesImages) # this will find the matching points in the images def findDes(images): desList = [] for img in images: kp, des = orb.detectAndCompute(img, None) desList.append(des) return desList # this will compare the matches and find the corresponding image def findID(img, desList): image_matching_time = time.time() kp2, des2 = orb.detectAndCompute(img, None) bf = cv2.BFMatcher() matchList = [] finalVal = -1 try: for des in desList: matches = bf.knnMatch(des, des2, k=2) good = [] for m, n in matches: # the distance of the matches in comparison to each other if m.distance < 0.75 * n.distance: good.append([m]) matchList.append(len(good)) except: pass # uncomment to see how many positive matches, according to this the thres is set # print(matchList) if len(matchList) != 0: if max(matchList) > thres: finalVal = matchList.index(max(matchList)) print("matching time --- %s seconds ---" % (time.time() - image_matching_time)) return finalVal desList = findDes(images) print(len(desList)) # open Webcam cap = cv2.VideoCapture(1) while True: success, img2 = cap.read() imgOriginal = img2.copy() # convert Camera to Grayscale img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY) # if Matching with Image in List, send the respective Name id = findID(img2, desList) if id != -1: # put text for the found Image cv2.putText(imgOriginal, classNamesImages[id], (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 3) # show the final Image cv2.imshow('img2', imgOriginal) cv2.waitKey(1) I have tried with deep learning base solution but it did't work. Second I have tried FLANN feature matcher its showing the same time delay 0.4 sec as in upper shown code. I am looking for any fast solution. Here is a sample of original image from folder and query image from the camera. [Edit]: Number of query images is around 1000 and I want to compare 20 fps with all query images.
[ "One way to speed up the process is to use a feature descriptor that is more efficient than ORB. For example, you can use SIFT or SURF which are more efficient than ORB. You can also use a combination of feature descriptors such as ORB + SIFT or ORB + SURF.\nAnother way to speed up the process is to use a faster feature matching algorithm such as FLANN or KNN. FLANN is usually faster than KNN, but it is also more memory intensive.\nYou can also use a pre-trained model such as a convolutional neural network (CNN) to identify the images. This will be much faster than using feature descriptors and matching algorithms. However, it will require more training data and more computing power.\n", "another option, you can use pretrained model called SuperGlue: https://github.com/magicleap/SuperGluePretrainedNetwork\n" ]
[ 3, 1 ]
[]
[]
[ "cbir", "computer_vision", "machine_learning", "opencv", "python" ]
stackoverflow_0074499400_cbir_computer_vision_machine_learning_opencv_python.txt
Q: can find the problem to solve Template doesn't working? from django.shortcuts import render # Create your views here. def home(request): return render(request, 'dashboard/home.html') from django.urls import path from . import views urlpatterns = [ path('', views.home), ] Plz Help me to Resolve the Problem adding path in template_DIRs A: In your settings.py add this variable: import os TEMPLATE_DIR = os.path.join(BASE_DIR, "templates") Add TEMPLATE_DIR in TEMPLATE_DIRS. Then, at the root of your project create a folder named templates. Or create a folder named templates inside your app's directory. Inside the templates folder place your dashboard folder. A: How to solve template does not exist? Step 1: Try to find the template setting in settings.py file and check if it is correct. If it is not correct, then add path in DIRS in TEMPLATES EX: DIRS: [os.path.join(BASE_DIR, 'templates')]. Step 2: After step1, check the correct urls path if there is some spelling mistakes while navigating in browser then also you will get template does not exists error. Step 3: Check the spelling of template files and also check spelling of templates while rendering in views.
Can't find the problem: why isn't my template working?
from django.shortcuts import render # Create your views here. def home(request): return render(request, 'dashboard/home.html') from django.urls import path from . import views urlpatterns = [ path('', views.home), ] Please help me resolve the problem by adding the right path in TEMPLATE_DIRS.
[ "In your settings.py add this variable:\nimport os\n\nTEMPLATE_DIR = os.path.join(BASE_DIR, \"templates\")\n\nAdd TEMPLATE_DIR in TEMPLATE_DIRS.\nThen, at the root of your project create a folder named templates. Or create a folder named templates inside your app's directory. Inside the templates folder place your dashboard folder.\n", "How to solve template does not exist?\nStep 1: Try to find the template setting in settings.py file and check if it is correct. If it is not correct, then add path in DIRS in TEMPLATES EX: DIRS: [os.path.join(BASE_DIR, 'templates')].\nStep 2: After step1, check the correct urls path if there is some spelling mistakes while navigating in browser then also you will get template does not exists error.\nStep 3: Check the spelling of template files and also check spelling of templates while rendering in views.\n" ]
[ 0, 0 ]
[]
[]
[ "django", "django_templates", "django_views", "python" ]
stackoverflow_0074623851_django_django_templates_django_views_python.txt
Q: Django Rest Framework, how to use serializers.ListField with model and view? I want to store an array of integers in the day_of_the_week field. for which I am using the following code models.py class Schedule(models.Model): name = models.CharField(max_length=100) day_of_the_week = models.CharField(max_length=100) serializers.py class ScheduleSerializer(serializers.ModelSerializer): day_of_the_week = serializers.ListField() class Meta(): model = Schedule fields = "__all__" Views.py # schedule list class ScheduleList(APIView): def get(self, request): scheduleData = Schedule.objects.all() serializer = ScheduleSerializer(scheduleData, many=True) return Response(serializer.data) def post(self, request): serializer = ScheduleSerializer(data=request.data) serializer.is_valid(raise_exception=True) serializer.save() return Response("Schedule Added") Data save successfully but when I try to get data it returns data in this format "day_of_the_week": [ "[2", " 1]" ], is there any way to get an array of integers as a response? A: While saving try to add the child field in the serializer: class ScheduleSerializer(serializers.ModelSerializer): day_of_the_week = serializers.SerializerMethodField() def get_day_of_the_week(self, instance): return instance.day_of_the_week[1:-1].split(',') class Meta(): model = Schedule fields = "__all__"
Django Rest Framework, how to use serializers.ListField with model and view?
I want to store an array of integers in the day_of_the_week field. for which I am using the following code models.py class Schedule(models.Model): name = models.CharField(max_length=100) day_of_the_week = models.CharField(max_length=100) serializers.py class ScheduleSerializer(serializers.ModelSerializer): day_of_the_week = serializers.ListField() class Meta(): model = Schedule fields = "__all__" Views.py # schedule list class ScheduleList(APIView): def get(self, request): scheduleData = Schedule.objects.all() serializer = ScheduleSerializer(scheduleData, many=True) return Response(serializer.data) def post(self, request): serializer = ScheduleSerializer(data=request.data) serializer.is_valid(raise_exception=True) serializer.save() return Response("Schedule Added") Data save successfully but when I try to get data it returns data in this format "day_of_the_week": [ "[2", " 1]" ], is there any way to get an array of integers as a response?
[ "While saving try to add the child field in the serializer:\nclass ScheduleSerializer(serializers.ModelSerializer):\n day_of_the_week = serializers.SerializerMethodField()\n def get_day_of_the_week(self, instance):\n\n return instance.day_of_the_week[1:-1].split(',')\n\n\n class Meta():\n model = Schedule\n fields = \"__all__\"\n\n" ]
[ 1 ]
[]
[]
[ "django", "django_models", "django_rest_framework", "mysql", "python" ]
stackoverflow_0074623523_django_django_models_django_rest_framework_mysql_python.txt
Q: maximum word split using Recursion Given a string s and a dictionary of valid words d, determine the largest number of valid words the string can be split up into using Recursion I tried solving this problem with the code below but it is not giving me the answer I am looking for. Can someone please help me understand how Recursion can be used to solve this problem. I am particularly trying to get the maximum number of words. for example- "warmontheat" can be divided into maximum 4 words- warm on the at. def wordBreak( s, wordDict): if len(s)==0: return 0 for end in range( 1, len(s) + 1): if s[0:end] in wordDict and wordBreak(s, wordDict): return 1 return wordBreak(s, wordDict) s="warmontheat" words=("war","month","on","the","heat","eat","he","arm","at","warm") print(wordBreak(s,words)) A: There are these issues: The recursive call is made with the same values for both arguments, which means the recursion -- if it starts -- will never end: the conditions are always the same. The aim of recursion is to solve a smaller problem, so you should pass a shorter string -- without the prefix that was just matched. So pass s[end:]. The recursive call is made as if it returns a boolean, but you'll want to know how many partitions were made, so making that call in an if condition is not going to help with that. You need to make that call inside the if block and do something with the result you got from it. With return 1 you're actually saying the recursive call only managed to chop the string up into one partition, but this is not true. Another way to put it: your function is only able to return 0 or 1, nothing else. Instead, the returned value should depend on the number you get back from recursion. It is wrong to have a return inside the loop. There might be cases where you have success with a word, but actually need an alternative, longer word first in order to benefit from shorter words in the rest of the string, and so you cannot assume that the first success corresponds to the optimal solution. The loop must continue to make iterations, and only when the loop has completed, you can know which of the valid options is the optimal one. Here is the corrected code: def wordBreak(s, wordDict): if len(s) == 0: return 0 result = float('-inf') for end in range(1, len(s) + 1): if s[:end] in wordDict: result = max(result, wordBreak(s[end:], wordDict)) return 1 + result
maximum word split using Recursion
Given a string s and a dictionary of valid words d, determine the largest number of valid words the string can be split up into using Recursion I tried solving this problem with the code below but it is not giving me the answer I am looking for. Can someone please help me understand how Recursion can be used to solve this problem. I am particularly trying to get the maximum number of words. for example- "warmontheat" can be divided into maximum 4 words- warm on the at. def wordBreak( s, wordDict): if len(s)==0: return 0 for end in range( 1, len(s) + 1): if s[0:end] in wordDict and wordBreak(s, wordDict): return 1 return wordBreak(s, wordDict) s="warmontheat" words=("war","month","on","the","heat","eat","he","arm","at","warm") print(wordBreak(s,words))
[ "There are these issues:\n\nThe recursive call is made with the same values for both arguments, which means the recursion -- if it starts -- will never end: the conditions are always the same. The aim of recursion is to solve a smaller problem, so you should pass a shorter string -- without the prefix that was just matched. So pass s[end:].\n\nThe recursive call is made as if it returns a boolean, but you'll want to know how many partitions were made, so making that call in an if condition is not going to help with that. You need to make that call inside the if block and do something with the result you got from it.\n\nWith return 1 you're actually saying the recursive call only managed to chop the string up into one partition, but this is not true. Another way to put it: your function is only able to return 0 or 1, nothing else. Instead, the returned value should depend on the number you get back from recursion.\n\nIt is wrong to have a return inside the loop. There might be cases where you have success with a word, but actually need an alternative, longer word first in order to benefit from shorter words in the rest of the string, and so you cannot assume that the first success corresponds to the optimal solution. The loop must continue to make iterations, and only when the loop has completed, you can know which of the valid options is the optimal one.\n\n\nHere is the corrected code:\ndef wordBreak(s, wordDict):\n if len(s) == 0:\n return 0\n result = float('-inf')\n for end in range(1, len(s) + 1):\n if s[:end] in wordDict:\n result = max(result, wordBreak(s[end:], wordDict))\n return 1 + result\n\n" ]
[ 0 ]
[]
[]
[ "algorithm", "dynamic_programming", "python", "recursion" ]
stackoverflow_0074621047_algorithm_dynamic_programming_python_recursion.txt
Q: typeerror print_slow() takes 1 positional argument but 2 were given so im fiddling around in pytbon cuz im bored and i realise i try to slow print a input i have earlier in the code i bave defined slow print ive imported every thing i need but when i run it it saw its got 1 positional argument buts been give 2 and im not that good at coding and am only a young student so coupd anyone be a huge help and explain it in basic terms ` import sys import os import time def print_slow(str): for letter in str: sys.stdout.write(letter) sys.stdout.flush() time.sleep(0.1) num1 = int(input("Chose any number: ")) print_slow("Did you say",num1) so my issue is that i cant seem to get it to slow print i expected this to work like it always does but i've never slow printed an input before A: You are implicitly checking that the input value is of type int by converting the input string. If you insist on doing that then you might find this easier: from time import sleep def print_slow(*args): for arg in args: for c in str(arg): print(c, flush=True, end='') sleep(0.1) print() v = int(input('Choose any number: ')) print_slow('Did you say ', v, '?') A: Replace print_slow("Did you say", num1) by print_slow(f"Did you say {num1}") and you should be good to go. This is (in my opinion) a slightly simpler solution than Cobra's. Here we are using Formatted strings(example) which you will encounter often going forward and that you should take a couple of minutes to understand. Also, as mentioned in the comments of your post, DO NOT NAME A VARIABLE str (or more generally, do not name a variable with a type name) for it will mess things up later.
typeerror print_slow() takes 1 positional argument but 2 were given
I'm fiddling around in Python because I'm bored, and I realised I'm trying to slow-print an input I took earlier in the code. I have defined slow print and imported everything I need, but when I run it, it says it got 1 positional argument but 2 were given. I'm not that good at coding and am only a young student, so could anyone be a huge help and explain it in basic terms? import sys import os import time def print_slow(str): for letter in str: sys.stdout.write(letter) sys.stdout.flush() time.sleep(0.1) num1 = int(input("Chose any number: ")) print_slow("Did you say",num1) My issue is that I can't seem to get it to slow-print. I expected this to work like it always does, but I've never slow-printed an input before.
[ "You are implicitly checking that the input value is of type int by converting the input string. If you insist on doing that then you might find this easier:\nfrom time import sleep\n\ndef print_slow(*args):\n for arg in args:\n for c in str(arg):\n print(c, flush=True, end='')\n sleep(0.1)\n print()\n\nv = int(input('Choose any number: '))\n\nprint_slow('Did you say ', v, '?')\n\n", "Replace print_slow(\"Did you say\", num1) by print_slow(f\"Did you say {num1}\") and you should be good to go. This is (in my opinion) a slightly simpler solution than Cobra's. Here we are using Formatted strings(example) which you will encounter often going forward and that you should take a couple of minutes to understand.\nAlso, as mentioned in the comments of your post, DO NOT NAME A VARIABLE str (or more generally, do not name a variable with a type name) for it will mess things up later.\n" ]
[ 0, 0 ]
[]
[]
[ "input", "printing", "python", "python_2.7" ]
stackoverflow_0074623860_input_printing_python_python_2.7.txt
Q: Is there a way to grab list attributes that have been initialized using self and append data to them in Python? I have a class in Python that initializes the attributes of an environment. I am attempting to grab the topographyRegistry attribute list of my Environment class in a separate function, which when called, should take in the parameters of 'self' and the topography to be added. When this function is called, it should simply take an argument such as addTopographyToEnvironment(self, "Mountains") and append it to the topographyRegistry of the Environment class. When implementing what I mentioned above, I ran into an error regarding the 'self' method not being defined. Hence, whenever I call the above line, it gives me: print (Environment.addTopographyToEnvironment(self, "Mountains")) ^^^^ NameError: name 'self' is not defined This leads me to believe that I am unaware of and missing a step in my implementation, but I am unsure of what that is exactly. Here is the relevant code: class EnvironmentInfo: def __init__(self, perceivableFood, perceivableCreatures, regionTopography, lightVisibility): self.perceivableFood = perceivableFood self.perceivableCreatures = perceivableCreatures self.regionTopography = regionTopography self.lightVisibility = lightVisibility class Environment: def __init__(self, creatureRegistry, foodRegistry, topographyRegistery, lightVisibility): logging.info("Creating new environment") self.creatureRegistry = [] self.foodRegistry = [] self.topographyRegistery = [] self.lightVisibility = True def displayEnvironment(): creatureRegistry = [] foodRegistry = [] topographyRegistery = ['Grasslands'] lightVisibility = True print (f"Creatures: {creatureRegistry} Food Available: {foodRegistry} Topography: {topographyRegistery} Contains Light: {lightVisibility}") def addTopographyToEnvironment(self, topographyRegistery): logging.info( f"Registering {topographyRegistery} as a region in the Environment") self.topographyRegistery.append(topographyRegistery) def getRegisteredEnvironment(self): return self.topographyRegistry if __name__ == "__main__": print (Environment.displayEnvironment()) #Display hardcoded attributes print (Environment.addTopographyToEnvironment(self, "Mountains"))#NameError print (Environment.getRegisteredEnvironment(self)) #NameError What am I doing wrong or not understanding when using 'self'? Edit: In regard to omitting 'self' from the print statement, it still gives me an error indicating a TypeError: print (Environment.addTopographyToEnvironment("Mountains")) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: Environment.addTopographyToEnvironment() missing 1 required positional argument: 'topographyRegistery' A: Comments Despite having def getRegisteredEnvironment(self): it wasn't indented, so it's not recognized as a class method. self is a keyword used in conjunction with classes (class methods or attributes) - not functions. self is implied to be the instantiated object (eg a = Environment(...) -> self would refer to a) or the module's (I can't think of the proper term) class. You didn't have your addTopographyToEnvironment class method defined. In terms of your Environment class, you aren't using the variables you are passing to the class, so I made that change as well - I don't know if that was intentional or not. 
As per your comment from the other answer, if you had def my_class_method(self) and you try to invoke it through an object with additional parameters, like so a = my_object(); a.my_class_method("Mountains"), you should get an error of the sorts, "2 positional arguments passed, expected 1.". Your main problem is that you are doing Environment.class_method() and not creating an object from the class. Do a = Environment(whatever arguments here) to create an object from the class, then do a.addTopographyToEnvironment("Mountains") to do what you were going to do with "Mountains" and that object. What you have currently may be right, its just is missing the proper implementation, but the below article does a great job explaining the differences between all of them (Class Methods vs Static Methods vs Instance Methods), and is definitely worth the read. class EnvironmentInfo: def __init__(self, perceivableFood, perceivableCreatures, regionTopography, lightVisibility): self.perceivableFood = perceivableFood self.perceivableCreatures = perceivableCreatures self.regionTopography = regionTopography self.lightVisibility = lightVisibility class Environment: def __init__(self, creatureRegistry, foodRegistry, topographyRegistery, lightVisibility): logging.info("Creating new environment") self.creatureRegistry = creatureRegistry self.foodRegistry = foodRegistry self.topographyRegistery = topographyRegistery self.lightVisibility = lightVisibility def displayEnvironment(self): creatureRegistry = [] foodRegistry = [] topographyRegistery = ['Grasslands'] lightVisibility = True print (f"Creatures: {creatureRegistry} Food Available: {foodRegistry} Topography: {topographyRegistery} Contains Light: {lightVisibility}") def addTopographyToEnvironment(self, environment): return "Whatever this is supposed to return." + environment def getRegisteredEnvironment(self): return self.topographyRegistry if __name__ == "__main__": print (Environment.displayEnvironment()) #Display hardcoded attributes print (Environment.addTopographyToEnvironment("Mountains"))#NameError print (Environment.getRegisteredEnvironment()) #NameError Object Instantiation In Python With all that out of the way, I will answer the question as is posed, "Is there a way to grab list attributes that have been initialized using self and append data to them in Python?". I am assuming you mean the contents of the list and not the attributes of it, the attributes would be "got" or at least printed with dir() As a simple example: class MyClass: def __init__(self, my_list): self.my_list = my_list if __name__ == "__main__": a = MyClass([1, 2, 3, 4, 5]) print(a.my_list) # will print [1, 2, 3, 4, 5] a.my_list.append(6) print(a.my_list) # will print [1, 2, 3, 4, 5, 6] print(dir(a.my_list)) # will print all object methods and object attributes for the list associated with object "a". Sub Classing In Python Given what you have above, it looks like you should be using method sub classing - this is done with the keyword super. 
From what I can guess, it would look like you'd implement that kind of like this: class EnvironmentInfo: def __init__(self, perceivableFood, perceivableCreatures, regionTopography, lightVisibility): self.perceivableFood = perceivableFood self.perceivableCreatures = perceivableCreatures self.regionTopography = regionTopography self.lightVisibility = lightVisibility class Environment(EnvironmentInfo): def __init__(self, creatureRegistry, foodRegistry, topographyRegistery, lightVisibility, someOtherThingAvailableToEnvironmentButNotEnvironmentInfo): logging.info("Creating new environment") super.__init__(foodRegistry, creatureRegistry, topographyRegistery, lightVisibility) self.my_var1 = someOtherThingAvailableToEnvironmentButNotEnvironmentInfo def displayEnvironment(self): creatureRegistry = [] foodRegistry = [] topographyRegistery = ['Grasslands'] lightVisibility = True print (f"Creatures: {creatureRegistry} Food Available: {foodRegistry} Topography: {topographyRegistery} Contains Light: {lightVisibility}") def addTopographyToEnvironment(self, environment): return "Whatever this is supposed to return." + environment def getRegisteredEnvironment(self): return self.topographyRegistry def methodAvailableToSubClassButNotSuper(self) return self.my_var1 if __name__ == "__main__": a = Environment([], [], [], True, "Only accessible to the sub class") print(a.methodAvailableToSubClassButNotSuper()) as the article describes when talking about super(), methods and attributes from the super class are available to the sub class. Extra Resources Class Methods vs Static Methods vs Instance Methods - "Difference #2: Method Defination" gives an example that would be helpful I think. What is sub classing in Python? - Just glanced at it; probably an okay read. A: Self represents the instance of the class and you don't have access to it outside of the class, by the way when you are calling object methods of a class you don't need to pass self cause it automatically be passed to the method you just need to pass the parameters after self so if you want to call an object method like addTopographyToEnvironment(self, newVal) you should do it like: Environment.addTopographyToEnvironment("Mountains") and it should work fine
Is there a way to grab list attributes that have been initialized using self and append data to them in Python?
I have a class in Python that initializes the attributes of an environment. I am attempting to grab the topographyRegistry attribute list of my Environment class in a separate function, which when called, should take in the parameters of 'self' and the topography to be added. When this function is called, it should simply take an argument such as addTopographyToEnvironment(self, "Mountains") and append it to the topographyRegistry of the Environment class. When implementing what I mentioned above, I ran into an error regarding the 'self' method not being defined. Hence, whenever I call the above line, it gives me: print (Environment.addTopographyToEnvironment(self, "Mountains")) ^^^^ NameError: name 'self' is not defined This leads me to believe that I am unaware of and missing a step in my implementation, but I am unsure of what that is exactly. Here is the relevant code: class EnvironmentInfo: def __init__(self, perceivableFood, perceivableCreatures, regionTopography, lightVisibility): self.perceivableFood = perceivableFood self.perceivableCreatures = perceivableCreatures self.regionTopography = regionTopography self.lightVisibility = lightVisibility class Environment: def __init__(self, creatureRegistry, foodRegistry, topographyRegistery, lightVisibility): logging.info("Creating new environment") self.creatureRegistry = [] self.foodRegistry = [] self.topographyRegistery = [] self.lightVisibility = True def displayEnvironment(): creatureRegistry = [] foodRegistry = [] topographyRegistery = ['Grasslands'] lightVisibility = True print (f"Creatures: {creatureRegistry} Food Available: {foodRegistry} Topography: {topographyRegistery} Contains Light: {lightVisibility}") def addTopographyToEnvironment(self, topographyRegistery): logging.info( f"Registering {topographyRegistery} as a region in the Environment") self.topographyRegistery.append(topographyRegistery) def getRegisteredEnvironment(self): return self.topographyRegistry if __name__ == "__main__": print (Environment.displayEnvironment()) #Display hardcoded attributes print (Environment.addTopographyToEnvironment(self, "Mountains"))#NameError print (Environment.getRegisteredEnvironment(self)) #NameError What am I doing wrong or not understanding when using 'self'? Edit: In regard to omitting 'self' from the print statement, it still gives me an error indicating a TypeError: print (Environment.addTopographyToEnvironment("Mountains")) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: Environment.addTopographyToEnvironment() missing 1 required positional argument: 'topographyRegistery'
[ "Comments\n\nDespite having def getRegisteredEnvironment(self): it wasn't indented, so it's not recognized as a class method.\nself is a keyword used in conjunction with classes (class methods or attributes) - not functions. self is implied to be the instantiated object (eg a = Environment(...) -> self would refer to a) or the module's (I can't think of the proper term) class.\nYou didn't have your addTopographyToEnvironment class method defined.\nIn terms of your Environment class, you aren't using the variables you are passing to the class, so I made that change as well - I don't know if that was intentional or not.\nAs per your comment from the other answer, if you had def my_class_method(self) and you try to invoke it through an object with additional parameters, like so a = my_object(); a.my_class_method(\"Mountains\"), you should get an error of the sorts, \"2 positional arguments passed, expected 1.\".\nYour main problem is that you are doing Environment.class_method() and not creating an object from the class. Do a = Environment(whatever arguments here) to create an object from the class, then do a.addTopographyToEnvironment(\"Mountains\") to do what you were going to do with \"Mountains\" and that object. What you have currently may be right, its just is missing the proper implementation, but the below article does a great job explaining the differences between all of them (Class Methods vs Static Methods vs Instance Methods), and is definitely worth the read.\n\nclass EnvironmentInfo:\n def __init__(self, perceivableFood, perceivableCreatures, regionTopography, lightVisibility):\n self.perceivableFood = perceivableFood\n self.perceivableCreatures = perceivableCreatures\n self.regionTopography = regionTopography\n self.lightVisibility = lightVisibility\n\nclass Environment:\n def __init__(self, creatureRegistry, foodRegistry, topographyRegistery, lightVisibility):\n logging.info(\"Creating new environment\")\n self.creatureRegistry = creatureRegistry\n self.foodRegistry = foodRegistry\n self.topographyRegistery = topographyRegistery\n self.lightVisibility = lightVisibility\n\n def displayEnvironment(self):\n creatureRegistry = []\n foodRegistry = []\n topographyRegistery = ['Grasslands']\n lightVisibility = True\n print (f\"Creatures: {creatureRegistry} Food Available: {foodRegistry} Topography: {topographyRegistery} Contains Light: {lightVisibility}\")\n\n def addTopographyToEnvironment(self, environment):\n return \"Whatever this is supposed to return.\" + environment\n\n def getRegisteredEnvironment(self):\n return self.topographyRegistry\n\nif __name__ == \"__main__\":\n print (Environment.displayEnvironment()) #Display hardcoded attributes\n print (Environment.addTopographyToEnvironment(\"Mountains\"))#NameError\n print (Environment.getRegisteredEnvironment()) #NameError\n\n\nObject Instantiation In Python\nWith all that out of the way, I will answer the question as is posed, \"Is there a way to grab list attributes that have been initialized using self and append data to them in Python?\". 
I am assuming you mean the contents of the list and not the attributes of it, the attributes would be \"got\" or at least printed with dir()\nAs a simple example:\nclass MyClass:\n def __init__(self, my_list):\n self.my_list = my_list\n\n\nif __name__ == \"__main__\":\n a = MyClass([1, 2, 3, 4, 5])\n print(a.my_list)\n # will print [1, 2, 3, 4, 5]\n a.my_list.append(6)\n print(a.my_list)\n # will print [1, 2, 3, 4, 5, 6]\n print(dir(a.my_list))\n # will print all object methods and object attributes for the list associated with object \"a\".\n\n\nSub Classing In Python\nGiven what you have above, it looks like you should be using method sub classing - this is done with the keyword super. From what I can guess, it would look like you'd implement that kind of like this:\nclass EnvironmentInfo:\n def __init__(self, perceivableFood, perceivableCreatures, regionTopography, lightVisibility):\n self.perceivableFood = perceivableFood\n self.perceivableCreatures = perceivableCreatures\n self.regionTopography = regionTopography\n self.lightVisibility = lightVisibility\n\nclass Environment(EnvironmentInfo):\n def __init__(self, creatureRegistry, foodRegistry, topographyRegistery, lightVisibility, someOtherThingAvailableToEnvironmentButNotEnvironmentInfo):\n logging.info(\"Creating new environment\")\n super.__init__(foodRegistry, creatureRegistry, topographyRegistery, lightVisibility)\n self.my_var1 = someOtherThingAvailableToEnvironmentButNotEnvironmentInfo\n\n def displayEnvironment(self):\n creatureRegistry = []\n foodRegistry = []\n topographyRegistery = ['Grasslands']\n lightVisibility = True\n print (f\"Creatures: {creatureRegistry} Food Available: {foodRegistry} Topography: {topographyRegistery} Contains Light: {lightVisibility}\")\n\n def addTopographyToEnvironment(self, environment):\n return \"Whatever this is supposed to return.\" + environment\n\n def getRegisteredEnvironment(self):\n return self.topographyRegistry\n\n def methodAvailableToSubClassButNotSuper(self)\n return self.my_var1\n\nif __name__ == \"__main__\":\n a = Environment([], [], [], True, \"Only accessible to the sub class\")\n print(a.methodAvailableToSubClassButNotSuper())\n\nas the article describes when talking about super(), methods and attributes from the super class are available to the sub class.\n\nExtra Resources\nClass Methods vs Static Methods vs Instance Methods - \"Difference #2: Method Defination\" gives an example that would be helpful I think.\nWhat is sub classing in Python? - Just glanced at it; probably an okay read.\n", "Self represents the instance of the class and you don't have access to it outside of the class, by the way when you are calling object methods of a class you don't need to pass self cause it automatically be passed to the method you just need to pass the parameters after self so if you want to call an object method like addTopographyToEnvironment(self, newVal) you should do it like:\nEnvironment.addTopographyToEnvironment(\"Mountains\")\n\nand it should work fine\n" ]
[ 1, 0 ]
[]
[]
[ "object", "python", "self" ]
stackoverflow_0074623581_object_python_self.txt
Q: Using function scoped fixture to setup a class I have a fixture in conftest.py with a function scope. @pytest.fixture() def registration_setup( test_data, # fixture 1 credentials, # fixture 2 deployment # fixture 3 deployment_object # fixture 4 ): # pre-test cleanup do_cleanup() yield # post-test cleanup do_cleanup() I use it in a test class like this: class TestClass: @pytest.fixture(autouse=True) def _inventory_cleanup(self, registration_setup): log('Cleanup Done!') def test_1(): ... def test_2(): ... def test_3(): ... Now I want to create a new test class where I run the registartion_setup fixture once for the entire class. The desired behaviour here is, First the pre-test cleanup executes and then all the tests in the new test class execute, followed by the post-test cleanup. How can I achieve this, thanks for the help. A: Option 1 You can use the same approach you did on your other test class, but set the fixture scope to class: class TestClass: @pytest.fixture(scope='class', autouse=True) def _inventory_cleanup(self, registration_setup): log('Cleanup Done!') def test_1(): ... def test_2(): ... def test_3(): ... But you will then need to change the scope of the fixture registration_setup to class to avoid a ScopeMismatch error. Option 2 To keep using it with a function scope, I suggest having two fixtures with the same behavior, but with different scopes, like this: @pytest.fixture() def registration_setup_for_function( test_data, # fixture 1 credentials, # fixture 2 deployment # fixture 3 deployment_object # fixture 4 ): # pre-test cleanup do_cleanup() yield # post-test cleanup do_cleanup() @pytest.fixture(scope='class') def registration_setup_for_class( test_data, # fixture 1 credentials, # fixture 2 deployment # fixture 3 deployment_object # fixture 4 ): # pre-test cleanup do_cleanup() yield # post-test cleanup do_cleanup() If your other fixtures 1, 2, 3 and 4 have function scope, you will have to change them also. Option 3 If you don't want to have two identical fixtures with different scopes, you can do something like this: In a conftest.py file in the project root: def pytest_configure(config): config.first_test_executed = False Then, wherever you have your fixture: @pytest.fixture() def registration_setup( test_data, # fixture 1 credentials, # fixture 2 deployment, # fixture 3 deployment_object, # fixture 4 request # Note the request fixture here ): if 'TestClassWhereFixtureShouldRunOnlyOnce' in request.node.nodeid: if not request.config.first_test_executed: # pre-test cleanup do_cleanup() yield # post-test cleanup do_cleanup() request.config.first_test_executed = True else: # pre-test cleanup do_cleanup() yield # post-test cleanup do_cleanup() I know it is still a bit repeated, but this way your tests inside the class will call the registration_setup fixture only once for the whole class, while other tests will call it always. Maybe you will find a better way knowing this now. More info on the documentation: Fixtures can introspect the requesting test context A session-fixture which can look at all collected tests pytest_configure(config) A: Huge thanks to @Marco.S for his answer. Just a small additional check to get it to work def reg_setup(request): if 'TestClassWhereFixtureShouldRunOnlyOnce' in request.node.nodeid: if not request.config.first_test_executed: # pre-test cleanup do_cleanup() request.config.first_test_executed = True yield if 'test_last' in request.node.nodeid: # Check for the last test case. 
# post-test cleanup do_cleanup() else: # pre-test cleanup do_cleanup() yield # post-test cleanup do_cleanup()
Using function scoped fixture to setup a class
I have a fixture in conftest.py with a function scope. @pytest.fixture() def registration_setup( test_data, # fixture 1 credentials, # fixture 2 deployment # fixture 3 deployment_object # fixture 4 ): # pre-test cleanup do_cleanup() yield # post-test cleanup do_cleanup() I use it in a test class like this: class TestClass: @pytest.fixture(autouse=True) def _inventory_cleanup(self, registration_setup): log('Cleanup Done!') def test_1(): ... def test_2(): ... def test_3(): ... Now I want to create a new test class where I run the registration_setup fixture once for the entire class. The desired behaviour is: first the pre-test cleanup executes, then all the tests in the new test class execute, followed by the post-test cleanup. How can I achieve this? Thanks for the help.
[ "Option 1\nYou can use the same approach you did on your other test class, but set the fixture scope to class:\nclass TestClass:\n\n @pytest.fixture(scope='class', autouse=True)\n def _inventory_cleanup(self, registration_setup):\n log('Cleanup Done!')\n \n def test_1():\n ...\n\n def test_2():\n ...\n \n def test_3():\n ...\n\nBut you will then need to change the scope of the fixture registration_setup to class to avoid a ScopeMismatch error.\nOption 2\nTo keep using it with a function scope, I suggest having two fixtures with the same behavior, but with different scopes, like this:\[email protected]()\ndef registration_setup_for_function(\n test_data, # fixture 1\n credentials, # fixture 2\n deployment # fixture 3\n deployment_object # fixture 4\n):\n # pre-test cleanup\n do_cleanup()\n yield\n # post-test cleanup\n do_cleanup()\n\n\[email protected](scope='class')\ndef registration_setup_for_class(\n test_data, # fixture 1\n credentials, # fixture 2\n deployment # fixture 3\n deployment_object # fixture 4\n):\n # pre-test cleanup\n do_cleanup()\n yield\n # post-test cleanup\n do_cleanup()\n\nIf your other fixtures 1, 2, 3 and 4 have function scope, you will have to change them also.\nOption 3\nIf you don't want to have two identical fixtures with different scopes, you can do something like this:\nIn a conftest.py file in the project root:\ndef pytest_configure(config):\n config.first_test_executed = False\n\nThen, wherever you have your fixture:\[email protected]()\ndef registration_setup(\n test_data, # fixture 1\n credentials, # fixture 2\n deployment, # fixture 3\n deployment_object, # fixture 4\n request # Note the request fixture here\n):\n if 'TestClassWhereFixtureShouldRunOnlyOnce' in request.node.nodeid:\n if not request.config.first_test_executed:\n # pre-test cleanup\n do_cleanup()\n yield\n # post-test cleanup\n do_cleanup()\n request.config.first_test_executed = True\n else:\n # pre-test cleanup\n do_cleanup()\n yield\n # post-test cleanup\n do_cleanup()\n\nI know it is still a bit repeated, but this way your tests inside the class will call the registration_setup fixture only once for the whole class, while other tests will call it always. Maybe you will find a better way knowing this now.\nMore info on the documentation:\nFixtures can introspect the requesting test context\nA session-fixture which can look at all collected tests\npytest_configure(config)\n", "Huge thanks to @Marco.S for his answer. Just a small additional check to get it to work\ndef reg_setup(request):\n if 'TestClassWhereFixtureShouldRunOnlyOnce' in request.node.nodeid:\n if not request.config.first_test_executed:\n # pre-test cleanup\n do_cleanup()\n request.config.first_test_executed = True\n yield\n if 'test_last' in request.node.nodeid: # Check for the last test case.\n # post-test cleanup\n do_cleanup()\n else:\n # pre-test cleanup\n do_cleanup()\n yield\n # post-test cleanup\n do_cleanup()\n\n" ]
[ 0, 0 ]
[]
[]
[ "fixtures", "pytest", "pytest_fixtures", "python", "selenium" ]
stackoverflow_0074561999_fixtures_pytest_pytest_fixtures_python_selenium.txt
Q: Standardize same-scale variables? thinking about a problem… should you standardize two predictors that are already on the same scale (say kilograms) but may have different ranges? The model is a KNN I think you should because the model will give the predictor eith the higher range more importance in calculating distance A: It is better to standardize the data even though being on same scale. Standardizing would reduce the distance (specifically euclidean) that would help weights to not vary much from the point intial to them. Having huge seperated distance would rather have more calculation involved. Also distance calculation done in KNN requires feature values to scaling is always prefered.
Standardize same-scale variables?
Thinking about a problem: should you standardize two predictors that are already on the same scale (say kilograms) but may have different ranges? The model is a KNN. I think you should, because the model will give the predictor with the higher range more importance in calculating distance.
[ "It is better to standardize the data even though being on same scale. Standardizing would reduce the distance (specifically euclidean) that would help weights to not vary much from the point intial to them. Having huge seperated distance would rather have more calculation involved. Also distance calculation done in KNN requires feature values to scaling is always prefered.\n" ]
[ 0 ]
[]
[]
[ "knn", "machine_learning", "python", "standardization" ]
stackoverflow_0074623115_knn_machine_learning_python_standardization.txt
Q: How can I know a anaconda installer is for which python version? I want to install python3.7 by anaconda and the anaconda list is shown below: anaconda version list。 My question is how can I know a anaconda installer is for a special verison of python? Actually, I know "Anaconda3-2020.05-Linux-x86_64.sh" is for python3.7。However, I am confused that by what infomation we can get the answer before we finish installing it. A: You can install specific version of python through the anaconda prompt, using: conda install python = 2.7.8 or conda install python = 3.5.0 (for example). You can even create a dedicated python environnement for a specific version: conda create --name py36 python=3.6 A: Anton B answer is correct, but if you just want to download the correct installer, you can use the following: For Anaconda you can use this: https://docs.anaconda.com/anaconda/packages/oldpkglists/ For miniconda you can use the tables on the following page: https://docs.conda.io/en/latest/miniconda.html A similar question was previously asked here: How to get anaconda/miniconda vs python versions mapping table?
How can I know which Python version an Anaconda installer is for?
I want to install Python 3.7 via Anaconda, and the Anaconda installer list is shown below: anaconda version list. My question is: how can I know which specific version of Python an Anaconda installer is for? I know that "Anaconda3-2020.05-Linux-x86_64.sh" is for Python 3.7. However, I am confused about what information tells us the answer before we finish installing it.
[ "You can install specific version of python through the anaconda prompt, using:\nconda install python = 2.7.8 or conda install python = 3.5.0 (for example).\nYou can even create a dedicated python environnement for a specific version:\nconda create --name py36 python=3.6\n", "Anton B answer is correct, but if you just want to download the correct installer, you can use the following:\n\nFor Anaconda you can use this: https://docs.anaconda.com/anaconda/packages/oldpkglists/\nFor miniconda you can use the tables on the following page: https://docs.conda.io/en/latest/miniconda.html\n\nA similar question was previously asked here:\nHow to get anaconda/miniconda vs python versions mapping table?\n" ]
[ 1, 1 ]
[]
[]
[ "anaconda", "python" ]
stackoverflow_0074623821_anaconda_python.txt
Q: Application runs with uvicorn but can't find Module (No module named 'app') . ├── __pycache__ │ └── api.cpython-310.pyc ├── app │ ├── __pycache__ │ │ └── main.cpython-310.pyc │ ├── api_v1 │ │ ├── __pycache__ │ │ │ └── apis.cpython-310.pyc │ │ ├── apis.py │ │ └── endpoints │ │ ├── __pycache__ │ │ │ └── message_prediction.cpython-310.pyc │ │ └── message_prediction.py │ ├── config.py │ ├── main.py │ └── schemas │ ├── Messages.py │ └── __pycache__ │ └── Messages.cpython-310.pyc ├── app.egg-info │ ├── PKG-INFO │ ├── SOURCES.txt │ ├── dependency_links.txt │ └── top_level.txt ├── build │ └── bdist.macosx-12.0-arm64 ├── data │ ├── processed │ │ ├── offers_big.csv_cleaned.xlsx │ │ └── requests_big.csv_cleaned.xlsx │ ├── processed.dvc │ ├── raw │ │ ├── offers.csv.old │ │ ├── offers_big.csv │ │ ├── requests.csv.old │ │ └── requests_big.csv │ ├── raw.dvc │ ├── validated │ │ ├── validated_offers.xlsx │ │ └── validated_requests.xlsx │ └── validated.dvc ├── dist │ └── app-0.1.0-py3.10.egg ├── model.pkl ├── model.py ├── notebooks │ └── contact-form.ipynb ├── requirements.in ├── requirements.txt ├── setup.py └── test_api.py # main.py import os from fastapi import FastAPI import uvicorn from app.api_v1.apis import api_router # create the app messages_classification_app = FastAPI() messages_classification_app.include_router(api_router) if __name__ == '__main__': uvicorn.run("app.main:messages_classification_app", host=os.getenv("HOST", "0.0.0.0"), port=int(os.getenv("PORT", 8000))) # requirements.in fastapi uvicorn -e file:.#egg=app Trying to run the fastAPI app with python, results in error: py[learning]  ~/r/v/contact-form-classification   master ±  python app/main.py Traceback (most recent call last): File "/Users/xxxxx/repos/visable/contact-form-classification/app/main.py", line 4, in <module> from app.api_v1.apis import api_router ModuleNotFoundError: No module named 'app' Running it with uvicorn directly works: py[learning]  ~/r/v/contact-form-classification   master ±  uvicorn app.main:messages_classification_app INFO: Started server process [53665] INFO: Waiting for application startup. INFO: Application startup complete. INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit) Any idea why? Looked into similar questions, don't seem to apply to mine. A: python app/main.py will make app/ the first entry in sys.path, so app imports within won't work. Do python -m app.main to run app/main.py as a module without having Python touch sys.path.
Application runs with uvicorn but can't find Module (No module named 'app')
. ├── __pycache__ │ └── api.cpython-310.pyc ├── app │ ├── __pycache__ │ │ └── main.cpython-310.pyc │ ├── api_v1 │ │ ├── __pycache__ │ │ │ └── apis.cpython-310.pyc │ │ ├── apis.py │ │ └── endpoints │ │ ├── __pycache__ │ │ │ └── message_prediction.cpython-310.pyc │ │ └── message_prediction.py │ ├── config.py │ ├── main.py │ └── schemas │ ├── Messages.py │ └── __pycache__ │ └── Messages.cpython-310.pyc ├── app.egg-info │ ├── PKG-INFO │ ├── SOURCES.txt │ ├── dependency_links.txt │ └── top_level.txt ├── build │ └── bdist.macosx-12.0-arm64 ├── data │ ├── processed │ │ ├── offers_big.csv_cleaned.xlsx │ │ └── requests_big.csv_cleaned.xlsx │ ├── processed.dvc │ ├── raw │ │ ├── offers.csv.old │ │ ├── offers_big.csv │ │ ├── requests.csv.old │ │ └── requests_big.csv │ ├── raw.dvc │ ├── validated │ │ ├── validated_offers.xlsx │ │ └── validated_requests.xlsx │ └── validated.dvc ├── dist │ └── app-0.1.0-py3.10.egg ├── model.pkl ├── model.py ├── notebooks │ └── contact-form.ipynb ├── requirements.in ├── requirements.txt ├── setup.py └── test_api.py # main.py import os from fastapi import FastAPI import uvicorn from app.api_v1.apis import api_router # create the app messages_classification_app = FastAPI() messages_classification_app.include_router(api_router) if __name__ == '__main__': uvicorn.run("app.main:messages_classification_app", host=os.getenv("HOST", "0.0.0.0"), port=int(os.getenv("PORT", 8000))) # requirements.in fastapi uvicorn -e file:.#egg=app Trying to run the fastAPI app with python, results in error: py[learning]  ~/r/v/contact-form-classification   master ±  python app/main.py Traceback (most recent call last): File "/Users/xxxxx/repos/visable/contact-form-classification/app/main.py", line 4, in <module> from app.api_v1.apis import api_router ModuleNotFoundError: No module named 'app' Running it with uvicorn directly works: py[learning]  ~/r/v/contact-form-classification   master ±  uvicorn app.main:messages_classification_app INFO: Started server process [53665] INFO: Waiting for application startup. INFO: Application startup complete. INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit) Any idea why? Looked into similar questions, don't seem to apply to mine.
[ "python app/main.py will make app/ the first entry in sys.path, so app imports within won't work.\nDo python -m app.main to run app/main.py as a module without having Python touch sys.path.\n" ]
[ 1 ]
[]
[]
[ "fastapi", "python" ]
stackoverflow_0074624111_fastapi_python.txt
Q: Running Json file in VScode using Python I am very new to Python. I would like to read JSON files in Python, but I do not understand what the problem is. Please see the image. A: You have to specify a mode to the open() function. In this case I think you're trying to read the file, so your mode would be "r". Your code should be: with open(r'path/to/read/','r') as file: data = json.load(file) Your code should run now. A: Your path should not contain spaces. Please modify the file path. Generally speaking, the file path is best to be in full English with no spaces and no special characters. A: import sys import os import json def JsonRead(str): with open(str, encoding='utf-8') as f: data = json.load(f) return data new_Data = JsonRead(filePath) Then import JsonRead in project
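Since the actual error is only visible in the linked image, here is a minimal, self-contained sketch of reading a JSON file; the file name is a placeholder, and the encoding argument is optional but avoids surprises on Windows.

import json

# "data.json" is a placeholder -- replace it with the real path to your file
with open("data.json", "r", encoding="utf-8") as f:
    data = json.load(f)   # parses the whole file into Python dicts/lists

print(type(data))
print(data)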
Running Json file in VScode using Python
I am very new to Python. I would like to read JSON files in Python, but I do not understand what the problem is. Please see the image.
[ "You have to specify a mode to the open() function. In this case I think you're trying to read the file, so your mode would be \"r\". Your code should be:\nwith open(r'path/to/read/','r') as file: \n data = json.load(file)\n\nYour code should run now.\n", "Your path should not contain spaces. Please modify the file path.\nGenerally speaking, the file path is best to be in full English with no spaces and no special characters.\n\n", "import sys\nimport os\nimport json\n\ndef JsonRead(str):\n with open(str, encoding='utf-8') as f:\n data = json.load(f)\n return data\n\nnew_Data = JsonRead(filePath)\n\nThen import JsonRead in project\n" ]
[ 1, 0, 0 ]
[]
[]
[ "json", "python", "visual_studio_code" ]
stackoverflow_0074623982_json_python_visual_studio_code.txt
Q: Object Detection Using YOLOv3 I was implementing YOLOv3 for object detection using python in visual studio. My code is working fine but it's not detecting bounding boxes with it's label which means that bounding boxes code is not working. I am unable to find the error behind it. I have used yolov3 pretrained models in my code. Can any one tell me what are the possible reasons? The link for the code is : <https://github.com/spmallick/learnopencv/blob/master/ObjectDetection-YOLO/object_detection_yolo.py> # Get the names of the output layers def getOutputsNames(net): # Get the names of all the layers in the network layersNames = net.getLayerNames() # Get the names of the output layers, i.e. the layers with unconnected outputs return [layersNames[i[0] - 1] for i in net.getUnconnectedOutLayers()] def drawPred(classId, conf, left, top, right, bottom): cv.rectangle(frame, (left, top), (right, bottom), (255, 178, 50), 3) label = '%.2f' % conf # Get the label for the class name and its confidence if classes: assert(classId < len(classes)) label = '%s:%s' % (classes[classId], label) #Display the label at the top of the bounding box labelSize, baseLine = cv.getTextSize(label, cv.FONT_HERSHEY_SIMPLEX, 0.5, 1) top = max(top, labelSize[1]) cv.rectangle(frame, (left, top - round(1.5*labelSize[1])), (left + round(1.5*labelSize[0]), top + baseLine), (255, 255, 255), cv.FILLED) cv.putText(frame, label, (left, top), cv.FONT_HERSHEY_SIMPLEX, 0.75, (0,0,0), 1) # Remove the bounding boxes with low confidence using non-maxima suppression def postprocess(frame, outs): frameHeight = frame.shape[0] frameWidth = frame.shape[1] A: try to replace layer_names[i[0] - 1] with layer_name[i-1] def getOutputsNames(net): # Get the names of all the layers in the network layersNames = net.getLayerNames() # Get the names of the output layers, i.e. the layers with unconnected outputs return [layersNames[i - 1] for i in net.getUnconnectedOutLayers()]
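The suggested change reflects that newer OpenCV releases return a flat array from getUnconnectedOutLayers() instead of a list of one-element arrays. A version-tolerant sketch (assuming a cv2.dnn network has already been loaded into net) can flatten the result before indexing; recent OpenCV versions also expose net.getUnconnectedOutLayersNames() directly.

import numpy as np

# assumes `net` was loaded earlier, e.g. net = cv.dnn.readNetFromDarknet(cfg_path, weights_path)
def get_outputs_names(net):
    layers_names = net.getLayerNames()
    # older OpenCV returns [[i], ...], newer returns [i, ...]; ravel() handles both shapes
    out_ids = np.ravel(net.getUnconnectedOutLayers())
    return [layers_names[i - 1] for i in out_ids]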
Object Detection Using YOLOv3
I was implementing YOLOv3 for object detection using python in visual studio. My code is working fine but it's not detecting bounding boxes with it's label which means that bounding boxes code is not working. I am unable to find the error behind it. I have used yolov3 pretrained models in my code. Can any one tell me what are the possible reasons? The link for the code is : <https://github.com/spmallick/learnopencv/blob/master/ObjectDetection-YOLO/object_detection_yolo.py> # Get the names of the output layers def getOutputsNames(net): # Get the names of all the layers in the network layersNames = net.getLayerNames() # Get the names of the output layers, i.e. the layers with unconnected outputs return [layersNames[i[0] - 1] for i in net.getUnconnectedOutLayers()] def drawPred(classId, conf, left, top, right, bottom): cv.rectangle(frame, (left, top), (right, bottom), (255, 178, 50), 3) label = '%.2f' % conf # Get the label for the class name and its confidence if classes: assert(classId < len(classes)) label = '%s:%s' % (classes[classId], label) #Display the label at the top of the bounding box labelSize, baseLine = cv.getTextSize(label, cv.FONT_HERSHEY_SIMPLEX, 0.5, 1) top = max(top, labelSize[1]) cv.rectangle(frame, (left, top - round(1.5*labelSize[1])), (left + round(1.5*labelSize[0]), top + baseLine), (255, 255, 255), cv.FILLED) cv.putText(frame, label, (left, top), cv.FONT_HERSHEY_SIMPLEX, 0.75, (0,0,0), 1) # Remove the bounding boxes with low confidence using non-maxima suppression def postprocess(frame, outs): frameHeight = frame.shape[0] frameWidth = frame.shape[1]
[ "try to replace layer_names[i[0] - 1] with layer_name[i-1]\ndef getOutputsNames(net):\n # Get the names of all the layers in the network\n layersNames = net.getLayerNames()\n # Get the names of the output layers, i.e. the layers with unconnected outputs\n return [layersNames[i - 1] for i in net.getUnconnectedOutLayers()]\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x", "visual_c++", "visual_studio", "yolo" ]
stackoverflow_0060427567_python_python_3.x_visual_c++_visual_studio_yolo.txt
Q: ETL from SQL Server with AWS Glue in Python I need to write an ETL job that runs regularly with AWS Glue, in Python. The job is to query data from SQL Server. If I do this on a local machine, I need to install pyodbc (pip install pyodbc) and an ODBC driver (from here), and run this sample Python code (referenced from here): cnxn_str = ("Driver={SQL Server Native Client 11.0};" "Server=USXXX00345,67800;" "Database=DB02;" "UID=Alex;" "PWD=Alex123;") cnxn = pyodbc.connect(cnxn_str) If I want to do this in an AWS Glue job, how do I install the ODBC driver and pyodbc in order to import pyodbc? A: pyodbc is available by default in AWS Glue Python shell jobs. Please keep track of the official AWS docs to know the latest running version. You may also find a list of other supported and managed libraries at the link below: https://docs.aws.amazon.com/glue/latest/dg/add-job-python.html You could go ahead and import pyodbc.
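Building on that, a hedged sketch of what the connection code might look like inside a Glue Python shell job: the server, credentials, table, and ODBC driver name are all placeholders, which driver string works depends on what is installed in the job environment, and in practice the credentials would come from a Glue connection or AWS Secrets Manager rather than being hard-coded.

import pyodbc

# placeholder connection details -- replace with values from a Glue connection / Secrets Manager
conn_str = (
    "Driver={ODBC Driver 17 for SQL Server};"   # driver name is an assumption
    "Server=myserver.example.com,1433;"
    "Database=DB02;"
    "UID=etl_user;"
    "PWD=etl_password;"
)

cnxn = pyodbc.connect(conn_str)
cursor = cnxn.cursor()
cursor.execute("SELECT TOP 10 * FROM dbo.some_table")   # hypothetical table
for row in cursor.fetchall():
    print(row)
cnxn.close()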
ETL from SQL Server with AWS Glue in Python
I need to write an ETL job that runs regularly with AWS Glue, in Python. The job is to query data from SQL Server. If I do this on a local machine, I need to install pyodbc (pip install pyodbc) and an ODBC driver (from here), and run this sample Python code (referenced from here): cnxn_str = ("Driver={SQL Server Native Client 11.0};" "Server=USXXX00345,67800;" "Database=DB02;" "UID=Alex;" "PWD=Alex123;") cnxn = pyodbc.connect(cnxn_str) If I want to do this in an AWS Glue job, how do I install the ODBC driver and pyodbc in order to import pyodbc?
[ "pyodbc is available by default in aws glue python shell jobs. Please keep track of the official aws docs to know the latest running version. You may also find a list of other supported and managed libraries in below link\nhttps://docs.aws.amazon.com/glue/latest/dg/add-job-python.html\nYou could go ahead and import pyodbc.\n" ]
[ 0 ]
[]
[]
[ "aws_glue", "pyodbc", "python" ]
stackoverflow_0073979688_aws_glue_pyodbc_python.txt
Q: Problem with for loop, break statement does not do what I thought it would This is my first time posting here, so be gentle, please. I have written the following code: import pandas as pd import spacy df = pd.read_csv('../../../Data/conll2003.dev.conll', sep='\t', on_bad_lines='skip', header=None) nlp = spacy.load('en_core_web_sm') nlp.max_length = 1500000 ## https://stackoverflow.com/questions/48169545/does-spacy-take-as-input-a-list-of-tokens all_tokens = [] for token in df[0]: all_tokens.append(str(token)) string = ' '.join(all_tokens) doc = nlp(string) token_tuples = tuple(enumerate(doc)) outfile = open('./conll2003.dev.syntax_corrupt.conll', 'w') i = 0 ## initiate by looking at the first token in the doc for x, token in enumerate(df[0]): for num, tok in token_tuples[i:]: ## we add this step to ensure that the for loop always looks from the last token that was a match, since doc is longer ## than df[0], otherwise it would at some point start looking from earlier tokens since spacy has more tokens and if there is an accidental match, it ## would provide the wrong dep and head if token == tok.text: i = num ## get the number from the token tuples as new starting point outfile.write(str(df[0][x]) + '\t' + str(df[1][x]) + '\t' + str(df[2][x]) + '\t' + str(df[3][x]) + '\t' + str(tok.dep_) + '\t' + str(tok.head.text) + '\n') break else: outfile.write(str(df[0][x]) + '\t' + str(df[1][x]) + '\t' + str(df[2][x]) + '\t' + str(df[3][x]) + '\t' + 'no_dep' + '\t' + 'no_head' + '\n') break outfile.close() The code is supposed to take data from the 2003conll-shared task on NER and first join the individual tokens to a string (as the data comes pre-tokenized) and then feed it into spaCy in order to make use of its dependency parsing. After that, I want to write the same lines that were in the original file + two new columns containing the dependency relation and the respective head noun. SpaCy obviously tokenizes the text differently than what came pre-tokenized so I had to find a way that the correct relation would be attributed to the correct token as len(doc)!= len(df[0]). It works fine if I do not include the else statement and it writes the correct relation with the token to the outfile. However, when I do include it, I would expect it to print one line with the values "no_dep" and "no_head" (for the token spaCy did not take into account) and then continue printing the tokens where there is information on the dependency relations (because the break statement should break the loop, yeah?). But it does not. It writes to every following token "no_dep" and "no_head" instead of going back to writing the actual relations. In other words: inputfile (snippet): LONDON NNP B-NP B-LOC 1996-08-30 CD I-NP O West NNP B-NP B-MISC outputfile without else statement: LONDON NNP B-NP B-LOC nmod Simmons West NNP B-NP B-MISC nmod Indian what I want with the else statement: LONDON NNP B-NP B-LOC nmod Simmons 1996-08-30 CD I-NP no_dep no_head West NNP B-NP B-MISC nmod Indian what I get: LONDON NNP B-NP B-LOC no_dep no_head 1996-08-30 CD I-NP O no_dep no_head West NNP B-NP B-MISC no_dep no_head (Note that the first line in the outputfile does have the correct dependency relation and head noun, the problem starts from the second line.) Any ideas what it is that I'm doing wrong? Thanks! A: You should preserve the original tokenization. 
To do this, manually create the Doc in order to skip the tokenizer in the pipeline: import spacy from spacy.tokens import Doc nlp = spacy.load(model) words = ["here", "are", "the", "original", "tokens"] doc = Doc(nlp.vocab, words=words) # apply the model to the doc (it skips the tokenizer for an input `Doc`) doc = nlp(doc)
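Applied to the question, a rough sketch (reusing the file path and column indices from the original code, and assuming spaCy 3's behaviour of skipping the tokenizer when a Doc is passed in) would feed the pre-tokenized column straight into the pipeline, so the output tokens line up one-to-one with the CoNLL rows and no matching loop is needed. The output file name here is illustrative.

import pandas as pd
import spacy
from spacy.tokens import Doc

nlp = spacy.load("en_core_web_sm")
df = pd.read_csv('../../../Data/conll2003.dev.conll', sep='\t', on_bad_lines='skip', header=None)

words = [str(token) for token in df[0]]
doc = nlp(Doc(nlp.vocab, words=words))   # tokenization now matches df[0] exactly

with open('./conll2003.dev.syntax.conll', 'w') as outfile:
    for x, tok in enumerate(doc):
        outfile.write("\t".join([str(df[0][x]), str(df[1][x]), str(df[2][x]),
                                 str(df[3][x]), tok.dep_, tok.head.text]) + "\n")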
Problem with for loop, break statement does not do what I thought it would
This is my first time posting here, so be gentle, please. I have written the following code: import pandas as pd import spacy df = pd.read_csv('../../../Data/conll2003.dev.conll', sep='\t', on_bad_lines='skip', header=None) nlp = spacy.load('en_core_web_sm') nlp.max_length = 1500000 ## https://stackoverflow.com/questions/48169545/does-spacy-take-as-input-a-list-of-tokens all_tokens = [] for token in df[0]: all_tokens.append(str(token)) string = ' '.join(all_tokens) doc = nlp(string) token_tuples = tuple(enumerate(doc)) outfile = open('./conll2003.dev.syntax_corrupt.conll', 'w') i = 0 ## initiate by looking at the first token in the doc for x, token in enumerate(df[0]): for num, tok in token_tuples[i:]: ## we add this step to ensure that the for loop always looks from the last token that was a match, since doc is longer ## than df[0], otherwise it would at some point start looking from earlier tokens since spacy has more tokens and if there is an accidental match, it ## would provide the wrong dep and head if token == tok.text: i = num ## get the number from the token tuples as new starting point outfile.write(str(df[0][x]) + '\t' + str(df[1][x]) + '\t' + str(df[2][x]) + '\t' + str(df[3][x]) + '\t' + str(tok.dep_) + '\t' + str(tok.head.text) + '\n') break else: outfile.write(str(df[0][x]) + '\t' + str(df[1][x]) + '\t' + str(df[2][x]) + '\t' + str(df[3][x]) + '\t' + 'no_dep' + '\t' + 'no_head' + '\n') break outfile.close() The code is supposed to take data from the 2003conll-shared task on NER and first join the individual tokens to a string (as the data comes pre-tokenized) and then feed it into spaCy in order to make use of its dependency parsing. After that, I want to write the same lines that were in the original file + two new columns containing the dependency relation and the respective head noun. SpaCy obviously tokenizes the text differently than what came pre-tokenized so I had to find a way that the correct relation would be attributed to the correct token as len(doc)!= len(df[0]). It works fine if I do not include the else statement and it writes the correct relation with the token to the outfile. However, when I do include it, I would expect it to print one line with the values "no_dep" and "no_head" (for the token spaCy did not take into account) and then continue printing the tokens where there is information on the dependency relations (because the break statement should break the loop, yeah?). But it does not. It writes to every following token "no_dep" and "no_head" instead of going back to writing the actual relations. In other words: inputfile (snippet): LONDON NNP B-NP B-LOC 1996-08-30 CD I-NP O West NNP B-NP B-MISC outputfile without else statement: LONDON NNP B-NP B-LOC nmod Simmons West NNP B-NP B-MISC nmod Indian what I want with the else statement: LONDON NNP B-NP B-LOC nmod Simmons 1996-08-30 CD I-NP no_dep no_head West NNP B-NP B-MISC nmod Indian what I get: LONDON NNP B-NP B-LOC no_dep no_head 1996-08-30 CD I-NP O no_dep no_head West NNP B-NP B-MISC no_dep no_head (Note that the first line in the outputfile does have the correct dependency relation and head noun, the problem starts from the second line.) Any ideas what it is that I'm doing wrong? Thanks!
[ "You should preserve the original tokenization. To do this, manually create the Doc in order to skip the tokenizer in the pipeline:\nimport spacy\nfrom spacy.tokens import Doc\n\nnlp = spacy.load(model)\nwords = [\"here\", \"are\", \"the\", \"original\", \"tokens\"]\ndoc = Doc(nlp.vocab, words=words)\n\n# apply the model to the doc (it skips the tokenizer for an input `Doc`)\ndoc = nlp(doc)\n\n" ]
[ 1 ]
[]
[]
[ "conll", "nlp", "pandas", "python", "spacy" ]
stackoverflow_0074623127_conll_nlp_pandas_python_spacy.txt
Q: What will be the python regex to match this? Given this string: var python_books = { 'name': 'Python Notebooks', 'sub-menu': [{ 'name' : 'Python Research Notebook', 'snippet' : [ 'import os, sys, json, time', '', 'import numpy as np', 'import pandas as pd', 'import matplotlib.pyplot as plt', 'from scipy import stats', '', '%matplotlib inline', 'plt.style.use("ggplot")', '%config InlineBackend.figure_format = "retina"', ] }, {'name': 'Getting Data', 'sub-menu': []}, {'name': 'Visualizations', 'sub-menu': []} ] }; I want a regex which can match everything in 'snippet' between [...] these square brackets. I already have tried this regex: regex = r"(?:'Python Research Notebook',\s*'snippet' : \[(.|\n)*?\])". But I want to exclude this part from regex: 'Python Research Notebook', 'snippet' : Positive lookbehind is also not working since it contains variable length width due to \s*. How do I do this? A: try this var_python_books = { 'name': 'Python Notebooks', 'sub-menu': [{ 'name': 'Python Research Notebook', 'snippet': [ 'import os, sys, json, time', '', 'import numpy as np', 'import pandas as pd', 'import matplotlib.pyplot as plt', 'from scipy import stats', '', '%matplotlib inline', 'plt.style.use("ggplot")', '%config InlineBackend.figure_format = "retina"', ] }, {'name': 'Getting Data', 'sub-menu': []}, {'name': 'Visualizations', 'sub-menu': []} ] }; for i in var_python_books['sub-menu']: if i.get('snippet'): print(i['snippet']) >>> ['import os, sys, json, time', '', 'import numpy as np', 'import pandas as pd', 'import matplotlib.pyplot as plt', 'from scipy import stats', '', '%matplotlib inline', 'plt.style.use("ggplot")', '%config InlineBackend.figure_format = "retina"']
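If a pure-regex answer to the original question is still wanted, a capture group sidesteps the variable-length lookbehind problem: match the whole "'snippet' : [...]" region but only capture what sits between the brackets. A sketch, with a shortened stand-in for the python_books text assigned to the string s:

import re

s = """'name' : 'Python Research Notebook',
'snippet' : [
'import os, sys, json, time',
'import numpy as np',
]"""   # stand-in for the full python_books text from the question

m = re.search(r"'Python Research Notebook',\s*'snippet'\s*:\s*\[(.*?)\]", s, re.DOTALL)
if m:
    print(m.group(1))   # only the contents between the square brackets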
What will be the python regex to match this?
Given this string: var python_books = { 'name': 'Python Notebooks', 'sub-menu': [{ 'name' : 'Python Research Notebook', 'snippet' : [ 'import os, sys, json, time', '', 'import numpy as np', 'import pandas as pd', 'import matplotlib.pyplot as plt', 'from scipy import stats', '', '%matplotlib inline', 'plt.style.use("ggplot")', '%config InlineBackend.figure_format = "retina"', ] }, {'name': 'Getting Data', 'sub-menu': []}, {'name': 'Visualizations', 'sub-menu': []} ] }; I want a regex which can match everything in 'snippet' between [...] these square brackets. I already have tried this regex: regex = r"(?:'Python Research Notebook',\s*'snippet' : \[(.|\n)*?\])". But I want to exclude this part from regex: 'Python Research Notebook', 'snippet' : Positive lookbehind is also not working since it contains variable length width due to \s*. How do I do this?
[ "try this\nvar_python_books = {\n 'name': 'Python Notebooks',\n 'sub-menu': [{\n 'name': 'Python Research Notebook',\n 'snippet': [\n 'import os, sys, json, time',\n '',\n 'import numpy as np',\n 'import pandas as pd',\n 'import matplotlib.pyplot as plt',\n 'from scipy import stats',\n '',\n '%matplotlib inline',\n 'plt.style.use(\"ggplot\")',\n '%config InlineBackend.figure_format = \"retina\"',\n ]\n },\n {'name': 'Getting Data', 'sub-menu': []},\n {'name': 'Visualizations', 'sub-menu': []}\n ]\n};\n\nfor i in var_python_books['sub-menu']:\n if i.get('snippet'):\n print(i['snippet'])\n\n>>> ['import os, sys, json, time', '', 'import numpy as np', 'import pandas as pd', 'import matplotlib.pyplot as plt', 'from scipy import stats', '', '%matplotlib inline', 'plt.style.use(\"ggplot\")', '%config InlineBackend.figure_format = \"retina\"']\n\n" ]
[ 1 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0074623680_python_regex.txt
Q: Serialize QuerySet to JSON with FK DJANGO I want to send a JSON of a model of an intersection table so I only have foreign keys saved, I tried to make a list and then convert it to JSON but I only receive the ids and I need the content, I also tried in the back as a temporary solution to make a dictionary with the Queryset but the '<>' makes it mark an error in the JS, does anyone know a way to have the data of my foreign keys and make them a JSON? models: class Periodos(models.Model): anyo = models.IntegerField(default=2022) periodo = models.CharField(max_length=10) fecha_inicio = models.DateField(blank=True, null=True) fecha_fin = models.DateField(blank=True, null=True) class Meta: app_label = 'modelos' verbose_name = u'periodo' verbose_name_plural = u'Periodos' ordering = ('id',) def __str__(self): return u'%s - %s' % (self.anyo,self.periodo) class Programas(models.Model): programa = models.CharField(max_length=255,blank=True, null=True) activo = models.BooleanField(default=True) class Meta: app_label = 'modelos' verbose_name = u'Programas' verbose_name_plural = u'Programas' def __str__(self) -> str: return self.programa class Programa_periodo(models.Model): periodo = models.ForeignKey(Periodos, related_name='Programa_periodo_periodo',on_delete=models.CASCADE) programa = models.ForeignKey(Programas, related_name='Programa_periodo_Programa',on_delete=models.CASCADE) class Meta: app_label = 'modelos' verbose_name = u'Programa Periodo' verbose_name_plural = u'Programa Periodo' def __str__(self) -> str: return self.programa.programa py where i send data def iniciativa(request): if request.user.is_authenticated: context = {} context['marcas'] = json.dumps(list(Marcas.objects.values())) context['eo'] = get_estructura_org() #This is where I call the data programa = Programa_periodo.objects.all() #These two only return the ids # context['programa_periodos'] = json.dumps(list(Programa_periodo.objects.values())) #context['programa_periodos'] = serializers.serialize("json", Programa_periodo.objects.all()) #One of my try but fail for the '<>' programa_periodo = {} for pg in programa: programa_periodo[pg.periodo] = pg.programa context['programa_periodos'] = programa_periodo return render(request, 'agregar_iniciativa.html', context) else: return HttpResponseBadRequest('Favor de ingresar sesión en el sistema.', format(request.method), status=401) A: I am not sure that I get the question right, but if you need a special field value from foreign key you can use smth like: Programa_periodo.objects.values("id", "periodo__periodo", "programa__programa") With double underscore. Try it in the shell first. Check the docs here https://docs.djangoproject.com/en/4.0/ref/models/querysets/#values
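Building on that, a hedged sketch of how the view might serialize those related values to JSON; the field names are taken from the models above, and since values() yields plain ints and strings here, json.dumps works directly.

import json

# inside the view, assuming Programa_periodo is imported as in the question
programa_periodos = list(
    Programa_periodo.objects.values(
        "id", "periodo__anyo", "periodo__periodo", "programa__programa"
    )
)
context['programa_periodos'] = json.dumps(programa_periodos)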
Serialize QuerySet to JSON with FK DJANGO
I want to send a JSON of a model of an intersection table so I only have foreign keys saved, I tried to make a list and then convert it to JSON but I only receive the ids and I need the content, I also tried in the back as a temporary solution to make a dictionary with the Queryset but the '<>' makes it mark an error in the JS, does anyone know a way to have the data of my foreign keys and make them a JSON? models: class Periodos(models.Model): anyo = models.IntegerField(default=2022) periodo = models.CharField(max_length=10) fecha_inicio = models.DateField(blank=True, null=True) fecha_fin = models.DateField(blank=True, null=True) class Meta: app_label = 'modelos' verbose_name = u'periodo' verbose_name_plural = u'Periodos' ordering = ('id',) def __str__(self): return u'%s - %s' % (self.anyo,self.periodo) class Programas(models.Model): programa = models.CharField(max_length=255,blank=True, null=True) activo = models.BooleanField(default=True) class Meta: app_label = 'modelos' verbose_name = u'Programas' verbose_name_plural = u'Programas' def __str__(self) -> str: return self.programa class Programa_periodo(models.Model): periodo = models.ForeignKey(Periodos, related_name='Programa_periodo_periodo',on_delete=models.CASCADE) programa = models.ForeignKey(Programas, related_name='Programa_periodo_Programa',on_delete=models.CASCADE) class Meta: app_label = 'modelos' verbose_name = u'Programa Periodo' verbose_name_plural = u'Programa Periodo' def __str__(self) -> str: return self.programa.programa py where i send data def iniciativa(request): if request.user.is_authenticated: context = {} context['marcas'] = json.dumps(list(Marcas.objects.values())) context['eo'] = get_estructura_org() #This is where I call the data programa = Programa_periodo.objects.all() #These two only return the ids # context['programa_periodos'] = json.dumps(list(Programa_periodo.objects.values())) #context['programa_periodos'] = serializers.serialize("json", Programa_periodo.objects.all()) #One of my try but fail for the '<>' programa_periodo = {} for pg in programa: programa_periodo[pg.periodo] = pg.programa context['programa_periodos'] = programa_periodo return render(request, 'agregar_iniciativa.html', context) else: return HttpResponseBadRequest('Favor de ingresar sesión en el sistema.', format(request.method), status=401)
[ "I am not sure that I get the question right, but if you need a special field value from foreign key you can use smth like:\nPrograma_periodo.objects.values(\"id\", \"periodo__periodo\", \"programa__programa\")\n\nWith double underscore. Try it in the shell first. Check the docs here https://docs.djangoproject.com/en/4.0/ref/models/querysets/#values\n" ]
[ 1 ]
[]
[]
[ "django", "django_queryset", "python" ]
stackoverflow_0074622766_django_django_queryset_python.txt
Q: Adding background image to the window with Qt-Designer (Pyqt5)- Error: Could not create pixmap from : Why is it so hard to get a 'bcg-image' in the background with PyQt5? Please read my explanation before answering, I am a beginner in programming and it has been a week looking for a solution for the problem. I have a program with the first window that has many input fields to enter some data from the user. After that, some calculations are performed based on the entered data. Now, after clicking a button on the first window, the results of the calculations will be displayed within a new window in a QTableWidget without closing the first window. So, now we have two windows on the screen. I hope you can follow with me in your imagination, after that with another button in the first window, again in the first not in the second one, in the first window with a button click, a third window will appear within a different QTableWidget showing some results. Now, we have three windows on the screen, all of them are "QMainWindow" class. The problem is, I had successfully added the 'bcg-image' for the first window and it works fine, I simply added at the end of the Python code the following lines: stylesheet=""" MainWindow { background-image:url("C:/Users/Mk/Bureau/png_cytec_worldwide.png") } """ app=QApplication(sys.argv) window1=MainWindow() window1.setStyleSheet(stylesheet) window1.setFixedSize(1441,950) window1.setWindowTitle('Data input') window1.show() sys.exit(app.exec_()) So now to add bcg-image for the other two windows "window2" and "window3" I used the Qt designer, I opened the recourse browser, I created the resource file I added a prefix I added the image file (All of this is in the same directory of the Python code, all of them, the qrc-file , the image, and everything is in the same directory.) Now, in Qt Designer with a right click on window 2 and the same for window 3, I am choosing stylesheet and in the stylesheet, I am choosing add resource and choosing the resource that I created before and the image from the resource and hit apply and OK and save. After running the Python file from the vs-code-studio, I get an error for window2 and window3. The bcg-image is not showing up and the error is: Could not create pixmap from:'the selected image from the resource' However, window 1 is good. How can I fix this problem? I tried to look for a solution, but I didn't find something helpful. Here is how I built the code, it has three classes: myappid = 'mycompany.myproduct.subproduct.version' # arbitrary string ctypes.windll.shell32.SetCurrentProcessExplicitAppUserModelID(myappid) # this is the window 2 "Problem" class Calculation_window(QMainWindow): def __init__(self): super().__init__() #loading the the with Qt designer created UI file: uic.loadUi("C:\\Users\\Mk\\MainProgram_for_Dev\\Main_program_PyQt5\\caculated_results_window.ui",self) # some functions that has nothing to do with the problem self.ExportButton1.clicked.connect(self.Export_to_Excel) def Export_to_Excel(self): ..... ..... ..... ..... # this is the window 3 "Problem" class Calculation_window_Specific(QWidget): #loading the the with Qt designer created UI file: def __init__(self): super().__init__() uic.loadUi("C:\\Users\\Mk\\MainProgram_for_Dev\\Main_program_PyQt5\\caculated_results_Specific.ui",self) self.ExportButton2.clicked.connect(self.Export_to_Excel2) def Export_to_Excel2(self): ..... ..... ..... ..... 
# this is the window 1, "no problem " class MainWindow(QMainWindow): def __init__(self): super().__init__() uic.loadUi("C:\\Users\\MAkhiat\\OneDrive\\cytec\\MainProgram_for_Dev\\Main_ program_PyQt5\\Main_Program_new.ui",self) ..... ..... ..... stylesheet=""" MainWindow { background-image:url("C:/Users/Mk/Bureau/png_cytec_worldwide.png") } """ app=QApplication(sys.argv) window1=MainWindow() window1.setStyleSheet(stylesheet) window1.setFixedSize(1441,950) window1.setWindowTitle('Data input') window1.show() sys.exit(app.exec_()) the code is long I only fit the parts here that might have something to do with the problem. A: At MainWindow on the Qt Designer Mouse Right Click --> Change styleSheet --> Add Resources add your image
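One common cause of the "Could not create pixmap from ..." message when a .ui stylesheet points at a resource path is that the .qrc file was never compiled and registered on the Python side. A hedged sketch of that fix, assuming the resource file is called resources.qrc (the name is a guess): compile it with pyrcc5 and import the generated module before loading the .ui files.

# one-time command-line step (file names are assumptions):
#   pyrcc5 resources.qrc -o resources_rc.py

import resources_rc   # importing it registers the ":/..." resource paths with Qt
from PyQt5 import uic, QtWidgets

app = QtWidgets.QApplication([])
window = uic.loadUi("caculated_results_window.ui")   # file name from the question
window.show()
app.exec_()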
Adding background image to the window with Qt-Designer (Pyqt5)- Error: Could not create pixmap from :
Why is it so hard to get a 'bcg-image' in the background with PyQt5? Please read my explanation before answering, I am a beginner in programming and it has been a week looking for a solution for the problem. I have a program with the first window that has many input fields to enter some data from the user. After that, some calculations are performed based on the entered data. Now, after clicking a button on the first window, the results of the calculations will be displayed within a new window in a QTableWidget without closing the first window. So, now we have two windows on the screen. I hope you can follow with me in your imagination, after that with another button in the first window, again in the first not in the second one, in the first window with a button click, a third window will appear within a different QTableWidget showing some results. Now, we have three windows on the screen, all of them are "QMainWindow" class. The problem is, I had successfully added the 'bcg-image' for the first window and it works fine, I simply added at the end of the Python code the following lines: stylesheet=""" MainWindow { background-image:url("C:/Users/Mk/Bureau/png_cytec_worldwide.png") } """ app=QApplication(sys.argv) window1=MainWindow() window1.setStyleSheet(stylesheet) window1.setFixedSize(1441,950) window1.setWindowTitle('Data input') window1.show() sys.exit(app.exec_()) So now to add bcg-image for the other two windows "window2" and "window3" I used the Qt designer, I opened the recourse browser, I created the resource file I added a prefix I added the image file (All of this is in the same directory of the Python code, all of them, the qrc-file , the image, and everything is in the same directory.) Now, in Qt Designer with a right click on window 2 and the same for window 3, I am choosing stylesheet and in the stylesheet, I am choosing add resource and choosing the resource that I created before and the image from the resource and hit apply and OK and save. After running the Python file from the vs-code-studio, I get an error for window2 and window3. The bcg-image is not showing up and the error is: Could not create pixmap from:'the selected image from the resource' However, window 1 is good. How can I fix this problem? I tried to look for a solution, but I didn't find something helpful. Here is how I built the code, it has three classes: myappid = 'mycompany.myproduct.subproduct.version' # arbitrary string ctypes.windll.shell32.SetCurrentProcessExplicitAppUserModelID(myappid) # this is the window 2 "Problem" class Calculation_window(QMainWindow): def __init__(self): super().__init__() #loading the the with Qt designer created UI file: uic.loadUi("C:\\Users\\Mk\\MainProgram_for_Dev\\Main_program_PyQt5\\caculated_results_window.ui",self) # some functions that has nothing to do with the problem self.ExportButton1.clicked.connect(self.Export_to_Excel) def Export_to_Excel(self): ..... ..... ..... ..... # this is the window 3 "Problem" class Calculation_window_Specific(QWidget): #loading the the with Qt designer created UI file: def __init__(self): super().__init__() uic.loadUi("C:\\Users\\Mk\\MainProgram_for_Dev\\Main_program_PyQt5\\caculated_results_Specific.ui",self) self.ExportButton2.clicked.connect(self.Export_to_Excel2) def Export_to_Excel2(self): ..... ..... ..... ..... 
# this is the window 1, "no problem " class MainWindow(QMainWindow): def __init__(self): super().__init__() uic.loadUi("C:\\Users\\MAkhiat\\OneDrive\\cytec\\MainProgram_for_Dev\\Main_ program_PyQt5\\Main_Program_new.ui",self) ..... ..... ..... stylesheet=""" MainWindow { background-image:url("C:/Users/Mk/Bureau/png_cytec_worldwide.png") } """ app=QApplication(sys.argv) window1=MainWindow() window1.setStyleSheet(stylesheet) window1.setFixedSize(1441,950) window1.setWindowTitle('Data input') window1.show() sys.exit(app.exec_()) the code is long I only fit the parts here that might have something to do with the problem.
[ "At MainWindow on the Qt Designer\nMouse Right Click --> Change styleSheet --> Add Resources\nadd your image\n" ]
[ 0 ]
[]
[]
[ "pyqt", "pyqt5", "python", "qt", "qt_designer" ]
stackoverflow_0074623965_pyqt_pyqt5_python_qt_qt_designer.txt
Q: Maximum of each row and return the column name in a dataframe Having a data frame as below: I need to find the largest value in each row and return the column name. Expected output: I tried the below code: df['Max'] = df.idxmax(axis=1) error: TypeError: reduction operation 'argmax' not allowed for this dtype A: It seems that adding the numeric_only flag would solve your problem: # Pandas 1.5.2 import pandas as pd d = {'prod': ['p1', 'p2', 'p3'], 'p1': [10, 20.0, 30], 'p2': [7 ,6 ,4], 'p3': [12,50,5], 'MAX': ['','','']} df = pd.DataFrame(data=d) print(type(df)) # <class 'pandas.core.frame.DataFrame'> df['MAX'] = df.idxmax(1, numeric_only=True) print(df) Check your pandas version. I have 1.5.2 pip show pandas Check the type of your data as @Niko suggests
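If the installed pandas version does not accept numeric_only here, selecting just the numeric columns before calling idxmax gives the same result; a sketch using the column names from the example:

import pandas as pd

d = {'prod': ['p1', 'p2', 'p3'], 'p1': [10, 20.0, 30], 'p2': [7, 6, 4], 'p3': [12, 50, 5]}
df = pd.DataFrame(data=d)

# run idxmax only over the numeric product columns, then store the winning column name
df['MAX'] = df[['p1', 'p2', 'p3']].idxmax(axis=1)
print(df)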
Maximum of each row and return the column name in a dataframe
Having a data frame as below: I need to find the largest value in each row and return the column name. Expected output: I tried the below code: df['Max'] = df.idxmax(axis=1) error: TypeError: reduction operation 'argmax' not allowed for this dtype
[ "It seems that adding numeric_only flag would solve your problem:\n# Panda 1.5.2\nimport pandas as pd\n\nd = {'prod': ['p1', 'p2', 'p3'], 'p1': [10, 20.0, 30], 'p2': [7 ,6 ,4], 'p3': [12,50,5], 'MAX': ['','','']}\ndf = pd.DataFrame(data=d)\nprint(type(df)) # <class 'pandas.core.frame.DataFrame'>\ndf['MAX'] = df.idxmax(1, numeric_only=True)\nprint(df)\n\nCheck your pandas version. I have 1.5.2\npip show pandas\n\nCheck type of your data as @Niko suggests\n" ]
[ 0 ]
[]
[]
[ "dataframe", "python" ]
stackoverflow_0074624062_dataframe_python.txt
Q: Randomly remove 'x' elements from a list I'd like to randomly remove a fraction of elements from a list without changing the order of the list. Say I had some data and I wanted to remove 1/4 of them: data = [1,2,3,4,5,6,7,8,9,10] n = len(data) / 4 I'm thinking I need a loop to run through the data and delete a random element 'n' times? So something like: for i in xrange(n): random = np.randint(1,len(data)) del data[random] My question is, is this the most 'pythonic' way of doing this? My list will be ~5000 elements long and I want to do this multiple times with different values of 'n'. Thanks! A: Sequential deleting is a bad idea since deletion in a list is O(n). Instead do something like this: def delete_rand_items(items,n): to_delete = set(random.sample(range(len(items)),n)) return [x for i,x in enumerate(items) if not i in to_delete] A: You can use random.sample like this: import random a = [1,2,3,4,5,6,7,8,9,10] no_elements_to_delete = len(a) // 4 no_elements_to_keep = len(a) - no_elements_to_delete b = set(random.sample(a, no_elements_to_keep)) # the `if i in b` on the next line would benefit from b being a set for large lists b = [i for i in a if i in b] # you need this to restore the order print(len(a)) # 10 print(b) # [1, 2, 3, 4, 5, 8, 9, 10] print(len(b)) # 8 Two notes on the above. You are not modifying the original list in place but you could. You are not actually deleting elements but rather keeping elements but it is the same thing (you just have to adjust the ratios) The drawback is the list-comprehension that restores the order of the elements As @koalo says in the comments the above will not work properly if the elements in the original list are not unique. I could easily fix that but then my answer would be identical to the one posted by@JohnColeman. So if that might be the case just use his instead. A: Is the order meaningful? if not you can do something like: shuffle(data) data=data[:len(data)-n] A: I suggest using numpy indexing as in import numpy as np data = np.array([1,2,3,4,5,6,7,8,9,10]) n = len(data)/4 indices = sorted(np.random.choice(len(data),len(data)-n,replace=False)) result = data[indices] A: I think it will be more convenient this way: import random n = round(len(data) *0.3) for i in range(n): data.pop(random.randrange(len(data)))
Randomly remove 'x' elements from a list
I'd like to randomly remove a fraction of elements from a list without changing the order of the list. Say I had some data and I wanted to remove 1/4 of them: data = [1,2,3,4,5,6,7,8,9,10] n = len(data) / 4 I'm thinking I need a loop to run through the data and delete a random element 'n' times? So something like: for i in xrange(n): random = np.randint(1,len(data)) del data[random] My question is, is this the most 'pythonic' way of doing this? My list will be ~5000 elements long and I want to do this multiple times with different values of 'n'. Thanks!
[ "Sequential deleting is a bad idea since deletion in a list is O(n). Instead do something like this:\ndef delete_rand_items(items,n):\n to_delete = set(random.sample(range(len(items)),n))\n return [x for i,x in enumerate(items) if not i in to_delete]\n\n", "You can use random.sample like this:\nimport random\n\na = [1,2,3,4,5,6,7,8,9,10]\n\nno_elements_to_delete = len(a) // 4\nno_elements_to_keep = len(a) - no_elements_to_delete\nb = set(random.sample(a, no_elements_to_keep)) # the `if i in b` on the next line would benefit from b being a set for large lists\nb = [i for i in a if i in b] # you need this to restore the order\nprint(len(a)) # 10\nprint(b) # [1, 2, 3, 4, 5, 8, 9, 10]\nprint(len(b)) # 8\n\nTwo notes on the above.\n\nYou are not modifying the original list in place but you could.\nYou are not actually deleting elements but rather keeping elements but it is the same thing (you just have to adjust the ratios)\nThe drawback is the list-comprehension that restores the order of the elements\n\n\nAs @koalo says in the comments the above will not work properly if the elements in the original list are not unique. I could easily fix that but then my answer would be identical to the one posted by@JohnColeman. So if that might be the case just use his instead.\n", "Is the order meaningful? \nif not you can do something like: \nshuffle(data)\ndata=data[:len(data)-n]\n\n", "I suggest using numpy indexing as in\nimport numpy as np\ndata = np.array([1,2,3,4,5,6,7,8,9,10])\nn = len(data)/4\nindices = sorted(np.random.choice(len(data),len(data)-n,replace=False))\nresult = data[indices]\n\n", "I think it will be more convenient this way:\nimport random\nn = round(len(data) *0.3)\nfor i in range(n):\n data.pop(random.randrange(len(data)))\n\n" ]
[ 11, 5, 1, 0, 0 ]
[]
[]
[ "list", "python", "random" ]
stackoverflow_0044883905_list_python_random.txt
Q: How to apply alphabetic and numeric validation rules to database columns in pyspark? I have one DB that contains an emp table with ID, NAME, YEAR, AGE, DEPT columns. I want to print pass if the NAME column contains characters only, else fail; pass if YEAR is in dd-mm-yyyy format, else fail; and pass if the AGE column contains integers only, else fail. And is it possible to move the whole process above into one function? A: For each part of your question, you can use a trick. name: you can use a regular expression with the rlike() function. date: you can cast the date string to date format and check if it is valid. age: you can cast to integer and check if it is valid. Note that if a cast is not valid, pyspark returns Null. schema = ['age', 'name', 'date'] data = [ ("1", "A1", '30-12-2022'), ("2", "Aa", '36-11-2022'), ("3", "Aa", '2022-10-12'), ("4a", "Aa", '30-11-2022'), ("5", "Aa", '30-11-2022'), ] df = spark.createDataFrame(data = data, schema = schema) ( df .filter(F.col('name').rlike("^[a-zA-Z]+$")) .filter(F.to_date(F.col('date'), 'dd-MM-yyyy').isNotNull()) .filter(F.col('age').cast('int').isNotNull()) ).show() +---+----+----------+ |age|name| date| +---+----+----------+ | 5| Aa|30-11-2022| +---+----+----------+
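Since the question asks to print pass/fail per rule and to wrap everything in one function, a hedged variation of the same checks could add one pass/fail column per rule with when/otherwise instead of filtering rows away. Column names are taken from the question, and YEAR is assumed to be stored as a string in dd-mm-yyyy form.

from pyspark.sql import functions as F

def add_validation_columns(df):
    # one pass/fail column per rule, so failing rows stay visible
    return (
        df
        .withColumn("name_check",
                    F.when(F.col("NAME").rlike("^[a-zA-Z]+$"), "pass").otherwise("fail"))
        .withColumn("year_check",
                    F.when(F.to_date(F.col("YEAR"), "dd-MM-yyyy").isNotNull(), "pass").otherwise("fail"))
        .withColumn("age_check",
                    F.when(F.col("AGE").cast("int").isNotNull(), "pass").otherwise("fail"))
    )

# usage sketch: add_validation_columns(emp_df).show(), where emp_df is the emp table loaded as a DataFrame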
How to apply alphabetic and numeric validation rules to database columns in pyspark?
I have one DB that contains an emp table with ID, NAME, YEAR, AGE, DEPT columns. I want to print pass if the NAME column contains characters only, else fail; pass if YEAR is in dd-mm-yyyy format, else fail; and pass if the AGE column contains integers only, else fail. And is it possible to move the whole process above into one function?
[ "For each part of your question, you can use a trick.\nname: you can use regular-expression with rlike() function.\ndate: you can cast date string to date format and check if it is valid.\nname: you can cast to integer and check if it is valid.\nnote that if a cast is not valid pyspark returns Null.\nschema = ['age', 'name', 'date']\ndata = [\n (\"1\", \"A1\", '30-12-2022'),\n (\"2\", \"Aa\", '36-11-2022'),\n (\"3\", \"Aa\", '2022-10-12'),\n (\"4a\", \"Aa\", '30-11-2022'),\n (\"5\", \"Aa\", '30-11-2022'),\n]\ndf = spark.createDataFrame(data = data, schema = schema)\n(\n df\n .filter(F.col('name').rlike(\"^[a-zA-Z]+$\"))\n .filter(F.to_date(F.col('date'), 'dd-MM-yyyy').isNotNull())\n .filter(F.col('age').cast('int').isNotNull())\n).show()\n\n+---+----+----------+\n|age|name| date|\n+---+----+----------+\n| 5| Aa|30-11-2022|\n+---+----+----------+\n\n" ]
[ 0 ]
[]
[]
[ "pyspark", "python" ]
stackoverflow_0074623785_pyspark_python.txt
Q: Sudoku Backtracking Python to find Multiple Solutions I have a code to solve a Sudoku recursively and print out the one solution it founds. But i would like to find the number of multiple solutions. How would you modify the code that it finds all possible solutions and gives out the number of solutions? Thank you! :) code: board = [ [7,8,0,4,0,0,1,2,0], [6,0,0,0,7,5,0,0,9], [0,0,0,6,0,1,0,7,8], [0,0,7,0,4,0,2,6,0], [0,0,1,0,5,0,9,3,0], [9,0,4,0,6,0,0,0,5], [0,7,0,3,0,0,0,1,2], [1,2,0,0,0,7,4,0,0], [0,4,9,2,0,6,0,0,7] ] def solve(bo): find = find_empty(bo) if not find: return True else: row, col = find for num in range(1,10): if valid(bo, num, (row, col)): bo[row][col] = num if solve(bo): return True bo[row][col] = 0 return False def valid(bo, num, pos): # Check row for field in range(len(bo[0])): if bo[pos[0]][field] == num and pos[1] != field: return False # Check column for line in range(len(bo)): if bo[line][pos[1]] == num and pos[0] != line: return False # Check box box_x = pos[1] // 3 box_y = pos[0] // 3 for i in range(box_y*3, box_y*3 + 3): for j in range(box_x * 3, box_x*3 + 3): if bo[i][j] == num and (i,j) != pos: return False return True def print_board(bo): for i in range(len(bo)): if i % 3 == 0 and i != 0: print("- - - - - - - - - - - - - ") for j in range(len(bo[0])): if j % 3 == 0 and j != 0: print(" | ", end="") if j == 8: print(bo[i][j]) else: print(str(bo[i][j]) + " ", end="") def find_empty(bo): for i in range(len(bo)): for j in range(len(bo[0])): if bo[i][j] == 0: return (i, j) # row, col return None if __name__ == "__main__": print_board(board) solve(board) print("___________________") print("") print_board(board) I already tried to change the return True term at the Solve(Bo) Function to return None/ deleted it(For both return Terms) that it continues… Then the Algorithm continues and finds multiple solutions, but in the end fills out the correct numbers from the very last found solutions again into 0’s. This is the solution then printed out. A: From a high-level view, it seems to me that a recursive approach to this should work as follows: Check if the grid is valid: If the grid is invalid, return immediately Else, check if the grid is complete: If the the grid is complete, add (a copy of) it to the list of solutions Else, the grid is valid and incomplete, so find the first empty cell and run the function recursively on the grid with all possible values filled in for that box (and make sure to clear any modified cells at the end, after the loop) Then, the list of solutions is generated, and the length of that list is the number of possible solutions. (If you find there are a lot of solutions and generating the list takes a very long time, you may just want to make a counter for the number of solutions found.) Implementing this shouldn't be too difficult using what you have, since you already have functions for finding the first empty cell and verifying whether the grid is valid, etc. How does that sound? A: As asked: How would you modify the code that it finds all possible solutions and gives out the number of solutions? 
If you don't want to return ("give out") the solutions themselves, but the number of solutions, then you need to maintain a counter, and use the count you get back from the recursive call to update the owned counter: def solve(bo): find = find_empty(bo) if not find: return 1 count = 0 row, col = find for num in range(1, 10): if valid(bo, num, (row, col)): bo[row][col] = num count += solve(bo) bo[row][col] = 0 return count In the main program, you would no longer print the board, as you don't expect the filled board now, but a number: print(solve(board)) # Will output 1 for your example board. Getting all solutions If you don't just want to know the count, but every individual solution itself, then I would go for a generator function, that yields each solution: def solve(bo): find = find_empty(bo) if not find: yield [row[:] for row in bo] # Make a copy return row, col = find for num in range(1, 10): if valid(bo, num, (row, col)): bo[row][col] = num yield from solve(bo) bo[row][col] = 0 Then the main program can do: count = 0 for solution in solve(board): print("SOLUTION:") print_board(solution) count += 1 print("NUMBER of SOLUTIONS:", count)
Sudoku Backtracking Python to find Multiple Solutions
I have a code to solve a Sudoku recursively and print out the one solution it founds. But i would like to find the number of multiple solutions. How would you modify the code that it finds all possible solutions and gives out the number of solutions? Thank you! :) code: board = [ [7,8,0,4,0,0,1,2,0], [6,0,0,0,7,5,0,0,9], [0,0,0,6,0,1,0,7,8], [0,0,7,0,4,0,2,6,0], [0,0,1,0,5,0,9,3,0], [9,0,4,0,6,0,0,0,5], [0,7,0,3,0,0,0,1,2], [1,2,0,0,0,7,4,0,0], [0,4,9,2,0,6,0,0,7] ] def solve(bo): find = find_empty(bo) if not find: return True else: row, col = find for num in range(1,10): if valid(bo, num, (row, col)): bo[row][col] = num if solve(bo): return True bo[row][col] = 0 return False def valid(bo, num, pos): # Check row for field in range(len(bo[0])): if bo[pos[0]][field] == num and pos[1] != field: return False # Check column for line in range(len(bo)): if bo[line][pos[1]] == num and pos[0] != line: return False # Check box box_x = pos[1] // 3 box_y = pos[0] // 3 for i in range(box_y*3, box_y*3 + 3): for j in range(box_x * 3, box_x*3 + 3): if bo[i][j] == num and (i,j) != pos: return False return True def print_board(bo): for i in range(len(bo)): if i % 3 == 0 and i != 0: print("- - - - - - - - - - - - - ") for j in range(len(bo[0])): if j % 3 == 0 and j != 0: print(" | ", end="") if j == 8: print(bo[i][j]) else: print(str(bo[i][j]) + " ", end="") def find_empty(bo): for i in range(len(bo)): for j in range(len(bo[0])): if bo[i][j] == 0: return (i, j) # row, col return None if __name__ == "__main__": print_board(board) solve(board) print("___________________") print("") print_board(board) I already tried to change the return True term at the Solve(Bo) Function to return None/ deleted it(For both return Terms) that it continues… Then the Algorithm continues and finds multiple solutions, but in the end fills out the correct numbers from the very last found solutions again into 0’s. This is the solution then printed out.
[ "From a high-level view, it seems to me that a recursive approach to this should work as follows:\n\nCheck if the grid is valid:\n\nIf the grid is invalid, return immediately\nElse, check if the grid is complete:\n\nIf the the grid is complete, add (a copy of) it to the list of\nsolutions\nElse, the grid is valid and incomplete, so find the first empty cell and run the function recursively on the grid with all possible values filled in for that box (and make sure to clear any modified cells at the end, after the loop)\n\n\n\n\n\nThen, the list of solutions is generated, and the length of that list is the number of possible solutions. (If you find there are a lot of solutions and generating the list takes a very long time, you may just want to make a counter for the number of solutions found.)\nImplementing this shouldn't be too difficult using what you have, since you already have functions for finding the first empty cell and verifying whether the grid is valid, etc.\nHow does that sound?\n", "As asked:\n\nHow would you modify the code that it finds all possible solutions and gives out the number of solutions?\n\nIf you don't want to return (\"give out\") the solutions themselves, but the number of solutions, then you need to maintain a counter, and use the count you get back from the recursive call to update the owned counter:\ndef solve(bo):\n find = find_empty(bo)\n if not find:\n return 1\n\n count = 0\n row, col = find\n for num in range(1, 10):\n if valid(bo, num, (row, col)):\n bo[row][col] = num \n count += solve(bo)\n bo[row][col] = 0 \n\n return count\n\nIn the main program, you would no longer print the board, as you don't expect the filled board now, but a number:\n print(solve(board)) # Will output 1 for your example board.\n\nGetting all solutions\nIf you don't just want to know the count, but every individual solution itself, then I would go for a generator function, that yields each solution:\ndef solve(bo):\n find = find_empty(bo)\n if not find:\n yield [row[:] for row in bo] # Make a copy\n return\n \n row, col = find\n for num in range(1, 10):\n if valid(bo, num, (row, col)):\n bo[row][col] = num \n yield from solve(bo)\n bo[row][col] = 0 \n\nThen the main program can do:\n count = 0\n for solution in solve(board):\n print(\"SOLUTION:\")\n print_board(solution)\n count += 1\n print(\"NUMBER of SOLUTIONS:\", count)\n\n" ]
[ 0, 0 ]
[]
[]
[ "backtracking", "python", "recursion", "sudoku" ]
stackoverflow_0074622588_backtracking_python_recursion_sudoku.txt
Q: JSON pandas dataframe ValueError: Expected object or value I am trying to read a JSON file using pandas. The JSON file is in this format: { "category": "CRIME", "headline": "There Were 2 Mass Shootings In Texas Last Week, But Only 1 On TV", "authors": "Melissa Jeltsen", "link": "https://www.huffingtonpost.com/entry/texas-amanda-painter-mass-shooting_us_5b081ab4e4b0802d69caad89", "short_description": "She left her husband. He killed their children. Just another day in America.", "date": "2018-05-26" } { "category": "ENTERTAINMENT", "headline": "Will Smith Joins Diplo And Nicky Jam For The 2018 World Cup's Official Song", "authors": "Andy McDonald", "link": "https://www.huffingtonpost.com/entry/will-smith-joins-diplo-and-nicky-jam-for-the-official-2018-world-cup-song_us_5b09726fe4b0fdb2aa541201", "short_description": "Of course, it has a song.", "date": "2018-05-26" } However, I get the following error that I don't understand why: ValueError Traceback (most recent call last) /var/folders/j6/rj901v4j40368zfdw64pbf700000gn/T/ipykernel_11792/4234726591.py in <module> ----> 1 df = pd.read_json('db.json', lines=True) 2 df.head() ~/opt/anaconda3/lib/python3.9/site-packages/pandas/util/_decorators.py in wrapper(*args, **kwargs) 205 else: 206 kwargs[new_arg_name] = new_arg_value --> 207 return func(*args, **kwargs) 208 209 return cast(F, wrapper) ~/opt/anaconda3/lib/python3.9/site-packages/pandas/util/_decorators.py in wrapper(*args, **kwargs) 309 stacklevel=stacklevel, 310 ) --> 311 return func(*args, **kwargs) 312 313 return wrapper ~/opt/anaconda3/lib/python3.9/site-packages/pandas/io/json/_json.py in read_json(path_or_buf, orient, typ, dtype, convert_axes, convert_dates, keep_default_dates, numpy, precise_float, date_unit, encoding, encoding_errors, lines, chunksize, compression, nrows, storage_options) 610 611 with json_reader: --> 612 return json_reader.read() 613 614 ~/opt/anaconda3/lib/python3.9/site-packages/pandas/io/json/_json.py in read(self) 742 data = ensure_str(self.data) 743 data_lines = data.split("\n") --> 744 obj = self._get_object_parser(self._combine_lines(data_lines)) 745 else: 746 obj = self._get_object_parser(self.data) ~/opt/anaconda3/lib/python3.9/site-packages/pandas/io/json/_json.py in _get_object_parser(self, json) 766 obj = None 767 if typ == "frame": --> 768 obj = FrameParser(json, **kwargs).parse() 769 770 if typ == "series" or obj is None: ~/opt/anaconda3/lib/python3.9/site-packages/pandas/io/json/_json.py in parse(self) 878 self._parse_numpy() 879 else: --> 880 self._parse_no_numpy() 881 882 if self.obj is None: ~/opt/anaconda3/lib/python3.9/site-packages/pandas/io/json/_json.py in _parse_no_numpy(self) 1131 if orient == "columns": 1132 self.obj = DataFrame( -> 1133 loads(json, precise_float=self.precise_float), dtype=None 1134 ) 1135 elif orient == "split": ValueError: Expected object or value My code is written as follows: import pandas as pd df = read_json('db.json', lines=True) df.head() I tried changing the structure of the JSON file as suggested by here but it doesn't work. The error that I get is the same error as the one I have specified above. Is there any other way that i can solve this issue? A: You can wrap it in square brackets [] and add a comma between the dictionaries for valid json. 
[{ "category": "CRIME", "headline": "There Were 2 Mass Shootings In Texas Last Week, But Only 1 On TV", "authors": "Melissa Jeltsen", "link": "https://www.huffingtonpost.com/entry/texas-amanda-painter-mass-shooting_us_5b081ab4e4b0802d69caad89", "short_description": "She left her husband. He killed their children. Just another day in America.", "date": "2018-05-26" }, { "category": "ENTERTAINMENT", "headline": "Will Smith Joins Diplo And Nicky Jam For The 2018 World Cup's Official Song", "authors": "Andy McDonald", "link": "https://www.huffingtonpost.com/entry/will-smith-joins-diplo-and-nicky-jam-for-the-official-2018-world-cup-song_us_5b09726fe4b0fdb2aa541201", "short_description": "Of course, it has a song.", "date": "2018-05-26" }] Read file:: import pandas as pd df = pd.read_json("/path/to/file/db.json") print(df) Output: category headline authors link short_description date 0 CRIME There Were 2 Mass Shootings In Texas Last Week, But Only 1 On TV Melissa Jeltsen https://www.huffingtonpost.com/entry/texas-amanda-painter-mass-shooting_us_5b081ab4e4b0802d69caad89 She left her husband. He killed their children. Just another day in America. 2018-05-26 1 ENTERTAINMENT Will Smith Joins Diplo And Nicky Jam For The 2018 World Cup's Official Song Andy McDonald https://www.huffingtonpost.com/entry/will-smith-joins-diplo-and-nicky-jam-for-the-official-2018-... Of course, it has a song. 2018-05-26
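If editing the file is not an option, the concatenated objects can also be parsed exactly as they are with json.JSONDecoder.raw_decode and then handed to pandas; a sketch, assuming db.json contains the two objects shown in the question:

import json
import pandas as pd

decoder = json.JSONDecoder()
records = []

with open("db.json", "r", encoding="utf-8") as f:
    text = f.read()

idx = 0
while idx < len(text):
    # skip whitespace between the back-to-back objects, then decode the next one
    while idx < len(text) and text[idx].isspace():
        idx += 1
    if idx >= len(text):
        break
    obj, idx = decoder.raw_decode(text, idx)
    records.append(obj)

df = pd.DataFrame(records)
print(df.head())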
JSON pandas dataframe ValueError: Expected object or value
I am trying to read a JSON file using pandas. The JSON file is in this format: { "category": "CRIME", "headline": "There Were 2 Mass Shootings In Texas Last Week, But Only 1 On TV", "authors": "Melissa Jeltsen", "link": "https://www.huffingtonpost.com/entry/texas-amanda-painter-mass-shooting_us_5b081ab4e4b0802d69caad89", "short_description": "She left her husband. He killed their children. Just another day in America.", "date": "2018-05-26" } { "category": "ENTERTAINMENT", "headline": "Will Smith Joins Diplo And Nicky Jam For The 2018 World Cup's Official Song", "authors": "Andy McDonald", "link": "https://www.huffingtonpost.com/entry/will-smith-joins-diplo-and-nicky-jam-for-the-official-2018-world-cup-song_us_5b09726fe4b0fdb2aa541201", "short_description": "Of course, it has a song.", "date": "2018-05-26" } However, I get the following error that I don't understand why: ValueError Traceback (most recent call last) /var/folders/j6/rj901v4j40368zfdw64pbf700000gn/T/ipykernel_11792/4234726591.py in <module> ----> 1 df = pd.read_json('db.json', lines=True) 2 df.head() ~/opt/anaconda3/lib/python3.9/site-packages/pandas/util/_decorators.py in wrapper(*args, **kwargs) 205 else: 206 kwargs[new_arg_name] = new_arg_value --> 207 return func(*args, **kwargs) 208 209 return cast(F, wrapper) ~/opt/anaconda3/lib/python3.9/site-packages/pandas/util/_decorators.py in wrapper(*args, **kwargs) 309 stacklevel=stacklevel, 310 ) --> 311 return func(*args, **kwargs) 312 313 return wrapper ~/opt/anaconda3/lib/python3.9/site-packages/pandas/io/json/_json.py in read_json(path_or_buf, orient, typ, dtype, convert_axes, convert_dates, keep_default_dates, numpy, precise_float, date_unit, encoding, encoding_errors, lines, chunksize, compression, nrows, storage_options) 610 611 with json_reader: --> 612 return json_reader.read() 613 614 ~/opt/anaconda3/lib/python3.9/site-packages/pandas/io/json/_json.py in read(self) 742 data = ensure_str(self.data) 743 data_lines = data.split("\n") --> 744 obj = self._get_object_parser(self._combine_lines(data_lines)) 745 else: 746 obj = self._get_object_parser(self.data) ~/opt/anaconda3/lib/python3.9/site-packages/pandas/io/json/_json.py in _get_object_parser(self, json) 766 obj = None 767 if typ == "frame": --> 768 obj = FrameParser(json, **kwargs).parse() 769 770 if typ == "series" or obj is None: ~/opt/anaconda3/lib/python3.9/site-packages/pandas/io/json/_json.py in parse(self) 878 self._parse_numpy() 879 else: --> 880 self._parse_no_numpy() 881 882 if self.obj is None: ~/opt/anaconda3/lib/python3.9/site-packages/pandas/io/json/_json.py in _parse_no_numpy(self) 1131 if orient == "columns": 1132 self.obj = DataFrame( -> 1133 loads(json, precise_float=self.precise_float), dtype=None 1134 ) 1135 elif orient == "split": ValueError: Expected object or value My code is written as follows: import pandas as pd df = read_json('db.json', lines=True) df.head() I tried changing the structure of the JSON file as suggested by here but it doesn't work. The error that I get is the same error as the one I have specified above. Is there any other way that i can solve this issue?
[ "You can wrap it in square brackets [] and add a comma between the dictionaries for valid json.\n[{\n \"category\": \"CRIME\",\n \"headline\": \"There Were 2 Mass Shootings In Texas Last Week, But Only 1 On TV\",\n \"authors\": \"Melissa Jeltsen\",\n \"link\": \"https://www.huffingtonpost.com/entry/texas-amanda-painter-mass-shooting_us_5b081ab4e4b0802d69caad89\", \"short_description\": \"She left her husband. He killed their children. Just another day in America.\",\n \"date\": \"2018-05-26\"\n},\n{\n \"category\": \"ENTERTAINMENT\",\n \"headline\": \"Will Smith Joins Diplo And Nicky Jam For The 2018 World Cup's Official Song\",\n \"authors\": \"Andy McDonald\",\n \"link\": \"https://www.huffingtonpost.com/entry/will-smith-joins-diplo-and-nicky-jam-for-the-official-2018-world-cup-song_us_5b09726fe4b0fdb2aa541201\",\n \"short_description\": \"Of course, it has a song.\",\n \"date\": \"2018-05-26\"\n}]\n\nRead file::\nimport pandas as pd\n\n\ndf = pd.read_json(\"/path/to/file/db.json\")\nprint(df)\n\nOutput:\n category headline authors link short_description date\n0 CRIME There Were 2 Mass Shootings In Texas Last Week, But Only 1 On TV Melissa Jeltsen https://www.huffingtonpost.com/entry/texas-amanda-painter-mass-shooting_us_5b081ab4e4b0802d69caad89 She left her husband. He killed their children. Just another day in America. 2018-05-26\n1 ENTERTAINMENT Will Smith Joins Diplo And Nicky Jam For The 2018 World Cup's Official Song Andy McDonald https://www.huffingtonpost.com/entry/will-smith-joins-diplo-and-nicky-jam-for-the-official-2018-... Of course, it has a song. 2018-05-26\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "json", "pandas", "python" ]
stackoverflow_0074624275_dataframe_json_pandas_python.txt
Q: How to add array of integer field in Django Rest Framework? I want to add an array of integer fields in my model class Schedule(models.Model): name = models.CharField(max_length=100) start_time = models.DateTimeField(auto_now_add=True) end_time = models.DateTimeField(null=True, blank=True) day_of_the_week = ?? ( array of integer ) I tried with class Schedule(models.Model): name = models.CharField(max_length=100) start_time = models.DateTimeField(auto_now_add=True) end_time = models.DateTimeField(null=True, blank=True) day_of_the_week = models.CharField(max_length=100) and in the serializer add ListField class ScheduleSerializer(serializers.ModelSerializer): day_of_the_week = serializers.ListField() class Meta(): model = Schedule fields = "__all__" but this one is not working can anyone suggest me how to deal with this issue? A: Try this: class Schedule(models.Model): name = models.CharField(max_length=100) start_time = models.DateTimeField(auto_now_add=True) end_time = models.DateTimeField(null=True, blank=True) day_of_the_week = models.JSONField(default=list) class ScheduleSerializer(serializers.ModelSerializer): day_of_the_week = serializers.ListField( child=serializers.IntegerField(), ) class Meta(): model = Schedule fields = "__all__" A: Your solution should work with one small modification: models.CharField(validators=[int_list_validator], max_length=100)
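If a JSONField is not an option (for example on an older database backend), the CharField variant from the second answer also works, but note that int_list_validator is a factory and has to be called with parentheses. A rough sketch, assuming the days are stored as a comma-separated string such as "1,3,5":

from django.core.validators import int_list_validator
from django.db import models

class Schedule(models.Model):
    name = models.CharField(max_length=100)
    start_time = models.DateTimeField(auto_now_add=True)
    end_time = models.DateTimeField(null=True, blank=True)
    # stores "1,3,5"; the validator rejects anything that is not
    # a comma-separated list of integers
    day_of_the_week = models.CharField(
        max_length=100,
        validators=[int_list_validator(sep=",")],
    )

With this layout the serializer still has to split and join the string itself (for example in to_internal_value / to_representation), which is why the JSONField approach is usually the simpler choice.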
How to add array of integer field in Django Rest Framework?
I want to add an array of integer fields in my model class Schedule(models.Model): name = models.CharField(max_length=100) start_time = models.DateTimeField(auto_now_add=True) end_time = models.DateTimeField(null=True, blank=True) day_of_the_week = ?? ( array of integer ) I tried with class Schedule(models.Model): name = models.CharField(max_length=100) start_time = models.DateTimeField(auto_now_add=True) end_time = models.DateTimeField(null=True, blank=True) day_of_the_week = models.CharField(max_length=100) and in the serializer add ListField class ScheduleSerializer(serializers.ModelSerializer): day_of_the_week = serializers.ListField() class Meta(): model = Schedule fields = "__all__" but this one is not working can anyone suggest me how to deal with this issue?
[ "Try this:\nclass Schedule(models.Model):\n name = models.CharField(max_length=100)\n start_time = models.DateTimeField(auto_now_add=True)\n end_time = models.DateTimeField(null=True, blank=True)\n day_of_the_week = models.JSONField(default=list)\n\nclass ScheduleSerializer(serializers.ModelSerializer):\n day_of_the_week = serializers.ListField(\n child=serializers.IntegerField(),\n )\n\n class Meta():\n model = Schedule\n fields = \"__all__\"\n\n", "Your solution should work with one small modification:\nmodels.CharField(validators=[int_list_validator], max_length=100)\n\n" ]
[ 1, 0 ]
[]
[]
[ "django", "django_models", "django_rest_framework", "django_serializer", "python" ]
stackoverflow_0074624299_django_django_models_django_rest_framework_django_serializer_python.txt
Q: Selenium overlay button cannot be clicked I am having some issues with one website's button. Here is my driver function. def get_driver(): options = webdriver.ChromeOptions() # options.add_argument("--headless") options.add_argument("--incognito") driver = webdriver.Chrome(executable_path = ChromeDriverManager().install(), chrome_options = options) driver.get('https://online.depo-diy.lt') return driver driver = get_driver() Problem is that I cannot select and click anything from the drop down manu when I start my driver. Website request to select a store and then you can press OK to proceed. I check for iframes with driver.window_handles but it shows only one frame. This seems like overlay, but I don't know what to do with it. Only button I can find is "//button[@class='ms-Button-flexContainer flexContainer-51']". Here is the screenshot, don't know how to copy HTML output.. Blockquote A: Try this: WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "ModalFocusTrapZone30"))) driver.find_element(By.ID, "Dropdown31-option").click() time.sleep(1) driver.find_element(By.XPATH, ".//*[contains(text(), 'Šilutės 28B, Klaipėda')]").click() time.sleep(1) driver.find_element(By.XPATH, ".//*[@data-automationid='splitbuttonprimary']//span[text()='OK']").click()
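A variant of the same clicks that waits for each element to become clickable instead of sleeping. The locators are copied from the answer and may need adjusting, since ids such as Dropdown31-option are auto-generated by Fluent UI and can change between page loads; driver is assumed to come from the get_driver() function in the question.

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)  # driver returned by get_driver()

# open the store dropdown, pick a store, then confirm with OK
wait.until(EC.element_to_be_clickable((By.ID, "Dropdown31-option"))).click()
wait.until(EC.element_to_be_clickable(
    (By.XPATH, "//*[contains(text(), 'Šilutės 28B, Klaipėda')]"))).click()
wait.until(EC.element_to_be_clickable(
    (By.XPATH, "//*[@data-automationid='splitbuttonprimary']//span[text()='OK']"))).click()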
Selenium overlay button cannot be clicked
I am having some issues with one website's button. Here is my driver function. def get_driver(): options = webdriver.ChromeOptions() # options.add_argument("--headless") options.add_argument("--incognito") driver = webdriver.Chrome(executable_path = ChromeDriverManager().install(), chrome_options = options) driver.get('https://online.depo-diy.lt') return driver driver = get_driver() The problem is that I cannot select and click anything from the drop-down menu when I start my driver. The website asks me to select a store and then press OK to proceed. I checked for iframes with driver.window_handles but it shows only one frame. This seems like an overlay, but I don't know what to do with it. The only button I can find is "//button[@class='ms-Button-flexContainer flexContainer-51']". Here is the screenshot; I don't know how to copy the HTML output. Blockquote
[ "Try this:\nWebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, \"ModalFocusTrapZone30\")))\ndriver.find_element(By.ID, \"Dropdown31-option\").click()\ntime.sleep(1)\ndriver.find_element(By.XPATH, \".//*[contains(text(), 'Šilutės 28B, Klaipėda')]\").click()\ntime.sleep(1)\ndriver.find_element(By.XPATH, \".//*[@data-automationid='splitbuttonprimary']//span[text()='OK']\").click()\n\n" ]
[ 1 ]
[]
[]
[ "python", "selenium", "web_scraping" ]
stackoverflow_0074624230_python_selenium_web_scraping.txt
Q: How can I label a column of strings into numbered groups based on another column containing substrings? I have the 1st column that is around 4920 different chemical compounds. For example: 0 Ag(AuS)2 1 Ag(W3Br7)2 2 Ag0.5Ge1Pb1.75S4 3 Ag0.5Ge1Pb1.75Se4 4 Ag2BBr ... ... 4916 ZrTaN3 4917 ZrTe 4918 ZrTi2O 4919 ZrTiF6 4920 ZrW2 I have the 2nd column that has all the elements of the periodic table numerically listed atomic number 0 H 1 He 2 Li 3 Be 4 B .. ... 113 Fl 114 Uup 115 Lv 116 Uus 117 Uuo How can I classify the first column into groups based on the compound's first element corresponding to their atomic number from column 2 so that I can return the first column The atomic number of Ag = 27 The atomic number of Zr = 40 0 47 1 47 2 47 3 47 4 47 ... ... 4916 40 4917 40 4918 40 4919 40 4920 40 A: Since the first element could be a varying number of letters, the simplest solution would be to use the regex approach for getting the first section. For example: import re compounds = ["Ag(AuS)2", "HTiF", "ZrTaN3"] for compound in compounds: match = re.match(r"[A-Z][a-z]*", compound) if match: fist_element = match.group(0) print(fist_element) this will print out the first element of each compound. Note: If there are some more complex compounds and you need to adjust your regex, I recommend using https://regex101.com/ as a playground. Once you have that information it just needs to be connected with the element in the second column which would be easiest if you mapped that column to a dictionary resembling: { H: 0, He: 1, Li: 2 ...} which would allow you to simply get the element index by calling dict_with_elements.get(first_element). From there on, the rest is just looping and writing data. I hope this helps.
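The same idea fits in a couple of pandas calls: extract the leading element symbol with the regex above and map it through a symbol-to-number lookup built from the element column. A small sketch with toy stand-ins for the two columns (silver's atomic number is 47, which matches the expected output in the question):

import pandas as pd

compounds = pd.Series(["Ag(AuS)2", "Ag2BBr", "ZrTaN3", "ZrW2"], name="compound")
atomic_number = {"Ag": 47, "Zr": 40}  # in the real data: one entry per element

first_element = compounds.str.extract(r"^([A-Z][a-z]*)", expand=False)
labels = first_element.map(atomic_number)
print(labels)

Building the real lookup from the second column is just a zip of the symbols with whichever numbering that table uses; note its index starts at 0 for H, so it may need an offset of one.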
How can I label a column of strings into numbered groups based on another column containing substrings?
I have the 1st column that is around 4920 different chemical compounds. For example: 0 Ag(AuS)2 1 Ag(W3Br7)2 2 Ag0.5Ge1Pb1.75S4 3 Ag0.5Ge1Pb1.75Se4 4 Ag2BBr ... ... 4916 ZrTaN3 4917 ZrTe 4918 ZrTi2O 4919 ZrTiF6 4920 ZrW2 I have the 2nd column that has all the elements of the periodic table numerically listed atomic number 0 H 1 He 2 Li 3 Be 4 B .. ... 113 Fl 114 Uup 115 Lv 116 Uus 117 Uuo How can I classify the first column into groups based on the compound's first element corresponding to their atomic number from column 2 so that I can return the first column The atomic number of Ag = 27 The atomic number of Zr = 40 0 47 1 47 2 47 3 47 4 47 ... ... 4916 40 4917 40 4918 40 4919 40 4920 40
[ "Since the first element could be a varying number of letters, the simplest solution would be to use the regex approach for getting the first section.\nFor example:\nimport re\n\ncompounds = [\"Ag(AuS)2\", \"HTiF\", \"ZrTaN3\"]\n\nfor compound in compounds:\n match = re.match(r\"[A-Z][a-z]*\", compound)\n if match:\n fist_element = match.group(0)\n print(fist_element)\n\nthis will print out the first element of each compound.\nNote: If there are some more complex compounds and you need to adjust your regex, I recommend using https://regex101.com/ as a playground.\nOnce you have that information it just needs to be connected with the element in the second column which would be easiest if you mapped that column to a dictionary resembling:\n{ H: 0, He: 1, Li: 2 ...}\n\nwhich would allow you to simply get the element index by calling dict_with_elements.get(first_element).\nFrom there on, the rest is just looping and writing data. I hope this helps.\n" ]
[ 2 ]
[]
[]
[ "categorical_data", "floating_point", "grouping", "python", "string" ]
stackoverflow_0074624050_categorical_data_floating_point_grouping_python_string.txt
Q: How to round only numbers in python dataframe columns with object mixed I have a dataframe named "df" as the picture. In this dataframe there are "null" as object(dtype) and numerics. I wish to round(2) only the numeric values in multiple columns. I have written this code but keep getting "TypeError: 'int' object is not iterable" as TypeError. *The first line code is to convert na's to "null", since other numbers need to be numeric dtype. df['skor_change_w_ts']=pd.to_numeric(df['skor_change_w_ts'], errors='coerce').fillna("null", downcast='infer') for i in len(df): if df['skor_change_w_ts'][i] is float: df['skor_change_w_ts'][i]=df['skor_change_w_ts'][i].round(2) What would be the most simple code to round(2) only numeric values in multiple columns? A: round before fillna: df['skor_change_w_ts'] = (pd.to_numeric(df['skor_change_w_ts'], errors='coerce') .round(2).fillna("null", downcast='infer') ) Example input: df = pd.DataFrame({'skor_change_w_ts': [1, 2.6666, 'null']}) Output: skor_change_w_ts 0 1.0 1 2.67 2 null A: You don't need to call .fillna() at all, coerce will do that for you. df['skor_change_w_ts'] = (pd.to_numeric(df['skor_change_w_ts'], errors='coerce').round(2) Should do the trick.
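Since the question mentions multiple columns, the same expression can simply be applied in a loop over whichever columns mix numbers with the string "null". A small self-contained sketch with made-up data:

import pandas as pd

df = pd.DataFrame({
    "skor_change_w_ts": [1.2345, "null", 3.9876],
    "another_col": ["null", 2.71828, 0.1],
})

cols = ["skor_change_w_ts", "another_col"]  # the mixed columns to round
for c in cols:
    df[c] = (pd.to_numeric(df[c], errors="coerce")
               .round(2)
               .fillna("null", downcast="infer"))
print(df)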
How to round only numbers in python dataframe columns with object mixed
I have a dataframe named "df" as the picture. In this dataframe there are "null" as object(dtype) and numerics. I wish to round(2) only the numeric values in multiple columns. I have written this code but keep getting "TypeError: 'int' object is not iterable" as TypeError. *The first line code is to convert na's to "null", since other numbers need to be numeric dtype. df['skor_change_w_ts']=pd.to_numeric(df['skor_change_w_ts'], errors='coerce').fillna("null", downcast='infer') for i in len(df): if df['skor_change_w_ts'][i] is float: df['skor_change_w_ts'][i]=df['skor_change_w_ts'][i].round(2) What would be the most simple code to round(2) only numeric values in multiple columns?
[ "round before fillna:\ndf['skor_change_w_ts'] = (pd.to_numeric(df['skor_change_w_ts'], errors='coerce')\n .round(2).fillna(\"null\", downcast='infer')\n )\n\nExample input:\ndf = pd.DataFrame({'skor_change_w_ts': [1, 2.6666, 'null']})\n\nOutput:\n skor_change_w_ts\n0 1.0\n1 2.67\n2 null\n\n", "You don't need to call .fillna() at all, coerce will do that for you.\ndf['skor_change_w_ts'] = (pd.to_numeric(df['skor_change_w_ts'], errors='coerce').round(2) \n\nShould do the trick.\n" ]
[ 1, 1 ]
[]
[]
[ "dataframe", "numeric", "pandas", "python", "rounding" ]
stackoverflow_0074624331_dataframe_numeric_pandas_python_rounding.txt
Q: Python Creating Dictionary from excel data I want to create a dictionary from the values, i get from excel cells, My code is below, wb = xlrd.open_workbook('foo.xls') sh = wb.sheet_by_index(2) for i in range(138): cell_value_class = sh.cell(i,2).value cell_value_id = sh.cell(i,0).value and I want to create a dictionary, like below, that consists of the values coming from the excel cells; {'class1': 1, 'class2': 3, 'class3': 4, 'classN':N} Any idea on how I can create this dictionary? A: or you can try pandas from pandas import * xls = ExcelFile('path_to_file.xls') df = xls.parse(xls.sheet_names[0]) print df.to_dict() A: d = {} wb = xlrd.open_workbook('foo.xls') sh = wb.sheet_by_index(2) for i in range(138): cell_value_class = sh.cell(i,2).value cell_value_id = sh.cell(i,0).value d[cell_value_class] = cell_value_id A: This script allows you to transform an excel data table to a list of dictionaries: import xlrd workbook = xlrd.open_workbook('foo.xls') workbook = xlrd.open_workbook('foo.xls', on_demand = True) worksheet = workbook.sheet_by_index(0) first_row = [] # The row where we stock the name of the column for col in range(worksheet.ncols): first_row.append( worksheet.cell_value(0,col) ) # transform the workbook to a list of dictionaries data =[] for row in range(1, worksheet.nrows): elm = {} for col in range(worksheet.ncols): elm[first_row[col]]=worksheet.cell_value(row,col) data.append(elm) print data A: You can use Pandas to do this. Import pandas and Read the excel as a pandas dataframe. import pandas as pd file_path = 'path_for_your_input_excel_sheet' df = pd.read_excel(file_path, encoding='utf-16') You can use pandas.DataFrame.to_dict to convert a pandas dataframe to a dictionary. Find the documentation for the same here df.to_dict() This would give you a dictionary of the excel sheet you read. Generic Example : df = pd.DataFrame({'col1': [1, 2],'col2': [0.5, 0.75]},index=['a', 'b']) >>> df col1 col2 a 1 0.50 b 2 0.75 >>> df.to_dict() {'col1': {'a': 1, 'b': 2}, 'col2': {'a': 0.5, 'b': 0.75}} A: If you want to convert your Excel data into a list of dictionaries in python using pandas, Best way to do that: excel_file_path = 'Path to your Excel file' excel_records = pd.read_excel(excel_file_path) excel_records_df = excel_records.loc[:, ~excel_records.columns.str.contains('^Unnamed')] records_list_of_dict=excel_records_df.to_dict(orient='record') Print(records_list_of_dict) A: I'd go for: wb = xlrd.open_workbook('foo.xls') sh = wb.sheet_by_index(2) lookup = dict(zip(sh.col_values(2, 0, 138), sh.col_values(0, 0, 138))) A: There is also a PyPI package for that: https://pypi.org/project/sheet2dict/ It is parsing excel and csv files and returning it as an array of dictionaries. Each row is represented as a dictionary in the array. Like this way : Python 3.9.0 (default, Dec 6 2020, 18:02:34) [Clang 12.0.0 (clang-1200.0.32.27)] on darwin Type "help", "copyright", "credits" or "license" for more information. # Import the library >>> from sheet2dict import Worksheet # Create an object >>> ws = Worksheet() # return converted rows as dictionaries in the array >>> ws.xlsx_to_dict(path='Book1.xlsx') [ {'#': '1', 'question': 'Notifications Enabled', 'answer': 'True'}, {'#': '2', 'question': 'Updated', 'answer': 'False'} ] A: if you can convert it to csv this is very suitable. 
import dataconverters.commas as commas filename = 'test.csv' with open(filename) as f: records, metadata = commas.parse(f) for row in records: print 'this is row in dictionary:'+row A: If you use, openpyxl below code might help: import openpyxl workbook = openpyxl.load_workbook("ExcelDemo.xlsx") sheet = workbook.active first_row = [] # The row where we stock the name of the column for col in range(1, sheet.max_column+1): first_row.append(sheet.cell(row=1, column=col).value) data =[] for row in range(2, sheet.max_row+1): elm = {} for col in range(1, sheet.max_column+1): elm[first_row[col-1]]=sheet.cell(row=row,column=col).value data.append(elm) print (data) credit to: Python Creating Dictionary from excel data A: I Tried a lot of ways but this is the most effective way i found import pyexcel as p def read_records(): records = p.get_records(file_name="File") products = [row for row in records] return products
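A compact pandas variant of the two-column lookup the question asks for, in the spirit of the dict(zip(...)) answer above. It assumes the ids are in the first column and the class names in the third, as in the xlrd snippet, and that an Excel engine such as xlrd or openpyxl is installed:

import pandas as pd

sheet = pd.read_excel("foo.xls", sheet_name=2, header=None, nrows=138)
lookup = dict(zip(sheet[2], sheet[0]))  # {value in column C: value in column A}
print(lookup)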
Python Creating Dictionary from excel data
I want to create a dictionary from the values, i get from excel cells, My code is below, wb = xlrd.open_workbook('foo.xls') sh = wb.sheet_by_index(2) for i in range(138): cell_value_class = sh.cell(i,2).value cell_value_id = sh.cell(i,0).value and I want to create a dictionary, like below, that consists of the values coming from the excel cells; {'class1': 1, 'class2': 3, 'class3': 4, 'classN':N} Any idea on how I can create this dictionary?
[ "or you can try pandas\nfrom pandas import *\nxls = ExcelFile('path_to_file.xls')\ndf = xls.parse(xls.sheet_names[0])\nprint df.to_dict()\n\n", "d = {}\nwb = xlrd.open_workbook('foo.xls')\nsh = wb.sheet_by_index(2) \nfor i in range(138):\n cell_value_class = sh.cell(i,2).value\n cell_value_id = sh.cell(i,0).value\n d[cell_value_class] = cell_value_id\n\n", "This script allows you to transform an excel data table to a list of dictionaries:\nimport xlrd\n\nworkbook = xlrd.open_workbook('foo.xls')\nworkbook = xlrd.open_workbook('foo.xls', on_demand = True)\nworksheet = workbook.sheet_by_index(0)\nfirst_row = [] # The row where we stock the name of the column\nfor col in range(worksheet.ncols):\n first_row.append( worksheet.cell_value(0,col) )\n# transform the workbook to a list of dictionaries\ndata =[]\nfor row in range(1, worksheet.nrows):\n elm = {}\n for col in range(worksheet.ncols):\n elm[first_row[col]]=worksheet.cell_value(row,col)\n data.append(elm)\nprint data\n\n", "You can use Pandas to do this. Import pandas and Read the excel as a pandas dataframe.\nimport pandas as pd\nfile_path = 'path_for_your_input_excel_sheet'\ndf = pd.read_excel(file_path, encoding='utf-16')\n\nYou can use pandas.DataFrame.to_dict to convert a pandas dataframe to a dictionary. Find the documentation for the same here\ndf.to_dict()\n\nThis would give you a dictionary of the excel sheet you read.\nGeneric Example :\ndf = pd.DataFrame({'col1': [1, 2],'col2': [0.5, 0.75]},index=['a', 'b'])\n\n>>> df\ncol1 col2\na 1 0.50\nb 2 0.75\n>>> df.to_dict()\n{'col1': {'a': 1, 'b': 2}, 'col2': {'a': 0.5, 'b': 0.75}}\n", "If you want to convert your Excel data into a list of dictionaries in python using pandas,\nBest way to do that:\nexcel_file_path = 'Path to your Excel file'\nexcel_records = pd.read_excel(excel_file_path)\nexcel_records_df = excel_records.loc[:, ~excel_records.columns.str.contains('^Unnamed')]\nrecords_list_of_dict=excel_records_df.to_dict(orient='record')\nPrint(records_list_of_dict)\n\n", "I'd go for:\nwb = xlrd.open_workbook('foo.xls')\nsh = wb.sheet_by_index(2) \nlookup = dict(zip(sh.col_values(2, 0, 138), sh.col_values(0, 0, 138)))\n\n", "There is also a PyPI package for that: https://pypi.org/project/sheet2dict/\nIt is parsing excel and csv files and returning it as an array of dictionaries.\nEach row is represented as a dictionary in the array.\nLike this way :\nPython 3.9.0 (default, Dec 6 2020, 18:02:34)\n[Clang 12.0.0 (clang-1200.0.32.27)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n\n# Import the library\n>>> from sheet2dict import Worksheet\n\n# Create an object\n>>> ws = Worksheet()\n\n# return converted rows as dictionaries in the array \n>>> ws.xlsx_to_dict(path='Book1.xlsx')\n[\n {'#': '1', 'question': 'Notifications Enabled', 'answer': 'True'}, \n {'#': '2', 'question': 'Updated', 'answer': 'False'}\n]\n\n", "if you can convert it to csv this is very suitable.\nimport dataconverters.commas as commas\nfilename = 'test.csv'\nwith open(filename) as f:\n records, metadata = commas.parse(f)\n for row in records:\n print 'this is row in dictionary:'+row\n\n", "If you use, openpyxl below code might help:\nimport openpyxl\nworkbook = openpyxl.load_workbook(\"ExcelDemo.xlsx\")\nsheet = workbook.active\nfirst_row = [] # The row where we stock the name of the column\nfor col in range(1, sheet.max_column+1):\n first_row.append(sheet.cell(row=1, column=col).value)\ndata =[]\nfor row in range(2, sheet.max_row+1):\n elm = {}\n for col in range(1, 
sheet.max_column+1):\n elm[first_row[col-1]]=sheet.cell(row=row,column=col).value\n data.append(elm)\nprint (data)\n\ncredit to: Python Creating Dictionary from excel data\n", "I Tried a lot of ways but this is the most effective way i found\nimport pyexcel as p\ndef read_records():\n records = p.get_records(file_name=\"File\")\n products = [row for row in records]\n return products\n\n" ]
[ 49, 20, 17, 5, 2, 1, 1, 0, 0, 0 ]
[]
[]
[ "python", "xlrd" ]
stackoverflow_0014196013_python_xlrd.txt
Q: Adding nested dictionary to another one would like to make a contact book using dictinaries but i cant figure out how to add a nested dict to the current dict i have. would be something like this my_contacts = {"1": { "Tom Jones", "911", "22.10.1995"}, "2": { "Bob Marley", "0800838383", "22.10.1991"} } def add_contact(): user_input = int(input("please enter how many contacts you wanna add: ")) for i in range(user_input): name = input("Enter the name: ") number = input("Enter the number: ") birthday = input("Enter the birthday") adress = input("Enter the address") my_contacts[name] = number my_contacts[birthday] = adress but unforunatly this doesnt add them as a dict. so how can i add them as one dict inside my current dict? my_contacts = {"1": { "Tom Jones", "911", "22.10.1995"}, "2": { "Bob Marley", "0800838383", "22.10.1991"} } def add_contact(): user_input = int(input("please enter how many contacts you wanna add: ")) for i in range(user_input): name = input("Enter the name: ") number = input("Enter the number: ") birthday = input("Enter the birthday") adress = input("Enter the address") my_contacts[name] = number my_contacts[birthday] = adress A: This will be a nice solution for you. All i'm doing here is every time you iterate inside your for loop you are creating a new dictionary of the details you want to put in for a contact and adding that dictionary to the original dictionary, therefor adding a new contact as a dictionary. There should be some extra error handling of course but this for now may be a good solution. my_contacts = {1: {"Name": "Tom Jones", "Number": "911", "Birthday": "22.10.1995", "Address": "212 street"}, 2: {"Name": "Bob Marley", "Number": "0800838383", "Birthday": "22.10.1991", "Address": "31 street"} } def add_contact(): user_input = int(input("please enter how many contacts you wanna add: ")) index = len(my_contacts) + 1 for _ in range(user_input): details = {} name = input("Enter the name: ") number = input("Enter the number: ") birthday = input("Enter the birthday") address = input("Enter the address") details["Name"] = name details["Number"] = number details["Birthday"] = birthday details["Address"] = address my_contacts[index] = details index += 1 A: So basically what you have here: my_contacts = {"1": { "Tom Jones", "911", "22.10.1995"}, "2": { "Bob Marley", "0800838383", "22.10.1991"} } Is a dict of sets (mapping a string to a set), not a dict of dicts. 1.Pass the current contacts to your function (so you can add new contact, not create them from scratch) 2.Create a new dict for the new contact 3.Add it to the passed contacts dict def add_contact(contacts: dict)->None: user_input = int(input("please enter how many contacts you wanna add: ")) for i in range(user_input): name = input("Enter the name: ") number = input("Enter the number: ") birthday = input("Enter the birthday") address = input("Enter the address") new_contact = {"name":name, "number":number, "birthday":birthday, "address":address} next_key = str(max([int(k) for k in contacts.keys()])+1) contacts[new_key] = new_contact
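A condensed version of the second answer with its small naming slip fixed (it builds next_key but then writes to new_key), plus a default so an empty contact book does not make max() fail:

def add_contact(contacts: dict) -> None:
    how_many = int(input("please enter how many contacts you wanna add: "))
    for _ in range(how_many):
        new_contact = {
            "name": input("Enter the name: "),
            "number": input("Enter the number: "),
            "birthday": input("Enter the birthday: "),
            "address": input("Enter the address: "),
        }
        # keys stay strings like "1", "2", ... as in the original my_contacts
        next_key = str(max((int(k) for k in contacts), default=0) + 1)
        contacts[next_key] = new_contact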
Adding nested dictionary to another one
would like to make a contact book using dictinaries but i cant figure out how to add a nested dict to the current dict i have. would be something like this my_contacts = {"1": { "Tom Jones", "911", "22.10.1995"}, "2": { "Bob Marley", "0800838383", "22.10.1991"} } def add_contact(): user_input = int(input("please enter how many contacts you wanna add: ")) for i in range(user_input): name = input("Enter the name: ") number = input("Enter the number: ") birthday = input("Enter the birthday") adress = input("Enter the address") my_contacts[name] = number my_contacts[birthday] = adress but unforunatly this doesnt add them as a dict. so how can i add them as one dict inside my current dict? my_contacts = {"1": { "Tom Jones", "911", "22.10.1995"}, "2": { "Bob Marley", "0800838383", "22.10.1991"} } def add_contact(): user_input = int(input("please enter how many contacts you wanna add: ")) for i in range(user_input): name = input("Enter the name: ") number = input("Enter the number: ") birthday = input("Enter the birthday") adress = input("Enter the address") my_contacts[name] = number my_contacts[birthday] = adress
[ "This will be a nice solution for you.\nAll i'm doing here is every time you iterate inside your for loop you are creating a new dictionary of the details you want to put in for a contact and adding that dictionary to the original dictionary, therefor adding a new contact as a dictionary. There should be some extra error handling of course but this for now may be a good solution.\nmy_contacts = {1: {\"Name\": \"Tom Jones\",\n \"Number\": \"911\",\n \"Birthday\": \"22.10.1995\",\n \"Address\": \"212 street\"},\n 2: {\"Name\": \"Bob Marley\",\n \"Number\": \"0800838383\",\n \"Birthday\": \"22.10.1991\",\n \"Address\": \"31 street\"}\n }\n\n\ndef add_contact():\n user_input = int(input(\"please enter how many contacts you wanna add: \"))\n index = len(my_contacts) + 1\n for _ in range(user_input):\n details = {}\n name = input(\"Enter the name: \")\n number = input(\"Enter the number: \")\n birthday = input(\"Enter the birthday\")\n address = input(\"Enter the address\")\n\n details[\"Name\"] = name\n details[\"Number\"] = number\n details[\"Birthday\"] = birthday\n details[\"Address\"] = address\n\n my_contacts[index] = details\n index += 1\n\n", "So basically what you have here:\nmy_contacts = {\"1\": { \"Tom Jones\", \"911\", \"22.10.1995\"},\n \"2\": { \"Bob Marley\", \"0800838383\", \"22.10.1991\"}\n }\n\nIs a dict of sets (mapping a string to a set), not a dict of dicts.\n1.Pass the current contacts to your function (so you can add new contact, not create them from scratch)\n2.Create a new dict for the new contact\n3.Add it to the passed contacts dict\ndef add_contact(contacts: dict)->None:\n user_input = int(input(\"please enter how many contacts you wanna add: \"))\n for i in range(user_input):\n name = input(\"Enter the name: \")\n number = input(\"Enter the number: \")\n birthday = input(\"Enter the birthday\")\n address = input(\"Enter the address\")\n new_contact = {\"name\":name, \"number\":number, \"birthday\":birthday, \"address\":address}\n next_key = str(max([int(k) for k in contacts.keys()])+1)\n contacts[new_key] = new_contact\n\n" ]
[ 0, 0 ]
[]
[]
[ "contacts", "dictionary", "list", "nested", "python" ]
stackoverflow_0074624280_contacts_dictionary_list_nested_python.txt
Q: How to calculate time in pairs - python dataframe I have a data frame with data: Transaction group - superior Application number - could be different in one transaction group Created time – data time Status - application or calculation. And I want to calculate time between Created time for each pair in status application – calculation. Data doesn’t have column ‘pair’ , to be better understand I labeled them A, B, C in order. pairs Transaction group Application number Created time Status A 221110000363 HB202211100902000100 2022-11-10 09:05 application A 221110000363 HB202211100902000100 2022-11-10 13:42 calculation B 221110000363 HB202211100902000200 2022-11-10 14:02 application B 221110000363 HB202211100902000200 2022-11-10 14:07 calculation C 221110000363 HB202211100902000200 2022-11-10 14:43 application C 221110000363 HB202211100902000200 2022-11-10 15:03 calculation I was trying to split it on two df: one with all applications for transaction group - df1: pairs Transaction group Application number start_time Status A 221110000363 HB202211100902000100 2022-11-10 09:05 application B 221110000363 HB202211100902000200 2022-11-10 14:02 application C 221110000363 HB202211100902000200 2022-11-10 14:43 application and another with all calculations - df2: pairs Transaction group Application number stop_time Status A 221110000363 HB202211100902000100 2022-11-10 13:42 calculation B 221110000363 HB202211100902000200 2022-11-10 14:07 calculation C 221110000363 HB202211100902000200 2022-11-10 15:03 calculation Created time in df1 I renamed to ‘statrt_time’ and in df2 to ‘stop_time’ But then I don’t know how to merge this two data frames that in one row I will have one pair with two times – start and stop. It is also important that it must be group by ‘Transaction group’ and ‘Application number’. I’m looking for result like that: pairs Transaction group Application number start_time stop_time A 221110000363 HB202211100902000100 2022-11-10 09:05 2022-11-10 13:42 B 221110000363 HB202211100902000200 2022-11-10 14:02 2022-11-10 14:07 C 221110000363 HB202211100902000200 2022-11-10 14:43 2022-11-10 15:03 EDIT: Logic to pairs: In one transaction group I'm looking for application number and for this I'm looking for first lowest datetime in column ‘Status’ where the value is ‘application’ and then the first lowest datetime where in column ‘Status’ value is ‘calculation’. This is the first pair. Second pair is second lowest datetime in Status - application and then the second lowest datetime in Status - calculation. This is the second pair. 
EDIT2 Data: Application number Created time Transaction group Status HB202211100902000100 2022-11-10 09:05 221110000363 application HB202211100902000100 2022-11-10 13:42 221110000363 calculation HB202211100902000100 2022-11-10 14:02 221110000363 application HB202211100902000100 2022-11-10 14:07 221110000363 calculation HB202211100902000100 2022-11-10 14:43 221110000363 application HB202211100902000100 2022-11-10 15:03 221110000363 calculation HB202211100902000100 2022-11-10 16:24 221110000363 application HB202205152239000200 2022-05-15 22:53 220515000252 application HB202205152239000200 2022-05-16 14:57 220515000252 calculation HB202205152253000100 2022-05-15 22:56 220515000252 application HB202205152253000100 2022-05-16 14:56 220515000252 calculation HB202205152257000100 2022-05-15 22:57 220515000252 application HB202205152257000100 2022-05-16 14:56 220515000252 calculation HB202205152259000100 2022-05-15 23:00 220515000252 application HB202205152259000100 2022-05-16 14:56 220515000252 calculation HB202205152301000100 2022-05-15 23:02 220515000252 application HB202205152301000100 2022-05-16 14:56 220515000252 calculation HB202205152302000100 2022-05-15 23:04 220515000252 application HB202205152302000100 2022-05-16 14:55 220515000252 calculation HB202205152305000100 2022-05-15 23:07 220515000252 application HB202205152305000100 2022-05-16 14:55 220515000252 calculation HB202205152307000100 2022-05-15 23:09 220515000252 application HB202205152307000100 2022-05-16 14:54 220515000252 calculation HB202205152312000100 2022-05-15 23:13 220515000252 application HB202205152312000100 2022-05-16 14:54 220515000252 calculation HB202205152313000100 2022-05-15 23:17 220515000252 application HB202205152313000100 2022-05-16 14:54 220515000252 calculation So I use this: > df['pairs'] = df.groupby(['Status']).cumcount() And my data looks like : Application number Created time Transaction group Status pairs 0 HB202211100902000100 2022-11-10 09:05:28 221110000363 application 0 1 HB202211100902000100 2022-11-10 13:42:44 221110000363 calculation 0 2 HB202211100902000100 2022-11-10 14:02:43 221110000363 application 1 3 HB202211100902000100 2022-11-10 14:07:16 221110000363 calculation 1 4 HB202211100902000100 2022-11-10 14:43:54 221110000363 application 2 5 HB202211100902000100 2022-11-10 15:03:06 221110000363 calculation 2 6 HB202211100902000100 2022-11-10 16:24:26 221110000363 application 3 7 HB202205152239000200 2022-05-15 22:53:18 220515000252 application 4 8 HB202205152239000200 2022-05-16 14:57:14 220515000252 calculation 3 9 HB202205152253000100 2022-05-15 22:56:11 220515000252 application 5 10 HB202205152253000100 2022-05-16 14:56:55 220515000252 calculation 4 11 HB202205152257000100 2022-05-15 22:57:58 220515000252 application 6 12 HB202205152257000100 2022-05-16 14:56:47 220515000252 calculation 5 So the problem starts from 3rd pair. 
After using pivot: df = (df.pivot(index=['pairs', 'Transaction group', 'Application number'], columns='Status', values='Created time')) Data looks that: Status application calculation pairs Transaction group Application number 0 221110000363 HB202211100902000100 2022-11-10 09:05:28 2022-11-10 13:42:44 1 221110000363 HB202211100902000100 2022-11-10 14:02:43 2022-11-10 14:07:16 2 221110000363 HB202211100902000100 2022-11-10 14:43:54 2022-11-10 15:03:06 3 220515000252 HB202205152239000200 NaT 2022-05-16 14:57:14 221110000363 HB202211100902000100 2022-11-10 16:24:26 NaT 4 220515000252 HB202205152239000200 2022-05-15 22:53:18 NaT HB202205152253000100 NaT 2022-05-16 14:56:55 5 220515000252 HB202205152253000100 2022-05-15 22:56:11 NaT HB202205152257000100 NaT 2022-05-16 14:56:47 So, in my opinion the most important thing is to make a good pairs. A: Use a pivot and rename: df['pairs'] = df.groupby('Status').cumcount() (df.pivot(index=['pairs', 'Transaction group', 'Application number'], columns='Status', values='Created time') .rename(columns={'application': 'start_time', 'calculation': 'end_time'}) .reset_index() ) Output: Status pairs Transaction group Application number start_time end_time 0 A 221110000363 HB202211100902000100 2022-11-10 09:05:00 2022-11-10 13:42:00 1 B 221110000363 HB202211100902000200 2022-11-10 14:02:00 2022-11-10 14:07:00 2 C 221110000363 HB202211100902000200 2022-11-10 14:43:00 2022-11-10 15:03:00
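Building on the answer and on EDIT2: the drift from the third pair onward comes from numbering the pairs globally across the whole frame; counting them inside each transaction/application instead keeps the n-th application row aligned with the n-th calculation row of the same application. A sketch with a minimal stand-in for the frame (the real df is the one shown in EDIT2):

import pandas as pd

df = pd.DataFrame({
    "Transaction group": ["221110000363"] * 4,
    "Application number": ["HB202211100902000100"] * 4,
    "Created time": ["2022-11-10 09:05", "2022-11-10 13:42",
                     "2022-11-10 14:02", "2022-11-10 14:07"],
    "Status": ["application", "calculation", "application", "calculation"],
})
df["Created time"] = pd.to_datetime(df["Created time"])
df = df.sort_values("Created time")

# number the pairs inside each application, separately per status
df["pairs"] = df.groupby(
    ["Transaction group", "Application number", "Status"]
).cumcount()

wide = (df.pivot(index=["pairs", "Transaction group", "Application number"],
                 columns="Status", values="Created time")
          .rename(columns={"application": "start_time",
                           "calculation": "stop_time"})
          .reset_index())
wide["duration"] = wide["stop_time"] - wide["start_time"]
print(wide)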
How to calculate time in pairs - python dataframe
I have a data frame with data: Transaction group - superior Application number - could be different in one transaction group Created time – data time Status - application or calculation. And I want to calculate time between Created time for each pair in status application – calculation. Data doesn’t have column ‘pair’ , to be better understand I labeled them A, B, C in order. pairs Transaction group Application number Created time Status A 221110000363 HB202211100902000100 2022-11-10 09:05 application A 221110000363 HB202211100902000100 2022-11-10 13:42 calculation B 221110000363 HB202211100902000200 2022-11-10 14:02 application B 221110000363 HB202211100902000200 2022-11-10 14:07 calculation C 221110000363 HB202211100902000200 2022-11-10 14:43 application C 221110000363 HB202211100902000200 2022-11-10 15:03 calculation I was trying to split it on two df: one with all applications for transaction group - df1: pairs Transaction group Application number start_time Status A 221110000363 HB202211100902000100 2022-11-10 09:05 application B 221110000363 HB202211100902000200 2022-11-10 14:02 application C 221110000363 HB202211100902000200 2022-11-10 14:43 application and another with all calculations - df2: pairs Transaction group Application number stop_time Status A 221110000363 HB202211100902000100 2022-11-10 13:42 calculation B 221110000363 HB202211100902000200 2022-11-10 14:07 calculation C 221110000363 HB202211100902000200 2022-11-10 15:03 calculation Created time in df1 I renamed to ‘statrt_time’ and in df2 to ‘stop_time’ But then I don’t know how to merge this two data frames that in one row I will have one pair with two times – start and stop. It is also important that it must be group by ‘Transaction group’ and ‘Application number’. I’m looking for result like that: pairs Transaction group Application number start_time stop_time A 221110000363 HB202211100902000100 2022-11-10 09:05 2022-11-10 13:42 B 221110000363 HB202211100902000200 2022-11-10 14:02 2022-11-10 14:07 C 221110000363 HB202211100902000200 2022-11-10 14:43 2022-11-10 15:03 EDIT: Logic to pairs: In one transaction group I'm looking for application number and for this I'm looking for first lowest datetime in column ‘Status’ where the value is ‘application’ and then the first lowest datetime where in column ‘Status’ value is ‘calculation’. This is the first pair. Second pair is second lowest datetime in Status - application and then the second lowest datetime in Status - calculation. This is the second pair. 
EDIT2 Data: Application number Created time Transaction group Status HB202211100902000100 2022-11-10 09:05 221110000363 application HB202211100902000100 2022-11-10 13:42 221110000363 calculation HB202211100902000100 2022-11-10 14:02 221110000363 application HB202211100902000100 2022-11-10 14:07 221110000363 calculation HB202211100902000100 2022-11-10 14:43 221110000363 application HB202211100902000100 2022-11-10 15:03 221110000363 calculation HB202211100902000100 2022-11-10 16:24 221110000363 application HB202205152239000200 2022-05-15 22:53 220515000252 application HB202205152239000200 2022-05-16 14:57 220515000252 calculation HB202205152253000100 2022-05-15 22:56 220515000252 application HB202205152253000100 2022-05-16 14:56 220515000252 calculation HB202205152257000100 2022-05-15 22:57 220515000252 application HB202205152257000100 2022-05-16 14:56 220515000252 calculation HB202205152259000100 2022-05-15 23:00 220515000252 application HB202205152259000100 2022-05-16 14:56 220515000252 calculation HB202205152301000100 2022-05-15 23:02 220515000252 application HB202205152301000100 2022-05-16 14:56 220515000252 calculation HB202205152302000100 2022-05-15 23:04 220515000252 application HB202205152302000100 2022-05-16 14:55 220515000252 calculation HB202205152305000100 2022-05-15 23:07 220515000252 application HB202205152305000100 2022-05-16 14:55 220515000252 calculation HB202205152307000100 2022-05-15 23:09 220515000252 application HB202205152307000100 2022-05-16 14:54 220515000252 calculation HB202205152312000100 2022-05-15 23:13 220515000252 application HB202205152312000100 2022-05-16 14:54 220515000252 calculation HB202205152313000100 2022-05-15 23:17 220515000252 application HB202205152313000100 2022-05-16 14:54 220515000252 calculation So I use this: > df['pairs'] = df.groupby(['Status']).cumcount() And my data looks like : Application number Created time Transaction group Status pairs 0 HB202211100902000100 2022-11-10 09:05:28 221110000363 application 0 1 HB202211100902000100 2022-11-10 13:42:44 221110000363 calculation 0 2 HB202211100902000100 2022-11-10 14:02:43 221110000363 application 1 3 HB202211100902000100 2022-11-10 14:07:16 221110000363 calculation 1 4 HB202211100902000100 2022-11-10 14:43:54 221110000363 application 2 5 HB202211100902000100 2022-11-10 15:03:06 221110000363 calculation 2 6 HB202211100902000100 2022-11-10 16:24:26 221110000363 application 3 7 HB202205152239000200 2022-05-15 22:53:18 220515000252 application 4 8 HB202205152239000200 2022-05-16 14:57:14 220515000252 calculation 3 9 HB202205152253000100 2022-05-15 22:56:11 220515000252 application 5 10 HB202205152253000100 2022-05-16 14:56:55 220515000252 calculation 4 11 HB202205152257000100 2022-05-15 22:57:58 220515000252 application 6 12 HB202205152257000100 2022-05-16 14:56:47 220515000252 calculation 5 So the problem starts from 3rd pair. 
After using pivot: df = (df.pivot(index=['pairs', 'Transaction group', 'Application number'], columns='Status', values='Created time')) Data looks that: Status application calculation pairs Transaction group Application number 0 221110000363 HB202211100902000100 2022-11-10 09:05:28 2022-11-10 13:42:44 1 221110000363 HB202211100902000100 2022-11-10 14:02:43 2022-11-10 14:07:16 2 221110000363 HB202211100902000100 2022-11-10 14:43:54 2022-11-10 15:03:06 3 220515000252 HB202205152239000200 NaT 2022-05-16 14:57:14 221110000363 HB202211100902000100 2022-11-10 16:24:26 NaT 4 220515000252 HB202205152239000200 2022-05-15 22:53:18 NaT HB202205152253000100 NaT 2022-05-16 14:56:55 5 220515000252 HB202205152253000100 2022-05-15 22:56:11 NaT HB202205152257000100 NaT 2022-05-16 14:56:47 So, in my opinion the most important thing is to make a good pairs.
[ "Use a pivot and rename:\ndf['pairs'] = df.groupby('Status').cumcount()\n\n(df.pivot(index=['pairs', 'Transaction group', 'Application number'],\n columns='Status', values='Created time')\n .rename(columns={'application': 'start_time', 'calculation': 'end_time'})\n .reset_index()\n)\n\nOutput:\nStatus pairs Transaction group Application number start_time end_time\n0 A 221110000363 HB202211100902000100 2022-11-10 09:05:00 2022-11-10 13:42:00\n1 B 221110000363 HB202211100902000200 2022-11-10 14:02:00 2022-11-10 14:07:00\n2 C 221110000363 HB202211100902000200 2022-11-10 14:43:00 2022-11-10 15:03:00\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "group_by", "pandas", "python" ]
stackoverflow_0074624381_dataframe_group_by_pandas_python.txt
Q: Stop pip from failing on single package when installing with requirements.txt I am installing packages from requirements.txt pip install -r requirements.txt The requirements.txt file reads: Pillow lxml cssselect jieba beautifulsoup nltk lxml is the only package failing to install and this leads to everything failing (expected results as pointed out by larsks in the comments). However, after lxml fails pip still runs through and downloads the rest of the packages. From what I understand the pip install -r requirements.txt command will fail if any of the packages listed in the requirements.txt fail to install. Is there any argument I can pass when running pip install -r requirements.txt to tell it to install what it can and skip the packages that it cannot, or to exit as soon as it sees something fail? A: Running each line with pip install may be a workaround. cat requirements.txt | xargs -n 1 pip install Note: -a parameter is not available under MacOS, so old cat is more portable. A: This solution handles empty lines, whitespace lines, # comment lines, whitespace-then-# comment lines in your requirements.txt. cat requirements.txt | sed -e '/^\s*#.*$/d' -e '/^\s*$/d' | xargs -n 1 pip install Hat tip to this answer for the sed magic. A: For windows users, you can use this: FOR /F %k in (requirements.txt) DO ( if NOT # == %k ( pip install %k ) ) Logic: for every dependency in file(requirements.txt), install them and ignore those start with "#". A: For Windows: pip version >=18 import sys from pip._internal import main as pip_main def install(package): pip_main(['install', package]) if __name__ == '__main__': with open(sys.argv[1]) as f: for line in f: install(line) pip version <18 import sys import pip def install(package): pip.main(['install', package]) if __name__ == '__main__': with open(sys.argv[1]) as f: for line in f: install(line) A: The xargs solution works but can have portability issues (BSD/GNU) and/or be cumbersome if you have comments or blank lines in your requirements file. As for the usecase where such a behavior would be required, I use for instance two separate requirement files, one which is only listing core dependencies that need to be always installed and another file with non-core dependencies that are in 90% of the cases not needed for most usecases. That would be an equivalent of the Recommends section of a debian package. 
I use the following shell script (requires sed) to install optional dependencies: #!/bin/sh while read dependency; do dependency_stripped="$(echo "${dependency}" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')" # Skip comments if [[ $dependency_stripped == \#* ]]; then continue # Skip blank lines elif [ -z "$dependency_stripped" ]; then continue else if pip install "$dependency_stripped"; then echo "$dependency_stripped is installed" else echo "Could not install $dependency_stripped, skipping" fi fi done < recommends.txt A: Building on the answer by @MZD, here's a solution to filter out all text starting with a comment sign # cat requirements.txt | grep -Eo '(^[^#]+)' | xargs -n 1 pip install A: For Windows using PowerShell: foreach($line in Get-Content requirements.txt) { if(!($line.StartsWith('#'))){ pip install $line } } A: One line PowerShell: Get-Content .\requirements.txt | ForEach-Object {pip install $_} If you need to ignore certain lines then: Get-Content .\requirements.txt | ForEach-Object {if (!$_.startswith("#")){pip install $_}} OR Get-Content .\requirements.txt | ForEach-Object {if ($_ -notmatch "#"){pip install $_}} A: Thanks, Etienne Prothon for windows cases. But, after upgrading to pip 18, pip package don't expose main to public. So you may need to change code like this. # This code install line by line a list of pip package import sys from pip._internal import main as pip_main def install(package): pip_main(['install', package]) if __name__ == '__main__': with open(sys.argv[1]) as f: for line in f: install(line) A: Another option is to use pip install --dry-run to get a list of packages that you need to install and then keep trying it and remove the ones that don't work. A: A very general solution The following code installs all requirements for: multiple requirement files (requirements1.txt, requirements2.txt) ignores lines with comments # skips packages, which are not instalable runs pip install each line (not each word as in some other answers) $ (cat requirements1.txt; echo ""; cat requirements2.txt) | grep "^[^#]" | xargs -L 1 pip install
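Another line-by-line variant that avoids pip's private internals (pip.main and pip._internal change between releases) by shelling out to pip instead; it skips blank lines and comments and keeps going when a package fails:

import subprocess
import sys

with open("requirements.txt") as f:
    for line in f:
        req = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not req:
            continue
        result = subprocess.run([sys.executable, "-m", "pip", "install", req])
        if result.returncode != 0:
            print(f"Could not install {req}, skipping")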
Stop pip from failing on single package when installing with requirements.txt
I am installing packages from requirements.txt pip install -r requirements.txt The requirements.txt file reads: Pillow lxml cssselect jieba beautifulsoup nltk lxml is the only package failing to install and this leads to everything failing (expected results as pointed out by larsks in the comments). However, after lxml fails pip still runs through and downloads the rest of the packages. From what I understand the pip install -r requirements.txt command will fail if any of the packages listed in the requirements.txt fail to install. Is there any argument I can pass when running pip install -r requirements.txt to tell it to install what it can and skip the packages that it cannot, or to exit as soon as it sees something fail?
[ "Running each line with pip install may be a workaround.\ncat requirements.txt | xargs -n 1 pip install\n\nNote: -a parameter is not available under MacOS, so old cat is more portable.\n", "This solution handles empty lines, whitespace lines, # comment lines, whitespace-then-# comment lines in your requirements.txt.\ncat requirements.txt | sed -e '/^\\s*#.*$/d' -e '/^\\s*$/d' | xargs -n 1 pip install\n\nHat tip to this answer for the sed magic.\n", "For windows users, you can use this:\nFOR /F %k in (requirements.txt) DO ( if NOT # == %k ( pip install %k ) )\n\nLogic: for every dependency in file(requirements.txt), install them and ignore those start with \"#\".\n", "For Windows:\npip version >=18\nimport sys\nfrom pip._internal import main as pip_main\n\ndef install(package):\n pip_main(['install', package])\n\nif __name__ == '__main__':\n with open(sys.argv[1]) as f:\n for line in f:\n install(line)\n\npip version <18\nimport sys\nimport pip\n\ndef install(package):\n pip.main(['install', package])\n\nif __name__ == '__main__':\n with open(sys.argv[1]) as f:\n for line in f:\n install(line)\n\n", "The xargs solution works but can have portability issues (BSD/GNU) and/or be cumbersome if you have comments or blank lines in your requirements file.\nAs for the usecase where such a behavior would be required, I use for instance two separate requirement files, one which is only listing core dependencies that need to be always installed and another file with non-core dependencies that are in 90% of the cases not needed for most usecases. That would be an equivalent of the Recommends section of a debian package.\nI use the following shell script (requires sed) to install optional dependencies:\n#!/bin/sh\n\nwhile read dependency; do\n dependency_stripped=\"$(echo \"${dependency}\" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')\"\n # Skip comments\n if [[ $dependency_stripped == \\#* ]]; then\n continue\n # Skip blank lines\n elif [ -z \"$dependency_stripped\" ]; then\n continue\n else\n if pip install \"$dependency_stripped\"; then\n echo \"$dependency_stripped is installed\"\n else\n echo \"Could not install $dependency_stripped, skipping\"\n fi\n fi\ndone < recommends.txt\n\n", "Building on the answer by @MZD, here's a solution to filter out all text starting with a comment sign #\ncat requirements.txt | grep -Eo '(^[^#]+)' | xargs -n 1 pip install\n\n", "For Windows using PowerShell:\nforeach($line in Get-Content requirements.txt) {\n if(!($line.StartsWith('#'))){\n pip install $line\n }\n}\n\n", "One line PowerShell:\nGet-Content .\\requirements.txt | ForEach-Object {pip install $_}\nIf you need to ignore certain lines then:\nGet-Content .\\requirements.txt | ForEach-Object {if (!$_.startswith(\"#\")){pip install $_}}\nOR\nGet-Content .\\requirements.txt | ForEach-Object {if ($_ -notmatch \"#\"){pip install $_}}\n", "Thanks, Etienne Prothon for windows cases. \nBut, after upgrading to pip 18, pip package don't expose main to public. So you may need to change code like this. 
\n # This code install line by line a list of pip package \n import sys\n from pip._internal import main as pip_main\n\n def install(package):\n pip_main(['install', package])\n\n if __name__ == '__main__':\n with open(sys.argv[1]) as f:\n for line in f:\n install(line)\n\n", "Another option is to use pip install --dry-run to get a list of packages that you need to install and then keep trying it and remove the ones that don't work.\n", "A very general solution\nThe following code installs all requirements for:\n\nmultiple requirement files (requirements1.txt, requirements2.txt)\nignores lines with comments #\nskips packages, which are not instalable\nruns pip install each line (not each word as in some other answers)\n\n$ (cat requirements1.txt; echo \"\"; cat requirements2.txt) | grep \"^[^#]\" | xargs -L 1 pip install\n\n" ]
[ 395, 27, 12, 9, 4, 4, 3, 2, 0, 0, 0 ]
[ "For Windows:\nimport os\nfrom pip.__main__ import _main as main\n\nerror_log = open('error_log.txt', 'w')\n\ndef install(package):\n try:\n main(['install'] + [str(package)])\n except Exception as e:\n error_log.write(str(e))\n\nif __name__ == '__main__':\n f = open('requirements1.txt', 'r')\n for line in f:\n install(line)\n f.close()\n error_log.close()\n\n\nCreate a local directory, and put your requirements.txt file in it.\nCopy the code above and save it as a python file in the same directory. Remember to use .py extension, for instance, install_packages.py\nRun this file using a cmd: python install_packages.py\nAll the packages mentioned will be installed in one go without stopping at all. :)\n\nYou can add other parameters in install function. Like:\n main(['install'] + [str(package)] + ['--update'])\n" ]
[ -2 ]
[ "pip", "python" ]
stackoverflow_0022250483_pip_python.txt
Q: found changes on same id (column A) pandas I have dataframe # | A | B | C | D | E --+-------+-------+-------+-------+------- 1 | "5" | "4" | "2" | "3" | "2022-11-29" | 2 | "5" | "d" | "2" | "3" | "2022-11-30" | 3 | "5" | "4" | "2" | "h" | "2022-11-29" | 4 | "4" | "4" | "2" | "3" | "2022-11-28" | 5 | "4" | "4" | "g" | "3" | "2022-11-29" | I would like to find changes in same id (column A), but ignore changes in column E expected result: ID 5 changed in column B (changedTovalue "d") and column D (changedTovalue "h") ID 4 changed in column C (changedTovalue "g") is this possible to do with pandas? A: You can compare most common value per columns by DataFrame.mode by DataFrame.eq, set missing values by DataFrame.where if no match, reshape by DataFrame.stack, last convert to DataFrame: df1 = df.set_index('A').drop('E', axis=1) print (df1) B C D A 5 4 2 3 5 d 2 g <- added new not matched value 5 4 2 h 4 4 2 3 4 4 g 3 s = df1.mode().iloc[0] df2 = (df1.where(df1.ne(s)).stack() .rename_axis(['A','cols']) .reset_index(name='val')) print (df2) A cols val 0 5 B d 1 5 D g 2 5 D h 3 4 C g EDIT: df1 = df.set_index(['A','E']) print (df1) B C D A E 5 2022-11-29 4 2 3 2022-11-30 d 2 3 2022-11-29 4 2 h 4 2022-11-28 4 2 3 2022-11-29 4 g 3 s = df1.mode().iloc[0] df2 = (df1.where(df1.ne(s)).stack() .rename_axis(['A','E','cols']) .reset_index(name='val') ) print (df2) A E cols val 0 5 2022-11-30 B d 1 5 2022-11-29 D h 2 4 2022-11-29 C g
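A per-id variant of the same idea, for the case where different A groups have different 'normal' values: take each column's most common value within every A group as the baseline and report only the cells that differ (column E is left out, as requested). Column names follow the question:

import pandas as pd

df = pd.DataFrame({
    "A": ["5", "5", "5", "4", "4"],
    "B": ["4", "d", "4", "4", "4"],
    "C": ["2", "2", "2", "2", "g"],
    "D": ["3", "3", "h", "3", "3"],
    "E": ["2022-11-29", "2022-11-30", "2022-11-29", "2022-11-28", "2022-11-29"],
})

value_cols = ["B", "C", "D"]
baseline = df.groupby("A")[value_cols].transform(lambda s: s.mode().iloc[0])
changed = df[value_cols].where(df[value_cols].ne(baseline))

result = (changed.assign(A=df["A"])
                 .set_index("A")
                 .stack()
                 .rename_axis(["A", "column"])
                 .reset_index(name="changedTo"))
print(result)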
found changes on same id (column A) pandas
I have dataframe # | A | B | C | D | E --+-------+-------+-------+-------+------- 1 | "5" | "4" | "2" | "3" | "2022-11-29" | 2 | "5" | "d" | "2" | "3" | "2022-11-30" | 3 | "5" | "4" | "2" | "h" | "2022-11-29" | 4 | "4" | "4" | "2" | "3" | "2022-11-28" | 5 | "4" | "4" | "g" | "3" | "2022-11-29" | I would like to find changes in same id (column A), but ignore changes in column E expected result: ID 5 changed in column B (changedTovalue "d") and column D (changedTovalue "h") ID 4 changed in column C (changedTovalue "g") is this possible to do with pandas?
[ "You can compare most common value per columns by DataFrame.mode by DataFrame.eq, set missing values by DataFrame.where if no match, reshape by DataFrame.stack, last convert to DataFrame:\ndf1 = df.set_index('A').drop('E', axis=1)\n\nprint (df1)\n B C D\nA \n5 4 2 3\n5 d 2 g <- added new not matched value\n5 4 2 h\n4 4 2 3\n4 4 g 3\n\ns = df1.mode().iloc[0]\n\n\ndf2 = (df1.where(df1.ne(s)).stack()\n .rename_axis(['A','cols'])\n .reset_index(name='val'))\nprint (df2)\n A cols val\n0 5 B d\n1 5 D g\n2 5 D h\n3 4 C g\n\nEDIT:\ndf1 = df.set_index(['A','E'])\nprint (df1)\n B C D\nA E \n5 2022-11-29 4 2 3\n 2022-11-30 d 2 3\n 2022-11-29 4 2 h\n4 2022-11-28 4 2 3\n 2022-11-29 4 g 3\n\ns = df1.mode().iloc[0]\n\n\ndf2 = (df1.where(df1.ne(s)).stack()\n .rename_axis(['A','E','cols'])\n .reset_index(name='val')\n )\nprint (df2)\n A E cols val\n0 5 2022-11-30 B d\n1 5 2022-11-29 D h\n2 4 2022-11-29 C g\n\n" ]
[ 2 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074624488_dataframe_pandas_python.txt
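To make the pandas answer above directly runnable, here is a small end-to-end sketch that rebuilds the example frame from the question and applies the same mode/ne/where/stack pipeline (the EDIT variant that keeps column E in the index so the change date is reported alongside each change):

import pandas as pd

df = pd.DataFrame({
    "A": ["5", "5", "5", "4", "4"],
    "B": ["4", "d", "4", "4", "4"],
    "C": ["2", "2", "2", "2", "g"],
    "D": ["3", "3", "h", "3", "3"],
    "E": ["2022-11-29", "2022-11-30", "2022-11-29", "2022-11-28", "2022-11-29"],
})

df1 = df.set_index(["A", "E"])     # keep E so the date of each change is reported
s = df1.mode().iloc[0]             # most common value per column
changes = (df1.where(df1.ne(s))    # keep only cells that differ from the mode
              .stack()
              .rename_axis(["A", "E", "cols"])
              .reset_index(name="val"))
print(changes)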
Q: Is there any library like arch unit for Django? I've been searching for a library or tool to test my Django project architecture, check the dependencies, layers, etc. like Arch Unit for Java. But until now, I didn't find anything. I don't even know if it's viable doing these kinds of tests in Python/Django projects. I know that Django itself already checks for cyclic dependencies. A: Check out what you can find under Python complexity metrics. A tool called Wily may be of use. However, what counts as good practices will be very different for Java and Python. A: The closest that I know of is https://github.com/seddonym/import-linter It's capable only of linting - as the name suggests - imports between modules so it has far fewer capabilities than ArchUnit, but I used it with great success in several clean/hexagonal architecture projects. To use it to the full extent you need to wisely split your projects into modules so that the import statements "follow" (or rather "match") your designated architecture. Some example of the rules that I often use: [importlinter:contract:8] name = Modules .core cannot depend on shared infrastructure type = forbidden source_modules = src.services.service1.modules.module1.core src.services.service2.modules.module1.core src.services.service2.modules.module2.core forbidden_modules = src.shared.infrastructure [importlinter:contract:3] name = modules inside services shall be independent type = independence modules = src.services.service1.modules.module1 src.services.service2.modules.module1 src.services.service2.modules.module2 A: There is also pytest-archon. Its syntax is simpler than import-linter's and uses pytest instead of separate config files.
Is there any library like arch unit for Django?
I've been searching for a library or tool to test my Django project architecture, check the dependencies, layers, etc. like Arch Unit for Java. But until now, I didn't find anything. I don't even know if it's viable doing these kinds of tests in Python/Django projects. I know that Django itself already checks for cyclic dependencies.
[ "Check out what you can find under Python complexity metrics.\nA tool called Wily may be of use. However, what counts as good practices will be very different for Java and Python.\n", "The closest that I know of is https://github.com/seddonym/import-linter\nIt's capable only of linting - as the name suggests - imports between modules so it has far fewer capabilities than ArchUnit, but I used it with great success in several clean/hexagonal architecture projects.\nTo use it to the full extent you need to wisely split your projects into modules so that the import statements \"follow\" (or rather \"match\") your designated architecture.\nSome example of the rules that I often use:\n[importlinter:contract:8]\nname = Modules .core cannot depend on shared infrastructure\ntype = forbidden\nsource_modules =\n src.services.service1.modules.module1.core\n src.services.service2.modules.module1.core\n src.services.service2.modules.module2.core\n\nforbidden_modules =\n src.shared.infrastructure\n\n\n[importlinter:contract:3]\nname = modules inside services shall be independent\ntype = independence\nmodules = \n src.services.service1.modules.module1\n src.services.service2.modules.module1\n src.services.service2.modules.module2\n\n", "There is also pytest-archon. Its syntax is simpler than import-linter's and uses pytest instead of separate config files.\n" ]
[ 2, 0, 0 ]
[]
[]
[ "architecture", "archunit", "django", "python", "testing" ]
stackoverflow_0066692195_architecture_archunit_django_python_testing.txt
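The answers above point at import-linter and pytest-archon; if adding a dependency is not an option, a rough, hand-rolled version of a "forbidden imports" contract can be written with the standard library's ast module and run as an ordinary test. The package names below (myproject.core, myproject.infrastructure) are placeholders, not anything from the question.

import ast
import pathlib

FORBIDDEN_PREFIX = "myproject.infrastructure"   # hypothetical forbidden target
SOURCE_DIR = pathlib.Path("myproject/core")     # hypothetical source package

def imported_modules(py_file):
    # Yield every absolute module name imported by one source file.
    tree = ast.parse(py_file.read_text())
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                yield alias.name
        elif isinstance(node, ast.ImportFrom) and node.module:
            yield node.module

def test_core_does_not_import_infrastructure():
    offenders = [
        (str(py_file), module)
        for py_file in SOURCE_DIR.rglob("*.py")
        for module in imported_modules(py_file)
        if module.startswith(FORBIDDEN_PREFIX)
    ]
    assert not offenders, f"forbidden imports found: {offenders}"

This is far cruder than what the linked tools do (no handling of relative imports, re-exports or whitelists), but it illustrates the idea of an architecture test.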
Q: Implementing a function that returns an n x m matrix in counter-clockwise spiral order starting at the bottom right entry of the matrix So I'm trying to implement a function in python that returns all elements of a n x m matrix in counter-clockwise spiral order, starting at the bottom furthest-right entry of the matrix. For example, let's say the input was:
matrix = [[1,2,3],
          [4,5,6],
          [7,8,9]]

Then our output would be
[9, 6, 3, 2, 1, 4, 7, 8, 5]

In another case, if the
matrix = [[1,2],
          [3,4],
          [5,6]]

Then our output would be
[6, 4, 2, 1, 3, 5]

And finally, if the matrix = [3], we'd return [3]. The function I'm implementing follows this header:
def spiralOrder(matrix: list[list[int]]) -> list[int]:
A: You could consider using a while-loop:
def spiralOrder(matrix: list[list[int]]) -> list[int]:
    result = []
    left, right = 0, len(matrix[0]) - 1
    up, down = 0, len(matrix) - 1
    step = 0
    while left <= right and up <= down:
        match step % 4:
            case 0:
                for i in range(down, up - 1, -1):
                    result.append(matrix[i][right])
                right -= 1
            case 1:
                for i in range(right, left - 1, -1):
                    result.append(matrix[up][i])
                up += 1
            case 2:
                for i in range(up, down + 1):
                    result.append(matrix[i][left])
                left += 1
            case 3:
                for i in range(left, right + 1):
                    result.append(matrix[down][i])
                down -= 1
        step += 1
    return result

Example Usage 1:
print(spiralOrder([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
[9, 6, 3, 2, 1, 4, 7, 8, 5]

Example Usage 2:
print(spiralOrder([[1, 2], [3, 4], [5, 6]]))
[6, 4, 2, 1, 3, 5]

Example Usage 3:
print(spiralOrder([[3]]))
[3]

Note: [3] is not of the type list[list[int]], hence I am assuming it is a typo in your question.
Implementing a function that returns an n x m matrix in counter-clockwise spiral order starting at the bottom right entry of the matrix
So I'm trying to implement a function in python that returns all elements of a n x m matrix in counter-clockwise spiral order, starting at the bottom furthest-right entry of the matrix. For example, let's say the input was: matrix = [[1,2,3], [4,5,6], [7,8,9]] Then our output would be [9, 6, 3, 2, 1, 4, 7, 8, 5] In another case, if the matrix = [[1,2], [3,4], [5,6]] Then our output would be [6, 4, 2, 1, 3, 5] And finally, if the matrix = [3], we'd return [3]. The function I'm implementing follows this header: def spiralOrder(matrix: list[list[int]]) -> list[int]:
[ "You could consider using a while-loop:\ndef spiralOrder(self, matrix: list[list[int]]) -> list[int]:\n result = []\n left, right = 0, len(matrix[0]) - 1\n up, down = 0, len(matrix) - 1\n step = 0\n while left <= right and up <= down:\n match step % 4:\n case 0:\n for i in range(down, up - 1, -1):\n result.append(matrix[i][right])\n right -= 1\n case 1:\n for i in range(right, left - 1, -1):\n result.append(matrix[up][i])\n up += 1\n case 2:\n for i in range(up, down + 1):\n result.append(matrix[i][left])\n left += 1\n case 3:\n for i in range(left, right + 1):\n result.append(matrix[down][i])\n down -= 1\n step += 1\n return result\n\nExample Usage 1:\nprint(spiralOrder([[1, 2, 3], [4, 5, 6], [7, 8, 9]])\n[9, 6, 3, 2, 1, 4, 7, 8, 5]\n\nExample Usage 2:\nprint(spiralOrder([[1, 2], [3, 4], [5, 6]])\n[6, 4, 2, 1, 3, 5]\n\nExample Usage 3:\nprint(spiralOrder([[3]])\n[3]\n\nNote: [3] is a not of the type list[list[int]] hence I am assuming it is a typo in your question.\n" ]
[ 1 ]
[]
[]
[ "matrix", "python" ]
stackoverflow_0074623945_matrix_python.txt
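A compact alternative to the boundary-pointer answer above: peel the right column bottom-to-top, rotate what is left clockwise so the next side to visit becomes the new right column, and recurse. This is only a sketch — it copies sublists on every step, so the index-based version is the better choice for large inputs — but it produces the same ordering for the question's examples.

def spiral_ccw_from_bottom_right(m):
    # Stop once the matrix (or its rows) has been fully consumed.
    if not m or not m[0]:
        return []
    # Emit the right column from bottom to top.
    peeled = [row[-1] for row in reversed(m)]
    # Drop that column, then rotate the rest 90 degrees clockwise so the
    # next side to emit lines up as the new right column.
    rest = [row[:-1] for row in m]
    rotated = [list(r) for r in zip(*rest[::-1])]
    return peeled + spiral_ccw_from_bottom_right(rotated)

print(spiral_ccw_from_bottom_right([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # [9, 6, 3, 2, 1, 4, 7, 8, 5]
print(spiral_ccw_from_bottom_right([[1, 2], [3, 4], [5, 6]]))           # [6, 4, 2, 1, 3, 5]
print(spiral_ccw_from_bottom_right([[3]]))                              # [3]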
Q: How to use MLFlow in a functional style / functional programming? Is there a reliable way to use MLFlow in a functional style? As it is not possible to pass the run ID, for example, to the function which logs a parameter, I wonder whether it is possible to separate the code executed in my MLflow run into multiple pure functions. Have I overlooked something, or is it simply not possible? So far I have looked up the documentation and did not find a way to pass the run id to an MLFlow log function, neither for parameters, nor metrics or anything else.
A: https://mlflow.org/docs/latest/python_api/mlflow.client.html#mlflow.client.MlflowClient.log_param
log_param(run_id: str, key: str, value: Any)
A: The solution is to use the mlflow.client module instead of the mlflow module, as stated in the documentation of the mlflow client:

The mlflow.client module provides a Python CRUD interface to MLflow Experiments, Runs, Model Versions, and Registered Models. This is a lower level API that directly translates to MLflow REST API calls. For a higher level API for managing an “active run”, use the mlflow module.

mlflow client documentation

@Andre: Thanks for pointing me in the right direction.
How to use MLFlow in a functional style / functional programming?
Is there a reliable way to use MLFlow in a functional style? As it is not possible to pass the run ID, for example, to the function which logs a parameter, I wonder whether it is possible to separate the code executed in my MLflow run into multiple pure functions. Have I overlooked something, or is it simply not possible? So far I have looked up the documentation and did not find a way to pass the run id to an MLFlow log function, neither for parameters, nor metrics or anything else.
[ "https://mlflow.org/docs/latest/python_api/mlflow.client.html#mlflow.client.MlflowClient.log_param\nlog_param(run_id: str, key: str, value: Any)\n", "The solution is to use the mlflow.client module instead of the mlflow module as stated in the documentation of the mlflow client:\n\nThe mlflow.client module provides a Python CRUD interface to MLflow\nExperiments, Runs, Model Versions, and Registered Models. This is a\nlower level API that directly translates to MLflow REST API calls. For\na higher level API for managing an “active run”, use the mlflow\nmodule.\n\n\nmlflow client documentation\n\n@Andre: Thanks for pointing me in the right direction.\n" ]
[ 0, 0 ]
[]
[]
[ "contextmanager", "functional_programming", "machine_learning", "mlflow", "python" ]
stackoverflow_0074603005_contextmanager_functional_programming_machine_learning_mlflow_python.txt
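Building on both answers above, a minimal sketch of the "pass the run id around" style with MlflowClient — this keeps the logging functions free of hidden global state, which is as close to pure functions as MLflow's tracking gets. The experiment id "0" (the default experiment on a fresh local tracking store) is an assumption made for the example.

from mlflow.tracking import MlflowClient

def log_params(client: MlflowClient, run_id: str, params: dict) -> None:
    # Everything the function needs is passed in explicitly.
    for key, value in params.items():
        client.log_param(run_id, key, value)

def log_metrics(client: MlflowClient, run_id: str, metrics: dict) -> None:
    for key, value in metrics.items():
        client.log_metric(run_id, key, value)

if __name__ == "__main__":
    client = MlflowClient()
    run = client.create_run(experiment_id="0")   # assumption: default experiment
    run_id = run.info.run_id
    log_params(client, run_id, {"lr": 0.01, "epochs": 10})
    log_metrics(client, run_id, {"accuracy": 0.93})
    client.set_terminated(run_id)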
Q: I have made a function to stem my data, but it gives an error
def stem(text):
    y=[]
    for i in text.split():
        y.append(ps.stem(i))
    return " ".join(y)

new_df['tags'].apply(stem())

error: stem() missing 1 required positional argument: 'text'
A: try this code..
def stem(text):
    y=[]
    for i in text.split():
        y.append(ps.stem(i))
    return " ".join(y)

new_df['tags']=new_df['tags'].apply(stem)
I have made a function to stem my data, but it gives an error
def stem(text): y=[] for i in text.split(): y.append(ps.stem(i)) return " ".join(y) new_df['tags'].apply(stem()) error:stem() missing 1 required positional argument: 'text'
[ "try this code..\ndef stem(text):\n y=[]\n for i in text.split():\n y.append(ps.stem(i))\n return \" \".join(y)\n\nnew_df['tags']=new_df['tags'].apply(stem)\n\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0070912880_pandas_python.txt
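For completeness, a runnable version of the fix from the answer above — the key point is that apply receives the function object stem, not the result of calling stem(). The PorterStemmer and the tiny example frame are assumptions standing in for the question's ps and new_df.

import pandas as pd
from nltk.stem.porter import PorterStemmer

ps = PorterStemmer()   # assumption: this is what `ps` referred to in the question

def stem(text):
    return " ".join(ps.stem(word) for word in text.split())

new_df = pd.DataFrame({"tags": ["running jumped easily", "flying fishes swimming"]})
# Pass the function itself; pandas calls it once per cell.
new_df["tags"] = new_df["tags"].apply(stem)
print(new_df)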
Q: How can I get the node where the pod is located using kubernetes python client? I have a Pod name and I want to know where the Pod is located using kubernetes python client. Is it possible to use kubernetes python client in order to get the node name by Pod? (Just like the NODE column in kubectl get pod -o wide) I've referred to the document https://github.com/kubernetes-client/python/blob/master/kubernetes/README.md. But I didn't find a solution. A: I think read_namespaced_pod has the information (spec.nodeName). https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CoreV1Api.md#read_namespaced_pod
How can I get the node where the pod is located using kubernetes python client?
I have a Pod name and I want to know where the Pod is located using kubernetes python client. Is it possible to use kubernetes python client in order to get the node name by Pod? (Just like the NODE column in kubectl get pod -o wide) I've referred to the document https://github.com/kubernetes-client/python/blob/master/kubernetes/README.md. But I didn't find a solution.
[ "I think read_namespaced_pod has the information (spec.nodeName).\nhttps://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CoreV1Api.md#read_namespaced_pod\n" ]
[ 0 ]
[]
[]
[ "api", "client", "kubernetes", "python" ]
stackoverflow_0074623810_api_client_kubernetes_python.txt
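A short sketch of the read_namespaced_pod approach mentioned in the answer above; it assumes a kubeconfig on the local machine (inside a cluster you would call config.load_incluster_config() instead), and the pod and namespace names are placeholders.

from kubernetes import client, config

def node_of_pod(pod_name: str, namespace: str = "default") -> str:
    config.load_kube_config()          # or config.load_incluster_config() in-cluster
    v1 = client.CoreV1Api()
    pod = v1.read_namespaced_pod(name=pod_name, namespace=namespace)
    return pod.spec.node_name          # the NODE column from `kubectl get pod -o wide`

if __name__ == "__main__":
    print(node_of_pod("my-pod"))       # placeholder pod name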
Q: How can you compare two lists in such a way that you find out how many times a word from one list is in the second list? I have two lists, one containing true values selected by humans and a second list with extracted values. I would like to measure how well the pipeline is performing based on how many true values are contained in the extracted list. Example: extracted_value = ["value", "of", "words", "that", "were", "tracked"] real_value = ["value", "words", "that"] I need a metric that describes: 3 out of 3 real values were extracted For multiple Documents: 5 out of 10 real values were extracted 2 out of 3 real values were extracted 1 out of 9 real values were extracted Based on the individual comparison, can I get a score that describes how well the extracted keywords perform on average across all documents? A: Will something simple like this work? score = len([x for x in real_value if x in extracted_value])/len(extracted_value) print(score) >>> 0.5 A: The metric you're looking for is recall. @sfat's solution works well for a single document, you can then get the average over multiple documents by summing the scores and then dividing by the len of documents. For more advanced scoring for your retrieval, check the F-Score section of the linked article. A: To check how many values are shared between extracted_value and real_value. I believe you're looking for the recall of your model, you can use set operations, specifically & (and) divided by your ground truth (real_values): recall = len(set(real_value) & set(extracted_value))/len(real_values) or if you want exactly which specific values are shared, which you could always take the len of: shared_vals = set(real_value) & set(extracted_value) If you want to then calculate recall with shared_vals: recall = len(shared_vals)/len(real_value)
How can you compare two lists in such a way that you find out how many times a word from one list is in the second list?
I have two lists, one containing true values selected by humans and a second list with extracted values. I would like to measure how well the pipeline is performing based on how many true values are contained in the extracted list. Example: extracted_value = ["value", "of", "words", "that", "were", "tracked"] real_value = ["value", "words", "that"] I need a metric that describes: 3 out of 3 real values were extracted For multiple Documents: 5 out of 10 real values were extracted 2 out of 3 real values were extracted 1 out of 9 real values were extracted Based on the individual comparison, can I get a score that describes how well the extracted keywords perform on average across all documents?
[ "Will something simple like this work?\nscore = len([x for x in real_value if x in extracted_value])/len(extracted_value)\nprint(score)\n>>> 0.5\n\n", "The metric you're looking for is recall.\n@sfat's solution works well for a single document, you can then get the average over multiple documents by summing the scores and then dividing by the len of documents.\nFor more advanced scoring for your retrieval, check the F-Score section of the linked article.\n", "To check how many values are shared between extracted_value and real_value. I believe you're looking for the recall of your model, you can use set operations, specifically & (and) divided by your ground truth (real_values):\nrecall = len(set(real_value) & set(extracted_value))/len(real_values)\n\nor if you want exactly which specific values are shared, which you could always take the len of:\nshared_vals = set(real_value) & set(extracted_value)\n\nIf you want to then calculate recall with shared_vals:\nrecall = len(shared_vals)/len(real_value)\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "performance", "python" ]
stackoverflow_0074624497_performance_python.txt
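Putting the recall idea from the answers above into a small runnable sketch — per-document recall (the share of the human-selected values that were extracted), then a plain macro-average across documents; the second document is made up to show a non-perfect score.

def recall(real_values, extracted_values):
    # Fraction of the human-selected values that the pipeline recovered.
    if not real_values:
        return 0.0
    return len(set(real_values) & set(extracted_values)) / len(set(real_values))

documents = [
    (["value", "words", "that"], ["value", "of", "words", "that", "were", "tracked"]),
    (["alpha", "beta", "gamma"], ["alpha", "delta"]),   # hypothetical second document
]

scores = [recall(real, extracted) for real, extracted in documents]
print(scores)                         # per-document recall, e.g. [1.0, 0.333...]
print(sum(scores) / len(scores))      # macro-average across documents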
Q: Linux Selenium Chromedriver can't connect to chrome version I'm transferring my python scraper to a vps on Ubuntu. I've installed chromedriver using apt get and I'm getting an error when running my script. selenium.common.exceptions.WebDriverException: Message: unknown error: cannot connect to chrome at 127.0.0.1:51757 from session not created: This version of ChromeDriver only supports Chrome version 108 Current browser version is 107.0.5304.121 Does anyone know how in bash to fix this? Do I change the chrome version, because I'm not sure how to do that, and I don't know where it is found on my filesystem. Thanks. A: You have to have the version that your Selenium driver was downloaded for. If you want to maintain your 107.0.5304.121 version on Chrome, download the selenium 107.x.x.x driver https://chromedriver.chromium.org/downloads You can check here all the latest versions
Linux Selenium Chromedriver can't connect to chrome version
I'm transferring my python scraper to a vps on Ubuntu. I've installed chromedriver using apt get and I'm getting an error when running my script. selenium.common.exceptions.WebDriverException: Message: unknown error: cannot connect to chrome at 127.0.0.1:51757 from session not created: This version of ChromeDriver only supports Chrome version 108 Current browser version is 107.0.5304.121 Does anyone know how in bash to fix this? Do I change the chrome version, because I'm not sure how to do that, and I don't know where it is found on my filesystem. Thanks.
[ "You have to have the version that your Selenium driver was downloaded for.\nIf you want to maintain your 107.0.5304.121 version on Chrome, download the selenium 107.x.x.x driver\nhttps://chromedriver.chromium.org/downloads\nYou can check here all the latest versions\n" ]
[ 0 ]
[]
[]
[ "linux", "python", "selenium", "selenium_chromedriver" ]
stackoverflow_0074624643_linux_python_selenium_selenium_chromedriver.txt
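If you prefer to pin a ChromeDriver build yourself instead of relying on whatever apt installed, Selenium 4 lets you point at an explicit binary. A sketch — the driver path is an assumption (download the build matching your installed Chrome, 107.x in the question, and adjust the path), and the flags are typical for a display-less VPS.

from selenium import webdriver
from selenium.webdriver.chrome.service import Service

service = Service("/usr/local/bin/chromedriver")   # assumed path to the matching driver

options = webdriver.ChromeOptions()
options.add_argument("--headless")     # no display on a VPS
options.add_argument("--no-sandbox")

driver = webdriver.Chrome(service=service, options=options)
try:
    driver.get("https://example.org")
    print(driver.title)
finally:
    driver.quit()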
Q: SQLAlchemy: Separation of table classes by different files I will try to explain in order so that my question is clear.
The simplified code executed according to the lesson:
engine = create_engine(...)
session = (...)

Base = declarative_base()

class <someTable>(Base):
    ...

Base.metadata.create_all(bind=engine)

And in this case, everything works as it should. A table in the database is being created.
Next I want to split the many (Base) classes into different files located in the same folder "schemas". Now, after launching the application, tables are not created, because there is no explicit definition of "class (Base)" before "create_all". I thought that if I import "Base" in each separate file in which a specific "class (Base)" is registered, then this would be enough.
The first thing that comes to mind is to put an "if" condition before each CRUD operation - if the table does not exist, then "create_all".
The file structure of the project looks something like this:
|project
|
|-|utils
| |-db_api
| |-|db_api
| |-|schemas
| |-__init__.py
| |-<someTable_1>.py #Contains "class <someTable_1>(Base)"
| |-<someTable_2>.py
| |-__init__.py
| |-session.py #there session, Base, and create_all
| |-<some_db_commands>.py
| |-__init__.py
| |-<some_bot_funcs>.py
|-main.py

This is a simple training telegram bot that uses the "/start" command to create a table in postgres and writes the user id there.
Can I keep a similar file structure while achieving SQLAlchemy operability?
A: Each model class must be read by the interpreter before create_all is called, otherwise they will not be in the metadata and so will not be created. So what you need to do is:
Define Base in a module.
All the files that declare model classes must import the module that defines Base, and use that Base as their superclass.
In (say) main.py, import Base and all the model classes (so that they are read by the interpreter) and then do Base.metadata.create_all().
You may be able to adjust the above steps a little, but understand that generally you won't be able to declare Base and do create_all() in the same file without creating circular dependencies
SQLAlchemy: Separation of table classes by different files
I will try to explain in order so that my question is clear.
The simplified code executed according to the lesson:
engine = create_engine(...)
session = (...)

Base = declarative_base()

class <someTable>(Base):
    ...

Base.metadata.create_all(bind=engine)

And in this case, everything works as it should. A table in the database is being created.
Next I want to split the many (Base) classes into different files located in the same folder "schemas". Now, after launching the application, tables are not created, because there is no explicit definition of "class (Base)" before "create_all". I thought that if I import "Base" in each separate file in which a specific "class (Base)" is registered, then this would be enough.
The first thing that comes to mind is to put an "if" condition before each CRUD operation - if the table does not exist, then "create_all".
The file structure of the project looks something like this:
|project
|
|-|utils
| |-db_api
| |-|db_api
| |-|schemas
| |-__init__.py
| |-<someTable_1>.py #Contains "class <someTable_1>(Base)"
| |-<someTable_2>.py
| |-__init__.py
| |-session.py #there session, Base, and create_all
| |-<some_db_commands>.py
| |-__init__.py
| |-<some_bot_funcs>.py
|-main.py

This is a simple training telegram bot that uses the "/start" command to create a table in postgres and writes the user id there.
Can I keep a similar file structure while achieving SQLAlchemy operability?
[ "Each model class must be read by the interpreter before create_all is called, otherwise they will not be in the metadata and so will not be created. So what you need to do is:\n\nDefine Base in a module.\nAll the files that declare model classes must import the module that defines Base, and use that Base as their superclass.\nIn (say) main.py, import Base and all the model classes (so that they are read by the interpreter) and then do Base.metadata.create_all().\n\nYou may be able to adjust the above steps a little, but understand that generally you won't be able to declare Base and do create_all() in the same file without creating circular dependencies\n" ]
[ 0 ]
[]
[]
[ "orm", "python", "python_3.x", "sqlalchemy" ]
stackoverflow_0074623946_orm_python_python_3.x_sqlalchemy.txt
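A minimal sketch of the layout the answer above describes, collapsed into one listing for readability — the comments mark which hypothetical file each part would live in; the model, table and database names are made up.

# db/base.py — defines Base and the engine, nothing else
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base

engine = create_engine("sqlite:///app.db")   # any URL; sqlite keeps the example self-contained
Base = declarative_base()

# schemas/user.py — every model module does `from db.base import Base`
class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

# main.py — import Base *and* every model module, then create the tables;
# importing schemas.user is what registers User in Base.metadata
Base.metadata.create_all(bind=engine)
print(Base.metadata.tables.keys())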
Q: server starts to closed the connection unexpectedly I have a project with 10+ parsers and at the end have this code: ` cursor = conn.cursor() my_file = open(r'csv\file.csv') sql_statement = """ CREATE TEMP TABLE temp ( LIKE vhcl ) ON COMMIT DROP; COPY temp FROM STDIN WITH CSV HEADER DELIMITER AS ','; INSERT INTO vhcl SELECT * FROM temp ON CONFLICT (id) DO UPDATE SET name= EXCLUDED.name""" cursor.copy_expert(sql=sql_statement, file=my_file) conn.commit() cursor.close() ` Everything worked fine until a couple of weeks ago I started to get these errors: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. I noticed, that if parsers works (for example) less, than 10 minutes, I won't get those errors I tried to make a separate function, that adds data to the DB after the parser ends working. It still gives me that error. The strange thing is that I ran my parsers on my home pc, and it works fine, also, if I add data manually with the same function, but in a different file, it also works fine. I asked about banned IP for db, but it's okay. So I have no idea why I have this error. PostgreSQL log A: You have a network problem. Both the server and the client complain that the other side unexpectedly hung up on them. So it was some misconfigured network component in the middle that cut the line. You have two options: fix the network lower tcp_keepalives_idle on the PostgreSQL client or server, so that the operating system sends "keepalive packets" frequently and the network doesn't consider the connection idle You might want to read this for more details. A: Finally, I found a solution. I still don't know what the problem was. It isn't a connection issue, cause some parsers with the same IP and same network connections work normally. And I'm was still able to add data with the same script, but in a separate project file. My solution is to add 'keepalives' settings in connection: conn = psycopg2.connect( host=hostname, dbname=database, user=username, password=password, port=port_id, keepalives=1, keepalives_idle=30, keepalives_interval=10, keepalives_count=5)
server starts to closed the connection unexpectedly
I have a project with 10+ parsers and at the end have this code: ` cursor = conn.cursor() my_file = open(r'csv\file.csv') sql_statement = """ CREATE TEMP TABLE temp ( LIKE vhcl ) ON COMMIT DROP; COPY temp FROM STDIN WITH CSV HEADER DELIMITER AS ','; INSERT INTO vhcl SELECT * FROM temp ON CONFLICT (id) DO UPDATE SET name= EXCLUDED.name""" cursor.copy_expert(sql=sql_statement, file=my_file) conn.commit() cursor.close() ` Everything worked fine until a couple of weeks ago I started to get these errors: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. I noticed, that if parsers works (for example) less, than 10 minutes, I won't get those errors I tried to make a separate function, that adds data to the DB after the parser ends working. It still gives me that error. The strange thing is that I ran my parsers on my home pc, and it works fine, also, if I add data manually with the same function, but in a different file, it also works fine. I asked about banned IP for db, but it's okay. So I have no idea why I have this error. PostgreSQL log
[ "You have a network problem.\nBoth the server and the client complain that the other side unexpectedly hung up on them. So it was some misconfigured network component in the middle that cut the line. You have two options:\n\nfix the network\n\nlower tcp_keepalives_idle on the PostgreSQL client or server, so that the operating system sends \"keepalive packets\" frequently and the network doesn't consider the connection idle\n\n\nYou might want to read this for more details.\n", "Finally, I found a solution. I still don't know what the problem was. It isn't a connection issue, cause some parsers with the same IP and same network connections work normally. And I'm was still able to add data with the same script, but in a separate project file.\nMy solution is to add 'keepalives' settings in connection:\nconn = psycopg2.connect(\nhost=hostname,\ndbname=database,\nuser=username,\npassword=password,\nport=port_id,\nkeepalives=1,\nkeepalives_idle=30,\nkeepalives_interval=10,\nkeepalives_count=5)\n\n" ]
[ 0, 0 ]
[]
[]
[ "postgresql", "psycopg2", "python" ]
stackoverflow_0074611976_postgresql_psycopg2_python.txt
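For reference, the keepalive fix from the answer above wrapped into a small connection helper — the credentials are placeholders, and the numeric values are just the ones the answer used.

import psycopg2

def connect_with_keepalives():
    return psycopg2.connect(
        host="db.example.com",       # placeholder connection details
        dbname="mydb",
        user="myuser",
        password="secret",
        port=5432,
        keepalives=1,                # enable TCP keepalives on this connection
        keepalives_idle=30,          # seconds of idle time before the first probe
        keepalives_interval=10,      # seconds between probes
        keepalives_count=5,          # failed probes before the connection is dropped
    )

conn = connect_with_keepalives()
with conn, conn.cursor() as cur:
    cur.execute("SELECT 1")
    print(cur.fetchone())
conn.close()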
Q: fake_useragent module not connecting properly - IndexError: list index out of range I tried to use fake_useragent module with this block from fake_useragent import UserAgent ua = UserAgent() print(ua.random) But when the execution reached this line ua = UserAgent(), it throws this error Traceback (most recent call last): File "/home/hadi/Desktop/excel/gatewayform.py", line 191, in <module> gate = GateWay() File "/home/hadi/Desktop/excel/gatewayform.py", line 23, in __init__ ua = UserAgent() File "/usr/local/lib/python3.9/dist-packages/fake_useragent/fake.py", line 69, in __init__ self.load() File "/usr/local/lib/python3.9/dist-packages/fake_useragent/fake.py", line 75, in load self.data = load_cached( File "/usr/local/lib/python3.9/dist-packages/fake_useragent/utils.py", line 250, in load_cached update(path, use_cache_server=use_cache_server, verify_ssl=verify_ssl) File "/usr/local/lib/python3.9/dist-packages/fake_useragent/utils.py", line 245, in update write(path, load(use_cache_server=use_cache_server, verify_ssl=verify_ssl)) File "/usr/local/lib/python3.9/dist-packages/fake_useragent/utils.py", line 178, in load raise exc File "/usr/local/lib/python3.9/dist-packages/fake_useragent/utils.py", line 154, in load for item in get_browsers(verify_ssl=verify_ssl): File "/usr/local/lib/python3.9/dist-packages/fake_useragent/utils.py", line 99, in get_browsers html = html.split('<table class="w3-table-all notranslate">')[1] IndexError: list index out of range I use linux and I have installed the module using this command pip3 install fake_useragent --upgrade. Is there any solution for this issue? if not, is there a better module to use? A: There is a solution for this, from Github pull request #110. Basically, all you need to do is change one character in one line of the fake_useragent/utils.py source code. To do this on your system, open /usr/local/lib/python3.9/dist-packages/fake_useragent/utils.py† in your favorite text editor using admin privileges. Go to line 99, and change the w3 html = html.split('<table class="w3-table-all notranslate">')[1] # ^^ change this to ws: html = html.split('<table class="ws-table-all notranslate">')[1] # ^^ to this Save the file (with admin permissions), restart your Python session, and your code should work just fine. † To find the fake_useragent directory in which utils.py resides, run the following code: import fake_useragent print(fake_useragent.__file__) For example, on my Windows laptop, this printed 'C:\\Users\\mattdmo\\AppData\\Roaming\\Python\\Python310\\site-packages\\fake_useragent\\__init__.py' so the folder to open is C:\Users\mattdmo\AppData\Roaming\Python\Python310\site-packages\fake_useragent. A: I tried UserAgent(use_cache_server=False, verify_ssl=False) but didn't work. Upgrading the version to 0.1.13 worked for me. pip3 install fake-useragent==0.1.13 I tried pip install fake-useragent -U but it somehow didn't upgrade the package correctly. From the package owner: https://github.com/fake-useragent/fake-useragent/pull/136#issuecomment-1302431518
fake_useragent module not connecting properly - IndexError: list index out of range
I tried to use fake_useragent module with this block from fake_useragent import UserAgent ua = UserAgent() print(ua.random) But when the execution reached this line ua = UserAgent(), it throws this error Traceback (most recent call last): File "/home/hadi/Desktop/excel/gatewayform.py", line 191, in <module> gate = GateWay() File "/home/hadi/Desktop/excel/gatewayform.py", line 23, in __init__ ua = UserAgent() File "/usr/local/lib/python3.9/dist-packages/fake_useragent/fake.py", line 69, in __init__ self.load() File "/usr/local/lib/python3.9/dist-packages/fake_useragent/fake.py", line 75, in load self.data = load_cached( File "/usr/local/lib/python3.9/dist-packages/fake_useragent/utils.py", line 250, in load_cached update(path, use_cache_server=use_cache_server, verify_ssl=verify_ssl) File "/usr/local/lib/python3.9/dist-packages/fake_useragent/utils.py", line 245, in update write(path, load(use_cache_server=use_cache_server, verify_ssl=verify_ssl)) File "/usr/local/lib/python3.9/dist-packages/fake_useragent/utils.py", line 178, in load raise exc File "/usr/local/lib/python3.9/dist-packages/fake_useragent/utils.py", line 154, in load for item in get_browsers(verify_ssl=verify_ssl): File "/usr/local/lib/python3.9/dist-packages/fake_useragent/utils.py", line 99, in get_browsers html = html.split('<table class="w3-table-all notranslate">')[1] IndexError: list index out of range I use linux and I have installed the module using this command pip3 install fake_useragent --upgrade. Is there any solution for this issue? if not, is there a better module to use?
[ "There is a solution for this, from Github pull request #110. Basically, all you need to do is change one character in one line of the fake_useragent/utils.py source code.\nTo do this on your system, open /usr/local/lib/python3.9/dist-packages/fake_useragent/utils.py† in your favorite text editor using admin privileges. Go to line 99, and change the w3\n html = html.split('<table class=\"w3-table-all notranslate\">')[1]\n# ^^ change this\n\nto ws:\n html = html.split('<table class=\"ws-table-all notranslate\">')[1]\n# ^^ to this\n\nSave the file (with admin permissions), restart your Python session, and your code should work just fine.\n\n† To find the fake_useragent directory in which utils.py resides, run the following code:\nimport fake_useragent\nprint(fake_useragent.__file__)\n\nFor example, on my Windows laptop, this printed\n'C:\\\\Users\\\\mattdmo\\\\AppData\\\\Roaming\\\\Python\\\\Python310\\\\site-packages\\\\fake_useragent\\\\__init__.py'\n\nso the folder to open is C:\\Users\\mattdmo\\AppData\\Roaming\\Python\\Python310\\site-packages\\fake_useragent.\n", "I tried UserAgent(use_cache_server=False, verify_ssl=False) but didn't work.\nUpgrading the version to 0.1.13 worked for me.\npip3 install fake-useragent==0.1.13\n\nI tried pip install fake-useragent -U but it somehow didn't upgrade the package correctly.\nFrom the package owner: https://github.com/fake-useragent/fake-useragent/pull/136#issuecomment-1302431518\n" ]
[ 15, 0 ]
[]
[]
[ "python", "user_agent" ]
stackoverflow_0068772211_python_user_agent.txt
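A defensive pattern that sidesteps the failure above: try fake_useragent, and if it cannot build its cache (the IndexError / FakeUserAgentError cases), fall back to a hard-coded user-agent string. The fallback string and the test URL are arbitrary choices for the sketch.

import requests

try:
    from fake_useragent import UserAgent
    user_agent = UserAgent().random
except Exception:
    # Any reasonably current browser string works as a fallback.
    user_agent = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/108.0.0.0 Safari/537.36")

response = requests.get("https://httpbin.org/user-agent",
                        headers={"User-Agent": user_agent})
print(response.json())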
Q: Getting Errors When Using Google Search API on Python I'm trying to run a Google Search API on Python, specifically this one: https://github.com/abenassi/Google-Search-API. When I try to test it out by running this code on Sublime Text 3, from google import google num_page = 1 search_results = google.search("This is my query", num_page) for result in search_results: print(result.description) I get a lot of errors, as shown below. Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 1318, in do_open encode_chunked=req.has_header('Transfer-encoding')) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1239, in request self._send_request(method, url, body, headers, encode_chunked) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1285, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1234, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1026, in _send_output self.send(msg) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 964, in send self.connect() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1400, in connect server_hostname=server_hostname) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 407, in wrap_socket _context=self, _session=session) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 814, in __init__ self.do_handshake() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 1068, in do_handshake self._sslobj.do_handshake() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 689, in do_handshake self._sslobj.do_handshake() ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:777) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/utils.py", line 67, in get context=context, File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 223, in urlopen return opener.open(url, data, timeout) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 526, in open response = self._open(req, data) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 544, in _open '_open', req) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 504, in _call_chain result = func(*args) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 1361, in https_open context=self._context, check_hostname=self._check_hostname) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 1320, in do_open raise URLError(err) urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:777)> During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/utils.py", line 154, in 
load for item in get_browsers(verify_ssl=verify_ssl): File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/utils.py", line 97, in get_browsers html = get(settings.BROWSERS_STATS_PAGE, verify_ssl=verify_ssl) File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/utils.py", line 84, in get raise FakeUserAgentError('Maximum amount of retries reached') fake_useragent.errors.FakeUserAgentError: Maximum amount of retries reached Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 1318, in do_open encode_chunked=req.has_header('Transfer-encoding')) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1239, in request self._send_request(method, url, body, headers, encode_chunked) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1285, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1234, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1026, in _send_output self.send(msg) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 964, in send self.connect() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1400, in connect server_hostname=server_hostname) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 407, in wrap_socket _context=self, _session=session) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 814, in __init__ self.do_handshake() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 1068, in do_handshake self._sslobj.do_handshake() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 689, in do_handshake self._sslobj.do_handshake() ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:777) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/utils.py", line 67, in get context=context, File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 223, in urlopen return opener.open(url, data, timeout) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 526, in open response = self._open(req, data) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 544, in _open '_open', req) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 504, in _call_chain result = func(*args) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 1361, in https_open context=self._context, check_hostname=self._check_hostname) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 1320, in do_open raise URLError(err) urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:777)> During handling of the above exception, another exception occurred: Traceback (most recent call last): File 
"/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/utils.py", line 154, in load for item in get_browsers(verify_ssl=verify_ssl): File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/utils.py", line 97, in get_browsers html = get(settings.BROWSERS_STATS_PAGE, verify_ssl=verify_ssl) File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/utils.py", line 84, in get raise FakeUserAgentError('Maximum amount of retries reached') fake_useragent.errors.FakeUserAgentError: Maximum amount of retries reached During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 1318, in do_open encode_chunked=req.has_header('Transfer-encoding')) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1239, in request self._send_request(method, url, body, headers, encode_chunked) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1285, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1234, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1026, in _send_output self.send(msg) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 964, in send self.connect() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1400, in connect server_hostname=server_hostname) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 407, in wrap_socket _context=self, _session=session) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 814, in __init__ self.do_handshake() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 1068, in do_handshake self._sslobj.do_handshake() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 689, in do_handshake self._sslobj.do_handshake() ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:777) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/utils.py", line 67, in get context=context, File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 223, in urlopen return opener.open(url, data, timeout) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 526, in open response = self._open(req, data) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 544, in _open '_open', req) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 504, in _call_chain result = func(*args) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 1361, in https_open context=self._context, check_hostname=self._check_hostname) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 1320, in do_open raise URLError(err) urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:777)> 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/myname/Library/Application Support/Sublime Text 3/Packages/User/first.py", line 7, in <module> search_results = google.search("This is my query", num_page) File "/Users/myname/Library/Python/3.6/lib/python/site-packages/google/modules/standard_search.py", line 70, in search html = get_html(url) File "/Users/myname/Library/Python/3.6/lib/python/site-packages/google/modules/utils.py", line 432, in get_html ua = UserAgent() File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/fake.py", line 69, in __init__ self.load() File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/fake.py", line 78, in load verify_ssl=self.verify_ssl, File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/utils.py", line 250, in load_cached update(path, use_cache_server=use_cache_server, verify_ssl=verify_ssl) File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/utils.py", line 245, in update write(path, load(use_cache_server=use_cache_server, verify_ssl=verify_ssl)) File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/utils.py", line 189, in load verify_ssl=verify_ssl, File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/utils.py", line 84, in get raise FakeUserAgentError('Maximum amount of retries reached') fake_useragent.errors.FakeUserAgentError: Maximum amount of retries reached What do I need to do to get rid of these errors and get the Google Search API working? A: fake_useragent is only to mimic as a user agent but not official. Hence it throws error when detected. For the most recent call lost I did the following: pip3 install fake_useragent --upgrade ua = UserAgent(use_cache_server = False, verify_ssl=False) #Very important step, after this step you can write your code ua = UserAgent() #creating object of UserAgent in my case header = {'user-agent': ua.chrome} #accessing print(ua.chrome) A: This is not the official API so google is probably detecting that you are scraping their content "FakeUserAgentError". Try using the official API(you get 100 free searches a day). Then try something like this here A: I tried UserAgent(use_cache_server = False, verify_ssl=False) but didn't work. Upgrading the version to 0.1.13 worked for me. pip3 install fake-useragent==0.1.13 I tried pip install fake-useragent -U but it somehow didn't upgrade the package correctly. From the package owner: https://github.com/fake-useragent/fake-useragent/pull/136#issuecomment-1302431518
Getting Errors When Using Google Search API on Python
I'm trying to run a Google Search API on Python, specifically this one: https://github.com/abenassi/Google-Search-API. When I try to test it out by running this code on Sublime Text 3, from google import google num_page = 1 search_results = google.search("This is my query", num_page) for result in search_results: print(result.description) I get a lot of errors, as shown below. Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 1318, in do_open encode_chunked=req.has_header('Transfer-encoding')) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1239, in request self._send_request(method, url, body, headers, encode_chunked) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1285, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1234, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1026, in _send_output self.send(msg) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 964, in send self.connect() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1400, in connect server_hostname=server_hostname) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 407, in wrap_socket _context=self, _session=session) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 814, in __init__ self.do_handshake() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 1068, in do_handshake self._sslobj.do_handshake() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 689, in do_handshake self._sslobj.do_handshake() ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:777) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/utils.py", line 67, in get context=context, File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 223, in urlopen return opener.open(url, data, timeout) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 526, in open response = self._open(req, data) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 544, in _open '_open', req) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 504, in _call_chain result = func(*args) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 1361, in https_open context=self._context, check_hostname=self._check_hostname) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 1320, in do_open raise URLError(err) urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:777)> During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/utils.py", line 154, in load for item in get_browsers(verify_ssl=verify_ssl): File 
"/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/utils.py", line 97, in get_browsers html = get(settings.BROWSERS_STATS_PAGE, verify_ssl=verify_ssl) File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/utils.py", line 84, in get raise FakeUserAgentError('Maximum amount of retries reached') fake_useragent.errors.FakeUserAgentError: Maximum amount of retries reached Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 1318, in do_open encode_chunked=req.has_header('Transfer-encoding')) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1239, in request self._send_request(method, url, body, headers, encode_chunked) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1285, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1234, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1026, in _send_output self.send(msg) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 964, in send self.connect() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1400, in connect server_hostname=server_hostname) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 407, in wrap_socket _context=self, _session=session) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 814, in __init__ self.do_handshake() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 1068, in do_handshake self._sslobj.do_handshake() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 689, in do_handshake self._sslobj.do_handshake() ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:777) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/utils.py", line 67, in get context=context, File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 223, in urlopen return opener.open(url, data, timeout) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 526, in open response = self._open(req, data) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 544, in _open '_open', req) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 504, in _call_chain result = func(*args) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 1361, in https_open context=self._context, check_hostname=self._check_hostname) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 1320, in do_open raise URLError(err) urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:777)> During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/utils.py", line 154, in load for item in 
get_browsers(verify_ssl=verify_ssl): File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/utils.py", line 97, in get_browsers html = get(settings.BROWSERS_STATS_PAGE, verify_ssl=verify_ssl) File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/utils.py", line 84, in get raise FakeUserAgentError('Maximum amount of retries reached') fake_useragent.errors.FakeUserAgentError: Maximum amount of retries reached During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 1318, in do_open encode_chunked=req.has_header('Transfer-encoding')) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1239, in request self._send_request(method, url, body, headers, encode_chunked) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1285, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1234, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1026, in _send_output self.send(msg) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 964, in send self.connect() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1400, in connect server_hostname=server_hostname) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 407, in wrap_socket _context=self, _session=session) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 814, in __init__ self.do_handshake() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 1068, in do_handshake self._sslobj.do_handshake() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 689, in do_handshake self._sslobj.do_handshake() ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:777) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/utils.py", line 67, in get context=context, File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 223, in urlopen return opener.open(url, data, timeout) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 526, in open response = self._open(req, data) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 544, in _open '_open', req) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 504, in _call_chain result = func(*args) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 1361, in https_open context=self._context, check_hostname=self._check_hostname) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 1320, in do_open raise URLError(err) urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:777)> During handling of the above exception, another exception occurred: Traceback (most recent call last): File 
"/Users/myname/Library/Application Support/Sublime Text 3/Packages/User/first.py", line 7, in <module> search_results = google.search("This is my query", num_page) File "/Users/myname/Library/Python/3.6/lib/python/site-packages/google/modules/standard_search.py", line 70, in search html = get_html(url) File "/Users/myname/Library/Python/3.6/lib/python/site-packages/google/modules/utils.py", line 432, in get_html ua = UserAgent() File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/fake.py", line 69, in __init__ self.load() File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/fake.py", line 78, in load verify_ssl=self.verify_ssl, File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/utils.py", line 250, in load_cached update(path, use_cache_server=use_cache_server, verify_ssl=verify_ssl) File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/utils.py", line 245, in update write(path, load(use_cache_server=use_cache_server, verify_ssl=verify_ssl)) File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/utils.py", line 189, in load verify_ssl=verify_ssl, File "/Users/myname/Library/Python/3.6/lib/python/site-packages/fake_useragent/utils.py", line 84, in get raise FakeUserAgentError('Maximum amount of retries reached') fake_useragent.errors.FakeUserAgentError: Maximum amount of retries reached What do I need to do to get rid of these errors and get the Google Search API working?
[ "fake_useragent is only to mimic as a user agent but not official. Hence it throws error when detected. For the most recent call lost I did the following:\n\npip3 install fake_useragent --upgrade\nua = UserAgent(use_cache_server = False, verify_ssl=False) #Very important step, after this step you can write your code\nua = UserAgent() #creating object of UserAgent in my case\nheader = {'user-agent': ua.chrome} #accessing\nprint(ua.chrome)\n\n", "This is not the official API so google is probably detecting that you are scraping their content \"FakeUserAgentError\". Try using the official API(you get 100 free searches a day). Then try something like this here\n", "I tried UserAgent(use_cache_server = False, verify_ssl=False) but didn't work.\nUpgrading the version to 0.1.13 worked for me.\npip3 install fake-useragent==0.1.13\n\nI tried pip install fake-useragent -U but it somehow didn't upgrade the package correctly.\nFrom the package owner: https://github.com/fake-useragent/fake-useragent/pull/136#issuecomment-1302431518\n" ]
[ 1, 0, 0 ]
[]
[]
[ "google_search_api", "python", "python_3.x" ]
stackoverflow_0057531742_google_search_api_python_python_3.x.txt
Q: ModuleNotFoundError: No module named 'pandas.compat' I have been working with pandas all the time, but now it suddenly says: "ModuleNotFoundError: No module named 'pandas.compat'" when importing it. I didn't (knowingly) change anything. I already reinstalled it (and pandas-compat). I even created a whole new environment. I still can't import it. Does anybody have a clue what this might be? Would reinstalling anaconda help? I feel like some files in my system might be broken. Which ones would I have to delete, to give it a full "python reboot"? I saw the other threads about it, but nothing helped so far. Full error: Traceback (most recent call last): File "L:\Python_files\LOPF-KNST.py", line 12, in <module> import pypsa File "C:\Users\<username>\.conda\envs\PyPSA\lib\site-packages\pypsa\__init__.py", line 25, in <module> from . import components, descriptors File "C:\Users\<username>\.conda\envs\PyPSA\lib\site-packages\pypsa\components.py", line 31, in <module> import pandas as pd File "\\...\<username>\AppData\Roaming\Python\Python38\site-packages\pandas\__init__.py", line 22, in <module> from pandas.compat import is_numpy_dev as _is_numpy_dev ModuleNotFoundError: No module named 'pandas.compat' A: You may have several installations of pandas library. So uninstall pandas using "pip uninstall pandas". (If on linux use: " sudo apt-get purge python3-pandas") and install again. This helped solve my problem.
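For the pandas.compat record above, the traceback already hints at the "several installations" diagnosis: pandas is being loaded from the roaming user site-packages (AppData\Roaming\Python\Python38) rather than from the active conda environment. A small sketch for checking which copy Python would pick up, without importing it (paths printed will of course differ per machine):

import importlib.util
import site
import sys

# Where would "import pandas" be resolved from?
spec = importlib.util.find_spec("pandas")
print(spec.origin if spec else "pandas not found")

# The per-user site-packages directory that can shadow a conda environment:
print(site.getusersitepackages())

# Directories searched, in order:
for p in sys.path:
    print(p)

If the first path points at the user-site copy, removing that copy and reinstalling inside the environment, as the answer suggests, is the likely fix.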
ModuleNotFoundError: No module named 'pandas.compat'
I have been working with pandas all the time, but now it suddenly says: "ModuleNotFoundError: No module named 'pandas.compat'" when importing it. I didn't (knowingly) change anything. I already reinstalled it (and pandas-compat). I even created a whole new environment. I still can't import it. Does anybody have a clue what this might be? Would reinstalling anaconda help? I feel like some files in my system might be broken. Which ones would I have to delete, to give it a full "python reboot"? I saw the other threads about it, but nothing helped so far. Full error: Traceback (most recent call last): File "L:\Python_files\LOPF-KNST.py", line 12, in <module> import pypsa File "C:\Users\<username>\.conda\envs\PyPSA\lib\site-packages\pypsa\__init__.py", line 25, in <module> from . import components, descriptors File "C:\Users\<username>\.conda\envs\PyPSA\lib\site-packages\pypsa\components.py", line 31, in <module> import pandas as pd File "\\...\<username>\AppData\Roaming\Python\Python38\site-packages\pandas\__init__.py", line 22, in <module> from pandas.compat import is_numpy_dev as _is_numpy_dev ModuleNotFoundError: No module named 'pandas.compat'
[ "You may have several installations of pandas library. So uninstall pandas using\n\"pip uninstall pandas\".\n(If on linux use:\n\" sudo apt-get purge python3-pandas\")\nand install again.\nThis helped solve my problem.\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0073258906_pandas_python.txt
Q: Find number of datapoints in each range I have a data frame that looks like this data = [['A', 0.20], ['B',0.25], ['C',0.11], ['D',0.30], ['E',0.29]] df = pd.DataFrame(data, columns=['col1', 'col2']) Col1 is a primary key (each row has a unique value) The max of col2 is 1 and the min is 0. I want to find the number of datapoint in ranges 0-.30 (both 0 and 0.30 are included), 0-.29, 0-.28, and so on till 0-.01. I can use pd.cut, but the lower limit is not fixed. My lower limit is always 0. Can someone help? A: One option using numpy broadcasting: step = 0.01 up = np.arange(0, 0.3+step, step) out = pd.Series((df['col2'].to_numpy()[:,None] <= up).sum(axis=0), index=up) Output: 0.00 0 0.01 0 0.02 0 0.03 0 0.04 0 0.05 0 0.06 0 0.07 0 0.08 0 0.09 0 0.10 0 0.11 1 0.12 1 0.13 1 0.14 1 0.15 1 0.16 1 0.17 1 0.18 1 0.19 1 0.20 2 0.21 2 0.22 2 0.23 2 0.24 2 0.25 3 0.26 3 0.27 3 0.28 3 0.29 4 0.30 5 dtype: int64 With pandas.cut and cumsum: step = 0.01 up = np.arange(0, 0.3+step, step) (pd.cut(df['col2'], up, labels=up[1:].round(2)) .value_counts(sort=False).cumsum() ) Output: 0.01 0 0.02 0 0.03 0 0.04 0 0.05 0 0.06 0 0.07 0 0.08 0 0.09 0 0.1 0 0.11 1 0.12 1 0.13 1 0.14 1 0.15 1 0.16 1 0.17 1 0.18 1 0.19 1 0.2 2 0.21 2 0.22 2 0.23 2 0.24 2 0.25 3 0.26 3 0.27 3 0.28 3 0.29 4 0.3 5 Name: col2, dtype: int64
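As an alternative sketch for the cumulative-count question above (not part of the accepted answer): if col2 is sorted first, numpy.searchsorted gives the same counts without building the full comparison matrix, up to the usual floating-point caveats of np.arange.

import numpy as np
import pandas as pd

data = [['A', 0.20], ['B', 0.25], ['C', 0.11], ['D', 0.30], ['E', 0.29]]
df = pd.DataFrame(data, columns=['col1', 'col2'])

step = 0.01
up = np.arange(0, 0.3 + step, step)

vals = np.sort(df['col2'].to_numpy())
# number of values <= each upper bound (the lower bound is fixed at 0)
counts = np.searchsorted(vals, up, side='right')
out = pd.Series(counts, index=up.round(2))
print(out)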
Find number of datapoints in each range
I have a data frame that looks like this data = [['A', 0.20], ['B',0.25], ['C',0.11], ['D',0.30], ['E',0.29]] df = pd.DataFrame(data, columns=['col1', 'col2']) Col1 is a primary key (each row has a unique value) The max of col2 is 1 and the min is 0. I want to find the number of datapoint in ranges 0-.30 (both 0 and 0.30 are included), 0-.29, 0-.28, and so on till 0-.01. I can use pd.cut, but the lower limit is not fixed. My lower limit is always 0. Can someone help?
[ "One option using numpy broadcasting:\nstep = 0.01\nup = np.arange(0, 0.3+step, step)\n\nout = pd.Series((df['col2'].to_numpy()[:,None] <= up).sum(axis=0), index=up)\n\nOutput:\n0.00 0\n0.01 0\n0.02 0\n0.03 0\n0.04 0\n0.05 0\n0.06 0\n0.07 0\n0.08 0\n0.09 0\n0.10 0\n0.11 1\n0.12 1\n0.13 1\n0.14 1\n0.15 1\n0.16 1\n0.17 1\n0.18 1\n0.19 1\n0.20 2\n0.21 2\n0.22 2\n0.23 2\n0.24 2\n0.25 3\n0.26 3\n0.27 3\n0.28 3\n0.29 4\n0.30 5\ndtype: int64\n\nWith pandas.cut and cumsum:\nstep = 0.01\n\nup = np.arange(0, 0.3+step, step)\n(pd.cut(df['col2'], up, labels=up[1:].round(2))\n .value_counts(sort=False).cumsum()\n)\n\nOutput:\n0.01 0\n0.02 0\n0.03 0\n0.04 0\n0.05 0\n0.06 0\n0.07 0\n0.08 0\n0.09 0\n0.1 0\n0.11 1\n0.12 1\n0.13 1\n0.14 1\n0.15 1\n0.16 1\n0.17 1\n0.18 1\n0.19 1\n0.2 2\n0.21 2\n0.22 2\n0.23 2\n0.24 2\n0.25 3\n0.26 3\n0.27 3\n0.28 3\n0.29 4\n0.3 5\nName: col2, dtype: int64\n\n" ]
[ 2 ]
[]
[]
[ "numpy", "pandas", "python" ]
stackoverflow_0074624685_numpy_pandas_python.txt
Q: How to convert list of list binary into list of decimal? LIST OF LIST BIN DIVIDED INTO 8 : [[0, 1, 1, 0, 0, 1, 0, 1], [0, 1, 1, 1, 0, 1, 1, 1]] the output I want is: [101, 119] A: This is more complex but significantly faster than any kind of string manipulation as it's essentially just integer arithmetic. from timeit import timeit lob = [[0, 1, 1, 0, 0, 1, 0, 1], [0, 1, 1, 1, 0, 1, 1, 1]] def v1(): result = [] for e in lob: r = 0 for _e in e: r = r * 2 + _e result.append(r) return result def v2(): return [int(''.join([str(y) for y in x]), 2) for x in lob] assert v1() == v2() for func in v1, v2: print(func.__name__, timeit(func)) Output: v1 0.6906622060014342 v2 2.173182999999881
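A third variant for the binary-to-decimal question, offered as an un-benchmarked sketch alongside v1/v2 above: treat each inner list as a row of a NumPy array and take a dot product with powers of two.

import numpy as np

lob = [[0, 1, 1, 0, 0, 1, 0, 1], [0, 1, 1, 1, 0, 1, 1, 1]]

bits = np.array(lob)                                   # shape (n_rows, 8)
weights = 1 << np.arange(bits.shape[1] - 1, -1, -1)    # [128, 64, ..., 2, 1]
result = (bits @ weights).tolist()
print(result)                                          # [101, 119]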
How to convert list of list binary into list of decimal?
LIST OF LIST BIN DIVIDED INTO 8 : [[0, 1, 1, 0, 0, 1, 0, 1], [0, 1, 1, 1, 0, 1, 1, 1]] the output I want is: [101, 119]
[ "This is more complex but significantly faster than any kind of string manipulation as it's essentially just integer arithmetic.\nfrom timeit import timeit\n\nlob = [[0, 1, 1, 0, 0, 1, 0, 1], [0, 1, 1, 1, 0, 1, 1, 1]]\n\ndef v1():\n result = []\n for e in lob:\n r = 0\n for _e in e:\n r = r * 2 + _e\n result.append(r)\n return result\n\ndef v2():\n return [int(''.join([str(y) for y in x]), 2) for x in lob]\n\nassert v1() == v2()\n\nfor func in v1, v2:\n print(func.__name__, timeit(func))\n\nOutput:\nv1 0.6906622060014342\nv2 2.173182999999881\n\n" ]
[ 2 ]
[]
[]
[ "arrays", "list", "python" ]
stackoverflow_0074624691_arrays_list_python.txt
Q: How to double for loop over two dataframes I have dataframe consisting of unknown places, just a set of latitude and longitudes. This list contains of a lot of places that almost have the same coordinates. I want to create a new dataframe with 'filtered unknown places', where places that are almost the same are merged into one place. For each 'filtered unknown place' we keep track of a counter indicating the number of unknown places it contains. I tried to solve this with two for loops; first looping over the unknown places and within that for loop looping over the filtered unknown places, see below. accuracy = 0.2 #km df_unknown_places_filtered = pd.DataFrame(columns = ['GpsLatitude', 'GpsLongitude', 'Count']) for i, row in df_unknown_places.iterrows(): min_dist = 999999 closest = 0 for j, row2 in df_unknown_places_filtered.iterrows(): dist = self.distance(row['GpsLatitude'], row['GpsLongitude'], row2['GpsLatitude'], row2['GpsLongitude']) if dist < min_dist: min_dist = dist closest = j if min_dist < accuracy: current_count = df_unknown_places_filtered.at[closest, 'Count'] df_unknown_places_filtered.at[closest,'Count'] = current_count + 1 else: row_to_insert = {'GpsLatitude':row['GpsLatitude'], 'GpsLongitude':row['GpsLongitude'], 'Count': 1 } df_unknown_places_filtered = pd.concat([df_unknown_places_filtered, pd.DataFrame.from_records([row_to_insert])], axis = 0) It seems however that for the second iterrows the value of j is not updating and I have no idea why. Anyone an idea what I do wrong? A: Your df_unknown_places_filtered has no rows.
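One plausible reason j appears stuck in the question above (beyond the filtered frame starting out empty, as the answer notes) is that pd.DataFrame.from_records([row_to_insert]) always carries index 0, and pd.concat keeps that label by default, so every appended row of df_unknown_places_filtered ends up labelled 0 and iterrows() keeps yielding j == 0. A hedged sketch of the fix, with made-up coordinates, is passing ignore_index=True:

import pandas as pd

df_filtered = pd.DataFrame(columns=['GpsLatitude', 'GpsLongitude', 'Count'])
row_to_insert = {'GpsLatitude': 52.0, 'GpsLongitude': 4.0, 'Count': 1}

for _ in range(2):
    # ignore_index=True renumbers the result 0, 1, 2, ...;
    # without it every appended single-row frame keeps its own index 0,
    # so iterrows() would report j == 0 for every row.
    df_filtered = pd.concat(
        [df_filtered, pd.DataFrame.from_records([row_to_insert])],
        axis=0, ignore_index=True)

print(df_filtered.index.tolist())   # [0, 1]

With unique labels, df_unknown_places_filtered.at[closest, 'Count'] then addresses exactly one row.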
How to double for loop over two dataframes
I have dataframe consisting of unknown places, just a set of latitude and longitudes. This list contains of a lot of places that almost have the same coordinates. I want to create a new dataframe with 'filtered unknown places', where places that are almost the same are merged into one place. For each 'filtered unknown place' we keep track of a counter indicating the number of unknown places it contains. I tried to solve this with two for loops; first looping over the unknown places and within that for loop looping over the filtered unknown places, see below. accuracy = 0.2 #km df_unknown_places_filtered = pd.DataFrame(columns = ['GpsLatitude', 'GpsLongitude', 'Count']) for i, row in df_unknown_places.iterrows(): min_dist = 999999 closest = 0 for j, row2 in df_unknown_places_filtered.iterrows(): dist = self.distance(row['GpsLatitude'], row['GpsLongitude'], row2['GpsLatitude'], row2['GpsLongitude']) if dist < min_dist: min_dist = dist closest = j if min_dist < accuracy: current_count = df_unknown_places_filtered.at[closest, 'Count'] df_unknown_places_filtered.at[closest,'Count'] = current_count + 1 else: row_to_insert = {'GpsLatitude':row['GpsLatitude'], 'GpsLongitude':row['GpsLongitude'], 'Count': 1 } df_unknown_places_filtered = pd.concat([df_unknown_places_filtered, pd.DataFrame.from_records([row_to_insert])], axis = 0) It seems however that for the second iterrows the value of j is not updating and I have no idea why. Anyone an idea what I do wrong?
[ "Your df_unknown_places_filtered has no rows.\n" ]
[ 0 ]
[]
[]
[ "dataframe", "for_loop", "python" ]
stackoverflow_0074624784_dataframe_for_loop_python.txt
Q: Integration of a curve generated using matplotlib I have generated a graph using basic function - plt.plot(tm, o1) tm is list of all x coordinates and o1 is a list of all y coordinates NOTE there is no specific function such as y=f(x), rather a certain y value remains constant for a given range of x.. see figure for clarity My question is how to integrate this function, either using the matplotlib figure or using the lists (tm and o1) A: The integral corresponds to computing the area under the curve. The most easy way to compute (or approximate) the integral "numerically" is using the rectangle rule which is basically approximating the area under the curve by summing area of rectangles (see https://en.wikipedia.org/wiki/Numerical_integration#Quadrature_rules_based_on_interpolating_functions). Practically in your case, it quite straightforward since it is a step function. First, I recomment to use numpy arrays instead of list (more handy for numerical computing): import matplotlib.pyplot as plt import numpy as np x = np.array([0,1,3,4,6,7,8,11,13,15]) y = np.array([8,5,2,2,2,5,6,5,9,9]) plt.plot(x,y) Then, we compute the width of rectangles using np.diff(): w = np.diff(x) Then, the height of the same rectangles (multiple possibilities exist): h = y[:-1] Here I chose the 2nd value of each two successive y values. the top right angle of rectangle is on the curve. You can choose the mean value of each two successive y values h = (y[1:]+y[:-1])/2 in which the middle of the top of the rectangle coincide with the curve. Then , you will need to multiply and sum: area = (w*h).sum()
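For comparison with the rectangle rule in the answer above: NumPy also ships a trapezoidal-rule helper. For a true step function the rectangle sum is the exact area, while the trapezoidal rule linearly interpolates between points, so the two will generally differ slightly. A small sketch using the same illustrative data:

import numpy as np

x = np.array([0, 1, 3, 4, 6, 7, 8, 11, 13, 15])
y = np.array([8, 5, 2, 2, 2, 5, 6, 5, 9, 9])

# Rectangle rule, height taken as the left endpoint of each interval
area_rect = (np.diff(x) * y[:-1]).sum()

# Trapezoidal rule (np.trapz; renamed np.trapezoid from NumPy 2.0 on)
area_trap = np.trapz(y, x)

print(area_rect, area_trap)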
Integration of a curve generated using matplotlib
I have generated a graph using basic function - plt.plot(tm, o1) tm is list of all x coordinates and o1 is a list of all y coordinates NOTE there is no specific function such as y=f(x), rather a certain y value remains constant for a given range of x.. see figure for clarity My question is how to integrate this function, either using the matplotlib figure or using the lists (tm and o1)
[ "The integral corresponds to computing the area under the curve.\nThe most easy way to compute (or approximate) the integral \"numerically\" is using the rectangle rule which is basically approximating the area under the curve by summing area of rectangles (see https://en.wikipedia.org/wiki/Numerical_integration#Quadrature_rules_based_on_interpolating_functions).\nPractically in your case, it quite straightforward since it is a step function.\nFirst, I recomment to use numpy arrays instead of list (more handy for numerical computing):\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.array([0,1,3,4,6,7,8,11,13,15])\ny = np.array([8,5,2,2,2,5,6,5,9,9])\n\nplt.plot(x,y)\n\nThen, we compute the width of rectangles using np.diff():\nw = np.diff(x)\n\nThen, the height of the same rectangles (multiple possibilities exist):\nh = y[:-1]\n\nHere I chose the 2nd value of each two successive y values. the top right angle of rectangle is on the curve. You can choose the mean value of each two successive y values h = (y[1:]+y[:-1])/2 in which the middle of the top of the rectangle coincide with the curve.\nThen , you will need to multiply and sum:\narea = (w*h).sum()\n\n" ]
[ 1 ]
[]
[]
[ "graph", "integral", "list", "matplotlib", "python" ]
stackoverflow_0074624574_graph_integral_list_matplotlib_python.txt
Q: Using click.command to make a function as a command I am trying to make the function log into a command using the following code inside simple.py: import click @click.command() @click.option('-v', '--verbose', count=True) def log(verbose): click.echo(f"Verbosity: {verbose}") When I type the following on the command terminal:log -vvv , I get an error as : Command 'log' not found, but there are 16 similar ones. @click.command should have converted the function log into a command? But, it doesn't work here. Could someone explain,please? Thanks! I have tried the following commands: log -vvv Command 'log' not found, but there are 16 similar ones. python3 simple.py log Usage: simple.py [OPTIONS] Try 'simple.py --help' for help. Error: Got unexpected extra argument (log) Could someone please explain what does @click.command() actually do and how's it different from running simple.py. The documentation does not make it very clear to me as well. Thanks! A: import click @click.command() @click.option('-v', '--verbose', count=True) def log(verbose): click.echo(f"Verbosity: {verbose}") if __name__ == '__main__': log() Then calling it like $ python simple.py Verbosity: 0 $ python simple.py -v Verbosity: 1 The way you try to run it, suggest you think about command group, i.e. nesting commands import click @click.group() def cli(): pass @cli.command('log') @click.option('-v', '--verbose', count=True) def log(verbose): click.echo(f"Verbosity: {verbose}") @cli.command('greet') def greet(): click.echo("Hello") if __name__ == '__main__': cli() Then $ python simple.py greet Hello $ python simple.py log -v -v # or just -vv Verbosity: 2 Next step would be setuptools integration, etc.
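To make the bare shell invocation log -vvv from the question work (rather than python simple.py ...), the usual route is the setuptools integration mentioned at the end of the answer: declare a console-script entry point and pip-install the package. The project name below is an assumption for illustration; it assumes simple.py defines the click command log at module level as in the question.

# setup.py -- minimal sketch; "simple" must be importable (simple.py next to it)
from setuptools import setup

setup(
    name='simple-cli',
    version='0.1',
    py_modules=['simple'],
    install_requires=['click'],
    entry_points={
        'console_scripts': [
            # shell command "log" -> the click command object simple.log
            'log = simple:log',
        ],
    },
)

After pip install -e . in that directory, log -vvv resolves to the click command instead of producing the shell's "Command not found" message.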
Using click.command to make a function as a command
I am trying to make the function log into a command using the following code inside simple.py: import click @click.command() @click.option('-v', '--verbose', count=True) def log(verbose): click.echo(f"Verbosity: {verbose}") When I type the following on the command terminal:log -vvv , I get an error as : Command 'log' not found, but there are 16 similar ones. @click.command should have converted the function log into a command? But, it doesn't work here. Could someone explain,please? Thanks! I have tried the following commands: log -vvv Command 'log' not found, but there are 16 similar ones. python3 simple.py log Usage: simple.py [OPTIONS] Try 'simple.py --help' for help. Error: Got unexpected extra argument (log) Could someone please explain what does @click.command() actually do and how's it different from running simple.py. The documentation does not make it very clear to me as well. Thanks!
[ "import click\n\[email protected]()\[email protected]('-v', '--verbose', count=True)\ndef log(verbose):\n click.echo(f\"Verbosity: {verbose}\")\n\nif __name__ == '__main__':\n log()\n\nThen calling it like\n$ python simple.py \nVerbosity: 0\n\n$ python simple.py -v\nVerbosity: 1\n\nThe way you try to run it, suggest you think about command group, i.e. nesting commands\nimport click\n\[email protected]()\ndef cli():\n pass\n\[email protected]('log')\[email protected]('-v', '--verbose', count=True)\ndef log(verbose):\n click.echo(f\"Verbosity: {verbose}\")\n\[email protected]('greet')\ndef greet():\n click.echo(\"Hello\")\n\nif __name__ == '__main__':\n cli()\n\nThen\n$ python simple.py greet\nHello\n\n$ python simple.py log -v -v # or just -vv\nVerbosity: 2\n\nNext step would be setuptools integration, etc.\n" ]
[ 0 ]
[]
[]
[ "command", "python", "python_click" ]
stackoverflow_0074624467_command_python_python_click.txt
Q: Enable Python to Connect to MySQL via SSH Tunnelling I'm using MySqldb with Python 2.7 to allow Python to make connections to another MySQL server import MySQLdb db = MySQLdb.connect(host="sql.domain.com", user="dev", passwd="*******", db="appdb") Instead of connecting normally like this, how can the connection be made through a SSH tunnel using SSH key pairs? The SSH tunnel should ideally be opened by Python. The SSH tunnel host and the MySQL server are the same machine. A: Only this worked for me import pymysql import paramiko import pandas as pd from paramiko import SSHClient from sshtunnel import SSHTunnelForwarder from os.path import expanduser home = expanduser('~') mypkey = paramiko.RSAKey.from_private_key_file(home + pkeyfilepath) # if you want to use ssh password use - ssh_password='your ssh password', bellow sql_hostname = 'sql_hostname' sql_username = 'sql_username' sql_password = 'sql_password' sql_main_database = 'db_name' sql_port = 3306 ssh_host = 'ssh_hostname' ssh_user = 'ssh_username' ssh_port = 22 sql_ip = '1.1.1.1.1' with SSHTunnelForwarder( (ssh_host, ssh_port), ssh_username=ssh_user, ssh_pkey=mypkey, remote_bind_address=(sql_hostname, sql_port)) as tunnel: conn = pymysql.connect(host='127.0.0.1', user=sql_username, passwd=sql_password, db=sql_main_database, port=tunnel.local_bind_port) query = '''SELECT VERSION();''' data = pd.read_sql_query(query, conn) conn.close() A: I'm guessing you'll need port forwarding. I recommend sshtunnel.SSHTunnelForwarder import mysql.connector import sshtunnel with sshtunnel.SSHTunnelForwarder( (_host, _ssh_port), ssh_username=_username, ssh_password=_password, remote_bind_address=(_remote_bind_address, _remote_mysql_port), local_bind_address=(_local_bind_address, _local_mysql_port) ) as tunnel: connection = mysql.connector.connect( user=_db_user, password=_db_password, host=_local_bind_address, database=_db_name, port=_local_mysql_port) ... A: from sshtunnel import SSHTunnelForwarder import pymysql import pandas as pd tunnel = SSHTunnelForwarder(('SSH_HOST', 22), ssh_password=SSH_PASS, ssh_username=SSH_UNAME, remote_bind_address=(DB_HOST, 3306)) tunnel.start() conn = pymysql.connect(host='127.0.0.1', user=DB_UNAME, passwd=DB_PASS, port=tunnel.local_bind_port) data = pd.read_sql_query("SHOW DATABASES;", conn) credits to https://www.reddit.com/r/learnpython/comments/53wph1/connecting_to_a_mysql_database_in_a_python_script/ A: If your private key file is encrypted, this is what worked for me: mypkey = paramiko.RSAKey.from_private_key_file(<<file location>>, password='password') sql_hostname = 'sql_hostname' sql_username = 'sql_username' sql_password = 'sql_password' sql_main_database = 'sql_main_database' sql_port = 3306 ssh_host = 'ssh_host' ssh_user = 'ssh_user' ssh_port = 22 with SSHTunnelForwarder( (ssh_host, ssh_port), ssh_username=ssh_user, ssh_pkey=mypkey, ssh_password='ssh_password', remote_bind_address=(sql_hostname, sql_port)) as tunnel: conn = pymysql.connect(host='localhost', user=sql_username, passwd=sql_password, db=sql_main_database, port=tunnel.local_bind_port) query = '''SELECT VERSION();''' data = pd.read_sql_query(query, conn) print(data) conn.close() A: You may only write the path to the private key file: ssh_pkey='/home/userName/.ssh/id_ed25519' (documentation is here: https://sshtunnel.readthedocs.io/en/latest/). If you use mysql.connector from Oracle you must use a construction cnx = mysql.connector.MySQLConnection(... Important: a construction cnx = mysql.connector.connect(... does not work via an SSh! 
It is a bug. (The documentation is here: https://dev.mysql.com/doc/connector-python/en/connector-python-connectargs.html). Also, your SQL statement must be ideal. In case of an error on SQL server side, you do not receive an error message from SQL-server. import sshtunnel import numpy as np with sshtunnel.SSHTunnelForwarder(ssh_address_or_host='ssh_host', ssh_username="ssh_username", ssh_pkey='/home/userName/.ssh/id_ed25519', remote_bind_address=('localhost', 3306), ) as tunnel: cnx = mysql.connector.MySQLConnection(user='sql_username', password='sql_password', host='127.0.0.1', database='db_name', port=tunnel.local_bind_port) cursor = cnx.cursor() cursor.execute('SELECT * FROM db_name.tableName;') arr = np.array(cursor.fetchall()) cursor.close() cnx.close() A: This works for me: import mysql.connector import sshtunnel with sshtunnel.SSHTunnelForwarder( ('ip-of-ssh-server', 'port-in-number-format'), ssh_username = 'ssh-username', ssh_password = 'ssh-password', remote_bind_address = ('127.0.0.1', 3306) ) as tunnel: connection = mysql.connector.connect( user = 'database-username', password = 'database-password', host = '127.0.0.1', port = tunnel.local_bind_port, database = 'databasename', ) mycursor = connection.cursor() query = "SELECT * FROM datos" mycursor.execute(query) A: Paramiko is the best python module to do ssh tunneling. Check out the code here: https://github.com/paramiko/paramiko/blob/master/demos/forward.py As said in comments this one works perfect. SSH Tunnel for Python MySQLdb connection A: Best practice is to parameterize the connection variables. Here is how I have implemented. Works like charm! import mysql.connector import sshtunnel import pandas as pd import configparser config = configparser.ConfigParser() config.read('c:/work/tmf/data_model/tools/config.ini') ssh_host = config['db_qa01']['SSH_HOST'] ssh_port = int(config['db_qa01']['SSH_PORT']) ssh_username = config['db_qa01']['SSH_USER'] ssh_pkey = config['db_qa01']['SSH_PKEY'] sql_host = config['db_qa01']['HOST'] sql_port = int(config['db_qa01']['PORT']) sql_username = config['db_qa01']['USER'] sql_password = config['db_qa01']['PASSWORD'] with sshtunnel.SSHTunnelForwarder( (ssh_host,ssh_port), ssh_username=ssh_username, ssh_pkey=ssh_pkey, remote_bind_address=(sql_host, sql_port)) as tunnel: connection = mysql.connector.connect( host='127.0.0.1', port=tunnel.local_bind_port, user=sql_username, password=sql_password) query = 'select version();' data = pd.read_sql_query(query, connection) print(data) connection.close() A: Someone said this in another comment. If you use the python mysql.connector from Oracle then you must use a construction cnx = mysql.connector.MySQLConnection(.... Important: a construction cnx = mysql.connector.connect(... does not work via an SSH! This bug cost me a whole day trying to understand why connections were being dropped by the remote server: with sshtunnel.SSHTunnelForwarder( (ssh_host,ssh_port), ssh_username=ssh_username, ssh_pkey=ssh_pkey, remote_bind_address=(sql_host, sql_port)) as tunnel: connection = mysql.connector.MySQLConnection( host='127.0.0.1', port=tunnel.local_bind_port, user=sql_username, password=sql_password) query = 'select version();' data = pd.read_sql_query(query, connection) print(data) connection.close()
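None of the answers above use SQLAlchemy, so here is a hedged sketch combining sshtunnel with an SQLAlchemy engine, which is convenient when pandas.read_sql_query is involved; the hostnames, key path, and credentials are placeholders only.

import pandas as pd
import sqlalchemy
from sshtunnel import SSHTunnelForwarder

with SSHTunnelForwarder(
        ('ssh.example.com', 22),
        ssh_username='ssh_user',
        ssh_pkey='/home/user/.ssh/id_rsa',
        remote_bind_address=('127.0.0.1', 3306)) as tunnel:

    # The local end of the tunnel forwards to MySQL on the remote host.
    engine = sqlalchemy.create_engine(
        f"mysql+pymysql://db_user:db_password@127.0.0.1:{tunnel.local_bind_port}/appdb"
    )
    data = pd.read_sql_query("SELECT VERSION();", engine)
    print(data)
    engine.dispose()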
Enable Python to Connect to MySQL via SSH Tunnelling
I'm using MySqldb with Python 2.7 to allow Python to make connections to another MySQL server import MySQLdb db = MySQLdb.connect(host="sql.domain.com", user="dev", passwd="*******", db="appdb") Instead of connecting normally like this, how can the connection be made through a SSH tunnel using SSH key pairs? The SSH tunnel should ideally be opened by Python. The SSH tunnel host and the MySQL server are the same machine.
[ "Only this worked for me\nimport pymysql\nimport paramiko\nimport pandas as pd\nfrom paramiko import SSHClient\nfrom sshtunnel import SSHTunnelForwarder\nfrom os.path import expanduser\n\nhome = expanduser('~')\nmypkey = paramiko.RSAKey.from_private_key_file(home + pkeyfilepath)\n# if you want to use ssh password use - ssh_password='your ssh password', bellow\n\nsql_hostname = 'sql_hostname'\nsql_username = 'sql_username'\nsql_password = 'sql_password'\nsql_main_database = 'db_name'\nsql_port = 3306\nssh_host = 'ssh_hostname'\nssh_user = 'ssh_username'\nssh_port = 22\nsql_ip = '1.1.1.1.1'\n\nwith SSHTunnelForwarder(\n (ssh_host, ssh_port),\n ssh_username=ssh_user,\n ssh_pkey=mypkey,\n remote_bind_address=(sql_hostname, sql_port)) as tunnel:\n conn = pymysql.connect(host='127.0.0.1', user=sql_username,\n passwd=sql_password, db=sql_main_database,\n port=tunnel.local_bind_port)\n query = '''SELECT VERSION();'''\n data = pd.read_sql_query(query, conn)\n conn.close()\n\n", "I'm guessing you'll need port forwarding. I recommend sshtunnel.SSHTunnelForwarder \nimport mysql.connector\nimport sshtunnel\n\nwith sshtunnel.SSHTunnelForwarder(\n (_host, _ssh_port),\n ssh_username=_username,\n ssh_password=_password,\n remote_bind_address=(_remote_bind_address, _remote_mysql_port),\n local_bind_address=(_local_bind_address, _local_mysql_port)\n) as tunnel:\n connection = mysql.connector.connect(\n user=_db_user,\n password=_db_password,\n host=_local_bind_address,\n database=_db_name,\n port=_local_mysql_port)\n ...\n\n", "from sshtunnel import SSHTunnelForwarder\nimport pymysql\nimport pandas as pd\n\ntunnel = SSHTunnelForwarder(('SSH_HOST', 22), ssh_password=SSH_PASS, ssh_username=SSH_UNAME,\n remote_bind_address=(DB_HOST, 3306)) \ntunnel.start()\nconn = pymysql.connect(host='127.0.0.1', user=DB_UNAME, passwd=DB_PASS, port=tunnel.local_bind_port)\ndata = pd.read_sql_query(\"SHOW DATABASES;\", conn)\n\ncredits to https://www.reddit.com/r/learnpython/comments/53wph1/connecting_to_a_mysql_database_in_a_python_script/\n", "If your private key file is encrypted, this is what worked for me:\n mypkey = paramiko.RSAKey.from_private_key_file(<<file location>>, password='password')\n sql_hostname = 'sql_hostname'\n sql_username = 'sql_username'\n sql_password = 'sql_password'\n sql_main_database = 'sql_main_database'\n sql_port = 3306\n ssh_host = 'ssh_host'\n ssh_user = 'ssh_user'\n ssh_port = 22\n\n\n with SSHTunnelForwarder(\n (ssh_host, ssh_port),\n ssh_username=ssh_user,\n ssh_pkey=mypkey,\n ssh_password='ssh_password',\n remote_bind_address=(sql_hostname, sql_port)) as tunnel:\n conn = pymysql.connect(host='localhost', user=sql_username,\n passwd=sql_password, db=sql_main_database,\n port=tunnel.local_bind_port)\n query = '''SELECT VERSION();'''\n data = pd.read_sql_query(query, conn)\n print(data)\n conn.close()\n\n", "You may only write the path to the private key file: ssh_pkey='/home/userName/.ssh/id_ed25519' (documentation is here: https://sshtunnel.readthedocs.io/en/latest/).\nIf you use mysql.connector from Oracle you must use a construction\ncnx = mysql.connector.MySQLConnection(...\nImportant: a construction \ncnx = mysql.connector.connect(...\ndoes not work via an SSh! It is a bug.\n(The documentation is here: https://dev.mysql.com/doc/connector-python/en/connector-python-connectargs.html).\nAlso, your SQL statement must be ideal. 
In case of an error on SQL server side, you do not receive an error message from SQL-server.\nimport sshtunnel\nimport numpy as np\n\nwith sshtunnel.SSHTunnelForwarder(ssh_address_or_host='ssh_host',\n ssh_username=\"ssh_username\",\n ssh_pkey='/home/userName/.ssh/id_ed25519',\n remote_bind_address=('localhost', 3306),\n ) as tunnel:\n cnx = mysql.connector.MySQLConnection(user='sql_username',\n password='sql_password',\n host='127.0.0.1',\n database='db_name',\n port=tunnel.local_bind_port)\n cursor = cnx.cursor()\n cursor.execute('SELECT * FROM db_name.tableName;')\n arr = np.array(cursor.fetchall())\n cursor.close()\n cnx.close()\n\n", "This works for me:\nimport mysql.connector\nimport sshtunnel\n\nwith sshtunnel.SSHTunnelForwarder(\n ('ip-of-ssh-server', 'port-in-number-format'),\n ssh_username = 'ssh-username',\n ssh_password = 'ssh-password',\n remote_bind_address = ('127.0.0.1', 3306)\n) as tunnel:\n connection = mysql.connector.connect(\n user = 'database-username',\n password = 'database-password',\n host = '127.0.0.1',\n port = tunnel.local_bind_port,\n database = 'databasename',\n )\n mycursor = connection.cursor()\n query = \"SELECT * FROM datos\"\n mycursor.execute(query)\n\n", "Paramiko is the best python module to do ssh tunneling. Check out the code here:\nhttps://github.com/paramiko/paramiko/blob/master/demos/forward.py\nAs said in comments this one works perfect.\nSSH Tunnel for Python MySQLdb connection\n", "Best practice is to parameterize the connection variables.\nHere is how I have implemented. Works like charm!\nimport mysql.connector\nimport sshtunnel\nimport pandas as pd\nimport configparser\n\nconfig = configparser.ConfigParser()\nconfig.read('c:/work/tmf/data_model/tools/config.ini')\n\nssh_host = config['db_qa01']['SSH_HOST']\nssh_port = int(config['db_qa01']['SSH_PORT'])\nssh_username = config['db_qa01']['SSH_USER']\nssh_pkey = config['db_qa01']['SSH_PKEY']\nsql_host = config['db_qa01']['HOST']\nsql_port = int(config['db_qa01']['PORT'])\nsql_username = config['db_qa01']['USER']\nsql_password = config['db_qa01']['PASSWORD']\n\nwith sshtunnel.SSHTunnelForwarder(\n (ssh_host,ssh_port),\n ssh_username=ssh_username,\n ssh_pkey=ssh_pkey,\n remote_bind_address=(sql_host, sql_port)) as tunnel:\n connection = mysql.connector.connect(\n host='127.0.0.1',\n port=tunnel.local_bind_port,\n user=sql_username,\n password=sql_password)\n query = 'select version();'\n data = pd.read_sql_query(query, connection)\n print(data)\n connection.close()\n\n", "Someone said this in another comment. If you use the python mysql.connector from Oracle then you must use a construction cnx = mysql.connector.MySQLConnection(....\nImportant: a construction cnx = mysql.connector.connect(... does not work via an SSH! This bug cost me a whole day trying to understand why connections were being dropped by the remote server:\nwith sshtunnel.SSHTunnelForwarder(\n (ssh_host,ssh_port),\n ssh_username=ssh_username,\n ssh_pkey=ssh_pkey,\n remote_bind_address=(sql_host, sql_port)) as tunnel:\n connection = mysql.connector.MySQLConnection(\n host='127.0.0.1',\n port=tunnel.local_bind_port,\n user=sql_username,\n password=sql_password)\n query = 'select version();'\n data = pd.read_sql_query(query, connection)\n print(data)\n connection.close()\n\n" ]
[ 42, 30, 9, 5, 4, 1, 0, 0, 0 ]
[]
[]
[ "mysql", "python", "python_2.7", "ssh" ]
stackoverflow_0021903411_mysql_python_python_2.7_ssh.txt
Q: Python3 easysnmp querying multiple switches results in a random timeout for some random switches I've got 3 switches I'm trying to get SNMP data from. Every switch does respond from time to time, but which switches respond depends upon the order I'm querying them. I'm using easysnmp.Session. My code (passwords obviously not shown): import easysnmp switches_dns1 = { "switch1": "switch-mBvk1-1.obis.ns.nl", "switch2": "switch-mBvk1-2.obis.ns.nl", "switch3": "switch-abv5-1.obis.ns.nl" } switches_dns2 = { "switch3": "switch-abv5-1.obis.ns.nl", "switch1": "switch-mBvk1-1.obis.ns.nl", "switch2": "switch-mBvk1-2.obis.ns.nl" } switches_dns3 = { "switch2": "switch-mBvk1-2.obis.ns.nl", "switch3": "switch-abv5-1.obis.ns.nl", "switch1": "switch-mBvk1-1.obis.ns.nl" } switches_dns4 = { "switch2": "switch-abv5-1.obis.ns.nl", "switch3": "switch-mBvk1-2.obis.ns.nl", "switch1": "switch-mBvk1-1.obis.ns.nl" } print("====> Testcase 1") for switch, hostname_dns in switches_dns1.items(): print(switch) try: snmp_session = easysnmp.Session(hostname=hostname_dns, security_username='test_user', auth_protocol='MD5', auth_password='blahblahblah', version=3, security_level='auth_with_privacy', privacy_password='blahblahblah', privacy_protocol='DES') result=snmp_session.get('.1.3.6.1.4.1.37072.302.3.1.1.3.6.0') print("Switch: {}, Result: {}".format(switch, result)) except easysnmp.exceptions.EasySNMPTimeoutError as err: print("Switch: {}, result: TIMEOUT ERROR".format(switch)) When running this script for all for dictionaries, the only thing changing being the order of the switches queried I'm getting these results: ====> Testcase 1 switch1 Switch: switch1, Result: <SNMPVariable value='10.160.59.131' (oid='enterprises.37072.302.3.1.1.3.6.0', oid_index='', snmp_type='IPADDR')> switch2 Switch: switch2, Result: <SNMPVariable value='10.160.59.132' (oid='enterprises.37072.302.3.1.1.3.6.0', oid_index='', snmp_type='IPADDR')> switch3 Switch: switch3, result: TIMEOUT ERROR [root@tesu john.roede]# python3 test2.py ====> Testcase 2 switch3 Switch: switch3, Result: <SNMPVariable value='10.160.59.133' (oid='enterprises.37072.302.3.1.1.3.6.0', oid_index='', snmp_type='IPADDR')> switch1 Switch: switch1, result: TIMEOUT ERROR switch2 Switch: switch2, Result: <SNMPVariable value='10.160.59.132' (oid='enterprises.37072.302.3.1.1.3.6.0', oid_index='', snmp_type='IPADDR')> [root@tesu john.roede]# python3 test2.py ====> Testcase 3 switch2 Switch: switch2, Result: <SNMPVariable value='10.160.59.132' (oid='enterprises.37072.302.3.1.1.3.6.0', oid_index='', snmp_type='IPADDR')> switch3 Switch: switch3, result: TIMEOUT ERROR switch1 Switch: switch1, result: TIMEOUT ERROR [root@tesu john.roede]# python3 test2.py ====> Testcase 4 switch2 Switch: switch2, Result: <SNMPVariable value='10.160.59.133' (oid='enterprises.37072.302.3.1.1.3.6.0', oid_index='', snmp_type='IPADDR')> switch3 Switch: switch3, Result: <SNMPVariable value='10.160.59.132' (oid='enterprises.37072.302.3.1.1.3.6.0', oid_index='', snmp_type='IPADDR')> switch1 Switch: switch1, result: TIMEOUT ERROR As you can see, every switch does respond at times, so there shouldn't be an issue at the switch level. Just to be 100% sure I've also written a quick bash script querying all 3 switches in succession. 
This does work: ./test.sh SNMPv2-SMI::enterprises.37072.302.3.1.1.3.6.0 = IpAddress: 10.160.59.131 SNMPv2-SMI::enterprises.37072.302.3.1.1.3.6.0 = IpAddress: 10.160.59.132 SNMPv2-SMI::enterprises.37072.302.3.1.1.3.6.0 = IpAddress: 10.160.59.133 I really can't figure out what's causing the issue with easysnmp. I've tried separating the several SNMP calls with a sleep to rule out I'm firing of the calls in too quick a succession, but this didn't solve the issue. I've also looked in the easysnmp documentation for a Session.close() function but there doesn't seem to be one. There is mention of a update_session() function which I haven't tried since the documentation states: "While it is recommended to create a new Session instance instead, this method has been added for your convenience in case you really need it (we’ve mis-typed the community string before in our interactive sessions and totally understand your pain)." I'd really appreciate some help with this issue and would like to thank you for your time in advance. PS. Apologies, I can't seem to get the formatting of the output right :-( A: I've been digging a bit deeper and it turns out this issue is caused by several devices using the same engine ID. See https://github.com/easysnmp/easysnmp/issues/156 for more information.
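Following up on the easysnmp engine-ID answer below: the real fix is giving each switch a unique SNMPv3 engine ID, but one possible client-side mitigation (an assumption on my part, not something confirmed in the linked issue) is to run each query in its own process so the underlying net-snmp state from one host cannot carry over to the next. Credentials below are placeholders, mirroring the hidden ones in the question.

from multiprocessing import Pool

import easysnmp

OID = '.1.3.6.1.4.1.37072.302.3.1.1.3.6.0'

def query_switch(hostname):
    # Each worker process starts with fresh net-snmp state, so cached
    # USM/engine-ID data from one switch cannot interfere with the next.
    try:
        session = easysnmp.Session(
            hostname=hostname, version=3,
            security_username='test_user',
            security_level='auth_with_privacy',
            auth_protocol='MD5', auth_password='***',
            privacy_protocol='DES', privacy_password='***')
        return hostname, str(session.get(OID).value)
    except easysnmp.exceptions.EasySNMPTimeoutError:
        return hostname, 'TIMEOUT'

if __name__ == '__main__':
    hosts = ['switch-mBvk1-1.obis.ns.nl',
             'switch-mBvk1-2.obis.ns.nl',
             'switch-abv5-1.obis.ns.nl']
    # maxtasksperchild=1 forces a brand-new process (and SNMP engine) per query
    with Pool(processes=1, maxtasksperchild=1) as pool:
        for host, value in pool.imap(query_switch, hosts):
            print(host, value)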
Python3 easysnmp querying multiple switches results in a random timeout for some random switches
I've got 3 switches I'm trying to get SNMP data from. Every switch does respond from time to time, but which switches respond depends upon the order I'm querying them. I'm using easysnmp.Session. My code (passwords obviously not shown): import easysnmp switches_dns1 = { "switch1": "switch-mBvk1-1.obis.ns.nl", "switch2": "switch-mBvk1-2.obis.ns.nl", "switch3": "switch-abv5-1.obis.ns.nl" } switches_dns2 = { "switch3": "switch-abv5-1.obis.ns.nl", "switch1": "switch-mBvk1-1.obis.ns.nl", "switch2": "switch-mBvk1-2.obis.ns.nl" } switches_dns3 = { "switch2": "switch-mBvk1-2.obis.ns.nl", "switch3": "switch-abv5-1.obis.ns.nl", "switch1": "switch-mBvk1-1.obis.ns.nl" } switches_dns4 = { "switch2": "switch-abv5-1.obis.ns.nl", "switch3": "switch-mBvk1-2.obis.ns.nl", "switch1": "switch-mBvk1-1.obis.ns.nl" } print("====> Testcase 1") for switch, hostname_dns in switches_dns1.items(): print(switch) try: snmp_session = easysnmp.Session(hostname=hostname_dns, security_username='test_user', auth_protocol='MD5', auth_password='blahblahblah', version=3, security_level='auth_with_privacy', privacy_password='blahblahblah', privacy_protocol='DES') result=snmp_session.get('.1.3.6.1.4.1.37072.302.3.1.1.3.6.0') print("Switch: {}, Result: {}".format(switch, result)) except easysnmp.exceptions.EasySNMPTimeoutError as err: print("Switch: {}, result: TIMEOUT ERROR".format(switch)) When running this script for all for dictionaries, the only thing changing being the order of the switches queried I'm getting these results: ====> Testcase 1 switch1 Switch: switch1, Result: <SNMPVariable value='10.160.59.131' (oid='enterprises.37072.302.3.1.1.3.6.0', oid_index='', snmp_type='IPADDR')> switch2 Switch: switch2, Result: <SNMPVariable value='10.160.59.132' (oid='enterprises.37072.302.3.1.1.3.6.0', oid_index='', snmp_type='IPADDR')> switch3 Switch: switch3, result: TIMEOUT ERROR [root@tesu john.roede]# python3 test2.py ====> Testcase 2 switch3 Switch: switch3, Result: <SNMPVariable value='10.160.59.133' (oid='enterprises.37072.302.3.1.1.3.6.0', oid_index='', snmp_type='IPADDR')> switch1 Switch: switch1, result: TIMEOUT ERROR switch2 Switch: switch2, Result: <SNMPVariable value='10.160.59.132' (oid='enterprises.37072.302.3.1.1.3.6.0', oid_index='', snmp_type='IPADDR')> [root@tesu john.roede]# python3 test2.py ====> Testcase 3 switch2 Switch: switch2, Result: <SNMPVariable value='10.160.59.132' (oid='enterprises.37072.302.3.1.1.3.6.0', oid_index='', snmp_type='IPADDR')> switch3 Switch: switch3, result: TIMEOUT ERROR switch1 Switch: switch1, result: TIMEOUT ERROR [root@tesu john.roede]# python3 test2.py ====> Testcase 4 switch2 Switch: switch2, Result: <SNMPVariable value='10.160.59.133' (oid='enterprises.37072.302.3.1.1.3.6.0', oid_index='', snmp_type='IPADDR')> switch3 Switch: switch3, Result: <SNMPVariable value='10.160.59.132' (oid='enterprises.37072.302.3.1.1.3.6.0', oid_index='', snmp_type='IPADDR')> switch1 Switch: switch1, result: TIMEOUT ERROR As you can see, every switch does respond at times, so there shouldn't be an issue at the switch level. Just to be 100% sure I've also written a quick bash script querying all 3 switches in succession. This does work: ./test.sh SNMPv2-SMI::enterprises.37072.302.3.1.1.3.6.0 = IpAddress: 10.160.59.131 SNMPv2-SMI::enterprises.37072.302.3.1.1.3.6.0 = IpAddress: 10.160.59.132 SNMPv2-SMI::enterprises.37072.302.3.1.1.3.6.0 = IpAddress: 10.160.59.133 I really can't figure out what's causing the issue with easysnmp. 
I've tried separating the several SNMP calls with a sleep to rule out I'm firing of the calls in too quick a succession, but this didn't solve the issue. I've also looked in the easysnmp documentation for a Session.close() function but there doesn't seem to be one. There is mention of a update_session() function which I haven't tried since the documentation states: "While it is recommended to create a new Session instance instead, this method has been added for your convenience in case you really need it (we’ve mis-typed the community string before in our interactive sessions and totally understand your pain)." I'd really appreciate some help with this issue and would like to thank you for your time in advance. PS. Apologies, I can't seem to get the formatting of the output right :-(
[ "I've been digging a bit deeper and it turns out this issue is caused by several devices using the same engine ID.\nSee https://github.com/easysnmp/easysnmp/issues/156 for more information.\n" ]
[ 0 ]
[]
[]
[ "easysnmp", "python" ]
stackoverflow_0074613299_easysnmp_python.txt
Q: Applying a function depending of index and column of a dataframe to a dataframe I have a dataframe df whose index is [x[0], ..., x[N]] and column is [y[0], ..., y[M]] and whose data is a 2D array of z[i,j]'s. I have a python function def f(x, y, z) of 3 float variables and I would like to calculate the 2d array of f(x[i], y[j], z[i,j])'s in the fastest way using numpy and/or pandas but I don't see how to do it. I see the df.transform method but it doesn't seem to allow for lambdas that are dependent on index and column of df -- or at least I don't know how to provide such lambdas. Details on df and f : How was my df obtained ? I created it during a 45 minutes computation using an intensive numerical python vectorized function on a grid with N = 5000 and M = 5000 and I "to_csv'ed" it. Now when I want to use it, I use read_csv. Now my function f is quite an involved numerical C++ function that I exposed to python with pybind11 (I put the tag for sake of completness) and that I don't want to rewrite in a "numpy vectorizable fashion" for now as it is ultra-optimized and very fast unitarily. Given x,y the function f solves numerically (iterative root finder) an equation with parameters x,y,z and unknow Z, the root of the equation being f(x,y,z). A: You could do a pd.melt: df.reset_index().rename(columns={'index':'x'}).melt(var_name='y', value_name='z', id_vars='x') It essentially transform the dataframe to the long format, making each row to have three entries: x, y and z. A: If you don't want to rewite the function, then using loop for to apply the function seems a easy way. you can do this idx = df.index cols = df.columns vals = df.to_numpy() r = [ [f(x,y,z) for y, z in zip(cols, vals[i])] for i, x in enumerate(idx) ] # if you want to recreate a dataframe df_root = pd.DataFrame(data=r, index=idx, columns=cols) there is a list comprehension on the index that includes a list comprehension on both the columns and the values of the row at the same time. vals[i] access the values from the row at position i. The result r is a list of length number of rows (N) and each item is a list of length number of columns (M). you don't need this structure especially but it is a easy way to build a dataframe with same index-columns as the original data. Note that it will still be long, you have about 25 million operations to do, even if f is optimized. A: I finally did this : matrix = df_prices.values x_matrix = np.tile(x, (y.size, 1)).transpose() y_matrix = np.tile(y, (x.size, 1)) f_vect = np.frompyfunc(f, 3, 1) res = f_vect (matrix , x_matrix , y_matrix ) Performance-wise it was optimal without having to meddle with vectorizing myself the ultra-optimized but not vectorized root solver f -- which by the way is a C++ function that I expose to python with pybind11.
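A hedged variant of the self-answer above: np.meshgrid with indexing='ij' builds the same x/y grids as the two np.tile calls and may read more directly. The f used here is a small stand-in Python function, not the real pybind11-wrapped root finder, and the sample frame is made up for illustration.

import numpy as np
import pandas as pd

# Stand-in for the real C++ solver f(x, y, z)
def f(x, y, z):
    return x + 10 * y + 100 * z

x = np.array([0.1, 0.2, 0.3])
y = np.array([1.0, 2.0])
df_prices = pd.DataFrame(np.arange(6).reshape(3, 2), index=x, columns=y)

# indexing='ij' makes x vary along rows and y along columns, matching the frame
x_matrix, y_matrix = np.meshgrid(x, y, indexing='ij')

f_vect = np.frompyfunc(f, 3, 1)
res = f_vect(x_matrix, y_matrix, df_prices.values)

df_res = pd.DataFrame(res, index=df_prices.index, columns=df_prices.columns)
print(df_res)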
Applying a function depending of index and column of a dataframe to a dataframe
I have a dataframe df whose index is [x[0], ..., x[N]] and column is [y[0], ..., y[M]] and whose data is a 2D array of z[i,j]'s. I have a python function def f(x, y, z) of 3 float variables and I would like to calculate the 2d array of f(x[i], y[j], z[i,j])'s in the fastest way using numpy and/or pandas but I don't see how to do it. I see the df.transform method but it doesn't seem to allow for lambdas that are dependent on index and column of df -- or at least I don't know how to provide such lambdas. Details on df and f : How was my df obtained ? I created it during a 45 minutes computation using an intensive numerical python vectorized function on a grid with N = 5000 and M = 5000 and I "to_csv'ed" it. Now when I want to use it, I use read_csv. Now my function f is quite an involved numerical C++ function that I exposed to python with pybind11 (I put the tag for sake of completness) and that I don't want to rewrite in a "numpy vectorizable fashion" for now as it is ultra-optimized and very fast unitarily. Given x,y the function f solves numerically (iterative root finder) an equation with parameters x,y,z and unknow Z, the root of the equation being f(x,y,z).
[ "You could do a pd.melt:\ndf.reset_index().rename(columns={'index':'x'}).melt(var_name='y', value_name='z', id_vars='x')\n\nIt essentially transform the dataframe to the long format, making each row to have three entries: x, y and z.\n", "If you don't want to rewite the function, then using loop for to apply the function seems a easy way. you can do this\nidx = df.index\ncols = df.columns\nvals = df.to_numpy()\nr = [ \n [f(x,y,z) for y, z in zip(cols, vals[i])]\n for i, x in enumerate(idx)\n]\n# if you want to recreate a dataframe\ndf_root = pd.DataFrame(data=r, index=idx, columns=cols)\n\nthere is a list comprehension on the index that includes a list comprehension on both the columns and the values of the row at the same time. vals[i] access the values from the row at position i. The result r is a list of length number of rows (N) and each item is a list of length number of columns (M). you don't need this structure especially but it is a easy way to build a dataframe with same index-columns as the original data.\nNote that it will still be long, you have about 25 million operations to do, even if f is optimized.\n", "I finally did this :\nmatrix = df_prices.values\nx_matrix = np.tile(x, (y.size, 1)).transpose()\ny_matrix = np.tile(y, (x.size, 1)) \nf_vect = np.frompyfunc(f, 3, 1)\nres = f_vect (matrix , x_matrix , y_matrix )\n\nPerformance-wise it was optimal without having to meddle with vectorizing myself the ultra-optimized but not vectorized root solver f -- which by the way is a C++ function that I expose to python with pybind11.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "dataframe", "numpy", "pandas", "pybind11", "python" ]
stackoverflow_0074606401_dataframe_numpy_pandas_pybind11_python.txt
Q: Calling TaskGroup with Dynamic sub task id from BranchPythonOperator I want to call a TaskGroup with a Dynamic sub-task id from BranchPythonOperator. This is the DAG flow that I have: branch_dag My case is I want to check whether a table exists in BigQuery or not. If exists: do nothing and end the DAG If not exists: Ingest the data from Postgres to Google Cloud Storage I know that to call a TaskGroup from BranchPythonOperator is by calling the task id with following format: group_task_id.task_id The problem is, my task group's sub task id is dynamic, depends on how many time I loop the TaskGroup. So the sub_task will be: parent_task_id.sub_task_1 parent_task_id.sub_task_2 parent_task_id.sub_task_3 ... parent_task_id.sub_task_x This is the following code for the DAG that I have: import airflow from airflow.providers.google.cloud.transfers.postgres_to_gcs import PostgresToGCSOperator from airflow.utils.task_group import TaskGroup from google.cloud.exceptions import NotFound from airflow import DAG from airflow.operators.python import BranchPythonOperator from airflow.operators.dummy import DummyOperator from google.cloud import bigquery default_args = { 'owner': 'Airflow', 'start_date': airflow.utils.dates.days_ago(2), } with DAG(dag_id='branch_dag', default_args=default_args, schedule_interval=None) as dag: def create_task_group(worker=1): var = dict() with TaskGroup(group_id='parent_task_id') as tg1: for i in range(worker): var[f'sub_task_{i}'] = PostgresToGCSOperator( task_id = f'sub_task_{i}', postgres_conn_id = 'some_postgres_conn_id', sql = 'test.sql', bucket = 'test_bucket', filename = 'test_file.json', export_format = 'json', gzip = True, params = { 'worker': worker } ) return tg1 def is_exists_table(): client = bigquery.Client() try: table_name = client.get_table('dataset_id.some_table') if table_name: return 'task_end' except NotFound as error: return 'parent_task_id' task_start = DummyOperator( task_id = 'start' ) task_branch_table = BranchPythonOperator( task_id ='check_table_exists_in_bigquery', python_callable = is_exists_table ) task_pg_to_gcs_init = create_task_group(worker=3) task_end = DummyOperator( task_id = 'end', trigger_rule = 'all_done' ) task_start >> task_branch_table >> task_end task_start >> task_branch_table >> task_pg_to_gcs_init >> task_end When I run the dag, it returns **airflow.exceptions.TaskNotFound: Task parent_task_id not found ** But this is expected, what I don't know is how to iterate the parent_task_id.sub_task_x on is_exists_table function. Or are there any workaround? This is the test.sql file if it's needed SELECT id, name, country FROM some_table WHERE 1=1 AND ABS(MOD(hashtext(id::TEXT), 3)) = {{params.worker}}; -- returns 1M+ rows I already seen this question as reference Question but I think my case is more specific. A: When designing your data pipelines, you may encounter use cases that require more complex task flows than "Task A > Task B > Task C." For example, you may have a use case where you need to decide between multiple tasks to execute based on the results of an upstream task. Or you may have a case where part of your pipeline should only run under certain external conditions. Fortunately, Airflow has multiple options for building conditional logic and/or branching into your DAGs. A: I found a dirty way around it. What I did is creating 1 additional task using DummyOperator called task_pass. 
task_pass = DummyOperator( task_id = 'pass_to_task_group' ) So the DAG flow now looks like this: task_start >> task_branch_table >> task_end task_start >> task_branch_table >> task_pass >> task_pg_to_gcs_init >> task_end Also there is 1 mistake that I made from the code above, notice that the params I set was worker. This is wrong because worker is the constant while the thing that I need to iterate is the i variable. So I change it from: params: worker to: params: i
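For reference, BranchPythonOperator can also return a list of task ids, so instead of the extra DummyOperator the branch callable could in principle return the generated TaskGroup members directly. This sketch assumes the branch task is wired straight to the TaskGroup (as in the question's DAG), so that the parent_task_id.sub_task_i tasks are its direct downstreams.

from google.cloud import bigquery
from google.cloud.exceptions import NotFound

WORKERS = 3

def is_exists_table():
    client = bigquery.Client()
    try:
        if client.get_table('dataset_id.some_table'):
            return 'end'   # note: the DummyOperator in the question uses task_id='end', not 'task_end'
    except NotFound:
        # Return every dynamically generated sub task id of the TaskGroup;
        # the branch callable may return a single id or a list of ids.
        return [f'parent_task_id.sub_task_{i}' for i in range(WORKERS)]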
Calling TaskGroup with Dynamic sub task id from BranchPythonOperator
I want to call a TaskGroup with a Dynamic sub-task id from BranchPythonOperator. This is the DAG flow that I have: branch_dag My case is I want to check whether a table exists in BigQuery or not. If exists: do nothing and end the DAG If not exists: Ingest the data from Postgres to Google Cloud Storage I know that to call a TaskGroup from BranchPythonOperator is by calling the task id with following format: group_task_id.task_id The problem is, my task group's sub task id is dynamic, depends on how many time I loop the TaskGroup. So the sub_task will be: parent_task_id.sub_task_1 parent_task_id.sub_task_2 parent_task_id.sub_task_3 ... parent_task_id.sub_task_x This is the following code for the DAG that I have: import airflow from airflow.providers.google.cloud.transfers.postgres_to_gcs import PostgresToGCSOperator from airflow.utils.task_group import TaskGroup from google.cloud.exceptions import NotFound from airflow import DAG from airflow.operators.python import BranchPythonOperator from airflow.operators.dummy import DummyOperator from google.cloud import bigquery default_args = { 'owner': 'Airflow', 'start_date': airflow.utils.dates.days_ago(2), } with DAG(dag_id='branch_dag', default_args=default_args, schedule_interval=None) as dag: def create_task_group(worker=1): var = dict() with TaskGroup(group_id='parent_task_id') as tg1: for i in range(worker): var[f'sub_task_{i}'] = PostgresToGCSOperator( task_id = f'sub_task_{i}', postgres_conn_id = 'some_postgres_conn_id', sql = 'test.sql', bucket = 'test_bucket', filename = 'test_file.json', export_format = 'json', gzip = True, params = { 'worker': worker } ) return tg1 def is_exists_table(): client = bigquery.Client() try: table_name = client.get_table('dataset_id.some_table') if table_name: return 'task_end' except NotFound as error: return 'parent_task_id' task_start = DummyOperator( task_id = 'start' ) task_branch_table = BranchPythonOperator( task_id ='check_table_exists_in_bigquery', python_callable = is_exists_table ) task_pg_to_gcs_init = create_task_group(worker=3) task_end = DummyOperator( task_id = 'end', trigger_rule = 'all_done' ) task_start >> task_branch_table >> task_end task_start >> task_branch_table >> task_pg_to_gcs_init >> task_end When I run the dag, it returns **airflow.exceptions.TaskNotFound: Task parent_task_id not found ** But this is expected, what I don't know is how to iterate the parent_task_id.sub_task_x on is_exists_table function. Or are there any workaround? This is the test.sql file if it's needed SELECT id, name, country FROM some_table WHERE 1=1 AND ABS(MOD(hashtext(id::TEXT), 3)) = {{params.worker}}; -- returns 1M+ rows I already seen this question as reference Question but I think my case is more specific.
[ "When designing your data pipelines, you may encounter use cases that require more complex task flows than \"Task A > Task B > Task C.\" For example, you may have a use case where you need to decide between multiple tasks to execute based on the results of an upstream task. Or you may have a case where part of your pipeline should only run under certain external conditions. Fortunately, Airflow has multiple options for building conditional logic and/or branching into your DAGs.\n", "I found a dirty way around it.\nWhat I did is creating 1 additional task using DummyOperator called task_pass.\n task_pass = DummyOperator(\n task_id = 'pass_to_task_group'\n )\n\nSo the DAG flow now looks like this:\ntask_start >> task_branch_table >> task_end\ntask_start >> task_branch_table >> task_pass >> task_pg_to_gcs_init >> task_end\n\nAlso there is 1 mistake that I made from the code above, notice that the params I set was worker. This is wrong because worker is the constant while the thing that I need to iterate is the i variable. So I change it from:\nparams: worker\n\nto:\nparams: i\n\n" ]
[ 0, 0 ]
[]
[]
[ "airflow", "airflow_2.x", "python" ]
stackoverflow_0074612781_airflow_airflow_2.x_python.txt
Q: create a table and different columns within that table based on input of other columns using python I have been working on a python code to try to create the output below in the python terminal. Basically what I'm trying to accomplish is I want the user to input the numbers for Columns A and B, then the code outputs Columns C, D and E. In Column C, I show how I want the python code to compute that particular column and so on (in other words, I don't want it to actually show "2 + 3 = 5", but I only want it to show 5. That's only to show you how I'm calculated it). Also, at the bottom of the code, I am trying to calculate and print the averages of Columns C and D. I have been struggling with this for quite some time. Below is the example of how I want the output to be and also the code that I have. Sorry in advance for the code. Some things I know how to do and other things I don't which is why I am asking the Python community for help. Any help would be greatly appreciated. def solve(columnA, columnB): print(" Class ColumnA ColumnB ColumnC ColumnD ColumnE") i = 0 columnC = columnB columnD = columnC - columnB columnE = [] empty1 = [] empty2 = [] empty3 = [] while(i < len(columnA)): columnC += columnB[i] print("str(i + 1) + " " + str(columnA[i]) + " " + str(columnB) + \ " " + str(columnC) + " " + str(columnD) + " " + str(columnE))") empty1.append(columnC) empty2.append(columnD) empty2.append(columnE) i += 1 return (empty1, empty2, empty3) np = int(input("Enter the number of Classes: ")) column_A = [] column_B = [] for i in range(np): column_A.append(int(input("Enter the numbers for Column A " + str(i + 1) + ": "))) column_B.append(int(input("Enter the numbers for Column B " + str(i + 1) + ": "))) lis = solve(column_A, column_B) print("Average of Column C: " + str(sum(lis[0]) / len(lis[0]))) print("Average of Column D: " + str(sum(lis[1]) / len(lis[1]))) A: You could use something like this: import itertools import statistics number_of_classes = int(input("Enter the number of Classes: ")) def ask_numbers(name, amount_to_ask): print('Please enter the numbers for Column', name) for how_many_asked_yet in range(amount_to_ask): yield (int(input(f'{how_many_asked_yet + 1}) '))) print() columns_names = ('A', 'B') columns = {name: tuple(ask_numbers(name, number_of_classes)) for name in columns_names} def compute_c_values(source_column): values_accumulator = zip(source_column, itertools.accumulate(source_column)) current_value, previous_value = next(values_accumulator) yield (f'{current_value}', current_value) for current_value, sum_value in values_accumulator: yield (f'{previous_value}+{current_value}={sum_value}', sum_value) previous_value = sum_value columns['C'] = tuple(compute_c_values(columns['A'])) columns['D'] = tuple(c_value - b_value for (b_value, (_, c_value)) in zip(columns['B'], columns['C'])) columns['E'] = tuple(0 if d_value < 0 else 1 for d_value in columns['D']) output_header_footer_format = "{:6} | {:10} | {:10} | {:15} | {:10} | {:10}" print(output_header_footer_format.format('Class', 'Column A', 'Column B', 'Column C', 'Column D', 'Column E')) print('-------|------------|------------|-----------------|------------|-----------') output_line_format = "{0:6} | {1:10} | {2:10} | {3[0]:^15} | {4:10} | {5:^10}" for class_nb, output_values in enumerate(zip(*columns.values())): print(output_line_format.format(class_nb, *output_values)) print(output_header_footer_format.format('', '', '', '', '', '')) averages = { 'C' : statistics.mean(c_value for (_, c_value) in columns['C']), 'D' : 
statistics.mean(columns['D']) } print(output_header_footer_format.format('' , '', 'Averages =', averages['C'], averages['D'], '')) To fully understand this code you will need to read about: Generators Generator expressions Dictionary comprehensions .format() method of string objects Tuples unpacking Unpacking Argument List
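For readers new to the building blocks listed above, here is a tiny, self-contained demonstration of the two pieces doing most of the work: itertools.accumulate for the running sums in Column C and statistics.mean for the averages. The sample numbers are made up purely for illustration.

import itertools
import statistics

column_a = [2, 3, 5]  # hypothetical input for Column A
column_b = [1, 4, 2]  # hypothetical input for Column B

column_c = list(itertools.accumulate(column_a))         # running sums: [2, 5, 10]
column_d = [c - b for b, c in zip(column_b, column_c)]  # [1, 1, 8]
column_e = [0 if d < 0 else 1 for d in column_d]        # flag: 0 if negative, else 1

print('C:', column_c, 'average:', statistics.mean(column_c))
print('D:', column_d, 'average:', statistics.mean(column_d))
print('E:', column_e)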
create a table and different columns within that table based on input of other columns using python
I have been working on a python code to try to create the output below in the python terminal. Basically what I'm trying to accomplish is I want the user to input the numbers for Columns A and B, then the code outputs Columns C, D and E. In Column C, I show how I want the python code to compute that particular column and so on (in other words, I don't want it to actually show "2 + 3 = 5", but I only want it to show 5. That's only to show you how I'm calculated it). Also, at the bottom of the code, I am trying to calculate and print the averages of Columns C and D. I have been struggling with this for quite some time. Below is the example of how I want the output to be and also the code that I have. Sorry in advance for the code. Some things I know how to do and other things I don't which is why I am asking the Python community for help. Any help would be greatly appreciated. def solve(columnA, columnB): print(" Class ColumnA ColumnB ColumnC ColumnD ColumnE") i = 0 columnC = columnB columnD = columnC - columnB columnE = [] empty1 = [] empty2 = [] empty3 = [] while(i < len(columnA)): columnC += columnB[i] print("str(i + 1) + " " + str(columnA[i]) + " " + str(columnB) + \ " " + str(columnC) + " " + str(columnD) + " " + str(columnE))") empty1.append(columnC) empty2.append(columnD) empty2.append(columnE) i += 1 return (empty1, empty2, empty3) np = int(input("Enter the number of Classes: ")) column_A = [] column_B = [] for i in range(np): column_A.append(int(input("Enter the numbers for Column A " + str(i + 1) + ": "))) column_B.append(int(input("Enter the numbers for Column B " + str(i + 1) + ": "))) lis = solve(column_A, column_B) print("Average of Column C: " + str(sum(lis[0]) / len(lis[0]))) print("Average of Column D: " + str(sum(lis[1]) / len(lis[1])))
[ "You could use something like this:\nimport itertools\nimport statistics\n\nnumber_of_classes = int(input(\"Enter the number of Classes: \"))\n\ndef ask_numbers(name, amount_to_ask):\n print('Please enter the numbers for Column', name)\n for how_many_asked_yet in range(amount_to_ask):\n yield (int(input(f'{how_many_asked_yet + 1}) ')))\n print()\n\ncolumns_names = ('A', 'B')\n\ncolumns = {name: tuple(ask_numbers(name, number_of_classes)) for name in columns_names}\n\ndef compute_c_values(source_column):\n values_accumulator = zip(source_column, itertools.accumulate(source_column))\n current_value, previous_value = next(values_accumulator)\n yield (f'{current_value}', current_value)\n \n for current_value, sum_value in values_accumulator:\n yield (f'{previous_value}+{current_value}={sum_value}', sum_value)\n previous_value = sum_value\n\ncolumns['C'] = tuple(compute_c_values(columns['A']))\n\ncolumns['D'] = tuple(c_value - b_value for (b_value, (_, c_value)) in zip(columns['B'], columns['C']))\n\ncolumns['E'] = tuple(0 if d_value < 0 else 1 for d_value in columns['D'])\n\noutput_header_footer_format = \"{:6} | {:10} | {:10} | {:15} | {:10} | {:10}\"\nprint(output_header_footer_format.format('Class', 'Column A', 'Column B', 'Column C', 'Column D', 'Column E'))\nprint('-------|------------|------------|-----------------|------------|-----------')\n\noutput_line_format = \"{0:6} | {1:10} | {2:10} | {3[0]:^15} | {4:10} | {5:^10}\"\n\nfor class_nb, output_values in enumerate(zip(*columns.values())):\n print(output_line_format.format(class_nb, *output_values))\n\nprint(output_header_footer_format.format('', '', '', '', '', ''))\n\naverages = {\n 'C' : statistics.mean(c_value for (_, c_value) in columns['C']),\n 'D' : statistics.mean(columns['D'])\n}\nprint(output_header_footer_format.format('' , '', 'Averages =', averages['C'], averages['D'], ''))\n\nTo fully understand this code you will need to read about:\n\nGenerators\nGenerator expressions\nDictionary comprehensions\n.format() method of string objects\nTuples unpacking\nUnpacking Argument List\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074248354_python.txt
Q: how to connect two raspberry pi using OPCUA? I have two Raspberry Pis and I want to connect them via OPC UA, making one of them the server and the other the client. Do you have any ideas, clues, or websites that would help me understand the basics? Your prompt reply would be appreciated. Thank you very much in advance. Best regards, Ankit Mavani I have searched on the internet, but unfortunately I did not find anything. Perhaps you could help me out with this problem. A: I cannot recommend any resources. But from your tags I guess you want to use Python for your client and server. So for this task you can use asyncua. The various examples are a good starting point. You can also find some docs. For debugging the server and understanding it, I would recommend UAExpert, where you can browse your server and read/write values.
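The asyncua suggestion in the answer above could look like the following minimal sketch; the endpoint URL, namespace URI, object and variable names, and the server Pi's IP address are placeholder assumptions. One Raspberry Pi runs the server coroutine and the other runs the client coroutine pointed at the server's address.

import asyncio
from asyncua import Client, Server

async def run_server():
    server = Server()
    await server.init()
    server.set_endpoint('opc.tcp://0.0.0.0:4840/freeopcua/server/')  # placeholder endpoint
    idx = await server.register_namespace('http://example.org/pi')   # placeholder namespace URI
    obj = await server.nodes.objects.add_object(idx, 'Pi')
    temp = await obj.add_variable(idx, 'Temperature', 20.0)
    await temp.set_writable()
    async with server:
        while True:
            await asyncio.sleep(1)

async def run_client(server_ip='192.168.1.10'):  # placeholder IP of the server Pi
    async with Client(url=f'opc.tcp://{server_ip}:4840/freeopcua/server/') as client:
        idx = await client.get_namespace_index('http://example.org/pi')
        node = await client.nodes.objects.get_child([f'{idx}:Pi', f'{idx}:Temperature'])
        print(await node.read_value())

# On the server Pi: asyncio.run(run_server())
# On the client Pi: asyncio.run(run_client())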
how to connect two raspberry pi using OPCUA?
I have two Raspberry Pis and I want to connect them via OPC UA, making one of them the server and the other the client. Do you have any ideas, clues, or websites that would help me understand the basics? Your prompt reply would be appreciated. Thank you very much in advance. Best regards, Ankit Mavani I have searched on the internet, but unfortunately I did not find anything. Perhaps you could help me out with this problem.
[ "I cannot recommend any resources. But from your tags I guess you want to use Python for your client and server. So for this task you can use asyncua.\nThe various examples are a good starting point. You can also find some docs.\nFor debugging the server and understanding it, I would recommend UAExpert, where you can browse your server and read/write values.\n" ]
[ 0 ]
[]
[]
[ "communication_protocol", "opc_ua", "python", "raspberry_pi" ]
stackoverflow_0074598319_communication_protocol_opc_ua_python_raspberry_pi.txt
Q: How do I export CSVs saved in AWS S3 to DynamoDB? I saw that it's possible to import CSVs uploaded to s3 directly to dynamodb but I haven't figured out how to do it properly yet. I'm guessing my issue is likely related to the naming of my partition key vs the actual headers in my csvs but I am unsure. Is there a way to easily import CSVs to dynamodb from s3 programatically? I have 20ish CSVs that I upload to S3 via Python script: import boto3 a_key = '...' s_key = '...' region = 'us-west-2' def upload_to_aws(local_file, bucket, s3_file): s3 = boto3.client('s3', aws_access_key_id=a_key, aws_secret_access_key=s_key, region_name=region) try: s3.upload_file(local_file, bucket, s3_file) print("Upload to Cloud Successful") return True except FileNotFoundError: print("The file was not found") return False def snaps(team_name): uploaded = upload_to_aws(f'CSVs/{team_name}_snaps.csv', 'teamcsvs', f'CSVs/{team_name}_snaps.csv') def short(team_name, short_year): uploaded = upload_to_aws(f'CSVs/{team_name}{short_year}.csv', 'teamcsvs', f'CSVs/{team_name}{short_year}.csv') I followed this guide and expected my CSVs to seamlessly become a dynamoDB table, but my imports keep failing. I've gotten a few different errors from the logs but the latest is: { "itemS3Pointer": { "bucket": "teamcsvs", "key": "CSVs/atl22.csv", "itemIndex": 0 }, "importArn": "arn:aws:dynamodb:us-west-2:874782694093:table/atl_snaps/import/01669752143020-2b8be4f0", "errorMessages": [ "One or more parameter values were invalid: Value for Item.Score is ambiguous" ] } 2022-11-29T13:04:57.021-07:00 Copy { "itemS3Pointer": { "bucket": "teamcsvs", "key": "CSVs/atl22.csv", "itemIndex": 0 }, "importArn": "arn:aws:dynamodb:us-west-2:874782694093:table/atl_snaps/import/01669752143020-2b8be4f0", "errorMessages": [ "One or more parameter values were invalid: Value for Item.Score is ambiguous" ] } {"itemS3Pointer":{"bucket":"teamcsvs","key":"CSVs/atl22.csv","itemIndex":0},"importArn":"arn:aws:dynamodb:us-west-2:874782694093:table/atl_snaps/import/01669752143020-2b8be4f0","errorMessages":["One or more parameter values were invalid: Value for Item.Score is ambiguous"]} 2022-11-29T13:04:57.021-07:00 Copy { "itemS3Pointer": { "bucket": "teamcsvs", "key": "CSVs/atl22.csv", "itemIndex": 0 }, "importArn": "arn:aws:dynamodb:us-west-2:874782694093:table/atl_snaps/import/01669752143020-2b8be4f0", "errorMessages": [ "One or more parameter values were invalid: Value for Item.Score is ambiguous" ] } {"itemS3Pointer":{"bucket":"teamcsvs","key":"CSVs/atl22.csv","itemIndex":0},"importArn":"arn:aws:dynamodb:us-west-2:874782694093:table/atl_snaps/import/01669752143020-2b8be4f0","errorMessages":["One or more parameter values were invalid: Value for Item.Score is ambiguous"]} 2022-11-29T13:04:57.021-07:00 Copy { "itemS3Pointer": { "bucket": "teamcsvs", "key": "CSVs/atl22.csv", "itemIndex": 0 }, "importArn": "arn:aws:dynamodb:us-west-2:874782694093:table/atl_snaps/import/01669752143020-2b8be4f0", "errorMessages": [ "One or more parameter values were invalid: Value for Item.Score is ambiguous" ] } {"itemS3Pointer":{"bucket":"teamcsvs","key":"CSVs/atl22.csv","itemIndex":0},"importArn":"arn:aws:dynamodb:us-west-2:874782694093:table/atl_snaps/import/01669752143020-2b8be4f0","errorMessages":["One or more parameter values were invalid: Value for Item.Score is ambiguous"]} A: You have not gotten a few different errors, you get the same one each time. Item.Score is ambiguous. Can you share a snippet of your CSV file, that will help us determine the issue.
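If the managed S3 import keeps rejecting rows, one hypothetical fallback (not what the answer above suggests, which is to inspect the CSV first) is to stream the CSV yourself and write the items with boto3, casting the ambiguous columns explicitly. The bucket, key, and table names below are taken from the question, and the assumption that Score holds integers is mine.

import csv
import io
import boto3

s3 = boto3.client('s3')
table = boto3.resource('dynamodb').Table('atl_snaps')  # assumes the table exists with a matching partition key

obj = s3.get_object(Bucket='teamcsvs', Key='CSVs/atl22.csv')
rows = csv.DictReader(io.StringIO(obj['Body'].read().decode('utf-8')))

with table.batch_writer() as batch:
    for row in rows:
        row['Score'] = int(row['Score'])  # cast explicitly so the value type is never ambiguous (assumes integers)
        batch.put_item(Item=row)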
How do I export CSVs saved in AWS S3 to DynamoDB?
I saw that it's possible to import CSVs uploaded to s3 directly to dynamodb but I haven't figured out how to do it properly yet. I'm guessing my issue is likely related to the naming of my partition key vs the actual headers in my csvs but I am unsure. Is there a way to easily import CSVs to dynamodb from s3 programatically? I have 20ish CSVs that I upload to S3 via Python script: import boto3 a_key = '...' s_key = '...' region = 'us-west-2' def upload_to_aws(local_file, bucket, s3_file): s3 = boto3.client('s3', aws_access_key_id=a_key, aws_secret_access_key=s_key, region_name=region) try: s3.upload_file(local_file, bucket, s3_file) print("Upload to Cloud Successful") return True except FileNotFoundError: print("The file was not found") return False def snaps(team_name): uploaded = upload_to_aws(f'CSVs/{team_name}_snaps.csv', 'teamcsvs', f'CSVs/{team_name}_snaps.csv') def short(team_name, short_year): uploaded = upload_to_aws(f'CSVs/{team_name}{short_year}.csv', 'teamcsvs', f'CSVs/{team_name}{short_year}.csv') I followed this guide and expected my CSVs to seamlessly become a dynamoDB table, but my imports keep failing. I've gotten a few different errors from the logs but the latest is: { "itemS3Pointer": { "bucket": "teamcsvs", "key": "CSVs/atl22.csv", "itemIndex": 0 }, "importArn": "arn:aws:dynamodb:us-west-2:874782694093:table/atl_snaps/import/01669752143020-2b8be4f0", "errorMessages": [ "One or more parameter values were invalid: Value for Item.Score is ambiguous" ] } 2022-11-29T13:04:57.021-07:00 Copy { "itemS3Pointer": { "bucket": "teamcsvs", "key": "CSVs/atl22.csv", "itemIndex": 0 }, "importArn": "arn:aws:dynamodb:us-west-2:874782694093:table/atl_snaps/import/01669752143020-2b8be4f0", "errorMessages": [ "One or more parameter values were invalid: Value for Item.Score is ambiguous" ] } {"itemS3Pointer":{"bucket":"teamcsvs","key":"CSVs/atl22.csv","itemIndex":0},"importArn":"arn:aws:dynamodb:us-west-2:874782694093:table/atl_snaps/import/01669752143020-2b8be4f0","errorMessages":["One or more parameter values were invalid: Value for Item.Score is ambiguous"]} 2022-11-29T13:04:57.021-07:00 Copy { "itemS3Pointer": { "bucket": "teamcsvs", "key": "CSVs/atl22.csv", "itemIndex": 0 }, "importArn": "arn:aws:dynamodb:us-west-2:874782694093:table/atl_snaps/import/01669752143020-2b8be4f0", "errorMessages": [ "One or more parameter values were invalid: Value for Item.Score is ambiguous" ] } {"itemS3Pointer":{"bucket":"teamcsvs","key":"CSVs/atl22.csv","itemIndex":0},"importArn":"arn:aws:dynamodb:us-west-2:874782694093:table/atl_snaps/import/01669752143020-2b8be4f0","errorMessages":["One or more parameter values were invalid: Value for Item.Score is ambiguous"]} 2022-11-29T13:04:57.021-07:00 Copy { "itemS3Pointer": { "bucket": "teamcsvs", "key": "CSVs/atl22.csv", "itemIndex": 0 }, "importArn": "arn:aws:dynamodb:us-west-2:874782694093:table/atl_snaps/import/01669752143020-2b8be4f0", "errorMessages": [ "One or more parameter values were invalid: Value for Item.Score is ambiguous" ] } {"itemS3Pointer":{"bucket":"teamcsvs","key":"CSVs/atl22.csv","itemIndex":0},"importArn":"arn:aws:dynamodb:us-west-2:874782694093:table/atl_snaps/import/01669752143020-2b8be4f0","errorMessages":["One or more parameter values were invalid: Value for Item.Score is ambiguous"]}
[ "You have not gotten a few different errors; you get the same one each time: Item.Score is ambiguous.\nCan you share a snippet of your CSV file? That will help us determine the issue.\n" ]
[ 0 ]
[]
[]
[ "amazon_dynamodb", "amazon_s3", "python" ]
stackoverflow_0074621907_amazon_dynamodb_amazon_s3_python.txt
Q: how to play youtube video with sound in wxpython window? I am making a simple youtube player and downloader app accessible for blind people with screenreaders, but also usable by sighted people. I chose wxpython because it has best accessibility ever. How to play a youtube video in the wx window? Do I need to use wx.media.MediaCtrl and how to correctly use it? can I play youtube video without downloading it right in the wxpython window? I tried using wx.media.MediaCtrl, but I downloaded the video from youtube and then made it to show the video. A: wx.media will play direct from a URL, although I have no idea if it can access a YouTube url directly (I'm not a fan). Here's a very bare bones player accessing a url rather than a file. I would suggest that you test the media link to see if it is a url or a file, then use LoadURI or Load depending on the outcome. import wx import wx.media class TestPanel(wx.Frame): def __init__(self): wx.Frame.__init__(self, None, title='Media Player') panel = wx.Panel(self, -1, size=(500, 700)) self.player = wx.media.MediaCtrl(panel, style=wx.SIMPLE_BORDER) self.stop = wx.Button(panel, -1, "Stop") sizer = wx.BoxSizer(wx.VERTICAL) sizer.Add(self.player, 1, flag=wx.ALL, border=5) sizer.Add(self.stop, 0, wx.ALL, 5) panel.SetSizer(sizer) self.media = 'http://clips.vorwaerts-gmbh.de/VfE_html5.mp4' self.player.Bind(wx.media.EVT_MEDIA_LOADED, self.play) self.player.Bind(wx.media.EVT_MEDIA_FINISHED, self.quit) self.Bind(wx.media.EVT_MEDIA_STOP, self.quit) self.Bind(wx.EVT_BUTTON, self.quit, self.stop) self.Bind(wx.EVT_CLOSE, self.quit) self.timer = wx.Timer(self) self.Bind(wx.EVT_TIMER, self.OnTimer, self.timer) self.timer.Start(250) if self.player.LoadURI(self.media): pass else: print("Media not found") self.quit(None) self.Show() def OnTimer(self, event): if self.player.GetState() == wx.media.MEDIASTATE_PLAYING: print("Progress:",self.player.Tell()) def play(self, event): self.player.Play() def quit(self, event): self.timer.Stop() if self.player.GetState() == wx.media.MEDIASTATE_PLAYING: self.player.Stop() self.Destroy() if __name__ == '__main__': app = wx.App() Frame = TestPanel() app.MainLoop() Worthy of note, you may have to choose your backend player via the szBackend parameter. Available options are: MEDIABACKEND_DIRECTSHOW MEDIABACKEND_QUICKTIME MEDIABACKEND_GSTREAMER MEDIABACKEND_WMP10 If you want something more complicated than wx.media offers, you will have to roll your own. Gstreamer and Vlc both work well on Linux, on other operating systems, I have no idea. Edit: If you must stream direct from youtube, use something like pafy to isolate the url. For the above code: add import pafy instead of specifying the url to self.media replace self.media = 'http://clips.vorwaerts-gmbh.de/VfE_html5.mp4' with the following: # youtube url via pafy url = "https://www.youtube.com/watch?v=iL-jC7XyLeo" video = pafy.new(url) best = video.getbest() playurl = best.url self.media = playurl Then continue as above
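Worth noting as a hedge: pafy depends on a youtube-dl backend that may no longer resolve streams reliably, so an alternative (not part of the answer above) is to extract the direct stream URL with yt-dlp and hand it to wx.media.MediaCtrl.LoadURI in the same way. The format selector below is a guess and may need tuning.

from yt_dlp import YoutubeDL

def resolve_stream_url(youtube_url):
    # Ask yt-dlp for a single progressive MP4 format and return its direct URL.
    options = {'format': 'best[ext=mp4]', 'quiet': True}  # format string is an assumption
    with YoutubeDL(options) as ydl:
        info = ydl.extract_info(youtube_url, download=False)
    return info['url']

# e.g. self.media = resolve_stream_url('https://www.youtube.com/watch?v=iL-jC7XyLeo')
# followed by self.player.LoadURI(self.media), as in the answer above.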
how to play youtube video with sound in wxpython window?
I am making a simple youtube player and downloader app accessible for blind people with screenreaders, but also usable by sighted people. I chose wxpython because it has best accessibility ever. How to play a youtube video in the wx window? Do I need to use wx.media.MediaCtrl and how to correctly use it? can I play youtube video without downloading it right in the wxpython window? I tried using wx.media.MediaCtrl, but I downloaded the video from youtube and then made it to show the video.
[ "wx.media will play direct from a URL, although I have no idea if it can access a YouTube url directly (I'm not a fan).\nHere's a very bare bones player accessing a url rather than a file.\nI would suggest that you test the media link to see if it is a url or a file, then use LoadURI or Load depending on the outcome.\nimport wx\nimport wx.media\n\nclass TestPanel(wx.Frame):\n def __init__(self):\n wx.Frame.__init__(self, None, title='Media Player')\n panel = wx.Panel(self, -1, size=(500, 700))\n self.player = wx.media.MediaCtrl(panel, style=wx.SIMPLE_BORDER)\n self.stop = wx.Button(panel, -1, \"Stop\")\n sizer = wx.BoxSizer(wx.VERTICAL)\n sizer.Add(self.player, 1, flag=wx.ALL, border=5)\n sizer.Add(self.stop, 0, wx.ALL, 5)\n panel.SetSizer(sizer)\n\n self.media = 'http://clips.vorwaerts-gmbh.de/VfE_html5.mp4'\n self.player.Bind(wx.media.EVT_MEDIA_LOADED, self.play)\n self.player.Bind(wx.media.EVT_MEDIA_FINISHED, self.quit)\n self.Bind(wx.media.EVT_MEDIA_STOP, self.quit)\n self.Bind(wx.EVT_BUTTON, self.quit, self.stop)\n self.Bind(wx.EVT_CLOSE, self.quit)\n self.timer = wx.Timer(self)\n self.Bind(wx.EVT_TIMER, self.OnTimer, self.timer)\n self.timer.Start(250)\n if self.player.LoadURI(self.media):\n pass\n else:\n print(\"Media not found\")\n self.quit(None)\n self.Show()\n\n def OnTimer(self, event):\n if self.player.GetState() == wx.media.MEDIASTATE_PLAYING:\n print(\"Progress:\",self.player.Tell())\n\n def play(self, event):\n self.player.Play()\n\n def quit(self, event):\n self.timer.Stop()\n if self.player.GetState() == wx.media.MEDIASTATE_PLAYING:\n self.player.Stop()\n self.Destroy()\n\nif __name__ == '__main__':\n app = wx.App()\n Frame = TestPanel()\n app.MainLoop()\n\nWorthy of note, you may have to choose your backend player via the szBackend parameter.\nAvailable options are:\n\nMEDIABACKEND_DIRECTSHOW\nMEDIABACKEND_QUICKTIME\nMEDIABACKEND_GSTREAMER\nMEDIABACKEND_WMP10\n\nIf you want something more complicated than wx.media offers, you will have to roll your own.\nGstreamer and Vlc both work well on Linux, on other operating systems, I have no idea.\nEdit:\nIf you must stream direct from youtube, use something like pafy to isolate the url.\nFor the above code:\nadd import pafy\ninstead of specifying the url to self.media replace\nself.media = 'http://clips.vorwaerts-gmbh.de/VfE_html5.mp4'\nwith the following:\n# youtube url via pafy\n url = \"https://www.youtube.com/watch?v=iL-jC7XyLeo\"\n video = pafy.new(url)\n best = video.getbest()\n playurl = best.url\n self.media = playurl\n\nThen continue as above\n" ]
[ 0 ]
[]
[]
[ "accessibility", "python", "wxpython", "youtube" ]
stackoverflow_0074583726_accessibility_python_wxpython_youtube.txt
Q: Is there a way for my discord bot to change a message it sent prior, completing replacing that message with a new one? I'm creating a connect 4 game in discord.py, using buttons to place pieces. Currently, it resends a new gameboard every time a piece is placed, with the buttons causing issues as they display "this interaction failed", even though its code ran fine. So is there a way to edit the message of the game board to prevent this issue? def place(name, Line, row): open_file = open(name, "r") board = [] piece = open_file.readline().strip("\n") for x in range(6): value = open_file.readline() board.append(value.strip("\n").split(",")) open_file.close() Place = True while Place and Line != 6: if board[Line][row - 1] == ":white_large_square:": Line += 1 else: Place = False if not board[0][row - 1] == piece: Line -= 1 board[Line][row - 1] = piece Line1 = ",".join(board[0]) Line2 = ",".join(board[1]) Line3 = ",".join(board[2]) Line4 = ",".join(board[3]) Line5 = ",".join(board[4]) Line6 = ",".join(board[5]) Lines = [Line1, Line2, Line3, Line4, Line5, Line6] phrase = "\n".join(Lines) new_piece = "" if piece == ":green_circle:": new_piece = ":red_circle:\n" elif piece == ":red_circle:": new_piece = ":green_circle:\n" open_file = open(name, "w") open_file.write(new_piece + phrase) open_file.close() return "Valid Move" else: return "Invalid Move" @bot.command( brief=" Begins your Connect 4 Game", description= " Displays the board and buttons, which will place your piece in the desired lane", Arguements="None") async def Connect4(ctx): open_file = open(ctx.author.name + "#", "w") open_file.write( ":green_circle:\n:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:\n:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:\n:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:\n:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:\n:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:\n:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:" ) open_file.close() open_file = open(ctx.author.name + "#", "r") board = [] piece = open_file.readline() for _ in range(6): value = open_file.readline() board.append(value.strip("\n").split(",")) open_file.close() L1 = "".join(board[0]) L2 = "".join(board[1]) L3 = "".join(board[2]) L4 = "".join(board[3]) L5 = "".join(board[4]) L6 = "".join(board[5]) button1 = Button(label="", emoji="1️⃣", style=discord.ButtonStyle.gray, row=0) button2 = Button(label="", emoji="2️⃣", style=discord.ButtonStyle.gray, row=0) button3 = Button(label="", emoji="3️⃣", style=discord.ButtonStyle.gray, row=0) button4 = Button(label="", emoji="4️⃣", style=discord.ButtonStyle.gray, row=1) button5 = Button(label="", emoji="5️⃣", style=discord.ButtonStyle.gray, row=1) button6 = Button(label="", emoji="6️⃣", style=discord.ButtonStyle.gray, row=1) button7 = Button(label="", emoji="7️⃣", style=discord.ButtonStyle.gray, row=1) async def button1Clicked(interaction): x = place(ctx.author.name + "#", 0, 1) if x == "Invalid Move": await 
ctx.send("Invalid Move") else: open_file = open(ctx.author.name + "#", "r") board = [] piece = open_file.readline() for _ in range(6): value = open_file.readline() board.append(value.strip("\n").split(",")) open_file.close() L1 = "".join(board[0]) L2 = "".join(board[1]) L3 = "".join(board[2]) L4 = "".join(board[3]) L5 = "".join(board[4]) L6 = "".join(board[5]) message = piece + " turn\n" +L1 + "\n" + L2 + "\n" + L3 + "\n" + L4 + "\n" + L5 + "\n" + L6 await ctx.send(message, view=view1) async def button2Clicked(interaction): x = place(ctx.author.name + "#", 0, 2) if x == "Invalid Move": await ctx.send("Invalid Move") else: open_file = open(ctx.author.name + "#", "r") board = [] piece = open_file.readline() for _ in range(6): value = open_file.readline() board.append(value.strip("\n").split(",")) open_file.close() L1 = "".join(board[0]) L2 = "".join(board[1]) L3 = "".join(board[2]) L4 = "".join(board[3]) L5 = "".join(board[4]) L6 = "".join(board[5]) message = piece + " turn\n" +L1 + "\n" + L2 + "\n" + L3 + "\n" + L4 + "\n" + L5 + "\n" + L6 await ctx.send(message, view=view1) async def button3Clicked(interaction): x = place(ctx.author.name + "#", 0, 3) if x == "Invalid Move": await ctx.send("Invalid Move") else: open_file = open(ctx.author.name + "#", "r") board = [] piece = open_file.readline() for _ in range(6): value = open_file.readline() board.append(value.strip("\n").split(",")) open_file.close() L1 = "".join(board[0]) L2 = "".join(board[1]) L3 = "".join(board[2]) L4 = "".join(board[3]) L5 = "".join(board[4]) L6 = "".join(board[5]) message = piece + " turn\n" +L1 + "\n" + L2 + "\n" + L3 + "\n" + L4 + "\n" + L5 + "\n" + L6 await ctx.send(message, view=view1) async def button4Clicked(interaction): x = place(ctx.author.name + "#", 0, 4) if x == "Invalid Move": await ctx.send("Invalid Move") else: open_file = open(ctx.author.name + "#", "r") board = [] piece = open_file.readline() for _ in range(6): value = open_file.readline() board.append(value.strip("\n").split(",")) open_file.close() L1 = "".join(board[0]) L2 = "".join(board[1]) L3 = "".join(board[2]) L4 = "".join(board[3]) L5 = "".join(board[4]) L6 = "".join(board[5]) message = piece + " turn\n" +L1 + "\n" + L2 + "\n" + L3 + "\n" + L4 + "\n" + L5 + "\n" + L6 await ctx.send(message, view=view1) async def button5Clicked(interaction): x = place(ctx.author.name + "#", 0, 5) if x == "Invalid Move": await ctx.send("Invalid Move") else: open_file = open(ctx.author.name + "#", "r") board = [] piece = open_file.readline() for _ in range(6): value = open_file.readline() board.append(value.strip("\n").split(",")) open_file.close() L1 = "".join(board[0]) L2 = "".join(board[1]) L3 = "".join(board[2]) L4 = "".join(board[3]) L5 = "".join(board[4]) L6 = "".join(board[5]) message = piece + " turn\n" +L1 + "\n" + L2 + "\n" + L3 + "\n" + L4 + "\n" + L5 + "\n" + L6 await ctx.send(message, view=view1) async def button6Clicked(interaction): x = place(ctx.author.name + "#", 0, 6) if x == "Invalid Move": await ctx.send("Invalid Move") else: open_file = open(ctx.author.name + "#", "r") board = [] piece = open_file.readline() for _ in range(6): value = open_file.readline() board.append(value.strip("\n").split(",")) open_file.close() L1 = "".join(board[0]) L2 = "".join(board[1]) L3 = "".join(board[2]) L4 = "".join(board[3]) L5 = "".join(board[4]) L6 = "".join(board[5]) message = piece + " turn\n" +L1 + "\n" + L2 + "\n" + L3 + "\n" + L4 + "\n" + L5 + "\n" + L6 await ctx.send(message, view=view1) async def button7Clicked(interaction): x = place(ctx.author.name + 
"#", 0, 7) if x == "Invalid Move": await ctx.send("Invalid Move") else: open_file = open(ctx.author.name + "#", "r") board = [] piece = open_file.readline() for _ in range(6): value = open_file.readline() board.append(value.strip("\n").split(",")) open_file.close() L1 = "".join(board[0]) L2 = "".join(board[1]) L3 = "".join(board[2]) L4 = "".join(board[3]) L5 = "".join(board[4]) L6 = "".join(board[5]) message = piece + " turn\n" +L1 + "\n" + L2 + "\n" + L3 + "\n" + L4 + "\n" + L5 + "\n" + L6 await ctx.send(message, view=view1) button1.callback = button1Clicked button2.callback = button2Clicked button3.callback = button3Clicked button4.callback = button4Clicked button5.callback = button5Clicked button6.callback = button6Clicked button7.callback = button7Clicked view1 = View() view1.add_item(button1) view1.add_item(button2) view1.add_item(button3) view1.add_item(button4) view1.add_item(button5) view1.add_item(button6) view1.add_item(button7) await ctx.send(piece + " turn\n" + L1 + "\n" + L2 + "\n" + L3 + "\n" + L4 + "\n" + L5 + "\n" + L6, view=view1) A: Search in the docs: I did it for you The docs is the most useful thing readily available to you. . . message = piece + " turn\n" +L1 + "\n" + L2 + "\n" + L3 + "\n" + L4 + "\n" + L5 + "\n" + L6 m = await ctx.send(message, view=view1) #btw this still sends the message . . m.edit(content=message, view=...)
Is there a way for my discord bot to change a message it sent prior, completing replacing that message with a new one?
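Building on the answer in the Q&A above, here is a hedged sketch of how one button callback could edit the board message in place instead of sending a new one: in discord.py 2.x, interaction.response.edit_message() updates the message the button is attached to and acknowledges the click, which also clears the "This interaction failed" notice. It assumes the callback is defined inside the Connect4 command like the originals (so ctx, place, and view1 are in scope), and render_board() is a hypothetical helper that rebuilds the board text; it is not part of the original code.

async def button1_clicked(interaction: discord.Interaction):
    result = place(ctx.author.name + '#', 0, 1)
    if result == 'Invalid Move':
        # Acknowledge the click without touching the board message.
        await interaction.response.send_message('Invalid Move', ephemeral=True)
        return
    # Editing the message the button belongs to refreshes the board in place
    # and acknowledges the interaction in one call.
    await interaction.response.edit_message(content=render_board(ctx.author.name + '#'), view=view1)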
I'm creating a connect 4 game in discord.py, using buttons to place pieces. Currently, it resends a new gameboard every time a piece is placed, with the buttons causing issues as they display "this interaction failed", even though its code ran fine. So is there a way to edit the message of the game board to prevent this issue? def place(name, Line, row): open_file = open(name, "r") board = [] piece = open_file.readline().strip("\n") for x in range(6): value = open_file.readline() board.append(value.strip("\n").split(",")) open_file.close() Place = True while Place and Line != 6: if board[Line][row - 1] == ":white_large_square:": Line += 1 else: Place = False if not board[0][row - 1] == piece: Line -= 1 board[Line][row - 1] = piece Line1 = ",".join(board[0]) Line2 = ",".join(board[1]) Line3 = ",".join(board[2]) Line4 = ",".join(board[3]) Line5 = ",".join(board[4]) Line6 = ",".join(board[5]) Lines = [Line1, Line2, Line3, Line4, Line5, Line6] phrase = "\n".join(Lines) new_piece = "" if piece == ":green_circle:": new_piece = ":red_circle:\n" elif piece == ":red_circle:": new_piece = ":green_circle:\n" open_file = open(name, "w") open_file.write(new_piece + phrase) open_file.close() return "Valid Move" else: return "Invalid Move" @bot.command( brief=" Begins your Connect 4 Game", description= " Displays the board and buttons, which will place your piece in the desired lane", Arguements="None") async def Connect4(ctx): open_file = open(ctx.author.name + "#", "w") open_file.write( ":green_circle:\n:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:\n:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:\n:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:\n:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:\n:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:\n:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:,:white_large_square:" ) open_file.close() open_file = open(ctx.author.name + "#", "r") board = [] piece = open_file.readline() for _ in range(6): value = open_file.readline() board.append(value.strip("\n").split(",")) open_file.close() L1 = "".join(board[0]) L2 = "".join(board[1]) L3 = "".join(board[2]) L4 = "".join(board[3]) L5 = "".join(board[4]) L6 = "".join(board[5]) button1 = Button(label="", emoji="1️⃣", style=discord.ButtonStyle.gray, row=0) button2 = Button(label="", emoji="2️⃣", style=discord.ButtonStyle.gray, row=0) button3 = Button(label="", emoji="3️⃣", style=discord.ButtonStyle.gray, row=0) button4 = Button(label="", emoji="4️⃣", style=discord.ButtonStyle.gray, row=1) button5 = Button(label="", emoji="5️⃣", style=discord.ButtonStyle.gray, row=1) button6 = Button(label="", emoji="6️⃣", style=discord.ButtonStyle.gray, row=1) button7 = Button(label="", emoji="7️⃣", style=discord.ButtonStyle.gray, row=1) async def button1Clicked(interaction): x = place(ctx.author.name + "#", 0, 1) if x == "Invalid Move": await ctx.send("Invalid Move") else: open_file = open(ctx.author.name + "#", "r") board = [] piece = open_file.readline() for _ in range(6): 
value = open_file.readline() board.append(value.strip("\n").split(",")) open_file.close() L1 = "".join(board[0]) L2 = "".join(board[1]) L3 = "".join(board[2]) L4 = "".join(board[3]) L5 = "".join(board[4]) L6 = "".join(board[5]) message = piece + " turn\n" +L1 + "\n" + L2 + "\n" + L3 + "\n" + L4 + "\n" + L5 + "\n" + L6 await ctx.send(message, view=view1) async def button2Clicked(interaction): x = place(ctx.author.name + "#", 0, 2) if x == "Invalid Move": await ctx.send("Invalid Move") else: open_file = open(ctx.author.name + "#", "r") board = [] piece = open_file.readline() for _ in range(6): value = open_file.readline() board.append(value.strip("\n").split(",")) open_file.close() L1 = "".join(board[0]) L2 = "".join(board[1]) L3 = "".join(board[2]) L4 = "".join(board[3]) L5 = "".join(board[4]) L6 = "".join(board[5]) message = piece + " turn\n" +L1 + "\n" + L2 + "\n" + L3 + "\n" + L4 + "\n" + L5 + "\n" + L6 await ctx.send(message, view=view1) async def button3Clicked(interaction): x = place(ctx.author.name + "#", 0, 3) if x == "Invalid Move": await ctx.send("Invalid Move") else: open_file = open(ctx.author.name + "#", "r") board = [] piece = open_file.readline() for _ in range(6): value = open_file.readline() board.append(value.strip("\n").split(",")) open_file.close() L1 = "".join(board[0]) L2 = "".join(board[1]) L3 = "".join(board[2]) L4 = "".join(board[3]) L5 = "".join(board[4]) L6 = "".join(board[5]) message = piece + " turn\n" +L1 + "\n" + L2 + "\n" + L3 + "\n" + L4 + "\n" + L5 + "\n" + L6 await ctx.send(message, view=view1) async def button4Clicked(interaction): x = place(ctx.author.name + "#", 0, 4) if x == "Invalid Move": await ctx.send("Invalid Move") else: open_file = open(ctx.author.name + "#", "r") board = [] piece = open_file.readline() for _ in range(6): value = open_file.readline() board.append(value.strip("\n").split(",")) open_file.close() L1 = "".join(board[0]) L2 = "".join(board[1]) L3 = "".join(board[2]) L4 = "".join(board[3]) L5 = "".join(board[4]) L6 = "".join(board[5]) message = piece + " turn\n" +L1 + "\n" + L2 + "\n" + L3 + "\n" + L4 + "\n" + L5 + "\n" + L6 await ctx.send(message, view=view1) async def button5Clicked(interaction): x = place(ctx.author.name + "#", 0, 5) if x == "Invalid Move": await ctx.send("Invalid Move") else: open_file = open(ctx.author.name + "#", "r") board = [] piece = open_file.readline() for _ in range(6): value = open_file.readline() board.append(value.strip("\n").split(",")) open_file.close() L1 = "".join(board[0]) L2 = "".join(board[1]) L3 = "".join(board[2]) L4 = "".join(board[3]) L5 = "".join(board[4]) L6 = "".join(board[5]) message = piece + " turn\n" +L1 + "\n" + L2 + "\n" + L3 + "\n" + L4 + "\n" + L5 + "\n" + L6 await ctx.send(message, view=view1) async def button6Clicked(interaction): x = place(ctx.author.name + "#", 0, 6) if x == "Invalid Move": await ctx.send("Invalid Move") else: open_file = open(ctx.author.name + "#", "r") board = [] piece = open_file.readline() for _ in range(6): value = open_file.readline() board.append(value.strip("\n").split(",")) open_file.close() L1 = "".join(board[0]) L2 = "".join(board[1]) L3 = "".join(board[2]) L4 = "".join(board[3]) L5 = "".join(board[4]) L6 = "".join(board[5]) message = piece + " turn\n" +L1 + "\n" + L2 + "\n" + L3 + "\n" + L4 + "\n" + L5 + "\n" + L6 await ctx.send(message, view=view1) async def button7Clicked(interaction): x = place(ctx.author.name + "#", 0, 7) if x == "Invalid Move": await ctx.send("Invalid Move") else: open_file = open(ctx.author.name + "#", "r") board = [] piece = 
open_file.readline() for _ in range(6): value = open_file.readline() board.append(value.strip("\n").split(",")) open_file.close() L1 = "".join(board[0]) L2 = "".join(board[1]) L3 = "".join(board[2]) L4 = "".join(board[3]) L5 = "".join(board[4]) L6 = "".join(board[5]) message = piece + " turn\n" +L1 + "\n" + L2 + "\n" + L3 + "\n" + L4 + "\n" + L5 + "\n" + L6 await ctx.send(message, view=view1) button1.callback = button1Clicked button2.callback = button2Clicked button3.callback = button3Clicked button4.callback = button4Clicked button5.callback = button5Clicked button6.callback = button6Clicked button7.callback = button7Clicked view1 = View() view1.add_item(button1) view1.add_item(button2) view1.add_item(button3) view1.add_item(button4) view1.add_item(button5) view1.add_item(button6) view1.add_item(button7) await ctx.send(piece + " turn\n" + L1 + "\n" + L2 + "\n" + L3 + "\n" + L4 + "\n" + L5 + "\n" + L6, view=view1)
[ "Search in the docs: I did it for you.\nThe docs are the most useful thing readily available to you.\n.\n.\nmessage = piece + \" turn\\n\" + L1 + \"\\n\" + L2 + \"\\n\" + L3 + \"\\n\" + L4 + \"\\n\" + L5 + \"\\n\" + L6\nm = await ctx.send(message, view=view1)  # btw this still sends the message\n.\n.\nawait m.edit(content=message, view=...)  # Message.edit is a coroutine, so it must be awaited\n\n" ]
[ 0 ]
[]
[]
[ "button", "discord", "discord.py", "python" ]
stackoverflow_0074624777_button_discord_discord.py_python.txt
Q: Is requirements.txt still needed when using pyproject.toml? Since mid 2022 it is now possible to get rid of setup.py, setup.cfg in favor of pyproject.toml. Editable installs work with recent versions of setuptools and pip and even the official packaging tutorial switched away from setup.py to pyproject.toml. However, documentation regarding requirements.txt seems to be have been also removed, and I wonder where to put the pinned requirements now? As a refresher: It used to be common practice to put the dependencies (without version pinning) in setup.py avoiding issues when this package gets installed with other packages needing the same dependencies but with conflicting version requirements. For packaging libraries a setup.py was usually sufficient. For deployments (i.e. non libraries) you usually also provided a requirements.txt with version-pinned dependencies. So you don't accidentally get the latest and greatest but the exact versions of dependencies that that package has been tested with. So my question is, did anything change? Do you still put the pinned requirements in the requirements.txt when used together with pyproject.toml? Or is there an extra section for that in pyproject.toml? Is there some documentation on that somewhere? A: This is the pip documentation for pyproject.toml ...This file contains build system requirements and information, which are used by pip to build the package. So this is not the correct place. Looking at the side bar we can see there is an entry for Requirements File Format which is the "old" requirements.txt file A: Quoting myself from here My current assumption is: [...] you put your (mostly unpinned) dependencies to pyproject.toml instead of setup.py, so you library can be installed as a dependency of something else without causing much troubles because of issues resolving version constraints. On top of that, for "deployable applications" (for lack of a better term), you still want to maintain a separate requirements.txt with exact version pinning. Which has been confirmed by a Python Packaging Authority (PyPA) member and clarification of PyPA's recommendations should be updated accordingly at some point.
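As a concrete, purely illustrative split of the two files described in the answers above: the loose constraints live under [project].dependencies in pyproject.toml, while the exact pins stay in requirements.txt (for example generated with pip freeze or pip-compile). The package names and versions are made up.

pyproject.toml (the library's own, mostly unpinned requirements):

[project]
name = "example-pkg"
version = "0.1.0"
dependencies = [
    "requests>=2.28",
    "click>=8",
]

requirements.txt (exact pins for a reproducible deployment):

requests==2.28.1
click==8.1.3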
Is requirements.txt still needed when using pyproject.toml?
Since mid 2022 it has been possible to get rid of setup.py and setup.cfg in favor of pyproject.toml. Editable installs work with recent versions of setuptools and pip, and even the official packaging tutorial switched away from setup.py to pyproject.toml. However, documentation regarding requirements.txt seems to have been removed as well, and I wonder where to put the pinned requirements now. As a refresher: it used to be common practice to put the dependencies (without version pinning) in setup.py, avoiding issues when the package gets installed alongside other packages that need the same dependencies but with conflicting version requirements. For packaging libraries, a setup.py was usually sufficient. For deployments (i.e., non-libraries) you usually also provided a requirements.txt with version-pinned dependencies, so you don't accidentally get the latest and greatest but the exact versions of dependencies that the package has been tested with. So my question is: did anything change? Do you still put the pinned requirements in requirements.txt when used together with pyproject.toml? Or is there an extra section for that in pyproject.toml? Is there some documentation on that somewhere?
[ "This is the pip documentation for pyproject.toml\n\n...This file contains build system requirements and information, which are used by pip to build the package.\n\nSo this is not the correct place. Looking at the side bar we can see there is an entry for Requirements File Format which is the \"old\" requirements.txt file\n", "Quoting myself from here\n\nMy current assumption is: [...] you put your (mostly unpinned) dependencies to pyproject.toml instead of setup.py, so you library can be installed as a dependency of something else without causing much troubles because of issues resolving version constraints.\n\n\nOn top of that, for \"deployable applications\" (for lack of a better term), you still want to maintain a separate requirements.txt with exact version pinning.\n\nWhich has been confirmed by a Python Packaging Authority (PyPA) member and clarification of PyPA's recommendations should be updated accordingly at some point.\n" ]
[ 1, 0 ]
[ "I suggest switching to poetry, it's way better than a standard pip for dependency management. And because it uses pyproject.toml your dependencies and configs are in one place so it's easier to manage everything \n" ]
[ -1 ]
[ "python", "python_packaging", "requirements.txt" ]
stackoverflow_0074508024_python_python_packaging_requirements.txt.txt
Q: run a robot file and get its log with python and return log back to that robot file Run a robot file and get its log with Python, and return the log back to that robot file. I tried subprocess.run(), but I don't think it captures the log. Python keyword file: ` @keyword("") def cal(): p1 = subprocess.run("robot command to run a test case") return p1.stdout ` A: You should also use PIPE for reading the output; from subprocess import Popen, PIPE, STDOUT command = f"shell command with arguments" process = Popen(command, shell=True, stdout=PIPE, stderr=STDOUT) with process.stdout: for line in iter(process.stdout.readline, b''): print(line.decode("utf-8").strip()) for more details: Run subprocess and print output to logging
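A minimal sketch of a keyword library along the lines discussed above, using subprocess.run with capture_output so the nested run's console log comes back as the keyword's return value; the keyword name, suite path argument, and output directory are placeholders.

import subprocess
from robot.api.deco import keyword

@keyword('Run Robot Suite And Get Log')
def run_robot_suite_and_get_log(suite_path):
    result = subprocess.run(
        ['robot', '--outputdir', 'nested_results', suite_path],  # 'nested_results' is a placeholder directory
        capture_output=True,
        text=True,
    )
    # result.stdout holds the console output of the nested robot run;
    # the HTML log itself is written to nested_results/log.html.
    return result.stdout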
run a robot file and get its log with python and return log back to that robot file
Run a robot file and get its log with Python, and return the log back to that robot file. I tried subprocess.run(), but I don't think it captures the log. Python keyword file: ` @keyword("") def cal(): p1 = subprocess.run("robot command to run a test case") return p1.stdout `
[ "You should also use PIPE for reading the output;\nfrom subprocess import Popen, PIPE, STDOUT\n\ncommand = f\"shell command with arguments\"\nprocess = Popen(command, shell=True, stdout=PIPE, stderr=STDOUT)\n\nwith process.stdout:\n for line in iter(process.stdout.readline, b''):\n print(line.decode(\"utf-8\").strip())\n\nfor more details: Run subprocess and print output to logging\n" ]
[ 0 ]
[]
[]
[ "keyword", "loops", "python", "robotframework", "time" ]
stackoverflow_0074624284_keyword_loops_python_robotframework_time.txt
Q: How to check if there is a line segment between two given points? I made a model that predicts electrical symbols and junctions: image of model inference. Given the xywh coordinates of each junctions' bounding box in a form of a dataframe: image of the dataframe, how would I make an output that stores the location of all the wires in a .txt file in a form of: (xstart,ystart), (xend,yend). I'm stuck at writing a way to check if there is a valid line (wire) between any two given junctions. data = df.loc[df['name'] == 'junction'] # iterates through all of the junctions for index, row in data.iterrows(): for index2, row2 in data.iterrows(): check_if_wire_is_valid() My attempt was to erase all electrical symbols (make everything in bounding boxes white except for junctions) from the inference image and run cv.HoughLinesP to find wires. How can I write a function that checks if the cv.HoughLinesP output lies between two junctions? Note that the minimum distance that lies between two junctions should be greater than 1px because if I have a parallel circuit like such: top left and bottom right junction would "detect" more than 1px of line between them and misinterpret that as a valid line. EDIT: minAreaRect on contours . I've drawn this circuit with no elements for simplification and testing. This is the resulting minAreaRect found for the given contours. I can't seem to find a way to properly validate lines from this. My initial solution was to compare any two junctions and if they are relatively close on the x-axis, then I would say that those two junctions form a vertical wire, and if other two junctions were close on the y-axis I would conclude they form a horizontal wire. junction distance to axis. Now, this would create a problem if I had a diagonal line. I'm trying find a solution that is consistent and applicable to every circuit. I believe I'm onto something with HoughLinesP method or contours but that's as far as my knowledge can assist me. The main goal is to create an LTSpice readable circuit for simulating purposes. Should I change my method of finding valid lines? If so, what is your take on the problem? A: This should be doable using findContours(). A wire is always a (roughly) straigt line, right ? Paint the classified boxes white, as you said threshold() to get a binary image with the wires (and other symbols and letters) in white, everything else black. run findContours() on that to extract objects. Get the bounding boxes (minAreaRect) for all contours discard all contours with a too wide side ratio, those are letter or symbols, keep only those slim enough to be a wire Now you got all wires as objects, similiar to the junction list. Now, for how to merge those two... Some options that come to mind: Grow the boxes by a certain amount, and check if they overlap. Interpolate a line from the wire boxes and check if they cross any intersection box close by. Or the other way around: draw a line between intersections and check how much of it goes through a certain wire box. This is a pure math problem, and i don't know what you performance requirements are. So i'll leave it at that.
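A rough, untested sketch of the contour-plus-minAreaRect filtering suggested in the answer above (OpenCV 4's two-value findContours is assumed); the threshold value and the aspect-ratio cut-off are guesses that would need tuning, and the input is assumed to be the inference image with the symbol boxes already painted white.

import cv2

img = cv2.imread('circuit.png', cv2.IMREAD_GRAYSCALE)  # placeholder file name
# Dark wires on a light background, so invert to make the wires white.
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

wires = []
for cnt in contours:
    (cx, cy), (w, h), angle = cv2.minAreaRect(cnt)
    long_side, short_side = max(w, h), max(min(w, h), 1)
    if long_side / short_side > 5:  # slim boxes are kept as wire candidates; 5 is a guess
        wires.append(((cx, cy), (w, h), angle))

print(len(wires), 'wire candidates found')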
How to check if there is a line segment between two given points?
I made a model that predicts electrical symbols and junctions: image of model inference. Given the xywh coordinates of each junctions' bounding box in a form of a dataframe: image of the dataframe, how would I make an output that stores the location of all the wires in a .txt file in a form of: (xstart,ystart), (xend,yend). I'm stuck at writing a way to check if there is a valid line (wire) between any two given junctions. data = df.loc[df['name'] == 'junction'] # iterates through all of the junctions for index, row in data.iterrows(): for index2, row2 in data.iterrows(): check_if_wire_is_valid() My attempt was to erase all electrical symbols (make everything in bounding boxes white except for junctions) from the inference image and run cv.HoughLinesP to find wires. How can I write a function that checks if the cv.HoughLinesP output lies between two junctions? Note that the minimum distance that lies between two junctions should be greater than 1px because if I have a parallel circuit like such: top left and bottom right junction would "detect" more than 1px of line between them and misinterpret that as a valid line. EDIT: minAreaRect on contours . I've drawn this circuit with no elements for simplification and testing. This is the resulting minAreaRect found for the given contours. I can't seem to find a way to properly validate lines from this. My initial solution was to compare any two junctions and if they are relatively close on the x-axis, then I would say that those two junctions form a vertical wire, and if other two junctions were close on the y-axis I would conclude they form a horizontal wire. junction distance to axis. Now, this would create a problem if I had a diagonal line. I'm trying find a solution that is consistent and applicable to every circuit. I believe I'm onto something with HoughLinesP method or contours but that's as far as my knowledge can assist me. The main goal is to create an LTSpice readable circuit for simulating purposes. Should I change my method of finding valid lines? If so, what is your take on the problem?
[ "This should be doable using findContours(). A wire is always a (roughly) straight line, right?\n\nPaint the classified boxes white, as you said.\nthreshold() to get a binary image with the wires (and other symbols and letters) in white, everything else black.\nRun findContours() on that to extract objects.\nGet the bounding boxes (minAreaRect) for all contours.\nDiscard all contours with a too-wide side ratio; those are letters or symbols. Keep only those slim enough to be a wire.\n\nNow you have all wires as objects, similar to the junction list. Now, for how to merge those two... Some options that come to mind:\n\nGrow the boxes by a certain amount, and check if they overlap.\nInterpolate a line from the wire boxes and check if they cross any intersection box close by.\nOr the other way around: draw a line between intersections and check how much of it goes through a certain wire box.\n\nThis is a pure math problem, and I don't know what your performance requirements are. So I'll leave it at that.\n" ]
[ 0 ]
[]
[]
[ "deep_learning", "image_processing", "object_detection", "opencv", "python" ]
stackoverflow_0074620007_deep_learning_image_processing_object_detection_opencv_python.txt
Q: request.session.get('user') returning AnonymousUser when there is a user logged in I am somewhat new at this so apologies in advance. I have run into a problem where I am using Auth0 with a custom db. I have created a user Profile model with a one-to-one relationship with User I am trying to allow the user to update their profile using a modelform Where I am getting stuck is that I am trying to save() the form to a profile instance which is related to the user_id. However, I can't seem to find a way to get the user_id and when I use request.user.id I keep getting back AnonymousUser. - I assume this is because it's looking at the admin backend to find the user but there is none, the user is only in the session. I have tried using request.session.get('user') and when I do I get back the auth0 user(which include all the userinfo), but there is no user_id which is associated with the user in the db. So my issue is: How do I get a user_id which relates to the user that is logged in on the session but is in the db so I can store the data against it? models.py from django.db import models from django.contrib.auth.models import User from django.db.models.signals import post_save class Profile(models.Model): user = models.OneToOneField( User, null=True, on_delete=models.CASCADE, ) name = models.CharField(max_length=255, blank=True) bio = models.TextField(max_length=255, blank=True) platforms = models.CharField(max_length=255, blank=True) profile_url = models.URLField(max_length=255, blank=True) def __str__(self): return self.name forms.py from django import forms from django.forms import ModelForm from requests import request from .models import Profile from django.contrib.auth.models import User class ProfileForm(ModelForm): user = forms.CharField() name = forms.CharField() bio = forms.CharField(widget=forms.TextInput) choices = [('1', 'Option1'), ('2', 'Option2'), ('3', 'Option3'), ('4', 'Option4'), ('5', 'Option5')] platforms = forms.ChoiceField(widget=forms.Select, choices=choices) profile_url = forms.URLField() class Meta: model = Profile fields = ['user','name','bio','platforms','profile_url'] views.py def updateProfile(request): form = forms.ProfileForm() profile = request.session.get('user') if request.method == 'POST': try: form = forms.ProfileForm(request.POST, instance=profile) except: form = forms.ProfileForm(request.POST) if form.is_valid(): form.save() return render( request, "update_profile.html", context={ "form":form, "session": request.session.get("user"), "pretty": json.dumps(request.session.get("user"), indent=4), }, ) else: print(form.errors.as_data()) return render( request, "update_profile.html", context={ "form":form, "session": request.session.get("user"), "pretty": json.dumps(request.session.get("user"), indent=4), }, ) update_profile.html ...form... 
<label for="brand" class="">Input your profile URL</label> **{% render_field form.user type="number" name="user" id="user" value=request.user.id %} ** </div> </div> <div class=""> <button type="submit" class=""> Update Profile </button> urls.py urlpatterns = [ path("update-profile", views.updateProfile, name="update-profile") ] settings.py MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'whitenoise.middleware.WhiteNoiseMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.auth.middleware.RemoteUserMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] A: There is a django.contrib.auth.models.User object attached to the request. You can access it in a view via request.user. You must have the auth middleware installed, though. def view(request): if request.user.is_authenticated: user = request.user print(user) # do something with user
request.session.get('user') returning AnonymousUser when there is a user logged in
I am somewhat new at this so apologies in advance. I have run into a problem where I am using Auth0 with a custom db. I have created a user Profile model with a one-to-one relationship with User I am trying to allow the user to update their profile using a modelform Where I am getting stuck is that I am trying to save() the form to a profile instance which is related to the user_id. However, I can't seem to find a way to get the user_id and when I use request.user.id I keep getting back AnonymousUser. - I assume this is because it's looking at the admin backend to find the user but there is none, the user is only in the session. I have tried using request.session.get('user') and when I do I get back the auth0 user(which include all the userinfo), but there is no user_id which is associated with the user in the db. So my issue is: How do I get a user_id which relates to the user that is logged in on the session but is in the db so I can store the data against it? models.py from django.db import models from django.contrib.auth.models import User from django.db.models.signals import post_save class Profile(models.Model): user = models.OneToOneField( User, null=True, on_delete=models.CASCADE, ) name = models.CharField(max_length=255, blank=True) bio = models.TextField(max_length=255, blank=True) platforms = models.CharField(max_length=255, blank=True) profile_url = models.URLField(max_length=255, blank=True) def __str__(self): return self.name forms.py from django import forms from django.forms import ModelForm from requests import request from .models import Profile from django.contrib.auth.models import User class ProfileForm(ModelForm): user = forms.CharField() name = forms.CharField() bio = forms.CharField(widget=forms.TextInput) choices = [('1', 'Option1'), ('2', 'Option2'), ('3', 'Option3'), ('4', 'Option4'), ('5', 'Option5')] platforms = forms.ChoiceField(widget=forms.Select, choices=choices) profile_url = forms.URLField() class Meta: model = Profile fields = ['user','name','bio','platforms','profile_url'] views.py def updateProfile(request): form = forms.ProfileForm() profile = request.session.get('user') if request.method == 'POST': try: form = forms.ProfileForm(request.POST, instance=profile) except: form = forms.ProfileForm(request.POST) if form.is_valid(): form.save() return render( request, "update_profile.html", context={ "form":form, "session": request.session.get("user"), "pretty": json.dumps(request.session.get("user"), indent=4), }, ) else: print(form.errors.as_data()) return render( request, "update_profile.html", context={ "form":form, "session": request.session.get("user"), "pretty": json.dumps(request.session.get("user"), indent=4), }, ) update_profile.html ...form... 
<label for="brand" class="">Input your profile URL</label> **{% render_field form.user type="number" name="user" id="user" value=request.user.id %} ** </div> </div> <div class=""> <button type="submit" class=""> Update Profile </button> urls.py urlpatterns = [ path("update-profile", views.updateProfile, name="update-profile") ] settings.py MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'whitenoise.middleware.WhiteNoiseMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.auth.middleware.RemoteUserMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ]
[ "There is a django.contrib.auth.models.User object attached to the request. You can access it in a view via request.user. You must have the auth middleware installed, though.\ndef view(request):\n if request.user.is_authenticated:\n user = request.user\n print(user)\n # do something with user\n\n" ]
[ 0 ]
[]
[]
[ "auth0", "django_forms", "django_models", "django_views", "python" ]
stackoverflow_0074612251_auth0_django_forms_django_models_django_views_python.txt
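A minimal sketch of how the answer above could be applied to the original updateProfile view: resolve (or create) a database User from the Auth0 session's userinfo, then attach it to the Profile before saving. The session shape, the helper names, and the assumption that the 'user' field is removed from ProfileForm.Meta.fields are taken from the question or invented for illustration, not from the actual project.

# views.py - hypothetical sketch, not the project's actual code
from django.contrib.auth.models import User
from django.shortcuts import render

from . import forms
from .models import Profile


def update_profile(request):
    # The Auth0 quickstart keeps the token payload in the session; the exact
    # shape ("userinfo" -> "email") is an assumption based on the question.
    session_user = request.session.get("user") or {}
    email = session_user.get("userinfo", {}).get("email")

    # Map the Auth0 identity onto a django.contrib.auth User row so that
    # Profile.user has a real database user to point at.
    user, _ = User.objects.get_or_create(username=email, defaults={"email": email})
    profile = Profile.objects.filter(user=user).first()

    # Assumes 'user' has been dropped from ProfileForm.Meta.fields,
    # since it is filled in here rather than typed by the visitor.
    form = forms.ProfileForm(request.POST or None, instance=profile)
    if request.method == "POST" and form.is_valid():
        instance = form.save(commit=False)
        instance.user = user  # attach the db user, not the session dict
        instance.save()

    return render(request, "update_profile.html", {"form": form})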
Q: Pandas: group by and Pivot table difference I just started learning Pandas and was wondering if there is any difference between groupby() and pivot_table() functions. Can anyone help me understand the difference between them. A: Both pivot_table and groupby are used to aggregate your dataframe. The difference is only with regard to the shape of the result. Using pd.pivot_table(df, index=["a"], columns=["b"], values=["c"], aggfunc=np.sum) a table is created where a is on the row axis, b is on the column axis, and the values are the sum of c. Example: df = pd.DataFrame({"a": [1,2,3,1,2,3], "b":[1,1,1,2,2,2], "c":np.random.rand(6)}) pd.pivot_table(df, index=["a"], columns=["b"], values=["c"], aggfunc=np.sum) b 1 2 a 1 0.528470 0.484766 2 0.187277 0.144326 3 0.866832 0.650100 Using groupby, the dimensions given are placed into columns, and rows are created for each combination of those dimensions. In this example, we create a series of the sum of values c, grouped by all unique combinations of a and b. df.groupby(['a','b'])['c'].sum() a b 1 1 0.528470 2 0.484766 2 1 0.187277 2 0.144326 3 1 0.866832 2 0.650100 Name: c, dtype: float64 A similar usage of groupby is if we omit the ['c']. In this case, it creates a dataframe (not a series) of the sums of all remaining columns grouped by unique values of a and b. print df.groupby(["a","b"]).sum() c a b 1 1 0.528470 2 0.484766 2 1 0.187277 2 0.144326 3 1 0.866832 2 0.650100 A: pivot_table = groupby + unstack and groupby = pivot_table + stack hold True. In particular, if columns parameter of pivot_table() is not used, then groupby() and pivot_table() both produce the same result (if the same aggregator function is used). # sample df = pd.DataFrame({"a": [1,1,1,2,2,2], "b": [1,1,2,2,3,3], "c": [0,0.5,1,1,2,2]}) # example gb = df.groupby(['a','b'])[['c']].sum() pt = df.pivot_table(index=['a','b'], values=['c'], aggfunc='sum') # equality test gb.equals(pt) #True In general, if we check the source code, pivot_table() internally calls __internal_pivot_table(). This function creates a single flat list out of index and columns and calls groupby() with this list as the grouper. Then after aggregation, calls unstack() on the list of columns. If columns are never passed, there is nothing to unstack on, so groupby and pivot_table trivially produce the same output. A demonstration of this function is: gb = ( df .groupby(['a','b'])[['c']].sum() .unstack(['b']) ) pt = df.pivot_table(index=['a'], columns=['b'], values=['c'], aggfunc='sum') gb.equals(pt) # True As stack() is the inverse operation of unstack(), the following holds True as well: ( df .pivot_table(index=['a'], columns=['b'], values=['c'], aggfunc='sum') .stack(['b']) .equals( df.groupby(['a','b'])[['c']].sum() ) ) # True In conclusion, depending on the use case, one is more convenient than the other but they can both be used instead of the other and after correctly applying stack()/unstack(), both will result in the same output. However, there's a performance difference between the two methods. In short, pivot_table() is slower than groupby().agg().unstack(). You can read more about it from this answer. A: It's more appropriate to use .pivot_table() instead of .groupby() when you need to show aggregates with both rows and column labels. .pivot_table() makes it easy to create row and column labels at the same time and is preferable, even though you can get similar results using .groupby() with few extra steps. A: Difference between pivot_table and groupby pivot_table gropby
Pandas: group by and Pivot table difference
I just started learning Pandas and was wondering if there is any difference between groupby() and pivot_table() functions. Can anyone help me understand the difference between them.
[ "Both pivot_table and groupby are used to aggregate your dataframe. The difference is only with regard to the shape of the result.\nUsing pd.pivot_table(df, index=[\"a\"], columns=[\"b\"], values=[\"c\"], aggfunc=np.sum) a table is created where a is on the row axis, b is on the column axis, and the values are the sum of c. \nExample:\ndf = pd.DataFrame({\"a\": [1,2,3,1,2,3], \"b\":[1,1,1,2,2,2], \"c\":np.random.rand(6)})\npd.pivot_table(df, index=[\"a\"], columns=[\"b\"], values=[\"c\"], aggfunc=np.sum)\n\nb 1 2\na \n1 0.528470 0.484766\n2 0.187277 0.144326\n3 0.866832 0.650100\n\nUsing groupby, the dimensions given are placed into columns, and rows are created for each combination of those dimensions.\nIn this example, we create a series of the sum of values c, grouped by all unique combinations of a and b.\ndf.groupby(['a','b'])['c'].sum()\n\na b\n1 1 0.528470\n 2 0.484766\n2 1 0.187277\n 2 0.144326\n3 1 0.866832\n 2 0.650100\nName: c, dtype: float64\n\nA similar usage of groupby is if we omit the ['c']. In this case, it creates a dataframe (not a series) of the sums of all remaining columns grouped by unique values of a and b.\nprint df.groupby([\"a\",\"b\"]).sum()\n c\na b \n1 1 0.528470\n 2 0.484766\n2 1 0.187277\n 2 0.144326\n3 1 0.866832\n 2 0.650100\n\n", "pivot_table = groupby + unstack and groupby = pivot_table + stack hold True.\nIn particular, if columns parameter of pivot_table() is not used, then groupby() and pivot_table() both produce the same result (if the same aggregator function is used).\n# sample\ndf = pd.DataFrame({\"a\": [1,1,1,2,2,2], \"b\": [1,1,2,2,3,3], \"c\": [0,0.5,1,1,2,2]})\n\n# example\ngb = df.groupby(['a','b'])[['c']].sum()\npt = df.pivot_table(index=['a','b'], values=['c'], aggfunc='sum')\n\n# equality test\ngb.equals(pt) #True\n\n\nIn general, if we check the source code, pivot_table() internally calls __internal_pivot_table(). This function creates a single flat list out of index and columns and calls groupby() with this list as the grouper. Then after aggregation, calls unstack() on the list of columns.\nIf columns are never passed, there is nothing to unstack on, so groupby and pivot_table trivially produce the same output.\nA demonstration of this function is:\ngb = (\n df\n .groupby(['a','b'])[['c']].sum()\n .unstack(['b'])\n)\npt = df.pivot_table(index=['a'], columns=['b'], values=['c'], aggfunc='sum')\n\ngb.equals(pt) # True\n\nAs stack() is the inverse operation of unstack(), the following holds True as well:\n(\n df\n .pivot_table(index=['a'], columns=['b'], values=['c'], aggfunc='sum')\n .stack(['b'])\n .equals(\n df.groupby(['a','b'])[['c']].sum()\n )\n) # True\n\nIn conclusion, depending on the use case, one is more convenient than the other but they can both be used instead of the other and after correctly applying stack()/unstack(), both will result in the same output.\nHowever, there's a performance difference between the two methods. In short, pivot_table() is slower than groupby().agg().unstack(). You can read more about it from this answer.\n", "It's more appropriate to use .pivot_table() instead of .groupby() when you need to show aggregates with both rows and column labels.\n.pivot_table() makes it easy to create row and column labels at the same time and is preferable, even though you can get similar results using .groupby() with few extra steps.\n", "Difference between pivot_table and groupby\npivot_table\n\ngropby\n\n" ]
[ 127, 15, 12, 0 ]
[]
[]
[ "dataframe", "pandas", "pandas_groupby", "pivot_table", "python" ]
stackoverflow_0034702815_dataframe_pandas_pandas_groupby_pivot_table_python.txt
Q: How to display code line number (without additional context) in Icecream print outputs? I am currently using Icecream (https://github.com/gruns/icecream) to print variables and other info for debugging and review purposes. I would like to be able to display the line number where the print call originated from, without including additional information. I don't need to be using Icecream if there is a better option. The code below can produce the following output. 1 - from icecream import ic 2 - test = 'hello' 3 - ic(test) ic| test: 'hello' This is great, and I can adjust the prefix how I like, but I want to also be able to include the line number that generated the output. Icecream has a function that can do this, but it also outputs a bunch of other information that I am not interested in (see below). 1 - from icecream import ic 2 - ic.configureOutput(prefix=f'Debug | ', includeContext=True) 3 - test = 'hello' 4 - ic(test) Debug | test.py:4 in <module>- test: 'hello' There doesn't seem to be a native way to just show the line number (4 in the above example) without the rest of the information (filename and parent function). What I can do is include some code in the prefix editor to get a line number, but this just gives me with the line number of the ic.configureOutput() function, which I suppose makes sense, because this is the function that made the line number request. 1 - from icecream import ic 2 - import sys 3 - def line_number(): 4 - return sys._getframe().f_back.f_lineno 5 - ic.configureOutput(prefix=f'Debug:{line_number()} | ') 6 - test = 'hello' 7 - ic(test) Debug:5 | test: 'hello' Is there a way (or some other method entirely) to get the above output, but have the line number (line 5 in the above example) be the actual line number in the script that originated the call (this would be 7 in the above example)? A: I think you can use "inspect" library for this purpose; from inspect import currentframe def get_line(); return currentframe().f_back_f_lineno print("this is sample:", get_line())
How to display code line number (without additional context) in Icecream print outputs?
I am currently using Icecream (https://github.com/gruns/icecream) to print variables and other info for debugging and review purposes. I would like to be able to display the line number where the print call originated from, without including additional information. I don't need to be using Icecream if there is a better option. The code below can produce the following output. 1 - from icecream import ic 2 - test = 'hello' 3 - ic(test) ic| test: 'hello' This is great, and I can adjust the prefix how I like, but I want to also be able to include the line number that generated the output. Icecream has a function that can do this, but it also outputs a bunch of other information that I am not interested in (see below). 1 - from icecream import ic 2 - ic.configureOutput(prefix=f'Debug | ', includeContext=True) 3 - test = 'hello' 4 - ic(test) Debug | test.py:4 in <module>- test: 'hello' There doesn't seem to be a native way to just show the line number (4 in the above example) without the rest of the information (filename and parent function). What I can do is include some code in the prefix editor to get a line number, but this just gives me with the line number of the ic.configureOutput() function, which I suppose makes sense, because this is the function that made the line number request. 1 - from icecream import ic 2 - import sys 3 - def line_number(): 4 - return sys._getframe().f_back.f_lineno 5 - ic.configureOutput(prefix=f'Debug:{line_number()} | ') 6 - test = 'hello' 7 - ic(test) Debug:5 | test: 'hello' Is there a way (or some other method entirely) to get the above output, but have the line number (line 5 in the above example) be the actual line number in the script that originated the call (this would be 7 in the above example)?
[ "I think you can use \"inspect\" library for this purpose;\nfrom inspect import currentframe\n\ndef get_line();\n return currentframe().f_back_f_lineno\n\nprint(\"this is sample:\", get_line())\n\n" ]
[ 0 ]
[]
[]
[ "debugging", "icecream", "python" ]
stackoverflow_0074624205_debugging_icecream_python.txt
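As a possible way to get the output the question asks for, icecream's prefix can be a callable, and that callable can look up the line number of the frame that invoked ic(). The frame-walking below (skipping frames whose module name starts with 'icecream') is an assumption about icecream's internals, so treat this as a sketch that may need adjustment across versions.

import inspect

from icecream import ic


def debug_prefix():
    # Walk outward past icecream's own frames; the first remaining frame
    # should be the one that called ic(). This detail is an assumption.
    frame = inspect.currentframe().f_back
    while frame and frame.f_globals.get("__name__", "").startswith("icecream"):
        frame = frame.f_back
    line = frame.f_lineno if frame else "?"
    return f"Debug:{line} | "


ic.configureOutput(prefix=debug_prefix)  # prefix may be a function, not just a string

test = "hello"
ic(test)  # prints something like: Debug:18 | test: 'hello' (number depends on file layout)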
Q: To find the transpose of a given matrix I have been trying to run the code but its giving error that - "list index out of range" What is the reason? And is there any other way to find the transpose of a matrix without using numpy This is the code I wrote n = int(input("Enter the size of square matrix")) matrix = [] for i in range(n): a =[] for j in range(n): a.append(int(input("Enter the entries rowwise:"))) matrix.append(a) matrix1 = [] for i in range(0,n): b = [] for j in range(0,n): matrix1[i][j] = matrix[j][i] for i in range(n): for j in range(n): print(matrix1[i][j], end = " ") print() What is the reason for the error in the line matrix1[i][j] = matrix[j][i]? And is there any other way to find the transpose of a matrix without using numpy A: The matrix1 is a list of dim 1 you try to index it as it already is a 2-d list. Try to use the b list to append the elements of the transposed matrix and then append it to matrix1 like below: n = int(input("Enter the size of square matrix")) matrix = [] for i in range(n): a =[] for j in range(n): a.append(int(input("Enter the entries rowwise:"))) matrix.append(a) matrix1 = [] for i in range(0,n): b = [] for j in range(0,n): b.append( matrix[j][i]) matrix1.append(b) for i in range(n): for j in range(n): print(matrix1[i][j], end = " ") print() A: You can use zip function also for transpose. Example Code:- matrix=[[1,2,3],[4,5,6],[7,8,9]] #print(matrix) res=[] for i in zip(*matrix): res.append(i) print(res) Output:- [(1, 4, 7), (2, 5, 8), (3, 6, 9)] A: If the name of the matrix is "matrix" just go with: transposed = list(zip(*matrix))
To find the transpose of a given matrix
I have been trying to run the code but its giving error that - "list index out of range" What is the reason? And is there any other way to find the transpose of a matrix without using numpy This is the code I wrote n = int(input("Enter the size of square matrix")) matrix = [] for i in range(n): a =[] for j in range(n): a.append(int(input("Enter the entries rowwise:"))) matrix.append(a) matrix1 = [] for i in range(0,n): b = [] for j in range(0,n): matrix1[i][j] = matrix[j][i] for i in range(n): for j in range(n): print(matrix1[i][j], end = " ") print() What is the reason for the error in the line matrix1[i][j] = matrix[j][i]? And is there any other way to find the transpose of a matrix without using numpy
[ "The matrix1 is a list of dim 1 you try to index it as it already is a 2-d list. Try to use the b list to append the elements of the transposed matrix and then append it to matrix1 like below:\nn = int(input(\"Enter the size of square matrix\"))\nmatrix = []\nfor i in range(n): \n a =[]\n for j in range(n): \n a.append(int(input(\"Enter the entries rowwise:\")))\n matrix.append(a)\nmatrix1 = []\nfor i in range(0,n):\n b = []\n for j in range(0,n):\n b.append( matrix[j][i])\n matrix1.append(b)\nfor i in range(n):\n for j in range(n):\n print(matrix1[i][j], end = \" \")\nprint()\n\n", "You can use zip function also for transpose.\nExample\nCode:-\nmatrix=[[1,2,3],[4,5,6],[7,8,9]]\n#print(matrix)\nres=[]\nfor i in zip(*matrix):\n res.append(i)\nprint(res)\n\nOutput:-\n[(1, 4, 7), (2, 5, 8), (3, 6, 9)]\n\n", "If the name of the matrix is \"matrix\" just go with:\ntransposed = list(zip(*matrix))\n\n" ]
[ 2, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0074624974_python.txt
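A small follow-up to the zip-based answers above: zip(*matrix) yields tuples, so if the rest of the program expects lists of lists, each row can be converted explicitly.

matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# zip(*matrix) groups the i-th element of every row, i.e. the columns.
transposed = [list(row) for row in zip(*matrix)]
print(transposed)  # [[1, 4, 7], [2, 5, 8], [3, 6, 9]]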
Q: Why replacing in range "(len(list)" with "in list" changing output of the program I was writing code for a program which performs intersection of elements in the two lists, which means the common elements in both the lists are returned. changing "in _list" with "in range (len(list))" used for traversing in one of the list changed the output of the function Input code 1: def inn(nums1,nums2): a=set() b={} for i in range(len(nums2)): b[nums2[i]]="h" print (b) for j in nums1: #**calling elemnts by in** if j in b: print(j) a.add(j) return a inn([1,2,2,1],[2,2]) Output code 1(correct): {2: 'h'} 2 2 {2} Input code 2:(with changed method in 2nd loop): def inn(nums1,nums2): a=set() b={} for i in range(len(nums2)): b[nums2[i]]="h" print (b) for j in range(len(nums1)): #**calling elements by range** if nums1[j] in b: print(nums1[j]) a.add(j) return a inn([1,2,2,1],[2,2]) output code 2(Incorrect): {2: 'h'} 2 2 {1, 2}
Why does replacing "in range(len(list))" with "in list" change the output of the program
I was writing code for a program which performs intersection of elements in the two lists, which means the common elements in both the lists are returned. changing "in _list" with "in range (len(list))" used for traversing in one of the list changed the output of the function Input code 1: def inn(nums1,nums2): a=set() b={} for i in range(len(nums2)): b[nums2[i]]="h" print (b) for j in nums1: #**calling elemnts by in** if j in b: print(j) a.add(j) return a inn([1,2,2,1],[2,2]) Output code 1(correct): {2: 'h'} 2 2 {2} Input code 2:(with changed method in 2nd loop): def inn(nums1,nums2): a=set() b={} for i in range(len(nums2)): b[nums2[i]]="h" print (b) for j in range(len(nums1)): #**calling elements by range** if nums1[j] in b: print(nums1[j]) a.add(j) return a inn([1,2,2,1],[2,2]) output code 2(Incorrect): {2: 'h'} 2 2 {1, 2}
[]
[]
[ "You can simplify your inn function with the built in set operations in python:\ndef inn(nums1,nums2):\n return set(nums1) & set(nums2)\n\n", "In the first listing, j is an element of the list num1.\nIn the second listing, j is used as the index of an element in the list num1.\nYou understood the difference when you replaced\n if j in b:\n print(j)\n\nin the first listing with\n if nums1[j] in b:\n print(nums1[j])\n\nin the second listing.\nBut you forgot the difference in the next line, which is the same in both listings. It should be\n a.add(nums1[j])\n\nin the second listing.\nThen both outputs would be the same.\n" ]
[ -2, -2 ]
[ "list", "python", "range", "set" ]
stackoverflow_0074624924_list_python_range_set.txt
Q: Shifting start position of X Axis of line chart I want to shift the start position of the red line from "FEB-2020" to "JAN-2021". Currently this is my code and a picture of my current output. Basically Shorten the period of the whole graph to the dates stated above. # plot daily vaccinated fig, ax1= plt.subplots(1,figsize=(20,10)) # set up plt.ticklabel_format(style='plain') #changing the tick figure from le 6 to millions plt.setp(ax1, xticks= np.arange(0, 680, 30), xticklabels=vac_dates) # plot chart ax1.plot(vaccinated['received_at_least_one_dose'], label= 'Total Vaccinated', c='Red') # axis and legend settings ax1.set_ylabel('population (millions)', size= 14) ax1.set_title('Monthly Vaccinated Numbers', size= 20) plt.xticks(rotation=45) plt.grid() ax1.legend(loc="upper left") ########################################################## # plot daily covid cases ax2 = ax1.twinx() # np.arrange looks at the number of rows plt.setp(ax2, xticks= np.arange(0, 1035, 31), xticklabels=dates) ax2.xaxis.tick_top() ax2.plot(infected_update) plt.xlabel('date', fontsize=14) plt.ylabel('population (thousands)', fontsize=14) plt.grid(False) ax2.legend(['imported','local'], loc="upper right") I've tried using codes from the following links but it doesn't seem to work https://stackoverflow.com/questions/29370057/select-dataframe-rows-between-two-dates https://stackoverflow.com/questions/32434607/how-to-shift-a-graph-along-the-x-axis A: Here is a very basic example using pandas, datetime index and matplotlib altogether. It is important to make sure the index is of type DatetimeIndex. import pandas as pd import matplotlib.pyplot as plt df = pd.DataFrame({'a':[3,4,5]}, index=pd.date_range('2020-01-03', '2020-01-05', freq='D')) df_2 = pd.DataFrame({'b':[1,2,3,4,5], 'c':[1,2,2,2,2]}, index=pd.date_range('2020-01-01', '2020-01-05', freq='D')) # plot daily vaccinated fig, ax1= plt.subplots(1,figsize=(10,5)) # set up ax1.plot(df.index, df['a'], label= 'Total Vaccinated', c='Red') # axis and legend settings ax1.set_ylabel('population (millions)', size= 14) ax1.set_title('Monthly Vaccinated Numbers', size= 20) plt.xticks(rotation=45) plt.grid() ax1.legend(loc="upper left") ########################################################## # plot daily covid cases ax2 = ax1.twinx() # np.arrange looks at the number of rows ax2.xaxis.tick_top() ax2.plot(df_2.index, df_2['b'], color='Orange') plt.xlabel('date', fontsize=14) plt.ylabel('population (thousands)', fontsize=14) plt.grid(False) This means in your case, you have to make sure to set parse_dates=True and set the index, when your read your data. Using pd.read_csv() this could look like df = pd.read_csv('covid.csv', sep=',', parse_dates=True, index_col=0) Because you have to DataFrames, you have so make yure both have a DatetimeIndex. Afterwards just replace the columns in the two calls with ac.plot(). Comment: If you want to plot all columns of a Dataframe, ax2.plot(df_2.index, df_2) works. If your want to select a subset of columns ax2.plot(df_2.index, df_2[['b', 'c']]) is doing the job.
Shifting start position of X Axis of line chart
I want to shift the start position of the red line from "FEB-2020" to "JAN-2021". Currently this is my code and a picture of my current output. Basically Shorten the period of the whole graph to the dates stated above. # plot daily vaccinated fig, ax1= plt.subplots(1,figsize=(20,10)) # set up plt.ticklabel_format(style='plain') #changing the tick figure from le 6 to millions plt.setp(ax1, xticks= np.arange(0, 680, 30), xticklabels=vac_dates) # plot chart ax1.plot(vaccinated['received_at_least_one_dose'], label= 'Total Vaccinated', c='Red') # axis and legend settings ax1.set_ylabel('population (millions)', size= 14) ax1.set_title('Monthly Vaccinated Numbers', size= 20) plt.xticks(rotation=45) plt.grid() ax1.legend(loc="upper left") ########################################################## # plot daily covid cases ax2 = ax1.twinx() # np.arrange looks at the number of rows plt.setp(ax2, xticks= np.arange(0, 1035, 31), xticklabels=dates) ax2.xaxis.tick_top() ax2.plot(infected_update) plt.xlabel('date', fontsize=14) plt.ylabel('population (thousands)', fontsize=14) plt.grid(False) ax2.legend(['imported','local'], loc="upper right") I've tried using codes from the following links but it doesn't seem to work https://stackoverflow.com/questions/29370057/select-dataframe-rows-between-two-dates https://stackoverflow.com/questions/32434607/how-to-shift-a-graph-along-the-x-axis
[ "Here is a very basic example using pandas, datetime index and matplotlib altogether. It is important to make sure the index is of type DatetimeIndex.\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\ndf = pd.DataFrame({'a':[3,4,5]}, index=pd.date_range('2020-01-03', '2020-01-05', freq='D'))\ndf_2 = pd.DataFrame({'b':[1,2,3,4,5], 'c':[1,2,2,2,2]}, index=pd.date_range('2020-01-01', '2020-01-05', freq='D'))\n\n# plot daily vaccinated\nfig, ax1= plt.subplots(1,figsize=(10,5))\n# set up\nax1.plot(df.index, df['a'], label= 'Total Vaccinated', c='Red')\n# axis and legend settings\nax1.set_ylabel('population (millions)', size= 14)\nax1.set_title('Monthly Vaccinated Numbers', size= 20)\nplt.xticks(rotation=45)\nplt.grid()\n\nax1.legend(loc=\"upper left\")\n\n##########################################################\n# plot daily covid cases\nax2 = ax1.twinx()\n\n# np.arrange looks at the number of rows\nax2.xaxis.tick_top()\nax2.plot(df_2.index, df_2['b'], color='Orange')\n\nplt.xlabel('date', fontsize=14)\nplt.ylabel('population (thousands)', fontsize=14)\n\nplt.grid(False)\n\n\nThis means in your case, you have to make sure to set parse_dates=True and set the index, when your read your data. Using pd.read_csv() this could look like\ndf = pd.read_csv('covid.csv', sep=',', parse_dates=True, index_col=0)\n\nBecause you have to DataFrames, you have so make yure both have a DatetimeIndex. Afterwards just replace the columns in the two calls with ac.plot().\nComment:\nIf you want to plot all columns of a Dataframe, ax2.plot(df_2.index, df_2) works. If your want to select a subset of columns ax2.plot(df_2.index, df_2[['b', 'c']]) is doing the job.\n" ]
[ 0 ]
[]
[]
[ "matplotlib", "pandas", "python", "x_axis" ]
stackoverflow_0074623492_matplotlib_pandas_python_x_axis.txt
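If the goal described in the question is simply to shorten the plotted period so both lines start in January 2021, slicing on a DatetimeIndex (as set up in the answer above) is usually enough. The file names below are placeholders and the column name is taken from the question; everything else is an assumption for illustration.

import pandas as pd
import matplotlib.pyplot as plt

# Placeholder file names; the real data sources come from the question's setup.
vaccinated = pd.read_csv("vaccinations.csv", parse_dates=True, index_col=0)
infected_update = pd.read_csv("cases.csv", parse_dates=True, index_col=0)

# With a DatetimeIndex, .loc slicing trims the plotted period directly.
vaccinated_2021 = vaccinated.loc["2021-01-01":]
infected_2021 = infected_update.loc["2021-01-01":]

fig, ax1 = plt.subplots(figsize=(20, 10))
ax1.plot(vaccinated_2021.index,
         vaccinated_2021["received_at_least_one_dose"],
         c="Red", label="Total Vaccinated")
ax1.set_ylabel("population (millions)")
ax1.legend(loc="upper left")

ax2 = ax1.twinx()
ax2.plot(infected_2021.index, infected_2021)
ax2.set_ylabel("population (thousands)")

plt.show()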
Q: Append matrices generated in for loop that have different col names to an external list of empty dataframes I have a large dataset that I am trying to perform various analyses on, but first need to transform into matrices grouped by different variables. For example, here is a toy dataset: myData = pd.DataFrame({'dataset': ['cat', 'cat', 'cat', 'cat', 'dog', 'dog', 'dog', 'dog', 'bird', 'bird', 'bird', 'bird'], 'category_1': ['orange', 'orange', 'white', 'white', 'black', 'brown', 'brown', 'black', 'red', 'green', 'red', 'green'], 'category_2': ['this_cat', 'that_cat', 'this_cat', 'that_cat', 'this_dog', 'that_dog', 'this_dog', 'that_dog', 'this_bird', 'that_bird', 'this_bird', 'that_bird'], 'values': ['1', '8', '9', '2', '5', '4', '3', '10', '0', '2', '7', '9'] }) for i, animals in myData.groupby('dataset'): tuples = animals.groupby(['category_1', 'category_2'])['values'].mean().reset_index() tuples = pd.DataFrame(tuples) matrix = tuples.pivot(index='category_2', columns='category_1', values='values').reset_index() display(matrix) Here I am grouping my data by "animals" and converting each group into a matrix. However, because the column names are not same across my matrices, I am having trouble saving my output into an external empty list or dataframe. For example, I'd like to save each matrix into a separate dataframe that is dynamically generated depending on the number of groups in my data: output_dfs = {k: pd.DataFrame([]) for k in myData['dataset']} Desired output in this case would be 3 separate dataframes that I can access by a name: (the values are based on the toy dataset) dataset category_1 category_2 green red bird 0 that_bird 14.5 NaN bird 1 this_bird NaN 3.5 dataset category_1 category_2 orange white cat 0 that_cat 8.0 2.0 cat 1 this_cat 1.0 9.0 dataset category_1 category_2 black brown dog 0 that_dog 10.0 4.0 dog 1 this_dog 5.0 3.0 A: I'm not sure what you mean, is this the result you want to achieve? myData = pd.DataFrame({'dataset': ['cat', 'cat', 'cat', 'cat', 'dog', 'dog', 'dog', 'dog', 'bird', 'bird', 'bird', 'bird'], 'category_1': ['orange', 'orange', 'white', 'white', 'black', 'brown', 'brown', 'black', 'red', 'green', 'red', 'green'], 'category_2': ['this_cat', 'that_cat', 'this_cat', 'that_cat', 'this_dog', 'that_dog', 'this_dog', 'that_dog', 'this_bird', 'that_bird', 'this_bird', 'that_bird'], 'values': ['1', '8', '9', '2', '5', '4', '3', '10', '0', '2', '7', '9'] }) result = {} for i, animals in myData.groupby('dataset'): tuples = animals.groupby(['category_1', 'category_2'])['values'].mean().reset_index() tuples = pd.DataFrame(tuples) matrix = tuples.pivot(index='category_2', columns='category_1', values='values').reset_index() result[i] = matrix display(result)
Append matrices generated in for loop that have different col names to an external list of empty dataframes
I have a large dataset that I am trying to perform various analyses on, but first need to transform into matrices grouped by different variables. For example, here is a toy dataset: myData = pd.DataFrame({'dataset': ['cat', 'cat', 'cat', 'cat', 'dog', 'dog', 'dog', 'dog', 'bird', 'bird', 'bird', 'bird'], 'category_1': ['orange', 'orange', 'white', 'white', 'black', 'brown', 'brown', 'black', 'red', 'green', 'red', 'green'], 'category_2': ['this_cat', 'that_cat', 'this_cat', 'that_cat', 'this_dog', 'that_dog', 'this_dog', 'that_dog', 'this_bird', 'that_bird', 'this_bird', 'that_bird'], 'values': ['1', '8', '9', '2', '5', '4', '3', '10', '0', '2', '7', '9'] }) for i, animals in myData.groupby('dataset'): tuples = animals.groupby(['category_1', 'category_2'])['values'].mean().reset_index() tuples = pd.DataFrame(tuples) matrix = tuples.pivot(index='category_2', columns='category_1', values='values').reset_index() display(matrix) Here I am grouping my data by "animals" and converting each group into a matrix. However, because the column names are not same across my matrices, I am having trouble saving my output into an external empty list or dataframe. For example, I'd like to save each matrix into a separate dataframe that is dynamically generated depending on the number of groups in my data: output_dfs = {k: pd.DataFrame([]) for k in myData['dataset']} Desired output in this case would be 3 separate dataframes that I can access by a name: (the values are based on the toy dataset) dataset category_1 category_2 green red bird 0 that_bird 14.5 NaN bird 1 this_bird NaN 3.5 dataset category_1 category_2 orange white cat 0 that_cat 8.0 2.0 cat 1 this_cat 1.0 9.0 dataset category_1 category_2 black brown dog 0 that_dog 10.0 4.0 dog 1 this_dog 5.0 3.0
[ "I'm not sure what you mean, is this the result you want to achieve?\nmyData = pd.DataFrame({'dataset': ['cat', 'cat', 'cat', 'cat', 'dog', 'dog', 'dog', 'dog', 'bird', 'bird', 'bird', 'bird'], \n 'category_1': ['orange', 'orange', 'white', 'white', 'black', 'brown', 'brown', 'black', 'red', 'green', 'red', 'green'], \n 'category_2': ['this_cat', 'that_cat', 'this_cat', 'that_cat', 'this_dog', 'that_dog', 'this_dog', 'that_dog', 'this_bird', 'that_bird', 'this_bird', 'that_bird'],\n 'values': ['1', '8', '9', '2', '5', '4', '3', '10', '0', '2', '7', '9']\n })\n\nresult = {}\nfor i, animals in myData.groupby('dataset'):\n tuples = animals.groupby(['category_1', 'category_2'])['values'].mean().reset_index()\n tuples = pd.DataFrame(tuples)\n matrix = tuples.pivot(index='category_2', columns='category_1', values='values').reset_index()\n result[i] = matrix\ndisplay(result)\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "matrix", "pandas", "python" ]
stackoverflow_0074622555_dataframe_matrix_pandas_python.txt
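A short usage note on the dict built in the answer above: each pivoted matrix can then be pulled out by its group name, which matches the question's requirement of accessing the three dataframes by name. The key names follow the toy dataset.

# result was filled in the loop above, one pivoted DataFrame per 'dataset' value
cat_matrix = result["cat"]
bird_matrix = result["bird"]

for name, matrix in result.items():
    print(f"--- {name} ---")
    print(matrix)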
Q: how to add the SECRET path so that i can get the client email in python secrets_base_path = os.environ['SECRETS_PATH'] SECRET = open(secrets_base_path+"/bbak_crewl.yaml", "r") with open(SECRET) as jf: json_secrets = json_load(jf) json_secrets['client_email'] gs = pygsheets.authorize(service_ file=SECRET) receiving an error when trying to run this python file, i know the code needs to be corrected , pls help. expected str,bytes or os.Path like object, not to _io.Text10wrapper error A: You should convert the TextIO to string and you should use "loads" function; from json import loads with open(SECRET) as jf: data = loads(jf.read())
How to add the SECRET path so that I can get the client email in Python
secrets_base_path = os.environ['SECRETS_PATH'] SECRET = open(secrets_base_path+"/bbak_crewl.yaml", "r") with open(SECRET) as jf: json_secrets = json_load(jf) json_secrets['client_email'] gs = pygsheets.authorize(service_ file=SECRET) receiving an error when trying to run this python file, i know the code needs to be corrected , pls help. expected str,bytes or os.Path like object, not to _io.Text10wrapper error
[ "You should convert the TextIO to string and you should use \"loads\" function;\nfrom json import loads\n\nwith open(SECRET) as jf:\n data = loads(jf.read())\n\n\n" ]
[ 0 ]
[]
[]
[ "jupyter_notebook", "python" ]
stackoverflow_0074624139_jupyter_notebook_python.txt
Q: The size of torch.cat ((x1, x2, x3, x4), 1) does not match I'm creating a model like the one in the figure below based on this paper but concat doesn't match the size and I get an error RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 128 but got size 64 for tensor number 2 in the list. My code: import torch from torch import nn import torch.nn.functional as F class DepthwiseConv2d(torch.nn.Conv2d): def __init__(self, in_channels, depth_multiplier=1, kernel_size=3, stride=1, padding=0, dilation=1, bias=True, padding_mode='zeros' ): out_channels = in_channels * depth_multiplier super().__init__( in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride, padding=padding, dilation=dilation, groups=in_channels, bias=bias, padding_mode=padding_mode ) class InceptionEEGNet(nn.Module): def __init__(self,bathsize): # input = (1,22,256) super().__init__() self.bathsize = bathsize self.conv2d_1 = nn.Conv2d(1,64,(1,16),padding='same') self.batchnorm2d_1 = nn.BatchNorm2d(64) self.conv2d_2 = DepthwiseConv2d(64,depth_multiplier=4,kernel_size=(22,1),padding='valid') self.batchnorm2d_elu_1 = nn.Sequential( nn.BatchNorm2d(256), nn.ELU() ) self.averagepooling = nn.Sequential( nn.AvgPool2d((1,2)), nn.Dropout2d(p=0.5) ) self.inception_block_1 = nn.Sequential( nn.Conv2d(256,64,(1,1),padding='same'), nn.Conv2d(64,64,(1,7),padding='same'), ) self.inception_block_2 = nn.Sequential( nn.Conv2d(256,64,(1,1),padding='same'), nn.Conv2d(64,64,(1,9),padding='same'), ) self.inception_block_3 = nn.Sequential( nn.AvgPool2d((1,2)), nn.Conv2d(256,64,(1,1),padding='same'), ) self.inception_block_4 = nn.Conv2d(256,64,(1,1),stride=(1,2)) self.batchnorm2d_elu_2 = nn.Sequential( nn.BatchNorm2d(64), nn.ELU(), nn.Dropout2d(p=0.5) ) self.conv2d_3 = nn.Conv2d(256,256,(1,5),padding='same') def forward(self,x): x = self.conv2d_1(x) x = self.batchnorm2d_1(x) x = self.conv2d_2(x) x = self.batchnorm2d_elu_1(x) x = self.averagepooling(x) print(x.shape) x1 = self.inception_block_1(x) print(x1.shape) x2 = self.inception_block_2(x) x3 = self.inception_block_3(x) x4 = self.inception_block_4(x) x = torch.cat((x1, x2, x3, x4), 1) x = self.batchnorm2d_elu_2(x) x = self.conv2d_3(x) x = self.batchnorm2d_elu_2(x) x = F.adaptive_avg_pool3d(x, (1,1,3)) x = x.squeeze() return x net = InceptionEEGNet(10) x = torch.rand(10,1,22,256) print(net(x).shape) A: You forgot the padding=same in inception_block_4. That would change the output size of the block to such that it does not fit the other blocks: self.inception_block_4 = nn.Conv2d(256,64,(1,1),stride=(1,2), padding='same') edit: This is not possible for strided convolutions, so instead a pointwise convolution should be used as mentioned in the paper: 'A branch with a pointwise convolution with a kernel size of 1, 1 with a stride of (1, 2)'. or depth-wise separable convolution, more on how that achieve that here.
The size of torch.cat ((x1, x2, x3, x4), 1) does not match
I'm creating a model like the one in the figure below based on this paper but concat doesn't match the size and I get an error RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 128 but got size 64 for tensor number 2 in the list. My code: import torch from torch import nn import torch.nn.functional as F class DepthwiseConv2d(torch.nn.Conv2d): def __init__(self, in_channels, depth_multiplier=1, kernel_size=3, stride=1, padding=0, dilation=1, bias=True, padding_mode='zeros' ): out_channels = in_channels * depth_multiplier super().__init__( in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride, padding=padding, dilation=dilation, groups=in_channels, bias=bias, padding_mode=padding_mode ) class InceptionEEGNet(nn.Module): def __init__(self,bathsize): # input = (1,22,256) super().__init__() self.bathsize = bathsize self.conv2d_1 = nn.Conv2d(1,64,(1,16),padding='same') self.batchnorm2d_1 = nn.BatchNorm2d(64) self.conv2d_2 = DepthwiseConv2d(64,depth_multiplier=4,kernel_size=(22,1),padding='valid') self.batchnorm2d_elu_1 = nn.Sequential( nn.BatchNorm2d(256), nn.ELU() ) self.averagepooling = nn.Sequential( nn.AvgPool2d((1,2)), nn.Dropout2d(p=0.5) ) self.inception_block_1 = nn.Sequential( nn.Conv2d(256,64,(1,1),padding='same'), nn.Conv2d(64,64,(1,7),padding='same'), ) self.inception_block_2 = nn.Sequential( nn.Conv2d(256,64,(1,1),padding='same'), nn.Conv2d(64,64,(1,9),padding='same'), ) self.inception_block_3 = nn.Sequential( nn.AvgPool2d((1,2)), nn.Conv2d(256,64,(1,1),padding='same'), ) self.inception_block_4 = nn.Conv2d(256,64,(1,1),stride=(1,2)) self.batchnorm2d_elu_2 = nn.Sequential( nn.BatchNorm2d(64), nn.ELU(), nn.Dropout2d(p=0.5) ) self.conv2d_3 = nn.Conv2d(256,256,(1,5),padding='same') def forward(self,x): x = self.conv2d_1(x) x = self.batchnorm2d_1(x) x = self.conv2d_2(x) x = self.batchnorm2d_elu_1(x) x = self.averagepooling(x) print(x.shape) x1 = self.inception_block_1(x) print(x1.shape) x2 = self.inception_block_2(x) x3 = self.inception_block_3(x) x4 = self.inception_block_4(x) x = torch.cat((x1, x2, x3, x4), 1) x = self.batchnorm2d_elu_2(x) x = self.conv2d_3(x) x = self.batchnorm2d_elu_2(x) x = F.adaptive_avg_pool3d(x, (1,1,3)) x = x.squeeze() return x net = InceptionEEGNet(10) x = torch.rand(10,1,22,256) print(net(x).shape)
[ "You forgot the padding=same in inception_block_4. That would change the output size of the block to such that it does not fit the other blocks:\nself.inception_block_4 = nn.Conv2d(256,64,(1,1),stride=(1,2), padding='same')\n\nedit:\nThis is not possible for strided convolutions, so instead a pointwise convolution should be used as mentioned in the paper:\n'A branch with a pointwise convolution with a kernel size of 1, 1 with a stride of (1, 2)'.\nor depth-wise separable convolution, more on how that achieve that here.\n" ]
[ 0 ]
[]
[]
[ "python", "pytorch" ]
stackoverflow_0074625002_python_pytorch.txt
Q: Use Snowpark python to unload snowflake data to S3. How to provide storage integration option I am trying to unload snowflake data to S3, I have storage integration setup for the same. I could unload using SQL query, but wanted to do that using snowpark python. DataFrameWriter.copy_into_location - this snowpark method does not have any parameter for storage_integration, which leaves me clue less on how to get this unload job done with snowpark! Any help on this would be highly appreciated! Tried using the existing copy_into_location method, with storage_integration='SI_NAME', which the internal SQL query thrown an error - Invalid value ''SI_NAME'' for property 'STORAGE_INTEGRATION'. String literal identifier is unsupported for this property. Please use an unquoted or double-quoted identifier. A: You are right, DataFrameWriter.copy_into_location does not have the storage integration parameter. You can create an external stage object pointing to your S3 location using your storage integration. create stage my_stage_s3 storage_integration = my_storage_int url = 's3://mybucket/encrypted_files/' file_format = my_format; Then, in your copy_into_location call, you specify the location as "@my_stage_s3/"
Use Snowpark python to unload snowflake data to S3. How to provide storage integration option
I am trying to unload Snowflake data to S3, and I have a storage integration set up for this. I could unload using a SQL query, but I wanted to do it using Snowpark Python. DataFrameWriter.copy_into_location - this Snowpark method does not have any parameter for storage_integration, which leaves me clueless on how to get this unload job done with Snowpark! Any help on this would be highly appreciated! I tried using the existing copy_into_location method with storage_integration='SI_NAME', and the generated SQL query threw an error - Invalid value ''SI_NAME'' for property 'STORAGE_INTEGRATION'. String literal identifier is unsupported for this property. Please use an unquoted or double-quoted identifier.
[ "You are right, DataFrameWriter.copy_into_location does not have the storage integration parameter.\nYou can create an external stage object pointing to your S3 location using your storage integration.\n create stage my_stage_s3\n storage_integration = my_storage_int\n url = 's3://mybucket/encrypted_files/'\n file_format = my_format;\n\nThen, in your copy_into_location call, you specify the location as \"@my_stage_s3/\"\n" ]
[ 1 ]
[]
[]
[ "amazon_s3", "python", "snowflake_cloud_data_platform", "snowpark" ]
stackoverflow_0074625149_amazon_s3_python_snowflake_cloud_data_platform_snowpark.txt
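A sketch of how the two steps in the answer above could look end-to-end with the Snowpark Python API: create the stage (carrying the storage integration) once via SQL, then point copy_into_location at that stage. The connection parameters, table name, and format options are placeholders, not values from the question.

from snowflake.snowpark import Session

# Placeholder credentials; replace with real connection details.
connection_parameters = {
    "account": "...", "user": "...", "password": "...",
    "role": "...", "warehouse": "...", "database": "...", "schema": "...",
}
session = Session.builder.configs(connection_parameters).create()

# One-time setup: the stage carries the storage integration, so the
# DataFrameWriter itself never needs a storage_integration parameter.
session.sql("""
    create stage if not exists my_stage_s3
      storage_integration = my_storage_int
      url = 's3://mybucket/encrypted_files/'
""").collect()

df = session.table("MY_DB.MY_SCHEMA.MY_TABLE")  # placeholder table name
df.write.copy_into_location(
    "@my_stage_s3/unload/",  # unload into the external stage
    file_format_type="csv",
    format_type_options={"COMPRESSION": "GZIP"},
    header=True,
    overwrite=True,
)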
Q: Boto3 Athena are not showing all tables trying to get a list of table names in Athena Table using BOTO3 python. this is my code; I think my attempts to do paginator is not correct. Any help is appreciated import boto3 client = boto3.client('glue') responseGetDatabases = client.get_databases() databaseList = responseGetDatabases['DatabaseList'] for databaseDict in databaseList: databaseName = databaseDict['Name'] if "dbName_" in databaseName: print '\ndatabaseName: ' + databaseName responseGetTables = client.get_tables( DatabaseName = databaseName ) paginator = client.get_paginator(['TableList']) for page in paginator: tableList = responseGetTables['TableList'] for tables in tableList: print tables['Name'] A: The get_paginator function parameter must be the name of the operation. It looks like you're trying to paginate on the get_tables function so paginator = client.get_paginator(['TableList']) should be: paginator = client.get_paginator('get_tables') Once you have the paginator object, you need to call paginator.paginate to retrieve the iterator. You can send your database parameters like so: page_iterator = paginator.paginate( DatabaseName=databaseDict['Name'], PaginationConfig={ 'MaxItems': 123, 'PageSize': 123, 'StartingToken': 'string' } ) See the documention for this function here. Now that you have the iterator, you can call a for loop by enumerating on it: for page_index, page in enumerate(page_iterator): A: Here's a full working example on how to do it using paginator. Remember to provide region_name and database_name. import boto3 region_name = '<PROVIDE_AWS_REGION_NAME>' database_name = '<PROVIDE_YOUR_DATABASE_NAME>' catalog_name = 'AwsDataCatalog' athena = boto3.client('athena', region_name=region_name) paginator = athena.get_paginator('list_table_metadata') response_iterator = paginator.paginate( CatalogName=catalog_name, DatabaseName=database_name ) table_names = [] for page in response_iterator: table_names.extend( (i['Name'] for i in page['TableMetadataList']) ) print(table_names)
Boto3 Athena are not showing all tables
trying to get a list of table names in Athena Table using BOTO3 python. this is my code; I think my attempts to do paginator is not correct. Any help is appreciated import boto3 client = boto3.client('glue') responseGetDatabases = client.get_databases() databaseList = responseGetDatabases['DatabaseList'] for databaseDict in databaseList: databaseName = databaseDict['Name'] if "dbName_" in databaseName: print '\ndatabaseName: ' + databaseName responseGetTables = client.get_tables( DatabaseName = databaseName ) paginator = client.get_paginator(['TableList']) for page in paginator: tableList = responseGetTables['TableList'] for tables in tableList: print tables['Name']
[ "The get_paginator function parameter must be the name of the operation. It looks like you're trying to paginate on the get_tables function so\n paginator = client.get_paginator(['TableList'])\n\nshould be:\n paginator = client.get_paginator('get_tables')\n\nOnce you have the paginator object, you need to call paginator.paginate to retrieve the iterator. You can send your database parameters like so:\n page_iterator = paginator.paginate(\n DatabaseName=databaseDict['Name'],\n PaginationConfig={\n 'MaxItems': 123,\n 'PageSize': 123,\n 'StartingToken': 'string'\n }\n )\n\nSee the documention for this function here.\nNow that you have the iterator, you can call a for loop by enumerating on it:\nfor page_index, page in enumerate(page_iterator):\n\n", "Here's a full working example on how to do it using paginator.\nRemember to provide region_name and database_name.\nimport boto3\n\nregion_name = '<PROVIDE_AWS_REGION_NAME>'\ndatabase_name = '<PROVIDE_YOUR_DATABASE_NAME>'\ncatalog_name = 'AwsDataCatalog'\n\nathena = boto3.client('athena', region_name=region_name)\n\npaginator = athena.get_paginator('list_table_metadata')\nresponse_iterator = paginator.paginate(\n CatalogName=catalog_name,\n DatabaseName=database_name\n)\n\ntable_names = []\nfor page in response_iterator:\n table_names.extend(\n (i['Name'] for i in page['TableMetadataList'])\n )\n\nprint(table_names)\n\n" ]
[ 2, 0 ]
[]
[]
[ "amazon_athena", "aws_glue", "boto3", "paginator", "python" ]
stackoverflow_0047933931_amazon_athena_aws_glue_boto3_paginator_python.txt
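Putting the pieces of the first answer together against the question's Glue-based snippet, a fully paginated version could look like the sketch below. It assumes the Glue client exposes paginators for both get_databases and get_tables and keeps the question's 'dbName_' filter.

import boto3

client = boto3.client("glue")

db_paginator = client.get_paginator("get_databases")
table_paginator = client.get_paginator("get_tables")

for db_page in db_paginator.paginate():
    for database in db_page["DatabaseList"]:
        database_name = database["Name"]
        if "dbName_" not in database_name:
            continue
        print(f"\ndatabaseName: {database_name}")
        for table_page in table_paginator.paginate(DatabaseName=database_name):
            for table in table_page["TableList"]:
                print(table["Name"])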
Q: ImportError: cannot import name 'config' from partially initialized module 'panel.config' I'm trying to import the package panel, however, when I try to do that I receive the following message: Output exceeds the size limit. Open the full output data in a text editor ImportError Traceback (most recent call last) c:\Users\nicol\Documents\Gist\Centrafrique Python\6. Draft_keyword frequency.ipynb Cell 2 in <cell line: 1>() ----> 1 import hvplot.pandas 2 import holoviews as hv 4 hv.extension('bokeh') File ~\AppData\Roaming\Python\Python310\site-packages\hvplot_init_.py:8, in 5 import textwrap 7 import param ----> 8 import holoviews as _hv 10 from holoviews import Store 12 from .converter import HoloViewsConverter File ~\AppData\Roaming\Python\Python310\site-packages\holoviews_init_.py:12, in 8 version = str(param.version.Version(fpath=file, archive_commit="$Format:%h$", 9 reponame="holoviews")) 11 from . import util # noqa (API import) ---> 12 from .annotators import annotate # noqa (API import) 13 from .core import archive, config # noqa (API import) 14 from .core.boundingregion import BoundingBox # noqa (API import) File ~\AppData\Roaming\Python\Python310\site-packages\holoviews\annotators.py:10, in 6 from inspect import getmro 8 import param ... ----> 9 from ..config import config 11 from .callbacks import PeriodicCallback # noqa 12 from .embed import embed_state # noqa ImportError: cannot import name 'config' from partially initialized module 'panel.config' (most likely due to a circular import) (C:\Users\nicol\AppData\Roaming\Python\Python310\site-packages\panel\config.py) This is my code: import panel as pn pn.extension('tabulator', sizing_mode="stretch_width") What am I doing wrong? A: Give this a try from panel import panel as pn
ImportError: cannot import name 'config' from partially initialized module 'panel.config'
I'm trying to import the package panel, however, when I try to do that I receive the following message: Output exceeds the size limit. Open the full output data in a text editor ImportError Traceback (most recent call last) c:\Users\nicol\Documents\Gist\Centrafrique Python\6. Draft_keyword frequency.ipynb Cell 2 in <cell line: 1>() ----> 1 import hvplot.pandas 2 import holoviews as hv 4 hv.extension('bokeh') File ~\AppData\Roaming\Python\Python310\site-packages\hvplot_init_.py:8, in 5 import textwrap 7 import param ----> 8 import holoviews as _hv 10 from holoviews import Store 12 from .converter import HoloViewsConverter File ~\AppData\Roaming\Python\Python310\site-packages\holoviews_init_.py:12, in 8 version = str(param.version.Version(fpath=file, archive_commit="$Format:%h$", 9 reponame="holoviews")) 11 from . import util # noqa (API import) ---> 12 from .annotators import annotate # noqa (API import) 13 from .core import archive, config # noqa (API import) 14 from .core.boundingregion import BoundingBox # noqa (API import) File ~\AppData\Roaming\Python\Python310\site-packages\holoviews\annotators.py:10, in 6 from inspect import getmro 8 import param ... ----> 9 from ..config import config 11 from .callbacks import PeriodicCallback # noqa 12 from .embed import embed_state # noqa ImportError: cannot import name 'config' from partially initialized module 'panel.config' (most likely due to a circular import) (C:\Users\nicol\AppData\Roaming\Python\Python310\site-packages\panel\config.py) This is my code: import panel as pn pn.extension('tabulator', sizing_mode="stretch_width") What am I doing wrong?
[ "Give this a try\nfrom panel import panel as pn\n\n" ]
[ 0 ]
[]
[]
[ "panel", "python" ]
stackoverflow_0074625277_panel_python.txt
Q: How to check if further `scroll down` is not possible using Selenium Am using Selenium + python to scrap a page which has infinite scroll (basically scroll till max first 500 results are shown) Using below code, am able to scroll to bottom of the page. Now i want to stop when further scrolling doesn't fetches any content. (say, page only has 200 results, i don't want to keep on scrolling assuming max 500 result) driver = webdriver.Firefox() driver.get(url) driver.execute_script("window.scrollTo(0, document.body.scrollHeight);") I tried accessing window.pageYOffset but it's coming as None always. A: I'm using Selenium with Chrome, not Firefox, but the following worked for me: capture page height before scrolling down; scroll down using key down; capture page height after scrolling down; if page height was same before and after scrolling, stop scrolling My code looks like this: import time from selenium import webdriver from selenium.webdriver import Chrome from selenium.webdriver.common.keys import Keys driver = webdriver.Chrome() driver.get("www.yourTargetURL.com") reached_page_end = False last_height = driver.execute_script("return document.body.scrollHeight") while not reached_page_end: driver.find_element_by_xpath('//body').send_keys(Keys.END) time.sleep(2) new_height = driver.execute_script("return document.body.scrollHeight") if last_height == new_height: reached_page_end = True else: last_height = new_height driver.quit() A: Just in case, if someone is using playwright. This code snippet is very similar to the ATJ answer. import time from playwright.sync_api import sync_playwright def run(playwright): page = playwright.chromium.launch(headless=False).new_page() page.goto("URL") reached_end = False last_height = page.evaluate("() => document.body.scrollHeight") # scrollHeight: 5879 while not reached_end: page.keyboard.press("End") time.sleep(2) new_height = page.evaluate("() => document.body.scrollHeight") if new_height == last_height: reached_end = True else: last_height = new_height page.close() with sync_playwright() as playwright: run(playwright) A: We can use a hard count while scrolling and as soon we reach that max count we get out of the loop. b=0; boolean x = true; while (x){ WebElement button = null; try { button = driver.findElement(By.xpath("//*[@id='vjs_video_3']/div[7]/div[1]/button[1]")); x= false; } catch (Exception ex){ JavascriptExecutor js = (JavascriptExecutor) driver; js.executeScript("javascript:window.scrollBy(50, 80)"); try { Thread.sleep(500); } catch (InterruptedException e) { e.printStackTrace(); } js.executeScript("javascript:window.scrollBy(50, 50)"); b++; System.out.println("\n"+ b); if(b>50) { System.out.println("out!"); break; } // js.executeScript("javascript:window.scrollBy(50, 180)"); // Thread.sleep(1000); // js.executeScript("javascript:window.scrollBy(50, 150)"); // button is missing } } }
How to check if further `scroll down` is not possible using Selenium
I am using Selenium + Python to scrape a page which has infinite scroll (basically scrolling until at most the first 500 results are shown). Using the code below, I am able to scroll to the bottom of the page. Now I want to stop when further scrolling doesn't fetch any content (say, the page only has 200 results; I don't want to keep scrolling assuming a maximum of 500 results). driver = webdriver.Firefox() driver.get(url) driver.execute_script("window.scrollTo(0, document.body.scrollHeight);") I tried accessing window.pageYOffset but it's coming back as None always.
[ "I'm using Selenium with Chrome, not Firefox, but the following worked for me:\n\ncapture page height before scrolling down;\nscroll down using key down;\ncapture page height after scrolling down;\nif page height was same before and after scrolling, stop scrolling\n\nMy code looks like this:\nimport time\nfrom selenium import webdriver\nfrom selenium.webdriver import Chrome\nfrom selenium.webdriver.common.keys import Keys\n\ndriver = webdriver.Chrome()\ndriver.get(\"www.yourTargetURL.com\")\n\nreached_page_end = False\nlast_height = driver.execute_script(\"return document.body.scrollHeight\")\n\nwhile not reached_page_end:\n driver.find_element_by_xpath('//body').send_keys(Keys.END) \n time.sleep(2)\n new_height = driver.execute_script(\"return document.body.scrollHeight\")\n if last_height == new_height:\n reached_page_end = True\n else:\n last_height = new_height\n\ndriver.quit()\n\n", "Just in case, if someone is using playwright. This code snippet is very similar to the ATJ answer.\nimport time\nfrom playwright.sync_api import sync_playwright\n\n\ndef run(playwright):\n page = playwright.chromium.launch(headless=False).new_page()\n page.goto(\"URL\")\n\n reached_end = False\n last_height = page.evaluate(\"() => document.body.scrollHeight\") # scrollHeight: 5879\n\n while not reached_end:\n page.keyboard.press(\"End\")\n time.sleep(2)\n\n new_height = page.evaluate(\"() => document.body.scrollHeight\")\n if new_height == last_height:\n reached_end = True\n else:\n last_height = new_height\n\n page.close()\n\n\nwith sync_playwright() as playwright:\n run(playwright)\n\n", "We can use a hard count while scrolling and as soon we reach that max count we get out of the loop.\n b=0;\n boolean x = true;\n while (x){\n WebElement button = null;\n try {\n button = driver.findElement(By.xpath(\"//*[@id='vjs_video_3']/div[7]/div[1]/button[1]\"));\n x= false;\n } catch (Exception ex){\n JavascriptExecutor js = (JavascriptExecutor) driver;\n js.executeScript(\"javascript:window.scrollBy(50, 80)\");\n \n try {\n Thread.sleep(500);\n } catch (InterruptedException e) {\n e.printStackTrace();\n } \n js.executeScript(\"javascript:window.scrollBy(50, 50)\"); \n b++;\n System.out.println(\"\\n\"+ b);\n if(b>50) {\n System.out.println(\"out!\");\n break;\n }\n\n// js.executeScript(\"javascript:window.scrollBy(50, 180)\");\n// Thread.sleep(1000);\n// js.executeScript(\"javascript:window.scrollBy(50, 150)\"); // button is missing\n }\n \n\n }\n}\n\n" ]
[ 2, 0, 0 ]
[ "You can check document.body.scrollTop by before and after each scroll attempt if there is no data to fetch then this value will stay the same \ndistanceToTop = driver.execute_script(\"return document.body.scrollTop);\")\n\n" ]
[ -1 ]
[ "python", "selenium" ]
stackoverflow_0044721009_python_selenium.txt
Q: Django error: list index out of range (when there's no objects) Everything works fine until I delete all the objects and try to trigger the url, then it gives me this traceback: list index out of range. I can't use get because there might be more than one object and using [0] with filter leads me to this error when there's no object present, any way around this? I'm trying to get the recently created object of the Ticket model (if created that is) and then perform the logic, so that if the customer doesn't have any tickets, nothing happens but if the customer does then the logic happens Models class Ticket(models.Model): date_posted = models.DateField(auto_now_add=True, blank=True, null=True) customer = models.ForeignKey(Customer, on_delete=models.SET_NULL, blank=True, null=True) Views try: ticket = Ticket.objects.filter(customer=customer).order_by("-id")[0] now = datetime.now().date() set_date = ticket.date_posted check_time = now - set_date <= timedelta(hours=24) if check_time: print('working') else: print('not working') except Ticket.DoesNotExist: ticket = None context = {"check_time": check_time} A: You can also do this: ticket = Ticket.objects.filter(customer=customer).order_by("-id").first() or None if ticket is not None: now = datetime.now().date() set_date = ticket.date_posted check_time = now - set_date <= timedelta(hours=24) if check_time: print('working') else: print('not working') context = {"check_time": check_time} instead of: ticket = Ticket.objects.filter(customer=customer).order_by("-id")[0] A: Instead of this: ticket = Ticket.objects.filter(customer=customer).order_by("-id")[0] Use this using exists() which is a very efficient way if there is any object exist in DB: tickets = Ticket.objects.filter(customer=customer).order_by("-id") if tickets.exists(): ticket = tickets.first() else: ticket = None Update You can do the query inside the filter function. tickets = Ticket.objects.filter(customer=customer, date_posted__lte=timezone.now().date() - timedelta(hours=24)) context = {"check_time": tickets.exists()}
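The root cause here is that indexing an empty queryset raises IndexError, which the except Ticket.DoesNotExist clause never catches. A minimal sketch of the first answer's .first() approach, assuming the Ticket model and the customer variable from the question:

from datetime import timedelta
from django.utils import timezone

# .first() returns None instead of raising when the queryset is empty.
ticket = Ticket.objects.filter(customer=customer).order_by("-id").first()

check_time = False
if ticket is not None:
    # date_posted is a DateField, so compare it against today's date.
    check_time = timezone.now().date() - ticket.date_posted <= timedelta(hours=24)

context = {"check_time": check_time}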
Django error: list index out of range (when there's no objects)
Everything works fine until I delete all the objects and trigger the URL; then it gives me this traceback: list index out of range. I can't use get because there might be more than one object, and using [0] with filter leads me to this error when there's no object present. Is there any way around this? I'm trying to get the most recently created Ticket object (if one exists) and then perform the logic, so that if the customer doesn't have any tickets nothing happens, but if the customer does, the logic runs. Models class Ticket(models.Model): date_posted = models.DateField(auto_now_add=True, blank=True, null=True) customer = models.ForeignKey(Customer, on_delete=models.SET_NULL, blank=True, null=True) Views try: ticket = Ticket.objects.filter(customer=customer).order_by("-id")[0] now = datetime.now().date() set_date = ticket.date_posted check_time = now - set_date <= timedelta(hours=24) if check_time: print('working') else: print('not working') except Ticket.DoesNotExist: ticket = None context = {"check_time": check_time}
[ "You can also do this:\nticket = Ticket.objects.filter(customer=customer).order_by(\"-id\").first() or None\nif ticket is not None: \n now = datetime.now().date()\n set_date = ticket.date_posted\n check_time = now - set_date <= timedelta(hours=24)\n if check_time:\n print('working')\n else:\n print('not working')\n context = {\"check_time\": check_time}\n\ninstead of:\nticket = Ticket.objects.filter(customer=customer).order_by(\"-id\")[0]\n\n", "Instead of this:\nticket = Ticket.objects.filter(customer=customer).order_by(\"-id\")[0]\n\nUse this using exists() which is a very efficient way if there is any object exist in DB:\ntickets = Ticket.objects.filter(customer=customer).order_by(\"-id\")\nif tickets.exists():\n ticket = tickets.first()\nelse:\n ticket = None\n\nUpdate\nYou can do the query inside the filter function.\ntickets = Ticket.objects.filter(customer=customer, date_posted__lte=timezone.now().date() - timedelta(hours=24))\n\ncontext = {\"check_time\": tickets.exists()}\n\n" ]
[ 2, 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074624343_django_python.txt
Q: Get Top 3 max values I used to have a list and only needed to extract the max values in column 33 every day using below code and then export the data. df_= pd.read_excel (r'file_location.xlsx') df['Date'] = pd.to_datetime(df['Date'], errors='coerce') df_new = (df.groupby(pd.Grouper(key="Date",freq="D")) .agg({df.columns[33]: np.max}) .reset_index()) Now I have a new task to extract the top 3 valus in the same column everyday. I tried below code but doesn't work. Any idea? df_= pd.read_excel (r'file_location.xlsx') df['Date'] = pd.to_datetime(df['Date'], errors='coerce') df_new = (df.groupby(pd.Grouper(key="Date",freq="D")) .agg({df.columns[33]: np.head(3)}) .reset_index()) A: You need specify column after groupby and call GroupBy.head without agg: df_e_new = df.groupby(pd.Grouper(key="Date",freq="D"))[df.columns[33]].head(3) Or use SeriesGroupBy.nlargest for top3 sorted values: df_e_new = df.groupby(pd.Grouper(key="Date",freq="D"))[df.columns[33]].nlargest(3) For sum top3 values use lambda function: df_e_new = (df.groupby(pd.Grouper(key="Date",freq="D"))[df.columns[33]] .agg(lambda x: x.nlargest(3).sum()) .reset_index(name='top3sum'))
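To make the accepted approach concrete, here is a small self-contained sketch with made-up data and column names (the real file uses df.columns[33]); it shows both the per-day top-3 rows and the per-day top-3 sum:

import pandas as pd

# Tiny stand-in for the Excel data; "value" plays the role of column 33.
df = pd.DataFrame({
    "Date": pd.to_datetime(["2022-11-28"] * 4 + ["2022-11-29"] * 4),
    "value": [5, 9, 1, 7, 3, 8, 2, 6],
})

# Top 3 values per calendar day.
top3 = df.groupby(pd.Grouper(key="Date", freq="D"))["value"].nlargest(3)
print(top3)

# Sum of the daily top 3 as a flat table.
top3_sum = (df.groupby(pd.Grouper(key="Date", freq="D"))["value"]
              .agg(lambda s: s.nlargest(3).sum())
              .reset_index(name="top3sum"))
print(top3_sum)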
Get Top 3 max values
I used to have a list and only needed to extract the max value in column 33 for every day using the code below, and then export the data. df_= pd.read_excel (r'file_location.xlsx') df['Date'] = pd.to_datetime(df['Date'], errors='coerce') df_new = (df.groupby(pd.Grouper(key="Date",freq="D")) .agg({df.columns[33]: np.max}) .reset_index()) Now I have a new task: extract the top 3 values in the same column for every day. I tried the code below, but it doesn't work. Any idea? df_= pd.read_excel (r'file_location.xlsx') df['Date'] = pd.to_datetime(df['Date'], errors='coerce') df_new = (df.groupby(pd.Grouper(key="Date",freq="D")) .agg({df.columns[33]: np.head(3)}) .reset_index())
[ "You need specify column after groupby and call GroupBy.head without agg:\ndf_e_new = df.groupby(pd.Grouper(key=\"Date\",freq=\"D\"))[df.columns[33]].head(3)\n \n\nOr use SeriesGroupBy.nlargest for top3 sorted values:\ndf_e_new = df.groupby(pd.Grouper(key=\"Date\",freq=\"D\"))[df.columns[33]].nlargest(3)\n\nFor sum top3 values use lambda function:\ndf_e_new = (df.groupby(pd.Grouper(key=\"Date\",freq=\"D\"))[df.columns[33]]\n .agg(lambda x: x.nlargest(3).sum())\n .reset_index(name='top3sum'))\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074625434_pandas_python.txt
Q: How do I change background color in pysimplegui? I'm working on UI for a virus simulation me and a friend are making and I'm really struggling to change the background color of the UI. I took most of the code from the one of the demo projects because i couldn't figure out how to implement matplotlib with pysimplegui so there's some things I don't fully understand but usually with pysimplegui it's as simple as sg.theme="color to change the main background color but it isn't working this time. Any help would be really appreciated, thanks. import PySimpleGUI as sg import numpy as np import tkinter import matplotlib.pyplot as plt from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2Tk def draw_figure(canvas, fig): if canvas.children: for child in canvas.winfo_children(): child.destroy() figure_canvas_agg = FigureCanvasTkAgg(fig, master=canvas) figure_canvas_agg.draw() figure_canvas_agg.get_tk_widget().pack(side='right', fill='both', expand=1) # ------------------------------- PySimpleGUI CODE layout = [ [sg.Text("Population size: "), sg.Input(key="-POPULATIONSIZE-")], [sg.Text("Duration: "), sg.Input(key="-DURATION-")], [sg.Text("R Number: "), sg.Input(key="-RNUMBER-")], [sg.Text("Starting Infections: "), sg.Input(key="-STARTINGINFECTIONS-")], [sg.B('OK'), sg.B('Exit')], [sg.Canvas(key='controls_cv')], [sg.T('Figure:')], [sg.Column( layout=[ [sg.Canvas(key='fig_cv', size=(400 * 2, 400) )] ], background_color='#DAE0E6', pad=(0, 0) )], ] window = sg.Window('Virus Simulation', layout,) while True: event, values = window.read() print(event, values) if event in (sg.WIN_CLOSED, 'Exit'): break elif event is 'OK': # ------------------------------- PASTE YOUR MATPLOTLIB CODE HERE plt.figure(1) fig = plt.gcf() DPI = fig.get_dpi() # ------------------------------- you have to play with this size to reduce the movement error when the mouse hovers over the figure, it's close to canvas size fig.set_size_inches(404 * 2 / float(DPI), 404 / float(DPI)) # ------------------------------- x = list(range(1, 100)) y = list(range(1, 100)) plt.plot(x, y, color="r") plt.title('Virus Infections Data') plt.xlabel('Time in Days') plt.ylabel('Infections') plt.grid() # ------------------------------- Instead of plt.show() draw_figure(window['fig_cv'].TKCanvas, fig,) window.close() A: You can change the background color by specifying a Hex Color Code in the following argument under layout: background_color='#DAE0E6' You can use a Color Picker like this one https://htmlcolorcodes.com/color-picker/ to get your color You can also use: window = sg.Window('Virus Simulation', layout, background_color='hex_color_code') To change the color of a window object A: I'm a tad confused in 2 ways. 1 - I don't see a call to sg.theme() in your code, so I don't know where it was in the code. WHERE it is placed matters. Always place the theme as early as possible, definitely before making your layout. 2 - I don't know what "it isn't working this time means". Again, it's more of needing to see a complete example to get it. The sample code in the question that you said normally works was weirdly formatted so something must have been scrambled. The question shows: sg.theme="color But as Jason has pointed out, the value passed to sg.theme() is more than a color. The "Theme Name Formula" is described in the main PySimpleGUI documentation here - https://pysimplegui.readthedocs.io/en/latest/#theme-name-formula. 
In case there's a problem getting to that section, here's what it says: Theme Name Formula Themes names that you specify can be "fuzzy". The text does not have to match exactly what you see printed. For example "Dark Blue 3" and "DarkBlue3" and "dark blue 3" all work. One way to quickly determine the best setting for your window is to simply display your window using a lot of different themes. Add the line of code to set the theme - theme('Dark Green 1'), run your code, see if you like it, if not, change the theme string to 'Dark Green 2' and try again. Repeat until you find something you like. The "Formula" for the string is: Dark Color # or Light Color # Color can be Blue, Green, Black, Gray, Purple, Brown, Teal, Red. The # is optional or can be from 1 to XX. Some colors have a lot of choices. There are 13 "Light Brown" choices for example. If you want to only change the background color of your theme, then you can use individual color names or hex values. sg.theme_background_color('#FF0000') or sg.theme_background_color('red') will set the background color to red. Hope that helps with themes. Nice work on using the PSG coding conventions. Looking at your code was effortless as a result. Zero guesswork as to what I was seeing. Great to see and it helps in numerous ways when you use them. A: I tinkered to change the background color and found this solution. Create a theme as a dictionary Add the theme with theme_add_new function Specify this theme The question code edited: import PySimpleGUI as sg import matplotlib.pyplot as plt from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg new_theme = {"BACKGROUND": '#DAE0E6', "TEXT": sg.COLOR_SYSTEM_DEFAULT, "INPUT": sg.COLOR_SYSTEM_DEFAULT, "TEXT_INPUT": sg.COLOR_SYSTEM_DEFAULT, "SCROLL": sg.COLOR_SYSTEM_DEFAULT, "BUTTON": sg.OFFICIAL_PYSIMPLEGUI_BUTTON_COLOR, "PROGRESS": sg.COLOR_SYSTEM_DEFAULT, "BORDER": 1, "SLIDER_DEPTH": 1, "PROGRESS_DEPTH": 0 } sg.theme_add_new('MyTheme', new_theme) sg.theme('MyTheme') def draw_figure(canvas, fig): if canvas.children: for child in canvas.winfo_children(): child.destroy() figure_canvas_agg = FigureCanvasTkAgg(fig, master=canvas) figure_canvas_agg.draw() figure_canvas_agg.get_tk_widget().pack(side='right', fill='both', expand=1) # ------------------------------- PySimpleGUI CODE layout = [ [sg.Text("Population size: "), sg.Input(key="-POPULATIONSIZE-")], [sg.Text("Duration: "), sg.Input(key="-DURATION-")], [sg.Text("R Number: "), sg.Input(key="-RNUMBER-")], [sg.Text("Starting Infections: "), sg.Input(key="-STARTINGINFECTIONS-")], [sg.B('OK'), sg.B('Exit')], [sg.Canvas(key='controls_cv')], [sg.T('Figure:')], [sg.Column( layout=[ [sg.Canvas(key='fig_cv', size=(400 * 2, 400) )] ], # background_color='#DAE0E6', pad=(0, 0) )], ] window = sg.Window('Virus Simulation', layout,) while True: event, values = window.read() print(event, values) if event in (sg.WIN_CLOSED, 'Exit'): break elif event == 'OK': # ------------------------------- PASTE YOUR MATPLOTLIB CODE HERE plt.figure(1) fig = plt.gcf() DPI = fig.get_dpi() # ------------------------------- you have to play with this size to reduce the movement error when the mouse hovers over the figure, it's close to canvas size fig.set_size_inches(404 * 2 / float(DPI), 404 / float(DPI)) # ------------------------------- x = list(range(1, 100)) y = list(range(1, 100)) plt.plot(x, y, color="r") plt.title('Virus Infections Data') plt.xlabel('Time in Days') plt.ylabel('Infections') plt.grid() # ------------------------------- Instead of plt.show() 
draw_figure(window['fig_cv'].TKCanvas, fig,) window.close()
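If only the window background needs to change (rather than defining a full custom theme), the per-element color setters mentioned in the second answer are enough. A minimal sketch, assuming a recent PySimpleGUI version; the calls must run before the layout is built:

import PySimpleGUI as sg

# Set the background colors before constructing the layout, since placement matters.
sg.theme_background_color('#DAE0E6')
sg.theme_text_element_background_color('#DAE0E6')

layout = [[sg.Text('Background demo')], [sg.Button('Exit')]]
window = sg.Window('Background color sketch', layout)

while True:
    event, values = window.read()
    if event in (sg.WIN_CLOSED, 'Exit'):
        break
window.close()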
How do I change background color in pysimplegui?
I'm working on UI for a virus simulation me and a friend are making and I'm really struggling to change the background color of the UI. I took most of the code from the one of the demo projects because i couldn't figure out how to implement matplotlib with pysimplegui so there's some things I don't fully understand but usually with pysimplegui it's as simple as sg.theme="color to change the main background color but it isn't working this time. Any help would be really appreciated, thanks. import PySimpleGUI as sg import numpy as np import tkinter import matplotlib.pyplot as plt from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2Tk def draw_figure(canvas, fig): if canvas.children: for child in canvas.winfo_children(): child.destroy() figure_canvas_agg = FigureCanvasTkAgg(fig, master=canvas) figure_canvas_agg.draw() figure_canvas_agg.get_tk_widget().pack(side='right', fill='both', expand=1) # ------------------------------- PySimpleGUI CODE layout = [ [sg.Text("Population size: "), sg.Input(key="-POPULATIONSIZE-")], [sg.Text("Duration: "), sg.Input(key="-DURATION-")], [sg.Text("R Number: "), sg.Input(key="-RNUMBER-")], [sg.Text("Starting Infections: "), sg.Input(key="-STARTINGINFECTIONS-")], [sg.B('OK'), sg.B('Exit')], [sg.Canvas(key='controls_cv')], [sg.T('Figure:')], [sg.Column( layout=[ [sg.Canvas(key='fig_cv', size=(400 * 2, 400) )] ], background_color='#DAE0E6', pad=(0, 0) )], ] window = sg.Window('Virus Simulation', layout,) while True: event, values = window.read() print(event, values) if event in (sg.WIN_CLOSED, 'Exit'): break elif event is 'OK': # ------------------------------- PASTE YOUR MATPLOTLIB CODE HERE plt.figure(1) fig = plt.gcf() DPI = fig.get_dpi() # ------------------------------- you have to play with this size to reduce the movement error when the mouse hovers over the figure, it's close to canvas size fig.set_size_inches(404 * 2 / float(DPI), 404 / float(DPI)) # ------------------------------- x = list(range(1, 100)) y = list(range(1, 100)) plt.plot(x, y, color="r") plt.title('Virus Infections Data') plt.xlabel('Time in Days') plt.ylabel('Infections') plt.grid() # ------------------------------- Instead of plt.show() draw_figure(window['fig_cv'].TKCanvas, fig,) window.close()
[ "You can change the background color by specifying a Hex Color Code in the following argument under layout:\nbackground_color='#DAE0E6'\n\nYou can use a Color Picker like this one https://htmlcolorcodes.com/color-picker/ to get your color\nYou can also use:\nwindow = sg.Window('Virus Simulation', layout, background_color='hex_color_code')\n\nTo change the color of a window object\n", "I'm a tad confused in 2 ways.\n1 - I don't see a call to sg.theme() in your code, so I don't know where it was in the code. WHERE it is placed matters. Always place the theme as early as possible, definitely before making your layout.\n2 - I don't know what \"it isn't working this time means\". Again, it's more of needing to see a complete example to get it.\nThe sample code in the question that you said normally works was weirdly formatted so something must have been scrambled.\nThe question shows: sg.theme=\"color\nBut as Jason has pointed out, the value passed to sg.theme() is more than a color. The \"Theme Name Formula\" is described in the main PySimpleGUI documentation here - https://pysimplegui.readthedocs.io/en/latest/#theme-name-formula. In case there's a problem getting to that section, here's what it says:\nTheme Name Formula\nThemes names that you specify can be \"fuzzy\". The text does not have to match exactly what you see printed. For example \"Dark Blue 3\" and \"DarkBlue3\" and \"dark blue 3\" all work.\n\nOne way to quickly determine the best setting for your window is to simply display your window using a lot of different themes. Add the line of code to set the theme - theme('Dark Green 1'), run your code, see if you like it, if not, change the theme string to 'Dark Green 2' and try again. Repeat until you find something you like.\n\nThe \"Formula\" for the string is:\n\nDark Color #\n\nor\n\nLight Color #\n\nColor can be Blue, Green, Black, Gray, Purple, Brown, Teal, Red. The # is optional or can be from 1 to XX. Some colors have a lot of choices. There are 13 \"Light Brown\" choices for example.\n\nIf you want to only change the background color of your theme, then you can use individual color names or hex values. sg.theme_background_color('#FF0000') or sg.theme_background_color('red') will set the background color to red.\nHope that helps with themes.\n\nNice work on using the PSG coding conventions. Looking at your code was effortless as a result. Zero guesswork as to what I was seeing. 
Great to see and it helps in numerous ways when you use them.\n", "I tinkered to change the background color and found this solution.\n\nCreate a theme as a dictionary\nAdd the theme with theme_add_new function\nSpecify this theme\n\nThe question code edited:\nimport PySimpleGUI as sg\n\nimport matplotlib.pyplot as plt\nfrom matplotlib.backends.backend_tkagg import FigureCanvasTkAgg\n\nnew_theme = {\"BACKGROUND\": '#DAE0E6', \"TEXT\": sg.COLOR_SYSTEM_DEFAULT, \"INPUT\": sg.COLOR_SYSTEM_DEFAULT,\n \"TEXT_INPUT\": sg.COLOR_SYSTEM_DEFAULT, \"SCROLL\": sg.COLOR_SYSTEM_DEFAULT,\n \"BUTTON\": sg.OFFICIAL_PYSIMPLEGUI_BUTTON_COLOR, \"PROGRESS\": sg.COLOR_SYSTEM_DEFAULT, \"BORDER\": 1,\n \"SLIDER_DEPTH\": 1, \"PROGRESS_DEPTH\": 0\n }\n\nsg.theme_add_new('MyTheme', new_theme)\nsg.theme('MyTheme')\n\n\ndef draw_figure(canvas, fig):\n if canvas.children:\n for child in canvas.winfo_children():\n child.destroy()\n figure_canvas_agg = FigureCanvasTkAgg(fig, master=canvas)\n figure_canvas_agg.draw()\n figure_canvas_agg.get_tk_widget().pack(side='right', fill='both', expand=1)\n\n\n# ------------------------------- PySimpleGUI CODE\n\nlayout = [\n [sg.Text(\"Population size: \"), sg.Input(key=\"-POPULATIONSIZE-\")],\n [sg.Text(\"Duration: \"), sg.Input(key=\"-DURATION-\")],\n [sg.Text(\"R Number: \"), sg.Input(key=\"-RNUMBER-\")],\n [sg.Text(\"Starting Infections: \"), sg.Input(key=\"-STARTINGINFECTIONS-\")],\n [sg.B('OK'), sg.B('Exit')],\n [sg.Canvas(key='controls_cv')],\n [sg.T('Figure:')],\n [sg.Column(\n layout=[\n [sg.Canvas(key='fig_cv',\n size=(400 * 2, 400)\n )]\n ],\n # background_color='#DAE0E6',\n pad=(0, 0)\n )],\n]\n\nwindow = sg.Window('Virus Simulation', layout,)\n\nwhile True:\n event, values = window.read()\n print(event, values)\n if event in (sg.WIN_CLOSED, 'Exit'):\n break\n elif event == 'OK':\n # ------------------------------- PASTE YOUR MATPLOTLIB CODE HERE\n plt.figure(1)\n fig = plt.gcf()\n DPI = fig.get_dpi()\n # ------------------------------- you have to play with this size to reduce the movement error when the mouse hovers over the figure, it's close to canvas size\n fig.set_size_inches(404 * 2 / float(DPI), 404 / float(DPI))\n # -------------------------------\n x = list(range(1, 100))\n y = list(range(1, 100))\n plt.plot(x, y, color=\"r\")\n plt.title('Virus Infections Data')\n plt.xlabel('Time in Days')\n plt.ylabel('Infections')\n plt.grid()\n\n # ------------------------------- Instead of plt.show()\n draw_figure(window['fig_cv'].TKCanvas, fig,)\n\n\nwindow.close()\n\n" ]
[ 3, 0, 0 ]
[]
[]
[ "pysimplegui", "python", "tkinter", "user_interface" ]
stackoverflow_0069151062_pysimplegui_python_tkinter_user_interface.txt
Q: Speed logic for pygame game I'm doing a final project for my coding class at school. I learnt from a tutorial online on how to use pygame by making pong. Then, I decided to create an Undertale battle system with the knowledge I had gained from learning how to make pong in pygame. I have come across an issue however, and its regarding the heart's speed. Here is the code: import pygame import sys # setup pygame.init() clock = pygame.time.Clock() # main window screen_width = 900 screen_height = 600 screen = pygame.display.set_mode((screen_width, screen_height)) pygame.display.set_caption('Undertale') icon = pygame.image.load('heart.png') pygame.display.set_icon(icon) # game rectangles heart = pygame.image.load('heart.png') DEFAULT_IMAGE_SIZE = (20, 20) heart_img = pygame.transform.scale(heart, DEFAULT_IMAGE_SIZE) battle_width = 10 battle_height = 200 middle_offset = 30 length = 200 + 2*battle_width battle_left = pygame.Rect(screen_width/2 - 100, screen_height/2 + middle_offset, battle_width, battle_height) battle_right = pygame.Rect(screen_width/2 + 100 + battle_width, screen_height/2 + 30, battle_width, battle_height) battle_up = pygame.Rect(screen_width/2 - 100, screen_height/2 + middle_offset, length, battle_width) battle_down = pygame.Rect(screen_width/2 - 100, screen_height/2 + middle_offset + battle_height, length, battle_width) # colours bg_color = pygame.Color(0, 0, 0) red = (255, 0, 0) white = (255, 255, 255) # game vars heart_x = screen_width/2 heart_y = screen_height/2 + middle_offset + battle_height/2 - 10 # Using half of Default Image Size Width heart_speed_x = 0 heart_speed_y = 0 speed = 4 while True: # input handles for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() sys.exit() if event.type == pygame.KEYDOWN: if event.key == pygame.K_LEFT: heart_speed_x -= speed if event.key == pygame.K_RIGHT: heart_speed_x += speed if event.key == pygame.K_UP: heart_speed_y -= speed if event.key == pygame.K_DOWN: heart_speed_y += speed if event.type == pygame.KEYUP: if event.key == pygame.K_LEFT: heart_speed_x += speed if event.key == pygame.K_RIGHT: heart_speed_x -= speed if event.key == pygame.K_UP: heart_speed_y += speed if event.key == pygame.K_DOWN: heart_speed_y -= speed # logic heart_x += heart_speed_x heart_y += heart_speed_y heart_rect = heart_img.get_rect(topleft=(heart_x, heart_y)) if heart_rect.colliderect(battle_left) or heart_rect.colliderect(battle_right): heart_speed_x *= -1 if heart_rect.colliderect(battle_up) or heart_rect.colliderect(battle_down): heart_speed_y *= -1 # visuals screen.fill(bg_color) pygame.draw.rect(screen, white, battle_left) pygame.draw.rect(screen, white, battle_right) pygame.draw.rect(screen, white, battle_up) pygame.draw.rect(screen, white, battle_down) screen.blit(heart_img, (heart_x, heart_y)) # window update pygame.display.flip() clock.tick(60) The heart is the player. I want it to be able to move around inside the box, and I do not want it to come out, I want it to stay inside. I made it so that the heart image is a rectangle so that it can collide with the box. It worked, but the speed makes it so that the heart is constantly moving, I want it to stop whenever the KEYUP event occurs. And that's what I did, yet it refused to cooperate. A: I figured out the solution: heart_x += 0 - heart_speed_x and heart_y += 0 - heart_speed_y The problem was that the either or both of the x and y position constantly changed after hitting the surface. This was because the speed of the variables stayed the same. 
So in the solution, I made it so that the x and y positions only change once it conditionally checks for collision.
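An alternative to flipping the velocity sign (different from what the asker actually did above) is to clamp the heart's rect inside the box every frame and poll held keys directly, so the heart simply stops at the walls. A rough sketch under those assumptions, with a made-up box rectangle:

import pygame

pygame.init()
screen = pygame.display.set_mode((900, 600))
clock = pygame.time.Clock()

box = pygame.Rect(350, 330, 220, 210)   # hypothetical battle box
heart = pygame.Rect(0, 0, 20, 20)
heart.center = box.center
speed = 4

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    # Poll held keys each frame instead of tracking KEYDOWN/KEYUP velocities.
    keys = pygame.key.get_pressed()
    heart.x += (keys[pygame.K_RIGHT] - keys[pygame.K_LEFT]) * speed
    heart.y += (keys[pygame.K_DOWN] - keys[pygame.K_UP]) * speed

    # Keep the heart fully inside the box; no bouncing, it just stops at the walls.
    heart.clamp_ip(box)

    screen.fill((0, 0, 0))
    pygame.draw.rect(screen, (255, 255, 255), box, 2)
    pygame.draw.rect(screen, (255, 0, 0), heart)
    pygame.display.flip()
    clock.tick(60)

pygame.quit()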
Speed logic for pygame game
I'm doing a final project for my coding class at school. I learnt from a tutorial online on how to use pygame by making pong. Then, I decided to create an Undertale battle system with the knowledge I had gained from learning how to make pong in pygame. I have come across an issue however, and its regarding the heart's speed. Here is the code: import pygame import sys # setup pygame.init() clock = pygame.time.Clock() # main window screen_width = 900 screen_height = 600 screen = pygame.display.set_mode((screen_width, screen_height)) pygame.display.set_caption('Undertale') icon = pygame.image.load('heart.png') pygame.display.set_icon(icon) # game rectangles heart = pygame.image.load('heart.png') DEFAULT_IMAGE_SIZE = (20, 20) heart_img = pygame.transform.scale(heart, DEFAULT_IMAGE_SIZE) battle_width = 10 battle_height = 200 middle_offset = 30 length = 200 + 2*battle_width battle_left = pygame.Rect(screen_width/2 - 100, screen_height/2 + middle_offset, battle_width, battle_height) battle_right = pygame.Rect(screen_width/2 + 100 + battle_width, screen_height/2 + 30, battle_width, battle_height) battle_up = pygame.Rect(screen_width/2 - 100, screen_height/2 + middle_offset, length, battle_width) battle_down = pygame.Rect(screen_width/2 - 100, screen_height/2 + middle_offset + battle_height, length, battle_width) # colours bg_color = pygame.Color(0, 0, 0) red = (255, 0, 0) white = (255, 255, 255) # game vars heart_x = screen_width/2 heart_y = screen_height/2 + middle_offset + battle_height/2 - 10 # Using half of Default Image Size Width heart_speed_x = 0 heart_speed_y = 0 speed = 4 while True: # input handles for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() sys.exit() if event.type == pygame.KEYDOWN: if event.key == pygame.K_LEFT: heart_speed_x -= speed if event.key == pygame.K_RIGHT: heart_speed_x += speed if event.key == pygame.K_UP: heart_speed_y -= speed if event.key == pygame.K_DOWN: heart_speed_y += speed if event.type == pygame.KEYUP: if event.key == pygame.K_LEFT: heart_speed_x += speed if event.key == pygame.K_RIGHT: heart_speed_x -= speed if event.key == pygame.K_UP: heart_speed_y += speed if event.key == pygame.K_DOWN: heart_speed_y -= speed # logic heart_x += heart_speed_x heart_y += heart_speed_y heart_rect = heart_img.get_rect(topleft=(heart_x, heart_y)) if heart_rect.colliderect(battle_left) or heart_rect.colliderect(battle_right): heart_speed_x *= -1 if heart_rect.colliderect(battle_up) or heart_rect.colliderect(battle_down): heart_speed_y *= -1 # visuals screen.fill(bg_color) pygame.draw.rect(screen, white, battle_left) pygame.draw.rect(screen, white, battle_right) pygame.draw.rect(screen, white, battle_up) pygame.draw.rect(screen, white, battle_down) screen.blit(heart_img, (heart_x, heart_y)) # window update pygame.display.flip() clock.tick(60) The heart is the player. I want it to be able to move around inside the box, and I do not want it to come out, I want it to stay inside. I made it so that the heart image is a rectangle so that it can collide with the box. It worked, but the speed makes it so that the heart is constantly moving, I want it to stop whenever the KEYUP event occurs. And that's what I did, yet it refused to cooperate.
[ "I figured out the solution:\nheart_x += 0 - heart_speed_x\n\nand\nheart_y += 0 - heart_speed_y\n\nThe problem was that the either or both of the x and y position constantly changed after hitting the surface. This was because the speed of the variables stayed the same. So in the solution, I made it so that the x any y positions only change once once it conditionally checks for collision.\n" ]
[ 0 ]
[]
[]
[ "pygame", "python" ]
stackoverflow_0074624768_pygame_python.txt
Q: how to redirect to homepage in django-microsoft-auth if there is no next parameter? I registered the app in Azure AD and configured my Django app so I can log in using a Microsoft account. The problem I am facing is changing the redirect from the admin page to my homepage. My redirect URI on Azure looks like this: https://localhost:8000/microsoft/auth-callback/ What do I need to do to change the redirect to https://localhost:8000/home A: After a few days of researching I cloned the repository and found in the login.js file this piece of code: // redirect to next URL if it was provided let new_path = this.parseGETParam('next') || '/admin'; window.location = origin + new_path; The only thing to do is to change /admin to /home or whatever path you want to put. No need to change anything in the callback function. I hope it will help someone.
how to redirect to homepage in django-microsoft-auth if there is no next parameter?
I registered the app in Azure AD and configured my Django app so I can log in using a Microsoft account. The problem I am facing is changing the redirect from the admin page to my homepage. My redirect URI on Azure looks like this: https://localhost:8000/microsoft/auth-callback/ What do I need to do to change the redirect to https://localhost:8000/home
[ "After a few days of researching I cloned the repository and found in the login.js file this piece of code:\n// redirect to next URL if it was provided\nlet new_path = this.parseGETParam('next') || '/admin';\nwindow.location = origin + new_path;\n\nThe only thing to do is to change /admin to /home or whatever path you want to put. No need to change anything in the callback function.\nI hope it will help someone.\n" ]
[ 0 ]
[]
[]
[ "django", "django_authentication", "python" ]
stackoverflow_0074564581_django_django_authentication_python.txt
Q: 503 service is unavailable in Analytics Reporting API V4 Sample Call: https://analyticsreporting.googleapis.com/v4/reports:batchGet?alt=json Sample Error: googleapiclient.errors.HttpError: <HttpError 503 when requesting https://analyticsreporting.googleapis.com/v4/reports:batchGet?alt=json returned "The service is currently unavailable.". Details: "The service is currently unavailable."> Calling it once an hour instead of backoff fails every time. Then, after a day or two, it intermittently succeeds. Only one account has a problem. The rest is fine. code: query = { "reportRequests": [ { "viewId": str(view_id), "dateRanges": [ { "startDate": "2022-11-29", "endDate": "2022-11-29", } ], "metrics": [{"expression": "ga:users"}, {"expression": "ga:newUsers"}, {"expression": "ga:transactions"}], "dimensions": [{"name": "ga:source"}, {"name": "ga:medium"}, {"name": "ga:campaign"}, {"name": "ga:adContent"}, "pageSize": 100_000, } ] } return service.reports().batchGet(body=query).execute() Thanks for your help. A: First off 500 errors are something is wrong on Googles end. There is nothing you can do to fix it. Its often caused by the server being over loaded, and your script timing out. A tip would be to never run on the hour, everyone that has a cron job set up has it set to run on the hour and your going to be completing for server processing with everyone else. I cant see anything wrong with the code you have. PRO tip. the google apis python client library has back off built in. So if your seeing a 500 error the library has already retried the call about ten times before you are even seeing the error.
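The client library's built-in retries aside, an explicit backoff wrapper makes the transient 503s visible and tunable. A sketch, assuming the service and query objects from the question; the retry count and sleep times are arbitrary choices, not values from the original post.

import random
import time

from googleapiclient.errors import HttpError

def batch_get_with_backoff(service, query, max_retries=5):
    # Retry only server-side (5xx) failures; anything else is re-raised immediately.
    for attempt in range(max_retries):
        try:
            return service.reports().batchGet(body=query).execute()
        except HttpError as err:
            if err.resp.status >= 500 and attempt < max_retries - 1:
                time.sleep((2 ** attempt) + random.random())  # exponential backoff with jitter
                continue
            raise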
503 service is unavailable in Analytics Reporting API V4
Sample Call: https://analyticsreporting.googleapis.com/v4/reports:batchGet?alt=json Sample Error: googleapiclient.errors.HttpError: <HttpError 503 when requesting https://analyticsreporting.googleapis.com/v4/reports:batchGet?alt=json returned "The service is currently unavailable.". Details: "The service is currently unavailable."> Calling it once an hour, instead of with backoff, fails every time. Then, after a day or two, it intermittently succeeds. Only one account has a problem; the rest are fine. code: query = { "reportRequests": [ { "viewId": str(view_id), "dateRanges": [ { "startDate": "2022-11-29", "endDate": "2022-11-29", } ], "metrics": [{"expression": "ga:users"}, {"expression": "ga:newUsers"}, {"expression": "ga:transactions"}], "dimensions": [{"name": "ga:source"}, {"name": "ga:medium"}, {"name": "ga:campaign"}, {"name": "ga:adContent"}], "pageSize": 100_000, } ] } return service.reports().batchGet(body=query).execute() Thanks for your help.
[ "First off 500 errors are something is wrong on Googles end. There is nothing you can do to fix it. Its often caused by the server being over loaded, and your script timing out.\nA tip would be to never run on the hour, everyone that has a cron job set up has it set to run on the hour and your going to be completing for server processing with everyone else.\nI cant see anything wrong with the code you have.\nPRO tip. the google apis python client library has back off built in. So if your seeing a 500 error the library has already retried the call about ten times before you are even seeing the error.\n" ]
[ 1 ]
[]
[]
[ "google_analytics_api", "google_api", "google_api_python_client", "python" ]
stackoverflow_0074624361_google_analytics_api_google_api_google_api_python_client_python.txt
Q: Django/Python - AssertionError: Class ThreadSerializer missing "Meta.model" attribute I've been trying to build the backend of a forum board for a mobile application and I've been running into an issue when trying to test the API endpoints over Postman. It says that my ThreadSerializer class is missing the "Meta.model" attribute. My serializer code: from rest_framework import serializers from forum.models import Thread from forum.models import Post class ThreadSerializer(serializers.ModelSerializer): class Meta: Thread_Model = Thread Thread_Fields = ['id', 'thread_id', 'title', 'desc', 'created_at'] class PostSerializer(serializers.ModelSerializer): class Meta: Post_Model = Post Post_Fields = ['id', 'post_id', 'post_content', 'post_time', 'thread_id'] Model code --- from django.db import models from django.conf import settings from Authentication.models import User # Create your models here. class Thread(models.Model): userid = models.ForeignKey(User, on_delete=models.CASCADE) thread_id = models.AutoField(primary_key=True) title = models.CharField(max_length=100) desc = models.TextField() created_at = models.DateTimeField(auto_now_add=True) class Post(models.Model): userid = models.ForeignKey(User, on_delete=models.CASCADE) thread_id = models.ForeignKey(Thread, on_delete=models.CASCADE) post_id = models.AutoField(primary_key=True) post_content = models.TextField() post_time = models.DateTimeField(auto_now_add=True) Any advice on how to fix this? This is the error I got on VSC: and Postman: A: change from rest_framework import serializers from forum.models import Thread from forum.models import Post class ThreadSerializer(serializers.ModelSerializer): class Meta: Thread_Model = Thread Thread_Fields = ['id', 'thread_id', 'title', 'desc', 'created_at'] class PostSerializer(serializers.ModelSerializer): class Meta: Post_Model = Post Post_Fields = ['id', 'post_id', 'post_content', 'post_time', 'thread_id'] to: from rest_framework import serializers from forum.models import Thread from forum.models import Post class ThreadSerializer(serializers.ModelSerializer): class Meta: model = Thread fields = ['id', 'thread_id', 'title', 'desc', 'created_at'] class PostSerializer(serializers.ModelSerializer): class Meta: model = Post fields = ['id', 'post_id', 'post_content', 'post_time', 'thread_id']
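One follow-up worth flagging (not raised in the answers above): because Thread declares thread_id = models.AutoField(primary_key=True), the model most likely has no separate id column, so keeping 'id' in fields may still fail after the Meta fix. A hedged variant of the corrected serializer with that duplicate field dropped:

from rest_framework import serializers
from forum.models import Thread

class ThreadSerializer(serializers.ModelSerializer):
    class Meta:
        model = Thread
        # thread_id is already the primary key, so there is no extra `id` field to expose.
        fields = ['thread_id', 'title', 'desc', 'created_at']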
Django/Python - AssertionError: Class ThreadSerializer missing "Meta.model" attribute
I've been trying to build the backend of a forum board for a mobile application and I've been running into an issue when trying to test the API endpoints over Postman. It says that my ThreadSerializer class is missing the "Meta.model" attribute. My serializer code: from rest_framework import serializers from forum.models import Thread from forum.models import Post class ThreadSerializer(serializers.ModelSerializer): class Meta: Thread_Model = Thread Thread_Fields = ['id', 'thread_id', 'title', 'desc', 'created_at'] class PostSerializer(serializers.ModelSerializer): class Meta: Post_Model = Post Post_Fields = ['id', 'post_id', 'post_content', 'post_time', 'thread_id'] Model code --- from django.db import models from django.conf import settings from Authentication.models import User # Create your models here. class Thread(models.Model): userid = models.ForeignKey(User, on_delete=models.CASCADE) thread_id = models.AutoField(primary_key=True) title = models.CharField(max_length=100) desc = models.TextField() created_at = models.DateTimeField(auto_now_add=True) class Post(models.Model): userid = models.ForeignKey(User, on_delete=models.CASCADE) thread_id = models.ForeignKey(Thread, on_delete=models.CASCADE) post_id = models.AutoField(primary_key=True) post_content = models.TextField() post_time = models.DateTimeField(auto_now_add=True) Any advice on how to fix this? This is the error I got on VSC: and Postman:
[ "change\nfrom rest_framework import serializers\nfrom forum.models import Thread\nfrom forum.models import Post\n\nclass ThreadSerializer(serializers.ModelSerializer):\n class Meta: \n Thread_Model = Thread\n Thread_Fields = ['id', 'thread_id', 'title', 'desc', 'created_at']\n\n\nclass PostSerializer(serializers.ModelSerializer):\n class Meta:\n Post_Model = Post\n Post_Fields = ['id', 'post_id', 'post_content', 'post_time', 'thread_id']\n\nto:\nfrom rest_framework import serializers\nfrom forum.models import Thread\nfrom forum.models import Post\n\nclass ThreadSerializer(serializers.ModelSerializer):\n class Meta: \n model = Thread\n fields = ['id', 'thread_id', 'title', 'desc', 'created_at']\n\n\nclass PostSerializer(serializers.ModelSerializer):\n class Meta:\n model = Post\n fields = ['id', 'post_id', 'post_content', 'post_time', 'thread_id']\n\n" ]
[ 0 ]
[]
[]
[ "django", "postman", "python", "request" ]
stackoverflow_0074625451_django_postman_python_request.txt
Q: Better fuzzy matching performance? I'm currently using method get_close_matches method from difflib to iterate through a list of 15,000 strings to get the closest match against another list of approx 15,000 strings: a=['blah','pie','apple'...] b=['jimbo','zomg','pie'...] for value in a: difflib.get_close_matches(value,b,n=1,cutoff=.85) It takes .58 seconds per value which means it will take 8,714 seconds or 145 minutes to finish the loop. Is there another library/method that might be faster or a way to improve the speed for this method? I've already tried converting both arrays to lower case, but it only resulted in a slight speed increase. A: fuzzyset indexes strings by their bigrams and trigrams so it finds approximate matches in O(log(N)) vs O(N) for difflib. For my fuzzyset of 1M+ words and word-pairs it can compute the index in about 20 seconds and find the closest match in less than a 100 ms. A: Perhaps you can build an index of the trigrams (three consecutive letters) that appear in each list. Only check strings in a against strings in b that share a trigram. You might want to look at the BLAST bioinformatics tool; it does approximate sequence alignments against a sequence database. A: RapidFuzz is the super-fast lib for fuzzy string matching. It has the same API as famous fuzzywuzzy, but times faster and MIT licensed. A: Try this https://code.google.com/p/pylevenshtein/ The Levenshtein Python C extension module contains functions for fast computation of - Levenshtein (edit) distance, and edit operations - string similarity - approximate median strings, and generally string averaging - string sequence and set similarity It supports both normal and Unicode strings. A: I had tried few methods for fuzzy match. the best one was cosine similarity, with threshold as per your need (i kept 80% fuzzy match). A: Benchmarks in 2022 tl;dr: RapidFuzz was fastest. Test: Pick the best string match from 1.000.000 elements. Tested on my old i7 notebook with 32gb RAM. Best to worst: RapidFuzz (drop-in replacement for TheFuzz): ~20ms fuzzyset2: ~320ms TheFuzz (ex fuzzywuzzy): ~7s
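To make the RapidFuzz suggestion concrete: process.extractOne roughly mirrors the difflib call, with scores on a 0-100 scale (so the 0.85 cutoff becomes 85). A small sketch with toy lists standing in for the 15,000-element ones:

from rapidfuzz import fuzz, process

a = ['blah', 'pie', 'apple']   # stand-ins for the real 15,000-string lists
b = ['jimbo', 'zomg', 'pie']

results = {}
for value in a:
    # extractOne returns (match, score, index), or None if nothing clears the cutoff.
    best = process.extractOne(value, b, scorer=fuzz.ratio, score_cutoff=85)
    results[value] = best[0] if best else None

print(results)  # {'blah': None, 'pie': 'pie', 'apple': None}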
Better fuzzy matching performance?
I'm currently using method get_close_matches method from difflib to iterate through a list of 15,000 strings to get the closest match against another list of approx 15,000 strings: a=['blah','pie','apple'...] b=['jimbo','zomg','pie'...] for value in a: difflib.get_close_matches(value,b,n=1,cutoff=.85) It takes .58 seconds per value which means it will take 8,714 seconds or 145 minutes to finish the loop. Is there another library/method that might be faster or a way to improve the speed for this method? I've already tried converting both arrays to lower case, but it only resulted in a slight speed increase.
[ "fuzzyset indexes strings by their bigrams and trigrams so it finds approximate matches in O(log(N)) vs O(N) for difflib. For my fuzzyset of 1M+ words and word-pairs it can compute the index in about 20 seconds and find the closest match in less than a 100 ms.\n", "Perhaps you can build an index of the trigrams (three consecutive letters) that appear in each list. Only check strings in a against strings in b that share a trigram.\nYou might want to look at the BLAST bioinformatics tool; it does approximate sequence alignments against a sequence database.\n", "RapidFuzz\nis the super-fast lib for fuzzy string matching. It has the same API as famous fuzzywuzzy, but times faster and MIT licensed.\n", "Try this\nhttps://code.google.com/p/pylevenshtein/\nThe Levenshtein Python C extension module contains functions for fast computation of - Levenshtein (edit) distance, and edit operations - string similarity - approximate median strings, and generally string averaging - string sequence and set similarity It supports both normal and Unicode strings.\n", "I had tried few methods for fuzzy match. the best one was cosine similarity, with threshold as per your need (i kept 80% fuzzy match).\n", "Benchmarks in 2022\ntl;dr: RapidFuzz was fastest.\nTest: Pick the best string match from 1.000.000 elements.\nTested on my old i7 notebook with 32gb RAM.\nBest to worst:\n\nRapidFuzz (drop-in replacement for TheFuzz): ~20ms\nfuzzyset2: ~320ms\nTheFuzz (ex fuzzywuzzy): ~7s\n\n" ]
[ 7, 3, 3, 1, 0, 0 ]
[]
[]
[ "difflib", "fuzzy_comparison", "levenshtein_distance", "performance", "python" ]
stackoverflow_0021408760_difflib_fuzzy_comparison_levenshtein_distance_performance_python.txt