Dataset schema:

- question_id: int64, range 59.5M to 79.6M
- creation_date: string (date), 2020-01-01 00:00:00 to 2025-05-17 00:00:00
- link: string, length 60 to 163
- question: string, length 53 to 28.9k
- accepted_answer: string, length 26 to 29.3k
- question_vote: int64, range 1 to 410
- answer_vote: int64, range -9 to 482
77,955,553
2024-2-7
https://stackoverflow.com/questions/77955553/dataframe-with-items-as-str-array-convert-to-multiindex
I've got some input data structured such that items can be a single entry, but also a str that is actually an array that was not yet converted. A basic example would look like this:

```python
d = {
    'col1': ['[5.02, 1, 3, 2]', '[2.002, 3.14, 4.5, 6.000153, 3]'],
    'col2': ['[1.01, 1.02, 2, 4, 5, 10]', '[2, 3.05, 4.001, 7]'],
    'col3': [5.2, 3]
}
df = pd.DataFrame(d)
```

Output:

```
                              col1                       col2  col3
0                  [5.02, 1, 3, 2]  [1.01, 1.02, 2, 4, 5, 10]   5.2
1  [2.002, 3.14, 4.5, 6.000153, 3]        [2, 3.05, 4.001, 7]   3.0
```

Each row corresponds to a dataset, each column represents one type of measurement within that dataset, and the entries in each item correspond to the actual measurement data. Currently, the strings are converted to numpy arrays, but each item in col1 and col2 will still hold an array. So a lot of the code that follows makes use of apply to perform calculations on the datasets, which I suspect is highly inefficient (confirmation would be good here) as it takes quite some time. I should note that in the real data there are many more rows and columns, so efficiency is important.

Thus, I think it might make sense to handle this data differently and instead convert it to a MultiIndex, where the first index selects the dataset (so the row in the above example) and the second index refers to the data point within each dataset. It is however important to know that I can't rely on all of those datasets having the same length; I have to assume variable lengths. Since I have only very little experience with pandas MultiIndex, can someone help me out here with how to convert this data in an efficient way? The result should look like this:

```
                        col1   col2  col3
dataset data point
0       0               5.02   1.01   5.2
        1                  1   1.02
        2                  3      2
        3                  2      4
        4                         5
        5                        10
1       0              2.002      2   3.0
        1               3.14   3.05
        2                4.5  4.001
        3           6.000153      7
        4                  3
```

Any help / suggestions on this would be highly appreciated.
One option, taking advantage of the fact that str.extractall adds a second index level:

```python
out = pd.concat(
    {c: df[c].str.extractall(r'(\d+\.?\d*)')[0].astype(float)
        if df[c].dtype == object
        else df[c].set_axis(pd.MultiIndex.from_product([df[c].index, [0]]))
     for c in df},
    axis=1
).sort_index().rename_axis(['dataset', 'data point'])
```

Or:

```python
out = pd.concat(
    {c: df[c].astype(str)
             .str.extractall(r'(\d+\.?\d*)')[0]
             .astype(float)
     for c in df},
    axis=1
).sort_index().rename_axis(['dataset', 'data point'])
```

Or using ast.literal_eval and explode:

```python
from ast import literal_eval

def expand(s):
    if s.dtype == object:
        s = s.apply(literal_eval).explode()
    return s.set_axis(
        pd.MultiIndex.from_arrays(
            [s.index, s.groupby(level=0).cumcount()],
            names=['dataset', 'data point'],
        )
    )

out = pd.concat({c: expand(df[c]) for c in df}, axis=1)
```

Output:

```
                        col1   col2  col3
dataset data point
0       0               5.02   1.01   5.2
        1                  1   1.02   NaN
        2                  3      2   NaN
        3                  2      4   NaN
1       0              2.002      2   3.0
        1               3.14   3.05   NaN
        2                4.5  4.001   NaN
        3           6.000153      7   NaN
        4                  3    NaN   NaN
0       4                NaN      5   NaN
        5                NaN     10   NaN
```
2
1
77,953,618
2024-2-7
https://stackoverflow.com/questions/77953618/how-to-change-the-root-directory-for-the-jupyter-notebook-in-windows
I can't "persuade" Jupyter to use another directory than the default one for storing its projects. I edited the following parameter of the jupyter_notebook_config.py file (I tried all three variants one by one; none worked):

```python
c.NotebookApp.notebook_dir = 'D:\\_Travail\\Python\\Jupyter_projects'
c.NotebookApp.notebook_dir = r'D:\_Travail\Python\Jupyter_projects'
c.NotebookApp.notebook_dir = 'D:/_Travail/Python/Jupyter_projects'
```

Besides, I tried to rename the config file (and the names of the parameters therein) exactly as described in the only answer in this thread. I also get this message in the console:

```
'notebook_dir' has moved from NotebookApp to ServerApp. This config will be passed to ServerApp. Be sure to update your config before our next release.
```

I wonder if it is important that the disk D: where I want the files stored is a network drive connected to another machine on the LAN.
You can launch Jupyter in the directory you want. Open Anaconda Prompt and enter the following statement:

```
jupyter notebook --notebook-dir "your-directory"
```
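If you would rather fix the config file, the console warning quoted in the question points at the rename: in recent Jupyter versions the option lives on ServerApp. A sketch of the updated setting (assuming a current Jupyter Server; generate the file with `jupyter server --generate-config` if it doesn't exist):

```python
# jupyter_server_config.py
# ServerApp.root_dir replaces the old NotebookApp.notebook_dir setting.
c.ServerApp.root_dir = r'D:\_Travail\Python\Jupyter_projects'
```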
2
4
77,952,195
2024-2-7
https://stackoverflow.com/questions/77952195/streaming-function-in-microsoft-autogen
I want to know how to use the streaming function in AutoGen. The code below uses AutoGen to give the agent a streaming function:

```python
llm_config = {
    "config_list": config_list,
    "timeout": 120,
    "seed": random.randint(1, 100),
    "stream": True,
}

user_proxy.initiate_chat(
    manager,
    message="hello, the weather is nice today",
    llm_config=llm_config,
)
```

The streaming function that I know of prints and displays each letter as it is created, before the sentence is completed, but even when I use the option ("stream": True), the entire text is output to the terminal at once. I'm not sure if I'm misunderstanding the streaming function or just don't know how to use it properly. Could you please help? Thank you :)
Currently, there isn't a direct solution provided by AutoGen for the streaming issue you've described. This is evidenced by the lack of solutions in their open issues on GitHub related to streaming (https://github.com/microsoft/autogen/issues?q=is%3Aissue+is%3Aopen+streaming).

However, a workaround exists that involves using "monkey patching" to modify the behavior of the function responsible for printing received messages. This method allows you to intercept and alter the output behavior to achieve the streaming effect you're looking for. The technique works by overwriting the _print_received_message method in the library. This approach is detailed in https://youtu.be/iNPCB6b5gvA, which provides step-by-step instructions on how to apply it effectively.
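The general shape of such a monkey patch is sketched below. This is an assumption-heavy illustration, not AutoGen's documented API: the `_print_received_message(self, message, sender)` name and signature come from the answer's description and may differ between AutoGen versions, and `streaming_printer` is a hypothetical replacement you would write yourself.

```python
import autogen

# Keep a reference to the stock implementation so we can fall back to it.
_original_print = autogen.ConversableAgent._print_received_message

def streaming_printer(self, message, sender):
    # Intercept the message here (e.g., forward chunks to your own
    # incremental printer) before or instead of the default behaviour.
    _original_print(self, message, sender)

# Route every agent's message printing through our function.
autogen.ConversableAgent._print_received_message = streaming_printer
```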
2
2
77,935,269
2024-2-4
https://stackoverflow.com/questions/77935269/performance-results-differ-between-run-in-threadpool-and-run-in-executor-in
Here's a minimal reproducible example of my FastAPI app. I see strange behavior and I'm not sure I understand the reason. I'm using ApacheBench (ab) to send multiple requests as follows:

```
ab -n 1000 -c 50 -H 'accept: application/json' -H 'x-data-origin: source' 'http://localhost:8001/test/async'
```

FastAPI app:

```python
import time
import asyncio
import enum
from typing import Any
from fastapi import FastAPI, Path, Body
from starlette.concurrency import run_in_threadpool

app = FastAPI()
loop = asyncio.get_running_loop()

def sync_func() -> None:
    time.sleep(3)
    print("sync func")

async def sync_async_with_fastapi_thread() -> None:
    await run_in_threadpool(time.sleep, 3)
    print("sync async with fastapi thread")

async def sync_async_func() -> None:
    await loop.run_in_executor(None, time.sleep, 3)

async def async_func() -> Any:
    await asyncio.sleep(3)
    print("async func")

@app.get("/test/sync")
def test_sync() -> None:
    sync_func()
    print("sync")

@app.get("/test/async")
async def test_async() -> None:
    await async_func()
    print("async")

@app.get("/test/sync_async")
async def test_sync_async() -> None:
    await sync_async_func()
    print("sync async")

@app.get("/test/sync_async_fastapi")
async def test_sync_async_with_fastapi_thread() -> None:
    await sync_async_with_fastapi_thread()
    print("sync async with fastapi thread")
```

Here are the ApacheBench results.

async (with asyncio.sleep):

```
Concurrency Level:      50
Time taken for tests:   63.528 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      128000 bytes
HTML transferred:       4000 bytes
Requests per second:    15.74 [#/sec] (mean)
Time per request:       3176.407 [ms] (mean)
Time per request:       63.528 [ms] (mean, across all concurrent requests)
Transfer rate:          1.97 [Kbytes/sec] received
```

sync (with time.sleep):

```
Concurrency Level:      50
Time taken for tests:   78.615 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      128000 bytes
HTML transferred:       4000 bytes
Requests per second:    12.72 [#/sec] (mean)
Time per request:       3930.751 [ms] (mean)
Time per request:       78.615 [ms] (mean, across all concurrent requests)
Transfer rate:          1.59 [Kbytes/sec] received
```

sync_async (time.sleep with run_in_executor):

```
Concurrency Level:      50
Time taken for tests:   256.201 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      128000 bytes
HTML transferred:       4000 bytes
Requests per second:    3.90 [#/sec] (mean)
Time per request:       12810.038 [ms] (mean)
Time per request:       256.201 [ms] (mean, across all concurrent requests)
Transfer rate:          0.49 [Kbytes/sec] received
```

sync_async_fastapi (time.sleep with run_in_threadpool):

```
Concurrency Level:      50
Time taken for tests:   78.877 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      128000 bytes
HTML transferred:       4000 bytes
Requests per second:    12.68 [#/sec] (mean)
Time per request:       3943.841 [ms] (mean)
Time per request:       78.877 [ms] (mean, across all concurrent requests)
Transfer rate:          1.58 [Kbytes/sec] received
```

In conclusion, I'm experiencing a surprising disparity in results, especially when using run_in_executor, where I'm encountering significantly higher average times (12 seconds). I don't understand this outcome.

EDIT (after AKX's answer):
Here's the code working as expected (imports filled in from the original example):

```python
import time
import asyncio
from concurrent.futures import ThreadPoolExecutor
from typing import Any

from anyio import to_thread
from fastapi import FastAPI
from starlette.concurrency import run_in_threadpool

to_thread.current_default_thread_limiter().total_tokens = 200

app = FastAPI()
loop = asyncio.get_running_loop()
executor = ThreadPoolExecutor(max_workers=100)

def sync_func() -> None:
    time.sleep(3)
    print("sync func")

async def sync_async_with_fastapi_thread() -> None:
    await run_in_threadpool(time.sleep, 3)
    print("sync async with fastapi thread")

async def sync_async_func() -> None:
    await loop.run_in_executor(executor, time.sleep, 3)

async def async_func() -> Any:
    await asyncio.sleep(3)
    print("async func")

@app.get("/test/sync")
def test_sync() -> None:
    sync_func()
    print("sync")

@app.get("/test/async")
async def test_async() -> None:
    await async_func()
    print("async")

@app.get("/test/sync_async")
async def test_sync_async() -> None:
    await sync_async_func()
    print("sync async")

@app.get("/test/sync_async_fastapi")
async def test_sync_async_with_fastapi_thread() -> None:
    await sync_async_with_fastapi_thread()
    print("sync async with fastapi thread")
```
Using run_in_threadpool()

FastAPI is fully compatible with (and based on) Starlette, and hence, with FastAPI you get all of Starlette's features, such as run_in_threadpool(). Starlette's run_in_threadpool(), which uses anyio.to_thread.run_sync() behind the scenes, "will run the sync blocking function in a separate thread to ensure that the main thread (where coroutines are run) does not get blocked"—see this answer and AnyIO's "Working with threads" documentation for more details.

Calling run_in_threadpool()—which internally calls anyio.to_thread.run_sync(), and subsequently, AsyncIOBackend.run_sync_in_worker_thread()—will return a coroutine that is then awaited to get the eventual result of the sync function (e.g., result = await run_in_threadpool(...)); hence, FastAPI will still work asynchronously (instead of calling that synchronous function directly in the event loop, which would block the event loop, and hence, block the main thread).

As can be seen in Starlette's source code (link given above), the run_in_threadpool() function simply looks like this (supporting both sequence and keyword arguments):

```python
async def run_in_threadpool(
    func: typing.Callable[P, T], *args: P.args, **kwargs: P.kwargs
) -> T:
    if kwargs:  # pragma: no cover
        # run_sync doesn't accept 'kwargs', so bind them in here
        func = functools.partial(func, **kwargs)
    return await anyio.to_thread.run_sync(func, *args)
```

As described in AnyIO's documentation on adjusting the default maximum worker thread count:

> The default AnyIO worker thread limiter has a value of 40, meaning that any calls to to_thread.run_sync() without an explicit limiter argument will cause a maximum of 40 threads to be spawned. You can adjust this limit like this:
>
>     from anyio import to_thread
>
>     async def foo():
>         # Set the maximum number of worker threads to 60
>         to_thread.current_default_thread_limiter().total_tokens = 60
>
> Note: AnyIO's default thread pool limiter does not affect the default thread pool executor on asyncio.

Since FastAPI uses Starlette's concurrency module to run blocking functions in an external threadpool (the same threadpool is also used by FastAPI to run endpoints defined with normal def instead of async def, as described in this answer), the default value of the thread limiter, as shown above, applies here as well, i.e., 40 threads maximum—see the relevant AsyncIOBackend.current_default_thread_limiter() method that returns the CapacityLimiter with the default number of threads. Hence, sending 50 requests simultaneously, as in your case, leads to threadpool starvation, meaning there aren't enough threads available in the threadpool to handle all incoming requests concurrently.

As described earlier, one could adjust that value by increasing the number of threads, which might lead to an improvement in performance results—always depending on the number of requests to def endpoints (or async def endpoints that call run_in_threadpool() inside) that your API is expected to serve concurrently. For instance, if you expect the API to serve no more than 50 requests at a time to such endpoints, then set the maximum number of threads to 50. Note that if, in addition to def endpoints, your FastAPI application uses synchronous/blocking background tasks and/or StreamingResponse generators and/or dependencies (synchronous/blocking functions meaning those defined with normal def instead of async def), or even UploadFile's async methods, such as await file.read()/await file.close()/etc.
(which all call the corresponding synchronous def file methods, using run_in_threadpool() behind the scenes), you could then increase the number of threads as required, as FastAPI actually runs all those functions in the same external threadpool as well—it is all explained in this answer in detail.

Note that the approach below, which was described here, would have the same effect on adjusting the number of worker threads:

```python
from anyio.lowlevel import RunVar
from anyio import CapacityLimiter

RunVar("_default_thread_limiter").set(CapacityLimiter(60))
```

But it would be best to follow the approach provided by AnyIO's official documentation (as shown earlier). It is also a good idea to have this done when the application starts up, using a lifespan event handler, as demonstrated here.

In the working example below, since the /sync endpoint is defined with normal def instead of async def, FastAPI will run it in a separate thread from the external threadpool and then await it, thus ensuring the event loop (and hence, the main thread and the entire server) does not get blocked due to the blocking operations (either blocking IO-bound or CPU-bound) performed inside that endpoint.

Working Example 1

```python
from fastapi import FastAPI
from contextlib import asynccontextmanager
from anyio import to_thread
import time

@asynccontextmanager
async def lifespan(app: FastAPI):
    to_thread.current_default_thread_limiter().total_tokens = 60
    yield

app = FastAPI(lifespan=lifespan)

@app.get("/sync")
def test_sync() -> None:
    time.sleep(3)
    print("sync")

@app.get('/get_available_threads')
async def get_available_threads():
    return to_thread.current_default_thread_limiter().available_tokens
```

Using ApacheBench, you could test the example above as follows, which will send 1000 requests in total with 50 sent simultaneously at a time (-n: number of requests, -c: number of concurrent requests):

```
ab -n 1000 -c 50 "http://localhost:8000/sync"
```

While running a performance test on the example above, if you call the /get_available_threads endpoint from your browser, e.g., http://localhost:8000/get_available_threads, you would see that the number of available threads is always 10 or above (since only 50 threads are used at a time in this test, but the thread limiter was set to 60). This means that setting the maximum number of threads on AnyIO's thread limiter to a number well above your needs, like 200 as shown in some other answer and in your recent example, wouldn't bring about any improvement in performance; on the contrary, you would end up with a number of threads "sitting" there without being used. As explained earlier, the maximum number of threads should depend on (1) the number of requests your API is expected to serve concurrently (i.e., the number of calls to def endpoints, or async def endpoints that call run_in_threadpool() inside), (2) any additional blocking tasks/functions that FastAPI itself would run in the threadpool under the hood, and (3) the server machine's available resources.

The example below is the same as the one above, but instead of letting FastAPI itself handle the blocking operation(s) inside the def endpoint (by running the def endpoint in the external threadpool and awaiting it), the endpoint is now defined with async def (meaning that FastAPI will run it directly in the event loop), and inside the endpoint, run_in_threadpool() (which returns an awaitable) is used to run the blocking operation (i.e., time.sleep() in the example).
Performing a benchmark test on the example below would yield similar results to the previous example.

Working Example 2

```python
from fastapi import FastAPI
from fastapi.concurrency import run_in_threadpool
from contextlib import asynccontextmanager
from anyio import to_thread
import time

@asynccontextmanager
async def lifespan(app: FastAPI):
    to_thread.current_default_thread_limiter().total_tokens = 60
    yield

app = FastAPI(lifespan=lifespan)

@app.get("/sync_async_run_in_tp")
async def test_sync_async_with_run_in_threadpool() -> None:
    await run_in_threadpool(time.sleep, 3)
    print("sync_async using FastAPI's run_in_threadpool")

@app.get('/get_available_threads')
async def get_available_threads():
    return to_thread.current_default_thread_limiter().available_tokens
```

Using ApacheBench, you could test the example above as follows:

```
ab -n 1000 -c 50 "http://localhost:8000/sync_async_run_in_tp"
```

Using loop.run_in_executor() with ThreadPoolExecutor

When using asyncio's loop.run_in_executor()—after obtaining the running event loop using asyncio.get_running_loop()—one can pass None to the executor argument, which leads to the default executor being used; that is, a ThreadPoolExecutor. Note that when calling loop.run_in_executor() and passing None to the executor argument, a new instance of ThreadPoolExecutor is not created every time you call, for instance, await loop.run_in_executor(None, time.sleep, 3). Instead, the default ThreadPoolExecutor is only initialized once, the first time you do that; for subsequent calls to loop.run_in_executor() with the executor argument set to None, Python reuses that very same instance of ThreadPoolExecutor (hence, the default executor). This can be seen in the source code of loop.run_in_executor(). That means the number of threads that can be created when calling await loop.run_in_executor(None, ...) is limited to the default number of thread workers in the ThreadPoolExecutor class.

As described in the documentation of ThreadPoolExecutor—and as shown in its implementation—by default, the max_workers argument is set to None, in which case the number of worker threads is set based on the following equation: min(32, os.cpu_count() + 4). The os.cpu_count() function returns the number of logical CPUs in the current system. As explained in this article, physical cores refers to the number of CPU cores provided in the hardware (e.g., the chips), while logical cores is the number of CPU cores after hyperthreading is taken into account. If, for instance, your machine has 4 physical cores, each with hyperthreading (most modern CPUs have this), then Python will see 8 CPUs and will allocate 12 threads (8 CPUs + 4) to the pool by default (Python limits the number of threads to 32 to "avoid consuming surprisingly large resources on multi-core machines"; however, one can always adjust the max_workers argument when using a custom ThreadPoolExecutor, instead of using the default one).
You can check the default number of worker threads on your system as follows:

```python
import concurrent.futures

# create a thread pool with the default number of worker threads
pool = concurrent.futures.ThreadPoolExecutor()

# report the number of worker threads chosen by default
# Note: `_max_workers` is a protected variable and may change in the future
print(pool._max_workers)
```

Now, as shown in your original example, you are not using a custom ThreadPoolExecutor; instead, you are using the default ThreadPoolExecutor every time a request arrives, by calling await loop.run_in_executor(None, time.sleep, 3) (inside the sync_async_func() function, which is triggered by the /test/sync_async endpoint). Assuming your machine has 4 physical cores with hyperthreading enabled (as in the example above), the default number of worker threads for the default ThreadPoolExecutor would be 12. That means, based on your original example, the /test/sync_async endpoint that triggers await loop.run_in_executor(None, time.sleep, 3) could only handle 12 concurrent requests at a time. That is the main reason for the difference observed in the performance results compared to using run_in_threadpool(), which comes with 40 allocated threads by default. Even though, in both cases, threadpool starvation occurred when sending 50 requests simultaneously, the endpoint (in your example) that uses run_in_threadpool() performed better only because its default number of threads was greater than the one used by the default ThreadPoolExecutor (in your other endpoint).

One way to solve this is to create a new instance of ThreadPoolExecutor (on your own, instead of using the default executor) every time a request arrives, and have it terminated once the task is completed (using the with statement), as shown below:

```python
import concurrent.futures
import asyncio

loop = asyncio.get_running_loop()
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
    await loop.run_in_executor(pool, time.sleep, 3)
```

Although the above should work just fine, it would be best to instantiate a ThreadPoolExecutor once at application startup, adjust the number of worker threads as needed, and reuse that executor when required. Having said that, if for any reason you ever encounter a memory leak—i.e., memory that is no longer needed but is not released—after tasks are completed when reusing a ThreadPoolExecutor (maybe due to external libraries that you might be using for that blocking task, which do not properly release the memory), you might find creating a new instance of ThreadPoolExecutor each time, as shown above, more suitable. Note, however, that if this were a ProcessPoolExecutor instead, creating and destroying many processes over and over could become computationally expensive; creating and destroying too many threads could consume a lot of memory as well.

Below is a complete working example, demonstrating how to create a reusable custom ThreadPoolExecutor. Calling the /get_active_threads endpoint from your browser, e.g., http://localhost:8000/get_active_threads, while running a performance test with ApacheBench (using 50 concurrent requests, as described in your question and as shown below), you would see that the number of active threads never goes above 51 (50 concurrent threads + 1, which is the main thread), despite setting the max_workers argument to 60 in the example below.
This is simply because, in this performance test, the application is never required to serve more than 50 requests at the same time. Also, ThreadPoolExecutor won't spawn new threads if idle threads are available (thus saving resources)—see the relevant implementation part. Hence, again, initializing the ThreadPoolExecutor with max_workers=100, as shown in your recent update, would be unnecessary if you never expect your FastAPI application to serve more than 50 requests at a time (to endpoints where that ThreadPoolExecutor is used).

Working Example

```python
from fastapi import FastAPI, Request
from contextlib import asynccontextmanager
import concurrent.futures
import threading
import asyncio
import time

@asynccontextmanager
async def lifespan(app: FastAPI):
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=60)
    yield {'pool': pool}
    pool.shutdown()

app = FastAPI(lifespan=lifespan)

@app.get("/sync_async")
async def test_sync_async(request: Request) -> None:
    loop = asyncio.get_running_loop()
    await loop.run_in_executor(request.state.pool, time.sleep, 3)
    print("sync_async")

@app.get('/get_active_threads')
async def get_active_threads():
    return threading.active_count()
```

Using ApacheBench, you could test the example above as follows:

```
ab -n 1000 -c 50 "http://localhost:8000/sync_async"
```

Final Notes

In general, you should always aim to use asynchronous code (i.e., async/await) wherever possible, as async code, also known as coroutines, runs directly in the event loop—the event loop runs in a thread (typically the main thread) and executes all callbacks and tasks in that thread. That means there is only one thread that can take a lock on the interpreter, thus avoiding the additional overhead of context switching (i.e., the CPU jumping from one thread of execution to another). When dealing with sync blocking IO-bound tasks, you can either (1) define your endpoint with def and let FastAPI handle it behind the scenes as described earlier, as well as in this answer, (2) define your endpoint with async def and use run_in_threadpool() yourself to run the blocking task in a separate thread and await it, or (3) define your endpoint with async def and use asyncio's loop.run_in_executor() with a custom (preferably reusable) ThreadPoolExecutor, adjusting the number of worker threads as required.

When required to perform blocking CPU-bound tasks, running such tasks in a separate thread from an external threadpool and then awaiting them would successfully prevent the event loop from getting blocked, but it wouldn't provide the performance improvement you would expect from running code in parallel. Thus, for CPU-bound tasks, one may choose to use loop.run_in_executor() with a ProcessPoolExecutor instead—a relevant example can be found in this answer. (Note that when using processes in general, you need to explicitly protect the entry point with if __name__ == '__main__'.)

To run tasks in the background, without waiting for them to complete before proceeding with the rest of the code in an endpoint, you can use FastAPI's BackgroundTasks, as shown here and here. If the background task function is defined with async def, FastAPI will run it directly in the event loop, whereas if it is defined with normal def, FastAPI will use run_in_threadpool() and await the returned coroutine (the same concept as with API endpoints).
Another option, when you need to run an async def function in the background but don't necessarily need it triggered after returning a FastAPI response (which is the case with BackgroundTasks), is to use asyncio.create_task(), as shown in this answer and this answer. If you need to perform heavy background computation and you don't necessarily need it run by the same process, you may benefit from using other, bigger tools such as Celery.

Finally, regarding the optimal/maximum number of worker threads, I would suggest reading this article (and have a look at this article as well for more details on ThreadPoolExecutor in general). As explained in the article:

> It is important to limit the number of worker threads in the thread pools to the number of asynchronous tasks you wish to complete, based on the resources in your system, or on the number of resources you intend to use within your tasks.
>
> Alternately, you may wish to increase the number of worker threads dramatically, given the greater capacity in the resources you intend to use. [...]
>
> It is common to have more threads than CPUs (physical or logical) in your system. The reason for this is that threads are used for IO-bound tasks, not CPU-bound tasks. This means that threads are used for tasks that wait for relatively slow resources to respond, like hard drives, DVD drives, printers, network connections, and much more. Therefore, it is not uncommon to have tens, hundreds and even thousands of threads in your application, depending on your specific needs. It is unusual to have more than one or a few thousand threads. If you require this many threads, then alternative solutions may be preferred, such as AsyncIO.

Also, in the same article:

> Does the Number of Threads in the ThreadPoolExecutor Match the Number of CPUs or Cores?
>
> The number of worker threads in the ThreadPoolExecutor is not related to the number of CPUs or CPU cores in your system. You can configure the number of threads based on the number of tasks you need to execute, the amount of local system resources you have available (e.g., memory), and the limitations of resources you intend to access within your tasks (e.g., connections to remote servers).
>
> How Many Threads Should I Use?
>
> If you have hundreds of tasks, you should probably set the number of threads to be equal to the number of tasks. If you have thousands of tasks, you should probably cap the number of threads at hundreds or 1,000. If your application is intended to be executed multiple times in the future, you can test different numbers of threads and compare overall execution time, then choose a number of threads that gives approximately the best performance. You may want to mock the task in these tests with a random sleep operation.
>
> What Is the Maximum Number of Worker Threads in the ThreadPoolExecutor?
>
> There is no maximum number of worker threads in the ThreadPoolExecutor. Nevertheless, your system will have an upper limit on the number of threads you can create based on how much main memory (RAM) you have available. Before you exceed main memory, you will reach a point of diminishing returns in terms of adding new threads and executing more tasks. This is because your operating system must switch between the threads, called context switching. With too many threads active at once, your program may spend more time context switching than actually executing tasks. A sensible upper limit for many applications is hundreds of threads to perhaps a few thousand threads. More than a few thousand threads on a modern system may result in too much context switching, depending on your system and on the types of tasks that are being executed.
9
29
77,919,632
2024-2-1
https://stackoverflow.com/questions/77919632/how-to-find-the-indexes-of-the-first-n-maximum-values-of-a-tensor
I know that torch.argmax(x, dim=0) returns the index of the first maximum value in x along dimension 0. But is there an efficient way to return the indices of the first n maximum values? If there are duplicate values, I also want the indices of those among the n indices. As a concrete example, say:

```python
x = torch.tensor([2, 1, 4, 1, 4, 2, 1, 1])
```

I would like a function generalized_argmax(x: torch.Tensor, n: int) such that generalized_argmax(x, 4) returns [0, 2, 4, 5] in this example.
To acquire all you need, you have to go over the whole tensor anyway. The most efficient approach should therefore be to use argsort, limited to n entries afterwards:

```python
>>> x = torch.tensor([2, 1, 4, 1, 4, 2, 1, 1])
>>> n = 4
>>> x.argsort(dim=0, descending=True)[:n]
tensor([2, 4, 0, 5])
```

Sort it again to get [0, 2, 4, 5] if you need the indices in ascending order.
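Spelling out that "sort it again" step as a complete function (a minimal sketch; torch.Tensor.sort returns a (values, indices) named tuple, of which only the values are needed here):

```python
import torch

def generalized_argmax(x: torch.Tensor, n: int) -> torch.Tensor:
    # Indices of the n largest values, re-sorted into ascending order
    return x.argsort(descending=True)[:n].sort().values

x = torch.tensor([2, 1, 4, 1, 4, 2, 1, 1])
print(generalized_argmax(x, 4))  # tensor([0, 2, 4, 5])
```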
2
4
77,927,215
2024-2-2
https://stackoverflow.com/questions/77927215/understanding-scikit-learn-import-variants
Scikit-learn import statements in their tutorials are of the form:

```python
from sklearn.decomposition import PCA
```

Another version that works is:

```python
import sklearn.decomposition
pca = sklearn.decomposition.PCA(n_components=2)
```

However:

```python
import sklearn
pca = sklearn.decomposition.PCA(n_components=2)
```

does not, and complains:

```
AttributeError: module 'sklearn' has no attribute 'decomposition'
```

Why is this, and how can I predict which variants will work and which won't, so I don't have to test around? If the understanding and predictiveness extends to Python packages in general, that would be best.
sklearn doesn't automatically import its submodules. If you want to use sklearn.<SUBMODULE>, you need to import it explicitly, e.g. import sklearn.<SUBMODULE>. Then you can use it without any further imports, like result = sklearn.<SUBMODULE>.function(...).

Large packages often behave this way and don't automatically import all of their submodules: memory consumption and load-time efficiency get worse if all submodules are loaded automatically, so requiring the submodule to be specified explicitly saves memory and minimises start-up time. I think namespace cluttering is another consideration: explicit imports reduce the chance of naming conflicts and help maintain clarity about the specific functionality being used.
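You can see the same behaviour with the standard library, which makes for a runnable demonstration (a sketch; it relies on the fact that xml/__init__.py does not import its own submodules, which is true on current CPython as far as I know):

```python
import xml

try:
    xml.etree  # not bound: xml/__init__.py never imports it
except AttributeError as e:
    print(e)  # module 'xml' has no attribute 'etree'

import xml.etree.ElementTree            # imports it AND binds xml.etree
print(xml.etree.ElementTree.__name__)   # xml.etree.ElementTree

from xml.etree.ElementTree import Element  # binds only the name Element
print(Element)
```

So the rule of thumb is: `import a.b` always works when the submodule exists, while plain `import a` only exposes `a.b` if `a/__init__.py` itself imports it (numpy, for instance, does this for several of its submodules, which is why different packages can appear to behave inconsistently).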
2
4
77,914,856
2024-1-31
https://stackoverflow.com/questions/77914856/sphinx-raises-warnings-for-class-derived-from-intenum-can-i-do-something-about
As the title says. The warnings are:

```
docstring of liesel.goose.EpochType.from_bytes:9: WARNING: Inline interpreted text or phrase reference start-string without end-string.
docstring of liesel.goose.EpochType.to_bytes:8: WARNING: Inline interpreted text or phrase reference start-string without end-string.
```

I am using Sphinx version 7.2.6. My investigation yielded the following: the class EpochType is derived from enum.IntEnum, and IntEnum is derived partly from int (source code on GitHub). As far as I can see, the methods from_bytes and to_bytes originate from the corresponding methods on int. I do not understand why Sphinx is throwing this warning and/or what I can do about it.
I ran into this problem too, and @mzjn's explanation seems to be right. A fix that uses apostrophes for the opening quotes got merged and will be in the upcoming Python 3.13 release: https://github.com/python/cpython/pull/117847

I also tried to fix a handful of other similar cases in the standard library in https://github.com/python/cpython/pull/119231, so hopefully people won't run into this with other classes.

Until you can switch to Python 3.13, you can probably work around this by writing `:inherited-members: int` instead of `:inherited-members:`, telling Sphinx to stop documenting inherited members once it hits class int. See the autodoc docs for details.
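In an autodoc directive, that workaround would look roughly like this (a sketch; the class path is taken from the warnings in the question, and the surrounding options are illustrative):

```rst
.. autoclass:: liesel.goose.EpochType
   :members:
   :inherited-members: int
```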
2
2
77,949,419
2024-2-6
https://stackoverflow.com/questions/77949419/how-to-force-an-async-context-manager-to-exit
I've been getting into structured concurrency recently, and this is a pattern that keeps cropping up: it's nice to use async context managers to access a resource—say, some websocket. That's all great if the websocket stays open, but what about if it closes? Well, we expect our context to be forcefully exited—normally through an exception. How can I write and implement a context manager that exhibits this behaviour? How can I throw an exception "into" the calling code's open context? How can I forcefully exit a context?

Here's a simple setup, just for argument's sake:

```python
# Let's pretend I'm implementing this:
class SomeServiceContextManager:
    def __init__(self, service):
        self.service = service

    async def __aenter__(self):
        await self.service.connect(self.connection_state_callback)
        return self.service

    async def __aexit__(self, exc_type, exc, tb):
        self.service.disconnect()
        return False

    def connection_state_callback(self, state):
        if state == "connection lost":
            print("WHAT DO I DO HERE? how do I inform my consumer and force the exit of their context manager?")


class Consumer:
    async def send_stuff(self):
        try:
            async with SomeServiceContextManager(self.service) as connected_service:
                while True:
                    await asyncio.sleep(1)
                    connected_service.send("hello")
        except ConnectionLostException:  # << how do I implement this from the ContextManager?
            print("Oh no my connection was lost!!")
```

How is this generally handled? It seems to be something I've run into a couple of times when writing context managers! Here's a slightly more interesting example (hopefully) to demonstrate how things get a bit messy—say you are receiving through an async loop but want to close your connection if something downstream disconnects:

```python
# Let's pretend I'm implementing this:
class SomeServiceContextManager:
    def __init__(self, service):
        self.service = service

    async def __aenter__(self):
        await self.service.connect(self.connection_state_callback)
        return self.service

    async def __aexit__(self, exc_type, exc, tb):
        self.service.disconnect()
        return False

    def connection_state_callback(self, state):
        if state == "connection lost":
            print("WHAT DO I DO HERE? how do I inform my consumer and force the exit of their context manager?")


class Consumer:
    async def translate_stuff_stuff(self):
        async with SomeOtherServiceContextManager(self.otherservice) as connected_other_service:
            try:
                async with SomeServiceContextManager(self.service) as connected_service:
                    for message in connected_other_service.messages():
                        connected_service.send("message received: " + message.text)
            except ConnectionLostException:  # << how do I implement this from the ContextManager?
                print("Oh no my connection was lost - I'll also drop out of the other service connection!!")
```
Before we get started, let's replace the manual __aenter__() and __aexit__() implementations with contextlib.asynccontextmanager. This takes care of handling exceptions properly and is especially useful when you have nested context managers, as we're going to have in this answer. Here's your snippet rewritten in this way:

```python
from contextlib import asynccontextmanager

class SomeServiceConnection:
    def __init__(self, service):
        self.service = service

    async def _connect(self, connection_state_callback):
        await self.service.connect(connection_state_callback)

    async def _disconnect(self):
        self.service.disconnect()

@asynccontextmanager
async def get_service_connection(service):
    connection = SomeServiceConnection(service)
    await connection._connect(
        ...  # to be done
    )
    try:
        yield connection
    finally:
        await connection._disconnect()
```

OK, with that out of the way: the core of the answer here is that, if you want to stop running tasks in response to some event, use a cancel scope.

```python
@asynccontextmanager
async def get_service_connection(service):
    connection = SomeServiceConnection(service)
    with trio.CancelScope() as cancel_scope:
        await connection._connect(cancel_scope.cancel)
        try:
            yield connection
        finally:
            await connection._disconnect()
    # `cancelled_caught` is True if the scope absorbed a cancellation
    if cancel_scope.cancelled_caught:
        raise RuntimeError("connection lost")
```

But wait... what if some other exception (or exceptions!) were thrown at roughly the same time that the connection was closed? Those would be lost when you raise your own exception. This is handily dealt with by using a nursery instead: it has its own cancel scope doing the cancellation work, but it also deals with creating ExceptionGroup objects (formerly known as MultiErrors). Now your callback just needs to raise an exception inside the nursery. As a bonus, there is a good chance you needed to run a background task to make the callback happen anyway. (If not—e.g., your callback is called from another thread via a Trio token—then use a trio.Event as another answer suggested, and await it from within the nursery.)

```python
async def check_connection(connection):
    await connection.wait_disconnected()
    raise RuntimeError("connection lost")

@asynccontextmanager
async def get_service_connection(service):
    connection = SomeServiceConnection(service)
    await connection._connect()
    try:
        async with trio.open_nursery() as nursery:
            nursery.start_soon(check_connection, connection)
            yield connection
            nursery.cancel_scope.cancel()
    finally:
        await connection._disconnect()
```
3
3
77,946,746
2024-2-6
https://stackoverflow.com/questions/77946746/connect-docker-container-with-sql-server
I'm trying to run code that writes to a SQL Server database. The code works fine on my machine (Windows).

Dockerfile:

```dockerfile
# syntax=docker/dockerfile:1
FROM ubuntu:22.04
WORKDIR /app
RUN apt-get update
RUN apt-get install -y apt-utils \
    && echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
RUN apt-get install -y locales \
    && localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8 \
    && localedef -i it_IT -c -f UTF-8 -A /usr/share/locale/locale.alias it_IT.UTF-8
ENV LANG it_IT.utf8
RUN apt-get install -y python3 python3-pip
RUN apt-get install -y unixodbc-dev libodbc2
COPY ./src .
RUN pip3 install -r requirements.txt
CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0", "--port=5000"]
```

application-docker.cfg file:

```
SECRET_KEY = 'xxx'
ALLOWED_FILES = ['xls']
MAX_RUNNING_JOBS = 3
STORAGE_PATH = '/tmp'
DRIVER = 'SQL Server'
SERVER = 'xxx'
DB_NAME = 'xxx'
USER_ID = 'xxx'
PSW = 'xxx'
```

It raises this error:

```
sqlalchemy.exc.DBAPIError: (pyodbc.Error) ('01000', "[01000] [unixODBC][Driver Manager]Can't open lib 'SQL Server' : file not found (0) (SQLDriverConnect)")
```

I've tried to replace DRIVER with DRIVER = '/opt/microsoft/msodbcsql18/lib64/libmsodbcsql-18.3.so.1.1', but it raises much the same error:

```
sqlalchemy.exc.DBAPIError: (pyodbc.Error) ('01000', "[01000] [unixODBC][Driver Manager]Can't open lib '/opt/microsoft/msodbcsql18/lib64/libmsodbcsql-18.3.so.1.1' : file not found (0) (SQLDriverConnect)")
```

I also tried using DRIVER = 'ODBC Driver 17 for SQL Server' with RUN apt-get install -y msodbcsql17 mssql-tools in the Dockerfile, but when I build the image:

```
0.915 E: Unable to locate package msodbcsql17
0.915 E: Unable to locate package mssql-tools
```
If you must have mssql-tools inside the container, then you have to follow the Microsoft instructions (note the different Ubuntu versions) or get an existing Docker container with mssql-tools in it. The basic commands (Ubuntu 22.04 version) are:

```
curl https://packages.microsoft.com/keys/microsoft.asc | sudo tee /etc/apt/trusted.gpg.d/microsoft.asc
curl https://packages.microsoft.com/config/ubuntu/22.04/prod.list | sudo tee /etc/apt/sources.list.d/mssql-release.list
sudo apt-get update
sudo apt-get install mssql-tools18 unixodbc-dev
```

One can also try one of the existing mssql-tools Docker images, but these may be outdated (the most popular is 7 years old), of unknown quality, or based on different Linux distros.
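Folded into the question's Dockerfile, those steps would look roughly like this (a sketch based on the commands above and Microsoft's install instructions; inside an image there is no sudo, and the msodbcsql18/mssql-tools18 packages require accepting the EULA non-interactively):

```dockerfile
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y curl gnupg2
# Register Microsoft's package repository for Ubuntu 22.04
RUN curl https://packages.microsoft.com/keys/microsoft.asc \
      -o /etc/apt/trusted.gpg.d/microsoft.asc \
 && curl https://packages.microsoft.com/config/ubuntu/22.04/prod.list \
      -o /etc/apt/sources.list.d/mssql-release.list
# Install the ODBC driver, the tools, and the unixODBC headers
RUN apt-get update \
 && ACCEPT_EULA=Y apt-get install -y msodbcsql18 mssql-tools18 unixodbc-dev
```

The connection config would then use DRIVER = 'ODBC Driver 18 for SQL Server', the name the msodbcsql18 package registers in /etc/odbcinst.ini.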
2
2
77,939,941
2024-2-5
https://stackoverflow.com/questions/77939941/unable-to-start-mlflow-ui
Problem: following a quickstart guide in the official docs (https://mlflow.org/docs/latest/tracking/autolog.html), every time I try running mlflow ui --port 8080 inside the parent directory, with a particular conda env activated, the same error keeps popping up:

```
C:\Users\user\miniconda3\envs\regular\Lib\site-packages\pydantic\_internal\_config.py:322: UserWarning: Valid config keys have changed in V2:
* 'schema_extra' has been renamed to 'json_schema_extra'
  warnings.warn(message, UserWarning)
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\user\miniconda3\envs\regular\Scripts\mlflow.exe\__main__.py", line 4, in <module>
  File "C:\Users\user\miniconda3\envs\regular\Lib\site-packages\mlflow\cli.py", line 356, in <module>
    type=click.Choice([e.name for e in importlib.metadata.entry_points().get("mlflow.app", [])]),
                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'EntryPoints' object has no attribute 'get'
```

I have tried multiple things: reinstalling importlib-metadata (following the same "'EntryPoints' object has no attribute 'get'" issue on DigitalOcean), installing versions older than mlflow==2.10.0, as well as versions of pydantic older than pydantic==2.6.1 (following the advice of @BenWilson2 from Databricks on the MLflow GitHub issues). Would really appreciate it if anyone could help me wrap my head around the issue. Thanks in advance.

Project structure: (screenshot in the original post)

regModel.py:

```python
import warnings
import argparse
import logging
import pandas as pd
import numpy as np
import mlflow
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.linear_model import ElasticNet

logging.basicConfig(level=logging.WARN)
logger = logging.getLogger(__name__)

# evaluation function
def eval_metrics(actual, pred):
    rmse = np.sqrt(mean_squared_error(actual, pred))
    mae = mean_absolute_error(actual, pred)
    r2 = r2_score(actual, pred)
    return rmse, mae, r2

def run_experiment(alpha, l1_ratio):
    warnings.filterwarnings("ignore")

    # Read the wine-quality csv file from local
    data = pd.read_csv("data/red-wine-quality.csv")
    data.to_csv("data/red-wine-quality.csv", index=False)

    # Split the data into training and test sets. (0.75, 0.25) split.
    train, test = train_test_split(data)

    # The predicted column is "quality" which is a scalar from [3, 9]
    train_x = train.drop(["quality"], axis=1)
    test_x = test.drop(["quality"], axis=1)
    train_y = train[["quality"]]
    test_y = test[["quality"]]

    lr = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, random_state=42)
    lr.fit(train_x, train_y)

    predicted_qualities = lr.predict(test_x)
    (rmse, mae, r2) = eval_metrics(test_y, predicted_qualities)

    print("Elasticnet model (alpha={:f}, l1_ratio={:f}):".format(alpha, l1_ratio))
    print("  RMSE: %s" % rmse)
    print("  MAE: %s" % mae)
    print("  R2: %s" % r2)

    mlflow.log_param("alpha", alpha)
    mlflow.log_param("l1", l1_ratio)
    mlflow.log_metric("rmse", rmse)
    mlflow.log_metric("r2", r2)
    mlflow.log_metric("mae", mae)
    mlflow.sklearn.log_model(lr, "models")
```

introduction.ipynb: (screenshot in the original post)
Obviously the error was in mlflow's cli.py. However, an appropriate fix was needed, which, thanks to Gabomfim, has been discovered: just substitute .get("mlflow.app", []) with .select(group="mlflow.app").
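In context, the one-line change in mlflow's cli.py looks like this (on Python 3.12, importlib.metadata.entry_points() returns an EntryPoints object that supports .select() but no longer the dict-style .get()):

```python
# Before -- fails on Python 3.12 with AttributeError:
type=click.Choice([e.name for e in importlib.metadata.entry_points().get("mlflow.app", [])]),

# After -- .select() filters entry points by group:
type=click.Choice([e.name for e in importlib.metadata.entry_points().select(group="mlflow.app")]),
```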
2
2
77,938,155
2024-2-5
https://stackoverflow.com/questions/77938155/acknowledged-kombu-message-is-not-removed-from-rabbitmq-queue
I am trying to create a Python script that waits for a message on a RabbitMQ queue to start a task in a subprocess. During task execution, the script continues to consume another queue for a cancel order that would stop the task. I use the kombu package to handle interactions with RabbitMQ, and I call message.ack() when the task terminates (whether normally or because of cancellation).

Despite the call to message.ack(), the start message is not removed from the queue (I can tell by using rabbitmqctl). This causes the message to be redelivered even if the task ran to completion.

I created a sample repository to show the problem; the README file shows the reproduction steps. I do not know where the problem could be. I realize there are many moving parts, but this is a slightly simplified version of a real project with its own constraints (like the Python version being fixed to 3.8). I am open to any suggestion for making better use of kombu or asyncio, because I am quite new to AMQP and async Python.
I finally found the cause of my problem. Actually, there were two causes:

1. I did not set channels when creating the Consumers, so they were both using the same channel. Since I had set no_ack=True for the "stop" consumer, calling message.ack() had no effect. The strange thing is that no_ack=True should have caused the ACK to be sent at reception, even for the start message, but it was not.

2. I got mistaken about the closure of the on_done callback. I thought the closure contained the message at the moment it was created, but it didn't; instead, it contained the last message received, i.e. the "stop" message. I'm quite new to Python, so I am not sure how its closures work; I assumed they worked like JavaScript's, but it seems not. Overall, I stored the message when creating the task, gave it back as an argument to the on_done callback, and it worked.

I could not find the solution until I used the debugger to inspect the execution step by step. This shows once again the superiority of the debugger when troubleshooting; I should have gone with it earlier.
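The closure pitfall in point 2 is Python's standard late binding: a closure captures the variable, not its value at definition time, so by the time the callback runs, the variable may refer to a later message. A minimal generic sketch (not the poster's actual kombu code):

```python
callbacks = []
for message in ["start", "stop"]:
    # Bug: `message` is looked up when the lambda is *called*,
    # so every callback sees the last value ("stop").
    callbacks.append(lambda: print("handled:", message))
for cb in callbacks:
    cb()  # handled: stop / handled: stop

callbacks = []
for message in ["start", "stop"]:
    # Fix: bind the current value as a default argument
    # (functools.partial works too), capturing it at definition time.
    callbacks.append(lambda msg=message: print("handled:", msg))
for cb in callbacks:
    cb()  # handled: start / handled: stop
```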
2
1
77,940,460
2024-2-5
https://stackoverflow.com/questions/77940460/psycopg3-pool-all-connections-getting-lost-instantly-after-long-idle-time
If it's a standalone persistent connection, I have no problem: the connection lasts for hours. If I use a psycopg(3) ConnectionPool and make requests, the first requests are OK and my pool size stays at 5, but at one point the pool size decreases and I get a PoolTimeout when a client makes a new request.

Then I tried: start the pool, do not request anything, just wait. After some time (around 1h) I look at PostgreSQL (pg_stat_activity) and I have 5 idle connections (= pool size). Then I make a request from a client, and all connections vanish at the same time (I can see it in pg_stat_activity), I get a PoolTimeout, and the situation is stuck. I also tried decreasing max_timeout to 900, but the issue is the same.

```python
def init_pool(self, min_cnx=5):
    cnx_str = f"host={DB_HOST} port={DB_PORT} dbname={DB_NAME} user={DB_USERNAME} password={DB_USERPWD}"
    self.pool = ConnectionPool(conninfo=cnx_str, min_size=min_cnx, open=True,
                               check=ConnectionPool.check_connection)

def query(self, q, dbv=None, debug=False) -> list:
    print("pool size: ", len(self.pool._pool))
    print("pool stats before: ", self.pool.get_stats())
    with self.pool.connection() as cnx:
        if cnx.closed:
            self.pool.check()
            raise ConnectionError("ERROR: PostgreSQL cnx from pool is closed.")
        cnx.autocommit = True
        cnx.row_factory = self.row_factory
        with psycopg.ClientCursor(cnx) as rdc:
            rdc.execute(q, dbv) if dbv else rdc.execute(q)
            if debug and rdc._query:
                print(rdc._query.query)
            if rdc.description:
                data = rdc.fetchall()
            else:
                data = []
            print("pool stats after query: ", self.pool.get_stats())
    print("pool stats after: ", self.pool.get_stats())
    return data
```

And logs:

```
[pid: 236344|app: 0|req: 26/26] () {56 vars in 1083 bytes} [Mon Feb  5 11:41:56 2024] POST /v1/user => generated 933 bytes in 109 msecs (HTTP/1.1 200) 8 headers in 749 bytes (1 switches on core 0)
pool size:  3
pool stats before:  {'connections_num': 5, 'requests_num': 3, 'requests_queued': 1, 'connections_ms': 268, 'requests_wait_ms': 34, 'usage_ms': 34, 'pool_min': 5, 'pool_max': 5, 'pool_size': 5, 'pool_available': 3, 'requests_waiting': 0}
pool stats after query:  {'connections_num': 5, 'requests_num': 4, 'requests_queued': 1, 'connections_ms': 268, 'requests_wait_ms': 34, 'usage_ms': 34, 'pool_min': 5, 'pool_max': 5, 'pool_size': 5, 'pool_available': 2, 'requests_waiting': 0}
pool stats after:  {'connections_num': 5, 'requests_num': 4, 'requests_queued': 1, 'connections_ms': 268, 'requests_wait_ms': 34, 'usage_ms': 49, 'pool_min': 5, 'pool_max': 5, 'pool_size': 5, 'pool_available': 2, 'requests_waiting': 0}
[pid: 236344|app: 0|req: 28/28] () {56 vars in 1087 bytes} [Mon Feb  5 11:41:58 2024] POST /v1/iobjs => generated 4788 bytes in 29 msecs (HTTP/1.1 200) 6 headers in 302 bytes (1 switches on core 0)
[pid: 236344|app: 0|req: 29/29] () {54 vars in 816 bytes} [Mon Feb  5 11:42:05 2024] OPTIONS /v1/user/quit => generated 0 bytes in 0 msecs (HTTP/1.1 200) 6 headers in 307 bytes (0 switches on core 0)
pool size:  0
pool stats before:  {'connections_num': 5, 'requests_num': 6, 'requests_queued': 1, 'connections_ms': 268, 'requests_wait_ms': 34, 'usage_ms': 62, 'pool_min': 5, 'pool_max': 5, 'pool_size': 5, 'pool_available': 0, 'requests_waiting': 0}
Traceback (most recent call last):
  File "/var/srvr/log.py", line 68, in process
    self.db.query(
  File "/var/srvr/pg3p.py", line 71, in query
    with self.pool.connection() as cnx:
  File "/usr/local/lib/python3.12/contextlib.py", line 137, in __enter__
    return next(self.gen)
           ^^^^^^^^^^^^^^
  File "/var/srvr/lib/python3.12/site-packages/psycopg_pool/pool.py", line 170, in connection
    conn = self.getconn(timeout=timeout)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/var/srvr/lib/python3.12/site-packages/psycopg_pool/pool.py", line 204, in getconn
    raise PoolTimeout(
psycopg_pool.PoolTimeout: couldn't get a connection after 30.00 sec
pool size:  0
pool stats before:  {'connections_num': 5, 'requests_num': 7, 'requests_queued': 2, 'connections_ms': 268, 'requests_wait_ms': 30035, 'usage_ms': 62, 'requests_errors': 1, 'pool_min': 5, 'pool_max': 5, 'pool_size': 5, 'pool_available': 0, 'requests_waiting': 1}
```

EDIT: I switched back to a single persistent connection, and it is very stable (days). Then, following advice in the comments, I moved back to the pool with min_size=10 and max_size=20. No change in behaviour: the pool is losing connections without trying to initiate new ones to replace the lost ones (I also tried 20 and 50 min/max; no difference).

```
pool stats after:  {'connections_num': 11, 'requests_num': 34, 'requests_queued': 1, 'connections_ms': 641, 'requests_wait_ms': 37, 'usage_ms': 67, 'pool_min': 10, 'pool_max': 20, 'pool_size': 11, 'pool_available': 5, 'requests_waiting': 0}
[pid: 282017|app: 0|req: 39/39] () {56 vars in 1087 bytes} [Thu Feb  8 11:02:17 2024] POST /v1/iobjs => generated 30081 bytes in 10 msecs (HTTP/1.1 200) 6 headers in 303 bytes (1 switches on core 0)
pool size:  5
pool stats before:  {'connections_num': 11, 'requests_num': 34, 'requests_queued': 1, 'connections_ms': 641, 'requests_wait_ms': 37, 'usage_ms': 67, 'pool_min': 10, 'pool_max': 20, 'pool_size': 11, 'pool_available': 5, 'requests_waiting': 0}
pool stats after query:  {'connections_num': 11, 'requests_num': 35, 'requests_queued': 1, 'connections_ms': 641, 'requests_wait_ms': 37, 'usage_ms': 67, 'connections_lost': 1, 'pool_min': 10, 'pool_max': 20, 'pool_size': 11, 'pool_available': 3, 'requests_waiting': 0}
pool stats after:  {'connections_num': 11, 'requests_num': 35, 'requests_queued': 1, 'connections_ms': 641, 'requests_wait_ms': 37, 'usage_ms': 70, 'connections_lost': 1, 'pool_min': 10, 'pool_max': 20, 'pool_size': 11, 'pool_available': 3, 'requests_waiting': 0}
[pid: 282017|app: 0|req: 40/40] () {56 vars in 1087 bytes} [Thu Feb  8 11:02:17 2024] POST /v1/iobjs => generated 4788 bytes in 5 msecs (HTTP/1.1 200) 6 headers in 302 bytes (1 switches on core 0)
[pid: 282017|app: 0|req: 41/41] () {54 vars in 808 bytes} [Thu Feb  8 11:02:26 2024] OPTIONS /v1/iobjs => generated 0 bytes in 0 msecs (HTTP/1.1 200) 6 headers in 307 bytes (0 switches on core 0)
[pid: 282017|app: 0|req: 42/42] () {54 vars in 814 bytes} [Thu Feb  8 11:02:26 2024] OPTIONS /v1/settings => generated 0 bytes in 0 msecs (HTTP/1.1 200) 6 headers in 307 bytes (0 switches on core 0)
pool size:  3
pool stats before:  {'connections_num': 11, 'requests_num': 35, 'requests_queued': 1, 'connections_ms': 641, 'requests_wait_ms': 37, 'usage_ms': 70, 'connections_lost': 1, 'pool_min': 10, 'pool_max': 20, 'pool_size': 11, 'pool_available': 3, 'requests_waiting': 0}
pool stats after query:  {'connections_num': 11, 'requests_num': 36, 'requests_queued': 1, 'connections_ms': 641, 'requests_wait_ms': 37, 'usage_ms': 70, 'connections_lost': 1, 'pool_min': 10, 'pool_max': 20, 'pool_size': 11, 'pool_available': 2, 'requests_waiting': 0}
pool stats after:  {'connections_num': 11, 'requests_num': 36, 'requests_queued': 1, 'connections_ms': 641, 'requests_wait_ms': 37, 'usage_ms': 73, 'connections_lost': 1, 'pool_min': 10, 'pool_max': 20, 'pool_size': 11, 'pool_available': 2, 'requests_waiting': 0}
[pid: 282017|app: 0|req: 43/43] () {56 vars in 1087 bytes} [Thu Feb  8 11:02:26 2024] POST /v1/iobjs => generated 4788 bytes in 6 msecs (HTTP/1.1 200) 6 headers in 302 bytes (1 switches on core 0)
Traceback (most recent call last):
  File "main.py", line 326, in v1_settings
    row = db.query_row(
          ^^^^^^^^^^^^^
```

And the PostgreSQL logs (debug3) show nothing special (AFAIU):

```
2024-02-08 11:02:17.075 CET [282007] pguser@maindb DEBUG: proc_exit(0): 2 callbacks to make
2024-02-08 11:02:17.075 CET [282007] pguser@maindb DEBUG: exit(0)
2024-02-08 11:02:17.075 CET [282007] pguser@maindb DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
2024-02-08 11:02:17.075 CET [282007] pguser@maindb DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
2024-02-08 11:02:17.075 CET [282007] pguser@maindb DEBUG: proc_exit(-1): 0 callbacks to make
2024-02-08 11:02:17.079 CET [281970] DEBUG: server process (PID 282007) exited with exit code 0
2024-02-08 11:02:26.183 CET [282006] pguser@maindb DEBUG: shmem_exit(0): 4 before_shmem_exit callbacks to make
2024-02-08 11:02:26.183 CET [282006] pguser@maindb DEBUG: shmem_exit(0): 6 on_shmem_exit callbacks to make
2024-02-08 11:02:26.183 CET [282006] pguser@maindb DEBUG: proc_exit(0): 2 callbacks to make
2024-02-08 11:02:26.183 CET [282006] pguser@maindb DEBUG: exit(0)
2024-02-08 11:02:26.183 CET [282006] pguser@maindb DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
2024-02-08 11:02:26.183 CET [282006] pguser@maindb DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
2024-02-08 11:02:26.183 CET [282006] pguser@maindb DEBUG: proc_exit(-1): 0 callbacks to make
2024-02-08 11:02:26.188 CET [282009] pguser@maindb DEBUG: shmem_exit(0): 4 before_shmem_exit callbacks to make
2024-02-08 11:02:26.189 CET [282009] pguser@maindb DEBUG: shmem_exit(0): 6 on_shmem_exit callbacks to make
2024-02-08 11:02:26.189 CET [282009] pguser@maindb DEBUG: proc_exit(0): 2 callbacks to make
2024-02-08 11:02:26.189 CET [282009] pguser@maindb DEBUG: exit(0)
2024-02-08 11:02:26.189 CET [282009] pguser@maindb DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
2024-02-08 11:02:26.189 CET [282009] pguser@maindb DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
2024-02-08 11:02:26.189 CET [282009] pguser@maindb DEBUG: proc_exit(-1): 0 callbacks to make
2024-02-08 11:02:26.189 CET [281970] DEBUG: server process (PID 282006) exited with exit code 0
2024-02-08 11:02:26.191 CET [282011] pguser@maindb DEBUG: shmem_exit(0): 4 before_shmem_exit callbacks to make
2024-02-08 11:02:26.191 CET [282011] pguser@maindb DEBUG: shmem_exit(0): 6 on_shmem_exit callbacks to make
2024-02-08 11:02:26.191 CET [282011] pguser@maindb DEBUG: proc_exit(0): 2 callbacks to make
2024-02-08 11:02:26.192 CET [282011] pguser@maindb DEBUG: exit(0)
2024-02-08 11:02:26.192 CET [282011] pguser@maindb DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
2024-02-08 11:02:26.192 CET [282011] pguser@maindb DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
2024-02-08 11:02:26.192 CET [282011] pguser@maindb DEBUG: proc_exit(-1): 0 callbacks to make
2024-02-08 11:02:26.193 CET [281970] DEBUG: server process (PID 282009) exited with exit code 0
2024-02-08 11:02:26.194 CET [281970] DEBUG: server process (PID 282011) exited with exit code 0
2024-02-08 11:02:33.979 CET [281970] DEBUG: postmaster received pmsignal signal
2024-02-08 11:02:33.983 CET [282844] DEBUG: InitPostgres
2024-02-08 11:02:33.985 CET [282844] DEBUG: autovacuum: processing database "template1"
```

query_row() is the same as query():

```python
def query_row(self, q, dbv=None, debug=False):
    with self.pool.connection() as cnx:
        cnx.autocommit = True
        cnx.row_factory = self.row_factory
        with psycopg.ClientCursor(cnx) as c:
            c.execute(q, dbv) if dbv else c.execute(q)
            if debug and c._query:
                print(c._query.query)
            if c.rowcount == 1:
                return c.fetchone()
            return None
```
Connection logs from postgresql gave a hint: an SSL error before all the connections go down:
2024-02-09 10:38:05.627 CET [297036] LOG: checkpoint starting: time
2024-02-09 10:38:05.739 CET [297036] LOG: checkpoint complete: wrote 2 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.103 s, sync=0.002 s, total=0.112 s; sync files=2, longest=0.001 s, average=0.001 s; distance=1 kB, estimate=1 kB; lsn=0/1C4BB70, redo lsn=0/1C4BB38
2024-02-09 10:46:46.488 CET [297518] user@db LOG: SSL error: decryption failed or bad record mac
2024-02-09 10:46:46.489 CET [297518] user@db LOG: could not receive data from client: Connection reset by peer
2024-02-09 10:46:46.489 CET [297518] user@db LOG: disconnection: session time: 0:10:00.165 user=user database=db host=127.0.0.1 port=56266
2024-02-09 12:18:21.905 CET [297528] user@db LOG: disconnection: session time: 1:41:35.416 user=user database=db host=127.0.0.1 port=56356
2024-02-09 12:18:21.909 CET [297519] user@db LOG: disconnection: session time: 1:41:35.587 user=user database=db host=127.0.0.1 port=56276
2024-02-09 12:18:24.998 CET [297520] user@db LOG: disconnection: ...
2024-02-09 12:18:25.033 CET [297521] user@db LOG: disconnection: session time: 1:41:38.672 user=user database=db host=127.0.0.1 port=56296
2024-02-09 12:18:33.726 CET [297522] user@db LOG: disconnection: session time: 1:41:47.360 user=user database=db host=127.0.0.1 port=56302
2024-02-09 12:18:33.739 CET [297523] user@db LOG: disconnection: session time: 1:41:47.372 user=user database=db host=127.0.0.1 port=56308
2024-02-09 12:18:33.745 CET [297524] user@db LOG: disconnection: session time: 1:41:47.316 user=user database=db host=127.0.0.1 port=56310
2024-02-09 12:18:33.760 CET [297525] user@db LOG: disconnection: session time: 1:41:47.325 user=user database=db host=127.0.0.1 port=56320
2024-02-09 12:18:35.793 CET [297526] user@db LOG: disconnection: session time: 1:41:49.353 user=user database=db host=127.0.0.1 port=56336
2024-02-09 12:18:35.847 CET [297527] user@db LOG: disconnection: session time: 1:41:49.365 user=user database=db host=127.0.0.1 port=56340
The reason for this SSL error is uwsgi's default behaviour: with 2 workers, the application context is created once at init (variables initialized, among them the Pool and its connections), and the workers then share that memory. This is the classic case of connections being shared between 2 workers. At some point this causes an SSL inconsistency, resulting in the mass-closing. The solution was to instruct uwsgi, with lazy-apps=true, to create each worker with its own context.
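For illustration, a minimal uwsgi configuration sketch with that option; the module name is inferred from the traceback's main.py and the rest is only a plausible baseline, not taken from the original setup:

; uwsgi.ini
[uwsgi]
module = main:app   ; hypothetical WSGI entry point
processes = 2
; lazy-apps loads the application in each worker process instead of the
; master, so the psycopg_pool connections are never shared across forks
lazy-apps = true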
2
2
77,929,999
2024-2-2
https://stackoverflow.com/questions/77929999/how-to-generate-name-combinations-with-initials
Given a string of a person's name: "Richard David" I'd like to create a list of all combinations of this name where up to all but one of the individual names are replaced with initials:
[
    "Richard David",
    "R David",
    "Richard D",
]
An example with three individual names: "Richard Anthony David" would output:
[
    # replace no initials
    "Richard Anthony David",
    # replace one name with initial
    "Richard Anthony D",
    "Richard A David",
    "R Anthony David",
    # replace two names with initials
    "R A David",
    "R Anthony D",
    "Richard A D",
]
itertools is usually my go-to for combinations but I can't quite work out how to build in the replacement of a name with an initial.
This uses the fact that the order of the elements returned by itertools.product is predictable, as guaranteed in its documentation.

from itertools import product

def all_names(fullname):
    pool = [(word, word[0]) for word in fullname.split()]
    return [' '.join(name) for name in product(*pool)][:-1]

print(*all_names("Richard Anthony David"), sep='\n')
# Richard Anthony David
# Richard Anthony D
# Richard A David
# Richard A D
# R Anthony David
# R Anthony D
# R A David
2
3
77,922,817
2024-2-1
https://stackoverflow.com/questions/77922817/importerror-cannot-import-name-iterator-from-typing-extensions
When I try to install openai on my Google Colab, I get this error:
from openai import OpenAI
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-6-5ee9a4e68b89> in <cell line: 1>()
----> 1 from openai import OpenAI

5 frames
/usr/local/lib/python3.10/dist-packages/openai/_utils/_streams.py in <module>
      1 from typing import Any
----> 2 from typing_extensions import Iterator, AsyncIterator
      3
      4
      5 def consume_sync_iterator(iterator: Iterator[Any]) -> None:

ImportError: cannot import name 'Iterator' from 'typing_extensions' (/usr/local/lib/python3.10/dist-packages/typing_extensions.py)
---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can manually install dependencies using either !pip or !apt. To view examples of installing some common dependencies, click the "Open Examples" button below.
My details to reproduce:
Python 3.10.12
OS is Windows 11
typing_extensions is 4.9.0
I found a related issue on the OpenAI forum: https://community.openai.com/t/error-while-importing-openai-from-open-import-openai/578166/26
Solution 1: Credit to @Aymane_oub

pip install --force-reinstall typing-extensions==4.5
pip install --force-reinstall openai==1.8

Solution 2: Credit to @Shannon-21
After navigating through the answers I found that I can change the import of the file that is trying to import Iterator from typing_extensions in this way (on Google Colab):

%%writefile /usr/local/lib/python3.10/dist-packages/openai/_utils/_streams.py
from typing import Any
from typing_extensions import AsyncIterator
from typing import Iterator  # import Iterator from the correct library


def consume_sync_iterator(iterator: Iterator[Any]) -> None:
    for _ in iterator:
        ...


async def consume_async_iterator(iterator: AsyncIterator[Any]) -> None:
    async for _ in iterator:
        ...

After running the above code in my Colab cell, I was able to use the import.
Link to answer: https://community.openai.com/t/error-while-importing-openai-from-open-import-openai/578166/27?u=rubeenatalha313
2
4
77,941,127
2024-2-5
https://stackoverflow.com/questions/77941127/how-can-i-get-the-fully-qualified-names-of-return-types-and-argument-types-using
Consider the following example. I use python clang_example.py to parse the header my_source.hpp for function and method declarations. my_source.hpp #pragma once namespace ns { struct Foo { struct Bar {}; Bar fun1(void*); }; using Baz = Foo::Bar; void fun2(Foo, Baz const&); } clang_example.py I use the following code to parse the function & method declarations using libclang's python bindings: import clang.cindex import typing def filter_node_list_by_predicate( nodes: typing.Iterable[clang.cindex.Cursor], predicate: typing.Callable ) -> typing.Iterable[clang.cindex.Cursor]: for i in nodes: if predicate(i): yield i yield from filter_node_list_by_predicate(i.get_children(), predicate) if __name__ == '__main__': index = clang.cindex.Index.create() translation_unit = index.parse('my_source.hpp', args=['-std=c++17']) for i in filter_node_list_by_predicate( translation_unit.cursor.get_children(), lambda n: n.kind in [clang.cindex.CursorKind.FUNCTION_DECL, clang.cindex.CursorKind.CXX_METHOD] ): print(f"Function name: {i.spelling}") print(f"\treturn type: \t{i.type.get_result().spelling}") for arg in i.get_arguments(): print(f"\targ: \t{arg.type.spelling}") Output Function name: fun1 return type: Bar arg: void * Function name: fun2 return type: void arg: Foo arg: const Baz & Now I would like to extract the fully qualified name of the return type and argument types so I can correctly reference them from the outermost scope: Function name: ns::Foo::fun1 return type: ns::Foo::Bar arg: void * Function name: ns::fun2 return type: void arg: ns::Foo arg: const ns::Baz & Using this SO answer I can get the fully qualified name of the function declaration, but not of the return and argument types. How do I get the fully qualified name of a type (not a cursor) in clang? Note: I tried using Type.get_canonical and it gets me close: print(f"\treturn type: \t{i.type.get_result().get_canonical().spelling}") for arg in i.get_arguments(): print(f"\targ: \t{arg.type.get_canonical().spelling}") But Type.get_canonical also resolves typedefs and aliases, which I do not want. I want the second argument of fun2 to be resolved as const ns::Baz & and not const ns::Foo::Bar &. EDIT: After having tested Scott McPeak's answer on my real application case I realized that I need this code to properly resolve template classes and nested types of template classes as well. Given the above code as well as namespace ns { template <typename T> struct ATemplate { using value_type = T; }; typedef ATemplate<Baz> ABaz; ABaz::value_type fun3(); } I would want the return type to be resolved to ns::ABaz::value_type and not ns::ATemplate::value_type or ns::ATemplate<ns::Foo::Bar>::value_type. I would be willing to settle for ns::ATemplate<Baz>::value_type. Also, I can migrate to the C++ API, if the functionality of the Python bindings are too limited for what I want to do.
Unfortunately, there does not appear to be a simple way to print a type using fully-qualified names. Even in the C++ API, QualType::getAsString(PrintingPolicy&) ignores the SuppressScope flag due to the intervention of the ElaboratedTypePolicyRAII class (I don't know why, and the git commit history offers no clues that I could find). Even if the C++ API worked as I would have hoped/expected, PrintingPolicy isn't exposed in the C or Python APIs. Consequently, to do this, we have to resort to taking apart the type structure in the client code, printing fully qualified names whenever we hit a named type, which is typically expressed as TypeKind.ELABORATED. (I'm not sure if they always are.) The following example program demonstrates the technique, embodied by the type_str function. As a proof of concept, it does not exhaustively handle all of the cases, although it does cover the most common ones. You can look at the source of TypePrinter.cpp to get an idea of what handling all cases entails. #!/usr/bin/env python3 """ Print types with fully-qualified names. This demonstrates the basic approach, digging into the type structure to print its details, including fully-qualified names when we encounter named types. However, there are several unhandled cases, some of which are indicated with TODOs below. Also, beware that I was unable to get 'mypy' to work properly on this (despite installing the 'types-clang' package), so the type annotations below might be incorrect. """ import clang.cindex import typing def get_decl_fqn(decl: clang.cindex.Cursor) -> str: """ Given a Cursor that refers to a Declaration, get its fully qualified name. """ # The semantic parent is the enclosing class, namespace, or # translation unit. parent = decl.semantic_parent assert(parent is not None) # When we hit the TU, just return the simple identifier. if parent.kind == clang.cindex.CursorKind.TRANSLATION_UNIT: return decl.spelling # Otherwise, print the parent name as a qualifier. else: return get_decl_fqn(parent) + "::" + decl.spelling def starts_with_letter(s: str) -> bool: """ True if 's' starts with a letter. """ return s != "" and s[0].isalpha() def ends_with_letter(s: str) -> bool: """ True if 's' ends with a letter. """ return s != "" and s[-1].isalpha() def join_type_strs(s1: str, s2: str) -> str: """ Join two strings containing fragments of type syntax, inserting a space if both are non-empty and either has a letter adjacent to the joined edge. """ if s1 != "" and s2 != "" and (ends_with_letter(s1) or starts_with_letter(s2)): return s1 + " " + s2 else: return s1 + s2 def type_str(t: clang.cindex.Type) -> str: """ Print 't' in C++ syntax, using fully qualified names for named types. (In contrast, 't.spelling' omits qualifiers.) """ return join_type_strs(before_type_str(t), after_type_str(t)) def before_type_str(t: clang.cindex.Type) -> str: """ Print the part of 't' that would go before the declarator name in a declaration of a variable with that type. """ return join_type_strs(before_type_str_nq(t), cv_qualifiers_str(t)) def cv_qualifiers_str(t: clang.cindex.Type) -> str: """ If 't' has any const/volatile/restrict qualifiers, return a string containing them, separated by spaces. Otherwise, return "". 
""" qualifiers = [] if t.is_const_qualified(): qualifiers.append("const") if t.is_volatile_qualified(): qualifiers.append("volatile") if t.is_restrict_qualified(): qualifiers.append("restrict") return " ".join(qualifiers) def before_type_str_nq(t: clang.cindex.Type) -> str: """ Print the part of 't' that would go before the declarator name in a declaration of a variable with that type, ignoring any CV qualifiers. """ if t.kind == clang.cindex.TypeKind.ELABORATED: # Most named types are represented with the "elaborated" node, # which typically has a name. return get_decl_fqn(t.get_declaration()) elif t.kind == clang.cindex.TypeKind.POINTER: p = t.get_pointee() # TODO: This does not handle pointer-to-function properly, since # that requires additional parentheses. return join_type_strs(before_type_str(p), "*") elif t.kind == clang.cindex.TypeKind.LVALUEREFERENCE: p = t.get_pointee() return join_type_strs(before_type_str(p), "&") elif t.kind == clang.cindex.TypeKind.RVALUEREFERENCE: p = t.get_pointee() return join_type_strs(before_type_str(p), "&&") elif t.kind == clang.cindex.TypeKind.FUNCTIONPROTO: rettype = t.get_result() return before_type_str(rettype) # TODO: FUNCTIONNOPROTO, pointer-to-member, and possibly others. else: # For other types, just use the spelling as its "before" syntax. return t.spelling def after_type_str(t: clang.cindex.Type) -> str: """ Print the part of 't' that would go after the declarator name in a declaration of a variable with that type. """ if t.kind == clang.cindex.TypeKind.FUNCTIONPROTO: res = "(" count = 0 for argtype in t.argument_types(): if count > 0: res += ", " count += 1 res += type_str(argtype) res += ")" return res # TODO: FUNCTIONNOPROTO and the various array types. return "" # ------------- Original code, edited to call 'type_str' --------------- def filter_node_list_by_predicate( nodes: typing.Iterable[clang.cindex.Cursor], predicate: typing.Callable ) -> typing.Iterable[clang.cindex.Cursor]: for i in nodes: if predicate(i): yield i yield from filter_node_list_by_predicate(i.get_children(), predicate) if __name__ == '__main__': index = clang.cindex.Index.create() translation_unit = index.parse('my_source.hpp', args=['-std=c++17']) for i in filter_node_list_by_predicate( translation_unit.cursor.get_children(), lambda n: n.kind in [clang.cindex.CursorKind.FUNCTION_DECL, clang.cindex.CursorKind.CXX_METHOD] ): print(f"Function name: {i.spelling}") # ---- Edited section ---- # Compare the 'spelling' method to 'type_str' defined above. t = i.type print(f"\tFunction type spelling : {t.spelling}") print(f"\tFunction type type_str(): {type_str(t)}") # EOF On your example input, it prints: $ ./fq-type-name.py Function name: fun1 Function type spelling : Bar (void *) Function type type_str(): ns::Foo::Bar (void *) Function name: fun2 Function type spelling : void (Foo, const Baz &) Function type type_str(): void (ns::Foo, ns::Baz const &) Notably, this fully qualifies ns::Foo::Bar in the return type of fun1. It also uses ns::Baz in the argument list of fun2, rather than using the underlying type, Bar. The revised question asks about a case involving templates and a typedef that is used as a scope qualifier, and wants to recover a fully-qualified name that uses that typedef. This is not possible using the approach outlined above because we construct the qualifiers by walking up the scope stack from the found declaration, ignoring how the type was expressed originally. 
Using the Python API, it is possible to see the original type syntax and its qualifiers by iterating over children, but the child list is difficult to interpret. For example, if the input is: namespace ns { struct A { struct Inner {}; }; typedef A B; B::Inner f(int x, A a); } and we use this code to print the TU: def print_ast(node: clang.cindex.Cursor, label: str, indentLevel: int) -> None: """ Recursively print the subtree rooted at 'node'. """ indent = " " * indentLevel print(f"{indent}{label}: kind={node.kind} " + f"spelling='{node.spelling}' " + f"loc={node.location.line}:{node.location.column}") indentLevel += 1 index = 0 for c in node.get_children(): print_ast(c, f"child {index}", indentLevel) index += 1 then the output is: TU: kind=CursorKind.TRANSLATION_UNIT spelling='test3.cc' loc=0:0 child 0: kind=CursorKind.NAMESPACE spelling='ns' loc=1:11 child 0: kind=CursorKind.STRUCT_DECL spelling='A' loc=2:10 child 0: kind=CursorKind.STRUCT_DECL spelling='Inner' loc=3:12 child 1: kind=CursorKind.TYPEDEF_DECL spelling='B' loc=5:13 child 0: kind=CursorKind.TYPE_REF spelling='struct ns::A' loc=5:11 child 2: kind=CursorKind.FUNCTION_DECL spelling='f' loc=6:12 child 0: kind=CursorKind.TYPE_REF spelling='ns::B' loc=6:3 child 1: kind=CursorKind.TYPE_REF spelling='struct ns::A::Inner' loc=6:6 child 2: kind=CursorKind.PARM_DECL spelling='x' loc=6:18 child 3: kind=CursorKind.PARM_DECL spelling='a' loc=6:23 child 0: kind=CursorKind.TYPE_REF spelling='struct ns::A' loc=6:21 Observe that the qualified type B::Inner is expressed as this pair of adjacent children: child 0: kind=CursorKind.TYPE_REF spelling='ns::B' loc=6:3 child 1: kind=CursorKind.TYPE_REF spelling='struct ns::A::Inner' loc=6:6 There's no simple way to see that the first child is the qualifier portion of the second child. This is a general problem with the Clang C API, and consequently of the Python API: accessors for specific roles are often missing, so one must resort to iterating over children and trying to reverse-engineer which is which. (I spent a couple weeks going down this road for a different project, and eventually had to admit defeat.) Therefore, with the revised requirement of not merely computing a type syntax string that uses fully-qualified names, but one that adheres to the original syntax as closely as possible, I think it's going to be a difficult task to robustly complete using the Python API since that original syntax is tough to unambiguously retrieve. I recommend instead using the C++ API. This is still non-trivial, but all the information is there and available through accessors that distinguish the various "child" roles. If you want a tip on getting started, I have a tool on GitHub called print-clang-ast that prints a lot (but by no means all) of the Clang AST in a moderately readable JSON format. I even just added code to print the details of NestedNameSpecifier (which is how qualified names are represented) while trying to see if the Python API could be used for what you what. If you try to accomplish this using the C++ API but run into trouble, you could then ask a new question based on where you get stuck.
2
1
77,951,155
2024-2-6
https://stackoverflow.com/questions/77951155/does-offset-have-a-default-value
In the code below, if the upload_file_2_sp function is called, I'm unsure of how the offset argument is being assigned a value. The code will run successfully as is, but I'm trying to add a second argument to the print_upload_progress function to pass each file in my file list loop to the print_upload_progress function, so I can display the upload progress as the for loop iterates through my list of files. Not sure how to do this, because I'm not sure how the assignment of the offset argument is currently working.

local_path = '/sso_win_mounts/win_appsprod/CRMDAGROUP/phones/reporting/customer_care/prod/weekly/kpm/test 3.xlsb'

# display progress of file upload to sharepoint
def print_upload_progress(offset):
    print(str(offset))
    # type: (int) -> None
    file_size = os.path.getsize(local_path)
    print(
        "Uploaded '{0}' bytes from '{1}'...[{2}%]".format(
            offset, file_size, round(offset / file_size * 100, 2)
        )
    )

# upload files to sharepoint
def upload_file_2_sp(report_list, base_url, target_url):
    ctx = sharepointConnection(base_url)
    target_folder = ctx.web.get_folder_by_server_relative_url(target_url)
    size_chunk = 1000000
    for file in report_list:
        uploaded_file = target_folder.files.create_upload_session(
            file, size_chunk, print_upload_progress
        ).execute_query()
        print("File {0} has been uploaded successfully".format(uploaded_file.serverRelativeUrl))
The arguments are passed automatically by create_upload_session(); you can't change what it passes. You could use a lambda to pass additional arguments to your function.

for file in report_list:
    uploaded_file = target_folder.files.create_upload_session(
        file, size_chunk, lambda offset, file=file: print_upload_progress(offset, file)
    ).execute_query()
    print("File {0} has been uploaded successfully".format(uploaded_file.serverRelativeUrl))

See Creating functions (or lambdas) in a loop (or comprehension) for the explanation of the file=file argument to the lambda.
2
3
77,950,773
2024-2-6
https://stackoverflow.com/questions/77950773/resampling-and-interpolation-of-pandas-dataframe-with-multiple-columns
I have a dataframe that looks roughly like this:
timestamp             mmsi          departures_count  calc_speed  coursechange
2012-01-31 07:11:59   1.252340e+12  1                 4.041755    0.000000
2012-01-31 07:22:20   1.252340e+12  1                 5.209561    30.000000
2012-01-31 07:35:34   1.252340e+12  2                 5.766184    1.000000
2012-01-31 07:45:35   1.252340e+12  2                 5.932638    4.000000
2016-11-24 17:05:21   2.775153e+14  1                 1.673716    17.000000
2016-11-24 17:21:21   2.775153e+14  1                 0.725156    180.800003
2016-11-24 17:38:40   2.775153e+14  1                 0.418093    117.500003
The Dataframe has more columns and way more rows (2284331 rows x 16 columns), but these are the important ones needed for the operations. It consists of unique identifiers mmsi and departures_count indicating an associated trip.
Since the timestep between datapoints varies significantly, I am trying to resample the data for better comparison. I would like the timesteps to always be 10 minutes. This means that I want to upsample the data if there are multiple datapoints within a 10 minute span and downsample it (using interpolation) when there is a larger timegap. This needs to be done for each departures_count per mmsi.
I tried using the following, but it either just returns NaN's or the same value for each row of a trip (departure).

timeindex = pd.to_datetime(df['timestamp'], format='%Y-%m-%d %H:%M:%S')
df.index = timeindex
group = df.groupby(['mmsi', 'departures_count'])
df_test = group[['coursechange', 'calc_speed']].resample('10Min').interpolate(method='linear')

If possible, I would also like to change the interpolation method to cubic instead of linear. Any suggestions would be highly appreciated!
You can try:

def fn(g):
    rng = pd.date_range(
        g.index[0].floor("10min"), g.index[-1].ceil("10min"), freq="10min"
    )
    g = (
        g.reindex(g.index.union(rng))
        .interpolate(method="linear", limit_direction="both")
        .reindex(rng)
    )
    return g

df["timestamp"] = pd.to_datetime(df["timestamp"])
df = df.set_index("timestamp")

out = df.groupby(["mmsi", "departures_count"], group_keys=False).apply(fn)
print(out)

Prints:
                             mmsi  departures_count  calc_speed  coursechange
2012-01-31 07:10:00  1.252340e+12               1.0    4.041755      0.000000
2012-01-31 07:20:00  1.252340e+12               1.0    4.625658     15.000000
2012-01-31 07:30:00  1.252340e+12               1.0    5.209561     30.000000
2012-01-31 07:30:00  1.252340e+12               2.0    5.766184      1.000000
2012-01-31 07:40:00  1.252340e+12               2.0    5.849411      2.500000
2012-01-31 07:50:00  1.252340e+12               2.0    5.932638      4.000000
2016-11-24 17:00:00  2.775153e+14               1.0    1.673716     17.000000
2016-11-24 17:10:00  2.775153e+14               1.0    1.357529     71.600001
2016-11-24 17:20:00  2.775153e+14               1.0    1.041343    126.200002
2016-11-24 17:30:00  2.775153e+14               1.0    0.571624    149.150003
2016-11-24 17:40:00  2.775153e+14               1.0    0.418093    117.500003
2
2
77,950,619
2024-2-6
https://stackoverflow.com/questions/77950619/how-to-suppress-line-too-long-warning-pyright-extended-in-replit-python
When coding in Python using Replit, I get the annoying warning "Line too long" (pyright-extended). How can I get Replit to always ignore displaying this type of warning?
Just add 'E501' to the [tool.ruff] section of the pyproject.toml file to disable the warning in the default Python template.
https://ask.replit.com/t/python-88-characters-maximum-in-a-line/74941
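As a sketch of what that section could look like (assuming Ruff's top-level ignore key; newer Ruff versions nest this under [tool.ruff.lint]):

# pyproject.toml
[tool.ruff]
# E501 is Ruff's "line too long" rule; listing it here suppresses the warning
ignore = ["E501"]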
2
2
77,950,122
2024-2-6
https://stackoverflow.com/questions/77950122/handle-exceptions-in-python-functions-signature
I wonder if I have designed my script correctly (it works so far) because of a loop of warnings from which I cannot get out. I am trying to define a function that either is able to parse a JSON response from a website or simply terminates the program. Here is my code:

from requests import get, Response, JSONDecodeError
from typing import Any
from sys import exit

def logout() -> None:
    exit()

def get_page(url: str) -> dict[Any, Any]:
    try:
        response: Response = get(url)
        page: dict[Any, Any] = response.json()
    except JSONDecodeError as e:
        print(e)
        logout()
    return page

try:
    result: dict[Any, Any] = get_page("https://stackoverflow.com/")
except Exception as e:
    # handle exception
    pass

print(result['something'])

Pylance informs me that page (as in return page) and result (as in print(result['something'])) are possibly unbound. As far as I understand there are different possible workarounds to this problem, but none of them solves the problem entirely:
When moving return page before the except statement, mypy raises a Missing return statement. If I ignore the "error" from mypy, Pylance informs me that I should anyway sign my function specifying that it should return dict[Any,Any] | None.
Changing the return signature to dict[Any,Any] means that outside the function result should be result: dict[Any,Any]|None, but then result['something'] is possibly unbound and Object of type "None" is not subscriptable.
Initializing page = {} at the beginning of the function is a possible solution, but what if page is a custom class which needs some parameters that I cannot provide?
I could raise an Exception from within the function if there were a way to inform Pylance about this possible behaviour (is there?)
My point is: the function will NEVER return None, right? So, am I doing something wrong? How do you deal with this? Should I refactor my function?
typing.NoReturn is used for functions that never return. For example:

from typing import NoReturn

def stop() -> NoReturn:
    raise RuntimeError('no way')

So you should type hint the return value of logout() with NoReturn:

from typing import NoReturn

def logout() -> NoReturn:
    exit()
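To illustrate, here is a sketch of the question's get_page with only that annotation changed; once logout is marked NoReturn, a type checker can see the except branch never falls through, so page is always bound at the return:

from requests import get, Response, JSONDecodeError
from typing import Any, NoReturn
from sys import exit

def logout() -> NoReturn:
    exit()

def get_page(url: str) -> dict[Any, Any]:
    try:
        response: Response = get(url)
        page: dict[Any, Any] = response.json()
    except JSONDecodeError as e:
        print(e)
        logout()  # never returns, so 'page' cannot be unbound below
    return page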
2
3
77,948,903
2024-2-6
https://stackoverflow.com/questions/77948903/no-matching-distribution-found-for-setuptools-while-building-my-own-pip-packag
I am building a Python package using setuptools. This is the basic structure: /path/to/project/ ├── myproj/ │ ├── __init__.py │ └── my_module.py ├── pyproject.toml ├── setup.cfg └── ## other stuff This is the content of pyproject.toml [build-system] requires = [ "setuptools", "setuptools-scm"] build-backend = "setuptools.build_meta" and this is the content of setup.cfg [project] name = "my_package" author = 'my name' requires-python = ">=3.8" keywords = ["one", "two"] license = {text = "BSD-3-Clause"} classifiers = [] dependencies = [] [metadata] version = attr: my_package.__version__ [options] zip_safe = True packages = my_package include_package_data = True install_requires = matplotlib==3.6.0 spacy==3.7.2 networkx>=2.6.3 nltk>=3.7 shapely>=1.8.4 pandas>=1.3.5 community>=1.0.0b1 python-louvain>=0.16 numpy==1.23.4 scipy [options.package_data] my_package = ## pointers to data [options.packages.find] exclude = examples* demos* I can run successfully python -m build. Then, if I pip install it locally, i.e., pointing directly to the .tar file that was built, I can install it. However, if I upload it to test.pypi and then I pip install it from test.pypi, I get the following error: pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> [3 lines of output] Looking in indexes: https://test.pypi.org/simple/ ERROR: Could not find a version that satisfies the requirement setuptools (from versions: none) ERROR: No matching distribution found for setuptools [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. Looks like an issue with setuptools, but I really cannot figure out how to fix it. Any suggestions?
However, if I upload it to test.pypi and then I pip install it from test.pypi

I assume this means you're using --index-url https://test.pypi.org/simple/, which is confirmed by the build output Looking in indexes: https://test.pypi.org/simple/. That tells pip to use TestPyPI exclusively as the package index. You will only be able to install packages that are published to TestPyPI, which setuptools is not.
If you want to install packages from both TestPyPI and PyPI during the same pip install command, you can use --extra-index-url instead of --index-url. Note that this means that any package can be installed from either TestPyPI or PyPI, with no guarantees as to which packages come from which index.
If you want to install particular packages from particular indices (e.g. my_package from TestPyPI, and everything else from PyPI), you can try the solutions from Install specific package from specific index.
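A minimal sketch of that command, using the package name from the question (the actual distribution name on TestPyPI may differ):

pip install --extra-index-url https://test.pypi.org/simple/ my_package

Here PyPI stays the default index, so setuptools and the other build dependencies resolve normally, while TestPyPI is merely consulted in addition.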
2
3
77,945,663
2024-2-6
https://stackoverflow.com/questions/77945663/polars-alternative-solution-to-using-map-groups-with-with-columns
I think after the recent update of Polars (version 0.20.7), it stopped supporting the use of map_groups with with_columns. I got a TypeError stating
TypeError: cannot call `map_groups` when grouping by an expression
I am trying to Group By a column called Product Number and fill the null values using the backward strategy for this Group By. The code I wrote was:

df1 = (
    df1
    .group_by(['Product Number'])
    .map_groups(lambda df1: (
        df1
        .with_columns(
            df1['New Date'].fill_null(strategy='backward').alias('New Date1')
        )
    ))
)

Is there an alternate solution which supports grouping with with_columns?
Instead, you can use pl.Expr.over to compute expressions over certain groups.

(
    df1
    .with_columns(
        pl.col("New Date")
        .fill_null(strategy="backward")
        .over("Product Number")
        .alias("New Date1")
    )
)
3
0
77,947,027
2024-2-6
https://stackoverflow.com/questions/77947027/how-can-i-use-a-classvar-literal-as-a-discriminator-for-pydantic-fields
I'd like to have Pydantic fields that are discriminated based on a class variable. For example:

from pydantic import BaseModel, Field
from typing import Literal, ClassVar

class Cat(BaseModel):
    animal_type: ClassVar[Literal['cat']] = 'cat'

class Dog(BaseModel):
    animal_type: ClassVar[Literal['dog']] = 'dog'

class PetCarrier(BaseModel):
    contains: Cat | Dog = Field(discriminator='animal_type')

But this code throws an exception at import time:
pydantic.errors.PydanticUserError: Model 'Cat' needs a discriminator field for key 'animal_type'
If I remove the ClassVar annotations, it works fine, but then animal_type is only available as an instance property, which is less convenient in my case. Can anyone help me use class attributes as discriminators with Pydantic? This is Pydantic version 2.
This is a very interesting question. Broadly it seems to me that Pydantic does not support discriminated unions for ClassVar literals, but that raises the question: why not?
An important thing to note about ClassVars is that they don't become regular fields in pydantic (https://docs.pydantic.dev/2.3/usage/models/#class-vars). In some ways they're quite a lot like PrivateAttr. They're only accessible via get, and they don't feature in models. If we consider your question again but using PrivateAttr, it makes sense why this isn't supported:

from pydantic import BaseModel, Field, PrivateAttr
from typing import Literal

class Cat(BaseModel):
    _animal_type: Literal['cat'] = PrivateAttr(default='cat')

class Dog(BaseModel):
    _animal_type: Literal['dog'] = PrivateAttr(default='dog')

class PetCarrier(BaseModel):
    contains: Cat | Dog = Field(discriminator='_animal_type')

Field cannot access the discriminator _animal_type because it isn't accessible to it - it's a PrivateAttr. A discriminator has to be something that is part of the model - even if instances aren't part of your use case, you have to consider what would happen if someone did make an instance. Just like PrivateAttr, ClassVar is a field that's not really part of the model, so it's impossible to use as a discriminator. In Pydantic I would think of ClassVar as kind of like a PrivateAttr that's publicly accessible and can't be altered.
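For completeness, a sketch of the standard tagged-union pattern that does validate correctly - the discriminator declared as a regular Literal field with a default, so it is part of the model (with the trade-off the question mentions: the tag then lives on instances rather than as a class attribute). This is illustrative, not taken from the answer above:

from pydantic import BaseModel, Field
from typing import Literal

class Cat(BaseModel):
    animal_type: Literal['cat'] = 'cat'

class Dog(BaseModel):
    animal_type: Literal['dog'] = 'dog'

class PetCarrier(BaseModel):
    contains: Cat | Dog = Field(discriminator='animal_type')

print(PetCarrier.model_validate({'contains': {'animal_type': 'dog'}}))
# contains=Dog(animal_type='dog')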
4
2
77,945,267
2024-2-6
https://stackoverflow.com/questions/77945267/vscode-settings-json-error-incorrect-type-expected-string
How do I fix the error shown in the pic? I was trying to edit my VSCode settings.json. I installed the extension Ruff and tried to edit the settings based on an article I've read here: https://medium.com/@ordinaryindustries/the-ultimate-vs-code-setup-for-python-538026b34d94 but I get the error shown in the pic below.
Smart tips (Ctrl+Space) display these two settings with five values, which are:
"always": Triggers Code Actions on explicit saves and auto saves triggered by window or focus changes.
"explicit": Triggers Code Actions only when explicitly saved.
"never": Never triggers Code Actions on save.
"false": Never triggers Code Actions on save. This value will be deprecated in favor of "never".
"true": Triggers Code Actions only when explicitly saved. This value will be deprecated in favor of "explicit".
From the explanation, it is clear that false and true are about to be deprecated, so please use the corresponding "never" or "explicit".
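For instance, a settings.json snippet along these lines; the ruff fix-all action shown here is illustrative, so use whichever code action the article configured:

{
    "editor.codeActionsOnSave": {
        "source.fixAll.ruff": "explicit"
    }
}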
3
4
77,942,843
2024-2-5
https://stackoverflow.com/questions/77942843/how-to-install-a-namespace-package-with-hatch
In the context of the sphinxcontrib organisation, the packages are all supposed to live in a sphinxcontrib namespace package, like "sphinxcontrib.icon" or "sphinxcontrib.badge". The files have the following structure:
sphinxcontrib-icon/
├── sphinxcontrib/
│   └── icon/
│       └── __init__.py
└── pyproject.toml
In a setuptools-based pyproject.toml file I would do the following:

# pyproject.toml
[build-system]
requires = ["setuptools>=61.2", "wheel", "pynpm>=0.2.0"]
build-backend = "setuptools.build_meta"

[tool.setuptools]
include-package-data = false
packages = ["sphinxcontrib.icon"]

Now I would like to migrate to hatchling but I don't manage to reproduce this behaviour. In my parameters I do:

# pyproject.toml
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.hatch.build.targets.wheel]
packages = ["sphinxcontrib/skeleton"]

In the "site-packages" folder my lib is not under "sphinxcontrib/icon" as I would expect. How should I adapt my code to make it work?
I have an example sitting here: https://github.com/sphinx-contrib/sphinxcontrib-skeleton/tree/test
Building the docs with nox -s docs is failing because the package is not really installed.
Copying here the answer given by the hatchling repository maintainers:
The packages option is for the case where your code lives under a directory like src/ and will collapse until the last path component. So if your namespace package lived under such a directory then you could set that to src/sphinxcontrib, but that's not your case.
So I could simply do:

[tool.hatch.build.targets.wheel]
only-include = ["sphinxcontrib"]

Note that you could also use the regular include option there, but that would still have to recurse other directories, so this is just a performance hack.
4
2
77,942,463
2024-2-5
https://stackoverflow.com/questions/77942463/numba-implementation-for-monte-carlo-simulation
I'm actually writing a Monte-Carlo code for simple fluid simulation. The code was predicting good results for energy and pressure before Numba implementation, and after its implementation it doesn't predicts good result at all. Since I'm not very familiar with Numba, I don't really know where the error is comming from. Maybe you will have an idea... Thanks a lot. #!/usr/bin/env python3 # -*- coding: utf-8 -*- """ Created on Sun Feb 4 13:32:54 2024 @author: tristanmillet """ import numpy as np from numba import njit # Constants N = 100 density = 0.8 T = 2.0 beta = 1 / T L = (N / density) ** (1.0 / 3.0) cutoff = 2.5 boxVolume = N / density numEqSteps = 100000 numSampSteps = 100000 numTotalSteps = numEqSteps + numSampSteps progressFreq = int(numTotalSteps * 0.01) energy_instant_values = np.empty(numTotalSteps, dtype=np.float64) virial_instant_values = np.empty(numTotalSteps, dtype=np.float64) @njit def latticeDisplace(): positions = np.empty((N, 3), dtype=np.float64) delta = L / (N ** (1 / 3)) flag = 0 for x in np.arange(delta / 2, L, delta): for y in np.arange(delta / 2, L, delta): for z in np.arange(delta / 2, L, delta): if flag < N: positions[flag] = [x, y, z] flag += 1 return positions @njit def calc_dist(i, j): p1 = positions[i] p2 = positions[j] dx = p1[0] - p2[0] dy = p1[1] - p2[1] dz = p1[2] - p2[2] if dx > L / 2: dx -= L if dx < -L / 2: dx += L if dy > L / 2: dy -= L if dy < -L / 2: dy += L if dz > L / 2: dz -= L if dz < -L / 2: dz += L return np.sqrt(dx ** 2 + dy ** 2 + dz ** 2) @njit def calc_LJ_potential(dist): pot = 4.0 * ((1.0 / dist) ** 12.0 - (1.0 / dist) ** 6.0) return pot @njit def calc_energy_particle(p): energy_particle = 0.0 for j in range(N): if j != p: dist = calc_dist(p, j) if dist <= cutoff: energy_particle += calc_LJ_potential(dist) return energy_particle @njit def calc_energy_total(): energy_total = 0.0 for i in range(N): energy_total += calc_energy_particle(i) energy_total *= 0.5 # Tail correction energy_tail_corr = (8.0 / 3.0) * np.pi * density * (1.0 ** 3) * ( (1.0 / 3.0) * ((1.0 / cutoff) ** 9) - ((1.0 / cutoff) ** 3)) energy_total += N * energy_tail_corr return energy_total @njit def calc_virial_particle(p): virial_particle = 0.0 for j in range(N): if j != p: dist = calc_dist(p, j) if dist <= cutoff: virial_particle += calc_virial(dist) return virial_particle @njit def calc_virial(dist): return 48.0 * ((1 / dist) ** 12 - 0.5 * (1 / dist) ** 6) @njit def calc_virial_total(): virial_total = 0.0 for i in range(N): virial_total += calc_virial_particle(i) return 0.5 * virial_total @njit def move_particle(p, displ=0.5): local_ = positions[p].copy() for i in range(3): local_[i] += (np.random.rand() - 0.5) * displ if local_[i] >= L: local_[i] -= L if local_[i] < 0.0: local_[i] += L return local_ positions = latticeDisplace() energy = calc_energy_total() virial = calc_virial_total() accept_counter=0 energy_sum = 0.0 virial_sum = 0.0 print(f"Parameters used for simulation: T={T},rho={density}, N={N}") for step in range(numTotalSteps): particle_index = np.random.randint(0, N) prev_particle_energy = calc_energy_particle(particle_index) prev_particle_virial = calc_virial_particle(particle_index) prev_particle = np.array(positions[particle_index]) positions[particle_index] = move_particle(particle_index) new_particle_energy = calc_energy_particle(particle_index) delta_particle_energy = new_particle_energy - prev_particle_energy if (delta_particle_energy < 0) or (np.random.rand() < np.exp(-beta * delta_particle_energy)): energy += delta_particle_energy new_particle_virial = 
calc_virial_particle(particle_index) virial += new_particle_virial - prev_particle_virial accept_counter += 1 else: # Restore old configuration positions[particle_index] = prev_particle virial_instant_values[step] = virial energy_instant_values[step] = energy energy_sum += energy virial_sum += virial # Reset sums and counter for sampling if step == numEqSteps: energy_sum = 0.0 virial_sum = 0.0 accept_counter = 0 if step % progressFreq == 0: print(accept_counter) accept_counter = 0 print(str(int((step * 1.0 / numTotalSteps) * 100)) + "% " + ( "[Equilibration]" if step < numEqSteps else "[Sampling]")) avgEnergy = energy_sum / numSampSteps / N pressure_tail_corr = (16.0 / 3.0) * np.pi * (density ** 2) * (1 ** 3) * ((2.0 / 3.0) * ((1 / cutoff) ** 9) - ((1 / cutoff) ** 3)) pressure = (virial_sum / 3.0 / numSampSteps / boxVolume) + density * T + pressure_tail_corr finalAcceptRate = accept_counter * 1.0 / numSampSteps * 100.0 np.savetxt("instant_energy.txt",energy_instant_values) np.savetxt("instant_virial.txt",virial_instant_values) print("Avg. energy = " + str(avgEnergy)) print("Avg. pressure = " + str(pressure)) print("Accept. rate = " + str(finalAcceptRate)) I tried to debug briefly the code by "print" some part of the code, and apparently the Metropolis criteria (which determines if we move or not a particle) is always fulfilled, which is not the case when I don't implement Numba. Here the difference of the result between Numba implementation and without Numba: without Numba (good prediction) Avg. energy = -4.789510303401584 Avg. pressure = 5.078151508549905 Accept. rate = 0.469 Density = 0.8 T=2 with Numba: Avg. energy = 114.5028952818473 Avg. pressure = 407.77783626468135 Accept. rate = 1.999 Density = 0.8 T=2
Don't use global variables. I changed the jitted function to accept positions as first argument: import numpy as np from numba import njit # Constants N = 100 density = 0.8 T = 2.0 beta = 1 / T L = (N / density) ** (1.0 / 3.0) cutoff = 2.5 boxVolume = N / density numEqSteps = 100000 numSampSteps = 100000 numTotalSteps = numEqSteps + numSampSteps progressFreq = int(numTotalSteps * 0.01) energy_instant_values = np.empty(numTotalSteps, dtype=np.float64) virial_instant_values = np.empty(numTotalSteps, dtype=np.float64) @njit def latticeDisplace(): positions = np.empty((N, 3), dtype=np.float64) delta = L / (N ** (1 / 3)) flag = 0 for x in np.arange(delta / 2, L, delta): for y in np.arange(delta / 2, L, delta): for z in np.arange(delta / 2, L, delta): if flag < N: positions[flag] = [x, y, z] flag += 1 return positions @njit def calc_dist(positions, i, j): p1 = positions[i] p2 = positions[j] dx = p1[0] - p2[0] dy = p1[1] - p2[1] dz = p1[2] - p2[2] if dx > L / 2: dx -= L if dx < -L / 2: dx += L if dy > L / 2: dy -= L if dy < -L / 2: dy += L if dz > L / 2: dz -= L if dz < -L / 2: dz += L return np.sqrt(dx**2 + dy**2 + dz**2) @njit def calc_LJ_potential(dist): pot = 4.0 * ((1.0 / dist) ** 12.0 - (1.0 / dist) ** 6.0) return pot @njit def calc_energy_particle(positions, p): energy_particle = 0.0 for j in range(N): if j != p: dist = calc_dist(positions, p, j) if dist <= cutoff: energy_particle += calc_LJ_potential(dist) return energy_particle @njit def calc_energy_total(positions): energy_total = 0.0 for i in range(N): energy_total += calc_energy_particle(positions, i) energy_total *= 0.5 # Tail correction energy_tail_corr = ( (8.0 / 3.0) * np.pi * density * (1.0**3) * ((1.0 / 3.0) * ((1.0 / cutoff) ** 9) - ((1.0 / cutoff) ** 3)) ) energy_total += N * energy_tail_corr return energy_total @njit def calc_virial_particle(positions, p): virial_particle = 0.0 for j in range(N): if j != p: dist = calc_dist(positions, p, j) if dist <= cutoff: virial_particle += calc_virial(dist) return virial_particle @njit def calc_virial(dist): return 48.0 * ((1 / dist) ** 12 - 0.5 * (1 / dist) ** 6) @njit def calc_virial_total(positions): virial_total = 0.0 for i in range(N): virial_total += calc_virial_particle(positions, i) return 0.5 * virial_total @njit def move_particle(positions, p, displ=0.5): local_ = positions[p].copy() for i in range(3): local_[i] += (np.random.rand() - 0.5) * displ if local_[i] >= L: local_[i] -= L if local_[i] < 0.0: local_[i] += L return local_ positions = latticeDisplace() energy = calc_energy_total(positions) virial = calc_virial_total(positions) accept_counter = 0 energy_sum = 0.0 virial_sum = 0.0 print(f"Parameters used for simulation: T={T},rho={density}, N={N}") for step in range(numTotalSteps): particle_index = np.random.randint(0, N) prev_particle_energy = calc_energy_particle(positions, particle_index) prev_particle_virial = calc_virial_particle(positions, particle_index) prev_particle = np.array(positions[particle_index]) positions[particle_index] = move_particle(positions, particle_index) new_particle_energy = calc_energy_particle(positions, particle_index) delta_particle_energy = new_particle_energy - prev_particle_energy if (delta_particle_energy < 0) or ( np.random.rand() < np.exp(-beta * delta_particle_energy) ): energy += delta_particle_energy new_particle_virial = calc_virial_particle(positions, particle_index) virial += new_particle_virial - prev_particle_virial accept_counter += 1 else: # Restore old configuration positions[particle_index] = prev_particle 
virial_instant_values[step] = virial energy_instant_values[step] = energy energy_sum += energy virial_sum += virial # Reset sums and counter for sampling if step == numEqSteps: energy_sum = 0.0 virial_sum = 0.0 accept_counter = 0 if step % progressFreq == 0: print(accept_counter) accept_counter = 0 print( str(int((step * 1.0 / numTotalSteps) * 100)) + "% " + ("[Equilibration]" if step < numEqSteps else "[Sampling]") ) avgEnergy = energy_sum / numSampSteps / N pressure_tail_corr = ( (16.0 / 3.0) * np.pi * (density**2) * (1**3) * ((2.0 / 3.0) * ((1 / cutoff) ** 9) - ((1 / cutoff) ** 3)) ) pressure = ( (virial_sum / 3.0 / numSampSteps / boxVolume) + density * T + pressure_tail_corr ) finalAcceptRate = accept_counter * 1.0 / numSampSteps * 100.0 # np.savetxt("instant_energy.txt", energy_instant_values) # np.savetxt("instant_virial.txt", virial_instant_values) print("Avg. energy = " + str(avgEnergy)) print("Avg. pressure = " + str(pressure)) print("Accept. rate = " + str(finalAcceptRate)) Prints: Avg. energy = -4.771041235079697 Avg. pressure = 5.186359541491075 Accept. rate = 0.47800000000000004
2
2
77,943,395
2024-2-5
https://stackoverflow.com/questions/77943395/openai-embeddings-api-how-to-extract-the-embedding-vector
I use nearly the same code as here in this GitHub repo to get embeddings from OpenAI:

oai = OpenAI(
    # This is the default and can be omitted
    api_key="sk-.....",
)

def get_embedding(text_to_embed, openai):
    response = openai.embeddings.create(
        model="text-embedding-ada-002",
        input=[text_to_embed]
    )
    return response

embedding_raw = get_embedding(text, oai)

According to the GitHub repo, the vector should be in response['data'][0]['embedding']. But it isn't in my case. When I printed the response variable, I got this:
print(embedding_raw)
Output:
CreateEmbeddingResponse(data=[Embedding(embedding=[0.009792150929570198, -0.01779201813042164, 0.011846082285046577, -0.0036859565880149603, -0.0013213189085945487, 0.00037509595858864486,..... -0.0121011883020401, -0.015751168131828308], index=0, object='embedding')], model='text-embedding-ada-002', object='list', usage=Usage(prompt_tokens=360, total_tokens=360))
How can I access the embedding vector?
Simply return just the embedding vector as follows:

def get_embedding(text_to_embed, openai):
    response = openai.embeddings.create(
        model="text-embedding-ada-002",
        input=[text_to_embed]
    )
    return response.data[0].embedding  # Change this

embedding_raw = get_embedding(text, oai)
3
2
77,943,054
2024-2-5
https://stackoverflow.com/questions/77943054/how-do-i-make-a-custom-class-thats-serializable-with-dataclasses-asdict
I'm trying to use a dataclass as a (more strongly typed) dictionary in my application, and found this strange behavior when using a custom type subclassing list within the dataclass. I'm using Python 3.11.3 on Windows. from dataclasses import dataclass, asdict class CustomFloatList(list): def __init__(self, args): for i, arg in enumerate(args): assert isinstance(arg, float), f"Expected index {i} to be a float, but it's a {type(arg).__name__}" super().__init__(args) @classmethod def from_list(cls, l: list[float]): return cls(l) @dataclass class Poc: x: CustomFloatList p = Poc(x=CustomFloatList.from_list([3.0])) print(p) # Prints Poc(x=[3.0]) print(p.x) # Prints [3.0] print(asdict(p)) # Prints {'x': []} This does not occur if I use a regular list[float], but I'm using a custom class here to enforce some runtime constraints. How do I do this correctly? I'm open to just using .__dict__ directly, but I thought asdict() was the more "official" way to handle this A simple modification makes the code behave as expected, but is slightly less efficient: from dataclasses import dataclass, asdict class CustomFloatList(list): def __init__(self, args): dup_args = list(args) for i, arg in enumerate(dup_args): assert isinstance(arg, float), f"Expected index {i} to be a float, but it's a {type(arg).__name__}" super().__init__(dup_args) @classmethod def from_list(cls, l: list[float]): return cls(l) @dataclass class Poc: x: CustomFloatList p = Poc(x=CustomFloatList.from_list([3.0])) print(p) print(p.x) print(asdict(p))
If you look at the source code of asdict, you'll see that it passes a generator expression that recursively calls itself on the elements of a list when it encounters a list:

    elif isinstance(obj, (list, tuple)):
        # Assume we can create an object of this type by passing in a
        # generator (which is not true for namedtuples, handled
        # above).
        return type(obj)(_asdict_inner(v, dict_factory) for v in obj)

But your implementation depletes any iterator it gets in __init__ before the super call. Don't do that. You'll have to "cache" the values if you want to use the superclass constructor. Something like:

class CustomFloatList(list):
    def __init__(self, args):
        args = list(args)
        for i, arg in enumerate(args):
            assert isinstance(arg, float), f"Expected index {i} to be a float, but it's a {type(arg).__name__}"
        super().__init__(args)

Or perhaps:

class CustomFloatList(list):
    def __init__(self, args):
        super().__init__(args)
        for i, arg in enumerate(self):
            if not isinstance(arg, float):
                raise TypeError(f"Expected index {i} to be a float, but it's a {type(arg).__name__}")
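As a quick check, applying asdict from the question's snippet to either fixed class now yields the expected dict rather than an empty list (a sketch; CustomFloatList is one of the versions above):

from dataclasses import dataclass, asdict

@dataclass
class Poc:
    x: CustomFloatList

p = Poc(x=CustomFloatList.from_list([3.0]))
print(asdict(p))  # {'x': [3.0]} -- the list is no longer emptied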
2
3
77,939,542
2024-2-5
https://stackoverflow.com/questions/77939542/how-to-slice-data-in-pytorch-tensor
I've put my data to the pytorch tensor and now i want to split into the batches size 64. I got the following code: batch = 0 BATCH_SIZE = 64 X_train = x_scaled.to(device) y_train = y_scaled.to(device) for batch in range(0,len(X_train[0]),BATCH_SIZE): ### Training model.train() # train mode is on by default after construction # 1. Forward pass y_pred = model(X_train[0:2][batch:batch+BATCH_SIZE]) The shape of the tensor is: torch.Size([2, 11938]). And i want to slice it into [2,64]. However it do not slice properly and gives an error: mat1 and mat2 shapes cannot be multiplied (2x11938 and 2x64) What i want: tensor([[0.0000, 0.0002, 0.0004, 0.0005, 0.0007, 0.0009, 0.0011, 0.0013, 0.0014, 0.0016, 0.0018, 0.0018, 0.0020, 0.0022, 0.0023, 0.0025, 0.0027, 0.0029, 0.0029, 0.0031, 0.0032, 0.0034, 0.0036, 0.0038, 0.0040, 0.0041, 0.0043, 0.0045, 0.0047, 0.0049, 0.0051, 0.0052, 0.0054, 0.0056, 0.0058, 0.0060, 0.0061, 0.0061, 0.0063, 0.0065, 0.0067, 0.0069, 0.0070, 0.0072, 0.0074, 0.0076, 0.0078, 0.0079, 0.0081, 0.0083, 0.0083, 0.0085, 0.0087, 0.0088, 0.0090, 0.0092, 0.0094, 0.0094, 0.0096, 0.0097, 0.0099, 0.0101, 0.0103, 0.0105],[0.0684, 0.0684, 0.0684, 0.0684, 0.0684, 0.0684, 0.0684, 0.0684, 0.0684, 0.0703, 0.0703, 0.0703, 0.0684, 0.0684, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0703, 0.0712, 0.0712, 0.0712, 0.0712, 0.0712, 0.0712, 0.0712, 0.0712]], device='cuda:0', dtype=torch.float64) What i get: tensor([[0.0000e+00, 1.8038e-04, 3.6076e-04, ..., 9.9964e-01, 9.9982e-01, 1.0000e+00], [6.8395e-02, 6.8395e-02, 6.8395e-02, ..., 5.7695e-01, 5.7695e-01, 5.7695e-01]], device='cuda:0', dtype=torch.float64) How can i slice the torch tensor to the requiered shape?
You should read how indexing works in pytorch/numpy and other similar libraries.
You have a tensor of shape (2, 11938). When you index with X_train[0:2], you get a tensor of size (2, 11938). You don't need to index if you want all the rows.
When you index with X_train[0:2][batch:batch+BATCH_SIZE], you're indexing the result of X_train[0:2] with [batch:batch+BATCH_SIZE]. This means you are still indexing the first axis.
If you want a chunk of size BATCH_SIZE from the second axis, you need to index the second axis, i.e. X_train[:, batch:batch+BATCH_SIZE].
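A small sketch to confirm the shapes, with random data standing in for the scaled tensors from the question:

import torch

BATCH_SIZE = 64
X_train = torch.randn(2, 11938)  # same shape as in the question

print(X_train[0:2].shape)                # torch.Size([2, 11938]) - all rows
print(X_train[0:2][0:BATCH_SIZE].shape)  # torch.Size([2, 11938]) - first axis again, capped at 2
print(X_train[:, 0:BATCH_SIZE].shape)    # torch.Size([2, 64])    - slicing the second axis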
2
2
77,943,148
2024-2-5
https://stackoverflow.com/questions/77943148/how-to-add-numeric-value-from-one-column-to-other-list-column-elements-in-polars
Suppose I have the following Polars DataFrame:

import polars as pl

df = pl.DataFrame({
    'lst': [[0, 1], [9, 8]],
    'val': [3, 4]
})

And I want to add the number in the val column to every element in the corresponding list in the lst column, to get the following result:
┌───────────┬─────┐
│ lst       ┆ val │
│ ---       ┆ --- │
│ list[i64] ┆ i64 │
╞═══════════╪═════╡
│ [3, 4]    ┆ 3   │
│ [13, 12]  ┆ 4   │
└───────────┴─────┘
I know how to add a constant value, e.g.:

new_df = df.with_columns(
    pl.col('lst').list.eval(pl.element() + 2)
)

But when I try:

new_df = df.with_columns(
    pl.col('lst').list.eval(pl.element() + pl.col('val'))
)

I get the following error:
polars.exceptions.ComputeError: named columns are not allowed in `list.eval`; consider using `element` or `col("")`
Is there any elegant way to achieve my goal (without map_elements)? Thanks in advance.
If lst has a fixed number of items for the whole column AND you know how many in advance, then you can do this:

df.with_columns(
    pl.concat_list(pl.col('lst').list.get(x) + pl.col('val') for x in range(2))
)

If not, then you can still do this:

df.with_columns(
    pl.concat_list(
        pl.col('lst').list.get(x) + pl.col('val')
        for x in range(df['lst'].list.len().max())
    )
    .list.gather(
        pl.int_ranges(0, pl.col('lst').list.len())
    )
)

The way this works is that it replaces the 2 from the first case with df['lst'].list.len().max(), which is the longest list in the whole column. Then it does a gather so that it only takes the first n elements that correspond to the length of each particular lst.
concat_list isn't super efficient, so it might be the case that explode, group_by is more efficient:

(df
 .with_row_index('i')
 .explode('lst')
 .group_by('i', maintain_order=True)
 .agg(
     pl.col('lst') + pl.col('val'),
     pl.col('val').first(),
 )
 .drop('i')
)
4
2
77,942,425
2024-2-5
https://stackoverflow.com/questions/77942425/how-to-serialize-list-of-list-items-to-pydantic-model
I have a response from the server:

{
    "date": "2024-02-05 15:34:44",
    "status": True,
    "data": [
        [1, "red"],
        [2, "blue"],
        [3, "yellow"]
    ]
}

And I want to serialize this response to a pydantic model, but I don't know how to parse a list (example [1, "red"]) into the pydantic model:

class Item(BaseModel):
    # how convert list from data to this model
    id: int
    color: str
    ...

class Model(BaseModel):
    date: datetime
    status: bool
    data: list[Item]
You can work for example with a model_validator to parse the list into the Item model:

from datetime import datetime
from pydantic import BaseModel, model_validator

data = {
    "date": "2024-02-05 15:34:44",
    "status": True,
    "data": [
        [1, "red"],
        [2, "blue"],
        [3, "yellow"]
    ]
}

class Item(BaseModel):
    id: int
    color: str

    @model_validator(mode="before")
    @classmethod
    def validate(cls, values):
        id, color = values
        return {"id": id, "color": color}

class Model(BaseModel):
    date: datetime
    status: bool
    data: list[Item]

print(Model(**data))

Which prints:
date=datetime.datetime(2024, 2, 5, 15, 34, 44) status=True data=[Item(id=1, color='red'), Item(id=2, color='blue'), Item(id=3, color='yellow')]
There are likely other approaches, such as AliasPath, but this one seems the simplest. I hope this helps!
2
3
77,941,994
2024-2-5
https://stackoverflow.com/questions/77941994/what-is-setuptools-alternative-to-the-deprecated-distutils-strtobool
I am migrating to Python 3.12, and finally have to remove the last distutils dependency. I am using from distutils.util import strtobool to enforce that command-line arguments via argparse are in fact bool, properly taking care of NaN vs. False vs. True, like so:

arg_parser = argparse.ArgumentParser()
arg_parser.add_argument("-r", "--rebuild_all", type=lambda x: bool(strtobool(x)), default=True)

So this question is actually twofold:
What would be an alternative to the deprecated strtobool?
Alternatively: What would be an even better solution to enforce 'any string' to be interpreted as bool, in a safe way (e.g., to parse args)?
What about using argparse's BooleanOptionalAction? This is documented in the argparse docs, but not in a way that provides me with a useful link. That would look like: arg_parser = argparse.ArgumentParser() arg_parser.add_argument("-r", "--rebuild-all", action=argparse.BooleanOptionalAction, default=True) And it would allow you to pass either --rebuild-all or --no-rebuild-all. This gets you an option that can be either true or false, but without the hassle of having to parse a string into a boolean value.
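If you also want a direct replacement for part 1 of the question, here is a minimal sketch that accepts the same truthy/falsy spellings the old distutils.util.strtobool did, returning a real bool instead of 1/0:

def strtobool(value: str) -> bool:
    # Spellings recognized by the removed distutils.util.strtobool.
    value = value.lower()
    if value in ("y", "yes", "t", "true", "on", "1"):
        return True
    if value in ("n", "no", "f", "false", "off", "0"):
        return False
    raise ValueError(f"invalid truth value {value!r}")

This can be passed directly as type=strtobool in add_argument, and an unrecognized string then surfaces as a normal argparse usage error.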
3
3
77,940,596
2024-2-5
https://stackoverflow.com/questions/77940596/azure-devops-and-uploading-a-python-package-to-artifacts-using-pipeline-authen
I am trying to use the Azure pipeline to publish a Python package to Artifact's feed. I could do it from my local machine and upload the package using Twine, but I have an authentication issue in the pipeline. trigger: - main pool: vmImage: ubuntu-22.04 variables: pip_cache_dir: '$(Pipeline.Workspace)/.pip_cache' steps: - task: UsePythonVersion@0 inputs: versionSpec: '3.10' addToPath: true - bash: | python -m venv worker_venv source worker_venv/bin/activate pip install --upgrade pip pip install pipenv pipenv requirements > requirements.txt pipenv requirements --dev > requirements-dev.txt pip install --cache-dir $(pip_cache_dir) -r ./requirements.txt pip install --target="./.python_packages/lib/site-packages" --cache-dir $(pip_cache_dir) -r ./requirements.txt displayName: 'Install tools' - script: | source worker_venv/bin/activate python setup.py sdist bdist_wheel displayName: 'Build package' - task: TwineAuthenticate@1 inputs: artifactFeed: sample-feed-01 - script: | source worker_venv/bin/activate python -m twine upload --verbose --config-file $(PYPIRC_PATH) --repository-url https://pkgs.dev.azure.com/**company**/Platform/_packaging/sample-feed-01/pypi/upload/ dist/* env: TWINE_USERNAME: "azure" TWINE_PASSWORD: $(PYPI_TOKEN) displayName: 'Upload package to Azure Artifacts' I tried everything, including GPT-4, but the solutions seem wrong or outdated. this is the error: /usr/bin/bash --noprofile --norc /home/vsts/work/_temp/75d0c60b-0c2a-44e9-be0f-29d838a3b86e.sh Uploading distributions to https://pkgs.dev.azure.com/**company**/Platform/_packaging/sample-feed-01/pypi/up load/ INFO dist/**package**.whl (2.6 KB) INFO dist/**package**.tar.gz (2.6 KB) INFO username set by command options INFO password set by command options INFO username: azure INFO password: <hidden> Uploading **package**.whl 25l 0% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/5.7 kB • --:-- • ? 100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.7/5.7 kB • 00:00 • ? 100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.7/5.7 kB • 00:00 • ? 25hINFO Response from https://pkgs.dev.azure.com/**company**/Platform/_packaging/sample-feed-0 1/pypi/upload/: 401 Unauthorized INFO {"$id":"1","innerException":null,"message":"TF400813: The user 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' is not authorized to access this resource.","typeName":"Microsoft.TeamFoundation.Framework.Server.Unauth orizedRequestException, Microsoft.TeamFoundation.Framework.Server","typeKey":"UnauthorizedReque stException","errorCode":0,"eventId":3000} ERROR HTTPError: 401 Unauthorized from https://pkgs.dev.azure.com/**company**/Platform/_packaging/sample-feed-0 1/pypi/upload/ Unauthorized I have some doubts about 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' as user name, which I did not obfuscate, and that is what I see in the pipeline. any help would be appreciated
Go to the Azure Artifacts feed you want to publish the Python package to, then navigate to the Feed Settings > Permissions page to check and ensure the following identities have been assigned at least the Contributor role.

Project Collection Build Service ({OrgName})
{ProjectName} Build Service ({OrgName})

For more details, see "Job access tokens".
After configuring the permission role, in the pipeline, you can build and publish the Python package as below.
steps:
- Some steps to install dependencies and build the package.

- task: TwineAuthenticate@1
  displayName: 'Twine Authenticate '
  inputs:
    artifactFeed: {feed_name}

- bash: 'twine upload -r {feed_name} --config-file "$(PYPIRC_PATH)" dist/*.whl'
  displayName: 'Publish to Artifacts feed'
This way, you do not need to provide your PAT/password to authenticate. For more details, see "Publish Python packages with Azure Pipelines".
EDIT:
By default, the pipeline will use one of the following build identities (also mentioned above) to access the Azure Artifacts feeds and other resources within the current organization:

Project Collection Build Service ({OrgName}): Can access resources with full/partial permissions across projects within the current organization by default.
{ProjectName} Build Service ({OrgName}): Can access resources with full/partial permissions only within the current project by default.

On the Organization Settings (and Project Settings), there are two options:

Limit job authorization scope to current project for non-release pipelines
Limit job authorization scope to current project for release pipelines

If the options are enabled, the pipelines use the identity "{ProjectName} Build Service ({OrgName})". If the options are disabled, the identity "Project Collection Build Service ({OrgName})" is used.
The TwineAuthenticate@1 task will use one of the above build identities to generate the twine credentials and set them in the environment variable PYPIRC_PATH. Since the identities might not have full access to the feed by default, to assign the required permission role to the identities, you need to click the "Add users/groups" button, then on the pop-up window, search for and add the identities with the required roles.
2
1
77,920,479
2024-2-1
https://stackoverflow.com/questions/77920479/how-to-specify-bounds-and-variable-length-in-python-union-types
I have a type that has subclasses. E.g., Event, with classes Event1(Event), Event2(Event), etc. I want to specify a type annotation for the argument to a function that accepts any union of subclasses of Event. Concretely, I want all of the following to type check: listen_for_events(Event1) listen_for_events(Event1 | Event3) listen_for_events(Event1 | Event2 | Event3) while the following should not type check: listen_for_events(str | int) How do I specify the type on the argument to listen_for_events? I can do this with tuple (tuple[type[Event], ...]), but I don't appear to be able to do something similar with Union. Is there a way to correctly type this?
So to answer this question, firstly let's look at how union types work. The syntax you're describing generates a types.UnionType, and you can't specify any more detail about this type. e.g. Something like the below passes for all of your examples, including the one you didn't want:
def listen_for_events(event: type[Event] | types.UnionType):
    ...

listen_for_events(Event1)
listen_for_events(Event1 | Event2 | Event3)
listen_for_events(str | int)
Why does python not offer a more descriptive syntax for this? Consider the features of the collection of events. What the union syntax is really expressing here is we want to be able to supply: an unknown number of events in no particular order with no repeats ...or in other words, a set:
def listen_for_events(event: set[type[Event]]):
    ...

listen_for_events({Event1})
listen_for_events({Event1, Event2, Event3})
listen_for_events({str, int})  # raises error
Note: for this to work you have to make event hashable, or use frozenset
This is actually more than a coincidence - in some sense a union of objects is just a way of expressing the specific members in a particular set. A | B is the set that contains only A and B, it's just that normally in python we ascribe this set to a type and say: "x must be an instance of the class that features in the set A | B"
Is there a better syntax? If we relax the unordered and non-repeating condition, which maybe is not so critical to the problem, other options become available (including the tuple mentioned in the comments). Personally I think the best syntax for the problem you're trying to solve would be the below:
def listen_for_events(*events: type[Event]):
    ...

listen_for_events(Event1)
listen_for_events(Event1, Event2, Event3)
listen_for_events(str, int)  # raises error
An advantage of this syntax is it even flags the exact member that differs from the type if you entered something like:
listen_for_events(str, Event1)
In summary, python may not offer an exact way to type the syntax you want, but it does offer other structures that express exactly the same concept. Hope this is useful!
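If you also want the mistake caught at runtime, not just by the type checker, a small guard is enough. Here Event and Event1 stand in for the classes from the question:

class Event: ...
class Event1(Event): ...

def listen_for_events(*events: type[Event]) -> None:
    for event in events:
        # Runtime mirror of the static constraint.
        if not (isinstance(event, type) and issubclass(event, Event)):
            raise TypeError(f"{event!r} is not an Event subclass")

listen_for_events(Event1)  # ok
# listen_for_events(str)   # would raise TypeError at runtime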
2
2
77,939,544
2024-2-5
https://stackoverflow.com/questions/77939544/python-h5py-warning-getlimits-py511-userwarning-signature-class-numpy-float
I get the following warning on RedHat 8.8 for the h5py package. Is there any way to fix this? I don't want to use warnings.filterwarnings("ignore")
bash-4.4$ python3
Python 3.11.6 (main, Oct 16 2023, 03:41:34) [GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> import h5py
/scratch/Python/lib/python3.11/site-packages/numpy/core/getlimits.py:511: UserWarning: Signature b'\x00\xd0\xcc\xcc\xcc\xcc\xcc\xcc\xfb\xbfOE\xfc\x7f\x00\x00' for <class 'numpy.float128'> does not match any known type: falling back to type probe function.
This warnings indicates broken support for the dtype!
  machar = _get_machar(dtype)
>>> numpy.__version__
'1.24.3'
>>> h5py.__version__
'3.10.0'
>>> exit()
bash-4.4$ cat /etc/os-release
NAME="Red Hat Enterprise Linux"
VERSION="8.8 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.8 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8::baseos"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.8
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.8"
bash-4.4$
This warning is seen only on Linux; Windows is clean.
A similar issue was posted on GitHub: https://github.com/h5py/h5py/issues/2357. Moving to a numpy version older than 1.24 may fix the warning. It looks like broken support for the long double/numpy.float128 data type on that OS. You can try other methods to ignore the warning as well.
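If upgrading or downgrading is not an option, a narrower workaround than a blanket ignore is to filter just this one message around the import (the regex below is an assumption based on the warning text shown):

import warnings

with warnings.catch_warnings():
    # Silence only numpy's getlimits signature warning, nothing else.
    warnings.filterwarnings(
        "ignore",
        message=r"Signature b.* does not match any known type",
        category=UserWarning,
    )
    import h5py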
2
3
77,922,917
2024-2-1
https://stackoverflow.com/questions/77922917/find-maximum-radius-circle-everywhere-between-two-lines
I have two sine waves, as pictured below. Sometimes they are identical waves, just shifted in the y-axis. Other times the sine waves have different period / amplitude etc. I need to find the maximum radius circle that can fit between the two lines everywhere along the lines (discretized by the resolution that I generated the lines). What is the best way to do this? Is there a faster way than iteratively creating a circle tangent to the bottom curve and stepping up in radius until intersection with the upper curve, at every point?
I'm not sure if this is the complete answer, but it may contain a few useful ideas. (There is some math, but I think the code is also important, so I think SO is a reasonable place for this question.)
There may be multiple solutions to the last equation, but in practice scipy.optimize.newton seems to converge to the circle of minimum radius. Also, it's vectorized, so you can solve for several circles at once.
import numpy as np
from scipy.optimize import newton
import matplotlib.pyplot as plt
from matplotlib import colors as mcolors
colors = list(mcolors.TABLEAU_COLORS.values())

# amplitude, angular frequency, phase, and y-offsets
params1 = 1, 1, 0, 0
params2 = 0.5, 0.5, 1, 2

# Sinusoid
def y(x, params):
    A, w, p, y = params
    return A*np.cos(w*x + p) + y

# Derivative of Sinusoid
def dy(x, params):
    A, w, p, y = params
    return -w*A*np.sin(w*x + p)

# Angle of Sinusoid Surface Normal
def phi(x, params):
    return np.arctan2(-1, dy(x, params))

# Radius of circle required given y-coordinates
def ry(y1, y2, phi1, phi2):
    return (y2 - y1)/(np.sin(phi1) + np.sin(phi2))

# Radius of circle required given x-coordinates
def rx(x1, x2, phi1, phi2):
    return (x2 - x1)/(np.cos(phi1) + np.cos(phi2))

# At the solution, the radius of the circle given y-coordinates
# must agree with the radius of the circle given x-coordinates
def f(x2, x1):
    y1, y2 = y(x1, params1), y(x2, params2)
    phi1, phi2 = phi(x1, params1), phi(x2, params2)
    return rx(x1, x2, phi1, phi2) - ry(y1, y2, phi1, phi2)

# Coordinate of center of circle
def x0y0(x1, r):
    y1 = y(x1, params1)
    phi1 = phi(x1, params1)
    return (x1 + r*np.cos(phi1), y1 + r*np.sin(phi1))

# Plot Sinusoids
t_plot = np.linspace(0, 15, 300)
y1 = y(t_plot, params1)
y2 = y(t_plot, params2)
plt.plot(t_plot, y1, 'k', t_plot, y2, 'k')
ax = plt.gca()
plt.ylim(-2, 4)
plt.axis('equal')

# Coordinates on sinusoid 1
x1 = np.asarray([1, 4.5, 7, 11, 13])
y1 = y(x1, params1)

# X-coordinates on sinusoid 2
x2 = newton(f, x1, args=(x1,))
y2 = y(x2, params2)

# Circle radii and centers
phi1 = phi(x1, params1)
phi2 = phi(x2, params2)
r = rx(x1, x2, phi1, phi2)
x0, y0 = x0y0(x1, r)

for i in range(len(x1)):
    color = colors[i]

    # Mark points on curve
    plt.plot(x1[i], y1[i], color, marker='*')
    plt.plot(x2[i], y2[i], color, marker='*')

    # Plot circles
    circle = plt.Circle((x0[i], y0[i]), r[i], color=color, fill=False)
    ax.add_patch(circle)
Two caveats:

scipy.optimize.newton is not guaranteed to converge, and even if it does, it is not guaranteed to converge to the circle with minimum radius (unless it can be proven otherwise...). If it is not reliable for your particular parameters but you like the approach, consider posting a more specific question about how to reliably solve the relevant equations numerically. It may just be a matter of solving for one circle and using that as the guess for the next circle.
This code doesn't consider the possibility of the circle intersecting with a second point on the lower curve before it touches the upper curve. I couldn't tell from the problem statement if this was required because the question mentioned "stepping up in radius until intersection with the upper curve". This is easy enough to check, though, due to symmetry of the convex portion of the sinusoid.
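On the first caveat, here is an untested sketch (building on the names f, x1 and newton from the code above) of the warm-start idea: solve the circles one at a time and seed each newton call with the previous root:

# Sequential solve: reuse the last root as the initial guess for the next point.
x2 = np.empty_like(x1, dtype=float)
guess = x1[0]
for i, xi in enumerate(x1):
    guess = newton(f, guess, args=(xi,))
    x2[i] = guess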
2
3
77,932,050
2024-2-3
https://stackoverflow.com/questions/77932050/chromedriver-is-suddenly-displaying-a-strange-warning-about-blocking-third-party
Here's my python code: from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.by import By from flask import Flask, render_template from webdriver_manager.chrome import ChromeDriverManager from selenium.webdriver.chrome.service import Service APP = Flask(__name__) @APP.route('/ohc_search_link') def ohc_search_link(): ohc_search() return render_template("finished.html") def ohc_search(): chrome_options = Options() chrome_options.add_argument("--headless") chrome_options.add_argument('--disable-dev-shm-usage') chrome_options.add_argument("--no-sandbox") driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()),options=chrome_options) driver.get('https://docs.oracle.com/') # More scraping code here ... driver.close() if __name__ == '__main__': #start OHC search APP.run() When I run this, I see this warning on the console: [0203/171925.768:INFO:CONSOLE(0)] "Third-party cookie will be blocked. Learn more in the Issues tab.", source: https://docs.oracle.com/search/?q=recruiting&pg=1&size=10&showfirstpage=true&lang=en My script runs fine though and I'm seeing the results I expect. But can I ignore this warning? If yes, is there a way to suppress it? I wasn't seeing any warnings a few days earlier. I remember that my chrome browser updated around that time. I'm seeing this warning after that update. Do you think this is related to this news about restricting third party cookies? But strangely I don't see the Eye icon on the website I'm scraping.
I just came across this error too. Try adding the following: chrome_options.add_argument('log-level=3')
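For context, this corresponds to Chrome's --log-level switch, where, as far as I know, 0=INFO, 1=WARNING, 2=ERROR and 3=FATAL, so 3 keeps only fatal messages. In the options setup from the question it would look like:

from selenium.webdriver.chrome.options import Options

chrome_options = Options()
chrome_options.add_argument("--headless")
chrome_options.add_argument("--log-level=3")  # drop INFO/WARNING/ERROR console noise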
3
3
77,930,475
2024-2-3
https://stackoverflow.com/questions/77930475/replicating-columns-in-dataframe-based-on-the-column-names
I have the following dataframe called ol_axle:
Single  Tridem  Tandem  Tridem
700     1500    1200    2700
The code to create the dataframe is as follows:
weight=[700,1500,1200,2700]
name=['Single','Tridem','Tandem','Tridem']
ol_axle=pd.DataFrame([weight],columns=[name])
My apologies, as the query doesn't appear to be that difficult, but what I want is that if there is 'Tridem' in a column name, the column repeats two more times and the value is divided by 3, and if there is 'Tandem' in a column name, it repeats once more and the value is divided by 2, such that I get the resulting dataframe:
Single  Tridem  Tridem  Tridem  Tandem  Tandem  Tridem  Tridem  Tridem
700     500     500     500     600     600     900     900     900
The code that I developed to get the required result is as follows:
# Initialize a flag to track whether division has been performed
tridem_divided = False
tandem_divided = False

# Iterate over the columns and update values based on conditions
for column in ol_axle.columns:
    if 'Tridem' in column and not tridem_divided:
        ol_axle[column] = ol_axle[column] / 3
        tridem_divided = True
    elif 'Tandem' in column and not tandem_divided:
        ol_axle[column] = ol_axle[column] / 2
        tandem_divided = True
    else:
        ol_axle[column] = ol_axle[column]
ol_axle
The issue is that my code was working perfectly well, and I used it for a lot of data with various different combinations of these axle groups, but all of a sudden, and for some reason I still do not know, I started receiving the following error:
PerformanceWarning: indexing past lexsort depth may impact performance.
  ol_axle[column] = ol_axle[column] / 3
I installed the Matplotlib and OSMnx libraries 2 days earlier, which I am sure has got nothing to do with this error popping up now. I have tried other snippets but I am still getting this error. I am not that experienced in Python as yet, so it would be great if anyone could help me get the results that I want for my dataframe.
Using numpy.repeat with map: n = {'Single': 1, 'Tandem': 2, 'Tridem': 3} rep = ol_axle.columns.map(n) out = (pd.DataFrame(np.repeat(ol_axle.div(rep, axis=1), rep, axis=1), columns=np.repeat(ol_axle.columns, rep), index=ol_axle.index) ) print(out) Variant of @Timeless' approach using iloc: n = {'Single': 1, 'Tandem': 2, 'Tridem': 3} rep = ol_axle.columns.map(n) out = (ol_axle.div(rep) .iloc[:, np.repeat(np.arange(ol_axle.shape[1]), rep)] ) Output: Single Tridem Tridem Tridem Tandem Tandem Tridem Tridem Tridem 0 700.0 500.0 500.0 500.0 600.0 600.0 900.0 900.0 900.0
2
2
77,928,132
2024-2-2
https://stackoverflow.com/questions/77928132/how-to-install-conda-file-downloaded-from-anaconda-package-site
For example, I want to install the fish shell using conda, but the server has no internet connection. On https://anaconda.org/conda-forge/fish/files, many versions are provided, but the latest few versions are almost always in .conda format. I downloaded linux-64/fish-3.7.0-hdab1d28_0.conda and installed it using
conda install linux-64/fish-3.7.0-hdab1d28_0.conda
But this does not work. It shows a long error report like
...
FileNotFoundError: [Errno 2] No such file or directory: '...../miniconda3/pkgs/linux-64_fzf-0.46.1-ha8f183a_0/info/repodata_record.json'
FileNotFoundError: [Errno 2] No such file or directory: '...../miniconda3/pkgs/linux-64_fzf-0.46.1-ha8f183a_0/info/index.json'
....
etc
But an old version in .tar.bz2 format, like linux-64/fish-3.4.1-h682823d_0.tar.bz2, installed just fine. So how do I correctly conda install a .conda file?
Bug with Anaconda Cloud downloads There is an outstanding bug with the Anaconda Cloud website that prepends the subdirectory (e.g., a linux-64_ prefix) to file names when downloading from a browser (i.e., clicking file links). This is fine for .tar.gz files, but for .conda the file name is coupled(!) to being able to correctly unpack the archive. Error in OP shows a reference to linux-64_fzf-0.46.1-ha8f183a_0, which implies a linux-64_fzf-0.46.1-ha8f183a_0.conda file is being unpacked. However, the file name should likely not have the linux-64_ prefix. Try renaming the file to fzf-0.46.1-ha8f183a_0.conda. Note that programmatically downloading the .conda files (e.g., with curl, wget) will not have this problem, so one may also be interested in downloading archives en masse, like what is provided for in this answer.
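For illustration, a small Python sketch of that rename (the file name is the one from the error above; a plain shell mv works just as well):

from pathlib import Path

# Strip the "linux-64_" subdirectory prefix the website prepended to the download.
p = Path("linux-64_fzf-0.46.1-ha8f183a_0.conda")
p.rename(p.with_name(p.name.removeprefix("linux-64_")))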
4
3
77,930,086
2024-2-2
https://stackoverflow.com/questions/77930086/how-to-alter-pythons-json-encoder-to-convert-nan-to-none
Say I have this (all this is Python 3.11 in case that matters):
not_a_number = float("NaN")  # this actually comes from somewhere else
json.dumps([not_a_number])
The output is an (invalid) JSON literal NaN. I've been trying to create a JSONEncoder subclass that would use math.isnan() to determine if the value is a NaN and output None instead. I first tried subclassing JSONEncoder and doing it in default(), which I found later isn't called for things like float. I then found a recommendation to override the encode() method instead, so I tried this:
class NanEncoder(json.JSONEncoder):
    def encode(self, obj):
        if isinstance(obj, float):
            if math.isnan(obj):
                return None
        return super(NanEncoder, self).encode(obj)
This works:
>>> json.dumps(not_a_number, cls=NanEncoder)
>>> json_string = json.dumps(not_a_number, cls=NanEncoder)
>>> print(json_string)
None
Cool, I think I've got it. BUT, this does not work:
not_a_number_list = [not_a_number]
print(not_a_number_list)
[nan]
json_string = json.dumps(not_a_number_list, cls=NanEncoder)
print(json_string)
[NaN]
So, as I see in the python docs, maybe I need to call the encode method slightly differently, so I try that:
json_string = NanEncoder().encode(not_a_number_list)
print(json_string)
[NaN]
Alas, no difference. So, here's my question: is it possible to create a JSONEncoder subclass that will find instances of the float that is NaN in Python and output None instead? Or am I relegated to doing a search/replace on the string NaN with null in the output JSON (which, theoretically anyway, could alter data I don't want altered)? Fixing the input dictionary is not a great option because the dict that the values are in is quite large and its construction is not under my control (so I can't stop NaN from getting in there in the first place).
I don't know of an easy solution to replace NaN with custom values (other than replacing NaNs in the original object). On top of that, the json module doesn't make it easy to monkeypatch the inner workings: https://github.com/python/cpython/blob/034bb70aaad5622bd53bad21595b18b8f4407984/Lib/json/encoder.py#L224
But you can try:
import json

def custom_make_iterencode(*args, **kwargs):
    def floatstr(
        o,
        allow_nan=True,
        _repr=float.__repr__,
        _inf=float("inf"),
        _neginf=float("-inf"),
    ):
        if o != o:
            text = "None"  # <--- Here is the actual NaN replacement!
        elif o == _inf:
            text = "Infinity"
        elif o == _neginf:
            text = "-Infinity"
        else:
            return _repr(o)

        if not allow_nan:
            raise ValueError(
                "Out of range float values are not JSON compliant: " + repr(o)
            )

        return text

    return orig_make_iterencode(
        {},  # markers
        json.encoder.JSONEncoder.default,
        json.encoder.py_encode_basestring_ascii,
        None,  # indent
        floatstr,
        json.encoder.JSONEncoder.key_separator,
        json.encoder.JSONEncoder.item_separator,
        False,  # sort_keys
        False,  # skip_keys
        False,  # _one_shot
    )

orig_make_iterencode = json.encoder._make_iterencode
# https://github.com/python/cpython/blob/034bb70aaad5622bd53bad21595b18b8f4407984/Lib/json/encoder.py#L247
json.encoder._make_iterencode = custom_make_iterencode
json.encoder.c_make_encoder = None  # <-- pretend we don't have the C version of the function

# finally you can do:
print(json.dumps(float("nan")))
print(json.dumps([float("nan")]))

# don't forget to set the `json.encoder._make_iterencode` and `json.encoder.c_make_encoder` back!
Prints:
None
[None]
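For comparison, the "replace NaNs in the original object" route mentioned above can be done with a small recursive pre-pass, at the cost of walking and copying the structure once:

import json
import math

def nan_to_none(obj):
    # Recursively copy the structure, mapping float NaN to None (-> null).
    if isinstance(obj, float) and math.isnan(obj):
        return None
    if isinstance(obj, dict):
        return {k: nan_to_none(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [nan_to_none(v) for v in obj]
    return obj

print(json.dumps(nan_to_none([float("nan"), {"a": 1.5}])))  # [null, {"a": 1.5}]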
3
0
77,930,211
2024-2-2
https://stackoverflow.com/questions/77930211/removing-duplicates-in-a-pandas-dataframe-for-only-a-specified-value
Let's say I have a dataframe: df = pd.DataFrame({ 'ID': [1, 2, 3, 1, 2, 3], 'Value': ['A', 'B', 'A', 'B', 'C', 'A'] }) If I wanted to only remove duplicates on ID when ID is a specified value (let's say 1), how would I do that? In other words, the resulting dataframe would look like: |ID|Value| |--|-----| |1 |A | |2 |B | |3 |A | |2 |C | |3 |A | AI assistants are having a surprisingly difficult time with this one.
Create a boolean mask with DataFrame.duplicated: mask = df.ID.eq(1) & df.duplicated(subset=["ID"]) print(df[~mask]) Prints: ID Value 0 1 A 1 2 B 2 3 A 4 2 C 5 3 A Steps: print(df.ID.eq(1)) 0 True 1 False 2 False 3 True 4 False 5 False Name: ID, dtype: bool print(df.duplicated(subset=["ID"])) 0 False 1 False 2 False 3 True 4 True 5 True dtype: bool mask = df.ID.eq(1) & df.duplicated(subset=["ID"]) print(mask) 0 False 1 False 2 False 3 True 4 False 5 False dtype: bool
2
3
77,928,740
2024-2-2
https://stackoverflow.com/questions/77928740/python-sockets-in-a-multithreaded-program-with-classes-objects-interfere-with-ea
As an aid to my work, I am developing a communication simulator between devices, based on TCP sockets, in Python 3.12 (with an object-oriented approach). Basically, the difference between a SERVER-type communication channel and a CLIENT-type one is merely the way the sockets are instantiated: respectively, the server ones listen for/accept connection requests while the client ones actively connect to their endpoint. Once the connection is established, either party can begin to transmit something, which the other receives, processes and then responds to (this on the same socket pair, of course). As you can see, this simulator has a simple interface based on Tkinter. You can create up to 4 channels in a grid layout; in this case we have two: When the user clicks on the CONNECT button, this is what happens in the listener of that button in the frame class:
class ChannelFrame(tk.Frame):
    channel = None  # instance of channel/socket type

    def connectChannel(self):
        port = self.textPort.get();
        if self.socketType.get() == 'SOCKET_SERVER':
            self.channel = ChannelServerManager(self,self.title,port)
        elif self.socketType.get() == 'SOCKET_CLIENT':
            ipAddress = self.textIP.get()
            self.channel = ChannelClientManager(self,self.title,ipAddress,port)
Then I have an implementation of a channel of type Server and one of type Client. Their constructors basically collect the received data and create a main thread whose aim is to create a socket and then:
1a) connect to the counterpart in the case of a socket client
1b) wait for connection requests in the case of a socket server
2) enter a main loop using select.select and trace the received and sent data in the text area of their frame
Here is the code for the Client main thread
class ChannelClientManager():
    establishedConn = None
    receivedData = None
    eventMainThread = None  # set when the user clicks on the DISCONNECT button

    def threadClient(self):
        self.socketsInOut.clear()
        self.connected = False
        while True:
            if (self.eventMainThread.is_set()):
                print(f"threadClient() --> ChannelClient {self.channelId}: Socket client requested to shut down, exit main loop")
                break;
            if(not self.connected):
                try :
                    self.establishedConn = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                    self.establishedConn.connect((self.ipAddress, int(self.port)))
                    self.channelFrame.setConnectionStateChannel(True)
                    self.socketsInOut.append(self.establishedConn)
                    self.connected = True
                # keep on trying to connect to my counterpart until I make it
                except socket.error as err:
                    print(f'socket.error threadClient() --> ChannelClient {self.channelId}: Error while connecting to server: {err}')
                    time.sleep(0.5)
                    continue
                except socket.timeout as sockTimeout:
                    print(f'socket.timeout threadClient() --> ChannelClient {self.channelId}: Timeout while connecting to server: {sockTimeout}')
                    continue
                except Exception as e:
                    print(f'Exception on connecting threadClient() --> ChannelClient {self.channelId}: {e}')
                    continue
            if(self.connected):
                try:
                    r, _, _ = select.select(self.socketsInOut, [], [], ChannelClientManager.TIMEOUT_SELECT)
                    if len(r) > 0:  # socket ready to be read with incoming data
                        for fd in r:
                            data = fd.recv(1)
                            if data:
                                self.manageReceivedDataChunk(data)
                            else:
                                print(f"ChannelClient {self.channelId}: Received not data on read socket, server connection closed")
                                self.closeConnection()
                    else:  # timeout
                        self.manageReceivedPartialData()
                except ConnectionResetError as crp:
                    print(f"ConnectionResetError threadClient() --> ChannelClient {self.channelId}: {crp}")
                    self.closeConnection()
                except Exception as e:
                    print(f'Exception on selecting threadClient() --> ChannelClient {self.channelId}: {e}')
Here is the code for the Server main thread
class ChannelServerManager():
    socketServer = None  # used to listen for/accept connections
    establishedConn = None  # represents the accepted connection with the counterpart
    receivedData = None
    eventMainThread = None
    socketsInOut = []

    def __init__(self, channelFrame, channelId, port):
        self.eventMainThread = Event()
        self.socketsInOut.clear()
        self.socketServer = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.socketServer.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.socketServer.bind(('', int(port)))  # listening on any network interface; with 127.0.0.1 it would listen only on the loopback
        self.socketServer.listen(1)  # accepting one connection from a client
        self.socketsInOut.append(self.socketServer)
        self.mainThread = Thread(target = self.threadServer)
        self.mainThread.start()

    def threadServer(self):
        self.receivedData = ''
        while True:
            if (self.eventMainThread.is_set()):
                print("threadServer() --> ChannelServer is requested to shut down, exit main loop\n")
                break;
            try:
                r, _, _ = select.select(self.socketsInOut, [], [], ChannelServerManager.TIMEOUT_SELECT)
                if len(r) > 0:  # sockets ready to be read
                    for fd in r:
                        if fd is self.socketServer:  # if the ready socket is my server socket, then we have a client wanting to connect --> let's accept it
                            clientsock, clientaddr = self.socketServer.accept()
                            self.establishedConn = clientsock
                            print(f"ChannelServer {self.channelId} is connected from client address {clientaddr}")
                            self.socketsInOut.append(clientsock)
                            self.channelFrame.setConnectionStateChannel(True)
                            self.receivedData = ''
                        elif fd is self.establishedConn:
                            data = fd.recv(1)
                            if not data:
                                print(f"ChannelServer {self.channelId}: Received not data on read socket, client connection closed")
                                self.socketsInOut.remove(fd)
                                self.closeConnection()
                            else:
                                self.manageReceivedDataChunk(data)
                else:  # timeout
                    self.manageReceivedPartialData()
            except Exception as e:
                print(f"Exception threadServer() --> ChannelServer {self.channelId}: {traceback.format_exc()}")
I don't know why, but these frames/sockets appear to interfere with each other or "share data". Disconnecting and closing a channel from the button in its own frame also drives the other one into an error, or the other one closes/crashes too. These two frames/objects should each live their own life and move forward with their counterpart as long as it is connected; instead, they interfere. As you can see from this screenshot: From a medical device (which is the server), I am sending this data
<VT>MSH|^~\&|KaliSil|KaliSil|AM|HALIA|20240130182136||OML^O33^OML_O33|1599920240130182136|P|2.5<CR>PID|1||A20230522001^^^^PI~090000^^^^CF||ESSAI^Halia||19890522|M|||^^^^^^H|||||||||||||||<CR>PV1||I||||||||||||A|||||||||||||||||||||||||||||||<CR>SPM|1|072401301016^072401301016||h_san^|||||||||||||20240130181800|20240130181835<CR>ORC|NW|072401301016||A20240130016|saisie||||20240130181800|||^^|CP1A^^^^^^^^CP1A||20240130182136||||||A^^^^^ZONA<CR>TQ1|1||||||||0||<CR>OBR|1|072401301016||h_GLU_A^^T<CR>OBX|1|NM|h_GLU_A^^T||||||||||||||||<CR>BLG|D<CR><FS>
only to the channel on port 10001, but part of this data is received on one socket client and the other part on the other (right) socket client. This is not just a problem of the text being rendered in the wrong frame; the log of the received data also shows that some data is received in Channel 0 and some other data in Channel 1. Why does this happen?
If, instead, I start 2 instances of the simulator with only one channel each, then everything works perfectly, but this defeats our purpose of being able to work with up to 4 channels in parallel from a single window. Do you have any ideas? At first I had implemented ChannelServerManager and ChannelClientManager as extensions of a ChannelAbstractManager with common methods and data structures, based on Python's abc library. Then I read that inheritance in Python is not the same as in Java, so I thought the different instances were sharing some attributes. I removed the abstract class and replicated the code and resources in both classes, but this has not solved it. Any suggestions?
Then I read that inheritance in Python is not the same as in Java Thanks, that was a good tip to find the issue! While not an inheritance issue, I think you're seeing problems caused by this Java-ism: class ChannelClientManager(): establishedConn = None receivedData = None eventMainThread = None #set this event when user clicks on DISCONNECT button ... class ChannelServerManager(): socketServer = None #user to listen/accept connections establishedConn = None #represents accepted connections with the counterpart receivedData = None eventMainThread = None socketsInOut = [] def __init__(self, channelFrame, channelId, port): ... In Python you don't need to declare your attributes in advance, you should just assign to them directly in your __init__ method (constructor). All these variables you're declaring are actually class attributes and hence shared between all instances like you suspected. It might not be obvious because when you do self.establishedConn = ... you're also creating an instance attribute that overrides the visibility of the class attribute, so you never actually access the shared values. With one exception: socketsInOut = [] def __init__(self, channelFrame, channelId, port): ... self.socketsInOut.clear() ... self.socketsInOut.append(self.socketServer) Because you never assigned self.socketsInOut, all instances are instead accessing the (shared) class attribute. After a small startup period, all channels end up sending and receiving messages on the same socket list, hence the messages being split. The fix is to remove all those unnecessary attribute "declarations", and add a missing one for socketsInOut: class ChannelClientManager(): def threadClient(self): self.socketsInOut = [] # Change here. ... class ChannelServerManager(): def __init__(self, channelFrame, channelId, port): self.eventMainThread = Event() self.socketsInOut = [] # And here. self.socketServer = socket.socket(socket.AF_INET, socket.SOCK_STREAM) ... And to save you time debugging another well known Python wart, default arguments are also shared between calls. So never declare a method with a mutable default argument, like def foo(items=[]): ....
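That last pattern is usually written with the None-default idiom, which gives each call a fresh list:

def foo(items=None):
    if items is None:
        items = []  # fresh list per call instead of one shared default
    items.append(1)
    return items

print(foo(), foo())  # [1] [1], whereas a shared [] default would print [1] [1, 1]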
2
4
77,928,596
2024-2-2
https://stackoverflow.com/questions/77928596/finding-all-combinations-of-two-sets-with-k-elements-being-exchanged
Let's say I have two lists ['a', 'b', 'c', 'd'] and ['x', 'y', 'z', 'w']. I want to create a set of two other lists where k elements are exchanged between the two lists: Example for a single element:
['x', 'b', 'c', 'd'] ['a', 'y', 'z', 'w']
['a', 'x', 'c', 'd'] ['b', 'y', 'z', 'w']
['a', 'b', 'x', 'd'] ['c', 'y', 'z', 'w']
['a', 'b', 'c', 'x'] ['d', 'y', 'z', 'w']
['y', 'b', 'c', 'd'] ['x', 'a', 'z', 'w']
['a', 'y', 'c', 'd'] ['x', 'b', 'z', 'w']
['a', 'b', 'y', 'd'] ['x', 'c', 'z', 'w']
['a', 'b', 'c', 'y'] ['x', 'd', 'z', 'w']
['z', 'b', 'c', 'd'] ['x', 'y', 'a', 'w']
['a', 'z', 'c', 'd'] ['x', 'y', 'b', 'w']
['a', 'b', 'z', 'd'] ['x', 'y', 'c', 'w']
['a', 'b', 'c', 'z'] ['x', 'y', 'd', 'w']
['w', 'b', 'c', 'd'] ['x', 'y', 'z', 'a']
['a', 'w', 'c', 'd'] ['x', 'y', 'z', 'b']
['a', 'b', 'w', 'd'] ['x', 'y', 'z', 'c']
['a', 'b', 'c', 'w'] ['x', 'y', 'z', 'd']
Is there an efficient way to generate these kinds of lists? The only thing I can think of is swapping elements in a for loop:
list_0 = ['a', 'b', 'c', 'd']
list_1 = ['x', 'y', 'z', 'w']

for j in range(len(list_0)):
    for i in range(len(list_1)):
        list_0[i], list_1[j] = list_1[j], list_0[i]
        print(list_0, list_1)
        list_0 = ['a', 'b', 'c', 'd']
        list_1 = ['x', 'y', 'z', 'w']
This approach is very inefficient, and I am not sure how I could extend it to swapping k elements.
I think the nested loops is as efficient as it gets, because you're basically generating a list of [source_index, target_index] pairs. Though re-creating the lists on every iteration is indeed wasteful. Here's a more straightforward solution for k=1, using itertools.product: import itertools list_0 = ['a', 'b', 'c', 'd'] list_1 = ['x', 'y', 'z', 'w'] indexes = range(len(list_0)) for j, i in itertools.product(indexes, repeat=2): # Swap items. list_0[i], list_1[j] = list_1[j], list_0[i] print(list_0, list_1) # Swap back. list_0[i], list_1[j] = list_1[j], list_0[i] # Alternative without swapping, by creating new lists. # new_list_0 = [value if index != i else list_1[j] for index, value in enumerate(list_0)] # new_list_1 = [value if index != j else list_0[i] for index, value in enumerate(list_1)] # print(new_list_0, new_list_1) # ['x', 'b', 'c', 'd'] ['a', 'y', 'z', 'w'] # ['a', 'x', 'c', 'd'] ['b', 'y', 'z', 'w'] # ['a', 'b', 'x', 'd'] ['c', 'y', 'z', 'w'] # ['a', 'b', 'c', 'x'] ['d', 'y', 'z', 'w'] # ['y', 'b', 'c', 'd'] ['x', 'a', 'z', 'w'] # ['a', 'y', 'c', 'd'] ['x', 'b', 'z', 'w'] # ['a', 'b', 'y', 'd'] ['x', 'c', 'z', 'w'] # ['a', 'b', 'c', 'y'] ['x', 'd', 'z', 'w'] # ['z', 'b', 'c', 'd'] ['x', 'y', 'a', 'w'] # ['a', 'z', 'c', 'd'] ['x', 'y', 'b', 'w'] # ['a', 'b', 'z', 'd'] ['x', 'y', 'c', 'w'] # ['a', 'b', 'c', 'z'] ['x', 'y', 'd', 'w'] # ['w', 'b', 'c', 'd'] ['x', 'y', 'z', 'a'] # ['a', 'w', 'c', 'd'] ['x', 'y', 'z', 'b'] # ['a', 'b', 'w', 'd'] ['x', 'y', 'z', 'c'] # ['a', 'b', 'c', 'w'] ['x', 'y', 'z', 'd'] And here's a general solution for any k: import itertools list_0 = ['a', 'b', 'c', 'd'] list_1 = ['x', 'y', 'z', 'w'] k = 2 # Number of elements to swap. indexes = range(len(list_0)) for source_indexes in itertools.permutations(indexes, k): for target_indexes in itertools.combinations(indexes, k): # Swap elements. for target_index, source_index in zip(source_indexes, target_indexes): list_0[source_index], list_1[target_index] = list_1[target_index], list_0[source_index] print(list_0, list_1) # Swap back. for target_index, source_index in zip(source_indexes, target_indexes): list_0[source_index], list_1[target_index] = list_1[target_index], list_0[source_index] # ['x', 'y', 'c', 'd'] ['a', 'b', 'z', 'w'] # ['x', 'b', 'y', 'd'] ['a', 'c', 'z', 'w'] # ['x', 'b', 'c', 'y'] ['a', 'd', 'z', 'w'] # ['a', 'x', 'y', 'd'] ['b', 'c', 'z', 'w'] # ['a', 'x', 'c', 'y'] ['b', 'd', 'z', 'w'] # ['a', 'b', 'x', 'y'] ['c', 'd', 'z', 'w'] # ['x', 'z', 'c', 'd'] ['a', 'y', 'b', 'w'] # ['x', 'b', 'z', 'd'] ['a', 'y', 'c', 'w'] # ['x', 'b', 'c', 'z'] ['a', 'y', 'd', 'w'] # ['a', 'x', 'z', 'd'] ['b', 'y', 'c', 'w'] # ['a', 'x', 'c', 'z'] ['b', 'y', 'd', 'w'] # ['a', 'b', 'x', 'z'] ['c', 'y', 'd', 'w'] # ['x', 'w', 'c', 'd'] ['a', 'y', 'z', 'b'] # ['x', 'b', 'w', 'd'] ['a', 'y', 'z', 'c'] # ['x', 'b', 'c', 'w'] ['a', 'y', 'z', 'd'] # ['a', 'x', 'w', 'd'] ['b', 'y', 'z', 'c'] # ['a', 'x', 'c', 'w'] ['b', 'y', 'z', 'd'] # ['a', 'b', 'x', 'w'] ['c', 'y', 'z', 'd'] # ['y', 'x', 'c', 'd'] ['b', 'a', 'z', 'w'] # ...
2
2
77,920,123
2024-2-1
https://stackoverflow.com/questions/77920123/using-starlette-testclient-causes-an-attributeerror-unixselectoreventloop-o
Using FastAPI : 0.101.1 I run this test_read_aynsc and it pass. # app.py from fastapi import FastAPI app = FastAPI() app.get("/") def read_root(): return {"Hello": "World"} # conftest.py import pytest from typing import Generator from fastapi.testclient import TestClient from server import app @pytest.fixture(scope="session") def client() -> Generator: with TestClient(app) as c: yield c # test_root.py def test_read_aynsc(client): response = client.get("/item") However, executing this test in DEBUG mode (in pycharm) will cause an error. Here is the Traceback : test setup failed cls = <class 'anyio._backends._asyncio.AsyncIOBackend'> func = <function start_blocking_portal.<locals>.run_portal at 0x1555c51b0> args = (), kwargs = {}, options = {} @classmethod def run( cls, func: Callable[[Unpack[PosArgsT]], Awaitable[T_Retval]], args: tuple[Unpack[PosArgsT]], kwargs: dict[str, Any], options: dict[str, Any], ) -> T_Retval: @wraps(func) async def wrapper() -> T_Retval: task = cast(asyncio.Task, current_task()) task.set_name(get_callable_name(func)) _task_states[task] = TaskState(None, None) try: return await func(*args) finally: del _task_states[task] debug = options.get("debug", False) loop_factory = options.get("loop_factory", None) if loop_factory is None and options.get("use_uvloop", False): import uvloop loop_factory = uvloop.new_event_loop with Runner(debug=debug, loop_factory=loop_factory) as runner: > return runner.run(wrapper()) ../../../Library/Caches/pypoetry/virtualenvs/kms-backend-F9vGicV3-py3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py:1991: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../../../Library/Caches/pypoetry/virtualenvs/kms-backend-F9vGicV3-py3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py:193: in run return self._loop.run_until_complete(task) ../../../Library/Application Support/JetBrains/Toolbox/apps/PyCharm-P/ch-0/233.13763.11/PyCharm.app/Contents/plugins/python/helpers-pro/pydevd_asyncio/pydevd_nest_asyncio.py:202: in run_until_complete self._run_once() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <_UnixSelectorEventLoop running=False closed=True debug=False> def _run_once(self): """ Simplified re-implementation of asyncio's _run_once that runs handles as they become ready. 
""" ready = self._ready scheduled = self._scheduled while scheduled and scheduled[0]._cancelled: heappop(scheduled) timeout = ( 0 if ready or self._stopping else min(max( scheduled[0]._when - self.time(), 0), 86400) if scheduled else None) event_list = self._selector.select(timeout) self._process_events(event_list) end_time = self.time() + self._clock_resolution while scheduled and scheduled[0]._when < end_time: handle = heappop(scheduled) ready.append(handle) > if self._compute_internal_coro: E AttributeError: '_UnixSelectorEventLoop' object has no attribute '_compute_internal_coro' ../../../Library/Application Support/JetBrains/Toolbox/apps/PyCharm-P/ch-0/233.13763.11/PyCharm.app/Contents/plugins/python/helpers-pro/pydevd_asyncio/pydevd_nest_asyncio.py:236: AttributeError During handling of the above exception, another exception occurred: @pytest.fixture(scope="session") def client() -> Generator: > with TestClient(app) as c: tests/fixtures/common/http_client_app.py:10: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../../../Library/Caches/pypoetry/virtualenvs/kms-backend-F9vGicV3-py3.10/lib/python3.10/site-packages/starlette/testclient.py:730: in __enter__ self.portal = portal = stack.enter_context( ../../../.pyenv/versions/3.10.12/lib/python3.10/contextlib.py:492: in enter_context result = _cm_type.__enter__(cm) ../../../.pyenv/versions/3.10.12/lib/python3.10/contextlib.py:135: in __enter__ return next(self.gen) ../../../Library/Caches/pypoetry/virtualenvs/kms-backend-F9vGicV3-py3.10/lib/python3.10/site-packages/anyio/from_thread.py:454: in start_blocking_portal run_future.result() ../../../.pyenv/versions/3.10.12/lib/python3.10/concurrent/futures/_base.py:451: in result return self.__get_result() ../../../.pyenv/versions/3.10.12/lib/python3.10/concurrent/futures/_base.py:403: in __get_result raise self._exception ../../../.pyenv/versions/3.10.12/lib/python3.10/concurrent/futures/thread.py:58: in run result = self.fn(*self.args, **self.kwargs) ../../../Library/Caches/pypoetry/virtualenvs/kms-backend-F9vGicV3-py3.10/lib/python3.10/site-packages/anyio/_core/_eventloop.py:73: in run return async_backend.run(func, args, {}, backend_options) ../../../Library/Caches/pypoetry/virtualenvs/kms-backend-F9vGicV3-py3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py:1990: in run with Runner(debug=debug, loop_factory=loop_factory) as runner: ../../../Library/Caches/pypoetry/virtualenvs/kms-backend-F9vGicV3-py3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py:133: in __exit__ self.close() ../../../Library/Caches/pypoetry/virtualenvs/kms-backend-F9vGicV3-py3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py:141: in close _cancel_all_tasks(loop) ../../../Library/Caches/pypoetry/virtualenvs/kms-backend-F9vGicV3-py3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py:243: in _cancel_all_tasks loop.run_until_complete(tasks.gather(*to_cancel, return_exceptions=True)) ../../../Library/Application Support/JetBrains/Toolbox/apps/PyCharm-P/ch-0/233.13763.11/PyCharm.app/Contents/plugins/python/helpers-pro/pydevd_asyncio/pydevd_nest_asyncio.py:202: in run_until_complete self._run_once() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <_UnixSelectorEventLoop running=False closed=True debug=False> def _run_once(self): """ Simplified re-implementation of asyncio's _run_once that runs handles as they become ready. 
""" ready = self._ready scheduled = self._scheduled while scheduled and scheduled[0]._cancelled: heappop(scheduled) timeout = ( 0 if ready or self._stopping else min(max( scheduled[0]._when - self.time(), 0), 86400) if scheduled else None) event_list = self._selector.select(timeout) self._process_events(event_list) end_time = self.time() + self._clock_resolution while scheduled and scheduled[0]._when < end_time: handle = heappop(scheduled) ready.append(handle) > if self._compute_internal_coro: E AttributeError: '_UnixSelectorEventLoop' object has no attribute '_compute_internal_coro' I am not sure to understand what causes the error Since I can see the _UnixSelectorEventLoop, I need to precise that my operating system is MacOS M1.
To support async debugging, PyCharm patches a bunch of asyncio APIs with custom wrapped functions. Notably, it patches asyncio.new_event_loop() in ~/Applications/PyCharm Professional Edition.app/Contents/plugins/python/helpers-pro/pydevd_asyncio/pydevd_nest_asyncio.py:169. Starlette uses anyio, with the asyncio backend by default. Anyio is eventually gonna try to get its event loop from asyncio.events.new_event_loop(), which is unpatched. Subsequent calls to patched asyncio APIs are gonna throw errors because they assume a patched event loop. Until it's fixed properly, you can resolve this issue by forcing anyio to use the patched new_event_loop with TestClient(app, backend_options={'loop_factory': asyncio.new_event_loop}) Update JetBrains dev says the fix is on the way, see https://youtrack.jetbrains.com/issue/PY-70245. Until then, they recommend disabling python.debug.asyncio.repl in Help | Find Actions | Registry
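Applied to the fixture from the question, the workaround looks like this:

import asyncio
from typing import Generator

import pytest
from fastapi.testclient import TestClient

from server import app


@pytest.fixture(scope="session")
def client() -> Generator:
    # Force anyio to use the event-loop factory that PyCharm has patched.
    with TestClient(app, backend_options={"loop_factory": asyncio.new_event_loop}) as c:
        yield c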
9
11
77,924,775
2024-2-2
https://stackoverflow.com/questions/77924775/how-to-get-correct-date-with-gspread
How can I get the spreadsheet values with Python's gspread? Suppose there is a cell that looks like 1/1 because m/d is specified as the cell display format, but actually contains 2024/1/1. Retrieving this cell using get_all_values() returns "1/1". I want the actual value "2024/1/1", not the value displayed on the sheet. What should I do? I will omit the sheet acquisition part. values = workbook.worksheet(sheet_name).get_all_values() value = values[1][0] # 1/1 will be obtained
Issue and workaround:
From the image, script, and value you showed, I understand that you put a value of 2024/01/01 into cell "A2" as a date object, and the cell value is shown as 1/1 because of the number format.
Currently, when this cell value is retrieved through the Sheets API, 1/1 is returned because the default request uses valueRenderOption: FORMATTED_VALUE. When valueRenderOption is changed to UNFORMATTED_VALUE, the serial number 45292 is retrieved. Unfortunately, the inputted value of 2024/01/01 cannot be retrieved directly, so the following flow is required:

Retrieve the serial number from the cell with valueRenderOption: UNFORMATTED_VALUE.
Convert the serial number to a date object.
Convert the date format.

When this flow is reflected in your script, how about the following modification?
Modified script:
values = workbook.worksheet(sheet_name).get_all_values(value_render_option="UNFORMATTED_VALUE")
value = values[1][0]
date = datetime.fromtimestamp((int(value) - 25569) * 86400)  # Ref: https://stackoverflow.com/a/6154953
formatted = date.strftime('%Y/%m/%d')
# or date.strftime('%Y/%-m/%-d')
# or date.strftime('%Y/%#m/%#d')
print(formatted)
In the case where cell "A2" holds 2024/01/01 as a date object and 1/1 is shown by the number format, running this script gives formatted as 2024/01/01.
from datetime import datetime is used.
References:
get_all_values
Method: spreadsheets.values.get
2
2
77,922,861
2024-2-1
https://stackoverflow.com/questions/77922861/why-does-sympy-solve-produce-empty-array-for-certain-inequalities
sympy's solve seems to generally work for inequalities. For example: from sympy import symbols, solve; x = symbols('x'); print(solve('x^2 > 4', x)); # ((-oo < x) & (x < -2)) | ((2 < x) & (x < oo)) In some cases where all values of x or no values of x are valid solutions, solve() returns True or False to indicate that. print(solve('x^2 > 4 + x^2', x)); # False print(solve('x^2 >= x^2', x)); # True However, there are other cases where I would expect a result of True or False and instead I get []: print(solve('x - x > 0', x)); # [] print(solve('x - x >= 0', x)); # [] Why does this happen/what am I doing wrong?
Remember that expressions are not WYSIWYG: your expressions did not have x in them: print(solve('x - x > 0', x)); # [] <-- solve(0>0, x) => solve(False, x) print(solve('x - x >= 0', x)); # [] <-- solve(0>=0, x) => solve(True, x) Since x does not appear in either expression, there are no values of x to report in the first case that can make the False become True; in the second case there is no way to represent the Universal set (since any value of x cannot change the value of True to anything else). So in both cases there is no solution to report (though maybe an error or at least x<=oo could be reported for the second case) but in general: if the requested symbol does not appear in the equation (at the outset or after simplification) then [] is returned (and x does not appear in True).
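You can see the simplification happen before solve is even involved:

from sympy import symbols

x = symbols('x')
print((x - x) > 0)   # False: x - x collapses to 0 before solve ever sees it
print((x - x) >= 0)  # True: likewise, so solve receives solve(True, x)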
2
2
77,915,146
2024-1-31
https://stackoverflow.com/questions/77915146/binding-an-event-when-a-shape-is-clicked-in-turtle
I generate random figures on the screen. I want the screen to clear when I click on any circle shape. How do I bind an event for when a circle is clicked? I tried onclick() with clearscreen(), but it doesn't work. My code:
# -*- coding: utf-8 -*-
import random
import turtle

class Figure:
    def __init__(self):
        __colors = ['red', 'green', 'yellow', 'purple', 'orange']
        __figures = ['square', 'circle', 'triangle']
        self.__x = random.randint(-330, 330)
        self.__y = random.randint(-230, 230)
        self.__color = random.choice(__colors)
        self.t = turtle.Turtle()
        self.t.hideturtle()
        self.t.fillcolor(self.__color)
        self.t.up()
        self.t.setpos(self.__x, self.__y)
        self.t.down()
        __choose = random.choice(__figures)
        if __choose == 'square':
            self.square()
        elif __choose == 'circle':
            self.circle()
        else:
            self.triangle()

    def square(self):
        self.t.begin_fill()
        for i in range(4):
            self.t.fd(50)
            self.t.left(90)
        self.t.end_fill()

    def circle(self):
        self.t.begin_fill()
        self.t.circle(50)
        self.t.end_fill()
        self.t.onclick(self.delete_figures)

    def triangle(self):
        self.t.begin_fill()
        self.t.fd(50)
        self.t.lt(120)
        self.t.fd(50)
        self.t.lt(120)
        self.t.fd(50)
        self.t.end_fill()

    def delete_figures(self, x ,y):
        turtle.clearscreen()

turtle.tracer(1, 0)
for i in range(10):
    Figure()
turtle.mainloop()
I want the screen to clear when I click on any circle shape. How do I bind an event for when a circle is clicked?
onclick doesn't register a click listener on drawings the turtle has made, only on the turtle itself. But these turtles are hidden and only a few pixels large even if visible, so there's nothing to click. There are two main options: Keep the turtle drawing the same and add a screen click listener. In the click handler, loop over all of your circles and determine if the x/y position of the click occurred within the radius of any circle. Here's an example: import random import turtle class Figure: def __init__(self): __colors = ['red', 'green', 'yellow', 'purple', 'orange'] __figures = ['square', 'circle', 'triangle'] self.__x = random.randint(-330, 330) self.__y = random.randint(-230, 230) self.__color = random.choice(__colors) self.t = turtle.Turtle() self.t.hideturtle() self.t.fillcolor(self.__color) self.t.up() self.t.setpos(self.__x, self.__y) self.t.down() self.shape = random.choice(__figures) if self.shape == 'square': self.square() elif self.shape == 'circle': self.circle() else: self.triangle() def square(self): self.t.begin_fill() for i in range(4): self.t.fd(50) self.t.left(90) self.t.end_fill() def circle(self): self.t.begin_fill() self.t.circle(50) self.t.end_fill() self.t.right(90) self.t.forward(-50) def triangle(self): self.t.begin_fill() self.t.fd(50) self.t.lt(120) self.t.fd(50) self.t.lt(120) self.t.fd(50) self.t.end_fill() def is_touching(self, x, y): return self.shape == 'circle' and self.t.distance(x, y) < 50 turtle.tracer(1, 0) figures = [] for i in range(10): figures.append(Figure()) def test_circle_click(x, y): for f in figures: if f.is_touching(x, y): turtle.clearscreen() turtle.onscreenclick(test_circle_click) turtle.mainloop() Instead of drawing circles, use turtle.shape("circle") and turtle.color() to make the turtle look like a circle. Don't hide the turtle to allow it to be clickable using your current handler. Here's an example (and an answer in this thread from the author of that other post).
2
2
77,917,904
2024-2-1
https://stackoverflow.com/questions/77917904/pycharm-debugger-console-not-autocompleting-variables-created-in-the-debugger-co
I have just upgraded to PyCharm 2023.3.3 Community Edition and am experiencing a weird issue that hasn't occurred before. The system is Windows 10 64-bit, and this has worked fine on this machine before. When I debug a script, hit a breakpoint, the program is suspended and the debugger window is opened, autocomplete doesn't seem to work on new variables I create in the debugging console; however, it works on variables created in the script before the breakpoint was hit. e.g., [Script screenshot] [Debugging console screenshot showing autocomplete working for variable testing but not test] Any ideas why the debugging console is not autocompleting test, and how I could fix this?
Open Settings | Build, Execution, Deployment | Console and switch Code Completion to Runtime. Relevant ticket in PyCharm's issue tracker - PY-64055.
4
7
77,915,725
2024-1-31
https://stackoverflow.com/questions/77915725/pass-fail-result-for-paramiko-sftpclient-get-and-put-functions
I'm writing an Ansible module using Paramiko's SFTPClient class. I have the following code snippets import paramiko from ansible.module_utils.basic import * def get_sftp_client(host, port, user, password): # Open a transport transport = paramiko.Transport((host,port)) # Auth transport.connect(None,user,password) # Get a client object sftp = paramiko.SFTPClient.from_transport(transport) return (transport, sftp) def transfer_file(params): has_changed = False meta = {"upload": "not yet implemented"} user = params['user'] password = params['password'] host = params['host'] port = params['port'] direction = params['direction'] src = params['src'] dest = params['dest'] transport, sftp = get_sftp_client(host, port, user, password) stdout = None stderr = None if direction == 'download': sftp.get(src, dest) else: sftp.put(src, dest) meta = { "source": src, "destination": dest, "direction": direction, "stdout": stdout, "stderr": stderr } # Close if sftp: sftp.close() if transport: transport.close() return (has_changed, meta) def main(): fields = { "user": {"required": True, "type": "str"}, "password": {"required": True, "type": "str", "no_log": True}, "host": {"required": True, "type": "str"}, "port": {"type": "int", "default": 22}, "direction": { "type": "str", "choices": ["upload", "download"], "default": "upload", }, "src": {"type": "path", "required": True}, "dest": {"type": "path", "required": True}, } module = AnsibleModule(argument_spec=fields) # Do the work has_changed, result = transfer_file(module.params) module.exit_json(changed=has_changed, meta=result) if __name__ == '__main__': main() How can I capture the pass/fail result of the get() and put() functions? The docs don't mention the return value for get(), and the return value for put() is an SFTPAttribute class, which I don't need. I just want to know if the transaction passed or failed, and if failed, get the error. Thanks.
The Paramiko SFTPClient.put and .get throw an exception on error. If they do not throw, then the transfer succeeded.
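So in the module above, a minimal sketch would be to wrap the transfer in a try/except and report the failure through meta (this reuses the question's variables; the exception classes are a reasonable assumption based on Paramiko raising IOError/OSError for SFTP-level failures and SSHException for transport problems):
import paramiko

try:
    if direction == 'download':
        sftp.get(src, dest)
    else:
        sftp.put(src, dest)
    has_changed = True
except (OSError, paramiko.SSHException) as e:
    # OSError/IOError covers SFTP-level failures (missing file, permissions);
    # SSHException covers transport/protocol problems
    stderr = str(e)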
2
2
77,915,868
2024-1-31
https://stackoverflow.com/questions/77915868/module-os-has-no-attribute-add-dll-directory-in-aws-lambda-function
I'm facing an issue that os does not have an add_dll_directory attribute in the AWS lambda function on the trigger. I have written code for creating a thumbnail of a video and I have tried it locally that's working fine but when I upload this function with the required libraries then I get this issue and still, I'm unable to resolve it. I did not get any help from the internet yet regarding this issue. that is an issue that I'm facing [ERROR] AttributeError: module 'os' has no attribute 'add_dll_directory' Traceback (most recent call last): File "/var/lang/lib/python3.12/importlib/__init__.py", line 90, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1381, in _gcd_import File "<frozen importlib._bootstrap>", line 1354, in _find_and_load File "<frozen importlib._bootstrap>", line 1325, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 929, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 994, in exec_module File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed File "/var/task/lambda_function.py", line 2, in <module> import cv2 File "/var/task/cv2/__init__.py", line 11, in <module> import numpy File "/var/task/numpy/__init__.py", line 112, in <module> _delvewheel_patch_1_5_2() File "/var/task/numpy/__init__.py", line 109, in _delvewheel_patch_1_5_2 os.add_dll_directory(libs_dir) This is the code that I'm using for creating a thumbnail import json import cv2 import tempfile from PIL import Image from io import BytesIO import boto3 s3 = boto3.client('s3') def get_video_file(bucket, key): video_file = s3.get_object(Bucket=bucket, Key=key)['Body'].read() return video_file def upload_image_file(image_byte_data, bucket, key): s3.put_object(Bucket=bucket, Key=key, Body=image_byte_data, ContentType="image/png") def generate_thumbnail(video_byte_data): with tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) as temp_file: temp_file_path = temp_file.name temp_file.write(video_byte_data) video_capture = cv2.VideoCapture(temp_file_path, cv2.CAP_ANY) success, frame = video_capture.read() if success: frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) thumbnail_image = Image.fromarray(frame_rgb) thumbnail_size = (320,320) thumbnail_image.resize(thumbnail_size) thumbnail_data = BytesIO() thumbnail_image.save(thumbnail_data, format='PNG') thumbnail_bytes = thumbnail_data.getvalue() return thumbnail_bytes else: raise ValueError('Unable to read video frame') def lambda_handler(event, context): bucket = event['Records'][0]['s3']['bucket']['name'] key = event['Records'][0]['s3']['object']['key'] video_file = get_video_file(bucket, key) image_output_file = generate_thumbnail(video_file) upload_image_file(image_output_file, "thumbnail876", key.split(".")[0]+".png") return { "statusCode": 200, "body": json.dumps( { "message": "hello world", } ), } Please can someone tell me why I'm facing this issue? and how can I resolve this? Thanks
Lambda runtime doesn't provide Pillow or openCV, and you seem to have packaged Windows binary distributions of your dependencies with your Lambda code. You need to package the distributions compatible with Amazon Linux and the architecture of your Lambda function (x86_64 or arm). The easiest way to do this is create a layer. Put the list of your dependencies (Pillow, opencv-python-headless, plus whatever else you need) into requirements.txt Run these commands: pip install \ --platform manylinux2014_x86_64 \ --target=python \ --implementation cp \ --python-version 3.12 \ --only-binary=:all: \ --upgrade \ -r requirements.txt zip -r imaging.zip python/ If your Lambda is running on ARM, replace manylinux2014_x86_64 with manylinux2014_aarch64 Create the layer from the resulting zip file and use it in your lambda.
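If you prefer to script the layer creation and attachment, here's a hedged boto3 sketch (the layer and function names are placeholders, not taken from your setup):
import boto3

client = boto3.client('lambda')

# publish the zip built above as a new layer version
with open('imaging.zip', 'rb') as f:
    layer = client.publish_layer_version(
        LayerName='imaging',
        Content={'ZipFile': f.read()},
        CompatibleRuntimes=['python3.12'],
    )

# attach the layer to your function
client.update_function_configuration(
    FunctionName='my-thumbnail-function',  # placeholder name
    Layers=[layer['LayerVersionArn']],
)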
6
5
77,920,704
2024-2-1
https://stackoverflow.com/questions/77920704/how-to-access-values-of-two-keys-in-the-map-lambda-function
I have a list of dictionaries like below:
sample = [{'By': {'email': '[email protected]', 'name': 'xyzzy', 'id': '5859'}},{'By': {'email': '[email protected]', 'name': 'abccb', 'id': '9843'}}, {'By': {'email': '[email protected]', 'name': 'xyzzy', 'id': '5859'}}]
From this, I am trying to access the keys 'name' and 'id' and write the distinct values into a dictionary. The following returns the ids alone:
print(set(map(lambda x: x['By']['id'], sample)))
output: {'9843', '5859'}
Required output: {"9843":"abccb", "5859":"xyzzy"}
I tried an f-string to concatenate the 'id' and 'name' values with a colon (:) in between, to give the format "id:name" for each dictionary item in the list:
set(map(lambda x: f"{x['By']['id']}:{x['By']['name']}", sample))
output: {'9843:abccb', '5859:xyzzy'}
Is it possible to access the values of two keys in the map lambda function? Thank you.
You want a dict, not a set. You can get all the values in the lambda and pack them in a tuple:
print(dict(map(lambda x: (x['By']['id'], x['By']['name']), sample)))
You should also consider a simpler solution; you don't really need map and lambda here, a dict comprehension will do just fine:
print({x['By']['id']: x['By']['name'] for x in sample})
2
2
77,920,093
2024-2-1
https://stackoverflow.com/questions/77920093/python-3-12-correct-typing-for-listlistint-str-listliststr
I have a list of users that I would like to type correctly, but I'm not able to find an answer explaining how to do this. The list looks like this:
listofusers = [[167, 'john', 'John Fresno', [[538, 'Privileged'], [529, 'User']]]]
I tried doing it like this:
def test(listofusers: list[list[str,list[list[str]]]]) -> None:
    ...
But VS Code only shows the typing as list[list[str]]. I also tried using Union, like this:
listofusers: list[list[Union[int,str,list[list[str]]]]]
But again, this doesn't seem to work. Can someone shed some light on this?
As already mentioned in comments and in this answer, having such complicated structures looks like a code smell. You can think of re-organising your code a bit to make the semantics more explicit:
import dataclasses

@dataclasses.dataclass
class Data:
    data_id: int
    attribute: str

@dataclasses.dataclass
class User:
    user_id: int
    name: str
    fullname: str
    data: list[Data]

user_1 = User(
    user_id=167,
    name="john",
    fullname="John Fresno",
    data=[
        Data(data_id=538, attribute='Privileged'),
        Data(data_id=529, attribute='User')
    ]
)
The code above is just an example (maybe I missed a detail somewhere, but I hope you get my point). The details are fully up to you. The advantage is that your listofusers becomes simply:
users: list[User] = [user_1]
with better readability, no? You can also use Pydantic, of course. It is very similar to dataclasses (using it behind the scenes AFAIK) and has some convenient simplifications regarding data verification.
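For completeness, a minimal sketch of the Pydantic route (assuming Pydantic v2; the field names mirror the dataclass example above):
from pydantic import BaseModel

class Data(BaseModel):
    data_id: int
    attribute: str

class User(BaseModel):
    user_id: int
    name: str
    fullname: str
    data: list[Data]

# Pydantic validates and coerces the raw nested dicts for you
user_1 = User(
    user_id=167,
    name="john",
    fullname="John Fresno",
    data=[{"data_id": 538, "attribute": "Privileged"},
          {"data_id": 529, "attribute": "User"}],
)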
4
4
77,915,010
2024-1-31
https://stackoverflow.com/questions/77915010/how-to-faster-find-x-value-that-minimizes-result-for-function-involving-multip
I have a LazyFrame with multiple columns of hourly data for a few periods. For each period, I want to find the x value for a function involving mathematical operations of multiple columns that minimizes the result. I used scipy.optimize.minimize to accomplish this, and I actually obtain the desired result. The problem is that this process runs extremely slowly, so I'm just looking for anything that accomplishes the same, but faster. def minimization_target(x, period_start): return hourly_data.filter(pl.col('period_start') == period_start).select((((pl.col('price').median() * pl.col('quantity').median() - (pl.col('estimated_quantity') * (pl.col('estimated_price') + x)).sum()) / (pl.col('key_product') * (pl.col('estimated_price') + x)).sum())).abs() - 1).abs()).collect().item() results = hourly_data.group_by('period_start', maintain_order=True).map_groups(lambda group: pl.DataFrame({'x_values': scipy.optimize.minimize(minimization_target, group.get_column('initial_guess').median(), args=group.get_column('period_start').median()).x}), schema=None) Minimal example: import scipy import polars as pl from datetime import datetime hourly_data = pl.DataFrame({'period': [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3], 'price': [4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 7, 7, 7, 7, 7, 7], 'quantity': [7, 7, 7, 7, 7, 7, 6, 6, 6, 6, 6, 6, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4, 4, 4], 'estimated_price': [5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8], 'estimated_quantity': [6, 6, 6, 6, 6, 6, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4, 4, 4, 3, 3, 3, 3, 3, 3], 'key_product': [0.9, 0.8, 0.7, 0.8, 0.9, 0.8, 0.7, 0.8, 0.9, 0.8, 0.7, 0.8, 0.9, 0.8, 0.7, 0.8, 0.9, 0.8, 0.7, 0.8, 0.9, 0.8, 0.7, 0.8], 'initial_guess': [10, 10, 10, 10, 10, 10, 20, 20, 20, 20, 20, 20, 30, 30, 30, 30, 30, 30, 40, 40, 40, 40, 40, 40]}).lazy() hourly_data = hourly_data.with_columns(pl.datetime_range(datetime(2024, 1, 1), datetime(2024, 1, 1, 23), '1h').alias('hour')) hourly_data = hourly_data.with_columns(pl.col('hour').min().over('period').alias('period_start')) def minimization_target(x, period_start): return hourly_data.filter(pl.col('period_start') == period_start).select((((pl.col('price').median() * pl.col('quantity').median() - (pl.col('estimated_quantity') * (pl.col('estimated_price') + x)).sum()) / (pl.col('key_product') * (pl.col('estimated_price') + x)).sum()).abs() - 1).abs()).collect().item() results = hourly_data.group_by('period_start', maintain_order=True).map_groups(lambda group: pl.DataFrame({'x_values': scipy.optimize.minimize(minimization_target, group.get_column('initial_guess').median(), args=group.get_column('period_start').median()).x}), schema=None) Input: shape: (24, 9) ┌────────┬───────┬──────────┬─────────────┬───┬─────────────┬────────────┬────────────┬────────────┐ │ period ┆ price ┆ quantity ┆ estimated_p ┆ … ┆ key_product ┆ initial_gu ┆ hour ┆ period_sta │ │ --- ┆ --- ┆ --- ┆ rice ┆ ┆ --- ┆ ess ┆ --- ┆ rt │ │ i64 ┆ i64 ┆ i64 ┆ --- ┆ ┆ f64 ┆ --- ┆ datetime[μ ┆ --- │ │ ┆ ┆ ┆ i64 ┆ ┆ ┆ i64 ┆ s] ┆ datetime[μ │ │ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ s] │ ╞════════╪═══════╪══════════╪═════════════╪═══╪═════════════╪════════════╪════════════╪════════════╡ │ 0 ┆ 4 ┆ 7 ┆ 5 ┆ … ┆ 0.9 ┆ 10 ┆ 2024-01-01 ┆ 2024-01-01 │ │ ┆ ┆ ┆ ┆ ┆ ┆ ┆ 00:00:00 ┆ 00:00:00 │ │ 0 ┆ 4 ┆ 7 ┆ 5 ┆ … ┆ 0.8 ┆ 10 ┆ 2024-01-01 ┆ 2024-01-01 │ │ ┆ ┆ ┆ ┆ ┆ ┆ ┆ 01:00:00 ┆ 00:00:00 │ │ 0 ┆ 4 ┆ 7 ┆ 5 ┆ … ┆ 0.7 ┆ 10 ┆ 2024-01-01 ┆ 2024-01-01 │ │ ┆ ┆ ┆ ┆ ┆ ┆ ┆ 02:00:00 ┆ 00:00:00 │ │ 0 ┆ 4 ┆ 7 ┆ 5 ┆ … ┆ 0.8 ┆ 10 ┆ 2024-01-01 ┆ 
2024-01-01 │ │ ┆ ┆ ┆ ┆ ┆ ┆ ┆ 03:00:00 ┆ 00:00:00 │ │ 0 ┆ 4 ┆ 7 ┆ 5 ┆ … ┆ 0.9 ┆ 10 ┆ 2024-01-01 ┆ 2024-01-01 │ │ ┆ ┆ ┆ ┆ ┆ ┆ ┆ 04:00:00 ┆ 00:00:00 │ │ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … │ │ 3 ┆ 7 ┆ 4 ┆ 8 ┆ … ┆ 0.8 ┆ 40 ┆ 2024-01-01 ┆ 2024-01-01 │ │ ┆ ┆ ┆ ┆ ┆ ┆ ┆ 19:00:00 ┆ 18:00:00 │ │ 3 ┆ 7 ┆ 4 ┆ 8 ┆ … ┆ 0.9 ┆ 40 ┆ 2024-01-01 ┆ 2024-01-01 │ │ ┆ ┆ ┆ ┆ ┆ ┆ ┆ 20:00:00 ┆ 18:00:00 │ │ 3 ┆ 7 ┆ 4 ┆ 8 ┆ … ┆ 0.8 ┆ 40 ┆ 2024-01-01 ┆ 2024-01-01 │ │ ┆ ┆ ┆ ┆ ┆ ┆ ┆ 21:00:00 ┆ 18:00:00 │ │ 3 ┆ 7 ┆ 4 ┆ 8 ┆ … ┆ 0.7 ┆ 40 ┆ 2024-01-01 ┆ 2024-01-01 │ │ ┆ ┆ ┆ ┆ ┆ ┆ ┆ 22:00:00 ┆ 18:00:00 │ │ 3 ┆ 7 ┆ 4 ┆ 8 ┆ … ┆ 0.8 ┆ 40 ┆ 2024-01-01 ┆ 2024-01-01 │ │ ┆ ┆ ┆ ┆ ┆ ┆ ┆ 23:00:00 ┆ 18:00:00 │ └────────┴───────┴──────────┴─────────────┴───┴─────────────┴────────────┴────────────┴────────────┘ Desired output: shape: (4, 1) ┌────────────┐ │ x_values │ │ --- │ │ f64 │ ╞════════════╡ │ -16.006287 │ │ 10.331055 │ │ 25.420471 │ │ 37.352234 │ └────────────┘
Avoid working with entire dataframe. As a start, you can cut the runtime by a factor of ~2 by only using the current group in the optimization target instead of filtering the entire dataframe first. This can be achieved as follows. import functools def minimization_target(x, group: pl.DataFrame): numerator_expr = pl.col('price').median() * pl.col('quantity').median() - (pl.col('estimated_quantity') * (pl.col('estimated_price') + x)).sum() denominator_expr = (pl.col('key_product') * (pl.col('estimated_price') + x)).sum() return ( group.select( ( (numerator_expr / denominator_expr).abs() - 1 ).abs() ) .item() ) ( hourly_data .group_by('period_start', maintain_order=True) .map_groups( lambda group: pl.DataFrame({"x": scipy.optimize.minimize(functools.partial(minimization_target, group=group), group.get_column('initial_guess').median()).x}), schema=None, ) .collect() ) Simplifying minimization target. We'd like the minimization target to be as simple as possible as scipy.optimize.minimize will repeatedly evaluate the function. The current implementation works with "heavy" DataFrame objects and computes many terms common to all evaluations. Using a simpler minimization target while precomuting these terms yields another ~4x speedup on top. import numpy as np def minimization_target(x: float, s1: float, f1: float, s2: float, f2: float): return np.abs(np.abs((s1 - f1 * x) / (s2 + f2 * x)) - 1) ( hourly_data .group_by('period_start', maintain_order=True) .map_groups( lambda group: pl.DataFrame({ "x": scipy.optimize.minimize( fun=partial( minimization_target, # precomputing terms common to all evaluations of the minimization target s1=group.select(pl.col('price').median() * pl.col('quantity').median() - (pl.col('estimated_quantity') * pl.col('estimated_price')).sum()).item(), f1=group.select(pl.col("estimated_quantity").sum()).item(), s2=group.select((pl.col('key_product') * pl.col('estimated_price')).sum()).item(), f2=group.select(pl.col("key_product").sum()).item(), ), x0=group.get_column('initial_guess').median() ).x }), schema=None, ) .collect() )
3
5
77,919,397
2024-2-1
https://stackoverflow.com/questions/77919397/polars-create-an-integer-column-from-another-one-by-computing-the-difference-wi
I have a polars dataframe like this one:
shape: (10, 2)
┌────────┬───────┐
│ foo    ┆ bar   │
│ ---    ┆ ---   │
│ i64    ┆ i64   │
╞════════╪═══════╡
│ 86     ┆ 11592 │
│ 109    ┆ 2765  │
│ 109    ┆ 4228  │
│ 153    ┆ 4214  │
│ 153    ┆ 7217  │
│ 153    ┆ 11095 │
│ 160    ┆ 1134  │
│ 222    ┆ 5509  │
│ 225    ┆ 10150 │
│ 239    ┆ 4151  │
└────────┴───────┘
And a sorted list of integers points:
points = [0, 1500, 3000, 4500, 6000, 7500, 9000, 10500, 12000]
I want to create a new column baz, such that for each element y of bar, I find the largest x in points such that x <= y. Then the element of baz is y - x. How can I do that?
You can have a solution which fully relies on polars expressions, by using join_asof. First, create DataFrame out of the list and then join (I'm assuming that points list is pre-sorted) Here, we leave strategy argument of the join empty, so default backward strategy is going to be used. A backward search selects the last row in the right DataFrame (in our case - list of points) whose on key is less than or equal to the left’s key: df_points = pl.DataFrame({'point': points}) ( df.sort('bar') .join_asof(df_points.set_sorted('point'), right_on='point', left_on='bar') .with_columns(baz=pl.col('bar') - pl.col('point')) .drop('point') .sort('foo') ) ┌─────┬───────┬──────┐ │ foo ┆ bar ┆ baz │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═════╪═══════╪══════╡ │ 86 ┆ 11592 ┆ 1092 │ │ 109 ┆ 2765 ┆ 1265 │ │ 109 ┆ 4228 ┆ 1228 │ │ 153 ┆ 4214 ┆ 1214 │ │ 153 ┆ 7217 ┆ 1217 │ │ 153 ┆ 11095 ┆ 595 │ │ 160 ┆ 1134 ┆ 1134 │ │ 222 ┆ 5509 ┆ 1009 │ │ 225 ┆ 10150 ┆ 1150 │ │ 239 ┆ 4151 ┆ 1151 │ └─────┴───────┴──────┘
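As a point of comparison, a sketch of the same lookup via numpy.searchsorted, reusing df and points from the question (this assumes every value in bar is at least points[0], so the index never underflows):
import numpy as np
import polars as pl

pts = np.asarray(points)
# index of the largest x in points with x <= y, for each y in bar
idx = np.searchsorted(pts, df['bar'].to_numpy(), side='right') - 1
out = df.with_columns(pl.Series('baz', df['bar'].to_numpy() - pts[idx]))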
2
4
77,918,063
2024-2-1
https://stackoverflow.com/questions/77918063/how-to-get-numeric-data-from-the-object-data-type-and-make-it-int-in-a-dataf
When loading securities quote data, the price values look like this, where units is the integer part and nano is the fractional part:
| close                             | volume | time                      |
| {'units': 270, 'nano': 190000000} | 51642  | 2023-12-29 07:30:00+00:00 |
| {'units': 270, 'nano': 930000000} | 47910  | 2023-12-29 08:00:00+00:00 |
| {'units': 271, 'nano': 0}         | 49376  | 2023-12-29 08:30:00+00:00 |
How can I fix the "close" column so that the data looks like this?
| close  | volume | time                      |
| 270.19 | 51642  | 2023-12-29 07:30:00+00:00 |
| 270.93 | 47910  | 2023-12-29 08:00:00+00:00 |
| 271    | 49376  | 2023-12-29 08:30:00+00:00 |
I tried to sum the values as follows, but nothing came out.
def cost_money(v):
    return v.units + v.nano / 1e9
Convert the dictionaries to a DataFrame with json_normalize and then create the new column:
v = pd.json_normalize(df['close'])
df['close'] = v.units + v.nano / 1e9
But if there are string reprs of dictionaries, first convert them:
import ast

v = pd.json_normalize(df['close'].apply(ast.literal_eval))
df['close'] = v.units + v.nano / 1e9

print (df)
    close  volume                       time
0  270.19   51642  2023-12-29 07:30:00+00:00
1  270.93   47910  2023-12-29 08:00:00+00:00
2  271.00   49376  2023-12-29 08:30:00+00:00
Your solution:
import ast

def cost_money(v):
    # if necessary, convert string repr to dict
    v = ast.literal_eval(v)
    return v['units'] + v['nano'] / 1e9

df['close'] = df['close'].apply(cost_money)
2
4
77,917,091
2024-2-1
https://stackoverflow.com/questions/77917091/unique-melt-and-pivot-crosstab-format-with-a-groupby-using-pandas
I have a dataset where I would like for some values to become column headers, as well as a crosstab format.
data
year  qtr     ID  type  growth  re    nondd_re  se_re  or
2024  2024Q1  NY  aa    3.18    1.14  0         0      0
2024  2024Q2  NY  aa    2.1     1.14  0         0      0
2024  2024Q1  NY  dd    6.26    3.07  3.07      0      0
2024  2024Q2  NY  dd    4.13    3.07  3.07      0      0
2024  2024Q1  CA  aa    0       0     0         0      0
2024  2024Q2  CA  aa    0.03    0     0         0      0
2024  2024Q1  CA  dd    0       0     0         0      0
2024  2024Q2  CA  dd    0.06    0     0         0      0
desired
ID  type      type  2024Q1  2024Q2
NY  growth    dd    6.26    4.13
NY  nond_ re  dd    3.07    3.07
NY  se_re     dd    0       0
NY  or        dd    0       0
NY  re        dd    3.07    3.07
NY  growth    aa    3.18    2.1
NY  nond_ re  aa    0       0
NY  se_re     aa    0       0
NY  or        aa    0       0
NY  re        aa    1.14    1.14
CA  growth    dd    0       0.6
CA  nond_ re  dd    0       0
CA  se_re     dd    0       0
CA  or        dd    0       0
CA  re        dd    0       0
CA  growth    aa    0       0.3
CA  nond_ re  aa    0       0
CA  se_re     aa    0       0
CA  or        aa    0       0
CA  re        aa    0       0
doing
# Melt the dataframe to transform metrics columns into rows
melted_df = df.melt(id_vars=["year", "qtr", "ID", "type"], var_name="type", value_name="value")

# Pivot the melted dataframe
pivot_df = melted_df.pivot_table(index=["ID","type"], columns="qtr", values="value", fill_value=0)

# Reset index to turn multi-index into columns
pivot_df = pivot_df.reset_index()
The issue is that not all of the values are being picked up; the above code produces output with missing values. Any suggestion is appreciated.
You can try to set_index() + stack()/unstack(): out = ( df.set_index(["year", "qtr", "ID", "type"]) .stack() .unstack("qtr") .reset_index() .rename(columns={"level_3": "type2"}) .rename_axis(columns=None, index=None) ) print(out) Prints: year ID type type2 2024Q1 2024Q2 0 2024 CA aa growth 0.00 0.03 1 2024 CA aa re 0.00 0.00 2 2024 CA aa nondd_re 0.00 0.00 3 2024 CA aa se_re 0.00 0.00 4 2024 CA aa or 0.00 0.00 5 2024 CA dd growth 0.00 0.06 6 2024 CA dd re 0.00 0.00 7 2024 CA dd nondd_re 0.00 0.00 8 2024 CA dd se_re 0.00 0.00 9 2024 CA dd or 0.00 0.00 10 2024 NY aa growth 3.18 2.10 11 2024 NY aa re 1.14 1.14 12 2024 NY aa nondd_re 0.00 0.00 13 2024 NY aa se_re 0.00 0.00 14 2024 NY aa or 0.00 0.00 15 2024 NY dd growth 6.26 4.13 16 2024 NY dd re 3.07 3.07 17 2024 NY dd nondd_re 3.07 3.07 18 2024 NY dd se_re 0.00 0.00 19 2024 NY dd or 0.00 0.00
2
1
77,912,637
2024-1-31
https://stackoverflow.com/questions/77912637/what-is-in-python-function-definition
I know what a single asterisk (*) means in a function definition. See, for example, this question. However, the documentation of warnings.warn contains the entry \*. What does this symbol combination mean in a function definition?
A typo; up to 3.11 it's not there:
warnings.warn(message, category=None, stacklevel=1, source=None)
Issue a warning, or maybe ignore it or raise an exception.
The category argument, if given, must be a warning category class; it defaults to UserWarning. Alternatively, message can be a Warning instance, in which case category will be ignored and message.__class__ will be used. In this case, the message text will be str(message).
This function raises an exception if the particular warning issued is changed into an error by the warnings filter.
The stacklevel argument can be used by wrapper functions written in Python, like this:
def deprecation(message):
    warnings.warn(message, DeprecationWarning, stacklevel=2)
This makes the warning refer to deprecation()'s caller, rather than to the source of deprecation() itself (since the latter would defeat the purpose of the warning message).
source, if supplied, is the destroyed object which emitted a ResourceWarning.
Changed in version 3.6: Added source parameter.
Try it:
def func(\*):
    pass
Result:
ERROR!
  File "<string>", line 1
    def func(\*):
             ^
SyntaxError: unexpected character after line continuation character
and:
def func(po = 4):
    print('po = ' , po)
    pass

a = func( 3)
a = func(po = 3)

def func(*, po = 4):
    print('po = ' , po)
    pass

a = func(po = 3)
a = func(3)
output:
po = 3
po = 3
po = 3
Traceback (most recent call last):
  File "<string>", line 18, in <module>
TypeError: func() takes 0 positional arguments but 1 was given
See What does a bare asterisk do in a parameter list? What are "keyword-only" parameters? for an explanation: a bare * is used to force the caller to use named arguments. See also PEP 570.
3
2
77,914,684
2024-1-31
https://stackoverflow.com/questions/77914684/how-do-you-write-a-statically-typed-python-function-that-converts-a-string-to-a
I would like to write a statically typed Python function that accepts a general type and a string and that returns a value of the given type, derived from the given string. For example: >>> parse(int | None, "5") 5 >>> parse(int | None, "") is None True Here's a Python 3.12 attempt (which has a problem in the type signature since union types can't be assigned to variables of type type): def parse[a](x: type[a], v: str) -> a: if x in (bool, bool | None): return bool(v) if x in (float, float | None): return float(v) if x in (int, int | None): return int(v) if x in (str, str | None): return v raise ValueError However, this code yields errors with Pyright 1.1.349: error: Expression of type "bool" cannot be assigned to return type "a@parse" error: Expression of type "float" cannot be assigned to return type "a@parse" error: Expression of type "int" cannot be assigned to return type "a@parse" How can this be fixed?
In short, this is a problem of parametricity. Your function is not the same for every possible type A. With the signature def parse[A](x: type[A], v: str) -> A: ... the type checker has no idea what A is, only that a value of type A is required. As such, the only way to get such a value is to call x. The following trivial definition would type-check: def parse[A](x: type[A], v: str) -> A: return x() Your function, however, tries to return a value of type bool, or type str, or type int, etc, which (statically speaking) is distinct from a value of type A, even if A is bound at runtime to bool or str or int. Type narrowing does not apply to a generic type. One workaround is to pass not the type itself, but a string representing the type, and using typing.overload to implement ad-hoc polymorphism rather than parametric polymorphism. from typing import overload, Literal @overload def parse(x: Literal["str"], v: str) -> str: ... @overload def parse(x: Literal["float"], v: str) -> float: ... @overload def parse(x: Literal["int"], v: str) -> int: ... @overload def parse(x: Literal["bool"], v: str) -> bool: ... def parse(x, v): match x: case "str": return v case "int": return int(v) case "float": return float(v) case "bool": return v.lower() == "true" case _: raise ValueError reveal_type(parse("int", "3")) reveal_type(parse("float", "3.0")) reveal_type(parse("bool", "True")) reveal_type(parse("str", "foo")) # The following is a type error, but you were going to # raise a runtime error anyway. reveal_type(parse("complex", "3.9"))
2
1
77,914,583
2024-1-31
https://stackoverflow.com/questions/77914583/gammainc-difference-in-python-and-matlab
I am transferring code between Matlab and Python. However, when I use scipy.special.gammainc I get different values than when I use gammainc in Matlab. My Python code is:
from scipy.special import gammainc
gammainc([1, 2, 3, 4], 5)

array([0.99326205, 0.95957232, 0.87534798, 0.73497408])
However in Matlab I have:
gammainc([1,2,3,4],5,'lower')

[0.00365984682734371, 0.0526530173437111, 0.184736755476228, 0.371163064820127]
I am trying to change my Python version to give me the same result as the Matlab code. I have tried instead:
from scipy.special import gammaincc
gammaincc([1, 2, 3, 4], 5)

array([0.00673795, 0.04042768, 0.12465202, 0.26502592])
This is closer but it is still not what I am looking for.
There are two issues with your code: Matlab's gammainc function expects the input arguments in reverse order compared to the usual mathematical notation. So it expects the integral limit first, then the exponent. Scipy's gammaincc is the upper incomplete Gamma function, not the lower. For the lower version you need to use gammainc. So, to reproduce Matlab's result in Python with Scipy, use from scipy.special import gammainc print(gammainc(5, [1, 2, 3, 4])) This gives [0.00365985 0.05265302 0.18473676 0.37116306]
4
3
77,914,497
2024-1-31
https://stackoverflow.com/questions/77914497/keep-all-rows-before-first-occurrence-of-a-value
I have a pandas dataframe and the opposite question to Drop all rows before first occurrence of a value. Instead of dropping all rows before the first occurrence of a value, I want to keep the rows before the first occurrence of a value. So, for example, I want to keep all the rows before the first occurrence of 1 in the column Count:
Year  ID  Count
1997  1   0
1998  2   0
1999  3   1
2000  4   0
2001  5   1
leads to:
Year  ID  Count
1997  1   0
1998  2   0
Use cummax and negate its output (~) to form a boolean Series for boolean indexing: out = df[~df['Count'].eq(1).cummax()] Output: Year ID Count 0 1997 1 0 1 1998 2 0 Intermediates: Year ID Count eq(1) cummax ~ 0 1997 1 0 False False True 1 1998 2 0 False False True 2 1999 3 1 True True False 3 2000 4 0 False True False 4 2001 5 1 True True False Variant, if you are sure that there is at least one 1: out = df.iloc[:np.argmax(df['Count'].eq(1))]
2
3
77,914,135
2024-1-31
https://stackoverflow.com/questions/77914135/weird-behavior-of-new
I have encountered a peculiar behavior with the __new__ method in Python and would like some clarification on its functionality in different scenarios. Let me illustrate with two unrelated classes, A and B, and provide the initial code:
class A:
    def __new__(cls, *args, **kwargs):
        return super().__new__(B)

    def __init__(self, param):
        print(param)


class B:
    def __init__(self, param):
        print(param)


if __name__ == '__main__':
    a = A(1)
In this case, no output is generated, and neither A's __init__ nor B's __init__ is called. However, when I modify the code to make B a child of A:
......
class B(A):
......
Suddenly, B's __init__ is invoked, and it prints 1. I am seeking clarification on how this behavior comes about. In the first case, if I want to invoke B's __init__ explicitly, I find myself resorting to the following modification:
class A:
    def __new__(cls, *args, **kwargs):
        obj = super().__new__(B)
        obj.__init__(*args, **kwargs)
        return obj
Can someone explain why the initial code behaves as it does and why making B a child of A alters the behavior? Additionally, why does the modified code, which explicitly calls B's __init__, achieve the desired outcome, while the version without the explicit call does not?
TL;DR super().__new__(B) is not the same as B(). As @jonsharpe pointed out in a comment, __init__ is only called when an instance of cls is returned. But this implicit call to __init__ is handled* by type.__call__, not the call to __new__ itself before it returns. For example, you can imagine type.__call__ is implemented as something like def __call__(cls, *args, **kwargs): obj = cls.__new__(cls, *args, **kwargs) if isinstance(obj, cls): obj.__init__(*args, **kwargs) return obj So when you call A(1) in the first case, this translates to A.__call__(1), which gets implemented as type.__call__(A, 1). Inside __call__, cls is bound to A, so the first line is equivalent to obj = A.__new__(A, 1) Inside A.__new__, you don't pass cls as the first argument to super().new, but B, so obj gets assigned an instance of B. That object is not an instance of A, so obj.__init__ never gets called, and then the instance is returned. When you make B a subclass of A, now obj is an instance of A, so obj.__init__ gets called. * I believe this to be the case; everything I've ever tried is consistent with this model, but I haven't delved deeply enough into CPython to confirm. For example, if you change A's metaclass to class MyMeta(type): def __call__(cls, *args, **kwargs): return cls.__new__(cls, *args, **kwargs) then B.__init__ fails to be called, even when B is a subclass of A. The sentence in the documentation, If __new__() is invoked during object construction and it returns an instance of cls, then the new instance’s __init__() method will be invoked like __init__(self[, ...]), where self is the new instance and the remaining arguments are the same as were passed to the object constructor. is vague about what exactly "during object construction" means and who is invoking __init__. My understanding of type.__call__ conforms to this, but alternate implementations may be allowed. That would seem to make defining any __call__ method in a custom metaclass that doesn't return super().__call__(*args, **kwargs) a dicey proposition.
3
4
77,914,136
2024-1-31
https://stackoverflow.com/questions/77914136/dictionary-comparison-conditional-check-in-python
I have two lists of dictionaries in the format:
list1 = [
    {"time": "2024-01-29T18:32:24.000Z", "value": "abc"},
    {"time": "2024-01-30T19:47:48.000Z", "value": "def"},
    {"time": "2024-01-30T19:24:20.000Z", "value": "ghi"}
]

list2 = [
    {"time": "2024-01-30T18:34:44.000Z", "value": "xyz"},
    {"time": "2024-01-30T19:47:48.000Z", "value": "pqr"},
    {"time": "2024-01-30T19:24:20.000Z", "value": "jkl"}
]
Requirement: I need to compare the value of the "time" key of each dictionary in list1 with the "time" key of each dictionary in list2, and if they are equal, form a key-value pair dictionary for subsequent lookups. In the above, list1 and list2 have two dictionaries with the same time:
"2024-01-30T19:47:48.000Z"
"2024-01-30T19:24:20.000Z"
I need to combine the values of these two to form output as below:
Output: {"def" : "pqr", "ghi" : "jkl"}
Using the itertools.product function, I am able to produce the list below so far, but not the key-value pair dictionary as required.
import itertools

list1 = [{"time": "2024-01-29T18:32:24.000Z", "value": "abc"}, {"time": "2024-01-30T19:47:48.000Z", "value": "def"}, {"time": "2024-01-30T19:24:20.000Z", "value": "ghi"}]
list2 = [{"time": "2024-01-30T18:34:44.000Z", "value": "xyz"}, {"time": "2024-01-30T19:47:48.000Z", "value": "pqr"}, {"time": "2024-01-30T19:24:20.000Z", "value": "jkl"}]

output = [f"{x['value']} : {y['value']}" if x['time'] == y['time'] else None for (x, y) in itertools.product(list1, list2)]
print(output)
Current Output: [None, None, None, None, 'def : pqr', None, None, None, 'ghi : jkl']
I'm wondering if I can use a lambda function to achieve output as: {"def" : "pqr", "ghi" : "jkl"}
Any help or suggestions are appreciated.
dict comprehension has an if clause (at the end), to also filter out elements, like this: output = {x['value']: y['value'] for x, y in itertools.product(list1, list2) if x['time'] == y['time']} Output: {'def': 'pqr', 'ghi': 'jkl'}
6
4
77,911,173
2024-1-31
https://stackoverflow.com/questions/77911173/discrepancy-between-opencv-python-and-c-implementations-of-imreconstruct
I'm implementing MATLAB's imreconstruct in both Python and C++. However, for my test case, the Python implementation matches the MATLAB's output, the C++ one doesn't. This is the Python implementation: def imReconstruct(marker: np.array, mask: np.array) -> np.array: """ Naive implementation of MatLAB's imReconstruct function works when `mask` consists of mostly background (global minumum) will be slow otherwise """ kernel = cv.getStructuringElement(cv.MORPH_RECT, (3, 3)) # calculate the extreme values of the mask image min_val, max_val, _, _ = cv.minMaxLoc(mask) # clip the marker by global extrema of mask _, marker = cv.threshold(marker, min_val, max_val, cv.THRESH_TRUNC | cv.THRESH_BINARY_INV) while True: expanded = cv.dilate(marker, kernel) expanded = np.minimum(expanded, mask) # return `expanded` when the difference is small if np.max(np.abs(expanded - marker)) < 1e-5: return expanded # set expanded to marker and repeat marker = expanded D = np.array( [[4.2426405, 3.6055512, 3.1622777, 3. , 3. ], [3.6055512, 2.828427 , 2.236068 , 2. , 2. ], [3.1622777, 2.236068 , 1.4142135, 1. , 1. ], [3. , 2. , 1. , 0. , 0. ], [3. , 2. , 1. , 0. , 0. ]], dtype=float32) imReconstruct(D-.85,D) # output # array([[3.3926406, 3.3926406, 3.1622777, 3. , 3. ], # [3.3926406, 2.828427 , 2.236068 , 2. , 2. ], # [3.1622777, 2.236068 , 1.4142135, 1. , 1. ], # [3. , 2. , 1. , 0. , 0. ], # [3. , 2. , 1. , 0. , 0. ]], dtype=float32) And C++: Mat imReconstruct(Mat marker, Mat mask){ /************** * naive implementation of MatLAB's imReconstruct * works when `mask` consist of mostly background (global minimum) * will be slow otherwise *************/ Mat kernel = getStructuringElement(MORPH_RECT, Size(3,3)); // calculate the min and max values from mask double minMask, maxMask; minMaxLoc(mask, &minMask, &maxMask); // clip the marker by global extrema of mask threshold(marker, marker, minMask, maxMask, THRESH_TRUNC|THRESH_BINARY_INV); Mat expanded; // keep filling the holes with `dilate` // until there are no more changes while (1){ dilate(marker, expanded, kernel); expanded = min(expanded, mask); // compute the max difference minMaxLoc(expanded-marker, &minMask, &maxMask); // return image when changes are small if (maxMask<1e-5) return expanded; // set expanded as marker and continue looping marker = expanded; } } // test case cv::Mat D = (cv::Mat_<float> (5,5) << 4.2426405, 3.6055512, 3.1622777, 3. , 3. , 3.6055512, 2.828427 , 2.236068 , 2. , 2. , 3.1622777, 2.236068 , 1.4142135, 1. , 1. , 3. , 2. , 1. , 0. , 0. , 3. , 2. , 1. , 0. , 0. ); std::cout << imReconstruct(D - .85, D) << std::endl; // output // [3.3926406, 3.3926406, 3.1622777, 2.7555513, 2.3122778; // 3.3926406, 2.8284271, 2.236068, 2, 2; // 3.1622777, 2.236068, 1.4142135, 1, 1; // 2.7555513, 2, 1, 0, 0; // 2.3122778, 2, 1, 0, 0] What is the cause of this discrepancy? I may have overlooked something simple, but I already spent few hours in vain without any positive results.
There are minor, but significant differences between the Python and C++ implementation of the while loop body you presented. Those, coupled with some nuances of how cv::Mat and many OpenCV functions work came together to bite you. The variant of Python code that accurately matches the C++ implementation would look something like the following: expanded = None while True: expanded = cv.dilate(marker, kernel, expanded) expanded = cv.min(expanded, mask, expanded) _, max_val, _, _ = cv.minMaxLoc(expanded - marker) if max_val < 1e-5: return expanded marker = expanded This implementation would then suffer from the same issue as the C++ variant, producing the same (incorrect result). But why? Let's step through the relevant part of C++ code (the while loop) and discuss what happens. Analysis We begin with cv::Mat marker containing a 5x5 float array, and cv::Mat expanded empty. First iteration We call cv::dilate(marker, expanded, kernel);. Internally, cv::Mat::create is called on the destination expanded. Since it's empty, a new 5x5 float array is allocated. Dilation is performed and result written to expanded's buffer. Next step is expanded = cv::min(expanded, mask);. This invokes an overload of cv::Mat::operator= which "can reuse already allocated matrix if it has the right size and type to fit the matrix expression result". Hence, this is equivalent to call to cv::min, specifically cv::min(expanded, mask, expanded);. Like before, cv::Mat::create is called on the destination expanded. This time it already contains a 5x5 float array, so no allocation happens, and the buffer is reused. The calculation is done in-place, and after the call, expanded still refers to the same chunk of memory. Next the cv::minMaxLoc runs on result of expanded - marker. Nothing surprising here, the two are different arrays. The maxMask is not less than 1e-5, so we continue. Problem Starts Trouble begins with the final statement in the loop body, marker = expanded; — a shallow copy. This invokes a different overload of cv::Mat::operator=. "Matrix assignment is an O(1) operation. This means that no data is copied but the data is shared and the reference counter, if any, is incremented." That means that after this point, both marker and expanded refer the the same 5x5 float array, with reference count of 2. With this in mind, we proceed to second iteration. Second Iteration We call cv::dilate(marker, expanded, kernel);. Destination is already 5x5 float array, no allocation happens, and result is written to expanded's buffer (which is still shared with marker). Next, expanded = cv::min(expanded, mask);. Destination is already 5x5 float array, no allocation happens, and result is written to expanded's buffer (which is still shared with marker). Failure Finally, the cv::minMaxLoc runs on result of expanded - marker... but both of those really refer to the same data, so it's rather expanded - expanded, and that's just all zeros. Therefore, the test (maxMask < 1e-5) succeeds and the loop terminates prematurely. Solution 1 Ensure that new expanded is allocated on each iteration, as happens in the Python version of the code. Do this by simply moving declaration of expanded into the loop body. Code: // ... 
while (true) { cv::Mat expanded; cv::dilate(marker, expanded, kernel); expanded = cv::min(expanded, mask); cv::minMaxLoc(expanded - marker, nullptr, &maxMask); if (maxMask < 1e-5) { return expanded; } marker = expanded; } Output: [3.3926406, 3.3926406, 3.1622777, 3, 3; 3.3926406, 2.8284271, 2.236068, 2, 2; 3.1622777, 2.236068, 1.4142135, 1, 1; 3, 2, 1, 0, 0; 3, 2, 1, 0, 0] Solution 2 Instead of shallow copy, perform a deep copy. This is done by calling cv::Mat::copyTo. Code: Mat expanded; while (true) { cv::dilate(marker, expanded, kernel); expanded = cv::min(expanded, mask); cv::minMaxLoc(expanded - marker, nullptr, &maxMask); if (maxMask < 1e-5) { return expanded; } expanded.copyTo(marker); } Output is identical to that of solution 1. NB: The Python equivalent of this would be np.copyto(expanded, marker).
2
3
77,913,154
2024-1-31
https://stackoverflow.com/questions/77913154/vectorizing-power-of-jax-grad
I'm trying to vectorize the following "power-of-grad" function so that it accepts multiple orders: (see here) def grad_pow(f, order, argnum): for i in jnp.arange(order): f = grad(f, argnums=argnum) return f This function produces the following error after applying vmap on the argument order: jax.errors.ConcretizationTypeError: Abstract tracer value encountered where concrete value is expected: traced array with shape int32[]. It arose in the jnp.arange argument 'stop' I have tried writing a static version of grad_pow using jax.lax.cond and jax.lax.scan, following the logic here: def static_grad_pow(f, order, argnum): order_max = 3 ## maximum order def grad_pow(f, i): return cond(i <= order, grad(f, argnum), f), None return scan(grad_pow, f, jnp.arange(order_max+1))[0] if __name__ == "__main__": test_func = lambda x: jnp.exp(-2*x) test_func_grad_pow = static_grad_pow(jax.tree_util.Partial(test_func), 1, 0) print(test_func_grad_pow(1.)) Nevertheless, this solution still produces an error: return cond(i <= order, grad(f, argnum), f), None TypeError: differentiating with respect to argnums=0 requires at least 1 positional arguments to be passed by the caller, but got only 0 positional arguments. Just wondering how this issue can be resolved?
The fundamental issue with your question is that a vmapped function cannot return a function, it can only return arrays. All other details aside, that precludes any possibility of writing a valid function that does what you intend. There are alternatives: for example, rather than attempting to create a function that will return a function, you could instead create a function that accepts arguments and applies that function to those arguments. In that case, you'll run into another issue: if n is traced, there is no way to apply grad n times. JAX transformations like grad are evaluated at trace-time, and traced values like n are not available until runtime. One way to work around this is to pre-define all the functions you're interested in, and to use lax.switch to choose between them at runtime. The result would look something like this: import jax import jax.numpy as jnp from functools import partial @partial(jax.jit, static_argnums=[0], static_argnames=['argnum', 'max_order']) def apply_multi_grad(f, order, *args, argnum=0, max_order=10): funcs = [f] for i in range(max_order): funcs.append(jax.grad(funcs[-1], argnum)) return jax.lax.switch(order, funcs, *args) order = jnp.arange(3) x = jnp.ones(3) f = jnp.sin print(jax.vmap(apply_multi_grad, in_axes=(None, 0, 0))(f, order, x)) # [ 0.84147096 0.5403023 -0.84147096] # Compare by doing it manually: print(jnp.array([f(x[0]), jax.grad(f)(x[1]), jax.grad(jax.grad(f))(x[2])])) # [ 0.84147096 0.5403023 -0.84147096]
2
2
77,913,473
2024-1-31
https://stackoverflow.com/questions/77913473/why-pandas-value-counts-generates-tuples-as-index
I have the following pandas dataframe:
import pandas as pd

df = pd.DataFrame({'a': ["Q1", "Q2", "Q1", "P1"]})
That is:
    a
0  Q1
1  Q2
2  Q1
3  P1
When I count the values with counts = df.value_counts(normalize=True), the index entries become tuples. That is, counts.index is now:
MultiIndex([('Q1',), ('P1',), ('Q2',)], names=['a'])
but I want to preserve the plain strings as the index. In other words, I want counts.index to be an Index object instead of a MultiIndex. So, my questions are:
Why does value_counts return a MultiIndex?
Is there an alternative or a "fix" for that?
You have a MultiIndex because you're starting from a DataFrame, not a Series. This is actually not very well described in the DataFrame.value_counts documentation: The returned Series will have a MultiIndex with one level per input column but an Index (non-multi) for a single label. What this actually means, is that unless you explicitly pass a single name to subset, this will create a MultiIndex. If you use subset, make sure to not wrap the name in a list (df.value_counts(subset=['a'], normalize=True) would still produce a MultiIndex). workaround If you don't want a MultiIndex, first slice the column or use the subset parameter: counts = df['a'].value_counts(normalize=True) # or counts = df.value_counts('a', normalize=True) counts.index # Index(['Q1', 'Q2', 'P1'], dtype='object', name='a') Or squeeze to automatically convert a single column DataFrame to Series: counts = df.squeeze().value_counts(normalize=True) counts.index # Index(['Q1', 'Q2', 'P1'], dtype='object', name='a')
2
2
77,913,003
2024-1-31
https://stackoverflow.com/questions/77913003/how-does-autocommit-work-with-multiple-queries
I have a psycopg connection with autocommit turned on. Say, I run a query that is a combination of multiple queries, e.g.: query = ";".join([create_table, insert_data, analyze_table]) conn.execute(query) Is this batch executed in a single transaction or multiple transactions? What happens if a query in the middle fails?
From Chain multiple statements within Psycopg2 In SQL chaining refers to the process of linking multiple SQL statements together into a single string, separated by semicolons. This allows you to execute multiple SQL statements at once, without having to execute them individually. For example, you might chain together a SELECT statement to retrieve data from a table, followed by an UPDATE statement to modify the data, and then a DELETE statement to remove it. When using chaining, it is important to note that each statement will be executed in the order they appear in the chain and that the results of one statement can be used in the next one. Additionally, when chaining SQL statements, if any statement in the chain fails, the entire chain will fail and none of the statements will be executed.
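A minimal sketch of what that means in psycopg 3 (this assumes autocommit is on and no bind parameters are used, which psycopg requires for multi-statement strings; the connection string and table name t are placeholders):
import psycopg

conn = psycopg.connect("dbname=test", autocommit=True)
try:
    # The three statements travel as one command string, so PostgreSQL
    # runs them in a single implicit transaction even with autocommit on:
    # if the INSERT fails, the CREATE TABLE is rolled back too.
    conn.execute("CREATE TABLE t (id int); INSERT INTO t VALUES (1); ANALYZE t;")
except psycopg.Error as exc:
    print(f"batch failed, nothing was applied: {exc}")
If you need each statement to succeed or fail independently, issue separate execute() calls instead of one chained string.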
3
2
77,911,884
2024-1-31
https://stackoverflow.com/questions/77911884/python-azure-function-works-in-local-emulator-but-no-http-triggers-found-when-de
I have 5 Azure Functions I'm developing, and 4 of them deploy and work fine, just having this trouble with one of them. I have the Azure Functions Core Tools installed and Azurite running as a VSCode plugin, and am able to run my function with no errors by calling func start --python --verbose from CLI. I can call the HTTP trigger from the local emulator and it returns a correct response. When I deploy to Azure Functions, the logs end in No HTTP triggers found with no errors shown throughout the deploy process. I've been able to download the web package via SCM and confirmed that the files are the same in there as they are in my development environment. What steps can I take to get more verbose information out of the Azure Functions environment to help with identifying the error which is causing the HTTP trigger to fail to deploy to production?
Turns out this was due to a custom environment variable that was missing from production but present in development. I worked that one out on my own; I never found a useful error message pointing to the fact that that was what was happening. If anyone knows how I'd be able to get a meaningful error message out of Azure Functions pointing to such an error, then please share, but I'll leave this answer here for anyone else going through the same thing; hopefully it helps you too.
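One way to make this class of failure loud instead of silent is to validate the required settings at module import time, so the exception (naming exactly what's missing) surfaces in the function's log stream. A minimal sketch, with a hypothetical setting name:
import os

REQUIRED_SETTINGS = ["MY_CUSTOM_SETTING"]  # hypothetical; list your own app settings

missing = [name for name in REQUIRED_SETTINGS if not os.environ.get(name)]
if missing:
    raise RuntimeError(f"Missing application settings: {missing}")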
2
3
77,911,172
2024-1-31
https://stackoverflow.com/questions/77911172/if-you-specify-a-background-color-in-pandas-in-python-number-formatting-is-disa
After sorting the data in pandas, I am saving it to Excel. At this point, I am trying to improve readability by changing the format. I specified the number format and background color at the same time, but only one of the two is actually applied. I'm pretty sure this isn't a bug; it's just that I'm not understanding something. But I don't know where I made a mistake. I would like to receive your help. Below is the example code I wrote and the results of running it. The first code is where I formatted the numbers; it works normally.
df = pd.DataFrame({
    'Name': ['John', 'Isac'],
    'Value': [100000, 300000000]
})

with pd.ExcelWriter('test.xlsx', engine='xlsxwriter') as writer:
    df.to_excel(writer, sheet_name='test', index=False)
    workbook = writer.book
    worksheet = writer.sheets['test']

    format_with_commas = workbook.add_format({'num_format':'#,##0'})
    worksheet.set_column('B:B', 15, format_with_commas)
However, the problem is that if you add code to specify the background color, the number format no longer works. I have included the code and execution results as images.
df = pd.DataFrame({
    'Name': ['John', 'Isac'],
    'Value': [100000, 300000000]
})

styled_df = df.style.set_properties(**{'background-color':'yellow', 'color':'black'})

with pd.ExcelWriter('test.xlsx', engine='xlsxwriter') as writer:
    styled_df.to_excel(writer, sheet_name='test', index=False)
    workbook = writer.book
    worksheet = writer.sheets['test']

    format_with_commas = workbook.add_format({'num_format':'#,##0'})
    worksheet.set_column('B:B', 15, format_with_commas)
If my method is not correct, please let me know where I am wrong. Alternatively, if this code is outdated, please point that out as well.
I think you can use a pandas-styles-only solution, but check Styler.format:
Styler.format is ignored when using the output format Styler.to_excel, since Excel and Python have inherently different formatting structures. However, it is possible to use the number-format pseudo CSS attribute to force Excel permissible formatting. See examples.
So try the number-format pseudo CSS attribute, see the last paragraph:
styled_df = df.style.set_properties(**{'background-color':'yellow', 'color':'black'})

pseudo_css = 'number-format: #,##0'

with pd.ExcelWriter('test.xlsx', engine='xlsxwriter') as writer:
    (styled_df.map(lambda v: pseudo_css, subset=['Value'])
              .to_excel(writer, sheet_name='test', index=False))
For older pandas versions:
styled_df = df.style.set_properties(**{'background-color':'yellow', 'color':'black'})

pseudo_css = 'number-format: #,##0'

with pd.ExcelWriter('test.xlsx', engine='xlsxwriter') as writer:
    (styled_df.applymap(lambda v: pseudo_css, subset=['Value'])
              .to_excel(writer, sheet_name='test', index=False))
3
3
77,910,262
2024-1-31
https://stackoverflow.com/questions/77910262/how-do-i-properly-type-hint-a-decorated-getitem-and-setitem
For example: T = TypeVar("T", bound="CustomDict") class CustomDict(dict): def __init__(self) -> None: super().__init__() class dict_operator: def __init__(self, method: Callable[..., Any]) -> None: self.method = method def __get__(self, instance: T, owner: Type[T]) -> Callable[..., Any]: def wrapper(key: Any, *args: Any, **kwargs: Any) -> Any: results = self.method(instance, key, *args, **kwargs) print("Did something after __getitem__ or __setitem__") return results return wrapper @dict_operator def __getitem__(self, key: Any) -> Any: return super().__getitem__(key) @dict_operator def __setitem__(self, key: Any, value: Any) -> None: super().__setitem__(key, value) Mypy: Signature of "__getitem__" incompatible with supertype "dict"Mypyoverride Signature of "__getitem__" incompatible with supertype "Mapping"Mypyoverride (variable) Any: Any I take it the decorator has altered the overriden methods' signatures but I don't know how to account for this.
The issue isn't the type annotations on __getitem__ and __setitem__. For better or for worse, mypy (currently) doesn't recognise that a __get__ which returns a Callable is, for the majority of cases, a safe override in lieu of a real Callable. The fastest fix is to add a __call__ to your dict_operator class body (mypy Playground 1): class CustomDict(dict): ... class dict_operator: def __init__(self, method: Callable[..., Any]) -> None: ... def __get__(self, instance: T, owner: Type[T]) -> Callable[..., Any]: # This needs to be the same as the return type of `__get__` __call__: Callable[..., Any] @dict_operator def __getitem__(self, key: Any) -> Any: ... # OK @dict_operator def __setitem__(self, key: Any, value: Any) -> None: ... # OK I don't know if you only did this for demonstration purposes, but IMO, you have too much imprecise typing here; Callable[..., Any] in particular is not a very useful type annotation. I don't know what Python version you're using, but if you're able to use typing_extensions instead, you can have access to the latest typing constructs for better static checking. See mypy Playground 2 for a possible way to implement stricter typing.
3
2
77,888,735
2024-1-26
https://stackoverflow.com/questions/77888735/insert-many-to-many-relationship-objects-using-sqlmodel-when-one-side-of-the-rel
I am trying to insert records in a database using SQLModel where the data looks like the following. A House object, which has a color and many locations. Locations will also be associated with many houses. The input is: [ { "color": "red", "locations": [ {"type": "country", "name": "Netherlands"}, {"type": "municipality", "name": "Amsterdam"}, ], }, { "color": "green", "locations": [ {"type": "country", "name": "Netherlands"}, {"type": "municipality", "name": "Amsterdam"}, ], }, ] Here's a reproducible example of what I'm trying to do: import asyncio from typing import List from sqlalchemy.ext.asyncio import create_async_engine from sqlalchemy.orm import sessionmaker from sqlmodel import Field, Relationship, SQLModel, UniqueConstraint from sqlmodel.ext.asyncio.session import AsyncSession DATABASE_URL = "sqlite+aiosqlite:///./database.db" engine = create_async_engine(DATABASE_URL, echo=True, future=True) async def init_db() -> None: async with engine.begin() as conn: await conn.run_sync(SQLModel.metadata.create_all) SessionLocal = sessionmaker( autocommit=False, autoflush=False, bind=engine, class_=AsyncSession, expire_on_commit=False, ) class HouseLocationLink(SQLModel, table=True): house_id: int = Field(foreign_key="house.id", nullable=False, primary_key=True) location_id: int = Field( foreign_key="location.id", nullable=False, primary_key=True ) class Location(SQLModel, table=True): id: int = Field(primary_key=True) type: str # country, county, municipality, district, city, area, street, etc name: str # Amsterdam, Germany, My Street, etc houses: List["House"] = Relationship( back_populates="locations", link_model=HouseLocationLink, ) __table_args__ = (UniqueConstraint("type", "name"),) class House(SQLModel, table=True): id: int = Field(primary_key=True) color: str = Field() locations: List["Location"] = Relationship( back_populates="houses", link_model=HouseLocationLink, ) # other fields... data = [ { "color": "red", "locations": [ {"type": "country", "name": "Netherlands"}, {"type": "municipality", "name": "Amsterdam"}, ], }, { "color": "green", "locations": [ {"type": "country", "name": "Netherlands"}, {"type": "municipality", "name": "Amsterdam"}, ], }, ] async def add_houses(payload) -> List[House]: result = [] async with SessionLocal() as session: for item in payload: locations = [] for location in item["locations"]: locations.append(Location(**location)) house = House(color=item["color"], locations=locations) result.append(house) session.add_all(result) await session.commit() asyncio.run(init_db()) asyncio.run(add_houses(data)) The problem is that when I run this code, it tries to insert duplicated location objects together with the house object. I'd love to be able to use relationship here because it makes accessing house.locations very easy. However, I have not been able to figure out how to keep it from trying to insert duplicated locations. Ideally, I'd have a mapper function to perform a get_or_create location. The closest I've seen to making this possible is SQLAlchemy's association proxy. But looks like SQLModel doesn't support that. Does anybody have an idea on how to achieve this? If you know how to do it using SQLAlchemy instead of SQLModel, I'd be interested in seeing your solution. I haven't started on this project yet, so I might as well using SQLAlchemy if it will make my life easier. 
I've also tried tweaking sa_relationship_kwargs, such as
sa_relationship_kwargs={
    "lazy": "selectin",
    "cascade": "none",
    "viewonly": "true",
}
But that prevents the association entries from being added to the HouseLocationLink table. Any pointers will be much appreciated, even if it means changing my approach altogether. Thanks!
I am writing this solution because you mentioned you are open to using SQLAlchemy. As you mentioned, you need association proxy but you also need "Unique Objects". I have adapted it to function with asynchronous queries (instead of synchronous), aligning with my individual preferences, all without altering the logic significantly. import asyncio from sqlalchemy import UniqueConstraint, ForeignKey, select, text, func from sqlalchemy.orm import DeclarativeBase, mapped_column, Mapped, relationship from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine from sqlalchemy.ext.associationproxy import AssociationProxy, association_proxy class Base(DeclarativeBase): pass class UniqueMixin: cache = {} @classmethod async def as_unique(cls, session: AsyncSession, *args, **kwargs): cache = getattr(session, "_cache", None) if cache is None: session._cache = cache = {} key = cls, cls.unique_hash(*args, **kwargs) if key in cache: return cache[key] with session.no_autoflush: statement = select(cls).where(cls.unique_filter(*args, **kwargs)).limit(1) obj = (await session.scalars(statement)).first() if obj is None: obj = cls(*args, **kwargs) session.add(obj) cache[key] = obj return obj @classmethod def unique_hash(cls, *args, **kwargs): raise NotImplementedError("Implement this in subclass") @classmethod def unique_filter(cls, *args, **kwargs): raise NotImplementedError("Implement this in subclass") class Location(UniqueMixin, Base): __tablename__ = "location" id: Mapped[int] = mapped_column(primary_key=True) name: Mapped[str] = mapped_column() type: Mapped[str] = mapped_column() house_associations: Mapped[list["HouseLocationLink"]] = relationship(back_populates="location") __table_args = (UniqueConstraint(type, name),) @classmethod def unique_hash(cls, name, type): # this is the key for the dict return type, name @classmethod def unique_filter(cls, name, type): # this is how you want to establish the uniqueness # the result of this filter will be the value in the dict return (cls.type == type) & (cls.name == name) class House(Base): __tablename__ = "house" id: Mapped[int] = mapped_column(primary_key=True) name: Mapped[str] = mapped_column() location_associations: Mapped[list["HouseLocationLink"]] = relationship(back_populates="house") locations: AssociationProxy[list[Location]] = association_proxy( "location_associations", "location", # you need this so you can directly add ``Location`` objects to ``House`` creator=lambda location: HouseLocationLink(location=location), ) class HouseLocationLink(Base): __tablename__ = "houselocationlink" house_id: Mapped[int] = mapped_column(ForeignKey(House.id), primary_key=True) location_id: Mapped[int] = mapped_column(ForeignKey(Location.id), primary_key=True) location: Mapped[Location] = relationship(back_populates="house_associations") house: Mapped[House] = relationship(back_populates="location_associations") engine = create_async_engine("sqlite+aiosqlite:///test.sqlite") async def main(): data = [ { "name": "red", "locations": [ {"type": "country", "name": "Netherlands"}, {"type": "municipality", "name": "Amsterdam"}, ], }, { "name": "green", "locations": [ {"type": "country", "name": "Netherlands"}, {"type": "municipality", "name": "Amsterdam"}, ], }, ] async with engine.begin() as conn: await conn.run_sync(Base.metadata.create_all) async with AsyncSession(engine) as session, session.begin(): for item in data: house = House( name=item["name"], locations=[await Location.as_unique(session, **location) for location in item["locations"]] ) session.add(house) 
    async with AsyncSession(engine) as session:
        statement = select(func.count(text("*")), Location)
        assert await session.scalar(statement) == 2
        statement = select(func.count(text("*")), House)
        assert await session.scalar(statement) == 2
        statement = select(func.count(text("*")), HouseLocationLink)
        assert await session.scalar(statement) == 4


asyncio.run(main())

You can notice that the asserts pass, with no violation of the unique constraint and no duplicate inserts. I have left some inline comments which mention the "key" aspects of this code. If you run this code multiple times, you will notice that only new House objects and the corresponding HouseLocationLink rows are added; no new Location objects are added. Only one query is made per key-value pair to populate the cache.
2
2
77,903,321
2024-1-30
https://stackoverflow.com/questions/77903321/importerror-cannot-import-name-mock-s3-from-moto
import os

import boto3
import pytest
from moto import mock_s3


@pytest.fixture(scope='module')
def s3():
    with mock_s3():
        os.environ['AWS_ACCESS_KEY_ID'] = 'test'
        os.environ['AWS_SECRET_ACCESS_KEY'] = 'test'
        os.environ['AWS_DEFAULT_REGION'] = 'us-east-1'
        s3 = boto3.resource('s3')
        s3.create_bucket(Bucket='test_bucket')
        yield s3

This code was working, but is now throwing an exception: Cannot import name mock_s3 from moto. What am I doing wrong?
Simply replace your import of mock_s3 with from moto import mock_aws and use with mock_aws(): instead. Moto was recently bumped to version 5.0, and you were probably running 4.x before. https://github.com/getmoto/moto/blob/master/CHANGELOG.md If you check the change log, you will see that an important breaking change was made: All decorators have been replaced with a single decorator: mock_aws
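For instance, a minimal sketch of the fixture from the question updated for moto >= 5.0 (same test bucket and fake credentials as in the question):

import os

import boto3
import pytest
from moto import mock_aws  # replaces mock_s3 and the other per-service decorators


@pytest.fixture(scope='module')
def s3():
    with mock_aws():
        os.environ['AWS_ACCESS_KEY_ID'] = 'test'
        os.environ['AWS_SECRET_ACCESS_KEY'] = 'test'
        os.environ['AWS_DEFAULT_REGION'] = 'us-east-1'
        s3 = boto3.resource('s3')
        s3.create_bucket(Bucket='test_bucket')
        yield s3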
27
57
77,877,288
2024-1-25
https://stackoverflow.com/questions/77877288/how-to-cache-playwright-python-contexts-for-testing
I am doing some web scraping using playwright-python>=1.41, and have to launch the browser in headed mode (e.g. launch(headless=False)). For CI testing, I would like to somehow cache the headed interactions with Chromium, to enable offline testing:

- First invocation: uses Chromium to make real-world HTTP transactions
- Later invocations: uses Chromium, but all HTTP transactions read from a cache

How can this be done? I can't find any clear answers on how to do this.
You might solve your problem with HAR-file recording:

- Run the first test while recording a HAR file
- Store the HAR file as an artifact, in your repo or similar, in your CI environment
- Run the test again with the recorded HAR file

Here is how to do that with playwright==1.41.1 and pytest-playwright==0.3.3:

import pathlib

import pytest
from playwright.sync_api import Browser, Playwright

CACHE_DIR = pathlib.Path(__file__).parent / "cache"


@pytest.fixture(name="example_har", scope="session")
def fixture_example_har(playwright: Playwright) -> pathlib.Path:
    har_file = CACHE_DIR / "example.har"
    with (
        playwright.chromium.launch(headless=False) as browser,
        browser.new_page() as page,
    ):
        page.route_from_har(har_file, url="*/**", update=True)
        page.goto("https://example.com/")
    return har_file


def test_caching(browser: Browser, example_har: pathlib.Path) -> None:
    with browser.new_context(offline=True) as context:
        page = context.new_page()
        page.route_from_har(example_har, url="*/**")
        page.goto("https://example.com/")
3
3
77,907,578
2024-1-30
https://stackoverflow.com/questions/77907578/how-to-add-a-general-title-with-a-seaborn-objects-plot-facet-plot
I am currently using the seaborn library in Python to create a facetted stacked barplot from a pandas dataframe named averages with columns ['Name', 'Period', 'Value', 'Solver']. Here is the code I use to create the plot I want:

p = so.Plot(data = averages, x = 'Period', y = 'Value', color = 'Name').add(so.Bar(), so.Stack(), suptitle='Inventory levels')
p = p.facet(col='Solver', order=['spse', 'mp2', 'mels'])

I am searching for a way to add a general title to the plot, i.e. a single title above all subplots, like the matplotlib.pyplot.suptitle function does for example. I know that the function seaborn.objects.Plot.label has a title= option, but when I use it, this puts the same title above each subplot of the facetted graph.
You can provide an existing Matplotlib figure or axes for drawing the plot using so.Plot.on. This gives you access to the underlying matplotlib.figure.Figure object to which the suptitle can be added. import matplotlib.pyplot as plt import seaborn.objects as so import pandas as pd df = pd.DataFrame({ "x": [1, 2, 3, 4, 5, 1, 2, 3, 4, 5], "y": [1, 2, 3, 4, 5, 1, 2, 4, 8, 16], "group": ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"], }) fig = plt.Figure() fig.suptitle("Suptitle") ( so.Plot(df, x="x", y="y") .add(so.Line()) .facet(col="group") .on(fig) .plot() )
2
2
77,871,364
2024-1-24
https://stackoverflow.com/questions/77871364/vectorize-a-convex-hull-and-interpolation-loop-on-a-specific-axis-of-an-ndarray
I'm struggling to find an efficient way to implement this interpolated convex hull data treatment. I have a 2D ndarray, call it arr, with shape (2000000,19), containing floats. I have a 1D ndarray, call it w, with shape (19,), also containing floats. What I achieved (and works perfectly except that it's excruciatingly slow) is the following: import numpy as np from scipy.interpolate import interp1d # Sample data arr = np.array([[49.38639913, 49.76769437, 49.66370476, 49.49905455, 49.15242251, 48.0518658 , 45.998071 , 45.31347273, 45.29614113, 45.25281212, 45.0448329 , 44.61154286, 43.72763117, 42.38443203, 41.17121991, 40.48662165, 40.35663463, 39.88001558, 39.55938095], [47.97387359, 47.86121818, 47.69656797, 47.18528571, 46.70000087, 45.39146494, 43.50232035, 43.18168571, 43.82295498, 43.62364156, 43.31167273, 42.88704848, 42.37576623, 41.0585645 , 40.37396623, 39.09142771, 38.79679048, 38.51948485, 38.52815065]]) w = np.array([2.1017, 2.1197, 2.1374, 2.1548, 2.172 , 2.1893, 2.2068, 2.2254, 2.2417, 2.2592, 2.2756, 2.2928, 2.3097, 2.326 , 2.3421, 2.3588, 2.3745, 2.3903, 2.4064]) def upper_andrews_hull(points: np.ndarray): """ Computes the upper half of the convex hull of a set of 2D points. :param points: an iterable sequence of (x, y) pairs representing the points. """ # 2D cross product of OA and OB vectors, i.e. z-component of their 3D cross product. # Returns a positive value, if OAB makes a counter-clockwise turn, # negative for clockwise turn, and zero if the points are collinear. def cross(o, a, b): return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0]) # Reverse the points so that we can pop from the end points = np.flip(points, axis=0) # Build upper hull upper = [] for p in points: while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0: upper.pop() upper.append(p) # Reverse the upper hull upper = np.flip(np.array(upper), axis=0) return upper result = np.empty(arr.shape) for i in range(arr.shape[0]): # Create points, using w as x values, and arr as y values points = np.stack((w, arr[i,:]), axis=1) # Calculate the convex hull around the points hull = upper_andrews_hull(points) # Interpolate the hull interp_function = interp1d(*hull.T) # Store interpolation's result to have the same x references as original points result[i,:] = interp_function(w) I'm pretty sure that there's a way to forego the loop and only use vectorial calculations, but I can't find it (plus, there's the issue that hull doesn't always have the same number of points, so all of the hulls would not be storable in an ndarray. My expected behaviour would be something like result = interpolated_hull(arr, w, axis=0), to apply the operations on the entire arr array without a loop.
What about using numba in your code? This gives me a speed improvement on my machine by a factor of: loop_hulls_numba: ~70 (single core) or 70s to 0.973s loop_hulls_numba_parallel: ~350 (multiple cores) or 70s to 0.223s (depends strongly on the number of CPUs) based on a 2'000'000 by 16 array! Speed tests are performed with timeit, i.e. %timeit -n 3 -r 3 loop_hulls_numba(w, arr_n). numba needs some extra time for the first compilation, which I excluded in the speed test. Generate a bigger array for testing n = int(2e6) arr_n = np.resize(arr, (n, 19)) The numba solution @jit(nopython=True) def cross_numba(o, a, b): return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0]) @jit(nopython=True) def upper_andrews_hull_numba(x: np.ndarray, y: np.ndarray): """ Computes the upper half of the convex hull of a set of 2D points. :param points: an iterable sequence of (x, y) pairs representing the points. """ # 2D cross product of OA and OB vectors, i.e. z-component of their 3D cross product. # Returns a positive value, if OAB makes a counter-clockwise turn, # negative for clockwise turn, and zero if the points are collinear. # Build upper hull upper = [(x[-1], y[-1]), (x[-2], y[-2])] # Reverse the points so that we can pop from the end for p in zip(x[-3::-1], y[-3::-1]): while len(upper) >= 2 and cross_numba(upper[-2], upper[-1], p) <= 0: upper.pop() upper.append(p) # Reverse the upper hull return upper[::-1] @jit(nopython=True) def loop_hulls_numba(w, arr): result = np.zeros(arr.shape) for i in range(arr.shape[0]): # Calculate the convex hull around the points hull = np.array(upper_andrews_hull_numba(w, arr[i,:])) # # Interpolate the hull x, y = hull.T result[i,:] = np.interp(w, x, y) return result Get the maximum out of your CPU by parallelization @jit(nopython=True, parallel=True) def loop_hulls_numba_parallel(w, arr): result = np.zeros(arr.shape) for i in prange(arr.shape[0]): # Calculate the convex hull around the points hull = np.array(upper_andrews_hull_numba(w, arr[i,:])) # # Interpolate the hull x, y = hull.T result[i] = np.interp(w, x, y) return result
2
3
77,878,901
2024-1-25
https://stackoverflow.com/questions/77878901/image-reconstruction-from-predicted-array-padding-same-shows-grid-tiles-in-rec
I have two images, E1 and E3, and I am training a CNN model. In order to train the model, I use E1 as train and E3 as y_train. I extract tiles from these images in order to train the model on tiles. The model, does not have an activation layer, so the output can take any value. So, the predictions for example, preds , have values around preds.max() = 2.35 and preds.min() = -1.77. My problem is that I can't reconstruct the image at the end using preds and I think the problem is the scaling-unscaling of the preds values. If I just do np.uint8(preds) its is almost full of zeros since preds has small values. The image should look like as close as possible to E2 image. import cv2 import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, \ Input, Add from tensorflow.keras.models import Model from PIL import Image CHANNELS = 1 HEIGHT = 32 WIDTH = 32 INIT_SIZE = ((1429, 1416)) def NormalizeData(data): return (data - np.min(data)) / (np.max(data) - np.min(data) + 1e-6) def extract_image_tiles(size, im): im = im[:, :, :CHANNELS] w = h = size idxs = [(i, (i + h), j, (j + w)) for i in range(0, im.shape[0], h) for j in range(0, im.shape[1], w)] tiles_asarrays = [] count = 0 for k, (i_start, i_end, j_start, j_end) in enumerate(idxs): tile = im[i_start:i_end, j_start:j_end, ...] if tile.shape[:2] != (h, w): tile_ = tile tile_size = (h, w) if tile.ndim == 2 else (h, w, tile.shape[2]) tile = np.zeros(tile_size, dtype=tile.dtype) tile[:tile_.shape[0], :tile_.shape[1], ...] = tile_ count += 1 tiles_asarrays.append(tile) return np.array(idxs), np.array(tiles_asarrays) def build_model(height, width, channels): inputs = Input((height, width, channels)) f1 = Conv2D(32, 3, padding='same')(inputs) f1 = BatchNormalization()(f1) f1 = Activation('relu')(f1) f2 = Conv2D(16, 3, padding='same')(f1) f2 = BatchNormalization()(f2) f2 = Activation('relu')(f2) f3 = Conv2D(16, 3, padding='same')(f2) f3 = BatchNormalization()(f3) f3 = Activation('relu')(f3) addition = Add()([f2, f3]) f4 = Conv2D(32, 3, padding='same')(addition) f5 = Conv2D(16, 3, padding='same')(f4) f5 = BatchNormalization()(f5) f5 = Activation('relu')(f5) f6 = Conv2D(16, 3, padding='same')(f5) f6 = BatchNormalization()(f6) f6 = Activation('relu')(f6) output = Conv2D(1, 1, padding='same')(f6) model = Model(inputs, output) return model # Load data img = cv2.imread('E1.tif', cv2.IMREAD_UNCHANGED) img = cv2.resize(img, (1408, 1408), interpolation=cv2.INTER_AREA) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) img = np.array(img, np.uint8) #plt.imshow(img) img3 = cv2.imread('E3.tif', cv2.IMREAD_UNCHANGED) img3 = cv2.resize(img3, (1408, 1408), interpolation=cv2.INTER_AREA) img3 = cv2.cvtColor(img3, cv2.COLOR_BGR2RGB) img3 = np.array(img3, np.uint8) # extract tiles from images idxs, tiles = extract_image_tiles(WIDTH, img) idxs2, tiles3 = extract_image_tiles(WIDTH, img3) # split to train and test data split_idx = int(tiles.shape[0] * 0.9) train = tiles[:split_idx] val = tiles[split_idx:] y_train = tiles3[:split_idx] y_val = tiles3[split_idx:] # build model model = build_model(HEIGHT, WIDTH, CHANNELS) model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001), loss = tf.keras.losses.Huber(), metrics=[tf.keras.metrics.RootMeanSquaredError(name='rmse')]) # scale data before training train = train / 255. val = val / 255. y_train = y_train / 255. y_val = y_val / 255. 
# train history = model.fit(train, y_train, validation_data=(val, y_val), epochs=50) # predict on E2 img2 = cv2.imread('E2.tif', cv2.IMREAD_UNCHANGED) img2 = cv2.resize(img2, (1408, 1408), interpolation=cv2.INTER_AREA) img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2RGB) img2 = np.array(img2, np.uint8) # extract tiles from images idxs, tiles2 = extract_image_tiles(WIDTH, img2) #scale data tiles2 = tiles2 / 255. preds = model.predict(tiles2) #preds = NormalizeData(preds) #preds = np.uint8(preds) # reconstruct predictions reconstructed = np.zeros((img.shape[0], img.shape[1]), dtype=np.uint8) # reconstruction process for tile, (y_start, y_end, x_start, x_end) in zip(preds[:, :, -1], idxs): y_end = min(y_end, img.shape[0]) x_end = min(x_end, img.shape[1]) reconstructed[y_start:y_end, x_start:x_end] = tile[:(y_end - y_start), :(x_end - x_start)] im = Image.fromarray(reconstructed) im = im.resize(INIT_SIZE) im.show() You can find the data here If I use : def normalize_arr_to_uint8(arr): the_min = arr.min() the_max = arr.max() the_max -= the_min arr = ((arr - the_min) / the_max) * 255. return arr.astype(np.uint8) preds = model.predict(tiles2) preds = normalize_arr_to_uint8(preds) then, I receive an image which seems right, but with lines all over. Here is the image I get: This is the image I should take (as close as possible to E2). Note, that I just use a small cnn network for this example, so I can't receive receive much details for the image. But, when I try better model, still I have horizontal and/or vertical lines: UPDATE I found this. In the code above, I use at: # reconstruction process for tile, (y_start, y_end, x_start, x_end) in zip(preds[:, :, -1], idxs): preds[:, :, -1] this is wrong. I must use preds[:, :, :, -1] because preds shape is: (1936, 32, 32, 1). So, If I use preds[:, :, -1] I am receiving the image I posted. If I use preds[:, :, :, -1], which is right , I receive a new image where except from the horizontal lines, I get vertical lines also! UPDATE 2 I am just adding new code where I use another patches and reconstruction functions, but produce the same results (a little better picture). import cv2 import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, \ Input, Add from tensorflow.keras.models import Model from PIL import Image # gpu setup gpus = tf.config.experimental.list_physical_devices('GPU') for gpu in gpus: tf.config.experimental.set_memory_growth(gpu, True) CHANNELS = 1 HEIGHT = 1408 WIDTH = 1408 PATCH_SIZE = 32 STRIDE = PATCH_SIZE//2 INIT_SIZE = ((1429, 1416)) def normalize_arr_to_uint8(arr): the_min = arr.min() the_max = arr.max() the_max -= the_min arr = ((arr - the_min) / the_max) * 255. return arr.astype(np.uint8) def NormalizeData(data): return (data - np.min(data)) / (np.max(data) - np.min(data) + 1e-6) def recon_im(patches: np.ndarray, im_h: int, im_w: int, n_channels: int, stride: int): """Reconstruct the image from all patches. Patches are assumed to be square and overlapping depending on the stride. The image is constructed by filling in the patches from left to right, top to bottom, averaging the overlapping parts. Parameters ----------- patches: 4D ndarray with shape (patch_number,patch_height,patch_width,channels) Array containing extracted patches. If the patches contain colour information, channels are indexed along the last dimension: RGB patches would have `n_channels=3`. 
im_h: int original height of image to be reconstructed im_w: int original width of image to be reconstructed n_channels: int number of channels the image has. For RGB image, n_channels = 3 stride: int desired patch stride Returns ----------- reconstructedim: ndarray with shape (height, width, channels) or ndarray with shape (height, width) if output image only has one channel Reconstructed image from the given patches """ patch_size = patches.shape[1] # patches assumed to be square # Assign output image shape based on patch sizes rows = ((im_h - patch_size) // stride) * stride + patch_size cols = ((im_w - patch_size) // stride) * stride + patch_size if n_channels == 1: reconim = np.zeros((rows, cols)) divim = np.zeros((rows, cols)) else: reconim = np.zeros((rows, cols, n_channels)) divim = np.zeros((rows, cols, n_channels)) p_c = (cols - patch_size + stride) / stride # number of patches needed to fill out a row totpatches = patches.shape[0] initr, initc = 0, 0 # extract each patch and place in the zero matrix and sum it with existing pixel values reconim[initr:patch_size, initc:patch_size] = patches[0]# fill out top left corner using first patch divim[initr:patch_size, initc:patch_size] = np.ones(patches[0].shape) patch_num = 1 while patch_num <= totpatches - 1: initc = initc + stride reconim[initr:initr + patch_size, initc:patch_size + initc] += patches[patch_num] divim[initr:initr + patch_size, initc:patch_size + initc] += np.ones(patches[patch_num].shape) if np.remainder(patch_num + 1, p_c) == 0 and patch_num < totpatches - 1: initr = initr + stride initc = 0 reconim[initr:initr + patch_size, initc:patch_size] += patches[patch_num + 1] divim[initr:initr + patch_size, initc:patch_size] += np.ones(patches[patch_num].shape) patch_num += 1 patch_num += 1 # Average out pixel values reconstructedim = reconim / divim return reconstructedim def get_patches(GT, stride, patch_size): """Extracts square patches from an image of any size. Parameters ----------- GT : ndarray n-dimensional array containing the image from which patches are to be extracted stride : int desired patch stride patch_size : int patch size Returns ----------- patches: ndarray array containing all patches im_h: int height of image to be reconstructed im_w: int width of image to be reconstructed n_channels: int number of channels the image has. 
For RGB image, n_channels = 3 """ hr_patches = [] for i in range(0, GT.shape[0] - patch_size + 1, stride): for j in range(0, GT.shape[1] - patch_size + 1, stride): hr_patches.append(GT[i:i + patch_size, j:j + patch_size]) im_h, im_w = GT.shape[0], GT.shape[1] if len(GT.shape) == 2: n_channels = 1 else: n_channels = GT.shape[2] patches = np.asarray(hr_patches) return patches, im_h, im_w, n_channels def build_model(height, width, channels): inputs = Input((height, width, channels)) f1 = Conv2D(32, 3, padding='same')(inputs) f1 = BatchNormalization()(f1) f1 = Activation('relu')(f1) f2 = Conv2D(16, 3, padding='same')(f1) f2 = BatchNormalization()(f2) f2 = Activation('relu')(f2) f3 = Conv2D(16, 3, padding='same')(f2) f3 = BatchNormalization()(f3) f3 = Activation('relu')(f3) addition = Add()([f2, f3]) f4 = Conv2D(32, 3, padding='same')(addition) f5 = Conv2D(16, 3, padding='same')(f4) f5 = BatchNormalization()(f5) f5 = Activation('relu')(f5) f6 = Conv2D(16, 3, padding='same')(f5) f6 = BatchNormalization()(f6) f6 = Activation('relu')(f6) output = Conv2D(1, 1, padding='same')(f6) model = Model(inputs, output) return model # Load data img = cv2.imread('E1.tif', cv2.IMREAD_UNCHANGED) img = cv2.resize(img, (HEIGHT, WIDTH), interpolation=cv2.INTER_AREA) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) img = np.array(img, np.uint8) img3 = cv2.imread('E3.tif', cv2.IMREAD_UNCHANGED) img3 = cv2.resize(img3, (HEIGHT, WIDTH), interpolation=cv2.INTER_AREA) img3 = cv2.cvtColor(img3, cv2.COLOR_BGR2RGB) img3 = np.array(img3, np.uint8) # extract tiles from images tiles, H, W, C = get_patches(img[:, :, :CHANNELS], stride=STRIDE, patch_size=PATCH_SIZE) tiles3, H, W, C = get_patches(img3[:, :, :CHANNELS], stride=STRIDE, patch_size=PATCH_SIZE) # split to train and test data split_idx = int(tiles.shape[0] * 0.9) train = tiles[:split_idx] val = tiles[split_idx:] y_train = tiles3[:split_idx] y_val = tiles3[split_idx:] # build model model = build_model(PATCH_SIZE, PATCH_SIZE, CHANNELS) model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001), loss = tf.keras.losses.Huber(), metrics=[tf.keras.metrics.RootMeanSquaredError(name='rmse')]) # scale data before training train = train / 255. val = val / 255. y_train = y_train / 255. y_val = y_val / 255. # train history = model.fit(train, y_train, validation_data=(val, y_val), batch_size=16, epochs=20) # predict on E2 img2 = cv2.imread('E2.tif', cv2.IMREAD_UNCHANGED) img2 = cv2.resize(img2, (HEIGHT, WIDTH), interpolation=cv2.INTER_AREA) img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2RGB) img2 = np.array(img2, np.uint8) # extract tiles from images tiles2, H, W, CHANNELS = get_patches(img2[:, :, :CHANNELS], stride=STRIDE, patch_size=PATCH_SIZE) #scale data tiles2 = tiles2 / 255. 
preds = model.predict(tiles2) preds = normalize_arr_to_uint8(preds) reconstructed = recon_im(preds[:, :, :, -1], HEIGHT, WIDTH, CHANNELS, stride=STRIDE) im = Image.fromarray(reconstructed) im = im.resize(INIT_SIZE) im.show() and the image produced: UPDATE 3 After @Lescurel comment, I tried this arcitecture: def build_model(height, width, channels): inputs = Input((height, width, channels)) f1 = Conv2D(32, 3, padding='valid')(inputs) f1 = BatchNormalization()(f1) f1 = Activation('relu')(f1) f2 = Conv2D(16, 3, strides=2,padding='valid')(f1) f2 = BatchNormalization()(f2) f2 = Activation('relu')(f2) f3 = Conv2D(16, 3, padding='same')(f2) f3 = BatchNormalization()(f3) f3 = Activation('relu')(f3) addition = Add()([f2, f3]) f4 = Conv2D(32, 3, padding='valid')(addition) f5 = Conv2D(16, 3, padding='valid')(f4) f5 = BatchNormalization()(f5) f5 = Activation('relu')(f5) f6 = Conv2D(16, 3, padding='valid')(f5) f6 = BatchNormalization()(f6) f6 = Activation('relu')(f6) f7 = Conv2DTranspose(16, 3, strides=2,padding='same')(f6) f7 = BatchNormalization()(f7) f7 = Activation('relu')(f7) f8 = Conv2DTranspose(16, 3, strides=2,padding='same')(f7) f8 = BatchNormalization()(f8) f8 = Activation('relu')(f8) output = Conv2D(1,1, padding='same', activation='sigmoid')(f8) model = Model(inputs, output) return model which uses valid and same padding and the image I receive its: So, the square tiles changed dimensions and shape. So, the problem is how can I use my original architecture and get rid of these tiles!
Now that I understood this: My problem is not the small black box. This is some bad pixel. My problem is the square tile border lines. This is a stitching problem in my opinion. The border lines, come from your approach to crop the full image into tiles and then stitch them back together later. Due to the padding of your convolutions in your model architecture, it is expected to have border artifacts. There are two things you can do now: Train and process on larger tiles and then crop the center tile. This does away with the padding issue by "cutting away the problematic parts" def extract_image_tiles_with_overlap(size, stride, im, center_tile_width, overlap_percentage): im = im[:, :, :CHANNELS] w = h = size s = stride overlap = int(center_tile_width * overlap_percentage / 100) idxs = [(i, (i + h), j, (j + w)) for i in range(0, im.shape[0] - h + 1, s) for j in range(0, im.shape[1] - w + 1, s)] tiles_asarrays = [] for k, (i_start, i_end, j_start, j_end) in enumerate(idxs): tile = im[i_start:i_end, j_start:j_end, ...] if tile.shape[:2] != (h, w): tile_ = tile tile_size = (h, w) if tile.ndim == 2 else (h, w, tile.shape[2]) tile = np.zeros(tile_size, dtype=tile.dtype) tile[:tile_.shape[0], :tile_.shape[1], ...] = tile_ tiles_asarrays.append(tile) return np.array(idxs), np.array(tiles_asarrays), overlap The result looks like this: Of course this does not completely solve the stitching problem entirely. As a next step you can experiment with aggregating overlapping segments of the tiles to get this smoother. This smoothing through aggregation can also be done as a post-processing step. E.g. via a Gaussian Filtering, or morphological operations. iterations = 10 # Adjust the number of iterations as needed reconstructed_dilated = binary_dilation(reconstructed, iterations=iterations) reconstructed_smoothed = binary_erosion(reconstructed_dilated, iterations=iterations) Or you use some OpenCV denoising (make sure to further optimize the hyperparameters). denoised_reconstructed = cv2.fastNlMeansDenoising(reconstructed, h=10, templateWindowSize=7, searchWindowSize=21) Result now is this: Full code to reproduce I changed a few other things (e.g. model saving, etc.) import cv2 import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, \ Input, Add from tensorflow.keras.models import Model from PIL import Image import os from scipy.ndimage import gaussian_filter from scipy.ndimage import binary_dilation, binary_erosion CHANNELS = 1 HEIGHT = 32 WIDTH = 32 INIT_SIZE = ((1429, 1416)) MODEL_SAVE_PATH = 'my_croptile_model.h5' # Change this path to your desired location def NormalizeData(data): return (data - np.min(data)) / (np.max(data) - np.min(data) + 1e-6) def extract_image_tiles_with_overlap(size, stride, im, center_tile_width, overlap_percentage): im = im[:, :, :CHANNELS] w = h = size s = stride overlap = int(center_tile_width * overlap_percentage / 100) idxs = [(i, (i + h), j, (j + w)) for i in range(0, im.shape[0] - h + 1, s) for j in range(0, im.shape[1] - w + 1, s)] tiles_asarrays = [] for k, (i_start, i_end, j_start, j_end) in enumerate(idxs): tile = im[i_start:i_end, j_start:j_end, ...] if tile.shape[:2] != (h, w): tile_ = tile tile_size = (h, w) if tile.ndim == 2 else (h, w, tile.shape[2]) tile = np.zeros(tile_size, dtype=tile.dtype) tile[:tile_.shape[0], :tile_.shape[1], ...] 
= tile_ tiles_asarrays.append(tile) return np.array(idxs), np.array(tiles_asarrays), overlap def build_model(height, width, channels): inputs = Input((height, width, channels)) f1 = Conv2D(32, 3, padding='same')(inputs) f1 = BatchNormalization()(f1) f1 = Activation('relu')(f1) f2 = Conv2D(16, 3, padding='same')(f1) f2 = BatchNormalization()(f2) f2 = Activation('relu')(f2) f3 = Conv2D(16, 3, padding='same')(f2) f3 = BatchNormalization()(f3) f3 = Activation('relu')(f3) addition = Add()([f2, f3]) f4 = Conv2D(32, 3, padding='same')(addition) f5 = Conv2D(16, 3, padding='same')(f4) f5 = BatchNormalization()(f5) f5 = Activation('relu')(f5) f6 = Conv2D(16, 3, padding='same')(f5) f6 = BatchNormalization()(f6) f6 = Activation('relu')(f6) output = Conv2D(1, 1, padding='same')(f6) model = Model(inputs, output) return model # Load data img = cv2.imread('images/E1.tif', cv2.IMREAD_UNCHANGED) img = cv2.resize(img, (1408, 1408), interpolation=cv2.INTER_AREA) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) img = np.array(img, np.uint8) # plt.imshow(img) img3 = cv2.imread('images/E3.tif', cv2.IMREAD_UNCHANGED) img3 = cv2.resize(img3, (1408, 1408), interpolation=cv2.INTER_AREA) img3 = cv2.cvtColor(img3, cv2.COLOR_BGR2RGB) img3 = np.array(img3, np.uint8) # extract tiles from images # idxs, tiles = extract_image_tiles(WIDTH, img) # idxs2, tiles3 = extract_image_tiles(WIDTH, img3) # extract tiles from images with overlap center_tile_width = WIDTH # Adjust as needed overlap_percentage = 60 # Adjust as needed idxs, tiles, overlap = extract_image_tiles_with_overlap(WIDTH, WIDTH // 2, img, center_tile_width, overlap_percentage) # split to train and test data split_idx = int(tiles.shape[0] * 0.9) train = tiles[:split_idx] val = tiles[split_idx:] y_train = tiles[:split_idx] y_val = tiles[split_idx:] # Build or load model if os.path.exists(MODEL_SAVE_PATH): model = tf.keras.models.load_model(MODEL_SAVE_PATH) model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001), loss=tf.keras.losses.Huber(), metrics=[tf.keras.metrics.RootMeanSquaredError(name='rmse')]) # scale data before training train = train / 255. val = val / 255. y_train = y_train / 255. y_val = y_val / 255. # train history = model.fit(train, y_train, validation_data=(val, y_val), epochs=0) # Adjust epochs as needed # Save the model model.save(MODEL_SAVE_PATH) else: model = build_model(HEIGHT, WIDTH, CHANNELS) model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001), loss=tf.keras.losses.Huber(), metrics=[tf.keras.metrics.RootMeanSquaredError(name='rmse')]) # scale data before training train = train / 255. val = val / 255. y_train = y_train / 255. y_val = y_val / 255. # train history = model.fit(train, y_train, validation_data=(val, y_val), epochs=50) # Adjust epochs as needed # Save the model model.save(MODEL_SAVE_PATH) # predict on E2 img2 = cv2.imread('images/E2.tif', cv2.IMREAD_UNCHANGED) img2 = cv2.resize(img2, (1408, 1408), interpolation=cv2.INTER_AREA) img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2RGB) img2 = np.array(img2, np.uint8) # extract tiles from images idxs, tiles2, overlap = extract_image_tiles_with_overlap(WIDTH, WIDTH // 2, img2, center_tile_width, overlap_percentage) # scale data tiles2 = tiles2 / 255. 
preds = model.predict(tiles2) # Check model output range print("Max prediction value:", np.max(preds)) print("Min prediction value:", np.min(preds)) # Invert colors in predictions inverted_preds = 1.0 - preds # Ensure values are within valid range inverted_preds = np.clip(inverted_preds, 0, 1) # Reconstruct inverted predictions with cropping center part of the tile reconstructed = np.zeros((img.shape[0], img.shape[1]), dtype=np.uint8) # Reconstruction process for (y_start, y_end, x_start, x_end), tile in zip(idxs, inverted_preds[:, :, :, -1]): y_end = min(y_end, img.shape[0]) x_end = min(x_end, img.shape[1]) # Calculate the crop size based on the size of the original tile without overlap crop_size = WIDTH # Rescale the tile to the original size (WIDTH + overlap) rescaled_tile = cv2.resize(tile, (WIDTH + overlap, WIDTH + overlap)) # Crop the center part of the rescaled tile center_crop = rescaled_tile[overlap//2:overlap//2+crop_size, overlap//2:overlap//2+crop_size] # Update the reconstructed image directly with the cropped tile reconstructed[y_start:y_end, x_start:x_end] = (center_crop * 255).astype(np.uint8) # Apply binary dilation and erosion to enhance the tile boundaries # Apply non-local means denoising denoised_reconstructed = cv2.fastNlMeansDenoising(reconstructed, h=10, templateWindowSize=7, searchWindowSize=21) im = Image.fromarray(denoised_reconstructed) im = im.resize(INIT_SIZE) im.show() The updated answer ends here. So I think this is mainly about post processing and visualization of your data: I visualize like this now: # predict on E2 img2 = cv2.imread('images/E2.tif', cv2.IMREAD_UNCHANGED) img2 = cv2.resize(img2, (1408, 1408), interpolation=cv2.INTER_AREA) img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2RGB) img2 = np.array(img2, np.uint8) # extract tiles from images idxs, tiles2 = extract_image_tiles(WIDTH, img2) # scale data tiles2 = tiles2 / 255. preds = model.predict(tiles2) # Check model output range print("Max prediction value:", np.max(preds)) print("Min prediction value:", np.min(preds)) # Invert colors in predictions inverted_preds = 1.0 - preds # Ensure values are within valid range inverted_preds = np.clip(inverted_preds, 0, 1) # Reconstruct inverted predictions reconstructed = np.zeros((img.shape[0], img.shape[1]), dtype=np.uint8) # Reconstruction process for tile, (y_start, y_end, x_start, x_end) in zip(inverted_preds[:, :, :, -1], idxs): y_end = min(y_end, img.shape[0]) x_end = min(x_end, img.shape[1]) reconstructed[y_start:y_end, x_start:x_end] = (tile * 255).astype(np.uint8) im = Image.fromarray(reconstructed) im = im.resize(INIT_SIZE) im.show() I can already clearly see the black square. So next I will be trying some thresholding to get enhance that visibility in post-processing. I am thinking of something like that: # Threshold value (adjust as needed) threshold = 0.9 #0.45 # Thresholding binary_output = (reconstructed >= threshold).astype(np.uint8) * 255 # Second visualization im_binary = Image.fromarray(binary_output) im_binary = im_binary.resize(INIT_SIZE) im_binary.show() Which leaves me with this: Not sure how good this scales across your full dataset, but this is definitely in the ball-park for some morphological operators in post-processing.
3
4
77,900,822
2024-1-29
https://stackoverflow.com/questions/77900822/distribution-plot-with-gradient-fill-in-python
I'm trying to create a density plot with a gradient fill in python like this: I've attempted to do so using this code: plt.figure(figsize=(6, 1)) sns.kdeplot(data=df, x='Overall Rating', fill=True, palette='viridis') ...and sns.kdeplot(data=df, x='Overall Rating', fill=True, cmap='viridis') Neither work, both outputting this plot: I've searched all over for a method to do this in python, but no luck. I've tried implementing the methods from this answer by @Diziet Asahi but can't wrap my head around it. Any help would really be appreciated!
You just need to grab the ax: ax = sns.kdeplot(...) and then execute the inner part of the for loop in @Diziet Asahi's solution with that ax. import matplotlib import matplotlib.pyplot as plt import seaborn as sns import numpy as np sns.set_theme(style='white') iris = sns.load_dataset('iris') ax = sns.kdeplot(data=iris, x='petal_length', fill=True) sns.despine() cmap = 'turbo' im = ax.imshow(np.linspace(0, 1, 256).reshape(1, -1), cmap=cmap, aspect='auto', extent=[*ax.get_xlim(), *ax.get_ylim()], zorder=10) path = ax.collections[0].get_paths()[0] patch = matplotlib.patches.PathPatch(path, transform=ax.transData) im.set_clip_path(patch) plt.show()
3
3
77,904,458
2024-1-30
https://stackoverflow.com/questions/77904458/what-does-the-mode-reduced-in-numpy-linalg-qr-do
In the numpy documentation for the qr factorization (https://numpy.org/doc/stable/reference/generated/numpy.linalg.qr.html), the default mode is 'reduced'. I know the theory of the 'complete' mode and expected it to be the default. What does the mode 'reduced' mean mathematically? I could not find any information about that in the documentation.
Let's summarize the answer given by @stéphane-laurent, so people can find the results more easily:

Let $A \in \mathbb{R}^{m \times n}$ with $m \ge n$. The QR-decomposition is defined as $A = QR$, with $Q \in \mathbb{R}^{m \times m}$ orthogonal and $R \in \mathbb{R}^{m \times n}$ upper triangular. If $m > n$, the last $m - n$ columns of $Q$ can be dropped, because the last $m - n$ rows of $R$ are zero. I.e. we multiply the last columns of $Q$ with zeros; therefore, we can just drop these columns to get the same result. In order for the dimensions to work out, we also need to remove the rows that are zero (i.e. the last $m - n$ rows) of $R$. We end up with $A = Q_1 R_1$, where $Q_1 \in \mathbb{R}^{m \times n}$ and $R_1 \in \mathbb{R}^{n \times n}$.
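For illustration, a quick numpy check of the resulting shapes (random 5x3 matrix, so m > n):

import numpy as np

A = np.random.rand(5, 3)  # m = 5, n = 3

Q, R = np.linalg.qr(A, mode='complete')
print(Q.shape, R.shape)    # (5, 5) (5, 3)

Q1, R1 = np.linalg.qr(A, mode='reduced')  # the default
print(Q1.shape, R1.shape)  # (5, 3) (3, 3)

# Both variants reconstruct A
print(np.allclose(Q @ R, A), np.allclose(Q1 @ R1, A))  # True True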
3
1
77,910,049
2024-1-30
https://stackoverflow.com/questions/77910049/how-do-i-get-a-mean-inlcuding-nan-values-in-python
Apparently I have the opposite of everyone else's problem... I would like to take the mean of a pandas dataframe, and I would like to have the result return NaN if there are ANY NaNs in the frame. However, it seems like neither np.mean nor np.nanmean do this. Example code:

import math

import numpy as np
import pandas as pd

b = pd.DataFrame([[1, 2], [math.nan, 4]])
print(b)
print(np.mean(b))
print(np.nanmean(b))

Result:

     0  1
0  1.0  2
1  NaN  4
2.3333333333333335
2.3333333333333335

Expected Result:

nan
It can be done using the pandas DataFrame.mean() function:

b.mean(axis=None, skipna=False)

According to the numpy source, the reason you don't get nan from the mean function is that when numpy sees that the input of mean() is not a multiarray.ndarray, it tries to use the mean function of that data type if possible:

if type(a) is not mu.ndarray:
    try:
        mean = a.mean
    except AttributeError:
        pass
    else:
        return mean(axis=axis, dtype=dtype, out=out, **kwargs)

In your case it is like calling the mean() function of pandas with axis=None, which is like running the following code:

b.mean(axis=None)  # 2.333333

If you insist on using numpy.mean() to calculate the mean, you can convert your DataFrame to a numpy array before calling np.mean():

array = b.to_numpy()
np.mean(array)  # nan
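For reference, the skipna=False call from the first line indeed propagates the NaN:

import math

import pandas as pd

b = pd.DataFrame([[1, 2], [math.nan, 4]])
print(b.mean(axis=None, skipna=False))  # nan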
3
3
77,909,551
2024-1-30
https://stackoverflow.com/questions/77909551/how-to-merge-two-columns-by-the-intersection-of-the-elements-in-each-col
Imagine I have a dataframe like this, with lists of elements in a single string:

data = {'Col1': ["apple, banana, orange", "dog, cat", "python, java, c++"],
        'Col2': ["banana, lemon, blueberry", "bird, cat", "R, fortran"]
        }
df = pd.DataFrame(data)
df

How can I create a Col3 with the intersection of elements in Col1 and Col2? Expected output:

data = {'Col1': ["apple, banana, orange", "dog, cat", "python, java, c++"],
        'Col2': ["banana, lemon, blueberry", "bird, cat", "R, fortran"],
        'Col3': ["banana", "cat", NA]
        }
df = pd.DataFrame(data)
df
Using a list comprehension and set intersection: df['Col3'] = [', '.join(set(a.split(', ')) & set(b.split(', '))) for a,b in zip(df['Col1'], df['Col2'])] Output: Col1 Col2 Col3 0 apple, banana, orange banana, lemon, blueberry banana 1 dog, cat bird, cat cat 2 python, java, c++ R, fortran If you want NAs on empty intersections: df['Col3'] = [x if (x:=', '.join(set(a.split(', ')) & set(b.split(', ')))) else pd.NA for a,b in zip(df['Col1'], df['Col2'])] Output: Col1 Col2 Col3 0 apple, banana, orange banana, lemon, blueberry banana 1 dog, cat bird, cat cat 2 python, java, c++ R, fortran <NA>
2
2
77,904,890
2024-1-30
https://stackoverflow.com/questions/77904890/how-to-edit-telegram-messages-with-monospace-or-html-text-using-telethon
When utilizing Telethon to send Telegram messages with monospace or HTML text in Python, the parse_mode option (for example parse_mode='markdown' or parse_mode='html') works well during initial message sending. However, when attempting to edit a previously sent message, the parse_mode option does not appear to be available. How can I effectively edit a message and incorporate monospace or HTML-formatted text? Example: Sending a message containing monospace string sent_message = client.send_message(To_id, "the authentication code `8632496`", parse_mode='markdown') Then editing the message and sending new text containing a monospace string client(EditMessageRequest( peer=To_id, id=sent_message.id, message="the new authentication code `9832798237`" )) the result: I am aware that using entities is an option. For instance: entities = [MessageEntityCode(offset=9, length=18)] # specifying a code block entity client(EditMessageRequest( peer=To_id, id=sent_message.id, message='This is a monospace string.', no_webpage=True, entities=entities )) However, my text is long and contains multiple monospace strings originating from various sources, for example: text = f"something {variable1}, something {variable2}" Hence, identifying the starting point and length of each monospace string is challenging. I'd greatly appreciate any guidance on how to approach this issue, including methods or best practices for effectively managing monospace or HTML text when editing Telegram messages using Telethon.
client(EditMessageRequest( is an old, not preferred way of editing a message. You should use the client.edit_message method, in your example:

client.edit_message(tst_id, first.id, 'Oeps, the code is `1234789`')

This way, you don't even need to specify parse_mode; it's the same as in the message you're editing. Full code to test this:

from telethon import TelegramClient
import asyncio
import time

client = TelegramClient('anon', '1234567', '0123456789abcdef0123456789abcdef')
client.start()


async def main():
    tst_id = 1234
    first = await client.send_message(tst_id, "the authentication code `8632496`", parse_mode='markdown')
    time.sleep(3)
    second = await client.edit_message(tst_id, first.id, 'Oeps, the code is `1234789`')


loop = asyncio.get_event_loop()
loop.run_until_complete(main())

Result: message changes after 3 seconds:
2
2
77,902,366
2024-1-29
https://stackoverflow.com/questions/77902366/plotting-weighted-histograms-with-weighted-kde-kernel-density-estimate
I want to plot two distributions of data as weighted histograms with weighted kernel density estimate (KDE) plots, side by side. The data (length of DNA fragments, split by categorical variable regions) are integers in (0, 1e8) interval. I can plot the default, unweighted, histograms and KDE without a problem, using the python code below. The code plots histograms for the tiny example of the input data in testdata variable. See the unweighted (default) histograms below. I want to produce a different plot, where the data in the histograms are weighted by length (= the X axis numeric variable). I used weights option (seaborn.histplot — seaborn documentation): weights : vector or key in data If provided, weight the contribution of the corresponding data points towards the count in each bin by these factors. The histograms changed as expected (see weighted histograms plots below). But the KDE (kernel density estimate) lines did not change. Question: How can I change the kernel density estimate (KDE) to reflect the fact that I am using weighted histograms? Unweighted (default) histograms: Weighted histograms: Code with the minimal reproducible example: import io import matplotlib import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns def plot_restriction_digest(df, out_file_base, weights): # Prevent the python icon from showing in the dock when the script is # running: matplotlib.use('Agg') sns.set_theme(style='ticks') f, ax = plt.subplots(figsize=(7, 5)) sns.despine(f) hist = sns.histplot(data=df, x='length', hue='regions', weights=weights, # Normalize such that the total area of the histogram # equals 1: stat='density', # stat='count', # Make all histograms visible, otherwise 'captured' # regions histogram is much smaller than 'all' regions # one: common_norm=False, # Default plots too many very thin bins, which are poorly # visible in pdf format (OK in png). Note that 10 bins is # too crude, and 1000 bins makes too many thin bins: bins=100, # X axis log scale: log_scale=True, # Compute a kernel density estimate to smooth the # distribution and show on the plot as lines: kde=True, ) sns.move_legend(hist, 'upper left') plt.savefig(f'{out_file_base}.pdf') return testdata=""" 1 all 1 all 2 all 2 all 2 all 3 all 4 captured 4 captured 5 captured 5 captured 5 captured 8 captured """ # Default histograms: df = pd.read_csv(io.StringIO(testdata), sep='\s+', header=None, names='length regions'.split()) plot_restriction_digest(df, 'test_tiny', None) # Weighted histograms: df = pd.read_csv(io.StringIO(testdata), sep='\s+', header=None, names='length regions'.split()) plot_restriction_digest(df, 'test_tiny_weighted', 'length') print('Done.') Notes: The two distributions of data are DNA fragment lengths for two types of genomic regions: "all" and "captured", but this is irrelevant to this specific question. The minimal reproducible example illustrates the question. The real data frame has tens of millions of rows, so the histograms and KDE plots are much more smooth an meaningful. The actual data need the X axis to be log-transformed to better tell the two broad distributions apart. I am using these packages and versions: Python 3.11.6 matplotlib-base 3.8.2 py311hfdba5f6_0 conda-forge numpy 1.26.3 py311h7125741_0 conda-forge pandas 2.2.0 py311hfbe21a1_0 conda-forge seaborn 0.13.1 hd8ed1ab_0 conda-forge seaborn-base 0.13.1 pyhd8ed1ab_0 conda-forge
Histograms and kde plots with a very small sample size often aren't a good indicator for how things behave with more suitable sample sizes. In this case, the default bandwidth seems too wide for this particular dataset. The bandwidth can be scaled via the bw_adjust parameter of the kdeplot. When creating a histplot, parameters ("keywords") for the kde can be provided via the kde_kws dictionary. In my test, to both test weights and a log scale, I used a few x values, and gave each a weight of 1, except for two of them, giving them a really high weight. A log scale can be mimicked by taking the log of the x values. The axis will show the log values, which will be less intuitive than the original values. For small data sets and integer weights, np.repeat can create a dataset with every value repeated. This doesn't work for very large datasets due to memory constraints. To quickly test settings and behavior for the real dataset, waiting time can be reduced by taking a random sample (e.g. df.sample(10000)). The test code below indicates that both the weights and the log scale seem to work as expected. One difference is that the default bandwidth is different in both cases; a different bw_scale is used to compensate. import matplotlib.pyplot as plt import seaborn as sns import pandas as pd import numpy as np df = pd.DataFrame({'x': [1, 10, 20, 30, 40, 50, 60], 'w': [1, 10, 1, 1, 1, 10, 1]}) fig, (ax1, ax2) = plt.subplots(nrows=2, figsize=(9, 6)) sns.histplot(df, x='x', bins=10, weights='w', kde=True, log_scale=True, stat='density', kde_kws={'bw_adjust': 0.3}, ax=ax1) ax1.set_xticks(df['x'].to_numpy(), df['x'].to_numpy()) ax1.set_title('sns.histplot with log scale, weights and kde') sns.histplot(x=np.log(np.repeat(df['x'], df['w'])), bins=10, kde=True, stat='density', kde_kws={'bw_adjust': 0.55}, ax=ax2) ax2.set_title('mimicing scale and weights') plt.tight_layout() plt.show()
4
2
77,908,410
2024-1-30
https://stackoverflow.com/questions/77908410/how-can-i-drop-columns-with-zero-variance-from-a-numpy-array
So using the following example:

import numpy as np

X = np.array([[1, 10, np.nan, 0],
              [2, 10, np.nan, 0],
              [3, 10, np.nan, 0]])

I can drop the all-NaN or all-0 columns like so:

X = X[:, ~np.all(np.isnan(X), axis=0)]
X = X[:, ~np.all(X == 0, axis=0)]

I would also like to drop the zero-variance columns. Is there a good way to do so using the same concise notation? Thanks!
You can:

import numpy as np

X[:, np.var(X, axis=0) != 0]

array([[ 1., nan],
       [ 2., nan],
       [ 3., nan]])

This calculates the variance of each column (axis=0) and then keeps the ones whose variance is not equal to 0.
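If you want to fold all three checks (all-NaN, all-zero, zero variance) into a single mask, here is one possible sketch (note that np.nanvar emits a RuntimeWarning for the all-NaN column, which the first condition filters out anyway):

import numpy as np

X = np.array([[1, 10, np.nan, 0],
              [2, 10, np.nan, 0],
              [3, 10, np.nan, 0]])

keep = (~np.all(np.isnan(X), axis=0)        # drop all-NaN columns
        & ~np.all(X == 0, axis=0)           # drop all-zero columns
        & (np.nanvar(X, axis=0) != 0))      # drop zero-variance columns (NaNs ignored)
X = X[:, keep]
print(X)  # [[1.], [2.], [3.]]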
2
2
77,888,944
2024-1-26
https://stackoverflow.com/questions/77888944/how-do-you-pull-aruba-reports-using-api-calls-via-python
I need to login to aruba appliance and get report data using the following api at aruba-airwave endpoint. I have this code so far: import xml.etree.ElementTree as ET # libxml2 and libxslt import requests # HTTP requests import csv import sys # --------------------------------------------------------------------------- # Constants # --------------------------------------------------------------------------- # Login/password for Airwave (read-only account) LOGIN = 'operator' PASSWD = 'verylongpasswordforyourreadonlyaccount' # URL for REST API LOGIN_URL = 'https://aruba-airwave.example.com/LOGIN' AP_LIST_URL = 'https://aruba-airwave.example.com/report_detail?id=100' # Delimiter for CSV output DELIMITER = ';' # HTTP headers for each HTTP request HEADERS = { 'Content-Type' : 'application/x-www-form-urlencoded', 'Cache-Control' : 'no-cache' } # --------------------------------------------------------------------------- # Fonctions # --------------------------------------------------------------------------- def open_session(): """Open HTTPS session with login""" ampsession = requests.Session() data = 'credential_0={0}&credential_1={1}&destination=/&login=Log In'.format(LOGIN, PASSWD) loginamp = ampsession.post(LOGIN_URL, headers=HEADERS, data=data) return {'session' : ampsession, 'login' : loginamp} def get_ap_list(session): output=session.get(AP_LIST_URL, headers=HEADERS, verify=False) ap_list_output=output.content return ap_list_output def main(): session=open_session() get_ap_list(session) main() I get this error: get() takes no keyword arguments any ideas what I am doing wrong here or suggestion for another method?
The problem you are running into is that your open_session() function returns a dict, and dict.get() takes no keyword arguments: it only looks up a key, which is why session.get(AP_LIST_URL, headers=HEADERS, verify=False) raises that error. Since your session parameter is a dict in this case, if you want to use the elements in session, you need to access them by the keys you defined in open_session():

# session['session'].get(include your arguments for the get request here)
# or
# session['login'].get()
# although the data attributed to your already obtained session
# should be handled by the session, hence storing the login result
# isn't really necessary unless you are checking to see if the
# request was successful, but that can be done other ways without
# actually storing the result.


def get_ap_list(session):
    output = session['session'].get(AP_LIST_URL, headers=HEADERS, verify=False)
    ap_list_output = output.content
    return ap_list_output

I haven't verified that this code works, however it does address the error you are currently receiving.
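As a design note, a sketch of an alternative (untested, reusing the LOGIN, PASSWD, LOGIN_URL, HEADERS and AP_LIST_URL constants from the question) that sidesteps the dict entirely by returning the Session itself:

import requests


def open_session():
    """Open an HTTPS session, log in, and return the Session object itself."""
    ampsession = requests.Session()
    data = 'credential_0={0}&credential_1={1}&destination=/&login=Log In'.format(LOGIN, PASSWD)
    ampsession.post(LOGIN_URL, headers=HEADERS, data=data)
    return ampsession


def get_ap_list(session):
    # session is now a requests.Session, so this calls requests.Session.get
    output = session.get(AP_LIST_URL, headers=HEADERS, verify=False)
    return output.content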
2
2
77,901,985
2024-1-29
https://stackoverflow.com/questions/77901985/how-do-i-add-a-message-back-onto-the-queue-from-a-service-bus-triggered-python-a
I want my Service Bus triggered Python Azure Function to make the message it has retrieved from the queue available on the queue again to be retried. I want this to happen without having to raise an exception in the function. However as per the documentation: In non-C# functions, exceptions in the function results in the runtime calls abandonAsync in the background. If no exception occurs, then completeAsync is called in the background. the completeAsync is going to be called if the function returns successfully. I am aware that the Service Bus Receiver has a function that allows you to abandon a message so that it is made available on the queue again. But I don't believe this can work in a situation when the Azure Function has already retrieved the message for you. Also, the ServiceBusMessage object that is received in the Azure function doesn't seem to have a method that lets you 'abandon' or otherwise return the message to the queue. Most other questions regarding this issue online seem to be specific to C#, so I've not had much look trying to find an answer. My Azure function is calling a third party API that unfortunately is prone to erroring which is outside of my control. However, when looking at error reports, I don't want this Azure Function to be always be at the top of the list of failing functions. Having to raise an exception in my function when this API call fails seems excessive when it is not an unexpected outcome. I would like to return the function as a success, since from my perspective there is nothing wrong with the function, but the message on the queue needs to be tried again.
The short version is - this can't be done from the Python language worker at this time. Currently, only the .NET Function workers (both in-proc and isolated) are able to bind to the ServiceBusMessageActions parameter needed to explicitly complete messages. The only way for other language workers to complete a message is for processing to succeed or for it to fail enough times that the delivery count is exceeded or one of the other service-side DLQ actions are triggered and the message is dead-lettered. Work is planned to bring the message actions functionality to the other language workers but, to my knowledge, there has not yet been a timeline shared.
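For completeness, a minimal sketch of the exception-based workaround this implies, using the Python v2 programming model; the queue name, the connection setting name, and the call_third_party_api helper are hypothetical placeholders:

import azure.functions as func

app = func.FunctionApp()


def call_third_party_api(body: str) -> bool:
    # hypothetical stand-in for the flaky third-party call
    ...


@app.service_bus_queue_trigger(arg_name="msg",
                               queue_name="myqueue",
                               connection="ServiceBusConnection")
def process(msg: func.ServiceBusMessage) -> None:
    body = msg.get_body().decode("utf-8")
    if not call_third_party_api(body):
        # Raising is currently the only way for the Python worker to abandon
        # the message; the runtime abandons it in the background and it becomes
        # available on the queue again until the delivery count is exceeded.
        raise RuntimeError("third-party API failed; message will be retried")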
3
1
77,906,150
2024-1-30
https://stackoverflow.com/questions/77906150/how-to-modify-a-pass-in-value-as-a-not-nonlocal-variable
I am learning to use a python bytecode preprocessing mechanism. However it complains when processing these lines of code:

def q_sample(self, x_start, t, noise=None):
    noise = default(noise, lambda: torch.randn_like(x_start))
    return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start +
            extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise)

It said

RuntimeError: Preprocessing issues detected: Nonlocal variables are not supported but q_sample() defined in /opt/dev/training/stable_diffusion/ldm/models/diffusion/ddpm.py:467 defines nonlocal variable: x_start

I don't understand why x_start could be nonlocal here. If it is, how can I modify it so that it is not a nonlocal variable? Thanks!
Because there is a nested function (the lambda) which has access to a variable (x_start) from the outer function q_sample()'s namespace. To solve that, you can make x_start local to the lambda by passing it as the default parameter of the lambda, like:

def q_sample(self, x_start, t, noise=None):
    noise = default(noise, lambda x_start=x_start: torch.randn_like(x_start))

Here is simplified code for inspection:

def outer(a):
    return lambda: print(a)


closure_function = outer(10)
print(closure_function.__closure__)
print(closure_function.__code__.co_freevars)

output:

(<cell at 0x102c422c0: int object at 0x10422e718>,)
('a',)
3
2
77,890,446
2024-1-27
https://stackoverflow.com/questions/77890446/how-to-call-via-google-places-apinew-text-search-id-only-in-python
According to the documentation here, if I ask just for place_id, the call should be free. I tried various versions of the call, but all of them either did not work or gave me not just the id, but also other basic info such as address, location, rating, etc. Bard told me that it is ok and, as I asked just for the id, the calls will really cost nothing. However it is not possible to see the cost of each call, and based on the whole-day billing I am afraid I also paid for these calls. Should this really be free, and even though address, location, etc. should be priced as basic info, is it actually given for free? Or should I change something in my code? My code:

def find_place(query, api_key):
    base_url = "https://maps.googleapis.com/maps/api/place/textsearch/json"
    params = {
        "query": query,
        "inputtype": "textquery",
        "fieldMask": "place_id",  # Only retrieve the place ID
        "key": api_key  # Include API key
    }
    response = requests.get(base_url, params=params)
    return response.json()
Finally I found out how to do that properly. Now the response contains only the needed field(s):

import json

import requests


def find_place(query, api_key):
    # Define the API endpoint
    url = 'https://places.googleapis.com/v1/places:searchText'

    # Define the headers
    headers = {
        'Content-Type': 'application/json',
        'X-Goog-Api-Key': api_key,  # Replace 'API_KEY' with your actual Google Places API key
        'X-Goog-FieldMask': 'places.id'
    }

    # Define the data payload for the POST request
    data = {
        'textQuery': query
    }

    # Make the POST request
    response = requests.post(url, headers=headers, json=data)

    # Check if the request was successful
    if response.status_code == 200:
        # Process the response
        print(response.json())
    else:
        print(f"Error: {response.status_code}, {response.text}")

    return json.loads(response.text)
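A hypothetical usage sketch, assuming the request succeeds and returns at least one result; with the places.id field mask, the response body should only contain the places list:

result = find_place("Eiffel Tower, Paris", api_key)
place_id = result["places"][0]["id"]  # e.g. 'ChIJ...'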
2
3
77,906,670
2024-1-30
https://stackoverflow.com/questions/77906670/remove-duplicates-in-df-and-convert-into-a-json-obj-in-python
I have a df something like below:

Name    Series
=============================
A       A1
B       B1
A       A2
A       A1
B       B2

I need to convert the series to a list which should be assigned to each Name, like a dict or json obj, as something like below:

{
  "A": ["A1", "A2"],
  "B": ["B1", "B2"]
}

So far I have tried using groupby, but it just groups everything into a grouped dataframe, not a dict:

test = df.groupby("Series")[["Name"]].apply(lambda x: x)

The above code gives an output as a df like:

Series    Name
A      0  A1
       2  A2
       3  A1
B      1  B1
       4  B2

Any help is much appreciated. Thanks,
First drop_duplicates to ensure having unique rows, then groupby.agg as list:

out = df.drop_duplicates().groupby('Name')['Series'].agg(list).to_dict()

Or with unique:

out = df.groupby('Name')['Series'].agg(lambda x: x.unique().tolist()).to_dict()

Output:

{'A': ['A1', 'A2'], 'B': ['B1', 'B2']}

If you have other columns, ensure to only keep those of interest:

out = (df[['Name', 'Series']].drop_duplicates()
         .groupby('Name')['Series'].agg(list).to_dict()
       )

For sorting the lists:

out = (df.groupby('Name')['Series']
         .agg(lambda x: sorted(x.unique().tolist())).to_dict()
       )

Example:

# input
  Name Series
0    A     Z1
1    B     B1
2    A     A2
3    A     Z1
4    B     B2

# output
{'A': ['A2', 'Z1'], 'B': ['B1', 'B2']}
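And if you need an actual JSON string rather than a python dict, the standard library handles the final step:

import json

json_obj = json.dumps(out)
print(json_obj)  # {"A": ["A1", "A2"], "B": ["B1", "B2"]}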
2
1
77,906,193
2024-1-30
https://stackoverflow.com/questions/77906193/the-behavior-of-numpy-fromfunction
I was trying to create different arrays using Numpy's fromfunction(). It was working fine until I faced this issue. I tried to make an array of ones using fromfunction() (I know I can create it using ones() and full()) and here is the issue:

array = np.fromfunction(lambda i, j: 1, shape=(2, 2), dtype=float)
print(array)

Surprisingly, the output of this code is this:

1

which is expected to be:

[[1. 1.]
 [1. 1.]]

When I change the input function by adding zero times i, it works just fine:

array = np.fromfunction(lambda i, j: i*0 + 1, shape=(2, 2), dtype=float)
print(array)

The output of this code is:

[[1. 1.]
 [1. 1.]]

My main question is: how does fromfunction() actually behave? I passed the same function with 2 different representations and the output is completely different.
The callable function is passed two arrays, not repeatedly called with two numbers: i=[ [ 0.0, 0.0 ], [ 1.0, 1.0 ] ] j=[ [ 0.0, 1.0 ], [ 0.0, 1.0 ] ] It returns what it is asked to provide ... just ONCE ... to be the ultimate result of np.fromfunction. So an input of 1 returns just 1 whereas an input of i*0+1 returns [ [ 0.0, 0.0 ], [ 1.0, 1.0 ] ] * 0 + 1 which is the (broadcast) array [ [ 1.0, 1.0 ], [ 1.0, 1.0 ] ]
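A quick way to see this (a minimal sketch): np.fromfunction essentially evaluates function(*np.indices(shape, dtype=dtype)), i.e. a single call with coordinate arrays.
import numpy as np

i, j = np.indices((2, 2), dtype=float)   # the two arrays fromfunction builds
print((lambda i, j: 1)(i, j))            # 1 -- the scalar comes back unchanged
print((lambda i, j: i * 0 + 1)(i, j))    # [[1. 1.] [1. 1.]] via broadcasting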
3
1
77,896,596
2024-1-28
https://stackoverflow.com/questions/77896596/why-arent-the-environment-variables-working-inside-my-django-docker-container
I'm encountering an issue where Django doesn't seem to be reading environment variables when running in a Docker-compose environment. Strangely, the environment variables work fine for PostgreSQL but not for Django. I've tried both the env_file and environment options in my Docker-compose file, and I've also experimented with django-dotenv without success. Here's a snippet of my code: print(f" {os.environ.get('SECRET_KEY')} {os.environ.get('DEBUG')} {os.environ.get('DATABASE_NAME')} ") # The above print statement outputs None for all environment variables SECRET_KEY = os.environ.get("SECRET_KEY") # ... version: '3.9' services: db: # ... web: build: ./src command: gunicorn myapp.wsgi:application --access-logfile /logs/gunicorn/access.log --error-logfile /logs/gunicorn/error.log --bind 0.0.0.0:8000 volumes: # ... # env_file: #not working # - .env environment: - SECRET_KEY=${SECRET_KEY} - DATABASE_NAME=${DATABASE_NAME} # ... (other environment variables) nginx: # ... SECRET_KEY=3df-81ioo^5cx(p9cl$)s%m3mlu3t7*yh#1lr5h0po4_sab3*5 DATABASE_NAME=mydb DATABASE_USER=postgres DATABASE_PASS=postgres DATABASE_HOST=db DATABASE_PORT=5432 DEBUG=0
I believe I've identified the root cause of the issue in my Dockerfile. The problem lies in the sequence of commands during the Docker image build process. Specifically, the Dockerfile attempts to run python manage.py collectstatic --noinput and python manage.py migrate at image build time, when the environment variables from Docker Compose do not exist yet (they are only injected when the container starts). This results in errors during the build process. To address this, I propose a solution: remove the collectstatic and migrate commands from the Dockerfile and instead include them in the Docker Compose configuration. Adjust the command parameter in the docker-compose.yml (in the web service): command: "sh -c 'python manage.py migrate && python manage.py collectstatic --noinput && gunicorn -c conf/gunicorn.py techgeeks.wsgi:application'" Here is the previous version of my Dockerfile: # set work directory WORKDIR /app # set environment variables ENV PYTHONDONTWRITEBYTECODE 1 ENV PYTHONUNBUFFERED 1 # install dependencies # RUN pip install --upgrade pip COPY . . RUN pip install --upgrade pip RUN pip install -r requirements.txt # The following two lines caused issues, so they were removed RUN python manage.py collectstatic --noinput RUN python manage.py migrate
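A common alternative (a sketch with a hypothetical file name, not the answer's exact setup) is an entrypoint script, which also runs at container start, when the Compose environment variables are visible:
#!/bin/sh
# entrypoint.sh -- executed at container start, not at image build time
set -e
python manage.py migrate
python manage.py collectstatic --noinput
exec "$@"
Make it executable (chmod +x entrypoint.sh), add COPY entrypoint.sh / and ENTRYPOINT ["/entrypoint.sh"] to the Dockerfile, and keep the gunicorn invocation as the Compose command.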
3
2
77,905,231
2024-1-30
https://stackoverflow.com/questions/77905231/calculating-head-to-head-records-in-a-dataframe-based-on-target-values-in-python
I have a Python script that processes tennis match data stored in a Pandas DataFrame (tennis_data_processed). Each row represents a single match from 2010 to 2023, including details about the tournament, match, and the two players involved. There's also a target variable that indicates whether Player1 won (1) or Player1 lost (0). I'm attempting to add two new features, player1_h2h and player2_h2h, which represent the head-to-head record of the players in each match. The idea is to count the number of wins each player has against the other player before the current match. The code I've implemented works well when the target is 1 (indicating Player1 won). However, there's an issue when the target is 0 (indicating Player1 lost and therefore Player2 won). In this case, the head-to-head record is not updating correctly for the subsequent match, as it adds the win to the player that lost. The table I have before trying to create the h2h features is: tourney_date player1_id player2_id target 2012-01-16 A B 1 2012-01-16 C D 0 2012-03-27 B A 1 2012-03-27 D C 0 2012-04-29 A B 1 The table I want as a result (I'll show what it should look like for a head-to-head between two specific players, but it should be done for all matches): tourney_date player1_id player2_id target player1_h2h player2_h2h 2012-01-16 A B 0 0 0 2012-01-27 A B 0 0 1 2012-03-14 B A 1 2 0 2015-01-20 A B 0 0 3 2020-10-07 B A 1 4 0 2020-10-15 A B 1 0 5 2020-10-15 B A 1 5 1 To do this I have the following code: def calculate_head2head(row, player1_col, player2_col, target_col): # Identify player1 and player2 based on the target player1_id = row[player1_col] if row[target_col] == 1 else row[player2_col] player2_id = row[player2_col] if row[target_col] == 1 else row[player1_col] # Identify the player who won in the previous row prev_target = 1 - row[target_col] # Switch 1 to 0 and vice versa prev_won_player_id = row[player2_col] if prev_target == 1 else row[player1_col] # Filter relevant matches for head-to-head calculation matches = tennis_data_processed[ ((tennis_data_processed[player1_col] == player1_id) & (tennis_data_processed[player2_col] == player2_id)) | ((tennis_data_processed[player1_col] == player2_id) & (tennis_data_processed[player2_col] == player1_id)) ] # Count the number of wins for the players player1_wins = matches[(matches[target_col] == 1) & (matches['tourney_date'] < row['tourney_date'])].shape[0] player2_wins = matches[(matches[target_col] == 0) & (matches['tourney_date'] < row['tourney_date'])].shape[0] # Adjust wins if player1 is now in player2 column if row[target_col] == 0: player1_wins, player2_wins = player2_wins, player1_wins prev_won_player_id = player2_id # Update the previous winner to player2 prev_matches = tennis_data_processed.loc[ (tennis_data_processed.index < row.name) & (tennis_data_processed[target_col] == prev_target) ].sort_values(by='tourney_date', ascending=False) if not prev_matches.empty: if row['tourney_date'] > prev_matches.iloc[0]['tourney_date']: if prev_target == 0: player2_wins += 1 else: player1_wins += 1 return player1_wins, player2_wins # Apply the function row-wise to calculate head-to-head records tennis_data_processed[['player1_h2h', 'player2_h2h']] = tennis_data_processed.apply( lambda row: calculate_head2head(row, 'player1_id', 'player2_id', 'target'), axis=1, result_type='expand' ) But with this code the resulting DataFrame is: tourney_date player1_id player2_id target player1_h2h player2_h2h 2012-01-16 A B 0 0 0 2012-01-27 A B 0 1 0 2012-03-14 B A 1 0 2 2015-01-20 A B 0 2 1 2020-10-07 B A 1 1 3 2020-10-15 A B 1 4 1 With my code, when the target is 0 (indicating Player1 lost and therefore Player2 won), the head-to-head record is being updated for the player that lost the previous match. And when target is 1, it adds the win to the correct player.
Code: since it is a dataset with mixed data from multiple players, I created a function to make it possible to use groupby. Of course, there may be better ways to do this. import pandas as pd import numpy as np def get_h2h(df): cond = df['target'].eq(1) players = pd.concat([df['player1_id'], df['player2_id']]).unique() tmp = pd.Series(np.where(cond, df['player1_id'], df['player2_id']), index=df.index) cond2 = df['player1_id'].eq(players[0]) arr1 = np.where(cond2, tmp.eq(players[0]).shift().fillna(0).cumsum(), tmp.eq(players[1]).shift().fillna(0).cumsum()) arr2 = np.where(cond2, tmp.eq(players[1]).shift().fillna(0).cumsum(), tmp.eq(players[0]).shift().fillna(0).cumsum()) return df.assign(player1_h2h=arr1, player2_h2h=arr2) grp = pd.MultiIndex.from_arrays(np.sort(df[['player1_id', 'player2_id']]).T) out = df.groupby(grp, group_keys=False).apply(get_h2h) out: tourney_date player1_id player2_id target player1_h2h player2_h2h 0 2012-01-16 A B 0 0 0 1 2012-01-27 A B 0 0 1 2 2012-03-14 B A 1 2 0 3 2015-01-20 A B 0 0 3 4 2020-10-07 B A 1 4 0 5 2020-10-15 A B 1 0 5 6 2020-10-15 B A 1 5 1 Check with another example: import pandas as pd import numpy as np data1 = {'tourney_date': ['2012-01-16', '2012-01-27', '2012-03-14', '2012-03-15', '2012-03-16', '2012-03-17', '2015-01-20', '2020-10-07', '2020-10-15', '2020-10-15'], 'player1_id': ['A', 'A', 'B', 'C', 'C', 'C', 'A', 'B', 'A', 'B'], 'player2_id': ['B', 'B', 'A', 'D', 'D', 'D', 'B', 'A', 'B', 'A'], 'target': [0, 0, 1, 1, 1, 1, 0, 1, 1, 1]} df = pd.DataFrame(data1) df: tourney_date player1_id player2_id target 0 2012-01-16 A B 0 1 2012-01-27 A B 0 2 2012-03-14 B A 1 3 2012-03-15 C D 1 <-- C & D 4 2012-03-16 C D 1 <-- C & D 5 2012-03-17 C D 1 <-- C & D 6 2015-01-20 A B 0 7 2020-10-07 B A 1 8 2020-10-15 A B 1 9 2020-10-15 B A 1 apply code: grp = pd.MultiIndex.from_arrays(np.sort(df[['player1_id', 'player2_id']]).T) out = df.groupby(grp, group_keys=False).apply(get_h2h) out: tourney_date player1_id player2_id target player1_h2h player2_h2h 0 2012-01-16 A B 0 0 0 1 2012-01-27 A B 0 0 1 2 2012-03-14 B A 1 2 0 3 2012-03-15 C D 1 0 0 <-- C & D 4 2012-03-16 C D 1 1 0 <-- C & D 5 2012-03-17 C D 1 2 0 <-- C & D 6 2015-01-20 A B 0 0 3 7 2020-10-07 B A 1 4 0 8 2020-10-15 A B 1 0 5 9 2020-10-15 B A 1 5 1
3
2
77,903,414
2024-1-30
https://stackoverflow.com/questions/77903414/pip-install-mysqlclient-fails-with-call-to-undeclared-function-error
I am currently on macOS 14.3 on Apple silicon. I am trying to install mysqlclient using pip. I already have mysql package installed using brew install mysql pkg-config but my mysqlclient installation is failing due to the errors mentioned below: I am trying to run pip install mysqlclient I am getting the following error: building 'MySQLdb._mysql' extension creating build/temp.macosx-14.1-arm64-cpython-311 creating build/temp.macosx-14.1-arm64-cpython-311/src creating build/temp.macosx-14.1-arm64-cpython-311/src/MySQLdb clang -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -I/opt/homebrew/opt/mysql-client/include "-Dversion_info=(2, 2, 1, 'final', 0)" -D__version__=2.2.1 -I/Users/avishekde/.pyenv/versions/3.11.6/include/python3.11 -c src/MySQLdb/_mysql.c -o build/temp.macosx-14.1-arm64-cpython-311/src/MySQLdb/_mysql.o -I/opt/homebrew/Cellar/mysql-client/8.3.0/include/mysql -std=c99 src/MySQLdb/_mysql.c:527:9: error: call to undeclared function 'mysql_ssl_set'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] mysql_ssl_set(&(self->connection), key, cert, ca, capath, cipher); ^ src/MySQLdb/_mysql.c:527:9: note: did you mean 'mysql_close'? /opt/homebrew/Cellar/mysql-client/8.3.0/include/mysql/mysql.h:797:14: note: 'mysql_close' declared here void STDCALL mysql_close(MYSQL *sock); ^ src/MySQLdb/_mysql.c:1795:9: error: call to undeclared function 'mysql_kill'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] r = mysql_kill(&(self->connection), pid); ^ src/MySQLdb/_mysql.c:1795:9: note: did you mean 'mysql_ping'? /opt/homebrew/Cellar/mysql-client/8.3.0/include/mysql/mysql.h:525:13: note: 'mysql_ping' declared here int STDCALL mysql_ping(MYSQL *mysql); ^ src/MySQLdb/_mysql.c:2011:9: error: call to undeclared function 'mysql_shutdown'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] r = mysql_shutdown(&(self->connection), SHUTDOWN_DEFAULT); ^ 3 errors generated. error: command '/usr/bin/clang' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for mysqlclient Failed to build mysqlclient ERROR: Could not build wheels for mysqlclient, which is required to install pyproject.toml-based projects The following is the output from pkg-config --cflags --libs mysqlclient -I/opt/homebrew/Cellar/mysql-client/8.3.0/include/mysql -L/opt/homebrew/Cellar/mysql-client/8.3.0/lib -lmysqlclient
You can download the source code, make some modifications, then build and install. Here are the details: https://github.com/PyMySQL/mysqlclient/issues/688
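The build errors come from C APIs (mysql_ssl_set, mysql_kill, mysql_shutdown) that MySQL 8.3 removed from the client library, so mysqlclient 2.2.1 no longer compiles against it. One hedged workaround sketch, building against the 8.0 client instead of patching the source (the formula name and the exact fix are assumptions here; the linked issue has the authoritative patch):
# assumption: the 8.0 client still ships the APIs removed in 8.3
brew install mysql-client@8.0 pkg-config
export PKG_CONFIG_PATH="$(brew --prefix mysql-client@8.0)/lib/pkgconfig"
pip install mysqlclient
Alternatively, clone the repo, guard or delete those three calls in src/MySQLdb/_mysql.c, and run pip install . from the checkout, as described in the issue.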
2
5
77,872,454
2024-1-24
https://stackoverflow.com/questions/77872454/running-python-script-in-pycharm-debugger-with-proxychains
I am working with PyCharm IDE and have encountered an issue where I need to run my Python script within both the PyCharm internal debugger and under proxychains. The reason for using proxychains is that the modules I'm working with do not support working via a SOCKS5 proxy, which is necessary for internet access. Running the script directly from the internal debugger does not provide an option to use proxychains, and using the built-in console does not allow me to step through the code with the debugger. Is there a way to configure PyCharm to run my script with both the internal debugger and proxychains? I want to be able to debug my code while also routing network requests through proxychains. Any help or suggestions on how to achieve this would be greatly appreciated. IDE version: PyCharm 2021.3 (Community Edition) under Linux
This can be accomplished by creating a shell script which would run your script through ProxyChains and then using this script as an interpreter in PyCharm. First, create a shell script named python in your project directory (let's say it's located at ~/MyProject): #!/bin/sh proxychains4 /usr/bin/python3 "$@" $@ is used to pass all the arguments (specifically, path to your main script) through this script to the actual Python interpreter (/usr/bin/python3). Make this script executable: chmod +x ~/MyProject/python Now we can add this script as an interpreter in PyCharm. If you're not using a virtual environment for this project, you can just add a new system interpreter and pick the created python shell script (~/MyProject/python) as the executable. Then add a new configuration using this new interpreter and path to your main script. If you need a virtual environment for this, create a new virtual environment, specify its location and path to your default python interpreter (say, /usr/bin/python3), install all the necessary dependencies inside it and then once again go to Preferences -> Python Interpreter -> Show all, select your newly created virtual environment and change the "interpreter path" field to the shell script path (~/MyProject/python). And one more step. Since PyCharm's debugger uses a built-in local server, you need to exclude connection to it in proxychains.conf: localnet 127.0.0.1/255.255.255.255 Voilà! [proxychains] config file found: /usr/local/etc/proxychains.conf [proxychains] preloading /usr/local/Cellar/proxychains-ng/4.17/lib/libproxychains4.dylib [proxychains] DLL init: proxychains-ng 4.17 [proxychains] DLL init: proxychains-ng 4.17 Connected to pydev debugger (build 233.13763.11) [proxychains] Strict chain ... 171.244.140.160:3991 ... api.ipify.org:443 ... OK {'ip': '171.244.140.160'}
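For a quick sanity check that traffic really goes through the chain (the same IP-echo service as in the output above; assumes requests is installed):
import requests

print(requests.get("https://api.ipify.org?format=json").json())  # should print the proxy's IP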
2
3
77,889,518
2024-1-26
https://stackoverflow.com/questions/77889518/how-to-fusion-cells-of-a-dataframe-by-summation
I want to transform my dataframe by merging its cells and summing them into larger cells, given the indices of those. As an example, given the indices [0,2] & [2,4] on the X and Y axes, go from the following dataframe: +----+----+----+----+ | 1 | 2 | 3 | 4 | +----+----+----+----+ | 5 | 6 | 7 | 8 | +----+----+----+----+ | 9 | 10 | 11 | 12 | +----+----+----+----+ | 13 | 14 | 15 | 16 | +----+----+----+----+ to the following one: +----+----+ | 14 | 22 | +----+----+ | 46 | 54 | +----+----+ I was thinking Pandas' groupby.transform or rolling would be of help. Any clues?
Assuming you have homogenous blocks (e.g, 2x2), the most efficient would be to reshape the underlying numpy array and sum: N = 2 out = pd.DataFrame(df.to_numpy() # convert the 2D array to 4D .reshape(len(df)//N, N, -1, N) # sum along dimensions 1 and 3 to go back to 2D .sum((1, 3)) ) If you want non-square blocks (RxC): R, C = 2, 2 out = pd.DataFrame(df.to_numpy() .reshape(len(df)//R, R, df.shape[1]//C, C) .sum((1, 3)) ) Output: 0 1 0 14 22 1 46 54 Intermediate 4D array: # df.to_numpy().reshape((len(df)//N, N, -1, N)) array([[[[ 1, 2], # ──┐ [ 3, 4]], # ─┐├─> 1+2+5+6 = 14 # ││ [[ 5, 6], # ──┘ [ 7, 8]]], # ─┴──> 3+4+7+8 = 22 [[[ 9, 10], # ──┐ [11, 12]], # ─┐├─> 9+10+13+14 = 46 # ││ [[13, 14], # ──┘ [15, 16]]]]) # ─┴──> 11+12+15+16 = 54 shape is not a multiple of N You need to pad the input to a multiple of N. For example using: N = 3 X = int(np.ceil(df.shape[0]/N)*N) Y = int(np.ceil(df.shape[1]/N)*N) df = df.reindex(index=range(X), columns=range(Y), fill_value=0) Then the aggregation code.
2
2
77,902,775
2024-1-29
https://stackoverflow.com/questions/77902775/how-to-forward-generic-argument-types-to-a-callable-in-python
I want to annotate a generic function which takes as arguments another function and its parameters. def forward(func, **kwargs): func(**kwargs) So, if I have a function which takes two integers: def sum_int(a: int, b: int): ... in my editor I want help if I pass the wrong object types: forward(sum_int, 1.5, 2.6) # want type checker to complain about using floats instead of integers How can I annotate forward? Something like: def forward(func: Callable[rest, ret], **kwargs: rest) -> ret: ... So, the first argument to forward is func and the rest are the keyword arguments. The return is ret. But rest and ret are also the keyword arguments and return type for func. I used to do generics years (decades!) ago in C++, and there were tricks for capturing and unpacking various types, but I don't know whether we have to jump through those kinds of hoops with Python, or whether it's even possible. I don't really know what to search for, and didn't turn up anything anywhere near helpful. Thanks!
You can use typing.ParamSpec to associate the arguments expected by func with the arguments expected by forward, though you do apparently need to use both positional and keyword arguments in the definition of forward: from typing import Callable, ParamSpec, TypeVar RV = TypeVar('RV') P = ParamSpec('P') def forward(func: Callable[P, RV], *args: P.args, **kwargs: P.kwargs) -> RV: ... def foo(*, a: int, b: int) -> str: ... forward(foo, a=3, b=5) # OK forward(foo, 3, 5) # Not OK, too many positional arguments for foo
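Applied to the question's sum_int (a sketch; typing.ParamSpec needs Python 3.10+, or ParamSpec from typing_extensions on older versions):
def sum_int(a: int, b: int) -> int:
    return a + b

forward(sum_int, 1, 2)       # OK, and the result is typed as int
forward(sum_int, 1.5, 2.6)   # flagged: float is not compatible with int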
3
3
77,901,277
2024-1-29
https://stackoverflow.com/questions/77901277/building-python-from-source-sqlite3-built-but-not-imported
I have a Rocky Linux 9.0 server and we use modules on it. I'm trying to compile different versions of python from source and to pack them into modules that can be loaded by the user as needed. But, unfortunately, I've been struggling to build them from source. First off, if anybody has a good alternative for how to build modules containing a specific python version (i.e. without compilation), feel free to let me know, I would appreciate some hints on that as well, but that is not the issue of this post. I have the following script to download and build python and its dependency sqlite: #!/bin/bash # build a python module # based on: https://stackoverflow.com/questions/43993890/modulenotfounderror-no-module-named-sqlite3 # Save current working directory CURR_DIR="$(pwd)" # Set working directory to the location of this file cd "$(dirname "${BASH_SOURCE[0]}")" # config PYTHON_VERSION=3.11.7 INSTALL_BASE_PATH="/modules/shared/apps/python/${PYTHON_VERSION}" SQLITE_VERSION=3450000 # clear the install directory rm -rf ${INSTALL_BASE_PATH} # download python mkdir build cd build [ -f "Python-${PYTHON_VERSION}.tgz" ] || wget --no-check-certificate "https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tgz" tar -zxf "Python-${PYTHON_VERSION}.tgz" # download sqlite # https://www.sqlite.org/2024/sqlite-autoconf-3450000.tar.gz SQLITE_URL="https://www.sqlite.org/2024/sqlite-autoconf-${SQLITE_VERSION}.tar.gz" echo "Downloading SQLite from ${SQLITE_URL}" [ -f "sqlite-autoconf-${SQLITE_VERSION}.tar.gz" ] || wget --no-check-certificate "${SQLITE_URL}" tar -zxf "sqlite-autoconf-${SQLITE_VERSION}.tar.gz" # install sqlite cd sqlite-autoconf-${SQLITE_VERSION} ./configure --prefix=${INSTALL_BASE_PATH} make make install # install python cd "../Python-${PYTHON_VERSION}" export PKG_CONFIG_PATH="${INSTALL_BASE_PATH}/lib/pkgconfig" ./configure --prefix=${INSTALL_BASE_PATH} --enable-loadable-sqlite-extensions LDFLAGS="-L${INSTALL_BASE_PATH}/lib" CPPFLAGS="-I${INSTALL_BASE_PATH}/include" CFLAGS="-I${INSTALL_BASE_PATH}/include" make -j 8 read -p "Compiled python, check for errors and press key to continue..." make install # create a symlink for python (cd "${INSTALL_BASE_PATH}/bin"; ln -s python3 python) # set working directory back to original cd $CURR_DIR read -p "Done, press key to finish..." When I run the script and check for errors after make -j 8, I see the following: *** WARNING: renaming "_sqlite3" since importing it failed: /path/to/python/build/Python-3.11.7/build/lib.linux-x86_64-3.11/_sqlite3.cpython-311-x86_64-linux-gnu.so: undefined symbol: sqlite3_deserialize The necessary bits to build these optional modules were not found: _gdbm _tkinter nis readline To find the necessary bits, look in setup.py in detect_modules() for the module's name. Following modules built successfully but were removed because they could not be imported: _sqlite3 I'm trying to leave the underlying OS untouched, so installing the usual pre-compiled sqlite-devel is not really an option. There is a similar issue here, but no answer there helped me. Does anybody have an idea what the issue could be?
You need to define LD_LIBRARY_PATH: # install python cd "../Python-${PYTHON_VERSION}" export LD_LIBRARY_PATH=${INSTALL_BASE_PATH}/lib export PKG_CONFIG_PATH="${INSTALL_BASE_PATH}/lib/pkgconfig" After you've built Python, you need to put this in ~/.bashrc: export LD_LIBRARY_PATH=/modules/shared/apps/python/3.11.7/lib
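An alternative sketch that avoids needing LD_LIBRARY_PATH at runtime (reusing the question's variable names) is to bake the library path into the interpreter with an rpath at configure time:
# pass -Wl,-rpath so the built python finds libsqlite3 without LD_LIBRARY_PATH
./configure --prefix=${INSTALL_BASE_PATH} --enable-loadable-sqlite-extensions \
    LDFLAGS="-L${INSTALL_BASE_PATH}/lib -Wl,-rpath,${INSTALL_BASE_PATH}/lib" \
    CPPFLAGS="-I${INSTALL_BASE_PATH}/include"
This suits a module-based setup, since users then don't have to export anything.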
2
2
77,902,595
2024-1-29
https://stackoverflow.com/questions/77902595/polars-groupby-ignore-nans-when-calculating-mean
What is the best way to ignore NaNs when calculating the mean in Polars? As of polars v0.19.19, the default pl.Expr.mean does not ignore NaNs. Example: test_data = pl.DataFrame( { "group": ["A", "A", "B", "B"], "values": [1, np.nan, 2, 3] } ) test_data.group_by("group").agg(pl.col("values").mean()) results in group values "A" NaN "B" 2.5 Compared to pandas: test_data = pd.DataFrame( { "group": ["A", "A", "B", "B"], "values": [1, np.nan, 2, 3] } ) test_data.groupby("group").mean() results in group values "A" 1.0 "B" 2.5 The best alternative I can think of is using polars pl.Expr.map_elements and np.nanmean as such: test_data = pl.DataFrame( { "group": ["A", "A", "B", "B"], "values": [1, np.nan, 2, 3] } ) test_data.group_by("group").agg( pl.col("values").map_elements(lambda x: np.nanmean(x.to_numpy())) ) However, the polars API doesn't recommend using it if possible due its lack of speed. Are there faster ways to calculate the mean while ignoring NaNs?
TLDR. You can simply drop the NaNs before computing the mean. df.group_by("group").agg(pl.col("values").drop_nans().mean()) Output. shape: (2, 2) ┌───────┬────────┐ │ group ┆ values │ │ --- ┆ --- │ │ str ┆ f64 │ ╞═══════╪════════╡ │ A ┆ 1.0 │ │ B ┆ 2.5 │ └───────┴────────┘ Performance comparison against pl.Expr.fill_nan(None). We start by creating an example dataset with 100 million rows and ~20% NaNs in the values column. import random import string N_ROWS = 100_000_000 df = pl.DataFrame({ "group": [random.choice(string.ascii_uppercase) for _ in range(N_ROWS)], "values": [random.random() if random.random() > 0.2 else np.nan for _ in range(N_ROWS)] }) Both methods yield identical results. df_drop = df.group_by("group", maintain_order=True).agg(pl.col("values").drop_nans().mean()) df_fill = df.group_by("group", maintain_order=True).agg(pl.col("values").fill_nan(None).mean()) assert np.allclose(df_drop.select("values").to_numpy(), df_fill.select("values").to_numpy()) Timing both methods yields. %timeit df.group_by("group").agg(pl.col("values").drop_nans().mean()) # 1.21 s ± 28.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) %timeit df.group_by("group").agg(pl.col("values").fill_nan(None).mean()) # 737 ms ± 14.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) Hence, using pl.Expr.fill_nan(None) gives a ~1.6x speed-up. However, unless you are working with dataframes having hundreds of millions of rows, the impact is likely negligible. Note. I've noticed that the relationship above reverses with increasing number of groups. In such cases, efficient parallelisation becomes increasingly important.
4
6
77,877,556
2024-1-25
https://stackoverflow.com/questions/77877556/does-the-order-of-modeling-affect-computational-speed-in-pulp-for-integer-progra
I am currently solving an integer programming problem using Pulp. I am aware that the order of statements in Pulp modeling can affect the outcome of the computations. However, I am curious to know if a specific order of modeling can also lead to improvements in computational speed. Is the practice of finding a particular order to enhance calculation speed common? Additionally, I have already implemented optimizations such as eliminating unnecessary variables as much as possible, and dividing the process into pre-processing and post-processing stages. In my case, I noticed a significant improvement in computation time after changing the modeling order: Before changing the order: about 50 seconds After changing the order: about 30 seconds Is there any general advice or guidelines for determining the optimal order of modeling in Pulp to achieve faster computational speeds?
Does the Order of Modeling Affect Computational Speed in Pulp for Integer Programming Problems? Yes. But that's an undesirable effect, and solvers try hard to decrease it. It's also not limited to pulp but applies to discrete optimization in general. See the resource below (focusing on Integer Programming)! Is the practice of finding a particular order to enhance calculation speed common? Absolutely not! If some entities like locations are sorted country-wise, it's absolutely okay (and a good idea) to iterate them the same way. But actively tuning an order beyond what is naturally available... I wouldn't do that. Two remarks: It's like tuning random seeds, which one should never do. In different but similar communities (SAT solvers; similar in terms of "automatic search"), competitions evaluating different solvers on previously unknown instances randomly permute the model rows, afaik! Before changing the order: about 50 seconds After changing the order: about 30 seconds This sounds like a statistical evaluation with a sample size of 1. This does not tell much. It probably won't even survive a re-run with a different seed in the solver. If your problem is as sensitive as you claim, it usually indicates that your model is not good enough. It basically means that your solver is very much depending on luck! Other formulations (not the order) might be better, but this is always problem-dependent. If there are some insights which can help the solver, there are also more robust methods of giving hints (branch heuristics and co.), although I'm not sure how much pulp supports here. For some background: Performance variability in mixed integer programming (PDF) e.g. page 9: "First variability generator: permutations"
2
3
77,901,835
2024-1-29
https://stackoverflow.com/questions/77901835/subs-is-not-defined-error-when-trying-to-graph-the-tangent-line-of-the-derivat
I've graphed my original function and I want to graph the tangent line at the point of interest. import numpy as np import matplotlib.pyplot as plt from sympy import lambdify, symbols, diff, Abs point_of_interest = 8 graphRange = [1,15] # define the variable and the function xsym = symbols('x') # With a chord length of 60, plot the circle diameters origFunction = 2 * ((60 ** 2) / (8 * Abs(xsym)) + Abs(xsym) / 2) # define the derivative derivative = diff(origFunction, xsym) # define the tangent line at point of interest tangentLine = derivative.subs(xsym, point_of_interest) * (xsym - point_of_interest) + origFunction.subs(xsym, point_of_interest) # Convert the SymPy function to a lambda function origF = lambdify(xsym, origFunction, "numpy") # Generate x values x_values = np.linspace(graphRange[0], graphRange[1], 100) # Generate y-values for the original function y_values = origF(x_values) # Plot the original function plt.plot(x_values, y_values, label='Original Function') # THIS SECTION DOESN'T WORK YET # Convert the SymPy function to a lambda function diffF = lambdify(xsym, tangentLine, "numpy") # Generate y-values for the tangent line y_tang_values = diffF(x_values) # Plot the tangent line plt.plot(x_values, y_tang_values, label='Tangent Line') #plot the point of interest plt.plot(point_of_interest, origF(point_of_interest), 'ro', label='Point of Interest') # Add labels and legend plt.xlabel('x') plt.ylabel('y') plt.title('Graph of Original Function and Tangent Line') plt.legend() # Show the plot plt.show() The error I get is this: Traceback (most recent call last): File "x:\Python Projects\Chord calc.py", line 37, in <module> y_tang_values = diffF(x_values) ^^^^^^^^^^^^^^^ File "<lambdifygenerated-1>", line 4, in _lambdifygenerated NameError: name 'Subs' is not defined I don't know what to do to fix this error.
At the moment, you are using a complex symbol, xsym: it doesn't have any assumptions, hence it is as general as it gets. Your function contains Abs(xsym): because xsym is complex, its derivative will be quite convoluted: print(Abs(xsym).diff(xsym)) # (re(x)*Derivative(re(x), x) + im(x)*Derivative(im(x), x))*sign(x)/x Once you substitute a numeric value into that expression, Subs appears, and you get the error later on. The solution is simple: create a real symbol. Replace this line: xsym = symbols('x') with: xsym = symbols('x', real=True) Then, everything will work as expected.
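A quick check of the difference (a minimal sketch):
from sympy import symbols, Abs

x_complex = symbols('x')
x_real = symbols('x', real=True)
print(Abs(x_complex).diff(x_complex))  # the convoluted re/im/Derivative expression
print(Abs(x_real).diff(x_real))        # sign(x), which substitutes cleanly, no Subs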
2
2
77,900,818
2024-1-29
https://stackoverflow.com/questions/77900818/why-does-running-a-plot-in-a-secondary-thread-works-the-first-time-with-warning
For some quick tests (this code will evolve later anyway, so a temporary solution is ok), I need to use Matplotlib in a thread, and not the main thread. Usually we have this warning but it works anyway: UserWarning: Starting a Matplotlib GUI outside of the main thread will likely fail. Here it is indeed the case. However, when the first plot is closed, and we do the same a second time, then it totally fails with not a warning, but an error: RuntimeError: main thread is not in main loop Is there a way to make the following code work? (even if it is not the recommened matplotlib way) import numpy as np import matplotlib.pyplot as plt import threading, time def visualization_thread(): fig = plt.figure() ax = fig.add_subplot(111) l1, *_ = ax.plot(y1, color='r', label="1") fig.show() fig.canvas.flush_events() while running: l1.set_ydata(y1) fig.canvas.draw_idle() fig.canvas.flush_events() plt.pause(0.020) def data_thread(): global y1, y2, y3 while running: y1 = np.random.rand(100) time.sleep(0.020) running = True # !!! WARNING but it still works threading.Thread(target=data_thread).start() threading.Thread(target=visualization_thread).start() time.sleep(4) running = False time.sleep(2) running = True # !!! FAILS, why? threading.Thread(target=data_thread).start() threading.Thread(target=visualization_thread).start()
I just noticed that adding plt.close() at the end of the visualization thread solves the problem. Again, this is a temporary solution, since matplotlib is not threadsafe anyway.
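Concretely, this is the question's function with the one-line fix (plt.close() without arguments, closing the current figure, works the same here):
def visualization_thread():
    fig = plt.figure()
    ax = fig.add_subplot(111)
    l1, *_ = ax.plot(y1, color='r', label="1")
    fig.show()
    fig.canvas.flush_events()
    while running:
        l1.set_ydata(y1)
        fig.canvas.draw_idle()
        fig.canvas.flush_events()
        plt.pause(0.020)
    plt.close(fig)  # release the figure before the thread exits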
2
2
77,900,646
2024-1-29
https://stackoverflow.com/questions/77900646/post-request-error-400-when-trying-to-search-a-website
I am trying to search for movie titles here: https://classindportal.mj.gov.br/consulta-filmes and scrape the resulting page. I know this involves an intermediary step of sending a certain request to the site with my search term, which I currently cannot get to work. When using Google DevTools, the network tab shows me the following info Request URL: https://classindportal.mj.gov.br/api/solicitacao-classificacao-consultas/list Request Method: POST Status Code: 200 OK Referrer Policy: strict-origin-when-cross-origin and that the request payload contains a key tituloBr, which will have a value equal to the search term (e.g. {'tituloBr': 'shrek'} if I type 'shrek' in the search bar and press enter). I believe the search involves sending a post request to the Request URL as shown above, sending the data {'tituloBr': 'shrek'} so I used the requests library as follows: payload = {'tituloBr': 'shrek'} r = requests.post('https://classindportal.mj.gov.br/api/solicitacao-classificacao-consultas/list', data = payload) but this gives an error code 400 with r.reason showing 'Bad Request'. I don't think there is any problem with the URL or data I have sent, so I'm not sure what the issue is.
I've inspected the page, and it seems like you need to provide a token, which can be obtained by sending a POST request to: https://sso.mj.gov.br/auth/realms/PRD/protocol/openid-connect/token So, get the token, and then send another request to the API with the token to search for your desired movie: import requests SEARCH_TERM = "shrek" token_url = "https://sso.mj.gov.br/auth/realms/PRD/protocol/openid-connect/token" movies_url = ( "https://classindportal.mj.gov.br/api/solicitacao-classificacao-consultas/list" ) headers = { "Accept": "application/json, text/plain, */*", "Accept-Language": "en-US,en;q=0.9,he;q=0.8", "Authorization": "", # placeholder; overwritten below with a fresh token "Connection": "keep-alive", "Origin": "https://classindportal.mj.gov.br", "Referer": "https://classindportal.mj.gov.br/consulta-filmes", "Sec-Fetch-Dest": "empty", "Sec-Fetch-Mode": "cors", "Sec-Fetch-Site": "same-origin", "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36", "sec-ch-ua": '"Not A(Brand";v="99", "Google Chrome";v="121", "Chromium";v="121"', "sec-ch-ua-mobile": "?0", "sec-ch-ua-platform": '"macOS"', } json_data = { "currentPage": 0, "pageSize": 10, "sortItem": None, "totalResults": None, "itens": None, "tituloBr": f"{SEARCH_TERM}", "tituloOr": "", "requerente": "", "produtor": "", "editora": "", "idModulo": 1, } token_data = { "client_id": "classind-consultapublica-frontend", "client_secret": "4PmaBa8bBeVow40SKFNb7qNHzAxuLoqz", "grant_type": "client_credentials", "scope": "classind-backend", } with requests.Session() as session: token = session.post(token_url, data=token_data).json()["access_token"] headers["Authorization"] = f"Bearer {token}" response = session.post(movies_url, json=json_data, headers=headers) print(response.json()) You can even convert the data to a Pandas data frame if you'd like: import pandas as pd # ... with requests.Session() as session: token = session.post(token_url, data=token_data).json()["access_token"] headers["Authorization"] = f"Bearer {token}" response = session.post(movies_url, json=json_data, headers=headers) data = response.json()["itens"] df = pd.DataFrame(data) print(df) Which prints: id tituloBrasil ... classificacaoAtribuida classificacaoPretendida 0 164346 SHREK ... Livre None 1 164345 SHREK 2 ... Livre None 2 164344 SHREK PARA SEMPRE ... Livre None 3 164343 SHREK TERCEIRO ... Livre None 4 146845 SHREK 2 ... Livre None 5 146844 SHREK TERCEIRO ... Livre None 6 135770 SHREK ... Livre None 7 135769 SHREK 2 ... Livre None 8 135768 SHREK PARA SEMPRE ... Livre None 9 135767 SHREK TERCEIRO ... Livre None [10 rows x 8 columns] It does seem like there is some pagination to do, but I'll leave that up to the OP.
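Building on the answer's own payload fields, a hedged pagination sketch to run inside the same with requests.Session() block (assumes an empty itens list marks the last page):
all_items = []
page = 0
while True:
    json_data["currentPage"] = page
    items = session.post(movies_url, json=json_data, headers=headers).json().get("itens") or []
    if not items:
        break
    all_items.extend(items)
    page += 1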
2
2
77,899,281
2024-1-29
https://stackoverflow.com/questions/77899281/how-to-create-an-file-dialog-box-in-customtkinter
Please review this code where I have used CustomTkinter (aliased as "ctk") to create a button. The "command" field contains a function "selectfile". I want to implement this "selectfile" function. button_to_select = ctk.CTkButton(app, text = "Choose file", fg_color = "blue", command = selectfile) button_to_select.pack(padx = 25, pady = 25) button_to_select.place(x = 825, y = 300) We have all seen applications opening up another window for tasks such as logins or file selection. I want to create the same here. The button, on click, should open a Windows Explorer window and let users select any number and any type of files. We will also have an "Ok" button that will close the window and send the files back to the application for further work. How do I implement this function? It will be like when we want to send some files in WhatsApp: the Windows Explorer opens and we select the files... This would be something similar. And no, it is not a web app. It is a native app. I am unable to create anything because I lack knowledge. All I tried to do is use the "subprocess" module to run the Windows Explorer. But that just runs the explorer separately and does not do the expected work as I described.
You can use filedialog. Use this function to open the Windows file selector: from customtkinter import filedialog def selectfile(): filename = filedialog.askopenfilename() print(filename) Edit: CustomTkinter provides this function as well (it is the standard tkinter filedialog re-exported, so the import above works).
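Since the question asks for selecting any number of files: filedialog also has askopenfilenames (note the plural), which returns a tuple of paths; a sketch:
def selectfiles():
    # returns a tuple of selected paths; empty tuple if the user cancels
    filenames = filedialog.askopenfilenames(title="Choose files")
    for f in filenames:
        print(f)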
2
4