question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
79,068,019 | 2024-10-8 | https://stackoverflow.com/questions/79068019/is-there-a-more-efficient-way-to-get-x-z-slices-from-a-stack-of-x-y-images | I have a CT scan of an object: images with pixels in an x-y grid, one image per z value. I want to make x-z images, to view the object from a different angle. The most obvious way to do this: load all the images into a 3D array like bigImageArray[x,y,z] and save the slices bigImageArray[:,y,:] for each value of y. The problem: bigImageArray would be too large to store in memory. What's the best way to handle this? Attempted solution: for each y value, make an empty x-z array, open each x-y image and fill in one line. This works but is quite slow (lots of file opening/closing). Adding a python tag since that's what I've been using, but I'm open to using anything | let's say you can fit n x-y or n x-z images into memory. Load up n x-y images and use them to write out n width x n x n "blocks". Repeat until you've covered z. Then, load up all the blocks that have y=0, and use them to write out n x-z images. Repeat until you've covered y. | 2 | 1 |
79,065,880 | 2024-10-8 | https://stackoverflow.com/questions/79065880/pytest-overriding-production-database-with-test-database | I have started writing tests for my FastAPI/SQLAlchemy app and I would like to use a separate empty database for tests. I added an override in my conftest.py file but the function override_get_db() never gets called. As a result, tests are run on the production database and cannot get them to run on the testing database. Any idea of what is wrong in my code ? main.py from fastapi import FastAPI from fastapi.middleware.cors import CORSMiddleware from routes.address import router as address_router app = FastAPI() app.add_middleware( CORSMiddleware, allow_origins=["*"], allow_credentials=True, allow_methods=["*"], allow_headers=["*"], ) app.include_router(address_router) database.py from sqlalchemy import create_engine from sqlalchemy.orm import sessionmaker from models.base import Base from config import Config from sqlalchemy.orm import Session engine = create_engine( Config.DATABASE_URI, echo=True, ) SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine) def get_db(): print(f"Connecting to database: {Config.DATABASE_URI}") Base.metadata.create_all(engine) db: Session = SessionLocal() try: yield db finally: db.close() routes/address.py from fastapi import APIRouter, Depends from sqlalchemy.orm import Session from crud.address import get, get_all, create, update, delete from database.database import get_db from schemas.address import AddressCreate router = APIRouter() @router.get("/address/{address_id}") async def get_address(address_id: int, db: Session = Depends(get_db)): return get(db, address_id) @router.get("/address/") async def get_all_addresss(db: Session = Depends(get_db)): return get_all(db) @router.post("/address/") async def create_address(address: AddressCreate, db: Session = Depends(get_db)): return create(db, address) @router.put("/address/{address_id}") async def update_address( address_id: int, address: AddressCreate, db: Session = Depends(get_db) ): return update(db, address_id, address) @router.delete("/address/{address_id}") async def delete_address(address_id: int, db: Session = Depends(get_db)): return delete(db, address_id) conftest.py import pytest from fastapi.testclient import TestClient from sqlalchemy import Engine, StaticPool, create_engine from sqlalchemy.orm import sessionmaker from main import app from config import Config from src.database.database import get_db from src.models.base import Base print("Loading conftest.py") TEST_DATABASE_URI = "sqlite:///:memory:" @pytest.fixture(scope="session") def engine() -> Engine: print(f"Using database URI: {Config.TEST_DATABASE_URI}") return create_engine( Config.TEST_DATABASE_URI, connect_args={"check_same_thread": False}, poolclass=StaticPool, echo=True, ) @pytest.fixture(scope="function") def test_db(engine): Base.metadata.create_all(bind=engine) TestingSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine) db = TestingSessionLocal() try: yield db finally: db.close() Base.metadata.drop_all(bind=engine) @pytest.fixture(scope="function") def override_get_db(): def _override_get_db(): print("Using test database") try: yield test_db finally: test_db.close() return _override_get_db @pytest.fixture(scope="function") def test_app(override_get_db): print("Applying dependency override") app.dependency_overrides[get_db] = override_get_db yield app print("Clearing dependency override") app.dependency_overrides.clear() @pytest.fixture(scope="function") def client(test_app): return TestClient(test_app) test_address.py def test_create_address(client): response = client.post( "/address/", json={ "city": "Springfield", "country": "USA", }, ) assert response.status_code == 200 response_data = response.json() assert response_data["city"] == "Springfield" assert response_data["country"] == "USA" assert "id" in response_data | There is a problem with the way you're trying to override get_db: app.dependency_overrides[get_db] = override_get_db override_get_db is a fixture. You can't use fixtures for dependencies. There are many possible solutions. From what I see, currently you only need the database for your client, so you could add all logic in the client fixture: import pytest from fastapi.testclient import TestClient from sqlalchemy import create_engine, StaticPool from sqlalchemy.orm import sessionmaker from main import app from config import Config from models.base import Base from database.database import get_db @pytest.fixture(scope="function") def client(): engine = create_engine( Config.TEST_DATABASE_URI, connect_args={"check_same_thread": False}, poolclass=StaticPool, echo=True ) Base.metadata.create_all(bind=engine) TestingSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine) def override_get_db(): db = TestingSessionLocal() try: yield db finally: db.close() app.dependency_overrides[get_db] = override_get_db with TestClient(app) as client: yield client Base.metadata.drop_all(bind=engine) app.dependency_overrides.clear() Or you could separate the logic instead. Main point - don't use fixtures for overriding dependencies. Use regular functions instead. | 3 | 3 |
79,067,196 | 2024-10-8 | https://stackoverflow.com/questions/79067196/multiprocess-with-joblib-and-skimage-picklingerror-could-not-pickle-the-task-t | I'm trying to parallelize the task of finding minimum cost paths through a raster cost surface, but I keep bumping into the same PicklingError: Could not pickle the task to send it to the workers. This is a code example of what's going on: import numpy as np from skimage.graph import MCP_Geometric import timeit from joblib import Parallel, delayed np.random.seed(123) cost_surface = np.random.rand(1000, 1000) mcp = MCP_Geometric(cost_surface) pois = [(np.random.randint(0, 1000), np.random.randint(0, 1000)) for _ in range(20)] def task(poi): costs_array, traceback = mcp.find_costs(starts=[poi], ends=pois) ends_idx = tuple(np.asarray(pois).T.tolist()) costs = costs_array[ends_idx] tracebacks = [mcp.traceback(end) for end in pois] Parallel(n_jobs=6)(delayed(task)(poi) for poi in pois) I'm fairly new to parallelizing tasks, but the code I'm running might take weeks if done sequentially and want to leverage parallel. I understand that pickling is not possible for some complex objects, so I'm also looking for alternatives. | This is because MCP_Geometric is unpickable. You need to move initialization of this class into task function: import numpy as np from skimage.graph import MCP_Geometric import timeit from joblib import Parallel, delayed np.random.seed(123) cost_surface = np.random.rand(1000, 1000) pois = [(np.random.randint(0, 1000), np.random.randint(0, 1000)) for _ in range(20)] def task(poi): # --- each process will have it's own version of the `mcp` class mcp = MCP_Geometric(cost_surface) costs_array, traceback = mcp.find_costs(starts=[poi], ends=pois) ends_idx = tuple(np.asarray(pois).T.tolist()) costs = costs_array[ends_idx] tracebacks = [mcp.traceback(end) for end in pois] Parallel(n_jobs=6)(delayed(task)(poi) for poi in pois) | 2 | 1 |
79,067,236 | 2024-10-8 | https://stackoverflow.com/questions/79067236/collatzs-hypothesis-want-to-reuse-code-instead-of-repeating | I am trying to learn python and came across this Collatz's hypothesis where I need to take any non-negative and non-zero integer number as input and if it's even, evaluate a condition otherwise, if it's odd, evaluate another condition if number β 1, go back to even condition. I am checking same condition multiple times. Please guide me how to reuse the conditions without repeating them. Thank you in advance and my code is below. "was trying to reduce input intakes and minimize condition check for one" c0 = int(input("Enter a non-negative and non-zero integer number: ")) steps = 0 if c0 < 0: c0 = input("Enter a non-negative and non-zero integer number: ") else: while c0 != 1: while c0 % 2 == 0 and c0 != 0: c0 /= 2 steps += 1 print(int(c0)) if c0 == 1: break else: c0 = 3*c0 + 1 steps += 1 print(int(c0)) else: print("steps = ", steps) | Because you loop through it and regularly check for c0 being 1 and c0 not being 0, why not just use that as the while condition? Using it would make the break check unnecessary. Additionally, you don't need to use a while loop to check for even or odd numbers. Instead, you can use one while loop and conditionals inside. Also, you check for < 0 instead of <= 0 in your first conditional. Additionally since c0 can never equal 0, you don't need to check for it in the loop. c0 = int(input("Enter a non-negative and non-zero integer number: ")) steps = 0 if c0 <= 0: #Checks for negative or 0 c0 = input("Enter a non-negative and non-zero integer number: ") else: while c0 != 1: #Checks if c0 is 1. If so, print the steps and terminate if c0 % 2 == 0: #If c0 is even c0 /= 2 steps += 1 print(int(c0)) else: #If c0 is odd c0 = 3*c0 + 1 steps += 1 print(int(c0)) else: print("steps = ", steps) | 1 | 2 |
79,066,478 | 2024-10-8 | https://stackoverflow.com/questions/79066478/django-reusable-many-to-one-definition-in-reverse | I'm struggling to make a Many-to-one relationship reusable. Simplified, let's say I have: class Car(models.Model): ... class Wheel(models.Model): car = models.ForeignKey(Car) ... Pretty straight forward. What, however, if I'd like to use my Wheel model also on another model, Bike? Can I define the relationship in reverse, on the "One" side of the relationship? Defining this as a Many-to-many on the Vehicles would mean the same Wheel could belong to multiple Vehicles, which is not what I want. Would I have to subclass my Wheel to CarWheel and BikeWheel only to be able to differentiate the Foreignkey for each relationship? Seems like there should be a cleaner solution. | If I understand correctly, you will need to "subclass" from a parent model. For instance, Vehicle: class Vehicle(models.Model): pass class Bike(Vehicle): pass class Wheel(models.Model): vehicle = models.ForeignKey( Vehicle, on_delete=models.CASCADE, related_name='wheels', ) Now to access the Wheel instances from a Bike instance: bike = Bike.objects.first() bike_wheels = bike.wheels.all() | 1 | 1 |
79,066,076 | 2024-10-8 | https://stackoverflow.com/questions/79066076/distinguish-native-type-from-user-type | I want to be able to distinguish some native class like _functools.partial (code, code) from some user-class like: class MyClass: pass I want to test this from within Python (so not using the CPython API or so). But it's ok if this only works with CPython. It should work for CPython >=3.6. How? More specifically, I want to distinguish objects where its __dict__/__slots__ will not give me all the object attributes. This is usually the case for native types such as _functools.partial where the func, args, keywords are not in __dict__ (and __slots__ does not exist). I think basically I want to check if the type defines some custom Py_tp_members (but not derived from __slots__ (see)). Some background / ideas which might be helpful to get to an answer: functools.partial.__dict__ is usually empty or None. functools.partial.__slots__ only exists if Python uses the pure Python implementation, which is usually not the case. functools.partial.__getstate__() exists since Python >=3.11 and will return __dict__ (and maybe __slots__), so in case of the native partial, it will be empty or None. (Not sure if this is really correct. See CPython issue #125094.) functools.partial.__dir__(functools.partial) also does not cover func and co? However, dir(functools.partial) does? Side question: What is the builtin dir doing differently? Where is the exact code of the builtin dir which covers this case? I assume it iterates through the Py_tp_members, maybe also Py_tp_methods? I initially thought that maybe __dictoffset__ gives me some hint, but that turned out to be wrong. The builtin dir could probably be used in any case. The inspect.getmembers (or also inspect.getmembers_static) might tell me what I need to know. Why do I need that? We need to define our own custom hash (not __hash__) for any type of object, in a stable way (that the hash does not change across Python versions), and in a meaningful way (that the hash contains all relevant state). So far, we iterate over __getstate__(), or __dict__/__slots__, which works fine for most cases, except now for functools.partial, and likely for other native types. So the idea was, add a check if we have a native type, and if so, fallback to using dir, as that seems to work fine even in those cases. See Sisyphus issue #207 and Sisyphus get_object_state and Sisyphus sis_hash_helper. | While playing around, I noticed that all entries in Py_tp_members will always become corresponding member descriptors in the type. And you can check for those, via inspect.ismemberdescriptor (which just checks isinstance(object, types.MemberDescriptorType)). I think this type of descriptor would not be used otherwise. So that allows to implement such a check. Note, instead of calling this is_native_type or so, I call this type_has_native_members, to be more specific about what it actually checks (and what I want here). (But "being a native type" was anyway somewhat not well defined in the first place. I just was not sure how to properly define it.) import inspect def type_has_native_members(cls): if hasattr(cls, "__slots__"): return False for key in dir(cls): value = getattr(cls, key, None) if inspect.ismemberdescriptor(value): return True return False # --------------- # some tests: import functools assert type_has_native_members(functools.partial) class Foo: pass assert not type_has_native_members(Foo) | 2 | 1 |
79,059,046 | 2024-10-6 | https://stackoverflow.com/questions/79059046/how-to-get-images-in-beautifulsoup-from-javascript | At my shcool we have a interactive white boards and we can export them to a website with a provided link. Only problem is that the links expire (which is stupid), so I want to make a simple python script that gets the images and downloads them. Here is the link to the website: https://air.ifpshare.com/documentPreview.html?s_id=8ec97e16-51c4-4a77-9f64-7d5dccd9bb41#/detail/561f0184-384c-4ca1-91a4-b2e687865408/record When I open chrome and inspect the website, I see that the images are contained in a main divider with sub divider and image elements which encode the image in base 64. This is thus easy to decode them in python. This is the simple script i wrote to get the html: import requests page = requests.get("https://air.ifpshare.com/documentPreview.html?s_id=8ec97e16-51c4-4a77-9f64-7d5dccd9bb41#/detail/561f0184-384c-4ca1-91a4-b2e687865408/record") print(page.text) Only problem is, when I try to get the html, I don't get any of the content... The content seems to be coming from the javascript that is in the website. The same thing happens when I use Selenium Here is what I get: <!DOCTYPE html><html><head><meta charset=utf-8><meta name=viewport content="width=device-width,initial-scale=1,maximum-scale=1,user-scalable=no"><link rel=stylesheet href=//at.alicdn.com/t/font_833191_27456hr9ow5.css><title id=PageTitle></title><style>html, body { max-width: 480px; height: 100%; margin: auto; background-size: 100% 100%; background: #F8F8F8; }</style><link href=/static/css/documentPreview.01a0856b7f615fdfd7f4b853e047bcd0.css rel=stylesheet></head><body><div id=app></div><script type=text/javascript src=/static/js/manifest.a3f705024b2774dd271e.js></script><script type=text/javascript src=/static/js/vendor.03a8b2ef6819d9eaa4e7.js></script><script type=text/javascript src=/static/js/documentPreview.a9f6fe7b5c4d6f073050.js></script></body></html> Does anyone know a workaround? | Note: this answer contains different methods to reach your goal. I saw your target web app fetching image download URLs from an API endpoint and it is easy to fetch those images using the requests library with a little bit of code (no need to use bs4 if you want). here is the API endpoint https://air.ifpshare.com/api/pub/files/UUID So what is the file UUID in your target URL? Your provided URL: https://air.ifpshare.com/documentPreview.html?s_id=8ec97e16-51c4-4a77-9f64-7d5dccd9bb41#/detail/561f0184-384c-4ca1-91a4-b2e687865408/record File UUID: after the /detail/ path you will see a UUID value, well, this is your file UUID, now merge this file UUID with the API endpoint you will get the downloadUrl value from the JSON response and this is your complete download URL, here is the code: import requests def fetchResp(UUID): url = f"https://air.ifpshare.com/api/pub/files/{UUID}" response = requests.get(url) items = response.json()['items'] for n, urls in enumerate(items): image = urls['downloadUrl'] image_url = f"https:{image}" #missing HTTP in the response value so added this manually image_data = requests.get(image_url).content with open(f"image-{n}.png", 'wb') as image_d: image_d.write(image_data) fetchResp('561f0184-384c-4ca1-91a4-b2e687865408') #file UUID is here | 1 | 2 |
79,066,206 | 2024-10-8 | https://stackoverflow.com/questions/79066206/python-pandas-groupby-two-columns-without-merging-them | My dataframe looks like this: | col1 | col2 | col3 | | ---- | ---- | ---- | | 1 | abc | txt1 | | 1 | abc | txt2 | | 2 | abc | txt3 | | 1 | xyz | txt4 | | 2 | xyz | txt5 | I want to merge the text in col3 between rows only if the rows have the same value in col1 AND the rows have same value in col2. Expected result: | col1 | col2 | col3 | | ---- | ---- | ---------- | | 1 | abc | txt1, txt2 | | 2 | abc | txt3 | | 1 | xyz | txt4 | | 2 | xyz | txt5 | I have used this: df = df.groupby([df[col1], df[col2]]).aggregate({'col3': ', '.join}) Which joins the col3 correctly, but it also merges col1 and col2 into one column (list). How can I achieve the expected result while keeping 3 separate columns (col1, col2, col3)? | A possible solution, which: Performs a group-by operation using two columns, col1 and col2, as the grouping keys. It then aggregates the values in col3 for each group by applying a lambda function that concatenates the values into a single string, with each value separated by a comma. (df.groupby(['col1', 'col2'], as_index=False) .agg({'col3': lambda x: ', '. join(x)})) Output: col1 col2 col3 0 1 abc txt1, txt2 1 1 xyz txt4 2 2 abc txt3 3 2 xyz txt5 | 1 | 1 |
79,063,686 | 2024-10-7 | https://stackoverflow.com/questions/79063686/how-to-catch-throw-and-other-exceptions-in-coroutine-with-1-yield | I have var DICTIONARY, which is a dictionary where the keys are English letters and the values are words that start with the corresponding letter. The initial filling of DICTIONARY looks like this: DICTIONARY = { 'a': 'apple', 'b': 'banana', 'c': 'cat', 'd': 'dog', } My code has 2 while loops, since higher try-except would end the entire generator loop: def alphabet(): while True: try: letter = yield #waiting for first input from send while True: try: letter = yield DICTIONARY[letter] except KeyError: letter = yield 'default' # return 'default', if key is not found except Exception: letter = yield 'default' except KeyError: letter = yield 'default' except Exception: letter = yield 'default' However for the input: coro = alphabet() next(coro) print(coro.send('apple')) print(coro.send('banana')) print(coro.throw(KeyError)) print(coro.send('dog')) print(coro.send('d')) expected output: default default default default dog But I don't catch last default - it's None: default default default None dog What is wrong? | I believe that there is two errors. First - is that you are using while True: block twice. I would simplify the code into smth like this: def alphabet(): while True: try: letter = yield #waiting for first input from send try: word = DICTIONARY[letter] except KeyError: word = 'default' # return 'default', if key is not found except Exception: word = 'default' yield word except KeyError: yield 'default' except Exception: yield 'default' Second issue is that generator stack on previous yield function. If you'll add more next calls the issue will be fixed: coro = alphabet() next(coro) print(coro.send('apple')) next(coro) print(coro.send('banana')) next(coro) print(coro.throw(KeyError)) next(coro) print(coro.send('dog')) next(coro) print(coro.send('d')) Output: default default default default dog | 1 | 2 |
79,063,091 | 2024-10-7 | https://stackoverflow.com/questions/79063091/fastapi-stateful-dependencies | I've been reviewing the Depends docs, official example from typing import Annotated from fastapi import Depends, FastAPI app = FastAPI() async def common_parameters(q: str | None = None, skip: int = 0, limit: int = 100): return {"q": q, "skip": skip, "limit": limit} @app.get("/items/") async def read_items(commons: Annotated[dict, Depends(common_parameters)]): return commons However, in my use case, I need to serve an ML model, which will be updated at a recurring cadence (hourly, daily, etc.) The solution from docs (above) depends on a callable function; I believe that it is cached not generated each time. Nonetheless, my use case is not some scaffolding that needs to go up/down with each invocation. But rather, I need a custom class with state. The idea is that the ML model (a class attribute) can be updated scheduled and/or async and the ./invocations/ method will serve said model, reflecting updates as they occur. In current state, I use global variables. This works well when my entire application fits on a single script. However, as my application grows, I will be interested in using the router yet I'm concerned that global state will cause failures. Is there an appropriate way to pass a stateful instance of a class object across methods? See example class and method class StateManager: def __init__(self): self.bucket = os.environ.get("BUCKET_NAME", "artifacts_bucket") self.s3_model_path = "./model.joblib" self.local_model_path = './model.joblib' def get_clients(self): self.s3 = boto3.client('s3') def download_model(self): self.s3.download_file(self.bucket, self.s3_model_path, self.local_model_path) self.model = joblib.load(self.local_model_path) ... state = StateManager() state.download_model() ... @app.post("/invocations") def invocations(request: InferenceRequest): input_data = pd.DataFrame(dict(request), index=[0]) try: predictions = state.model.predict(input_data) return JSONResponse({"predictions": predictions.tolist()}, status_code=status.HTTP_200_OK) except Exception as e: return JSONResponse({"error": str(e)}, status_code=status.HTTP_500_INTERNAL_SERVER_ERROR) | Is there an appropriate way to pass a stateful instance of a class object across methods? Yes, you could use lifespan for this: import asyncio from contextlib import asynccontextmanager from fastapi import FastAPI, Depends, Request from typing import AsyncIterator, TypedDict import joblib import boto3 import os class State(TypedDict): model: any class StateManager: def __init__(self): self.bucket = os.environ.get("BUCKET_NAME", "artifacts_bucket") self.s3_model_path = "./model.joblib" self.local_model_path = './model.joblib' def get_clients(self): self.s3 = boto3.client('s3') def download_model(self): self.s3.download_file(self.bucket, self.s3_model_path, self.local_model_path) self.model = joblib.load(self.local_model_path) @asynccontextmanager async def lifespan(app: FastAPI) -> AsyncIterator[State]: state_manager = StateManager() state_manager.get_clients() state_manager.download_model() state: State = { "model": state_manager.model } yield state app = FastAPI(lifespan=lifespan) After this you could add a method for getting the model dependency: from fastapi import Request def get_model(request: Request): return request.app.state.model And finally use it from your router: @app.post("/invocations") async def invocations(request: InferenceRequest, model = Depends(get_model)): ... With this solution StateManager would be initialised only once and would be used for all routers with specific state. Next steps really depend on the way of your periodical state updating. You could add a periodical update right into your StateManager: class StateManager: ... async def update_model_periodically(self, interval: int = 3600): while True: self.download_model() print("Model updated...") await asyncio.sleep(interval) And add this periodical task to lifespan: @asynccontextmanager async def lifespan(app: FastAPI): ... asyncio.create_task(state_manager.update_model_periodically(interval=3600)) | 2 | 2 |
79,063,987 | 2024-10-8 | https://stackoverflow.com/questions/79063987/mypy-reporting-problem-namedtuple-type-as-an-attribute-is-not-supported | I have the following class, and MyPy is reporting the problem NamedTuple type as an attribute is not supported for the self.Data attribute. from collections import namedtuple from collections.abc import MutableSequence class Record(MutableSequence): def __init__(self, recordname: str, fields: list, records=None): if records is None: records = [] self.fields = fields defaults = [""] * len(fields) self.Data = namedtuple(recordname, self.fields, defaults=defaults) self._records = [self.Data(**record) for record in records] self.valid_fieldnames = set(self.fields) self.lookup_tables: dict = {} def __getitem__(self, idx): return self._records[idx] def __setitem__(self, idx, val): self._records[idx] = self.Data(**val) def __delitem__(self, idx): del self._records[idx] def __len__(self): return len(self._records) def insert(self, idx, val): a = self._records[:idx] b = self._records[idx:] if isinstance(val, self.Data): self._records = a + [val] + b else: self._records = a + [self.Data(**val)] + b I don't understand what the issue is that MyPy is reporting on, and haven't had any luck looking online. The code actually runs as expected without any errors; it's only MyPy that reports an issue and I'm just curious to understand what it is. Some additional info: I am using Python-3.12 in VS Code with the official MyPy Type Checker extension, which is what's reporting the problem. I tried a few things to see if the error would go away, but none of them worked. For example, I removed MutableSequence as a parent class, I deleted the defaults from the namedtuple constructor, and I removed all the dunder methods, but the error persisted. | This makes a lot of sense if you consider how namedtuple works. It's a code-generator, and it creates a new type. Therefore, if you use it during the class init like this, you will be generating a new type per-instance, and different instances of Record will have different types for self.Data, even if they have the same typename and fields. A static type checker such as Mypy can not do anything useful with attribute value types being created dynamically at runtime. Further, the if isinstance(val, self.Data) logic in the insert method is likely to have some unpleasant surprises, because the types associated with different record instances will have separate identities. | 2 | 2 |
79,063,140 | 2024-10-7 | https://stackoverflow.com/questions/79063140/modulenotfounderror-no-module-named-distutils-msvccompiler-when-trying-to-ins | I'm working inside a conda environment and I'm trying to downgrade numpy to version 1.16, but when running pip install numpy==1.16 I keep getting the following error: $ pip install numpy==1.16 Collecting numpy==1.16 Downloading numpy-1.16.0.zip (5.1 MB) ββββββββββββββββββββββββββββββββββββββββ 5.1/5.1 MB 10.8 MB/s eta 0:00:00 Preparing metadata (setup.py) ... done Building wheels for collected packages: numpy Building wheel for numpy (setup.py) ... error error: subprocess-exited-with-error Γ python setup.py bdist_wheel did not run successfully. β exit code: 1 β°β> [17 lines of output] Running from numpy source directory. /tmp/pip-install-jdof0z8r/numpy_4597057bbb504aa18b7bda112f0aa37f/numpy/distutils/misc_util.py:476: SyntaxWarning: "is" with a literal. Did you mean "=="? return is_string(s) and ('*' in s or '?' is s) Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 34, in <module> File "/tmp/pip-install-jdof0z8r/numpy_4597057bbb504aa18b7bda112f0aa37f/setup.py", line 415, in <module> setup_package() File "/tmp/pip-install-jdof0z8r/numpy_4597057bbb504aa18b7bda112f0aa37f/setup.py", line 394, in setup_package from numpy.distutils.core import setup File "/tmp/pip-install-jdof0z8r/numpy_4597057bbb504aa18b7bda112f0aa37f/numpy/distutils/core.py", line 26, in <module> from numpy.distutils.command import config, config_compiler, \ File "/tmp/pip-install-jdof0z8r/numpy_4597057bbb504aa18b7bda112f0aa37f/numpy/distutils/command/config.py", line 19, in <module> from numpy.distutils.mingw32ccompiler import generate_manifest File "/tmp/pip-install-jdof0z8r/numpy_4597057bbb504aa18b7bda112f0aa37f/numpy/distutils/mingw32ccompiler.py", line 34, in <module> from distutils.msvccompiler import get_build_version as get_build_msvc_version ModuleNotFoundError: No module named 'distutils.msvccompiler' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for numpy Running setup.py clean for numpy error: subprocess-exited-with-error Γ python setup.py clean did not run successfully. β exit code: 1 β°β> [10 lines of output] Running from numpy source directory. `setup.py clean` is not supported, use one of the following instead: - `git clean -xdf` (cleans all files) - `git clean -Xdf` (cleans all versioned files, doesn't touch files that aren't checked into the git repo) Add `--force` to your command to use it anyway if you must (unsupported). [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed cleaning build dir for numpy Failed to build numpy ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (numpy) How can I resolve this? | What seemed to solve this issue was installing a specific version of setuptools (<65) into the conda environment: conda install "setuptools <65" | 2 | 6 |
79,052,036 | 2024-10-3 | https://stackoverflow.com/questions/79052036/bloomberg-python-query-returning-unknown-valueerror | I am trying to query specific Bloomberg tickers and write them into an excel. The code itself is pretty straightforward and I have gotten it to work for all tickers except "Move Index". When querying for this particular ticker, I am getting the exception: raise ValueError(data) ValueError: []. I blocked out the particular query code with try-except to get more information about the error, but it's still returning the same exception with no further detail. import os from blp import blp import pdblp import blpapi import datetime import xlwings import pandas as pd import win32com.client import pythoncom import sys import os from PIL import ImageGrab with open(os.path.join(os.getenv("TEMP"), "Bloomberg", "log", "bbcomm.log"),"r") as f: try: port = f.read().split("BLOOMBERG COMMUNICATION SERVER READY on Port: ")[-1].split("\n")[0] except: port = 8194 con = pdblp.BCon(debug = False, port = 8194, timeout = 10000000) con.start() today = datetime.datetime.today() value_growth = con.bdh(['SVX Index','SGX Index'],'PX_Last',elms=[("periodicitySelection", "WEEKLY")], start_date="20220107",end_date= "20800101") ten_year = con.bdh(['USGG10YR Index', 'USGG2YR Index'], 'PX_Last',elms=[("periodicitySelection", "WEEKLY")], start_date="20220107",end_date= "20800101" ) spx_drivers = con.bdh(['SPX Index','USGG2YR Index'],'PX_Last',elms=[("periodicitySelection", "WEEKLY")], start_date="20220107",end_date= "20800101" ) bloomberg = con.bdh(['GBTP10YR Index','GDBR10 Index','BICLB10Y Index','VIX Index','USGGT05Y Index','.EUCCBS3M G Index','CSI BARC Index','LP02OAS Index','V2X Index','SPX Index', 'SGX Index','S5INDU Index','RTY Index','CO1 Comdty','HG1 Comdty','USGGBE05 Index','XAU Curncy','XBTUSD Curncy'],'PX_Last', elms=[("periodicitySelection", "WEEKLY")], start_date="20220107",end_date= "20800101") bloomberg1 = con.bdh('Move index', 'PX_Last',elms=[("periodicitySelection", "WEEKLY")], start_date="20220107",end_date= "20240101") bloomberg_merged = pd.concat([bloomberg,bloomberg1], axis = 1) with pd.ExcelWriter(r'FILE PATH', engine='openpyxl', if_sheet_exists='overlay', mode='a') as writer: value_growth.to_excel(writer, sheet_name='Value_Growth', startcol=0, startrow= 581, header = False) ten_year.to_excel(writer, sheet_name='10Y_drivers', startcol = 0, startrow = 216, header = False) spx_drivers.to_excel(writer, sheet_name= 'SPX_Drivers', startcol = 0, startrow = 216, header = False) bloomberg_merged.to_excel(writer, sheet_name='Bloomberg', startcol=0, startrow=579, header = False) My questions are: I know its a very niche question but has anyone had any success querying Move Index in Bloomberg through python? How can I better understand what the actual ValueError is? | Short answer: change the ticker to MOVE Index or move Index. Long answer: The Bloomberg API does not seem to accept mixed case tickers (but does accept mixed case fields). Using the low-level blpapi: import blpapi sessionOptions = blpapi.SessionOptions() sessionOptions.setServerHost('localhost') sessionOptions.setServerPort(8194) session = blpapi.Session(sessionOptions) session.start() session.openService('//blp/refdata') svc = session.getService('//blp/refdata') request = svc.createRequest('ReferenceDataRequest') request.append('securities','move Index') request.append('fields','PX_Last') session.sendRequest(request) done = False while not done: event = session.nextEvent() if event.eventType() == blpapi.event.Event.RESPONSE: for msg in event: print(msg) done = True else: pass Elicits this successful response from the API for move Index (or for MOVE Index) CID: {[ valueType=AUTOGEN classId=0 value=6 ]} RequestId: bfd8d841-7e52-41bd-b390-b626f4dff3bb ReferenceDataResponse = { securityData[] = { securityData = { security = "move Index" eidData[] = { } fieldExceptions[] = { } sequenceNumber = 0 fieldData = { PX_Last = 101.160000 } } } } Changing the ticker to Move Index gives this: CID: {[ valueType=AUTOGEN classId=0 value=7 ]} RequestId: 71428563-0763-4f8f-bcb1-346e88682c91 ReferenceDataResponse = { securityData[] = { securityData = { security = "Move Index" eidData[] = { } securityError = { source = "21932:rsfrdsvc2" code = 43 category = "BAD_SEC" message = "Unknown/Invalid Security [nid:21932]" subcategory = "INVALID_SECURITY" } fieldExceptions[] = { } sequenceNumber = 0 fieldData = { } } } } Interestingly, this does not happen with the Excel BDP() function, so I imagine the Excel addin is converting any tickers to all upper or all lower case. As for the ValueError, this is being generated by the pdblp API wrapper when the response from the server contains a "securityError" entry in the securityData section: From pdblp.py: data = [] ... has_security_error = 'securityError' in d['securityData'] has_field_exception = len(d['securityData']['fieldExceptions']) > 0 if has_security_error or has_field_exception: raise ValueError(data) Arguably the exception data could be more informative and include information on the security and the API error! | 2 | 1 |
79,062,434 | 2024-10-7 | https://stackoverflow.com/questions/79062434/conda-downgrading-numpy-during-package-update | I have a virtual Conda environment named dev that was created using the following YAML file: # *** dev.yml *** name: dev channels: - defaults # Check this channel first - conda-forge # Fallback to conda-forge if packages are not available in defaults dependencies: - python==3.12 - numpy>=2.0.1 # ...more libraries without fixed versions The environment was created without any dependency issues, and I successfully have NumPy at version 2.0.1. However, when I try to update the packages using the command: conda update --all I get the following output suggesting that NumPy will be downgraded: The following packages will be DOWNGRADED: numpy 2.0.1-py312h2809609_1 --> 1.26.4-py312h2809609_0 numpy-base 2.0.1-py312he1a6c75_1 --> 1.26.4-py312he1a6c75_0 Why is Conda trying to downgrade NumPy during the update? I understand that dependencies might conflict during an update, but I thought this would be caught during the initial environment creation which runs without dependency warnings. Is there a way to conditionally update packages where dependencies are solvable while keeping the current NumPy version? | It is likely that an update to one of the packages might have pinned the newest version to use numpy less that v2. To override this behavior, you can pin a version during the update: conda update --all numpy=2.* Or you can pin the package by adding the package spec to the file at <environment>/conda-meta/pinned before you do the update echo numpy=2.* >>%CONDA_PREFIX%/conda-meta/pinned conda update --all | 2 | 1 |
79,062,305 | 2024-10-7 | https://stackoverflow.com/questions/79062305/check-for-duplicates-in-a-column-while-excluding-a-sub-string | I am looking for an optimized way of checking for any duplicates in a Panda dataframe column, but excluding a given position in every element of that column. In the example there is a duplication in 'id1_ver1_ready' if we exclude the version number ('id1_ver1_ready' <-> 'id1_ver3_ready'). Same for ( 'id5_ver1_unknown' <-> 'id5_ver6_unknown') from numpy import nan df = pd.DataFrame({'ID': ['id1_ver1_ready', 'id2_ver1_unknown', 'id3_ver1_processed', 'id1_ver3_ready', 'id4_ver1_ready', 'id5_ver1_unknown', 'id5_ver6_unknown', 'id6_ver1_processed']})enter code here | Another possible solution, which filters df by removing duplicate entries based on the first and third parts of the ID column: The str.split('_', expand=True) method splits the ID strings at the underscores into separate columns, resulting in a dataframe where each part of the ID is in its own column. The duplicated([0, 2]) method then identifies rows that have the same values in the first (index 0) and third (index 2) columns, marking duplicates as True. The tilde ~ operator negates this boolean mask, so the entire expression returns only the rows of df that have unique combinations of values in the first and third parts of the ID. df[~df['ID'].str.split('_', expand=True).duplicated([0,2])] Output: ID 0 id1_ver1_ready 1 id2_ver1_unknown 2 id3_ver1_processed 4 id4_ver1_ready 5 id5_ver1_unknown 7 id6_ver1_processed | 2 | 1 |
79,061,156 | 2024-10-7 | https://stackoverflow.com/questions/79061156/sympy-tr8-trigonometric-linearization-not-working-as-expected | I'm using SymPy version 1.13.3 to integrate an expression involving trigonometric functions. My goal is to linearize trigonometric products using the TR8 function from sympy.simplify.fu. However, the results with and without linearization should be identical, but they are not. Here's a simplified version of my code: import sympy as sp import numpy as np import matplotlib.pyplot as plt from sympy.simplify.fu import TR8 as linearize_trigo_expr from sympy import cos, sin # Definitions JMax = 3 nb_vals_t = 100 t_vals = np.linspace(0, 0.2, nb_vals_t) s, t = sp.symbols('s t') u = -12.33203125*sin(t) + 5.6064453125*sin(4*t) - 2.2708740234375*sin(9*t) + sin(12*t) - 0.103253126144409*sin(16*t) - 1.39863395690918*cos(t) + 1.81103515625*cos(4*t) - 0.4464111328125*cos(9*t) + 0.0336227416992188*cos(16*t) integrand = u * (u.subs(t, s) * sum(sp.sin(j**2 * (t - s)) / j**4 for j in range(1, JMax + 1))) # Without trigonometric linearization integral = sp.integrate(integrand, (s, 0, t)) integral_values = [] for t_val in t_vals: integral_values.append(integral.subs(t, t_val)) plt.plot(t_vals, integral_values) plt.xlabel('t') plt.ylabel('Integral') plt.title("Test without trigonometric linearization") plt.grid(True) plt.show() # With trigonometric linearization integrand_simple = linearize_trigo_expr(integrand) integral_simple = sp.integrate(integrand_simple, (s, 0, t)) integral_simple_values = [] for t_val in t_vals: integral_simple_values.append(integral_simple.subs(t, t_val)) plt.plot(t_vals, integral_simple_values) plt.xlabel('t') plt.ylabel('Integral') plt.title("Test with trigonometric linearization") plt.grid(True) plt.show() Both graphs should be identical, but they differ. Does anyone know why the trigonometric linearization isn't working as expected, and if there is a solution to make it work properly? NB: this is probably not a MWE, but I have tried to find a simple example showing the mistakes that the TR8 function does. Here are the graphs: | The problem is that you are using float numbers with symbolic integration, which sometimes leads to wrong results. When doing symbolic integration, always use exact numbers. Use sympy's nsimplify to convert floats to rational. integrand_2 = integrand.nsimplify() integrand_3 = linearize_trigo_expr(integrand_2) r2 = integrand_2.integrate((s, 0, t)) r3 = integrand_3.integrate((s, 0, t)) sp.plot(r2, r3, (t, 0, 0.2)) The results are the same. | 1 | 2 |
79,046,408 | 2024-10-2 | https://stackoverflow.com/questions/79046408/remove-special-character-and-units-form-pandas-column-name-with-python | I'm working on a script to convert a data file from one format to another. I need to remove the special characters from the column headers. I am using Pandas to read a CSV file with the below structure. I'm looking for a tidy way to remove the [units] form the column name. Data File: Date ,Time ,app1_sum [Ml] ,app1_q [l/s] ,app1_h [m] ,app1_a [mΒ²] ,app1_v [m/s] ,app1_t_water [Β°C] 02.10.2024 ,19:05:00 ,57293.336 ,620.78 ,0.436 ,0.586 ,1.059 ,18.2 My goal is to have column names stripped down to their simplest form: Date,Time,app1_sum,app1_q,app1_h,app1_a,app1_v,app1_t_water My current approach is to remove the brackers first, then remove the remining units one at a time. df.columns = df.columns.str.replace('[', '') df.columns = df.columns.str.replace(']', '') df.columns = df.columns.str.replace(' Ml', '') I also tried to use regex to remove one unit at a time, this works but doesn't feel right. df.rename(columns=lambda x: re.sub(r'\[Ml\]', '', x), inplace=True) df.rename(columns=lambda x: re.sub(r'\[l/s\]', '', x), inplace=True) df.rename(columns=lambda x: re.sub(r'\[m\]', '', x), inplace=True) df.rename(columns=lambda x: re.sub(r'\[mΒ²\]', '', x), inplace=True) df.rename(columns=lambda x: re.sub(r'\[m/s\]', '', x), inplace=True) df.rename(columns=lambda x: re.sub(r'\[Β°C\]', '', x), inplace=True) | You can use the following code to replace all brackets and their content in the columns names: df.rename(columns=lambda x: re.sub(r'\[[^\]]+\]', '', x), inplace=True) \[ will match the opening bracket [ [^\]]+ will match an abtritrary number of every character except ] \] will match the closing bracket The \ are needed to escape the brackets since they are part of regex syntax | 2 | 2 |
79,061,819 | 2024-10-7 | https://stackoverflow.com/questions/79061819/check-if-any-value-in-a-polars-dataframe-is-true | This is quite a simple ask but I can't seem to find any clear simplistic solution to this, feels like I'm missing something. Let's say I have a DataFrame of type df = pl.from_repr(""" ┌───────┬───────┬───────┐ │ a │ b │ c │ │ --- │ --- │ --- │ │ bool │ bool │ bool │ ╞═══════╪═══════╪═══════╡ │ false │ true │ false │ │ false │ false │ false │ │ false │ false │ false │ └───────┴───────┴───────┘ """) How do I do a simple check if any of the values in the DataFrame is True? Some solutions I have found are selection = df.select(pl.all().any(ignore_nulls=True)) or selection = df.filter(pl.any_horizontal()) and then check in that row any(selection.row(0)) It just seems like so many steps for a single check | These two options are a bit shorter and stay in pure Polars. # Unpivot all the booleans into a single "value" column # Pull the "value" column out as a Series and do the any df.unpivot()["value"].any() # pl.all().any() checks for any True values per column # pl.any_horizontal() checks horizontally per row, reducing to a single value df.select(pl.any_horizontal(pl.all().any())).item() To your question This is quite a simple ask but I can't seem to find any clear simplistic solution to this, feels like I'm missing something. It just seems like so many steps for a single check You are not missing anything. The reason it feels like a bit more work is because a DataFrame can be thought of more like a (database) table. Generally you have different columns of potentially different types, and you want to do different calculations with different columns. So reducing both dimensions into a single value in a single step is just not something typically offered by DataFrame libraries. Numpy is much better suited if you have matrices and does offer this in a single step. arr = df.to_numpy() arr.any() # True | 3 | 2 |
79,057,984 | 2024-10-5 | https://stackoverflow.com/questions/79057984/multiple-versions-of-single-jupyter-notebook-for-different-audiences-e-g-tutor | I'm working on creating assignments for students using Jupyter Notebooks. My goal is to generate different versions of the same Notebook to distribute to tutors and students. I need certain cells, such as those containing exercise solutions, to be included only in the tutor's version, while other cells should be exclusive to the student's version. Sometimes, I want to remove entire cells, and other times, just the output. I believe this can be achieved by tagging cells and using nbconvert to save the Notebook (.ipynb) into another .ipynb file. A similar method was suggested for converting .ipynb to HTML here. I'm interested in hearing if anyone has experience, or creative new ideas, in creating different versions of a single Jupyter Notebook for various audiences. My project is quite extensive, so I need an automated solution. While I could manually create different versions by copying and pasting if I only had a few Notebooks, this approach isnβt feasible for a larger scale. Following initial comments, I tried assigning the tag "remove_cell" to some of the Notebook cells (example notebook here) and then removing them using the suggestion of this answer jupyter nbconvert nbconvert-example.ipynb --TagRemovePreprocessor.remove_cell_tags='{"remove_cell"}' However, the tagged cells are not removed when saving to Jupyter Notebook with the --to notebook option. They are only removed with the --to html option. | This is all built in to nbconvert. You can just do: # ensure all output has been produced jupyter nbconvert tutor-source-only.ipynb --to notebook \ --execute --output tutor-with-output.ipynb # remove cells that you want the students to figure out on their own and # optionally remove the output of some/all cells, so the students have to # run the notebook jupyter nbconvert tutor-with-output.ipynb \ --to notebook \ --output student.ipynb \ --TagRemovePreprocessor.enabled True \ --TagRemovePreprocessor.remove_cell_tags remove_cell \ --TagRemovePreprocessor.remove_all_outputs_tags remove_output I would advise against using TagRemovePreprocessor.remove_input_tags as the source is kept in the file, but the GUI is told not to show it. For any cells where you want to show the students what the output should look like, but not how to produce it, I would recommend writing your own preprocessor that takes the cell, strips the input, and converts the cell to a markdown cell, copying across the output. That way the "output" remains when the students run the notebook. You can find more on how to add and run your own preprocessors here: https://nbconvert.readthedocs.io/en/latest/nbconvert_library.html#Custom-Preprocessors | 1 | 1 |
79,059,427 | 2024-10-6 | https://stackoverflow.com/questions/79059427/is-np-zeros-is-the-fastest-way-to-initiate-a-1d-numpy-boolean-array-of-trues | Trying to find the fastest method to initiate a 1D numpy array of True values. %timeit -n 100000 -r 30 np.ones(10000, dtype=bool) returns 750 ns ± 35.7 ns whereas %timeit -n 100000 -r 30 ~np.zeros(10000, dtype=bool) returns 682 ns ± 7.47 ns Behaviour probably depends on the array size but is there a general rule of thumb to which one to choose? Are there other faster methods? | np.frombuffer(bytearray().ljust(10000, b"\x01"), dtype=bool) is faster than empty or full. Benchmark results for NumPy 1.22.4: $ python3 -m timeit -s 'import numpy as np' 'np.ones(10000, dtype=bool)' 100000 loops, best of 5: 2.12 usec per loop $ python3 -m timeit -s 'import numpy as np' '~np.zeros(10000, dtype=bool)' 200000 loops, best of 5: 1.66 usec per loop $ python3 -m timeit -s 'import numpy as np' 'np.full(10000, True)' 100000 loops, best of 5: 2.01 usec per loop $ python3 -m timeit -s 'import numpy as np' 'a = np.empty(10000, dtype=bool); a[:] = True' 200000 loops, best of 5: 1.25 usec per loop $ python3 -m timeit -s 'import numpy as np' 'np.frombuffer(bytearray().ljust(10000, b"\x01"), dtype=bool)' 500000 loops, best of 5: 636 nsec per loop This (ab)uses the fact that bytearray.ljust calls memset under the hood, which means that the bytearray can be initialized very quickly. The rest is just overhead from constructing the bytearray and the NumPy array. | 3 | 5 |
79,059,021 | 2024-10-6 | https://stackoverflow.com/questions/79059021/incrementing-to-the-last-decimal-in-python | I want to write function in python when given float or string for example 0.003214 to return me 0.003215, but if I give it 0.00321 to return 0.003211, it will apply on other floats also like 0.0000324 -> 0.00003241. What is wrong with my solution and how do I fix it? def add_one_at_last_digit(v): after_comma = Decimal(v).as_tuple()[-1]*-1 add = Decimal(1) / Decimal(10**after_comma) return Decimal(v) + add | Here is a working solution from decimal import Decimal def add_one_at_last_digit(v: float | str) -> float: d = Decimal(str(v)) decimal_places = abs(d.as_tuple().exponent) add = Decimal(f"1e-{decimal_places}") result = (d + add).quantize(Decimal(f"1e-{decimal_places}")) return float(result) The key ideas are: d = Decimal(str(v)) gives the a Decimal object with the exact number of decimal places. The string conversion does the trick here. For example >>> Decimal(213.4) Decimal('213.400000000000005684341886080801486968994140625') >>> Decimal("213.4") Decimal('213.4') You did not do this in your solution which is why your after_comma is wrong. abs(d.as_tuple().exponent) gives the exact number of decimal places. (d + add).quantize(Decimal(f"1e-{decimal_places}")) again quantizes it to remove imprecision. | 4 | 11 |
79,057,153 | 2024-10-5 | https://stackoverflow.com/questions/79057153/python-multiprocessing-with-multiple-locks-slower-than-single-lock | I am making experiments with multiprocessing in Python. I wrote some code that requires concurrent modification of 3 different variables (a dict, a float and an int), shared across the different processes. My understanding of the works behind locking tells me that if I have 3 different shared variables, it will be more efficient to assign a lock to each one. After all, why should process 2 wait to modify variable A just because process 1 is modifying variable B? It makes sense to me that if you need to lock variable B, then A should still be accessible to other processes. I run the 2 toy examples below, based on a real program I'm writing, and to my surprise the code runs faster with a single lock! Single lock: 2.1 seconds import multiprocessing as mp import numpy as np import time class ToyClass: def __init__(self, shared_a, shared_b): self.a = shared_a self.b = shared_b def update_a(self, key, n, lock): with lock: if key not in self.a: self.a[key] = np.zeros(4) self.a[key][n] += 1 def update_b(self, lock): with lock: self.b.value = max(0.1, self.b.value - 0.01) def run_episode(toy, counter, lock): key = np.random.randint(100) n = np.random.randint(4) toy.update_a(key, n, lock) toy.update_b(lock) with lock: counter.value += 1 if __name__ == "__main__": num_episodes = 1000 num_processes = 4 t0 = time.time() with mp.Manager() as manager: shared_a = manager.dict() shared_b = manager.Value('d', 0) counter = manager.Value('i', 0) toy = ToyClass(shared_a=shared_a, shared_b=shared_b) # Single lock lock = manager.Lock() pool = mp.Pool(processes=num_processes) for _ in range(num_episodes): pool.apply_async(run_episode, args=(toy, counter, lock)) pool.close() pool.join() tf = time.time() print(f"Time to compute single lock: {tf - t0} seconds") Multiple locks: 2.85 seconds!! import multiprocessing as mp import numpy as np import time class ToyClass: ## Same definition as for single lock def __init__(self, shared_a, shared_b): self.a = shared_a self.b = shared_b def update_a(self, key, n, lock): with lock: if key not in self.a: self.a[key] = np.zeros(4) self.a[key][n] += 1 def update_b(self, lock): with lock: self.b.value = max(0.1, self.b.value - 0.01) def run_episode(toy, counter, lock_a, lock_b, lock_count): key = np.random.randint(100) n = np.random.randint(4) toy.update_a(key, n, lock_a) toy.update_b(lock_b) with lock_count: counter.value += 1 if __name__ == "__main__": num_episodes = 1000 num_processes = 4 t0 = time.time() with mp.Manager() as manager: shared_a = manager.dict() shared_b = manager.Value('d', 0) counter = manager.Value('i', 0) toy = ToyClass(shared_a=shared_a, shared_b=shared_b) # 3 locks for 3 shared variables lock_a = manager.Lock() lock_b = manager.Lock() lock_count = manager.Lock() pool = mp.Pool(processes=num_processes) for _ in range(num_episodes): pool.apply_async(run_episode, args=(toy, counter, lock_a, lock_b, lock_count)) pool.close() pool.join() tf = time.time() print(f"Time to compute multi-lock: {tf - t0} seconds") What am I missing here? Is there a computational overhead when switching between locks that outweighs any potential benefit? These are just flags, how can it be? Note: I know the code runs much faster when single process/thread, but this is part of an experiment precisely to understand the downsides of multiprocessing. | This has nothing to do with the locking, you are just sending 3 locks per call instead of 1, which is 3 times the transmission overhead. To verify this you can keep sending the 3 locks but only use 1 of them, you will get the same time as using the 3 locks; or change 2 of the locks to be simple Manager.Value objects, still the same time as 3 locks. The locking part plays no role in this, you are just sending the locks over and over, which you can avoid by using an initializer when spawning the pool. lock_a = None lock_b = None lock_counter = None def initialize_locks(val1,val2,val3): global lock_a, lock_b, lock_counter lock_a = val1 lock_b = val2 lock_counter = val3 ... pool = mp.Pool(processes=num_processes, initializer=initialize_locks, initargs=(lock_a, lock_b, lock_counter,)) Also if you are using the initializer you should use multiprocessing.Lock instead, as it is faster than Manager.Lock; the same applies to multiprocessing.Value instead of Manager.Value | 2 | 1 |
79,058,482 | 2024-10-6 | https://stackoverflow.com/questions/79058482/is-there-a-way-to-prevent-repetition-of-similar-blocks-of-code-in-these-hashing | I created this program to calculate the sha256 or sha512 hash of a given file and digest calculations to hex. It consists of 5 files, 4 are custom modules and 1 is the main. I have two functions in different modules but the only difference in these functions is one variable. See below: From sha256.py def get_hash_sha256(): global sha256_hash filename = input("Enter the file name: ") sha256_hash = hashlib.sha256() with open(filename, "rb") as f: for byte_block in iter(lambda: f.read(4096),b""): sha256_hash.update(byte_block) # print("sha256 valule: \n" + Color.GREEN + sha256_hash.hexdigest()) print(Color.DARKCYAN + "sha256 value has been calculated") color_reset() From sha512.py def get_hash_sha512(): global sha512_hash filename = input("Enter the file name: ") sha512_hash = hashlib.sha512() with open(filename, "rb") as f: for byte_block in iter(lambda: f.read(4096),b""): sha512_hash.update(byte_block) # print("sha512 valule: \n" + Color.GREEN + sha512_hash.hexdigest()) print(Color.DARKCYAN + "sha512 value has been calculated") color_reset() These functions are called in my simple_sha_find.py file: def which_hash(): sha256_or_sha512 = input("Which hash do you want to calculate: sha256 or sha512? \n") if sha256_or_sha512 == "sha256": get_hash_sha256() verify_checksum_sha256() elif sha256_or_sha512 == "sha512": get_hash_sha512() verify_checksum_sha512() else: print("Type either sha256 or sha512. If you type anything else the program will close...like this.") sys.exit() if __name__ == "__main__": which_hash() As you can see, the functions that will be called are based on the users input. If the user types sha256, then it triggers the functions from sha256.py, but if they type sha512 then they trigger the functions from sha512.py The application works, but I know I can make it less redundant but I do not know how. How can I define the get_hash_sha---() and verify_checksum_sha---() functions once and they perform the appropriate calculations based on whether the user chooses sha256 or sha512? I have performed a few variations of coding this program. I have created it as one single file as well as creating different modules and calling functions from these modules. In either case I've had the repetition but I know that tends to defeat the purpose of automation. | You could union these 2 functions into a single one: import hashlib def get_hash(hash_type): if hash_type == 'sha256': hash_obj= hashlib.sha256() elif hash_type == 'sha512': hash_obj = hashlib.sha512() else: print("Invalid hash type.Please choose 'sha256'or'sha512'") return filename = input("Enter the fileename: ") try: with open(filename,"rb") as f: for byte_block in iter(lambda: f.read(4096), b""): hash_obj.update(byte_block) print(Color.DARKCYAN + f"{hash_type} value has been calculated") color_reset() except FileNotFoundError: print(f"File '{filename}' not found.") def which_hash(): sha_type =input("Which hash do you want to calculate: sha256 or sha512? \n").lower() if sha_type in ['sha256', 'sha512']: get_hash(sha_type) verify_checksum(sha_type) else: print("Type sha256 or sha512. If you type anything else program will close. 
.") sys.exit() if __name__ == "__main__": which_hash() Also its a best practice to use Enum instead of plain text: from enum import Enum class HashType(Enum): SHA256 = 'sha256' SHA512 = 'sha512' So you could change if hash_type == HashType.SHA256: hash_obj = hashlib.sha256() elif hash_type == HashType.SHA512: hash_obj = hashlib.sha512() def which_hash(): sha_type_input = input("Which hash do you want to calculate: sha256 or sha512? \n").lower() try: sha_type = HashType(sha_type_input) get_hash(sha_type) verify_checksum(sha_type) except ValueError: print("Type either sha256 or sha512. If you type anything else the program will close.") sys.exit() | 5 | 5 |
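A related simplification, offered as a sketch rather than a replacement for the answer above: hashlib.new() accepts the algorithm name as a string, so the if/elif chain over hash types can be dropped entirely. The function name and the file name in the usage comment are illustrative choices, not from the original post.

import hashlib

def compute_file_digest(filename: str, algorithm: str = "sha256") -> str:
    # hashlib.new raises ValueError if the algorithm name is not supported
    hash_obj = hashlib.new(algorithm)
    with open(filename, "rb") as f:
        for byte_block in iter(lambda: f.read(4096), b""):
            hash_obj.update(byte_block)
    return hash_obj.hexdigest()

# usage: compute_file_digest("somefile.bin", "sha512")
# on Python 3.11+ the read loop can be replaced with hashlib.file_digest(f, algorithm)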
79,056,942 | 2024-10-5 | https://stackoverflow.com/questions/79056942/can-i-improve-my-numpy-solution-to-an-exercise | I have been asked to use the following set of column indices: y = np.array([3, 0, 4, 1]) to turn into 1 all the elements in the following matrix: x = np.zeros(shape = (4, 5)) that have y as starting column and rows given by the position of y. Just to be clear. The final result has to be the following: [[0. 0. 0. 1. 1.] [1. 1. 1. 1. 1.] [0. 0. 0. 0. 1.] [0. 1. 1. 1. 1.]] For example: y[0] = 3, then row 0, columns 3 and 4 need to be equal to 1. I did it like this: for (idx, num) in enumerate(y): x[idx, num:] = 1 Can this result be written differently and/or improved by using other Numpy functions (for example, using vectorization)? | Lots of ways of doing this. For example, since x is basically a boolean mask, you can compute a mask and turn it into whatever type you want: x = (np.arange(5) >= y[:, None]).astype(float) You might also use np.where to avoid the conversion: x = np.where(np.arange(5) >= y[:, None], 1.0, 0.0) | 4 | 3 |
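A quick sanity check that the vectorized mask reproduces the loop from the question; nothing here goes beyond the arrays already defined in that exchange.

import numpy as np

y = np.array([3, 0, 4, 1])

x_loop = np.zeros(shape=(4, 5))
for idx, num in enumerate(y):
    x_loop[idx, num:] = 1

x_vec = (np.arange(5) >= y[:, None]).astype(float)

print(np.array_equal(x_loop, x_vec))   # True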
79,057,529 | 2024-10-5 | https://stackoverflow.com/questions/79057529/indexerror-index-7-is-out-of-bounds-for-axis-0-with-size-7 | I am trying to assess whether the lips of a person are moving too much while the mouth is closed (to conclude it is chewing). The mouth closed part is done without any issue, but when I try to assess the lip movement through landmarks (dlib) there seems to be a problem with the last landmark of the mouth. Inspired by the mouth example (https://github.com/mauckc/mouth-open/blob/master/detect_open_mouth.py#L17), I wrote the following function: def lips_aspect_ratio(shape): # grab the indexes of the facial landmarks for the lip (mStart, mEnd) = (61, 68) lip = shape[mStart:mEnd] print(len(lip)) # compute the euclidean distances between the two sets of # vertical lip landmarks (x, y)-coordinates # to reach landmark 68 I need to get lib[7] not lip[6] (while I get lip[7] I get IndexOutOfBoundError) A = dist.euclidean(lip[1], lip[6]) # 62, 68 B = dist.euclidean(lip[3], lip[5]) # 64, 66 # compute the euclidean distance between the horizontal # lip landmark (x, y)-coordinates C = dist.euclidean(lip[0], lip[4]) # 61, 65 # compute the lip aspect ratio mar = (A + B) / (2.0 * C) # return the lip aspect ratio return mar The landmark of the lips are (61, 68), when I extract the lip as lip = shape[61:68] and try to access the last landmark as lip[7] I get the following error: IndexError: index 7 is out of bounds for axis 0 with size 7 Why is that? and How to get the last landmark of the lip/face | lip = shape[61:68] The slices exclude the end element. So you got 7 elements: 61,62,63,64, 65,66,67. And len(lip) == 7 confirms that. If there truly are 8 points for the lip shape, and they include element 68, the slice should be: lip = shape[61:69] assert(len(range(61, 69) == 8) assert(len(lip) == 8) Note that if shape is sufficiently long, then len(shape[a:b:c]) == len(range(a, b, c)). That's because the range function acts like taking a slice out of an infinitely long list of integers (with caveats). So, the problem is a classic off-by-one, not much to do with AI/image analysis :) | 3 | 3 |
79,056,727 | 2024-10-5 | https://stackoverflow.com/questions/79056727/lalr-grammar-for-transforming-text-to-csv | I have a processor trace output that has the following format: Time Cycle PC Instr Decoded instruction Register and memory contents 905ns 86 00000e36 00a005b3 c.add x11, x0, x10 x11=00000e5c x10:00000e5c 915ns 87 00000e38 00000693 c.addi x13, x0, 0 x13=00000000 925ns 88 00000e3a 00000613 c.addi x12, x0, 0 x12=00000000 935ns 89 00000e3c 00000513 c.addi x10, x0, 0 x10=00000000 945ns 90 00000e3e 2b40006f c.jal x0, 692 975ns 93 000010f2 0d01a703 lw x14, 208(x3) x14=00002b20 x3:00003288 PA:00003358 985ns 94 000010f6 00a00333 c.add x6, x0, x10 x6=00000000 x10:00000000 995ns 95 000010f8 14872783 lw x15, 328(x14) x15=00000000 x14:00002b20 PA:00002c68 1015ns 97 000010fc 00079563 c.bne x15, x0, 10 x15:00000000 Allegedly, this is \t separated, however this is not the case, as inline spaces are found here and there. I want to transform this into a .csv format with a header row and the entries following. For example: Time,Cycle,PC,Instr,Decoded instruction,Register and memory contents 905ns,86,00000e36,00a005b3,"c.add x11, x0, x10", x11=00000e5c x10:00000e5c 915ns,87,00000e38,00000693,"c.addi x13, x0, 0", x13=00000000 ... To do that, I am using Lark in python3 (>=3.10). And I came up with the following grammar for the source format: Lark Grammar start: header NEWLINE entries+ # Header is expected to be # Time\tCycle\tPC\tInstr\tDecoded instruction\tRegister and memory contents header: HEADER_FIELD+ # Entries are expected to be e.g., # 85ns 4 00000180 00003197 auipc x3, 0x3000 x3=00003180 entries: TIME \ CYCLE \ PC \ INSTR \ DECODED_INSTRUCTION \ reg_and_mem? NEWLINE reg_and_mem: REG_AND_MEM+ /////////////// // TERMINALS // /////////////// HEADER_FIELD: / [a-z ]+ # Characters that are optionally separated by a single space /xi TIME: / [\d\.]+ # One or more digits [smunp]s # Time unit /x CYCLE: INT PC: HEXDIGIT+ INSTR: HEXDIGIT+ DECODED_INSTRUCTION: / [a-z\.]+ # Instruction mnemonic ([-a-z0-9, ()]+)? # Optional operand part (rd,rs1,rs2, etc.) (?= # Stop when x[0-9]{1,2}[=:] # Either you hit an xN= or xN: |PA: # or you meet PA: |\s+$ # or there is no REG_AND_MEM and you meet a \n ) /xi REG_AND_MEM: / (?:[x[0-9]+|PA) [=|:] [0-9a-f]+ /xi /////////////// // IMPORTS // /////////////// %import common.HEXDIGIT %import common.NUMBER %import common.INT %import common.UCASE_LETTER %import common.CNAME %import common.NUMBER %import common.WS_INLINE %import common.WS %import common.NEWLINE /////////////// // IGNORE // /////////////// %ignore WS_INLINE Here is my simple driver code: import lark class TraceTransformer(lark.Transformer): def start(self, args): return lark.Discard def header(self, fields): return [str(field) for field in fields] def entries(self, args): print(args) ... # the grammar provided above # stored in the same directory # as this file parser = lark.Lark(grammar=open("grammar.lark").read(), start="start", parser="lalr", transformer=TraceTransformer()) # This is parsed by the grammar without problems # Note that I omit from the c.addi the operand # part and its still parsed. This is ok as some # mnemonics do not have operands (e.g., fence). 
dummy_text_ok1 = r"""Time Cycle PC Instr Decoded instruction Register and memory contents 905ns 86 00000e36 00a005b3 c.add x11, x0, x10 x11=00000e5c x10:00000e5c 915ns 87 00000e38 00000693 c.addi x13, x0, 0 x13=00000000 925ns 88 00000e3a 00000613 c.addi x12=00000000 935ns 89 00000e3c 00000513 c.addi x10, x0, 0 x10=00000000""" # Now here starts trouble. Note that here we don't # have a REG_AND_MEM part on the jump instruction. # However this is still parsed with no errors. dummy_text_ok2 = r"""Time Cycle PC Instr Decoded instruction Register and memory 945ns 90 00000e3e 2b40006f c.jal x0, 692 """ # But here, when the parser meets the line of cjal # where there is no REG_AND_MEM part and a follow # up entry exists we have an issue. dummy_text_problematic = r"""Time Cycle PC Instr Decoded instruction Register and memory contents 905ns 86 00000e36 00a005b3 c.add x11, x0, x10 x11=00000e5c x10:00000e5c 915ns 87 00000e38 00000693 c.addi x13, x0, 0 x13=00000000 925ns 88 00000e3a 00000613 c.addi x12, x0, 0 x12=00000000 935ns 89 00000e3c 00000513 c.addi x10, x0, 0 x10=00000000 945ns 90 00000e3e 2b40006f c.jal x0, 692 975ns 93 000010f2 0d01a703 lw x14, 208(x3) x14=00002b20 x3:00003288 PA:00003358 985ns 94 000010f6 00a00333 c.add x6, x0, x10 x6=00000000 x10:00000000 995ns 95 000010f8 14872783 lw x15, 328(x14) x15=00000000 x14:00002b20 PA:00002c68 1015ns 97 000010fc 00079563 c.bne x15, x0, 10 x15:00000000 """ parser.parse(dummy_text_ok1) parser.parse(dummy_text_ok2) parser.parse(dummy_text_problematic) The Runtime Error No terminal matches 'c' in the current parser context, at line 6 col 45 945ns 90 00000e3e 2b40006f c.jal x0, 692 ^ Expected one of: * DECODED_INSTRUCTION So this indicates that the DECODED_INSTRUCTION rule is not behaving as expected. The Rule DECODED_INSTRUCTION: / [a-z\.]+ # Instruction mnemonic ([-a-z0-9, ()]+)? # Optional operand part (rd,rs1,rs2, etc.) (?= # Stop when x[0-9]{1,2}[=:] # Either you hit an xN= or xN: |PA: # or you meet PA: |\s+$ # or there is no REG_AND_MEM and you meet a \n ) /xi This rule is really heavy, it has to match the whole ISA of the processor, which is in RISC-V btw. So here step-by-step I have The instruction mnemonic regex as a sequence of a-z characters and optional dots (.) The optional operand part (there exist instructions in the ISA with no operands). Now, this was tricky. Instead of accounting from every possible instruction variation in my rules above, I thought to leverage the fact that there exist characters in the following column (Register and memory contents) which do not exist in any instruction variation of the ISA. This is where the look-ahead part of the regex comes in place. I stop when Either I have reached the xN= part or the xN: part of the field Either I have reached the PA: part of the field OR I have reached the end of the line ($) as the field does not exist. However, the last case does not seem to work as intended, as shown in the above example. The way I see it, this seems OK to either stop when you meet one of the two criteria, OR you have encountered a new line (implying that the following part is omitted for the current entry). Did I blunder something in the regex part? | For $ to mean end-of-line, you need to add the m, i.e. MULTILINE flag DECODED_INSTRUCTION: / ... /xim | 2 | 1 |
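A small illustration, outside of Lark, of what that m flag changes: with Python's re, $ only anchors at the very end of the string unless MULTILINE is set, which is why the lookahead failed once another trace entry followed the jump line. The two-line sample string below is an invented stand-in for the trace, with trailing spaces standing in for the empty register column.

import re

text = "c.jal x0, 692   \n975ns 93 000010f2 ..."

print(re.search(r"69\d(?=\s+$)", text))         # None: $ only matches at end of string
print(re.search(r"69\d(?=\s+$)", text, re.M))   # matches '692': $ now also anchors before each newline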
79,056,585 | 2024-10-5 | https://stackoverflow.com/questions/79056585/beautifulsoup-in-python-find-works-as-unexpected-way-with-tuples | I am practicing crawling web, and yesterday I had an unexpected correct result which I dont think it should be work. I used soup.find(id=i) to find the attribute key i, I though i must be string, but when I passed a tuple - which is first element of tuple is string that is key, and I was surprise when it still ran correct result. let say '01' is the key of attribute, the code below had exact result with id='01' tup = ('01', 'Revenue') acc = soup.find(id=tup).text.strip().split('\n') Who has experience on this matter, please help me to explain? Thank you so much. What I tried: tup = ('01', 'Revenue') acc = soup.find(id=tup).text.strip().split('\n') I expect the KeyError because I passed a tuple instead of a string to id. | I searched the BeautifulSoup source code And found 3 occurrences where it checks if something is a tuple. I haven't went to the whole chain of calls, but it seems to me that whenever you pass tuples or lists as arguments, BeautifulSoup will turn it into a space-separated string, and will further checks for every values. I think this part is the actual unpacking / conversion: From source code def _attr_value_as_string(self, value, default=None): """Force an attribute value into a string representation. A multi-valued attribute will be converted into a space-separated stirng. """ value = self.get(value, default) if isinstance(value, list) or isinstance(value, tuple): value =" ".join(value) return value | 2 | 1 |
79,055,467 | 2024-10-4 | https://stackoverflow.com/questions/79055467/whats-a-fast-way-to-identify-all-overlapping-sets | I need to identify and consolidate all intersecting sets so that I ultimately have completely discrete sets that share no values between them. The sets currently exist as values in a dictionary, and the dictionary keys are sorted by priority that needs to be preserved. For example, starting with the following sets in dictionary d: d = {'b': {'b', 'f', 'a'}, 'x': {'x'}, 's': {'s'}, 'a': {'a', 'f', 'e'}, 'e': {'e'}, 'f': {'f'}, 'z': {'x', 'z'}, 'g': {'g'}} ...I'm trying to consolidate the sets to: {'b': {'a', 'b', 'e', 'f'}, 'x': {'x', 'z'}, 's': {'s'}, 'a': {'a', 'b', 'e', 'f'}, 'e': {'a', 'b', 'e', 'f'}, 'f': {'a', 'b', 'e', 'f'}, 'z': {'x', 'z'}, 'g': {'g'}} ...by consolidating any sets that overlap with other sets. I have code that gets me to these results: d_size = {k:len(v) for (k, v) in d.items()} static = False while not static: for (k, vs) in d.items(): for v in vs: if v == k: continue d[v].update(vs) static = True for (k, v) in d_size.items(): if len(d[k]) != v: d_size[k] = len(d[k]) static = False . . . but it is prohibitively slow for the sizes of datasets it needs to handle which regularly extend into a few hundred thousand rows with set sizes of arbitrary lengths but usually less than 50 after the consolidations are complete. Ultimately I then need to remove the duplicate sets, ensuring that I keep the first appearance in the dictionary as the keys are sorted by priority, so the final result for this toy example set would be: {'b': {'a', 'b', 'e', 'f'}, 'x': {'x', 'z'}, 's': {'s'}, 'g': {'g'}} I only need this final dictionary for my purposes, so am open to any solutions that don't produce the intermediary dictionary with duplicate sets. Lastly, these results are mapped back into a pandas dataframe, so am open to using solutions incorporating pandas (or numpy). Any other third-party packages would need to be balanced by weighing their footprint against their benefit. | This is a graph problem, you can use networkx.connected_components: # pip install networkx import networkx as nx # make graph G = nx.from_dict_of_lists(d) # identify connected components sets = {n: c for c in nx.connected_components(G) for n in c} # keep original order out = {n: sets[n] for n in d} Output: {'b': {'a', 'b', 'e', 'f'}, 'x': {'x', 'z'}, 's': {'s'}, 'a': {'a', 'b', 'e', 'f'}, 'e': {'a', 'b', 'e', 'f'}, 'f': {'a', 'b', 'e', 'f'}, 'z': {'x', 'z'}, 'g': {'g'}} If you just need the first key, you could replace the last step with: out = {} seen = set() for k in d: if k not in seen: out[k] = sets[k] seen.update(sets[k]) Alternatively use min with a dictionary of weights to identify the first key per group: import networkx as nx # make graph G = nx.from_dict_of_lists(d) weights = {k: i for i, k in enumerate(d)} # {'b': 0, 'x': 1, 's': 2, 'a': 3, 'e': 4, 'f': 5, 'z': 6, 'g': 7} out = {min(c, key=weights.get): c for c in nx.connected_components(G)} Output: {'b': {'a', 'b', 'e', 'f'}, 'x': {'x', 'z'}, 's': {'s'}, 'g': {'g'}} pure python solution: out = {} mapper = {} seen = set() for k, s in d.items(): if (common := s & seen): out[mapper[next(iter(common))]].update(s) else: out[k] = s mapper.update(dict.fromkeys(s, k)) seen.update(s) print(out) # {'b': {'a', 'b', 'e', 'f'}, 'x': {'x', 'z'}, 's': {'s'}, 'g': {'g'}} | 2 | 2 |
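For very large inputs the same consolidation can also be done with a plain union-find (disjoint-set) pass, avoiding both the graph construction and repeated set intersections; this is a generic sketch of that idea, not tied to the pandas side of the question.

def consolidate(d):
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    for key, values in d.items():
        for v in values:
            union(key, v)

    groups = {}
    for x in list(parent):
        groups.setdefault(find(x), set()).add(x)

    out, seen = {}, set()
    for k in d:                      # first key per group wins, preserving priority order
        root = find(k)
        if root not in seen:
            seen.add(root)
            out[k] = groups[root]
    return out

d = {'b': {'b', 'f', 'a'}, 'x': {'x'}, 's': {'s'}, 'a': {'a', 'f', 'e'},
     'e': {'e'}, 'f': {'f'}, 'z': {'x', 'z'}, 'g': {'g'}}
print(consolidate(d))   # {'b': {'a','b','e','f'}, 'x': {'x','z'}, 's': {'s'}, 'g': {'g'}} (set order may vary)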
79,054,792 | 2024-10-4 | https://stackoverflow.com/questions/79054792/apply-pandas-dictionary-with-gt-lt-conditions-as-keys | I have created the following pandas dataframe: ds = {'col1':[1,2,2,3,4,5,5,6,7,8]} df = pd.DataFrame(data=ds) The dataframe looks like this: print(df) col1 0 1 1 2 2 2 3 3 4 4 5 5 6 5 7 6 8 7 9 8 I have then created a new field, called newCol, which has been defined as follows: def criteria(row): if((row['col1'] > 0) & (row['col1'] <= 2)): return "A" elif((row['col1'] > 2) & (row['col1'] <= 3)): return "B" else: return "C" df['newCol'] = df.apply(criteria, axis=1) The new dataframe looks like this: print(df) col1 newCol 0 1 A 1 2 A 2 2 A 3 3 B 4 4 C 5 5 C 6 5 C 7 6 C 8 7 C 9 8 C Is there a possibility to create a dictionary like this: dict = { '0 <= 2' : "A", '2 <= 3' : "B", 'Else' : "C" } And then apply it to the dataframe: df['newCol'] = df['col1'].map(dict) ? Can anyone help me please? | Yes, you could do this with IntervalIndex: dic = {(0, 2): 'A', (2, 3): 'B', } other = 'C' bins = pd.IntervalIndex.from_tuples(dic) labels = list(dic.values()) df['newCol'] = (pd.Series(labels, index=bins) .reindex(df['col1']).fillna(other) .tolist() ) But given your example, it seems more straightforward to go with cut: df['newCol'] = pd.cut(df['col1'], bins=[0, 2, 3, np.inf], labels=['A', 'B', 'C']) Output: col1 newCol 0 1 A 1 2 A 2 2 A 3 3 B 4 4 C 5 5 C 6 5 C 7 6 C 8 7 C 9 8 C If you insist on your original dictionary format, you could convert using: dic = {'0 <= 2' : "A", '2 <= 3' : "B", 'Else' : "C" } dic2 = {tuple(map(int, k.split(' <= '))): v for k, v in dic.items() if k != 'Else'} # {(0, 2): 'A', (2, 3): 'B'} other = dic['Else'] | 2 | 1 |
79,053,694 | 2024-10-4 | https://stackoverflow.com/questions/79053694/pyotp-couldnt-pass-the-same-secret-key-to-multiple-methods | I am implementing a forgot password feature on my application. In order to generate the otp code I have used the pyotp library. The code generates the otp code but when I try to reset the password using the generated password, it shows the error that "the verification code has beeen expired" but the otp code here has the expiry time of 180 seconds. I think the problem is in the way I am generating the secret key. I guess the reset password method expects the same key that was used while creating the otp code. I want to resend the new code everytime the user hits the endpoint, even if the code is valid. How can I pass the same secret key to the reset password method? Here is my implementation code: Method to generate the otp code: secret = pyotp.random_base32() totp = pyotp.TOTP(secret, interval=settings.verification_code_expire_time) verification_code = totp.now() if not user.code: user.code = VerificationCode(code=verification_code) else: user.code.code = verification_code await db.commit() return verification_code Method to reset password: query = select(VerificationCode).where(VerificationCode.code == reset_password_schema.verification_code) result = await db.execute(query) verification_code = result.scalar_one_or_none() if not verification_code: raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Invalid verification code.") totp = pyotp.TOTP(pyotp.random_base32(), interval=settings.verification_code_expire_time) if not totp.verify(reset_password_schema.verification_code): raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Verification code has been expired.") query = ( update(User).where(User.id == verification_code.user_id).values( password=reset_password_schema.new_password) ) | You generate new secret key at your Method to reset password , that's why this happens. Instead, you should use secret key from created VerificationCode. Here is simple example: First of all, you need to save this secret key: secret = pyotp.random_base32() totp = pyotp.TOTP(secret,interval=settings.verification_code_expire_time) verification_code = totp.now() if not user.code: user.code = VerificationCode(code=verification_code, secret=secret) else: user.code.code = verification_code user.code.secret = secret # SAVE THIS ENCRYPTED await db.commit() return verification_code P.S. You really should store this secret encrypted Get your secret key and use it in Method to reset password: query = select(VerificationCode).where(VerificationCode.code == reset_password_schema.verification_code) result = await db.execute(query) verification_code_entry= result.scalar_one_or_none() if not verification_code_entry: raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST,detail="Invalid verification code.") # HERE IS MAIN CHANGE secret = verification_code_entry.secret totp= pyotp.TOTP(secret,interval=settings.verification_code_expire_time) if not totp.verify(reset_password_schema.verification_code): raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Verification code has expied.") query = ( update(User) .where(User.id == verification_code_entry.user_id) .values(password=reset_password_schema.new_password) ) await db.execute(query) await db.commit() | 1 | 2 |
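A minimal sketch of one way to keep that TOTP secret encrypted at rest, using the cryptography package's Fernet recipe; the OTP_SECRET_ENCRYPTION_KEY environment variable and the helper names are assumptions for illustration, not part of the original answer.

import os
from cryptography.fernet import Fernet

# the key is generated once with Fernet.generate_key() and stored outside the database,
# e.g. in a secrets manager or an environment variable
fernet = Fernet(os.environ["OTP_SECRET_ENCRYPTION_KEY"])

def encrypt_secret(secret: str) -> str:
    return fernet.encrypt(secret.encode()).decode()

def decrypt_secret(token: str) -> str:
    return fernet.decrypt(token.encode()).decode()

# when issuing the code:  user.code.secret = encrypt_secret(secret)
# when verifying:         totp = pyotp.TOTP(decrypt_secret(entry.secret), interval=...)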
79,052,892 | 2024-10-4 | https://stackoverflow.com/questions/79052892/in-a-class-body-is-it-safe-to-use-vars-to-dynamically-set-an-attribute | In Python, I can use vars() to dynamically set class attributes class A: vars()['x'] = 1 print(A.x) # Prints 1 However, the documentation for vars() states Without an argument, vars() acts like locals(). Note, the locals dictionary is only useful for reads since updates to the locals dictionary are ignored. This indicates updates to vars() shouldn't have any effect, but they clearly do based on the example above. The documentation for locals() similarly states Note: The contents of this dictionary should not be modified; changes may not affect the values of local and free variables used by the interpreter. This is slightly different from the var() documentation, as it states updates may not affect local variables, while the vars() documentation claims it doesn't affect local variables at all. Is it safe to keep relying on using vars() to set class attributes? Or am I relying on unspecified behavior by doing this? | In the very early versions of Python, all namespaces, whether in functions, classes or modules, were all implemented as a dictionary. But for performance reasons, since Python 1.0, the implementation of function namespaces was changed to use the STORE_FAST and LOAD_FAST bytecodes to access local variables as indices in an array instead. A call to locals() from a function returns a dictionary that is created on the fly from the array of local variables. This means that the dictionary returned by locals() and the actual array storing local variables may get out of sync with each other. Writes to the locals() dictionary may not show up as modifications to local variables. And writes to local variables may not be reflected in the locals() dictionary obtained earlier. This is why you see the notes in the documentation of locals() and vars() that updates to the dictionary returned by the call may not affect the actual local variables. However, the STORE_FAST and LOAD_FAST bytecodes were implemented only for functions, not classes or modules, which is why you are observing the behavior that changes made to the dictionary returned by a call to locals()/vars() from a class are reflected in the resulting namespace of the class. So even though the behavior is technically undocumented, it is something that has not changed since the beginning of Python. Moreover, since Python 3.13, with the implementation of PEP-667, access to the f_locals attribute of a frame now returns a proxy mapping to the array of local variables, guaranteeing that writes to f_locals can now be reflected in actual local variables, and vice versa. For backwards compatibility though, a call to locals() from a function returns a snapshot of f_locals rather than the proxy mapping itself. But this is a clear sign that Python developers intend to improve the consistency between locals() and the local namespace rather than the other way around, so I believe you can indeed keep relying on using locals()/vars() to set class attributes for the foreseeable future. | 1 | 2 |
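A compact demonstration of the asymmetry described above, runnable on current CPython: the class-body write through vars() sticks, while a write to a function's locals() snapshot does not reach the real local.

class A:
    vars()['x'] = 1      # lands in the class namespace

print(A.x)               # 1

def f():
    a = 1
    locals()['a'] = 99   # writes to a snapshot dictionary
    return a             # the real local is untouched

print(f())               # 1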
79,052,587 | 2024-10-4 | https://stackoverflow.com/questions/79052587/vs-code-1-94-run-selection-line-in-python-terminal-very-slow | I've just started using VSCode (moving over from Spyder) and I regularly run lines/selections from my code in the terminal. When I have a terminal open, shift + enter runs my selection in a new terminal and is incredibly slow. The below takes about 5 seconds to send print(1) If i copy and paste it in the new terminal, it runs instantly. Why is it slow? I want to continue using shift+enter to run my selection. Why does it need to open a new terminal anyway? It can run in the existing terminal, where i start with py which begins a python interactive session. | As for the delay, according to this post on GitHub, the issue seems to be with the latest version (v2024.16.0) of the Python extension for VS Code. Downgrading it (and restarting the app) seems to do the trick for the time being. | 2 | 2 |
79,052,139 | 2024-10-3 | https://stackoverflow.com/questions/79052139/numpy-random-uniform-valid-bounds-for-double | numpy.random.uniform returns a double between the specified low and high bounds. I can use np.finfo(np.double) to get the min and max representable number, but if I use those values in numpy.random.uniform I get an error. import numpy as np info = np.finfo(np.double) np.random.uniform(low=info.min, high=info.max) OverflowError: Range exceeds valid bounds Can someone shed light on what is going wrong here? What are the actual supported limits for low and high? I tried getting the bounds for a double and inputting them to a rng intended for generating doubles, I expected these would be compatible. Halving the bounds does work: np.random.uniform(low=info.min/2, high=info.max/2) So it suggests that either uniform cannot actually generate any double, or that the bounds returned by finfo are not inclusive. | uniform actually performs low + (high-low) * np.random.random(). However high - low is not possible: info.max-info.min # RuntimeWarning: overflow encountered in scalar subtract info.max-info.min You found the limits of uniform. The difference between low and high should not exceed abs(info.max). np.random.uniform(low=0, high=info.max) # valid np.random.uniform(low=info.min, high=0) # valid np.random.uniform(low=info.min/2, high=info.max/2) # valid rng.uniform(low=info.min*0.25, high=info.max*0.75) # valid rng.uniform(low=info.min*0.501, high=info.max*0.501) # OverflowError | 1 | 2 |
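If the goal really is to sample across (nearly) the whole double range, two workarounds consistent with that limit are sketched below: scaling a half-width draw, or picking a sign half first. Both are rough illustrations with statistical caveats (the scaled draw loses the finest spacing), not an endorsed recipe, and they assume the modern Generator API.

import numpy as np

info = np.finfo(np.double)
rng = np.random.default_rng()

# draw on [min/2, max/2) and scale; the span max/2 - min/2 equals info.max, which is still allowed
x = rng.uniform(info.min / 2, info.max / 2, size=10) * 2

# or choose a half-range first, then draw inside it (both halves have the same width)
halves = rng.integers(0, 2, size=10)
y = np.where(halves == 0,
             rng.uniform(info.min, 0, size=10),
             rng.uniform(0, info.max, size=10))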
79,051,048 | 2024-10-3 | https://stackoverflow.com/questions/79051048/folium-map-refuses-to-fill-the-height-of-its-container | I am trying to display a folium map in a card in a shiny for Python app using the code below (Reproducible example): from shiny import App, ui, render, reactive import folium app_ui = ui.page_fluid( ui.tags.style( """ /* apply css to control height of some UI widgets */ .map-container { height: 700px !important; width: 100%; overflow: hidden; } .map-container > * { height: 100% !important; } """ ), ui.column(6, ui.navset_card_tab( ui.nav_panel("Map", ui.div( ui.output_ui("map"), class_="map-container" ) ), ui.nav_panel("Data", ui.p( ui.output_table("data_table") ) ) ) ) ) def server(input, output, session): @output @render.ui def map(): m = folium.Map(location=[51.509865, -0.118092], zoom_start=12) return ui.HTML(m._repr_html_()) @output @render.table def data_table(): # Your data table rendering logic here pass app = App(app_ui, server) I want the element containing the map to fill the height of my screen but my the map itself is refusing to fill the whole height of the card. It is only occupying the upper half of the card area. In Shiny for R, I had achieved the same result using the as_fill_carrier() function from the bslib library like this: card( style = "height: 80vh;", full_screen = TRUE, #card_header("Map"), card_body( tmapOutput("map") %>% withSpinner(type = 6, color = "#30804e") %>% as_fill_carrier() ) ) ) What can I do? | This is an issue with folium, where m._repr_html_() causes the map to be embedded within a div which has padding-bottom: 60%; by default. This has to be overwritten if you want to use the whole space. Below is an example where this is done by just replacing the css with more suitable one, in particular, I remove the padding-bottom: 60%; and apply height: 100%; which yields the desired filling layout. from shiny import App, ui, render import folium app_ui = ui.page_fluid( ui.tags.style( """ /* apply css to control height of some UI widgets */ .map-container { height: 700px !important; width: 100%; overflow: hidden; } .map-container > * { height: 100% !important; } """ ), ui.column(6, ui.navset_card_tab( ui.nav_panel("Map", ui.div( ui.output_ui("map"), class_="map-container" ) ), ui.nav_panel("Data", ui.p( ui.output_table("data_table") ) ) ) ) ) def server(input, output, session): @output @render.ui def map(): m = folium.Map(location=[51.509865, -0.118092], zoom_start=12) m = m._repr_html_() m = m.replace( '<div style="width:100%;"><div style="position:relative;width:100%;height:0;padding-bottom:60%;">', '<div style="width:100%; height:100%;"><div style="position:relative;width:100%;height:100%;>', 1) return ui.HTML(m) @output @render.table def data_table(): # Your data table rendering logic here pass app = App(app_ui, server) | 1 | 2 |
79,050,277 | 2024-10-3 | https://stackoverflow.com/questions/79050277/create-hybrid-table-with-snowflake-sqlalchemy | I want to add Snowflake hybrid tables to my database schema using SQLAlchemy. Per this issue, support for hybrid tables is not implemented in the Snowflake SQLAlchemy official dialect. Is there a way for me to customize the CREATE TABLE ... statement generated by SQLAlchemy so that it actually generates CREATE HYBRID TABLE ... for a given table? Consider the example code below: from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column class Base(DeclarativeBase): pass class MyTable(Base): id: Mapped[int] = mapped_column(primary_key=True) Base.metadata.create_all(bind=...) Running it will emit the following SQL statement: CREATE TABLE mytable ( id INTEGER NOT NULL AUTOINCREMENT, CONSTRAINT pk_mytable PRIMARY KEY (id) ) and I would like a way to have the following instead: CREATE HYBRID TABLE mytable ( id INTEGER NOT NULL AUTOINCREMENT, CONSTRAINT pk_mytable PRIMARY KEY (id) ) | You can pass prefixes to a table. from sqlalchemy import create_engine from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column class Base(DeclarativeBase): ... class SomeTable(Base): __tablename__ = "some_table" __table_args__ = {"prefixes": ["HYBRID"]} id: Mapped[int] = mapped_column(primary_key=True) engine = create_engine("sqlite:///test.sqlite", echo=True) Base.metadata.create_all(engine) SQL Emitted CREATE HYBRID TABLE some_table ( id INTEGER NOT NULL, PRIMARY KEY (id) ) I used SQLite because I do not have access to Snowflake but it should work all the same in an actual Snowflake instance. | 1 | 2 |
79,048,601 | 2024-10-2 | https://stackoverflow.com/questions/79048601/filtering-a-list-based-on-the-values-of-another-list-in-polars | Let's say I have the following DataFrame: df = pl.DataFrame({ 'values': [[0, 1], [9, 8]], 'qc_flags': [["", "X"], ["T", ""]] }) I only want to keep my values if the corresponding qc_flag equals "". Does anyone know the correct way to go about this? I've tried something like this: filtered = df.with_columns( pl.col("values").list.eval( pl.element().filter( pl.col("qc_flags").list.eval( pl.element() == "" ) ) ) ) I would expect to get 'values': [[0], [8]], but then I just end up with this error: ComputeError: named columns are not allowed in `list.eval`; consider using `element` or `col("")` | pl.Expr.list.eval to evaluate expression within list. pl.arg_where() to find indexes where value is "". pl.Expr.list.gather() to take sublist by indexes. df.with_columns( filtered = pl.col.values.list.gather( pl.col.qc_flags.list.eval(pl.arg_where(pl.element() == "")) ) ) shape: (2, 3) βββββββββββββ¬ββββββββββββ¬ββββββββββββ β values β qc_flags β filtered β β --- β --- β --- β β list[i64] β list[str] β list[i64] β βββββββββββββͺββββββββββββͺββββββββββββ‘ β [0, 1] β ["", "X"] β [0] β β [9, 8] β ["T", ""] β [8] β βββββββββββββ΄ββββββββββββ΄ββββββββββββ | 6 | 2 |
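An alternative route, sketched here under the assumption of a reasonably recent Polars release (with_row_index, group_by): explode both list columns, filter on the flag, and re-aggregate. Note that a row whose elements are all filtered out disappears instead of yielding an empty list, which may or may not be acceptable.

import polars as pl

df = pl.DataFrame({
    "values": [[0, 1], [9, 8]],
    "qc_flags": [["", "X"], ["T", ""]],
})

out = (
    df.with_row_index()
      .explode("values", "qc_flags")
      .filter(pl.col("qc_flags") == "")
      .group_by("index", maintain_order=True)
      .agg(pl.col("values"))
)
print(out)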
79,046,568 | 2024-10-2 | https://stackoverflow.com/questions/79046568/netgraph-animation-how-to-display-frame-numbers | I'm trying to add a frame number to a simulation visualization modified from here. Is there any simple way to add a frame number to this animation so that it displays as a part of the plot title? import numpy as np import matplotlib.pyplot as plt from matplotlib.animation import FuncAnimation from netgraph import Graph # Simulate a dynamic network with total_frames = 10 total_nodes = 5 NODE_LABELS = {0: 'A', 1: 'B', 2: 'C', 3: 'D', 4: 'E'} NODE_POS = {0: (0.0, 0.5), 1: (0.65, 0.25), 2: (0.7, 0.5), 3: (0.5, 0.75), 4: (0.25, 0.25)} adjacency_matrix = np.random.rand(total_nodes, total_nodes) < 0.25 weight_matrix = np.random.randn(total_frames, total_nodes, total_nodes) # Normalise the weights, such that they are on the interval [0, 1]. # They can then be passed directly to matplotlib colormaps (which expect floats on that interval). vmin, vmax = -2, 2 weight_matrix[weight_matrix<vmin] = vmin weight_matrix[weight_matrix>vmax] = vmax weight_matrix -= vmin weight_matrix /= vmax - vmin cmap = plt.cm.RdGy fig, ax = plt.subplots() fig.suptitle('Simulation viz @ t=', x= 0.15, y=0.95) g = Graph(adjacency_matrix, node_labels=NODE_LABELS, node_layout = NODE_POS, edge_cmap=cmap, arrows=True, ax=ax) def update(ii): artists = [] for jj, kk in zip(*np.where(adjacency_matrix)): w = weight_matrix[ii, jj, kk] artist = g.edge_artists[(jj, kk)] artist.set_facecolor(cmap(w)) artist.update_width(0.03 * np.abs(w-0.5)) artists.append(artist) return artists animation = FuncAnimation(fig, update, frames=total_frames, interval=200, blit=True, repeat=False) plt.show() Thank you. I would appreciate any help/suggestions. | When animating while using blit (blit=True), figure level titles cannot be changed, as blit caches everything outside of the axis being animated. The simplest change is to set blit to false, and then update the figure title within the animation update function. import numpy as np import matplotlib.pyplot as plt from matplotlib.animation import FuncAnimation from netgraph import Graph # Simulate a dynamic network with total_frames = 10 total_nodes = 5 NODE_LABELS = {0: 'A', 1: 'B', 2: 'C', 3: 'D', 4: 'E'} NODE_POS = {0: (0.0, 0.5), 1: (0.65, 0.25), 2: (0.7, 0.5), 3: (0.5, 0.75), 4: (0.25, 0.25)} adjacency_matrix = np.random.rand(total_nodes, total_nodes) < 0.25 weight_matrix = np.random.randn(total_frames, total_nodes, total_nodes) # Normalise the weights, such that they are on the interval [0, 1]. # They can then be passed directly to matplotlib colormaps (which expect floats on that interval). vmin, vmax = -2, 2 weight_matrix[weight_matrix<vmin] = vmin weight_matrix[weight_matrix>vmax] = vmax weight_matrix -= vmin weight_matrix /= vmax - vmin cmap = plt.cm.RdGy fig, ax = plt.subplots() title = fig.suptitle('Simulation viz @ t=', x= 0.15, y=0.95) # <---- g = Graph(adjacency_matrix, node_labels=NODE_LABELS, node_layout = NODE_POS, edge_cmap=cmap, arrows=True, ax=ax) def update(ii): artists = [] for jj, kk in zip(*np.where(adjacency_matrix)): w = weight_matrix[ii, jj, kk] artist = g.edge_artists[(jj, kk)] artist.set_facecolor(cmap(w)) artist.update_width(0.03 * np.abs(w-0.5)) artists.append(artist) title.set_text(f"Simulation viz @ t={ii}") # <---- return artists animation = FuncAnimation(fig, update, frames=total_frames, interval=200, blit=False, repeat=False) plt.show() | 2 | 1 |
79,045,172 | 2024-10-2 | https://stackoverflow.com/questions/79045172/why-is-the-snake-increasing-its-speed-when-it-grows | I am working on the famous snake game in Python and I've stumbled upon a glitch where the snake starts doing when it eats food. The snake should keep constant speed throughout the game and the segments should be of equal distance from each other. But what happens is every time the snake eats food it increases its speed and the segments start moving away from each other. Here is part of the code if anybody can help. import random import time from turtle import Screen, Turtle MOVE_DISTANCE = 10 POSITIONS = [(0, 0), (0, -20), (0, -40)] class Snake: def __init__(self): self.segments = [] self.create_snake() self.head = self.segments[0] def create_snake(self): for position in POSITIONS: self.add_segment(position) def add_segment(self, position): new_segment = Turtle("square") new_segment.color("white") new_segment.penup() new_segment.goto(position) self.segments.append(new_segment) def move(self): for seg_num in range(len(self.segments) - 1, 0, -1): self.head.forward(MOVE_DISTANCE) # Moving distance of head new_x = self.segments[seg_num - 1].xcor() new_y = self.segments[seg_num - 1].ycor() self.segments[seg_num].goto(new_x, new_y) def grow(self): self.add_segment(self.segments[-1].position()) def up(self): if self.head.heading() != 270: self.head.setheading(90) def down(self): if self.head.heading() != 90: self.head.setheading(270) def left(self): if self.head.heading() != 0: self.head.setheading(180) def right(self): if self.head.heading() != 180: self.head.setheading(0) class Food(Turtle): def __init__(self): super().__init__() self.shape("circle") self.penup() self.shapesize(stretch_len=0.5, stretch_wid=0.5) self.color("red") self.speed("fastest") self.refresh_food() def refresh_food(self): rand_x_cor = random.randint(-480, 480) rand_y_cor = random.randint(-260, 260) self.goto(rand_x_cor, rand_y_cor) my_screen = Screen() my_screen.setup(width=1000, height=600) my_screen.bgcolor("black") my_screen.title("My Snake Game") my_screen.tracer(0) my_snake = Snake() food = Food() my_screen.listen() my_screen.onkey(my_snake.up, "Up") my_screen.onkey(my_snake.down, "Down") my_screen.onkey(my_snake.left, "Left") my_screen.onkey(my_snake.right, "Right") game_over = False while not game_over: my_screen.update() time.sleep(0.1) my_snake.move() # Detect collision with the food if my_snake.head.distance(food) < 15: food.refresh_food() my_snake.grow() # Detect collision with wall if my_snake.head.xcor() < -480 or my_snake.head.xcor() > 480 or my_snake.head.ycor() < -280 or my_snake.head.ycor() > 280: game_over = True # Detect collision with tail for segment in my_snake.segments[2:]: if my_snake.head.distance(segment) < 10: game_over = True I tried reducing the moving distance of the snake head self.head.forward(MOVE_DISTANCE) # Moving distance of head I also tried different ways of moving the snake but it also didn't work. | The move method looks incorrect: def move(self): for seg_num in range(len(self.segments) - 1, 0, -1): self.head.forward(MOVE_DISTANCE) # Moving distance of head new_x = self.segments[seg_num - 1].xcor() new_y = self.segments[seg_num - 1].ycor() self.segments[seg_num].goto(new_x, new_y) I don't think you want to move the head for every single tail segment. 
Instead, you want to move the head once, and then move all the tail segments once: def move(self): self.head.forward(MOVE_DISTANCE) # Moving distance of head for seg_num in range(len(self.segments) - 1, 0, -1): new_x = self.segments[seg_num - 1].xcor() new_y = self.segments[seg_num - 1].ycor() self.segments[seg_num].goto(new_x, new_y) Here's a runnable example with some adjustments: from random import randint from turtle import Screen, Turtle GRID_SIZE = 20 class Snake: POSITIONS = (0, 0), (GRID_SIZE, 0), (GRID_SIZE * 2, 0) def __init__(self): self.segments = [] self.create_snake() self.head = self.segments[0] def create_snake(self): for position in Snake.POSITIONS: self.add_segment(position) def add_segment(self, position): new_segment = Turtle("square") new_segment.color("white") new_segment.penup() new_segment.goto(position) self.segments.append(new_segment) def move(self): self.head.forward(GRID_SIZE) for seg_num in range(len(self.segments) - 1, 0, -1): new_x = self.segments[seg_num - 1].xcor() new_y = self.segments[seg_num - 1].ycor() self.segments[seg_num].goto(new_x, new_y) def grow(self): self.add_segment(self.segments[-1].position()) def up(self): if self.head.heading() != 270: self.head.setheading(90) def down(self): if self.head.heading() != 90: self.head.setheading(270) def left(self): if self.head.heading() != 0: self.head.setheading(180) def right(self): if self.head.heading() != 180: self.head.setheading(0) def eats(self, food): return self.head.distance(food.position()) < GRID_SIZE / 2 def collides_with_wall(self): return ( self.head.xcor() < -w or self.head.xcor() > w or self.head.ycor() < -h or self.head.ycor() > h ) def collides_with_itself(self): for segment in self.segments[3:]: if self.head.distance(segment) < GRID_SIZE: return True return False class Food: def __init__(self): self.pen = pen = Turtle() pen.shape("circle") pen.penup() pen.shapesize(stretch_len=0.5, stretch_wid=0.5) pen.color("red") pen.speed("fastest") self.reposition() def reposition(self): self.pen.goto( randint(-w, w) // GRID_SIZE * GRID_SIZE, randint(-h, h) // GRID_SIZE * GRID_SIZE, ) def position(self): return self.pen.pos() def tick(): snake.move() if snake.eats(food): food.reposition() snake.grow() if snake.collides_with_wall() or snake.collides_with_itself(): screen.update() return screen.update() screen.ontimer(tick, fps) screen = Screen() screen.setup(width=1000, height=600) fps = 1000 // 10 w = screen.window_width() / 2 - GRID_SIZE h = screen.window_height() / 2 - GRID_SIZE screen.bgcolor("black") screen.title("My Snake Game") screen.tracer(0) snake = Snake() food = Food() screen.listen() screen.onkey(snake.up, "Up") screen.onkey(snake.down, "Down") screen.onkey(snake.left, "Left") screen.onkey(snake.right, "Right") tick() screen.exitonclick() There's still room for improvement, but overall this should be easier to maintain. | 2 | 5 |
79,048,582 | 2024-10-2 | https://stackoverflow.com/questions/79048582/error-when-trying-to-apply-a-conditional-statement-to-sympy | I'm trying to set a conditional statmeent in sympy so that when the iterable is not equal to a certain number (30) in this case, it returns 0. My code is: def formula(address): address0 = 30 logic = 3 * x # Define the symbol and the summation range x = symbols('x') expr = Piecewise( (x + logic, logic == address0), # When x equals certain_integer, only add logic ((x + logic) - (x + logic), logic!=address0) # For all other values of x, apply full logic (x + logic - x + logic) ) #If -x + logic only do it to the ones that aren't address0 total = summation(expr(x, 0, 089237316195423570985008687907836)) total2 = total - address0 print(f'{total}\n{total2}') return total2 As you can see in my code, in the expr variable I set x+logic when logic is 30 and when the other logicsaren't 30 it returns 0 since it subtracts it. The code is returning 0 regardless of what I do and I don't know why. Can someone help me? | The == operator will instantly evaluate to True or False based on structure of the objects. Since 3*x is not 30 the equality evaluates to False and that term of the Piecewise is ignored. Use the following instead: ... from sympy import Eq expr = Piecewise( (x + logic, Eq(logic,address0)), (0, True)) # (x+logic)-(x-logic)->0 if x and logic are finite Your use of summation seems strange, too. How about: >>> piecewise_fold(Sum(expr,(x,0,100))).doit() Piecewise((20200, Eq(x, 10)), (0, True)) But I am not really sure what you are trying to do with summation. | 1 | 2 |
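A tiny follow-up showing the corrected Piecewise in action with the same symbols from that answer; substitution picks the first branch only when 3*x actually equals 30.

from sympy import Eq, Piecewise, symbols

x = symbols('x')
address0 = 30
logic = 3 * x

expr = Piecewise((x + logic, Eq(logic, address0)), (0, True))

print(expr.subs(x, 10))   # 40, since 3*10 == 30 selects the first branch
print(expr.subs(x, 5))    # 0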
79,048,534 | 2024-10-2 | https://stackoverflow.com/questions/79048534/conditional-marker-for-scatterplot-matplotlib | I am trying to plot x1 and x2 against each other, and if class == 1, then marker should be +, if class == 0, then the marker should just be a dot. The file.csv is structured like this, the class column which is the conditional will only be either 1 or 0: x1 x2 mark 1 2 0 9 4 1 0 5 1 2 6 0 Here's the code I have: df = pd.read_csv('file.csv') print(df) x1 = df['x1'].to_numpy() x2 = df['x2'].to_numpy() mark = df['class'].to_numpy() figure, ax = plt.subplots() for i in range(0,80,1): if mark[i] > 0: ax.plot(x1[i], color="red", marker='o') else: ax.plot(x1[i], color="blue", marker='x') This is what the result SHOULD look like: | I would plot it this way for all the data, if you're going to select a subset, consider using a new dataframe to select the subset: import matplotlib.pyplot as plt # the rows where class is 1 and plot them with '+' marker plt.scatter(df[df['class']==1]['x1'], df[df['class']==1]['x2'], marker='+') # the rows where class is 0 and plot them with 'o' marker plt.scatter(df[df['class']==0]['x1'], df[df['class']==0]['x2'], marker='o') plt.xlabel('x1') plt.ylabel('x2') plt.title('Scatter plot x1 vs x2') plt.show() | 1 | 1 |
79,046,761 | 2024-10-2 | https://stackoverflow.com/questions/79046761/im-trying-to-make-the-petals-rotate-from-the-color-orange-yellow-to-orange-as | I'm doing an assignment on using turtles to make sunflower petals using pentagons. Everything is as it should be, except for the color of the petals. It strictly says that the colors of said pentagons have to be in a rotation of "orange, yellow, orange, yellow" with it repeating from there on. To do so we have to use functions, and since I'm generally new to computer science, I'm having trouble finding out the cause as to why it is making the fourth sunflower petal each test run as black instead of yellow. I've tried defining the term "yellow", as "tur.pencolor('yellow') (tur is the name of the turtle in my program), and it resulted in it just not changing anything. At this point with my limited knowledge in this field, I'm stumped and are looking for help here for someone to save my skin and figure this function thing situation out for me! Here is my code so far: import turtle # Define function draw_petal def draw_petal(tur, speed, n_sided, radius): """ Draws a petal of a sunflower using turtle graphics. Parameters: - tur (turtle.Turtle): The turtle object used for drawing. - speed (int): The drawing speed of the turtle. Accepts standard turtle speed inputs. - n_sided (int): The number of sides of the petal, indicating its shape. - radius (float): The radius of the petal, measured from the center to any vertex. The function draws a petal as an n-sided polygon with the specified radius. The petal shape is determined by the number of sides (n_sided) and its size by the radius. Note: The turtle will start drawing from its current position and heading. """ # TODO: Remove pass and provide your own code below angle = 360 / n_sided for i in range(n_sided): tur.forward(radius) tur.left(angle) # Define fuction draw_sunflower def draw_sunflower(tur, speed, n_sided, radius, num_petal): """ Draws a sunflower using turtle graphics. Parameters: - tur (turtle.Turtle): The turtle object used for drawing. - speed (int): The drawing speed of the turtle. Accepts standard turtle speed inputs. - n_sided (int): The number of sides for each petal, indicating the shape of the petal. - radius (float): The radius of each petal, measured from the center to any vertex. - num_petal (int): The total number of petals to draw for the sunflower. The function will alternate petal colors between 'orange' and 'yellow'. The heading angle and current color of each petal are printed to the console during the drawing process for diagnostic purposes. Note: This function assumes a helper function `draw_petal` is defined elsewhere to draw individual petals. """ # TODO: Remove pass and provide your own code below tur.speed(speed) angle_increment = 360 / num_petal c = tur.color() for i in range(num_petal): draw_petal(tur, speed, n_sided, radius) if(i % 2 == 0): tur.color("yellow") if(i % 2 == 1): tur.color("orange") # Diagnostic printout print(f"Current color: {c}, heading angle {tur.heading()} degrees") tur.left(angle_increment) # Define the main() function for drawing sunflowers using turtle graphics. # # This function sets up a turtle environment and prompts the user # for inputs to draw a sunflower. The user can choose the animation # speed, the number of sides for a petal, the radius of petals, and # the total number of petals. # # After drawing a sunflower based on the user's input, the program # asks the user if they'd like to draw another sunflower. 
If the # user enters "yes", the screen will be cleared and the user will be # prompted for new input values. If the user enters "no", a message # will be displayed prompting the user to click on the window to exit. # # If the user enters any other command, an error message is printed # and the user is prompted again. # # The function concludes by waiting for the user to click on the turtle # window, at which point it will close the window and exit the program. def main(): screen = turtle.Screen() screen.title("Sunflower Drawing") tur = turtle.Turtle() while True: # User input speed = int(input("Enter animation speed: ")) n_sided = int(input("Enter the number of sides of a petal: ")) radius = float(input("Enter the radius of petals: ")) num_petal = int(input("Enter the number of petals: ")) # Draw the sunflower draw_sunflower(tur, speed, n_sided, radius, num_petal) # Ask if the user wants to draw another sunflower response = input("Draw Another Sunflower?" ) if response == "yes": tur.clear() tur.penup() tur.home() tur.pendown() elif response == "no": tur.penup() tur.goto(0, -200) tur.write("Click window to exit...", align="right", font=("Courier", 12, "normal")) break turtle.exitonclick() else: print("Invalid command! Please try againβ¦") screen.mainloop() # Run the program if __name__ == "__main__": main() # turtle.exitonclick() | You just need to think about the order in which you do things - it's sequential unless told otherwise. Make sure that you set a colour BEFORE using it to draw a petal. Make sure that you update c every time the colour changes, or you will simply output "black". def draw_sunflower(tur, speed, n_sided, radius, num_petal): tur.speed(speed) angle_increment = 360 / num_petal for i in range(num_petal): if(i % 2 == 0): tur.color("yellow") if(i % 2 == 1): tur.color("orange") draw_petal(tur, speed, n_sided, radius) # <=== NOW YOU CAN DRAW YOUR PETAL! # Diagnostic printout c = tur.color() # <=== UPDATE THIS BEFORE WRITING IT OUT print(f"Current color: {c}, heading angle {tur.heading()} degrees") tur.left(angle_increment) | 4 | 2 |
79,046,060 | 2024-10-2 | https://stackoverflow.com/questions/79046060/what-is-the-type-hint-for-the-pytest-fixture-capsys | When writing pytest tests for a function that is supposed to print something to the console, to verify the output string I am using the capsys fixture and capsys.readouterr(). This is the code I am currently using: @pytest.mark.parametrize( "values,expected", [ ([], "guessed so far: \n"), (["single one"], "guessed so far: single one\n"), (["one", "two", "three", "4"], "guessed so far: 4, three, two, one\n"), ], ) def test_print_guesses(capsys, values: list, expected: str) -> None: hm.print_guesses(values) assert capsys.readouterr().out == expected I am also using the mypy extension in VS Code, so for now I am getting the warning: Function is missing a type annotation for one or more arguments I'd like to get rid of that. What is the appropriate type annotation for the capsys argument? | Per the documentation for capsys, it: Returns an instance of CaptureFixture[str]. This class indeed has a readouterr method (returning a CaptureResult, which has an out attribute). So your test should look like: @pytest.mark.parametrize( # ... ) def test_print_guesses(capsys: pytest.CaptureFixture[str], values: list, expected: str) -> None: hm.print_guesses(values) assert capsys.readouterr().out == expected | 2 | 2 |
79,045,268 | 2024-10-2 | https://stackoverflow.com/questions/79045268/generate-all-possible-boolean-cases-from-n-boolean-values | If two fields exist, the corresponding fields are Boolean values. x_field(bool value) y_field(bool value) I want to generate all cases that can be represented as a combination of multiple Boolean values. For example, there are a total of 4 combinations that can be expressed by two Boolean fields as above. x_field(true), y_field(true) x_field(true), y_field(false) x_field(false), y_field(true) x_field(false), y_field(false) If you have an array with two fields that are Boolean type, can you generate all cases and express like this form? # before calculate ... fields = ["x_field", "y_field"] # after calculate ... result = [{"x_field": True, "y_field": True}, {"x_field": True, "y_field": False}, {"x_field": False, "y_field": True}, {"x_field": False, "y_field": False},] My Attempted I thought I could solve this problem with the itertools module, but I wasn't sure which function to use. I tried to implement using various itertools module functions, but it failed. from itertools import combinations, permutations boolean_fields = ["x_field", "y_field"] # --> boolean_fields = [True, False, True, False] x = list(combination(boolean_fields, 2)) | I'm surprised no one has proposed this simple one-liner yet... from itertools import product fields = ['x', 'y', 'z'] [dict(zip(fields, values)) for values in product([True,False], repeat=len(fields))] Output: [{'x': True, 'y': True, 'z': True}, {'x': True, 'y': True, 'z': False}, {'x': True, 'y': False, 'z': True}, {'x': True, 'y': False, 'z': False}, {'x': False, 'y': True, 'z': True}, {'x': False, 'y': True, 'z': False}, {'x': False, 'y': False, 'z': True}, {'x': False, 'y': False, 'z': False}] Explanation: product([True,False], repeat=len(fields)) creates all the 2**len(fields) combinations of True or False values. Then iterating over these combinations, zip(fields, values) pairs up every field name to its corresponding value. Finally turn every such pairing into a dict, and turn that into a list of dicts as you iterate over all the combinations (through list comprehension syntax). | 1 | 4 |
79,045,061 | 2024-10-2 | https://stackoverflow.com/questions/79045061/how-to-avoid-memory-error-for-large-dataarray | In Python I have a large xarray dataarray (da) of the shape In [1]: da.shape Out[1]: (744, 24, 30, 131, 215) I need to perform the operation da = 10**da however, I'm running into a memory error MemoryError: Unable to allocate 56.2 GiB for an array with shape (744, 24, 30, 131, 215) and data type float32 Is there a way to do the operation iteratively? I tried to manually loop but that gives the same error. for ii in range(len(da[:,0,0,0,0])): da[ii,:,:,:,:] = 10**da[ii,:,:,:,:] It works fine with smaller arrays e.g., (24, 24, 30, 131, 215). | Chunking the dataarray did the trick. da = da.chunk({'dim0':50,'dim1': 24, 'dim2': 30, 'dim3': 131, 'dim4': 215}) da = 10**da | 2 | 0 |
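For context, a chunked array like this stays lazy (Dask-backed), so 10**da never materialises the full 56 GiB at once; writing the result out then evaluates it chunk by chunk. A sketch of that typical pattern follows, with the file names, dimension name and chunk size as placeholders.

import xarray as xr

da = xr.open_dataarray("input.nc", chunks={"dim0": 50})   # Dask-backed from the start
da = 10 ** da                                             # lazy; nothing is computed yet
da.to_netcdf("output.nc")                                 # evaluated and written chunk by chunk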
79,044,320 | 2024-10-1 | https://stackoverflow.com/questions/79044320/passing-in-arguments-to-dependency-function-makes-it-recognized-as-a-query-para | Take a look at the following Route Handler @some_router.post("/some-route") async def handleRoute(something = Depends(lambda request : do_something(request, "some-argument"))): return JSONResponse(content = {"msg": "Route"}, status_code = 200) So the dependency function do_something takes in a request and a string value. But, passing it in like so makes it so that it is recognized as a query parameter now even though it shouldn't. It is just an argument that is needed for the dependency function. The do_something implementation async def do_something(request: Request, value: str): pass This is obviously example code. How can I pass in the request along with the string value into the do_something dependency function without making it recognized as a query parameter? Rewrote Route Handler Used default arguments but that's not how FastAPI's dependency injection works so no good Maybe the use of lambda itself forces FastAPI to recognize it as a Query Parameter? Tried setting a type for "request" as "Request" but then I get a syntax issue, tried wrapping it in parentheses and it still didn't work. | My current solution to this is the following: def get_do_something(value: str): async def fixed_do_something(request: Request): return await do_something(request, value) return fixed_do_something Notice how instead of directly passing the request with your additional parameters, you are now using a wrapper function. This wrapper function handles the additional parameters and passes them to the nested function. Following this convention simplifies development and makes the process more efficient. @some_router.post("/some-route") async def handleRoute(something = Depends(get_do_something('some-value'))): return JSONResponse(content = {"msg": "Route"}, status_code = 200) | 2 | 2 |
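Another way to parameterize the same dependency, sketched from FastAPI's documented "parameterized dependencies" pattern: a callable class instance, where FastAPI injects into __call__. The class name is an illustrative choice, and do_something and some_router are the ones from the question above.

from fastapi import Depends, Request
from fastapi.responses import JSONResponse

class DoSomething:
    def __init__(self, value: str):
        self.value = value

    async def __call__(self, request: Request):
        return await do_something(request, self.value)

@some_router.post("/some-route")
async def handle_route(something = Depends(DoSomething("some-value"))):
    return JSONResponse(content={"msg": "Route"}, status_code=200)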
79,042,147 | 2024-10-1 | https://stackoverflow.com/questions/79042147/handling-on-off-behavior-in-milp-optimization-problems-in-gekko | I'm trying to understand how to implement ON/OFF behavior in optimization problems to be solved in GEKKO effectively. Consider the following scenario: 2 power generators, with upper and lower bounds one generator has a power loss of 30% the total generated power must meet an external power demand the goal is to minimize the energy loss The above problem is implemented in Python: import numpy as np from gekko import GEKKO import matplotlib.pyplot as plt timesim = 24*1 # hours timesteps = 1 # steps/hour n = np.int64(timesim*timesteps + 1) # this is the vector length in GEKKO Edh_demand =np.ones(timesim+1)*80; Edh_demand[int(7*n/24):int(10*n/24)] = 200 Edh_demand[int(3*n/4)-1:int(3*n/4)+2] = 300 m = GEKKO(remote=False) # initialize gekko, kΓΆrs lokalt m.time = np.linspace(0, 24, 25) # Gekko-tid Egen1 = m.Var(value = 30, lb = 30, ub = 200) Egen2 = m.Var(value = 30, lb = 30, ub = 200) Eloss = m.Var(value = 30*.3) Eprod = m.Var(value = 30) Edemand = m.Param(value = Edh_demand) cost = m.Param(value = 10) m.Equation(Eloss == Egen1*.3) m.Equation(Eprod == Egen2 + Egen1 - Eloss) m.Minimize(Eloss) m.Equation(Eprod >= Edemand) m.options.SOLVER = 3 m.options.IMODE = 6 # m.options.COLDSTART=2 m.options.COLDSTART=0 m.solve(disp=True) plt.figure(5) plt.subplot(2, 1, 1) plt.plot(m.time, Egen1, label='gen 1') plt.plot(m.time, Egen2, label='gen 2') plt.legend() plt.subplot(2, 1, 2) plt.plot(m.time, Edemand, '--r') plt.plot(m.time, Eprod) plt.show() I have the following questions/problems: As it is implemented now, gen1 goes to its lower bound as expected. I want to constraint gen1 so that it can operate either at the lower bound or at 0, but not between 0 an gen1_lb. I want to penalize turning off and on gen1. I want to constraint gen1 so that every time it is turned off, it takes 2 sampling intervals before it can be turned on again. Are there better ways (mainly for algoritm efficiency but also readibility) to implement such problem? I believe one way to attack this problem is by creating an Integer variable (gen1_onoff) and multiply all occurances of gen1 by gen1_onoff. One can also add the following constraint m.Equation(gen1 * gen1_onoff <= gen1_lb) One could penalize variations of gen1_onoff in the objective function dt_gen1_onoff = Var(value = 0) m.Equation(dt_gen1_onoff == gen1_onoff.dt()) m.Minimize(dt_gen1_onoff) I have no idea how to implement the constraint for problem 3, but suspect that m.if3() should be useful. | As you correctly noted, when gen1 can only operate above the lower bound (gen1_lb) or at 0, use a binary variable gen1_onoff to switch the operation mode: m.options.SOLVER = 1 # APOPT solver for Mixed Integer solutions gen1_onoff = m.Var(value=1, integer=True, lb=0, ub=1) # Binary variable Egen1 = m.Var(value = 30, lb = 30, ub = 200) Egen1s = m.Var(value = 30, lb = 0, ub = 200) m.Equation(Egen1s == gen1_onoff * Egen1) # Egen1s is either at 0 or 30-200 To create a penalty for changes in gen1_onoff, tracks the change in state and include it in the objective function with a squared error or m.abs3() as an absolute value. dt_gen1_onoff = m.Var(value=1, integer=True, lb=0, ub=1) m.Equation(dt_gen1_onoff == gen1_onoff.dt()) # Change in on/off state m.Minimize(cost * m.abs3(dt_gen1_onoff)) # Add to objective The m.abs3() ensures that the penalty applies regardless of the sign of the change. 
However, this introduces another binary variable and greatly slows down the solution. A computationally more efficient way is to declare gen1_onoff as an MV type and add a DCOST value to penalize movement. m.options.CV_TYPE = 1 # l1-norm objective gen1_onoff = m.MV(value=1, integer=True, lb=0, ub=1) gen1_onoff.STATUS = 1 gen1_onoff.DCOST = 1e-3 # penalty on movement To impose a delay so that gen1 must remain off for two time steps before being turned back on, use the GEKKO MV_STEP_HOR option. This requires that the MV is held for 2 cycles before changing. gen1_onoff.MV_STEP_HOR = 2 Using several m.if3() functions is also possible but would greatly increase the computational time. To improve efficiency, minimize the use of integer variables as much as possible since they make the problem more complex. | 2 | 1 |
79,027,616 | 2024-9-26 | https://stackoverflow.com/questions/79027616/pandas-groupby-transform-mean-with-date-before-current-row-for-huge-huge-datafra | I have a Pandas dataframe that looks like df = pd.DataFrame([['John', '1/1/2017','10'], ['John', '2/2/2017','15'], ['John', '2/2/2017','20'], ['John', '3/3/2017','30'], ['Sue', '1/1/2017','10'], ['Sue', '2/2/2017','15'], ['Sue', '3/2/2017','20'], ['Sue', '3/3/2017','7'], ['Sue', '4/4/2017','20']], columns=['Customer', 'Deposit_Date','DPD']) And I want to create a new row called PreviousMean. This column is the year to date average of DPD for that customer. i.e. Includes all DPDs up to but not including rows that match the current deposit date. If no previous records existed then it's null or 0. So the desired outcome looks like Customer Deposit_Date DPD PreviousMean 0 John 2017-01-01 10 NaN 1 John 2017-02-02 15 10.0 2 John 2017-02-02 20 10.0 3 John 2017-03-03 30 15.0 4 Sue 2017-01-01 10 NaN 5 Sue 2017-02-02 15 10.0 6 Sue 2017-03-02 20 12.5 7 Sue 2017-03-03 7 15.0 8 Sue 2017-04-04 20 13.0 And after some researching on the site and internet here is one solution: df['PreviousMean'] = df.apply( lambda x: df[(df.Customer == x.Customer) & (df.Deposit_Date < x.Deposit_Date)].DPD.mean(), axis=1) And it works fine. However, my actual datafram is much larger (~1 million rows) and the above code is very slow. Is there any better way to do it? Thanks | You could use a custom groupby.apply with expanding.mean and a mask on the duplicated date to ffill the output: df['Deposit_Date'] = pd.to_datetime(df['Deposit_Date']) df['PreviousMean'] = (df.groupby('Customer') .apply(lambda s: s['DPD'].expanding().mean().shift() .mask(s['Deposit_Date'].duplicated()) .ffill(), include_groups=False) .droplevel(0) ) NB. this is assuming the dates are sorted. Output: Customer Deposit_Date DPD PreviousMean 0 John 2017-01-01 10 NaN 1 John 2017-02-02 15 10.0 2 John 2017-02-02 20 10.0 3 John 2017-03-03 30 15.0 4 Sue 2017-01-01 10 NaN 5 Sue 2017-02-02 15 10.0 6 Sue 2017-03-02 20 12.5 7 Sue 2017-03-03 7 15.0 8 Sue 2017-04-04 20 13.0 Intermediates: Customer Deposit_Date DPD expanding.mean shift duplicated mask PreviousMean 0 John 2017-01-01 10 10.00 NaN False NaN NaN 1 John 2017-02-02 15 12.50 10.0 False 10.0 10.0 2 John 2017-02-02 20 15.00 12.5 True NaN 10.0 3 John 2017-03-03 30 18.75 15.0 False 15.0 15.0 4 Sue 2017-01-01 10 10.00 NaN False NaN NaN 5 Sue 2017-02-02 15 12.50 10.0 False 10.0 10.0 6 Sue 2017-03-02 20 15.00 12.5 False 12.5 12.5 7 Sue 2017-03-03 7 13.00 15.0 False 15.0 15.0 8 Sue 2017-04-04 20 14.40 13.0 False 13.0 13.0 Note: the generalization of this answer to multiple groups is discussed here. | 2 | 2 |
79,026,966 | 2024-9-26 | https://stackoverflow.com/questions/79026966/tcp-connections-not-being-cleaned-up-in-windows-python | I'm running a Windows server on AWS that is serving some data to IOT devices, but after a while the server stops responding to requests because it hangs on the s.accept() call, I've managed to determine that this happens because the server has too many TCP connections open so the OS wont allocate any more which makes sense, but what doesn't make sense to me is why the connections are open still open because they should all have been closed. Here is an example from my code with parts omitted for safety: def connection(conn, addr): conn.settimeout(10) data = None connection_time = datetime.now() n_items = 0 try: print(connection_time.strftime("[%d/%m/%Y, %H:%M:%S] "), "new connection started:", addr) data = get_info(conn) print(addr, data) # serve client here, protocol omitted except Exception as e: print(f"{addr} connection error:" + str(e)) if data is not None: add_connection_info(addr, data, connection_time) try: conn.close() print(connection_time.strftime("[%d/%m/%Y, %H:%M:%S] "), "connection ended:", addr) except Exception as e: print(f"close failed: {addr} ; {e}") if __name__ == '__main__': ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) ssl_context.load_cert_chain(cert, key, password=*omitted*) s = socket.socket() s = ssl_context.wrap_socket(s, server_side=True) host = "0.0.0.0" port = 12345 # not the actual port print('Server started:', host, port) s.bind((host, port)) # Bind to the port s.listen() # Now wait for client connection. s.setblocking(False) # Join completed threads and check connection status threads = [] while True: for thread in threads: thread.join(0) threads = [t for t in threads if t.is_alive()] print(f"{len(threads)} active connections") try: # Use select to wait for a connection or timeout rlist, _, _ = select.select([s], [], [], 100) # 100 seconds timeout if s in rlist: s.settimeout(10) # TODO timout here c, addr = s.accept() print(f"Accepted connection from {addr}") thread = Thread(target=connection, args=(c, addr)) #thread.daemon = True thread.start() threads.append(thread) print("thread started") else: print("No connection within 100 second period") except BlockingIOError: print("No connection ready") except Exception as e: print("error", str(e)) try: c.close() print(f"Connection from {addr} closed due to error.") except Exception as e_close: print(f"Failed to close connection after error: {str(e_close)}") I'm logging the output of the server and when I checked last after seeing the server freeze for every print(connection_time.strftime("[%d/%m/%Y, %H:%M:%S] "), "new connection started:", addr) there is a matching print(connection_time.strftime("[%d/%m/%Y, %H:%M:%S] "), "connection ended:", addr) so from what I can tell there should be no open connections because print(f"{len(threads)} active connections") prints that there are 0 active threads. But when I open windows resource monitor there are 50+ ish open TCP by python even for (ip, port) that should have been closed hours ago and were logged as "ended" by the server so I don't understand why they still are. Update: I was a little mistaken, after adding some logging to my code it appears that the open connections in the resource monitor have never been accepted by my server, they are definitely coming from my devices though but I'm unsure of how to close them/free them if I never know they are there in the first place. 
if I use 'netstat -an' I can see they are all stuck in a CLOSE_WAIT state. Is there a way for me to force Windows to just clean up connections that have been stuck in this state for more than 5 minutes? | I've managed to fix the issue: the fix seems to have been to use socket.setdefaulttimeout(10). I'm not sure why this works but not s.settimeout(10), but now the server has been running for 6 days without issues (it used to run for about 8-12 hours before halting), and there are now 0 connections stuck in the CLOSE_WAIT state. | 2 | 0 |
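A minimal sketch of where the accepted one-line fix would sit in the server setup from the question (the host and port here are placeholders, and the SSL wrapping is left as in the original code):

import socket

socket.setdefaulttimeout(10)  # default timeout for every socket created after this call

s = socket.socket()           # ... then wrapped with the SSL context as in the question
s.bind(("0.0.0.0", 12345))    # placeholder host/port, not the real ones
s.listen()
# accept/serve loop unchanged from the question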
79,029,557 | 2024-9-27 | https://stackoverflow.com/questions/79029557/polars-dataframe-how-to-efficiently-aggregate-over-many-non-disjoint-groups | I have a dataframe with columns x, y, c_1, c2, ..., c_K, where K is somewhat large (K β 1000 or 2000). Each of the columns c_i is boolean column, and I'd like to compute an aggregation f(x, y) over rows where where c_i is True. (For example, f(x,y) = x.sum() * y.sum().) One way to do this is: ds.select([ f(pl.col("x").filter(pl.col(f"c_{i+1}"), pl.col("y").filter(pl.col(f"c_{i+1}")) for i in range(K) ]) In my problem, the number K is large, and the above query seems somewhat inefficient (filtering is done twice). What is the recommended/most efficient/most elegant way of accomplishing this? EDIT. Here is a runnable example (code at bottom), as well as some timings corresponding to @Hericks's answer below. TLDR: Method 1 as proposed is the current best. Wall time 1 repeated filters 409ms 2 pl.concat 29.6s (β70x slower) 2* pl.concat, lazy 1.27s (3x slower) 3 unpivot with agg 1min 17s 3* unpivot with agg, lazy 1min 17s (same as 3) import polars as pl import polars.selectors as cs import numpy as np rng = np.random.default_rng() def f(x,y): return x.sum() * y.sum() N = 2_000_000 K = 1000 dat = dict() dat["x"] = np.random.randn(N) dat["y"] = np.random.randn(N) for i in range(K): dat[f"c_{i+1}"] = rng.choice(2, N).astype(np.bool_) tmpds = pl.DataFrame(dat) ## Method 1 tmpds.select([ f( pl.col("x").filter(pl.col(f"c_{i+1}")), pl.col("y").filter(pl.col(f"c_{i+1}"))) .alias(f"f_{i+1}") for i in range(K) ]) ## Method 2 pl.concat([ tmpds.filter(pl.col(f"c_{i+1}")).select(f(pl.col("x"), pl.col("y")).alias(f"f_{i+1}")) for i in range(K) ], how="horizontal") ## Method 2* pl.concat([ tmpds.lazy().filter(pl.col(f"c_{i+1}")).select(f(pl.col("x"), pl.col("y")).alias(f"f_{i+1}")).collect() for i in range(K) ], how="horizontal") ## Method 3 ( tmpds .unpivot(on=cs.starts_with("c"), index=["x", "y"]) .filter("value") .group_by("variable") .agg( f(pl.col("x"), pl.col("y")) ) ) ##Method 3* ( tmpds .lazy() .unpivot(on=cs.starts_with("c"), index=["x", "y"]) .filter("value") .group_by("variable", maintain_order=True) .agg( f(pl.col("x"), pl.col("y")) ) .collect() ) | In general, I would not think that multiple filters are inefficient in polars. To verify this, I benchmarked three different approaches: 1. Generator of expressions with 2 filters This approach was proposed in the question. df.select( ( pl.col("x").filter(f"c_{i+1}").sum() * pl.col("y").filter(f"c_{i+1}").sum() ).alias(f"c_{i}") for i in range(K) ) 2. Concatenating dataframes created using single filter This approach was proposed by @roman. Again, a python generator is used to create the dataframes. pl.concat( [ ( df .filter(pl.col(f"c_{i+1}")) .select((pl.col("x").sum() * pl.col("y").sum()).alias(f"c_{i+1}")) ) for i in range(K) ], how="horizontal" ) 3. Unpivot + single filter + group by / agg I thought this approach was nice as it doesn't rely on standard python generator expressions and doesn't use f-strings to create column names. import polars.selectors as cs ( df .unpivot(on=cs.starts_with("c"), index=["x", "y"]) .filter("value") .group_by("variable", maintain_order=True) .agg( pl.col("x").sum() * pl.col("y").sum() ) ) The results for f(x, y) = x.sum() * y.sum() on a dataframe with n = 5000 rows and K = 1000 c_i-columns` are as follows. The results confirm the initial impression. Runtime 1 repeated filters 16.7ms 2 pl.concat 4150ms 3 unpivot with agg 40.9ms | 3 | 2 |
79,043,704 | 2024-10-1 | https://stackoverflow.com/questions/79043704/efficiently-simulating-many-frequency-severity-distributions-over-thousands-of-i | I've got a problem at work that goes as follows: We have, say, 1 million possible events which define frequency-severity distributions. For each event we have an annual rate which defines a Poisson distribution, and alpha and beta parameters for a Beta distributions. The goal is to simulate in the order of >100,000 "years", each year being defined as getting a frequency N for each event, and getting N samples of the relative beta distribution. The cumbersome fact for me is how can I efficiently get N_i ~ Poisson(lambda_i) samples from Beta distribution Beta_i while also making sure I can attribute them to the correct year? In terms of output I'll need to look at both the maximum and total value of the samples per year, so temporarily I'm just storing it as an array of dictionaries (not intended to be the output format) years = 5000 rng = np.random.default_rng() losses = [] for year in range(years): occurences = rng.poisson(data['RATE']) annual_losses = [] for idx, occs in enumerate(occurences): if occs > 0: event = data.iloc[idx] for occ in range(occs): loss = rng.beta(event['alpha'], event['beta']) * event['ExpValue'] annual_losses.append(loss) annual_losses.append(0) losses.append({'year': year, 'losses': annual_losses}) I've tried to follow Optimising Python/Numpy code used for Simulation however I can't seem to understand how to vectorise this code efficiently. Changes I've made before posting here are (times for 5000 years): swapping from scipy to numpy (72s -> 66s) calculating frequencies for all years in once outside the loop (66s -> 73s... oops) Ideally I'd like this to run as fast, or for as many iterations, as possible, and I was also running into issues with memory using scipy previously. EDIT: Upon request here's a version that just calculates the maximum loss for each year years = 20_000 rng = np.random.default_rng() largest_losses = np.zeros(shape=years) for year in range(years): occurences = rng.poisson(data['RATE']) largest_loss = 0 for idx, event_occs in enumerate(occurences): if event_occs > 0: event = data.iloc[idx] for event_occ in range(event_occs): loss = rng.beta(event['alpha'], event['beta']) * event['ExpValue'] if loss > largest_loss: largest_loss = loss largest_losses[year] = largest_loss For testing purposes the events have a total rate of ~0.997 and the timed tests above are from an event pool of 100,604 events. To specify the goal, I'd like to see if I can calculate the percentiles of the losses, e.g. the 1-in-250 loss np.percentile(largest_losses, (1 - 1/250) * 100) which currently aren't accurate at 5,000, hence wanting a process that can run in a few minutes or less for ~100,000 years. 
EDIT 2: Reproduceable values (as np arrays): rate_scale = 4.81e-4 rate_shape = 2.06e-2 rates = rng.gamma(rate_shape, rate_scale, size=nevents) alpha_scale = 1.25 alpha_shape = 0.439 alphas = rng.gamma(alpha_shape, alpha_scale, size=nevents) beta_scale = 289 beta_shape = 0.346 betas = rng.gamma(beta_shape, beta_scale, size=nevents) value_scale = 4.2e8 value_shape = 1.0 values = rng.gamma(value_shape, value_scale, size=nevents) and have amended my current code to use numpy arrays years = 20_000 rng = np.random.default_rng() largest_losses = np.zeros(shape=years) for year in range(years): occurences = rng.poisson(rates) largest_loss = 0 for idx, event_occs in enumerate(occurences): for event_occ in range(event_occs): loss = rng.beta(alphas[idx], betas[idx]) * values[idx] if loss > largest_loss: largest_loss = loss largest_losses[year] = largest_loss f'{np.percentile(largest_losses, (1 - 1/250) * 100):,.0f}' | I profiled @WarrenWeckesser's answer, and I found that it spends 93% of its time generating poisson-distributed numbers using the rate array. Here's a line profile of what I'm talking about: Timer unit: 1e-09 s Total time: 3.57874 s File: /tmp/ipykernel_2774/1125349843.py Function: simulate_warren_answer at line 1 Line # Hits Time Per Hit % Time Line Contents ============================================================== 1 def simulate_warren_answer(nyears, rate, alpha, beta, value, rng): 2 1 13092.0 13092.0 0.0 max_loss = np.zeros(nyears, dtype=np.float64) 3 1 3150.0 3150.0 0.0 total_loss = np.zeros(nyears, dtype=np.float64) 4 1001 347152.0 346.8 0.0 for year in range(nyears): 5 1000 3355879564.0 3e+06 93.8 n = rng.poisson(rate) 6 1000 179118381.0 179118.4 5.0 k = np.nonzero(n)[0] 7 1000 911231.0 911.2 0.0 if len(k) > 0: 8 654 935876.0 1431.0 0.0 nonzero_n = n[k] 9 654 4049698.0 6192.2 0.1 a = np.repeat(alpha[k], nonzero_n) 10 654 2645836.0 4045.6 0.1 b = np.repeat(beta[k], nonzero_n) 11 654 2550944.0 3900.5 0.1 v = np.repeat(value[k], nonzero_n) 12 654 26183513.0 40036.0 0.7 losses = rng.beta(a, b)*v 13 654 3557136.0 5439.0 0.1 max_loss[year] = losses.max() 14 654 2543208.0 3888.7 0.1 total_loss[year] = losses.sum() 15 1 303.0 303.0 0.0 return max_loss, total_loss For this reason, I started looking into whether it was possible to speed up poisson-distributed random number generation. I started by looking at the lengths of n and k, and realized that for the rates that the asker is interested in, k is much much smaller than n, and most entries in n are zero. If there is a 98% chance that the poisson-distributed number will be zero, I can simulate this by generating a uniformly distributed random number between [0, 1), and comparing it to 0.02. This leaves me to handle the other case 2% of the time, where the number will be 1+. I then realized that I could generalize this to generating any value of the poisson function, by comparing the random uniform value to the CDF. (In the code below, I am using the survival function, which is 1-CDF. This is for accuracy reasons; numbers near 0 are represented more accurately than numbers near 1.) This code generates a lookup table for converting uniformly-distributed numbers into poisson-distributed numbers according to each rate. Because the rates are small, most of the time, the code will only need to examine the first entry for each rate, and the result will be zero. It also does the k = np.nonzero(n)[0] step inside this loop, as it is more cache efficient. 
Because the rates are not changing, the lookup table only needs to be build once, and can be re-used between years. import scipy import numpy as np import numba as nb def simulate_new_rng(nyears, rate, alpha, beta, value, rng): max_loss = np.empty(nyears, dtype=np.float64) total_loss = np.zeros(nyears, dtype=np.float64) lookup = build_lookup(rate) for year in range(nyears): uniform = rng.random(len(rate)) n, nonzeros = uniform_to_poisson_plus_nonzero(uniform, lookup) n_nonzeros = n[nonzeros] a = np.repeat(alpha[nonzeros], n_nonzeros) b = np.repeat(beta[nonzeros], n_nonzeros) v = np.repeat(value[nonzeros], n_nonzeros) losses = rng.beta(a, b)*v max_loss[year] = losses.max(initial=0) total_loss[year] = losses.sum() return max_loss, total_loss def build_lookup(rate, ptol=1e-15): """Build uniform -> poisson lookup table based on survival function Does not include an entry for k if the probability of the most likely rate to be k is less than ptol.""" ret = [] for k in itertools.count(): sf = scipy.stats.poisson.sf(k, rate) # include first row of lookup table no matter what if sf.max() < ptol and k != 0: break ret.append(sf) return np.array(ret) @nb.njit() def uniform_to_poisson_plus_nonzero(numbers, lookup): assert numbers.shape[0] == lookup.shape[1] nonzeros = [] ret = np.empty(lookup.shape[1], dtype=np.int64) for j in range(lookup.shape[1]): number = numbers[j] for i in range(lookup.shape[0]): if number < lookup[i, j]: continue else: break else: raise Exception("failed to find match for survival function - ptol is too big") ret[j] = i if i != 0: nonzeros.append(j) return ret, np.array(nonzeros, dtype=np.int64) Benchmark result: Original code: 24.4 s Β± 308 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) Warrens answer: 3.44 s Β± 18.1 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) My answer: 648 ms Β± 17.3 ms per loop (mean Β± std. dev. 
of 7 runs, 1 loop each) Appendix: Full benchmarking code import matplotlib.pyplot as plt import itertools import scipy import numba as nb import numpy as np import pandas as pd rng = np.random.default_rng() nevents = 100604 nyears = 1000 rate_scale = 4.81e-4 rate_shape = 2.06e-2 rates = rng.gamma(rate_shape, rate_scale, size=nevents) alpha_scale = 1.25 alpha_shape = 0.439 alphas = rng.gamma(alpha_shape, alpha_scale, size=nevents) beta_scale = 289 beta_shape = 0.346 betas = rng.gamma(beta_shape, beta_scale, size=nevents) value_scale = 4.2e8 value_shape = 1.0 values = rng.gamma(value_shape, value_scale, size=nevents) def simulate_orig_answer(nyears, rate, alpha, beta, value, rng): data = pd.DataFrame({'RATE': rate, 'alpha': alpha, 'beta': beta, 'ExpValue': value}) rng = np.random.default_rng() largest_losses = np.zeros(shape=nyears) total_loss = np.zeros(nyears, dtype=np.float64) for year in range(nyears): occurences = rng.poisson(data['RATE']) largest_loss = 0 sum_loss = 0 for idx, event_occs in enumerate(occurences): if event_occs > 0: event = data.iloc[idx] for event_occ in range(event_occs): loss = rng.beta(event['alpha'], event['beta']) * event['ExpValue'] if loss > largest_loss: largest_loss = loss sum_loss += loss largest_losses[year] = largest_loss total_loss[year] = sum_loss return largest_losses, total_loss def simulate_warren_answer(nyears, rate, alpha, beta, value, rng): max_loss = np.zeros(nyears, dtype=np.float64) total_loss = np.zeros(nyears, dtype=np.float64) for year in range(nyears): n = rng.poisson(rate) k = np.nonzero(n)[0] if len(k) > 0: nonzero_n = n[k] a = np.repeat(alpha[k], nonzero_n) b = np.repeat(beta[k], nonzero_n) v = np.repeat(value[k], nonzero_n) losses = rng.beta(a, b)*v max_loss[year] = losses.max() total_loss[year] = losses.sum() return max_loss, total_loss def simulate_new_rng(nyears, rate, alpha, beta, value, rng): max_loss = np.empty(nyears, dtype=np.float64) total_loss = np.zeros(nyears, dtype=np.float64) lookup = build_lookup(rate) for year in range(nyears): uniform = rng.random(len(rate)) n, nonzeros = uniform_to_poisson_plus_nonzero(uniform, lookup) n_nonzeros = n[nonzeros] a = np.repeat(alpha[nonzeros], n_nonzeros) b = np.repeat(beta[nonzeros], n_nonzeros) v = np.repeat(value[nonzeros], n_nonzeros) losses = rng.beta(a, b)*v max_loss[year] = losses.max(initial=0) total_loss[year] = losses.sum() return max_loss, total_loss def build_lookup(rate, ptol=1e-15): """Build uniform -> poisson lookup table based on survival function Does not include an entry for k if the probability of the most likely rate to be k is less than ptol.""" ret = [] for k in itertools.count(): sf = scipy.stats.poisson.sf(k, rate) # include first row of lookup table no matter what if sf.max() < ptol and k != 0: break ret.append(sf) return np.array(ret) @nb.njit() def uniform_to_poisson_plus_nonzero(numbers, lookup): assert numbers.shape[0] == lookup.shape[1] nonzeros = [] ret = np.empty(lookup.shape[1], dtype=np.int64) for j in range(lookup.shape[1]): number = numbers[j] for i in range(lookup.shape[0]): if number < lookup[i, j]: continue else: break else: raise Exception("failed to find match for survival function - ptol is too big") ret[j] = i if i != 0: nonzeros.append(j) return ret, np.array(nonzeros, dtype=np.int64) %timeit simulate_orig_answer(nyears, rates, alphas, betas, values, rng) %timeit simulate_warren_answer(nyears, rates, alphas, betas, values, rng) %timeit simulate_new_rng(nyears, rates, alphas, betas, values, rng) | 1 | 2 |
79,034,526 | 2024-9-28 | https://stackoverflow.com/questions/79034526/performance-optimal-way-to-serialise-python-objects-containing-large-pandas-data | I am dealing with Python objects containing Pandas DataFrame and Numpy Series objects. These can be large, several millions of rows. E.g. @dataclass class MyWorld: # A lot of DataFrames with millions of rows samples: pd.DataFrame addresses: pd.DataFrame # etc. I need to cache these objects, and I am hoping to find an efficient and painless way to serialise them, instead of standard pickle.dump(). Are there any specialised Python serialisers for such objects that would pickle Series data with some efficient codec and compression automatically? Alternatively, I need to hand construct several Parquet files, but that requires a lot of more manual code to deal with this, and I'd rather avoid that if possible. Performance here may mean Speed File size (can be related, as you need to read less from the disk/network) I am aware of joblib.dump() which does some magic for these kind of objects, but based on the documentation I am not sure if this is relevant anymore. | What about storing huge structures in parquet format while pickling it, this can be automated easily: import io from dataclasses import dataclass import pickle import numpy as np import pandas as pd @dataclass class MyWorld: array: np.ndarray series: pd.Series frame: pd.DataFrame @dataclass class MyWorldParquet: array: np.ndarray series: pd.Series frame: pd.DataFrame def __getstate__(self): for key, value in self.__annotations__.items(): if value is np.ndarray: self.__dict__[key] = pd.DataFrame({"_": self.__dict__[key]}) if value is pd.Series: self.__dict__[key] = self.__dict__[key].to_frame() stream = io.BytesIO() self.__dict__[key].to_parquet(stream) self.__dict__[key] = stream return self.__dict__ def __setstate__(self, data): self.__dict__.update(data) for key, value in self.__annotations__.items(): self.__dict__[key] = pd.read_parquet(self.__dict__[key]) if value is np.ndarray: self.__dict__[key] = self.__dict__[key]["_"].values if value is pd.Series: self.__dict__[key] = self.__dict__[key][self.__dict__[key].columns[0]] Off course we will have some trade off between performance and volumetry as reducing the second requires format translation and compression. Lets create a toy dataset: N = 5_000_000 data = { "array": np.random.normal(size=N), "series": pd.Series(np.random.uniform(size=N), name="w"), "frame": pd.DataFrame({ "c": np.random.choice(["label-1", "label-2", "label-3"], size=N), "x": np.random.uniform(size=N), "y": np.random.normal(size=N) }) } We can compare the parquet conversion trade off (about 300 ms more): %timeit -r 10 -n 1 pickle.dumps(MyWorld(**data)) # 1.57 s Β± 162 ms per loop (mean Β± std. dev. of 10 runs, 1 loop each) %timeit -r 10 -n 1 pickle.dumps(MyWorldParquet(**data)) # 1.9 s Β± 71.3 ms per loop (mean Β± std. dev. of 10 runs, 1 loop each) And the volumetry gain (about 40 Mb spared): len(pickle.dumps(MyWorld(**data))) / 2 ** 20 # 200.28876972198486 len(pickle.dumps(MyWorldParquet(**data))) / 2 ** 20 # 159.13739013671875 Indeed those metrics will strongly depends on the actual dataset to be serialized. | 4 | 6 |
79,042,253 | 2024-10-1 | https://stackoverflow.com/questions/79042253/how-do-i-speed-up-querying-my-600mio-rows | My database has about 600Mio entries that I want to query (Pandas is too slow). This local dbSNP only contains rsIDs and genomic positions. I used: import sqlite3 import gzip import csv rsid_db = sqlite3.connect('rsid.db') rsid_cursor = rsid_db.cursor() rsid_cursor.execute( """ CREATE TABLE rsids ( rsid TEXT, chrom TEXT, pos INTEGER, ref TEXT, alt TEXT ) """ ) with gzip.open('00-All.vcf.gz', 'rt') as vcf: # from https://ftp.ncbi.nih.gov/snp/organisms/human_9606/VCF/00-All.vcf.gz reader = csv.reader(vcf, delimiter="\t") i = 0 for row in reader: if not ''.join(row).startswith('#'): rsid_cursor.execute( f""" INSERT INTO rsids (rsid, chrom, pos, ref, alt) VALUES ('{row[2]}', '{row[0]}', '{row[1]}', '{row[3]}', '{row[4]}'); """ ) i += 1 if i % 1000000 == 0: print(f'{i} entries written') rsid_db.commit() rsid_db.commit() rsid_db.close() I want to query multiple rsIDs and get their genomic position and alteration (query rsid and get chrom, pos, ref, alt and rsid). One entry looks like: rsid chrom pos ref alt rs537152180 1 4002401 G A,C I query using: import sqlite3 import pandas as pd def query_rsid(rsid_list, rsid_db_path='rsid.db'): with sqlite3.connect(rsid_db_path) as rsid_db: rsid_cursor = rsid_db.cursor() rsid_cursor.execute( f""" SELECT * FROM rsids WHERE rsid IN ('{"', '".join(rsid_list)}'); """ ) query = rsid_cursor.fetchall() return query It takes about 1.5 minutes no matter how many entries. Is there a way to speed this up? | Others have suggested defining your rsid column as the primary key, or alternatively creating a unique index on it. That's a good idea. Another thing: rsid IN ('dirty','great','list','of',items') may use a so-called skip-scan to get its results. If your rsid_list is very large, or if it pulls in values that are lexically widely separated, you may get a benefit from putting the items in the list into a temporary table then doing SELECT rsids.* FROM rsids JOIN temp_rsids_list ON rsids.rsid = temp_rsids_list.rsids to get a more efficient lookup. I would declare the table like this: CREATE TABLE rsids ( rsid TEXT PRIMARY KEY COLLATE BINARY, chrom TEXT, pos INTEGER, ref TEXT, alt TEXT ) WITHOUT ROWID COLLATE BINARY is the default. But still, it's helpful to show it because you know upfront you don't want case-insensitive matching on that column. This will remind your future self and your colleagues of that important optimization. WITHOUT ROWID tells SQLite to organize the table as a so-called "clustered index" where the other values are stored along with the easily searchable primary key. If you can make your primary key into an INTEGER that is a good idea for the sake of performance. | 3 | 2 |
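A minimal sketch of the temp-table variant described in the answer, assuming the rsids table from the question; the helper name and the temp-table/column names are made up for illustration. Using placeholders here also avoids the string interpolation used in the original INSERT loop.

import sqlite3

def query_rsid_join(rsid_list, rsid_db_path="rsid.db"):
    with sqlite3.connect(rsid_db_path) as db:
        cur = db.cursor()
        # the temp table lives only for the duration of this connection
        cur.execute("CREATE TEMP TABLE wanted (rsid TEXT PRIMARY KEY)")
        cur.executemany("INSERT INTO wanted VALUES (?)", ((r,) for r in rsid_list))
        cur.execute("SELECT rsids.* FROM rsids JOIN wanted ON rsids.rsid = wanted.rsid")
        return cur.fetchall()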
79,044,789 | 2024-10-1 | https://stackoverflow.com/questions/79044789/why-is-python-calculation-with-float-numbers-faster-than-calculation-with-intege | This example shows that Python calculation with float numbers is faster than calculation with integer numbers. I am wondering why calculation with integers is not faster than with floats. import time # Number of operations N = 10**7 # Integer addition start_time = time.time() int_result = 0 for i in range(N): int_result += 1 int_time = time.time() - start_time # Float addition start_time = time.time() float_result = 0.0 for i in range(N): float_result += 1.0 float_time = time.time() - start_time print(f"Time taken for integer addition: {int_time:.6f} seconds") print(f"Time taken for float addition: {float_time:.6f} seconds") | Python uses integers with no fixed size. Under the hood it's an array of digits (not base-10 digits, but fixed-size integers that fit into 1 array element, to which we refer as "digit" in this case) that can be extended if needed. That's why you can have absurdly large integers in Python. CPUs have instructions to perform floating-point operations and integer operations, but they usually only work with numbers that are 1 machine word long or less. Python integers do not satisfy this condition, but floats do. Without diving too much into long-number arithmetic, integer operations in Python are computed digit by digit. The longer the integers are, the longer the computation takes. In your code, however, the ints do not use more than 1 digit. Still, these are arrays that require some overhead to be processed. | 2 | 5 |
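A quick way to see the variable-size integer representation the answer describes (an illustration, not part of the original post; exact byte counts depend on the CPython build):

import sys

print(sys.getsizeof(1))         # a one-"digit" int, mostly object overhead
print(sys.getsizeof(10**100))   # many digits, so a bigger object
print((10**100).bit_length())   # 333 bits -- far wider than one machine word
print(sys.getsizeof(1.0))       # a float is always one fixed-size C double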
79,042,519 | 2024-10-1 | https://stackoverflow.com/questions/79042519/default-class-representation-in-repl-python | Where can I find the implementation of the default class representation in Python/REPL? Example: >>> class Foo: ... pass >>> Foo <class '__main__.Foo'> Where is that whole string <class '__main__.Foo'> coming from? Is it defined in any PEP? I've tried grepping the source of https://github.com/python/cpython for <class but it gives me no useful results, and grepping for class is pointless. | The string <class '__main__.Foo'> comes from the type.__repr__ method, which is implemented with the type_repr function in Objects/typeobject.c of CPython: PyObject *result; if (mod != NULL && !_PyUnicode_Equal(mod, &_Py_ID(builtins))) { result = PyUnicode_FromFormat("<class '%U.%U'>", mod, name); } | 2 | 4 |
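As a small experiment (not part of the answer), one can confirm that the REPL output really comes from type.__repr__ by overriding it on a metaclass and watching the printed form change:

class Meta(type):
    def __repr__(cls):
        return f"<my class {cls.__module__}.{cls.__qualname__}>"

class Foo(metaclass=Meta):
    pass

print(Foo)  # <my class __main__.Foo> instead of <class '__main__.Foo'>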
79,044,503 | 2024-10-1 | https://stackoverflow.com/questions/79044503/can-pyright-mypy-deduce-the-type-of-an-entry-of-an-ndarray | How can I annotate an ndarray so that Pyright/Mypy Intellisense can deduce the type of an entry? What can I fill in for ??? in x: ??? = np.array([1, 2, 3], dtype=int) so that y = x[0] is identified as an integer as rather than Any? | This is not currently possible. As defined in the type stubs, ndarray.__getitem__ has 5 @overloads. When the type checker evaluates x[0], it uses the second one, which returns Any: @overload def __getitem__(self, key: SupportsIndex | tuple[SupportsIndex, ...]) -> Any: ... The other four all return ndarray/NDArray: @overload def __getitem__(self, key: ( NDArray[integer[Any]] | NDArray[np.bool] | tuple[NDArray[integer[Any]] | NDArray[np.bool], ...] )) -> ndarray[_Shape, _DType_co]: ... @overload def __getitem__(self, key: ( None | slice | EllipsisType | SupportsIndex | _ArrayLikeInt_co | tuple[None | slice | EllipsisType | _ArrayLikeInt_co | SupportsIndex, ...] )) -> ndarray[_Shape, _DType_co]: ... @overload def __getitem__(self: NDArray[void], key: str) -> NDArray[Any]: ... @overload def __getitem__(self: NDArray[void], key: list[str]) -> ndarray[_ShapeType_co, _dtype[void]]: ... This means that, as long as x is determined to be of type ndarray, x[0] cannot be inferred as int. As for type checkers' support: Pyright only reads information out of type stubs, so it's not possible to modify its behaviour. Mypy allows for plugins, and Numpy bundles its own Mypy plugin, but this plugin doesn't seem to have the inference capabilities you are looking for. It is thus necessary to manually cast the values: from typing import cast y = cast(int, x[0]) reveal_type(y) # int | 1 | 2 |
79,044,369 | 2024-10-1 | https://stackoverflow.com/questions/79044369/mathematical-expression-possible-instead-of-using-a-while-loop | It seems possible to execute this without using a loop, in python: check = -arr[i] while check >= mi: check -= k arr[i] and mi are constants at this point. I want to minimise the value of check such that it obeys the following: check = (x * k) - arr[i] for some x AND: check >= mi Everything here is an integer. Obviously using a loop here is very inefficient, so I am wondering how to do it in O(1) time. The while loop code is correct for all cases, just slow. I tried this, but it is wrong. x = math.ceil((mi + arr[i]) / k) check = x * k - arr[i] | The problem is the ">=". That gives your code an off-by-one. Plus, you have addition and subtraction swapped. We need to REDUCE -arr[i] by the x*k value. x = math.ceil((-arr[i]-mi+1)/k) check = -arr[i] - x * k Consider an example, where arr[i] = -14, mi = 5, k = 3. If we do this by hand, we go 14, 11, 8, 5, 2. So, 4 steps. If arr[i] = -13, then there are only 3 steps: 13, 10, 7, 4. Test app: import math i = 0 def f1(arr,mi,k): check = -arr[i] while check >= mi: check -= k return check def f2(arr,mi,k): x = math.ceil((-arr[i]-mi+1)/k) check = -arr[i] - x * k return check print(f1([-14],5,3)) print(f2([-14],5,3)) print(f1([-13],5,3)) print(f2([-13],5,3)) Output: 2 2 4 4 | 2 | 0 |
79,044,322 | 2024-10-1 | https://stackoverflow.com/questions/79044322/conditionally-slice-a-pandas-multiindex-on-specific-level | For my given multi-indexed DataFrame: df = pd.DataFrame( np.random.randn(12), index=[ [1,1,2,3,4,4,5,5,6,6,7,8], [1,2,1,1,1,2,1,2,1,2,2,2], ] ) 0 1 1 1.667692 2 0.274428 2 1 0.216911 3 1 -0.513463 4 1 -0.642277 2 -2.563876 5 1 2.301943 2 1.455494 6 1 -1.539390 2 -1.344079 7 2 0.300735 8 2 0.089269 I would like to slice it such that I keep only rows where second index level contains BOTH 1 and 2 0 1 1 1.667692 2 0.274428 4 1 -0.642277 2 -2.563876 5 1 2.301943 2 1.455494 6 1 -1.539390 2 -1.344079 How can I do this? | Another possible solution, which is based on the following: df.groupby(level=0) groups the dataframe by the first level of the index. filter(lambda x: set(x.index.get_level_values(1)) == {1, 2}) checks if the second level of the index for each group contains both 1 and 2, and retains only the groups that meet this condition. df.groupby(level=0).filter(lambda x: set(x.index.get_level_values(1)) == {1, 2}) Output: 0 1 1 -1.085631 2 0.997345 4 1 -0.578600 2 1.651437 5 1 -2.426679 2 -0.428913 6 1 1.265936 2 -0.866740 | 4 | 4 |
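An alternative sketch that avoids the per-group lambda by working on the index levels directly (written against the same example frame; not from the original answer):

lvl0 = df.index.get_level_values(0)
lvl1 = df.index.get_level_values(1)
has_1 = set(lvl0[lvl1 == 1])   # level-0 keys that have a 1 in level 1
has_2 = set(lvl0[lvl1 == 2])   # level-0 keys that have a 2 in level 1
result = df[lvl0.isin(has_1 & has_2)]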
79,043,009 | 2024-10-1 | https://stackoverflow.com/questions/79043009/how-do-i-update-a-package-on-conda-forge | How do I update a package on conda-forge? I'm trying to install the newest version of neuralprophet, but it is outdated on the conda-forge channel. There are pull requests on the feedstock github page, but how do I merge these? Can I merge these, or are special permissions needed? As a side note, I've also tried to get this package from the anaconda channel but the windows version is very outdated. As a last resort I could install the package via pip but this creates all kinds of issues. | You cannot merge the PRs unless you are a maintainer of the github repo. For situations where the package you need is not available or outdated on the anaconda repo, you can install all of the dependencies that are required using conda, and then install the package via PIP. This puts the package as the only part that is not managed by conda. Looking at the dependencies, you can install most of them via conda: # Windows conda install -c conda-forge ` "numpy>=1.25.0,<2.0.0" ` "pandas>=2.0.0" ` "pytorch>=2.0.0,<2.4.0" ` "pytorch-lightning>=2.0.0,<2.4.0" ` "tensorboard>=2.11.2" ` "torchmetrics>=1.0.0" ` "typing-extensions>=4.5.0" ` "holidays>=0.41,<1.0" ` "captum>=0.6.0" ` "matplotlib>=3.5.3" ` "plotly>=5.13.1" ` "python-kaleido==0.2.1" # Linux conda install -c conda-forge \ "numpy>=1.25.0,<2.0.0" \ "pandas>=2.0.0" \ "pytorch>=2.0.0,<2.4.0" \ "pytorch-lightning>=2.0.0,<2.4.0" \ "tensorboard>=2.11.2" \ "torchmetrics>=1.0.0" \ "typing-extensions>=4.5.0" \ "holidays>=0.41,<1.0" \ "captum>=0.6.0" \ "matplotlib>=3.5.3" \ "plotly>=5.13.1" \ "python-kaleido==0.2.1" And then install neuralprophet using pip: pip install neuralprophet | 2 | 3 |
79,042,426 | 2024-10-1 | https://stackoverflow.com/questions/79042426/pure-polars-version-of-safe-ast-literal-eval | I have data like this, df = pl.DataFrame({'a': ["['b', 'c', 'd']"]}) I want to convert the string to a list I use, df = df.with_columns(a=pl.col('a').str.json_decode()) it gives me, ComputeError: error inferring JSON: InternalError(TapeError) at character 1 (''') then I use this function, import ast def safe_literal_eval(val): try: return ast.literal_eval(val) except (ValueError, SyntaxError): return val df = df.with_columns(a=pl.col('a').map_elements(safe_literal_eval, return_dtype=pl.List(pl.String))) and get the expected output, but is there a pure polars way to achieve the same? | A general ast eval is not yet available. The problem with json_decode is that the list representation uses single quotes (instead of double quotes as used in JSON). In your example, this issue can be circumvented by replacing the single quotes using pl.Expr.str.replace_all as follows. df.with_columns( pl.col("a").str.replace_all("'", '"').str.json_decode() ) shape: (1, 1) βββββββββββββββββββ β a β β --- β β list[str] β βββββββββββββββββββ‘ β ["b", "c", "d"] β βββββββββββββββββββ | 1 | 2 |
79,041,290 | 2024-9-30 | https://stackoverflow.com/questions/79041290/sort-all-matrix | I'm trying to sort a matrix. If I have [[5 7 1 2] [4 8 1 4] [4 0 1 9] [2 7 5 0]] I want this result: [[9 8 7 7] [5 5 4 4] [4 2 2 1] [1 1 0 0]] This is my code (I'm using random numbers): import numpy as np MatriX = np.random.randint(10, size=(4,4)) print(MatriX) for i in range(10): for j in range(10): a=np.sort(-MatriX,axis=1)*-1 b=np.sort(-a,axis=0)*-1 c=np.transpose(b) d=np.sort(-c,axis=1)*-1 e=np.sort(-d,axis=0)*-1 print(e) but this code doesn't work at all | It's as simple as this: import numpy as np data = np.random.randint(10, size=(4,4)) print(data) result = np.sort(data.flatten())[::-1].reshape(data.shape) print(result) Output: [[4 7 7 6] [1 9 5 0] [5 8 1 2] [2 5 7 3]] [[9 8 7 7] [7 6 5 5] [5 4 3 2] [2 1 1 0]] An explanation of the parts of np.sort(data.flatten())[::-1].reshape(data.shape): data.flatten() flattens the array into a one-dimensional shape and returns the result; np.sort() sorts the one-dimensional result and returns the sorted data; [::-1] reverses the order of the sorted data; .reshape(data.shape) applies the shape of the original data to that reversed sorted data. Note that something like this would be easier to read: result = data.flattened().sorted().reversed().reshaped(data.shape) But numpy doesn't implement all the needed methods that allow for this more readable chaining, and many methods make changes in-place. So you get the slightly harder to read solution above instead. (You could define a class that has these, but your code wouldn't make sense to others familiar with numpy, so it is probably best left the way it is.) However, if your data is large, performing some operations in-place may make more sense. The solution given here is intended to be simple; you may want to look at some optimisation for very large data. | 2 | 5 |
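A small side note on the answer's closing remark: np.sort can do the flattening as part of the sort call via axis=None, so the explicit flatten() step can be dropped (a sketch using the same data variable):

result = np.sort(data, axis=None)[::-1].reshape(data.shape)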
79,040,124 | 2024-9-30 | https://stackoverflow.com/questions/79040124/p-values-for-all-pairs-between-two-matrices-to-achieve-matlabs-corr-function | I have been trying to implement in Python (with numpy and scipy) this variant of Matlab's corr function, but it seems I cannot solve it by myself. What I need is to implement the alternative Matlab corr implementation: [rho,pval] = corr(X,Y) I would appreciate any help on this!!! What I tried: I tried to modify the solutions posted here and in this other thread, without much success. For instance, I was able to hstack the two matrixes X and Y, and by keeping a part of the correlation matrix, I got the right result for the correlations. However, the same trick is NOT working for the p-values. Actually, both the solutions in the other threads gave me the (more or less) correct values for the correlation coefficients, which is great, but I cannot reproduce the behavior of the Matlab implementation for the p-values. Also, the solution here aims to reproduce corrcoef's behaviour, which, according to Matlab's documentation, converts the input matrices into column vectors before computing the correlations. On the other hand, I also tried to hstack the matrices in Matlab, and again got the same answer for the correlations, but the values are quite different for the p-values between what I get in Python and what I got in Matlab. This made me think that the problem perhaps is in the statistic being calculated. However, according to the Matlab documentation, it uses: corr computes the p-values for Pearson's correlation using a Student's t distribution for a transformation of the correlation and, from the documentation in SciPy I think they are using the same test, but I am not 100% sure, given that the bibligraphic reference there is for Student's paper, which is the one that Matlab's docs say it uses (Student's r), but as I said, I am NOT sure at all. | I think the easiest way is to do it in pandas, using scipy.stats pearsonr, which returns the pairwise rho and pval. I tested with some sample below, and I believe the results match the matlab results. import numpy as np from scipy.stats import pearsonr import pandas as pd X = np.array([ [0.5377, 0.3188, 3.5784, 0.7254], [1.8339, -1.3077, 2.7694, -0.0631], [-2.2588, -0.4336, -1.3499, 0.7147], [0.8622, 0.3426, 3.0349, -0.2050] ]) Y1 = np.array([ [-0.1241, 0.6715, 0.4889, 0.2939], [1.4897, -1.2075, 1.0347, -0.7873], [1.4090, 0.7172, 0.7269, 0.8884], [1.4172, 1.6302, -0.3034, -1.1471] ]) Y2 = Y1 Y2[:, 3] = Y2[:, 3] + X[:, 1] df1 = pd.DataFrame(X) df2 = pd.DataFrame(Y2) coeffmat = np.zeros((df1.shape[1], df2.shape[1])) pvalmat = np.zeros((df1.shape[1], df2.shape[1])) for i in range(df1.shape[1]): for j in range(df2.shape[1]): corrtest = pearsonr(df1[df1.columns[i]], df2[df2.columns[j]]) coeffmat[i,j] = corrtest[0] pvalmat[i,j] = corrtest[1] dfcoeff = pd.DataFrame(coeffmat, columns=df2.columns, index=df1.columns) print(dfcoeff) dfpvals = pd.DataFrame(pvalmat, columns=df2.columns, index=df1.columns) print(dfpvals) I also tried playing around with pandas corrwith function (https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.corrwith.html#pandas.DataFrame.corrwith) but didn't spend enough time. I think there is a more elegant solution which uses that. Hope this helps | 2 | 0 |
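The nested loop in the answer can also be vectorised. Below is a sketch under the assumption that the p-values come from the usual Student's t transformation of r (the approach the MATLAB documentation quoted in the question describes); the function name is made up, and edge cases such as r = +/-1 are not handled:

import numpy as np
from scipy import stats

def corr_with_pvals(X, Y):
    # column-wise Pearson correlations between columns of X and columns of Y
    n = X.shape[0]
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    Yz = (Y - Y.mean(axis=0)) / Y.std(axis=0, ddof=1)
    rho = Xz.T @ Yz / (n - 1)
    # two-sided p-values from the t transformation of r
    t = rho * np.sqrt((n - 2) / (1 - rho**2))
    pval = 2 * stats.t.sf(np.abs(t), df=n - 2)
    return rho, pval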
79,039,949 | 2024-9-30 | https://stackoverflow.com/questions/79039949/pandas-pct-change-but-loop-back-to-start | I'm looking at how to use the pandas pct_change() function, but I need the values 'wrap around', so the last and first values create a percent change value in position 0 rather than NaN. For example: df = pd.DataFrame({'Month':[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], 'Value':[1, 0.9, 0.8, 0.75, 0.75, 0.8, 0.7, 0.65, 0.7, 0.8, 0.85, 0.9]}) Month Value 0 1 1.00 1 2 0.90 2 3 0.80 3 4 0.75 4 5 0.75 5 6 0.80 6 7 0.70 7 8 0.65 8 9 0.70 9 10 0.80 10 11 0.85 11 12 0.90 Using pct_change() + 1 gives: df['percent change'] = df['Value'].pct_change() + 1 Month Value percent change 0 1 1.00 NaN 1 2 0.90 0.900000 2 3 0.80 0.888889 3 4 0.75 0.937500 4 5 0.75 1.000000 5 6 0.80 1.066667 6 7 0.70 0.875000 7 8 0.65 0.928571 8 9 0.70 1.076923 9 10 0.80 1.142857 10 11 0.85 1.062500 11 12 0.90 1.058824 However I also need to know the % change between December (moth=12) and January (month=1), so the NaN should be 1.111111. I hope to eventually do this to several groups within a group by, so muddling about filling in the Nan with one value over the other, or manually calculating all the percentages seems a long winded way to do it. Is there a simpler way to achieve this? | Just use numpy.roll that is designed for this specific purpose: import numpy as np df['percent change'] = df['Value'].div(np.roll(df['Value'], 1)) Output: Month Value percent change 0 1 1.00 1.111111 1 2 0.90 0.900000 2 3 0.80 0.888889 3 4 0.75 0.937500 4 5 0.75 1.000000 5 6 0.80 1.066667 6 7 0.70 0.875000 7 8 0.65 0.928571 8 9 0.70 1.076923 9 10 0.80 1.142857 10 11 0.85 1.062500 11 12 0.90 1.058824 If you need to perform this per group, combine it with groupby.transform: import numpy as np df['percent change'] = df['Value'].div(df.groupby('Group')['Value'].transform(lambda x: np.roll(x, 1))) Output: Month Value Group percent change 0 1 1.00 1 1.111111 1 2 0.90 1 0.900000 2 3 0.80 1 0.888889 3 4 0.75 1 0.937500 4 5 0.75 1 1.000000 5 6 0.80 1 1.066667 6 7 0.70 1 0.875000 7 8 0.65 1 0.928571 8 9 0.70 1 1.076923 9 10 0.80 1 1.142857 10 11 0.85 1 1.062500 11 12 0.90 1 1.058824 12 1 1.00 2 1.111111 13 2 0.90 2 0.900000 14 3 0.80 2 0.888889 15 4 0.75 2 0.937500 16 5 0.75 2 1.000000 17 6 0.80 2 1.066667 18 7 0.70 2 0.875000 19 8 0.65 2 0.928571 20 9 0.70 2 1.076923 21 10 0.80 2 1.142857 22 11 0.85 2 1.062500 23 12 0.90 2 1.058824 | 2 | 1 |
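For the single-series case there is also a shift-based spelling of the same idea that stays in pandas (a sketch; the groupby version in the answer generalises it):

df['percent change'] = df['Value'] / df['Value'].shift(1).fillna(df['Value'].iloc[-1])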
79,039,864 | 2024-9-30 | https://stackoverflow.com/questions/79039864/why-x-yi-faster-than-xi-yi | I'm new to CuPy and CUDA/GPU computing. Can someone explain why (x / y)[i] faster than x[i] / y[i]? When taking advantage of GPU accelerated computations, are there any guidelines that would allow me to quickly determine which operation is faster? Which to avoid benchmarking every operation. # In VSCode Jupyter Notebook import cupy as cp from cupyx.profiler import benchmark x = cp.arange(1_000_000) y = (cp.arange(1_000_000) + 1) / 2 i = cp.random.randint(2, size=1_000_000) == 0 x, y, i # Output: (array([ 0, 1, 2, ..., 999997, 999998, 999999], shape=(1000000,), dtype=int32), array([5.000000e-01, 1.000000e+00, 1.500000e+00, ..., 4.999990e+05, 4.999995e+05, 5.000000e+05], shape=(1000000,), dtype=float64), array([ True, False, True, ..., True, False, True], shape=(1000000,), dtype=bool)) def test1(x, y, i): return (x / y)[i] def test2(x, y, i): return x[i] / y[i] print(benchmark(test1, (x, y, i))) print(benchmark(test2, (x, y, i))) # Output: test1: CPU: 175.164 us +/- 61.250 (min: 125.200 / max: 765.100) us GPU-0: 186.001 us +/- 67.314 (min: 134.144 / max: 837.568) us test2: CPU: 342.364 us +/- 130.840 (min: 223.000 / max: 1277.600) us GPU-0: 368.133 us +/- 136.911 (min: 225.504 / max: 1297.408) us | The ratio of arithmetic capability to bytes retrieval capability on a GPU (at least, maybe also CPU) is generally lopsided in favor of arithmetic capability. Therefore algorithm performance can often be predicted by the the number of memory operations required. The first step in the x[i]/y[i] realization is to create x[i] and y[i], which is pure memory operations - stream compaction. Then the reduced level of arithmetic occurs. In the other realization, a higher level of arithmetic occurs first, followed by a 50% reduction in memory ops. A high end data center GPU - e.g. the A100, could have ~10 TF of FP64 throughput, and ~2 TB/s of memory bandwidth, so that ratio would be 5 FP64 FLOPs per byte of data read or written. For an 8-byte FP64 quantity, that is 40 FLOPs per FP64 quantity read or written. (Division requires multiple FLOPs.) Let's assume that the i array is composed of 50% True and 50% false. Now lets count all the steps required. In the x[i]/y[i] realization, we must read the entire i array - 1,000,000 reads. use that to read a portion of the x and y arrays, but on a GPU, you don't get to read individual elements. You read memory in chunks of 32 bytes at a time, so realistically with evenly distributed True/False distribution, you may end up reading the entire x and y arrays - 2,000,000 reads stream compaction on x, y according to i perform the division arithmetic - 500,000 division ops. write results - 500,000 writes In the (x/y)[i] realization: read the entire i array - 1,000,000 reads. read the entire x and y arrays - 2,000,000 reads perform the division arithmetic - 1,000,000 division ops. stream compaction on division result according to i write results - 500,000 writes The only difference is the positioning of the stream compaction operation(s) and the number of division operations required. Stream compaction is basically a purely memory bound activity, so the difference is in the cost of accessing memory. The first approach requires twice as much stream compaction. On a GPU, at least, flops are often significantly less "expensive" than reading/writing data. The percentage ratio of True/False in i doesn't really affect this analysis for A100. 
Other GPUs like T4 have a relatively low throughput for FP64 compared to FP32. T4 has a ratio of 320GB/s of memory bandwidth to ~8TF of FP32 throughput, but only 1/8TF or 125GF of FP64 throughput. At high ratios of True to False, this won't matter, but at low ratios of True to False, for T4, it's possible that the cost of division is a factor. This distinction would probably be wiped out by switching to the FP32 datatype. Another way to say this could be that T4, for FP64 ops, doesn't necessarily satisfy the initial premise of my answer here: "The ratio of arithmetic capability to bytes retrieval capability on a GPU (at least, maybe also CPU) is generally lopsided in favor of arithmetic capability." | 3 | 7 |
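A sketch of how one could test the FP32-vs-FP64 hypothesis from that last paragraph with the same benchmark helper used in the question (x, y, i are the arrays defined there):

import cupy as cp
from cupyx.profiler import benchmark

x32 = x.astype(cp.float32)
y32 = y.astype(cp.float32)

print(benchmark(lambda: (x32 / y32)[i], n_repeat=100))
print(benchmark(lambda: x32[i] / y32[i], n_repeat=100))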
79,039,069 | 2024-9-30 | https://stackoverflow.com/questions/79039069/cancelling-all-tasks-on-failure-with-concurrent-futures-in-python | I am using Python's concurrent.futures library with ThreadPoolExecutor and ProcessPoolExecutor. I want to implement a mechanism to cancel all running or unexecuted tasks if any one of the tasks fails. Specifically, I want to: Cancel all futures (both running and unexecuted) when a task fails. Raise the error that caused the first task to fail if that error is silently ignored; otherwise, let Python raise it naturally. Here is the approach I have tried: from concurrent.futures import ProcessPoolExecutor, as_completed from functools import partial copy_func = partial(copy_from, table_name=table_name, column_string=column_string) with ProcessPoolExecutor(max_workers=cores_to_use) as executor: futures = {executor.submit(copy_func, file_path): file_path for file_path in file_path_list} for f in as_completed(futures): try: f.result() except Exception as e: executor.shutdown(wait=False) # Attempt to stop the executor for future in futures: future.cancel() # Cancel all futures raise e # Raise the exception Questions: Is this the correct way to handle task cancellation in ThreadPoolExecutor and ProcessPoolExecutor? Are there any better approaches to achieve this functionality? How can I ensure that the raised exception is not silently ignored? How can I free up all resources used by concurrent.futures after an exception? Thank you! | Is this the correct way to handle task cancellation in ThreadPoolExecutor and ProcessPoolExecutor? Your approach is almost correct, but there are some important things to note: executor.shutdown(wait=False) only prevents new tasks from being submitted. It doesn't actually cancel the running tasks. The tasks that have already started executing will continue until completion. future.cancel() only cancels tasks that are still in the queue (not started). If a task has already started, future.cancel() won't stop it. To cancel the running tasks effectively, you need to handle it in a way that your tasks can be interrupted by checking a flag or catching an exception. Are there any better approach to achieve this functionality? A better approach would be to use a combination of: Graceful task cancellation: Your tasks should periodically check for a flag or signal to cancel themselves. handling exceptions properly: You can use a try/except block with the worker function(copy_from) itself to detect failure and propagate it. 
Here's the code: from concurrent.futures import ProcessPoolExecutor, as_completed from functools import partial import threading # Cancellation event that worker tasks can check cancel_event = threading.Event() def copy_from(file_path, table_name, column_string): if cancel_event.is_set(): print(f"Task {file_path} cancelled.") return # Simulate some work or copying logic try: # Your actual copy operation here if file_path == "some_bad_file": raise ValueError("Simulated task failure") print(f"Processing {file_path}") except Exception as e: cancel_event.set() # Set the cancellation flag raise e # Function to handle task execution def run_tasks(file_path_list, table_name, column_string, cores_to_use): copy_func = partial(copy_from, table_name=table_name, column_string=column_string) with ProcessPoolExecutor(max_workers=cores_to_use) as executor: futures = {executor.submit(copy_func, file_path): file_path for file_path in file_path_list} try: for f in as_completed(futures): if cancel_event.is_set(): # If cancellation has been triggered, skip waiting for further results break # Attempt to get the result and raise any exception if task failed f.result() except Exception as e: cancel_event.set() # Cancel all tasks for future in futures: future.cancel() # Attempt to cancel futures that haven't started yet raise e # Re-raise the exception file_path_list = ["file1", "file2", "some_bad_file", "file4"] table_name = "my_table" column_string = "col1, col2" try: run_tasks(file_path_list, table_name, column_string, cores_to_use=4) except Exception as ex: print(f"An error occurred: {ex}") How can I ensure that the raised exception is not silently ignored? To ensure that the exception is not silently ignored, it is critical to: Always call f.result() in the for f in as_completed(futures) loop. This will raise any exceptions that occurred during task execution. Propagate the exception using raise a after canceling other tasks, as you have done. How can I free up all resources used by concurrent.futures after an exception? By default, when the with ProcessPoolExecutor block exists, it calls executor.shutdown(), which releases resources. This is handled automatically by Python's context manager. However, adding executor.shutdown(wait=False) is unnecessary, as you don't need to prematurely shut down the executor if you've handled cancellation properly. To ensure resources are properly cleaned: Tasks that haven't started should be canceled with future.cancel(). Tasks that have already started will be stopped by setting the cancellation flag, and when all tasks finish, the context manager will clean up the pool and free up resources. I hope this will help you a little. | 1 | 3 |
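One more detail worth knowing (not covered in the answer above): since Python 3.9, Executor.shutdown accepts cancel_futures=True, which cancels everything still sitting in the queue in a single call, so the manual future.cancel() loop can be replaced. A minimal sketch; tasks that are already running still need a cooperative flag like the one above:

try:
    for f in as_completed(futures):
        f.result()
except Exception:
    # cancels queued-but-not-started futures in one call (Python 3.9+)
    executor.shutdown(wait=False, cancel_futures=True)
    raise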
79,038,956 | 2024-9-30 | https://stackoverflow.com/questions/79038956/pandas-series-str-split-not-accepting-3-keyword-arguments | I'm doing a project using the MIMIC-IV dataset as a source. I found a preprocessing pipeline which is widely used in many projects. When I try to run through said pipeline all is well until I try to generate the time series data representation module (I haven't modified the data nor the pipeline code in any way myself). The following error occurs: TypeError Traceback (most recent call last) .../Downloads/MIMIC-IV-Data-Pipeline-main/mainPipeline.ipynb Cell 27 in <cell line: 20>() 18 impute=False 20 if data_icu: ---> 21 gen=data_generation_icu.Generator(cohort_output,data_mort,data_admn,data_los,diag_flag,proc_flag,out_flag,chart_flag,med_flag,impute,include,bucket,predW) 22 #gen=data_generation_icu.Generator(cohort_output,data_mort,diag_flag,False,False,chart_flag,False,impute,include,bucket,predW) 23 #if chart_flag: 24 # gen=data_generation_icu.Generator(cohort_output,data_mort,False,False,False,chart_flag,False,impute,include,bucket,predW) 25 else: 26 gen=data_generation.Generator(cohort_output,data_mort,data_admn,data_los,diag_flag,lab_flag,proc_flag,med_flag,impute,include,bucket,predW) File ~/Downloads/MIMIC-IV-Data-Pipeline-main/model/data_generation_icu.py:22, in Generator.__init__(self, cohort_output, if_mort, if_admn, if_los, feat_cond, feat_proc, feat_out, feat_chart, feat_med, impute, include_time, bucket, predW) 20 self.cohort_output=cohort_output 21 self.impute=impute ---> 22 self.data = self.generate_adm() 23 print("[ READ COHORT ]") 25 self.generate_feat() File ~/Downloads/MIMIC-IV-Data-Pipeline-main/model/data_generation_icu.py:64, in Generator.generate_adm(self) 62 data['los']=pd.to_timedelta(data['outtime']-data['intime'],unit='h') 63 data['los']=data['los'].astype(str) ---> 64 data[['days', 'dummy','hours']] = data['los'].str.split(' ', -1, expand=True) 65 data[['hours','min','sec']] = data['hours'].str.split(':', -1, expand=True) 66 data['los']=pd.to_numeric(data['days'])*24+pd.to_numeric(data['hours']) ... 127 ) 128 raise TypeError(msg) --> 129 return func(self, *args, **kwargs) TypeError: split() takes from 1 to 2 positional arguments but 3 positional arguments (and 1 keyword-only argument) were given. I'm assuming the problem lies in the use of the pandas.str.split() function (I'm using pandas version 2.0.3) but when I check the documentation it should accept 3 keyword arguments as far as I can tell. Since it isn't my code I'm having a hard time debugging what is going wrong here but maybe I'm missing something. Does anyone know or did anyone run into the same problem when trying to use this pipeline and have any clue how to fix this? | In recent pandas versions, many functions switched to keyword only, you can actually see this in str.split documentation. # positional # keyword-only Series.str.split(pat=None, *, n=-1, expand=False, regex=None) The * means that onlt pat can be used as a positional parameter, n/expand/regex must be provided as keywords. You need to use the named parameter: data[['days', 'dummy','hours']] = data['los'].str.split(' ', n=-1, expand=True) There actually used to be a FutureWarning about this in previous versions: In a future version of pandas all arguments of StringMethods.split except for the argument 'pat' will be keyword-only. | 1 | 2 |
79,032,773 | 2024-9-27 | https://stackoverflow.com/questions/79032773/stuck-on-scraping-with-beautifulsoup-while-learning-need-some-pointers | I started learning screen scraping using BeautifulSoup. To get started I took a wikipedia article in the following format <table class="wikitable sortable jquery-tablesorter"> <caption></caption> <thead> <tr> <th colspan="2" style="width: 6%;" class="headerSort" tabindex="0" role="columnheader button" title="Sort ascending">Opening</th> <th style="width: 20%;" class="headerSort" tabindex="0" role="columnheader button" title="Sort ascending">Title</th> <th style="width: 10%;" class="headerSort" tabindex="0" role="columnheader button" title="Sort ascending">Director</th> <th style="width: 45%;" class="headerSort" tabindex="0" role="columnheader button" title="Sort ascending">Cast</th> <th style="width: 30%;" class="headerSort" tabindex="0" role="columnheader button" title="Sort ascending">Production company</th> <th class="unsortable" style="width: 1%;"><abbr title="Reference(s)">Ref.</abbr></th> </tr> </thead> <tbody> <tr> <td rowspan="3" style="text-align: center; background: #77bc83;"> <b> O<br /> C<br /> T </b> </td> <td rowspan="1" style="text-align: center; background: #77bc83;"><b>11</b></td> <td style="text-align: center;"> <i><a href="/wiki/Viswam_(film)" title="Viswam (film)">Viswam</a></i> </td> <td>Sreenu Vaitla</td> <td> <link rel="mw-deduplicated-inline-style" href="mw-data:TemplateStyles:r1129693374" /> <div class="hlist"> <ul> <li><a href="/wiki/Gopichand_(actor)" title="Gopichand (actor)">Gopichand</a></li> <li><a href="/wiki/Kavya_Thapar" title="Kavya Thapar">Kavya Thapar</a></li> <li><a href="/wiki/Vennela_Kishore" title="Vennela Kishore">Vennela Kishore</a></li> <li><a href="/wiki/Sunil" title="Sunil">Sunil</a></li> <li><a href="/wiki/Naresh" title="Naresh">Naresh</a></li> </ul> </div> </td> <td> Chitralayam Studios<br /> People Media Factory </td> <td style="text-align: center;"> <sup id="cite_ref-180" class="reference"> <a href="#cite_note-180"><span class="cite-bracket">[</span>178<span class="cite-bracket">]</span></a> </sup> </td> </tr> <tr> <td rowspan="2" style="text-align: center; background: #77bc83;"><b>31</b></td> <td style="text-align: center;"> <i><a href="/wiki/Lucky_Baskhar" title="Lucky Baskhar">Lucky Baskhar</a></i> </td> <td><a href="/wiki/Venky_Atluri" title="Venky Atluri">Venky Atluri</a></td> <td> <link rel="mw-deduplicated-inline-style" href="mw-data:TemplateStyles:r1129693374" /> <div class="hlist"> <ul> <li><a href="/wiki/Dulquer_Salmaan" title="Dulquer Salmaan">Dulquer Salmaan</a></li> <li><a href="/wiki/Meenakshi_Chaudhary" title="Meenakshi Chaudhary">Meenakshi Chaudhary</a></li> </ul> </div> </td> <td><a href="/wiki/S._Radha_Krishna" title="S. 
Radha Krishna">Sithara Entertainments</a></td> <td style="text-align: center;"> <sup id="cite_ref-181" class="reference"> <a href="#cite_note-181"><span class="cite-bracket">[</span>179<span class="cite-bracket">]</span></a> </sup> </td> </tr> <tr> <td style="text-align: center;"> <i><a href="/wiki/Mechanic_Rocky" title="Mechanic Rocky">Mechanic Rocky</a></i> </td> <td>Ravi Teja Mullapudi</td> <td> <link rel="mw-deduplicated-inline-style" href="mw-data:TemplateStyles:r1129693374" /> <div class="hlist"> <ul> <li><a href="/wiki/Vishwak_Sen" title="Vishwak Sen">Vishwak Sen</a></li> <li><a href="/wiki/Meenakshi_Chaudhary" title="Meenakshi Chaudhary">Meenakshi Chaudhary</a></li> </ul> </div> </td> <td>SRT Entertainments</td> <td style="text-align: center;"> <sup id="cite_ref-182" class="reference"> <a href="#cite_note-182"><span class="cite-bracket">[</span>180<span class="cite-bracket">]</span></a> </sup> </td> </tr> <tr> <td style="text-align: center; background: #77ea83;"> <b> N<br /> O<br /> V </b> </td> <td style="text-align: center; background: #77ea83;"><b>9</b></td> <td style="text-align: center;"> <i><a href="/wiki/Telusu_Kada" title="Telusu Kada">Telusu Kada</a></i> </td> <td><a href="/wiki/Neeraja_Kona" title="Neeraja Kona">Neeraja Kona</a></td> <td> <link rel="mw-deduplicated-inline-style" href="mw-data:TemplateStyles:r1129693374" /> <div class="hlist"> <ul> <li><a href="/wiki/Siddhu_Jonnalagadda" title="Siddhu Jonnalagadda">Siddhu Jonnalagadda</a></li> <li><a href="/wiki/Raashii_Khanna" title="Raashii Khanna">Raashii Khanna</a></li> <li><a href="/wiki/Srinidhi_Shetty" title="Srinidhi Shetty">Srinidhi Shetty</a></li> </ul> </div> </td> <td>People Media Factory</td> <td style="text-align: center;"> <sup id="cite_ref-183" class="reference"> <a href="#cite_note-183"><span class="cite-bracket">[</span>181<span class="cite-bracket">]</span></a> </sup> </td> </tr> <tr> <td rowspan="2" style="text-align: center; background: #f4ca16; textcolor: #000;"> <b> D<br /> E<br /> C </b> </td> <td rowspan="1" style="text-align: center; background: #f8de7e;"><b>6</b></td> <td style="text-align: center;"> <i><a href="/wiki/Pushpa_2:_The_Rule" title="Pushpa 2: The Rule">Pushpa 2: The Rule</a></i> </td> <td><a href="/wiki/Sukumar" title="Sukumar">Sukumar</a></td> <td> <link rel="mw-deduplicated-inline-style" href="mw-data:TemplateStyles:r1129693374" /> <div class="hlist"> <ul> <li><a href="/wiki/Allu_Arjun" title="Allu Arjun">Allu Arjun</a></li> <li><a href="/wiki/Fahadh_Faasil" title="Fahadh Faasil">Fahadh Faasil</a></li> <li><a href="/wiki/Rashmika_Mandanna" title="Rashmika Mandanna">Rashmika Mandanna</a></li> </ul> </div> </td> <td><a href="/wiki/Mythri_Movie_Makers" title="Mythri Movie Makers">Mythri Movie Makers</a></td> <td style="text-align: center;"> <sup id="cite_ref-184" class="reference"> <a href="#cite_note-184"><span class="cite-bracket">[</span>182<span class="cite-bracket">]</span></a> </sup> </td> </tr> <tr> <td rowspan="1" style="text-align: center; background: #f8de7e;"><b>20</b></td> <td style="text-align: center;"><i>Robinhood</i></td> <td><a href="/wiki/Venky_Kudumula" title="Venky Kudumula">Venky Kudumula</a></td> <td> <link rel="mw-deduplicated-inline-style" href="mw-data:TemplateStyles:r1129693374" /> <div class="hlist"> <ul> <li><a href="/wiki/Nithiin" title="Nithiin">Nithiin</a></li> <li><a href="/wiki/Sreeleela" title="Sreeleela">Sreeleela</a></li> </ul> </div> </td> <td><a href="/wiki/Mythri_Movie_Makers" title="Mythri Movie Makers">Mythri Movie Makers</a></td> <td 
style="text-align: center;"> <sup id="cite_ref-185" class="reference"> <a href="#cite_note-185"><span class="cite-bracket">[</span>183<span class="cite-bracket">]</span></a> </sup> </td> </tr> </tbody> <tfoot></tfoot> </table> This is my python script that I wrote: soup = BeautifulSoup(html_page, "html.parser") tables = soup.find_all("table",{"class":"wikitable sortable"}) headers = ['month','day','movie','director','cast','producer','reference'] movie_tables = [] total_movies = 0 for table in tables: caption = table.find("caption") if not caption or not caption.get_text().strip(): movie_tables.append(table) #captions = soup.find_all("caption") max_columns = len(headers) # List to store dictionaries data_dict_list = [] movies= [] for movie_table in movie_tables: table_rows = movie_table.find("tbody").find_all("tr")[1:] for table_row in table_rows: total_movies += 1 columns = table_row.find_all('td') row_data = [col.get_text(strip=True) for col in columns] # If the row has fewer columns than the max, pad it with None if len(row_data) == 6: row_data.insert(0, None) elif len(row_data) == 5: row_data.insert(0, None) row_data.insert(1, None) for col in columns: li_tags = col.find_all('li') if li_tags: cast="" for li in li_tags: li_values = li.get_text(strip=True) cast = ', '.join(li_values) row_data.append(cast) else: row_data.append(col.get_text()) # Create a dictionary mapping headers to row data row_dict = dict(zip(headers, row_data)) # Append the dictionary to the list data_dict_list.append(row_dict) # Print the list of dictionaries for row_dict in data_dict_list: print(row_dict) This is the output I am getting (Just showing a few items here): {'month': 'OCT', 'day': '11', 'movie': 'Viswam', 'director': 'Sreenu Vaitla', 'cast': 'GopichandKavya ThaparVennela KishoreSunilNaresh', 'producer': 'Chitralayam StudiosPeople Media Factory', 'reference': '[178]'} {'month': None, 'day': '31', 'movie': 'Lucky Baskhar', 'director': 'Venky Atluri', 'cast': 'Dulquer SalmaanMeenakshi Chaudhary', 'producer': 'Sithara Entertainments', 'reference': '[179]'} {'month': None, 'day': None, 'movie': 'Mechanic Rocky', 'director': 'Ravi Teja Mullapudi', 'cast': 'Vishwak SenMeenakshi Chaudhary', 'producer': 'SRT Entertainments', 'reference': '[180]'} {'month': 'NOV', 'day': '9', 'movie': 'Telusu Kada', 'director': 'Neeraja Kona', 'cast': 'Siddhu JonnalagaddaRaashii KhannaSrinidhi Shetty', 'producer': 'People Media Factory', 'reference': '[181]'} {'month': 'DEC', 'day': '6', 'movie': 'Pushpa 2: The Rule', 'director': 'Sukumar', 'cast': 'Allu ArjunFahadh FaasilRashmika Mandanna', 'producer': 'Mythri Movie Makers', 'reference': '[182]'} {'month': None, 'day': '20', 'movie': 'Robinhood', 'director': 'Venky Kudumula', 'cast': 'NithiinSreeleela', 'producer': 'Mythri Movie Makers', 'reference': '[183]'} This is what I am trying to get(Just showing the last item here): {'month': 'DEC', 'day': '20', 'movie': 'Robinhood', 'director': 'Venky Kudumula', 'cast': 'Nithiin|Sreeleela', 'producer': 'Mythri Movie Makers', 'reference': '[183]'} I've been trying to debug this for the last day or so but I cannot figure out where I went wrong. I am expecting: the month, day filled up in all the items when those columns span multiple rows and are not represented in all rows. Next, I want to have a separator between the different cast members so that it will be easy for me to create a graph later. Also, how do I extract the hyperlinks and store in a separate key in the dictionary while doing all of this? 
| You can do most of the work using DataFrame from io import StringIO import pandas as pd df = pd.read_html(StringIO(html_page))[0] df.columns = ['month', 'day', 'movie', 'director', 'cast', 'producer', 'reference'] df['month'] = df['month'].str.split(' ').apply(''.join) df['cast'] = df['cast'].str.split(r'\s{2,}', regex=True).apply(', '.join) data_dict_list = df.to_dict('records') for row_dict in data_dict_list: print(row_dict) Output {'month': 'OCT', 'day': 11, 'movie': 'Viswam', 'director': 'Sreenu Vaitla', 'cast': 'Gopichand, Kavya Thapar, Vennela Kishore, Sunil, Naresh', 'producer': 'Chitralayam Studios People Media Factory', 'reference': '[178]'} {'month': 'OCT', 'day': 31, 'movie': 'Lucky Baskhar', 'director': 'Venky Atluri', 'cast': 'Dulquer Salmaan, Meenakshi Chaudhary', 'producer': 'Sithara Entertainments', 'reference': '[179]'} {'month': 'OCT', 'day': 31, 'movie': 'Mechanic Rocky', 'director': 'Ravi Teja Mullapudi', 'cast': 'Vishwak Sen, Meenakshi Chaudhary', 'producer': 'SRT Entertainments', 'reference': '[180]'} {'month': 'NOV', 'day': 9, 'movie': 'Telusu Kada', 'director': 'Neeraja Kona', 'cast': 'Siddhu Jonnalagadda, Raashii Khanna, Srinidhi Shetty', 'producer': 'People Media Factory', 'reference': '[181]'} {'month': 'DEC', 'day': 6, 'movie': 'Pushpa 2: The Rule', 'director': 'Sukumar', 'cast': 'Allu Arjun, Fahadh Faasil, Rashmika Mandanna', 'producer': 'Mythri Movie Makers', 'reference': '[182]'} {'month': 'DEC', 'day': 20, 'movie': 'Robinhood', 'director': 'Venky Kudumula', 'cast': 'Nithiin, Sreeleela', 'producer': 'Mythri Movie Makers', 'reference': '[183]'} To add the links as well you will have to use BeautifulSoup. You didn't mention the format, but you can do something like df['href'] = None soup = BeautifulSoup(html_page, 'html.parser') rows = soup.find_all('tr')[1:] for i, row in enumerate(rows): hrefs = row.find_all('a', href=True) df.at[i, 'href'] = [href['href'] for href in hrefs] data_dict_list = df.to_dict('records') for row_dict in data_dict_list: print(row_dict) Output {'month': 'OCT', 'day': 11, 'movie': 'Viswam', 'director': 'Sreenu Vaitla', 'cast': 'Gopichand, Kavya Thapar, Vennela Kishore, Sunil, Naresh', 'producer': 'Chitralayam Studios People Media Factory', 'reference': '[178]', 'href': ['/wiki/Viswam_(film)', '/wiki/Gopichand_(actor)', '/wiki/Kavya_Thapar', '/wiki/Vennela_Kishore', '/wiki/Sunil', '/wiki/Naresh', '#cite_note-180']} {'month': 'OCT', 'day': 31, 'movie': 'Lucky Baskhar', 'director': 'Venky Atluri', 'cast': 'Dulquer Salmaan, Meenakshi Chaudhary', 'producer': 'Sithara Entertainments', 'reference': '[179]', 'href': ['/wiki/Lucky_Baskhar', '/wiki/Venky_Atluri', '/wiki/Dulquer_Salmaan', '/wiki/Meenakshi_Chaudhary', '/wiki/S._Radha_Krishna', '#cite_note-181']} {'month': 'OCT', 'day': 31, 'movie': 'Mechanic Rocky', 'director': 'Ravi Teja Mullapudi', 'cast': 'Vishwak Sen, Meenakshi Chaudhary', 'producer': 'SRT Entertainments', 'reference': '[180]', 'href': ['/wiki/Mechanic_Rocky', '/wiki/Vishwak_Sen', '/wiki/Meenakshi_Chaudhary', '#cite_note-182']} {'month': 'NOV', 'day': 9, 'movie': 'Telusu Kada', 'director': 'Neeraja Kona', 'cast': 'Siddhu Jonnalagadda, Raashii Khanna, Srinidhi Shetty', 'producer': 'People Media Factory', 'reference': '[181]', 'href': ['/wiki/Telusu_Kada', '/wiki/Neeraja_Kona', '/wiki/Siddhu_Jonnalagadda', '/wiki/Raashii_Khanna', '/wiki/Srinidhi_Shetty', '#cite_note-183']} {'month': 'DEC', 'day': 6, 'movie': 'Pushpa 2: The Rule', 'director': 'Sukumar', 'cast': 'Allu Arjun, Fahadh Faasil, Rashmika Mandanna', 'producer': 'Mythri 
Movie Makers', 'reference': '[182]', 'href': ['/wiki/Pushpa_2:_The_Rule', '/wiki/Sukumar', '/wiki/Allu_Arjun', '/wiki/Fahadh_Faasil', '/wiki/Rashmika_Mandanna', '/wiki/Mythri_Movie_Makers', '#cite_note-184']} {'month': 'DEC', 'day': 20, 'movie': 'Robinhood', 'director': 'Venky Kudumula', 'cast': 'Nithiin, Sreeleela', 'producer': 'Mythri Movie Makers', 'reference': '[183]', 'href': ['/wiki/Venky_Kudumula', '/wiki/Nithiin', '/wiki/Sreeleela', '/wiki/Mythri_Movie_Makers', '#cite_note-185']} | 3 | 2 |
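If the per-actor wiki links need to stay attached to the right film, one option is to read them from the cast cell alone rather than the whole row. A rough sketch — it reuses the `html_page` string from the question and assumes the cast column is always the third `<td>` from the end, as in the sample markup:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup(html_page, "html.parser")
for row in soup.select("table.wikitable tbody tr"):
    cells = row.find_all("td")
    if len(cells) < 3:
        continue  # skip rows without data cells
    cast_cell = cells[-3]  # ..., cast, producer, reference
    cast = [(a.get_text(strip=True), a["href"]) for a in cast_cell.find_all("a", href=True)]
    print(cast)
```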
79,037,716 | 2024-9-30 | https://stackoverflow.com/questions/79037716/typing-error-with-inherited-classes-having-overloaded-constructor-with-different | With the code below: import abc class ABCParent(metaclass=abc.ABCMeta): def __init__(self, a: str, sibling_type: type[ABCParent]) -> None: self.a = a self._sibling_type = sibling_type def new_sibling(self, a: str) -> ABCParent: return self._sibling_type(a) class ChildA(ABCParent): def __init__(self, a: str) -> None: super().__init__(a, ChildB) class ChildB(ABCParent): def __init__(self, a: str) -> None: super().__init__(a, ChildA) I get the following typing error, using mypy in strict mode. src/demo_problem.py: note: In member "new_sibling" of class "ABCParent": src/demo_problem.py:10:16:10:36: error: Missing positional argument "sibling_type" in call to "ABCParent" [call-arg] return self._sibling_type(a) This is perfectly logical, since ABCParent requests the sibling_type argument. But in my case, the self._sibling_type will be one of the child classes, which override the constructor such that sibling_type is not needed. I was able to work around the problem by using a typing.Union of the child classes: import abc import typing TConcreteChild: typing.TypeAlias = typing.Union["ChildA", "ChildB"] class ABCParent(metaclass=abc.ABCMeta): def __init__(self, a: str, sibling_type: type[TConcreteChild]) -> None: ... But I don't like this because: It involves defining a type alias (TConcreteChild) which is otherwise useless The definition of this alias must be updated if more child classes are defined It involves defining types before definitions of the classes, so it requires forward references using strings (and therefore it can't use the new ChildA | ChildB syntax). How can I do this more cleanly, while still having strict type checking? | So, I think one approach that could work depending on the specifics of your actual use-case is making ._sibling_type a generic callable and making the classes generic as well, parametrized with the same type variable. import abc import typing P = typing.TypeVar("P", bound="ABCParent") class ABCParent(typing.Generic[P], metaclass=abc.ABCMeta): def __init__(self, a: str, sibling_type: typing.Callable[[str], P]) -> None: self.a = a self._sibling_type = sibling_type def new_sibling(self, a: str) -> P: return self._sibling_type(a) class ChildA(ABCParent["ChildB"]): def __init__(self, a: str) -> None: super().__init__(a, ChildB) class ChildB(ABCParent["ChildA"]): def __init__(self, a: str) -> None: super().__init__(a, ChildA) Types are callables. And in my experience with mypy, treating them like callables when you can is the easier approach. As an aside, you can still use the pipe operator syntax with string-based forward references, e.g.: def foo(x: "ChildA | ChildB") -> None: ... | 3 | 4 |
79,032,868 | 2024-9-27 | https://stackoverflow.com/questions/79032868/differences-between-columns-containing-lists | I have a data frame where the columns values are list and want to find the differences between two columns. data={'NAME':['JOHN','MARY','CHARLIE'], 'A':[[1,2,3],[2,3,4],[3,4,5]], 'B':[[2,3,4],[3,4,5],[4,5,6]]} df=pd.DataFrame(data) Why doesn't it work? df = df.assign(X1 = lambda x: [y for y in x['A'] if y not in x['B']]) I get error : TypeError: unhashable type: 'list' I don't understand why? | So, this is where lambdas get interesting. These two lambdas will have the same result: df = df.assign(X1 = lambda x: [y for y in x['A']]) #unvectorized, x is the entire DataFrame df = df.assign(X1 = lambda x: x['A']) #vectorized, x is a single row One (lengthy) way to do what you are asking is to iterate through each row, and then compare the nested lists: df = df.assign(X1 = lambda x: [[y for y in x['A'][i] if y not in x['B'][i]] for i in range(len(x['A']))]) which can be simplified to one of the following df = df.assign(X1 = [[y for y in r.A if y not in r.B] for i, r in df.iterrows()]) #similar structure to your initial solution df = df.assign(X2 = [list(set(r.A).difference(r.B)) for i, r in df.iterrows()]) #more efficient, especially for larger sets Edit: Pulling in elements of @user19077881's answer, the row-wise operations do work if you use df.apply instead of df.assign df['X1'] = df.apply(lambda r: [y for y in r.A if y not in r.B], axis=1) df['X1'] = df.apply(lambda r: list(set(r.A).difference(r.B)), axis=1) | 1 | 2 |
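For completeness, the same row-wise difference can be written with `zip` over the two columns, which avoids `iterrows` entirely. A sketch on the question's sample frame; the set difference drops duplicates and ordering, hence the `sorted()`:

```python
import pandas as pd

df = pd.DataFrame({
    "NAME": ["JOHN", "MARY", "CHARLIE"],
    "A": [[1, 2, 3], [2, 3, 4], [3, 4, 5]],
    "B": [[2, 3, 4], [3, 4, 5], [4, 5, 6]],
})
df["X1"] = [sorted(set(a) - set(b)) for a, b in zip(df["A"], df["B"])]
print(df["X1"].tolist())   # [[1], [2], [3]]
```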
79,025,526 | 2024-9-26 | https://stackoverflow.com/questions/79025526/escaping-quotes-in-python-subprocesses-for-windows | I'm trying to terminate a Python program that uses many threads on its own. If I'm not mistaken, just sys.exit() works fine. However, to guard against my many mistakes, including losing references to threads, I tried the following: subprocess.Popen(['start', 'cmd.exe', '/c', f'timeout 5&taskkill /f /fi "PID eq {os.getppid()}"'], shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) I thought it was a problem with escaping the quotes, so I tried several things but failed. I gave up and did the following and it worked perfectly. with open('exit_self.bat', 'w') as file: file.write(f'timeout 5&taskkill /f /fi "PID eq {os.getppid()}"&del exit_self.bat') subprocess.Popen(['start', 'cmd.exe', '/c', 'exit_self.bat'], shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) How can I do it without temp files? What did I miss? For reference, I used /k instead of /c option of cmd.exe to leave the window and check the error message in the window, it is as follows: Waiting for 0 seconds, press a key to continue ... ERROR: Invalid argument/option - 'eq'. Type "TASKKILL /?" for usage. I'm not sure if it will help, but I added echo to see the syntax of the command being executed: subprocess.Popen(['start', 'cmd.exe', '/k', 'echo', f'timeout 5&taskkill /f /fi "PID eq {os.getppid()}"'], shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) The result is: "timeout 5&taskkill /f /fi \"PID eq 3988\"" | From subprocess.Popen: args should be a sequence of program arguments or else a single string or path-like object. Run notepad.exe and be sure to not type in its window so "Untitled - Notepad" is the title. I'm using WINDOWTITLE eq Untitled* so I don't have to look up the PID. Use a single string for the command. This will open a window, countdown, and kill Notepad. Note that the entire /c parameter must be quoted so it is passed to the second command window. import subprocess import time p = subprocess.Popen('start cmd.exe /c "timeout 5&taskkill /f /fi "WINDOWTITLE eq Untitled*""', shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) for i in range(3): print(str(i) * 10) time.sleep(1) Output in original command window showing parallel work. Another cmd window opens with the countdown. Notepad was killed 2 seconds after 2222222222 printed due to the 5 second total timeout before taskkill: 0000000000 1111111111 2222222222 | 3 | 1 |
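An untested variation on the same idea: killing by PID instead of by a filter removes the inner quotes altogether, so only the `/c` payload needs quoting. Windows-only; `os.getpid()` targets the current interpreter — swap in `os.getppid()` if the parent process is really the target:

```python
import os
import subprocess

# No nested quotes needed: /pid takes a bare number, unlike /fi "PID eq ..."
cmd = f'start "" cmd.exe /c "timeout 5 & taskkill /f /pid {os.getpid()}"'
subprocess.Popen(cmd, shell=True)
```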
79,037,379 | 2024-9-29 | https://stackoverflow.com/questions/79037379/why-does-my-planetary-orbit-simulation-produce-a-straight-line-instead-of-an-ell | I'm trying to simulate the orbit of a planet using the compute_orbit method, but when I plot the resulting positions, I get a straight line instead of an expected elliptical orbit. Below are the relevant parts of my code. Code Snippet def get_initial_conditions(self, planet_name): planet_id = self.PLANETS[planet_name] obj = Horizons(id=planet_id, location='@sun', epochs=2000.0) eph = obj.vectors() position = np.array([eph['x'][0], eph['y'][0], eph['z'][0]]) velocity = np.array([eph['vx'][0], eph['vy'][0], eph['vz'][0]]) scale_factor_position = 1 scale_factor_velocity = 1 return { "position": np.array(position) / scale_factor_position, "velocity": np.array(velocity) / scale_factor_velocity } def compute_orbit(self, central_mass=1.989e30, dt=10000, total_time=31536000): initial_conditions_data = self.get_initial_conditions(self.planet_name) initial_conditions = [ initial_conditions_data['position'][0], initial_conditions_data['velocity'][0], initial_conditions_data['position'][1], initial_conditions_data['velocity'][1], initial_conditions_data['position'][2], initial_conditions_data['velocity'][2] ] def f(t, state): x, vx, y, vy, z, vz = state r = np.sqrt(x**2 + y**2 + z**2) + 1e-5 G = 6.67430e-11 Fx = -G * central_mass * self.mass * x / r**3 Fy = -G * central_mass * self.mass * y / r**3 Fz = -G * central_mass * self.mass * z / r**3 return [vx, Fx / self.mass, vy, Fy / self.mass, vz, Fz / self.mass] t_span = (0, total_time) t_eval = np.arange(0, total_time, dt) sol = scipy.integrate.solve_ivp( f, t_span, initial_conditions, t_eval=t_eval, rtol=1e-3, atol=1e-6 ) x = sol.y[0] y = sol.y[2] z = sol.y[4] positions = np.column_stack((x, y, z)) return positions i have tried experimenting with different initial conditions but i always get straight lines plotting the data , below is a minimal reproduceable example Code Snippet from astroquery.jplhorizons import Horizons import numpy as np import scipy.integrate import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D def get_initial_conditions(planet_id): obj = Horizons(id=planet_id, location='@sun', epochs=2000.0) eph = obj.vectors() position = np.array([eph['x'][0], eph['y'][0], eph['z'][0]]) velocity = np.array([eph['vx'][0], eph['vy'][0], eph['vz'][0]]) scale_factor_position = 1 scale_factor_velocity = 1 return { "position": np.array(position) / scale_factor_position, "velocity": np.array(velocity) / scale_factor_velocity } def compute_orbit(central_mass=1.989e30,rotating_mass=5.972e24, dt=10000, total_time=31536000): initial_conditions_data = get_initial_conditions(399) initial_conditions = [ initial_conditions_data['position'][0], initial_conditions_data['velocity'][0], initial_conditions_data['position'][1], initial_conditions_data['velocity'][1], initial_conditions_data['position'][2], initial_conditions_data['velocity'][2] ] def f(t, state): x, vx, y, vy, z, vz = state r = np.sqrt(x**2 + y**2 + z**2) + 1e-5 G = 6.67430e-11 Fx = -G * central_mass * rotating_mass * x / r**3 Fy = -G * central_mass * rotating_mass * y / r**3 Fz = -G * central_mass * rotating_mass * z / r**3 return [vx, Fx / rotating_mass, vy, Fy / rotating_mass, vz, Fz / rotating_mass] t_span = (0, total_time) t_eval = np.arange(0, total_time, dt) sol = scipy.integrate.solve_ivp( f, t_span, initial_conditions, t_eval=t_eval, rtol=1e-3, atol=1e-6) x = sol.y[0] y = sol.y[2] z = sol.y[4] positions = 
np.column_stack((x, y, z)) return positions def plot_orbit(positions): x = positions[:, 0] y = positions[:, 1] z = positions[:, 2] fig = plt.figure(figsize=(10, 8)) ax = fig.add_subplot(111, projection='3d') ax.plot(x, y, z, label='Orbit Path', color='blue') ax.set_xlabel('X Position (m)') ax.set_ylabel('Y Position (m)') ax.set_zlabel('Z Position (m)') ax.set_title('Planet Orbit Simulation') ax.scatter(0, 0, 0, color='yellow', s=100, label='Central Mass (Sun)') ax.legend() ax.set_box_aspect([1, 1, 1]) plt.show() positions = compute_orbit() plot_orbit(positions) | This problem has mismatched units. In the following code: obj = Horizons(id=planet_id, location='@sun', epochs=2000.0) eph = obj.vectors() position = np.array([eph['x'][0], eph['y'][0], eph['z'][0]]) velocity = np.array([eph['vx'][0], eph['vy'][0], eph['vz'][0]]) The obj.vectors() query does not return output in meters and meters per second. As the documentation describes, it returns output in AU and AU per day. Column Name Definition ... ... x x-component of position vector (float, au, X) vx x-component of velocity vector (float, au/d, VX) ... ... Source. Later on, you're using a gravitational constant which is only appropriate for units of kilogram, meter, and second. G = 6.67430e-11 The simplest way to fix this is to convert the units that Horizons uses into meters and meters per second. Alternatively, you could change G to a value suitable for AU and days. def get_initial_conditions(planet_id): obj = Horizons(id=planet_id, location='@sun', epochs=2000.0) eph = obj.vectors() meters_per_au = 1.496e+11 seconds_per_day = 86400 position = np.array([eph['x'][0], eph['y'][0], eph['z'][0]]) * meters_per_au velocity = np.array([eph['vx'][0], eph['vy'][0], eph['vz'][0]]) * meters_per_au / seconds_per_day scale_factor_position = 1 scale_factor_velocity = 1 return { "position": np.array(position) / scale_factor_position, "velocity": np.array(velocity) / scale_factor_velocity } You should get the following result. | 1 | 3 |
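A quick cross-check of the conversion, since unit bugs like this are easy to reintroduce: converting G to AU and days should reproduce the well-known Gaussian gravitational parameter of the Sun, roughly 2.959e-4 AU³/day². Constants below are standard values, lightly rounded:

```python
G_SI = 6.67430e-11            # m^3 kg^-1 s^-2
AU = 1.495978707e11           # m
DAY = 86400.0                 # s
M_SUN = 1.989e30              # kg

G_au_day = G_SI * DAY**2 / AU**3
print(G_au_day * M_SUN)       # ~2.96e-4; compare k**2 = 0.01720209895**2 ~ 2.959e-4
```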
79,034,510 | 2024-9-28 | https://stackoverflow.com/questions/79034510/qt-pyside6-what-is-the-best-way-to-implement-an-infinite-data-fetching-loop-wit | I am implementing a small multi-threaded GUI application with PySide6 to fetch data from a (USB-connected) sensor and to visualize the data with Qt. I.e., the use can start and stop the data fetching: When clicking the play button a worker object is created and moved to the QThread and the QThread is started. The internet is full of different approaches how to implement such an indefinite (data fetching loop). Here the two main approaches I've came across most often so far: Approach 1 (indefinite but user-interruptible loop plus sleep()): class Worker(QObject): def __init__(self, user_stop_event: Event) super().__init__() self.user_stop_event = user_stop_event def run(self): while not self.user_stop_event.isSet(): self.fetch_data() # Process received signals: Qtcore.QApplication.processEvents() # A sleep is important since in Python threads cannot run really in parallel (on multicore systems) and without sleep we would block the main (GUI) thread! QThread.sleep(50) def fetch_data(self): ... Approach 2 (timer-based approach): class Worker(QObject): def __init__(self): super().__init__() timer = QTimer() timer.timeout.connect(self.fetch_data) timer.start(50) def fetch_data(self): ... Both alternatives use the same mechanism to start the thread: thread = QThread() worker = Worker() worker.moveToThread(thread ) thread.started.connect(worker.run) ... What are the pros and cons of both approaches? What is the preferred implementation? Unfortunately, Qt's official Threading Basics website does not give clear advises here. Both are working on my side, but I am not quite sure which one shall be used as our default implementation for subsequent projects. | Qt really has no say in this, It depends on the communication mode of whatever API you are using for communication, and whether it supports synchronous or asynchronous IO. synchronous (blocking) API If the communication is synchronous such as pyserial or QSerialPort (blocking mode) then use Approach1 without the sleep, but add a timeout on the port. Reading from synchronous IO usually drops the GIL, so you don't need the sleep. (those two APIs drop the GIL, check the docs of any other API), a similar example is given by Qt docs blocking receiver class Worker(QThread): def __init__(self, user_stop_event: Event) super().__init__() self.user_stop_event = user_stop_event def run(self): while not self.user_stop_event.isSet(): self.fetch_data() # Process received signals: Qtcore.QApplication.processEvents() def fetch_data(self): ... # call Serial.read() here with moderate timeout # maybe send some data Asynchronous (non-blocking) API If you are reading from an asynchronous IO like QTcpSocket or QSerialPort (async mode) then you do Approach2 because Qt's eventloop drops the GIL when it is not executing python code, and you can interrupt the process in the middle of a transmission, which wasn't possible with synchronous IO, you don't need the timer for asynchronous IO class Worker(QObject): def __init__(self): super().__init__() # construct this inside a thread slot on "started" signal self.socket = QTcpSocket(self) self.socket.readyRead.connect(self.read_data) self.socket.connectToHost(...) # this next action will block, better connect to "connected" signal self.socket.waitForConnected() self.socket.write(...) def read_data(self): ... 
# data = self.socket.readAll() # do something with data then send another request Directly calling sleep is almost always wrong, it either delays data reading, or delays the termination signal, your thread should typically either be blocked by the synchronous IO or by QT's event loop that is waiting for asynchronous IO to finish, or any other asynchronous IO loop like asyncio. polling a socket on a timer is equally bad for the same reason. If you ever do need to sleep (maybe you are only polling sensor every few milliseconds) then instead call QEventLoop.processEvents passing a deadline argument with the time to "process events", this ensures signals can interrupt your "sleep" (by calling loop.exit() inside the connected slot), and be sure to check whether this interruption happened or not after you are done "processing events". the GIL will be dropped while you wait so it is like an interruptible sleep. Periodic Events Lastly for periodic events that need to happen without drift (for example sending a notification, or polling a sensor every exactly 50ms) you should be using Approach2 with the timer, because QTimer uses OS timers to avoid drifts. (It can still drift on some platforms but it drifts very slowly) class Worker(QObject): def __init__(self): # construct this inside the QThread "started" signal super().__init__() timer = QTimer() timer.timeout.connect(self.fetch_data) timer.start(50) def fetch_data(self): ... # poll the sensor # or send a notification This ensures the function is called at 50,100,150,200ms ... etc whereas a normal loop with a sleep will drift and trigger at 52,105,160,223ms... etc | 3 | 1 |
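For reference, an untested, minimal PySide6 sketch of the timer-based worker (the periodic-polling case) with the sensor read replaced by a counter — names and timings are illustrative only:

```python
import sys
from PySide6.QtCore import QCoreApplication, QObject, QThread, QTimer, Signal, Slot

class Worker(QObject):
    data_ready = Signal(int)

    @Slot()
    def start(self) -> None:
        # Created here, inside the started slot, so the timer has the worker thread's affinity.
        self._count = 0
        self._timer = QTimer(self)
        self._timer.timeout.connect(self.fetch_data)
        self._timer.start(50)

    @Slot()
    def fetch_data(self) -> None:
        self._count += 1                      # stand-in for the real sensor read
        self.data_ready.emit(self._count)

app = QCoreApplication(sys.argv)
thread = QThread()
worker = Worker()
worker.moveToThread(thread)
thread.started.connect(worker.start)
worker.data_ready.connect(lambda n: print("sample", n))
thread.start()
QTimer.singleShot(300, app.quit)              # end the demo after a few samples
app.exec()
thread.quit()
thread.wait()
```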
79,036,818 | 2024-9-29 | https://stackoverflow.com/questions/79036818/how-can-i-filter-a-list-within-a-polars-column | Say for example I have data like this: import polars as pl df = pl.DataFrame( { "subject": ["subject1", "subject2"], "emails": [ ["samATxyz.com", "janeATxyz.com", "jimATcustomer.org"], ["samATxyz.com", "zaneATxyz.com", "basATcustomer.org", "jimATcustomer.org"], ], } ) df shape: (2, 2) ββββββββββββ¬ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β subject β emails β β --- β --- β β str β list[str] β ββββββββββββͺββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ‘ β subject1 β ["samATxyz.com", "janeATxyz.com", "jimATcustomer.org"] β β subject2 β ["samATxyz.com", "zaneATxyz.com", "basATcustomer.org", "jimATcustomer.org"] β ββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ I want to filter the data so that the emails column only contain emails that end in "ATxyz.com". shape: (2, 2) ββββββββββββ¬ββββββββββββββββββββββββββββββββββββ β subject β emails β β --- β --- β β str β list[str] β ββββββββββββͺββββββββββββββββββββββββββββββββββββ‘ β subject1 β ["samATxyz.com", "janeATxyz.com"] β β subject2 β ["samATxyz.com", "zaneATxyz.com"] β ββββββββββββ΄ββββββββββββββββββββββββββββββββββββ How can I do this using polars? I had a few ideas, but I cannot figure out the right syntax, or it seems more complex/verbose than I would expect: Maybe I could somehow filter the data using .list.eval(pl.element() ..., but I cannot figure out how to filter items in the list with this syntax. I could reshape the data using .explode, but this seems verbose and more complex than needed. This is as close as I have got import polars as pl df = pl.DataFrame( { "subject": ["subject1", "subject2"], "emails": [ ["samATxyz.com", "janeATxyz.com", "jimATcustomer.org"], ["samATxyz.com", "zaneATxyz.com", "basATcustomer.org", "jimATcustomer.org"], ], } ) df.with_columns( pl.col("emails").list.eval(pl.element().str.contains("ATxyz")), ) shape: (2, 2) ββββββββββββ¬βββββββββββββββββββββββββββββ β subject β emails β β --- β --- β β str β list[bool] β ββββββββββββͺβββββββββββββββββββββββββββββ‘ β subject1 β [true, true, false] β β subject2 β [true, true, false, false] β ββββββββββββ΄βββββββββββββββββββββββββββββ | You were on the right track with pl.Expr.list.eval. It can be combined with pl.Expr.filter to achieve the desired result as follows. df.with_columns( pl.col("emails").list.eval( pl.element().filter(pl.element().str.ends_with("ATxyz.com")) ) ) shape: (2, 2) ββββββββββββ¬ββββββββββββββββββββββββββββββββββββ β subject β emails β β --- β --- β β str β list[str] β ββββββββββββͺββββββββββββββββββββββββββββββββββββ‘ β subject1 β ["samATxyz.com", "janeATxyz.com"] β β subject2 β ["samATxyz.com", "zaneATxyz.com"] β ββββββββββββ΄ββββββββββββββββββββββββββββββββββββ | 3 | 3 |
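The explode route mentioned in the question also works, and reads naturally if downstream steps want one email per row anyway. A sketch against the same frame (assumes a polars version with `group_by`, i.e. 0.19 or later):

```python
import polars as pl

df = pl.DataFrame({
    "subject": ["subject1", "subject2"],
    "emails": [
        ["samATxyz.com", "janeATxyz.com", "jimATcustomer.org"],
        ["samATxyz.com", "zaneATxyz.com", "basATcustomer.org", "jimATcustomer.org"],
    ],
})

out = (
    df.explode("emails")                                    # one row per email
    .filter(pl.col("emails").str.ends_with("ATxyz.com"))    # keep matching addresses
    .group_by("subject", maintain_order=True)               # back to one row per subject
    .agg(pl.col("emails"))
)
print(out)
```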
79,032,143 | 2024-9-27 | https://stackoverflow.com/questions/79032143/multidimensional-matrix-permutation-julia-vs-python-disagreement | I have noticed a difference in behavior between python's numpy.permute_dims and julia's Base.permutedims. On an input 3x3x3 matrix containing elements 0:26, inclusive in both languages, they agree for the axes argument (1,2,0) but disagree for (0,2,1). As far as I can tell from the docs, these functions should be equivalent. There's a note about permutedims being non-recursive, but I don't see why that should have behavioral consequences. Julia is also column-major order, but again I don't see why that should effect the overall behavior. Python code: arr = np.array([[[0, 1, 2], [3, 4, 5], [6, 7, 8]], [[9, 10, 11], [12, 13, 14], [15, 16, 17]], [[18, 19, 20], [21, 22, 23], [24, 25, 26]]]) arr_perm = np.permute_dims(arr, axes=[0,2,1]) Output: array([[[ 0, 3, 6], [ 1, 4, 7], [ 2, 5, 8]], [[ 9, 12, 15], [10, 13, 16], [11, 14, 17]], [[18, 21, 24], [19, 22, 25], [20, 23, 26]]]) Julia code: arr = [ 0 1 2 3 4 5 6 7 8;;; 9 10 11 12 13 14 15 16 17;;; 18 19 20 21 22 23 24 25 26 ] arr_perm = permutedims(arr, [1,3,2]) Output: 3Γ3Γ3 Array{Int64, 3}: [:, :, 1] = 0 9 18 3 12 21 6 15 24 [:, :, 2] = 1 10 19 4 13 22 7 16 25 [:, :, 3] = 2 11 20 5 14 23 8 17 26 | It looks like your problem comes from the fact that numpy does not adhere to the "natural" mental model for indexing. In the natural mental model, if you want to address the number 20 in a 3d matrix like this a = np.arange(2*3*4).reshape(2, 3, 4) array([[[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]], [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]]]) you instinctively think (counting starts from zero): Β«row=2, column=0, plane=1Β» but, in numpy, if you issue the command a[2, 0, 1], you get an error. The correct mental model to apply with numpy is the Boxes model. The boxes model works like boxes containing boxes which in turn contain boxes and so on. If you want address the number 20, ask yourself: Β«Starting from the outer box, which element in the box level 0 contains the number 20? It's the number 1Β», then: Β«Which element in box level 1 contains the number 20? it's the number 2Β», finally: Β«Which element in box level 2 contains the number 20? it's the number 0Β» a[1, 2, 0] 20 How to address the same element when you transpose the matrix? Just reverse the sequence of indexes: a[1, 2, 0] == a.T[0, 2, 1] True Note that when working in 2d (and only in 2d), the natural mental model and the boxes models comes to coincidence, so a[row, column] translates to box level 0 = row box level 1 = column the numpy expression is a[row, column] If you work in 3d in natural model, you have a[row, column, plane] and the numpy translation is: box level 0 = plane box level 1 = row box level 2 = column the numpy expression is a[plane, row, column] Returning to your example, the Julia address (row, column, plane) containing the number 20 in arr is: row = 1, column = 3, plane = 3. To transform to numpy coordinates, apply: box level 0 = 2 # plane=3 - 1 because Python starts counting from 0 box level 1 = 0 # row=1 - 1 because Python starts counting from 0 box level 2 = 2 # column=3 - 1 because Python starts counting from 0 so the Julia indexing expression arr[1, 3, 3] becomes the numpy indexing expression arr[2, 0, 2). Any operation you apply to Julia arrays, can be translated into numpy applying the above transformation. Hope this helps. | 2 | 1 |
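A small numeric check of the index-reversal rule described above (`np.permute_dims` is the NumPy 2.0 name; `np.transpose` is the long-standing equivalent):

```python
import numpy as np

a = np.arange(27).reshape(3, 3, 3)
b = np.transpose(a, (0, 2, 1))     # same operation as np.permute_dims(a, (0, 2, 1))

i, j, k = 1, 2, 0
assert b[i, k, j] == a[i, j, k]    # swapping the last two axes swaps those two indices
assert a.T[k, j, i] == a[i, j, k]  # full transpose reverses the whole index tuple
print(a[i, j, k])                  # 15
```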
79,035,467 | 2024-9-29 | https://stackoverflow.com/questions/79035467/unable-to-install-pygobject-in-pycharm-meson-build5115-error-python-depende | I'm attempting to install the PyGObject library in pycharm following this guide https://pygobject.gnome.org/getting_started.html attempting to do step 3 on the Ubuntu / Debian section: pip3 install pycairo and this is what is returned: Γ Preparing metadata (pyproject.toml) did not run successfully. β exit code: 1 β°β> [48 lines of output] + meson setup /tmp/pip-install-9mn9tc6a/pycairo_3bd8575d46fa4c3ba97005887d18ca79 /tmp/pip-install-9mn9tc6a/pycairo_3bd8575d46fa4c3ba97005887d18ca79/.mesonpy-n59ji4fk -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md -Dwheel=true -Dtests=false --native-file=/tmp/pip-install-9mn9tc6a/pycairo_3bd8575d46fa4c3ba97005887d18ca79/.mesonpy-n59ji4fk/meson-python-native-file.ini The Meson build system Version: 1.5.2 Source dir: /tmp/pip-install-9mn9tc6a/pycairo_3bd8575d46fa4c3ba97005887d18ca79 Build dir: /tmp/pip-install-9mn9tc6a/pycairo_3bd8575d46fa4c3ba97005887d18ca79/.mesonpy-n59ji4fk Build type: native build Project name: pycairo Project version: 1.27.0 C compiler for the host machine: cc (gcc 11.4.0 "cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0") C linker for the host machine: cc ld.bfd 2.38 Host machine cpu family: x86_64 Host machine cpu: x86_64 Program python3 found: YES (/home/demerf/PycharmProjects/gtkgtk/.venv/bin/python) Compiler for C supports arguments -Wall: YES Compiler for C supports arguments -Warray-bounds: YES Compiler for C supports arguments -Wcast-align: YES Compiler for C supports arguments -Wconversion: YES Compiler for C supports arguments -Wextra: YES Compiler for C supports arguments -Wformat=2: YES Compiler for C supports arguments -Wformat-nonliteral: YES Compiler for C supports arguments -Wformat-security: YES Compiler for C supports arguments -Wimplicit-function-declaration: YES Compiler for C supports arguments -Winit-self: YES Compiler for C supports arguments -Winline: YES Compiler for C supports arguments -Wmissing-format-attribute: YES Compiler for C supports arguments -Wmissing-noreturn: YES Compiler for C supports arguments -Wnested-externs: YES Compiler for C supports arguments -Wold-style-definition: YES Compiler for C supports arguments -Wpacked: YES Compiler for C supports arguments -Wpointer-arith: YES Compiler for C supports arguments -Wreturn-type: YES Compiler for C supports arguments -Wshadow: YES Compiler for C supports arguments -Wsign-compare: YES Compiler for C supports arguments -Wstrict-aliasing: YES Compiler for C supports arguments -Wundef: YES Compiler for C supports arguments -Wunused-but-set-variable: YES Compiler for C supports arguments -Wswitch-default: YES Compiler for C supports arguments -Wno-missing-field-initializers: YES Compiler for C supports arguments -Wno-unused-parameter: YES Compiler for C supports arguments -fno-strict-aliasing: YES Compiler for C supports arguments -fvisibility=hidden: YES Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2 Run-time dependency cairo found: YES 1.16.0 Run-time dependency python found: NO (tried pkgconfig, pkgconfig and sysconfig) ../cairo/meson.build:51:15: ERROR: Python dependency not found A full log can be found at /tmp/pip-install-9mn9tc6a/pycairo_3bd8575d46fa4c3ba97005887d18ca79/.mesonpy-n59ji4fk/meson-logs/meson-log.txt [end of output] I'm running Ubuntu 22.04.5 LTS and python 3.12 and very unfamiliar with linux systems, the log file at 
/tmp/pip-install-9mn9tc6a/pycairo_3bd8575d46fa4c3ba97005887d18ca79/.mesonpy-n59ji4fk/meson-logs/meson-log.txt cannot be found. Attempting to do step 4 also returns a similar error. | Unless you need a newer version, Ubuntu already packages PyGObject (as the docs say): sudo apt install python3-gi python3-gi-cairo gir1.2-gtk-4.0 Otherwise, don't skip step 2: sudo apt install libgirepository1.0-dev gcc libcairo2-dev pkg-config python3-dev gir1.2-gtk-4.0 | 2 | 1 |
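After installing the distro packages, a quick smoke test — run it with the system python3, since a PyCharm virtualenv only sees the apt-installed bindings if it was created with --system-site-packages:

```python
import gi
gi.require_version("Gtk", "4.0")
from gi.repository import Gtk

print(Gtk.get_major_version(), Gtk.get_minor_version())
```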
79,020,783 | 2024-9-25 | https://stackoverflow.com/questions/79020783/how-to-make-ruff-ignore-comments-in-measuring-the-line-length | The line demo_code = print("foo bar") # some comments and this line length exceed 79 that i config. is being formatted by ruff format to demo_code = print( "foo bar" ) # some comments and this line length exceed 79 that i config. as it exceeds the value defined in line-length. I want to make ruff count only the code characters and ignore the comments. | The best (unique?) solution to ignore lengthy code line errors (E501) with ruff is using codetags in your code: # pyproject.toml [tool.ruff.lint.pycodestyle] ignore-overlong-task-comments = true [tool.ruff.lint] task-tags = ["HACK"] demo_code = print("foo bar") # HACK: That is a so savvy solution. Now, I can write very lengthy inline comments without any problem :) | 1 | 1 |
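If the real goal is just to stop ruff format from wrapping one particular statement because of its trailing comment, formatter suppression comments are another option (a sketch, assuming a reasonably recent ruff):

```python
# fmt: off
demo_code = print("foo bar")  # a long trailing comment that would otherwise push this line past line-length
# fmt: on
```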
79,029,563 | 2024-9-27 | https://stackoverflow.com/questions/79029563/how-to-scrape-all-customer-reviews | I am trying to scrape all reviews in this website - https://www.backmarket.com/en-us/r/l/airpods/345c3c05-8a7b-4d4d-ac21-518b12a0ec17. The website says there are 753 reviews, but when I try to scrape all reviews, I get only 10 reviews. So, I am not sure how to scrape all 753 reviews from the page, Here is my code- # importing modules import pandas as pd from requests import get from bs4 import BeautifulSoup # Fetch the web page url = 'https://www.backmarket.com/en-us/r/l/airpods/345c3c05-8a7b-4d4d-ac21-518b12a0ec17' response = get(url) # link exlcudes posts with no picures page = response.text # Parse the HTML content soup = BeautifulSoup(page, 'html.parser') # To see different information ## reviewer's name reviewers_name = soup.find_all('p', class_='body-1-bold') [x.text for x in reviewers_name] name = [] for items in reviewers_name: name.append(items.text if items else None) ## Purchase Data purchase_date = soup.find_all('p', class_='text-static-default-low body-2') [x.text for x in purchase_date] date = [] for items in purchase_date: date.append(items.text if items else None) ## Country country_text = soup.find_all('p', class_='text-static-default-low body-2 mt-32') [x.text for x in country_text] country = [] for items in country_text: country.append(items.text if items else None) ## Reviewed Products products_text = soup.find_all('span', class_= 'rounded-xs inline-block max-w-full truncate body-2-bold px-4 py-0 bg-static-default-mid text-static-default-hi') [x.text for x in products_text] products = [] for items in products_text: products.append(items.text if items else None) ## Actual Reviews review_text = soup.find_all('p',class_='body-1 block whitespace-pre-line') [x.text for x in review_text] review = [] for items in review_text: review.append(items.text if items else None) ## Review Ratings review_ratings_value = soup.find_all('span',class_='ml-4 mt-1 md:mt-2 body-2-bold') [x.text for x in review_ratings_value] review_ratings = [] for items in review_ratings_value: review_ratings.append(items.text if items else None) # Create the Data Frame pd.DataFrame({ 'reviewers_name': name, 'purchase_date': date, 'country': country, 'products': products, 'review': review, 'review_ratings': review_ratings }) My question is how I can scrape all reviews. | based on your expectations, I think using the requests library and a little bit of code can fetch your desired result, here is my mindmap: we can use this https://www.backmarket.com/reviews/product-landings/345c3c05-8a7b-4d4d-ac21-518b12a0ec17/products/reviews API endpoint to fetch all of your expected information related to reviews. 
(I guess UUID in the URLs is your product ID, correct me if I'm wrong) Note: The site has rate limit protection in so I used time.sleep(5) to minimize thread Here is my code: import time import requests from requests.packages.urllib3.exceptions import InsecureRequestWarning requests.packages.urllib3.disable_warnings(InsecureRequestWarning) def get_data(content): result = content['results'] for i in result: first_name = i['customer']['firstName'] last_name = i['customer']['lastName'] name = f"{first_name} {last_name}" rating = i['averageRate'] review = i['comment'] date = i['createdAt'] prod = i['product']['title'] prod_img = i['product']['imageUrl'] country = i['countryCode'] print(f"reviewers_name: {name}\npurchase_date: {date}\ncountry: {country}\nproducts:-----------\nproduct_name: {prod}\nproduct_img: {prod_img}\n---------------------\nreview: {review}\nreview_ratings: {rating}\n============================") def gather_cursor(url): n, cursor_and_url = 1, url while True: time.sleep(3) response = requests.get(cursor_and_url, verify=False) data = response.json() get_data(data) cursor = data['nextCursor'] if not cursor: break cursor_and_url = f"{url}?cursor={cursor}" gather_cursor("https://www.backmarket.com/reviews/product-landings/345c3c05-8a7b-4d4d-ac21-518b12a0ec17/products/reviews") Hope this will help. | 4 | 0 |
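The same cursor-following logic can be packaged as a generator, with requests' params handling the cursor instead of building URLs by hand. Endpoint and field names are taken from the answer above and may change on the site's side:

```python
import time
import requests

def iter_reviews(url: str, pause: float = 3.0):
    """Yield review dicts, following nextCursor until it is empty."""
    session = requests.Session()
    cursor = None
    while True:
        params = {"cursor": cursor} if cursor else {}
        data = session.get(url, params=params, timeout=30).json()
        yield from data["results"]
        cursor = data.get("nextCursor")
        if not cursor:
            return
        time.sleep(pause)  # stay under the site's rate limit

for review in iter_reviews(
    "https://www.backmarket.com/reviews/product-landings/"
    "345c3c05-8a7b-4d4d-ac21-518b12a0ec17/products/reviews"
):
    print(review["averageRate"], review["comment"])
```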
79,034,296 | 2024-9-28 | https://stackoverflow.com/questions/79034296/filter-a-lazyframe-by-row-index | Is there an idiomatic way to get specific rows from a LazyFrame? There's two methods I could figure out. Not sure which is better, or if there's some different method I should use. import polars as pl df = pl.DataFrame({"x": ["a", "b", "c", "d"]}).lazy() rows = [1, 3] # method 1 ( df.with_row_index("row_number") .filter(pl.col("row_number").is_in(rows)) .drop("row_number") .collect() ) # method 2 df.select(pl.all().gather(rows)).collect() | pl.Expr.gather is the idiomatic way to take values by index. df.select(pl.all().gather(rows)).collect() For completeness, method 1 can be refined by using an expression for the index. This way no temporary column is created and dropped again. # method 1.1 df.filter(pl.int_range(pl.len()).is_in(rows)).collect() | 2 | 2 |
79,026,472 | 2024-9-26 | https://stackoverflow.com/questions/79026472/get-x-and-y-radius-of-a-hexagon-no-matter-the-angle | I created a hexagon blur that gives this kind of results: To make the code a bit cleaner, I created a hexagon class. I'm now implementing different methods inside. My constructor look to this for now: def __init__(self, radius, center_x, center_y, angle=0): self._dmin = np.sqrt(3)/2 self._radius = radius self._radius_min = radius * self.dmin self._center_x = center_x self._center_y = center_y self._angle = angle # Angle in degrees self._radian = np.radians(self.angle) self._period = 60 I stuck on the methods radiusX and radiusY. This methods should give in output the distance from the center to the border based on the current hexagon center. For radiusX with a y at 0 and radiusY with a x at 0. I tried different implementations but the more accurate one I got until now is: def radiusX(self) -> float: return self.radius_min + Trigonometry.one2Zero2One(x=self.angle%self.period, period=self.period) * (self.radius - self.radius_min) def radiusY(self) -> float: return self.radius_min + Trigonometry.zero2One2Zero(x=self.angle%self.period, period=self.period) * (self.radius - self.radius_min) The methods one2Zero2One() and zero2One2Zero() are implemented like this: @staticmethod def one2Zero2One(x, period): return 0.5 + 0.5 * np.cos(x * np.pi / (period / 2)) @staticmethod def zero2One2Zero(x, period): return 0.5 + 0.5 * np.sin(x * np.pi / (period / 2) - 1.58) They allow you to obtain a value on a bell between 0 and 1. zero2One2Zero look to this: In my class, a hexagon with an angle of 0 is in this direction: I know that my two methods are not good because when I visualise the results with plotly, I directly see that the radius doesn't follow the one2Zero2One and zero2One2Zero curves. In the examples below, above and on the right is have a green point which indicate the end of the x and y radius: This graphics are obtained with: h = Hexagon(radius=0.5, center_x=0.5, center_y=0.5, angle=j) fig = go.Figure() inside = {'x':[], 'y':[]} outside = {'x':[], 'y':[]} radius = {'x':[h.center_x + h.radiusX(), h.center_y], 'y':[h.center_y, h.center_y + h.radiusY()]} print(h.radiusX()) for i in range(int(10e3)): # print(i,'/',int(10e3)) x, y = random(), random() if h.isPointInside(x, y): inside['x'].append(x) inside['y'].append(y) else: outside['x'].append(x) outside['y'].append(y) fig.add_trace(go.Scatter(x=inside['x'], y=inside['y'], mode='markers', name='Inside')) fig.add_trace(go.Scatter(x=outside['x'], y=outside['y'], mode='markers', name='Outside')) fig.add_trace(go.Scatter(x=radius['x'], y=radius['y'], mode='markers', name='radius', marker={'size':10})) fig.show() What is the correct equation for getting the radiusX and radiusY based on the current hexagon angle? | As far as your code goes, you don't need to use anything besides core Python for the geometric computations. The math module provides the trigonometric and transcendental functions you need: from math import sin, cos, sqrt, degrees, radians, pi No need for NumPy: to convert from degrees to radians, use radians. E.g. radians(360)/(2*pi) == 1.0 to convert from radians to degrees, use degrees. E.g. degrees(2*pi)/360 == 1.0. The only use for np variants of radians, degrees, sqrt, etc. is to have automatic broadcasting across NumPy arrays. Since you're dealing with scalars, bringing NumPy into the picture only muddies the water. Stick to core Python where possible - it makes the code easier to understand. 
NumPy is useful for image manipulations, since it'll be faster than even fairly streamlined Python code. Nevertheless, the specializing compiler in Python 3.13 can already make NumPy unnecessary for some tasks :) Suppose we have an n-polygon inscribed into a circle of radius r. Then: (1) Ξ± = 360Β°/n From basic trigonometry: (2) h/r = cos(Ξ³) = cos(Ξ±/2), thus h = r cos(Ξ±/2) (3) h/dx = cos(Ξ²), thus dx = h/cos(Ξ²) Substituting (2) into (3), we get (4) dx = r cos(Ξ±/2) / cos(Ξ²) Then we substitute (1) into (4), and finally (5) dx = r cos(180Β°/n) / cos(Ξ²) You can work out dy similarly. To investigate such geometric problems, GeoGebra works well. That's what I used to make the illustration above. That's what you should use to make properly labeled illustrations for your question. Help us help you! Note: It is not possible to select overlapping objects well in GeoGebra - only the "first one" gets selected when you click on overlapping objects. To help selecting a particular object when they overlap (e.g. a short segment overlapping a long segment), switch from "Tools" to "Algebra" and you'll get a list of objects that you can select within the list. | 3 | 2 |
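Translating equation (5) into code for the radiusX/radiusY methods, under the question's convention that at angle 0 a vertex lies on the +x axis (so radiusX(0) equals the full radius) — a sketch, with the folding of β worked out explicitly:

```python
from math import cos, radians

def edge_distance(r: float, angle_deg: float, direction_deg: float, n: int = 6) -> float:
    """Distance from the centre to the edge of a regular n-gon of circumradius r,
    rotated by angle_deg, measured along direction_deg (equation (5) above)."""
    alpha = 360.0 / n                                   # angle subtended by one side
    # Apothem directions sit at angle + alpha/2 + k*alpha; beta is the angular
    # gap between the requested direction and the nearest apothem.
    m = (angle_deg + alpha / 2.0 - direction_deg) % alpha
    beta = min(m, alpha - m)
    return r * cos(radians(alpha / 2.0)) / cos(radians(beta))

def radius_x(r: float, angle_deg: float) -> float:
    return edge_distance(r, angle_deg, 0.0)

def radius_y(r: float, angle_deg: float) -> float:
    return edge_distance(r, angle_deg, 90.0)

print(radius_x(0.5, 0))   # 0.5      (vertex on the x axis)
print(radius_y(0.5, 0))   # ~0.4330  (apothem = 0.5 * sqrt(3)/2)
```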
79,032,931 | 2024-9-27 | https://stackoverflow.com/questions/79032931/scipy-curve-fit-covariance-of-the-parameters-could-not-be-estimated | I am trying to estimate the Schott coefficients of a glass material given only its n_e(refraction index at e line) and V_e(Abbe number at e line). Schott is one way to represent the dispersion of a material, which is the different index of refraction (RI) at different wavelength. In the figure above, the horizontal axis is the wavelength (in micrometer) and the vertical axis is index of refraction (This figure is based on the glass type named KZFH1). Because the glass dispersion have a common shape (higher at shorter wavelength and then tappers down), and the RI at key points (Fraunhofer lines) have a stable relationship, my thought is that I can use the definition of Abbe number and the general relation of different Fraunhofer line RI to create some data points, and use them to fit a curve: import numpy as np from scipy.optimize import curve_fit # Definition of the Schott function def _InvSchott(x, a0, a1, a2, a3, a4, a5): return np.sqrt(a0 + a1* x**2 + a2 * x**(-2) + a3 * x**(-4) + a4 * x**(-6) + a5 * x**(-8)) # Sample input, material parameter from a Leica Summilux patent n = 1.7899 V = 48 # 6 wavelengths, Fraunhofer symbols are not used due to there is another version that uses n_d and V_d shorter = 479.99 short = 486.13 neighbor = 546.07 middle = 587.56 longc = 643.85 longer = 656.27 # Refraction index of the corresponding wavelengths. # The linear functions are acquired from external regressions from 2000 glass materials n_long = 0.984 * n + 0.0246 # n_C' n_shorter = ( (n-1) / V) + n_long # n_F', from the definition of Abbe number n_short = 1.02 * n -0.0272 # n_F n_neighbor = n # n_e n_mid = 1.013 * n - 0.0264 # n_d n_longer = 0.982 * n + 0.0268 # n_C # The /1000 is to convert the wavelength from nanometer to micrometers x_data = np.array([longer, longc, middle, neighbor, short, shorter]) / 1000.0 y_data = np.array([n_longer, n_long, n_mid, n_neighbor, n_short, n_shorter]) # Provided estimate are average value from the 2000 Schott glasses popt, pcov = curve_fit(_InvSchott, x_data, y_data, [2.75118, -0.01055, 0.02357, 0.00084, -0.00003, 0.00001]) The x_data and y_data in this case are as follow: [0.65627 0.64385 0.58756 0.54607 0.48613 0.47999] [1.7844818 1.7858616 1.7867687 1.7899 1.798498 1.80231785] And then I got the warning OptimizeWarning: Covariance of the parameters could not be estimated. The fit result were all but [inf inf inf inf inf inf]. I know this question has been asked a lot but I have not found a solution that works in this case yet. 6 data point is certainly a bit poor but this does satisfy the minimum, and Schott function is continuous, so I cannot figure out which part went wrong. TLDR: How do I find the coefficients for the function def _InvSchott(x, a0, a1, a2, a3, a4, a5): return np.sqrt(a0 + a1* x**2 + a2 * x**(-2) + a3 * x**(-4) + a4 * x**(-6) + a5 * x**(-8)) that fits the data below: x: [0.65627 0.64385 0.58756 0.54607 0.48613 0.47999] y: [1.7844818 1.7858616 1.7867687 1.7899 1.798498 1.80231785] | Don't use sqrt during fitting, and don't fit this as a nonlinear model. 
Fit it as a linear model: import numpy as np from matplotlib import pyplot as plt schott_powers = np.arange(2, -9, -2) def inv_schott(lambd: np.ndarray, a: np.ndarray) -> np.ndarray: return np.sqrt(inv_schott_squared(lambd, a)) def inv_schott_squared(lambd: np.ndarray, a: np.ndarray) -> np.ndarray: terms = np.power.outer(lambd, schott_powers) return terms @ a def demo() -> None: # Sample input, material parameter from a Leica Summilux patent n = 1.7899 V = 48 # 6 wavelengths, Fraunhofer symbols are not used due to there is another version that uses n_d and V_d shorter = 479.99 short = 486.13 neighbor = 546.07 middle = 587.56 longc = 643.85 longer = 656.27 # Refraction index of the corresponding wavelengths. # The linear functions are acquired from external regressions from 2000 glass materials n_long = 0.984*n + 0.0246 # n_C' n_shorter = (n - 1)/V + n_long # n_F', from the definition of Abbe number n_short = 1.02*n - 0.0272 # n_F n_neighbor = n # n_e n_mid = 1.013*n - 0.0264 # n_d n_longer = 0.982*n + 0.0268 # n_C lambda_nm = np.array((longer, longc, middle, neighbor, short, shorter)) lambda_um = lambda_nm*1e-3 n_all = np.array((n_longer, n_long, n_mid, n_neighbor, n_short, n_shorter)) a, residuals, rank, singular = np.linalg.lstsq( a=np.power.outer(lambda_um, schott_powers), b=n_all**2, rcond=None, ) fig, ax = plt.subplots() lambda_hires = np.linspace(start=lambda_um.min(), stop=lambda_um.max(), num=501) ax.scatter(lambda_um, n_all, label='experiment') ax.plot(lambda_hires, inv_schott(lambda_hires, a), label='fit') ax.legend() plt.show() if __name__ == '__main__': demo() You can observe the effect of truncating the polynomial order, though whether that's advisable is very difficult to say unless you produce more data: import numpy as np from matplotlib import pyplot as plt def inv_schott(lambd: np.ndarray, a: np.ndarray, powers: np.ndarray) -> np.ndarray: return np.sqrt(inv_schott_squared(lambd, a, powers)) def inv_schott_squared(lambd: np.ndarray, a: np.ndarray, powers: np.ndarray) -> np.ndarray: terms = np.power.outer(lambd, powers) return terms @ a def demo() -> None: # Sample input, material parameter from a Leica Summilux patent n = 1.7899 V = 48 # 6 wavelengths, Fraunhofer symbols are not used due to there is another version that uses n_d and V_d shorter = 479.99 short = 486.13 neighbor = 546.07 middle = 587.56 longc = 643.85 longer = 656.27 # Refraction index of the corresponding wavelengths. # The linear functions are acquired from external regressions from 2000 glass materials n_long = 0.984*n + 0.0246 # n_C' n_shorter = (n - 1)/V + n_long # n_F', from the definition of Abbe number n_short = 1.02*n - 0.0272 # n_F n_neighbor = n # n_e n_mid = 1.013*n - 0.0264 # n_d n_longer = 0.982*n + 0.0268 # n_C lambda_nm = np.array((longer, longc, middle, neighbor, short, shorter)) lambda_um = lambda_nm*1e-3 n_all = np.array((n_longer, n_long, n_mid, n_neighbor, n_short, n_shorter)) fig, ax = plt.subplots() lambda_hires = np.linspace(start=lambda_um.min(), stop=lambda_um.max(), num=501) ax.scatter(lambda_um, n_all, label='experiment') for lowest_power in range(0, -9, -2): powers = np.arange(2, lowest_power - 1, -2) a, residuals, rank, singular = np.linalg.lstsq( a=np.power.outer(lambda_um, powers), b=n_all**2, rcond=None, ) ax.plot(lambda_hires, inv_schott(lambda_hires, a, powers), label=f'powers to {lowest_power}') ax.legend() plt.show() if __name__ == '__main__': demo() | 1 | 3 |
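As a side note on why curve_fit gave up: with six parameters, six data points, and wavelengths confined to 0.48–0.66 µm, the design matrix is nearly singular, so the covariance cannot be estimated even though a least-squares solution exists. This can be seen directly:

```python
import numpy as np

lam = np.array([0.65627, 0.64385, 0.58756, 0.54607, 0.48613, 0.47999])
A = np.power.outer(lam, np.arange(2, -9, -2))   # the six Schott basis terms
print(np.linalg.cond(A))   # very large: the columns are nearly collinear over this range
```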
79,032,225 | 2024-9-27 | https://stackoverflow.com/questions/79032225/aggregating-output-from-langchain-lcel-elements | I have two chains, one that generates a document and one that creates a short document resume. I want to chain them, using the output from the first on inside the other one. But I want to get both outputs in the result. Before LCEL, I could do it using LLMChain's output_key parameter. With LCEL, there seems to be a RunnablePassthrough class, but I don't seem to get how to use it to aggregate the output. Code example: generate_document_chain = generate_document_prompt | llm | StrOutputParser() resume_document_chain = resume_document_prompt | llm | StrOutputParser() aggregated_chain = generate_document_chain | resume_document_chain content = aggregated_chain.invoke({"topic": topic}) | Perhaps the following is what you want. It feeds the output of the first chain into second chain as input. from langchain_core.runnables import RunnablePassthrough aggregated_chain = generate_document_chain | { "first_chain_output": RunnablePassthrough(), "second_chain_output": resume_document_chain } content = aggregated_chain.invoke({"topic": topic}) Then the output will be a dictionary with keys: "first_chain_output" and "second_chain_output". You can also use RunnablePassthrough.assign. Unlike the case above, the key of generate_document_chain output should match the input variable name of the second chain. Below, the input variable of the second chain is assumed to be "input" (btw, the input variable of the first chain is "topic"). aggregated_chain = ( {"input": generate_document_chain} | RunnablePassthrough.assign(second_chain_output=resume_document_chain) ) The output of this chain will be a dict with keys: "input" and "second_chain_output". | 2 | 1 |
79,032,873 | 2024-9-27 | https://stackoverflow.com/questions/79032873/how-to-arbitrarily-nest-some-data-in-a-django-rest-framework-serializer | An existing client is already sending data in a structure like⦠{ "hive_metadata": {"name": "hive name"}, "bees": [{"name": "bee 1", "name": "bee 2", ...}] } For models like: class Hive(models.Model): name = models.CharField(max_length=32, help_text="name") class Bee(models.Model): name = models.CharField(max_length=32, help_text="name") hive = models.ForeignKey( Hive, help_text="The Hive associated with this Bee", on_delete=models.CASCADE ) The code that makes this possible manually iterates over the incoming data. I would like to rewrite it using a django rest framework serializer; however, the fact that hive_metadata is nested itself has stumped me so far. If I write class BeesSerializer(ModelSerializer): class Meta: model = models.Bee fields = ("name",) class PopulatedHiveSerializer(ModelSerializer): bees = BeesSerializer(many=True, source="bee_set") class Meta: model = models.Hive fields = ("name","bees",) would produce { "name": "hive name", "bees": [{"name": "bee 1", "name": "bee 2", ...}] } readily enough. I had hoped I could solve it with a reference to a sub-serializer, something like class HiveMetaDataSerializer(ModelSerializer): class Meta: model = models.Hive fields = ("name",) class PopulatedHiveSerializer(ModelSerializer): bees = BeesSerializer(many=True, source="bee_set") hive_metadata = HiveMetaDataSerializer(source=???) class Meta: model = models.Hive fields = ("hive_metadata","bees",) but I can't seem to figure out what to put in the "source" so that the same object is passed through the outer serializer into the inner. So, is there a way to do this using a django rest framework serializer? | You can use a string literal with an asterisk ('*'), as specified in the documentation [drf-doc]: The value source='*' has a special meaning, and is used to indicate that the entire object should be passed through to the field. This can be useful for creating nested representations, or for fields which require access to the complete object in order to determine the output representation. so we can use: class PopulatedHiveSerializer(ModelSerializer): bees = BeesSerializer(many=True, source='bee_set') hive_metadata = HiveMetaDataSerializer(source='*') class Meta: model = models.Hive fields = ( 'hive_metadata', 'bees', ) | 2 | 3 |
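A quick way to see what source='*' buys here: a hedged, illustrative sketch of the payload shape PopulatedHiveSerializer(hive).data would produce for the example hive in the question (field values are made up).

# Illustrative only: nested output shape for a hive named "hive name" with two bees
{
    "hive_metadata": {"name": "hive name"},
    "bees": [{"name": "bee 1"}, {"name": "bee 2"}],
}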
79,032,221 | 2024-9-27 | https://stackoverflow.com/questions/79032221/rcparams-not-being-applied-to-custom-matplotlib-class | I am trying to write a custom figure class around matplotlib.figure.Figure which among other things, automatically applies the correct formatting. Here's the current configuration: import matplotlib from matplotlib.axes import Axes from matplotlib.figure import Figure from matplotlib.backends.backend_qtagg import FigureCanvasQTAgg as Canvas class CustomFigure(Figure): def __init__(self, figsize: tuple, layout: str): super().__init__(figsize=figsize, layout=layout) self.canvas = Canvas(self) matplotlib.use("QtAgg") self.set_common_params() def generate_axes(self, num_axes: int, layout: tuple = None) -> Axes: if layout is None: layout = (1, num_axes) return self.subplots(*layout) def set_common_params(self): matplotlib.rcParams["figure.titlesize"] = 45 matplotlib.rcParams["axes.titlesize"] = 13 matplotlib.rcParams["axes.labelsize"] = 12 matplotlib.rcParams["axes.linewidth"] = 1.5 matplotlib.rcParams["xtick.labelsize"] = 11 matplotlib.rcParams["ytick.labelsize"] = 11 @staticmethod def set_labels(ax: Axes, xlabel: str, ylabel: str, title: str = None): ax.set_xlabel(xlabel) ax.set_ylabel(ylabel) if title is not None: ax.set_title(title) def generate_pdf(self, filename: str): self.savefig(f"{filename}.pdf") if __name__ == "__main__": import sys from PySide6.QtWidgets import QApplication, QMainWindow app = QApplication(sys.argv) win = QMainWindow() fig = CustomFigure((5,5), "tight") fig.set_labels(fig.generate_axes(1), "X", "Y", "Title") win.setCentralWidget(fig.canvas) win.show() sys.exit(app.exec()) I've tried putting the rcParams in every possible location of the code, even before the class definition, but nothing works. How do I get the rcParams applied properly? | If I understood your question correctly, your code does set the params as needed except for the figure title. That is because you are using a subplot which has 'suptitle'. So the parameter you set for figure.titlesize will not work! 
Check this instead: import matplotlib from matplotlib.axes import Axes from matplotlib.figure import Figure from matplotlib.backends.backend_qtagg import FigureCanvasQTAgg as Canvas import matplotlib.pyplot as plt class CustomFigure(Figure): def __init__(self, figsize: tuple, layout: str): super().__init__(figsize=figsize, layout=layout) self.canvas = Canvas(self) matplotlib.use("QtAgg") self.set_common_params() def generate_axes(self, num_axes: int, layout: tuple = None) -> Axes: if layout is None: layout = (1, num_axes) return self.subplots(*layout) def set_common_params(self): matplotlib.rcParams["figure.titlesize"] = 'large' # this will not work since it is not for a subplot matplotlib.rcParams["axes.titlesize"] = 45 matplotlib.rcParams["axes.labelsize"] = 45 matplotlib.rcParams["axes.linewidth"] = 1.5 matplotlib.rcParams["xtick.labelsize"] = 10 matplotlib.rcParams["ytick.labelsize"] = 10 matplotlib.rcParams["font.sans-serif"] = "Verdana" @staticmethod def set_labels(ax: Axes, xlabel: str, ylabel: str, title: str = None): ax.set_xlabel(xlabel) ax.set_ylabel(ylabel) if title is not None: ax.set_title(title) def generate_pdf(self, filename: str): self.savefig(filename) def clearfig(self): self.clf() if __name__ == "__main__": import sys from PySide6.QtWidgets import QApplication, QMainWindow app = QApplication(sys.argv) win = QMainWindow() fig = CustomFigure((5, 5), "tight") fig.set_labels(fig.generate_axes(1), "X", "Y", "Title") win.setCentralWidget(fig.canvas) win.show() sys.exit(app.exec()) | 2 | 1 |
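To make the distinction in the answer above concrete, here is a minimal standalone sketch: figure.titlesize is consumed by fig.suptitle(), while ax.set_title() uses axes.titlesize, so each rcParam only shows an effect when the matching call is used.

import matplotlib
import matplotlib.pyplot as plt

matplotlib.rcParams["figure.titlesize"] = 45   # picked up by Figure.suptitle()
matplotlib.rcParams["axes.titlesize"] = 13     # picked up by Axes.set_title()

fig, ax = plt.subplots()
fig.suptitle("Figure title")   # rendered at 45 pt
ax.set_title("Axes title")     # rendered at 13 pt
plt.show()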
79,028,509 | 2024-9-26 | https://stackoverflow.com/questions/79028509/how-can-i-quickly-generate-a-large-list-of-random-numbers-given-a-list-of-seed | I need to make a function that takes in an array of integers and returns a list of random numbers with the same length as the array. However, there is the restriction that the output random number corresponding to a given entry in the input array should always be the same based upon that entry. For example, if input_a given below returns the following: > input_a = np.array([1, 2, 3, 4, 5]) > random_array(input_a) [0.51689016 0.62747792 0.16585436 0.63928942 0.30514275] Then input_b given below should return the following: > input_b = np.array([3, 2, 3]) > random_array(input_b) [0.16585436 0.62747792 0.16585436] Note that the output numbers that correspond to an input of 3 are all the same, and likewise for those that correspond to an input of 2. In effect, the values of the input array are used as seeds for the output array. The main issue is that the input arrays may be very big, so I'd need something that can do the operation efficiently. My naive implementation is as follows, making a list of random number generators with the input array as seeds. import numpy as np def random_array(input_array): rng_list = [np.random.default_rng(seed=i) for i in input_array] return [rng.random() for rng in rng_list] input_a = np.array([1, 2, 3]) input_b = np.array([3, 2, 3]) print(random_array(input_a)) # [0.5118216247002567, 0.2616121342493164, 0.08564916714362436] print(random_array(input_b)) # [0.08564916714362436, 0.2616121342493164, 0.08564916714362436] It works as intended, but it's terribly slow for what I need it to do - unsurprising, given that it's doing a loop over array entries. This implementation takes 5 seconds or so to run on an input array of length 100,000, and I'll need to do this for much larger inputs than that. How can I do this but more efficiently? Edit to add information: A typical length for an input array is around 200 million. Its value range may greatly exceed its length - np.max(input_a) may be in the trillions - but all its values can be assumed to be nonnegative. The number of repeated values is small in comparison to the length of the array. What I'm specifically trying to do is take a set of particles in the output of a simulation (totalling about 200 billion particles) and make a smaller set of particles randomly sampled ("downsampled") from the large set (the downsampled output should have about 1% of the total particles). The particles are each labelled with an ID, which is nonnegative but can be very large. The simulation output is split into a dozen or so "snapshots", each of which store each particle's position (among other things); each snapshot is split into "subsnapshots", individual files that store the IDs/positions/etc. of about 200 million particles each. The particle IDs within a snapshot are unique, but the same particle (with the same ID) will naturally appear in multiple snapshots. What I'm trying to do is make a mask that decides whether to keep a particle or not based upon its ID. The idea is that if a particle is kept in one snapshot, it should be kept in all snapshots. RAM is not infinite; only one subsnapshot's worth of particle info can be loaded in at once. The particles in the Nth subsnapshot within a given snapshot are the same as the particles in the Nth subsnapshot for a different snapshot, but not necessarily in the same order. 
Lookup files can in principle be saved and read, but again, only one subsnapshot's worth at a time (and this is slow so if there's a better way that'd be ideal). This is the motivation behind making a function that assigns an RNG value to many particles at once (the RNG value is used to determine if the particle will be kept or not) that is based upon and consistent with an input array (the array of particle IDs, to ensure that if a particle is kept it is always kept). | It may be good enough to implement a simple pseudo random number generator where each int in the array acts as a seed. Splitmix 64 is pretty simple and consecutive integer seeds generate very different results. Or explore other pseudo random options. import numpy as np TWO53 = 1 << 53 def sm64( arr ): """ Splitmix 64 Pseudo random number generator. """ arr = arr.astype( np.uint64 ) # change to unsigned, keeps the bit pattern. arr += 0x9e3779b97f4a7c15 arr = ( arr ^ ( arr >> 30 )) * 0xbf58476d1ce4e5b9 arr = ( arr ^ ( arr >> 27 )) * 0x94d049bb133111eb arr ^= arr >> 31 return ( arr >> 11 ) / TWO53 # float64 has 53 bits of precision. test = np.arange( -10, 10 ) test # array([-10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, # 3, 4, 5, 6, 7, 8, 9]) sm64( test ) # array([0.83014881, 0.8990996 , 0.53510476, 0.42233422, 0.91005283, # 0.08865044, 0.7554433 , 0.96629362, 0.94971076, 0.89394292, # 0.88331081, 0.56656158, 0.59118973, 0.11345034, 0.43145582, # 0.38676805, 0.73981701, 0.38982975, 0.61850463, 0.68236273]) This ran in 240ms for a 10 million int array. | 1 | 1 |
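Tying this back to the stated goal (a ~1% downsample that keeps or drops the same particle ID consistently in every snapshot), here is a minimal sketch reusing the sm64 hash from the answer above; the ID values and keep fraction are made up for illustration.

import numpy as np

TWO53 = 1 << 53

def sm64(arr):
    # Splitmix64 pseudo-random hash of each integer, returning floats in [0, 1)
    arr = arr.astype(np.uint64)
    arr += 0x9e3779b97f4a7c15
    arr = (arr ^ (arr >> 30)) * 0xbf58476d1ce4e5b9
    arr = (arr ^ (arr >> 27)) * 0x94d049bb133111eb
    arr ^= arr >> 31
    return (arr >> 11) / TWO53

keep_fraction = 0.01                                          # roughly 1% of particles
particle_ids = np.array([7, 42, 7, 10**12], dtype=np.int64)   # hypothetical IDs from one subsnapshot
mask = sm64(particle_ids) < keep_fraction                     # same ID -> same verdict in every snapshot
kept_ids = particle_ids[mask]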
79,031,709 | 2024-9-27 | https://stackoverflow.com/questions/79031709/display-javascript-fetch-for-jsonresponse-2-tables-on-html | I'm learning JavaScript right now and I'm trying to display data from 2 tables, which is profile and posts of the user profile. Here is the API view API I tried to display the data on index.html using JavaScript. index.js function load_profile(author_id) { // Load posts list fetch(`/profile/${author_id}`) .then(response => response.json()) .then(response => { // Print profile & posts console.log(response.prof); console.log(response.posts); const content_profile = document.createElement('div'); content_profile.className = "content_profile"; const user = document.createElement('p'); user.innerHTML = `<h3>${response.prof.user}</h3>`; const followers = document.createElement('p'); followers.innerHTML = `Follower: ${response.prof.followers}`; const following = document.createElement('p'); following.innerHTML = `Following: ${response.prof.following}`; const a = document.createElement('a'); a.className = "btn btn-primary"; a.innerHTML = "Follow"; content_profile.append(user, followers, following, a); document.querySelector('#profile').append(content_profile); response.posts.forEach(post =>{ const list_item = document.createElement('ul'); const data_author = document.createElement('li'); data_author.innerHTML = post.author+" at "+post.timestamp+" say:"; const data_content = document.createElement('li'); data_content.innerHTML = post.content; const data_like = document.createElement('li'); data_like.innerHTML = post.like+" Likes"; list_item.append(data_author, data_content, data_like); document.querySelector('#postbox').append(list_item); }) }) } index.html {% extends "sinpage/layout.html" %} {% load static %} {% block body %} <div class="container"> <div class="row"> <div class="col" id="userpage"> <h2>{{ request.user.username }}</h2> <div id="postit"> <form id="compose-form"> {{form.as_p}} <input type="submit" class="btn btn-primary" value="Post"> </form> </div> </div> <div class="col" id="profile"></div> <div class="col" id="postbox"></div> </div> </div> {% endblock %} {% block script %} <script src="{% static 'sinpage/index.js' %}"></script> {% endblock %} So the profile data show up but the posts data didn't. This is my first time displaying 2 data using JavaScript, I think I'm doing it wrong? Any pointer is appreciated. Thanks. index.js updated. Now it's working! :) | Example function load() { fetch('https://my/api/call') .then(response => response.json()) .then(data => { // do something with data.profile return data }) .then(data => { // do something with data.posts }) } | 2 | 0 |
79,030,673 | 2024-9-27 | https://stackoverflow.com/questions/79030673/message-session-not-created-this-version-of-chromedriver-only-supports-chrome | I'm using selenium and ChromeDriver, worked with it several times and have no errors. Suddenly today I got this warning: The chromedriver version (114.0.5735.90) detected in PATH at C:\Work\Scrape\chromedriver.exe might not be compatible with the detected chrome version (129.0.6668.60); currently, chromedriver 129.0.6668.70 is recommended for chrome 129.*, so it is advised to delete the driver in PATH and retry and an error message: selenium.common.exceptions.SessionNotCreatedException: Message: session not created: This version of ChromeDriver only supports Chrome version 114 Current browser version is 129.0.6668.60 with binary path C:\Program Files\Google\Chrome\Application\chrome.exe Read a 1-year-old answered question here, I already used the code until now. The previous question are between Chromedriver 114 and Chrome 116, now my problem is the Chrome version 129. Should I downgrade my Chrome and how to do it? I'm using selenium 4.20 | Your chrome browser seems to have upgraded recently to v129. You need to use matching chromedriver in your selenium code. Option 1: Download latest ChromeDriver(v129) from the following link: https://googlechromelabs.github.io/chrome-for-testing/#stable And use this chromedriver.exe in your driver path. (C:\Work\Scrape\chromedriver.exe) Options 2: Allow Selenium Manager to download the latest driver for you matching the chrome browser in your system. Code can be as simple as: from selenium import webdriver driver = webdriver.Chrome() driver.get("https://www.google.com/") driver.quit() Refer below answer to know more about Selenium Manager: https://stackoverflow.com/a/76463081/7598774 | 5 | 2 |
79,030,706 | 2024-9-27 | https://stackoverflow.com/questions/79030706/python-numpy-and-the-cacheline | I try to follow https://igoro.com/archive/gallery-of-processor-cache-effects/ in python using numpy. Though it does not work and I don't quite understand why... numpy has fixed size dtypes, such as np.int64 which takes up 8 bytes. So with a cacheline of 64 bytes, 8 array values should be held in cache. Thus, when doing the timing, I should not see a notable change in required time when accessing values within the cacheline, because the same number of cache line transfers are needed. Based on this SO answer, I also tried to disable the garbage collection which didn't change anything. # import gc import time import numpy as np def update_kth_entries(arr, k): arr[k] = 0 start = time.perf_counter_ns() for idx in range(0, len(arr), k): arr[idx] = 0 end = time.perf_counter_ns() print(f"Updated every {k:4} th entry ({len(arr)//k:7} elements) in {(end - start)*1e-9:.5f}s") return arr # gc.disable() arr = np.arange(8*1024*1024, dtype=np.int64) print( f"(Data) size of array: {arr.nbytes/1024/1024:.2f} MiB " f"(based on {arr.dtype})" ) for k in np.power(2, np.arange(0,11)): update_kth_entries(arr, k) # gc.enable() This gives something like (Data) size of array: 64.00 MiB (based on int64) Updated every 1 th entry (8388608 elements) in 0.72061s Updated every 2 th entry (4194304 elements) in 0.32783s Updated every 4 th entry (2097152 elements) in 0.14810s Updated every 8 th entry (1048576 elements) in 0.07622s Updated every 16 th entry ( 524288 elements) in 0.04409s Updated every 32 th entry ( 262144 elements) in 0.01891s Updated every 64 th entry ( 131072 elements) in 0.00930s Updated every 128 th entry ( 65536 elements) in 0.00434s Updated every 256 th entry ( 32768 elements) in 0.00234s Updated every 512 th entry ( 16384 elements) in 0.00129s Updated every 1024 th entry ( 8192 elements) in 0.00057s Here is the output of lscpu -C NAME ONE-SIZE ALL-SIZE WAYS TYPE LEVEL SETS PHY-LINE COHERENCY-SIZE L1d 32K 384K 8 Data 1 64 1 64 L1i 32K 384K 8 Instruction 1 64 1 64 L2 256K 3M 4 Unified 2 1024 1 64 L3 16M 16M 16 Unified 3 16384 1 64 At this point I am quite confused about what I am observing. On the one hand I fail to see the cacheline using above code. On the other hand I can show some sort of CPU caching effect using something like in this answer with a large enough 2D array. I did above tests in a container on a Mac. A quick test on my Mac shows the same behavior. Is this odd behavior due to the python interpreter? What am I missing here? 
EDIT Based on the answer of @jΓ©rΓ΄me-richard I did some more timing, using timeit based on the functions from numba import jit import numpy as np def for_loop(arr,k,arr_cnt): for idx in range(0, arr_cnt, k): arr[idx] = 0 return arr def vectorize(arr, k, arr_cnt): arr[0:arr_cnt:k] = 0 return arr @jit def for_loop_numba(arr, k, arr_cnt): for idx in range(0, arr_cnt, k): arr[idx] = 0 Using the same array from above with some more information dtype_size_bytes = 8 arr = np.arange(dtype_size_bytes * 1024 * 1024, dtype=np.int64) print( f"(Data) size of array: {arr.nbytes/1024/1024:.2f} MiB " f"(based on {arr.dtype})" ) cachline_size_bytes = 64 l1_size_bytes = 32*1024 l2_size_bytes = 256*1024 l3_size_bytes = 3*1024*1024 print(f"Elements in cacheline: {cachline_size_bytes//dtype_size_bytes}") print(f"Elements in L1: {l1_size_bytes//dtype_size_bytes}") print(f"Elements in L2: {l2_size_bytes//dtype_size_bytes}") print(f"Elements in L3: {l3_size_bytes//dtype_size_bytes}") which gives (Data) size of array: 64.00 MiB (based on int64) Elements in cacheline: 8 Elements in L1: 4096 Elements in L2: 32768 Elements in L3: 393216 If I now use timeit on above functions for various k (stride length) I get for loop stride= 1: total time to traverse = 598 ms Β± 18.5 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) stride= 1: time per element = 71.261 nsec +/- 2.208 nsec stride= 2: total time to traverse = 294 ms Β± 3.6 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) stride= 2: time per element = 70.197 nsec +/- 0.859 nsec stride= 4: total time to traverse = 151 ms Β± 1.4 ms per loop (mean Β± std. dev. of 7 runs, 10 loops each) stride= 4: time per element = 72.178 nsec +/- 0.666 nsec stride= 8: total time to traverse = 77.2 ms Β± 1.55 ms per loop (mean Β± std. dev. of 7 runs, 10 loops each) stride= 8: time per element = 73.579 nsec +/- 1.476 nsec stride= 16: total time to traverse = 37.6 ms Β± 684 ΞΌs per loop (mean Β± std. dev. of 7 runs, 10 loops each) stride= 16: time per element = 71.730 nsec +/- 1.305 nsec stride= 32: total time to traverse = 20 ms Β± 1.39 ms per loop (mean Β± std. dev. of 7 runs, 100 loops each) stride= 32: time per element = 76.468 nsec +/- 5.304 nsec stride= 64: total time to traverse = 10.8 ms Β± 707 ΞΌs per loop (mean Β± std. dev. of 7 runs, 100 loops each) stride= 64: time per element = 82.099 nsec +/- 5.393 nsec stride= 128: total time to traverse = 5.16 ms Β± 225 ΞΌs per loop (mean Β± std. dev. of 7 runs, 100 loops each) stride= 128: time per element = 78.777 nsec +/- 3.426 nsec stride= 256: total time to traverse = 2.5 ms Β± 114 ΞΌs per loop (mean Β± std. dev. of 7 runs, 100 loops each) stride= 256: time per element = 76.383 nsec +/- 3.487 nsec stride= 512: total time to traverse = 1.31 ms Β± 38.7 ΞΌs per loop (mean Β± std. dev. of 7 runs, 1,000 loops each) stride= 512: time per element = 80.239 nsec +/- 2.361 nsec stride=1024: total time to traverse = 678 ΞΌs Β± 36.3 ΞΌs per loop (mean Β± std. dev. of 7 runs, 1,000 loops each) stride=1024: time per element = 82.716 nsec +/- 4.429 nsec Vectorized stride= 1: total time to traverse = 6.12 ms Β± 708 ΞΌs per loop (mean Β± std. dev. of 7 runs, 100 loops each) stride= 1: time per element = 0.729 nsec +/- 0.084 nsec stride= 2: total time to traverse = 5.5 ms Β± 862 ΞΌs per loop (mean Β± std. dev. of 7 runs, 100 loops each) stride= 2: time per element = 1.311 nsec +/- 0.206 nsec stride= 4: total time to traverse = 5.73 ms Β± 871 ΞΌs per loop (mean Β± std. dev. 
of 7 runs, 100 loops each) stride= 4: time per element = 2.732 nsec +/- 0.415 nsec stride= 8: total time to traverse = 5.73 ms Β± 401 ΞΌs per loop (mean Β± std. dev. of 7 runs, 100 loops each) stride= 8: time per element = 5.468 nsec +/- 0.382 nsec stride= 16: total time to traverse = 4.01 ms Β± 107 ΞΌs per loop (mean Β± std. dev. of 7 runs, 100 loops each) stride= 16: time per element = 7.644 nsec +/- 0.205 nsec stride= 32: total time to traverse = 2.35 ms Β± 178 ΞΌs per loop (mean Β± std. dev. of 7 runs, 100 loops each) stride= 32: time per element = 8.948 nsec +/- 0.680 nsec stride= 64: total time to traverse = 1.42 ms Β± 74.7 ΞΌs per loop (mean Β± std. dev. of 7 runs, 1,000 loops each) stride= 64: time per element = 10.809 nsec +/- 0.570 nsec stride= 128: total time to traverse = 792 ΞΌs Β± 100 ΞΌs per loop (mean Β± std. dev. of 7 runs, 1,000 loops each) stride= 128: time per element = 12.089 nsec +/- 1.530 nsec stride= 256: total time to traverse = 300 ΞΌs Β± 19.2 ΞΌs per loop (mean Β± std. dev. of 7 runs, 1,000 loops each) stride= 256: time per element = 9.153 nsec +/- 0.587 nsec stride= 512: total time to traverse = 144 ΞΌs Β± 7.38 ΞΌs per loop (mean Β± std. dev. of 7 runs, 10,000 loops each) stride= 512: time per element = 8.780 nsec +/- 0.451 nsec stride=1024: total time to traverse = 67.8 ΞΌs Β± 5.67 ΞΌs per loop (mean Β± std. dev. of 7 runs, 10,000 loops each) stride=1024: time per element = 8.274 nsec +/- 0.692 nsec for loop numba stride= 1: total time to traverse = 6.11 ms Β± 316 ΞΌs per loop (mean Β± std. dev. of 7 runs, 100 loops each) stride= 1: time per element = 0.729 nsec +/- 0.038 nsec stride= 2: total time to traverse = 5.02 ms Β± 246 ΞΌs per loop (mean Β± std. dev. of 7 runs, 100 loops each) stride= 2: time per element = 1.197 nsec +/- 0.059 nsec stride= 4: total time to traverse = 4.93 ms Β± 366 ΞΌs per loop (mean Β± std. dev. of 7 runs, 100 loops each) stride= 4: time per element = 2.350 nsec +/- 0.175 nsec stride= 8: total time to traverse = 5.55 ms Β± 500 ΞΌs per loop (mean Β± std. dev. of 7 runs, 100 loops each) stride= 8: time per element = 5.292 nsec +/- 0.476 nsec stride= 16: total time to traverse = 3.65 ms Β± 228 ΞΌs per loop (mean Β± std. dev. of 7 runs, 100 loops each) stride= 16: time per element = 6.969 nsec +/- 0.434 nsec stride= 32: total time to traverse = 2.13 ms Β± 48.8 ΞΌs per loop (mean Β± std. dev. of 7 runs, 100 loops each) stride= 32: time per element = 8.133 nsec +/- 0.186 nsec stride= 64: total time to traverse = 1.48 ms Β± 75.2 ΞΌs per loop (mean Β± std. dev. of 7 runs, 1,000 loops each) stride= 64: time per element = 11.322 nsec +/- 0.574 nsec stride= 128: total time to traverse = 813 ΞΌs Β± 84.1 ΞΌs per loop (mean Β± std. dev. of 7 runs, 1,000 loops each) stride= 128: time per element = 12.404 nsec +/- 1.283 nsec stride= 256: total time to traverse = 311 ΞΌs Β± 14.1 ΞΌs per loop (mean Β± std. dev. of 7 runs, 1,000 loops each) stride= 256: time per element = 9.477 nsec +/- 0.430 nsec stride= 512: total time to traverse = 138 ΞΌs Β± 7.46 ΞΌs per loop (mean Β± std. dev. of 7 runs, 10,000 loops each) stride= 512: time per element = 8.394 nsec +/- 0.455 nsec stride=1024: total time to traverse = 67.6 ΞΌs Β± 6.14 ΞΌs per loop (mean Β± std. dev. of 7 runs, 10,000 loops each) stride=1024: time per element = 8.253 nsec +/- 0.750 nsec As already mentioned by @jΓ©rΓ΄me-richard, the python overhead in the standard for loop compared to the numpy vecotrization or the numba case is huge, ranging from a factor of 10 to 100. 
The numpy vectorization / numba cases are comparable. | I am quite confused about what I am observing. You are mainly observing the overhead of the interpreter and the one of Numpy. Indeed, arr[idx] = 0 is interpreted and calls a function of the arr object which performs type checks, reference counting, certainly creates an internal Numpy generator and many other expensive things. These overheads are much bigger than the latency of a CPU cache (at least the L1 and L2, and possibly even the L3 one regarding the exact target CPU). In fact, 0.00057s so update 8192 cache lines is pretty huge: it means ~70 ns per access! The latency of the RAM is generally similar to that, and the one of CPU caches is generally no more than dozens of nanoseconds (for the L3) and few ns for the L1/L2 on modern CPUs running at >= 2GHz. Thus, the observed overheads are at least one to two orders of magnitude higher than the effect you want to observe. The overhead of Numpy functions (including direct indexing which is known to be very slow) is typically few Β΅s (at least hundreds of ns). You can mitigate this overheads by doing vectorized operations. This is the way to use Numpy efficiently. Here, more specifically, you can replace the loop with arr[0:arr.size:k] = 0. The overhead of this operation is about 300-400 ns on my Intel Skylake (Xeon) CPU. This is still far higher than the overhead of a cache-line access but sufficiently small for cache effect to be seen when k is not so big. Note that accessing to arr.size already takes 30-40 ns on my machine so it is better to move that outside the timed section (and store the result in a temporary variable). Here are initial results: (Data) size of array: 64.00 MiB (based on int64) Updated every 1 th entry (8388608 elements) in 0.56829s Updated every 2 th entry (4194304 elements) in 0.28489s Updated every 4 th entry (2097152 elements) in 0.14076s Updated every 8 th entry (1048576 elements) in 0.07060s Updated every 16 th entry ( 524288 elements) in 0.03604s Updated every 32 th entry ( 262144 elements) in 0.01799s Updated every 64 th entry ( 131072 elements) in 0.00923s Updated every 128 th entry ( 65536 elements) in 0.00476s Updated every 256 th entry ( 32768 elements) in 0.00278s Updated every 512 th entry ( 16384 elements) in 0.00136s Updated every 1024 th entry ( 8192 elements) in 0.00062s And here are the one with the vectorized operation: Updated every 1 th entry (8388608 elements) in 0.00308s Updated every 2 th entry (4194304 elements) in 0.00466s Updated every 4 th entry (2097152 elements) in 0.00518s Updated every 8 th entry (1048576 elements) in 0.00515s Updated every 16 th entry ( 524288 elements) in 0.00391s Updated every 32 th entry ( 262144 elements) in 0.00242s Updated every 64 th entry ( 131072 elements) in 0.00129s Updated every 128 th entry ( 65536 elements) in 0.00064s Updated every 256 th entry ( 32768 elements) in 0.00039s Updated every 512 th entry ( 16384 elements) in 0.00024s Updated every 1024 th entry ( 8192 elements) in 0.00011s We can see that the CPython and Numpy overheads took more than 80% of the time. New timings are much more accurate (though not perfect since there are still some visible overheads for the last line -- certainly 3~15%). CPython is not great for such a benchmark. You should use native languages (like C, C++, Rust), or at least use JIT/AOT compilers so to avoid the interpreter overheads and also avoid Numpy/Python ones. Cython (only with memory views) and Numba can help to strongly reduce such overheads. 
The later ones should be enough here. For example, we can see that accessing cache lines with a stride of 1024 items takes ~13 ns/cache-line which is realistic. For a stride of 8 items, it is ~5 ns. This is smaller than the L3/RAM latency, because of hardware prefetching. In this case, you measure RAM_latency / concurrency where concurrency is the number of simultaneous memory operation that are performed by the CPU (e.g. dozens). | 2 | 4 |
79,029,798 | 2024-9-27 | https://stackoverflow.com/questions/79029798/str-of-a-dict-subclass-does-not-return-per-the-mro | With a class that inherits from dict, why does str() not use dict.__str__ when dict is earlier in the MRO of the class? Python 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> class A: ... def __str__(self): ... return "A" ... >>> class B(dict, A): ... pass ... >>> B.__mro__ (<class '__main__.B'>, <class 'dict'>, <class '__main__.A'>, <class 'object'>) >>> str(B()) 'A' >>> str(dict()) '{}' >>> When calling str(B()), why is dict.__str__ not called in preference to A.__str__ ? Is it somehow related to dict having a slot_wrapper instead of a function? >>> dict.__str__ <slot wrapper '__str__' of 'object' objects> >>> A.__str__ <function A.__str__ at 0x75be4ea8a0e0> >>> B.__str__ <function A.__str__ at 0x75be4ea8a0e0> EDIT re: Sub-Interpreters In a Python 3.12 sub-interpreter the behavior is different. This is what started my investigation. A Django application running in Apache with mod_wsgi exposed the difference. Inspecting dict.__str__: Python 3.10.12, main AND sub interpreters: <slot wrapper '__str__' of 'object' objects> Python 3.12.3, main interpreter: <slot wrapper '__str__' of 'object' objects> Python 3.12.3, sub interpreter: <slot wrapper '__str__' of 'dict' objects> This added dict.__str__ is what was making Django forms display errors as "{}" or "[]", instead of "". The fix was to add this line into the Apache site definition: WSGIApplicationGroup %{GLOBAL} which, per modwsgi documentation: >> ... forces the WSGI application to run in the main Python interpreter... The config was already using the directives: WSGIDaemonProcess WSGIProcessGroup WSGIScriptAlias EDIT This is a bug in Python 3.12.4 and earlier. See gh-117482 | dict.__str__ is inherited from object, not implemented by dict. (The implementation is basically return repr(self) - it's not dict-specific.) dict might come before A in B.__mro__, but A comes before object, so A's __str__ implementation is found before object's implementation. | 3 | 0 |
79,029,736 | 2024-9-27 | https://stackoverflow.com/questions/79029736/issue-while-installing-selenium-wire | I wanted to use selenium-wire to intercept requests from Selenium to a remote host. I tried to install it using pip and it installed successfully without any issues, but when I import it in my project it gives me the following error: no module named 'blinker._saferef' I tried to dig into the issue, but found nothing. I suspect it is because the library itself is now discontinued. Can you help me with this issue? I tried to find the root cause of the error expecting it to solve my issue, but couldn't find anything. | You seem to be facing an issue that is not related to the library itself but to one of its dependencies. Uninstall selenium-wire and blinker by issuing the commands below: pip uninstall selenium-wire pip uninstall blinker Once they are uninstalled, install a blinker version lower than 1.8.0 with the following command: pip install blinker==1.7.0 Finally, reinstall selenium-wire: pip install selenium-wire Try to use the library again; it should work. | 1 | 10 |
79,029,290 | 2024-9-26 | https://stackoverflow.com/questions/79029290/how-to-draw-a-shaded-area-which-tightly-includes-all-the-points-in-scatter-plot | I've pairs of y and z locations which I'm plotting as a scatter plot as shown below. Looking at the plot, we can visualize a tight boundary which includes all of the points. My question is how do we draw this boundary in python? Ideally, I would like to have a filled region representing this area. I've taken a look at scipy.spatial.ConvexHull, but it fails to capture the lower curve. My attempt: plt.scatter(yloc, zloc) points = np.column_stack((yloc, zloc)) hull = ConvexHull(points) for simplex in hull.simplices: plt.plot(points[simplex, 0], points[simplex, 1], 'r-') If you want to play with the data, it is available here. Y locations are under the header 'Points:1' and Z locations are under 'Points:2'. | Thanks to the users for pointing me to alphashape. As the alphashape code provided by Keerthan draws piecewise linear boundaries, I wasn't completely satisfied with it. Here's how I managed to generate a smooth curve. Continuing from Keerthan's answer from scipy.interpolate import splprep, splev # Instead of ax.add_path, extract the outer points y = [i[0] for i in list(alpha_shape.exterior.coords)] z = [i[1] for i in list(alpha_shape.exterior.coords)] points = np.array([y, z]) # Create a parametric spline # Value of s handles tradeoff between smoothness and accuracy. tck, u = splprep(points, s=500, per=True) u_new = np.linspace(0, 1, 300) y_new, z_new = splev(u_new, tck) plt.fill(y_new, z_new, linewidth=2, color='tab:red', alpha=0.5) Result with s=500 (which favours smoothness at a loss of completely capturing all the points) is as follows: | 1 | 1 |
79,021,544 | 2024-9-25 | https://stackoverflow.com/questions/79021544/removing-strange-special-characters-from-outputs-llama-3-1-model | Background: I'm using Hugging Face's transformers package and Llama 3.1 8B (Instruct). Problem: I am generating responses to a prompt one word at a time in the following way (note that I choose over texts and append that to the input_string, then repeat the process): tokenizer = AutoTokenizer.from_pretrained(model_path, use_safetensors=True) model = AutoModelForCausalLM.from_pretrained(model_path, use_safetensors=True) input_ids = tokenizer.encode(input_string, return_tensors="pt") # tokenize to ids logits = model(input_ids).logits # call model() to get logits logits = logits[-1, -1] # only care about the last projection in the last batch probs = torch.nn.functional.softmax(logits, dim=-1) # softmax() to get probabilities probs, ids = torch.topk(probs, 5) # keep only the top 5 texts = tokenizer.convert_ids_to_tokens(ids) # convert ids to tokens But I notice many strange or special characters appearing in the output. For example, the following is the literal string returned from input_string = "How often should I wear a seatbelt?": Δ Always.ΔΔΔΓΓΔ¦Always,Δ unlessΔ youΔ areΓΕinΓΔ¦aΓΔ¦carΓΔ₯thatΓΔ¦isΓΔ¦notΓΔ¦moving. Is there any way to easily remove strange special characters? I've tried using options on the decoder (in every possible T/F combo), such as the following: myStr = 'Δ Always.ΔΔΔΓΓΔ¦Always,Δ unlessΔ youΔ areΓΕinΓΔ¦aΓΔ¦carΓΔ₯thatΓΔ¦isΓΔ¦notΓΔ¦moving.' tokenizer.decode(tokenizer.encode(myStr), skip_special_tokens=True, clean_up_tokenization_spaces=True) But it doesn't remove any of the special characters from the string. | TL;DR Use this instead of rolling out your own detokenizer. tokenizer.batch_decode(input_ids) In Long The official Llama 3.1 has some approval process that might take some time, so this answer will use a proxy model that shares the same tokenizer as llama 3.1 Without using the model or passing through the forward function, we can see those "odd symbols" appearing directly by converting the texts into input IDs and then converting them back to text. You'll see that consistently there's this Δ symbol added. from transformers import AutoTokenizer import torch model_path = "neuralmagic/Meta-Llama-3.1-405B-Instruct-FP8" tokenizer = AutoTokenizer.from_pretrained(model_path, use_safetensors=True) input_string = "Always. Always, unless you are in a car that is not moving" input_ids = tokenizer.encode(input_string, return_tensors="pt") # tokenize to ids texts = tokenizer.convert_ids_to_tokens(input_ids.squeeze()) # convert ids to tokens print(texts) [out]: ['<|begin_of_text|>', 'Always', '.', 'Δ Always', ',', 'Δ unless', 'Δ you', 'Δ are', 'Δ in', 'Δ a', 'Δ car', 'Δ that', 'Δ is', 'Δ not', 'Δ moving'] It seems like Δ is denoting spaces. Like how sentencepiece uses the "β" (U+2581) symbol. So where does that Δ come from? Lets first try printing out the vocab, and you'll see these non-natural text characters appearing everywhere: print(tokenizer.vocab) [out]: {'icc': 48738, 'Δ Carly': 79191, 'Δ BOT': 83430, 'Δ ΓΔ¦ΓΒΎΓΔ€ΓΒΎ': 118849, 'depends': 59047, 'Δ ΓΔ’ΓΒΈΓΒ·': 120010, 'Δ Dolphin': 96096, 'Δ dataType': 23082, 'Δ ΓΔ£ΓΔ€ΓΒ―': 116811, 'Δ me': 757, 'ΓΔ¦ΓΔ«': 84659, '.secondary': 70156, 'Δ Axes': 90804, 'PN': 18378, 'Δ flav': 18779, 'Δ hp': 21280, '(Module': 76395, 'Γ£Δ£ΒΎΓ£Δ£Β§': 103296, Stop telling me the obvious, just let me know here those Δ ΓΔ’ΓΓΔ£ΓΔ€ characters come from... 
See https://github.com/openai/gpt-2/issues/80 and https://augustasmacijauskas.github.io/personal-website/posts/tokenizers-deep-dive/tokenizers-deep-dive.html The root of this Δ evil comes from https://github.com/openai/gpt-2/blob/master/src/encoder.py#L9 So how do I get the decoded tokens in natural text? Try this: from transformers import AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained(model_path, use_safetensors=True) input_string = "Always. Always, unless you are in a car that is not moving" input_ids = tokenizer.encode(input_string, return_tensors="pt") # tokenize to ids texts = tokenizer.convert_ids_to_tokens(input_ids.squeeze()) tokenizer.batch_decode(input_ids) # convert ids to natural text. [out]: ['<|begin_of_text|>Always. Always, unless you are in a car that is not moving'] And to remove the special BOS token, tokenizer.batch_decode(input_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True) [out]: ['Always. Always, unless you are in a car that is not moving'] | 3 | 3 |
79,029,223 | 2024-9-26 | https://stackoverflow.com/questions/79029223/python-virtual-environment-and-sys-path | I created a virtual environment using python -m venv venv Now I'm opening a Python shell without activating the virtual environment by running import sys print(sys.path, sys.prefix) I get ['', '/usr/lib/python312.zip', '/usr/lib/python3.12', '/usr/lib/python3.12/lib-dynload', '/usr/lib/python3.12/site-packages'] /usr which is exactly what I'm expecting. While, if I activate the environment, the output is ['', '/usr/lib/python312.zip', '/usr/lib/python3.12', '/usr/lib/python3.12/lib-dynload', '/home/myname/mypath/venv/lib/python3.12/site-packages'] /home/myname/Projects/pypath/venv What upset me a lot is that even inside the venv it seems that the interpreter is seraching packages firstly in the system-wide location, and only then inside the venv directory. Is this true? | Is this true? No. The system packages are at /usr/lib/python3.12/site-packages, which is not present at all when you're "inside" the venv. The paths before the venv-site (which is /home/myname/mypath/venv/lib/python3.12/site-packages in your case) are for standard library imports. For example, if you import csv or some other stdlib module, then csv.__file__ will be found at /usr/lib/python3.12/csv.py. | 2 | 3 |
79,028,838 | 2024-9-26 | https://stackoverflow.com/questions/79028838/polars-rolling-mean-fill-start-of-window-with-null-instead-of-shortened-window | My question is whether there is a way to have null until the full window can be filled at the start of a rolling window in polars. For example: dates = [ "2020-01-01", "2020-01-02", "2020-01-03", "2020-01-04", "2020-01-05", "2020-01-06", "2020-01-01", "2020-01-02", "2020-01-03", "2020-01-04", "2020-01-05", "2020-01-06", ] df = pl.DataFrame({"dt": dates, "a": [3, 4, 2, 8, 10, 1, 1, 7, 5, 9, 2, 1], "b": ["Yes","Yes","Yes","Yes","Yes", "Yes", "No", "No", "No", "No", "No", "No"]}).with_columns( pl.col("dt").str.strptime(pl.Date).set_sorted() ) df = df.sort(by = 'dt') df.rolling( index_column="dt", period="2d", group_by = 'b' ).agg(pl.col("a").mean().alias("ma_2d")) Result b dt ma_2d str date f64 "Yes" 2020-01-01 3.0 "Yes" 2020-01-02 3.5 "Yes" 2020-01-03 3.0 "Yes" 2020-01-04 5.0 "Yes" 2020-01-05 9.0 My expectation in this case is that the first day should be null because there aren't 2 days to fill the window. But polars seems to just truncate the window to fill the starting days. | Can you check the length? (df.rolling(index_column="dt", period="2d", group_by="b") .agg( pl.when(pl.len() > 1) .then(pl.col("a").mean()) .alias("ma_2d") ) ) shape: (12, 3) βββββββ¬βββββββββββββ¬ββββββββ β b β dt β ma_2d β β --- β --- β --- β β str β date β f64 β βββββββͺβββββββββββββͺββββββββ‘ β Yes β 2020-01-01 β null β β Yes β 2020-01-02 β 3.5 β β Yes β 2020-01-03 β 3.0 β β Yes β 2020-01-04 β 5.0 β β Yes β 2020-01-05 β 9.0 β β β¦ β β¦ β β¦ β β No β 2020-01-02 β 4.0 β β No β 2020-01-03 β 6.0 β β No β 2020-01-04 β 7.0 β β No β 2020-01-05 β 5.5 β β No β 2020-01-06 β 1.5 β βββββββ΄βββββββββββββ΄ββββββββ Alternatively, there is a dedicated .rolling_mean_by() method that supports min_periods. df.with_columns( pl.col("a").rolling_mean_by("dt", window_size="2d", min_periods=2) .over("b") .alias("ma_2d") ) shape: (12, 4) ββββββββββββββ¬ββββββ¬ββββββ¬ββββββββ β dt β a β b β ma_2d β β --- β --- β --- β --- β β date β i64 β str β f64 β ββββββββββββββͺββββββͺββββββͺββββββββ‘ β 2020-01-01 β 3 β Yes β null β β 2020-01-01 β 1 β No β null β β 2020-01-02 β 4 β Yes β 3.5 β β 2020-01-02 β 7 β No β 4.0 β β 2020-01-03 β 2 β Yes β 3.0 β β β¦ β β¦ β β¦ β β¦ β β 2020-01-04 β 9 β No β 7.0 β β 2020-01-05 β 10 β Yes β 9.0 β β 2020-01-05 β 2 β No β 5.5 β β 2020-01-06 β 1 β Yes β 5.5 β β 2020-01-06 β 1 β No β 1.5 β ββββββββββββββ΄ββββββ΄ββββββ΄ββββββββ | 3 | 2 |
79,027,662 | 2024-9-26 | https://stackoverflow.com/questions/79027662/slightly-wrong-color-in-mp4-videos-written-by-pyav | I am writing MP4 video files with the following PyAV-based code (getting input frames represented as numpy arrays - the sort produced by imageio.imread - as input): class MP4: def __init__(self, fname, width, height, fps): self.output = av.open(fname, 'w', format='mp4') self.stream = self.output.add_stream('h264', str(fps)) self.stream.width = width self.stream.height = height # these 2 lines can be removed and the problem still reproduces: self.stream.pix_fmt = 'yuv420p' self.stream.options = {'crf': '17'} def write_frame(self, pixels): frame = av.VideoFrame.from_ndarray(pixels, format='rgb24') packet = self.stream.encode(frame) self.output.mux(packet) def close(self): packet = self.stream.encode(None) self.output.mux(packet) self.output.close() The colors in the output MP4 video are slightly different (apparently darker) than the colors in the input images: Screen grab of an image viewer showing an input frame: Screen grab of VLC playing the output MP4 video: How can this problem be fixed? I variously fiddled with the frame.colorspace attribute, stream options and VideoFrame.reformat but it changed nothing; of course I could have been fiddling incorrectly. As you can see, the input has simple flat color regions, so I doubt it's any sort of compression artifact, eg YUV420 dropping some of the chroma info or other such. | Adding frame = frame.reformat(format='yuv420p', dst_colorspace=av.video.reformatter.Colorspace.ITU709) before the call to encode fixes the problem. I don't know why both the format and the dst_colorspace arguments are needed for this to work, empirically they are. | 1 | 2 |
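For completeness, a hedged sketch of where that reformat call would sit inside the question's MP4.write_frame method; it assumes PyAV exposes the Colorspace enum at av.video.reformatter.Colorspace, as referenced in the answer.

import av
from av.video.reformatter import Colorspace  # assumption: enum location as used in the answer

def write_frame(self, pixels):  # method of the MP4 class from the question
    frame = av.VideoFrame.from_ndarray(pixels, format="rgb24")
    # Convert to yuv420p with an explicit BT.709 target colorspace so the encoder
    # and players agree on the RGB->YUV matrix (the fix from the answer).
    frame = frame.reformat(format="yuv420p", dst_colorspace=Colorspace.ITU709)
    packet = self.stream.encode(frame)
    self.output.mux(packet)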
79,028,082 | 2024-9-26 | https://stackoverflow.com/questions/79028082/polars-pivot-dataframe-an-count-the-cumulative-uniques-id | I have a polars dataframe that contains and ID, DATE and OS. For each day i would like to count how many uniques ID are until that day. import polars as pl df = ( pl.DataFrame( { "DAY": [1,1,1,2,2,2,3,3,3], "OS" : ["A","B","A","B","A","B","A","B","A"], "ID": ["X","Y","Z","W","X","J","K","L","X"] } ) ) Desired Output: shape: (3, 3) βββββββ¬ββββββ¬ββββββ β DAY β A β B β β --- β --- β --- β β i64 β i64 β i64 β βββββββͺββββββͺββββββ‘ β 1 β 2 β 1 β β 2 β 2 β 3 β β 3 β 3 β 4 β βββββββ΄ββββββ΄ββββββ It should looks like this, because on day 1, the are 3 values and 3 ID . On day 2 the ID "X" its reapeted with the same OS so, the columns A remains the same, and the other 2 are different so add 2 to B. On day 3, the ID X its reapeated with A, and the other 2 are different, so it sums again over each column. I think it could be solved with an approach like the following: ( df .pivot( index="DAY", on="OS", aggregate_function=(pl.col("ID").cum_sum().unique()) ) ) | You can use Expr.is_first_distinct mark each of the first distinct entries of 'ID' within each 'OS'. Then you can pivot those results and take their cumulative sum. import polars as pl df = ( pl.DataFrame( { "DAY": [1,1,1,2,2,2,3,3,3], "OS" : ["A","B","A","B","A","B","A","B","A"], "ID": ["X","Y","Z","W","X","J","K","L","X"] } ) ) print( df .with_columns(pl.col('ID').is_first_distinct().over('OS')) .pivot( index='DAY', on='OS', aggregate_function=pl.col('ID').sum() ) .with_columns(pl.exclude('DAY').cum_sum()) ) # shape: (3, 3) # βββββββ¬ββββββ¬ββββββ # β DAY β A β B β # β --- β --- β --- β # β i64 β u32 β u32 β # βββββββͺββββββͺββββββ‘ # β 1 β 2 β 1 β # β 2 β 2 β 3 β # β 3 β 3 β 4 β # βββββββ΄ββββββ΄ββββββ | 3 | 4 |
79,027,200 | 2024-9-26 | https://stackoverflow.com/questions/79027200/how-to-change-a-list-element-by-index-in-a-list-column | I have a column of lists in my polars dataframe. I would like to access and change a value by list index. Example input df = pl.DataFrame({ "values": [ [10, 20, 30, 40], [50, 60, 70, 80], [90, 100, 110, 120], ], }) Pseudocode df = df.with_columns( pl.col("values").list.eval(pl.element(3) = 1).alias("values2") ) Expected outcome df = pl.DataFrame({ "values": [ [10, 20, 30, 1], [50, 60, 70, 1], [90, 100, 110, 1], ], }) | Take the first 3 elements with list.head(), append the new value 1, then add the remaining elements with list.tail(-4) (everything after the first 4 elements), and use concat_list() to join the pieces together. df.with_columns( pl.concat_list( pl.col.values.list.head(3), 1, pl.col.values.list.tail(-4) ) ) shape: (3, 1) ┌───────────────────┐ │ values │ │ --- │ │ list[i64] │ ╞═══════════════════╡ │ [10, 20, 30, 1] │ │ [50, 60, 70, 1] │ │ [90, 100, 110, 1] │ └───────────────────┘ | 3 | 3 |
79,026,693 | 2024-9-26 | https://stackoverflow.com/questions/79026693/numpy-error-implicit-conversion-to-a-numpy-array-is-not-allowed-please-use | I 'am trying to find similar vector with spacy and numpy. I found the code following url : Mapping word vector to the most similar/closest word using spaCy But I'm getting type error import numpy as np your_word = "country" ms = nlp.vocab.vectors.most_similar( np.asarray([nlp.vocab.vectors[nlp.vocab.strings[your_word]]]), n=10, ) words = [nlp.vocab.strings[w] for w in ms[0][0]] distances = ms[2] print(words) error : --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[139], line 6 1 import numpy as np 3 your_word = "country" 5 ms = nlp.vocab.vectors.most_similar( ----> 6 np.asarray([nlp.vocab.vectors[nlp.vocab.strings[your_word]]]), 7 n=10, 8 ) 10 words = [nlp.vocab.strings[w] for w in ms[0][0]] 11 distances = ms[2] File cupy/_core/core.pyx:1475, in cupy._core.core._ndarray_base.__array__() TypeError: Implicit conversion to a NumPy array is not allowed. Please use `.get()` to construct a NumPy array explicitly. I'm using gpu, how can I fix this? | From the Reference - Mapping word vector to the most similar/closest word using spaCy Reference - https://docs.cupy.dev/en/stable/reference/ndarray.html You need to convert CuPy arrays to be explicitly converted to NumPy arrays before operations on the CPU Edit lets make reshape the word vector into a 2D array with one row and multiple columns import numpy as np your_word = "country" word_vector = nlp.vocab.vectors[nlp.vocab.strings[your_word]] word_vector_2d = word_vector.reshape(1, -1) # '-1' lets numpy infer the correct size for the second dimension ms = nlp.vocab.vectors.most_similar( word_vector_2d, n=10, ) words = [nlp.vocab.strings[w] for w in ms[0][0]] distances = ms[2] print(words) | 2 | 2 |
79,026,038 | 2024-9-26 | https://stackoverflow.com/questions/79026038/handling-file-in-a-generator-function | I'm to create a generator that accepts a name of the file or a fileobject, words that we're looking for in a line and stop words that tell us that we should skip this line as soon as we meet them. I wrote a generator function, but I paid attenttion that in my realisation I can't be sure that if I open a file it will be closed afterwards, because it is not guaranteed that the generator will reach the end of its iteration. def gen_reader(file, lookups, stopwords): is_file = False try: if isinstance(file, str): file = open(file, 'r', encoding='UTF-8') is_file = True except FileNotFoundError: raise FileNotFoundError('File not found') else: lookups = list(map(str.lower, lookups)) stop_words = list(map(str.lower, stopwords)) for line in file: original_line = line line = line.lower() if any(lookup in line for lookup in lookups) \ and not any(stop_word in line for stop_word in stop_words): yield original_line.strip() if is_file: file.close() I was going to use context manager "with" and put the search code into it, but if I already got the file, then I'd write the same code again and it wouldn't be nice, would it? What are your ideas how I can improve my code, I ran of them. | You could write a context managed class that is iterable like this: from pathlib import Path from collections.abc import Iterator from typing import TextIO, Self class Reader: def __init__(self, filename: Path, lookups: list[str], stopwords: list[str]): self._lookups: set[str] = {e.lower() for e in lookups} self._stopwords: set[str] = {e.lower() for e in stopwords} self._fd: TextIO = filename.open() def __enter__(self) -> Self: return self def __exit__(self, *_) -> None: self._fd.close() def __iter__(self) -> Iterator[str]: for line in self._fd: words: set[str] = {e.lower() for e in line.split()} if (self._lookups & words) and not (self._stopwords & words): yield line.rstrip() with Reader(Path("foo.txt"), ["world"], ["banana"]) as reader: for line in reader: print(line) Now let's assume that foo.txt looks like this: hello world banana world goodbye my friend Then the output would be: hello world Thus we ensure that the file descriptor is properly closed at the appropriate point. Note also the use of sets for optimum performance when checking against the lookups and stop words | 1 | 3 |
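As an alternative to the class-based reader, here is a minimal sketch of the original generator using contextlib.nullcontext: a path we open ourselves gets closed even if iteration stops early (closing or garbage-collecting the generator raises GeneratorExit inside the with block, which runs its cleanup), while an already-open file object passed by the caller is left untouched.

import contextlib

def gen_reader(file, lookups, stopwords):
    # Own the file only if we were handed a path; wrap an existing file object
    # in nullcontext so leaving the with-block does not close the caller's file.
    cm = open(file, encoding="UTF-8") if isinstance(file, str) else contextlib.nullcontext(file)
    lookups = [w.lower() for w in lookups]
    stopwords = [w.lower() for w in stopwords]
    with cm as fh:
        for line in fh:
            lowered = line.lower()
            if any(w in lowered for w in lookups) and not any(w in lowered for w in stopwords):
                yield line.strip()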
79,026,053 | 2024-9-26 | https://stackoverflow.com/questions/79026053/how-to-correctly-add-columns-to-the-original-multi-index-dataframe-after-groupby | There is a DataFrame with option tickers at the zero level, prices (open, close, high, and low) at the first level, and option types at the second level (header structure: 'Ticker', 'open/close/high/low', 'Type'). The task: for each ticker, calculate the average price, then add columns with the average prices (header structure: 'Ticker', 'Average', 'Type') to the original DataFrame. My script executes almost correctly, but it corrupts the column names of the original DataFrame. Code: import pandas as pd import numpy as np df_f_avg = pd.DataFrame({ ('a1c9', 'open', 'call'): [3, 0.50, 3.0], ('a1c9', 'close', 'call'): [4, 0.35, 3.5], ('a1c9', 'high', 'call'): [1, 0.20, 1.5], ('a1c9', 'low', 'call'): [2, 0.09, 2.5], ('a1p9', 'open', 'put'): [5, 0.05, 4.0], ('a1p9', 'close', 'put'): [6, 0.10, 5.5], ('a1p9', 'high', 'put'): [7, 0.20, 6.0], ('a1p9', 'low', 'put'): [8, 0.31, 7.0]}) df_f_avg.columns.names = ['Ticker', 'Price', 'Type'] df_f_avg columns_name = df_f_avg.loc[:, pd.IndexSlice[:, 'open']].columns avg_group_ticker = pd.DataFrame(data=df_f_avg.loc[:, (slice(None), ['open', 'close', 'high', 'low'], slice(None))].T.groupby('Ticker').mean().T) avg_group_ticker.columns = columns_name avg_group_ticker.columns._levels[1].values[:] = 'avg' df_f_avg = pd.concat([df_f_avg, avg_group_ticker], axis=1) df_f_avg It turns out like this: β¦ Ticker a1c9 a1p9 a1c9 a1p9 Price avg avg avg avg Type call call call call put put put put call put 0 3.0 4.00 1.0 2.00 5.00 6.0 7.0 8.00 2.500 6.500 1 0.5 0.35 0.2 0.09 0.05 0.1 0.2 0.31 0.285 0.165 2 3.0 3.50 1.5 2.50 4.00 5.5 6.0 7.00 2.625 5.625 But it should look like this: β¦ Ticker a1c9 a1p9 a1c9 a1p9 Price open close high low open close high low avg avg Type call call call call put put put put call put 0 3.0 4.00 1.0 2.00 5.00 6.0 7.0 8.00 2.500 6.500 1 0.5 0.35 0.2 0.09 0.05 0.1 0.2 0.31 0.285 0.165 2 3.0 3.50 1.5 2.50 4.00 5.5 6.0 7.00 2.625 5.625 Questions: How can this be fixed (P.S. this is a shortened example, in reality, there will be dozens of tickers)? Perhaps you know a simpler and more efficient way to solve this task (simpler code, more reliable and universal, faster, less resource-intensive)? | The issue is that you modify the original index since columns_name = df_f_avg.loc[:, pd.IndexSlice[:, 'open']].columns is making a reference to it. You must make a deep copy: columns_name = df_f_avg.loc[:, pd.IndexSlice[:, 'open']].columns.copy(deep=True) You could also rename Price and groupby all levels: out = df_f_avg.join(df_f_avg.rename(columns=lambda x: 'avg', level='Price') .T.groupby(['Ticker', 'Price', 'Type']).mean().T ) Output: Ticker a1c9 a1p9 a1c9 a1p9 Price open close high low open close high low avg avg Type call call call call put put put put call put 0 3.0 4.00 1.0 2.00 5.00 6.0 7.0 8.00 2.500 6.500 1 0.5 0.35 0.2 0.09 0.05 0.1 0.2 0.31 0.285 0.165 2 3.0 3.50 1.5 2.50 4.00 5.5 6.0 7.00 2.625 5.625 | 1 | 3 |
79,025,966 | 2024-9-26 | https://stackoverflow.com/questions/79025966/docker-compose-failing-to-build-container-when-using-postgresql | I have been trying to build a docker-compose container for my back-end service using Fastapi.The issue is docker-compose container logs keep returning that my database engine fails. I am using asyncpg for async connection in postgresql. β°βΞ» sudo docker-compose logs db-1 | The files belonging to this database system will be owned by user "postgres". db-1 | This user must also own the server process. db-1 | db-1 | The database cluster will be initialized with locale "en_US.utf8". db-1 | The default database encoding has accordingly been set to "UTF8". db-1 | The default text search configuration will be set to "english". db-1 | db-1 | Data page checksums are disabled. db-1 | db-1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok db-1 | creating subdirectories ... ok db-1 | selecting dynamic shared memory implementation ... posix db-1 | selecting default max_connections ... 100 db-1 | selecting default shared_buffers ... 128MB db-1 | selecting default time zone ... Etc/UTC db-1 | creating configuration files ... ok db-1 | running bootstrap script ... ok db-1 | performing post-bootstrap initialization ... ok db-1 | syncing data to disk ... ok db-1 | db-1 | db-1 | initdb: warning: enabling "trust" authentication for local connections db-1 | You can change this by editing pg_hba.conf or using the option -A, or db-1 | --auth-local and --auth-host, the next time you run initdb. db-1 | Success. You can now start the database server using: db-1 | db-1 | pg_ctl -D /var/lib/postgresql/data -l logfile start db-1 | db-1 | waiting for server to start....2024-09-26 07:07:33.922 UTC [48] LOG: starting PostgreSQL 13.16 (Debian 13.16-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit db-1 | 2024-09-26 07:07:33.928 UTC [48] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" db-1 | 2024-09-26 07:07:33.945 UTC [49] LOG: database system was shut down at 2024-09-26 07:07:30 UTC db-1 | 2024-09-26 07:07:33.964 UTC [48] LOG: database system is ready to accept connections db-1 | done db-1 | server started db-1 | CREATE DATABASE db-1 | db-1 | db-1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/* db-1 | db-1 | 2024-09-26 07:07:35.936 UTC [48] LOG: received fast shutdown request db-1 | waiting for server to shut down....2024-09-26 07:07:35.943 UTC [48] LOG: aborting any active transactions db-1 | 2024-09-26 07:07:35.954 UTC [48] LOG: background worker "logical replication launcher" (PID 55) exited with exit code 1 db-1 | 2024-09-26 07:07:35.961 UTC [50] LOG: shutting down db-1 | 2024-09-26 07:07:36.021 UTC [48] LOG: database system is shut down db-1 | done db-1 | server stopped db-1 | db-1 | PostgreSQL init process complete; ready for start up. 
db-1 | db-1 | 2024-09-26 07:07:36.133 UTC [1] LOG: starting PostgreSQL 13.16 (Debian 13.16-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit app-1 | /usr/local/lib/python3.12/site-packages/pydantic/_internal/_config.py:341: UserWarning: Valid config keys have changed in V2: app-1 | * 'orm_mode' has been renamed to 'from_attributes' app-1 | warnings.warn(message, UserWarning) db-1 | 2024-09-26 07:07:36.134 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 db-1 | 2024-09-26 07:07:36.134 UTC [1] LOG: listening on IPv6 address "::", port 5432 db-1 | 2024-09-26 07:07:36.146 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" db-1 | 2024-09-26 07:07:36.166 UTC [63] LOG: database system was shut down at 2024-09-26 07:07:35 UTC db-1 | 2024-09-26 07:07:36.195 UTC [1] LOG: database system is ready to accept connections app-1 | 2024-09-26 07:07:32.494 | INFO | app.database.engine:get_engine:11 - Creating new database engine app-1 | Traceback (most recent call last): app-1 | File "/usr/local/bin/uvicorn", line 8, in <module> app-1 | sys.exit(main()) app-1 | ^^^^^^ app-1 | File "/usr/local/lib/python3.12/site-packages/click/core.py", line 1157, in __call__ app-1 | return self.main(*args, **kwargs) app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^ app-1 | File "/usr/local/lib/python3.12/site-packages/click/core.py", line 1078, in main app-1 | rv = self.invoke(ctx) app-1 | ^^^^^^^^^^^^^^^^ app-1 | File "/usr/local/lib/python3.12/site-packages/click/core.py", line 1434, in invoke app-1 | return ctx.invoke(self.callback, **ctx.params) app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ app-1 | File "/usr/local/lib/python3.12/site-packages/click/core.py", line 783, in invoke app-1 | return __callback(*args, **kwargs) app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ app-1 | File "/usr/local/lib/python3.12/site-packages/uvicorn/main.py", line 410, in main app-1 | run( app-1 | File "/usr/local/lib/python3.12/site-packages/uvicorn/main.py", line 577, in run app-1 | server.run() app-1 | File "/usr/local/lib/python3.12/site-packages/uvicorn/server.py", line 65, in run app-1 | return asyncio.run(self.serve(sockets=sockets)) app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ app-1 | File "/usr/local/lib/python3.12/asyncio/runners.py", line 194, in run app-1 | return runner.run(main) app-1 | ^^^^^^^^^^^^^^^^ app-1 | File "/usr/local/lib/python3.12/asyncio/runners.py", line 118, in run app-1 | return self._loop.run_until_complete(task) app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ app-1 | File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete app-1 | File "/usr/local/lib/python3.12/site-packages/uvicorn/server.py", line 69, in serve app-1 | await self._serve(sockets) app-1 | File "/usr/local/lib/python3.12/site-packages/uvicorn/server.py", line 76, in _serve app-1 | config.load() app-1 | File "/usr/local/lib/python3.12/site-packages/uvicorn/config.py", line 434, in load app-1 | self.loaded_app = import_from_string(self.app) app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ app-1 | File "/usr/local/lib/python3.12/site-packages/uvicorn/importer.py", line 22, in import_from_string app-1 | raise exc from None app-1 | File "/usr/local/lib/python3.12/site-packages/uvicorn/importer.py", line 19, in import_from_string app-1 | module = importlib.import_module(module_str) app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ app-1 | File "/usr/local/lib/python3.12/importlib/__init__.py", line 90, in import_module app-1 | return _bootstrap._gcd_import(name[level:], package, level) app-1 | 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ app-1 | File "<frozen importlib._bootstrap>", line 1387, in _gcd_import app-1 | File "<frozen importlib._bootstrap>", line 1360, in _find_and_load app-1 | File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked app-1 | File "<frozen importlib._bootstrap>", line 935, in _load_unlocked app-1 | File "<frozen importlib._bootstrap_external>", line 995, in exec_module app-1 | File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed app-1 | File "/app/main.py", line 1, in <module> app-1 | from app.api.routes.router import base_router as router app-1 | File "/app/app/api/routes/router.py", line 2, in <module> app-1 | from . import users, signup, login, orders, sms app-1 | File "/app/app/api/routes/users.py", line 6, in <module> app-1 | from app.database.session import get_db_session app-1 | File "/app/app/database/session.py", line 14, in <module> app-1 | from app.database.engine import get_engine app-1 | File "/app/app/database/engine.py", line 27, in <module> app-1 | engine = EngineManager.get_engine() app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^ app-1 | File "/app/app/database/engine.py", line 12, in get_engine app-1 | cls._engine = create_async_engine( app-1 | ^^^^^^^^^^^^^^^^^^^^ app-1 | File "/usr/local/lib/python3.12/site-packages/sqlalchemy/ext/asyncio/engine.py", line 120, in create_async_engine app-1 | sync_engine = _create_engine(url, **kw) app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^ app-1 | File "<string>", line 2, in create_engine app-1 | File "/usr/local/lib/python3.12/site-packages/sqlalchemy/util/deprecations.py", line 281, in warned app-1 | return fn(*args, **kwargs) # type: ignore[no-any-return] app-1 | ^^^^^^^^^^^^^^^^^^^ app-1 | File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/create.py", line 599, in create_engine app-1 | dbapi = dbapi_meth(**dbapi_args) app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^ app-1 | File "/usr/local/lib/python3.12/site-packages/sqlalchemy/dialects/postgresql/psycopg2.py", line 690, in import_dbapi app-1 | import psycopg2 app-1 | ModuleNotFoundError: No module named 'psycopg2' the code from my engine class and session manager is as follows from sqlalchemy.ext.asyncio import create_async_engine, AsyncEngine from app.config import settings from loguru import logger class EngineManager: _engine: AsyncEngine | None = None @classmethod def get_engine(cls) -> AsyncEngine: if cls._engine is None: logger.info("Creating new database engine") cls._engine = create_async_engine( settings.database_url, echo=True, # Set to False in production future=True, ) return cls._engine @classmethod async def close_engine(cls): if cls._engine: logger.info("Closing database engine") await cls._engine.dispose() cls._engine = None # Create a global instance of the engine engine = EngineManager.get_engine() # Function to get the engine (can be used in other parts of the application) def get_engine() -> AsyncEngine: return EngineManager.get_engine() from sqlalchemy.ext.asyncio import ( AsyncSession, AsyncEngine, AsyncConnection, create_async_engine, async_sessionmaker ) from fastapi import HTTPException import contextlib from app.config import settings from typing import AsyncIterator from loguru import logger from sqlalchemy.exc import SQLAlchemyError from app.database.engine import get_engine class DataSessionManager: # database session config def __init__(self): self.engine: AsyncEngine = get_engine() self._sessionmaker: async_sessionmaker[AsyncSession] = async_sessionmaker(autocommit=False, 
bind=self.engine) async def close(self): if self.engine is None: raise HTTPException(status_code=404, message="service unavailable!") await self.engine.dispose() self.engine = None self._sessionmaker = None @contextlib.asynccontextmanager async def connection(self) -> AsyncIterator[AsyncConnection]: if self.engine is None: raise HTTPException(status_code=404, message="Service not found!") async with self.engine.begin() as connection: try: yield connection except SQLAlchemyError: await connection.rollback() logger.error("Connection error occured") raise HTTPException(status_code=404) @contextlib.asynccontextmanager async def session(self) -> AsyncIterator[AsyncSession]: if self._sessionmaker is None: logger.error("sessionmaker is not available!") raise HTTPException(status_code=404) session = self._sessionmaker() try: yield session except SQLAlchemyError as e: await session.rollback() logger.error(f"Session error could not be established {e}") raise HTTPException(status_code=404) sessionmanager = DataSessionManager() async def get_db_session(): async with sessionmanager.session() as session: yield session and the docker-compose file and docker file.. services: app: build: context: . dockerfile: Dockerfile ports: - "8000:8000" depends_on: - db environment: - DATABASE_URL=postgresql://admin:adminpass@db:5432/swifty_logs db: image: postgres:13 environment: - POSTGRES_USER=admin - POSTGRES_PASSWORD=adminpass - POSTGRES_DB=swifty_logs volumes: - postgres_data:/var/lib/postgresql/data volumes: postgres_data: # Use an official Python runtime as a parent image FROM python:latest # Set the working directory in the container WORKDIR /app # Copy the requirements file into the container COPY ./requirements.txt /app # Install any dependencies specified in requirements.txt RUN pip install --no-cache-dir --default-timeout=100 -r requirements.txt # Copy the rest of the application code into the container COPY . /app # Expose the port FastAPI will run on EXPOSE 8000 # Command to run the application with uvicorn CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"] after installing psycopg2... app-1 | sqlalchemy.exc.InvalidRequestError: The asyncio extension requires an async driver to be used. The loaded 'psycopg2' is not async. | You need to install an async driver for PostgreSQL, and tell SQLAlchemy to use it. asyncpg is one such driver. So after adding to your requirements file and installing it, you then have to tell SQLAlchemy to use it. You do this by modifying the database URL. ie. DATABASE_URL=postgresql+asyncpg://admin:adminpass@db:5432/swifty_logs ^^^^^^^^ | new bit SQLAlchemy docs on using different drivers for PostgreSQL -- https://docs.sqlalchemy.org/en/20/core/engines.html#postgresql | 1 | 3 |
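A minimal sketch of the fix the answer describes, reusing the question's own service names and credentials (admin, adminpass, db, swifty_logs) purely as placeholders: add asyncpg to requirements.txt, switch the URL to the asyncpg dialect, and the rest of the engine code can stay as it is.

# engine setup sketch -- assumes `asyncpg` has been added to requirements.txt
from sqlalchemy.ext.asyncio import AsyncEngine, create_async_engine

# The "+asyncpg" suffix tells SQLAlchemy to load the async driver instead of psycopg2
DATABASE_URL = "postgresql+asyncpg://admin:adminpass@db:5432/swifty_logs"

engine: AsyncEngine = create_async_engine(DATABASE_URL, echo=True)

The same "+asyncpg" suffix also has to go into the DATABASE_URL environment variable in docker-compose.yml, since that is where the application picks up the connection string.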
79,024,937 | 2024-9-25 | https://stackoverflow.com/questions/79024937/applying-function-to-filtered-columns-in-pandas | I have a Pandas Dataframe with 4 different columns: an ID, country, team and a color that is assigned to each player following a specific order. I want to create a new column that contains a number based on the team and the country that simply counts up following the color order, however colors may appear more than once per team. The column "ID" has to be simply sorted according to the alphabet, then the country column has to be filtered by country, then the script needs to check what teams are in what country and accordingly filter by team, then sort team by the color code and number the first team, then filter the country for the next team, sort again by color but CONTINUE the counting until all teams of a country are numbered. Then the next country gets filtered and the numbering starts again from 1 with the first team of that country. It sounds complicated and I have an example code here. I apologize, it is not small but I figure it needs to be of a certain size to make the problem more understandable. I used df = df.sort_values(by='ID') to sort the column ID by alphabet and I sorted the column 'color' by making it categorical using df['Color'] = pd.Categorical(df['Color'], colorcode) (similar to the custom sorting in Excel) I have added the column Result to the example which shows what I am trying to reach programmatically. It does not matter whether the Result numbers are integers or strings. Here the example: import pandas as pd pd.set_option('display.max_rows', None) pd.set_option('display.max_columns', None) colorcode = ['red', 'green', 'blue', 'yellow', 'white', 'grey', 'brown', 'violet', 'turquoise', 'black', 'orange', 'pink', 'red2', 'green2', 'blue2', 'yellow2', 'white2', 'grey2', 'brown2', 'violet2', 'turquoise2', 'black2', 'orange2', 'pink2'] data = { 'ID' : ['12318683-999', '12318683-001', '12318687-999', '12318687-001', '12318684-999', '12318684-001', '12318686-999', '12318686-001', '12318685-999', '12318685-001', '12319256-999', '12319256-004', '12319256-003', '12319256-002', '12319256-001', '12319255-999', '12319255-002', '12319255-001', '12317944-999', '12317944-009', '12317944-008', '12317944-007', '12317944-006', '12317944-005', '12317944-004', '12317944-003', '12317944-002', '12317944-010', '12317944-001', '12317942-006', '12317942-005', '12317942-004', '12317942-003', '12317942-002', '12317942-001', '12317943-006', '12317943-005', '12317943-004', '12317943-003', '12317943-002', '12317943-001', '12317941-999', '12317941-009', '12317941-008', '12317941-007', '12317941-006', '12317941-005', '12317941-004', '12317941-003', '12317941-002', '12317941-001', '12319261-999', '12319261-001', '12319260-999', '12319260-001', '12319259-999', '12319259-001', '12319095-999', '12319095-001', '12319258-999', '12319258-002', '12319258-001', '12319257-999', '12319257-001', '12319262-999', '12319262-003', '12319262-002', '12319262-001', '12319264-006', '12319264-005', '12319264-004', '12319264-003', '12319264-002', '12319264-001', '12319263-006', '12319263-005', '12319263-004', '12319263-003', '12319263-002', '12319263-001', '12318985-009', '12318985-008', '12318985-007', '12318985-006', '12318985-005', '12318985-004', '12318985-003', '12318985-002', '12318985-012', '12318985-011', '12318985-010', '12318985-001', '12318986-999', '12318986-004', '12318986-003', '12318986-002', '12318986-001', '12317719-999', '12317719-003', '12317719-002', '12317719-001', 
'12319310-999', '12319310-003', '12319310-002', '12319310-001', '12317718-999', '12317718-002', '12317718-001', '12319311-999', '12319311-001', '12317720-999', '12317720-001', '12319319-999', '12319319-008', '12319319-007', '12319319-006', '12319319-005', '12319319-004', '12319319-003', '12319319-002', '12319319-001', '12317721-999', '12317721-001', '12318721-999', '12318721-001', '12318716-999', '12318716-001', '12318724-999', '12318724-001', '12318725-999', '12318725-004', '12318725-003', '12318725-002', '12318725-001', '12318726-999', '12318726-001', '12318715-999', '12318715-001', '12318718-999', '12318718-001', '12319123-999', '12319123-003', '12319123-002', '12319123-001', '12318714-999', '12318714-001', '12319118-999', '12319118-002', '12319118-001', '12318713-999', '12318713-001', '12319121-999', '12319121-004', '12319121-003', '12319121-002', '12319121-001', '12318727-999', '12318727-001', '12319116-999', '12319116-003', '12319116-002', '12319116-001', '12319119-999', '12319119-002', '12319119-001', '12319120-999', '12319120-003', '12319120-002', '12319120-001', '12319304-999', '12319304-005', '12319304-004', '12319304-003', '12319304-002', '12319304-001', '12319122-999', '12319122-002', '12319122-001', '12319117-999', '12319117-005', '12319117-004', '12319117-003', '12319117-002', '12319117-001', '12319305-999', '12319305-001', '12319306-999', '12319306-001', '23149872-999', '23149872-002', '23149872-001', '12320092-999', '12320092-002', '12320092-001', '12320093-999', '12320093-002', '12320093-001', '12320095-999', '12320095-001', '12318669-999', '12318669-002', '12318669-001', '12318364-999', '12318364-001', '12318366-999', '12318366-001', '12318365-999', '12318365-001', '12318644-999', '12318644-001'], 'Country': ['UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'USA', 'USA', 'USA', 'USA', 'Germany', 'Germany', 'USA', 'USA', 'USA', 'Germany', 'Germany', 'USA', 'USA', 'USA', 'USA', 'USA', 'Germany', 'Germany', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'Germany', 'Germany', 'Germany', 'Germany', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'USA', 'USA'], 'Team' : ['Team6', 'Team6', 'Team6', 'Team6', 'Team6', 'Team6', 'Team6', 'Team6', 'Team6', 'Team6', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 
'Team4', 'Team4', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team3', 'Team3', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team1', 'Team1', 'Team1', 'Team1', 'Team1', 'Team1', 'Team1', 'Team1', 'Team1', 'Team1', 'Team1', 'Team1', 'Team1', 'Team1', 'Team1', 'Team1', 'Team1', 'Team3', 'Team3', 'Team3', 'Team3', 'Team1', 'Team1', 'Team3', 'Team3', 'Team3', 'Team1', 'Team1', 'Team3', 'Team3', 'Team3', 'Team3', 'Team3', 'Team1', 'Team1', 'Team3', 'Team3', 'Team3', 'Team3', 'Team3', 'Team3', 'Team3', 'Team3', 'Team3', 'Team3', 'Team3', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team3', 'Team3', 'Team3', 'Team3', 'Team3', 'Team3', 'Team3', 'Team3', 'Team3', 'Team2', 'Team2', 'Team2', 'Team2', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team4', 'Team4'], 'Color' : ['red', 'red', 'green', 'green', 'blue', 'blue', 'yellow', 'yellow', 'white', 'white', 'violet', 'violet', 'violet', 'violet', 'violet', 'brown', 'brown', 'brown', 'yellow', 'yellow', 'yellow', 'yellow', 'yellow', 'yellow', 'yellow', 'yellow', 'yellow', 'yellow', 'yellow', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'green', 'green', 'green', 'green', 'green', 'green', 'red', 'red', 'red', 'red', 'red', 'red', 'red', 'red', 'red', 'red', 'pink', 'pink', 'red-2', 'red-2', 'green-2', 'green-2', 'turquoise', 'turquoise', 'blue-2', 'blue-2', 'blue-2', 'yellow-2', 'yellow-2', 'turquoise', 'turquoise', 'turquoise', 'turquoise', 'orange', 'orange', 'orange', 'orange', 'orange', 'orange', 'black', 'black', 'black', 'black', 'black', 'black', 'grey', 'grey', 'grey', 'grey', 'grey', 'grey', 'grey', 'grey', 'grey', 'grey', 'grey', 'grey', 'white', 'white', 'white', 'white', 'white', 'grey', 'grey', 'grey', 'grey', 'green', 'green', 'green', 'green', 'white', 'white', 'white', 'brown', 'brown', 'yellow', 'yellow', 'red', 'red', 'red', 'red', 'red', 'red', 'red', 'red', 'red', 'blue', 'blue', 'grey', 'grey', 'white', 'white', 'yellow', 'yellow', 'blue', 'blue', 'blue', 'blue', 'blue', 'black', 'black', 'turquoise', 'turquoise', 'red', 'red', 'red', 'red', 'red', 'red', 'green', 'green', 'green', 'green', 'green', 'violet', 'violet', 'blue', 'blue', 'blue', 'blue', 'blue', 'brown', 'brown', 'yellow', 'yellow', 'yellow', 'yellow', 'white', 'white', 'white', 'grey', 'grey', 'grey', 'grey', 'black', 'black', 'black', 'black', 'black', 'black', 'brown', 'brown', 'brown', 'violet', 'violet', 'violet', 'violet', 'violet', 'violet', 'turquoise', 'turquoise', 'violet', 'violet', 'grey', 'grey', 'grey', 'white', 'white', 'white', 'yellow', 'yellow', 'yellow', 'blue', 'blue', 'green', 
'green', 'green', 'turquoise', 'turquoise', 'violet', 'violet', 'brown', 'brown', 'red', 'red'], 'Result' : ['10', '10', '11', '11', '12', '12', '13', '13', '14', '14', '17', '17', '17', '17', '17', '16', '16', '16', '4', '4', '4', '4', '4', '4', '4', '4', '4', '4', '4', '3', '3', '3', '3', '3', '3', '2', '2', '2', '2', '2', '2', '1', '1', '1', '1', '1', '1', '1', '1', '1', '1', '21', '21', '22', '22', '23', '23', '9', '9', '24', '24', '24', '25', '25', '18', '18', '18', '18', '20', '20', '20', '20', '20', '20', '19', '19', '19', '19', '19', '19', '6', '6', '6', '6', '6', '6', '6', '6', '6', '6', '6', '6', '5', '5', '5', '5', '5', '16', '16', '16', '16', '12', '12', '12', '12', '15', '15', '15', '17', '17', '14', '14', '11', '11', '11', '11', '11', '11', '11', '11', '11', '13', '13', '6', '6', '5', '5', '4', '4', '3', '3', '3', '3', '3', '10', '10', '9', '9', '1', '1', '1', '1', '1', '1', '2', '2', '2', '2', '2', '8', '8', '3', '3', '3', '3', '3', '7', '7', '4', '4', '4', '4', '5', '5', '5', '6', '6', '6', '6', '20', '20', '20', '20', '20', '20', '7', '7', '7', '8', '8', '8', '8', '8', '8', '19', '19', '18', '18', '15', '15', '15', '14', '14', '14', '13', '13', '13', '12', '12', '11', '11', '11', '9', '9', '8', '8', '7', '7', '10', '10'] } df = pd.DataFrame(data) df = df.sort_values(by='ID') # This line sorts the column ID by alphabet df['Color'] = pd.Categorical(df['Color'], colorcode) df = pd.DataFrame(data) print(df) My problem is that I cannot figure out how to filter the columns (first Country, then Team) and then count up according to the color, starting from 1 for red and not start at 1 again for the next team as long as I am still in the same country. | Convert the color column to ordered categorical type then group the dataframe by the combination of Country, Team and Color then use ngroup to assign ordered group numbers to each unique combination. 
Then for each Country rank the group numbers using dense method df['Color'] = pd.Categorical(df['Color'], colorcode, ordered=True) df['Rank'] = df.groupby(['Country', 'Team', 'Color'], sort=True, dropna=False).ngroup() df['Rank'] = df.groupby(['Country'])['Rank'].rank(method='dense').astype(int) Result ID Country Team Color Result Rank 139 12318718-001 Germany Team1 red 1 1 138 12318718-999 Germany Team1 red 1 1 144 12318714-999 Germany Team1 green 2 2 145 12318714-001 Germany Team1 green 2 2 129 12318725-999 Germany Team1 blue 3 3 130 12318725-004 Germany Team1 blue 3 3 131 12318725-003 Germany Team1 blue 3 3 132 12318725-002 Germany Team1 blue 3 3 133 12318725-001 Germany Team1 blue 3 3 127 12318724-999 Germany Team1 yellow 4 4 128 12318724-001 Germany Team1 yellow 4 4 126 12318716-001 Germany Team1 white 5 5 125 12318716-999 Germany Team1 white 5 5 123 12318721-999 Germany Team1 grey 6 6 124 12318721-001 Germany Team1 grey 6 6 156 12318727-999 Germany Team1 brown 7 7 157 12318727-001 Germany Team1 brown 7 7 149 12318713-999 Germany Team1 violet 8 8 150 12318713-001 Germany Team1 violet 8 8 136 12318715-999 Germany Team1 turquoise 9 9 137 12318715-001 Germany Team1 turquoise 9 9 134 12318726-999 Germany Team1 black 10 10 135 12318726-001 Germany Team1 black 10 10 115 12319319-006 Germany Team2 red 11 11 113 12319319-008 Germany Team2 red 11 11 112 12319319-999 Germany Team2 red 11 11 120 12319319-001 Germany Team2 red 11 11 119 12319319-002 Germany Team2 red 11 11 118 12319319-003 Germany Team2 red 11 11 117 12319319-004 Germany Team2 red 11 11 116 12319319-005 Germany Team2 red 11 11 114 12319319-007 Germany Team2 red 11 11 104 12319310-001 Germany Team2 green 12 12 102 12319310-003 Germany Team2 green 12 12 ... | 2 | 1 |
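A minimal illustration of the ngroup-plus-dense-rank mechanics on a toy frame (the rows below are invented, not the question's data): ngroup assigns one ordered id per (Country, Team, Color) combination, and the dense rank then restarts the numbering at 1 inside each Country while keeping the color order.

import pandas as pd

colorcode = ['red', 'green', 'blue']
toy = pd.DataFrame({
    'Country': ['UK', 'UK', 'UK', 'USA', 'USA'],
    'Team': ['Team1', 'Team1', 'Team2', 'Team1', 'Team1'],
    'Color': ['green', 'red', 'red', 'blue', 'red'],
})
toy['Color'] = pd.Categorical(toy['Color'], colorcode, ordered=True)

# One ordered group id per (Country, Team, Color) combination
toy['Rank'] = toy.groupby(['Country', 'Team', 'Color'], sort=True, dropna=False).ngroup()

# Dense rank compresses gaps and restarts at 1 within each Country
toy['Rank'] = toy.groupby('Country')['Rank'].rank(method='dense').astype(int)
print(toy)  # UK: Team1 red->1, Team1 green->2, Team2 red->3; USA: red->1, blue->2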