Dataset columns (observed value ranges):
question_id: int64, 59.5M to 79.6M
creation_date: string (date), 2020-01-01 00:00:00 to 2025-05-18 00:00:00
link: string, length 60 to 163
question: string, length 53 to 28.9k
accepted_answer: string, length 26 to 29.3k
question_vote: int64, 1 to 410
answer_vote: int64, -9 to 482
59,617,184
2020-1-6
https://stackoverflow.com/questions/59617184/why-are-some-python-exceptions-lower-case
In Python, exceptions are classes and cased as such. For example: OSError. However, there are some exceptions, such as those in the socket module, that are named in lower-case. For example: socket.timeout, socket.error. Why is this?
According to the docs, exception socket.error is "A deprecated alias of OSError. Changed in version 3.3: Following PEP 3151, this class was made an alias of OSError." PEP 3151 says that "while standard exception types live in the root namespace, they are visually distinguished by the fact that they use the CamelCase convention, while almost all other builtins use lowercase naming (except True, False, None, Ellipsis and NotImplemented)".
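The aliasing can be confirmed directly from an interpreter; a minimal sketch:

import socket

# socket.error is literally the same object as OSError since Python 3.3,
# and socket.timeout is an OSError-based exception as well.
print(socket.error is OSError)              # True
print(issubclass(socket.timeout, OSError))  # True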
9
7
59,572,706
2020-1-3
https://stackoverflow.com/questions/59572706/django-collectstatic-not-working-on-production-with-s3-but-same-settings-work-l
I've been moving around some settings to make more defined local and production environments, and I must have messed something up. Below are the majority of relevant settings. If I move the production.py settings (which just contains AWS-related settings at the moment) to base.py, I can update S3 from my local machine just fine. Similarly, if I keep those AWS settings in base.py and push to production, S3 updates appropriately. In addition, if I print something from production.py, it does print. However, if I make production.py my "local" settings on manage.py, or when I push to Heroku with the settings as seen below, S3 is not updating. What about my settings is incorrect? (Well, I'm sure a few things, but specifically causing S3 not to update?) Here's some relevant code: __init__.py (in the directory with base, local, and production) from cobev.settings.base import * base.py INSTALLED_APPS = [ ... 'whitenoise.runserver_nostatic', 'django.contrib.staticfiles', ... 'storages', ] ... STATIC_URL = '/static/' STATICFILES_DIRS = [os.path.join(BASE_DIR, "global_static"), os.path.join(BASE_DIR, "media", ) ] MEDIA_ROOT = os.path.join(BASE_DIR, 'media/') MEDIA_URL = '/media/' local.py # local_settings.py from .base import * ... STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles') production.py from .base import * # AWS Settings AWS_ACCESS_KEY_ID = config('AWS_ACCESS_KEY_ID') AWS_SECRET_ACCESS_KEY = config('AWS_SECRET_ACCESS_KEY') AWS_STORAGE_BUCKET_NAME = 'cobev' AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME AWS_S3_OBJECT_PARAMETERS = { 'CacheControl': 'max-age=86400', } AWS_LOCATION = 'static' AWS_DEFAULT_ACL = 'public-read' STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage' DEFAULT_FILE_STORAGE = 'cobev.storage_backends.MediaStorage' STATIC_URL = 'https://%s/%s/' % (AWS_S3_CUSTOM_DOMAIN, AWS_LOCATION) ADMIN_MEDIA_PREFIX = STATIC_URL + 'admin/' # End AWS wsgi.py import os from django.core.wsgi import get_wsgi_application os.environ.setdefault("DJANGO_SETTINGS_MODULE", "cobev.settings.production") application = get_wsgi_application() from whitenoise.django import DjangoWhiteNoise application = DjangoWhiteNoise(application) manage.py #!/usr/bin/env python import os import sys if __name__ == "__main__": os.environ.setdefault("DJANGO_SETTINGS_MODULE", "cobev.settings.local") try: from django.core.management import execute_from_command_line except ImportError as exc: raise ImportError( "Couldn't import Django. Are you sure it's installed and " "available on your PYTHONPATH environment variable? Did you " "forget to activate a virtual environment?" ) from exc execute_from_command_line(sys.argv)
Ok, let me try. As discovered in the comments on the question, you update S3 using collectstatic, but that is a management command run through the manage.py file, where you set cobev.settings.local as the settings module, which is not the same as the cobev.settings.production used by wsgi.py. I think you should manage your settings module the normal Django way, through the OS environment variable named DJANGO_SETTINGS_MODULE. You should be able to set it in any production environment you are running.
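As a minimal sketch of that approach (the Heroku command and module paths below mirror the question's layout and are only an assumption about your setup), manage.py can keep a local fallback while production exports the variable:

import os
import sys

if __name__ == "__main__":
    # setdefault only applies when DJANGO_SETTINGS_MODULE is not already set,
    # so exporting it in production, e.g.
    #   heroku config:set DJANGO_SETTINGS_MODULE=cobev.settings.production
    # wins there, while local runs fall back to cobev.settings.local.
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "cobev.settings.local")
    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)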
7
2
59,596,261
2020-1-5
https://stackoverflow.com/questions/59596261/return-or-yield-from-a-function-that-calls-a-generator
I have a generator generator and also a convenience method to it - generate_all. def generator(some_list): for i in some_list: yield do_something(i) def generate_all(): some_list = get_the_list() return generator(some_list) # <-- Is this supposed to be return or yield? Should generate_all return or yield? I want the users of both methods to use it the same, i.e. for x in generate_all() should be equal to some_list = get_the_list() for x in generate(some_list)
Generators are lazy-evaluating so return or yield will behave differently when you're debugging your code or if an exception is thrown. With return any exception that happens in your generator won't know anything about generate_all, that's because when generator is really executed you have already left the generate_all function. With yield in there it will have generate_all in the traceback. def generator(some_list): for i in some_list: raise Exception('exception happened :-)') yield i def generate_all(): some_list = [1,2,3] return generator(some_list) for item in generate_all(): ... Exception Traceback (most recent call last) <ipython-input-3-b19085eab3e1> in <module> 8 return generator(some_list) 9 ---> 10 for item in generate_all(): 11 ... <ipython-input-3-b19085eab3e1> in generator(some_list) 1 def generator(some_list): 2 for i in some_list: ----> 3 raise Exception('exception happened :-)') 4 yield i 5 Exception: exception happened :-) And if it's using yield from: def generate_all(): some_list = [1,2,3] yield from generator(some_list) for item in generate_all(): ... Exception Traceback (most recent call last) <ipython-input-4-be322887df35> in <module> 8 yield from generator(some_list) 9 ---> 10 for item in generate_all(): 11 ... <ipython-input-4-be322887df35> in generate_all() 6 def generate_all(): 7 some_list = [1,2,3] ----> 8 yield from generator(some_list) 9 10 for item in generate_all(): <ipython-input-4-be322887df35> in generator(some_list) 1 def generator(some_list): 2 for i in some_list: ----> 3 raise Exception('exception happened :-)') 4 yield i 5 Exception: exception happened :-) However this comes at the cost of performance. The additional generator layer does have some overhead. So return will be generally a bit faster than yield from ... (or for item in ...: yield item). In most cases this won't matter much, because whatever you do in the generator typically dominates the run-time so that the additional layer won't be noticeable. However yield has some additional advantages: You aren't restricted to a single iterable, you can also easily yield additional items: def generator(some_list): for i in some_list: yield i def generate_all(): some_list = [1,2,3] yield 'start' yield from generator(some_list) yield 'end' for item in generate_all(): print(item) start 1 2 3 end In your case the operations are quite simple and I don't know if it's even necessary to create multiple functions for this, one could easily just use the built-in map or a generator expression instead: map(do_something, get_the_list()) # map (do_something(i) for i in get_the_list()) # generator expression Both should be identical (except for some differences when exceptions happen) to use. And if they need a more descriptive name, then you could still wrap them in one function. There are multiple helpers that wrap very common operations on iterables built-in and further ones can be found in the built-in itertools module. In such simple cases I would simply resort to these and only for non-trivial cases write your own generators. But I assume your real code is more complicated so that may not be applicable but I thought it wouldn't be a complete answer without mentioning alternatives.
33
15
59,600,000
2020-1-5
https://stackoverflow.com/questions/59600000/none-propagation-in-python-chained-attribute-access
Is there a null propagation operator ("null-aware member access" operator) in Python so I could write something like var = object?.children?.grandchildren?.property as in C#, VB.NET and TypeScript, instead of var = None if not myobject\ or not myobject.children\ or not myobject.children.grandchildren\ else myobject.children.grandchildren.property
No. There is a PEP proposing the addition of such operators but it has not (yet) been accepted. In particular, one of the operators proposed in PEP 505 is The "None-aware attribute access" operator ?. ("maybe dot") evaluates the complete expression if the left hand side evaluates to a value that is not None
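Until something like PEP 505 is accepted, a plain-Python helper is a common workaround. The sketch below is only an illustration; safe_getattr is a made-up name, and myobject is the object from the question:

def safe_getattr(obj, *attrs):
    # Walk the attribute chain, stopping at the first None, roughly
    # emulating obj?.a?.b?.c from the question.
    for attr in attrs:
        if obj is None:
            return None
        obj = getattr(obj, attr)
    return obj

var = safe_getattr(myobject, "children", "grandchildren", "property")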
14
31
59,587,183
2020-1-4
https://stackoverflow.com/questions/59587183/python-dynamodb-expressionattributevalues-contains-invalid-key-syntax-error-ke
Trying to do an update_item which is supposed to create new attributes if it doesn't find existing ones (according to documentation) but I am getting a Syntax error. I have been wracking my brain all day trying to figure out why I am getting this and I can't seem to get past this. Thank you for any help. Error I am getting: ClientError: An error occurred (ValidationException) when calling the UpdateItem operation: ExpressionAttributeValues contains invalid key: Syntax error; key: "var4" My code: dynamodb = boto3.resource('dynamodb') table = dynamodb.Table('contacts') table.update_item( Key={'email': emailID}, UpdateExpression='SET last_name = :var0, address_1_state = :var1, email_2 = :var2, phone = :var3, phone_2 = :var4', ExpressionAttributeValues={ 'var0': 'Metzger', 'var1': 'CA', 'var2': 'none', 'var3': '949 302-9072', 'var4': '818-222-2311' } )
Just change that section to the following, so that every key starts with a colon and matches the placeholders in the UpdateExpression: ExpressionAttributeValues={ ':var0': 'Metzger', ':var1': 'CA', ':var2': 'none', ':var3': '949 302-9072', ':var4': '818-222-2311' } Hope the code will work then :)
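A sketch of the full corrected call, reusing the names from the question (emailID is defined elsewhere in the question's code):

import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('contacts')

table.update_item(
    Key={'email': emailID},
    # Every placeholder key in ExpressionAttributeValues must start with ':'
    # so it matches the ':varN' placeholders in UpdateExpression.
    UpdateExpression='SET last_name = :var0, address_1_state = :var1, '
                     'email_2 = :var2, phone = :var3, phone_2 = :var4',
    ExpressionAttributeValues={
        ':var0': 'Metzger',
        ':var1': 'CA',
        ':var2': 'none',
        ':var3': '949 302-9072',
        ':var4': '818-222-2311',
    },
)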
8
28
59,587,631
2020-1-4
https://stackoverflow.com/questions/59587631/use-center-diverging-colormap-in-a-pandas-dataframe-heatmap-display
I would like to use a diverging colormap to color the background of a pandas dataframe. The aspect that makes this trickier than one would think is the centering. In the example below, a red to blue colormap is used, but the middle of the colormap isn't used for values around zero. How to create a centered background color display where zero is white, all negatives are a red hue, and all positives are a blue hue? import pandas as pd import numpy as np import seaborn as sns np.random.seed(24) df = pd.DataFrame() df = pd.concat([df, pd.DataFrame(np.random.randn(10, 4)*10, columns=list('ABCD'))], axis=1) df.iloc[0, 2] = 0.0 cm = sns.diverging_palette(5, 250, as_cmap=True) df.style.background_gradient(cmap=cm).set_precision(2) The zero in the above display has a red hue and the closest to white background is used for a negative number.
import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt from matplotlib import colors np.random.seed(24) df = pd.DataFrame() df = pd.concat([df, pd.DataFrame(np.random.randn(10, 4)*10, columns=list('ABCD'))], axis=1) df.iloc[0, 2] = 0.0 cm = sns.diverging_palette(5, 250, as_cmap=True) def background_gradient(s, m, M, cmap='PuBu', low=0, high=0): rng = M - m norm = colors.Normalize(m - (rng * low), M + (rng * high)) normed = norm(s.values) c = [colors.rgb2hex(x) for x in plt.cm.get_cmap(cmap)(normed)] return ['background-color: %s' % color for color in c] even_range = np.max([np.abs(df.values.min()), np.abs(df.values.max())]) df.style.apply(background_gradient, cmap=cm, m=-even_range, M=even_range).set_precision(2)
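Depending on your pandas version there may be a shorter route: newer releases let Styler.background_gradient accept vmin/vmax directly, so the custom styling function can be skipped. Treat this as a version-dependent sketch and check your pandas docs before relying on it:

# Center the diverging colormap by passing a symmetric range straight in.
even_range = np.max([np.abs(df.values.min()), np.abs(df.values.max())])
df.style.background_gradient(cmap=cm, vmin=-even_range, vmax=even_range).set_precision(2)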
7
5
59,594,516
2020-1-4
https://stackoverflow.com/questions/59594516/how-to-sample-from-pandas-dataframe-while-keeping-row-order
Given any DataFrame 2-dimensional, you can call eg. df.sample(frac=0.3) to retrieve a sample. But this sample will have completely shuffled row order. Is there a simple way to get a subsample that preserves the row order?
What we can do instead is use df.sample() and then sort the resulting sample by its index, which restores the original row order. Appending the sort_index() call does the trick. Here's my code: df = pd.DataFrame(np.random.randn(100, 10)) result = df.sample(frac=0.3).sort_index() The index is sorted in ascending order by default. Documentation here.
10
9
59,589,494
2020-1-4
https://stackoverflow.com/questions/59589494/how-do-i-change-the-index-values-of-a-pandas-series
How can I change the index values of a Pandas Series from the regular integer value that they default to, to values within a list that I have? e.g. x = pd.Series([421, 122, 275, 847, 175]) index_values = ['2014-01-01', '2014-01-02', '2014-01-03', '2014-01-04', '2014-01-05'] How do I get the dates in the index_values list to be the indexes in the Series that I've created?
You can assign index values by list: x.index = index_values print(x) 2014-01-01 421 2014-01-02 122 2014-01-03 275 2014-01-04 847 2014-01-05 175 dtype: int64
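If the labels should actually behave like dates (date slicing, resampling, plotting), an optional extra step is to convert them to a DatetimeIndex; a small sketch:

import pandas as pd

x = pd.Series([421, 122, 275, 847, 175])
index_values = ['2014-01-01', '2014-01-02', '2014-01-03', '2014-01-04', '2014-01-05']
x.index = pd.to_datetime(index_values)  # DatetimeIndex instead of plain strings
print(x.index.dtype)  # datetime64[ns]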
40
16
59,582,142
2020-1-3
https://stackoverflow.com/questions/59582142/import-cannot-import-name-resolveinfo-from-graphql-error-when-using-newest
I am having some issues with my Django app since updating my dependencies. Here are my installed apps: INSTALLED_APPS = [ 'graphene_django', 'rest_framework', 'corsheaders', 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'dojo_manager.dojo', ] and my requirements.txt: aniso8601==8.0.0 asgiref==3.2.3 Django==3.0.2 django-cors-headers==3.2.0 django-filter==2.2.0 django-graphql-jwt==0.3.0 djangorestframework==3.11.0 djangorestframework-jwt==1.11.0 graphene==2.1.8 graphene-django==2.8.0 graphene-django-extras==0.4.8 graphql-core==3.0.1 graphql-relay==3.0.0 pip-upgrade-outdated==1.5 pipupgrade==1.5.2 promise==2.3 PyJWT==1.7.1 python-dateutil==2.8.1 pytz==2019.3 Rx==3.0.1 singledispatch==3.4.0.3 six==1.13.0 sqlparse==0.3.0 I am getting ImportError: cannot import name 'ResolveInfo' from 'graphql' (E:\Ben\GitHub-Repos\dojo-manager\env\lib\site-packages\graphql\__init__.py) I am aware of https://github.com/graphql-python/graphene-django/issues/737 and https://github.com/graphql-python/graphene/issues/546, none of which seem to solve it in my case. Any help greatly appreciated.
OK, I was able to fix it by downgrading graphql-core==3.0.1 to graphql-core<3 (and all the dependencies that pull it in). I must have missed the errors when running pip install -r requirements.txt.
10
8
59,588,886
2020-1-4
https://stackoverflow.com/questions/59588886/why-isnt-pip-installing-the-latest-version-of-a-package-even-when-a-newer-vers
I was trying to upgrade to the latest version of a package I had installed with pip, but for some reason it won't get the latest version. I've tried uninstalling the package in question, or even reinstalling pip entirely, but it still refuses to get the latest version from PyPI. When I try to pin the package version (e.g. pip install package==0.10.0) it says that it "Could not find a version that satisfies the requirement package==0.10.0 (from versions: ...)" pip search package even acknowledges that the installed version isn't the latest, labeling the two versions for me. I've seen other questions with external files or local versions, but I've tried the respective solutions (--allow-external doesn't exist anymore, and --no-cache-dir doesn't help) and I'm still stuck on the older version.
I was trying to upgrade Quart. Maybe other packages have something else going on. In this particular case, Quart had dropped support for Python 3.6 (the version I had installed) and only supported 3.7 or later. (This was a fairly recent change to the project, so I just didn't see the news.) However, when attempting to install a package only supported by a later Python, pip doesn't really explain why it couldn't find a version to satisfy the requirement - instead, it just lists all the versions that should work with the current Python, without indicating that more exist and just can't be installed. The only real options to fix are: Update your Python to meet the package's requirements Ask/help the maintainer to backport the package to the version you have.
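If you want to confirm that requires-python metadata is the reason a release is hidden, one way is to list each release's requires_python field via PyPI's public JSON API; a sketch (the package name quart matches the example above):

import json
from urllib.request import urlopen

with urlopen("https://pypi.org/pypi/quart/json") as resp:
    data = json.load(resp)

# Releases whose constraint excludes your interpreter are the ones pip
# silently leaves out of "from versions: ...".
for version, files in sorted(data["releases"].items()):
    if files:  # some releases have no uploaded files
        print(version, files[0].get("requires_python"))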
8
7
59,583,726
2020-1-3
https://stackoverflow.com/questions/59583726/django-importerror-cannot-import-name-python-2-unicode-compatible
I'm building a website and I was trying to create a custom user-to-user messaging system so I installed django-messages and maybe a few other things, and suddenly when I tried to run my server I get the following error : Watching for file changes with StatReloader Exception in thread django-main-thread: Traceback (most recent call last): File "C:\Users\mertz\AppData\Local\Programs\Python\Python38\lib\threading.py", line 932, in _bootstrap_inner self.run() File "C:\Users\mertz\AppData\Local\Programs\Python\Python38\lib\threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "C:\Users\mertz\AppData\Local\Programs\Python\Python38\lib\site-packages\django\utils\autoreload.py", line 53, in wrapper fn(*args, **kwargs) File "C:\Users\mertz\AppData\Local\Programs\Python\Python38\lib\site-packages\django\core\management\commands\runserver.py", line 109, in inner_run autoreload.raise_last_exception() File "C:\Users\mertz\AppData\Local\Programs\Python\Python38\lib\site-packages\django\utils\autoreload.py", line 76, in raise_last_exception raise _exception[1] File "C:\Users\mertz\AppData\Local\Programs\Python\Python38\lib\site-packages\django\core\management\__init__.py", line 357, in execute autoreload.check_errors(django.setup)() File "C:\Users\mertz\AppData\Local\Programs\Python\Python38\lib\site-packages\django\utils\autoreload.py", line 53, in wrapper fn(*args, **kwargs) File "C:\Users\mertz\AppData\Local\Programs\Python\Python38\lib\site-packages\django\__init__.py", line 24, in setup apps.populate(settings.INSTALLED_APPS) File "C:\Users\mertz\AppData\Local\Programs\Python\Python38\lib\site-packages\django\apps\registry.py", line 114, in populate app_config.import_models() File "C:\Users\mertz\AppData\Local\Programs\Python\Python38\lib\site-packages\django\apps\config.py", line 211, in import_models self.models_module = import_module(models_module_name) File "C:\Users\mertz\AppData\Local\Programs\Python\Python38\lib\importlib\__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 783, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "C:\Users\mertz\AppData\Local\Programs\Python\Python38\lib\site-packages\django_messages\models.py", line 9, in <module> from django.utils.encoding import python_2_unicode_compatible ImportError: cannot import name 'python_2_unicode_compatible' from 'django.utils.encoding' (C:\Users\mertz\AppData\Local\Programs\Python\Python38\lib\site-packages\django\utils\encoding.py) It sounds like chinese to me, I don't understand a single line of this error, can someone help me ? I've made a few researchs, it looks like it's related to a package name six but I was not able to find a solution. I can provide you my code if it's needed but I don't know which file you need so feel free to ask for a file in the comments and I will edit my post accordingly. Thanks in advance !
You are using Django 3, where all the Python 2 compatibility APIs that used to be bundled with Django were removed. django-messages still depends on these, and is trying and failing to import them. You either need to downgrade to Django 2.2, or wait for django-messages to be updated for Django 3 support. This applies to any library that gives you such errors: it means the library is not yet compatible with Django 3.
14
17
59,585,624
2020-1-3
https://stackoverflow.com/questions/59585624/vectorize-a-for-loop-in-numpy-to-calculate-duct-tape-overlaping
I'm creating an application with python to calculate duct-tape overlapping (modeling a dispenser applies a product on a rotating drum). I have a program that works correctly, but is really slow. I'm looking for a solution to optimize a for loop used to fill a numpy array. Could someone help me vectorize the code below? import numpy as np import matplotlib.pyplot as plt # Some parameters width = 264 bbddiam = 940 accuracy = 4 #2 points per pixel drum = np.zeros(accuracy**2 * width * bbddiam).reshape((bbddiam * accuracy , width * accuracy)) # The "slow" function def line_mask(drum, coef, intercept, upper=True, accuracy=accuracy): """Masks a half of the array""" to_return = np.zeros(drum.shape) for index, v in np.ndenumerate(to_return): if upper == True: if index[0] * coef + intercept > index[1]: to_return[index] = 1 else: if index[0] * coef + intercept <= index[1]: to_return[index] = 1 return to_return def get_band(drum, coef, intercept, bandwidth): """Calculate a ribbon path on the drum""" to_return = np.zeros(drum.shape) t1 = line_mask(drum, coef, intercept + bandwidth / 2, upper=True) t2 = line_mask(drum, coef, intercept - bandwidth / 2, upper=False) to_return = t1 + t2 return np.where(to_return == 2, 1, 0) single_band = get_band(drum, 1 / 10, 130, bandwidth=15) # Visualize the result ! plt.imshow(single_band) plt.show() Numba does wonders for my code, reducing runtime from 5.8s to 86ms (special thanks to @Maarten-vd-Sande): from numba import jit @jit(nopython=True, parallel=True) def line_mask(drum, coef, intercept, upper=True, accuracy=accuracy): ... A better solution with numpy is still welcome ;-)
There is no need for any looping at all here. You have effectively two different line_mask functions. Neither needs to be looped explicitly, but you would probably get a significant speedup just from rewriting it with a pair of for loops in an if and else, rather than an if and else in a for loop, which gets evaluated many many times. The really numpythonic thing to do is to properly vectorize your code to operate on entire arrays without any loops. Here is a vectorized version of line_mask: def line_mask(drum, coef, intercept, upper=True, accuracy=accuracy): """Masks a half of the array""" r = np.arange(drum.shape[0]).reshape(-1, 1) c = np.arange(drum.shape[1]).reshape(1, -1) comp = c.__lt__ if upper else c.__ge__ return comp(r * coef + intercept) Setting up the shapes of r and c to be (m, 1) and (1, n) so that the result is (m, n) is called broadcasting, and is the staple of vectorization in numpy. The result of the updated line_mask is a boolean mask (as the name implies) rather than a float array. This makes it smaller, and hopefully bypasses float operations entirely. You can now rewrite get_band to use masking instead of addition: def get_band(drum, coef, intercept, bandwidth): """Calculate a ribbon path on the drum""" t1 = line_mask(drum, coef, intercept + bandwidth / 2, upper=True) t2 = line_mask(drum, coef, intercept - bandwidth / 2, upper=False) return t1 & t2 The remainder of the program should stay the same, since these functions preserve all the interfaces. If you want, you can rewrite most of your program in three (still somewhat legible) lines: coeff = 1/10 intercept = 130 bandwidth = 15 r, c = np.ogrid[:drum.shape[0], :drum.shape[1]] check = r * coeff + intercept single_band = ((check + bandwidth / 2 > c) & (check - bandwidth / 2 <= c))
7
8
59,582,663
2020-1-3
https://stackoverflow.com/questions/59582663/cnn-pytorch-error-input-type-torch-cuda-bytetensor-and-weight-type-torch-cu
I'm receiving the error, Input type (torch.cuda.ByteTensor) and weight type (torch.cuda.FloatTensor) should be the same Following is my code, device = torch.device('cuda:0') trainData = torchvision.datasets.FashionMNIST('/content/', train=True, transform=None, target_transform=None, download=True) testData = torchvision.datasets.FashionMNIST('/content/', train=False, transform=None, target_transform=None, download=True) class Net(nn.Module): def __init__(self): super().__init__() ''' Network Structure: input > (1)Conv2D > (2)MaxPool2D > (3)Conv2D > (4)MaxPool2D > (5)Conv2D > (6)MaxPool2D > (7)Linear > (8)LinearOut ''' # Creating the convulutional Layers self.conv1 = nn.Conv2d(in_channels=CHANNELS, out_channels=32, kernel_size=3) self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3) self.conv3 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3) self.flatten = None # Creating a Random dummy sample to get the Flattened Dimensions x = torch.randn(CHANNELS, DIM, DIM).view(-1, CHANNELS, DIM, DIM) x = self.convs(x) # Creating the Linear Layers self.fc1 = nn.Linear(self.flatten, 512) self.fc2 = nn.Linear(512, CLASSES) def convs(self, x): # Creating the MaxPooling Layers x = F.max_pool2d(F.relu(self.conv1(x)), kernel_size=(2, 2)) x = F.max_pool2d(F.relu(self.conv2(x)), kernel_size=(2, 2)) x = F.max_pool2d(F.relu(self.conv3(x)), kernel_size=(2, 2)) if not self.flatten: self.flatten = x[0].shape[0] * x[0].shape[1] * x[0].shape[2] return x # FORWARD PASS def forward(self, x): x = self.convs(x) x = x.view(-1, self.flatten) sm = F.relu(self.fc1(x)) x = F.softmax(self.fc2(sm), dim=1) return x, sm x_train, y_train = training_set x_train, y_train = x_train.to(device), y_train.to(device) optimizer = optim.Adam(net.parameters(), lr=LEARNING_RATE) loss_func = nn.MSELoss() loss_log = [] for epoch in range(EPOCHS): for i in tqdm(range(0, len(x_train), BATCH_SIZE)): x_batch = x_train[i:i+BATCH_SIZE].view(-1, CHANNELS, DIM, DIM).to(device) y_batch = y_train[i:i+BATCH_SIZE].to(device) net.zero_grad() output, sm = net(x_batch) loss = loss_func(output, y_batch.float()) loss.backward() optimizer.step() loss_log.append(loss) # print(f"Epoch : {epoch} || Loss : {loss}") return loss_log train_set = (trainData.train_data, trainData.train_labels) test_set = (testData.test_data, testData.test_labels) EPOCHS = 5 LEARNING_RATE = 0.001 BATCH_SIZE = 32 net = Net().to(device) loss_log = train(net, train_set, EPOCHS, LEARNING_RATE, BATCH_SIZE) And this is the Error that I'm getting, RuntimeError Traceback (most recent call last) <ipython-input-8-0db1a1b4e37d> in <module>() 5 net = Net().to(device) 6 ----> 7 loss_log = train(net, train_set, EPOCHS, LEARNING_RATE, BATCH_SIZE) 6 frames <ipython-input-6-7de4a78e3736> in train(net, training_set, EPOCHS, LEARNING_RATE, BATCH_SIZE) 13 14 net.zero_grad() ---> 15 output, sm = net(x_batch) 16 loss = loss_func(output, y_batch.float()) 17 loss.backward() /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) <ipython-input-5-4fddc427892a> in forward(self, x) 41 # FORWARD PASS 42 def forward(self, x): ---> 43 x = self.convs(x) 44 x = x.view(-1, self.flatten) 45 sm = F.relu(self.fc1(x)) <ipython-input-5-4fddc427892a> in convs(self, x) 31 32 # Creating the MaxPooling Layers ---> 33 x = F.max_pool2d(F.relu(self.conv1(x)), 
kernel_size=(2, 2)) 34 x = F.max_pool2d(F.relu(self.conv2(x)), kernel_size=(2, 2)) 35 x = F.max_pool2d(F.relu(self.conv3(x)), kernel_size=(2, 2)) /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in forward(self, input) 343 344 def forward(self, input): --> 345 return self.conv2d_forward(input, self.weight) 346 347 class Conv3d(_ConvNd): /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in conv2d_forward(self, input, weight) 340 _pair(0), self.dilation, self.groups) 341 return F.conv2d(input, weight, self.bias, self.stride, --> 342 self.padding, self.dilation, self.groups) 343 344 def forward(self, input): RuntimeError: Input type (torch.cuda.ByteTensor) and weight type (torch.cuda.FloatTensor) should be the same I double-checked that my Neural Net and my Inputs both are in GPU. I'm still getting this error and I don't understand why! Somebody, please help me to get out of this error.
Cast your input x_batch to float. Use x_batch = x_batch.float() before you pass it through your model.
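A sketch of where that change goes in the batch loop from the question (dividing by 255 is an optional extra, not something the fix requires):

# FashionMNIST's raw tensors are uint8, so cast to float before the conv layers.
x_batch = x_train[i:i+BATCH_SIZE].view(-1, CHANNELS, DIM, DIM).to(device)
x_batch = x_batch.float() / 255.0  # cast, and optionally rescale to [0, 1]
y_batch = y_train[i:i+BATCH_SIZE].to(device)

net.zero_grad()
output, sm = net(x_batch)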
16
29
59,582,503
2020-1-3
https://stackoverflow.com/questions/59582503/matplotlib-how-to-draw-vertical-line-between-two-y-points
I have 2 y points for each x points. I can draw the plot with this code: import matplotlib.pyplot as plt x = [0, 2, 4, 6] y = [(1, 5), (1, 3), (2, 4), (2, 7)] plt.plot(x, [i for (i,j) in y], 'rs', markersize = 4) plt.plot(x, [j for (i,j) in y], 'bo', markersize = 4) plt.xlim(xmin=-3, xmax=10) plt.ylim(ymin=-1, ymax=10) plt.xlabel('ID') plt.ylabel('Class') plt.show() This is the output: How can I draw a thin line connecting each y point pair? Desired output is:
Just add: plt.plot((x, x), ([i for (i, j) in y], [j for (i, j) in y]), c='black') Each column of the two 2x4 arrays becomes one thin vertical segment between the paired y-values.
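Put together with the question's code, a full sketch looks like this (the linewidth value is only a cosmetic choice):

import matplotlib.pyplot as plt

x = [0, 2, 4, 6]
y = [(1, 5), (1, 3), (2, 4), (2, 7)]

lower = [i for (i, j) in y]
upper = [j for (i, j) in y]

# Each column of the two 2x4 arrays is drawn as one thin vertical segment.
plt.plot((x, x), (lower, upper), c='black', linewidth=1)
plt.plot(x, lower, 'rs', markersize=4)
plt.plot(x, upper, 'bo', markersize=4)

plt.xlim(xmin=-3, xmax=10)
plt.ylim(ymin=-1, ymax=10)
plt.xlabel('ID')
plt.ylabel('Class')
plt.show()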
7
12
59,563,025
2020-1-2
https://stackoverflow.com/questions/59563025/how-to-reset-tensorboard-when-it-tries-to-reuse-a-killed-windows-pid
Apologies if two days' frustration leaks through... Problem: can't reliably run Tensorboard in jupyter notebook (actually, in Jupyter Lab) with %tensorboard --logdir {logdir} and if I kill the tensorboard process and start again in the notebook it says it is reusing the dead process and port, but the process is dead and netstat -ano | findstr :6006` shows nothing, so the port looks closed too. Question: How in the name of $deity do I get tensorboard to restart from scratch and forget what it thinks it knows about processes, ports etc.? If I could do that I could hack away at residual path etc. issues... Known issues already addressed (I think): need to escape backslashes in Python string to get proper path and other OS gremlins; avoid spaces in path, ensure correct capitalisation... Environment: Win 64-bit Home with Anaconda and Tensforflow-GPU 2 installed via conda install - TF is working and writes data to the specified path given via the call back tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1) # logdir is the full path But I'm damned if I can start Tensorboard reliably within the notebook. I found that if I started an Anaconda command window and invoked tensorboard from there tensorboard started ok... (TF2GPU_Anaconda) C:\Users\Julian>tensorboard --logdir "a:\tensorboard\20200102-112749" 2020-01-02 11:53:58.478848: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll Serving TensorBoard on localhost; to expose to the network, use a proxy or pass --bind_all TensorBoard 2.0.0 at http://localhost:6006/ (Press CTRL+C to quit) It was accessibly in Chrome at localhost:6006 as stated (specifically http://localhost:6006/#scalars&run=20200102-112749%5Ctrain) (i'll ignore the other problems with tensorboard such as refresh failures on scalars, odd message on graph, etc.) and %tensorboard --logdir {logdir} then shows tensorboard in the notebook and in the separate chrome tab. However! whilst tensorboard reports in the notebook that it is reusing the old dead PID it is in fact on a completely different new PID What have I been doing wrong, and how do I reset tensorboard completely? PS the last (successful!) invocation was in fact with %tensorboard --logdir {makeWindowsCmdPath('A:\\tensorboard\\20200102-112749')} where makeWindowsCmdPath is defined as def makeWindowsCmdPath(path): return '\"' + str(path) + '\"' UPDATE 2020-01-03 A MWE of eventual success has been uploaded in a comment at Github in response to an issue that includes the PID referencing errors of tensorboard
Hey—sorry to hear that you’re running into issues. It’s entirely plausible that everything that you describe is both accurate and my fault. :-) How in the name of $deity do I get tensorboard to restart from scratch and forget what it thinks it knows about processes, ports etc.? If I could do that I could hack away at residual path etc. issues... There is a directory called .tensorboard-info in your temp directory that maintains a best-effort registry of the TensorBoard jobs that we think are running. When TensorBoard launches (in any manner, including with %tensorboard), it writes an “info file” to that directory, and when you use %tensorboard we first check to see if a “compatible instance” (same working directory and CLI args) is still running, and if so reuse it instead. When a TensorBoard instance shuts down cleanly, it removes its own info file. The idea is that as long as TensorBoard is shut down cleanly we should always have an accurate record of which processes are live, and since this registry is in a temp directory any errors due to hard shutdowns will be short-lived. But this is where I erred: coming from the POSIX world and not being very familiar with Windows application development, I didn’t realize that the Windows temp directory is not actually automatically deleted, ever. Therefore, any bookkeeping errors persist indefinitely. So, the answer to your question is, “remove the .tensorboard-info directory located under tempfile.gettempdir()” (preferably when you don’t have any actively running TensorBoard instances). There are ways that we can plausibly work around this in TensorBoard core: see https://github.com/tensorflow/tensorboard/issues/2483 for a start, and I’ve also considered amortized approaches like letting each TensorBoard instance perform some cleanup of other instances at start time. We haven’t yet gotten around to implementing these. Let me know if this is helpful or if it fails to address your question.
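A small sketch of that cleanup step in Python (run it while no TensorBoard instances are active):

import os
import shutil
import tempfile

# Remove TensorBoard's bookkeeping directory so stale PID records are forgotten.
info_dir = os.path.join(tempfile.gettempdir(), ".tensorboard-info")
if os.path.isdir(info_dir):
    shutil.rmtree(info_dir)
    print("Removed", info_dir)
else:
    print("No", info_dir, "directory found")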
16
26
59,581,746
2020-1-3
https://stackoverflow.com/questions/59581746/why-does-vs-code-autopep8-format-2-white-lines
print("Hello") def world(): print("Hello") world() Gets corrected to: print("Hello") def world(): print("Hello") world() I have tried to: Reinstall Virtual Studio Code Reinstall Python 3.8 Computer Reboot Using other formatters like Black and yapf but got the same result
Because autopep8 follows PEP 8, which suggests two blank lines around top-level functions: "Surround top-level function and class definitions with two blank lines."
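Spelled out with the blank lines visible, the formatted result looks like this (two blank lines before and after the top-level def):

print("Hello")


def world():
    print("Hello")


world()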
7
5
59,576,397
2020-1-3
https://stackoverflow.com/questions/59576397/python-kernel-dies-on-jupyter-notebook-with-tensorflow-2
I installed tensorflow 2 on my mac using conda according these instructions: conda create -n tf2 tensorflow Then I installed ipykernel to add this new environment to my jupyter notebook kernels as follows: conda activate tf2 conda install ipykernel python -m ipykernel install --user --name=tf2 That seemed to work well, I am able to see my tf2 environment on my jupyter notebook kernels. Then I tried to run the simple MNIST example to check if all was working properly and I when I execute this line of code: model.fit(x_train, y_train, epochs=5) The kernel of my jupyter notebook dies without more information. I executed the same code on my terminal via python mnist_test.py and also via ipython (command by command) and I don't have any issues, which let's me assume that my tensorflow 2 is correctly installed on my conda environment. Any ideas on what went wrong during the install? Versions: python==3.7.5 tensorboard==2.0.0 tensorflow==2.0.0 tensorflow-estimator==2.0.0 ipykernel==5.1.3 ipython==7.10.2 jupyter==1.0.0 jupyter-client==5.3.4 jupyter-console==5.2.0 jupyter-core==4.6.1 Here I put the complete script as well as the STDOUT of the execution: import tensorflow as tf import matplotlib.pyplot as plt import seaborn as sns mnist = tf.keras.datasets.mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 nn_model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation='softmax') ]) nn_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) nn_model.fit(x_train, y_train, epochs=5) nn_model.evaluate(x_test, y_test, verbose=2) (tf2) ➜ tensorflow2 python mnist_test.py 2020-01-03 10:46:10.854619: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance critical operations: SSE4.1 SSE4.2 AVX AVX2 FMA To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags. 2020-01-03 10:46:10.854860: I tensorflow/core/common_runtime/process_util.cc:115] Creating new thread pool with default inter op setting: 8. Tune using inter_op_parallelism_threads for best performance. Train on 60000 samples Epoch 1/5 60000/60000 [==============================] - 6s 102us/sample - loss: 0.3018 - accuracy: 0.9140 Epoch 2/5 60000/60000 [==============================] - 6s 103us/sample - loss: 0.1437 - accuracy: 0.9571 Epoch 3/5 60000/60000 [==============================] - 6s 103us/sample - loss: 0.1054 - accuracy: 0.9679 Epoch 4/5 60000/60000 [==============================] - 6s 103us/sample - loss: 0.0868 - accuracy: 0.9729 Epoch 5/5 60000/60000 [==============================] - 6s 103us/sample - loss: 0.0739 - accuracy: 0.9772 10000/1 - 1s - loss: 0.0359 - accuracy: 0.9782 (tf2) ➜ tensorflow2
After trying different things I run jupyter notebook on debug mode by using the command: jupyter notebook --debug Then after executing the commands on my notebook I got the error message: OMP: Error #15: Initializing libiomp5.dylib, but found libiomp5.dylib already initialized. OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/. And following this discussion, installing nomkl on the virtual environment worked for me. conda install nomkl
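If installing nomkl is not an option, the error message's own workaround (which Intel explicitly labels unsafe and unsupported) can also be set from the notebook before TensorFlow is imported; a sketch:

import os

# Allow duplicate OpenMP runtimes; prefer the nomkl fix above where possible.
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

import tensorflow as tf  # import only after the variable is set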
11
16
59,567,172
2020-1-2
https://stackoverflow.com/questions/59567172/multiple-assignments-via-walrus-operator
I've attempted to make multiple assignments with the walrus operator, and seen questions on StackOverflow such as this which also fail to assign multiple variables using a walrus operator, and am just wondering what a successful multiple assignment would look like, or whether it is not possible. The purpose of doing so is to add support for detecting all assigned variable names in my library mvdef (specifically, within the find_assigned_args function in the mvdef.src.ast_util module). From running ast.parse I can see that the := operator produces an ast.NamedExpr node, and this has a .target attribute which is an ast.Name object, from which I can obtain the assigned name from the object's .id attribute. If I had to guess, I'd presume that if it were to be at all possible, the .target attribute would be a list of ast.Name objects instead of a single ast.Name object, however the fact that I can't seem to get an example of this makes me wonder whether it is impossible, at least for the time being (in which case I can simplify my code and not just guess at what an implementation should be). If anyone knows which specific part of the Python source to look at to tell me if this is possible or not, that'd be helpful, thanks! P.S. - from looking at the test cases in Lib/test/test_parser.py provided in the initial commit (via), there don't seem to be examples of multiple assignments with the walrus operator, so I'm going to assume for now it's not possible (but please chime in if I'm wrong!) def test_named_expressions(self): self.check_suite("(a := 1)") self.check_suite("(a := a)") self.check_suite("if (match := pattern.search(data)) is None: pass") self.check_suite("[y := f(x), y**2, y**3]") self.check_suite("filtered_data = [y for x in data if (y := f(x)) is None]") self.check_suite("(y := f(x))") self.check_suite("y0 = (y1 := f(x))") self.check_suite("foo(x=(y := f(x)))") self.check_suite("def foo(answer=(p := 42)): pass") self.check_suite("def foo(answer: (p := 42) = 5): pass") self.check_suite("lambda: (x := 1)") self.check_suite("(x := lambda: 1)") self.check_suite("(x := lambda: (y := 1))") # not in PEP self.check_suite("lambda line: (m := re.match(pattern, line)) and m.group(1)") self.check_suite("x = (y := 0)") self.check_suite("(z:=(y:=(x:=0)))") self.check_suite("(info := (name, phone, *rest))") self.check_suite("(x:=1,2)") self.check_suite("(total := total + tax)") self.check_suite("len(lines := f.readlines())") self.check_suite("foo(x := 3, cat='vector')") self.check_suite("foo(cat=(category := 'vector'))") self.check_suite("if any(len(longline := l) >= 100 for l in lines): print(longline)") self.check_suite( "if env_base := os.environ.get('PYTHONUSERBASE', None): return env_base" ) self.check_suite( "if self._is_special and (ans := self._check_nans(context=context)): return ans" ) self.check_suite("foo(b := 2, a=1)") self.check_suite("foo(b := 2, a=1)") self.check_suite("foo((b := 2), a=1)") self.check_suite("foo(c=(b := 2), a=1)")
Iterable packing and unpacking is one difference between = and :=, with only the former supporting them. As found in PEP-572: # Equivalent needs extra parentheses loc = x, y # Use (loc := (x, y)) info = name, phone, *rest # Use (info := (name, phone, *rest)) # No equivalent px, py, pz = position name, phone, email, *other_info = contact
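Relating this back to the question's AST inspection: even in the packing form, the NamedExpr node's .target is a single ast.Name, never a list, which matches the conclusion that multiple targets are not supported. A quick sketch:

import ast

named = ast.parse("(info := (name, phone, *rest))", mode="eval").body
print(type(named).__name__)         # NamedExpr
print(type(named.target).__name__)  # Name: exactly one target
print(named.target.id)              # info
print(type(named.value).__name__)   # Tuple: the packing happens in the value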
22
28
59,562,997
2020-1-2
https://stackoverflow.com/questions/59562997/how-to-parse-and-read-id-field-from-and-to-a-pydantic-model
I am trying to parse MongoDB data to a pydantic schema but fail to read its _id field which seem to just disappear from the schema. The issue is definitely related to the underscore in front of the object attribute. I can't change _id field name since that would imply not parsing the field at all. Please find below the code I use (using int instead of ObjectId for the sake of simplification) from pydantic import BaseModel class User_1(BaseModel): _id: int data_1 = {"_id": 1} parsed_1 = User_1(**data_1) print(parsed_1.schema()) class User_2(BaseModel): id: int data_2 = {"id": 1} parsed_2 = User_2(**data_2) print(parsed_2.schema()) User_1 is parsed successfully since its _id field is required but can't be read afterwards. User_2 works in the above example by fails if attached to Mongo which doesn't provide any id field but _id. Output of the code above reads as follows: User_1 {'title': 'User_1', 'type': 'object', 'properties': {}} User_2 {'title': 'User_2', 'type': 'object', 'properties': {'id': {'title': 'Id', 'type': 'integer'}}, 'required': ['id']}
You need to use an alias for that field name: from pydantic import BaseModel, Field class User_1(BaseModel): id: int = Field(..., alias='_id') See the docs here.
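A slightly fuller sketch in pydantic v1 style (matching the question's usage; check the docs for your pydantic version, since v2 renames this setting): the alias lets the model read Mongo's _id, and allow_population_by_field_name additionally permits constructing the model with id=... directly.

from pydantic import BaseModel, Field

class User(BaseModel):
    id: int = Field(..., alias='_id')

    class Config:
        allow_population_by_field_name = True

print(User(**{'_id': 1}))                      # id=1
print(User(id=1))                              # also accepted via the Config flag
print(User(**{'_id': 1}).dict(by_alias=True))  # {'_id': 1}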
24
36
59,559,941
2020-1-2
https://stackoverflow.com/questions/59559941/how-to-round-decimal-places-in-a-dash-table
I have the following python code: import dash import dash_html_components as html import pandas as pd df = pd.read_csv('https://raw.githubusercontent.com/dougmellon/rockies_dash/master/rockies_2019.csv') def generate_table(dataframe, max_rows=10): return html.Table( # Header [html.Tr([html.Th(col) for col in dataframe.columns])] + # Body [html.Tr([ html.Td(dataframe.iloc[i][col]) for col in dataframe.columns ]) for i in range(min(len(dataframe), max_rows))] ) external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css'] app = dash.Dash(__name__, external_stylesheets=external_stylesheets) app.layout = html.Div(children=[ html.H4('Batting Stats (2019)'), generate_table(df) ]) if __name__ == '__main__': app.run_server(debug=True) which is pulling data from this csv file (github): When I run the following code, python app.py It displays data with greater than three decimals - which isn't in my csv file. I have tried three or four times to reenter the data manually and re-upload the CSV to github but for some reason there is still data with greater than three decimals. Does anyone have any suggestions as to how I could possibly fix this issue?
I would check this question about the dataframe, maybe it helps: "How to display pandas DataFrame of floats using a format string for columns?" Or simply try: pd.options.display.float_format = '${:.2f}'.format I just read in one of the Dash DataTable forums that you can format the data in the pandas dataframe and the DataTable will display it as-is, already formatted. For example: To display a percent value: table_df['col_name'] = table_df['col_name'].map('{:,.2f}%'.format) To display a float without the decimal part: table_df['col_name'] = table_df['col_name'].map("{:,.0f}".format)
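Applied to the question's code, a sketch of that idea formats the float columns before they reach generate_table (auto-selecting float columns is an assumption here; you could also list the columns explicitly):

import pandas as pd

df = pd.read_csv('https://raw.githubusercontent.com/dougmellon/rockies_dash/master/rockies_2019.csv')

# Pre-format every float column to three decimals; dash then just renders
# the resulting strings.
for col in df.select_dtypes(include='float').columns:
    df[col] = df[col].map('{:,.3f}'.format)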
8
6
59,554,493
2020-1-1
https://stackoverflow.com/questions/59554493/unable-to-fire-a-docker-build-for-django-and-mysql
I am building an application with Djnago and MySql. I want to use docker for the deployment of my application. I have prepared a requirement.txt, docker-compose.yml and a Dockerfile docker-compose.yml version: "3" services: law-application: restart: always build: context: . ports: - "8000:8000" volumes: - ./app:/app command: > sh -c "python manage.py runserver 0.0.0.0:8000" depends_on: - mysql_db mysql_db: image: mysql:latest command: mysqld --default-authentication-plugin=mysql_native_password volumes: - "./mysql:/var/lib/mysql" ports: - "3306:3306" restart: always environment: - MYSQL_ROOT_PASSWORD=root - MYSQL_DATABASE=root - MYSQL_USER=root - MYSQL_PASSWORD=root requirements.txt django>=2.1.3,<2.2.0 djangorestframework==3.11.0 mysqlclient==1.4.6 Dockerfile FROM python:3.7-alpine MAINTAINER Intersources Inc. ENV PYTHONUNBUFFERED 1 COPY ./requirements.txt /requirements.txt RUN pip install -r /requirements.txt RUN apt-get update RUN apt-get install python3-dev default-libmysqlclient-dev -y RUN mkdir /app WORKDIR /app COPY ./app /app RUN adduser -D jeet USER jeet settings.py DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', 'NAME': 'my-app-db', 'USER': 'root', 'PASSWORD': 'root', 'HOST': 'mysql_db', 'PORT': 3307, } } I have been trying to run the command docker build . to build an image from docker file but I get this error. Looks like there is some issue with the MySql connector. I have tried searching for the solution but couldn't found any thing to fix this. I am able to build the image if I remove the mysql_db service from the docker-compose.yml. Sending build context to Docker daemon 444.7MB Step 1/12 : FROM python:3.7-alpine ---> 6c7f85a86cca Step 2/12 : MAINTAINER Intersources Inc. ---> Using cache ---> 03b6fa5764d4 Step 3/12 : ENV PYTHONUNBUFFERED 1 ---> Using cache ---> 22ecd91dcb55 Step 4/12 : COPY ./requirements.txt /requirements.txt ---> Using cache ---> e58c16108f20 Step 5/12 : RUN pip install -r /requirements.txt ---> Running in 8f3eb8240fce Collecting django<2.2.0,>=2.1.3 Downloading https://files.pythonhosted.org/packages/ff/82/55a696532518aa47666b45480b579a221638ab29d60d33ce71fcbd3cef9a/Django-2.1.15-py3-none-any.whl (7.3MB) Collecting djangorestframework==3.11.0 Downloading https://files.pythonhosted.org/packages/be/5b/9bbde4395a1074d528d6d9e0cc161d3b99bd9d0b2b558ca919ffaa2e0068/djangorestframework-3.11.0-py3-none-any.whl (911kB) Collecting mysqlclient==1.4.6 Downloading https://files.pythonhosted.org/packages/d0/97/7326248ac8d5049968bf4ec708a5d3d4806e412a42e74160d7f266a3e03a/mysqlclient-1.4.6.tar.gz (85kB) ERROR: Command errored out with exit status 1: command: /usr/local/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-e9kt1otq/mysqlclient/setup.py'"'"'; __file__='"'"'/tmp/pip-install-e9kt1otq/mysqlclient/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-e9kt1otq/mysqlclient/pip-egg-info cwd: /tmp/pip-install-e9kt1otq/mysqlclient/ Complete output (12 lines): /bin/sh: mysql_config: not found /bin/sh: mariadb_config: not found /bin/sh: mysql_config: not found Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-e9kt1otq/mysqlclient/setup.py", line 16, in <module> metadata, options = get_config() File "/tmp/pip-install-e9kt1otq/mysqlclient/setup_posix.py", line 61, in get_config libs = mysql_config("libs") File 
"/tmp/pip-install-e9kt1otq/mysqlclient/setup_posix.py", line 29, in mysql_config raise EnvironmentError("%s not found" % (_mysql_config_path,)) OSError: mysql_config not found ---------------------------------------- ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. The command '/bin/sh -c pip install -r /requirements.txt' returned a non-zero code: 1
Run pip install -r requirements.txt only after the apt-get install step, because mysqlclient needs libmysqlclient-dev to build. Also, you're using the apt package manager with an Alpine base image, which is incompatible. I recommend taking python:3.7-slim, whose Debian base supports apt (note that Debian's adduser does not accept BusyBox's -D flag, so the user is created with --disabled-password instead): FROM python:3.7-slim MAINTAINER Intersources Inc. ENV PYTHONUNBUFFERED 1 RUN apt-get update RUN apt-get install python3-dev default-libmysqlclient-dev gcc -y COPY ./requirements.txt /requirements.txt RUN pip install -r /requirements.txt RUN mkdir /app WORKDIR /app COPY ./app /app RUN adduser --disabled-password --gecos "" jeet USER jeet If you do need Alpine, modify the Dockerfile like this: FROM python:3.7-alpine MAINTAINER Intersources Inc. RUN apk update RUN apk add musl-dev mariadb-dev gcc RUN pip install mysqlclient RUN mkdir /app WORKDIR /app COPY ./app /app RUN adduser -D jeet USER jeet
7
19
79,418,235
2025-2-6
https://stackoverflow.com/questions/79418235/msgraph-sdk-python-get-sharepoint-site-id-from-name
I want to work with SharePoint sites using Python. For now, I just want to ls on the site. I immediately ran into an issue where I need site id to do anything with it. How do I get site id from the site name? I did retrieve it using powershell: $uri = "https://graph.microsoft.com/v1.0/sites/"+$SharePointTenant+":/sites/"+$SiteName+"?$select=id" $RestData1 = Invoke-MgGraphRequest -Method GET -Uri $uri -ContentType "application/json" # Display Site ID $RestData1.displayName $RestData1.id But now I want to perform the same operation as above but using msgraph library. The problem is I am not sure how to do it. So far I have: import asyncio from azure.identity.aio import ClientSecretCredential from msgraph import GraphServiceClient tenant_id="..." client_id="..." client_secret="..." credential = ClientSecretCredential(tenant_id, client_id, client_secret) scopes = ['https://graph.microsoft.com/.default'] client = GraphServiceClient(credentials=credential, scopes=scopes) sharepoint_tenant = "foobar365.sharepoint.com" site_name = "someTestSPsite" site_id = "f482d14d-6bd1-1234-gega-3121183eb87a" # how do I get this using GraphServiceClient from the site_name ? # so that I can do this # this works with the id retrieved from PS script drives = (asyncio.run(client.sites.by_site_id(site_id).drives.get())) print(drives)
To get the Site ID from Site name and get Drives, modify the code like below: import asyncio from azure.identity.aio import ClientSecretCredential from msgraph import GraphServiceClient from msgraph.generated.sites.sites_request_builder import SitesRequestBuilder from kiota_abstractions.base_request_configuration import RequestConfiguration tenant_id = "TenantID" client_id = "ClientID" client_secret = "Secret" credential = ClientSecretCredential(tenant_id, client_id, client_secret) scopes = ['https://graph.microsoft.com/.default'] graph_client = GraphServiceClient(credential, scopes) query_params = SitesRequestBuilder.SitesRequestBuilderGetQueryParameters( search="RukSite",) request_configuration = RequestConfiguration( query_parameters=query_params, ) async def get_site_id(): result = await graph_client.sites.get(request_configuration=request_configuration) site_id = result.value[0].id return site_id # Get the drives of the site by site_id async def get_drives(site_id): drives = await graph_client.sites.by_site_id(site_id).drives.get() return drives async def main(): site_id = await get_site_id() # Print the Site ID print(f"Site ID: {site_id}") # Fetch the drives for the site using the site ID drives = await get_drives(site_id) # Print the drives print(f"Drives: {drives}") # Run the main async function asyncio.run(main()) Make sure to assign Sites.Read.All application type API permission.
1
4
79,417,096
2025-2-6
https://stackoverflow.com/questions/79417096/how-to-force-the-reinsall-of-my-local-package-with-uv
I'm starting to use uv for my CI as it shows outstanding performance compared to a normal pip installation. For each CI run I (in fact nox does it on my behalf) create a virtual environment that will be used to run the tests. In this environment I run the following: uv pip install .[test] My folder is a simple Python package like this one: my_package/ ├── __init__.py └── logic.py docs/ ├── index.rst └── conf.py test/ └── test_something.py pyproject.toml noxfile.py As uv caches everything, my virtual environment is never updated and I cannot check new functionality without rebuilding the venv from scratch. How can I make sure that "." gets reinstalled from source every time I run my tests? I see 3 potential options: always install in editable mode: uv pip install -e .[test] which is not ideal for testing purposes as I don't check whether the wheel build includes all the necessary files. force the reinstall in the CI call: uv pip install --reinstall .[test] I guess I will lose the caching for all the libs and not only my package force the reinstall from the pyproject.toml: reinstall-package = ["."] but I don't know if it's going to mess with normal installation for users that are not running the tests Am I missing an alternative, and which one is best to avoid unwanted side effects in my tests?
The uv team has been super fast to answer my question, and since https://github.com/astral-sh/uv/issues/12038 it is part of the default mechanism: local sources are now always reinstalled.
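On uv versions that predate that change (or to be explicit about the behaviour), a middle ground between options 2 and 3 from the question is to force the reinstall of only the local package, which keeps the cache for third-party dependencies. Assuming the project is named my_package in pyproject.toml, a hedged sketch: uv pip install --reinstall-package my_package .[test]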
3
0
79,424,466
2025-2-9
https://stackoverflow.com/questions/79424466/python-socket-can-not-connect-on-two-separated-machines
I used the socket library in Python, and made two simple projects which are server.py and client.py, then I executed them as server.exe and client.exe When I run both (server.exe and client.exe) on my computer, it works fine, but when I run them on two separate computers, the client can not connect to the server that runs on the other computer. Here is my code: Server side: import socket server = socket.socket(socket.AF_INET,socket.SOCK_STREAM) # use our ipv4 for making server on port 9000, and show it on the screen ip = socket.gethostbyname(socket.gethostname()) server.bind((ip,9000)) server.listen(1) print(f"The server ipv4 , give it to client to connect to the server -> {ip}") print("Waiting for client ...") client_socket = server.accept()[0] client_socket.send(b"hello") client_socket.close() server.close() print("Done !!") Client side: import socket server_ip = input("Please enter server ipv4 here : ") client = socket.socket(socket.AF_INET,socket.SOCK_STREAM) client.connect((server_ip,9000)) # receive data from server (b"hello" which sent from the server) Print(client.recv(5).decode('utf-8')) client.close() print("Done !!") There are no more exceptions in these two codes but just one exception appears on the client side when I am trying to connect the client to the server which runs on another computer as I said above. The exception on the client side says that the client socket attempted to connect to the server, but it could not connect to the server, and it said something about timeout connection. I searched a lot and watched lots of videos that make the same projects for testing sockets in python and it works for them, but not for me :( I allowed the client and server in firewall on two computers, also disabled the firewall, but it did not work at all, and the same exception appeared. I used netstat -a (cmd command) on the computer on which the server was running on it, and saw that the server socket was listening on port 9000. Nothing seems to work, any help will be appreciated.
I think this is virtually impossible without an online host or a third-party program, if I understand it correctly. This is because of a main difference between a local network and the internet: we use a local network for devices connected to the same router, and these devices can easily communicate with each other because they are on the same local network, but the internet connects groups of devices (separate networks) through different routers. This is what I understood after some research (sorry if I explained it wrongly). Are there any free hosts for testing projects (a host for Python which supports the socket library)?
2
0
79,420,818
2025-2-7
https://stackoverflow.com/questions/79420818/clearing-tf-data-dataset-from-gpu-memory
I'm running into an issue when implementing a training loop that uses a tf.data.Dataset as input to a Keras model. My dataset has an element spec of the following format: ({'data': TensorSpec(shape=(15000, 1), dtype=tf.float32), 'index': TensorSpec(shape=(2,), dtype=tf.int64)}, TensorSpec(shape=(1,), dtype=tf.int32)) So, basically, each sample is structured as tuple (x, y), in which x has the structure of a dict containing two tensors, one of data with shape (15000, 1), and the other an index of shape (2,) (the index is not used during training), and y is a single label. The tf.data.Dataset is created using dataset = tf.data.Dataset.from_tensor_slices((X, y)), where X is a dict of two keys: data: an np array of shape (200k, 1500, 1), index with index: an np array of shape (200k, 2) and y is a single array of shape (200k, 1) My dataset has about 200k training samples (after running undersampling) and 200k validation samples. Right after calling tf.data.Dataset.from_tensor_slices I noticed a spike in GPU memory usage, with about 16GB being occupied after creating the training tf.Dataset, and 16GB more after creating the validation tf.Dataset. After creating of the tf.Dataset, I run a few operations (e.g. shuffle, batching, and prefetching), and call model.fit. My model has about 500k trainable parameters. The issue I'm running into is after fitting the model. I need to run inference on some additional data, so I create a new tf.Dataset with this data, again using tf.Dataset.from_tensor_slices. However, I noticed the training and validation tf.Dataset still reside in GPU memory, which causes my script to break with an out of memory problem for the new tf.Dataset I want to run inference on. I tried calling del on the two tf.Dataset, and subsequently calling gc.collect(), but I believe that will only clear RAM, not GPU memory. Also, I tried disabling some operations I apply, such as prefetch, and also playing with the batch size, but none of that worked. I also tried calling keras.backend.clear_session(), but it also did not work to clear GPU memory. I also tried importing cuda from numba, but due to my install I cannot use it to clear memory. Is there any way for me to clear the tf.data.Dataset from GPU memory? 
Minimum reproducible example below Setup import numpy as np import tensorflow as tf from itertools import product # Setting tensorflow memory growth for GPU gpus = tf.config.list_physical_devices('GPU') for gpu in gpus: tf.config.experimental.set_memory_growth(gpu, True) Create dummy data with similar size as my actual data (types are the same as the actual data): train_index = np.array(list(product(np.arange(1000), np.arange(200)))).astype(np.int32) train_data = np.random.rand(200000, 15000).astype(np.float32) train_y = np.random.randint(0, 2, size=(200000, 1)).astype(np.int32) val_index = np.array(list(product(np.arange(1000), np.arange(200)))).astype(np.int32) val_data = np.random.rand(200000, 15000).astype(np.float32) val_y = np.random.randint(0, 2, size=(200000, 1)).astype(np.int32) This is the nvidia-smi output at this point: Creating the training tf.data.Dataset, with as batch size of 256 train_X = {'data': train_data, 'index':train_index} train_dataset = tf.data.Dataset.from_tensor_slices((train_X, train_y)) train_dataset = train_dataset.batch(256) This is the nvidia-smi output after the tf.data.Dataset creation: Creating the validation tf.data.Dataset, with as batch size of 256 val_X = {'data': val_data, 'index':val_index} val_dataset = tf.data.Dataset.from_tensor_slices((val_X, val_y)) val_dataset = val_dataset.batch(256) This is the nvidia-smi output after the second tf.data.Dataset creation: So GPU usage grows when creating each tf.data.Dataset. Since after running model.fit I need to create a new tf.data.Dataset of similar size, I end up running out of memory. Is there any way to clear this data from GPU memory?
The problem is due to a cache that does not get cleared when needed; this is an open issue. The only way I found is to build a larger dataset that replaces the old one in the cache: dataset = tf.data.Dataset.range(num_epochs // 8) # drop the cache every 8 epochs dataset = dataset.flat_map(lambda i: create_dataset().repeat(8)) model.fit(dataset, ...)
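As a hedged aside (this is not part of the accepted answer), the large GPU allocation made by from_tensor_slices itself can often be avoided by pinning the dataset creation to the CPU, so that only individual batches are copied to the GPU during training. A minimal sketch reusing the variable names from the question:

import tensorflow as tf

# Keep the big constant tensors in host memory instead of on the GPU.
with tf.device('/CPU:0'):
    train_dataset = tf.data.Dataset.from_tensor_slices((train_X, train_y))
train_dataset = train_dataset.batch(256).prefetch(tf.data.AUTOTUNE)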
9
2
79,421,366
2025-2-7
https://stackoverflow.com/questions/79421366/plot-modification-to-align-both-x-axes-and-start-from-where-it-exists-in-list
import matplotlib.pyplot as plt size = 18 value = [20, 25, 30, 35] x_values1 = list(range(0, 100, 5)) x_values2 = [0.0, 20.0, 40.0, 60.0, 80.0, 90.0, 95.0] x_values3 = [80,76,72,68,64,60,56,52,48,44,40,36,32,28,24,20,16,12,8,4] y_value1 = [8.226502331074E-07, 2.23276092438077E-06, 4.05102969501409E-06, 6.37286188209644E-06, 9.0948649799175E-06, 1.1823432395575E-05, 1.39950605433403E-05, 1.52880124352018E-05, 1.59623881451955E-05, 1.66013609598997E-05, 1.75776082796254E-05, 1.89088470296623E-05, 2.04282721607529E-05, 2.19342009434028E-05, 2.32357328236355E-05, 2.41468143248193E-05, 2.44765091608964E-05, 2.40357022548111E-05, 2.26629075512208E-05, 2.02551778821116E-05] y_value2 = [5.20787001629681E-07, 9.49836092316734E-06, 1.55921680069521E-05, 2.14212986542911E-05, 2.44079518003369E-05, 2.50048219351487E-05, 2.07892738820319E-05] fig = plt.figure(figsize=(10, 10)) #dpi=300) ax1 = fig.add_subplot(111) ax2 = ax1.twiny() prop = ax1._get_lines.prop_cycler prop = ax2._get_lines.prop_cycler marker1 = itertools.cycle(('o', '*', 'D', 'x', 's', '^', 'v', '<', '>', ',')) marker2 = itertools.cycle(('o', '*', 'D', 'x', 's', '^', 'v', '<', '>', ',')) for i in value: color = next(prop)['color'] column = 'Au_%0.3f' %(i/100) column1 = 'residual_Ag_%0.3f' %(i/100) ax1.plot(x_values1, y_value1, linewidth=3, color=color) ax1.scatter(x_values2, y_value2, marker=next(marker1), s=150, color=color) ax1.plot([], [], marker=next(marker2), label='x=%0.3f' %(i/100), linewidth=3, markersize=15, color=color) ax2.plot(x_values3, y_value1, linewidth=3, color=color) ax1.tick_params(axis='both', which='major', labelsize=25, length=10, width=2) ax2.tick_params(axis='both', which='major', labelsize=25, length=10, width=2) ax1.legend(loc='lower right', fontsize=25) ax1.set_xlabel('xlabel1', fontsize=25) ax1.set_ylabel('ylabel', fontsize=25) ax2.set_xlabel('xlabel1', fontsize=25) plt.rcParams["font.family"] = "Arial" for axis in ['top', 'bottom', 'right', 'left']: ax1.spines[axis].set_linewidth(2) ax2.spines[axis].set_linewidth(2) ax1.yaxis.get_offset_text().set_fontsize(25) plt.tight_layout() plt.show() when this code is run I get the plot This code plots a graph as shown in the figure. I want the x-axes to be aligned and the secondary axis to remain as it is code i.e., in decreasing order.
import matplotlib.pyplot as plt import itertools print('matplotlib version:', matplotlib.__version__) size = 18 value = [20, 25, 30, 35] x_values1 = list(range(0, 100, 5)) x_values2 = [0.0, 20.0, 40.0, 60.0, 80.0, 90.0, 95.0] x_values3 = [80, 76, 72, 68, 64, 60, 56, 52, 48, 44, 40, 36, 32, 28, 24, 20, 16, 12, 8, 4] y_value1 = [8.226502331074E-07, 2.23276092438077E-06, 4.05102969501409E-06, 6.37286188209644E-06, 9.0948649799175E-06, 1.1823432395575E-05, 1.39950605433403E-05, 1.52880124352018E-05, 1.59623881451955E-05, 1.66013609598997E-05, 1.75776082796254E-05, 1.89088470296623E-05, 2.04282721607529E-05, 2.19342009434028E-05, 2.32357328236355E-05, 2.41468143248193E-05, 2.44765091608964E-05, 2.40357022548111E-05, 2.26629075512208E-05, 2.02551778821116E-05] y_value2 = [5.20787001629681E-07, 9.49836092316734E-06, 1.55921680069521E-05, 2.14212986542911E-05, 2.44079518003369E-05, 2.50048219351487E-05, 2.07892738820319E-05] fig = plt.figure(figsize=(10, 10)) # dpi=300) ax1 = fig.add_subplot(111) ax2 = ax1.twiny() prop = ax1._get_lines.prop_cycler prop = ax2._get_lines.prop_cycler marker1 = itertools.cycle(('o', '*', 'D', 'x', 's', '^', 'v', '<', '>', ',')) marker2 = itertools.cycle(('o', '*', 'D', 'x', 's', '^', 'v', '<', '>', ',')) for i in value: color = next(prop)['color'] column = 'Au_%0.3f' % (i / 100) column1 = 'residual_Ag_%0.3f' % (i / 100) ax1.plot(x_values1, y_value1, linewidth=3, color=color) ax1.scatter(x_values2, y_value2, marker=next(marker1), s=150, color=color) ax1.plot([], [], marker=next(marker2), label='x=%0.3f' % (i / 100), linewidth=3, markersize=15, color=color) ax2.plot(x_values3, y_value1, linewidth=3, color=color) ax1.tick_params(axis='both', which='major', labelsize=25, length=10, width=2) ax2.tick_params(axis='both', which='major', labelsize=25, length=10, width=2) ax1.legend(loc='lower right', fontsize=25) ax1.set_xlabel('xlabel1', fontsize=25) ax1.set_ylabel('ylabel', fontsize=25) ax2.set_xlabel('xlabel1', fontsize=25) # Reverse the limits of the second x-axis (x2) ax2.set_xlim(max(x_values3), min(x_values3)) # Set the ticks in decreasing order for x2 ax2.set_xticks(x_values3) plt.rcParams["font.family"] = "Arial" for axis in ['top', 'bottom', 'right', 'left']: ax1.spines[axis].set_linewidth(2) ax2.spines[axis].set_linewidth(2) ax1.yaxis.get_offset_text().set_fontsize(25) plt.tight_layout() plt.show() to set x2 axis in decreasing order # Reverse the limits of the second x-axis (x2) ax2.set_xlim(max(x_values3), min(x_values3)) # Set the ticks in decreasing order for x2 ax2.set_xticks(x_values3) with matplotlib version: 3.7.1
1
1
79,422,336
2025-2-7
https://stackoverflow.com/questions/79422336/azure-function-with-service-bus-trigger-managedidentitycredential-performance
I am working on an Azure Function that uses a ServiceBusTrigger and queries Azure Table Storage. In order to process multiple messages as quickly as possible we're using the MaxConcurrentCalls setting to enable parallel message processing (e.g. we set MaxConcurrentCalls to 200). We're using Managed Identity to access the Service bus and the Azure Table Storage via DefaultAzureCredential. Performance testing shows that multiple instances of the Function app are instantiated and are processing messages as expected, however each instance makes a call to the Azure /msi/token endpoint to obtain a ManagedIdentityCredential, and this call is the bottleneck, taking anywhere from 200ms to 5000ms. I.e using the above setting, if 200 messages get dropped onto the service bus then 200 "instances" of the azure function will start processing them, and make 200 calls to get a ManagedIdentityCredential. What is the mechanism behind how Azure functions are processing messages concurrently, will it create multiple processes or multiple threads within the same process? is there a way to share/cache the credential once it's obtained to be used by the other message processing instances as well and eliminate the redundant calls to /msi/token ? We're using Python as programming language. this is the code to initialize the Azure resources # helper function to initialize global table service client def init_azure_resource_clients(config_settings: EligibilitySettings): """get table service client for Azure Table Storage and service bus client""" non_aio_credential = DefaultAzureCredential() # initialize global Service Bus client global _azure_servicebus_client _azure_servicebus_client = ServiceBusClient(fully_qualified_namespace=config_settings.serviceBusNamespaceFQ, credential=non_aio_credential) # initialize global Table Service Client global _azure_table_service_client # prefer connection string if available if config_settings.tableStorageConnectionString: _azure_table_service_client = TableServiceClient.from_connection_string(conn_str=config_settings.tableStorageConnectionString) else: _azure_table_service_client = TableServiceClient(endpoint=f"https://{config_settings.tableStorageAccount}.table.core.windows.net", credential=non_aio_credential) And here is some sample code on how is it called: import logging import azure.functions as func # global reference to the azure resources we need to access _azure_table_service_client = None _azure_servicebus_client = None app = func.FunctionApp() @app.function_name(name="ServiceBusQueueTrigger1") @app.service_bus_queue_trigger(arg_name="msg", queue_name="<QUEUE_NAME>", connection="<CONNECTION_SETTING>") def test_function(msg: func.ServiceBusMessage): logging.info('ServiceBus queue trigger processed message: %s', msg.get_body().decode('utf-8')) # initialize global azure resources init_azure_resource_clients(config_settings) # parse incoming message message_body = msg.get_body().decode('utf-8') message_json = json.loads(message_body) result = process_message(message_json)
To prevent each function instance from making a call to the /msi/token endpoint when obtaining a ManagedIdentityCredential, initialize DefaultAzureCredential once and cache it in _credential. This prevents redundant calls to /msi/token for each message. Additionally, ensure that the Storage Table Data Contributor and Storage Table Data Reader roles are assigned. Refer to this link for guidance on using DefaultAzureCredential() in an Azure Function with Python. Below sample code is using DefaultAzureCredential() in Azure Function with Python import azure.functions as func import logging import json import asyncio from azure.identity import DefaultAzureCredential, TokenCachePersistenceOptions from azure.servicebus.aio import ServiceBusClient from azure.data.tables.aio import TableServiceClient _credential = None _servicebus_client = None _table_service_client = None def get_azure_credential(): """Initialize and cache the Azure Managed Identity credential.""" global _credential if _credential is None: logging.info("Initializing Managed Identity Credential with token caching.") _credential = DefaultAzureCredential(cache_persistence_options=TokenCachePersistenceOptions()) return _credential async def init_azure_resource_clients(): """Initialize Azure Service Bus and Table Storage clients asynchronously.""" global _servicebus_client, _table_service_client credential = get_azure_credential() if _servicebus_client is None: _servicebus_client = ServiceBusClient( fully_qualified_namespace="SERviceBusName.servicebus.windows.net", credential=credential ) logging.info("Initialized Azure Service Bus Client.") if _table_service_client is None: _table_service_client = TableServiceClient( endpoint="https://ravitewaja.table.core.windows.net", credential=credential ) logging.info("Initialized Azure Table Storage Client.") app = func.FunctionApp() @app.function_name(name="ServiceBusQueueTrigger") @app.service_bus_queue_trigger( arg_name="msg", queue_name="queue_name", connection="CONNECTION_SETTING" ) async def service_bus_trigger(msg: func.ServiceBusMessage): """Function triggered by Service Bus messages.""" logging.info("Processing Service Bus message.") await init_azure_resource_clients() message_body = msg.get_body().decode("utf-8").strip() logging.info(f"Received Message: {message_body}") # Handle empty message if not message_body: logging.warning("Received an empty message. Skipping processing.") return try: message_json = json.loads(message_body) except json.JSONDecodeError as e: logging.error(f"Failed to decode JSON: {e}") return await process_message(message_json) async def process_message(message_json): """Processes the Service Bus message and interacts with Azure Table Storage.""" global _table_service_client if _table_service_client is None: logging.error("Table Service Client is not initialized.") return try: table_name = "sampath" table_client = _table_service_client.get_table_client(table_name) entity = { "PartitionKey": "Messages", "RowKey": message_json.get("id", "default"), "Message": json.dumps(message_json) } await table_client.upsert_entity(entity) logging.info("Message successfully written to Azure Table Storage.") except Exception as e: logging.error(f"Error processing message: {e}") Output:
1
1
79,424,859
2025-2-9
https://stackoverflow.com/questions/79424859/how-to-improve-efficiency-of-my-python-function-involving-sparse-matrices
I want to implement a particular function in python involving sparse matrices and want to make it as fast as possible. I describe in detail what the problem is and how I implemented it so far. Problem: I have N=1000 fixed (dimension is fixed, entries fixed) sparse matrices collectively called B each of size 1000x1000 (average sparsity, i.e. number of non-zero entries over all entries, is 0.0001). For a given vector u (of size 1000) I want to compute c[j] = u @ B[j] @ u for each j=0,...,999 and the output should be the numpy array c. So the sparse matrices B[j] (stored in the tuple B) are fixed and u is my function input. My implementation so far: I precompute all the matrices and treat them as global variables in my program. I decided to safe them as scipy.sparse matrices in csr_matrix format (I read that this is the best format when I just want to calculate matrix vector products) in a Tuple. To calculate my desired function I do # precompute matrices, so treat them as global fixed variables B = [] for j in range(1000): sparse_matrix = ...compute it... # sparse csr_matrix of size 1000x1000 B.append(sparse_matrix) B = tuple(B) def func(u: np.ndarray) -> np.ndarray: """ u is a np.array of length N=1000 """ return np.array([u.dot(B[k].dot(u.transpose())) for k in range(len(B))]) Question: Is this the most efficient it can get, or do you see room for improvement regarding speed, e.g. change the structure how I safe the matrices or how I compute all the vector-matrix-vector products? Also, do you see potential for parallelising this computation? If so, do you have a suggestion what libraries/functions I should look into? Thanks in advance! Edit: With c[j] = u @ B[j] @ u I mean that I want to compute the matrix-vector product of B[j] and u and then the inner product with u. (Mathematically u.transposed() * B * u) If it is of help to anyone. Here is a small benchmark program where I create some random sparse matrices and evaluate it on some random vector. import numpy as np from random import randint from scipy.sparse import coo_matrix, csr_matrix from time import time # Create random tuple of sparse matrices N = 1000 n = 100 B = [] for i in range(N): data = np.random.uniform(-1, 1, n).tolist() rows = [randint(0, N-1) for _ in range(n)] cols = [randint(0, N-1) for _ in range(n)] sparse_matrix = csr_matrix(coo_matrix((data, (rows, cols)), shape=(N, N))) B.append(sparse_matrix) B = tuple(B) # My function def func(u: np.ndarray) -> np.ndarray: """ u is a np.array of length N=1000 """ return np.array([u.dot(B[k].dot(u.transpose())) for k in range(len(B))]) # random vector to evaluate function u = np.random.uniform(-1, 1, N) START = time() func(u) END = time() print(f"Speed : {END - START}") >>> Speed : 0.005256175994873047
Here is another answer using another data structure for B which is different from the one in the question and Numba-friendly. Since B is "fixed", we can transform it to a more efficient array-based data-structure packing all COO matrices into only 4 big arrays. The number of non-zero values is stored so to know where each COO matrix start and end. The indices are stored in 16-bit integers (more compact in memory) since all matrices are known to be small (less than 65536x65536 and with less than 65536 non-zeros items). Unsigned integers are used to avoid wrap-around checks in Numba. Note that answer is an improvement of the answer of Nick ODell. Here is the resulting code (including the one to build B): import numpy as np from scipy.sparse import coo_matrix import numba as nb # Converts the input matrix to a numba-friendly data-structure def to_multi_coo(B): all_size = np.array([b.nnz for b in B], dtype=np.uint16) all_rows = np.concatenate([b.row.astype(np.uint16) for b in B]) all_cols = np.concatenate([b.col.astype(np.uint16) for b in B]) all_data = np.concatenate([b.data for b in B]) return (all_size, all_rows, all_cols, all_data) @nb.njit('(Tuple([uint16[::1], uint16[::1], uint16[::1], float64[::1]]), float64[::1])', fastmath=True) def mult_matrix_multi_coo(B, u): all_size, all_rows, all_cols, all_data = B n = len(all_size) offset = np.uint64(0) result = np.empty(n, dtype=np.float64) for i in range(n): s = 0.0 for j in range(np.uint64(all_size[i])): s += u[all_rows[offset+j]] * u[all_cols[offset+j]] * all_data[offset+j] result[i] = s offset += all_size[i] return result # Create random tuple of sparse matrices np.random.seed(42) N = 1000 n = 100 B = [] for i in range(N): data = np.random.uniform(-1, 1, n) rows = np.random.randint(0, N, n) cols = np.random.randint(0, N, n) sparse_matrix = coo_matrix((data, (rows, cols)), shape=(N, N)) B.append(sparse_matrix) B = tuple(B) B = to_multi_coo(B) u = np.random.uniform(-1, 1, N) mult_matrix_multi_coo(B, u) Performance results Here are results on my machine with a i5-9600KF CPU: Initial solution: 6135 µs NickODell's solution: 602 µs This solution: 59 µs <----- Thus, this solution is about 10 times faster than the best solution so far and about 100 times faster than the initial solution. It is so fast that parallelizing it certainly does not even worth it. Indeed, creating threads and waiting for them takes some time: often about dozens of µs on mainstream PCs (often much more on some computing servers, like ones with many-cores).
4
3
79,418,290
2025-2-6
https://stackoverflow.com/questions/79418290/pyspark-foreachpartition-not-getting-executed
I am trying to copy data from an iceberg table to a postgres table using a glue job. I have this code: def execute_job(spark, factory: DependencyFactory, environment, logger): print("Starting job") sql = f"SELECT * FROM {DATABASE_NAME}.transactions ORDER BY unique_identifier, in_date" df = spark.sql(sql) df.show(10) config = Config(environment) window_spec = Window.partitionBy("unique_identifier").orderBy("in_date") df = df.withColumn("version", row_number().over(window_spec)) df = df.withColumn("start_date", col("in_date")) df = df.withColumn("end_date", lit("9999-12-31").cast("string")) df.show(10) df = df.withColumn("prev_start_date", lag("start_date").over(window_spec)) df = df.withColumn("end_date", when(col("version") == 1, col("end_date")).otherwise(col("prev_start_date"))) df = df.withColumn("id", col("unique_identifier").cast("string") + ":" + col("version").cast("string")) df.show(10) total_count = df.count() print(f"Total records: {total_count}") if total_count == 0: print("No records to insert. Exiting.") return num_partitions = max(1, total_count // 100000) processed_df = df.repartition(num_partitions) print(f"Number of partitions: {processed_df.rdd.getNumPartitions()}") processed_df.show(10) def process_partition(partition): logger.info("Processing partition") print("Processing partition") partition_list = list(partition) insert_query = """ INSERT INTO transaction.transactions_master (start_date, end_date, in_date, out_date, update_time, id, version, unique_identifier, id_tr, alias, update_correlation_id, correlation_id) VALUES %s """ db_client = PostgresDbClient(DbConfigBuilder(config, config['vault']['service_type'], ssl_cert_file=None)) if not partition_list: print("Skipping empty partition") return batch_size = 100000 try: values = [tuple(row) for row in partition_list] db_client.execute_values_query(insert_query, values, page_size=batch_size) print(f"Inserted {len(values)} records successfully") except Exception as e: print(f"Error inserting partition: {e}") try: print("Starting foreachPartition") processed_df.foreachPartition(process_partition) print("Finished foreachPartition") except Exception as e: print(f"Error during foreachPartition: {e}") print("Data successfully copied to transaction.transactions_master PostgreSQL table") everything seems to be working (it's printing results in the logs, also tested db connection and it seems fine). However, when it gets to processed_df.foreachPartition(process_partition) it's like the code inside process_partition is not getting executed. I'm not seeing anything in the logs. Last entries are just "Starting foreachPartition" and "Data successfully copied to transaction.transactions_master PostgreSQL table" What could be the issue? Am I using foreachPartition wrong?
The foreachPartition function runs at the executor level, so you will see its detailed logs in the executor logs rather than the driver logs. It is worth checking the executor logs. Also, do you see the data getting updated in the PostgreSQL table?
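As a hedged illustration (not part of the original answer), one way to confirm from the driver that every partition was actually processed is to use mapPartitions instead of foreachPartition, since it returns a value per partition to the driver while foreachPartition's prints only appear in executor logs; the yielded summary and partition_counts are illustrative names:

def process_partition(partition):
    rows = list(partition)
    # ... insert rows into PostgreSQL here, as in the original function ...
    # Yield a small summary so the driver can see what happened on the executor.
    yield len(rows)

partition_counts = processed_df.rdd.mapPartitions(process_partition).collect()
print(f"Rows handled per partition: {partition_counts}")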
3
0
79,423,247
2025-2-8
https://stackoverflow.com/questions/79423247/how-to-color-excel-cells-with-specific-values-in-dataframe-python
I created a code to insert my dataframe, called df3, into an excel file. My code is working fine, but now I'd like to change background cells color in all cells based on value I tried this solution, I don't get any errors but I also don't get any cells colored My code: def cond_formatting(x): if x == 'OK': return 'background-color: lightgreen' elif x == 'NO': return 'background-color: red' else: return None print(pd.merge(df, df2, left_on='uniquefield', right_on='uniquefield2', how='left').drop('uniquefield2', axis=1)) df3 = df.merge(df2, left_on='uniquefield', right_on='uniquefield2', how='left').drop(['uniquefield2', 'tournament2', 'home2', 'away2', 'result2'], axis=1) df3 = df3[["home","away","scorehome","scoreaway","best_bets","oddtwo","oddthree","htresult","shresult","result","over05ht","over15ht","over05sh","over15sh","over05","over15","over25","over35","over45","goal","esito","tournament","uniquefield"]] df3 = df3.sort_values('best_bets') df3.style.applymap(cond_formatting) # determining the name of the file file_name = camp + '_Last_20' + '.xlsx' # saving the excel df3.to_excel(file_name, freeze_panes=(1, 0)) print('Tournament is written to Excel File successfully.') How I said, code is working but all background cells color are white (no colors) any suggestion? Thanks for your help
If you provided complete code, it would be easier for me to set colors for the cells for your excel file. But an example script given below might assist you: import pandas as pd from openpyxl import load_workbook from openpyxl.styles import PatternFill data = { "ID": [1, 2, 3, 4, 5], "Status": ["OK", "NO", "OK", "NO", "OK"] } df = pd.DataFrame(data) excel_filename = "status.xlsx" df.to_excel(excel_filename, index=False) wb = load_workbook(excel_filename) ws = wb.active # Define fill colors green_fill = PatternFill(start_color="00FF00", end_color="00FF00", fill_type="solid") # Green for "OK" red_fill = PatternFill(start_color="FF0000", end_color="FF0000", fill_type="solid") # Red for "NO" # Apply color based on cell value in the "Status" column for row in ws.iter_rows(min_row=2, max_row=ws.max_row, min_col=2, max_col=2): for cell in row: if cell.value == "OK": cell.fill = green_fill elif cell.value == "NO": cell.fill = red_fill wb.save(excel_filename) print("Excel file created successfully!")
2
3
79,428,256
2025-2-10
https://stackoverflow.com/questions/79428256/dash-multi-value-dropdown-wrapping-text-skips-the-first-line
I'm making a Dash Multi-Value Dropdown and finding that long labels use a text wrapping process that skips the first line entirely. When I make a Dash Multi-Value Dropdown like this: dcc.Dropdown( options=[ { 'label': str(entity), 'value': entity, 'title': entity, } for entity in entity_list ], multi=True, optionHeight=55, value=entity_list, ) Longer labels, like the 3rd row in this screenshot, use a text wrapping process that skips the first line entirely. The issue is shown in the middle row here: Is there a way to make those labels wrap text in a way that includes the first line? Even inelegant/hacky solutions are appreciated!
You can add explicit line breaks (\n) wrapped in html.Pre to preserve the line breaks and white space: from dash import Dash, html, dcc app = Dash() entity_list = ['vital sign measurements', 'vital sign\nmeasurements'] app.layout = [html.Div( [ dcc.Dropdown( options=[ { 'label': html.Pre(entity), 'value': entity, 'title': entity, } for entity in entity_list ], multi=True, value=entity_list, )], style={"width": "10%"}, ), ] if __name__ == '__main__': app.run()
2
1
79,428,279
2025-2-10
https://stackoverflow.com/questions/79428279/incoming-http-requests-dont-get-read-from-python-socket-in-time
I'm making a simple HTTP server with Python using the socket module as a personal exercise to understand the HTTP protocol, and it appears that some incoming requests don't get read from the socket until new requests comes along. The (very) summarized version of the code is: import socket id = 0 def handle_request(clientSocket): has_body = lambda l: "POST" in l or "PUT" in l or "PATCH" in l with clientSocket.makefile() as incomingMessage: global id requestFirstLine = "" requestHeaders = "" requestBody = "" requestID = id linesRead = 0 blanksRead = 0 maxBlanks = 2 for line in incomingMessage: if linesRead == 0: requestFirstLine = line # HTTP methods that don't have a body have a single blank line at the end # Methods that do have a body have one between the headers and the body and one at the end maxBlanks = maxBlanks - 1 if not has_body(HTTPStartLine) else maxBlanks if linesRead > 0 and blanksRead == 0 and (line != "\r\n" or line != "\n"): requestHeaders += line if linesRead > 0 and blanksRead == 1 and (line != "\r\n" or line != "\n"): requestBody += line if line == "\r\n" or line == "\n": blanksRead += 1 if blanksRead == maxBlanks: # Once all lines are read, process the request try: # process the HTTP request and generate a response print(f"Request {requestFirstLine} ID: {requestID}") response = dummy_method(requestFirstLine, requestHeaders, requestBody, requestID) clientSocket.sendall(response) print("Request Answered!") id += 1 return True except Exception as e: print(e) return False linesRead += 1 return False # failsafe def run_server(port): with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as serverSocket: serverSocket.bind(("localhost", port)) serverSocket.listen(5) while True: clientSocket, address = serverSocket.accept() print(f"Incoming connection from {address}") success = handle_request(clientSocket) if success: # Handle this This works wonderfully for simple requests, like fetching a single HTML file. However, when I tried to use this server to load a webpage with HTML, CSS (/assets/css/main.css) and an .ico icon (/assets/favicon.ico) in Firefox, only the first two requests get processed, as is printed on the log: Incoming connection from ('127.0.0.1', 49776) Request: GET / HTTP/1.1 ID: 0 Request Answered! Incoming connection from ('127.0.0.1', 49782) Request: GET /assets/css/main.css HTTP/1.1 ID: 1 Request Answered! Notice that the request for the .ico file was not received. When a new request is made, say like curl -i -X GET http://localhost:9999/ this gets printed in the log: Incoming connection from ('127.0.0.1', 49776) Request: GET / HTTP/1.1 ID: 0 Request Answered! Incoming connection from ('127.0.0.1', 49782) Request: GET /assets/css/main.css HTTP/1.1 ID: 1 Request Answered! Incoming connection from ('127.0.0.1', 52828) Request: GET / HTTP/1.1 ID: 2 Request Answered! Incoming connection from ('127.0.0.1', 52834) Request: GET /assets/favicon.ico HTTP/1.1 ID: 3 Request Answered! Only after the GET request from curl was answered the GET /assets/favicon.ico request from Firefox gets answered, and I cannot understand why that happens. I theorized that the time to process the second request (GET /assets/css/main.css) is so long that python is still processing this when the new request comes along, so that by the time execution returns to the serverSocket.accept() line, the new request is not caught in time, although it is still somehow recovered, how that happens is also something that I do not understand. Any help in understanding this issue will be greatly appreciated.
The issue you're running into probably happens because your server handles requests one at a time and waits (blocks) on serverSocket.accept(). Since Firefox sends multiple requests at once (HTML, CSS, favicon, etc.), only the first couple get processed immediately. The favicon request gets stuck in the queue, and your server doesn’t pick it up until a new request (like from curl) forces accept() to run again. Your server is blocking on accept(), meaning it stops and waits for a new connection before continuing. If multiple requests come in quickly, only one gets processed at a time, while others wait in the OS queue. The favicon request was there the whole time, but your server didn’t notice it until something else (like curl) woke it up. A good way to fix this is by using select.select(), which lets your server watch multiple sockets at once without blocking. import select def run_server(port): with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as serverSocket: serverSocket.bind(("localhost", port)) serverSocket.listen(5) serverSocket.setblocking(False) # Make it non-blocking connections = [] while True: readable, _, _ = select.select([serverSocket] + connections, [], []) for s in readable: if s is serverSocket: clientSocket, address = serverSocket.accept() clientSocket.setblocking(False) connections.append(clientSocket) print(f"Incoming connection from {address}") else: success = handle_request(s) connections.remove(s) Hopefully this will help, there may be an easier way though!
1
2
79,428,300
2025-2-10
https://stackoverflow.com/questions/79428300/interactive-brokers-ibkr-gateway-api-how-to-fix-no-security-definition-has-be
Requesting contract details for NQ futures... Error 200, reqId 9: No security definition has been found for the request, contract: Future(symbol='NQ', exchange='GLOBEX', currency='USD') No contract details found for NQ. Please verify your parameters in TWS/IB Gateway. #!/usr/bin/env python3 from ib_insync import IB, Future from datetime import datetime import sys def parse_expiry(expiry): """ Parse the expiry string from the contract details. IB typically provides expiry as YYYYMM or YYYYMMDD. For comparison, we convert to a datetime object. """ if len(expiry) == 6: # Assume format is YYYYMM; we'll use the 1st of the month for comparison. try: return datetime.strptime(expiry, "%Y%m") except Exception as e: print(f"Error parsing expiry {expiry}: {e}") return None elif len(expiry) == 8: try: return datetime.strptime(expiry, "%Y%m%d") except Exception as e: print(f"Error parsing expiry {expiry}: {e}") return None else: return None def main(): # Connect to IBKR Gateway on localhost using port 4002. ib = IB() print("Connecting to IBKR Gateway...") ib.connect('-----------', 4002, clientId=1) # Create a generic NQ futures contract. contract = Future() contract.symbol = "NQ" contract.secType = "FUT" contract.exchange = "GLOBEX" contract.currency = "USD" # Do not specify lastTradeDateOrContractMonth so that IB returns all available contracts. print("Requesting contract details for NQ futures...") details = ib.reqContractDetails(contract) if not details: print("No contract details found for NQ. Please verify your parameters in TWS/IB Gateway.") ib.disconnect() sys.exit(1) It would be great if you could help me with this. tried different exchanges, tried loading different contract symbols, built a test connection.py ,connects to IBKR Gateway
GLOBEX is an old exchange code that is no longer valid. It should be CME. You can look up exchange codes on IBKR's website: https://www.interactivebrokers.com/en/trading/products-exchanges.php Or you can look it up on QuantRocket's website with fewer clicks: https://www.quantrocket.com/data/?modal=ibkrexchangesmodal
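A minimal sketch of the corrected request with ib_insync (host, port and clientId are placeholders matching the question's setup; the contract month is left empty so all listed NQ contracts come back):

from ib_insync import IB, Future

ib = IB()
ib.connect('127.0.0.1', 4002, clientId=1)

# CME is the currently valid exchange code for NQ futures; GLOBEX is no longer accepted.
contract = Future(symbol='NQ', exchange='CME', currency='USD')
details = ib.reqContractDetails(contract)
for d in details:
    print(d.contract.lastTradeDateOrContractMonth, d.contract.localSymbol)

ib.disconnect()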
1
1
79,415,181
2025-2-5
https://stackoverflow.com/questions/79415181/quantum-circuit-not-drawing-on-colab
So, I tried to simulate this gate on Colab using matplotlib. The histogram is showing at the end, but not the circuit. I have tried out all fixes suggested by ChatGPT such as force inline and circuit_drawer, but it did not work. Please help me with the solution. qc_h = QuantumCircuit(1, 1, name = "qc") qc_h.h(0) qc_h.measure(0, 0) qc_h.draw('mpl') simulator_h = Aer.get_backend('qasm_simulator') transpiled_circuit_h = transpile(qc_h, simulator_h) job_h = simulator_h.run(transpiled_circuit_h, shots = 1024) result_h = job_h.result() counts_h = result_h.get_counts(qc_h) plot_histogram(counts_h)
We have to define the matplotlib axes (e.g. two matplotlib subplots) and input the axis as a kwargument for draw and plot_histogram. from qiskit import QuantumCircuit from qiskit_aer import Aer from qiskit.compiler import transpile from qiskit.visualization import plot_histogram from matplotlib import pyplot as plt fig = plt.figure(figsize=(10, 10)) ax = plt.subplot(1, 2, 1) qc_h = QuantumCircuit(1, 1, name='qc') qc_h.h(0) qc_h.measure(0, 0) qc_h.draw('mpl', ax=ax) ax = plt.subplot(1, 2, 2) simulator_h = Aer.get_backend('qasm_simulator') transpiled_circuit_h = transpile(qc_h, simulator_h) job_h = simulator_h.run(transpiled_circuit_h, shots=1024) result_h = job_h.result() counts_h = result_h.get_counts(qc_h) plot_histogram(counts_h, ax=ax) plt.show() Outputs: Google Colab pips: !pip install qiskit !pip install pylatexenc !pip install qiskit_aer
1
2
79,428,043
2025-2-10
https://stackoverflow.com/questions/79428043/why-ast-assign-targets-is-a-list
ast.Assign.targets is a list. a, b = c, d yields the following AST: Assign( targets=[ Tuple( elts=[ Name(id='a', ctx=Store()), Name(id='b', ctx=Store())], ctx=Store())], value=Tuple( elts=[ Name(id='c', ctx=Load()), Name(id='d', ctx=Load())], ctx=Load())) Under what conditions would targets be a list with multiple elements instead of a single Tuple?
When there are multiple destinations for the assignment of the value. That is: a = b = c = d AST: Module( body=[ Assign( targets=[ Name(id='a', ctx=Store()), Name(id='b', ctx=Store()), Name(id='c', ctx=Store())], value=Name(id='d', ctx=Load()))], type_ignores=[])
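A small sketch (not from the original answer) showing how the two shapes combine when collecting assigned names — each entry of targets is one '=' destination, and that destination may itself be a Tuple:

import ast

tree = ast.parse("a = b = c = d\nx, y = 1, 2")
for node in ast.walk(tree):
    if isinstance(node, ast.Assign):
        for target in node.targets:  # one entry per '=' destination
            names = [n.id for n in ast.walk(target) if isinstance(n, ast.Name)]
            print(names)
# prints ['a'], ['b'], ['c'] for the chained assignment, then ['x', 'y'] for the tuple unpacking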
2
3
79,427,869
2025-2-10
https://stackoverflow.com/questions/79427869/default-filter-expression-to-match-anything
What kind of polars expression (pl.Expr) might be used in a filter context that will match anything, including nulls? Use case: type hinting and helper functions that should return a polars.Expr.
The expression representing the literal value True might be used. See pl.lit for more details. Example. import polars as pl df = pl.DataFrame({ "a": [1, 2, None] }) df.filter(pl.lit(True)) shape: (3, 1) ┌──────┐ │ a │ │ --- │ │ i64 │ ╞══════╡ │ 1 │ │ 2 │ │ null │ └──────┘ Note. In general, simply True also works, but its not an instance of pl.Expr.
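For the type-hinting / helper-function use case from the question, a minimal sketch (the helper name build_filter is illustrative) could use that literal as the default, match-everything expression:

import polars as pl
from typing import Optional

def build_filter(expr: Optional[pl.Expr] = None) -> pl.Expr:
    # Fall back to an expression that keeps every row, including nulls.
    return expr if expr is not None else pl.lit(True)

df = pl.DataFrame({"a": [1, 2, None]})
df.filter(build_filter())                   # keeps all rows
df.filter(build_filter(pl.col("a") > 1))    # normal filtering still works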
4
5
79,421,635
2025-2-7
https://stackoverflow.com/questions/79421635/how-to-add-flask-sqlalchemy-multiple-tables-query-result-with-column-name-valu
I have got 2 tables, one is department and the other one is supplier. For every department there could be a supplier, or it could be none. So I use the outerjoin and the result is correct. I need to know how to add these results into a list @department.route("/department/getDepartmentDetails/<int:id>", methods=['GET', 'POST']) def getDepartmentDetails(id): department = db.session.query(Department, Supplier) \ .outerjoin(Department, Department.default_supplier_id == Supplier.id) \ .filter(Department.id == id) If I query a single table like below, I can use getattr to get the desired result department = Department.query.filter(Department.id==id).first() dept={c.name: str(getattr(department, c.name)) for c in department.__table__.columns} return jsonify(dept) How do I get the same for the multi-table query result?
Assuming that you want to return a single dict (or JSON object) for each row in the resultset, all that is required is generate a dict for each entity in each (Department, Supplier) row, and merge them. However we need to consider that the models may share some attribute names, which must be disambiguated. The disambiguation can be handled by creating a function, based on your code, which accepts a prefix string and an entity, and returns a dict of prefixed attribute names and values: def to_prefix_dict(prefix, entity): if not entity: return {} return { f'{prefix.lower()}_{c.name}': getattr(entity, c.name) for c in entity.__table__.columns } It can be used like this: import functools import sqlalchemy as sa ... # This code is vanilla SQLAlchemy, but Flask-SQLAlchemy code will be # essentially the same: substitute `db.session.query` for `sa.select` and # `db.session` for `s`. with Session() as s: q = sa.select(Department, Supplier).outerjoin( Supplier, Department.default_supplier_id == Supplier.id ) rs = s.execute(q) result = [ functools.reduce( dict.__or__, [to_prefix_dict(key, entity) for key, entity in zip(rs.keys(), row)], ) for row in rs ] Resulting in a structure like this when passed through jsonify, assuming one department with a supplier, and one without: [ { "department_id": 1, "department_name": "D1", "department_default_supplier_id": 1, "supplier_id": 1, "supplier_name": "S1" }, { "department_id": 2, "department_name": "S2", "department_default_supplier_id": null } ] Since the ORM entities returned by the query aren't really required for the output, you could consider specifying the required columns and their labels directly: q = sa.select( Department.name.label('department'), Supplier.name.label('supplier') ).outerjoin(Supplier, Department.default_supplier_id == Supplier.id) rs = s.execute(q) result = [dict(r) for r in rs.mappings()] Output: [ { "department": "D1", "supplier": "S1" }, { "department": "S2", "supplier": null } ] Manual labelling can be avoided by configuring the query to automatically label each column as <table name + column name>. If your table names are not siutable for exposure you can use an alias: dept = orm.aliased(Department, name='dep') supp = orm.aliased(Supplier, name='sup') q = ( sa.select(dept.name, supp.name) .outerjoin(supp, dept.default_supplier_id == supp.id) .set_label_style(sa.LABEL_STYLE_TABLENAME_PLUS_COL) ) rs = s.execute(q) result = [dict(r) for r in rs.mappings()] Output: [ { "department": "D1", "supplier": "S1" }, { "department": "S2", "supplier": null } ] If all the column in both tables are required, you can unpack both tables' columns attributes. dept = sa.alias(Department.__table__, 'dep') supp = sa.alias(Supplier.__table__, 'sup') q = ( sa.select(*dept.columns, *supp.columns) .outerjoin(supp, dept.c.default_supplier_id == supp.c.id) .set_label_style(sa.LABEL_STYLE_TABLENAME_PLUS_COL) ) rs = s.execute(q) result = [dict(r) for r in rs.mappings()] Output: [ { "dep_id": 1, "dep_name": "D1", "dep_default_supplier_id": 1, "sup_id": 1, "sup_name": "S1" }, { "dep_id": 2, "dep_name": "S2", "dep_default_supplier_id": null, "sup_id": null, "sup_name": null } ] Alternatively, this function can used to return a nested row structure, which may be useful if the consumer of the JSON needs to handle departments and suppliers separately: def to_dict(entity): if not entity: return {} return {c.name: getattr(entity, c.name) for c in entity.__table__.columns} like this: ... 
result = [ {key_name: to_dict(entity) for key_name, entity in zip(rs.keys(), row)} for row in rs ] With this result: [ { "Department": { "id": 1, "name": "D1", "default_supplier_id": 1 }, "Supplier": { "id": 1, "name": "S1" } }, { "Department": { "id": 2, "name": "S2", "default_supplier_id": null }, "Supplier": {} } ]
1
1
79,426,835
2025-2-10
https://stackoverflow.com/questions/79426835/pandas-in-python-dbt-model-duckdb
I'm trying to use pandas in a dbt python model (dbt-duckdb), but I keep getting the problem Python model failed: No module named 'pandas'. Here you can find my dbt model configuration: import boto3 import pandas as pd def model(dbt, session): dbt.config( materialized="table", packages = ["pandas==2.2.3"], python_version="3.11" ) key = "my_key" bucket = "my_bucket" client = boto3.client('s3') return None Also, I know duckdb has a way of importing s3 files, but I need to manipulate the files before duckdb reads them because they are not correct. This is my models yaml config: version: 2 models: - name: test config: packages: - "pandas==2.2.3" I also have a virtual environment with pandas installed. Anyone who has experience with it, thanks in advance!
Found it: make sure that the venv you are using is called dbt-env. dbt will automatically pick up this venv, where you have installed pandas or whatever package you need!
1
1
79,426,836
2025-2-10
https://stackoverflow.com/questions/79426836/create-a-new-dataframe-form-an-existing-dataframe-taking-only-the-rows-matching
I have a dataframe called "base_dataframe" that looks as following: F_NAME L_NAME EMAIL 0 Suzy Maripol [email protected] 1 Anna Smith [email protected] 2 Flo Mariland [email protected] 3 Sarah Linder [email protected] 4 Nala Kross [email protected] 5 Sarosh Fink [email protected] I would like to create a new dataframe that only contains the rows matching specific regular expressions that I define: For column "F_NAME" I only want to copy over the rows that contain "Sar" For column "L_NAME" I only want to copy over the rows that contain "Mari" The way I tackle this in my code is : sar_df = base_dataframe["F_NAME"].str.extract(r'(?P<sar_content>(^Sar.*))') mari_df = base_dataframe["L_NAME"].str.extract(r'(?P<mar_content>(^Mari.*))') Then I copy those filtered columns/DFs over to my target dataframe "new_dataframe": new_dataframe["selected_F_NAME"] = sar_df.copy new_dataframe["selected_L_NAME"] = mari_df.copy And my "new_dataframe" would at the end look like this : F_NAME L_NAME EMAIL 0 Suzy Maripol [email protected] 2 Flo Mariland [email protected] 3 Sarah Linder [email protected] 5 Sarosh Fink [email protected] This works for me but it takes an extremely long time to copy over all the data to my "new_dataframe", because my "base_dataframe" has many hundred thousands of rows. I also need to apply multiple different regular-expressions on multiples columns ( the dataframe example I gave is basically simplified, just to explain what I want to do). I am pretty sure there is a more optimised way to do this, but can't figure it out right now. I would appreciate any help with this.
Since your goal seems to be filtering, couldn't you replace your extract logic with simple boolean indexing? # identify rows with F_NAME starting with "Sar" m1 = base_dataframe['F_NAME'].str.startswith('Sar') # identify rows with L_NAME starting with "Mari" m2 = base_dataframe['L_NAME'].str.startswith('Mari') # keep rows with either match out = base_dataframe[m1|m2] Or, if you have multiple conditions (this needs import numpy as np): conditions = [base_dataframe['F_NAME'].str.startswith('Sar'), base_dataframe['L_NAME'].str.startswith('Mari'), # ... other conditions, ] out = base_dataframe[np.logical_or.reduce(conditions)] Output: F_NAME L_NAME EMAIL 0 Suzy Maripol [email protected] 2 Flo Mariland [email protected] 3 Sarah Linder [email protected] 5 Sarosh Fink [email protected]
3
2
79,426,847
2025-2-10
https://stackoverflow.com/questions/79426847/how-to-express-the-dot-product-of-3-dimensions-arrays-with-numpy
How do I do the following dot product in 3 dimensions with numpy? I tried: x = np.array([[[-1, 2, -4]], [[-1, 2, -4]]]) W = np.array([[[2, -4, 3], [-3, -4, 3]], [[2, -4, 3], [-3, -4, 3]]]) y = np.dot(W, x.transpose()) but received this error message: y = np.dot(W, x) ValueError: shapes (2,2,3) and (2,1,3) not aligned: 3 (dim 2) != 1 (dim 1) Its 2-dimensional equivalent is: x = np.array([-1, 2, -4]) W = np.array([[2, -4, 3], [-3, -4, 3]]) y = np.dot(W,x) print(f'{y=}') which will return: y=array([-22, -17]) Also, y = np.dot(W,x.transpose()) will return the same answer.
The issue comes from the 3D transposition which does not transpose the axes you want by default. You need to specify the right axes during this call: W @ x.transpose(0, 2, 1) # Output: # array([[[-22], # [-17]], # # [[-22], # [-17]]])
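An equivalent way to express the same batched product, in case it reads more clearly, is np.einsum; this is just a sketch reusing the arrays from the question, and it produces the same (2, 2, 1) result as W @ x.transpose(0, 2, 1):

import numpy as np

x = np.array([[[-1, 2, -4]], [[-1, 2, -4]]])
W = np.array([[[2, -4, 3], [-3, -4, 3]],
              [[2, -4, 3], [-3, -4, 3]]])

# 'bij,bkj->bik': batch axis b, contract over the shared last axis j
y = np.einsum('bij,bkj->bik', W, x)
print(y)  # same values as W @ x.transpose(0, 2, 1): [[-22], [-17]] per batch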
4
6
79,426,845
2025-2-10
https://stackoverflow.com/questions/79426845/in-pandas-a-groupby-followed-by-boxplot-gives-keyerror-none-of-indexa-1
This very simple script gives error KeyError: "None of [Index(['A', 1], dtype='object')] are in the [index]": import pandas as pd import matplotlib.pyplot as plt L1 = ['A','A','A','A','B','B','B','B'] L2 = [1,1,2,2,1,1,2,2] V = [9.8,9.9,10,10.1,19.8,19.9,20,20.1] df = pd.DataFrame({'L1':L1,'L2':L2,'V':V}) print(df) df.groupby(['L1','L2']).boxplot(column='V') plt.show() So my dataframe is: L1 L2 V 0 A 1 9.8 1 A 1 9.9 2 A 2 10.0 3 A 2 10.1 4 B 1 19.8 5 B 1 19.9 6 B 2 20.0 7 B 2 20.1 and I would expect a plot with four boxplot showing the values V, the labels of boxplots should be A/1, A/2, B/1, B/2. I had a look at How To Solve KeyError: u"None of [Index([..], dtype='object')] are in the [columns]" but I was not able to fix my error, AI tools are not helping me either. What am I not understanding?
You can use boxplot for the grouping as well df.boxplot(column='V', by=['L1', 'L2'])
4
3
79,426,130
2025-2-10
https://stackoverflow.com/questions/79426130/how-to-create-a-class-that-runs-business-logic-upon-a-query
I'd like to create a class/object that I can use for querying, that contains business logic. Constraints: Ideally that class/object is not the same one that is responsible for table creation. It's possible to use the class inside a query Alembic should not get confused. SQLAlchemy Version: 1.4 and 2.x. How do I do that? Is that even possible? Use Case My database table has two columns: value_a and show_value_a. show_value_a specifies if the value is supposed to be shown on the UI or not. Currently, all processes that query value_a have to check if show_value_a is True; If not, the value of value_a will be masked (i.e. set to None) upon returning. Masking the value is easy to forget. Also, each process has their own specific query (with their specific JOINs), so it's ineffective to do this in some kind of pattern form. Example Table definition: from sqlalchemy import Column, String, Boolean class MyTable(Base): __tablename__ = "mytable" valueA = Column("value_a", String(60), nullable=False) showValueA = Column("show_value_a", Boolean, nullable=False) Data: value_a show_value_a "A" True "B" False "C" True Query I'd like to do: values = session.query(MyTable.valueA).all() # returns ["A", None, "C"] Querying the field will intrinsically check if show_value_a is True. If it is, the value is returned. If not, None is returned
You can use an execute event to intercept queries and modify them before execution. This sample event Checks the session's info dictionary to determine whether the query relates to an entity of interest Creates a modified query that checks whether valueA can be shown Replaces the original query with the modified query @sa.event.listens_for(Session, 'do_orm_execute') def _do_orm_execute(orm_execute_state): if orm_execute_state.is_select: statement = orm_execute_state.statement col_descriptions = statement.column_descriptions if ( col_descriptions[0]['entity'] in orm_execute_state.session.info['check_entities'] ): expr = sa.case((MyTable.showValueA, MyTable.valueA), else_=None).label( 'value_a' ) columns = [ c if c.name != 'value_a' else expr for c in statement.inner_columns ] new_statement = sa.select(MyTable).from_statement(sa.select(*columns)) orm_execute_state.statement = new_statement Note that this will only work for 2.0-style queries (or 1.4 with the future option set on engines and sessions). The code assumes a simple select(MyTable) query - you would need to add where criteria, order_by etc from the original query. Joins etc might also require some additional work. Here's a runnable example: import sqlalchemy as sa from sqlalchemy import orm from sqlalchemy.orm import Mapped, mapped_column class Base(orm.DeclarativeBase): pass class MyTable(Base): __tablename__ = 't79426130' id: Mapped[int] = mapped_column(primary_key=True) valueA: Mapped[str] = mapped_column('value_a') showValueA: Mapped[bool] = mapped_column('show_value_a') engine = sa.create_engine('sqlite://', echo=True) Base.metadata.create_all(engine) info = {'check_entities': {MyTable}} Session = orm.sessionmaker(engine, info=info) @sa.event.listens_for(Session, 'do_orm_execute') def _do_orm_execute(orm_execute_state): if orm_execute_state.is_select: statement = orm_execute_state.statement col_descriptions = statement.column_descriptions if ( col_descriptions[0]['entity'] in orm_execute_state.session.info['check_entities'] ): expr = sa.case((MyTable.showValueA, MyTable.valueA), else_=None).label( 'value_a' ) columns = [ c if c.name != 'value_a' else expr for c in statement.inner_columns ] new_statement = sa.select(MyTable).from_statement(sa.select(*columns)) orm_execute_state.statement = new_statement with Session.begin() as s: mts = [MyTable(valueA=v, showValueA=s) for v, s in zip('ABC', [True, False, True])] s.add_all(mts) with Session() as s: for mt in s.scalars(sa.select(MyTable)): print(mt.valueA, mt.showValueA)
2
0
79,426,203
2025-2-10
https://stackoverflow.com/questions/79426203/problem-scraping-table-row-data-into-an-array
Background I am looking to measure statistics from a website wotstars for a XBOX and Playstation game, World of Tanks Console. Initially I tried just using Excel to scrape the site directly for me into Power Query, the immediate issue was that only 5 rows of recent match data (5 games) is available from the website as loaded, a button needs to be click to view the last 100 matches. I thought this was a good idea to learn more about web-scraping with python. I successfully have managed to use Selenium to activate the "View More" button to shows all 100 rows and I can return the table of interest. Problem I am currently stuck through writing the td data to a 2D array that I can then dump to a csv file, I seem to be writing 100 columns rather than 100 rows? I have looked at sites such as A Guide to Scraping HTML Tables with Pandas and BeautifulSoup for approaches (but this particular example doesn't show the source table). Other If I look at Excel the query connection that attaches to this table is shown below Code from selenium import webdriver from selenium.webdriver.common.by import By import time from bs4 import BeautifulSoup import pandas as pd # Set up the WebDriver driver = webdriver.Chrome() # Famous player url = 'https://www.wotstars.com/xbox/6757320' # Open the login page driver.get(url) time.sleep(5) login_button = driver.find_elements(By.CLASS_NAME, "_button_1gcqp_2") for login in login_button: print (login.text) if login_button[3].text == 'Start tracking': login_button[4].click() print("button activated") time.sleep(2) #login_button[3].click() soup = BeautifulSoup(driver.page_source, 'html.parser') rows = [] for child in soup.find_all('table')[3].children: row = [] for td in child: #print (td.text) try: row.append(td.text.replace('\n\r', '')) except: continue if len(row) > 0: #print (row) rows.append(row) print (len(rows[0])) #not working #df = pd.DataFrame(rows[1:], columns=rows[0]) #display(HTML(df.to_html())) print("done") # Close the browser driver.quit Object Inspector
Since you have to use selenium for the view more button click anyway, you don't need to use beautifulsoup. Using only class names for selecting elements is difficult for the page as it has multiple tables and buttons with the same class. Instead, use XPath to find the section BattleTracker by text. that way you can access the right table every time. To scrape the table using pandas, you have to provide the HTML. You can get that by selecting the table you want and then pass it to pandas. Check the following code: from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By import pandas as pd # Set up the WebDriver chrome_options = Options() chrome_options.add_argument("--headless") # headless mode driver = webdriver.Chrome(options=chrome_options) wait = WebDriverWait(driver, 30) # Famous player url = 'https://www.wotstars.com/xbox/6757320' # Open the login page driver.get(url) # wait for the view more button to be available and click wait.until(EC.element_to_be_clickable((By.XPATH, "//div[contains(@class,'sessionWrap') and contains(.,'BattleTracker')]//button[text()='View more']"))).click() # wait for the table to be expanded wait.until(EC.visibility_of_element_located((By.XPATH,"//div[contains(@class,'sessionWrap') and contains(.,'BattleTracker')]//table/tbody/tr[6]"))) # getting the table which will be passed to pandas table = driver.find_element(By.XPATH,"//div[contains(@class,'sessionWrap') and contains(.,'BattleTracker')]//table") # parse the table using pandas df = pd.read_html(table.get_attribute('outerHTML')) print(df) # Close the browser driver.quit() OUTPUT: [ Tank Result Kills Dmg+A Dmg Received ... Hits Pens Spots WN8 Duration When 0 IXFraternité Char Mle. 75 Victory (survived) 2 3089 2469 0 ... 15 86.67% 2 3768 5m 1s 2h 4m ago 1 IXFraternité Char Mle. 75 Victory (destroyed) 2 3217 2649 1200 ... 16 87.50% 2 4144 5m 5s 2h 12m ago 2 IXFraternité Char Mle. 75 Victory (survived) 1 3556 3123 176 ... 17 94.12% 1 3921 3m 58s 2h 19m ago 3 IXFraternité Char Mle. 75 Victory (survived) 0 3055 2803 691 ... 16 87.50% 2 2731 4m 31s 2h 26m ago 4 IXFraternité Char Mle. 75 Victory (survived) 2 4434 4157 0 ... 22 100.00% 0 6526 5m 41s 2h 36m ago .. ... ... ... ... ... ... ... ... ... ... ... ... ... 95 IXObject 777 Version II Defeat (destroyed) 0 2948 1357 1850 ... 5 60.00% 3 952 4m 13s 2/3/2025, 2:30:38 AM 96 IXVK 45.02 (P) Ausf. B7 Defeat (destroyed) 1 1417 1417 1950 ... 3 100.00% 0 1344 3m 2/3/2025, 2:28:04 AM 97 IXMäuschen Victory (survived) 3 8778 6818 453 ... 31 63.33% 1 14097 5m 52s 2/3/2025, 2:17:47 AM 98 IXMäuschen Defeat (destroyed) 0 5529 3846 2150 ... 8 87.50% 1 3256 6m 5s 2/3/2025, 2:09:19 AM 99 VIIType 62 Defeat (destroyed) 0 542 542 880 ... 3 100.00% 3 691 1m 33s 2/3/2025, 2:04:01 AM [100 rows x 16 columns]]
2
2
79,425,937
2025-2-10
https://stackoverflow.com/questions/79425937/how-do-i-modify-a-list-value-in-a-for-loop
In How to modify list entries during for loop?, the general recommendation is that it can be unsafe so don't do it unless you know it's safe. In comments under the first answer @martineau says: It [the loop variable] doesn't make a copies. It just assigns the loop variable name to successive elements or value of the thing being iterated-upon. That is the behavior I expect and want, but I'm not getting it. I want to remove trailing Nones from each value in the list. My loop variable is modified correctly but the list elements remain unchanged. How can I get the loop variable fv to be a pointer to the rows of list foo, not a copy of the row? Extra credit: my code is sucky non-pythonic, so a solution using comprehensions or slices instead of for loops would be preferred. foo = [ ['a', 'b', None, None], ['c', None, 'e', None], ['f', None, None, None] ] desired = [ ['a', 'b'], ['c', None, 'e'], ['f'] ] for fv in foo: for v in range(len(fv) -1, 0, -1): if fv[v] == None: fv = fv[:v] print(f' {v:2} {fv}') else: break print(foo) The output is: 3 ['a', 'b', None] 2 ['a', 'b'] 3 ['c', None, 'e'] 3 ['f', None, None] 2 ['f', None] 1 ['f'] [['a', 'b', None, None], ['c', None, 'e', None], ['f', None, None, None]]
The line fv = fv[:v] only rebinds the name fv to a new list slice; it does not mutate the list that foo holds, so foo stays unchanged. You need to mutate the existing list objects. One solution is a while loop that strips trailing None values until none are left. The .pop() method mutates row in place instead of returning a new list: for row in foo: while row and row[-1] is None: row.pop() assert foo == desired
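For the "extra credit" part of the question, here is a hedged sketch of a non-mutating alternative: it builds new lists with a comprehension instead of editing foo in place, and the helper name strip_trailing_nones is mine, not from the thread.

```python
foo = [
    ['a', 'b', None, None],
    ['c', None, 'e', None],
    ['f', None, None, None],
]
desired = [['a', 'b'], ['c', None, 'e'], ['f']]

def strip_trailing_nones(row):
    end = len(row)
    while end and row[end - 1] is None:
        end -= 1              # walk back past trailing None values
    return row[:end]          # the slice keeps everything before them

stripped = [strip_trailing_nones(row) for row in foo]
assert stripped == desired    # foo itself is left untouched here
```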
3
3
79,423,875
2025-2-8
https://stackoverflow.com/questions/79423875/equivalent-of-pythons-selection-by-multiindex-level-especially-columns-in-juli
My understanding is that DataFrames do not support MultiIndexing, which generally does not pose much problems, but translating some pythonic habits to Julia poses difficulties. I wonder how one could load and subselect features by columns, as in the example below. import numpy as np import pandas as pd #generating sample data nsmpls = 10 smpls = [f'smpl{j}' for j in range(nsmpls)] nfeats = 5 feats = [f'feat{j}' for j in range(nfeats)] data = np.random.rand(nfeats, nsmpls) countries = ['France'] * 2 + ['UK'] * 3 + ['US'] * 5 df = pd.DataFrame(data, index=feats, columns=pd.MultiIndex.from_tuples(zip(countries, smpls))) df.to_csv('./data.tsv', sep='\t') #--------------------------------------------------------------------- #loading dataset df = pd.read_csv('./data.tsv', sep='\t', index_col=0, header=[0,1]) #extracting subset dg = df.xs('France', level=0, axis=1) print(dg.shape) #iterating for country, group in df.groupby(level=0, axis=1): print('#samples: {}'.format(group.shape[1]))
Something like this ? using DataFrames, CSV # Used your sample data df = DataFrame(CSV.File("data.tsv")) # Filter the columns by country name france_cols = findall(x -> occursin("France", x), names(df)) # Subset the df dg = select(df, france_cols) # Optional : use "sampleX" as col names instead of the country name rename!(dg, collect(dg[1, :])) dg = dg[2:end, :] display(dg) println(size(dg)) By default, DataFrames adds numbers to similar column names like this : France, France_1 etc so I selected all the columns containing "France".
2
2
79,422,455
2025-2-8
https://stackoverflow.com/questions/79422455/gimp-wire-read-error-upon-adding-python-script-in-correct-folder
I am trying to learn GIMP 2.10.38 with python and I don't know either that well, so I tried adding a simple python script but it doesn't show up in GIMP at all. Here is the script I added: #!/usr/bin/env python from gimpfu import * def simple_plugin(image, drawable): gimp.message("Hello from a simple plugin!") register( "python-fu-simple-plugin", "Simple Plugin Test", "A simple test plugin", "Your Name", "Your Name", "2024", "<Image>/Filters/Simple Test", # Or any menu location "RGB*", [], simple_plugin, ) main() The script is in the correct folder - confirmed that in preferences - folders - plug-ins Searching online I came across the --verbose argument and that led me to this error: Querying plug-in: 'C:\Users\Prafulla\AppData\Roaming\GIMP\2.10\plug-ins\simplepy.py' gimp-2.10: LibGimpBase-WARNING: gimp-2.10: gimp_wire_read(): error I know it is a general error related to communication within GIMP but couldn't find enough information online. I have tried a reinstall already, but not tried deleting all folders manually yet. Any idea how I should proceed?
Your problem isn't the "wire-read" error (which has been showing in my Gimp console for years without being related to any trouble). In the same terminal output, you should also see this: Traceback (most recent call last): File "/path/to/your/file.py", line 17, in <module> simple_plugin, TypeError: register() takes at least 11 arguments (10 given) The missing argument is the (usually empty) list of return values register( "python-fu-simple-plugin", "Simple Plugin Test", "A simple test plugin", "Your Name", "Your Name", "2024", "<Image>/Filters/Simple Test", # Or any menu location "RGB*", [], # <- your existing list of input values [], # <- your missing list of returned values simple_plugin, ) This said this is a deprecated registration usage, in more modern times you would use: #!/usr/bin/env python from gimpfu import * def simple_plugin(image, drawable): gimp.message("Simple plugin, working on %s in %s" % (drawable.name,image.name)) register( "simple-plugin", # "python-fu" added by default anyway... "Simple Plugin Test", "A simple test plugin", "Your Name", "Your Name", "2025", "Simple Test", # Just the menu item "RGB*,GRAY*", # GRAY necessary if you want the plugin to also work on mask/Channels. [ (PF_IMAGE, 'image', 'Input image', None), # Explicit input image (if necessary) (PF_DRAWABLE,'drawable','Input drawable',None), # Explicit input drawable (if necessary) ], [], simple_plugin, menu="<Image>/Filters" # Menu location of menu item above ) main() Where the two main differences are that: You explicitly declare the input image and drawable. If they are in that order as the first two arguments, Gimp will use the current image/drawable and not ask for them in a dialog. This syntax allows you to write scripts that don't even need an existing image, or that don't need any input drawable The menu where the item appears is defined separately. You can put the menu in any <dialog> that is coherent with the second argument: <Image>, but also <Layers>, <Vectors>, <Brushes>, <Palettes>...
1
1
79,419,153
2025-2-6
https://stackoverflow.com/questions/79419153/c-to-python-rsa-implement
Just trying to rewrite this c# code to python. Server send public key(modulus, exponent), need to encrypt it with pkcs1 padding. using (TcpClient client = new TcpClient()) { await client.ConnectAsync(ip, port); using (NetworkStream stream = client.GetStream()) { await App.SendCmdToServer(stream, "auth", this.Ver.ToString().Split('.', StringSplitOptions.None)); byte[] modulus = new byte[256]; int num2 = await stream.ReadAsync(modulus, 0, modulus.Length); byte[] exponent = new byte[3]; int num3 = await stream.ReadAsync(exponent, 0, exponent.Length); this.ServerRsa = RSA.Create(); this.ServerRsa.ImportParameters(new RSAParameters() { Modulus = modulus, Exponent = exponent }); using (MemoryStream data = new MemoryStream()) { using (BinaryWriter writer = new BinaryWriter((Stream) data)) { writer.Write(string1); writer.Write(string2); await App.SendDataToServer(stream, this.ServerRsa.Encrypt(data.ToArray(), RSAEncryptionPadding.Pkcs1)); } } } } Everything works fine, except encrypted result by python. I've tried with rsa and pycryptodome, no luck at all, server returns reject. Tried something like this (rsa) server_rsa = rsa.newkeys(2048)[0] server_rsa.n = int.from_bytes(modulus, byteorder='big') server_rsa.e = int.from_bytes(exponent, byteorder='big') data = (string1 + string2).encode() encrypted_data = rsa.encrypt(data, server_rsa) or this (pycryptodome) pubkey = construct((int.from_bytes(modulus, 'big'), int.from_bytes(exponent, 'big'))) cipher = PKCS1_v1_5.new(pubkey) encrypted_data = cipher.encrypt(data) Is there some special python RSA implementation, that just not working with C#, or vice versa?
The PyCryptodome is a good choice for cryptographic tasks in Python. The problem is with the data formatting, you are concatenating the strings directly in Python and the BinaryWriter in C# write the lengths of the strings as prefixes.This code show how you can do that: import struct data = b"" data += struct.pack(">I", len(string1.encode('utf-8'))) # add length as big-endian unsigned int data += string1.encode('utf-8') data += struct.pack(">I", len(string2.encode('utf-8'))) data += string2.encode('utf-8') In the code above I encoded the length of the strings as big-endian unsigned int but as was commented by @Topaco the BinaryWriter encodes the length prefix with LEB128. So to replicate BinaryWriter you can do this: import leb128 data = bytearray() data += leb128.u.encode(len(string1.encode())) data += string1.encode() data += leb128.u.encode(len(string2.encode())) data += string2.encode() I used the leb128 package that can be installed with pip install leb128. But you can create a function to do that encoding def encode_leb128(number): if number == 0: return bytearray([0]) result = bytearray() while number > 0: byte = number & 0x7f number >>= 7 if number > 0: byte |= 0x80 result.append(byte) return result
6
3
79,417,439
2025-2-6
https://stackoverflow.com/questions/79417439/how-to-display-additional-count-near-progress-bar-in-enquiry-screen
I am working on Odoo 18 and need to modify the Enquiry Screen to display additional counts near the progress bar. The current progress bar already shows the Expected Revenue, but I want to add more details for better visibility. Requirements CHECK THIS IMAGES 👇 Current view Expected View For the "NEW" Stage: The display format should be: 525K - 12 (10) 525K → Expected Revenue (already displayed) 12 → Total Enquiries for the Year (10) → Count of enquiries currently in this stage For all other stages: The display format should be: 525K - 6 6 → Count of enquiries currently in this stage My Questions Where should I modify the code to achieve this? Should it be done in a custom module, or can it be handled via the XML views? How can I fetch and display these counts dynamically for each stage in Odoo 18? If it needs changes in OWL JS, how should I approach updating the progress bar component? What I Have Tried I checked the model (crm.lead or similar) but didn’t find an existing field that directly provides the total count of enquiries per year. I explored the OWL JS components but am unsure how to modify the display logic near the progress bar.
To display an additional count near the progress bar in Odoo 18's Enquiry Screen, follow these steps: Template Inheritance: Extend the existing ColumnProgress template from the crm module using XML. This allows you to inject custom elements into the progress bar's structure. XPath Positioning: Use an XPath expression to target the <AnimatedNumber> element (which displays the current value) and insert your additional count after it. Manifest Configuration: Ensure your module's __manifest__.py declares a dependency on crm and loads the XML asset in the backend. Code Solution: <?xml version="1.0" encoding="UTF-8"?> <templates xml:space="preserve"> <!-- Inherit and extend the CRM progress bar template --> <t t-inherit="crm.ColumnProgress" t-inherit-mode="extension"> <!-- Insert additional count after the AnimatedNumber element --> <xpath expr="//AnimatedNumber" position="after"> <span class="ml-2"> <!-- Replace 'your_field' with the actual field name --> ####### </span> </xpath> </t> </templates> Manifest File (__manifest__.py): { # ... other manifest keys 'assets': { 'web.assets_backend': [ 'your_module/static/src/xml/your_template.xml', # XML asset path ], }, # ... other manifest keys }
2
2
79,422,061
2025-2-7
https://stackoverflow.com/questions/79422061/problem-with-fastapi-pydantic-and-kebab-case-header-fields
In my FastAPI project, if I create a common header definition with Pydantic, I find that kebab-case header fields aren't behaving as expected. The "magic" conversion from kebab-case header fields in the request to their snake_case counterparts is not working, in addition to inconsistencies in the generated Swagger docs. What is the right way to specify this Pydantic header class so that the Swagger docs and behavior match? Here's a minimal reproduction of the problem: ### main.py from typing import Annotated from fastapi import FastAPI, Header from pydantic import BaseModel, Field app = FastAPI() class CommonHeaders(BaseModel): simpleheader: str a_kebab_header: str | None = Field( default=None, title="a-kebab-header", alias="a-kebab-header", description="This is a header that should be specified as `a-kebab-header`", ) @app.get("/") def root_endpoint( headers: Annotated[CommonHeaders, Header()], ): result = {"headers received": headers} return result If I run this and look at the Swagger docs at http://localhost:8000/docs I see this, which looks correct: And if I "try it out" it will generate what I would expect as the correct request: curl -X 'GET' \ 'http://localhost:8000/' \ -H 'accept: application/json' \ -H 'simpleheader: foo' \ -H 'a-kebab-header: bar' But in the response, it becomes clear it did not correctly receive the kebab-case header: { "headers received": { "simpleheader": "foo", "a-kebab-header": null } } Changing the header name to snake_case "a_kebab_header" in the request does not work, either. Updating the header definition to look like this doesn't work as expected, either. The Swagger docs and actual behavior are inconsistent. class CommonHeaders(BaseModel): simpleheader: str a_kebab_header: str | None = Field( default=None, description="This is a header that should be specified as `a-kebab-header`", ) Notice this now results in the Swagger docs specifying it in snake_case: And using "try it out" results in the snake_case variant: curl -X 'GET' \ 'http://localhost:8000/' \ -H 'accept: application/json' \ -H 'simpleheader: foo' \ -H 'a_kebab_header: bar' But SURPRISINGLY this doesn't work! The response: { "headers received": { "simpleheader": "foo", "a_kebab_header": null } } But in a SURPRISE ENDING, if I manually re-write the request in kebab-case: curl -X 'GET' \ 'http://localhost:8000/' \ -H 'accept: application/json' \ -H 'simpleheader: foo' \ -H 'a-kebab-header: bar' it finally picks up that header value via the magic translation and I get the desired results back: {"headers received":{"simpleheader":"foo","a_kebab_header":"bar"}} What is the right way to specify this Pydantic header class so that the Swagger docs and behavior match? If the docs are inconsistent with behavior I'm going to get hassled. 
As a final thought: the following way works correctly in both the OpenAPI documentation and in the application (displaying and working as kebab-case), BUT it doesn't use Pydantic and so I lose the ability to define and use a common header structure easily across my project, and instead need to declare them individually for each endpoint: """Alternative version without Pydantic.""" from typing import Annotated from fastapi import FastAPI, Header app = FastAPI() @app.get("/") def root_endpoint( simpleheader: Annotated[str, Header()], a_kebab_header: Annotated[ str | None, Header( title="a-kebab-header", description="This is a header that should be specified as `a-kebab-header`", ), ] = None, ): result = { "headers received": { "simpleheader": simpleheader, "a_kebab_header": a_kebab_header, } } return result
The only way I found is to define parameters without using Pydantic model. To use this common parameters in different endpoints you can define them using dependency function: from typing import Annotated from fastapi import Depends, FastAPI, Header from pydantic import BaseModel app = FastAPI() class CommonHeaders(BaseModel): simpleheader: str a_kebab_header: str | None def get_common_headers( simpleheader: Annotated[str, Header()], a_kebab_header: str | None = Header( default=None, title="a-kebab-header", alias="a-kebab-header", description="This is a header that should be specified as `a-kebab-header`", ), ): return CommonHeaders(simpleheader=simpleheader, a_kebab_header=a_kebab_header) @app.get("/") def root_endpoint( headers: Annotated[CommonHeaders, Depends(get_common_headers)], ): result = {"headers received": headers} return result @app.get("/another") def another_endpoint( headers: Annotated[CommonHeaders, Depends(get_common_headers)], ): result = {"headers received": headers} return result
3
1
79,422,054
2025-2-7
https://stackoverflow.com/questions/79422054/check-if-string-only-contains-characters-from-a-certain-iso-specification
Short question: What is the most efficient way to check whether a .TXT file contains only characters defined in a selected ISO specification? Question with full context: In the German energy market EDIFACT is used to automatically exchange information. Each file exchanged has a header segment which contains information about the contents of the file. Please find an example of this segment below. UNB+UNOC:3+9903323000007:500+9900080000007:500+250102:0900+Y48A42R58CRR43++++++ As you can see after the UNB+ we find the content UNOC. This tells us which character set is used in the file. In this case it is ISO/IEC 8859-1. I would like a python method which checks whether the EDIFACT file contains only characters specified in ISO/IEC 8859-1. The most simple solution I can think of is doing something like this (pseudo code). ISO_string = "All characters contained in ISO/IEC 8859-1" EDIFACT_string = "Contents of EDIFACT file" is_iso_char = FALSE For EDIFACT_Char in EDIFACT_string: For ISO_char in ISO_string: if EDIFACT_Char = ISO_char: is_iso_char = TRUE break if is_iso_char == FALSE: raiseerror("File contains char not contained in ISO/IEC 8859-1 and needs to be rejected") do_error_handling() is_iso_char = FALSE I studied business informatics and lack the theoretical background for algorithm theory. This feels like a very inefficient method and since EDIFACT needs to be processed quickly I don't want this functionality to be a bottleneck. Is there an inbuilt pyhton way to do what I want to achieve better? Update #1: I wrote this code as suggested by Barmar. To check it I added the Chinese characters for "World" in the file (世界). I expected .decode to throw an error. However it just decodes the byte string and adds some strange characters at the beginning. File Contents: 世界UNB+UNOC:3+9903323000007:500+9900080000007:500+250102:0900+Y48A42R58CRR43++++++ with open(Filename, "rb") as edifact_file: edifact_bytes = edifact_file.read() try: verified_edifact_string = edifact_bytes.decode(encoding='latin_1', errors='strict') except: print("String does not conform to ISO specification") print(verified_edifact_string) Prints: If I just copy Stackoverflow cuts away some of the characters. Edit #2: According to Python documentation the ISO/IEC_8859-1 specification is called latin_1 when using Python's .decode() and .encode() methods.
Credits to Barmar for suggesting the use of .decode() I found a solution which looks smooth to me. If I encode the string using the latin_1 encoding the Chinese characters seem to not be encoded into bytes. I didn't check but I guess the .encode() method omits them since they don't belong to latin_1. If I then convert the encoded string back using .decode() I get a string without the Chinese characters. If I then compare the original with the encoded and decoded string my question is answered whether characters were contained which don't belong to latin_1. with open(Filename, "r", encoding="utf-8") as edifact_file: edifact_string = edifact_file.read() encoded_edifact_string = edifact_string.encode('latin_1', 'ignore') if encoded_edifact_string.decode('latin_1', 'ignore') == edifact_string: print('Is latin_1') print(edifact_string) print(encoded_edifact_string.decode('latin_1', 'ignore')) else: print('Is no latin_1') print(edifact_string) print(encoded_edifact_string.decode('latin_1', 'ignore')) Next question is now whether looping over the strings and comparing each character is faster or slower than encoding and decoding and comparing afterwards. But I can check that myself.
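A possibly faster variant worth including in the timing test mentioned above: because 'latin_1' covers only code points up to U+00FF, a strict str.encode either succeeds or raises UnicodeEncodeError, so no encode/decode round trip or comparison is needed. This is only a sketch and the function name is mine:

```python
def is_latin_1(text: str) -> bool:
    """True if every character of text is representable in ISO/IEC 8859-1."""
    try:
        text.encode("latin_1", errors="strict")  # strict: fail on the first bad character
        return True
    except UnicodeEncodeError:
        return False

assert is_latin_1("UNB+UNOC:3+9903323000007:500")  # plain ASCII passes
assert not is_latin_1("世界UNB+UNOC:3")              # CJK characters are rejected
```

An exception-free equivalent is all(ord(c) < 256 for c in text).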
2
0
79,419,739
2025-2-7
https://stackoverflow.com/questions/79419739/optimized-binary-matrix-inverse
I am attempting to invert (mod 2) a non-sparse binary matrix. I am trying to do it with very very large matricies, like 100_000 by 100_000. I have tried libraries, such as sympy, numpy, numba. Most of these do this with floats and don't apply mod 2 at each step. In general, numpy reports the determinant of a random binary invertible array of size 1000 by 1000 as INF. Here is my attempted code, but this lags at 10_000 by 10_000. This function is also nice, since even if I am not invertible, I can see the rank. def binary_inv(A): n = len(A) A = np.hstack((A.astype(np.bool_),np.eye(n,dtype=np.bool_))) for i in range(n): j = np.argmax(A[i:,i]) + i if i!=j: A[[i,j],i:] = A[[j,i],i:] A[i,i] = False A[A[:,i],i+1:] ^= A[i:i+1,i+1:] return A[:,n:] Any thoughts on making this as fast as possible?
First of all, the algorithmic complexity of matrix inversion is the same as that of matrix multiplication. A naive matrix multiplication has a complexity of O(n**3). The best practical algorithm known so far is Strassen, with a complexity of ~O(n**2.8) (the asymptotically faster ones are essentially galactic algorithms). This complexity improvement comes with a bigger hidden constant, so it is only worth it for really large matrices, like ones bigger than 10000x10000. On such big matrices, Strassen can certainly result in a speedup of about 2x to 4x (because Strassen on small matrices is not SIMD-friendly). This is far from great, considering that code implementing Strassen efficiently is generally noticeably more complex than code implementing the naive algorithm efficiently (which is still what most BLAS libraries use nowadays). In practice, the biggest improvements come from basic (low-level) code optimizations, not algorithmic ones. One major optimization is to pack the boolean values into bits. NumPy does not do this automatically (it uses 1 byte per boolean value). Packing lets the CPU operate on 8 times more items with very little overhead. It also reduces the memory used by the algorithm by the same factor, so the operation becomes more cache friendly. Another major optimization is to run the computation with multiple threads. Modern mainstream personal CPUs often have 6~8 cores (significantly more on compute servers), so you should expect the computation to be about 3~6 times faster. This is less than the number of cores because Gauss-Jordan elimination does not scale well: it tends to be memory-bound (which is why packing bits matters so much). Yet another optimization consists in processing the matrix block by block so as to make it more cache-friendly. This kind of optimization made the code roughly twice as fast the last time I tested it (with relatively small blocks). In practice it might help even more here, since you operate on the GF(2) field. Applying such optimizations in NumPy code without introducing substantial overhead is difficult, if possible at all. I strongly advise you to write this in a native language like C, C++, or Rust. Numba can help, but it has known performance issues (mainly vectorization) with packing/unpacking. You should be careful to write SIMD-friendly code, because SIMD instructions can operate on many more items than scalar ones (e.g. the AVX2 instruction set can operate on 32 x 8-bit items = 256 bits at a time at a cost similar to scalar instructions). If all of this is not enough, you can move the computation to a GPU and get another significant improvement (say 4~20x depending on the target GPU). Note that you should pack boolean values into 32-bit integers to get good performance there. On top of that, tensor cores might push this limit further if you manage to leverage them. Also note that applying some of the above optimizations on a GPU is actually pretty hard (especially blocking), even for skilled developers. Once optimized, I expect a very good GPU implementation to take less than 10 minutes on a recent mid/high-end GPU. All of this should be enough to make the operation realistically 2 to 3 orders of magnitude faster!
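To make the bit-packing idea above concrete, here is a minimal pure-NumPy sketch of a GF(2) Gauss-Jordan inverse that packs each augmented row with np.packbits, so a single byte-wise XOR eliminates 8 matrix entries at once. It only illustrates the packing trick (no threading, blocking, or SIMD tuning), and the function name binary_inv_packed is mine, not the answerer's code:

```python
import numpy as np

def binary_inv_packed(A):
    """GF(2) inverse via Gauss-Jordan on rows packed 8 entries per byte."""
    n = len(A)
    # Augment [A | I] and pack each row: one uint8 XOR now clears 8 columns at once.
    aug = np.hstack((A.astype(np.uint8), np.eye(n, dtype=np.uint8)))
    M = np.packbits(aug, axis=1)                       # shape (n, ceil(2n/8))
    for i in range(n):
        col_byte, col_bit = divmod(i, 8)
        bit = np.uint8(0x80 >> col_bit)                # packbits is MSB-first
        pivots = np.nonzero(M[i:, col_byte] & bit)[0]  # rows >= i with a 1 in column i
        if len(pivots) == 0:
            raise ValueError("matrix is singular over GF(2)")
        j = i + pivots[0]
        if j != i:
            M[[i, j]] = M[[j, i]]                      # swap pivot row into place
        rows = np.nonzero(M[:, col_byte] & bit)[0]     # every row with a 1 in column i
        rows = rows[rows != i]
        M[rows] ^= M[i]                                # eliminate column i elsewhere
    return np.unpackbits(M, axis=1)[:, n:2 * n]        # right half is the inverse

# quick check on a matrix that is guaranteed invertible over GF(2)
n = 8
A = np.triu(np.ones((n, n), dtype=np.uint8))           # unit upper-triangular
Ainv = binary_inv_packed(A)
assert np.array_equal((A @ Ainv) % 2, np.eye(n, dtype=np.uint8))
```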
3
3
79,421,406
2025-2-7
https://stackoverflow.com/questions/79421406/are-there-any-builtin-frozen-modules-in-python
I was going through the python import process and found out about frozen modules. The only thing I understood after searching is that frozen modules are files that can be directly executed without python installed on the system. I wanted to know one thing. Like sys.modules and sys.builtin_module_names are any frozen modules also present in python that are loaded automatically when running python? If yes, can we also access a list of them somehow? My main goal is to know any names that I should avoid giving to the files.
If your goal is to know what names to avoid, you don't need to know about frozen modules for that. Just don't pick the name of a built-in function, stdlib module, or keyword, and you should be fine. So pretty much the 3 lists matszwecja linked. But learning about frozen modules is interesting too, so let's talk about those. Frozen modules aren't files that can be executed without Python installed. "Freezing" a Python program generates a custom executable containing a Python interpreter and embedded bytecode for all the Python files the program needs. Frozen modules are modules loaded from that embedded bytecode. Now, that description makes it sound like frozen modules would only exist as part of such special executables, but a standard Python interpreter actually does come with some frozen modules. Some of them are frozen because they're part of the import system itself, and freezing them makes it easier to set them up while the import system isn't ready yet. Some of them are frozen to serve as test cases for importing frozen modules. On more recent Python versions, a bunch of extra stdlib modules are also frozen because they're imported at Python startup, and freezing them makes startup faster. I don't think there's a Python-level interface to query the list of frozen modules. Your best bet would probably be to read the contents of Python/frozen.c for your Python version. If you want, you could access the C API, but they changed things in Python 3.11 when they added the extra frozen stdlib modules. On 3.10 and down, the frozen stdlib modules were all in the documented PyImport_FrozenModules array, so you could read that: import ctypes import itertools # Make sure this matches the definition in the docs: # https://docs.python.org/3/c-api/import.html#c._frozen class _frozen(ctypes.Structure): _fields_ = [('name', ctypes.c_char_p), ('code', ctypes.POINTER(ctypes.c_ubyte)), ('size', ctypes.c_int), ('is_package', ctypes.c_bool)] PyImport_FrozenModules = ctypes.POINTER(_frozen).in_dll(ctypes.pythonapi, 'PyImport_FrozenModules') if PyImport_FrozenModules: for i in itertools.count(): record = PyImport_FrozenModules[i] if record.name is None: break print(record.name) but since 3.11, PyImport_FrozenModules defaults to NULL, and the frozen stdlib modules have been moved into 3 undocumented arrays. If you want to access undocumented APIs, you can do it anyway: # Extra for Python 3.11 and up for name in ['_PyImport_FrozenBootstrap', '_PyImport_FrozenStdlib', '_PyImport_FrozenTest']: records = PyImport_FrozenModules = ctypes.POINTER(_frozen).in_dll(ctypes.pythonapi, name) for i in itertools.count(): record = records[i] if record.name is None: break print(record.name)
1
3
79,420,610
2025-2-7
https://stackoverflow.com/questions/79420610/undertanding-python-import-process-importing-custom-os-module
I was reading through the python docs for how import is resolved and found this ... the interpreter first searches for a built-in module with that name. These module names are listed in sys.builtin_module_names. If not found, it then searches for a file named spam.py in a list of directories given by the variable sys.path. sys.path is initialized from these locations: The directory containing the input script (or the current directory when no file is specified). PYTHONPATH (a list of directory names, with the same syntax as the shell variable PATH). ... So python first looks into sys.builtin_module_names and then into sys.path. So I checked sys.builtin_module_names on my OS (Mac). >>> sys.builtin_module_names ('_abc', '_ast', '_codecs', '_collections', '_functools', '_imp', '_io', '_locale', '_operator', '_signal', '_sre', '_stat', '_string', '_suggestions', '_symtable', '_sysconfig', '_thread', '_tokenize', '_tracemalloc', '_typing', '_warnings', '_weakref', 'atexit', 'builtins', 'errno', 'faulthandler', 'gc', 'itertools', 'marshal', 'posix', 'pwd', 'sys', 'time') >>> 'os' in sys.builtin_module_names False Since, os is not in sys.builtin_module_names, a file named os.py in the same directory as my python file should take precedence over the os python module. I created a file named os.py in a test directory with the following simple code: #os.py def fun(): print("Custom os called!") And created another file named test.py which imports os #test.py import sys print("os in sys.builtin_module_names -", 'os' in sys.builtin_module_names) print("First directory on sys -", sys.path[0]) import os print("source file of the imported os -", os.__file__) print(os.fun()) This is the output of test.py > python3 test.py os in sys.builtin_module_names - False First directory on sys - /Users/dhruv/Documents/test source file of the imported os - /opt/homebrew/Cellar/[email protected]/3.13.1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/os.py Traceback (most recent call last): File "/Users/dhruv/Documents/test/test.py", line 9, in <module> print(os.fun()) ^^^^^^ AttributeError: module 'os' has no attribute 'fun' Why is the python os module called?
sys.path is initialized the way the tutorial says, but Python needs the os module before that initialization can fully complete. os is needed during Python setup. It's imported by the site module, which is imported during Python setup unless you manually say not to do that (with the -S flag). These imports happen so early, the Python interpreter hasn't actually decided what Python program it's going to run yet. It figures that out later. And it can't add the script directory to sys.path until it's decided what the script is. So at this early point in setup, the script directory isn't on sys.path, and imports don't search the script directory. Python can't find your os.py this early. Now, on more recent Python versions, there's another thing to consider. On Python 3.11 and up, the devs decided to freeze a bunch of modules written in Python that are needed at Python startup. This embeds their bytecode directly into the Python executable, which helps make startup faster. Frozen modules aren't mentioned in the tutorial - tutorials usually gloss over obscure details like this. Frozen modules are searched for after built-in modules and before the sys.path search. os is one of the modules that got frozen, so on Python 3.11 and up, the stdlib version of os would take priority over an os.py in the script directory even once sys.path is fully initialized.
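For readers who want to see this in their own interpreter, a small diagnostic sketch follows; note that _imp is a CPython-internal module (not a public API), so treat it as a poking-around tool rather than production code, and the exact output depends on your Python version:

```python
import _imp  # CPython-internal helper behind the import system; not a public API
import os

# Does this interpreter ship frozen bytecode under a given module name?
for name in ("os", "zipimport", "definitely_not_a_module"):
    print(name, _imp.is_frozen(name))

# How was os actually loaded in this process?
print(type(os.__loader__).__name__)  # typically FrozenImporter on 3.11+, SourceFileLoader earlier
```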
1
1
79,421,320
2025-2-7
https://stackoverflow.com/questions/79421320/python-polars-expression-to-get-length-of-lists-in-a-struct
In Python Polars, I am trying to extract the length of the lists inside a struct to re-use it in an expression. For example, I have the code below: import polars as pl df = pl.DataFrame( { "x": [0, 4], "y": [ {"low": [-1, 0, 1], "up": [1, 2, 3]}, {"low": [-2, -1, 0], "up": [0, 1, 2]}, ], } ) df.with_columns( check=pl.concat_list([pl.all_horizontal( [ pl.col("x").ge(pl.col("y").struct["low"].list.get(i)), pl.col("x").le(pl.col("y").struct["up"].list.get(i)), ] ) for i in range(3)]).list.max() ) shape: (2, 3) ┌─────┬─────────────────────────┬───────┐ │ x ┆ y ┆ check │ │ --- ┆ --- ┆ --- │ │ i64 ┆ struct[2] ┆ bool │ ╞═════╪═════════════════════════╪═══════╡ │ 0 ┆ {[-1, 0, 1],[1, 2, 3]} ┆ true │ │ 4 ┆ {[-2, -1, 0],[0, 1, 2]} ┆ false │ └─────┴─────────────────────────┴───────┘ and I would like to infer the length of the lists in advance (i.e. not having to hardcode the 3), as it can change depending on the call. The challenge I am facing, is that I need to include everything in the same expression context. I have tried as below, but it is not working as I cannot extract the value returned by one of the expressions: df.with_columns( check=pl.concat_list([pl.all_horizontal( [ pl.col("x").ge(pl.col("y").struct["low"].list.get(i)), pl.col("x").le(pl.col("y").struct["up"].list.get(i)), ] ) for i in range(pl.col("y").struct["low"].list.len())]).list.max() )
Unfortunately, I don't see a way to use an expression for the list length here. Also, direct comparisons of list columns are not yet natively supported. Still, some on-the-fly exploding and imploding of the list columns could be used to achieve the desired result without relying on knowing the list lengths upfront. ( df .with_columns( ge_low=(pl.col("x") >= pl.col("y").struct["low"].explode()).implode().over(pl.int_range(pl.len())), le_up=(pl.col("x") <= pl.col("y").struct["up"].explode()).implode().over(pl.int_range(pl.len())), ) .with_columns( check=(pl.col("ge_low").explode() & pl.col("le_up").explode()).implode().over(pl.int_range(pl.len())) ) ) shape: (2, 5) ┌─────┬─────────────────────────┬─────────────────────┬───────────────────────┬───────────────────────┐ │ x ┆ y ┆ ge_low ┆ le_up ┆ check │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ struct[2] ┆ list[bool] ┆ list[bool] ┆ list[bool] │ ╞═════╪═════════════════════════╪═════════════════════╪═══════════════════════╪═══════════════════════╡ │ 0 ┆ {[-1, 0, 1],[1, 2, 3]} ┆ [true, true, false] ┆ [true, true, true] ┆ [true, true, false] │ │ 4 ┆ {[-2, -1, 0],[0, 1, 2]} ┆ [true, true, true] ┆ [false, false, false] ┆ [false, false, false] │ └─────┴─────────────────────────┴─────────────────────┴───────────────────────┴───────────────────────┘
2
2
79,421,288
2025-2-7
https://stackoverflow.com/questions/79421288/interpolating-a-array-that-has-many-zeros-outputs-wrong-result
Im trying to interpolate a set of x coordinates to y coordinates. x are the points Im trying to find a y for. The points I know are: xp (x coordinates) and their respective fp (y coordinates). The problem is that when I run the code, y is the number 0.97610476 repeated 33 times. Which is of course not the right answer. I belive the problem rises when about 70% of the first numbers in the x and xp arrays are zero, maybe the numpy function np.interp takes all numbers in the array into consideration to estimate the y values, oposed to what would be done with a calculator or on paper, where you take only the values between the value you want into the calculation. Here is the code with the arrays: import numpy as np x = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -0.007705615446897033, -0.007705615446897033, -0.05709260606658533, -0.05709260606658533, -0.12137136703522947, -0.12137136703522947, -0.1535107475195779, -0.1535107475195779, -0.18565012800417238, -0.18565012800417238]) xp = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -1.34689e-06, -0.000273863136, -0.001326340329, -0.00287521918, -0.004941795493, -0.007555995729, -0.01071049581, -0.01439643971, -0.018603479056, -0.02331980126, -0.028531366613, -0.034172728629, -0.040283876545, -0.046985078741, -0.053797694609, -0.060610310434, -0.067422926258, -0.07423554208, -0.081048157901, -0.087860773721, -0.09467338954, -0.101486005358, -0.108298621175, -0.115111236993, -0.12192385281, -0.128736468628, -0.135549084446, -0.142361700265, -0.149174316084, -0.155986931905, -0.162799547727, -0.16961216355, -0.176424779375, -0.183237395203, -0.190050011032, -0.196862626864, -0.203675242698, -0.210487858535, -0.217300474376]) fp = np.array([0.0, 0.010598548767, 0.021197097534, 0.031795646301, 0.042394195067, 0.052992743834, 0.063591292601, 0.074189841368, 0.084788390135, 0.095386938902, 0.105985487669, 0.116584036435, 0.127182585202, 0.137781133969, 0.148379682736, 0.158978231503, 0.16957678027, 0.180175329037, 0.190773877804, 0.20137242657, 0.211970975337, 0.222569524104, 0.233168072871, 0.243766621638, 0.254365170405, 0.264963719172, 0.275562267938, 0.286160816705, 0.296759365472, 0.307357914239, 0.317956463006, 0.328555011773, 0.33915356054, 0.349752109306, 0.360350658073, 0.37094920684, 0.381547755607, 0.392146304374, 0.402744853141, 0.413343401908, 0.423941950675, 0.434540499441, 0.445139048208, 0.455737596975, 0.466336145742, 0.476934694509, 0.487533243276, 0.498131792043, 0.508730340809, 0.519328889576, 0.529927438343, 0.54052598711, 0.551124535877, 0.561723084644, 0.572321633411, 0.582920182177, 0.593518730944, 0.604117279711, 0.614715828478, 0.625314377246, 0.635912926032, 0.646511474256, 0.657104739543, 0.667649141654, 0.678133049978, 0.688526896259, 0.698796708974, 0.708913631472, 0.718849263002, 0.728575709565, 0.738065661756, 0.74729295673, 0.756264519149, 0.764920687899, 0.773130776065, 0.781249735501, 0.789368694886, 0.797487654269, 0.80560661365, 0.813725573029, 0.821844532407, 0.829963491784, 0.838082451161, 0.846201410536, 0.854320369912, 0.862439329287, 0.870558288663, 0.878677248039, 0.886796207416, 0.894915166794, 0.903034126173, 0.911153085554, 0.919272044936, 0.927391004321, 
0.935509963708, 0.943628923097, 0.95174788249, 0.959866841886, 0.967985801285, 0.976104760687]) y = np.interp(x, xp, fp) print(len(y)) I didn't try many other alternatives. I tried using scipy's interp1d() function, but that resulted in the same problem. I don't expect there's much more to do about this; maybe I will have to create a "manual" interpolation function.
If you look at the docs np.interp requires "One-dimensional linear interpolation for monotonically increasing sample points." Your data is actually monotically decreasing and the x values are negative. If we change the sign of x then the data reflects on the y axis and become monotically incresing. You can also filter the points where x is 0 if you want (In order to really be a function, x values need be unique) In the code I filtered the values of x that are 0. import numpy as np import matplotlib.pyplot as plt x = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -0.007705615446897033, -0.007705615446897033, -0.05709260606658533, -0.05709260606658533, -0.12137136703522947, -0.12137136703522947, -0.1535107475195779, -0.1535107475195779, -0.18565012800417238, -0.18565012800417238]) xp = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -1.34689e-06, -0.000273863136, -0.001326340329, -0.00287521918, -0.004941795493, -0.007555995729, -0.01071049581, -0.01439643971, -0.018603479056, -0.02331980126, -0.028531366613, -0.034172728629, -0.040283876545, -0.046985078741, -0.053797694609, -0.060610310434, -0.067422926258, -0.07423554208, -0.081048157901, -0.087860773721, -0.09467338954, -0.101486005358, -0.108298621175, -0.115111236993, -0.12192385281, -0.128736468628, -0.135549084446, -0.142361700265, -0.149174316084, -0.155986931905, -0.162799547727, -0.16961216355, -0.176424779375, -0.183237395203, -0.190050011032, -0.196862626864, -0.203675242698, -0.210487858535, -0.217300474376]) fp = np.array([0.0, 0.010598548767, 0.021197097534, 0.031795646301, 0.042394195067, 0.052992743834, 0.063591292601, 0.074189841368, 0.084788390135, 0.095386938902, 0.105985487669, 0.116584036435, 0.127182585202, 0.137781133969, 0.148379682736, 0.158978231503, 0.16957678027, 0.180175329037, 0.190773877804, 0.20137242657, 0.211970975337, 0.222569524104, 0.233168072871, 0.243766621638, 0.254365170405, 0.264963719172, 0.275562267938, 0.286160816705, 0.296759365472, 0.307357914239, 0.317956463006, 0.328555011773, 0.33915356054, 0.349752109306, 0.360350658073, 0.37094920684, 0.381547755607, 0.392146304374, 0.402744853141, 0.413343401908, 0.423941950675, 0.434540499441, 0.445139048208, 0.455737596975, 0.466336145742, 0.476934694509, 0.487533243276, 0.498131792043, 0.508730340809, 0.519328889576, 0.529927438343, 0.54052598711, 0.551124535877, 0.561723084644, 0.572321633411, 0.582920182177, 0.593518730944, 0.604117279711, 0.614715828478, 0.625314377246, 0.635912926032, 0.646511474256, 0.657104739543, 0.667649141654, 0.678133049978, 0.688526896259, 0.698796708974, 0.708913631472, 0.718849263002, 0.728575709565, 0.738065661756, 0.74729295673, 0.756264519149, 0.764920687899, 0.773130776065, 0.781249735501, 0.789368694886, 0.797487654269, 0.80560661365, 0.813725573029, 0.821844532407, 0.829963491784, 0.838082451161, 0.846201410536, 0.854320369912, 0.862439329287, 0.870558288663, 0.878677248039, 0.886796207416, 0.894915166794, 0.903034126173, 0.911153085554, 0.919272044936, 0.927391004321, 0.935509963708, 0.943628923097, 0.95174788249, 0.959866841886, 0.967985801285, 0.976104760687]) # filter zero values x=x[x<0] fp=fp[xp<0] xp=xp[xp<0] # just take the negative of x and xp to mirror it in the y 
axis y = np.interp(-x, -xp, fp) plt.plot(xp,fp,'x') plt.plot(x,y,'x') print(y) plt.show() [0.69927656 0.69927656 0.78517646 0.78517646 0.8617809 0.8617809 0.90008312 0.90008312 0.93838535 0.93838535]
2
3
79,421,290
2025-2-7
https://stackoverflow.com/questions/79421290/reassigning-values-of-multiple-columns-to-values-of-multiple-other-columns
For the following df I wish to change the values in columns A,B and C to those in columns X,Y and Z, taking into account a boolean selection on column B. columns = {"A":[1,2,3], "B":[4,pd.NA,6], "C":[7,8,9], "X":[10,20,30], "Y":[40,50,60], "Z":[70,80,90]} df = pd.DataFrame(columns) df A B C X Y Z 0 1 4 7 10 40 70 1 2 <NA> 8 20 50 80 2 3 6 9 30 60 90 However when I try to do the value reassignment I end up with NULLS. df.loc[~(df["B"].isna()), ["A","B","C"]] = df.loc[~(df["B"].isna()), ["X","Y","Z"]] df A B C X Y Z 0 NaN NaN NaN 10 40 70 1 2.0 <NA> 8.0 20 50 80 2 NaN NaN NaN 30 60 90 My desired result is: A B C X Y Z 0 10 40 70 10 40 70 1 2 <NA> 8 20 50 80 2 30 60 90 30 60 90 If I do the reassignment on a single column then I get my expected result: df.loc[~(df["B"].isna()), "A"] = df.loc[~(df["B"].isna()), "X"] df A B C X Y Z 0 10 4 7 10 40 70 1 2 <NA> 8 20 50 80 2 30 6 9 30 60 90 However I expected that I should be able to do this on multiple columns at once. Any ideas what I am missing here? Thanks
Your columns are not matching on the two sides of the assignment. Since you use a mask on both sides, don't perform index alignment and directly pass the values: m = ~df['B'].isna() df.loc[m, ['A', 'B', 'C']] = df.loc[m, ['X', 'Y', 'Z']].values For your approach to work, you could also rename the columns. In this case you would not need the mask on both sides and could benefit from the index alignment: cols_left = ['A', 'B', 'C'] cols_right = ['X', 'Y', 'Z'] df.loc[~df['B'].isna(), cols_left] = df[cols_right].rename( columns=dict(zip(cols_right, cols_left)) ) Output: A B C X Y Z 0 10 40 70 10 40 70 1 2 <NA> 8 20 50 80 2 30 60 90 30 60 90
1
2
79,416,139
2025-2-5
https://stackoverflow.com/questions/79416139/adjust-matplotlib-polar-plot-to-show-sub-degree-motion-aka-stretch-a-polar-plot
I have RA and DEC pointing data I would like to show on a polar plot (converting to rho and theta). The theta motion is very small, ~0.01 degrees. This is not easily seen on a full polar plot so I am trying to 'zoom in' to the region and show the change from data point to data point. When I adjust the thetamin/thetamax below to the limits I would prefer the wedge becomes a very thin line that losses all useful information. I would like a wedge shape like below but where the min/max theta angle shown is at least a degree. import numpy as np import matplotlib.pyplot as plt import matplotlib import pandas as pd print('matplotlib version : ', matplotlib.__version__) fig = plt.figure() ra = np.asarray([1.67484,1.67485,1.67485,1.67486,1.67486,1.67488,1.67487,1.67488,1.67487, 1.67487]) #radians dec = np.asarray([-0.92147,-0.92147,-0.92147,-0.92147,-0.92147,-0.92147,-0.92147, -0.92147,-0.92147, -0.92147]) #radians rho = np.sqrt(ra**2 + dec**2) # get rho from ra and dec theta = np.arctan2(dec,ra) # get theta from ra and dec fig = plt.figure() ax = fig.add_subplot(1,1,1,polar=True) ax.plot(theta, rho,'*-',color='y') ax.set_ylim(1.9114,1.9117) # limits of rho ax.set_thetamin(310) ax.set_thetamax(340) plt.show() I've been reading online and looking at the matplotlib polar plot documentation but the examples I've found don't go beyond what I've implemented so far..
First of all, you still have some potential concerning narrowing down the plotted radial range, e.g., to ax.set_ylim(1.91159, 1.91164). Please also note, that when I was searching for a solution (which I couldn't fin on the internet either), I found that using np.arctan2() is the appropriate approach for polar coordinates, that is the reason why I changed this part in your code. Otherwise, I had no better idea than applying a scaling approach to your plot. Now the wedge in question is scaled up by an arbitrary scaling factor (i.e., theta for the upscaled plot in degrees), as in: import numpy as np import matplotlib.pyplot as plt ra = np.asarray([1.67484, 1.67485, 1.67485, 1.67486, 1.67486, 1.67488, 1.67487, 1.67488, 1.67487, 1.67487]) dec = np.asarray([-0.92147, -0.92147, -0.92147, -0.92147, -0.92147, -0.92147, -0.92147, -0.92147, -0.92147, -0.92147]) rho = np.sqrt(ra**2 + dec**2) theta = np.arctan2(dec, ra) # instead of np.tan(dec / ra), see the other answer. # set scaling factor scalingfactor = 40 # scaling up by arbitrary scaling factor theta_scaled = (theta - np.min(theta)) / (np.max(theta) - np.min(theta)) * np.radians(scalingfactor) fig, ax = plt.subplots(figsize=(6, 6), subplot_kw={'projection': 'polar'}) ax.plot(theta_scaled, rho, '*-', color='y', label=f"Scaled 1:{1/scalingfactor}") ax.set_ylim(1.91159, 1.91164) #check theta limits print(np.degrees(min(theta)), np.degrees(max(theta))) #creating theta ticks ticks = np.radians(np.linspace(0, scalingfactor, 5)) ax.set_xticks(ticks) # setting theta labels explicitly scaled_labels = np.round(np.linspace(360+np.degrees(min(theta)), 360+np.degrees(max(theta)), len(ticks)), 4) ax.set_xticklabels(scaled_labels) ax.set_thetamin(-3) ax.set_thetamax(scalingfactor+1) ax.grid(True, linestyle="--", alpha=0.5) ax.legend() plt.show() I am sure you can still make it look prettier, otherwise, it looks to be a solid solution to me, resulting in for example this version of the plot:
3
3
79,414,537
2025-2-5
https://stackoverflow.com/questions/79414537/mongomock-bulkoperationbuilder-add-update-unexpected-keyword-argument-sort
I'm testing a function that performs a bulk upsert using UpdateOne with bulk_write. In production (using the real MongoDB client) everything works fine, but when running tests with mongomock I get this error: app.mongodb.exceptions.CatalogException: failure in mongo repository function `batch_upsert_catalog_by_sku`: BulkOperationBuilder.add_update() got an unexpected keyword argument 'sort' I don't pass any sort parameter in my code. Here's the relevant function: def batch_upsert_catalog_by_sku(self, items: List[CatalogBySkuWrite]) -> None: operations = [] current_time = datetime.datetime.now(datetime.timezone.utc) for item in items: update_fields = item.model_dump() update_fields["updated_at"] = current_time operations.append( UpdateOne( {"sku": item.sku, "chain_id": item.chain_id}, { "$set": update_fields, "$setOnInsert": {"created_at": current_time}, }, upsert=True, ) ) if operations: result = self.collection.bulk_write(operations) logger.info("Batch upsert completed", matched=result.matched_count, upserted=result.upserted_count, modified=result.modified_count) Has anyone seen this error with mongomock? Is it a version issue or a bug, and what would be a good workaround? I'm using mongomock version 4.3.0 and pymongo version 4.11. Thanks!
This seem to be related to pymongo version 4.11. In 4.10.1, the method _add_to_bulk in the class UpdateOne here calls add_update this way: def _add_to_bulk(self, bulkobj: _AgnosticBulk) -> None: """Add this operation to the _AsyncBulk/_Bulk instance `bulkobj`.""" bulkobj.add_update( self._filter, self._doc, False, bool(self._upsert), collation=validate_collation_or_none(self._collation), array_filters=self._array_filters, hint=self._hint, ) However, in the current release (4.11), here, it calls it this way: def _add_to_bulk(self, bulkobj: _AgnosticBulk) -> None: """Add this operation to the _AsyncBulk/_Bulk instance `bulkobj`.""" bulkobj.add_update( self._filter, self._doc, False, bool(self._upsert), collation=validate_collation_or_none(self._collation), array_filters=self._array_filters, hint=self._hint, sort=self._sort, ) Latest version of mongomock does not seem to support the sort parameter, check here Forcing pymongo to version < 4.11 seem to solve the issue on my side (using Poetry: poetry add "pymongo<4.11") I have opened an issue in mongomock repository.
2
3
79,419,480
2025-2-6
https://stackoverflow.com/questions/79419480/using-python-to-replace-triple-double-quotes-with-single-double-quote-in-csv
I used the pandas library to manipulate the original data down to a simple 2 column csv file and rename it to a text file. The file has triple double quotes that I need replaced with single double quotes. Every line of the file is formatted as: """QA12345""","""Some Other Text""" What I need is: "QA12345","Some Other Text" This python snippet wipes out the file after it finishes. with fileinput.FileInput(input_file, inplace=True) as f: next(f) for line in f: line.replace("""""", '"') It doesn't work with line.replace('"""', '"') either. I've also tried adjusting the input values to be '"Some Other Text"' and variations (""" and '\"') but nothing seems to work. I believe the triple quote is causing the issue, but I don't know what I have to do to.
Judging by the OP's code, I guess the aims are: edit the file in place; keep the first line unmodified; for the rest of the lines, replace 3 double quotes with a single one. The original snippet empties the file because inplace=True redirects stdout into the file and nothing is ever printed (str.replace also returns a new string rather than changing line). My solution is almost the same, but with print(): with fileinput.input(input_file, inplace=True) as stream: print(next(stream), end="") for line in stream: print(line.replace('"""', '"'), end="") That should give the desired result.
1
1
79,417,722
2025-2-6
https://stackoverflow.com/questions/79417722/python-script-unable-to-process-larger-zip-files-from-irs-990-aws-datalake
I am trying to access raw 990 (nonprofit tax returns) XML data through the AWS datalake. The XML files are organized into Zip files split by the month in which the IRS processed them ("2024_1A" for January, "2024_2A" for Feburary). See photo for the Zip files I am iterating through. In certain months, there are multiple Zip files where there were too many returns for a single Zip file. I have written a Python script that is able to access the Datalake and process the XML files for 10 out of 14 Zip files. However, for the very large Zips -- 5A, 5B, 11A, 11B -- it returns this error: "Error processing file EfileData/XmlZips/2024_TEOS_XML_05A.zip: That compression method is not supported" The only differentiating factor between these Zips and the other Zips seems to be their size -- see the fixes/checks I've tried below. This is the code I'm using right now to unzip the files to a temporary directory. Does anyone have thoughts on why it would work for all the other zip files but not 5A, 5B, 11A, 11B, and how I can fix it? Thank you! with zipfile.ZipFile(local_zip_path, 'r') as zip_ref: zip_ref.extractall(temp_dir) Here's some of the fixes I've tried: Using the pyz7r library and the 7zip tool through Python Confirming the Zips are compression type 9, which corresponds to DEFLATE compression (zipfile.ZIP_DEFLATED) which should be supported by Python's zipfile module. Downloaded the Zip files onto my computer and previewed the .XML files inside to confirm they're not corrupted. Tried unzipping the Zip file onto my computer to then upload directly into the Google Colab workspace where I am working, but it was too large (70,000 files in each) and it took several hours to download just one of the 4 Zip files and it froze google chrome each time I tried to them upload it into Google colab. Attempted to extract the Zip file in batches to be more efficient
Compression method 9 is not Deflate. (Deflate is method 8.) Method 9 is a proprietary PKWare enhancement of Deflate called Deflate64. Python's zipfile does not support it. You would need to use an unzip utility, such as Info-ZIP's unzip, 7-zip, or the like. When I try it, zipfile raises an error that says exactly that, which you are also seeing: raise NotImplementedError("That compression method is not supported") R1D3R175 notes in a comment below that there exists a zipfile-deflate64 project that can handle the Deflate64 method. These zip files were likely made using Windows' built-in compression tool, which elects to use Deflate64 when the sum of the file sizes is greater than 2 GB.
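Two practical ways to handle these archives from Python/Colab, sketched here with placeholder file names; the second option relies on the third-party zipfile-deflate64 package, which according to its documentation patches the stdlib zipfile module on import:

```python
# Option 1: delegate extraction to a tool that understands Deflate64 (method 9),
# e.g. Info-ZIP's unzip or 7-Zip. "archive.zip" / "extracted/" are placeholders.
import subprocess
subprocess.run(["unzip", "-q", "archive.zip", "-d", "extracted/"], check=True)

# Option 2: pip install zipfile-deflate64, then import it before using zipfile.
import zipfile_deflate64  # noqa: F401 -- side effect: adds Deflate64 support to zipfile
import zipfile
with zipfile.ZipFile("archive.zip") as zf:
    zf.extractall("extracted/")
```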
1
2
79,417,747
2025-2-6
https://stackoverflow.com/questions/79417747/how-to-concatenate-gzip-streams-in-python
I am attempting to combine multiple gzip streams into a single stream, my understanding is this should be possible but my implementation is flawed. Based on what I have read, my expectation was that I should be able remove the 10byte header and 8 byte footer from all streams, concatenate the bytes together and reconstruct a header and footer. However, when I do try and do this the decompression operation fails, I am assuming this is because the .flush() is including some information in the block about "end of data" that is not being removed. Ideal Case It is possible to concatenate multiple gzip streams together without altering them. This is a valid gzip file containing multiple streams. Unfortunately, when using zlib.decompress(data, GZIP_WBITS), rather than using decompressobj to check for an unconsumed_tail, only the first stream is returned. Naïve Concatenation Example to show how concatenation might break some downstream clients consuming these files. import zlib GZIP_WBITS = 16 + zlib.MAX_WBITS def decompress(data: bytes) -> bytes: return zlib.decompress(data, GZIP_WBITS) def compress(data: list[bytes]) -> bytes: output = b"" for datum in data: deflate = zlib.compressobj(8, zlib.DEFLATED, GZIP_WBITS) output += deflate.compress(datum) output += deflate.flush() return output def test_decompression(): data = [b"Hello", b"World!"] compressed = compress(data) decompressed = decompress(compressed) # this should be b"".join(data) == decompressed assert decompressed == data[0] Sample Code (Not working) import zlib import struct test_bytes = b"hello world" # create an GZIP example stream deflate = zlib.compressobj(8, zlib.DEFLATED, GZIP_WBITS) single = deflate.compress(test_bytes) single += deflate.flush() # quick sanity check that decompression works zlib.decompress(single, GZIP_WBITS) print("Single:", single.hex()) # check our understanding of the footer is correct single_len = struct.unpack("<I", single[-4:])[0] assert single_len == len(test_bytes), "wrong len" single_crc = struct.unpack("<I", single[-8:-4])[0] assert single_crc == zlib.crc32(test_bytes), "wrong crc" # Create an example GZIP stream with duplicated input bytes deflate = zlib.compressobj(8, zlib.DEFLATED, GZIP_WBITS) double = deflate.compress(test_bytes) double += deflate.compress(test_bytes) double += deflate.flush() # quick sanity check that decompression works zlib.decompress(double, GZIP_WBITS) # Check we can calculate the len and bytes correctly double_length = struct.unpack("<I", double[-4:])[0] assert double_length == len(test_bytes + test_bytes), "wrong len" double_crc = struct.unpack("<I", double[-8:-4])[0] assert double_crc == zlib.crc32(test_bytes + test_bytes), "wrong crc" print(f"Double: {double.hex()}") # Remove the header and footer from our original GZIP stream single_data = single[10:-8] print(f" Data: {' '*20}{single_data.hex()}") # Concatenate the original stream with footer removed with a duplicate # with the header and footer removed concatenated = single[:-8] + single_data # Add the footer, comprising the CRC and Length concatenated += struct.pack("<I", double_crc) concatenated += struct.pack("<I", double_length) assert concatenated .startswith(single[:-8]) print(f" Maybe: {concatenated.hex()}") # Confirm this is bad data zlib.decompress(concatenated, GZIP_WBITS) My assumption is it will be possible to use the following function to combine the crc32 values: def crc_combine(crcA, crcB, lenB): crcA0 = zlib.crc32(b'\0' * lenB, crcA ^ 0xffffffff) ^ 0xffffffff return crcA0 ^ crcB Requirements Pure python 
with no dependencies (aws lambda runtime where dependency management is a pain) Avoid decompressing and recompressing the streams; we have control of the original content and so can calculate the resultant CRC using the crc32_combine function. The resulting file can be decompressed using a single call to zlib.decompress(data, GZIP_WBITS) as the resultant files form part of a "public interface" and this would be considered a breaking change. Sources Concatenate multiple zlib compressed data streams into a single stream efficiently Combining two non-pure CRC32 values
This is much easier than you're making it out to be. Simply concatenate the gzip files without removing or in any way messing with the headers and trailers. Any concatenation of gzip streams is a valid gzip stream, and will decompress to the concatenation of the uncompressed contents of the individual gzip streams.
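For illustration, a minimal sketch of what the answer describes (the variable names here are made up for the example); note that it reads the result back with gzip.decompress, which walks every member in the stream, rather than a single zlib.decompress call:
import gzip

parts = [b"Hello", b"World!"]
# Compress each part as its own complete gzip member, then simply concatenate the members.
combined = b"".join(gzip.compress(p) for p in parts)
# gzip.decompress iterates over all members, so the full payload comes back.
assert gzip.decompress(combined) == b"".join(parts)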
1
4
79,418,676
2025-2-6
https://stackoverflow.com/questions/79418676/plot-multiple-column-pairs-in-one-plot
I have columns of data in an Excel sheet. The data is paired. I want to plot columns 1 and 2 together, 3 and 4 together, and so on. In my case I have four columns. I've tried creating a for loop to go through the data and plot it, but I keep getting separate plots. Does anyone know what's wrong here? for i in range(0,data.shape[1]): if i%2 == 0: plt.plot(data.iloc[:,i], data.iloc[:,i+1]) plt.xlabel('Strain (mm)') plt.ylabel('Stress (lbf)') plt.show()
If I understand you correctly, you want to generate a single plot using several column combinations. You should create an axis object and then add the data to it. import numpy as np import pandas as pd from matplotlib import pyplot as plt data = pd.DataFrame({ 'col1': np.linspace(0, 0.2, 100), 'col2': np.linspace(0, 0.2, 100), 'col3': np.linspace(0.2, 1, 100), 'col4': np.linspace(0.2, 1, 100) }) fig, ax = plt.subplots() for i in range(0, data.shape[1]): if i%2 == 0: ax.plot(data.iloc[:,i], data.iloc[:,i+1]) ax.set_xlabel('Strain (mm)') ax.set_ylabel('Stress (lbf)')
1
1
79,417,642
2025-2-6
https://stackoverflow.com/questions/79417642/softmax-with-polars-lazy-dataframe
I'm relatively new to using polars and it seems to be very verbose compared to pandas for what I would consider even relatively basic manipulations. Case in point, the shortest way I could figure out doing a softmax over a lazy dataframe is the following: import polars as pl data = pl.DataFrame({'a': [1,2,3,4,5,6,7,8,9,10], 'b':[5,5,5,5,5,5,5,5,5,5], 'c': [10,9,8,7,6,5,4,3,2,1]}).lazy() cols = ['a','b','c'] data = data.with_columns([ pl.col(c).exp().alias(c) for c in cols]) # Exp all columns data = data.with_columns(pl.sum_horizontal(cols).alias('sum')) # Get row sum of exps data = data.with_columns([ (pl.col(c)/pl.col('sum')).alias(c) for c in cols ]).drop('sum') data.collect() Am I missing something and is there a shorter, more readable way of achieving this?
You would use a multi-col selection e.g. pl.all() instead of list comprehensions. (Or pl.col(cols) for a named "subset" of columns) df.with_columns( pl.all().exp() / pl.sum_horizontal(pl.all().exp()) ) shape: (10, 3) ┌──────────┬──────────┬──────────┐ │ a ┆ b ┆ c │ │ --- ┆ --- ┆ --- │ │ f64 ┆ f64 ┆ f64 │ ╞══════════╪══════════╪══════════╡ │ 0.000123 ┆ 0.006692 ┆ 0.993185 │ │ 0.000895 ┆ 0.01797 ┆ 0.981135 │ │ 0.006377 ┆ 0.047123 ┆ 0.946499 │ │ 0.04201 ┆ 0.114195 ┆ 0.843795 │ │ 0.211942 ┆ 0.211942 ┆ 0.576117 │ │ 0.576117 ┆ 0.211942 ┆ 0.211942 │ │ 0.843795 ┆ 0.114195 ┆ 0.04201 │ │ 0.946499 ┆ 0.047123 ┆ 0.006377 │ │ 0.981135 ┆ 0.01797 ┆ 0.000895 │ │ 0.993185 ┆ 0.006692 ┆ 0.000123 │ └──────────┴──────────┴──────────┘ With LazyFrames we can use .explain() to inspect the query plan. plan = df.lazy().with_columns(pl.all().exp() / pl.sum_horizontal(pl.all().exp())).explain() print(plan) # simple π 3/7 ["a", "b", "c"] # WITH_COLUMNS: # [[(col("__POLARS_CSER_0x9b1b3182d015f390")) / (col("__POLARS_CSER_0x762bfea120ea9e6"))].alias("a"), [(col("__POLARS_CSER_0xb82f49f764da7a09")) / (col("__POLARS_CSER_0x762bfea120ea9e6"))].alias("b"), [(col("__POLARS_CSER_0x1a200912e2bcc700")) / (col("__POLARS_CSER_0x762bfea120ea9e6"))].alias("c")] # WITH_COLUMNS: # [col("a").exp().alias("__POLARS_CSER_0x9b1b3182d015f390"), col("b").exp().alias("__POLARS_CSER_0xb82f49f764da7a09"), col("c").exp().alias("__POLARS_CSER_0x1a200912e2bcc700"), col("a").exp().sum_horizontal([col("b").exp(), col("c").exp()]).alias("__POLARS_CSER_0x762bfea120ea9e6")] # DF ["a", "b", "c"]; PROJECT */3 COLUMNS Polars caches the duplicate pl.all().exp() expression into a temp __POLARS_CSER* column for you. See also: https://docs.pola.rs/user-guide/lazy/optimizations/
2
4
79,415,726
2025-2-5
https://stackoverflow.com/questions/79415726/pandas-dataframe-slicing-performance-is-affected-by-how-subset-was-previously-as
In a recent post Pandas performance while iterating a state vector, I noticed a performance difference when slicing pandas dataframes that I do not understand. The code presented here does not do anything useful, but highlights the issue: I create a dataframe with two areas of columns named extra_columns and columns. The part of the code which takes time to execute is the loop, where slices in columns are assigned. What baffles me is that the way I assign values to extra_columns before the loop affects the loop performance. Python code import timeit setup_stmt =""" import pandas as pd num_cols = 500 n_iter = 100 extra_column = [ "product"] columns = [chr(i+65) for i in range(num_cols)] index= range(n_iter) """ stmt1 =""" df = pd.DataFrame(index = index, columns=extra_column + columns) df["product"] = "x" for i in index: df.loc[i,columns] = 0 """ stmt2 =""" df = pd.DataFrame(index = index, columns=extra_column + columns) df.product = "x" for i in index: df.loc[i,columns] = 0 """ stmt3 =""" df = pd.DataFrame(index= index, columns=extra_column + columns) df.loc[index,"product"] = "x" for i in index: df.loc[i,columns] = 0 """ stmt4 =""" df = pd.DataFrame(index = index, columns=extra_column + columns) for i in index: df.loc[i,columns] = 0 df["product"] = "x" """ print(f" stmt1 takes { timeit.timeit(setup= setup_stmt, stmt= stmt1, number=10):2.2f} seconds" ) print(f" stmt2 takes { timeit.timeit(setup= setup_stmt, stmt= stmt2, number=10):2.2f} seconds" ) print(f" stmt3 takes { timeit.timeit(setup= setup_stmt, stmt= stmt3, number=10):2.2f} seconds" ) print(f" stmt4 takes { timeit.timeit(setup= setup_stmt, stmt= stmt4, number=10):2.2f} seconds" ) Output stmt1 takes 20.60 seconds stmt2 takes 0.46 seconds stmt3 takes 0.46 seconds stmt4 takes 0.46 seconds
TL;DR Your original df has a single block in memory. With stmt1, use of df["product"] = "x" makes the BlockManager (an internal memory manager) add a new block. Having multiple blocks adds overhead, as pandas needs to check and consolidate them each time a row gets modified. With stmt3, you do not have this issue, as df.loc[index,"product"] = "x" is an in-place modification, that keeps the original, single block intact. stmt2 should be ignored (see note at the end). stmt4 is irrelevant, as the second block is created only after the for loop. Answer The difference in performance between your stmt1 and stmt3 has to do with the so-called BlockManager, which is an internal manager that tries to keep columns with compatible dtypes together as blocks in memory. Initial situation: one block Useful information about the use of the BlockManager for a specific pd.DataFrame can be retrieved by accessing df._mgr. With your example: import pandas as pd num_cols = 3 n_iter = 3 extra_column = ["product"] columns = [chr(i+65) for i in range(num_cols)] index= range(n_iter) df = pd.DataFrame(index=index, columns=extra_column + columns) product A B C 0 NaN NaN NaN NaN 1 NaN NaN NaN NaN 2 NaN NaN NaN NaN df._mgr BlockManager Items: Index(['product', 'A', 'B', 'C'], dtype='object') Axis 1: RangeIndex(start=0, stop=3, step=1) NumpyBlock: slice(0, 4, 1), 4 x 3, dtype: object # all cols So, here we see that the BlockManager is working with a single block in memory. stmt1: adding a new column / replacing one adds a block If now we use bracket notation ([]) to assign "x" to column "product", we are really re-creating that column. As a result, a second block is created: df["product"] = "x" print(df._mgr) BlockManager Items: Index(['product', 'A', 'B', 'C'], dtype='object') Axis 1: RangeIndex(start=0, stop=3, step=1) NumpyBlock: slice(1, 4, 1), 3 x 3, dtype: object # cols "A, "B", "C" NumpyBlock: slice(0, 1, 1), 1 x 3, dtype: object # col "product" The important thing here is that this column is replacing the old column "product": it's a new column. E.g., if we use df.loc to create a new column, the same thing happens: df.loc[:, "new_col"] = "x" print(df._mgr) BlockManager Items: Index(['product', 'A', 'B', 'C', 'new_col'], dtype='object') Axis 1: RangeIndex(start=0, stop=3, step=1) NumpyBlock: slice(1, 4, 1), 3 x 3, dtype: object # cols "A, "B", "C" NumpyBlock: slice(0, 1, 1), 1 x 3, dtype: object # col "product" NumpyBlock: slice(4, 5, 1), 1 x 3, dtype: object # col "new_col" stmt3: in-place modification keeps block intact Here we see the difference with df.loc[index,"product"] = "x", because in this case we are not re-creating "product", we are simply updating its values. This does not create a new block: df = pd.DataFrame(index = index, columns=extra_column + columns) df.loc[index,"product"] = "x" print(df._mgr) BlockManager Items: Index(['product', 'A', 'B', 'C'], dtype='object') Axis 1: RangeIndex(start=0, stop=3, step=1) NumpyBlock: slice(0, 4, 1), 4 x 3, dtype: object # "product" still here Key takeaways The upshot of all this for the different versions you use: stmt1 with df["product"] = "x" internally has two blocks stmt3 with df.loc[index,"product"] = "x" internally keeps one block stmt4 with df["product"] = "x" after the for loop only has two blocks after that loop. The significant delay for stmt1 is caused by pandas needing to reconcile multiple blocks each time df.loc[i,columns] = 0 is executed in the loop. 
These internal checks trigger extra memory operations, as pandas must align modified rows across separate blocks. This results in a sizeable slowdown compared to the single-block df. Interestingly, df.copy leads to a reset of the blocks. Consequently, adding df = df.copy() gets the performance of stmt1 very close to stmt3 again: # adding: `df = df.copy()` stmt1 =""" df = pd.DataFrame(index = index, columns=extra_column + columns) df["product"] = "x" df = df.copy() for i in index: df.loc[i,columns] = 0 """ print(f" stmt1 takes { timeit.timeit(setup= setup_stmt, stmt= stmt1, number=10):2.2f} seconds" ) print(f" stmt3 takes { timeit.timeit(setup= setup_stmt, stmt= stmt3, number=10):2.2f} seconds" ) Prints: stmt1 takes 1.00 seconds stmt3 takes 0.99 seconds Further reading Some interesting reads on this complex topic and the difficulty of establishing its influence for specific use cases: Internal Structure of Pandas DataFrames, by Darius Kharazi (2020-05-15). The one pandas internal I teach all my new colleagues: the BlockManager, blog by Uwe Korn (2020-05-24). There are plans to replace the BlockManager: see here, cf. here. A note on stmt2 stmt2 should be ignored here, because it is not doing what you think it does. "dot notation" is a convenience feature that can provide attribute access to a df column. But this method comes with a few caveats. One being: The attribute will not be available if it conflicts with an existing method name, e.g. s.min is not allowed, but s['min'] is possible. This applies here, because df.product is a method of class pd.DataFrame. I.e., when you do df.product = "x", you are simply overwriting the method and storing the string "x" in its place: df = pd.DataFrame({"product": [1]}) type(df.product) method df.product = "x" type(df.product) str I.e., we never updated the actual df: product 0 1 # nothing changed
4
2
79,416,850
2025-2-6
https://stackoverflow.com/questions/79416850/how-to-reduce-verbosity-of-self-documenting-expressions-in-python-f-strings
This script: import numpy as np a = np.array([2, 3, 1, 9], dtype='i4') print(a) print(f'{a=}') produces: [2 3 1 9] a=array([2, 3, 1, 9], dtype=int32) Is there a way to get just a=[2 3 1 9] from {a=} f-string as in the standard formatting in the first line?
By default, = f-string syntax calls repr on the thing to be formatted, since that's usually more useful than str or format for the debugging use cases the = syntax was designed for. You can have it apply normal formatting logic instead by specifying a format - even an empty format will do, so this would work: print(f'{a=:}') or you can explicitly say to call str with !s: print(f'{a=!s}')
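A quick check with the array from the question (a sketch; expected output shown in the comments):
import numpy as np

a = np.array([2, 3, 1, 9], dtype='i4')
print(f'{a=:}')   # a=[2 3 1 9]  -- empty format spec falls back to format()/str-style output
print(f'{a=!s}')  # a=[2 3 1 9]  -- explicit str() conversion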
3
4
79,412,165
2025-2-4
https://stackoverflow.com/questions/79412165/trying-to-understand-differences-in-weighted-logistic-regression-outputs-between
I have a fictional weighted survey dataset that contains information about respondents' car colors and their response to the question "I enjoy driving fast." I would like to perform a regression to see whether the likelihood of slightly agreeing with this question varies based on whether or not the respondent drives a black car. (This isn't a serious analysis; I'm just introducing it for the purpose of comparing weighted regression outputs in R and Python.) In order to answer this question, I first ran a weighted logistic regression using R's survey and srvyr packages. This regression provided a test statistic of -1.18 for the black car color coefficient and a p value of 0.238. However, when I ran a weighted logistic regression within statsmodels, I received a test statistic of -1.35 and a p value of 0.177 for this coefficient. I would like to understand why these test statistics are different, and whether I'm making any mistakes within my setup for either test that could explain this discrepancy. It's also worth noting that, when I removed the weight component from each test, my test statistics and P values were almost identical. Therefore, it seems that these two implementations are treating my survey weights differently. Here is my code: (Note: I am running my R tests within rpy2 so that I can execute them in the same notebook as my Python code. You should be able to reproduce this output on your end within a Jupyter Notebook as long as you have rpy2 set up on your end.) import pandas as pd import statsmodels.formula.api as smf import statsmodels.api as sm %load_ext rpy2.ipython %R library(dplyr) %R library(srvyr) %R library(survey) %R library(broom) import pandas as pd df_car_survey = pd.read_csv( 'https://raw.githubusercontent.com/ifstudies/\ carsurveydata/refs/heads/main/car_survey.csv') # Adding dummy columns for independent and dependent variables: for column in ['Car_Color', 'Enjoy_Driving_Fast']: df_car_survey = pd.concat([df_car_survey, pd.get_dummies( df_car_survey[column], dtype = 'int', prefix = column)], axis = 1) df_car_survey.columns = [column.replace(' ', '_') for column in df_car_survey.columns] # Loading DataFrame into R and creating a survey design object: # See https://tidy-survey-r.github.io/tidy-survey-book/c10-sample-designs-replicate-weights.html # for more details. 
# This book was also inval %Rpush df_car_survey %R df_sdo <- df_car_survey %>% as_survey_design(\ weights = 'Weight') print("Survey design object:") %R print(df_sdo) # Logistic regression in R: # (This code was based on that found in # https://tidy-survey-r.github.io/tidy-survey-book/c07-modeling.html ) %R logit_result <- svyglm(\ Enjoy_Driving_Fast_Slightly_Agree ~ Car_Color_Black, \ design = df_sdo, na.action = na.omit,\ family = quasibinomial()) print("\n\n Logistic regression results from survey package within R:") %R print(tidy(logit_result)) # Logistic regression within Python: # (Based on StupidWolf's response at # https://stackoverflow.com/a/62798889/13097194 ) glm = smf.glm("Enjoy_Driving_Fast_Slightly_Agree ~ Car_Color_Black", data = df_car_survey, family = sm.families.Binomial(), freq_weights = df_car_survey['Weight']) model_result = glm.fit() print("\n\nStatsmodels logistic regression results:") print(model_result.summary()) Output: Survey design object: Independent Sampling design (with replacement) Called via srvyr Sampling variables: - ids: `1` - weights: Weight Data variables: - Car_Color (chr), Weight (dbl), Enjoy_Driving_Fast (chr), Count (int), Response_Sort_Map (int), Car_Color_Black (int), Car_Color_Red (int), Car_Color_White (int), Enjoy_Driving_Fast_Agree (int), Enjoy_Driving_Fast_Disagree (int), Enjoy_Driving_Fast_Slightly_Agree (int), Enjoy_Driving_Fast_Slightly_Disagree (int), Enjoy_Driving_Fast_Strongly_Agree (int), Enjoy_Driving_Fast_Strongly_Disagree (int) Logistic regression results from survey package within R: # A tibble: 2 × 5 term estimate std.error statistic p.value <chr> <dbl> <dbl> <dbl> <dbl> 1 (Intercept) -2.08 0.145 -14.3 8.85e-43 2 Car_Color_Black -0.293 0.248 -1.18 2.38e- 1 Statsmodels logistic regression results: Generalized Linear Model Regression Results ============================================================================================= Dep. Variable: Enjoy_Driving_Fast_Slightly_Agree No. Observations: 1059 Model: GLM Df Residuals: 1057 Model Family: Binomial Df Model: 1 Link Function: Logit Scale: 1.0000 Method: IRLS Log-Likelihood: -345.24 Date: Tue, 04 Feb 2025 Deviance: 690.48 Time: 10:07:53 Pearson chi2: 1.06e+03 No. Iterations: 5 Pseudo R-squ. (CS): 0.001763 Covariance Type: nonrobust =================================================================================== coef std err z P>|z| [0.025 0.975] ----------------------------------------------------------------------------------- Intercept -2.0834 0.125 -16.693 0.000 -2.328 -1.839 Car_Color_Black -0.2931 0.217 -1.350 0.177 -0.719 0.133 ===================================================================================
As suggested in the comments by George Savva, DaveArmstrong, and BenBolker, I believe the issue here is that that my R code is interpreting the weights as sampling weights (which makes sense, since I'm applying survey analysis libraries), whereas Statsmodels is interpreting them as frequency weights. For further evidence of this, I compared the output of a chi squared test using R's survey/srvyr libraries with those in Python's Samplics library, which is built for survey analyses. Here is the code that I used (which builds off that shown above): %R svychisq_result <- svychisq(\ ~ Enjoy_Driving_Fast_Slightly_Agree + Car_Color_Black, \ design = df_sdo, na.action = na.omit) print("\n\n Chi squared results from survey package within R:") %R print(svychisq_result) from samplics.categorical import CrossTabulation from samplics.utils.types import PopParam crosstab = CrossTabulation(param=PopParam.prop) crosstab.tabulate( vars=df_car_survey[["Enjoy_Driving_Fast_Slightly_Agree", "Car_Color_Black"]], samp_weight=df_car_survey['Weight'], remove_nan=True ) # Based on # https://samplics-org.github.io/samplics/pages/ # categorical_tabulation.html#two-way-tabulation-cross-tabulation print("Chi squared results from Python's Samplics library:",crosstab) The F statistic and p value (1.4033 and 0.2364, respectively) generated by the survey/srvyr libraries matched the adjusted F statistic and p value created by the Samplics library. Once logistic regression becomes available within Samplics, I will plan to use that library to perform regression analyses of survey data within Python. In the meantime, unless there's an alternative Python approach that would let me specify sampling weights (and not frequency weights) for my regression analyses, it looks like I'll need to exclusively use R for those tests.
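One avenue that is sometimes suggested for approximating design-based (sampling-weight) standard errors in statsmodels is to pair the weights with a robust (sandwich) covariance estimate; this is only a sketch using the question's df_car_survey, and I have not verified that it reproduces svyglm's results:
import statsmodels.api as sm
import statsmodels.formula.api as smf

glm = smf.glm("Enjoy_Driving_Fast_Slightly_Agree ~ Car_Color_Black",
              data=df_car_survey,  # DataFrame from the question
              family=sm.families.Binomial(),
              freq_weights=df_car_survey['Weight'])
# Request heteroskedasticity-robust (sandwich) standard errors instead of the default.
robust_result = glm.fit(cov_type='HC1')
print(robust_result.summary())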
2
2
79,413,507
2025-2-5
https://stackoverflow.com/questions/79413507/access-self-in-classmethod-when-instance-self-calls-classmethod
Is it possible to access the object instance in a Python @classmethod annotated function when the call occurs via the instance itself rather than the class? class Foo(object): def __init__(self, text): self.text = text @classmethod def bar(cls): return None print(Foo.bar()) foo = Foo('foo') # If possible, I'd like to return "foo" print(foo.bar()) I'm working around limitations of a library which calls a class method via the instance (rather than the class) and was hoping I could somehow work around the fact that I can only access the class in bar(cls) (I'd like to access the instance to access .text). I don't think that's possible, but I thought I'd ask (I couldn't unearth anything on the internet either for such a niche request).
One possible approach without modifying Foo is to define bar as an instance method of a subclass, but use a custom descriptor to make attribute access to the method return the class method of the same name of the super class instead when the attribute is accessed from the class, when the instance passed to __get__ is None: class hybridmethod: def __init__(self, f): self.f = f def __get__(self, obj, cls=None): if obj is not None: return self.f.__get__(obj) return getattr(super(cls, cls), self.f.__name__).__get__(cls) class MyFoo(Foo): @hybridmethod def bar(self): return self.text print(MyFoo.bar()) # outputs None foo = MyFoo('foo') print(foo.bar()) # outputs foo Demo: https://ideone.com/yVAzr3 EDIT: Since you have now clarified in the comment that you have full control over Foo, you can instead modify Foo.bar directly such that it is a custom descriptor whose __get__ method calls a class method or an instance method based on the aforementioned condition. For convenience I've made the descriptor a decorator that decorates a class method with an additional instancemethod method that can decorate an instance method: class instanceable_classmethod(classmethod): def __init__(self, func): super().__init__(func) self.instance_func = self.__func__ def instancemethod(self, func): self.instance_func = func return self def __get__(self, obj, cls=None): if obj is None: return super().__get__(obj, cls) return self.instance_func.__get__(obj) class Foo(object): def __init__(self, text): self.text = text @instanceable_classmethod def bar(cls): return None @bar.instancemethod def bar(self): return self.text print(Foo.bar()) # outputs None foo = Foo('foo') print(foo.bar()) # outputs foo Demo: https://ideone.com/PmSnUC
1
1
79,416,288
2025-2-5
https://stackoverflow.com/questions/79416288/django-compilemessages-error-cant-find-msgfmt-gnu-gettext-on-ubuntu-vps
I am trying to compile translation messages in my Django project by running the following command: python manage.py compilemessages However, I get this error: CommandError: Can't find msgfmt. Make sure you have GNU gettext tools 0.15 or newer installed. So then I tried installing gettext. sudo apt update && sudo apt install gettext -y After installation, I still get the same error when running python manage.py compilemessages. Checked if msgfmt is installed by msgfmt --version but it says: command not found OS is Ubuntu and Django version is 4.2.5. How can I resolve this issue and make Django recognize msgfmt? Any help would be appreciated!
I found the fix: after deactivating the virtual environment, I installed gettext again, and python manage.py compilemessages now works.
1
1
79,416,093
2025-2-5
https://stackoverflow.com/questions/79416093/how-to-verify-if-a-instance-of-a-generic-class-is-from-the-good-type-variable-in
Let's assume that I have a generic class like that : import typing type_variable = typing.TypeVar("type_variable") class OneClass(typing.Generic[type_variable]): def __init__(self, value: type_variable): self.value = value Now I create two simple objects : a = OneClass("a str-typped object") b = OneClass(b'a byte-typped object') When I check if a and b have same type using type, it says True : >>> type(a) == type(b) True When I check if a is of type OneClass[str], it says False: >>> type(a) == OneClass[str] False I know that there I can use something like type(a.value) == type(b.value), but in some other cases I can have this kind of objects : tuple_object_1 = OneClass(("a", "first", "tupple", "with", "strings")) tuple_object_2 = OneClass((b'a', b'second', b'tupple', b'with', b'bytes')) In that case I have as types : tuple_object_1 : OneClass[tuple[str]] tuple_object_2 : OneClass[tuple[bytes]] and so evaluating the types of tuple_object_1 and tuple_object_2 only returns tuple and not something like tuple[str] and tuple[bytes], so the type evaluation of the content is also not valid. How can I get the type of those kind of objects, considering also their generic types? Something like a deep type. Thanks for help!
You are making a fundamental error. You are confusing types (int, float, OneClass) with type annotations (all the previous types, as well as OneClass[int], tuple[int, ...], etc.) As far as Python is concerned, tuple_object_1 and tuple_object_2 are just tuples, nothing more. It doesn't care in the least what the types of the elements are within the tuple until you actually access them. Type annotations are used by IDEs and checkers to verify the correctness of your code. If you declare something to be List[int], it will be your IDE or your lint checker that warns you if you try to add a string to it. But Python couldn't care less.
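If you do need a description of nested value types at runtime, you have to inspect the values yourself; a minimal hand-rolled sketch (the deep_type helper is purely illustrative, not part of typing):
def deep_type(value) -> str:
    # Best-effort description of a value's runtime "deep" type.
    if isinstance(value, tuple):
        inner = ", ".join(sorted({deep_type(v) for v in value}))
        return f"tuple[{inner}]"
    return type(value).__name__

print(deep_type(("a", "first", "tupple")))      # tuple[str]
print(deep_type((b'a', b'second', b'tupple')))  # tuple[bytes]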
1
3
79,412,275
2025-2-4
https://stackoverflow.com/questions/79412275/pandas-performance-while-iterating-a-state-vector
I want to make a pandas dataframe that describes the state of a system at different times I have the initial state which describes the first row Each row correspond to a time I have reveserved the first two columns for "household" / statistics The following columns are state parameters At each iteration/row a number of parameters change - this could be just one or many I have created a somewhat simplified version that simulates my change data : df_change Question 1 Can you think of a more efficient way of generating the matrix than what i do in this code? i have a state that i update in a loop and insert Question 2 This is what i discovered while trying to write the sample code for this discussion. I see 20 fold performanne boost in loop iteration performance if i do the assignments to the "household" columns after the loop. Why is this? I am using python = 3.12.4 and pandas 2.2.2. df["product"] ="some_product" #%% import numpy as np import pandas as pd from tqdm import tqdm num_cols =600 n_changes = 40000 # simulate changes extra_colums = ["n_changes","product"] columns = [chr(i+65) for i in range(num_cols)] state = { icol : np.random.random() for icol in columns} change_index = np.random.randint(0,4,n_changes).cumsum() change_col = [columns[np.random.randint(0,num_cols)] for i in range(n_changes)] change_val= np.random.normal(size=n_changes) # create change matrix df_change=pd.DataFrame(index= change_index ) df_change['col'] = change_col df_change['val'] = change_val index = np.unique(change_index) # %% # Slow approach gives 5 iterations/s df = pd.DataFrame(index= index, columns=extra_colums + columns) df["product"] ="some_product" for i in tqdm(index): state.update(zip(df_change.loc[[i],"col"] , df_change.loc[[i],"val"])) df.loc[i,columns] = pd.Series(state) # %% # Fast approach gives 1000 iterations/sec df2 = pd.DataFrame(index= index, columns=extra_colums + columns) for i in tqdm(index): state.update(zip(df_change.loc[[i],"col"] , df_change.loc[[i],"val"])) df2.loc[i,columns] = pd.Series(state) df2["product"] ="some_product" Edit I marked the answer by ouroboros1 as theaccepted solution - it works really well and answered Question 1. I am still curios about Question 2 : the difference in pandas performance using the two methods where i iterate through the rows. I found that I can also get a performance similar to the original "df2" method depending on how i assign the value before the loop. The interesting point here is that pre assignment changes the performance in loop that follows. # Fast approach gives 1000 iterations/sec df3 = pd.DataFrame(index=index, columns=extra_colums + columns) #df3.loc[index,"product"] = "some_product" # Fast #df3["product"] = "some_product" # Slow df3.product = "some_product" # Fast for i in tqdm(index): state.update(zip(df_change.loc[[i], "col"], df_change.loc[[i], "val"])) df3.loc[i, columns] = np.array(list(state.values()))
Here's one approach that should be much faster: Data sample num_cols = 4 n_changes = 6 np.random.seed(0) # reproducibility # setup ... df_change col val 1 C 0.144044 4 A 1.454274 5 A 0.761038 7 A 0.121675 7 C 0.443863 10 B 0.333674 state {'A': 0.5488135039273248, 'B': 0.7151893663724195, 'C': 0.6027633760716439, 'D': 0.5448831829968969} Code out = (df_change.reset_index() .pivot_table(index='index', columns='col', values='val', aggfunc='last') .rename_axis(index=None, columns=None) .assign(product='some_product') .reindex(columns=extra_colums + columns) .fillna(pd.DataFrame(state, index=[index[0]])) .ffill() ) Output n_changes product A B C D 1 NaN some_product 0.548814 0.715189 0.144044 0.544883 4 NaN some_product 1.454274 0.715189 0.144044 0.544883 5 NaN some_product 0.761038 0.715189 0.144044 0.544883 7 NaN some_product 0.121675 0.715189 0.443863 0.544883 10 NaN some_product 0.121675 0.333674 0.443863 0.544883 # note: # A updated in 4, 5, 7 # B updated in 10 # C updated in 1, 7 Explanation / Intermediates Use df.reset_index to access 'index' inside df.pivot_table. For the aggfunc use 'last'. I.e., we only need to propagate the last value in case of duplicate 'col' values per index value. Cosmetic: use df.rename_axis to reset index and columns names to None. # df_chagne.reset_index().pivot_table(...).rename_axis(...) A B C 1 NaN NaN 0.144044 4 1.454274 NaN NaN 5 0.761038 NaN NaN 7 0.121675 NaN 0.443863 10 NaN 0.333674 NaN Use df.assign to add column 'product' with a scalar ('some_product'). Use df.reindex to get the columns in the desired order (with extra_columns up front). Not yet existing column 'n_changes' will be added with NaN values. Now, apply df.fillna and use a pd.DataFrame with state for only the first index value (index[0]), to fill the first row (alternatively, use df.combine_first). # after .fillna(...) n_changes product A B C D 1 NaN some_product 0.548814 0.715189 0.144044 0.544883 4 NaN some_product 1.454274 NaN NaN NaN 5 NaN some_product 0.761038 NaN NaN NaN 7 NaN some_product 0.121675 NaN 0.443863 NaN 10 NaN some_product NaN 0.333674 NaN NaN Finally, we want to forward fill: df.ffill. Performance comparison: num_cols = 100 n_changes = 100 np.random.seed(0) # reproducibility # out: 7.01 ms ± 435 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) # df2 (running this *afterwards*, as you are updating `state` 93.7 ms ± 3.79 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) Equality check: df2.astype(out.dtypes).equals(out) # True
1
2
79,414,728
2025-2-5
https://stackoverflow.com/questions/79414728/python-polars-how-to-add-columns-in-one-lazyframe-to-another-lazyframe
I have a Polars LazyFrame and would like to add to it columns from another LazyFrame. The two LazyFrames have the same number of rows and different columns. I have tried the following, which doesn't work as with_columns expects an iterable. def append_columns(df:pl.LazyFrame): df2 = pl.LazyFrame([1,2]) return df.with_columns(df2)
For this, pl.concat setting how="horizontal" might be used. import polars as pl df = pl.LazyFrame({ "a": [1, 2, 3], "b": [4, 5, 6], }) other = pl.LazyFrame({ "c": [9, 10, 11], "d": [12, 13, 14], "e": [15, 16, 17], }) result = pl.concat((df, other.select("c", "d")), how="horizontal") The resulting pl.LazyFrame then looks as follows. shape: (3, 4) ┌─────┬─────┬─────┬─────┐ │ a ┆ b ┆ c ┆ d │ │ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪═════╪═════╡ │ 1 ┆ 4 ┆ 9 ┆ 12 │ │ 2 ┆ 5 ┆ 10 ┆ 13 │ │ 3 ┆ 6 ┆ 11 ┆ 14 │ └─────┴─────┴─────┴─────┘
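An alternative, in case horizontal concat is awkward in your pipeline, is to join on a generated row index; a sketch (with_row_index exists in newer Polars releases, older ones call it with_row_count):
result = (
    df.with_row_index("rid")
    .join(other.select("c", "d").with_row_index("rid"), on="rid")
    .drop("rid")
)
This keeps everything lazy and produces the same four columns as the concat above.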
2
3
79,415,141
2025-2-5
https://stackoverflow.com/questions/79415141/pandas-convert-string-column-to-bool-but-convert-typos-to-false
I have an application that takes an input spreadsheet filled in by the user. I'm trying to fix a bug, and I've just noticed that in one True/False column, they've written FLASE instead of FALSE. I'm trying to write in as many workarounds for user error as I can, as the users of this app aren't very technical, so I was wondering if there's a way to convert this column to type bool, but set any typos to False? I appreciate this would also set something like TURE to be false too. For example: df = pd.DataFrame({'bool_col':['True', 'Flase', 'False', 'True'], 'foo':[1,2,3,4]}) Running df['bool_col'] = df['bool_col'].astype(bool) returns True for everything (as they're all non-empty strings), however I would like it to return True, False, False, True.
If you want a generic approach, you could use fuzzy matching, for example with thefuzz: from thefuzz import process def to_bool(s, threshold=60): bools = [True, False] choices = list(map(str, bools)) match, score = process.extractOne(s, choices) d = dict(zip(choices, bools)) if score > threshold: return d[match] df['bool'] = df['bool_col'].map(to_bool) Example: bool_col foo bool 0 True 1 True 1 Flase 2 False 2 False 3 False 3 True 4 True 4 ture 5 True 5 banana 6 None
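If installing thefuzz is not an option, a rough standard-library alternative using difflib is sketched below (the 0.6 cutoff is a guess and may need tuning for your data):
import difflib

def to_bool_stdlib(s, cutoff=0.6):
    # Fuzzy-match the cell against 'true'/'false' using difflib's similarity ratio.
    match = difflib.get_close_matches(str(s).lower(), ['true', 'false'], n=1, cutoff=cutoff)
    if match:
        return match[0] == 'true'
    return None

df['bool'] = df['bool_col'].map(to_bool_stdlib)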
1
1
79,414,891
2025-2-5
https://stackoverflow.com/questions/79414891/how-to-make-my-dataclass-compatible-with-ctypes-and-not-lose-the-dunder-method
Consider a simple data class: from ctypes import c_int32, c_int16 from dataclasses import dataclass @dataclass class MyClass: field1: c_int32 field2: c_int16 According to the docs, if we want to make this dataclass compatible with ctypes, we have to define it like this: import ctypes from ctypes import Structure, c_int32, c_int16, sizeof from dataclasses import dataclass, fields @dataclass class MyClass(ctypes.Structure): _pack_ = 1 _fields_ = [("field1", c_int32),("field2", c_int16)] print(ctypes.sizeof(MyClass)) But unfortunately, this definition deprives us of the convenient features of the dataclass, which are called “dunder” methods. For example, constructor(__init__()) and string representation(__repr__()) become unavailable: inst = MyClass(c_int32(42), c_int16(43)) # will give error Q: What is the most elegant and idiomatic way to make a dataclass compatible with ctypes without losing “dunder” methods? If we ask me, this code seems to work at first glance: @dataclass class MyClass(ctypes.Structure): field1: c_int32 field2: c_int16 _pack_ = 1 MyClass._fields_ = [(field.name, field.type) for field in fields(MyClass)] #_pack_ is skipped Since I'm a beginner, I'm not sure if this code doesn't lead to some other, non-obvious problems.
Both ctypes.Structure and dataclass have some similar functionality - but neither was built with the explicit intent of being collaborative with the other - therefore we have to make this bridging code. For one, the dataclass decorator will always attempt to be less disruptive as it can to whatever functionalities the class already have. Since Structure already provides an __init__ method, which works for it, we have to tell dataclass to leave it in place - this can be done just passing the init=False argument to dataclass: A Structure class created this way will work, but won't have a __repr__ - dataclass needs the fields to be annotated in order to know its things. The following decorator would work instead of @dataclass: def sdataclass(cls=None,/, **kwargs): if cls is None: return lambda cls: sdataclass(cls, **kwargs) dataclassdeco = dataclass(init=False, **kwargs) cls.__annotations__ = dict(cls._fields_) return dataclassdeco(cls) @sdataclass class S(ctypes.Structure): _fields_ = [("field1", ctypes.c_uint8),] _pack_ = 1 The small amount of logic inside the decorator is just so that it can pass extra parameters to the original dataclass call- the only important things there are setting the .__annotation__ fields and passing the init=False parameter to the dataclass. HOWEVER, dataclasses come at a later type than structures, when the annotation syntax is in place, and it is more convenient to declare a class fields than with the _fields_ parameter. A converse decorator can just do what you have done in the second part of your question - just passing the init=False decorator. (I am not sure the _pack_ can be set after the class is created - please test the functionality) import ctypes from dataclasses import fields, dataclass def sdataclass(cls=None,/, **kwargs): if cls is None: return lambda cls: sdataclass(cls, **kwargs) # Allow the pack information to be passed # in the decorator: pack = kwargs.pop("pack", True) dataclassdeco = dataclass(init=False, **kwargs) cls = dataclassdeco(cls) cls._fields_ = [(field.name, field.type) for field in fields(cls)] cls._pack_ = pack return cls ... @sdataclass class S(ctypes.Structure): field1: ctypes.c_uint8 field2: ctypes.c_uint16 of course, ctypes structures has some extra functionalities, like allowing the creation of bitfields - this won't suffice as it is - but it should be enough for composing complex structures as it is, without making use of forward declarations.
1
2
79,415,052
2025-2-5
https://stackoverflow.com/questions/79415052/how-to-keep-the-same-number-of-threads-on-python-all-the-time
This is a part of my code. For example, even if 3 out of 10 ends first, I want the next 3 to start right away and always keep 10 threads running. However, the current code is moving on to the next 10 only when all 10 are completely finished. How can I modify the code? I always want to keep a certain number of threads, but as it stands, I have to finish the previous 10 threads completely before moving on to the next 10. import concurrent.futures import time def example_task(n): print(f"Task {n} started.") time.sleep(n) print(f"Task {n} completed.") return n with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor: futures = [] for i in range(10): futures.append(executor.submit(example_task, i+1)) for future in concurrent.futures.as_completed(futures): try: result = future.result() print(f"Result of task: {result}") next_task = len(futures) + 1 futures.append(executor.submit(example_task, next_task)) except Exception as e: print(f"Error: {e}")
Use a queue-based approach with ThreadPoolExecutor, where tasks are continuously submitted as soon as one completes. import concurrent.futures import time import itertools def example_task(n): print(f"Task {n} started.") time.sleep(n) # Simulate work print(f"Task {n} completed.") return n def main(): max_threads = 5 total_tasks = 20 # Total number of tasks you want to process task_counter = itertools.count(1) # Infinite counter for task numbers with concurrent.futures.ThreadPoolExecutor(max_workers=max_threads) as executor: futures = {} # Submit initial batch of tasks for _ in range(max_threads): task_id = next(task_counter) futures[executor.submit(example_task, task_id)] = task_id # Process tasks dynamically while futures: done, _ = concurrent.futures.wait(futures, return_when=concurrent.futures.FIRST_COMPLETED) for future in done: task_id = futures.pop(future) # Remove completed task try: result = future.result() print(f"Result of task {result}") # Submit a new task if we haven't reached the total task limit if task_id < total_tasks: new_task_id = next(task_counter) futures[executor.submit(example_task, new_task_id)] = new_task_id else: print("FINISHING") break except Exception as e: print(f"Task {task_id} failed: {e}") if __name__ == "__main__": main()
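If all task arguments are known up front, a simpler sketch (reusing example_task and total_tasks from the snippet above) is to submit everything immediately; the pool itself never runs more than max_workers tasks at once, so a new task starts as soon as any worker frees up:
import concurrent.futures

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    futures = [executor.submit(example_task, i) for i in range(1, total_tasks + 1)]
    for future in concurrent.futures.as_completed(futures):
        print("Result:", future.result())
The dynamic-submission version above is mainly useful when new tasks are generated while earlier ones are still running.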
2
2
79,414,070
2025-2-5
https://stackoverflow.com/questions/79414070/seleniumbase-cdp-mode-execute-script-and-evaluate-with-javascript-gives-error-s
I am using SeleniumBase in CDP Mode. I am having a hard time figuring out if this is a Python issue or a SeleniumBase issue. The below simple example shows my problem: from seleniumbase import SB with SB(uc=True, locale_code="en", headless=True) as sb: link = "https://news.ycombinator.com" print(f"\nOpening {link}") sb.wait_for_ready_state_complete(timeout=120) sb.activate_cdp_mode(link) script = f""" function getSomeValue() {{ return '42'; }} return getSomeValue(); """ # data = sb.execute_script(script) data = sb.cdp.evaluate(script) print(data) print("Finished!") This throws the error: seleniumbase.undetected.cdp_driver.connection.ProtocolException: exceptionId: 1 text: Uncaught lineNumber: 5 columnNumber: 4 scriptId: 6 exception: type: object subtype: error className: SyntaxError description: SyntaxError: Illegal return statement objectId: 3089353218542582072.1.2 Notice above that I have tried both sb.execute_script(script) and sb.cdp.evaluate(script) and both give the same issue. How can I execute such scripts?
In CDP Mode, don't include the final return when evaluating JS. Your script part should look like this: script = f""" function getSomeValue() {{ return '42'; }} getSomeValue(); """ data = sb.cdp.evaluate(script) (Instead of using "return getSomeValue();", which breaks evaluate(expression).)
1
2
79,413,756
2025-2-5
https://stackoverflow.com/questions/79413756/regex-a-string-with-a-space-between-words
import re texto = "ABC ABC. ABC.png ABC thumb.png" regex = r"ABC(?!.png)|ABC(?! thumb.png)" novo = re.sub(regex, "bueno", texto) print(novo) I'm trying to replace the word ABC, with exceptions. I only want to replace it if it isn't followed by ".png" or " thumb.png", so a string like "ABC thumb.png" should stay as it is. I expected bueno bueno. ABC.png ABC thumb.png But the output is this: bueno bueno. bueno.png bueno thumb.png It isn't detecting the space and it actually messes up the first condition.
Starting with your original pattern: ABC(?!\.png)|ABC(?! thumb\.png) (Note: Dot is a regex metacharacter and should be escaped with backslash) This will match ABC which is not followed by .png or ABC not followed by thumb.png. Every possible occurrence of ABC will match this pattern. Therefore, all occurrences of ABC will be match, because every extension will match at least one of the two conditions. We can write the following correction: \bABC(?!\.png| thumb\.png) This pattern says to match: \b word boundary ABC match ABC (?!\.png| thumb\.png) neither .png or thumb.png follows The negative lookahead used here basically has AND flavored logic, and will exclude both following extensions.
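Applied to the original snippet, the corrected pattern gives the expected output (a quick sketch):
import re

texto = "ABC ABC. ABC.png ABC thumb.png"
regex = r"\bABC(?!\.png| thumb\.png)"
print(re.sub(regex, "bueno", texto))  # bueno bueno. ABC.png ABC thumb.png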
8
8
79,414,138
2025-2-5
https://stackoverflow.com/questions/79414138/using-a-dict-for-caching-async-function-result-does-not-use-cached-results
BASE_URL = "https://query2.finance.yahoo.com/" DATA_URL_PART = "v8/finance/chart/{ticker}" async def fetch_close_price(aiosession: aiohttp.ClientSession, ticker: str, start: datetime, end: datetime, ticker_price_dict): params = { 'period1': int(to_midnight(start, naive=False).timestamp()), 'period2': int(to_midnight(end, naive=False).timestamp()), 'interval': '1d', 'includeAdjustedClose': 'false' # Explicitly exclude adjusted close prices } print(f'Checking cache for ticker {ticker}') if ticker in ticker_price_dict: print(f'Found cache for ticker {ticker}') return ticker_price_dict[ticker] async with aiosession.get(DATA_URL_PART.format(ticker=ticker), params=params) as response: if response.status == 200: print(f"Downloading {ticker}", response.status) data = await response.json() .... .... df = pd.DataFrame({ticker: close_prices}, index=dates) ticker_price_dict[ticker] = df print(f"Downloaded and setting {ticker}", response.status) return df else: print(f"Failed to fetch data for {ticker}", response.status) return pd.DataFrame() async def download_close_prices(aiosession: aiohttp.ClientSession, tickers: list[str], start: datetime, end: datetime): all_ticker_close = pd.DataFrame() ticker_price_dict = {} tasks = [fetch_close_price(aiosession, ticker, start, end, ticker_price_dict) for ticker in tickers] results = await asyncio.gather(*tasks) async def download_close_prices_all(tickers, start, end): connector = aiohttp.TCPConnector(limit=50) async with aiohttp.ClientSession(BASE_URL, connector=connector) as aiosession: return await download_close_prices(aiosession, tickers, start, end) if __name__ == "__main__": tickers = ["AAPL", "USDINR=X", "XCN18679-USD", "XCN18679-USD", "XCN18679-USD"] close_prices = asyncio.run(download_close_prices_all(tickers, start, end)) I have this asyncio function which uses aiohttp to make requests and return dataframes, sometimes I might have lots of repeated tickers to fetch like the example given where I am calling "XCN18679-USD" many times, I tried passing in a simple dict to cache the results but it never seems to find the ticker in dict and always downloads, I don't know what to do next, maybe use cache, but sometimes I can get download failed and I don't want to cache that. Can anyone point me in the right direction what I might be doing wrong? thanks a lot to everyone in advance.
For sure the simplest solution is to adopt the comment of @deceze and just ensure that your list of tickers has no duplicates, When that is not practical for whatever reason, then the following technique can be used: When fetch_close_price discovers that the ticker price is not in the cache, it will go ahead and make the request to fetch it. But it will first create a Future instance representing the completion of the request and store that in the cache. When the price is finally downloaded, it will be placed in the cache but the future that had been there will be set with the price. If, however, the cache does have an entry for the ticker, then it is either a Future instance or the actual price. In the former case, we simply await the future's completion. In the latter case we already have the price. Either way, no new request to download the price need be made. The only complication is determining whether the cache, if not empty, contains a future or the actual price. When I create a future with loop.create_future, the actual class is _asyncio.Future. Rather than not having to assume any particular class for futures, I presumably know the class of a price and can check for that (a str instance in my code below). import asyncio async def fetch_close_price(ticker: str, ticker_price_dict): print(f'Checking cache for ticker {ticker}') price = ticker_price_dict.get(ticker) if price: # We cannot be sure of the class used for a future, so # we check to see if this is the class of a price. # My price is just a str instance if not isinstance(price, str): # This is a future future = price print(f'ticker {ticker} has already been requested.') price = await future print(f'Got price {price!r} for ticker {ticker} from the future.') else: print(f'Found price for ticker {ticker} in cache') return price # The ticker is not in the cache, so we have to make a request # Put a future in the cache to signal to other tasks interested in # the same ticker that the request has been made. future = asyncio.get_running_loop().create_future() ticker_price_dict[ticker] = future # Download ticker. Here we emulate doing that by sleeping a bit: print(f'Making request for ticker {ticker}') await asyncio.sleep(1) price = f'{ticker} price' print(f'Got price {price!r} for ticket {ticker} from the request') # Put the 'result" in the cache: ticker_price_dict[ticker] = price # And tell others interested in this price that we have it: future.set_result(price) async def download_close_prices(tickers: list[str]): ticker_price_dict = {} tasks = [fetch_close_price(ticker, ticker_price_dict) for ticker in tickers] results = await asyncio.gather(*tasks) return results async def download_close_prices_all(tickers: list[str]): return await download_close_prices(tickers) if __name__ == "__main__": tickers = ["AAPL", "USDINR=X", "XCN18679-USD", "XCN18679-USD", "XCN18679-USD"] close_prices = asyncio.run(download_close_prices_all(tickers)) Prints: Checking cache for ticker AAPL Making request for ticker AAPL Checking cache for ticker USDINR=X Making request for ticker USDINR=X Checking cache for ticker XCN18679-USD Making request for ticker XCN18679-USD Checking cache for ticker XCN18679-USD ticker XCN18679-USD has already been requested. Checking cache for ticker XCN18679-USD ticker XCN18679-USD has already been requested. 
Got price 'AAPL price' for ticket AAPL from the request Got price 'USDINR=X price' for ticket USDINR=X from the request Got price 'XCN18679-USD price' for ticket XCN18679-USD from the request Got price 'XCN18679-USD price' for ticker XCN18679-USD from the future. Got price 'XCN18679-USD price' for ticker XCN18679-USD from the future.
1
3
79,412,706
2025-2-4
https://stackoverflow.com/questions/79412706/whats-going-on-with-the-chaining-in-pythons-string-membership-tests
I just realized I had a typo in my membership test and was worried this bug had been causing issues for a while. However, the code had behaved just as expected. Example: "test" in "testing" in "testing" in "testing" This left me wondering how this membership expression works and why it's allowed. I tried applying some order of operations logic to it with parentheses but that just breaks the expression. And the docs don't mention anything about chaining. Is there a practical use case for this I am just not aware of?
in is a comparison operator. As described at the top of the section in the docs you linked to, all comparison operators can be chained: Formally, if a, b, c, …, y, z are expressions and op1, op2, …, opN are comparison operators, then a op1 b op2 c ... y opN z is equivalent to a op1 b and b op2 c and ... y opN z, except that each expression is evaluated at most once. So: "test" in "testing" in "testing" in "testing" Is equivalent to: "test" in "testing" and "testing" in "testing" and "testing" in "testing"
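A small demonstration of the equivalence, including why adding parentheses "breaks" the expression:
print("test" in "testing" in "testing")  # True: both memberships hold
print("test" in "testing" in "tes")      # False: "testing" in "tes" fails
try:
    ("test" in "testing") in "testing"   # evaluates True in "testing"
except TypeError as exc:
    print(exc)                           # 'in <string>' requires string as left operand, not bool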
2
2
79,412,615
2025-2-4
https://stackoverflow.com/questions/79412615/understanding-and-fixing-the-regex
I have a regex on my input parameter: r"^(ABC-\d{2,9})|(ABz?-\d{3})$" Ideally it should not allow parameters with ++ or -- at the end, but it does. Why does the regex not work in this case when it works in all other scenarios? ABC-12 is valid. ABC-123456789 is valid. AB-123 is valid. ABz-123 is valid.
The problem is that your ^ and $ anchors don't apply to the entire pattern. You match ^ only in the first alternative, and $ only in the second alternative. So if the input matches (ABC-\d{2,9}) at the beginning, the match will succeed even if there's more after this. You can put a non-capturing group around everything except the anchors to fix this. r"^(?:(ABC-\d{2,9})|(ABz?-\d{3}))$"
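A quick check of the corrected pattern against the examples from the question (a sketch):
import re

pattern = re.compile(r"^(?:(ABC-\d{2,9})|(ABz?-\d{3}))$")
for s in ["ABC-12", "ABC-123456789", "AB-123", "ABz-123", "ABC-12++", "AB-123--"]:
    print(s, bool(pattern.match(s)))
# The first four print True; the two strings with trailing ++/-- print False.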
1
8
79,411,167
2025-2-4
https://stackoverflow.com/questions/79411167/how-to-use-the-apply-function-to-return-a-list-to-new-column-in-pandas
I have a Pandas dataframe: import pandas as pd import numpy as np np.random.seed(150) df = pd.DataFrame(np.random.randint(0, 10, size=(10, 2)), columns=['A', 'B']) I want to add a new column "C" whose values ​​are the combined-list of every three rows in column "B". So I use the following method to achieve my needs, but this method is slow when the data is large. >>> df['C'] = [df['B'].iloc[i-2:i+1].tolist() if i >= 2 else None for i in range(len(df))] >>> df A B C 0 4 9 None 1 0 2 None 2 4 5 [9, 2, 5] 3 7 9 [2, 5, 9] 4 8 3 [5, 9, 3] 5 8 1 [9, 3, 1] 6 1 4 [3, 1, 4] 7 4 1 [1, 4, 1] 8 1 9 [4, 1, 9] 9 3 7 [1, 9, 7] When I try to use the df.apply function, I get an error message: df['C'] = df['B'].rolling(window=3).apply(lambda x: list(x), raw=False) TypeError: must be real number, not list I remember that Pandas apply doesn't seem to return a list, so how do I do this? I searched the forum, but couldn't find a suitable topic about apply and return.
You can use numpy's sliding_window_view: from numpy.lib.stride_tricks import sliding_window_view as swv N = 3 df['C'] = pd.Series(swv(df['B'], N).tolist(), index=df.index[N-1:]) Output: A B C 0 4 9 NaN 1 0 2 NaN 2 4 5 [9, 2, 5] 3 7 9 [2, 5, 9] 4 8 3 [5, 9, 3] 5 8 1 [9, 3, 1] 6 1 4 [3, 1, 4] 7 4 1 [1, 4, 1] 8 1 9 [4, 1, 9] 9 3 7 [1, 9, 7]
5
8
79,437,687
2025-2-13
https://stackoverflow.com/questions/79437687/typeerror-asyncclient-init-got-an-unexpected-keyword-argument-proxies
Error: File "/app/.venv/lib/python3.11/site-packages/anthropic/_client.py", line 386, in __init__ super().__init__( File "/app/.venv/lib/python3.11/site-packages/anthropic/_base_client.py", line 1437, in __init__ self._client = http_client or AsyncHttpxClientWrapper(^^^^^^^^^^^^^^^^^^^^^^^^ File "/app/.venv/lib/python3.11/site-packages/anthropic/_base_client.py", line 1334, in __init__ super().__init__(**kwargs) TypeError: Async`Client.__init__() got an unexpected keyword argument 'proxies' I know that to fix this problem I need to downgrade httpx to another version, but what if I'm using fasthx = "2.0.1"? What's the solution for Anthropic AI? My code: load_dotenv() log = logging.getLogger(__name__) anthropic_api_key = os.getenv("ANTHROPIC_API_KEY") anthropic_client = anthropic.Anthropic(api_key=anthropic_api_key) # <-- Error happens here anthropic_client = instructor.from_anthropic(anthropic_client) I want to fix the error TypeError: AsyncClient.init() got an unexpected keyword argument 'proxies'
Just upgrade anthropic to the latest version, the problem was fixed in version 0.45.2 and above.
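To confirm which version is actually installed before and after upgrading, a quick standard-library check (a sketch):
from importlib.metadata import version
print(version("anthropic"))  # should report 0.45.2 or newer for the fix mentioned above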
1
0
79,445,863
2025-2-17
https://stackoverflow.com/questions/79445863/how-can-i-prevent-the-telemetry-client-instance-id-changed-from-aaaaaaaaaaaaaaa
When the consumer (which is a very simple confluent-kafka-python consumer) runs, we see this log message after the assignment: %6|1739802885.947|GETSUBSCRIPTIONS|<consumer id>#consumer-1| [thrd:main]: Telemetry client instance id changed from AAAAAAAAAAAAAAAAAAAAAA to <some random string> I tried running the consumer locally (in contrast to the Kubernetes cluster) and saw no such logs. I tried googling for this log message but found no bug reports or advice on avoiding it (though I am not the only person seeing such logs).
As per my previous comment, from confluent_kafka import Consumer conf = { 'bootstrap.servers': 'your broker', 'group.id': 'your group', 'enable.metrics.push': False } consumer = Consumer(conf)
2
1
79,443,450
2025-2-16
https://stackoverflow.com/questions/79443450/regex-for-ip-address-domain-and-url
Problem statement: I am trying to generate regex for ip-address, domain and url. These are my defitions: IP Address: 93.114.205.169 Domain: example.com sub.example.com Url: 93.114.205.169/path example.com/path sub.example.com/path So, an url always has a path to resource. But an IP-Address or domain should not have path to resource otherwise it would be an URL. Also note that these IP-address, domain and url can have http or https optionally with or without www. My attempt: I have tried various ways for these regex: [[rules]] id = "ip-address" description = "Potential IP Address detected." regex = '''\b(?:[0-9]{1,3}\.){3}[0-9]{1,3}\b''' entropy = 2 keywords = ["ip"] [[rules]] id = "domain" description = "Potential domain name detected." regex = '''\b[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}\b''' entropy = 2 keywords = ["domain"] [[rules]] id = "url" description = "Potential URL detected." regex = '''\b(?:https?|ftp):\/\/(?:[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}|\d{1,3}(?:\.\d{1,3}){3})(?::\d{1,5})?(?:\/[^\s\"<>]*)?\b''' entropy = 2 keywords = ["http", "https", "ftp", "url"] But, these regex covering ip-address as url. For example, this ip-address http://93.114.205.169 is covering in url not as ip-address which should be only as ip-address but not as url according to my above definitions. I changed to these regex: [[rules]] id = "ip-address" description = "Potential IP Address detected." regex = '''\b(?:[0-9]{1,3}\.){3}[0-9]{1,3}\b''' entropy = 2 keywords = ["ip"] [[rules]] id = "domain" description = "Potential domain name detected." regex = '''\b[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}\b''' entropy = 2 keywords = ["domain"] [[rules]] id = "url" description = "Potential URL detected." regex = '''\b(?:https?|ftp):\/\/(?:[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}|\d{1,3}(?:\.\d{1,3}){3})(?::\d{1,5})?(?:\/[^\s\"<>]*)?\b''' entropy = 2 keywords = ["http", "https", "ftp", "url"] This also has same problem as above as http://93.114.205.169 recogize as url, but also it recognizing as ip-address too. It means it indentifies ip-address but it can also returning ip-addresses from urls like this http://93.114.205.169/path as url and 93.114.205.169 as ip-address. Could you suggest me correct regex for my these definitions: IP Address: 93.114.205.169 Domain: example.com sub.example.com Url: 93.114.205.169/path example.com/path sub.example.com/path These IP-address, domain and url can have http or https optionally with or without www.
Some Details Are Not Clear The "code" that you posted is not Python but rather appears to be some sort of configuration file. Without understanding how the input is being processed with this configuration, it is difficult to give you a precise answer. An example will illustrate this: It appears based on your English language description that a URL is essentially either an IP address or a domain specification followed by a path, which starts with a '/' character (I will assume that a such a forward can but need not be followed by alpha characters so that '200.12.119.1/' is a URL). Let's say that we have a regex for detecting IP addresses and we are able to match, for example, '250.127.100.2' or 'http://250.127.100.2' with this regex. But it would be erroneous to match an IP address within the string '250.127.100.2/somepath'. We could create a single regex that was the "or-ing" of separate regular expressions for detecting a URL, a domain and an IP address such as: ip_regex = r'some regex' domain_regex = r'some regex' url_regex = fr'(?:{ip_regex}|{domain_regex})/[a-zA-z]*' rex = f'{url_regex}|{ip_regex}|{domain_regex}' So rex is a final or-ing of 3 subexpressions with the match for a URL being the first alternate subexpression. If we were to use this regular expression using method re.finditer we could then iterate the return from this method and find all matches and we would only match for example an IP address if it were not part of a larger URL match since we are trying to match a URL first. But what you posted leaves it very open to question as this is even possible. Your actual Python code would need to take the individual regexes in your configuration file and join them together with a '|' between them. The second and most likely alternative is that the input is being tested by individual regular expressions. So if we are just looking for say IP addresses, our regex for such a match now needs to use a negative lookahead to ensure that the candidate match is not followed by a '/' character. Suggestions First, if you really want to validate proper IP addresses, you would want to use something like: (?x) (?:(?:25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9][0-9]|[0-9])\.){3} (?:25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9][0-9]|[0-9]) There are more concise expressions for validating an IP address, but the above regex is the most readable. It would accept '123.255.12.1' but not '333.255.12.1'. We might even want to reject '123.255.12.1' depending on what precedes it and follows it. For example, the string '123.255.12.1.99' contains a couple of valid IP addresses, i.e. '123.255.12.1' and '255.12.1.99', but I suspect we might not wish to accept either. In this case, we might add some negative assertions: (?x) (?<![0-9.]) (?:(?:25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9][0-9]|[0-9])\.){3} (?:25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9][0-9]|[0-9]) (?![0-9.]) Now we are ensuring that out candidate match is not preceded or followed by a digit or decimal point. The following program demonstrates the above points. The first 2 calls to re.finditer where we are matching IP addresses and domains use regexes that have negative lookahead assertions. These regexes are what you would use if the Python code that uses the configuration file needs the ability to just look for one specific type of entity. The final call to re.finditer uses the "or-ing" of 3 regexes the first two of which do not require the negative lookahead insertions because an IP or domain is only matched if we can't match the longer URL. 
Needless to say, if you need to initialize a configuration file, then where I use f-strings to join together previously defined regex expressions, you would need to do this manually. I would suggest then that you print out the regexes and remove the extraneous whitespace I use with the (?x) flag. import re prefix = r'(?:https?://)' basic_ip_regex = fr''' (?:{prefix}|(?<![0-9.])) # preceded by http:// or not preceded by a digit or period (?:(?:25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9][0-9]|[0-9])\.){{3}} (?:25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9][0-9]|[0-9]) (?![0-9.]) # Not followed by a digit or period ''' ip_regex = fr'''(?x) {basic_ip_regex} (?!/) # Not followed additionally by a / ''' basic_domain_regex = fr''' (?:{prefix}|(?<!\.)) # optionaly preceded by http:// or not preceded by a period (?:www\.)?(?:[a-zA-Z]+\.)+[a-zA-Z]++ # Match as many alpha characters as possible (?!\.) # not followed by a period ''' domain_regex = fr'''(?x) {basic_domain_regex} (?!/) # Not followed additionally by a / ''' basic_url_regex = fr''' (?: {basic_ip_regex} | {basic_domain_regex} ) /[a-zA-Z]* # / by iteself is a path ''' url_regex = f'(?x){basic_url_regex}' # If we can use finditer: rex = fr'''(?x) (?P<url>){basic_url_regex} | (?P<ip>){basic_ip_regex} | (?P<domain>){basic_domain_regex} ''' text = """ 123.45.67.89 # IP address http://123.45.67.89 # IP address https://123.45.67.89 # IP address 123.45.67.89.99 # Invalid 323.45.67.89 # Invalid booboo.com # domain www.booboo.com # domain http://www.booboo.com # domain https://www.booboo.com # domain 123.45.67.89/abc # URL http://23.45.67.89/abc # URL https://23.45.67.89/abc # URL https://booboo.com/abc # URL 123.45.67.89/ # URL """ # Just look for IP addresses: for m in re.finditer(ip_regex, text): print('IP', m[0]) print('\n************\n') # Just look for domains for m in re.finditer(domain_regex, text): print('domain', m[0]) print('\n************\n') # Just look for URLs for m in re.finditer(url_regex, text): print('URL', m[0]) print('\n************\n') # Look for everything: for m in re.finditer(rex, text): print(m.lastgroup, m[0]) Prints: IP 123.45.67.89 IP http://123.45.67.89 IP https://123.45.67.89 ************ domain booboo.com domain www.booboo.com domain http://www.booboo.com domain https://www.booboo.com ************ URL 123.45.67.89/abc URL http://23.45.67.89/abc URL https://23.45.67.89/abc URL https://booboo.com/abc URL 123.45.67.89/ ************ ip 123.45.67.89 ip http://123.45.67.89 ip https://123.45.67.89 domain booboo.com domain www.booboo.com domain http://www.booboo.com domain https://www.booboo.com url 123.45.67.89/abc url http://23.45.67.89/abc url https://23.45.67.89/abc url https://booboo.com/abc url 123.45.67.89/
2
1
79,436,039
2025-2-13
https://stackoverflow.com/questions/79436039/how-to-plot-polygons-from-categorical-grid-points-in-matplotlib-phase-diagram
I have a dataframe that contains 1681 evenly distributed 2D grid points. Each data point has its x and y coordinates, a label representing its category (or phase), and a color for that category. x y label color 0 -40.0 -30.0 Fe #660066 1 -40.0 -29.0 Fe #660066 2 -40.0 -28.0 FeS #ff7f50 3 -40.0 -27.0 FeS #ff7f50 4 -40.0 -26.0 FeS #ff7f50 ... ... ... ... ... 1676 0.0 6.0 Fe2(SO4)3 #8a2be2 1677 0.0 7.0 Fe2(SO4)3 #8a2be2 1678 0.0 8.0 Fe2(SO4)3 #8a2be2 1679 0.0 9.0 Fe2(SO4)3 #8a2be2 1680 0.0 10.0 Fe2(SO4)3 #8a2be2 [1681 rows x 4 columns] I want to generate a polygon diagram that shows the linear boundary of each category (in my case also known as a "phase diagram"). Sor far I can only show this kind of diagram in a simple scatter plot like this: import matplotlib.pyplot as plt import pandas as pd plt.figure(figsize=(8., 8.)) for color in df.color.unique(): df_color = df[df.color==color] plt.scatter( x=df_color.x, y=df_color.y, c=color, s=100, label=df_color.label.iloc[0] ) plt.xlim([-40., 0.]) plt.ylim([-30., 10.]) plt.xlabel('Log pO2(g)') plt.ylabel('Log pSO2(g)') plt.legend(bbox_to_anchor=(1.05, 1.)) plt.show() However, what I want is a phase diagram with clear linear boundaries that looks something like this: Is there any way I can generate such phase diagram using matplotlib? Note that the boundary is not deterministic, especially when the grid points are not dense enough. Hence there needs to be some kind of heuristics, for example the boundary line should always lie in the middle of two neighboring points with different categories. I imagine there will be some sort of line fitting or interpolation needed, and matplotlib.patches.Polygon is probably useful here. For easy testing, I attach a code snippet for generating the data, but the polygon information shown below are not supposed to be used for generating the phase diagram import numpy as np import pandas as pd from shapely.geometry import Point, Polygon labels = ['Fe', 'Fe3O4', 'FeS', 'Fe2O3', 'FeS2', 'FeSO4', 'Fe2(SO4)3'] colors = ['#660066', '#b6fcd5', '#ff7f50', '#ffb6c1', '#c6e2ff', '#d3ffce', '#8a2be2'] polygons = [] polygons.append(Polygon([(-26.7243,-14.7423), (-26.7243,-30.0000), (-40.0000,-30.0000), (-40.0000,-28.0181)])) polygons.append(Polygon([(-18.1347,-0.4263), (-16.6048,1.6135), (-16.6048,-30.0000), (-26.7243,-30.0000), (-26.7243,-14.7423), (-18.1347,-0.4263)])) polygons.append(Polygon([(-18.1347,-0.4263), (-26.7243,-14.7423), (-40.0000,-28.0181), (-40.0000,-22.2917), (-18.1347,-0.4263)])) polygons.append(Polygon([(0.0000,-20.2615), (0.0000,-30.0000), (-16.6048,-30.0000), (-16.6048,1.6135), (-16.5517,1.6865), (-6.0517,-0.9385), (0.0000,-3.9643)])) polygons.append(Polygon([(-14.2390,10.0000), (-14.5829,7.5927), (-16.5517,1.6865), (-16.6048,1.6135), (-18.1347,-0.4263), (-40.0000,-22.2917), (-40.0000,10.0000)])) polygons.append(Polygon([(-6.0517,-0.9385), (-16.5517,1.6865), (-14.5829,7.5927), (-6.0517,-0.9385)])) polygons.append(Polygon([(0.0000,-3.9643), (-6.0517,-0.9385), (-14.5829,7.5927), (-14.2390,10.0000), (0.0000,10.0000)])) x_grid = np.arange(-40., 0.01, 1.) y_grid = np.arange(-30., 10.01, 1.) xy_grid = np.array(np.meshgrid(x_grid, y_grid)).T.reshape(-1, 2).tolist() data = [] for coords in xy_grid: point = Point(coords) for i, poly in enumerate(polygons): if poly.buffer(1e-3).contains(point): data.append({ 'x': point.x, 'y': point.y, 'label': labels[i], 'color': colors[i] }) break df = pd.DataFrame(data)
I am not sure if you can easily get a representation with contiguous polygons, however you could easily get the bounding polygon from a set of points using shapely.convex_hull: import shapely import matplotlib.pyplot as plt f, ax = plt.subplots(figsize=(8, 8)) for (name, color), coords in df.groupby(['label', 'color'])[['x', 'y']]: polygon = shapely.convex_hull(shapely.MultiPoint(coords.to_numpy())) ax.fill(*polygon.exterior.xy, color=color) ax.annotate(name, polygon.centroid.coords[0], ha='center', va='center') If you want the shapely polygons: polygons = {k: shapely.convex_hull(shapely.MultiPoint(g.to_numpy())) for k, g in df.groupby(['label', 'color'])[['x', 'y']]} Output: contiguous polygons To have contiguous polygons you can use the same strategy after adding points with a greater density and assigning them to their closest counterpart with a KDTree: from scipy.spatial import KDTree # interpolate points on the initial polygons polygons = {k: shapely.convex_hull(shapely.MultiPoint(g.to_numpy())) for k, g in df.groupby('label')[['x', 'y']]} def interp_ext(shape): try: return np.c_[shape.xy].T except NotImplementedError: pass e = shape.exterior if hasattr(shape, 'exterior') else shape points = e.interpolate(np.linspace(0, e.length, 1000)) return np.c_[Polygon(points).exterior.xy].T df2 = (pd.DataFrame([(l, *interp_ext(p)) for l, p in polygons.items()], columns=['label', 'x', 'y']) .merge(df[['label', 'color']], on='label') .explode(['x', 'y']) ) # get bounding values xmin, ymin, xmax, ymax = df[['x', 'y']].agg(['min', 'max']).values.ravel() # create a grid with a higher density (here 10x) Xs = np.arange(xmin, xmax, 0.1) Ys = np.arange(ymin, ymax, 0.1) Xs, Ys = (x.ravel() for x in np.meshgrid(Xs, Ys)) grid = np.c_[Xs, Ys] # indentify closest reference point _, idx = KDTree(df2[['x', 'y']]).query(grid) # create new DataFrame with labels/colors df3 = pd.DataFrame(np.c_[grid, df2[['label', 'color']].to_numpy()[idx]], columns=['x', 'y', 'label', 'color'] ) # plot f, ax = plt.subplots(figsize=(8, 8)) for (name, color), coords in df3.groupby(['label', 'color'])[['x', 'y']]: polygon = shapely.convex_hull(shapely.MultiPoint(coords.to_numpy())) ax.fill(*polygon.exterior.xy, color=color) ax.annotate(name, polygon.centroid.coords[0], ha='center', va='center') Output: Another, faster, option could be to use a Voronoi diagram based on the original shapes. I found a library (voronoi-diagram-for-polygons) that does this but requires GeoPandas: import geopandas as gpd from longsgis import voronoiDiagram4plg from shapely import Polygon, convex_hull, coverage_union # create the initial convex hulls tmp = (df.groupby(['label', 'color']) .apply(lambda x: convex_hull(Polygon(x[['x', 'y']].to_numpy()))) .reset_index(name='geometry') ) # convert to geodataframe gdf = gpd.GeoDataFrame(tmp, geometry='geometry') # Split using a Voronoi diagram mask = Polygon([(xmin, ymin), (xmin, ymax), (xmax, ymax), (xmax, ymin)]) tmp = voronoiDiagram4plg(gdf, mask, densify=True) # plot tmp.plot(color=gdf['color']) Output:
9
12
79,428,750
2025-2-11
https://stackoverflow.com/questions/79428750/python-multiprocessing-function-multiple-parameters
I'm trying to run a function on 'X' cores simultaneously, where the function takes multiple parameters. With Process I can't set the number of cores, and with Pool I can't pass multiple parameters to the function. def funct(a, b): #run .bat file with params (a,b) if __name__ == '__main__': cores = int(cores) #set with input pool = Pool(processes=cores) pool.map(funct(3, 44)) I'm getting this error: TypeError: Pool.map() missing 1 required positional argument: 'iterable' I can't use this, because I can't set the number of cores: p = Process(target=funct, args=(1,2))
Note - multiprocessing often doesn't behave well when launched from IDLE, so run the script normally (from the command line) or from an IDE such as IntelliJ/PyCharm. import multiprocessing def funct(a, b): #run .bat file with params (a,b) if __name__ == "__main__": # start as many processes as the range specifies (replace 4 with your core count) processes = [] for _ in range(4): p = multiprocessing.Process(target=funct, args=[2, "hello"]) p.start() processes.append(p) for process in processes: process.join()
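If Pool is preferred, a minimal sketch (an illustration only: the funct body and the cores value are assumed to be the ones from the question) can use Pool.starmap, which unpacks each argument tuple, so both the process count and the multiple parameters are handled:

import multiprocessing

def funct(a, b):
    # run the .bat file with params (a, b)
    print(a, b)

if __name__ == "__main__":
    cores = 4  # assumed value; in the real script this comes from input
    with multiprocessing.Pool(processes=cores) as pool:
        # each tuple is unpacked into funct(a, b)
        pool.starmap(funct, [(3, 44), (1, 2), (5, 6)])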
1
0
79,432,064
2025-2-12
https://stackoverflow.com/questions/79432064/image-preprocessing-to-extract-2d-number-list
I've been tring to make a puzzle solving program. The game is 'fruit box' and you can play it through the link below. https://en.gamesaien.com/game/fruit_box/ To do that, I have to extract numbers from game screen fruit box game screen shot I found 'pytesseract' which is able to identify characters from image, and almost finish extracting with using it. but the result value wasn't satisfied for me. threshold At first, I used threshold function. I had to erase most of it because the background was the same white color as the numbers I was aiming for. The code and image are like this. import pytesseract import os import cv2 image = os.getcwd() + '\\appletest.png' img=cv2.imread(image) grayImage = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) ret,img_binary = cv2.threshold(grayImage, 246, 255, cv2.THRESH_BINARY) text = pytesseract.image_to_string(img_binary, config='--psm 6') # text = pytesseract.image_to_string(img_binary, config='--psm 6 --oem 3 -c tessedit_char_whitelist=0123456789 ') print(text) cv2.imshow('Image', img_binary) cv2.waitKey(0) cv2.destroyAllWindows() threshold result The 'image_to_string' function returns numbers like this 41233366429816415 412567594457956471 3572263437133946 68241491629765459 73278354155567666 7796565142328726 15349752855757571 31221174825264255 83517514412317216 1957899195693134 It almost same! but there are some wrong number.(for example, at second line, 412567594457956471 should be just 41256759445796471) So I had to find other way. inrange, floodFill This tring is simple. Recognizing apples first, floodfill back ground second. the code and result is below. import pytesseract import os import cv2 import numpy as np image = os.getcwd() + '\\appletest.png' img=cv2.imread(image) hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) #find apple color dst1 = cv2.inRange(hsv, (0, 100, 20), (10, 255, 255)) rows, cols = dst1.shape[:2] mask = np.zeros((rows+2, cols+2), np.uint8) loDiff, upDiff = (10,10,10), (10,10,10) retval = cv2.floodFill(dst1, mask, (1,1), (255,255,255), loDiff, upDiff) text = pytesseract.image_to_string(dst1, config='--psm 6 --oem 3 -c tessedit_char_whitelist=0123456789 ') # text = pytesseract.image_to_string(img_gradient, config='--psm 6') print(text) cv2.imshow('Image', dst1) cv2.waitKey(0) cv2.destroyAllWindows() floodFill result the result is this. 412333664298164215 412567594457964721 3957722634237619946 68241491629765458 732783542195567666 779685651412328726 15349752855757571 3912214174825264255 835175144121313217281216 15179191956988322134 But there were still wrong numbers added. I guess it comes from quality of number(or image), so I implemented many preprocessing functions(sharpening, Erosion, Dilation, blur) but couldn't see perfect correct number list. I don't know what should do more from here. Can you advise me to solve this situation?
My guess is that page segmentation mode 6 expects a true "block" of text and gets a bit nervous when seeing so much whitespace, so it decides to hallucinate a bit. Let's give it a hand by removing the whitespace and leave no more room for hallucinations: # [your code up to flood fill] # let the letters bleed out a bit to extract # the whole character with some padding blurred = cv2.blur(dst1,(5,5)) # crop out the white space text_space = blurred.mean(axis=0) != 255 dst1 = dst1[:,text_space] cfg = '--psm 6 --oem 3 -c tessedit_char_whitelist=0123456789 ' text = pytesseract.image_to_string(dst1, config=cfg) print(text) # 41233366429816415 # 41256759445796471 # 35772263437619946 # 68241491629765459 # 73278354195567666 # 77968565141328726 # 15349752855757571 # 31221174825264255 # 83517514411317116 # 15179191956983134
2
1
79,446,809
2025-2-17
https://stackoverflow.com/questions/79446809/transpose-pitch-down-an-octave
Using music21, how can I transpose a pitch down an octave? If I use p.transpose(-12), flats get changed to sharps: import music21 p = music21.pitch.Pitch('D-4') print(p.transpose(-12)) output C#3
Instead of -P12, use -P8, like this: import music21 p = music21.pitch.Pitch('D-4') print(p.transpose('-P8'))
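With the named perfect-octave interval the spelling is preserved, so the snippet above should print D-3 (D-flat in octave 3) rather than C#3.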
1
1
79,443,615
2025-2-16
https://stackoverflow.com/questions/79443615/creating-a-adjecency-matrix-fast-for-use-in-path-finding-to-calculate-gird-dista
Goal I'm working on a project that requires me visualize how far a given budget can reach on a map with different price zones. Example with 3 price zones, water, countryside, and city, each having different costs per meter traveled: Red point is the origin, green point is just a sample point, it has no bearing on the result. Gradient goes from yellow (cheap) to purple (max budget) Issue Time complexity is currently really bad, causing time to grow very fast as I increase the resolution (decreasing size in meters of each cell). This is most likely due to the iterative creation of the scipy.sparse.lil_array when creating a array adjacency graph. Details My current processes to achieve this is by following 6 major steps (time added to show bottlenecks): Create a Python list of shapely.geometry.Point which represents all points in a 2d grid which we will sample the prices from the geopandas geometry. The list is initialized with empty values before modifications. I iteratively set each index (one at a time) to a point created depending on the grid cell size. And lastly I convert the python list into a pandas dataFrame to later intersect with the geopandas regions. (13.6s) I now use the dataFrame with points and convert it into a GeoDataFrame and the use gpd.sjoin against my diffrent price areas to receive in this case 3 geoDataFrame's each containing only the points which intersected with the given price area. (0.7s) After mapping each point with their price, I want to combine it back into a 2d array with the value being the price to do this I use a Python 2d List and index and replace values using a conversion from point coordinates to index coordinates. (14.9s) We now have a 2d array that contains the cost/weight for each cell. Now I want to convert it into a graph (matrix adjacency graph). To do this I use an iterative procces and inserts each edge into a scipy.sparse.lil_array and then after each element has been connected convert the array into a scipy.sparse.csr_array (136.7s) I now call scipy.sparse.csgraph.dijkstra and use the distance array created. (0.4s) Due to a fixed price needed for each node I once again iteratively loop through the array adding the fixed cost. (0.2s) In addition to step 4 I should mention that I use more than 4 edges per node. Namely using a 5x5 centered on the node (but ignoring duplicate cardinal and diagonal paths), but this introduces a bit of a error margin as it is possible to avoid cost of a cell by jumping diagonal over it, but the error margin should be small enough that it can be ignored. 
This is due to the otherwise square result: More info regarding step 4: To create the edges I iterate trough a list for each node adjecency_offsets= [ (-3, -2, 3.60), (-3, -1, 3.16), (-3, 1, 3.16), (-3, 2, 3.60), (-2, -3, 3.60), (-2, -1, 2.24), (-2, 1, 2.24), (-2, 3, 3.60), (-1, -3, 3.16), (-1, -2, 2.24), (-1, -1, 1.42), (-1, 0, 1 ), (-1, 1, 1.42), (-1, 2, 2.24), (-1, 3, 3.16), ( 0, -1, 1 ), ( 0, 1, 1 ), ( 1, -3, 3.16), ( 1, -2, 2.24), ( 1, -1, 1.42), ( 1, 0, 1 ), ( 1, 1, 1.42), ( 1, 2, 2.24), ( 1, 3, 3.16), ( 2, -3, 3.60), ( 2, -1, 2.24), ( 2, 1, 2.24), ( 2, 3, 3.60), (-3, -2, 3.60), (-3, -1, 3.16), (-3, 1, 3.16), (-3, 2, 3.60) ] first integer is the row offset, 2nd is column offset, and 3rd value is the distance multiplier to reach the cell from the node (using approximations instead of sq(2) etc) Then by using following code we create a graph and populate the adjacency array def Step_4(arr: list[list[int]]): # arr is a 2d-array of cost for each point (the index) height = len(arr) width = len(arr[0]) num_nodes = width*height tmp_array = scipy.sparse.lil_array((num_nodes, num_nodes), dtype=np.float32) # we dont need more precision than this # For every row and column in 2d array for row in range(height): for col in range(width): # Create a src vertex src = row * width + col # and find all edges for (r_off, c_off, w_mul) in adjecency_offsets: # ignore invalid destinations if (row + r_off < 0 or col + c_off < 0): continue if (row + r_off >= height or col + c_off >= width): continue # valid destination dst = (row + r_off) * width + col + c_off # Multiply cost of cell with distance to cell float_w = w_mul * arr[row + r_off][col + c_off] # Set weight in adjacency array tmp_array[src, dst] = float_w # return crs version due to faster pathfinding return tmp_array.tocsr() Question Is this a good approach to this problem? (even if it has blocky unnatural edges due to it using graph with limited amount of edges for each node) If so how can I more efficiently create my adjacency graph (Reduce time spent in step 4)? What I have tried I have tried drawing lines from origin in a circle and saving how long the line can get before the budget runs out. this gave me poor runtime (probably a coding issue) but also can not handle finding a cheaper route by going further thru cheaper areas. Using a list adjacency graph. whilst this reduced creation time by a lot, it also increased path finding time by a larger margin (needed to use a self implemented version of Dijkstra based on Wikipedia using a heap and early exit using budget) in total the total time of creating and path finding is around 2x slowdown compared to the adjacency matrix method mentioned above. I have tried using DOK format instead but that resulted in a time of 674s instead of 120s. So not sure whats happening there.
The complexity of this code seems fine. The main issue is that your code is a pure-Python code: it is not vectorized (i.e. it does not spent most of its time in fast native functions). As a result, it is very slow because Python codes are generally executed using the CPython interpreter. The key to fix this issue is to make this function native (e.g. using Cython, Numba, by rewriting this in languages like C/C++/Rust), or using modules like Numpy (e.g. using broadcasting operation). In this case, I choose to use Numba (so to keep a similar code and show how inefficient interpreted CPython code is). Numba is Python module able to compile specific Python functions (mainly operating on Numpy arrays) to native ones at runtime (Just-in-Time compiler a.k.a. JIT). Here is the optimized Numba code: import scipy import numpy as np @nb.njit('(int32[:,::1], int32[::1], int32[::1], float32[::1])') def compute(arr, row_offsets, col_offsets, weight_offsets): # arr is a 2d-array of cost for each point (the index) height = len(arr) width = len(arr[0]) num_nodes = width * height out_rows = np.zeros(num_nodes * weight_offsets.size, dtype=np.int32) out_cols = np.zeros(num_nodes * weight_offsets.size, dtype=np.int32) out_vals = np.zeros(num_nodes * weight_offsets.size, dtype=np.float32) nnz = 0 for row in range(height): for col in range(width): src = row * width + col for (r_off, c_off, w_mul) in zip(row_offsets, col_offsets, weight_offsets): if (row + r_off < 0 or col + c_off < 0) or (row + r_off >= height or col + c_off >= width): continue dst = (row + r_off) * width + col + c_off float_w = w_mul * arr[row + r_off][col + c_off] if float_w != 0: out_rows[nnz] = src out_cols[nnz] = dst out_vals[nnz] = float_w nnz += 1 return (out_rows, out_cols, out_vals, nnz) def Step_4_fast(list_of_list): arr = np.array(list_of_list, dtype=np.int32) row_offsets = np.array([item[0] for item in adjecency_offsets], dtype=np.int32) col_offsets = np.array([item[1] for item in adjecency_offsets], dtype=np.int32) weight_offsets = np.array([item[2] for item in adjecency_offsets], dtype=np.float32) out_rows, out_cols, out_vals, nnz = compute(arr, row_offsets, col_offsets, weight_offsets) return scipy.sparse.csr_array((out_vals[:nnz], (out_rows[:nnz], out_cols[:nnz])), shape=(arr.size, arr.size)) The main idea is to build a COO matrix manually (i.e. "ijv triplet") and then build a Scipy's CSR matrix from it. The COO matrix takes a significant space but be aware that CPython list of CPython list was already pretty inefficient in term of memory usage: each float item typically takes 24 bytes on mainstream machines and each float reference also takes 8 bytes which means 32 bytes per item instead of just 4 (so 8 times more than needed). I used 32-bit integer so to reduce the memory usage and improve performance. You should check if this is fine in your real-wold use-case though (i.e. check there are no overflow). Otherwise, you should just use 64-bit ones. Benchmark Here are input for the benchmark: np.random.seed(42) # adjecency_offsets is like in the OP question arr = np.random.randint(1, 100_000, (500, 500)).tolist() Here are performance results on my i5-9600KF CPU: Step_4(arr): 16.43 seconds Step_4_fast(arr): 0.16 seconds Thus, the Numba code is about 100 times faster! Notes One downside of the Numba code is that the function is compiled at runtime. It takes about 0.4 second on my machine. If this is a problem, then you can use Cython instead. 
Alternatively, you can cache the compilation so only the first compilation is slower (using the cache=True compilation flag). Note that only ~30% of the time is spent in the compute function showing how fast it is. ~60% of the time is spent in the COO to CSR matrix conversion. ~10% of the time is spent in the list-of-list to Numpy array conversion. Thus, for better performance, it would be a good idea to use a better data structure than a COO matrix (a kind of CSR one but with growing lists) and avoid list-of-list to Numpy array conversions (by not using lists at all but Numpy arrays instead). That being said, this is certainly not easy to do that efficiently in Numba (simpler in native languages). Another way to make the code faster is to parallelize the operations once the code is modified to directly operate on CSR-like matrices (array of growing list). However this is far from being easy in Python (even in Cython/Numba). In a native language, each thread can operate on its block of arr and modify a thread-local CSR-like matrix. The CSR-like matrices should then be merged if possible also in parallel (pretty difficult part). This last implementation should be an order of magnitude faster than the already much faster above Numba code on a mainstream CPU (e.g. with 6~8 cores).
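As a small illustration of the caching note (a sketch; note that the code above also needs import numba as nb for the decorator): adding cache=True makes Numba store the compiled function on disk, so only the very first run pays the compilation cost:

import numba as nb

# hypothetical variant of the decorator above; the function body stays unchanged
@nb.njit('(int32[:,::1], int32[::1], int32[::1], float32[::1])', cache=True)
def compute(arr, row_offsets, col_offsets, weight_offsets):
    ...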
1
2
79,444,392
2025-2-17
https://stackoverflow.com/questions/79444392/how-does-pypy-implement-integers
CPython implements arbitrary-precision integers as PyLongObject, which extends PyObject. On 64-bit platforms they take at least 28 bytes, which is quite memory intensive. Also well-known is that it keeps a cache of small integer objects from -5 to 256. I am interested in seeing how PyPy implements these, in particular what optimizations there are optimizations for limited size integer objects. It is difficult to find documentation online. The PyPy docs mention a tagged pointer optimization for "small ints" of 63 bits (signed?). The most obvious to me is treating an integer as a primitive instead of a general purpose object if possible.
The PyPy docs mention a tagged pointer optimization as something that you need to enable explicitly, and it's never enabled by default because it comes with a large performance cost. Instead, the story is: There are two different internal representations, one for 64-bit signed integers and one for larger integers. The common, small representation is a "PyObject"-like object, but only 2x8 bytes in total (including headers etc.). (The reason is that our production GC on 64-bit machines adds only a single 64-bit header, which packs a number to identify the RPython class and a number of GC flags; and then the W_IntObject class only adds one 64-bit field for the integer value.) There is a separate optimization for "list of small integers", which is implemented as an array of 64-bit ints instead of an array of objects (so 8 bytes per item instead of 16-plus-8-for-the-pointer, plus increased locality).
2
2
79,444,122
2025-2-16
https://stackoverflow.com/questions/79444122/how-can-i-save-the-32x32-icon-from-a-ico-file-that-has-the-highest-color-depth
I'm trying to extract and save the 32x32 icon from a .ico file that contains multiple icons with multiple sizes and color depths using Pillow. The 32x32 icon is available in the following color depths: 32-bit, 8-bit and 4-bit. I tried opening the icon file using Image.open(), then set its size to 32x32, and then save it as a .png file. I expect to get the icon with the highest color depth possible, but I'm getting the one with the lowest color depth possible. Here's a minimal, reproducible example: from PIL import Image icon = Image.open("icon.ico") icon.size = (32, 32) icon.save("icon.png") I'm expecting to get this: But, I'm getting this: You can get the .ico file here: https://www.mediafire.com/file/uls693wvjn3njqa/icon.ico/file I also tried looking for similar questions on the internet, but I didn't find anything. Is there a way I can tell Pillow to extract the 32x32 icon from a .ico file at the highest color depth possible without modifying my .ico file to exclude icons with lower color depths?
While an ICO image loaded in PIL provides access to the individual images through the img.ico.frame call, there is one problem: upon loading, PIL will convert all loaded individual frames which are indexed into the RGBA format. Fortunately, it will preserve a bpp attribute listing the original bitcount for each frame in a headers list stored in the img.ico.entry attribute. This means a small function which iterates the frame headers can find the needed information: from PIL import Image def get_max_bpp(ico, size=(32,32)): frames = [(index, header, ico.ico.frame(index)) for index, header in enumerate(ico.ico.entry) if header.width==size[0] and header.height == size[1] ] frames.sort(key=lambda frame_info: frame_info[1].bpp, reverse=True) return frames[0][2] icon = Image.open("icon.ico") max_32 = get_max_bpp(icon) max_32.save("icon.png") So, a straight look at the .mode attribute of each frame won't help - all will show up as RGBA (at least for an ICO file which contains at least one 24- or 32-bit frame; I have not tested with a file with indexed-only frames).
3
3
79,437,928
2025-2-13
https://stackoverflow.com/questions/79437928/override-property-from-wrapper-class
I have a class A with a property prop and a method method that uses this property: class A: @property def prop(self) -> int: return 1 def method(self) -> int: return self.prop * 2 # Uses self.prop Then I have a wrapper class that tries to override this property like this: class B: def __init__(self, a: A): self._a = a @property def prop(self) -> int: return 2 # Override A's property def __getattr__(self, attr): return getattr(self._a, attr) # Delegate attribute access to A if attr not in B When b.method() is called, I'd like the calculation to be done with B's prop, and get 4 as result. What is happening instead is that prop is resolving within A, and b.method() returns 2. b = B(a) b.method() # Returns 2. Wanted 4 Is there any way I can override the property prop in A, when used on A's code, when it is being wrapped by B? I can't use inheritance for these classes I'd like to avoid replicating the method() logic in B I also tried setting __getattribute__ in B, but it seems it is not even called when resolving prop inside A EDIT: I also want the property to be overridden for other properties too. So if I have other_prop in A as follows: class A: @property def prop(self) -> int: return 1 @property def other_prop(self) -> int: return self.prop * 4 # Uses self.prop def method(self) -> int: return self.prop * 2 # Uses self.prop I'd like b.other_prop to return 8 Any way to achieve this?
You can test if the attribute obtained from _a is a method and re-bind the underlying function (accessible via the __func__ attribute) to the B instance so that the method can have access to B's properties: from inspect import ismethod class B: def __getattr__(self, attr): value = getattr(self._a, attr) if ismethod(value): value = value.__func__.__get__(self) return value Demo: https://ideone.com/jLJma8 Note that it is unnecessary for __getattr__ to test if attr in self.__dict__: because __getattr__ is called only when an attribute name is not found in __dict__ in the first place. EDIT: Similarly, if you need properties of _a to be able to access the B instance, you can test if the attribute obtained from _a's type is a property and call the property's getter function with the B instance instead. So together with the method re-binding logic above, B.__getattr__ shall become: class B: def __getattr__(self, attr): value = getattr(type(self._a), attr, None) if isinstance(value, property): return value.fget(self) value = getattr(self._a, attr) if ismethod(value): value = value.__func__.__get__(self) return value Demo: https://ideone.com/uE3WHl
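A quick usage sketch (assuming the A and B classes from the question, with the second __getattr__ above added to B) to confirm that both the method and the other property now see B's prop:

a = A()
b = B(a)
print(b.prop)        # 2 - B's own property
print(b.method())    # 4 - A.method re-bound to b, so self.prop is B's prop
print(b.other_prop)  # 8 - A's property getter called with b instead of a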
1
2
79,446,667
2025-2-17
https://stackoverflow.com/questions/79446667/pandas-shifting-columns-to-a-specific-column-position
I have a simple dataframe: data = [[2025, 198237, 77, 18175], [202, 292827, 77, 292827]] I only want the 1st and 4th columns and I don't want header or index labels: df = pd.DataFrame(data).iloc[:,[0,3]] print(df.to_string(index=False, header=False)) Output is the following: 2025 18175 202 292827 How do I line up my first column in column 3 (left-justified) and line up my second column in column 10 (left-justified)? Since i'm calling the to_string method, which is converting the dataframe to a string representation, shouldn't I be able to use ljust? I'm not able to produce the desired output, which would be: 2025 18175 202 292827
Define a formatting function with a single arg and apply to relevant columns import pandas as pd mw=7 def left_align(x): return f"{x: <{mw}}" data = [[2025, 198237, 77, 18175], [202, 292827, 77, 292827]] df = pd.DataFrame(data).iloc[:,[0,3]] # get length of max value mw = len(str(df.max(numeric_only=True).max())) #print(mw) print(df.to_string(index=False, header=False, formatters=[left_align, left_align])) Result 2025 18175 202 292827
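If the goal is literally the character positions from the desired output (first value starting at column 3, second at column 10, both 1-indexed), a small sketch that skips to_string and pads with str.ljust, reusing the df built above:

lines = ["  " + str(a).ljust(7) + str(b) for a, b in zip(df[0], df[3])]
print("\n".join(lines))
#   2025   18175
#   202    292827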
1
1
79,439,971
2025-2-14
https://stackoverflow.com/questions/79439971/pandas-only-select-rows-containing-a-substring-in-a-column
I'm using something similar to this as input.txt header 040525 $$$$$ 9999 12345 random stuff 040525 $$$$$ 8888 12345 040525 $$$$$ 7777 12345 random stuff 040525 $$$$$ 6666 12345 footer Due to the way this input is being pre-processed, I cannot correctly use pd.read_csv. I must first create a list from the input; Then, create a DataFrame from the list. data_list = [] with open('input.txt', 'r') as data: for line in data: data_list.append(line.strip().split()) df = pd.DataFrame(data_list) I only want to append lines that contain '$$$' in the second column. Desired output would be: 0 1 2 3 0 40525 $$$$$ 9999 12345 1 40525 $$$$$ 8888 12345 2 40525 $$$$$ 7777 12345 3 40525 $$$$$ 6666 12345
import pandas as pd data_list = [] with open('input.txt', 'r') as data: for line in data: #create split_row to check split_row = line.strip().split() #keep only rows that have a second field and whose second field starts with "$$$" if len(split_row) > 1 and split_row[1].startswith("$$$"): data_list.append(split_row) df = pd.DataFrame(data_list)
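An alternative sketch, if it is acceptable to build the frame first and filter afterwards: pandas string methods can do the check, and na=False takes care of the short 'header'/'footer' rows, which end up as None in column 1:

import pandas as pd

with open('input.txt', 'r') as data:
    rows = [line.strip().split() for line in data]

df = pd.DataFrame(rows)
df = df[df[1].str.contains('$$$', regex=False, na=False)].reset_index(drop=True)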
1
1
79,446,382
2025-2-17
https://stackoverflow.com/questions/79446382/negative-lookahead-regex-in-re-subn-context
I am trying to use regular expressions to replace numeric ranges in text, such as "4-5", with the phrase "4 to 5". The text also contains dates such as "2024-12-26" that should not be replaced (should be left as is). The regular expression (\d+)(\-)(\d+) (attempt one below) is clearly wrong, because it falsely matches dates. Using a negative lookahead expression, I came up with the regex (?!\d+\-\d+\-)(\d+)(\-)(\d+) instead (attempt two below), which correctly matches "4-5" while rejecting "2024-12-26". However, attempt_two does not behave correctly in a re.subn() context, because although it rejects "2024-12-26", the search continues on to match (and replace) the substring "12-26": import re text = """ 2024-12-26 4-5 78-79 """ attempt_one = re.compile(r"(\d+)(\-)(\d+)") attempt_two = re.compile(r"(?!\d+\-\d+\-)(\d+)(\-)(\d+)") print("Attempt one:") print(re.match(attempt_one, "4-5")) # Match: OK print(re.match(attempt_one, "2024-12-26")) # Match: False positive new_text, _ = re.subn(attempt_one, r"\1 to \3", text) # Incorrect substitution print(new_text) print("Attempt two:") print(re.match(attempt_two, "4-5")) # Match: OK print(re.match(attempt_two, "2024-12-26")) # Doesn't match: OK new_text, _ = re.subn(attempt_two, r"\1 to \3", text) # Still incorrect print(new_text) Output: Attempt one: <re.Match object; span=(0, 3), match='4-5'> <re.Match object; span=(0, 7), match='2024-12'> 2024 to 12-26 4 to 5 78 to 79 Attempt two: <re.Match object; span=(0, 3), match='4-5'> None 2024-12 to 26 4 to 5 78 to 79 What regular expression can I use so that the substitution returns the following instead? 2024-12-26 4 to 5 78 to 79 (As my goal is to learn about regular expressions, I am not interested in workarounds such as matching the whitespace or newline after "12-26".)
You need both a negative lookbehind and a negative lookahead, to prohibit an extra hyphen before or after the match. (?<![-\d])(\d+)-(\d+)(?![-\d]) The lookarounds also have to match digits, so it won't match part of the date, e.g. 024-1 from 2024-12-26.
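A quick check of the combined pattern in the re.subn context (a sketch reusing the text from the question; attempt_three is just a hypothetical name):

import re

attempt_three = re.compile(r"(?<![-\d])(\d+)-(\d+)(?![-\d])")
text = "\n2024-12-26\n4-5\n78-79\n"
new_text, count = re.subn(attempt_three, r"\1 to \2", text)
print(new_text)  # 2024-12-26, 4 to 5, 78 to 79 - each on its own line
print(count)     # 2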
3
3
79,446,265
2025-2-17
https://stackoverflow.com/questions/79446265/python-dataclassesslots-true-breaks-super
Consider the following code. I have a base and derived class, both dataclasses, and I want to call a method of the base class in the derived class via super(): import abc import dataclasses import typing SLOTS = False @dataclasses.dataclass(slots=SLOTS) class Base: @abc.abstractmethod def f(self, x: int) -> int: return x @dataclasses.dataclass(slots=SLOTS) class Derived(Base): @typing.override def f(self, x: int) -> int: return super().f(x) d = Derived() d.f(2) When setting SLOTS = False, this runs fine. When setting SLOTS = True, I get an error: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[2], line 24 20 return super().f(x) 23 d = Derived() ---> 24 d.f(2) Cell In[2], line 20, in Derived.f(self, x) 18 @typing.override 19 def f(self, x: int) -> int: ---> 20 return super().f(x) TypeError: super(type, obj): obj (instance of Derived) is not an instance or subtype of type (Derived). Why is that? Note that it works if I use a regular slotted class instead of a dataclass: import abc import typing class Base: __slots__ = tuple() @abc.abstractmethod def f(self, x: int) -> int: return x class Derived(Base): __slots__ = tuple() @typing.override def f(self, x: int) -> int: return super().f(x) d = Derived() d.f(2)
When one uses the slots=True option on dataclasses, it will create a new class using the namespace of the decorated class - while not passing it makes it modify the class "in place". (This is needed because using slots really changes how the class is built - its "layout", as it is called.) The simplest thing to do is not to use __slots__ at all - the gains slots give in modern Python are small, thanks to some of the optimizations that went into Python 3.11 (check it here: Are Python 3.11 objects as light as slots?). Anyway, the workaround is quite simple - instead of the parameterless version of super(), which uses a mechanism introduced in Python 3.0, just use the explicit version of it, where you pass the class and instance as arguments. The thing is that the parameterless version implicitly uses the class body where the super() call is actually written: at compile time, the current class is frozen into the method, in a non-local-like variable named __class__. The new class created by dataclass with slots can't (or simply does not) update this value - so parameterless super will try to call super on the original (pre-dataclass-decorator) Derived class and will fail. This version of the code will work: @dataclasses.dataclass(slots=SLOTS) class Derived(Base): @typing.override def f(self, x: int) -> int: return super(Derived, self).f(x)
1
2
79,445,857
2025-2-17
https://stackoverflow.com/questions/79445857/pandas-dataframe-returning-only-1-column-after-creating-from-a-list
I'm using something similar to this as input.txt 040525 $$$$$ 9999 12345 040525 $$$$$ 8888 12345 040525 $$$$$ 7777 12345 040525 $$$$$ 6666 12345 Due to the way this input is being pre-processed, I cannot correctly use pd.read_csv. I must first create a list from the input; Then, create a DataFrame from the list. data_list = [] with open('input.txt', 'r') as data: for line in data: data_list.append(line.strip()) df = pd.DataFrame(data_list) This results in each row being considered 1 column print(df.shape) print(df) print(df.columns.tolist()) (4, 1) 0 0 040525 $$$$$ 9999 12345 1 040525 $$$$$ 8888 12345 2 040525 $$$$$ 7777 12345 3 040525 $$$$$ 6666 12345 [0] How can I create 4 columns in this DataFrame? Desired output would be: (4, 4) a b c d 0 40525 $$$$$ 9999 12345 1 40525 $$$$$ 8888 12345 2 40525 $$$$$ 7777 12345 3 40525 $$$$$ 6666 12345 ['a', 'b', 'c', 'd']
In your loop, you should split the strings into a list of substrings for the fields: for line in input_txt: data_list.append(line.strip().split()) This will give you the correct number of columns. Alternatively, keep your loop as it is, but create a Series and str.split with expand=True. This might be less efficient, but could be more robust if you don't have a consistent number of fields: data_list = [] with open('input.txt', 'r') as data: for line in data: data_list.append(line.strip()) df = pd.Series(data_list).str.split(expand=True) Output: 0 1 2 3 0 040525 $$$$$ 9999 12345 1 040525 $$$$$ 8888 12345 2 040525 $$$$$ 7777 12345 3 040525 $$$$$ 6666 12345 For the first approach, if you want column names: df = pd.DataFrame(data_list, columns=['a', 'b', 'c', 'd']) Output: a b c d 0 040525 $$$$$ 9999 12345 1 040525 $$$$$ 8888 12345 2 040525 $$$$$ 7777 12345 3 040525 $$$$$ 6666 12345
1
1
79,445,156
2025-2-17
https://stackoverflow.com/questions/79445156/how-to-apply-a-function-to-all-possible-tuples-of-two-groups-obtained-by-groupby
I am grouping my data as below: all_groups = df.groupby('age').groups Printing all_groups shows: {1.0: [11, 14, 15, 22], 2.0: [12, 13, 27], 3.0: [16, 17, 19, 20, 23, 24], 6.0: [21], 7.0: [18, 25, 26], 11.0: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]} Now I want to run stats.mannwhitneyu on all possible combinations of two classes. In this example, I have 6 groups, therefore, 15 combinations are possible, e.g., stats.mannwhitneyu(class1, class2), stats.mannwhitneyu(class1, class3), ..., stats.mannwhitneyu(class7, class11). I need a general approach to do it, especially since I don't know the number of classes in advance. What is the cleanest/smartest way to do it? Thank you in advance.
You could compute a GroupBy object, then apply your test on all combinations: from itertools import combinations from scipy.stats import mannwhitneyu groups = df.groupby('age')['value'] out = pd.DataFrame.from_dict({(a[0], b[0]): mannwhitneyu(a[1], b[1]) for a, b in combinations(groups, 2)}, orient='index') Example: statistic pvalue (0, 1) 17.0 0.939860 (0, 2) 14.0 1.000000 (0, 3) 61.0 0.205667 (0, 4) 28.0 0.757692 (0, 5) 20.0 0.797203 ... ... ... (16, 18) 8.0 1.000000 (16, 19) 13.0 0.380952 (17, 18) 17.0 0.420635 (17, 19) 21.0 0.329004 (18, 19) 18.0 0.662338 [190 rows x 2 columns] Used input: np.random.seed(0) df = pd.DataFrame({'age': np.random.randint(0, 20, 100), 'value': np.random.random(100) }) If you want a square matrix of pvalues as output, using squareform: from scipy.spatial.distance import squareform idx = sorted(df['age'].unique()) out = pd.DataFrame(squareform([mannwhitneyu(a[1], b[1]).pvalue for a, b in combinations(groups, 2)]), index=idx, columns=idx).sort_index().sort_index(axis=1) Output: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 0 0.000000 0.939860 1.000000 0.205667 0.757692 0.797203 0.297702 0.330070 0.863636 0.035964 0.260140 0.727273 0.148252 1.000000 0.898102 0.114161 1.000000 0.898102 0.699301 0.528671 1 0.939860 0.000000 0.857143 0.187812 0.787879 0.730159 0.285714 0.485714 0.857143 0.066667 0.485714 1.000000 0.200000 1.000000 0.904762 0.163636 1.000000 0.555556 1.000000 0.609524 2 1.000000 0.857143 0.000000 0.286713 0.666667 0.571429 0.250000 1.000000 1.000000 0.095238 0.400000 1.000000 0.228571 0.800000 1.000000 0.266667 1.000000 0.392857 1.000000 0.714286 3 0.205667 0.187812 0.286713 0.000000 0.193233 0.055278 0.953047 0.733267 0.468531 0.313187 0.839161 0.216783 0.023976 0.363636 0.439560 0.417318 0.216783 0.055278 0.206460 0.792458 4 0.757692 0.787879 0.666667 0.193233 0.000000 0.876263 0.267677 0.315152 1.000000 0.073427 0.230303 0.833333 0.315152 0.888889 0.530303 0.164918 1.000000 1.000000 0.431818 0.533800 5 0.797203 0.730159 0.571429 0.055278 0.876263 0.000000 0.150794 0.190476 1.000000 0.017316 0.063492 0.785714 0.555556 0.857143 0.690476 0.106061 1.000000 1.000000 0.309524 0.246753 6 0.297702 0.285714 0.250000 0.953047 0.267677 0.150794 0.000000 0.555556 0.571429 0.428571 1.000000 0.250000 0.063492 0.380952 0.309524 0.755051 0.392857 0.095238 0.222222 0.930736 7 0.330070 0.485714 1.000000 0.733267 0.315152 0.190476 0.555556 0.000000 0.400000 0.114286 0.685714 0.857143 0.028571 0.800000 0.412698 0.527273 0.400000 0.111111 0.555556 0.914286 8 0.863636 0.857143 1.000000 0.468531 1.000000 1.000000 0.571429 0.400000 0.000000 0.166667 0.628571 1.000000 0.400000 1.000000 0.571429 0.266667 1.000000 1.000000 1.000000 0.904762 9 0.035964 0.066667 0.095238 0.313187 0.073427 0.017316 0.428571 0.114286 0.166667 0.000000 0.609524 0.047619 0.009524 0.285714 0.051948 0.730769 0.166667 0.004329 0.051948 0.240260 10 0.260140 0.485714 0.400000 0.839161 0.230303 0.063492 1.000000 0.685714 0.628571 0.609524 0.000000 0.228571 0.057143 0.533333 0.412698 0.927273 0.400000 0.111111 0.285714 0.761905 11 0.727273 1.000000 1.000000 0.216783 0.833333 0.785714 0.250000 0.857143 1.000000 0.047619 0.228571 0.000000 0.228571 1.000000 0.785714 0.266667 1.000000 0.571429 1.000000 0.714286 12 0.148252 0.200000 0.228571 0.023976 0.315152 0.555556 0.063492 0.028571 0.400000 0.009524 0.057143 0.228571 0.000000 0.533333 0.063492 0.024242 0.628571 0.285714 0.063492 0.171429 13 1.000000 1.000000 0.800000 0.363636 0.888889 0.857143 0.380952 0.800000 1.000000 0.285714 0.533333 1.000000 
0.533333 0.000000 1.000000 0.333333 0.800000 0.857143 1.000000 0.642857 14 0.898102 0.904762 1.000000 0.439560 0.530303 0.690476 0.309524 0.412698 0.571429 0.051948 0.412698 0.785714 0.063492 1.000000 0.000000 0.343434 0.785714 0.841270 0.841270 0.792208 15 0.114161 0.163636 0.266667 0.417318 0.164918 0.106061 0.755051 0.527273 0.266667 0.730769 0.927273 0.266667 0.024242 0.333333 0.343434 0.000000 0.266667 0.073232 0.202020 0.365967 16 1.000000 1.000000 1.000000 0.216783 1.000000 1.000000 0.392857 0.400000 1.000000 0.166667 0.400000 1.000000 0.628571 0.800000 0.785714 0.266667 0.000000 1.000000 1.000000 0.380952 17 0.898102 0.555556 0.392857 0.055278 1.000000 1.000000 0.095238 0.111111 1.000000 0.004329 0.111111 0.571429 0.285714 0.857143 0.841270 0.073232 1.000000 0.000000 0.420635 0.329004 18 0.699301 1.000000 1.000000 0.206460 0.431818 0.309524 0.222222 0.555556 1.000000 0.051948 0.285714 1.000000 0.063492 1.000000 0.841270 0.202020 1.000000 0.420635 0.000000 0.662338 19 0.528671 0.609524 0.714286 0.792458 0.533800 0.246753 0.930736 0.914286 0.904762 0.240260 0.761905 0.714286 0.171429 0.642857 0.792208 0.365967 0.380952 0.329004 0.662338 0.000000
3
5
79,443,938
2025-2-16
https://stackoverflow.com/questions/79443938/reading-excel-data-and-named-ranges-into-python-script
I am new to Python but have used Excel and named ranges for some time. I have a football workbook, several worksheets, with several named ranges for team name, colors, plays, rules, formations, etc. The workbook is used to create a wristband for football players. I have userforms and sheet change code that handles various activities. I would like to create this in python and be able to send out to other coaches in an exe file. Some named ranges are 1 cell others are multiple columns and rows. For example a list of formations is 15 plus. Each play has up to 20 columns of rules for each player and data for the wristband. The 'playbook' as a whole could be several hundred plays potentially. That is a table. In python, I am able to open windows and create buttons, textblocks, etc. I am able to convert a py file into exe (Without a data file), just trying to get to the next step in my mind. I am trying to better understand how to read in varying data into python, assign it to a variable, and then later write it back out to a data file tagged to the exe. What are the ways to achieve such effect?
Python is fine to use with Excel and maybe even easier to use than VBA in some instances. The biggest issue would probably be converting a Python script to an exe, if that is indeed what you want at the end. There are two main approaches to working with Excel in Python, apart from Python in Excel, which is running Python inside the Excel app. The first pair, Openpyxl and Xlsxwriter: neither of these modules uses Excel; they modify the Excel file itself. The other two main modules are Xlwings and pywin32 (aka win32com), both of which interact with the Excel application to modify an Excel file. Each has its advantages and disadvantages, of course; some of the obvious ones being: Openpyxl and Xlsxwriter do not need Excel to be installed and as such will run on Linux as well as Windows and macOS. Openpyxl is probably the more commonly used module because it does not need to run the Excel app each time you use it. But this module does have its disadvantages too: it uses the OOXML interface, which is not well documented, and some functionality is not available or may be difficult to work out. But if it suits your needs there are lots of questions/answers on its use you can refer to. Xlsxwriter is a very well documented module with much functionality for building workbooks; however, it should be noted that Xlsxwriter can only create new workbooks, so if you are editing/updating existing workbooks this module cannot help you. Xlwings will work on Windows and macOS where Excel is installed, though macOS behaviour may vary slightly from the more common Windows setup. But as Xlwings is sometimes termed "VBA for Python", it can pretty much do anything VBA can do, although you might find some areas lacking. Win32com is similar to Xlwings but for Windows only. This document, python-tools-for-excel, may also help with some details on modules and what they do. Your question may be closed since it is not a specific query about a programming problem and so is technically off-topic for this site. You need to research what you need to do exactly and try out some code, and if you have issues getting it working, come back with questions.
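Since the workbook already relies on named ranges, here is a minimal openpyxl sketch for reading one into Python (purely illustrative: it assumes openpyxl 3.1+, a hypothetical file playbook.xlsx, and a defined name called Formations):

from openpyxl import load_workbook

wb = load_workbook('playbook.xlsx', data_only=True)  # data_only returns cached formula results
name = wb.defined_names['Formations']                # the named range

values = []
for sheet_title, cell_range in name.destinations:    # a defined name can span several areas
    ws = wb[sheet_title]
    cells = ws[cell_range]
    if not isinstance(cells, tuple):                 # a single-cell name returns one Cell
        values.append([cells.value])
    else:
        values.extend([cell.value for cell in row] for row in cells)

print(values)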
2
2
79,435,219
2025-2-13
https://stackoverflow.com/questions/79435219/cannot-apply-operator-between-seriesfloat-and-float
I'm developing an indicator in Indie. I’m trying to calculate Bollinger Bands and Keltner Channels, but I’m running into an error when adding a Series[float] to a float value. Here’s the error message: Error: 20:20 cannot apply operator + to operands of types: <class 'indie.Series[float]'> and <class 'float'> Here’s the relevant code snippet: sDev = StdDev.new(self.close, length) mid_line_bb = Sma.new(self.close, length) lower_band_bb = mid_line_bb + num_dev_dn * sDev[0] # <-- ERROR HERE upper_band_bb = mid_line_bb + num_dev_up * sDev[0] # <-- ERROR HERE What I’ve Tried: Wrapping sDev[0] with MutSeriesF.new(), but that caused another issue. Using sDev.value_or(0) (which doesn’t seem to exist in Indie v4). Looking for an explicit type conversion function in Indie’s documentation. I assume the issue is that mid_line_bb is a Series[float], while sDev[0] is a single float. How should I properly handle this type of mismatch in Indie?
The problem is that you cannot add a Series container with a float number, you must extract a specific value from the Series and perform an arithmetic operation with it. You can extract the value from the Series using square brackets mid_line_bb[0] or using the get method mid_line_bb.get(offset=0, default=0). The source code of the BB algorithm on Indie may also help you: # indie:lang_version = 5 from indie import algorithm, SeriesF, MutSeriesF from indie.algorithms import Sma, StdDev @algorithm def Bb(self, src: SeriesF, length: int, mult: float) -> tuple[SeriesF, SeriesF, SeriesF]: '''Bollinger Bands''' middle = Sma.new(src, length)[0] dev = mult * StdDev.new(src, length)[0] lower = middle - dev upper = middle + dev return MutSeriesF.new(lower), MutSeriesF.new(middle), MutSeriesF.new(upper) If we adapt this to your example, we will get the following indicator code: # indie:lang_version = 5 from indie import indicator, param from indie.algorithms import StdDev, Sma @indicator('Std dev', overlay_main_pane=True) @param.int('length', default=5) @param.float('num_dev_dn', default=1.0) @param.float('num_dev_up', default=1.0) def Main(self, length, num_dev_dn, num_dev_up): sDev = StdDev.new(self.close, length) mid_line_bb = Sma.new(self.close, length) lower_band_bb = mid_line_bb[0] + num_dev_dn * sDev[0] upper_band_bb = mid_line_bb[0] + num_dev_up * sDev[0] return mid_line_bb[0], lower_band_bb, upper_band_bb This code on Indie v4 would have looked the same, switching to v5 did not affect working with Series type.
2
1
79,442,836
2025-2-16
https://stackoverflow.com/questions/79442836/python-asyncio-not-able-to-run-the-tasks
I am trying to test python asyncio and aiohttp. Idea is to fetch the data from API parallely and store the .html file in a local drive. Below is my code. import asyncio import aiohttp import time import os url_i = "<some_urls>-" file_path = "<local_drive>\\asynciotest" async def download_pep(pep_number: int) -> bytes: url = url + f"{pep_number}/" print(f"Begin downloading {url}") async with aiohttp.ClientSession() as session: async with session.get(url) as resp: content = await resp.read() print(f"Finished downloading {url}") return content async def write_to_file(pep_number: int, content: bytes) -> None: with open(os.path.join(file_path,f"{pep_number}"+'-async.html'), "wb") as pep_file: print(f"{pep_number}_Begin writing ") pep_file.write(content) print(f"Finished writing") async def web_scrape_task(pep_number: int) -> None: content = await download_pep(pep_number) await write_to_file(pep_number, content) async def main() -> None: tasks = [] for i in range(8010, 8016): tasks.append(web_scrape_task(i)) await asyncio.wait(tasks) if __name__ == "__main__": s = time.perf_counter() asyncio.run(main()) elapsed = time.perf_counter() - s print(f"Execution time: {elapsed:0.2f} seconds.") The above code is throwing an error TypeError: Passing coroutines is forbidden, use tasks explicitly. sys:1: RuntimeWarning: coroutine 'web_scrape_task' was never awaited I am completely novish in asyncio hence not getting any clue. I have followed documentation but have not got any clue. Am I missing here things? Edit I am trying to call APIs sequentially with each concurrent / parallel call. For this I am using asyncio.Semaphore() and restricting the concurrency into 2. I got the clue from here and from the comments below. I have made the revision in the code below: async def web_scrape_task(pep_number: int) -> None: for i in range(8010, 8016): content = await download_pep(i) await write_to_file(pep_number, content) ##To limit concurrent call 2## sem = asyncio.Semaphore(2) async def main() -> None: tasks = [] for i in range(8010, 8016): async with sem: tasks.append(asyncio.create_task(web_scrape_task(i))) await asyncio.gather(*tasks) if __name__ == "__main__": s = time.perf_counter() asyncio.run(main()) #await main() elapsed = time.perf_counter() - s print(f"Execution time: {elapsed:0.2f} seconds.") Now the question is whether this is the correct approach?
The error occurs because you're passing raw coroutines to asyncio.wait() instead of scheduling them as tasks. All you have to do is wrap your web_scrape_task() call inside main() with asyncio.create_task(), like so: async def main() -> None: tasks = [] for i in range(8010, 8016): tasks.append(asyncio.create_task(web_scrape_task(i))) await asyncio.wait(tasks) that way the coroutine is converted into an asyncio task and is correctly awaited. Hope this helps :) EDIT: Full Code w/ concurrent method calls & a single aiohttp Client Session import asyncio import aiohttp import time import os url_i = "<some_urls>-" file_path = "<local_drive>\\asynciotest" async def download_pep(session: aiohttp.ClientSession, pep_number: int) -> bytes: url = url_i + f"{pep_number}/" print(f"Begin downloading {url}") async with session.get(url) as resp: content = await resp.read() print(f"Finished downloading {url}") return content async def write_to_file(pep_number: int, content: bytes) -> None: with open(os.path.join(file_path,f"{pep_number}"+'-async.html'), "wb") as pep_file: print(f"{pep_number}_Begin writing ") pep_file.write(content) print(f"Finished writing") async def web_scrape_task(session: aiohttp.ClientSession, pep_number: int) -> None: content = await download_pep(session, pep_number) await write_to_file(pep_number, content) async def main() -> None: async with aiohttp.ClientSession() as session: tasks = [web_scrape_task(session, i) for i in range(8010, 8016)] await asyncio.gather(*tasks) if __name__ == "__main__": s = time.perf_counter() asyncio.run(main()) elapsed = time.perf_counter() - s print(f"Execution time: {elapsed:0.2f} seconds.") EDIT 2: Semaphore Handling Small modification for limiting the concurrency (is that a word?) to two. async def web_scrape_task(session: aiohttp.ClientSession, pep_number: int, semaphore: asyncio.Semaphore) -> None: # take in semaphore as a prop async with semaphore: # synchronise with the semaphore here content = await download_pep(session, pep_number) await write_to_file(pep_number, content) async def main() -> None: semaphore = asyncio.Semaphore(2) # initialise a semaphore async with aiohttp.ClientSession() as session: tasks = [web_scrape_task(session, i, semaphore) for i in range(8010, 8016)] # pass semaphore as a prop await asyncio.gather(*tasks)
2
2